Updates from: 10/13/2022 02:36:38
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-reset-policy.md
Previously updated : 08/24/2021 Last updated : 10/07/2022
Declare your claims in the [claims schema](claimsschema.md). Open the extensions
[Page layout version](contentdefinitions.md#migrating-to-page-layout) 2.1.2 is required to enable the self-service password reset flow in the sign-up or sign-in journey. To upgrade the page layout version:
+1. Open the base file of your policy, for example, *SocialAndLocalAccounts/TrustFrameworkBase.xml*.
1. Search for the [BuildingBlocks](buildingblocks.md) element. If the element doesn't exist, add it.
1. Locate the [ContentDefinitions](contentdefinitions.md) element. If the element doesn't exist, add it.
1. Modify the **DataURI** element within the **ContentDefinition** element to have the ID `api.signuporsignin`:
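For reference, the updated content definition ends up looking something like this sketch (it assumes the unified sign-up or sign-in contract; take the exact **DataUri** value from the page layout version list):

```xml
<ContentDefinition Id="api.signuporsignin">
  <!-- Pins the sign-up or sign-in page to page layout version 2.1.2. -->
  <DataUri>urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:2.1.2</DataUri>
</ContentDefinition>
```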
Declare your claims in the [claims schema](claimsschema.md). Open the extensions
``` ### Add the technical profiles
-A claims transformation technical profile accesses the `isForgotPassword` claim. The technical profile is referenced later. When it's invoked, it sets the value of the `isForgotPassword` claim to `true`. Find the **ClaimsProviders** element (if the element doesn't exist, create it), and then add the following claims provider:
+A claims transformation technical profile accesses the `isForgotPassword` claim. The technical profile is referenced later. When it's invoked, it sets the value of the `isForgotPassword` claim to `true`.
+
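In outline, the claims provider added in the next steps wraps a claims transformation technical profile whose only job is to emit `isForgotPassword` with a default value of `true`. A minimal sketch (display names are illustrative):

```xml
<ClaimsProvider>
  <DisplayName>Forgot Password</DisplayName>
  <TechnicalProfiles>
    <TechnicalProfile Id="ForgotPassword">
      <DisplayName>Forgot your password?</DisplayName>
      <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.ClaimsTransformationProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
      <OutputClaims>
        <!-- Sets isForgotPassword to true whenever this profile is invoked. -->
        <OutputClaim ClaimTypeReferenceId="isForgotPassword" DefaultValue="true" AlwaysUseDefaultValue="true" />
      </OutputClaims>
    </TechnicalProfile>
  </TechnicalProfiles>
</ClaimsProvider>
```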
+1. Open the extensions file of your policy, for example, *SocialAndLocalAccounts/TrustFrameworkExtensions.xml*.
+1. Find the **ClaimsProviders** element (if the element doesn't exist, create it), and then add the following claims provider:
```xml <!--
The user can now sign in, sign up, and perform password reset in your user journ
The sub journey is called from the user journey and performs the specific steps that deliver the password reset experience to the user. Use the `Call` type sub journey so that when the sub journey is finished, control is returned to the orchestration step that initiated the sub journey.
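In outline, a `Call`-type sub journey has the following shape. This is only a sketch (the Id and the referenced technical profile are illustrative); the complete sub journey to add is shown in the steps that follow:

```xml
<SubJourneys>
  <SubJourney Id="PasswordReset" Type="Call">
    <OrchestrationSteps>
      <!-- Steps that verify the email address and reset the password go here. -->
      <OrchestrationStep Order="1" Type="ClaimsExchange">
        <ClaimsExchanges>
          <ClaimsExchange Id="PasswordResetUsingEmailAddressExchange" TechnicalProfileReferenceId="LocalAccountDiscoveryUsingEmailAddress" />
        </ClaimsExchanges>
      </OrchestrationStep>
    </OrchestrationSteps>
  </SubJourney>
</SubJourneys>
```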
-Find the **SubJourneys** element. If the element doesn't exist, add it after the **User Journeys** element. Then, add the following sub journey:
+1. Open the extensions file of your policy, such as *SocialAndLocalAccounts/TrustFrameworkExtensions.xml*.
+1. Find the **SubJourneys** element. If the element doesn't exist, add it after the **User Journeys** element. Then, add the following sub journey:
```xml <!--
Next, connect the **Forgot your password?** link to the Forgot Password sub jour
If you don't have your own custom user journey that has a **CombinedSignInAndSignUp** step, complete the following steps to duplicate an existing sign-up or sign-in user journey. Otherwise, continue to the next section.
-1. In the starter pack, open the *TrustFrameworkBase.xml* file.
+1. In the starter pack, open the *TrustFrameworkBase.xml* file, such as *SocialAndLocalAccounts/TrustFrameworkBase.xml*.
1. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
-1. Open *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
+1. Open the *TrustFrameworkExtensions.xml* file, such as *SocialAndLocalAccounts/TrustFrameworkExtensions.xml*, and find the **UserJourneys** element. If the element doesn't exist, create it.
1. Create a child element of the **UserJourneys** element by pasting the entire contents of the **UserJourney** element you copied in step 2.
1. Rename the ID of the user journey. For example, `Id="CustomSignUpSignIn"`.

### Connect the Forgot Password link to the Forgot Password sub journey
-In your user journey, you can represent the Forgot Password sub journey as a **ClaimsProviderSelection**. Adding this element connects the **Forgot your password?** link to the Forgot Password sub journey.
+In your user journey, you can represent the Forgot Password sub journey as a **ClaimsProviderSelection**. By adding this element, you connect the **Forgot your password?** link to the Forgot Password sub journey.
+
+1. Open the *TrustFrameworkExtensions.xml* file, such as *SocialAndLocalAccounts/TrustFrameworkExtensions.xml*.
1. In the user journey, find the orchestration step element that includes `Type="CombinedSignInAndSignUp"` or `Type="ClaimsProviderSelection"`. It's usually the first orchestration step. The **ClaimsProviderSelections** element contains a list of identity providers that a user can use to sign in. Add the following line:
In your user journey, you can represent the Forgot Password sub journey as a **C
<ClaimsProviderSelection TargetClaimsExchangeId="ForgotPasswordExchange" /> ```
-1. In the next orchestration step, add a **ClaimsExchange** element. Add the following line:
+1. In the next orchestration step, add a **ClaimsExchange** element by adding the following line:
```xml <ClaimsExchange Id="ForgotPasswordExchange" TechnicalProfileReferenceId="ForgotPassword" />
In your user journey, you can represent the Forgot Password sub journey as a **C
### Set the user journey to be executed
-Now that you've modified or created a user journey, in the **Relying Party** section, specify the journey that Azure AD B2C will execute for this custom policy. In the [RelyingParty](relyingparty.md) element, find the **DefaultUserJourney** element. Update the **DefaultUserJourney ReferenceId** to match the ID of the user journey in which you added the **ClaimsProviderSelections**.
+Now that you've modified or created a user journey, in the **Relying Party** section, specify the journey that Azure AD B2C will execute for this custom policy.
+
+1. Open the file that has the **Relying Party** element, such as *SocialAndLocalAccounts/SignUpOrSignin.xml*.
+
+1. In the [RelyingParty](relyingparty.md) element, find the **DefaultUserJourney** element.
+
+1. Update the **DefaultUserJourney ReferenceId** to match the ID of the user journey in which you added the **ClaimsProviderSelections**.
```xml <RelyingParty>
Your application might need to detect whether the user signed in by using the Fo
1. In the Azure portal, search for and select **Azure AD B2C**.
1. In the menu under **Policies**, select **Identity Experience Framework**.
1. Select **Upload custom policy**. In the following order, upload the two policy files that you changed:
- 1. The extension policy, for example, *TrustFrameworkExtensions.xml*.
- 1. The relying party policy, for example, *SignUpSignIn.xml*.
+ 1. The extension policy, for example, *SocialAndLocalAccounts/TrustFrameworkExtensions.xml*.
+ 1. The relying party policy, for example, *SocialAndLocalAccounts/SignUpOrSignin.xml*.
::: zone-end
active-directory-b2c Application Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/application-types.md
Previously updated : 08/10/2022 Last updated : 10/11/2022
These steps can differ slightly based on the type of application you're building
## Web applications
-For web applications (including .NET, PHP, Java, Ruby, Python, and Node.js) that are hosted on a server and accessed through a browser, Azure AD B2C supports [OpenID Connect](protocols-overview.md) for all user experiences. In the Azure AD B2C implementation of OpenID Connect, your web application initiates user experiences by issuing authentication requests to Azure AD. The result of the request is an `id_token`. This security token represents the user's identity. It also provides information about the user in the form of claims:
+For web applications (including .NET, PHP, Java, Ruby, Python, and Node.js) that are hosted on a web server and accessed through a browser, Azure AD B2C supports [OpenID Connect](protocols-overview.md) for all user experiences. In the Azure AD B2C implementation of OpenID Connect, your web application initiates user experiences by issuing authentication requests to Azure AD. The result of the request is an `id_token`. This security token represents the user's identity. It also provides information about the user in the form of claims:
```json // Partial raw id_token
Validation of the `id_token` by using a public signing key that is received from
To see this scenario in action, try one of the web application sign-in code samples in our [Getting started section](overview.md).
-In addition to facilitating simple sign in, a web server application might also need to access a back-end web service. In this case, the web application can perform a slightly different [OpenID Connect flow](openid-connect.md) and acquire tokens by using authorization codes and refresh tokens. This scenario is depicted in the following [Web APIs section](#web-apis).
+In addition to facilitating simple sign in, a web application might also need to access a back-end web service. In this case, the web application can perform a slightly different [OpenID Connect flow](openid-connect.md) and acquire tokens by using authorization codes and refresh tokens. This scenario is depicted in the following [Web APIs section](#web-apis).
## Single-page applications
active-directory-b2c Configure Authentication Sample React Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-react-spa-app.md
export const msalConfig: Configuration = {
export const protectedResources = { todoListApi: { endpoint: "http://localhost:5000/hello",
- scopes: ["https://your-tenant-namee.onmicrosoft.com/tasks-api/tasks.read"],
+ scopes: ["https://your-tenant-name.onmicrosoft.com/tasks-api/tasks.read"],
}, } ```
active-directory-b2c Custom Email Mailjet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-email-mailjet.md
Previously updated : 06/22/2022 Last updated : 10/06/2022 zone_pivot_groups: b2c-policy-type
With a Mailjet account created and the Mailjet API key stored in an Azure AD B2C
<td valign="top" width="50%"></td> </tr> </table>
- <img src="https://mucp.api.account.microsoft.com/m/v2/v?d=AIAACWEPFYXYIUTJIJVV4ST7XLBHVI5MLLYBKJAVXHBDTBHUM5VBSVVPTTVRWDFIXJ5JQTHYOH5TUYIPO4ZAFRFK52UAMIS3UNIPPI7ZJNDZPRXD5VEJBN4H6RO3SPTBS6AJEEAJOUYL4APQX5RJUJOWGPKUABY&amp;i=AIAACL23GD2PFRFEY5YVM2XQLM5YYWMHFDZOCDXUI2B4LM7ETZQO473CVF22PT6WPGR5IIE6TCS6VGEKO5OZIONJWCDMRKWQQVNP5VBYAINF3S7STKYOVDJ4JF2XEW4QQVNHMAPQNHFV3KMR3V3BA4I36B6BO7L4VQUHQOI64EOWPLMG5RB3SIMEDEHPILXTF73ZYD3JT6MYOLAZJG7PJJCAXCZCQOEFVH5VCW2KBQOKRYISWQLRWAT7IINZ3EFGQI2CY2EMK3FQOXM7UI3R7CZ6D73IKDI" width="1" height="1"></body>
+ </body>
</html> ```
active-directory-b2c Custom Email Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-email-sendgrid.md
Previously updated : 04/25/2022 Last updated : 10/06/2022 zone_pivot_groups: b2c-policy-type
With a SendGrid account created and SendGrid API key stored in an Azure AD B2C p
<td valign="top" width="50%"></td> </tr> </table>
- <img src="https://mucp.api.account.microsoft.com/m/v2/v?d=AIAACWEPFYXYIUTJIJVV4ST7XLBHVI5MLLYBKJAVXHBDTBHUM5VBSVVPTTVRWDFIXJ5JQTHYOH5TUYIPO4ZAFRFK52UAMIS3UNIPPI7ZJNDZPRXD5VEJBN4H6RO3SPTBS6AJEEAJOUYL4APQX5RJUJOWGPKUABY&amp;i=AIAACL23GD2PFRFEY5YVM2XQLM5YYWMHFDZOCDXUI2B4LM7ETZQO473CVF22PT6WPGR5IIE6TCS6VGEKO5OZIONJWCDMRKWQQVNP5VBYAINF3S7STKYOVDJ4JF2XEW4QQVNHMAPQNHFV3KMR3V3BA4I36B6BO7L4VQUHQOI64EOWPLMG5RB3SIMEDEHPILXTF73ZYD3JT6MYOLAZJG7PJJCAXCZCQOEFVH5VCW2KBQOKRYISWQLRWAT7IINZ3EFGQI2CY2EMK3FQOXM7UI3R7CZ6D73IKDI" width="1" height="1"></body>
+ </body>
</html> ```
active-directory-b2c Enable Authentication Android App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-android-app-options.md
Previously updated : 11/11/2021 Last updated : 10/06/2022
b2cApp.acquireToken(parameters);
#### [Kotlin](#tab/kotlin) ```kotlin
-val extraQueryParameters: MutableList<Pair<String, String>> = ArrayList()
-extraQueryParameters.add(Pair("ui_locales", "en-us"))
+val extraQueryParameters: MutableList<Map.Entry<String, String>> = ArrayList()
+
+val mapEntry = object : Map.Entry<String, String> {
+    override val key: String = "ui_locales"
+    override val value: String = "en-us"
+}
+
+extraQueryParameters.add(mapEntry)
val parameters = AcquireTokenParameters.Builder() .startAuthorizationFromActivity(activity)
val parameters = AcquireTokenParameters.Builder()
b2cApp!!.acquireToken(parameters) ```- #### [Java](#tab/java) ```java
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
Previously updated : 06/08/2022 Last updated : 10/11/2022
As of November 2020, new application registrations show up as unverified in the
To enable sign-in for users with an Azure AD account from a specific Azure AD organization, in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your organizational Azure AD tenant (for example, Contoso). Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**.
-1. Under **Azure services**, select **App registrations** or search for and select **App registrations**.
-1. Select **New registration**.
+1. Make sure you're using the directory that contains your organizational Azure AD tenant (for example, Contoso):
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
+ 2. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**.
+1. In the Azure portal, search for and select **Azure Active Directory**.
+1. In the left menu, under **Manage**, select **App registrations**.
+1. Select **+ New registration**.
1. Enter a **Name** for your application. For example, `Azure AD B2C App`.
1. Accept the default selection of **Accounts in this organizational directory only (Default Directory only - Single tenant)** for this application.
1. For the **Redirect URI**, accept the value of **Web**, and enter the following URL in all lowercase letters, where `your-B2C-tenant-name` is replaced with the name of your Azure AD B2C tenant.
To enable sign-in for users with an Azure AD account from a specific Azure AD or
If you want to get the `family_name` and `given_name` claims from Azure AD, you can configure optional claims for your application in the Azure portal UI or application manifest. For more information, see [How to provide optional claims to your Azure AD app](../active-directory/develop/active-directory-optional-claims.md).
-1. Sign in to the [Azure portal](https://portal.azure.com) using your organizational Azure AD tenant. Search for and select **Azure Active Directory**.
-1. From the **Manage** section, select **App registrations**.
-1. Select the application you want to configure optional claims for in the list.
+1. Sign in to the [Azure portal](https://portal.azure.com) using your organizational Azure AD tenant. Or if you're already signed in, make sure you're using the directory that contains your organizational Azure AD tenant (for example, Contoso):
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
+ 2. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**.
+1. In the Azure portal, search for and select **Azure Active Directory**.
+1. In the left menu, under **Manage**, select **App registrations**.
+1. Select the application you want to configure optional claims for in the list, such as `Azure AD B2C App`.
1. From the **Manage** section, select **Token configuration**.
1. Select **Add optional claim**.
1. For the **Token type**, select **ID**.
active-directory-b2c Json Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/json-transformations.md
Previously updated : 08/10/2022 Last updated : 09/07/2022
The following claims transformation outputs a JSON string claim that will be the
} ```
+The **GenerateJson** claims transformation accepts plain strings. If an input claim contains a JSON string, that string is escaped. For example, if you use the email output from [CreateJsonArray above](json-transformations.md#example-of-createjsonarray), that is `["someone@contoso.com"]`, as an input parameter, the email appears as shown in the following JSON claim:
+
+- Output claim:
+ - **requestBody**:
+
+ ```json
+ {
+ "customerEntity":{
+ "email":"[\"someone@contoso.com\"]",
+ "userObjectId":"01234567-89ab-cdef-0123-456789abcdef",
+ "firstName":"John",
+ "lastName":"Smith",
+ "role":{
+ "name":"Administrator",
+ "id": 1
+ }
+ }
+ }
+ ```
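For reference, a **GenerateJson** claims transformation that produces a **requestBody** claim shaped like the example above could be declared as follows. This is a sketch: the claim names mirror the example and are illustrative.

```xml
<ClaimsTransformation Id="GenerateRequestBody" TransformationMethod="GenerateJson">
  <InputClaims>
    <!-- Dot notation maps each input claim to a path in the generated JSON. -->
    <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="customerEntity.email" />
    <InputClaim ClaimTypeReferenceId="objectId" TransformationClaimType="customerEntity.userObjectId" />
    <InputClaim ClaimTypeReferenceId="givenName" TransformationClaimType="customerEntity.firstName" />
    <InputClaim ClaimTypeReferenceId="surname" TransformationClaimType="customerEntity.lastName" />
  </InputClaims>
  <InputParameters>
    <!-- Constant values are injected through input parameters. -->
    <InputParameter Id="customerEntity.role.name" DataType="string" Value="Administrator" />
    <InputParameter Id="customerEntity.role.id" DataType="long" Value="1" />
  </InputParameters>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="requestBody" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```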
+## GetClaimFromJson
+
+Get a specified element from JSON data. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/claims-transformation/json#getclaimfromjson) of this claims transformation.
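A **GetClaimFromJson** transformation typically takes a JSON input claim and a `claimToExtract` input parameter, and writes the extracted value to an output claim. A minimal sketch (claim names are illustrative):

```xml
<ClaimsTransformation Id="GetEmailFromJson" TransformationMethod="GetClaimFromJson">
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="customUserData" TransformationClaimType="inputJson" />
  </InputClaims>
  <InputParameters>
    <!-- Name of the JSON element to extract. -->
    <InputParameter Id="claimToExtract" DataType="string" Value="emailAddress" />
  </InputParameters>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="extractedClaim" TransformationClaimType="extractedClaim" />
  </OutputClaims>
</ClaimsTransformation>
```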
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Microsoft partners with the following ISVs for Web Application Firewall (WAF).
| ![Screenshot of Azure WAF logo](./medi) provides centralized protection of your web applications from common exploits and vulnerabilities. | ![Screenshot of Cloudflare logo](./medi) is a WAF provider that helps organizations protect against malicious attacks that aim to exploit vulnerabilities such as SQLi, and XSS. |
+## Identity verification tools
+
+Microsoft partners with the following ISVs for tools that can help with implementation of your authentication solution.
+
+| ISV partner | Description and integration walkthroughs |
+|:-|:--|
+| ![Screenshot of a grit ief editor logo.](./medi) is a tool that saves time during authentication deployment. It supports multiple languages without the need to write code. It also has a no code debugger for user journeys.|
## Additional information
active-directory-b2c Partner Grit Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-editor.md
+
+ Title: Edit identity experience framework XML with Grit Visual Identity Experience Framework (IEF) Editor
+
+description: Learn how Grit Visual IEF Editor enables fast authentication deployments in Azure AD B2C
Last updated : 10/10/2022
+# Edit Azure Active Directory B2C Identity Experience Framework (IEF) XML with Grit Visual IEF Editor
+
+[Grit Software Systems Visual Identity Experience Framework (IEF) Editor](https://www.gritiam.com/iefeditor) is a tool that saves time during Azure Active Directory B2C (Azure AD B2C) authentication deployment. It supports multiple languages without the need to write code. It also has a no-code debugger for user journeys.
+
+Use the Visual IEF Editor to:
+
+- Create Azure AD B2C IEF XML, HTML/CSS/JS, and .NET API to deploy Azure AD B2C.
+- Load your Azure AD B2C IEF XML.
+- Visualize and modify your current code, check it in, and run it through a continuous integration/continuous delivery (CI/CD) pipeline.
+
+## Prerequisites
+
+To get started with the IEF Editor, ensure the following prerequisites are met:
+
+- An Azure AD subscription. If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/).
+- An Azure AD B2C tenant linked to the Azure subscription. Learn more at [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md).
+- [Visual IEF Editor](https://www.gritiefedit.com) is free and works only with the Google Chrome browser.
+- Review and download policies from the [Azure AD B2C custom policy starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack).
+- Install the Google Chrome browser.
+
+## Sample code development workflow
+
+The following illustration shows a sample code-development workflow from XML files to production.
+
+![Screenshot shows the sample code-development workflow.](./media/partner-grit-editor/sample-code-development-workflow.png)
+
+| Step | Description |
+|:--|:--|
+| 1. | Go to https://www.gritiefedit.com and upload the policies from the [Azure AD B2C custom policy starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack) by using the upload policy button in the user interface.|
+| 2. | Using the Visual IEF Editor tool, select and edit any user journey and self-asserted profile that needs updates or modification.|
+| 3. | Once the files are updated, select the download button. All the policies are downloaded to the local machine.|
+| 4. | Check in the files to GitHub or your CI/CD pipeline. |
+| 5. | Use the files in the lower environment for testing the Azure AD B2C policies.|
+| 6. | Deploy the policies in the Azure AD B2C production environment. |
+
+Learn more about [IEF Editor](https://app.archbee.com/doc/uwPRnuvZNjyEaJ8odNOEC/WmcXf6fTZjAHpx7-rAlac).
+
+## Scenario descriptions
+
+The following sections describe two Visual IEF Editor scenarios for *Contoso* and *Fabrikam* to consider when you plan your Azure AD B2C deployment using this tool.
+
+### Case 1 - Contoso: IEF logic, make changes, and enable features
+
+The *Contoso* enterprise uses Azure AD B2C, and has an extensive IEF deployment. Current challenges for *Contoso* are:
+
+- Teaching IEF logic to new-hire developers.
+- Making changes to IEF.
+- Enabling features such as fraud protection, identity protection, and biometrics.
+
+When IEF files are loaded into Visual IEF Editor, a list of user journeys appears with a flow chart for each journey. The user journey elements contain useful data and functionality. Search eases the process of tracing through IEF logic and enabling needed features. The modified files can be:
+
+- Downloaded to a local machine.
+- Uploaded to GitHub.
+- Run through CI/CD.
+- Deployed to a lower environment for testing.
+
+### Case 2 - Fabrikam: Fast implementation
+
+*Fabrikam* is a large enterprise, which has decided to use Azure AD B2C. Their goals are:
+
+- Implement Azure AD B2C quickly
+- Discover functionalities without learning IEF
+
+>[!NOTE]
+>This scenario is in private preview. For access, or questions, contact [Grit IAM Solutions support](https://www.gritiam.com/).
+
+Fabrikam has a set of pre-built templates with intuitive charts that show user flows. Use Visual IEF Editor to modify templates and then deploy them into a lower environment, or upload them to GitHub for CI/CD.
+
+After the IEF is modified, download the files and upload them to Azure AD B2C to see them in action.
+
+## Next steps
+
+For additional information, review the following articles:
+
+- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](custom-policy-get-started.md?tabs=applications)
+
+- [IEF Editor](https://app.archbee.com/doc/uwPRnuvZNjyEaJ8odNOEC/WmcXf6fTZjAHpx7-rAlac) documentation
+
+- [Grit IAM B2B2C](partner-grit-iam.md)
+
active-directory-b2c User Profile Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-profile-attributes.md
Previously updated : 09/24/2021 Last updated : 10/11/2021

# User profile attributes
The table below lists the [user resource type](/graph/api/resources/user) attrib
|creationType |String|If the user account was created as a local account for an Azure Active Directory B2C tenant, the value is LocalAccount or nameCoexistence. Read only.|No|No|Persisted, Output| |dateOfBirth |Date|Date of birth.|No|No|Persisted, Output| |department |String|The name for the department in which the user works. Max length 64.|Yes|No|Persisted, Output|
-|displayName |String|The display name for the user. Max length 256.|Yes|Yes|Persisted, Output|
+|displayName |String|The display name for the user. Max length 256. \< \> characters aren't allowed. | Yes|Yes|Persisted, Output|
|facsimileTelephoneNumber<sup>1</sup>|String|The telephone number of the user's business fax machine.|Yes|No|Persisted, Output| |givenName |String|The given name (first name) of the user. Max length 64.|Yes|Yes|Persisted, Output| |jobTitle |String|The user's job title. Max length 128.|Yes|Yes|Persisted, Output|
In user migration scenarios, if the accounts you want to migrate have weaker pas
## MFA phone number attribute
-When using a phone for multi-factor authentication (MFA), the mobile phone is used to verify the user identity. To [add](/graph/api/authentication-post-phonemethods) a new phone number programmatically, [update](/graph/api/b2cauthenticationmethodspolicy-update), [get](/graph/api/b2cauthenticationmethodspolicy-get), or [delete](/graph/api/phoneauthenticationmethod-delete) the phone number, use MS Graph API [phone authentication method](/graph/api/resources/phoneauthenticationmethod).
+When using a phone for multi-factor authentication (MFA), the mobile phone is used to verify the user identity. To [add](/graph/api/authentication-post-phonemethods) a new phone number programmatically, [update](/graph/api/phoneauthenticationmethod-update), [get](/graph/api/phoneauthenticationmethod-get), or [delete](/graph/api/phoneauthenticationmethod-delete) the phone number, use MS Graph API [phone authentication method](/graph/api/resources/phoneauthenticationmethod).
In Azure AD B2C [custom policies](custom-policy-overview.md), the phone number is available through the `strongAuthenticationPhoneNumber` claim type.
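For example, inside a custom policy, a technical profile that reads the user object can surface the number through an output claim. A minimal sketch (the surrounding technical profile is assumed and not shown):

```xml
<OutputClaims>
  <!-- Exposes the stored MFA phone number to later steps in the user journey. -->
  <OutputClaim ClaimTypeReferenceId="strongAuthenticationPhoneNumber" />
</OutputClaims>
```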
active-directory Accidental Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/accidental-deletions.md
Previously updated : 09/30/2022 Last updated : 10/06/2022
active-directory Application Provisioning Config Problem No Users Provisioned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-config-problem-no-users-provisioned.md
Previously updated : 05/11/2021 Last updated : 10/06/2022
active-directory Application Provisioning Config Problem Scim Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
Previously updated : 05/25/2022 Last updated : 10/06/2022
active-directory Application Provisioning Config Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-config-problem.md
Previously updated : 05/11/2021 Last updated : 10/06/2022
active-directory Application Provisioning Configuration Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-configuration-api.md
Previously updated : 06/03/2021 Last updated : 10/06/2022
active-directory Application Provisioning Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-log-analytics.md
Previously updated : 05/11/2021 Last updated : 10/06/2022
active-directory Application Provisioning Quarantine Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-quarantine-status.md
Previously updated : 05/11/2021 Last updated : 10/06/2022
active-directory Application Provisioning When Will Provisioning Finish Specific User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md
Previously updated : 05/11/2021 Last updated : 10/06/2022
active-directory Configure Automatic User Provisioning Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/configure-automatic-user-provisioning-portal.md
Previously updated : 05/11/2021 Last updated : 10/06/2022
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Previously updated : 11/15/2021 Last updated : 10/06/2022
active-directory What Is Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/what-is-application-proxy.md
Last updated 08/29/2022

# Using Azure AD Application Proxy to publish on-premises apps for remote users
The following diagram illustrates in general how Azure AD authentication service
|**Component**|**Description**| |:-|:-|
-|Endpoint|The endpoint is a URL or an [user portal](../manage-apps/end-user-experiences.md). Users can reach applications while outside of your network by accessing an external URL. Users within your network can access the application through a URL or an user portal. When users go to one of these endpoints, they authenticate in Azure AD and then are routed through the connector to the on-premises application.|
+|Endpoint|The endpoint is a URL or a [user portal](../manage-apps/end-user-experiences.md). Users can reach applications while outside of your network by accessing an external URL. Users within your network can access the application through a URL or a user portal. When users go to one of these endpoints, they authenticate in Azure AD and then are routed through the connector to the on-premises application.|
|Azure AD|Azure AD performs the authentication using the tenant directory stored in the cloud.| |Application Proxy service|This Application Proxy service runs in the cloud as part of Azure AD. It passes the sign-on token from the user to the Application Proxy Connector. Application Proxy forwards any accessible headers on the request and sets the headers as per its protocol, to the client IP address. If the incoming request to the proxy already has that header, the client IP address is added to the end of the comma-separated list that is the value of the header.| |Application Proxy connector|The connector is a lightweight agent that runs on a Windows Server inside your network. The connector manages communication between the Application Proxy service in the cloud and the on-premises application. The connector only uses outbound connections, so you don't have to open any inbound ports or put anything in the DMZ. The connectors are stateless and pull information from the cloud as necessary. For more information about connectors, like how they load-balance and authenticate, see [Understand Azure AD Application Proxy connectors](./application-proxy-connectors.md).|
active-directory Active Directory Certificate Based Authentication Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/active-directory-certificate-based-authentication-android.md
Title: Android certificate-based authentication - Azure Active Directory
+ Title: Android certificate-based authentication with federation - Azure Active Directory
description: Learn about the supported scenarios and the requirements for configuring certificate-based authentication in solutions with Android devices Previously updated : 02/16/2022 Last updated : 09/30/2022
-# Azure Active Directory certificate-based authentication on Android
+# Azure Active Directory certificate-based authentication with federation on Android
Android devices can use certificate-based authentication (CBA) to authenticate to Azure Active Directory using a client certificate on their device when connecting to:
The device OS version must be Android 5.0 (Lollipop) and above.
A federation server must be configured.
-For Azure Active Directory to revoke a client certificate, the ADFS token must have the following claims:
+For Azure Active Directory to revoke a client certificate, the AD FS token must have the following claims:
* `http://schemas.microsoft.com/ws/2008/06/identity/claims/<serialnumber>` (The serial number of the client certificate) * `http://schemas.microsoft.com/2012/12/certificatecontext/field/<issuer>` (The string for the issuer of the client certificate)
-Azure Active Directory adds these claims to the refresh token if they are available in the ADFS token (or any other SAML token). When the refresh token needs to be validated, this information is used to check the revocation.
+Azure Active Directory adds these claims to the refresh token if they're available in the AD FS token (or any other SAML token). When the refresh token needs to be validated, this information is used to check the revocation.
-As a best practice, you should update your organization's ADFS error pages with the following information:
+As a best practice, you should update your organization's AD FS error pages with the following information:
* The requirement for installing the Microsoft Authenticator on Android. * Instructions on how to get a user certificate. For more information, see [Customizing the AD FS Sign-in Pages](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn280950(v=ws.11)).
-Some Office apps (with modern authentication enabled) send '*prompt=login*' to Azure AD in their request. By default, Azure AD translates '*prompt=login*' in the request to ADFS as '*wauth=usernamepassworduri*' (asks ADFS to do U/P Auth) and '*wfresh=0*' (asks ADFS to ignore SSO state and do a fresh authentication). If you want to enable certificate-based authentication for these apps, you need to modify the default Azure AD behavior. Set the '*PromptLoginBehavior*' in your federated domain settings to '*Disabled*'.
+Office apps with modern authentication enabled send '*prompt=login*' to Azure AD in their request. By default, Azure AD translates '*prompt=login*' in the request to AD FS as '*wauth=usernamepassworduri*' (asks AD FS to do U/P Auth) and '*wfresh=0*' (asks AD FS to ignore SSO state and do a fresh authentication). If you want to enable certificate-based authentication for these apps, you need to modify the default Azure AD behavior. Set the '*PromptLoginBehavior*' in your federated domain settings to '*Disabled*'.
You can use the [MSOLDomainFederationSettings](/powershell/module/msonline/set-msoldomainfederationsettings) cmdlet to perform this task:
-`Set-MSOLDomainFederationSettings -domainname <domain> -PromptLoginBehavior Disabled`
+```powershell
+Set-MSOLDomainFederationSettings -domainname <domain> -PromptLoginBehavior Disabled
+```
## Exchange ActiveSync clients support
active-directory Active Directory Certificate Based Authentication Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/active-directory-certificate-based-authentication-ios.md
Title: Certificate-based authentication on iOS - Azure Active Directory
+ Title: Certificate-based authentication with federation on iOS - Azure Active Directory
description: Learn about the supported scenarios and the requirements for configuring certificate-based authentication for Azure Active Directory in solutions with iOS devices Previously updated : 05/04/2022 Last updated : 09/30/2022
-# Azure Active Directory certificate-based authentication on iOS
+# Azure Active Directory certificate-based authentication with federation on iOS
To improve security, iOS devices can use certificate-based authentication (CBA) to authenticate to Azure Active Directory (Azure AD) using a client certificate on their device when connecting to the following applications or
To use CBA with iOS, the following requirements and considerations apply:
* The device OS version must be iOS 9 or above. * Microsoft Authenticator is required for Office applications on iOS.
-* An identity preference must be created in the macOS Keychain that include the authentication URL of the ADFS server. For more information, see [Create an identity preference in Keychain Access on Mac](https://support.apple.com/guide/keychain-access/create-an-identity-preference-kyca6343b6c9/mac).
+* An identity preference must be created in the macOS Keychain that includes the authentication URL of the AD FS server. For more information, see [Create an identity preference in Keychain Access on Mac](https://support.apple.com/guide/keychain-access/create-an-identity-preference-kyca6343b6c9/mac).
-The following Active Directory Federation Services (ADFS) requirements and considerations apply:
+The following Active Directory Federation Services (AD FS) requirements and considerations apply:
-* The ADFS server must be enabled for certificate authentication and use federated authentication.
+* The AD FS server must be enabled for certificate authentication and use federated authentication.
* The certificate needs to use Enhanced Key Usage (EKU) and contain the UPN of the user in the *Subject Alternative Name (NT Principal Name)*.
-## Configure ADFS
+## Configure AD FS
-For Azure AD to revoke a client certificate, the ADFS token must have the following claims. Azure AD adds these claims to the refresh token if they're available in the ADFS token (or any other SAML token). When the refresh token needs to be validated, this information is used to check the revocation:
+For Azure AD to revoke a client certificate, the AD FS token must have the following claims. Azure AD adds these claims to the refresh token if they're available in the AD FS token (or any other SAML token). When the refresh token needs to be validated, this information is used to check the revocation:
* `http://schemas.microsoft.com/ws/2008/06/identity/claims/<serialnumber>` - add the serial number of your client certificate * `http://schemas.microsoft.com/2012/12/certificatecontext/field/<issuer>` - add the string for the issuer of your client certificate
-As a best practice, you also should update your organization's ADFS error pages with the following information:
+As a best practice, you also should update your organization's AD FS error pages with the following information:
* The requirement for installing the Microsoft Authenticator on iOS. * Instructions on how to get a user certificate.
For more information, see [Customizing the AD FS sign in page](/previous-version
## Use modern authentication with Office apps
-Some Office apps with modern authentication enabled send `prompt=login` to Azure AD in their request. By default, Azure AD translates `prompt=login` in the request to ADFS as `wauth=usernamepassworduri` (asks ADFS to do U/P Auth) and `wfresh=0` (asks ADFS to ignore SSO state and do a fresh authentication). If you want to enable certificate-based authentication for these apps, modify the default Azure AD behavior.
+Some Office apps with modern authentication enabled send `prompt=login` to Azure AD in their request. By default, Azure AD translates `prompt=login` in the request to AD FS as `wauth=usernamepassworduri` (asks AD FS to do U/P Auth) and `wfresh=0` (asks AD FS to ignore SSO state and do a fresh authentication). If you want to enable certificate-based authentication for these apps, modify the default Azure AD behavior.
To update the default behavior, set the '*PromptLoginBehavior*' in your federated domain settings to *Disabled*. You can use the [MSOLDomainFederationSettings](/powershell/module/msonline/set-msoldomainfederationsettings) cmdlet to perform this task, as shown in the following example:
active-directory Concept Authentication Strengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md
+
+ Title: Overview of Azure Active Directory authentication strength (preview)
+description: Learn how admins can use Azure AD Conditional Access to distinguish which authentication methods can be used based on relevant security factors.
Last updated : 10/04/2022
+# Conditional Access authentication strength (preview)
+
+Authentication strength is a Conditional Access control that allows administrators to specify which combination of authentication methods can be used to access a resource. For example, they can make only phishing-resistant authentication methods available to access a sensitive resource. But to access a nonsensitive resource, they can allow less secure multifactor authentication (MFA) combinations, such as password + SMS.
+
+Authentication strength is based on the [Authentication methods policy](concept-authentication-methods.md), where administrators can scope authentication methods for specific users and groups to be used across Azure Active Directory (Azure AD) federated applications. Authentication strength allows further control over the usage of these methods based upon specific scenarios such as sensitive resource access, user risk, location, and more.
+
+Administrators can specify an authentication strength to access a resource by creating a Conditional Access policy with the **Require authentication strength** control. They can choose from three built-in authentication strengths: **Multifactor authentication strength**, **Passwordless MFA strength**, and **Phishing-resistant MFA strength**. They can also create a custom authentication strength based on the authentication method combinations they want to allow.
++
+## Scenarios for authentication strengths
+
+Authentication strengths can help customers address scenarios, such as:
+
+- Require specific authentication methods to access a sensitive resource.
+- Require a specific authentication method when a user takes a sensitive action within an application (in combination with Conditional Access authentication context).
+- Require users to use a specific authentication method when they access sensitive applications outside of the corporate network.
+- Require more secure authentication methods for users at high risk.
+- Require specific authentication methods from guest users who access a resource tenant (in combination with cross-tenant settings). <!-- Namrata - Add / review external users scenario here -->
+
+## Authentication strengths
+
+An authentication strength can include a combination of authentication methods. Users can satisfy the strength requirements by authenticating with any of the allowed combinations. For example, the built-in **Phishing-resistant MFA strength** allows the following combinations:
+
+- Windows Hello for Business
+
+ Or
+
+- FIDO2 security key
+
+ Or
+
+- Azure AD Certificate-Based Authentication (Multi-Factor)
++
+### Built-in authentication strengths
+
+Built-in authentication strengths are combinations of authentication methods that are predefined by Microsoft. Built-in authentication strengths are always available and can't be modified. Microsoft will update built-in authentication strengths when new methods become available.
+
+The following table lists the combinations of authentication methods for each built-in authentication strength. Depending on which methods are available in the Authentication methods policy and registered for users, users can use any one of the combinations to sign in.
+
+- **MFA strength** - the same set of combinations that could be used to satisfy the **Require multifactor authentication** setting.
+- **Passwordless MFA strength** - includes authentication methods that satisfy MFA but don't require a password.
+- **Phishing-resistant MFA strength** - includes methods that require an interaction between the authentication method and the sign-in surface.
+
+|Authentication method combination |MFA strength | Passwordless MFA strength| Phishing-resistant MFA strength|
+|-|-|-|-|
+|FIDO2 security key| &#x2705; | &#x2705; | &#x2705; |
+|Windows Hello for Business| &#x2705; | &#x2705; | &#x2705; |
+|Certificate-based authentication (Multi-Factor) | &#x2705; | &#x2705; | &#x2705; |
+|Microsoft Authenticator (Phone Sign-in)| &#x2705; | &#x2705; | |
+|Temporary Access Pass (One-time use AND Multi-use)| &#x2705; | | |
+|Password + something you have<sup>1</sup>| &#x2705; | | |
+|Federated single-factor + something you have<sup>1</sup>| &#x2705; | | |
+|Federated Multi-Factor| &#x2705; | | |
+|Certificate-based authentication (single-factor)| | | |
+|SMS sign-in | | | |
+|Password | | | |
+|Federated single-factor| | | |
+
+<!-- We will move these methods back to the table as they become supported - expected very soon
+|Email One-time pass (Guest)| | | |
+-->
+
+<sup>1</sup> Something you have refers to one of the following methods: SMS, voice, push notification, software OATH token. Hardware OATH token is currently not supported.
+
+The following API call can be used to list definitions of all the built-in authentication strengths:
+
+```http
+GET https://graph.microsoft.com/beta/identity/conditionalAccess/authenticationStrengths/policies?$filter=policyType eq 'builtIn'
+```
+
+### Custom authentication strengths
+
+In addition to the three built-in authentication strengths, administrators can create up to 15 of their own custom authentication strengths to exactly suit their requirements. A custom authentication strength can contain any of the supported combinations in the preceding table.
+
+1. In the Azure portal, browse to **Azure Active Directory** > **Security** > **Authentication methods** > **Authentication strengths (Preview)**.
+1. Select **New authentication strength**.
+1. Provide a descriptive **Name** for your new authentication strength.
+1. Optionally provide a **Description**.
+1. Select any of the available methods you want to allow.
+1. Choose **Next** and review the policy configuration.
+
+ :::image type="content" border="true" source="media/concept-authentication-strengths/authentication-strength-custom.png" alt-text="Screenshot showing the creation of a custom authentication strength.":::
+
+#### Update and delete custom authentication strengths
+
+You can edit a custom authentication strength. If it's referenced by a Conditional Access policy, it can't be deleted, and you need to confirm any edit.
+To check if an authentication strength is referenced by a Conditional Access policy, click the **Conditional Access policies** column.
+
+#### FIDO2 security key advanced options
+Custom authentication strengths allow customers to further restrict the usage of some FIDO2 security keys based on their Authenticator Attestation GUIDs (AAGUIDs). This capability allows administrators to require a FIDO2 security key from a specific manufacturer in order to access the resource. To require a specific FIDO2 security key, complete the preceding steps to create a custom authentication strength, select **FIDO2 Security Key**, and click **Advanced options**.
++
+Next to **Allowed FIDO2 Keys** click **+**, copy the AAGUID value, and click **Save**.
++
+## Using authentication strength in Conditional Access
+After you determine the authentication strength you need, create a Conditional Access policy that requires that authentication strength to access a resource. When the Conditional Access policy is applied, the authentication strength restricts which authentication methods are allowed.
+<!-- ### Place holder:How to create conditional access policy that uses authentication strength
+- Add a note that you can use either require mfa or require auth strengths
+- (JF) Possibly add a reference doc that lists all the definitions of the things you can configure?
+-->
+
+### How authentication strength works with the Authentication methods policy
+There are two policies that determine which authentication methods can be used to access resources. If a user is enabled for an authentication method in either policy, they can sign in with that method.
+
+- **Security** > **Authentication methods** > **Policies** is a more modern way to manage authentication methods for specific users and groups. You can specify users and groups for different methods. You can also configure parameters to control how a method can be used.
+
+ :::image type="content" border="true" source="./media/concept-authentication-strengths/authentication-methods-policy.png" alt-text="Screenshot of Authentication methods policy.":::
+
+- **Security** > **Multifactor Authentication** > **Additional cloud-based multifactor authentication settings** is a legacy way to control multifactor authentication methods for all of the users in the tenant.
+
+ :::image type="content" border="true" source="./media/concept-authentication-strengths/service-settings.png" alt-text="Screenshot of MFA service settings.":::
+
+Users can register for the authentication methods they're enabled for. In other cases, an administrator can configure a user's device with a method, such as certificate-based authentication.
+
+The authentication strength Conditional Access policy defines which methods can be used. Azure AD checks the policy during sign-in to determine the user's access to the resource. For example, an administrator configures a Conditional Access policy with a custom authentication strength that requires FIDO2 Security Key or Password + SMS. The user accesses a resource protected by this policy. During sign-in, all settings are checked to determine which methods are allowed, which methods are registered, and which methods are required by the Conditional Access policy. To be used, a method must be allowed, registered by the user (either before or as part of the access request), and satisfy the authentication strength.
+
+## User experience
+
+The following factors determine if the user gains access to the resource:
+
+- Which authentication method was previously used?
+- Which methods are available for the authentication strength?
+- Which methods are allowed for user sign-in in the Authentication methods policy?
+- Is the user registered for any available method?
+
+When a user accesses a resource protected by an authentication strength Conditional Access policy, Azure AD evaluates if the methods they have previously used satisfy the authentication strength. If a satisfactory method was used, Azure AD grants access to the resource. For example, let's say a user signs in with password + SMS. They access a resource protected by MFA authentication strength. In this case, the user can access the resource without another authentication prompt.
+
+Let's suppose they next access a resource protected by Phishing-resistant MFA authentication strength. At this point, they'll be prompted to provide a phishing-resistant authentication method, such as Windows Hello for Business.
+
+If the user hasn't registered for any methods that satisfy the authentication strength, they are redirected to [combined registration](concept-registration-mfa-sspr-combined.md#interrupt-mode). <!-- making this a comment for now because we have a limitation. Once it is fixed we can remove the comment::: Only users who satisfy MFA are redirected to register another strong authentication method.-->
+
+If the authentication strength doesn't include a method that the user can register and use, the user is blocked from sign-in to the resource.
+
+### Registering authentication methods
+
+The following authentication methods can't be registered as part of combined registration interrupt mode:
+* [Microsoft Authenticator (phone sign-in)](https://support.microsoft.com/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c) - Can be registered from the Authenticator app.
+* [FIDO2](howto-authentication-passwordless-security-key.md) - can be registered using [combined registration managed mode](concept-registration-mfa-sspr-combined.md#manage-mode).
+* [Certificate-based authentication](concept-certificate-based-authentication.md) - Requires administrator setup; it can't be registered by the user.
+* [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-prepare-people-to-use) - Can be registered in the Windows Out of Box Experience (OOBE) or the Windows Settings menu.
+
+If a user isn't registered for these methods, they can't access the resource until the required method is registered. For the best user experience, make sure users complete combined registration in advance for the different methods they may need to use.
+
+### Federated user experience
+For federated domains, MFA may be enforced by Azure AD Conditional Access or by the on-premises federation provider by setting the federatedIdpMfaBehavior. If the federatedIdpMfaBehavior setting is set to enforceMfaByFederatedIdp, the user must authenticate on their federated IdP and can only satisfy the **Federated Multi-Factor** combination of the authentication strength requirement. For more information about the federation settings, see [Plan support for MFA](../hybrid/migrate-from-federation-to-cloud-authentication.md#plan-support-for-mfa).
+
+If a user from a federated domain has multifactor authentication settings in scope for Staged Rollout, the user can complete multifactor authentication in the cloud and satisfy any of the **Federated single-factor + something you have** combinations. For more information about staged rollout, see [Enable Staged Rollout using Azure portal](how-to-mfa-server-migration-utility.md#enable-staged-rollout-using-azure-portal).
+
+## External users
+
+An authentication strength Conditional Access policy is especially useful for restricting external access to sensitive apps in your organization because you can enforce specific authentication methods, such as phishing-resistant methods, for external users.
+
+When you apply an authentication strength Conditional Access policy to external Azure AD users, the policy works together with MFA trust settings in your cross-tenant access settings to determine where and how the external user must perform MFA. An Azure AD user authenticates in their home Azure AD tenant. Then when they access your resource, Azure AD applies the policy and checks to see if you've enabled MFA trust. Note that enabling MFA trust is optional for B2B collaboration but is *required* for [B2B direct connect](../external-identities/b2b-direct-connect-overview.md#multi-factor-authentication-mfa).
+
+In external user scenarios, the authentication methods that can satisfy authentication strength vary, depending on whether the user is completing MFA in their home tenant or the resource tenant. The following table indicates the allowed methods in each tenant. If a resource tenant has opted to trust claims from external Azure AD organizations, only those claims listed in the "Home tenant" column below will be accepted by the resource tenant for MFA. If the resource tenant has disabled MFA trust, the external user must complete MFA in the resource tenant using one of the methods listed in the "Resource tenant" column.
+
+|Authentication method |Home tenant | Resource tenant |
+||||
+|SMS as second factor | &#x2705; | &#x2705; |
+|Voice call | &#x2705; | &#x2705; |
+|Microsoft Authenticator push notification | &#x2705; | &#x2705; |
+|Microsoft Authenticator phone sign-in | &#x2705; | &#x2705; |
+|OATH software token | &#x2705; | &#x2705; |
+|OATH hardware token | &#x2705; | |
+|FIDO2 security key | &#x2705; | |
+|Windows Hello for Business | &#x2705; | |
++
+### User experience for external users
+
+An authentication strength Conditional Access policy works together with [MFA trust settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#to-change-inbound-trust-settings-for-mfa-and-device-claims) in your cross-tenant access settings. First, an Azure AD user authenticates with their own account in their home tenant. Then when this user tries to access your resource, Azure AD applies the authentication strength Conditional Access policy and checks to see if you've enabled MFA trust.
+
+- **If MFA trust is enabled**, Azure AD checks the user's authentication session for a claim indicating that MFA has been fulfilled in the user's home tenant. See the preceding table for authentication methods that are acceptable for MFA when completed in an external user's home tenant. If the session contains a claim indicating that MFA policies have already been met in the user's home tenant, and the methods satisfy the authentication strength requirements, the user is allowed access. Otherwise, Azure AD presents the user with a challenge to complete MFA in the home tenant using an acceptable authentication method.
+- **If MFA trust is disabled**, Azure AD presents the user with a challenge to complete MFA in the resource tenant using an acceptable authentication method. (See the table above for authentication methods that are acceptable for MFA by an external user.)
+
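+If you manage these settings programmatically, the following Microsoft Graph request is a minimal sketch of enabling inbound MFA trust in your default cross-tenant access settings. It assumes the default configuration is exposed at `policies/crossTenantAccessPolicy/default` with an `inboundTrust` object (use the beta endpoint if the property isn't available on v1.0), and that you target a partner-specific configuration instead if you only want to trust MFA claims from certain organizations.
+
+```http
+PATCH https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/default
+Content-Type: application/json
+
+{
+    "inboundTrust": {
+        "isMfaAccepted": true
+    }
+}
+```
+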
+## Known issues
+
+- **Users who signed in by using certificate-based authentication aren't prompted to reauthenticate** - If a user first authenticated by using certificate-based authentication, and the authentication strength requires another method, such as a FIDO2 security key, the user isn't prompted to use a FIDO2 security key and authentication fails. The user must restart their session to sign in with a FIDO2 security key.
+
+- **Authentication methods that are currently not supported by authentication strength** - The following authentication methods are included in the available combinations but currently have limited functionality:
+  - Email one-time passcode (Guest)
+ - Hardware-based OATH token
+
+- **Conditional Access What If tool** - When you run the What If tool, it correctly returns policies that require authentication strength. However, when you click the authentication strength name, a page opens with additional information about the methods the user can use, and this information may be incorrect.
+
+- **Authentication strength isn't enforced on the Register security information user action** - If an authentication strength Conditional Access policy targets the **Register security information** user action, the policy isn't applied.
+
+- **Conditional Access audit log** - When a Conditional Access policy with the authentication strength grant control is created or updated in the Azure AD portal, the audit log includes details about the policy that was updated, but doesn't include details about which authentication strength is referenced by the Conditional Access policy. This issue doesn't exist when a policy is created or updated by using Microsoft Graph APIs.
+
+## Limitations
+
+- **Conditional Access policies are only evaluated after the initial authentication** - As a result, authentication strength won't restrict a user's initial authentication. Suppose you're using the built-in phishing-resistant MFA strength. A user can still type in their password, but they'll be required to use a phishing-resistant method such as a FIDO2 security key before they can continue.
+
+- **Require multifactor authentication and Require authentication strength can't be used together in the same Conditional Access policy** - These two Conditional Access grant controls can't be used together because the built-in authentication strength **Multifactor authentication** is equivalent to the **Require multifactor authentication** grant control.
++
+
+## FAQ
+
+### Should I use authentication strength or the Authentication methods policy?
+Authentication strength is based on the Authentication methods policy. The Authentication methods policy helps you scope and configure the authentication methods that specific users and groups can use across Azure AD. Authentication strength adds a further restriction on which of those methods can be used in specific scenarios, such as sensitive resource access, user risk, location, and more.
+
+For example, the administrator of Contoso wants to allow their users to use Microsoft Authenticator with either push notifications or passwordless authentication mode. The administrator goes to the Microsoft Authenticator settings in the Authentication methods policy, scopes the policy to the relevant users, and sets the **Authentication mode** to **Any**.
+
+Then, for Contoso's most sensitive resource, the administrator wants to restrict access to only passwordless authentication methods. The administrator creates a new Conditional Access policy, using the built-in **Passwordless MFA strength**.
+
+As a result, users in Contoso can access most resources in the tenant by using password + push notification from Microsoft Authenticator, or by using Microsoft Authenticator (phone sign-in) alone. However, when users in the tenant access the sensitive application, they must use Microsoft Authenticator (phone sign-in).
+
+## Prerequisites
+
+- **Azure AD Premium P1** - Your tenant needs to have an Azure AD Premium P1 license to use Conditional Access. If needed, you can enable a [free trial](https://www.microsoft.com/security/business/get-started/start-free-trial).
+- **Enable combined registration** - Authentication strengths are supported when using [combined MFA and SSPR registration](howto-registration-mfa-sspr-combined.md). Using legacy registration results in a poor user experience, because users may register methods that aren't required by the authentication methods policy.
+
+## Next steps
+
+- [Troubleshoot authentication strengths](troubleshoot-authentication-strengths.md)
+
active-directory Concept Certificate Based Authentication Certificateuserids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md
+
+ Title: Certificate user IDs for Azure AD certificate-based authentication - Azure Active Directory
+description: Learn about certificate user IDs for Azure AD certificate-based authentication without federation
+++++ Last updated : 10/05/2022++++++++++
+# Certificate user IDs
+
+You can add certificate user IDs to a user in Azure AD through a multivalued attribute named **certificateUserIds**. The attribute allows up to four values, and each value can be up to 120 characters long. It can store any value and doesn't require an email ID format. For example, it can store non-routable User Principal Names (UPNs) like _bob@woodgrove_ or _bob@local_.
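+
+For example, you can inspect the certificate user IDs that are set on a user by reading the user object in Microsoft Graph. The following request is a minimal sketch; it assumes that **certificateUserIds** is exposed through the user's `authorizationInfo` property and that `{user-id}` is replaced with the user's object ID or UPN (your tenant may require the beta endpoint if the property isn't available on v1.0).
+
+```http
+GET https://graph.microsoft.com/v1.0/users/{user-id}?$select=userPrincipalName,authorizationInfo
+```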
+
+## Supported patterns for certificate user IDs
+
+The values stored in **certificateUserIds** should be in the format described in the following table.
+
+|Certificate mapping field | Examples of values in certificateUserIds |
+|--|--|
+|PrincipalName | "X509:\<PN>bob@woodgrove.com" |
+|PrincipalName | "X509:\<PN>bob@woodgrove" |
+|RFC822Name | "X509:\<RFC822>user@woodgrove.com" |
+|X509SKI | "X509:\<SKI>123456789abcdef"|
+|X509SHA1PublicKey |"X509:\<SHA1-PUKEY>123456789abcdef" |
+
+## Roles to update certificateUserIds
+
+For cloud-only users, only users with the **Global Administrator** or **Privileged Authentication Administrator** role can write to certificateUserIds.
+For synced users, users with the **Hybrid Identity Administrator** role can write to the attribute.
+
+>[!NOTE]
>Active Directory administrators (including accounts with delegated administrative privilege over synced user accounts, as well as administrative rights over the Azure AD Connect servers) can make changes that impact the certificateUserIds value in Azure AD for any synced accounts.
+
+## Update certificate user IDs in the Azure portal
+
+Tenant admins can use the following steps in the Azure portal to update certificate user IDs for a user account:
+
+1. In the Azure AD portal, click **All users (preview)**.
+
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/user.png" alt-text="Screenshot of test user account.":::
+
+1. Click a user, and click **Edit Properties**.
+
+1. Next to **Authorization info**, click **View**.
+
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/view.png" alt-text="Screenshot of View authorization info.":::
+
+1. Click **Edit certificate user IDs**.
+
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/edit-cert.png" alt-text="Screenshot of Edit certificate user IDs.":::
+
+1. Click **Add**.
+
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/add.png" alt-text="Screenshot of how to add a CertificateUserID.":::
+
+1. Enter the value and click **Save**. You can add up to four values, each up to 120 characters long.
+
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/save.png" alt-text="Screenshot of a value to enter for CertificateUserId.":::
+
+## Update certificate user IDs using Azure AD Connect
+
+To update certificate user IDs for federated users, configure Azure AD Connect to sync userPrincipalName to certificateUserIds.
+
+1. On the Azure AD Connect server, find and start the **Synchronization Rules Editor**.
+
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/sync-rules-editor.png" alt-text="Screenshot of Synchronization Rules Editor.":::
+
+1. Click **Direction**, and click **Outbound**.
+
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/outbound.png" alt-text="Screenshot of outbound synchronization rule.":::
+
+1. Find the rule **Out to AAD - User Identity**, click **Edit**, and click **Yes** to confirm.
+
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/user-identity.png" alt-text="Screenshot of user identity.":::
+
+1. Enter a high number in the **Precedence** field, and then click **Next**.
+
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/precedence.png" alt-text="Screenshot of a precedence value.":::
+
+1. Click **Transformations** > **Add transformation**. You may need to scroll down the list of transformations before you can create a new one.
+
+### Synchronize X509:\<PN>PrincipalNameValue
+
+To synchronize X509:\<PN>PrincipalNameValue, create an outbound synchronization rule and choose **Expression** as the flow type. Choose **certificateUserIds** as the target attribute, and in the source field, add the expression `"X509:<PN>"&[userPrincipalName]`. If your source attribute isn't userPrincipalName, change the expression accordingly.
+
+
+### Synchronize X509:\<RFC822>RFC822Name
+
+To synchronize X509:\<RFC822>RFC822Name, create an outbound synchronization rule and choose **Expression** as the flow type. Choose **certificateUserIds** as the target attribute, and in the source field, add the expression `"X509:<RFC822>"&[userPrincipalName]`. If your source attribute isn't userPrincipalName, change the expression accordingly.
++
+1. Click **Target Attribute**, select **CertificateUserIds**, click **Source**, select **UserPrincipalName**, and then click **Save**.
+
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/edit-rule.png" alt-text="Screenshot of how to save a rule.":::
+
+1. Click **OK** to confirm.
+
+> [!NOTE]
+> Make sure you use the latest version of [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594).
+
+For more information about declarative provisioning expressions, see [Azure AD Connect: Declarative Provisioning Expressions](../hybrid/concept-azure-ad-connect-sync-declarative-provisioning-expressions.md).
+
+## Synchronize alternativeSecurityId attribute from AD to Azure AD CBA CertificateUserIds
+
+AlternativeSecurityId isn't part of the default attributes. An administrator needs to add the attribute to the person object, and then create the appropriate synchronization rules.
+
+1. Open Metaverse Designer, and select alternativeSecurityId to add it to the person object.
+
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/alt-security-identity-add.png" alt-text="Screenshot of how to add alternativeSecurityId to the person object":::
+
+1. Create an inbound synchronization rule to transform the altSecurityIdentities attribute in Active Directory to the alternativeSecurityId attribute in the metaverse.
+
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/alt-security-identity-inbound.png" alt-text="Screenshot of how to transform from altSecurityIdentities to alternateSecurityId attribute":::
+
+1. Create an outbound synchronization rule to transform the alternativeSecurityId attribute in the metaverse to the certificateUserIds attribute in Azure AD.
+
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/alt-security-identity-outbound.png" alt-text="Screenshot of outbound synchronization rule to transform from alternateSecurityId attribute to certificateUserIds":::
+
+To map to the patterns supported by certificateUserIds, administrators must use expressions to set the correct value.
+
+You can use the following expression for mapping to SKI and SHA1-PUKEY:
+
+```
+IIF(Contains([alternativeSecurityId],"x509:<SKI>")>0,[alternativeSecurityId],Error("No altSecurityIdentities SKI match found."))
+& IIF(Contains([alternativeSecurityId],"x509:<SHA1-PUKEY>")>0,[alternativeSecurityId],Error("No altSecurityIdentities SHA1-PUKEY match found."))
+```
+
+## Look up certificateUserIds using Microsoft Graph queries
+
+Tenant admins can run Microsoft Graph queries to find all users with a given certificateUserIds value.
+
+For example, the following queries look up user objects that have the value 'bob@contoso.com' in certificateUserIds:
+
+```http
+GET https://graph.microsoft.com/v1.0/users?$filter=certificateUserIds/any(x:x eq 'bob@contoso.com')
+```
+
+```http
+GET https://graph.microsoft.com/v1.0/users?$filter=startswith(certificateUserIds, 'bob@contoso.com')
+```
+
+```http
+GET https://graph.microsoft.com/v1.0/users?$filter=certificateUserIds eq 'bob@contoso.com'
+```
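+
+Tenant admins can also update the attribute through Microsoft Graph. The following request is a minimal sketch of setting a value; it assumes that **certificateUserIds** is writable through the user's `authorizationInfo` property, that `{user-id}` is replaced with the user's object ID or UPN, and that the value follows one of the supported patterns described earlier (use the beta endpoint if the property isn't available on v1.0).
+
+```http
+PATCH https://graph.microsoft.com/v1.0/users/{user-id}
+Content-Type: application/json
+
+{
+    "authorizationInfo": {
+        "certificateUserIds": [
+            "X509:<PN>bob@woodgrove"
+        ]
+    }
+}
+```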
+
+## Next steps
+
+- [Overview of Azure AD CBA](concept-certificate-based-authentication.md)
+- [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md)
+- [How to configure Azure AD CBA](how-to-certificate-based-authentication.md)
+- [Azure AD CBA on iOS devices](concept-certificate-based-authentication-mobile-ios.md)
+- [Azure AD CBA on Android devices](concept-certificate-based-authentication-mobile-android.md)
+- [Windows smart card logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)
+- [How to migrate federated users](concept-certificate-based-authentication-migration.md)
+- [FAQ](certificate-based-authentication-faq.yml)
active-directory Concept Certificate Based Authentication Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-limitations.md
Previously updated : 06/07/2022 Last updated : 10/10/2022
This topic covers supported and unsupported scenarios for Azure Active Directory (Azure AD) certificate-based authentication.
->[!NOTE]
->Azure AD certificate-based authentication is currently in public preview. Some features might not be supported or have limited capabilities. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Supported scenarios The following scenarios are supported: - User sign-ins to web browser-based applications on all platforms.
+- User sign-ins to Office mobile apps, including Outlook, OneDrive, and so on.
- User sign-ins on mobile native browsers. - Support for granular authentication rules for multifactor authentication by using the certificate issuer **Subject** and **policy OIDs**.-- Configuring certificate-to-user account bindings by using the certificate Subject Alternate Name (SAN) principal name and SAN RFC822 name.
+- Configuring certificate-to-user account bindings by using any of the certificate fields:
+ - Subject Alternate Name (SAN) PrincipalName and SAN RFC822Name
+ - Subject Key Identifier (SKI) and SHA1PublicKey
+- Configuring certificate-to-user account bindings by using any of the user object attributes:
+ - User Principal Name
+ - onPremisesUserPrincipalName
+ - CertificateUserIds
## Unsupported scenarios
The following scenarios aren't supported:
- Certificate Authority hints aren't supported, so the list of certificates that appears for users in the UI isn't scoped. - Only one CRL Distribution Point (CDP) for a trusted CA is supported. - The CDP can be only HTTP URLs. We don't support Online Certificate Status Protocol (OCSP), or Lightweight Directory Access Protocol (LDAP) URLs.-- Configuring other certificate-to-user account bindings, such as using the **subject field**, or **keyid** and **issuer**, aren't available in this release.
+- Configuring other certificate-to-user account bindings, such as using **Subject + Issuer** or **Issuer + Serial Number**, isn't available in this release.
+- Currently, passwords can't be disabled when CBA is enabled, and the option to sign in using a password is still displayed.
+## Supported operating systems
+
+| Operating system | Certificate on-device/Derived PIV | Smart cards |
+|:--|::|:-:|
+| Windows | &#x2705; | &#x2705; |
+| macOS | &#x2705; | &#x2705; |
+| iOS | &#x2705; | Supported vendors only |
+| Android | &#x2705; | Supported vendors only |
++
+## Supported browsers
+
+| Operating system | Chrome certificate on-device | Chrome smart card | Safari certificate on-device | Safari smart card | Edge certificate on-device | Edge smart card |
+|:--|:-:|:-:|:-:|:-:|:--|:-:|
+| Windows | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| macOS | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| iOS | &#10060; | &#10060; | &#x2705; | Supported vendors only | &#10060; | &#10060; |
+| Android | &#x2705; | &#10060; | N/A | N/A | &#10060; | &#10060; |
+
+>[!NOTE]
+> On iOS and Android mobile, Edge browser users can sign in to Edge to set up a profile by using the Microsoft Authentication Library (MSAL), like the Add account flow. When signed in to Edge with a profile, CBA is supported with on-device certificates and smart cards.
+
+## Smart card providers
+
+|Provider | Windows | Mac OS | iOS | Android |
+|:|:-:|:-:|:-:|:-:|
+|YubiKey | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
++ ## Next steps - [Overview of Azure AD CBA](concept-certificate-based-authentication.md) - [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md) - [How to configure Azure AD CBA](how-to-certificate-based-authentication.md)
+- [Windows SmartCard logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)
+- [Azure AD CBA on mobile devices (Android and iOS)](concept-certificate-based-authentication-mobile.md)
+- [CertificateUserIDs](concept-certificate-based-authentication-certificateuserids.md)
+- [How to migrate federated users](concept-certificate-based-authentication-migration.md)
- [FAQ](certificate-based-authentication-faq.yml)-- [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)+
active-directory Concept Certificate Based Authentication Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-migration.md
+
+ Title: Migrate from federation to Azure AD CBA
+description: Learn how to migrate from Federated server to Azure AD
+++++ Last updated : 10/05/2022+++++++++++
+# Migrate from federation to Azure AD certificate-based authentication (CBA)
+
+This article explains how to migrate from running federated servers such as Active Directory Federation Services (AD FS) on-premises to cloud authentication using Azure Active Directory (Azure AD) certificate-based authentication (CBA).
+
+## Staged Rollout
+
+[Staged Rollout](../hybrid/how-to-connect-staged-rollout.md) helps customers transition from AD FS to Azure AD by testing cloud authentication with selected groups of users before switching the entire tenant.
+
+## Enable Staged Rollout for certificate-based authentication on your tenant
+
+To configure Staged Rollout, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) in the User Administrator role for the organization.
+1. Search for and select **Azure Active Directory**.
+1. From the left menu, select **Azure AD Connect**.
+1. On the Azure AD Connect page, under **Staged Rollout of cloud authentication**, click **Enable Staged Rollout for managed user sign-in**.
+1. On the **Enable Staged Rollout** feature page, click **On** for the option [Certificate-based authentication](active-directory-certificate-based-authentication-get-started.md).
+1. Click **Manage groups** and add groups you want to be part of cloud authentication. To avoid a time-out, ensure that the security groups contain no more than 200 members initially.
+
+For more information, see [Staged Rollout](../hybrid/how-to-connect-staged-rollout.md).
+
+## Use Azure AD Connect to update the certificateUserIds attribute
+
+An AD FS admin can use **Synchronization Rules Editor** to create rules to sync the values of attributes from AD FS to Azure AD user objects. For more information, see [Sync rules for certificateUserIds](concept-certificate-based-authentication-certificateuserids.md#update-certificate-user-ids-using-azure-ad-connect).
+
+Azure AD Connect requires the **Hybrid Identity Administrator** role, which grants the permissions needed to write to the new cloud attribute.
+
+>[!NOTE]
>If a user is using synchronized attributes, such as the onPremisesUserPrincipalName attribute in the user object, for username binding, be aware that any user who has administrative access to the Azure AD Connect server can change the synchronized attribute mapping and the value of the synchronized attribute. The user doesn't need to be a cloud admin. The AD FS admin should make sure that administrative access to the Azure AD Connect server is limited, and that privileged accounts are cloud-only accounts.
+
+## Frequently asked questions about migrating from AD FS to Azure AD
+
+### Can we have privileged accounts with a federated AD FS server?
+
+Although it's possible, Microsoft recommends privileged accounts be cloud-only accounts. Using cloud-only accounts for privileged access limits exposure in Azure AD from a compromised on-premises environment. For more information, see [Protecting Microsoft 365 from on-premises attacks](../fundamentals/protect-m365-from-on-premises-attacks.md).
+
+### If an organization runs a hybrid with both AD FS and Azure AD CBA, is it still vulnerable to an AD FS compromise?
+
+Microsoft recommends that privileged accounts be cloud-only accounts. This practice limits the exposure in Azure AD from a compromised on-premises environment. Maintaining privileged accounts as cloud-only is foundational to this goal.
+
+For synchronized accounts:
+
+- If they're in a managed domain (not federated), there's no risk from the federated IdP.
+- If they're in a federated domain, but a subset of accounts is being moved to Azure AD CBA by Staged Rollout, they're subject to risks related to the federated IdP until the federated domain is fully switched to cloud authentication.
+
+### Should organizations eliminate federated servers like AD FS to prevent the capability to pivot from AD FS to Azure?
+
+With federation, an attacker could impersonate anyone, such as a CIO, even if they can't obtain a cloud-only role like the Global Administrator account.
+
+When a domain is federated in Azure AD, a high level of trust is placed on the federated IdP. AD FS is one example, but the notion holds true for *any* federated IdP. Many organizations deploy a federated IdP such as AD FS exclusively to accomplish certificate-based authentication. Azure AD CBA completely removes the AD FS dependency in this case. With Azure AD CBA, customers can move their application estate to Azure AD to modernize their IAM infrastructure and reduce costs with increased security.
+
+From a security perspective, there's no change to the credential, including the X.509 certificate, CACs, PIVs, and so on, or to the PKI being used. The PKI owners retain complete control of the certificate issuance and revocation lifecycle and policy. The revocation check and the authentication happen at Azure AD instead of the federated IdP. These checks enable passwordless, phishing-resistant authentication directly to Azure AD for all users.
+
+### How does authentication work with Federated AD FS and Azure AD cloud authentication with Windows?
+
+Azure AD CBA requires the user or application to supply the Azure AD UPN of the user who signs in.
+
+In the browser example, the user most often types in their Azure AD UPN. The Azure AD UPN is used for realm and user discovery. The certificate used then must match this user by using one of the configured username bindings in the policy.
+
+In Windows sign-in, the match depends on whether the device is hybrid joined or Azure AD joined. But in both cases, if a username hint is provided, Windows will send the hint as an Azure AD UPN. The certificate used then must match this user by using one of the configured username bindings in the policy.
++
+## Next steps
+
+- [Overview of Azure AD CBA](concept-certificate-based-authentication.md)
+- [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md)
+- [How to configure Azure AD CBA](how-to-certificate-based-authentication.md)
+- [Azure AD CBA on iOS devices](concept-certificate-based-authentication-mobile-ios.md)
+- [Azure AD CBA on Android devices](concept-certificate-based-authentication-mobile-android.md)
+- [Windows smart card logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)
+- [Certificate user IDs](concept-certificate-based-authentication-certificateuserids.md)
+- [FAQ](certificate-based-authentication-faq.yml)
active-directory Concept Certificate Based Authentication Mobile Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-mobile-android.md
+
+ Title: Azure Active Directory certificate-based authentication on Android devices - Azure Active Directory
+description: Learn about Azure Active Directory certificate-based authentication on Android devices
+++++ Last updated : 10/05/2022+++++++++
+# Azure Active Directory certificate-based authentication on Android devices
+
+Android devices can use a client certificate on their device for certificate-based authentication (CBA) to Azure Active Directory (Azure AD). CBA can be used to connect to:
+
+- Office mobile applications such as Microsoft Outlook and Microsoft Word
+- Exchange ActiveSync (EAS) clients
+
+Azure AD CBA is supported for certificates on-device on native browsers, and on Microsoft first-party applications on Android devices.
+
+## Prerequisites
+
+- Android version must be Android 5.0 (Lollipop) or later.
+
+## Support for on-device certificates
+
+On-device certificates are provisioned on the device. Customers can use Mobile Device Management (MDM) to provision the certificates on the device.
+
+## Supported platforms
+
+- Applications using the latest MSAL libraries or Microsoft Authenticator can do CBA
+- Edge with a profile (when users add an account and sign in with a profile) supports CBA
+- Microsoft first-party apps with the latest MSAL libraries or Microsoft Authenticator can do CBA
+
+## Microsoft mobile applications support
+
+| Applications | Support |
+|:|::|
+|Azure Information Protection app| &#x2705; |
+|Company Portal | &#x2705; |
+|Microsoft Teams | &#x2705; |
+|Office (mobile) | &#x2705; |
+|OneNote | &#x2705; |
+|OneDrive | &#x2705; |
+|Outlook | &#x2705; |
+|Power BI | &#x2705; |
+|Skype for Business | &#x2705; |
+|Word / Excel / PowerPoint | &#x2705; |
+|Yammer | &#x2705; |
+
+## Support for Exchange ActiveSync clients
+
+Certain Exchange ActiveSync applications on Android 5.0 (Lollipop) or later are supported.
+
+To determine if your email application supports Azure AD CBA, contact your application developer.
+
+## Next steps
+
+- [Overview of Azure AD CBA](concept-certificate-based-authentication.md)
+- [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md)
+- [How to configure Azure AD CBA](how-to-certificate-based-authentication.md)
+- [Azure AD CBA on iOS devices](concept-certificate-based-authentication-mobile-ios.md)
+- [Windows SmartCard logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)
+- [Certificate user IDs](concept-certificate-based-authentication-certificateuserids.md)
+- [How to migrate federated users](concept-certificate-based-authentication-migration.md)
+- [FAQ](certificate-based-authentication-faq.yml)
active-directory Concept Certificate Based Authentication Mobile Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-mobile-ios.md
+
+ Title: Azure Active Directory certificate-based authentication on iOS devices - Azure Active Directory
+description: Learn about Azure Active Directory certificate-based authentication on iOS devices
+++++ Last updated : 10/05/2022+++++++++
+# Azure Active Directory certificate-based authentication on iOS
+
+Devices that run iOS can use certificate-based authentication (CBA) to authenticate to Azure Active Directory (Azure AD) using a client certificate on their device when connecting to:
+
+- Office mobile applications such as Microsoft Outlook and Microsoft Word
+- Exchange ActiveSync (EAS) clients
+
+Azure AD CBA is supported for certificates on-device on native browsers and on Microsoft first-party applications on iOS devices.
+
+## Prerequisites
+
+- iOS version must be iOS 9 or later.
+- Microsoft Authenticator is required for Office applications and Outlook on iOS.
+
+## Support for on-device certificates and external storage
+
+On-device certificates are provisioned on the device. Customers can use Mobile Device Management (MDM) to provision the certificates on the device. Because iOS doesn't support hardware-protected keys out of the box, customers can use external storage devices for certificates.
+
+## Advantages of external storage for certificates
+
+Customers can use external security keys to store their certificates. Security keys with certificates:
+
+- Enable use on any device, without requiring certificates to be provisioned on every device the user has
+- Are hardware secured with a PIN, which makes them phishing-resistant
+- Provide multifactor authentication with a PIN as the second factor to access the private key of the certificate on the key
+- Satisfy the industry requirement to have MFA on a separate device
+- Are future-proof, because multiple credentials can be stored, including FIDO2 keys
+
+## Supported platforms
+
+- Only native browsers are supported
+- Applications using the latest MSAL libraries or Microsoft Authenticator can do CBA
+- Edge with a profile (when users add an account and sign in with a profile) supports CBA
+- Microsoft first-party apps with the latest MSAL libraries or Microsoft Authenticator can do CBA
+
+### Browsers
+
+|Edge | Chrome | Safari | Firefox |
+|--|||-|
+|&#10060; | &#10060; | &#x2705; |&#10060; |
+
+### Vendors for external storage
+
+Azure AD CBA will support certificates on YubiKeys. Users can install the YubiKey authenticator application from YubiKey and do Azure AD CBA. Applications that don't use the latest MSAL libraries also need to install Microsoft Authenticator.
+
+## Microsoft mobile applications support
+
+| Applications | Support |
+|:|::|
+|Azure Information Protection app| &#x2705; |
+|Company Portal | &#x2705; |
+|Microsoft Teams | &#x2705; |
+|Office (mobile) | &#x2705; |
+|OneNote | &#x2705; |
+|OneDrive | &#x2705; |
+|Outlook | &#x2705; |
+|Power BI | &#x2705; |
+|Skype for Business | &#x2705; |
+|Word / Excel / PowerPoint | &#x2705; |
+|Yammer | &#x2705; |
+
+## Support for Exchange ActiveSync clients
+
+On iOS 9 or later, the native iOS mail client is supported.
+
+To determine if your email application supports Azure AD CBA, contact your application developer.
+
+## Known issue
+
+On iOS, users will see a "double prompt", where they must click the option to use certificate-based authentication twice. We're working to create a seamless user experience.
+
+## Next steps
+
+- [Overview of Azure AD CBA](concept-certificate-based-authentication.md)
+- [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md)
+- [How to configure Azure AD CBA](how-to-certificate-based-authentication.md)
+- [Azure AD CBA on Android devices](concept-certificate-based-authentication-mobile-android.md)
+- [Windows smart card logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)
+- [Certificate user IDs](concept-certificate-based-authentication-certificateuserids.md)
+- [How to migrate federated users](concept-certificate-based-authentication-migration.md)
+- [FAQ](certificate-based-authentication-faq.yml)
active-directory Concept Certificate Based Authentication Mobile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-mobile.md
- Title: Azure Active Directory certificate-based authentication on mobile devices (Android and iOS) - Azure Active Directory
-description: Learn about Azure Active Directory certificate-based authentication on mobile devices (Android and iOS)
----- Previously updated : 06/07/2022---------
-# Azure Active Directory certificate-based authentication on mobile devices (Android and iOS) (Preview)
-
-Android and iOS devices can use certificate-based authentication (CBA) to authenticate to Azure Active Directory using a client certificate on their device when connecting to:
--- Office mobile applications such as Microsoft Outlook and Microsoft Word-- Exchange ActiveSync (EAS) clients-
-Azure AD certificate-based authentication (CBA) is supported for certificates on-device on native browsers as well as on Microsoft first-party applications on both iOS and Android devices.
-
-Azure AD CBA eliminates the need to enter a username and password combination into certain mail and Microsoft Office applications on your mobile device.
-
-## Prerequisites
--- For Android device, OS version must be Android 5.0 (Lollipop) and above.-- For iOS device, OS version must be iOS 9 or above.-- Microsoft Authenticator is required for Office applications on iOS.-
-## Microsoft mobile applications support
-
-| Applications | Support |
-|:|::|
-|Azure Information Protection app| &#x2705; |
-|Company Portal | &#x2705; |
-|Microsoft Teams | &#x2705; |
-|Office (mobile) | &#x2705; |
-|OneNote | &#x2705; |
-|OneDrive | &#x2705; |
-|Outlook | &#x2705; |
-|Power BI | &#x2705; |
-|Skype for Business | &#x2705; |
-|Word / Excel / PowerPoint | &#x2705; |
-|Yammer | &#x2705; |
-
-## Support for Exchange ActiveSync clients
-
-On iOS 9 or later, the native iOS mail client is supported.
-
-Certain Exchange ActiveSync applications on Android 5.0 (Lollipop) or later are supported.
-
-To determine if your email application supports this feature, contact your application developer.
-
-## Known issue
-
-On iOS, users will see a double prompt, where they must click the option to use certificate-based authentication twice. We are working on making the user experience better.
-
-## Next steps
--- [Overview of Azure AD CBA](concept-certificate-based-authentication.md)-- [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md) -- [Limitations with Azure AD CBA](concept-certificate-based-authentication-limitations.md)-- [Windows SmartCard logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)-- [FAQ](certificate-based-authentication-faq.yml)-- [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)--
active-directory Concept Certificate Based Authentication Smartcard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-smartcard.md
Title: Windows SmartCard logon using Azure Active Directory certificate-based authentication - Azure Active Directory
-description: Learn how to enable Windows SmartCard logon using Azure Active Directory certificate-based authentication
+ Title: Windows smart card sign-in using Azure Active Directory certificate-based authentication - Azure Active Directory
+description: Learn how to enable Windows smart card sign-in using Azure Active Directory certificate-based authentication
Previously updated : 06/15/2022 Last updated : 10/05/2022 -+
-# Windows SmartCard logon using Azure Active Directory certificate-based authentication (Preview)
+# Windows smart card sign-in using Azure Active Directory certificate-based authentication
-Azure AD users can authenticate using X.509 certificates on their SmartCards directly against Azure AD at Windows logon. There is no special configuration needed on the Windows client to accept the SmartCard authentication.
+Azure Active Directory (Azure AD) users can authenticate using X.509 certificates on their smart cards directly against Azure AD at Windows sign-in. There's no special configuration needed on the Windows client to accept the smart card authentication.
## User experience
-Follow these steps to set up Windows SmartCard logon:
+Follow these steps to set up Windows smart card sign-in:
1. Join the machine to either Azure AD or a hybrid environment (hybrid join). 1. Configure Azure AD CBA in your tenant as described in [Configure Azure AD CBA](how-to-certificate-based-authentication.md).
-1. Make sure the user is either on managed authentication or using staged rollout.
-1. Present the physical or virtual SmartCard to the test machine.
-1. Select SmartCard icon, enter the PIN and authenticate the user.
+1. Make sure the user is either on managed authentication or using [Staged Rollout](../hybrid/how-to-connect-staged-rollout.md).
+1. Present the physical or virtual smart card to the test machine.
+1. Select the smart card icon, enter the PIN, and authenticate the user.
- :::image type="content" border="false" source="./media/concept-certificate-based-authentication/smartcard.png" alt-text="Screenshot of SmartCard sign in.":::
+ :::image type="content" border="false" source="./media/concept-certificate-based-authentication/smartcard.png" alt-text="Screenshot of smart card sign-in.":::
-Users will get a primary refresh token (PRT) from Azure Active Directory after the successful login and depending on the Certificate-based authentication configuration, the PRT will contain the multifactor claim.
+Users will get a primary refresh token (PRT) from Azure AD after the successful sign-in. Depending on the CBA configuration, the PRT will contain the multifactor claim.
+
+## Expected behavior of Windows sending user UPN to Azure AD CBA
+
+|Sign-in | Azure AD join | Hybrid join |
+|--||-|
+|First sign-in | Pull from certificate | AD UPN or x509Hint |
+|Subsequent sign-in | Pull from certificate | Cached Azure AD UPN |
+
+### Windows rules for sending UPN for Azure AD-joined devices
+
+Windows first uses the PrincipalName from the Subject Alternative Name (SAN) of the certificate used to sign in to Windows and, if that's not present, the RFC822Name. If neither is present, the user must also supply a User Name Hint. For more information, see [User Name Hint](/windows/security/identity-protection/smart-cards/smart-card-group-policy-and-registry-settings#allow-user-name-hint).
+
+### Windows rules for sending UPN for hybrid Azure AD-joined devices
+
+For hybrid-joined devices, the user must first successfully sign in to the Active Directory (AD) domain. The user's AD UPN is sent to Azure AD. In most cases, the Active Directory UPN value is the same as the Azure AD UPN value and is synchronized with Azure AD Connect.
+
+Some customers maintain different, and sometimes non-routable, UPN values in Active Directory (such as user@woodgrove.local). In these cases, the value sent by Windows may not match the user's Azure AD UPN. To support scenarios where Azure AD can't match the value sent by Windows, a subsequent lookup is performed for a user with a matching value in their **onPremisesUserPrincipalName** attribute. If the sign-in is successful, Windows caches the user's Azure AD UPN, and the cached UPN is sent in subsequent sign-ins.
+
+>[!NOTE]
>In all cases, a user-supplied username login hint (X509UserNameHint) will be sent if provided. For more information, see [User Name Hint](/windows/security/identity-protection/smart-cards/smart-card-group-policy-and-registry-settings#allow-user-name-hint).
+
+For more information about the Windows flow, see [Certificate Requirements and Enumeration (Windows)](/windows/security/identity-protection/smart-cards/smart-card-certificate-requirements-and-enumeration).
+
+## Supported Windows platforms
+
+Windows smart card sign-in works with the latest preview build of Windows 11. The functionality is also available on earlier Windows versions after you apply one of the following updates:
+
+- [Windows 11 - kb5017383](https://support.microsoft.com/topic/september-20-2022-kb5017383-os-build-22000-1042-preview-62753265-68e9-45d2-adcb-f996bf3ad393)
+- [Windows 10 - kb5017379](https://support.microsoft.com/topic/20-september-2022-kb5017379-os-build-17763-3469-preview-50a9b9e2-745d-49df-aaae-19190e10d307)
+- [Windows Server 20H2- kb5017380](https://support.microsoft.com/topic/20-september-2022-kb5017380-os-builds-19042-2075-19043-2075-og-19044-2075-preview-59ab550c-105e-4481-b440-c37f07bf7897)
+- [Windows Server 2022 - kb5017381](https://support.microsoft.com/topic/20-september-2022-kb5017381-os-build-20348-1070-preview-dc843fea-bccd-4550-9891-a021ae5088f0)
+- [Windows Server 2019 - kb5017379](https://support.microsoft.com/topic/20-september-2022-kb5017379-os-build-17763-3469-preview-50a9b9e2-745d-49df-aaae-19190e10d307)
+
+## Supported browsers
+
+|Edge | Chrome | Safari | Firefox |
+|--|||-|
+|&#x2705; | &#x2705; | &#x2705; |&#x2705; |
+
+>[!NOTE]
>Azure AD CBA on Windows supports both on-device certificates and external storage, like security keys.
## Restrictions and caveats -- The Windows login only works with the latest preview build of Windows 11. We are working to backport the functionality to Windows 10 and Windows Server.-- Only Windows machines that are joined to either Azure AD or a hybrid environment can test SmartCard logon. -- Like in the other Azure AD CBA scenarios, the user must be on a managed domain or using staged rollout and cannot use a federated authentication model.
+- Azure AD CBA is supported on Windows devices that are hybrid Azure AD joined or Azure AD joined.
+- Users must be in a managed domain or using Staged Rollout, and can't use a federated authentication model.
## Next steps - [Overview of Azure AD CBA](concept-certificate-based-authentication.md)-- [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md) -- [Limitations with Azure AD CBA](concept-certificate-based-authentication-limitations.md)-- [Azure AD CBA on mobile devices (Android and iOS)](concept-certificate-based-authentication-mobile.md)
+- [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md)
+- [How to configure Azure AD CBA](how-to-certificate-based-authentication.md)
+- [Azure AD CBA on iOS devices](concept-certificate-based-authentication-mobile-ios.md)
+- [Azure AD CBA on Android devices](concept-certificate-based-authentication-mobile-android.md)
+- [Certificate user IDs](concept-certificate-based-authentication-certificateuserids.md)
+- [How to migrate federated users](concept-certificate-based-authentication-migration.md)
- [FAQ](certificate-based-authentication-faq.yml)-- [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Title: Azure AD certificate-based authentication technical deep dive (Preview) - Azure Active Directory
+ Title: Azure AD certificate-based authentication technical deep dive - Azure Active Directory
description: Learn how Azure AD certificate-based authentication works Previously updated : 06/15/2022 Last updated : 10/10/2022
-# Azure AD certificate-based authentication technical deep dive (Preview)
+# Azure AD certificate-based authentication technical deep dive
-This article explains how Azure Active Directory (Azure AD) certificate-based authentication (CBA) works, with background information and testing scenarios.
+This article explains how Azure Active Directory (Azure AD) certificate-based authentication (CBA) works, and dives into technical details on Azure AD CBA configurations.
->[!NOTE]
->Azure AD certificate-based authentication is currently in public preview. Some features might not be supported or have limited capabilities. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## How does Azure Active Directory certificate-based authentication work?
+## How does Azure AD certificate-based authentication work?
-This diagram shows what happens when a user tries to sign into an application secured by Azure AD CBA is enabled on the tenant:
+The following image describes what happens when a user tries to sign in to an application in a tenant where Azure AD CBA is enabled.
:::image type="content" border="false" source="./media/concept-certificate-based-authentication-technical-deep-dive/how-it-works.png" alt-text="Illustration with steps about how Azure AD certificate-based authentication works." :::
-Let's cover each step:
+Now we'll walk through each step:
1. The user tries to access an application, such as [MyApps portal](https://myapps.microsoft.com/).
-1. If the user is not already signed in, the user is redirected to the Azure AD **User Sign-in** page at [https://login.microsoftonline.com/](https://login.microsoftonline.com/).
-1. The user enters their username into the Azure AD sign-in page, and then clicks **Next**.
+1. If the user isn't already signed in, the user is redirected to the Azure AD **User Sign-in** page at [https://login.microsoftonline.com/](https://login.microsoftonline.com/).
+1. The user enters their username into the Azure AD sign-in page, and then clicks **Next**. Azure AD does home realm discovery by using the tenant name, and the username is used to look up the user in the tenant.
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in.png" alt-text="Screenshot of the Sign-in for MyApps portal.":::
-1. Azure AD checks whether CBA is enabled for the tenant. If CBA is enabled for the tenant, the user sees a link to **Sign in with a certificate** on the password page. If you do not see the sign-in link, make sure CBA is enabled on the tenant. For more information, see [How do I enable Azure AD CBA?](certificate-based-authentication-faq.yml#how-do-i-enable-azure-ad-cba-).
+1. Azure AD checks whether CBA is enabled for the tenant. If CBA is enabled, the user sees a link to **Use a certificate or smartcard** on the password page. If the user doesn't see the sign-in link, make sure CBA is enabled on the tenant. For more information, see [How do I enable Azure AD CBA?](certificate-based-authentication-faq.yml#how-can-an-administrator-enable-azure-ad-cba-).
>[!NOTE]
- > If CBA is enabled on the tenant, all users will see the link to **Sign in with a certificate** on the password page. However, only the users in scope for CBA will be able to authenticate successfully against an application that uses Azure Active Directory as their Identity provider.
+ > If CBA is enabled on the tenant, all users will see the link to **Use a certificate or smart card** on the password page. However, only the users in scope for CBA will be able to authenticate successfully against an application that uses Azure AD as their Identity provider (IdP).
- :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-cert.png" alt-text="Screenshot of the Sign-in with a certificate.":::
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-cert.png" alt-text="Screenshot of the Use a certificate or smart card.":::
- If you have enabled other authentication methods like **Phone sign-in** or **FIDO2**, users may see a different sign-in screen.
+ If you enabled other authentication methods like **Phone sign-in** or **FIDO2**, users may see a different sign-in screen.
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-alt.png" alt-text="Screenshot of the Sign-in if FIDO2 is also enabled.":::
-1. After the user clicks the link, the client is redirected to the certauth endpoint, which is [https://certauth.login.microsoftonline.com](https://certauth.login.microsoftonline.com) for Azure Global. For [Azure Government](../../azure-government/compare-azure-government-global-azure.md#guidance-for-developers), the certauth endpoint is [https://certauth.login.microsoftonline.us](https://certauth.login.microsoftonline.us). For the correct endpoint for other environments, see the specific Microsoft cloud docs.
+1. Once the user selects certificate-based authentication, the client is redirected to the certauth endpoint, which is [https://certauth.login.microsoftonline.com](https://certauth.login.microsoftonline.com) for Azure Global. For [Azure Government](../../azure-government/compare-azure-government-global-azure.md#guidance-for-developers), the certauth endpoint is [https://certauth.login.microsoftonline.us](https://certauth.login.microsoftonline.us).
- The endpoint performs mutual authentication and requests the client certificate as part of the TLS handshake. You will see an entry for this request in the Sign-in logs. There is a [known issue](#known-issues) where User ID is displayed instead of Username.
+ The endpoint performs TLS mutual authentication, and requests the client certificate as part of the TLS handshake. You'll see an entry for this request in the Sign-ins log.
- :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-log.png" alt-text="Screenshot of the Sign-in log in Azure AD." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-log.png":::
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-log.png" alt-text="Screenshot of the Sign-ins log in Azure AD." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-log.png":::
>[!NOTE]
- >The network administrator should allow access to certauth endpoint for the customerΓÇÖs cloud environment in addition to login.microsoftonline.com. Disable TLS inspection on the certauth endpoint to make sure the client certificate request succeeds as part of the TLS handshake.
>The network administrator should allow access to the User sign-in page and certauth endpoint for the customer's cloud environment. Disable TLS inspection on the certauth endpoint to make sure the client certificate request succeeds as part of the TLS handshake.
- Click the log entry to bring up **Activity Details** and click **Authentication Details**. You will see an entry for X.509 certificate.
+ Click the log entry to bring up **Activity Details** and click **Authentication Details**. You'll see an entry for the X.509 certificate.
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/entry.png" alt-text="Screenshot of the entry for X.509 certificate.":::
-1. Azure AD will request a client certificate and the user picks the client certificate and clicks **Ok**.
+1. Azure AD will request a client certificate, the user picks the client certificate, and clicks **Ok**.
>[!NOTE]
- >TrustedCA hints are not supported, so the list of certificates can't be further scoped.
+ >Trusted CA hints are not supported, so the list of certificates can't be further scoped. We're looking into adding this functionality in the future.
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/cert-picker.png" alt-text="Screenshot of the certificate picker." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/cert-picker.png":::
-1. Azure AD verifies the certificate revocation list to make sure the certificate is not revoked and is valid. Azure AD identifies the user in the tenant by using the [username binding configured](how-to-certificate-based-authentication.md#step-4-configure-username-binding-policy) on the tenant by mapping the certificate field value to user attribute value.
-1. If a unique user is found and the user has a conditional access policy and needs multifactor authentication (MFA) and the [certificate authentication binding rule](how-to-certificate-based-authentication.md#step-3-configure-authentication-binding-policy) satisfies MFA, then Azure AD signs the user in immediately. If the certificate satisfies only a single factor, then it requests the user for a second factor to complete Azure AD Multi-Factor Authentication.
+1. Azure AD verifies the certificate revocation list to make sure the certificate isn't revoked and is valid. Azure AD identifies the user by using the [username binding configured](how-to-certificate-based-authentication.md#step-4-configure-username-binding-policy) on the tenant to map the certificate field value to the user attribute value.
+1. If a unique user is found with a Conditional Access policy that requires multifactor authentication (MFA), and the [certificate authentication binding rule](how-to-certificate-based-authentication.md#step-3-configure-authentication-binding-policy) satisfies MFA, then Azure AD signs the user in immediately. If the certificate satisfies only a single factor, then it requests the user for a second factor to complete Azure AD Multi-Factor Authentication.
1. Azure AD completes the sign-in process by sending a primary refresh token back to indicate successful sign-in. 1. If the user sign-in is successful, the user can access the application. ## Understanding the authentication binding policy
-The authentication binding policy helps determine the strength of authentication as either single-factor or multi-factor. An administrator can change the default value from single factor to multifactor or set up custom policy configurations either by issuer subject or policy OID fields in the certificate.
+The authentication binding policy helps determine the strength of authentication as either single-factor or multifactor. An administrator can change the default value from single factor to multifactor, or set up custom policy configurations either by using issuer subject or policy OID fields in the certificate.
+
+### Certificate strengths
+
+An admin can determine whether the certificates are single-factor or multifactor strength. For more information, see the documentation that maps [NIST Authentication Assurance Levels to Azure AD Auth Methods](https://aka.ms/AzureADNISTAAL), which builds upon [NIST SP 800-63B, Digital Identity Guidelines: Authentication and Lifecycle Management](https://csrc.nist.gov/publications/detail/sp/800-63b/final).
+
+### Single-factor certificate authentication
+
+When a user has a single-factor certificate, they can't perform multifactor authentication. There's no support for a second factor when the first factor is a single-factor certificate. We're working to add support for second factors.
-Since multiple authentication binding policy rules can be created with different certificate fields, there are some rules that determine the authentication protection level. They are as follows:
-1. Exact match is used for strong authentication via policy OID. If you have a certificate A with policy OID **1.2.3.4.5** and a derived credential B based on that certificate has a policy OID **1.2.3.4.5.6** and the custom rule is defined as **Policy OID** with value **1.2.3.4.5** with MFA, only certificate A will satisfy MFA and credential B will satisfy only single-factor authentication. If the user used derived credential during sign-in and was configured to have MFA, the user will be asked for a second factor for successful authentication.
-1. Policy OID rules will take precedence over certificate issuer rules. If a certificate has both policy OID and Issuer, the policy OID is always checked first and if no policy rule is found then the issuer subject bindings are checked. Policy OID has a higher strong authentication binding priority than the issuer.
+### Multifactor certificate authentication
+
+When a user has a multifactor certificate, they can perform multifactor authentication only with certificates. However, the tenant admin should make sure the certificates are protected with a PIN or hardware module to be considered multifactor.
+
+### How Azure AD resolves multiple authentication policy binding rules
+
+Because multiple authentication binding policy rules can be created with different certificate fields, there are some rules that determine the authentication protection level. They are as follows:
+
+1. Exact match is used for strong authentication by using policy OID. If you have a certificate A with policy OID **1.2.3.4.5**, and a derived credential B based on that certificate has a policy OID **1.2.3.4.5.6**, and the custom rule is defined as **Policy OID** with value **1.2.3.4.5** with MFA, only certificate A satisfies MFA, and credential B satisfies only single-factor authentication. If the user signed in with the derived credential and was required to perform MFA, the user is asked for a second factor to complete authentication.
+1. Policy OID rules will take precedence over certificate issuer rules. If a certificate has both policy OID and Issuer, the policy OID is always checked first, and if no policy rule is found then the issuer subject bindings are checked. Policy OID has a higher strong authentication binding priority than the issuer.
1. If one CA binds to MFA, all user certificates that the CA issues qualify as MFA. The same logic applies for single-factor authentication.
1. If one policy OID binds to MFA, all user certificates that include this policy OID as one of the OIDs (a user certificate could have multiple policy OIDs) qualify as MFA.
-1. If there is a conflict between multiple policy OIDs (such as when a certificate has two policy OIDs, where one binds to single-factor authentication and the other binds to MFA) then treat the certificate as a single-factor authentication.
-1. One certificate can only have one valid strong authentication binding (that is, a certificate cannot bind to both single-factor and MFA).
+1. If there's a conflict between multiple policy OIDs (such as when a certificate has two policy OIDs, where one binds to single-factor authentication and the other binds to MFA) then treat the certificate as a single-factor authentication.
+1. One certificate can only have one valid strong authentication binding (that is, a certificate can't bind to both single-factor and MFA).
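To make the precedence concrete, here's a small PowerShell sketch that models how these rules decide the protection level for one presented certificate. It's illustrative only: the function name, parameters, and rule tables are invented for the example and aren't part of any Azure AD module or the actual service logic.

```powershell
# Illustrative model only: decide the protection level for one presented certificate.
# $OidRules and $IssuerRules stand in for the tenant's custom authentication binding rules.
function Get-CbaProtectionLevel {
    param(
        [string[]]  $CertPolicyOids,                 # policy OIDs present on the certificate
        [string]    $CertIssuer,                     # issuer subject of the certificate
        [hashtable] $OidRules    = @{},              # exact policy OID -> 'MultiFactor' or 'SingleFactor'
        [hashtable] $IssuerRules = @{},              # issuer subject   -> 'MultiFactor' or 'SingleFactor'
        [string]    $DefaultLevel = 'SingleFactor'   # tenant default
    )

    # Policy OID rules are checked first, and matching is exact.
    $oidLevels = @($CertPolicyOids | Where-Object { $OidRules.ContainsKey($_) } |
        ForEach-Object { $OidRules[$_] } | Select-Object -Unique)

    if ($oidLevels.Count -gt 1) { return 'SingleFactor' }   # conflicting OID bindings fall back to single factor
    if ($oidLevels.Count -eq 1) { return $oidLevels[0] }    # one unambiguous OID binding wins

    # No policy OID rule matched: fall back to the issuer subject binding.
    if ($IssuerRules.ContainsKey($CertIssuer)) { return $IssuerRules[$CertIssuer] }

    return $DefaultLevel
}

# Example: a derived credential with OID 1.2.3.4.5.6 doesn't exactly match the 1.2.3.4.5 rule,
# so it only satisfies single-factor authentication.
Get-CbaProtectionLevel -CertPolicyOids '1.2.3.4.5.6' -CertIssuer 'CN=ContosoCA' `
    -OidRules @{ '1.2.3.4.5' = 'MultiFactor' }              # returns SingleFactor
```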
## Understanding the username binding policy
-The username binding policy helps locate the user in the tenant. By default, Subject Alternate Name (SAN) Principal Name in the certificate is mapped to onPremisesUserPrincipalName attribute of the user object to determine the user.
+The username binding policy helps validate the certificate of the user. By default, the Subject Alternate Name (SAN) Principal Name in the certificate is mapped to the userPrincipalName attribute of the user object to determine the user.
+
+### Achieve higher security with certificate bindings
+
+There are four supported methods. In general, mapping types are considered high-affinity if they're based on identifiers that can't be reused, such as Subject Key Identifier (SKI) or SHA1 Public Key. These identifiers convey a higher assurance that only a single certificate can be used to authenticate the respective user. Mapping types based on usernames and email addresses are considered low-affinity because those identifiers can be reused. Azure AD therefore implements two low-affinity mappings (based on reusable identifiers) and two high-affinity bindings. For more information, see [certificateUserIds](concept-certificate-based-authentication-certificateuserids.md).
-An administrator can override the default and create a custom mapping. Currently, we support two certificate fields SAN Principal Name and SAN RFC822Name to map against the user object attribute userPrincipalName and onPremisesUserPrincipalName.
+|Certificate mapping field | Examples of values in certificateUserIds | User object attributes | Type |
+|---|---|---|---|
+|PrincipalName | "X509:\<PN>bob@woodgrove.com" | userPrincipalName <br> onPremisesUserPrincipalName <br> certificateUserIds | low-affinity |
+|RFC822Name | "X509:\<RFC822>user@woodgrove.com" | userPrincipalName <br> onPremisesUserPrincipalName <br> certificateUserIds | low-affinity |
+|X509SKI | "X509:\<SKI>123456789abcdef" | certificateUserIds | high-affinity |
+|X509SHA1PublicKey | "X509:\<SHA1-PUKEY>123456789abcdef" | certificateUserIds | high-affinity |
-**Rules applied for user bindings:**
+### How Azure AD resolves multiple username policy binding rules
Use the highest priority (lowest number) binding.
-1. If the X.509 certificate field is on the presented certificate, try to look up the user by using the value in the specified field.
- 1. If a unique user is found, authenticate the user.
- 1. If a unique user is not found, authentication fails.
-1. If the X.509 certificate field is not on the presented certificate, move to the next priority binding.
-1. If the specified X.509 certificate field is found on the certificate, but Azure AD does not find a user object in the directory matching that value, the authentication fails. Azure AD does not attempt to use the next binding in the list in this case. Only if the X.509 certificate field is not on the certificate does it try the next binding, as mentioned in Step 2.
+1. Look up the user object by using the username or User Principal Name.
+1. If the X.509 certificate field is on the presented certificate, Azure AD will match the value in the certificate field to the user object attribute value.
+ 1. If a match is found, user authentication is successful.
+ 1. If a match isn't found, move to the next priority binding.
+1. If the X.509 certificate field isn't on the presented certificate, move to the next priority binding.
+1. Validate all the configured username bindings until one of them results in a match and user authentication is successful.
+1. If a match isn't found on any of the configured username bindings, user authentication fails.
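The sketch below models this evaluation order in PowerShell. It's illustrative only; the `$Bindings`, `$CertFields`, and `$User` shapes are invented stand-ins for the configured policy, the presented certificate, and the user object located by UPN.

```powershell
# Illustrative model only: evaluate the ordered username bindings for one sign-in.
function Test-CbaUsernameBinding {
    param(
        [object[]]  $Bindings,     # e.g. @{ CertField = 'PrincipalName'; UserAttribute = 'userPrincipalName' }, highest priority first
        [hashtable] $CertFields,   # e.g. @{ PrincipalName = 'bob@woodgrove.com' }
        [hashtable] $User          # e.g. @{ userPrincipalName = 'bob@woodgrove.com'; certificateUserIds = @() }
    )

    foreach ($binding in $Bindings) {
        $certValue = $CertFields[$binding.CertField]
        if (-not $certValue) { continue }                        # field not on the certificate: try the next binding

        if (@($User[$binding.UserAttribute]) -contains $certValue) {
            return $true                                         # match found: authentication succeeds
        }
        # no match for this binding: fall through and try the next configured binding
    }
    return $false                                                # no binding produced a match: authentication fails
}
```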
+
+## Securing Azure AD configuration with multiple username bindings
+
+Each of the Azure AD attributes (userPrincipalName, onPremisesUserPrincipalName, certificateUserIds) available to bind certificates to Azure AD user accounts has a unique constraint that ensures a certificate matches only a single Azure AD user account. However, Azure AD CBA does support configuring multiple binding methods in the username binding policy, which allows an administrator to accommodate multiple certificate configurations. The combination of some methods can also potentially permit one certificate to match multiple Azure AD user accounts.
+
+>[!IMPORTANT]
>When using multiple bindings, Azure AD CBA authentication is only as secure as your low-affinity binding, because Azure AD CBA validates each of the bindings to authenticate the user. To eliminate the scenario where a single certificate can match multiple Azure AD accounts, the tenant administrator should:
>- Configure a single binding method in the username binding policy.
>- If a tenant has multiple binding methods configured and doesn't want to allow one certificate to match multiple accounts, make sure all allowable methods configured in the policy map to the same Azure AD account. That is, all user accounts should have values matching all the bindings.
>- If a tenant has multiple binding methods configured, make sure that there's no more than one low-affinity binding.
+
+For example, suppose the tenant admin has two username bindings: PrincipalName mapped to the Azure AD UPN, and SubjectKeyIdentifier (SKI) mapped to certificateUserIds. If the admin wants a certificate to be used for only a single Azure AD account, the admin must make sure that the account has the UPN that's present in the certificate, and that the SKI mapping is implemented in the same account's certificateUserIds attribute.
+
+Here's an example of potential values for UPN and certificateUserIDs:
+
+Azure AD User Principal Name = Bob.Smith@Contoso.com <br>
+certificateUserIDs = [x509:\<SKI>89b0f468c1abea65ec22f0a882b8fda6fdd6750p]<br>
+
+Having both PrincipalName and SKI values from the user's certificate mapped to the same account ensures that, while the tenant policy permits mapping PrincipalName to the Azure AD UPN and SKI values to certificateUserIds, the certificate can only match a single Azure AD account. Because of the unique constraint on both userPrincipalName and certificateUserIds, no other user account can have the same values, so no other account can successfully authenticate with the same certificate.
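As a rough sketch of how an admin might stamp the SKI value from this example onto the account with Microsoft Graph PowerShell — the endpoint version, property shape, and required permission scope are assumptions here and should be verified against the [certificateUserIds](concept-certificate-based-authentication-certificateuserids.md) article:

```powershell
# Sketch (verify against the current Graph schema before use): write the SKI-based
# certificateUserIds value onto the example account's authorizationInfo property.
Connect-MgGraph -Scopes "Directory.ReadWrite.All"   # assumed scope; your tenant may require different permissions

$body = @{
    authorizationInfo = @{
        certificateUserIds = @("X509:<SKI>89b0f468c1abea65ec22f0a882b8fda6fdd6750p")
    }
} | ConvertTo-Json -Depth 3

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/users/Bob.Smith@Contoso.com" `
    -Body $body -ContentType "application/json"
```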
## Understanding the certificate revocation process
-The certificate revocation process allows the admin to revoke a previously issued certificate from being used for future authentication. The certificate revocation will not revoke already issued tokens of the user. Follow the steps to manually revoke tokens at [Configure revocation](active-directory-certificate-based-authentication-get-started.md#step-3-configure-revocation).
+The certificate revocation process allows the admin to revoke a previously issued certificate from being used for future authentication. The certificate revocation won't revoke already issued tokens of the user. Follow the steps to manually revoke tokens at [Configure revocation](active-directory-certificate-based-authentication-get-started.md#step-3-configure-revocation).
Azure AD downloads and caches the customer's certificate revocation list (CRL) from their certificate authority to check whether certificates are revoked during the authentication of the user.
-An admin can configure the CRL distribution point during the setup process of the trusted issuers in the Azure AD tenant. Each trusted issuer should have a CRL that can be referenced via an internet-facing URL.
+An admin can configure the CRL distribution point during the setup process of the trusted issuers in the Azure AD tenant. Each trusted issuer should have a CRL that can be referenced by using an internet-facing URL.
>[!IMPORTANT]
->The maximum size of a CRL for Azure Active Directory to successfully download and cache is 20MB in Azure Global and 45MB in Azure US Government clouds, and the time required to download the CRL must not exceed 10 seconds. If Azure Active Directory can't download a CRL, certificate-based authentications using certificates issued by the corresponding CA will fail. Best practices to ensure CRL files are within size constraints are to keep certificate lifetimes to within reasonable limits and to clean up expired certificates. For more information, see [Is there a limit for CRL size?](certificate-based-authentication-faq.yml#is-there-a-limit-for-crl-size-).
+>The maximum size of a CRL for Azure AD to successfully download on an interactive sign-in and cache is 20 MB in Azure Global and 45 MB in Azure US Government clouds, and the time required to download the CRL must not exceed 10 seconds. If Azure AD can't download a CRL, certificate-based authentications using certificates issued by the corresponding CA will fail. As a best practice to keep CRL files within size limits, keep certificate lifetimes within reasonable limits and clean up expired certificates. For more information, see [Is there a limit for CRL size?](certificate-based-authentication-faq.yml#is-there-a-limit-for-crl-size-).
+
+When a user performs an interactive sign-in with a certificate, and the CRL exceeds the interactive limit for a cloud, their initial sign-in will fail with the following error:
+
+"The Certificate Revocation List (CRL) downloaded from {uri} has exceeded the maximum allowed size ({size} bytes) for CRLs in Azure Active Directory. Try again in few minutes. If the issue persists, contact your tenant administrators."
+
+After the error, Azure AD will attempt to download the CRL subject to the service-side limits (45 MB in Azure Global and 150 MB in Azure US Government clouds).
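If you want to confirm that a CA's published CRL fits within these limits before you onboard the issuer, a minimal check like the following can help. The URL is a placeholder for your CA's CDP, and 20 MB reflects the interactive limit for Azure Global noted above.

```powershell
# Quick size check for a CA's published CRL against the interactive download limit (Azure Global).
$crlUrl  = "http://crl.contoso.com/ContosoRootCA.crl"   # placeholder: your CA's CDP URL
$crlFile = Join-Path $env:TEMP "ContosoRootCA.crl"
$limitMB = 20

Invoke-WebRequest -Uri $crlUrl -OutFile $crlFile -UseBasicParsing
$sizeMB = [math]::Round((Get-Item $crlFile).Length / 1MB, 2)

if ($sizeMB -gt $limitMB) {
    Write-Warning "CRL is $sizeMB MB, which exceeds the $limitMB MB interactive download limit."
} else {
    "CRL is $sizeMB MB, within the $limitMB MB interactive download limit."
}
```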
>[!IMPORTANT]
->If the admin skips the configuration of the CRL, Azure AD will not perform any CRL checks during the certificate-based authentication of the user. This can be helpful for initial troubleshooting but should not be considered for production use.
+>If the admin skips the configuration of the CRL, Azure AD will not perform any CRL checks during the certificate-based authentication of the user. This can be helpful for initial troubleshooting, but shouldn't be considered for production use.
As of now, we don't support Online Certificate Status Protocol (OCSP) because of performance and reliability reasons. Instead of the client browser downloading the CRL at every connection, as with OCSP, Azure AD downloads the CRL once at the first sign-in and caches it, thereby improving the performance and reliability of CRL verification. We also index the cache so the search is much faster every time. Customers must publish CRLs for certificate revocation.
-**Typical flow of the CRL check:**
+The following steps are a typical flow of the CRL check:
1. Azure AD will attempt to download the CRL at the first sign-in event of any user with a certificate of the corresponding trusted issuer or certificate authority.
1. Azure AD will cache and re-use the CRL for any subsequent usage. It will honor the **Next update date** and, if available, **Next CRL Publish date** (used by Windows Server CAs) in the CRL document.
1. The user certificate-based authentication will fail if:
- - A CRL has been configured for the trusted issuer and Azure AD cannot download the CRL, due to availability, size, or latency constraints.
+ - A CRL has been configured for the trusted issuer and Azure AD can't download the CRL, due to availability, size, or latency constraints.
- The user's certificate is listed as revoked on the CRL.
- :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/user-cert.png" alt-text="Screenshot of the revoked user certificate in the CRL." :::
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/user-cert.png" alt-text="Screenshot of the revoked user certificate in the CRL." :::
- Azure AD will attempt to download a new CRL from the distribution point if the cached CRL document is expired.

>[!NOTE]
->Azure AD will only check the CRL of the issuing CA but not of the entire PKI trust chain up to the root CA. In case of a CA compromise, the administrator should remove the compromised trusted issuer from the Azure AD tenant configuration.
+>Azure AD will check the CRL of the issuing CA and other CAs in the PKI trust chain up to the root CA. We have a limit of up to five CAs from the leaf client certificate for CRL validation in the PKI chain. The limitation makes sure a bad actor can't bring down the service by uploading a PKI chain that has a huge number of CAs with large CRLs.
+If the tenant's PKI chain has more than five CAs, and in case of a CA compromise, the administrator should remove the compromised trusted issuer from the Azure AD tenant configuration.
+
>[!IMPORTANT]
>Due to the nature of CRL caching and publishing cycles, it is highly recommended in case of a certificate revocation to also revoke all sessions of the affected user in Azure AD.
-There is no way for the administrator to manually force or re-trigger the download of the CRL.
+As of now, there's no way for the administrator to manually force or re-trigger the download of the CRL.
### How to configure revocation

[!INCLUDE [Configure revocation](../../../includes/active-directory-authentication-configure-revocation.md)]
-## Understanding Sign in logs
+## Understanding Sign-in logs
Sign-in logs provide information about sign-ins and how your resources are used by your users. For more information about sign-in logs, see [Sign-in logs in Azure Active Directory](../reports-monitoring/concept-all-sign-ins.md). Let's walk through two scenarios, one where the certificate satisfies single-factor authentication and another where the certificate satisfies MFA.
-**Test scenario configuration**
- For the test scenarios, choose a user with a conditional access policy that requires MFA. Configure the user binding policy by mapping SAN Principal Name to UserPrincipalName.
For the first test scenario, configure the authentication policy where the Issue
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/single-factor.png" alt-text="Screenshot of the Authentication policy configuration showing single-factor authentication required." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/single-factor.png":::
-1. Sign in to the Azure portal as the test user by using CBA. The authentication policy is set where Issuer subject rule satisfies single-factor authentication, but the user has MFA required by the conditional access policy, so a second authentication factor is requested.
+1. Sign in to the Azure portal as the test user by using CBA. The authentication policy is set where Issuer subject rule satisfies single-factor authentication.
1. After the sign-in succeeds, click **Azure Active Directory** > **Sign-in logs**. Let's look closer at some of the entries you can find in the **Sign-in logs**.
- The first entry requests the X.509 certificate from the user. The status **Success** means that Azure AD validated that CBA is enabled in the tenant and a certificate is requested for authentication.
+ The first entry requests the X.509 certificate from the user. The status **Interrupted** means that Azure AD validated that CBA is enabled in the tenant and a certificate is requested for authentication.
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/entry-one.png" alt-text="Screenshot of single-factor authentication entry in the sign-in logs." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/entry-one.png":::
- The next entry provides more information about the authentication request and the certificate used. We can see that since the certificate satisfies only a single-factor and the user requires MFA, a second factor was requested.
-
- :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/second-factor.png" alt-text="Screenshot of second-factor sign-in details in the sign-in logs." :::
-
- The **Authentication Details** also show the second factor request.
-
- :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-details-mfa.png" alt-text="Screenshot of multifactor sign-in details in the sign-in logs." :::
+ The **Activity Details** shows this is just part of the expected login flow where the user selects a certificate.
+
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/cert-activity-details.png" alt-text="Screenshot of activity details in the sign-in logs." :::
The **Additional Details** show the certificate information.

:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/additional-details.png" alt-text="Screenshot of multifactor additional details in the sign-in logs." :::
- These additional entries show that the authentication is complete and a primary refresh token is sent back to the browser and user is given access to the resource.
+ These additional entries show that the authentication is complete, a primary refresh token is sent back to the browser, and the user is given access to the resource.
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/refresh-token.png" alt-text="Screenshot of refresh token entry in the sign-in logs." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/refresh-token.png":::
- Click **Additional Details** to MFA succeeded.
-
- :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/refresh-token-details.png" alt-text="Screenshot of refresh token authentication details in the sign-in logs." :::
### Test multifactor authentication

For the next test scenario, configure the authentication policy where the **policyOID** rule satisfies multifactor authentication.

:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/multifactor.png" alt-text="Screenshot of the Authentication policy configuration showing multifactor authentication required." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/multifactor.png":::
-1. Sign in to the Azure portal using CBA. since the policy was set to satisfy multifactor authentication, the user sign-in is successful without a second factor.
-1. Click **Azure Active Directory** > **Sign-in logs**, including and entry with **Interrupted** status.
+1. Sign in to the Azure portal using CBA. Since the policy was set to satisfy multifactor authentication, the user sign-in is successful without a second factor.
+1. Click **Azure Active Directory** > **Sign-ins**.
- You will see several entries in the Sign-in logs.
+ You'll see several entries in the Sign-in logs, including an entry with **Interrupted** status.
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/several-entries.png" alt-text="Screenshot of several entries in the sign-in logs." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/several-entries.png":::
+ The **Activity Details** shows this is just part of the expected login flow where the user selects a certificate.
+
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/mfacert-activity-details.png" alt-text="Screenshot of second-factor sign-in details in the sign-in logs." :::
+
The entry with **Interrupted** status has more diagnostic info on the **Additional Details** tab.

:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/interrupted-user-details.png" alt-text="Screenshot of interrupted attempt details in the sign-in logs." :::
For the next test scenario, configure the authentication policy where the **poli
| User certificate authentication level type | PolicyId<br>This shows policy OID was used to determine the authentication strength. |
| User certificate authentication level identifier | 1.2.3.4<br>This shows the value of the identifier policy OID from the certificate. |
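If you prefer to pull these entries programmatically instead of browsing the portal, a rough sketch with Microsoft Graph PowerShell follows. The UPN filter is a placeholder, and the certificate-related values surfaced for each entry can vary.

```powershell
# Sketch: list recent sign-in events for the test user to review the CBA entries.
# Requires the Microsoft.Graph.Reports module and an audit-log read permission.
Connect-MgGraph -Scopes "AuditLog.Read.All"

Get-MgAuditLogSignIn -Filter "userPrincipalName eq 'bob@woodgrove.com'" -Top 10 |
    Select-Object CreatedDateTime, AppDisplayName, ClientAppUsed,
        @{ Name = 'ErrorCode'; Expression = { $_.Status.ErrorCode } }
```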
-## Known issues
+## Understanding the certificate-based authentication error page
+
+Certificate-based authentication can fail for reasons such as an invalid certificate, the user selecting the wrong certificate or an expired certificate, or a Certificate Revocation List (CRL) issue. When certificate validation fails, the user sees this error:
-- The Sign-in log shows the User ID instead of the username in one of the log entries.
- :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/known-issue.png" alt-text="Screenshot of username in the sign-in logs." :::
+If CBA fails on a browser, even if the failure is because you cancel the certificate picker, you need to close the browser session and open a new session to try CBA again. A new session is required because browsers cache the certificate. When CBA is retried, the browser sends the cached certificate during the TLS challenge, which causes the sign-in failure and validation error.
-- The **Additional Details** tab shows **User certificate subject name** as the attribute name but it is actually "User certificate binding identifier". It is the value of the certificate field that username binding is configured to use.
+Click **More details** to get logging information that can be sent to an administrator, who in turn can get more information from the Sign-in logs.
-- There is a double prompt for iOS because iOS only supports pushing certificates to a device storage. When an organization pushes user certificates to an iOS device through Mobile Device Management (MDM) or when a user accesses first-party or native apps, there is no access to device storage. Only Safari can access device storage.
- When an iOS client sees a client TLS challenge and the user clicks **Sign in with certificate**, iOS client knows it cannot handle it and sends a completely new authorization request using the Safari browser. The user clicks **Sign in with certificate** again, at which point Safari which has access to certificates for authentication in device storage. This requires users to click **Sign in with certificate** twice, once in appΓÇÖs WKWebView and once in SafariΓÇÖs System WebView.
+Click **Other ways to sign in** to try other authentication methods available to the user.
+
+>[!NOTE]
+>If you retry CBA in a browser, it'll keep failing due to the browser caching issue. Users need to open a new browser session and sign in again.
++
+## Certificate-based authentication in MostRecentlyUsed (MRU) methods
+
+Once a user authenticates successfully by using CBA, the user's MostRecentlyUsed (MRU) authentication method is set to CBA. The next time the user enters their UPN and clicks **Next**, the user is taken to the CBA method directly, and doesn't need to select **Use the certificate or smart card**.
+
+To reset the MRU method, the user needs to cancel the certificate picker, click **Other ways to sign in**, and select another method available to the user and authenticate successfully.
- We are aware of the UX experience issue and are working to fix this on iOS and to have a seamless UX experience.
+## External identity support
+
+An external identity can't perform multifactor authentication (MFA) to the resource tenant with Azure AD CBA. Instead, have the user perform MFA by using CBA in the home tenant, and set up cross-tenant settings for the resource tenant to trust MFA from the home tenant.
+
+For more information about how to enable **Trust multi-factor authentication from Azure AD tenants**, see [Configure B2B collaboration cross-tenant access](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#to-change-inbound-trust-settings-for-mfa-and-device-claims).
+
+## Known issues
+
+- On iOS clients, there's a double prompt issue as part of the Azure AD CBA flow, where the user needs to click **Use the certificate or smart card** twice. We're aware of this user experience issue and are working to fix it so the experience is seamless.
## Next steps

- [Overview of Azure AD CBA](concept-certificate-based-authentication.md)
-- [Limitations with Azure AD CBA](concept-certificate-based-authentication-limitations.md)
- [How to configure Azure AD CBA](how-to-certificate-based-authentication.md)
-- [Windows SmartCard logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)
-- [Azure AD CBA on mobile devices (Android and iOS)](concept-certificate-based-authentication-mobile.md)
+- [Azure AD CBA on iOS devices](concept-certificate-based-authentication-mobile-ios.md)
+- [Azure AD CBA on Android devices](concept-certificate-based-authentication-mobile-android.md)
+- [Windows smart card logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)
+- [Certificate user IDs](concept-certificate-based-authentication-certificateuserids.md)
+- [How to migrate federated users](concept-certificate-based-authentication-migration.md)
- [FAQ](certificate-based-authentication-faq.yml)
- [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)
active-directory Concept Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication.md
Title: Overview of Azure AD certificate-based authentication (Preview) - Azure Active Directory
+ Title: Overview of Azure AD certificate-based authentication - Azure Active Directory
description: Learn about Azure AD certificate-based authentication without federation Previously updated : 06/07/2022 Last updated : 10/05/2022 -+
-# Overview of Azure AD certificate-based authentication (Preview)
+# Overview of Azure AD certificate-based authentication
-Azure AD certificate-based authentication (CBA) enables customers to allow or require users to authenticate with X.509 certificates against their Azure Active Directory (Azure AD) for applications and browser sign-in.
-This feature enables customers to adopt a phishing resistant authentication and authenticate with an X.509 certificate against their Enterprise Public Key Infrastructure (PKI).
-
->[!NOTE]
->Azure AD certificate-based authentication is currently in public preview. Some features might not be supported or have limited capabilities. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+Azure AD certificate-based authentication (CBA) enables customers to allow or require users to authenticate directly with X.509 certificates against their Azure Active Directory (Azure AD) for applications and browser sign-in.
+This feature enables customers to adopt phishing-resistant authentication and authenticate with an X.509 certificate against their Public Key Infrastructure (PKI).
## What is Azure AD CBA?
-Before this feature brought cloud-managed support for CBA to Azure AD, customers had to implement federated certificate-based authentication. Federated CBA requires deploying Active Directory Federation Services (AD FS) to be able to authenticate using X.509 certificates against Azure AD. With Azure AD certificate-based authentication, customers can authenticate directly against Azure AD. Azure AD CBA eliminates the need for federated AD FS, which helps simplify customer environments and reduce costs.
+Before cloud-managed support for CBA in Azure AD, customers had to implement federated certificate-based authentication, which requires deploying Active Directory Federation Services (AD FS) to be able to authenticate by using X.509 certificates against Azure AD. With Azure AD certificate-based authentication, customers can authenticate directly against Azure AD and eliminate the need for federated AD FS, which simplifies customer environments and reduces costs.
The following images show how Azure AD CBA simplifies the customer environment by eliminating federated AD FS.
The following images show how Azure AD CBA simplifies the customer environment b
| Benefits | Description |
|---|---|
| Great user experience |- Users who need certificate-based authentication can now directly authenticate against Azure AD and not have to invest in federated AD FS.<br>- Portal UI enables users to easily configure how to map certificate fields to a user object attribute to look up the user in the tenant ([certificate username bindings](concept-certificate-based-authentication-technical-deep-dive.md#understanding-the-username-binding-policy))<br>- Portal UI to [configure authentication policies](concept-certificate-based-authentication-technical-deep-dive.md#understanding-the-authentication-binding-policy) to help determine which certificates are single-factor versus multifactor. |
-| Easy to deploy and administer |- No need for complex on-premises deployments or network configuration.<br>- Directly authenticate against Azure AD. <br>- No management overhead or cost. |
-| Secure |- On-premises passwords need not be stored in the cloud in any form.<br>- Protects your user accounts by working seamlessly with Azure AD Conditional Access policies, including multifactor authentication (MFA) and blocking legacy authentication.<br>- Strong authentication support where users can define authentication policies through the certificate fields like issuer or policy OID (object identifiers) to determine which certificates qualify as single-factor versus multifactor. |
+| Easy to deploy and administer |- Azure AD CBA is a free feature, and you don't need any paid editions of Azure AD to use it. <br>- No need for complex on-premises deployments or network configuration.<br>- Directly authenticate against Azure AD. |
+| Secure |- On-premises passwords don't need to be stored in the cloud in any form.<br>- Protects your user accounts by working seamlessly with Azure AD Conditional Access policies, including unphishable [multifactor authentication](concept-mfa-howitworks.md) (MFA which requires [licensed edition](concept-mfa-licensing.md)) and blocking legacy authentication.<br>- Strong authentication support where users can define authentication policies through the certificate fields, such as issuer or policy OID (object identifiers), to determine which certificates qualify as single-factor versus multifactor.<br>- The feature works seamlessly with [Conditional Access features](../conditional-access/overview.md) and authentication strength capability to enforce MFA to help secure your users. |
++
+## Supported scenarios
+
+The following scenarios are supported:
+
+- User sign-ins to web browser-based applications on all platforms.
+- User sign-ins to Office mobile apps, including Outlook, OneDrive, and so on.
+- User sign-ins on mobile native browsers.
+- Support for granular authentication rules for multifactor authentication by using the certificate issuer **Subject** and **policy OIDs**.
+- Configuring certificate-to-user account bindings by using any of the certificate fields:
+ - Subject Alternate Name (SAN) PrincipalName and SAN RFC822Name
+ - Subject Key Identifier (SKI) and SHA1PublicKey
+- Configuring certificate-to-user account bindings by using any of the user object attributes:
+ - User Principal Name
+ - onPremisesUserPrincipalName
+ - CertificateUserIds
-## Feature highlights
+## Unsupported scenarios
-- Facilitates onboarding to Azure quickly without being delayed by additional on-premises infrastructure to support certificate-based authentication in public and United States Government clouds.
-- Provides support for unphishable multifactor authentication.
-- Supports user sign-in against cloud Azure AD using X.509 certificates into all web browser-based applications and into Microsoft Office client applications that use modern authentication.
-- The feature works seamlessly with Conditional Access features such as MFA to help secure your users.
-- It's a free feature, and you don't need any paid editions of Azure AD to use it.
-- Eliminates the need for federated AD FS and reduces the cost and on-premises footprint.
+The following scenarios aren't supported:
+
+- Certificate Authority hints aren't supported, so the list of certificates that appears for users in the certificate picker UI isn't scoped.
+- Only one CRL Distribution Point (CDP) for a trusted CA is supported.
+- The CDP can be only HTTP URLs. We don't support Online Certificate Status Protocol (OCSP), or Lightweight Directory Access Protocol (LDAP) URLs.
+- Configuring other certificate-to-user account bindings, such as using the **Subject**, **Subject + Issuer**, or **Issuer + Serial Number**, isn't available in this release.
+- Password as an authentication method can't be disabled, and the option to sign in by using a password is displayed even when the Azure AD CBA method is available to the user.
+
+## Out of Scope
+
+The following scenarios are out of scope for Azure AD CBA:
+
+- Public Key Infrastructure for creating client certificates. Customers need to configure their own Public Key Infrastructure (PKI) and provision certificates to their users and devices.
## Next steps

- [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md)
-- [Limitations with CBA](concept-certificate-based-authentication-limitations.md)
-- [How to configure CBA](how-to-certificate-based-authentication.md)
-- [Windows SmartCard logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)
-- [Azure AD CBA on mobile devices (Android and iOS)](concept-certificate-based-authentication-mobile.md)
+- [How to configure Azure AD CBA](how-to-certificate-based-authentication.md)
+- [Azure AD CBA on iOS devices](concept-certificate-based-authentication-mobile-ios.md)
+- [Azure AD CBA on Android devices](concept-certificate-based-authentication-mobile-android.md)
+- [Windows smart card logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)
+- [Certificate user IDs](concept-certificate-based-authentication-certificateuserids.md)
+- [How to migrate federated users](concept-certificate-based-authentication-migration.md)
- [FAQ](certificate-based-authentication-faq.yml)
-- [Troubleshoot CBA](troubleshoot-certificate-based-authentication.md)
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
Title: How to configure Azure AD certificate-based authentication without federation (Preview) - Azure Active Directory
+ Title: How to configure Azure AD certificate-based authentication - Azure Active Directory
description: Topic that shows how to configure Azure AD certificate-based authentication in Azure Active Directory Previously updated : 06/15/2022 Last updated : 10/10/2022 ---+++
-# How to configure Azure AD certificate-based authentication (Preview)
+# How to configure Azure AD certificate-based authentication
-Azure Active Directory (Azure AD) certificate-based authentication (CBA) enables customers to configure their Azure AD tenants to allow or require users to authenticate with X.509 certificates verified against their Enterprise Public Key Infrastructure (PKI) for app and browser sign-in. This feature enables customers to adopt phishing resistant authentication by using an x.509 certificate.
+Azure Active Directory (Azure AD) certificate-based authentication (CBA) enables organizations to configure their Azure AD tenants to allow or require users to authenticate with X.509 certificates created by their Enterprise Public Key Infrastructure (PKI) for app and browser sign-in. This feature enables organizations to adopt phishing-resistant, modern, passwordless authentication by using an X.509 certificate.
-During sign-in, users will see an option to authenticate with a certificate instead of entering a password.
-If multiple matching certificates are present on the device, the user can pick which one to use. The certificate is validated, the binding to the user account is checked, and if successful, they are signed in.
+During sign-in, users will also see an option to authenticate with a certificate instead of entering a password.
+If multiple matching certificates are present on the device, the user can pick which one to use. The certificate is validated against the user account and, if validation succeeds, the user signs in.
<!Clarify plans that are covered >
-This topic covers how to configure and use certificate-based authentication for tenants in Office 365 Enterprise and US Government plans. You should already have a [public key infrastructure (PKI)](https://aka.ms/securingpki) configured.
-
-Follow these instructions to configure and use Azure AD CBA.
-
->[!NOTE]
->Azure AD certificate-based authentication is currently in public preview. Some features might not be supported or have limited capabilities. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+Follow these instructions to configure and use Azure AD CBA for tenants in Office 365 Enterprise and US Government plans. You should already have a [public key infrastructure (PKI)](https://aka.ms/securingpki) configured.
## Prerequisites
-Make sure that the following prerequisites are in place.
+Make sure that the following prerequisites are in place:
-- Configure at least one certification authority (CA) and any intermediate certification authorities in Azure Active Directory.
+- Configure at least one certification authority (CA) and any intermediate CAs in Azure AD.
- The user must have access to a user certificate (issued from a trusted Public Key Infrastructure configured on the tenant) intended for client authentication to authenticate against Azure AD.
+- Each CA should have a certificate revocation list (CRL) that can be referenced from internet-facing URLs. If the trusted CA doesn't have a CRL configured, Azure AD won't perform any CRL checking, revocation of user certificates won't work, and authentication won't be blocked.
>[!IMPORTANT]
->Each CA should have a certificate revocation list (CRL) that can be referenced from internet-facing URLs. If the trusted CA does not have a CRL configured, Azure AD will not perform any CRL checking, revocation of user certificates will not work, and authentication will not be blocked.
+>Make sure the PKI is secure and can't be easily compromised. In the event of a compromise, the attacker can create and sign client certificates and compromise any user in the tenant, both users who are synchronized from on-premises and cloud-only users. However, a strong key protection strategy, along with other physical and logical controls, such as HSM activation cards or tokens for the secure storage of artifacts, can provide defense-in-depth to prevent external attackers or insider threats from compromising the integrity of the PKI. For more information, see [Securing PKI](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn786443(v=ws.11)).
->[!IMPORTANT]
->Make sure the PKI is secure and cannot be easily compromised. In the event of a compromise, the attacker can create and sign client certificates and compromise any user in the tenant, both synced and cloud-only users. However, a strong key protection strategy, along with other physical and logical controls such as HSM activation cards or tokens for the secure storage of artifacts, can provide defense-in-depth to prevent external attackers or insider threats from compromising the integrity of the PKI. For more information, see [Securing PKI](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn786443(v=ws.11)).
+>[!NOTE]
+>When evaluating a PKI, it's important to review certificate issuance policies and enforcement. As mentioned, adding certificate authorities (CAs) to the Azure AD configuration allows certificates issued by those CAs to authenticate any user in Azure AD. For this reason, it's important to consider how and when the CAs are allowed to issue certificates, and how they implement reusable identifiers. Where administrators need to ensure that only a specific certificate can be used to authenticate a user, they should exclusively use high-affinity bindings to achieve that higher level of assurance. For more information, see [high-affinity bindings](concept-certificate-based-authentication-technical-deep-dive.md#understanding-the-username-binding-policy).
## Steps to configure and test Azure AD CBA
-There are some configuration steps to complete before enabling Azure AD CBA. First, an admin must configure the trusted CAs that issue user certificates. As seen in the following diagram, we use role-based access control to make sure only least-privileged administrators make changes. Configuring the certification authority is done only by the [Privileged Authentication Administrator](../roles/permissions-reference.md#privileged-authentication-administrator) role.
+There are a few configuration steps to complete before you enable Azure AD CBA. First, an admin must configure the trusted CAs that issue user certificates. As seen in the following diagram, we use role-based access control to make sure only least-privileged administrators are needed to make changes. Only the [Privileged Authentication Administrator](../roles/permissions-reference.md#privileged-authentication-administrator) role can configure the CA.
-Optionally, you can also configure authentication bindings to map certificates to single-factor or multifactor and configure username bindings to map certificate field to a user object attribute. Configuring user-related settings can be done by [Authentication Policy Administrators](../roles/permissions-reference.md#authentication-policy-administrator). Once all the configurations are complete, enable Azure AD CBA on the tenant.
+Optionally, you can also configure authentication bindings to map certificates to single-factor or multifactor authentication, and configure username bindings to map the certificate field to an attribute of the user object. [Authentication Policy Administrators](../roles/permissions-reference.md#authentication-policy-administrator) can configure user-related settings. Once all the configurations are complete, enable Azure AD CBA on the tenant.
:::image type="content" border="false" source="./media/how-to-certificate-based-authentication/steps.png" alt-text="Diagram of the steps required to enable Azure Active Directory certificate-based authentication."::: ## Step 1: Configure the certification authorities
+You can configure CAs by using the Azure portal or PowerShell.
### Configure certification authorities using the Azure portal

To enable the certificate-based authentication and configure user bindings in the Azure portal, complete the following steps:

1. Sign in to the Azure portal as a Global Administrator.
-1. Select Azure Active Directory, then choose Security from the menu on the left-hand side.
+1. Click **Azure Active Directory** > **Security**.
:::image type="content" border="true" source="./media/how-to-certificate-based-authentication/certificate-authorities.png" alt-text="Screenshot of certification authorities."::: 1. To upload a CA, click **Upload**: 1. Select the CA file. 1. Select **Yes** if the CA is a root certificate, otherwise select **No**.
- 1. Set the http internet-facing URL for the certification authority's base CRL that contains all revoked certificates. This should be set or authentication with revoked certificates will not fail.
+ 1. Set the http internet-facing URL for the CA base CRL that contains all revoked certificates. If the URL isn't set, authentication with revoked certificates won't fail.
1. Set **Delta CRL URL** - the http internet-facing URL for the CRL that contains all revoked certificates since the last base CRL was published.
1. Click **Add**.
To enable the certificate-based authentication and configure user bindings in th
### Configure certification authorities using PowerShell
-Only one CRL Distribution Point (CDP) for a trusted CA is supported. The CDP can only be HTTP URLs. Online Certificate Status Protocol (OCSP) or Lightweight Directory Access Protocol (LDAP) URLs are not supported.
+Only one CRL Distribution Point (CDP) for a trusted CA is supported. The CDP can only be HTTP URLs. Online Certificate Status Protocol (OCSP) or Lightweight Directory Access Protocol (LDAP) URLs aren't supported.
[!INCLUDE [Configure certification authorities](../../../includes/active-directory-authentication-configure-certificate-authorities.md)]
Only one CRL Distribution Point (CDP) for a trusted CA is supported. The CDP can
[!INCLUDE [New-AzureAD](../../../includes/active-directory-authentication-new-trusted-azuread.md)]

**AuthorityType**
-- Use 0 to indicate that this is a Root certification authority
-- Use 1 to indicate that this is an Intermediate or Issuing certification authority
+- Use 0 to indicate a Root certification authority
+- Use 1 to indicate an Intermediate or Issuing certification authority
**crlDistributionPoint**
-You can validate the crlDistributionPoint value you provide in the above PowerShell example are valid for the certification authority being added by downloading the CRL and comparing the CA certificate and the CRL Information.
+You can download the CRL and compare the CA certificate and the CRL information to validate that the crlDistributionPoint value in the preceding PowerShell example is valid for the CA you want to add.
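One way to do that comparison is sketched below; the CRL URL and certificate path are placeholders for your own CA's values.

```powershell
# Download the CRL referenced by the CA certificate's CDP extension (placeholder URL),
# then dump both files with certutil so the issuer and update dates can be compared side by side.
$crlUrl  = "http://crl.contoso.com/ContosoRootCA.crl"
$crlFile = Join-Path $env:TEMP "ContosoRootCA.crl"

Invoke-WebRequest -Uri $crlUrl -OutFile $crlFile -UseBasicParsing
certutil -dump $crlFile                          # shows the CRL's issuer and update dates
certutil -dump "C:\certs\ContosoRootCA.cer"      # shows the CA certificate's subject and issuer
```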
-The below table and graphic indicate how to map information from the CA Certificate to the attributes of the downloaded CRL.
+The following table and graphic show how to map information from the CA certificate to the attributes of the downloaded CRL.
| CA Certificate Info |= |Downloaded CRL Info|
|-|:-:|-|
The below table and graphic indicate how to map information from the CA Certific
:::image type="content" border="false" source="./media/how-to-certificate-based-authentication/certificate-crl-compare.png" alt-text="Compare CA Certificate with CRL Information."::: >[!TIP]
->The value for crlDistributionPoint in the above is the http location for the CAΓÇÖs Certificate Revocation List (CRL). This can be found in a few places.
+>The value for crlDistributionPoint in the preceding example is the http location for the CA's Certificate Revocation List (CRL). This can be found in a few places.
>
->- In the CRL Distribution Point (CDP) attribute of a certificate issued from the CA
+>- In the CRL Distribution Point (CDP) attribute of a certificate issued from the CA.
>
->If Issuing CA is Windows Server
+>If Issuing CA is Windows Server:
> >- On the [Properties](/windows-server/networking/core-network-guide/cncg/server-certs/configure-the-cdp-and-aia-extensions-on-ca1#to-configure-the-cdp-and-aia-extensions-on-ca1)
- of the CA in the certification authority Microsoft Management Console (MMC)
->- On the CA running [certutil](/windows-server/administration/windows-commands/certutil#-cainfo) -cainfo cdp
+ of the CA in the certification authority Microsoft Management Console (MMC).
+>- On the CA by running `certutil -cainfo cdp`. For more information, see [certutil](/windows-server/administration/windows-commands/certutil#-cainfo).
-For additional details see: [Understanding the certificate revocation process](./concept-certificate-based-authentication-technical-deep-dive.md#understanding-the-certificate-revocation-process).
+For more information, see [Understanding the certificate revocation process](./concept-certificate-based-authentication-technical-deep-dive.md#understanding-the-certificate-revocation-process).
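Putting the pieces together, here's a minimal sketch of registering a root CA and its CRL distribution point with the AzureAD PowerShell module. The file path and URL are placeholders, and the cmdlet and property names should be checked against the PowerShell include referenced earlier in this section.

```powershell
# Sketch: register a root CA and its CRL distribution point with Azure AD.
Connect-AzureAD

$certBytes = [System.IO.File]::ReadAllBytes("C:\certs\ContosoRootCA.cer")   # placeholder path

$ca = New-Object Microsoft.Open.AzureAD.Model.CertificateAuthorityInformation
$ca.AuthorityType        = 0                                            # 0 = root, 1 = intermediate/issuing CA
$ca.TrustedCertificate   = $certBytes
$ca.crlDistributionPoint = "http://crl.contoso.com/ContosoRootCA.crl"   # internet-facing base CRL URL

New-AzureADTrustedCertificateAuthority -CertificateAuthorityInformation $ca
```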
### Remove
For additional details see: [Understanding the certificate revocation process](.
## Step 2: Enable CBA on the tenant
-To enable the certificate-based authentication in the Azure Portal, complete the following steps:
+To enable the certificate-based authentication in the Azure portal, complete the following steps:
1. Sign in to the [Azure portal](https://portal.azure.com/) as an Authentication Policy Administrator.
1. Select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
Once certificate-based authentication is enabled on the tenant, all users in the
The authentication binding policy helps determine the strength of authentication as either single-factor or multifactor. An admin can change the default value from single-factor to multifactor and configure custom policy rules by mapping to issuer Subject or policy OID fields in the certificate.
-To enable the certificate-based authentication and configure user bindings in the Azure portal, complete the following steps:
+To enable Azure AD CBA and configure user bindings in the Azure portal, complete the following steps:
1. Sign in to the [Azure portal](https://portal.azure.com) as an Authentication Policy Administrator.
1. Select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
To enable the certificate-based authentication and configure user bindings in th
:::image type="content" border="true" source="./media/how-to-certificate-based-authentication/multifactor-issuer.png" alt-text="Screenshot of multifactor authentication policy."::: - To create a rule by Policy OID, click **Policy OID**. 1. Enter a value for **Policy OID**.
To enable the certificate-based authentication and configure user bindings in th
## Step 4: Configure username binding policy
-The username binding policy helps determine the user in the tenant. By default, we map Principal Name in the certificate to onPremisesUserPrincipalName in the user object to determine the user.
+The username binding policy helps validate the certificate of the user. By default, we map Principal Name in the certificate to UserPrincipalName in the user object to determine the user. An admin can override the default and create a custom mapping.
-An admin can override the default and create a custom mapping. Currently, we support two certificate fields, SAN (Subject Alternate Name) Principal Name and SAN RFC822Name, to map against the user object attribute userPrincipalName and onPremisesUserPrincipalName.
+To determine how to configure username binding, see [How username binding works](concept-certificate-based-authentication-technical-deep-dive.md#understanding-the-username-binding-policy).
>[!IMPORTANT]
->If a username binding policy uses synced attributes, such as onPremisesUserPrincipalName attribute of the user object, be aware that any user with administrative access to the Azure AD Connect server can change the sync attribute mapping, and in turn change the value of the synced attribute to their needs. The user does not need to be a cloud admin.
+>If a username binding policy uses synchronized attributes, such as onPremisesUserPrincipalName attribute of the user object, be aware that any user with Active Directory Administrators privileges can make changes that impact the onPremisesUserPrincipalName value in Azure AD for any synchronized accounts, including users with delegated administrative privilege over synchronized user accounts or administrative rights over the Azure AD Connect Servers.
-1. Create the username binding by selecting one of the X.509 certificate fields to bind with one of the user attributes. The username binding order represents the priority level of the binding. The first one has the highest priority and so on.
+1. Create the username binding by selecting one of the X.509 certificate fields to bind with one of the user attributes. The username binding order represents the priority level of the binding. The first one has the highest priority, and so on.
:::image type="content" border="true" source="./media/how-to-certificate-based-authentication/username-binding-policy.png" alt-text="Screenshot of a username binding policy.":::
- If the specified X.509 certificate field is found on the certificate, but Azure AD doesnΓÇÖt find a user object using that value, the authentication fails. Azure AD doesnΓÇÖt try the next binding in the list.
+ If the specified X.509 certificate field is found on the certificate, but Azure AD doesn't find a user object using that value, Azure AD falls back and tries the next binding in the list. If no match is found for any of the configured bindings, the authentication fails.
- The next priority is attempted only if the X.509 certificate field is not in the certificate.
1. Click **Save** to save the changes.
-Currently supported set of username bindings:
-- SAN Principal Name > userPrincipalName
-- SAN Principal Name > onPremisesUserPrincipalName
-- SAN RFC822Name > userPrincipalName
-- SAN RFC822Name > onPremisesUserPrincipalName
->[!NOTE]
->If the RFC822Name binding is evaluated and if no RFC822Name is specified in the certificate Subject Alternative Name, we will fall back on legacy Subject Name "E=user@contoso.com" if no RFC822Name is specified in the certificate we will fall back on legacy Subject Name E=user@contoso.com.
The final configuration will look like this image:

:::image type="content" border="true" source="./media/how-to-certificate-based-authentication/final.png" alt-text="Screenshot of the final configuration.":::
As a first configuration test, you should try to sign in to the [MyApps portal](
1. Click **Next**.
- :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/certificate.png" alt-text="Screenshot of sign in with certificate.":::
+ :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/certificate.png" alt-text="Screenshot of sign-in with certificate.":::
- If you have enabled other authentication methods like Phone sign-in or FIDO2, users may see a different sign-in screen.
+ If you enabled other authentication methods like Phone sign-in or FIDO2, users may see a different sign-in screen.
- :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/alternative.png" alt-text="Screenshot of the alternative sign in.":::
+ :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/alternative.png" alt-text="Screenshot of the alternative sign-in.":::
1. Select **Sign in with a certificate**. 1. Pick the correct user certificate in the client certificate picker UI and click **OK**.+ :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/picker.png" alt-text="Screenshot of the certificate picker UI."::: 1. Users should be signed into [MyApps portal](https://myapps.microsoft.com/).
As a first configuration test, you should try to sign in to the [MyApps portal](
If your sign-in is successful, then you know that: - The user certificate has been provisioned into your test device.-- Azure Active Directory is configured correctly with trusted CAs.
+- Azure AD is configured correctly with trusted CAs.
- Username binding is configured correctly, and the user is found and authenticated. ### Testing custom authentication binding rules
-Let's walk through a scenario where we will validate strong authentication by creating two authentication policy rules, one via issuer subject satisfying single factor and one via policy OID satisfying multi factor.
+Let's walk through a scenario where we validate strong authentication. We'll create two authentication policy rules, one by using issuer subject to satisfy single-factor authentication, and another by using policy OID to satisfy multifactor authentication.
-1. Create an issuer Subject rule with protection level as single factor authentication and value set to your CAs Subject value. For example:
+1. Create an issuer Subject rule with the protection level set to single-factor authentication and the value set to your CA's Subject value. For example:
- `CN=ContosoCA,DC=Contoso,DC=org`
+ `CN = WoodgroveCA`
-1. Create a policy OID rule, with protection level as multi-factor authentication and value set to one of the policy OIDs in your certificate. For example, 1.2.3.4.
+1. Create a policy OID rule, with the protection level set to multifactor authentication and the value set to one of the policy OIDs in your certificate. For example, 1.2.3.4.
:::image type="content" border="true" source="./media/how-to-certificate-based-authentication/policy-oid-rule.png" alt-text="Screenshot of the Policy OID rule.":::
-1. Create a conditional access policy for the user to require multi-factor authentication by following steps at [Conditional Access - Require MFA](../conditional-access/howto-conditional-access-policy-all-users-mfa.md#create-a-conditional-access-policy).
+1. Create a Conditional Access policy for the user to require multifactor authentication by following steps at [Conditional Access - Require MFA](../conditional-access/howto-conditional-access-policy-all-users-mfa.md#create-a-conditional-access-policy).
1. Navigate to [MyApps portal](https://myapps.microsoft.com/). Enter your UPN and click **Next**. :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/name.png" alt-text="Screenshot of the User Principal Name."::: 1. Select **Sign in with a certificate**.
- :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/certificate.png" alt-text="Screenshot of sign in with certificate.":::
+ :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/certificate.png" alt-text="Screenshot of sign-in with certificate.":::
- If you have enabled other authentication methods like Phone sign-in or FIDO2, users may see a different sign-in screen.
+ If you enabled other authentication methods like Phone sign-in or FIDO2, users may see a different sign-in screen.
- :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/alternative.png" alt-text="Screenshot of the alternative sign in.":::
+ :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/alternative.png" alt-text="Screenshot of the alternative sign-in.":::
1. Select the client certificate and click **Certificate Information**.
Let's walk through a scenario where we will validate strong authentication by cr
1. Select the client certificate and click **OK**.
-1. The policy OID in the certificate matches the configured value of **1.2.3.4** and it will satisfy multifactor authentication. Similarly, the issuer in the certificate matches the configured value of **CN=ContosoCA,DC=Contoso,DC=org** and it will satisfy single-factor authentication.
+1. The policy OID in the certificate matches the configured value of **1.2.3.4** and it will satisfy multifactor authentication. Similarly, the issuer in the certificate matches the configured value of **CN=WoodgroveCA** and it will satisfy single-factor authentication.
1. Because the policy OID rule takes precedence over the issuer rule, the certificate will satisfy multifactor authentication.
-1. The conditional access policy for the user requires MFA and the certificate satisfies multifactor, so the user will be authenticated into the application.
+1. The Conditional Access policy for the user requires MFA and the certificate satisfies multifactor, so the user will be authenticated into the application.
## Enable Azure AD CBA using Microsoft Graph API
-To enable the certificate-based authentication and configure username bindings using Graph API, complete the following steps.
+To enable CBA and configure username bindings using Graph API, complete the following steps.
>[!NOTE] >The following steps use Graph Explorer, which isn't available in the US Government cloud. US Government cloud tenants can use Postman to test the Microsoft Graph queries.
To enable the certificate-based authentication and configure username bindings u
1. GET all authentication methods: ```http
- GET https://graph.microsoft.com/beta/policies/authenticationmethodspolicy
+ GET https://graph.microsoft.com/v1.0/policies/authenticationmethodspolicy
``` 1. GET the configuration for the x509Certificate authentication method: ```http
- GET https://graph.microsoft.com/beta/policies/authenticationmethodspolicy/authenticationMethodConfigurations/X509Certificate
+ GET https://graph.microsoft.com/v1.0/policies/authenticationmethodspolicy/authenticationMethodConfigurations/X509Certificate
``` 1. By default, the x509Certificate authentication method is disabled. To allow users to sign in with a certificate, you must enable the authentication method and configure the authentication and username binding policies through an update operation. To update policy, run a PATCH request.
To enable the certificate-based authentication and configure username bindings u
```http
- PATCH https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/x509Certificate
+ PATCH https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/x509Certificate
Content-Type: application/json {
To enable the certificate-based authentication and configure username bindings u
"x509CertificateField": "RFC822Name", "userProperty": "userPrincipalName", "priority": 2
+ },
+ {
+ "x509CertificateField": "PrincipalName",
+ "userProperty": "certificateUserIds",
+ "priority": 3
} ], "authenticationModeConfiguration": {
To enable the certificate-based authentication and configure username bindings u
"rules": [ { "x509CertificateRuleType": "issuerSubject",
- "identifier": "CN=ContosoCA,DC=Contoso,DC=org ",
+ "identifier": "CN=WoodgroveCA ",
"x509CertificateAuthenticationMode": "x509CertificateMultiFactor" }, {
To enable the certificate-based authentication and configure username bindings u
] }
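Because the change details above show the request body only in fragments, the following is a consolidated sketch of what a complete PATCH body can look like. The issuer subject, policy OID, bindings, and target group are illustrative values based on the examples in this article; adjust them to match your own configuration:

```http
PATCH https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/x509Certificate
Content-Type: application/json

{
    "@odata.type": "#microsoft.graph.x509CertificateAuthenticationMethodConfiguration",
    "id": "X509Certificate",
    "state": "enabled",
    "certificateUserBindings": [
        { "x509CertificateField": "PrincipalName", "userProperty": "userPrincipalName", "priority": 1 },
        { "x509CertificateField": "RFC822Name", "userProperty": "userPrincipalName", "priority": 2 },
        { "x509CertificateField": "PrincipalName", "userProperty": "certificateUserIds", "priority": 3 }
    ],
    "authenticationModeConfiguration": {
        "x509CertificateAuthenticationDefaultMode": "x509CertificateSingleFactor",
        "rules": [
            { "x509CertificateRuleType": "issuerSubject", "identifier": "CN=WoodgroveCA", "x509CertificateAuthenticationMode": "x509CertificateMultiFactor" },
            { "x509CertificateRuleType": "policyOID", "identifier": "1.2.3.4", "x509CertificateAuthenticationMode": "x509CertificateMultiFactor" }
        ]
    },
    "includeTargets": [
        { "targetType": "group", "id": "all_users", "isRegistrationRequired": false }
    ]
}
```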
-1. You will get a `204 No content` response code. Re-run the GET request to make sure the policies are updated correctly.
+1. You'll get a `204 No content` response code. Re-run the GET request to make sure the policies are updated correctly.
1. Test the configuration by signing in with a certificate that satisfies the policy. ## Next steps
To enable the certificate-based authentication and configure username bindings u
- [Limitations with Azure AD CBA](concept-certificate-based-authentication-limitations.md) - [Windows SmartCard logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md) - [Azure AD CBA on mobile devices (Android and iOS)](concept-certificate-based-authentication-mobile.md)
+- [Certificate user IDs](concept-certificate-based-authentication-certificateuserids.md)
+- [How to migrate federated users](concept-certificate-based-authentication-migration.md)
- [FAQ](certificate-based-authentication-faq.yml)-- [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md
description: Learn how to use additional context in MFA notifications
Previously updated : 09/22/2022 Last updated : 10/07/2022
The additional context can be combined with [number matching](how-to-mfa-number-
### Policy schema changes
-You can enable and disable application name and geographic location separately. Under featureSettings, you can use the following name mapping for each features:
+You can enable and disable application name and geographic location separately. Under featureSettings, you can use the following name mapping for each feature:
- Application name: displayAppInformationRequiredState - Geographic location: displayLocationInformationRequiredState
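As a rough sketch, an enabled configuration for both features looks like the following `featureSettings` fragment of the Microsoft Authenticator policy (the include and exclude group IDs are placeholders; the complete policy examples appear later in this article):

```json
{
    "featureSettings": {
        "displayAppInformationRequiredState": {
            "state": "enabled",
            "includeTarget": { "targetType": "group", "id": "all_users" },
            "excludeTarget": { "targetType": "group", "id": "00000000-0000-0000-0000-000000000000" }
        },
        "displayLocationInformationRequiredState": {
            "state": "enabled",
            "includeTarget": { "targetType": "group", "id": "all_users" },
            "excludeTarget": { "targetType": "group", "id": "00000000-0000-0000-0000-000000000000" }
        }
    }
}
```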
You can enable and disable application name and geographic location separately.
Identify your single target group for each of the features. Then use the following API endpoint to change the displayAppInformationRequiredState or displayLocationInformationRequiredState properties under featureSettings to **enabled** and include or exclude the groups you want: ```http
-https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
``` #### MicrosoftAuthenticatorAuthenticationMethodConfiguration properties
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
//Change the Query to PATCH and Run query {
- "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
```json {
- "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
To verify, run GET again and verify the ObjectID: ```http
-GET https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
``` #### Example of how to disable application name and only enable geographic location
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
```json {
- "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
In **featureSettings**, change the states of **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** from **default** to **enabled.** Inside the **includeTarget** for each featureSetting, change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
-In addition, for each of the features, you'll change the id of the excludeTarget to the ObjectID of the group from the Azure AD portal. This will exclude that group from seeing application name or geographic location.
+In addition, for each of the features, you'll change the id of the excludeTarget to the ObjectID of the group from the Azure AD portal. This change excludes that group from seeing application name or geographic location.
You need to PATCH the entire schema to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The following example shows an update to **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
```json {
- "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
```json {
- "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
To turn off additional context, you'll need to PATCH **displayAppInformationRequ
```json {
- "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
To turn off additional context, you'll need to PATCH **displayAppInformationRequ
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 09/22/2022 Last updated : 10/07/2022
Number matching is available for the following scenarios. When enabled, all scen
>[!NOTE] >For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
-Number matching isn't supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.
+Number matching will be available in Azure Government two weeks after general availability. Number matching isn't supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.
### Multifactor authentication
To create the registry key that overrides push notifications:
Identify your single target group for the schema configuration. Then use the following API endpoint to change the numberMatchingRequiredState property under featureSettings to **enabled**, and include or exclude groups: ```
-https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
``` >[!NOTE]
https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMetho
| Property | Type | Description | |-||-|
-| excludeTarget | featureTarget | A single entity that is excluded from this feature. <br> Please note: You'll be able to only exclude one group for number matching. |
-| includeTarget | featureTarget | A single entity that is included in this feature. <br> Please note: You'll be able to only set one group for number matching.|
+| excludeTarget | featureTarget | A single entity that is excluded from this feature. <br>You can only exclude one group for number matching. |
+| includeTarget | featureTarget | A single entity that is included in this feature. <br>You can only include one group for number matching.|
| State | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. | #### Feature target properties
https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMetho
In **featureSettings**, you'll need to change the **numberMatchingRequiredState** from **default** to **enabled**.
-Note that the value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we will use **any**, but if you don't want to allow passwordless, use **push**.
+The value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we will use **any**, but if you don't want to allow passwordless, use **push**.
>[!NOTE] >For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
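As a rough sketch, the relevant parts of the Microsoft Authenticator policy for enabling number matching look like the following fragment (the group IDs are placeholders, and `authenticationMode` is set to **any** as discussed above; the complete policy examples appear later in this article):

```json
{
    "featureSettings": {
        "numberMatchingRequiredState": {
            "state": "enabled",
            "includeTarget": { "targetType": "group", "id": "all_users" },
            "excludeTarget": { "targetType": "group", "id": "00000000-0000-0000-0000-000000000000" }
        }
    },
    "includeTargets": [
        { "targetType": "group", "id": "all_users", "authenticationMode": "any" }
    ]
}
```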
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
//Change the Query to PATCH and Run query {
- "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
```
-To confirm this has applied, please run the GET request by using the following endpoint:
+To confirm the change is applied, run the GET request by using the following endpoint:
```http
-GET https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
``` #### Example of how to enable number matching for a single group
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
```json {
- "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
To verify, run GET again and verify the ObjectID: ```http
-GET https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
``` #### Example of removing the excluded group from number matching
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
```json {
- "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
To turn number matching off, you'll need to PATCH remove **numberMatchingRequire
```json {
- "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
To turn number matching off, you'll need to PATCH remove **numberMatchingRequire
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
description: Learn how to move your organization away from less secure authentic
+ Last updated 06/23/2022
-
-# Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
+#Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
# How to run a registration campaign to set up Microsoft Authenticator - Microsoft Authenticator
The feature aims to empower admins to get users set up with MFA using the Authen
If this user doesn't have the Authenticator app set up for push notifications and is enabled for it by policy, yes, the user will see the nudge.
-**Will a user who has a the Authenticator app setup only for TOTP codes see the nudge?** 
+**Will a user who has the Authenticator app set up only for TOTP codes see the nudge?**
Yes. If the Authenticator app isn't set up for push notifications and the user is enabled for it by policy, the user will see the nudge.
It's the same as snoozing.
## Next steps
-[Enable passwordless sign-in with Microsoft Authenticator](howto-authentication-passwordless-phone.md)
+[Enable passwordless sign-in with Microsoft Authenticator](howto-authentication-passwordless-phone.md)
active-directory Troubleshoot Authentication Strengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-authentication-strengths.md
+
+ Title: Troubleshoot Azure AD authentication strength (Preview)
+description: Learn how to resolve errors when using Azure AD authentication strength.
+++++ Last updated : 09/26/2022++++++++
+# Troubleshoot Azure AD authentication strength (Preview)
+
+This topic covers errors you might see when you use Azure Active Directory (Azure AD) authentication strength and how to resolve them.
+
+## A user is asked to sign in with another method, but they don't see a method they expect
+
+
+Users can sign in only by using authentication methods that they registered and are enabled by the Authentication methods policy. For more information, see [How Conditional Access Authentication strengths policies are used in combination with Authentication methods policy](concept-authentication-strengths.md#how-authentication-strength-works-with-the-authentication-methods-policy).
+
+To verify if a method can be used:
+
+1. Check which authentication strength is required. Click **Security** > **Authentication methods** > **Authentication strengths**.
+1. Check if the user is enabled for a required method:
+ 1. Check the Authentication methods policy to see if the user is enabled for any method required by the authentication strength. Click **Security** > **Authentication methods** > **Policies**.
+ 1. As needed, check if the tenant is enabled for any method required for the authentication strength. Click **Security** > **Multifactor Authentication** > **Additional cloud-based multifactor authentication settings**.
+1. Check which authentication methods are registered for the user in the Authentication methods policy. Click **Users and groups** > _username_ > **Authentication methods**.
+
+If the user is registered for an enabled method that meets the authentication strength, they might need to use another method that isn't available after primary authentication, such as Windows Hello for Business or certificate-based authentication. For more information, see [How each authentication method works](concept-authentication-methods.md#how-each-authentication-method-works). The user will need to restart the session, choose **Sign-in options**, and select a method required by the authentication strength.
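If you prefer to check a user's registered methods programmatically instead of in the portal, you can list them with Microsoft Graph. This is a sketch; `{user-id}` is a placeholder for the user's object ID or user principal name, and the call requires an appropriate permission such as `UserAuthenticationMethod.Read.All`:

```http
GET https://graph.microsoft.com/v1.0/users/{user-id}/authentication/methods
```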
+
+## A user can't access a resource
+
+If an authentication strength requires a method that a user can't use, the user is blocked from sign-in. To check which method is required by an authentication strength, and which method the user is registered and enabled to use, follow the steps in the [previous section](#a-user-is-asked-to-sign-in-with-another-method-but-they-dont-see-a-method-they-expect).
+
+## How to check which authentication strength was enforced during sign-in
+Use the **Sign-ins** log to find additional information about the sign-in:
+
+- Under the **Authentication details** tab, the **Requirement** column shows the name of the authentication strengths policy.
+
+ :::image type="content" source="./media/troubleshoot-authentication-strengths/sign-in-logs-authentication-details.png" alt-text="Screenshot showing the authentication strength in the Sign-ins log.":::
+
+- Under the **Conditional Access** tab, you can see which Conditional Access policy was applied. Click the name of the policy, and look for **Grant controls** to see the authentication strength that was enforced.
+
+ :::image type="content" source="./media/troubleshoot-authentication-strengths/sign-in-logs-control.png" alt-text="Screenshot showing the authentication strength under Conditional Access Policy details in the Sign-ins log.":::
+
+## My users can't use their FIDO2 security key to sign in
+An admin can restrict access to specific security keys. When a user tries to sign in by using a key they can't use, the **You can't get there from here** message appears. The user has to restart the session and sign in with a different FIDO2 security key.
++
+## A user can't register a new method during sign-in
+
+Some methods can't be registered during sign-in, or they need more setup beyond the combined registration. For more information, see [Registering authentication methods](concept-authentication-strengths.md#registering-authentication-methods).
+
+
+## Next steps
+
+- [Azure AD Authentication Strengths overview](concept-authentication-strengths.md)
active-directory Troubleshoot Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-certificate-based-authentication.md
- Title: Troubleshoot Azure AD certificate-based authentication without federation (Preview) - Azure Active Directory
-description: Learn how to troubleshoot Azure AD certificate-based authentication in Azure Active Directory
----- Previously updated : 06/15/2022---------
-# Troubleshoot Azure AD certificate-based authentication (Preview)
-
-This topic covers how to troubleshoot Azure AD certificate-based authentication (CBA).
-
->[!NOTE]
->Azure AD certificate-based authentication is currently in public preview. Some features might not be supported or have limited capabilities. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Why don't I see an option to sign in using certificates against Azure Active Directory after I enter my username?
-
-An administrator needs to enable CBA for the tenant to make the sign-in with certificate option available for users. For more information, see [Step 3: Configure authentication binding policy](how-to-certificate-based-authentication.md#step-3-configure-authentication-binding-policy).
-
-## User-facing sign-in error messages
-
-If the user is unable to sign in using Certificate-based Authentication, they may see one of the following user-facing errors on the Azure AD sign-in screen.
-
-### ADSTS1001000 - Unable to acquire certificate policy from tenant
-
-This is a server-side error that occurs when the server could not fetch an authentication policy for the user using the SAN Principal Name/SAN RFC822Name field of the user certificate. Make sure that the authentication policy rules are correct, a valid certificate is used, and retry.
-
-### AADSTS1001003 - User sign-in fails with "Unable To Acquire Value Specified In Binding From Certificate"
--
-This error is returned if the user selects the wrong user certificate from the list while signing in.
-
-Make sure the certificate is valid and works for the user binding and authentication policy configuration.
-
-### AADSTS50034 - User sign-in fails with "Your account or password is incorrect. If you don't remember your password, reset it now."
--
-Make sure the user is trying to sign in with the correct username. This error happens when a unique user can't be found using the [username binding](how-to-certificate-based-authentication.md#step-4-configure-username-binding-policy) on the certificate fields.
--- Make sure user bindings are set correctly and the certificate field is mapped to the correct user Attribute.-- Make sure the user Attribute contains the correct value that matches the certificate field value.-
-For more information, see [Step 4: Configure username binding policy](how-to-certificate-based-authentication.md#step-4-configure-username-binding-policy).
-
-If the user is a federated user moving to Azure AD and if the user binding configuration is Principal Name > onPremisesUserPrincipalName:
--- Make sure the onPremisesUserPrincipalName is being synchronized, and ALT IDs are enabled in Azure AD Connect. -- Make sure the value of onPremisesUserPrincipalName is correct and synchronized in Azure AD Connect.-
->[!NOTE]
->There is a known issue that this scenario is not logged into the sign-in logs.
-
-### AADSTS130501 - User sign-in fails with "Sign in was blocked due to User Credential Policy"
--
-There is also a known issue when a user who is not in scope for CBA tries to sign in with a certificate to an [Office app](https://office.com) or any portal app, and the sign-in fails with an error:
--
-In both cases, the error can be resolved by making sure the user is in scope for Azure AD CBA. For more information, see [Step 2: Enable CBA on the tenant](how-to-certificate-based-authentication.md#step-2-enable-cba-on-the-tenant).
-
-### AADSTS90100: flowtoken parameter is empty or not valid
-
-After sign-in fails and I retry sign-in with the correct certificate, I get an error:
--
-This is a client behavior where the browser keeps using the original certificate selected. When the sign-in fails, close the existing browser session and retry sign-in from a new browser session.
-
-## User sign-in failed but not much diagnostic information
-
-There is a known issue when the authentication sometimes fails, the failure screen may not have an error message or troubleshooting information.
-
-For example, if a user certificate is revoked and is part of a Certificate Revocation List, then authentication fails correctly. However, instead of the error message, you might see the following screen:
--
-To get more diagnostic information, look in **Sign-in logs**. If a user authentication fails due to CRL validation for example, sign-in logs show the error information correctly.
--
-## Why didn't my changes to the authentication policy take effect?
-
-The authentication policy is cached. After a policy update, it may take up to an hour for the changes to be effective. Try after an hour to make sure the policy caching is not the cause.
-
-## I get an error 'Cannot read properties of undefined' while trying to add a custom authentication rule
-
-This is a known issue, and we are working on graceful error handling. This error happens when there is no Certification Authority (CA) on the tenant. To resolve the error, see [Configure the certificate authorities](how-to-certificate-based-authentication.md#step-1-configure-the-certification-authorities).
---
-## I see a valid Certificate Revocation List (CRL) endpoint set, but why don't I see any CRL revocation?
--- Make sure the CRL distribution point is set to a valid HTTP URL.-- Make sure the CRL distribution point is accessible via an internet-facing URL.-- Make sure the CRL sizes are within the limit for public preview. For more information about the maximum CRL size, see [What is the maximum size for downloading a CRL?](certificate-based-authentication-faq.yml#is-there-a-limit-for-crl-size-).-
-## Next steps
--- [Overview of Azure AD CBA](concept-certificate-based-authentication.md)-- [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md) -- [Limitations with Azure AD CBA](concept-certificate-based-authentication-limitations.md)-- [How to configure Azure AD CBA](how-to-certificate-based-authentication.md)-- [Windows SmartCard logon using Azure AD CBA](concept-certificate-based-authentication-smartcard.md)-- [Azure AD CBA on mobile devices (Android and iOS)](concept-certificate-based-authentication-mobile.md)-- [FAQ](certificate-based-authentication-faq.yml)--
active-directory Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/videos.md
Title: Azure ADAL to MSAL migration videos description: Videos that help you migrate from the Azure Active Directory developer platform to the Microsoft identity platform -+ Last updated 02/12/2020-+
Learn about the new Microsoft identity platform and how to migrate to it from th
## Migrate from v1.0 to v2.0
-**Learn about migrating to the the latest version of the Microsoft identity platform**
+**Learn about migrating to the latest version of the Microsoft identity platform**
:::row::: :::column:::
active-directory Concept How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/concept-how-it-works.md
+ Last updated 12/05/2019
Cloud sync is built on top of the Azure AD services and has 2 key components:
## Initial setup
-During initial setup, the a few things are done that makes cloud sync happen. These are:
+During initial setup, a few things are done that make cloud sync happen. These are:
- **During agent installation**: You configure the agent for the AD domains you want to provision from. This configuration registers the domains in the hybrid identity service and establishes an outbound connection to the service bus listening for requests. - **When you enable provisioning**: You select the AD domain and enable provisioning, which runs every two minutes. Optionally, you may deselect password hash sync and define a notification email. You can also manage attribute transformation using Microsoft Graph APIs.
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
The control for blocking access considers any assignments and prevents access ba
Administrators can choose to enforce one or more controls when granting access. These controls include the following options: - [Require multifactor authentication (Azure AD Multi-Factor Authentication)](../authentication/concept-mfa-howitworks.md)
+- [Require authentication strength (Preview)](#require-authentication-strength-preview)
- [Require device to be marked as compliant (Microsoft Intune)](/intune/protect/device-compliance-get-started) - [Require hybrid Azure AD joined device](../devices/concept-azure-ad-join-hybrid.md) - [Require approved client app](app-based-conditional-access.md)
Selecting this checkbox requires users to perform Azure Active Directory (Azure
[Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) satisfies the requirement for multifactor authentication in Conditional Access policies.
+### Require authentication strength (preview)
+
+Administrators can choose to require [specific authentication strengths](../authentication/concept-authentication-strengths.md) in their Conditional Access policies. These authentication strengths are defined in the **Azure portal** > **Azure Active Directory** > **Security** > **Authentication methods** > **Authentication strengths (Preview)**. Administrators can choose to create their own or use the built-in versions.
+
+> [!NOTE]
+> Require authentication strength is currently in public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ ### Require device to be marked as compliant Organizations that have deployed Intune can use the information returned from their devices to identify devices that meet specific policy compliance requirements. Intune sends compliance information to Azure AD so Conditional Access can decide to grant or block access to resources. For more information about compliance policies, see [Set rules on devices to allow access to resources in your organization by using Intune](/intune/protect/device-compliance-get-started).
The following client apps are confirmed to support this setting:
- Notate for Intune - Yammer (iOS and iPadOS)
-This list is not all encompassing, if your app is not in this list please check with the application vendor to confirm support.
+This list isn't all-encompassing. If your app isn't in this list, check with the application vendor to confirm support.
> [!NOTE] > Kaizala, Skype for Business, and Visio don't support the **Require app protection policy** grant. If you require these apps to work, use the **Require approved apps** grant exclusively. Using the "or" clause between the two grants will not work for these three applications.
If your organization has created terms of use, other options might be visible un
### Custom controls (preview)
-Custom controls is a preview capability of Azure AD. When you use custom controls, your users are redirected to a compatible service to satisfy authentication requirements that are separate from Azure AD. For more information, check out the [Custom controls](controls.md) article.
+Custom controls are a preview capability of Azure AD. When you use custom controls, your users are redirected to a compatible service to satisfy authentication requirements that are separate from Azure AD. For more information, check out the [Custom controls](controls.md) article.
## Next steps
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
Previously updated : 08/05/2022 Last updated : 10/03/2022
A Conditional Access policy must include a user assignment as one of the signals
> [!VIDEO https://www.youtube.com/embed/5DsW1hB3Jqs]
+> [!NOTE]
+> Some Conditional Access features are currently in public preview and might not be supported or have limited capabilities. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ ## Include users This list of users typically includes all of the users an organization is targeting in a Conditional Access policy.
The following options are available to include when creating a Conditional Acces
- All users - All users that exist in the directory including B2B guests. - Select users and groups
- - All guest and external users
- - This selection includes any [B2B guests and external users](../external-identities/external-identities-overview.md) including any user with the `user type` attribute set to `guest`. This selection also applies to any external user signed-in from a different organization like a Cloud Solution Provider (CSP).
+ - Guest or external users (preview)
+ - This selection provides several choices that can be used to target Conditional Access policies to specific guest or external user types and specific tenants containing those types of users. There are [several different types of guest or external users that can be selected](../external-identities/authentication-conditional-access.md#conditional-access-for-external-users), and multiple selections can be made:
+ - B2B collaboration guest users
+ - B2B collaboration member users
+ - B2B direct connect users
+ - Local guest users, for example any user belonging to the home tenant with the user type attribute set to guest
+ - Service provider users, for example a Cloud Solution Provider (CSP)
+ - Other external users, or users not represented by the other user type selections
+ - One or more tenants can be specified for the selected user type(s), or you can specify all tenants.
- Directory roles - Allows administrators to select specific [built-in Azure AD directory roles](../roles/permissions-reference.md) used to determine policy assignment. For example, organizations may create a more restrictive policy on users assigned the Global Administrator role. Other role types aren't supported, including administrative unit-scoped roles and custom roles. - Users and groups
When organizations both include and exclude a user or group the user or group is
The following options are available to exclude when creating a Conditional Access policy. -- All guest and external users
- - This selection includes any B2B guests and external users including any user with the `user type` attribute set to `guest`. This selection also applies to any external user signed-in from a different organization like a Cloud Solution Provider (CSP).
+- Guest or external users
+ - This selection provides several choices that can be used to target Conditional Access policies to specific guest or external user types and specific tenants containing those types of users. There are [several different types of guest or external users that can be selected](../external-identities/authentication-conditional-access.md#conditional-access-for-external-users), and multiple selections can be made:
+ - B2B collaboration guest users
+ - B2B collaboration member users
+ - B2B direct connect users
+ - Local guest users, for example any user belonging to the home tenant with the user type attribute set to guest
+ - Service provider users, for example a Cloud Solution Provider (CSP)
+ - Other external users, or users not represented by the other user type selections
+ - One or more tenants can be specified for the selected user type(s), or you can specify all tenants.
- Directory roles - Allows administrators to select specific Azure AD directory roles used to determine assignment. For example, organizations may create a more restrictive policy on users assigned the Global Administrator role. - Users and groups
If you do find yourself locked out, see [What to do if you're locked out of the
### External partner access
-Conditional Access policies that target external users may interfere with service provider access, for example granular delegated admin privileges [Introduction to granular delegated admin privileges (GDAP)](/partner-center/gdap-introduction).
+Conditional Access policies that target external users may interfere with service provider access, such as granular delegated admin privileges. For more information, see [Introduction to granular delegated admin privileges (GDAP)](/partner-center/gdap-introduction). For policies that are intended to target service provider tenants, use the **Service provider user** external user type available in the **Guest or external users** selection options.
## Next steps - [Conditional Access: Cloud apps or actions](concept-conditional-access-cloud-apps.md)- - [Conditional Access common policies](concept-conditional-access-policy-common.md)
active-directory Howto Conditional Access Policy Authentication Strength External https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-authentication-strength-external.md
+
+ Title: Conditional Access - Authentication strength for external users - Azure Active Directory
+description: Create a custom Conditional Access policy with authentication strength to require specific multifactor authentication (MFA) methods for external users.
+++++ Last updated : 10/12/2022+++++++
+# Conditional Access: Require an authentication strength for external users
+
+Authentication strength is a Conditional Access control that lets you define a specific combination of multifactor authentication (MFA) methods that an external user must complete to access your resources. This control is especially useful for restricting external access to sensitive apps in your organization. For example, you can create a Conditional Access policy, require a phishing-resistant authentication strength in the policy, and assign it to guests and external users.
+
+Azure AD provides three [built-in authentication strengths](https://aka.ms/b2b-auth-strengths):
+
+- Multifactor authentication strength
+- Passwordless MFA strength
+- Phishing-resistant MFA strength
+
+You can use one of the built-in strengths or create a [custom authentication strength](https://aka.ms/b2b-auth-strengths) based on the authentication methods you want to require.
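If you want to review the available strengths programmatically, the authentication strength policies can also be listed through Microsoft Graph. This is a sketch and assumes the preview (beta) endpoint is available in your tenant:

```http
GET https://graph.microsoft.com/beta/policies/authenticationStrengthPolicies
```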
+
+In external user scenarios, the MFA authentication methods that a resource tenant can accept vary depending on whether the user is completing MFA in their home tenant or in the resource tenant. For details, see [Conditional Access authentication strength](https://aka.ms/b2b-auth-strengths).
+
+> [!NOTE]
+> Currently, you can only apply authentication strength policies to external users who authenticate with Azure AD. For email one-time passcode, SAML/WS-Fed, and Google federation users, use the [MFA grant control](concept-conditional-access-grant.md#require-multi-factor-authentication) to require MFA.
+## Configure cross-tenant access settings to trust MFA
+
+Authentication strength policies work together with [MFA trust settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#to-change-inbound-trust-settings-for-mfa-and-device-claims) in your cross-tenant access settings to determine where and how the external user must perform MFA. An Azure AD user first authenticates with their own account in their home tenant. Then when this user tries to access your resource, Azure AD applies the authentication strength Conditional Access policy and checks to see if you've enabled MFA trust.
+
+- **If MFA trust is enabled**, Azure AD checks the user's authentication session for a claim indicating that MFA has been fulfilled in the user's home tenant. The table below indicates which authentication methods are acceptable for MFA fulfillment when completed in an external user's home tenant.
+- **If MFA trust is disabled**, the resource tenant presents the user with a challenge to complete MFA in the resource tenant using an acceptable authentication method. The table below shows which authentication methods are acceptable for MFA fulfillment by an external user.
+
+> [!IMPORTANT]
+> Before you create the Conditional Access policy, check your cross-tenant access settings to make sure your inbound MFA trust settings are configured as intended.
+## Choose an authentication strength
+
+Determine if one of the built-in authentication strengths will work for your scenario or if you'll need to create a custom authentication strength.
+
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Authentication methods** > **Authentication strengths (Preview)**.
+1. Review the built-in authentication strengths to see if one of them meets your requirements.
+1. If you want to enforce a different set of authentication methods, [create a custom authentication strength](https://aka.ms/b2b-auth-strengths).
+
+> [!NOTE]
+> The authentication methods that external users can use to satisfy MFA requirements are different depending on whether the user is completing MFA in their home tenant or the resource tenant. See the table in [Conditional Access authentication strength](https://aka.ms/b2b-auth-strengths).
+
+## Create a Conditional Access policy
+
+Use the following steps to create a Conditional Access policy that applies an authentication strength to external users.
+
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+1. Under **Include**, choose **Select users and groups**, and then select **Guest or external users**.
+
+ <!![Screenshot showing where to select guest and external user types.](media/howto-conditional-access-policy-authentication-strength-external/assignments-external-user-types.png)>
+
+1. Select the types of [guest or external users](../external-identities/authentication-conditional-access.md#assigning-conditional-access-policies-to-external-user-types-preview) you want to apply the policy to.
+
+1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+1. Under **Cloud apps or actions**, under **Include** or **Exclude**, select any applications you want to include in or exclude from the authentication strength requirements.
+1. Under **Access controls** > **Grant**:
+ 1. Choose **Grant access**.
+ 1. Select **Require authentication strength**, and then select the built-in or custom authentication strength from the list.
+
+ ![Screenshot showing where to select an authentication strength.](media/howto-conditional-access-policy-authentication-strength-external/select-authentication-strength.png)
+
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After you confirm your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
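+
+You can also create the report-only policy from a script. The following sketch is a rough illustration, not the documented API surface: it assumes the Conditional Access Graph API accepts a `GuestsOrExternalUsers` user target and, in its beta version, an `authenticationStrength` grant control that references a strength policy ID. Confirm the current schema before relying on it.
+
+```powershell
+# Rough sketch; the authenticationStrength grant control and the strength ID are assumptions.
+Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"
+
+$body = @{
+    displayName = "Require phishing-resistant MFA for external users"   # example name
+    state       = "enabledForReportingButNotEnforced"                   # report-only
+    conditions  = @{
+        users        = @{ includeUsers = @("GuestsOrExternalUsers") }
+        applications = @{ includeApplications = @("All") }
+    }
+    grantControls = @{
+        operator               = "AND"
+        authenticationStrength = @{ id = "00000000-0000-0000-0000-000000000004" }  # assumed ID of a built-in strength
+    }
+} | ConvertTo-Json -Depth 10
+
+Invoke-MgGraphRequest -Method POST `
+    -Uri "https://graph.microsoft.com/beta/identity/conditionalAccess/policies" `
+    -Body $body -ContentType "application/json"
+```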
+
+## Next steps
+
+[Conditional Access common policies](concept-conditional-access-policy-common.md)
+
+[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
active-directory Resilience Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/resilience-defaults.md
Previously updated : 02/25/2022 Last updated : 09/13/2022
If the required controls of a policy weren't previously satisfied, the policy is
- Sign-in risk - User risk - Country location (resolving new IP or GPS coordinates)
+- Authentication strengths
+
+When active, the Backup Authentication Service doesn't evaluate authentication methods required by [authentication strengths](../authentication/concept-authentication-strengths.md). If you used a non-phishing-resistant authentication method before an outage, you won't be prompted for multifactor authentication during the outage, even when you access a resource protected by a Conditional Access policy that requires a phishing-resistant authentication strength.
## Resilience defaults enabled
active-directory Active Directory Certificate Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-certificate-credentials.md
To compute the assertion, you can use one of the many JWT libraries in the langu
| | | | `alg` | Should be **RS256** | | `typ` | Should be **JWT** |
-| `x5t` | Base64-encoded SHA-1 thumbprint of the X.509 certificate. For example, given an X.509 certificate hash of `84E05C1D98BCE3A5421D225B140B36E86A3D5534` (Hex), the `x5t` claim would be `hOBcHZi846VCHSJbFAs26Go9VTQ=` (Base64). |
+| `x5t` | Base64url-encoded SHA-1 thumbprint of the X.509 certificate's DER encoding. For example, given an X.509 certificate hash of `84E05C1D98BCE3A5421D225B140B36E86A3D5534` (Hex), the `x5t` claim would be `hOBcHZi846VCHSJbFAs26Go9VTQ` (Base64url). |
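+
+As a quick sanity check, you can reproduce the example value above. This PowerShell sketch (illustrative only) converts the hex thumbprint to bytes and Base64url-encodes the result:
+
+```powershell
+# Convert the example hex SHA-1 thumbprint into its x5t (Base64url) form.
+$hex   = '84E05C1D98BCE3A5421D225B140B36E86A3D5534'
+$bytes = for ($i = 0; $i -lt $hex.Length; $i += 2) { [Convert]::ToByte($hex.Substring($i, 2), 16) }
+
+# Base64url = standard Base64 with '+' -> '-', '/' -> '_' and the '=' padding removed.
+$x5t = [Convert]::ToBase64String([byte[]]$bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_')
+$x5t   # hOBcHZi846VCHSJbFAs26Go9VTQ
+```
+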
### Claims (payload)
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-claims-mapping.md
-+ Last updated 06/16/2021
To run this script you need:
- password for the private key (pfx file) > [!IMPORTANT]
-> The private key must be in PKCS#12 format since Azure AD does not support other format types. Using the wrong format can result in the the error "Invalid certificate: Key value is invalid certificate" when using Microsoft Graph to PATCH the service principal with a `keyCredentials` containing the certificate info.
+> The private key must be in PKCS#12 format since Azure AD does not support other format types. Using the wrong format can result in the error "Invalid certificate: Key value is invalid certificate" when using Microsoft Graph to PATCH the service principal with a `keyCredentials` containing the certificate info.
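+
+If your certificate and private key are already in a Windows certificate store, one way to produce the required PKCS#12 (.pfx) file is shown below. This is a sketch: the store path and thumbprint are hypothetical, so replace them with your own values.
+
+```powershell
+# Sketch: export an existing certificate plus private key as PKCS#12 (.pfx).
+# The thumbprint below is a placeholder; use your certificate's thumbprint.
+$password = Read-Host -Prompt "PFX password" -AsSecureString
+Export-PfxCertificate -Cert "Cert:\CurrentUser\My\0123456789ABCDEF0123456789ABCDEF01234567" `
+    -FilePath .\claims-mapping.pfx -Password $password
+```
+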
```powershell
active-directory Config Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/config-authority.md
Title: Configure identity providers (MSAL iOS/macOS) description: Learn how to use different authorities such as B2C, sovereign clouds, and guest users, with MSAL for iOS and macOS. -+
Last updated 08/28/2019-+
active-directory Custom Rbac For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-rbac-for-developers.md
Last updated 08/19/2022-+ - #Customer intent: As a developer, I want to learn about custom RBAC and why I need to use it in my application.
Developers can also use [Azure AD groups](../fundamentals/active-directory-manag
### Custom data store
-App roles and groups both store information about user assignments in the Azure AD directory. Another option for managing user role information that is available to developers is to maintain the information outside of the directory in a custom data store. For example, in a SQL Database, Azure Table storage or Azure Cosmos DB Table API.
+App roles and groups both store information about user assignments in the Azure AD directory. Another option for managing user role information that is available to developers is to maintain the information outside of the directory in a custom data store. For example, in a SQL database, Azure Table storage, or Azure Cosmos DB for Table.
Using custom storage allows developers extra customization and control over how to assign roles to users and how to represent them. However, the extra flexibility also introduces more responsibility. For example, there's no mechanism currently available to include this information in tokens returned from Azure AD. If developers maintain role information in a custom data store, they'll need to have the applications retrieve the roles. Retrieving the roles is typically done using extensibility points defined in the middleware available to the platform that's being used to develop the application. Developers are responsible for properly securing the custom data store.
active-directory Customize Webviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/customize-webviews.md
Title: Customize browsers & WebViews (MSAL iOS/macOS) description: Learn how to customize the MSAL iOS/macOS browser experience to sign in users. -+
Last updated 08/28/2019-+
active-directory Developer Support Help Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-support-help-options.md
Title: Support and help options for Microsoft identity platform developers description: Learn where to get help and find answers to your questions as you build identity and access management (IAM) solutions that integrate with Azure Active Directory (Azure AD) and other components of the Microsoft identity platform. -+
Last updated 03/09/2022-+
active-directory Howto Authenticate Service Principal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-authenticate-service-principal-powershell.md
multiple Previously updated : 02/22/2021 Last updated : 10/11/2021
$sp = New-AzADServicePrincipal -DisplayName exampleapp `
-EndDate $cert.NotAfter ` -StartDate $cert.NotBefore Sleep 20
-New-AzRoleAssignment -RoleDefinitionName Reader -ServicePrincipalName $sp.ApplicationId
+New-AzRoleAssignment -RoleDefinitionName Reader -ServicePrincipalName $sp.AppId
``` The example sleeps for 20 seconds to allow some time for the new service principal to propagate throughout Azure AD. If your script doesn't wait long enough, you'll see an error stating: "Principal {ID} does not exist in the directory {DIR-ID}." To resolve this error, wait a moment then run the **New-AzRoleAssignment** command again.
Whenever you sign in as a service principal, provide the tenant ID of the direct
```powershell $TenantId = (Get-AzSubscription -SubscriptionName "Contoso Default").TenantId
-$ApplicationId = (Get-AzADApplication -DisplayNameStartWith exampleapp).ApplicationId
+$ApplicationId = (Get-AzADApplication -DisplayNameStartWith exampleapp).AppId
$Thumbprint = (Get-ChildItem cert:\CurrentUser\My\ | Where-Object {$_.Subject -eq "CN=exampleappScriptCert" }).Thumbprint Connect-AzAccount -ServicePrincipal `
Param (
{ # Sleep here for a few seconds to allow the service principal application to become active (should only take a couple of seconds normally) Sleep 15
- New-AzRoleAssignment -RoleDefinitionName Reader -ServicePrincipalName $ServicePrincipal.ApplicationId | Write-Verbose -ErrorAction SilentlyContinue
+ New-AzRoleAssignment -RoleDefinitionName Reader -ServicePrincipalName $ServicePrincipal.AppId | Write-Verbose -ErrorAction SilentlyContinue
$NewRole = Get-AzRoleAssignment -ObjectId $ServicePrincipal.Id -ErrorAction SilentlyContinue $Retries++; }
The application ID and tenant ID aren't sensitive, so you can embed them directl
If you need to retrieve the application ID, use: ```powershell
-(Get-AzADApplication -DisplayNameStartWith {display-name}).ApplicationId
+(Get-AzADApplication -DisplayNameStartWith {display-name}).AppId
``` ## Change credentials
active-directory Howto Create Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-service-principal-portal.md
Previously updated : 08/26/2022 Last updated : 10/11/2022
This article shows you how to create a new Azure Active Directory (Azure AD) application and service principal that can be used with the role-based access control. When you have applications, hosted services, or automated tools that need to access or modify resources, you can create an identity for the app. This identity is known as a service principal. Access to resources is restricted by the roles assigned to the service principal, giving you control over which resources can be accessed and at which level. For security reasons, it's always recommended to use service principals with automated tools rather than allowing them to log in with a user identity.
-This article shows you how to use the portal to create the service principal in the Azure portal. It focuses on a single-tenant application where the application is intended to run within only one organization. You typically use single-tenant applications for line-of-business applications that run within your organization. You can also [use Azure PowerShell to create a service principal](howto-authenticate-service-principal-powershell.md).
+This article shows you how to create the service principal in the Azure portal. It focuses on a single-tenant application, where the application is intended to run in only one organization. You typically use single-tenant applications for line-of-business applications that run within your organization. You can also [use Azure PowerShell](howto-authenticate-service-principal-powershell.md) or the [Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli) to create a service principal.
> [!IMPORTANT] > Instead of creating a service principal, consider using managed identities for Azure resources for your application identity. If your code runs on a service that supports managed identities and accesses resources that support Azure AD authentication, managed identities are a better option for you. To learn more about managed identities for Azure resources, including which services currently support it, see [What is managed identities for Azure resources?](../managed-identities-azure-resources/overview.md).
Keep in mind, you might need to configure additional permissions on resources th
![Add access policy](./media/howto-create-service-principal-portal/add-access-policy.png) ## Next steps
-* Learn how to [use Azure PowerShell to create a service principal](howto-authenticate-service-principal-powershell.md).
+* Learn how to use [Azure PowerShell](howto-authenticate-service-principal-powershell.md) or [Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli) to create a service principal.
* To learn about specifying security policies, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md). * For a list of available actions that can be granted or denied to users, see [Azure Resource Manager Resource Provider operations](../../role-based-access-control/resource-provider-operations.md). * For information about working with app registrations by using **Microsoft Graph**, see the [Applications](/graph/api/resources/application) API reference.
active-directory Howto V2 Keychain Objc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-v2-keychain-objc.md
Title: Configure keychain description: Learn how to configure keychain so that your app can cache tokens in the keychain. -+
Last updated 08/28/2019-+
active-directory Identity Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/identity-videos.md
Title: Microsoft identity platform videos description: A list of videos about modern authentication and the Microsoft identity platform -+
Last updated 08/03/2020-+
active-directory Migrate Adal Msal Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-adal-msal-java.md
Title: ADAL to MSAL migration guide (MSAL4j) description: Learn how to migrate your Azure Active Directory Authentication Library (ADAL) Java app to the Microsoft Authentication Library (MSAL). -+
Java Last updated 11/04/2019-+ #Customer intent: As a Java application developer, I want to learn how to migrate my v1 ADAL app to v2 MSAL.
active-directory Migrate Android Adal Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-android-adal-msal.md
Title: ADAL to MSAL migration guide for Android description: Learn how to migrate your Azure Active Directory Authentication Library (ADAL) Android app to the Microsoft Authentication Library (MSAL). -+
Android Last updated 10/14/2020-+ # Customer intent: As an Android application developer, I want to learn how to migrate my v1 ADAL app to v2 MSAL.
active-directory Migrate Objc Adal Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-objc-adal-msal.md
Title: ADAL to MSAL migration guide (MSAL iOS/macOS) description: Learn the differences between MSAL for iOS/macOS and the Azure AD Authentication Library for ObjectiveC (ADAL.ObjC) and how to migrate to MSAL for iOS/macOS. -+
Last updated 08/28/2019-+ #Customer intent: As an application developer, I want to learn about the differences between the Objective-C ADAL and MSAL for iOS and macOS libraries so I can migrate my applications to MSAL for iOS and macOS.
active-directory Migrate Spa Implicit To Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-spa-implicit-to-auth-code.md
Title: Migrate JavaScript single-page app from implicit grant to authorization code flow description: How to update a JavaScript SPA using MSAL.js 1.x and the implicit grant flow to MSAL.js 2.x and the authorization code flow with PKCE and CORS support. -+ Last updated 07/17/2020-+
active-directory Mobile App Quickstart Portal Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-android.md
Title: "Quickstart: Add sign in with Microsoft to an Android app" description: In this quickstart, learn how Android applications can call an API that requires access tokens issued by the Microsoft identity platform. -+
Last updated 02/15/2022 -+ #Customer intent: As an application developer, I want to learn how Android native apps can call protected APIs that require login and access tokens using the Microsoft identity platform.
active-directory Mobile App Quickstart Portal Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-ios.md
Title: "Quickstart: Add sign in with Microsoft to an iOS or macOS app" description: In this quickstart, learn how an iOS or macOS app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API. -+
Last updated 02/15/2022 -+ #Customer intent: As an application developer, I want to learn how to sign in users and call Microsoft Graph from my iOS or macOS application.
active-directory Msal Acquire Cache Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-acquire-cache-tokens.md
Title: Acquire and cache tokens with Microsoft Authentication Library (MSAL) description: Learn about acquiring and caching tokens using MSAL. -+
Last updated 03/22/2022-+ #Customer intent: As an application developer, I want to learn about acquiring and caching tokens so my app can support authentication and authorization.
active-directory Msal Android Handling Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-handling-exceptions.md
Title: Errors and exceptions (MSAL Android) description: Learn how to handle errors and exceptions, Conditional Access, and claims challenges in MSAL Android applications. -+
Last updated 08/07/2020-+
active-directory Msal Android Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-shared-devices.md
Title: Shared device mode for Android devices description: Learn how to enable shared device mode to allow frontline workers to share an Android device -+
Last updated 09/30/2021-+
active-directory Msal Android Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-single-sign-on.md
Title: How to enable cross-app SSO on Android using MSAL description: How to use the Microsoft Authentication Library (MSAL) for Android to enable single sign-on across your applications. -+
android
ms.devlang: java Last updated 10/15/2020-+
active-directory Msal Authentication Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-authentication-flows.md
Title: Authentication flow support in the Microsoft Authentication Library (MSAL) description: Learn about the authorization grants and authentication flows supported by MSAL. -+
Last updated 03/22/2022-+ # Customer intent: As an application developer, I want to learn about the authentication flows supported by MSAL.
active-directory Msal Client Application Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-client-application-configuration.md
Title: Client application configuration (MSAL) description: Learn about configuration options for public client and confidential client applications using the Microsoft Authentication Library (MSAL). -+
Last updated 07/15/2022-+ #Customer intent: As an application developer, I want to learn about the types of client applications so I can decide if this platform meets my app development needs.
active-directory Msal Client Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-client-applications.md
Title: Public and confidential client apps (MSAL) description: Learn about public client and confidential client applications in the Microsoft Authentication Library (MSAL). -+
Last updated 10/26/2021-+ #Customer intent: As an application developer, I want to learn about the types of client apps so I can decide if this platform meets my app development requirements.
active-directory Msal Compare Msal Js And Adal Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-compare-msal-js-and-adal-js.md
Title: "Migrate your JavaScript application from ADAL.js to MSAL.js" description: How to update your existing JavaScript application to use the Microsoft Authentication Library (MSAL) for authentication and authorization instead of the Active Directory Authentication Library (ADAL). -+
Last updated 07/06/2021-+ #Customer intent: As an application developer, I want to learn how to change the code in my JavaScript application from using ADAL.js as its authentication library to MSAL.js.
active-directory Msal Differences Ios Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-differences-ios-macos.md
Title: MSAL for iOS & macOS differences description: Describes the Microsoft Authentication Library (MSAL) usage differences between iOS and macOS. -+
Last updated 08/28/2019-+ #Customer intent: As an application developer, I want to learn about the Microsoft Authentication Library for macOS and iOS differences so I can decide if this platform meets my application development needs and requirements.
active-directory Msal Error Handling Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-ios.md
Title: Handle errors and exceptions in MSAL for iOS/macOS description: Learn how to handle errors and exceptions, Conditional Access claims challenges, and retries in MSAL for iOS/macOS applications. -+
Last updated 11/26/2020-+
active-directory Msal Error Handling Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-java.md
Title: Handle errors and exceptions in MSAL4J description: Learn how to handle errors and exceptions, Conditional Access claims challenges, and retries in MSAL4J applications. -+
Last updated 11/27/2020-+
active-directory Msal Error Handling Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-python.md
Title: Handle errors and exceptions in MSAL for Python description: Learn how to handle errors and exceptions, Conditional Access claims challenges, and retries in MSAL for Python applications. -+
Last updated 11/26/2020-+
active-directory Msal Java Adfs Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-java-adfs-support.md
Title: AD FS support (MSAL for Java) description: Learn about Active Directory Federation Services (AD FS) support in the Microsoft Authentication Library for Java (MSAL4j). -+
Last updated 11/21/2019-+ #Customer intent: As an application developer, I want to learn about AD FS support in MSAL for Java so I can decide if this platform meets my application development needs and requirements.
active-directory Msal Java Get Remove Accounts Token Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-java-get-remove-accounts-token-cache.md
Title: Get & remove accounts from the token cache (MSAL4j) description: Learn how to view and remove accounts from the token cache using the Microsoft Authentication Library for Java. -+
Last updated 11/07/2019-+ #Customer intent: As an application developer using the Microsoft Authentication Library for Java (MSAL4J), I want to learn how to get and remove accounts stored in the token cache.
active-directory Msal Java Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-java-token-cache-serialization.md
Title: Custom token cache serialization (MSAL4j) description: Learn how to serialize the token cache for MSAL for Java -+
Last updated 11/07/2019-+ #Customer intent: As an application developer using the Microsoft Authentication Library for Java (MSAL4J), I want to learn how to persist the token cache so that it is available to a new instance of my application.
active-directory Msal Js Avoid Page Reloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-avoid-page-reloads.md
Title: Avoid page reloads (MSAL.js) description: Learn how to avoid page reloads when acquiring and renewing tokens silently using the Microsoft Authentication Library for JavaScript (MSAL.js). -+
Last updated 05/29/2019-+ #Customer intent: As an application developer, I want to learn about avoiding page reloads so I can create more robust applications.
active-directory Msal Js Initializing Client Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-initializing-client-applications.md
Title: Initialize MSAL.js client apps description: Learn about initializing client applications using the Microsoft Authentication Library for JavaScript (MSAL.js). -+
Last updated 10/21/2021-+ # Customer intent: As an application developer, I want to learn about initializing a client application in MSAL.js to enable support for authentication and authorization in a JavaScript single-page application (SPA).
active-directory Msal Js Known Issues Ie Edge Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-known-issues-ie-edge-browsers.md
Title: Issues on Internet Explorer & Microsoft Edge (MSAL.js) description: Learn about know issues when using the Microsoft Authentication Library for JavaScript (MSAL.js) with Internet Explorer and Microsoft Edge browsers. -+
Last updated 05/18/2020-+ #Customer intent: As an application developer, I want to learn about issues with MSAL.js library so I can decide if this platform meets my application development needs and requirements.
active-directory Msal Js Pass Custom State Authentication Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-pass-custom-state-authentication-request.md
Title: Pass custom state in authentication requests (MSAL.js) description: Learn how to pass a custom state parameter value in authentication request using the Microsoft Authentication Library for JavaScript (MSAL.js). -+
Last updated 01/16/2020-+ #Customer intent: As an application developer, I want to learn about passing custom state in authentication requests so I can create more robust applications.
active-directory Msal Js Prompt Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-prompt-behavior.md
Title: Prompt behavior with MSAL.js description: Learn to customize prompt behavior using the Microsoft Authentication Library for JavaScript (MSAL.js). -+
Last updated 04/24/2019-+ #Customer intent: As an application developer, I want to learn about customizing the UI prompt behaviors in MSAL.js library so I can decide if this platform meets my application development needs and requirements.
active-directory Msal Js Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-sso.md
Title: Single sign-on (MSAL.js) description: Learn about building single sign-on experiences using the Microsoft Authentication Library for JavaScript (MSAL.js). -+
Last updated 10/25/2021-+ #Customer intent: As an application developer, I want to learn about enabling single sign on experiences with MSAL.js library so I can decide if this platform meets my application development needs and requirements.
active-directory Msal Js Use Ie Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-use-ie-browser.md
Title: Issues on Internet Explorer (MSAL.js) description: Use the Microsoft Authentication Library for JavaScript (MSAL.js) with Internet Explorer browser. -+
Last updated 12/01/2021-+ #Customer intent: As an application developer, I want to learn about issues with MSAL.js library so I can decide if this platform meets my application development needs and requirements.
active-directory Msal Logging Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-android.md
Title: Logging errors and exceptions in MSAL for Android. description: Learn how to log errors and exceptions in MSAL for Android. -+
Last updated 01/25/2021-+
active-directory Msal Logging Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-ios.md
Title: Logging errors and exceptions in MSAL for iOS/macOS description: Learn how to log errors and exceptions in MSAL for iOS/macOS -+
Last updated 01/25/2021-+
active-directory Msal Logging Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-java.md
Title: Logging errors and exceptions in MSAL for Java description: Learn how to log errors and exceptions in MSAL for Java -+
Last updated 01/25/2021-+
active-directory Msal Logging Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-python.md
Title: Logging errors and exceptions in MSAL for Python description: Learn how to log errors and exceptions in MSAL for Python -+
Last updated 01/25/2021-+
active-directory Msal Net Migration Android Broker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration-android-broker.md
Title: Migrate Xamarin Android apps using brokers to MSAL.NET description: Learn how to migrate Xamarin Android apps that use the Microsoft Authenticator or Intune Company Portal from ADAL.NET to MSAL.NET.-+
Last updated 08/31/2020-+ #Customer intent: As an application developer, I want to learn how to migrate my Xamarin Android applications that use Microsoft Authenticator from ADAL.NET to MSAL.NET.
active-directory Msal Net System Browser Android Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-system-browser-android-considerations.md
Title: Xamarin Android system browser considerations (MSAL.NET) description: Learn about considerations for using system browsers on Xamarin Android with the Microsoft Authentication Library for .NET (MSAL.NET). -+
Last updated 10/30/2019-+ #Customer intent: As an application developer, I want to learn about considerations for using Xamarin Android and MSAL.NET so I can decide if this platform meets my application development needs.
We recommend that you use browsers that support custom tabs. Here are some examp
|Kiwi | com.kiwibrowser.browser| |Brave | com.brave.browser|
-In addition to identifying browsers that offer custom tabs support, our testing indicates that a few browsers that don't support custom tabs also work for authentication. These browsers include Opera, Opera Mini, InBrowser, and Maxthon.
+In addition to identifying browsers that offer custom tabs support, our testing indicates that a few browsers that don't support custom tabs also work for authentication. These browsers include Opera, Opera Mini, InBrowser, and Maxthon.
## Tested devices and browsers The following table lists the devices and browsers that have been tested for authentication compatibility.
-| Device | Browser | Result |
+| Device | Browser | Result |
| - |:-:|:--:| | Huawei/One+ | Chrome\* | Pass| | Huawei/One+ | Edge\* | Pass|
The following table lists the devices and browsers that have been tested for aut
## Known issues
-If the user has no browser enabled on the device, MSAL.NET will throw an `AndroidActivityNotFound` exception.
+If the user has no browser enabled on the device, MSAL.NET will throw an `AndroidActivityNotFound` exception.
- **Mitigation**: Ask the user to enable a browser on their device. Recommend a browser that supports custom tabs.
-If authentication fails (for example, if authentication launches with DuckDuckGo), MSAL.NET will return `AuthenticationCanceled MsalClientException`.
- - **Root problem**: A browser that supports custom tabs wasn't enabled on the device. Authentication launched with a browser that couldn't complete authentication.
+If authentication fails (for example, if authentication launches with DuckDuckGo), MSAL.NET will return `AuthenticationCanceled MsalClientException`.
+ - **Root problem**: A browser that supports custom tabs wasn't enabled on the device. Authentication launched with a browser that couldn't complete authentication.
- **Mitigation**: Ask the user to enable a browser on their device. Recommend a browser that supports custom tabs. ## Next steps
-For more information and code examples, see [Choosing between an embedded web browser and a system browser on Xamarin Android](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/MSAL.NET-uses-web-browser#choosing-between-embedded-web-browser-or-system-browser-on-xamarinandroid) and [Embedded versus system web UI](msal-net-web-browsers.md#embedded-vs-system-web-ui).
+For more information and code examples, see [Choosing between an embedded web browser and a system browser on Xamarin Android](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/MSAL.NET-uses-web-browser#choosing-between-embedded-web-browser-or-system-browser-on-xamarinandroid) and [Embedded versus system web UI](msal-net-web-browsers.md#embedded-vs-system-web-ui).
active-directory Msal Net User Gets Consent For Multiple Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-user-gets-consent-for-multiple-resources.md
Title: Get consent for several resources (MSAL.NET) description: Learn how a user can get pre-consent for several resources using the Microsoft Authentication Library for .NET (MSAL.NET). -+
Last updated 04/30/2019-+ #Customer intent: As an application developer, I want to learn how to specify additional scopes so I can get pre-consent for several resources.
active-directory Msal Net Uwp Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-uwp-considerations.md
Title: UWP considerations (MSAL.NET) description: Learn about considerations for using Universal Windows Platform (UWP) with the Microsoft Authentication Library for .NET (MSAL.NET). -+
Last updated 03/03/2021-+ #Customer intent: As an application developer, I want to learn about considerations for using Universal Windows Platform and MSAL.NET so that I can decide if this platform meets my application development needs.
For more information, see [Web authentication broker - Fiddler](/windows/uwp/sec
## Next steps The following samples provide more information.
-Sample | Platform | Description
+Sample | Platform | Description
| | -- | --| |[`active-directory-dotnet-native-uwp-v2`](https://github.com/azure-samples/active-directory-dotnet-native-uwp-v2) | UWP | A UWP client application that uses MSAL.NET. It accesses Microsoft Graph for a user who authenticates by using an Azure AD 2.0 endpoint. <br>![Topology](media/msal-net-uwp-considerations/topology-native-uwp.png)| |[`active-directory-xamarin-native-v2`](https://github.com/Azure-Samples/active-directory-xamarin-native-v2) | Xamarin iOS, Android, UWP | A Xamarin Forms app that shows how to use MSAL to authenticate Microsoft personal accounts and Azure AD via the Microsoft identity platform. It also shows how to access Microsoft Graph and shows the resulting token. <br>![Diagram that shows how to use MSAL to authenticate Microsoft personal accounts and Azure AD via the Microsoft identity platform.](media/msal-net-uwp-considerations/topology-xamarin-native.png)|
active-directory Msal Net Web Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-web-browsers.md
Title: Using web browsers (MSAL.NET) description: Learn about specific considerations when using Xamarin Android with the Microsoft Authentication Library for .NET (MSAL.NET). -+
Last updated 05/18/2020-+ #Customer intent: As an application developer, I want to learn about web browsers MSAL.NET so I can decide if this platform meets my application development needs and requirements.
By default, MSAL.NET supports the system web browser on Xamarin.iOS, Xamarin.And
Using the system browser has the significant advantage of sharing the SSO state with other applications and with web applications without needing a broker (Company portal / Authenticator). The system browser was used, by default, in MSAL.NET for the Xamarin iOS and Xamarin Android platforms because, on these platforms, the system web browser occupies the whole screen, and the user experience is better. The system web view isn't distinguishable from a dialog. On iOS, though, the user might have to give consent for the browser to call back the application, which can be annoying.
-## System browser experience on .NET
+## System browser experience on .NET
On .NET Core, MSAL.NET will start the system browser as a separate process. MSAL.NET doesn't have control over this browser, but once the user finishes authentication, the web page is redirected in such a way that MSAL.NET can intercept the URI.
To enable the system browser:
IPublicClientApplication pca = PublicClientApplicationBuilder .Create("<CLIENT_ID>") // or use a known port if you wish "http://localhost:1234"
- .WithRedirectUri("http://localhost")
+ .WithRedirectUri("http://localhost")
.Build(); ```
On macOS, the browser is opened by invoking `open <url>`.
MSAL.NET can respond with an HTTP message or HTTP redirect when a token is received or an error occurs. ```csharp
-var options = new SystemWebViewOptions()
+var options = new SystemWebViewOptions()
{ HtmlMessageError = "<p> An error occurred: {0}. Details {1}</p>", BrowserRedirectSuccess = new Uri("https://www.microsoft.com");
await pca.AcquireTokenInteractive(s_scopes)
You may customize the way MSAL.NET opens the browser. For example instead of using whatever browser is the default, you can force open a specific browser: ```csharp
-var options = new SystemWebViewOptions()
+var options = new SystemWebViewOptions()
{ OpenBrowserAsync = SystemWebViewOptions.OpenWithEdgeBrowserAsync }
For desktop applications, however, launching a System Webview leads to a subpar
## Enable embedded webviews on iOS and Android
-You can also enable embedded webviews in Xamarin.iOS and Xamarin.Android apps. Starting with MSAL.NET 2.0.0-preview, MSAL.NET also supports using the **embedded** webview option.
+You can also enable embedded webviews in Xamarin.iOS and Xamarin.Android apps. Starting with MSAL.NET 2.0.0-preview, MSAL.NET also supports using the **embedded** webview option.
As a developer using MSAL.NET targeting Xamarin, you may choose to use either embedded webviews or system browsers. This is your choice depending on the user experience and security concerns you want to target.
active-directory Msal Net Xamarin Android Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-xamarin-android-considerations.md
Title: Xamarin Android code configuration and troubleshooting (MSAL.NET) description: Learn about considerations for using Xamarin Android with the Microsoft Authentication Library for .NET (MSAL.NET). -+
Last updated 08/28/2020-+ #Customer intent: As an application developer, I want to learn about special requirements for using Xamarin Android and MSAL.NET.
protected override void OnActivityResult(int requestCode,
} ```
-## Update the Android manifest for System WebView support
+## Update the Android manifest for System WebView support
To support System WebView, the *AndroidManifest.xml* file should contain the following values:
Xamarin.Forms 4.3.x generates code that sets the `package` attribute to `com.com
## Android 11 support
-To use the system browser and brokered authentication in Android 11, you must first declare these packages, so they are visible to the app. Apps that target Android 10 (API 29) and earlier can query the OS for a list of packages that are available on the device at any given time. To support privacy and security, Android 11 reduces package visibility to a default list of OS packages and the packages that are specified in the app's *AndroidManifest.xml* file.
+To use the system browser and brokered authentication in Android 11, you must first declare these packages, so they are visible to the app. Apps that target Android 10 (API 29) and earlier can query the OS for a list of packages that are available on the device at any given time. To support privacy and security, Android 11 reduces package visibility to a default list of OS packages and the packages that are specified in the app's *AndroidManifest.xml* file.
To enable the application to authenticate by using both the system browser and the broker, add the following section to *AndroidManifest.xml*:
To enable the application to authenticate by using both the system browser and t
<action android:name="android.support.customtabs.action.CustomTabsService" /> </intent> </queries>
-```
+```
-Replace `{Package Name}` with the application package name.
+Replace `{Package Name}` with the application package name.
Your updated manifest, which now includes support for the system browser and brokered authentication, should look similar to this example: ```xml <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" android:versionCode="1" android:versionName="1.0" package="com.companyname.XamarinDev">
- <uses-sdk android:minSdkVersion="21" android:targetSdkVersion="30" />
- <uses-permission android:name="android.permission.INTERNET" />
- <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
- <application android:theme="@android:style/Theme.NoTitleBar">
- <activity android:name="microsoft.identity.client.BrowserTabActivity" android:configChanges="orientation|screenSize">
- <intent-filter>
- <action android:name="android.intent.action.VIEW" />
- <category android:name="android.intent.category.DEFAULT" />
- <category android:name="android.intent.category.BROWSABLE" />
- <data android:scheme="msal4a1aa1d5-c567-49d0-ad0b-cd957a47f842" android:host="auth" />
- </intent-filter>
- <intent-filter>
- <action android:name="android.intent.action.VIEW" />
- <category android:name="android.intent.category.DEFAULT" />
- <category android:name="android.intent.category.BROWSABLE" />
- <data android:scheme="msauth" android:host="com.companyname.XamarinDev" android:path="/Fc4l/5I4mMvLnF+l+XopDuQ2gEM=" />
- </intent-filter>
- </activity>
- </application>
- <!-- Required for API Level 30 to make sure we can detect browsers and other apps we want to
+ <uses-sdk android:minSdkVersion="21" android:targetSdkVersion="30" />
+ <uses-permission android:name="android.permission.INTERNET" />
+ <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
+ <application android:theme="@android:style/Theme.NoTitleBar">
+ <activity android:name="microsoft.identity.client.BrowserTabActivity" android:configChanges="orientation|screenSize">
+ <intent-filter>
+ <action android:name="android.intent.action.VIEW" />
+ <category android:name="android.intent.category.DEFAULT" />
+ <category android:name="android.intent.category.BROWSABLE" />
+ <data android:scheme="msal4a1aa1d5-c567-49d0-ad0b-cd957a47f842" android:host="auth" />
+ </intent-filter>
+ <intent-filter>
+ <action android:name="android.intent.action.VIEW" />
+ <category android:name="android.intent.category.DEFAULT" />
+ <category android:name="android.intent.category.BROWSABLE" />
+ <data android:scheme="msauth" android:host="com.companyname.XamarinDev" android:path="/Fc4l/5I4mMvLnF+l+XopDuQ2gEM=" />
+ </intent-filter>
+ </activity>
+ </application>
+ <!-- Required for API Level 30 to make sure we can detect browsers and other apps we want to
be able to talk to.-->
- <!--https://developer.android.com/training/basics/intents/package-visibility-use-cases-->
- <queries>
- <package android:name="com.azure.authenticator" />
- <package android:name="com.companyname.xamarindev" />
- <package android:name="com.microsoft.windowsintune.companyportal" />
- <!-- Required for API Level 30 to make sure we can detect browsers
+ <!--https://developer.android.com/training/basics/intents/package-visibility-use-cases-->
+ <queries>
+ <package android:name="com.azure.authenticator" />
+ <package android:name="com.companyname.xamarindev" />
+ <package android:name="com.microsoft.windowsintune.companyportal" />
+ <!-- Required for API Level 30 to make sure we can detect browsers
(that don't support custom tabs) -->
- <intent>
- <action android:name="android.intent.action.VIEW" />
- <category android:name="android.intent.category.BROWSABLE" />
- <data android:scheme="https" />
- </intent>
- <!-- Required for API Level 30 to make sure we can detect browsers that support custom tabs -->
- <!-- https://developers.google.com/web/updates/2020/07/custom-tabs-android-11#detecting_browsers_that_support_custom_tabs -->
- <intent>
- <action android:name="android.support.customtabs.action.CustomTabsService" />
- </intent>
- </queries>
+ <intent>
+ <action android:name="android.intent.action.VIEW" />
+ <category android:name="android.intent.category.BROWSABLE" />
+ <data android:scheme="https" />
+ </intent>
+ <!-- Required for API Level 30 to make sure we can detect browsers that support custom tabs -->
+ <!-- https://developers.google.com/web/updates/2020/07/custom-tabs-android-11#detecting_browsers_that_support_custom_tabs -->
+ <intent>
+ <action android:name="android.support.customtabs.action.CustomTabsService" />
+ </intent>
+ </queries>
</manifest> ```
active-directory Msal Net Xamarin Ios Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-xamarin-ios-considerations.md
Title: Xamarin iOS considerations (MSAL.NET) description: Learn about considerations for using Xamarin iOS with the Microsoft Authentication Library for .NET (MSAL.NET). -+
Last updated 09/09/2020-+ #Customer intent: As an application developer, I want to learn about considerations for using Xamarin iOS and MSAL.NET.
For more information, see the [iOS entitlements documentation](https://developer
#### Troubleshooting KeyChain access
-If you get an error message similar to "The application cannot access the iOS keychain for the application publisher (the TeamId is null)", this means MSAL is not able to access the KeyChain. This is a configuration issue. To troubleshoot, try to access the KeyChain on your own, for example:
+If you get an error message similar to "The application cannot access the iOS keychain for the application publisher (the TeamId is null)", this means MSAL is not able to access the KeyChain. This is a configuration issue. To troubleshoot, try to access the KeyChain on your own, for example:
```csharp var queryRecord = new SecRecord(SecKind.GenericPassword)
active-directory Msal Node Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-node-migration.md
Title: "Migrate your Node.js application from ADAL to MSAL" description: How to update your existing Node.js application to use the Microsoft Authentication Library (MSAL) for authentication and authorization instead of the Active Directory Authentication Library (ADAL). -+
Last updated 04/26/2021-+ #Customer intent: As an application developer, I want to learn how to change the code in my Node.js application from using ADAL as its authentication library to MSAL.
active-directory Msal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-overview.md
Title: Learn about MSAL description: The Microsoft Authentication Library (MSAL) enables application developers to acquire tokens in order to call secured web APIs. These web APIs can be the Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API. MSAL supports multiple application architectures and platforms. -+
Last updated 09/20/2022-+ #Customer intent: As an application developer, I want to learn about the Microsoft Authentication Library so I can decide if this platform meets my application development needs and requirements.
active-directory Msal V1 App Scopes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-v1-app-scopes.md
Title: Scopes for v1.0 apps (MSAL) description: Learn about the scopes for a v1.0 application using the Microsoft Authentication Library (MSAL). -+
Last updated 11/25/2019-+ #Customer intent: As an application developer, I want to learn scopes for a v1.0 application so I can decide if this platform meets my application development needs and requirements.
active-directory Quickstart Configure App Access Web Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-access-web-apis.md
Title: "Quickstart: Configure an app to access a web API" description: In this quickstart, you configure an app registration representing a web API in the Microsoft identity platform to enable scoped resource access (permissions) to client applications. -+ Last updated 05/05/2022-+ #Customer intent: As an application developer, I want to know how to configure my web API's app registration with permissions client applications can use to obtain scoped access to the API.
active-directory Quickstart Configure App Expose Web Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-expose-web-apis.md
Title: "Quickstart: Register and expose a web API" description: In this quickstart, your register a web API with the Microsoft identity platform and configure its scopes, exposing it to clients for permissions-based access to the API's resources. -+ Last updated 03/25/2022-+ #Customer intent: As an application developer, I need learn to how to register my web API with the Microsoft identity platform and expose permissions (scopes) to make the API's resources available to users of my client application.
active-directory Quickstart Register App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-register-app.md
Title: "Quickstart: Register an app in the Microsoft identity platform" description: In this quickstart, you learn how to register an application with the Microsoft identity platform. -+ Last updated 01/13/2022-+ #Customer intent: As developer, I want to know how to register my application with the Microsoft identity platform so that the security token service can issue ID and/or access tokens to client applications that request them.
active-directory Quickstart V2 Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-android.md
Title: "Quickstart: Add sign in with Microsoft to an Android app" description: In this quickstart, learn how Android applications can call an API that requires access tokens issued by the Microsoft identity platform. -+
Last updated 01/14/2022 -+ #Customer intent: As an application developer, I want to learn how Android native apps can call protected APIs that require login and access tokens using the Microsoft identity platform.
active-directory Quickstart V2 Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-ios.md
Title: "Quickstart: Add sign in with Microsoft to an iOS or macOS app" description: In this quickstart, learn how an iOS or macOS app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API. -+
Last updated 01/14/2022 -+ #Customer intent: As an application developer, I want to learn how to sign in users and call Microsoft Graph from my iOS or macOS application.
active-directory Quickstart V2 Java Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-java-daemon.md
Title: "Quickstart: Call Microsoft Graph from a Java daemon" description: In this quickstart, you learn how a Java app can get an access token and call an API protected by Microsoft identity platform endpoint, using the app's own identity -+
Last updated 01/10/2022 -+ #Customer intent: As an application developer, I want to learn how my Java app can get an access token and call an API that's protected by Microsoft identity platform endpoint using client credentials flow.
active-directory Quickstart V2 Java Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-java-webapp.md
Title: "Quickstart: Add sign-in with Microsoft to a Java web app" description: In this quickstart, you'll learn how to add sign-in with Microsoft to a Java web application by using OpenID Connect. -+
Last updated 11/22/2021 -+ # Quickstart: Add sign-in with Microsoft to a Java web app
active-directory Quickstart V2 Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript-auth-code.md
Title: "Quickstart: Sign in users in JavaScript single-page apps (SPA) with auth code" description: In this quickstart, learn how a JavaScript single-page application (SPA) can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow. -+
Last updated 11/12/2021 -+ #Customer intent: As an app developer, I want to learn how to get access tokens and refresh tokens by using the Microsoft identity platform so that my JavaScript app can sign in users of personal accounts, work accounts, and school accounts.
active-directory Quickstart V2 Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript.md
Title: "Quickstart: Sign in users in JavaScript single-page apps" description: In this quickstart, you learn how a JavaScript app can call an API that requires access tokens issued by the Microsoft identity platform. -+ Last updated 04/11/2019-+ #Customer intent: As an app developer, I want to learn how to get access tokens by using the Microsoft identity platform so that my JavaScript app can sign in users of personal accounts, work accounts, and school accounts.
active-directory Quickstart V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-console.md
Title: "Quickstart: Call Microsoft Graph from a Node.js console app" description: In this quickstart, you download and run a code sample that shows how a Node.js console application can get an access token and call an API protected by a Microsoft identity platform endpoint, using the app's own identity -+ Last updated 01/10/2022 -+ #Customer intent: As an application developer, I want to learn how my Node.js app can get an access token and call an API that is protected by a Microsoft identity platform endpoint using client credentials flow.
active-directory Quickstart V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-desktop.md
Title: "Quickstart: Call Microsoft Graph from a Node.js desktop app" description: In this quickstart, you learn how a Node.js Electron desktop application can sign-in users and get an access token to call an API protected by a Microsoft identity platform endpoint -+ Last updated 01/14/2022 -+ #Customer intent: As an application developer, I want to learn how my Node.js Electron desktop application can get an access token and call an API that's protected by a Microsoft identity platform endpoint.
active-directory Quickstart V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-webapp-msal.md
Title: "Quickstart: Add authentication to a Node.js web app with MSAL Node" description: In this quickstart, you learn how to implement authentication with a Node.js web app and the Microsoft Authentication Library (MSAL) for Node.js. -+
Last updated 11/22/2021 -+ #Customer intent: As an application developer, I want to know how to set up authentication in a web application built using Node.js and MSAL Node.
active-directory Redirect Uris Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/redirect-uris-ios.md
Title: Use redirect URIs with MSAL (iOS/macOS) description: Learn about the differences between the Microsoft Authentication Library for ObjectiveC (MSAL for iOS and macOS) and Azure AD Authentication Library for ObjectiveC (ADAL.ObjC) and how to migrate between them. -+
Last updated 08/28/2019-+ #Customer intent: As an application developer, I want to learn about how to use redirect URIs.
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
Previously updated : 08/10/2022 Last updated : 10/10/2022
The `error` field has several possible values - review the protocol documentatio
| AADSTS90020 | The SAML 1.1 Assertion is missing ImmutableID of the user. Developer error - the app is attempting to sign in without the necessary or correct authentication parameters.| | AADSTS90022 | AuthenticatedInvalidPrincipalNameFormat - The principal name format isn't valid, or doesn't meet the expected `name[/host][@realm]` format. The principal name is required, host and realm are optional and may be set to null. | | AADSTS90023 | InvalidRequest - The authentication service request isn't valid. |
+| AADSTS900236 | InvalidRequestSamlPropertyUnsupported - The SAML authentication request property '{propertyName}' is not supported and must not be set. |
| AADSTS9002313 | InvalidRequest - Request is malformed or invalid. Something is wrong with the request sent to a particular endpoint. Capture a Fiddler trace of the failing request and check whether it's actually formatted correctly. |
| AADSTS9002332 | Application '{principalId}'({principalName}) is configured for use by Azure Active Directory users only. Please do not use the /consumers endpoint to serve this request. |
| AADSTS90024 | RequestBudgetExceededError - A transient error has occurred. Try again. |
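When a token request fails, client libraries surface these codes in the error response. As a minimal, hedged sketch (the `CLIENT_ID`, `CLIENT_SECRET`, `AUTHORITY`, and `SCOPE` values below are placeholder assumptions, not values from this article), an MSAL Java confidential client can log the service error code and the message, which is where the AADSTS code usually appears:

```java
import java.util.Set;
import java.util.concurrent.CompletionException;

import com.microsoft.aad.msal4j.ClientCredentialFactory;
import com.microsoft.aad.msal4j.ClientCredentialParameters;
import com.microsoft.aad.msal4j.ConfidentialClientApplication;
import com.microsoft.aad.msal4j.MsalServiceException;

public class AadstsErrorLogging {

    // Placeholder registration values; replace with your own app's settings.
    private static final String CLIENT_ID = "<client-id>";
    private static final String CLIENT_SECRET = "<client-secret>";
    private static final String AUTHORITY = "https://login.microsoftonline.com/<tenant-id>/";
    private static final Set<String> SCOPE = Set.of("https://graph.microsoft.com/.default");

    public static void main(String[] args) throws Exception {
        ConfidentialClientApplication app = ConfidentialClientApplication
                .builder(CLIENT_ID, ClientCredentialFactory.createFromSecret(CLIENT_SECRET))
                .authority(AUTHORITY)
                .build();

        try {
            app.acquireToken(ClientCredentialParameters.builder(SCOPE).build()).join();
        } catch (CompletionException ex) {
            // join() wraps failures; service-side failures carry an MsalServiceException cause.
            if (ex.getCause() instanceof MsalServiceException) {
                MsalServiceException serviceEx = (MsalServiceException) ex.getCause();
                System.out.println("Error code: " + serviceEx.errorCode());
                // The AADSTS code and correlation ID are typically part of the message text.
                System.out.println("Error message: " + serviceEx.getMessage());
            } else {
                throw ex;
            }
        }
    }
}
```

Prefer logging the full message over parsing the AADSTS number out of it, because the wording and format of these messages can change over time.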
active-directory Reference V2 Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-v2-libraries.md
Title: Microsoft identity platform authentication libraries description: List of client libraries and middleware compatible with the Microsoft identity platform. Use these libraries to add support for user sign-in (authentication) and protected web API access (authorization) to your applications. -+
Last updated 03/30/2021-+ # Customer intent: As a developer, I want to know whether there's a Microsoft Authentication Library (MSAL) available for the language/framework I'm using to build my application, and whether the library is GA or in preview.
active-directory Request Custom Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/request-custom-claims.md
Title: Request custom claims (MSAL iOS/macOS) description: Learn how to request custom claims. -+
Last updated 08/26/2019-+
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md
Title: Code samples for Microsoft identity platform authentication and authorization description: An index of Microsoft-maintained code samples demonstrating authentication and authorization in several application types, development languages, and frameworks. -+
Last updated 03/29/2022-+
active-directory Scenario Daemon Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-acquire-token.md
Don't call `AcquireTokenSilent` before you call `AcquireTokenForClient`, because
# [Java](#tab/java)
-This code is extracted from the [MSAL Java dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/src/samples/confidential-client/).
+This code is extracted from the [MSAL Java dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/tree/dev/msal4j-sdk/src/samples/confidential-client/).
```Java private static IAuthenticationResult acquireToken() throws Exception {
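    // What follows is a hedged sketch rather than the sample's exact code: it assumes
    // CLIENT_ID, CLIENT_SECRET, AUTHORITY, and SCOPE constants defined elsewhere in the class.
    ConfidentialClientApplication cca = ConfidentialClientApplication
            .builder(CLIENT_ID, ClientCredentialFactory.createFromSecret(CLIENT_SECRET))
            .authority(AUTHORITY)
            .build();

    ClientCredentialParameters parameters = ClientCredentialParameters
            .builder(SCOPE)
            .build();

    // acquireToken with client credential parameters checks the application token cache
    // itself, so there's no need to call acquireTokenSilently first.
    return cca.acquireToken(parameters).join();
}
```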
active-directory Scenario Desktop Acquire Token Device Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-device-code-flow.md
Previously updated : 08/25/2021 Last updated : 10/07/2022 #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
private static async Task<AuthenticationResult> AcquireByDeviceCodeAsync(IPublic
# [Java](#tab/java)
-This extract is from the [MSAL Java dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/src/samples/public-client/).
+This extract is from the [MSAL Java code samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/msal4j-sdk/src/samples/public-client/DeviceCodeFlow.java).
```java
-private static IAuthenticationResult acquireTokenDeviceCode() throws Exception {
-
- // Load token cache from file and initialize token cache aspect. The token cache will have
- // dummy data, so the acquireTokenSilently call will fail.
- TokenCacheAspect tokenCacheAspect = new TokenCacheAspect("sample_cache.json");
-
- PublicClientApplication pca = PublicClientApplication.builder(CLIENT_ID)
- .authority(AUTHORITY)
- .setTokenCacheAccessAspect(tokenCacheAspect)
- .build();
-
- Set<IAccount> accountsInCache = pca.getAccounts().join();
- // Take first account in the cache. In a production application, you would filter
- // accountsInCache to get the right account for the user authenticating.
- IAccount account = accountsInCache.iterator().next();
-
- IAuthenticationResult result;
- try {
- SilentParameters silentParameters =
- SilentParameters
- .builder(SCOPE, account)
- .build();
-
- // try to acquire token silently. This call will fail since the token cache
- // does not have any data for the user you are trying to acquire a token for
- result = pca.acquireTokenSilently(silentParameters).join();
- } catch (Exception ex) {
- if (ex.getCause() instanceof MsalException) {
-
- Consumer<DeviceCode> deviceCodeConsumer = (DeviceCode deviceCode) ->
- System.out.println(deviceCode.message());
-
- DeviceCodeFlowParameters parameters =
- DeviceCodeFlowParameters
- .builder(SCOPE, deviceCodeConsumer)
+ private static IAuthenticationResult acquireTokenDeviceCode() throws Exception {
+
+ // Load token cache from file and initialize token cache aspect. The token cache will have
+ // dummy data, so the acquireTokenSilently call will fail.
+ TokenCacheAspect tokenCacheAspect = new TokenCacheAspect("sample_cache.json");
+
+ PublicClientApplication pca = PublicClientApplication.builder(CLIENT_ID)
+ .authority(AUTHORITY)
+ .setTokenCacheAccessAspect(tokenCacheAspect)
+ .build();
+
+ Set<IAccount> accountsInCache = pca.getAccounts().join();
+ // Take first account in the cache. In a production application, you would filter
+ // accountsInCache to get the right account for the user authenticating.
+ IAccount account = accountsInCache.iterator().next();
+
+ IAuthenticationResult result;
+ try {
+ SilentParameters silentParameters =
+ SilentParameters
+ .builder(SCOPE, account)
.build();
- // Try to acquire a token via device code flow. If successful, you should see
- // the token and account information printed out to console, and the sample_cache.json
- // file should have been updated with the latest tokens.
- result = pca.acquireToken(parameters).join();
- } else {
- // Handle other exceptions accordingly
- throw ex;
+ // try to acquire token silently. This call will fail since the token cache
+ // does not have any data for the user you are trying to acquire a token for
+ result = pca.acquireTokenSilently(silentParameters).join();
+ } catch (Exception ex) {
+ if (ex.getCause() instanceof MsalException) {
+
+ Consumer<DeviceCode> deviceCodeConsumer = (DeviceCode deviceCode) ->
+ System.out.println(deviceCode.message());
+
+ DeviceCodeFlowParameters parameters =
+ DeviceCodeFlowParameters
+ .builder(SCOPE, deviceCodeConsumer)
+ .build();
+
+ // Try to acquire a token via device code flow. If successful, you should see
+ // the token and account information printed out to console, and the sample_cache.json
+ // file should have been updated with the latest tokens.
+ result = pca.acquireToken(parameters).join();
+ } else {
+ // Handle other exceptions accordingly
+ throw ex;
+ }
}
+ return result;
}
- return result;
-}
``` # [macOS](#tab/macOS)
active-directory Scenario Desktop Acquire Token Integrated Windows Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-integrated-windows-authentication.md
Previously updated : 08/25/2021 Last updated : 10/07/2022 #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
For the list of possible modifiers on AcquireTokenByIntegratedWindowsAuthenticat
# [Java](#tab/java)
-This extract is from the [MSAL Java dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/src/samples/public-client/).
+This extract is from the [MSAL Java code samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/msal4j-sdk/src/samples/public-client/IntegratedWindowsAuthenticationFlow.java).
```java
-private static IAuthenticationResult acquireTokenIwa() throws Exception {
-
- // Load token cache from file and initialize token cache aspect. The token cache will have
- // dummy data, so the acquireTokenSilently call will fail.
- TokenCacheAspect tokenCacheAspect = new TokenCacheAspect("sample_cache.json");
-
- PublicClientApplication pca = PublicClientApplication.builder(CLIENT_ID)
- .authority(AUTHORITY)
- .setTokenCacheAccessAspect(tokenCacheAspect)
- .build();
-
- Set<IAccount> accountsInCache = pca.getAccounts().join();
- // Take first account in the cache. In a production application, you would filter
- // accountsInCache to get the right account for the user authenticating.
- IAccount account = accountsInCache.iterator().next();
-
- IAuthenticationResult result;
- try {
- SilentParameters silentParameters =
- SilentParameters
- .builder(SCOPE, account)
- .build();
-
- // try to acquire token silently. This call will fail since the token cache
- // does not have any data for the user you are trying to acquire a token for
- result = pca.acquireTokenSilently(silentParameters).join();
- } catch (Exception ex) {
- if (ex.getCause() instanceof MsalException) {
-
- IntegratedWindowsAuthenticationParameters parameters =
- IntegratedWindowsAuthenticationParameters
- .builder(SCOPE, USER_NAME)
- .build();
+ PublicClientApplication pca = PublicClientApplication.builder(clientId)
+ .authority(authority)
+ .build();
+
+ Set<IAccount> accountsInCache = pca.getAccounts().join();
+ IAccount account = getAccountByUsername(accountsInCache, username);
+
+ //Attempt to acquire token when user's account is not in the application's token cache
+ IAuthenticationResult result = acquireTokenIntegratedWindowsAuth(pca, scope, account, username);
+ System.out.println("Account username: " + result.account().username());
+ System.out.println("Access token: " + result.accessToken());
+ System.out.println("Id token: " + result.idToken());
+ System.out.println();
+
+ //Get list of accounts from the application's token cache, and search them for the configured username
+ //getAccounts() will be empty on this first call, as accounts are added to the cache when acquiring a token
+ accountsInCache = pca.getAccounts().join();
+ account = getAccountByUsername(accountsInCache, username);
+
+ //Attempt to acquire token again, now that the user's account and a token are in the application's token cache
+ result = acquireTokenIntegratedWindowsAuth(pca, scope, account, username);
+ System.out.println("Account username: " + result.account().username());
+ System.out.println("Access token: " + result.accessToken());
+ System.out.println("Id token: " + result.idToken());
+ }
- // Try to acquire a IWA. You will need to generate a Kerberos ticket.
- // If successful, you should see the token and account information printed out to
- // console
- result = pca.acquireToken(parameters).join();
- } else {
- // Handle other exceptions accordingly
- throw ex;
+ private static IAuthenticationResult acquireTokenIntegratedWindowsAuth( PublicClientApplication pca,
+ Set<String> scope,
+ IAccount account,
+ String username) throws Exception {
+
+ IAuthenticationResult result;
+ try {
+ SilentParameters silentParameters =
+ SilentParameters
+ .builder(scope)
+ .account(account)
+ .build();
+ // Try to acquire token silently. This will fail on the first acquireTokenIntegratedWindowsAuth() call
+ // because the token cache does not have any data for the user you are trying to acquire a token for
+ result = pca.acquireTokenSilently(silentParameters).join();
+ System.out.println("==acquireTokenSilently call succeeded");
+ } catch (Exception ex) {
+ if (ex.getCause() instanceof MsalException) {
+ System.out.println("==acquireTokenSilently call failed: " + ex.getCause());
+ IntegratedWindowsAuthenticationParameters parameters =
+ IntegratedWindowsAuthenticationParameters
+ .builder(scope, username)
+ .build();
+
+ // Try to acquire a token using Integrated Windows Authentication (IWA). You will need to generate a Kerberos ticket.
+ // If successful, you should see the token and account information printed out to console
+ result = pca.acquireToken(parameters).join();
+ System.out.println("==Integrated Windows Authentication flow succeeded");
+ } else {
+ // Handle other exceptions accordingly
+ throw ex;
+ }
}
+ return result;
}
- return result;
-}
``` # [macOS](#tab/macOS)
active-directory Scenario Desktop Acquire Token Username Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-username-password.md
Previously updated : 08/25/2021 Last updated : 07/10/2022 #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
For more information on all the modifiers that can be applied to `AcquireTokenBy
# [Java](#tab/java)
-The following extract is from the [MSAL Java dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/src/samples/public-client/).
+The following extract is from the [MSAL Java code samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/tree/dev/msal4j-sdk/src/samples/public-client/).
```java
-private static IAuthenticationResult acquireTokenUsernamePassword() throws Exception {
-
- // Load token cache from file and initialize token cache aspect. The token cache will have
- // dummy data, so the acquireTokenSilently call will fail.
- TokenCacheAspect tokenCacheAspect = new TokenCacheAspect("sample_cache.json");
-
- PublicClientApplication pca = PublicClientApplication.builder(CLIENT_ID)
- .authority(AUTHORITY)
- .setTokenCacheAccessAspect(tokenCacheAspect)
- .build();
-
- Set<IAccount> accountsInCache = pca.getAccounts().join();
- // Take first account in the cache. In a production application, you would filter
- // accountsInCache to get the right account for the user authenticating.
- IAccount account = accountsInCache.iterator().next();
-
- IAuthenticationResult result;
- try {
- SilentParameters silentParameters =
- SilentParameters
- .builder(SCOPE, account)
- .build();
- // try to acquire token silently. This call will fail since the token cache
- // does not have any data for the user you are trying to acquire a token for
- result = pca.acquireTokenSilently(silentParameters).join();
- } catch (Exception ex) {
- if (ex.getCause() instanceof MsalException) {
-
- UserNamePasswordParameters parameters =
- UserNamePasswordParameters
- .builder(SCOPE, USER_NAME, USER_PASSWORD.toCharArray())
+ PublicClientApplication pca = PublicClientApplication.builder(clientId)
+ .authority(authority)
+ .build();
+
+ //Get list of accounts from the application's token cache, and search them for the configured username
+ //getAccounts() will be empty on this first call, as accounts are added to the cache when acquiring a token
+ Set<IAccount> accountsInCache = pca.getAccounts().join();
+ IAccount account = getAccountByUsername(accountsInCache, username);
+
+ //Attempt to acquire token when user's account is not in the application's token cache
+ IAuthenticationResult result = acquireTokenUsernamePassword(pca, scope, account, username, password);
+ System.out.println("Account username: " + result.account().username());
+ System.out.println("Access token: " + result.accessToken());
+ System.out.println("Id token: " + result.idToken());
+ System.out.println();
+
+ accountsInCache = pca.getAccounts().join();
+ account = getAccountByUsername(accountsInCache, username);
+
+ //Attempt to acquire token again, now that the user's account and a token are in the application's token cache
+ result = acquireTokenUsernamePassword(pca, scope, account, username, password);
+ System.out.println("Account username: " + result.account().username());
+ System.out.println("Access token: " + result.accessToken());
+ System.out.println("Id token: " + result.idToken());
+ }
+
+ private static IAuthenticationResult acquireTokenUsernamePassword(PublicClientApplication pca,
+ Set<String> scope,
+ IAccount account,
+ String username,
+ String password) throws Exception {
+ IAuthenticationResult result;
+ try {
+ SilentParameters silentParameters =
+ SilentParameters
+ .builder(scope)
+ .account(account)
.build();
- // Try to acquire a token via username/password. If successful, you should see
- // the token and account information printed out to console
- result = pca.acquireToken(parameters).join();
- } else {
- // Handle other exceptions accordingly
- throw ex;
+ // Try to acquire token silently. This will fail on the first acquireTokenUsernamePassword() call
+ // because the token cache does not have any data for the user you are trying to acquire a token for
+ result = pca.acquireTokenSilently(silentParameters).join();
+ System.out.println("==acquireTokenSilently call succeeded");
+ } catch (Exception ex) {
+ if (ex.getCause() instanceof MsalException) {
+ System.out.println("==acquireTokenSilently call failed: " + ex.getCause());
+ UserNamePasswordParameters parameters =
+ UserNamePasswordParameters
+ .builder(scope, username, password.toCharArray())
+ .build();
+ // Try to acquire a token via username/password. If successful, you should see
+ // the token and account information printed out to console
+ result = pca.acquireToken(parameters).join();
+ System.out.println("==username/password flow succeeded");
+ } else {
+ // Handle other exceptions accordingly
+ throw ex;
+ }
}
+ return result;
}
- return result;
-}
``` # [macOS](#tab/macOS)
active-directory Scenario Desktop App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-app-configuration.md
Before the call to the `.Build()` method, you can override your configuration wi
# [Java](#tab/java)
-Here's the class used in MSAL Java development samples to configure the samples: [TestData](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/src/samples/public-client/).
+Here's the class used in MSAL Java development samples to configure the samples: [TestData](https://github.com/AzureAD/microsoft-authentication-library-for-java/tree/dev/msal4j-sdk/src/samples/public-client/).
```Java PublicClientApplication pca = PublicClientApplication.builder(CLIENT_ID)
active-directory Scenario Spa App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-app-configuration.md
Title: Configure single-page app description: Learn how to build a single-page application (app's code configuration) -+
Last updated 02/11/2020-+ #Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
active-directory Scenario Spa App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-app-registration.md
Title: Register single-page applications (SPA) description: Learn how to build a single-page application (app registration) -+ Last updated 05/10/2022-+ # Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
active-directory Scenario Spa Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-overview.md
Title: JavaScript single-page app scenario description: Learn how to build a single-page application (scenario overview) by using the Microsoft identity platform. -+
Last updated 10/12/2021-+ #Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
active-directory Scenario Spa Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-production.md
Title: Move single-page app to production description: Learn how to build a single-page application (move to production) -+
Last updated 05/07/2019-+ #Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
active-directory Scenario Spa Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-sign-in.md
Title: Single-page app sign-in & sign-out description: Learn how to build a single-page application (sign-in) -+
Last updated 07/19/2022-+ #Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
active-directory Scenario Web App Sign User Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-overview.md
Previously updated : 09/17/2019 Last updated : 10/12/2022 -+ #Customer intent: As an application developer, I want to know how to write a web app that signs in users by using the Microsoft identity platform.
Learn all you need to build a web app that uses the Microsoft identity platform
If you want to create your first portable (ASP.NET Core) web app that signs in users, follow this quickstart:
-[Quickstart: ASP.NET Core web app that signs in users](quickstart-v2-aspnet-core-webapp.md)
+[Quickstart: Use ASP.NET Core to add sign-in with Microsoft to a web app](web-app-quickstart.md?pivots=devlang-aspnet-core)
# [ASP.NET](#tab/aspnet) If you want to understand how to add sign-in to an existing ASP.NET web application, try the following quickstart:
-[Quickstart: ASP.NET web app that signs in users](quickstart-v2-aspnet-webapp.md)
+[Quickstart: Use ASP.NET to add sign-in with Microsoft to a web app](web-app-quickstart.md?pivots=devlang-aspnet)
# [Java](#tab/java) If you're a Java developer, try the following quickstart:
-[Quickstart: Add sign-in with Microsoft to a Java web app](quickstart-v2-java-webapp.md)
+[Quickstart: Use Java to add sign-in with Microsoft to a web app](web-app-quickstart.md?pivots=devlang-java)
# [Node.js](#tab/nodejs) If you're a Node.js developer, try the following quickstart:
-[Quickstart: Add sign-in with Microsoft to a Node.js web app](quickstart-v2-nodejs-webapp-msal.md)
+[Quickstart: Use Node.js to add sign-in with Microsoft to a web app](web-app-quickstart.md?pivots=devlang-nodejs-msal)
# [Python](#tab/python) If you develop with Python, try the following quickstart:
-[Quickstart: Add sign-in with Microsoft to a Python web app](quickstart-v2-python-webapp.md)
+[Quickstart: Use Python to add sign-in with Microsoft to a web app](web-app-quickstart.md?pivots=devlang-python)
Web apps authenticate a user in a web browser. In this scenario, the web app dir
As a second phase, you can enable your application to call web APIs on behalf of the signed-in user. This next phase is a different scenario, which you'll find in [Web app that calls web APIs](scenario-web-app-call-api-overview.md).
-> [!NOTE]
-> Adding sign-in to a web app is about protecting the web app and validating a user token, which is what **middleware** libraries do. In the case of .NET, this scenario does not yet require the Microsoft Authentication Library (MSAL), which is about acquiring a token to call protected APIs. Authentication libraries for .NET will be introduced in the follow-up scenario, when the web app needs to call web APIs.
- ## Specifics -- During the application registration, you'll need to provide one or several (if you deploy your app to several locations) reply URIs. In some cases (ASP.NET and ASP.NET Core), you'll need to enable the ID token. Finally, you'll want to set up a sign-out URI so that your application reacts to users signing out.-- In the code for your application, you'll need to provide the authority to which your web app delegates sign-in. You might want to customize token validation (in particular, in partner scenarios).
+- During the application registration, provide one or several (if you deploy your app to several locations) reply URIs. For ASP.NET, you will need to select **ID tokens** under **Implicit grant and hybrid flows**. Finally, set up a sign-out URI so that the application reacts to users signing out.
+- In the app's code, provide the authority to which the web app delegates sign-in. Consider customizing token validation for certain scenarios (in particular, in partner scenarios).
- Web applications support any account types. For more information, see [Supported account types](v2-supported-account-types.md). ## Recommended reading
active-directory Single Sign On Macos Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-on-macos-ios.md
Title: Configure SSO on macOS and iOS description: Learn how to configure single sign on (SSO) on macOS and iOS. -+
Last updated 02/03/2020-+
active-directory Ssl Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/ssl-issues.md
Title: Troubleshoot TLS/SSL issues (MSAL iOS/macOS) description: Learn what to do about various problems using TLS/SSL certificates with the MSAL.Objective-C library. -+
Last updated 08/28/2019-+
active-directory Sso Between Adal Msal Apps Macos Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sso-between-adal-msal-apps-macos-ios.md
Title: SSO between ADAL & MSAL apps (iOS/macOS) description: Learn how to share SSO between ADAL and MSAL apps -+
Last updated 08/28/2019-+
active-directory Tutorial V2 Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-android.md
Title: "Tutorial: Create an Android app that uses the Microsoft identity platform for authentication" description: In this tutorial, you build an Android app that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf. -+
Last updated 11/26/2019-+
active-directory Tutorial V2 Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-ios.md
Title: "Tutorial: Create an iOS or macOS app that uses the Microsoft identity platform for authentication" description: Build an iOS or macOS app that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf.-+ Last updated 05/28/2022-+
active-directory Tutorial V2 Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-auth-code.md
Title: "Tutorial: Create a JavaScript single-page app that uses auth code flow" description: In this tutorial, you create a JavaScript SPA that can sign in users and use the auth code flow to obtain an access token from the Microsoft identity platform and call the Microsoft Graph API. -+ Last updated 10/12/2021-+
active-directory Tutorial V2 Javascript Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-spa.md
Title: "Tutorial: Create a JavaScript single-page app that uses the Microsoft identity platform for authentication" description: In this tutorial, you build a JavaScript single-page app (SPA) that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf. -+
Last updated 09/26/2022-+
active-directory Tutorial V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-console.md
Title: "Tutorial: Call Microsoft Graph in a Node.js console app" description: In this tutorial, you build a console app for calling Microsoft Graph to a Node.js console app. -+ Last updated 12/12/2021-+ # Tutorial: Call the Microsoft Graph API in a Node.js console app
active-directory Tutorial V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-desktop.md
Title: "Tutorial: Sign in users and call the Microsoft Graph API in an Electron desktop app" description: In this tutorial, you build an Electron desktop app that can sign in users and use the auth code flow to obtain an access token from the Microsoft identity platform and call the Microsoft Graph API. -+ Last updated 02/17/2021-+ # Tutorial: Sign in users and call the Microsoft Graph API in an Electron desktop app
active-directory Tutorial V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-webapp-msal.md
Title: "Tutorial: Sign in users in a Node.js & Express web app" description: In this tutorial, you add support for signing-in users in a web app. -+ Last updated 02/17/2021-+ # Tutorial: Sign in users and acquire a token for Microsoft Graph in a Node.js & Express web app
active-directory Tutorial V2 Shared Device Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-shared-device-mode.md
Title: "Tutorial: Use shared-device mode with the Microsoft Authentication Library (MSAL) for Android" description: In this tutorial, you learn how to prepare an Android device to run in shared mode and run a first-line worker app. -+
Last updated 1/15/2020-+
active-directory V2 Permissions And Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-permissions-and-consent.md
Title: Microsoft identity platform scopes, permissions, & consent description: Learn about authorization in the Microsoft identity platform endpoint, including scopes, permissions, and consent. -+
Last updated 04/21/2022-+
active-directory V2 Saml Bearer Assertion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-saml-bearer-assertion.md
Title: Exchange a SAML token issued by Active Directory Federation Services (AD FS) for a Microsoft Graph access token description: Learn how to fetch data from Microsoft Graph without prompting an AD FS-federated user for credentials by using the SAML bearer assertion flow. -+ Last updated 01/11/2022-+
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Title: "What's new in the Microsoft identity platform docs" description: "New and updated documentation for the Microsoft identity platform." -+ Last updated 09/03/2022
-+
active-directory Workload Identity Federation Create Trust Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-gcp.md
private string getGoogleIdToken()
} } ```+
+# [Java](#tab/java)
+Here's an example in Java of how to request an ID token from the Google metadata server:
+```java
+private String getGoogleIdToken() throws IOException {
+ final String endpoint = "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=api://AzureADTokenExchange";
+
+ URL url = new URL(endpoint);
+ HttpURLConnection httpUrlConnection = (HttpURLConnection) url.openConnection();
+
+ httpUrlConnection.setRequestMethod("GET");
+ httpUrlConnection.setRequestProperty("Metadata-Flavor", "Google");
+
+ InputStream inputStream = httpUrlConnection.getInputStream();
+ InputStreamReader inputStreamReader = new InputStreamReader(inputStream);
+ BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
+ StringBuffer content = new StringBuffer();
+ String inputLine;
+
+ while ((inputLine = bufferedReader.readLine()) != null)
+ content.append(inputLine);
+
+ bufferedReader.close();
+
+ return content.toString();
+}
+```
> [!IMPORTANT]
public class ClientAssertionCredential:TokenCredential
} ```
+# [Java](#tab/java)
+
+The following Java sample code snippet implements the `TokenCredential` interface, gets an ID token from Google (using the `getGoogleIdToken` method previously defined), and exchanges the ID token for an access token.
+
+```java
+import java.io.IOException;
+import java.time.Instant;
+import java.time.OffsetDateTime;
+import java.time.ZoneOffset;
+import java.util.HashSet;
+import java.util.Set;
+
+import com.azure.core.credential.AccessToken;
+import com.azure.core.credential.TokenCredential;
+import com.azure.core.credential.TokenRequestContext;
+import com.microsoft.aad.msal4j.ClientCredentialFactory;
+import com.microsoft.aad.msal4j.ClientCredentialParameters;
+import com.microsoft.aad.msal4j.ConfidentialClientApplication;
+import com.microsoft.aad.msal4j.IClientCredential;
+import com.microsoft.aad.msal4j.IAuthenticationResult;
+import reactor.core.publisher.Mono;
+
+public class ClientAssertionCredential implements TokenCredential {
+ private String clientID;
+ private String tenantID;
+ private String aadAuthority;
+
+ public ClientAssertionCredential(String clientID, String tenantID, String aadAuthority)
+ {
+ this.clientID = clientID;
+ this.tenantID = tenantID;
+ this.aadAuthority = aadAuthority; // https://login.microsoftonline.com/
+ }
+
+ @Override
+ public Mono<AccessToken> getToken(TokenRequestContext requestContext) {
+ try {
+ // Get the ID token from Google
+ String idToken = getGoogleIdToken(); // calling this directly just for clarity, this should be a callback
+
+ IClientCredential clientCredential = ClientCredentialFactory.createFromClientAssertion(idToken);
+ String authority = String.format("%s%s", aadAuthority, tenantID);
+
+ ConfidentialClientApplication app = ConfidentialClientApplication
+ .builder(clientID, clientCredential)
+ .authority(authority) // use the tenant-qualified authority built above
+ .build();
+
+ Set<String> scopes = new HashSet<String>(requestContext.getScopes());
+ ClientCredentialParameters clientCredentialParam = ClientCredentialParameters
+ .builder(scopes)
+ .build();
+
+ IAuthenticationResult authResult = app.acquireToken(clientCredentialParam).get();
+ Instant expiresOnInstant = authResult.expiresOnDate().toInstant();
+ OffsetDateTime expiresOn = OffsetDateTime.ofInstant(expiresOnInstant, ZoneOffset.UTC);
+
+ AccessToken accessToken = new AccessToken(authResult.accessToken(), expiresOn);
+
+ return Mono.just(accessToken);
+ } catch (Exception ex) {
+ return Mono.error(ex);
+ }
+ }
+}
+```
+ ## Access Azure AD protected resources
BlobServiceClient blobServiceClient = new BlobServiceClient(new Uri(storageUrl),
// write code to access Blob storage ```
+# [Java](#tab/java)
+
+```java
+String clientID = "<client-id>";
+String tenantID = "<tenant-id>";
+String authority = "https://login.microsoftonline.com/";
+String storageUrl = "https://<storageaccount>.blob.core.windows.net";
+
+ClientAssertionCredential credential = new ClientAssertionCredential(clientID, tenantID, authority);
+
+BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
+ .endpoint(storageUrl)
+ .credential(credential)
+ .buildClient();
+
+// write code to access Blob storage
+```
+ ## Next steps
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
To improve the security of Linux virtual machines (VMs) in Azure, you can integr
This article shows you how to create and configure a Linux VM and log in with Azure AD by using OpenSSH certificate-based authentication.
-> [!IMPORTANT]
-> This capability is now generally available. The previous version that made use of device code flow was [deprecated on August 15, 2021](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad). To migrate from the old version to this version, see the section [Migrate from the previous (preview) version](#migrate-from-the-previous-preview-version).
- There are many security benefits of using Azure AD with OpenSSH certificate-based authentication to log in to Linux VMs in Azure. They include: - Use your Azure AD credentials to log in to Azure Linux VMs.
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
To configure role assignments for your Azure AD-enabled Windows Server 2019 Data
The following example uses [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) to assign the Virtual Machine Administrator Login role to the VM for your current Azure user. You obtain the username of your current Azure account by using [az account show](/cli/azure/account#az-account-show), and you set the scope to the VM created in a previous step by using [az vm show](/cli/azure/vm#az-vm-show).
-You can also assign the scope at a resource group or subscription level. Normal Azure RBAC inheritance permissions apply. For more information, see [Log in to a Linux virtual machine in Azure by using Azure Active Directory authentication](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad).
+You can also assign the scope at a resource group or subscription level. Normal Azure RBAC inheritance permissions apply.
```AzureCLI $username=$(az account show --query user.name --output tsv)
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
Title: Delete an Azure AD tenant - Azure Active Directory | Microsoft Docs
-description: Explains how to prepare an Azure AD tenant for deletion, including self-service tenants
+ Title: Delete an Azure Active Directory tenant
+description: Learn how to prepare an Azure AD tenant, including a self-service tenant, for deletion.
documentationcenter: ''
# Delete a tenant in Azure Active Directory
-When an organization (tenant) is deleted in Azure Active Directory (Azure AD), part of Microsoft Entra, all resources that are contained in the organization are also deleted. Prepare your organization by minimizing its associated resources before you delete. Only a global administrator in Azure AD can delete an Azure AD organization from the portal.
+When an organization (tenant) is deleted in Azure Active Directory (Azure AD), part of Microsoft Entra, all resources in the organization are also deleted. Prepare your organization by minimizing its associated resources before you delete. Only a global administrator in Azure AD can delete an Azure AD organization from the Azure portal.
## Prepare the organization
-You can't delete an organization in Azure AD until it passes several checks. These checks reduce risk that deleting an Azure AD organization negatively impacts user access, such as the ability to sign in to Microsoft 365 or access resources in Azure. For example, if the organization associated with a subscription is unintentionally deleted, then users can't access the Azure resources for that subscription. The following conditions should be checked:
+You can't delete an organization in Azure AD until it passes several checks. These checks reduce the risk that deleting an Azure AD organization negatively affects user access, such as the ability to sign in to Microsoft 365 or access resources in Azure. For example, if the organization associated with a subscription is unintentionally deleted, users can't access the Azure resources for that subscription.
-* You must have paid all outstanding invoices and amounts due or overdue.
-* There can be no users in the Azure AD tenant except one global administrator who is to delete the organization. Any other users must be deleted before the organization can be deleted. If users are synchronized from on-premises, then sync must first be turned off, and the users must be deleted in the cloud organization using the Azure portal or Azure PowerShell cmdlets.
-* There can be no applications in the organization. Any applications must be removed before the organization can be deleted.
-* There can be no multifactor authentication providers linked to the organization.
-* There can be no subscriptions for any Microsoft Online Services such as Microsoft Azure, Microsoft 365, or Azure AD Premium associated with the organization. For example, if a default Azure AD tenant was created for you in Azure, you can't delete this organization if your Azure subscription still relies on it for authentication. You also can't delete a tenant if another user has associated an Azure subscription with it.
+Check the following conditions:
+
+* You've paid all outstanding invoices and amounts due or overdue.
+* No users are in the Azure AD tenant, except one global administrator who will delete the organization. You must delete any other users before you can delete the organization.
+
+ If users are synchronized from on-premises, turn off the sync first. You must delete the users in the cloud organization by using the Azure portal or Azure PowerShell cmdlets.
+* No applications are in the organization. You must remove any applications before you can delete the organization.
+* No multifactor authentication providers are linked to the organization.
+* No subscriptions for any Microsoft Online Services offerings (such as Azure, Microsoft 365, or Azure AD Premium) are associated with the organization.
+
+ For example, if a default Azure AD tenant was created for you in Azure, you can't delete this organization if your Azure subscription still relies on it for authentication. You also can't delete a tenant if another user has associated an Azure subscription with it.
> [!NOTE]
-> Microsoft is aware that customers with certain tenant configurations may be unable to successfully delete their Azure AD organization. We are working to address this problem. In the meantime, if needed, you can contact Microsoft support for details about the issue.
+> Microsoft is aware that customers with certain tenant configurations might be unable to successfully delete their Azure AD organization. We're working to address this problem. If you need more information, contact Microsoft support.
## Delete the organization
-1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com) with an account that is the Global Administrator for your organization.
+1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com) with an account that is the global administrator for your organization.
1. Select **Azure Active Directory**.
-1. On a tenant Overview page, select **Manage tenants**.
+1. On a tenant's **Overview** page, select **Manage tenants**.
- ![Confirm organization before deleting](./media/directory-delete-howto/manage-tenants-command.png)
+ ![Screenshot that shows the button for managing tenants.](./media/directory-delete-howto/manage-tenants-command.png)
-1. Select the check box for the tenant you want to delete, and select **Delete**.
+1. Select the checkbox for the tenant that you want to delete, and then select **Delete**.
- ![select the command to delete the organization](./media/directory-delete-howto/manage-tenants-delete-command.png)
-1. If your organization doesn't pass one or more checks, you're provided with a link to more information on how to pass. After you pass all checks, select **Delete** to complete the process.
+ ![Screenshot that shows the button for deleting an organization.](./media/directory-delete-howto/manage-tenants-delete-command.png)
+1. If your organization doesn't pass one or more checks, you'll get a link to more information on how to pass. After you pass all checks, select **Delete** to complete the process.
-## If you can't delete the organization
+## Deprovision subscriptions to allow organization deletion
-When you configured your Azure AD organization, you may have also activated license-based subscriptions for your organization like Azure AD Premium P2, Microsoft 365 Business Standard, or Enterprise Mobility + Security E5. To avoid accidental data loss, you can't delete an organization until the subscriptions are fully deleted. The subscriptions must be in a **Deprovisioned** state to allow organization deletion. An **Expired** or **Canceled** subscription moves to the **Disabled** state, and the final stage is the **Deprovisioned** state.
+When you configured your Azure AD organization, you might have also activated license-based subscriptions for your organization, like Azure AD Premium P2, Microsoft 365 Business Standard, or Enterprise Mobility + Security E5. To avoid accidental data loss, you can't delete an organization until the subscriptions are fully deleted. The subscriptions must be in a **Deprovisioned** state to allow organization deletion. An **Expired** or **Canceled** subscription moves to the **Disabled** state, and the final stage is the **Deprovisioned** state.
For what to expect when a trial Microsoft 365 subscription expires (not including paid Partner/CSP, Enterprise Agreement, or Volume Licensing), see the following table. For more information on Microsoft 365 data retention and subscription lifecycle, see [What happens to my data and access when my Microsoft 365 for business subscription ends?](https://support.office.com/article/what-happens-to-my-data-and-access-when-my-office-365-for-business-subscription-ends-4436582f-211a-45ec-b72e-33647f97d8a3).

Subscription state | Data | Access to data
-- | -- | --
-Active (30 days for trial) | Data accessible to all | Users have normal access to Microsoft 365 files, or apps<br>Admins have normal access to Microsoft 365 admin center and resources
-Expired (30 days) | Data accessible to all| Users have normal access to Microsoft 365 files, or apps<br>Admins have normal access to Microsoft 365 admin center and resources
-Disabled (30 days) | Data accessible to admin only | Users can't access Microsoft 365 files, or apps<br>Admins can access the Microsoft 365 admin center but can't assign licenses to or update users
-Deprovisioned (30 days after Disabled) | Data deleted (automatically deleted if no other services are in use) | Users can't access Microsoft 365 files, or apps<br>Admins can access the Microsoft 365 admin center to purchase and manage other subscriptions
+**Active** (30 days for trial) | Data is accessible to all. | Users have normal access to Microsoft 365 files or apps.<br>Admins have normal access to the Microsoft 365 admin center and resources.
+**Expired** (30 days) | Data is accessible to all.| Users have normal access to Microsoft 365 files or apps.<br>Admins have normal access to the Microsoft 365 admin center and resources.
+**Disabled** (30 days) | Data is accessible to admins only. | Users can't access Microsoft 365 files or apps.<br>Admins can access the Microsoft 365 admin center but can't assign licenses to or update users.
+**Deprovisioned** (30 days after **Disabled**) | Data is deleted (automatically deleted if no other services are in use). | Users can't access Microsoft 365 files or apps.<br>Admins can access the Microsoft 365 admin center to purchase and manage other subscriptions.
-## Delete a Office/Microsoft 365 subscription
+## Delete an Office 365 or Microsoft 365 subscription
-You can put a subscription into the **Deprovisioned** state to be deleted in three days using the Microsoft 365 admin center.
+You can use the Microsoft 365 admin center to put a subscription into the **Deprovisioned** state for deletion in three days:
-1. Sign in to the [Microsoft 365 admin center](https://admin.microsoft.com) with an account that is a global administrator in your organization. If you are trying to delete the "Contoso" organization that has the initial default domain contoso.onmicrosoft.com, sign in with a UPN such as admin@contoso.onmicrosoft.com.
+1. Sign in to the [Microsoft 365 admin center](https://admin.microsoft.com) with an account that is a global administrator in your organization. If you're trying to delete the Contoso organization that has the initial default domain `contoso.onmicrosoft.com`, sign in with a User Principal Name (UPN) such as `admin@contoso.onmicrosoft.com`.
-1. Preview the new Microsoft 365 admin center by making sure the **Try the new admin center** toggle is enabled.
+1. Preview the new Microsoft 365 admin center by turning on the **Try the new admin center** toggle.
- ![Preview the new M365 admin center experience](./media/directory-delete-howto/preview-toggle.png)
+ ![Screenshot that shows the toggle for previewing the new admin center.](./media/directory-delete-howto/preview-toggle.png)
-1. Once the new admin center is enabled, you need to cancel a subscription before you can delete it. Select **Billing** and select **Your products**, then select **Cancel subscription** for the subscription you want to cancel. You'll be brought to a feedback page.
+1. You need to cancel a subscription before you can delete it. Select **Billing** > **Your products**, and then select **Cancel subscription** for the subscription that you want to cancel.
- ![Choose a subscription to cancel](./media/directory-delete-howto/cancel-choose-subscription.png)
+ ![Screenshot that shows choosing a subscription to cancel.](./media/directory-delete-howto/cancel-choose-subscription.png)
-1. Complete the feedback form and select **Cancel subscription** to cancel the subscription.
+1. Complete the feedback form, and then select **Cancel subscription**.
- ![Cancel command in the subscription preview](./media/directory-delete-howto/cancel-command.png)
+ ![Screenshot that shows feedback options and the button for canceling a subscription.](./media/directory-delete-howto/cancel-command.png)
-1. You can now delete the subscription. Select **Delete** for the subscription you want to delete. If you can't find the subscription in the **Products & services** page, make sure you have **Subscription status** set to **All**.
+1. Select **Delete** for the subscription that you want to delete. If you can't find the subscription on the **Your products** page, make sure that you have **Subscription status** set to **All**.
- ![Delete link for deleting subscription](./media/directory-delete-howto/delete-command.png)
+ ![Screenshot that shows subscription status and the delete link.](./media/directory-delete-howto/delete-command.png)
-1. Select **Delete subscription** to delete the subscription and accept the terms and conditions. All data is permanently deleted within three days. You can [reactivate the subscription](/office365/admin/subscriptions-and-billing/reactivate-your-subscription) during the three-day period if you change your mind.
+1. Select the checkbox to accept terms and conditions, and then select **Delete subscription**. All data for the subscription is permanently deleted in three days. You can [reactivate the subscription](/office365/admin/subscriptions-and-billing/reactivate-your-subscription) during the three-day period if you change your mind.
- ![carefully read terms and conditions](./media/directory-delete-howto/delete-terms.png)
+ ![Screenshot that shows the link for terms and conditions, along with the button for deleting a subscription.](./media/directory-delete-howto/delete-terms.png)
-1. Now the subscription state has changed, and the subscription is marked for deletion. The subscription enters the **Deprovisioned** state 72 hours later.
+ Now the subscription state has changed to **Disabled**, and the subscription is marked for deletion. The subscription enters the **Deprovisioned** state 72 hours later.
-1. Once you've deleted a subscription in your organization and 72 hours have elapsed, you can sign back into the Azure AD admin center again and there should be no required action and no subscriptions blocking your organization deletion. You should be able to successfully delete your Azure AD organization.
+1. After you've deleted a subscription in your organization and 72 hours have elapsed, sign in to the Azure AD admin center again. Confirm that no required actions or subscriptions are blocking your organization deletion. You should be able to successfully delete your Azure AD organization.
- ![pass subscription check at deletion screen](./media/directory-delete-howto/delete-checks-passed.png)
+ ![Screenshot that shows resources that have passed a subscription check.](./media/directory-delete-howto/delete-checks-passed.png)
## Delete an Azure subscription
-If you have an Active or canceled Azure subscription associated to your Azure AD Tenant then you wouldn't be able to delete Azure AD Tenant. After you cancel, billing is stopped immediately. However, Microsoft waits 30 - 90 days before permanently deleting your data in case you need to access it or you change your mind. We don't charge you for keeping the data.
+If you have an active or canceled Azure subscription associated with your Azure AD tenant, you can't delete the Azure AD tenant. After you cancel, billing is stopped immediately. However, Microsoft waits 30 to 90 days before permanently deleting your data in case you need to access it or you change your mind. We don't charge you for keeping the data.
-- If you have a free trial or pay-as-you-go subscription, you don't have to wait 90 days for the subscription to automatically delete. You can delete your subscription three days after you cancel it. The Delete subscription option isn't available until three days after you cancel your subscription. For more details please read through [Delete free trial or pay-as-you-go subscriptions](../../cost-management-billing/manage/cancel-azure-subscription.md#delete-subscriptions).-- All other subscription types are deleted only through the [subscription cancellation](../../cost-management-billing/manage/cancel-azure-subscription.md#cancel-subscription-in-the-azure-portal) process. In other words, you can't delete a subscription directly unless it's a free trial or pay-as-you-go subscription. However, after you cancel a subscription, you can create an [Azure support request](https://go.microsoft.com/fwlink/?linkid=2083458) to ask to have the subscription deleted immediately.-- Alternatively, you can also move/transfer the Azure subscription to another Azure AD tenant account. When you transfer billing ownership of your subscription to an account in another Azure AD tenant, you can move the subscription to the new account's tenant. Additionally, performing Switch Directory on the subscription wouldn't help as the billing would still be aligned with Azure AD Tenant which was used to sign up for the subscription. For more information review [Transfer a subscription to another Azure AD tenant account](../../cost-management-billing/manage/billing-subscription-transfer.md#transfer-a-subscription-to-another-azure-ad-tenant-account)
+If you have a free trial or pay-as-you-go subscription, you don't have to wait 90 days for the subscription to be automatically deleted. You can delete your subscription three days after you cancel it, when the **Delete subscription** option becomes available. For details, read through [Delete free trial or pay-as-you-go subscriptions](../../cost-management-billing/manage/cancel-azure-subscription.md#delete-subscriptions).
-Once you have all the Azure and Office/Microsoft 365 Subscriptions canceled and deleted, you can proceed with cleaning up rest of the things within Azure AD Tenant before actually delete it.
+All other subscription types are deleted only through the [subscription cancellation](../../cost-management-billing/manage/cancel-azure-subscription.md#cancel-subscription-in-the-azure-portal) process. In other words, you can't delete a subscription directly unless it's a free trial or pay-as-you-go subscription. However, after you cancel a subscription, you can create an [Azure support request](https://go.microsoft.com/fwlink/?linkid=2083458) and ask to have the subscription deleted immediately.
-## Enterprise apps with no way to delete
+Alternatively, you can move the Azure subscription to another Azure AD tenant account. When you transfer billing ownership of your subscription to an account in another Azure AD tenant, you can move the subscription to the new account's tenant. Performing a **Switch Directory** action on the subscription wouldn't help, because the billing would still be aligned with the Azure AD tenant that was used to sign up for the subscription. For more information, review [Transfer a subscription to another Azure AD tenant account](../../cost-management-billing/manage/billing-subscription-transfer.md#transfer-a-subscription-to-another-azure-ad-tenant-account).
-Currently, there are few enterprise applications that can't be deleted in the Azure portal. If you find that you are unable to successfully delete an Azure AD tenant from the portal, you can use the following PowerShell commands to remove any blocking enterprise applications.
+After you have all the Azure, Office 365, and Microsoft 365 subscriptions canceled and deleted, you can clean up the rest of the things within an Azure AD tenant before you delete it.
-Follow below instructions to remove blocking enterprise apps/service principals before you attempt to delete the tenant:
+## Remove enterprise apps that you can't delete
-1. Install MSOnline module for PowerShell by running the following command:
+A few enterprise applications can't be deleted in the Azure portal and might block you from deleting the tenant. Use the following PowerShell procedure to remove those applications:
- 'Install-Module -Name MSOnline'
+1. Install the MSOnline module for PowerShell by running the following command:
-2. Install Az PowerShell module by running the following command:
+ `Install-Module -Name MSOnline`
- 'Install-Module -Name Az'
+2. Install the Az PowerShell module by running the following command:
-3. Create or use a managed admin account from the tenant you would like to delete, for example, newAdmin@tenanttodelete.onmicrosoft.com
+ `Install-Module -Name Az`
-4. Open PowerShell and connect to MSODS using the admin credentials, with command
+3. Create or use a managed admin account from the tenant that you want to delete. For example: `newAdmin@tenanttodelete.onmicrosoft.com`.
- 'connect-msolservice'
+4. Open PowerShell and connect to Azure AD by using admin credentials with the following command:
+
+ `connect-msolservice`
>[!WARNING]
- > You must run PowerShell using admin credentials for the tenant that you are trying to delete. Only homed-in admins have access to manage the directory via Powershell.You can't use guest user admins, live-ids or multi-directories. Before you proceed, to verify you are connected to the tenant you intend to delete with MSOnline module. It is recommended you run the command `Get-MsolDomain` to confirm that you are connected to the correct tenantID and onmicrosoft.com domain.
+ > You must run PowerShell by using admin credentials for the tenant that you're trying to delete. Only homed-in admins have access to manage the directory via PowerShell. You can't use guest user admins, Microsoft accounts, or multiple directories.
+ >
+ > Before you proceed, verify that you're connected to the tenant that you want to delete with the MSOnline module. We recommend that you run the `Get-MsolDomain` command to confirm that you're connected to the correct tenant ID and `onmicrosoft.com` domain.
-5. Run below command to set the tenant context
+5. Run the following command to set the tenant context:
- 'Connect-AzAccount -Tenant \<object id of the tenant you are attempting to delete\>'
+ `Connect-AzAccount -Tenant <object ID of the tenant that you want to delete>`
>[!WARNING]
- > Before proceeding, to verify you are connected to the tenant you intend to delete with Az module, it is recommended you run the command Get-AzContext to check the connected tenant ID and onmicrosoft.com domain.
+ > Before you proceed, verify that you're connected to the tenant that you want to delete with the Az PowerShell module. We recommend that you run the `Get-AzContext` command to check the connected tenant ID and `onmicrosoft.com` domain.
+
+6. Run the following command to remove any enterprise apps that you can't delete:
-6. Run below command to remove any enterprise apps with no way to delete:
+ `Get-AzADServicePrincipal | ForEach-Object { Remove-AzADServicePrincipal -ObjectId $_.Id }`
- 'Get-AzADServicePrincipal | ForEach-Object { Remove-AzADServicePrincipal -ObjectId $_.Id }'
+7. Run the following command to remove applications and service principals:
-7. Run below command to remove application/service principals
+ `Get-MsolServicePrincipal | Remove-MsolServicePrincipal`
- 'Get-MsolServicePrincipal | Remove-MsolServicePrincipal'
+8. Run the following command to disable any blocking service principals:
-8. Lastly, run the command to disable any blocking service principals:
+ `Get-MsolServicePrincipal | Set-MsolServicePrincipal -AccountEnabled $false`
- 'Get-MsolServicePrincipal | Set-MsolServicePrincipal -AccountEnabled $false'
+9. Sign in to the Azure portal again, and remove any new admin account that you created in step 3.
-9. Sign back into the Azure portal and remove any new admin account created in step 3.
+10. Retry tenant deletion from the Azure portal.
-10. Retry tenant deletion from the Azure portal again.
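For convenience, the commands in the preceding steps can be collected into one script. The following is a minimal sketch rather than an official script: the tenant ID is a placeholder, and it assumes you're signed in as a homed-in admin of the tenant that you want to delete.

```powershell
# Minimal sketch that consolidates the preceding steps.
# The tenant ID is a placeholder; replace it with the ID of the tenant that you want to delete.
Install-Module -Name MSOnline
Install-Module -Name Az

# Connect with the tenant's own admin credentials and confirm the tenant.
Connect-MsolService
Get-MsolDomain

Connect-AzAccount -Tenant '00000000-0000-0000-0000-000000000000'
Get-AzContext

# Remove or disable the enterprise apps and service principals that block deletion.
Get-AzADServicePrincipal | ForEach-Object { Remove-AzADServicePrincipal -ObjectId $_.Id }
Get-MsolServicePrincipal | Remove-MsolServicePrincipal
Get-MsolServicePrincipal | Set-MsolServicePrincipal -AccountEnabled $false
```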
+## Handle a trial subscription that blocks deletion
-## Trial subscription that blocks deletion
+There are [self-service sign-up products](/office365/admin/misc/self-service-sign-up) like Microsoft Power BI, Azure Rights Management (Azure RMS), Microsoft Power Apps, and Dynamics 365. Individual users can sign up via Microsoft 365, which also creates a guest user for authentication in your Azure AD organization.
-There are [self-service sign-up products](/office365/admin/misc/self-service-sign-up) like Microsoft Power BI, Rights Management Services, Microsoft Power Apps, or Dynamics 365, individual users can sign up via Microsoft 365, which also creates a guest user for authentication in your Azure AD organization. These self-service products block directory deletions until the products are fully deleted from the organization, to avoid data loss. They can be deleted only by the Azure AD admin whether the user signed up individually or was assigned the product.
+These self-service products block directory deletions until the products are fully deleted from the organization, to avoid data loss. Only the Azure AD admin can delete them, whether the user signed up individually or was assigned the product.
-There are two types of self-service sign-up products in how they are assigned:
+There are two types of self-service sign-up products, in terms of how they're assigned:
-* Org-level assignment: An Azure AD admin assigns the product to the entire organization and a user can be actively using the service with this org-level assignment even if they aren't licensed individually.
-* User level assignment: An individual user during self-service sign-up essentially assigns the product to themselves without an admin. Once the organization becomes managed by an admin (see [Administrator takeover of an unmanaged organization](domains-admin-takeover.md), then the admin can directly assign the product to users without self-service sign-up.
+* Organizational-level assignment: An Azure AD admin assigns the product to the entire organization. A user can actively use the service with the organizational-level assignment, even if the user isn't licensed individually.
+* User-level assignment: An individual user during self-service sign-up essentially self-assigns the product without an admin. After an admin starts managing the organization (see [Administrator takeover of an unmanaged organization](domains-admin-takeover.md)), the admin can directly assign the product to users without self-service sign-up.
-When you begin the deletion of the self-service sign-up product, the action permanently deletes the data and removes all user access to the service. Any user that was assigned the offer individually or on the organization level is then blocked from signing in or accessing any existing data. If you want to prevent data loss with the self-service sign-up product like [Microsoft Power BI dashboards](/power-bi/service-export-to-pbix) or [Rights Management Services policy configuration](/azure/information-protection/configure-policy#how-to-configure-the-azure-information-protection-policy), ensure that the data is backed up and saved elsewhere.
+When you begin the deletion of a self-service sign-up product, the action permanently deletes the data and removes all user access to the service. Any user who was assigned the offer individually or on the organization level is then blocked from signing in or accessing any existing data. If you want to prevent data loss with a self-service sign-up product like [Microsoft Power BI dashboards](/power-bi/service-export-to-pbix) or [Azure RMS policy configuration](/azure/information-protection/configure-policy#how-to-configure-the-azure-information-protection-policy), ensure that the data is backed up and saved elsewhere.
For more information about currently available self-service sign-up products and services, see [Available self-service programs](/office365/admin/misc/self-service-sign-up#available-self-service-programs).
-For what to expect when a trial Microsoft 365 subscription expires (not including paid Partner/CSP, Enterprise Agreement, or Volume Licensing), see the following table. For more information on Microsoft 365 data retention and subscription lifecycle, see [What happens to my data and access when my Microsoft 365 for business subscription ends?](/office365/admin/subscriptions-and-billing/what-if-my-subscription-expires).
+For what to expect when a trial Microsoft 365 subscription expires (not including paid Partner/CSP, Enterprise Agreement, or Volume Licensing), see the following table. For more information on Microsoft 365 data retention and subscription lifecycle, see [What happens to my data and access when my Microsoft 365 for Business subscription ends?](/office365/admin/subscriptions-and-billing/what-if-my-subscription-expires).
Product state | Data | Access to data
- | - | --
-Active (30 days for trial) | Data accessible to all | Users have normal access to self-service sign up product, files, or apps<br>Admins have normal access to Microsoft 365 admin center and resources
-Deleted | Data deleted | Users canΓÇÖt access self-service sign-up product, files, or apps<br>Admins can access the Microsoft 365 admin center to purchase and manage other subscriptions
+**Active** (30 days for trial) | Data is accessible to all. | Users have normal access to self-service sign-up products, files, or apps.<br>Admins have normal access to the Microsoft 365 admin center and resources.
+**Deleted** | Data is deleted. | Users can't access self-service sign-up products, files, or apps.<br>Admins can access the Microsoft 365 admin center to purchase and manage other subscriptions.
## Delete a self-service sign-up product
-You can put a self-service sign-up product like Microsoft Power BI or Azure Rights Management Services into a **Delete** state to be immediately deleted in the Azure AD portal.
+You can put a self-service sign-up product like Microsoft Power BI or Azure RMS into a **Delete** state to be immediately deleted in the Azure AD portal:
-1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) with an account that is a Global administrator in the organization. If you are trying to delete the ΓÇ£ContosoΓÇ¥ organization that has the initial default domain contoso.onmicrosoft.com, sign on with a UPN such as admin@contoso.onmicrosoft.com.
+1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) with an account that is a global administrator in the organization. If you're trying to delete the Contoso organization that has the initial default domain `contoso.onmicrosoft.com`, sign in with a UPN such as `admin@contoso.onmicrosoft.com`.
-1. Select **Licenses**, and then select **Self-service sign-up products**. You can see all the self-service sign-up products separately from the seat-based subscriptions. Choose the product you want to permanently delete. Here's an example in Microsoft Power BI:
+1. Select **Licenses**, and then select **Self-service sign-up products**. You can see all the self-service sign-up products separately from the seat-based subscriptions. Choose the product that you want to permanently delete. Here's an example in Microsoft Power BI:
- ![Screenshot that shows the "Licenses - Self-service sign-up products" page.](./media/directory-delete-howto/licenses-page.png)
+ ![Screenshot that shows a list of self-service sign-up products.](./media/directory-delete-howto/licenses-page.png)
-1. Select **Delete** to delete the product and accept the terms that data is deleted immediately and irrevocably. This delete action will remove all users and remove organization access to the product. Select Yes to move forward with the deletion.
+1. Select **Delete** to delete the product. This action will remove all users and remove organization access to the product. A dialog warns you that deleting the product will immediately and irrevocably delete data. Select **Yes** to confirm.
- ![Screenshot that shows the "Licenses - Self-service sign-up products" page with the "Delete self-service sign-up product" window open.](./media/directory-delete-howto/delete-product.png)
+ ![Screenshot of the confirmation dialog that warns about deletion of data.](./media/directory-delete-howto/delete-product.png)
-1. When you select **Yes**, the deletion of the self-service product will be initiated. There is a notification that will tell you of the deletion in progress.
+ A notification tells you that the deletion is in progress.
- ![Screenshot that shows the "Licenses - Self-service sign-up products" page with the "deletion in progress" notification displayed.](./media/directory-delete-howto/progress-message.png)
+ ![Screenshot of a notification that a deletion is in progress.](./media/directory-delete-howto/progress-message.png)
-1. Now the self-service sign-up product state has changed to **Deleted**. When you refresh the page, the product should be removed from the **Self-service sign-up products** page.
+1. The self-service sign-up product state has changed to **Deleted**. Refresh the page, and verify that the product is removed from the **Self-service sign-up products** page.
- ![Screenshot that shows the "Licenses - Self-service sign-up products" page with the "Self-service sign-up product deleted" pane on the right-side.](./media/directory-delete-howto/product-deleted.png)
+ ![Screenshot that shows the list of self-service sign-up products and a pane that confirms the deletion of a self-service sign-up product.](./media/directory-delete-howto/product-deleted.png)
-1. Once you have deleted all the products, you can sign back into the Azure AD admin center again, and there should be no required action and no products blocking your organization deletion. You should be able to successfully delete your Azure AD organization.
+1. After you've deleted all the products, sign in to the Azure AD admin center again. Confirm that no required actions or products are blocking your organization deletion. You should be able to successfully delete your Azure AD organization.
- ![the username is mistyped or not found](./media/directory-delete-howto/delete-checks-passed.png)
+ ![Screenshot that shows status information for resources.](./media/directory-delete-howto/delete-checks-passed.png)
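If you prefer to verify from PowerShell that no subscriptions remain before you retry the deletion, the following is a minimal sketch. It assumes the MSOnline module is installed and that you've already connected with `Connect-MsolService`.

```powershell
# Minimal sketch: list any remaining subscriptions that could still block organization deletion.
# Assumes the MSOnline module and an existing Connect-MsolService session.
Get-MsolSubscription | Select-Object SkuPartNumber, Status, TotalLicenses
```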
## Next steps
active-directory Directory Overview User Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-overview-user-model.md
Previously updated : 06/23/2022 Last updated : 09/12/2022
This article introduces an administrator for Azure Active Directory (Azure AD), part of Microsoft Entra, to the relationship between top [identity management](../fundamentals/active-directory-whatis.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context) tasks for users in terms of their groups, licenses, deployed enterprise apps, and administrator roles. As your organization grows, you can use Azure AD groups and administrator roles to:
-* Assign licenses to groups instead of to individual users.
-* Delegate permissions to distribute the work of Azure AD management to less-privileged roles.
+* Assign licenses to groups instead of assigning licenses to individual users.
+* Grant permissions to delegate Azure AD management work to personnel in less-privileged roles.
* Assign enterprise app access to groups. ## Assign users to groups
-You can use groups in Azure AD to assign licenses to large numbers of users, or to assign user access to deployed enterprise apps. You can use groups to assign all administrator roles except for Global Administrator in Azure AD, or you can grant access to resources that are external, such as SaaS applications or SharePoint sites.
+You can use groups in Azure AD to assign licenses, or deployed enterprise apps, to large numbers of users. You can also use groups to assign all administrator roles except for Azure AD Global Administrator, or you can grant access to external resources, such as SaaS applications or SharePoint sites.
-For additional flexibility and to reduce group membership management work, you can use [dynamic groups](groups-create-rule.md) in Azure AD to expand and contract group membership automatically. You'll need an Azure AD Premium P1 license for each unique user that is a member of one or more dynamic groups.
+You can use [dynamic groups](groups-create-rule.md) in Azure AD to expand and contract group membership automatically. Dynamic groups give you greater flexibility and they reduce group membership management work. You'll need an Azure AD Premium P1 license for each unique user that is a member of one or more dynamic groups.
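As a rough illustration, a dynamic security group can also be created from PowerShell. The following is a minimal sketch that assumes the AzureAD module and a connected session; the display name, mail nickname, and membership rule are made-up example values.

```powershell
# Minimal sketch, assuming the AzureAD module and a connected session (Connect-AzureAD).
# The display name, mail nickname, and membership rule are example values only.
New-AzureADMSGroup -DisplayName "Sales team (dynamic)" `
    -MailEnabled $false -MailNickname "salesdynamic" -SecurityEnabled $true `
    -GroupTypes "DynamicMembership" `
    -MembershipRule '(user.department -eq "Sales")' `
    -MembershipRuleProcessingState "On"
```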
## Assign licenses to groups
-Assigning or removing licenses from users individually can demand time and attention. If you [assign licenses to groups](../fundamentals/license-users-groups.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context) instead, you can make your large-scale license management easier.
+Managing user license assignments individually is time-consuming and error prone. If you [assign licenses to groups](../fundamentals/license-users-groups.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context) instead, large-scale license management becomes easier.
Azure AD users who join a licensed group are automatically assigned the appropriate licenses. When users leave the group, Azure AD removes their license assignments. Without Azure AD groups, you'd have to write a PowerShell script or use Graph API to bulk add or remove user licenses for users joining or leaving the organization.
-If there aren't enough licenses available, or an issue occurs like service plans that can't be assigned at the same time, you can see status of any licensing issue for the group in the Azure portal.
+If there aren't enough licenses available, or an issue occurs like service plans that can't be assigned at the same time, you can see the status of any licensing issue for the group in the Azure portal.
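As an illustration of group-based licensing outside the portal, here's a minimal sketch that uses the Microsoft Graph PowerShell SDK. The group ID and SKU part number are placeholders, and the consented scopes shown are assumptions.

```powershell
# Minimal sketch, assuming the Microsoft Graph PowerShell SDK.
# The group ID and SKU part number are placeholder values; the scopes are assumptions.
Connect-MgGraph -Scopes 'Group.ReadWrite.All','Organization.Read.All'

# Look up the SKU to assign, then add it to the group (and remove nothing).
$sku = Get-MgSubscribedSku | Where-Object SkuPartNumber -eq 'ENTERPRISEPREMIUM'
Set-MgGroupLicense -GroupId '11111111-1111-1111-1111-111111111111' `
    -AddLicenses @(@{ SkuId = $sku.SkuId }) -RemoveLicenses @()
```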
## Delegate administrator roles
New Azure AD administrator roles are being added. Check the Azure portal or the
## Assign app access
-You can use Azure AD to assign group access to the [enterprise apps that are deployed in your Azure AD organization](../manage-apps/assign-user-or-group-access-portal.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context). If you combine dynamic groups with group assignment to apps, you can automate your user app access assignments as your organization grows. You'll need an Azure Active Directory Premium P1 or Premium P2 license to assign access to enterprise apps.
+You can use Azure AD to assign group access to [enterprise apps deployed in your Azure AD organization](../manage-apps/assign-user-or-group-access-portal.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context). If you combine dynamic groups with group assignment to apps, you can automate user app access assignments as your organization grows. You'll need an Azure Active Directory Premium P1 or Premium P2 license to assign access to enterprise apps.
Azure AD also gives you granular control of the data that flows between the app and the groups to which you assign access. In [Enterprise Applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps), open an app and select **Provisioning** to:
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
dirSyncEnabled |true false |user.dirSyncEnabled -eq true
| memberOf | Any string value (valid group object ID) | user.memberof -any (group.objectId -in ['value']) |
| mobile |Any string value or *null* | user.mobile -eq "value" |
| objectId |GUID of the user object | user.objectId -eq "11111111-1111-1111-1111-111111111111" |
-| onPremisesDistinguishedName (preview)| Any string value or *null* | user.onPremisesDistinguishedName -eq "value" |
+| onPremisesDistinguishedName | Any string value or *null* | user.onPremisesDistinguishedName -eq "value" |
| onPremisesSecurityIdentifier | On-premises security identifier (SID) for users who were synchronized from on-premises to the cloud. | user.onPremisesSecurityIdentifier -eq "S-1-1-11-1111111111-1111111111-1111111111-1111111" |
| passwordPolicies |None<br>DisableStrongPassword<br>DisablePasswordExpiration<br>DisablePasswordExpiration, DisableStrongPassword | user.passwordPolicies -eq "DisableStrongPassword" |
| physicalDeliveryOfficeName |Any string value or *null* | user.physicalDeliveryOfficeName -eq "value" |
active-directory Add Users Information Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-information-worker.md
Previously updated : 12/19/2018 Last updated : 10/07/2022
After an app is configured for self-service, application owners can use their ow
![Screenshot showing the Manage app sub-menu for the Salesforce app](media/add-users-iw/access-panel-manage-app.png)
-3. At the top of the users list, select **+**.
-
- ![Screenshot showing the plus symbol for adding members to the app](media/add-users-iw/access-panel-manage-app-add-user.png)
+3. At the top of the users list, select **+** on the right-hand side.
+ 4. In the **Add members** search box, type the email address for the guest user. Optionally, include a welcome message.
See the following articles on Azure AD B2B collaboration:
- [What is Azure AD B2B collaboration?](what-is-b2b.md) - [How do Azure Active Directory admins add B2B collaboration users?](add-users-administrator.md) - [B2B collaboration invitation redemption](redemption-experience.md)-- [External Identities pricing](external-identities-pricing.md)
+- [External Identities pricing](external-identities-pricing.md)
active-directory Authentication Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/authentication-conditional-access.md
Previously updated : 06/30/2022 Last updated : 10/12/2022
The following diagram illustrates the flow when email one-time passcode authenti
## Conditional Access for external users
-Organizations can enforce Conditional Access policies for external B2B collaboration and B2B direct connect users in the same way that theyΓÇÖre enabled for full-time employees and members of the organization. With the introduction of cross-tenant access settings, you can also trust MFA and device claims from external Azure AD organizations. This section describes important considerations for applying Conditional Access to users outside of your organization.
+Organizations can enforce [Conditional Access](../conditional-access/overview.md) policies for external B2B collaboration and B2B direct connect users in the same way that they're enabled for full-time employees and members of the organization. With the introduction of cross-tenant access settings, you can also trust MFA and device claims from external Azure AD organizations. This section describes important considerations for applying Conditional Access to users outside of your organization.
+
+### Assigning Conditional Access policies to external user types (preview)
+
+> [!NOTE]
+> This section describes a preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+When configuring a Conditional Access policy, you have granular control over the types of external users you want to apply the policy to. External users are categorized based on how they authenticate (internally or externally) and their relationship to your organization (guest or member).
+
+- **B2B collaboration guest users** - Most users who are commonly considered guests fall into this category. This B2B collaboration user has an account in an external Azure AD organization or an external identity provider (such as a social identity), and they have guest-level permissions in your organization. The user object created in your Azure AD directory has a UserType of Guest. This category includes B2B collaboration users who have been invited and who have used self-service sign-up.
+- **B2B collaboration member users** - This B2B collaboration user has an account in an external Azure AD organization or an external identity provider (such as a social identity) and member-level access to resources in your organization. This scenario is common in organizations consisting of multiple tenants, where users are considered part of the larger organization and need member-level access to resources in the organization's other tenants. The user object created in the resource Azure AD directory has a UserType of Member.
+- **B2B direct connect users** - External users who are able to access your resources via B2B direct connect, which is a mutual, two-way connection with another Azure AD organization that allows single sign-on access to certain Microsoft applications (currently, Microsoft Teams Connect shared channels). B2B direct connect users don't have a presence in your Azure AD organization, but are instead managed from within the application (for example, by the Teams shared channel owner).
+- **Local guest users** - Local guest users have credentials that are managed in your directory. Before Azure AD B2B collaboration was available, it was common to collaborate with distributors, suppliers, vendors, and others by setting up internal credentials for them and designating them as guests by setting the user object UserType to Guest.
+- **Service provider users** - Organizations that serve as cloud service providers for your organization (the isServiceProvider property in the Microsoft Graph [partner-specific configuration](/graph/api/resources/crosstenantaccesspolicyconfigurationpartner) is true).
+- **Other external users** - Applies to any users who don't fall into the categories above, but who are not considered internal members of your organization, meaning they don't authenticate internally via Azure AD, and the user object created in the resource Azure AD directory does not have a UserType of Member.
+
+Learn more about [Conditional Access user assignments](../conditional-access/concept-conditional-access-users-groups.md).
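As a rough illustration of targeting external users in a policy, the following is a minimal sketch using the Microsoft Graph PowerShell SDK. It uses the stable `GuestsOrExternalUsers` user target (all guests and external users) rather than the more granular preview categories above; the display name is an example, and the policy is created in report-only mode.

```powershell
# Minimal sketch, assuming the Microsoft Graph PowerShell SDK and consent to Policy.ReadWrite.ConditionalAccess.
# Targets all guests and external users (stable schema) and requires MFA, in report-only mode.
Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'

$policy = @{
    displayName   = 'Require MFA for guests and external users (report-only)'
    state         = 'enabledForReportingButNotEnforced'
    conditions    = @{
        users        = @{ includeUsers = @('GuestsOrExternalUsers') }
        applications = @{ includeApplications = @('All') }
    }
    grantControls = @{ operator = 'OR'; builtInControls = @('mfa') }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```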
### MFA for Azure AD external users
The following PowerShell cmdlets are available to *proof up* or request MFA regi
Reset-MsolStrongAuthenticationMethodByUpn -UserPrincipalName gsamoogle_gmail.com#EXT#@WoodGroveAzureAD.onmicrosoft.com
```
+### Authentication strength policies for external users
+
+[Authentication strength](https://aka.ms/b2b-auth-strengths) is a Conditional Access control that lets you define a specific combination of multifactor authentication (MFA) methods that an external user must complete to access your resources. This control is especially useful for restricting external access to sensitive apps in your organization because you can enforce specific authentication methods, such as a phishing-resistant method, for external users.
+
+You also have the ability to apply authentication strength to the different types of [guest or external users](#assigning-conditional-access-policies-to-external-user-types-preview) that you collaborate or connect with. This means you can enforce authentication strength requirements that are unique to your B2B collaboration, B2B direct connect, and other external access scenarios.
+
+Azure AD provides three [built-in authentication strengths](https://aka.ms/b2b-auth-strengths):
+
+- Multifactor authentication strength
+- Passwordless MFA strength
+- Phishing-resistant MFA strength
+
+You can use one of these built-in strengths or create a custom authentication strength policy based on the authentication methods you want to require.
+
+> [!NOTE]
+> Currently, you can only apply authentication strength policies to external users who authenticate with Azure AD. For email one-time passcode, SAML/WS-Fed, and Google federation users, use the MFA grant control to require MFA.
+
+When you apply an authentication strength policy to external Azure AD users, the policy works together with [MFA trust settings](cross-tenant-access-settings-b2b-collaboration.md#to-change-inbound-trust-settings-for-mfa-and-device-claims) in your cross-tenant access settings to determine where and how the external user must perform MFA. An Azure AD user first authenticates using their own account in their home Azure AD tenant. Then when this user tries to access your resource, Azure AD applies the authentication strength Conditional Access policy and checks to see if you've enabled MFA trust.
+
+In external user scenarios, the authentication methods that are acceptable for fulfilling authentication strength vary, depending on whether the user is completing MFA in their home tenant or the resource tenant. The following table indicates the acceptable methods in each tenant. If a resource tenant has opted to trust claims from external Azure AD organizations, only those claims listed in the "Home tenant" column below will be accepted by the resource tenant for MFA fulfillment. If the resource tenant has disabled MFA trust, the external user must complete MFA in the resource tenant using one of the methods listed in the "Resource tenant" column.
+
+##### Table 1. Authentication strength MFA methods for external users
+
+|Authentication method |Home tenant | Resource tenant |
+||||
+|SMS as second factor | &#x2705; | &#x2705; |
+|Voice call | &#x2705; | &#x2705; |
+|Microsoft Authenticator push notification | &#x2705; | &#x2705; |
+|Microsoft Authenticator phone sign-in | &#x2705; | &#x2705; |
+|OATH software token | &#x2705; | &#x2705; |
+|OATH hardware token | &#x2705; | |
+|FIDO2 security key | &#x2705; | |
+|Windows Hello for Business | &#x2705; | |
++
+To configure a Conditional Access policy that applies authentication strength requirements to external users or guests, see [Conditional Access: Require an authentication strength for external users](../conditional-access/howto-conditional-access-policy-authentication-strength-external.md).
+
+#### User experience for external Azure AD users
+
+Authentication strength policies work together with [MFA trust settings](cross-tenant-access-settings-b2b-collaboration.md#to-change-inbound-trust-settings-for-mfa-and-device-claims) in your cross-tenant access settings to determine where and how the external user must perform MFA.
+
+First, an Azure AD user authenticates with their own account in their home tenant. Then when this user tries to access your resource, Azure AD applies the authentication strength Conditional Access policy and checks to see if you've enabled MFA trust.
+
+- **If MFA trust is enabled**, Azure AD checks the user's authentication session for a claim indicating that MFA has been fulfilled in the user's home tenant. (See [Table 1](#table-1-authentication-strength-mfa-methods-for-external-users) for authentication methods that are acceptable for MFA fulfillment when completed in an external user's home tenant.) If the session contains a claim indicating that MFA policies have already been met in the user's home tenant and the methods satisfy the authentication strength requirements, the user is allowed access. Otherwise, Azure AD presents the user with a challenge to complete MFA in the home tenant using an acceptable authentication method. The MFA method must be enabled in the home tenant, and the user must be able to register for it.
+- **If MFA trust is disabled**, Azure AD presents the user with a challenge to complete MFA in the resource tenant using an acceptable authentication method. (See [Table 1](#table-1-authentication-strength-mfa-methods-for-external-users) for authentication methods that are acceptable for MFA fulfillment by an external user.)
+
+If the user is unable to complete MFA, or if a Conditional Access policy (such as a compliant device policy) prevents them from registering, access is blocked.
+ ### Device compliance and hybrid Azure AD joined device policies Organizations can use Conditional Access policies to require users' devices to be managed by Microsoft Intune. Such policies can block external user access, because an external user can't register their unmanaged device with the resource organization. Devices can only be managed by a user's home tenant.
active-directory Azure Ad Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/azure-ad-account.md
Previously updated : 08/09/2021 Last updated : 10/06/2022
-# Azure Active Directory (Azure AD) identity provider for External Identities
+# Add Azure Active Directory (Azure AD) as an identity provider for External Identities
Azure Active Directory is available as an identity provider option for B2B collaboration by default. If an external guest user has an Azure AD account through work or school, they can redeem your B2B collaboration invitations or complete your sign-up user flows using their Azure AD account. ## Guest sign-in using Azure Active Directory accounts
-Azure Active Directory is available in the list of External Identities identity providers by default. No further configuration is needed to allow guest users to sign in with their Azure AD account using either the invitation flow or a self-service sign-up user flow.
+Azure Active Directory is available in the list of External Identities identity providers by default. No further configuration is needed to allow guest users to sign in with their Azure AD account using either the invitation flow or a [self-service sign-up user flow](self-service-sign-up-overview.md).
![Azure AD account in the identity providers list](media/azure-ad-account/azure-ad-account-identity-provider.png)
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
You can configure organization-specific settings by adding an organization and m
- For B2B collaboration with other Azure AD organizations, use cross-tenant access settings to manage inbound and outbound B2B collaboration and scope access to specific users, groups, and applications. You can set a default configuration that applies to all external organizations, and then create individual, organization-specific settings as needed. Using cross-tenant access settings, you can also trust multi-factor (MFA) and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations. > [!TIP]
- >If you intend to trust inbound MFA for external users, make sure you don't have an [Identity Protection policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) in place that requires external users to register for MFA. When both of these policies are present, external users wonΓÇÖt be able to satisfy the requirements for access. If you want to enforce the Identity Protection MFA registration policy, be sure to exclude external users.
+ >We recommend excluding external users from the [Identity Protection MFA registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md), if you are going to [trust MFA for external users](authentication-conditional-access.md#mfa-for-azure-ad-external-users). When both policies are present, external users won't be able to satisfy the requirements for access.
- For B2B direct connect, use organizational settings to set up a mutual trust relationship with another Azure AD organization. Both your organization and the external organization need to mutually enable B2B direct connect by configuring inbound and outbound cross-tenant access settings.
active-directory Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/microsoft-account.md
Previously updated : 08/09/2021 Last updated : 09/29/2022 +
+#Customer intent: As an Azure AD administrator user, I want to set up invitation flow or a self-service sign-up user flow for guest users, so they can sign into my Azure AD apps with their Microsoft account (MSA).
-# Microsoft account (MSA) identity provider for External Identities
+# Add Microsoft account (MSA) as an identity provider for External Identities
Your B2B guest users can use their own personal Microsoft accounts for B2B collaboration without further configuration. Guest users can redeem your B2B collaboration invitations or complete your sign-up user flows using their personal Microsoft account.
-Microsoft accounts are set up by a user to get access to consumer-oriented Microsoft products and cloud services, such as Outlook, OneDrive, Xbox LIVE, or Microsoft 365. The account is created and stored in the Microsoft consumer identity account system that's run by Microsoft.
+Microsoft accounts are set up by a user to get access to consumer-oriented Microsoft products and cloud services, such as Outlook, OneDrive, Xbox LIVE, or Microsoft 365. The account is created and stored in the Microsoft consumer identity account system, run by Microsoft.
## Guest sign-in using Microsoft accounts
-Microsoft account is available in the list of External Identities identity providers by default. No further configuration is needed to allow guest users to sign in with their Microsoft account using either the invitation flow or a self-service sign-up user flow.
+Microsoft account is available by default in the list of **External Identities** > **All identity providers**. No further configuration is needed to allow guest users to sign in with their Microsoft account using either the invitation flow, or a self-service sign-up user flow.
-![Microsoft account in the identity providers list](media/microsoft-account/microsoft-account-identity-provider.png)
### Microsoft account in the invitation flow When you [invite a guest user](add-users-administrator.md) to B2B collaboration, you can specify their Microsoft account as the email address they'll use to sign in.
-![Invite using a Microsoft account](media/microsoft-account/microsoft-account-invite.png)
### Microsoft account in self-service sign-up user flows
-Microsoft account is an identity provider option for your self-service sign-up user flows. Users can sign up for your applications using their own Microsoft accounts. First, you'll need to [enable self-service sign-up](self-service-sign-up-user-flow.md) for your tenant. Then you can set up a user flow for the application and select Microsoft account as one of the sign-in options.
+Microsoft account is an identity provider option for your self-service sign-up user flows. Users can sign up for your applications using their own Microsoft accounts. First, you'll need to [enable self-service sign-up](self-service-sign-up-user-flow.md) for your tenant. Then you can set up a user flow for the application, and select Microsoft account as one of the sign-in options.
-![Microsoft account in a self-service sign-up user flow](media/microsoft-account/microsoft-account-user-flow.png)
## Verifying the application's publisher domain
-As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../develop/howto-configure-publisher-domain.md) ***and*** the companyΓÇÖs identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../develop/publisher-verification-overview.md) about this change.) Note that for Azure AD user flows, the publisherΓÇÖs domain appears only when using a Microsoft account or other [Azure AD tenant](azure-ad-account.md) as the identity provider. To meet these new requirements, do the following:
+As of November 2020, new application registrations show up as unverified in the user consent prompt, unless [the application's publisher domain is verified](../develop/howto-configure-publisher-domain.md), ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../develop/publisher-verification-overview.md) about this change.) For Azure AD user flows, the publisher's domain appears only when using a Microsoft account or other [Azure AD tenant](azure-ad-account.md) as the identity provider. To meet these new requirements, follow the steps below:
1. [Verify your company identity using your Microsoft Partner Network (MPN) account](/partner-center/verification-responses). This process verifies information about your company and your company's primary contact. 1. Complete the publisher verification process to associate your MPN account with your app registration using one of the following options:
As of November 2020, new application registrations show up as unverified in the
## Next steps - [Add Azure Active Directory B2B collaboration users](add-users-administrator.md)-- [Add self-service sign-up to an app](self-service-sign-up-user-flow.md)
+- [Add self-service sign-up to an app](self-service-sign-up-user-flow.md)
active-directory Self Service Sign Up User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-user-flow.md
Previously updated : 04/26/2022 Last updated : 10/12/2022
Next, you'll create the user flow for self-service sign-up and add it to an appl
![Add a new user flow button](media/self-service-sign-up-user-flow/new-user-flow.png)
-5. On the **Create** page, enter a **Name** for the user flow. Note that the name is automatically prefixed with **B2X_1_**.
-6. In the **Identity providers** list, select one or more identity providers that your external users can use to log into your application. **Azure Active Directory Sign up** is selected by default. (See [Before you begin](#before-you-begin) earlier in this article to learn how to add identity providers.)
-7. Under **User attributes**, choose the attributes you want to collect from the user. For additional attributes, select **Show more**. For example, select **Show more**, and then choose attributes and claims for **Country/Region**, **Display Name**, and **Postal Code**. Select **OK**.
+5. Select the user flow type (for example, **Sign up and sign in**), and then select the version (**Recommended** or **Preview**).
+6. On the **Create** page, enter a **Name** for the user flow. Note that the name is automatically prefixed with **B2X_1_**.
+7. In the **Identity providers** list, select one or more identity providers that your external users can use to log into your application. **Azure Active Directory Sign up** is selected by default. (See [Before you begin](#before-you-begin) earlier in this article to learn how to add identity providers.)
+8. Under **User attributes**, choose the attributes you want to collect from the user. For additional attributes, select **Show more**. For example, select **Show more**, and then choose attributes and claims for **Country/Region**, **Display Name**, and **Postal Code**. Select **OK**.
![Create a new user flow page](media/self-service-sign-up-user-flow/create-user-flow.png)
You can choose the order in which the attributes are displayed on the sign-up page.
## Add applications to the self-service sign-up user flow
-Now you can associate applications with the user flow.
+Now you'll associate applications with the user flow to enable sign-up for those applications. New users who access the associated applications will be presented with your new self-service sign-up experience.
1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator. 2. Under **Azure services**, select **Azure Active Directory**.
Now you can associate applications with the user flow.
- [Add Facebook to your list of social identity providers](facebook-federation.md) - [Use API connectors to customize and extend your user flows via web APIs](api-connectors-overview.md) - [Add custom approval workflow to your user flow](self-service-sign-up-add-approvals.md)
+- [Learn more about initiating an OAuth 2.0 authorization code flow](../develop/v2-oauth2-auth-code-flow.md#request-an-authorization-code)
active-directory 4 Secure Access Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/4-secure-access-groups.md
Both Azure AD security groups and Microsoft 365 groups can be created from the A
| What can the group contain?| Users<br>Groups<br>Service principals<br>Devices| Users only |
| Where is the group created?| Azure AD portal<br>Microsoft 365 portal (if to be mail enabled)<br>PowerShell<br>Microsoft Graph<br>End user portal| Microsoft 365 portal<br>Azure AD portal<br>PowerShell<br>Microsoft Graph<br>In Microsoft 365 applications |
| Who creates by default?| Administrators <br>Users| Administrators<br>Users |
-| Who can be added by default?| Internal users (tenant members)| Tenant members and guests from any organization |
+| Who can be added by default?| Internal users (tenant members) and guest users | Tenant members and guests from any organization |
| What does it grant access to?| Only resources to which it's assigned.| All group-related resources:<br>(Group mailbox, site, team, chats, and other included Microsoft 365 resources)<br>Any other resources to which group is added |
| Can be used with| Conditional Access<br>Entitlement Management<br>Group licensing| Conditional Access<br>Entitlement Management<br>Sensitivity labels |
active-directory Active Directory Access Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-access-create-new-tenant.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Create a new tenant for your organization After you sign in to the Azure portal, you can create a new tenant for your organization. Your new tenant represents your organization and helps you to manage a specific instance of Microsoft cloud services for your internal and external users.-
+>[!Important]
+>If users who have a business need to create tenants can't create them, review the user settings page to ensure that **Tenant Creation** isn't switched off. If it's switched off, ask your Global Administrator to grant the Tenant Creator role to those who need it.
### To create a new tenant 1. Sign in to your organization's [Azure portal](https://portal.azure.com/).
active-directory License Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/license-users-groups.md
There are several license plans available for the Azure AD service, including:
For specific information about each license plan and the associated licensing details, see [What license do I need?](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). To sign up for Azure AD premium license plans see [here](./active-directory-get-started-premium.md).
-Not all Microsoft services are available in all locations. Before a license can be assigned to a group, you must specify the **Usage location** for all members. You can set this value in the **Azure Active Directory &gt; Users &gt; Profile &gt; Settings** area in Azure AD. Any user whose usage location isn't specified inherits the location of the Azure AD organization.
+Not all Microsoft services are available in all locations. Before a license can be assigned to a group, you must specify the **Usage location** for all members. You can set this value in the **Azure Active Directory &gt; Users &gt; Profile &gt; Settings** area in Azure AD. When you assign licenses to a group, or perform bulk updates such as disabling the synchronization status for the organization, any user whose usage location isn't specified inherits the location of the Azure AD organization.
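Because assignment fails for members without a usage location, you might want to set it before assigning licenses. A minimal sketch with the Microsoft Graph PowerShell SDK follows; the user principal name and location are example values.

```powershell
# Minimal sketch, assuming the Microsoft Graph PowerShell SDK and consent to User.ReadWrite.All.
# The user principal name and usage location are example values.
Connect-MgGraph -Scopes 'User.ReadWrite.All'
Update-MgUser -UserId 'alice@contoso.com' -UsageLocation 'US'
```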
## View license plans and plan details
Make sure that anyone needing to use a licensed Azure AD service has the appropr
The **Assign license** page updates to show that a user is selected and that the assignments are configured. > [!NOTE]
- > Not all Microsoft services are available in all locations. Before a license can be assigned to a user, you must specify the **Usage location**. You can set this value in the **Azure Active Directory &gt; Users &gt; Profile &gt; Settings** area in Azure AD. Any user whose usage location is not specified inherits the location of the Azure AD organization.
+ > Not all Microsoft services are available in all locations. Before a license can be assigned to a user, you must specify the **Usage location**. You can set this value in the **Azure Active Directory &gt; Users &gt; Profile &gt; Settings** area in Azure AD. When you assign licenses to a group, or perform bulk updates such as disabling the synchronization status for the organization, any user whose usage location isn't specified inherits the location of the Azure AD organization.
1. Select **Assign**.
active-directory Secure With Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-single-tenant.md
Last updated 7/5/2022 -+
Azure RBAC allows you to design an administration model with granular scopes and
* **Resource group** - You can assign roles to specific resource groups so that they don't impact any other resource groups. In the example above, the Benefits engineering team can assign the Contributor role to the test lead so they can manage the test DB and the test web app, or to add more resources.
-* **Individual resources** - You can assign roles to specific resources so that they don't impact any other resources. In the example above, the Benefits engineering team can assign a data analyst the Cosmos DB Account Reader role just for the test instance of the Cosmos DB, without interfering with the test web app, or any production resource.
+* **Individual resources** - You can assign roles to specific resources so that they don't impact any other resources. In the example above, the Benefits engineering team can assign a data analyst the Cosmos DB Account Reader role just for the test instance of the Azure Cosmos DB database, without interfering with the test web app or any production resource.
For more information, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md) and [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md).
active-directory Service Accounts Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-managed-identities.md
Last updated 08/20/2022 -+
With managed identities the source system can obtain a token from Azure AD witho
The target system needs to authenticate (identify) and authorize the source system before allowing access. When the target service supports Azure AD-based authentication it accepts an access token issued by Azure AD.
-Azure has a control plane and a data plane. In the control plane, you create resources, and in the data plane you access them. For example, you create a Cosmos database in the control plane, but query it in the data plane.
+Azure has a control plane and a data plane. In the control plane, you create resources, and in the data plane you access them. For example, you create an Azure Cosmos DB database in the control plane, but query it in the data plane.
Once the target system accepts the token for authentication, it can support different mechanisms for authorization for its control plane and data plane.
There are several ways in which you can find managed identities:
You can get a list of all managed identities in your tenant with the following GET request to Microsoft Graph:
-`https://graph.microsoft.com/v1.0/servicePrincipals?$filter=(servicePrincipalType eq 'ManagedIdentity') `
+`https://graph.microsoft.com/v1.0/servicePrincipals?$filter=(servicePrincipalType eq 'ManagedIdentity')`
You can filter these requests. For more information, see the Graph documentation for [GET servicePrincipal](/graph/api/serviceprincipal-get).
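The same query can be run from PowerShell. The following is a minimal sketch using the Microsoft Graph PowerShell SDK; it assumes you can consent to the `Application.Read.All` scope.

```powershell
# Minimal sketch, assuming the Microsoft Graph PowerShell SDK and consent to Application.Read.All.
Connect-MgGraph -Scopes 'Application.Read.All'

# List all managed identities in the tenant.
Get-MgServicePrincipal -Filter "servicePrincipalType eq 'ManagedIdentity'" -All |
    Select-Object DisplayName, Id, AppId
```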
You can assess the security of managed identities in the following ways:
* Examine privileges and ensure that the least privileged model is selected. Use the following PowerShell cmdlet to get the permissions assigned to your managed identities.
- ` Get-AzureADServicePrincipal | % { Get-AzureADServiceAppRoleAssignment -ObjectId $_ }`
+ `Get-AzureADServicePrincipal | % { Get-AzureADServiceAppRoleAssignment -ObjectId $_.ObjectId }`
-* Ensure the managed identity is not part of any privileged groups, such as an administrators group.
-ΓÇÄYou can do this by enumerating the members of your highly privileged groups with PowerShell.
+* Ensure the managed identity is not part of any privileged groups, such as an administrators group. You can do this by enumerating the members of your highly privileged groups with PowerShell.
`Get-AzureADGroupMember -ObjectId <String> [-All <Boolean>] [-Top <Int32>] [<CommonParameters>]`
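For example, the following minimal sketch lists any service principals that are members of a highly privileged group. It assumes the AzureAD module and a connected session; the group name is hypothetical.

```powershell
# Minimal sketch, assuming the AzureAD module and a connected session (Connect-AzureAD).
# 'Privileged Access' is a hypothetical group name used only for illustration.
$group = Get-AzureADGroup -SearchString 'Privileged Access' | Select-Object -First 1
Get-AzureADGroupMember -ObjectId $group.ObjectId -All $true |
    Where-Object { $_.ObjectType -eq 'ServicePrincipal' } |
    Select-Object DisplayName, ObjectId
```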
If you are using a service principal or an Azure AD user account, evaluate if y
[Governing Azure service accounts](service-accounts-governing-azure.md) [Introduction to on-premises service accounts](service-accounts-on-premises.md)-
-
-
-
-
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
You can restrict default permissions for member users in the following ways:
| Permission | Setting explanation |
| - | |
| **Register applications** | Setting this option to **No** prevents users from creating application registrations. You can then grant the ability back to specific individuals by adding them to the application developer role. |
+| **Create tenants** | Setting this option to **No** prevents users from creating new Azure AD or Azure AD B2C tenants. You can grant the ability back to specific individuals by adding them to the Tenant Creator role. |
| **Allow users to connect work or school account with LinkedIn** | Setting this option to **No** prevents users from connecting their work or school account with their LinkedIn account. For more information, see [LinkedIn account connections data sharing and consent](../enterprise-users/linkedin-user-consent.md). | | **Create security groups** | Setting this option to **No** prevents users from creating security groups. Global administrators and user administrators can still create security groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). | | **Create Microsoft 365 groups** | Setting this option to **No** prevents users from creating Microsoft 365 groups. Setting this option to **Some** allows a set of users to create Microsoft 365 groups. Global administrators and user administrators can still create Microsoft 365 groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). |
-| **Restrict access to Azure AD administration portal** | **What does this switch do?** <br>**No** lets non-administrators browse the Azure AD administration portal. <br>**Yes** Restricts non-administrators from browsing the Azure AD administration portal. Non-administrators who are owners of groups or applications are unable to use the Azure portal to manage their owned resources. </p><p></p><p>**What does it not do?** <br> It does not restrict access to Azure AD data using PowerShell, Microsoft GraphAPI, or other clients such as Visual Studio. <br>It does not restrict access as long as a user is assigned a custom role (or any role). <br>It does not restrict access to Entra Portal. </p><p></p><p>**When should I use this switch?** <br>Use this to prevent users from misconfiguring the resources that they own. </p><p></p><p>**When should I not use this switch?** <br>Do not use this switch as a security measure. Instead, create a Conditional Access policy that targets Microsoft Azure Management will block non-administrators access to [Microsoft Azure Management](../conditional-access/concept-conditional-access-cloud-apps.md#microsoft-azure-management). </p><p></p><p> **How do I grant only a specific non-administrator users the ability to use the Azure AD administration portal?** <br> Set this option to **Yes**, then assign them a role like global reader. </p><p></p><p>**Restrict access to the Entra administration portal** <br>A Conditional Access policy that targets Microsoft Azure Management will target access to all Azure management. |
+| **Restrict access to Azure AD administration portal** | **What does this switch do?** <br>**No** lets non-administrators browse the Azure AD administration portal. <br>**Yes** restricts non-administrators from browsing the Azure AD administration portal. Non-administrators who are owners of groups or applications are unable to use the Azure portal to manage their owned resources. </p><p></p><p>**What does it not do?** <br> It does not restrict access to Azure AD data using PowerShell, Microsoft Graph API, or other clients such as Visual Studio. <br>It does not restrict access as long as a user is assigned a custom role (or any role). </p><p></p><p>**When should I use this switch?** <br>Use this to prevent users from misconfiguring the resources that they own. </p><p></p><p>**When should I not use this switch?** <br>Do not use this switch as a security measure. Instead, create a Conditional Access policy that targets [Microsoft Azure Management](../conditional-access/concept-conditional-access-cloud-apps.md#microsoft-azure-management) to block non-administrator access to Azure management. </p><p></p><p> **How do I grant only specific non-administrator users the ability to use the Azure AD administration portal?** <br> Set this option to **Yes**, then assign them a role like global reader. </p><p></p><p>**Restrict access to the Entra administration portal** <br>A Conditional Access policy that targets Microsoft Azure Management will target access to all Azure management. |
| **Read other users** | This setting is available in Microsoft Graph and PowerShell only. Setting this flag to `$false` prevents all non-admins from reading user information from the directory. This flag does not prevent reading user information in other Microsoft services like Exchange Online.</p><p>This setting is meant for special circumstances, so we don't recommend setting the flag to `$false`. | > [!NOTE]
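The "Read other users" flag has no portal switch. A minimal Microsoft Graph PowerShell sketch of changing it might look like the following; the `AllowedToReadOtherUsers` property name on the default user role permissions is an assumption here, not confirmed by the text above:

```powershell
# Connect with a scope that allows updating the authorization policy.
Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"

# Sketch only: prevent non-admins from reading other users' information in the directory.
# AllowedToReadOtherUsers is assumed to be the property behind the "Read other users" flag.
Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{
    "AllowedToReadOtherUsers" = $false
}
```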
active-directory Whats New Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md
+
+ Title: What's new in Sovereign Clouds? Release notes - Azure Active Directory | Microsoft Docs
+description: Learn what is new with Azure Active Directory Sovereign Cloud.
+++++ Last updated : 08/03/2022+++++
+# What's new in Azure Active Directory Sovereign Clouds?
++
+Azure AD receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about:
+
+- [Azure Government](/azure/azure-government/documentation-government-welcome)
+
+This page is updated monthly, so revisit it regularly.
+++
+## September 2022
+
+### General Availability - Azure AD certificate-based authentication
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** User Authentication
+
+
+Azure AD certificate-based authentication (CBA) enables customers to allow or require users to authenticate with X.509 certificates against their Azure Active Directory (Azure AD) for applications and browser sign-in. This feature enables customers to adopt phishing-resistant authentication and authenticate with an X.509 certificate against their Enterprise Public Key Infrastructure (PKI). For more information, see: [Overview of Azure AD certificate-based authentication (Preview)](../authentication/concept-certificate-based-authentication.md).
+
++
+### General Availability - Audited BitLocker Recovery
+
+**Type:** New feature
+**Service category:** Device Access Management
+**Product capability:** Device Lifecycle Management
+
+
+BitLocker keys are sensitive security items. Audited BitLocker recovery ensures that when BitLocker keys are read, an audit log is generated so that you can trace who accesses this information for given devices. For more information, see: [View or copy BitLocker keys](../devices/device-management-azure-portal.md#view-or-copy-bitlocker-keys).
+
++
+### General Availability - More device properties supported for Dynamic Device groups
+
+**Type:** Changed feature
+**Service category:** Group Management
+**Product capability:** Directory
+
+
+You can now create or update dynamic device groups using the following properties:
+
+- deviceManagementAppId
+- deviceTrustType
+- extensionAttribute1-15
+- profileType
+
+For more information on how to use this feature, see: [Dynamic membership rule for device groups](../enterprise-users/groups-dynamic-membership.md#rules-for-devices)
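As a hedged illustration, a dynamic device group that keys off one of the newly supported properties could be created with Microsoft Graph PowerShell roughly like this (group name and rule value are hypothetical):

```powershell
# Sketch: create a dynamic device group whose membership rule uses extensionAttribute1.
Connect-MgGraph -Scopes "Group.ReadWrite.All"

New-MgGroup -DisplayName "Kiosk devices" `
    -MailEnabled:$false -MailNickname "kioskdevices" -SecurityEnabled `
    -GroupTypes @("DynamicMembership") `
    -MembershipRule '(device.extensionAttribute1 -eq "kiosk")' `
    -MembershipRuleProcessingState "On"
```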
+
+++
+### General Availability - No more waiting, provision groups on demand into your SaaS applications.
+
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Identity Lifecycle Management
+
+
+Pick a group of up to five members and provision them into your third-party applications in seconds. Get started testing, troubleshooting, and provisioning to non-Microsoft applications such as ServiceNow, ZScaler, and Adobe. For more information, see: [On-demand provisioning in Azure Active Directory](../app-provisioning/provision-on-demand.md).
+
++
+### General Availability - Devices Overview
+
+**Type:** New feature
+**Service category:** Device Registration and Management
+**Product capability:** Device Lifecycle Management
+
+
+
+The new Device Overview in the Azure Active Directory portal provides meaningful and actionable insights about devices in your tenant.
+
+In the devices overview, you can view the number of total devices, stale devices, noncompliant devices, and unmanaged devices. You'll also find links to Intune, Conditional Access, BitLocker keys, and basic monitoring. For more information, see: [Manage device identities by using the Azure portal](../devices/device-management-azure-portal.md).
+
++
+### General Availability - Support for Linux as Device Platform in Azure AD Conditional Access
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** User Authentication
+
+
+
+Added support for "Linux" device platform in Azure AD Conditional Access.
+
+An admin can now require that a user be on a compliant Linux device, managed by Intune, to sign in to a selected service (for example, 'all cloud apps' or 'Office 365'). For more information, see: [Device platforms](../conditional-access/concept-conditional-access-conditions.md#device-platforms)
+
++
+### General Availability - Cross-tenant access settings for B2B collaboration
+
+**Type:** Changed feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+
+
+Cross-tenant access settings enable you to control how users in your organization collaborate with members of external Azure AD organizations. Now you'll have granular inbound and outbound access control settings that work on a per org, user, group, and application basis. These settings also make it possible for you to trust security claims from external Azure AD organizations like multi-factor authentication (MFA), device compliance, and hybrid Azure AD joined devices. For more information, see: [Cross-tenant access with Azure AD External Identities](../external-identities/cross-tenant-access-overview.md).
+
++
+### General Availability - Location Aware Authentication using GPS from Authenticator App
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+
+
+Admins can now enforce Conditional Access policies based on GPS location from the Microsoft Authenticator app. For more information, see: [Named locations](../conditional-access/location-condition.md#named-locations).
+
++
+### General Availability - My Sign-ins now supports org switching and improved navigation
+
+**Type:** Changed feature
+**Service category:** MFA
+**Product capability:** End User Experiences
+
+
+
+We've improved the My Sign-ins experience to support organization switching. Users who are guests in other tenants can now easily switch organizations and sign in to manage their security info and view activity. More improvements were made to make it easier to move from My Sign-ins directly to other end-user portals such as My Account, My Apps, My Groups, and My Access. For more information, see: [Sign-in logs in Azure Active Directory - preview](../reports-monitoring/concept-all-sign-ins.md)
+
++
+### General Availability - Temporary Access Pass is now available
+
+**Type:** New feature
+**Service category:** MFA
+**Product capability:** User Authentication
+
+
+
+Temporary Access Pass (TAP) is now generally available. TAP can be used to securely register passwordless methods such as Phone Sign-in and phishing-resistant methods such as FIDO2, and it even helps with Windows onboarding (AADJ and WHFB). TAP also makes recovery easier when a user has lost or forgotten their strong authentication methods and needs to sign in to register new authentication methods. For more information, see: [Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md).
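As a rough sketch (the user and lifetime values are placeholders), a TAP can be issued with Microsoft Graph PowerShell:

```powershell
# Sketch: issue a one-time Temporary Access Pass valid for 60 minutes.
Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"

New-MgUserAuthenticationTemporaryAccessPassMethod -UserId "user@contoso.com" `
    -LifetimeInMinutes 60 -IsUsableOnce:$true
```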
+
++
+### General Availability - Ability to force reauthentication on Intune enrollment, risky sign-ins, and risky users
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+
+
+In some scenarios, customers may want to require fresh authentication every time a user performs specific actions. Sign-in frequency set to **Every time** supports requiring a user to reauthenticate during Intune device enrollment, password change for risky users, and risky sign-ins.
+
+More information: [Configure authentication session management - Azure Active Directory - Microsoft Entra | Microsoft Docs](../conditional-access/howto-conditional-access-session-lifetime.md#require-reauthentication-every-time).
+
++
+### General Availability - Non-interactive risky sign-ins
+
+**Type:** Changed feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+
+
+Identity Protection now emits risk (such as unfamiliar sign-in properties) on non-interactive sign-ins. Admins can now find these non-interactive risky sign-ins using the "sign-in type" filter in the Risky sign-ins report. For more information, see: [How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md).
+++
+
+### General Availability - Workload Identity Federation with App Registrations are available now
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** Developer Experience
+
+
+
+Entra Workload Identity Federation allows developers to exchange tokens issued by another identity provider with Azure AD tokens, without needing secrets. It eliminates the need to store and manage credentials inside code or secret stores to access Azure AD protected resources such as Azure and Microsoft Graph. By removing the secrets required to access Azure AD protected resources, workload identity federation can improve the security posture of your organization. This feature also reduces the burden of secret management and minimizes the risk of service downtime due to expired credentials.
+
+For more information on this capability and supported scenarios, see: [Workload identity federation](../develop/workload-identity-federation.md).
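As an illustrative sketch (the app object ID, repository, and credential name are placeholders), a federated identity credential can be added to an app registration with Microsoft Graph PowerShell:

```powershell
# Sketch: trust tokens issued by GitHub Actions for a specific repo and branch,
# so they can be exchanged for Azure AD tokens without a client secret.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

New-MgApplicationFederatedIdentityCredential -ApplicationId "<app-object-id>" `
    -Name "github-main-branch" `
    -Issuer "https://token.actions.githubusercontent.com" `
    -Subject "repo:contoso/webapp:ref:refs/heads/main" `
    -Audiences @("api://AzureADTokenExchange")
```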
+
+++
+### General Availability - Continuous Access Evaluation
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** Access Control
+
+
+
+With continuous access evaluation (CAE), critical security events and policies are evaluated in real time. These include account disablement, password reset, and location change. For more information, see: [Continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md)
+
+++
+### Public Preview - Protect against bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD
+
+**Type:** New feature
+**Service category:** MS Graph
+**Product capability:** Identity Security & Protection
++
+We're delighted to announce a new security protection that prevents bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a compromised federated account can't bypass Azure AD Multi-Factor Authentication by falsely claiming that multifactor authentication has already been performed by the identity provider. The protection can be enabled via the new security setting [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values&preserve-view=true).
+
+We highly recommend enabling this new protection when using Azure AD Multi-Factor Authentication as the multifactor authentication method for your federated users. To learn more about the protection and how to enable it, visit [Enable protection to prevent by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-ad-multi-factor-authentication-when-federated-with-azure-ad).
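A hedged sketch of setting the new property through the beta Graph endpoint follows; the domain and federation configuration ID are placeholders, and the request shape is assumed from the linked reference rather than confirmed by the text above:

```powershell
# Sketch: reject MFA claims from the federated IdP so Azure AD MFA is always enforced.
Connect-MgGraph -Scopes "Domain.ReadWrite.All"

$body = @{ federatedIdpMfaBehavior = "rejectMfaByFederatedIdp" } | ConvertTo-Json
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/domains/contoso.com/federationConfiguration/<federation-config-id>" `
    -Body $body
```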
+
+++
+## Next steps
+<!-- Add a context sentence for the following links -->
+- [What's new in Azure Active Directory?](whats-new.md)
+- [Archive for What's new in Azure Active Directory?](whats-new-archive.md)
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md
To better understand entitlement management and its documentation, you can refer
[!INCLUDE [Azure AD Premium P2 license](../../../includes/active-directory-p2-license.md)]
-Specialized clouds, such as Azure Germany, and Azure China 21Vianet, aren't currently available for use.
- ### How many licenses must you have? Ensure that your directory has at least as many Azure AD Premium P2 licenses as you have:
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
To ensure timing accuracy of scheduled workflows, it's crucial to consider:
6. Select **Add attribute**. 7. Fill in the following information: - Mapping Type: Direct
- - Source attribute: msDS-cloudExtensionAttribute1
+ - Source attribute: extensionAttribute1
- Default value: Leave blank - Target attribute: employeeHireDate - Apply this mapping: Always
For more information on attributes, see [Attribute mapping in Azure AD Connect c
## How to create a custom sync rule in Azure AD Connect for EmployeeHireDate The following example walks you through setting up a custom synchronization rule that synchronizes an Active Directory attribute to the employeeHireDate attribute in Azure AD.
- 1. Open a PowerShell window as administrator and run `Set-ADSyncScheduler -SyncCycleEnabled $false`.
+ 1. Open a PowerShell window as administrator and run `Set-ADSyncScheduler -SyncCycleEnabled $false` to disable the scheduler.
2. Go to Start\Azure AD Connect\ and open the Synchronization Rules Editor 3. Ensure the direction at the top is set to **Inbound**. 4. Select **Add Rule.**
The following example will walk you through setting up a custom synchronization
![Screenshot of create outbound synchronization rule transformations.](media/how-to-lifecycle-workflow-sync-attributes/create-outbound-rule-transformations.png) 16. Select **Add**. 17. Close the Synchronization Rules Editor
 + 18. Enable the scheduler again by running `Set-ADSyncScheduler -SyncCycleEnabled $true`. A short verification sketch follows these steps.
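A short verification sketch for the scheduler steps above, using the ADSync module on the Azure AD Connect server (the full sync is optional):

```powershell
# Confirm the scheduler is enabled again after step 18.
Get-ADSyncScheduler | Select-Object SyncCycleEnabled, NextSyncCycleStartTimeInUTC

# Optionally trigger a full synchronization so the new rule is applied immediately.
Start-ADSyncSyncCycle -PolicyType Initial
```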
For more information, see [How to customize a synchronization rule](../hybrid/ho
## Next steps - [What are lifecycle workflows?](what-are-lifecycle-workflows.md) - [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md)-- [Create a Lifecycle workflow](create-lifecycle-workflow.md)
+- [Create a Lifecycle workflow](create-lifecycle-workflow.md)
active-directory Lifecycle Workflow Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-templates.md
The default specific parameters for the **Onboard new hire employee** template a
|||| |Category | Joiner | ❌ | |Trigger Type | Trigger and Scope Based | ❌ |
-|Days from event | 0 | ✔️ |
+|Days from event | 0 | ❌ |
|Event timing | On | ❌ | |Event User attribute | EmployeeHireDate | ❌ | |Scope type | Rule based | ❌ |
active-directory Tutorial Prepare Azure Ad User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-azure-ad-user-accounts.md
First we'll create our employee, Melva Prince.
"displayName": "Melva Prince", "mailNickname": "mprince", "department": "sales",
- "mail": "mpricne@<your tenant name here>",
- "employeeHireDate": "2022-04-15T22:10:00Z"
+ "mail": "mprince@<your tenant name here>",
+ "employeeHireDate": "2022-04-15T22:10:00Z",
"userPrincipalName": "mprince@<your tenant name here>", "passwordProfile" : { "forceChangePasswordNextSignIn": true,
Next, we'll create Britta Simon. This is the account that will be used as our m
"mailNickname": "bsimon", "department": "sales", "mail": "bsimon@<your tenant name here>",
- "employeeHireDate": "2021-01-15T22:10:00Z"
+ "employeeHireDate": "2021-01-15T22:10:00Z",
"userPrincipalName": "bsimon@<your tenant name here>", "passwordProfile" : { "forceChangePasswordNextSignIn": true,
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
Previously updated : 06/15/2022 Last updated : 10/12/2022
Group writeback allows you to write cloud groups back to your on-premises Active Directory instance by using Azure Active Directory (Azure AD) Connect sync. You can use this feature to manage groups in the cloud, while controlling access to on-premises applications and resources.
+> [!NOTE]
+>The Group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](https://learn.microsoft.com/azure/active-directory/hybrid/how-to-connect-group-writeback-v2#understand-limitations-of-public-preview) before you enable this functionality.
++ There are two versions of group writeback. The original version is in general availability and is limited to writing back Microsoft 365 groups to your on-premises Active Directory instance as distribution groups. The new, expanded version of group writeback is in public preview and enables the following capabilities: - You can write back Microsoft 365 groups as distribution groups, security groups, or mail-enabled security groups.
If you plan to make changes to the default behavior, we recommend that you do so
## Understand limitations of public preview
-Although this release has undergone extensive testing, you might still encounter issues. One of the goals of this public preview release is to find and fix any issues before the feature moves to general availability.
-
-Microsoft provides support for this public preview release, but it might not be able to immediately fix issues that you encounter. For this reason, we recommend that you use your best judgment before deploying this release in your production environment.ΓÇ»
+Although this release has undergone extensive testing, you might still encounter issues. One of the goals of this public preview release is to find and fix any issues before the feature moves to general availability. Please also note that any public preview functionality can still receive breaking changes that may require you to make changes to your configuration to continue using this feature. We may also decide to change or remove certain functionality without prior notice.
+Microsoft provides support for this public preview release, but we might not be able to immediately fix issues that you encounter. For these reasons, we recommend that you do not deploy this release in your production environment.
These limitations and known issues are specific to group writeback:
These limitations and known issues are specific to group writeback:
- Nested cloud groups that are members of writeback enabled groups must also be enabled for writeback to remain nested in AD. - Group Writeback setting to manage new security group writeback at scale is not yet available. You will need to configure writeback for each group.  - If you have a nested group like this, you'll see an export error in Azure AD Connect with the message "A universal group cannot have a local group as a member." The resolution is to remove the member with the **Domain local** scope from the Azure AD group, or update the nested group member scope in Active Directory to **Global** or **Universal**. - Group writeback supports writing back groups to only a single organizational unit (OU). After the feature is enabled, you can't change the OU that you selected. A workaround is to disable group writeback entirely in Azure AD Connect and then select a different OU when you re-enable the feature.  - Nested cloud groups that are members of writeback-enabled groups must also be enabled for writeback to remain nested in Active Directory.
active-directory Whatis Azure Ad Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect.md
Integrating your on-premises directories with Azure AD makes your users more pro
## Why use Azure AD Connect Health? When authenticating with Azure AD, your users are more productive because there's a common identity to access both cloud and on-premises resources. Ensuring the environment is reliable, so that users can access these resources, becomes a challenge. Azure AD Connect Health helps monitor and gain insights into your on-premises identity infrastructure, thus ensuring the reliability of this environment. It is as simple as installing an agent on each of your on-premises identity servers.
-Azure AD Connect Health for AD FS supports AD FS 2.0 on Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2 and Windows Server 2016. It also supports monitoring the AD FS proxy or web application proxy servers that provide authentication support for extranet access. With an easy and quick installation of the Health Agent, Azure AD Connect Health for AD FS provides you a set of key capabilities.
+Azure AD Connect Health for AD FS supports AD FS 2.0 on Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019. It also supports monitoring the AD FS proxy or web application proxy servers that provide authentication support for extranet access. With an easy and quick installation of the Health Agent, Azure AD Connect Health for AD FS provides you with a set of key capabilities.
Key benefits and best practices:
active-directory Configure User Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent.md
Previously updated : 08/10/2022 Last updated : 10/12/2022
+zone_pivot_groups: enterprise-apps-minus-aad-powershell
+ #customer intent: As an admin, I want to configure how end-users consent to applications.
To configure user consent, you need:
- A user account. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A Global Administrator or Privileged Administrator role.
-# [The Azure portal](#tab/azure-portal)
- ## Configure user consent settings + To configure user consent settings through the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator).
To configure user consent settings through the Azure portal:
:::image type="content" source="media/configure-user-consent/setting-for-all-users.png" alt-text="Screenshot of the 'User consent settings' pane.":::
-# [PowerShell](#tab/azure-powershell)
+ To choose which app consent policy governs user consent for applications, you can use the [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true) module. The cmdlets used here are included in the [Microsoft.Graph.Identity.SignIns](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.SignIns) module.
-#### Connect to Microsoft Graph PowerShell
+### Connect to Microsoft Graph PowerShell
Connect to Microsoft Graph PowerShell using the least-privilege permission needed. For reading the current user consent settings, use *Policy.Read.All*. For reading and changing the user consent settings, use *Policy.ReadWrite.Authorization*.
Connect to Microsoft Graph PowerShell using the least-privilege permission neede
Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization" ```
-#### Disable user consent
+### Disable user consent
To disable user consent, set the consent policies that govern user consent to empty:
Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{
"PermissionGrantPoliciesAssigned" = @() } ```
-#### Allow user consent subject to an app consent policy
+### Allow user consent subject to an app consent policy
To allow user consent, choose which app consent policy should govern users' authorization to grant consent to apps:
Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{
"PermissionGrantPoliciesAssigned" = @("managePermissionGrantsForSelf.microsoft-user-default-low") } ``` -++
+Use the [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) to choose which app consent policy governs user consent for applications.
+
+### Disable user consent
+
+To disable user consent, set the consent policies that govern user consent to empty:
+
+```http
+PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy
+{
+ "defaultUserRolePermissions": {
+ "permissionGrantPoliciesAssigned": []
+ }
+}
+```
+
+### Allow user consent subject to an app consent policy
+
+To allow user consent, choose which app consent policy should govern users' authorization to grant consent to apps:
+
+```http
+PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy
+
+{
+ "defaultUserRolePermissions": {
+ "permissionGrantPoliciesAssigned": ["ManagePermissionGrantsForSelf.microsoft-user-default-legacy"]
+ }
+}
+```
+
+In the `permissionGrantPoliciesAssigned` value, replace the policy ID with the ID of the policy you want to apply. You can choose a [custom app consent policy](manage-app-consent-policies.md#create-a-custom-app-consent-policy) that you've created, or you can choose from the following built-in policies:
+
+| ID | Description |
+|:|:|
+| microsoft-user-default-low | **Allow user consent for apps from verified publishers, for selected permissions**<br/> Allow limited user consent only for apps from verified publishers and apps that are registered in your tenant, and only for permissions that you classify as *low impact*. (Remember to [classify permissions](configure-permission-classifications.md) to select which permissions users are allowed to consent to.) |
+| microsoft-user-default-legacy | **Allow user consent for apps**<br/> This option allows all users to consent to any permission that doesn't require admin consent, for any application |
+
+For example, to enable user consent subject to the built-in policy `microsoft-user-default-low`, use the following PATCH command:
+
+```http
+PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy
+
+{
+ "defaultUserRolePermissions": {
+ "permissionGrantPoliciesAssigned": [
+ "managePermissionGrantsForSelf.microsoft-user-default-low"
+ ]
+ }
+}
+```
+ > [!TIP] > To allow users to request an administrator's review and approval of an application that the user isn't allowed to consent to, [enable the admin consent workflow](configure-admin-consent-workflow.md). For example, you might do this when user consent has been disabled or when an application is requesting permissions that the user isn't allowed to grant.- ## Next steps - [Manage app consent policies](manage-app-consent-policies.md)
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure Batch | [Configure customer-managed keys for your Azure Batch account with Azure Key Vault and Managed Identity](../../batch/batch-customer-managed-key.md) </BR> [Configure managed identities in Batch pools](../../batch/managed-identity-pools.md) | | Azure Blueprints | [Stages of a blueprint deployment](../../governance/blueprints/concepts/deployment-stages.md) | | Azure Cache for Redis | [Managed identity for storage accounts with Azure Cache for Redis](../../azure-cache-for-redis/cache-managed-identity.md) |
+| Azure Container Apps | [Managed identities in Azure Container Apps](../../container-apps/managed-identity.md) |
| Azure Container Instance | [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md) | | Azure Container Registry | [Use an Azure-managed identity in ACR Tasks](../../container-registry/container-registry-tasks-authentication-managed-identity.md) | | Azure Cognitive Services | [Configure customer-managed keys with Azure Key Vault for Cognitive Services](../../cognitive-services/encryption/cognitive-services-encryption-keys-portal.md) |
active-directory Tutorial Linux Vm Access Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-cosmos-db.md
na+ Last updated 12/10/2020
This tutorial shows you how to use a system-assigned managed identity for a Linux virtual machine (VM) to access Azure Cosmos DB. You learn how to: > [!div class="checklist"]
-> * Create a Cosmos DB account
-> * Create a collection in the Cosmos DB account
+> * Create an Azure Cosmos DB account
+> * Create a collection in the Azure Cosmos DB account
> * Grant the system-assigned managed identity access to an Azure Cosmos DB instance > * Retrieve the `principalID` of the of the Linux VM's system-assigned managed identity > * Get an access token and use it to call Azure Resource Manager
-> * Get access keys from Azure Resource Manager to make Cosmos DB calls
+> * Get access keys from Azure Resource Manager to make Azure Cosmos DB calls
## Prerequisites
This tutorial shows you how to use a system-assigned managed identity for a Linu
- Use the [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open using the **Try It** button on the top right corner of code blocks. - Run scripts locally by installing the latest version of the [Azure CLI](/cli/azure/install-azure-cli), then sign in to Azure using [az login](/cli/azure/reference-index#az-login). Use an account associated with the Azure subscription in which you'd like to create resources.
-## Create a Cosmos DB account
+## Create an Azure Cosmos DB account
-If you don't already have one, create a Cosmos DB account. You can skip this step and use an existing Cosmos DB account.
+If you don't already have one, create an Azure Cosmos DB account. You can skip this step and use an existing Azure Cosmos DB account.
1. Click the **+ Create a resource** button found on the upper left-hand corner of the Azure portal. 2. Click **Databases**, then **Azure Cosmos DB**, and a new "New account" panel displays.
-3. Enter an **ID** for the Cosmos DB account, which you use later.
+3. Enter an **ID** for the Azure Cosmos DB account, which you use later.
4. **API** should be set to "SQL." The approach described in this tutorial can be used with the other available API types, but the steps in this tutorial are for the SQL API.
-5. Ensure the **Subscription** and **Resource Group** match the ones you specified when you created your VM in the previous step. Select a **Location** where Cosmos DB is available.
+5. Ensure the **Subscription** and **Resource Group** match the ones you specified when you created your VM in the previous step. Select a **Location** where Azure Cosmos DB is available.
6. Click **Create**.
-### Create a collection in the Cosmos DB account
+### Create a collection in the Azure Cosmos DB account
-Next, add a data collection in the Cosmos DB account that you can query in later steps.
+Next, add a data collection in the Azure Cosmos DB account that you can query in later steps.
-1. Navigate to your newly created Cosmos DB account.
+1. Navigate to your newly created Azure Cosmos DB account.
2. On the **Overview** tab click the **+/Add Collection** button, and an "Add Collection" panel slides out. 3. Give the collection a database ID, collection ID, select a storage capacity, enter a partition key, enter a throughput value, then click **OK**. For this tutorial, it is sufficient to use "Test" as the database ID and collection ID, select a fixed storage capacity and lowest throughput (400 RU/s). ## Grant access
-To gain access to the Cosmos DB account access keys from the Resource Manager in the following section, you need to retrieve the `principalID` of the Linux VM's system-assigned managed identity. Be sure to replace the `<SUBSCRIPTION ID>`, `<RESOURCE GROUP>` (resource group in which your VM resides), and `<VM NAME>` parameter values with your own values.
+To gain access to the Azure Cosmos DB account access keys from the Resource Manager in the following section, you need to retrieve the `principalID` of the Linux VM's system-assigned managed identity. Be sure to replace the `<SUBSCRIPTION ID>`, `<RESOURCE GROUP>` (resource group in which your VM resides), and `<VM NAME>` parameter values with your own values.
```azurecli-interactive az resource show --id /subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP>/providers/Microsoft.Compute/virtualMachines/<VM NAMe> --api-version 2017-12-01
The response includes the details of the system-assigned managed identity (note
} ```
-### Grant your Linux VM's system-assigned identity access to the Cosmos DB account access keys
+### Grant your Linux VM's system-assigned identity access to the Azure Cosmos DB account access keys
-Cosmos DB does not natively support Azure AD authentication. However, you can use a managed identity to retrieve a Cosmos DB access key from the Resource Manager, then use the key to access Cosmos DB. In this step, you grant your system-assigned managed identity access to the keys to the Cosmos DB account.
+Azure Cosmos DB does not natively support Azure AD authentication. However, you can use a managed identity to retrieve an Azure Cosmos DB access key from the Resource Manager, then use the key to access Azure Cosmos DB. In this step, you grant your system-assigned managed identity access to the keys to the Azure Cosmos DB account.
-To grant the system-assigned managed identity access to the Cosmos DB account in Azure Resource Manager using the Azure CLI, update the values for `<SUBSCRIPTION ID>`, `<RESOURCE GROUP>`, and `<COSMOS DB ACCOUNT NAME>` for your environment. Replace `<MI PRINCIPALID>` with the `principalId` property returned by the `az resource show` command in Retrieve the principalID of the Linux VM's MI. Cosmos DB supports two levels of granularity when using access keys: read/write access to the account, and read-only access to the account. Assign the `DocumentDB Account Contributor` role if you want to get read/write keys for the account, or assign the `Cosmos DB Account Reader Role` role if you want to get read-only keys for the account:
+To grant the system-assigned managed identity access to the Azure Cosmos DB account in Azure Resource Manager using the Azure CLI, update the values for `<SUBSCRIPTION ID>`, `<RESOURCE GROUP>`, and `<COSMOS DB ACCOUNT NAME>` for your environment. Replace `<MI PRINCIPALID>` with the `principalId` property returned by the `az resource show` command in Retrieve the principalID of the Linux VM's MI. Azure Cosmos DB supports two levels of granularity when using access keys: read/write access to the account, and read-only access to the account. Assign the `DocumentDB Account Contributor` role if you want to get read/write keys for the account, or assign the `Cosmos DB Account Reader Role` role if you want to get read-only keys for the account:
```azurecli-interactive az role assignment create --assignee <MI PRINCIPALID> --role '<ROLE NAME>' --scope "/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP>/providers/Microsoft.DocumentDB/databaseAccounts/<COSMODS DB ACCOUNT NAME>"
To complete these steps, you need an SSH client. If you are using Windows, you c
"client_id":"1ef89848-e14b-465f-8780-bf541d325cd5"} ```
-### Get access keys from Azure Resource Manager to make Cosmos DB calls
+### Get access keys from Azure Resource Manager to make Azure Cosmos DB calls
-Now use CURL to call Resource Manager using the access token retrieved in the previous section to retrieve the Cosmos DB account access key. Once we have the access key, we can query Cosmos DB. Be sure to replace the `<SUBSCRIPTION ID>`, `<RESOURCE GROUP>`, and `<COSMOS DB ACCOUNT NAME>` parameter values with your own values. Replace the `<ACCESS TOKEN>` value with the access token you retrieved earlier. If you want to retrieve read/write keys, use key operation type `listKeys`. If you want to retrieve read-only keys, use the key operation type `readonlykeys`:
+Now use CURL to call Resource Manager using the access token retrieved in the previous section to retrieve the Azure Cosmos DB account access key. Once we have the access key, we can query Azure Cosmos DB. Be sure to replace the `<SUBSCRIPTION ID>`, `<RESOURCE GROUP>`, and `<COSMOS DB ACCOUNT NAME>` parameter values with your own values. Replace the `<ACCESS TOKEN>` value with the access token you retrieved earlier. If you want to retrieve read/write keys, use key operation type `listKeys`. If you want to retrieve read-only keys, use the key operation type `readonlykeys`:
```bash curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP>/providers/Microsoft.DocumentDB/databaseAccounts/<COSMOS DB ACCOUNT NAME>/<KEY OPERATION TYPE>?api-version=2016-03-31' -X POST -d "" -H "Authorization: Bearer <ACCESS TOKEN>"
The CURL response gives you the list of Keys. For example, if you get the read-
"secondaryReadonlyMasterKey":"38v5ns...7bA=="} ```
-Now that you have the access key for the Cosmos DB account you can pass it to a Cosmos DB SDK and make calls to access the account.
+Now that you have the access key for the Azure Cosmos DB account, you can pass it to an Azure Cosmos DB SDK and make calls to access the account.
## Next steps
-In this tutorial, you learned how to use a system-assigned managed identity on a Linux virtual machine to access Cosmos DB. To learn more about Cosmos DB see:
+In this tutorial, you learned how to use a system-assigned managed identity on a Linux virtual machine to access Azure Cosmos DB. To learn more about Azure Cosmos DB, see:
> [!div class="nextstepaction"] >[Azure Cosmos DB overview](../../cosmos-db/introduction.md)
active-directory Tutorial Vm Managed Identities Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md
Title: Use managed identities from a virtual machine to access Cosmos DB
+ Title: Use managed identities from a virtual machine to access Azure Cosmos DB
description: Learn how to use managed identities with Windows VMs using the Azure portal, CLI, PowerShell, Azure Resource Manager template
Last updated 06/24/2022 -+ ms.tool: azure-cli, azure-powershell ms.devlang: azurecli
-#Customer intent: As an administrator, I want to know how to access Cosmos DB from a virtual machine using a managed identity
+#Customer intent: As an administrator, I want to know how to access Azure Cosmos DB from a virtual machine using a managed identity
-# How to use managed identities to connect to Cosmos DB from an Azure virtual machine
+# How to use managed identities to connect to Azure Cosmos DB from an Azure virtual machine
-In this article, we set up a virtual machine to use managed identities to connect to Cosmos. [Azure Cosmos DB](../../cosmos-db/introduction.md) is a fully managed NoSQL database for modern app development. [Managed identities for Azure resources](overview.md) allow your applications to authenticate when accessing services that support Azure AD authentication using an identity managed by Azure.
+In this article, we set up a virtual machine to use managed identities to connect to Azure Cosmos DB. [Azure Cosmos DB](../../cosmos-db/introduction.md) is a fully managed NoSQL database for modern app development. [Managed identities for Azure resources](overview.md) allow your applications to authenticate when accessing services that support Azure AD authentication using an identity managed by Azure.
## Prerequisites
Under the resources element, add the following entry to assign a user-assigned m
-## Create a Cosmos DB account
+## Create an Azure Cosmos DB account
-Now that we have a VM with either a user-assigned managed identity or a system-assigned managed identity we need a Cosmos DB account available where you have administrative rights. If you need to create a Cosmos DB account for this tutorial, the [Cosmos DB quickstart](../..//cosmos-db/sql/create-cosmosdb-resources-portal.md) provides detailed steps on how to do that.
+Now that we have a VM with either a user-assigned managed identity or a system-assigned managed identity we need an Azure Cosmos DB account available where you have administrative rights. If you need to create an Azure Cosmos DB account for this tutorial, the [Azure Cosmos DB quickstart](../..//cosmos-db/sql/create-cosmosdb-resources-portal.md) provides detailed steps on how to do that.
>[!NOTE]
-> Managed identities may be used to access any Azure resource that supports Azure Active Directory authentication. This tutorial assumes that your Cosmos DB account will be configured as shown below.
+> Managed identities may be used to access any Azure resource that supports Azure Active Directory authentication. This tutorial assumes that your Azure Cosmos DB account will be configured as shown below.
|Setting|Value|Description | ||||
- |Subscription|Subscription name|Select the Azure subscription that you want to use for this Azure Cosmos account. |
+ |Subscription|Subscription name|Select the Azure subscription that you want to use for this Azure Cosmos DB account. |
|Resource Group|Resource group name|Select **mi-test**, or select **Create new**, then enter a unique name for the new resource group. |
- |Account Name|A unique name|Enter a name to identify your Azure Cosmos account. Because *documents.azure.com* is appended to the name that you provide to create your URI, use a unique name.<br><br>The name can only contain lowercase letters, numbers, and the hyphen (-) character. It must be between 3-44 characters in length.|
- |API|The type of account to create|Select **Core (SQL)** to create a document database and query by using SQL syntax. <br><br>[Learn more about the SQL API](../../cosmos-db/introduction.md).|
+ |Account Name|A unique name|Enter a name to identify your Azure Cosmos DB account. Because *documents.azure.com* is appended to the name that you provide to create your URI, use a unique name.<br><br>The name can only contain lowercase letters, numbers, and the hyphen (-) character. It must be between 3-44 characters in length.|
+ |API|The type of account to create|Select **Azure Cosmos DB for NoSQL** to create a document database and query by using SQL syntax. <br><br>[Learn more about the SQL API](../../cosmos-db/introduction.md).|
|Location|The region closest to your users|Select a geographic location to host your Azure Cosmos DB account. Use the location that is closest to your users to give them the fastest access to the data.| > [!NOTE]
- > If you are testing you may want to apply Azure Cosmos DB free tier discount. With Azure Cosmos DB free tier, you will get the first 1000 RU/s and 25 GB of storage for free in an account. Learn more about [free tier](https://azure.microsoft.com/pricing/details/cosmos-db/). Keep in mind that for the purpose of this tutorial this choice makes no difference.
+ > If you are testing you may want to apply Azure Cosmos DB free tier discount. With the Azure Cosmos DB free tier, you will get the first 1000 RU/s and 25 GB of storage for free in an account. Learn more about [free tier](https://azure.microsoft.com/pricing/details/cosmos-db/). Keep in mind that for the purpose of this tutorial this choice makes no difference.
## Grant access
-At this point, we should have both a virtual machine configured with a managed identity and a Cosmos DB Account. Before we continue, we need to grant the managed identity a couple of different roles.
+At this point, we should have both a virtual machine configured with a managed identity and an Azure Cosmos DB account. Before we continue, we need to grant the managed identity a couple of different roles.
-- First grant access to the Cosmos management plane using [Azure RBAC](../../cosmos-db/role-based-access-control.md). The managed identity needs to have the DocumentDB Account Contributor role assigned to create Databases and containers.
+- First grant access to the Azure Cosmos DB management plane using [Azure RBAC](../../cosmos-db/role-based-access-control.md). The managed identity needs to have the DocumentDB Account Contributor role assigned to create Databases and containers.
-- You also need to grant the managed identity a contributor role using [Cosmos RBAC](../../cosmos-db/how-to-setup-rbac.md). You can see specific steps below.
+- You also need to grant the managed identity a contributor role using [Azure Cosmos DB RBAC](../../cosmos-db/how-to-setup-rbac.md). You can see specific steps below.
> [!NOTE] > We will use the **Cosmos DB Built-in Data contributor** role. To grant access, you need to associate the role definition with the identity. In our case, the managed identity associated with our virtual machine.
az cosmosdb sql role assignment create --account-name $accountName --resource-gr
## Access data
-Getting access to Cosmos using managed identities may be achieved using the Azure.identity library to enable authentication in your application. You can call [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential) directly or use [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential).
+Getting access to Azure Cosmos DB using managed identities may be achieved using the Azure.identity library to enable authentication in your application. You can call [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential) directly or use [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential).
The ManagedIdentityCredential class attempts to authenticate using a managed identity assigned to the deployment environment. The [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme) class goes through different authentication options in order. The second authentication option that DefaultAzureCredential attempts is managed identities.
Language-specific examples using ManagedIdentityCredential:
### .NET
-Initialize your Cosmos DB client:
+Initialize your Azure Cosmos DB client:
```csharp CosmosClient client = new CosmosClient("<account-endpoint>", new ManagedIdentityCredential());
Then [read and write data](../../cosmos-db/sql/sql-api-dotnet-v3sdk-samples.md).
### Java
-Initialize your Cosmos DB client:
+Initialize your Azure Cosmos DB client:
```java CosmosAsyncClient Client = new CosmosClientBuilder().endpoint("<account-endpoint>") .credential(new ManagedIdentityCredential()) .build();
Then read and write data as described in [these samples](../../cosmos-db/sql/sql
### JavaScript
-Initialize your Cosmos DB client:
+Initialize your Azure Cosmos DB client:
```javascript const client = new CosmosClient({ "<account-endpoint>", aadCredentials: new ManagedIdentityCredential() });
Learn more about managed identities for Azure resources:
- [What are managed identities for Azure resources?](overview.md) - [Azure Resource Manager templates](https://github.com/Azure/azure-quickstart-templates)
-Learn more about Azure Cosmos
+Learn more about Azure Cosmos DB:
- [Azure Cosmos DB resource model](../../cosmos-db/account-databases-containers-items.md)-- [Tutorial: Build a .NET console app to manage data in Azure Cosmos DB SQL API account](../../cosmos-db/sql/sql-api-get-started.md)
+- [Tutorial: Build a .NET console app to manage data in an Azure Cosmos DB for NoSQL account](../../cosmos-db/sql/sql-api-get-started.md)
active-directory Tutorial Windows Vm Access Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-cosmos-db.md
Last updated 01/11/2022 -+
[!INCLUDE [preview-notice](../../../includes/active-directory-msi-preview-notice.md)]
-This tutorial shows you how to use a system-assigned managed identity for a Windows virtual machine (VM) to access Cosmos DB. You learn how to:
+This tutorial shows you how to use a system-assigned managed identity for a Windows virtual machine (VM) to access Azure Cosmos DB. You learn how to:
> [!div class="checklist"]
-> * Create a Cosmos DB account
-> * Grant a Windows VM system-assigned managed identity access to the Cosmos DB account access keys
+> * Create an Azure Cosmos DB account
+> * Grant a Windows VM system-assigned managed identity access to the Azure Cosmos DB account access keys
> * Get an access token using the Windows VM system-assigned managed identity to call Azure Resource Manager
-> * Get access keys from Azure Resource Manager to make Cosmos DB calls
+> * Get access keys from Azure Resource Manager to make Azure Cosmos DB calls
## Prerequisites
This tutorial shows you how to use a system-assigned managed identity for a Wind
- You also need a Windows Virtual machine that has system assigned managed identities enabled. - If you need to create a virtual machine for this tutorial, you can follow the article titled [Create a virtual machine with system-assigned identity enabled](./qs-configure-portal-windows-vm.md#system-assigned-managed-identity)
-## Create a Cosmos DB account
+## Create an Azure Cosmos DB account
-If you don't already have one, create a Cosmos DB account. You can skip this step and use an existing Cosmos DB account.
+If you don't already have one, create an Azure Cosmos DB account. You can skip this step and use an existing Azure Cosmos DB account.
1. Click the **+ Create a resource** button found on the upper left-hand corner of the Azure portal. 2. Click **Databases**, then **Azure Cosmos DB**, and a new "New account" panel displays.
-3. Enter an **ID** for the Cosmos DB account, which you use later.
+3. Enter an **ID** for the Azure Cosmos DB account, which you use later.
4. **API** should be set to "SQL." The approach described in this tutorial can be used with the other available API types, but the steps in this tutorial are for the SQL API.
-5. Ensure the **Subscription** and **Resource Group** match the ones you specified when you created your VM in the previous step. Select a **Location** where Cosmos DB is available.
+5. Ensure the **Subscription** and **Resource Group** match the ones you specified when you created your VM in the previous step. Select a **Location** where Azure Cosmos DB is available.
6. Click **Create**. ### Create a collection
-Next, add a data collection in the Cosmos DB account that you can query in later steps.
+Next, add a data collection in the Azure Cosmos DB account that you can query in later steps.
-1. Navigate to your newly created Cosmos DB account.
+1. Navigate to your newly created Azure Cosmos DB account.
2. On the **Overview** tab click the **+/Add Collection** button, and an "Add Collection" panel slides out. 3. Give the collection a database ID, collection ID, select a storage capacity, enter a partition key, enter a throughput value, then click **OK**. For this tutorial, it is sufficient to use "Test" as the database ID and collection ID, select a fixed storage capacity and lowest throughput (400 RU/s). ## Grant access
-This section shows how to grant Windows VM system-assigned managed identity access to the Cosmos DB account access keys. Cosmos DB does not natively support Azure AD authentication. However, you can use a system-assigned managed identity to retrieve a Cosmos DB access key from Resource Manager, and use the key to access Cosmos DB. In this step, you grant your Windows VM system-assigned managed identity access to the keys to the Cosmos DB account.
+This section shows how to grant Windows VM system-assigned managed identity access to the Azure Cosmos DB account access keys. Azure Cosmos DB does not natively support Azure AD authentication. However, you can use a system-assigned managed identity to retrieve an Azure Cosmos DB access key from Resource Manager, and use the key to access Azure Cosmos DB. In this step, you grant your Windows VM system-assigned managed identity access to the keys to the Azure Cosmos DB account.
-To grant the Windows VM system-assigned managed identity access to the Cosmos DB account in Azure Resource Manager using PowerShell, update the following values:
+To grant the Windows VM system-assigned managed identity access to the Azure Cosmos DB account in Azure Resource Manager using PowerShell, update the following values:
- `<SUBSCRIPTION ID>` - `<RESOURCE GROUP>` - `<COSMOS DB ACCOUNT NAME>`
-Cosmos DB supports two levels of granularity when using access keys: read/write access to the account, and read-only access to the account. Assign the `DocumentDB Account Contributor` role if you want to get read/write keys for the account, or assign the `Cosmos DB Account Reader Role` role if you want to get read-only keys for the account. For this tutorial, assign the `Cosmos DB Account Reader Role`:
+Azure Cosmos DB supports two levels of granularity when using access keys: read/write access to the account, and read-only access to the account. Assign the `DocumentDB Account Contributor` role if you want to get read/write keys for the account, or assign the `Cosmos DB Account Reader Role` role if you want to get read-only keys for the account. For this tutorial, assign the `Cosmos DB Account Reader Role`:
```azurepowershell $spID = (Get-AzVM -ResourceGroupName myRG -Name myVM).identity.principalid
You need to install the latest version of [Azure CLI](/cli/azure/install-azure-c
### Get access keys
-This section shows how to get access keys from Azure Resource Manager to make Cosmos DB calls. We are using PowerShell to call Resource Manager using the access token we got earlier to retrieve the Cosmos DB account access key. Once we have the access key, we can query Cosmos DB. Use your own values to replace the entries below:
+This section shows how to get access keys from Azure Resource Manager to make Azure Cosmos DB calls. We are using PowerShell to call Resource Manager using the access token we got earlier to retrieve the Azure Cosmos DB account access key. Once we have the access key, we can query Azure Cosmos DB. Use your own values to replace the entries below:
- `<SUBSCRIPTION ID>` - `<RESOURCE GROUP>`
The response gives you the list of Keys. For example, if you get read-only keys
"secondaryReadonlyMasterKey":"38v5ns...7bA=="} ```
-Now that you have the access key for the Cosmos DB account you can pass it to a Cosmos DB SDK and make calls to access the account.
+Now that you have the access key for the Azure Cosmos DB account you can pass it to an Azure Cosmos DB SDK and make calls to access the account.
## Disable
Now that you have the access key for the Cosmos DB account you can pass it to a
## Next steps
-In this tutorial, you learned how to use a Windows VM system-assigned identity to access Cosmos DB. To learn more about Cosmos DB see:
+In this tutorial, you learned how to use a Windows VM system-assigned identity to access Azure Cosmos DB. To learn more about Azure Cosmos DB, see:
> [!div class="nextstepaction"] >[Azure Cosmos DB overview](../../cosmos-db/introduction.md)
active-directory Cirrus Identity Bridge For Azure Ad Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cirrus-identity-bridge-for-azure-ad-tutorial.md
Previously updated : 08/03/2021 Last updated : 10/10/2022 # Tutorial: Azure Active Directory single sign-on (SSO) integration with Cirrus Identity Bridge for Azure AD
-In this tutorial, you'll learn how to integrate Cirrus Identity Bridge for Azure AD with Azure Active Directory (Azure AD). When you integrate Cirrus Identity Bridge for Azure AD with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Cirrus Identity Bridge for Azure AD with Azure Active Directory (Azure AD) using the Microsoft Graph API based integration pattern. When you integrate Cirrus Identity Bridge for Azure AD with Azure AD in this way, you can:
-* Control in Azure AD who has access to Cirrus Identity Bridge for Azure AD.
-* Enable your users to be automatically signed-in to Cirrus Identity Bridge for Azure AD with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
+* Control who has access to InCommon or other multilateral federation service providers from Azure AD.
+* Enable your users to SSO to InCommon or other multilateral federation service providers with their Azure AD accounts.
+* Enable your users to access Central Authentication Service (CAS) applications with their Azure AD accounts.
+* Manage your application access in one central location - the Azure portal.
## Prerequisites
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Cirrus Identity Bridge for Azure AD supports **SP** and **IDP** initiated SSO.
+## Before adding the Cirrus Identity Bridge for Azure AD from the gallery
+
+When subscribing to the Cirrus Identity Bridge for Azure AD, you will be asked for your Azure AD Tenant ID. To view it:
+
+1. Sign in to the Azure portal using a Microsoft account with access to administer Azure Active Directory.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Overview** and view the Tenant ID.
+1. Copy the value and send it to the Cirrus Identity contract representative you are working with.
+
+To use the Microsoft Graph API integration, you must grant the Cirrus Identity Bridge for Azure AD access to use the API in your tenant. To do this:
+
+1. Sign in to the Azure portal as a Global Administrator for your Microsoft Azure Tenant.
+1. Edit the URL `https://login.microsoftonline.com/$TENANT_ID/adminconsent?client_id=ea71bc49-6159-422d-84d5-6c29d7287974&state=12345&redirect_uri=https://admin.cirrusidentity.com/azure-registration`, replacing **$TENANT_ID** with the value for your Azure AD tenant (a PowerShell sketch that builds this URL follows these steps).
+1. Paste the URL into the browser where you are signed in as a Global Administrator.
+1. You will be asked to consent to grant access.
+1. When successful, there should be a new application called Cirrus Bridge API.
+1. Advise the Cirrus Identity contract representative you are working with that you have successfully granted API access to the Cirrus Identity Bridge for Azure AD.
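If you prefer to script the tenant ID lookup and consent step, the following is a minimal PowerShell sketch, assuming the Az PowerShell module is installed and you sign in as a Global Administrator; the client ID and redirect URI come from the URL shown above.

```azurepowershell
# Minimal sketch: read the tenant ID from the current Azure context and build the consent URL.
Connect-AzAccount | Out-Null
$tenantId = (Get-AzContext).Tenant.Id    # the Tenant ID to send to Cirrus Identity

$consentUrl = "https://login.microsoftonline.com/$tenantId/adminconsent" +
              "?client_id=ea71bc49-6159-422d-84d5-6c29d7287974" +
              "&state=12345" +
              "&redirect_uri=https://admin.cirrusidentity.com/azure-registration"

# Open the admin consent page in the default browser and sign in as a Global Administrator.
Start-Process $consentUrl
```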
++
+Once Cirrus Identity has the Tenant ID and access has been granted, Cirrus Identity will provision the Cirrus Identity Bridge for Azure AD infrastructure and provide you with the following information unique to your subscription:
+
+- Identifier URI/ Entity ID
+- Redirect URI / Reply URL
+- Single-logout URL
+- SP Encryption Cert (if using encrypted assertions or logout)
+- A URL for testing
+- Additional instructions depending on the options included with your subscription
++
+> [!NOTE]
+> If you are unable to grant API access to the Cirrus Identity Bridge for Azure AD, the Bridge can be set up by using a traditional SAML 2.0 integration instead. Advise the Cirrus Identity contract representative you are working with that you are not able to use the Microsoft Graph API integration.
+ ## Add Cirrus Identity Bridge for Azure AD from the gallery To configure the integration of Cirrus Identity Bridge for Azure AD into Azure AD, you need to add Cirrus Identity Bridge for Azure AD from the gallery to your list of managed SaaS apps.
To configure and test Azure AD SSO with Cirrus Identity Bridge for Azure AD, per
Follow these steps to enable Azure AD SSO in the Azure portal.
+1. In the Azure portal, on the **Cirrus Identity Bridge for Azure AD** application integration page, find the **Manage** section and select **Properties**.
+1. On the **Properties** page, toggle **Assignment Required** based on your access requirements. If set to **Yes**, you will need to assign the **Cirrus Identity Bridge for Azure AD** application to an access control group on the **Users and Groups** page.
+1. While still on the **Properties** page, toggle **Visible to users** to **No**. The initial integration always represents the default integration used for multiple service providers, so there is no single service provider to direct end users to. To make specific applications visible to end users, use linked single sign-on to give end users access to specific service providers in My Apps. [See here](../manage-apps/configure-linked-sign-on.md) for more details.
+ 1. In the Azure portal, on the **Cirrus Identity Bridge for Azure AD** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, perform the following steps: a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.cirrusidentity.com/bridge`
+ `https://<DOMAIN>/bridge`
b. In the **Reply URL** text box, type a URL using the following pattern: `https://<NAME>.proxy.cirrusidentity.com/module.php/saml/sp/saml2-acs.php/<NAME>_proxy`
Follow these steps to enable Azure AD SSO in the Azure portal.
`<CUSTOMER_LOGIN_URL>` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Sign on URL. If you have not yet subscribed to the Cirrus Bridge, please visit the [registration page](https://info.cirrusidentity.com/cirrus-identity-azure-ad-app-gallery-registration). If you are an existing Cirrus Bridge customer, contact [Cirrus Identity Bridge for Azure AD Client support team](https://www.cirrusidentity.com/resources/service-desk) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. If you have not yet subscribed to the Cirrus Bridge, please visit the [registration page](https://info.cirrusidentity.com/cirrus-identity-azure-ad-app-gallery-registration). If you are an existing Cirrus Bridge customer, contact the [Cirrus Identity Bridge for Azure AD Client support team](https://www.cirrusidentity.com/resources/service-desk) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. Cirrus Identity Bridge for Azure AD application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. ![image](common/default-attributes.png)
-1. In addition to above, Cirrus Identity Bridge for Azure AD application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirements.
+1. Cirrus Identity Bridge for Azure AD pre-populates **Attributes & Claims** which are typical for use with the InCommon trust federation. You can review and modify them to meet your requirements. Consult the [eduPerson schema specification](https://wiki.refeds.org/display/STAN/eduPerson) for more details.
| Name | Source Attribute|
| -- | -- |
- | displayname | user.displayname |
+ | urn:oid:2.5.4.42 | user.givenname |
+ | urn:oid:2.5.4.4 | user.surname |
+ | urn:oid:0.9.2342.19200300.100.1.3 | user.mail |
+ | urn:oid:1.3.6.1.4.1.5923.1.1.1.6 | user.userprincipalname |
+ | cirrus.nameIdFormat | "urn:oasis:names:tc:SAML:2.0:nameid-format:transient" |
+
+ > [!NOTE]
+ > These defaults assume the Azure AD UPN is suitable to use as an eduPersonPrincipalName.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Cirrus Identity Bridge for Azure AD SSO
-To configure single sign-on on **Cirrus Identity Bridge for Azure AD** side, you need to send the **App Federation Metadata Url** to [Cirrus Identity Bridge for Azure AD support team](https://www.cirrusidentity.com/resources/service-desk). They set this setting to have the SAML SSO connection set properly on both sides.
+More documentation on configuring the Cirrus Bridge is available [from Cirrus Identity](https://blog.cirrusidentity.com/documentation/azure-bridge-setup). To configure the Cirrus Bridge to also support access to CAS services, see the CAS documentation [for the Cirrus Bridge](https://blog.cirrusidentity.com/documentation/cas-bridge-setup).
### Setup Cirrus Identity Bridge for Azure AD testing
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps Once you configure Cirrus Identity Bridge for Azure AD, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).+
+You can also create multiple app configurations for the Cirrus Identity Bridge for Azure AD when using the Microsoft Graph API integration. These allow you to implement different claims, access controls, or Azure AD Conditional Access policies for groups of multilateral federation service providers. See [here](https://blog.cirrusidentity.com/documentation/azure-bridge-setup) for further details. Many of the same access controls can also be applied to [CAS applications](https://blog.cirrusidentity.com/documentation/cas-bridge-setup).
active-directory Factset Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/factset-tutorial.md
Previously updated : 09/26/2022 Last updated : 10/10/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Set up single sign-on with SAML** page, perform the following steps:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type the URL: `https://auth.factset.com`
Follow these steps to enable Azure AD SSO in the Azure portal.
b. In the **Reply URL** text box, type the URL: `https://auth.factset.com/sp/ACS.saml2`
- c. In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.factset.com/services/saml2/`
-
- > [!NOTE]
- > The Sign-on URL value is not real. Update the value with the actual Sign-on URL. Contact the [FactSet Support Team](https://www.factset.com/contact-us) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
- 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the metadata file and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
1. On the **Set up FactSet** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows how to copy the appropriate configuration URL.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
active-directory Keylight Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/keylight-tutorial.md
- Title: 'Tutorial: Azure Active Directory integration with NAVEX IRM (Lockpath/Keylight) | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and NAVEX IRM (Lockpath/Keylight).
-------- Previously updated : 09/09/2022--
-# Tutorial: Azure Active Directory integration with NAVEX IRM (Lockpath/Keylight)
-
-In this tutorial, you'll learn how to integrate NAVEX IRM (Lockpath/Keylight) with Azure Active Directory (Azure AD). When you integrate NAVEX IRM (Lockpath/Keylight) with Azure AD, you can:
-
-* Control in Azure AD who has access to NAVEX IRM (Lockpath/Keylight).
-* Enable your users to be automatically signed-in to NAVEX IRM (Lockpath/Keylight) with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-## Prerequisites
-
-To configure Azure AD integration with NAVEX IRM (Lockpath/Keylight), you need the following items:
-
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
-* NAVEX IRM (Lockpath/Keylight) single sign-on enabled subscription.
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-
-* NAVEX IRM (Lockpath/Keylight) supports **SP** initiated SSO.
-* NAVEX IRM (Lockpath/Keylight) supports **Just In Time** user provisioning.
-
-## Add NAVEX IRM (Lockpath/Keylight) from the gallery
-
-To configure the integration of NAVEX IRM (Lockpath/Keylight) into Azure AD, you need to add NAVEX IRM (Lockpath/Keylight) from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **NAVEX IRM (Lockpath/Keylight)** in the search box.
-1. Select **NAVEX IRM (Lockpath/Keylight)** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-
-## Configure and test Azure AD SSO for NAVEX IRM (Lockpath/Keylight)
-
-Configure and test Azure AD SSO with NAVEX IRM (Lockpath/Keylight) using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in NAVEX IRM (Lockpath/Keylight).
-
-To configure and test Azure AD SSO with NAVEX IRM (Lockpath/Keylight), perform the following steps:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure NAVEX IRM (Lockpath/Keylight) SSO](#configure-navex-irm-lockpathkeylight-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create NAVEX IRM (Lockpath/Keylight) test user](#create-navex-irm-lockpathkeylight-test-user)** - to have a counterpart of B.Simon in NAVEX IRM (Lockpath/Keylight) that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-## Configure Azure AD SSO
-
-In this section, you enable Azure AD single sign-on in the Azure portal.
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the Azure portal, on the **NAVEX IRM (Lockpath/Keylight)** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
-
- a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<COMPANY_NAME>.keylightgrc.com`
-
- b. In the **Reply URL** textbox, type a URL using the following pattern: `https://<COMPANY_NAME>.keylightgrc.com/Login.aspx`
-
- c. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<COMPANY_NAME>.keylightgrc.com/`
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [NAVEX IRM (Lockpath/Keylight) Client support team](https://www.lockpath.com/contact/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Raw)** from the given options as per your requirement and save it on your computer.
-
- ![The Certificate download link](common/certificateraw.png)
-
-6. On the **Set up NAVEX IRM (Lockpath/Keylight)** section, copy the appropriate URL(s) as per your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to NAVEX IRM (Lockpath/Keylight).
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **NAVEX IRM (Lockpath/Keylight)**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure NAVEX IRM (Lockpath/Keylight) SSO
-
-1. To enable SSO in NAVEX IRM (Lockpath/Keylight), perform the following steps:
-
- a. Sign-on to your NAVEX IRM (Lockpath/Keylight) account as administrator.
-
- b. In the menu on the top, click **User Icon**, and select **Setup**.
-
- ![Screenshot that shows the "Person" icon selected, and "Keylight Setup" selected from the drop-down.](./media/keylight-tutorial/setup-icon.png)
-
- c. In the treeview on the left, click **SAML**.
-
- ![Screenshot that shows "S A M L" selected in the tree view.](./media/keylight-tutorial/tree-view.png)
-
- d. On the **SAML Settings** dialog, click **Edit**.
-
- ![Screenshot that shows the "S A M L Settings" window with the "Edit" button selected.](./media/keylight-tutorial/edit-icon.png)
-
-1. On the **Edit SAML Settings** dialog page, perform the following steps:
-
- ![Configure Single Sign-On](./media/keylight-tutorial/settings.png)
-
- a. Set **SAML authentication** to **Active**.
-
- b. In the **Identity Provider Login URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
-
- c. In the **Identity Provider Logout URL** textbox, paste the **Logout URL** value which you have copied from the Azure portal.
-
- d. Click **Choose File** to select your downloaded NAVEX IRM (Lockpath/Keylight) certificate, and then click **Open** to upload the certificate.
-
- e. Set **SAML User Id location** to **NameIdentifier element of the subject statement**.
-
- f. Provide the **Service Provider Entity Id** using the following pattern: `https://<CompanyName>.keylightgrc.com`.
-
- g. Set **Auto-provision users** to **Active**.
-
- h. Set **Auto-provision account type** to **Full User**.
-
- i. Set **Auto-provision security role**, select **Standard User with SAML**.
-
- j. Set **Auto-provision security config**, select **Standard User Configuration**.
-
- k. In the **Email attribute** textbox, type `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress`.
-
- l. In the **First name attribute** textbox, type `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname`.
-
- m. In the **Last name attribute** textbox, type `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname`.
-
- n. Click **Save**.
-
-### Create NAVEX IRM (Lockpath/Keylight) test user
-
-In this section, a user called Britta Simon is created in NAVEX IRM (Lockpath/Keylight). NAVEX IRM (Lockpath/Keylight) supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in NAVEX IRM (Lockpath/Keylight), a new one is created after authentication. If you need to create a user manually, you need to contact the [NAVEX IRM (Lockpath/Keylight) Customer support team](https://www.lockpath.com/contact/).
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration with following options.
-
-* Click on **Test this application** in Azure portal. This will redirect to NAVEX IRM (Lockpath/Keylight) Sign-on URL where you can initiate the login flow.
-
-* Go to NAVEX IRM (Lockpath/Keylight) Sign-on URL directly and initiate the login flow from there.
-
-* You can use Microsoft My Apps. When you click the NAVEX IRM (Lockpath/Keylight) tile in the My Apps, this will redirect to NAVEX IRM (Lockpath/Keylight) Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Next steps
-
-Once you configure NAVEX IRM (Lockpath/Keylight) you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Kronos Workforce Dimensions Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kronos-workforce-dimensions-tutorial.md
Previously updated : 01/27/2021 Last updated : 10/10/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
1. On the **Basic SAML Configuration** section, perform the following steps:
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
- ![The Certificate download link](common/copy-metadataurl.png)
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
To configure single sign-on on **Kronos Workforce Dimensions** side, you need to send the **App Federation Metadata Url** to [Kronos Workforce Dimensions support team](mailto:support@kronos.com). They set this setting to have the SAML SSO connection set properly on both sides.
-### Create Kronos Workforce Dimensions test user
+## Create Kronos Workforce Dimensions test user
In this section, you create a user called Britta Simon in Kronos Workforce Dimensions. Work with [Kronos Workforce Dimensions support team](mailto:support@kronos.com) to add the users in the Kronos Workforce Dimensions platform. Users must be created and activated before you use single sign-on.
+> [!NOTE]
+> The original Microsoft documentation advises you to contact UKG Support via email to create your Azure AD users. While this option is available, please consider the following self-service options.
+
+### Manual Process
+
+There are two ways to manually create your Azure AD users in WFD. The first is to select an existing user, duplicate them, and then update the necessary fields to make the new user unique. This process can be time consuming and requires knowledge of the WFD user interface. The alternative is to create the user via the WFD API, which is much quicker but requires knowledge of API tools such as Postman to send the request to the API. The following instructions will assist with importing a prebuilt example into the Postman API tool; a PowerShell equivalent of the two API calls appears after these steps.
+
+#### Setup
+
+1. Open the Postman tool and import the following files:
+
+ a. Workforce Dimensions - Create User.postman_collection.json
+
+ b. AAD to WFD Env Variables.json
+
+1. In the left-pane, select the **Environments** button.
+
+1. Click on **AAD_to_WFD_Env_Variables** and add the values provided by UKG Support pertaining to your WFD instance.
+
+ > [!NOTE]
+ > access_token and refresh_token should be empty as these will automatically populate as a result of the Obtain Access Token HTTP Request.
+
+1. Open the **Create Azure AD User in WFD** HTTP Request and update the highlighted properties within the JSON payload:
+
+ ```json
+ {
+   "personInformation": {
+     "accessAssignment": {
+       "accessProfileName": "accessProfileName",
+       "notificationProfileName": "All"
+     },
+     "emailAddresses": [
+       {
+         "address": "address",
+         "contactTypeName": "Work"
+       }
+     ],
+     "employmentStatusList": [
+       {
+         "effectiveDate": "2019-08-15",
+         "employmentStatusName": "Active",
+         "expirationDate": "3000-01-01"
+       }
+     ],
+     "person": {
+       "personNumber": "personNumber",
+       "firstName": "firstName",
+       "lastName": "lastName",
+       "fullName": "fullName",
+       "hireDate": "2019-08-15",
+       "shortName": "shortName"
+     },
+     "personAuthenticationTypes": [
+       {
+         "activeFlag": true,
+         "authenticationTypeName": "Federated"
+       }
+     ],
+     "personLicenseTypes": [
+       {
+         "activeFlag": true,
+         "licenseTypeName": "Employee"
+       },
+       {
+         "activeFlag": true,
+         "licenseTypeName": "Absence"
+       },
+       {
+         "activeFlag": true,
+         "licenseTypeName": "Hourly Timekeeping"
+       },
+       {
+         "activeFlag": true,
+         "licenseTypeName": "Scheduling"
+       }
+     ],
+     "userAccountStatusList": [
+       {
+         "effectiveDate": "2019-08-15",
+         "expirationDate": "3000-01-01",
+         "userAccountStatusName": "Active"
+       }
+     ]
+   },
+   "jobAssignment": {
+     "baseWageRates": [
+       {
+         "effectiveDate": "2019-01-01",
+         "expirationDate": "3000-01-01",
+         "hourlyRate": 20.15
+       }
+     ],
+     "jobAssignmentDetails": {
+       "payRuleName": "payRuleName",
+       "timeZoneName": "timeZoneName"
+     },
+     "primaryLaborAccounts": [
+       {
+         "effectiveDate": "2019-08-15",
+         "expirationDate": "3000-01-01",
+         "organizationPath": "organizationPath"
+       }
+     ]
+   },
+   "user": {
+     "userAccount": {
+       "logonProfileName": "Default",
+       "userName": "userName"
+     }
+   }
+ }
+ ```
+
+ > [!NOTE]
+ > The personInformation.emailAddresses.address and the user.userAccount.userName must both match the targeted Azure AD user you are trying to create in WFD.
+
+1. In the upper-right corner, select the **Environments** drop-down box and select **AAD_to_WFD_Env_Variables**.
+
+1. Once the JSON payload has been updated and the correct environment variables selected, select the **Obtain Access Token** HTTP Request and click the **Send** button. This will leverage the updated environment variables to authenticate to your WFD instance and then cache your access token in the environment variables to use when calling the create user method.
+
+1. If the authentication call was successful, you should see a 200 response with an access token returned. This access token will also now show in the **CURRENT VALUE** column in the environment variables for the **access_token** entry.
+
+ > [!NOTE]
+ > If an access_token is not received, confirm that all of the values in the environment variables are correct. The user credentials should be for a super user account.
+
+1. Once an **access_token** is obtained, select the **Create Azure AD User in WFD** HTTP Request and click the **Send** button. If the request is successful, you will receive a 200 HTTP status back.
+
+1. Log in to WFD with the **Super User** account and confirm the new Azure AD user was created within the WFD instance.
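If you prefer to make the same two calls without Postman, the following is a minimal PowerShell sketch. The endpoint paths, and the `new-wfd-user.json` file holding the payload shown above, are assumptions for illustration only; confirm the exact token and person-creation routes with the UKG documentation for your instance. The `$vanityUrl`, `$appKey`, `$client_id`, and `$client_secret` values are the ones provided by UKG Support.

```azurepowershell
# Minimal sketch of the two Postman calls. Endpoint paths are assumptions --
# confirm them with UKG documentation before use.
$vanityUrl     = "https://<YOUR-VANITY-URL>"
$appKey        = "<APP KEY>"
$client_id     = "<CLIENT ID>"
$client_secret = "<CLIENT SECRET>"
$cred          = Get-Credential -Message "WFD super user account"

# 1. Obtain an access token (equivalent of the 'Obtain Access Token' request).
$tokenBody = @{
    username      = $cred.UserName
    password      = $cred.GetNetworkCredential().Password
    client_id     = $client_id
    client_secret = $client_secret
    grant_type    = "password"
}
$token = (Invoke-RestMethod -Method Post -Uri "$vanityUrl/api/authentication/access_token" `
    -Headers @{ appkey = $appKey } -ContentType "application/x-www-form-urlencoded" -Body $tokenBody).access_token

# 2. Send the JSON payload (equivalent of the 'Create Azure AD User in WFD' request).
$payload = Get-Content -Raw -Path ".\new-wfd-user.json"   # the payload shown above, saved locally (hypothetical file name)
Invoke-RestMethod -Method Post -Uri "$vanityUrl/api/v1/commons/persons" `
    -Headers @{ Authorization = "Bearer $token"; appkey = $appKey } `
    -ContentType "application/json" -Body $payload
```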
+
+### Automated Process
+
+The automated process consists of a flat file in CSV format, which allows you to prespecify the highlighted values in the payload from the manual API process above. The flat file is consumed by the accompanying PowerShell script, which creates the new WFD users in bulk. The script processes new user creations in batches of 70 (the default), which is configurable for optimal performance. The following instructions walk through the setup and execution of the script; a minimal sketch of the batching logic follows the steps.
+
+1. Save both the **AAD_To_WFD.csv** and **AAD_To_WFD.ps1** files locally to your computer.
+
+1. Open the **AAD_To_WFD.csv** file and fill in the columns.
+
+ * **personInformation.accessAssignment.accessProfileName**: Specific Access Profile Name from WFD instance.
+
+ * **personInformation.emailAddresses.address**:
+      Must match the User Principal Name in Azure Active Directory.
+
+ * **personInformation.personNumber**: Must be unique across the WFD instance.
+
+    * **personInformation.firstName**: User's first name.
+
+    * **personInformation.lastName**: User's last name.
+
+ * **jobAssignment.jobAssignmentDetails.payRuleName**: Specific Pay Rule Name from WFD.
+
+    * **jobAssignment.jobAssignmentDetails.timeZoneName**: The time zone format must match the WFD instance, for example, (GMT -08:00) Pacific Time.
+
+ * **jobAssignment.primaryLaborAccounts.organizationPath**: Organization Path of a specific Business structure in the WFD instance.
+
+1. Save the .csv file.
+
+1. Right-click the **AAD_To_WFD.ps1** script and click **Edit** to modify it.
+
+1. Confirm the path specified in Line 15 is the correct name/path to the **AAD_To_WFD.csv** file.
+
+1. Update the following lines with the values provided by UKG Support pertaining to your WFD instance.
+
+ * Line 33: vanityUrl
+
+ * Line 43: appKey
+
+ * Line 48: client_id
+
+ * Line 49: client_secret
+
+1. Save and execute the script.
+
+1. Provide WFD **Super User** credentials when prompted.
+
+1. Once completed, the script will return a list of any users that failed to create.
+
+> [!NOTE]
+> If a user is returned in the failure list, check the values provided for that user in the AAD_To_WFD.csv file; failures are typically the result of typos or of fields that don't match the WFD instance. The error can also be returned by the WFD API if all users in the batch already exist in the instance.
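For reference, the following is a minimal sketch of the batching logic described above, not the actual script: it reads the CSV, processes rows in configurable batches, and collects failures. `New-WfdUser` is a hypothetical helper that wraps the create-user call shown in the manual process, and the column name follows the AAD_To_WFD.csv headers listed earlier.

```azurepowershell
# Minimal sketch of the batching logic (assumes New-WfdUser is a helper you
# define around the WFD create-user call from the manual process above).
$batchSize = 70                                   # default batch size; adjust for performance
$rows      = @(Import-Csv -Path ".\AAD_To_WFD.csv")
$failed    = @()

for ($i = 0; $i -lt $rows.Count; $i += $batchSize) {
    $batch = $rows[$i..([Math]::Min($i + $batchSize, $rows.Count) - 1)]
    foreach ($row in $batch) {
        try {
            # Build the JSON payload for this user from the CSV columns and send it to the WFD API.
            New-WfdUser -Row $row                 # hypothetical wrapper around Invoke-RestMethod
        }
        catch {
            $failed += $row.'personInformation.emailAddresses.address'
        }
    }
}

# Report any users that failed to create, as the script does.
$failed
```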
+ ## Test SSO In this section, you test your Azure AD single sign-on configuration with the following options.
active-directory Navex Irm Keylight Lockpath Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/navex-irm-keylight-lockpath-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory integration with NAVEX IRM (Lockpath/Keylight) | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and NAVEX IRM (Lockpath/Keylight).
++++++++ Last updated : 09/09/2022++
+# Tutorial: Azure Active Directory integration with NAVEX IRM (Lockpath/Keylight)
+
+In this tutorial, you'll learn how to integrate NAVEX IRM (Lockpath/Keylight) with Azure Active Directory (Azure AD). When you integrate NAVEX IRM (Lockpath/Keylight) with Azure AD, you can:
+
+* Control in Azure AD who has access to NAVEX IRM (Lockpath/Keylight).
+* Enable your users to be automatically signed-in to NAVEX IRM (Lockpath/Keylight) with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To configure Azure AD integration with NAVEX IRM (Lockpath/Keylight), you need the following items:
+
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* NAVEX IRM (Lockpath/Keylight) single sign-on enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD single sign-on in a test environment.
+
+* NAVEX IRM (Lockpath/Keylight) supports **SP** initiated SSO.
+* NAVEX IRM (Lockpath/Keylight) supports **Just In Time** user provisioning.
+
+## Add NAVEX IRM (Lockpath/Keylight) from the gallery
+
+To configure the integration of NAVEX IRM (Lockpath/Keylight) into Azure AD, you need to add NAVEX IRM (Lockpath/Keylight) from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **NAVEX IRM (Lockpath/Keylight)** in the search box.
+1. Select **NAVEX IRM (Lockpath/Keylight)** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+ Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for NAVEX IRM (Lockpath/Keylight)
+
+Configure and test Azure AD SSO with NAVEX IRM (Lockpath/Keylight) using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in NAVEX IRM (Lockpath/Keylight).
+
+To configure and test Azure AD SSO with NAVEX IRM (Lockpath/Keylight), perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure NAVEX IRM (Lockpath/Keylight) SSO](#configure-navex-irm-lockpathkeylight-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create NAVEX IRM (Lockpath/Keylight) test user](#create-navex-irm-lockpathkeylight-test-user)** - to have a counterpart of B.Simon in NAVEX IRM (Lockpath/Keylight) that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+In this section, you enable Azure AD single sign-on in the Azure portal.
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **NAVEX IRM (Lockpath/Keylight)** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+4. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<COMPANY_NAME>.keylightgrc.com`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern: `https://<COMPANY_NAME>.keylightgrc.com/Login.aspx`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<COMPANY_NAME>.keylightgrc.com/`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [NAVEX IRM (Lockpath/Keylight) Client support team](https://www.lockpath.com/contact/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Raw)** from the given options as per your requirement and save it on your computer.
+
+ ![The Certificate download link](common/certificateraw.png)
+
+6. On the **Set up NAVEX IRM (Lockpath/Keylight)** section, copy the appropriate URL(s) as per your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to NAVEX IRM (Lockpath/Keylight).
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **NAVEX IRM (Lockpath/Keylight)**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure NAVEX IRM (Lockpath/Keylight) SSO
+
+1. To enable SSO in NAVEX IRM (Lockpath/Keylight), perform the following steps:
+
+ a. Sign on to your NAVEX IRM (Lockpath/Keylight) account as administrator.
+
+ b. In the menu on the top, click **User Icon**, and select **Setup**.
+
+ ![Screenshot that shows the "Person" icon selected, and "Keylight Setup" selected from the drop-down.](./media/keylight-tutorial/setup-icon.png)
+
+ c. In the treeview on the left, click **SAML**.
+
+ ![Screenshot that shows "S A M L" selected in the tree view.](./media/keylight-tutorial/tree-view.png)
+
+ d. On the **SAML Settings** dialog, click **Edit**.
+
+ ![Screenshot that shows the "S A M L Settings" window with the "Edit" button selected.](./media/keylight-tutorial/edit-icon.png)
+
+1. On the **Edit SAML Settings** dialog page, perform the following steps:
+
+ ![Configure Single Sign-On](./media/keylight-tutorial/settings.png)
+
+ a. Set **SAML authentication** to **Active**.
+
+ b. In the **Identity Provider Login URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
+ c. In the **Identity Provider Logout URL** textbox, paste the **Logout URL** value which you have copied from the Azure portal.
+
+ d. Click **Choose File** to select your downloaded NAVEX IRM (Lockpath/Keylight) certificate, and then click **Open** to upload the certificate.
+
+ e. Set **SAML User Id location** to **NameIdentifier element of the subject statement**.
+
+ f. Provide the **Service Provider Entity Id** using the following pattern: `https://<CompanyName>.keylightgrc.com`.
+
+ g. Set **Auto-provision users** to **Active**.
+
+ h. Set **Auto-provision account type** to **Full User**.
+
+ i. Set **Auto-provision security role**, select **Standard User with SAML**.
+
+ j. Set **Auto-provision security config**, select **Standard User Configuration**.
+
+ k. In the **Email attribute** textbox, type `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress`.
+
+ l. In the **First name attribute** textbox, type `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname`.
+
+ m. In the **Last name attribute** textbox, type `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname`.
+
+ n. Click **Save**.
+
+### Create NAVEX IRM (Lockpath/Keylight) test user
+
+In this section, a user called Britta Simon is created in NAVEX IRM (Lockpath/Keylight). NAVEX IRM (Lockpath/Keylight) supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in NAVEX IRM (Lockpath/Keylight), a new one is created after authentication. If you need to create a user manually, you need to contact the [NAVEX IRM (Lockpath/Keylight) Customer support team](https://www.lockpath.com/contact/).
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to NAVEX IRM (Lockpath/Keylight) Sign-on URL where you can initiate the login flow.
+
+* Go to NAVEX IRM (Lockpath/Keylight) Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the NAVEX IRM (Lockpath/Keylight) tile in the My Apps, this will redirect to NAVEX IRM (Lockpath/Keylight) Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure NAVEX IRM (Lockpath/Keylight), you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Nordpass Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nordpass-provisioning-tutorial.md
# Tutorial: Configure NordPass for automatic user provisioning
-This tutorial describes the steps you need to perform in both NordPass and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [NordPass](https://nordpass.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both NordPass and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [NordPass](https://nordpass.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities supported
Add NordPass from the Azure AD application gallery to start managing provisionin
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. ## Step 5. Configure automatic user provisioning to NordPass
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in NordPass based on user and/or group assignments in Azure AD.
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in NordPass based on user assignments in Azure AD.
### To configure automatic user provisioning for NordPass in Azure AD:
This section guides you through the steps to configure the Azure AD provisioning
![Token](common/provisioning-testconnection-tenanturltoken.png)
-1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
![Notification Email](common/provisioning-notification-email.png)
This section guides you through the steps to configure the Azure AD provisioning
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-1. Define the users and/or groups that you would like to provision to NordPass by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users that you would like to provision to NordPass by choosing the desired values in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
This section guides you through the steps to configure the Azure AD provisioning
![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
-This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
## Step 6. Monitor your deployment Once you've configured provisioning, use the following resources to monitor your deployment:
active-directory Memo 22 09 Meet Identity Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-meet-identity-requirements.md
The article series features guidance that encompasses existing agency investment
* MFA must be enforced at the application layer instead of the network layer.
- * For agency staff, contractors, and partners, phishing-resistant MFA is required. For public users, phishing-resistant MFA must be an option.
-
-* Password policies must not require the use of special characters or regular rotation.
+ * For agency staff, contractors, and partners, phishing-resistant MFA is required.
+
+ * For public users, phishing-resistant MFA must be an option.
+
+ * Password policies must not require the use of special characters or regular rotation.
* When agencies are authorizing users to access resources, they must consider at least one device-level signal alongside identity information about the authenticated user.
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
Now that you have a new credential, you're going to gather some information abou
1. Copy your **Tenant ID**, and record it for later. The Tenant ID is the guid in the manifest URL highlighted in red above.
+ >[!NOTE]
+ > When setting up access policies for Azure Key Vault, you must add the access policies for both **Verifiable Credentials Service Request** and **Verifiable Credentials Service**.
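If you script your Key Vault configuration, the following is a minimal Azure PowerShell sketch of adding both access policies. It assumes the Az module is installed and that the two service principals exist in your tenant under the display names shown above; the **Get** and **Sign** key permissions follow the access-policy guidance in this article, so confirm the exact permission set your configuration needs.

```azurepowershell
# Minimal sketch: grant Key Vault access to both Verified ID service principals.
$vaultName = "<YOUR-KEY-VAULT-NAME>"

foreach ($name in "Verifiable Credentials Service Request", "Verifiable Credentials Service") {
    # Look up the service principal by display name and grant Get and Sign key permissions.
    $sp = Get-AzADServicePrincipal -DisplayName $name
    Set-AzKeyVaultAccessPolicy -VaultName $vaultName -ObjectId $sp.Id -PermissionsToKeys get, sign
}
```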
+ ## Download the sample code The sample application is available in .NET, and the code is maintained in a GitHub repository. Download the sample code from [GitHub](https://github.com/Azure-Samples/active-directory-verifiable-credentials-dotnet), or clone the repository to your local machine:
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
Title: Tutorial - Configure your tenant for Microsoft Entra Verified ID
-description: In this tutorial, you learn how to configure your tenant to support the Verifiable Credentials service.
+description: In this tutorial, you learn how to configure your tenant to support the Verified ID service.
Specifically, you learn how to:
> [!div class="checklist"] > - Create an Azure Key Vault instance.
-> - Set up the Verifiable Credentials service.
+> - Set up the Verified ID service.
> - Register an application in Azure AD. The following diagram illustrates the Verified ID architecture and the component you configure. :::image type="content" source="media/verifiable-credentials-configure-tenant/verifiable-credentials-architecture.png" alt-text="Diagram that illustrates the Microsoft Entra Verified ID architecture." border="false"::: - ## Prerequisites - You need an Azure tenant with an active subscription. If you don't have Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
The Verifiable credentials service request is the Request Service API, and it ne
1. For **Key permissions**, select permissions **Get** and **Sign**.
- ![screenshot of key vault granting access to a security principal](media/verifiable-credentials-configure-tenant/set-key-vault-sp-access-policy.png)
+ :::image type="content" source="media/verifiable-credentials-configure-tenant/set-key-vault-sp-access-policy.png" alt-text="screenshot of key vault granting access to a security principal":::
+
+1. To save the changes, select **Add**.
-1. To save the changes, select **Save**.
## Set up Verified ID
To set up Verified ID, follow these steps:
1. Set up your organization by providing the following information:
- 1. **Organization name**: Enter a name to reference your business within Verifiable Credentials. Your customers don't see this name.
+ 1. **Organization name**: Enter a name to reference your business within Verified ID. Your customers don't see this name.
1. **Domain**: Enter a domain that's added to a service endpoint in your decentralized identity (DID) document. The domain is what binds your DID to something tangible that the user might know about your business. Microsoft Authenticator and other digital wallets use this information to validate that your DID is linked to your domain. If the wallet can verify the DID, it displays a verified symbol. If the wallet can't verify the DID, it informs the user that the credential was issued by an organization it couldn't validate.
-
+ >[!IMPORTANT] > The domain can't be a redirect. Otherwise, the DID and domain can't be linked. Make sure to use HTTPS for the domain. For example: `https://contoso.com`. 1. **Key vault**: Select the key vault that you created earlier. 1. Under **Advanced**, you may choose the **trust system** that you want to use for your tenant. You can choose from either **Web** or **ION**. Web means your tenant uses [did:web](https://w3c-ccg.github.io/did-method-web/) as the did method and ION means it uses [did:ion](https://identity.foundation/ion/).
-
+ >[!IMPORTANT]
- > The only way to change the trust system is to opt-out of verifiable credentials and redo the onboarding.
+ > The only way to change the trust system is to opt out of the Verified ID service and redo the onboarding.
1. Select **Save and get started**.
-
- ![Screenshots that shows how to set up Verifiable Credentials.](media/verifiable-credentials-configure-tenant/verifiable-credentials-getting-started.png)
+
+ :::image type="content" source="media/verifiable-credentials-configure-tenant/verifiable-credentials-getting-started.png" alt-text="Screenshot that shows how to set up Verifiable Credentials.":::
## Register an application in Azure AD
Your application needs to get access tokens when it wants to call into Microsoft
1. Under **Manage**, select **App registrations** > **New registration**.
- ![Screenshot that shows how to select a new application registration.](media/verifiable-credentials-configure-tenant/register-azure-ad-app.png)
+ :::image type="content" source="media/verifiable-credentials-configure-tenant/register-azure-ad-app.png" alt-text="Screenshot that shows how to select a new application registration.":::
1. Enter a display name for your application. For example: *verifiable-credentials-app*.
Your application needs to get access tokens when it wants to call into Microsoft
1. Select **Register** to create the application.
- ![Screenshot that shows how to register the verifiable credentials app.](media/verifiable-credentials-configure-tenant/register-azure-ad-app-properties.png)
+ :::image type="content" source="media/verifiable-credentials-configure-tenant/register-azure-ad-app-properties.png" alt-text="Screenshot that shows how to register the verifiable credentials app.":::
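If you'd rather script the app registration than use the portal, a minimal Azure CLI sketch follows. The display name mirrors the example above, and the query only captures the application (client) ID for later use:

```azurecli
# Register the application and print its application (client) ID.
az ad app create --display-name verifiable-credentials-app --query appId --output tsv
```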
### Grant permissions to get access tokens
In this step, you grant permissions to the **Verifiable Credentials Service Requ
To add the required permissions, follow these steps: 1. Stay in the **verifiable-credentials-app** application details page. Select **API permissions** > **Add a permission**.
-
- ![Screenshot that shows how to add permissions to the verifiable credentials app.](media/verifiable-credentials-configure-tenant/add-app-api-permissions.png)
+
+ :::image type="content" source="media/verifiable-credentials-configure-tenant/add-app-api-permissions.png" alt-text="Screenshot that shows how to add permissions to the verifiable credentials app.":::
1. Select **APIs my organization uses**. 1. Search for the **Verifiable Credentials Service Request** and **Verifiable Credentials Service** service principals, and select them.
-
- ![Screenshot that shows how to select the service principal.](media/verifiable-credentials-configure-tenant/add-app-api-permissions-select-service-principal.png)
+
+ :::image type="content" source="media/verifiable-credentials-configure-tenant/add-app-api-permissions-select-service-principal.png" alt-text="Screenshot that shows how to select the service principal.":::
1. Choose **Application Permission**, and expand **VerifiableCredential.Create.All**.
- ![Screenshot that shows how to select the required permissions.](media/verifiable-credentials-configure-tenant/add-app-api-permissions-verifiable-credentials.png)
+ :::image type="content" source="media/verifiable-credentials-configure-tenant/add-app-api-permissions-verifiable-credentials.png" alt-text="Screenshot that shows how to select the required permissions.":::
1. Select **Add permissions**. 1. Select **Grant admin consent for \<your tenant name\>**.
+You can choose to grant issuance and presentation permissions separately if you prefer to segregate the scopes to different applications.
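If you automate this step, admin consent can also be granted from the Azure CLI after the permissions are configured on the app registration. A hedged sketch, where `<application-id>` is a placeholder for the app registration's application (client) ID:

```azurecli
# Grant tenant-wide admin consent for the configured API permissions (requires an admin role).
az ad app permission admin-consent --id <application-id>
```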
++ ## Service endpoint configuration
-1. Navigate to the Verified ID in the Azure portal.
+
+1. Navigate to the Verified ID service in the Azure portal.
1. Select **Registration**. 1. Notice that there are two sections: 1. Website ID registration
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
This article lists the latest features, improvements, and changes in the Microsoft Entra Verified ID service.
+## September 2022
+
+- The Request Service API now has [granular app permissions](verifiable-credentials-configure-tenant.md?#grant-permissions-to-get-access-tokens), and you can grant **VerifiableCredential.Create.IssueRequest** and **VerifiableCredential.Create.PresentRequest** separately to segregate the duties of issuance and presentation across separate applications.
+- The [IDV Partner Gallery](partner-gallery.md) is now available in the documentation, guiding you through integrating with Microsoft's Identity Verification partners.
+- How-to guide for implementing the [presentation attestation flow](how-to-use-quickstart-presentation.md) that requires presenting a verifiable credential during issuance.
+ ## August 2022 Microsoft Entra Verified ID is now generally available (GA) as the new member of the Microsoft Entra portfolio! [read more](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-verified-id-now-generally-available/ba-p/3295506)
advisor Advisor Reference Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md
Title: Cost recommendations description: Full list of available cost recommendations in Advisor. + Last updated 02/04/2022
Our internal telemetry shows that the PostgreSQL database server resources have
Learn more about [PostgreSQL server - OrcasPostgreSqlCpuRightSize (Right-size underutilized PostgreSQL servers)](https://aka.ms/postgresqlpricing).
-## Cosmos DB
+## Azure Cosmos DB
### Review the configuration of your Azure Cosmos DB free tier account
-Your Azure Cosmos DB free tier account is currently containing resources with a total provisioned throughput exceeding 1000 Request Units per second (RU/s). Because Azure Cosmos DB's free tier only covers the first 1000 RU/s of throughput provisioned across your account, any throughput beyond 1000 RU/s will be billed at the regular pricing. As a result, we anticipate that you will get charged for the throughput currently provisioned on your Azure Cosmos DB account.
+Your Azure Cosmos DB free tier account currently contains resources with a total provisioned throughput exceeding 1000 Request Units per second (RU/s). Because the free tier only covers the first 1000 RU/s of throughput provisioned across your account, any throughput beyond 1000 RU/s is billed at the regular pricing. As a result, we anticipate that you will be charged for the throughput currently provisioned on your Azure Cosmos DB account.
-Learn more about [Cosmos DB account - CosmosDBFreeTierOverage (Review the configuration of your Azure Cosmos DB free tier account)](../cosmos-db/understand-your-bill.md#azure-free-tier).
+Learn more about [Azure Cosmos DB account - CosmosDBFreeTierOverage (Review the configuration of your Azure Cosmos DB free tier account)](../cosmos-db/understand-your-bill.md#azure-free-tier).
### Consider taking action on your idle Azure Cosmos DB containers We haven't detected any activity over the past 30 days on one or more of your Azure Cosmos DB containers. Consider lowering their throughput, or deleting them if you don't plan on using them.
-Learn more about [Cosmos DB account - CosmosDBIdleContainers (Consider taking action on your idle Azure Cosmos DB containers)](/azure/cosmos-db/how-to-provision-container-throughput).
+Learn more about [Azure Cosmos DB account - CosmosDBIdleContainers (Consider taking action on your idle Azure Cosmos DB containers)](/azure/cosmos-db/how-to-provision-container-throughput).
### Enable autoscale on your Azure Cosmos DB database or container Based on your usage in the past 7 days, you can save by enabling autoscale. For each hour, we compared the RU/s provisioned to the actual utilization of the RU/s (what autoscale would have scaled to) and calculated the cost savings across the time period. Autoscale helps optimize your cost by scaling down RU/s when not in use.
-Learn more about [Cosmos DB account - CosmosDBAutoscaleRecommendations (Enable autoscale on your Azure Cosmos DB database or container)](../cosmos-db/provision-throughput-autoscale.md).
+Learn more about [Azure Cosmos DB account - CosmosDBAutoscaleRecommendations (Enable autoscale on your Azure Cosmos DB database or container)](../cosmos-db/provision-throughput-autoscale.md).
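As an illustration, an existing container can be migrated from standard (manual) throughput to autoscale with the Azure CLI. This is a sketch with placeholder resource names:

```azurecli
# Migrate a container's provisioned throughput from manual to autoscale.
az cosmosdb sql container throughput migrate \
    --account-name <cosmos-account> \
    --resource-group <resource-group> \
    --database-name <database> \
    --name <container> \
    --throughput-type autoscale
```

Running the same command with `--throughput-type manual` reverses the migration, which is what the next recommendation suggests when autoscale isn't cost-effective.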
### Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container Based on your usage in the past 7 days, you can save by using manual throughput instead of autoscale. Manual throughput is more cost-effective when average utilization of your max throughput (RU/s) is greater than 66% or less than or equal to 10%.
-Learn more about [Cosmos DB account - CosmosDBMigrateToManualThroughputFromAutoscale (Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container)](../cosmos-db/how-to-choose-offer.md).
+Learn more about [Azure Cosmos DB account - CosmosDBMigrateToManualThroughputFromAutoscale (Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container)](../cosmos-db/how-to-choose-offer.md).
## Data Explorer
Reserved instances can provide a significant discount over pay-as-you-go prices.
Learn more about [Virtual machine - ReservedInstance (Buy virtual machine reserved instances to save money over pay-as-you-go costs)](https://aka.ms/reservedinstances).
-### Consider Cosmos DB reserved instance to save over your pay-as-you-go costs
+### Consider Azure Cosmos DB reserved instance to save over your pay-as-you-go costs
-We analyzed your Cosmos DB usage pattern over last 30 days and calculate reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase Cosmos DB hourly usage and save over your pay-as-you-go costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and usage pattern over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings even more.
+We analyzed your Azure Cosmos DB usage pattern over the last 30 days and calculated the reserved instance purchase that maximizes your savings. With a reserved instance, you can pre-purchase Azure Cosmos DB hourly usage and save over your pay-as-you-go costs. A reserved instance is a billing benefit and automatically applies to new or existing deployments. Savings estimates are calculated for individual subscriptions and the usage pattern over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings even more.
-Learn more about [Subscription - CosmosDBReservedCapacity (Consider Cosmos DB reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+Learn more about [Subscription - CosmosDBReservedCapacity (Consider Azure Cosmos DB reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
### Consider SQL PaaS DB reserved instance to save over your pay-as-you-go costs
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
Title: Operational excellence recommendations description: Operational excellence recommendations + Last updated 02/02/2022
We have determined that too many of your host pools have Validation Environment
Learn more about [Host Pool - ProductionEnvHostPools (Not enough production environments enabled)](../virtual-desktop/create-host-pools-powershell.md).
-## Cosmos DB
+## Azure Cosmos DB
### Migrate Azure Cosmos DB attachments to Azure Blob Storage
-We noticed that your Azure Cosmos collection is using the legacy attachments feature. We recommend migrating attachments to Azure Blob Storage to improve the resiliency and scalability of your blob data.
+We noticed that your Azure Cosmos DB collection is using the legacy attachments feature. We recommend migrating attachments to Azure Blob Storage to improve the resiliency and scalability of your blob data.
-Learn more about [Cosmos DB account - CosmosDBAttachments (Migrate Azure Cosmos DB attachments to Azure Blob Storage)](../cosmos-db/attachments.md#migrating-attachments-to-azure-blob-storage).
+Learn more about [Azure Cosmos DB account - CosmosDBAttachments (Migrate Azure Cosmos DB attachments to Azure Blob Storage)](../cosmos-db/attachments.md#migrating-attachments-to-azure-blob-storage).
### Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup Your Azure Cosmos DB accounts are configured with periodic backup. Continuous backup with point-in-time restore is now available on these accounts. With continuous backup, you can restore your data to any point in time within the past 30 days. Continuous backup may also be more cost-effective as a single copy of your data is retained.
-Learn more about [Cosmos DB account - CosmosDBMigrateToContinuousBackup (Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup)](../cosmos-db/continuous-backup-restore-introduction.md).
+Learn more about [Azure Cosmos DB account - CosmosDBMigrateToContinuousBackup (Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup)](../cosmos-db/continuous-backup-restore-introduction.md).
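A hedged sketch of switching an existing account from periodic to continuous backup with the Azure CLI, using placeholder names:

```azurecli
# Migrate the account's backup policy from periodic to continuous.
az cosmosdb update \
    --name <cosmos-account> \
    --resource-group <resource-group> \
    --backup-policy-type Continuous
```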
## Monitor
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
Title: Performance recommendations description: Full list of available performance recommendations in Advisor. + Last updated 02/03/2022
Depth first load balancing uses the max session limit to determine the maximum n
Learn more about [Host Pool - ChangeMaxSessionLimitForDepthFirstHostPool (Change the max session limit for your depth first load balanced host pool to improve VM performance )](../virtual-desktop/configure-host-pool-load-balancing.md).
-## Cosmos DB
+## Azure Cosmos DB
### Configure your Azure Cosmos DB query page size (MaxItemCount) to -1
-You are using the query page size of 100 for queries for your Azure Cosmos container. We recommend using a page size of -1 for faster scans.
+You are using the query page size of 100 for queries for your Azure Cosmos DB container. We recommend using a page size of -1 for faster scans.
-Learn more about [Cosmos DB account - CosmosDBQueryPageSize (Configure your Azure Cosmos DB query page size (MaxItemCount) to -1)](/azure/cosmos-db/sql-api-query-metrics#max-item-count).
+Learn more about [Azure Cosmos DB account - CosmosDBQueryPageSize (Configure your Azure Cosmos DB query page size (MaxItemCount) to -1)](/azure/cosmos-db/sql-api-query-metrics#max-item-count).
### Add composite indexes to your Azure Cosmos DB container Your Azure Cosmos DB containers are running ORDER BY queries incurring high Request Unit (RU) charges. It is recommended to add composite indexes to your containers' indexing policy to improve the RU consumption and decrease the latency of these queries.
-Learn more about [Cosmos DB account - CosmosDBOrderByHighRUCharge (Add composite indexes to your Azure Cosmos DB container)](../cosmos-db/index-policy.md#composite-indexes).
+Learn more about [Azure Cosmos DB account - CosmosDBOrderByHighRUCharge (Add composite indexes to your Azure Cosmos DB container)](../cosmos-db/index-policy.md#composite-indexes).
### Optimize your Azure Cosmos DB indexing policy to only index what's needed Your Azure Cosmos DB containers are using the default indexing policy, which indexes every property in your documents. Because you're storing large documents, a high number of properties get indexed, resulting in high Request Unit consumption and poor write latency. To optimize write performance, we recommend overriding the default indexing policy to only index the properties used in your queries.
-Learn more about [Cosmos DB account - CosmosDBDefaultIndexingWithManyPaths (Optimize your Azure Cosmos DB indexing policy to only index what's needed)](../cosmos-db/index-policy.md).
+Learn more about [Azure Cosmos DB account - CosmosDBDefaultIndexingWithManyPaths (Optimize your Azure Cosmos DB indexing policy to only index what's needed)](../cosmos-db/index-policy.md).
### Use hierarchical partition keys for optimal data distribution This account has a custom setting that allows the logical partition size in a container to exceed the limit of 20 GB. This setting was applied by the Azure Cosmos DB team as a temporary measure to give you time to re-architect your application with a different partition key. It is not recommended as a long-term solution, as SLA guarantees are not honored when the limit is increased. You can now use hierarchical partition keys (preview) to re-architect your application. The feature allows you to exceed the 20 GB limit by setting up to three partition keys, ideal for multi-tenant scenarios or workloads that use synthetic keys.
-Learn more about [Cosmos DB account - CosmosDBHierarchicalPartitionKey (Use hierarchical partition keys for optimal data distribution)](https://devblogs.microsoft.com/cosmosdb/hierarchical-partition-keys-private-preview/).
+Learn more about [Azure Cosmos DB account - CosmosDBHierarchicalPartitionKey (Use hierarchical partition keys for optimal data distribution)](https://devblogs.microsoft.com/cosmosdb/hierarchical-partition-keys-private-preview/).
### Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK
-We noticed that your Azure Cosmos DB applications are using Gateway mode via the Cosmos DB .NET or Java SDKs. We recommend switching to Direct connectivity for lower latency and higher scalability.
+We noticed that your Azure Cosmos DB applications are using Gateway mode via the Azure Cosmos DB .NET or Java SDKs. We recommend switching to Direct connectivity for lower latency and higher scalability.
-Learn more about [Cosmos DB account - CosmosDBGatewayMode (Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK)](/azure/cosmos-db/performance-tips#networking).
+Learn more about [Azure Cosmos DB account - CosmosDBGatewayMode (Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK)](/azure/cosmos-db/performance-tips#networking).
## HDInsight
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Title: Reliability recommendations description: Full list of available reliability recommendations in Advisor. + Last updated 02/04/2022
Some or all of your devices are using outdated SDK and we recommend you upgrade
Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](https://aka.ms/iothubsdk).
-## Cosmos DB
+## Azure Cosmos DB
-### Configure Consistent indexing mode on your Azure Cosmos container
+### Configure Consistent indexing mode on your Azure Cosmos DB container
-We noticed that your Azure Cosmos container is configured with the Lazy indexing mode, which may impact the freshness of query results. We recommend switching to Consistent mode.
+We noticed that your Azure Cosmos DB container is configured with the Lazy indexing mode, which may impact the freshness of query results. We recommend switching to Consistent mode.
-Learn more about [Cosmos DB account - CosmosDBLazyIndexing (Configure Consistent indexing mode on your Azure Cosmos container)](/azure/cosmos-db/how-to-manage-indexing-policy).
+Learn more about [Azure Cosmos DB account - CosmosDBLazyIndexing (Configure Consistent indexing mode on your Azure Cosmos DB container)](/azure/cosmos-db/how-to-manage-indexing-policy).
### Upgrade your old Azure Cosmos DB SDK to the latest version Your Azure Cosmos DB account is using an old version of the SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
-Learn more about [Cosmos DB account - CosmosDBUpgradeOldSDK (Upgrade your old Azure Cosmos DB SDK to the latest version)](../cosmos-db/index.yml).
+Learn more about [Azure Cosmos DB account - CosmosDBUpgradeOldSDK (Upgrade your old Azure Cosmos DB SDK to the latest version)](../cosmos-db/index.yml).
### Upgrade your outdated Azure Cosmos DB SDK to the latest version Your Azure Cosmos DB account is using an outdated version of the SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
-Learn more about [Cosmos DB account - CosmosDBUpgradeOutdatedSDK (Upgrade your outdated Azure Cosmos DB SDK to the latest version)](../cosmos-db/index.yml).
+Learn more about [Azure Cosmos DB account - CosmosDBUpgradeOutdatedSDK (Upgrade your outdated Azure Cosmos DB SDK to the latest version)](../cosmos-db/index.yml).
### Configure your Azure Cosmos DB containers with a partition key Your Azure Cosmos DB non-partitioned collections are approaching their provisioned storage quota. Migrate these collections to new collections with a partition key definition so that they can automatically be scaled out by the service.
-Learn more about [Cosmos DB account - CosmosDBFixedCollections (Configure your Azure Cosmos DB containers with a partition key)](../cosmos-db/partitioning-overview.md#choose-partitionkey).
+Learn more about [Azure Cosmos DB account - CosmosDBFixedCollections (Configure your Azure Cosmos DB containers with a partition key)](../cosmos-db/partitioning-overview.md#choose-partitionkey).
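New containers should be created with a partition key so the service can scale them out. A minimal Azure CLI sketch with placeholder names; the partition key path is only an example:

```azurecli
# Create a container with a partition key definition.
az cosmosdb sql container create \
    --account-name <cosmos-account> \
    --resource-group <resource-group> \
    --database-name <database> \
    --name <container> \
    --partition-key-path "/myPartitionKey"
```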
-### Upgrade your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features
+### Upgrade your Azure Cosmos DB for MongoDB account to v4.0 to save on query/storage costs and utilize new features
-Your Azure Cosmos DB API for MongoDB account is eligible to upgrade to version 4.0. Upgrading to v4.0 can reduce your storage costs by up to 55% and your query costs by up to 45% by leveraging a new storage format. Numerous additional features such as multi-document transactions are also included in v4.0.
+Your Azure Cosmos DB for MongoDB account is eligible to upgrade to version 4.0. Upgrading to v4.0 can reduce your storage costs by up to 55% and your query costs by up to 45% by leveraging a new storage format. Numerous additional features such as multi-document transactions are also included in v4.0.
-Learn more about [Cosmos DB account - CosmosDBMongoSelfServeUpgrade (Upgrade your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features)](/azure/cosmos-db/mongodb-version-upgrade).
+Learn more about [Azure Cosmos DB account - CosmosDBMongoSelfServeUpgrade (Upgrade your Azure Cosmos DB for MongoDB account to v4.0 to save on query/storage costs and utilize new features)](/azure/cosmos-db/mongodb-version-upgrade).
### Add a second region to your production workloads on Azure Cosmos DB
Based on their names and configuration, we have detected the Azure Cosmos DB acc
> [!NOTE] > Additional regions will incur extra costs.
-Learn more about [Cosmos DB account - CosmosDBSingleRegionProdAccounts (Add a second region to your production workloads on Azure Cosmos DB)](../cosmos-db/high-availability.md).
+Learn more about [Azure Cosmos DB account - CosmosDBSingleRegionProdAccounts (Add a second region to your production workloads on Azure Cosmos DB)](../cosmos-db/high-availability.md).
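A hedged Azure CLI sketch of adding a second region to an existing account. The region names are examples, and `--locations` replaces the account's full list of regions, so every region must be listed:

```azurecli
# Add a secondary region; the complete set of regions on the account must be specified.
az cosmosdb update \
    --name <cosmos-account> \
    --resource-group <resource-group> \
    --locations regionName=eastus failoverPriority=0 isZoneRedundant=False \
    --locations regionName=westus failoverPriority=1 isZoneRedundant=False
```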
-### Enable Server Side Retry (SSR) on your Azure Cosmos DB's API for MongoDB account
+### Enable Server Side Retry (SSR) on your Azure Cosmos DB for MongoDB account
We observed your account is throwing a TooManyRequests error with the 16500 error code. Enabling Server Side Retry (SSR) can help mitigate this issue for you.
-Learn more about [Cosmos DB account - CosmosDBMongoServerSideRetries (Enable Server Side Retry (SSR) on your Azure Cosmos DB's API for MongoDB account)](/azure/cosmos-db/cassandra/prevent-rate-limiting-errors).
+Learn more about [Azure Cosmos DB account - CosmosDBMongoServerSideRetries (Enable Server Side Retry (SSR) on your Azure Cosmos DB for MongoDB account)](/azure/cosmos-db/cassandra/prevent-rate-limiting-errors).
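Server Side Retry is enabled by adding a capability to the account. A hedged Azure CLI sketch; `--capabilities` sets the complete list, so any capabilities already enabled on the account (such as `EnableMongo`) must be repeated:

```azurecli
# Enable Server Side Retry by adding the DisableRateLimitingResponses capability.
az cosmosdb update \
    --name <cosmos-account> \
    --resource-group <resource-group> \
    --capabilities EnableMongo DisableRateLimitingResponses
```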
-### Migrate your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features
+### Migrate your Azure Cosmos DB for MongoDB account to v4.0 to save on query/storage costs and utilize new features
-Migrate your database account to a new database account to take advantage of Azure Cosmos DB's API for MongoDB v4.0. Upgrading to v4.0 can reduce your storage costs by up to 55% and your query costs by up to 45% by leveraging a new storage format. Numerous additional features such as multi-document transactions are also included in v4.0. When upgrading, you must also migrate the data in your existing account to a new account created using version 4.0. Azure Data Factory or Studio 3T can assist you in migrating your data.
+Migrate your database account to a new database account to take advantage of Azure Cosmos DB for MongoDB v4.0. Upgrading to v4.0 can reduce your storage costs by up to 55% and your query costs by up to 45% by leveraging a new storage format. Numerous additional features such as multi-document transactions are also included in v4.0. When upgrading, you must also migrate the data in your existing account to a new account created using version 4.0. Azure Data Factory or Studio 3T can assist you in migrating your data.
-Learn more about [Cosmos DB account - CosmosDBMongoMigrationUpgrade (Migrate your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features)](/azure/cosmos-db/mongodb-feature-support-40).
+Learn more about [Azure Cosmos DB account - CosmosDBMongoMigrationUpgrade (Migrate your Azure Cosmos DB for MongoDB account to v4.0 to save on query/storage costs and utilize new features)](/azure/cosmos-db/mongodb-feature-support-40).
-### Your Cosmos DB account is unable to access its linked Azure Key Vault hosting your encryption key
+### Your Azure Cosmos DB account is unable to access its linked Azure Key Vault hosting your encryption key
-It appears that your key vault's configuration is preventing your Cosmos DB account from contacting the key vault to access your managed encryption keys. If you've recently performed a key rotation, make sure that the previous key or key version remains enabled and available until Cosmos DB has completed the rotation. The previous key or key version can be disabled after 24 hours, or after the Azure Key Vault audit logs don't show activity from Azure Cosmos DB on that key or key version anymore.
+It appears that your key vault's configuration is preventing your Azure Cosmos DB account from contacting the key vault to access your managed encryption keys. If you've recently performed a key rotation, make sure that the previous key or key version remains enabled and available until Azure Cosmos DB has completed the rotation. The previous key or key version can be disabled after 24 hours, or after the Azure Key Vault audit logs don't show activity from Azure Cosmos DB on that key or key version anymore.
-Learn more about [Cosmos DB account - CosmosDBKeyVaultWrap (Your Cosmos DB account is unable to access its linked Azure Key Vault hosting your encryption key)](../cosmos-db/how-to-setup-cmk.md).
+Learn more about [Azure Cosmos DB account - CosmosDBKeyVaultWrap (Your Azure Cosmos DB account is unable to access its linked Azure Key Vault hosting your encryption key)](../cosmos-db/how-to-setup-cmk.md).
### Avoid being rate limited from metadata operations
-We found a high number of metadata operations on your account. Your data in Cosmos DB, including metadata about your databases and collections is distributed across partitions. Metadata operations have a system-reserved request unit (RU) limit. Avoid being rate limited from metadata operations by using static Cosmos DB client instances in your code and caching the names of databases and collections.
+We found a high number of metadata operations on your account. Your data in Azure Cosmos DB, including metadata about your databases and collections, is distributed across partitions. Metadata operations have a system-reserved request unit (RU) limit. Avoid being rate limited from metadata operations by using static Azure Cosmos DB client instances in your code and caching the names of databases and collections.
-Learn more about [Cosmos DB account - CosmosDBHighMetadataOperations (Avoid being rate limited from metadata operations)](/azure/cosmos-db/performance-tips).
+Learn more about [Azure Cosmos DB account - CosmosDBHighMetadataOperations (Avoid being rate limited from metadata operations)](/azure/cosmos-db/performance-tips).
-### Use the new 3.6+ endpoint to connect to your upgraded Azure Cosmos DB's API for MongoDB account
+### Use the new 3.6+ endpoint to connect to your upgraded Azure Cosmos DB for MongoDB account
-We observed some of your applications are connecting to your upgraded Azure Cosmos DB's API for MongoDB account using the legacy 3.2 endpoint - [accountname].documents.azure.com. Use the new endpoint - [accountname].mongo.cosmos.azure.com (or its equivalent in sovereign, government, or restricted clouds).
+We observed some of your applications are connecting to your upgraded Azure Cosmos DB for MongoDB account using the legacy 3.2 endpoint `[accountname].documents.azure.com`. Use the new endpoint `[accountname].mongo.cosmos.azure.com` (or its equivalent in sovereign, government, or restricted clouds).
-Learn more about [Cosmos DB account - CosmosDBMongoNudge36AwayFrom32 (Use the new 3.6+ endpoint to connect to your upgraded Azure Cosmos DB's API for MongoDB account)](/azure/cosmos-db/mongodb-feature-support-40).
+Learn more about [Azure Cosmos DB account - CosmosDBMongoNudge36AwayFrom32 (Use the new 3.6+ endpoint to connect to your upgraded Azure Cosmos DB for MongoDB account)](/azure/cosmos-db/mongodb-feature-support-40).
### Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated There is a critical bug in version 2.6.13 and lower of the Azure Cosmos DB Async Java SDK v2 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. This happens transparently to you, as the service reaches this state after a large volume of transactions occurs in the lifetime of an Azure Cosmos DB container. Note: This is a critical hotfix for the Async Java SDK v2; however, it is still highly recommended that you migrate to the [Java SDK v4](../cosmos-db/sql/sql-api-sdk-java-v4.md).
-Learn more about [Cosmos DB account - CosmosDBMaxGlobalLSNReachedV2 (Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated)](../cosmos-db/sql/sql-api-sdk-async-java.md).
+Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV2 (Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated)](../cosmos-db/sql/sql-api-sdk-async-java.md).
### Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue There is a critical bug in version 4.15 and lower of the Azure Cosmos DB Java SDK v4 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. This happens transparently to you, as the service reaches this state after a large volume of transactions occurs in the lifetime of an Azure Cosmos DB container.
-Learn more about [Cosmos DB account - CosmosDBMaxGlobalLSNReachedV4 (Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue)](../cosmos-db/sql/sql-api-sdk-java-v4.md).
+Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV4 (Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue)](../cosmos-db/sql/sql-api-sdk-java-v4.md).
## Fluid Relay
Learn more about [ExpressRoute circuit - ExpressRouteGatewayE2EMonitoring (Imple
### Avoid hostname override to ensure site integrity
-Try to avoid overriding the hostname when configuring Application Gateway. Having a different domain on the frontend of Application Gateway than the one which is used to access the backend can potentially lead to cookies or redirect urls being broken. Note that this might not be the case in all situations and that certain categories of backends (like REST API's) in general are less sensitive to this. Make sure the backend is able to deal with this or update the Application Gateway configuration so the hostname does not need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the *.azurewebsites.net host name towards the backend.
+Try to avoid overriding the hostname when configuring Application Gateway. Having a different domain on the frontend of Application Gateway than the one which is used to access the backend can potentially lead to cookies or redirect URLs being broken. Note that this might not be the case in all situations and that certain categories of backends (like REST APIs) in general are less sensitive to this. Make sure the backend is able to deal with this or update the Application Gateway configuration so the hostname does not need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the `*.azurewebsites.net` host name towards the backend.
Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](https://aka.ms/appgw-advisor-usecustomdomain).
advisor Advisor Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-sovereign-clouds.md
Title: Sovereign cloud feature variations description: List of feature variations and usage limitations for Advisor in sovereign clouds. + Last updated 09/19/2022
The following Azure Advisor recommendation **features aren't currently available
- (Preview) Consider Blob storage reserved capacity to save on Blob v2 and Data Lake Storage Gen2 costs. - (Preview) Consider Blob storage reserved instance to save on Blob v2 and Data Lake Storage Gen2 costs. - (Preview) Consider Cache for Redis reserved capacity to save over your pay-as-you-go costs.-- (Preview) Consider Cosmos DB reserved capacity to save over your pay-as-you-go costs.
+- (Preview) Consider Azure Cosmos DB reserved capacity to save over your pay-as-you-go costs.
- (Preview) Consider Database for MariaDB reserved capacity to save over your pay-as-you-go costs. - (Preview) Consider Database for MySQL reserved capacity to save over your pay-as-you-go costs. - (Preview) Consider Database for PostgreSQL reserved capacity to save over your pay-as-you-go costs.
The following Azure Advisor recommendation **features aren't currently available
- Consider App Service stamp fee reserved instance to save over your on-demand costs. - Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs. - Consider Cache for Redis reserved instance to save over your pay-as-you-go costs.-- Consider Cosmos DB reserved instance to save over your pay-as-you-go costs.
+- Consider Azure Cosmos DB reserved instance to save over your pay-as-you-go costs.
- Consider Database for MariaDB reserved instance to save over your pay-as-you-go costs. - Consider Database for MySQL reserved instance to save over your pay-as-you-go costs. - Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs.
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
Title: Cluster configuration in Azure Kubernetes Services (AKS)
description: Learn how to configure a cluster in Azure Kubernetes Service (AKS) + Last updated 10/04/2022
az aks nodepool add --name ephemeral --cluster-name myAKSCluster --resource-grou
If you want to create node pools with network-attached OS disks, you can do so by specifying `--node-osdisk-type Managed`.
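For example, a hedged sketch of adding a node pool whose nodes use managed (network-attached) OS disks; the cluster and resource group names are placeholders:

```azurecli
# Add a node pool that uses managed (network-attached) OS disks.
az aks nodepool add \
    --name managedpool \
    --cluster-name myAKSCluster \
    --resource-group myResourceGroup \
    --node-osdisk-type Managed
```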
+## Mariner OS
+
+Mariner can be deployed on AKS through Azure CLI or ARM templates.
+
+### Prerequisites
+
+1. You need the latest version of Azure CLI. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+2. You need the `aks-preview` Azure CLI extension to be able to select the Mariner 2.0 operating system SKU. Run `az extension remove --name aks-preview` to clear any previous versions, then run `az extension add --name aks-preview`.
+3. If you don't already have kubectl installed, install it through Azure CLI using `az aks install-cli` or follow the [upstream instructions](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/).
+
+### Deploy an AKS Mariner cluster with Azure CLI
+
+Use the following example commands to create a Mariner cluster.
+
+```azurecli
+az group create --name MarinerTest --location eastus
+
+az aks create --name testMarinerCluster --resource-group MarinerTest --os-sku mariner
+
+az aks get-credentials --resource-group MarinerTest --name testMarinerCluster
+
+kubectl get pods --all-namespaces
+```
+
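You can also add a Mariner node pool to an existing cluster instead of creating a new one. A hedged sketch that reuses the example names above and assumes the `--os-sku` flag is available on `az aks nodepool add` with the preview extension installed:

```azurecli
# Add a Mariner node pool to the existing cluster.
az aks nodepool add \
    --resource-group MarinerTest \
    --cluster-name testMarinerCluster \
    --name marinerpool \
    --os-sku mariner
```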
+### Deploy an AKS Mariner cluster with an ARM template
+
+To add Mariner to an existing ARM template, you need to add `"osSKU": "mariner"` and `"mode": "System"` to `agentPoolProfiles` and set the apiVersion to 2021-03-01 or newer (`"apiVersion": "2021-03-01"`). The following deployment uses the ARM template "marineraksarm.yml".
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.1",
+ "parameters": {
+ "clusterName": {
+ "type": "string",
+ "defaultValue": "marinerakscluster",
+ "metadata": {
+ "description": "The name of the Managed Cluster resource."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "The location of the Managed Cluster resource."
+ }
+ },
+ "dnsPrefix": {
+ "type": "string",
+ "metadata": {
+ "description": "Optional DNS prefix to use with hosted Kubernetes API server FQDN."
+ }
+ },
+ "osDiskSizeGB": {
+ "type": "int",
+ "defaultValue": 0,
+ "minValue": 0,
+ "maxValue": 1023,
+ "metadata": {
+ "description": "Disk size (in GB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 will apply the default disk size for that agentVMSize."
+ }
+ },
+ "agentCount": {
+ "type": "int",
+ "defaultValue": 3,
+ "minValue": 1,
+ "maxValue": 50,
+ "metadata": {
+ "description": "The number of nodes for the cluster."
+ }
+ },
+ "agentVMSize": {
+ "type": "string",
+ "defaultValue": "Standard_DS2_v2",
+ "metadata": {
+ "description": "The size of the Virtual Machine."
+ }
+ },
+ "linuxAdminUsername": {
+ "type": "string",
+ "metadata": {
+ "description": "User name for the Linux Virtual Machines."
+ }
+ },
+ "sshRSAPublicKey": {
+ "type": "string",
+ "metadata": {
+ "description": "Configure all linux machines with the SSH RSA public key string. Your key should include three parts, for example 'ssh-rsa AAAAB...snip...UcyupgH azureuser@linuxvm'"
+ }
+ },
+ "osType": {
+ "type": "string",
+ "defaultValue": "Linux",
+ "allowedValues": [
+ "Linux"
+ ],
+ "metadata": {
+ "description": "The type of operating system."
+ }
+ },
+ "osSKU": {
+ "type": "string",
+ "defaultValue": "mariner",
+ "allowedValues": [
+        "mariner",
+        "Ubuntu"
+      ],
+ "metadata": {
+ "description": "The Linux SKU to use."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.ContainerService/managedClusters",
+ "apiVersion": "2021-03-01",
+ "name": "[parameters('clusterName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "dnsPrefix": "[parameters('dnsPrefix')]",
+ "agentPoolProfiles": [
+ {
+ "name": "agentpool",
+ "mode": "System",
+ "osDiskSizeGB": "[parameters('osDiskSizeGB')]",
+ "count": "[parameters('agentCount')]",
+ "vmSize": "[parameters('agentVMSize')]",
+ "osType": "[parameters('osType')]",
+ "osSKU": "[parameters('osSKU')]",
+ "storageProfile": "ManagedDisks"
+ }
+ ],
+ "linuxProfile": {
+ "adminUsername": "[parameters('linuxAdminUsername')]",
+ "ssh": {
+ "publicKeys": [
+ {
+ "keyData": "[parameters('sshRSAPublicKey')]"
+ }
+ ]
+ }
+ }
+ },
+ "identity": {
+ "type": "SystemAssigned"
+ }
+ }
+ ],
+ "outputs": {
+ "controlPlaneFQDN": {
+ "type": "string",
+ "value": "[reference(parameters('clusterName')).fqdn]"
+ }
+ }
+}
+```
+
+Create this file on your system and fill it with the contents of the ARM template above.
+
+```azurecli
+az group create --name MarinerTest --location eastus
+
+az deployment group create --resource-group MarinerTest --template-file marineraksarm.yml --parameters clusterName=testMarinerCluster dnsPrefix=marineraks1 linuxAdminUsername=azureuser sshRSAPublicKey="<contents of your id_rsa.pub>"
+
+az aks get-credentials --resource-group MarinerTest --name testMarinerCluster
+
+kubectl get pods --all-namespaces
+```
+ ## Custom resource group name When you deploy an Azure Kubernetes Service cluster in Azure, a second resource group gets created for the worker nodes. By default, AKS will name the node resource group `MC_resourcegroupname_clustername_location`, but you can also provide your own name.
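A minimal sketch of supplying a custom node resource group name at cluster creation time; all names are placeholders, and the node resource group name can only be set when the cluster is created:

```azurecli
# Create a cluster with a custom node resource group name.
az aks create \
    --name myAKSCluster \
    --resource-group myResourceGroup \
    --node-resource-group myNodeResourceGroup \
    --generate-ssh-keys
```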
This enables an OIDC Issuer URL of the provider which allows the API server to d
### Prerequisites
-* The Azure CLI version 2.42.0 or higher. Run `az --version` to find your version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* The Azure CLI version 2.40.0 or higher. Run `az --version` to find your version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
* AKS version 1.22 and higher. If your cluster is running version 1.21 and the OIDC Issuer preview is enabled, we recommend you upgrade the cluster to the minimum required version supported. ### Create an AKS cluster with OIDC Issuer
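A hedged sketch of creating a cluster with the OIDC Issuer enabled; the names are placeholders:

```azurecli
# Create a cluster with the OIDC Issuer feature enabled.
az aks create \
    --name myAKSCluster \
    --resource-group myResourceGroup \
    --enable-oidc-issuer \
    --generate-ssh-keys
```

An existing cluster can be updated in place with `az aks update --enable-oidc-issuer`.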
aks Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-overview.md
- Title: Dapr extension for Azure Kubernetes Service (AKS) overview
-description: Learn more about using Dapr on your Azure Kubernetes Service (AKS) cluster to develop applications.
--- Previously updated : 07/21/2022---
-# Dapr
-
-Distributed Application Runtime (Dapr) offers APIs that simplify microservice development and implementation. Running as a sidecar process in tandem with your applications, Dapr APIs abstract away common complexities developers regularly encounter when building distributed applications, such as service discovery, message broker integration, encryption, observability, and secret management. Whether your inter-application communication is direct service-to-service, or pub/sub messaging, Dapr helps you write simple, portable, resilient, and secured microservices.
-
-Dapr is incrementally adoptable – the API building blocks can be used as the need arises. Use one, several, or all to develop your application faster.
--
-## Capabilities and features
-
-Dapr provides the following set of capabilities to help with your microservice development on AKS:
-
-* Easy provisioning of Dapr on AKS through [cluster extensions][cluster-extensions].
-* Portability enabled through HTTP and gRPC APIs which abstract underlying technologies choices
-* Reliable, secure, and resilient service-to-service calls through HTTP and gRPC APIs
-* Publish and subscribe messaging made easy with support for CloudEvent filtering and "at-least-once" semantics for message delivery
-* Pluggable observability and monitoring through Open Telemetry API collector
-* Works independent of language, while also offering language specific SDKs
-* Integration with VS Code through the Dapr extension
-* [More APIs for solving distributed application challenges][dapr-blocks]
-
-## Frequently asked questions
-
-### How do Dapr and Service meshes compare?
-
-A: Where a service mesh is defined as a networking service mesh, Dapr is not a service mesh. While Dapr and service meshes do offer some overlapping capabilities, a service mesh is focused on networking concerns, whereas Dapr is focused on providing building blocks that make it easier for developers to build applications as microservices. Dapr is developer-centric, while service meshes are infrastructure-centric.
-
-Some common capabilities that Dapr shares with service meshes include:
-
-* Secure service-to-service communication with mTLS encryption
-* Service-to-service metric collection
-* Service-to-service distributed tracing
-* Resiliency through retries
-
-In addition, Dapr provides other application-level building blocks for state management, pub/sub messaging, actors, and more. However, Dapr does not provide capabilities for traffic behavior such as routing or traffic splitting. If your solution would benefit from the traffic splitting a service mesh provides, consider using [Open Service Mesh][osm-docs].
-
-For more information on Dapr and service meshes, and how they can be used together, visit the [Dapr documentation][dapr-docs].
-
-### How does the Dapr secrets API compare to the Secrets Store CSI driver?
-
-Both the Dapr secrets API and the managed Secrets Store CSI driver allow for the integration of secrets held in an external store, abstracting secret store technology from application code. The Secrets Store CSI driver mounts secrets held in Azure Key Vault as a CSI volume for consumption by an application. Dapr exposes secrets via a RESTful API that can be called by application code and can be configured with assorted secret stores. The following table lists the capabilities of each offering:
-
-| | Dapr secrets API | Secrets Store CSI driver |
-| | | |
-| **Supported secrets stores** | Local environment variables (for Development); Local file (for Development); Kubernetes Secrets; AWS Secrets Manager; Azure Key Vault secret store; Azure Key Vault with Managed Identities on Kubernetes; GCP Secret Manager; HashiCorp Vault | Azure Key Vault secret store|
-| **Accessing secrets in application code** | Call the Dapr secrets API | Access the mounted volume or sync mounted content as a Kubernetes secret and set an environment variable |
-| **Secret rotation** | New API calls obtain the updated secrets | Polls for secrets and updates the mount at a configurable interval |
-| **Logging and metrics** | The Dapr sidecar generates logs, which can be configured with collectors such as Azure Monitor, emits metrics via Prometheus, and exposes an HTTP endpoint for health checks | Emits driver and Azure Key Vault provider metrics via Prometheus |
-
-For more information on the secret management in Dapr, see the [secrets management building block overview][dapr-secrets-block].
-
-For more information on the Secrets Store CSI driver and Azure Key Vault provider, see the [Secrets Store CSI driver overview][csi-secrets-store].
-
-### How does the managed Dapr cluster extension compare to the open source Dapr offering?
-
-The managed Dapr cluster extension is the easiest method to provision Dapr on an AKS cluster. With the extension, you're able to offload management of the Dapr runtime version by opting into automatic upgrades. Additionally, the extension installs Dapr with smart defaults (for example, provisioning the Dapr control plane in high availability mode).
-
-When installing Dapr OSS via helm or the Dapr CLI, runtime versions and configuration options are the responsibility of developers and cluster maintainers.
-
-Lastly, the Dapr extension is an extension of AKS, therefore you can expect the same support policy as other AKS features.
-
-[Learn more about migrating from Dapr OSS to the Dapr extension for AKS][dapr-migration].
-
-### How can I switch to using the Dapr extension if I've already installed Dapr via a method, such as Helm?
-
-Recommended guidance is to completely uninstall Dapr from the AKS cluster and reinstall it via the cluster extension.
-
-If you install Dapr through the AKS extension, our recommendation is to continue using the extension for future management of Dapr instead of the Dapr CLI. Combining the two tools can cause conflicts and result in undesired behavior.
-
-## Next Steps
-
-After learning about Dapr and some of the challenges it solves, try [Deploying an application with the Dapr cluster extension][dapr-quickstart].
-
-<!-- Links Internal -->
-[csi-secrets-store]: ./csi-secrets-store-driver.md
-[osm-docs]: ./open-service-mesh-about.md
-[cluster-extensions]: ./cluster-extensions.md
-[dapr-quickstart]: ./quickstart-dapr.md
-[dapr-migration]: ./dapr-migration.md
-
-<!-- Links External -->
-[dapr-docs]: https://docs.dapr.io/
-[dapr-blocks]: https://docs.dapr.io/concepts/building-blocks-concept/
-[dapr-secrets-block]: https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-overview/
+
+ Title: Dapr extension for Azure Kubernetes Service (AKS) overview
+description: Learn more about using Dapr on your Azure Kubernetes Service (AKS) cluster to develop applications.
+++ Last updated : 07/21/2022+++
+# Dapr
+
+Distributed Application Runtime (Dapr) offers APIs that simplify microservice development and implementation. Running as a sidecar process in tandem with your applications, Dapr APIs abstract away common complexities developers regularly encounter when building distributed applications, such as service discovery, message broker integration, encryption, observability, and secret management. Whether your inter-application communication is direct service-to-service, or pub/sub messaging, Dapr helps you write simple, portable, resilient, and secured microservices.
+
+Dapr is incrementally adoptable – the API building blocks can be used as the need arises. Use one, several, or all to develop your application faster.
++
+## Capabilities and features
+
+Dapr provides the following set of capabilities to help with your microservice development on AKS:
+
+* Easy provisioning of Dapr on AKS through [cluster extensions][cluster-extensions].
+* Portability enabled through HTTP and gRPC APIs which abstract underlying technologies choices
+* Reliable, secure, and resilient service-to-service calls through HTTP and gRPC APIs
+* Publish and subscribe messaging made easy with support for CloudEvent filtering and "at-least-once" semantics for message delivery
+* Pluggable observability and monitoring through Open Telemetry API collector
+* Works independent of language, while also offering language specific SDKs
+* Integration with VS Code through the Dapr extension
+* [More APIs for solving distributed application challenges][dapr-blocks]
+
+## Frequently asked questions
+
+### How do Dapr and Service meshes compare?
+
+A: Where a service mesh is defined as a networking service mesh, Dapr is not a service mesh. While Dapr and service meshes do offer some overlapping capabilities, a service mesh is focused on networking concerns, whereas Dapr is focused on providing building blocks that make it easier for developers to build applications as microservices. Dapr is developer-centric, while service meshes are infrastructure-centric.
+
+Some common capabilities that Dapr shares with service meshes include:
+
+* Secure service-to-service communication with mTLS encryption
+* Service-to-service metric collection
+* Service-to-service distributed tracing
+* Resiliency through retries
+
+In addition, Dapr provides other application-level building blocks for state management, pub/sub messaging, actors, and more. However, Dapr does not provide capabilities for traffic behavior such as routing or traffic splitting. If your solution would benefit from the traffic splitting a service mesh provides, consider using [Open Service Mesh][osm-docs].
+
+For more information on Dapr and service meshes, and how they can be used together, visit the [Dapr documentation][dapr-docs].
+
+### How does the Dapr secrets API compare to the Secrets Store CSI driver?
+
+Both the Dapr secrets API and the managed Secrets Store CSI driver allow for the integration of secrets held in an external store, abstracting secret store technology from application code. The Secrets Store CSI driver mounts secrets held in Azure Key Vault as a CSI volume for consumption by an application. Dapr exposes secrets via a RESTful API that can be called by application code and can be configured with assorted secret stores. The following table lists the capabilities of each offering:
+
+| | Dapr secrets API | Secrets Store CSI driver |
+| | | |
+| **Supported secrets stores** | Local environment variables (for Development); Local file (for Development); Kubernetes Secrets; AWS Secrets Manager; Azure Key Vault secret store; Azure Key Vault with Managed Identities on Kubernetes; GCP Secret Manager; HashiCorp Vault | Azure Key Vault secret store|
+| **Accessing secrets in application code** | Call the Dapr secrets API | Access the mounted volume or sync mounted content as a Kubernetes secret and set an environment variable |
+| **Secret rotation** | New API calls obtain the updated secrets | Polls for secrets and updates the mount at a configurable interval |
+| **Logging and metrics** | The Dapr sidecar generates logs, which can be configured with collectors such as Azure Monitor, emits metrics via Prometheus, and exposes an HTTP endpoint for health checks | Emits driver and Azure Key Vault provider metrics via Prometheus |
+
+For more information on the secret management in Dapr, see the [secrets management building block overview][dapr-secrets-block].
+
+For more information on the Secrets Store CSI driver and Azure Key Vault provider, see the [Secrets Store CSI driver overview][csi-secrets-store].
+
+### How does the managed Dapr cluster extension compare to the open source Dapr offering?
+
+The managed Dapr cluster extension is the easiest method to provision Dapr on an AKS cluster. With the extension, you're able to offload management of the Dapr runtime version by opting into automatic upgrades. Additionally, the extension installs Dapr with smart defaults (for example, provisioning the Dapr control plane in high availability mode).
+
+When installing Dapr OSS via helm or the Dapr CLI, runtime versions and configuration options are the responsibility of developers and cluster maintainers.
+
+Lastly, the Dapr extension is an extension of AKS, therefore you can expect the same support policy as other AKS features.
+
+[Learn more about migrating from Dapr OSS to the Dapr extension for AKS][dapr-migration].
+
+### How can I switch to using the Dapr extension if I've already installed Dapr via a method, such as Helm?
+
+Recommended guidance is to completely uninstall Dapr from the AKS cluster and reinstall it via the cluster extension.
+
+If you install Dapr through the AKS extension, our recommendation is to continue using the extension for future management of Dapr instead of the Dapr CLI. Combining the two tools can cause conflicts and result in undesired behavior.
+
+## Next Steps
+
+After learning about Dapr and some of the challenges it solves, try [installing the Dapr extension][dapr-extension].
+
+<!-- Links Internal -->
+[csi-secrets-store]: ./csi-secrets-store-driver.md
+[osm-docs]: ./open-service-mesh-about.md
+[cluster-extensions]: ./cluster-extensions.md
+[dapr-quickstart]: ./quickstart-dapr.md
+[dapr-migration]: ./dapr-migration.md
+[dapr-extension]: ./dapr.md
+
+<!-- Links External -->
+[dapr-docs]: https://docs.dapr.io/
+[dapr-blocks]: https://docs.dapr.io/concepts/building-blocks-concept/
+[dapr-secrets-block]: https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-overview/
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
No, AKS is a managed service, and manipulation of the IaaS resources isn't suppo
## Does AKS store any customer data outside of the cluster's region?
-The feature to enable storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and Brazil South (Sao Paulo State) Region of Brazil Geo. For all other regions, customer data is stored in Geo.
+No, all data is stored in the cluster's region.
## Are AKS images required to run as root?
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
description: Learn the features and benefits of Azure Kubernetes Service to depl
Last updated 02/24/2021--+ # Azure Kubernetes Service
AKS supports the creation of Intel SGX-based, confidential computing node pools
For more information, see [Confidential computing nodes on AKS][conf-com-node].
+### Mariner nodes
+
+Mariner is an open-source Linux distribution created by Microsoft, and it's now available for preview as a container host on Azure Kubernetes Service (AKS). The Mariner container host provides reliability and consistency from cloud to edge across the AKS, AKS-HCI, and Arc products. You can deploy Mariner node pools in a new cluster, add Mariner node pools to your existing Ubuntu clusters, or migrate your Ubuntu nodes to Mariner nodes.
+
+For more information, see [Use the Mariner container host on Azure Kubernetes Service (AKS)](use-mariner.md)
+ ### Storage volume support To support application workloads, you can mount static or dynamic storage volumes for persistent data. Depending on the number of connected pods expected to share the storage volumes, you can use storage backed by either:
aks Keda About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-about.md
For general KEDA questions, we recommend [visiting the FAQ overview][keda-faq].
[keda-azure-cli]: keda-deploy-addon-az-cli.md [keda-cli]: keda-deploy-add-on-cli.md [keda-arm]: keda-deploy-add-on-arm.md
-[keda-troubleshoot]: keda-troubleshoot.md
+[keda-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-kubernetes-event-driven-autoscaling-add-on?context=/azure/aks/context/aks-context
<!-- LINKS - external --> [keda]: https://keda.sh/
aks Keda Deploy Add On Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-arm.md
description: Use an ARM template to deploy the Kubernetes Event-driven Autoscali
Previously updated : 05/24/2022 Last updated : 10/10/2022
az group delete --name MyResourceGroup
This article showed you how to install the KEDA add-on on an AKS cluster, and then verify that it's installed and running. With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps.
-You can troubleshoot troubleshoot KEDA add-on problems in [this article][keda-troubleshoot].
+You can troubleshoot KEDA add-on problems in [this article][keda-troubleshoot].
<!-- LINKS - internal --> [az-aks-create]: /cli/azure/aks#az-aks-create
You can troubleshoot troubleshoot KEDA add-on problems in [this article][keda-tr
[az aks get-credentials]: /cli/azure/aks#az-aks-get-credentials [az aks update]: /cli/azure/aks#az-aks-update [az-group-delete]: /cli/azure/group#az-group-delete
-[keda-troubleshoot]: keda-troubleshoot.md
+[keda-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-kubernetes-event-driven-autoscaling-add-on?context=/azure/aks/context/aks-context
[aks-firewall-requirements]: limit-egress-traffic.md#azure-global-required-network-rules <!-- LINKS - external -->
aks Keda Deploy Add On Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-cli.md
Previously updated : 06/08/2022 Last updated : 10/10/2022
az aks update \
## Next steps This article showed you how to install the KEDA add-on on an AKS cluster using Azure CLI. The steps to verify that KEDA add-on is installed and running are included. With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps.
-You can troubleshoot troubleshoot KEDA add-on problems in [this article][keda-troubleshoot].
+You can troubleshoot KEDA add-on problems in [this article][keda-troubleshoot].
[az-aks-create]: /cli/azure/aks#az-aks-create [az aks install-cli]: /cli/azure/aks#az-aks-install-cli [az aks get-credentials]: /cli/azure/aks#az-aks-get-credentials [az aks update]: /cli/azure/aks#az-aks-update [az-group-delete]: /cli/azure/group#az-group-delete
-[keda-troubleshoot]: keda-troubleshoot.md
+[keda-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-kubernetes-event-driven-autoscaling-add-on?context=/azure/aks/context/aks-context
[aks-firewall-requirements]: limit-egress-traffic.md#azure-global-required-network-rules [kubectl]: https://kubernetes.io/docs/user-guide/kubectl [keda]: https://keda.sh/ [keda-scalers]: https://keda.sh/docs/scalers/ [keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue-
aks Keda Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-integrations.md
However, these external scalers aren't supported as part of the add-on and rely
[aks-support-policy]: support-policies.md [keda-cli]: keda-deploy-add-on-cli.md [keda-arm]: keda-deploy-add-on-arm.md
-[keda-troubleshoot]: keda-troubleshoot.md
+[keda-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-kubernetes-event-driven-autoscaling-add-on?context=/azure/aks/context/aks-context
<!-- LINKS - external --> [keda-scalers]: https://keda.sh/docs/latest/scalers/
aks Keda Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-troubleshoot.md
- Title: Troubleshooting Kubernetes Event-driven Autoscaling (KEDA) add-on
-description: How to troubleshoot Kubernetes Event-driven Autoscaling add-on
-- Previously updated : 8/26/2021---
-# Kubernetes Event-driven Autoscaling (KEDA) AKS add-on Troubleshooting Guides
-
-When you deploy the KEDA AKS add-on, you could possibly experience problems associated with configuration of the application autoscaler.
-
-The following guide will assist you on how to troubleshoot errors and resolve common problems with the add-on, in addition to the official KEDA [FAQ][keda-faq] & [troubleshooting guide][keda-troubleshooting].
-
-## Verifying and Troubleshooting KEDA components
-
-### Check available KEDA version
-
-You can check the available KEDA version by using the `kubectl` command:
-
-```azurecli-interactive
-kubectl get crd/scaledobjects.keda.sh -o custom-columns='APP:.metadata.labels.app\.kubernetes\.io/version'
-```
-
-An overview will be provided with the installed KEDA version:
-
-```Output
-APP
-2.7.0
-```
-
-### Ensuring the cluster firewall is configured correctly
-
-It might happen that KEDA isn't scaling applications because it can't start up.
-
-When checking the operator logs, you might find errors similar to the following:
-
-```output
-1.6545953013458195e+09 ERROR Failed to get API Group-Resources {"error": "Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"}
-sigs.k8s.io/controller-runtime/pkg/cluster.New
-/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/cluster/cluster.go:160
-sigs.k8s.io/controller-runtime/pkg/manager.New
-/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/manager/manager.go:313
-main.main
-/workspace/main.go:87
-runtime.main
-/usr/local/go/src/runtime/proc.go:255
-1.6545953013459463e+09 ERROR setup unable to start manager {"error": "Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"}
-main.main
-/workspace/main.go:97
-runtime.main
-/usr/local/go/src/runtime/proc.go:255
-```
-
-While in the metric server you might notice that it's not able to start up:
-
-```output
-I0607 09:53:05.297924 1 main.go:147] keda_metrics_adapter "msg"="KEDA Version: 2.7.1"
-I0607 09:53:05.297979 1 main.go:148] keda_metrics_adapter "msg"="KEDA Commit: "
-I0607 09:53:05.297996 1 main.go:149] keda_metrics_adapter "msg"="Go Version: go1.17.9"
-I0607 09:53:05.298006 1 main.go:150] keda_metrics_adapter "msg"="Go OS/Arch: linux/amd64"
-E0607 09:53:15.344324 1 logr.go:279] keda_metrics_adapter "msg"="Failed to get API Group-Resources" "error"="Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"
-E0607 09:53:15.344360 1 main.go:104] keda_metrics_adapter "msg"="failed to setup manager" "error"="Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"
-E0607 09:53:15.344378 1 main.go:209] keda_metrics_adapter "msg"="making provider" "error"="Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"
-E0607 09:53:15.344399 1 main.go:168] keda_metrics_adapter "msg"="unable to run external metrics adapter" "error"="Get \"https://10.0.0.1:443/api?timeout=32s\": EOF"
-```
-
-This most likely means that the KEDA add-on isn't able to start up due to a misconfigured firewall.
-
-In order to make sure it runs correctly, make sure to configure the firewall to meet [the requirements][aks-firewall-requirements].
-
-### Enabling add-on on clusters with self-managed open-source KEDA installations
-
-While Kubernetes only allows one metric server to be installed, you can in theory install KEDA multiple times. However, it isn't recommended given only one installation will work.
-
-When the KEDA add-on is installed in an AKS cluster, the previous installation of open-source KEDA will be overridden and the add-on will take over.
-
-This means that the customization and configuration of the self-installed KEDA deployment will get lost and no longer be applied.
-
-While there's a possibility that the existing autoscaling will keep on working, it introduces a risk given it will be configured differently and won't support features such as managed identity.
-
-It's recommended to uninstall existing KEDA installations before enabling the KEDA add-on given the installation will succeed without any error.
-
-In order to determine which metrics adapter is being used by KEDA, use the `kubectl` command:
-
-```azurecli-interactive
-kubectl get APIService/v1beta1.external.metrics.k8s.io -o custom-columns='NAME:.spec.service.name,NAMESPACE:.spec.service.namespace'
-```
-
-An overview will be provided showing the service and namespace that Kubernetes will use to get metrics:
-
-```Output
-NAME NAMESPACE
-keda-operator-metrics-apiserver kube-system
-```
-
-> [!WARNING]
-> If the namespace is not `kube-system`, then the AKS add-on is being ignored and another metric server is being used.
-
-[aks-firewall-requirements]: limit-egress-traffic.md#azure-global-required-network-rules
-[keda-troubleshooting]: https://keda.sh/docs/latest/troubleshooting/
-[keda-faq]: https://keda.sh/docs/latest/faq/
aks Monitor Aks Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks-reference.md
Title: Monitoring AKS data reference description: Important reference material needed when you monitor AKS -+ Last updated 07/18/2022-+ # Monitoring AKS data reference
The following table lists the platform metrics collected for AKS. Follow each l
For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
-In addition to the above platform metrics, Azure Monitor container insights collects [these custom metrics](../azure-monitor/containers/container-insights-metric-alerts.md#metrics-collected) for nodes, pods, containers, and persistent volumes.
+In addition to the above platform metrics, Azure Monitor container insights collects [these custom metrics](../azure-monitor/containers/container-insights-custom-metrics.md) for nodes, pods, containers, and persistent volumes.
## Metric dimensions
For more information on the schema of Activity Log entries, see [Activity Log s
## See also - See [Monitoring Azure AKS](monitor-aks.md) for a description of monitoring Azure AKS.-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
Title: Monitor Azure Kubernetes Service (AKS) with Azure Monitor description: Describes how to use Azure Monitor to monitor the health and performance of AKS clusters and their workloads. + Last updated 07/29/2021- # Monitoring Azure Kubernetes Service (AKS) with Azure Monitor
This scenario is intended for customers using Azure Monitor to monitor AKS. It d
## Container insights AKS generates [platform metrics and resource logs](monitor-aks-reference.md), like any other Azure resource, that you can use to monitor its basic health and performance. Enable [Container insights](../azure-monitor/containers/container-insights-overview.md) to expand on this monitoring. Container insights is a feature in Azure Monitor that monitors the health and performance of managed Kubernetes clusters hosted on AKS in addition to other cluster configurations. Container insights provides interactive views and workbooks that analyze collected data for a variety of monitoring scenarios.
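One way to enable Container insights on an existing AKS cluster is the monitoring add-on; the following is a minimal sketch (the cluster, resource group, and workspace resource ID are placeholders):

```azurecli
# Enable the Azure Monitor (Container insights) add-on on an existing cluster
az aks enable-addons \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --addons monitoring \
    --workspace-resource-id <log-analytics-workspace-resource-id>
```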
-[Prometheus](https://prometheus.io/) and [Grafana](https://www.prometheus.io/docs/visualization/grafan) has native integration with AKS, collecting critical metrics and logs, alerting on identified issues, and providing visualization with workbooks. It also collects certain Prometheus metrics, and many native Azure Monitor insights are built-up on top of Prometheus metrics. Container insights complements and completes E2E monitoring of AKS including log collection which Prometheus as stand-alone tool doesnΓÇÖt provide. Many customers use Prometheus integration and Azure Monitor together for E2E monitoring.
+[Prometheus](https://aka.ms/azureprometheus-promio) and [Grafana](https://aka.ms/azureprometheus-promio-grafana) are widely popular, CNCF-backed open-source tools for Kubernetes monitoring. AKS exposes many metrics in Prometheus format, which makes Prometheus a popular choice for monitoring. [Container insights](../azure-monitor/containers/container-insights-overview.md) has native integration with AKS, collecting critical metrics and logs, alerting on identified issues, and providing visualization with workbooks. It also collects certain Prometheus metrics, and many native Azure Monitor insights are built on top of Prometheus metrics. Container insights complements and completes end-to-end (E2E) monitoring of AKS, including log collection, which Prometheus as a standalone tool doesn't provide. Many customers use the Prometheus integration and Azure Monitor together for E2E monitoring.
Learn more about using Container insights at [Container insights overview](../azure-monitor/containers/container-insights-overview.md). [Monitor layers of AKS with Container insights](#monitor-layers-of-aks-with-container-insights) below introduces various features of Container insights and the monitoring scenarios that they support.
When you enable Container insights for your AKS cluster, it deploys a containeri
### Configure collection from Prometheus
-Container insights allows you to collect certain Prometheus metrics in your Log Analytics workspace without requiring a Prometheus server. You can analyze this data using Azure Monitor features along with other data collected by Container insights. See [Configure scraping of Prometheus metrics with Container insights](../azure-monitor/containers/container-insights-prometheus-integration.md) for details on this configuration.
+Container insights allows you to send Prometheus metrics to [Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md) or to your Log Analytics workspace without requiring a local Prometheus server. You can analyze this data using Azure Monitor features along with other data collected by Container insights. See [Collect Prometheus metrics with Container insights](../azure-monitor/containers/container-insights-prometheus.md) for details on this configuration.
### Collect resource logs
The logs for AKS control plane components are implemented in Azure as [resource
You need to create a diagnostic setting to collect resource logs. Create multiple diagnostic settings to send different sets of logs to different locations. See [Create diagnostic settings to send platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md) to create diagnostic settings for your AKS cluster.
-There is a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. Send logs to an Azure storage account to reduce costs if you need to retain the information but don't require it to be readily available for analysis. See [Resource logs](monitor-aks-reference.md#resource-logs) for a description of the categories that are available for AKS and See [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md) for details for details on the cost of ingesting and retaining log data. Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs.
+There is a cost for sending resource logs to a workspace, so you should only collect the log categories that you intend to use. Send logs to an Azure storage account to reduce costs if you need to retain the information but don't require it to be readily available for analysis. See [Resource logs](monitor-aks-reference.md#resource-logs) for a description of the categories that are available for AKS, and see [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md) for details on the cost of ingesting and retaining log data. Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs.
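As a minimal sketch (resource IDs and category choices are placeholders; adjust to the categories you actually need), a diagnostic setting that sends two categories to a Log Analytics workspace might look like this:

```azurecli
# Send selected AKS resource log categories to a Log Analytics workspace
az monitor diagnostic-settings create \
    --name aks-diagnostics \
    --resource $(az aks show --resource-group myResourceGroup --name myAKSCluster --query id -o tsv) \
    --workspace <log-analytics-workspace-resource-id> \
    --logs '[{"category": "kube-apiserver", "enabled": true}, {"category": "kube-audit-admin", "enabled": true}]'
```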
If you're unsure about which resource logs to initially enable, use the recommendations in the following table which are based on the most common customer requirements. Enable the other categories if you later find that you require this information.
Access Azure Monitor features for all AKS clusters in your subscription from the
| Alerts | Views alerts for the current cluster. | | Metrics | Open metrics explorer with the scope set to the current cluster. | | Diagnostic settings | Create diagnostic settings for the cluster to collect resource logs. |
-| Advisor | recommendations Recommendations for the current cluster from Azure Advisor. |
+| Advisor | Recommendations for the current cluster from Azure Advisor. |
| Logs | Open Log Analytics with the scope set to the current cluster to analyze log data and access prebuilt queries. | | Workbooks | Open workbook gallery for Kubernetes service. |
Azure Monitor and container insights don't yet provide full monitoring for the A
:::image type="content" source="media/monitor-aks/grafana-api-server.png" alt-text="Grafana API server" lightbox="media/monitor-aks/grafana-api-server.png":::
-Use the **Kubelet** workbook to view the health and performance of each kubelet. See [Resource Monitoring workbooks](../azure-monitor/containers/container-insights-reports.md#resource-monitoring-workbooks) for details on this workbooks. For troubleshooting scenarios, you can access kubelet logs using the process described at [Get kubelet logs from Azure Kubernetes Service (AKS) cluster nodes](kubelet-logs.md).
+Use the **Kubelet** workbook to view the health and performance of each kubelet. See [Resource Monitoring workbooks](../azure-monitor/containers/container-insights-reports.md#resource-monitoring-workbooks) for details on this workbook. For troubleshooting scenarios, you can access kubelet logs using the process described at [Get kubelet logs from Azure Kubernetes Service (AKS) cluster nodes](kubelet-logs.md).
:::image type="content" source="media/monitor-aks/container-insights-kubelet-workbook.png" alt-text="Container insights kubelet workbook" lightbox="media/monitor-aks/container-insights-kubelet-workbook.png":::
aks Openfaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/openfaas.md
Last updated 03/05/2018 -+ # Using OpenFaaS on AKS
Output:
## Create second function
-Now create a second function. This example will be deployed using the OpenFaaS CLI and includes a custom container image and retrieving data from a Cosmos DB. Several items need to be configured before creating the function.
+Now create a second function. This example is deployed using the OpenFaaS CLI, uses a custom container image, and retrieves data from an Azure Cosmos DB instance. Several items need to be configured before creating the function.
-First, create a new resource group for the Cosmos DB.
+First, create a new resource group for the Azure Cosmos DB instance.
```azurecli-interactive az group create --name serverless-backing --location eastus ```
-Deploy a CosmosDB instance of kind `MongoDB`. The instance needs a unique name, update `openfaas-cosmos` to something unique to your environment.
+Deploy an Azure Cosmos DB instance of kind `MongoDB`. The instance needs a unique name; update `openfaas-cosmos` to something unique to your environment.
```azurecli-interactive az cosmosdb create --resource-group serverless-backing --name openfaas-cosmos --kind MongoDB ```
-Get the Cosmos database connection string and store it in a variable.
+Get the Azure Cosmos DB database connection string and store it in a variable.
-Update the value for the `--resource-group` argument to the name of your resource group, and the `--name` argument to the name of your Cosmos DB.
+Update the value for the `--resource-group` argument to the name of your resource group, and the `--name` argument to the name of your Azure Cosmos DB instance.
```azurecli-interactive COSMOS=$(az cosmosdb list-connection-strings \
COSMOS=$(az cosmosdb list-connection-strings \
--output tsv) ```
-Now populate the Cosmos DB with test data. Create a file named `plans.json` and copy in the following json.
+Now populate the Azure Cosmos DB instance with test data. Create a file named `plans.json` and copy in the following json.
```json {
Now populate the Cosmos DB with test data. Create a file named `plans.json` and
} ```
-Use the *mongoimport* tool to load the CosmosDB instance with data.
+Use the *mongoimport* tool to load the Azure Cosmos DB instance with data.
If needed, install the MongoDB tools. The following example installs these tools using brew, see the [MongoDB documentation][install-mongo] for other options.
aks Operator Best Practices Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-identity.md
description: Learn the cluster operator best practices for how to manage authentication and authorization for clusters in Azure Kubernetes Service (AKS) + Last updated 09/29/2022
There are two levels of access needed to fully operate an AKS cluster:
> [!NOTE] > Pod identities are intended for use with Linux pods and container images only. Pod-managed identities support for Windows containers is coming soon.
-To access other Azure resources, like Cosmos DB, Key Vault, or Blob Storage, the pod needs authentication credentials. You could define authentication credentials with the container image or inject them as a Kubernetes secret. Either way, you would need to manually create and assign them. Usually, these credentials are reused across pods and aren't regularly rotated.
+To access other Azure resources, like Azure Cosmos DB, Key Vault, or Blob storage, the pod needs authentication credentials. You could define authentication credentials with the container image or inject them as a Kubernetes secret. Either way, you would need to manually create and assign them. Usually, these credentials are reused across pods and aren't regularly rotated.
With pod-managed identities (preview) for Azure resources, you automatically request access to services through Azure AD. Pod-managed identities is currently in preview for AKS. Refer to the [Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview)](./use-azure-ad-pod-identity.md) documentation to get started.
For more information about cluster operations in AKS, see the following best pra
[azure-ad-rbac]: azure-ad-rbac.md [aad-pod-identity]: ./use-azure-ad-pod-identity.md [use-azure-ad-pod-identity]: ./use-azure-ad-pod-identity.md#create-an-identity
-[workload-identity-overview]: workload-identity-overview.md
+[workload-identity-overview]: workload-identity-overview.md
aks Operator Best Practices Run At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-run-at-scale.md
+
+ Title: Best practices for running AKS at scale
+
+description: Learn the cluster operator best practices and special considerations for running large clusters at 500 node scale and beyond
++ Last updated : 10/04/2022
+
++
+# Best practices for creating and running Azure Kubernetes Service (AKS) clusters at scale
+
+AKS clusters that satisfy any of the following criteria should use the [Uptime SLA][Uptime SLA] feature for higher reliability and scalability of the Kubernetes control plane:
+* Clusters running more than 10 nodes on average
+* Clusters that need to scale beyond 1,000 nodes
+* Clusters running production workloads or availability-sensitive, mission-critical workloads
+
+To scale AKS clusters beyond 1,000 nodes, you need to request a node limit quota increase, up to a maximum of 5,000 nodes per cluster, by raising a support ticket in the [Azure portal][Azure Portal].
+
+To increase the node limit beyond 1,000, you must have the following prerequisites:
+- An existing AKS cluster that needs the node limit increase. This cluster shouldn't be deleted, because that removes the limit increase.
+- Uptime SLA enabled on your cluster (see the example after this list).
+
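+A minimal sketch for enabling the Uptime SLA on an existing cluster, assuming the `--uptime-sla` parameter of `az aks update` (cluster and resource group names are placeholders):
+
+```azurecli
+# Enable the Uptime SLA on an existing cluster
+az aks update \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --uptime-sla
+```
+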
+> [!NOTE]
+> It may take up to a week to enable your clusters with the larger node limit.
+
+> [!IMPORTANT]
+> Raising the node limit does not increase other AKS service quota limits, such as the number of pods per node. For more information, see [Limits for resources, SKUs, regions][quotas-skus-regions].
+
+## Networking considerations and best practices
+
+* Use a managed NAT gateway for cluster egress with at least two public IP addresses on the NAT gateway (see the example after this list). For more information, see [Managed NAT Gateway - Azure Kubernetes Service][Managed NAT Gateway - Azure Kubernetes Service].
+* Use Azure CNI with dynamic IP allocation for optimal IP utilization, and scale up to 50,000 application pods per cluster with one routable IP per pod. For more information, see [Configure Azure CNI networking in Azure Kubernetes Service (AKS)][Configure Azure CNI networking in Azure Kubernetes Service (AKS)].
+* When using internal Kubernetes services behind an internal load balancer, we recommend creating the internal load balancer or internal service below 750 node scale for the best scaling performance and load balancer elasticity.
+
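+A minimal sketch of creating a cluster that uses a managed NAT gateway for egress with two public IP addresses, assuming the `--outbound-type managedNATGateway` and `--nat-gateway-managed-outbound-ip-count` parameters (resource names are placeholders):
+
+```azurecli
+# Create a cluster that uses a managed NAT gateway with two outbound public IPs
+az aks create \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --network-plugin azure \
+    --outbound-type managedNATGateway \
+    --nat-gateway-managed-outbound-ip-count 2
+```
+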
+> [!NOTE]
+> You can't use NPM with clusters of more than 500 nodes.
++
+## Node pool scaling considerations and best practices
+
+* For system node pools, use the *Standard_D16ds_v5* SKU or equivalent core/memory VM SKUs to provide sufficient compute resources for *kube-system* pods.
+* Create at least five user node pools to scale up to 5,000 nodes, since there's a limit of 1,000 nodes per node pool (see the example after this list).
+* Use cluster autoscaler wherever possible when running at-scale AKS clusters to ensure dynamic scaling of node pools based on the demand for compute resources.
+* When scaling beyond 1,000 nodes without the cluster autoscaler, we recommend scaling in batches of at most 500 to 700 nodes at a time, with a 2- to 5-minute wait between consecutive scale-up operations to prevent Azure API throttling.
+
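+A minimal sketch of adding one of several user node pools with the cluster autoscaler enabled (the pool, cluster, and resource group names, and the count limits, are placeholders):
+
+```azurecli
+# Add a user node pool with the cluster autoscaler enabled
+az aks nodepool add \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name userpool1 \
+    --mode User \
+    --enable-cluster-autoscaler \
+    --min-count 3 \
+    --max-count 1000
+```
+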
+## Cluster upgrade best practices
+
+* AKS clusters have a hard limit of 5,000 nodes. This limit prevents clusters that are already running at the limit from upgrading, because there's no spare capacity to do a rolling update with the max surge property. We recommend scaling the cluster down to below 3,000 nodes before doing cluster upgrades to provide extra capacity for node churn and to minimize control plane load.
+* By default, AKS configures upgrades to surge with one extra node through the max surge settings. This default value allows AKS to minimize workload disruption by creating an extra node before the cordon/drain of existing applications to replace an older versioned node. When you are upgrading clusters with a large number of nodes, using the default max surge settings can force an upgrade to take several hours to complete as the upgrade churns through a large number of nodes. You can customize the max surge settings per node pool to enable a trade-off between upgrade speed and upgrade disruption. By increasing the max surge settings, the upgrade process completes faster but may cause disruptions during the upgrade process.
+* We don't recommend upgrading a cluster with more than 500 nodes using the default max surge configuration of one node. Instead, increase the max surge setting to between 10 and 20 percent, up to a maximum max surge of 500 nodes, based on your workload disruption tolerance (see the example after this list).
+* For more information, see [Upgrade an Azure Kubernetes Service (AKS) cluster][cluster upgrades].
+
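+A minimal sketch of raising the max surge setting on an existing node pool before a large upgrade, assuming the `--max-surge` parameter of `az aks nodepool update` (names and the percentage are placeholders):
+
+```azurecli
+# Increase max surge on an existing node pool
+az aks nodepool update \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name userpool1 \
+    --max-surge 10%
+```
+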
+<!-- Links - External -->
+[Managed NAT Gateway - Azure Kubernetes Service]: nat-gateway.md
+[Configure Azure CNI networking in Azure Kubernetes Service (AKS)]: configure-azure-cni.md#dynamic-allocation-of-ips-and-enhanced-subnet-support
+[max surge]: upgrade-cluster.md?tabs=azure-cli#customize-node-surge-upgrade
+[Azure Portal]: https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22subId%22%3A+%22%22%2C%0D%0A%09%22pesId%22%3A+%225a3a423f-8667-9095-1770-0a554a934512%22%2C%0D%0A%09%22supportTopicId%22%3A+%2280ea0df7-5108-8e37-2b0e-9737517f0b96%22%2C%0D%0A%09%22contextInfo%22%3A+%22AksLabelDeprecationMarch22%22%2C%0D%0A%09%22caller%22%3A+%22Microsoft_Azure_ContainerService+%2B+AksLabelDeprecationMarch22%22%2C%0D%0A%09%22severity%22%3A+%223%22%0D%0A%7D
+[uptime SLA]: uptime-sla.md
+
+<!-- LINKS - Internal -->
+[quotas-skus-regions]: quotas-skus-regions.md
+[cluster upgrades]: upgrade-cluster.md
aks Quickstart Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-dapr.md
Last updated 05/03/2022-+ # Quickstart: Deploy an application using the Dapr cluster extension for Azure Kubernetes Service (AKS) or Arc-enabled Kubernetes
cd quickstarts/hello-kubernetes
## Create and configure a state store
-Dapr can use a number of different state stores (Redis, Cosmos DB, DynamoDB, Cassandra, etc.) to persist and retrieve state. For this example, we will use Redis.
+Dapr can use a number of different state stores (Redis, Azure Cosmos DB, DynamoDB, Cassandra, etc.) to persist and retrieve state. For this example, we will use Redis.
### Create a Redis store
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
aks Use Cvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-cvm.md
Title: Use Confidential Virtual Machines (CVM) in Azure Kubernetes Service (AKS) (Preview)
+ Title: Use Confidential Virtual Machines (CVM) in Azure Kubernetes Service (AKS)
description: Learn how to create Confidential Virtual Machines (CVM) node pools with Azure Kubernetes Service (AKS) Previously updated : 08/01/2022-+ Last updated : 10/04/2022
-# Use Confidential Virtual Machines (CVM) in Azure Kubernetes Service (AKS) cluster (Preview)
+# Use Confidential Virtual Machines (CVM) in Azure Kubernetes Service (AKS) cluster
-You can use the generally available [confidential VM sizes (DCav5/ECav5)][cvm-announce] to add a node pool to your AKS cluster with CVM. Confidential VMs with AMD SEV-SNP support bring a new set of security features to protect date-in-use with full VM memory encryption. These features enable node pools with CVM to target the migration of highly sensitive container workloads to AKS without any code refactoring while benefiting from the features of AKS. The nodes in a node pool created with CVM use a customized Ubuntu 20.04 image specially configured for CVM. For more details on CVM, see [Confidential VM node pools support on AKS with AMD SEV-SNP confidential VMs][cvm].
+You can use the generally available [confidential VM sizes (DCav5/ECav5)][cvm-announce] to add a node pool to your AKS cluster with CVM. Confidential VMs with AMD SEV-SNP support bring a new set of security features to protect data-in-use with full VM memory encryption. These features enable node pools with CVM to target the migration of highly sensitive container workloads to AKS without any code refactoring while benefiting from the features of AKS. The nodes in a node pool created with CVM use a customized Ubuntu 20.04 image specially configured for CVM. For more details on CVM, see [Confidential VM node pools support on AKS with AMD SEV-SNP confidential VMs][cvm].
Adding a node pool with CVM to your AKS cluster is currently in preview. ## Before you begin
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
Title: Use Key Management Service (KMS) etcd encryption in Azure Kubernetes Serv
description: Learn how to use the Key Management Service (KMS) etcd encryption with Azure Kubernetes Service (AKS) Previously updated : 08/19/2022 Last updated : 10/03/2022 # Add Key Management Service (KMS) etcd encryption to an Azure Kubernetes Service (AKS) cluster
For more information on using the KMS plugin, see [Encrypting Secret Data at Res
* Azure CLI version 2.39.0 or later. Run `az --version` to find your version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. > [!WARNING]
-> KMS only supports Konnectivity. You can use `kubectl get po -n kube-system` to check if a 'konnectivity-agent-xxx' pod is running.
+> KMS only supports Konnectivity and API Server VNet Integration.
+> You can use `kubectl get po -n kube-system` to verify that a konnectivity-agent-xxx pod is running. If it is, the AKS cluster is using Konnectivity. When using VNet integration, run `az aks show -g <resource-group> -n <cluster-name>` and verify that the setting `enableVnetIntegration` is set to **true**.
## Limitations
aks Use Mariner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-mariner.md
++
+ Title: Use the Mariner container host on Azure Kubernetes Service (AKS)
+description: Learn how to use the Mariner container host on Azure Kubernetes Service (AKS)
+++ Last updated : 09/22/2022++
+# Use the Mariner container host on Azure Kubernetes Service (AKS)
+
+Mariner is an open-source Linux distribution created by Microsoft, and it's now available for preview as a container host on Azure Kubernetes Service (AKS). The Mariner container host provides reliability and consistency from cloud to edge across the AKS, AKS-HCI, and Arc products. You can deploy Mariner node pools in a new cluster, add Mariner node pools to your existing Ubuntu clusters, or migrate your Ubuntu nodes to Mariner nodes. To learn more about Mariner, see the [Mariner documentation][mariner-doc].
+
+## Why use Mariner
+
+The Mariner container host on AKS uses a native AKS image that provides one place to do all Linux development. Every package is built from source and validated, ensuring your services run on proven components. Mariner is lightweight, including only the set of packages needed to run container workloads. It provides a reduced attack surface and eliminates patching and maintenance of unnecessary packages. Mariner's base layer has a Microsoft-hardened kernel tuned for Azure. Learn more about the [key capabilities of Mariner][mariner-capabilities].
+
+## How to use Mariner on AKS
+
+To get started using Mariner on AKS, see the following articles (a minimal CLI sketch for creating a Mariner-based cluster follows the list):
+
+* [Creating a cluster with Mariner][mariner-cluster-config]
+* [Add a Mariner node pool to your existing cluster][mariner-node-pool]
+* [Ubuntu to Mariner migration][ubuntu-to-mariner]
+
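+As a minimal sketch, creating a new cluster with a Mariner node pool might look like the following, assuming the `--os-sku mariner` value shown for node pools also applies to `az aks create` (resource names are placeholders):
+
+```azurecli
+# Create a new cluster whose nodes use the Mariner container host
+az aks create \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --os-sku mariner
+```
+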
+## How to upgrade Mariner nodes
+
+We recommend keeping your clusters up to date and secured by enabling automatic upgrades for your cluster. To enable automatic upgrades, see:
+
+* [Automatically upgrade an Azure Kubernetes Service (AKS) cluster][auto-upgrade-aks]
+* [Deploy kured in an AKS cluster][kured]
+
+To manually upgrade the node image on a cluster, you can run `az aks nodepool upgrade`:
+
+```azurecli
+az aks nodepool upgrade \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name myNodePool \
+ --node-image-only
+```
+
+## Regional availability
+
+Mariner is available for use in the same regions as AKS.
+
+## Limitations
+
+Mariner currently has the following limitations:
+
+* Mariner does not yet have image SKUs for GPU, ARM64, SGX, or FIPS.
+* Mariner does not yet have FedRAMP, FIPS, or CIS certification.
+* Mariner cannot yet be deployed through Azure portal or Terraform.
+* Qualys and Trivy are the only vulnerability scanning tools that support Mariner today.
+* The Mariner container host is a Gen 2 image. Mariner does not plan to offer a Gen 1 SKU.
+* Node configurations are not yet supported.
+* Mariner is not yet supported in GitHub Actions.
+* Mariner does not support AppArmor. Support for SELinux can be manually configured.
+* Some add-ons, extensions, and open-source integrations may not be supported yet on Mariner. Azure Monitor, Grafana, Helm, Key Vault, and Container Insights are confirmed to be supported.
+* AKS diagnostics does not yet support Mariner.
+
+<!-- LINKS - Internal -->
+[mariner-doc]: https://microsoft.github.io/CBL-Mariner/docs/#cbl-mariner-linux
+[mariner-capabilities]: https://microsoft.github.io/CBL-Mariner/docs/#key-capabilities-of-cbl-mariner-linux
+[mariner-cluster-config]: cluster-configuration.md
+[mariner-node-pool]: use-multiple-node-pools.md
+[ubuntu-to-mariner]: use-multiple-node-pools.md
+[auto-upgrade-aks]: auto-upgrade-cluster.md
+[kured]: node-updates-kured.md
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
Title: Use multiple node pools in Azure Kubernetes Service (AKS)
description: Learn how to create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS) -+ Last updated 05/16/2022
az aks nodepool add \
--node-vm-size Standard_Dpds_v5 ```
+### Add a Mariner node pool
+
+Mariner is an open-source Linux distribution available as an AKS container host. It provides high reliability, security, and consistency. Mariner only includes the minimal set of packages needed for running container workloads, which improves boot times and overall performance.
+
+You can add a Mariner node pool into your existing cluster using the `az aks nodepool add` command and specifying `--os-sku mariner`.
+
+```azurecli
+az aks nodepool add \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name marinerpool \
+    --os-sku mariner
+```
+
+### Migrate Ubuntu nodes to Mariner
+
+Use the following instructions to migrate your Ubuntu nodes to Mariner nodes.
+
+1. Add a Mariner node pool into your existing cluster using the `az aks nodepool add` command and specifying `--os-sku mariner`.
+
+> [!NOTE]
+> When adding a new Mariner node pool, you need to add at least one node pool with `--mode System`. Otherwise, AKS won't allow you to delete your existing Ubuntu node pool.
+2. [Cordon the existing Ubuntu nodes][cordon-and-drain].
+3. [Drain the existing Ubuntu nodes][drain-nodes].
+4. Remove the existing Ubuntu nodes using the `az aks nodepool delete` command.
+
+```azurecli
+az aks nodepool delete \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name myNodePool
+```
+ ### Add a node pool with a unique subnet A workload may require splitting a cluster's nodes into separate pools for logical isolation. This isolation can be supported with separate subnets dedicated to each node pool in the cluster. This can address requirements such as having non-contiguous virtual network address space to split across node pools.
az group delete --name myResourceGroup2 --yes --no-wait
[use-labels]: use-labels.md [cordon-and-drain]: resize-node-pool.md#cordon-the-existing-nodes [internal-lb-different-subnet]: internal-lb.md#specify-a-different-subnet
+[drain-nodes]: resize-node-pool.md#drain-the-existing-nodes
aks Virtual Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes.md
Virtual Nodes functionality is heavily dependent on ACI's feature set. In additi
* Virtual nodes support scheduling Linux pods. You can manually install the open source [Virtual Kubelet ACI](https://github.com/virtual-kubelet/azure-aci) provider to schedule Windows Server containers to ACI. * Virtual nodes require AKS clusters with Azure CNI networking. * Using api server authorized ip ranges for AKS.
-* Volume mounting Azure Files share support [General-purpose V1](../storage/common/storage-account-overview.md#types-of-storage-accounts). Follow the instructions for mounting [a volume with Azure Files share](azure-files-volume.md)
+* Volume mounting Azure Files share support [General-purpose V2](../storage/common/storage-account-overview.md#types-of-storage-accounts) and [General-purpose V1](../storage/common/storage-account-overview.md#types-of-storage-accounts). Follow the instructions for mounting [a volume with Azure Files share](azure-files-volume.md).
* Using IPv6 is not supported. * Virtual nodes don't support the [Container hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) feature.
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
In the following example, the policy fragment named *myFragment* is added in the
[...] ```
-## Elements
+### Elements
| Element | Description | Required | | -- | - | -- |
api-management Api Management Howto Api Inspector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-api-inspector.md
To trace request processing, you must enable the **Allow tracing** setting for t
1. On the **Message** tab, the **ocp-apim-trace-location** header shows the location of the trace data stored in Azure blob storage. If needed, go to this location to retrieve the trace. Trace data can be accessed for up to 24 hours. :::image type="content" source="media/api-management-howto-api-inspector/response-message-1.png" alt-text="Trace location in Azure Storage":::+
+## Enable tracing using Ocp-Apim-Trace header
+
+When making requests to API Management using `curl`, a REST client such as Postman, or a client app, enable tracing by adding the following request headers:
+
+* **Ocp-Apim-Trace** - set value to `true`
+* **Ocp-Apim-Subscription-Key** - set value to the key for a tracing-enabled subscription that allows access to the API
+
+The response includes the **Ocp-Apim-Trace-Location** header, with a URL to the location of the trace data in Azure blob storage.
+
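+For example, a minimal `curl` sketch (the gateway hostname, API path, and subscription key are placeholders for your own values):
+
+```bash
+# Call an API operation with tracing enabled
+curl -v https://<apim-instance>.azure-api.net/<api-path>/<operation> \
+  -H "Ocp-Apim-Trace: true" \
+  -H "Ocp-Apim-Subscription-Key: <subscription-key>"
+```
+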
+For information about customizing trace information, see the [trace](api-management-advanced-policies.md#Trace) policy.
+ ## Next steps In this tutorial, you learned how to:
api-management Developer Portal Use Community Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-use-community-widgets.md
-
Title: Use community widgets in developer portal-
-description: Learn about community widgets for the API Management developer portal and how to inject and use them in your code.
-- Previously updated : 08/18/2022----
-# Use community widgets in the developer portal
-
-All developers place their community-contributed widgets in the `/community/widgets/` folder of the API Management developer portal [GitHub repository](https://github.com/Azure/api-management-developer-portal). Each has been accepted by the developer portal team. You can use the widgets by injecting them into your managed developer portal or a [self-hosted version](developer-portal-self-host.md) of the portal.
-
-> [!NOTE]
-> The developer portal team thoroughly inspects contributed widgets and their dependencies. However, the team canΓÇÖt guarantee itΓÇÖs safe to load the widgets. Use your own judgment when deciding to use a widget contributed by the community. Refer to our [widget contribution guidelines](developer-portal-widget-contribution-guidelines.md#contribution-guidelines) to learn about our preventive measures.
-
-## Inject and use external widget - managed portal
-
-For guidance to create and use a development environment to scaffold and upload a custom widget, see [Create and upload custom widget](developer-portal-extend-custom-functionality.md#create-and-upload-custom-widget).
-
-## Inject and use external widget - self-hosted portal
-
-1. Set up a [local environment](developer-portal-self-host.md#step-1-set-up-local-environment) for the latest release of the developer portal.
-
-1. Go to the widget's folder in the `/community/widgets` directory. Read the widget's description in the `readme.md` file.
-
-1. Register the widget in the portal's modules:
-
- 1. `src/apim.design.module.ts` - a module that registers design-time dependencies.
-
- ```typescript
- import { WidgetNameDesignModule } from "../community/widgets/<widget-name>/widget.design.module";
-
- ...
-
- injector.bindModule(new WidgetNameDesignModule());
- ```
-
- 1. `src/apim.publish.module.ts` - a module that registers publish-time dependencies.
-
- ```typescript
- import { WidgetNamePublishModule } from "../community/widgets/<widget-name>/widget.publish.module";
-
- ...
-
- injector.bindModule(new WidgetNamePublishModule());
- ```
-
- 1. `src/apim.runtime.module.ts` - a module that registers run-time dependencies.
-
- ```typescript
- import { WidgetNameRuntimeModule } from "../community/widgets/<widget-name>/widget.runtime.module";
-
- ...
-
- injector.bindModule(new WidgetNameRuntimeModule());
- ```
-
-1. Check if the widget has an `npm_dependencies` file.
-
-1. If so, copy the commands from the file and run them in the repository's top directory.
-
- Doing so will install the widget's dependencies.
-
-1. Run `npm start`.
-
-You can see the widget in the **Community** category in the widget selector.
---
-## Next steps
--
-Learn more about the developer portal:
--- [Azure API Management developer portal overview](api-management-howto-developer-portal.md)--- [Contribute widgets](developer-portal-widget-contribution-guidelines.md)
api-management Developer Portal Widget Contribution Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-widget-contribution-guidelines.md
- Title: How to contribute widgets for developer portal-
-description: Learn about recommended guidelines to follow when you contribute a widget to the API Management developer portal repository.
-- Previously updated : 08/18/2022----
-# How to contribute widgets to the API Management developer portal
-
-If you'd like to contribute a widget to the API Management developer portal [GitHub repository](https://github.com/Azure/api-management-developer-portal), follow this three-step process:
-
-1. Fork the repository.
-
-1. Implement the widget.
-
-1. Open a pull request to include your widget in the official repository.
-
-Your widget will inherit the repository's license. It will be available for [opt-in installation](developer-portal-use-community-widgets.md) in either the managed developer portal or a [self-hosted version](developer-portal-self-host.md) of the portal. The developer portal team may decide to also include it in the managed version of the portal.
-
-For an example of how to develop your own widget and upload it to your developer portal, see [Create and upload custom widget](developer-portal-extend-custom-functionality.md#create-and-upload-custom-widget).
-
-## Contribution guidelines
-
-This guidance is intended to ensure the safety and privacy of our customers and the visitors to their portals. Follow these guidelines to ensure your contribution is accepted:
-
-1. Place your widget in the `community/widgets/<your-widget-name>` folder.
-
-1. Your widget's name must be lowercase and alphanumeric with dashes separating the words. For example, `my-new-widget`.
-
-1. The folder must contain a screenshot of your widget in a published portal.
-
-1. The folder must contain a `readme.md` file, which follows the template from the `/scaffolds/widget/readme.md` file.
-
-1. The folder can contain an `npm_dependencies` file with npm commands to install or manage the widget's dependencies.
-
- Explicitly specify the version of every dependency. For example:
-
- ```console
- npm install azure-storage@2.10.3 axios@0.19.1
- ```
-
- Your widget should require minimal dependencies. Every dependency will be carefully inspected by the reviewers. In particular, the core logic of your widget should be open-sourced in your widget's folder. Don't wrap it in an npm package.
-
-1. Changes to any files outside your widget's folder aren't allowed as part of a widget contribution. That includes, but isn't limited to, the `/package.json` file.
-
-1. Injecting tracking scripts or sending customer-authored data to custom services isn't allowed.
-
- > [!NOTE]
- > You can only collect customer-authored data through the `Logger` interface.
-
-## Next steps
--- For more information about contributions, see the API Management developer portal [GitHub repository](https://github.com/Azure/api-management-developer-portal/).--- See [Extend the developer portal with custom features](developer-portal-extend-custom-functionality.md) to learn about options to add custom functionality to the developer portal.--- See [Use community widgets](developer-portal-use-community-widgets.md) to learn how to use widgets contributed by the community.
api-management Export Api Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/export-api-postman.md
+
+ Title: Export API from Azure API Management to Postman for testing and monitoring | Microsoft Docs
+description: Learn how to export an API definition from API Management to Postman and use Postman for API testing and monitoring
+++++ Last updated : 10/11/2022++
+# Export API definition to Postman for API testing and monitoring
+
+To enhance development of your APIs, you can export an API that's fronted by API Management to [Postman](https://www.postman.com/product/what-is-postman/). Export an API definition from API Management as a Postman [collection](https://learning.postman.com/docs/getting-started/creating-the-first-collection/) so that you can use Postman's tools to design, document, test, monitor, and collaborate on APIs.
+
+## Prerequisites
+++ Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).++ Make sure that your instance manages an API that you'd like to export to Postman. +
+ > [!NOTE]
+ > Currently, you can only export HTTP APIs from API Management directly to Postman.
+
+ For testing authorization in Postman as outlined later in this article, the API should require a subscription.
+++ A [Postman](https://www.postman.com) account, which you can use to access Postman for Web.
+ * Optionally, [download and install](https://learning.postman.com/docs/getting-started/installation-and-updates/) the Postman desktop app locally.
+++
+## Export an API to Postman
+
+1. In the portal, under **APIs**, select an API.
+1. In the context menu (**...**), select **Export** > **Postman**.
+
+ :::image type="content" source="media/export-api-postman/export-to-postman.png" alt-text="Screenshot of exporting an API to Postman in the Azure portal.":::
+
+1. In the **Run in** dialog, select the Postman location to export to. You can select the option for the desktop app if you've installed it locally.
+1. In Postman, select a Postman workspace to import the API to. The default is *My Workspace*.
+1. In Postman, select **Generate collection from this API** to automatically generate a collection from the API definition. If needed, configure advanced import options, or accept default values. Select **Import**.
+
+ The collection and documentation are imported to Postman.
+
+ :::image type="content" source="media/export-api-postman/postman-collection-documentation.png" alt-text="Screenshot of collection imported to Postman.":::
+
+## Authorize requests in Postman
+
+If the API you exported requires a subscription, you'll need to configure a valid subscription key from your API Management instance to send requests from Postman.
+
+Use the following steps to configure a subscription key as a secret variable for the collection.
+
+1. In your Postman workspace, select **Environments** > **Create environment**.
+1. Enter a name for the environment such as *Azure API Management*.
+1. Add a variable with the following values:
+ 1. Name - *apiKey*
+ 1. Type - **secret**
+ 1. Initial value - a valid API Management subscription key for the API
+1. Select **Save**.
+1. Select **Collections** and the name of the collection that you imported.
+1. Select the **Authorization** tab.
+1. In the upper right, select the name of the environment you created, such as *Azure API Management*.
+1. For the key **Ocp-Apim-Subscription-Key**, enter the variable name `{{apiKey}}`. Select **Save**.
+
+ :::image type="content" source="media/export-api-postman/postman-api-authorization.png" alt-text="Screenshot of configuring secret API key in Postman.":::
+1. Test your configuration by selecting an operation in your API such as a `GET` operation, and select **Send**.
+
+ If correctly configured, the operation returns a `200 OK` status and some output.
+
+## Next steps
+
+* Learn more about [importing APIs to Postman](https://learning.postman.com/docs/designing-and-developing-your-api/importing-an-api/).
+* Learn more about [authorizing requests in Postman](https://learning.postman.com/docs/sending-requests/authorization/).
api-management Front Door Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/front-door-api-management.md
Use API Management policies to ensure that your API Management instance accepts
### Restrict incoming IP addresses
-You can configure an inbound [ip-filter](/api-management-access-restriction-policies.md#CheckHTTPHeader) policy in API Management to allow only Front Door-related traffic, which includes:
+You can configure an inbound [ip-filter](api-management-access-restriction-policies.md#RestrictCallerIPs) policy in API Management to allow only Front Door-related traffic, which includes:
* **Front Door's backend IP address space** - Allow IP addresses corresponding to the *AzureFrontDoor.Backend* section in [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519).
You can configure an inbound [ip-filter](/api-management-access-restriction-poli
### Check Front Door header
-Requests routed through Front Door include headers specific to your Front Door configuration. You can configure the [check-header](/api-management-access-restriction-policies.md#CheckHTTPHeader) policy to filter incoming requests based on the unique value of the `X-Azure-FDID` HTTP request header that is sent to API Management. This header value is the **Front Door ID**, which is shown in the portal on the **Overview** page of the Front Door profile.
+Requests routed through Front Door include headers specific to your Front Door configuration. You can configure the [check-header](api-management-access-restriction-policies.md#CheckHTTPHeader) policy to filter incoming requests based on the unique value of the `X-Azure-FDID` HTTP request header that is sent to API Management. This header value is the **Front Door ID**, which is shown in the portal on the **Overview** page of the Front Door profile.
In the following policy example, the Front Door ID is specified using a [named value](api-management-howto-properties.md) named `FrontDoorId`.
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
api-management Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-concepts.md
The following are virtual network resource requirements for API Management. Some
### Subnet size
-The minimum size of the subnet in which API Management can be deployed is /29, which gives three usable IP addresses. Each extra scale [unit](api-management-capacity.md) of API Management requires two more IP addresses. The minimum size requirement is based on the following considerations:
+The minimum size of the subnet in which API Management can be deployed is /29, which provides three usable IP addresses. Each extra scale [unit](api-management-capacity.md) of API Management requires two more IP addresses. The minimum size requirement is based on the following considerations:
* Azure reserves five IP addresses within each subnet that can't be used. The first and last IP addresses of the subnets are reserved for protocol conformance. Three more addresses are used for Azure services. For more information, see [Are there any restrictions on using IP addresses within these subnets?](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets).
The minimum size of the subnet in which API Management can be deployed is /29, w
* For Basic, Standard, or Premium SKUs:
- * **/29 subnet**: 8 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP for internal load balancer, if used in internal mode = 0 remaining IP addresses left for scaling units.
+ * **/29 subnet**: 8 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 0 remaining IP addresses left for scale-out units.
- * **/28 subnet**: 16 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP for internal load balancer, if used in internal mode = 8 remaining IP addresses left for four scale-out units (2 IP addresses/scale-out unit) for a total of five units. **This subnet efficiently maximizes Basic and Standard SKU scale-out limits.**
+ * **/28 subnet**: 16 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 8 remaining IP addresses left for four scale-out units (2 IP addresses/scale-out unit) for a total of five units. **This subnet efficiently maximizes Basic and Standard SKU scale-out limits.**
- * **/27 subnet**: 32 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP for internal load balancer, if used in internal mode = 24 remaining IP addresses left for twelve scale-out units (2 IP addresses/scale-out unit) for a total of thirteen units. **This subnet efficiently maximizes the soft-limit Premium SKU scale-out limit.**
+ * **/27 subnet**: 32 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 24 remaining IP addresses left for twelve scale-out units (2 IP addresses/scale-out unit) for a total of thirteen units. **This subnet efficiently maximizes the soft-limit Premium SKU scale-out limit.**
- * **/26 subnet**: 64 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP for internal load balancer, if used in internal mode = 56 remaining IP addresses left for twenty-eight scale-out units (2 IP addresses/scale-out unit) for a total of twenty-nine units. It is possible, with an Azure Support ticket, to scale the Premium SKU past twelve units. If you foresee such high demand, consider the /26 subnet.
+ * **/26 subnet**: 64 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 56 remaining IP addresses left for twenty-eight scale-out units (2 IP addresses/scale-out unit) for a total of twenty-nine units. It is possible, with an Azure Support ticket, to scale the Premium SKU past twelve units. If you foresee such high demand, consider the /26 subnet.
- * **/25 subnet**: 128 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP for internal load balancer, if used in internal mode = 120 remaining IP addresses left for sixty scale-out units (2 IP addresses/scale-out unit) for a total of sixty-one units. This is an extremely large, theoretical number of scale-out units.
+ * **/25 subnet**: 128 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 120 remaining IP addresses left for sixty scale-out units (2 IP addresses/scale-out unit) for a total of sixty-one units. This is an extremely large, theoretical number of scale-out units.
### Routing
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
When an API Management service instance is hosted in a VNet, the ports in the fo
| * / [80], 443 | Inbound | TCP | Internet / VirtualNetwork | **Client communication to API Management** | External only | | * / 3443 | Inbound | TCP | ApiManagement / VirtualNetwork | **Management endpoint for Azure portal and PowerShell** | External & Internal | | * / 443 | Outbound | TCP | VirtualNetwork / Storage | **Dependency on Azure Storage** | External & Internal |
-| * / 443 | Outbound | TCP | VirtualNetwork / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) dependency (optional) | External & Internal |
+| * / 443 | Outbound | TCP | VirtualNetwork / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) and Azure Key Vault dependency (optional) | External & Internal |
+| * / 443 | Outbound | TCP | VirtualNetwork / AzureKeyVault | Access to Azure Key Vault for [named values](api-management-howto-properties.md) integration (optional) | External & Internal |
| * / 1433 | Outbound | TCP | VirtualNetwork / SQL | **Access to Azure SQL endpoints** | External & Internal | | * / 5671, 5672, 443 | Outbound | TCP | VirtualNetwork / Azure Event Hubs | Dependency for [Log to Azure Event Hubs policy](api-management-howto-log-event-hubs.md) and monitoring agent (optional)| External & Internal | | * / 445 | Outbound | TCP | VirtualNetwork / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal |
app-service Deploy Content Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-content-sync.md
Invoke-AzureRmResourceAction -ResourceGroupName <group-name> -ResourceType Micro
[!INCLUDE [What happens to my app during deployment?](../../includes/app-service-deploy-atomicity.md)]
+## OneDrive and Dropbox integration retirements
+
+On September 30, 2023, the integrations for Microsoft OneDrive and Dropbox for Azure App Service and Azure Functions will be retired. If you use OneDrive or Dropbox, you should [disable content sync deployments](#disable-content-sync-deployment) from OneDrive and Dropbox. You can then set up deployments from any of the following alternatives (a minimal Azure CLI example follows this list):
+
+- [GitHub Actions](deploy-github-actions.md)
+- [Azure DevOps Pipelines](https://docs.microsoft.com/azure/devops/pipelines/targets/webapp?view=azure-devops)
+- [Azure CLI](https://docs.microsoft.com/azure/app-service/deploy-zip?tabs=cli)
+- [VS Code](https://docs.microsoft.com/azure/app-service/deploy-zip?tabs=cli)
+- [Local Git Repository](https://docs.microsoft.com/azure/app-service/deploy-local-git?tabs=cli)
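+
+For example, here's a minimal Azure CLI sketch of a one-off zip deployment. The resource group, app name, and `app.zip` package are placeholders, and it assumes your Azure CLI version includes the `az webapp deploy` command:
+
+```azurecli
+# Hypothetical names; package your app as app.zip first.
+az webapp deploy \
+    --resource-group my-resource-group \
+    --name my-web-app \
+    --src-path app.zip \
+    --type zip
+```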
+ ## Next steps > [!div class="nextstepaction"]
app-service Overview Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md
If you don't need to work with tokens in your app, you can disable the token sto
If you [enable application logging](troubleshoot-diagnostic-logs.md), you will see authentication and authorization traces directly in your log files. If you see an authentication error that you didn't expect, you can conveniently find all the details by looking in your existing application logs. If you enable [failed request tracing](troubleshoot-diagnostic-logs.md), you can see exactly what role the authentication and authorization module may have played in a failed request. In the trace logs, look for references to a module named `EasyAuthModule_32/64`.
+### Considerations when using Azure Front Door
+
+When you use Azure App Service with Easy Auth behind Azure Front Door or another reverse proxy, you need to take a few additional things into consideration.
+
+1) Disable caching for the authentication workflow
+
+ See [Disable cache for auth workflow](/azure/static-web-apps/front-door-manual#disable-cache-for-auth-workflow) to learn more on how to configure rules in Azure Front Door to disable caching for authentication and authorization-related pages.
+
+2) Use the Front Door endpoint for redirects
+
+   App Service usually shouldn't be directly accessible when it's exposed via Azure Front Door. You can prevent direct access, for example, by exposing App Service through Private Link in Azure Front Door Premium. To prevent the authentication workflow from redirecting traffic back to App Service directly, it's important to configure the application to redirect back to `https://<front-door-endpoint>/.auth/login/<provider>/callback` (see the Azure CLI sketch after this list).
+
+3) Ensure that App Service is using the right redirect URI
+
+   In some configurations, App Service uses the App Service FQDN as the redirect URI instead of the Front Door FQDN. This causes an issue because the client is redirected to App Service instead of Front Door. To change that, set the `forwardProxy` setting to `Standard` so that App Service respects the `X-Forwarded-Host` header set by Azure Front Door.
+
+   Other reverse proxies, such as Azure Application Gateway or third-party products, might use different headers and need a different `forwardProxy` setting.
+
+   This configuration can't be done in the Azure portal today; it needs to be done with `az rest`:
+
+ **Export settings**
+
+ `az rest --uri /subscriptions/REPLACE-ME-SUBSCRIPTIONID/resourceGroups/REPLACE-ME-RESOURCEGROUP/providers/Microsoft.Web/sites/REPLACE-ME-APPNAME?api-version=2020-09-01 --method get > auth.json`
+
+ **Update settings**
+
+ Search for
+ ```json
+ "httpSettings": {
+ "forwardProxy": {
+ "convention": "Standard"
+ }
+ }
+ ```
+ and ensure that `convention` is set to `Standard` to respect the `X-Forwarded-Host` header used by Azure Front Door.
+
+ **Import settings**
+
+ `az rest --uri /subscriptions/REPLACE-ME-SUBSCRIPTIONID/resourceGroups/REPLACE-ME-RESOURCEGROUP/providers/Microsoft.Web/sites/REPLACE-ME-APPNAME?api-version=2020-09-01 --method put --body @auth.json`
+
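+Returning to step 2 above (redirecting through the Front Door endpoint), the redirect URI registered with your identity provider must also point at the Front Door endpoint. As an illustration only, here's a hedged Azure CLI sketch for an Azure AD app registration; it assumes the `aad` provider, a placeholder app registration ID, and a recent Azure CLI version that supports `az ad app update --web-redirect-uris`:
+
+```azurecli
+# Hypothetical values; this replaces the app registration's existing web redirect URIs.
+az ad app update \
+    --id 00000000-0000-0000-0000-000000000000 \
+    --web-redirect-uris "https://<front-door-endpoint>/.auth/login/aad/callback"
+```
+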
## More resources - [How-To: Configure your App Service or Azure Functions app to use Azure AD login](configure-authentication-provider-aad.md)
Samples:
- [Tutorial: Add authentication to your web app running on Azure App Service](scenario-secure-app-authentication-app-service.md) - [Tutorial: Authenticate and authorize users end-to-end in Azure App Service (Windows or Linux)](tutorial-auth-aad.md) - [.NET Core integration of Azure AppService EasyAuth (3rd party)](https://github.com/MaximRouiller/MaximeRouiller.Azure.AppService.EasyAuth)-- [Getting Azure App Service authentication working with .NET Core (3rd party)](https://github.com/kirkone/KK.AspNetCore.EasyAuthAuthentication)
+- [Getting Azure App Service authentication working with .NET Core (3rd party)](https://github.com/kirkone/KK.AspNetCore.EasyAuthAuthentication)
app-service Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-nodejs.md
Title: 'Quickstart: Create a Node.js web app'
description: Deploy your first Node.js Hello World to Azure App Service in minutes. ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a -
-#zone_pivot_groups: app-service-platform-windows-linux
+ Last updated 03/22/2022 ms.devlang: javascript #zone_pivot_groups: app-service-ide-oss
Congratulations, you've successfully completed this quickstart!
Check out the other Azure extensions.
-* [Cosmos DB](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-cosmosdb)
+* [Azure Cosmos DB](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-cosmosdb)
* [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) * [Docker Tools](https://marketplace.visualstudio.com/items?itemName=PeterJausovec.vscode-docker) * [Azure CLI Tools](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azurecli)
app-service Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-cli.md
tags: azure-service-management
ms.assetid: 53e6a15a-370a-48df-8618-c6737e26acec Last updated 04/21/2022-+ keywords: azure cli samples, azure cli examples, azure cli code samples- # CLI samples for Azure App Service
The following table includes links to bash scripts built using the Azure CLI.
| [Connect an app to a SQL Database](./scripts/cli-connect-to-sql.md)| Creates an App Service app and a database in Azure SQL Database, then adds the database connection string to the app settings. | | [Connect an app to a storage account](./scripts/cli-connect-to-storage.md)| Creates an App Service app and a storage account, then adds the storage connection string to the app settings. | | [Connect an app to an Azure Cache for Redis](./scripts/cli-connect-to-redis.md) | Creates an App Service app and an Azure Cache for Redis, then adds the redis connection details to the app settings.) |
-| [Connect an app to Cosmos DB](./scripts/cli-connect-to-documentdb.md) | Creates an App Service app and a Cosmos DB, then adds the Cosmos DB connection details to the app settings. |
+| [Connect an app to Azure Cosmos DB](./scripts/cli-connect-to-documentdb.md) | Creates an App Service app and an Azure Cosmos DB account, then adds the Azure Cosmos DB connection details to the app settings. |
|**Backup and restore app**|| | [Backup and restore app](./scripts/cli-backup-schedule-restore.md) | Creates an App Service app and creates a one-time backup for it, creates a backup schedule for it, and then restores an App Service app from a backup. | |**Monitor app**||
app-service Cli Connect To Documentdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-connect-to-documentdb.md
Title: 'CLI: Connect an app to Cosmos DB'
-description: Learn how to use the Azure CLI to automate deployment and management of your App Service app. This sample shows how to connect an app to MongoDB (Cosmos DB).
+ Title: 'CLI: Connect an app to Azure Cosmos DB'
+description: Learn how to use the Azure CLI to automate deployment and management of your App Service app. This sample shows how to connect an app to Azure Cosmos DB.
tags: azure-service-management
ms.devlang: azurecli
Last updated 04/21/2022 -+
-# Connect an App Service app to Cosmos DB using CLI
+# Connect an App Service app to Azure Cosmos DB via the Azure CLI
-This sample script creates an Azure Cosmos DB account using the Azure Cosmos DB's API for MongoDB and an App Service app. It then links a MongoDB connection string to the web app using app settings.
+This sample script creates an Azure Cosmos DB account using Azure Cosmos DB for MongoDB and an App Service app. It then links a MongoDB connection string to the web app using app settings.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
az group delete --name $resourceGroup
## Sample reference
-This script uses the following commands to create a resource group, App Service app, Cosmos DB, and all related resources. Each command in the table links to command specific documentation.
+This script uses the following commands to create a resource group, an App Service app, an Azure Cosmos DB account, and all related resources. Each command in the table links to command-specific documentation.
| Command | Notes | ||| | [`az group create`](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. | | [`az appservice plan create`](/cli/azure/appservice/plan#az-appservice-plan-create) | Creates an App Service plan. | | [`az webapp create`](/cli/azure/webapp#az-webapp-create) | Creates an App Service app. |
-| [`az cosmosdb create`](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates a Cosmos DB account. |
-| [`az cosmosdb list-connection-strings`](/cli/azure/cosmosdb#az-cosmosdb-list-connection-strings) | Lists connection strings for the specified Cosmos DB account. |
+| [`az cosmosdb create`](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
+| [`az cosmosdb list-connection-strings`](/cli/azure/cosmosdb#az-cosmosdb-list-connection-strings) | Lists connection strings for the specified Azure Cosmos DB account. |
| [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) | Creates or updates an app setting for an App Service app. App settings are exposed as environment variables for your app (see [Environment variables and app settings reference](../reference-app-settings.md)). | ## Next steps
app-service Cli Integrate App Service With Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-integrate-app-service-with-application-gateway.md
na
Last updated 04/15/2022 -+ # Integrate App Service with Application Gateway using CLI
az group delete --name $resourceGroup
## Sample reference
-This script uses the following commands to create a resource group, App Service app, Cosmos DB, and all related resources. Each command in the table links to command specific documentation.
+This script uses the following commands to create a resource group, an App Service app, an Application Gateway, and all related resources. Each command in the table links to command-specific documentation.
| Command | Notes | |||
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
keywords: azure app service, web app, security, msi, managed service identity, m
ms.devlang: csharp,java,javascript,python Last updated 04/12/2022-+ # Tutorial: Connect to Azure databases from App Service without secrets using a managed identity
- [Azure Database for PostgreSQL](../postgresql/index.yml) > [!NOTE]
-> This tutorial doesn't include guidance for [Azure Cosmos DB](../cosmos-db/index.yml), which supports Azure Active Directory authentication differently. For information, see Cosmos DB documentation. For example: [Use system-assigned managed identities to access Azure Cosmos DB data](../cosmos-db/managed-identity-based-authentication.md).
+> This tutorial doesn't include guidance for [Azure Cosmos DB](../cosmos-db/index.yml), which supports Azure Active Directory authentication differently. For more information, see the Azure Cosmos DB documentation, such as [Use system-assigned managed identities to access Azure Cosmos DB data](../cosmos-db/managed-identity-based-authentication.md).
Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the connection strings. This tutorial shows you how to connect to the above-mentioned databases from App Service using managed identities.
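A minimal Azure CLI sketch of the first step, enabling the system-assigned managed identity on the app, is shown below. The resource names are placeholders, and the tutorial's remaining steps (granting that identity access to the database) aren't shown here:

```azurecli
# Hypothetical names; enable the system-assigned managed identity for the web app.
az webapp identity assign \
    --resource-group my-resource-group \
    --name my-web-app
```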
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md
Title: 'Tutorial: Linux Java app with MongoDB'
-description: Learn how to get a data-driven Linux Java app working in Azure App Service, with connection to a MongoDB running in Azure (Cosmos DB).
+description: Learn how to get a data-driven Linux Java app working in Azure App Service, with connection to a MongoDB running in Azure Cosmos DB.
ms.devlang: java Last updated 12/10/2018-+ # Tutorial: Build a Java Spring Boot web app with Azure App Service on Linux and Azure Cosmos DB
When you are finished, you will have a [Spring Boot](https://spring.io/projects/
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a Cosmos DB database.
+> * Create an Azure Cosmos DB database.
> * Connect a sample app to the database and test it locally > * Deploy the sample app to Azure > * Stream diagnostic logs from App Service
In this tutorial, you learn how to:
## Clone the sample TODO app and prepare the repo
-This tutorial uses a sample TODO list app with a web UI that calls a Spring REST API backed by [Spring Data Azure Cosmos DB](https://github.com/Microsoft/spring-data-cosmosdb). The code for the app is available [on GitHub](https://github.com/Microsoft/spring-todo-app). To learn more about writing Java apps using Spring and Cosmos DB, see the [Spring Boot Starter with the Azure Cosmos DB SQL API tutorial](/java/azure/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db) and the [Spring Data Azure Cosmos DB quick start](https://github.com/Microsoft/spring-data-cosmosdb#quick-start).
-
+This tutorial uses a sample TODO list app with a web UI that calls a Spring REST API backed by [Spring Data for Azure Cosmos DB](https://github.com/Microsoft/spring-data-cosmosdb). The code for the app is available [on GitHub](https://github.com/Microsoft/spring-todo-app). To learn more about writing Java apps using Spring and Azure Cosmos DB, see the [Spring Boot Starter with the Azure Cosmos DB for NoSQL tutorial](/java/azure/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db) and the [Spring Data for Azure Cosmos DB quick start](https://github.com/Microsoft/spring-data-cosmosdb#quick-start).
Run the following commands in your terminal to clone the sample repo and set up the sample app environment.
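As a minimal sketch, cloning the sample repo linked above looks like this (the tutorial's environment-setup commands aren't repeated here):

```bash
# Clone the sample TODO app and switch into it.
git clone https://github.com/Microsoft/spring-todo-app.git
cd spring-todo-app
```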
Follow these steps to create an Azure Cosmos DB database in your subscription. T
``` 3. Create Azure Cosmos DB with the `GlobalDocumentDB` kind.
-The name of Cosmos DB must use only lower case letters. Note down the `documentEndpoint` field in the response from the command.
+The name of the Azure Cosmos DB instance must use only lower case letters. Note down the `documentEndpoint` field in the response from the command.
```azurecli az cosmosdb create --kind GlobalDocumentDB \
The name of Cosmos DB must use only lower case letters. Note down the `documentE
## Configure the TODO app properties
-Open a terminal on your computer. Copy the sample script file in the cloned repo so you can customize it for your Cosmos DB database you just created.
+Open a terminal on your computer. Copy the sample script file in the cloned repo so you can customize it for the Azure Cosmos DB database you just created.
```bash cd initial/spring-todo-app
cp set-env-variables-template.sh .scripts/set-env-variables.sh
``` Edit `.scripts/set-env-variables.sh` in your favorite editor and supply Azure
-Cosmos DB connection info. For the App Service Linux configuration, use the same region as before (`your-resource-group-region`) and resource group (`your-azure-group-name`) used when creating the Cosmos DB database. Choose a WEBAPP_NAME that is unique since it cannot duplicate any web app name in any Azure deployment.
+Azure Cosmos DB connection info. For the App Service Linux configuration, use the same region (`your-resource-group-region`) and resource group (`your-azure-group-name`) that you used when creating the Azure Cosmos DB database. Choose a `WEBAPP_NAME` that's unique, because it can't duplicate any web app name in any Azure deployment.
```bash export COSMOSDB_URI=<put-your-COSMOS-DB-documentEndpoint-URI-here>
public interface TodoItemRepository extends DocumentDbRepository<TodoItem, Strin
} ```
-Then the sample app uses the `@Document` annotation imported from `com.microsoft.azure.spring.data.cosmosdb.core.mapping.Document` to set up an entity type to be stored and managed by Cosmos DB:
+Then the sample app uses the `@Document` annotation imported from `com.microsoft.azure.spring.data.cosmosdb.core.mapping.Document` to set up an entity type to be stored and managed by Azure Cosmos DB:
```java @Document
az group delete --name <your-azure-group-name> --yes
[Azure for Java Developers](/java/azure/) [Spring Boot](https://spring.io/projects/spring-boot),
-[Spring Data for Cosmos DB](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db),
+[Spring Data for Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db),
[Azure Cosmos DB](../cosmos-db/introduction.md) and [App Service Linux](overview.md).
app-service Tutorial Nodejs Mongodb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-nodejs-mongodb-app.md
Title: Deploy a Node.js web app using MongoDB to Azure
-description: This article shows you have to deploy a Node.js app using Express.js and a MongoDB database to Azure. Azure App Service is used to host the web application and Azure Cosmos DB to host the database using the 100% compatible MongoDB API built into Cosmos DB.
+description: This article shows you how to deploy a Node.js app using Express.js and a MongoDB database to Azure. Azure App Service is used to host the web application and Azure Cosmos DB to host the database using the 100% compatible MongoDB API built into Azure Cosmos DB.
Last updated 09/06/2022 ms.role: developer ms.devlang: javascript-+ # Deploy a Node.js + MongoDB web app to Azure
-[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a secure Node.js app in Azure App Service that's connected to a MongoDB database (using [Azure Cosmos DB with MongoDB API](../cosmos-db/mongodb/mongodb-introduction.md)). When you're finished, you'll have an Express.js app running on Azure App Service on Linux.
+[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a secure Node.js app in Azure App Service that's connected to an [Azure Cosmos DB for MongoDB](../cosmos-db/mongodb/mongodb-introduction.md) database. When you're finished, you'll have an Express.js app running on Azure App Service on Linux.
:::image type="content" source="./media/tutorial-nodejs-mongodb-app/app-diagram.png" alt-text="A diagram showing how the Express.js app will be deployed to Azure App Service and the MongoDB data will be hosted inside of Azure Cosmos DB." lightbox="./media/tutorial-nodejs-mongodb-app/app-diagram-large.png":::
If you want to run the application locally, do the following:
* Start the application using `npm start`. * To view the app, browse to `http://localhost:3000`.
-## 1. Create App Service and Cosmos DB
+## 1. Create App Service and Azure Cosmos DB
-In this step, you create the Azure resources. The steps used in this tutorial create a set of secure-by-default resources that include App Service and Azure Cosmos DB API for MongoDB that's. For the creation process, you'll specify:
+In this step, you create the Azure resources. The steps used in this tutorial create a set of secure-by-default resources that include App Service and Azure Cosmos DB for MongoDB. For the creation process, you'll specify:
* The **Name** for the web app. It's the name used as part of the DNS name for your webapp in the form of `https://<app-name>.azurewebsites.net`. * The **Region** to run the app physically in the world.
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
1. *Name* &rarr; **msdocs-expressjs-mongodb-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure. 1. *Runtime stack* &rarr; **Node 16 LTS**. 1. *Hosting plan* &rarr; **Basic**. When you're ready, you can [scale up](manage-scale-up.md) to a production pricing tier later.
- 1. **Cosmos DB API for MongoDB** is selected by default as the database engine. Azure Cosmos DB is a cloud native database offering a 100% MongoDB compatible API. Note the database name that's generated for you (*\<app-name>-database*). You'll need it later.
+ 1. **Azure Cosmos DB for MongoDB** is selected by default as the database engine. Azure Cosmos DB is a cloud native database offering a 100% MongoDB compatible API. Note the database name that's generated for you (*\<app-name>-database*). You'll need it later.
1. Select **Review + create**. 1. After validation completes, select **Create**. :::column-end:::
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
- **Virtual network** &rarr; Integrated with the App Service app and isolates back-end network traffic. - **Private endpoint** &rarr; Access endpoint for the database resource in the virtual network. - **Network interface** &rarr; Represents a private IP address for the private endpoint.
- - **Cosmos DB API for MongoDB** &rarr; Accessible only from behind the private endpoint. A database and a user are created for you on the server.
- - **Private DNS zone** &rarr; Enables DNS resolution of the Cosmos DB server in the virtual network.
+ - **Azure Cosmos DB for MongoDB** &rarr; Accessible only from behind the private endpoint. A database and a user are created for you on the server.
+ - **Private DNS zone** &rarr; Enables DNS resolution of the Azure Cosmos DB server in the virtual network.
:::column-end::: :::column:::
When you're finished, you can delete all of the resources from your Azure subscr
## Frequently asked questions - [How much does this setup cost?](#how-much-does-this-setup-cost)-- [How do I connect to the Cosmos DB server that's secured behind the virtual network with other tools?](#how-do-i-connect-to-the-cosmos-db-server-thats-secured-behind-the-virtual-network-with-other-tools)
+- [How do I connect to the Azure Cosmos DB server that's secured behind the virtual network with other tools?](#how-do-i-connect-to-the-azure-cosmos-db-server-thats-secured-behind-the-virtual-network-with-other-tools)
- [How does local app development work with GitHub Actions?](#how-does-local-app-development-work-with-github-actions) - [Why is the GitHub Actions deployment so slow?](#why-is-the-github-actions-deployment-so-slow)
When you're finished, you can delete all of the resources from your Azure subscr
Pricing for the create resources is as follows: - The App Service plan is created in **Basic** tier and can be scaled up or down. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/).-- The Cosmos DB server is create in a single region and can be distributed to other regions. See [Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/).
+- The Azure Cosmos DB server is created in a single region and can be distributed to other regions. See [Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/).
- The virtual network doesn't incur a charge unless you configure extra functionality, such as peering. See [Azure Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/). - The private DNS zone incurs a small charge. See [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
-#### How do I connect to the Cosmos DB server that's secured behind the virtual network with other tools?
+#### How do I connect to the Azure Cosmos DB server that's secured behind the virtual network with other tools?
-- For basic access from a commmand-line tool, you can run `mongosh` from the app's SSH terminal. The app's container doesn't come with `mongosh`, so you must [install it manually](https://www.mongodb.com/docs/mongodb-shell/install/). Remember that the installed client doesn't persist across app restarts.
+- For basic access from a command-line tool, you can run `mongosh` from the app's SSH terminal. The app's container doesn't come with `mongosh`, so you must [install it manually](https://www.mongodb.com/docs/mongodb-shell/install/). Remember that the installed client doesn't persist across app restarts.
- To connect from a MongoDB GUI client, your machine must be within the virtual network. For example, it could be an Azure VM that's connected to one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network.-- To connect from the Mongo shell from the Cosmos DB management page in the portal, your machine must also be within the virtual network. You could instead open the Cosmos DB server's firewall for your local machine's IP address, but it increases the attack surface for your configuration.
+- To connect with the MongoDB shell from the Azure Cosmos DB management page in the portal, your machine must also be within the virtual network. You could instead open the Azure Cosmos DB server's firewall for your local machine's IP address, but doing so increases the attack surface of your configuration.
#### How does local app development work with GitHub Actions?
app-service Webjobs Sdk How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-sdk-how-to.md
description: Learn more about how to write code for the WebJobs SDK. Create even
ms.devlang: csharp-+ Last updated 06/24/2021
These binding-specific settings are equivalent to settings in the [host.json pro
You can configure the following bindings:
-* [Azure CosmosDB trigger](#azure-cosmosdb-trigger-configuration-version-3x)
+* [Azure Cosmos DB trigger](#azure-cosmos-db-trigger-configuration-version-3x)
* [Event Hubs trigger](#event-hubs-trigger-configuration-version-3x) * [Queue storage trigger](#queue-storage-trigger-configuration) * [SendGrid binding](#sendgrid-binding-configuration-version-3x) * [Service Bus trigger](#service-bus-trigger-configuration-version-3x)
-#### Azure CosmosDB trigger configuration (version 3.*x*)
+#### Azure Cosmos DB trigger configuration (version 3.*x*)
This example shows how to configure the Azure Cosmos DB trigger:
static async Task Main()
} ```
-For more information, see the [Azure CosmosDB binding](../azure-functions/functions-bindings-cosmosdb-v2.md#hostjson-settings) article.
+For more information, see the [Azure Cosmos DB binding](../azure-functions/functions-bindings-cosmosdb-v2.md#hostjson-settings) article.
#### Event Hubs trigger configuration (version 3.*x*)
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/key-vault-certs.md
After Application Gateway is configured to use Key Vault certificates, its insta
> [!TIP] > Any change to Application Gateway will force a check against Key Vault to see if any new versions of certificates are available. This includes, but not limited to, changes to Frontend IP Configurations, Listeners, Rules, Backend Pools, Resource Tags, and more. If an updated certificate is found, the new certificate will immediately be presented.
-Application Gateway uses a secret identifier in Key Vault to reference the certificates. For Azure PowerShell, the Azure CLI, or Azure Resource Manager, we strongly recommend that you use a secret identifier that doesn't specify a version. This way, Application Gateway will automatically rotate the certificate if a newer version is available in your Key Vault. An example of a secret URI without a version is `https://myvault.vault.azure.net/secrets/mysecret/`.
+Application Gateway uses a secret identifier in Key Vault to reference the certificates. For Azure PowerShell, the Azure CLI, or Azure Resource Manager, we strongly recommend that you use a secret identifier that doesn't specify a version. This way, Application Gateway automatically rotates the certificate if a newer version is available in your Key Vault. An example of a secret URI without a version is `https://myvault.vault.azure.net/secrets/mysecret/`. For PowerShell steps, see the [section below](#key-vault-azure-role-based-access-control-permission-model).
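+
+As an illustration, here's a hedged Azure CLI sketch of attaching a certificate to an existing gateway by its versionless secret identifier. The resource names are placeholders, and it assumes the gateway's managed identity already has access to the vault:
+
+```azurecli
+# Hypothetical names; the secret ID deliberately omits a version so newer certificate versions are picked up automatically.
+az network application-gateway ssl-cert create \
+    --resource-group my-resource-group \
+    --gateway-name my-app-gateway \
+    --name my-kv-cert \
+    --key-vault-secret-id "https://myvault.vault.azure.net/secrets/mysecret/"
+```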
The Azure portal supports only Key Vault certificates, not secrets. Application Gateway still supports referencing secrets from Key Vault, but only through non-portal resources like PowerShell, the Azure CLI, APIs, and Azure Resource Manager templates (ARM templates).
applied-ai-services Build Training Data Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/build-training-data-set.md
- Title: "How to build a training data set for a custom model - Form Recognizer"-
-description: Learn how to ensure your training data set is optimized for training a Form Recognizer model.
----- Previously updated : 11/02/2021-
-#Customer intent: As a user of the Form Recognizer custom model service, I want to ensure I'm training my model in the best way.
---
-# Build a training data set for a custom model
-
-When you use the Form Recognizer custom model, you provide your own training data to the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) operation, so that the model can train to your industry-specific forms. Follow this guide to learn how to collect and prepare data to train the model effectively.
-
-You need at least five filled-in forms of the same type.
-
-If you want to use manually labeled training data, you must start with at least five filled-in forms of the same type. You can still use unlabeled forms in addition to the required data set.
-
-## Custom model input requirements
-
-First, make sure your training data set follows the input requirements for Form Recognizer.
--
-## Training data tips
-
-Follow these additional tips to further optimize your data set for training.
-
-* If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
-* For filled-in forms, use examples that have all of their fields filled in.
-* Use forms with different values in each field.
-* If your form images are of lower quality, use a larger data set (10-15 images, for example).
-
-## Upload your training data
-
-When you've put together the set of form documents that you'll use for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../storage/blobs/storage-quickstart-blobs-portal.md). Use the standard performance tier.
-
-If you want to use manually labeled data, you'll also have to upload the *.labels.json* and *.ocr.json* files that correspond to your training documents. You can use the [Sample Labeling tool](label-tool.md) (or your own UI) to generate these files.
-
-### Organize your data in subfolders (optional)
-
-By default, the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) API will only use form documents that are located at the root of your storage container. However, you can train with data in subfolders if you specify it in the API call. Normally, the body of the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) call has the following format, where `<SAS URL>` is the Shared access signature URL of your container:
-
-```json
-{
- "source":"<SAS URL>"
-}
-```
-
-If you add the following content to the request body, the API will train with documents located in subfolders. The `"prefix"` field is optional and will limit the training data set to files whose paths begin with the given string. So a value of `"Test"`, for example, will cause the API to look at only the files or folders that begin with the word "Test".
-
-```json
-{
- "source": "<SAS URL>",
- "sourceFilter": {
- "prefix": "<prefix string>",
- "includeSubFolders": true
- },
- "useLabelFile": false
-}
-```
-
-## Next steps
-
-Now that you've learned how to build a training data set, follow a quickstart to train a custom Form Recognizer model and start using it on your forms.
-
-* [Train a model and extract form data using the client library or REST API](quickstarts/try-sdk-rest-api.md)
-* [Train with labels using the sample labeling tool](label-tool.md)
-
-## See also
-
-* [What is Form Recognizer?](./overview.md)
applied-ai-services Compose Custom Models V2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models-v2-1.md
- Title: "How to guide: create and compose custom models with Form Recognizer v2.1"-
-description: Learn how to create, compose use, and manage custom models with Form Recognizer v2.1
----- Previously updated : 08/22/2022-
-recommendations: false
--
-# Compose custom models v2.1
-
-> [!NOTE]
-> This how-to guide references Form Recognizer v2.1 . To try Form Recognizer v3.0 , see [Compose custom models v3.0](compose-custom-models-v3.md).
-
-Form Recognizer uses advanced machine-learning technology to detect and extract information from document images and return the extracted data in a structured JSON output. With Form Recognizer, you can train standalone custom models or combine custom models to create composed models.
-
-* **Custom models**. Form Recognizer custom models enable you to analyze and extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases.
-
-* **Composed models**. A composed model is created by taking a collection of custom models and assigning them to a single model that encompasses your form types. When a document is submitted to a composed model, the service performs a classification step to decide which custom model accurately represents the form presented for analysis.
-
-In this article, you'll learn how to create Form Recognizer custom and composed models using our [Form Recognizer Sample Labeling tool](label-tool.md), [REST APIs](quickstarts/client-library.md?branch=main&pivots=programming-language-rest-api#train-a-custom-model), or [client-library SDKs](quickstarts/client-library.md?branch=main&pivots=programming-language-csharp#train-a-custom-model).
-
-## Sample Labeling tool
-
-Try extracting data from custom forms using our Sample Labeling tool. You'll need the following resources:
-
-* An Azure subscription; you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
-
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
-
- :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-
-> [!div class="nextstepaction"]
-> [Try it](https://fott-2-1.azurewebsites.net/projects/create)
-
-In the Form Recognizer UI:
-
-1. Select **Use Custom to train a model with labels and get key value pairs**.
-
- :::image type="content" source="media/label-tool/fott-use-custom.png" alt-text="Screenshot of the FOTT tool select custom model option.":::
-
-1. In the next window, select **New project**:
-
- :::image type="content" source="media/label-tool/fott-new-project.png" alt-text="Screenshot of the FOTT tool select new project option.":::
-
-## Create your models
-
-The steps for building, training, and using custom and composed models are as follows:
-
-* [**Assemble your training dataset**](#assemble-your-training-dataset)
-* [**Upload your training set to Azure blob storage**](#upload-your-training-dataset)
-* [**Train your custom model**](#train-your-custom-model)
-* [**Compose custom models**](#create-a-composed-model)
-* [**Analyze documents**](#analyze-documents-with-your-custom-or-composed-model)
-* [**Manage your custom models**](#manage-your-custom-models)
-
-## Assemble your training dataset
-
-Building a custom model begins with establishing your training dataset. You'll need a minimum of five completed forms of the same type for your sample dataset. They can be of different file types (jpg, png, pdf, tiff) and contain both text and handwriting. Your forms must follow the [input requirements](build-training-data-set.md#custom-model-input-requirements) for Form Recognizer.
-
-## Upload your training dataset
-
-You'll need to [upload your training data](build-training-data-set.md#upload-your-training-data)
-to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, *see* [Azure Storage quickstart for Azure portal](../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
-
-## Train your custom model
-
-You [train your model](./quickstarts/try-sdk-rest-api.md#train-a-custom-model) with labeled data sets. Labeled datasets rely on the prebuilt-layout API, but supplementary human input is included such as your specific labels and field locations. Start with at least five completed forms of the same type for your labeled training data.
-
-When you train with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
-
-Form Recognizer uses the [Layout](concept-layout.md) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started when training a new model. Add more labeled data as needed to improve the model accuracy. Form Recognizer enables training a model to extract key value pairs and tables using supervised learning capabilities.
-
-[Get started with Train with labels](label-tool.md)
-
-> [!VIDEO https://learn.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
-
-## Create a composed model
-
-> [!NOTE]
-> **Model Compose is only available for custom models trained _with_ labels.** Attempting to compose unlabeled models will produce an error.
-
-With the Model Compose operation, you can assign up to 100 trained custom models to a single model ID. When you call Analyze with the composed model ID, Form Recognizer will first classify the form you submitted, choose the best matching assigned model, and then return results for that model. This operation is useful when incoming forms may belong to one of several templates.
-
-Using the Form Recognizer Sample Labeling tool, the REST API, or the Client-library SDKs, follow the steps below to set up a composed model:
-
-1. [**Gather your custom model IDs**](#gather-your-custom-model-ids)
-1. [**Compose your custom models**](#compose-your-custom-models)
-
-#### Gather your custom model IDs
-
-Once the training process has successfully completed, your custom model will be assigned a model ID. You can retrieve a model ID as follows:
-
-### [**Form Recognizer Sample Labeling tool**](#tab/fott)
-
-When you train models using the [**Form Recognizer Sample Labeling tool**](https://fott-2-1.azurewebsites.net/), the model ID is located in the Train Result window:
--
-### [**REST API**](#tab/rest-api)
-
-The [**REST API**](./quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#train-a-custom-model) will return a `201 (Success)` response with a **Location** header. The value of the last parameter in this header is the model ID for the newly trained model:
--
-### [**Client-library SDKs**](#tab/sdks)
-
- The [**client-library SDKs**](./quickstarts/try-sdk-rest-api.md?pivots=programming-language-csharp#train-a-custom-model) return a model object that can be queried to return the trained model ID:
-
-* C\# | [CustomFormModel Class](/dotnet/api/azure.ai.formrecognizer.training.customformmodel?view=azure-dotnet&preserve-view=true#properties "Azure SDK for .NET")
-
-* Java | [CustomFormModelInfo Class](/java/api/com.azure.ai.formrecognizer.training.models.customformmodelinfo?view=azure-java-stable&preserve-view=true#methods "Azure SDK for Java")
-
-* JavaScript | CustomFormModelInfo interface
-
-* Python | [CustomFormModelInfo Class](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.customformmodelinfo?view=azure-python&preserve-view=true&branch=main#variables "Azure SDK for Python")
---
-#### Compose your custom models
-
-After you've gathered your custom models corresponding to a single form type, you can compose them into a single model.
-
-### [**Form Recognizer Sample Labeling tool**](#tab/fott)
-
-The **Sample Labeling tool** enables you to quickly get started training models and composing them to a single model ID.
-
-After you have completed training, compose your models as follows:
-
-1. On the left rail menu, select the **Model Compose** icon (merging arrow).
-
-1. In the main window, select the models you wish to assign to a single model ID. Models with the arrows icon are already composed models.
-
-1. Choose the **Compose button** from the upper-left corner.
-
-1. In the pop-up window, name your newly composed model and select **Compose**.
-
-When the operation completes, your newly composed model will appear in the list.
-
- :::image type="content" source="media/custom-model-compose.png" alt-text="Screenshot of the model compose window." lightbox="media/custom-model-compose-expanded.png":::
-
-### [**REST API**](#tab/rest-api)
-
-Using the **REST API**, you can make a [**Compose Custom Model**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel) request to create a single composed model from existing models. The request body requires a string array of your `modelIds` to compose and you can optionally define the `modelName`.
-
-### [**Client-library SDKs**](#tab/sdks)
-
-Use the programming language code of your choice to create a composed model that will be called with a single model ID. Below are links to code samples that demonstrate how to create a composed model from existing custom models:
-
-* [**C#/.NET**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md).
-
-* [**Java**](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/ComposeDocumentModel.java).
-
-* [**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v3/javascript/createComposedModel.js).
-
-* [**Python**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_compose_model.py)
---
-## Analyze documents with your custom or composed model
-
- The custom form **Analyze** operation requires you to provide the `modelID` in the call to Form Recognizer. You can provide a single custom model ID or a composed model ID for the `modelID` parameter.
-
-### [**Form Recognizer Sample Labeling tool**](#tab/fott)
-
-1. On the tool's left-pane menu, select the **Analyze icon** (light bulb).
-
-1. Choose a local file or image URL to analyze.
-
-1. Select the **Run Analysis** button.
-
-1. The tool will apply tags in bounding boxes and report the confidence percentage for each tag.
--
-### [**REST API**](#tab/rest-api)
-
-Using the REST API, you can make an [Analyze Document](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) request to analyze a document and extract key-value pairs and table data.
-
-### [**Client-library SDKs**](#tab/sdks)
-
-Using the programming language of your choice to analyze a form or document with a custom or composed model. You'll need your Form Recognizer endpoint, key, and model ID.
-
-* [**C#/.NET**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md)
-
-* [**Java**](https://github.com/Azure/azure-sdk-for-javocumentFromUrl.java)
-
-* [**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v3/javascript/recognizeCustomForm.js)
-
-* [**Python**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.1/sample_recognize_custom_forms.py)
---
-Test your newly trained models by [analyzing forms](./quickstarts/try-sdk-rest-api.md#analyze-forms-with-a-custom-model) that weren't part of the training dataset. Depending on the reported accuracy, you may want to do further training to improve the model. You can continue further training to [improve results](label-tool.md#improve-results).
-
-## Manage your custom models
-
-You can [manage your custom models](./quickstarts/try-sdk-rest-api.md#manage-custom-models) throughout their lifecycle by viewing a [list of all custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModels) under your subscription, retrieving information about [a specific custom model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModel), and [deleting custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/DeleteModel) from your account.
-
-Great! You've learned the steps to create custom and composed models and use them in your Form Recognizer projects and applications.
-
-## Next steps
-
-Learn more about the Form Recognizer client library by exploring our API reference documentation.
-
-> [!div class="nextstepaction"]
-> [Form Recognizer API reference](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
->
applied-ai-services Compose Custom Models V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models-v3.md
- Title: "How to guide: create and compose custom models with Form Recognizer v2.0"-
-description: Learn how to create, use, and manage Form Recognizer v2.0 custom and composed models
----- Previously updated : 08/22/2022-
-recommendations: false
--
-# Compose custom models v3.0
-
-> [!NOTE]
-> This how-to guide references Form Recognizer v3.0 . To use Form Recognizer v2.1 , see [Compose custom models v2.1](compose-custom-models-v2-1.md).
-
-A composed model is created by taking a collection of custom models and assigning them to a single model ID. You can assign up to 100 trained custom models to a single composed model ID. When a document is submitted to a composed model, the service performs a classification step to decide which custom model accurately represents the form presented for analysis. Composed models are useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
-
-To learn more, see [Composed custom models](concept-composed-models.md).
-
-In this article, you'll learn how to create and use composed custom models to analyze your forms and documents.
-
-## Prerequisites
-
-To get started, you'll need the following resources:
-
-* **An Azure subscription**. You can [create a free Azure subscription](https://azure.microsoft.com/free/cognitive-services/).
-
-* **A Form Recognizer instance**. Once you have your Azure subscription, [create a Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal to get your key and endpoint. If you have an existing Form Recognizer resource, navigate directly to your resource page. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
-
- 1. After the resource deploys, select **Go to resource**.
-
- 1. Copy the **Keys and Endpoint** values from the Azure portal and paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API.
-
- :::image border="true" type="content" source="media/containers/keys-and-endpoint.png" alt-text="Still photo showing how to access resource key and endpoint URL.":::
-
- > [!TIP]
- > For more information, see [**create a Form Recognizer resource**](create-a-form-recognizer-resource.md).
-
-* **An Azure storage account.** If you don't know how to create an Azure storage account, follow the [Azure Storage quickstart for Azure portal](../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
-
-## Create your custom models
-
-First, you'll need a set of custom models to compose. You can use the Form Recognizer Studio, REST API, or client-library SDKs. The steps are as follows:
-
-* [**Assemble your training dataset**](#assemble-your-training-dataset)
-* [**Upload your training set to Azure blob storage**](#upload-your-training-dataset)
-* [**Train your custom models**](#train-your-custom-model)
-
-## Assemble your training dataset
-
-Building a custom model begins with establishing your training dataset. You'll need a minimum of five completed forms of the same type for your sample dataset. They can be of different file types (jpg, png, pdf, tiff) and contain both text and handwriting. Your forms must follow the [input requirements](build-training-data-set.md#custom-model-input-requirements) for Form Recognizer.
-
->[!TIP]
-> Follow these tips to optimize your data set for training:
->
-> * If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
-> * For filled-in forms, use examples that have all of their fields filled in.
-> * Use forms with different values in each field.
-> * If your form images are of lower quality, use a larger data set (10-15 images, for example).
-
-See [Build a training data set](./build-training-data-set.md) for tips on how to collect your training documents.
-
-## Upload your training dataset
-
-When you've gathered a set of training documents, you'll need to [upload your training data](build-training-data-set.md#upload-your-training-data) to an Azure blob storage container.
-
-If you want to use manually labeled data, you'll also have to upload the *.labels.json* and *.ocr.json* files that correspond to your training documents.
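You can upload the files with any tool you prefer, such as the Azure portal or Azure Storage Explorer. As one rough illustration, the following Python sketch uses the `azure-storage-blob` package with a container SAS URL; the URL and folder path are placeholders rather than values from this article.

```python
# A minimal sketch using the azure-storage-blob package; the container SAS URL and
# local folder path are placeholders, not values from this article.
import os
from azure.storage.blob import ContainerClient

container_sas_url = "https://<storage-account>.blob.core.windows.net/<container>?<sas-token>"
training_folder = "./training-data"  # forms plus their *.labels.json and *.ocr.json files

container_client = ContainerClient.from_container_url(container_sas_url)

for file_name in os.listdir(training_folder):
    path = os.path.join(training_folder, file_name)
    if not os.path.isfile(path):
        continue
    with open(path, "rb") as data:
        # Blob names mirror the local file names so each document stays paired
        # with its .labels.json and .ocr.json companions.
        container_client.upload_blob(name=file_name, data=data, overwrite=True)
        print(f"Uploaded {file_name}")
```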
-
-## Train your custom model
-
-When you [train your model](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
-
-Form Recognizer uses the [prebuilt-layout model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started with training a new model. Then, add more labeled data, as needed, to improve the model accuracy. Form Recognizer enables training a model to extract key-value pairs and tables using supervised learning capabilities.
-
-### [Form Recognizer Studio](#tab/studio)
-
-To create custom models, start with configuring your project:
-
-1. From the Studio homepage, select [**Create new**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) from the Custom model card.
-
-1. Use the ➕ **Create a project** command to start the new project configuration wizard.
-
-1. Enter project details, select the Azure subscription and resource, and the Azure Blob storage container that contains your data.
-
-1. Review and submit your settings to create the project.
--
-While creating your custom models, you may need to extract data collections from your documents. The collections may appear in one of two formats, using tables as the visual pattern:
-
-* Dynamic or variable count of values (rows) for a given set of fields (columns)
-
-* Specific collection of values for a given set of fields (columns and/or rows)
-
-See [Form Recognizer Studio: labeling as tables](quickstarts/try-v3-form-recognizer-studio.md#labeling-as-tables).
-
-### [REST API](#tab/rest)
-
-Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents.
-
-Label files contain key-value associations that a user has entered manually. They're needed for labeled data training, but not every source file needs to have a corresponding label file. Source files without labels will be treated as ordinary training documents. We recommend five or more labeled files for reliable training. You can use a UI tool like [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects) to generate these files.
-
-Once you have your label files, you can include them by calling the training method with the *useLabelFile* parameter set to `true`.
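If you call the v3.0 REST API directly, training is a single asynchronous request to the `documentModels:build` endpoint referenced elsewhere in this documentation set. The following Python sketch is illustrative only; the endpoint, key, model ID, and container SAS URL are placeholders, and body field names such as `azureBlobSource` are assumptions to confirm against the API reference.

```python
# Illustrative only: a v3.0 (2022-08-31) build request. The endpoint, key, model ID,
# and container SAS URL are placeholders, and body field names such as
# "azureBlobSource" are assumptions to confirm against the API reference.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

body = {
    "modelId": "purchase-orders-template",
    "buildMode": "template",  # or "neural"
    "azureBlobSource": {"containerUrl": "<container-sas-url>"},
}

response = requests.post(
    f"{endpoint}/formrecognizer/documentModels:build?api-version=2022-08-31",
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)
response.raise_for_status()

# The build runs asynchronously; poll the Operation-Location URL until it completes.
print(response.headers["Operation-Location"])
```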
--
-### [Client-libraries](#tab/sdks)
-
-Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents. Once you have them, you can call the training method with the *useTrainingLabels* parameter set to `true`. A minimal Python sketch follows the table below.
-
-|Language |Method|
-|--|--|
-|**C#**|**StartBuildModel**|
-|**Java**| [**beginBuildModel**](/java/api/com.azure.ai.formrecognizer.documentanalysis.administration.documentmodeladministrationclient.beginbuildmodel)|
-|**JavaScript** | [**beginBuildModel**](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-beginbuildmodel&preserve-view=true)|
-| **Python** | [**begin_build_model**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.aio.documentmodeladministrationclient?view=azure-python#azure-ai-formrecognizer-aio-documentmodeladministrationclient-begin-build-model&preserve-view=true)
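The following Python sketch shows what a labeled training run might look like, assuming the `azure-ai-formrecognizer` 3.2.0 (GA) package, where the method is named `begin_build_document_model` (earlier previews use `begin_build_model`, as listed above). The endpoint, key, container SAS URL, and model ID are placeholders.

```python
# A minimal sketch, assuming azure-ai-formrecognizer 3.2.0 (GA). Earlier preview
# releases name this method begin_build_model. Endpoint, key, container SAS URL,
# and model ID are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentModelAdministrationClient

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"
training_container_sas_url = "<container-sas-url>"

admin_client = DocumentModelAdministrationClient(endpoint, AzureKeyCredential(key))

poller = admin_client.begin_build_document_model(
    "template",  # or "neural" for a custom neural model
    blob_container_url=training_container_sas_url,
    model_id="purchase-orders-template",
    description="Template model trained on labeled purchase orders",
)
model = poller.result()
print(model.model_id)
```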
---
-## Create a composed model
-
-> [!NOTE]
-> **The `create compose model` operation is only available for custom models trained _with_ labels.** Attempting to compose unlabeled models will produce an error.
-
-With the [**create compose model**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel) operation, you can assign up to 100 trained custom models to a single model ID. When you analyze documents with a composed model, Form Recognizer first classifies the form you submitted, then chooses the best matching assigned model, and returns results for that model. This operation is useful when incoming forms may belong to one of several templates.
-
-### [Form Recognizer Studio](#tab/studio)
-
-Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
-
-* [**Gather your custom model IDs**](#gather-your-model-ids)
-* [**Compose your custom models**](#compose-your-custom-models)
-* [**Analyze documents**](#analyze-documents)
-* [**Manage your composed models**](#manage-your-composed-models)
-
-#### Gather your model IDs
-
-When you train models using the [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/), the model ID is located in the models menu under a project:
--
-#### Compose your custom models
-
-1. Select a custom models project.
-
-1. In the project, select the ```Models``` menu item.
-
-1. From the resulting list of models, select the models you wish to compose.
-
-1. Choose the **Compose** button from the upper-left corner.
-
-1. In the pop-up window, name your newly composed model and select **Compose**.
-
-1. When the operation completes, your newly composed model will appear in the list.
-
-1. Once the model is ready, use the **Test** command to validate it with your test documents and observe the results.
-
-#### Analyze documents
-
-The custom model **Analyze** operation requires you to provide the `modelID` in the call to Form Recognizer. You should provide the composed model ID for the `modelID` parameter in your applications.
--
-#### Manage your composed models
-
-You can manage your custom models throughout their life cycles:
-
-* Test and validate new documents.
-* Download your model to use in your applications.
-* Delete your model when its lifecycle is complete.
--
-### [REST API](#tab/rest)
-
-Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
-
-* [**Compose your custom models**](#compose-your-custom-models)
-* [**Analyze documents**](#analyze-documents)
-* [**Manage your composed models**](#manage-your-composed-models)
--
-#### Compose your custom models
-
-The [compose model API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel) accepts a list of model IDs to be composed.
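As an illustrative Python sketch of that request, where the `componentModels` body shape, endpoint, key, and model IDs are assumptions or placeholders to confirm against the compose API reference:

```python
# Illustrative only: the endpoint, key, and model IDs are placeholders, and the
# "componentModels" body shape is an assumption to confirm against the
# Compose Document Model API reference.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

body = {
    "modelId": "purchase-orders-composed",
    "componentModels": [
        {"modelId": "supply-orders-template"},
        {"modelId": "equipment-orders-template"},
    ],
}

response = requests.post(
    f"{endpoint}/formrecognizer/documentModels:compose?api-version=2022-08-31",
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)
response.raise_for_status()

# Composition is asynchronous; poll the Operation-Location URL for completion.
print(response.headers["Operation-Location"])
```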
--
-#### Analyze documents
-
-To make an [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) request, use a unique model name in the request parameters.
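A rough Python sketch of the request and follow-up polling is shown below; the endpoint, key, model ID, document URL, and `urlSource` body field are placeholders or assumptions to verify against the Analyze Document reference.

```python
# Illustrative only: endpoint, key, model ID, document URL, and the "urlSource"
# body field are placeholders or assumptions to verify against the
# Analyze Document API reference.
import time
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"
model_id = "purchase-orders-composed"  # a composed or individual custom model ID

response = requests.post(
    f"{endpoint}/formrecognizer/documentModels/{model_id}:analyze?api-version=2022-08-31",
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"urlSource": "https://<your-storage>/sample-purchase-order.pdf"},
)
response.raise_for_status()
result_url = response.headers["Operation-Location"]

# Analysis is asynchronous; poll until the operation succeeds or fails.
while True:
    result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result.get("status") in ("succeeded", "failed"):
        break
    time.sleep(2)
print(result.get("status"))
```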
--
-#### Manage your composed models
-
-You can manage custom models throughout your development needs including [**copying**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/CopyDocumentModelTo), [**listing**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModels), and [**deleting**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/DeleteModel) your models.
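As a hedged sketch of those REST calls (endpoint, key, and the example model ID are placeholders):

```python
# A hedged sketch of the corresponding REST calls; endpoint, key, and model ID
# are placeholders.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}
base = f"{endpoint}/formrecognizer/documentModels"

# List every model under the resource.
models = requests.get(f"{base}?api-version=2022-08-31", headers=headers).json()

# Get details for a specific model, then delete it when it's no longer needed.
details = requests.get(f"{base}/purchase-orders-composed?api-version=2022-08-31", headers=headers).json()
requests.delete(f"{base}/purchase-orders-composed?api-version=2022-08-31", headers=headers)
```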
-
-### [Client-libraries](#tab/sdks)
-
-Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
-
-* [**Create a composed model**](#create-a-composed-model)
-* [**Analyze documents**](#analyze-documents)
-* [**Manage your composed models**](#manage-your-composed-models)
-
-#### Create a composed model
-
-You can use the programming language of your choice to create a composed model:
-
-| Programming language| Code sample |
-|--|--|
-|**C#** | [Model compose](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md)
-|**Java** | [Model compose](https://github.com/Azure/azure-sdk-for-java/blob/afa0d44fa42979ae9ad9b92b23cdba493a562127/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/ComposeDocumentModel.java)
-|**JavaScript** | [Compose model](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/composeModel.js)
-|**Python** | [Create composed model](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_compose_model.py)
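For example, the following minimal Python sketch assumes the `azure-ai-formrecognizer` 3.2.0 (GA) package, where the method is `begin_compose_document_model` (earlier previews name it `begin_create_composed_model`). The endpoint, key, and component model IDs are placeholders.

```python
# A minimal sketch, assuming azure-ai-formrecognizer 3.2.0 (GA); earlier preview
# releases name this method begin_create_composed_model. Endpoint, key, and the
# component model IDs are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentModelAdministrationClient

admin_client = DocumentModelAdministrationClient(
    "https://<your-resource>.cognitiveservices.azure.com", AzureKeyCredential("<your-key>")
)

poller = admin_client.begin_compose_document_model(
    ["supply-orders-template", "equipment-orders-template"],
    description="Composed purchase-order model",
)
composed_model = poller.result()
print(composed_model.model_id)
```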
-
-#### Analyze documents
-
-Once you've built your composed model, you can use it to analyze forms and documents. Use your composed `model ID` and let the service decide which of your aggregated custom models fits best according to the document provided.
-
-|Programming language| Code sample |
-|--|--|
-|**C#** | [Analyze a document with a custom/composed model using model ID](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_AnalyzeWithCustomModel.md)
-|**Java** | [Analyze a document with a custom/composed model using model ID](https://github.com/Azure/azure-sdk-for-javocumentFromUrl.java)
-|**JavaScript** | [Analyze a document with a custom/composed model using model ID](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/analyzeDocumentByModelId.js)
-|**Python** | [Analyze a document with a custom/composed model using model ID](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_analyze_custom_documents.py)
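The following minimal Python sketch assumes the `azure-ai-formrecognizer` 3.2.x package; the endpoint, key, model ID, and document URL are placeholders.

```python
# A minimal sketch, assuming azure-ai-formrecognizer 3.2.x; endpoint, key, model ID,
# and document URL are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com", AzureKeyCredential("<your-key>")
)

poller = client.begin_analyze_document_from_url(
    "purchase-orders-composed",  # composed model ID
    "https://<your-storage>/sample-purchase-order.pdf",
)
result = poller.result()

# doc_type indicates which component model the service matched to the document.
for document in result.documents:
    print(document.doc_type, document.confidence)
    for name, field in document.fields.items():
        print(f"  {name}: {field.value} (confidence {field.confidence})")
```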
-
-## Manage your composed models
-
-You can manage a custom model at each stage of its life cycle. You can copy a custom model between resources, view a list of all custom models under your subscription, retrieve information about a specific custom model, and delete custom models from your account. A Python sketch of these calls follows the table below.
-
-|Programming language| Code sample |
-|--|--|
-|**C#** | [Copy a custom model between Form Recognizer resources](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_CopyCustomModel.md#copy-a-custom-model-between-form-recognizer-resources)|
-|**Java** | [Copy a custom model between Form Recognizer resources](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/CopyDocumentModel.java)|
-|**JavaScript** | [Copy a custom model between Form Recognizer resources](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/copyModel.js)|
-|**Python** | [Copy a custom model between Form Recognizer resources](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_copy_model_to.py)|
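As a hedged Python sketch of those management calls, assuming `azure-ai-formrecognizer` 3.2.0 (GA) method names (earlier previews use `get_model`, `list_models`, and `delete_model`); endpoint, key, and model ID are placeholders:

```python
# A hedged sketch, assuming azure-ai-formrecognizer 3.2.0 (GA) method names;
# earlier previews use get_model, list_models, and delete_model. Endpoint, key,
# and model ID are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentModelAdministrationClient

admin_client = DocumentModelAdministrationClient(
    "https://<your-resource>.cognitiveservices.azure.com", AzureKeyCredential("<your-key>")
)

# Enumerate the custom and composed models under your resource.
for summary in admin_client.list_document_models():
    print(summary.model_id)

# Retrieve details for one model, then delete it at the end of its life cycle.
details = admin_client.get_document_model("purchase-orders-composed")
print(details.description)
admin_client.delete_document_model("purchase-orders-composed")
```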
---
-## Next steps
-
-Try one of our Form Recognizer quickstarts:
-
-> [!div class="nextstepaction"]
-> [Form Recognizer Studio](quickstarts/try-v3-form-recognizer-studio.md)
-
-> [!div class="nextstepaction"]
-> [REST API](quickstarts/get-started-v3-sdk-rest-api.md)
-
-> [!div class="nextstepaction"]
-> [C#](quickstarts/get-started-v3-sdk-rest-api.md#prerequisites)
-
-> [!div class="nextstepaction"]
-> [Java](quickstarts/get-started-v3-sdk-rest-api.md)
-
-> [!div class="nextstepaction"]
-> [JavaScript](quickstarts/get-started-v3-sdk-rest-api.md)
-
-> [!div class="nextstepaction"]
-> [Python](quickstarts/get-started-v3-sdk-rest-api.md)
applied-ai-services Concept Accuracy Confidence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-accuracy-confidence.md
Previously updated : 02/15/2022 Last updated : 10/10/2022
The accuracy of your model is affected by variances in the visual structure of y
* Separate visually distinct document types to train different models. * As a general rule, if you remove all user entered values and the documents look similar, you need to add more training data to the existing model.
- * If the documents are dissimilar, split your training data into different folders and train a model for each variation. You can then [compose](compose-custom-models-v2-1.md#create-a-composed-model) the different variations into a single model.
+ * If the documents are dissimilar, split your training data into different folders and train a model for each variation. You can then [compose](how-to-guides/compose-custom-models.md?view=form-recog-2.1.0&preserve-view=true#create-a-composed-model) the different variations into a single model.
* Make sure that you don't have any extraneous labels.
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
Previously updated : 08/22/2022 Last updated : 10/10/2022 recommendations: false
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID | |-|-|--|
-|**Business card model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|**prebuilt-businessCard**|
+|**Business card model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-businessCard**|
The following tools are supported by Form Recognizer v2.1:
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
Previously updated : 08/22/2022 Last updated : 10/10/2022 recommendations: false
With composed models, you can assign multiple custom models to a composed model
### Composed model compatibility
-|Custom model type |Models trained with version 2.1 and v2.0 | Custom template models (3.0) preview | Custom neural models 3.0 Preview |Custom neural models 3.0 GA|
+|Custom model type|Models trained with v2.1 and v2.0|Custom template models v3.0 (preview)|Custom neural models v3.0 (preview)|Custom neural models v3.0 (GA)|
|--|--|--|--|--|
-| Models trained with version 2.1 and v2.0 | Supported | Supported | Not Supported | Not Supported |
-| Custom template models (3.0) preview | Supported |Supported | Not Supported | Not Supported |
-| Custom template models 3.0 GA | Not Supported |Not Supported | Supported | Not Supported |
-| Custom neural models 3.0 Preview | Not Supported | NotSupported | Supported | Not Supported |
-|Custom Neural models 3.0 GA| Not Supported | NotSupported |NotSupported |Supported |
-
+|**Models trained with version 2.1 and v2.0** |Supported|Supported|Not Supported|Not Supported|
+|**Custom template models v3.0 (preview)** |Supported|Supported|Not Supported|Not Supported|
+|**Custom template models v3.0 (GA)** |Not Supported|Not Supported|Supported|Not Supported|
+|**Custom neural models v3.0 (preview)**|Not Supported|Not Supported|Supported|Not Supported|
+|**Custom neural models v3.0 (GA)**|Not Supported|Not Supported|Not Supported|Supported|
* To compose a model trained with a prior version of the API (v2.1 or earlier), train a model with the v3.0 API using the same labeled dataset. That addition will ensure that the v2.1 model can be composed with other models.
With composed models, you can assign multiple custom models to a composed model
## Development options The following resources are supported by Form Recognizer **v3.0** : | Feature | Resources | |-|-|
-|_**Custom model**_| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[Java SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[JavaScript SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[Python SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|
+|_**Custom model**_| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[Java SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[Python SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|
| _**Composed model**_| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</li><li>[JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>|+ The following resources are supported by Form Recognizer v2.1:
The following resources are supported by Form Recognizer v2.1:
|-|-| |_**Custom model**_| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](quickstarts/try-sdk-rest-api.md)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>| | _**Composed model**_ |<ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net/)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/Compose)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.training.createcomposedmodeloperation?view=azure-dotnet&preserve-view=true)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.models.createcomposedmodeloptions?view=azure-java-stable&preserve-view=true)</li><li>JavaScript SDK</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>|- ## Next steps Learn to create and compose custom models: > [!div class="nextstepaction"]
-> [**Form Recognizer v2.1**](compose-custom-models-v2-1.md)
+> [**Build a custom model**](how-to-guides/build-a-custom-model.md)
+> [**Compose custom models**](how-to-guides/compose-custom-models.md)
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
Previously updated : 08/02/2022 Last updated : 10/10/2022 recommendations: false
Custom neural models are only available in the [v3 API](v3-migration-guide.md).
| Document Type | REST API | SDK | Label and Test Models| |--|--|--|--|
-| Custom document | [Form Recognizer 3.0 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-v3-sdk-rest-api.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+| Custom document | [Form Recognizer 3.0 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
The build operation to train a model supports a new ```buildMode``` property. To train a custom neural model, set the ```buildMode``` to ```neural```.
https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-08-31
## Next steps
-* Train a custom model:
+Learn to create and compose custom models:
- > [!div class="nextstepaction"]
- > [How to train a model](how-to-guides/build-custom-model-v3.md)
-
-* Learn more about custom template models:
-
- > [!div class="nextstepaction"]
- > [Custom template models](concept-custom-template.md )
-
-* View the REST API:
-
- > [!div class="nextstepaction"]
- > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
+> [!div class="nextstepaction"]
+> [**Build a custom model**](how-to-guides/build-a-custom-model.md)
+> [**Compose custom models**](how-to-guides/compose-custom-models.md)
applied-ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-template.md
Previously updated : 08/22/2022 Last updated : 10/10/2022 recommendations: false
Template models are available generally [v3.0 API](https://westus.dev.cognitive.
| Model | REST API | SDK | Label and Test Models| |--|--|--|--|
-| Custom template | [Form Recognizer 3.0 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-v3-sdk-rest-api.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
+| Custom template | [Form Recognizer 3.0 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
| Custom template | [Form Recognizer 2.1 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm)| [Form Recognizer SDK](quickstarts/get-started-v2-1-sdk-rest-api.md?pivots=programming-language-python)| [Form Recognizer Sample labeling tool](https://fott-2-1.azurewebsites.net/)| On the v3 API, the build operation to train a model supports a new ```buildMode``` property. To train a custom template model, set the ```buildMode``` to ```template```.
https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-08-31
} ``` - ## Next steps
-* * Train a custom model:
-
- > [!div class="nextstepaction"]
- > [How to train a model](how-to-guides/build-custom-model-v3.md)
-
-* Learn more about custom neural models:
-
- > [!div class="nextstepaction"]
- > [Custom neural models](concept-custom-neural.md )
-
-* View the REST API:
+Learn to create and compose custom models:
- > [!div class="nextstepaction"]
- > [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm)
+> [!div class="nextstepaction"]
+> [**Build a custom model**](how-to-guides/build-a-custom-model.md)
+> [**Compose custom models**](how-to-guides/compose-custom-models.md)
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID| |||:|
-|Custom model| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[Python SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|***custom-model-id***|
+|Custom model| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[Python SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|***custom-model-id***|
The following tools are supported by Form Recognizer v2.1:
The following table describes the features available with the associated tools a
| Document type | REST API | SDK | Label and Test Models| |--|--|--|--| | Custom form 2.1 | [Form Recognizer 2.1 GA API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) | [Form Recognizer SDK](quickstarts/get-started-v2-1-sdk-rest-api.md?pivots=programming-language-python)| [Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
-| Custom template 3.0 | [Form Recognizer 3.0 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-v3-sdk-rest-api.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
-| Custom neural | [Form Recognizer 3.0 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-v3-sdk-rest-api.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+| Custom template 3.0 | [Form Recognizer 3.0 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
+| Custom neural | [Form Recognizer 3.0 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
> [!NOTE] > Custom template models trained with the 3.0 API will have a few improvements over the 2.1 API stemming from improvements to the OCR engine. Datasets used to train a custom template model using the 2.1 API can still be used to train a new model using the 3.0 API.
applied-ai-services Concept Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-form-recognizer-studio.md
Previously updated : 08/22/2022 Last updated : 10/10/2022
+monikerRange: 'form-recog-3.0.0'
+recommendations: false
# Form Recognizer Studio
-[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service into your applications. Use the [Form Recognizer Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) to get started analyzing documents with pre-trained models. Build custom template models and reference the models in your applications using the [Python SDK v3.0](quickstarts/get-started-v3-sdk-rest-api.md) and other quickstarts.
+[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service into your applications. Use the [Form Recognizer Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) to get started analyzing documents with pre-trained models. Build custom template models and reference the models in your applications using the [Python SDK v3.0](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and other quickstarts.
The following image shows the Invoice prebuilt model feature at work.
The following image shows the Invoice prebuilt model feature at work.
The following Form Recognizer service features are available in the Studio.
-* **Read**: Try out Form Recognizer's Read feature to extract text lines, words, detected languages, and handwritten style if detected. Start with the [Studio Read feature](https://formrecognizer.appliedai.azure.com/studio/read). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Read overview](concept-read.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-v3-sdk-rest-api.md).
+* **Read**: Try out Form Recognizer's Read feature to extract text lines, words, detected languages, and handwritten style if detected. Start with the [Studio Read feature](https://formrecognizer.appliedai.azure.com/studio/read). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Read overview](concept-read.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true).
-* **Layout**: Try out Form Recognizer's Layout feature to extract text, tables, selection marks, and structure information. Start with the [Studio Layout feature](https://formrecognizer.appliedai.azure.com/studio/layout). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Layout overview](concept-layout.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-v3-sdk-rest-api.md#layout-model).
+* **Layout**: Try out Form Recognizer's Layout feature to extract text, tables, selection marks, and structure information. Start with the [Studio Layout feature](https://formrecognizer.appliedai.azure.com/studio/layout). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Layout overview](concept-layout.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model).
-* **General Documents**: Try out Form Recognizer's General Documents feature to extract key-value pairs and entities. Start with the [Studio General Documents feature](https://formrecognizer.appliedai.azure.com/studio/document). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [General Documents overview](concept-general-document.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model).
+* **General Documents**: Try out Form Recognizer's General Documents feature to extract key-value pairs and entities. Start with the [Studio General Documents feature](https://formrecognizer.appliedai.azure.com/studio/document). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [General Documents overview](concept-general-document.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model).
-* **Prebuilt models**: Form Recognizer's pre-built models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. As an example, start with the [Studio Invoice feature](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice). Explore with sample documents and your documents. Use the interactive visualization, extracted fields list, and JSON output to understand how the feature works. See the [Models overview](concept-model-overview.md) to learn more and get started with the [Python SDK quickstart for Prebuilt Invoice](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model).
+* **Prebuilt models**: Form Recognizer's pre-built models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. As an example, start with the [Studio Invoice feature](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice). Explore with sample documents and your documents. Use the interactive visualization, extracted fields list, and JSON output to understand how the feature works. See the [Models overview](concept-model-overview.md) to learn more and get started with the [Python SDK quickstart for Prebuilt Invoice](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model).
* **Custom models**: Form Recognizer's custom models enable you to extract fields and values from models trained with your data, tailored to your forms and documents. Create standalone custom models or combine two or more custom models to create a composed model to extract data from multiple form types. Start with the [Studio Custom models feature](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects). Use the online wizard, labeling interface, training step, and visualizations to understand how the feature works. Test the custom model with your sample documents and iterate to improve the model. See the [Custom models overview](concept-custom.md) to learn more and use the [Form Recognizer v3.0 migration guide](v3-migration-guide.md) to start integrating the new models with your applications. ## Next steps * Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn the differences from the previous version of the REST API.
-* Explore our [**v3.0 SDK quickstarts**](quickstarts/get-started-v3-sdk-rest-api.md) to try the v3.0 features in your applications using the new SDKs.
-* Refer to our [**v3.0 REST API quickstarts**](quickstarts/get-started-v3-sdk-rest-api.md) to try the v3.0features using the new REST API.
+* Explore our [**v3.0 SDK quickstarts**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) to try the v3.0 features in your applications using the new SDKs.
+* Refer to our [**v3.0 REST API quickstarts**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) to try the v3.0 features using the new REST API.
> [!div class="nextstepaction"] > [Form Recognizer Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md)
applied-ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-general-document.md
Previously updated : 08/22/2022 Last updated : 10/10/2022
+monikerRange: 'form-recog-3.0.0'
recommendations: false <!-- markdownlint-disable MD033 -->
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID |-|-||
-| **General document model**|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|**prebuilt-document**|
+| **General document model**|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-document**|
### Try Form Recognizer
You'll need the following resources:
Key-value pairs are specific spans within the document that identify a label or key and its associated response or value. In a structured form, these pairs could be the label and the value the user entered for that field. In an unstructured document, they could be the date a contract was executed on based on the text in a paragraph. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
-Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are spans of text contained in the document. If you have documents where the same value is described in different ways, for example, customer and user, the associated key will be either customer or user, based on context.
+Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are spans of text contained in the document. For documents where the same value is described in different ways, for example, customer/user, the associated key will be either customer or user (based on context).
## Data extraction
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
Previously updated : 08/22/2022 Last updated : 10/10/2022 recommendations: false
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID | |-|-|--|
-|**ID document model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|**prebuilt-idDocument**|
+|**ID document model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-idDocument**|
The following tools are supported by Form Recognizer v2.1:
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
Previously updated : 08/22/2022 Last updated : 10/10/2022 recommendations: false
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID | |-|-|--|
-|**Invoice model** | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|**prebuilt-invoice**|
+|**Invoice model** | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-invoice**|
The following tools are supported by Form Recognizer v2.1:
The invoice key-value pairs and line items extracted are in the `documentResults
The prebuilt invoice **2022-06-30** and later releases support returns key-value pairs at no extra cost. Key-value pairs are specific spans within the invoice that identify a label or key and its associated response or value. In an invoice, these pairs could be the label and the value the user entered for that field or telephone number. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
-Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. key-value pairs are always spans of text contained in the document. If you have documents where the same value is described in different ways, for example, a customer or a user, the associated key will be either customer or user based on context.
+Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. key-value pairs are always spans of text contained in the document. For documents where the same value is described in different ways, for example, customer/user, the associated key will be either customer or user (based on context).
## Form Recognizer v3.0
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
Previously updated : 08/22/2022 Last updated : 10/10/2022 recommendations: false
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID | |-|||
-|**Layout model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|**prebuilt-layout**|
+|**Layout model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-layout**|
The following tools are supported by Form Recognizer v2.1:
The Layout model extracts all identified blocks of text in the `paragraphs` coll
```json "paragraphs": [
- {
- "spans": [],
- "boundingRegions": [],
- "content": "While healthcare is still in the early stages of its Al journey, we are seeing pharmaceutical and other life sciences organizations making major investments in Al and related technologies.\" TOM LAWRY | National Director for Al, Health and Life Sciences | Microsoft"
- }
+ {
+ "spans": [],
+ "boundingRegions": [],
+ "content": "While healthcare is still in the early stages of its Al journey, we are seeing pharmaceutical and other life sciences organizations making major investments in Al and related technologies.\" TOM LAWRY | National Director for Al, Health and Life Sciences | Microsoft"
+ }
] ``` ### Paragraph roles<sup> 🆕</sup>
The pages collection is the very first object you see in the service response.
```json "pages": [
- {
- "pageNumber": 1,
- "angle": 0,
- "width": 915,
- "height": 1190,
- "unit": "pixel",
- "words": [],
- "lines": [],
- "spans": [],
- "kind": "document"
- }
+ {
+ "pageNumber": 1,
+ "angle": 0,
+ "width": 915,
+ "height": 1190,
+ "unit": "pixel",
+ "words": [],
+ "lines": [],
+ "spans": [],
+ "kind": "document"
+ }
] ``` ### Text lines and words
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
Previously updated : 08/22/2022 Last updated : 10/10/2022
+monikerRange: 'form-recog-3.0.0'
recommendations: false- # Form Recognizer Read OCR model
The following resources are supported by Form Recognizer v3.0:
| Model | Resources | Model ID | |-|||
-|**prebuilt-read**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](how-to-guides/v3-0-sdk-rest-api.md)</li><li>[**C# SDK**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-javascript)</li></ul>|**prebuilt-read**|
+|**Read model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-javascript)</li></ul>|**prebuilt-read**|
## Try Form Recognizer
Read adds [language detection](language-support.md#detected-languages-read-api)
] ``` ### Microsoft Office and HTML support (preview) <sup>🆕</sup>
-Use the parameter `api-version=2022-06-30-preview when using the REST API or the corresponding SDKs of that API version to preview the support for Microsoft Word, Excel, PowerPoint, and HTML files.
+Use the parameter `api-version=2022-06-30-preview` when using the REST API or the corresponding SDKs of that API version to preview the support for Microsoft Word, Excel, PowerPoint, and HTML files.
:::image type="content" source="media/office-to-ocr.png" alt-text="Screenshot of a Microsoft Word document extracted by Form Recognizer Read OCR.":::
Complete a Form Recognizer quickstart:
> [!div class="checklist"] >
-> * [**REST API**](how-to-guides/v3-0-sdk-rest-api.md)
-> * [**C# SDK**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-csharp)
-> * [**Python SDK**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-python)
-> * [**Java SDK**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-java)
-> * [**JavaScript**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-javascript)</li></ul>
+> * [**REST API**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
+> * [**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-csharp)
+> * [**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-python)
+> * [**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-java)
+> * [**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-javascript)</li></ul>
Explore our REST API:
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID | |-|-|--|
-|**Receipt model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|**prebuilt-receipt**|
+|**Receipt model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-receipt**|
The following tools are supported by Form Recognizer v2.1:
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
Previously updated : 08/22/2022 Last updated : 10/10/2022
+monikerRange: 'form-recog-3.0.0'
recommendations: false
The prebuilt W-2 model is supported by Form Recognizer v3.0 with the following t
| Feature | Resources | Model ID | |-|-|--|
-|**W-2 model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|**prebuilt-tax.us.w2**|
+|**W-2 model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|**prebuilt-tax.us.w2**|
### Try Form Recognizer
Try extracting data from W-2 forms using the Form Recognizer Studio. You'll need
* Complete a Form Recognizer quickstart: > [!div class="checklist"] >
-> * [**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)
-> * [**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)
-> * [**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)
-> * [**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)
-> * [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>
+> * [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
+> * [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)
+> * [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)
+> * [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)
+> * [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>
applied-ai-services Create A Form Recognizer Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-a-form-recognizer-resource.md
That's it! You're now ready to start automating data extraction using Azure Form
* Complete a Form Recognizer quickstart and get started creating a document processing app in the development language of your choice:
- * [C#](quickstarts/get-started-v3-sdk-rest-api.md)
- * [Python](quickstarts/get-started-v3-sdk-rest-api.md)
- * [Java](quickstarts/get-started-v3-sdk-rest-api.md)
- * [JavaScript](quickstarts/get-started-v3-sdk-rest-api.md)
+ * [C#](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
+ * [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
+ * [Java](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
+ * [JavaScript](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
applied-ai-services Deploy Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/deploy-label-tool.md
Previously updated : 02/15/2022 Last updated : 10/10/2022
+monikerRange: 'form-recog-2.1.0'
+recommendations: false
# Deploy the Sample Labeling tool
> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio). > * The v3.0 Studio supports any model trained with v2.1 labeled data. > * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
-> * *See* our [**REST API**](quickstarts/get-started-v3-sdk-rest-api.md) or [**C#**](quickstarts/get-started-v3-sdk-rest-api.md), [**Java**](quickstarts/get-started-v3-sdk-rest-api.md), [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md), or [Python](quickstarts/get-started-v3-sdk-rest-api.md) SDK quickstarts to get started with the v3.0 version.
+> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version.
> [!NOTE] > The [cloud hosted](https://fott-2-1.azurewebsites.net/) labeling tool is available at [https://fott-2-1.azurewebsites.net/](https://fott-2-1.azurewebsites.net/). Follow the steps in this document only if you want to deploy the sample labeling tool for yourself.
applied-ai-services Form Recognizer Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/form-recognizer-studio-overview.md
The studio supports Form Recognizer v3.0 models and v3.0 model training. Previou
:::image type="content" source="media/studio/form-recognizer-studio-front.png" alt-text="Screenshot of Form Recognizer Studio front page.":::
-1. After you've tried Form Recognizer Studio, use the [**C#**](quickstarts/get-started-v3-sdk-rest-api.md), [**Java**](quickstarts/get-started-v3-sdk-rest-api.md), [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md) or [**Python**](quickstarts/get-started-v3-sdk-rest-api.md) client libraries or the [**REST API**](quickstarts/get-started-v3-sdk-rest-api.md) to get started incorporating Form Recognizer models into your own applications.
+1. After you've tried Form Recognizer Studio, use the [**C#**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**Python**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) client libraries or the [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) to get started incorporating Form Recognizer models into your own applications.
To learn more about each model, *see* concepts pages.
applied-ai-services Build A Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-a-custom-model.md
+
+ Title: "Build and train a custom model"
+
+description: Learn how to build, label, and train a custom model.
+++++ Last updated : 10/10/2022+
+recommendations: false
++
+# Build and train a custom model
++
+Form Recognizer models require as few as five training documents to get started. If you have at least five documents, you can train either a [custom template model (custom form)](../concept-custom-template.md) or a [custom neural model (custom document)](../concept-custom-neural.md). The training process is identical for both models, and this document walks you through training either one.
+
+## Custom model input requirements
+
+First, make sure your training data set follows the input requirements for Form Recognizer.
++++
+## Training data tips
+
+Follow these tips to further optimize your data set for training:
+
+* If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
+* For forms with input fields, use examples that have all of the fields completed.
+* Use forms with different values in each field.
+* If your form images are of lower quality, use a larger data set (10-15 images, for example).
+
+## Upload your training data
+
+Once you've put together the set of forms or documents for training, you'll need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
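+
+For illustration, here's a minimal sketch that uploads a local folder of training documents to the container with the `azure-storage-blob` package; the container SAS URL, local folder, and file pattern are placeholders, not values from this article.
+
+```python
+# A minimal sketch: upload local training documents to the blob container used
+# for training. Assumes the azure-storage-blob package and a container SAS URL
+# with write access; paths and names below are placeholders.
+from pathlib import Path
+
+from azure.storage.blob import ContainerClient
+
+container = ContainerClient.from_container_url("<container SAS URL with write access>")
+
+for doc in Path("./training-docs").glob("*.pdf"):
+    with open(doc, "rb") as data:
+        # The blob name becomes the document's path inside the container.
+        container.upload_blob(name=doc.name, data=data, overwrite=True)
+```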
+
+## Create a project in the Form Recognizer Studio
+
+The Form Recognizer Studio provides and orchestrates all the API calls required to complete your dataset and train your model.
+
+1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). The first time you use the Studio, you'll need to [initialize your subscription, resource group, and resource](../quickstarts/try-v3-form-recognizer-studio.md). Then, follow the [prerequisites for custom projects](../quickstarts/try-v3-form-recognizer-studio.md#additional-prerequisites-for-custom-projects) to configure the Studio to access your training dataset.
+
+1. In the Studio, select the **Custom models** tile. On the custom models page, select the **Create a project** button.
+
+ :::image type="content" source="../media/how-to/studio-create-project.png" alt-text="Screenshot: Create a project in the Form Recognizer Studio.":::
+
+ 1. On the create project dialog, provide a name for your project and, optionally, a description, and then select **Continue**.
+
+ 1. On the next step in the workflow, choose or create a Form Recognizer resource, and then select **Continue**.
+
+ > [!IMPORTANT]
+ > Custom neural models are only available in a few regions. If you plan to train a neural model, select or create a resource in one of [these supported regions](../concept-custom-neural.md#supported-regions).
+
+ :::image type="content" source="../media/how-to/studio-select-resource.png" alt-text="Screenshot: Select the Form Recognizer resource.":::
+
+1. Next, select the storage account you used to upload your custom model training dataset. The **Folder path** should be empty if your training documents are in the root of the container. If your documents are in a subfolder, enter the relative path from the container root in the **Folder path** field. Once your storage account is configured, select **Continue**.
+
+ :::image type="content" source="../media/how-to/studio-select-storage.png" alt-text="Screenshot: Select the storage account.":::
+
+1. Finally, review your project settings and select **Create Project** to create a new project. You should now be in the labeling window and see the files in your dataset listed.
+
+## Label your data
+
+In your project, your first task is to label your dataset with the fields you wish to extract.
+
+You'll see the files you uploaded to storage on the left of your screen, with the first file ready to be labeled.
+
+1. To start labeling your dataset, create your first field by selecting the plus (+) button on the top-right of the screen, and then select a field type.
+
+ :::image type="content" source="../media/how-to/studio-create-label.png" alt-text="Screenshot: Create a label.":::
+
+1. Enter a name for the field.
+
+1. To assign a value to the field, choose a word or words in the document and select the field in either the dropdown or the field list on the right navigation bar. You'll see the labeled value below the field name in the list of fields.
+
+1. Repeat the process for all the fields you wish to label for your dataset.
+
+1. Label the remaining documents in your dataset by selecting each document and selecting the text to be labeled.
+
+You now have all the documents in your dataset labeled. If you look at the storage account, you'll find *.labels.json* and *.ocr.json* files that correspond to each document in your training dataset, and a new *fields.json* file. This training dataset is submitted to train the model.
+
+## Train your model
+
+With your dataset labeled, you're now ready to train your model. Select the train button in the upper-right corner.
+
+1. On the train model dialog, provide a unique model ID and, optionally, a description. The model ID accepts a string data type.
+
+1. For the build mode, select the type of model you want to train. Learn more about the [model types and capabilities](../concept-custom.md).
+
+ :::image type="content" source="../media/how-to/studio-train-model.png" alt-text="Screenshot: Train model dialog":::
+
+1. Select **Train** to initiate the training process.
+
+1. Template models train in a few minutes. Neural models can take up to 30 minutes to train.
+
+1. Navigate to the *Models* menu to view the status of the train operation.
+
+## Test the model
+
+Once the model training is complete, you can test your model by selecting the model on the models list page.
+
+1. Select the model, and then select the **Test** button.
+
+1. Select the `+ Add` button to select a file to test the model.
+
+1. With a file selected, choose the **Analyze** button to test the model.
+
+1. The model results are displayed in the main window and the fields extracted are listed in the right navigation bar.
+
+1. Validate your model by evaluating the results for each field.
+
+1. The right navigation bar also has the sample code to invoke your model and the JSON results from the API.
+
+Congratulations, you've trained a custom model in the Form Recognizer Studio! Your model is ready for use with the REST API or the SDK to analyze documents.
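+
+For example, here's a minimal sketch of calling the newly trained model from code; it assumes the `azure-ai-formrecognizer` Python package (3.2.x), and the endpoint, key, model ID, and file path are placeholders.
+
+```python
+# A minimal sketch: analyze a document with the custom model you just trained.
+# Assumes azure-ai-formrecognizer 3.2.x; endpoint, key, model ID, and file path
+# are placeholders.
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+client = DocumentAnalysisClient("<your-endpoint>", AzureKeyCredential("<your-key>"))
+
+with open("test-form.pdf", "rb") as f:
+    poller = client.begin_analyze_document(model_id="<your-model-id>", document=f)
+result = poller.result()
+
+for document in result.documents:
+    for name, field in document.fields.items():
+        print(f"{name}: {field.value} (confidence {field.confidence})")
+```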
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about custom model types](../concept-custom.md)
+
+> [!div class="nextstepaction"]
+> [Learn about accuracy and confidence with custom models](../concept-accuracy-confidence.md)
+++
+**Applies to:** ![Form Recognizer v2.1 checkmark](../medi?view=form-recog-3.0.0&preserve-view=true)
+
+When you use the Form Recognizer custom model, you provide your own training data to the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) operation, so that the model can train to your industry-specific forms. Follow this guide to learn how to collect and prepare data to train the model effectively.
+
+You need at least five filled-in forms of the same type.
+
+If you want to use manually labeled training data, you must start with at least five filled-in forms of the same type. You can still use unlabeled forms in addition to the required data set.
+
+## Custom model input requirements
+
+First, make sure your training data set follows the input requirements for Form Recognizer.
++++
+## Training data tips
+
+Follow these tips to further optimize your data set for training.
+
+* If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
+* For filled-in forms, use examples that have all of their fields filled in.
+* Use forms with different values in each field.
+* If your form images are of lower quality, use a larger data set (10-15 images, for example).
+
+## Upload your training data
+
+When you've put together the set of form documents that you'll use for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). Use the standard performance tier.
+
+If you want to use manually labeled data, you'll also have to upload the *.labels.json* and *.ocr.json* files that correspond to your training documents. You can use the [Sample Labeling tool](../label-tool.md) (or your own UI) to generate these files.
+
+### Organize your data in subfolders (optional)
+
+By default, the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) API will only use documents that are located at the root of your storage container. However, you can train with data in subfolders if you specify it in the API call. Normally, the body of the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) call has the following format, where `<SAS URL>` is the Shared access signature URL of your container:
+
+```json
+{
+ "source":"<SAS URL>"
+}
+```
+
+If you add the following content to the request body, the API will train with documents located in subfolders. The `"prefix"` field is optional and will limit the training data set to files whose paths begin with the given string. So a value of `"Test"`, for example, will cause the API to look at only the files or folders that begin with the word "Test".
+
+```json
+{
+ "source": "<SAS URL>",
+ "sourceFilter": {
+ "prefix": "<prefix string>",
+ "includeSubFolders": true
+ },
+ "useLabelFile": false
+}
+```
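+
+A minimal sketch of sending this request with Python's `requests` library is shown below; it assumes the v2.1 training route (`/formrecognizer/v2.1/custom/models`) from the linked reference, and the endpoint, key, and SAS URL are placeholders.
+
+```python
+# A minimal sketch: submit a v2.1 Train Custom Model request with the subfolder
+# filter shown above. Endpoint, key, and SAS URL are placeholders, and the v2.1
+# training route is an assumption based on the linked reference.
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+key = "<your-key>"
+
+body = {
+    "source": "<SAS URL>",
+    "sourceFilter": {"prefix": "Test", "includeSubFolders": True},
+    "useLabelFile": False,
+}
+
+response = requests.post(
+    f"{endpoint}/formrecognizer/v2.1/custom/models",
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json=body,
+)
+response.raise_for_status()
+
+# Training is asynchronous; the new model's URL is returned in the Location header.
+print(response.headers["Location"])
+```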
+
+## Next steps
+
+Now that you've learned how to build a training data set, follow a quickstart to train a custom Form Recognizer model and start using it on your forms.
+
+* [Train a model and extract document data using the client library or REST API](../quickstarts/get-started-sdks-rest-api.md)
+* [Train with labels using the sample labeling tool](../label-tool.md)
+
+## See also
+
+* [What is Form Recognizer?](../overview.md)
+
applied-ai-services Build Custom Model V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-custom-model-v3.md
- Title: "Train a custom model in the Form Recognizer Studio"-
-description: Learn how to build, label, and train a custom model in the Form Recognizer Studio.
----- Previously updated : 04/13/2022---
-# Build your training dataset for a custom model
-
-Form Recognizer models require as few as five training documents to get started. If you have at least five documents, you can get started training a custom model. You can train either a [custom template model (custom form)](../concept-custom-template.md) or a [custom neural model (custom document)](../concept-custom-neural.md). The training process is identical for both models and this document walks you through the process of training either model.
-
-## Custom model input requirements
-
-First, make sure your training data set follows the input requirements for Form Recognizer.
--
-## Training data tips
-
-Follow these tips to further optimize your data set for training:
-
-* If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
-* For forms with input fields, use examples that have all of the fields completed.
-* Use forms with different values in each field.
-* If your form images are of lower quality, use a larger data set (10-15 images, for example).
-
-## Upload your training data
-
-Once you've put together the set of forms or documents for training, you'll need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, following the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
-
-## Create a project in the Form Recognizer Studio
-
-The Form Recognizer Studio provides and orchestrates all the API calls required to complete your dataset and train your model.
-
-1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). The first time you use the Studio, you'll need to [initialize your subscription, resource group, and resource](../quickstarts/try-v3-form-recognizer-studio.md). Follow the [additional prerequisite for custom projects](../quickstarts/try-v3-form-recognizer-studio.md#additional-prerequisites-for-custom-projects) to configure the Studio to access your training dataset.
-
-1. In the Studio, select the **Custom models** tile, on the custom models page and select the **Create a project** button.
-
- :::image type="content" source="../media/how-to/studio-create-project.png" alt-text="Screenshot: Create a project in the Form Recognizer Studio.":::
-
- 1. On the create project dialog, provide a name for your project, optionally a description, and select continue.
-
- 1. On the next step in the workflow, choose or create a Form Recognizer resource before you select continue.
-
- > [!IMPORTANT]
- > Custom neural models models are only available in a few regions. If you plan on training a neural model, please select or create a resource in one of [these supported regions](../concept-custom-neural.md#supported-regions).
-
- :::image type="content" source="../media/how-to/studio-select-resource.png" alt-text="Screenshot: Select the Form Recognizer resource.":::
-
-1. Next select the storage account where you uploaded your custom model training dataset. The **Folder path** should be empty if your training documents are in the root of the container. If your documents are in a subfolder, enter the relative path from the container root in the **Folder path** field. Once your storage account is configured, select continue.
-
- :::image type="content" source="../media/how-to/studio-select-storage.png" alt-text="Screenshot: Select the storage account.":::
-
-1. Finally, review your project settings and select **Create Project** to create a new project. You should now be in the labeling window and see the files in your dataset listed.
-
-## Label your data
-
-In your project, your first task is to label your dataset with the fields you wish to extract.
-
-You'll see the files you uploaded to storage on the left of your screen, with the first file ready to be labeled.
-
-1. To start labeling your dataset, create your first field by selecting the plus (+) button on the top-right of the screen to select a field type.
-
- :::image type="content" source="../media/how-to/studio-create-label.png" alt-text="Screenshot: Create a label.":::
-
-1. Enter a name for the field.
-
-1. To assign a value to the field, choose a word or words in the document and select the field in either the dropdown or the field list on the right navigation bar. You'll see the labeled value below the field name in the list of fields.
-
-1. Repeat the process for all the fields you wish to label for your dataset.
-
-1. Label the remaining documents in your dataset by selecting each document and selecting the text to be labeled.
-
-You now have all the documents in your dataset labeled. If you look at the storage account, you'll find a *.labels.json* and *.ocr.json* files that correspond to each document in your training dataset and a new fields.json file. This training dataset will be submitted to train the model.
-
-## Train your model
-
-With your dataset labeled, you're now ready to train your model. Select the train button in the upper-right corner.
-
-1. On the train model dialog, provide a unique model ID and, optionally, a description. The model ID accepts a string data type.
-
-1. For the build mode, select the type of model you want to train. Learn more about the [model types and capabilities](../concept-custom.md).
-
- :::image type="content" source="../media/how-to/studio-train-model.png" alt-text="Screenshot: Train model dialog":::
-
-1. Select **Train** to initiate the training process.
-
-1. Template models train in a few minutes. Neural models can take up to 30 minutes to train.
-
-1. Navigate to the *Models* menu to view the status of the train operation.
-
-## Test the model
-
-Once the model training is complete, you can test your model by selecting the model on the models list page.
-
-1. Select the model and select on the **Test** button.
-
-1. Select the `+ Add` button to select a file to test the model.
-
-1. With a file selected, choose the **Analyze** button to test the model.
-
-1. The model results are displayed in the main window and the fields extracted are listed in the right navigation bar.
-
-1. Validate your model by evaluating the results for each field.
-
-1. The right navigation bar also has the sample code to invoke your model and the JSON results from the API.
-
-Congratulations you've trained a custom model in the Form Recognizer Studio! Your model is ready for use with the REST API or the SDK to analyze documents.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn about custom model types](../concept-custom.md)
-
-> [!div class="nextstepaction"]
-> [Learn about accuracy and confidence with custom models](../concept-accuracy-confidence.md)
applied-ai-services Compose Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/compose-custom-models.md
+
+ Title: "How to guide: create and compose custom models with Form Recognizer"
+
+description: Learn how to create, use, and manage Form Recognizer custom and composed models
+++++ Last updated : 10/10/2022+
+recommendations: false
++
+# Compose custom models
+
+<!-- markdownlint-disable MD051 -->
+<!-- markdownlint-disable MD024 -->
+++
+A composed model is created by taking a collection of custom models and assigning them to a single model ID. You can assign up to 100 trained custom models to a single composed model ID. When a document is submitted to a composed model, the service performs a classification step to decide which custom model accurately represents the form presented for analysis. Composed models are useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
+
+To learn more, see [Composed custom models](../concept-composed-models.md).
+
+In this article, you'll learn how to create and use composed custom models to analyze your forms and documents.
+
+## Prerequisites
+
+To get started, you'll need the following resources:
+
+* **An Azure subscription**. You can [create a free Azure subscription](https://azure.microsoft.com/free/cognitive-services/).
+
+* **A Form Recognizer instance**. Once you have your Azure subscription, [create a Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal to get your key and endpoint. If you have an existing Form Recognizer resource, navigate directly to your resource page. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+ 1. After the resource deploys, select **Go to resource**.
+
+ 1. Copy the **Keys and Endpoint** values from the Azure portal and paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API.
+
+ :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Still photo showing how to access resource key and endpoint URL.":::
+
+ > [!TIP]
+ > For more information, see [**create a Form Recognizer resource**](../create-a-form-recognizer-resource.md).
+
+* **An Azure storage account.** If you don't know how to create an Azure storage account, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+## Create your custom models
+
+First, you'll need a set of custom models to compose. You can use the Form Recognizer Studio, REST API, or client-library SDKs. The steps are as follows:
+
+* [**Assemble your training dataset**](#assemble-your-training-dataset)
+* [**Upload your training set to Azure blob storage**](#upload-your-training-dataset)
+* [**Train your custom models**](#train-your-custom-model)
+
+## Assemble your training dataset
+
+Building a custom model begins with establishing your training dataset. You'll need a minimum of five completed forms of the same type for your sample dataset. They can be of different file types (jpg, png, pdf, tiff) and contain both text and handwriting. Your forms must follow the [input requirements](../how-to-guides/build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#custom-model-input-requirements) for Form Recognizer.
+
+>[!TIP]
+> Follow these tips to optimize your data set for training:
+>
+> * If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
+> * For filled-in forms, use examples that have all of their fields filled in.
+> * Use forms with different values in each field.
+> * If your form images are of lower quality, use a larger data set (10-15 images, for example).
+
+See [Build a training data set](../how-to-guides/build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true) for tips on how to collect your training documents.
+
+## Upload your training dataset
+
+When you've gathered a set of training documents, you'll need to [upload your training data](../how-to-guides/build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#upload-your-training-data) to an Azure blob storage container.
+
+If you want to use manually labeled data, you'll also have to upload the *.labels.json* and *.ocr.json* files that correspond to your training documents.
+
+## Train your custom model
+
+When you [train your model](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
+
+Form Recognizer uses the [prebuilt-layout model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started with training a new model. Then, add more labeled data, as needed, to improve the model accuracy. Form Recognizer enables training a model to extract key-value pairs and tables using supervised learning capabilities.
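+
+As an illustration, here's a minimal sketch of starting a build (training) operation from code; it assumes the `azure-ai-formrecognizer` Python package (3.2.x), where the build method is named `begin_build_document_model` (earlier versions expose `begin_build_model`, as listed in the client-library tab below), and the endpoint, key, container SAS URL, and model ID are placeholders.
+
+```python
+# A minimal sketch: build a custom template model from labeled training data in
+# a blob container. Assumes azure-ai-formrecognizer 3.2.x; endpoint, key,
+# container SAS URL, and model ID are placeholders.
+from azure.ai.formrecognizer import DocumentModelAdministrationClient, ModelBuildMode
+from azure.core.credentials import AzureKeyCredential
+
+admin_client = DocumentModelAdministrationClient(
+    "<your-endpoint>", AzureKeyCredential("<your-key>")
+)
+
+poller = admin_client.begin_build_document_model(
+    ModelBuildMode.TEMPLATE,                   # or ModelBuildMode.NEURAL
+    blob_container_url="<container SAS URL>",  # training docs plus *.labels.json files
+    model_id="purchase-order-supplies",        # hypothetical model ID
+)
+model = poller.result()
+print(model.model_id)
+```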
+
+### [Form Recognizer Studio](#tab/studio)
+
+To create custom models, start with configuring your project:
+
+1. From the Studio homepage, select [**Create new**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) from the Custom model card.
+
+1. Use the **+ Create a project** command to start the new project configuration wizard.
+
+1. Enter project details, select the Azure subscription and resource, and the Azure Blob storage container that contains your data.
+
+1. Review and submit your settings to create the project.
++
+While creating your custom models, you may need to extract data collections from your documents. Using tables as the visual pattern, the collections may appear in one of two formats:
+
+* Dynamic or variable count of values (rows) for a given set of fields (columns)
+
+* Specific collection of values for a given set of fields (columns and/or rows)
+
+See [Form Recognizer Studio: labeling as tables](../quickstarts/try-v3-form-recognizer-studio.md#labeling-as-tables)
+
+### [REST API](#tab/rest)
+
+Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents.
+
+Label files contain key-value associations that a user has entered manually. They're needed for labeled data training, but not every source file needs to have a corresponding label file. Source files without labels will be treated as ordinary training documents. We recommend five or more labeled files for reliable training. You can use a UI tool like [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects) to generate these files.
+
+Once you have your label files, you can include them by calling the training method with the *useLabelFile* parameter set to `true`.
++
+### [Client libraries](#tab/sdks)
+
+Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents. Once you have them, you can call the training method with the *useTrainingLabels* parameter set to `true`.
+
+|Language |Method|
+|--|--|
+|**C#**|**StartBuildModel**|
+|**Java**| [**beginBuildModel**](/java/api/com.azure.ai.formrecognizer.documentanalysis.administration.documentmodeladministrationclient.beginbuildmodel)|
+|**JavaScript** | [**beginBuildModel**](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-beginbuildmodel&preserve-view=true)|
+| **Python** | [**begin_build_model**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.aio.documentmodeladministrationclient?view=azure-python#azure-ai-formrecognizer-aio-documentmodeladministrationclient-begin-build-model&preserve-view=true)
+++
+## Create a composed model
+
+> [!NOTE]
+> **The `create compose model` operation is only available for custom models trained _with_ labels.** Attempting to compose unlabeled models will produce an error.
+
+With the [**create compose model**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel) operation, you can assign up to 100 trained custom models to a single model ID. When you analyze documents with a composed model, Form Recognizer first classifies the form you submitted, chooses the best matching assigned model, and then returns results for that model. This operation is useful when incoming forms may belong to one of several templates.
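+
+Here's a minimal sketch of composing trained models from code; it assumes the `azure-ai-formrecognizer` Python package (3.2.x), where the operation is exposed as `begin_compose_document_model`, and the endpoint, key, and component model IDs are placeholders.
+
+```python
+# A minimal sketch: compose several trained custom models into a single model ID.
+# Assumes azure-ai-formrecognizer 3.2.x; endpoint, key, and model IDs are placeholders.
+from azure.ai.formrecognizer import DocumentModelAdministrationClient
+from azure.core.credentials import AzureKeyCredential
+
+admin_client = DocumentModelAdministrationClient(
+    "<your-endpoint>", AzureKeyCredential("<your-key>")
+)
+
+component_model_ids = ["po-supplies", "po-equipment", "po-furniture"]  # hypothetical IDs
+poller = admin_client.begin_compose_document_model(
+    component_model_ids,
+    description="Composed purchase-order model",
+)
+composed_model = poller.result()
+print(composed_model.model_id)
+```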
+
+### [Form Recognizer Studio](#tab/studio)
+
+Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
+
+* [**Gather your custom model IDs**](#gather-your-model-ids)
+* [**Compose your custom models**](#compose-your-custom-models)
+* [**Analyze documents**](#analyze-documents)
+* [**Manage your composed models**](#manage-your-composed-models)
+
+#### Gather your model IDs
+
+When you train models using the [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/), the model ID is located in the models menu under a project:
++
+#### Compose your custom models
+
+1. Select a custom models project.
+
+1. In the project, select the ```Models``` menu item.
+
+1. From the resulting list of models, select the models you wish to compose.
+
+1. Choose the **Compose button** from the upper-left corner.
+
+1. In the pop-up window, name your newly composed model and select **Compose**.
+
+1. When the operation completes, your newly composed model will appear in the list.
+
+1. Once the model is ready, use the **Test** command to validate it with your test documents and observe the results.
+
+#### Analyze documents
+
+The custom model **Analyze** operation requires you to provide the `modelID` in the call to Form Recognizer. You should provide the composed model ID for the `modelID` parameter in your applications.
++
+#### Manage your composed models
+
+You can manage your custom models throughout their life cycle:
+
+* Test and validate new documents.
+* Download your model to use in your applications.
+* Delete your model when its lifecycle is complete.
++
+### [REST API](#tab/rest)
+
+Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
+
+* [**Compose your custom models**](#compose-your-custom-models)
+* [**Analyze documents**](#analyze-documents)
+* [**Manage your composed models**](#manage-your-composed-models)
+
+#### Compose your custom models
+
+The [compose model API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel) accepts a list of model IDs to be composed.
++
+#### Analyze documents
+
+To make an [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) request, use a unique model name in the request parameters.
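+
+A minimal sketch of that request with Python's `requests` library follows; it assumes the 2022-08-31 analyze route shown in the linked reference, and the endpoint, key, model ID, and document URL are placeholders.
+
+```python
+# A minimal sketch: analyze a document with a composed model over REST.
+# Assumes the 2022-08-31 analyze route; endpoint, key, model ID, and document
+# URL are placeholders.
+import time
+
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+key = "<your-key>"
+model_id = "<your-composed-model-id>"
+
+response = requests.post(
+    f"{endpoint}/formrecognizer/documentModels/{model_id}:analyze",
+    params={"api-version": "2022-08-31"},
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json={"urlSource": "https://<your-storage>/sample-form.pdf"},
+)
+response.raise_for_status()
+
+# The call is asynchronous; poll the Operation-Location URL until it completes.
+result_url = response.headers["Operation-Location"]
+while True:
+    result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
+    if result["status"] in ("succeeded", "failed"):
+        break
+    time.sleep(2)
+print(result["status"])
+```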
++
+#### Manage your composed models
+
+You can manage custom models throughout their lifecycle, including [**copying**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/CopyDocumentModelTo), [**listing**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModels), and [**deleting**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/DeleteModel) your models.
+
+### [Client libraries](#tab/sdks)
+
+Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
+
+* [**Create a composed model**](#create-a-composed-model)
+* [**Analyze documents**](#analyze-documents)
+* [**Manage your composed models**](#manage-your-composed-models)
+
+#### Create a composed model
+
+You can use the programming language of your choice to create a composed model:
+
+| Programming language| Code sample |
+|--|--|
+|**C#** | [Model compose](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md)
+|**Java** | [Model compose](https://github.com/Azure/azure-sdk-for-java/blob/afa0d44fa42979ae9ad9b92b23cdba493a562127/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/ComposeDocumentModel.java)
+|**JavaScript** | [Compose model](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/composeModel.js)
+|**Python** | [Create composed model](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_compose_model.py)
+
+#### Analyze documents
+
+Once you've built your composed model, you can use it to analyze forms and documents. Use your composed `model ID` and let the service decide which of your aggregated custom models fits best according to the document provided.
+
+|Programming language| Code sample |
+|--|--|
+|**C#** | [Analyze a document with a custom/composed model using model ID](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_AnalyzeWithCustomModel.md)
+|**Java** | [Analyze a document with a custom/composed model using model ID](https://github.com/Azure/azure-sdk-for-javocumentFromUrl.java)
+|**JavaScript** | [Analyze a document with a custom/composed model using model ID](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/analyzeDocumentByModelId.js)
+|**Python** | [Analyze a document with a custom/composed model using model ID](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_analyze_custom_documents.py)
+
+#### Manage your composed models
+
+You can manage a custom model at each stage in its life cycle. You can copy a custom model between resources, view a list of all custom models under your subscription, retrieve information about a specific custom model, and delete custom models from your account.
+
+|Programming language| Code sample |
+|--|--|
+|**C#** | [Copy a custom model between Form Recognizer resources](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_CopyCustomModel.md#copy-a-custom-model-between-form-recognizer-resources)|
+|**Java** | [Copy a custom model between Form Recognizer resources](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/CopyDocumentModel.java)|
+|**JavaScript** | [Copy a custom model between Form Recognizer resources](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/copyModel.js)|
+|**Python** | [Copy a custom model between Form Recognizer resources](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_copy_model_to.py)|
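+
+For reference, here's a minimal sketch of those management calls with the Python SDK; it assumes `azure-ai-formrecognizer` 3.2.x method names, and the endpoint, key, and model ID are placeholders.
+
+```python
+# A minimal sketch: list, inspect, and delete custom models with the
+# administration client. Assumes azure-ai-formrecognizer 3.2.x method names;
+# endpoint, key, and model ID are placeholders.
+from azure.ai.formrecognizer import DocumentModelAdministrationClient
+from azure.core.credentials import AzureKeyCredential
+
+admin_client = DocumentModelAdministrationClient(
+    "<your-endpoint>", AzureKeyCredential("<your-key>")
+)
+
+# List all models under the resource.
+for summary in admin_client.list_document_models():
+    print(summary.model_id)
+
+# Retrieve details for a specific model.
+model = admin_client.get_document_model("<your-model-id>")
+print(model.description)
+
+# Delete a model when its lifecycle is complete.
+admin_client.delete_document_model("<your-model-id>")
+```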
++
+Great! You've learned the steps to create custom and composed models and use them in your Form Recognizer projects and applications.
+
+## Next steps
+
+Try one of our Form Recognizer quickstarts:
+
+> [!div class="nextstepaction"]
+> [Form Recognizer Studio](../quickstarts/try-v3-form-recognizer-studio.md)
+
+> [!div class="nextstepaction"]
+> [REST API](../quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
+
+> [!div class="nextstepaction"]
+> [C#](../quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prerequisites)
+
+> [!div class="nextstepaction"]
+> [Java](../quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
+
+> [!div class="nextstepaction"]
+> [JavaScript](../quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
+
+> [!div class="nextstepaction"]
+> [Python](../quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
++++
+Form Recognizer uses advanced machine-learning technology to detect and extract information from document images and return the extracted data in a structured JSON output. With Form Recognizer, you can train standalone custom models or combine custom models to create composed models.
+
+* **Custom models**. Form Recognizer custom models enable you to analyze and extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases.
+
+* **Composed models**. A composed model is created by taking a collection of custom models and assigning them to a single model that encompasses your form types. When a document is submitted to a composed model, the service performs a classification step to decide which custom model accurately represents the form presented for analysis.
+
+In this article, you'll learn how to create Form Recognizer custom and composed models using our [Form Recognizer Sample Labeling tool](../label-tool.md), [REST APIs](../how-to-guides/use-sdk-rest-api.md?view=form-recog-2.1.0&preserve-view=true#train-a-custom-model), or [client-library SDKs](../how-to-guides/use-sdk-rest-api.md?view=form-recog-2.1.0&preserve-view=true#train-a-custom-model).
+
+## Sample Labeling tool
+
+Try extracting data from custom forms using our Sample Labeling tool. You'll need the following resources:
+
+* An Azure subscription; you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
+
+* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
+
+ :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
+
+> [!div class="nextstepaction"]
+> [Try it](https://fott-2-1.azurewebsites.net/projects/create)
+
+In the Form Recognizer UI:
+
+1. Select **Use Custom to train a model with labels and get key value pairs**.
+
+ :::image type="content" source="../media/label-tool/fott-use-custom.png" alt-text="Screenshot of the FOTT tool select custom model option.":::
+
+1. In the next window, select **New project**:
+
+ :::image type="content" source="../media/label-tool/fott-new-project.png" alt-text="Screenshot of the FOTT tool select new project option.":::
+
+## Create your models
+
+The steps for building, training, and using custom and composed models are as follows:
+
+* [**Assemble your training dataset**](#assemble-your-training-dataset)
+* [**Upload your training set to Azure blob storage**](#upload-your-training-dataset)
+* [**Train your custom model**](#train-your-custom-model)
+* [**Compose custom models**](#create-a-composed-model)
+* [**Analyze documents**](#analyze-documents-with-your-custom-or-composed-model)
+* [**Manage your custom models**](#manage-your-custom-models)
+
+## Assemble your training dataset
+
+Building a custom model begins with establishing your training dataset. You'll need a minimum of five completed forms of the same type for your sample dataset. They can be of different file types (jpg, png, pdf, tiff) and contain both text and handwriting. Your forms must follow the [input requirements](build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#custom-model-input-requirements) for Form Recognizer.
+
+## Upload your training dataset
+
+You'll need to [upload your training data](build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#upload-your-training-data)
+to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, *see* [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+## Train your custom model
+
+You [train your model](build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#train-your-model) with labeled data sets. Labeled datasets rely on the prebuilt-layout API, but supplementary human input is included such as your specific labels and field locations. Start with at least five completed forms of the same type for your labeled training data.
+
+When you train with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
+
+Form Recognizer uses the [Layout](../concept-layout.md) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started when training a new model. Add more labeled data as needed to improve the model accuracy. Form Recognizer enables training a model to extract key value pairs and tables using supervised learning capabilities.
+
+[Get started with Train with labels](../label-tool.md)
+
+> [!VIDEO https://learn.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
+
+## Create a composed model
+
+> [!NOTE]
+> **Model Compose is only available for custom models trained *with* labels.** Attempting to compose unlabeled models will produce an error.
+
+With the Model Compose operation, you can assign up to 100 trained custom models to a single model ID. When you call Analyze with the composed model ID, Form Recognizer will first classify the form you submitted, choose the best matching assigned model, and then return results for that model. This operation is useful when incoming forms may belong to one of several templates.
+
+Using the Form Recognizer Sample Labeling tool, the REST API, or the Client-library SDKs, follow the steps below to set up a composed model:
+
+1. [**Gather your custom model IDs**](#gather-your-custom-model-ids)
+1. [**Compose your custom models**](#compose-your-custom-models)
+
+### Gather your custom model IDs
+
+Once the training process has successfully completed, your custom model will be assigned a model ID. You can retrieve a model ID as follows:
+
+<!-- Applies to FOTT but labeled studio to eliminate tab grouping warning -->
+### [**Form Recognizer Sample Labeling tool**](#tab/studio)
+
+When you train models using the [**Form Recognizer Sample Labeling tool**](https://fott-2-1.azurewebsites.net/), the model ID is located in the Train Result window:
++
+### [**REST API**](#tab/rest)
+
+The [**REST API**](build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#train-your-model) will return a `201 (Success)` response with a **Location** header. The value of the last parameter in this header is the model ID for the newly trained model:
++
+### [**Client-library SDKs**](#tab/sdks)
+
+ The [**client-library SDKs**](build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#train-your-model) return a model object that can be queried to return the trained model ID:
+
+* C\# | [CustomFormModel Class](/dotnet/api/azure.ai.formrecognizer.training.customformmodel?view=azure-dotnet&preserve-view=true#properties "Azure SDK for .NET")
+
+* Java | [CustomFormModelInfo Class](/java/api/com.azure.ai.formrecognizer.training.models.customformmodelinfo?view=azure-java-stable&preserve-view=true#methods "Azure SDK for Java")
+
+* JavaScript | CustomFormModelInfo interface
+
+* Python | [CustomFormModelInfo Class](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.customformmodelinfo?view=azure-python&preserve-view=true&branch=main#variables "Azure SDK for Python")
++
+#### Compose your custom models
+
+After you've gathered your custom models corresponding to a single form type, you can compose them into a single model.
+
+<!-- Applies to FOTT but labeled studio to eliminate tab grouping warning -->
+### [**Form Recognizer Sample Labeling tool**](#tab/studio)
+
+The **Sample Labeling tool** enables you to quickly get started training models and composing them to a single model ID.
+
+After you have completed training, compose your models as follows:
+
+1. On the left rail menu, select the **Model Compose** icon (merging arrow).
+
+1. In the main window, select the models you wish to assign to a single model ID. Models with the arrows icon are already composed models.
+
+1. Choose the **Compose button** from the upper-left corner.
+
+1. In the pop-up window, name your newly composed model and select **Compose**.
+
+When the operation completes, your newly composed model will appear in the list.
+
+ :::image type="content" source="../media/custom-model-compose.png" alt-text="Screenshot of the model compose window." lightbox="../media/custom-model-compose-expanded.png":::
+
+### [**REST API**](#tab/rest)
+
+Using the **REST API**, you can make a [**Compose Custom Model**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel) request to create a single composed model from existing models. The request body requires a string array of your `modelIds` to compose and you can optionally define the `modelName`.
+
+### [**Client-library SDKs**](#tab/sdks)
+
+Use the programming language of your choice to create a composed model that can be called with a single model ID. The following code samples demonstrate how to create a composed model from existing custom models:
+
+* [**C#/.NET**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md).
+
+* [**Java**](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/ComposeDocumentModel.java).
+
+* [**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v3/javascript/createComposedModel.js).
+
+* [**Python**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_compose_model.py)
+++
+## Analyze documents with your custom or composed model
+
+ The custom form **Analyze** operation requires you to provide the `modelID` in the call to Form Recognizer. You can provide a single custom model ID or a composed model ID for the `modelID` parameter.
++
+### [**Form Recognizer Sample Labeling tool**](#tab/studio)
+
+1. On the tool's left-pane menu, select the **Analyze icon** (light bulb).
+
+1. Choose a local file or image URL to analyze.
+
+1. Select the **Run Analysis** button.
+
+1. The tool will apply tags in bounding boxes and report the confidence percentage for each tag.
++
+### [**REST API**](#tab/rest)
+
+Using the REST API, you can make an [Analyze Document](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) request to analyze a document and extract key-value pairs and table data.
+
+### [**Client-library SDKs**](#tab/sdks)
+
+Use the programming language of your choice to analyze a form or document with a custom or composed model. You'll need your Form Recognizer endpoint, key, and model ID.
+
+* [**C#/.NET**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md)
+
+* [**Java**](https://github.com/Azure/azure-sdk-for-javocumentFromUrl.java)
+
+* [**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v3/javascript/recognizeCustomForm.js)
+
+* [**Python**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.1/sample_recognize_custom_forms.py)
+++
+Test your newly trained models by [analyzing forms](build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#test-the-model) that weren't part of the training dataset. Depending on the reported accuracy, you may want to do further training to improve the model. You can continue further training to [improve results](../label-tool.md#improve-results).
+
+## Manage your custom models
+
+You can [manage your custom models](../how-to-guides/use-sdk-rest-api.md?view=form-recog-2.1.0&preserve-view=true#manage-custom-models) throughout their lifecycle by viewing a [list of all custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModels) under your subscription, retrieving information about [a specific custom model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModel), and [deleting custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/DeleteModel) from your account.
+
+Great! You've learned the steps to create custom and composed models and use them in your Form Recognizer projects and applications.
+
+## Next steps
+
+Learn more about the Form Recognizer client library by exploring our API reference documentation.
+
+> [!div class="nextstepaction"]
+> [Form Recognizer API reference](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
+
applied-ai-services Use Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api.md
+
+ Title: "Use Form Recognizer client library SDKs or REST API "
+
+description: How to use Form Recognizer SDKs or REST API and create apps to extract key data from documents.
+++++ Last updated : 10/07/2022+
+zone_pivot_groups: programming-languages-set-formre
+recommendations: false
+
+<!-- markdownlint-disable MD051 -->
+
+# Use Form Recognizer models
++
+ In this guide, you'll learn how to add Form Recognizer models to your applications and workflows using a programming language SDK of your choice or the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key text and structure elements from documents. We recommend that you use the free service as you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+
+Choose from the following Form Recognizer models to analyze and extract data and values from forms and documents:
+
+> [!div class="checklist"]
+>
+> * The [prebuilt-read](../concept-read.md) model is at the core of all Form Recognizer models and can detect lines, words, locations, and languages. Layout, general document, prebuilt, and custom models all use the read model as a foundation for extracting texts from documents.
+>
+> * The [prebuilt-layout](../concept-layout.md) model extracts text and text locations, tables, selection marks, and structure information from documents and images.
+>
+> * The [prebuilt-document](../concept-general-document.md) model extracts key-value pairs, tables, and selection marks from documents and can be used as an alternative to training a custom model without labels.
+>
+> * The [prebuilt-tax.us.w2](../concept-w2.md) model extracts information reported on US Internal Revenue Service (IRS) tax forms.
+>
+> * The [prebuilt-invoice](../concept-invoice.md) model extracts key fields and line items from sales invoices in various formats and quality including phone-captured images, scanned documents, and digital PDFs.
+>
+> * The [prebuilt-receipt](../concept-receipt.md) model extracts key information from printed and handwritten sales receipts.
+>
+> * The [prebuilt-idDocument](../concept-id-document.md) model extracts key information from US drivers licenses, international passport biographical pages, US state IDs, social security cards, and permanent resident (green) cards.
+>
+> * The [prebuilt-businessCard](../concept-business-card.md) model extracts key information from business card images.
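+
+For example, here's a minimal sketch of calling one of these prebuilt models (`prebuilt-receipt`) with the Python SDK; it assumes the `azure-ai-formrecognizer` package (3.2.x), and the endpoint, key, and document URL are placeholders.
+
+```python
+# A minimal sketch: analyze a receipt image with the prebuilt-receipt model.
+# Assumes azure-ai-formrecognizer 3.2.x; endpoint, key, and document URL are placeholders.
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+client = DocumentAnalysisClient("<your-endpoint>", AzureKeyCredential("<your-key>"))
+
+poller = client.begin_analyze_document_from_url(
+    "prebuilt-receipt", "https://<your-storage>/receipt.jpg"
+)
+result = poller.result()
+
+for receipt in result.documents:
+    total = receipt.fields.get("Total")
+    if total:
+        print(f"Total: {total.value} (confidence {total.confidence})")
+```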
++++++++++++++++++
+## Next steps
+
+Congratulations! You've learned to use Form Recognizer models to analyze various documents in different ways. Next, explore the Form Recognizer Studio and reference documentation.
+>[!div class="nextstepaction"]
+> [**Try the Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio)
+
+> [!div class="nextstepaction"]
+> [**Explore the Form Recognizer REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
++
+In this how-to guide, you'll learn how to add Form Recognizer to your applications and workflows using an SDK, in a programming language of your choice, or the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+
+You'll use the following APIs to extract structured data from forms and documents:
+
+* [Authenticate the client](#authenticate-the-client)
+* [Analyze Layout](#analyze-layout)
+* [Analyze receipts](#analyze-receipts)
+* [Analyze business cards](#analyze-business-cards)
+* [Analyze invoices](#analyze-invoices)
+* [Analyze ID documents](#analyze-id-documents)
+* [Train a custom model](#train-a-custom-model)
+* [Analyze forms with a custom model](#analyze-forms-with-a-custom-model)
+* [Manage custom models](#manage-custom-models)
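As a rough illustration of the first and third items above (authenticating the client and analyzing receipts), here's a minimal Python sketch, assuming the `azure-ai-formrecognizer` 3.1.x package (which targets the v2.1 API) and placeholder endpoint, key, and URL values; it isn't one of the article's own samples.

```python
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
key = "<your-form-recognizer-key>"                                  # placeholder
receipt_url = "https://<path-to-your-receipt-image>"                # placeholder

# Authenticate the client (v2.1 API surface).
client = FormRecognizerClient(endpoint=endpoint, credential=AzureKeyCredential(key))

# Analyze a receipt from a URL; the call returns a poller for the long-running operation.
poller = client.begin_recognize_receipts_from_url(receipt_url)
receipts = poller.result()

# Print each recognized field with its confidence score.
for receipt in receipts:
    for name, field in receipt.fields.items():
        print(f"{name}: {field.value} (confidence: {field.confidence})")
```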
applied-ai-services V2 1 Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api.md
- Title: "Use Form Recognizer client library SDKs or REST API (v2.1)"-
-description: How to use a Form Recognizer SDKs or REST API (v2.1) to create apps that extract key data from documents.
----- Previously updated : 09/13/2022-
-zone_pivot_groups: programming-languages-set-formre
-recommendations: false
--
-<!-- markdownlint-disable MD051 -->
-
-# Use Form Recognizer SDKs or REST API | v2.1
-
- In this how-to guide, you'll learn how to add Form Recognizer to your applications and workflows using an SDK, in a programming language of your choice, or the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-
-You'll use the following APIs to extract structured data from forms and documents:
-
-* [Authenticate the client](#authenticate-the-client)
-* [Analyze Layout](#analyze-layout)
-* [Analyze receipts](#analyze-receipts)
-* [Analyze business cards](#analyze-business-cards)
-* [Analyze invoices](#analyze-invoices)
-* [Analyze ID documents](#analyze-id-documents)
-* [Train a custom model](#train-a-custom-model)
-* [Analyze forms with a custom model](#analyze-forms-with-a-custom-model)
-* [Manage custom models](#manage-custom-models)
applied-ai-services V3 0 Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/v3-0-sdk-rest-api.md
- Title: "Use Form Recognizer client library SDKs or REST API (v3.0)"-
-description: How to use Form Recognizer SDKs or REST API (v3.0) to create apps to extract key data from documents.
----- Previously updated : 10/03/2022-
-zone_pivot_groups: programming-languages-set-formre
-recommendations: false
-
-<!-- markdownlint-disable MD051 -->
-
-# Use Form Recognizer SDKs or REST API | v3.0
-
- In this guide, you'll learn how to add Form Recognizer to your applications and workflows using a programming language SDK of your choice or the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key text and structure elements from documents. We recommend that you use the free service as you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-
-Choose from the following Form Recognizer models to analyze and extract data and values from forms and documents:
-
-> [!div class="checklist"]
->
-> * The [prebuilt-read](../concept-read.md) model is at the core of all Form Recognizer models and can detect lines, words, locations, and languages. Layout, general document, prebuilt, and custom models all use the read model as a foundation for extracting texts from documents.
->
-> * The [prebuilt-layout](../concept-layout.md) model extracts text and text locations, tables, selection marks, and structure information from documents and images.
->
-> * The [prebuilt-document](../concept-general-document.md) model extracts key-value pairs, tables, and selection marks from documents and can be used as an alternative to training a custom model without labels.
->
-> * The [prebuilt-tax.us.w2](../concept-w2.md) model extracts information reported on US Internal Revenue Service (IRS) tax forms.
->
-> * The [prebuilt-invoice](../concept-invoice.md) model extracts key fields and line items from sales invoices in various formats and quality including phone-captured images, scanned documents, and digital PDFs.
->
-> * The [prebuilt-receipt](../concept-receipt.md) model extracts key information from printed and handwritten sales receipts.
->
-> * The [prebuilt-idDocument](../concept-id-document.md) model extracts key information from US drivers licenses, international passport biographical pages, US state IDs, social security cards, and permanent resident (green) cards.
->
-> * The [prebuilt-businessCard](../concept-business-card.md) model extracts key information from business card images.
-## Next steps
-
-Congratulations! You've learned to use Form Recognizer models to analyze various documents in different ways. Next, explore the Form Recognizer Studio and reference documentation.
->[!div class="nextstepaction"]
-> [**Try the Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio)
-
-> [!div class="nextstepaction"]
-> [**Explore the Form Recognizer REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
Previously updated : 06/23/2022 Last updated : 10/10/2022 -
-keywords: document processing
+monikerRange: 'form-recog-2.1.0'
+recommendations: false
<!-- markdownlint-disable MD001 --> <!-- markdownlint-disable MD024 -->
keywords: document processing
> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio). > * The v3.0 Studio supports any model trained with v2.1 labeled data. > * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
-> * *See* our [**REST API**](quickstarts/get-started-v3-sdk-rest-api.md) or [**C#**](quickstarts/get-started-v3-sdk-rest-api.md), [**Java**](quickstarts/get-started-v3-sdk-rest-api.md), [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md), or [Python](quickstarts/get-started-v3-sdk-rest-api.md) SDK quickstarts to get started with the V3.0.
+> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with the V3.0.
In this article, you'll use the Form Recognizer REST API with the Sample Labeling tool to train a custom model with manually labeled data.
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
Previously updated : 07/29/2022 Last updated : 10/10/2022
This article covers the supported languages for text and field **extraction (by
## Read, layout, and custom form (template) model
-The following lists include the currently GA languages in for the v2.1 version and the most recent v3.0 version. These languages are supported by Read, Layout, and Custom form (template) model features.
+The following lists include the languages that are currently generally available (GA) in the most recent v3.0 version. These languages are supported by the Read, Layout, and Custom form (template) model features.
> [!NOTE] > **Language code optional** > > Form Recognizer's deep learning based universal models extract all multi-lingual text in your documents, including text lines with mixed languages, and do not require specifying a language code. Do not provide the language code as the parameter unless you are sure about the language and want to force the service to apply only the relevant model. Otherwise, the service may return incomplete and incorrect text.
-To use the v3.0-supported languages, refer to the [v3.0 REST API migration guide](v3-migration-guide.md) to understand the differences from the v2.1 GA API and explore the [v3.0 SDK and REST API quickstarts](quickstarts/get-started-v3-sdk-rest-api.md).
+To use the v3.0-supported languages, refer to the [v3.0 REST API migration guide](v3-migration-guide.md) to understand the differences from the v2.1 GA API and explore the [v3.0 SDK and REST API quickstarts](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true).
-### Handwritten text (v3.0 and v2.1)
+### Handwritten text (v3.0 GA)
The following table lists the supported languages for extracting handwritten texts.
The following table lists the supported languages for extracting handwritten tex
|German |`de`|Spanish |`es`| |Italian |`it`|
-### Print text
+### Print text (v3.0 GA)
-This section lists the supported languages for extracting printed texts using version v3.0.
+The following table lists the languages supported for printed text extraction in the most recent GA version.
|Language| Code (optional) |Language| Code (optional) | |:--|:-:|:--|:-:|
-|Angika (Devanagari) | `anp`|Lakota | `lkt`
-|Arabic | `ar`|Latin | `la`
-|Awadhi-Hindi (Devanagari) | `awa`|Lithuanian | `lt`
-|Azerbaijani (Latin) | `az`|Lower Sorbian | `dsb`
-|Bagheli | `bfy`|Lule Sami | `smj`
-|Belarusian (Cyrillic) | `be`, `be-cyrl`|Mahasu Pahari (Devanagari) | `bfz`
-|Belarusian (Latin) | `be`, `be-latn`|Maltese | `mt`
-|Bhojpuri-Hindi (Devanagari) | `bho`|Malto (Devanagari) | `kmj`
-|Bodo (Devanagari) | `brx`|Maori | `mi`
-|Bosnian (Latin) | `bs`|Marathi | `mr`
-|Brajbha | `bra`|Mongolian (Cyrillic) | `mn`
-|Bulgarian | `bg`|Montenegrin (Cyrillic) | `cnr-cyrl`
-|Bundeli | `bns`|Montenegrin (Latin) | `cnr-latn`
-|Buryat (Cyrillic) | `bua`|Nepali | `ne`
-|Chamling | `rab`|Niuean | `niu`
-|Chhattisgarhi (Devanagari)| `hne`|Nogay | `nog`
-|Croatian | `hr`|Northern Sami (Latin) | `sme`
-|Dari | `prs`|Ossetic | `os`
-|Dhimal (Devanagari) | `dhi`|Pashto | `ps`
-|Dogri (Devanagari) | `doi`|Persian | `fa`
-|Erzya (Cyrillic) | `myv`|Punjabi (Arabic) | `pa`
-|Faroese | `fo`|Ripuarian | `ksh`
-|Gagauz (Latin) | `gag`|Romanian | `ro`
-|Gondi (Devanagari) | `gon`|Russian | `ru`
-|Gurung (Devanagari) | `gvr`|Sadri (Devanagari) | `sck`
-|Halbi (Devanagari) | `hlb`|Samoan (Latin) | `sm`
-|Haryanvi | `bgc`|Sanskrit (Devanagari) | `sa`
-|Hawaiian | `haw`|Santali(Devanagiri) | `sat`
-|Hindi | `hi`|Serbian (Latin) | `sr`, `sr-latn`
-|Ho(Devanagiri) | `hoc`|Sherpa (Devanagari) | `xsr`
-|Icelandic | `is`|Sirmauri (Devanagari) | `srx`
-|Inari Sami | `smn`|Skolt Sami | `sms`
-|Jaunsari (Devanagari) | `Jns`|Slovak | `sk`
-|Kangri (Devanagari) | `xnr`|Somali (Arabic) | `so`
-|Karachay-Balkar | `krc`|Southern Sami | `sma`
-|Kara-Kalpak (Cyrillic) | `kaa-cyrl`|Tajik (Cyrillic) | `tg`
-|Kazakh (Cyrillic) | `kk-cyrl`|Thangmi | `thf`
-|Kazakh (Latin) | `kk-latn`|Tongan | `to`
-|Khaling | `klr`|Turkmen (Latin) | `tk`
-|Korku | `kfq`|Tuvan | `tyv`
-|Koryak | `kpy`|Urdu | `ur`
-|Kosraean | `kos`|Uyghur (Arabic) | `ug`
-|Kumyk (Cyrillic) | `kum`|Uzbek (Arabic) | `uz-arab`
-|Kurdish (Arabic) | `ku-arab`|Uzbek (Cyrillic) | `uz-cyrl`
-|Kurukh (Devanagari) | `kru`|Welsh | `cy`
-|Kyrgyz (Cyrillic) | `ky`
-
-### Print text
-
-This section lists the supported languages for extracting printed texts in the latest GA version.
+|Afrikaans|`af`|Khasi | `kha` |
+|Albanian |`sq`|K'iche' | `quc` |
+|Angika (Devanagari) | `anp`| Korean | `ko` |
+|Arabic | `ar` | Korku | `kfq`|
+|Asturian |`ast`| Koryak | `kpy`|
+|Awadhi-Hindi (Devanagari) | `awa`| Kosraean | `kos`|
+|Azerbaijani (Latin) | `az`| Kumyk (Cyrillic) | `kum`|
+|Bagheli | `bfy`| Kurdish (Arabic) | `ku-arab`|
+|Basque |`eu`| Kurdish (Latin) | `ku-latn`
+|Belarusian (Cyrillic) | `be`, `be-cyrl`|Kurukh (Devanagari) | `kru`|
+|Belarusian (Latin) | `be`, `be-latn`| Kyrgyz (Cyrillic) | `ky`
+|Bhojpuri-Hindi (Devanagari) | `bho`| Lakota | `lkt` |
+|Bislama |`bi`| Latin | `la` |
+|Bodo (Devanagari) | `brx`| Lithuanian | `lt` |
+|Bosnian (Latin) | `bs`| Lower Sorbian | `dsb` |
+|Brajbha | `bra`|Lule Sami | `smj`|
+|Breton |`br`|Luxembourgish | `lb` |
+|Bulgarian | `bg`|Mahasu Pahari (Devanagari) | `bfz`|
+|Bundeli | `bns`|Malay (Latin) | `ms` |
+|Buryat (Cyrillic) | `bua`|Maltese | `mt`
+|Catalan |`ca`|Malto (Devanagari) | `kmj`
+|Cebuano |`ceb`|Manx | `gv` |
+|Chamling | `rab`|Maori | `mi`|
+|Chamorro |`ch`|Marathi | `mr`|
+|Chhattisgarhi (Devanagari)| `hne`| Mongolian (Cyrillic) | `mn`|
+|Chinese Simplified | `zh-Hans`|Montenegrin (Cyrillic) | `cnr-cyrl`|
+|Chinese Traditional | `zh-Hant`|Montenegrin (Latin) | `cnr-latn`|
+|Cornish |`kw`|Neapolitan | `nap` |
+|Corsican |`co`|Nepali | `ne`|
+|Crimean Tatar (Latin)|`crh`|Niuean | `niu`|
+|Croatian | `hr`|Nogay | `nog`
+|Czech | `cs` |Northern Sami (Latin) | `sme`|
+|Danish | `da` |Norwegian | `no` |
+|Dari | `prs`|Occitan | `oc` |
+|Dhimal (Devanagari) | `dhi`| Ossetic | `os`|
+|Dogri (Devanagari) | `doi`|Pashto | `ps`|
+|Dutch | `nl` |Persian | `fa`|
+|English | `en` |Polish | `pl` |
+|Erzya (Cyrillic) | `myv`|Portuguese | `pt` |
+|Estonian |`et`|Punjabi (Arabic) | `pa`|
+|Faroese | `fo`|Ripuarian | `ksh`|
+|Fijian |`fj`|Romanian | `ro` |
+|Filipino |`fil`|Romansh | `rm` |
+|Finnish | `fi` | Russian | `ru` |
+|French | `fr` |Sadri (Devanagari) | `sck` |
+|Friulian | `fur` | Samoan (Latin) | `sm`
+|Gagauz (Latin) | `gag`|Sanskrit (Devanagari) | `sa`|
+|Galician | `gl` |Santali (Devanagari) | `sat` |
+|German | `de` | Scots | `sco` |
+|Gilbertese | `gil` | Scottish Gaelic | `gd` |
+|Gondi (Devanagari) | `gon`| Serbian (Latin) | `sr`, `sr-latn`|
+|Greenlandic | `kl` | Sherpa (Devanagari) | `xsr` |
+|Gurung (Devanagari) | `gvr`| Sirmauri (Devanagari) | `srx`|
+|Haitian Creole | `ht` | Skolt Sami | `sms` |
+|Halbi (Devanagari) | `hlb`| Slovak | `sk`|
+|Hani | `hni` | Slovenian | `sl` |
+|Haryanvi | `bgc`|Somali (Arabic) | `so`|
+|Hawaiian | `haw`|Southern Sami | `sma`
+|Hindi | `hi`|Spanish | `es` |
+|Hmong Daw (Latin)| `mww` | Swahili (Latin) | `sw` |
+|Ho (Devanagari) | `hoc`|Swedish | `sv` |
+|Hungarian | `hu` |Tajik (Cyrillic) | `tg` |
+|Icelandic | `is`| Tatar (Latin) | `tt` |
+|Inari Sami | `smn`|Tetum | `tet` |
+|Indonesian | `id` | Thangmi | `thf` |
+|Interlingua | `ia` |Tongan | `to`|
+|Inuktitut (Latin) | `iu` | Turkish | `tr` |
+|Irish | `ga` |Turkmen (Latin) | `tk`|
+|Italian | `it` |Tuvan | `tyv`|
+|Japanese | `ja` |Upper Sorbian | `hsb` |
+|Jaunsari (Devanagari) | `Jns`|Urdu | `ur`|
+|Javanese | `jv` |Uyghur (Arabic) | `ug`|
+|Kabuverdianu | `kea` |Uzbek (Arabic) | `uz-arab`|
+|Kachin (Latin) | `kac` |Uzbek (Cyrillic) | `uz-cyrl`|
+|Kangri (Devanagari) | `xnr`|Uzbek (Latin) | `uz` |
+|Karachay-Balkar | `krc`|Volapük | `vo` |
+|Kara-Kalpak (Cyrillic) | `kaa-cyrl`|Walser | `wae` |
+|Kara-Kalpak (Latin) | `kaa` |Welsh | `cy` |
+|Kashubian | `csb` |Western Frisian | `fy` |
+|Kazakh (Cyrillic) | `kk-cyrl`|Yucatec Maya | `yua` |
+|Kazakh (Latin) | `kk-latn`|Zhuang | `za` |
+|Khaling | `klr`|Zulu | `zu` |
+
+### Preview (v2022-06-30-preview)
+
+Use the parameter `api-version=2022-06-30-preview` when using the REST API or the corresponding SDK to support these languages in your applications.
|Language| Code (optional) |Language| Code (optional) | |:--|:-:|:--|:-:|
-|Afrikaans|`af`|Japanese | `ja` |
-|Albanian |`sq`|Javanese | `jv` |
-|Asturian |`ast`|K'iche' | `quc` |
-|Basque |`eu`|Kabuverdianu | `kea` |
-|Bislama |`bi`|Kachin (Latin) | `kac` |
-|Breton |`br`|Kara-Kalpak (Latin) | `kaa` |
-|Catalan |`ca`|Kashubian | `csb` |
-|Cebuano |`ceb`|Khasi | `kha` |
-|Chamorro |`ch`|Korean | `ko` |
-|Chinese Simplified | `zh-Hans`|Kurdish (Latin) | `ku-latn`
-|Chinese Traditional | `zh-Hant`|Luxembourgish | `lb` |
-|Cornish |`kw`|Malay (Latin) | `ms` |
-|Corsican |`co`|Manx | `gv` |
-|Crimean Tatar (Latin)|`crh`|Neapolitan | `nap` |
-|Czech | `cs` |Norwegian | `no` |
-|Danish | `da` |Occitan | `oc` |
-|Dutch | `nl` |Polish | `pl` |
-|English | `en` |Portuguese | `pt` |
-|Estonian |`et`|Romansh | `rm` |
-|Fijian |`fj`|Scots | `sco` |
-|Filipino |`fil`|Scottish Gaelic | `gd` |
-|Finnish | `fi` |Slovenian | `sl` |
-|French | `fr` |Spanish | `es` |
-|Friulian | `fur` |Swahili (Latin) | `sw` |
-|Galician | `gl` |Swedish | `sv` |
-|German | `de` |Tatar (Latin) | `tt` |
-|Gilbertese | `gil` |Tetum | `tet` |
-|Greenlandic | `kl` |Turkish | `tr` |
-|Haitian Creole | `ht` |Upper Sorbian | `hsb` |
-|Hani | `hni` |Uzbek (Latin) | `uz` |
-|Hmong Daw (Latin)| `mww` |Volapük | `vo` |
-|Hungarian | `hu` |Walser | `wae` |
-|Indonesian | `id` |Western Frisian | `fy` |
-|Interlingua | `ia` |Yucatec Maya | `yua` |
-|Inuktitut (Latin) | `iu` |Zhuang | `za` |
-|Irish | `ga` |Zulu | `zu` |
-|Italian | `it` |
+|Abaza|`abq`|Malagasy | `mg` |
+|Abkhazian |`ab`|Mandinka | `mnk` |
+|Achinese | `ace`| Mapudungun | `arn` |
+|Acoli | `ach` | Mari (Russia) | `chm`|
+|Adangme |`ada`| Masai | `mas`|
+|Adyghe | `ady`| Mende (Sierra Leone) | `men`|
+|Afar | `aa`| Meru | `mer`|
+|Akan | `ak`| Meta' | `mgo`|
+|Algonquin |`alq`| Minangkabau | `min`
+|Asu (Tanzania) | `asa`|Mohawk| `moh`|
+|Avaric | `av`| Mongondow | `mog`
+|Aymara | `ay`| Morisyen | `mfe` |
+|Bafia |`ksf`| Mundang | `mua` |
+|Bambara | `bm`| Nahuatl | `nah` |
+|Bashkir | `ba`| Navajo | `nv` |
+|Bemba (Zambia) | `bem`| Ndonga | `ng` |
+|Bena (Tanzania) | `bez`|Ngomba | `jgo`|
+|Bikol |`bik`|North Ndebele | `nd` |
+|Bini | `bin`|Nyanja | `ny`|
+|Chechen | `ce`|Nyankole | `nyn` |
+|Chiga | `cgg`|Nzima | `nzi`
+|Choctaw |`cho`|Ojibwa | `oj`
+|Chukot |`ckt`|Oromo | `om` |
+|Chuvash | `cv`|Pampanga | `pam`|
+|Cree |`cr`|Pangasinan | `pag`|
+|Creek| `mus`| Papiamento | `pap`|
+|Crow | `cro`|Pedi | `nso`|
+|Dargwa | `dar`|Quechua | `qu`|
+|Duala |`dua`|Rundi | `rn` |
+|Dungan |`dng`|Rwa | `rwk`|
+|Efik|`efi`|Samburu | `saq`|
+|Fon | `fon`|Sango | `sg`
+|Ga | `gaa` |Sangu (Gabon) | `snq`|
+|Ganda | `lg` |Sena | `seh` |
+|Gayo | `gay`|Serbian (Cyrillic) | `sr-cyrl` |
+|Guarani| `gn`| Shambala | `ksb`|
+|Gusii | `guz`|Shona | `sn`|
+|Greek | `el`|Siksika | `bla`|
+|Herero | `hz` |Soga | `xog`|
+|Hiligaynon | `hil` |Somali (Latin) | `so-latn` |
+|Iban | `iba`|Songhai | `son` |
+|Igbo |`ig`|South Ndebele | `nr`|
+|Iloko | `ilo`|Southern Altai | `alt`|
+|Ingush |`inh`|Southern Sotho | `st` |
+|Jola-Fonyi |`dyo`|Sundanese | `su` |
+|Kabardian | `kbd` | Swati | `ss` |
+|Kalenjin | `kln` |Tabassaran| `tab` |
+|Kalmyk | `xal` | Tachelhit| `shi` |
+|Kanuri | `kr`|Tahitian | `ty`|
+|Khakas | `kjh` |Taita | `dav` |
+|Kikuyu | `ki` | Tatar (Cyrillic) | `tt-cyrl` |
+|Kildin Sami | `sjd` | Teso | `teo` |
+|Kinyarwanda| `rw`| Thai | `th`|
+|Komi | `kv` | Tok Pisin | `tpi` |
+|Kongo| `kg`| Tsonga | `ts`|
+|Kpelle | `kpe` | Tswana | `tn` |
+|Kuanyama | `kj`| Udmurt | `udm`|
+|Lak | `lbe` | Uighur (Cyrillic) | `ug-cyrl` |
+|Latvian | `lv`|Ukrainian | `uk`|
+|Lezghian | `lex`|Vietnamese | `vi`
+|Lingala | `ln`|Vunjo | `vun` |
+|Lozi| `loz` | Wolof | `wo` |
+|Luo (Kenya and Tanzania) | `luo`| Xhosa|`xh` |
+|Luyia | `luy` |Yakut | `sah` |
+|Macedonian | `mk`| Zapotec | `zap` |
+|Machame| `jmc`| Zarma | `dje` |
+|Madurese | `mad` |
+|Makhuwa-Meetto | `mgh` |
+|Makonde | `kde` |
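To target the preview languages listed above, the `api-version=2022-06-30-preview` parameter mentioned before the table is supplied on the analyze request. The following minimal Python sketch is an illustration only, not an official sample; it assumes the v3.0 `documentModels/{modelId}:analyze` REST path, key-based authentication, and placeholder endpoint, key, and document URL values.

```python
import time
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-form-recognizer-key>"                                 # placeholder
document_url = "https://<path-to-your-document>"                   # placeholder

analyze_url = (
    f"{endpoint}/formrecognizer/documentModels/prebuilt-read:analyze"
    "?api-version=2022-06-30-preview"
)
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}

# Submit the document; the service responds with an Operation-Location header to poll.
response = requests.post(analyze_url, headers=headers, json={"urlSource": document_url})
response.raise_for_status()
result_url = response.headers["Operation-Location"]

# Poll until the long-running operation finishes, then print any detected languages.
while True:
    result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result.get("status") in ("succeeded", "failed"):
        break
    time.sleep(2)

for language in result.get("analyzeResult", {}).get("languages", []):
    print(language.get("locale"), language.get("confidence"))
```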
## Custom neural model
applied-ai-services Overview Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview-experiment.md
Previously updated : 08/22/2022 Last updated : 10/10/2022 recommendations: false
This section will help you decide which Form Recognizer v3.0 supported model you
## Form Recognizer models and development options
-### [Form Recognizer v3.0](#tab/v3-0)
-The following models and development options are supported by the Form Recognizer service v3.0. You can Use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse the API references.
+> [!NOTE]
+> The following models and development options are supported by the Form Recognizer service v3.0. You can use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse the API references.
| Model | Description |Automation use cases | Development options | |-|--|-|--|
-|[ **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.| <ul><li>Contract processing. </li><li>Financial or medical report processing.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/v3-0-sdk-rest-api.md)</li><li>[**C# SDK**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-javascript)</li></ul> |
-|[**General document model**](concept-general-document.md)|Extract text, tables, structure, and key-value pairs.|<ul><li>Key-value pair extraction.</li><li>Form processing.</li><li>Survey data collection and analysis.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li></ul> |
-|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li></ul>|
-|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br>Custom model API v3.0 now supports two model types:ul><li>[**Custom Template model**](concept-custom-template.md) (custom form) is used to analyze structured and semi-structured documents.</li><li> [**Custom Neural model**](concept-custom-neural.md) (custom document) is used to analyze unstructured documents.</li></ul>|<ul><li>Identification and compilation of data, unique to your business, impacted by a regulatory change or market event.</li><li>Identification and analysis of previously overlooked unique data.</li></ul> |[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|
-|[ **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>Automated tax document management.</li><li>Mortgage loan application processing.</li></ul> |<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul> |
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. |<ul><li>Accounts payable processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
-|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.|<ul><li>Expense management.</li><li>Consumer behavior data analysis.</li><li>Customer loyalty program.</li><li>Merchandise return processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
-|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
-|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.|<ul><li>Sales lead and marketing management.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
-
-### [Form Recognizer GA (v2.1)](#tab/v2-1)
+|[ **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.| <ul><li>Contract processing. </li><li>Financial or medical report processing.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-javascript)</li></ul> |
+|[**General document model**](concept-general-document.md)|Extract text, tables, structure, and key-value pairs.|<ul><li>Key-value pair extraction.</li><li>Form processing.</li><li>Survey data collection and analysis.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li></ul> |
+|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li></ul>|
+|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br>Custom model API v3.0 now supports two model types:<ul><li>[**Custom Template model**](concept-custom-template.md) (custom form) is used to analyze structured and semi-structured documents.</li><li> [**Custom Neural model**](concept-custom-neural.md) (custom document) is used to analyze unstructured documents.</li></ul>|<ul><li>Identification and compilation of data, unique to your business, impacted by a regulatory change or market event.</li><li>Identification and analysis of previously overlooked unique data.</li></ul> |[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|
+|[ **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>Automated tax document management.</li><li>Mortgage loan application processing.</li></ul> |<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul> |
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. |<ul><li>Accounts payable processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
+|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.|<ul><li>Expense management.</li><li>Consumer behavior data analysis.</li><li>Customer loyalty program.</li><li>Merchandise return processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
+|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
+|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.|<ul><li>Sales lead and marketing management.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
++ >[!TIP] >
The following models are supported by Form Recognizer v2.1. Use the links in the
| Model| Description | Development options | |-|--|-|
-|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
- ## How to use Form Recognizer documentation
This documentation contains the following article types:
## Next steps
-### [Form Recognizer v3.0](#tab/v3-0)
> [!div class="checklist"] >
This documentation contains the following article types:
> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more. > * If you're familiar with a previous version of the API, see the [**What's new**](./whats-new.md) article to learn of recent changes.
-### [Form Recognizer v2.1](#tab/v2-1)
+ > [!div class="checklist"] >
This documentation contains the following article types:
> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) to learn more. > * If you're familiar with a previous version of the API, see the [**What's new**](./whats-new.md) article to learn of recent changes. -
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Previously updated : 08/22/2022 Last updated : 10/06/2022 recommendations: false adobe-target: true
adobe-target-content: ./overview-experiment
<!-- markdownlint-disable MD036 --> # What is Azure Form Recognizer?
-Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that uses machine-learning models to extract key-value pairs, text, and tables from your documents. Form Recognizer analyzes your forms and documents, extracts text and data, maps field relationships as key-value pairs, and returns a structured JSON output. You quickly get accurate results that are tailored to your specific content without excessive manual intervention or extensive data science expertise. Use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities.
+Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that analyzes forms and documents, extracts text and data, and maps field relationships as key-value pairs. To learn more about each model, *see* the concept articles:
-Form Recognizer uses the following models to easily identify, extract, and analyze document data:
+| Model type | Model name |
+||--|
+|**Document analysis models**| &#9679; [**Read model**](concept-read.md)</br> &#9679; [**General document model**](concept-general-document.md)</br> &#9679; [**Layout model**](concept-layout.md) </br> |
+| **Prebuilt models** | &#9679; [**W-2 form model**](concept-w2.md) </br>&#9679; [**Invoice model**](concept-invoice.md)</br>&#9679; [**Receipt model**](concept-receipt.md) </br>&#9679; [**ID document model**](concept-id-document.md) </br>&#9679; [**Business card model**](concept-business-card.md) </br>|
+| **Custom models** | &#9679; [**Custom model**](concept-custom.md) </br>&#9679; [**Composed model**](concept-model-overview.md)|
-**Document analysis models**
+## Which Form Recognizer model should I use?
-* [**Read model**](concept-read.md) | Extract text lines, words, locations, and detected languages from documents and images.
-* [**Layout model**](concept-layout.md) | Extract text, tables, selection marks, and structure information from documents and images.
-* [**General document model**](concept-general-document.md) | Extract text, tables, selection marks, structure information, key-value pairs, and entities from documents.
+This section helps you decide which Form Recognizer v3.0 supported model to use for your application:
-**Prebuilt models**
+| Type of document | Data to extract |Document format | Your best solution |
+| --|-| -|-|
+|**A text-based document** like a contract or letter.|You want to extract primarily text lines, words, locations, and detected languages.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).| [**Read model**](concept-read.md)|
+|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables, selection marks, paragraphs, titles, headings, and subheadings.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout model**](concept-layout.md)
+|**A structured or semi-structured document that includes content formatted as fields and values**, like a credit application or survey form.|You want to extract fields and values including ones not covered by the scenario-specific prebuilt models **without having to train a custom model**.| The form or document is a standardized format commonly used in your business or industry and printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).|[**General document model**](concept-general-document.md)
+|**U.S. W-2 form**|You want to extract key information such as salary, wages, and taxes withheld from US W-2 tax forms. |The W-2 document is in United States English (en-US) text.|[**W-2 model**](concept-w2.md)|
+|**Invoice**|You want to extract key information such as customer name, billing address, and amount due from invoices. |The invoice document is written or printed in a [supported language](language-support.md#invoice-model).|[**Invoice model**](concept-invoice.md)|
|**Receipt**|You want to extract key information such as merchant name, transaction date, and transaction total from a sales or single-page hotel receipt. |The receipt is written or printed in a [supported language](language-support.md#receipt-model). |[**Receipt model**](concept-receipt.md)|
+|**ID document** like a passport or driver's license. |You want to extract key information such as first name, last name, and date of birth from US drivers' licenses or international passports. |Your ID document is a US driver's license or the biographical page from an international passport (not a visa).| [**ID document model**](concept-id-document.md)|
+|**Business card**|You want to extract key information such as first name, last name, company name, email address, and phone number from business cards.|The business card document is in English or Japanese text. | [**Business card model**](concept-business-card.md)|
+|**Mixed-type document(s)**| You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| You have various documents with structured, semi-structured, and/or unstructured elements.| [**Custom model**](concept-custom.md)|
-* [**W-2 form model**](concept-w2.md) | Extract text and key information from US W2 tax forms.
-* [**Invoice model**](concept-invoice.md) | Extract text, selection marks, tables, key-value pairs, and key information from invoices.
-* [**Receipt model**](concept-receipt.md) | Extract text and key information from receipts.
-* [**ID document model**](concept-id-document.md) | Extract text and key information from driver licenses and international passports.
-* [**Business card model**](concept-business-card.md) | Extract text and key information from business cards.
+> [!TIP]
+>
+> * If you're still unsure which model to use, try the General Document model.
+> * The General Document model is powered by the Read OCR model to detect lines, words, locations, and languages.
+> * The General Document model extracts all of the same elements as the Layout model (pages, tables, styles) and also extracts key-value pairs.
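As a rough illustration of that tip, here's a minimal Python sketch, assuming the `azure-ai-formrecognizer` 3.2.x package and placeholder endpoint, key, and URL values, that runs the general document model and reads both the layout-style output and the added key-value pairs.

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-form-recognizer-key>"),       # placeholder
)

poller = client.begin_analyze_document_from_url(
    "prebuilt-document", "https://<path-to-your-form>"                 # placeholder
)
result = poller.result()

# Layout-style output is present...
print(f"{len(result.pages)} pages, {len(result.tables)} tables, {len(result.styles)} styles")

# ...plus the key-value pairs that the general document model adds.
for pair in result.key_value_pairs:
    key_text = pair.key.content if pair.key else ""
    value_text = pair.value.content if pair.value else ""
    print(f"{key_text}: {value_text}")
```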
-**Custom models**
+## Form Recognizer models and development options
-* [**Custom model**](concept-custom.md) | Extract and analyze distinct data and use cases from forms and documents specific to your business.
-* [**Composed model**](concept-model-overview.md) | Compose a collection of custom models and assign them to a single model built from your form types.
-## Which Form Recognizer feature should I use?
+> [!NOTE]
+> The following models and development options are supported by the Form Recognizer service v3.0. You can use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse the API references.
-This section helps you decide which Form Recognizer v3.0 supported feature you should use for your application:
+| Model | Description |Automation use cases | Development options |
+|-|--|-|--|
+|[ **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.| <ul><li>Contract processing. </li><li>Financial or medical report processing.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-javascript)</li></ul> |
+|[**General document model**](concept-general-document.md)|Extract text, tables, structure, and key-value pairs.|<ul><li>Key-value pair extraction.</li><li>Form processing.</li><li>Survey data collection and analysis.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model)</li></ul> |
+|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model)</li></ul>|
+|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br>Custom model API v3.0 now supports two model types:<ul><li>[**Custom Template model**](concept-custom-template.md) (custom form) is used to analyze structured and semi-structured documents.</li><li> [**Custom Neural model**](concept-custom-neural.md) (custom document) is used to analyze unstructured documents.</li></ul>|<ul><li>Identification and compilation of data, unique to your business, impacted by a regulatory change or market event.</li><li>Identification and analysis of previously overlooked unique data.</li></ul> |[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|
+|[**W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>Automated tax document management.</li><li>Mortgage loan application processing.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul> |
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. |<ul><li>Accounts payable processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
+|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.|<ul><li>Expense management.</li><li>Consumer behavior data analysis.</li><li>Customer loyalty program.</li><li>Merchandise return processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
+|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
+|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.|<ul><li>Sales lead and marketing management.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|
-| What type of document do you want to analyze?| How is the document formatted? | Your best solution |
-| --|-| -|
-|<ul><li>**W-2 Form**</li></yl>| Is your W-2 document composed in United States English (en-US) text?|<ul><li>If **Yes**, use the [**W-2 Form**](concept-w2.md) model.<li>If **No**, use the [**Layout**](concept-layout.md) or [**General document **](concept-general-document.md) model.</li></ul>|
-|<ul><li>**Primarily text content**</li></yl>| Is your document _printed_ in a [supported language](language-support.md#read-layout-and-custom-form-template-model) and are you only interested in text and not tables, selection marks, and the structure?|<ul><li>If **Yes** to text-only extraction, use the [**Read**](concept-read.md) model.<li>If **No**, because you also need structure information, use the [**Layout**](concept-layout.md) model.</li></ul>
-|<ul><li>**General structured document**</li></yl>| Is your document mostly structured and does it contain a few fields and values that may not be covered by the other prebuilt models?|<ul><li>If **Yes**, use the [**General document **](concept-general-document.md) model.</li><li> If **No**, because the fields and values are complex and highly variable, train and build a [**Custom**](how-to-guides/build-custom-model-v3.md) model.</li></ul>
-|<ul><li>**Invoice**</li></yl>| Is your invoice document composed in a [supported language](language-support.md#invoice-model) text?|<ul><li>If **Yes**, use the [**Invoice**](concept-invoice.md) model.<li>If **No**, use the [**Layout**](concept-layout.md) or [**General document **](concept-general-document.md) model.</li></ul>
-|<ul><li>**Receipt**</li><li>**Business card**</li></ul>| Is your receipt or business card document composed in English text? | <ul><li>If **Yes**, use the [**Receipt**](concept-receipt.md) or [**Business Card**](concept-business-card.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document **](concept-general-document.md) model.</li></ul>|
-|<ul><li>**ID document**</li></ul>| Is your ID document a US driver's license or an international passport?| <ul><li>If **Yes**, use the [**ID document**](concept-id-document.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document **](concept-general-document.md) model</li></ul>|
- |<ul><li>**Form** or **Document**</li></ul>| Is your form or document an industry-standard format commonly used in your business or industry?| <ul><li>If **Yes**, use the [**Layout**](concept-layout.md) or [**General document **](concept-general-document.md).</li><li>If **No**, you can [**Train and build a custom model**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model).
-## Form Recognizer features and development options
-### [Form Recognizer v3.0](#tab/v3-0)
+ >[!TIP]
+ >
+ > * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
+ > * The v3.0 Studio supports any model trained with v2.1 labeled data.
+ > * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
-The following features and development options are supported by the Form Recognizer service v3.0. Use the links in the table to learn more about each feature and browse the API references.
+The following models are supported by Form Recognizer v2.1. Use the links in the table to learn more about each model and browse the API references.
-| Feature | Description | Development options |
+| Model| Description | Development options |
|-|--|-|
-|[ **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/v3-0-sdk-rest-api.md)</li><li>[**C# SDK**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/v3-0-sdk-rest-api.md?pivots=programming-language-javascript)</li></ul> |
-|[ **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul> |
-|[**General document model**](concept-general-document.md)|Extract text, tables, structure, key-value pairs and, named entities.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li></ul> |
-|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li></ul>|
-|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.<ul><li>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br></li><li>Custom model API v3.0 offers a new model type **Custom Neural** or custom document to analyze unstructured documents.</li></ul>| [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
-|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
-|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
-|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
-
-### [Form Recognizer GA (v2.1)](#tab/v2-1)
-
-The following features are supported by Form Recognizer v2.1. Use the links in the table to learn more about each feature and browse the API references.
-
-| Feature | Description | Development options |
-|-|--|-|
-|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
- ## How to use Form Recognizer documentation
This documentation contains the following article types:
## Next steps
-### [Form Recognizer v3.0](#tab/v3-0)
> [!div class="checklist"] >
This documentation contains the following article types:
> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more. > * If you're familiar with a previous version of the API, see the [**What's new**](./whats-new.md) article to learn of recent changes.
-### [Form Recognizer v2.1](#tab/v2-1)
+ > [!div class="checklist"] >
This documentation contains the following article types:
> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) to learn more. > * If you're familiar with a previous version of the API, see the [**What's new**](./whats-new.md) article to learn of recent changes. -
applied-ai-services Get Started Sdks Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/get-started-sdks-rest-api.md
+
+ Title: "Quickstart: Form Recognizer SDKs | REST API "
+
+description: Use a Form Recognizer SDK or the REST API to create a forms processing app that extracts key data and structure elements from your documents.
+++++ Last updated : 10/10/2022+
+zone_pivot_groups: programming-languages-set-formre
+recommendations: false
++
+ # Get started with Form Recognizer
+++
+Get started with the latest version of Azure Form Recognizer. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, tables, and key data from your documents. You can easily integrate Form Recognizer models into your workflows and applications by using an SDK in the programming language of your choice or by calling the REST API. For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+
+To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md) page.
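As a point of reference, here's a minimal Python sketch of the quickstart flow, assuming the `azure-ai-formrecognizer` package (version 3.2.0 or later) and placeholder values for the endpoint, key, and document URL:

```python
# Minimal sketch: analyze a document with the prebuilt "read" model.
# Assumes azure-ai-formrecognizer >= 3.2.0; endpoint, key, and document URL are placeholders.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-key>"

client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))

# Submit the document and wait for the long-running operation to finish.
poller = client.begin_analyze_document_from_url("prebuilt-read", "<url-to-your-document>")
result = poller.result()

# Print the extracted text, line by line.
for page in result.pages:
    for line in page.lines:
        print(line.content)
```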
+++++++++++++++++
+That's it, congratulations!
+
+In this quickstart, you used a Form Recognizer model to analyze various forms and documents. Next, explore the Form Recognizer Studio and reference documentation to learn about the Form Recognizer API in depth.
+
+## Next steps
+
+>[!div class="nextstepaction"]
+> [**Try the Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio)
+
+> [!div class="nextstepaction"]
+> [**Explore our how-to documentation and take a deeper dive into Form Recognizer models**](../how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
+++
+Get started with Azure Form Recognizer using the programming language of your choice or the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+
+To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md) page.
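For comparison, a minimal Python sketch against the v2.1 client library (assuming `azure-ai-formrecognizer` 3.1.x and placeholder endpoint, key, and receipt URL) might look like this:

```python
# Minimal sketch: analyze a receipt with the v2.1 client library.
# Assumes azure-ai-formrecognizer 3.1.x; endpoint, key, and receipt URL are placeholders.
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

client = FormRecognizerClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    AzureKeyCredential("<your-key>"),
)

poller = client.begin_recognize_receipts_from_url("<url-to-a-receipt>")

# Each recognized receipt exposes its fields with values and confidence scores.
for receipt in poller.result():
    for name, field in receipt.fields.items():
        print(f"{name}: {field.value} (confidence: {field.confidence})")
```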
++++++++++++++++++
+That's it, congratulations! In this quickstart, you used Form Recognizer models to analyze various forms in different ways.
+
+## Next steps
+
+* For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
+
+* The v3.0 Studio supports any model trained with v2.1 labeled data.
+
+* You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+* *See* our [**REST API**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [**Python**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version.
+
applied-ai-services Get Started V2 1 Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/get-started-v2-1-sdk-rest-api.md
- Title: "Quickstart: Form Recognizer client library SDKs | REST API"-
-description: Use the Form Recognizer client library SDKs or REST API to create a forms processing app that extracts key/value pairs and table data from your custom documents.
----- Previously updated : 06/21/2021-
-zone_pivot_groups: programming-languages-set-formre
-recommendations: false
--
-# Get started with Form Recognizer client library SDKs or REST API
-
-Get started with Azure Form Recognizer using the programming language of your choice. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-
-To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md#form-recognizer-features-and-development-options) page.
---------------
applied-ai-services Get Started V3 Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/get-started-v3-sdk-rest-api.md
- Title: "Quickstart: Form Recognizer SDKs | REST API v3.0"-
-description: Use a Form Recognizer SDK or the REST API to create a forms processing app that extracts key data from your documents.
----- Previously updated : 10/03/2022-
-zone_pivot_groups: programming-languages-set-formre
-recommendations: false
--
-# Quickstart: Form Recognizer SDKs | REST API v3.0
-
-Get started with the latest version of Azure Form Recognizer. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, tables and key data from your documents. You can easily integrate Form Recognizer models into your workflows and applications by using an SDK in the programming language of your choice or calling the REST API. For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-
-To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md#form-recognizer-features-and-development-options) page.
----------------
-That's it, congratulations!
-
-In this quickstart, you used a form Form Recognizer model to analyze various forms and documents. Next, explore the Form Recognizer Studio and reference documentation to learn about Form Recognizer API in depth.
-
-## Next steps
-
->[!div class="nextstepaction"]
-> [**Try the Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio)
-
-> [!div class="nextstepaction"]
-> [**Explore our how-to documentation and take a deeper dive into Form Recognizer models**](../how-to-guides/v3-0-sdk-rest-api.md)
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
Previously updated : 06/24/2022 Last updated : 10/10/2022
-keywords: document processing
+monikerRange: 'form-recog-2.1.0'
+recommendations: false
<!-- markdownlint-disable MD001 --> <!-- markdownlint-disable MD024 -->
keywords: document processing
<!-- markdownlint-disable MD029 --> # Get started with the Form Recognizer Sample Labeling tool + >[!TIP] > > * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio). > * The v3.0 Studio supports any model trained with v2.1 labeled data. > * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
-> * *See* our [**REST API**](get-started-v3-sdk-rest-api.md) or [**C#**](get-started-v3-sdk-rest-api.md), [**Java**](get-started-v3-sdk-rest-api.md), [**JavaScript**](get-started-v3-sdk-rest-api.md), or [Python](get-started-v3-sdk-rest-api.md) SDK quickstarts to get started with the v3.0 version.
+> * *See* our [**REST API**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [Python](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version.
The Form Recognizer Sample Labeling tool is an open source tool that enables you to test the latest features of Azure Form Recognizer and Optical Character Recognition (OCR)
Use the tags editor pane to create a new tag you'd like to identify:
Choose the Train icon on the left pane to open the Training page. Then select the **Train** button to begin training the model. Once the training process completes, you'll see the following information:
-* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you'll need it if you want to do prediction calls through the [REST API](./try-sdk-rest-api.md?pivots=programming-language-rest-api) or [client library](./try-sdk-rest-api.md).
+* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you'll need it if you want to do prediction calls through the [REST API](./get-started-sdks-rest-api.md?pivots=programming-language-rest-api) or [client library](./get-started-sdks-rest-api.md).
* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by labeling additional forms and retraining to create a new model. We recommend starting by labeling five forms, analyzing and testing the results, and then adding more forms as needed. * The list of tags and the estimated accuracy per tag. For more information, _see_ [Interpret and improve accuracy and confidence](../concept-accuracy-confidence.md).
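For illustration only, a hedged Python sketch of a prediction call that reuses the saved **Model ID** (assuming the v2.1 `azure-ai-formrecognizer` 3.1.x client library; endpoint, key, model ID, and form URL are placeholders):

```python
# Minimal sketch: call a trained custom model by its Model ID (v2.1 client library).
# Assumes azure-ai-formrecognizer 3.1.x; endpoint, key, model ID, and form URL are placeholders.
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

client = FormRecognizerClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    AzureKeyCredential("<your-key>"),
)

poller = client.begin_recognize_custom_forms_from_url(
    model_id="<model-id-from-training>",
    form_url="<url-to-a-form>",
)

# Print each labeled field extracted by the custom model.
for form in poller.result():
    for name, field in form.fields.items():
        print(f"{name}: {field.value}")
```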
applied-ai-services Try V3 Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-form-recognizer-studio.md
Previously updated : 03/08/2022 Last updated : 10/10/2022 -
+monikerRange: 'form-recog-3.0.0'
# Get started: Form Recognizer Studio | v3.0
-[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service in your applications. You can get started by exploring the pre-trained models with sample or your own documents. You can also create projects to build custom template models and reference the models in your applications using the [Python SDK](get-started-v3-sdk-rest-api.md) and other quickstarts.
+[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service in your applications. You can get started by exploring the pre-trained models with sample documents or your own. You can also create projects to build custom template models and reference the models in your applications using the [Python SDK](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and other quickstarts.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE56n49]
Prebuilt models help you add Form Recognizer features to your apps without havin
* [**General document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=document): extract text, tables, structure, key-value pairs and named entities. * [**W-2**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2): extract text and key information from W-2 tax forms.
-* [ **Read**](https://formrecognizer.appliedai.azure.com/studio/read): extract text lines, words, their locations, detected languages, and handwritten style if detected from documents (PDF, TIFF) and images (JPG, PNG, BMP).
+* [**Read**](https://formrecognizer.appliedai.azure.com/studio/read): extract text lines, words, their locations, detected languages, and handwritten style if detected from documents (PDF, TIFF) and images (JPG, PNG, BMP).
* [**Layout**](https://formrecognizer.appliedai.azure.com/studio/layout): extract text, tables, selection marks, and structure information from documents (PDF, TIFF) and images (JPG, PNG, BMP). * [**Invoice**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice): extract text, selection marks, tables, key-value pairs, and key information from invoices. * [**Receipt**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt): extract text and key information from receipts.
To label for signature detection: (Custom form only)
## Next steps * Follow our [**Form Recognizer v3.0 migration guide**](../v3-migration-guide.md) to learn the differences from the previous version of the REST API.
-* Explore our [**v3.0 SDK quickstarts**](get-started-v3-sdk-rest-api.md) to try the v3.0 features in your applications using the new SDKs.
-* Refer to our [**v3.0 REST API quickstarts**](get-started-v3-sdk-rest-api.md) to try the v3.0 features using the new REST API.
+* Explore our [**v3.0 SDK quickstarts**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) to try the v3.0 features in your applications using the new SDKs.
+* Refer to our [**v3.0 REST API quickstarts**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) to try the v3.0 features using the new REST API.
[Get started with the Form Recognizer Studio](https://formrecognizer.appliedai.azure.com).
applied-ai-services Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-overview.md
Previously updated : 09/09/2022 Last updated : 10/10/2022 recommendations: false
Form Recognizer SDK supports the following languages and platforms:
| Language → SDK version | Package| Azure Form Recognizer SDK |Supported API version| Platform support | |:-:|:-|:-| :-|--|
-|[.NET/C# → 4.0.0 (latest GA release)](quickstarts/get-started-v3-sdk-rest-api.md?pivots=programming-language-csharp#set-up)| [NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0) | [Azure SDK for .NET](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/https://docsupdatetracker.net/index.html)|[**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
-|[Java → 4.0.0 (latest GA release)](quickstarts/get-started-v3-sdk-rest-api.md?pivots=programming-language-java#set-up) |[Maven](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer) | [Azure SDK for Java](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/https://docsupdatetracker.net/index.html)|[**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
-|[JavaScript → 4.0.0 (latest GA release)](quickstarts/get-started-v3-sdk-rest-api.md?pivots=programming-language-javascript#set-up)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [Azure SDK for JavaScript](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/https://docsupdatetracker.net/index.html) | [**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
-|[Python → 3.2.0 (latest GA release)](quickstarts/get-started-v3-sdk-rest-api.md?pivots=programming-language-python#set-up) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [Azure SDK for Python](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/https://docsupdatetracker.net/index.html)| [**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
+|[.NET/C# → 4.0.0 (latest GA release)](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-csharp#set-up)| [NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0) | [Azure SDK for .NET](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/index.html)|[**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
+|[Java → 4.0.0 (latest GA release)](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-java#set-up) |[Maven](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer) | [Azure SDK for Java](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/index.html)|[**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
+|[JavaScript → 4.0.0 (latest GA release)](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-javascript#set-up)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [Azure SDK for JavaScript](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/index.html) | [**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
+|[Python → 3.2.0 (latest GA release)](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-python#set-up) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [Azure SDK for Python](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/index.html)| [**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
## Supported Clients
For more information, *see* [Authenticate the client](https://github.com/Azure/a
### 4. Build your application
-First, you'll create a client object to interact with the Form Recognizer SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-v3-sdk-rest-api.md) in a language of your choice.
+First, you'll create a client object to interact with the Form Recognizer SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) in a language of your choice.
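For example, a minimal sketch of creating the synchronous and asynchronous Python clients (assuming `azure-ai-formrecognizer` 3.2.0 or later; endpoint and key are placeholders):

```python
# Minimal sketch: create synchronous and asynchronous clients.
# Assumes azure-ai-formrecognizer >= 3.2.0; endpoint and key are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient  # synchronous client
from azure.ai.formrecognizer.aio import DocumentAnalysisClient as AsyncDocumentAnalysisClient  # asynchronous client

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<your-key>")

sync_client = DocumentAnalysisClient(endpoint, credential)
async_client = AsyncDocumentAnalysisClient(endpoint, credential)
```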
## Help options
The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overf
## Next steps >[!div class="nextstepaction"]
-> [**Try a Form Recognizer quickstart**](quickstarts/get-started-v3-sdk-rest-api.md)
+> [**Try a Form Recognizer quickstart**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
> [!div class="nextstepaction"] > [**Explore the Form Recognizer REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md
This article contains a quick reference and a **detailed description** of the Azure Form Recognizer service quotas and limits for all [pricing tiers](https://azure.microsoft.com/pricing/details/form-recognizer/). It also contains some best practices to help you avoid request throttling.
-For the usage with [Form Recognizer SDK](quickstarts/get-started-v3-sdk-rest-api.md), [Form Recognizer REST API](quickstarts/get-started-v3-sdk-rest-api.md), [Form Recognizer Studio](quickstarts/try-v3-form-recognizer-studio.md) and [Sample Labeling Tool](https://fott-2-1.azurewebsites.net/).
+The limits apply to usage with the [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [Form Recognizer REST API](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [Form Recognizer Studio](quickstarts/try-v3-form-recognizer-studio.md), and the [Sample Labeling Tool](https://fott-2-1.azurewebsites.net/).
| Quota | Free (F0)<sup>1</sup> | Standard (S0) | |--|--|--| | **Concurrent Request limit** | 1 | 15 (default value) | | Adjustable | No | Yes<sup>2</sup> |
-| **Max document size** | 500 MB | 500 MB |
+| **Max document size** | 4 MB | 500 MB |
| Adjustable | No | No | | **Max number of pages (Analysis)** | 2 | 2000 | | Adjustable | No | No |
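As one possible best practice for handling throttling (not from the article itself), the sketch below retries a direct REST call with exponential backoff when the service returns HTTP 429. It assumes the `requests` package and caller-supplied request details; note that the SDKs already ship with retry policies via `azure-core`.

```python
# Minimal sketch: back off and retry when a request is throttled (HTTP 429).
# Assumes the `requests` package; URL, headers, and body are supplied by the caller.
import time
import requests

def post_with_backoff(url, headers, body, max_retries=5):
    delay = 1.0
    response = None
    for _ in range(max_retries):
        response = requests.post(url, headers=headers, json=body)
        if response.status_code != 429:
            return response
        # Honor the Retry-After header when present; otherwise back off exponentially.
        retry_after = response.headers.get("Retry-After")
        time.sleep(float(retry_after) if retry_after else delay)
        delay *= 2
    return response
```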
applied-ai-services Supervised Table Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/supervised-table-tags.md
Previously updated : 07/11/2022 Last updated : 10/10/2022 #Customer intent: As a user of the Form Recognizer custom model service, I want to ensure I'm training my model in the best way.-
+monikerRange: 'form-recog-2.1.0'
+recommendations: false
# Use table tags to train your custom template model
> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio). > * The v3.0 Studio supports any model trained with v2.1 labeled data. > * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
-> * *See* our [**REST API**](quickstarts/get-started-v3-sdk-rest-api.md) or [**C#**](quickstarts/get-started-v3-sdk-rest-api.md), [**Java**](quickstarts/get-started-v3-sdk-rest-api.md), [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md), or [Python](quickstarts/get-started-v3-sdk-rest-api.md) SDK quickstarts to get started with version v3.0.
+> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with version v3.0.
In this article, you'll learn how to train your custom template model with table tags (labels). Some scenarios require more complex labeling than simply aligning key-value pairs. Such scenarios include extracting information from forms with complex hierarchical structures or encountering items that aren't automatically detected and extracted by the service. In these cases, you can use table tags to train your custom template model.
applied-ai-services Tutorial Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/tutorial-azure-function.md
Next, you'll add your own code to the Python script to call the Form Recognizer
``` > [!IMPORTANT]
- > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). For more information, *see* Cognitive Services [security](../../cognitive-services/cognitive-services-security.md).
+ > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). For more information, *see* Cognitive Services [security](../../cognitive-services/security-features.md).
1. Next, add code to query the service and get the returned data.
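As an illustration of the credential guidance above (not the tutorial's exact code), a short sketch that reads the endpoint and key from the function app's application settings instead of hardcoding them; the setting names and client type are assumptions:

```python
# Minimal sketch: read Form Recognizer credentials from application settings
# instead of hardcoding them. Setting names and client type are assumptions.
import os

from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

endpoint = os.environ["FORM_RECOGNIZER_ENDPOINT"]
key = os.environ["FORM_RECOGNIZER_KEY"]

client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))
```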
applied-ai-services V3 Error Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-error-guide.md
Previously updated : 08/22/2022 Last updated : 10/07/2022
+monikerRange: 'form-recog-3.0.0'
+recommendations: false
# Form Recognizer error guide v3.0
applied-ai-services V3 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-migration-guide.md
Previously updated : 08/22/2022 Last updated : 10/10/2022 recommendations: false
recommendations: false
Form Recognizer v3.0 introduces several new features and capabilities:
-* [Form Recognizer REST API](quickstarts/get-started-v3-sdk-rest-api.md) has been redesigned for better usability.
+* [Form Recognizer REST API](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) has been redesigned for better usability.
* [**General document (v3.0)**](concept-general-document.md) model is a new API that extracts text, tables, structure, and key-value pairs, from forms and documents. * [**Custom neural model (v3.0)**](concept-custom-neural.md) is a new custom model type to extract fields from structured and unstructured documents. * [**Receipt (v3.0)**](concept-receipt.md) model supports single-page hotel receipt processing.
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Previously updated : 09/09/2022 Last updated : 10/10/2022 <!-- markdownlint-disable MD024 -->
Form Recognizer service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and documentation updates.
+## October 2022
+
+With the latest preview release, Form Recognizer's Read (OCR), Layout, and Custom template models support 134 new languages, including Greek, Latvian, Serbian, Thai, Ukrainian, and Vietnamese, along with several Latin- and Cyrillic-script languages, for a total of 299 supported languages across the most recent GA and preview versions. For the full list, see the [supported languages](language-support.md) page.
+
+Use the REST API parameter `api-version=2022-06-30-preview` when using the API or the corresponding SDK to support the new languages in your applications.
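For example, a hedged sketch of passing the preview `api-version` on an analyze request with the `requests` package (the endpoint, key, and document URL are placeholders; check the REST reference for the exact route and request body):

```python
# Minimal sketch: call the v3 analyze operation with the preview api-version.
# Assumes the `requests` package; endpoint, key, and document URL are placeholders.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

response = requests.post(
    f"{endpoint}/formrecognizer/documentModels/prebuilt-read:analyze",
    params={"api-version": "2022-06-30-preview"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"urlSource": "<url-to-your-document>"},
)

# The analyze operation is asynchronous; poll the URL in the Operation-Location header.
operation_url = response.headers.get("Operation-Location")
print(operation_url)
```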
+ ## September 2022 ### Region expansion for training custom neural models
This new release includes the following updates:
* [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) adds new demos for Read, W2, Hotel receipt samples, and support for training the new custom neural models. * [**Language Expansion**](language-support.md) Form Recognizer Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten language support expands to Japanese and Korean.
-Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/get-started-v3-sdk-rest-api.md), or [.NET](quickstarts/get-started-v3-sdk-rest-api.md) SDK for the v3.0 preview API.
+Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [.NET](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK for the v3.0 preview API.
#### Form Recognizer model data extraction
The `BuildModelOperation` and `CopyModelOperation` now correctly populate the `P
* [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) To simplify use of the service, you can now access the Form Recognizer Studio to test the different prebuilt models or label and train a custom model
-Get started with the new [REST API](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm), [Python](quickstarts/get-started-v3-sdk-rest-api.md), or [.NET](quickstarts/get-started-v3-sdk-rest-api.md) SDK for the v3.0 preview API.
+Get started with the new [REST API](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm), [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [.NET](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK for the v3.0 preview API.
#### Form Recognizer model data extraction
For more information about the Form Recognizer Sample Labeling tool, review the
### TLS 1.2 enforcement
-TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure Cognitive Services security](../../cognitive-services/cognitive-services-security.md).
+TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure Cognitive Services security](../../cognitive-services/security-features.md).
## January 2020
applied-ai-services Data Feeds From Different Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/data-feeds-from-different-sources.md
+ Last updated 05/26/2021
The following sections specify the parameters required for all authentication ty
Only the metrics *Name* and *Value* are accepted. For example:
- ``` JSON
+ ```json
{"count":11, "revenue":1.23} ```
The following sections specify the parameters required for all authentication ty
The metrics *Dimensions* and *timestamp* are also accepted. For example:
- ``` JSON
+ ```json
[ {"date": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23}, {"date": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}
The following sections specify the parameters required for all authentication ty
## <span id="cosmosdb">Azure Cosmos DB (SQL)</span>
-* **Connection string**: The connection string to access your Azure Cosmos DB. This can be found in the Azure Cosmos DB resource in the Azure portal, in **Keys**. For more information, see [Secure access to data in Azure Cosmos DB](../../cosmos-db/secure-access-to-data.md).
+* **Connection string**: The connection string to access your Azure Cosmos DB instance. This can be found in the Azure Cosmos DB resource in the Azure portal, in **Keys**. For more information, see [Secure access to data in Azure Cosmos DB](../../cosmos-db/secure-access-to-data.md).
* **Database**: The database to query against. In the Azure portal, under **Containers**, go to **Browse** to find the database. * **Collection ID**: The collection ID to query against. In the Azure portal, under **Containers**, go to **Browse** to find the collection ID. * **SQL query**: A SQL query to get and formulate data into multi-dimensional time series data. You can use the `@IntervalStart` and `@IntervalEnd` variables in your query. They should be formatted as follows: `yyyy-MM-ddTHH:mm:ssZ`.
The following sections specify the parameters required for all authentication ty
Metrics Advisor supports the data schema in the JSON files, as in the following example:
- ``` JSON
+ ```json
[ {"date": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23}, {"date": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}
The following sections specify the parameters required for all authentication ty
Valid messages are as follows:
- ``` JSON
- Single JSON object
+ Single JSON object:
+
+ ```json
{ "metric_1": 234, "metric_2": 344,
The following sections specify the parameters required for all authentication ty
"dimension_2": "name_2" } ```
-
- ``` JSON
- JSON array
+
+ JSON array:
+
+ ```json
[ { "timestamp": "2020-12-12T12:00:00", "temperature": 12.4,
applied-ai-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/glossary.md
-+ Last updated 09/14/2020
This document explains the technical terms used in Metrics Advisor. Use this art
> [!NOTE] > Multiple metrics can share the same data source, and even the same data feed.
-A data feed is what Metrics Advisor ingests from your data source, such as Cosmos DB or a SQL server. A data feed contains rows of:
+A data feed is what Metrics Advisor ingests from your data source, such as Azure Cosmos DB or a SQL server. A data feed contains rows of:
* timestamps
* zero or more dimensions
* one or more measures.
applied-ai-services Onboard Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/onboard-your-data.md
+ Last updated 04/20/2021
Consider the following scenarios:
* *"I need Metrics Advisor to roll up my data by calculating Sum/Max/Min/Avg/Count and represent it by {some string}."*
- Some data sources such as Cosmos DB or Azure Blob Storage do not support certain calculations like *group by* or *cube*. Metrics Advisor provides the roll up option to automatically generate a data cube during ingestion.
+ Some data sources such as Azure Cosmos DB or Azure Blob Storage do not support certain calculations like *group by* or *cube*. Metrics Advisor provides the roll up option to automatically generate a data cube during ingestion.
This option means you need Metrics Advisor to calculate the roll-up using the algorithm you've selected and to use the specified string to represent the roll-up in Metrics Advisor. This won't change any data in your data source. For example, suppose you have a set of time series that represents Sales metrics with the dimensions (Country, Region). For a given timestamp, it might look like the following:
attestation Basic Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/basic-concepts.md
Azure Attestation provides a regional shared provider in every available region.
| West Europe | `https://sharedweu.weu.attest.azure.net` |
| US East 2 | `https://sharedeus2.eus2.attest.azure.net` |
| Central US | `https://sharedcus.cus.attest.azure.net` |
-| South East Asia | `https://sharedsasia.sasia.attest.azure.net` |
| North Central US | `https://sharedncus.ncus.attest.azure.net` |
| South Central US | `https://sharedscus.scus.attest.azure.net` |
| Australia East | `https://sharedeau.eau.attest.azure.net` |
-| Australia SouthEast | `https://sharedsau.sau.attest.azure.net` |
+| Australia SouthEast | `https://sharedsau.sau.attest.azure.net` |
+| South East Asia | `https://sharedsasia.sasia.attest.azure.net` |
+| Japan East | `https://sharedjpe.jpe.attest.azure.net` |
+| Switzerland North | `https://sharedswn.swn.attest.azure.net` |
| US Gov Virginia | `https://sharedugv.ugv.attest.azure.us` |
| US Gov Arizona | `https://shareduga.uga.attest.azure.us` |
+| Central US EUAP | `https://sharedcuse.cuse.attest.azure.net` |
+| East US2 EUAP | `https://sharedeus2e.eus2e.attest.azure.net` |
## Attestation request
automanage Arm Deploy Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/arm-deploy-arc.md
The following ARM template will onboard your specified Azure Arc-enabled server
"resources": [ { "type": "Microsoft.HybridCompute/machines/providers/configurationProfileAssignments",
- "apiVersion": "2021-04-30-preview",
+ "apiVersion": "2022-05-04",
"name": "[concat(parameters('machineName'), '/Microsoft.Automanage/default')]", "properties": { "configurationProfile": "[parameters('configurationProfile')]"
automanage Arm Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/arm-deploy.md
The following ARM template will onboard your specified machine onto Azure Automa
"resources": [ { "type": "Microsoft.Compute/virtualMachines/providers/configurationProfileAssignments",
- "apiVersion": "2021-04-30-preview",
+ "apiVersion": "2022-05-04",
"name": "[concat(parameters('machineName'), '/Microsoft.Automanage/default')]", "properties": { "configurationProfile": "[parameters('configurationProfileName')]"
automanage Automanage Hotpatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-hotpatch.md
Hotpatch is available in all global Azure regions.
## How to get started > [!NOTE]
-> You can preview onboarding Automanage machine best practices during VM creation in the Azure portal using [this link](https://aka.ms/ws2022ae-portal-preview).
+> You can preview onboarding Automanage machine best practices during VM creation in the Azure portal using [this link](https://aka.ms/AzureEdition).
To start using Hotpatch on a new VM, follow these steps: 1. Start creating a new VM from the Azure portal
- * You can preview onboarding Automanage machine best practices during VM creation in the Azure portal using [this link](https://aka.ms/ws2022ae-portal-preview).
+ * You can preview onboarding Automanage machine best practices during VM creation in the Azure portal using [this link](https://aka.ms/AzureEdition).
1. Supply details during VM creation
   * Ensure that a supported _Windows Server Azure Edition_ image is selected in the Image dropdown. Use [this guide](automanage-windows-server-services-overview.md#getting-started-with-windows-server-azure-edition) to determine which images are supported.
   * On the Management tab under section 'Guest OS updates', the checkbox for 'Enable hotpatch' will be selected. Patch orchestration options will be set to 'Azure-orchestrated'.
- * If you create a VM using [this link](https://aka.ms/ws2022ae-portal-preview), on the Management tab under section 'Azure Automanage', select 'Dev/Test' or 'Production' for 'Azure Automanage environment' to evaluate Automanage machine best practices while in preview.
+ * If you create a VM using [this link](https://aka.ms/AzureEdition), on the Management tab under section 'Azure Automanage', select 'Dev/Test' or 'Production' for 'Azure Automanage environment' to evaluate Automanage machine best practices while in preview.
1. Create your new VM
automanage Automanage Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-upgrade.md
Previously updated : 10/20/2021 Last updated : 9/1/2022 # Upgrade your machines to the latest Automanage version
-Automanage released a new version of the machine best practices offering in November 2021. The new API now supports creating custom profiles where you can pick and choose the services and settings you want to apply to your machines. Also, with this new version, the Automanage account is no longer required. This article describes the differences in the versions and how to upgrade.
+Automanage machine best practices now has a generally available (GA) API version. The GA API supports creating custom profiles, where you can pick and choose the services and settings you want to apply to your machines. This article describes the differences between the versions and how to upgrade.
## How to upgrade your machines
-Below are the set of instructions on how to upgrade your machines to the latest API version of Automanage.
+1. In the [Automanage portal](https://aka.ms/automanageportal), if your machine status is **Needs Upgrade** on the Automanage machines tab, please follow these [steps](automanage-upgrade.md#upgrade-your-machines-to-the-latest-automanage-version). You will also see a banner on the Automanage overview page indicating that you need to upgrade your machines.
-### Prerequisites
+ :::image type="content" source="media\automanage-upgrade\overview-blade.png" alt-text="Needs upgrade status.":::
-If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/) before you begin.
+2. Update any onboarding automation to reference the GA API version, 2022-05-04. For instance, if you have saved onboarding templates, update them to reference the GA API version, because the preview versions will no longer be supported. Also, if you have deployed the [Automanage built-in policy](virtual-machines-policy-enable.md) that references the preview APIs, redeploy the built-in policy, which now references the GA API version.
-> [!NOTE]
-> Free trial accounts do not have access to the virtual machines used in this tutorial. Please upgrade to a Pay-As-You-Go subscription.
-
-> [!IMPORTANT]
-> The following Azure RBAC permission is needed to enable Automanage for the first time on a subscription with the new Automanage version: **Owner** role, or **Contributor** along with **User Access Administrator** roles.
-
-### Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com/).
-
-### Check to see which machines need to be upgraded
-
-All machines that need to be upgraded will have the status **Needs upgrade**. You will also see a banner on the Automanage overview page indicating that you need to upgrade you machines.
-
+## Upgrade your machines to the latest Automanage version
+If your machine status is **Needs Upgrade** on the Automanage machines tab, you will need to do the following:
+1. [Disable Automanage on the machine](automanage-upgrade.md#disable-automanage-machines-that-need-to-be-upgraded)
+1. [Re-enable Automanage on the machine](automanage-upgrade.md#re-enable-automanage-on-your-machines)
### Disable Automanage machines that need to be upgraded
In the previous version of Automanage, you selected your Environment type: Dev/T
In the previous version of Automanage, you were able to customize a subset of settings through **Configuration Preferences**. In the latest version of Automanage, customization is enhanced: you can pick and choose each service you want to onboard, and you can modify some settings on those services, through **Custom Profiles**. ### Automanage Account and First party application
-In the previous version of Automanage, the Automanage Account was used as an MSI to preform actions on your machine. However, in the latest version of Automanage, Automanage uses a first party application (Application Id : d828acde-4b48-47f5-a6e8-52460104a052) to order to perform actions on the Automanage machines.
+In the previous version of Automanage, the Automanage Account was used as an MSI to perform actions on your machine. However, in the latest version of Automanage, Automanage uses a first party application (Application ID: d828acde-4b48-47f5-a6e8-52460104a052) in order to perform actions on the Automanage machines.
For both the previous version and the new version of Automanage, you need the following permissions: * If onboarding Automanage for the first time in a subscription, you need **Owner** role, or **Contributor** along with **User Access Administrator** roles.
Get the most frequently asked questions answered in our FAQ.
> [!div class="nextstepaction"] > [Frequently Asked Questions](faq.yml)-
automanage Move Automanaged Configuration Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/move-automanaged-configuration-profile.md
This article describes how to migrate an Automanage Configuration Profile to a d
We'll begin by downloading our previous Configuration Profile using PowerShell. First, perform a `GET` using `Invoke-RestMethod` against the Automanage Resource Provider, substituting the values for your subscription. ```url
-https://management.azure.com/subscriptions/<yourSubscription>/providers/Microsoft.Automanage/configurationProfiles?api-version=2021-04-30-preview
+https://management.azure.com/subscriptions/<yourSubscription>/providers/Microsoft.Automanage/configurationProfiles?api-version=2022-05-04
``` The GET command will display a list of Automanage Configuration Profile information, including the settings and the ConfigurationProfile ID ```azurepowershell-interactive
-$listConfigurationProfilesURI = "https://management.azure.com/subscriptions/<yourSubscription>/providers/Microsoft.Automanage/configurationProfiles?api-version=2021-04-30-preview"
+$listConfigurationProfilesURI = "https://management.azure.com/subscriptions/<yourSubscription>/providers/Microsoft.Automanage/configurationProfiles?api-version=2022-05-04"
Invoke-RestMethod ` -URI $listConfigurationProfilesURI
Here are the results, edited for brevity.
The next step is to do another `GET`, this time to retrieve the specific profile we would like to create in a new region. For this example, we'll retrieve 'testProfile1'. We'll perform a `GET` against the `id` value for the desired profile. ```azurepowershell-interactive
-$profileId = "https://management.azure.com/subscriptions/yourSubscription/resourceGroups/yourResourceGroup/providers/Microsoft.Automanage/configurationProfiles/testProfile1?api-version=2021-04-30-preview"
+$profileId = "https://management.azure.com/subscriptions/yourSubscription/resourceGroups/yourResourceGroup/providers/Microsoft.Automanage/configurationProfiles/testProfile1?api-version=2022-05-04"
$profile = Invoke-RestMethod ` -URI $listConfigurationProfilesURI
automanage Overview About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/overview-about.md
Title: About Azure Automanage
-description: Learn about Azure Automanage for virtual machines.
+ Title: About Azure Automanage Machine Best Practices
+description: Learn about Azure Automanage machine best practices.
# Azure Automanage machine best practices
-This article covers information about Azure Automanage for machine best practices, which have the following benefits:
+This article covers information about Azure Automanage machine best practices, which have the following benefits:
- Intelligently onboards virtual machines to select best practices Azure services - Automatically configures each service per Azure best practices
If you are enabling Automanage for the first time in a subscription:
If you are enabling Automanage on a machine in a subscription that already has Automanage machines: * **Contributor** role on the resource group containing your machines
-The Automanage service will grant **Contributor** permission to this first party application (Automanage API Application Id: d828acde-4b48-47f5-a6e8-52460104a052) to perform actions on Automanaged machines. Guest users will need to have the **directory reader role** assigned to enable Automanage.
+The Automanage service will grant **Contributor** permission to this first party application (Automanage API Application ID: d828acde-4b48-47f5-a6e8-52460104a052) to perform actions on Automanaged machines. Guest users will need to have the **directory reader role** assigned to enable Automanage.
> [!NOTE] > If you want to use Automanage on a VM that is connected to a workspace in a different subscription, you must have the permissions described above on each subscription.
automanage Overview Vm Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/overview-vm-status.md
-# Automanage virtual machine statuses
+# Automanage machine statuses
-In the Azure portal, go to the **Automanage – Azure machine best practices** page which lists all of your automanage machines. Here you will see the overall status of each machine.
+In the Azure portal, go to the **Automanage machine best practices** page which lists all of your automanage machines. Here you will see the overall status of each machine.
[ ![Screenshot of a list of automanaged enabled virtual machines.](./media/automanage-virtual-machines/configured-status.png) ](./media/automanage-virtual-machines/configured-status.png#lightbox) For each listed machine, the following details are displayed: Name, Configuration profile, Status, Resource type, Resource group, Subscription.
-## States of an Automanaged virtual machine
+## States of an Automanaged machine
The **Status** column can display the following states:
- *In progress* - the VM is being configured
- *Conformant* - the VM is configured and no drift is detected
- *Not conformant* - the VM has drifted and Automanage was unable to correct one or more services to the assigned configuration profile
- *Needs upgrade* - the VM is onboarded to an earlier version of Automanage and needs to be [upgraded](automanage-upgrade.md) to the latest version
-- *Unknown* - the Automanage service is unable to determine the desired configuration of the machine. This is usually because the VM agent is not installed or the machine is not running. It can also indicate that the Automanage service does not have the necessary permissions that it needs to determine the desired configuration
+- *Action required* - the Automanage service is unable to determine the desired configuration of the machine. This is usually because the VM agent is not installed or the machine is not running. It can also indicate that the Automanage service does not have the permissions it needs to determine the desired configuration
- *Error* - the Automanage service encountered an error while attempting to determine if the machine conforms with the desired configuration
If you see the **Status** as *Not conformant* or *Error*, you can troubleshoot by clicking on the status in the portal and using the troubleshooting links provided
automanage Virtual Machines Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/virtual-machines-best-practices.md
Title: Azure Automanage for Virtual Machines Best Practices
-description: Learn about the Azure Automanage for virtual machines best practices for services that are automatically onboarded and configured for you.
+ Title: Azure Automanage Machine Best Practices
+description: Learn about the Azure Automanage machine best practices for services that are automatically onboarded and configured for you.
# Azure Automanage for virtual machines best practices
-These Azure services are automatically onboarded for you when you use Automanage for virtual machines. They are essential to our best practices white paper, which you can find in our [Cloud Adoption Framework](/azure/cloud-adoption-framework/manage/azure-server-management).
+These Azure services are automatically onboarded for you when you use Automanage. They are essential to our best practices white paper, which you can find in our [Cloud Adoption Framework](/azure/cloud-adoption-framework/manage/azure-server-management).
For all of these services, we will auto-onboard, auto-configure, monitor for drift, and mediate if drift is detected. To learn more about this process, see [Azure Automanage for virtual machines](overview-about.md).
automanage Virtual Machines Custom Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/virtual-machines-custom-profile.md
The following ARM template will create an Automanage custom profile. Details on
"resources": [ { "type": "Microsoft.Automanage/configurationProfiles",
- "apiVersion": "2021-04-30-preview",
+ "apiVersion": "2022-05-04",
"name": "[parameters('customProfileName')]", "location": "[parameters('location')]", "properties": {
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
automation Enable From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-vm.md
Sign in to the [Azure portal](https://portal.azure.com).
1. In the [Azure portal](https://portal.azure.com), select **Virtual machines** or search for and select **Virtual machines** from the Home page.
-2. Select the VM for which you want to enable Update Management. VMs can exist in any region, no matter the location of your Automation account. You
+2. Select the VM for which you want to enable Update Management. VMs can exist in any region, no matter the location of your Automation account.
3. On the VM page, under **Operations**, select **Guest + host updates**.
azure-app-configuration Enable Dynamic Configuration Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-aspnet-core.md
Title: "Tutorial: Use App Configuration dynamic configuration in ASP.NET Core"
+ Title: "Tutorial: Use dynamic configuration in an ASP.NET Core app"
-description: In this tutorial, you learn how to dynamically update the configuration data for ASP.NET Core apps
+description: In this tutorial, you learn how to dynamically update the configuration data for ASP.NET Core apps.
--
+documentationcenter: ""
+ - ms.devlang: csharp Previously updated : 09/1/2020---
-#Customer intent: I want to dynamically update my app to use the latest configuration data in App Configuration.
Last updated : 09/30/2022++
-# Tutorial: Use dynamic configuration in an ASP.NET Core app
-
-ASP.NET Core has a pluggable configuration system that can read configuration data from a variety of sources. It can handle changes dynamically without causing an application to restart. ASP.NET Core supports the binding of configuration settings to strongly typed .NET classes. It injects them into your code by using `IOptionsSnapshot<T>`, which automatically reloads the application's configuration when the underlying data changes.
-This tutorial shows how you can implement dynamic configuration updates in your code. It builds on the web app introduced in the quickstarts. Before you continue, finish [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md) first.
+# Tutorial: Use dynamic configuration in an ASP.NET Core app
-You can use any code editor to do the steps in this tutorial. [Visual Studio Code](https://code.visualstudio.com/) is an excellent option that's available on the Windows, macOS, and Linux platforms.
+This tutorial shows how you can enable dynamic configuration updates in an ASP.NET Core app. It builds on the web app introduced in the quickstarts. Your app uses the App Configuration provider library for its built-in configuration caching and refreshing capabilities. Before you continue, finish [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md) first.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Set up your application to update its configuration in response to changes in an App Configuration store.
-> * Inject the latest configuration in your application's controllers.
+> * Set up your app to update its configuration in response to changes in an App Configuration store.
+> * Inject the latest configuration into your app.
## Prerequisites
-To do this tutorial, install the [.NET Core SDK](https://dotnet.microsoft.com/download).
--
-Before you continue, finish [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md) first.
+Finish the quickstart: [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md).
## Add a sentinel key
-A *sentinel key* is a special key that you update after you complete the change of all other keys. Your application monitors the sentinel key. When a change is detected, your application refreshes all configuration values. This approach helps to ensure the consistency of configuration in your application and reduces the overall number of requests made to App Configuration, compared to monitoring all keys for changes.
+A *sentinel key* is a key that you update after you complete the change of all other keys. Your app monitors the sentinel key. When a change is detected, your app refreshes all configuration values. This approach helps to ensure the consistency of configuration in your app and reduces the overall number of requests made to your App Configuration store, compared to monitoring all keys for changes.
-1. In the Azure portal, select **Configuration Explorer > Create > Key-value**.
+1. In the Azure portal, open your App Configuration store and select **Configuration Explorer > Create > Key-value**.
1. For **Key**, enter *TestApp:Settings:Sentinel*. For **Value**, enter 1. Leave **Label** and **Content type** blank. 1. Select **Apply**.
-> [!NOTE]
-> If you aren't using a sentinel key, you need to manually register every key you want to monitor.
- ## Reload data from App Configuration
-1. Add a reference to the `Microsoft.Azure.AppConfiguration.AspNetCore` NuGet package by running the following command:
-
- ```dotnetcli
- dotnet add package Microsoft.Azure.AppConfiguration.AspNetCore
- ```
-
-1. Open *Program.cs*, and update the `CreateWebHostBuilder` method to add the `config.AddAzureAppConfiguration()` method.
-
- #### [.NET 5.x](#tab/core5x)
+1. Open *Program.cs*, and update the `AddAzureAppConfiguration` method you added previously during the quickstart.
+ #### [.NET 6.x](#tab/core6x)
```csharp
- public static IHostBuilder CreateHostBuilder(string[] args) =>
- Host.CreateDefaultBuilder(args)
- .ConfigureWebHostDefaults(webBuilder =>
- webBuilder.ConfigureAppConfiguration((hostingContext, config) =>
- {
- var settings = config.Build();
- config.AddAzureAppConfiguration(options =>
- {
- options.Connect(settings["ConnectionStrings:AppConfig"])
- .ConfigureRefresh(refresh =>
- {
- refresh.Register("TestApp:Settings:Sentinel", refreshAll: true)
- .SetCacheExpiration(new TimeSpan(0, 5, 0));
- });
- });
- })
- .UseStartup<Startup>());
+ // Load configuration from Azure App Configuration
+ builder.Configuration.AddAzureAppConfiguration(options =>
+ {
+ options.Connect(connectionString)
+ // Load all keys that start with `TestApp:` and have no label
+ .Select("TestApp:*", LabelFilter.Null)
+ // Configure to reload configuration if the registered sentinel key is modified
+ .ConfigureRefresh(refreshOptions =>
+ refreshOptions.Register("TestApp:Settings:Sentinel", refreshAll: true));
+ });
``` #### [.NET Core 3.x](#tab/core3x)- ```csharp public static IHostBuilder CreateHostBuilder(string[] args) => Host.CreateDefaultBuilder(args) .ConfigureWebHostDefaults(webBuilder =>
- webBuilder.ConfigureAppConfiguration((hostingContext, config) =>
+ {
+ webBuilder.ConfigureAppConfiguration(config =>
{
- var settings = config.Build();
+ //Retrieve the Connection String from the secrets manager
+ IConfiguration settings = config.Build();
+ string connectionString = settings.GetConnectionString("AppConfig");
+
+ // Load configuration from Azure App Configuration
config.AddAzureAppConfiguration(options => {
- options.Connect(settings["ConnectionStrings:AppConfig"])
- .ConfigureRefresh(refresh =>
- {
- refresh.Register("TestApp:Settings:Sentinel", refreshAll: true)
- .SetCacheExpiration(new TimeSpan(0, 5, 0));
- });
+ options.Connect(connectionString)
+ // Load all keys that start with `TestApp:` and have no label
+ .Select("TestApp:*", LabelFilter.Null)
+ // Configure to reload configuration if the registered sentinel key is modified
+ .ConfigureRefresh(refreshOptions =>
+ refreshOptions.Register("TestApp:Settings:Sentinel", refreshAll: true));
});
- })
- .UseStartup<Startup>());
- ```
- #### [.NET Core 2.x](#tab/core2x)
-
- ```csharp
- public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
- WebHost.CreateDefaultBuilder(args)
- .ConfigureAppConfiguration((hostingContext, config) =>
- {
- var settings = config.Build();
-
- config.AddAzureAppConfiguration(options =>
- {
- options.Connect(settings["ConnectionStrings:AppConfig"])
- .ConfigureRefresh(refresh =>
- {
- refresh.Register("TestApp:Settings:Sentinel", refreshAll: true)
- .SetCacheExpiration(new TimeSpan(0, 5, 0));
- });
});
- })
- .UseStartup<Startup>();
+
+ webBuilder.UseStartup<Startup>();
+ });
```
- In the `ConfigureRefresh` method, you register keys within your App Configuration store that you want to monitor for changes. The `refreshAll` parameter to the `Register` method indicates that all configuration values should be refreshed if the registered key changes. The `SetCacheExpiration` method specifies the minimum time that must elapse before a new request is made to App Configuration to check for any configuration changes. In this example, you override the default expiration time of 30 seconds specifying a time of 5 minutes instead. This reduces the potential number of requests made to your App Configuration store.
+ The `Select` method is used to load all key-values whose key name starts with *TestApp:* and that have *no label*. You can call the `Select` method more than once to load configurations with different prefixes or labels. If you share one App Configuration store with multiple apps, this approach helps load configuration only relevant to your current app instead of loading everything from your store.
+
+ In the `ConfigureRefresh` method, you register keys you want to monitor for changes in your App Configuration store. The `refreshAll` parameter to the `Register` method indicates that all configurations you specified by the `Select` method will be reloaded if the registered key changes.
- > [!NOTE]
- > For testing purposes, you may want to lower the cache refresh expiration time.
+ > [!TIP]
+ > You can add a call to the `refreshOptions.SetCacheExpiration` method to specify the minimum time between configuration refreshes. In this example, you use the default value of 30 seconds. Adjust to a higher value if you need to reduce the number of requests made to your App Configuration store.
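    For illustration, here is a minimal sketch of how that call could slot into the .NET 6 registration shown above; the five-minute window is an arbitrary example value, not a recommendation from the article.

    ```csharp
    // Variation of the registration above, adding an explicit cache expiration.
    builder.Configuration.AddAzureAppConfiguration(options =>
    {
        options.Connect(connectionString)
               .Select("TestApp:*", LabelFilter.Null)
               .ConfigureRefresh(refreshOptions =>
                   refreshOptions.Register("TestApp:Settings:Sentinel", refreshAll: true)
                                 // Check App Configuration for changes at most once every 5 minutes
                                 // instead of the default 30 seconds.
                                 .SetCacheExpiration(TimeSpan.FromMinutes(5)));
    });
    ```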
- To actually trigger a configuration refresh, you'll use the App Configuration middleware. You'll see how to do this in a later step.
+1. Add Azure App Configuration middleware to the service collection of your app.
-1. Add a *Settings.cs* file in the Controllers directory that defines and implements a new `Settings` class. Replace the namespace with the name of your project.
+ #### [.NET 6.x](#tab/core6x)
+ Update *Program.cs* with the following code.
```csharp
- namespace TestAppConfig
- {
- public class Settings
- {
- public string BackgroundColor { get; set; }
- public long FontSize { get; set; }
- public string FontColor { get; set; }
- public string Message { get; set; }
- }
- }
- ```
+ // Existing code in Program.cs
+ // ... ...
-1. Open *Startup.cs*, and update the `ConfigureServices` method. Call `Configure<Settings>` to bind configuration data to the `Settings` class. Call `AddAzureAppConfiguration` to add App Configuration components to the service collection of your application.
+ builder.Services.AddRazorPages();
- #### [.NET 5.x](#tab/core5x)
+ // Add Azure App Configuration middleware to the container of services.
+ builder.Services.AddAzureAppConfiguration();
- ```csharp
- public void ConfigureServices(IServiceCollection services)
- {
- services.Configure<Settings>(Configuration.GetSection("TestApp:Settings"));
- services.AddControllersWithViews();
- services.AddAzureAppConfiguration();
- }
+ // Bind configuration "TestApp:Settings" section to the Settings object
+ builder.Services.Configure<Settings>(builder.Configuration.GetSection("TestApp:Settings"));
+
+ var app = builder.Build();
+
+ // The rest of existing code in program.cs
+ // ... ...
```+ #### [.NET Core 3.x](#tab/core3x)
+ Open *Startup.cs*, and update the `ConfigureServices` method.
```csharp public void ConfigureServices(IServiceCollection services) {
- services.Configure<Settings>(Configuration.GetSection("TestApp:Settings"));
- services.AddControllersWithViews();
+ services.AddRazorPages();
+
+ // Add Azure App Configuration middleware to the container of services.
services.AddAzureAppConfiguration();
- }
- ```
- #### [.NET Core 2.x](#tab/core2x)
- ```csharp
- public void ConfigureServices(IServiceCollection services)
- {
+ // Bind configuration "TestApp:Settings" section to the Settings object
services.Configure<Settings>(Configuration.GetSection("TestApp:Settings"));
- services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
- services.AddAzureAppConfiguration();
- }
+ }
```
-1. Update the `Configure` method, and add a call to `UseAzureAppConfiguration`. It enables your application to use the App Configuration middleware to handle the configuration updates for you automatically.
+1. Call the `UseAzureAppConfiguration` method. It enables your app to use the App Configuration middleware to update the configuration for you automatically.
- #### [.NET 5.x](#tab/core5x)
+ #### [.NET 6.x](#tab/core6x)
+ Update *Program.cs* with the following code.
```csharp
- public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
- {
- if (env.IsDevelopment())
- {
- app.UseDeveloperExceptionPage();
- }
- else
- {
- app.UseExceptionHandler("/Home/Error");
- // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
- app.UseHsts();
- }
-
- // Add the following line:
- app.UseAzureAppConfiguration();
+ // Existing code in Program.cs
+ // ... ...
- app.UseHttpsRedirection();
-
- app.UseStaticFiles();
+ var app = builder.Build();
- app.UseRouting();
-
- app.UseAuthorization();
-
- app.UseEndpoints(endpoints =>
- {
- endpoints.MapControllerRoute(
- name: "default",
- pattern: "{controller=Home}/{action=Index}/{id?}");
- });
- }
- ```
- #### [.NET Core 3.x](#tab/core3x)
-
- ```csharp
- public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
- {
- if (env.IsDevelopment())
- {
- app.UseDeveloperExceptionPage();
- }
- else
- {
- app.UseExceptionHandler("/Home/Error");
- // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
- app.UseHsts();
- }
-
- // Add the following line:
- app.UseAzureAppConfiguration();
-
- app.UseHttpsRedirection();
-
- app.UseStaticFiles();
-
- app.UseRouting();
-
- app.UseAuthorization();
-
- app.UseEndpoints(endpoints =>
- {
- endpoints.MapControllerRoute(
- name: "default",
- pattern: "{controller=Home}/{action=Index}/{id?}");
- });
- }
- ```
- #### [.NET Core 2.x](#tab/core2x)
-
- ```csharp
- public void Configure(IApplicationBuilder app, IHostingEnvironment env)
+ if (!app.Environment.IsDevelopment())
{
- app.UseAzureAppConfiguration();
-
- services.Configure<CookiePolicyOptions>(options =>
- {
- options.CheckConsentNeeded = context => true;
- options.MinimumSameSitePolicy = SameSiteMode.None;
- });
-
- app.UseMvc();
+ app.UseExceptionHandler("/Error");
+ app.UseHsts();
}
- ```
-
-
- > [!NOTE]
- > The App Configuration middleware monitors the sentinel key or any other keys you registered for refreshing in the `ConfigureRefresh` call in the previous step. The middleware is triggered upon every incoming request to your application. However, the middleware will only send requests to check the value in App Configuration when the cache expiration time you set has passed. When a change is detected, it will either update all the configuration if the sentinel key is used or update the registered keys' values only.
- > - If a request to App Configuration for change detection fails, your application will continue to use the cached configuration. Another check will be made when the configured cache expiration time has passed again, and there are new incoming requests to your application.
- > - The configuration refresh happens asynchronously to the processing of your application incoming requests. It will not block or slow down the incoming request that triggered the refresh. The request that triggered the refresh may not get the updated configuration values, but subsequent requests will do.
- > - To ensure the middleware is triggered, call `app.UseAzureAppConfiguration()` as early as appropriate in your request pipeline so another middleware will not short-circuit it in your application.
-## Use the latest configuration data
+ // Use Azure App Configuration middleware for dynamic configuration refresh.
+ app.UseAzureAppConfiguration();
-1. Open *HomeController.cs* in the Controllers directory, and add a reference to the `Microsoft.Extensions.Options` package.
-
- ```csharp
- using Microsoft.Extensions.Options;
+ // The rest of existing code in program.cs
+ // ... ...
```
-2. Update the `HomeController` class to receive `Settings` through dependency injection, and make use of its values.
-
- #### [.NET 5.x](#tab/core5x)
+ #### [.NET Core 3.x](#tab/core3x)
+ Update the `Configure` method in *Startup.cs*.
```csharp
- public class HomeController : Controller
+ public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
- private readonly Settings _settings;
- private readonly ILogger<HomeController> _logger;
-
- public HomeController(ILogger<HomeController> logger, IOptionsSnapshot<Settings> settings)
+ if (env.IsDevelopment())
{
- _logger = logger;
- _settings = settings.Value;
+ app.UseDeveloperExceptionPage();
}-
- public IActionResult Index()
+ else
{
- ViewData["BackgroundColor"] = _settings.BackgroundColor;
- ViewData["FontSize"] = _settings.FontSize;
- ViewData["FontColor"] = _settings.FontColor;
- ViewData["Message"] = _settings.Message;
-
- return View();
+ app.UseExceptionHandler("/Error");
+ app.UseHsts();
}
- // ...
- }
- ```
- #### [.NET Core 3.x](#tab/core3x)
-
- ```csharp
- public class HomeController : Controller
- {
- private readonly Settings _settings;
- private readonly ILogger<HomeController> _logger;
+ // Use Azure App Configuration middleware for dynamic configuration refresh.
+ app.UseAzureAppConfiguration();
- public HomeController(ILogger<HomeController> logger, IOptionsSnapshot<Settings> settings)
- {
- _logger = logger;
- _settings = settings.Value;
- }
+ app.UseHttpsRedirection();
+ app.UseStaticFiles();
- public IActionResult Index()
- {
- ViewData["BackgroundColor"] = _settings.BackgroundColor;
- ViewData["FontSize"] = _settings.FontSize;
- ViewData["FontColor"] = _settings.FontColor;
- ViewData["Message"] = _settings.Message;
+ app.UseRouting();
- return View();
- }
+ app.UseAuthorization();
- // ...
- }
- ```
- #### [.NET Core 2.x](#tab/core2x)
-
- ```csharp
- public class HomeController : Controller
- {
- private readonly Settings _settings;
- public HomeController(IOptionsSnapshot<Settings> settings)
+ app.UseEndpoints(endpoints =>
{
- _settings = settings.Value;
- }
-
- public IActionResult Index()
- {
- ViewData["BackgroundColor"] = _settings.BackgroundColor;
- ViewData["FontSize"] = _settings.FontSize;
- ViewData["FontColor"] = _settings.FontColor;
- ViewData["Message"] = _settings.Message;
-
- return View();
- }
+ endpoints.MapRazorPages();
+ });
} ```
- > [!Tip]
- > To learn more about the options pattern when reading configuration values, see [Options Patterns in ASP.NET Core](/aspnet/core/fundamentals/configuration/options).
-3. Open *Index.cshtml* in the Views > Home directory, and replace its content with the following script:
+You've set up your app to use the [options pattern in ASP.NET Core](/aspnet/core/fundamentals/configuration/options) during the quickstart. When the underlying configuration of your app is updated from App Configuration, your strongly typed `Settings` object obtained via `IOptionsSnapshot<T>` is updated automatically.
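As a rough sketch of the consuming side (the page model name and namespace here are illustrative, not prescribed by the tutorial), a Razor Page model can take the bound settings through `IOptionsSnapshot<T>`:

```csharp
using Microsoft.AspNetCore.Mvc.RazorPages;
using Microsoft.Extensions.Options;

namespace TestAppConfig.Pages
{
    public class IndexModel : PageModel
    {
        // Bound to the "TestApp:Settings" section; IOptionsSnapshot<T> is recomputed
        // per request, so it reflects values reloaded by the App Configuration middleware.
        public Settings Settings { get; }

        public IndexModel(IOptionsSnapshot<Settings> options)
        {
            Settings = options.Value;
        }
    }
}
```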
+
+## Request-driven configuration refresh
- ```html
- <!DOCTYPE html>
- <html lang="en">
- <style>
- body {
- background-color: @ViewData["BackgroundColor"]
- }
- h1 {
- color: @ViewData["FontColor"];
- font-size: @ViewData["FontSize"]px;
- }
- </style>
- <head>
- <title>Index View</title>
- </head>
- <body>
- <h1>@ViewData["Message"]</h1>
- </body>
- </html>
- ```
+The configuration refresh is triggered by the incoming requests to your web app. No refresh will occur if your app is idle. When your app is active, the App Configuration middleware monitors the sentinel key, or any other keys you registered for refreshing in the `ConfigureRefresh` call. The middleware is triggered upon every incoming request to your app. However, the middleware will only send requests to check the value in App Configuration when the cache expiration time you set has passed.
+
+- If a request to App Configuration for change detection fails, your app will continue to use the cached configuration. New attempts to check for changes will be made periodically while there are new incoming requests to your app.
+- The configuration refresh happens asynchronously to the processing of your app's incoming requests. It will not block or slow down the incoming request that triggered the refresh. The request that triggered the refresh may not get the updated configuration values, but later requests will get new configuration values.
+- To ensure the middleware is triggered, call the `app.UseAzureAppConfiguration()` method as early as appropriate in your request pipeline so another middleware won't skip it in your app.
## Build and run the app locally
A *sentinel key* is a special key that you update after you complete the change
![Launching quickstart app locally](./media/quickstarts/aspnet-core-app-launch-local-before.png)
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **All resources**, and select the App Configuration store instance that you created in the quickstart.
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **All resources**, and select the App Configuration store that you created in the quickstart.
-1. Select **Configuration Explorer**, and update the values of the following keys. Remember to update the sentinel key at last.
+1. Select **Configuration explorer**, and update the values of the following keys. Remember to update the sentinel key last.
| Key | Value | |||
A *sentinel key* is a special key that you update after you complete the change
| TestApp:Settings:Message | Data from Azure App Configuration - now with live updates! | | TestApp:Settings:Sentinel | 2 |
-1. Refresh the browser page to see the new configuration settings. You may need to refresh more than once for the changes to be reflected, or change your cache expiration time to less than 5 minutes.
+1. Refresh the browser a few times. When the cache expires after 30 seconds, the page shows the updated content.
![Launching updated quickstart app locally](./media/quickstarts/aspnet-core-app-launch-local-after.png)
A *sentinel key* is a special key that you update after you complete the change
In this tutorial, you enabled your ASP.NET Core web app to dynamically refresh configuration settings from App Configuration. To learn how to use an Azure-managed identity to streamline the access to App Configuration, continue to the next tutorial. > [!div class="nextstepaction"]
-> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
+> [Access App Configuration using managed identity](./howto-integrate-azure-managed-service-identity.md)
azure-app-configuration Howto Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-geo-replication.md
Title: Enable geo-replication (preview) description: Learn how to use Azure App Configuration geo replication to create, delete, and manage replicas of your configuration store. -+
+ms.devlang: csharp, java
Previously updated : 8/1/2022- Last updated : 10/10/2022+ #Customer intent: I want to be able to list, create, and delete the replicas of my configuration store.
This article covers replication of Azure App Configuration stores. You'll learn about how to create, use and delete a replica in your configuration store.
-To learn more about the concept of geo-replication, see [Geo-replication in Azure App Configuration](./concept-soft-delete.md).
+To learn more about the concept of geo-replication, see [Geo-replication in Azure App Configuration](./concept-geo-replication.md).
## Prerequisites -- An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
+- An Azure subscription - [create one for free](https://azure.microsoft.com/free)
- We assume you already have an App Configuration store. If you want to create one, [create an App Configuration store](quickstart-aspnet-core-app.md). ## Create and list a replica
To delete a replica in the portal, follow the steps below.
Each replica you create has its dedicated endpoint. If your application resides in multiple geolocations, you can update each deployment of your application in a location to connect to the replica closer to that location, which helps minimize the network latency between your application and App Configuration. Since each replica has its separate request quota, this setup also helps the scalability of your application while it grows to a multi-region distributed service.
-When geo-replication is enabled, and if one replica isn't accessible, you can let your application failover to another replica for improved resiliency. App Configuration provider libraries have built-in failover support by accepting multiple replica endpoints. You can provide a list of your replica endpoints in the order of the most preferred to the least preferred endpoint. When the current endpoint isn't accessible, the provider library will fail over to a less preferred endpoint, but it will try to connect to the more preferred endpoints from time to time. When a more preferred endpoint becomes available, it will switch to it for future requests. You can update your application as the sample code below to take advantage of the failover feature.
+When geo-replication is enabled, and if one replica isn't accessible, you can let your application fail over to another replica for improved resiliency. App Configuration provider libraries have built-in failover support by accepting multiple replica endpoints. You can provide a list of your replica endpoints in the order of the most preferred to the least preferred endpoint. When the current endpoint isn't accessible, the provider library will fail over to a less preferred endpoint, but it will try to connect to the more preferred endpoints from time to time. When a more preferred endpoint becomes available, it will switch to it for future requests.
+
+Assuming you have an application using Azure App Configuration, you can update it as shown in the following sample code to take advantage of the failover feature.
> [!NOTE] > You can only use Azure AD authentication to connect to replicas. Authentication with access keys is not supported during the preview.
-<!-- ### [.NET](#tab/dotnet) -->
+### [.NET](#tab/dotnet)
+
+Edit the call to the `AddAzureAppConfiguration` method, which is often found in the `program.cs` file of your application.
```csharp configurationBuilder.AddAzureAppConfiguration(options =>
configurationBuilder.AddAzureAppConfiguration(options =>
> - `Microsoft.Azure.AppConfiguration.AspNetCore` > - `Microsoft.Azure.AppConfiguration.Functions.Worker`
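As a hedged sketch (not the article's verbatim sample), a multi-endpoint `Connect` call can look like the following. The endpoint URLs are placeholders, and the example assumes the provider's `Connect(IEnumerable<Uri>, TokenCredential)` overload together with `Azure.Identity`.

```csharp
using Azure.Identity;

configurationBuilder.AddAzureAppConfiguration(options =>
{
    // List replica endpoints from most preferred to least preferred.
    var endpoints = new Uri[]
    {
        new Uri("https://<first-replica-endpoint>.azconfig.io"),
        new Uri("https://<second-replica-endpoint>.azconfig.io")
    };

    // Replicas accept only Azure AD authentication during the preview,
    // so a TokenCredential such as DefaultAzureCredential is used.
    options.Connect(endpoints, new DefaultAzureCredential());
});
```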
-<!-- ### [Java Spring](#tab/spring)
-Placeholder for Java Spring instructions
- -->
+### [Java Spring](#tab/spring)
+
+Edit the endpoint configuration in `bootstrap.properties` to use the `endpoints` property, which accepts a list of replica endpoints.
+
+```properties
+spring.cloud.azure.appconfiguration.stores[0].endpoints[0]="https://<first-replica-endpoint>.azconfig.io"
+spring.cloud.azure.appconfiguration.stores[0].endpoints[1]="https://<second-replica-endpoint>.azconfig.io"
+```
+> [!NOTE]
+> The failover support is available if you use version **2.10.0-beta.1** or later of any of the following packages.
+> - `azure-spring-cloud-appconfiguration-config`
+> - `azure-spring-cloud-appconfiguration-config-web`
+> - `azure-spring-cloud-starter-appconfiguration-config`
++ The failover may occur if the App Configuration provider observes the following conditions. - Receives responses with service unavailable status (HTTP status code 500 or above).
azure-app-configuration Quickstart Aspnet Core App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-aspnet-core-app.md
Title: Quickstart for Azure App Configuration with ASP.NET Core | Microsoft Docs description: Create an ASP.NET Core app with Azure App Configuration to centralize storage and management of application settings for an ASP.NET Core application. -+ ms.devlang: csharp Previously updated : 1/3/2022- Last updated : 9/29/2022+ #Customer intent: As an ASP.NET Core developer, I want to learn how to manage all my app settings in one place. # Quickstart: Create an ASP.NET Core app with Azure App Configuration
-In this quickstart, you'll use Azure App Configuration to centralize storage and management of application settings for an ASP.NET Core app. ASP.NET Core builds a single, key-value-based configuration object using settings from one or more data sources specified by an app. These data sources are known as *configuration providers*. Because App Configuration's .NET Core client is implemented as a configuration provider, the service appears like another data source.
+In this quickstart, you'll use Azure App Configuration to externalize storage and management of your app settings for an ASP.NET Core app. ASP.NET Core builds a single, key-value-based configuration object using settings from one or more [configuration providers](/aspnet/core/fundamentals/configuration#configuration-providers). App Configuration offers a .NET configuration provider library. Therefore, you can use App Configuration as an extra configuration source for your app. If you have an existing app, to begin using App Configuration, you'll only need a few small changes to your app startup code.
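As a small, hedged illustration of the "extra configuration source" point (shown in the .NET 6 minimal hosting style used later in this quickstart): once the provider is added, App Configuration values are read through the same `IConfiguration` API as any other source.

```csharp
// After builder.Configuration.AddAzureAppConfiguration(...) has run,
// this key is resolved like any other configuration value, regardless of
// whether it came from appsettings.json, environment variables, or App Configuration.
string? message = builder.Configuration["TestApp:Settings:Message"];
```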
## Prerequisites
In this quickstart, you'll use Azure App Configuration to centralize storage and
[!INCLUDE[Azure App Configuration resource creation steps](../../includes/azure-app-configuration-create.md)]
-7. Select **Operations** > **Configuration explorer** > **Create** > **Key-value** to add the following key-value pairs:
+9. Select **Operations** > **Configuration explorer** > **Create** > **Key-value** to add the following key-value pairs:
| Key | Value | ||-|
- | `TestApp:Settings:BackgroundColor` | *#FFF* |
- | `TestApp:Settings:FontColor` | *#000* |
+ | `TestApp:Settings:BackgroundColor` | *white* |
+ | `TestApp:Settings:FontColor` | *black* |
| `TestApp:Settings:FontSize` | *24* | | `TestApp:Settings:Message` | *Data from Azure App Configuration* |
In this quickstart, you'll use Azure App Configuration to centralize storage and
## Create an ASP.NET Core web app
-Use the [.NET Core command-line interface (CLI)](/dotnet/core/tools) to create a new ASP.NET Core MVC project. The [Azure Cloud Shell](https://shell.azure.com) provides these tools for you. They're also available across the Windows, macOS, and Linux platforms.
+Use the [.NET Core command-line interface (CLI)](/dotnet/core/tools) to create a new ASP.NET Core web app project. The [Azure Cloud Shell](https://shell.azure.com) provides these tools for you. They're also available across the Windows, macOS, and Linux platforms.
-Run the following command to create an ASP.NET Core MVC project in a new *TestAppConfig* folder:
+Run the following command to create an ASP.NET Core web app in a new *TestAppConfig* folder:
+#### [.NET 6.x](#tab/core6x)
```dotnetcli
-dotnet new mvc --no-https --output TestAppConfig
+dotnet new webapp --output TestAppConfig --framework net6.0
```
+#### [.NET Core 3.x](#tab/core3x)
+```dotnetcli
+dotnet new webapp --output TestAppConfig --framework netcoreapp3.1
+```
+ ## Connect to the App Configuration store
-1. Run the following command to add a [Microsoft.Azure.AppConfiguration.AspNetCore](https://www.nuget.org/packages/Microsoft.Azure.AppConfiguration.AspNetCore) NuGet package reference:
+1. Navigate into the project's directory *TestAppConfig*, and run the following command to add a [Microsoft.Azure.AppConfiguration.AspNetCore](https://www.nuget.org/packages/Microsoft.Azure.AppConfiguration.AspNetCore) NuGet package reference:
```dotnetcli dotnet add package Microsoft.Azure.AppConfiguration.AspNetCore ```
-1. Run the following command in the same directory as the *.csproj* file. The command uses Secret Manager to store a secret named `ConnectionStrings:AppConfig`, which stores the connection string for your App Configuration store. Replace the `<your_connection_string>` placeholder with your App Configuration store's connection string. You can find the connection string under **Access Keys** in the Azure portal.
+1. Run the following command. The command uses [Secret Manager](/aspnet/core/security/app-secrets) to store a secret named `ConnectionStrings:AppConfig`, which stores the connection string for your App Configuration store. Replace the `<your_connection_string>` placeholder with your App Configuration store's connection string. You can find the connection string under **Access Keys** of your App Configuration store in the Azure portal.
```dotnetcli
+ dotnet user-secrets init
dotnet user-secrets set ConnectionStrings:AppConfig "<your_connection_string>" ```
- > [!IMPORTANT]
+ > [!TIP]
> Some shells will truncate the connection string unless it's enclosed in quotes. Ensure that the output of the `dotnet user-secrets list` command shows the entire connection string. If it doesn't, rerun the command, enclosing the connection string in quotes.
- Secret Manager is used only to test the web app locally. When the app is deployed to [Azure App Service](https://azure.microsoft.com/services/app-service/web), use the **Connection Strings** application setting in App Service instead of Secret Manager to store the connection string.
-
- Access this secret using the .NET Core Configuration API. A colon (`:`) works in the configuration name with the Configuration API on all supported platforms. For more information, see [Configuration keys and values](/aspnet/core/fundamentals/configuration#configuration-keys-and-values).
+ Secret Manager stores the secret outside of your project tree, which helps prevent the accidental sharing of secrets within source code. It's used only to test the web app locally. When the app is deployed to Azure, such as to [App Service](/azure/app-service/overview), use *Connection strings*, *Application settings*, or environment variables to store the connection string. Alternatively, to avoid connection strings altogether, you can [connect to App Configuration using managed identities](./howto-integrate-azure-managed-service-identity.md) or your other [Azure AD identities](./concept-enable-rbac.md).
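    As a small, hedged illustration of why the same lookup works in both cases: ASP.NET Core's default configuration maps an environment variable named `ConnectionStrings__AppConfig` (double underscore) to the same `ConnectionStrings:AppConfig` key that Secret Manager sets locally.

    ```csharp
    // Resolves "ConnectionStrings:AppConfig" from whichever source provides it:
    // Secret Manager during local development, or an App Service connection string /
    // ConnectionStrings__AppConfig environment variable once deployed.
    string connectionString = builder.Configuration.GetConnectionString("AppConfig");
    ```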
-1. Select the correct syntax based on your environment.
+1. Open *Program.cs*, and add Azure App Configuration as an extra configuration source by calling the `AddAzureAppConfiguration` method.
#### [.NET 6.x](#tab/core6x)
- In *Program.cs*, replace its content with the following code:
- ```csharp var builder = WebApplication.CreateBuilder(args);
- //Retrieve the Connection String from the secrets manager
- var connectionString = builder.Configuration.GetConnectionString("AppConfig");
-
- builder.Host.ConfigureAppConfiguration(builder =>
- {
- //Connect to your App Config Store using the connection string
- builder.AddAzureAppConfiguration(connectionString);
- })
- .ConfigureServices(services =>
- {
- services.AddControllersWithViews();
- });
- var app = builder.Build();
+ // Retrieve the connection string
+ string connectionString = builder.Configuration.GetConnectionString("AppConfig");
+
+ // Load configuration from Azure App Configuration
+ builder.Configuration.AddAzureAppConfiguration(connectionString);
+
+ // The rest of existing code in program.cs
+ // ... ...
+ ```
+
+ #### [.NET Core 3.x](#tab/core3x)
+ Update the `CreateHostBuilder` method.
- // Configure the HTTP request pipeline.
- if (!app.Environment.IsDevelopment())
+ ```csharp
+ public static IHostBuilder CreateHostBuilder(string[] args) =>
+ Host.CreateDefaultBuilder(args)
+ .ConfigureWebHostDefaults(webBuilder =>
+ {
+ webBuilder.ConfigureAppConfiguration(config =>
+ {
+ // Retrieve the connection string
+ IConfiguration settings = config.Build();
+ string connectionString = settings.GetConnectionString("AppConfig");
+
+ // Load configuration from Azure App Configuration
+ config.AddAzureAppConfiguration(connectionString);
+ });
+
+ webBuilder.UseStartup<Startup>();
+ });
+ ```
+
+
+ This code will connect to your App Configuration store using a connection string and load *all* key-values that have *no labels*. For more information on the App Configuration provider, see the [App Configuration provider API reference](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration).
+
+## Read from the App Configuration store
+
+In this example, you'll update a web page to display its content using the settings you configured in your App Configuration store.
+
+1. Add a *Settings.cs* file at the root of your project directory. It defines a strongly typed `Settings` class for the configuration you're going to use. Replace the namespace with the name of your project.
+
+ ```csharp
+ namespace TestAppConfig
{
- app.UseExceptionHandler("/Home/Error");
+ public class Settings
+ {
+ public string BackgroundColor { get; set; }
+ public long FontSize { get; set; }
+ public string FontColor { get; set; }
+ public string Message { get; set; }
+ }
}
- app.UseStaticFiles();
-
- app.UseRouting();
-
- app.UseAuthorization();
-
- app.MapControllerRoute(
- name: "default",
- pattern: "{controller=Home}/{action=Index}/{id?}");
-
- app.Run();
```
-
- #### [.NET 5.x](#tab/core5x)
-
- 1. In *Program.cs*, add a reference to the .NET Core Configuration API namespace:
-
- ```csharp
- using Microsoft.Extensions.Configuration;
- ```
- 1. Update the `CreateHostBuilder` method to use App Configuration by calling the `AddAzureAppConfiguration` method.
-
- ```csharp
- public static IHostBuilder CreateHostBuilder(string[] args) =>
- Host.CreateDefaultBuilder(args)
- .ConfigureWebHostDefaults(webBuilder =>
- webBuilder.ConfigureAppConfiguration(config =>
- {
- var settings = config.Build();
- var connection = settings.GetConnectionString("AppConfig");
- config.AddAzureAppConfiguration(connection);
- }).UseStartup<Startup>());
- ```
- #### [.NET Core 3.x](#tab/core3x)
+1. Bind the `TestApp:Settings` section in configuration to the `Settings` object.
- > [!IMPORTANT]
- > `CreateHostBuilder` in .NET 3.x replaces `CreateWebHostBuilder` in .NET Core 2.x.
+ #### [.NET 6.x](#tab/core6x)
+ Update *Program.cs* with the following code.
- 1. In *Program.cs*, add a reference to the .NET Core Configuration API namespace:
-
- ```csharp
- using Microsoft.Extensions.Configuration;
- ```
- 1. Update the `CreateHostBuilder` method to use App Configuration by calling the `AddAzureAppConfiguration` method.
-
- ```csharp
- public static IHostBuilder CreateHostBuilder(string[] args) =>
- Host.CreateDefaultBuilder(args)
- .ConfigureWebHostDefaults(webBuilder =>
- webBuilder.ConfigureAppConfiguration(config =>
- {
- var settings = config.Build();
- var connection = settings.GetConnectionString("AppConfig");
- config.AddAzureAppConfiguration(connection);
- }).UseStartup<Startup>());
- ```
-
- #### [.NET Core 2.x](#tab/core2x)
-
- 1. In *Program.cs*, add a reference to the .NET Core Configuration API namespace:
-
- ```csharp
- using Microsoft.Extensions.Configuration;
- ```
+ ```csharp
+ // Existing code in Program.cs
+ // ... ...
+
+ builder.Services.AddRazorPages();
+
+ // Bind configuration "TestApp:Settings" section to the Settings object
+ builder.Services.Configure<Settings>(builder.Configuration.GetSection("TestApp:Settings"));
+
+ var app = builder.Build();
+
+ // The rest of existing code in program.cs
+ // ... ...
+ ```
- 1. Update the `CreateWebHostBuilder` method to use App Configuration by calling the `AddAzureAppConfiguration` method.
+ #### [.NET Core 3.x](#tab/core3x)
+ Open *Startup.cs* and update the `ConfigureServices` method.
- ```csharp
- public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
- WebHost.CreateDefaultBuilder(args)
- .ConfigureAppConfiguration(config =>
- {
- var settings = config.Build();
- var connection = settings.GetConnectionString("AppConfig");
- config.AddAzureAppConfiguration(connection);
- })
- .UseStartup<Startup>();
- ```
--
-This code will connect to your App Configuration store using a connection string and load all key-values. For more information on the configuration provider APIs, reference the [configuration provider for App Configuration docs](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration).
+ ```csharp
+ public void ConfigureServices(IServiceCollection services)
+ {
+ services.AddRazorPages();
-## Read from the App Configuration store
+ // Bind configuration "TestApp:Settings" section to the Settings object
+ services.Configure<Settings>(Configuration.GetSection("TestApp:Settings"));
+ }
+ ```
+
-Complete the following steps to read and display values stored in the App Configuration store. The .NET Core Configuration API will be used to access the store. Razor syntax will be used to display the keys' values.
+1. Open *Index.cshtml.cs* in the *Pages* directory, and update the `IndexModel` class with the following code. Add a `using Microsoft.Extensions.Options` directive at the beginning of the file, if it's not already there.
-Open *\<app root>/Views/Home/Index.cshtml*, and replace its content with the following code:
+ ```csharp
+ public class IndexModel : PageModel
+ {
+ private readonly ILogger<IndexModel> _logger;
-```cshtml
-@using Microsoft.Extensions.Configuration
-@inject IConfiguration Configuration
+ public Settings Settings { get; }
-<style>
- body {
- background-color: @Configuration["TestApp:Settings:BackgroundColor"]
+ public IndexModel(IOptionsSnapshot<Settings> options, ILogger<IndexModel> logger)
+ {
+ Settings = options.Value;
+ _logger = logger;
+ }
}
- h1 {
- color: @Configuration["TestApp:Settings:FontColor"];
- font-size: @Configuration["TestApp:Settings:FontSize"]px;
+ ```
+
+1. Open *Index.cshtml* in the *Pages* directory, and update the content with the following code.
+
+ ```html
+ @page
+ @model IndexModel
+ @{
+ ViewData["Title"] = "Home page";
}
-</style>
-<h1>@Configuration["TestApp:Settings:Message"]</h1>
-```
+ <style>
+ body {
+ background-color: @Model.Settings.BackgroundColor;
+ }
-In the preceding code, the App Configuration store's keys are used as follows:
+ h1 {
+ color: @Model.Settings.FontColor;
+ font-size: @(Model.Settings.FontSize)px;
+ }
+ </style>
-* The `TestApp:Settings:BackgroundColor` key's value is assigned to the CSS `background-color` property.
-* The `TestApp:Settings:FontColor` key's value is assigned to the CSS `color` property.
-* The `TestApp:Settings:FontSize` key's value is assigned to the CSS `font-size` property.
-* The `TestApp:Settings:Message` key's value is displayed as a heading.
+ <h1>@Model.Settings.Message</h1>
+ ```
## Build and run the app locally
In the preceding code, the App Configuration store's keys are used as follows:
dotnet run ```
-1. If you're working on your local machine, use a browser to navigate to `http://localhost:5000` or as specified in the command output. This address is the default URL for the locally hosted web app. If you're working in the Azure Cloud Shell, select the **Web Preview** button followed by **Configure**.
+1. Open a browser and navigate to the URL the app is listening on, as specified in the command output. It looks like `https://localhost:5001`.
+
+ If you're working in the Azure Cloud Shell, select the *Web Preview* button followed by *Configure*. When prompted to configure the port for preview, enter *5000*, and select *Open and browse*.
![Locate the Web Preview button](./media/quickstarts/cloud-shell-web-preview.png)
- When prompted to configure the port for preview, enter *5000* and select **Open and browse**. The web page will read "Data from Azure App Configuration."
+ The web page will look like this:
+ ![Launching quickstart app locally](./media/quickstarts/aspnet-core-app-launch-local-before.png)
## Clean up resources
In the preceding code, the App Configuration store's keys are used as follows:
In this quickstart, you: * Provisioned a new App Configuration store.
-* Registered the App Configuration store's .NET Core configuration provider.
-* Read the App Configuration store's keys with the configuration provider.
-* Displayed the App Configuration store's key values using Razor syntax.
+* Connected to your App Configuration store using the App Configuration provider library.
+* Read your App Configuration store's key-values with the configuration provider library.
+* Displayed a web page using the settings you configured in your App Configuration store.
-To learn how to configure your ASP.NET Core app to dynamically refresh configuration settings, continue to the next tutorial.
+To learn how to configure your ASP.NET Core web app to dynamically refresh configuration settings, continue to the next tutorial.
> [!div class="nextstepaction"] > [Enable dynamic configuration](./enable-dynamic-configuration-aspnet-core.md)
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
azure-arc Deploy Telemetry Router https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-telemetry-router.md
**What is the Arc Telemetry Router?**
-The Arc telemetry router enables exporting the collected monitoring telemetry data to other monitoring solutions. For this Public Preview we only support exporting log data to either Kafka or Elasticsearch.
+The Arc telemetry router enables exporting the collected monitoring telemetry data to other monitoring solutions. For this Public Preview, log data can be exported to either Kafka or Elasticsearch, and metric data can be exported only to Kafka.
This document specifies how to deploy the telemetry router and configure it to work with the supported exporters.
General Exporter Settings
|--|--| | endpoint | Endpoint of the monitoring solution to export to | | certificateName | The client certificate in order to export to the monitoring solution |
-| caCertificateName | The cluster's Certificate Authority certificate for the Exporter |
+| caCertificateName | The cluster's Certificate Authority or customer-provided certificate for the Exporter |
Kafka Exporter Settings
Elasticsearch Exporter Settings
| Setting | Description | |--|--|
-| index | This can be the name of an index or datastream name to publish events to |
+| index | The name of the index or data stream to publish events to |
### Pipelines
-During the Public Preview, only logs pipelines are supported. These are exposed in the custom resource specification of the Arc telemetry router and available for modification. Currently, we do not allow configuration of receivers and processors in these pipelines - only exporters are changeable. All pipelines must be prefixed with "logs" in order to be injected with the necessary receivers and processors. e.g., `logs/internal`
+During the Public Preview, only logs and metrics pipelines are supported. These pipelines are exposed in the custom resource specification of the Arc telemetry router and are available for modification; currently, only the exporters are configurable. All pipelines must be prefixed with "logs" or "metrics" in order to be injected with the necessary receivers and processors. For example, `logs/internal` or `metrics/internal`.
+
+Logs pipelines may export to Kafka or Elasticsearch. Metrics pipelines may only export to Kafka.
Pipeline Settings | Setting | Description | |--|--| | logs | Can only declare new logs pipelines. Must be prefixed with "logs" |
+| metrics | Can only declare new metrics pipelines. Must be prefixed with "metrics" |
| exporters | List of exporters. Can be multiple of the same type. | ### Credentials
Pipeline Settings
### Example TelemetryRouter Specification: ```yaml
-apiVersion: arcdata.microsoft.com/v1beta1
+apiVersion: arcdata.microsoft.com/v1beta2
kind: TelemetryRouter metadata: name: arc-telemetry-router
metadata:
spec: collector: customerPipelines:
- # Only logs pipelines are supported for the first preview.
- # Any additional logs pipelines, must be prefixed with "logs"
- # e.g. logs/internal, logs/external, etc.
+ # Additional logs pipelines must be prefixed with "logs"
+ # For example: logs/internal, logs/external, etc.
logs: # The name of these exporters need to map to the declared ones beneath # the exporters property.
+ # logs pipelines can export to elasticsearch or kafka
exporters: - elasticsearch - kafka
+ # Additional metrics pipelines must be prefixed with "metrics"
+ # For example: metrics/internal, metrics/external, etc.
+ metrics:
+ # The name of these exporters need to map to the declared ones beneath
+ # the exporters property.
+ # metrics pipelines can export to kafka only
+ exporters:
+ - kafka
exporters: # Only elasticsearch and kafka exporters are supported for this first preview. # Any additional exporters of those types must be prefixed with the name
spec:
> [!NOTE] > The telemetry router currently supports indirect mode only.
-Once you have your cluster and Azure CLI setup correctly, to deploy the telemetry router, you must create the *DataController* custom resource. Then, set the `enableOpenTelemetry` flag on its spec to `true`. This is a temporary feature flag that must be enabled.
+Once you have your cluster and Azure CLI set up correctly, deploy the telemetry router by creating the *DataController* custom resource with the `enableOpenTelemetry` flag on its spec set to `true`. This flag is a temporary feature flag that must be enabled.
-To do this, follow the [normal configuration profile instructions](create-custom-configuration-template.md). After you have created your configuration profile, add the monitoring property with the `enableOpenTelemetry` flag set to `true`. You can do this by running the following commends in the az CLI:
+To set the feature flag, follow the [normal configuration profile instructions](create-custom-configuration-template.md). After you have created your configuration profile, add the monitoring property with the `enableOpenTelemetry` flag set to `true`. You can set the feature flag by running the following commands in the Azure CLI:
```bash az arcdata dc config add --path ./output/control.json --json-values ".spec.monitoring={}"
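# Assumption: the feature flag lives at .spec.monitoring.enableOpenTelemetry in the same config file
az arcdata dc config add --path ./output/control.json --json-values ".spec.monitoring.enableOpenTelemetry=true"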
spec:
Then deploy the data controller as normal in the [Deployment Instructions](create-data-controller-indirect-cli.md?tabs=linux)
-When the data controller is deployed, it also deploys a default TelemetryRouter custom resource at the end of the data controller creation. Use the following command to verify that it exists:
+When the data controller is deployed, a default TelemetryRouter custom resource is also created as part of the data controller creation. The controller pod is marked ready only after both custom resources have finished deploying. Use the following command to verify that the TelemetryRouter exists:
```bash kubectl describe telemetryrouter arc-telemetry-router -n <namespace> ``` ```yaml
-apiVersion: arcdata.microsoft.com/v1beta1
+apiVersion: arcdata.microsoft.com/v1beta2
kind: TelemetryRouter metadata:
- creationTimestamp: "2022-09-08T16:54:04Z"
- generation: 1
name: arc-telemetry-router namespace: <namespace>
- ownerReferences:
- - apiVersion: arcdata.microsoft.com/v5
- controller: true
- kind: DataController
- name: datacontroller-arc
- uid: 9c0443d8-1cc3-4c40-b600-3552272b3d3e
- resourceVersion: "15000547"
- uid: 3349f73a-0904-4063-a501-d92bd6d3e66e
spec: collector: customerPipelines:
apiVersion: arcdata.microsoft.com/v1beta1
certificates: - certificateName: arcdata-msft-elasticsearch-exporter-internal - certificateName: cluster-ca-certificate
- status:
- lastUpdateTime: "2022-09-08T16:54:05.042806Z"
- observedGeneration: 1
- runningVersion: v1.11.0_2022-09-13
- state: Ready
- ```
-We are exporting logs to our deployment of Elasticsearch in the Arc cluster. You can see the index, service endpoint, and certificates it is using to do so. This is provided as an example in the deployment, so you can see how to export to your own monitoring solutions.
+By default, logs are exported to the deployment of Elasticsearch in the Arc cluster. When you deploy the telemetry router, two OtelCollector custom resources are created. You can see the index, service endpoint, and certificates that are used for the export. This default configuration is provided as an example, so you can see how to export to your own monitoring solutions.
-You can run the following command to see the detailed deployment of the child collector that is receiving logs and exporting to Elasticsearch:
+You can run the following commands to see the detailed deployment of the two child collectors that receive the logs and metrics and export them:
```bash
-kubectl describe otelcollector collector -n <namespace>
+kubectl describe otelcollector collector-inbound -n <namespace>
+kubectl describe otelcollector collector-outbound -n <namespace>
```
+The first of the two OtelCollector custom resources is the inbound collector, dedicated to the inbound telemetry layer. The inbound collector receives the logs and metrics, then exports them to a Kafka custom resource.
+ ```yaml
-apiVersion: arcdata.microsoft.com/v1beta1
- kind: OtelCollector
- metadata:
- creationTimestamp: "2022-09-08T16:54:04Z"
- generation: 1
- name: collector
- namespace: <namespace>
- ownerReferences:
- - apiVersion: arcdata.microsoft.com/v1beta1
- controller: true
- kind: TelemetryRouter
- name: arc-telemetry-router
- uid: <uid>
- resourceVersion: "15000654"
- uid: <uid>
- spec:
- collector:
- exporters:
- elasticsearch/arcdata/msft/internal:
- endpoints:
- - https://logsdb-svc:9200
- index: logstash-otel
- tls:
- ca_file: cluster-ca-certificate
- cert_file: arcdata-msft-elasticsearch-exporter-internal
- key_file: arcdata-msft-elasticsearch-exporter-internal
- extensions:
- memory_ballast:
- size_mib: 683
- processors:
- batch:
- send_batch_max_size: 500
- send_batch_size: 100
- timeout: 10s
- memory_limiter:
- check_interval: 5s
- limit_mib: 1500
- spike_limit_mib: 512
- receivers:
- fluentforward:
- endpoint: 0.0.0.0:8006
- service:
- extensions:
- - memory_ballast
- pipelines:
- logs:
- exporters:
- - elasticsearch/arcdata/msft/internal
- processors:
- - memory_limiter
- - batch
- receivers:
- - fluentforward
- credentials:
- certificates:
- - certificateName: arcdata-msft-elasticsearch-exporter-internal
- - certificateName: cluster-ca-certificate
- status:
- lastUpdateTime: "2022-09-08T16:54:56.923140Z"
- observedGeneration: 1
- runningVersion: v1.11.0_2022-09-13
- state: Ready
+Name: collector-inbound
+Namespace: <namespace>
+Labels: <none>
+Annotations: <none>
+Is Valid: true
+API Version: arcdata.microsoft.com/v1beta2
+Kind: OtelCollector
+Spec:
+ Collector:
+ Exporters:
+ kafka/arcdata/msft/logs:
+ Brokers: kafka-broker-svc:9092
+ Encoding: otlp_proto
+ protocol_version: 2.0.0
+ Tls:
+ ca_file: cluster-ca-certificate
+ cert_file: arcdata-msft-kafka-exporter-internal
+ key_file: arcdata-msft-kafka-exporter-internal
+ Topic: arcdata.microsoft.com.logs
+ kafka/arcdata/msft/metrics:
+ Brokers: kafka-broker-svc:9092
+ Encoding: otlp_proto
+ protocol_version: 2.0.0
+ Tls:
+ ca_file: cluster-ca-certificate
+ cert_file: arcdata-msft-kafka-exporter-internal
+ key_file: arcdata-msft-kafka-exporter-internal
+ Topic: arcdata.microsoft.com.metrics
+ Extensions:
+ memory_ballast:
+ size_mib: 683
+ Limits: <nil>
+ Processors:
+ Batch:
+ send_batch_max_size: 500
+ send_batch_size: 100
+ Timeout: 10s
+ memory_limiter:
+ check_interval: 5s
+ limit_mib: 1500
+ spike_limit_mib: 512
+ Receivers:
+ Collectd:
+ Endpoint: 0.0.0.0:8003
+ Fluentforward:
+ Endpoint: 0.0.0.0:8002
+ Requests: <nil>
+ Service:
+ Extensions:
+ memory_ballast
+ Pipelines:
+ Logs:
+ Exporters:
+ kafka/arcdata/msft/logs
+ Processors:
+ memory_limiter
+ batch
+ Receivers:
+ fluentforward
+ Metrics:
+ Exporters:
+ kafka/arcdata/msft/metrics
+ Processors:
+ memory_limiter
+ batch
+ Receivers:
+ collectd
+ Storage: <nil>
+ Credentials:
+ Certificates:
+ Certificate Name: arcdata-msft-kafka-exporter-internal
+ Secret Name: <secret>
+ Secret Namespace: <secret namespace>
+ Update: <nil>
+Events: <none>
```
-The purpose of this child resource is to provide a visual representation of the inner configuration of the collector, and you should see it in a *Ready* state. For modification, all updates should go through its parent resource, the TelemetryRouter custom resource.
-
-If you look at the pods, you should see an otel-collector-0 pod there as well:
+The second of the two OtelCollector custom resources is the outbound collector, dedicated to the outbound telemetry layer. The outbound collector receives the logs and metrics data from the Kafka custom resource. Those logs and metrics can then be exported to the customer's monitoring solutions, such as Kafka or Elasticsearch.
-```bash
-kubectl get pods -n <namespace>
+```yaml
+Name: collector-outbound
+Namespace: arc
+Labels: <none>
+Annotations: <none>
+Is Valid: true
+API Version: arcdata.microsoft.com/v1beta2
+Kind: OtelCollector
+Spec:
+ Collector:
+ Exporters:
+ elasticsearch/arcdata/msft/internal:
+ Endpoints:
+ https://logsdb-svc:9200
+ Index: logstash-otel
+ Tls:
+ ca_file: cluster-ca-certificate
+ cert_file: arcdata-msft-elasticsearch-exporter-internal
+ key_file: arcdata-msft-elasticsearch-exporter-internal
+ Extensions:
+ memory_ballast:
+ size_mib: 683
+ Limits: <nil>
+ Processors:
+ Batch:
+ send_batch_max_size: 500
+ send_batch_size: 100
+ Timeout: 10s
+ memory_limiter:
+ check_interval: 5s
+ limit_mib: 1500
+ spike_limit_mib: 512
+ Receivers:
+ kafka/arcdata/msft/logs:
+ Auth:
+ Tls:
+ ca_file: cluster-ca-certificate
+ cert_file: arcdata-msft-kafka-receiver-internal
+ key_file: arcdata-msft-kafka-receiver-internal
+ Brokers: kafka-broker-svc:9092
+ Encoding: otlp_proto
+ protocol_version: 2.0.0
+ Topic: arcdata.microsoft.com.logs
+ kafka/arcdata/msft/metrics:
+ Auth:
+ Tls:
+ ca_file: cluster-ca-certificate
+ cert_file: arcdata-msft-kafka-receiver-internal
+ key_file: arcdata-msft-kafka-receiver-internal
+ Brokers: kafka-broker-svc:9092
+ Encoding: otlp_proto
+ protocol_version: 2.0.0
+ Topic: arcdata.microsoft.com.metrics
+ Requests: <nil>
+ Service:
+ Extensions:
+ memory_ballast
+ Pipelines:
+ Logs:
+ Exporters:
+ elasticsearch/arcdata/msft/internal
+ Processors:
+ memory_limiter
+ batch
+ Receivers:
+ kafka/arcdata/msft/logs
+ Storage: <nil>
+ Credentials:
+ Certificates:
+ Certificate Name: arcdata-msft-kafka-receiver-internal
+ Secret Name: <secret>
+ Secret Namespace: <secret namespace>
+ Certificate Name: arcdata-msft-elasticsearch-exporter-internal
+ Secret Name: <secret>
+ Secret Namespace: <secret namespace>
+ Certificate Name: cluster-ca-certificate
+ Secret Name: <secret>
+ Secret Namespace: <secret namespace>
+ Update: <nil>
+Events: <none>
-NAME READY STATUS RESTARTS AGE
-arc-bootstrapper-job-r4m45 0/1 Completed 0 9m5s
-arc-webhook-job-7d443-lf9ws 0/1 Completed 0 9m3s
-bootstrapper-96b5c4fc7-kvxgq 1/1 Running 0 9m3s
-control-l5j2c 2/2 Running 0 8m46s
-controldb-0 2/2 Running 0 8m46s
-logsdb-0 3/3 Running 0 7m51s
-logsui-rx746 3/3 Running 0 6m9s
-metricsdb-0 2/2 Running 0 7m51s
-metricsdc-6g66g 2/2 Running 0 7m51s
-metricsui-jg25t 2/2 Running 0 7m51s
-otel-collector-0 2/2 Running 0 5m4s
```
-To verify that the exporting of the logs is happening correctly, you can inspect the logs of the collector or look at Elasticsearch and verify.
+After you deploy the telemetry router, both OtelCollector custom resources should be in a *Ready* state. To modify them, make all updates through their parent resource, the TelemetryRouter custom resource.
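To check their state, you can list the collector custom resources directly (a quick check using the same resource type and namespace as above):

```bash
kubectl get otelcollector -n <namespace>
```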
-To look at the logs of the collector, you will need to exec into the container run the following command:
+If you look at the pods, you should see the two collector pods, `arctc-collector-inbound-0` and `arctc-collector-outbound-0`, as well as the `kafka-server-0` pod.
```bash
- kubectl exec -it otel-collector-0 -c otel-collector -- /bin/bash -n <namespace>
-
-cd /var/log/opentelemetry-collector/
-```
-
-If you look at the logs files, you should see successful POSTs to Elasticsearch with response code 200.
-
-Example Output:
+kubectl get pods -n <namespace>
-```bash
-2022-08-30T16:08:33.455Z debug elasticsearchexporter@v0.53.0/exporter.go:182 Request roundtrip completed. {"kind": "exporter", "name": "elasticsearch/arcdata/internal", "path": "/_bulk", "method": "POST", "duration": 0.006774934, "status": "200 OK"}
+NAME READY STATUS RESTARTS AGE
+arc-bootstrapper-job-kmrsx 0/1 Completed 0 19h
+arc-webhook-job-5bd06-r6g8w 0/1 Completed 0 19h
+arctc-collector-inbound-0 2/2 Running 0 19h
+arctc-collector-outbound-0 2/2 Running 0 19h
+bootstrapper-789b4f89-c77z6 1/1 Running 0 19h
+control-xtjrr 2/2 Running 0 19h
+controldb-0 2/2 Running 0 19h
+kafka-server-0 2/2 Running 0 19h
+logsdb-0 3/3 Running 0 19h
+logsui-67hvm 3/3 Running 0 19h
+metricsdb-0 2/2 Running 0 19h
+metricsdc-hq25d 2/2 Running 0 19h
+metricsdc-twq7r 2/2 Running 0 19h
+metricsui-psnvg 2/2 Running 0 19h
```
-If there are successful POSTs, everything should be running correctly.
- ## **Exporting to Your Monitoring Solutions** This next section will guide you through a series of modifications you can make on the Arc telemetry router to export to your own Elasticsearch or Kafka instances. ### **1. Add an Elasticsearch Exporter**
-You can test adding your own Elasticsearch exporter to send logs to your deployment of Elasticsearch by doing the following:
+You can test adding your own Elasticsearch exporter to send logs to your deployment of Elasticsearch by completing the following steps:
1. Add your Elasticsearch exporter to the exporters list beneath customer pipelines 2. Declare your Elasticsearch exporter with the needed settings - certificates, endpoint, and index
For example:
**router.yaml** ```yaml
-apiVersion: arcdata.microsoft.com/v1beta1
+apiVersion: arcdata.microsoft.com/v1beta2
kind: TelemetryRouter metadata: name: arc-telemetry-router
spec:
kubectl apply -f router.yaml -n <namespace> ```
-This will add a second Elasticsearch exporter that exports to your instance of Elasticsearch on the logs pipeline. The TelemetryRouter custom resource should go into an updating state and the collector service will restart. Once it is in a ready state, you can inspect the collector logs as shown above again to ensure it's successfully posting to your instance of Elasticsearch.
-
-### **2. Add a new logs pipeline with your Elasticsearch exporter**
-
-You can test adding a new logs pipeline by updating the TelemetryRouter custom resource as seen below:
-
-**router.yaml**
-
-```yaml
-apiVersion: arcdata.microsoft.com/v1beta1
-kind: TelemetryRouter
-metadata:
- name: arc-telemetry-router
- namespace: <namespace>
-spec:
- collector:
- customerPipelines:
- logs:
- exporters:
- - elasticsearch/arcdata/msft/internal
- logs/example:
- exporters:
- - elasticsearch/example
- exporters:
- elasticsearch/example:
- # Provide your client and CA certificate names
- # for the exporter as well as any additional settings needed
- caCertificateName: <ca-certificate-name>
- certificateName: <elasticsearch-client-certificate-name>
- endpoint: <elasticsearch_endpoint>
- settings:
- # Currently supported properties include: index
- # This can be the name of an index or datastream name to publish events to
- index: <elasticsearch_index>
- elasticsearch/arcdata/msft/internal:
- caCertificateName: cluster-ca-certificate
- certificateName: arcdata-msft-elasticsearch-exporter-internal
- endpoint: https://logsdb-svc:9200
- settings:
- index: logstash-otel
- credentials:
- certificates:
- - certificateName: arcdata-msft-elasticsearch-exporter-internal
- - certificateName: cluster-ca-certificate
- # Provide your client and ca certificates through Kubernetes secrets
- # where the name of the secret and its namespace are specified.
- - certificateName: <elasticsearch-client-certificate-name>
- secretName: <name_of_secret>
- secretNamespace: <namespace_with_secret>
- - certificateName: <ca-certificate-name>
- secretName: <name_of_secret>
- secretNamespace: <namespace_with_secret>
-```
-
-```bash
-kubectl apply -f router.yaml -n <namespace>
-```
-
-This will add your Elasticsearch exporter to a *different* logs pipeline called 'logs/example'. The TelemetryRouter custom resource should go into an updating state and the collector service will restart. Once it is in a ready state, you can inspect the collector logs as shown above again to ensure it's successfully posting to your instance of Elasticsearch.
+You've now added a second Elasticsearch exporter that exports to your instance of Elasticsearch on the logs pipeline. The TelemetryRouter custom resource should go into an updating state and the collector service will restart.
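To watch the update progress, you can check the state reported on the parent resource (using the default router name from this article):

```bash
kubectl get telemetryrouter arc-telemetry-router -n <namespace>
```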
-### **3. Add a Kafka Exporter**
+### **2. Add a Kafka Exporter**
-You can test adding your own Kafka exporter to send logs to your deployment of Kafka by doing the following:
+You can test adding your own Kafka exporter to send logs to your deployment of Kafka by completing the following steps:
1. Add your Kafka exporter to the exporters list beneath customer pipelines 2. Declare your Kafka exporter with the needed settings - topic, broker, and encoding
For example:
**router.yaml** ```yaml
-apiVersion: arcdata.microsoft.com/v1beta1
+apiVersion: arcdata.microsoft.com/v1beta2
kind: TelemetryRouter metadata: name: arc-telemetry-router
spec:
kubectl apply -f router.yaml -n <namespace> ```
-This will add a Kafka exporter that exports to the topic name at the broker service endpoint you provided on the logs pipeline. The TelemetryRouter custom resource should go into an updating state and the collector service will restart. Once it is in a ready state, you can inspect the collector logs as shown above again to ensure there are no errors and verify on your Kafka cluster that it is receiving the logs.
+You've now added a Kafka exporter that exports to the topic name at the broker service endpoint you provided on the logs pipeline. The TelemetryRouter custom resource should go into an updating state and the collector service will restart.
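For reference, here's a minimal sketch of what the added exporter entry might look like, based on the general exporter settings above and the Kafka settings listed in step 2. The property names and placeholders are illustrative assumptions, not the exact schema:

```yaml
exporters:
  kafka/example:
    # Client and CA certificates declared under credentials.certificates
    certificateName: <kafka-client-certificate-name>
    caCertificateName: <ca-certificate-name>
    settings:
      topic: <kafka_topic>
      broker: <kafka_broker_endpoint>
      encoding: <encoding>
```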
## Next steps
azure-arc Managed Instance High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-high-availability.md
az sql mi-arc create --name sqldemo --resource-group rg --location uswest2 ΓÇôs
By default, all the replicas are configured in synchronous mode. This means any updates on the primary instance will be synchronously replicated to each of the secondary instances.
-## View and monitor availability group status
+## View and monitor high availability status
Once the deployment is complete, connect to the primary endpoint from SQL Server Management Studio.
kubectl get sqlmi -A
### Get the primary and secondary endpoints and AG status
-Use the `kubectl describe sqlmi` or `az sql mi-arc show` commands to view the primary and secondary endpoints, and availability group status.
+Use the `kubectl describe sqlmi` or `az sql mi-arc show` commands to view the primary and secondary endpoints, and high availability status.
Example:
kubectl describe sqlmi sqldemo -n my-namespace
or ```azurecli
-az sql mi-arc show sqldemo --k8s-namespace my-namespace --use-k8s
+az sql mi-arc show --name sqldemo --k8s-namespace my-namespace --use-k8s
``` Example output: ```console "status": {
- "AGStatus": "Healthy",
- "logSearchDashboard": "https://10.120.230.404:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqldemo'))",
- "metricsDashboard": "https://10.120.230.46:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqlmi1-0",
- "mirroringEndpoint": "10.15.100.150:5022",
+ "endpoints": {
+ "logSearchDashboard": "https://10.120.230.404:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqldemo'))",
+ "metricsDashboard": "https://10.120.230.46:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqldemo-0",
+ "mirroring": "10.15.100.150:5022",
+ "primary": "10.15.100.150,1433",
+ "secondary": "10.15.100.156,1433"
+ },
+ "highAvailability": {
+ "healthState": "OK",
+ "mirroringCertificate": "--BEGIN CERTIFICATE--\n...\n--END CERTIFICATE--"
+ },
"observedGeneration": 1,
- "primaryEndpoint": "10.15.100.150,1433",
"readyReplicas": "2/2",
- "runningVersion": "v1.2.0_2021-12-15",
- "secondaryEndpoint": "10.15.100.156,1433",
"state": "Ready" } ```
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/overview.md
To see the regions that currently support Azure Arc-enabled data services, go to
> **Just want to try things out?** > Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM. >
->In addition, deploy [Jumpstart ArcBox](https://azurearcjumpstart.io/azure_jumpstart_arcbox/), an easy to deploy sandbox for all things Azure Arc. ArcBox is designed to be completely self-contained within a single Azure subscription and resource group, which will make it easy for you to get hands-on with all available Azure Arc-enabled technology with nothing more than an available Azure subscription.
+>In addition, deploy [Jumpstart ArcBox for DataOps](https://aka.ms/ArcBoxDataOps), an easy to deploy sandbox for all things Azure Arc-enabled SQL Managed Instance. ArcBox is designed to be completely self-contained within a single Azure subscription and resource group, which will make it easy for you to get hands-on with all available Azure Arc-enabled technology with nothing more than an available Azure subscription.
[Install the client tools](install-client-tools.md)
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
New for this release:
- New command to list AD Connectors `az arcdata ad-connector list --k8s-namespace <namespace> --use-k8s` - Az CLI Polling for AD Connector create/update/delete: This feature changes the default behavior of `az arcdata ad-connector create/update/delete` to hang and wait until the operation finishes. To override this behavior, the user has to use the `--no-wait` flag when invoking the command.
+Deprecation and breaking change notices:
+The following properties in the Arc SQL Managed Instance status will be deprecated/moved in the _next_ release:
+- `status.logSearchDashboard`: use `status.endpoints.logSearchDashboard` instead.
+- `status.metricsDashboard`: use `status.endpoints.metricsDashboard` instead.
+- `status.primaryEndpoint`: use `status.endpoints.primary` instead.
+- `status.readyReplicas`: use `status.roles.sql.readyReplicas` instead.
## September 13, 2022
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
description: "This article provides a conceptual overview of GitOps in Azure for
keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 9/22/2022 Last updated : 10/12/2022
GitOps on Azure Arc-enabled Kubernetes or Azure Kubernetes Service uses [Flux](h
:::image type="content" source="media/gitops/flux2-extension-install-aks.png" alt-text="Diagram showing the installation of the Flux extension for Azure Kubernetes Service cluster." lightbox="media/gitops/flux2-extension-install-aks.png":::
-GitOps is enabled in an Azure Arc-enabled Kubernetes or AKS cluster as a `Microsoft.KubernetesConfiguration/extensions/microsoft.flux` [cluster extension](./conceptual-extensions.md) resource. The `microsoft.flux` extension must be installed in the cluster before one or more `fluxConfigurations` can be created. The extension will be installed automatically when you create the first `Microsoft.KubernetesConfiguration/fluxConfigurations` in a cluster, or you can install it manually using the portal, the Azure CLI (*az k8s-extension create --extensionType=microsoft.flux*), ARM template, or REST API.
+GitOps is enabled in an Azure Arc-enabled Kubernetes or AKS cluster as a `Microsoft.KubernetesConfiguration/extensions/microsoft.flux` [cluster extension](./conceptual-extensions.md) resource. The `microsoft.flux` extension must be installed in the cluster before one or more `fluxConfigurations` can be created. The extension will be installed automatically when you create the first `Microsoft.KubernetesConfiguration/fluxConfigurations` in a cluster, or you can install it manually using the portal, the Azure CLI (*az k8s-extension create --extensionType=microsoft.flux*), ARM template, or REST API.
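For example, a manual install with the Azure CLI might look like the following sketch (cluster and resource group names are placeholders):

```azurecli
az k8s-extension create --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters --name flux --extension-type microsoft.flux
```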
+
+### Version support
+
+The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension.
+
+### Controllers
The `microsoft.flux` extension installs by default the [Flux controllers](https://fluxcd.io/docs/components/) (Source, Kustomize, Helm, Notification) and the FluxConfig CRD, fluxconfig-agent, and fluxconfig-controller. You can control which of these controllers is installed and can optionally install the Flux image-automation and image-reflector controllers, which provide functionality around updating and retrieving Docker images.
The `microsoft.flux` extension installs by default the [Flux controllers](https:
* [Flux Helm controller](https://toolkit.fluxcd.io/components/helm/controller/): Watches the `helm.toolkit.fluxcd.io` custom resources. Retrieves the associated chart from the Helm Repository source surfaced by the Source controller. Creates the `HelmChart` custom resource and applies the `HelmRelease` with given version, name, and customer-defined values to the cluster. * [Flux Notification controller](https://toolkit.fluxcd.io/components/notification/controller/): Watches the `notification.toolkit.fluxcd.io` custom resources. Receives notifications from all Flux controllers. Pushes notifications to user-defined webhook endpoints. * Flux Custom Resource Definitions:
- * `kustomizations.kustomize.toolkit.fluxcd.io`
- * `imagepolicies.image.toolkit.fluxcd.io`
- * `imagerepositories.image.toolkit.fluxcd.io`
- * `imageupdateautomations.image.toolkit.fluxcd.io`
- * `alerts.notification.toolkit.fluxcd.io`
- * `providers.notification.toolkit.fluxcd.io`
- * `receivers.notification.toolkit.fluxcd.io`
- * `buckets.source.toolkit.fluxcd.io`
- * `gitrepositories.source.toolkit.fluxcd.io`
- * `helmcharts.source.toolkit.fluxcd.io`
- * `helmrepositories.source.toolkit.fluxcd.io`
- * `helmreleases.helm.toolkit.fluxcd.io`
- * `fluxconfigs.clusterconfig.azure.com`
+
+ * `kustomizations.kustomize.toolkit.fluxcd.io`
+ * `imagepolicies.image.toolkit.fluxcd.io`
+ * `imagerepositories.image.toolkit.fluxcd.io`
+ * `imageupdateautomations.image.toolkit.fluxcd.io`
+ * `alerts.notification.toolkit.fluxcd.io`
+ * `providers.notification.toolkit.fluxcd.io`
+ * `receivers.notification.toolkit.fluxcd.io`
+ * `buckets.source.toolkit.fluxcd.io`
+ * `gitrepositories.source.toolkit.fluxcd.io`
+ * `helmcharts.source.toolkit.fluxcd.io`
+ * `helmrepositories.source.toolkit.fluxcd.io`
+ * `helmreleases.helm.toolkit.fluxcd.io`
+ * `fluxconfigs.clusterconfig.azure.com`
+ * [FluxConfig CRD](https://github.com/Azure/ClusterConfigurationAgent/blob/master/charts/azure-k8s-flux/templates/clusterconfig.azure.com_fluxconfigs.yaml): Custom Resource Definition for `fluxconfigs.clusterconfig.azure.com` custom resources that define `FluxConfig` Kubernetes objects. * fluxconfig-agent: Responsible for watching Azure for new or updated `fluxConfigurations` resources, and for starting the associated Flux configuration in the cluster. Also, is responsible for pushing Flux status changes in the cluster back to Azure for each `fluxConfigurations` resource. * fluxconfig-controller: Watches the `fluxconfigs.clusterconfig.azure.com` custom resources and responds to changes with new or updated configuration of GitOps machinery in the cluster.
The `fluxconfig-agent` and `fluxconfig-controller` agents, installed with the `m
Each `fluxConfigurations` resource in Azure will be associated in a Kubernetes cluster with one Flux `GitRepository` or `Bucket` custom resource and one or more `Kustomization` custom resources. When you create a `fluxConfigurations` resource, you'll specify, among other information, the URL to the source (Git repository or Bucket) and the sync target in the source for each `Kustomization`. You can configure dependencies between `Kustomization` custom resources to control deployment sequencing. Also, you can create multiple namespace-scoped `fluxConfigurations` resources on the same cluster for different applications and app teams. > [!NOTE]
-> * `fluxconfig-agent` monitors for new or updated `fluxConfiguration` resources in Azure. The agent requires connectivity to Azure for the desired state of the `fluxConfiguration` to be applied to the cluster. If the agent is unable to connect to Azure, there will be a delay in making the changes in the cluster until the agent can connect. If the cluster is disconnected from Azure for more than 48 hours, then the request to the cluster will time-out, and the changes will need to be re-applied in Azure.
-> * Sensitive customer inputs like private key and token/password are stored for less than 48 hours in the Kubernetes Configuration service. If you update any of these values in Azure, assure that your clusters connect with Azure within 48 hours.
+> The `fluxconfig-agent` monitors for new or updated `fluxConfiguration` resources in Azure. The agent requires connectivity to Azure for the desired state of the `fluxConfiguration` to be applied to the cluster. If the agent is unable to connect to Azure, there will be a delay in making the changes in the cluster until the agent can connect. If the cluster is disconnected from Azure for more than 48 hours, then the request to the cluster will time-out, and the changes will need to be re-applied in Azure.
+>
+> Sensitive customer inputs like private key and token/password are stored for less than 48 hours in the Kubernetes Configuration service. If you update any of these values in Azure, make sure that your clusters connect with Azure within 48 hours.
## GitOps with Private Link
-If you've added support for private link to an Azure Arc-enabled Kubernetes cluster, then the `microsoft.flux` extension works out-of-the-box with communication back to Azure. For connections to your Git repository, Helm repository, or any other endpoints that are needed to deploy your Kubernetes manifests, you will need to provision these endpoints behind your firewall or list them on your firewall so that the Flux Source controller can successfully reach them.
-
-For more information on private link scopes in Azure Arc, refer to [this document](../servers/private-link-security.md#create-a-private-link-scope).
+If you've added support for [private link to an Azure Arc-enabled Kubernetes cluster](private-link.md), then the `microsoft.flux` extension works out-of-the-box with communication back to Azure. For connections to your Git repository, Helm repository, or any other endpoints that are needed to deploy your Kubernetes manifests, you will need to provision these endpoints behind your firewall or list them on your firewall so that the Flux Source controller can successfully reach them.
## Data residency+ The Azure GitOps service (Azure Kubernetes Configuration Management) stores/processes customer data. By default, customer data is replicated to the paired region. For the regions Singapore, East Asia, and Brazil South, all customer data is stored and processed in the region. ## Apply Flux configurations at scale
Because Azure Resource Manager manages your configurations, you can automate cre
## Next steps
-Advance to the next tutorial to learn how to enable GitOps on your AKS or Azure Arc-enabled Kubernetes clusters
+Advance to the next tutorial to learn how to enable GitOps on your AKS or Azure Arc-enabled Kubernetes clusters:
+ > [!div class="nextstepaction"] * [Enable GitOps with Flux](./tutorial-use-gitops-flux2.md)
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/custom-locations.md
Title: "Create and manage custom locations on Azure Arc-enabled Kubernetes" Previously updated : 07/27/2022 Last updated : 10/12/2022 description: "Use custom locations to deploy Azure PaaS services on Azure Arc-enabled Kubernetes clusters"
This is because a service principal doesn't have permissions to get information
| Parameter name | Description | |-|| | `--name, --n` | Name of the custom location |
- | `--resource-group, --g` | Resource group of the custom location |
+ | `--resource-group, --g` | Resource group of the custom location |
| `--namespace` | Namespace in the cluster bound to the custom location being created | | `--host-resource-id` | Azure Resource Manager identifier of the Azure Arc-enabled Kubernetes cluster (connected cluster) | | `--cluster-extension-ids` | Azure Resource Manager identifiers of the cluster extension instances installed on the connected cluster. Provide a space-separated list of the cluster extension IDs |
Required parameters:
| Parameter name | Description | |-|| | `--name, --n` | Name of the custom location |
-| `--resource-group, --g` | Resource group of the custom location |
+| `--resource-group, --g` | Resource group of the custom location |
| `--namespace` | Namespace in the cluster bound to the custom location being created | | `--host-resource-id` | Azure Resource Manager identifier of the Azure Arc-enabled Kubernetes cluster (connected cluster) |
Optional parameters:
## Patch a custom location
-Use the `patch` command to replace existing tags, cluster extension IDs with new tags, and cluster extension IDs. `--cluster-extension-ids`, `assign-identity`, `--tags` can be patched.
+Use the `patch` command to replace existing tags, cluster extension IDs with new tags, and cluster extension IDs. `--cluster-extension-ids`, `assign-identity`, `--tags` can be patched.
```azurecli az customlocation patch -n <customLocationName> -g <resourceGroupName> --namespace <name of namespace> --host-resource-id <connectedClusterId> --cluster-extension-ids <extensionIds>
Required parameters:
| Parameter name | Description | |-|| | `--name, --n` | Name of the custom location |
-| `--resource-group, --g` | Resource group of the custom location |
+| `--resource-group, --g` | Resource group of the custom location |
Optional parameters:
To delete a custom location, use the following command:
az customlocation delete -n <customLocationName> -g <resourceGroupName> --namespace <name of namespace> --host-resource-id <connectedClusterId> --cluster-extension-ids <extensionIds> ```
+## Troubleshooting
+
+If custom location creation fails with the error 'Unknown proxy error occurred', it may be due to network policies configured to disallow pod-to-pod internal communication.
+
+To resolve this issue, modify your network policy to allow pod-to-pod internal communication within the `azure-arc` namespace. Be sure to also add the `azure-arc` namespace as part of the no-proxy exclusion list for your configured policy.
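For example, if you use Kubernetes `NetworkPolicy` resources, a minimal sketch of a policy that allows pod-to-pod traffic within the `azure-arc` namespace might look like this (adapt it to your own policy engine and requirements):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-azure-arc-internal
  namespace: azure-arc
spec:
  # Select all pods in the azure-arc namespace
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    # Allow ingress from any other pod in the same namespace
    - from:
        - podSelector: {}
```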
+ ## Next steps - Securely connect to the cluster using [Cluster Connect](cluster-connect.md).
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md
Title: "Azure Arc-enabled Kubernetes cluster extensions" - Previously updated : 07/12/2022+ Last updated : 10/12/2022 description: "Deploy and manage lifecycle of extensions on Azure Arc-enabled Kubernetes"
A conceptual overview of this feature is available in [Cluster extensions - Azur
## Prerequisites * [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0.
-* `connectedk8s` (version >= 1.2.0) and `k8s-extension` (version >= 1.0.0) Azure CLI extensions. Install these Azure CLI extensions by running the following commands:
+* `connectedk8s` (version >= 1.2.0) and `k8s-extension` (version >= 1.0.0) Azure CLI extensions. Install the latest version of these Azure CLI extensions by running the following commands:
```azurecli az extension add --name connectedk8s
The following extensions are currently available.
| [Flux (GitOps)](./conceptual-gitops-flux2.md) | Use GitOps with Flux to manage cluster configuration and application deployment. | | [Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes](../../aks/dapr.md)| Eliminates the overhead of downloading Dapr tooling and manually installing and managing the runtime on your clusters. |
+> [!NOTE]
+> Installing Azure Arc extensions on [AKS hybrid clusters provisioned from Azure](#aks-hybrid-clusters-provisioned-from-azure-preview) is currently in preview, with support for the Azure Arc-enabled Open Service Mesh, Azure Key Vault Secrets Provider, Flux (GitOps) and Microsoft Defender for Cloud extensions.
+ ### Extension scope Extension installations on the Arc-enabled Kubernetes cluster are either *cluster-scoped* or *namespace-scoped*.
az k8s-extension create --name azuremonitor-containers --extension-type Microso
| `--scope` | Scope of installation for the extension - `cluster` or `namespace` | | `--cluster-name` | Name of the Azure Arc-enabled Kubernetes resource on which the extension instance has to be created | | `--resource-group` | The resource group containing the Azure Arc-enabled Kubernetes resource |
-| `--cluster-type` | The cluster type on which the extension instance has to be created. Current only `connectedClusters`, which corresponds to Azure Arc-enabled Kubernetes, is an accepted value |
+| `--cluster-type` | The cluster type on which the extension instance has to be created. For most scenarios, use `connectedClusters`, which corresponds to Azure Arc-enabled Kubernetes. |
+
+> [!NOTE]
+> When working with [AKS hybrid clusters provisioned from Azure](#aks-hybrid-clusters-provisioned-from-azure-preview) you must set `--cluster-type` to use `provisionedClusters` and also add `--cluster-resource-provider microsoft.hybridcontainerservice` to the command. Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview.
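As an illustration, creating the same extension on an AKS hybrid cluster might look like the following sketch (resource names are placeholders):

```azurecli
az k8s-extension create --name azuremonitor-containers --extension-type Microsoft.AzureMonitor.Containers --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type provisionedClusters --cluster-resource-provider microsoft.hybridcontainerservice
```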
**Optional parameters**
az k8s-extension delete --name azuremonitor-containers --cluster-name <clusterNa
>[!NOTE] > The Azure resource representing this extension gets deleted immediately. The Helm release on the cluster associated with this extension is only deleted when the agents running on the Kubernetes cluster have network connectivity and can reach out to Azure services again to fetch the desired state.
+> [!NOTE]
+> When working with [AKS hybrid clusters provisioned from Azure](#aks-hybrid-clusters-provisioned-from-azure-preview), you must add `--yes` to the delete command. Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview.
+
+## AKS hybrid clusters provisioned from Azure (preview)
+
+You can deploy extensions to AKS hybrid clusters provisioned from Azure. However, there are a few key differences to keep in mind in order to deploy successfully:
+
+* The value for the `--cluster-type` parameter must be `provisionedClusters`.
+* You must add `--cluster-resource-provider microsoft.hybridcontainerservice` to your commands.
+* When deleting an extension instance, you must add `--yes` to the command:
+
+ ```azurecli
+ az k8s-extension delete --name azuremonitor-containers --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type provisionedClusters --cluster-resource-provider microsoft.hybridcontainerservice --yes
+ ```
+
+In addition, you must be using the latest version of the Azure CLI `k8s-extension` module (version >= 1.3.3). Use the following commands to add or update to the latest version:
+
+```azurecli
+# add if you do not have this installed
+az extension add --name k8s-extension
+
+# update if you do have the module installed
+az extension update --name k8s-extension
+```
+
+> [!IMPORTANT]
+> Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview.
+ ## Next steps Learn more about the cluster extensions currently available for Azure Arc-enabled Kubernetes:
-> [!div class="nextstepaction"]
-> [Azure Monitor](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json)
-> [Microsoft Defender for Cloud](../../security-center/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json)
-> [Azure Arc-enabled Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md)
->
-> [!div class="nextstepaction"]
-> [Microsoft Defender for Cloud](../../security-center/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json)
->
-> [!div class="nextstepaction"]
-> [Azure App Service on Azure Arc](../../app-service/overview-arc-integration.md)
->
-> [!div class="nextstepaction"]
-> [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md)
->
-> [!div class="nextstepaction"]
-> [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md)
+* [Azure Monitor](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json)
+* [Microsoft Defender for Cloud](../../security-center/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json)
+* [Azure Arc-enabled Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md)
+* [Azure App Service on Azure Arc](../../app-service/overview-arc-integration.md)
+* [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md)
+* [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md)
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. Previously updated : 09/15/2022 Last updated : 10/12/2022 ms.devlang: azurecli
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
* [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/) * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes) * Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html)
- * If you want to connect an OpenShift cluster to Azure Arc, you need to execute the following command just once on your cluster before running `New-AzConnectedKubernetes`:
-
- ```azurecli-interactive
- oc adm policy add-scc-to-user privileged -z <service account name> -n <service account namespace>
- ```
>[!NOTE] > The cluster needs to have at least one node of operating system and architecture type `linux/amd64`. Clusters with only `linux/arm64` nodes aren't yet supported.
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
> [!NOTE] > To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
+> [!IMPORTANT]
+> To view and manage connected clusters in the Azure portal, be sure that your network allows traffic to `*.arc.azure.net`.
+ ## Create a resource group Run the following command:
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
Title: Use Azure Key Vault Secrets Provider extension to fetch secrets into Azur
description: Learn how to set up the Azure Key Vault Provider for Secrets Store CSI Driver interface as an extension on Azure Arc enabled Kubernetes cluster Previously updated : 5/26/2022+ Last updated : 10/12/2022
Benefits of the Azure Key Vault Secrets Provider extension include the folllowin
- A cluster with a supported Kubernetes distribution that has already been [connected to Azure Arc](quickstart-connect-cluster.md). The following Kubernetes distributions are currently supported for this scenario: - Cluster API Azure
- - Azure Kubernetes Service on Azure Stack HCI (AKS-HCI)
+ - AKS hybrid clusters provisioned from Azure
- Google Kubernetes Engine - OpenShift Kubernetes Distribution - Canonical Kubernetes Distribution
Benefits of the Azure Key Vault Secrets Provider extension include the folllowin
- Tanzu Kubernetes Grid - Ensure you have met the [general prerequisites for cluster extensions](extensions.md#prerequisites). You must use version 0.4.0 or newer of the `k8s-extension` Azure CLI extension.
+> [!TIP]
+> When using this extension with [AKS hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview), you must set `--cluster-type` to `provisionedClusters` and also add `--cluster-resource-provider microsoft.hybridcontainerservice` to the command. Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview.
+ ## Install the Azure Key Vault Secrets Provider extension on an Arc-enabled Kubernetes cluster You can install the Azure Key Vault Secrets Provider extension on your connected cluster in the Azure portal, by using Azure CLI, or by deploying ARM template.
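For illustration, the following sketch shows how a CLI install command might look for an AKS hybrid cluster provisioned from Azure, combining the flags from the tip above. The resource group, cluster, and extension instance names are placeholders, and `Microsoft.AzureKeyVaultSecretsProvider` is assumed here as the extension type:

```azurecli-interactive
# Install the Azure Key Vault Secrets Provider extension on an AKS hybrid cluster provisioned from Azure (preview)
az k8s-extension create \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --cluster-type provisionedClusters \
  --cluster-resource-provider microsoft.hybridcontainerservice \
  --extension-type Microsoft.AzureKeyVaultSecretsProvider \
  --name akvsecretsprovider
```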
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
Title: Azure Arc-enabled Open Service Mesh description: Open Service Mesh (OSM) extension on Azure Arc-enabled Kubernetes cluster Previously updated : 05/25/2022+ Last updated : 10/12/2022
Azure Arc-enabled Open Service Mesh can be deployed through Azure portal, Azure
- Support is available for the two most recently released minor versions of Arc-enabled Open Service Mesh. Find the latest version [here](https://github.com/Azure/osm-azure/releases). Supported release versions are appended with notes. Ignore the tags associated with intermediate releases. - The following Kubernetes distributions are currently supported: - AKS Engine
- - AKS on HCI
+ - AKS hybrid clusters provisioned from Azure
- Cluster API Azure - Google Kubernetes Engine - Canonical Kubernetes Distribution
Azure Arc-enabled Open Service Mesh can be deployed through Azure portal, Azure
- VMware Tanzu Kubernetes Grid - Azure Monitor integration with Azure Arc-enabled Open Service Mesh is available [in preview with limited support](#monitoring-application-using-azure-monitor-and-applications-insights-preview).
+> [!TIP]
+> When using this extension with [AKS hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview), you must set `--cluster-type` to `provisionedClusters` and also add `--cluster-resource-provider microsoft.hybridcontainerservice` to the command. Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview.
+ ## Basic installation using Azure portal To deploy using Azure portal, once you have an Arc connected cluster, go to the cluster's **Open Service Mesh** section.
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
CD pipeline manipulates PRs in the GitOps repository. It needs a Service Connect
--set orchestratorPAT=<Azure Repos PAT token> ``` > [!NOTE]
-> `Azure Repos PAT token` should have `Build: Read & executee` and `Code: Read` permissions.
+> `Azure Repos PAT token` should have `Build: Read & execute` and `Code: Read` permissions.
3. Configure Flux to send notifications to GitOps connector: ```console
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
description: "This tutorial shows how to use GitOps with Flux v2 to manage confi
keywords: "GitOps, Flux, Flux v2, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 06/08/2022 Last updated : 10/12/2022 -+ # Tutorial: Use GitOps with Flux v2 in Azure Arc-enabled Kubernetes or AKS clusters
GitOps with Flux v2 can be enabled in Azure Kubernetes Service (AKS) managed clu
This tutorial describes how to use GitOps in a Kubernetes cluster. Before you dive in, take a moment to [learn how GitOps with Flux works conceptually](./conceptual-gitops-flux2.md).
->[!IMPORTANT]
+> [!IMPORTANT]
> The `microsoft.flux` extension released major version 1.0.0. This includes the [multi-tenancy feature](#multi-tenancy). If you have existing GitOps Flux v2 configurations that use a previous version of the `microsoft.flux` extension, you can upgrade to the latest extension manually using the Azure CLI: `az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t <CLUSTER_TYPE>` (use `-t connectedClusters` for Arc clusters and `-t managedClusters` for AKS clusters).
+> [!TIP]
+> When using this extension with [AKS hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview), you must set `--cluster-type` to `provisionedClusters` and also add `--cluster-resource-provider microsoft.hybridcontainerservice` to the command. Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview.
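For example, adapting the upgrade command from the preceding note for an AKS hybrid cluster provisioned from Azure might look like the following sketch; the resource group and cluster names are placeholders:

```azurecli-interactive
# Install or upgrade the flux extension on an AKS hybrid cluster provisioned from Azure (preview)
az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux \
  --extension-type microsoft.flux \
  --cluster-type provisionedClusters \
  --cluster-resource-provider microsoft.hybridcontainerservice
```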
+ ## Prerequisites
-To manage GitOps through the Azure CLI or the Azure portal, you need the following items.
+To manage GitOps through the Azure CLI or the Azure portal, you need the following:
### For Azure Arc-enabled Kubernetes clusters * An Azure Arc-enabled Kubernetes connected cluster that's up and running. [Learn how to connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). If you need to connect through an outbound proxy, then assure you [install the Arc agents with proxy settings](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server).+ * Read and write permissions on the `Microsoft.Kubernetes/connectedClusters` resource type. ### For Azure Kubernetes Service clusters * An MSI-based AKS cluster that's up and running.
- >[!IMPORTANT]
- >**Ensure that the AKS cluster is created with MSI** (not SPN), because the `microsoft.flux` extension won't work with SPN-based AKS clusters.
- >For new AKS clusters created with ΓÇ£az aks createΓÇ¥, the cluster will be MSI-based by default. For already created SPN-based clusters that need to be converted to MSI run ΓÇ£az aks update -g $RESOURCE_GROUP -n $CLUSTER_NAME --enable-managed-identityΓÇ¥. For more information, refer to [managed identity docs](../../aks/use-managed-identity.md).
+ > [!IMPORTANT]
+ > **Ensure that the AKS cluster is created with MSI** (not SPN), because the `microsoft.flux` extension won't work with SPN-based AKS clusters.
 + > For new AKS clusters created with `az aks create`, the cluster will be MSI-based by default. For already created SPN-based clusters that need to be converted to MSI, run `az aks update -g $RESOURCE_GROUP -n $CLUSTER_NAME --enable-managed-identity`. For more information, refer to [managed identity docs](../../aks/use-managed-identity.md).
* Read and write permissions on the `Microsoft.ContainerService/managedClusters` resource type. * Registration of your subscription with the `AKS-ExtensionManager` feature flag. Use the following command:
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
### Common to both cluster types * Read and write permissions on these resource types:
- * `Microsoft.KubernetesConfiguration/extensions`
- * `Microsoft.KubernetesConfiguration/fluxConfigurations`
+
+ * `Microsoft.KubernetesConfiguration/extensions`
+ * `Microsoft.KubernetesConfiguration/fluxConfigurations`
* Azure CLI version 2.15 or later. [Install the Azure CLI](/cli/azure/install-azure-cli) or use the following commands to update to the latest version:
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
Microsoft.KubernetesConfiguration RegistrationRequired Registered ```
-### Supported regions
+### Version and region support
-GitOps is currently supported in all regions that Azure Arc-enabled Kubernetes supports. [See the supported regions](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=kubernetes-service,azure-arc). GitOps is currently supported in a subset of the regions that AKS supports. The GitOps service is adding new supported regions on a regular cadence.
+GitOps is currently supported in [all regions that Azure Arc-enabled Kubernetes supports](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=kubernetes-service,azure-arc). For AKS clusters, GitOps is currently supported in a subset of the regions that AKS supports; the GitOps service adds new supported regions on a regular cadence.
+
+The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension.
### Network requirements
The GitOps agents require outbound (egress) TCP to the repo source on either por
## Enable CLI extensions >[!NOTE]
->The `k8s-configuration` CLI extension manages either Flux v2 or Flux v1 configurations. Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
+>The `k8s-configuration` CLI extension manages either Flux v2 or Flux v1 configurations. Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
Install the latest `k8s-configuration` and `k8s-extension` CLI extension packages:
False whl k8s-extension C:\Users\somename\.azure\c
## Apply a Flux configuration by using the Azure CLI
-Use the `k8s-configuration` Azure CLI extension (or the Azure portal) to enable GitOps in an AKS or Arc-enabled Kubernetes cluster. For a demonstration, use the public [gitops-flux2-kustomize-helm-mt](https://github.com/Azure/gitops-flux2-kustomize-helm-mt) repository.
+Use the `k8s-configuration` Azure CLI extension (or the Azure portal) to enable GitOps in an AKS or Arc-enabled Kubernetes cluster. For a demonstration, use the public [gitops-flux2-kustomize-helm-mt](https://github.com/Azure/gitops-flux2-kustomize-helm-mt) repository.
->[!IMPORTANT]
->The demonstration repo is designed to simplify your use of this tutorial and illustrate some key principles. To keep up to date, the repo can get breaking changes occasionally from version upgrades. These changes won't affect your new application of this tutorial, only previous tutorial applications that have not been deleted. To learn how to handle these changes please see the [breaking change disclaimer](https://github.com/Azure/gitops-flux2-kustomize-helm-mt#breaking-change-disclaimer-%EF%B8%8F).
+> [!IMPORTANT]
+> The demonstration repo is designed to simplify your use of this tutorial and illustrate some key principles. To stay up to date, the repo occasionally gets breaking changes from version upgrades. These changes won't affect your new application of this tutorial, only previous tutorial applications that have not been deleted. To learn how to handle these changes, see the [breaking change disclaimer](https://github.com/Azure/gitops-flux2-kustomize-helm-mt#breaking-change-disclaimer-%EF%B8%8F).
In the following example:
In the following example:
* The `apps` kustomization depends on the `infra` kustomization. (The `infra` kustomization must finish before the `apps` kustomization runs.) * Set `prune=true` on both kustomizations. This setting assures that the objects that Flux deployed to the cluster will be cleaned up if they're removed from the repository or if the Flux configuration or kustomizations are deleted.
-If the `microsoft.flux` extension isn't already installed in the cluster, it'll be installed. When the flux configuration is installed, the initial compliance state may be "Pending" or "Non-compliant" because reconciliation is still on-going. After a minute you can query the configuration again and see the final compliance state.
+If the `microsoft.flux` extension isn't already installed in the cluster, it'll be installed. When the flux configuration is installed, the initial compliance state may be "Pending" or "Non-compliant" because reconciliation is still on-going. After a minute, you can query the configuration again and see the final compliance state.
```console az k8s-configuration flux create -g flux-demo-rg \
az k8s-extension create -g <cluster_resource_group> -c <cluster_name> -t <connec
``` ### Red Hat OpenShift onboarding guidance+ Flux controllers require a **nonroot** [Security Context Constraint](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.2/html/authentication/managing-pod-security-policies) to properly provision pods on the cluster. These constraints must be added to the cluster prior to onboarding of the `microsoft.flux` extension. ```console
Just like private keys, you can provide your `known_hosts` content directly or i
| Parameter | Format | Notes | | - | - | - |
-| `--url` `-u` | https://server/repo[.git] | HTTPS with Basic Authentication. |
+| `--url` `-u` | `https://server/repo[.git]` | HTTPS with Basic Authentication. |
| `--https-user` | Raw string | HTTPS username. | | `--https-key` | Raw string | HTTPS personal access token or password.
Just like private keys, you can provide your `known_hosts` content directly or i
| Parameter | Format | Notes | | - | - | - |
-| `--url` `-u` | https://server/repo[.git] | HTTPS with Basic Authentication. |
+| `--url` `-u` | `https://server/repo[.git]` | HTTPS with Basic Authentication. |
| `--https-ca-cert` | Base64 string | CA certificate for TLS communication. | | `--https-ca-cert-file` | Full path to local file | Provide CA certificate content in a local file. | ### Bucket source arguments+ If you use a `bucket` source instead of a `git` source, here are the bucket-specific command arguments. | Parameter | Format | Notes |
If you use a `bucket` source instead of a `git` source, here are the bucket-spec
| `--bucket-insecure` | Boolean | Communicate with a `bucket` without TLS. If not provided, assumed false; if provided, assumed true. | ### Local secret for authentication with source+ You can use a local Kubernetes secret for authentication with a `git` or `bucket` source. The local secret must contain all of the authentication parameters needed for the source and must be created in the same namespace as the Flux configuration. | Parameter | Format | Notes |
For both cases, when you create the Flux configuration, use `--local-auth-ref my
```console az k8s-configuration flux create -g <cluster_resource_group> -c <cluster_name> -n <config_name> -t connectedClusters --scope cluster --namespace flux-config -u <git-repo-url> --kustomization name=kustomization1 --local-auth-ref my-custom-secret ```+ Learn more about using a local Kubernetes secret with these authentication methods:+ * [Git repository HTTPS authentication](https://fluxcd.io/docs/components/source/gitrepositories/#https-authentication) * [Git repository HTTPS self-signed certificates](https://fluxcd.io/docs/components/source/gitrepositories/#https-self-signed-certificates) * [Git repository SSH authentication](https://fluxcd.io/docs/components/source/gitrepositories/#ssh-authentication) * [Bucket static authentication](https://fluxcd.io/docs/components/source/buckets/#static-authentication)
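As an illustrative sketch of creating such a local secret for HTTPS authentication (the `username` and `password` key names follow the Flux conventions linked above; the secret name and namespace match the `--local-auth-ref my-custom-secret` example):

```console
# Create the local secret in the same namespace as the Flux configuration
kubectl create secret generic my-custom-secret \
  --namespace flux-config \
  --from-literal=username=<git-username> \
  --from-literal=password=<personal-access-token>
```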
->[!NOTE]
->If you need Flux to access the source through your proxy, you'll need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./quickstart-connect-cluster.md?tabs=azure-cli-connect-using-an-outbound-proxy-server).
+> [!NOTE]
+> If you need Flux to access the source through your proxy, you'll need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./quickstart-connect-cluster.md?tabs=azure-cli-connect-using-an-outbound-proxy-server).
### Git implementation
Examples
## Manage GitOps configurations by using the Azure portal
-The Azure portal is useful for managing GitOps configurations and the Flux extension in Azure Arc-enabled Kubernetes or AKS clusters. The portal displays all Flux configurations associated with each cluster and enables drilling in to each.
+The Azure portal is useful for managing GitOps configurations and the Flux extension in Azure Arc-enabled Kubernetes or AKS clusters. The portal displays all Flux configurations associated with each cluster and enables drilling in to each.
The portal provides the overall compliance state of the cluster. The Flux objects that have been deployed to the cluster are also shown, along with their installation parameters, compliance state, and any errors.
You can also use the portal to create, update, and delete GitOps configurations.
The Flux Kustomize controller is installed as part of the `microsoft.flux` cluster extension. It allows the declarative management of cluster configuration and application deployment by using Kubernetes manifests synced from a Git repository. These Kubernetes manifests can include a *kustomize.yaml* file, but it isn't required.
-For usage details, see the following documents:
+For usage details, see the following:
* [Flux Kustomize controller](https://fluxcd.io/docs/components/kustomize/) * [Kustomize reference documents](https://kubectl.docs.kubernetes.io/references/kustomize/)
For usage details, see the following documents:
The Flux Helm controller is installed as part of the `microsoft.flux` cluster extension. It allows you to declaratively manage Helm chart releases with Kubernetes manifests that you maintain in your Git repository.
-For usage details, see the following documents:
+For usage details, see the following:
* [Flux for Helm users](https://fluxcd.io/docs/use-cases/helm/) * [Manage Helm releases](https://fluxcd.io/docs/guides/helmreleases/)
For usage details, see the following documents:
* [Flux Helm controller](https://fluxcd.io/docs/components/helm/) > [!TIP]
-> Because of how Helm handles index files, processing helm charts is an expensive operation and can have very high memory footprint. As a result, helm chart reconciliation, when occurring in parallel can cause memory spikes and OOMKilled if you are reconciling a large number of helm charts at a given time. By default, the source-controller sets its memory limit at 1Gi and its memory requests at 64Mi. If you need to increase this limit and requests due to a high number of large helm chart reconciliations, you can do so by running the following command after Microsoft.Flux extension installation.
+> Because of how Helm handles index files, processing Helm charts is an expensive operation that can have a very high memory footprint. As a result, reconciling a large number of Helm charts in parallel can cause memory spikes and `OOMKilled` errors. By default, the source-controller sets its memory limit at 1Gi and its memory requests at 64Mi. If you need to increase this limit and requests due to a high number of large Helm chart reconciliations, run the following command after installing the microsoft.flux extension:
> > `az k8s-extension update -g <resource-group> -c <cluster-name> -n flux -t connectedClusters --config source-controller.resources.limits.memory=2Gi source-controller.resources.requests.memory=300Mi` ### Use the GitRepository source for Helm charts
-If your Helm charts are stored in the `GitRepository` source that you configure as part of the `fluxConfigurations` resource, you can add an annotation to your HelmRelease yaml to indicate that the configured source should be used as the source of the Helm charts. The annotation is `clusterconfig.azure.com/use-managed-source: "true"`, and here is a usage example:
+If your Helm charts are stored in the `GitRepository` source that you configure as part of the `fluxConfigurations` resource, you can indicate that the configured source should be used as the source of the Helm charts by adding `clusterconfig.azure.com/use-managed-source: "true"` to your HelmRelease yaml, as shown in the following example:
```console
spec:
... ```
-By using this annotation, the HelmRelease that is deployed will be patched with the reference to the configured source. Note that only GitRepository source is supported for this currently.
+By using this annotation, the HelmRelease that is deployed will be patched with the reference to the configured source. Currently, only `GitRepository` source is supported.
## Multi-tenancy Flux v2 supports [multi-tenancy](https://github.com/fluxcd/flux2-multi-tenancy) in [version 0.26](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). This capability has been integrated into Azure GitOps with Flux v2.
->[!NOTE]
->For the multi-tenancy feature you need to know if your manifests contain any cross-namespace sourceRef for HelmRelease, Kustomization, ImagePolicy, or other objects, or [if you use a Kubernetes version less than 1.20.6](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). To prepare, take these actions:
+> [!NOTE]
+> For the multi-tenancy feature, you need to know if your manifests contain any cross-namespace sourceRef for HelmRelease, Kustomization, ImagePolicy, or other objects, or [if you use a Kubernetes version less than 1.20.6](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). To prepare, take these actions:
> > * Upgrade to Kubernetes version 1.20.6 or greater.
-> * In your Kubernetes manifests assure that all sourceRef are to objects within the same namespace as the GitOps configuration.
-> * If you need time to update your manifests, you can opt-out of multi-tenancy. However, you still need to upgrade your Kubernetes version.
+> * In your Kubernetes manifests, ensure that all `sourceRef` entries point to objects within the same namespace as the GitOps configuration.
+> * If you need time to update your manifests, you can [opt out of multi-tenancy](#opt-out-of-multi-tenancy). However, you still need to upgrade your Kubernetes version.
### Update manifests for multi-tenancy
-LetΓÇÖs say we deploy a `fluxConfiguration` to one of our Kubernetes clusters in the **cluster-config** namespace with cluster scope. We configure the source to sync the https://github.com/fluxcd/flux2-kustomize-helm-example repo. This is the same sample Git repo used in the tutorial earlier in this doc. After Flux syncs the repo, it will deploy the resources described in the manifests (yamls). Two of the manifests describe HelmRelease and HelmRepository objects.
+Let's say you deploy a `fluxConfiguration` to one of your Kubernetes clusters in the **cluster-config** namespace with cluster scope. You configure the source to sync the https://github.com/fluxcd/flux2-kustomize-helm-example repo. This is the same sample Git repo used in the tutorial earlier in this doc. After Flux syncs the repo, it will deploy the resources described in the manifests (YAML files). Two of the manifests describe HelmRelease and HelmRepository objects.
```yaml apiVersion: helm.toolkit.fluxcd.io/v2beta1
az k8s-extension update --configuration-settings multiTenancy.enforce=false -c C
## Migrate from Flux v1
-If you've been using Flux v1 in Azure Arc-enabled Kubernetes or AKS clusters and want to migrate to using Flux v2 in the same clusters, you first need to delete the Flux v1 `sourceControlConfigurations` from the clusters. The `microsoft.flux` cluster extension won't install if there are Flux v1 `sourceControlConfigurations` resources installed in the cluster.
+If you've been using Flux v1 in Azure Arc-enabled Kubernetes or AKS clusters and want to migrate to using Flux v2 in the same clusters, you first need to delete the Flux v1 `sourceControlConfigurations` from the clusters. The `microsoft.flux` cluster extension won't install if there are Flux v1 `sourceControlConfigurations` resources in the cluster.
-Use these az CLI commands to find and then delete existing `sourceControlConfigurations` in a cluster:
+Use these Azure CLI commands to find and then delete existing `sourceControlConfigurations` in a cluster:
```console az k8s-configuration list --cluster-name <Arc or AKS cluster name> --cluster-type <connectedClusters OR managedClusters> --resource-group <resource group name>
az k8s-configuration delete --name <configuration name> --cluster-name <Arc or A
You can also use the Azure portal to view and delete GitOps configurations in Azure Arc-enabled Kubernetes or AKS clusters.
-General information about migration from Flux v1 to Flux v2 is available in the fluxcd project: [Migrate from Flux v1 to v2](https://fluxcd.io/docs/migration/).
+General information about migration from Flux v1 to Flux v2 is available in the fluxcd project: [Migrate from Flux v1 to v2](https://fluxcd.io/docs/migration/).
## Next steps
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| Provider name | Distribution name | Version | | | -- | - | | RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.9.43](https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html), [4.10.23](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html), 4.11.0-rc.6 |
-| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) | TKGm 1.5.3; upstream K8s v1.22.8+vmware.1 <br>TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5_vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 |
+| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) | TKGm 1.6.0; upstream K8s v1.23.8+vmware.2 <br>TKGm 1.5.3; upstream K8s v1.22.8+vmware.1 <br>TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5_vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 |
| Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes) | [1.24](https://ubuntu.com/kubernetes/docs/1.24/components) | | SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.2.4](https://github.com/rancher/rke/releases/tag/v1.2.4); Kubernetes versions: [1.19.6](https://github.com/kubernetes/kubernetes/releases/tag/v1.19.6), [1.18.14](https://github.com/kubernetes/kubernetes/releases/tag/v1.18.14), [1.17.16](https://github.com/kubernetes/kubernetes/releases/tag/v1.17.16) | | Nutanix | [Karbon](https://www.nutanix.com/products/karbon) | Version 2.2.1 | | Platform9 | [Platform9 Managed Kubernetes (PMK)](https://platform9.com/managed-kubernetes/) | PMK Version [5.3.0](https://platform9.com/docs/kubernetes/release-notes#platform9-managed-kubernetes-version-53-release-notes); Kubernetes versions: v1.20.5, v1.19.6, v1.18.10 |
-| Cisco | [Intersight Kubernetes Service (IKS)](https://www.cisco.com/c/en/us/products/cloud-systems-management/cloud-operations/intersight-kubernetes-service.html) Distribution | Upstream K8s version: 1.21.13, 1.19.5 |
| Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution | Upstream K8s Version: 1.22.10 <br> Upstream K8s Version: 1.21.3 | | Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version 3.5.1 <br> MKE Version 3.4.7 | | Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) | Wind River Cloud Platform 22.06; Upstream K8s version: 1.23.1 <br>Wind River Cloud Platform 21.12; Upstream K8s version: 1.21.8 <br>Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 |
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Arc-enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 07/05/2022 Last updated : 10/08/2022
The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers.
-## Agent component details
+## Agent components
:::image type="content" source="media/agent-overview/connected-machine-agent.png" alt-text="Azure Arc-enabled servers agent architectural overview." border="false":::
The Azure Connected Machine agent package contains several logical components, w
>[!NOTE] > The [Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-overview.md) (AMA) is a separate agent that collects monitoring data, and it does not replace the Connected Machine agent; the AMA only replaces the Log Analytics agent, Diagnostics extension, and Telegraf agent for both Windows and Linux machines.
+## Agent resources
+
+The following information describes the directories and user accounts used by the Azure Connected Machine agent.
+
+### Windows agent installation details
+
+The Windows agent is distributed as a Windows Installer package (MSI) and can be downloaded from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent).
+After installing the Connected Machine agent for Windows, the following system-wide configuration changes are applied.
+
+* The following installation folders are created during setup.
+
+ | Directory | Description |
+ |--|-|
+ | %ProgramFiles%\AzureConnectedMachineAgent | azcmagent CLI and instance metadata service executables.|
+ | %ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\GC | Extension service executables.|
+ | %ProgramFiles%\AzureConnectedMachineAgent\GuestConfig\GC | Guest configuration (policy) service executables.|
+ | %ProgramData%\AzureConnectedMachineAgent | Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
+ | %ProgramData%\GuestConfig | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
+ | %SYSTEMDRIVE%\packages | Extension package executables |
+
+* The following Windows services are created on the target machine during installation of the agent.
+
+ | Service name | Display name | Process name | Description |
+ |--|--|--|-|
+ | himds | Azure Hybrid Instance Metadata Service | himds | Synchronizes metadata with Azure and hosts a local REST API for extensions and applications to access the metadata and request Azure Active Directory managed identity tokens |
+ | GCArcService | Guest configuration Arc Service | gc_service | Audits and enforces Azure guest configuration policies on the machine. |
+ | ExtensionService | Guest configuration Extension Service | gc_service | Installs, updates, and manages extensions on the machine. |
+
+* The following virtual service account is created during agent installation.
+
+ | Virtual Account | Description |
+ ||-|
+ | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. |
+
+ > [!TIP]
+ > This account requires the "Log on as a service" right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you may need to adjust your Group Policy Object to grant the right to "NT SERVICE\\himds" or "NT SERVICE\\ALL SERVICES" to allow the agent to function.
+
+* The following local security group is created during agent installation.
+
+ | Security group name | Description |
+ ||-|
+ | Hybrid agent extension applications | Members of this security group can request Azure Active Directory tokens for the system-assigned managed identity |
+
+* The following environmental variables are created during agent installation.
+
+ | Name | Default value | Description |
+ ||||
 + | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | Local endpoint that extensions and applications use to request Azure Active Directory managed identity tokens. |
 + | IMDS_ENDPOINT | `http://localhost:40342` | Local endpoint for the hybrid Instance Metadata Service hosted by `himds`. |
+
+* There are several log files available for troubleshooting. They are described in the following table.
+
+ | Log | Description |
+ |--|-|
+ | %ProgramData%\AzureConnectedMachineAgent\Log\himds.log | Records details of the heartbeat and identity agent component. |
+ | %ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log | Contains the output of the azcmagent tool commands. |
+ | %ProgramData%\GuestConfig\arc_policy_logs\gc_agent.log | Records details about the guest configuration (policy) agent component. |
+ | %ProgramData%\GuestConfig\ext_mgr_logs\gc_ext.log | Records details about extension manager activity (extension install, uninstall, and upgrade events). |
+ | %ProgramData%\GuestConfig\extension_logs | Directory containing logs for individual extensions. |
+
+* During uninstall of the agent, the following artifacts are not removed.
+
+ * %ProgramData%\AzureConnectedMachineAgent\Log
+ * %ProgramData%\AzureConnectedMachineAgent
+ * %ProgramData%\GuestConfig
+ * %SystemDrive%\packages
+
+### Linux agent installation details
+
+The Connected Machine agent for Linux is provided in the preferred package format for the distribution (.RPM or .DEB) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/). The agent is installed and configured with the shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent).
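A minimal sketch of downloading and running that script bundle manually, assuming `wget` is available (onboarding parameters come from the onboarding method you choose):

```console
# Download and run the installation script bundle
wget https://aka.ms/azcmagent -O ~/install_linux_azcmagent.sh
bash ~/install_linux_azcmagent.sh
```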
+
+Installing, upgrading, and removing the Connected Machine agent will not require you to restart your server.
+
+After installing the Connected Machine agent for Linux, the following system-wide configuration changes are applied.
+
+* The following installation folders are created during setup.
+
+ | Directory | Description |
+ |--|-|
+ | /opt/azcmagent/ | azcmagent CLI and instance metadata service executables. |
+ | /opt/GC_Ext/ | Extension service executables. |
+ | /opt/GC_Service/ | Guest configuration (policy) service executables. |
+ | /var/opt/azcmagent/ | Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
+ | /var/lib/GuestConfig/ | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
+
+* The following daemons are created on the target machine during installation of the agent.
+
+ | Service name | Display name | Process name | Description |
+ |--|--|--|-|
+ | himdsd.service | Azure Connected Machine Agent Service | himds | This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
+ | gcad.service | GC Arc Service | gc_linux_service | Audits and enforces Azure guest configuration policies on the machine. |
+ | extd.service | Extension Service | gc_linux_service | Installs, updates, and manages extensions on the machine. |
+
+* There are several log files available for troubleshooting. They are described in the following table.
+
+ | Log | Description |
+ |--|-|
+ | /var/opt/azcmagent/log/himds.log | Records details of the heartbeat and identity agent component. |
+ | /var/opt/azcmagent/log/azcmagent.log | Contains the output of the azcmagent tool commands. |
+ | /var/lib/GuestConfig/arc_policy_logs | Records details about the guest configuration (policy) agent component. |
+ | /var/lib/GuestConfig/ext_mgr_logs | Records details about extension manager activity (extension install, uninstall, and upgrade events). |
+ | /var/lib/GuestConfig/extension_logs | Directory containing logs for individual extensions. |
+
+* The following environment variables are created during agent installation. These variables are set in `/lib/systemd/system.conf.d/azcmagent.conf`.
+
+ | Name | Default value | Description |
+ |||-|
 + | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | Local endpoint that extensions and applications use to request Azure Active Directory managed identity tokens. |
 + | IMDS_ENDPOINT | `http://localhost:40342` | Local endpoint for the hybrid Instance Metadata Service hosted by `himds`. |
+
+* During uninstall of the agent, the following artifacts are not removed.
+
+ * /var/opt/azcmagent
+ * /var/lib/GuestConfig
+
+## Agent resource governance
+
+The Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent approaches resource governance under the following conditions:
+
+* The Guest Configuration agent is limited to use up to 5% of the CPU to evaluate policies.
+* The Extension Service agent is limited to use up to 5% of the CPU to install, upgrade, run, and delete extensions. The following exceptions apply:
+
+ * If the extension installs background services that run independent of Azure Arc, such as the Microsoft Monitoring Agent, those services will not be subject to the resource governance constraints listed above.
+ * The Log Analytics agent and Azure Monitor Agent are allowed to use up to 60% of the CPU during their install/upgrade/uninstall operations on Red Hat Linux, CentOS, and other enterprise Linux variants. The limit is higher for this combination of extensions and operating systems to accommodate the performance impact of [SELinux](https://www.redhat.com/en/topics/linux/what-is-selinux) on these systems.
+ * The Azure Monitor Agent can use up to 30% of the CPU during normal operations.
+ ## Instance metadata Metadata information about a connected machine is collected after the Connected Machine agent registers with Azure Arc-enabled servers. Specifically:
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
Title: Archive for What's new with Azure Arc-enabled servers agent description: The What's new release notes in the Overview section for Azure Arc-enabled servers agent contains six months of activity. Thereafter, the items are removed from the main article and put into this article. Previously updated : 09/09/2022 Last updated : 10/11/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues - Bug fixes
+## Version 1.18 - May 2022
+
+### New features
+
+- The agent can now be configured to operate in [monitoring mode](security-overview.md#agent-modes), which simplifies configuration of the agent for scenarios where you only want to use Arc for monitoring and security scenarios. This mode disables other agent functionality and prevents use of extensions that could make changes to the system (for example, the Custom Script Extension).
+- VMs and hosts running on Azure Stack HCI now report the cloud provider as "HCI" when [Azure benefits are enabled](/azure-stack/hci/manage/azure-benefits#enable-azure-benefits).
+
+### Fixed
+
+- `systemd` is now an official prerequisite on Linux and your package manager will alert you if you try to install the Azure Connected Machine agent on a server without systemd.
+- Guest configuration policies no longer create unnecessary files in the `/tmp` directory on Linux servers
+- Improved reliability when extracting extensions and guest configuration policy packages
+- Improved reliability for guest configuration policies that have child processes
+ ## Version 1.17 - April 2022 ### New features
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
### Known issues -- `azcmagent logs` doesn't collect Guest Configuration logs in this release. You can locate the log directories in the [agent installation details](deployment-options.md#agent-installation-details).
+- `azcmagent logs` doesn't collect Guest Configuration logs in this release. You can locate the log directories in the [agent installation details](agent-overview.md#agent-resources).
### New features
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc-enabled servers agent description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 09/27/2022 Last updated : 10/11/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md).
+## Version 1.23 - October 2022
+
+### New features
+
+- The minimum PowerShell version required on Windows Server has been reduced to PowerShell 4.0
+- The Windows agent installer is now compatible with systems that enforce a Microsoft publisher-based Windows Defender Application Control policy.
+- Added support for Rocky Linux 8 and Debian 11.
+
+### Fixed
+
+- Tag values are correctly preserved when connecting a server and specifying multiple tags (fixes known issue from version 1.22).
+- Fixed an issue that prevented some users from authenticating with an identity from a different tenant than the tenant where the server is (or will be) registered.
+- The `azcmagent check` command no longer validates CNAME records to reduce warnings that did not impact agent functionality.
+- The agent will now try to obtain an access token for up to 5 minutes when authenticating with an Azure Active Directory service principal.
+- Cloud presence checks now run only once, when the `himds` service starts on the server, to reduce local network traffic. If you live migrate your virtual machine to a different cloud provider, the agent will not reflect the new cloud provider until the service or computer is restarted.
+- Improved logging during the installation process.
+- The install script for Windows now saves the MSI to the TEMP directory instead of the current directory.
+ ## Version 1.22 - September 2022
+### Known issues
+
+- When connecting a server and specifying multiple tags, the value of the last tag is used for all tags. You will need to fix the tags after onboarding to use the correct values.
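One way to correct the tags after onboarding is sketched below; it assumes the `connectedmachine` Azure CLI extension, and the machine name, resource group, and tag values are placeholders:

```azurecli-interactive
# Overwrite the tags on an Azure Arc-enabled server with the intended values
az connectedmachine update --name <machine-name> --resource-group <resource-group> --tags Datacenter=NY Environment=Production
```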
+ ### New features - The default login flow for Windows computers now loads the local web browser to authenticate with Azure Active Directory instead of providing a device code. You can use the `--use-device-code` flag to return to the old behavior or [provide service principal credentials](onboard-service-principal.md) for a non-interactive authentication experience.
This page is updated monthly, so revisit it regularly. If you're looking for ite
- An issue that could cause the extension manager to hang during extension installation, update, and removal operations has been resolved. - Improved support for TLS 1.3
-## Version 1.18 - May 2022
-
-### New features
--- The agent can now be configured to operate in [monitoring mode](security-overview.md#agent-modes), which simplifies configuration of the agent for scenarios where you only want to use Arc for monitoring and security scenarios. This mode disables other agent functionality and prevents use of extensions that could make changes to the system (for example, the Custom Script Extension).-- VMs and hosts running on Azure Stack HCI now report the cloud provider as "HCI" when [Azure benefits are enabled](/azure-stack/hci/manage/azure-benefits#enable-azure-benefits).-
-### Fixed
--- `systemd` is now an official prerequisite on Linux and your package manager will alert you if you try to install the Azure Connected Machine agent on a server without systemd.-- Guest configuration policies no longer create unnecessary files in the `/tmp` directory on Linux servers-- Improved reliability when extracting extensions and guest configuration policy packages-- Improved reliability for guest configuration policies that have child processes- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Deployment Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deployment-options.md
Title: Azure Connected Machine agent deployment options description: Learn about the different options to onboard machines to Azure Arc-enabled servers. Previously updated : 03/14/2022 Last updated : 10/08/2022
Connecting machines in your hybrid environment directly with Azure can be accomp
## Onboarding methods
- The following table highlights each method so that you can determine which works best for your deployment. For detailed information, follow the links to view the steps for each topic.
+The following table highlights each method so that you can determine which works best for your deployment. For detailed information, follow the links to view the steps for each topic.
| Method | Description | |--|-|
Connecting machines in your hybrid environment directly with Azure can be accomp
| At scale | [Connect machines from Automation Update Management](onboard-update-management-machines.md) to create a service principal that installs and configures the agent for multiple machines managed with Azure Automation Update Management to connect machines non-interactively. | > [!IMPORTANT]
-> The Connected Machine agent cannot be installed on an Azure Windows virtual machine. If you attempt to, the installation detects this and rolls back.
+> The Connected Machine agent cannot be installed on an Azure virtual machine. The install script will warn you and roll back if it detects the server is running in Azure.
-Be sure to review the basic [prerequisites](prerequisites.md) and [network configuration requirements](network-requirements.md) before deploying the agent, as well as any specific requirements listed in the steps for the onboarding method you choose.
-
-## Agent installation details
-
-Review the following details to understand more about how the Connected Machine agent is installed on Windows or Linux machines.
-
-### Windows agent installation details
-
-You can download the [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center.
-
-The Connected Machine agent for Windows can be installed by using one of the following three methods:
-
-* Running the file `AzureConnectedMachineAgent.msi`.
-* Manually by running the Windows Installer package `AzureConnectedMachineAgent.msi` from the Command shell.
-* From a PowerShell session using a scripted method.
-
-Installing, upgrading, and removing the Connected Machine agent will not require you to restart your server.
-
-After installing the Connected Machine agent for Windows, the following system-wide configuration changes are applied.
-
-* The following installation folders are created during setup.
-
- |Folder |Description |
- |-||
- |%ProgramFiles%\AzureConnectedMachineAgent |azcmagent CLI and instance metadata service executables.|
- |%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\GC | Extension service executables.|
- |%ProgramFiles%\AzureConnectedMachineAgent\GuestConfig\GC | Guest configuration (policy) service executables.|
- |%ProgramData%\AzureConnectedMachineAgent |Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
- |%ProgramData%\GuestConfig |Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
-
-* The following Windows services are created on the target machine during installation of the agent.
-
- |Service name |Display name |Process name |Description |
- |-|-|-||
- |himds |Azure Hybrid Instance Metadata Service |himds |This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
- |GCArcService |Guest configuration Arc Service |gc_service |Monitors the desired state configuration of the machine.|
- |ExtensionService |Guest configuration Extension Service | gc_service |Installs the required extensions targeting the machine.|
-
-* The following virtual service account is created during agent installation.
-
- | Virtual Account | Description |
- |||
- | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. |
-
- > [!TIP]
- > This account requires the "Log on as a service" right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you may need to adjust your Group Policy Object to grant the right to "NT SERVICE\\himds" or "NT SERVICE\\ALL SERVICES" to allow the agent to function.
-* The following local security group is created during agent installation.
-
- | Security group name | Description |
- ||-|
- | Hybrid agent extension applications | Members of this security group can request Azure Active Directory tokens for the system-assigned managed identity |
-
-* The following environmental variables are created during agent installation.
-
- |Name |Default value |Description |
- |--|--||
- |IDENTITY_ENDPOINT |<`http://localhost:40342/metadata/identity/oauth2/token`> ||
- |IMDS_ENDPOINT |<`http://localhost:40342`> ||
-
-* There are several log files available for troubleshooting. They are described in the following table.
-
- |Log |Description |
- |-||
- |%ProgramData%\AzureConnectedMachineAgent\Log\himds.log |Records details of the heartbeat and identity agent component.|
- |%ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log |Contains the output of the azcmagent tool commands.|
- |%ProgramData%\GuestConfig\arc_policy_logs\ |Records details about the guest configuration (policy) agent component.|
- |%ProgramData%\GuestConfig\ext_mgr_logs|Records details about the Extension agent component.|
- |%ProgramData%\GuestConfig\extension_logs\\\<Extension>|Records details from the installed extension.|
-
-* The local security group **Hybrid agent extension applications** is created.
-
-* During uninstall of the agent, the following artifacts are not removed.
-
- * %ProgramData%\AzureConnectedMachineAgent\Log
- * %ProgramData%\AzureConnectedMachineAgent and subdirectories
- * %ProgramData%\GuestConfig
-
-### Linux agent installation details
-
-The Connected Machine agent for Linux is provided in the preferred package format for the distribution (.RPM or .DEB) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/). The agent is installed and configured with the shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent).
-
-Installing, upgrading, and removing the Connected Machine agent will not require you to restart your server.
-
-After installing the Connected Machine agent for Linux, the following system-wide configuration changes are applied.
-
-* The following installation folders are created during setup.
-
- |Folder |Description |
- |-||
- |/opt/azcmagent/ |azcmagent CLI and instance metadata service executables.|
- |/opt/GC_Ext/ | Extension service executables.|
- |/opt/GC_Service/ |Guest configuration (policy) service executables.|
- |/var/opt/azcmagent/ |Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
- |/var/lib/GuestConfig/ |Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
-
-* The following daemons are created on the target machine during installation of the agent.
-
- |Service name |Display name |Process name |Description |
- |-|-|-||
- |himdsd.service |Azure Connected Machine Agent Service |himds |This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
- |gcad.service |GC Arc Service |gc_linux_service |Monitors the desired state configuration of the machine. |
- |extd.service |Extension Service |gc_linux_service | Installs the required extensions targeting the machine.|
-
-* There are several log files available for troubleshooting. They are described in the following table.
-
- |Log |Description |
- |-||
- |/var/opt/azcmagent/log/himds.log |Records details of the heartbeat and identity agent component.|
- |/var/opt/azcmagent/log/azcmagent.log |Contains the output of the azcmagent tool commands.|
- |/var/lib/GuestConfig/arc_policy_logs |Records details about the guest configuration (policy) agent component.|
- |/var/lib/GuestConfig/ext_mgr_logs |Records details about the extension agent component.|
- |/var/lib/GuestConfig/extension_logs|Records details from extension install/update/uninstall operations.|
-
-* The following environmental variables are created during agent installation. These variables are set in `/lib/systemd/system.conf.d/azcmagent.conf`.
-
- |Name |Default value |Description |
- |--|--||
- |IDENTITY_ENDPOINT |<`http://localhost:40342/metadata/identity/oauth2/token`> ||
- |IMDS_ENDPOINT |<`http://localhost:40342`> ||
-
-* During uninstall of the agent, the following artifacts are not removed.
-
- * /var/opt/azcmagent
- * /var/lib/GuestConfig
-
-## Agent resource governance
-
-The Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent approaches resource governance under the following conditions:
-
-* The Guest Configuration agent is limited to use up to 5% of the CPU to evaluate policies.
-* The Extension Service agent is limited to use up to 5% of the CPU to install and manage extensions.
-
- * Once installed, each extension is limited to use up to 5% of the CPU while running. For example, if you have two extensions installed, they can use a combined total of 10% of the CPU.
- * The Log Analytics agent and Azure Monitor Agent are allowed to use up to 60% of the CPU during their install/upgrade/uninstall operations on Red Hat Linux, CentOS, and other enterprise Linux variants. The limit is higher for this combination of extensions and operating systems to accommodate the performance impact of [SELinux](https://www.redhat.com/en/topics/linux/what-is-selinux) on these systems.
+Be sure to review the basic [prerequisites](prerequisites.md) and [network configuration requirements](network-requirements.md) before deploying the agent, as well as any specific requirements listed in the steps for the onboarding method you choose. To learn more about what changes the agent will make to your system, see [Overview of the Azure Connected Machine Agent](agent-overview.md).
## Next steps
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Title: Managing the Azure Arc-enabled servers agent description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Connected Machine agent. Previously updated : 08/03/2022 Last updated : 10/12/2022
You do not need to restart any services when reconfiguring the proxy settings wi
### Proxy bypass for private endpoints
-Starting with agent version 1.15, you can also specify services which should **not** use the specified proxy server. This can help with split-network designs and private endpoint scenarios where you want Azure Active Directory (Azure AD) and Azure Resource Manager traffic to go through your proxy server to public endpoints but want Azure Arc traffic to skip the proxy and communicate with a private IP address on your network.
+Starting with agent version 1.15, you can also specify services which should **not** use the specified proxy server. This can help with split-network designs and private endpoint scenarios where you want Azure Active Directory and Azure Resource Manager traffic to go through your proxy server to public endpoints but want Azure Arc traffic to skip the proxy and communicate with a private IP address on your network.
The proxy bypass feature does not require you to enter specific URLs to bypass. Instead, you provide the name of the service(s) that should not use the proxy server. | Proxy bypass value | Affected endpoints | | | |
-| Azure AD | `login.windows.net`, `login.microsoftonline.com`, `pas.windows.net` |
-| ARM | `management.azure.com` |
-| Arc | `his.arc.azure.com`, `guestconfiguration.azure.com`, `guestnotificationservice.azure.com`, `servicebus.windows.net` |
+| `AAD` | `login.windows.net`, `login.microsoftonline.com`, `pas.windows.net` |
+| `ARM` | `management.azure.com` |
+| `Arc` | `his.arc.azure.com`, `guestconfiguration.azure.com`, `guestnotificationservice.azure.com`, `servicebus.windows.net` |
To send Azure Active Directory and Azure Resource Manager traffic through a proxy server but skip the proxy for Azure Arc traffic, run the following command:
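The exact command appears in the full article. A hedged sketch using the `azcmagent config` proxy settings described above might look like this, assuming `proxy.url` is already set and using the `Arc` bypass value from the table:

```console
# Keep Azure AD and ARM traffic on the proxy, but send Azure Arc traffic directly
azcmagent config set proxy.bypass "Arc"
```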
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions.md
Title: VM extension management with Azure Arc-enabled servers description: Azure Arc-enabled servers can manage deployment of virtual machine extensions that provide post-deployment configuration and automation tasks with non-Azure VMs. Previously updated : 07/26/2022 Last updated : 10/08/2022
VM extension functionality is available only in the list of [supported regions](
In this release, we support the following VM extensions on Windows and Linux machines.
-To learn about the Azure Connected Machine agent package and details about the Extension agent component, see [Agent overview](agent-overview.md#agent-component-details).
+To learn about the Azure Connected Machine agent package and details about the Extension agent component, see [Agent overview](agent-overview.md).
> [!NOTE]
> The Desired State Configuration VM extension is no longer available for Azure Arc-enabled servers. Alternatively, we recommend [migrating to machine configuration](../../governance/machine-configuration/machine-configuration-azure-automation-migration.md) or using the Custom Script Extension to manage the post-deployment configuration of your server.
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
Title: Connected Machine agent prerequisites description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 09/27/2022 Last updated : 10/11/2022
The following versions of the Windows and Linux operating system are officially
* Windows IoT Enterprise
* Azure Stack HCI
* Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS
-* Debian 10
+* Debian 10 and 11
* CentOS Linux 7 and 8
+* Rocky Linux 8
* SUSE Linux Enterprise Server (SLES) 12 and 15
* Red Hat Enterprise Linux (RHEL) 7 and 8
* Amazon Linux 2
The following versions of the Windows and Linux operating system are officially
Windows operating systems:

* .NET Framework 4.6 or later is required. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers).
-* Windows PowerShell 5.1 is required. [Download Windows Management Framework 5.1.](https://www.microsoft.com/download/details.aspx?id=54616).
+* Windows PowerShell 4.0 or later is required. No action is required for Windows Server 2012 R2 and later. For Windows Server 2012 and Windows Server 2008 R2, [download Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).
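If you're unsure whether a machine meets these two requirements, a quick check from a PowerShell prompt is sketched below; the registry key is the standard .NET Framework 4.x release key, and a `Release` value of 393295 or higher generally indicates .NET Framework 4.6 or later:

```powershell
# Installed Windows PowerShell version (4.0 or later is required).
$PSVersionTable.PSVersion

# Installed .NET Framework 4.x release number (393295+ generally means 4.6 or later).
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release
```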
Linux operating systems:
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
When you're seeing the aggregation type:
- **Average** shows the average value of all data points in the time granularity.
- **Sum** shows the sum of all data points in the time granularity and may be misleading depending on the specific metric.
-Under normal conditions, **Average** and **Max** are similar because only one node emits these metrics (the primary node). In a scenario where the number of connected clients changes rapidly, **Max**, **Average**, and **Min** would show different values and this is also expected behavior.
+Under normal conditions, **Average** and **Max** are similar because only one node emits these metrics (the primary node). In a scenario where the number of connected clients changes rapidly, **Max**, **Average**, and **Min** would show different values, which is also expected behavior.
Generally, **Average** shows you a smooth chart of your desired metric and reacts well to changes in time granularity. **Max** and **Min** can hide large changes in the metric if the time granularity is large but can be used with a small time granularity to help pinpoint exact times when large changes occur in the metric. The types **Count** and **Sum** can be misleading for certain metrics (connected clients included). Instead, we suggest you look at the **Average** metrics and not the **Sum** metrics.

> [!NOTE]
-> Even when the cache is idle with no connected active client applications, you might see some cache activity, such as connected clients, memory usage, and operations being performed. This activity is normal during the operation of an instance of Azure Cache for Redis.
+> Even when the cache is idle with no connected active client applications, you might see some cache activity, such as connected clients, memory usage, and operations being performed. This activity is normal during the operation of the cache.
>
-| Metric | Description |
-| -|-|
-| Cache Hits |The number of successful key lookups during the specified reporting interval. This number maps to `keyspace_hits` from the Redis [INFO](https://redis.io/commands/info) command. |
-| Cache Latency | The latency of the cache calculated using the internode latency of the cache. This metric is measured in microseconds, and has three dimensions: `Avg`, `Min`, and `Max`. The dimensions represent the average, minimum, and maximum latency of the cache during the specified reporting interval. |
-| Cache Misses |The number of failed key lookups during the specified reporting interval. This number maps to `keyspace_misses` from the Redis INFO command. Cache misses don't necessarily mean there's an issue with the cache. For example, when using the cache-aside programming pattern, an application looks first in the cache for an item. If the item isn't there (cache miss), the item is retrieved from the database and added to the cache for next time. Cache misses are normal behavior for the cache-aside programming pattern. If the number of cache misses is higher than expected, examine the application logic that populates and reads from the cache. If items are being evicted from the cache because of memory pressure, then there may be some cache misses, but a better metric to monitor for memory pressure would be `Used Memory` or `Evicted Keys`. |
-| Cache Read |The amount of data read from the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. **This value corresponds to the network bandwidth used by this cache. If you want to set up alerts for server-side network bandwidth limits, then create it using this `Cache Read` counter. See [this table](./cache-planning-faq.yml#azure-cache-for-redis-performance) for the observed bandwidth limits for various cache pricing tiers and sizes.** |
-| Cache Write |The amount of data written to the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. This value corresponds to the network bandwidth of data sent to the cache from the client. |
-| Connected Clients |The number of client connections to the cache during the specified reporting interval. This number maps to `connected_clients` from the Redis INFO command. Once the [connection limit](cache-configure.md#default-redis-server-configuration) is reached, later attempts to connect to the cache fail. Even if there are no active client applications, there may still be a few instances of connected clients because of internal processes and connections. |
-| Connections Created Per Second | The number of instantaneous connections created per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. |
-| Connections Closed Per Second | The number of instantaneous connections closed per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. |
-| CPU |The CPU utilization of the Azure Cache for Redis server as a percentage during the specified reporting interval. This value maps to the operating system `\Processor(_Total)\% Processor Time` performance counter. Note: This metric can be noisy due to low priority background security processes running on the node, so we recommend monitoring Server Load metric to track load on a Redis server.|
-| Errors | Specific failures and performance issues that the cache could be experiencing during a specified reporting interval. This metric has eight dimensions representing different error types, but could have more added in the future. The error types represented now are as follows: <br/><ul><li>**Failover** ΓÇô when a cache fails over (subordinate promotes to primary)</li><li>**Dataloss** ΓÇô when there's data loss on the cache</li><li>**UnresponsiveClients** ΓÇô when the clients aren't reading data from the server fast enough</li><li>**AOF** ΓÇô when there's an issue related to AOF persistence</li><li>**RDB** ΓÇô when there's an issue related to RDB persistence</li><li>**Import** ΓÇô when there's an issue related to Import RDB</li><li>**Export** ΓÇô when there's an issue related to Export RDB</li></ul> |
-| Evicted Keys |The number of items evicted from the cache during the specified reporting interval because of the `maxmemory` limit. This number maps to `evicted_keys` from the Redis INFO command. |
-| Expired Keys |The number of items expired from the cache during the specified reporting interval. This value maps to `expired_keys` from the Redis INFO command.|
-| Gets |The number of get operations from the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_get`, `cmdstat_hget`, `cmdstat_hgetall`, `cmdstat_hmget`, `cmdstat_mget`, `cmdstat_getbit`, and `cmdstat_getrange`, and is equivalent to the sum of cache hits and misses during the reporting interval. |
-| Operations per Second | The total number of commands processed per second by the cache server during the specified reporting interval. This value maps to "instantaneous_ops_per_sec" from the Redis INFO command. |
-| Redis Server Load |The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. If this counter reaches 100, it means the Redis server has hit a performance ceiling and the CPU can't process work any faster. If you're seeing high Redis Server Load, then you see timeout exceptions in the client. In this case, you should consider scaling up or partitioning your data into multiple caches. |
-| Sets |The number of set operations to the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_set`, `cmdstat_hset`, `cmdstat_hmset`, `cmdstat_hsetnx`, `cmdstat_lset`, `cmdstat_mset`, `cmdstat_msetnx`, `cmdstat_setbit`, `cmdstat_setex`, `cmdstat_setrange`, and `cmdstat_setnx`. |
-| Total Keys | The maximum number of keys in the cache during the past reporting time period. This number maps to `keyspace` from the Redis INFO command. Because of a limitation in the underlying metrics system for caches with clustering enabled, Total Keys return the maximum number of keys of the shard that had the maximum number of keys during the reporting interval. |
-| Total Operations |The total number of commands processed by the cache server during the specified reporting interval. This value maps to `total_commands_processed` from the Redis INFO command. When Azure Cache for Redis is used purely for pub/sub there will be no metrics for `Cache Hits`, `Cache Misses`, `Gets`, or `Sets`, but there will be `Total Operations` metrics that reflect the cache usage for pub/sub operations. |
-| Used Memory |The amount of cache memory in MB that is used for key/value pairs in the cache during the specified reporting interval. This value maps to `used_memory` from the Redis INFO command. This value doesn't include metadata or fragmentation. |
-| Used Memory Percentage | The % of total memory that is being used during the specified reporting interval. This value references the `used_memory` value from the Redis INFO command to calculate the percentage. |
-| Used Memory RSS |The amount of cache memory used in MB during the specified reporting interval, including fragmentation and metadata. This value maps to `used_memory_rss` from the Redis INFO command. |
+## List of metrics
+
+- Cache Latency (preview)
+ - The latency of the cache calculated using the internode latency of the cache. This metric is measured in microseconds, and has three dimensions: `Avg`, `Min`, and `Max`. The dimensions represent the average, minimum, and maximum latency of the cache during the specified reporting interval.
+- Cache Misses
+ - The number of failed key lookups during the specified reporting interval. This number maps to `keyspace_misses` from the Redis INFO command. Cache misses don't necessarily mean there's an issue with the cache. For example, when using the cache-aside programming pattern, an application looks first in the cache for an item. If the item isn't there (cache miss), the item is retrieved from the database and added to the cache for next time. Cache misses are normal behavior for the cache-aside programming pattern. If the number of cache misses is higher than expected, examine the application logic that populates and reads from the cache. If items are being evicted from the cache because of memory pressure, then there may be some cache misses, but a better metric to monitor for memory pressure would be `Used Memory` or `Evicted Keys`.
+- Cache Miss Rate
+ - The percent of unsuccessful key lookups during the specified reporting interval. This metric isn't available in Enterprise or Enterprise Flash tier caches.
+- Cache Read
+ - The amount of data read from the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. This value corresponds to the network bandwidth used by this cache. If you want to set up alerts for server-side network bandwidth limits, then create it using this `Cache Read` counter. See [this table](./cache-planning-faq.yml#azure-cache-for-redis-performance) for the observed bandwidth limits for various cache pricing tiers and sizes.
+- Cache Write
+ - The amount of data written to the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. This value corresponds to the network bandwidth of data sent to the cache from the client.
+- Connected Clients
+ - The number of client connections to the cache during the specified reporting interval. This number maps to `connected_clients` from the Redis INFO command. Once the [connection limit](cache-configure.md#default-redis-server-configuration) is reached, later attempts to connect to the cache fail. Even if there are no active client applications, there may still be a few instances of connected clients because of internal processes and connections.
+- Connections Created Per Second
+ - The number of instantaneous connections created per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. This metric isn't available in Enterprise or Enterprise Flash tier caches.
+- Connections Closed Per Second
+ - The number of instantaneous connections closed per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. This metric isn't available in Enterprise or Enterprise Flash tier caches.
+- CPU
+ - The CPU utilization of the Azure Cache for Redis server as a percentage during the specified reporting interval. This value maps to the operating system `\Processor(_Total)\% Processor Time` performance counter. Note: This metric can be noisy due to low-priority background security processes running on the node, so we recommend monitoring the Server Load metric to track load on a Redis server.
+- Errors
+ - Specific failures and performance issues that the cache could be experiencing during a specified reporting interval. This metric has eight dimensions representing different error types, but could have more added in the future. The error types represented now are as follows:
+ - **Failover** – when a cache fails over (subordinate promotes to primary)
+ - **Dataloss** – when there's data loss on the cache
+ - **UnresponsiveClients** – when the clients aren't reading data from the server fast enough
+ - **AOF** – when there's an issue related to AOF persistence
+ - **RDB** – when there's an issue related to RDB persistence
+ - **Import** – when there's an issue related to Import RDB
+ - **Export** – when there's an issue related to Export RDB
+- Evicted Keys
+ - The number of items evicted from the cache during the specified reporting interval because of the `maxmemory` limit.
+ - This number maps to `evicted_keys` from the Redis INFO command.
+- Expired Keys
+ - The number of items expired from the cache during the specified reporting interval. This value maps to `expired_keys` from the Redis INFO command.
+
+> [!IMPORTANT]
+> Geo-replication metrics are affected by monthly internal maintenance operations. The Azure Cache for Redis service periodically patches all caches with the latest platform features and improvements. During these updates, each cache node is taken offline, which temporarily disables the geo-replication link. If your geo replication link is unhealthy, check to see if it was caused by a patching event on either the geo-primary or geo-secondary cache by using **Diagnose and Solve Problems** from the Resource menu in the portal. Depending on the amount of data in the cache, the downtime from patching can take anywhere from a few minutes to an hour. If the geo-replication link is unhealthy for over an hour, [file a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+>
+
+- Geo Replication Connectivity Lag (preview)
+ - Depicts the time, in seconds, since the last successful data synchronization between geo-primary & geo-secondary. If the link goes down, this value continues to increase, indicating a problem.
+ - This metric is only emitted **from the geo-secondary** cache instance. On the geo-primary instance, this metric has no value.
+ - This metric is only available in the Premium tier for caches with geo-replication enabled.
+- Geo Replication Data Sync Offset (preview)
+ - Depicts the approximate amount of data, in bytes, that has yet to be synchronized to geo-secondary cache.
+ - This metric is only emitted **from the geo-primary** cache instance. On the geo-secondary instance, this metric has no value.
+ - This metric is only available in the Premium tier for caches with geo-replication enabled.
+- Geo Replication Full Sync Event Finished (preview)
+ - Depicts the completion of full synchronization between geo-replicated caches. When you see lots of writes on the geo-primary, and replication between the two caches can't keep up, a full sync is needed. A full sync involves copying the complete data from geo-primary to geo-secondary by taking an RDB snapshot rather than a partial sync that occurs on normal instances. See [this page](https://redis.io/docs/manual/replication/#how-redis-replication-works) for a more detailed explanation.
+ - This metric reports zero most of the time because geo-replication uses partial resynchronizations for any new data added after the initial full synchronization.
+ - This metric is only emitted **from the geo-secondary** cache instance. On the geo-primary instance, this metric has no value.
+ - This metric is only available in the Premium tier for caches with geo-replication enabled.
+
+- Geo Replication Full Sync Event Started (preview)
+ - Depicts the start of full synchronization between geo-replicated caches. When there are a lot of writes on the geo-primary, and replication between the two caches can't keep up, a full sync is needed. A full sync involves copying the complete data from geo-primary to geo-secondary by taking an RDB snapshot rather than a partial sync that occurs on normal instances. See [this page](https://redis.io/docs/manual/replication/#how-redis-replication-works) for a more detailed explanation.
+ - This metric reports zero most of the time because geo-replication uses partial resynchronizations for any new data added after the initial full synchronization.
+ - This metric is only emitted **from the geo-secondary** cache instance. On the geo-primary instance, this metric has no value.
+ - This metric is only available in the Premium tier for caches with geo-replication enabled.
+
+- Geo Replication Healthy
+ - Depicts the status of the geo-replication link between caches. The replication link can be in one of two states:
+ - 0 – disconnected/unhealthy
+ - 1 – healthy
+ - This metric is only emitted **from the geo-secondary** cache instance. On the geo-primary instance, this metric has no value.
+ - This metric is only available in the Premium tier for caches with geo-replication enabled.
+ - This metric may indicate a disconnected/unhealthy replication status for several reasons, including: monthly patching, host OS updates, network misconfiguration, or failed geo-replication link provisioning.
+ - A value of 0 does not mean that data on the geo-replica is lost. It just means that the link between geo-primary and geo-secondary is unhealthy.
+ - If the geo-replication link is unhealthy for over an hour, [file a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+
+- Gets
+ - The number of get operations from the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_get`, `cmdstat_hget`, `cmdstat_hgetall`, `cmdstat_hmget`, `cmdstat_mget`, `cmdstat_getbit`, and `cmdstat_getrange`, and is equivalent to the sum of cache hits and misses during the reporting interval.
+- Operations per Second
+ - The total number of commands processed per second by the cache server during the specified reporting interval. This value maps to `instantaneous_ops_per_sec` from the Redis INFO command.
+- Server Load
+ - The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. If this counter reaches 100, the Redis server has hit a performance ceiling, and the CPU can't process work any faster. If you're seeing a high Redis Server Load, you'll likely see timeout exceptions in the client. In this case, you should consider scaling up or partitioning your data into multiple caches.
+- Sets
+ - The number of set operations to the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_set`, `cmdstat_hset`, `cmdstat_hmset`, `cmdstat_hsetnx`, `cmdstat_lset`, `cmdstat_mset`, `cmdstat_msetnx`, `cmdstat_setbit`, `cmdstat_setex`, `cmdstat_setrange`, and `cmdstat_setnx`.
+- Total Keys
+ - The maximum number of keys in the cache during the past reporting time period. This number maps to `keyspace` from the Redis INFO command. Because of a limitation in the underlying metrics system for caches with clustering enabled, Total Keys returns the maximum number of keys of the shard that had the maximum number of keys during the reporting interval.
+- Total Operations
+ - The total number of commands processed by the cache server during the specified reporting interval. This value maps to `total_commands_processed` from the Redis INFO command. When Azure Cache for Redis is used purely for pub/sub, there will be no metrics for `Cache Hits`, `Cache Misses`, `Gets`, or `Sets`, but there will be `Total Operations` metrics that reflect the cache usage for pub/sub operations.
+- Used Memory
+ - The amount of cache memory in MB that is used for key/value pairs in the cache during the specified reporting interval. This value maps to `used_memory` from the Redis INFO command. This value doesn't include metadata or fragmentation.
+- Used Memory Percentage
+ - The percent of total memory that is being used during the specified reporting interval. This value references the `used_memory` value from the Redis INFO command to calculate the percentage.
+- Used Memory RSS
+ - The amount of cache memory used in MB during the specified reporting interval, including fragmentation and metadata. This value maps to `used_memory_rss` from the Redis INFO command. This metric isn't available in Enterprise or Enterprise Flash tier caches.
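Most of these metrics can also be pulled programmatically. A minimal sketch with Az PowerShell follows; the cache name, resource group, and the lowercase metric name `connectedclients` are illustrative assumptions, so verify the exact metric names in the portal's metrics experience before relying on them:

```powershell
# Requires the Az.RedisCache and Az.Monitor modules and an authenticated session (Connect-AzAccount).
$cache = Get-AzRedisCache -ResourceGroupName "my-resource-group" -Name "my-cache"

# Average number of connected clients over the last hour at 1-minute granularity.
Get-AzMetric -ResourceId $cache.Id `
             -MetricName "connectedclients" `
             -StartTime (Get-Date).AddHours(-1) `
             -EndTime (Get-Date) `
             -TimeGrain 00:01:00 `
             -AggregationType Average
```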
## Create alerts
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md
description: Learn about Azure Cache for Redis to enable cache-aside, content ca
+ Last updated 03/15/2022- # About Azure Cache for Redis
Azure Cache for Redis provides an in-memory data store based on the [Redis](http
Azure Cache for Redis offers both the Redis open-source (OSS Redis) and a commercial product from Redis Inc. (Redis Enterprise) as a managed service. It provides secure and dedicated Redis server instances and full Redis API compatibility. The service is operated by Microsoft, hosted on Azure, and usable by any application within or outside of Azure.
-Azure Cache for Redis can be used as a distributed data or content cache, a session store, a message broker, and more. It can be deployed as a standalone. Or, it can be deployed along with other Azure database services, such as Azure SQL or Cosmos DB.
+Azure Cache for Redis can be used as a distributed data or content cache, a session store, a message broker, and more. It can be deployed standalone, or along with other Azure database services, such as Azure SQL or Azure Cosmos DB.
## Key scenarios
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Last updated 09/29/2022
### Upgrade your Azure Cache for Redis instances to use Redis version 6 by June 30, 2023
-On June 30, 2023, we'll retire version 4 for Azure Cache for Redis instances. Before that date, you need to upgrade any of your cache instances to version 6.
+On June 30, 2023, we'll retire version 4 for Azure Cache for Redis instances. Before that date, you need to [upgrade](cache-how-to-upgrade.md) any of your cache instances to version 6.
- All cache instances running Redis version 4 after June 30, 2023 will be upgraded automatically.
- All cache instances running Redis version 4 that have geo-replication enabled will be upgraded automatically after August 30, 2023.
-We recommend that you upgrade your caches on your own to accommodate your schedule and the needs of your users to make the upgrade as convenient as possible.
+We recommend that you [upgrade](cache-how-to-upgrade.md) your caches on your own to accommodate your schedule and the needs of your users to make the upgrade as convenient as possible.
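To see which caches still report version 4 before you plan the upgrade, a quick inventory sketch with the Az.RedisCache module might look like the following; the property names are as returned by `Get-AzRedisCache`:

```powershell
# List caches in the current subscription that still report Redis version 4
# (requires the Az.RedisCache module and an authenticated session).
Get-AzRedisCache |
    Where-Object { $_.RedisVersion -like '4*' } |
    Select-Object Name, ResourceGroupName, RedisVersion
```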
For more information, see [Retirements](cache-retired-features.md).
azure-cache-for-redis Redis Cache Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/redis-cache-insights-overview.md
Last updated 09/10/2020 --+ # Explore Azure Monitor for Azure Cache for Redis
To view the utilization and performance of your storage accounts across all of y
1. Search for **Monitor**, and select **Monitor**.
- ![Search box with the word "Monitor" and the Services search result that shows "Monitor" with a speedometer symbol](../cosmos-db/media/cosmosdb-insights-overview/search-monitor.png)
+ ![Search box with the word "Monitor" and the Services search result that shows "Monitor" with a speedometer symbol](../cosmos-db/media/insights-overview/search-monitor.png)
1. Select **Azure Cache for Redis**. If this option isn't present, select **More** > **Azure Cache for Redis**.
Selecting any of the other tabs for **Performance** or **Operations** opens the
To pin any metric section to an [Azure dashboard](../azure-portal/azure-portal-dashboards.md), select the pushpin symbol in the section's upper right.
-![A metric section with the pushpin symbol highlighted](../cosmos-db/media/cosmosdb-insights-overview/pin.png)
+![A metric section with the pushpin symbol highlighted](../cosmos-db/media/insights-overview/pin.png)
To export your data into an Excel format, select the down arrow symbol to the left of the pushpin symbol.
-![A highlighted export-workbook symbol](../cosmos-db/media/cosmosdb-insights-overview/export.png)
+![A highlighted export-workbook symbol](../cosmos-db/media/insights-overview/export.png)
To expand or collapse all views in a workbook, select the expand symbol to the left of the export symbol.
-![A highlighted expand-workbook symbol](../cosmos-db/media/cosmosdb-insights-overview/expand.png)
+![A highlighted expand-workbook symbol](../cosmos-db/media/insights-overview/expand.png)
## Customize Azure Monitor for Azure Cache for Redis Because this experience is built atop Azure Monitor workbook templates, you can select **Customize** > **Edit** > **Save** to save a copy of your modified version into a custom workbook.
-![A command bar with Customize highlighted](../cosmos-db/media/cosmosdb-insights-overview/customize.png)
+![A command bar with Customize highlighted](../cosmos-db/media/insights-overview/customize.png)
Workbooks are saved within a resource group in either the **My Reports** section or the **Shared Reports** section. **My Reports** is available only to you. **Shared Reports** is available to everyone with access to the resource group. After you save a custom workbook, go to the workbook gallery to open it.
-![A command bar with Gallery highlighted](../cosmos-db/media/cosmosdb-insights-overview/gallery.png)
+![A command bar with Gallery highlighted](../cosmos-db/media/insights-overview/gallery.png)
## Troubleshooting
For troubleshooting guidance, refer to the dedicated workbook-based insights [tr
* Configure [metric alerts](../azure-monitor/alerts/alerts-metric.md) and [service health notifications](../service-health/alerts-activity-log-service-notifications-portal.md) to set up automated alerts that aid in detecting problems.
-* Learn the scenarios that workbooks support, how to author or customize reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md).
+* Learn the scenarios that workbooks support, how to author or customize reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md).
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
azure-functions Durable Functions Disaster Recovery Geo Distribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-disaster-recovery-geo-distribution.md
Title: Disaster recovery and geo-distribution Azure Durable Functions
description: Learn about disaster recovery and geo-distribution in Durable Functions. + Last updated 05/11/2021
In Durable Functions, all state is persisted in Azure Storage by default. A [tas
> [!NOTE] > The guidance in this article assumes that you are using the default Azure Storage provider for storing Durable Functions runtime state. However, it's possible to configure alternate storage providers that store state elsewhere, like a SQL Server database. Different disaster recovery and geo-distribution strategies may be required for the alternate storage providers. For more information on the alternate storage providers, see the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation.
-Orchestrations and entities can be triggered using [client functions](durable-functions-types-features-overview.md#client-functions) that are themselves triggered via HTTP or one of the other supported Azure Functions trigger types. They can also be triggered using [built-in HTTP APIs](durable-functions-http-features.md#built-in-http-apis). For simplicity, this article will focus on scenarios involving Azure Storage and HTTP-based function triggers, and options to increase availability and minimize downtime during disaster recovery activities. Other trigger types, such as Service Bus or Cosmos DB triggers, will not be explicitly covered.
+Orchestrations and entities can be triggered using [client functions](durable-functions-types-features-overview.md#client-functions) that are themselves triggered via HTTP or one of the other supported Azure Functions trigger types. They can also be triggered using [built-in HTTP APIs](durable-functions-http-features.md#built-in-http-apis). For simplicity, this article will focus on scenarios involving Azure Storage and HTTP-based function triggers, and options to increase availability and minimize downtime during disaster recovery activities. Other trigger types, such as Service Bus or Azure Cosmos DB triggers, will not be explicitly covered.
The following scenarios are based on Active-Passive configurations, since they are guided by the usage of Azure Storage. This pattern consists of deploying a backup (passive) function app to a different region. Traffic Manager will monitor the primary (active) function app for HTTP availability. It will fail over to the backup function app if the primary fails. For more information, see [Azure Traffic Manager](https://azure.microsoft.com/services/traffic-manager/)'s [Priority Traffic-Routing Method.](../../traffic-manager/traffic-manager-routing-methods.md#priority-traffic-routing-method)
azure-functions Durable Functions Instance Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-instance-management.md
Last updated 05/25/2022 ms.devlang: csharp, java, javascript, python+ #Customer intent: As a developer, I want to understand the options provided for managing my Durable Functions orchestration instances, so I can keep my orchestrations running efficiently and make improvements.
public static void SendInstanceInfo(
{ HttpManagementPayload payload = client.CreateHttpManagementPayload(ctx.InstanceId);
- // send the payload to Cosmos DB
+ // send the payload to Azure Cosmos DB
document = new { Payload = payload, id = ctx.InstanceId }; } ```
modules.exports = async function(context, ctx) {
const payload = client.createHttpManagementPayload(ctx.instanceId);
- // send the payload to Cosmos DB
+ // send the payload to Azure Cosmos DB
context.bindings.document = JSON.stringify({ id: ctx.instanceId, payload,
azure-functions Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-java.md
Last updated 06/14/2022 ms.devlang: java-+ # Create your first durable function in Java (Preview)
Add a `host.json` file to your project directory. It should look similar to the
It's important to note that only the Azure Functions v4 _Preview_ bundle currently has the necessary support for Durable Functions for Java. > [!WARNING]
-> Be aware that the Azure Functions v4 preview bundles do not yet support Cosmos DB bindings for Java function apps. For more information, see [Azure Cosmos DB trigger and bindings reference documentation](../functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4&pivots=programming-language-java#install-bundle).
+> Be aware that the Azure Functions v4 preview bundles do not yet support Azure Cosmos DB bindings for Java function apps. For more information, see [Azure Cosmos DB trigger and bindings reference documentation](../functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4&pivots=programming-language-java#install-bundle).
Add a `local.settings.json` file to your project directory. You should have the connection string of your Azure Storage account configured for `AzureWebJobsStorage`:
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
Last updated 08/17/2021
zone_pivot_groups: programming-languages-set-functions-temp ms.devlang: csharp, javascript-+ # Connect Azure Functions to Azure Cosmos DB using Visual Studio Code
Before you get started, make sure to install the [Azure Databases extension](htt
|Prompt| Selection|
|--|--|
- |**Select an Azure Database Server**| Choose `Core (SQL)` to create a document database that you can query by using a SQL syntax. [Learn more about the Azure Cosmos DB SQL API](../cosmos-db/introduction.md). |
+ |**Select an Azure Database Server**| Choose **Azure Cosmos DB for NoSQL** to create a document database that you can query by using SQL syntax. [Learn more about Azure Cosmos DB for NoSQL](../cosmos-db/introduction.md). |
|**Account name**| Enter a unique name to identify your Azure Cosmos DB account. The account name can use only lowercase letters, numbers, and hyphens (-), and must be between 3 and 31 characters long.|
|**Select a capacity model**| Select **Serverless** to create an account in [serverless](../cosmos-db/serverless.md) mode. |
|**Select a resource group for new resources**| Choose the resource group where you created your function app in the [previous article](./create-first-function-vs-code-csharp.md). |
To create a binding, right-click (Ctrl+click on macOS) the `function.json` file
| **Select binding direction** | `out` | The binding is an output binding. |
| **Select binding with direction "out"** | `Azure Cosmos DB` | The binding is an Azure Cosmos DB binding. |
| **The name used to identify this binding in your code** | `outputDocument` | Name that identifies the binding parameter referenced in your code. |
-| **The Cosmos DB database where data will be written** | `my-database` | The name of the Azure Cosmos DB database containing the target container. |
+| **The Azure Cosmos DB database where data will be written** | `my-database` | The name of the Azure Cosmos DB database containing the target container. |
| **Database collection where data will be written** | `my-container` | The name of the Azure Cosmos DB container where the JSON documents will be written. |
-| **If true, creates the Cosmos DB database and collection** | `false` | The target database and container already exist. |
+| **If true, creates the Azure Cosmos DB database and collection** | `false` | The target database and container already exist. |
| **Select setting from "local.setting.json"** | `CosmosDbConnectionString` | The name of an application setting that contains the connection string for the Azure Cosmos DB account. | | **Partition key (optional)** | *leave blank* | Only required when the output binding creates the container. | | **Collection throughput (optional)** | *leave blank* | Only required when the output binding creates the container. |
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
description: Learn to use the Azure Cosmos DB input binding in Azure Functions.
Last updated 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers
namespace CosmosDBSamplesV2
### Queue trigger, look up ID from string
-The following example shows a Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads a single document and updates the document's text value.
+The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads a single document and updates the document's text value.
Here's the binding data in the *function.json* file:
public class DocByIdFromQueryString {
} ```
-In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBInput` annotation on function parameters whose value would come from Cosmos DB. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBInput` annotation on function parameters whose value would come from Azure Cosmos DB. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
<a id="http-trigger-look-up-id-from-query-stringpojo-parameter-java"></a>
This section contains the following examples that read a single document by spec
### Queue trigger, look up ID from JSON
-The following example shows a Cosmos DB input binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function reads a single document and updates the document's text value.
+The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function reads a single document and updates the document's text value.
Here's the binding data in the *function.json* file:
module.exports = async function (context, input) {
### Queue trigger, look up ID from JSON
-The following example demonstrates how to read and update a single Cosmos DB document. The document's unique identifier is provided through JSON value in a queue message.
+The following example demonstrates how to read and update a single Azure Cosmos DB document. The document's unique identifier is provided through JSON value in a queue message.
-The Cosmos DB input binding is listed first in the list of bindings found in the function's configuration file (_function.json_).
+The Azure Cosmos DB input binding is listed first in the list of bindings found in the function's configuration file (_function.json_).
<a name="queue-trigger-look-up-id-from-json-ps"></a>
Push-OutputBinding -Name InputDocumentOut -Value $Document
### HTTP trigger, look up ID from query string
-The following example demonstrates how to read and update a single Cosmos DB document from a web API. The document's unique identifier is provided through a querystring parameter from the HTTP request, as defined in the binding's `"Id": "{Query.Id}"` property.
+The following example demonstrates how to read and update a single Azure Cosmos DB document from a web API. The document's unique identifier is provided through a querystring parameter from the HTTP request, as defined in the binding's `"Id": "{Query.Id}"` property.
-The Cosmos DB input binding is listed first in the list of bindings found in the function's configuration file (_function.json_).
+The Azure Cosmos DB input binding is listed first in the list of bindings found in the function's configuration file (_function.json_).
```json {
The Cosmos DB input binding is listed first in the list of bindings found in the
} ```
-The the _run.ps1_ file has the PowerShell code which reads the incoming document and outputs changes.
+The _run.ps1_ file has the PowerShell code which reads the incoming document and outputs changes.
```powershell using namespace System.Net
if (-not $ToDoItem) {
### HTTP trigger, look up ID from route data
-The following example demonstrates how to read and update a single Cosmos DB document from a web API. The document's unique identifier is provided through a route parameter. The route parameter is defined in the HTTP request binding's `route` property and referenced in the Cosmos DB `"Id": "{Id}"` binding property.
+The following example demonstrates how to read and update a single Azure Cosmos DB document from a web API. The document's unique identifier is provided through a route parameter. The route parameter is defined in the HTTP request binding's `route` property and referenced in the Azure Cosmos DB `"Id": "{Id}"` binding property.
-The Cosmos DB input binding is listed first in the list of bindings found in the function's configuration file (_function.json_).
+The Azure Cosmos DB input binding is listed first in the list of bindings found in the function's configuration file (_function.json_).
```json {
The Cosmos DB input binding is listed first in the list of bindings found in the
} ```
-The the _run.ps1_ file has the PowerShell code which reads the incoming document and outputs changes.
+The _run.ps1_ file has the PowerShell code which reads the incoming document and outputs changes.
```powershell using namespace System.Net
if (-not $ToDoItem) {
### Queue trigger, get multiple docs, using SqlQuery
-The following example demonstrates how to read multiple Cosmos DB documents. The function's configuration file (_function.json_) defines the binding properties, which includes the `sqlQuery`. The SQL statement provided to the `sqlQuery` property selects the set of documents provided to the function.
+The following example demonstrates how to read multiple Azure Cosmos DB documents. The function's configuration file (_function.json_) defines the binding properties, which include the `sqlQuery`. The SQL statement provided to the `sqlQuery` property selects the set of documents provided to the function.
```json {
The following example demonstrates how to read multiple Cosmos DB documents. The
} ```
-The the _run1.ps_ file has the PowerShell code which reads the incoming documents.
+The _run1.ps1_ file has the PowerShell code which reads the incoming documents.
```powershell param($QueueItem, $Documents, $TriggerMetadata)
This section contains the following examples that read a single document by spec
### Queue trigger, look up ID from JSON
-The following example shows a Cosmos DB input binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function reads a single document and updates the document's text value.
+The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function reads a single document and updates the document's text value.
Here's the binding data in the *function.json* file:
Only JSON string inputs are currently supported.
<!--Any of the below pivots can be combined if the usage info is identical.--> ::: zone pivot="programming-language-java"
-From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), the [@CosmosDBInput](/java/api/com.microsoft.azure.functions.annotation.cosmosdbinput) annotation exposes Cosmos DB data to the function. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
+From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), the [@CosmosDBInput](/java/api/com.microsoft.azure.functions.annotation.cosmosdbinput) annotation exposes Azure Cosmos DB data to the function. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell" Updates are not made automatically upon function exit. Instead, use `context.bindings.<documentName>In` and `context.bindings.<documentName>Out` to make updates. See the [JavaScript example](#example) for more detail.
azure-functions Functions Bindings Cosmosdb V2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md
description: Learn to use the Azure Cosmos DB output binding in Azure Functions.
Last updated 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers
namespace CosmosDBSamplesV2
### Queue trigger, write one doc (v4 extension)
-Apps using Cosmos DB [extension version 4.x] or higher will have different attribute properties which are shown below. The following example shows a [C# function](functions-dotnet-class-library.md) that adds a document to a database, using data provided in message from Queue storage.
+Apps using [Azure Cosmos DB extension version 4.x](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4) or higher will have different attribute properties, which are shown below. The following example shows a [C# function](functions-dotnet-class-library.md) that adds a document to a database, using data provided in a message from Queue storage.
```cs using Microsoft.Azure.WebJobs;
public String cosmosDbQueryById(
#### HTTP trigger, save one document to database via return value
-The following example shows a Java function whose signature is annotated with ```@CosmosDBOutput``` and has return value of type ```String```. The JSON document returned by the function will be automatically written to the corresponding CosmosDB collection.
+The following example shows a Java function whose signature is annotated with `@CosmosDBOutput` and has return value of type `String`. The JSON document returned by the function will be automatically written to the corresponding Azure Cosmos DB collection.
```java @FunctionName("WriteOneDoc")
The following example shows a Java function whose signature is annotated with ``
### HTTP trigger, save one document to database via OutputBinding
-The following example shows a Java function that writes a document to CosmosDB via an ```OutputBinding<T>``` output parameter. In this example, the ```outputItem``` parameter needs to be annotated with ```@CosmosDBOutput```, not the function signature. Using ```OutputBinding<T>``` lets your function take advantage of the binding to write the document to CosmosDB while also allowing returning a different value to the function caller, such as a JSON or XML document.
+The following example shows a Java function that writes a document to Azure Cosmos DB via an `OutputBinding<T>` output parameter. In this example, the `outputItem` parameter needs to be annotated with `@CosmosDBOutput`, not the function signature. Using `OutputBinding<T>` lets your function take advantage of the binding to write the document to Azure Cosmos DB while also allowing returning a different value to the function caller, such as a JSON or XML document.
```java @FunctionName("WriteOneDocOutputBinding")
The following example shows a Java function that writes a document to CosmosDB v
### HTTP trigger, save multiple documents to database via OutputBinding
-The following example shows a Java function that writes multiple documents to CosmosDB via an ```OutputBinding<T>``` output parameter. In this example, the ```outputItem``` parameter is annotated with ```@CosmosDBOutput```, not the function signature. The output parameter, ```outputItem``` has a list of ```ToDoItem``` objects as its template parameter type. Using ```OutputBinding<T>``` lets your function take advantage of the binding to write the documents to CosmosDB while also allowing returning a different value to the function caller, such as a JSON or XML document.
+The following example shows a Java function that writes multiple documents to Azure Cosmos DB via an `OutputBinding<T>` output parameter. In this example, the `outputItem` parameter is annotated with `@CosmosDBOutput`, not the function signature. The output parameter, `outputItem` has a list of `ToDoItem` objects as its template parameter type. Using `OutputBinding<T>` lets your function take advantage of the binding to write the documents to Azure Cosmos DB while also allowing returning a different value to the function caller, such as a JSON or XML document.
```java @FunctionName("WriteMultipleDocsOutputBinding")
The following example shows a Java function that writes multiple documents to Co
} ```
-In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBOutput` annotation on parameters that will be written to Cosmos DB. The annotation parameter type should be ```OutputBinding<T>```, where T is either a native Java type or a POJO.
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBOutput` annotation on parameters that will be written to Azure Cosmos DB. The annotation parameter type should be `OutputBinding<T>`, where `T` is either a native Java type or a POJO.
::: zone-end ::: zone pivot="programming-language-javascript"
For bulk insert form the objects first and then run the stringify function. Here
::: zone-end ::: zone pivot="programming-language-powershell"
-The following example show how to write data to Cosmos DB using an output binding. The binding is declared in the function's configuration file (_functions.json_), and take data from a queue message and writes out to a Cosmos DB document.
+The following example shows how to write data to Azure Cosmos DB using an output binding. The binding is declared in the function's configuration file (_function.json_). It takes data from a queue message and writes it out to an Azure Cosmos DB document.
```json {
By default, when you write to the output parameter in your function, a document
| Binding | Reference |
|---|---|
-| CosmosDB | [CosmosDB Error Codes](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) |
+| Azure Cosmos DB | [HTTP status codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) |
## Next steps - [Run a function when an Azure Cosmos DB document is created or modified (Trigger)](./functions-bindings-cosmosdb-v2-trigger.md) - [Read an Azure Cosmos DB document (Input binding)](./functions-bindings-cosmosdb-v2-input.md)-
-[extension version 4.x]: ./functions-bindings-cosmosdb-v2.md?tabs=extensionv4
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
description: Learn to use the Azure Cosmos DB trigger in Azure Functions.
Last updated 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers
namespace CosmosDBSamplesV2
# [Extension 4.x+ (preview)](#tab/extensionv4/in-process)
-Apps using Cosmos DB [extension version 4.x](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4) or higher will have different attribute properties, which are shown below. This example refers to a simple `ToDoItem` type.
+Apps using [Azure Cosmos DB extension version 4.x](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4) or higher will have different attribute properties, which are shown below. This example refers to a simple `ToDoItem` type.
```cs namespace CosmosDBSamplesV2
The following code defines a `MyDocument` type:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/CosmosDB/CosmosDBFunction.cs" range="37-46":::
-This document type is the type of the [`IReadOnlyList<T>`](/dotnet/api/system.collections.generic.ireadonlylist-1) used as the Cosmos DB trigger binding parameter in the following example:
+This document type is the type of the [`IReadOnlyList<T>`](/dotnet/api/system.collections.generic.ireadonlylist-1) used as the Azure Cosmos DB trigger binding parameter in the following example:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/CosmosDB/CosmosDBFunction.cs" range="4-35":::
Example pending.
# [Functions 2.x+](#tab/functionsv2/csharp-script)
-The following example shows a Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Cosmos DB records are added or modified.
+The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are added or modified.
Here's the binding data in the *function.json* file:
Here's the C# script code:
# [Extension 4.x+ (preview)](#tab/extensionv4/csharp-script)
-The following example shows a Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Cosmos DB records are added or modified.
+The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are added or modified.
Here's the binding data in the *function.json* file:
This function is invoked when there are inserts or updates in the specified data
-In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBTrigger` annotation on parameters whose value would come from Cosmos DB. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBTrigger` annotation on parameters whose value would come from Azure Cosmos DB. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
::: zone-end ::: zone pivot="programming-language-javascript"
-The following example shows a Cosmos DB trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function writes log messages when Cosmos DB records are added or modified.
+The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are added or modified.
Here's the binding data in the *function.json* file:
Here's the JavaScript code:
::: zone-end ::: zone pivot="programming-language-powershell"
-The following example shows how to run a function as data changes in Cosmos DB.
+The following example shows how to run a function as data changes in Azure Cosmos DB.
[!INCLUDE [functions-cosmosdb-trigger-attributes](../../includes/functions-cosmosdb-trigger-attributes.md)]
Write-Host "First document Id modified : $($Documents[0].id)"
::: zone-end ::: zone pivot="programming-language-python"
-The following example shows a Cosmos DB trigger binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function writes log messages when Cosmos DB records are modified.
+The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are modified.
Here's the binding data in the *function.json* file:
Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotn
# [Functions 2.x+](#tab/functionsv2)
-From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBInput` annotation on parameters that read data from Cosmos DB. The annotation supports the following properties:
+From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBInput` annotation on parameters that read data from Azure Cosmos DB. The annotation supports the following properties:
+ [name](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.name) + [connectionStringSetting](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.connectionstringsetting)
The trigger requires a second collection that it uses to store _leases_ over the
::: zone pivot="programming-language-csharp" >[!IMPORTANT]
-> If multiple functions are configured to use a Cosmos DB trigger for the same collection, each of the functions should use a dedicated lease collection or specify a different `LeaseCollectionPrefix` for each function. Otherwise, only one of the functions is triggered. For information about the prefix, see the [Attributes section](#attributes).
+> If multiple functions are configured to use an Azure Cosmos DB trigger for the same collection, each of the functions should use a dedicated lease collection or specify a different `LeaseCollectionPrefix` for each function. Otherwise, only one of the functions is triggered. For information about the prefix, see the [Attributes section](#attributes).
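As a rough in-process C# illustration of this guidance (extension 3.x attribute names; the database, collection, and connection names are placeholders), two functions can share one lease collection by using different prefixes:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class LeasePrefixSketch
{
    // Both functions monitor the same container and share the "leases" collection,
    // but each uses its own prefix so both receive change feed events.
    [FunctionName("ProcessOrdersA")]
    public static void RunA(
        [CosmosDBTrigger("SalesDb", "Orders",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases",
            LeaseCollectionPrefix = "functionA")] IReadOnlyList<Document> input,
        ILogger log)
        => log.LogInformation("Function A saw {Count} changes", input.Count);

    [FunctionName("ProcessOrdersB")]
    public static void RunB(
        [CosmosDBTrigger("SalesDb", "Orders",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases",
            LeaseCollectionPrefix = "functionB")] IReadOnlyList<Document> input,
        ILogger log)
        => log.LogInformation("Function B saw {Count} changes", input.Count);
}
```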
::: zone-end ::: zone pivot="programming-language-java" >[!IMPORTANT]
-> If multiple functions are configured to use a Cosmos DB trigger for the same collection, each of the functions should use a dedicated lease collection or specify a different `leaseCollectionPrefix` for each function. Otherwise, only one of the functions is triggered. For information about the prefix, see the [Annotations section](#annotations).
+> If multiple functions are configured to use an Azure Cosmos DB trigger for the same collection, each of the functions should use a dedicated lease collection or specify a different `leaseCollectionPrefix` for each function. Otherwise, only one of the functions is triggered. For information about the prefix, see the [Annotations section](#annotations).
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" >[!IMPORTANT]
-> If multiple functions are configured to use a Cosmos DB trigger for the same collection, each of the functions should use a dedicated lease collection or specify a different `leaseCollectionPrefix` for each function. Otherwise, only one of the functions will be triggered. For information about the prefix, see the [Configuration section](#configuration).
+> If multiple functions are configured to use an Azure Cosmos DB trigger for the same collection, each of the functions should use a dedicated lease collection or specify a different `leaseCollectionPrefix` for each function. Otherwise, only one of the functions will be triggered. For information about the prefix, see the [Configuration section](#configuration).
::: zone-end The trigger doesn't indicate whether a document was updated or inserted; it just provides the document itself. If you need to handle updates and inserts differently, you can do so by implementing your own timestamp fields for insertion and update.
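One hypothetical way to follow that advice is for the writing application to stamp its own timestamps on each document so the function can compare them; the field names below are illustrative only:

```csharp
using System;
using Newtonsoft.Json;

public class OrderDocument
{
    [JsonProperty("id")]
    public string Id { get; set; }

    // Application-maintained timestamps (illustrative field names):
    // equal values mean the document was just inserted; a later
    // updatedAt means it was modified after creation.
    [JsonProperty("createdAt")]
    public DateTime CreatedAt { get; set; }

    [JsonProperty("updatedAt")]
    public DateTime UpdatedAt { get; set; }
}

public static class ChangeKind
{
    public static bool IsInsert(OrderDocument doc) => doc.UpdatedAt == doc.CreatedAt;
}
```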
azure-functions Functions Bindings Cosmosdb V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2.md
Title: Azure Cosmos DB bindings for Functions 2.x and higher description: Understand how to use Azure Cosmos DB triggers and bindings in Azure Functions. + Last updated 03/04/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers
This set of articles explains how to work with [Azure Cosmos DB](../cosmos-db/se
> [!NOTE] > This reference is for [Azure Functions version 2.x and higher](functions-versions.md). For information about how to use these bindings in Functions 1.x, see [Azure Cosmos DB bindings for Azure Functions 1.x](functions-bindings-cosmosdb.md). >
-> This binding was originally named DocumentDB. In Functions version 2.x and higher, the trigger, bindings, and package are all named Cosmos DB.
+> This binding was originally named DocumentDB. In Azure Functions version 2.x and higher, the trigger, bindings, and package are all named Azure Cosmos DB.
## Supported APIs
Working with the trigger and bindings requires that you reference the appropriat
# [Extension 4.x+ (preview)](#tab/extensionv4/in-process)
-This preview version of the Cosmos DB bindings extension introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md).
+This preview version of the Azure Cosmos DB bindings extension introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md).
This version also changes the types that you can bind to, replacing the types from the v2 SDK `Microsoft.Azure.DocumentDB` with newer types from the v3 SDK [Microsoft.Azure.Cosmos](../cosmos-db/sql/sql-api-sdk-dotnet-standard.md). Learn more about how these new types are different and how to migrate to them from the [SDK migration guide](../cosmos-db/sql/migrate-dotnet-v3.md), [trigger](./functions-bindings-cosmosdb-v2-trigger.md), [input binding](./functions-bindings-cosmosdb-v2-input.md), and [output binding](./functions-bindings-cosmosdb-v2-output.md) examples.
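As a rough illustration of that type change, a 4.x in-process trigger might bind the change feed batch directly to your own POCO rather than to the v2 SDK's `Document` type. This is only a sketch: the database, container, and connection names are placeholders, and the attribute property names (`containerName`, `Connection`, `LeaseContainerName`) assume the 4.x attribute shape:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class ToDoItem
{
    public string Id { get; set; }
    public string Description { get; set; }
}

public static class Extension4TriggerSketch
{
    // With extension 4.x, the v2 SDK's Document type is no longer used;
    // bind the change feed batch to your own POCO instead.
    [FunctionName("Extension4TriggerSketch")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "ToDoItems",
            containerName: "Items",
            Connection = "CosmosDBConnection",
            LeaseContainerName = "leases")] IReadOnlyList<ToDoItem> changes,
        ILogger log)
    {
        log.LogInformation("Received {Count} changed items", changes.Count);
    }
}
```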
Add the extension to your project by installing the [NuGet package](https://www.
# [Extension 4.x+ (preview)](#tab/extensionv4/isolated-process)
-This preview version of the Cosmos DB bindings extension introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md).
+This preview version of the Azure Cosmos DB bindings extension introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md).
Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB/), version 4.x. # [Functions 2.x+](#tab/functionsv2/csharp-script)
-You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
# [Extension 4.x+ (preview)](#tab/extensionv4/csharp-script)
This extension version is available from the preview extension bundle v4 by addi
::: zone pivot="programming-language-javascript,programming-language-python,programming-language-java,programming-language-powershell"
-## Install bundle
+## Install bundle
-The Cosmos DB is part of an [extension bundle], which is specified in your host.json project file. You may need to modify this bundle to change the version of the binding, or if bundles aren't already installed. To learn more, see [extension bundle].
+The Azure Cosmos DB bindings extension is part of an [extension bundle], which is specified in your *host.json* project file. You may need to modify this bundle to change the binding version, or to add bundles if they aren't already installed. To learn more, see [extension bundle].
# [Bundle v2.x and v3.x](#tab/functionsv2)
You can install this version of the extension in your function app by registerin
# [Bundle v4.x (Preview)](#tab/extensionv4)
-This version of the bundle contains a preview version of the Cosmos DB bindings extension (version 4.x) that introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md).
+This version of the bundle contains a preview version of the Azure Cosmos DB bindings extension (version 4.x) that introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md).
::: zone-end ::: zone pivot="programming-language-java"
To learn more, see [Update your extensions].
| Binding | Reference | |||
-| CosmosDB | [CosmosDB Error Codes](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) |
+| Azure Cosmos DB | [HTTP status codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) |
<a name="host-json"></a>
azure-functions Functions Bindings Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb.md
description: Understand how to use Azure Cosmos DB triggers and bindings in Azur
Last updated 11/21/2017 ms.devlang: csharp, javascript-+ # Azure Cosmos DB bindings for Azure Functions 1.x
This article explains how to work with [Azure Cosmos DB](../cosmos-db/serverless
> [!NOTE] > This article is for Azure Functions 1.x. For information about how to use these bindings in Functions 2.x and higher, see [Azure Cosmos DB bindings for Azure Functions 2.x](functions-bindings-cosmosdb-v2.md). >
->This binding was originally named DocumentDB. In Functions version 1.x, only the trigger was renamed Cosmos DB; the input binding, output binding, and NuGet package retain the DocumentDB name.
+>This binding was originally named DocumentDB. In Azure Functions version 1.x, only the trigger was renamed Azure Cosmos DB; the input binding, output binding, and NuGet package retain the DocumentDB name.
> [!NOTE]
-> Azure Cosmos DB bindings are only supported for use with the SQL API. For all other Azure Cosmos DB APIs, you should access the database from your function by using the static client for your API, including [Azure Cosmos DB's API for MongoDB](../cosmos-db/mongodb-introduction.md), [Cassandra API](../cosmos-db/cassandra-introduction.md), [Gremlin API](../cosmos-db/graph-introduction.md), and [Table API](../cosmos-db/table-introduction.md).
+> Azure Cosmos DB bindings are only supported for use with the SQL API. For all other Azure Cosmos DB APIs, you should access the database from your function by using the static client for your API, including [Azure Cosmos DB for MongoDB](../cosmos-db/mongodb-introduction.md), [Azure Cosmos DB for Apache Cassandra](../cosmos-db/cassandra-introduction.md), [Azure Cosmos DB for Apache Gremlin](../cosmos-db/graph-introduction.md), and [Azure Cosmos DB for Table](../cosmos-db/table-introduction.md).
## Packages - Functions 1.x
namespace CosmosDBSamplesV1
# [C# Script](#tab/csharp-script)
-The following example shows a Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Cosmos DB records are modified.
+The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are modified.
Here's the binding data in the *function.json* file:
Here's the C# script code:
# [JavaScript](#tab/javascript)
-The following example shows a Cosmos DB trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function writes log messages when Cosmos DB records are modified.
+The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are modified.
Here's the binding data in the *function.json* file:
The following table explains the binding configuration properties that you set i
The trigger requires a second collection that it uses to store _leases_ over the partitions. Both the collection being monitored and the collection that contains the leases must be available for the trigger to work. >[!IMPORTANT]
-> If multiple functions are configured to use a Cosmos DB trigger for the same collection, each of the functions should use a dedicated lease collection or specify a different `LeaseCollectionPrefix` for each function. Otherwise, only one of the functions will be triggered. For information about the prefix, see the [Configuration section](#triggerconfiguration).
+> If multiple functions are configured to use an Azure Cosmos DB trigger for the same collection, each of the functions should use a dedicated lease collection or specify a different `LeaseCollectionPrefix` for each function. Otherwise, only one of the functions will be triggered. For information about the prefix, see the [Configuration section](#triggerconfiguration).
The trigger doesn't indicate whether a document was updated or inserted; it just provides the document itself. If you need to handle updates and inserts differently, you can do so by implementing your own timestamp fields for insertion and update.
namespace CosmosDBSamplesV1
### Queue trigger, look up ID from string
-The following example shows a Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads a single document and updates the document's text value.
+The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads a single document and updates the document's text value.
Here's the binding data in the *function.json* file:
This section contains the following examples:
### Queue trigger, look up ID from JSON
-The following example shows a Cosmos DB input binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function reads a single document and updates the document's text value.
+The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function reads a single document and updates the document's text value.
Here's the binding data in the *function.json* file:
By default, when you write to the output parameter in your function, a document
| Binding | Reference | |||
-| CosmosDB | [CosmosDB Error Codes](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) |
+| Azure Cosmos DB | [HTTP status codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) |
## Next steps
-* [Learn more about serverless database computing with Cosmos DB](../cosmos-db/serverless-computing-database.md)
-* [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md)
+* [Learn more about serverless database computing with Azure Cosmos DB](../cosmos-db/serverless-computing-database.md)
+* [Learn more about Azure Functions triggers and bindings](functions-triggers-bindings.md)
<! > [!div class="nextstepaction"]
-> [Go to a quickstart that uses a Cosmos DB trigger](functions-create-cosmos-db-triggered-function.md)
+> [Go to a quickstart that uses an Azure Cosmos DB trigger](functions-create-cosmos-db-triggered-function.md)
>
azure-functions Functions Bindings Event Hubs Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md
Title: Azure Event Hubs output binding for Azure Functions
description: Learn to write messages to Azure Event Hubs streams using Azure Functions. ms.assetid: daf81798-7acc-419a-bc32-b5a41c6db56b + Last updated 03/04/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers
public String sendTime(
} ```
-In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@EventHubOutput` annotation on parameters whose value would be published to Event Hub. The parameter should be of type `OutputBinding<T>` , where T is a POJO or any native Java type.
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@EventHubOutput` annotation on parameters whose value would be published to Event Hub. The parameter should be of type `OutputBinding<T>`, where `T` is a POJO or any native Java type.
::: zone-end ::: zone pivot="programming-language-csharp"
azure-functions Functions Bindings Expressions Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-expressions-patterns.md
description: Learn to create different Azure Functions binding expressions based
ms.devlang: csharp-+ Last updated 02/18/2019
module.exports = async function (context, info) {
### Dot notation
-If some of the properties in your JSON payload are objects with properties, you can refer to those directly by using dot (`.`) notation. This notation doesn't work for [Cosmos DB](./functions-bindings-cosmosdb-v2.md) or [Table storage](./functions-bindings-storage-table-output.md) bindings.
+If some of the properties in your JSON payload are objects with properties, you can refer to those directly by using dot (`.`) notation. This notation doesn't work for [Azure Cosmos DB](./functions-bindings-cosmosdb-v2.md) or [Table storage](./functions-bindings-storage-table-output.md) bindings.
For example, suppose your JSON looks like this:
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
ms.assetid: daedacf0-6546-4355-a65c-50873e74f66b
Last updated 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers
public String pushToQueue(
} ```
- In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@QueueOutput` annotation on function parameters whose value would be written to a Service Bus queue. The parameter type should be `OutputBinding<T>`, where T is any native Java type of a POJO.
+ In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@ServiceBusQueueOutput` annotation on function parameters whose value would be written to a Service Bus queue. The parameter type should be `OutputBinding<T>`, where `T` is any native Java type or a POJO.
Java functions can also write to a Service Bus topic. The following example uses the `@ServiceBusTopicOutput` annotation to describe the configuration for the output binding.
When the parameter value is null when the function exits, Functions doesn't crea
::: zone-end
-In Azure Functions 1.x, the runtime creates the queue if it doesn't exist and you have set `accessRights` to `manage`. In Functions version 2.x and higher, the queue or topic must already exist; if you specify a queue or topic that doesn't exist, the function fails.
+In Azure Functions 1.x, the runtime creates the queue if it doesn't exist and you have set `accessRights` to `manage`. In Azure Functions version 2.x and higher, the queue or topic must already exist; if you specify a queue or topic that doesn't exist, the function fails.
<!--Any of the below pivots can be combined if the usage info is identical.--> ::: zone pivot="programming-language-java"
For a complete example, see [the examples section](#example).
## Next steps -- [Run a function when a Service Bus queue or topic message is created (Trigger)](./functions-bindings-service-bus-trigger.md)
+- [Run a function when a Service Bus queue or topic message is created (Trigger)](./functions-bindings-service-bus-trigger.md)
azure-functions Functions Bindings Storage Blob Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md
description: Learn how to provide Azure Blob storage output binding data to an A
Last updated 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers
This section contains the following examples:
} ```
- In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@BlobOutput` annotation on function parameters whose value would be written to an object in blob storage. The parameter type should be `OutputBinding<T>`, where T is any native Java type or a POJO.
+ In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@BlobOutput` annotation on function parameters whose value would be written to an object in blob storage. The parameter type should be `OutputBinding<T>`, where `T` is any native Java type or a POJO.
::: zone-end ::: zone pivot="programming-language-javascript"
azure-functions Functions Bindings Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md
Add the extension to your project by installing the [Microsoft.Azure.Functions.W
Using the .NET CLI: ```dotnetcli
-dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs--version 5.0.0
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs --version 5.0.0
``` [!INCLUDE [functions-bindings-storage-extension-v5-isolated-worker-tables-note](../../includes/functions-bindings-storage-extension-v5-isolated-worker-tables-note.md)]
azure-functions Functions Bindings Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue.md
Functions 1.x apps automatically have a reference to the extension.
|batchSize|16|The number of queue messages that the Functions runtime retrieves simultaneously and processes in parallel. When the number being processed gets down to the `newBatchThreshold`, the runtime gets another batch and starts processing those messages. So the maximum number of concurrent messages being processed per function is `batchSize` plus `newBatchThreshold`. This limit applies separately to each queue-triggered function. <br><br>If you want to avoid parallel execution for messages received on one queue, you can set `batchSize` to 1. However, this setting eliminates concurrency as long as your function app runs only on a single virtual machine (VM). If the function app scales out to multiple VMs, each VM could run one instance of each queue-triggered function.<br><br>The maximum `batchSize` is 32. | |maxDequeueCount|5|The number of times to try processing a message before moving it to the poison queue.| |newBatchThreshold|N*batchSize/2|Whenever the number of messages being processed concurrently gets down to this number, the runtime retrieves another batch.<br><br>`N` represents the number of vCPUs available when running on App Service or Premium Plans. Its value is `1` for the Consumption Plan.|
-|messageEncoding|base64| This setting is only available in [extension version 5.0.0 and higher](#storage-extension-5x-and-higher). It represents the encoding format for messages. Valid values are `base64` and `none`.|
+|messageEncoding|base64| This setting is only available in [extension bundle version 5.0.0 and higher](#storage-extension-5x-and-higher). It represents the encoding format for messages. Valid values are `base64` and `none`.|
## Next steps
azure-functions Functions Bindings Storage Table Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md
description: Understand how to use Azure Tables input bindings in Azure Function
Last updated 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers # Azure Tables input bindings for Azure Functions
-Use the Azure Tables input binding to read a table in an Azure Storage or Cosmos DB account.
+Use the Azure Tables input binding to read a table in an Azure Storage or Azure Cosmos DB account.
For information on setup and configuration details, see the [overview](./functions-bindings-storage-table.md).
For more information about how to use CloudTable, see [Get started with Azure Ta
If you try to bind to `CloudTable` and get an error message, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-table.md#azure-storage-sdk-version-in-functions-1x).
-# [Table API extension](#tab/table-api/in-process)
+# [Azure Cosmos DB for Table extension](#tab/table-api/in-process)
The following example shows a [C# function](./functions-dotnet-class-library.md) that reads a single table row. For every message sent to the queue, the function will be triggered.
public static void Run([QueueTrigger("myqueue", Connection = "AzureWebJobsStorag
``` The `Filter` and `Take` properties are used to limit the number of entities returned.
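Building on that, a hedged sketch of a multi-entity read that uses `Filter` and `Take` (the queue, table, filter expression, and connection names are placeholders) could look like this:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class TaskEntity
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Text { get; set; }
}

public static class FilterTakeSketch
{
    // Sketch: bind at most 20 rows that match the filter; names are placeholders.
    [FunctionName("FilterTakeSketch")]
    public static void Run(
        [QueueTrigger("table-read-requests")] string message,
        [Table("Tasks", Filter = "PartitionKey eq 'open'", Take = 20,
            Connection = "AzureWebJobsStorage")] IEnumerable<TaskEntity> openTasks,
        ILogger log)
    {
        foreach (TaskEntity task in openTasks)
        {
            log.LogInformation("Task {RowKey}: {Text}", task.RowKey, task.Text);
        }
    }
}
```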
-# [Table API extension (preview)](#tab/table-api/isolated-process)
+# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
-The Table API extension does not currently support isolated process. You will instead need to use the combined Azure Storage extension.
+The Azure Cosmos DB for Table extension does not currently support the isolated worker process model. You will instead need to use the combined Azure Storage extension.
# [Functions 1.x](#tab/functionsv1/isolated-process)
For more information about how to use CloudTable, see [Get started with Azure Ta
If you try to bind to `CloudTable` and get an error message, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-table.md#azure-storage-sdk-version-in-functions-1x).
-# [Table API extension (preview)](#tab/table-api/csharp-script)
+# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/csharp-script)
-Version 3.x of the extension bundle doesn't currently include the Table API bindings. For now, you need to instead use version 2.x of the extension bundle, which uses the combined Azure Storage extension.
+Version 3.x of the extension bundle doesn't currently include the Azure Cosmos DB for Table bindings. For now, you need to instead use version 2.x of the extension bundle, which uses the combined Azure Storage extension.
# [Functions 1.x](#tab/functionsv1/csharp-script)
To return a specific entity by key, use a binding parameter that derives from [T
To execute queries that return multiple entities, bind to a [CloudTable] object. You can then use this object to create and execute queries against the bound table. Note that [CloudTable] and related APIs belong to the [Microsoft.Azure.Cosmos.Table](/dotnet/api/microsoft.azure.cosmos.table) namespace.
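A minimal sketch of that pattern, assuming a queue-triggered in-process function and placeholder queue, table, partition, and connection names, might look like this:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos.Table;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class CloudTableQuerySketch
{
    // Sketch: query all rows in one partition of the bound table.
    [FunctionName("CloudTableQuerySketch")]
    public static async Task Run(
        [QueueTrigger("table-query-requests")] string message,
        [Table("Tasks", Connection = "AzureWebJobsStorage")] CloudTable table,
        ILogger log)
    {
        var query = new TableQuery<DynamicTableEntity>().Where(
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "Tasks"));

        TableContinuationToken token = null;
        do
        {
            TableQuerySegment<DynamicTableEntity> segment =
                await table.ExecuteQuerySegmentedAsync(query, token);
            token = segment.ContinuationToken;

            foreach (DynamicTableEntity entity in segment.Results)
            {
                log.LogInformation("Row {RowKey}", entity.RowKey);
            }
        } while (token != null);
    }
}
```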
-# [Table API extension](#tab/table-api/in-process)
+# [Azure Cosmos DB for Table extension](#tab/table-api/in-process)
To return a specific entity by key, use a binding parameter that derives from [TableEntity](/dotnet/api/azure.data.tables.tableentity).
To return a specific entity by key, use a plain-old CLR object (POCO). The speci
When returning multiple entities as an [`IEnumerable<T>`], you can instead use `Take` and `Filter` properties to restrict the result set.
-# [Table API extension (preview)](#tab/table-api/isolated-process)
+# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
-The Table API extension does not currently support isolated process. You will instead need to use the combined Azure Storage extension.
+The Azure Cosmos DB for Table extension does not currently support the isolated worker process model. You will instead need to use the combined Azure Storage extension.
# [Functions 1.x](#tab/functionsv1/isolated-process)
To return a specific entity by key, use a binding parameter that derives from [T
To execute queries that return multiple entities, bind to a [CloudTable] object. You can then use this object to create and execute queries against the bound table. Note that [CloudTable] and related APIs belong to the [Microsoft.Azure.Cosmos.Table](/dotnet/api/microsoft.azure.cosmos.table) namespace.
-# [Table API extension (preview)](#tab/table-api/csharp-script)
+# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/csharp-script)
-Version 3.x of the extension bundle doesn't currently include the Table API bindings. For now, you need to instead use version 2.x of the extension bundle, which uses the combined Azure Storage extension.
+Version 3.x of the extension bundle doesn't currently include the Azure Cosmos DB for Table bindings. For now, you need to instead use version 2.x of the extension bundle, which uses the combined Azure Storage extension.
# [Functions 1.x](#tab/functionsv1/csharp-script)
azure-functions Functions Bindings Storage Table Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-output.md
description: Understand how to use Azure Tables output bindings in Azure Functio
Last updated 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers # Azure Tables output bindings for Azure Functions
-Use an Azure Tables output binding to write entities to a table in an Azure Storage or Cosmos DB account.
+Use an Azure Tables output binding to write entities to a table in an Azure Storage or Azure Cosmos DB account.
For information on setup and configuration details, see the [overview](./functions-bindings-storage-table.md)
The following types are supported for `out` parameters and return types:
You can also bind to `CloudTable` [from the Storage SDK](/dotnet/api/microsoft.azure.cosmos.table.cloudtable) as a method parameter. You can then use that object to write to the table.
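For instance, a hedged sketch of writing through a bound `CloudTable` (the queue, table, and connection names are placeholders) might be:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos.Table;
using Microsoft.Azure.WebJobs;

public static class CloudTableWriteSketch
{
    // Sketch: insert one entity into the bound table; names are placeholders.
    [FunctionName("CloudTableWriteSketch")]
    public static async Task Run(
        [QueueTrigger("new-tasks")] string taskText,
        [Table("Tasks", Connection = "AzureWebJobsStorage")] CloudTable table)
    {
        var entity = new DynamicTableEntity("Tasks", Guid.NewGuid().ToString());
        entity.Properties["Text"] = EntityProperty.GeneratePropertyForString(taskText);

        await table.ExecuteAsync(TableOperation.Insert(entity));
    }
}
```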
-# [Table API extension](#tab/table-api/in-process)
+# [Azure Cosmos DB for Table extension](#tab/table-api/in-process)
The following types are supported for `out` parameters and return types:
You can also bind to `CloudTable` [from the Storage SDK](/dotnet/api/microsoft.a
Return a plain-old CLR object (POCO) with properties that can be mapped to the table entity.
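In the isolated worker model, that could look roughly like the following minimal sketch, assuming the worker Tables extension's `TableOutput` attribute and placeholder queue, table, and connection names:

```csharp
using System;
using Microsoft.Azure.Functions.Worker;

public class TaskRow
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Text { get; set; }
}

public class PocoOutputSketch
{
    // Sketch: the returned POCO's PartitionKey/RowKey map to the table entity.
    [Function("PocoOutputSketch")]
    [TableOutput("Tasks", Connection = "AzureWebJobsStorage")]
    public TaskRow Run([QueueTrigger("new-tasks")] string taskText)
    {
        return new TaskRow
        {
            PartitionKey = "Tasks",
            RowKey = Guid.NewGuid().ToString(),
            Text = taskText
        };
    }
}
```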
-# [Table API extension (preview)](#tab/table-api/isolated-process)
+# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
-The Table API extension does not currently support isolated process. You will instead need to use the combined Azure Storage extension.
+The Azure Cosmos DB for Table extension does not currently support the isolated worker process model. You will instead need to use the combined Azure Storage extension.
# [Functions 1.x](#tab/functionsv1/isolated-process)
The following types are supported for `out` parameters and return types:
You can also bind to `CloudTable` [from the Storage SDK](/dotnet/api/microsoft.azure.cosmos.table.cloudtable) as a method parameter. You can then use that object to write to the table.
-# [Table API extension (preview)](#tab/table-api/csharp-script)
+# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/csharp-script)
-Version 3.x of the extension bundle doesn't currently include the Table API bindings. For now, you need to instead use version 2.x of the extension bundle, which uses the combined Azure Storage extension.
+Version 3.x of the extension bundle doesn't currently include the Azure Cosmos DB for Table bindings. For now, you need to instead use version 2.x of the extension bundle, which uses the combined Azure Storage extension.
# [Functions 1.x](#tab/functionsv1/csharp-script)
azure-functions Functions Bindings Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md
Title: Azure Tables bindings for Azure Functions
description: Understand how to use Azure Tables bindings in Azure Functions. Last updated 03/04/2022-+ zone_pivot_groups: programming-languages-set-functions-lang-workers # Azure Tables bindings for Azure Functions
-Azure Functions integrates with [Azure Tables](../cosmos-db/table/introduction.md) via [triggers and bindings](./functions-triggers-bindings.md). Integrating with Azure Tables allows you to build functions that read and write data using the Tables API for [Azure Storage](../storage/index.yml) and [Cosmos DB](../cosmos-db/introduction.md).
+Azure Functions integrates with [Azure Tables](../cosmos-db/table/introduction.md) via [triggers and bindings](./functions-triggers-bindings.md). Integrating with Azure Tables allows you to build functions that read and write data using the Tables API for [Azure Storage](../storage/index.yml) and [Azure Cosmos DB](../cosmos-db/introduction.md).
> [!NOTE]
-> The Table bindings have historically only supported Azure Storage. Support for Cosmos DB is currently in preview. See [Table API extension (preview)](#table-api-extension).
+> The Table bindings have historically only supported Azure Storage. Support for Azure Cosmos DB is currently in preview. See [Azure Cosmos DB for Table extension (preview)](#table-api-extension).
| Action | Type | |||
The process for installing the extension varies depending on the extension versi
<a name="storage-extension"></a> <a name="table-api-extension"></a>
-# [Table API extension](#tab/table-api/in-process)
+# [Azure Cosmos DB for Table extension](#tab/table-api/in-process)
[!INCLUDE [functions-bindings-supports-identity-connections-note](../../includes/functions-bindings-supports-identity-connections-note.md)]
-This version allows you to bind to types from [Azure.Data.Tables](/dotnet/api/azure.data.tables). It also introduces the ability to use Cosmos DB Table APIs.
+This version allows you to bind to types from [`Azure.Data.Tables`](/dotnet/api/azure.data.tables). It also introduces the ability to use Azure Cosmos DB for Table.
This extension is available by installing the [Microsoft.Azure.WebJobs.Extensions.Tables NuGet package][table-api-package] into a project using version 5.x or higher of the extensions for [blobs](./functions-bindings-storage-blob.md?tabs=in-process%2Cextensionv5) and [queues](./functions-bindings-storage-queue.md?tabs=in-process%2Cextensionv5).
dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0
Working with the bindings requires that you reference the appropriate NuGet package. Tables are included in a combined package for Azure Storage. Install the [Microsoft.Azure.WebJobs.Extensions.Storage NuGet package][storage-4.x], version 3.x or 4.x. > [!NOTE]
-> Tables have been moved out of this package starting in its 5.x version. You need to instead use version 4.x of the extension NuGet package or additionally include the [Table API extension](#table-api-extension) when using version 5.x.
+> Tables have been moved out of this package starting in its 5.x version. You need to instead use version 4.x of the extension NuGet package or additionally include the [Azure Cosmos DB for Table extension](#table-api-extension) when using version 5.x.
# [Functions 1.x](#tab/functionsv1/in-process)
Tables are included in a combined package for Azure Storage. Install the [Micros
> [!NOTE] > Tables have been moved out of this package starting in its 5.x version. You need to instead use version 4.x.
-# [Table API extension (preview)](#tab/table-api/isolated-process)
+# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
-The Table API extension does not currently support isolated process. You will instead need to use the [Storage extension](#storage-extension).
+The Azure Cosmos DB for Table extension does not currently support the isolated worker process model. You will instead need to use the [Storage extension](#storage-extension).
# [Functions 1.x](#tab/functionsv1/isolated-process)
You can install this version of the extension in your function app by registerin
> [!NOTE] > Version 3.x of the extension bundle doesn't include the Table Storage bindings. You need to instead use version 2.x for now.
-# [Table API extension (preview)](#tab/table-api/csharp-script)
+# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/csharp-script)
-Version 3.x of the extension bundle doesn't currently include the Table API bindings. For now, you need to instead use version 2.x of the extension bundle, which uses the [Storage extension](#storage-extension).
+Version 3.x of the extension bundle doesn't currently include the Azure Cosmos DB for Table bindings. For now, you need to instead use version 2.x of the extension bundle, which uses the [Storage extension](#storage-extension).
# [Functions 1.x](#tab/functionsv1/csharp-script)
Functions 1.x apps automatically have a reference to the extension.
[extension bundle]: ./functions-bindings-register.md#extension-bundles [Update your extensions]: ./functions-bindings-register.md
-[extension bundle]: ./functions-bindings-register.md#extension-bundles
azure-functions Functions Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-cli-samples.md
description: Find links to bash scripts for Azure Functions that use the Azure C
ms.assetid: 577d2f13-de4d-40d2-9dfc-86ecc79f3ab0 Last updated 09/17/2021-+ keywords: functions, azure cli samples, azure cli examples, azure cli code samples
The following table includes links to bash scripts for Azure Functions that use
| Integrate | Description| ||| | [Create a function app and connect to a storage account](scripts/functions-cli-create-function-app-connect-to-storage-account.md) | Create a function app and connect it to a storage account. |
-| [Create a function app and connect to an Azure Cosmos DB](scripts/functions-cli-create-function-app-connect-to-cosmos-db.md) | Create a function app and connect it to an Azure Cosmos DB. |
-| [Create a Python function app and mount a Azure Files share](scripts/functions-cli-mount-files-storage-linux.md) | By mounting a share to your Linux function app, you can leverage existing machine learning models or other data in your functions. |
+| [Create a function app and connect to an Azure Cosmos DB](scripts/functions-cli-create-function-app-connect-to-cosmos-db.md) | Create a function app and connect it to an Azure Cosmos DB instance. |
+| [Create a Python function app and mount an Azure Files share](scripts/functions-cli-mount-files-storage-linux.md) | By mounting a share to your Linux function app, you can leverage existing machine learning models or other data in your functions. |
| Continuous deployment | Description| |||
azure-functions Functions Create Cosmos Db Triggered Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-cosmos-db-triggered-function.md
description: Use Azure Functions to create a serverless function that is invoked
ms.assetid: bc497d71-75e7-47b1-babd-a060a664adca Last updated 04/28/2020-+ # Create a function triggered by Azure Cosmos DB
Next, you create a function in the new function app.
| Setting | Suggested value | Description | | | - | |
- | **New Function** | Accept the default name | The name of the function. |
- | **Cosmos DB account connection** | Accept the default new name | Select **New**, the **Database Account** you created earlier, and then **OK**. This action creates an application setting for your account connection. This setting is used by the binding to connection to the database. |
+ | **New function** | Accept the default name | The name of the function. |
+ | **Azure Cosmos DB account connection** | Accept the default new name | Select **New**, the **Database Account** you created earlier, and then **OK**. This action creates an application setting for your account connection. This setting is used by the binding to connect to the database. |
| **Database name** | Tasks | Name of the database that includes the collection to be monitored. | | **Collection name** | Items | Name of the collection to be monitored. | | **Collection name for leases** | leases | Name of the collection to store the leases. |
Next, you create a function in the new function app.
1. Select **Create Function**.
- Azure creates the Cosmos DB trigger function.
+ Azure creates the Azure Cosmos DB trigger function.
1. To display the template-based function code, select **Code + Test**.
- :::image type="content" source="./media/functions-create-cosmos-db-triggered-function/function-cosmosdb-template.png" alt-text="Cosmos DB function template in C#":::
+ :::image type="content" source="./media/functions-create-cosmos-db-triggered-function/function-cosmosdb-template.png" alt-text="Azure Cosmos DB function template in C#":::
This function template writes the number of documents and the first document ID to the logs.
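Although the portal generates C# script, the template's behavior corresponds roughly to this class-library sketch; the connection setting name is a placeholder, while the database, collection, and lease names follow the table above:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class CosmosTemplateSketch
{
    // Approximation of the portal template: log how many documents changed
    // and the ID of the first one.
    [FunctionName("CosmosTrigger")]
    public static void Run(
        [CosmosDBTrigger("Tasks", "Items",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> input,
        ILogger log)
    {
        if (input != null && input.Count > 0)
        {
            log.LogInformation("Documents modified: " + input.Count);
            log.LogInformation("First document Id: " + input[0].Id);
        }
    }
}
```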
azure-functions Functions Create Maven Intellij https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-intellij.md
To use Azure Toolkit for IntelliJ to create a local Azure Functions project, fol
To run the project locally, follow these steps:
+> [!IMPORTANT]
+> You must have the `JAVA_HOME` environment variable correctly set to the JDK directory that Maven uses when compiling your code. Make sure that the JDK version is at least as high as the `Java.version` setting.
+ 1. Navigate to *src/main/java/org/example/functions/HttpTriggerFunction.java* to see the code generated. Beside the line *24*, you'll notice that there's a green **Run** button. Click it and select **Run 'Functions-azur...'**. You'll see that your function app is running locally with a few logs. :::image type="content" source="media/functions-create-first-java-intellij/local-run-functions-project.png" alt-text="Local run project." lightbox="media/functions-create-first-java-intellij/local-run-functions-project.png":::
azure-functions Functions Create Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-vnet.md
You'll create a .NET function app in the Premium plan because this tutorial uses
| **Function App name** | Globally unique name | Name that identifies your new function app. Valid characters are `a-z` (case insensitive), `0-9`, and `-`. | |**Publish**| Code | Choose to publish code files or a Docker container. | | **Runtime stack** | .NET | This tutorial uses .NET. |
+ | **Version** | 3.1 | This tutorial uses .NET Core 3.1. |
|**Region**| Preferred region | Choose a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. | 1. Select **Next: Hosting**. On the **Hosting** page, enter the following settings.
azure-functions Functions Event Hub Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-hub-cosmos-db.md
Last updated 11/04/2019 ms.devlang: java--+ #Customer intent: As a Java developer, I want to write Java functions that process data continually (for example, from IoT sensors), and store the processing results in Azure Cosmos DB. # Tutorial: Create a function in Java with an Event Hub trigger and an Azure Cosmos DB output binding
-This tutorial shows you how to use Azure Functions to create a Java function that analyzes a continuous stream of temperature and pressure data. Event hub events that represent sensor readings trigger the function. The function processes the event data, then adds status entries to an Azure Cosmos DB.
+This tutorial shows you how to use Azure Functions to create a Java function that analyzes a continuous stream of temperature and pressure data. Event hub events that represent sensor readings trigger the function. The function processes the event data, then adds status entries to an Azure Cosmos DB instance.
In this tutorial, you'll:
In this tutorial, you'll need these resources:
* A resource group to contain the other resources * An Event Hubs namespace, event hub, and authorization rule
-* A Cosmos DB account, database, and collection
+* An Azure Cosmos DB account, database, and collection
* A function app and a storage account to host it The following sections show you how to create these resources using the Azure CLI.
az cosmosdb sql container create ^
-The `partition-key-path` value partitions your data based on the `temperatureStatus` value of each item. The partition key enables Cosmos DB to increase performance by dividing your data into distinct subsets that it can access independently.
+The `partition-key-path` value partitions your data based on the `temperatureStatus` value of each item. The partition key enables Azure Cosmos DB to increase performance by dividing your data into distinct subsets that it can access independently.
### Create a storage account and function app
Your function app will need to access the other resources to work correctly. The
### Retrieve resource connection strings
-Use the following commands to retrieve the storage, event hub, and Cosmos DB connection strings and save them in environment variables:
+Use the following commands to retrieve the storage, event hub, and Azure Cosmos DB connection strings and save them in environment variables:
# [Bash](#tab/bash)
This command generates several files inside a `telemetry-functions` folder:
* A `pom.xml` file for use with Maven * A `local.settings.json` file to hold app settings for local testing
-* A `host.json` file that enables the Azure Functions Extension Bundle, required for Cosmos DB output binding in your data analysis function
+* A `host.json` file that enables the Azure Functions Extension Bundle, which is required for the Azure Cosmos DB output binding in your data analysis function
* A `Function.java` file that includes a default function implementation * A few test files that this tutorial doesn't need
After some build and startup messages, you'll see output similar to the followin
You can then go to the [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos DB account. Select **Data Explorer**, expand **TelemetryInfo**, then select **Items** to view your data when it arrives.
-![Cosmos DB Data Explorer](media/functions-event-hub-cosmos-db/data-explorer.png)
+![Azure Cosmos DB Data Explorer](media/functions-event-hub-cosmos-db/data-explorer.png)
## Deploy to Azure and view app telemetry
az group delete --name %RESOURCE_GROUP%
## Next steps
-In this tutorial, you learned how to create an Azure Function that handles Event Hub events and updates a Cosmos DB. For more information, see the [Azure Functions Java developer guide](./functions-reference-java.md). For information on the annotations used, see the [com.microsoft.azure.functions.annotation](/java/api/com.microsoft.azure.functions.annotation) reference.
+In this tutorial, you learned how to create an Azure Function that handles Event Hub events and updates an Azure Cosmos DB instance. For more information, see the [Azure Functions Java developer guide](./functions-reference-java.md). For information on the annotations used, see the [com.microsoft.azure.functions.annotation](/java/api/com.microsoft.azure.functions.annotation) reference.
This tutorial used environment variables and application settings to store secrets such as connection strings. For information on storing these secrets in Azure Key Vault, see [Use Key Vault references for App Service and Azure Functions](../app-service/app-service-key-vault-references.md).
azure-functions Functions Host Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json.md
Title: host.json reference for Azure Functions 2.x description: Reference documentation for the Azure Functions host.json file with the v2 runtime. + Last updated 04/28/2020
For more information on snapshots, see [Debug snapshots on exceptions in .NET ap
| snapshotsPerTenMinutesLimit | 1 | The maximum number of snapshots allowed in 10 minutes. Although there is no upper bound on this value, exercise caution increasing it on production workloads because it could impact the performance of your application. Creating a snapshot is fast, but creating a minidump of the snapshot and uploading it to the Snapshot Debugger service is a much slower operation that will compete with your application for resources (both CPU and I/O). | | tempFolder | null | Specifies the folder to write minidumps and uploader log files. If not set, then *%TEMP%\Dumps* is used. | | thresholdForSnapshotting | 1 | How many times Application Insights needs to see an exception before it asks for snapshots. |
-| uploaderProxy | null | Overrides the proxy server used in the Snapshot Uploader process. You may need to use this setting if your application connects to the internet via a proxy server. The Snapshot Collector runs within your application's process and will use the same proxy settings. However, the Snapshot Uploader runs as a separate process and you may need to configure the proxy server manually. If this value is null, then Snapshot Collector will attempt to autodetect the proxy's address by examining System.Net.WebRequest.DefaultWebProxy and passing on the value to the Snapshot Uploader. If this value isn't null, then autodetection isn't used and the proxy server specified here will be used in the Snapshot Uploader. |
+| uploaderProxy | null | Overrides the proxy server used in the Snapshot Uploader process. You may need to use this setting if your application connects to the internet via a proxy server. The Snapshot Collector runs within your application's process and will use the same proxy settings. However, the Snapshot Uploader runs as a separate process and you may need to configure the proxy server manually. If this value is null, then Snapshot Collector will attempt to autodetect the proxy's address by examining `System.Net.WebRequest.DefaultWebProxy` and passing on the value to the Snapshot Uploader. If this value isn't null, then autodetection isn't used and the proxy server specified here will be used in the Snapshot Uploader. |
## blobs
This setting is a child of [logging](#logging). It controls the console logging
|DisableColors|false| Suppresses log formatting in the container logs on Linux. Set to true if you are seeing unwanted ANSI control characters in the container logs when running on Linux. | |isEnabled|false|Enables or disables console logging.|
-## cosmosDb
+## Azure Cosmos DB
-Configuration setting can be found in [Cosmos DB triggers and bindings](functions-bindings-cosmosdb-v2.md#hostjson-settings).
+Configuration settings can be found in [Azure Cosmos DB triggers and bindings](functions-bindings-cosmosdb-v2.md#hostjson-settings).
## customHandler
azure-functions Functions Integrate Store Unstructured Data Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-integrate-store-unstructured-data-cosmosdb.md
Title: Store unstructured data using Azure Cosmos DB and Functions
-description: Store unstructured data using Azure Functions and Cosmos DB
+description: Store unstructured data using Azure Functions and Azure Cosmos DB
Last updated 10/01/2020 ms.devlang: csharp, javascript-+ # Store unstructured data using Azure Functions and Azure Cosmos DB
-[Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) is a great way to store unstructured and JSON data. Combined with Azure Functions, Cosmos DB makes storing data quick and easy with much less code than required for storing data in a relational database.
+[Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) is a great way to store unstructured and JSON data. Combined with Azure Functions, Azure Cosmos DB makes storing data quick and easy with much less code than required for storing data in a relational database.
> [!NOTE] > At this time, the Azure Cosmos DB trigger, input bindings, and output bindings work with SQL API and Graph API accounts only.
You must have an Azure Cosmos DB account that uses the SQL API before you create
| Setting | Suggested value | Description | | | - | | | **Binding Type** | Azure Cosmos DB | Name of the binding type to select to create the output binding to Azure Cosmos DB. |
- | **Document parameter name** | taskDocument | Name that refers to the Cosmos DB object in code. |
+ | **Document parameter name** | taskDocument | Name that refers to the Azure Cosmos DB object in code. |
| **Database name** | taskDatabase | Name of database to save documents. | | **Collection name** | taskCollection | Name of the database collection. |
- | **If true, creates the Cosmos DB database and collection** | Yes | The collection doesn't already exist, so create it. |
- | **Cosmos DB account connection** | New setting | Select **New**, then choose **Azure Cosmos DB Account** and the **Database account** you created earlier, and then select **OK**. Creates an application setting for your account connection. This setting is used by the binding to connection to the database. |
+ | **If true, creates the Azure Cosmos DB database and collection** | Yes | The collection doesn't already exist, so create it. |
+ | **Azure Cosmos DB account connection** | New setting | Select **New**, then choose **Azure Cosmos DB Account** and the **Database account** you created earlier, and then select **OK**. Creates an application setting for your account connection. This setting is used by the binding to connect to the database. |
1. Select **OK** to create the binding.
This code sample reads the HTTP Request query strings and assigns them to fields
1. In the Azure portal, search for and select **Azure Cosmos DB**.
- :::image type="content" source="./media/functions-integrate-store-unstructured-data-cosmosdb/functions-search-cosmos-db.png" alt-text="Search for the Cosmos DB service." border="true":::
+ :::image type="content" source="./media/functions-integrate-store-unstructured-data-cosmosdb/functions-search-cosmos-db.png" alt-text="Search for the Azure Cosmos DB service." border="true":::
1. Choose your Azure Cosmos DB account, then select **Data Explorer**.
This code sample reads the HTTP Request query strings and assigns them to fields
:::image type="content" source="./media/functions-integrate-store-unstructured-data-cosmosdb/functions-data-explorer-check-document.png" alt-text="Verify the string values in your document." border="true":::
-You've successfully added a binding to your HTTP trigger to store unstructured data in an Azure Cosmos DB.
+You've successfully added a binding to your HTTP trigger to store unstructured data in an Azure Cosmos DB instance.
[!INCLUDE [Clean-up section](../../includes/clean-up-section-portal.md)]

## Next steps
-For more information about binding to a Cosmos DB database, see [Azure Functions Cosmos DB bindings](functions-bindings-cosmosdb.md).
+For more information about binding to an Azure Cosmos DB instance, see [Azure Functions Azure Cosmos DB bindings](functions-bindings-cosmosdb.md).
[!INCLUDE [functions-quickstart-next-steps](../../includes/functions-quickstart-next-steps-2.md)]
azure-functions Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-overview.md
description: Learn how Azure Functions can help build robust serverless apps.
ms.assetid: 01d6ca9f-ca3f-44fa-b0b9-7ffee115acd4 Last updated 05/27/2022-+ # Introduction to Azure Functions
The following are a common, _but by no means exhaustive_, set of scenarios for A
| **Build a web API** | Implement an endpoint for your web applications using the [HTTP trigger](./functions-bindings-http-webhook.md) |
| **Process file uploads** | Run code when a file is uploaded or changed in [blob storage](./functions-bindings-storage-blob.md) |
| **Build a serverless workflow** | Chain a series of functions together using [durable functions](./durable/durable-functions-overview.md) |
-| **Respond to database changes** | Run custom logic when a document is created or updated in [Cosmos DB](./functions-bindings-cosmosdb-v2.md) |
+| **Respond to database changes** | Run custom logic when a document is created or updated in [Azure Cosmos DB](./functions-bindings-cosmosdb-v2.md) |
| **Run scheduled tasks** | Execute code on [pre-defined timed intervals](./functions-bindings-timer.md) |
| **Create reliable message queue systems** | Process message queues using [Queue Storage](./functions-bindings-storage-queue.md), [Service Bus](./functions-bindings-service-bus.md), or [Event Hubs](./functions-bindings-event-hubs.md) |
| **Analyze IoT data streams** | Collect and process [data from IoT devices](./functions-bindings-event-iot.md) |
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
Title: Azure Functions C# script developer reference description: Understand how to develop Azure Functions using C# script. -+ Last updated 09/15/2022- # Azure Functions C# script (.csx) developer reference
The following table lists the .NET attributes for each binding type and the pack
> [!div class="mx-codeBreakAll"]
> | Binding | Attribute | Add reference |
> |-|-|-|
-> | Cosmos DB | [`Microsoft.Azure.WebJobs.DocumentDBAttribute`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/CosmosDBAttribute.cs) | `#r "Microsoft.Azure.WebJobs.Extensions.CosmosDB"` |
+> | Azure Cosmos DB | [`Microsoft.Azure.WebJobs.DocumentDBAttribute`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/CosmosDBAttribute.cs) | `#r "Microsoft.Azure.WebJobs.Extensions.CosmosDB"` |
> | Event Hubs | [`Microsoft.Azure.WebJobs.ServiceBus.EventHubAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/v2.x/src/Microsoft.Azure.WebJobs.ServiceBus/EventHubs/EventHubAttribute.cs), [`Microsoft.Azure.WebJobs.ServiceBusAccountAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/b798412ad74ba97cf2d85487ae8479f277bdd85c/test/Microsoft.Azure.WebJobs.ServiceBus.UnitTests/ServiceBusAccountTests.cs) | `#r "Microsoft.Azure.Jobs.ServiceBus"` |
> | Mobile Apps | [`Microsoft.Azure.WebJobs.MobileTableAttribute`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.MobileApps/MobileTableAttribute.cs) | `#r "Microsoft.Azure.WebJobs.Extensions.MobileApps"` |
> | Notification Hubs | [`Microsoft.Azure.WebJobs.NotificationHubAttribute`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions.NotificationHubs/NotificationHubAttribute.cs) | `#r "Microsoft.Azure.WebJobs.Extensions.NotificationHubs"` |
azure-functions Functions Reference Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-java.md
description: Understand how to develop functions with Java.
Last updated 09/14/2018 ms.devlang: java-+ # Azure Functions Java developer guide
As a Java developer, if you're new to Azure Functions, please consider first rea
| Getting started | Concepts | Scenarios/samples |
| -- | -- | -- |
-| <ul><li>[Java function using Visual Studio Code](./create-first-function-vs-code-java.md)</li><li>[Jav)</li></ul> | <ul><li>[Java samples with different triggers](/samples/azure-samples/azure-functions-samples-java/azure-functions-java/)</li><li>[Event Hub trigger and Cosmos DB output binding](/samples/azure-samples/java-functions-eventhub-cosmosdb/sample/)</li></ul> |
+| <ul><li>[Java function using Visual Studio Code](./create-first-function-vs-code-java.md)</li><li>[Jav)</li></ul> | <ul><li>[Java samples with different triggers](/samples/azure-samples/azure-functions-samples-java/azure-functions-java/)</li><li>[Event Hub trigger and Azure Cosmos DB output binding](/samples/azure-samples/java-functions-eventhub-cosmosdb/sample/)</li></ul> |
## Java function basics
To send multiple output values, use `OutputBinding<T>` defined in the `azure-fun
} ```
-You invoke this function on an HttpRequest. It writes multiple values to Queue storage.
+You invoke this function on an `HttpRequest` object. It writes multiple values to Queue storage.
## HttpRequestMessage and HttpResponseMessage
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
ms.assetid: d8efe41a-bef8-4167-ba97-f3e016fcd39e
Last updated 9/02/2021 ms.devlang: csharp+ # Azure Functions developer guide In Azure Functions, specific functions share a few core technical concepts and components, regardless of the language or binding you use. Before you jump into learning details specific to a given language or binding, be sure to read through this overview that applies to all of them.
Identity-based connections are supported by the following components:
| Azure Event Hubs triggers and bindings | All | [Extension version 5.0.0 or later](./functions-bindings-event-hubs.md?tabs=extensionv5) |
| Azure Service Bus triggers and bindings | All | [Extension version 5.0.0 or later](./functions-bindings-service-bus.md) |
| Azure Cosmos DB triggers and bindings - Preview | Elastic Premium | [Extension version 4.0.0-preview1 or later](.//functions-bindings-cosmosdb-v2.md?tabs=extensionv4) |
-| Azure Tables (when using Azure Storage) - Preview | All | [Table API extension](./functions-bindings-storage-table.md#table-api-extension) |
+| Azure Tables (when using Azure Storage) - Preview | All | [Azure Cosmos DB for Table extension](./functions-bindings-storage-table.md#table-api-extension) |
| Durable Functions storage provider (Azure Storage) - Preview | All | [Extension version 2.7.0 or later](https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.7.0) |
| Host-required storage ("AzureWebJobsStorage") - Preview | All | [Connecting to host storage with an identity](#connecting-to-host-storage-with-an-identity-preview) |
azure-functions Functions Triggers Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-triggers-bindings.md
description: Learn to use triggers and bindings to connect your Azure Function t
Last updated 05/25/2022-+ # Azure Functions triggers and bindings concepts
Consider the following examples of how you could implement different functions.
| Example scenario | Trigger | Input binding | Output binding |
|-|-|-|-|
| A new queue message arrives which runs a function to write to another queue. | Queue<sup>*</sup> | *None* | Queue<sup>*</sup> |
-|A scheduled job reads Blob Storage contents and creates a new Cosmos DB document. | Timer | Blob Storage | Cosmos DB |
-|The Event Grid is used to read an image from Blob Storage and a document from Cosmos DB to send an email. | Event Grid | Blob Storage and Cosmos DB | SendGrid |
+|A scheduled job reads Blob Storage contents and creates a new Azure Cosmos DB document. | Timer | Blob Storage | Azure Cosmos DB |
+|The Event Grid is used to read an image from Blob Storage and a document from Azure Cosmos DB to send an email. | Event Grid | Blob Storage and Azure Cosmos DB | SendGrid |
| A webhook that uses Microsoft Graph to update an Excel sheet. | HTTP | *None* | Microsoft Graph |

<sup>\*</sup> Represents different queues
azure-functions Functions Cli Create Function App Connect To Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-cosmos-db.md
Title: Create a function app with Azure Cosmos DB - Azure CLI
description: Azure CLI Script Sample - Create an Azure Function that connects to an Azure Cosmos DB Last updated 03/24/2022-+ # Create an Azure Function that connects to an Azure Cosmos DB
-This Azure Functions sample script creates a function app and connects the function to an Azure Cosmos DB database. It makes the connection using a Azure Cosmos DB endpoint and access key that it adds to app settings. The created app setting that contains the connection can be used with an [Azure Cosmos DB trigger or binding](../functions-bindings-cosmosdb.md).
+This Azure Functions sample script creates a function app and connects the function to an Azure Cosmos DB database. It makes the connection using an Azure Cosmos DB endpoint and access key that it adds to app settings. The created app setting that contains the connection can be used with an [Azure Cosmos DB trigger or binding](../functions-bindings-cosmosdb.md).
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
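The sample itself is an Azure CLI script, but the same connection can be wired up from Azure PowerShell. The following sketch is illustrative only: the resource group, account, and function app names, and the `CosmosDbConnection` app setting name, are placeholders rather than values from the sample.

```powershell
# Illustrative Azure PowerShell sketch; the article's sample script uses the Azure CLI.
# Resource names and the app setting name are placeholders, not values from the article.
$resourceGroup = "myResourceGroup"
$cosmosAccount = "mycosmosaccount"
$functionApp   = "myfunctionapp"

# Read the account endpoint and primary key from the Azure Cosmos DB account.
$account = Get-AzCosmosDBAccount -ResourceGroupName $resourceGroup -Name $cosmosAccount
$keys    = Get-AzCosmosDBAccountKey -ResourceGroupName $resourceGroup -Name $cosmosAccount -Type "Keys"

# Store a connection string as an app setting so an Azure Cosmos DB trigger or
# binding can reference it by name.
$connection = "AccountEndpoint=$($account.DocumentEndpoint);AccountKey=$($keys.PrimaryMasterKey)"
Update-AzFunctionAppSetting -ResourceGroupName $resourceGroup -Name $functionApp -AppSetting @{ "CosmosDbConnection" = $connection }
```

The trigger or binding configuration then refers to this app setting by name, just as it would with the setting created by the CLI script.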
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md
Title: Azure Government isolation guidelines for Impact Level 5
description: Guidance for configuring Azure Government services for DoD Impact Level 5 workloads -+ recommendations: false
For Containers services availability in Azure Government, see [Products availabl
### [Container Registry](../container-registry/index.yml) -- When you store images and other artifacts in a Container Registry, Azure automatically encrypts the registry content at rest by using service-managed keys. You can supplement the default encryption with an extra encryption layer by [using a key that you create and manage in Azure Key Vault](../container-registry/container-registry-customer-managed-keys.md).
+- When you store images and other artifacts in a Container Registry, Azure automatically encrypts the registry content at rest by using service-managed keys. You can supplement the default encryption with an extra encryption layer by [using a key that you create and manage in Azure Key Vault](../container-registry/tutorial-enable-customer-managed-keys.md).
## Databases
For Databases services availability in Azure Government, see [Products available
### [Azure Cosmos DB](../cosmos-db/index.yml) -- Data stored in your Azure Cosmos account is automatically and seamlessly encrypted with keys managed by Microsoft (service-managed keys). Optionally, you can choose to add a second layer of encryption with keys you manage (customer-managed keys). For more information, see [Configure customer-managed keys for your Azure Cosmos account with Azure Key Vault](../cosmos-db/how-to-setup-cmk.md).
+- Data stored in your Azure Cosmos DB account is automatically and seamlessly encrypted with keys managed by Microsoft (service-managed keys). Optionally, you can choose to add a second layer of encryption with keys you manage (customer-managed keys). For more information, see [Configure customer-managed keys for your Azure Cosmos DB account with Azure Key Vault](../cosmos-db/how-to-setup-cmk.md).
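As a hedged illustration of the customer-managed key option, the following Azure PowerShell sketch supplies a Key Vault key URI when the account is created. The names, region, and key URI are placeholders, and the Key Vault permissions described in the linked article must already be in place.

```powershell
# Hypothetical example: the names, region, and Key Vault key URI are placeholders.
New-AzCosmosDBAccount `
    -ResourceGroupName "myResourceGroup" `
    -Name "mycosmosaccount" `
    -Location "usgovvirginia" `
    -KeyVaultKeyUri "https://mykeyvault.vault.usgovcloudapi.net/keys/mykey"
```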
### [Azure Database for MySQL](../mysql/index.yml)
azure-monitor Agent Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-data-sources.md
Title: Log Analytics agent data sources in Azure Monitor
-description: Data sources define the log data that Azure Monitor collects from agents and other connected sources. This article describes the concept of how Azure Monitor uses data sources, explains the details of how to configure them, and provides a summary of the different data sources available.
+description: Data sources define the log data that Azure Monitor collects from agents and other connected sources. This article describes how Azure Monitor uses data sources, explains how to configure them, and summarizes the different data sources available.
# Log Analytics agent data sources in Azure Monitor
-The data that Azure Monitor collects from virtual machines with the legacy [Log Analytics](./log-analytics-agent.md) agent is defined by the data sources that you configure on the [Log Analytics workspace](../logs/data-platform-logs.md). Each data source creates records of a particular type with each type having its own set of properties.
-![Log data collection](media/agent-data-sources/overview.png)
+The data that Azure Monitor collects from virtual machines with the legacy [Log Analytics](./log-analytics-agent.md) agent is defined by the data sources that you configure in the [Log Analytics workspace](../logs/data-platform-logs.md). Each data source creates records of a particular type. Each type has its own set of properties.
+
+![Diagram that shows log data collection.](media/agent-data-sources/overview.png)
[!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)] > [!IMPORTANT]
-> The data sources described in this article apply only to virtual machines running the Log Analytics agent.
+> The data sources described in this article apply only to virtual machines running the Log Analytics agent.
## Summary of data sources
-The following table lists the agent data sources that are currently available with the Log Analytics agent. Each has a link to a separate article providing detail for that data source. It also provides information on their method and frequency of collection.
+The following table lists the agent data sources that are currently available with the Log Analytics agent. Each data source links to an article that provides more detail about that data source. The table also shows the collection method and frequency for each data source.
-| Data source | Platform | Log analytics agent | Operations Manager agent | Azure storage | Operations Manager required? | Operations Manager agent data sent via management group | Collection frequency |
+| Data source | Platform | Log Analytics agent | Operations Manager agent | Azure Storage | Operations Manager required? | Operations Manager agent data sent via management group | Collection frequency |
| - | - | - | - | - | - | - | - |
-| [Custom logs](data-sources-custom-logs.md) | Windows |&#8226; | | | | | on arrival |
-| [Custom logs](data-sources-custom-logs.md) | Linux |&#8226; | | | | | on arrival |
-| [IIS logs](data-sources-iis-logs.md) | Windows |&#8226; |&#8226; |&#8226; | | |depends on Log File Rollover setting |
-| [Performance counters](data-sources-performance-counters.md) | Windows |&#8226; |&#8226; | | | |as scheduled, minimum of 10 seconds |
-| [Performance counters](data-sources-performance-counters.md) | Linux |&#8226; | | | | |as scheduled, minimum of 10 seconds |
-| [Syslog](data-sources-syslog.md) | Linux |&#8226; | | | | |from Azure storage: 10 minutes; from agent: on arrival |
-| [Windows Event logs](data-sources-windows-events.md) |Windows |&#8226; |&#8226; |&#8226; | |&#8226; | on arrival |
+| [Custom logs](data-sources-custom-logs.md) | Windows |&#8226; | | | | | On arrival. |
+| [Custom logs](data-sources-custom-logs.md) | Linux |&#8226; | | | | | On arrival. |
+| [IIS logs](data-sources-iis-logs.md) | Windows |&#8226; |&#8226; |&#8226; | | |Depends on the Log File Rollover setting. |
+| [Performance counters](data-sources-performance-counters.md) | Windows |&#8226; |&#8226; | | | |As scheduled, minimum of 10 seconds. |
+| [Performance counters](data-sources-performance-counters.md) | Linux |&#8226; | | | | |As scheduled, minimum of 10 seconds. |
+| [Syslog](data-sources-syslog.md) | Linux |&#8226; | | | | |From Azure Storage: 10 minutes. From agent: on arrival. |
+| [Windows Event logs](data-sources-windows-events.md) |Windows |&#8226; |&#8226; |&#8226; | |&#8226; | On arrival. |
+## Configure data sources
-## Configuring data sources
-To configure data sources for Log Analytics agents, go to the **Log Analytics workspaces** menu in the Azure portal and select a workspace. Click on **Agents configuration**. Select the tab for the data source you want to configure. You can follow the links in the table above to documentation for each data source and details on their configuration.
+To configure data sources for Log Analytics agents, go to the **Log Analytics workspaces** menu in the Azure portal and select a workspace. Select **Agents configuration**. Select the tab for the data source you want to configure. Use the links in the preceding table to access documentation for each data source and information on their configuration.
-Any configuration is delivered to all agents connected to that workspace. You cannot exclude any connected agents from this configuration.
+Any configuration is delivered to all agents connected to that workspace. You can't exclude any connected agents from this configuration.
-[![Configure Windows events](media/agent-data-sources/configure-events.png)](media/agent-data-sources/configure-events.png#lightbox)
+[![Screenshot that shows configuring Windows events.](media/agent-data-sources/configure-events.png)](media/agent-data-sources/configure-events.png#lightbox)
+## Data collection
+Data source configurations are delivered to agents that are directly connected to Azure Monitor within a few minutes. The specified data is collected from the agent and delivered directly to Azure Monitor at intervals specific to each data source. See the documentation for each data source for these specifics.
-## Data collection
-Data source configurations are delivered to agents that are directly connected to Azure Monitor within a few minutes. The specified data is collected from the agent and delivered directly to Azure Monitor at intervals specific to each data source. See the documentation for each data source for these specifics.
+For System Center Operations Manager agents in a connected management group, data source configurations are translated into management packs and delivered to the management group every 5 minutes by default. The agent downloads the management pack like any other and collects the specified data. Depending on the data source, the data will either be sent to a management server, which forwards the data to Azure Monitor, or the agent will send the data to Azure Monitor without going through the management server.
-For System Center Operations Manager agents in a connected management group, data source configurations are translated into management packs and delivered to the management group every 5 minutes by default. The agent downloads the management pack like any other and collects the specified data. Depending on the data source, the data will be either sent to a management server which forwards the data to the Azure Monitor, or the agent will send the data to Azure Monitor without going through the management server. See [Data collection details for monitoring solutions in Azure](../monitor-reference.md) for details. You can read about details of connecting Operations Manager and Azure Monitor and modifying the frequency that configuration is delivered at [Configure Integration with System Center Operations Manager](./om-agents.md).
+For more information, see [Data collection details for monitoring solutions in Azure](../monitor-reference.md). You can read about details of connecting Operations Manager and Azure Monitor and modifying the frequency that configuration is delivered at [Configure integration with System Center Operations Manager](./om-agents.md).
-If the agent is unable to connect to Azure Monitor or Operations Manager, it will continue to collect data that it will deliver when it establishes a connection. Data can be lost if the amount of data reaches the maximum cache size for the client, or if the agent is not able to establish a connection within 24 hours.
+If the agent is unable to connect to Azure Monitor or Operations Manager, it will continue to collect data that it will deliver when it establishes a connection. Data can be lost if the amount of data reaches the maximum cache size for the client, or if the agent can't establish a connection within 24 hours.
## Log records
-All log data collected by Azure Monitor is stored in the workspace as records. Records collected by different data sources will have their own set of properties and be identified by their **Type** property. See the documentation for each data source and solution for details on each record type.
+
+All log data collected by Azure Monitor is stored in the workspace as records. Records collected by different data sources will have their own set of properties and be identified by their **Type** property. See the documentation for each data source and solution for details on each record type.
## Next steps+ * Learn about [monitoring solutions](../insights/solutions.md) that add functionality to Azure Monitor and also collect data into the workspace.
-* Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and monitoring solutions.
+* Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and monitoring solutions.
* Configure [alerts](../alerts/alerts-overview.md) to proactively notify you of critical data collected from data sources and monitoring solutions.
azure-monitor Agent Linux Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux-troubleshoot.md
A clean reinstall of the agent fixes most issues. This task might be the first s
Extra configurations | `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.d/*.conf` > [!NOTE]
- > Editing configuration files for performance counters and Syslog is overwritten if the collection is configured from the [agent's configuration](../agents/agent-data-sources.md#configuring-data-sources) in the Azure portal for your workspace. To disable configuration for all agents, disable collection from **Agents configuration**. For a single agent, run the following script:
+ > Editing configuration files for performance counters and Syslog is overwritten if the collection is configured from the [agent's configuration](../agents/agent-data-sources.md#configure-data-sources) in the Azure portal for your workspace. To disable configuration for all agents, disable collection from **Agents configuration**. For a single agent, run the following script:
> > `sudo /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable && sudo rm /etc/opt/omi/conf/omsconfig/configuration/Current.mof* /etc/opt/omi/conf/omsconfig/configuration/Pending.mof*`
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
# Define Azure Monitor Agent network settings
-Azure Monitor Agent supports connecting using direct proxies, Log Analytics gateway, and private links. This article explains how to define network settings and enable network isolation for Azure Monitor Agent.
+Azure Monitor Agent supports connecting by using direct proxies, Log Analytics gateway, and private links. This article explains how to define network settings and enable network isolation for Azure Monitor Agent.
## Virtual network service tags
-The Azure Monitor Agent supports [Azure virtual network service tags](../../virtual-network/service-tags-overview.md). Both *AzureMonitor* and *AzureResourceManager* tags are required.
+Azure Monitor Agent supports [Azure virtual network service tags](../../virtual-network/service-tags-overview.md). Both *AzureMonitor* and *AzureResourceManager* tags are required.
## Firewall requirements
The Azure Monitor Agent supports [Azure virtual network service tags](../../virt
| Azure China | Replace '.com' above with '.cn' | Same as above | Same as above | Same as above | Same as above |

>[!NOTE]
-> If you use private links on the agent, you must also add the [DCE endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).
-> Azure Monitor metrics (custom metrics) preview is not available in Azure Government and Azure China clouds
+> If you use private links on the agent, you must also add the [data collection endpoints (DCEs)](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).
+> The Azure Monitor Metrics (custom metrics) preview isn't available in Azure Government and Azure China clouds.
## Proxy configuration
If the machine connects through a proxy server to communicate over the internet,
The Azure Monitor Agent extensions for Windows and Linux can communicate either through a proxy server or a [Log Analytics gateway](./gateway.md) to Azure Monitor by using the HTTPS protocol. Use it for Azure virtual machines, Azure virtual machine scale sets, and Azure Arc for servers. Use the extensions settings for configuration as described in the following steps. Both anonymous and basic authentication by using a username and password are supported. > [!IMPORTANT]
-> Proxy configuration is not supported for [Azure Monitor Metrics (Public preview)](../essentials/metrics-custom-overview.md) as a destination. If you're sending metrics to this destination, it will use the public internet without any proxy.
+> Proxy configuration isn't supported for [Azure Monitor Metrics (public preview)](../essentials/metrics-custom-overview.md) as a destination. If you're sending metrics to this destination, it will use the public internet without any proxy.
-1. Use this flowchart to determine the values of the *`Settings` and `ProtectedSettings` parameters first.
+1. Use this flowchart to determine the values of the `Settings` and `ProtectedSettings` parameters first.
![Diagram that shows a flowchart to determine the values of settings and protectedSettings parameters when you enable the extension.](media/azure-monitor-agent-overview/proxy-flowchart.png) > [!NOTE]
    - > Azure Monitor agent for Linux doesn't support system proxy via environment variables such as `http_proxy` and `https_proxy`
+ > Azure Monitor Agent for Linux doesn't support system proxy via environment variables such as `http_proxy` and `https_proxy`.
-1. After determining the `Settings` and `ProtectedSettings` parameter values, *provide these other parameters* when you deploy Azure Monitor Agent, using PowerShell commands, as shown in the following examples:
+1. After you determine the `Settings` and `ProtectedSettings` parameter values, provide these other parameters when you deploy Azure Monitor Agent. Use PowerShell commands, as shown in the following examples:
# [Windows VM](#tab/PowerShellWindows) ```powershell
-$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = "true"}}
$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}} Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMo
# [Linux VM](#tab/PowerShellLinux) ```powershell
-$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = "true"}}
$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}} Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMoni
# [Windows Arc-enabled server](#tab/PowerShellWindowsArc) ```powershell
-$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = "true"}}
$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}} New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString
New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType Az
# [Linux Arc-enabled server](#tab/PowerShellLinuxArc) ```powershell
-$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = "true"}}
$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}} New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString
New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType Azur
## Log Analytics gateway configuration
-1. Follow the preceding instructions to configure proxy settings on the agent and provide the IP address and port number that corresponds to the gateway server. If you've deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
+1. Follow the preceding instructions to configure proxy settings on the agent and provide the IP address and port number that correspond to the gateway server. If you've deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
1. Add the **configuration endpoint URL** to fetch data collection rules to the allowlist for the gateway `Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com` `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com`.
New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType Azur
1. Add the **data ingestion endpoint URL** to the allowlist for the gateway `Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com`. 1. Restart the **OMS Gateway** service to apply the changes
- `Stop-Service -Name <gateway-name>`
+ `Stop-Service -Name <gateway-name>` and
`Start-Service -Name <gateway-name>`.
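The gateway steps above can be combined into one script run on the gateway server. The commands are the ones listed in the preceding steps; the angle-bracket values are placeholders that you replace with your own gateway region, workspace ID, and gateway service name.

```powershell
# Run on the Log Analytics gateway server. The placeholder values in angle
# brackets come from the steps above; replace them before running.
Add-OMSGatewayAllowedHost -Host "global.handler.control.monitor.azure.com"
Add-OMSGatewayAllowedHost -Host "<gateway-server-region-name>.handler.control.monitor.azure.com"
Add-OMSGatewayAllowedHost -Host "<log-analytics-workspace-id>.ods.opinsights.azure.com"

# Restart the OMS Gateway service so the new allowlist entries take effect.
Stop-Service -Name "<gateway-name>"
Start-Service -Name "<gateway-name>"
```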
-## Enable network isolation for the Azure Monitor agent
-By default, Azure Monitor agent will connect to a public endpoint to connect to your Azure Monitor environment. You can enable network isolation for your agents by creating [data collection endpoints](../essentials/data-collection-endpoint-overview.md) and adding them to your [Azure Monitor Private Link Scopes (AMPLS)](../logs/private-link-configure.md#connect-azure-monitor-resources).
+## Enable network isolation for Azure Monitor Agent
+By default, Azure Monitor Agent connects to a public endpoint to connect to your Azure Monitor environment. To enable network isolation for your agents, you can create [data collection endpoints](../essentials/data-collection-endpoint-overview.md) and add them to your [Azure Monitor Private Link Scopes (AMPLS)](../logs/private-link-configure.md#connect-azure-monitor-resources).
-### Create data collection endpoint
-To use network isolation, you must create a data collection endpoint for each of your regions for agents to connect instead of the public endpoint. See [Create a data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-data-collection-endpoint) for details on create a DCE. An agent can only connect to a DCE in the same region. If you have agents in multiple regions, then you must create a DCE in each one.
+### Create a data collection endpoint
+To use network isolation, you must create a data collection endpoint for each of your regions so that agents can connect instead of using the public endpoint. For information on how to create a DCE, see [Create a data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-data-collection-endpoint). An agent can only connect to a DCE in the same region. If you have agents in multiple regions, you must create a DCE in each one.
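As an illustration, a DCE can also be created programmatically against the Azure Resource Manager API. The following sketch uses `Invoke-AzRestMethod`; the subscription, resource group, endpoint name, region, and API version are assumptions rather than values from this article, and public network access is disabled up front to match the network isolation scenario.

```powershell
# Hypothetical sketch: the subscription, resource group, DCE name, region, and
# API version are assumptions, not values from this article.
$subscriptionId = "00000000-0000-0000-0000-000000000000"
$resourceGroup  = "myResourceGroup"
$dceName        = "my-dce-eastus"

$body = @{
    location   = "eastus"
    properties = @{
        networkAcls = @{
            # Disable public access so the endpoint is reachable only over private links.
            publicNetworkAccess = "Disabled"
        }
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT -Payload $body `
    -Path "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Insights/dataCollectionEndpoints/$dceName?api-version=2021-04-01"
```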
-### Create private link
-With [Azure Private Link](../../private-link/private-link-overview.md), you can securely link Azure platform as a service (PaaS) resources to your virtual network by using private endpoints. An Azure Monitor Private Link connects a private endpoint to a set of Azure Monitor resources, defining the boundaries of your monitoring network. That set is called an Azure Monitor Private Link Scope (AMPLS). See [Configure your Private Link](../logs/private-link-configure.md) for details on creating and configuring your AMPLS.
+### Create a private link
-### Add DCE to AMPLS
-Add the data collection endpoints to a new or existing [Azure Monitor Private Link Scopes (AMPLS)](../logs/private-link-configure.md#connect-azure-monitor-resources) resource. This adds the DCE endpoints to your private DNS zone (see [how to validate](../logs/private-link-configure.md#review-and-validate-your-private-link-setup)) and allows communication via private links. You can do this from either the AMPLS resource or from within an existing DCE resource's 'Network Isolation' tab.
+With [Azure Private Link](../../private-link/private-link-overview.md), you can securely link Azure platform as a service (PaaS) resources to your virtual network by using private endpoints. An Azure Monitor private link connects a private endpoint to a set of Azure Monitor resources that define the boundaries of your monitoring network. That set is called an Azure Monitor Private Link Scope. For information on how to create and configure your AMPLS, see [Configure your private link](../logs/private-link-configure.md).
-> [!NOTE]
-> Other Azure Monitor resources like the Log Analytics workspace(s) configured in your data collection rules that you wish to send data to, must be part of this same AMPLS resource.
--
-For your data collection endpoint(s), ensure **Accept access from public networks not connected through a Private Link Scope** option is set to **No** under the 'Network Isolation' tab of your endpoint resource in Azure portal, as shown below. This ensures that public internet access is disabled, and network communication only happen via private links.
+### Add DCEs to AMPLS
+Add the data collection endpoints to a new or existing [Azure Monitor Private Link Scopes](../logs/private-link-configure.md#connect-azure-monitor-resources) resource. This process adds the DCEs to your private DNS zone (see [how to validate](../logs/private-link-configure.md#review-and-validate-your-private-link-setup)) and allows communication via private links. You can do this task from the AMPLS resource or on an existing DCE resource's **Network isolation** tab.
+> [!NOTE]
+> Other Azure Monitor resources like the Log Analytics workspaces configured in your data collection rules that you want to send data to must be part of this same AMPLS resource.
- Associate the data collection endpoints to the target resources by editing the data collection rule in Azure portal. From the **Resources** tab, select **Enable Data Collection Endpoints** and select a DCE for each virtual machine. See [Configure data collection for the Azure Monitor agent](../agents/data-collection-rule-azure-monitor-agent.md).
+For your data collection endpoints, ensure the **Accept access from public networks not connected through a Private Link Scope** option is set to **No** on the **Network Isolation** tab of your endpoint resource in the Azure portal. This setting ensures that public internet access is disabled and network communication only happens via private links.
+ Associate the data collection endpoints to the target resources by editing the data collection rule in the Azure portal. On the **Resources** tab, select **Enable Data Collection Endpoints**. Select a DCE for each virtual machine. See [Configure data collection for Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md).
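If you prefer to script this association instead of using the portal, the following sketch shows one possible approach through the data collection rule association API. The resource IDs, the `configurationAccessEndpoint` association name, and the API version are assumptions based on that API and aren't taken from this article; verify them against the current REST reference before relying on them.

```powershell
# Hypothetical sketch: resource IDs, the association name, and the API version are assumptions.
$vmResourceId  = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
$dceResourceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Insights/dataCollectionEndpoints/<dce-name>"

$body = @{
    properties = @{ dataCollectionEndpointId = $dceResourceId }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT -Payload $body `
    -Path "$vmResourceId/providers/Microsoft.Insights/dataCollectionRuleAssociations/configurationAccessEndpoint?api-version=2021-04-01"
```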
## Next steps+ - [Associate endpoint to machines](../agents/data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association)-- [Add endpoint to AMPLS resource](../logs/private-link-configure.md#connect-azure-monitor-resources)
+- [Add endpoint to AMPLS resource](../logs/private-link-configure.md#connect-azure-monitor-resources)
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
description: This article describes the version details for the Azure Monitor ag
Previously updated : 9/15/2022 Last updated : 9/28/2022
We strongly recommend that you update to the latest version at all times, or opt in
## Version details

| Release Date | Release notes | Windows | Linux |
|:-|:-|:-|:-|
-| August 2022 | <ul><li>Improved resiliency: Default lookback (retry) time updated to last 3 days (72 hours) up from 60 minutes, for agent to collect data post interruption. This is subject to default offline cache size of 10gigabytes</li><li>Fixes the preview custom text log feature that was incorrectly removing the *TimeGenerated* field from the raw data of each event. All events are now additionally stamped with agent (local) upload time</li><li>Fixed datetime format to UTC</li><li>Fix to use default location for firewall log collection, if not provided</li><li>Reliability and supportability improvements</li></ul> | 1.8.0.0 | Coming soon |
+| August 2022 | **Common updates** <ul><li>Improved resiliency: Default lookback (retry) time updated to last 3 days (72 hours) up from 60 minutes, for agent to collect data post interruption. This is subject to the default offline cache size of 10 gigabytes</li><li>Fixes the preview custom text log feature that was incorrectly removing the *TimeGenerated* field from the raw data of each event. All events are now additionally stamped with agent (local) upload time</li><li>Reliability and supportability improvements</li></ul> **Windows** <ul><li>Fixed datetime format to UTC</li><li>Fix to use default location for firewall log collection, if not provided</li><li>Reliability and supportability improvements</li></ul> **Linux** <ul><li>Support for OpenSuse 15, Debian 11 ARM64</li><li>Support for coexistence of Azure Monitor agent with legacy Azure Diagnostic extension for Linux (LAD)</li><li>Increased max-size of UDP payload for Telegraf output to prevent dimension truncation</li><li>Prevent unconfigured upload to Azure Monitor Metrics destination</li><li>Fix for disk metrics wherein *instance name* dimension will use the disk mount path(s) instead of the device name(s), to provide parity with legacy agent</li><li>Fixed *disk free MB* metric to report megabytes instead of bytes</li></ul> | 1.8.0.0 | 1.22.2 |
| July 2022 | Fix for mismatch event timestamps for Sentinel Windows Event Forwarding | 1.7.0.0 | None |
| June 2022 | Bugfixes with user assigned identity support, and reliability improvements | 1.6.0.0 | None |
| May 2022 | <ul><li>Fixed issue where agent stops functioning due to faulty XPath query. With this version, only query related Windows events will fail, other data types will continue to be collected</li><li>Collection of Windows network troubleshooting logs added to 'CollectAMAlogs.ps1' tool</li><li>Linux support for Debian 11 distro</li><li>Fixed issue to list mount paths instead of device names for Linux disk metrics</li></ul> | 1.5.0.0 | 1.21.0 |
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Title: Migrate from legacy agents to Azure Monitor Agent
-description: This article provides guidance for migrating from the existing legacy agents to the new Azure Monitor Agent (AMA) and data collection rules (DCR).
+description: This article provides guidance for migrating from the existing legacy agents to the new Azure Monitor Agent (AMA) and data collection rules (DCRs).
Last updated 9/14/2022
-# Customer intent: As an IT manager, I want to understand how I should move from using legacy agents to Azure Monitor Agent.
+# Customer intent: As an IT manager, I want to understand how I should move from using legacy agents to Azure Monitor Agent.
# Migrate to Azure Monitor Agent from Log Analytics agent
-[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines, in both Azure and non-Azure (on-premises and 3rd party clouds) environments. It introduces a simplified, flexible method of configuring collection configuration called [Data Collection Rules (DCRs)](../essentials/data-collection-rule-overview.md). This article outlines the benefits of migrating to Azure Monitor Agent (AMA) and provides guidance on how to implement a successful migration.
+
+[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines, in both Azure and non-Azure (on-premises and third-party clouds) environments. It introduces a simplified, flexible method of configuring data collection called [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md). This article outlines the benefits of migrating to Azure Monitor Agent and provides guidance on how to implement a successful migration.
> [!IMPORTANT]
-> The Log Analytics agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are currently using the Log Analytics agent with Azure Monitor or other supported features and services, you should start planning your migration to Azure Monitor Agent using the information in this article.
+> The Log Analytics agent will be [retired on **August 31, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you're currently using the Log Analytics agent with Azure Monitor or other supported features and services, you should start planning your migration to Azure Monitor Agent by using the information in this article.
-## Benefits
+## Benefits
Azure Monitor Agent provides the following benefits over legacy agents: - **Security and performance**
- - Enhanced security through Managed Identity and Azure Active Directory (Azure AD) tokens (for clients).
- - A higher events per second (EPS) upload rate.
-- **Cost savings** using data collection [using Data Collection Rules](data-collection-rule-azure-monitor-agent.md). Using Data Collection Rules is one of the most useful advantages of using Azure Monitor Agent.
- - DCRs lets you configure data collection for specific machines connected to a workspace as compared to the "all or nothing" approach of legacy agents.
- - Using DCRs you can define which data to ingest and which data to filter out to reduce workspace clutter and save on costs.
-- **Simpler management** of data collection, including ease of troubleshooting
- - Easy **multihoming** on Windows and Linux.
- - Centralized, 'in the cloud' agent configuration makes every action simpler and more easily scalable throughout the data collection lifecycle, from onboarding to deployment to updates and changes over time.
- - Greater transparency and control of more capabilities and services, such as Sentinel, Defender for Cloud, and VM Insights.
-- **A single agent** that consolidates all features necessary to address all telemetry data collection needs across servers and client devices (running Windows 10, 11). This is the goal, though Azure Monitor Agent currently converges with the Log Analytics agents.
+ - Enhanced security through Managed Identity and Azure Active Directory (Azure AD) tokens (for clients).
+ - A higher events-per-second (EPS) upload rate.
+- **Cost savings** by [using data collection rules](data-collection-rule-azure-monitor-agent.md). Using DCRs is one of the most useful advantages of using Azure Monitor Agent:
+ - DCRs let you configure data collection for specific machines connected to a workspace as compared to the "all or nothing" approach of legacy agents.
+ - With DCRs, you can define which data to ingest and which data to filter out to reduce workspace clutter and save on costs.
+- **Simpler management** of data collection, including ease of troubleshooting:
+ - Easy *multihoming* on Windows and Linux.
+ - Centralized, "in the cloud" agent configuration makes every action simpler and more easily scalable throughout the data collection lifecycle, from onboarding to deployment to updates and changes over time.
+ - Greater transparency and control of more capabilities and services, such as Microsoft Sentinel, Defender for Cloud, and VM Insights.
+- **A single agent** that consolidates all features necessary to address all telemetry data collection needs across servers and client devices running Windows 10 or 11. A single agent is the goal, although Azure Monitor Agent currently converges with the Log Analytics agents.
## Migration plan considerations Your migration plan to the Azure Monitor Agent should take into account: -- **Current and new feature requirements:** Review [Azure Monitor Agent's supported services and features](agents-overview.md#supported-services-and-features) to ensure that Azure Monitor Agent has the features you require. If you currently use unsupported features you can temporarily do without, consider migrating to the new agent to benefit from added security and reduced cost immediately. Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview) to **discover what solutions and features you're using today that depend on the legacy agent**.
+- **Current and new feature requirements:** Review [Azure Monitor Agent's supported services and features](agents-overview.md#supported-services-and-features) to ensure that Azure Monitor Agent has the features you require. If you currently use unsupported features you can temporarily do without, consider migrating to the new agent to benefit from added security and reduced cost immediately. Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview) to *discover what solutions and features you're using today that depend on the legacy agent*.
If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel. -- **Installing Azure Monitor Agent alongside a legacy agent:** If you're setting up a **new environment** with resources, such as deployment scripts and onboarding templates, and you still need a legacy agent, assess the effort of migrating to Azure Monitor Agent later. If the setup will take a significant amount of rework, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort.
+- **Installing Azure Monitor Agent alongside a legacy agent:** If you're setting up a *new environment* with resources, such as deployment scripts and onboarding templates, and you still need a legacy agent, assess the effort of migrating to Azure Monitor Agent later. If the setup will take a significant amount of rework, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort.
- Azure Monitor Agent can run alongside the legacy Log Analytics agents on the same machine so that you can continue to use existing functionality during evaluation or migration. While this allows you to begin the transition, ensure you understand the limitations:
- - Be careful in collecting duplicate data from the same machine, which could skew query results and affect downstream features like alerts, dashboards or workbooks. For example, VM Insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents.
- If you install Azure Monitor Agent and create a data collection rule for these events and performance data, you'll collect duplicate data. If you're using both agents to collect the same type of data, make sure the agents are **collecting data from different machines** or **sending the data to different destinations**. Collecting duplicate data also generates more charges for data ingestion and retention.
+ Azure Monitor Agent can run alongside the legacy Log Analytics agents on the same machine so that you can continue to use existing functionality during evaluation or migration. You can begin the transition, but ensure you understand the limitations:
+ - Be careful when you collect duplicate data from the same machine. Duplicate data could skew query results and affect downstream features like alerts, dashboards, or workbooks. For example, VM Insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents.
+ If you install Azure Monitor Agent and create a data collection rule for these events and performance data, you'll collect duplicate data. If you're using both agents to collect the same type of data, make sure the agents are *collecting data from different machines* or *sending the data to different destinations*. Collecting duplicate data also generates more charges for data ingestion and retention.
- - Running two telemetry agents on the same machine consumes double the resources, including, but not limited to CPU, memory, storage space, and network bandwidth.
+ - Running two telemetry agents on the same machine consumes double the resources, including but not limited to CPU, memory, storage space, and network bandwidth.
## Prerequisites
-Review the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for use Azure Monitor Agent. For non-Azure servers, [installing the Azure Arc agent](/azure/azure-arc/servers/agent-overview) is an important prerequisite that then helps to install the agent extension and other required extensions. Using Arc for this purpose comes at no added cost, and it's not mandatory to use Arc for server management overall (i.e. you can continue using your existing non-Azure management solutions). Once Arc agent is installed, you can follow the same guidance below across Azure and non-Azure for migration.
+
+Review the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for use with Azure Monitor Agent. For non-Azure servers, [installing the Azure Arc agent](/azure/azure-arc/servers/agent-overview) is an important prerequisite that then helps to install the agent extension and other required extensions. Using Azure Arc for this purpose comes at no added cost. It's not mandatory to use Azure Arc for server management overall. You can continue using your existing non-Azure management solutions. After the Azure Arc agent is installed, you can follow the same guidance in this article across Azure and non-Azure for migration.
## Migration testing+ To ensure safe deployment during migration, begin testing with few resources running Azure Monitor Agent in your nonproduction environment. After you validate the data collected on these test resources, roll out to production by following the same steps.
-See [create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) to start collecting some of the existing data types. Alternatively you can use the [DCR Config Generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator-preview) to convert existing legacy agent configuration into data collection rules.
-After you **validate** that data is flowing as expected with Azure Monitor Agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA collected data. Ensure it matches data flowing through the existing Log Analytics agent.
+To start collecting some of the existing data types, see [Create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association). Alternatively, you can use the [DCR Config Generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator-preview) to convert existing legacy agent configuration into data collection rules.
+
+After you *validate* that data is flowing as expected with Azure Monitor Agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA collected data. Ensure it matches data flowing through the existing Log Analytics agent.
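One way to run this validation from PowerShell is to query the `Heartbeat` table with `Invoke-AzOperationalInsightsQuery`. The workspace ID below is a placeholder, and the query is only a minimal example of comparing agent categories per computer.

```powershell
# Minimal sketch: the workspace ID is a placeholder. The query counts heartbeats
# per agent category so you can compare Azure Monitor Agent with the Log Analytics agent.
$workspaceId = "00000000-0000-0000-0000-000000000000"

$query = @"
Heartbeat
| where TimeGenerated > ago(1h)
| summarize Heartbeats = count() by Computer, Category
| order by Computer asc
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$result.Results | Format-Table Computer, Category, Heartbeats
```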
## At-scale migration using Azure Policy
-We recommend using [Azure Policy](../../governance/policy/overview.md) to migrate a large number of agents. Start by analyzing your current monitoring setup with the Log Analytics agent using the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview) to find sources, such as virtual machines, virtual machine scale sets, and non-Azure servers.
-
+
+We recommend using [Azure Policy](../../governance/policy/overview.md) to migrate a large number of agents. Start by analyzing your current monitoring setup with the Log Analytics agent by using the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview). Use this tool to find sources like virtual machines, virtual machine scale sets, and non-Azure servers.
+ Use the [DCR Config Generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator-preview) to migrate legacy agent configuration, including data sources and destinations, from the workspace to the new DCRs. > [!IMPORTANT]
-> Before you deploy a large number of agents, consider [configuring the workspace](agent-data-sources.md) to disable data collection for the Log Analytics agent. If you leave data collection for the Log Analytics agent enabled, you may collect duplicate data and increase your costs. You might choose to collect duplicate data for a short period during migration until you verify that you've deployed and configured Azure Monitor Agent correctly.
+> Before you deploy a large number of agents, consider [configuring the workspace](agent-data-sources.md) to disable data collection for the Log Analytics agent. If you leave data collection for the Log Analytics agent enabled, you might collect duplicate data and increase your costs. You might choose to collect duplicate data for a short period during migration until you verify that you've deployed and configured Azure Monitor Agent correctly.
-Validate that Azure Monitor Agent is collecting data as expected and all downstream dependencies, such as dashboards, alerts, and workbooks, function properly.
+Validate that Azure Monitor Agent is collecting data as expected and that all downstream dependencies, such as dashboards, alerts, and workbooks, function properly.
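Before uninstalling, you can also spot machines that are still reporting through more than one agent, and are therefore likely to be sending duplicate data. A sketch, assuming both agents report to the same workspace:

```kusto
// Computers that reported heartbeats under more than one agent category in the last day
Heartbeat
| where TimeGenerated > ago(1d)
| summarize AgentCategories = make_set(Category) by Computer
| where array_length(AgentCategories) > 1
```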
After you confirm that Azure Monitor Agent is collecting data properly, [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from monitored resources. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent. > [!IMPORTANT]
-> Don't uninstall the legacy agent if you need to use it for System Center Operations Manager scenarios or others solutions not yet available on Azure Monitor Agent.
+> Don't uninstall the legacy agent if you need to use it for System Center Operations Manager scenarios or other solutions not yet available on Azure Monitor Agent.
## Next steps
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Title: Set up the Azure Monitor agent on Windows client devices (Preview)
+ Title: Set up the Azure Monitor agent on Windows client devices
description: This article describes the instructions to install the agent on Windows 10, 11 client OS devices, configure data collection, manage and troubleshoot the agent. Previously updated : 5/20/2022 Last updated : 10/10/2022
-# Azure Monitor agent on Windows client devices (Preview)
+# Azure Monitor agent on Windows client devices
This article provides instructions and guidance for using the client installer for Azure Monitor Agent. It also explains how to use data collection rules on Windows client devices.
-With the new client installer available in this preview, you can now collect telemetry data from your Windows client devices in addition to servers and virtual machines.
-Both the [generally available extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) and this installer use Data Collection rules to configure the **same underlying agent**.
+Using the new client installer described here, you can now collect telemetry data from your Windows client devices in addition to servers and virtual machines.
+Both the [extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) and this installer use Data Collection rules to configure the **same underlying agent**.
### Comparison with virtual machine extension
-Here is a comparison between client installer and VM extension for Azure Monitor agent. It also highlights which parts are in preview:
+Here is a comparison between the client installer and the VM extension for Azure Monitor Agent:
| Functional component | For VMs/servers via extension | For clients via installer| |:|:|:|
-| Agent installation method | Via VM extension | Via client installer <sup>preview</sup> |
+| Agent installation method | Via VM extension | Via client installer |
| Agent installed | Azure Monitor Agent | Same |
-| Authentication | Using Managed Identity | Using AAD device token <sup>preview</sup> |
+| Authentication | Using Managed Identity | Using AAD device token |
| Central configuration | Via Data collection rules | Same |
-| Associating config rules to agents | DCRs associates directly to individual VM resources | DCRs associate to Monitored Object (MO), which maps to all devices within the AAD tenant <sup>preview</sup> |
+| Associating config rules to agents | DCRs associate directly to individual VM resources | DCRs associate to Monitored Object (MO), which maps to all devices within the AAD tenant |
| Data upload to Log Analytics | Via Log Analytics endpoints | Same | | Feature support | All features documented [here](./azure-monitor-agent-overview.md) | Features dependent on AMA agent extension that don't require additional extensions. This includes support for Sentinel Windows Event filtering | | [Networking options](./azure-monitor-agent-overview.md#networking) | Proxy support, Private link support | Proxy support only |
Here is a comparison between client installer and VM extension for Azure Monitor
| Device type | Supported? | Installation method | Additional information | |:|:|:|:|
-| Windows 10, 11 desktops, workstations | Yes | Client installer (preview) | Installs the agent using a Windows MSI installer |
-| Windows 10, 11 laptops | Yes | Client installer (preview) | Installs the agent using a Windows MSI installer. The installs works on laptops but the agent is **not optimized yet** for battery, network consumption |
+| Windows 10, 11 desktops, workstations | Yes | Client installer | Installs the agent using a Windows MSI installer |
+| Windows 10, 11 laptops | Yes | Client installer | Installs the agent using a Windows MSI installer. The installer works on laptops, but the agent is **not optimized yet** for battery or network consumption |
| Virtual machines, scale sets | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent using Azure extension framework | | On-premises servers | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (with Azure Arc agent) | Installs the agent using Azure extension framework, provided for on-premises by installing Arc agent |
Make sure to start the installer from an administrator command prompt. Silent install
## Questions and feedback
-Take this [quick survey](https://forms.microsoft.com/r/CBhWuT1rmM) or share your feedback/questions regarding the preview on the [Azure Monitor Agent User Community](https://teams.microsoft.com/l/team/19%3af3f168b782f64561b52abe75e59e83bc%40thread.tacv2/conversations?groupId=770d6aa5-c2f7-4794-98a0-84fd6ae7f193&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47).
+Take this [quick survey](https://forms.microsoft.com/r/CBhWuT1rmM) or share your feedback/questions regarding the client installer.
azure-monitor Data Sources Iis Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-iis-logs.md
Azure Monitor collects entries from log files created by IIS, so you must [confi
Azure Monitor only supports IIS log files stored in W3C format and does not support custom fields or IIS Advanced Logging. It does not collect logs in NCSA or IIS native format.
-Configure IIS logs in Azure Monitor from the [Agent configuration menu](../agents/agent-data-sources.md#configuring-data-sources) for the Log Analytics agent. There is no configuration required other than selecting **Collect W3C format IIS log files**.
+Configure IIS logs in Azure Monitor from the [Agent configuration menu](../agents/agent-data-sources.md#configure-data-sources) for the Log Analytics agent. There is no configuration required other than selecting **Collect W3C format IIS log files**.
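Once collection is enabled, the entries are written to the *W3CIISLog* table. A quick sketch to confirm that data is arriving (the grouping columns are standard W3C fields; adjust as needed):

```kusto
// Recent IIS requests collected in W3C format, grouped by computer and site
W3CIISLog
| where TimeGenerated > ago(1h)
| summarize Requests = count() by Computer, sSiteName
| order by Requests desc
```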
## Data collection
azure-monitor Data Sources Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-performance-counters.md
Performance counters in Windows and Linux provide insight into the performance o
![Performance counters](media/data-sources-performance-counters/overview.png) ## Configuring Performance counters
-Configure Performance counters from the [Agents configuration menu](../agents/agent-data-sources.md#configuring-data-sources) for the Log Analytics workspace.
+Configure Performance counters from the [Agents configuration menu](../agents/agent-data-sources.md#configure-data-sources) for the Log Analytics workspace.
When you first configure Windows or Linux Performance counters for a new workspace, you are given the option to quickly create several common counters. They are listed with a checkbox next to each. Ensure that any counters you want to initially create are checked and then click **Add the selected performance counters**.
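Collected counters are written to the *Perf* table. As a rough sketch, assuming the common *% Processor Time* counter is among those you selected, you can chart utilization like this:

```kusto
// Average CPU utilization per computer in 5-minute bins over the last hour
Perf
| where TimeGenerated > ago(1h)
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize AvgCPU = avg(CounterValue) by Computer, bin(TimeGenerated, 5m)
```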
azure-monitor Data Sources Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-syslog.md
The Log Analytics agent for Linux will only collect events with the facilities a
### Configure Syslog in the Azure portal
-Configure Syslog from the [Agent configuration menu](../agents/agent-data-sources.md#configuring-data-sources) for the Log Analytics workspace. This configuration is delivered to the configuration file on each Linux agent.
+Configure Syslog from the [Agent configuration menu](../agents/agent-data-sources.md#configure-data-sources) for the Log Analytics workspace. This configuration is delivered to the configuration file on each Linux agent.
You can add a new facility by selecting **Add facility**. For each facility, only messages with the selected severities will be collected. Select the severities for the particular facility that you want to collect. You can't provide any other criteria to filter messages.
log { source(src); filter(f_user_oms); destination(d_oms); };
The Log Analytics agent listens for Syslog messages on the local client on port 25224. When the agent is installed, a default Syslog configuration is applied and found in the following locations:
-* Rsyslog: `/etc/rsyslog.d/95-omsagent.conf`
-* Syslog-ng: `/etc/syslog-ng/syslog-ng.conf`
+* **Rsyslog**: `/etc/rsyslog.d/95-omsagent.conf`
+* **Syslog-ng**: `/etc/syslog-ng/syslog-ng.conf`
You can change the port number by creating two configuration files: a FluentD config file and an rsyslog or syslog-ng file, depending on the Syslog daemon you have installed.
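Whatever port and daemon you use, collected messages land in the *Syslog* table, where you can filter on the facility and severity values you configured. A minimal sketch (the facility and severity shown are examples only):

```kusto
// Recent error-level messages from the auth facility
Syslog
| where TimeGenerated > ago(1h)
| where Facility == "auth" and SeverityLevel == "err"
| project TimeGenerated, Computer, Facility, SeverityLevel, SyslogMessage
```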
azure-monitor Data Sources Windows Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-windows-events.md
Windows event logs are one of the most common [data sources](../agents/agent-dat
## Configure Windows event logs
-Configure Windows event logs from the [Agents configuration menu](../agents/agent-data-sources.md#configuring-data-sources) for the Log Analytics workspace.
+Configure Windows event logs from the [Agents configuration menu](../agents/agent-data-sources.md#configure-data-sources) for the Log Analytics workspace.
Azure Monitor only collects events from Windows event logs that are specified in the settings. You can add an event log by entering the name of the log and selecting **+**. For each log, only the events with the selected severities are collected. Check the severities for the particular log that you want to collect. You can't provide any other criteria to filter events.
The following table provides different examples of log queries that retrieve Win
| Query | Description | |:|:| | Event |All Windows events. |
-| Event &#124; where EventLevelName == "error" |All Windows events with severity of error. |
+| Event &#124; where EventLevelName == "Error" |All Windows events with severity of error. |
| Event &#124; summarize count() by Source |Count of Windows events by source. |
-| Event &#124; where EventLevelName == "error" &#124; summarize count() by Source |Count of Windows error events by source. |
+| Event &#124; where EventLevelName == "Error" &#124; summarize count() by Source |Count of Windows error events by source. |
## Next steps
azure-monitor Diagnostics Extension Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-logs.md
Title: Use blob storage for IIS and table storage for events in Azure Monitor | Microsoft Docs
-description: Azure Monitor can read the logs for Azure services that write diagnostics to table storage or IIS logs written to blob storage.
+ Title: Use Blob Storage for IIS and Table Storage for events in Azure Monitor | Microsoft Docs
+description: Azure Monitor can read the logs for Azure services that write diagnostics to Azure Table Storage or IIS logs written to Azure Blob Storage.
-# Send data from Azure diagnostics extension to Azure Monitor Logs
-Azure diagnostics extension is an [agent in Azure Monitor](../agents/agents-overview.md) that collects monitoring data from the guest operating system of Azure compute resources including virtual machines. This article describes how to collect data collected by the diagnostics extension from Azure Storage to Azure Monitor Logs.
+# Send data from Azure Diagnostics extension to Azure Monitor Logs
+
+Azure Diagnostics extension is an [agent in Azure Monitor](../agents/agents-overview.md) that collects monitoring data from the guest operating system of Azure compute resources including virtual machines. This article describes how to collect the data that the diagnostics extension writes to Azure Storage into Azure Monitor Logs.
> [!NOTE]
-> The Log Analytics agent in Azure Monitor is typically the preferred method to collect data from the guest operating system into Azure Monitor Logs. See [Overview of the Azure Monitor agents](../agents/agents-overview.md) for a detailed comparison of the agents.
+> The Log Analytics agent in Azure Monitor is typically the preferred method to collect data from the guest operating system into Azure Monitor Logs. For a comparison of the agents, see [Overview of the Azure Monitor agents](../agents/agents-overview.md).
## Supported data types
-Azure diagnostics extension stores data in an Azure Storage account. For Azure Monitor Logs to collect this data, it must be in the following locations:
-| Log Type | Resource Type | Location |
+Azure Diagnostics extension stores data in an Azure Storage account. For Azure Monitor Logs to collect this data, it must be in the following locations:
+
+| Log type | Resource type | Location |
| | | |
-| IIS logs |Virtual Machines <br> Web roles <br> Worker roles |wad-iis-logfiles (Blob Storage) |
-| Syslog |Virtual Machines |LinuxsyslogVer2v0 (Table Storage) |
-| Service Fabric Operational Events |Service Fabric nodes |WADServiceFabricSystemEventTable |
+| IIS logs |Virtual machines <br> Web roles <br> Worker roles |wad-iis-logfiles (Azure Blob Storage) |
+| Syslog |Virtual machines |LinuxsyslogVer2v0 (Azure Table Storage) |
+| Azure Service Fabric Operational Events |Service Fabric nodes |WADServiceFabricSystemEventTable |
| Service Fabric Reliable Actor Events |Service Fabric nodes |WADServiceFabricReliableActorEventTable | | Service Fabric Reliable Service Events |Service Fabric nodes |WADServiceFabricReliableServiceEventTable |
-| Windows Event logs |Service Fabric nodes <br> Virtual Machines <br> Web roles <br> Worker roles |WADWindowsEventLogsTable (Table Storage) |
-| Windows ETW logs |Service Fabric nodes <br> Virtual Machines <br> Web roles <br> Worker roles |WADETWEventTable (Table Storage) |
+| Windows Event logs |Service Fabric nodes <br> Virtual machines <br> Web roles <br> Worker roles |WADWindowsEventLogsTable (Table Storage) |
+| Windows ETW logs |Service Fabric nodes <br> Virtual machines <br> Web roles <br> Worker roles |WADETWEventTable (Table Storage) |
## Data types not supported
+The following data types aren't supported:
+ - Performance data from the guest operating system - IIS logs from Azure websites
+## Enable Azure Diagnostics extension
-## Enable Azure diagnostics extension
-See [Install and configure Windows Azure diagnostics extension (WAD)](../agents/diagnostics-extension-windows-install.md) or [Use Linux Diagnostic Extension to monitor metrics and logs](../../virtual-machines/extensions/diagnostics-linux.md) for details on installing and configuring the diagnostics extension. This will allow you to specify the storage account and to configure collection of the data that you want to forward to Azure Monitor Logs.
-
+For information on how to install and configure the diagnostics extension, see [Install and configure Azure Diagnostics extension for Windows (WAD)](../agents/diagnostics-extension-windows-install.md) or [Use Azure Diagnostics extension for Linux to monitor metrics and logs](../../virtual-machines/extensions/diagnostics-linux.md). You can specify the storage account and configure collection of the data that you want to forward to Azure Monitor Logs.
## Collect logs from Azure Storage
-Use the following procedure to enable collection of diagnostics extension data from an Azure Storage account:
+
+To enable collection of diagnostics extension data from an Azure Storage account:
1. In the Azure portal, go to **Log Analytics Workspaces** and select your workspace.
-1. Click **Storage accounts logs** in the **Workspace Data Sources** section of the menu.
-2. Click **Add**.
-3. Select the **Storage account** that contains the data to collect.
-4. Select the **Data Type** you want to collect.
-5. The value for Source is automatically populated based on the data type.
-6. Click **OK** to save the configuration.
-7. Repeat for additional data types.
+1. Select **Storage accounts logs** in the **Workspace Data Sources** section of the menu.
+1. Select **Add**.
+1. Select the **Storage account** that contains the data to collect.
+1. Select the **Data Type** you want to collect.
+1. The value for **Source** is automatically populated based on the data type.
+1. Select **OK** to save the configuration.
+1. Repeat for more data types.
-In approximately 30 minutes, you are able to see data from the storage account in the Log Analytics workspace. You will only see data that is written to storage after the configuration is applied. The workspace does not read the pre-existing data from the storage account.
+In approximately 30 minutes, you'll see data from the storage account in the Log Analytics workspace. You'll only see data that's written to storage after the configuration is applied. The workspace doesn't read the preexisting data from the storage account.
> [!NOTE]
-> The portal does not validate that the source exists in the storage account or if new data is being written.
--
+> The portal doesn't validate that the source exists in the storage account or if new data is being written.
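One way to confirm that data from the storage account is arriving is to count recent records per table. This sketch uses the generic `search` operator so you don't need to know the destination table in advance:

```kusto
// Count of records ingested into each table over the last hour
search *
| where TimeGenerated > ago(1h)
| summarize Records = count() by $table
| order by Records desc
```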
## Next steps * [Collect logs and metrics for Azure services](../essentials/resource-logs.md#send-to-log-analytics-workspace) for supported Azure services.
-* [Enable Solutions](../insights/solutions.md) to provide insight into the data.
+* [Enable solutions](../insights/solutions.md) to provide insight into the data.
* [Use search queries](../logs/log-query-overview.md) to analyze the data.
azure-monitor Diagnostics Extension Windows Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-windows-install.md
Title: Install and configure Windows Azure diagnostics extension (WAD)
-description: Learn about installing and configuring the Windows diagnostics extension. Also, learn how a description of how the data is stored in and Azure Storage account.
+ Title: Install and configure the Azure Diagnostics extension for Windows (WAD)
+description: Learn about installing and configuring the Azure Diagnostics extension for Windows and how the data is stored in an Azure Storage account.
ms.devlang: azurecli
-# Install and configure Windows Azure diagnostics extension (WAD)
-[Azure diagnostics extension](diagnostics-extension-overview.md) is an agent in Azure Monitor that collects monitoring data from the guest operating system and workloads of Azure virtual machines and other compute resources. This article provides details on installing and configuring the Windows diagnostics extension and a description of how the data is stored in and Azure Storage account.
+# Install and configure the Azure Diagnostics extension for Windows (WAD)
-The diagnostics extension is implemented as a [virtual machine extension](../../virtual-machines/extensions/overview.md) in Azure, so it supports the same installation options using Resource Manager templates, PowerShell, and CLI. See [Virtual machine extensions and features for Windows](../../virtual-machines/extensions/features-windows.md) for details on installing and maintaining virtual machine extensions.
+The [Azure Diagnostics extension](diagnostics-extension-overview.md) is an agent in Azure Monitor that collects monitoring data from the guest operating system and workloads of Azure virtual machines and other compute resources. This article provides information on how to install and configure the Azure Diagnostics extension for Windows and describes how the data is stored in an Azure Storage account.
+
+The diagnostics extension is implemented as a [virtual machine extension](../../virtual-machines/extensions/overview.md) in Azure. It supports the same installation options by using Azure Resource Manager templates, PowerShell, and the Azure CLI. For information on how to install and maintain virtual machine extensions, see [Virtual machine extensions and features for Windows](../../virtual-machines/extensions/features-windows.md).
## Overview
-When you configure Windows Azure the diagnostics extension, you must specify a storage account where all specified data will be sent. You can optionally add one or more *data sinks* to send the data to different locations.
-- Azure Monitor sink - Send guest performance data to Azure Monitor Metrics.-- Event hub sink - Send guest performance and log data to Azure event hubs to forward outside of Azure. This sink cannot be configured in the Azure portal.
+When you configure the Azure Diagnostics extension for Windows, you must specify a storage account where all specified data will be sent. You can optionally add one or more *data sinks* to send the data to different locations:
+- **Azure Monitor sink**: Send guest performance data to Azure Monitor Metrics.
+- **Azure Event Hubs sink**: Send guest performance and log data to event hubs to forward outside of Azure. This sink can't be configured in the Azure portal.
## Install with Azure portal
-You can install and configure the diagnostics extension on an individual virtual machine in the Azure portal which provides you an interface as opposed to working directly with the configuration. When you enable the diagnostics extension, it will automatically use a default configuration with the most common performance counters and events. You can modify this default configuration according to your specific requirements.
+
+You can install and configure the diagnostics extension on an individual virtual machine in the Azure portal. You'll work with an interface as opposed to working directly with the configuration. When you enable the diagnostics extension, it will automatically use a default configuration with the most common performance counters and events. You can modify this default configuration according to your specific requirements.
> [!NOTE]
-> The following describe the most common settings for the diagnostics extension. For details on all of the configuration options, see [Windows diagnostics extension schema](diagnostics-extension-schema-windows.md).
+> The following steps describe the most common settings for the diagnostics extension. For more information on all the configuration options, see [Windows diagnostics extension schema](diagnostics-extension-schema-windows.md).
1. Open the menu for a virtual machine in the Azure portal.
-2. Click on **Diagnostic settings** in the **Monitoring** section of the VM menu.
+1. Select **Diagnostic settings** in the **Monitoring** section of the VM menu.
-3. Click **Enable guest-level monitoring** if the diagnostics extension hasn't already been enabled.
+1. Select **Enable guest-level monitoring** if the diagnostics extension hasn't already been enabled.
- ![Enable monitoring](media/diagnostics-extension-windows-install/enable-monitoring.png)
+ ![Screenshot that shows enabling monitoring.](media/diagnostics-extension-windows-install/enable-monitoring.png)
-4. A new Azure Storage account will be created for the VM with the name will be based on the name of the resource group for the VM, and a default set of guest performance counters and logs will be selected.
+1. A new Azure Storage account will be created for the VM. The name will be based on the name of the resource group for the VM. A default set of guest performance counters and logs will be selected.
- ![Diagnostic settings](media/diagnostics-extension-windows-install/diagnostic-settings.png)
+ ![Screenshot that shows Diagnostic settings.](media/diagnostics-extension-windows-install/diagnostic-settings.png)
-5. In the **Performance counters** tab, select the guest metrics you would like to collect from this virtual machine. Use the **Custom** setting for more advanced selection.
+1. On the **Performance counters** tab, select the guest metrics you want to collect from this virtual machine. Use the **Custom** setting for more advanced selection.
- ![Performance counters](media/diagnostics-extension-windows-install/performance-counters.png)
+ ![Screenshot that shows Performance counters.](media/diagnostics-extension-windows-install/performance-counters.png)
-6. In the **Logs** tab, select the logs to collect from the virtual machine. Logs can be sent to storage or event hubs, but not to Azure Monitor. Use the [Log Analytics agent](../agents/log-analytics-agent.md) to collect guest logs to Azure Monitor.
+1. On the **Logs** tab, select the logs to collect from the virtual machine. Logs can be sent to storage or event hubs, but not to Azure Monitor. Use the [Log Analytics agent](../agents/log-analytics-agent.md) to collect guest logs to Azure Monitor.
- ![Screenshot shows the Logs tab with different logs selected for a virtual machine.](media/diagnostics-extension-windows-install/logs.png)
+ ![Screenshot that shows the Logs tab with different logs selected for a virtual machine.](media/diagnostics-extension-windows-install/logs.png)
-7. In the **Crash dumps** tab, specify any processes to collect memory dumps after a crash. The data will be written to the storage account for the diagnostic setting, and you can optionally specify a blob container.
+1. On the **Crash dumps** tab, specify any processes to collect memory dumps after a crash. The data will be written to the storage account for the diagnostic setting. You can optionally specify a blob container.
- ![Crash dumps](media/diagnostics-extension-windows-install/crash-dumps.png)
+ ![Screenshot that shows the Crash dumps tab.](media/diagnostics-extension-windows-install/crash-dumps.png)
-8. In the **Sinks** tab, specify whether to send the data to locations other than Azure storage. If you select **Azure Monitor**, guest performance data will be sent to Azure Monitor Metrics. You cannot configure the event hubs sink using the Azure portal.
+1. On the **Sinks** tab, specify whether to send the data to locations other than Azure storage. If you select **Azure Monitor**, guest performance data will be sent to Azure Monitor Metrics. You can't configure the event hubs sink by using the Azure portal.
- ![Screenshot shows the Sinks tab with the Send diagnostic data to Azure Monitor option Enabled.](media/diagnostics-extension-windows-install/sinks.png)
+ ![Screenshot that shows the Sinks tab with the Send diagnostic data to Azure Monitor option enabled.](media/diagnostics-extension-windows-install/sinks.png)
- If you have not enabled a System Assigned Identity configured for your virtual machine, you may see the below warning when you save a configuration with the Azure Monitor sink. Click on the banner to enable the system assigned identity.
+ If you haven't configured a system-assigned identity for your virtual machine, you might see the following warning when you save a configuration with the Azure Monitor sink. Select the banner to enable the system-assigned identity.
- ![Managed entity](media/diagnostics-extension-windows-install/managed-entity.png)
+ ![Screenshot that shows the managed identity warning.](media/diagnostics-extension-windows-install/managed-entity.png)
-9. In the **Agent**, you can change the storage account, set the disk quota, and specify whether to collect diagnostic infrastructure logs.
+1. On the **Agent** tab, you can change the storage account, set the disk quota, and specify whether to collect diagnostic infrastructure logs.
- ![Screenshot shows the Agent tab with the option to set the storage account.](media/diagnostics-extension-windows-install/agent.png)
+ ![Screenshot that shows the Agent tab with the option to set the storage account.](media/diagnostics-extension-windows-install/agent.png)
-10. Click **Save** to save the configuration.
+1. Select **Save** to save the configuration.
> [!NOTE]
-> While the configuration for diagnostics extension can be formatted in either JSON or XML, any configuration done in the Azure portal will always be stored as JSON. If you use XML with another configuration method and then change your configuration with the Azure portal, the settings will be changed to JSON. Also, there is no option to set up the retention period for these logs.
+> The configuration for the diagnostics extension can be formatted in either JSON or XML, but any configuration done in the Azure portal will always be stored as JSON. If you use XML with another configuration method and then change your configuration with the Azure portal, the settings will be changed to JSON. Also, there's no option to set up the retention period for these logs.
## Resource Manager template
-See [Use monitoring and diagnostics with a Windows VM and Azure Resource Manager templates](../../virtual-machines/extensions/diagnostics-template.md) on deploying the diagnostics extension with Azure Resource Manager templates.
+
+For information on how to deploy the diagnostics extension with Azure Resource Manager templates, see [Use monitoring and diagnostics with a Windows VM and Azure Resource Manager templates](../../virtual-machines/extensions/diagnostics-template.md).
## Azure CLI deployment
-The Azure CLI can be used to deploy the Azure Diagnostics extension to an existing virtual machine using [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) as in the following example.
+
+The Azure CLI can be used to deploy the Azure Diagnostics extension to an existing virtual machine by using [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) as in the following example:
```azurecli az vm extension set \
az vm extension set \
--settings public-settings.json ```
-The protected settings are defined in the [PrivateConfig element](diagnostics-extension-schema-windows.md#privateconfig-element) of the configuration schema. Following is a minimal example of a protected settings file that defines the storage account. See [Example configuration](diagnostics-extension-schema-windows.md#privateconfig-element) for complete details of the private settings.
+The protected settings are defined in the [PrivateConfig element](diagnostics-extension-schema-windows.md#privateconfig-element) of the configuration schema. The following minimal example of a protected settings file defines the storage account. For complete details of the private settings, see [Example configuration](diagnostics-extension-schema-windows.md#privateconfig-element).
```JSON {
The protected settings are defined in the [PrivateConfig element](diagnostics-ex
} ```
-The public settings are defined in the [Public element](diagnostics-extension-schema-windows.md#publicconfig-element) of the configuration schema. Following is a minimal example of a public settings file that enables collection of diagnostic infrastructure logs, a single performance counter, and a single event log. See [Example configuration](diagnostics-extension-schema-windows.md#publicconfig-element) for complete details of the public settings.
+The public settings are defined in the [Public element](diagnostics-extension-schema-windows.md#publicconfig-element) of the configuration schema. The following minimal example of a public settings file enables collection of diagnostic infrastructure logs, a single performance counter, and a single event log. For complete details of the public settings, see [Example configuration](diagnostics-extension-schema-windows.md#publicconfig-element).
```JSON {
The public settings are defined in the [Public element](diagnostics-extension-sc
} ``` -- ## PowerShell deployment
-PowerShell can be used to deploy the Azure Diagnostics extension to an existing virtual machine using [Set-AzVMDiagnosticsExtension](/powershell/module/servicemanagement/azure.service/set-azurevmdiagnosticsextension) as in the following example.
+
+PowerShell can be used to deploy the Azure Diagnostics extension to an existing virtual machine by using [Set-AzVMDiagnosticsExtension](/powershell/module/servicemanagement/azure.service/set-azurevmdiagnosticsextension), as in the following example:
```powershell Set-AzVMDiagnosticsExtension -ResourceGroupName "myvmresourcegroup" `
Set-AzVMDiagnosticsExtension -ResourceGroupName "myvmresourcegroup" `
-DiagnosticsConfigurationPath "DiagnosticsConfiguration.json" ```
-The private settings are defined in the [PrivateConfig element](diagnostics-extension-schema-windows.md#privateconfig-element), while the public settings are defined in the [Public element](diagnostics-extension-schema-windows.md#publicconfig-element) of the configuration schema. You can also choose to specify the details of the storage account as parameters of the Set-AzVMDiagnosticsExtension cmdlet rather than including them in the private settings.
+The private settings are defined in the [PrivateConfig element](diagnostics-extension-schema-windows.md#privateconfig-element). The public settings are defined in the [Public element](diagnostics-extension-schema-windows.md#publicconfig-element) of the configuration schema. You can also choose to specify the details of the storage account as parameters of the `Set-AzVMDiagnosticsExtension` cmdlet rather than including them in the private settings.
-Following is a minimal example of a configuration file that enables collection of diagnostic infrastructure logs, a single performance counter, and a single event log. See [Example configuration](diagnostics-extension-schema-windows.md#publicconfig-element) for complete details of the private and public settings.
+The following minimal example of a configuration file enables collection of diagnostic infrastructure logs, a single performance counter, and a single event log. For complete details of the private and public settings, see [Example configuration](diagnostics-extension-schema-windows.md#publicconfig-element).
```JSON {
Following is a minimal example of a configuration file that enables collection o
See also [Use PowerShell to enable Azure Diagnostics in a virtual machine running Windows](../../virtual-machines/extensions/diagnostics-windows.md). ## Data storage
-The following table lists the different types of data collected from the diagnostics extension and whether they're stored as a table or a blob. The data stored in tables can also be stored in blobs depending on the [StorageType setting](diagnostics-extension-schema-windows.md#publicconfig-element) in your public configuration.
+The following table lists the different types of data collected from the diagnostics extension and whether they're stored as a table or a blob. The data stored in tables can also be stored in blobs depending on the [StorageType setting](diagnostics-extension-schema-windows.md#publicconfig-element) in your public configuration.
| Data | Storage type | Description | |:|:|:| | WADDiagnosticInfrastructureLogsTable | Table | Diagnostic monitor and configuration changes. |
-| WADDirectoriesTable | Table | Directories that the diagnostic monitor is monitoring. This includes IIS logs, IIS failed request logs, and custom directories. The location of the blob log file is specified in the Container field and the name of the blob is in the RelativePath field. The AbsolutePath field indicates the location and name of the file as it existed on the Azure virtual machine. |
-| WadLogsTable | Table | Logs written in code using the trace listener. |
+| WADDirectoriesTable | Table | Directories that the diagnostic monitor is monitoring. This group includes IIS logs, IIS failed request logs, and custom directories. The location of the blob log file is specified in the Container field and the name of the blob is in the RelativePath field. The AbsolutePath field indicates the location and name of the file as it existed on the Azure virtual machine. |
+| WadLogsTable | Table | Logs written in code by using the trace listener. |
| WADPerformanceCountersTable | Table | Performance counters. | | WADWindowsEventLogsTable | Table | Windows Event logs. | | wad-iis-failedreqlogfiles | Blob | Contains information from IIS Failed Request logs. | | wad-iis-logfiles | Blob | Contains information about IIS logs. |
-| "custom" | Blob | A custom container based on configuring directories that are monitored by the diagnostic monitor. The name of this blob container will be specified in WADDirectoriesTable. |
+| "custom" | Blob | A custom container based on configuring directories that are monitored by the diagnostic monitor. The name of this blob container will be specified in WADDirectoriesTable. |
## Tools to view diagnostic data
-Several tools are available to view the data after it is transferred to storage. For example:
-* Server Explorer in Visual Studio - If you have installed the Azure Tools for Microsoft Visual Studio, you can use the Azure Storage node in Server Explorer to view read-only blob and table data from your Azure storage accounts. You can display data from your local storage emulator account and also from storage accounts you have created for Azure. For more information, see [Browsing and Managing Storage Resources with Server Explorer](/visualstudio/azure/vs-azure-tools-storage-resources-server-explorer-browse-manage).
-* [Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md) is a standalone app that enables you to easily work with Azure Storage data on Windows, OSX, and Linux.
-* [Azure Management Studio](https://cerebrata.com/blog/introducing-azure-management-studio-and-azure-explorer) includes Azure Diagnostics Manager which allows you to view, download and manage the diagnostics data collected by the applications running on Azure.
+Several tools are available to view the data after it's transferred to storage. For example:
+
+* **Server Explorer in Visual Studio**: If you've installed the Azure Tools for Microsoft Visual Studio, you can use the Azure Storage node in Server Explorer to view read-only blob and table data from your Azure Storage accounts. You can display data from your local storage emulator account and from storage accounts you've created for Azure. For more information, see [Browsing and managing storage resources with Server Explorer](/visualstudio/azure/vs-azure-tools-storage-resources-server-explorer-browse-manage).
+* [Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md): This standalone app enables you to easily work with Azure Storage data on Windows, macOS, and Linux.
+* [Azure Management Studio](https://cerebrata.com/blog/introducing-azure-management-studio-and-azure-explorer): This tool includes Azure Diagnostics Manager. Use it to view, download, and manage the diagnostics data collected by the applications running on Azure.
## Next steps-- See [Send data from Windows Azure diagnostics extension to Event Hubs](diagnostics-extension-stream-event-hubs.md) for details on forwarding monitoring data to Azure Event Hubs.+
+For information on forwarding monitoring data to Azure Event Hubs, see [Send data from Azure Diagnostics extension to Event Hubs](diagnostics-extension-stream-event-hubs.md).
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
description: Learn how to create a new alert rule.
+ Last updated 08/23/2022
The *sampleActivityLogAlert.parameters.json* file contains the values provided f
If you're creating a new log alert rule, note that the current alert rule wizard is a little different from the earlier experience: - Previously, search results were included in the payload of the triggered alert and its associated notifications. The email included only 10 rows from the unfiltered results, while the webhook payload contained 1000 unfiltered results. To get detailed context information about the alert so that you can decide on the appropriate action:
- - We recommend using [Dimensions](alerts-types.md#narrow-the-target-by-using-dimensions). Dimensions provide the column value that fired the alert, giving you context for why the alert fired and how to fix the issue.
+ - We recommend using [Dimensions](alerts-types.md#narrow-the-target-using-dimensions). Dimensions provide the column value that fired the alert, giving you context for why the alert fired and how to fix the issue.
- When you need to investigate in the logs, use the link in the alert to the search results in Logs. - If you need the raw search results or for any other advanced customizations, use Logic Apps. - The new alert rule wizard doesn't support customization of the JSON payload.
If you're creating a new log alert rule, please note that current alert rule wiz
- For more advanced customizations, use Logic Apps. ## Next steps
+ - [View and manage your alert instances](alerts-manage-alert-instances.md)
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
Last updated 09/14/2022-+ # Types of Azure Monitor alerts
-This article describes the kinds of Azure Monitor alerts you can create and helps you understand when to use each type of alert.
-
-Azure Monitor has four types of alerts:
+This article describes the kinds of Azure Monitor alerts you can create, and helps you understand when to use each type of alert.
+There are five types of alerts:
- [Metric alerts](#metric-alerts)
+- [Prometheus alerts](#prometheus-alerts-preview)
- [Log alerts](#log-alerts) - [Activity log alerts](#activity-log-alerts) - [Smart detection alerts](#smart-detection-alerts)
-## Choose the right alert type
+## Choosing the right alert type
-This table can help you decide when to use each type of alert. For more information about pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+This table can help you decide when to use each type of alert. For more detailed information about pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
-|Alert type |When to use |Pricing information|
+|Alert Type |When to Use |Pricing Information|
||||
-|Metric alert|Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. Metric data is stored in the system already pre-computed, so metric alerts are less expensive than log alerts. If the data you want to monitor is available in metric data, we recommend that you use metric alerts.|Each metric alert rule is charged based on the number of time series that are monitored. |
-|Log alert|Log alerts allow you to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of KQL for data manipulation by using log alerts. Log alerts are more expensive than metric alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated. More frequent query evaluation results in a higher cost. For log alerts configured for [at-scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. |
-|Activity log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to receive an alert when a resource experiences a specific event. Examples are a restart, a shutdown, or the creation or deletion of a resource.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).|
-
+|Metric alert|Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. Metric data is stored in the system already pre-computed, so metric alerts are less expensive than log alerts. If the data you want to monitor is available in metric data, using metric alerts is recommended.|Each metric alert rule is charged based on the number of time-series that are monitored. |
+|Log alert|Log alerts allow you to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of KQL for data manipulation using log alerts. Log alerts are more expensive than metric alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for log alerts configured for [at scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost will also depend on the number of time series created by the dimensions resulting from your query. |
+|Activity Log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource, for example, a restart, a shutdown, or the creation or deletion of a resource.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).|
+|Prometheus alerts (preview)| Prometheus alerts are primarily used for alerting on performance and health of Kubernetes clusters (including AKS). The alert rules are based on PromQL, which is an open source query language. | There is no charge for Prometheus alerts during the preview period. |
## Metric alerts A metric alert rule monitors a resource by evaluating conditions on the resource metrics at regular intervals. If the conditions are met, an alert is fired. A metric time-series is a series of metric values captured over a period of time.
-You can create rules by using these metrics:
-
+You can create rules using these metrics:
- [Platform metrics](alerts-metric-near-real-time.md#metrics-and-dimensions-supported) - [Custom metrics](../essentials/metrics-custom-overview.md) - [Application Insights custom metrics](../app/api-custom-events-metrics.md) - [Selected logs from a Log Analytics workspace converted to metrics](alerts-metric-logs.md) Metric alert rules include these features:- - You can use multiple conditions on an alert rule for a single resource.-- You can add granularity by [monitoring multiple metric dimensions](#narrow-the-target-by-using-dimensions).-- You can use [dynamic thresholds](#dynamic-thresholds) driven by machine learning.
+- You can add granularity by [monitoring multiple metric dimensions](#narrow-the-target-using-dimensions).
+- You can use [Dynamic thresholds](#dynamic-thresholds) driven by machine learning.
- You can configure if metric alerts are [stateful or stateless](alerts-overview.md#alerts-and-state). Metric alerts are stateful by default. The target of the metric alert rule can be:--- A single resource, such as a VM. For supported resource types, see [Supported resources for metric alerts in Azure Monitor](alerts-metric-near-real-time.md).
+- A single resource, such as a VM. See [this article](alerts-metric-near-real-time.md) for supported resource types.
- [Multiple resources](#monitor-multiple-resources) of the same type in the same Azure region, such as a resource group. ### Multiple conditions
-When you create an alert rule for a single resource, you can apply multiple conditions. For example, you could create an alert rule to monitor an Azure virtual machine and alert when both "Percentage CPU is higher than 90%" and "Queue length is over 300 items." When an alert rule has multiple conditions, the alert fires when all the conditions in the alert rule are true. The alert resolves when at least one of the conditions is no longer true for three consecutive checks.
-
-### Narrow the target by using dimensions
+When you create an alert rule for a single resource, you can apply multiple conditions. For example, you could create an alert rule to monitor an Azure virtual machine and alert when both "Percentage CPU is higher than 90%" and "Queue length is over 300 items". When an alert rule has multiple conditions, the alert fires when all the conditions in the alert rule are true, and it resolves when at least one of the conditions is no longer true for three consecutive checks.
+### Narrow the target using dimensions
-Dimensions are name-value pairs that contain more data about the metric value. Using dimensions allows you to filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values.
+Dimensions are name-value pairs that contain more data about the metric value. Using dimensions allows you to filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values.
+For example, the Transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, PutPage). You can choose to have an alert fired when there's a high number of transactions in any API name (which is the aggregated data), or you can use dimensions to further break it down to alert only when the number of transactions is high for specific API names.
+If you use more than one dimension, the metric alert rule can monitor multiple dimension values from different dimensions of a metric.
+The alert rule separately monitors all the dimension value combinations.
+See [this article](alerts-metric-multiple-time-series-single-rule.md) for detailed instructions on using dimensions in metric alert rules.
-For example, the transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction. Examples are GetBlob, DeleteBlob, and PutPage. You can choose to have an alert fired when there's a high number of transactions in any API name, which is the aggregated data. Or you can use dimensions to further break it down to alert only when the number of transactions is high for specific API names.
+### Create resource-centric alerts using splitting by dimensions
-If you use more than one dimension, the metric alert rule can monitor multiple dimension values from different dimensions of a metric. The alert rule separately monitors all the dimension value combinations.
-For instructions on how to use dimensions in metric alert rules, see [Monitor multiple time series in a single metric alert rule](alerts-metric-multiple-time-series-single-rule.md).
+To monitor for the same condition on multiple Azure resources, you can use splitting by dimensions. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations. Splitting on the Azure resource ID column makes the specified resource into the alert target.
-### Create resource-centric alerts by splitting by dimensions
-
-To monitor for the same condition on multiple Azure resources, you can use the technique of splitting by dimensions. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations. Splitting on the Azure resource ID column makes the specified resource into the alert target.
-
-You might also decide not to split when you want a condition applied to multiple resources in the scope. For example, you might want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
+You may also decide not to split when you want a condition applied to multiple resources in the scope. For example, you might want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
### Monitor multiple resources You can monitor at scale by applying the same metric alert rule to multiple resources of the same type for resources that exist in the same Azure region. Individual notifications are sent for each monitored resource.
-Platform metrics are supported in the Azure cloud for the following
+Platform metrics for these services are supported in the following Azure clouds:
| Service | Global Azure | Government | China | |:--|:-|:--|:--|
-| Azure Virtual Machines | Yes |Yes | Yes |
-| SQL Server databases | Yes | Yes | Yes |
-| SQL Server elastic pools | Yes | Yes | Yes |
-| Azure NetApp Files capacity pools | Yes | Yes | Yes |
-| Azure NetApp Files volumes | Yes | Yes | Yes |
-| Azure Key Vault | Yes | Yes | Yes |
+| Virtual machines* | Yes | Yes | Yes |
+| SQL server databases | Yes | Yes | Yes |
+| SQL server elastic pools | Yes | Yes | Yes |
+| NetApp files capacity pools | Yes | Yes | Yes |
+| NetApp files volumes | Yes | Yes | Yes |
+| Key vaults | Yes | Yes | Yes |
| Azure Cache for Redis | Yes | Yes | Yes | | Azure Stack Edge devices | Yes | Yes | Yes | | Recovery Services vaults | Yes | No | No |
-| Azure Database for PostgreSQL - Flexible servers | Yes | Yes | Yes |
+| Azure Database for PostgreSQL - Flexible Servers | Yes | Yes | Yes |
> [!NOTE]
- > Multi-resource metric alerts aren't supported for the following scenarios:
- >
- > - Alerting on virtual machines' guest metrics.
- > - Alerting on virtual machines' network metrics. These metrics include Network In Total, Network Out Total, Inbound Flows, Outbound Flows, Inbound Flows Maximum Creation Rate, and Outbound Flows Maximum Creation Rate.
+ > Multi-resource metric alerts are not supported for the following scenarios:
+ > - Alerting on virtual machines' guest metrics
+ > - Alerting on virtual machines' network metrics (Network In Total, Network Out Total, Inbound Flows, Outbound Flows, Inbound Flows Maximum Creation Rate, Outbound Flows Maximum Creation Rate).
-You can specify the scope of monitoring with a single metric alert rule in one of three ways. For example, with virtual machines you can specify the scope as:
+You can specify the scope of monitoring with a single metric alert rule in one of three ways. For example, with virtual machines you can specify the scope as:
-- A list of virtual machines in one Azure region within a subscription.-- All virtual machines in one Azure region in one or more resource groups in a subscription.-- All virtual machines in one Azure region in a subscription.
+- a list of virtual machines (in one Azure region) within a subscription
+- all virtual machines (in one Azure region) in one or more resource groups in a subscription
+- all virtual machines (in one Azure region) in a subscription
### Dynamic thresholds
-Dynamic thresholds use advanced machine learning to:
--- Learn the historical behavior of metrics.-- Identify patterns and adapt to metric changes over time, such as hourly, daily, or weekly patterns.-- Recognize anomalies that indicate possible service issues.-- Calculate the most appropriate threshold for the metric.
+Dynamic thresholds use advanced machine learning (ML) to:
+- Learn the historical behavior of metrics
+- Identify patterns and adapt to metric changes over time, such as hourly, daily or weekly patterns.
+- Recognize anomalies that indicate possible service issues
+- Calculate the most appropriate threshold for the metric
-Machine learning continuously uses new data to learn more and make the threshold more accurate. The system adapts to the metrics' behavior over time and alerts based on deviations from its pattern. For this reason, you don't have to know the "right" threshold for each metric.
+Machine learning continuously uses new data to learn more and make the threshold more accurate. Because the system adapts to the metrics' behavior over time and alerts based on deviations from its pattern, you don't have to know the "right" threshold for each metric.
Dynamic thresholds help you:
-
- Create scalable alerts for hundreds of metric series with one alert rule. If you have fewer alert rules, you spend less time creating and managing alert rules.
-- Create rules without having to know what threshold to configure.
-- Configure metric alerts by using high-level concepts without extensive domain knowledge about the metric.
-- Prevent noisy (low precision) or wide (low recall) thresholds that don't have an expected pattern.
+- Create rules without having to know what threshold to configure
+- Configure metric alerts using high-level concepts without extensive domain knowledge about the metric
+- Prevent noisy (low precision) or wide (low recall) thresholds that don't have an expected pattern
- Handle noisy metrics (such as machine CPU or memory) and metrics with low dispersion (such as availability and error rate).
-For instructions on how to use dynamic thresholds in metric alert rules, see [Dynamic thresholds in metric alerts](alerts-dynamic-thresholds.md).
+See [this article](alerts-dynamic-thresholds.md) for detailed instructions on using dynamic thresholds in metric alert rules.
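The same command can sketch a dynamic threshold: the fixed threshold is replaced by a `dynamic` clause with a sensitivity and a violations-out-of-evaluations ratio. The resource ID and names below are placeholders, and the condition grammar is an assumption based on recent `az monitor metrics alert create` versions, so verify it against your CLI.

```azurecli
# Hypothetical rule with a dynamic threshold: medium sensitivity, firing after
# 2 of the last 4 evaluations deviate above the learned CPU baseline.
az monitor metrics alert create \
  --name "cpu-dynamic-threshold" \
  --resource-group "MyMonitoringRG" \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/MyVmRG/providers/Microsoft.Compute/virtualMachines/myvm" \
  --condition "avg Percentage CPU > dynamic medium 2 of 4" \
  --description "Fire when CPU deviates above its learned baseline"
```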
## Log alerts

A log alert rule monitors a resource by using a Log Analytics query to evaluate resource logs at a set frequency. If the conditions are met, an alert is fired. Because you can use Log Analytics queries, you can perform advanced logic operations on your data and use the robust KQL features to manipulate log data. The target of the log alert rule can be:
-
-- A single resource, such as a VM.
-- Multiple resources of the same type in the same Azure region, such as a resource group. This capability is currently available for selected resource types.
-- Multiple resources using [cross-resource query](../logs/cross-workspace-query.md#querying-across-log-analytics-workspaces-and-from-application-insights).
+- A single resource, such as a VM.
+- Multiple resources of the same type in the same Azure region, such as a resource group. This is currently available for selected resource types.
+- Multiple resources using [cross-resource query](../logs/cross-workspace-query.md#querying-across-log-analytics-workspaces-and-from-application-insights).
Log alerts can measure two different things, which can be used for different monitoring scenarios:
-
-- **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions.
-- **Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage.
+- Table rows: The number of rows returned can be used to work with events such as Windows event logs, syslog, and application exceptions.
+- Calculation of a numeric column: Calculations based on any numeric column can be used to include any number of resources. For example, CPU percentage (a hedged CLI sketch follows this list).
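As a hedged sketch of the first measurement type, the following command creates a log alert that fires on the number of rows returned by a Log Analytics query. The workspace ID, rule name, and query are placeholders, and the placeholder-based `--condition`/`--condition-query` syntax is assumed from `az monitor scheduled-query create`; check it against your CLI version.

```azurecli
# Hypothetical log alert: fire when more than 10 error rows are returned in the window.
az monitor scheduled-query create \
  --name "error-row-count" \
  --resource-group "MyMonitoringRG" \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/MyRG/providers/Microsoft.OperationalInsights/workspaces/MyWorkspace" \
  --condition "count 'Placeholder_1' > 10" \
  --condition-query Placeholder_1="union Event, Syslog | where EventLevelName == 'Error' or SeverityLevel == 'err'" \
  --evaluation-frequency 5m \
  --window-size 15m \
  --description "More than 10 error rows in the evaluation window"
```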
You can configure if log alerts are [stateful or stateless](alerts-overview.md#alerts-and-state) (currently in preview).

> [!NOTE]
-> Log alerts work best when you're trying to detect specific data in the logs, as opposed to when you're trying to detect a lack of data in the logs. Because logs are semi-structured data, they're inherently more latent than metric data on information like a VM heartbeat. To avoid misfires when you're trying to detect a lack of data in the logs, consider using [metric alerts](#metric-alerts). You can send data to the metric store from logs by using [metric alerts for logs](alerts-metric-logs.md).
+> Log alerts work best when you are trying to detect specific data in the logs, as opposed to when you are trying to detect a **lack** of data in the logs. Since logs are semi-structured data, they are inherently more latent than metric data on information like a VM heartbeat. To avoid misfires when you are trying to detect a lack of data in the logs, consider using [metric alerts](#metric-alerts). You can send data to the metric store from logs using [metric alerts for logs](alerts-metric-logs.md).
### Dimensions in log alert rules
-You can use dimensions when you create log alert rules to monitor the values of multiple instances of a resource with one rule. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually and notifications are sent for each instance.
+You can use dimensions when creating log alert rules to monitor the values of multiple instances of a resource with one rule. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually, and notifications are sent for each instance.
### Splitting by dimensions in log alert rules
-To monitor for the same condition on multiple Azure resources, you can use the technique of splitting by dimensions. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations using numerical or string columns. Splitting on the Azure resource ID column makes the specified resource into the alert target.
-
-You might also decide not to split when you want a condition applied to multiple resources in the scope. For example, you might want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
+To monitor for the same condition on multiple Azure resources, you can use splitting by dimensions. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations using numerical or string columns. Splitting on the Azure resource ID column makes the specified resource into the alert target.
+You may also decide not to split when you want a condition applied to multiple resources in the scope. For example, you might want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
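A sketch of splitting by dimensions, under the same assumptions about `az monitor scheduled-query create` as the earlier example: the query groups by the `_ResourceId` column and the condition adds `resource id _ResourceId`, so each virtual machine reporting to the (placeholder) workspace becomes its own alert target.

```azurecli
# Hypothetical resource-centric rule: one alert per VM whose average CPU exceeds 80%.
az monitor scheduled-query create \
  --name "cpu-per-vm" \
  --resource-group "MyMonitoringRG" \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/MyRG/providers/Microsoft.OperationalInsights/workspaces/MyWorkspace" \
  --condition "count 'Placeholder_1' > 0 resource id _ResourceId at least 2 violations out of 3 aggregated points" \
  --condition-query Placeholder_1="Perf | where ObjectName == 'Processor' and CounterName == '% Processor Time' | summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 5m), _ResourceId | where AggregatedValue > 80" \
  --description "Per-VM alert when average CPU is over 80%"
```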
-### Use the API
+### Using the API
-Manage new rules in your workspaces by using the [ScheduledQueryRules](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) API.
+Manage new rules in your workspaces using the [ScheduledQueryRules](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) API.
> [!NOTE]
-> Log alerts for Log Analytics was previously managed by using the legacy [Log Analytics Alert API](api-alerts.md). Learn more about [switching to the current scheduledQueryRules API](alerts-log-api-switch.md).
-
+> Log alerts for Log Analytics used to be managed using the legacy [Log Analytics Alert API](api-alerts.md). Learn more about [switching to the current ScheduledQueryRules API](alerts-log-api-switch.md).
## Log alerts on your Azure bill
-Log alerts are listed under the resource provider `microsoft.insights/scheduledqueryrules` with:
-
-- Log alerts on Application Insights shown with the exact resource name along with resource group and alert properties.
-- Log alerts on Log Analytics shown with the exact resource name along with resource group and alert properties when they're created by using the scheduledQueryRules API.
-- Log alerts created from the [legacy Log Analytics API](./api-alerts.md) aren't tracked in [Azure Resources](../../azure-resource-manager/management/overview.md) and don't have enforced unique resource names. These alerts are still created on `microsoft.insights/scheduledqueryrules` as hidden resources. They have the resource naming structure `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Log alerts on the legacy API are shown with the preceding hidden resource name along with resource group and alert properties.
-
-> [!NOTE]
->
-> Unsupported resource characters such as <, >, %, &, \, ?, and / are replaced with an underscore character (_) in the hidden resource names. This change also appears in the billing information.
+Log alerts are listed under the resource provider `microsoft.insights/scheduledqueryrules` with:
+- Log alerts on Application Insights shown with the exact resource name along with resource group and alert properties.
+- Log alerts on Log Analytics shown with the exact resource name along with resource group and alert properties when created using the scheduledQueryRules API.
+- Log alerts created from the [legacy Log Analytics API](./api-alerts.md) aren't tracked in [Azure Resources](../../azure-resource-manager/management/overview.md) and don't have enforced unique resource names. These alerts are still created on `microsoft.insights/scheduledqueryrules` as hidden resources, which have this resource naming structure: `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Log alerts on the legacy API are shown with the above hidden resource name along with resource group and alert properties.
+> [!NOTE]
+> Unsupported resource characters such as <, >, %, &, \, ?, and / are replaced with an underscore (_) in the hidden resource names, and this change is also reflected in the billing information.
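If you want to reconcile these entries against your bill, a generic listing of the `scheduledqueryrules` resources in the current subscription may help. This is only a sketch, and whether hidden legacy resources appear in the output depends on your environment.

```azurecli
# List scheduled query rule resources (log alerts) in the current subscription.
az resource list \
  --resource-type "microsoft.insights/scheduledqueryrules" \
  --query "[].{name:name, resourceGroup:resourceGroup, location:location}" \
  --output table
```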
## Activity log alerts
-An activity log alert monitors a resource by checking the activity logs for a new activity log event that matches the defined conditions.
-
-You might want to use activity log alerts for these types of scenarios:
+An activity log alert monitors a resource by checking the activity logs for a new activity log event that matches the defined conditions.
-- When a specific operation occurs on resources in a specific resource group or subscription. For example, you might want to be notified when:
- - Any virtual machine in a production resource group is deleted.
+You may want to use activity log alerts for these types of scenarios:
+- When a specific operation occurs on resources in a specific resource group or subscription. For example, you may want to be notified when:
+ - Any virtual machine in a production resource group is deleted.
  - Any new roles are assigned to a user in your subscription.
-- When a service health event occurs. Service health events include notifications of incidents and maintenance events that apply to resources in your subscription.
+- A service health event occurs. Service health events include notifications of incidents and maintenance events that apply to resources in your subscription.
You can create an activity log alert on:
+- Any of the activity log [event categories](../essentials/activity-log-schema.md), other than on alert events.
+- Any activity log event in a top-level property in the JSON object.
-- Any of the activity log [event categories](../essentials/activity-log-schema.md), other than on alert events.
-- Any activity log event in a top-level property in the JSON object.
-
-Activity log alert rules are Azure resources, so you can use an Azure Resource Manager template to create them. You can also create, update, or delete activity log alert rules in the Azure portal.
+Activity log alert rules are Azure resources, so they can be created by using an Azure Resource Manager template. They also can be created, updated, or deleted in the Azure portal.
An activity log alert only monitors events in the subscription in which the alert is created.
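As an illustrative sketch of the first scenario above, the following command creates an activity log alert that fires when any virtual machine in the subscription is deleted. The subscription ID, resource group, and action group are hypothetical placeholders, and the key/value `--condition` format is assumed from `az monitor activity-log alert create`.

```azurecli
# Hypothetical activity log alert: notify when any virtual machine in the subscription is deleted.
az monitor activity-log alert create \
  --name "vm-delete-alert" \
  --resource-group "MyMonitoringRG" \
  --scope "/subscriptions/<subscription-id>" \
  --condition category=Administrative and operationName=Microsoft.Compute/virtualMachines/delete \
  --action-group "/subscriptions/<subscription-id>/resourceGroups/MyMonitoringRG/providers/microsoft.insights/actionGroups/MyActionGroup" \
  --description "A virtual machine in this subscription was deleted"
```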
-## Smart detection alerts
+## Smart Detection alerts
-After you set up Application Insights for your project, your app begins to generate data. Based on this data, Smart Detection takes 24 hours to learn the normal behavior of your app. Your app's performance has a typical pattern of behavior. Some requests or dependency calls will be more prone to failure than others. The overall failure rate might go up as load increases.
+After setting up Application Insights for your project, when your app generates a certain minimum amount of data, Smart Detection takes 24 hours to learn the normal behavior of your app. Your app's performance has a typical pattern of behavior. Some requests or dependency calls will be more prone to failure than others; and the overall failure rate may go up as load increases. Smart Detection uses machine learning to find these anomalies. Smart Detection monitors the data received from your app, and in particular the failure rates. Application Insights automatically alerts you in near real time if your web app experiences an abnormal rise in the rate of failed requests.
-Smart Detection uses machine learning to find these anomalies. Smart Detection monitors the data received from your app, and especially the failure rates. Application Insights automatically alerts you in near real time if your web app experiences an abnormal rise in the rate of failed requests.
+As data comes into Application Insights from your web app, Smart Detection compares the current behavior with the patterns seen over the past few days. If there's an abnormal rise in failure rate compared to previous performance, an analysis is triggered. To help you triage and diagnose the problem, an analysis of the characteristics of the failures and related application data is provided in the alert details. There are also links to the Application Insights portal for further diagnosis. The feature needs no setup or configuration, as it uses machine learning algorithms to predict the normal failure rate.
-As data comes into Application Insights from your web app, Smart Detection compares the current behavior with the patterns seen over the past few days. If there's an abnormal rise in failure rate compared to previous performance, an analysis is triggered. To help you triage and diagnose the problem, an analysis of the characteristics of the failures and related application data is provided in the alert details. There are also links to the Application Insights portal for further diagnosis. The feature doesn't need setup or configuration because it uses machine learning algorithms to predict the normal failure rate.
+While metric alerts tell you there might be a problem, Smart Detection starts the diagnostic work for you, performing much of the analysis you would otherwise have to do yourself. You get the results neatly packaged, helping you get to the root of the problem quickly.
-Metric alerts tell you there might be a problem, but Smart Detection starts the diagnostic work for you. It performs much of the analysis you would otherwise have to do yourself. You get the results neatly packaged, which helps you get to the root of the problem quickly.
+Smart Detection works for web apps hosted in the cloud or on your own servers that generate application requests or dependency data.
-Smart Detection works for web apps hosted in the cloud or on your own servers that generate application requests or dependency data.
+## Prometheus alerts (preview)
-## Next steps
+Prometheus alerts are based on metric values stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). They fire when the result of a PromQL query resolves to true. Prometheus alerts are displayed and managed like other alert types when they fire, but they are configured with a Prometheus rule group. See [Rule groups in Azure Monitor managed service for Prometheus](../essentials/prometheus-rule-groups.md) for details.
+## Next steps
- Get an [overview of alerts](alerts-overview.md).
- [Create an alert rule](alerts-log.md).
- Learn more about [Smart Detection](proactive-failure-diagnostics.md).
azure-monitor Alerts Understand Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-understand-migration.md
Title: Understand migration for Azure Monitor alerts description: Understand how the alerts migration works and troubleshoot problems. + Last updated 2/23/2022
Classic alert rules on Percent metrics must be migrated based on [the mapping be
Classic alert rules on AnonymousThrottlingError, SASThrottlingError, and ThrottlingError must be split into two new alerts because there's no combined metric that provides the same functionality. Thresholds will need to be adapted appropriately.
-### Cosmos DB metrics
+### Azure Cosmos DB metrics
-All classic alerts on Cosmos DB metrics can be migrated except alerts on these metrics:
+All classic alerts on Azure Cosmos DB metrics can be migrated except alerts on these metrics:
- Average Requests per Second
- Consistency Level
For Storage account services like blob, table, file, and queue, the following me
### Microsoft.DocumentDB/databaseAccounts
-For Cosmos DB, equivalent metrics are as shown below:
+For Azure Cosmos DB, equivalent metrics are as shown below:
| Metric in classic alerts | Equivalent metric in new alerts | Comments |
|--|--|--|
azure-monitor Prometheus Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/prometheus-alerts.md
+
+ Title: Prometheus metric alerts in Azure Monitor
+description: Overview of Prometheus alert rules in Azure Monitor generated by data in Azure Monitor managed services for Prometheus.
+++ Last updated : 09/15/2022++
+# Prometheus metric alerts in Azure Monitor
+Prometheus alert rules allow you to define alert conditions by using queries written in Prometheus Query Language (PromQL) that are applied to Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). Whenever the alert query results in one or more time series meeting the condition, the alert counts as pending for those metric and label sets. A pending alert becomes active after a user-defined period of time during which all the consecutive query evaluations for the respective time series meet the alert condition. Once an alert becomes active, it is fired and triggers the actions or notifications of your choice, as defined in the Azure action groups configured in your alert rule.
+
+> [!NOTE]
+> Azure Monitor managed service for Prometheus, including Prometheus metrics, is currently in public preview and does not yet have all of its features enabled. Prometheus alerts are displayed with alerts generated by other types of alert rules, but they currently have a different experience for creating and managing them.
+
+## Create Prometheus alert rule
+Prometheus alert rules are created as part of a Prometheus rule group, which is stored in an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). See [Azure Monitor managed service for Prometheus rule groups](../essentials/prometheus-rule-groups.md) for details.
+
+## View Prometheus metric alerts
+View fired and resolved Prometheus alerts in the Azure portal with other alert types. Use the following steps to filter on only Prometheus metric alerts.
+
+1. From the **Monitor** menu in the Azure portal, select **Alerts**.
+2. If **Monitoring Service** isn't displayed as a filter option, then select **Add Filter** and add it.
+3. Set the filter **Monitoring Service** to **Prometheus** to see Prometheus alerts.
++
+4. Click the alert name to view the details of a specific fired/resolved alert.
++
+## Next steps
+
+- [Create a Prometheus rule group](../essentials/prometheus-rule-groups.md).
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
This limitation isn't applicable from version [2.15.0](https://www.nuget.org/pac
This SDK requires `HttpContext`. Therefore, it doesn't work in any non-HTTP applications, including the .NET Core 3.X Worker Service applications. To enable Application Insights in such applications using the newly released Microsoft.ApplicationInsights.WorkerService SDK, see [Application Insights for Worker Service applications (non-HTTP applications)](worker-service.md).
+## Troubleshooting
++ ## Open-source SDK * [Read and contribute to the code](https://github.com/microsoft/ApplicationInsights-dotnet).
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
For the template-based ASP.NET MVC app from this article, the file that you need
See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/asp-net-troubleshoot-no-data).
-There's a known issue in the current version of Visual Studio 2019: storing the instrumentation key or connection string in a user secret is broken for .NET Framework-based apps. The key ultimately has to be hardcoded into the *applicationinsights.config* file to work around this bug. This article is designed to avoid this issue entirely, by not using user secrets.
+There's a known issue in the current version of Visual Studio 2019: storing the instrumentation key or connection string in a user secret is broken for .NET Framework-based apps. The key ultimately has to be hardcoded into the *applicationinsights.config* file to work around this bug. This article is designed to avoid this issue entirely, by not using user secrets.
+ ## Open-source SDK
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
Extension execution output is logged to files found in the following directories
C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.ApplicationMonitoringWindows\<version>\ ``` + ## Release notes ### 2.8.44
azure-monitor Azure Web Apps Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md
Below is our step-by-step troubleshooting guide for Java-based applications runn
> [!NOTE] > If you set the JAVA_OPTS environment variable, you will have to disable Application Insights in the portal. Alternatively, if you prefer to enable Application Insights from the portal, make sure that you don't set the JAVA_OPTS variable in App Service configurations settings. - [!INCLUDE [azure-web-apps-troubleshoot](../../../includes/azure-monitor-app-insights-azure-web-apps-troubleshoot.md)] + ## Release notes For the latest updates and bug fixes, [consult the release notes](web-app-extension-release-notes.md).
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Below is our step-by-step troubleshooting guide for extension/agent based monito
-#### Default website deployed with web apps doesn't support automatic client-side monitoring
+### Default website deployed with web apps doesn't support automatic client-side monitoring
When you create a web app with the `ASP.NET Core` runtimes in Azure App Services, it deploys a single static HTML page as a starter website. The static webpage also loads an ASP.NET managed web part in IIS. This behavior allows for testing codeless server-side monitoring, but doesn't support automatic client-side monitoring.
If you wish to test out codeless server and client-side monitoring for ASP.NET C
[!INCLUDE [azure-web-apps-troubleshoot](../../../includes/azure-monitor-app-insights-azure-web-apps-troubleshoot.md)] + ### PHP and WordPress aren't supported PHP and WordPress sites aren't supported. There's currently no officially supported SDK/agent for server-side monitoring of these workloads. However, manually instrumenting client-side transactions on a PHP or WordPress site by adding the client-side JavaScript to your web pages can be accomplished by using the [JavaScript SDK](./javascript.md).
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
Below is our step-by-step troubleshooting guide for extension/agent based monito
If any of these entries exist, remove the following packages from your application: `Microsoft.ApplicationInsights`, `System.Diagnostics.DiagnosticSource`, and `Microsoft.AspNet.TelemetryCorrelation`.
-#### Default website deployed with web apps does not support automatic client-side monitoring
+### Default website deployed with web apps does not support automatic client-side monitoring
When you create a web app with the `ASP.NET` runtimes in Azure App Services it deploys a single static HTML page as a starter website. The static webpage also loads an ASP.NET managed web part in IIS. This allows for testing codeless server-side monitoring, but doesn't support automatic client-side monitoring.

If you wish to test out codeless server and client-side monitoring for ASP.NET in an Azure App Services web app, we recommend following the official guides for [creating an ASP.NET Framework web app](../../app-service/quickstart-dotnetcore.md?tabs=netframework48) and then use the instructions in the current article to enable monitoring.
-
### APPINSIGHTS_JAVASCRIPT_ENABLED and urlCompression isn't supported

If you use APPINSIGHTS_JAVASCRIPT_ENABLED=true in cases where content is encoded, you might get errors like:
For the latest information on the Application Insights agent/extension, check ou
[!INCLUDE [azure-web-apps-troubleshoot](../../../includes/azure-monitor-app-insights-azure-web-apps-troubleshoot.md)] + ### PHP and WordPress are not supported PHP and WordPress sites are not supported. There is currently no officially supported SDK/agent for server-side monitoring of these workloads. However, manually instrumenting client-side transactions on a PHP or WordPress site by adding the client-side JavaScript to your web pages can be accomplished by using the [JavaScript SDK](./javascript.md).
azure-monitor Azure Web Apps Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-nodejs.md
Below is our step-by-step troubleshooting guide for extension/agent based monito
[!INCLUDE [azure-web-apps-troubleshoot](../../../includes/azure-monitor-app-insights-azure-web-apps-troubleshoot.md)] + ## Release notes For the latest updates and bug fixes, [consult the release notes](web-app-extension-release-notes.md).
azure-monitor Java 2X Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-get-started.md
Application Insights can test your website at regular intervals to check that it
See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/java-2x-troubleshoot). + ## Next steps * [Monitor dependency calls](java-2x-agent.md) * [Monitor Unix performance counters](java-2x-collectd.md)
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
description: Application performance monitoring for Java applications running in
Last updated 07/22/2022 ms.devlang: java-+
Telemetry emitted by these Azure SDKs is automatically collected by default:
* [Azure Storage - Queues](/java/api/overview/azure/storage-queue-readme) 12.9.0+ * [Azure Text Analytics](/java/api/overview/azure/ai-textanalytics-readme) 5.0.4+
-[//]: # "Cosmos 4.22.0+ due to https://github.com/Azure/azure-sdk-for-java/pull/25571"
+[//]: # "Azure Cosmos DB 4.22.0+ due to https://github.com/Azure/azure-sdk-for-java/pull/25571"
[//]: # "the remaining above names and links scraped from https://azure.github.io/azure-sdk/releases/latest/java.html" [//]: # "and version synched manually against the oldest version in maven central built on azure-core 1.14.0"
If you want to attach custom dimensions to your logs, use [Log4j 1.2 MDC](https:
See the dedicated [troubleshooting article](java-standalone-troubleshoot.md). + ## Release notes See the [release notes](https://github.com/microsoft/ApplicationInsights-Java/releases) on GitHub.
azure-monitor Java Standalone Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-profiler.md
The following steps will guide you through enabling the profiling component on t
} } ```
- Alternatively, set the `APPLICATIONINSIGHTS_PROFILER_ENABLED` environment variable to true.
+ Alternatively, set the `APPLICATIONINSIGHTS_PREVIEW_PROFILER_ENABLED` environment variable to true.
1. Restart your process with the updated configuration.
Profiles can be generated/edited in the JDK Mission Control (JMC) user interface
### Environment variables -- `APPLICATIONINSIGHTS_PROFILER_ENABLED`: boolean (default: `false`)
+- `APPLICATIONINSIGHTS_PREVIEW_PROFILER_ENABLED`: boolean (default: `false`)
Enables/disables the profiling feature. ### Configuration file
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Access-Control-Allow-Headers: `Request-Id`, `traceparent`, `Request-Context`, `<
If the SDK reports correlation recursively, enable the configuration setting of `excludeRequestFromAutoTrackingPatterns` to exclude the duplicate data. This scenario can occur when you use connection strings. The syntax for the configuration setting is `excludeRequestFromAutoTrackingPatterns: [<endpointUrl>]`.
-## <a name="next"></a> Next steps
+
+## Next steps
* [Source map for JavaScript](source-map-support.md) * [Track usage](usage-overview.md)
azure-monitor Kubernetes Codeless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/kubernetes-codeless.md
For the applications in other languages we currently recommend using the SDKs:
* [JavaScript](./javascript.md)
* [Python](./opencensus-python.md)
+## Troubleshooting
++ ## Next steps * Learn more about [Azure Monitor](../overview.md) and [Application Insights](./app-insights-overview.md)
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
APPLICATIONINSIGHTS_ENABLE_AGENT: true
Please follow this [instruction](https://github.com/Azure/azure-functions-java-worker/wiki/Distributed-Tracing-for-Java-Azure-Functions#customize-distribute-agent). + ## Distributed tracing for Python Function apps To collect custom telemetry from services such as Redis, Memcached, MongoDB, and more, you can use the [OpenCensus Python Extension](https://github.com/census-ecosystem/opencensus-python-extensions-azure) and [log your telemetry](../../azure-functions/functions-reference-python.md?tabs=azurecli-linux%2capplication-level#log-custom-telemetry). You can find the list of supported services [here](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
These properties are client specific, so you can configure `appInsights.defaultC
| correlationIdRetryIntervalMs | The time to wait before retrying to retrieve the ID for cross-component correlation. (Default is `30000`.) | | correlationHeaderExcludedDomains| A list of domains to exclude from cross-component correlation header injection. (Default. See [Config.ts](https://github.com/Microsoft/ApplicationInsights-node.js/blob/develop/Library/Config.ts).)|
+## Troubleshooting
++ ## Next steps * [Monitor your telemetry in the portal](./overview-dashboard.md)
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
For more detailed information about how to use queries and logs, see [Logs in Az
* [OpenCensus Integrations](https://github.com/census-instrumentation/opencensus-python#extensions)
* [Azure Monitor Sample Applications](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor)
+## Troubleshooting
++ ## Next steps * [Tracking incoming requests](./opencensus-python-dependency.md)
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Known issues for the Azure Monitor OpenTelemetry Exporters include:
- Device model is missing on request and dependency telemetry, which adversely affects device cohort analysis. - Database server name is left out of dependency name, which incorrectly aggregates tables with the same name on different servers. + ## Support To get support:
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
Application Insights Agent is located here: https://www.powershellgallery.com/pa
- [Set-ApplicationInsightsMonitoringConfig](./status-monitor-v2-api-reference.md#set-applicationinsightsmonitoringconfig) - [Start-ApplicationInsightsMonitoringTrace](./status-monitor-v2-api-reference.md#start-applicationinsightsmonitoringtrace)
-## Troubleshooting
-
-See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/status-monitor-v2-troubleshoot).
- ## FAQ - Does Application Insights Agent support proxy installations?
Each of these options is described in the [detailed instructions](status-monitor
union * | summarize count() by cloud_RoleName, cloud_RoleInstance ```
+## Troubleshooting
+
+See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/status-monitor-v2-troubleshoot).
+ ## Release notes
azure-monitor Autoscale Predictive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-predictive.md
Previously updated : 09/11/2022 Last updated : 10/12/2022 -+
-# Use predictive autoscale to scale out before load demands in virtual machine scale sets (preview)
+# Use predictive autoscale to scale out before load demands in virtual machine scale sets
*Predictive autoscale* uses machine learning to help manage and scale Azure Virtual Machine Scale Sets with cyclical workload patterns. It forecasts the overall CPU load to your virtual machine scale set, based on your historical CPU usage patterns. It predicts the overall CPU load by observing and learning from historical usage. This process ensures that scale-out occurs in time to meet the demand.
Predictive autoscale adheres to the scaling boundaries you've set for your virtu
*Forecast only* allows you to view your predicted CPU forecast without triggering the scaling action based on the prediction. You can then compare the forecast with your actual workload patterns to build confidence in the prediction models before you enable the predictive autoscale feature.
-## Public preview support and limitations
-
->[!NOTE]
-> This release is a public preview. We're testing and gathering feedback for future releases. As such, we do not provide production-level support for this feature. Support is best effort. Send feature suggestions or feedback on predicative autoscale to predautoscalesupport@microsoft.com.
-
-The following limitations apply during public preview. Predictive autoscale:
+## Predictive autoscale offerings
- Predictive autoscale is for workloads exhibiting cyclical CPU usage patterns.
- Support is only available for virtual machine scale sets.
azure-monitor Autoscale Understanding Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-understanding-settings.md
description: "A detailed breakdown of autoscale settings and how they work. Appl
Last updated 12/18/2017 + # Understand Autoscale settings
There are three types of Autoscale profiles:
- **Recurrence profile:** This type of profile enables you to ensure that this profile is always used on a particular day of the week. Recurrence profiles only have a start time. They run until the next recurrence profile or fixed date profile is set to start. An Autoscale setting with only one recurrence profile runs that profile, even if there is a regular profile defined in the same setting. The following two examples illustrate how this profile is used: **Example 1: Weekdays vs. weekends**
-
- LetΓÇÖs say that on weekends, you want your maximum capacity to be 4. On weekdays, because you expect more load, you want your maximum capacity to be 10. In this case, your setting would contain two recurrence profiles, one to run on weekends and the other on weekdays.
+
+ Let's say that on weekends, you want your maximum capacity to be 4. On weekdays, because you expect more load, you want your maximum capacity to be 10. In this case, your setting would contain two recurrence profiles, one to run on weekends and the other on weekdays.
+ The setting looks like this:
- ``` JSON
+ ```json
"profiles": [
- {
- "name": "weekdayProfile",
- "capacity": {
- ...
- },
- "rules": [{
- ...
- }],
- "recurrence": {
- "frequency": "Week",
- "schedule": {
- "timeZone": "Pacific Standard Time",
- "days": [
- "Monday"
- ],
- "hours": [
- 0
- ],
- "minutes": [
- 0
- ]
- }
- }}
- },
- {
- "name": "weekendProfile",
- "capacity": {
- ...
- },
- "rules": [{
- ...
- }]
- "recurrence": {
- "frequency": "Week",
- "schedule": {
- "timeZone": "Pacific Standard Time",
- "days": [
- "Saturday"
+ {
+ "name": "weekdayProfile",
+ "capacity": {
+ ...
+ },
+ "rules": [
+ {
+ ...
+ }
],
- "hours": [
- 0
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "Pacific Standard Time",
+ "days": [
+ "Monday"
+ ],
+ "hours": [
+ 0
+ ],
+ "minutes": [
+ 0
+ ]
+ }
+ }
+ },
+ {
+ "name": "weekendProfile",
+ "capacity": {
+ ...
+ },
+ "rules": [
+ {
+ ...
+ }
],
- "minutes": [
- 0
- ]
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "Pacific Standard Time",
+ "days": [
+ "Saturday"
+ ],
+ "hours": [
+ 0
+ ],
+ "minutes": [
+ 0
+ ]
+ }
+ }
}
- }
- }]
+ ]
```

The preceding setting shows that each recurrence profile has a schedule. This schedule determines when the profile starts running. The profile stops when it's time to run another profile.
There are three types of Autoscale profiles:
Let's say you want to have one metric threshold during business hours (9:00 AM to 5:00 PM), and a different one for all other times. The setting would look like this:
- ``` JSON
+ ```json
"profiles": [ { "name": "businessHoursProfile",
Learn more about Autoscale by referring to the following:
* [Best practices for Azure Monitor autoscale](./autoscale-best-practices.md) * [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md) * [Autoscale REST API](/rest/api/monitor/autoscalesettings)-
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Title: 'Azure Monitor best practices: Cost management' description: Guidance and recommendations for reducing your cost for Azure Monitor. + Last updated 03/31/2022 - # Azure Monitor best practices: Cost management
See the documentation for other services that store their data in a Log Analytic
- **Container insights**: [Understand monitoring costs for Container insights](containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost) - **Microsoft Sentinel**: [Reduce costs for Microsoft Sentinel](../sentinel/billing-reduce-costs.md)-- **Defender for Cloud**: [Setting the security event option at the workspace level](../defender-for-cloud/enable-data-collection.md#setting-the-security-event-option-at-the-workspace-level)
+- **Defender for Cloud**: [Setting the security event option at the workspace level](../defender-for-cloud/working-with-log-analytics-agent.md#data-collection-tier)
## Filter data with transformations (preview)
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md
ms.contributor: cawa Last updated 08/23/2022 --+ # Use Change Analysis in Azure Monitor
Azure Monitor Change Analysis service supports resource property level changes i
- Storage - SQL - Redis Cache
- - Cosmos DB, etc.
+ - Azure Cosmos DB, etc.
## Data sources
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
Title: Monitoring cost for Container insights | Microsoft Docs description: This article describes the monitoring cost for metrics & inventory data collected by Container insights to help customers manage their usage and associated costs. + Last updated 08/29/2022 - # Understand monitoring costs for Container insights
The following are examples of what changes you can apply to your cluster by modi
ttlSecondsAfterFinished: 100 ```
-After applying one or more of these changes to your ConfigMaps, see [Apply updated ConfigMap](container-insights-prometheus-integration.md#apply-updated-configmap) to apply it to your cluster.
+After applying one or more of these changes to your ConfigMaps, apply them to your cluster with the command `kubectl apply -f <configmap_yaml_file.yaml>`. For example, after you edit and save *container-azm-ms-agentconfig.yaml*, run `kubectl apply -f container-azm-ms-agentconfig.yaml` to apply the change to your cluster.
### Prometheus metrics scraping
-If you are utilizing [Prometheus metric scraping](container-insights-prometheus-integration.md), ensure you consider the following to limit the number of metrics that you collect from your cluster:
+If you are utilizing [Prometheus metric scraping](container-insights-prometheus.md), ensure you consider the following to limit the number of metrics that you collect from your cluster:
- Ensure scraping frequency is set optimally (the default is 60 seconds). While you can increase the frequency to 15 seconds, you need to ensure that the metrics you are scraping are published at that frequency. Otherwise there will be many duplicate metrics scraped and sent to your Log Analytics workspace at intervals adding to data ingestion and retention costs, but are of less value.
azure-monitor Container Insights Custom Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-custom-metrics.md
+
+ Title: Custom metrics collected by Container insights
+description: Describes the custom metrics collected for a Kubernetes cluster by Container insights in Azure Monitor.
++ Last updated : 09/28/2022 +++
+# Metrics collected by Container insights
+Container insights collects [custom metrics](../essentials/metrics-custom-overview.md) from Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes cluster nodes and pods, which allows you to present timely aggregate calculations (average, count, maximum, minimum, sum) in performance charts, pin performance charts in Azure portal dashboards, and take advantage of [metric alerts](../alerts/alerts-types.md#metric-alerts).
+
+> [!NOTE]
+> This article describes collection of custom metrics from Kubernetes clusters. You can also collect Prometheus metrics as described in [Collect Prometheus metrics with Container insights](container-insights-prometheus.md).
+
+## Using custom metrics
+Custom metrics collected by Container insights can be accessed with the same methods as custom metrics collected from other data sources, including [metrics explorer](../essentials/metrics-getting-started.md) and [metrics alerts](../alerts/alerts-types.md#metric-alerts).
+
+## Metrics collected
+The following sections describe the metric values collected for your cluster.
+
+### Node metrics
+
+Namespace: `Insights.container/nodes`<br>
+Dimensions: `host`
++
+|Metric |Description |
+|-||
+|cpuUsageMillicores |CPU utilization in millicores by host.|
+|cpuUsagePercentage, cpuUsageAllocatablePercentage (preview) |CPU usage percentage by node and allocatable respectively.|
+|memoryRssBytes |Memory RSS utilization in bytes by host.|
+|memoryRssPercentage, memoryRssAllocatablePercentage (preview) |Memory RSS usage percentage by host and allocatable respectively.|
+|memoryWorkingSetBytes |Memory Working Set utilization in bytes by host.|
+|memoryWorkingSetPercentage, memoryRssAllocatablePercentage (preview) |Memory Working Set usage percentage by host and allocatable respectively.|
+|nodesCount |Count of nodes by status.|
+|diskUsedPercentage |Percentage of disk used on the node by device.|
+
+### Pod metrics
+Namespace: `Insights.container/pods`<br>
+Dimensions: `controllerName`, `Kubernetes namespace`
+
+|Metric |Description |
+|-||
+|podCount |Count of pods by controller, namespace, node, and phase.|
+|completedJobsCount |Count of completed jobs older than the user-configurable threshold (default is six hours) by controller, Kubernetes namespace. |
+|restartingContainerCount |Count of container restarts by controller, Kubernetes namespace.|
+|oomKilledContainerCount |Count of OOMkilled containers by controller, Kubernetes namespace.|
+|podReadyPercentage |Percentage of pods in ready state by controller, Kubernetes namespace.|
+
+### Container metrics
+Namespace: `Insights.container/containers`
+Dimensions: `containerName`, `controllerName`, `Kubernetes namespace`, `podName`
+
+|Metric |Description |
+|-||
+|**(Old)cpuExceededPercentage** |CPU utilization percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.<br> Collected |
+|**(New)cpuThresholdViolated** |Metric triggered when CPU utilization percentage for containers exceeds the user-configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.<br> Collected |
+|**(Old)memoryRssExceededPercentage** |Memory RSS percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
+|**(New)memoryRssThresholdViolated** |Metric triggered when Memory RSS percentage for containers exceeds the user-configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
+|**(Old)memoryWorkingSetExceededPercentage** |Memory Working Set percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
+|**(New)memoryWorkingSetThresholdViolated** |Metric triggered when Memory Working Set percentage for containers exceeds the user-configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
++
+### Persistent volume metrics
+
+Namespace: `Insights.container/persistentvolumes`
+Dimensions: `kubernetesNamespace`, `node`, `podName`, `volumeName`
++
+|Metric |Description |
+|-||
+|**(Old)pvUsageExceededPercentage** |PV utilization percentage for persistent volumes exceeding user configurable threshold (default is 60.0) by claim name, Kubernetes namespace, volume name, pod name, and node name.|
+|**(New)pvUsageThresholdViolated** |Metric triggered when PV utilization percentage for persistent volumes exceeds the user-configurable threshold (default is 60.0) by claim name, Kubernetes namespace, volume name, pod name, and node name.|
+
+## Enable custom metrics
+If your cluster uses [managed identity authentication](container-insights-onboard.md#authentication) for Container insights, then custom metrics will be enabled for you. If not, then you need to enable custom metrics using one of the methods described below.
+
+This process assigns the *Monitoring Metrics Publisher* role to the cluster's service principal. Monitoring Metrics Publisher has permission only to push metrics to the resource. It can't alter any state, update the resource, or read any data. For more information, see [Monitoring Metrics Publisher role](../../role-based-access-control/built-in-roles.md#monitoring-metrics-publisher). The Monitoring Metrics Publisher role requirement doesn't apply to Azure Arc-enabled Kubernetes clusters.
+
+### Prerequisites
+
+Before you update your cluster, confirm the following:
+
+- See the supported regions for custom metrics at [Supported regions](../essentials/metrics-custom-overview.md#supported-regions).
+
+* You're a member of the [Owner](../../role-based-access-control/built-in-roles.md#owner) role on the AKS cluster resource to enable collection of custom performance metrics for nodes and pods. This requirement does not apply to Azure Arc-enabled Kubernetes clusters.
+
+### Enablement options
+Use one of the following methods to enable custom metrics for either a single cluster or all clusters in your subscription.
+
+#### [Azure portal](#tab/portal)
+
+1. Select the **Insights** menu for the cluster in the Azure portal.
+2. In the banner that appears at the top of the pane, select **Enable** to start the update.
+
+ :::image type="content" source="./media/container-insights-update-metrics/portal-banner-enable-01.png" alt-text="Screenshot of the Azure portal that shows the banner for upgrading an AKS cluster." lightbox="media/container-insights-update-metrics/portal-banner-enable-01.png":::
+
+ The process can take several seconds to finish. You can track its progress under **Notifications** from the menu.
+
+### [CLI](#tab/cli)
+
+#### Update a single cluster
+In the command below, edit the values for `subscriptionId`, `resourceGroupName`, and `clusterName` by using the values on the **AKS Overview** page for the AKS cluster. The value of `clientIdOfSPN` is returned when you run the command `az aks show`.
+
+```azurecli
+az login
+az account set --subscription "<subscriptionName>"
+az aks show -g <resourceGroupName> -n <clusterName> --query "servicePrincipalProfile"
+az aks show -g <resourceGroupName> -n <clusterName> --query "addonProfiles.omsagent.identity"
+az role assignment create --assignee <clientIdOfSPN> --scope <clusterResourceId> --role "Monitoring Metrics Publisher"
+```
+
+To get the value for `clientIdOfSPNOrMsi`, you can run the command `az aks show` as shown in the following example. If the `servicePrincipalProfile` object has a valid `objectid` value, you can use that. Otherwise, if it's set to `msi`, you need to pass in the Object ID from `addonProfiles.omsagent.identity.objectId`.
+
+```azurecli
+az login
+az account set --subscription "<subscriptionName>"
+az aks show -g <resourceGroupName> -n <clusterName> --query "servicePrincipalProfile"
+az aks show -g <resourceGroupName> -n <clusterName> --query "addonProfiles.omsagent.identity"
+az role assignment create --assignee <clientIdOfSPNOrMsi> --scope <clusterResourceId> --role "Monitoring Metrics Publisher"
+```
+
+>[!NOTE]
+>If you want to perform the role assignment with your user account, use the `--assignee` parameter as shown in the example. If you want to perform the role assignment with a service principal name (SPN), use the `--assignee-object-id` and `--assignee-principal-type` parameters instead of the `--assignee` parameter.
+
+#### Update all clusters
+Run the following command to update all clusters in your subscription. Edit the value for `subscriptionId` by using the value from the **AKS Overview** page for the AKS cluster.
+
+```azurecli
+az login
+az account set --subscription "Subscription Name"
+curl -sL https://aka.ms/ci-md-onboard-atscale | bash -s subscriptionId
+```
+The configuration change can take a few seconds to finish. When it's finished, a message like the following one appears and includes the result:
+
+```azurecli
+completed role assignments for all AKS clusters in subscription: <subscriptionId>
+```
+++
+### [PowerShell](#tab/powershell)
+
+#### Update a single cluster
+
+Use the following steps to enable custom metrics for a specific cluster.
+
+1. [Download the *mdm_onboarding.ps1* script from GitHub](https://github.com/microsoft/OMS-docker/blob/ci_feature_prod/docs/aks/mdmonboarding/mdm_onboarding.ps1) and save it to a local folder.
+
+2. Run the following command. Edit the values for `subscriptionId`, `resourceGroupName`, and `clusterName` by using the values on the **AKS Overview** page for the AKS cluster.
+
+ ```powershell
+ .\mdm_onboarding.ps1 subscriptionId <subscriptionId> resourceGroupName <resourceGroupName> clusterName <clusterName>
+ ```
+
+ The configuration change can take a few seconds to finish. When it's finished, a message like the following one appears and includes the result:
+
+ ```powershell
+ Successfully added Monitoring Metrics Publisher role assignment to cluster : <clusterName>
+ ```
++
+#### Update all clusters
+
+Use the following steps to enable custom metrics for all clusters in your subscription.
+
+1. [Download the *mdm_onboarding_atscale.ps1* script from GitHub](https://github.com/microsoft/OMS-docker/blob/ci_feature_prod/docs/aks/mdmonboarding/mdm_onboarding_atscale.ps1) and save it to a local folder.
+2. Run the following command. Edit the value for `subscriptionId` by using the value from the **AKS Overview** page for the AKS cluster.
+
+ ```powershell
+ .\mdm_onboarding_atscale.ps1 subscriptionId
+ ```
+ The configuration change can take a few seconds to finish. When it's finished, a message like the following one appears and includes the result:
+
+ ```powershell
+ Completed adding role assignment for the aks clusters in subscriptionId :<subscriptionId>
+ ```
+++
+## Verify the update
+You can verify that custom metrics are enabled by opening [metrics explorer](../essentials/metrics-getting-started.md) and verifying from **Metric namespace** that **insights** is listed.
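If you prefer the command line to the portal, a sketch like the following lists the metric definitions in one of the custom namespaces. The cluster resource ID is a placeholder, and it assumes your Azure CLI version supports the `--namespace` parameter on `az monitor metrics list-definitions`.

```azurecli
# List custom metric definitions for the cluster in the insights.container/nodes namespace.
az monitor metrics list-definitions \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>" \
  --namespace "insights.container/nodes" \
  --output table
```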
+
+## Next steps
+
+- [Create alerts based on custom metrics collected for the cluster](container-insights-metric-alerts.md).
+- [Collect Prometheus metrics from your AKS cluster](container-insights-prometheus.md)
azure-monitor Container Insights Enable Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks.md
+
+ Title: Monitor an Azure Kubernetes Service (AKS) cluster deployed
+description: Learn how to enable monitoring of an Azure Kubernetes Service (AKS) cluster with Container insights already deployed in your subscription.
+ Last updated : 09/28/2022++++
+# Enable Container insights for Azure Kubernetes Service (AKS) cluster
+This article describes how to set up Container insights to monitor a managed Kubernetes cluster hosted on [Azure Kubernetes Service](../../aks/index.yml).
+
+## Prerequisites
+
+If you're connecting an existing AKS cluster to a Log Analytics workspace in another subscription, the Microsoft.ContainerService resource provider must be registered in the subscription with the Log Analytics workspace. For more information, see [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+
+## New AKS cluster
+You can enable monitoring for an AKS cluster when it's created by using any of the following methods:
+
+- Azure CLI. Follow the steps in [Create AKS cluster](../../aks/learn/quick-kubernetes-deploy-cli.md).
+- Azure Policy. Follow the steps in [Enable AKS monitoring addon using Azure Policy](container-insights-enable-aks-policy.md).
+- Terraform. If you are [deploying a new AKS cluster using Terraform](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks), you specify the arguments required in the profile [to create a Log Analytics workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_workspace) if you do not choose to specify an existing one. To add Container insights to the workspace, see [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) and complete the profile by including the [**addon_profile**](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster) and specify **oms_agent**.
+
+## Existing AKS cluster
+Use any of the following methods to enable monitoring for an existing AKS cluster.
+
+## [CLI](#tab/azure-cli)
+
+> [!NOTE]
+> Azure CLI version 2.39.0 or higher is required for managed identity authentication.
+
+### Use a default Log Analytics workspace
+
+Use the following command to enable monitoring of your AKS cluster using a default Log Analytics workspace for the resource group. If a default workspace doesn't already exist in the cluster's region, then one will be created with a name in the format *DefaultWorkspace-\<GUID>-\<Region>*.
+
+```azurecli
+az aks enable-addons -a monitoring -n <cluster-name> -g <cluster-resource-group-name>
+```
+
+The output will resemble the following:
+
+```output
+provisioningState : Succeeded
+```
+
+### Specify a Log Analytics workspace
+
+Use the following command to enable monitoring of your AKS cluster on a specific Log Analytics workspace. The resource ID of the workspace will be in the form `"/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>"`.
+
+```azurecli
+az aks enable-addons -a monitoring -n <cluster-name> -g <cluster-resource-group-name> --workspace-resource-id <workspace-resource-id>
+```
+
+The output will resemble the following:
+
+```output
+provisioningState : Succeeded
+```
+
+## [Terraform](#tab/terraform)
+Use the following steps to enable monitoring using Terraform:
+
+1. Add the **oms_agent** add-on profile to the existing [azurerm_kubernetes_cluster resource](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/kubernetes_cluster).
+
+ ```
+ addon_profile {
+ oms_agent {
+ enabled = true
+ log_analytics_workspace_id = "${azurerm_log_analytics_workspace.test.id}"
+ }
+ }
+ ```
+
+2. Add the [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) following the steps in the Terraform documentation.
+3. Enable collection of custom metrics using the guidance at [Enable custom metrics](container-insights-custom-metrics.md).
+
+## [Azure portal](#tab/portal-azure-monitor)
+
+> [!NOTE]
+> You can initiate this same process from the **Insights** option in the AKS menu for your cluster in the Azure portal.
+
+To enable monitoring of your AKS cluster in the Azure portal from Azure Monitor, do the following:
+
+1. In the Azure portal, select **Monitor**.
+2. Select **Containers** from the list.
+3. On the **Monitor - containers** page, select **Unmonitored clusters**.
+4. From the list of unmonitored clusters, find the cluster in the list and click **Enable**.
+5. On the **Configure Container insights** page, click **Configure**.
+
+ :::image type="content" source="media/container-insights-enable-aks/container-insights-configure.png" lightbox="media/container-insights-enable-aks/container-insights-configure.png" alt-text="Screenshot of configuration screen for AKS cluster.":::
+
+6. On the **Configure Container insights** page, fill in the following information:
+
+ | Option | Description |
+ |:|:|
+ | Log Analytics workspace | Select a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) from the drop-down list or click **Create new** to create a default Log Analytics workspace. The Log Analytics workspace must be in the same subscription as the AKS container. |
+ | Enable Prometheus metrics | Select this option to collect Prometheus metrics for the cluster in [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). |
+ | Azure Monitor workspace | If you select **Enable Prometheus metrics**, then you must select an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). The Azure Monitor workspace must be in the same subscription as the AKS container and the Log Analytics workspace. |
+    | Grafana workspace | To use the collected Prometheus metrics with dashboards in [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace), select a Grafana workspace. It will be linked to the Azure Monitor workspace if it isn't already. |
+
+7. Select **Use managed identity** if you want to use [managed identity authentication with the Azure Monitor agent](container-insights-onboard.md#authentication).
+
+After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
+
+## [Resource Manager template](#tab/arm)
+
+>[!NOTE]
+>The template needs to be deployed in the same resource group as the cluster.
++
+### Create or download templates
+You'll either download template and parameter files or create your own, depending on which authentication mode you're using.
+
+**To enable [managed identity authentication (preview)](container-insights-onboard.md#authentication)**
+
+1. Download the template at [https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file) and save it as **existingClusterOnboarding.json**.
+
+2. Download the parameter file at [https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file) and save it as **existingClusterParam.json**. A shell example for downloading both files appears after these steps.
+
+3. Edit the values in the parameter file.
+
+    - `aksResourceId`: Use the value on the **AKS Overview** page for the AKS cluster.
+    - `aksResourceLocation`: Use the value on the **AKS Overview** page for the AKS cluster.
+    - `workspaceResourceId`: Use the resource ID of your Log Analytics workspace.
+    - `resourceTagValues`: Match the existing tag values specified for the cluster's existing Container insights extension data collection rule (DCR). The DCR is named *MSCI-\<clusterName\>-\<clusterRegion\>* and is created in the AKS cluster's resource group. For first-time onboarding, you can set arbitrary tag values.
++
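+The download steps above can also be scripted. The following is a minimal sketch that assumes a bash-style shell with curl available; the file names match the preceding steps:
+
+```
+# Download the template and parameter files (curl follows the aka.ms redirects)
+curl -L https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file -o existingClusterOnboarding.json
+curl -L https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file -o existingClusterParam.json
+```
+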
+**To enable monitoring without [managed identity authentication (preview)](container-insights-onboard.md#authentication)**
+
+1. Save the following JSON as **existingClusterOnboarding.json**.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "aksResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "AKS Cluster Resource ID"
+ }
+ },
+ "aksResourceLocation": {
+ "type": "string",
+ "metadata": {
+ "description": "Location of the AKS resource e.g. \"East US\""
+ }
+ },
+ "aksResourceTagValues": {
+ "type": "object",
+ "metadata": {
+ "description": "Existing all tags on AKS Cluster Resource"
+ }
+ },
+ "workspaceResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Azure Monitor Log Analytics Resource ID"
+ }
+ }
+ },
+ "resources": [
+ {
+ "name": "[split(parameters('aksResourceId'),'/')[8]]",
+ "type": "Microsoft.ContainerService/managedClusters",
+ "location": "[parameters('aksResourceLocation')]",
+ "tags": "[parameters('aksResourceTagValues')]",
+ "apiVersion": "2018-03-31",
+ "properties": {
+ "mode": "Incremental",
+ "id": "[parameters('aksResourceId')]",
+ "addonProfiles": {
+ "omsagent": {
+ "enabled": true,
+ "config": {
+ "logAnalyticsWorkspaceResourceID": "[parameters('workspaceResourceId')]"
+ }
+ }
+ }
+ }
+ }
+ ]
+ }
+ ```
+
+2. Save the following JSON as **existingClusterParam.json**.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "aksResourceId": {
+ "value": "/subscriptions/<SubscriptionId>/resourcegroups/<ResourceGroup>/providers/Microsoft.ContainerService/managedClusters/<ResourceName>"
+ },
+ "aksResourceLocation": {
+ "value": "<aksClusterLocation>"
+ },
+ "workspaceResourceId": {
+ "value": "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroup>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>"
+ },
+ "aksResourceTagValues": {
+ "value": {
+ "<existing-tag-name1>": "<existing-tag-value1>",
+ "<existing-tag-name2>": "<existing-tag-value2>",
+ "<existing-tag-nameN>": "<existing-tag-valueN>"
+ }
+ }
+ }
+ }
+ ```
+
+
+3. Edit the values in the parameter file.
+
+    - `aksResourceId`: Use the value on the **AKS Overview** page for the AKS cluster.
+    - `aksResourceLocation`: Use the value on the **AKS Overview** page for the AKS cluster.
+    - `workspaceResourceId`: Use the resource ID of your Log Analytics workspace.
+    - `aksResourceTagValues`: Use the existing tag values specified for the AKS cluster.
+
+### Deploy template
+
+Deploy the template with the parameter file using any valid method for deploying Resource Manager templates. See [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates) for examples of different methods.
+++
+#### To deploy with Azure PowerShell:
+
+```powershell
+New-AzResourceGroupDeployment -Name OnboardCluster -ResourceGroupName <ResourceGroupName> -TemplateFile .\existingClusterOnboarding.json -TemplateParameterFile .\existingClusterParam.json
+```
+
+The configuration change can take a few minutes to complete. When it's completed, a message is displayed that's similar to the following and includes the result:
+
+```output
+provisioningState : Succeeded
+```
+
+#### To deploy with Azure CLI, run the following commands:
+
+```azurecli
+az login
+az account set --subscription "Subscription Name"
+az deployment group create --resource-group <ResourceGroupName> --template-file ./existingClusterOnboarding.json --parameters @./existingClusterParam.json
+```
+
+The configuration change can take a few minutes to complete. When it's completed, a message is displayed that's similar to the following and includes the result:
+
+```output
+provisioningState : Succeeded
+```
+
+After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
+++
+## Verify agent and solution deployment
+Run the following command to verify that the agent is deployed successfully.
+
+```
+kubectl get ds omsagent --namespace=kube-system
+```
+
+The output should resemble the following, which indicates that it was deployed properly:
+
+```output
+User@aksuser:~$ kubectl get ds omsagent --namespace=kube-system
+NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
+omsagent 2 2 2 2 2 beta.kubernetes.io/os=linux 1d
+```
+
+If there are Windows Server nodes on the cluster, you can run the following command to verify that the agent was deployed successfully.
+
+```
+kubectl get ds omsagent-win --namespace=kube-system
+```
+
+The output should resemble the following, which indicates that it was deployed properly:
+
+```output
+User@aksuser:~$ kubectl get ds omsagent-win --namespace=kube-system
+NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
+omsagent-win 2 2 2 2 2 beta.kubernetes.io/os=windows 1d
+```
+
+To verify deployment of the solution, run the following command:
+
+```
+kubectl get deployment omsagent-rs -n=kube-system
+```
+
+The output should resemble the following, which indicates that it was deployed properly:
+
+```output
+User@aksuser:~$ kubectl get deployment omsagent-rs -n=kube-system
+NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+omsagent 1 1 1 1 3h
+```
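+
+If the counts don't match what you expect, you can also list the individual agent pods and their status. A quick sketch, assuming a bash-style shell:
+
+```
+# List the Container insights agent pods and their current status
+kubectl get pods --namespace=kube-system | grep omsagent
+```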
+
+## View configuration with CLI
+
+Use the `az aks show` command to find out whether the solution is enabled, what the Log Analytics workspace resource ID is, and summary details about the cluster.
+
+```azurecli
+az aks show -g <resourceGroupofAKSCluster> -n <nameofAksCluster>
+```
+
+After a few minutes, the command completes and returns JSON-formatted information about the solution. The results should show the monitoring add-on profile and resemble the following example output:
+
+```output
+"addonProfiles": {
+ "omsagent": {
+ "config": {
+ "logAnalyticsWorkspaceResourceID": "/subscriptions/<WorkspaceSubscription>/resourceGroups/<DefaultWorkspaceRG>/providers/Microsoft.OperationalInsights/workspaces/<defaultWorkspaceName>"
+ },
+ "enabled": true
+ }
+ }
+```
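+
+To return just the workspace resource ID instead of the full JSON, you can add a JMESPath query. For example:
+
+```azurecli
+az aks show -g <resourceGroupofAKSCluster> -n <nameofAksCluster> --query "addonProfiles.omsagent.config.logAnalyticsWorkspaceResourceID" -o tsv
+```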
+
+## Migrate to managed identity authentication
+
+### Existing clusters with service principal
+AKS clusters that use a service principal must first disable monitoring and then upgrade to managed identity. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for this migration.
+
+1. Get the configured Log Analytics workspace resource ID:
+
+    ```cli
+    az aks show -g <resource-group-name> -n <cluster-name> | grep -i "logAnalyticsWorkspaceResourceID"
+    ```
+
+2. Disable monitoring with the following command:
+
+ ```cli
+ az aks disable-addons -a monitoring -g <resource-group-name> -n <cluster-name>
+ ```
+
+3. Upgrade the cluster to a system-assigned managed identity with the following command:
+
+ ```cli
+ az aks update -g <resource-group-name> -n <cluster-name> --enable-managed-identity
+ ```
+
+4. Enable the monitoring add-on with the managed identity authentication option, using the Log Analytics workspace resource ID obtained in the first step:
+
+ ```cli
+ az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id>
+ ```
+
+### Existing clusters with system or user assigned identity
+AKS clusters with a system-assigned or user-assigned identity must first disable monitoring and then re-enable it with managed identity authentication. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for clusters with a system-assigned identity. For clusters with a user-assigned identity, only Azure public cloud is supported.
+
+1. Get the configured Log Analytics workspace resource ID:
+
+ ```cli
+ az aks show -g <resource-group-name> -n <cluster-name> | grep -i "logAnalyticsWorkspaceResourceID"
+ ```
+
+2. Disable monitoring with the following command:
+
+ ```cli
+ az aks disable-addons -a monitoring -g <resource-group-name> -n <cluster-name>
+ ```
+
+3. Enable the monitoring add-on with the managed identity authentication option, using the Log Analytics workspace resource ID obtained in the first step:
+
+ ```cli
+ az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id>
+ ```
+
+## Private link
+To enable network isolation by connecting your cluster to the Log Analytics workspace using [private link](../logs/private-link-security.md), your cluster must be using managed identity authentication with the Azure Monitor agent.
+
+1. Follow the steps in [Enable network isolation for the Azure Monitor agent](../agents/azure-monitor-agent-data-collection-endpoint.md) to create a data collection endpoint and add it to your AMPLS.
+2. Create an association between the cluster and the data collection endpoint using the following API call. See [Data Collection Rule Associations - Create](/rest/api/monitor/data-collection-rule-associations/create) for details on this call. The DCR association name must be **configurationAccessEndpoint**, and `resourceUri` is the resource ID of the AKS cluster. A CLI example using `az rest` appears after these steps.
+
+ ```rest
+ PUT https://management.azure.com/{cluster-resource-id}/providers/Microsoft.Insights/dataCollectionRuleAssociations/configurationAccessEndpoint?api-version=2021-04-01
+ {
+ "properties": {
+ "dataCollectionEndpointId": "{data-collection-endpoint-resource-id}"
+ }
+ }
+ ```
+
+ Following is an example of this API call.
+
+ ```rest
+ PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/my-aks-cluster/providers/Microsoft.Insights/dataCollectionRuleAssociations/configurationAccessEndpoint?api-version=2021-04-01
+
+ {
+ "properties": {
+ "dataCollectionEndpointId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Insights/dataCollectionEndpoints/myDataCollectionEndpoint"
+ }
+ }
+ ```
+
+3. Enable monitoring with the managed identity authentication option by following the steps in [Migrate to managed identity authentication](#migrate-to-managed-identity-authentication).
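+
+If you prefer the Azure CLI over a raw REST client for step 2, `az rest` can issue the same call. The following sketch uses the example values shown above:
+
+```azurecli
+az rest --method put \
+  --url "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/my-aks-cluster/providers/Microsoft.Insights/dataCollectionRuleAssociations/configurationAccessEndpoint?api-version=2021-04-01" \
+  --body '{"properties": {"dataCollectionEndpointId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Insights/dataCollectionEndpoints/myDataCollectionEndpoint"}}'
+```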
+
+## Limitations
+
+- Enabling managed identity authentication (preview) is not currently supported using Terraform or Azure Policy.
+- When you enable managed identity authentication (preview), a data collection rule is created with the name *MSCI-\<cluster-name\>-\<cluster-region\>*. This name cannot currently be modified.
+
+## Next steps
+
+* If you experience issues while attempting to onboard the solution, review the [troubleshooting guide](container-insights-troubleshoot.md)
+
+* With monitoring enabled to collect health and resource utilization of your AKS cluster and the workloads running on it, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
Title: Monitor Azure Arc-enabled Kubernetes clusters Last updated 05/24/2022 + description: Collect metrics and logs of Azure Arc-enabled Kubernetes clusters using Azure Monitor.
If you want to tweak the default resource requests and limits, you can use the a
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.resources.daemonset.limits.cpu=150m omsagent.resources.daemonset.limits.memory=600Mi omsagent.resources.deployment.limits.cpu=1 omsagent.resources.deployment.limits.memory=750Mi ```
-Checkout the [resource requests and limits section of Helm chart](https://github.com/helm/charts/blob/master/incubator/azuremonitor-containers/values.yaml) for the available configuration settings.
+Check out the [resource requests and limits section of Helm chart](https://github.com/helm/charts/blob/master/incubator/azuremonitor-containers/values.yaml) for the available configuration settings.
### Option 4 - On Azure Stack Edge
For issues with enabling monitoring, we have provided a [troubleshooting script]
- By default, the containerized agent collects the stdout/ stderr container logs of all the containers running in all the namespaces except kube-system. To configure container log collection specific to particular namespace or namespaces, review [Container Insights agent configuration](container-insights-agent-config.md) to configure desired data collection settings to your ConfigMap configurations file. -- To scrape and analyze Prometheus metrics from your cluster, review [Configure Prometheus metrics scraping](container-insights-prometheus-integration.md)
+- To scrape and analyze Prometheus metrics from your cluster, review [Configure Prometheus metrics scraping](container-insights-prometheus.md)
azure-monitor Container Insights Enable Existing Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-existing-clusters.md
- Title: Monitor an Azure Kubernetes Service (AKS) cluster deployed | Microsoft Docs
-description: Learn how to enable monitoring of an Azure Kubernetes Service (AKS) cluster with Container insights already deployed in your subscription.
- Previously updated : 05/24/2022----
-# Enable monitoring for existing Azure Kubernetes Service (AKS) cluster
-This article describes how to set up Container insights to monitor managed Kubernetes cluster hosted on [Azure Kubernetes Service](../../aks/index.yml) that have already been deployed in your subscription.
-
-If you're connecting an existing AKS cluster to an Azure Log Analytics workspace in another subscription, the Microsoft.ContainerService resource provider must be registered in the subscription in which the Log Analytics workspace was created. For more information, see [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
--
-## [CLI](#tab/azure-cli)
-
-> [!NOTE]
-> Azure CLI version 2.39.0 or higher required for managed identity authentication.
-
-The following step enables monitoring of your AKS cluster using Azure CLI. In this example, you are not required to pre-create or specify an existing workspace. This command simplifies the process for you by creating a default workspace in the default resource group of the AKS cluster subscription if one does not already exist in the region. The default workspace created resembles the format of *DefaultWorkspace-\<GUID>-\<Region>*.
-
-```azurecli
-az aks enable-addons -a monitoring -n MyExistingManagedCluster -g MyExistingManagedClusterRG
-```
-
-The output will resemble the following:
-
-```output
-provisioningState : Succeeded
-```
-
-### Integrate with an existing workspace
-
-If you would rather integrate with an existing workspace, perform the following steps to first identify the full resource ID of your Log Analytics workspace required for the `--workspace-resource-id` parameter, and then run the command to enable the monitoring add-on against the specified workspace.
-
-1. List all the subscriptions that you have access to using the following command:
-
- ```azurecli
- az account list --all -o table
- ```
-
- The output will resemble the following:
-
- ```output
- Name CloudName SubscriptionId State IsDefault
- -- - --
- Microsoft Azure AzureCloud 68627f8c-91fO-4905-z48q-b032a81f8vy0 Enabled True
- ```
-
- Copy the value for **SubscriptionId**.
-
-2. Switch to the subscription hosting the Log Analytics workspace using the following command:
-
- ```azurecli
- az account set -s <subscriptionId of the workspace>
- ```
-
-3. The following example displays the list of workspaces in your subscriptions in the default JSON format.
-
- ```azurecli
- az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json
- ```
-
- In the output, find the workspace name, and then copy the full resource ID of that Log Analytics workspace under the field **id**.
-
-4. Switch to the subscription hosting the cluster using the following command:
-
- ```azurecli
- az account set -s <subscriptionId of the cluster>
- ```
-
-5. Run the following command to enable the monitoring add-on, replacing the value for the `--workspace-resource-id` parameter. The string value must be within the double quotes:
-
- ```azurecli
- az aks enable-addons -a monitoring -n ExistingManagedCluster -g ExistingManagedClusterRG --workspace-resource-id "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>"
- ```
-
- The output will resemble the following:
-
- ```output
- provisioningState : Succeeded
- ```
-
-## [Terraform](#tab/terraform)
-To enable monitoring using Terraform, do the following:
-
-1. Add the **oms_agent** add-on profile to the existing [azurerm_kubernetes_cluster resource](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/kubernetes_cluster)
-
- ```
- addon_profile {
- oms_agent {
- enabled = true
- log_analytics_workspace_id = "${azurerm_log_analytics_workspace.test.id}"
- }
- }
- ```
-
-2. Add the [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) following the steps in the Terraform documentation.
-
-3. The metrics are not collected by default through Terraform, so once onboarded, there is an additional step to assign the monitoring metrics publisher role, which is required to [enable the metrics](./container-insights-update-metrics.md#update-one-cluster-by-using-the-azure-cli).
-
-## [Azure Monitor portal](#tab/portal-azure-monitor)
-
-To enable monitoring of your AKS cluster in the Azure portal from Azure Monitor, do the following:
-
-1. In the Azure portal, select **Monitor**.
-
-2. Select **Containers** from the list.
-
-3. On the **Monitor - containers** page, select **Unmonitored clusters**.
-
-4. From the list of unmonitored clusters, find the cluster in the list and click **Enable**.
-
-5. On the **Onboarding to Container insights** page, if you have an existing Log Analytics workspace in the same subscription as the cluster, select it from the drop-down list.
- The list preselects the default workspace and location that the AKS container is deployed to in the subscription.
-
- ![Enable AKS Container insights monitoring](./media/container-insights-onboard/kubernetes-onboard-brownfield-01.png)
-
- >[!NOTE]
- >If you want to create a new Log Analytics workspace for storing the monitoring data from the cluster, follow the instructions in [Create a Log Analytics workspace](../logs/quick-create-workspace.md). Be sure to create the workspace in the same subscription that the AKS container is deployed to.
-
-6. Select **Use managed identity** if you want to use [managed identity authentication with the Azure Monitor agent](container-insights-onboard.md#authentication).
-
-After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
-
-## [AKS portal](#tab/portal-aks)
-
-To enable monitoring directly from one of your AKS clusters in the Azure portal, do the following:
-
-1. In the Azure portal, select **All services**.
-
-2. In the list of resources, begin typing **Containers**. The list filters based on your input.
-
-3. Select **Kubernetes services**.
-
-4. In the list of Kubernetes services, select a service.
-
-5. On the Kubernetes service overview page, select **Monitoring - Insights**.
-
-6. On the **Onboarding to Container insights** page, if you have an existing Log Analytics workspace in the same subscription as the cluster, select it in the drop-down list.
- The list preselects the default workspace and location that the AKS container is deployed to in the subscription.
-
- ![Enable AKS container health monitoring](./media/container-insights-onboard/kubernetes-onboard-brownfield-02.png)
-
- >[!NOTE]
- >If you want to create a new Log Analytics workspace for storing the monitoring data from the cluster, follow the instructions in [Create a Log Analytics workspace](../logs/quick-create-workspace.md). Be sure to create the workspace in the same subscription that the AKS container is deployed to.
-
-6. Select **Use managed identity** if you want to use [managed identity authentication with the Azure Monitor agent](container-insights-onboard.md#authentication).
--
-After you've enabled monitoring, it might take about 15 minutes before you can view operational data for the cluster.
-
-## [Resource Manager template](#tab/arm)
--
-This method includes two JSON templates. One template specifies the configuration to enable monitoring, and the other contains parameter values that you configure to specify the following:
-
-* The AKS container resource ID.
-* The resource group that the cluster is deployed in.
-
->[!NOTE]
->The template needs to be deployed in the same resource group as the cluster.
--
-### Prerequisites
-The Log Analytics workspace must be created before you deploy the Resource Manager template.
--
-### Create or download templates
-
-**If you want to enable [managed identity authentication (preview)](container-insights-onboard.md#authentication)**
-
-1. Download the template at [https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file) and save it as **existingClusterOnboarding.json**.
-
-2. Download the parameter file at [https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file) and save it as **existingClusterParam.json**.
-
-3. Edit the values in the parameter file.
-
- - For **aksResourceId** and **aksResourceLocation**, use the values on the **AKS Overview** page for the AKS cluster.
- - For **workspaceResourceId**, use the resource ID of your Log Analytics workspace.
- - For **resourceTagValues**, match the existing tag values specified for the existing Container insights extension DCR of the cluster and the name of the data collection rule, which will be MSCI-\<clusterName\>-\<clusterRegion\> and this resource created in AKS clusters Resource Group. If this first-time onboarding, you can set the arbitrary tag values.
--
-**If you don't want to enable [managed identity authentication (preview)](container-insights-onboard.md#authentication)**
-
-1. Save the following JSON as **existingClusterOnboarding.json**.
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "aksResourceId": {
- "type": "string",
- "metadata": {
- "description": "AKS Cluster Resource ID"
- }
- },
- "aksResourceLocation": {
- "type": "string",
- "metadata": {
- "description": "Location of the AKS resource e.g. \"East US\""
- }
- },
- "aksResourceTagValues": {
- "type": "object",
- "metadata": {
- "description": "Existing all tags on AKS Cluster Resource"
- }
- },
- "workspaceResourceId": {
- "type": "string",
- "metadata": {
- "description": "Azure Monitor Log Analytics Resource ID"
- }
- }
- },
- "resources": [
- {
- "name": "[split(parameters('aksResourceId'),'/')[8]]",
- "type": "Microsoft.ContainerService/managedClusters",
- "location": "[parameters('aksResourceLocation')]",
- "tags": "[parameters('aksResourceTagValues')]",
- "apiVersion": "2018-03-31",
- "properties": {
- "mode": "Incremental",
- "id": "[parameters('aksResourceId')]",
- "addonProfiles": {
- "omsagent": {
- "enabled": true,
- "config": {
- "logAnalyticsWorkspaceResourceID": "[parameters('workspaceResourceId')]"
- }
- }
- }
- }
- }
- ]
- }
- ```
-
-2. Save the following JSON as **existingClusterParam.json**.
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "aksResourceId": {
- "value": "/subscriptions/<SubscriptionId>/resourcegroups/<ResourceGroup>/providers/Microsoft.ContainerService/managedClusters/<ResourceName>"
- },
- "aksResourceLocation": {
- "value": "<aksClusterLocation>"
- },
- "workspaceResourceId": {
- "value": "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroup>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>"
- },
- "aksResourceTagValues": {
- "value": {
- "<existing-tag-name1>": "<existing-tag-value1>",
- "<existing-tag-name2>": "<existing-tag-value2>",
- "<existing-tag-nameN>": "<existing-tag-valueN>"
- }
- }
- }
- }
- ```
-
-2. Download the parameter file at [https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file) and save as **existingClusterParam.json**.
-
-3. Edit the values in the parameter file.
-
- - For **aksResourceId** and **aksResourceLocation**, use the values on the **AKS Overview** page for the AKS cluster.
- - For **workspaceResourceId**, use the resource ID of your Log Analytics workspace.
- - For **aksResourceTagValues**, use the existing tag values specified for the AKS cluster.
--
-### Deploy template
-
-If you are unfamiliar with the concept of deploying resources by using a template, see:
-
-* [Deploy resources with Resource Manager templates and Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md)
-* [Deploy resources with Resource Manager templates and the Azure CLI](../../azure-resource-manager/templates/deploy-cli.md)
-
-If you choose to use the Azure CLI, you first need to install and use the CLI locally. You must be running the Azure CLI version 2.0.59 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
--
-#### To deploy with Azure PowerShell:
-
-```powershell
-New-AzResourceGroupDeployment -Name OnboardCluster -ResourceGroupName <ResourceGroupName> -TemplateFile .\existingClusterOnboarding.json -TemplateParameterFile .\existingClusterParam.json
-```
-
-The configuration change can take a few minutes to complete. When it's completed, a message is displayed that's similar to the following and includes the result:
-
-```output
-provisioningState : Succeeded
-```
-
-#### To deploy with Azure CLI, run the following commands:
-
-```azurecli
-az login
-az account set --subscription "Subscription Name"
-az deployment group create --resource-group <ResourceGroupName> --template-file ./existingClusterOnboarding.json --parameters @./existingClusterParam.json
-```
-
-The configuration change can take a few minutes to complete. When it's completed, a message is displayed that's similar to the following and includes the result:
-
-```output
-provisioningState : Succeeded
-```
-
-After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
---
-## Verify agent and solution deployment
-
-With agent version *06072018* or later, you can verify that both the agent and the solution were deployed successfully. With earlier versions of the agent, you can verify only agent deployment.
-
-### Agent version 06072018 or later
-
-Run the following command to verify that the agent is deployed successfully.
-
-```
-kubectl get ds omsagent --namespace=kube-system
-```
-
-The output should resemble the following, which indicates that it was deployed properly:
-
-```output
-User@aksuser:~$ kubectl get ds omsagent --namespace=kube-system
-NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
-omsagent 2 2 2 2 2 beta.kubernetes.io/os=linux 1d
-```
-
-If there are Windows Server nodes on the cluster then you can run the following command to verify that the agent is deployed successfully.
-
-```
-kubectl get ds omsagent-win --namespace=kube-system
-```
-
-The output should resemble the following, which indicates that it was deployed properly:
-
-```output
-User@aksuser:~$ kubectl get ds omsagent-win --namespace=kube-system
-NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
-omsagent-win 2 2 2 2 2 beta.kubernetes.io/os=windows 1d
-```
-
-To verify deployment of the solution, run the following command:
-
-```
-kubectl get deployment omsagent-rs -n=kube-system
-```
-
-The output should resemble the following, which indicates that it was deployed properly:
-
-```output
-User@aksuser:~$ kubectl get deployment omsagent-rs -n=kube-system
-NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
-omsagent 1 1 1 1 3h
-```
-
-### Agent version earlier than 06072018
-
-To verify that the Log Analytics agent version released before *06072018* is deployed properly, run the following command:
-
-```
-kubectl get ds omsagent --namespace=kube-system
-```
-
-The output should resemble the following, which indicates that it was deployed properly:
-
-```output
-User@aksuser:~$ kubectl get ds omsagent --namespace=kube-system
-NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
-omsagent 2 2 2 2 2 beta.kubernetes.io/os=linux 1d
-```
-
-## View configuration with CLI
-
-Use the `aks show` command to get details such as is the solution enabled or not, what is the Log Analytics workspace resourceID, and summary details about the cluster.
-
-```azurecli
-az aks show -g <resourceGroupofAKSCluster> -n <nameofAksCluster>
-```
-
-After a few minutes, the command completes and returns JSON-formatted information about solution. The results of the command should show the monitoring add-on profile and resembles the following example output:
-
-```output
-"addonProfiles": {
- "omsagent": {
- "config": {
- "logAnalyticsWorkspaceResourceID": "/subscriptions/<WorkspaceSubscription>/resourceGroups/<DefaultWorkspaceRG>/providers/Microsoft.OperationalInsights/workspaces/<defaultWorkspaceName>"
- },
- "enabled": true
- }
- }
-```
-
-## Migrate to managed identity authentication
-
-### Existing clusters with service principal
-AKS Clusters with service principal must first disable monitoring and then upgrade to managed identity. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for this migration.
-
-1. Get the configured Log Analytics workspace resource id:
-
-```cli
-az aks show -g <resource-group-name> -n <cluster-name> | grep -i "logAnalyticsWorkspaceResourceID"
-```
-
-2. Disable monitoring with the following command:
-
- ```cli
- az aks disable-addons -a monitoring -g <resource-group-name> -n <cluster-name>
- ```
-
-3. Upgrade cluster to system managed identity with the following command:
-
- ```cli
- az aks update -g <resource-group-name> -n <cluster-name> --enable-managed-identity
- ```
-
-4. Enable Monitoring addon with managed identity authentication option using Log Analytics workspace resource ID obtained in the first step:
-
- ```cli
- az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id>
- ```
-
-### Existing clusters with system or user assigned identity
-AKS Clusters with system assigned identity must first disable monitoring and then upgrade to managed identity. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for clusters with system identity. For clusters with user assigned identity, only Azure Public cloud is supported.
-
-1. Get the configured Log Analytics workspace resource id:
-
- ```cli
- az aks show -g <resource-group-name> -n <cluster-name> | grep -i "logAnalyticsWorkspaceResourceID"
- ```
-
-2. Disable monitoring with the following command:
-
- ```cli
- az aks disable-addons -a monitoring -g <resource-group-name> -n <cluster-name>
- ```
-
-3. Enable Monitoring addon with managed identity authentication option using Log Analytics workspace resource ID obtained in the first step:
-
- ```cli
- az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id>
- ```
-
-## Private link
-To enable network isolation by connecting your cluster to the Log Analytics workspace using [private link](../logs/private-link-security.md), your cluster must be using managed identity authentication with the Azure Monitor agent.
-
-1. Follow the steps in [Enable network isolation for the Azure Monitor agent](../agents/azure-monitor-agent-data-collection-endpoint.md) to create a data collection endpoint and add it to your AMPLS.
-2. Create an association between the cluster and the data collection endpoint using the following API call. See [Data Collection Rule Associations - Create](/rest/api/monitor/data-collection-rule-associations/create) for details on this call. The DCR association name must beΓÇ»**configurationAccessEndpoint**, `resourceUri` is the resource Id of the AKS cluster.
-
- ```rest
- PUT https://management.azure.com/{cluster-resource-id}/providers/Microsoft.Insights/dataCollectionRuleAssociations/configurationAccessEndpoint?api-version=2021-04-01
- {
- "properties": {
- "dataCollectionEndpointId": "{data-collection-endpoint-resource-id}"
- }
- }
- ```
-
- Following is an example of this API call.
-
- ```rest
- PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/my-aks-cluster/providers/Microsoft.Insights/dataCollectionRuleAssociations/configurationAccessEndpoint?api-version=2021-04-01
-
- {
- "properties": {
- "dataCollectionEndpointId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Insights/dataCollectionEndpoints/myDataCollectionEndpoint"
- }
- }
- ```
-
-3. Enable monitoring with managed identity authentication option using the steps in [Migrate to managed identity authentication](#migrate-to-managed-identity-authentication).
-
-## Limitations
--- Enabling managed identity authentication (preview) is not currently supported using Terraform or Azure Policy.-- When you enable managed identity authentication (preview), a data collection rule is created with the name *MSCI-\<cluster-name\>-\<cluster-region\>*. This name cannot currently be modified.-
-## Next steps
-
-* If you experience issues while attempting to onboard the solution, review the [troubleshooting guide](container-insights-troubleshoot.md)
-
-* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Container Insights Enable New Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-new-cluster.md
- Title: Monitor a new Azure Kubernetes Service (AKS) cluster | Microsoft Docs
-description: Learn how to enable monitoring for a new Azure Kubernetes Service (AKS) cluster with Container insights subscription.
- Previously updated : 05/24/2022----
-# Enable monitoring of a new Azure Kubernetes Service (AKS) cluster
-
-This article describes how to set up Container insights to monitor managed Kubernetes cluster hosted on [Azure Kubernetes Service](../../aks/index.yml) that you are preparing to deploy in your subscription.
--
-## Enable using Azure CLI
-
-To enable monitoring of a new AKS cluster created with Azure CLI, follow the step in the quickstart article under the section [Create AKS cluster](../../aks/learn/quick-kubernetes-deploy-cli.md).
-
->[!NOTE]
->If you choose to use the Azure CLI, you first need to install and use the CLI locally. You must be running the Azure CLI version 2.39.0 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
->If you have installed the aks-preview CLI extension version 0.4.12 or higher, remove any changes you have made to enable a preview extension as it can override the default Azure CLI behavior since AKS Preview features aren't available in Azure US Governmnet cloud.
-
-## Enable using Terraform
-
-If you are [deploying a new AKS cluster using Terraform](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks), you specify the arguments required in the profile [to create a Log Analytics workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_workspace) if you do not choose to specify an existing one. To add Container insights to the workspace, see [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) and complete the profile by including the [**addon_profile**](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster) and specify **oms_agent**.
-
-## Verify agent and solution deployment
-With agent version *06072018* or later, you can verify that both the agent and the solution were deployed successfully. With earlier versions of the agent, you can verify only agent deployment.
-
-### Agent version 06072018 or later
-Run the following command to verify that the agent is deployed successfully.
-
-```
-kubectl get ds omsagent --namespace=kube-system
-```
-
-The output should resemble the following, which indicates that it was deployed properly:
-
-```
-User@aksuser:~$ kubectl get ds omsagent --namespace=kube-system
-NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
-omsagent 2 2 2 2 2 beta.kubernetes.io/os=linux 1d
-```
-
-To verify deployment of the solution, run the following command:
-
-```
-kubectl get deployment omsagent-rs -n=kube-system
-```
-
-The output should resemble the following, which indicates that it was deployed properly:
-
-```
-User@aksuser:~$ kubectl get deployment omsagent-rs -n=kube-system
-NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
-omsagent 1 1 1 1 3h
-```
-
-### Agent version earlier than 06072018
-
-To verify that the Log Analytics agent version released before *06072018* is deployed properly, run the following command:
-
-```
-kubectl get ds omsagent --namespace=kube-system
-```
-
-The output should resemble the following, which indicates that it was deployed properly:
-
-```
-User@aksuser:~$ kubectl get ds omsagent --namespace=kube-system
-NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
-omsagent 2 2 2 2 2 beta.kubernetes.io/os=linux 1d
-```
-
-## View configuration with CLI
-Use the `aks show` command to get details such as is the solution enabled or not, what is the Log Analytics workspace resourceID, and summary details about the cluster.
-
-```azurecli
-az aks show -g <resourceGroupofAKSCluster> -n <nameofAksCluster>
-```
-
-After a few minutes, the command completes and returns JSON-formatted information about solution. The results of the command should show the monitoring add-on profile and resembles the following example output:
-
-```
-"addonProfiles": {
- "omsagent": {
- "config": {
- "logAnalyticsWorkspaceResourceID": "/subscriptions/<WorkspaceSubscription>/resourceGroups/<DefaultWorkspaceRG>/providers/Microsoft.OperationalInsights/workspaces/<defaultWorkspaceName>"
- },
- "enabled": true
- }
- }
-```
-
-## Next steps
-
-* If you experience issues while attempting to onboard the solution, review the [troubleshooting guide](container-insights-troubleshoot.md)
-
-* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
-
azure-monitor Container Insights Gpu Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-gpu-monitoring.md
Title: Configure GPU monitoring with Container insights description: This article describes how you can configure monitoring Kubernetes clusters with NVIDIA and AMD GPU enabled nodes with Container insights. + Last updated 05/24/2022
Starting with agent version *ciprod03022019*, Container insights integrated agen
> * containerGpumemoryTotalBytes > * containerGpumemoryUsedBytes >
-> To continue collecting GPU metrics through Container Insights, please migrate by December 31, 2022 to your GPU vendor specific metrics exporter and configure [Prometheus scraping](./container-insights-prometheus-integration.md) to scrape metrics from the deployed vendor specific exporter.
+> To continue collecting GPU metrics through Container Insights, please migrate by December 31, 2022 to your GPU vendor specific metrics exporter and configure [Prometheus scraping](./container-insights-prometheus.md) to scrape metrics from the deployed vendor specific exporter.
## Supported GPU vendors
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
Title: Query logs from Container insights description: Container insights collects metrics and log data, and this article describes the records and includes sample queries. + Last updated 08/29/2022 - # Query logs from Container insights
AzureDiagnostics
| summarize count() by Category ```
-## Query Prometheus metrics data
+## Prometheus metrics
The following example is a Prometheus metrics query showing disk reads per second per disk per node.
InsightsMetrics
```
-To view Prometheus metrics scraped by Azure Monitor and filtered by namespace, specify "prometheus". Here's a sample query to view Prometheus metrics from the `default` Kubernetes namespace.
+To view Prometheus metrics scraped by Azure Monitor and filtered by namespace, specify *"prometheus"*. Here's a sample query to view Prometheus metrics from the `default` Kubernetes namespace.
``` InsightsMetrics
InsightsMetrics
| where Name contains "some_prometheus_metric" ```
-### Query configuration or scraping errors
+To identify the ingestion volume of each metric in GB per day, and to determine whether it's high, use the following query.
+
+```
+InsightsMetrics
+| where Namespace contains "prometheus"
+| where TimeGenerated > ago(24h)
+| summarize VolumeInGB = (sum(_BilledSize) / (1024 * 1024 * 1024)) by Name
+| order by VolumeInGB desc
+| render barchart
+```
+
+The output will show results similar to the following example.
+
+![Screenshot that shows the log query results of data ingestion volume.](./media/container-insights-prometheus/log-query-example-usage-03.png)
+
+To estimate the size of each metric in GB for a 30-day month, and to determine whether the volume of data ingested into the workspace is high, use the following query.
+
+```
+InsightsMetrics
+| where Namespace contains "prometheus"
+| where TimeGenerated > ago(24h)
+| summarize EstimatedGBPer30dayMonth = (sum(_BilledSize) / (1024 * 1024 * 1024)) * 30 by Name
+| order by EstimatedGBPer30dayMonth desc
+| render barchart
+```
+
+The output will show results similar to the following example.
+
+![Screenshot that shows log query results of data ingestion volume.](./media/container-insights-prometheus/log-query-example-usage-02.png)
++
+## Configuration or scraping errors
To investigate any configuration or scraping errors, the following example query returns informational events from the `KubeMonAgentEvents` table.
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
Title: Metric alerts from Container insights
-description: This article reviews the recommended metric alerts available from Container insights in public preview.
+ Title: Create metric alert rules in Container insights (preview)
+description: Describes how to create recommended metric alerts rules for a Kubernetes cluster in Container insights.
Previously updated : 05/24/2022+ Last updated : 09/28/2022
-# Recommended metric alerts (preview) from Container insights
+# Metric alert rules in Container insights (preview)
-To alert on system resource issues when they are experiencing peak demand and running near capacity, with Container insights you would create a log alert based on performance data stored in Azure Monitor Logs. Container insights now includes pre-configured metric alert rules for your AKS and Azure Arc-enabled Kubernetes cluster, which is in public preview.
+Metric alerts in Azure Monitor proactively identify issues related to system resources of your Azure resources, including monitored Kubernetes clusters. Container insights provides pre-configured alert rules so that you don't have to create your own. This article describes the different types of alert rules you can create and how to enable and configure them.
-This article reviews the experience and provides guidance on configuring and managing these alert rules.
+> [!IMPORTANT]
+> Container insights in Azure Monitor now supports alerts based on Prometheus metrics. If you already use alerts based on custom metrics, you should migrate to Prometheus alerts and disable the equivalent custom metric alerts.
+## Types of metric alert rules
+There are two types of metric alert rules used by Container insights, based on either Prometheus metrics or custom metrics. See a list of the specific alert rules for each at [Alert rule details](#alert-rule-details).
-If you're not familiar with Azure Monitor alerts, see [Overview of alerts in Microsoft Azure](../alerts/alerts-overview.md) before you start. To learn more about metric alerts, see [Metric alerts in Azure Monitor](../alerts/alerts-metric-overview.md).
+| Alert rule type | Description |
+|:|:|
+| [Prometheus rules](#prometheus-alert-rules) | Alert rules that use metrics stored in [Azure Monitor managed service for Prometheus (preview)](../essentials/prometheus-metrics-overview.md). There are two sets of Prometheus alert rules that you can choose to enable.<br><br>- *Community alerts* are hand-picked alert rules from the Prometheus community. Use this set of alert rules if you don't have any other alert rules enabled.<br>- *Recommended alerts* are the equivalent of the custom metric alert rules. Use this set if you're migrating from custom metrics to Prometheus metrics and want to retain identical functionality. |
+| [Metric rules](#metrics-alert-rules) | Alert rules that use [custom metrics collected for your Kubernetes cluster](container-insights-custom-metrics.md). Use these alert rules if you're not ready to move to Prometheus metrics yet or if you want to manage your alert rules in the Azure portal. |
-> [!NOTE]
-> Beginning October 8, 2021, three alerts have been updated to correctly calculate the alert condition: **Container CPU %**, **Container working set memory %**, and **Persistent Volume Usage %**. These new alerts have the same names as their corresponding previously available alerts, but they use new, updated metrics. We recommend that you disable the alerts that use the "Old" metrics, described in this article, and enable the "New" metrics. The "Old" metrics will no longer be available in recommended alerts after they are disabled, but you can manually re-enable them.
-## Prerequisites
+## Prometheus alert rules
+[Prometheus alert rules](../alerts/alerts-types.md#prometheus-alerts-preview) use metric data from your Kubernetes cluster sent to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).
-Before you start, confirm the following:
+### Prerequisites
+- Your cluster must be configured to send metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). See [Collect Prometheus metrics from Kubernetes cluster with Container insights](container-insights-prometheus-metrics-addon.md).
-* Custom metrics are only available in a subset of Azure regions. A list of supported regions is documented in [Supported regions](../essentials/metrics-custom-overview.md#supported-regions).
+### Enable alert rules
-* To support metric alerts and the introduction of additional metrics, the minimum agent version required is **mcr.microsoft.com/azuremonitor/containerinsights/ciprod:ciprod05262020** for AKS and **mcr.microsoft.com/azuremonitor/containerinsights/ciprod:ciprod09252020** for Azure Arc-enabled Kubernetes cluster.
+The only method currently available for creating Prometheus alert rules is a Resource Manager template.
- To verify your cluster is running the newer version of the agent, you can either:
+1. Download the template that includes the set of alert rules that you want to enable. See [Alert rule details](#alert-rule-details) for a listing of the rules for each.
- * Run the command: `kubectl describe pod <omsagent-pod-name> --namespace=kube-system`. In the status returned, note the value under **Image** for omsagent in the *Containers* section of the output.
- * On the **Nodes** tab, select the cluster node and on the **Properties** pane to the right, note the value under **Agent Image Tag**.
+ - [Community alerts](https://aka.ms/azureprometheus-communityalerts)
+ - [Recommended alerts](https://aka.ms/azureprometheus-recommendedalerts)
- The value shown for AKS should be version **ciprod05262020** or later. The value shown for Azure Arc-enabled Kubernetes cluster should be version **ciprod09252020** or later. If your cluster has an older version, see [How to upgrade the Container insights agent](container-insights-manage-agent.md#upgrade-agent-on-aks-cluster) for steps to get the latest version.
+2. Deploy the template using any standard method for installing Resource Manager templates. See [Resource Manager template samples for Azure Monitor](../resource-manager-samples.md#deploy-the-sample-templates) for guidance.
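+
+For example, a minimal CLI sketch that assumes you saved the downloaded rules as *prometheusAlerts.json* (the file name is arbitrary, and you'll need to supply whatever parameters the template defines):
+
+```azurecli
+# Deploy the alert rule template to the resource group of the target cluster
+az deployment group create --resource-group <cluster-resource-group-name> --template-file ./prometheusAlerts.json --parameters <parameter-name>=<value>
+```
+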
- For more information related to the agent release, see [agent release history](https://github.com/microsoft/docker-provider/tree/ci_feature_prod). To verify metrics are being collected, you can use Azure Monitor metrics explorer and verify from the **Metric namespace** that **insights** is listed. If it is, you can go ahead and start setting up the alerts. If you don't see any metrics collected, the cluster Service Principal or MSI is missing the necessary permissions. To verify the SPN or MSI is a member of the **Monitoring Metrics Publisher** role, follow the steps described in the section [Upgrade per cluster using Azure CLI](container-insights-update-metrics.md#update-one-cluster-by-using-the-azure-cli) to confirm and set role assignment.
-
-> [!TIP]
-> Download the new ConfigMap from [here](https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml).
+> [!NOTE]
+> While the Prometheus alert rules could be created in a different resource group from the target resource, you should use the same resource group as your target resource.
-## Alert rules overview
+### Edit alert rules
-To alert on what matters, Container insights includes the following metric alerts for your AKS and Azure Arc-enabled Kubernetes clusters:
+ To edit the query and threshold or configure an action group for your alert rules, edit the appropriate values in the ARM template and redeploy it using any deployment method.
-|Name| Description |Default threshold |
-|-|-||
-|**(New)Average container CPU %** |Calculates average CPU used per container.|When average CPU usage per container is greater than 95%.|
-|**(New)Average container working set memory %** |Calculates average working set memory used per container.|When average working set memory usage per container is greater than 95%. |
-|Average CPU % |Calculates average CPU used per node. |When average node CPU utilization is greater than 80% |
-| Daily Data Cap Breach | When data cap is breached| When the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md) |
-|Average Disk Usage % |Calculates average disk usage for a node.|When disk usage for a node is greater than 80%. |
-|**(New)Average Persistent Volume Usage %** |Calculates average PV usage per pod. |When average PV usage per pod is greater than 80%.|
-|Average Working set memory % |Calculates average Working set memory for a node. |When average Working set memory for a node is greater than 80%. |
-|Restarting container count |Calculates number of restarting containers. | When container restarts are greater than 0. |
-|Failed Pod Counts |Calculates if any pod in failed state.|When a number of pods in failed state are greater than 0. |
-|Node NotReady status |Calculates if any node is in NotReady state.|When a number of nodes in NotReady state are greater than 0. |
-|OOM Killed Containers |Calculates number of OOM killed containers. |When a number of OOM killed containers is greater than 0. |
-|Pods ready % |Calculates the average ready state of pods. |When ready state of pods is less than 80%.|
-|Completed job count |Calculates number of jobs completed more than six hours ago. |When number of stale jobs older than six hours is greater than 0.|
+### Configure alertable metrics in ConfigMaps
-There are common properties across all of these alert rules:
+Perform the following steps to configure your ConfigMap configuration file to override the default utilization thresholds. These steps are applicable only for the following alertable metrics:
-* All alert rules are metric based.
+- *cpuExceededPercentage*
+- *cpuThresholdViolated*
+- *memoryRssExceededPercentage*
+- *memoryRssThresholdViolated*
+- *memoryWorkingSetExceededPercentage*
+- *memoryWorkingSetThresholdViolated*
+- *pvUsageExceededPercentage*
+- *pvUsageThresholdViolated*
-* All alert rules are disabled by default.
+> [!TIP]
+> Download the new ConfigMap from [here](https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml).
-* All alert rules are evaluated once per minute and they look back at last 5 minutes of data.
-* Alerts rules do not have an action group assigned to them by default. You can add an [action group](../alerts/action-groups.md) to the alert either by selecting an existing action group or creating a new action group while editing the alert rule.
+1. Edit the ConfigMap YAML file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]` or `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`.
-* You can modify the threshold for alert rules by directly editing them. However, refer to the guidance provided in each alert rule before modifying its threshold.
+ - **Example**. Use the following ConfigMap configuration to modify the *cpuExceededPercentage* threshold to 90%:
-The following alert-based metrics have unique behavior characteristics compared to the other metrics:
+ ```
+ [alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
+ # Threshold for container cpu, metric will be sent only when cpu utilization exceeds or becomes equal to the following percentage
+ container_cpu_threshold_percentage = 90.0
+ # Threshold for container memoryRss, metric will be sent only when memory rss exceeds or becomes equal to the following percentage
+ container_memory_rss_threshold_percentage = 95.0
+ # Threshold for container memoryWorkingSet, metric will be sent only when memory working set exceeds or becomes equal to the following percentage
+ container_memory_working_set_threshold_percentage = 95.0
+ ```
-* *completedJobsCount* metric is only sent when there are jobs that are completed greater than six hours ago.
+ - **Example**. Use the following ConfigMap configuration to modify the *pvUsageExceededPercentage* threshold to 80%:
-* *containerRestartCount* metric is only sent when there are containers restarting.
+ ```
+ [alertable_metrics_configuration_settings.pv_utilization_thresholds]
+ # Threshold for persistent volume usage bytes, metric will be sent only when persistent volume utilization exceeds or becomes equal to the following percentage
+ pv_usage_threshold_percentage = 80.0
+ ```
-* *oomKilledContainerCount* metric is only sent when there are OOM killed containers.
+2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
-* *cpuExceededPercentage*, *memoryRssExceededPercentage*, and *memoryWorkingSetExceededPercentage* metrics are sent when the CPU, memory Rss, and Memory Working set values exceed the configured threshold (the default threshold is 95%). *cpuThresholdViolated*, *memoryRssThresholdViolated*, and *memoryWorkingSetThresholdViolated* metrics are equal to 0 is the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. Meaning, if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for their container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file.
+ Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-* *pvUsageExceededPercentage* metric is sent when the persistent volume usage percentage exceeds the configured threshold (the default threshold is 60%). *pvUsageThresholdViolated* metric is equal to 0 when the PV usage percentage is below the threshold and is equal 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule. Meaning, if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file. Collection of persistent volume metrics with claims in the *kube-system* namespace are excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. See [Metric collection settings](./container-insights-agent-config.md#metric-collection-settings) for details.
+The configuration change can take a few minutes to take effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods; they don't all restart at the same time. When the restarts are finished, a message similar to the following example is displayed and includes the result: `configmap "container-azm-ms-agentconfig" created`.
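+
+To confirm that the rolling restart has completed, one option (a sketch; the grep pattern assumes the default agent pod naming) is to list the agent pods and check their status and age:
+
+```
+kubectl get pods -n kube-system | grep omsagent
+```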
-## Metrics collected
+## Metrics alert rules
+[Metric alert rules](../alerts/alerts-types.md#metric-alerts) use [custom metric data from your Kubernetes cluster](container-insights-custom-metrics.md).
-The following metrics are enabled and collected, unless otherwise specified, as part of this feature. The metrics in **bold** with label "Old" are the ones replaced by "New" metrics collected for correct alert evaluation.
-|Metric namespace |Metric |Description |
-||-||
-|Insights.container/nodes |cpuUsageMillicores |CPU utilization in millicores by host.|
-|Insights.container/nodes |cpuUsagePercentage, cpuUsageAllocatablePercentage (preview) |CPU usage percentage by node and allocatable respectively.|
-|Insights.container/nodes |memoryRssBytes |Memory RSS utilization in bytes by host.|
-|Insights.container/nodes |memoryRssPercentage, memoryRssAllocatablePercentage (preview) |Memory RSS usage percentage by host and allocatable respectively.|
-|Insights.container/nodes |memoryWorkingSetBytes |Memory Working Set utilization in bytes by host.|
-|Insights.container/nodes |memoryWorkingSetPercentage, memoryRssAllocatablePercentage (preview) |Memory Working Set usage percentage by host and allocatable respectively.|
-|Insights.container/nodes |nodesCount |Count of nodes by status.|
-|Insights.container/nodes |diskUsedPercentage |Percentage of disk used on the node by device.|
-|Insights.container/pods |podCount |Count of pods by controller, namespace, node, and phase.|
-|Insights.container/pods |completedJobsCount |Completed jobs count older user configurable threshold (default is six hours) by controller, Kubernetes namespace. |
-|Insights.container/pods |restartingContainerCount |Count of container restarts by controller, Kubernetes namespace.|
-|Insights.container/pods |oomKilledContainerCount |Count of OOMkilled containers by controller, Kubernetes namespace.|
-|Insights.container/pods |podReadyPercentage |Percentage of pods in ready state by controller, Kubernetes namespace.|
-|Insights.container/containers |**(Old)cpuExceededPercentage** |CPU utilization percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.<br> Collected |
-|Insights.container/containers |**(New)cpuThresholdViolated** |Metric triggered when CPU utilization percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.<br> Collected |
-|Insights.container/containers |**(Old)memoryRssExceededPercentage** |Memory RSS percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
-|Insights.container/containers |**(New)memoryRssThresholdViolated** |Metric triggered when Memory RSS percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
-|Insights.container/containers |**(Old)memoryWorkingSetExceededPercentage** |Memory Working Set percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
-|Insights.container/containers |**(New)memoryWorkingSetThresholdViolated** |Metric triggered when Memory Working Set percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
-|Insights.container/persistentvolumes |**(Old)pvUsageExceededPercentage** |PV utilization percentage for persistent volumes exceeding user configurable threshold (default is 60.0) by claim name, Kubernetes namespace, volume name, pod name, and node name.|
-|Insights.container/persistentvolumes |**(New)pvUsageThresholdViolated** |Metric triggered when PV utilization percentage for persistent volumes exceeding user configurable threshold (default is 60.0) by claim name, Kubernetes namespace, volume name, pod name, and node name.
+### Prerequisites
+ - You may need to enable collection of custom metrics for your cluster. See [Metrics collected by Container insights](container-insights-custom-metrics.md).
+ - See the supported regions for custom metrics at [Supported regions](../essentials/metrics-custom-overview.md#supported-regions).
-## Enable alert rules
-Follow these steps to enable the metric alerts in Azure Monitor from the Azure portal. To enable using a Resource Manager template, see [Enable with a Resource Manager template](#enable-with-a-resource-manager-template).
+### Enable and configure alert rules
-### From the Azure portal
+### [Azure portal](#tab/azure-portal)
-This section walks through enabling Container insights metric alert (preview) from the Azure portal.
+#### Enable alert rules
-1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. From the **Insights** menu for your cluster, select **Recommended alerts**.
-2. Access to the Container insights metrics alert (preview) feature is available directly from an AKS cluster by selecting **Insights** from the left pane in the Azure portal.
+ :::image type="content" source="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" lightbox="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" alt-text="Screenshot showing recommended alerts option in Container insights.":::
-3. From the command bar, select **Recommended alerts**.
- ![Recommended alerts option in Container insights](./media/container-insights-metric-alerts/command-bar-recommended-alerts.png)
+2. Toggle the **Status** for each alert rule that you want to enable. The alert rule is created, and the rule name updates to include a link to the new alert resource.
-4. The **Recommended alerts** property pane automatically displays on the right side of the page. By default, all alert rules in the list are disabled. After selecting **Enable**, the alert rule is created and the rule name updates to include a link to the alert resource.
+ :::image type="content" source="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" lightbox="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" alt-text="Screenshot showing list of recommended alerts and option for enabling each.":::
- ![Recommended alerts properties pane](./media/container-insights-metric-alerts/recommended-alerts-pane.png)
+3. Alert rules aren't associated with an [action group](../alerts/action-groups.md) to notify users that an alert has been triggered. Select **No action group assigned** to open the **Action Groups** page, and then specify an existing action group or create one by selecting **Create action group**.
- After selecting the **Enable/Disable** toggle to enable the alert, an alert rule is created and the rule name updates to include a link to the actual alert resource.
+ :::image type="content" source="media/container-insights-metric-alerts/select-action-group.png" lightbox="media/container-insights-metric-alerts/select-action-group.png" alt-text="Screenshot showing selection of an action group.":::
- ![Enable alert rule](./media/container-insights-metric-alerts/recommended-alerts-pane-enable.png)
+#### Edit alert rules
-5. Alert rules are not associated with an [action group](../alerts/action-groups.md) to notify users that an alert has been triggered. Select **No action group assigned** and on the **Action Groups** page, specify an existing or create an action group by selecting **Add** or **Create**.
+To edit the threshold for a rule or to configure an [action group](../alerts/action-groups.md) for your AKS cluster:
- ![Select an action group](./media/container-insights-metric-alerts/select-action-group.png)
+1. From Container insights for your cluster, select **Recommended alerts**.
+2. Select the **Rule Name** to open the alert rule.
+3. See [Create an alert rule](../alerts/alerts-create-new-alert-rule.md?tabs=metric) for details on the alert rule settings.
-### Enable with a Resource Manager template
+#### Disable alert rules
+1. From Container insights for your cluster, select **Recommended alerts**.
+2. Change the status for the alert rule to **Disabled**.
-You can use an Azure Resource Manager template and parameters file to create the included metric alerts in Azure Monitor.
+### [Resource Manager](#tab/resource-manager)
+For custom metrics, a separate Resource Manager template is provided for each alert rule.
-The basic steps are as follows:
+#### Enable alert rules
1. Download one or all of the available templates that describe how to create the alert from [GitHub](https://github.com/microsoft/Docker-Provider/tree/ci_dev/alerts/recommended_alerts_ARM).
2. Create and use a [parameters file](../../azure-resource-manager/templates/parameter-files.md) as JSON to set the values required to create the alert rule.
+3. Deploy the template by using any of the standard methods for installing Resource Manager templates. See [Resource Manager template samples for Azure Monitor](../resource-manager-samples.md) for guidance.
-3. Deploy the template from the Azure portal, PowerShell, or Azure CLI.
-
-#### Deploy through Azure portal
-
-1. Download and save to a local folder, the Azure Resource Manager template and parameter file, to create the alert rule using the following commands:
-
-2. To deploy a customized template through the portal, select **Create a resource** from the [Azure portal](https://portal.azure.com).
-
-3. Search for **template**, and then select **Template deployment**.
-
-4. Select **Create**.
-
-5. You see several options for creating a template, select **Build your own template in editor**.
-
-6. On the **Edit template page**, select **Load file** and then select the template file.
-
-7. On the **Edit template** page, select **Save**.
-
-8. On the **Custom deployment** page, specify the following and then when complete select **Purchase** to deploy the template and create the alert rule.
-
- * Resource group
- * Location
- * Alert Name
- * Cluster Resource ID
+#### Disable alert rules
+To disable custom alert rules, use the same Resource Manager template to create the rule, but change the `isEnabled` value in the parameters file to `false`.
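+
+For example, a redeployment sketch with placeholder names that overrides `isEnabled` from the command line instead of editing the parameters file:
+
+```azurecli
+# Sketch: redeploy the same alert rule template with isEnabled overridden to false (placeholder names).
+az deployment group create \
+  --name ci-metric-alert-disable \
+  --resource-group <resource-group-of-target-cluster> \
+  --template-file <alert-template>.json \
+  --parameters @<alert-template>.parameters.json \
+  --parameters isEnabled=false
+```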
-#### Deploy with Azure PowerShell or CLI
-
-1. Download and save to a local folder, the Azure Resource Manager template and parameter file, to create the alert rule using the following commands:
-
-2. You can create the metric alert using the template and parameters file using PowerShell or Azure CLI.
-
- Using Azure PowerShell
-
- ```powershell
- Connect-AzAccount
-
- Select-AzSubscription -SubscriptionName <yourSubscriptionName>
- New-AzResourceGroupDeployment -Name CIMetricAlertDeployment -ResourceGroupName ResourceGroupofTargetResource `
- -TemplateFile templateFilename.json -TemplateParameterFile templateParameterFilename.parameters.json
- ```
-
- Using Azure CLI
-
- ```azurecli
- az login
-
- az deployment group create \
- --name AlertDeployment \
- --resource-group ResourceGroupofTargetResource \
- --template-file templateFileName.json \
- --parameters @templateParameterFilename.parameters.json
- ```
+
- >[!NOTE]
- >While the metric alert could be created in a different resource group to the target resource, we recommend using the same resource group as your target resource.
-## Edit alert rules
+## Alert rule details
+The following sections provide details on the alert rules provided by Container insights.
+
+### Community alert rules
+These are hand-picked alerts from the Prometheus community. Source code for these mixin alerts can be found on [GitHub](https://aka.ms/azureprometheus-mixins).
+
+- KubeJobNotCompleted
+- KubeJobFailed
+- KubePodCrashLooping
+- KubePodNotReady
+- KubeDeploymentReplicasMismatch
+- KubeStatefulSetReplicasMismatch
+- KubeHpaReplicasMismatch
+- KubeHpaMaxedOut
+- KubeQuotaAlmostFull
+- KubeMemoryQuotaOvercommit
+- KubeCPUQuotaOvercommit
+- KubeVersionMismatch
+- KubeNodeNotReady
+- KubeNodeReadinessFlapping
+- KubeletTooManyPods
+- KubeNodeUnreachable
+
+### Recommended alert rules
+The following table lists the recommended alert rules that you can enable for either Prometheus metrics or custom metrics.
+
+| Prometheus alert name | Custom metric alert name | Description | Default threshold |
+|:---|:---|:---|:---|
+| Average container CPU % | Average container CPU % | Calculates average CPU used per container. | 95% |
+| Average container working set memory % | Average container working set memory % | Calculates average working set memory used per container. | 95% |
+| Average CPU % | Average CPU % | Calculates average CPU used per node. | 80% |
+| Average Disk Usage % | Average Disk Usage % | Calculates average disk usage for a node. | 80% |
+| Average Persistent Volume Usage % | Average Persistent Volume Usage % | Calculates average PV usage per pod. | 80% |
+| Average Working set memory % | Average Working set memory % | Calculates average Working set memory for a node. | 80% |
+| Restarting container count | Restarting container count | Calculates number of restarting containers. | 0 |
+| Failed Pod Counts | Failed Pod Counts | Calculates the number of pods in a failed state. | 0 |
+| Node NotReady status | Node NotReady status | Calculates if any node is in NotReady state. | 0 |
+| OOM Killed Containers | OOM Killed Containers | Calculates number of OOM killed containers. | 0 |
+| Pods ready % | Pods ready % | Calculates the average ready state of pods. | 80% |
+| Completed job count | Completed job count | Calculates number of jobs completed more than six hours ago. | 0 |
-You can view and manage Container insights alert rules, to edit its threshold or configure an [action group](../alerts/action-groups.md) for your AKS cluster. While you can perform these actions from the Azure portal and Azure CLI, it can also be done directly from your AKS cluster in Container insights.
+> [!NOTE]
+> The recommended alert rules in the Azure portal also include a log alert rule called *Daily Data Cap Breach*. This rule alerts when the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md). This alert rule is not included with the Prometheus alert rules.
+>
+> You can create this rule on your own by creating a [log alert rule](../alerts/alerts-types.md#log-alerts) using the query `_LogOperation | where Operation == "Data collection Status" | where Detail contains "OverQuota"`.
-1. From the command bar, select **Recommended alerts**.
-2. To modify the threshold, on the **Recommended alerts** pane, select the enabled alert. In the **Edit rule**, select the **Alert criteria** you want to edit.
+Common properties across all of these alert rules include:
- * To modify the alert rule threshold, select the **Condition**.
- * To specify an existing or create an action group, select **Add** or **Create** under **Action group**
+- All alert rules are evaluated once per minute, and they look back at the last 5 minutes of data.
+- All alert rules are disabled by default.
+- Alert rules don't have an action group assigned to them by default. You can add an [action group](../alerts/action-groups.md) to the alert either by selecting an existing action group or creating a new action group while editing the alert rule.
+- You can modify the threshold for alert rules by directly editing the template and redeploying it. Refer to the guidance provided in each alert rule before modifying its threshold.
-To view alerts created for the enabled rules, in the **Recommended alerts** pane select **View in alerts**. You are redirected to the alert menu for the AKS cluster, where you can see all the alerts currently created for your cluster.
+The following metrics have unique behavior characteristics:
-## Configure alertable metrics in ConfigMaps
+**Prometheus and custom metrics**
+- The `completedJobsCount` metric is only sent when there are jobs that were completed more than six hours ago.
+- `containerRestartCount` metric is only sent when there are containers restarting.
+- `oomKilledContainerCount` metric is only sent when there are OOM killed containers.
+- The `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and memory working set values exceed the configured threshold (the default threshold is 95%). The `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule.
+- The `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold (the default threshold is 60%). The `pvUsageThresholdViolated` metric is equal to 0 when the PV usage percentage is below the threshold and is equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule.
-Perform the following steps to configure your ConfigMap configuration file to override the default utilization thresholds. These steps are applicable only for the following alertable metrics:
+
+**Prometheus only**
+- If you want to collect `pvUsageExceededPercentage` and analyze it from [metrics explorer](../essentials/metrics-getting-started.md), you should configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `alertable_metrics_configuration_settings.pv_utilization_thresholds`. See [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file. Collection of persistent volume metrics with claims in the *kube-system* namespace is excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. See [Metric collection settings](./container-insights-agent-config.md#metric-collection-settings) for details.
+- The `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and memory working set values exceed the configured threshold (the default threshold is 95%). The *cpuThresholdViolated*, *memoryRssThresholdViolated*, and *memoryWorkingSetThresholdViolated* metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. This means that if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend that you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for the container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. See [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file.
-* *cpuExceededPercentage*
-* *cpuThresholdViolated*
-* *memoryRssExceededPercentage*
-* *memoryRssThresholdViolated*
-* *memoryWorkingSetExceededPercentage*
-* *memoryWorkingSetThresholdViolated*
-* *pvUsageExceededPercentage*
-* *pvUsageThresholdViolated*
-1. Edit the ConfigMap YAML file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]` or `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`.
- - To modify the *cpuExceededPercentage* threshold to 90% and begin collection of this metric when that threshold is met and exceeded, configure the ConfigMap file using the following example:
+## View alerts
+View fired alerts for your cluster from **Alerts** in the **Monitor** menu in the Azure portal, along with other fired alerts in your subscription. You can also select **View in alerts** from the **Recommended alerts** pane to view alerts from custom metrics.
- ```
- [alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
- # Threshold for container cpu, metric will be sent only when cpu utilization exceeds or becomes equal to the following percentage
- container_cpu_threshold_percentage = 90.0
- # Threshold for container memoryRss, metric will be sent only when memory rss exceeds or becomes equal to the following percentage
- container_memory_rss_threshold_percentage = 95.0
- # Threshold for container memoryWorkingSet, metric will be sent only when memory working set exceeds or becomes equal to the following percentage
- container_memory_working_set_threshold_percentage = 95.0
- ```
-
- - To modify the *pvUsageExceededPercentage* threshold to 80% and begin collection of this metric when that threshold is met and exceeded, configure the ConfigMap file using the following example:
-
- ```
- [alertable_metrics_configuration_settings.pv_utilization_thresholds]
- # Threshold for persistent volume usage bytes, metric will be sent only when persistent volume utilization exceeds or becomes equal to the following percentage
- pv_usage_threshold_percentage = 80.0
- ```
-
-2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
-
- Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
+> [!NOTE]
+> Prometheus alerts will not currently be displayed when you select **Alerts** from your AKS cluster because the alert rule doesn't use the cluster as its target.
-The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods; they don't all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following example and includes the result: `configmap "container-azm-ms-agentconfig" created`.
## Next steps
-- View [log query examples](container-insights-log-query.md) to see pre-defined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.
-
-- To learn more about Azure Monitor and how to monitor other aspects of your Kubernetes cluster, see [View Kubernetes cluster performance](container-insights-analyze.md).
+- [Read about the different alert rule types in Azure Monitor](../alerts/alerts-types.md).
+- [Read about alerting rule groups in Azure Monitor managed service for Prometheus](../essentials/prometheus-rule-groups.md).
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
Title: Enable Container insights description: This article describes how to enable and configure Container insights so that you can understand how your container is performing and what performance-related issues have been identified. + Last updated 08/29/2022
Container insights supports the following environments:
- [AKS engine](https://github.com/Azure/aks-engine)
- [Red Hat OpenShift](https://docs.openshift.com/container-platform/latest/welcome/index.html) version 4.x
-## Supported Kubernetes versions
The versions of Kubernetes and support policy are the same as those [supported in Azure Kubernetes Service (AKS)](../../aks/supported-kubernetes-versions.md).
+### Differences between Windows and Linux clusters
+
+The main differences in monitoring a Windows Server cluster compared to a Linux cluster include:
+
+- Windows doesn't have a Memory RSS metric, and as a result it isn't available for Windows nodes and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.
+- Disk storage capacity information isn't available for Windows nodes.
+- Only pod environments are monitored, not Docker environments.
+- With the preview release, a maximum of 30 Windows Server containers are supported. This limitation doesn't apply to Linux containers.
>[!NOTE]
> Container insights support for Windows Server 2022 operating system is in public preview.
+
+## Installation options
+
+- [AKS cluster](container-insights-enable-aks.md)
+- [AKS cluster with Azure Policy](container-insights-enable-aks-policy.md)
+- [Azure Arc-enabled cluster](container-insights-enable-arc-enabled-clusters.md)
+- [Hybrid Kubernetes clusters](container-insights-hybrid-setup.md)
+
+
## Prerequisites
Before you start, make sure that you've met the following requirements:
-**Log Analytics workspace**
+### Log Analytics workspace
Container insights stores its data in a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md). It supports workspaces in the regions that are listed in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). For a list of the supported mapping pairs to use for the default workspace, see [Region mappings supported by Container insights](container-insights-region-mapping.md).
-You can let the onboarding experience create a default workspace in the default resource group of the AKS cluster subscription. If you already have a workspace though, then you will most likely want to use that one. See [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md) for details.
+You can let the onboarding experience create a Log Analytics workspace in the default resource group of the AKS cluster subscription. If you already have a workspace, you'll most likely want to use that one. See [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md) for details.
An AKS cluster can be attached to a Log Analytics workspace in a different Azure subscription in the same Azure AD tenant. This cannot currently be done with the Azure portal, but it can be done with the Azure CLI or a Resource Manager template.
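
For example, one way to do this with the Azure CLI is shown in the following sketch. The cluster name, resource groups, and workspace name are placeholders; the workspace resource ID points to the workspace in the other subscription.

```azurecli
# Sketch: enable Container insights on an existing AKS cluster and attach it to a workspace in another subscription (placeholder values).
az aks enable-addons --addons monitoring \
  --name <cluster-name> \
  --resource-group <cluster-resource-group> \
  --workspace-resource-id "/subscriptions/<other-subscription-id>/resourceGroups/<workspace-resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```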
+### Azure Monitor workspace (preview)
+If you're going to configure the cluster to [collect Prometheus metrics](container-insights-prometheus-metrics-addon.md) with [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md), you must have an Azure Monitor workspace, which is where Prometheus metrics are stored. You can let the onboarding experience create an Azure Monitor workspace in the default resource group of the AKS cluster subscription or use an existing Azure Monitor workspace.
-**Permissions**
+### Permissions
To enable container monitoring, you require the following permissions:

- Member of the [Log Analytics contributor](../logs/manage-access.md#azure-rbac) role.
- Member of the [*Owner* group](../../role-based-access-control/built-in-roles.md#owner) on any AKS cluster resources.
-To enable container monitoring, you require the following permissions:
+To view data once container monitoring is enabled, you require the following permissions:
- Member of [Log Analytics reader](../logs/manage-access.md#azure-rbac) role if you aren't already a member of [Log Analytics contributor](../logs/manage-access.md#azure-rbac).
-**Prometheus**
-Prometheus metrics aren't collected by default. Before you [configure the agent](container-insights-prometheus-integration.md) to collect the metrics, it's important to review the [Prometheus documentation](https://prometheus.io/) to understand what data can be scraped and what methods are supported.
-
-**Kubelet secure port**
-Log Analytics Containerized Linux Agent (replicaset pod) makes API calls to all the Windows nodes on Kubelet Secure Port (10250) within the cluster to collect Node and Container Performance related Metrics. Kubelet secure port (:10250) should be opened in the cluster's virtual network for both inbound and outbound for Windows Node and container performance related metrics collection to work.
+### Kubelet secure port
+The containerized Linux agent (replicaset pod) makes API calls to all the Windows nodes on the Kubelet secure port (10250) within the cluster to collect node and container performance-related metrics. The Kubelet secure port (:10250) should be open in the cluster's virtual network for both inbound and outbound traffic for Windows node and container performance-related metrics collection to work.
If you have a Kubernetes cluster with Windows nodes, review and configure the network security group and network policies to make sure the Kubelet secure port (:10250) is open for both inbound and outbound traffic in the cluster's virtual network.
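
As an illustration only (the NSG name, rule name, and priority are placeholders, and your network layout may differ), an inbound rule for port 10250 could be created with the Azure CLI:

```azurecli
# Sketch: allow inbound TCP 10250 (Kubelet secure port) on the NSG used by the cluster nodes (placeholder names and priority).
az network nsg rule create \
  --resource-group <node-resource-group> \
  --nsg-name <cluster-nsg-name> \
  --name AllowKubeletSecurePort \
  --priority 1100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 10250
```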
+### Network firewall requirements
+See [Network firewall requirements](#network-firewall-requirements) for details on the firewall requirements for the AKS cluster.
+
+## Authentication
+Container insights now supports authentication using managed identity (preview). This is a secure and simplified authentication model where the monitoring agent uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a *Monitoring Metrics Publisher* role to the cluster.
+
+> [!NOTE]
+> Container Insights preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. Container Insights previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see [Frequently asked questions about Azure Kubernetes Service (AKS)](../../aks/faq.md).
+
+## Agent
+
+### Azure Monitor agent
+When using managed identity authentication (preview), Container insights relies on a containerized Azure Monitor agent for Linux. This specialized agent collects performance and event data from all nodes in the cluster, and the agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
++
+### Log Analytics agent
+When not using managed identity authentication, Container insights relies on a containerized Log Analytics agent for Linux. This specialized agent collects performance and event data from all nodes in the cluster, and the agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
+
+The agent version is *microsoft/oms:ciprod04202018* or later, and it's represented by a date in the following format: *mmddyyyy*. When a new version of the agent is released, it's automatically upgraded on your managed Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS). To track which versions are released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
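+
+One way to check which agent version is currently running in your cluster (a sketch; the pod name is a placeholder and the grep pattern assumes the default agent pod naming) is to look at the image tag on an omsagent pod:
+
+```
+kubectl get pods -n kube-system | grep omsagent
+kubectl describe pod <omsagent-pod-name> --namespace=kube-system | grep Image
+```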
++
+>[!NOTE]
+>With the general availability of Windows Server support for AKS, an AKS cluster with Windows Server nodes has a preview agent installed as a daemonset pod on each individual Windows Server node to collect logs and forward them to Log Analytics. For performance metrics, a Linux node that's automatically deployed in the cluster as part of the standard deployment collects and forwards the data to Azure Monitor on behalf of all Windows nodes in the cluster.
++
+> [!NOTE]
+> If you've already deployed an AKS cluster and enabled monitoring by using either the Azure CLI or an Azure Resource Manager template, you can't use `kubectl` to upgrade, delete, redeploy, or deploy the agent. The template needs to be deployed in the same resource group as the cluster.
## Network firewall requirements
-The following table lists the proxy and firewall configuration information that's required for the containerized agent to communicate with Container insights. All network traffic from the agent is outbound to Azure Monitor.
+The following table lists the proxy and firewall configuration information required for the containerized agent to communicate with Container insights. All network traffic from the agent is outbound to Azure Monitor.
**Azure public cloud**
The following table lists the additional firewall configuration required for man
|Agent resource| Purpose | Port | |--||| | `global.handler.control.monitor.azure.com` | Access control service | 443 |
+| `<cluster-region-name>.ingest.monitor.azure.com` | Azure Monitor managed service for Prometheus - metrics ingestion endpoint (DCE) | 443 |
| `<cluster-region-name>.handler.control.monitor.azure.com` | Fetch data collection rules for specific AKS cluster | 443 |

**Azure China 21Vianet cloud**
The following table lists the additional firewall configuration required for man
| `<cluster-region-name>.handler.control.monitor.azure.us` | Fetch data collection rules for specific AKS cluster | 443 |
-## Authentication
-Container Insights now supports authentication using managed identity (preview). This is a secure and simplified authentication model where the monitoring agent uses the clusterΓÇÖs managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a *Monitoring Metrics Publisher* role to the cluster.
-
-> [!NOTE]
-> Container Insights preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. Container Insights previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see [Frequently asked questions about Azure Kubernetes Service (AKS)](../../aks/faq.md).
-
-## Agent
-
-### Azure Monitor agent
-When using managed identity authentication (preview), Container insights relies on a containerized Azure Monitor agent for Linux. This specialized agent collects performance and event data from all nodes in the cluster, and the agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
--
-### Log Analytics agent
-When not using managed identity authentication, Container insights relies on a containerized Log Analytics agent for Linux. This specialized agent collects performance and event data from all nodes in the cluster, and the agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
-
-The agent version is *microsoft/oms:ciprod04202018* or later, and it's represented by a date in the following format: *mmddyyyy*. When a new version of the agent is released, it's automatically upgraded on your managed Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS). To track which versions are released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
--
->[!NOTE]
->With the general availability of Windows Server support for AKS, an AKS cluster with Windows Server nodes has a preview agent installed as a daemonset pod on each individual Windows server node to collect logs and forward it to Log Analytics. For performance metrics, a Linux node that's automatically deployed in the cluster as part of the standard deployment collects and forwards the data to Azure Monitor on behalf all Windows nodes in the cluster.
--
-> [!NOTE]
-> If you've already deployed an AKS cluster and enabled monitoring using either the Azure CLI or a Azure Resource Manager template, you can't use `kubectl` to upgrade, delete, redeploy, or deploy the agent. The template needs to be deployed in the same resource group as the cluster.
-
-## Installation options
-To enable Container insights, use one of the methods that's described in the following table:
-
-| Deployment state | Method |
-||--|
-| New Kubernetes cluster | [Enable monitoring for a new AKS cluster using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md) |
-| | [Enable for a new AKS cluster by using the open-source tool Terraform](container-insights-enable-new-cluster.md#enable-using-terraform)|
-| Existing AKS cluster | [Enable monitoring for an existing AKS cluster using the Azure CLI](container-insights-enable-existing-clusters.md?tabs=azure-powershell) |
-| | [Enable for an existing AKS cluster using Terraform](container-insights-enable-existing-clusters.md?tabs=terraform) |
-| | [Enable for an existing AKS cluster from Azure Monitor portal](container-insights-enable-existing-clusters.md?tabs=portal-azure-monitor)|
-| | [Enable directly from an AKS cluster in the Azure portal](container-insights-enable-existing-clusters.md?tabs=portal-aks)|
-| | [Enable for AKS cluster using an Azure Resource Manager template](container-insights-enable-existing-clusters.md?tabs=aks)|
-| Existing non-AKS Kubernetes cluster | [Enable for non-AKS Kubernetes cluster hosted outside of Azure and enabled with Azure Arc using the Azure CLI](container-insights-enable-arc-enabled-clusters.md?tabs=create-cli#create-extension-instance). |
-| | [Enable for non-AKS Kubernetes cluster hosted outside of Azure and enabled with Azure Arc using a preconfigured Azure Resource Manager template](container-insights-enable-arc-enabled-clusters.md?tabs=create-arm#create-extension-instance) |
-| | [Enable for non-AKS Kubernetes cluster hosted outside of Azure and enabled with Azure Arc from the multicluster page Azure Monitor](container-insights-enable-arc-enabled-clusters.md?tabs=create-portal#create-extension-instance) |
## Next steps
Once you've enabled monitoring, you can begin analyzing the performance of your Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS), Azure Stack, or another environment. To learn how to use Container insights, see [View Kubernetes cluster performance](container-insights-analyze.md).
-
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Title: Overview of Container insights | Microsoft Docs
+ Title: Overview of Container insights in Azure Monitor
description: This article describes Container insights, which monitors the AKS Container insights solution, and the value it delivers by monitoring the health of your AKS clusters and Container Instances in Azure. - Previously updated : 08/29/2022+ Last updated : 09/28/2022 # Container insights overview
-Container insights is a feature designed to monitor the performance of container workloads deployed to:
--- Managed Kubernetes clusters hosted on [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md).-- Self-managed Kubernetes clusters hosted on Azure using [AKS Engine](https://github.com/Azure/aks-engine).-- [Azure Container Instances](../../container-instances/container-instances-overview.md).-- Self-managed Kubernetes clusters hosted on [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises.-- [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md).-
-Container insights supports clusters running the Linux and Windows Server 2019 operating system. The container runtimes it supports are Moby and any CRI-compatible runtime such as CRI-O and ContainerD. Docker is no longer supported as a container runtime as of September 2022. For more information about this deprecation, see the [AKS release notes][aks-release-notes].
-
->[!NOTE]
-> Container insights support for Windows Server 2022 operating system and AKS for ARM nodes is in public preview.
-
-Monitoring your containers is critical, especially when you're running a production cluster, at scale, with multiple applications.
-
-Container insights gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md). Log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
+Container insights is a feature designed to monitor the performance of container workloads deployed to the cloud. It gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md). Log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
:::image type="content" source="media/container-insights-overview/azmon-containers-architecture-01.png" lightbox="media/container-insights-overview/azmon-containers-architecture-01.png" alt-text="Overview diagram of Container insights" border="false":::
Container insights gives you performance visibility by collecting memory and pro
Container insights delivers a comprehensive monitoring experience to understand the performance and health of your Kubernetes cluster and container workloads. You can:

-- Identify resource bottlenecks by identifying AKS containers running on the node and their average processor and memory utilization.
+- Identify resource bottlenecks by identifying AKS containers running on the node and their processor and memory utilization.
- Identify processor and memory utilization of container groups and their containers hosted in Azure Container Instances.
- View the controller's or pod's overall performance by identifying where the container resides in a controller or a pod.
- Review the resource utilization of workloads running on the host that are unrelated to the standard processes that support the pod.
- Identify capacity needs and determine the maximum load that the cluster can sustain by understanding the behavior of the cluster under average and heaviest loads.
- Configure alerts to proactively notify you or record when CPU and memory utilization on nodes or containers exceed your thresholds, or when a health state change occurs in the cluster at the infrastructure or nodes health rollup.
-- Integrate with [Prometheus](https://prometheus.io/docs/introduction/overview/) to view application and workload metrics it collects from nodes and Kubernetes by using [queries](container-insights-log-query.md) to create custom alerts and dashboards and perform detailed analysis.
-- Monitor container workloads [deployed to AKS Engine](https://github.com/Azure/aks-engine) on-premises and [AKS Engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview).
-- Monitor container workloads [deployed to Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md).
+- Integrate with [Prometheus](https://aka.ms/azureprometheus-promio-docs) to view application and workload metrics it collects from nodes and Kubernetes by using [queries](container-insights-log-query.md) to create custom alerts and dashboards and perform detailed analysis.
+
The following video provides an intermediate-level deep dive to help you learn about monitoring your AKS cluster with Container insights. The video refers to *Azure Monitor for Containers*, which is the previous name for *Container insights*.

> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA]
+
## Access Container insights
-You can access Container insights in the Azure portal from Azure Monitor or directly from the selected AKS cluster. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page.
+Access Container insights in the Azure portal from **Containers** in the **Monitor** menu or directly from the selected AKS cluster by selecting **Insights**. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page.
-## Differences between Windows and Linux clusters
+## Supported configurations
+
+- Managed Kubernetes clusters hosted on [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md).
+- Self-managed Kubernetes clusters hosted on Azure using [AKS Engine](https://github.com/Azure/aks-engine).
+- [Azure Container Instances](../../container-instances/container-instances-overview.md).
+- Self-managed Kubernetes clusters hosted on [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises.
+- [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md).
-The main differences in monitoring a Windows Server cluster compared to a Linux cluster include:
+Container insights supports clusters running the Linux and Windows Server 2019 operating system. The container runtimes it supports are Moby and any CRI-compatible runtime such as CRI-O and ContainerD. Docker is no longer supported as a container runtime as of September 2022. For more information about this deprecation, see the [AKS release notes][aks-release-notes].
-- Windows doesn't have a Memory RSS metric, and as a result it isn't available for Windows nodes and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.
-- Disk storage capacity information isn't available for Windows nodes.
-- Only pod environments are monitored, not Docker environments.
-- With the preview release, a maximum of 30 Windows Server containers are supported. This limitation doesn't apply to Linux containers.
+>[!NOTE]
+> Container insights support for Windows Server 2022 operating system and AKS for ARM nodes is in public preview.
## Next steps
azure-monitor Container Insights Prometheus Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-integration.md
- Title: Configure Container insights Prometheus integration | Microsoft Docs
-description: This article describes how you can configure the Container insights agent to scrape metrics from Prometheus with your Kubernetes cluster.
- Previously updated : 08/29/2022---
-# Configure scraping of Prometheus metrics with Container insights
-
-[Prometheus](https://prometheus.io/) is a popular open-source metric monitoring solution and is a part of the [Cloud Native Compute Foundation](https://www.cncf.io/). Container insights provides a seamless onboarding experience to collect Prometheus metrics.
-
-Typically, to use Prometheus, you need to set up and manage a Prometheus server with a store. If you integrate with Azure Monitor, a Prometheus server isn't required. You only need to expose the Prometheus metrics endpoint through your exporters or pods (application). Then the containerized agent for Container insights can scrape the metrics for you.
--
->[!NOTE]
->The minimum agent version supported for scraping Prometheus metrics is ciprod07092019. The agent version supported for writing configuration and agent errors in the `KubeMonAgentEvents` table is ciprod10112019. For Red Hat OpenShift v4, the agent version is ciprod04162020 or later.
->
->For more information about the agent versions and what's included in each release, see [Agent release notes](https://github.com/microsoft/Docker-Provider/tree/ci_feature_prod).
->To verify your agent version, select the **Insights** tab of the resource. From the **Nodes** tab, select a node. In the properties pane, note the value of the **Agent Image Tag** property.
-
-Scraping of Prometheus metrics is supported with Kubernetes clusters hosted on:
-- Azure Kubernetes Service (AKS).
-- Azure Stack or on-premises.
-- Azure Arc enabled Kubernetes.
-- Red Hat OpenShift version 4.x through cluster connect to Azure Arc.
-
-### Prometheus scraping settings
-
-Active scraping of metrics from Prometheus is performed from one of two perspectives:
-
-* **Cluster-wide**: HTTP URL and discover targets from listed endpoints of a service, for example, Kubernetes services such as kube-dns and kube-state-metrics, and pod annotations specific to an application. Metrics collected in this context will be defined in the ConfigMap section *[Prometheus data_collection_settings.cluster]*.
-* **Node-wide**: HTTP URL and discover targets from listed endpoints of a service. Metrics collected in this context will be defined in the ConfigMap section *[Prometheus_data_collection_settings.node]*.
-
-| Endpoint | Scope | Example |
-|-|-||
-| Pod annotation | Cluster-wide | Annotations: <br>`prometheus.io/scrape: "true"` <br>`prometheus.io/path: "/mymetrics"` <br>`prometheus.io/port: "8000"` <br>`prometheus.io/scheme: "http"` |
-| Kubernetes service | Cluster-wide | `http://my-service-dns.my-namespace:9100/metrics` <br>`https://metrics-server.kube-system.svc.cluster.local/metrics` |
-| URL/endpoint | Per-node and/or cluster-wide | `http://myurl:9101/metrics` |
-
-When a URL is specified, Container insights only scrapes the endpoint. When Kubernetes service is specified, the service name is resolved with the cluster DNS server to get the IP address. Then the resolved service is scraped.
-
-|Scope | Key | Data type | Value | Description |
-||--|--|-|-|
-| Cluster-wide | | | | Specify any one of the following three methods to scrape endpoints for metrics. |
-| | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
-| | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. Fully qualified domain names must be used here. For example,`kubernetes_services = ["https://metrics-server.kube-system.svc.cluster.local/metrics",http://my-service-dns.my-namespace.svc.cluster.local:9100/metrics]`|
-| | `monitor_kubernetes_pods` | Boolean | true or false | When set to `true` in the cluster-wide settings, the Container insights agent will scrape Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:` |
-| | `prometheus.io/scrape` | Boolean | true or false | Enables scraping of the pod, and `monitor_kubernetes_pods` must be set to `true`. |
-| | `prometheus.io/scheme` | String | http or https | Defaults to scraping over HTTP. If necessary, set to `https`. |
-| | `prometheus.io/path` | String | Comma-separated array | The HTTP resource path from which to fetch metrics. If the metrics path isn't `/metrics`, define it with this annotation. |
-| | `prometheus.io/port` | String | 9102 | Specify a port to scrape from. If the port isn't set, it will default to 9102. |
-| | `monitor_kubernetes_pods_namespaces` | String | Comma-separated array | An allowlist of namespaces to scrape metrics from Kubernetes pods.<br> For example, `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]` |
-| Node-wide | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
-| Node-wide or cluster-wide | `interval` | String | 60s | The collection interval default is one minute (60 seconds). You can modify the collection for either the *[prometheus_data_collection_settings.node]* and/or *[prometheus_data_collection_settings.cluster]* to time units such as s, m, and h. |
-| Node-wide or cluster-wide | `fieldpass`<br> `fielddrop`| String | Comma-separated array | You can specify certain metrics to be collected or not from the endpoint by setting the allow (`fieldpass`) and disallow (`fielddrop`) listing. You must set the allowlist first. |
-
-ConfigMaps is a global list and there can be only one ConfigMap applied to the agent. You can't have another ConfigMaps overruling the collections.
-
-## Configure and deploy ConfigMaps
-
-Perform the following steps to configure your ConfigMap configuration file for the following clusters:
-
-* Azure Kubernetes Service (AKS)
-* Azure Stack or on-premises
-* Red Hat OpenShift version 4.x
-
-1. [Download](https://aka.ms/container-azm-ms-agentconfig) the template ConfigMap YAML file and save it as container-azm-ms-agentconfig.yaml.
-
-1. Edit the ConfigMap YAML file with your customizations to scrape Prometheus metrics.
-
- - To collect Kubernetes services cluster-wide, configure the ConfigMap file by using the following example:
-
- ```
-    prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
-    [prometheus_data_collection_settings.cluster]
- interval = "1m" ## Valid time units are s, m, h.
-    fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
- fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
- kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"]
- ```
-
- - To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file by using the following example:
-
- ```
-    prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
-    [prometheus_data_collection_settings.cluster]
- interval = "1m" ## Valid time units are s, m, h.
-    fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
- fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
- urls = ["http://myurl:9101/metrics"] ## An array of urls to scrape metrics from
- ```
-
- - To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following example in the ConfigMap:
-
- ```
-    prometheus-data-collection-settings: |-
-    # Custom Prometheus metrics data collection settings
-    [prometheus_data_collection_settings.node]
- interval = "1m" ## Valid time units are s, m, h.
-    urls = ["http://$NODE_IP:9103/metrics"]
-    fieldpass = ["metric_to_pass1", "metric_to_pass2"]
-    fielddrop = ["metric_to_drop"]
- ```
-
- >[!NOTE]
- >$NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. It must be all uppercase.
-
- - To configure scraping of Prometheus metrics by specifying a pod annotation:
-
- 1. In the ConfigMap, specify the following configuration:
-
- ```
-      prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
-      [prometheus_data_collection_settings.cluster]
- interval = "1m" ## Valid time units are s, m, h
- monitor_kubernetes_pods = true
- ```
-
- 1. Specify the following configuration for pod annotations:
-
- ```
-      - prometheus.io/scrape:"true" #Enable scraping for this pod
-      - prometheus.io/scheme:"http" #If the metrics endpoint is secured then you will need to set this to `https`, if not default 'http'
-      - prometheus.io/path:"/mymetrics" #If the metrics path is not /metrics, define it with this annotation.
-      - prometheus.io/port:"8000" #If port is not 9102 use this annotation
- ```
-
- If you want to restrict monitoring to specific namespaces for pods that have annotations, for example, only include pods dedicated for production workloads, set the `monitor_kubernetes_pod` to `true` in ConfigMap. Then add the namespace filter `monitor_kubernetes_pods_namespaces` to specify the namespaces to scrape from. An example is `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`.
-
-1. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
-
- Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-
-The configuration change can take a few minutes to finish before taking effect. You must restart all omsagent pods manually. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
-
-## Apply updated ConfigMap
-
-If you've already deployed a ConfigMap to your cluster and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used. Then apply it by using the same commands as before.
-
-For the following Kubernetes environments:
--- Azure Kubernetes Service (AKS)-- Azure Stack or on-premises-- Red Hat OpenShift version 4.x-
-run the command `kubectl apply -f <configmap_yaml_file.yaml>`.
-
-For example, open the file in your default editor, modify and save it, and then run the command `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-
-The configuration change can take a few minutes to finish before taking effect. Then all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods. Not all pods restart at the same time. When the restarts are finished, a message appears that's similar to the following and includes the result "configmap 'container-azm-ms-agentconfig' created" to indicate the configmap resource was created.
-
-## Verify configuration
-
-To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs omsagent-fdf58 -n=kube-system`.
-
-If there are configuration errors from the omsagent pods, the output will show errors similar to the following example:
-
-```
-***************Start Config Processing********************
-config::unsupported/missing config schema version - 'v21' , using defaults
-```
-
-Errors related to applying configuration changes are also available for review. The following options are available to perform additional troubleshooting of configuration changes and scraping of Prometheus metrics:
--- From an agent pod logs using the same `kubectl logs` command.--- From Live Data (preview). Live Data (preview) logs show errors similar to the following example:-
- ```
- 2019-07-08T18:55:00Z E! [inputs.prometheus]: Error in plugin: error making HTTP request to http://invalidurl:1010/metrics: Get http://invalidurl:1010/metrics: dial tcp: lookup invalidurl on 10.0.0.10:53: no such host
- ```
--- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Warning* severity for scrape errors and *Error* severity for configuration errors. If there are no errors, the entry in the table will have data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.-
-Errors prevent omsagent from parsing the file, causing it to restart and use the default configuration. After you correct the errors in ConfigMap on clusters, save the YAML file and apply the updated ConfigMaps by running the command `kubectl apply -f <configmap_yaml_file.yaml>`.
-
-## Query Prometheus metrics data
-
-To view Prometheus metrics scraped by Azure Monitor and any configuration/scraping errors reported by the agent, review [Query Prometheus metrics data](container-insights-log-query.md#query-prometheus-metrics-data) and [Query configuration or scraping errors](container-insights-log-query.md#query-configuration-or-scraping-errors).
-
-## View Prometheus metrics in Grafana
-
-Container insights supports viewing metrics stored in your Log Analytics workspace in Grafana dashboards. We've provided a template that you can download from Grafana's [dashboard repository](https://grafana.com/grafana/dashboards?dataSource=grafana-azure-monitor-datasource&category=docker). Use the template to get started and reference it to help you learn how to query other data from your monitored clusters to visualize in custom Grafana dashboards.
-
-## Review Prometheus data usage
-
-To identify the ingestion volume of each metrics size in GB per day to understand if it's high, the following query is provided.
-
-```
-InsightsMetrics
-| where Namespace contains "prometheus"
-| where TimeGenerated > ago(24h)
-| summarize VolumeInGB = (sum(_BilledSize) / (1024 * 1024 * 1024)) by Name
-| order by VolumeInGB desc
-| render barchart
-```
-
-The output will show results similar to the following example.
--
-To estimate what each metrics size in GB is for a month to understand if the volume of data ingested received in the workspace is high, the following query is provided.
-
-```
-InsightsMetrics
-| where Namespace contains "prometheus"
-| where TimeGenerated > ago(24h)
-| summarize EstimatedGBPer30dayMonth = (sum(_BilledSize) / (1024 * 1024 * 1024)) * 30 by Name
-| order by EstimatedGBPer30dayMonth desc
-| render barchart
-```
-
-The output will show results similar to the following example.
--
-For more information on how to analyze usage, see [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md).
-
-## Next steps
-
-To learn more about configuring the agent collection settings for stdout, stderr, and environmental variables from container workloads, see [Configure agent data collection for Container insights](container-insights-agent-config.md).
azure-monitor Container Insights Prometheus Metrics Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-metrics-addon.md
+
+ Title: Send Prometheus metrics to Azure Monitor managed service for Prometheus with Container insights
+description: Configure the Container insights agent to scrape Prometheus metrics from your Kubernetes cluster and send to Azure Monitor managed service for Prometheus.
++ Last updated : 09/28/2022+++
+# Send metrics to Azure Monitor managed service for Prometheus with Container insights (preview)
+This article describes how to configure Container insights to send Prometheus metrics from an Azure Kubernetes cluster to Azure Monitor managed service for Prometheus. This includes installing the metrics addon for Container insights.
+
+## Prerequisites
+
+- The cluster must be [onboarded to Container insights](container-insights-enable-aks.md).
+- The cluster must use [managed identity authentication](container-insights-enable-aks.md#migrate-to-managed-identity-authentication).
+- The following resource providers must be registered in the subscription of the AKS cluster and the Azure Monitor workspace (a CLI sketch for registering them follows this list):
+ - Microsoft.ContainerService
+ - Microsoft.Insights
+ - Microsoft.AlertsManagement
+
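+One possible way to register and check these resource providers is with the Azure CLI. This is a minimal sketch, not part of the official onboarding steps, and it assumes you have sufficient permissions on the subscription:
+
+```azurecli
+# Register the required resource providers (no-op if they're already registered)
+az provider register --namespace Microsoft.ContainerService
+az provider register --namespace Microsoft.Insights
+az provider register --namespace Microsoft.AlertsManagement
+
+# Registration is asynchronous; confirm that each provider reports "Registered"
+az provider show --namespace Microsoft.Insights --query registrationState --output tsv
+```
+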
+## Enable Prometheus metric collection
+Use any of the following methods to install the metrics addon on your cluster and send Prometheus metrics to an Azure Monitor workspace.
+
+### [Azure portal](#tab/azure-portal)
+
+Managed Prometheus can be enabled in the Azure portal through either Container insights or an Azure Monitor workspace.
+
+#### Enable from Container insights
+
+1. Open the **Kubernetes services** menu in the Azure portal and select your AKS cluster.
+2. Click **Insights**.
+3. Click **Monitor settings**.
+
+ :::image type="content" source="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings.png" lightbox="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings.png" alt-text="Screenshot of button for monitor settings for an AKS cluster.":::
+
+4. Click the checkbox for **Enable Prometheus metrics** and select your Azure Monitor workspace.
+5. To send the collected metrics to Grafana, select a Grafana workspace. See [Create an Azure Managed Grafana instance](../../managed-grafan) for details on creating a Grafana workspace.
+
+ :::image type="content" source="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings-details.png" lightbox="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings-details.png" alt-text="Screenshot of monitor settings for an AKS cluster.":::
+
+6. Click **Configure** to complete the configuration.
+
+#### Enable from Azure Monitor workspace
+Use the following procedure to install the Azure Monitor agent and the metrics addon to collect Prometheus metrics. This method doesn't enable other Container insights features.
+
+1. Create an Azure Monitor workspace using the guidance at [Create an Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md#create-an-azure-monitor-workspace).
+2. Open the **Azure Monitor workspaces** menu in the Azure portal and select your workspace.
+3. Select **Managed Prometheus** to display a list of AKS clusters.
+4. Click **Configure** next to the cluster you want to enable.
+
+ :::image type="content" source="media/container-insights-prometheus-metrics-addon/azure-monitor-workspace-configure-prometheus.png" lightbox="media/container-insights-prometheus-metrics-addon/azure-monitor-workspace-configure-prometheus.png" alt-text="Screenshot of Azure Monitor workspace with Prometheus configuration.":::
++
+### [CLI](#tab/cli)
+
+#### Prerequisites
+
+- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.
+- The aks-preview extension needs to be installed using the command `az extension add --name aks-preview`. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](https://learn.microsoft.com/cli/azure/azure-cli-extensions-overview).
+- Azure CLI version 2.41.0 or higher is required for this feature. A sketch for checking these prerequisites follows this list.
+
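+The following is a minimal, unofficial sketch for confirming these prerequisites with the Azure CLI; adjust it to your environment:
+
+```azurecli
+# Check that the feature flag registration has completed (state should be "Registered")
+az feature show --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview --query properties.state --output tsv
+
+# Propagate the feature registration to the resource provider
+az provider register --namespace Microsoft.ContainerService
+
+# Confirm the CLI version (2.41.0 or higher) and that the aks-preview extension is installed
+az version
+az extension show --name aks-preview --query version --output tsv
+```
+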
+#### Install metrics addon
+
+Use `az aks update` with the `--enable-azuremonitormetrics` option to install the metrics addon. The following options depend on the Azure Monitor workspace and Grafana workspace that you want to use.
++
+**Create a new default Azure Monitor workspace.**<br>
+If no Azure Monitor workspace is specified, then a default Azure Monitor workspace will be created in the `DefaultRG-<cluster_region>` resource group with a name that follows the format `DefaultAzureMonitorWorkspace-<mapped_region>`.
+This Azure Monitor workspace will be in the region specified in [Region mappings](#region-mappings).
+
+```azurecli
+az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group>
+```
+
+**Use an existing Azure Monitor workspace.**<br>
+If the Azure Monitor workspace is linked to one or more Grafana workspaces, then the data will be available in Grafana.
+
+```azurecli
+az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <workspace-name-resource-id>
+```
+
+**Use an existing Azure Monitor workspace and link with an existing Grafana workspace.**<br>
+This creates a link between the Azure Monitor workspace and the Grafana workspace.
+
+```azurecli
+az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id>
+```
+
+The output for each command will look similar to the following:
+
+```json
+"azureMonitorProfile": {
+ "metrics": {
+ "enabled": true,
+ "kubeStateMetrics": {
+      "metricAnnotationsAllowList": "",
+ "metricLabelsAllowlist": ""
+ }
+ }
+}
+```
+
+#### Optional parameters
+Following are optional parameters that you can use with the previous commands.
+
+- `--ksm-metric-annotations-allow-list` is a comma-separated list of Kubernetes annotation keys that will be used in the resource's labels metric. By default, the metric contains only name and namespace labels. To include additional annotations, provide a list of resource names in their plural form and the Kubernetes annotation keys that you would like to allow for them. A single `*` can be provided per resource instead to allow any annotations, but that has severe performance implications.
+- `--ksm-metric-labels-allow-list` is a comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric. By default, the metric contains only name and namespace labels. To include additional labels, provide a list of resource names in their plural form and the Kubernetes label keys that you would like to allow for them. A single `*` can be provided per resource instead to allow any labels, but that has severe performance implications.
+
+**Use annotations and labels.**
+
+```azurecli
+az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --ksm-metric-labels-allow-list "namespaces=[k8s-label-1,k8s-label-n]" --ksm-metric-annotations-allow-list "pods=[k8s-annotation-1,k8s-annotation-n]"
+```
+
+The output will be similar to the following:
+
+```json
+ "azureMonitorProfile": {
+ "metrics": {
+ "enabled": true,
+ "kubeStateMetrics": {
+        "metricAnnotationsAllowList": "pods=[k8s-annotation-1,k8s-annotation-n]",
+ "metricLabelsAllowlist": "namespaces=[k8s-label-1,k8s-label-n]"
+ }
+ }
+ }
+```
+
+### [Resource Manager](#tab/resource-manager)
+
+#### Prerequisites
+
+- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.
+- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created.
+- The template needs to be deployed in the same resource group as the cluster.
+
+#### Retrieve list of Grafana integrations
+If you're using an existing Azure Managed Grafana instance that has already been linked to an Azure Monitor workspace, then you need the list of Grafana integrations. Open the **Overview** page for the Azure Managed Grafana instance and select the JSON view. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, the instance hasn't been linked with any Azure Monitor workspace.
+
+```json
+"properties": {
+ "grafanaIntegrations": {
+ "azureMonitorWorkspaceIntegrations": [
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_1"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_2"
+ }
+ ]
+ }
+}
+```
+
+#### Retrieve system-assigned identity for Grafana resource
+If you're using an existing Azure Managed Grafana instance that has already been linked to an Azure Monitor workspace, then you need the system-assigned identity of the Grafana resource. Open the **Overview** page for the Azure Managed Grafana instance and select the JSON view. Copy the value of the `principalId` field for the `SystemAssigned` identity.
+
+```json
+"identity": {
+ "principalId": "00000000-0000-0000-0000-000000000000",
+ "tenantId": "00000000-0000-0000-0000-000000000000",
+ "type": "SystemAssigned"
+ },
+```
+
+Assign the `Monitoring Data Reader` role to the Grafana system-assigned identity (the `principalId` value you copied) on the Azure Monitor workspace resource. This lets the Azure Managed Grafana resource read data from the Azure Monitor workspace and is a requirement for viewing the metrics.
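+
+One way to create this role assignment is with the Azure CLI. This is only a sketch; the placeholders are the `principalId` value from the previous step and the resource ID of your Azure Monitor workspace:
+
+```azurecli
+# Grant the Grafana system-assigned identity read access to the Azure Monitor workspace
+az role assignment create \
+  --assignee <grafana-principal-id> \
+  --role "Monitoring Data Reader" \
+  --scope <azure-monitor-workspace-resource-id>
+```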
+
+#### Download and edit template and parameter file
+
+1. Download the template at [https://aka.ms/azureprometheus-enable-arm-template](https://aka.ms/azureprometheus-enable-arm-template) and save it as **existingClusterOnboarding.json**.
+2. Download the parameter file at [https://aka.ms/azureprometheus-enable-arm-template-parameters](https://aka.ms/azureprometheus-enable-arm-template-parameters) and save it as **existingClusterParam.json**.
+3. Edit the values in the parameter file.
+
+ | Parameter | Value |
+ |:|:|
+ | `azureMonitorWorkspaceResourceId` | Resource ID for the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `azureMonitorWorkspaceLocation` | Location of the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
+ | `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
+    | `metricLabelsAllowlist` | Comma-separated list of Kubernetes label keys that will be used in the resource's labels metric. |
+    | `metricAnnotationsAllowList` | Comma-separated list of additional Kubernetes annotation keys that will be used in the resource's labels metric. |
+ | `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
+ | `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
+ | `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. |
++
+4. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. This will be similar to the following:
+
+ ```json
+ {
+ "type": "Microsoft.Dashboard/grafana",
+ "apiVersion": "2022-08-01",
+ "name": "[split(parameters('grafanaResourceId'),'/')[8]]",
+ "sku": {
+ "name": "[parameters('grafanaSku')]"
+ },
+ "location": "[parameters('grafanaLocation')]",
+ "properties": {
+ "grafanaIntegrations": {
+ "azureMonitorWorkspaceIntegrations": [
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_1"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_2"
+            },
+ {
+ "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId')]"
+ }
+ ]
+ }
+ }
+    }
+    ```
++
+#### Deploy template
+
+Deploy the template with the parameter file using any valid method for deploying Resource Manager templates. See [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates) for examples of different methods.
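+
+For example, one possible Azure CLI deployment, assuming the file names from the previous steps and that `<cluster-resource-group>` is the resource group of your AKS cluster:
+
+```azurecli
+az deployment group create \
+  --resource-group <cluster-resource-group> \
+  --template-file existingClusterOnboarding.json \
+  --parameters existingClusterParam.json
+```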
++++++
+## Verify Deployment
+
+Run the following command to verify that the daemon set was deployed properly:
+
+```
+kubectl get ds ama-metrics-node --namespace=kube-system
+```
+
+The output should resemble the following:
+
+```
+User@aksuser:~$ kubectl get ds ama-metrics-node --namespace=kube-system
+NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
+ama-metrics-node 1 1 1 1 1 <none> 10h
+```
+
+Run the following command to verify that the replica set was deployed properly:
+
+```
+kubectl get rs --namespace=kube-system
+```
+
+The output should resemble the following:
+
+```
+User@aksuser:~$kubectl get rs --namespace=kube-system
+NAME DESIRED CURRENT READY AGE
+ama-metrics-5c974985b8 1 1 1 11h
+ama-metrics-ksm-5fcf8dffcd 1 1 1 11h
+```
++
+## Limitations
+
+- Ensure that you update the `kube-state-metrics` annotations and labels lists with proper formatting. There's a limitation in Resource Manager template deployments that requires exact values in the `kube-state-metrics` pods. If the Kubernetes pod has any issues with malformed parameters and isn't running, the feature won't work as expected.
+- A data collection rule and data collection endpoint are created with the name `MSPROM-\<cluster-name\>-\<cluster-region\>`. These names can't currently be modified.
+- You must get the existing Azure Monitor workspace integrations for a Grafana workspace and update the Resource Manager template with them; otherwise, the deployment will overwrite and remove the existing integrations from the Grafana workspace.
+- CPU and memory requests and limits can't be changed for the [Container insights metrics addon](../containers/container-insights-prometheus-metrics-addon.md). If changed, they'll be reconciled and replaced by the original values within a few seconds.
+- The metrics addon doesn't work on AKS clusters configured with an HTTP proxy.
++
+## Uninstall metrics addon
+Currently, the Azure CLI is the only option to remove the metrics addon and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus. The following command removes the agent from the cluster nodes and deletes the recording rules created for the data collected from the cluster. It doesn't remove the DCE, DCR, or the data already collected and stored in your Azure Monitor workspace.
+
+```azurecli
+az aks update --disable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group>
+```
+
+## Region mappings
+If you let a default Azure Monitor workspace be created when you install the metrics addon, it's created in the region listed in the following table.
+
+| AKS Cluster region | Azure Monitor workspace region |
+|--||
+|australiacentral |eastus|
+|australiacentral2 |eastus|
+|australiaeast |eastus|
+|australiasoutheast |eastus|
+|brazilsouth |eastus|
+|canadacentral |eastus|
+|canadaeast |eastus|
+|centralus |centralus|
+|centralindia |centralindia|
+|eastasia |westeurope|
+|eastus |eastus|
+|eastus2 |eastus2|
+|francecentral |westeurope|
+|francesouth |westeurope|
+|japaneast |eastus|
+|japanwest |eastus|
+|koreacentral |westeurope|
+|koreasouth |westeurope|
+|northcentralus |eastus|
+|northeurope |westeurope|
+|southafricanorth |westeurope|
+|southafricawest |westeurope|
+|southcentralus |eastus|
+|southeastasia |westeurope|
+|southindia |centralindia|
+|uksouth |westeurope|
+|ukwest |westeurope|
+|westcentralus |eastus|
+|westeurope |westeurope|
+|westindia |centralindia|
+|westus |westus|
+|westus2 |westus2|
+|westus3 |westus|
+|norwayeast |westeurope|
+|norwaywest |westeurope|
+|switzerlandnorth |westeurope|
+|switzerlandwest |westeurope|
+|uaenorth |westeurope|
+|germanywestcentral |westeurope|
+|germanynorth |westeurope|
+|uaecentral |westeurope|
+|eastus2euap |eastus2euap|
+|centraluseuap |westeurope|
+|brazilsoutheast |eastus|
+|jioindiacentral |centralindia|
+|swedencentral |westeurope|
+|swedensouth |westeurope|
+|qatarcentral |westeurope|
+
+## Next steps
+
+- [Customize Prometheus metric scraping for the cluster](../essentials/prometheus-metrics-scrape-configuration.md).
+- [Create Prometheus alerts based on collected metrics](container-insights-metric-alerts.md)
+- [Learn more about collecting Prometheus metrics](container-insights-prometheus.md).
azure-monitor Container Insights Prometheus Monitoring Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-monitoring-addon.md
+
+ Title: Send Prometheus metrics to Azure Monitor Logs with Container insights
+description: Configure the Container insights agent to scrape Prometheus metrics from your Kubernetes cluster and send to Log Analytics workspace in Azure Monitor.
++ Last updated : 09/15/2022+++
+# Send Prometheus metrics to Azure Monitor Logs with Container insights
+This article describes how to send Prometheus metrics to a Log Analytics workspace with the Container insights monitoring addon. You can also send metrics to Azure Monitor managed service for Prometheus with the metrics addon, which supports standard Prometheus features such as PromQL and Prometheus alert rules. See [Send Kubernetes metrics to Azure Monitor managed service for Prometheus with Container insights](container-insights-prometheus-metrics-addon.md).
++
+## Prometheus scraping settings
+
+Active scraping of metrics from Prometheus is performed from one of two perspectives:
+
+- **Cluster-wide**: Defined in the ConfigMap section *[prometheus_data_collection_settings.cluster]*.
+- **Node-wide**: Defined in the ConfigMap section *[prometheus_data_collection_settings.node]*.
+
+| Endpoint | Scope | Example |
+|-|-||
+| Pod annotation | Cluster-wide | `prometheus.io/scrape: "true"` <br>`prometheus.io/path: "/mymetrics"` <br>`prometheus.io/port: "8000"` <br>`prometheus.io/scheme: "http"` |
+| Kubernetes service | Cluster-wide | `http://my-service-dns.my-namespace:9100/metrics` <br>`https://metrics-server.kube-system.svc.cluster.local/metrics` |
+| URL/endpoint | Per-node and/or cluster-wide | `http://myurl:9101/metrics` |
+
+When a URL is specified, Container insights only scrapes the endpoint. When Kubernetes service is specified, the service name is resolved with the cluster DNS server to get the IP address. Then the resolved service is scraped.
+
+|Scope | Key | Data type | Value | Description |
+||--|--|-|-|
+| Cluster-wide | | | | Specify any one of the following three methods to scrape endpoints for metrics. |
+| | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
+| | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. Fully qualified domain names must be used here. For example,`kubernetes_services = ["https://metrics-server.kube-system.svc.cluster.local/metrics",http://my-service-dns.my-namespace.svc.cluster.local:9100/metrics]`|
+| | `monitor_kubernetes_pods` | Boolean | true or false | When set to `true` in the cluster-wide settings, the Container insights agent will scrape Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:` |
+| | `prometheus.io/scrape` | Boolean | true or false | Enables scraping of the pod, and `monitor_kubernetes_pods` must be set to `true`. |
+| | `prometheus.io/scheme` | String | http or https | Defaults to scraping over HTTP. If necessary, set to `https`. |
+| | `prometheus.io/path` | String | Comma-separated array | The HTTP resource path from which to fetch metrics. If the metrics path isn't `/metrics`, define it with this annotation. |
+| | `prometheus.io/port` | String | 9102 | Specify a port to scrape from. If the port isn't set, it will default to 9102. |
+| | `monitor_kubernetes_pods_namespaces` | String | Comma-separated array | An allowlist of namespaces to scrape metrics from Kubernetes pods.<br> For example, `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]` |
+| Node-wide | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
+| Node-wide or cluster-wide | `interval` | String | 60s | The collection interval default is one minute (60 seconds). You can modify the collection for either the *[prometheus_data_collection_settings.node]* and/or *[prometheus_data_collection_settings.cluster]* to time units such as s, m, and h. |
+| Node-wide or cluster-wide | `fieldpass`<br> `fielddrop`| String | Comma-separated array | You can specify certain metrics to be collected or not from the endpoint by setting the allow (`fieldpass`) and disallow (`fielddrop`) listing. You must set the allowlist first. |
+
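+Before you add scrape settings, you can optionally spot-check that an endpoint actually serves Prometheus metrics. The following is a rough sketch that uses a throwaway pod; the image and the service URL (taken from the examples above) are assumptions that you should replace with your own values:
+
+```
+kubectl run prom-check --rm -it --restart=Never --image=curlimages/curl --command -- \
+  curl -s http://my-service-dns.my-namespace:9100/metrics
+```
+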
+## Configure ConfigMaps
+Perform the following steps to configure your ConfigMap configuration file for your cluster. ConfigMap is a global list, and only one ConfigMap can be applied to the agent. Another ConfigMap can't overrule the collection settings.
+++
+1. [Download](https://aka.ms/container-azm-ms-agentconfig) the template ConfigMap YAML file and save it as *container-azm-ms-agentconfig.yaml*. If you've already deployed a ConfigMap to your cluster and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used.
+1. Edit the ConfigMap YAML file with your customizations to scrape Prometheus metrics.
++
+ ### [Cluster-wide](#tab/cluster-wide)
+
+ To collect Kubernetes services cluster-wide, configure the ConfigMap file by using the following example:
+
+ ```
+    prometheus-data-collection-settings: |-
+ # Custom Prometheus metrics data collection settings
+    [prometheus_data_collection_settings.cluster]
+ interval = "1m" ## Valid time units are s, m, h.
+    fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
+ fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
+ kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"]
+ ```
+
+ ### [Specific URL](#tab/url)
+
+ To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file by using the following example:
+
+ ```
+    prometheus-data-collection-settings: |-
+ # Custom Prometheus metrics data collection settings
+    [prometheus_data_collection_settings.cluster]
+ interval = "1m" ## Valid time units are s, m, h.
+    fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
+ fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
+ urls = ["http://myurl:9101/metrics"] ## An array of urls to scrape metrics from
+ ```
+
+ ### [DaemonSet](#tab/deamonset)
+
+ To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following example in the ConfigMap:
+
+ ```
+    prometheus-data-collection-settings: |-
+    # Custom Prometheus metrics data collection settings
+    [prometheus_data_collection_settings.node]
+ interval = "1m" ## Valid time units are s, m, h.
+    urls = ["http://$NODE_IP:9103/metrics"]
+    fieldpass = ["metric_to_pass1", "metric_to_pass2"]
+    fielddrop = ["metric_to_drop"]
+ ```
+
+ `$NODE_IP` is a specific Container insights parameter and can be used instead of a node IP address. It must be all uppercase.
+
+ ### [Pod annotation](#tab/pod)
+
+ To configure scraping of Prometheus metrics by specifying a pod annotation:
+
+ 1. In the ConfigMap, specify the following configuration:
+
+ ```
+      prometheus-data-collection-settings: |-
+ # Custom Prometheus metrics data collection settings
+      [prometheus_data_collection_settings.cluster]
+ interval = "1m" ## Valid time units are s, m, h
+ monitor_kubernetes_pods = true
+ ```
+
+ 1. Specify the following configuration for pod annotations:
+
+ ```
+      - prometheus.io/scrape:"true" #Enable scraping for this pod
+      - prometheus.io/scheme:"http" #If the metrics endpoint is secured then you will need to set this to `https`, if not default 'http'
+      - prometheus.io/path:"/mymetrics" #If the metrics path is not /metrics, define it with this annotation.
+      - prometheus.io/port:"8000" #If port is not 9102 use this annotation
+ ```
+
+    If you want to restrict monitoring to specific namespaces for pods that have annotations, for example, only include pods dedicated to production workloads, set `monitor_kubernetes_pods` to `true` in the ConfigMap. Then add the namespace filter `monitor_kubernetes_pods_namespaces` to specify the namespaces to scrape from. An example is `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`.
+
+2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
+
+ Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
+
+The configuration change can take a few minutes to take effect, and you must restart all omsagent pods manually. When the restarts are finished, a message that includes the result `configmap "container-azm-ms-agentconfig" created` appears.
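+
+One way to restart the agent pods is sketched below. The DaemonSet and Deployment names are assumptions based on the default Container insights agent (`omsagent` and `omsagent-rs` in the `kube-system` namespace) and may differ on your cluster, so verify them first:
+
+```
+# Confirm the agent workload names on your cluster
+kubectl get ds,deploy -n kube-system | grep omsagent
+
+# Trigger a rolling restart so the new ConfigMap is picked up
+kubectl rollout restart ds/omsagent -n kube-system
+kubectl rollout restart deploy/omsagent-rs -n kube-system
+```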
++
+## Verify configuration
+
+To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs omsagent-fdf58 -n=kube-system`.
++
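+The pod name in the example is cluster-specific. A quick sketch for finding the agent pod names on your own cluster, assuming the default `omsagent` naming:
+
+```
+kubectl get pods -n kube-system -o name | grep omsagent
+```
+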
+If there are configuration errors from the omsagent pods, the output will show errors similar to the following example:
+
+```
+***************Start Config Processing********************
+config::unsupported/missing config schema version - 'v21' , using defaults
+```
+
+Errors related to applying configuration changes are also available for review. The following options are available to perform additional troubleshooting of configuration changes and scraping of Prometheus metrics:
+
+- From an agent pod logs using the same `kubectl logs` command.
+
+- From Live Data (preview). Live Data (preview) logs show errors similar to the following example:
+
+ ```
+ 2019-07-08T18:55:00Z E! [inputs.prometheus]: Error in plugin: error making HTTP request to http://invalidurl:1010/metrics: Get http://invalidurl:1010/metrics: dial tcp: lookup invalidurl on 10.0.0.10:53: no such host
+ ```
+
+- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Warning* severity for scrape errors and *Error* severity for configuration errors. If there are no errors, the entry in the table will have data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.
+- For Azure Red Hat OpenShift v3.x and v4.x, check the omsagent logs by searching the **ContainerLog** table to verify if log collection of openshift-azure-logging is enabled.
+
+Errors prevent omsagent from parsing the file, causing it to restart and use the default configuration. After you correct the errors in ConfigMap on clusters other than Azure Red Hat OpenShift v3.x, save the YAML file and apply the updated ConfigMaps by running the command `kubectl apply -f <configmap_yaml_file.yaml>`.
+
+For Azure Red Hat OpenShift v3.x, edit and save the updated ConfigMaps by running the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
+
+## Query Prometheus metrics data
+
+To view Prometheus metrics scraped by Azure Monitor and any configuration/scraping errors reported by the agent, review [Query Prometheus metrics data](container-insights-log-query.md#prometheus-metrics).
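+
+If you prefer the command line, the following sketch runs a query over the scraped metrics with the Azure CLI (it may require the `log-analytics` CLI extension). The workspace GUID is a placeholder, and the query only counts records per metric name:
+
+```azurecli
+az monitor log-analytics query \
+  --workspace <log-analytics-workspace-guid> \
+  --analytics-query "InsightsMetrics | where Namespace contains 'prometheus' | summarize Count = count() by Name | top 10 by Count"
+```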
+
+## View Prometheus metrics in Grafana
+
+Container insights supports viewing metrics stored in your Log Analytics workspace in Grafana dashboards. We've provided a template that you can download from Grafana's [dashboard repository](https://grafana.com/grafana/dashboards?dataSource=grafana-azure-monitor-datasource&category=docker). Use the template to get started and reference it to help you learn how to query other data from your monitored clusters to visualize in custom Grafana dashboards.
+++
+## Next steps
+
+- [Learn more about scraping Prometheus metrics](container-insights-prometheus.md).
+- [Configure your cluster to send data to Azure Monitor managed service for Prometheus](container-insights-prometheus-metrics-addon.md).
azure-monitor Container Insights Prometheus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus.md
+
+ Title: Collect Prometheus metrics with Container insights
+description: Describes different methods for configuring the Container insights agent to scrape Prometheus metrics from your Kubernetes cluster.
++ Last updated : 09/28/2022+++
+# Collect Prometheus metrics with Container insights
+[Prometheus](https://aka.ms/azureprometheus-promio) is a popular open-source metric monitoring solution and is the most common tool used to monitor Kubernetes clusters. Container insights uses its containerized agent to collect much of the same data that is typically collected from the cluster by Prometheus without requiring a Prometheus server. This data is presented in Container insights views and available to other Azure Monitor features such as [log queries](container-insights-log-query.md) and [log alerts](container-insights-log-alerts.md).
+
+Container insights can also scrape Prometheus metrics from your cluster for the cases described below. This requires exposing the Prometheus metrics endpoint through your exporters or pods and then configuring one of the addons for the Azure Monitor agent used by Container insights, as shown in the following diagram.
+
+## Collect additional data
+You may want to collect data in addition to the predefined set collected by Container insights. This data isn't used by Container insights views but is available for log queries and alerts like the other data it collects. This requires configuring the *monitoring addon* for the Azure Monitor agent, which is the addon currently used by Container insights to send data to a Log Analytics workspace.
+
+See [Send Prometheus metrics to Azure Monitor Logs with Container insights](container-insights-prometheus-monitoring-addon.md) to configure your cluster to collect additional Prometheus metrics with the monitoring addon.
+
+## Send data to Azure Monitor managed service for Prometheus
+Container insights currently stores the data that it collects in Azure Monitor Logs. [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) is a fully managed Prometheus-compatible service that supports industry standard features such as PromQL, Grafana dashboards, and Prometheus alerts. This requires configuring the *metrics addon* for the Azure Monitor agent, which sends data to Azure Monitor managed service for Prometheus.
+
+See [Collect Prometheus metrics from Kubernetes cluster with Container insights](container-insights-prometheus-metrics-addon.md) to configure your cluster to send metrics to Azure Monitor managed service for Prometheus.
+++
+## Next steps
+
+- [Configure your cluster to send data to Azure Monitor managed service for Prometheus](container-insights-prometheus-metrics-addon.md).
+- [Configure your cluster to send data to Azure Monitor Logs](container-insights-prometheus-monitoring-addon.md).
azure-monitor Container Insights Update Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-update-metrics.md
- Title: Update Container insights for metrics | Microsoft Docs
-description: This article describes how you update Container insights to enable the custom metrics feature that supports exploring and alerting on aggregated metrics.
- Previously updated : 08/29/2022 -----
-# Update Container insights to enable metrics
-
-Container insights now includes support for collecting metrics from Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes cluster nodes and pods, and then writing those metrics to the Azure Monitor metrics store. With this support, you can present timely aggregate calculations (average, count, maximum, minimum, sum) in performance charts, pin performance charts in Azure portal dashboards, and take advantage of metric alerts.
-
-This feature enables the following metrics:
-
-| Metric namespace | Metric | Description |
-||--|-|
-| Insights.container/nodes | cpuUsageMillicores, cpuUsagePercentage, memoryRssBytes, memoryRssPercentage, memoryWorkingSetBytes, memoryWorkingSetPercentage, cpuUsageAllocatablePercentage, memoryWorkingSetAllocatablePercentage, memoryRssAllocatablePercentage, nodesCount, diskUsedPercentage, | As *node* metrics, they include host as a dimension. They also include the node's name as a value for the host dimension. |
-| Insights.container/pods | podCount, completedJobsCount, restartingContainerCount, oomKilledContainerCount, podReadyPercentage | As *pod* metrics, they include the following as dimensions: ControllerName, Kubernetes namespace, name, phase. |
-| Insights.container/containers | cpuExceededPercentage, memoryRssExceededPercentage, memoryWorkingSetExceededPercentage, cpuThresholdViolated, memoryRssThresholdViolated, memoryWorkingSetThresholdViolated | |
-| Insights.container/persistentvolumes | pvUsageExceededPercentage, pvUsageThresholdViolated | |
-
-To support these capabilities, the feature includes these containerized agents:
--- Version *microsoft/oms:ciprod05262020* for AKS-- Version *microsoft/oms:ciprod09252020* for Azure Arc-enabled Kubernetes clusters-
-New deployments of AKS automatically include this configuration and the metric-collecting capabilities. You can update your cluster to support this feature from the Azure portal, Azure PowerShell, or the Azure CLI. With Azure PowerShell and the Azure CLI, you can enable the feature for each cluster or for all clusters in your subscription.
-
-Either process assigns the *Monitoring Metrics Publisher* role to the cluster's service principal or the user-assigned MSI for the monitoring add-on. The data that the agent collects from the agent can then be published to your cluster resource.
-
-Monitoring Metrics Publisher has permission only to push metrics to the resource. It can't alter any state, update the resource, or read any data. For more information, see [Monitoring Metrics Publisher role](../../role-based-access-control/built-in-roles.md#monitoring-metrics-publisher). The Monitoring Metrics Publisher role requirement doesn't apply to Azure Arc-enabled Kubernetes clusters.
-
-> [!IMPORTANT]
-> Azure Arc-enabled Kubernetes clusters already have the minimum required agent version, so an update isn't necessary. The assignment of Monitoring Metrics Publisher role to the cluster's service principal or user-assigned MSI for the monitoring add-on happens automatically when you're using the Azure portal, Azure PowerShell, or the Azure CLI.
-
-## Prerequisites
-
-Before you update your cluster, confirm the following:
-
-* Custom metrics are available in only a subset of Azure regions. [See a list of supported regions](../essentials/metrics-custom-overview.md#supported-regions).
-
-* You're a member of the [Owner](../../role-based-access-control/built-in-roles.md#owner) role on the AKS cluster resource to enable collection of custom performance metrics for nodes and pods. This requirement does not apply to Azure Arc-enabled Kubernetes clusters.
-
-If you choose to use the Azure CLI, you first need to install and use it locally. You must be running the Azure CLI version 2.0.59 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-## Update one cluster by using the Azure portal
-
-To update an existing AKS cluster monitored by Container insights:
-
-1. Select the cluster to view its health from the multiple-cluster view in Azure Monitor or directly from the cluster by selecting **Insights** from the left pane.
-
-2. In the banner that appears at the top of the pane, select **Enable** to start the update.
-
- :::image type="content" source="./media/container-insights-update-metrics/portal-banner-enable-01.png" alt-text="Screenshot of the Azure portal that shows the banner for upgrading an AKS cluster." lightbox="media/container-insights-update-metrics/portal-banner-enable-01.png":::
-
- The process can take several seconds to finish. You can track its progress under **Notifications** from the menu.
-
-## Update all clusters by using the Azure CLI
-
-To update all clusters in your subscription by using Bash in the Azure CLI, run the following command. Edit the value for `subscriptionId` by using the value from the **AKS Overview** page for the AKS cluster.
-
-```azurecli
-az login
-az account set --subscription "Subscription Name"
-curl -sL https://aka.ms/ci-md-onboard-atscale | bash -s subscriptionId
-```
-
-The configuration change can take a few seconds to finish. When it's finished, a message like the following one appears and includes the result:
-
-```azurecli
-completed role assignments for all AKS clusters in subscription: <subscriptionId>
-```
-
-## Update one cluster by using the Azure CLI
-
-To update a specific cluster in your subscription by using Azure CLI, run the following command. Edit the values for `subscriptionId`, `resourceGroupName`, and `clusterName` by using the values on the **AKS Overview** page for the AKS cluster. The value of `clientIdOfSPN` is returned when you run the command `az aks show`.
-
-```azurecli
-az login
-az account set --subscription "<subscriptionName>"
-az aks show -g <resourceGroupName> -n <clusterName> --query "servicePrincipalProfile"
-az aks show -g <resourceGroupName> -n <clusterName> --query "addonProfiles.omsagent.identity"
-az role assignment create --assignee <clientIdOfSPN> --scope <clusterResourceId> --role "Monitoring Metrics Publisher"
-```
--
-To get the value for `clientIdOfSPNOrMsi`, you can run the command `az aks show` as shown in the following example. If the `servicePrincipalProfile` object has a valid `objectid` value, you can use that. Otherwise, if it's set to `msi`, you need to pass in the Object ID from `addonProfiles.omsagent.identity.objectId`.
-
-```azurecli
-az login
-az account set --subscription "<subscriptionName>"
-az aks show -g <resourceGroupName> -n <clusterName> --query "servicePrincipalProfile"
-az aks show -g <resourceGroupName> -n <clusterName> --query "addonProfiles.omsagent.identity"
-az role assignment create --assignee <clientIdOfSPNOrMsi> --scope <clusterResourceId> --role "Monitoring Metrics Publisher"
-```
-
->[!NOTE]
->If you want to perform the role assignment with your user account, use the `--assignee` parameter as shown in the example. If you want to perform the role assignment with a service principal name (SPN), use the `--assignee-object-id` and `--assignee-principal-type` parameters instead of the `--assignee` parameter.
-
-## Update all clusters by using Azure PowerShell
-
-To update all clusters in your subscription by using Azure PowerShell:
-
-1. [Download](https://github.com/microsoft/OMS-docker/blob/ci_feature_prod/docs/aks/mdmonboarding/mdm_onboarding_atscale.ps1) the *mdm_onboarding_atscale.ps1* script from GitHub and save it to a local folder.
-2. Run the following command. Edit the value for `subscriptionId` by using the value from the **AKS Overview** page for the AKS cluster.
-
- ```powershell
- .\mdm_onboarding_atscale.ps1 subscriptionId
- ```
- The configuration change can take a few seconds to finish. When it's finished, a message like the following one appears and includes the result:
-
- ```powershell
- Completed adding role assignment for the aks clusters in subscriptionId :<subscriptionId>
- ```
-
-## Update one cluster by using Azure PowerShell
-
-To update a specific cluster by using Azure PowerShell:
-
-1. [Download](https://github.com/microsoft/OMS-docker/blob/ci_feature_prod/docs/aks/mdmonboarding/mdm_onboarding.ps1) the *mdm_onboarding.ps1* script from GitHub and save it to a local folder.
-
-2. Run the following command. Edit the values for `subscriptionId`, `resourceGroupName`, and `clusterName` by using the values on the **AKS Overview** page for the AKS cluster.
-
- ```powershell
- .\mdm_onboarding.ps1 subscriptionId <subscriptionId> resourceGroupName <resourceGroupName> clusterName <clusterName>
- ```
-
- The configuration change can take a few seconds to finish. When it's finished, a message like the following one appears and includes the result:
-
- ```powershell
- Successfully added Monitoring Metrics Publisher role assignment to cluster : <clusterName>
- ```
-
-## Verify the update
-
-After you update Container insights by using one of the methods described earlier, go to the Azure Monitor metrics explorer and verify from **Metric namespace** that **insights** is listed. If it is, you can start setting up [metric alerts](../alerts/alerts-metric.md) or pinning your charts to [dashboards](../../azure-portal/azure-portal-dashboards.md).
azure-monitor Data Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-platform.md
description: Overview of the Azure Monitor data platform and collection of obser
na+ Last updated 07/28/2022
Today's complex computing environments run distributed applications that rely on
[Azure Monitor](overview.md) collects and aggregates data from various sources into a common data platform where it can be used for analysis, visualization, and alerting. It provides a consistent experience on top of data from multiple sources. You can gain deep insights across all your monitored resources and even with data from other services that store their data in Azure Monitor.
-![Diagram that shows an overview of Azure Monitor with data sources on the left sending data to a central data platform and features of Azure Monitor on the right that use the collected data.](media/overview/azure-monitor-overview-optm.svg)
+![Diagram that shows an overview of Azure Monitor with data sources on the left sending data to a central data platform and features of Azure Monitor on the right that use the collected data.](media/overview/azure-monitor-overview-2022_10_15-add-prometheus-opt.svg)
## Observability data in Azure Monitor Metrics, logs, and distributed traces are commonly referred to as the three pillars of observability. A monitoring tool must collect and analyze these three different kinds of data to provide sufficient observability of a monitored system. Observability can be achieved by correlating data from multiple pillars and aggregating data across the entire set of resources being monitored. Because Azure Monitor stores data from multiple sources together, the data can be correlated and analyzed by using a common set of tools. It also correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services.
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
description: View the Azure Monitor activity log and send it to Azure Monitor Lo
+ Last updated 07/01/2022
Send the activity log to Azure Event Hubs to send entries outside of Azure, for
The following sample output data is from event hubs for an activity log:
-``` JSON
+```json
{ "records": [ {
Each PT1H.json blob contains a JSON blob of events that occurred within the hour
Each event is stored in the PT1H.json file with the following format. This format uses a common top-level schema but is otherwise unique for each category, as described in [Activity log schema](activity-log-schema.md).
-``` JSON
+```json
{ "time": "2020-06-12T13:07:46.766Z", "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/MY-RESOURCE-GROUP/PROVIDERS/MICROSOFT.COMPUTE/VIRTUALMACHINES/MV-VM-01", "correlationId": "0f0cb6b4-804b-4129-b893-70aeeb63997e", "operationName": "Microsoft.Resourcehealth/healthevent/Updated/action", "level": "Information", "resultType": "Updated", "category": "ResourceHealth", "properties": {"eventCategory":"ResourceHealth","eventProperties":{"title":"This virtual machine is starting as requested by an authorized user or process. It will be online shortly.","details":"VirtualMachineStartInitiatedByControlPlane","currentHealthStatus":"Unknown","previousHealthStatus":"Unknown","type":"Downtime","cause":"UserInitiated"}}} ```
azure-monitor Azure Monitor Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md
+
+ Title: Azure Monitor workspace overview (preview)
+description: Overview of Azure Monitor workspace, which is a unique environment for data collected by Azure Monitor.
+++ Last updated : 05/09/2022++
+# Azure Monitor workspace (preview)
+An Azure Monitor workspace is a unique environment for data collected by Azure Monitor. Each workspace has its own data repository, configuration, and permissions.
++
+## Contents of Azure Monitor workspace
+Azure Monitor workspaces will eventually contain all metric data collected by Azure Monitor. Currently, the only data hosted by an Azure Monitor workspace is Prometheus metrics.
+
+The following table lists the contents of Azure Monitor workspaces. This table will be updated as other types of data are stored in them.
+
+| Current contents | Future contents |
+|:|:|
+| Prometheus metrics | Native platform metrics<br>Native custom metrics<br>Prometheus metrics |
+++
+## Workspace design
+A single Azure Monitor workspace can collect data from multiple sources, but there may be circumstances where you require multiple workspaces to address your particular business requirements. Azure Monitor workspace design is similar to [Log Analytics workspace design](../logs/workspace-design.md). There are several reasons why you might consider creating additional workspaces, including the following:
+
+- If you have multiple Azure tenants, you'll usually create a workspace in each because several data sources can only send monitoring data to a workspace in the same Azure tenant.
+- Each workspace resides in a particular Azure region, and you may have regulatory or compliance requirements to store data in particular locations.
+- You may choose to create separate workspaces to define data ownership, for example by subsidiaries or affiliated companies.
+
+Many customers will choose an Azure Monitor workspace design to match their Log Analytics workspace design. Since Azure Monitor workspaces currently only contain Prometheus metrics, and metric data is typically not as sensitive as log data, you may choose to further consolidate your Azure Monitor workspaces for simplicity.
+
+## Create an Azure Monitor workspace
+In addition to the methods below, you may be given the option to create a new Azure Monitor workspace in the Azure portal as part of a configuration that requires one. For example, when you configure Azure Monitor managed service for Prometheus, you can select an existing Azure Monitor workspace or create a new one.
+
+### [Azure portal](#tab/azure-portal)
+
+1. Open the **Azure Monitor workspaces** menu in the Azure portal.
+2. Click **Create**.
+
+ :::image type="content" source="media/azure-monitor-workspace-overview/view-azure-monitor-workspaces.png" lightbox="media/azure-monitor-workspace-overview/view-azure-monitor-workspaces.png" alt-text="Screenshot of Azure Monitor workspaces menu and page.":::
+
+3. On the **Create an Azure Monitor Workspace** page, select a **Subscription** and **Resource group** where the workspace should be created.
+4. Provide a **Name** and a **Region** for the workspace.
+5. Click **Review + create** to create the workspace.
+
+### [CLI](#tab/cli)
+Use the following command to create an Azure Monitor workspace using Azure CLI.
+
+```azurecli
+az resource create --resource-group <resource-group-name> --namespace microsoft.monitor --resource-type accounts --name <azure-monitor-workspace-name> --location westus2 --properties {}
+```
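+
+To confirm that the workspace was created, you can retrieve it with the generic `az resource show` command. This is a quick sketch that mirrors the parameters used above; replace the placeholders with the values you used:
+
+```azurecli
+# Show the Azure Monitor workspace that was just created
+az resource show --resource-group <resource-group-name> --namespace microsoft.monitor --resource-type accounts --name <azure-monitor-workspace-name>
+```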
+
+### [Resource Manager](#tab/resource-manager)
+Use the following Resource Manager template with any of the [standard deployment options](../resource-manager-samples.md#deploy-the-sample-templates) to create an Azure Monitor workspace.
+
+```json
+{
+ "$schema": "http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "name": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": ""
+ }
+ },
+ "resources": [
+ {
+ "type": "microsoft.monitor/accounts",
+ "apiVersion": "2021-06-03-preview",
+ "name": "[parameters('name')]",
+ "location": "[if(empty(parameters('location')), resourceGroup().location, parameters('location'))]"
+ }
+ ]
+}
+```
+++++
+## Delete an Azure Monitor workspace
+When you delete an Azure Monitor workspace, no soft-delete operation is performed like with a [Log Analytics workspace](../logs/delete-workspace.md). The data in the workspace is immediately deleted, and there is no recovery option.
++
+### [Azure portal](#tab/azure-portal)
+
+1. Open the **Azure Monitor workspaces** menu in the Azure portal.
+2. Select your workspace.
+4. Click **Delete**.
+
+ :::image type="content" source="media/azure-monitor-workspace-overview/delete-azure-monitor-workspace.png" lightbox="media/azure-monitor-workspace-overview/delete-azure-monitor-workspace.png" alt-text="Screenshot of Azure Monitor workspaces delete button.":::
+
+### [CLI](#tab/cli)
+To be completed.
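+
+Until dedicated CLI steps are published here, a hedged sketch is to delete the workspace with the generic `az resource delete` command, since the workspace is a `microsoft.monitor/accounts` resource (the same type used in the create command earlier). Replace the placeholders with your values:
+
+```azurecli
+# Delete the Azure Monitor workspace. There is no soft delete; the data is removed immediately.
+az resource delete --resource-group <resource-group-name> --namespace microsoft.monitor --resource-type accounts --name <azure-monitor-workspace-name>
+```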
+
+### [Resource Manager](#tab/resource-manager)
+To be completed.
++++
+## Link a Grafana workspace
+Connect an Azure Monitor workspace to an [Azure Managed Grafana](../../managed-grafan) workspace to authorize Grafana to use the Azure Monitor workspace as a resource type in a Grafana dashboard. An Azure Monitor workspace can be connected to multiple Grafana workspaces, and a Grafana workspace can be connected to multiple Azure Monitor workspaces.
+
+> [!NOTE]
+> When you add the Azure Monitor workspace as a data source to Grafana, it will be listed in the form `Managed_Prometheus_<azure-workspace-name>`.
+
+### [Azure portal](#tab/azure-portal)
+
+1. Open the **Azure Monitor workspace** menu in the Azure portal.
+2. Select your workspace.
+3. Click **Linked Grafana workspaces**.
+4. Select a Grafana workspace.
+
+### [CLI](#tab/cli)
+To be completed.
+
+### [Resource Manager](#tab/resource-manager)
+To be completed.
++++
+## Next steps
+
+- Learn more about the [Azure Monitor data platform](../data-platform.md).
azure-monitor Data Collection Rule Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-edit.md
Title: Tutorial - Editing Data Collection Rules description: This article describes how to make changes in Data Collection Rule definition using command line tools and simple API calls. +
code "temp.dcr"
Let's modify the KQL transformation within the DCR to drop rows where RequestType is anything but "GET". 1. Open the file created in the previous part for editing using an editor of your choice. 2. Locate the line containing the `"transformKql"` attribute, which, if you followed the tutorial for custom log creation, should look similar to this:
- ``` JSON
+ ```json
"transformKql": " source\n | extend TimeGenerated = todatetime(Time)\n | parse RawData with \n ClientIP:string\n ' ' *\n ' ' *\n ' [' * '] \"' RequestType:string\n \" \" Resource:string\n \" \" *\n '\" ' ResponseCode:int\n \" \" *\n | where ResponseCode != 200\n | project-away Time, RawData\n" ``` 3. Modify KQL transformation to include additional filter by RequestType
- ``` JSON
+ ```json
"transformKql": " source\n | where RawData contains \"GET\"\n | extend TimeGenerated = todatetime(Time)\n | parse RawData with \n ClientIP:string\n ' ' *\n ' ' *\n ' [' * '] \"' RequestType:string\n \" \" Resource:string\n \" \" *\n '\" ' ResponseCode:int\n \" \" *\n | where ResponseCode != 200\n | project-away Time, RawData\n" ``` 4. Save the file with modified DCR content.
DCR content will open in embedded code editor. Once editing is complete, enterin
## Next steps -- [Read more about data collection rules and options for creating them.](data-collection-rule-overview.md)
+- [Read more about data collection rules and options for creating them.](data-collection-rule-overview.md)
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md
Title: Metrics in Azure Monitor | Microsoft Docs description: Learn about metrics in Azure Monitor, which are lightweight monitoring data capable of supporting near real-time scenarios. na+ Last updated 05/09/2022
Azure Monitor Metrics is a feature of Azure Monitor that collects numeric data from [monitored resources](../monitor-reference.md) into a time-series database. Metrics are numerical values that are collected at regular intervals and describe some aspect of a system at a particular time.
-Metrics in Azure Monitor are lightweight and capable of supporting near-real-time scenarios. For these reasons, they're useful for alerting and fast detection of issues. You can:
--- Analyze them interactively by using Metrics Explorer.-- Be proactively notified with an alert when a value crosses a threshold.-- Visualize them in a workbook or dashboard.- > [!NOTE] > Azure Monitor Metrics is one half of the data platform that supports Azure Monitor. The other half is [Azure Monitor Logs](../logs/data-platform-logs.md), which collects and organizes log and performance data. You can analyze that data by using a rich query language.
->
-> The Azure Monitor Metrics feature can only store numeric data in a particular structure. The Azure Monitor Logs feature can store a variety of datatypes, each with its own structure. You can also perform complex analysis on log data by using log queries, which you can't use for analysis of metric data.
-## What can you do with Azure Monitor Metrics?
-The following table lists the ways that you can use the Azure Monitor Metrics feature.
+## Types of metrics
+Azure Monitor Metrics supports multiple types of metrics:
-| Uses | Description |
-|:|:|
-| Analyze | Use [Metrics Explorer](metrics-charts.md) to analyze collected metrics on a chart and compare metrics from various resources. |
-| Alert | Configure a [metric alert rule](../alerts/alerts-metric.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the metric value crosses a threshold. |
-| Visualize | Pin a chart from Metrics Explorer to an [Azure dashboard](../app/tutorial-app-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboards and combine with other data sources. |
-| Automate | Use [Autoscale](../autoscale/autoscale-overview.md) to increase or decrease resources based on a metric value crossing a threshold. |
-| Retrieve | Access metric values from a:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/metrics) or [Azure PowerShell cmdlets](/powershell/module/az.monitor).</li><li>Custom app via the [REST API](./rest-api-walkthrough.md) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
-| Export | [Route metrics to logs](./resource-logs.md#send-to-azure-storage) to analyze data in Azure Monitor Metrics together with data in Azure Monitor Logs and to store metric values for longer than 93 days.<br>Stream metrics to an [event hub](./stream-monitoring-data-event-hubs.md) to route them to external systems. |
-| Archive | [Archive](./platform-logs-overview.md) the performance or health history of your resource for compliance, auditing, or offline reporting purposes. |
+- Native metrics use tools in Azure Monitor for analysis and alerting.
+ - Platform metrics are collected from Azure resources. They require no configuration and have no cost.
+ - Custom metrics are collected from different sources that you configure, including applications and agents running on virtual machines.
+- Prometheus metrics (preview) are collected from Kubernetes clusters, including Azure Kubernetes Service (AKS), and use industry-standard tools for analysis and alerting, such as PromQL and Grafana.
![Diagram that shows sources and uses of metrics.](media/data-platform-metrics/metrics-overview.png)
+The differences between each of the metrics are summarized in the following table.
+
+| Category | Native platform metrics | Native custom metrics | Prometheus metrics (preview) |
+|:|:|:|:|
+| Sources | Azure resources | Azure Monitor agent<br>Application Insights<br>REST API | Azure Kubernetes Service (AKS) cluster<br>Any Kubernetes cluster through remote-write |
+| Configuration | None | Varies by source | Enable Azure Monitor managed service for Prometheus |
+| Stored | Subscription | Subscription | [Azure Monitor workspace](azure-monitor-workspace-overview.md) |
+| Cost | No | Yes | Yes (free during preview) |
+| Aggregation | Pre-aggregated | Pre-aggregated | Raw data |
+| Analyze | [Metrics Explorer](metrics-charts.md) | [Metrics Explorer](metrics-charts.md) | PromQL<br>Grafana dashboards |
+| Alert | [metrics alert rule](../alerts/tutorial-metric-alert.md) | [metrics alert rule](../alerts/tutorial-metric-alert.md) | [Prometheus alert rule](../essentials/prometheus-rule-groups.md) |
+| Visualize | [Workbooks](../visualize/workbooks-overview.md)<br>[Azure dashboards](../app/tutorial-app-dashboards.md)<br>[Grafana](../visualize/grafana-plugin.md) | [Workbooks](../visualize/workbooks-overview.md)<br>[Azure dashboards](../app/tutorial-app-dashboards.md)<br>[Grafana](../visualize/grafana-plugin.md) | [Grafana](../../managed-grafan) |
+| Retrieve | [Azure CLI](/cli/azure/monitor/metrics)<br>[Azure PowerShell cmdlets](/powershell/module/az.monitor)<br>[REST API](./rest-api-walkthrough.md) or client library<br>[.NET](/dotnet/api/overview/azure/Monitor.Query-readme)<br>[Java](/jav) |
+++ ## Data collection Azure Monitor collects metrics from the following sources. After these metrics are collected in the Azure Monitor metric database, they can be evaluated together regardless of their source:
Azure Monitor collects metrics from the following sources. After these metrics a
- **Applications**: Application Insights creates metrics for your monitored applications to help you detect performance issues and track trends in how your application is being used. Values include _Server response time_ and _Browser exceptions_. - **Virtual machine agents**: Metrics are collected from the guest operating system of a virtual machine. You can enable guest OS metrics for Windows virtual machines by using the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) and for Linux virtual machines by using the [InfluxData Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/). - **Custom metrics**: You can define metrics in addition to the standard metrics that are automatically available. You can [define custom metrics in your application](../app/api-custom-events-metrics.md) that's monitored by Application Insights. You can also create custom metrics for an Azure service by using the [custom metrics API](./metrics-store-custom-rest-api.md).
+- **Kubernetes clusters**: Kubernetes clusters typically send metric data to a local Prometheus server that you must maintain. [Azure Monitor managed service for Prometheus](prometheus-metrics-overview.md) provides a managed service that collects metrics from Kubernetes clusters and stores them in Azure Monitor Metrics.
For a complete list of data sources that can send data to Azure Monitor Metrics, see [What is monitored by Azure Monitor?](../monitor-reference.md).
Data that Azure Monitor Metrics collects is stored in a time-series database tha
* [Multiple dimensions](#multi-dimensional-metrics) when they're present. Custom metrics are limited to 10 dimensions. ## Multi-dimensional metrics- One of the challenges to metric data is that it often has limited information to provide context for collected values. Azure Monitor addresses this challenge with multi-dimensional metrics.
-Dimensions of a metric are name/value pairs that carry more data to describe the metric value. For example, a metric called _Available disk space_ might have a dimension called _Drive_ with values _C:_ and _D:_. That dimension would allow viewing available disk space across all drives or for each drive individually.
+Metric dimensions are name/value pairs that carry more data to describe the metric value. For example, a metric called _Available disk space_ might have a dimension called _Drive_ with values _C:_ and _D:_. That dimension would allow viewing available disk space across all drives or for each drive individually.
-The following example illustrates two datasets for a hypothetical metric called _Network throughput_. The first dataset has no dimensions. The second dataset shows the values with two dimensions, _IP_ and _Direction_.
+See [Apply dimension filters and splitting](metrics-getting-started.md#apply-dimension-filters-and-splitting) for details on viewing metric dimensions in Metrics Explorer.
-### Network throughput
+### Nondimensional metric
+The following table shows sample data from a nondimensional metric, network throughput. It can only answer a basic question like "What was my network throughput at a given time?"
| Timestamp | Metric value | | - |:-|
The following example illustrates two datasets for a hypothetical metric called
| 8/9/2017 8:15 | 1,141.4 Kbps | | 8/9/2017 8:16 | 1,110.2 Kbps |
-This nondimensional metric can only answer a basic question like "What was my network throughput at a given time?"
+ ### Network throughput and two dimensions ("IP" and "Direction")
+The following table shows sample data from a multidimensional metric, network throughput with two dimensions called *IP* and *Direction*. It can answer questions such as "What was the network throughput for each IP address?" and "How much data was sent versus received?"
| Timestamp | Dimension "IP" | Dimension "Direction" | Metric value| | - |:--|:- |:--|
This nondimensional metric can only answer a basic question like "What was my ne
| 8/9/2017 8:15 | IP="10.24.2.15" | Direction="Send" | 155.0 Kbps | | 8/9/2017 8:15 | IP="10.24.2.15" | Direction="Receive" | 100.1 Kbps |
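As an illustration of how a dimension is used when you retrieve metric values, the following Azure CLI sketch filters a metric to a single dimension value. The resource ID, metric name, and dimension name are hypothetical placeholders; the actual names depend on the resource type you're querying:

```azurecli
# Retrieve only the time series where the "Direction" dimension equals "Send"
az monitor metrics list \
  --resource <resource-id> \
  --metric "NetworkThroughput" \
  --filter "Direction eq 'Send'" \
  --aggregation Average \
  --output table
```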
-This metric can answer questions such as "What was the network throughput for each IP address?" and "How much data was sent versus received?" Multi-dimensional metrics carry more analytical and diagnostic value compared to nondimensional metrics.
-
-### View multi-dimensional performance counter metrics in Metrics Explorer
-
-It's not possible to send performance counter metrics that contain an asterisk (\*) to Azure Monitor via the Classic Guest Metrics API. This API can't display metrics that contain an asterisk because it's a multi-dimensional metric, which classic metrics don't support.
-
-To configure and view multi-dimensional guest OS performance counter metrics by using the Azure Diagnostic extension:
-
-1. Go to the **Diagnostic settings** page for your virtual machine.
-1. Select the **Performance counters** tab.
-1. Select **Custom** to configure the performance counters that you want to collect.
-
- ![Screenshot that shows the performance counters section of the Diagnostic settings page.](media/data-platform-metrics/azure-monitor-perf-counter.png)
-
-1. Select **Sinks**. Then select **Enabled** to send your data to Azure Monitor.
-
- ![Screenshot that shows the Sinks section of the Diagnostic settings page.](media/data-platform-metrics/azure-monitor-sink.png)
-
-1. To view your metric in Azure Monitor, select **Virtual Machine Guest** in the **Metric Namespace** dropdown.
-
- ![Screenshot that shows the Metric Namespace dropdown.](media/data-platform-metrics/vm-guest-namespace.png)
-
-1. Select **Apply splitting** and fill in the details to split the metric by instance. You can then see the metric broken down by each of the possible values represented by the asterisk in the configuration. In this example, the asterisk represents the logical disk volumes plus the total.
-
- ![Screenshot that shows splitting a metric by instance.](media/data-platform-metrics/split-by-instance.png)
## Retention of metrics
-For most resources in Azure, platform metrics are stored for 93 days. There are some exceptions:
+### Platform and custom metrics
+Platform and custom metrics are stored for **93 days** with the following exceptions:
- **Classic guest OS metrics**: These performance counters are collected by the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) or the [Linux diagnostic extension](../../virtual-machines/extensions/diagnostics-linux.md) and routed to an Azure Storage account. Retention for these metrics is guaranteed to be at least 14 days, although no expiration date is written to the storage account.
For most resources in Azure, platform metrics are stored for 93 days. There are
> [!NOTE] > You can [send platform metrics for Azure Monitor resources to a Log Analytics workspace](./resource-logs.md#send-to-azure-storage) for long-term trending.
-As mentioned earlier, for most resources in Azure, platform metrics are stored for 93 days. However, you can only query (in the **Metrics** tile) for a maximum of 30 days' worth of data on any single chart. This limitation doesn't apply to log-based metrics.
+While platform and custom metrics are stored for 93 days, you can only query (in the **Metrics** tile) for a maximum of 30 days' worth of data on any single chart. This limitation doesn't apply to log-based metrics. If you see a blank chart or your chart displays only part of metric data, verify that the difference between start and end dates in the time picker doesn't exceed the 30-day interval. After you've selected a 30-day interval, you can [pan](./metrics-charts.md#pan) the chart to view the full retention window.
+
-If you see a blank chart or your chart displays only part of metric data, verify that the difference between start and end dates in the time picker doesn't exceed the 30-day interval. After you've selected a 30-day interval, you can [pan](./metrics-charts.md#pan) the chart to view the full retention window.
+### Prometheus metrics
+Prometheus metrics are stored for **18 months**, but a PromQL query can only span a maximum of 32 days.
## Next steps - Learn more about the [Azure Monitor data platform](../data-platform.md). - Learn about [log data in Azure Monitor](../logs/data-platform-logs.md).-- Learn about the [monitoring data available](../data-sources.md) for various resources in Azure.
+- Learn about the [monitoring data available](../data-sources.md) for various resources in Azure.
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
description: List of metrics available for each resource type with Azure Monitor
+ Last updated 09/12/2022
This latest update adds a new column and reorders the metrics to be alphabetical
|IngressFailedRequests|Yes|Failed Requests|Count|Average|Count of failed requests by Azure Spring Cloud from the clients|Hostname, HttpStatus| |IngressRequests|Yes|Requests|Count|Average|Count of requests by Azure Spring Cloud from the clients|Hostname, HttpStatus| |IngressResponseStatus|Yes|Response Status|Count|Average|HTTP response status returned by Azure Spring Cloud. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories|Hostname, HttpStatus|
-|IngressResponseTime|Yes|Response Time|Seconds|Average|Http response time return by Azure Spring Cloud|Hostname, HttpStatus||jvm.gc.max.data.size|Yes|jvm.gc.max.data.size|Bytes|Average|Max size of old generation memory pool|Deployment, AppName, Pod|
+|IngressResponseTime|Yes|Response Time|Seconds|Average|Http response time return by Azure Spring Cloud|Hostname, HttpStatus|
+|jvm.gc.max.data.size|Yes|jvm.gc.max.data.size|Bytes|Average|Max size of old generation memory pool|Deployment, AppName, Pod|
|jvm.gc.memory.allocated|Yes|jvm.gc.memory.allocated|Bytes|Maximum|Incremented for an increase in the size of the young generation memory pool after one GC to before the next|Deployment, AppName, Pod| |jvm.gc.memory.promoted|Yes|jvm.gc.memory.promoted|Bytes|Maximum|Count of positive increases in the size of the old generation memory pool before GC to after GC|Deployment, AppName, Pod| |jvm.gc.pause.total.count|Yes|jvm.gc.pause.total.count|Count|Total|GC Pause Count|Deployment, AppName, Pod|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|TotalJob|Yes|Total Jobs|Count|Total|The total number of jobs|Runbook, Status|
+|TotalJob|Yes|Total Jobs|Count|Total|The total number of jobs|Runbook Name, Status|
|TotalUpdateDeploymentMachineRuns|Yes|Total Update Deployment Machine Runs|Count|Total|Total software update deployment machine runs in a software update deployment run|SoftwareUpdateConfigurationName, Status, TargetComputer, SoftwareUpdateConfigurationRunId| |TotalUpdateDeploymentRuns|Yes|Total Update Deployment Runs|Count|Total|Total software update deployment runs|SoftwareUpdateConfigurationName, Status|
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |AddRegion|Yes|Region Added|Count|Count|Region Added|Region| |AutoscaleMaxThroughput|No|Autoscale Max Throughput|Count|Maximum|Autoscale Max Throughput|DatabaseName, CollectionName|
-|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc https://learn.microsoft.com/azure/cosmos-db/concepts-limits. After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
+|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Azure Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more information, see [Azure Cosmos DB service quotas](/azure/cosmos-db/concepts-limits). After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
|CassandraConnectionClosures|No|Cassandra Connection Closures|Count|Total|Number of Cassandra connections that were closed, reported at a 1 minute granularity|Region, ClosureReason| |CassandraConnectorAvgReplicationLatency|No|Cassandra Connector Average ReplicationLatency|MilliSeconds|Average|Cassandra Connector Average ReplicationLatency|No Dimensions| |CassandraConnectorReplicationHealthStatus|No|Cassandra Connector Replication Health Status|Count|Count|Cassandra Connector Replication Health Status|NotStarted, ReplicationInProgress, Error|
This latest update adds a new column and reorders the metrics to be alphabetical
|IntegratedCacheItemHitRate|No|IntegratedCacheItemHitRate|Percent|Average|Number of point reads that used the integrated cache divided by number of point reads routed through the dedicated gateway with eventual consistency|Region, | |IntegratedCacheQueryExpirationCount|No|IntegratedCacheQueryExpirationCount|Count|Average|Number of queries evicted from the integrated cache due to TTL expiration|Region, | |IntegratedCacheQueryHitRate|No|IntegratedCacheQueryHitRate|Percent|Average|Number of queries that used the integrated cache divided by number of queries routed through the dedicated gateway with eventual consistency|Region, |
-|MetadataRequests|No|Metadata Requests|Count|Count|Count of metadata requests. Cosmos DB maintains system metadata collection for each account, that allows you to enumerate collections, databases, etc, and their configurations, free of charge.|DatabaseName, CollectionName, Region, StatusCode, |
+|MetadataRequests|No|Metadata Requests|Count|Count|Count of metadata requests. Azure Cosmos DB maintains system metadata collection for each account, that allows you to enumerate collections, databases, etc, and their configurations, free of charge.|DatabaseName, CollectionName, Region, StatusCode, |
|MongoCollectionCreate|No|Mongo Collection Created|Count|Count|Mongo Collection Created|ResourceName, ChildResourceName, | |MongoCollectionDelete|No|Mongo Collection Deleted|Count|Count|Mongo Collection Deleted|ResourceName, ChildResourceName, | |MongoCollectionThroughputUpdate|No|Mongo Collection Throughput Updated|Count|Count|Mongo Collection Throughput Updated|ResourceName, ChildResourceName, |
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |Availability|Yes|Availability|Percent|Average|The availability rate of the service.|No Dimensions|
-|CosmosDbCollectionSize|Yes|Cosmos DB Collection Size|Bytes|Total|The size of the backing Cosmos DB collection, in bytes.|No Dimensions|
-|CosmosDbIndexSize|Yes|Cosmos DB Index Size|Bytes|Total|The size of the backing Cosmos DB collection's index, in bytes.|No Dimensions|
-|CosmosDbRequestCharge|Yes|Cosmos DB RU usage|Count|Total|The RU usage of requests to the service's backing Cosmos DB.|Operation, ResourceType|
-|CosmosDbRequests|Yes|Service Cosmos DB requests|Count|Sum|The total number of requests made to a service's backing Cosmos DB.|Operation, ResourceType|
-|CosmosDbThrottleRate|Yes|Service Cosmos DB throttle rate|Count|Sum|The total number of 429 responses from a service's backing Cosmos DB.|Operation, ResourceType|
+|CosmosDbCollectionSize|Yes|Azure Cosmos DB Collection Size|Bytes|Total|The size of the backing Azure Cosmos DB collection, in bytes.|No Dimensions|
+|CosmosDbIndexSize|Yes|Azure Cosmos DB Index Size|Bytes|Total|The size of the backing Azure Cosmos DB collection's index, in bytes.|No Dimensions|
+|CosmosDbRequestCharge|Yes|Azure Cosmos DB RU usage|Count|Total|The RU usage of requests to the service's backing Azure Cosmos DB.|Operation, ResourceType|
+|CosmosDbRequests|Yes|Service Azure Cosmos DB requests|Count|Sum|The total number of requests made to a service's backing Azure Cosmos DB.|Operation, ResourceType|
+|CosmosDbThrottleRate|Yes|Service Azure Cosmos DB throttle rate|Count|Sum|The total number of 429 responses from a service's backing Azure Cosmos DB.|Operation, ResourceType|
|IoTConnectorDeviceEvent|Yes|Number of Incoming Messages|Count|Sum|The total number of messages received by the Azure IoT Connector for FHIR prior to any normalization.|Operation, ConnectorName| |IoTConnectorDeviceEventProcessingLatencyMs|Yes|Average Normalize Stage Latency|Milliseconds|Average|The average time between an event's ingestion time and the time the event is processed for normalization.|Operation, ConnectorName| |IoTConnectorMeasurement|Yes|Number of Measurements|Count|Sum|The number of normalized value readings received by the FHIR conversion stage of the Azure IoT Connector for FHIR.|Operation, ConnectorName|
This latest update adds a new column and reorders the metrics to be alphabetical
|ScheduledMessages|No|Count of scheduled messages in a Queue/Topic.|Count|Average|Count of scheduled messages in a Queue/Topic.|EntityName| |ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.ServiceBus.|EntityName, | |ServerSendLatency|Yes|Server Send Latency.|Milliseconds|Average|Server Send Latency.|EntityName|
-|Size|No|Size|Bytes|Average|Size of an Queue/Topic in Bytes.|EntityName|
+|Size|No|Size|Bytes|Average|Size of a Queue/Topic in Bytes.|EntityName|
|SuccessfulRequests|No|Successful Requests|Count|Total|Total successful requests for a namespace|EntityName, | |ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.ServiceBus.|EntityName, MessagingErrorSubCode| |UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.ServiceBus.|EntityName, |
azure-monitor Prometheus Metrics Multiple Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-multiple-workspaces.md
+
+ Title: Send Prometheus metrics to multiple Azure Monitor workspaces (preview)
+description: Describes data collection rules required to send Prometheus metrics from a cluster in Azure Monitor to multiple Azure Monitor workspaces.
++ Last updated : 09/28/2022+++
+# Send Prometheus metrics to multiple Azure Monitor workspaces (preview)
+
+You can route metrics to additional Azure Monitor workspaces by creating additional data collection rules (DCRs). You can send all metrics to every workspace, or send different metrics to different workspaces.
+
+## Send same metrics to multiple Azure Monitor workspaces
+
+You can create multiple data collection rules that point to the same data collection endpoint so that metrics from the same Kubernetes cluster are sent to additional Azure Monitor workspaces. Currently, this is only available through onboarding with Resource Manager templates. Follow the [regular onboarding process](../containers/container-insights-prometheus-metrics-addon.md#enable-prometheus-metric-collection), and then edit the same Resource Manager templates: add a parameter for every additional Azure Monitor workspace, add a DCR for every additional Azure Monitor workspace, and add an additional Azure Monitor workspace integration for Grafana.
+
+- Add the following parameters:
+ ```json
+ "parameters": {
+ "azureMonitorWorkspaceResourceId2": {
+ "type": "string"
+ },
+ "azureMonitorWorkspaceLocation2": {
+ "type": "string",
+ "defaultValue": "",
+ "allowedValues": [
+ "eastus2euap",
+ "centraluseuap",
+ "centralus",
+ "eastus",
+ "eastus2",
+ "northeurope",
+ "southcentralus",
+ "southeastasia",
+ "uksouth",
+ "westeurope",
+ "westus",
+ "westus2"
+ ]
+ },
+ ...
+ }
+ ```
+
+- Add an additional DCR with the same Data Collection Endpoint. You *must* replace `<dcrName>`:
+ ```json
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "apiVersion": "2021-09-01-preview",
+ "name": "<dcrName>",
+ "location": "[parameters('azureMonitorWorkspaceLocation2')]",
+ "kind": "Linux",
+ "properties": {
+ "dataCollectionEndpointId": "[resourceId('Microsoft.Insights/dataCollectionEndpoints/', variables('dceName'))]",
+ "dataFlows": [
+ {
+ "destinations": ["MonitoringAccount2"],
+ "streams": ["Microsoft-PrometheusMetrics"]
+ }
+ ],
+ "dataSources": {
+ "prometheusForwarder": [
+ {
+ "name": "PrometheusDataSource",
+ "streams": ["Microsoft-PrometheusMetrics"],
+ "labelIncludeFilter": {}
+ }
+ ]
+ },
+ "description": "DCR for Azure Monitor Metrics Profile (Managed Prometheus)",
+ "destinations": {
+ "monitoringAccounts": [
+ {
+ "accountResourceId": "[parameters('azureMonitorWorkspaceResourceId2')]",
+ "name": "MonitoringAccount2"
+ }
+ ]
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/dataCollectionEndpoints/', variables('dceName'))]"
+ ]
+ }
+ ```
++
+- Add an additional Grafana integration:
+ ```json
+ {
+ "type": "Microsoft.Dashboard/grafana",
+ "apiVersion": "2022-08-01",
+ "name": "[split(parameters('grafanaResourceId'),'/')[8]]",
+ "sku": {
+ "name": "[parameters('grafanaSku')]"
+ },
+ "location": "[parameters('grafanaLocation')]",
+ "properties": {
+ "grafanaIntegrations": {
+ "azureMonitorWorkspaceIntegrations": [
+ // Existing azureMonitorWorkspaceIntegrations values (if any)
+ // {
+ // "azureMonitorWorkspaceResourceId": "<value>"
+ // },
+ // {
+ // "azureMonitorWorkspaceResourceId": "<value>"
+ // },
+ {
+ "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId')]"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId2')]"
+ }
+ ]
+ }
+ }
+ }
+ ```
+ Similar to the regular Resource Manager onboarding process, the `Monitoring Data Reader` role must be assigned for every Azure Monitor workspace linked to Grafana. This allows the Azure Managed Grafana resource to read data from the Azure Monitor workspace and is a requirement for viewing the metrics.
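+
+ A hedged sketch of that role assignment with the Azure CLI follows; the Grafana workspace's managed identity principal ID and the Azure Monitor workspace resource ID are placeholders that you look up in your environment:
+
+ ```azurecli
+ # Grant the Grafana workspace's managed identity read access to the Azure Monitor workspace
+ az role assignment create \
+   --assignee <grafana-managed-identity-principal-id> \
+   --role "Monitoring Data Reader" \
+   --scope <azure-monitor-workspace-resource-id>
+ ```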
+
+## Send different metrics to different Azure Monitor workspaces
+
+If you want to send some metrics to one Azure Monitor workspace and other metrics to a different one, follow the preceding steps to add additional DCRs. The value of `microsoft_metrics_include_label` under the `labelIncludeFilter` in the DCR is the identifier for the workspace. To configure which metrics are routed to which workspace, add an extra predefined label, `microsoft_metrics_account`, to the metrics. Its value should be the same as the corresponding `microsoft_metrics_include_label` in the DCR for that workspace. To add the label to the metrics, use `relabel_configs` in your scrape config. To send all metrics from one job to a certain workspace, add the following relabel config:
+
+```yaml
+relabel_configs:
+- source_labels: [__address__]
+ target_label: microsoft_metrics_account
+ action: replace
+ replacement: "MonitoringAccountLabel1"
+```
+
+The source label is `__address__` because this label always exists, so this relabel config is always applied. The target label is always `microsoft_metrics_account`, and its value should be replaced with the corresponding label value for the workspace.
++
+### Example
+
+If you want to configure three different jobs to send metrics to three different workspaces, include one of the following `labelIncludeFilter` values in each of the three data collection rules:
+
+```json
+"labelIncludeFilter": {
+ "microsoft_metrics_include_label": "MonitoringAccountLabel1"
+}
+```
+
+```json
+"labelIncludeFilter": {
+ "microsoft_metrics_include_label": "MonitoringAccountLabel2"
+}
+```
+
+```json
+"labelIncludeFilter": {
+ "microsoft_metrics_include_label": "MonitoringAccountLabel3"
+}
+```
+
+Then in your scrape config, include the same label value for each:
+```yaml
+scrape_configs:
+- job_name: prometheus_ref_app_1
+ kubernetes_sd_configs:
+ - role: pod
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_label_app]
+ action: keep
+ regex: "prometheus-reference-app-1"
+ - source_labels: [__address__]
+ target_label: microsoft_metrics_account
+ action: replace
+ replacement: "MonitoringAccountLabel1"
+- job_name: prometheus_ref_app_2
+ kubernetes_sd_configs:
+ - role: pod
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_label_app]
+ action: keep
+ regex: "prometheus-reference-app-2"
+ - source_labels: [__address__]
+ target_label: microsoft_metrics_account
+ action: replace
+ replacement: "MonitoringAccountLabel2"
+- job_name: prometheus_ref_app_3
+ kubernetes_sd_configs:
+ - role: pod
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_label_app]
+ action: keep
+ regex: "prometheus-reference-app-3"
+ - source_labels: [__address__]
+ target_label: microsoft_metrics_account
+ action: replace
+ replacement: "MonitoringAccountLabel3"
+```
++
+## Next steps
+
+- [Learn more about Azure Monitor managed service for Prometheus](prometheus-metrics-overview.md).
+- [Collect Prometheus metrics from AKS cluster](../containers/container-insights-prometheus-metrics-addon.md).
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
+
+ Title: Overview of Azure Monitor Managed Service for Prometheus (preview)
+description: Overview of Azure Monitor managed service for Prometheus, which provides a Prometheus-compatible interface for storing and retrieving metric data.
+++ Last updated : 09/28/2022++
+# Azure Monitor managed service for Prometheus (preview)
+Azure Monitor managed service for Prometheus allows you to collect and analyze metrics at scale using a Prometheus-compatible monitoring solution, based on the [Prometheus](https://aka.ms/azureprometheus-promio) project from the Cloud Native Computing Foundation. This fully managed service allows you to use the [Prometheus query language (PromQL)](https://aka.ms/azureprometheus-promio-promql) to analyze and alert on the performance of monitored infrastructure and workloads without having to operate the underlying infrastructure.
+
+Azure Monitor managed service for Prometheus is a component of [Azure Monitor Metrics](data-platform-metrics.md), providing additional flexibility in the types of metric data that you can collect and analyze with Azure Monitor. Prometheus metrics share some features with platform and custom metrics, but use some different features to better support open source tools such as [PromQL](https://aka.ms/azureprometheus-promio-promql) and [Grafana](../../managed-grafan).
+
+> [!IMPORTANT]
+> Azure Monitor managed service for Prometheus is intended for storing information about the service health of customer machines and applications. It is not intended for storing any data classified as personally identifiable information (PII) or end-user identifiable information (EUII). We strongly recommend that you do not send any sensitive information (usernames, credit card numbers, and so on) into Azure Monitor managed service for Prometheus fields like metric names, label names, or label values.
+
+## Data sources
+Azure Monitor managed service for Prometheus can currently collect data from any of the following data sources.
+
+- Azure Kubernetes Service (AKS). [Configure the Azure Monitor managed service for Prometheus AKS add-on](../containers/container-insights-prometheus-metrics-addon.md) to scrape metrics from an AKS cluster.
+- Any Kubernetes cluster running self-managed Prometheus using [remote-write](https://aka.ms/azureprometheus-promio-prw). In this configuration, metrics are collected by a local Prometheus server for each cluster and then consolidated in Azure Monitor managed service for Prometheus.
++
+## Grafana integration
+The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](../../managed-grafan). Connect your Azure Monitor workspace to a Grafana workspace so that it can be used as a data source in a Grafana dashboard. You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards.
+
+## Alerts
+Azure Monitor managed service for Prometheus adds a new Prometheus alert type for creating alerts using PromQL queries. You can view fired and resolved Prometheus alerts in the Azure portal along with other alert types. Prometheus alerts are configured with the same [alert rules](https://aka.ms/azureprometheus-promio-alertrules) used by Prometheus. For your AKS cluster, you can use a set of predefined Prometheus alert rules.
+
+## Enable
+The only requirement to enable Azure Monitor managed service for Prometheus is to create an [Azure Monitor workspace](azure-monitor-workspace-overview.md), which is where Prometheus metrics are stored. Once this workspace is created, you can onboard services that collect Prometheus metrics such as Container insights for your AKS cluster as described in [Send Kubernetes metrics to Azure Monitor managed service for Prometheus with Container insights](../containers/container-insights-prometheus-metrics-addon.md).
++
+## Limitations
+See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for performance related service limits for Azure Monitor managed service for Prometheus.
+
+- Private Links aren't supported for Prometheus metrics collection into an Azure Monitor workspace.
+- Azure Monitor managed service for Prometheus is only supported in public clouds.
+- The metrics addon doesn't work on AKS clusters configured with an HTTP proxy.
+- Scraping and storing metrics at frequencies less than 1 second isn't supported.
++
+## Prometheus references
+Following are links to Prometheus documentation.
+
+- [PromQL](https://aka.ms/azureprometheus-promio-promql)
+- [Grafana](https://aka.ms/azureprometheus-promio-grafana)
+- [Recording rules](https://aka.ms/azureprometheus-promio-recrules)
+- [Alerting rules](https://aka.ms/azureprometheus-promio-alertrules)
+- [Writing Exporters](https://aka.ms/azureprometheus-promio-exporters)
++
+## Next steps
+
+- [Collect Prometheus metrics for your AKS cluster](../containers/container-insights-prometheus-metrics-addon.md).
+- [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md).
+- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Metrics Scrape Configuration Minimal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-configuration-minimal.md
+
+ Title: Minimal Prometheus ingestion profile in Azure Monitor (preview)
+description: Describes how the setting for minimal ingestion profile for Prometheus metrics in Azure Monitor is configured and how you can modify it to collect additional data.
++ Last updated : 09/28/2022+++
+# Minimal ingestion profile for Prometheus metrics in Azure Monitor (preview)
+When Prometheus metric scraping is enabled for a cluster in Container insights, it collects a minimal amount of data by default. This reduces ingestion volume by limiting the collected series and metrics to those used by the default dashboards, default recording rules, and default alerts. This article describes how this setting is configured and how you can modify it to collect additional data.
+
+## Configuration setting
+The setting `default-targets-metrics-keep-list.minimalIngestionProfile="true"` is enabled by default on the metrics addon. You can specify this setting in the [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under the `default-targets-metrics-keep-list` section.
+
+## Scenarios
+
+There are four scenarios where you may want to customize this behavior:
+
+**Ingest only minimal metrics per default target.**<br>
+This is the default behavior with the setting `default-targets-metrics-keep-list.minimalIngestionProfile="true"`. Only series and metrics listed below will be ingested for each of the default targets.
+
+**Ingest a few additional metrics for one or more default targets in addition to minimal metrics.**<br>
+Keep `minimalIngestionProfile="true"` and specify the appropriate `keeplistRegexes.*` setting for the target, for example `keeplistRegexes.coreDns="X|Y"`. X and Y will be merged with the default metric list for the target and then ingested.
++
+**Ingest only a specific set of metrics for a target, and nothing else.**<br>
+Set `minimalIngestionProfile="false"` and specify the appropriate `default-targets-metrics-keep-list.<targetname>="X|Y"` setting for the target in the `ama-metrics-settings-configmap`.
++
+**Ingest all metrics scraped for the default target.**<br>
+Set `minimalIngestionProfile="false"` and don't specify any `default-targets-metrics-keep-list.<targetname>` for that target. This can significantly increase the volume of metrics ingested for that target.
++
+> [!NOTE]
+> The `up` metric isn't part of the allow/keep list because it's ingested per scrape, per target, regardless of the `keepLists` specified. This metric isn't actually scraped but is produced by the metrics addon as a result of the scrape. For histograms and summaries, each series has to be included explicitly in the list (`*bucket`, `*sum`, `*count`).
+
+### Minimal ingestion for ON targets
+The following metrics are allow-listed with `minimalingestionprofile=true` for default ON targets. These metrics will be collected by default as these targets are scraped by default.
+
+`default-targets-metrics-keep-list.kubelet` = `"kubelet_volume_stats_used_bytes|kubelet_node_name|kubelet_running_pods|kubelet_running_pod_count|kubelet_running_containers|kubelet_running_container_count|volume_manager_total_volumes|kubelet_node_config_error|kubelet_runtime_operations_total|kubelet_runtime_operations_errors_total|kubelet_runtime_operations_duration_seconds|kubelet_runtime_operations_duration_seconds_bucket|kubelet_runtime_operations_duration_seconds_sum|kubelet_runtime_operations_duration_seconds_count|kubelet_pod_start_duration_seconds|kubelet_pod_start_duration_seconds_bucket|kubelet_pod_start_duration_seconds_sum|kubelet_pod_start_duration_seconds_count|kubelet_pod_worker_duration_seconds|kubelet_pod_worker_duration_seconds_bucket|kubelet_pod_worker_duration_seconds_sum|kubelet_pod_worker_duration_seconds_count|storage_operation_duration_seconds|storage_operation_duration_seconds_bucket|storage_operation_duration_seconds_sum|storage_operation_duration_seconds_count|storage_operation_errors_total|kubelet_cgroup_manager_duration_seconds|kubelet_cgroup_manager_duration_seconds_bucket|kubelet_cgroup_manager_duration_seconds_sum|kubelet_cgroup_manager_duration_seconds_count|kubelet_pleg_relist_duration_seconds|kubelet_pleg_relist_duration_seconds_bucket|kubelet_pleg_relist_duration_sum|kubelet_pleg_relist_duration_seconds_count|kubelet_pleg_relist_interval_seconds|kubelet_pleg_relist_interval_seconds_bucket|kubelet_pleg_relist_interval_seconds_sum|kubelet_pleg_relist_interval_seconds_count|rest_client_requests_total|rest_client_request_duration_seconds|rest_client_request_duration_seconds_bucket|rest_client_request_duration_seconds_sum|rest_client_request_duration_seconds_count|process_resident_memory_bytes|process_cpu_seconds_total|go_goroutines|kubelet_volume_stats_capacity_bytes|kubelet_volume_stats_available_bytes|kubelet_volume_stats_inodes_used|kubelet_volume_stats_inodes|kubernetes_build_info"`
+
+`default-targets-metrics-keep-list.cadvisor` = `"container_spec_cpu_period|container_spec_cpu_quota|container_cpu_usage_seconds_total|container_memory_rss|container_network_receive_bytes_total|container_network_transmit_bytes_total|container_network_receive_packets_total|container_network_transmit_packets_total|container_network_receive_packets_dropped_total|container_network_transmit_packets_dropped_total|container_fs_reads_total|container_fs_writes_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_memory_cache|container_memory_swap|container_cpu_cfs_throttled_periods_total|container_cpu_cfs_periods_total|container_memory_usage_bytes|kubernetes_build_info"`
+
+`default-targets-metrics-keep-list.kubestate` = `"kube_node_status_capacity|kube_job_status_succeeded|kube_job_spec_completions|kube_daemonset_status_desired_number_scheduled|kube_daemonset_status_number_ready|kube_deployment_spec_replicas|kube_deployment_status_replicas_ready|kube_pod_container_status_last_terminated_reason|kube_node_status_condition|kube_pod_container_status_restarts_total|kube_pod_container_resource_requests|kube_pod_status_phase|kube_pod_container_resource_limits|kube_node_status_allocatable|kube_pod_info|kube_pod_owner|kube_resourcequota|kube_statefulset_replicas|kube_statefulset_status_replicas|kube_statefulset_status_replicas_ready|kube_statefulset_status_replicas_current|kube_statefulset_status_replicas_updated|kube_namespace_status_phase|kube_node_info|kube_statefulset_metadata_generation|kube_pod_labels|kube_pod_annotations|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_spec_max_replicas|kube_node_status_condition|kube_node_spec_taint|kube_pod_container_status_waiting_reason|kube_job_failed|kube_job_status_start_time|kube_deployment_spec_replicas|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_replicaset_owner|kubernetes_build_info"`
+
+`default-targets-metrics-keep-list.nodeexporter` = `"node_cpu_seconds_total|node_memory_MemAvailable_bytes|node_memory_Buffers_bytes|node_memory_Cached_bytes|node_memory_MemFree_bytes|node_memory_Slab_bytes|node_memory_MemTotal_bytes|node_netstat_Tcp_RetransSegs|node_netstat_Tcp_OutSegs|node_netstat_TcpExt_TCPSynRetrans|node_load1|node_load5|node_load15|node_disk_read_bytes_total|node_disk_written_bytes_total|node_disk_io_time_seconds_total|node_filesystem_size_bytes|node_filesystem_avail_bytes|node_network_receive_bytes_total|node_network_transmit_bytes_total|node_vmstat_pgmajfault|node_network_receive_drop_total|node_network_transmit_drop_total|node_disk_io_time_weighted_seconds_total|node_exporter_build_info|node_time_seconds|node_uname_info"`
+
+### Minimal ingestion for OFF targets
+The following metrics are allow-listed with `minimalingestionprofile=true` for default OFF targets. These metrics won't be collected by default because these targets aren't scraped by default. You can turn on scraping for these targets by setting them to `true` under `default-scrape-settings-enabled` in the `ama-metrics-settings-configmap`.
+
+`default-targets-metrics-keep-list.coredns` = `"coredns_build_info|coredns_panics_total|coredns_dns_responses_total|coredns_forward_responses_total|coredns_dns_request_duration_seconds|coredns_dns_request_duration_seconds_bucket|coredns_dns_request_duration_seconds_sum|coredns_dns_request_duration_seconds_count|coredns_forward_request_duration_seconds|coredns_forward_request_duration_seconds_bucket|coredns_forward_request_duration_seconds_sum|coredns_forward_request_duration_seconds_count|coredns_dns_requests_total|coredns_forward_requests_total|coredns_cache_hits_total|coredns_cache_misses_total|coredns_cache_entries|coredns_plugin_enabled|coredns_dns_request_size_bytes|coredns_dns_request_size_bytes_bucket|coredns_dns_request_size_bytes_sum|coredns_dns_request_size_bytes_count|coredns_dns_response_size_bytes|coredns_dns_response_size_bytes_bucket|coredns_dns_response_size_bytes_sum|coredns_dns_response_size_bytes_count|coredns_dns_response_size_bytes_bucket|coredns_dns_response_size_bytes_sum|coredns_dns_response_size_bytes_count|process_resident_memory_bytes|process_cpu_seconds_total|go_goroutines|kubernetes_build_info"`
+
+`default-targets-metrics-keep-list.kubeproxy` = `"kubeproxy_sync_proxy_rules_duration_seconds|kubeproxy_sync_proxy_rules_duration_seconds_bucket|kubeproxy_sync_proxy_rules_duration_seconds_sum|kubeproxy_sync_proxy_rules_duration_seconds_count|kubeproxy_network_programming_duration_seconds|kubeproxy_network_programming_duration_seconds_bucket|kubeproxy_network_programming_duration_seconds_sum|kubeproxy_network_programming_duration_seconds_count|rest_client_requests_total|rest_client_request_duration_seconds|rest_client_request_duration_seconds_bucket|rest_client_request_duration_seconds_sum|rest_client_request_duration_seconds_count|process_resident_memory_bytes|process_cpu_seconds_total|go_goroutines|kubernetes_build_info"`
+
+`default-targets-metrics-keep-list.apiserver` = `"apiserver_request_duration_seconds|apiserver_request_duration_seconds_bucket|apiserver_request_duration_seconds_sum|apiserver_request_duration_seconds_count|apiserver_request_total|workqueue_adds_total|workqueue_depth|workqueue_queue_duration_seconds|workqueue_queue_duration_seconds_bucket|workqueue_queue_duration_seconds_sum|workqueue_queue_duration_seconds_count|process_resident_memory_bytes|process_cpu_seconds_total|go_goroutines|kubernetes_build_info"`
+
+## Next steps
+
+- [Learn more about customizing Prometheus metric scraping in Container insights](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-configuration.md
+
+ Title: Customize scraping of Prometheus metrics in Azure Monitor (preview)
+description: Customize metrics scraping for a Kubernetes cluster with the metrics addon in Azure Monitor.
++ Last updated : 09/28/2022+++
+# Customize scraping of Prometheus metrics in Azure Monitor (preview)
+
+This article provides instructions on customizing metrics scraping for a Kubernetes cluster with the [metrics addon](../containers/container-insights-prometheus-metrics-addon.md) in Azure Monitor.
+
+## Configmaps
+
+Three different configmaps can be configured to change the default settings of the metrics addon:
+
+- ama-metrics-settings-configmap
+- ama-metrics-prometheus-config
+- ama-metrics-prometheus-config-node
+
+## Metrics addon settings configmap
+
+The [ama-metrics-settings-configmap](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-settings-configmap.yaml) can be downloaded, edited, and applied to the cluster to customize the out-of-the-box features of the metrics addon.
+
+### Enabling and disabling default targets
+The following table lists all the default targets that the Azure Monitor metrics addon can scrape by default and whether each target is initially enabled. Default targets are scraped every 30 seconds.
+
+| Key | Type | Enabled | Description |
+|--|--|-|-|
+| kubelet | bool | `true` | Scrape kubelet in every node in the k8s cluster without any extra scrape config. |
+| cadvisor | bool | `true` | Scrape cAdvisor in every node in the k8s cluster without any extra scrape config.<br>Linux only. |
+| kubestate | bool | `true` | Scrape kube-state-metrics in the k8s cluster (installed as a part of the addon) without any extra scrape config. |
+| nodeexporter | bool | `true` | Scrape node metrics without any extra scrape config.<br>Linux only. |
+| coredns | bool | `false` | Scrape coredns service in the k8s cluster without any extra scrape config. |
+| kubeproxy | bool | `false` | Scrape kube-proxy in every Linux node discovered in the k8s cluster without any extra scrape config.<br>Linux only. |
+| apiserver | bool | `false` | Scrape the Kubernetes API server in the k8s cluster without any extra scrape config. |
+| prometheuscollectorhealth | bool | `false` | Scrape info about the prometheus-collector container such as the amount and size of timeseries scraped. |
+
+If you want to turn on the scraping of the default targets that aren't enabled by default, edit the [`ama-metrics-settings-configmap`](https://aka.ms/azureprometheus-addon-settings-configmap) configmap to update the targets listed under `default-scrape-settings-enabled` to `true`, and apply the configmap to your cluster.
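+
+For reference, the following is a minimal sketch of the relevant part of the settings configmap, based on the sample configmap linked above. Treat the exact key layout as illustrative and start from the downloaded sample when you make changes.
+
+```yaml
+# Sketch of ama-metrics-settings-configmap: turn on the coredns and apiserver targets.
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: ama-metrics-settings-configmap
+  namespace: kube-system
+data:
+  default-scrape-settings-enabled: |-
+    kubelet = true
+    cadvisor = true
+    kubestate = true
+    nodeexporter = true
+    coredns = true
+    kubeproxy = false
+    apiserver = true
+    prometheuscollectorhealth = false
+```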
+
+### Customizing metrics collected by default targets
+By default, for all the default targets, only minimal metrics used in the default recording rules, alerts, and Grafana dashboards are ingested as described in [minimal-ingestion-profile](prometheus-metrics-scrape-configuration-minimal.md). To collect all metrics from default targets, in the configmap under `default-targets-metrics-keep-list`, set `minimalingestionprofile` to `false`.
+
+To filter in additional metrics for any default targets, edit the settings under `default-targets-metrics-keep-list` for the corresponding job you'd like to change.
+
+For example, `kubelet` is the metric filtering setting for the default target kubelet. Use the following format to filter in metrics collected for the default targets by using regex-based filtering.
+
+```
+kubelet = "metricX|metricY"
+apiserver = "mymetric.*"
+```
+
+> [!NOTE]
+> If you use quotes or backslashes in the regex, you'll need to escape them using a backslash. For example `"test\'smetric\"s\""` and `testbackslash\\*`.
+
+To further customize the default jobs to change properties such as collection frequency or labels, disable the corresponding default target by setting the configmap value for the target to `false`, and then apply the job by using a custom configmap. For details on custom configuration, see [Customize scraping of Prometheus metrics in Azure Monitor](prometheus-metrics-scrape-configuration.md#configure-custom-prometheus-scrape-jobs).
+
+### Cluster alias
+The cluster label appended to every time series scraped will use the last part of the full AKS cluster's ARM resourceID. For example, if the resource ID is `/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/rg-name/providers/Microsoft.ContainerService/managedClusters/clustername`, the cluster label is `clustername`.
+
+To override the cluster label in the time series scraped, update the setting `cluster_alias` to any string under `prometheus-collector-settings` in the `ama-metrics-settings-configmap` [configmap](https://aka.ms/azureprometheus-addon-settings-configmap). You can either create this configmap or edit an existing one.
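+
+For illustration, the following is a minimal sketch of that section of the settings configmap; the alias value is a placeholder:
+
+```yaml
+# Sketch of the prometheus-collector-settings section of ama-metrics-settings-configmap
+data:
+  prometheus-collector-settings: |-
+    cluster_alias = "myclusteralias"
+```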
+
+The new label will also show up in the cluster parameter dropdown in the Grafana dashboards instead of the default one.
+
+> [!NOTE]
+> Only alphanumeric characters are allowed. Any other characters will be replaced with `_`. This ensures that different components that consume this label adhere to the basic alphanumeric convention.
+
+### Debug mode
+To view every metric that is being scraped for debugging purposes, the metrics addon agent can be configured to run in debug mode by updating the setting `enabled` to `true` under the `debug-mode` setting in `ama-metrics-settings-configmap` [configmap](https://aka.ms/azureprometheus-addon-settings-configmap). You can either create this configmap or edit an existing one. See [the Debug Mode section in Troubleshoot collection of Prometheus metrics](prometheus-metrics-troubleshoot.md#debug-mode) for more details.
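+
+A minimal sketch of the corresponding setting, based on the sample configmap; remember to turn it back off when you finish debugging:
+
+```yaml
+# Sketch of the debug-mode section of ama-metrics-settings-configmap
+data:
+  debug-mode: |-
+    enabled = true
+```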
+
+## Configure custom Prometheus scrape jobs
+
+You can configure the metrics addon to scrape targets other than the default ones, using the same configuration format as the [Prometheus configuration file](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file).
+
+Follow the instructions to [create, validate, and apply the configmap](prometheus-metrics-scrape-validate.md) for your cluster.
+
+### Advanced Setup: Configure custom Prometheus scrape jobs for the daemonset
+
+The `ama-metrics` replicaset pod consumes the custom Prometheus config and scrapes the specified targets. For a cluster with a large number of nodes and pods and a large volume of metrics to scrape, some of the applicable custom scrape targets can be off-loaded from the single `ama-metrics` replicaset pod to the `ama-metrics` daemonset pod. The [ama-metrics-prometheus-config-node configmap](https://aka.ms/azureprometheus-addon-ds-configmap), similar to the regular configmap, can be created to have static scrape configs on each node. The scrape config should only target a single node and shouldn't use service discovery; otherwise each node will try to scrape all targets and will make many calls to the Kubernetes API server. The `node-exporter` config below is one of the default targets for the daemonset pods. It uses the `$NODE_IP` environment variable, which is already set for every ama-metrics addon container to target a specific port on the node:
+
+ ```yaml
+ - job_name: node
+   scrape_interval: 30s
+   scheme: http
+   metrics_path: /metrics
+   relabel_configs:
+   - source_labels: [__metrics_path__]
+     regex: (.*)
+     target_label: metrics_path
+   - source_labels: [__address__]
+     replacement: '$NODE_NAME'
+     target_label: instance
+   static_configs:
+   - targets: ['$NODE_IP:9100']
+ ```
+
+Custom scrape targets can follow the same format using `static_configs` with targets using the `$NODE_IP` environment variable and specifying the port to scrape. Each pod of the daemonset will take the config, scrape the metrics, and send them for that node.
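+
+For example, the following is a sketch of a custom node-level job for the `ama-metrics-prometheus-config-node` configmap. The job name, port `8080`, and metrics path are hypothetical placeholders for your own node-local exporter:
+
+```yaml
+scrape_configs:
+  - job_name: my-node-app            # hypothetical job name
+    scrape_interval: 30s
+    metrics_path: /metrics
+    scheme: http
+    static_configs:
+      # $NODE_IP is set in every ama-metrics addon container; 8080 is a placeholder port
+      - targets: ['$NODE_IP:8080']
+```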
+
+## Prometheus configuration tips and examples
+
+### Configuration File for custom scrape config
+
+The configuration format is the same as the [Prometheus configuration file](https://aka.ms/azureprometheus-promioconfig). Currently supported are the following sections:
+
+```yaml
+global:
+  scrape_interval: <duration>
+  scrape_timeout: <duration>
+  external_labels:
+    <labelname1>: <labelvalue>
+    <labelname2>: <labelvalue>
+scrape_configs:
+  - <job-x>
+  - <job-y>
+```
+
+Any other unsupported sections need to be removed from the config before applying as a configmap. Otherwise the custom configuration will fail validation and won't be applied.
+
+> [!NOTE]
+> When custom scrape configuration fails to apply due to validation errors, default scrape configuration will continue to be used.
+
+## Scrape Configs
+The currently supported methods of target discovery for a [scrape config](https://aka.ms/azureprometheus-promioconfig-scrape) are either [`static_configs`](https://aka.ms/azureprometheus-promioconfig-static) or [`kubernetes_sd_configs`](https://aka.ms/azureprometheus-promioconfig-sdk8s) for specifying or discovering targets.
+
+#### Static config
+
+A static config has a list of static targets and any extra labels to add to them.
+
+```yaml
+scrape_configs:
+  - job_name: example
+    static_configs:
+      - targets: ['10.10.10.1:9090', '10.10.10.2:9090', '10.10.10.3:9090']
+        labels:
+          label1: value1
+          label2: value2
+```
+
+#### Kubernetes Service Discovery config
+
+Targets discovered using [`kubernetes_sd_configs`](https://aka.ms/azureprometheus-promioconfig-sdk8s) will each have different `__meta_*` labels depending on what role is specified. These can be used in the `relabel_configs` section to filter targets or replace labels for the targets.
+
+See the [Prometheus examples](https://aka.ms/azureprometheus-promsampleossconfig) of scrape configs for a Kubernetes cluster.
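+
+For illustration, the following sketch discovers pod targets and keeps only pods in a hypothetical namespace named `my-namespace`, using one of the `__meta_*` labels that the `pod` role adds during discovery:
+
+```yaml
+scrape_configs:
+  - job_name: my-pods                # hypothetical job name
+    kubernetes_sd_configs:
+      - role: pod
+    relabel_configs:
+      # __meta_kubernetes_namespace is added by the 'pod' role during discovery
+      - source_labels: [__meta_kubernetes_namespace]
+        action: keep
+        regex: 'my-namespace'
+```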
+
+### Relabel configs
+The `relabel_configs` section is applied at the time of target discovery and applies to each target for the job. Below are examples showing ways to use `relabel_configs`.
+
+#### Adding a label
+Add a new label called `example_label` with value `example_value` to every metric of the job. Use `__address__` as the source label only because that label will always exist. This will add the label for every target of the job.
+
+```yaml
+relabel_configs:
+- source_labels: [__address__]
+ target_label: example_label
+ replacement: 'example_value'
+```
+
+#### Use Kubernetes Service Discovery labels
+
+If a job is using [`kubernetes_sd_configs`](https://aka.ms/azureprometheus-promioconfig-sdk8s) to discover targets, each role has associated `__meta_*` labels for metrics. The `__*` labels are dropped after discovering the targets. To filter by them at the metrics level, first keep them using `relabel_configs` by assigning a label name and then use `metric_relabel_configs` to filter.
+
+```yaml
+# Use the kubernetes namespace as a label called 'kubernetes_namespace'
+relabel_configs:
+- source_labels: [__meta_kubernetes_namespace]
+ action: replace
+ target_label: kubernetes_namespace
+
+# Keep only metrics with the kubernetes namespace 'default'
+metric_relabel_configs:
+- source_labels: [kubernetes_namespace]
+ action: keep
+ regex: 'default'
+```
+
+#### Job and instance relabeling
+
+The `job` and `instance` label values can be changed based on the source label, just like any other label.
+
+```yaml
+# Replace the job name with the pod label 'k8s app'
+relabel_configs:
+- source_labels: [__meta_kubernetes_pod_label_k8s_app]
+ target_label: job
+
+# Replace the instance name with the node name. This is helpful to replace a node IP
+# and port with a value that is more readable
+relabel_configs:
+- source_labels: [__meta_kubernetes_node_name]
+ target_label: instance
+```
+
+### Metric relabel configs
+
+Metric relabel configs are applied after scraping and before ingestion. Use the `metric_relabel_configs` section to filter metrics after scraping. Below are examples of how to do so.
+
+#### Drop metrics by name
+
+```yaml
+# Drop the metric named 'example_metric_name'
+metric_relabel_configs:
+- source_labels: [__name__]
+ action: drop
+ regex: 'example_metric_name'
+```
+
+#### Keep only certain metrics by name
+
+```yaml
+# Keep only the metric named 'example_metric_name'
+metric_relabel_configs:
+- source_labels: [__name__]
+ action: keep
+ regex: 'example_metric_name'
+```
+
+```yaml
+# Keep only metrics that start with 'example_'
+metric_relabel_configs:
+- source_labels: [__name__]
+ action: keep
+ regex: '(example_.*)'
+```
+
+#### Rename Metrics
+Metric renaming isn't supported.
+
+#### Filter Metrics by Labels
+
+```yaml
+# Keep only metrics where example_label = 'example'
+metric_relabel_configs:
+- source_labels: [example_label]
+ action: keep
+ regex: 'example'
+```
+
+```yaml
+# Keep metrics only if `example_label` equals `value_1` or `value_2`
+metric_relabel_configs:
+- source_labels: [example_label]
+ action: keep
+ regex: '(value_1|value_2)'
+```
+
+```yaml
+# Keep metric only if `example_label_1 = value_1` and `example_label_2 = value_2`
+metric_relabel_configs:
+- source_labels: [example_label_1, example_label_2]
+ separator: ';'
+ action: keep
+ regex: 'value_1;value_2'
+```
+
+```yaml
+# Keep metric only if `example_label` exists as a label
+metric_relabel_configs:
+- source_labels: [example_label_1]
+ action: keep
+ regex: '.+'
+```
+
+### Pod Annotation Based Scraping
+
+If you're currently using Azure Monitor Container Insights Prometheus scraping with the setting `monitor_kubernetes_pods = true`, adding this job to your custom config will allow you to scrape the same pods and metrics.
+
+The scrape config below uses the `__meta_*` labels added from the `kubernetes_sd_configs` for the `pod` role to filter for pods with certain annotations.
+
+To scrape certain pods, specify the port, path, and scheme through annotations for the pod. The job below will scrape only the address specified by the annotation:
+- `prometheus.io/scrape`: Enable scraping for this pod.
+- `prometheus.io/scheme`: If the metrics endpoint is secured, you'll need to set this to `https` and most likely set the TLS config.
+- `prometheus.io/path`: If the metrics path isn't `/metrics`, define it with this annotation.
+- `prometheus.io/port`: Specify a single, desired port to scrape.
+
+```yaml
+scrape_configs:
+  - job_name: 'kubernetes-pods'
+
+    kubernetes_sd_configs:
+    - role: pod
+
+    relabel_configs:
+    # Scrape only pods with the annotation: prometheus.io/scrape = true
+    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
+      action: keep
+      regex: true
+
+    # If prometheus.io/path is specified, scrape this path instead of /metrics
+    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
+      action: replace
+      target_label: __metrics_path__
+      regex: (.+)
+
+    # If prometheus.io/port is specified, scrape this port instead of the default
+    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
+      action: replace
+      regex: ([^:]+)(?::\d+)?;(\d+)
+      replacement: $1:$2
+      target_label: __address__
+
+    # If prometheus.io/scheme is specified, scrape with this scheme instead of http
+    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
+      action: replace
+      regex: (http|https)
+      target_label: __scheme__
+
+    # Include the pod namespace as a label for each metric
+    - source_labels: [__meta_kubernetes_namespace]
+      action: replace
+      target_label: kubernetes_namespace
+
+    # Include the pod name as a label for each metric
+    - source_labels: [__meta_kubernetes_pod_name]
+      action: replace
+      target_label: kubernetes_pod_name
+
+    # [Optional] Include all pod labels as labels for each metric
+    - action: labelmap
+      regex: __meta_kubernetes_pod_label_(.+)
+```
+++
+## Next steps
+
+- [Learn more about collecting Prometheus metrics](prometheus-metrics-overview.md).
azure-monitor Prometheus Metrics Scrape Default https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-default.md
+
+ Title: Default Prometheus metrics configuration in Azure Monitor (preview)
+description: Lists the default targets, dashboards, and recording rules for Prometheus metrics in Azure Monitor.
++ Last updated : 09/28/2022+++
+# Default Prometheus metrics configuration in Azure Monitor (preview)
+
+This article lists the default targets, dashboards, and recording rules when you [configure Container insights to collect Prometheus metrics by enabling metrics-addon](../containers/container-insights-prometheus-metrics-addon.md) for any AKS cluster.
+
+## Scrape frequency
+
+ The default scrape frequency for all default targets and scrapes is **30 seconds**.
+
+## Targets scraped
+
+- `cadvisor` (`job=cadvisor`)
+- `nodeexporter` (`job=node`)
+- `kubelet` (`job=kubelet`)
+- `kube-state-metrics` (`job=kube-state-metrics`)
+
+## Metrics collected from default targets
+
+The following metrics are collected by default from each default target. All other metrics are dropped through relabeling rules.
+
+ **cadvisor (job=cadvisor)**<br>
+ - `container_memory_rss`
+ - `container_network_receive_bytes_total`
+ - `container_network_transmit_bytes_total`
+ - `container_network_receive_packets_total`
+ - `container_network_transmit_packets_total`
+ - `container_network_receive_packets_dropped_total`
+ - `container_network_transmit_packets_dropped_total`
+ - `container_fs_reads_total`
+ - `container_fs_writes_total`
+ - `container_fs_reads_bytes_total`
+ - `container_fs_writes_bytes_total`
+ - `container_cpu_usage_seconds_total`
+
+ **kubelet (job=kubelet)**<br>
+ - `kubelet_node_name`
+ - `kubelet_running_pods`
+ - `kubelet_running_pod_count`
+ - `kubelet_running_sum_containers`
+ - `kubelet_running_container_count`
+ - `volume_manager_total_volumes`
+ - `kubelet_node_config_error`
+ - `kubelet_runtime_operations_total`
+ - `kubelet_runtime_operations_errors_total`
+ - `kubelet_runtime_operations_duration_seconds_bucket`
+ - `kubelet_runtime_operations_duration_seconds_sum`
+ - `kubelet_runtime_operations_duration_seconds_count`
+ - `kubelet_pod_start_duration_seconds_bucket`
+ - `kubelet_pod_start_duration_seconds_sum`
+ - `kubelet_pod_start_duration_seconds_count`
+ - `kubelet_pod_worker_duration_seconds_bucket`
+ - `kubelet_pod_worker_duration_seconds_sum`
+ - `kubelet_pod_worker_duration_seconds_count`
+ - `storage_operation_duration_seconds_bucket`
+ - `storage_operation_duration_seconds_sum`
+ - `storage_operation_duration_seconds_count`
+ - `storage_operation_errors_total`
+ - `kubelet_cgroup_manager_duration_seconds_bucket`
+ - `kubelet_cgroup_manager_duration_seconds_sum`
+ - `kubelet_cgroup_manager_duration_seconds_count`
+ - `kubelet_pleg_relist_interval_seconds_bucket`
+ - `kubelet_pleg_relist_interval_seconds_count`
+ - `kubelet_pleg_relist_interval_seconds_sum`
+ - `kubelet_pleg_relist_duration_seconds_bucket`
+ - `kubelet_pleg_relist_duration_seconds_count`
+ - `kubelet_pleg_relist_duration_seconds_sum`
+ - `rest_client_requests_total`
+ - `rest_client_request_duration_seconds_bucket`
+ - `rest_client_request_duration_seconds_sum`
+ - `rest_client_request_duration_seconds_count`
+ - `process_resident_memory_bytes`
+ - `process_cpu_seconds_total`
+ - `go_goroutines`
+ - `kubernetes_build_info`
+
+ **nodeexporter (job=node)**<br>
+ - `node_memory_MemTotal_bytes`
+ - `node_cpu_seconds_total`
+ - `node_memory_MemAvailable_bytes`
+ - `node_memory_Buffers_bytes`
+ - `node_memory_Cached_bytes`
+ - `node_memory_MemFree_bytes`
+ - `node_memory_Slab_bytes`
+ - `node_filesystem_avail_bytes`
+ - `node_filesystem_size_bytes`
+ - `node_time_seconds`
+ - `node_exporter_build_info`
+ - `node_load1`
+ - `node_vmstat_pgmajfault`
+ - `node_network_receive_bytes_total`
+ - `node_network_transmit_bytes_total`
+ - `node_network_receive_drop_total`
+ - `node_network_transmit_drop_total`
+ - `node_disk_io_time_seconds_total`
+ - `node_disk_io_time_weighted_seconds_total`
+ - `node_load5`
+ - `node_load15`
+ - `node_disk_read_bytes_total`
+ - `node_disk_written_bytes_total`
+ - `node_uname_info`
+
+ **kube-state-metrics (job=kube-state-metrics)**<br>
+ - `kube_node_status_allocatable`
+ - `kube_pod_owner`
+ - `kube_pod_container_resource_requests`
+ - `kube_pod_status_phase`
+ - `kube_pod_container_resource_limits`
+ - `kube_pod_info`
+ - `kube_replicaset_owner`
+ - `kube_resourcequota`
+ - `kube_namespace_status_phase`
+ - `kube_node_status_capacity`
+ - `kube_node_info`
+ - `kube_deployment_spec_replicas`
+ - `kube_deployment_status_replicas_available`
+ - `kube_deployment_status_replicas_updated`
+ - `kube_statefulset_status_replicas_ready`
+ - `kube_statefulset_status_replicas`
+ - `kube_statefulset_status_replicas_updated`
+ - `kube_job_status_start_time`
+ - `kube_job_status_active`
+ - `kube_job_failed`
+ - `kube_horizontalpodautoscaler_status_desired_replicas`
+ - `kube_horizontalpodautoscaler_status_current_replicas`
+ - `kube_horizontalpodautoscaler_spec_min_replicas`
+ - `kube_horizontalpodautoscaler_spec_max_replicas`
+ - `kubernetes_build_info`
+ - `kube_node_status_condition`
+ - `kube_node_spec_taint`
+
+## Dashboards
+
+Following are the default dashboards that are automatically provisioned and configured by Azure Monitor managed service for Prometheus when you [link your Azure Monitor workspace to an Azure Managed Grafana instance](../essentials/azure-monitor-workspace-overview.md#link-a-grafana-workspace). Source code for these dashboards can be found in [GitHub](https://aka.ms/azureprometheus-mixins)
+
+- Kubernetes / Compute Resources / Cluster
+- Kubernetes / Compute Resources / Namespace (Pods)
+- Kubernetes / Compute Resources / Node (Pods)
+- Kubernetes / Compute Resources / Pod
+- Kubernetes / Compute Resources / Namespace (Workloads)
+- Kubernetes / Compute Resources / Workload
+- Kubernetes / Kubelet
+- Node Exporter / USE Method / Node
+- Node Exporter / Nodes
+
+## Recording rules
+
+Following are the default recording rules that are automatically configured by Azure Monitor managed service for Prometheus when you [link your Azure Monitor workspace to an Azure Managed Grafana instance](../essentials/azure-monitor-workspace-overview.md#link-a-grafana-workspace). Source code for these recording rules can be found in [GitHub](https://aka.ms/azureprometheus-mixins)
++
+- `cluster:node_cpu:ratio_rate5m`
+- `namespace_cpu:kube_pod_container_resource_requests:sum`
+- `namespace_cpu:kube_pod_container_resource_limits:sum`
+- `:node_memory_MemAvailable_bytes:sum`
+- `namespace_memory:kube_pod_container_resource_requests:sum`
+- `namespace_memory:kube_pod_container_resource_limits:sum`
+- `namespace_workload_pod:kube_pod_owner:relabel`
+- `node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate`
+- `cluster:namespace:pod_cpu:active:kube_pod_container_resource_requests`
+- `cluster:namespace:pod_cpu:active:kube_pod_container_resource_limits`
+- `cluster:namespace:pod_memory:active:kube_pod_container_resource_requests`
+- `cluster:namespace:pod_memory:active:kube_pod_container_resource_limits`
+- `node_namespace_pod_container:container_memory_working_set_bytes`
+- `node_namespace_pod_container:container_memory_rss`
+- `node_namespace_pod_container:container_memory_cache`
+- `node_namespace_pod_container:container_memory_swap`
+- `instance:node_cpu_utilisation:rate5m`
+- `instance:node_load1_per_cpu:ratio`
+- `instance:node_memory_utilisation:ratio`
+- `instance:node_vmstat_pgmajfault:rate5m`
+- `instance:node_network_receive_bytes_excluding_lo:rate5m`
+- `instance:node_network_transmit_bytes_excluding_lo:rate5m`
+- `instance:node_network_receive_drop_excluding_lo:rate5m`
+- `instance:node_network_transmit_drop_excluding_lo:rate5m`
+- `instance_device:node_disk_io_time_seconds:rate5m`
+- `instance_device:node_disk_io_time_weighted_seconds:rate5m`
+- `instance:node_num_cpu:sum`
+
+## Next steps
+
+- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Metrics Scrape Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-scale.md
+
+ Title: Scrape Prometheus metrics at scale in Azure Monitor (preview)
+description: Guidance on performance that can be expected when collecting metrics at high scale for Azure Monitor managed service for Prometheus.
++ Last updated : 09/28/2022+++
+# Scrape Prometheus metrics at scale in Azure Monitor (preview)
+This article provides guidance on performance that can be expected when collecting metrics at high scale for [Azure Monitor managed service for Prometheus](prometheus-metrics-overview.md).
++
+## CPU and memory
+The CPU and memory usage is correlated with the number of bytes of each sample and the number of samples scraped. The benchmarks below are based on the [default targets scraped](prometheus-metrics-scrape-default.md), volume of custom metrics scraped, and number of nodes, pods, and containers. These numbers are meant as a reference since usage can still vary significantly depending on the number of timeseries and bytes per metric.
+
+The upper volume limit per pod is currently about 3-3.5 million samples per minute, depending on the number of bytes per sample. This limitation will be eliminated when sharding is added to the feature.
+
+The Container insights agent consists of a deployment with one replica and daemonset for scraping metrics. The daemonset scrapes any node-level targets such as cAdvisor, kubelet, and node exporter. You can also configure it to scrape any custom targets at the node level with static configs. The replicaset scrapes everything else such as kube-state-metrics or custom scrape jobs that utilize service discovery.
+
+## Comparison between small and large cluster for replicaset
+
+| Scrape Targets | Samples Sent / Minute | Node Count | Pod Count | Prometheus-Collector CPU Usage (cores) |Prometheus-Collector Memory Usage (bytes) |
+|:|:|:|:|:|:|
+| default targets | 11,344 | 3 | 40 | 12.9 mc | 148 Mi |
+| default targets | 260,000 | 340 | 13000 | 1.10 c | 1.70 GB |
+| default targets<br>+ custom targets | 3.56 million | 340 | 13000 | 5.13 c | 9.52 GB |
+
+## Comparison between small and large cluster for daemonsets
+
+| Scrape Targets | Samples Sent / Minute Total | Samples Sent / Minute / Pod | Node Count | Pod Count | Prometheus-Collector CPU Usage Total (cores) |Prometheus-Collector Memory Usage Total (bytes) | Prometheus-Collector CPU Usage / Pod (cores) |Prometheus-Collector Memory Usage / Pod (bytes) |
+|:|:|:|:|:|:|:|:|:|
+| default targets | 9,858 | 3,327 | 3 | 40 | 41.9 mc | 581 Mi | 14.7 mc | 189 Mi |
+| default targets | 2.3 million | 14,400 | 340 | 13000 | 805 mc | 305.34 GB | 2.36 mc | 898 Mi |
+
+For more custom metrics, the single pod will behave the same as the replicaset pod depending on the volume of custom metrics.
++
+## Schedule ama-metrics replicaset pod on a nodepool with more resources
+
+A large volume of metrics per pod requires a node large enough to handle the required CPU and memory usage. If the *ama-metrics* replicaset pod isn't scheduled on a node with enough resources, it might keep getting OOMKilled and go into CrashLoopBackOff. To overcome this issue, if your cluster has a node with higher resources (preferably in the system nodepool) and you want the replicaset scheduled on that node, add the label `azuremonitor/metrics.replica.preferred=true` to the node. The replicaset pod will then be scheduled on that node.
+
+ ```
+ kubectl label nodes <node-name> azuremonitor/metrics.replica.preferred="true"
+ ```
+## Next steps
+
+- [Troubleshoot issues with Prometheus data collection](prometheus-metrics-troubleshoot.md).
azure-monitor Prometheus Metrics Scrape Validate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-validate.md
+
+ Title: Create and validate custom configuration file for Prometheus metrics in Azure Monitor (preview)
+description: Describes how to create custom configuration file Prometheus metrics in Azure Monitor and use validation tool before applying to Kubernetes cluster.
++ Last updated : 09/28/2022+++
+# Create and validate custom configuration file for Prometheus metrics in Azure Monitor (preview)
+
+In addition to the default targets that the Azure Monitor Prometheus agent scrapes, use the following steps to provide additional scrape config to the agent by using a configmap. The Azure Monitor Prometheus agent doesn't understand or process operator [CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) for scrape configuration. Instead, it uses the native Prometheus configuration as defined in [Prometheus configuration](https://aka.ms/azureprometheus-promioconfig-scrape).
+
+## Create Prometheus configuration file
+Create a Prometheus scrape configuration file named `prometheus-config`. See the [configuration tips and examples](prometheus-metrics-scrape-configuration.md#prometheus-configuration-tips-and-examples) for more details on authoring scrape config for Prometheus. You can also refer to [Prometheus.io](https://aka.ms/azureprometheus-promio) scrape configuration [reference](https://aka.ms/azureprometheus-promioconfig-scrape). Your config file will list the scrape configs under the section `scrape_configs` and can optionally use the global section for setting the global `scrape_interval`, `scrape_timeout`, and `external_labels`.
++
+> [!TIP]
+> Changes to global section will impact the default configs and the custom config.
+
+Below is a sample Prometheus scrape config file:
+
+```
+global:
+  scrape_interval: 60s
+scrape_configs:
+- job_name: node
+  scrape_interval: 30s
+  scheme: http
+  kubernetes_sd_configs:
+    - role: endpoints
+      namespaces:
+        names:
+        - node-exporter
+  relabel_configs:
+  - source_labels: [__meta_kubernetes_endpoints_name]
+    action: keep
+    regex: "dev-cluster-node-exporter-release-prometheus-node-exporter"
+  - source_labels: [__metrics_path__]
+    regex: (.*)
+    target_label: metrics_path
+  - source_labels: [__meta_kubernetes_endpoint_node_name]
+    regex: (.*)
+    target_label: instance
+
+- job_name: kube-state-metrics
+  scrape_interval: 30s
+  static_configs:
+  - targets: ['dev-cluster-kube-state-metrics-release.kube-state-metrics.svc.cluster.local:8080']
+
+- job_name: prometheus_ref_app
+  scheme: http
+  kubernetes_sd_configs:
+    - role: service
+  relabel_configs:
+  - source_labels: [__meta_kubernetes_service_name]
+    action: keep
+    regex: "prometheus-reference-service"
+```
+
+## Validate the scrape config file
+
+The agent uses the `promconfigvalidator` tool to validate the Prometheus config given to it through the configmap. If the config isn't valid, then the custom configuration given won't be used by the agent. Once you have your Prometheus config file, you can *optionally* use the `promconfigvalidator` tool to validate your config before creating a configmap that the agent consumes.
+
+The `promconfigvalidator` tool is inside the Azure Monitor metrics addon. You can use any of the `ama-metrics-node-*` pods in `kube-system` namespace in your cluster to download the tool for validation. Use `kubectl cp` to download the tool and its configuration as shown below:
+
+```
+for podname in $(kubectl get pods -l rsName=ama-metrics -n=kube-system -o json | jq -r '.items[].metadata.name'); do kubectl cp -n=kube-system "${podname}":/opt/promconfigvalidator ./promconfigvalidator; kubectl cp -n=kube-system "${podname}":/opt/microsoft/otelcollector/collector-config-template.yml ./collector-config-template.yml; chmod 500 promconfigvalidator; done
+```
+
+After copying the executable and the yaml, locate the path of your Prometheus configuration file. Then replace `<config path>` below and run the validator with the command:
+
+```
+./promconfigvalidator/promconfigvalidator --config "<config path>" --otelTemplate "./promconfigvalidator/collector-config-template.yml"
+```
+
+Running the validator generates the merged configuration file `merged-otel-config.yaml` if no path is provided with the optional `output` parameter. Don't use this merged file as config to the metrics collector agent, as it's only used for tool validation and debugging purposes.
+
+### Apply config file
+Your custom Prometheus configuration file is consumed as a field named `prometheus-config` in a configmap called `ama-metrics-prometheus-config` in the `kube-system` namespace. You can create a configmap from a file by renaming your Prometheus configuration file to `prometheus-config` (with no file extension) and running the following command:
+
+```
+kubectl create configmap ama-metrics-prometheus-config --from-file=prometheus-config -n kube-system
+```
+
+*Ensure that the Prometheus config file is named `prometheus-config` before running the preceding command, because the file name is used as the configmap setting name.*
+
+This will create a configmap named `ama-metrics-prometheus-config` in the `kube-system` namespace. The Azure Monitor metrics pod will then restart to apply the new config. To see if there are any issues with the config validation, processing, or merging, you can look at the `ama-metrics` pods.
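+
+For reference, the following sketch shows the general shape of the resulting configmap. The scrape config content is a placeholder; `kubectl create configmap --from-file` stores the file contents under a data key named after the file:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: ama-metrics-prometheus-config
+  namespace: kube-system
+data:
+  prometheus-config: |-
+    scrape_configs:
+      - job_name: example            # placeholder job
+        static_configs:
+          - targets: ['example-service.example-namespace.svc.cluster.local:8080']
+```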
+
+A sample of the `ama-metrics-prometheus-config` configmap is [here](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-prometheus-config-configmap.yaml).
+++
+## Next steps
+
+- [Learn more about collecting Prometheus metrics](prometheus-metrics-overview.md).
azure-monitor Prometheus Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-troubleshoot.md
+
+ Title: Troubleshoot collection of Prometheus metrics in Azure Monitor (preview)
+description: Steps that you can take if you aren't collecting Prometheus metrics as expected.
++ Last updated : 09/28/2022+++
+# Troubleshoot collection of Prometheus metrics in Azure Monitor (preview)
+
+Follow the steps in this article to determine the cause of Prometheus metrics not being collected as expected in Azure Monitor.
+
+## Pod status
+
+Check the pod status with the following command:
+
+```
+kubectl get pods -n kube-system | grep ama-metrics
+```
+
+- There should be one `ama-metrics-xxxxxxxxxx-xxxxx` replicaset pod, one `ama-metrics-ksm-*` pod, and an `ama-metrics-node-*` pod for each node on the cluster.
+- Each pod state should be `Running` and have a number of restarts equal to the number of configmap changes that have been applied.
++
+If each pod state is `Running` but one or more pods have restarts, run the following command:
+
+```
+kubectl describe pod <ama-metrics pod name> -n kube-system
+```
+
+- This provides the reason for the restarts. Pod restarts are expected if configmap changes have been made. If the reason for the restart is `OOMKilled`, the pod can't keep up with the volume of metrics. See the scale recommendations for the volume of metrics.
+
+If the pods are running as expected, the next place to check is the container logs.
+
+## Container logs
+View the container logs with the following command:
+
+```
+kubectl logs <ama-metrics pod name> -n kube-system -c prometheus-collector
+```
+
+ At startup, any initial errors are printed in red, while warnings are printed in yellow. (Viewing the colored logs requires at least PowerShell version 7 or a linux distribution.)
+
+- Verify if there's an issue with getting the authentication token:
+ - The message *No configuration present for the AKS resource* will be logged every 5 minutes.
+ * The pod will restart every 15 minutes to try again with the error: *No configuration present for the AKS resource*.
+- Verify there are no errors with parsing the Prometheus config, merging with any default scrape targets enabled, and validating the full config.
+- Verify there are no errors from MetricsExtension regarding authenticating with the Azure Monitor workspace.
+- Verify there are no errors from the OpenTelemetry collector about scraping the targets.
+
+Run the following command:
+
+```
+kubectl logs <ama-metrics pod name> -n kube-system -c addon-token-adapter
+```
+
+- This will show an error if there's an issue with authenticating with the Azure Monitor workspace.
+
+If there are no errors in the logs, the Prometheus interface can be used for debugging to verify the expected configuration and targets being scraped.
+
+## Prometheus interface
+
+Every `ama-metrics-*` pod has the Prometheus agent mode user interface available on port 9090. Port forward into either the replicaset or the daemonset to check the config, service discovery, and targets endpoints as described below. Use this to verify that the custom configs are correct, that the intended targets have been discovered for each job, and that there are no errors with scraping specific targets.
+
+Run the command `kubectl port-forward <ama-metrics pod> -n kube-system 9090`.
+
+- Open a browser to the address `127.0.0.1:9090/config`. This will have the full scrape configs. Verify all jobs are included in the config.
++
+- Go to `127.0.0.1:9090/service-discovery` to view the targets discovered by the specified service discovery object and what the relabel_configs have filtered the targets to be. For example, if metrics from a certain pod are missing, you can find out whether that pod was discovered and what its URI is. You can then use this URI when looking at the targets to see if there are any scrape errors.
++
+- Go to `127.0.0.1:9090/targets` to view all jobs, the last time the endpoint for that job was scraped, and any errors.
+
+If there are no issues and the intended targets are being scraped, you can view the exact metrics being scraped by enabling debug mode.
+
+## Debug mode
+
+The metrics addon can be configured to run in debug mode by changing the configmap setting `enabled` under `debug-mode` to `true` by following the instructions [here](prometheus-metrics-scrape-configuration.md#debug-mode). This mode can affect performance and should only be enabled for a short time for debugging purposes.
+
+When enabled, all Prometheus metrics that are scraped are hosted at port 9091. Run the following command:
+
+```
+kubectl port-forward <ama-metrics pod name> -n kube-system 9091
+```
+
+Go to `127.0.0.1:9091/metrics` in a browser to see if the metrics were scraped by the OpenTelemetry Collector. This can be done for every `ama-metrics-*` pod. If metrics aren't there, there could be an issue with the metric or label name lengths or the number of labels. See below for the service limits for Prometheus metrics.
+
+## Metric names, label names & label values
+
+Agent based scraping currently has the limitations in the following table:
+
+| Property | Limit |
+|:|:|
+| Label name length | Less than or equal to 511 characters. When this limit is exceeded for any time series in a job, the entire scrape job fails and metrics are dropped from that job before ingestion. You'll see `up=0` for that job, and the targets UI will show the reason for `up=0`. |
+| Label value length | Less than or equal to 1023 characters. When this limit is exceeded for any time series in a job, the entire scrape job fails and metrics are dropped from that job before ingestion. You'll see `up=0` for that job, and the targets UI will show the reason for `up=0`. |
+| Number of labels per timeseries | Less than or equal to 63. When this limit is exceeded for any time series in a job, the entire scrape job fails and metrics are dropped from that job before ingestion. You'll see `up=0` for that job, and the targets UI will show the reason for `up=0`. |
+| Metric name length | Less than or equal to 511 characters. When this limit is exceeded for any time series in a job, only that particular series is dropped. `MetricextensionConsoleDebugLog` will have traces for the dropped metric. |
+
+## Next steps
+
+- [Check considerations for collecting metrics at high scale](prometheus-metrics-scrape-scale.md).
azure-monitor Prometheus Rule Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-rule-groups.md
+
+ Title: Rule groups in Azure Monitor Managed Service for Prometheus (preview)
+description: Description of rule groups in Azure Monitor managed service for Prometheus, which are used for alerting and data computation.
+++ Last updated : 09/28/2022++
+# Azure Monitor managed service for Prometheus rule groups (preview)
+Rules in Prometheus act on data as it's collected. They're configured as part of a Prometheus rule group, which is stored in [Azure Monitor workspace](azure-monitor-workspace-overview.md). Rules are run sequentially in the order they're defined in the group.
++
+## Rule types
+There are two types of Prometheus rules as described in the following table.
+
+| Type | Description |
+|:|:|
+| Alert | Alert rules let you create an Azure Monitor alert based on the results of a Prometheus Query Language (PromQL) query. |
+| Recording | Recording rules allow you to pre-compute frequently needed or computationally extensive expressions and store their result as a new set of time series. Querying the precomputed result will then often be much faster than executing the original expression every time it's needed. This is especially useful for dashboards, which need to query the same expression repeatedly every time they refresh, or for use in alert rules, where multiple alert rules may be based on the same complex query. Time series created by recording rules are ingested back to your Azure Monitor workspace as new Prometheus metrics. |
+
+## View Prometheus rule groups
+You can view the rule groups and their included rules in the Azure portal by selecting **Rule groups** from the Azure Monitor workspace.
+++
+## Enable rules
+To enable or disable a rule, click on the rule in the Azure portal. Select either **Enable** or **Disable** to change its status.
++
+> [!NOTE]
+> After you disable or re-enable a rule or a rule group, it may take few minutes for the rule group list to reflect the updated status of the rule or the group.
++
+## Create Prometheus rules
+In the public preview, rule groups, recording rules and alert rules are configured using Azure Resource Manager (ARM) templates, API, and provisioning tools. This uses a new resource called **Prometheus Rule Group**. You can create and configure rule group resources where the alert rules and recording rules are defined as part of the rule group properties. Azure Prometheus rule groups are defined with a scope of a specific [Azure Monitor workspace](azure-monitor-workspace-overview.md).
++
+You can use a Resource Manager template to create and configure Prometheus rule groups, alert rules and recording rules. Resource Manager templates enable you to programmatically set up alert and recording rules in a consistent and reproducible way across your environments.
+
+The basic steps are as follows:
+
+1. Use the templates below as a JSON file that describes how to create the rule group.
+2. Deploy the template using any deployment method, such as [Azure portal](../../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../../azure-resource-manager/templates/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md), or [Rest API](../../azure-resource-manager/templates/deploy-rest.md).
+
+### Limiting rules to a specific cluster
+
+You can optionally limit the rules in a rule group to query data originating from a specific cluster, using the rule group `clusterName` property.
+Consider limiting rules to a single cluster if your monitoring workspace contains a large volume of data from multiple clusters and there's a concern that running a single set of rules on all the data may cause performance or throttling issues. By using the `clusterName` property, you can create multiple rule groups, each configured with the same rules, and limit each group to cover a different cluster.
+
+- The `clusterName` value must be identical to the `cluster` label that is added to the metrics from a specific cluster during data collection.
+- If `clusterName` is not specified for a specific rule group, the rules in the group will query all the data in the workspace from all clusters.
++
+### Template example for a Prometheus rule group
+Below is a sample template that creates a Prometheus rule group, including one recording rule and one alert rule. This creates a resource of type `Microsoft.AlertsManagement/prometheusRuleGroups`. The rules are executed in the order they appear within a group.
+
+``` json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "variables": {},
+ "resources": [
+ {
+ "name": "sampleRuleGroup",
+ "type": "Microsoft.AlertsManagement/prometheusRuleGroups",
+ "apiVersion": "2021-07-22-preview",
+ "location": "northcentralus",
+ "properties": {
+ "description": "Sample Prometheus Rule Group",
+ "scopes": [
+ "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.monitor/accounts/<azure-monitor-workspace-name>"
+ ],
+ "enabled": true,
+ "clusterName": "<myCLusterName>",
+ "interval": "PT1M",
+ "rules": [
+ {
+ "record": "instance:node_cpu_utilisation:rate5m",
+ "expression": "1 - avg without (cpu) (sum without (mode)(rate(node_cpu_seconds_total{job=\"node\", mode=~\"idle|iowait|steal\"}[5m])))",
+ "enabled": true
+ },
+ {
+ "alert": "KubeCPUQuotaOvercommit",
+ "expression": "sum(min without(resource) (kube_resourcequota{job=\"kube-state-metrics\", type=\"hard\", resource=~\"(cpu|requests.cpu)\"})) / sum(kube_node_status_allocatable{resource=\"cpu\", job=\"kube-state-metrics\"}) > 1.5",
+ "for": "PT5M",
+ "labels": {
+ "team": "prod"
+ },
+ "annotations": {
+ "description": "Cluster has overcommitted CPU resource requests for Namespaces.",
+ "runbook_url": "https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubecpuquotaovercommit",
+ "summary": "Cluster has overcommitted CPU resource requests."
+ },
+ "enabled": true,
+ "severity": 3,
+ "resolveConfiguration": {
+ "autoResolved": true,
+ "timeToResolve": "PT10M"
+ },
+ "actions": [
+ {
+ "actionGroupId": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/actiongroups/<action-group-name>"
+ }
+ ]
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+
+The following tables describe each of the properties in the rule definition.
+
+### Rule group
+The rule group will have the following properties, whether it includes alerting rule, recording rule, or both.
+
+| Name | Required | Type | Description |
+|:|:|:|:|
+| `name` | True | string | Prometheus rule group name |
+| `type` | True | string | `Microsoft.AlertsManagement/prometheusRuleGroups` |
+| `apiVersion` | True | string | `2021-07-22-preview` |
+| `location` | True | string | Resource location from regions supported in the preview |
+| `properties.description` | False | string | Rule group description |
+| `properties.scopes` | True | string[] | Target Azure Monitor workspace. Only one scope currently supported |
+| `properties.enabled` | False | boolean | Enable/disable group. Default is true. |
+| `properties.clusterName` | False | string | Apply rule to data from a specific cluster. Default is apply to all data in workspace. |
+| `properties.interval` | False | string | Group evaluation interval. Default = PT1M |
+
+### Recording rules
+The `rules` section will have the following properties for recording rules.
+
+| Name | Required | Type | Description |
+|:|:|:|:|
+| `record` | True | string | Recording rule name. This is the name that will be used for the new time series. |
+| `expression` | True | string | PromQL expression to calculate the new time series value. |
+| `enabled` | False | boolean | Enable/disable rule. Default is true. |
++
+### Alerting rules
+The `rules` section will have the following properties for alerting rules.
+
+| Name | Required | Type | Description |
+|:|:|:|:|
+| `alert` | False | string | Alert rule name |
+| `expression` | True | string | PromQL expression to evaluate. |
+| `for` | False | string | Alert firing timeout. Values - 'PT1M', 'PT5M' etc. |
+| `labels` | False | object | Labels key-value pairs (Prometheus alert rule labels). |
+| `rules.annotations` | False | object | Annotations key-value pairs to add to the alert. |
+| `enabled` | False | boolean | Enable/disable rule. Default is true. |
+| `rules.severity` | False | integer | Alert severity. 0-4, default is 3 (informational). |
+| `rules.resolveConfigurations.autoResolved` | False | boolean | When enabled, the alert is automatically resolved when the condition is no longer true. Default = true. |
+| `rules.resolveConfigurations.timeToResolve` | False | string | Alert auto resolution timeout. Default = "PT5M". |
+| `rules.action[].actionGroupId` | False | string | One or more action group resource IDs. Each is activated when an alert is fired. |
++
+## Next steps
+
+- [Use preconfigured alert rules for your Kubernetes cluster](../containers/container-insights-metric-alerts.md).
+- [Learn more about the Azure alerts](../alerts/alerts-types.md).
+- [Prometheus documentation for recording rules](https://aka.ms/azureprometheus-promio-recrules).
+- [Prometheus documentation for alerting rules](https://aka.ms/azureprometheus-promio-alertrules).
azure-monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs.md
description: Learn how to stream Azure resource logs to a Log Analytics workspac
+ Last updated 05/09/2022 - # Azure resource logs
Within the PT1H.json file, each event is stored in the following format. It uses
> [!NOTE] > Logs are written to the blob relevant to the time that the log was generated, not the time that it was received. So, at the turn of the hour, both the previous hour and current hour blobs could be receiving new writes.
-``` JSON
+```json
{"time": "2016-07-01T00:00:37.2040000Z","systemId": "46cdbb41-cb9c-4f3d-a5b4-1d458d827ff1","category": "NetworkSecurityGroupRuleCounter","resourceId": "/SUBSCRIPTIONS/s1id1234-5679-0123-4567-890123456789/RESOURCEGROUPS/TESTRESOURCEGROUP/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/TESTNSG","operationName": "NetworkSecurityGroupCounters","properties": {"vnetResourceGuid": "{12345678-9012-3456-7890-123456789012}","subnetPrefix": "10.3.0.0/24","macAddress": "000123456789","ruleName": "/subscriptions/ s1id1234-5679-0123-4567-890123456789/resourceGroups/testresourcegroup/providers/Microsoft.Network/networkSecurityGroups/testnsg/securityRules/default-allow-rdp","direction": "In","type": "allow","matchedConnections": 1988}} ```
azure-monitor Solution Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solution-office-365.md
You can remove the Office 365 management solution using the process in [Remove a
[Parameter(Mandatory=$True)][string]$WorkspaceName, [Parameter(Mandatory=$True)][string]$ResourceGroupName, [Parameter(Mandatory=$True)][string]$SubscriptionId,
- [Parameter(Mandatory=$True)][string]$OfficeTennantId
+ [Parameter(Mandatory=$True)][string]$OfficeTennantId,
+ [Parameter(Mandatory=$True)][string]$clientId,
+ [Parameter(Mandatory=$True)][string]$xms_client_tenant_Id
) $line='#-'
You can remove the Office 365 management solution using the process in [Remove a
$WorkspaceLocation= $Workspace.Location # Client ID for Azure PowerShell
- $clientId = "1950a258-227b-4e31-a9cf-717495945fc2"
# Set redirect URI for Azure PowerShell $redirectUri = "urn:ietf:wg:oauth:2.0:oob" $domain='login.microsoftonline.com' $adTenant = $Subscription[0].Tenant.Id $authority = "https://login.windows.net/$adTenant";
- $ARMResource ="https://management.azure.com/";
- $xms_client_tenant_Id ='55b65fb5-b825-43b5-8972-c8b6875867c1'
+ $ARMResource ="https://management.azure.com/";
switch ($WorkspaceLocation) { "USGov Virginia" {
azure-monitor Troubleshoot Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/troubleshoot-workbooks.md
+
+ Title: Troubleshooting Azure Monitor workbook-based insights
+description: Provides troubleshooting guidance for Azure Monitor workbook-based insights for services like Azure Key Vault, Azure Cosmos DB, Azure Storage, and Azure Cache for Redis.
++ Last updated : 06/17/2020++
+# Troubleshooting workbook-based insights
+
+This article will help you with the diagnosis and troubleshooting of some of the common issues you may encounter when using Azure Monitor workbook-based insights.
++
+## Why can I only see 200 resources
+
+The number of selected resources has a limit of 200, regardless of the number of subscriptions that are selected.
+
+## What happens when I select a recently pinned tile in the dashboard
+
+* If you select anywhere on the tile, it takes you to the tab the tile was pinned from. For example, if you pin a graph from the "Overview" tab, selecting that tile in the dashboard opens that default view. However, if you pin a graph from your own saved copy, it opens your saved copy's view.
+* The filter icon in the top left of the tile opens the "Configure tile settings" tab.
+* The ellipsis icon in the top right gives you the options to "Customize tile data", "customize", "refresh" and "remove from dashboard".
+
+## What happens when I save a workbook
+
+* When you save a workbook, you create a new copy of the workbook with your edits and can change its title. Saving doesn't overwrite the original workbook; the current workbook will always be the default view.
+* An **unsaved** workbook is just the default view.
+
+## Why don't I see all my subscriptions in the portal
+
+The portal will show data only for selected subscriptions on portal launch. To change what subscriptions are selected, go to the top right and select the notebook with a filter icon. This option will show the **Directory + subscriptions** tab.
+
+![Screenshot of the section to select the directory + subscription.](./media/storage-insights-overview/fqa3.png)
+
+## What is time range
+
+Time range shows you data from a certain time frame. For example, if the time range is 24 hours, then it's showing data from the past 24 hours.
+
+## What is time granularity (time grain)
+
+Time granularity is the time difference between two data points. For example, if the time grain is set to 1 second that means metrics are collected each second.
+
+## What is the time granularity once we pin any part of the workbooks to a dashboard
+
+The default time granularity is set to automatic and currently can't be changed.
+
+## How do I change the timespan/ time range of the workbook step on my dashboard
+
+By default, the timespan/time range on your dashboard tile is set to 24 hours. To change it, select the ellipsis in the top right, select **Customize tile data**, check the "override the dashboard time settings at the tile level" box, and then pick a timespan from the dropdown menu.
+
+![Screenshot showing the ellipses and the Customize this data section in the right corner of the tile.](./media/storage-insights-overview/fqa-data-settings.png)
+
+![Screenshot of the Configure tile settings, with the timespan dropdown to change the timespan/time range.](./media/storage-insights-overview/fqa-timespan.png)
+
+## How do I change the title of the workbook or a workbook step I pinned to a dashboard
+
+The title of the workbook or workbook step that is pinned to a dashboard retains the same name it had in the workbook. To change the title, you must save your own copy of the workbook. Then you'll be able to name the workbook before you press save.
+
+![Screenshot showing the save icon at the top of the workbook to save a copy of the workbook and to change the name.](./media/storage-insights-overview/fqa-change-workbook-name.png)
+
+To change the name of a step in your saved workbook, select edit under the step and then select the gear at the bottom of settings.
+
+![Screenshot of the edit icon at the bottom of a workbook.](./media/storage-insights-overview/fqa-edit.png)
+![Screenshot of the settings icon at the bottom of a workbook.](./media/storage-insights-overview/fqa-change-name.png)
+
+## Next steps
+
+Learn more about the scenarios workbooks are designed to support, how to author new and customize existing reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../visualize/workbooks-overview.md).
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Title: Configure Basic Logs in Azure Monitor (Preview)
+ Title: Configure Basic Logs in Azure Monitor
description: Configure a table for Basic Logs in Azure Monitor. Previously updated : 05/15/2022 Last updated : 10/01/2022
-# Configure Basic Logs in Azure Monitor (Preview)
+# Configure Basic Logs in Azure Monitor
-Setting a table's [log data plan](log-analytics-workspace-overview.md#log-data-plans-preview) to *Basic Logs* lets you save on the cost of storing high-volume verbose logs you use for debugging, troubleshooting and auditing, but not for analytics and alerts. This article describes how to configure Basic Logs for a particular table in your Log Analytics workspace.
+Setting a table's [log data plan](log-analytics-workspace-overview.md#log-data-plans) to *Basic Logs* lets you save on the cost of storing high-volume verbose logs you use for debugging, troubleshooting and auditing, but not for analytics and alerts. This article describes how to configure Basic Logs for a particular table in your Log Analytics workspace.
> [!IMPORTANT] > You can switch a table's plan once a week. The Basic Logs feature is not available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers).
Setting a table's [log data plan](log-analytics-workspace-overview.md#log-data-p
By default, all tables in your Log Analytics are Analytics tables, and available for query and alerts. You can currently configure the following tables for Basic Logs: -- All tables created with the [Data Collection Rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md) -- [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) -- Used in [Container Insights](../containers/container-insights-overview.md) and includes verbose text-based log records.-- [AppTraces](/azure/azure-monitor/reference/tables/apptraces) -- A freeform Application Insights traces.-- [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/ContainerAppConsoleLogs) -- Logs generated by Container Apps, within a Container App Environment.
+- All tables created with or converted to the [Data Collection Rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md)
+- [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) - Used in [Container Insights](../containers/container-insights-overview.md) and includes verbose text-based log records.
+- [AppTraces](/azure/azure-monitor/reference/tables/apptraces) - Freeform Application Insights traces.
+- [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/ContainerAppConsoleLogs) - Logs generated by Container Apps, within a Container App environment.
> [!NOTE] > Tables created with the [Data Collector API](data-collector-api.md) do not support Basic Logs. - ## Set table configuration # [Portal](#tab/portal-1) To configure a table for Basic Logs or Analytics Logs in the Azure portal:
-1. From the **Log Analytics workspaces** menu, select **Tables (preview)**.
+1. From the **Log Analytics workspaces** menu, select **Tables**.
- The **Tables (preview)** screen lists all of the tables in the workspace.
+ The **Tables** screen lists all of the tables in the workspace.
1. Select the context menu for the table you want to configure and select **Manage table**.
PATCH https://management.azure.com/subscriptions/<subscriptionId>/resourcegroups
This example configures the `ContainerLogV2` table for Basic Logs.
-Container Insights uses ContainerLog by default, to switch to using ContainerLogV2, please follow these [instructions](../containers/container-insights-logging-v2.md) before attempting to convert the table to Basic Logs.
+Container Insights uses ContainerLog by default. To switch to using ContainerLogV2 for Container Insights, [enable the ContainerLogV2 schema](../containers/container-insights-logging-v2.md) before you convert the table to Basic Logs.
**Sample request**
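
As a hedged sketch, the request sets the table's plan to `Basic`. The subscription, resource group, and workspace names below are placeholders, and the API version matches the preview Tables API used elsewhere in this article:

```http
PATCH https://management.azure.com/subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/tables/ContainerLogV2?api-version=2021-12-01-preview

{
    "properties": {
        "plan": "Basic"
    }
}
```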
Alternatively:
Basic Logs tables have a unique icon:
- ![Screenshot of the Basic Logs table icon in the table list.](./media/basic-logs-configure/table-icon.png#lightbox)
+ :::image type="content" source="media/basic-logs-configure/table-icon.png" alt-text="Screenshot of the Basic Logs table icon in the table list." lightbox="media/basic-logs-configure/table-icon.png":::
You can also hover over a table name for the table information view, which indicates whether the table is configured as Basic Logs:
-
- ![Screenshot of the Basic Logs table indicator in the table details.](./media/basic-logs-configure/table-info.png#lightbox)
+ :::image type="content" source="media/basic-logs-configure/table-info.png" alt-text="Screenshot of the Basic Logs table indicator in the table details." lightbox="media/basic-logs-configure/table-info.png":::
+
# [API](#tab/api-2) To check the configuration of a table, call the **Tables - Get** API:
GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{
|Name | Type | Description | | | | | |properties.plan | string | The table plan. Either "Analytics" or "Basic". |
-|properties.retentionInDays | integer | The table's data retention in days. In _Basic Logs_, the value is 8 days, fixed. In _Analytics Logs_, between 7 and 730.|
+|properties.retentionInDays | integer | The table's data retention in days. In _Basic Logs_, the value is 8 days, fixed. In _Analytics Logs_, the value is between 7 and 730.|
|properties.totalRetentionInDays | integer | The table's data retention including Archive period| |properties.archiveRetentionInDays|integer|The table's archive period (read-only, calculated).| |properties.lastPlanModifiedDate|String|Last time when plan was set for this table. Null if no change was ever done from the default settings (read-only)
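
To make these properties concrete, a response for a table configured for Basic Logs might contain a properties section like this trimmed, illustrative sketch (the values are assumptions):

```json
{
  "properties": {
    "plan": "Basic",
    "retentionInDays": 8,
    "totalRetentionInDays": 38,
    "archiveRetentionInDays": 30,
    "lastPlanModifiedDate": "2022-10-01T00:00:00Z"
  }
}
```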
Basic Logs tables retain data for eight days. When you change an existing table'
## Next steps -- [Learn more about the different log plans.](log-analytics-workspace-overview.md#log-data-plans-preview)
+- [Learn more about the different log plans.](log-analytics-workspace-overview.md#log-data-plans)
- [Query data in Basic Logs.](basic-logs-query.md)
azure-monitor Basic Logs Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-query.md
Title: Query data from Basic Logs in Azure Monitor (Preview)
+ Title: Query data from Basic Logs in Azure Monitor
description: Create a log query using tables configured for Basic logs in Azure Monitor. Previously updated : 01/27/2022 Last updated : 10/01/2022
-# Query Basic Logs in Azure Monitor (Preview)
+# Query Basic Logs in Azure Monitor
Basic Logs tables reduce the cost of ingesting high-volume verbose logs and let you query the data they store using a limited set of log queries. This article explains how to query data from Basic Logs tables.
-For more information, see [Azure log data plans](log-analytics-workspace-overview.md#log-data-plans-preview) and [Configure a table for Basic Logs](basic-logs-configure.md).
+For more information, see [Azure log data plans](log-analytics-workspace-overview.md#log-data-plans) and [Configure a table for Basic Logs](basic-logs-configure.md).
> [!NOTE]
You can run two concurrent queries per user.
### Purge You can't [purge personal data](personal-data-mgmt.md#exporting-and-deleting-personal-data) from Basic Logs tables. - ## Run a query on a Basic Logs table Creating a query using Basic Logs is the same as any other query in Log Analytics. See [Get started with Azure Monitor Log Analytics](./log-analytics-tutorial.md) if you aren't familiar with this process.
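
For example, a minimal query against a table that supports Basic Logs, such as `ContainerLogV2`, might look like the following sketch (the column names are assumptions; check your table's schema):

```kusto
// Illustrative sketch: filter a Basic Logs table to a single container's records.
ContainerLogV2
| where ContainerName == "my-container"
| project TimeGenerated, Computer, LogMessage
```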
The charge for a query on Basic Logs is based on the amount of data the query sc
For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). > [!NOTE]
-> During the preview period, there is no cost for log queries on Basic Logs.
+> Billing of queries on Basic Logs is not yet enabled. You can query Basic Logs for free until early 2023.
## Next steps -- [Learn more about Basic Logs and the different log plans.](log-analytics-workspace-overview.md#log-data-plans-preview)
+- [Learn more about Basic Logs and the different log plans.](log-analytics-workspace-overview.md#log-data-plans)
- [Configure a table for Basic Logs.](basic-logs-configure.md) - [Use a search job to retrieve data from Basic Logs into Analytics Logs where it can be queried multiple times.](search-jobs.md)
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
Billing for the commitment tiers is done per workspace on a daily basis. If the
Azure Commitment Discounts such as those received from [Microsoft Enterprise Agreements](https://www.microsoft.com/licensing/licensing-programs/enterprise) are applied to Azure Monitor Logs Commitment Tier pricing just as they are to Pay-As-You-Go pricing (whether the usage is being billed per workspace or per dedicated cluster). > [!TIP]
-> The **Usage and estimated costs** menu item for each Log Analytics workspace hows an estimate of your monthly charges at each commitment level. You should periodically review this information to determine if you can reduce your charges by moving to another tier. See [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) for information on this view.
+> The **Usage and estimated costs** menu item for each Log Analytics workspace shows an estimate of your monthly charges at each commitment level. You should periodically review this information to determine if you can reduce your charges by moving to another tier. See [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) for information on this view.
## Dedicated clusters An [Azure Monitor Logs dedicated cluster](logs-dedicated-clusters.md) is a collection of workspaces in a single managed Azure Data Explorer cluster. Dedicated clusters support advanced features such as [customer-managed keys](customer-managed-keys.md) and use the same commitment tier pricing model as workspaces although they must have a commitment level of at least 500 GB/day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. There is no Pay-As-You-Go option for clusters.
azure-monitor Create Pipeline Datacollector Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/create-pipeline-datacollector-api.md
Title: Use Data Collector API to create a data pipeline description: You can use the Azure Monitor HTTP Data Collector API to add POST JSON data to the Log Analytics workspace from any client that can call the REST API. This article describes how to upload data stored in files in an automated way. + Last updated 08/09/2018- # Create a data pipeline with the Data Collector API
In this example, we parse a CSV file, but any other file type can be similarly p
![Azure Functions example project](./media/create-pipeline-datacollector-api/functions-example-project-01.png)
- ``` JSON
+ ```json
{ "frameworks": { "net46":{
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
Title: Configure data retention and archive in Azure Monitor Logs (Preview)
description: Configure archive settings for a table in a Log Analytics workspace in Azure Monitor. Previously updated : 01/27/2022 Last updated : 10/01/2022 # Customer intent: As an Azure account administrator, I want to set data retention and archive policies to save retention costs.
-# Configure data retention and archive policies in Azure Monitor Logs (Preview)
+# Configure data retention and archive policies in Azure Monitor Logs
Retention policies define when to remove or archive data in a [Log Analytics workspace](log-analytics-workspace-overview.md). Archiving lets you keep older, less used data in your workspace at a reduced cost. This article describes how to configure data retention and archiving.
If you change the archive settings on a table with existing data, the relevant d
You can access archived data by [running a search job](search-jobs.md) or [restoring archived logs](restore.md). > [!NOTE]
-> The archive feature is currently in public preview and can only be set at the table level, not at the workspace level.
+> The archive period can only be set at the table level, not at the workspace level.
## Configure the default workspace retention policy You can set the workspace default retention policy in the Azure portal to 30, 31, 60, 90, 120, 180, 270, 365, 550, and 730 days. You can set a different policy for specific tables by [configuring retention and archive policy at the table level](#set-retention-and-archive-policy-by-table). If you're on the *free* tier, you'll need to upgrade to the paid tier to change the data retention period.
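
Outside the portal, the same workspace default can be set from the command line. The following is a hedged sketch only; the resource group and workspace names are placeholders:

```azurecli
# Illustrative sketch: set the workspace default interactive retention to 90 days.
az monitor log-analytics workspace update \
  --resource-group <resource-group> \
  --workspace-name <workspace-name> \
  --retention-time 90
```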
You can keep data in interactive retention between 4 and 730 days. You can set t
To set the retention and archive duration for a table in the Azure portal:
-1. From the **Log Analytics workspaces** menu, select **Tables (preview)**.
+1. From the **Log Analytics workspaces** menu, select **Tables**.
- The **Tables (preview)** screen lists all of the tables in the workspace.
+ The **Tables** screen lists all of the tables in the workspace.
1. Select the context menu for the table you want to configure and select **Manage table**.
az monitor log-analytics workspace table update --subscription ContosoSID --reso
# [Portal](#tab/portal-2)
-To view the retention and archive duration for a table in the Azure portal, from the **Log Analytics workspaces** menu, select **Tables (preview)**.
+To view the retention and archive duration for a table in the Azure portal, from the **Log Analytics workspaces** menu, select **Tables**.
-The **Tables (preview)** screen shows the interactive retention and archive period for all of the tables in the workspace.
+The **Tables** screen shows the interactive retention and archive period for all of the tables in the workspace.
:::image type="content" source="media/data-retention-configure/log-analytics-view-table-retention-archive.png" lightbox="media/data-retention-configure/log-analytics-view-table-retention-archive.png" alt-text="Screenshot showing the Manage table button for one of the tables in a workspace.":::
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
Title: Log Analytics workspace overview
description: Overview of Log Analytics workspace, which stores data for Azure Monitor Logs. na Previously updated : 05/15/2022 Last updated : 10/01/2022 # Log Analytics workspace overview
Each workspace contains multiple tables that are organized into separate columns
## Cost
-There's no direct cost for creating or maintaining a workspace. You're charged for the data sent to it, which is also known as data ingestion. You're charged for how long that data is stored, which is otherwise known as data retention. These costs might vary based on the data plan of each table, as described in [Log data plans (preview)](#log-data-plans-preview).
+There's no direct cost for creating or maintaining a workspace. You're charged for the data sent to it, which is also known as data ingestion. You're charged for how long that data is stored, which is otherwise known as data retention. These costs might vary based on the data plan of each table, as described in [Log data plans](#log-data-plans).
For information on pricing, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). For guidance on how to reduce your costs, see [Azure Monitor best practices - Cost management](../best-practices-cost.md). If you're using your Log Analytics workspace with services other than Azure Monitor, see the documentation for those services for pricing information.
-## Log data plans (preview)
+## Log data plans
-By default, all tables in a workspace are **Analytics** tables, which are available to all features of Azure Monitor and any other services that use the workspace. You can configure certain tables as **Basic Logs (preview)** to reduce the cost of storing high-volume verbose logs you use for debugging, troubleshooting, and auditing, but not for analytics and alerts. Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features.
+By default, all tables in a workspace are **Analytics** tables, which are available to all features of Azure Monitor and any other services that use the workspace. You can configure [certain tables as **Basic Logs**](basic-logs-configure.md#which-tables-support-basic-logs) to reduce the cost of storing high-volume verbose logs you use for debugging, troubleshooting, and auditing, but not for analytics and alerts. Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features.
-The following table summarizes the two plans. For more information on Basic Logs and how to configure them, see [Configure Basic Logs in Azure Monitor (preview)](basic-logs-configure.md).
+The following table summarizes the two plans. For more information on Basic Logs and how to configure them, see [Configure Basic Logs in Azure Monitor](basic-logs-configure.md).
> [!NOTE]
-> Basic Logs are in public preview. You can currently work with Basic Logs tables in the Azure portal and use a limited number of other components. The Basic Logs feature isn't available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers).
+> The Basic Logs feature isn't available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers).
| Category | Analytics Logs | Basic Logs | |:|:|:|
For example, you might have [diagnostic settings](../essentials/diagnostic-setti
Data in each table in a [Log Analytics workspace](log-analytics-workspace-overview.md) is retained for a specified period of time after which it's either removed or archived with a reduced retention fee. Set the retention time to balance your requirement for having data available with reducing your cost for data retention.
-> [!NOTE]
-> Archive is currently in public preview.
- To access archived data, you must first retrieve data from it in an Analytics Logs table by using one of the following methods: | Method | Description |
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
Title: Manage access to Log Analytics workspaces description: This article explains how you can manage access to data stored in a Log Analytics workspace in Azure Monitor by using resource, workspace, or table-level permissions. ++ Previously updated : 03/22/2022 Last updated : 10/06/2022
The factors that define the data you can access are described in the following t
| [Access mode](#access-mode) | Method used to access the workspace. Defines the scope of the data available and the access control mode that's applied. | | [Access control mode](#access-control-mode) | Setting on the workspace that defines whether permissions are applied at the workspace or resource level. | | [Azure role-based access control (RBAC)](#azure-rbac) | Permissions applied to individuals or groups of users for the workspace or resource sending data to the workspace. Defines what data you have access to. |
-| [Table-level Azure RBAC](#table-level-azure-rbac) | Optional permissions that define specific data types in the workspace that you can access. Apply to all users no matter your access mode or access control mode. |
+| [Table-level Azure RBAC](#set-table-level-read-access) | Optional permissions that define specific data types in the workspace that you can access. Apply to all users no matter your access mode or access control mode. |
## Access mode
The following table summarizes the access modes:
|:|:|:| | Who is each model intended for? | Central administration.<br>Administrators who need to configure data collection and users who need access to a wide variety of resources. Also currently required for users who need to access logs for resources outside of Azure. | Application teams.<br>Administrators of Azure resources being monitored. Allows them to focus on their resource without filtering. | | What does a user require to view logs? | Permissions to the workspace.<br>See "Workspace permissions" in [Manage access using workspace permissions](./manage-access.md#azure-rbac). | Read access to the resource.<br>See "Resource permissions" in [Manage access using Azure permissions](./manage-access.md#azure-rbac). Permissions can be inherited from the resource group or subscription or directly assigned to the resource. Permission to the logs for the resource will be automatically assigned. The user doesn't require access to the workspace.|
-| What is the scope of permissions? | Workspace.<br>Users with access to the workspace can query all logs in the workspace from tables they have permissions to. See [Table access control](./manage-access.md#table-level-azure-rbac). | Azure resource.<br>Users can query logs for specific resources, resource groups, or subscriptions they have access to in any workspace, but they can't query logs for other resources. |
+| What is the scope of permissions? | Workspace.<br>Users with access to the workspace can query all logs in the workspace from tables they have permissions to. See [Set table-level read access](./manage-access.md#set-table-level-read-access). | Azure resource.<br>Users can query logs for specific resources, resource groups, or subscriptions they have access to in any workspace, but they can't query logs for other resources. |
| How can a user access logs? | On the **Azure Monitor** menu, select **Logs**.<br><br>Select **Logs** from **Log Analytics workspaces**.<br><br>From Azure Monitor [workbooks](../best-practices-analysis.md#workbooks). | Select **Logs** on the menu for the Azure resource. Users will have access to data for that resource.<br><br>Select **Logs** on the **Azure Monitor** menu. Users will have access to data for all resources they have access to.<br><br>Select **Logs** from **Log Analytics workspaces**. Users will have access to data for all resources they have access to.<br><br>From Azure Monitor [workbooks](../best-practices-analysis.md#workbooks). | ## Access control mode The *access control mode* is a setting on each workspace that defines how permissions are determined for the workspace.
-* **Require workspace permissions**. This control mode doesn't allow granular Azure RBAC. To access the workspace, the user must be [granted permissions to the workspace](#azure-rbac) or to [specific tables](#table-level-azure-rbac).
+* **Require workspace permissions**. This control mode doesn't allow granular Azure RBAC. To access the workspace, the user must be [granted permissions to the workspace](#azure-rbac) or to [specific tables](#set-table-level-read-access).
If a user accesses the workspace in [workspace-context mode](#access-mode), they have access to all data in any table they've been granted access to. If a user accesses the workspace in [resource-context mode](#access-mode), they have access to only data for that resource in any table they've been granted access to.
Grant a user access to log data from their resources and read all Azure AD sign-
- `Microsoft.OperationalInsights/workspaces/query/ComputerGroup/read`: Required to be able to use Update Management solutions - Grant users the following permissions to their resources: `*/read`, assigned to the Reader role, or `Microsoft.Insights/logs/*/read`
-## Table-level Azure RBAC
-
-By using table-level Azure RBAC, you can define more granular control to data in a Log Analytics workspace by defining specific data types that are accessible only to a specific set of users.
-
-Implement table access control with [Azure custom roles](../../role-based-access-control/custom-roles.md) to grant access to specific [tables](../logs/data-platform-logs.md) in the workspace. These roles are applied to workspaces with either workspace-context or resource-context [access control modes](#access-control-mode) regardless of the user's [access mode](#access-mode).
-
-Create a [custom role](../../role-based-access-control/custom-roles.md) with the following actions to define access to a particular table:
-
-* Include the **Actions** section of the role definition. To subtract access from the allowed **Actions**, include it in the **NotActions** section.
-* Use `Microsoft.OperationalInsights/workspaces/query/*` to specify all tables.
-
-### Examples
+## Set table-level read access
+
+To create a role that lets users or groups read data from specific tables in a workspace:
+
+1. Create a custom role that grants read access to table data, based on the built-in Azure Monitor Logs **Reader** role:
+
+    1. Navigate to your workspace and select **Access control (IAM)** > **Roles**.
+
+ 1. Right-click the **Reader** role and select **Clone**.
+
+ :::image type="content" source="media/manage-access/access-control-clone-role.png" alt-text="Screenshot that shows the Roles tab of the Access control screen with the clone button highlighted for the Reader role." lightbox="media/manage-access/access-control-clone-role.png":::
+
+ This opens the **Create a custom role** screen.
+
+ 1. On the **Basics** tab of the screen enter a **Custom role name** value and, optionally, provide a description.
+
+ :::image type="content" source="media/manage-access/manage-access-create-custom-role.png" alt-text="Screenshot that shows the Basics tab of the Create a custom role screen with the Custom role name and Description fields highlighted." lightbox="media/manage-access/manage-access-create-custom-role.png":::
+
+ 1. Select the **JSON** tab > **Edit** and edit the `"actions"` section to include only `Microsoft.OperationalInsights/workspaces/query/read` and select **Save**.
+
+ :::image type="content" source="media/manage-access/manage-access-create-custom-role-json.png" alt-text="Screenshot that shows the JSON tab of the Create a custom role screen with the actions section of the JSON file highlighted." lightbox="media/manage-access/manage-access-create-custom-role-json.png":::
+
+ 1. Select **Review + Create** at the bottom of the screen, and then **Create** on the next page.
+ 1. Copy the custom role ID:
+    1. Select **Access control (IAM)** > **Roles**.
+ 1. Right-click on your custom role and select **Edit**.
+
+ This opens the **Custom Role** screen.
+
+ :::image type="content" source="media/manage-access/manage-access-role-definition-id.png" alt-text="Screenshot that shows the JSON tab of the Custom Role screen with the ID field highlighted." lightbox="media/manage-access/manage-access-role-definition-id.png":::
+
+ 1. Select **JSON** and copy the `id` field.
+
+ You'll need the `/providers/Microsoft.Authorization/roleDefinitions/<definition_id>` value when you call the https://management.azure.com/batch?api-version=2020-06-01 POST API.
+
+1. Assign your custom role to the relevant users or groups:
+    1. Select **Access control (IAM)** > **Add** > **Add role assignment**.
+
+ :::image type="content" source="media/manage-access/manage-access-add-role-assignment-button.png" alt-text="Screenshot that shows the Access control screen with the Add role assignment button highlighted." lightbox="media/manage-access/manage-access-add-role-assignment-button.png":::
+
+ 1. Select the custom role you created and select **Next**.
+
+ :::image type="content" source="media/manage-access/manage-access-add-role-assignment-screen.png" alt-text="Screenshot that shows the Add role assignment screen with a custom role and the Next button highlighted." lightbox="media/manage-access/manage-access-add-role-assignment-screen.png":::
+
+
+ This opens the **Members** tab of the **Add custom role assignment** screen.
+
+ 1. Click **+ Select members** to open the **Select members** screen.
+
+ :::image type="content" source="media/manage-access/manage-access-add-role-assignment-select-members.png" alt-text="Screenshot that shows the Select members screen." lightbox="media/manage-access/manage-access-add-role-assignment-select-members.png":::
+
+ 1. Search for and select the relevant user or group and click **Select**.
+ 1. Select **Review and assign**.
+
+1. Grant the users or groups read access to specific tables in a workspace by calling the https://management.azure.com/batch?api-version=2020-06-01 POST API and sending the following details in the request body:
+
+ ```json
+ {
+ "requests": [
+ {
+ "content": {
+ "Id": "<GUID_1>",
+ "Properties": {
+ "PrincipalId": "<User_object_ID>",
+ "PrincipalType": "User",
+ "RoleDefinitionId": "<custom_role_ID>",
+ "Scope": "/subscriptions/<subscription_ID>/resourceGroups/<resource_group_name>/providers/Microsoft.OperationalInsights/workspaces/<workspace_name>/Tables/<table_name>",
+ "Condition": null,
+ "ConditionVersion": null
+ }
+ },
+ "httpMethod": "PUT",
+ "name": "<GUID_2>",
+ "requestHeaderDetails": {
+ "commandName": "Microsoft_Azure_AD."
+ },
+ "url": "/subscriptions/<subscription_ID>/resourceGroups/<resource_group_name>/providers/Microsoft.OperationalInsights/workspaces/<workspace_name>/Tables/<table_name>/providers/Microsoft.Authorization/roleAssignments/<GUID_1>?api-version=2020-04-01-preview"
+ }
+ ]
+ }
+ ```
+
+ Where:
+    - You can generate a GUID for `<GUID_1>` and `<GUID_2>` using any GUID generator.
+ - `<custom_role_ID>` is the `/providers/Microsoft.Authorization/roleDefinitions/<definition_id>` value you copied earlier.
+ - `<subscription_ID>` is the ID of the subscription related to the workspace.
+ - `<resource_group_name>` is the resource group of the workspace.
+ - `<workspace_name>` is the name of the workspace.
+ - `<table_name>` is the name of the table to which you want to assign the user or group permission to read data from.
+
+### Legacy method of setting table-level read access
+
+[Azure custom roles](../../role-based-access-control/custom-roles.md) let you grant access to specific tables in the workspace, although we recommend defining [table-level read access](#set-table-level-read-access) as described above.
+
+Azure custom roles apply to workspaces with either workspace-context or resource-context [access control modes](#access-control-mode) regardless of the user's [access mode](#access-mode).
+
+To define access to a particular table, create a [custom role](../../role-based-access-control/custom-roles.md):
+
+* Set the user permissions in the **Actions** section of the role definition.
+* Use `Microsoft.OperationalInsights/workspaces/query/*` to grant access to all tables.
+* To exclude access to specific tables when you use a wildcard in **Actions**, list the excluded tables in the **NotActions** section of the role definition.
+
+#### Examples
Here are examples of custom role actions to grant and deny access to specific tables.
Grant access to all tables except the _SecurityAlert_ table:
], ```
-### Custom logs
+#### Custom tables
- Custom logs are tables created from data sources such as [text logs](../agents/data-sources-custom-logs.md) and the [HTTP Data Collector API](data-collector-api.md). The easiest way to identify the type of log is by checking the tables listed under [Custom Logs in the log schema](./log-analytics-tutorial.md#view-table-information).
+ Custom tables store data you collect from data sources such as [text logs](../agents/data-sources-custom-logs.md) and the [HTTP Data Collector API](data-collector-api.md). To identify the table type, [view table information in Log Analytics](./log-analytics-tutorial.md#view-table-information).
> [!NOTE] > Tables created by the [Logs ingestion API](../essentials/../logs/logs-ingestion-api-overview.md) don't yet support table-level RBAC.
Some custom logs come from sources that aren't directly associated to a specific
For example, if a specific firewall is sending custom logs, create a resource group called *MyFireWallLogs*. Make sure that the API requests contain the resource ID of *MyFireWallLogs*. The firewall log records are then accessible only to users who were granted access to *MyFireWallLogs* or those users with full workspace access.
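
As an illustrative sketch of how a Data Collector API request carries that association, the resource ID can be supplied in the `x-ms-AzureResourceId` header. The endpoint, log type, IDs, and signature below are placeholders:

```http
POST https://<workspace-id>.ods.opinsights.azure.com/api/logs?api-version=2016-04-01
Content-Type: application/json
Log-Type: MyFireWallLogs
x-ms-AzureResourceId: /subscriptions/<subscription-id>/resourceGroups/MyFireWallLogs
Authorization: SharedKey <workspace-id>:<encoded-signature>
```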
-### Considerations
+#### Considerations
- If a user is granted global read permission with the standard Reader or Contributor roles that include the _\*/read_ action, it will override the per-table access control and give them access to all log data. - If a user is granted per-table access but no other permissions, they can access log data from the API but not from the Azure portal. To provide access from the Azure portal, use Log Analytics Reader as its base role.
azure-monitor Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/restore.md
Title: Restore logs in Azure Monitor (Preview)
+ Title: Restore logs in Azure Monitor
description: Restore a specific time range of data in a Log Analytics workspace for high-performance queries. Previously updated : 01/19/2022 Last updated : 10/01/2022
-# Restore logs in Azure Monitor (preview)
+# Restore logs in Azure Monitor
The restore operation makes a specific time range of data in a table available in the hot cache for high-performance queries. This article describes how to restore data, query that data, and then dismiss the data when you're done. ## When to restore logs
The charge for maintaining restored logs is calculated based on the volume of da
For example, if your table holds 500 GB a day and you restore 10 days of data, you'll be charged for 5000 GB a day until you dismiss the restored data. > [!NOTE]
-> There is no charge for restored data during the preview period.
+> Billing of restore is not yet enabled. Restore can be used for free until February 1, 2023.
For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
Title: Search jobs in Azure Monitor (Preview)
+ Title: Run search jobs in Azure Monitor
description: Search jobs are asynchronous log queries in Azure Monitor that make results available as a table for further analytics. Previously updated : 01/27/2022- Last updated : 10/01/2022+
+#customer-intent: As a data scientist or workspace administrator, I want an efficient way to search through large volumes of data in a table, including archived and basic logs.
-# Search jobs in Azure Monitor (preview)
+# Run search jobs in Azure Monitor
-Search jobs are asynchronous queries that fetch records into a new search table within your workspace for further analytics. The search job uses parallel processing and can run for hours across extremely large datasets. This article describes how to create a search job and how to query its resulting data.
+Search jobs are asynchronous queries that fetch records into a new search table within your workspace for further analytics. The search job uses parallel processing and can run for hours across large datasets. This article describes how to create a search job and how to query its resulting data.
> [!NOTE]
-> The search job feature is currently in public preview and isn't supported in:
-> - Workspaces with [customer-managed keys](customer-managed-keys.md).
-> - The China East 2 region.
+> The search job feature is currently not supported in workspaces with [customer-managed keys](customer-managed-keys.md) and in the China East 2 region.
## When to use search jobs
-Use a search job when the log query timeout of 10 minutes isn't enough time to search through large volumes of data or when you're running a slow query.
+Use a search job when the log query timeout of 10 minutes isn't sufficient to search through large volumes of data or when you're running a slow query.
Search jobs also let you retrieve records from [Archived Logs](data-retention-archive.md) and [Basic Logs](basic-logs-configure.md) tables into a new log table you can use for queries. In this way, running a search job can be an alternative to:
Search jobs also let you retrieve records from [Archived Logs](data-retention-ar
Use restore when you have a temporary need to run many queries on a large volume of data. - Querying Basic Logs directly and paying for each query.<br/>
- To decide which alternative is more cost-effective, compare the cost of querying Basic Logs with the cost of performing a search job and storing the resulting data based on your needs.
+ To determine which alternative is more cost-effective, compare the cost of querying Basic Logs with the cost of running a search job and storing the search job results.
## What does a search job do? A search job sends its results to a new table in the same workspace as the source data. The results table is available as soon as the search job begins, but it may take time for results to begin to appear.
-The search job results table is a [Log Analytics](log-analytics-workspace-overview.md#log-data-plans-preview) table that is available for log queries and other Azure Monitor features that use tables in a workspace. The table uses the [retention value](data-retention-archive.md) set for the workspace, but you can modify this value after the table is created.
+The search job results table is a [Log Analytics](log-analytics-workspace-overview.md#log-data-plans) table that is available for log queries and other Azure Monitor features that use tables in a workspace. The table uses the [retention value](data-retention-archive.md) set for the workspace, but you can modify this value after the table is created.
-The search results table schema is based on the source table schema and the specified query. The following additional columns help you track the source records:
+The search results table schema is based on the source table schema and the specified query. The following columns in the results table help you track the source records:
| Column | Value | |:|:|
The search results table schema is based on the source table schema and the spec
Queries on the results table appear in [log query auditing](query-audit.md) but not the initial search job.
-## Create a search job
+## Run a search job
+
+Run a search job to fetch records from large datasets into a new search results table in your workspace.
+
+> [!TIP]
+> You incur charges for running a search job. Therefore, write and optimize your query in interactive query mode before running the search job.
+
+### [Portal](#tab/portal-1)
+
+To run a search job, in the Azure portal:
+
+1. From the **Log Analytics workspace** menu, select **Logs**.
+1. Select the ellipsis menu on the right-hand side of the screen and toggle **Search job mode** on.
+
+ :::image type="content" source="media/search-job/switch-to-search-job-mode.png" alt-text="Screenshot of the Logs screen with the Search job mode switch highlighted." lightbox="media/search-job/switch-to-search-job-mode.png":::
+
+    Azure Monitor Logs IntelliSense takes the [KQL query limitations in search job mode](#kql-query-limitations) into account to help you write your search job query.
+
+1. Specify the search job date range using the time picker.
+1. Type the search job query and select the **Search Job** button.
+
+ Azure Monitor Logs prompts you to provide a name for the result set table and informs you that the search job is subject to billing.
+
+ :::image type="content" source="media/search-job/run-search-job.png" alt-text="Screenshot that shows the Azure Monitor Logs prompt to provide a name for the search job results table." lightbox="media/search-job/run-search-job.png":::
+
+1. Enter a name for the search job result table and select **Run a search job**.
-# [API](#tab/api-1)
+ Azure Monitor Logs runs the search job and creates a new table in your workspace for your search job results.
+
+ :::image type="content" source="media/search-job/search-job-execution-1.png" alt-text="Screenshot that shows an Azure Monitor Logs message that the search job is running and the search job results table will be available shortly." lightbox="media/search-job/search-job-execution-1.png":::
+
+1. When the new table is ready, select **View tablename_SRCH** to view the table in Log Analytics.
+
+ :::image type="content" source="media/search-job/search-job-execution-2.png" alt-text="Screenshot that shows an Azure Monitor Logs message that the search job results table is available to view." lightbox="media/search-job/search-job-execution-2.png":::
+
+ You can see the search job results as they begin flowing into the newly created search job results table.
+
+ :::image type="content" source="media/search-job/search-job-execution-3.png" alt-text="Screenshot that shows search job results table with data." lightbox="media/search-job/search-job-execution-3.png":::
+
+ Azure Monitor Logs shows a **Search job is done** message at the end of the search job. The results table is now ready with all the records that match the search query.
+
+ :::image type="content" source="media/search-job/search-job-done.png" alt-text="Screenshot that shows an Azure Monitor Logs message that the search job is done." lightbox="media/search-job/search-job-done.png":::
+
+### [API](#tab/api-1)
To run a search job, call the **Tables - Create or Update** API. The call includes the name of the results table to be created. The name of the results table must end with *_SRCH*. ```http
PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000
Status code: 202 accepted.
-# [CLI](#tab/cli-1)
+### [CLI](#tab/cli-1)
To run a search job, run the [az monitor log-analytics workspace table search-job create](/cli/azure/monitor/log-analytics/workspace/table/search-job#az-monitor-log-analytics-workspace-table-search-job-create) command. The name of the results table, which you set using the `--name` parameter, must end with *_SRCH*.
az monitor log-analytics workspace table search-job create --subscription Contos
## Get search job status and details
+### [Portal](#tab/portal-2)
+1. From the **Log Analytics workspace** menu, select **Logs**.
+1. From the Tables tab, select **Search results** to view all search job results tables.
+
+ The icon on the search job results table displays an update indication until the search job is completed.
+
+ :::image type="content" source="media/search-job/search-results-tables.png" alt-text="Screenshot that shows the Tables tab on Logs screen in the Azure portal with the search results tables listed under Search results." lightbox="media/search-job/search-results-tables.png":::
-# [API](#tab/api-2)
+### [API](#tab/api-2)
Call the **Tables - Get** API to get the status and details of a search job: ```http
GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000
} ```
-# [CLI](#tab/cli-2)
+### [CLI](#tab/cli-2)
To check the status and details of a search job table, run the [az monitor log-analytics workspace table show](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-show) command.
az monitor log-analytics workspace table show --subscription ContosoSID --resour
-## Delete search job table
+## Delete a search job table
We recommend deleting the search job table when you're done querying the table. This reduces workspace clutter and extra charges for data retention.
+### [Portal](#tab/portal-3)
+1. From the Log Analytics workspace menu, select **Tables**.
+1. Search for the tables you want to delete by name, or by selecting **Search results** in the **Type** field.
+
+ :::image type="content" source="media/search-job/search-results-on-log-analytics-tables-screen.png" alt-text="Screenshot that shows the Tables screen for a Log Analytics workspace with the Filter by name and Type fields highlighted." lightbox="media/search-job/search-results-on-log-analytics-tables-screen.png":::
-# [API](#tab/api-3)
+1. Select the tables you want to delete, select **Delete**, and confirm the deletion by typing **yes**.
+
+ :::image type="content" source="media/search-job/delete-table.png" alt-text="Screenshot that shows the Delete Table screen for a table in a Log Analytics workspace." lightbox="media/search-job/delete-table.png":::
+
+### [API](#tab/api-3)
To delete a table, call the **Tables - Delete** API:
To delete a table, call the **Tables - Delete** API:
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/<TableName>_SRCH?api-version=2021-12-01-preview ```
-# [CLI](#tab/cli-3)
+### [CLI](#tab/cli-3)
To delete a search table, run the [az monitor log-analytics workspace table delete](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-delete) command.
Search jobs are subject to the following limitations:
When you reach the record limit, Azure aborts the job with a status of *partial success*, and the table will contain only records ingested up to that point. ### KQL query limitations
-Log queries in a search job are intended to scan very large sets of data. To support distribution and segmentation, the queries use a subset of KQL, including the operators:
+
+Search jobs are intended to scan large volumes of data in a specific table. Therefore, search job queries must always start with a table name. To enable asynchronous execution using distribution and segmentation, the query supports a subset of KQL, including the operators:
- [where](/azure/data-explorer/kusto/query/whereoperator) - [extend](/azure/data-explorer/kusto/query/extendoperator)
You can use all functions and binary operators within these operators.
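
For instance, a search job query that respects these limitations starts with a table name and uses only the supported operators. A hedged sketch (the table and column names are assumptions):

```kusto
// Illustrative sketch: scan a large time range of Syslog records for error messages.
Syslog
| where SyslogMessage has "error"
| project TimeGenerated, Computer, SyslogMessage
```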
## Pricing model The charge for a search job is based on: -- The amount of data the search job needs to scan.-- The amount of data ingested in the results table.
+- Search job execution - the amount of data the search job needs to scan.
+- Search job results - the amount of data ingested in the results table, based on the regular log data ingestion prices.
For example, if your table holds 500 GB per day and your search job scans three days of data, you'll be charged for 1500 GB of scanned data. If the job returns 1000 records, you'll be charged for ingesting those 1000 records into the results table. > [!NOTE]
-> There is no charge for search jobs during the public preview. You'll be charged only for the ingestion of the results set.
+> Search job execution is free until early 2023. Until then, you incur charges only for ingesting the search results, not for running the search job.
For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
Last updated 05/25/2022
# Design a Log Analytics workspace architecture
-While a single [Log Analytics workspace](log-analytics-workspace-overview.md) may be sufficient for many environments using Azure Monitor and Microsoft Sentinel, many organizations will create multiple workspaces to optimize costs and better meet different business requirements. This article presents a set of criteria for determining whether to use a single workspace or multiple workspaces and the configuration and placement of those workspace to meet your particular requirements while optimizing your costs.
+While a single [Log Analytics workspace](log-analytics-workspace-overview.md) may be sufficient for many environments using Azure Monitor and Microsoft Sentinel, many organizations will create multiple workspaces to optimize costs and better meet different business requirements. This article presents a set of criteria for determining whether to use a single workspace or multiple workspaces and the configuration and placement of those workspaces to meet your particular requirements while optimizing your costs.
> [!NOTE] > This article includes both Azure Monitor and Microsoft Sentinel since many customers need to consider both in their design, and most of the decision criteria applies to both. If you only use one of these services, then you can simply ignore the other in your evaluation.
By default, if a user has read access to an Azure resource, they inherit permiss
- **If you want to explicitly assign permissions for all users**, change the access control mode to *Require workspace permissions*.
-[Table-level RBAC](manage-access.md#table-level-azure-rbac)
+[Table-level RBAC](manage-access.md#set-table-level-read-access)
With table-level RBAC, you can grant or deny access to specific tables in the workspace. This allows you to implement granular permissions required for specific situations in your environment. For example, you might grant access to only specific tables collected by Sentinel to an internal auditing team. Or you might deny access to security related tables to resource owners who need operational data related to their resources.
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
Title: What is monitored by Azure Monitor description: Reference of all services and other resources monitored by Azure Monitor. + Last updated 04/05/2022
The other services and older monitoring solutions in the following table store t
|:|:| | [Azure Automation](../automation/index.yml) | Manage operating system updates and track changes on Windows and Linux computers. See [Change tracking](../automation/change-tracking/overview.md) and [Update management](../automation/update-management/overview.md). | | [Azure Information Protection](/azure/information-protection/) | Classify and optionally protect documents and emails. See [Central reporting for Azure Information Protection](/azure/information-protection/reports-aip#configure-a-log-analytics-workspace-for-the-reports). |
-| [Defender for the Cloud (was Azure Security Center)](../defender-for-cloud/defender-for-cloud-introduction.md) | Collect and analyze security events and perform threat analysis. See [Data collection in Defender for the Cloud](../defender-for-cloud/enable-data-collection.md). |
+| [Defender for the Cloud](../defender-for-cloud/defender-for-cloud-introduction.md) | Collect and analyze security events and perform threat analysis. See [Data collection in Defender for the Cloud](../defender-for-cloud/monitoring-components.md). |
| [Microsoft Sentinel](../sentinel/index.yml) | Connect to different sources including Office 365 and Amazon Web Services Cloud Trail. See [Connect data sources](../sentinel/connect-data-sources.md). | | [Microsoft Intune](/intune/) | Create a diagnostic setting to send logs to Azure Monitor. See [Send log data to storage, Event Hubs, or log analytics in Intune (preview)](/intune/fundamentals/review-logs-using-azure-monitor). | | Network [Traffic Analytics](../network-watcher/traffic-analytics.md) | Analyze Network Watcher network security group flow logs to provide insights into traffic flow in your Azure cloud. |
azure-monitor Observability Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/observability-data.md
description: Describes the
documentationcenter: '' na+ Last updated 08/18/2022
Enabling observability across today's complex computing environments running dis
[Azure Monitor](overview.md) collects and aggregates data from various sources into a common data platform where it can be used for analysis, visualization, and alerting. It provides a consistent experience on top of data from multiple sources, which gives you deep insights across all your monitored resources and even with data from other services that store their data in Azure Monitor. ## Pillars of observability
Azure resources generate a significant amount of monitoring data. Azure Monitor
## Metrics [Metrics](essentials/data-platform-metrics.md) are numerical values that describe some aspect of a system at a particular point in time. They are collected at regular intervals and are identified with a timestamp, a name, a value, and one or more defining labels. Metrics can be aggregated using a variety of algorithms, compared to other metrics, and analyzed for trends over time.
-Metrics in Azure Monitor are stored in a time-series database which is optimized for analyzing time-stamped data. This makes metrics particularly suited for alerting and fast detection of issues. They can tell you how your system is performing but typically need to be combined with logs to identify the root cause of issues.
+Metrics in Azure Monitor are stored in a time-series database which is optimized for analyzing time-stamped data. This makes metrics ideal for alerting and fast detection of issues. They can tell you how your system is performing but typically need to be combined with logs to identify the root cause of issues.
Metrics are available for interactive analysis in the Azure portal with [Azure Metrics Explorer](essentials/metrics-getting-started.md). They can be added to an [Azure dashboard](app/tutorial-app-dashboards.md) for visualization in combination with other data and used for near-real time [alerting](alerts/alerts-metric.md). Read more about Azure Monitor Metrics including their sources of data in [Metrics in Azure Monitor](essentials/data-platform-metrics.md). ## Logs
-[Logs](logs/data-platform-logs.md) are events that occurred within the system. They can contain different kinds of data and may be structured or free form text with a timestamp. They may be created sporadically as events in the environment generate log entries, and a system under heavy load will typically generate more log volume.
+[Logs](logs/data-platform-logs.md) are events that occurred within the system. They can contain different kinds of data and may be structured or free-form text with a timestamp. They may be created sporadically as events in the environment generate log entries, and a system under heavy load will typically generate more log volume.
Logs in Azure Monitor are stored in a Log Analytics workspace that's based on [Azure Data Explorer](/azure/data-explorer/) which provides a powerful analysis engine and [rich query language](/azure/kusto/query/). Logs typically provide enough information to provide complete context of the issue being identified and are valuable for identifying root case of issues.
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Title: Azure Monitor overview description: Overview of Microsoft services and functionalities that contribute to a complete monitoring strategy for your Azure services and applications. + Last updated 09/01/2022 - # Azure Monitor overview
The following diagram gives a high-level view of Azure Monitor.
- On the left are the [sources of monitoring data](data-sources.md) that populate these [data stores](data-platform.md). - On the right are the different functions that Azure Monitor performs with this collected data. This includes such actions as analysis, alerting, and integration such as streaming to external systems. The following video uses an earlier version of the preceding diagram, but its explanations are still relevant.
azure-monitor Resource Manager Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-manager-samples.md
Last updated 04/05/2022-+ # Resource Manager template samples for Azure Monitor
You can deploy and configure Azure Monitor at scale by using [Azure Resource Man
The basic steps to use one of the template samples are: 1. Copy the template and save it as a JSON file.
-1. Modify the parameters for your environment and save the JSON file.
-1. Deploy the template by using [any deployment method for Resource Manager templates](../azure-resource-manager/templates/deploy-powershell.md).
+2. Modify the parameters for your environment and save the JSON file.
+3. Deploy the template by using [any deployment method for Resource Manager templates](../azure-resource-manager/templates/deploy-portal.md).
-For example, use the following commands to deploy the template and parameter file to a resource group by using PowerShell or the Azure CLI:
+The following are basic steps for deploying the sample templates by using different methods. Follow the included links for more details.
-```powershell
-Connect-AzAccount
-Select-AzSubscription -SubscriptionName my-subscription
-New-AzResourceGroupDeployment -Name AzureMonitorDeployment -ResourceGroupName my-resource-group -TemplateFile azure-monitor-deploy.json -TemplateParameterFile azure-monitor-deploy.parameters.json
-```
+## [Azure portal](#tab/portal)
+
+For more details, see [Deploy resources with ARM templates and Azure portal](../azure-resource-manager/templates/deploy-portal.md).
+
+1. In the Azure portal, select **Create a resource**, search for **template**, and then select **Template deployment**.
+2. Select **Create**.
+3. Select **Build your own template in editor**.
+4. Select **Load file** and choose your template file.
+5. Select **Save**.
+6. Fill in the parameter values.
+7. Select **Review + Create**.
+
+## [CLI](#tab/cli)
+
+For more details, see [How to use Azure Resource Manager (ARM) deployment templates with Azure CLI](../azure-resource-manager/templates/deploy-cli.md).
```azurecli
az login
az deployment group create \
  --name AzureMonitorDeployment \
-  --resource-group ResourceGroupofTargetResource \
+  --resource-group <resource-group> \
  --template-file azure-monitor-deploy.json \
  --parameters azure-monitor-deploy.parameters.json
```
+## [PowerShell](#tab/powershell)
+
+For more details, see [Deploy resources with ARM templates and Azure PowerShell](../azure-resource-manager/templates/deploy-powershell.md).
+
+```powershell
+Connect-AzAccount
+Select-AzSubscription -SubscriptionName <subscription>
+New-AzResourceGroupDeployment -Name AzureMonitorDeployment -ResourceGroupName <resource-group> -TemplateFile azure-monitor-deploy.json -TemplateParameterFile azure-monitor-deploy.parameters.json
+```
+
+## [REST API](#tab/api)
+
+For more details, see [Deploy resources with ARM templates and Azure Resource Manager REST API](../azure-resource-manager/templates/deploy-rest.md).
+
+```rest
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-10-01
+```
+
+In the request body, provide a link to your template and parameter file.
+
+```json
+{
+ "properties": {
+ "templateLink": {
+ "uri": "http://mystorageaccount.blob.core.windows.net/templates/template.json",
+ "contentVersion": "1.0.0.0"
+ },
+ "parametersLink": {
+ "uri": "http://mystorageaccount.blob.core.windows.net/templates/parameters.json",
+ "contentVersion": "1.0.0.0"
+ },
+ "mode": "Incremental"
+ }
+}
+```
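If you prefer to send that request from the command line, one option is `az rest`; in this sketch the subscription ID, resource group, deployment name, and body file name are all placeholders:

```azurecli
# Placeholders throughout; the referenced file contains the templateLink/parametersLink
# body shown above.
az rest --method put \
  --url "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/my-resource-group/providers/Microsoft.Resources/deployments/AzureMonitorDeployment?api-version=2020-10-01" \
  --body @azure-monitor-deploy-body.json
```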
+++ ## List of sample templates - [Agents](agents/resource-manager-agent.md): Deploy and configure the Log Analytics agent and a diagnostic extension.
az deployment group create \
## Next steps
-Learn more about [Resource Manager templates](../azure-resource-manager/templates/overview.md).
+Learn more about [Resource Manager templates](../azure-resource-manager/templates/overview.md).
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
azure-monitor Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/service-limits.md
Title: Azure Monitor service limits | Microsoft Docs description: Lists limits in different areas of Azure Monitor. + Last updated 04/05/2022- # Azure Monitor service limits
This article lists limits in different areas of Azure Monitor.
## Alerts ## Action groups ## Autoscale +
+## Prometheus metrics
+ ## Logs ingestion API ## Data collection rules ## Diagnostic Settings ## Log queries and language ## Log Analytics workspaces ## Application Insights ## Next Steps - [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/)-- [Monitoring usage and estimated costs in Azure Monitor](./usage-estimated-costs.md)
+- [Monitoring usage and estimated costs in Azure Monitor](./usage-estimated-costs.md)
azure-monitor Workbooks Retrieve Legacy Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-retrieve-legacy-workbooks.md
+
+ Title: Retrieve legacy and private workbooks
+description: Learn how to retrieve deprecated legacy and private Azure workbooks.
++
+ ibiza
+ Last updated : 09/08/2022++++
+# Retrieve legacy Application Insights workbooks
+
+Private and legacy workbooks have been deprecated and aren't accessible from the Azure portal. If you're looking for the deprecated workbook that you forgot to convert before the deadline, you can use this process to retrieve the content of your old workbook and load it into a new workbook. This tool will only be available for a limited time.
+
+Application Insights Workbooks, also known as "Legacy Workbooks", are stored as a different Azure resource type than all other Azure Workbooks. These different Azure resource types are now being merged into a single standard type so that you can take advantage of all the existing and new functionality available in standard Azure Workbooks. For example:
+
+* Converted legacy workbooks can be queried via Azure Resource Graph (ARG), and show up in other standard Azure views of resources in a resource group or subscription.
+* Converted legacy workbooks can support top level ARM template features like other resource types, including, but not limited to:
+ * Tags
+ * Policies
+ * Activity Log / Change Tracking
+ * Resource locks
+* Converted legacy workbooks can support [ARM templates](workbooks-automate.md).
+* Converted legacy workbooks can support the [BYOS](workbooks-bring-your-own-storage.md) feature.
+* Converted legacy workbooks can be saved in the region of your choice.
+
+The legacy workbook deprecation doesn't change where you find your workbooks in the Azure portal. The legacy workbooks are still visible in the Workbooks section of Application Insights. The deprecation won't affect the content of your workbook.
+
+> [!NOTE]
+>
+> - After April 15 2021, you will not be able to save legacy workbooks.
+> - Use `Save as` on a legacy workbook to create a standard Azure workbook.
+> - Any new workbook you create will be a standard workbook.
+
+## Why isn't there an automatic conversion?
+- The write permissions for legacy workbooks are based only on Azure role-based access control on the Application Insights resource itself. A user might not be allowed to create new workbooks in that resource group. If the workbooks were auto-migrated, the migration could fail, or the workbooks could be created but the user might not be able to delete them afterward.
+- Legacy workbooks support "My" (private) workbooks, which standard Azure Workbooks no longer supports. A migration would cause those private workbooks to become visible to any user with read access to that same resource group.
+- Links and group content loaded from saved legacy workbooks would break. Authors would need to manually update these links to point to the newly saved items.
+
+For these reasons, we suggest that users manually migrate the workbooks they want to keep.
+## Convert a legacy Application Insights workbook
+1. Identify legacy workbooks. In the gallery view, legacy workbooks have a warning icon. When you open a legacy workbook, there's a banner.
+
+ :::image type="content" source="media/workbooks-retrieve-legacy-workbooks/workbooks-legacy-warning.png" alt-text="Screenshot of the warning symbol on a deprecated workbook.":::
+
+ :::image type="content" source="media/workbooks-retrieve-legacy-workbooks/workbooks-legacy-banner.png" alt-text="Screenshot of the banner at the top of a deprecated workbook.":::
+
+1. Convert the legacy workbooks. For any legacy workbook you want to keep after June 30 2021:
+
+ 1. Open the workbook, and then from the toolbar, select **Edit**, then **Save As**.
+ 1. Enter the workbook name.
+ 1. Select a subscription, resource group, and region where you have write access.
+ 1. If the legacy workbook uses links to other legacy workbooks, or loads workbook content in groups, update those items to point to the newly saved workbook.
+ 1. After you've saved the workbook, you can delete the legacy workbook or update its contents to be a link to the newly saved workbook.
+
+1. Verify permissions. For legacy workbooks, permissions were based on the Application Insights specific roles, like Application Insights Contributor. Verify that users of the new workbook have the appropriate standard Monitoring Reader/Contributor or Workbook Reader/Contributor roles so that they can see and create Workbooks in the appropriate resource groups.
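If a role still needs to be granted, the following Azure CLI sketch shows one way to do it; the user, subscription, and resource group values are hypothetical, and `Workbook Contributor` is one of the built-in roles mentioned in the step above:

```azurecli
# Hypothetical user and resource group; grants the built-in "Workbook Contributor" role
# at resource-group scope so the user can create workbooks there.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Workbook Contributor" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group"
```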
+
+For more information, see [access control](workbooks-overview.md#access-control).
+
+After deprecation of the legacy workbooks, you can still retrieve the content of legacy workbooks for a limited time by using the Azure CLI or PowerShell to query `microsoft.insights/components/[name]/favorites` for the specific resource with `api-version=2015-05-01`.
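For example, a minimal Azure CLI sketch of that query might look like the following; the subscription ID, resource group, and Application Insights resource name are placeholders, and the extra query parameters mirror the workbook JSON later in this article:

```azurecli
# All names are placeholders; substitute your own subscription ID, resource group,
# and Application Insights resource name.
az rest --method get \
  --url "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/microsoft.insights/components/my-app-insights/favorites?api-version=2015-05-01&sourceType=notebook&canFetchContent=true"
```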
+## Convert a private workbook
+
+1. Open a new or empty workbook.
+1. In the toolbar, select **Edit** and then navigate to the advanced editor.
+
+ :::image type="content" source="media/workbooks-retrieve-legacy-workbooks/workbooks-retrieve-deprecated-advanced-editor.png" alt-text="Screenshot of the advanced editor used to retrieve deprecated workbooks.":::
+
+1. Copy the [workbook json](#json-for-private-workbook-conversion) and paste it into your open advanced editor.
+1. Select **Apply** at the top right.
+1. Select the subscription and resource group and category of the workbook you'd like to retrieve.
+1. The grid at the bottom of this workbook lists all the private workbooks in the selected subscription or resource group.
+1. Select one of the workbooks in the grid. Your workbook should look something like this:
+
+ :::image type="content" source="media/workbooks-retrieve-legacy-workbooks/workbooks-retrieve-deprecated-private.png" alt-text="Screenshot of a deprecated private workbook converted to a standard workbook." lightbox="media//workbooks-retrieve-legacy-workbooks/workbooks-retrieve-deprecated-private.png":::
+
+1. Select **Open Content as Workbook** at the bottom of the workbook.
+1. A new workbook appears with the content of the old private workbook that you selected. Save the workbook as a standard workbook.
+1. You have to re-create links to the deprecated workbook or its contents, including dashboard pins and URL links.
+## Convert a favorites-based (legacy) workbook
+
+1. Navigate to your Application Insights Resource > Workbooks gallery.
+1. Open a new or empty workbook.
+1. Select **Edit** in the toolbar and navigate to the advanced editor.
+
+ :::image type="content" source="media/workbooks-retrieve-legacy-workbooks/workbooks-retrieve-deprecated-advanced-editor.png" alt-text="Screenshot of the advanced editor used to retrieve deprecated workbooks.":::
+
+1. Copy the [workbook JSON](#json-for-legacy-workbook-conversion) and paste it into your open advanced editor.
+1. Select **Apply**.
+1. The grid at the bottom of this workbook lists all the legacy workbooks within the current Application Insights resource.
+1. Select one of the workbooks in the grid. Your workbook should now look something like this:
+
+ :::image type="content" source="media/workbooks-retrieve-legacy-workbooks/workbooks-retrieve-deprecated-legacy.png" alt-text="Screenshot of a deprecated legacy workbook converted to a standard workbook." lightbox="media/workbooks-retrieve-legacy-workbooks/workbooks-retrieve-deprecated-legacy.png":::
+
+1. Select **Open Content as Workbook** at the bottom of the workbook.
+1. A new workbook appears with the content of the old legacy workbook that you selected. Save the workbook as a standard workbook.
+1. You have to re-create links to the deprecated workbook or its contents, including dashboard pins and URL links.
+
+## JSON for legacy workbook conversion
+
+```json
+{
+ "version": "Notebook/1.0",
+ "items": [
+ {
+ "type": 9,
+ "content": {
+ "version": "KqlParameterItem/1.0",
+ "parameters": [
+ {
+ "id": "876235fc-ef67-418d-87f5-69f496be171b",
+ "version": "KqlParameterItem/1.0",
+ "name": "resource",
+ "type": 5,
+ "typeSettings": {
+ "additionalResourceOptions": [
+ "value::1"
+ ],
+ "componentIdOnly": true
+ },
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "defaultValue": "value::1"
+ }
+ ],
+ "style": "pills",
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components"
+ },
+ "conditionalVisibility": {
+ "parameterName": "debug",
+ "comparison": "isNotEqualTo"
+ },
+ "name": "resource selection"
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "# Legacy (Favorites based) Workbook Conversion\r\n\r\nThis workbook shows favorite based (legacy) workbooks in this Application Insights resource: \r\n\r\n{resource:grid}\r\n\r\nThe grid below will show the favorite workbooks found, and allows you to copy the contents, or open them as a full Azure Workbook where they can be saved."
+ },
+ "name": "text - 5"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "{\"version\":\"ARMEndpoint/1.0\",\"data\":null,\"headers\":[],\"method\":\"GETARRAY\",\"path\":\"{resource}/favorites\",\"urlParams\":[{\"key\":\"api-version\",\"value\":\"2015-05-01\"},{\"key\":\"sourceType\",\"value\":\"notebook\"},{\"key\":\"canFetchContent\",\"value\":\"false\"}],\"batchDisabled\":false,\"transformers\":[{\"type\":\"jsonpath\",\"settings\":{\"columns\":[{\"path\":\"$.Name\",\"columnid\":\"name\"},{\"path\":\"$.FavoriteId\",\"columnid\":\"id\"},{\"path\":\"$.TimeModified\",\"columnid\":\"modified\",\"columnType\":\"datetime\"},{\"path\":\"$.FavoriteType\",\"columnid\":\"type\"}]}}]}",
+ "size": 0,
+ "title": "Legacy Workbooks (Select an item to see contents)",
+ "noDataMessage": "No legacy workbooks found",
+ "noDataMessageStyle": 3,
+ "exportedParameters": [
+ {
+ "fieldName": "id",
+ "parameterName": "favoriteId"
+ },
+ {
+ "fieldName": "name",
+ "parameterName": "name",
+ "parameterType": 1
+ }
+ ],
+ "queryType": 12,
+ "gridSettings": {
+ "rowLimit": 1000,
+ "filter": true
+ }
+ },
+ "name": "list favorites"
+ },
+ {
+ "type": 9,
+ "content": {
+ "version": "KqlParameterItem/1.0",
+ "parameters": [
+ {
+ "id": "8d78556d-a4f3-4868-bf06-9e0980246d31",
+ "version": "KqlParameterItem/1.0",
+ "name": "config",
+ "type": 1,
+ "query": "{\"version\":\"ARMEndpoint/1.0\",\"data\":null,\"headers\":[],\"method\":\"GET\",\"path\":\"{resource}/favorites/{favoriteId}\",\"urlParams\":[{\"key\":\"api-version\",\"value\":\"2015-05-01\"},{\"key\":\"sourceType\",\"value\":\"notebook\"},{\"key\":\"canFetchContent\",\"value\":\"true\"}],\"batchDisabled\":false,\"transformers\":[{\"type\":\"jsonpath\",\"settings\":{\"columns\":[{\"path\":\"$.Config\",\"columnid\":\"Content\"}]}}]}",
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 12
+ }
+ ],
+ "style": "pills",
+ "queryType": 12
+ },
+ "conditionalVisibility": {
+ "parameterName": "debug",
+ "comparison": "isNotEqualTo"
+ },
+ "name": "turn response into param"
+ },
+ {
+ "type": 11,
+ "content": {
+ "version": "LinkItem/1.0",
+ "style": "list",
+ "links": [
+ {
+ "id": "fc93ee9e-d5b2-41de-b74a-1fb62f0df49e",
+ "linkTarget": "OpenBlade",
+ "linkLabel": "Open Content as Workbook",
+ "style": "primary",
+ "bladeOpenContext": {
+ "bladeName": "UsageNotebookBlade",
+ "extensionName": "AppInsightsExtension",
+ "bladeParameters": [
+ {
+ "name": "ComponentId",
+ "source": "parameter",
+ "value": "resource"
+ },
+ {
+ "name": "NewNotebookData",
+ "source": "parameter",
+ "value": "config"
+ }
+ ]
+ }
+ }
+ ]
+ },
+ "conditionalVisibility": {
+ "parameterName": "config",
+ "comparison": "isNotEqualTo"
+ },
+ "name": "links - 4"
+ }
+ ],
+ "$schema": "https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json"
+}
+```
++
+## JSON for private workbook conversion
+
+```json
+{
+ "version": "Notebook/1.0",
+ "items": [
+ {
+ "type": 9,
+ "content": {
+ "version": "KqlParameterItem/1.0",
+ "crossComponentResources": [
+ "{Subscription}"
+ ],
+ "parameters": [
+ {
+ "id": "1f74ed9a-e3ed-498d-bd5b-f68f3836a117",
+ "version": "KqlParameterItem/1.0",
+ "name": "Subscription",
+ "type": 6,
+ "isRequired": true,
+ "typeSettings": {
+ "additionalResourceOptions": [
+ "value::1"
+ ],
+ "includeAll": false,
+ "showDefault": false
+ }
+ },
+ {
+ "id": "b616a3a3-4271-4208-b1a9-a92a78efed08",
+ "version": "KqlParameterItem/1.0",
+ "name": "ResourceGroup",
+ "label": "Resource group",
+ "type": 2,
+ "isRequired": true,
+ "query": "Resources\r\n| summarize by resourceGroup\r\n| order by resourceGroup asc\r\n| project id=resourceGroup, resourceGroup",
+ "crossComponentResources": [
+ "{Subscription}"
+ ],
+ "typeSettings": {
+ "additionalResourceOptions": [
+ "value::1"
+ ],
+ "showDefault": false
+ },
+ "queryType": 1,
+ "resourceType": "microsoft.resourcegraph/resources"
+ },
+ {
+ "id": "3872fc90-1467-4b01-81ef-d82d90665d72",
+ "version": "KqlParameterItem/1.0",
+ "name": "Category",
+ "type": 2,
+ "description": "Workbook Category",
+ "isRequired": true,
+ "typeSettings": {
+ "additionalResourceOptions": [],
+ "showDefault": false
+ },
+ "jsonData": "[\"workbook\",\"sentinel\",\"usage\",\"tsg\",\"usageMetrics\",\"workItems\",\"performance-websites\",\"performance-appinsights\",\"performance-documentdb\",\"performance-storage\",\"performance-storageclassic\",\"performance-vm\",\"performance-vmclassic\",\"performance-sqlserverdatabases\",\"performance-virtualnetwork\",\"performance-virtualmachinescalesets\",\"performance-computedisks\",\"performance-networkinterfaces\",\"performance-logicworkflows\",\"performance-appserviceplans\",\"performance-applicationgateway\",\"performance-runbooks\",\"performance-servicebusqueues\",\"performance-iothubs\",\"performance-networkroutetables\",\"performance-cognitiveserviceaccounts\",\"performance-containerservicemanagedclusters\",\"performance-servicefabricclusters\",\"performance-cacheredis\",\"performance-eventhubnamespaces\",\"performance-hdinsightclusters\",\"failure-websites\",\"failure-appinsights\",\"failure-documentdb\",\"failure-storage\",\"failure-storageclassic\",\"failure-vm\",\"failure-vmclassic\",\"failure-sqlserverdatabases\",\"failure-virtualnetwork\",\"failure-virtualmachinescalesets\",\"failure-computedisks\",\"failure-networkinterfaces\",\"failure-logicworkflows\",\"failure-appserviceplans\",\"failure-applicationgateway\",\"failure-runbooks\",\"failure-servicebusqueues\",\"failure-iothubs\",\"failure-networkroutetables\",\"failure-cognitiveserviceaccounts\",\"failure-containerservicemanagedclusters\",\"failure-servicefabricclusters\",\"failure-cacheredis\",\"failure-eventhubnamespaces\",\"failure-hdinsightclusters\",\"storage-insights\",\"cosmosdb-insights\",\"vm-insights\",\"container-insights\",\"keyvaults-insights\",\"backup-insights\",\"rediscache-insights\",\"servicebus-insights\",\"eventhub-insights\",\"workload-insights\",\"adxcluster-insights\",\"wvd-insights\",\"activitylog-insights\",\"hdicluster-insights\",\"laws-insights\",\"hci-insights\"]",
+ "defaultValue": "workbook"
+ }
+ ],
+ "queryType": 1,
+ "resourceType": "microsoft.resourcegraph/resources"
+ },
+ "name": "resource selection"
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "# Private Workbook Conversion\r\n\r\nThis workbook shows private workbooks within the current subscription / resource group: \r\n\r\n| Subscription | Resource Group | \r\n|--|-|\r\n|{Subscription}|{ResourceGroup} |\r\n\r\nThe grid below will show the private workbooks found, and allows you to copy the contents, or open them as a full Azure Workbook where they can be saved.\r\n\r\nUse the button below to load the selected private workbook content into a new workbook. From there you can save it as a new workbook."
+ },
+ "name": "text - 5"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "{\"version\":\"ARMEndpoint/1.0\",\"data\":null,\"headers\":[],\"method\":\"GETARRAY\",\"path\":\"/{Subscription}/resourceGroups/{ResourceGroup}/providers/microsoft.insights/myworkbooks\",\"urlParams\":[{\"key\":\"api-version\",\"value\":\"2020-10-20\"},{\"key\":\"category\",\"value\":\"{Category}\"}],\"batchDisabled\":false,\"transformers\":[{\"type\":\"jsonpath\",\"settings\":{\"tablePath\":\"$..[?(@.kind == \\\"user\\\")]\",\"columns\":[{\"path\":\"$.properties.displayName\",\"columnid\":\"name\"},{\"path\":\"$.name\",\"columnid\":\"id\"},{\"path\":\"$.kind\",\"columnid\":\"type\",\"columnType\":\"string\"},{\"path\":\"$.properties.timeModified\",\"columnid\":\"modified\",\"columnType\":\"datetime\"},{\"path\":\"$.properties.sourceId\",\"columnid\":\"resource\",\"columnType\":\"string\"}]}}]}",
+ "size": 1,
+ "title": "Private Workbooks",
+ "noDataMessage": "No private workbooks found",
+ "noDataMessageStyle": 3,
+ "exportedParameters": [
+ {
+ "fieldName": "id",
+ "parameterName": "id"
+ },
+ {
+ "fieldName": "name",
+ "parameterName": "name",
+ "parameterType": 1
+ },
+ {
+ "fieldName": "resource",
+ "parameterName": "resource",
+ "parameterType": 1
+ }
+ ],
+ "queryType": 12,
+ "gridSettings": {
+ "formatters": [
+ {
+ "columnMatch": "resource",
+ "formatter": 13,
+ "formatOptions": {
+ "linkTarget": null,
+ "showIcon": true
+ }
+ }
+ ],
+ "rowLimit": 1000,
+ "filter": true,
+ "labelSettings": [
+ {
+ "columnId": "resource",
+ "label": "Linked To"
+ }
+ ]
+ },
+ "sortBy": []
+ },
+ "name": "list private workbooks"
+ },
+ {
+ "type": 9,
+ "content": {
+ "version": "KqlParameterItem/1.0",
+ "parameters": [
+ {
+ "id": "8d78556d-a4f3-4868-bf06-9e0980246d31",
+ "version": "KqlParameterItem/1.0",
+ "name": "config",
+ "type": 1,
+ "query": "{\"version\":\"ARMEndpoint/1.0\",\"data\":null,\"headers\":[],\"method\":\"GET\",\"path\":\"{Subscription}/resourceGroups/{ResourceGroup}/providers/microsoft.insights/myworkbooks/{id}\",\"urlParams\":[{\"key\":\"api-version\",\"value\":\"2020-10-20\"},{\"key\":\"sourceType\",\"value\":\"notebook\"},{\"key\":\"canFetchContent\",\"value\":\"true\"}],\"batchDisabled\":false,\"transformers\":[{\"type\":\"jsonpath\",\"settings\":{\"columns\":[{\"path\":\"$..serializedData\",\"columnid\":\"Content\"}]}}]}",
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 12
+ }
+ ],
+ "style": "pills",
+ "queryType": 12
+ },
+ "conditionalVisibility": {
+ "parameterName": "debug",
+ "comparison": "isNotEqualTo"
+ },
+ "name": "turn response into param"
+ },
+ {
+ "type": 11,
+ "content": {
+ "version": "LinkItem/1.0",
+ "style": "list",
+ "links": [
+ {
+ "id": "fc93ee9e-d5b2-41de-b74a-1fb62f0df49e",
+ "linkTarget": "OpenBlade",
+ "linkLabel": "Open Content as Workbook",
+ "style": "primary",
+ "bladeOpenContext": {
+ "bladeName": "UsageNotebookBlade",
+ "extensionName": "AppInsightsExtension",
+ "bladeParameters": [
+ {
+ "name": "ComponentId",
+ "source": "parameter",
+ "value": "resource"
+ },
+ {
+ "name": "NewNotebookData",
+ "source": "parameter",
+ "value": "config"
+ }
+ ]
+ }
+ }
+ ]
+ },
+ "conditionalVisibility": {
+ "parameterName": "config",
+ "comparison": "isNotEqualTo"
+ },
+ "name": "links - 4"
+ }
+ ],
+ "$schema": "https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json"
+}
+```
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
+## September 2022
+
+### Network Insights
+
+| Article | Description |
+|||
+|[Network Insights](../network-watcher/network-insights-overview.md)| Onboarded the new topology experience to Network Insights in Azure Monitor.|
++ ## August 2022
azure-netapp-files Application Volume Group Deploy First Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-deploy-first-host.md
na Previously updated : 11/19/2021 Last updated : 10/13/2022 # Deploy the first SAP HANA host using application volume group for SAP HANA
This article describes how to deploy the first SAP HANA host using Azure NetApp
## Before you begin
-> [!IMPORTANT]
-> Azure NetApp Files application volume group for SAP HANA is currently in preview. You need to submit a waitlist request for accessing the feature through the [**Azure NetApp Files application volume group for SAP HANA waitlist submission page**](https://aka.ms/anfavgpreviewsignup). Wait for an official confirmation email from the Azure NetApp Files team before using application volume group for SAP HANA.
- You should understand the [requirements and considerations for application volume group for SAP HANA](application-volume-group-considerations.md). Be sure to follow the **[pinning recommendations](https://aka.ms/HANAPINNING)** and have at least one HANA VM in the availability set started.
azure-netapp-files Application Volume Group Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-introduction.md
na Previously updated : 11/19/2021 Last updated : 10/13/2022 # Understand Azure NetApp Files application volume group for SAP HANA
-This article helps you understand the use cases and key features of Azure NetApp Files application volume group for SAP HANA.
-
-> [!IMPORTANT]
-> Azure NetApp Files application volume group for SAP HANA is currently in preview. You need to submit a waitlist request for accessing the feature through the [**Azure NetApp Files application volume group for SAP HANA waitlist submission page**](https://aka.ms/anfavgpreviewsignup). Wait for an official confirmation email from the Azure NetApp Files team before using application volume group for SAP HANA.
+This article helps you understand the use cases and key features of Azure NetApp Files application volume group for SAP HANA.
Application volume group for SAP HANA enables you to deploy all volumes required to install and operate an SAP HANA database according to best practices. Instead of individually creating the required SAP HANA volumes (including data, log, shared, log-backup, and data-backup volumes), application volume group for SAP HANA creates these volumes in a single "atomic" call. The atomic call ensures that either all volumes or no volumes at all are created.
azure-netapp-files Azacsnap Cmd Ref Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-restore.md
na Previously updated : 09/04/2022 Last updated : 10/09/2022
Doing a volume restore from a snapshot is done using the `azacsnap -c restore` c
The `-c restore` command has the following options: -- `--restore snaptovol` Creates a new volume based on the latest snapshot on the target volume. This command creates a new "cloned" volume based on the configured target volume, using the latest volume snapshot as the base to create the new volume. This command does not interrupt the storage replication from primary to secondary. Instead clones of the latest available snapshot are created at the DR site and recommended filesystem mountpoints of the cloned volumes are presented. This command should be run on the Azure Large Instance system **in the DR region** (that is, the target fail-over system).
+- `--restore snaptovol` Creates a new volume based on a volume snapshot. This command creates a new "cloned" volume for each volume in the configuration file, by default using the latest volume snapshot as the base for the new volume. For data volumes, it's possible to select the snapshot to clone by using the option `--snapshotfilter <Snapshot Name>`; this only completes if all data volumes have that same snapshot. This command doesn't interrupt the storage replication from primary to secondary. Instead, clones of the snapshot are created at the same location, and recommended filesystem mountpoints for the cloned volumes are presented. This command should be run on the Azure Large Instance system **in the DR region** (that is, the target fail-over system). An example invocation is shown after this list.
+
+- `--restore revertvolume` Reverts the target volume to a prior state based on a volume snapshot. Use this command as part of DR failover into the paired DR region. This command **stops** storage replication from the primary site to the secondary site and reverts the target DR volume(s) to their latest available snapshot, along with presenting recommended filesystem mountpoints for the reverted DR volumes. This command should be run on the Azure Large Instance system **in the DR region** (that is, the target fail-over system).
-- `--restore revertvolume` Reverts the target volume to a prior state based on the most recent snapshot. Using this command as part of DR Failover into the paired DR region. This command **stops** storage replication from the primary site to the secondary site, and reverts the target DR volume(s) to their latest available snapshot on the DR volumes along with recommended filesystem mountpoints for the reverted DR volumes. This command should be run on the Azure Large Instance system **in the DR region** (that is, the target fail-over system). > [!NOTE] > The sub-command (`--restore revertvolume`) is only available for Azure Large Instance and is not available for Azure NetApp Files.+ - `--dbsid <SAP HANA SID>` is the SAP HANA SID being selected from the configuration file to apply the volume restore commands to. - `[--configfile <config filename>]` is an optional parameter allowing for custom configuration file names.
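As referenced in the option list above, here's a minimal sketch of a `snaptovol` restore; the SAP HANA SID, configuration file name, and snapshot name are hypothetical:

```bash
# Hypothetical SID and file names. Clone new volumes from the latest snapshot
# for every volume defined in the configuration file.
azacsnap -c restore --restore snaptovol --dbsid H80 --configfile azacsnap.json

# Optionally clone data volumes from a specific snapshot; this only succeeds
# if all data volumes share that snapshot.
azacsnap -c restore --restore snaptovol --dbsid H80 --snapshotfilter "daily__2022-10-09T0300"
```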
azure-netapp-files Azacsnap Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-preview.md
Return to this document for details on using the preview features.
Microsoft provides many storage options for deploying databases such as SAP HANA. Many of these options are detailed on the [Azure Storage types for SAP workload](../virtual-machines/workloads/sap/planning-guide-storage.md) web page. Additionally there's a
-[Cost conscious solution with Azure premium storage](../virtual-machines/workloads/sap/hana-vm-operations-storage.md#cost-conscious-solution-with-azure-premium-storage).
+[Cost conscious solution with Azure premium storage](../virtual-machines/workloads/sap/hana-vm-premium-ssd-v1.md#cost-conscious-solution-with-azure-premium-storage).
AzAcSnap is able to take application consistent database snapshots when deployed on this type of architecture (that is, a VM with Managed Disks). However, the set up for this platform is slightly more complicated as in this scenario we need to block I/O to the mountpoint (using `xfs_freeze`) before taking a snapshot of the Managed
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 09/29/2022 Last updated : 10/10/2022 # Guidelines for Azure NetApp Files network planning
The following table describes the network topologies supported by each network f
| Connectivity from on-premises to a volume in a spoke VNet over VPN gateway and VNet peering with gateway transit | Yes | Yes | | Connectivity over Active/Passive VPN gateways | Yes | Yes | | Connectivity over Active/Active VPN gateways | Yes | No |
-| Connectivity over Active/Active Zone Redundant gateways | No | No |
+| Connectivity over Active/Active Zone Redundant gateways | Yes | Yes |
| Connectivity over Virtual WAN (VWAN) | No | No | \* This option will incur a charge on ingress and egress traffic that uses a virtual network peering connection. For more information, see [Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/). For more general information, see [Virtual network peering](../virtual-network/virtual-network-peering-overview.md).
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
na Previously updated : 09/07/2022 Last updated : 10/07/2022
Azure NetApp Files backup is supported for the following regions:
* Australia East * East US * East US 2
+* France Central
+* Germany West Central
* Japan East * North Europe * South Central US
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 09/07/2022 Last updated : 10/13/2022 # What's new in Azure NetApp Files Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## October 2022
+
+* [Application volume group for SAP HANA](application-volume-group-introduction.md) now generally available (GA)
+
+ The application volume group for SAP HANA feature is now generally available. You no longer need to register the feature to use it.
+ ## August 2022 * [Standard network features](configure-network-features.md) are now generally available [in supported regions](azure-netapp-files-network-topologies.md#supported-regions).
azure-percept Audio Button Led Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/audio-button-led-behavior.md
Title: Azure Percept Audio button and LED states description: Learn more about the button and LED states of Azure Percept Audio-+ Previously updated : 08/03/2021 Last updated : 10/04/2022 # Azure Percept Audio button and LED states + See the following guidance for information on the button and LED states of the Azure Percept Audio device. ## Button behavior
azure-percept Azure Percept Audio Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-audio-datasheet.md
Title: Azure Percept Audio datasheet description: Check out the Azure Percept Audio datasheet for detailed device specifications-+ Previously updated : 02/16/2021 Last updated : 10/04/2022 # Azure Percept Audio datasheet ++ |Product Specification |Value | |--|--| |Performance |180 Degrees Far-field at 4 m, 63 dB |
azure-percept Azure Percept Devkit Container Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-devkit-container-release-notes.md
Title: Azure Percept DK Container release notes description: Information of changes and fixes for Azure Percept DK Container releases.-+ Previously updated : 09/17/2021 Last updated : 10/04/2022 # Azure Percept DK Container release notes + This page provides information of changes and fixes for Azure Percept DK Container releases. To download the container updates, go to [Azure Percept Studio](https://portal.azure.com/#blade/AzureEdgeDevices/main/overview), select Devices from the left navigation pane, choose the specific device, and then select Vision and Speech tabs to initiate container downloads.
azure-percept Azure Percept Devkit Software Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-devkit-software-release-notes.md
Title: Azure Percept DK software release notes description: Information about changes made to the Azure Percept DK software.-+ Previously updated : 08/23/2021 Last updated : 10/04/2022 # Azure Percept DK software release notes + This page provides information of changes and fixes for each Azure Percept DK OS and firmware release. To download the update images, refer to [Azure Percept DK software releases for USB cable update](./software-releases-usb-cable-updates.md) or [Azure Percept DK software releases for OTA update](./software-releases-over-the-air-updates.md).
azure-percept Azure Percept Dk Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-dk-datasheet.md
Title: Azure Percept DK datasheet description: Check out the Azure Percept DK datasheet for detailed device specifications-+ Previously updated : 02/16/2021 Last updated : 10/04/2022 # Azure Percept DK datasheet ++ |Product Specification |Value | |--|--| |Industrial Design |Integrated 80/20 1010 Series mounts |
azure-percept Azure Percept Vision Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-vision-datasheet.md
Title: Azure Percept Vision datasheet description: Check out the Azure Percept Vision datasheet for detailed device specifications-+ Previously updated : 02/16/2021 Last updated : 10/04/2022 # Azure Percept Vision datasheet ++ Specifications listed below are for the Azure Percept Vision device, included in the [Azure Percept DK](./azure-percept-dk-datasheet.md). |Product Specification |Value |
azure-percept Concept Security Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/concept-security-configuration.md
Title: Azure Percept security recommendations description: Learn more about Azure Percept firewall configuration and security recommendations-+ Previously updated : 03/25/2021 Last updated : 10/04/2022 # Azure Percept security recommendations + Review the guidelines below for information on configuring firewalls and general security best practices with Azure Percept. ## Configuring firewalls for Azure Percept DK
azure-percept Connect Over Cellular Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-gateway.md
Title: Connect Azure Percept DK over 5G and LTE networks by using a gateway description: This article explains how to connect Azure Percept DK over 5G and LTE networks by using a cellular gateway.-+ Previously updated : 09/23/2021 Last updated : 10/04/2022 # Connect Azure Percept DK over 5G and LTE networks by using a gateway + A simple way to connect Azure Percept to the internet is to use a gateway that connects to the internet over 5G or LTE and provides Ethernet ports. In this case, Azure Percept isn't even aware that it's connected over 5G or LTE. It "knows" only that its Ethernet port has connectivity and it's routing all traffic through that port.
azure-percept Connect Over Cellular Usb Multitech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-usb-multitech.md
Title: Connect Azure Percept DK over LTE by using a MultiTech MultiConnect USB modem description: This article explains how to connect Azure Percept DK over 5G or LTE networks by using a MultiTech MultiConnect USB modem.-+ Previously updated : 09/23/2021 Last updated : 10/04/2022 # Connect Azure Percept DK over LTE by using a MultiTech MultiConnect USB modem + This article discusses how to connect your Azure Percept DK by using a MultiTech MultiConnect (MTCM-LNA3-B03) USB modem. > [!Note]
azure-percept Connect Over Cellular Usb Quectel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-usb-quectel.md
Title: Connect Azure Percept DK over 5G or LTE by using a Quectel RM500 5G modem description: This article explains how to connect Azure Percept DK over 5G or LTE networks by using a Quectel 5G modem.-+ Previously updated : 09/03/2021 Last updated : 10/04/2022 # Connect Azure Percept DK over 5G or LTE by using a Quectel RM500-GL 5G modem + This article discusses how to connect Azure Percept DK over 5G or LTE by using a Quectel RM500-GL 5G modem. For more information about this 5G modem dev kit, contact your Quectel local sales team:
azure-percept Connect Over Cellular Usb Vodafone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-usb-vodafone.md
Title: Connect Azure Percept DK over 5G and LTE by using a Vodafone USB modem description: This article explains how to connect Azure Percept DK over 5G and LTE networks by using a Vodafone USB modem.-+ Previously updated : 09/23/2021 Last updated : 10/04/2022 # Connect Azure Percept DK over 5G and LTE by using a Vodafone USB Connect 4G v2 modem + This article discusses how to connect Azure Percept DK by using a Vodafone USB Connect 4G v2 modem. For more information about this modem, go to the [Vodafone Integrated Terminals](https://www.vodafone.com/business/iot/iot-devices/integrated-terminals) page.
azure-percept Connect Over Cellular Usb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-usb.md
Title: Connect Azure Percept DK over 5G and LTE networks by using a USB modem description: This article explains how to connect Azure Percept DK over 5G and LTE networks by using a USB modem.-+ Previously updated : 09/03/2021 Last updated : 10/04/2022 # Connect Azure Percept DK over 5G and LTE networks by using a USB modem + This article discusses how to connect Azure Percept DK to 5G or LTE networks by using a USB modem. > [!NOTE]
azure-percept Connect Over Cellular https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular.md
Title: Connect Azure Percept over 5G or LTE networks description: This article explains how to connect the Azure Percept DK over 5G or LTE networks.-+ Previously updated : 09/23/2021 Last updated : 10/04/2022 # Connect Azure Percept over 5G or LTE networks + The benefits of connecting Edge AI devices over 5G/LTE networks are many. Scenarios where Edge AI is most effective are in places where Wi-Fi and LAN connectivity are limited, such as smart cities, autonomous vehicles, and agriculture. Additionally, 5G/LTE networks provide better security than Wi-Fi. Lastly, using IoT devices that run AI at the Edge provides a way to optimize the bandwidth on 5G/LTE networks. Only the necessary information is sent to the cloud while most of the data is processed on the device. Today, Azure Percept DK even supports direct connection to 5G/LTE networks using a simple USB modem. The different options are described below. ## Options for connecting Azure Percept DK over 5G or LTE networks
azure-percept Create And Deploy Manually Azure Precept Devkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/create-and-deploy-manually-azure-precept-devkit.md
Title: How to do a manual default container deployment to Azure Percept DK description: this article shows the audience how to manually create and deploy an Azure Precept Devkit-+ Previously updated : 01/25/2022 Last updated : 10/04/2022 # How to do a manual default container deployment to Azure Percept DK + The following guide is to help customers manually deploy a factory fresh IoT Edge deployment to existing Azure Percept devices. We've also included the steps to manually create your Azure Percept IoT Edge device instance. ## Prerequisites
azure-percept Create People Counting Solution With Azure Percept Devkit Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/create-people-counting-solution-with-azure-percept-devkit-vision.md
Title: Create a people counting solution with Azure Percept Vision description: This guide will focus on detecting and counting people using the Azure Percept DK hardware, Azure IoT Hub, Azure Stream Analytics, and Power BI dashboard. -+ Previously updated : 01/19/2021 Last updated : 10/06/2022 # Create a people counting solution with Azure Percept Vision ++ This guide will focus on detecting and counting people using the Azure Percept DK hardware, Azure IoT Hub, Azure Stream Analytics, and Power BI dashboard. The tutorial is intended to show detailed steps on how users can create, configure, and implement the basic components of this solution. Users can easily expand the tutorial and create additional ways to visualize people counting data.
In this tutorial, you learn how to:
[ ![Power BI](./media/create-people-counting-solution-with-azure-percept-vision-images/power-bi-mini.png) ](./media/create-people-counting-solution-with-azure-percept-vision-images/power-bi.png#lightbox) -- Percept DK ([Purchase](https://www.microsoft.com/store/build/azure-percept/8v2qxmzbz9vc)) - Azure Subscription: ([Free trial account](https://azure.microsoft.com/free/)) - Power BI subscription: ([Try Power BI for free](https://go.microsoft.com/fwlink/?LinkId=874445&clcid=0x409&cmpid=pbi-gett-hero-try-powerbifree)) - Power BI workspace: ([Create the new workspaces in Power BI](https://github.com/MicrosoftDocs/powerbi-docs/blob/main/powerbi-docs/collaborate-share/service-create-the-new-workspaces.md))
azure-percept Delete Voice Assistant Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/delete-voice-assistant-application.md
Title: Delete your Azure Percept Audio voice assistant application description: This article shows you how to delete a previously created voice assistant application.-+ Previously updated : 08/03/2021 Last updated : 10/04/2022 # Delete your Azure Percept Audio voice assistant application + These instructions will show you how to delete a voice assistant application from your Azure Percept Audio device. ## Prerequisites
azure-percept Dev Tools Installer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/dev-tools-installer.md
Title: Install Azure Percept development tools description: Learn more about using the Dev Tools Pack Installer to accelerate advanced development with Azure Percept-+ Previously updated : 03/25/2021 Last updated : 10/04/2022 # Install Azure Percept development tools + The Dev Tools Pack Installer is a one-stop solution that installs and configures all the tools required to develop an advanced intelligent edge solution. ## Mandatory tools
azure-percept How To Capture Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-capture-images.md
Title: Capture images in Azure Percept Studio description: How to capture images with your Azure Percept DK in Azure Percept Studio-+ Previously updated : 02/12/2021 Last updated : 10/04/2022 # Capture images in Azure Percept Studio + Follow this guide to capture images using Azure Percept DK for an existing vision project. If you haven't created a vision project yet, see the [no-code vision tutorial](./tutorial-nocode-vision.md). ## Prerequisites
azure-percept How To Configure Voice Assistant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-configure-voice-assistant.md
Title: Configure your Azure Percept voice assistant application description: Configure your voice assistant application using Azure IoT Hub-+ Previously updated : 02/15/2021 Last updated : 10/04/2022 # Configure your Azure Percept voice assistant application + This article describes how to configure your voice assistant application using IoT Hub. For a step-by-step tutorial for the process of creating a voice assistant, see [Build a no-code voice assistant with Azure Percept Studio and Azure Percept Audio](./tutorial-no-code-speech.md). ## Update your voice assistant configuration
azure-percept How To Connect Over Ethernet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-connect-over-ethernet.md
Title: Connect to Azure Percept DK over Ethernet description: This guide shows users how to connect to the Azure Percept DK setup experience when connected over an Ethernet connection.-+ Previously updated : 06/01/2021 Last updated : 10/06/2021 # Connect to Azure Percept DK over Ethernet + In this how-to guide you'll learn how to launch the Azure Percept DK setup experience over an Ethernet connection. It's a companion to the [Quick Start: Set up your Azure Percept DK and deploy your first AI model](./quickstart-percept-dk-set-up.md) guide. See each option outlined below and choose which one is most appropriate for your environment. ## Prerequisites -- An Azure Percept DK ([Get one here](https://go.microsoft.com/fwlink/?linkid=2155270))
+- An Azure Percept DK
- A Windows, Linux, or OS X based host computer with Wi-Fi or ethernet capability and a web browser - Network cable
azure-percept How To Connect To Percept Dk Over Serial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-connect-to-percept-dk-over-serial.md
Title: Connect to Azure Percept DK over serial description: How to set up a serial connection to your Azure Percept DK with a USB to TTL serial cable-+ Previously updated : 02/03/2021 Last updated : 10/04/2022 # Connect to Azure Percept DK over serial + Follow the steps below to set up a serial connection to your Azure Percept DK through [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html). > [!WARNING]
azure-percept How To Ssh Into Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-ssh-into-percept-dk.md
Title: Connect to Azure Percept DK over SSH description: Learn how to SSH into your Azure Percept DK with PuTTY-+ Previously updated : 03/18/2021 Last updated : 10/04/2022 # Connect to Azure Percept DK over SSH ++ Follow the steps below to set up an SSH connection to your Azure Percept DK through OpenSSH or [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html). ## Prerequisites
azure-percept How To Troubleshoot Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-troubleshoot-setup.md
Title: Troubleshoot the Azure Percept DK setup experience description: Get troubleshooting tips for some of the more common issues found during the setup experience-+ Previously updated : 03/25/2021 Last updated : 10/04/2022 # Troubleshoot the Azure Percept DK setup experience ++ Refer to the table below for workarounds to common issues found during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md). If your issue still persists, contact Azure customer support. |Issue|Reason|Workaround|
azure-percept How To Update Over The Air https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-update-over-the-air.md
Title: Update Azure Percept DK over-the-air description: Learn how to receive over-the air (OTA) updates to your Azure Percept DK-+ Previously updated : 03/30/2021 Last updated : 10/04/2022 # Update Azure Percept DK over-the-air ++ >[!CAUTION] >**The OTA update on Azure Percept DK is no longer supported. For information on how to proceed, please visit [Update the Azure Percept DK over a USB-C cable connection](./how-to-update-via-usb.md).**
azure-percept How To View Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-view-telemetry.md
Title: View your Azure Percept DK's model inference telemetry description: Learn how to view your Azure Percept DK's vision model inference telemetry in Azure IoT Explorer-+ Previously updated : 02/17/2021 Last updated : 10/04/2022 # View your Azure Percept DK's model inference telemetry + Follow this guide to view your Azure Percept DK's vision model inference telemetry in [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer/releases). ## Prerequisites
azure-percept How To View Video Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-view-video-stream.md
Title: View your Azure Percept DK RTSP video stream description: Learn how to view the RTSP video stream from Azure Percept DK-+ Previously updated : 02/12/2021 Last updated : 10/04/2022 # View your Azure Percept DK RTSP video stream + Follow this guide to view the RTSP video stream from the Azure Percept DK within Azure Percept Studio. Inferencing from vision AI models deployed to your device will be viewable in the web stream. ## Prerequisites
azure-percept Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/known-issues.md
Title: Azure Percept known issues description: Learn more about Azure Percept known issues and their workarounds-+ Previously updated : 03/25/2021 Last updated : 10/04/2022 # Azure Percept known issues + Here are issues with the Azure Percept DK, Azure Percept Audio, or Azure Percept Studio that the product teams are aware of. Workarounds and troubleshooting steps are provided where possible. If you're blocked by any of these issues, you can post it as a question on [Microsoft Q&A](/answers/topics/azure-percept.html) or submit a customer support request in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). |Area|Symptoms|Description of Issue|Workaround|
azure-percept Overview 8020 Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-8020-integration.md
Previously updated : 10/04/2022 Last updated : 10/06/2022
your local 80/20 distributor: https://8020.net/distributorlookup/
| Arm Mounts | ![Arm Mount Image](./media/overview-8020-integration-images/arm-mount.png) | [ ![Clamp Bracket Image](./media/overview-8020-integration-images/azure-percept-8020-clamp-bracket-mini.png) ](./media/overview-8020-integration-images/azure-percept-8020-clamp-bracket.png#lightbox)
-## Next steps
-> [!div class="nextstepaction"]
-> [Buy an Azure Percept DK from the Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
azure-percept Overview Azure Percept Audio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept-audio.md
Previously updated : 10/04/2022 Last updated : 10/06/2022
[!INCLUDE [Retirement note](./includes/retire.md)]
-Azure Percept Audio is an accessory device that adds speech AI capabilities to [Azure Percept DK](./overview-azure-percept-dk.md). It contains a preconfigured audio processor and a four-microphone linear array, enabling you to use voice commands, keyword spotting, and far field speech with the help of Azure Cognitive Services. It is integrated out-of-the-box with Azure Percept DK, [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819), and other Azure edge management services. Azure Percept Audio is available for purchase at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270).
-
-> [!div class="nextstepaction"]
-> [Buy now](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
-
+Azure Percept Audio is an accessory device that adds speech AI capabilities to [Azure Percept DK](./overview-azure-percept-dk.md). It contains a preconfigured audio processor and a four-microphone linear array, enabling you to use voice commands, keyword spotting, and far field speech with the help of Azure Cognitive Services. It is integrated out-of-the-box with Azure Percept DK, [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819), and other Azure edge management services.
</br> > [!VIDEO https://www.youtube.com/embed/Qj8NGn-7s5A]
Build a [no-code speech solution](./tutorial-no-code-speech.md) in [Azure Percep
- [Azure Percept Audio datasheet](./azure-percept-audio-datasheet.md) - [Button and LED behavior](./audio-button-led-behavior.md)
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Buy an Azure Percept Audio device from the Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
azure-percept Overview Azure Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept-dk.md
Previously updated : 10/04/2022 Last updated : 10/06/2022
[!INCLUDE [Retirement note](./includes/retire.md)]
-Azure Percept DK is an edge AI development kit designed for developing vision and audio AI solutions with [Azure Percept Studio](./overview-azure-percept-studio.md). Azure Percept DK is available for purchase at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270).
-
-> [!div class="nextstepaction"]
-> [Buy now](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
+Azure Percept DK is an edge AI development kit designed for developing vision and audio AI solutions with [Azure Percept Studio](./overview-azure-percept-studio.md).
</br>
Azure Percept DK is an edge AI development kit designed for developing vision an
- [Create a no-code vision solution in Azure Percept Studio](./tutorial-nocode-vision.md) - [Create a no-code speech solution in Azure Percept Studio](./tutorial-no-code-speech.md) (Azure Percept Audio accessory required)
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Buy an Azure Percept DK from the Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
azure-percept Overview Azure Percept Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept-studio.md
Title: Azure Percept Studio overview v1
+ Title: Azure Percept Studio overview
description: Learn more about Azure Percept Studio Previously updated : 10/04/2022 Last updated : 10/06/2022
-# Azure Percept Studio overview v1
+# Azure Percept Studio overview
[!INCLUDE [Retirement note](./includes/retire.md)]
Regardless of if you are a beginner or an advanced AI model and solution develop
## Next steps - Check out [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819)-- Get the Azure Percept DK and Azure Percept Audio accessory at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270) - Learn more about [Azure Percept AI models and solutions](./overview-ai-models.md)
azure-percept Overview Azure Percept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept.md
Previously updated : 10/04/2022 Last updated : 10/06/2022
The main components of Azure Percept are:
- A development kit that is flexible enough to support a wide variety of prototyping scenarios for device builders, solution builders, and customers.
- > [!div class="nextstepaction"]
- > [Buy now](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
- - Services and workflows that accelerate edge AI model and solution development. - Development workflows and pre-built models accessible from [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819).
azure-percept Overview Percept Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-percept-security.md
Previously updated : 10/04/2022 Last updated : 10/06/2022
Device Update for IoT Hub enables more secure, scalable, and reliable over-the-a
> [!div class="nextstepaction"] > [Learn more about firewall configurations and security recommendations](concept-security-configuration.md)
-> [!div class="nextstepaction"]
-> [Buy an Azure Percept DK from the Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
azure-percept Overview Update Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-update-experience.md
Previously updated : 10/04/2022 Last updated : 10/06/2022
With Azure Percept DK, you may update your dev kit OS and firmware over-the-air
- [Update your Azure Percept DK over-the-air (OTA)](./how-to-update-over-the-air.md) - [Update your Azure Percept DK over USB](./how-to-update-via-usb.md)
-## Next steps
-
-Order an Azure Percept DK at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270).
azure-percept Quickstart Percept Audio Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/quickstart-percept-audio-setup.md
Title: Set up the Azure Percept Audio device description: Learn how to connect your Azure Percept Audio device to your Azure Percept DK-+ Previously updated : 03/25/2021 Last updated : 10/04/2022 # Set up the Azure Percept Audio device + Azure Percept Audio works out of the box with Azure Percept DK. No unique setup is required. ## Prerequisites
azure-percept Quickstart Percept Dk Set Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/quickstart-percept-dk-set-up.md
Title: Set up the Azure Percept DK device description: Set up your Azure Percept DK and connect it to Azure IoT Hub-+ Previously updated : 03/17/2021 Last updated : 10/04/2022 # Set up the Azure Percept DK device + Complete the Azure Percept DK setup experience to configure your dev kit. After verifying that your Azure account is compatible with Azure Percept, you will: - Launch the Azure Percept DK setup experience
azure-percept Quickstart Percept Dk Unboxing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/quickstart-percept-dk-unboxing.md
Title: Unbox and assemble the Azure Percept DK device description: Learn how to unbox, connect, and power on your Azure Percept DK-+ Previously updated : 02/16/2021 Last updated : 10/04/2022 # Unbox and assemble the Azure Percept DK device + Once you have received your Azure Percept DK, reference this guide for information on connecting the components and powering on the device. ## Prerequisites
azure-percept Retirement Of Azure Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/retirement-of-azure-percept-dk.md
description: Information about the retirement of the Azure Percept DK.
- Previously updated : 10/04/2022+ Last updated : 10/05/2022 # Retirement of Azure Percept DK
-The [Azure Percept](https://azure.microsoft.com/products/azure-percept/) public preview will be evolving to support new edge device platforms and developer experiences. As part of this evolution the Azure Percept DK and Audio Accessory and associated supporting Azure services for the Percept DK will be retired March 30, 2023.
+The [Azure Percept](https://azure.microsoft.com/products/azure-percept/) public preview will be evolving to support new edge device platforms and developer experiences. As part of this evolution, the Azure Percept DK and Audio Accessory, and the associated supporting Azure services for the Percept DK, will be retired on March 30, 2023.
+
+## How does this change affect me?
-Effective March 30, 2023, the Azure Percept DK and Audio Accessory will no longer be supported by any Azure services including Azure Percept Studio, OS updates, containers updates, view web stream, and Custom Vision integration. Microsoft will no longer provide customer success support and any associated supporting services
+- After March 30, 2023, the Azure Percept DK and Audio Accessory will no longer be supported by any Azure services, including Azure Percept Studio, OS updates, container updates, viewing the web stream, and Custom Vision integration.
+- Microsoft will no longer provide customer success support for the Azure Percept DK and Audio Accessory, or for any associated supporting services for the Percept DK.
+- Existing Custom Vision and Custom Speech projects created through Percept Studio for the Percept DK will not be deleted, and billing, if applicable, will continue. However, you will no longer be able to modify or use those projects with Percept Studio.
+
+## Recommended action
+
+You should plan to close the resources and projects associated with Azure Percept Studio and the Percept DK to avoid unanticipated billing, because these backend resources and projects will continue to bill after retirement.
+
+## Help and support
+
+If you have questions regarding Azure Percept DK, see the following **FAQ**.
++
+| Question | Answer |
+|-||
+| Why is this change being made? | The Azure Percept DK, Percept Audio Accessory, and Azure Percept Studio were in preview, which is similar to a public beta. Previews give customers the opportunity to try the latest software and hardware. Due to the preview nature of the software and hardware, retirements may occur. |
+| What is changing? | Azure Percept DK and Audio Accessory will no longer be supported by any Azure services including Azure Percept Studio and Updates. |
+| When is this change occurring? | On March 30, 2023. Until this date, your DK and Studio will function as-is, and updates and customer support will be offered. After this date, all updates and customer support will stop. |
+| Will my projects be deleted? | Your projects remain in the underlying Azure Services they were created in (example: Custom Vision, Speech Studio, etc.). They won't be deleted due to this retirement. You can no longer modify or use your project with Percept Studio. |
+| Do I need to do anything before March 30, 2023? | Yes, you will need to close the resources and projects associated with the Azure Percept Studio and DK to avoid future billing, as these backend resources and projects will continue to bill after retirement. |
+| Will my device still power on? | The various backend services that allow the DK and Audio Accessory to fully function will be shut down upon retirement, rendering the DK and Audio Accessory effectively unusable. The SoMs, such as the camera and Audio Accessory, will no longer be identified by the DK after retirement and will therefore be effectively unusable. |
azure-percept Return To Voice Assistant Application Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/return-to-voice-assistant-application-window.md
Title: Find your voice assistant application in Azure Percept Studio description: This article shows you how to return to a previously created voice assistant application window. -+ Previously updated : 08/03/2021 Last updated : 10/04/2022 # Find your voice assistant application in Azure Percept Studio + This how-to guide shows you how to return to a previously created voice assistant application. ## Prerequisites
azure-percept Software Releases Over The Air Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/software-releases-over-the-air-updates.md
Title: Software releases for Azure Percept DK OTA updates description: Information and download links for the Azure Percept DK over-the-air update packages-+ Previously updated : 08/23/2021 Last updated : 10/04/2022 # Software releases for OTA updates ++ >[!CAUTION] >**The OTA update on Azure Percept DK is no longer supported. For information on how to proceed, please visit [Update the Azure Percept DK over a USB-C cable connection](./how-to-update-via-usb.md).**
azure-percept Software Releases Usb Cable Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/software-releases-usb-cable-updates.md
Title: Software releases for Azure Percept DK USB cable updates description: Information and download links for the USB cable update package of Azure Percept DK -+ Previously updated : 08/23/2021 Last updated : 10/04/2022 # Software releases for USB cable updates + This page provides information and download links for all the dev kit OS/firmware image releases. For detail of changes/fixes in each version, refer to the release notes: - [Azure Percept DK software release notes](./azure-percept-devkit-software-release-notes.md).
azure-percept Speech Module Interface Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/speech-module-interface-workflow.md
Title: Azure Percept speech module interface workflow description: Describes the workflow and available methods for the Azure Percept speech module -+ Previously updated : 7/19/2021 Last updated : 10/04/2022 # Azure Percept speech module interface workflow + This article describes how the Azure Percept speech module interacts with IoT Hub. It does so via Module Twin and Module methods. Furthermore, it lists the direct method calls used to invoke the speech module. ## Speech module interaction with IoT hub via Module Twin and Module method
azure-portal Azure Portal Safelist Urls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-safelist-urls.md
Title: Allow the Azure portal URLs on your firewall or proxy server description: To optimize connectivity between your network and the Azure portal and its services, we recommend you add these URLs to your allowlist. Previously updated : 06/29/2022 Last updated : 10/12/2022
api.aadrm.com (Azure AD)
api.loganalytics.io (Log Analytics Service) *.applicationinsights.azure.com (Application Insights Service) appservice.azure.com (Azure App Services)
+*.arc.azure.net (Azure Arc)
asazure.windows.net (Analysis Services) bastion.azure.com (Azure Bastion Service) batch.azure.com (Azure Batch Service)
azure-relay Relay Hybrid Connections Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-protocol.md
Title: Azure Relay Hybrid Connections protocol guide | Microsoft Docs description: This article describes the client-side interactions with the Hybrid Connections relay for connecting clients in listener and sender roles. + Last updated 06/21/2022
The JSON content for `request` is as follows:
be used. * **body** – boolean. Indicates whether one or more binary body frames follow.
-``` JSON
+```json
{ "request" : { "address" : "wss://dc-node.servicebus.windows.net:443/$hc/{path}?...",
azure-resource-manager Bicep Functions Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-files.md
Title: Bicep functions - files description: Describes the functions to use in a Bicep file to load content from a file. Previously updated : 09/28/2022 Last updated : 10/10/2022 # File functions for Bicep
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
| Parameter | Required | Type | Description | |: |: |: |: |
-| filePath | Yes | string | The path to the file to load. The path is relative to the deployed Bicep file, and it should be a compile-time constant (cannot use variables). |
+| filePath | Yes | string | The path to the file to load. The path is relative to the deployed Bicep file. It can't include variables. |
### Remarks
-Use this function when you have binary content you would like to include in deployment. Rather than manually encoding the file to a base64 string and adding it to your Bicep file, load the file with this function. The file is loaded when the Bicep file is compiled to a JSON template. Hence variables cannot be used in filePath as they are not resolved at this stage. During deployment, the JSON template contains the contents of the file as a hard-coded string.
+Use this function when you have binary content you would like to include in deployment. Rather than manually encoding the file to a base64 string and adding it to your Bicep file, load the file with this function. The file is loaded when the Bicep file is compiled to a JSON template. You can't use variables in the file path because they haven't been resolved when compiling to the template. During deployment, the JSON template contains the contents of the file as a hard-coded string.
This function requires **Bicep version 0.4.412 or later**.
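As a rough sketch of the pattern (the file name and variable name below are illustrative assumptions, not taken from the article), the file is loaded with a literal path and embedded at compile time:

```bicep
// Sketch only: 'cert.pfx' is an assumed file sitting next to this Bicep file.
// Its contents are embedded in the compiled JSON template at build time, so
// the path must be a compile-time constant, not a variable.
var certificateContent = loadFileAsBinary('cert.pfx')

// 'certificateContent' would then be used wherever a resource property
// expects the raw file content.
```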
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
| Parameter | Required | Type | Description | |: |: |: |: |
-| filePath | Yes | string | The path to the file to load. The path is relative to the deployed Bicep file. The path is relative to the deployed Bicep file, and it should be a compile-time constant (cannot use variables). |
-| jsonPath | No | string | JSONPath expression to take only a part of the JSON into ARM. |
+| filePath | Yes | string | The path to the file to load. The path is relative to the deployed Bicep file. It can't include variables. |
+| jsonPath | No | string | JSONPath expression to specify that only part of the file is loaded. |
| encoding | No | string | The file encoding. The default value is `utf-8`. The available options are: `iso-8859-1`, `us-ascii`, `utf-16`, `utf-16BE`, or `utf-8`. | ### Remarks
-Use this function when you have JSON content or minified JSON content that is stored in a separate file. Rather than duplicating the JSON content in your Bicep file, load the content with this function. You can load a part of a JSON file by specifying a JSON path. The file is loaded when the Bicep file is compiled to the JSON template. Hence variables cannot be used in filePath as they are not resolved at this stage. During deployment, the JSON template contains the contents of the file as a hard-coded string.
+Use this function when you have JSON content or minified JSON content that is stored in a separate file. Rather than duplicating the JSON content in your Bicep file, load the content with this function. You can load a part of a JSON file by specifying a JSON path. The file is loaded when the Bicep file is compiled to the JSON template. You can't include variables in the file path because they haven't been resolved when compiling to the template. During deployment, the JSON template contains the contents of the file as a hard-coded string.
In VS Code, the properties of the loaded object are available to IntelliSense. For example, you can create a file with values to share across many Bicep files. An example is shown in this article.
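As a rough illustration of that pattern (not the article's own sample; the file name and property names are assumptions), a shared JSON file can be loaded whole or in part:

```bicep
// Sketch only: assumes shared-settings.json next to this Bicep file contains
// { "naming": { "storagePrefix": "stg" }, "tags": { "env": "dev" } }
var settings = loadJsonContent('shared-settings.json')

// Optionally load just part of the file by passing a JSONPath expression.
var naming = loadJsonContent('shared-settings.json', '$.naming')

output storagePrefix string = settings.naming.storagePrefix
output prefixFromPath string = naming.storagePrefix
```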
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
| Parameter | Required | Type | Description | |: |: |: |: |
-| filePath | Yes | string | The path to the file to load. The path is relative to the deployed Bicep file. The path is relative to the deployed Bicep file, and it should be a compile-time constant (cannot use variables). |
+| filePath | Yes | string | The path to the file to load. The path is relative to the deployed Bicep file. It can't contain variables. |
| encoding | No | string | The file encoding. The default value is `utf-8`. The available options are: `iso-8859-1`, `us-ascii`, `utf-16`, `utf-16BE`, or `utf-8`. | ### Remarks
-Use this function when you have content that is more stored in a separate file. Rather than duplicating the content in your Bicep file, load the content with this function. For example, you can load a deployment script from a file. The file is loaded when the Bicep file is compiled to the JSON template. Hence variables cannot be used in filePath as they are not resolved at this stage. During deployment, the JSON template contains the contents of the file as a hard-coded string.
+Use this function when you have content that is stored in a separate file. You can load the content rather than duplicating it in your Bicep file. For example, you can load a deployment script from a file. The file is loaded when the Bicep file is compiled to the JSON template. You can't include any variables in the file path because they haven't been resolved when compiling to the template. During deployment, the JSON template contains the contents of the file as a hard-coded string.
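A minimal sketch of this usage, assuming a shell script and a legacy text file stored next to the Bicep file (the file names and the encoding choice are illustrative):

```bicep
// Sketch only: 'scripts/setup.sh' and 'notes.txt' are assumed files.
var setupScript = loadTextContent('scripts/setup.sh')

// The optional second argument overrides the default utf-8 encoding.
var legacyNotes = loadTextContent('notes.txt', 'iso-8859-1')

output setupScriptLength int = length(setupScript)
output legacyNotesLength int = length(legacyNotes)
```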
Use the [`loadJsonContent()`](#loadjsoncontent) function to load JSON files.
azure-resource-manager Resource Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/resource-dependencies.md
Title: Set resource dependencies in Bicep description: Describes how to specify the order resources are deployed. Previously updated : 03/02/2022 Last updated : 10/05/2022 # Resource dependencies in Bicep
resource myParent 'My.Rp/parentType@2020-01-01' = {
} ```
+A resource that includes the [parent](./child-resource-name-type.md) property has an implicit dependency on the parent resource. It depends on the parent resource, not any of its other child resources.
+
+The following example shows a storage account and file service. The file service has an implicit dependency on the storage account.
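A minimal sketch of that pattern (the resource names and API version are illustrative assumptions, not the article's own sample):

```bicep
resource storage 'Microsoft.Storage/storageAccounts@2021-06-01' = {
  name: 'examplestorage01'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

// Setting 'parent' creates the implicit dependency on the storage account,
// so no explicit dependsOn entry is needed.
resource fileService 'Microsoft.Storage/storageAccounts/fileServices@2021-06-01' = {
  parent: storage
  name: 'default'
}
```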
++ When an implicit dependency exists, **don't add an explicit dependency**. For more information about nested resources, see [Set name and type for child resources in Bicep](./child-resource-name-type.md).
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md
Title: Create & deploy template specs in Bicep description: Describes how to create template specs in Bicep and share them with other users in your organization. + Last updated 08/23/2022
To deploy the template spec, you use standard Azure tools like PowerShell, Azure
> [!NOTE] > To use template specs in Bicep with Azure PowerShell, you must install [version 6.3.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.27.0 or later](/cli/azure/install-azure-cli).
-When designing your deployment, always consider the lifecycle of the resources and group the resources that share similar lifecycle into a single template spec. For instance, your deployments include multiple instances of Cosmos DB with each instance containing its own databases and containers. Given the databases and the containers don't change much, you want to create one template spec to include a Cosmo DB instance and its underlying databases and containers. You can then use conditional statements in your Bicep along with copy loops to create multiple instances of these resources.
+When designing your deployment, always consider the lifecycle of the resources, and group the resources that share a similar lifecycle into a single template spec. For instance, your deployments include multiple instances of Azure Cosmos DB, with each instance containing its own databases and containers. Given that the databases and the containers don't change much, you want to create one template spec to include an Azure Cosmos DB instance and its underlying databases and containers. You can then use conditional statements in your Bicep file along with copy loops to create multiple instances of these resources.
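As a rough sketch of that idea (the parameter names and API version below are assumptions), the template spec's Bicep file could loop over instance names and guard the deployment with a condition:

```bicep
// Sketch only: parameter names and API version are illustrative.
param cosmosAccountNames array = []
param deployCosmos bool = true

resource cosmosAccounts 'Microsoft.DocumentDB/databaseAccounts@2022-05-15' = [for accountName in cosmosAccountNames: if (deployCosmos) {
  name: accountName
  location: resourceGroup().location
  kind: 'GlobalDocumentDB'
  properties: {
    databaseAccountOfferType: 'Standard'
    locations: [
      {
        locationName: resourceGroup().location
        failoverPriority: 0
      }
    ]
  }
}]
```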
> [!TIP] > The choice between module registry and template specs is mostly a matter of preference. There are a few things to consider when you choose between the two:
azure-resource-manager Custom Providers Action Endpoint How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/custom-providers-action-endpoint-how-to.md
Title: Adding custom actions to Azure REST API description: Learn how to add custom actions to the Azure REST API. This article will walk through the requirements and best practices for endpoints that wish to implement custom actions. + Last updated 06/20/2019
Sample **ResourceProvider**:
An **endpoint** that implements an **action** must handle the request and response for the new API in Azure. When a custom resource provider with an **action** is created, it will generate a new set of APIs in Azure. In this case, the action will generate a new Azure action API for `POST` calls:
-``` JSON
+```http
/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomAction ``` Azure API Incoming Request:
-``` HTTP
+```http
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomAction?api-version=2018-09-01-preview Authorization: Bearer eyJ0e... Content-Type: application/json
Content-Type: application/json
This request will then be forwarded to the **endpoint** in the form:
-``` HTTP
+```http
POST https://{endpointURL}/?api-version=2018-09-01-preview Content-Type: application/json X-MS-CustomProviders-RequestPath: /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomAction
Similarly, the response from the **endpoint** is then forwarded back to the cust
- A valid JSON object document. All arrays and strings should be nested under a top object. - The `Content-Type` header should be set to "application/json; charset=utf-8".
-``` HTTP
+```http
HTTP/1.1 200 OK Content-Type: application/json; charset=utf-8
Content-Type: application/json; charset=utf-8
Azure Custom Resource Provider Response:
-``` HTTP
+```http
HTTP/1.1 200 OK Content-Type: application/json; charset=utf-8
Sample **ResourceProvider** with List Action:
Sample Azure Resource Manager Template:
-``` JSON
+```json
{ "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0",
azure-resource-manager Custom Providers Resources Endpoint How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/custom-providers-resources-endpoint-how-to.md
Title: Adding custom resources to Azure REST API description: Learn how to add custom resources to the Azure REST API. This article will walk through the requirements and best practices for endpoints that wish to implement custom resources. + Last updated 06/20/2019
An **endpoint** that implements a **resourceType** must handle the request and
Manipulate Single Resource (`PUT`, `GET`, and `DELETE`):
-``` JSON
+```http
/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomResource/{myCustomResourceName} ``` Retrieve All Resources (`GET`):
-``` JSON
+```http
/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomResource ```
Azure Resource Manager Templates require that `id`, `name`, and `type` are retur
Sample **endpoint** response:
-``` JSON
+```json
{ "properties": { "myProperty1": "myPropertyValue1",
azure-resource-manager Proxy Resource Endpoint Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/proxy-resource-endpoint-reference.md
Title: Custom resource proxy reference description: Custom resource proxy reference for Azure Custom Resource Providers. This article will go through the requirements for endpoints implementing proxy custom resources. + Last updated 05/13/2022
An endpoint that implements a "Proxy" resource endpoint must handle the request
**Sample resource**:
-``` JSON
+```json
{ "name": "{myCustomResourceName}", "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomResources/{myCustomResourceName}",
azure-resource-manager Publish Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-notifications.md
Title: Azure managed applications with notifications description: Configure an Azure managed application with webhook endpoints to receive notifications about creates, updates, deletes, and errors on the managed application instances. + Last updated 08/18/2022
To get started, see [Publish a service catalog application through Azure portal]
> [!NOTE] > You can only supply one endpoint in the `notificationEndpoints` property of the managed application definition.
-``` JSON
+```json
{ "properties": { "isEnabled": true,
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
Title: Resource providers by Azure services
description: Lists all resource provider namespaces for Azure Resource Manager and shows the Azure service for that namespace. Last updated 02/28/2022-+ # Resource providers for Azure services
The resources providers that are marked with **- registered** are registered by
| Microsoft.DevSpaces | [Azure Dev Spaces](/previous-versions/azure/dev-spaces/) | | Microsoft.DevTestLab | [Azure Lab Services](../../lab-services/index.yml) | | Microsoft.DigitalTwins | [Azure Digital Twins](../../digital-twins/overview.md) |
-| Microsoft.DocumentDB | [Azure Cosmos DB](../../cosmos-db/index.yml) |
+| Microsoft.DocumentDB | [Azure Cosmos DB](../../cosmos-db/index.yml) |
| Microsoft.DomainRegistration | [App Service](../../app-service/index.yml) | | Microsoft.DynamicsLcs | [Lifecycle Services](https://lcs.dynamics.com/Logon/Index ) | | Microsoft.EnterpriseKnowledgeGraph | Enterprise Knowledge Graph |
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
For more information, see [Functions Hosting plans comparison](../../azure-funct
[!INCLUDE [container-service-limits](../../../includes/container-service-limits.md)]
+## Azure Lab Services
++ ## Azure Load Testing limits For Azure Load Testing limits, see [Service limits in Azure Load Testing](../../load-testing/resource-limits-quotas-capacity.md).
There are limits, per subscription, for deploying resources using Compute Galler
- 1,000 image definitions, per subscription, per region - 10,000 image versions, per subscription, per region
-## Virtual machine scale sets limits
+## Virtual Machine Scale Sets limits
[!INCLUDE [virtual-machine-scale-sets-limits](../../../includes/azure-virtual-machine-scale-sets-limits.md)]
azure-resource-manager Control Plane And Data Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/control-plane-and-data-plane.md
Title: Control plane and data plane operations description: Describes the difference between control plane and data plane operations. Control plane operations are handled by Azure Resource Manager. Data plane operations are handled by a service. + Last updated 09/10/2020 # Azure control plane and data plane
For example:
* You create a storage account through the control plane. You use the data plane to read and write data in the storage account.
-* You create a Cosmos database through the control plane. To query data in the Cosmos database, you use the data plane.
+* You create an Azure Cosmos DB database through the control plane. To query data in the Azure Cosmos DB database, you use the data plane.
## Control plane
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/overview.md
Title: Azure Resource Manager overview description: Describes how to use Azure Resource Manager for deployment, management, and access control of resources on Azure. Previously updated : 09/26/2022 Last updated : 10/05/2022 # What is Azure Resource Manager?
There are some important factors to consider when defining your resource group:
The Azure Resource Manager service is designed for resiliency and continuous availability. Resource Manager and control plane operations (requests sent to `management.azure.com`) in the REST API are:
-* Distributed across regions. Some services are regional.
+* Distributed across regions. Although Azure Resource Manager is distributed across regions, some services are regional. This distinction means that while the initial handling of the control plane operation is resilient, the request may be susceptible to regional outages when forwarded to the service.
* Distributed across Availability Zones (and regions) in locations that have multiple Availability Zones.
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
description: Shows the rules and restrictions for naming Azure resources.
Previously updated : 09/28/2022 Last updated : 10/05/2022 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> | Entity | Scope | Length | Valid Characters | > | | | | | > | workspaces | resource group | 3-33 | Alphanumerics and hyphens. |
-> | workspaces / computes | workspace | 2-16 | Alphanumerics and hyphens. |
+> | workspaces / computes | workspace | 3-24 for compute instance<br>3-32 for AML compute<br>2-16 for other compute types | Alphanumerics and hyphens. |
## Microsoft.ManagedIdentity
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
azure-resource-manager Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/syntax.md
Title: Template structure and syntax description: Describes the structure and properties of Azure Resource Manager templates (ARM templates) using declarative JSON syntax. + Last updated 09/28/2022
You define resources with the following structure:
| tags |No |Tags that are associated with the resource. Apply tags to logically organize resources across your subscription. | | identity | No | Some resources support [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). Those resources have an identity object at the root level of the resource declaration. You can set whether the identity is user-assigned or system-assigned. For user-assigned identities, provide a list of resource IDs for the identities. Set the key to the resource ID and the value to an empty object. For more information, see [Configure managed identities for Azure resources on an Azure VM using templates](../../active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md). | | sku | No | Some resources allow values that define the SKU to deploy. For example, you can specify the type of redundancy for a storage account. |
-| kind | No | Some resources allow a value that defines the type of resource you deploy. For example, you can specify the type of Cosmos DB to create. |
+| kind | No | Some resources allow a value that defines the type of resource you deploy. For example, you can specify the type of Azure Cosmos DB instance to create. |
| scope | No | The scope property is only available for [extension resource types](../management/extension-resource-types.md). Use it when specifying a scope that is different than the deployment scope. See [Setting scope for extension resources in ARM templates](scope-extension-resources.md). | | copy |No |If more than one instance is needed, the number of resources to create. The default mode is parallel. Specify serial mode when you don't want all or the resources to deploy at the same time. For more information, see [Create several instances of resources in Azure Resource Manager](copy-resources.md). | | plan | No | Some resources allow values that define the plan to deploy. For example, you can specify the marketplace image for a virtual machine. |
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
Title: Template functions - resources
description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Last updated 09/09/2022-+ # Resource functions for ARM templates
You can use the response from `pickZones` to determine whether to provide null f
}, ```
-Cosmos DB isn't a zonal resource but you can use the `pickZones` function to determine whether to enable zone redundancy for georeplication. Pass the **Microsoft.Storage/storageAccounts** resource type to determine whether to enable zone redundancy.
+Azure Cosmos DB isn't a zonal resource, but you can use the `pickZones` function to determine whether to enable zone redundancy for georeplication. Pass the **Microsoft.Storage/storageAccounts** resource type to determine whether to enable zone redundancy.
:::code language="json" source="~/resourcemanager-templates/azure-resource-manager/functions/resource/pickzones-cosmosdb.json":::
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs.md
Title: Create & deploy template specs
description: Describes how to create template specs and share them with other users in your organization. Last updated 01/12/2022-+ # Azure Resource Manager template specs
To deploy the template spec, you use standard Azure tools like PowerShell, Azure
> [!NOTE] > To use template spec with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
-When designing your deployment, always consider the lifecycle of the resources and group the resources that share similar lifecycle into a single template spec. For instance, your deployments include multiple instances of Cosmos DB with each instance containing its own databases and containers. Given the databases and the containers don't change much, you want to create one template spec to include a Cosmo DB instance and its underlying databases and containers. You can then use conditional statements in your templates along with copy loops to create multiple instances of these resources.
+When designing your deployment, always consider the lifecycle of the resources, and group the resources that share a similar lifecycle into a single template spec. For example, your deployments include multiple instances of Azure Cosmos DB, with each instance containing its own databases and containers. Given that the databases and the containers don't change much, you want to create one template spec to include an Azure Cosmos DB instance and its underlying databases and containers. You can then use conditional statements in your templates along with copy loops to create multiple instances of these resources.
### Training resources
azure-signalr Concept Upstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-upstream.md
The URL of upstream is not encrypted at rest. If you have any sensitive informa
For example, a complete reference would look like the following: ```
- @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
+ {@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)}
+ ```
+
+ An upstream URL to an Azure Function would look like the following:
+ ```
+ https://contoso.azurewebsites.net/runtime/webhooks/signalr?code={@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)}
``` > [!NOTE]
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
azure-signalr Signalr Concept Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-azure-functions.md
Title: Build Real-time app - Azure Functions & Azure SignalR Service
description: Learn how to develop real-time serverless web application with Azure SignalR Service by following example. + Last updated 11/13/2019
Azure Functions allow you to write code in [several languages](../azure-function
- Event Grid - Event Hubs - Service Bus
- - Cosmos DB change feed
+ - Azure Cosmos DB change feed
- Storage - blobs and queues - Logic Apps connectors such as Salesforce and SQL Server
By using Azure Functions to integrate these events with Azure SignalR Service, y
Some common scenarios for real-time serverless messaging that you can implement with Azure Functions and SignalR Service include: * Visualize IoT device telemetry on a real-time dashboard or map
-* Update data in an application when documents update in Cosmos DB
+* Update data in an application when documents update in Azure Cosmos DB
* Send in-app notifications when new orders are created in Salesforce ## SignalR Service bindings for Azure Functions
The SignalR Service bindings for Azure Functions allow an Azure Function app to
### An example scenario
-An example of how to use the SignalR Service bindings is using Azure Functions to integrate with Azure Cosmos DB and SignalR Service to send real-time messages when new events appear on a Cosmos DB change feed.
+An example of how to use the SignalR Service bindings is using Azure Functions to integrate with Azure Cosmos DB and SignalR Service to send real-time messages when new events appear on an Azure Cosmos DB change feed.
-![Cosmos DB, Azure Functions, SignalR Service](media/signalr-concept-azure-functions/signalr-cosmosdb-functions.png)
+![Azure Cosmos DB, Azure Functions, SignalR Service](media/signalr-concept-azure-functions/signalr-cosmosdb-functions.png)
-1. A change is made in a Cosmos DB collection
-2. The change event is propagated to the Cosmos DB change feed
-3. An Azure Functions is triggered by the change event using the Cosmos DB trigger
+1. A change is made in an Azure Cosmos DB collection
+2. The change event is propagated to the Azure Cosmos DB change feed
+3. An Azure Function is triggered by the change event using the Azure Cosmos DB trigger
4. The SignalR Service output binding publishes a message to SignalR Service 5. SignalR Service publishes the message to all connected clients
azure-signalr Signalr Howto Scale Signalr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-scale-signalr.md
This article shows you how to scale your instance of Azure SignalR Service. There are two scenarios for scaling: scale up and scale out. * [Scale up](https://en.wikipedia.org/wiki/Scalability#Horizontal_and_vertical_scaling): Get more units, connections, messages, and more. You scale up by changing the pricing tier from Free to Standard.
-* [Scale out](https://en.wikipedia.org/wiki/Scalability#Horizontal_and_vertical_scaling): Increase the number of SignalR units. You can scale out to as many as 100 units. There are limited unit options to select for the scaling: 1, 2, 5, 10, 20, 50 and 100 units for a single SignalR Service instance.
+* [Scale out](https://en.wikipedia.org/wiki/Scalability#Horizontal_and_vertical_scaling): Increase the number of SignalR units. You can scale out to as many as 100 units. The unit counts available for a single SignalR Service instance are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100.
The scale settings take a few minutes to apply. In rare cases, it may take around 30 minutes to apply. Scaling doesn't require you to change your code or redeploy your server application. For information about the pricing and capacities of individual SignalR Service, see [Azure SignalR Service Pricing Details](https://azure.microsoft.com/pricing/details/signalr-service/). > [!NOTE]
-> Changing SignalR Service from **Free** tier to **Standard** tier or vice versa, the public service IP will be changed and it usually takes 30-60 minutes to propagate the change to DNS servers across the entire internet.
+> When you change SignalR Service from the **Free** tier to the **Standard** or **Premium** tier, or vice versa, the public service IP will change, and it usually takes 30-60 minutes to propagate the change to DNS servers across the entire internet.
> Your service might be unreachable before DNS gets updated. Generally it's not recommended to change your pricing tier too often.
-## Scale on Azure portal
+## Scale Up on Azure portal
1. In your browser, open the [Azure portal](https://portal.azure.com).
-2. In your SignalR Service page, from the left menu, select **Scale**.
+2. In your SignalR Service page, from the left menu, select **Scale Up**.
-3. Choose your pricing tier, and then select **Select**. Set the unit count for **Standard** Tier.
+3. Select **Change** and choose the **Standard** tier in the pop-out pane.
- ![Scale on Portal](./media/signalr-howto-scale/signalr-howto-scale.png)
+ ![Screenshot of scaling up on Portal.](./media/signalr-howto-scale/signalr-howto-scale-up.png)
4. Select **Save**. +
+## Scale Out on Azure portal
+
+1. In your browser, open the [Azure portal](https://portal.azure.com).
+
+2. In your SignalR Service page, from the left menu, select **Scale Out**.
+
+3. Choose the unit count in the **Manual scale** section.
+
+ ![Screenshot of scaling out on Portal.](./media/signalr-howto-scale/signalr-howto-scale-out.png)
+
+4. Select **Save**.
++ ## Scale using Azure CLI This script creates a new SignalR Service resource of **Free** Tier and a new resource group, and scales it up to **Standard** Tier.
azure-sql-edge Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-portal.md
Title: Deploy Azure SQL Edge using the Azure portal
description: Learn how to deploy Azure SQL Edge using the Azure portal - Previously updated : 09/22/2020+ Last updated : 09/16/2022 keywords: deploy SQL Edge-
-# Deploy Azure SQL Edge
+# Deploy Azure SQL Edge
-Azure SQL Edge is a relational database engine optimized for IoT and Azure IoT Edge deployments. It provides capabilities to create a high-performance data storage and processing layer for IoT applications and solutions. This quickstart shows you how to get started with creating an Azure SQL Edge module through Azure IoT Edge using the Azure portal.
+Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] is a relational database engine optimized for IoT and Azure IoT Edge deployments. It provides capabilities to create a high-performance data storage and processing layer for IoT applications and solutions. This quickstart shows you how to get started with creating an Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] module through Azure IoT Edge using the Azure portal.
## Before you begin
Azure SQL Edge is a relational database engine optimized for IoT and Azure IoT E
* Create an [Azure IoT Hub](../iot-hub/iot-hub-create-through-portal.md). * Create an [Azure IoT Edge device](../iot-edge/how-to-provision-single-device-linux-symmetric.md).
-> [!NOTE]
+> [!NOTE]
> To deploy an Azure Linux VM as an IoT Edge device, see this [quickstart guide](../iot-edge/quickstart-linux.md). ## Deploy SQL Edge Module from Azure Marketplace
-Azure Marketplace is an online applications and services marketplace where you can browse through a wide range of enterprise applications and solutions that are certified and optimized to run on Azure, including [IoT Edge modules](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1&subcategories=iot-edge-modules). Azure SQL Edge can be deployed to an edge device through the marketplace.
+Azure Marketplace is an online applications and services marketplace where you can browse through a wide range of enterprise applications and solutions that are certified and optimized to run on Azure, including [IoT Edge modules](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1&subcategories=iot-edge-modules). Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] can be deployed to an edge device through the marketplace.
-1. Find the Azure SQL Edge module on the Azure Marketplace.<br><br>
+1. Find the Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] module on the Azure Marketplace.
- ![SQL Edge in MarketPlace](media/deploy-portal/find-offer-marketplace.png)
+ :::image type="content" source="media/deploy-portal/find-offer-marketplace.png" alt-text="Screenshot of SQL Edge in Marketplace.":::
-2. Pick the software plan that best matches your requirements and click **Create**. <br><br>
+1. Pick the software plan that best matches your requirements and select **Create**.
- ![Pick the correct software plan](media/deploy-portal/pick-correct-plan.png)
+ :::image type="content" source="media/deploy-portal/pick-correct-plan.png" alt-text="Screenshot showing how to pick the correct software plan.":::
-3. On the Target Devices for IoT Edge Module page, specify the following details and then click **Create**
+1. On the Target Devices for IoT Edge Module page, specify the following details and then select **Create**.
- |**Field** |**Description** |
+ | Field | Description |
|||
- |Subscription | The Azure subscription under which the IoT Hub was created |
- |IoT Hub | Name of the IoT Hub where the IoT Edge device is registered and then select "Deploy to a device" option|
- |IoT Edge Device Name | Name of the IoT Edge device where SQL Edge would be deployed |
+ | **Subscription** | The Azure subscription under which the IoT Hub was created |
+ | **IoT Hub** | Name of the IoT Hub where the IoT Edge device is registered and then select "Deploy to a device" option |
+ | **IoT Edge Device Name** | Name of the IoT Edge device where [!INCLUDE [sql-edge](../../includes/sql-edge.md)] would be deployed |
-4. On the **Set Modules on device:** page, click on the Azure SQL Edge module under **IoT Edge Modules**. The default module name is set to *AzureSQLEdge*.
+1. On the **Set Modules on device:** page, select the Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] module under **IoT Edge Modules**. The default module name is set to *AzureSQLEdge*.
-5. On the *Module Settings* section of the **Update IoT Edge Module** blade, specify the desired values for the *IoT Edge Module Name*, *Restart Policy* and *Desired Status*.
+1. On the *Module Settings* section of the **Update IoT Edge Module** pane, specify the desired values for the *IoT Edge Module Name*, *Restart Policy* and *Desired Status*.
- > [!IMPORTANT]
- > Do not change or update the **Image URI** settings on the module.
+ > [!IMPORTANT]
+ > Don't change or update the **Image URI** settings on the module.
-6. On the *Environment Variables* section of the **Update IoT Edge Module** blade, specify the desired values for the environment variables. For a complete list of Azure SQL Edge environment variables refer [Configure using environment variables](configure.md#configure-by-using-environment-variables). The following default environment variables are defined for the module.
+1. On the *Environment Variables* section of the **Update IoT Edge Module** pane, specify the desired values for the environment variables. For a complete list of Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] environment variables, see [Configure using environment variables](configure.md#configure-by-using-environment-variables). The following default environment variables are defined for the module.
|**Parameter** |**Description**| |||
- | MSSQL_SA_PASSWORD | Change the default value to specify a strong password for the SQL Edge admin account. |
- | MSSQL_LCID | Change the default value to set the desired language ID to use for SQL Edge. For example, 1036 is French. |
- | MSSQL_COLLATION | Change the default value to set the default collation for SQL Edge. This setting overrides the default mapping of language ID (LCID) to collation. |
+ | MSSQL_SA_PASSWORD | Change the default value to specify a strong password for the [!INCLUDE [sql-edge](../../includes/sql-edge.md)] admin account. |
+ | MSSQL_LCID | Change the default value to set the desired language ID to use for [!INCLUDE [sql-edge](../../includes/sql-edge.md)]. For example, 1036 is French. |
+ | MSSQL_COLLATION | Change the default value to set the default collation for [!INCLUDE [sql-edge](../../includes/sql-edge.md)]. This setting overrides the default mapping of language ID (LCID) to collation. |
- > [!IMPORTANT]
- > Do not change or update the **ACCEPT_EULA** environment variable for the module.
+ > [!IMPORTANT]
+ > Don't change or update the `ACCEPT_EULA` environment variable for the module.
-7. On the *Container Create Options* section of the **Update IoT Edge Module** blade, update the following options as per requirement.
- - **Host Port :** Map the specified host port to port 1433 (default SQL port) in the container.
- - **Binds** and **Mounts :** If you need to deploy more than one SQL Edge module, ensure that you update the mounts option to create a new source & target pair for the persistent volume. For more information on mounts and volume, refer [Use volumes](https://docs.docker.com/storage/volumes/) on docker documentation.
+1. On the *Container Create Options* section of the **Update IoT Edge Module** pane, update the following options as per requirement.
+
+ - **Host Port**
+
+ Map the specified host port to port 1433 (default SQL port) in the container.
+
+ - **Binds** and **Mounts**
+
+   If you need to deploy more than one [!INCLUDE [sql-edge](../../includes/sql-edge.md)] module, ensure that you update the mounts option to create a new source and target pair for the persistent volume. For more information on mounts and volumes, see [Use volumes](https://docs.docker.com/storage/volumes/) in the Docker documentation.
```json {
Azure Marketplace is an online applications and services marketplace where you c
] } ```
- > [!IMPORTANT]
- > Do not change the `PlanId` enviroment variable defined in the create config setting. If this value is changed, the Azure SQL Edge container will fail to start.
-
-8. On the **Update IoT Edge Module** pane, click **Update**.
-9. On the **Set modules on device** page click **Next: Routes >** if you need to define routes for your deployment. Otherwise click **Review + Create**. For more information on configuring routes, see [Deploy modules and establish routes in IoT Edge](../iot-edge/module-composition.md).
-11. On the **Set modules on device** page, click **Create**.
+
+ > [!IMPORTANT]
+ > Do not change the `PlanId` environment variable defined in the create config setting. If this value is changed, the Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] container will fail to start.
+
+ > [!WARNING]
+ > If you reinstall the module, remember to remove any existing bindings first, otherwise your environment variables will not be updated.
+
+1. On the **Update IoT Edge Module** pane, select **Update**.
+1. On the **Set modules on device** page, select **Next: Routes >** if you need to define routes for your deployment. Otherwise, select **Review + Create**. For more information on configuring routes, see [Deploy modules and establish routes in IoT Edge](../iot-edge/module-composition.md).
+1. On the **Set modules on device** page, select **Create**.
## Connect to Azure SQL Edge
-The following steps use the Azure SQL Edge command-line tool, **sqlcmd**, inside the container to connect to Azure SQL Edge.
+The following steps use the Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] command-line tool, **sqlcmd**, inside the container to connect to Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)].
-> [!NOTE]
-> SQL Command line tools (sqlcmd) are not available inside the ARM64 version of Azure SQL Edge containers.
+> [!NOTE]
+> SQL Server command line tools, including **sqlcmd**, are not available inside the ARM64 version of Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] containers.
1. Use the `docker exec -it` command to start an interactive bash shell inside your running container. In the following example, `AzureSQLEdge` is the name specified by the `Name` parameter of your IoT Edge Module.
The following steps use the Azure SQL Edge command-line tool, **sqlcmd**, inside
sudo docker exec -it AzureSQLEdge "bash" ```
-2. Once inside the container, connect locally with sqlcmd. Sqlcmd is not in the path by default, so you have to specify the full path.
+1. Once inside the container, connect locally with the **sqlcmd** tool. **sqlcmd** isn't in the path by default, so you have to specify the full path.
```bash /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "<YourNewStrong@Passw0rd>" ```
- > [!TIP]
+ > [!TIP]
> You can omit the password on the command-line to be prompted to enter it.
-3. If successful, you should get to a **sqlcmd** command prompt: `1>`.
+1. If successful, you should get to a **sqlcmd** command prompt: `1>`.
## Create and query data
The following steps create a new database named `TestDB`.
1. From the **sqlcmd** command prompt, paste the following Transact-SQL command to create a test database: ```sql
- CREATE DATABASE TestDB
+ CREATE DATABASE TestDB;
Go ```
-2. On the next line, write a query to return the name of all of the databases on your server:
+1. On the next line, write a query to return the name of all of the databases on your server:
```sql
- SELECT Name from sys.Databases
+ SELECT Name from sys.databases;
Go ``` ### Insert data
-Next create a new table, `Inventory`, and insert two new rows.
+Next, create a new table called `Inventory`, and insert two new rows.
1. From the **sqlcmd** command prompt, switch context to the new `TestDB` database: ```sql
- USE TestDB
+ USE TestDB;
```
-2. Create new table named `Inventory`:
+1. Create new table named `Inventory`:
```sql CREATE TABLE Inventory (id INT, name NVARCHAR(50), quantity INT) ```
-3. Insert data into the new table:
+1. Insert data into the new table:
```sql INSERT INTO Inventory VALUES (1, 'banana', 150); INSERT INTO Inventory VALUES (2, 'orange', 154); ```
-4. Type `GO` to execute the previous commands:
+1. Type `GO` to execute the previous commands:
```sql GO
Now, run a query to return data from the `Inventory` table.
SELECT * FROM Inventory WHERE quantity > 152; ```
-2. Execute the command:
+1. Execute the command:
```sql GO
Now, run a query to return data from the `Inventory` table.
QUIT ```
-2. To exit the interactive command-prompt in your container, type `exit`. Your container continues to run after you exit the interactive bash shell.
+1. To exit the interactive command-prompt in your container, type `exit`. Your container continues to run after you exit the interactive bash shell.
## Connect from outside the container
-You can connect and run SQL queries against your Azure SQL Edge instance from any external Linux, Windows, or macOS tool that supports SQL connections. For more information on connecting to a SQL Edge container from outside, refer [Connect and Query Azure SQL Edge](./connect.md).
+You can connect and run SQL queries against your Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] instance from any external Linux, Windows, or macOS tool that supports SQL connections. For more information on connecting to a [!INCLUDE [sql-edge](../../includes/sql-edge.md)] container from outside, see [Connect and Query Azure SQL Edge](./connect.md).
-In this quickstart, you deployed a SQL Edge Module on an IoT Edge device.
+In this quickstart, you deployed a [!INCLUDE [sql-edge](../../includes/sql-edge.md)] Module on an IoT Edge device.
-## Next Steps
+## Next steps
- [Machine Learning and Artificial Intelligence with ONNX in SQL Edge](onnx-overview.md) - [Building an end to end IoT Solution with SQL Edge using IoT Edge](tutorial-deploy-azure-resources.md)
azure-video-indexer Edit Transcript Lines Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-transcript-lines-portal.md
While in the edit mode, hover between two transcription lines. You'll find a gap
:::image type="content" alt-text="Screenshot of how to add new transcription." source="./media/edit-transcript-lines-portal/add-new-transcription-line.png":::
-After clicking the add new transcription line, there will be an option to add the new text and the time stamp for the new line. Enter the text, choose the time stamp for the new line, and select **save**. Default timestamp is the gap between the previous and next transcript line.
+After you select the add new transcription line option, you can enter the new text and choose the time stamp for the new line, and then select **save**. The default time stamp is the gap between the previous and next transcript line.
:::image type="content" alt-text="Screenshot of a new transcript time stamp line." source="./media/edit-transcript-lines-portal/transcript-time-stamp.png":::
Choose an existing line in the transcript line, click the **three dots** icon, s
## Edit existing line
-While in the edit mode, select the three dots icon. The editing options were enhanced, they now contain not just the text but also the timestamp with accuracy of milliseconds.
+While in the edit mode, select the three dots icon. The enhanced editing options now include not only the text but also the time stamp, with millisecond accuracy.
## Delete line
Lines can now be deleted through the same three dots icon.
## Example how and when to use this feature
-To consolidate two lines which you believe should appear as one.
+To consolidate two lines that you believe should appear as one:
1. Go to line number 2, select edit. 1. Copy the text
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
description: This article provides a comprehensive list of language support by s
+ Last updated 09/14/2022
Last updated 09/14/2022
This article provides a comprehensive list of language support by service features in Azure Video Indexer. For the list and definitions of all the features, see [Overview](video-indexer-overview.md).
+The list below contains the source languages for transcription that are supported by the Video Indexer API.
+ > [!NOTE]
-> The list below contains the source languages for transcription that are supported by the Video Indexer API. Some languages are supported only through the
-> API and not through the Video Indexer website or widgets.
+> Some languages are supported only through the API and not through the Video Indexer website or widgets.
> > To make sure a language is supported for search, transcription, or translation by the Azure Video Indexer website and widgets, see the [frontend language > support table](#language-support-in-frontend-experiences) further below. ## General language support
-This section describes language support in Azure Video Indexer.
+This section describes the languages supported by the Azure Video Indexer API.
- Transcription (source language of the video/audio file) - Language identification (LID)
This section describes language support in Azure Video Indexer.
| **Language** | **Code** | **Transcription** | **LID** | **MLID** | **Translation** | **Customization** (Language model) | |::|:--:|:--:|:-:|:-:|:-:|::|
-| Afrikaans | `af-ZA` | | | | ✔ | |
+| Afrikaans | `af-ZA` | | ✔ | ✔ | ✔ | |
| Arabic (Israel) | `ar-IL` | ✔ | | | ✔ | ✔ |
-| Arabic (Jordan) | `ar-JO` | ✔ | | | ✔ | ✔ |
-| Arabic (Kuwait) | `ar-KW` | ✔ | | | ✔ | ✔ |
+| Arabic (Jordan) | `ar-JO` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Arabic (Kuwait) | `ar-KW` | ✔ | ✔ | ✔ | ✔ | ✔ |
| Arabic (Lebanon) | `ar-LB` | ✔ | | | ✔ | ✔ |
-| Arabic (Oman) | `ar-OM` | ✔ | | | ✔ | ✔ |
+| Arabic (Oman) | `ar-OM` | ✔ | ✔ | ✔ | ✔ | ✔ |
| Arabic (Palestinian Authority) | `ar-PS` | ✔ | | | ✔ | ✔ |
-| Arabic (Qatar) | `ar-QA` | ✔ | | | ✔ | ✔ |
-| Arabic (Saudi Arabia) | `ar-SA` | ✔ | | | ✔ | ✔ |
-| Arabic (United Arab Emirates) | `ar-AE` | ✔ | | | ✔ | ✔ |
-| Arabic Egypt | `ar-EG` | ✔ | | | ✔ | ✔ |
-| Arabic Modern Standard (Bahrain) | `ar-BH` | ✔ | | | ✔ | ✔ |
-| Arabic Syrian Arab Republic | `ar-SY` | ✔ | | | ✔ | ✔ |
-| Bangla | `bn-BD` | | | | ✔ | |
-| Bosnian | `bs-Latn` | | | | ✔ | |
-| Bulgarian | `bg-BG` | | | | ✔ | |
-| Catalan | `ca-ES` | | | | ✔ | |
-| Chinese (Cantonese Traditional) | `zh-HK` | ✔ | | ✔ | ✔ | ✔ |
+| Arabic (Qatar) | `ar-QA` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Arabic (Saudi Arabia) | `ar-SA` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Arabic (United Arab Emirates) | `ar-AE` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Arabic Egypt | `ar-EG` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Arabic Modern Standard (Bahrain) | `ar-BH` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Arabic Syrian Arab Republic | `ar-SY` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Bangla | `bn-BD` | | ✔ | ✔ | ✔ | |
+| Bosnian | `bs-Latn` | | ✔ | ✔ | ✔ | |
+| Bulgarian | `bg-BG` | | ✔ | ✔ | ✔ | |
+| Catalan | `ca-ES` | | ✔ | ✔ | ✔ | |
+| Chinese (Cantonese Traditional) | `zh-HK` | ✔ | ✔ | ✔ | ✔ | ✔ |
| Chinese (Simplified) | `zh-Hans` | ✔ | | | ✔ | ✔ |
-| Chinese (Traditional) | `zh-Hant` | | | | ✔ | |
-| Croatian | `hr-HR` | | | | ✔ | |
-| Czech | `cs-CZ` | ✔ | | | ✔ | ✔ |
-| Danish | `da-DK` | ✔ | | | ✔ | ✔ |
-| Dutch | `nl-NL` | ✔ | | | ✔ | ✔ |
-| English Australia | `en-AU` | ✔ | | | ✔ | ✔ |
-| English United Kingdom | `en-GB` | ✔ | | | ✔ | ✔ |
-| English United States | `en-US` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Estonian | `et-EE` | | | | ✔ | |
-| Fijian | `en-FJ` | | | | ✔ | |
-| Filipino | `fil-PH` | | | | ✔ | |
-| Finnish | `fi-FI` | ✔ | | | ✔ | ✔ |
-| French | `fr-FR` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| French (Canada) | `fr-CA` | ✔ | | | ✔ | ✔ |
-| German | `de-DE` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Greek | `el-GR` | | | | ✔ | |
-| Haitian | `fr-HT` | | | | ✔ | |
-| Hebrew | `he-IL` | ✔ | | | ✔ | ✔ |
-| Hindi | `hi-IN` | ✔ | | | ✔ | ✔ |
-| Hungarian | `hu-HU` | | | | ✔ | |
-| Indonesian | `id-ID` | | | | ✔ | |
-| Italian | `it-IT` | ✔ | ✔ | | ✔ | ✔ |
-| Japanese | `ja-JP` | ✔ | ✔ | | ✔ | ✔ |
-| Kiswahili | `sw-KE` | | | | ✔ | |
-| Korean | `ko-KR` | ✔ | | | ✔ | ✔ |
-| Latvian | `lv-LV` | | | | ✔ | |
-| Lithuanian | `lt-LT` | | | | ✔ | |
-| Malagasy | `mg-MG` | | | | ✔ | |
-| Malay | `ms-MY` | | | | ✔ | |
-| Maltese | `mt-MT` | | | | ✔ | |
-| Norwegian | `nb-NO` | ✔ | | | ✔ | ✔ |
-| Persian | `fa-IR` | ✔ | | | ✔ | ✔ |
-| Polish | `pl-PL` | ✔ | | | ✔ | ✔ |
-| Portuguese | `pt-BR` | ✔ | ✔ | | ✔ | ✔ |
-| Portuguese (Portugal) | `pt-PT` | ✔ | | | ✔ | ✔ |
-| Romanian | `ro-RO` | | | | ✔ | |
-| Russian | `ru-RU` | ✔ | ✔ | | ✔ | ✔ |
-| Samoan | `en-WS` | | | | ✔ | |
-| Serbian (Cyrillic) | `sr-Cyrl-RS` | | | | ✔ | |
-| Serbian (Latin) | `sr-Latn-RS` | | | | ✔ | |
-| Slovak | `sk-SK` | | | | ✔ | |
-| Slovenian | `sl-SI` | | | | ✔ | |
-| Spanish | `es-ES` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Spanish (Mexico) | `es-MX` | ✔ | | | ✔ | ✔ |
-| Swedish | `sv-SE` | ✔ | | | ✔ | ✔ |
-| Tamil | `ta-IN` | | | | ✔ | |
-| Thai | `th-TH` | ✔ | | | ✔ | ✔ |
-| Tongan | `to-TO` | | | | ✔ | |
-| Turkish | `tr-TR` | ✔ | | | ✔ | ✔ |
-| Ukrainian | `uk-UA` | ✔ | | | ✔ | |
-| Urdu | `ur-PK` | | | | ✔ | |
-| Vietnamese | `vi-VN` | ✔ | | | ✔ | |
+| Chinese (Traditional) | `zh-Hant` | | ✔ | ✔ | ✔ | |
+| Croatian | `hr-HR` | | ✔ | ✔ | ✔ | |
+| Czech | `cs-CZ` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Danish | `da-DK` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Dutch | `nl-NL` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| English Australia | `en-AU` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| English United Kingdom | `en-GB` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| English United States | `en-US` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Estonian | `et-EE` | | ✔ | ✔ | ✔ | |
+| Fijian | `en-FJ` | | ✔ | ✔ | ✔ | |
+| Filipino | `fil-PH` | | ✔ | ✔ | ✔ | |
+| Finnish | `fi-FI` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| French | `fr-FR` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| French (Canada) | `fr-CA` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| German | `de-DE` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Greek | `el-GR` | | ✔ | ✔ | ✔ | |
+| Haitian | `fr-HT` | | ✔ | ✔ | ✔ | |
+| Hebrew | `he-IL` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Hindi | `hi-IN` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Hungarian | `hu-HU` | | ✔ | ✔ | ✔ | |
+| Indonesian | `id-ID` | | ✔ | ✔ | ✔ | |
+| Italian | `it-IT` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Japanese | `ja-JP` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Kiswahili | `sw-KE` | | ✔ | ✔ | ✔ | |
+| Korean | `ko-KR` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Latvian | `lv-LV` | | ✔ | ✔ | ✔ | |
+| Lithuanian | `lt-LT` | | ✔ | ✔ | ✔ | |
+| Malagasy | `mg-MG` | | ✔ | ✔ | ✔ | |
+| Malay | `ms-MY` | | ✔ | ✔ | ✔ | |
+| Maltese | `mt-MT` | | ✔ | ✔ | ✔ | |
+| Norwegian | `nb-NO` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Persian | `fa-IR` | ✔ | | | ✔ | ✔ |
+| Polish | `pl-PL` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Portuguese | `pt-BR` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Portuguese (Portugal) | `pt-PT` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Romanian | `ro-RO` | | ✔ | ✔ | ✔ | |
+| Russian | `ru-RU` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Samoan | `en-WS` | | ✔ | ✔ | ✔ | |
+| Serbian (Cyrillic) | `sr-Cyrl-RS` | | ✔ | ✔ | ✔ | |
+| Serbian (Latin) | `sr-Latn-RS` | | ✔ | ✔ | ✔ | |
+| Slovak | `sk-SK` | | ✔ | ✔ | ✔ | |
+| Slovenian | `sl-SI` | | ✔ | ✔ | ✔ | |
+| Spanish | `es-ES` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Spanish (Mexico) | `es-MX` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Swedish | `sv-SE` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Tamil | `ta-IN` | | ✔ | ✔ | ✔ | |
+| Thai | `th-TH` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Tongan | `to-TO` | | ✔ | ✔ | ✔ | |
+| Turkish | `tr-TR` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Ukrainian | `uk-UA` | ✔ | ✔ | ✔ | ✔ | |
+| Urdu | `ur-PK` | | ✔ | ✔ | ✔ | |
+| Vietnamese | `vi-VN` | ✔ | ✔ | ✔ | ✔ | |
## Language support in frontend experiences
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Loc
## October 2022
+### A new built-in role: Video Indexer Restricted Viewer
+
+The limited access **Video Indexer Restricted Viewer** role is intended for users of the [Azure Video Indexer website](https://www.videoindexer.ai/). The role's permitted actions relate to the [Azure Video Indexer website](https://www.videoindexer.ai/) experience.
+
+For more information, see [Manage access with the Video Indexer Restricted Viewer role](restricted-viewer-role.md).
+ ### Slate detection insights (preview) The following slate detection insights (a movie post-production process) are automatically identified when you index a video by using the advanced indexing option:
The following slate detection (a movie post-production) insights are automatical
For details, see [Slate detection](slate-detection-insight.md).
-> [!NOTE]
-> Currently, only available in trial accounts.
- ### New source languages support for STT, translation, and search Now supporting source languages for STT (speech-to-text), translation, and search in Ukrainian and Vietnamese. It means that transcription, translation, and search features are also supported for these languages in Azure Video Indexer web applications, widgets, and APIs.
For more information, see [supported languages](language-support.md).
You can now edit the names of the speakers in the transcription by using the Azure Video Indexer API.
-> [!NOTE]
-> Currently, only available in trial accounts.
- ### Word level time annotation with confidence score An annotation is any type of additional information that is added to existing text, whether it's a transcription of an audio file or an original text file. Word-level time annotation with a confidence score is now supported.
-> [!NOTE]
-> Currently, only available in trial accounts.
- ### Azure Monitor integration enabling indexing logs The new set of logs, described below, enables you to better monitor your indexing pipeline. Azure Video Indexer now supports diagnostic settings for indexing events. You can now export logs that monitor the upload and re-indexing of media files through diagnostic settings to Azure Log Analytics, Storage, Event Hubs, or a third-party solution.
-## Expanded supported languages in LID and MLID through the API
+### Expanded supported languages in LID and MLID through the API
We expanded the languages supported in LID (language identification) and MLID (multi language Identification) using the Azure Video Indexer API.
-> [!NOTE]
-> Currently, only available in trial accounts.
+For more information, see [supported languages](language-support.md).
### Configure confidence level in a person model with an API
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
Title: Use the Azure Video Indexer API description: This article describes how to get started with Azure Video Indexer API. Previously updated : 06/14/2022 Last updated : 08/14/2022
azure-vmware Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-customer-managed-keys.md
Title: Configure customer-managed key encryption at rest in Azure VMware Solution
+ Title: Configure customer-managed key encryption at rest in Azure VMware Solution (Preview)
description: Learn how to encrypt data in Azure VMware Solution with customer-managed keys using Azure Key Vault. Last updated 6/30/2022
-# Configure customer-managed key encryption at rest in Azure VMware Solution
+# Configure customer-managed key encryption at rest in Azure VMware Solution (Preview)
This article illustrates how to encrypt VMware vSAN Key Encryption Keys (KEKs) with customer-managed keys (CMKs) managed by a customer-owned Azure Key Vault.
azure-web-pubsub Howto Create Serviceclient With Net And Azure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-net-and-azure-identity.md
This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure
1. Create a `TokenCredential` with Azure Identity SDK. ```C#
- using Azure.Identity
+ using Azure.Identity;
namespace chatapp {
This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure
2. Then create a `client` with `endpoint`, `hub`, and `credential`. ```C#
- using Azure.Identity
-
+ using Azure.Identity;
+ using Azure.Messaging.WebPubSub;
+
public class Program { public static void Main(string[] args)
This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure
## Complete sample -- [Simple chatroom with AAD Auth](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/chatapp-aad)
+- [Simple chatroom with AAD Auth](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/chatapp-aad)
backup Azure Backup Architecture For Sap Hana Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-architecture-for-sap-hana-backup.md
Title: Azure Backup Architecture for SAP HANA Backup description: Learn about Azure Backup architecture for SAP HANA backup. Previously updated : 08/11/2022 Last updated : 09/07/2022 + # Azure Backup architecture for SAP HANA backup
To back up SAP HANA databases running on Azure VM, you need to allow the install
## About architecture
+In the following sections, you'll learn about the backup architecture of SAP HANA databases in Azure Backup.
+
+### Backup architecture for database
+ See the [high-level architecture of Azure Backup for SAP HANA databases](./sap-hana-db-about.md#backup-architecture). For a detailed understanding of the backup process, see the following process: :::image type="content" source="./media/sap-hana-db-about/backup-architecture.png" alt-text="Diagram showing the backup process of SAP HANA database.":::
See the [high-level architecture of Azure Backup for SAP HANA databases](./sap-h
1. Azure Backup for SAP HANA (a Backint certified solution) doesn't depend on the underlying disk or VM types. The backup is performed by streams generated by SAP HANA.
-## Backup flow
+### Backup flow
+
+This section describes the backup process of an SAP HANA database running on an Azure VM.
1. The scheduled backups are managed by crontab entries created on the HANA VM, while the on-demand backups are directly triggered by the Azure Backup service.
See the [high-level architecture of Azure Backup for SAP HANA databases](./sap-h
1. Backint then executes the read operation from the underlying data volumes – the index server and XS engine for the Tenant database and name server for the SYSTEMDB. Premium SSD disks can provide optimal I/O throughput for the backup streaming operation. However, using uncached disks with M64Is can provide higher speeds.
-1. To stream the backup data, Backint creates up to three pipes, which directly write to Azure BackupΓÇÖs Recovery Services vault.
+1. To stream the backup data, Backint creates up to three pipes, which write directly to the Azure Backup Recovery Services vault.
If you aren't using firewall/NVA in your setup, then the backup stream is transferred over the Azure network to the Recovery Services vault / Azure Storage. Also, you can set up [Virtual Network Service Endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) or [Private Endpoint](../private-link/private-endpoint-overview.md) to allow SAP HANA to send backup traffic directly to Recovery Services Vault / Azure Storage, skipping NVA/Azure Firewall. Additionally, when you use firewall/NVA, the traffic to Azure Active Directory and Azure Backup Service will pass through the firewall/NVA and it doesn't affect the overall backup performance. 1. Azure Backup attempts to achieve speeds up to 420 MB/sec for non-log backups and up to 100 MB/sec for log backups. [Learn more](./tutorial-backup-sap-hana-db.md#understanding-backup-and-restore-throughput-performance) about backup and restore throughput performance.
-1. Detailed logs are written to the _backup.log_ and _backint.log_ files on the SAP HANA instance.
+1. Detailed logs are written to the *backup.log* and *backint.log* files on the SAP HANA instance.
1. Once the backup streaming is complete, the catalog is streamed to the Recovery Services vault. If both the backup (full/differential/incremental/log) and the catalog for this backup are successfully streamed and saved into the Recovery Services vault, Azure Backup considers the backup operation is successful.
-Refer to the following SAP HANA setups and see the execution of backup operation mentioned above:
+In the following sections, you'll learn how the backup operation is executed in different SAP HANA setups.
-**SAP HANA setup scenario: Azure network - without any NVA/Azure Firewall**
+#### SAP HANA setup scenario: Azure network - without any NVA/Azure Firewall
:::image type="content" source="./media/azure-backup-architecture-for-sap-hana-backup/azure-network-without-nva-or-azure-firewall.png" alt-text="Diagram showing the SAP HANA setup if Azure network without any NVA/Azure Firewall.":::
-**SAP HANA setup scenario: Azure network - with UDR + NVA / Azure Firewall**
+#### SAP HANA setup scenario: Azure network - with UDR + NVA / Azure Firewall
:::image type="content" source="./media/azure-backup-architecture-for-sap-hana-backup/azure-network-with-udr-and-nva-or-azure-firewall.png" alt-text="Diagram showing the SAP HANA setup if Azure network with UDR + NVA / Azure Firewall."::: >[!Note] >NVA/Azure Firewall may add an overhead when SAP HANA stream backup to Azure Storage/Recovery Services vault (data plane). See _point 6_ in the above diagram.
-**SAP HANA setup scenario: Azure network - with UDR + NVA / Azure Firewall + Private Endpoint or Service Endpoint**
+#### SAP HANA setup scenario: Azure network - with UDR + NVA / Azure Firewall + Private Endpoint or Service Endpoint
:::image type="content" source="./media/azure-backup-architecture-for-sap-hana-backup/azure-network-with-udr-and-nva-or-azure-firewall-and-private-endpoint-or-service-endpoint.png" alt-text="Diagram showing the SAP HANA setup if Azure network with UDR + NVA / Azure Firewall + Private Endpoint or Service Endpoint.":::
+### Backup architecture for database with HANA System Replication (preview)
+
+The backup service resides on both physical nodes of the HSR setup. Once you confirm that these nodes are in a replication group (using the [pre-registration script](sap-hana-database-with-hana-system-replication-backup.md#run-the-pre-registration-script)), Azure Backup groups the nodes logically, and creates a single backup item during protection configuration.
+
+After configuration, Azure Backup accepts backup requests from the primary node. On failover, when the new primary node starts generating log backup requests, Azure Backup compares the new log backups with the existing chain from the older primary node.
+
+If the backups are sequential, Azure Backup accepts them and protects the new primary node. If there's any inconsistency/break in the log chain, Azure Backup triggers a remedial full backup, and log backups will be successful only after the remedial full backup completes.
++
+>[!Note]
+>The Azure Backup service connects to HANA using `hdbuserstore` keys. Because the keys aren't replicated, we recommend that you create the same keys on all nodes, so that Azure Backup can connect automatically to any new primary node, without manual intervention after failover/failback.
+
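As a hedged illustration of that recommendation, the following sketch creates an identical `hdbuserstore` key on each node (the key name, host, port, user, and password are placeholders; run it as the `<sid>adm` OS user on every node):

```bash
# Create the same custom backup key on each HSR node so that Azure Backup can
# connect to whichever node becomes primary after a failover/failback.
hdbuserstore SET BACKUPKEY localhost:30013 CUSTOMBACKUPUSER "<StrongPassword>"

# Verify that the key exists on the node.
hdbuserstore LIST BACKUPKEY
```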
+#### Backup flows
+
+In the following sections, you'll learn about the backup flow for new/existing machines.
+
+##### New machines
+
+This section describes the backup process of an SAP HANA database with HANA System Replication enabled, running on a new Azure VM.
+
+1. Create a custom user and `hdbuserstore` key on all nodes.
+1. Run the pre-registration script on both nodes with the custom user as the backup user, to implement an ID that indicates both nodes belong to a unique/common group.
+1. During HANA protection configuration, select both nodes for discovery. This helps identify both nodes as a single database, which you can associate with a policy and protect.
++
+##### Existing machines
+
+This section describes the backup process of an SAP HANA database with HANA System Replication enabled, running on an existing Azure VM.
+
+1. Stop protection and retain data for both the nodes.
+1. Run the pre-registration script on both nodes with the custom user as the backup user, to set an ID that indicates both nodes belong to a unique/common group.
+1. Rediscover the databases in the primary node.
+
+ :::image type="content" source="./media/azure-backup-architecture-for-sap-hana-backup/rediscover-databases-inline.png" alt-text="Screenshot showing you about how to rediscover a database." lightbox="./media/azure-backup-architecture-for-sap-hana-backup/rediscover-databases-expanded.png":::
+
+1. Configure backup for the newly created replicated database from Step 2 of configure backup.
+1. Delete the backup data of the older standalone backup items for which protection was paused.
+
+>[!Note]
+>For the HANA VMs that are already backed up as individual machines, you can do the grouping only for future backups.
+ ## Next steps - Learn about the supported configurations and scenarios in the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md).-- Learn about how to [back up SAP HANA databases in Azure VMs](./backup-azure-sap-hana-database.md).
+- Learn about how to [back up SAP HANA databases in Azure VMs](backup-azure-sap-hana-database.md).
+- Learn about how to [back up SAP HANA System Replication databases in Azure VMs](sap-hana-database-with-hana-system-replication-backup.md).
+- Learn about how to [back up SAP HANA databases' snapshot instances in Azure VMs](sap-hana-database-instances-backup.md).
backup Azure File Share Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-support-matrix.md
Title: Support Matrix for Azure file share backup description: Provides a summary of support settings and limitations when backing up Azure file shares. Previously updated : 5/07/2020 Last updated : 10/14/2022
Azure file shares backup is available in all regions, **except** for Germany Cen
| Setting | Limit | | | - |
-| Maximum number of restores per day | 10 |
-| Maximum number of individual files or folders per restore, in case of ILR (Item level recovery) | 99 |
+| Maximum number of restores per day | 10 |
+| Maximum number of individual files or folders per restore, when using ILR (item-level recovery) | 99 |
| Maximum recommended restore size per restore for large file shares | 15 TiB | ## Retention limits
Azure file shares backup is available in all regions, **except** for Germany Cen
| | -- | | Maximum total recovery points per file share at any point in time | 200 | | Maximum retention of recovery point created by on-demand backup | 10 years |
-| Maximum retention of daily recovery points (snapshots) per file share| 200 days |
+| Maximum retention of daily recovery points (snapshots) per file share, with daily frequency | 200 days |
+| Maximum retention of daily recovery points (snapshots) per file share, with hourly frequency | Floor (200 / number of snapshots according to the schedule) - 1 |
| Maximum retention of weekly recovery points (snapshots) per file share | 200 weeks | | Maximum retention of monthly recovery points (snapshots) per file share | 120 months | | Maximum retention of yearly recovery points (snapshots) per file share | 10 years |
backup Sap Hana Database About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-about.md
+
+ Title: About SAP HANA database backup in Azure VMs
+description: In this article, learn about backing up SAP HANA databases that are running on Azure virtual machines.
+ Last updated : 10/06/2022++++++
+# About SAP HANA database backup in Azure VMs
+
+SAP HANA databases are mission critical workloads that require a low recovery point objective (RPO) and a fast recovery time objective (RTO). You can now [back up SAP HANA databases running on Azure VMs](./tutorial-backup-sap-hana-db.md) using [Azure Backup](./backup-overview.md).
+
+Azure Backup is [Backint certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:e062231e-9fb7-4ea8-b7d2-e6fe448c592d) by SAP to provide native backup support by using SAP HANA's native APIs. This offering from Azure Backup aligns with Azure Backup's mantra of **zero-infrastructure** backups, eliminating the need to deploy and manage backup infrastructure. You can now seamlessly back up and restore SAP HANA databases running on Azure VMs ([M series VMs](../virtual-machines/m-series.md) are also supported now) and use the enterprise management capabilities that Azure Backup provides.
+
+## Added value
+
+Using the Azure Backup service to back up and restore SAP HANA databases gives you the following advantages:
+
+* **15-minute Recovery Point Objective (RPO)**: Recovery of critical data of up to 15 minutes is now possible.
+* **One-click, point-in-time restores**: Restore of production data to alternate HANA servers is made easy. The chaining of backups and catalogs to perform restores is managed by Azure behind the scenes.
+* **Long-term retention**: For rigorous compliance and audit needs. Retain your backups for years, based on the retention duration, beyond which the recovery points will be pruned automatically by the built-in lifecycle management capability.
+* **Backup Management from Azure**: Use Azure Backup's management and monitoring capabilities for improved management experience. Azure CLI is also supported.
+* **Backup of SAP HANA database with HSR**: Support for backup of HANA database with HANA System Replication (HSR) facilitates a single backup chain across nodes and provides an effortless restore experience.
+
+To view the backup and restore scenarios that we support today, see the [SAP HANA scenario support matrix](./sap-hana-backup-support-matrix.md#scenario-support).
+
+## Backup architecture
+
+You can back up SAP HANA databases running inside an Azure VM and stream backup data directly to the Azure Recovery Services vault.
+
+![Backup architecture diagram](./media/sap-hana-db-about/backup-architecture.png)
+
+* The backup process begins by [creating a Recovery Services vault](./tutorial-backup-sap-hana-db.md#create-a-recovery-services-vault) in Azure. This vault will be used to store the backups and recovery points created over time.
+* The Azure VM running the SAP HANA server is registered with the vault, and the databases to be backed up are [discovered](./tutorial-backup-sap-hana-db.md#discover-the-databases). To enable the Azure Backup service to discover databases, a [preregistration script](https://go.microsoft.com/fwlink/?linkid=2173610) must be run on the HANA server as a root user (see the sketch after this list).
+* This script creates the **AZUREWLBACKUPHANAUSER** database user (or uses the custom backup user you've already created), and then creates a corresponding key with the same name in **hdbuserstore**. [Learn more](./tutorial-backup-sap-hana-db.md#what-the-pre-registration-script-does) about the functionality of the script.
+* Azure Backup Service now installs the **Azure Backup Plugin for HANA** on the registered SAP HANA server.
+* The **Azure Backup Plugin for HANA** uses the **AZUREWLBACKUPHANAUSER** database user created by the pre-registration script (or the custom backup user that you've created and added as input to the script) to perform all backup and restore operations. If you attempt to configure backup for SAP HANA databases without running this script, you might receive the **UserErrorHanaScriptNotRun** error.
+* To [configure backup](./tutorial-backup-sap-hana-db.md#configure-backup) on the databases that are discovered, choose the required backup policy and enable backups.
+
+* Once the backup is configured, Azure Backup service sets up the following Backint parameters at the DATABASE level on the protected SAP HANA server:
+ * `[catalog_backup_using_backint:true]`
+ * `[enable_accumulated_catalog_backup:false]`
+ * `[parallel_data_backup_backint_channels:1]`
+ * `[log_backup_timeout_s:900]`
+ * `[backint_response_timeout:7200]`
+
+>[!NOTE]
+>Ensure that these parameters are *not* present at HOST level. Host-level parameters will override these parameters and might cause unexpected behavior.
+>
+
+* The **Azure Backup Plugin for HANA** maintains all the backup schedules and policy details. It triggers the scheduled backups and communicates with the **HANA Backup Engine** through the Backint APIs.
+* The **HANA Backup Engine** returns a Backint stream with the data to be backed up.
+* All the scheduled backups and on-demand backups (triggered from the Azure portal) that are either full or differential are initiated by the **Azure Backup Plugin for HANA**. However, log backups are managed and triggered by **HANA Backup Engine** itself.
+* Azure Backup for SAP HANA, being a BackInt certified solution, doesn't depend on underlying disk or VM types. The backup is performed by streams generated by HANA.
+
+## Using Azure VM backup with Azure SAP HANA backup
+
+In addition to using the SAP HANA backup in Azure that provides database level backup and recovery, you can use the Azure VM backup solution to back up the OS and non-database disks.
+
+The [Backint certified Azure SAP HANA backup solution](#backup-architecture) can be used for database backup and recovery.
+
+[Azure VM backup](backup-azure-vms-introduction.md) can be used to back up the OS and other non-database disks. The VM backup is taken once every day, and it backs up all the disks (except **Write Accelerator (WA) OS disks** and **ultra disks**). Since the database is being backed up using the Azure SAP HANA backup solution, you can take a file-consistent backup of only the OS and non-database disks using the [Selective disk backup and restore for Azure VMs](selective-disk-backup-restore.md) feature.
+
+To restore a VM running SAP HANA, follow these steps:
+
+1. [Restore a new VM from Azure VM backup](backup-azure-arm-restore-vms.md) from the latest recovery point. Or create a new empty VM and attach the disks from the latest recovery point.
+1. If WA disks are excluded, they aren't restored.
+
+ In this case, create empty WA disks and log area.
+
+1. After all the other configurations (such as IP, system name, and so on) are set, the VM is set to receive DB data from Azure Backup.
+1. Now restore the DB into the VM from the [Azure SAP HANA DB backup](sap-hana-db-restore.md#restore-to-a-point-in-time-or-to-a-recovery-point) to the desired point-in-time.
+
+## Back up a HANA system with replication enabled (preview)
+
+Azure Backup now supports backup of databases that have HANA System Replication (HSR) enabled (preview). This means that backups are managed automatically when a failover occurs, eliminating manual intervention. It also offers immediate protection with no remedial full backups, which allows you to protect HANA instances/nodes of HSR setups as a single HSR container. While there are multiple physical nodes (a primary and a secondary), the backup service now considers them a single HSR container.
+
+>[!Note]
+>As the feature is in preview, there are no Protected Instance charges for a logical HSR container. However, you're charged for the underlying storage of the backups.
+
+## Back up database instance snapshots (preview)
+
+As databases grow in size, the time taken to restore becomes a factor when dealing with streaming backups. Also, during backup, the time taken by the database to generate *Backint streams* can grow in proportion to the churn, which can be a factor as well.
+
+A database-consistent, snapshot-based approach helps solve both issues and gives you the benefit of instant backup and instant restore. For HANA, Azure Backup now provides a HANA-consistent, snapshot-based approach that's integrated with *Backint*, so that you can use Azure Backup as a single product for your entire HANA landscape, irrespective of size.
+
+### Pricing
+
+#### Managed disk snapshot
+
+Azure Backup uses managed disk snapshots and stores them in a resource group that you specify. Managed disk snapshots use standard HDD storage irrespective of the storage type of the disk, and you're charged as per [Managed disk snapshot pricing](https://azure.microsoft.com/pricing/details/managed-disks/). The first disk snapshot is a full snapshot, and all subsequent snapshots are incremental, consisting only of the changes since the last snapshot.
+
+>[!Note]
+>There are no backup storage costs for snapshots since they are NOT transferred to Recovery Services vault.
+#### BackInt streams
+
+As per SAP recommendation, it's mandatory to have weekly full backups for all the databases within an instance that's protected by snapshots. So, you'll be charged for all protected databases within the instance (protected instance pricing + backup storage pricing) as per [Azure Backup pricing for SAP HANA databases](https://azure.microsoft.com/pricing/details/backup/).
+
+## Next steps
+
+- Learn about how to [back up SAP HANA databases in Azure VMs](backup-azure-sap-hana-database.md).
+- Learn about how to [back up SAP HANA System Replication databases in Azure VMs](sap-hana-database-with-hana-system-replication-backup.md).
+- Learn about how to [back up SAP HANA databases' snapshot instances in Azure VMs](sap-hana-database-instances-backup.md).
+- Learn how to [restore an SAP HANA database running on an Azure VM](./sap-hana-db-restore.md)
+- Learn how to [manage SAP HANA databases that are backed up using Azure Backup](./sap-hana-db-manage.md)
backup Sap Hana Database Instance Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-instance-troubleshoot.md
+
+ Title: Troubleshoot SAP HANA database instance backup errors
+description: This article describes how to troubleshoot common errors that might occur when you use Azure Backup to back up SAP HANA database instances.
+ Last updated : 10/05/2022++++++
+# Troubleshoot SAP HANA snapshot backup jobs on Azure Backup
+
+This article provides troubleshooting information for backing up SAP HANA database instances on Azure Virtual Machines. For more information on the SAP HANA backup scenarios we currently support, see [Scenario support](sap-hana-backup-support-matrix.md#scenario-support).
+
+## Common user errors
+
+### Error code: UserErrorVMIdentityNotEnabled
+
+**Error message**: System-assigned managed identity is not enabled on the Azure VM.
+
+**Recommended action**: To fix this issue, follow these steps and then retry the operation:
+
+1. Enable system-assigned managed identity on the Azure VM.
+1. Assign required role actions for the Azure VM identity. For more information, see [Azure workload backup troubleshooting scripts](https://aka.ms/DBSnapshotRBACPermissions).
+
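For example, a minimal Azure CLI sketch for step 1 (the resource group and VM names are placeholders):

```bash
# Enable the system-assigned managed identity on the HANA VM.
az vm identity assign --resource-group MyHanaRG --name MyHanaVM
```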
+### Error code: UserErrorVMIdentityRequiresCreateSnapshotsRole
+
+**Error message**: Azure VM's system-assigned managed identity not authorized to create snapshots.
+
+**Recommended action**: Assign the Disk Snapshot Contributor role to the Azure VM's system-assigned managed identity at snapshot resource group scope and retry the operation. For more information, see the [Azure workload backup troubleshooting scripts](https://aka.ms/DBSnapshotRBACPermissions).
+
+### UserErrorVMIdentityRequiresReadSnapshotsRole
+
+**Error message**: Azure VM's system-assigned managed identity not authorized to read snapshots.
+
+**Recommended action**: Assign the Disk Snapshot Contributor role to the Azure VM's system-assigned managed identity at snapshot resource group scope and retry the operation. For more information, see the [Azure workload backup troubleshooting scripts](https://aka.ms/DBSnapshotRBACPermissions).
+
+### UserErrorVMIdentityRequiresReadDisksRole
+
+**Error message**: Azure VM's system-assigned managed identity not authorized to read disks details.
+
+**Recommended action**: Assign the Disk Snapshot Contributor role to the Azure VM's system-assigned managed identity at snapshot resource group scope and retry the operation. For more information, see the [Azure workload backup troubleshooting scripts](https://aka.ms/DBSnapshotRBACPermissions).
+
+### UserErrorVMIdentityRequiresCreateDisksRole
+
+**Error message**: Azure VM's system-assigned managed identity not authorized to create disks.
+
+**Recommended action**: Assign the Disk Snapshot Contributor role to the Azure VM's system-assigned managed identity at snapshot resource group scope and retry the operation. For more information, see the [Azure workload backup troubleshooting scripts](https://aka.ms/DBSnapshotRBACPermissions).
+
+### UserErrorVMIdentityRequiresUpdateVMRole
+
+**Error message**: Azure VM's system-assigned managed identity not authorized to attach disks on virtual machine.
+
+**Recommended action**: Assign the Virtual Machine Contributor role to the Azure VM's system-assigned managed identity on target Azure VM and retry the operation. For more information, see the [Azure workload backup troubleshooting scripts](https://aka.ms/DBSnapshotRBACPermissions).
+
+### UserErrorVMIdentityRequiresReadVMRole
+
+**Error message**: Azure VM's system-assigned managed identity not authorized to read virtual machine storage profile.
+
+**Recommended action**: Assign the Virtual Machine Contributor role to the Azure VM's system-assigned managed identity on the target
+Azure VM and retry the operation. For more information, see the [Azure workload backup troubleshooting scripts](https://aka.ms/DBSnapshotRBACPermissions).
+
+### UserErrorDiskTypeNotSupportedForWkloadBackup
+
+**Error message**: Unmanaged disk not supported for workload snapshot backups.
+
+**Recommended action**: Create a managed disk from the disk vhd and retry the operation.
+
+### Error code: UserErrorMaxDisksSupportedForWkloadBackupExceeded
+
+**Error message**: Disks count exceeded maximum disks supported for workload backup.
+
+**Recommended action**: Check the [support matrix for workload snapshot backups](sap-hana-backup-support-matrix.md#scenario-support). Then reduce the disk count accordingly and retry the operation.
+
+### UserErrorWLBackupFilesystemTypeNotSupported
+
+**Error message**: File system type not supported for workload snapshot backups.
+
+**Recommended action**: Check the [support matrix for workload snapshot backups](sap-hana-backup-support-matrix.md#scenario-support). Then copy datasources to a volume with supported filesystem type and retry the operation.
+
+### UserErrorWLBackupDeviceTypeNotSupported
+
+**Error message**: Device type not supported for workload snapshot backups.
+
+**Recommended action**: Check the [support matrix for workload snapshot backups](sap-hana-backup-support-matrix.md#scenario-support). Then move datasource to supported device type and retry the operation.
+
+### UserErrorWLOnOSVolumeNotSupported
+
+**Error message**: Workload data on OS volume is not supported for snapshot based backups.
+
+**Recommended action**: To use snapshot backups, move the workload data to another non-OS volume.
+
+### UserErrorVMIdentityRequiresGetDiskSASRole
+
+**Error message**: Azure VM's system-assigned identity is not authorized to get disk shared access signature (SAS URI).
+
+**Recommended action**: Assign the Disk Snapshot Contributor role to the Azure VM's system-assigned managed identity at disk scope. For more information, see the [Azure workload backup troubleshooting scripts](https://aka.ms/DBSnapshotRBACPermissions). Then retry the operation.
+
+### UserErrorSnapshotTargetRGNotFoundOrVMIdentityRequiresPermissions
+
+**Error message**: Either the snapshot resource group does not exist or the Azure VM's system-assigned managed identity is not authorized to create snapshots in snapshot resource group.
+
+**Recommended action**: Ensure that the snapshot resource group specified in the database instance snapshot policy exists, and required actions are assigned to the Azure VM's system-assigned managed identity. For more information, see the [Azure workload backup troubleshooting scripts](https://aka.ms/DBSnapshotRBACPermissions).
+
+### UserErrorVMIdentityRequiresPermissionsForSnapshot
+
+**Error message**: Azure VM's system-assigned managed identity does not have adequate permissions for snapshot based workload backup.
+
+**Recommended action**: Assign required role actions for the Azure VM identity mentioned in additional error details. For more information, see the [Azure workload backup troubleshooting scripts](https://aka.ms/DBSnapshotRBACPermissions).
+
+### UserErrorSnapshotOperationsUnsupportedWithInactiveDatabase
+
+**Error message**: Snapshot backups are not supported on inactive database(s).
+
+**Recommended action**: Ensure that all databases are up and running, then retry the operation.
+
+### Error code: UserErrorDeleteSnapshotRoleOrResourceGroupMissing
+
+**Error message**: Azure Backup does not have permissions to delete workload backup snapshots or the snapshot resource group does not exist.
+
+**Recommended action**: Assign the Disk Snapshot Contributor role to the Backup Management Service at snapshot resource group scope. For more information, see the [Azure workload backup troubleshooting scripts](https://aka.ms/DBSnapshotRBACPermissions). Then retry the operation.
+
+### UserErrorConflictingFileSystemPresentOnTargetMachine
+
+**Error message**: Can't attach snapshot because disks/filesystem with the same identity are present on the target machine.
+
+**Recommended action**: Select another target machine for snapshot restore. For more information, see the [SAP HANA database backup troubleshooting article](https://aka.ms/HANASnapshotTSGuide).
+
+### UserErrorDiskAttachLimitReached
+
+**Error message**: The limit on maximum number of attached disks on the VM is reached.
+
+**Recommended action**: Detach unused disks or perform restore on a different machine. For more information, see the [SAP HANA database backup troubleshooting article](https://aka.ms/HANASnapshotTSGuide).
+
+### Error code: UserErrorPITSnapshotDeleted
+
+**Error message**: The selected snapshot recovery point is deleted or not present in the resource group.
+
+**Recommended action**: Select another snapshot recovery point. For more information, see the [SAP HANA database backup troubleshooting article](https://aka.ms/HANASnapshotTSGuide).
+
+### UserErrorRestoreDiskIncompatible
+
+**Error message**: The restored disk type is not supported by the target vm.
+
+**Recommended action**: Upgrade the VM or use a compatible target VM for restore. For more information, see the [SAP HANA database backup troubleshooting article](https://aka.ms/HANASnapshotTSGuide).
+
+## Appendix
+
+**Perform restoration actions in SAP HANA studio**
+
+1. Recover System database from data snapshot using HANA Studio. See this [SAP documentation](https://help.sap.com/docs/SAP_HANA_COCKPIT/afa922439b204e9caf22c78b6b69e4f2/9fd053d58cb94ac69655b4ebc41d7b05.html).
+1. Run the pre-registration script to reset the user credentials.
+1. Once done, recover all tenant databases from a data snapshot using HANA Studio. See this [HANA documentation](https://help.sap.com/docs/SAP_HANA_COCKPIT/afa922439b204e9caf22c78b6b69e4f2/b2c283094b9041e7bdc0830c06b77bf8.html).
+
+## Next steps
+
+Learn about [Azure Backup service to back up database instances (preview)](sap-hana-db-about.md#using-the-azure-backup-service-to-back-up-database-instances-preview).
backup Sap Hana Database Instances Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-instances-backup.md
+
+ Title: Back up SAP HANA database instances on Azure VMs
+description: In this article, discover how to back up SAP HANA database instances that are running on Azure Virtual Machines.
+ Last updated : 10/05/2022++++++
+# Back up SAP HANA databases' instance snapshots in Azure VMs (preview)
+
+Azure Backup now performs an SAP HANA storage snapshot-based backup of the entire database instance. It combines an Azure managed disk full/incremental snapshot with HANA snapshot commands to provide instant HANA backup and restore.
+
+This article describes how to back up SAP HANA database instances that are running on Azure VMs to an Azure Backup Recovery Services vault.
+
+In this article, you'll learn how to:
+
+>[!div class="checklist"]
+>- Create and configure a Recovery Services vault
+>- Create a policy
+>- Discover database instances
+>- Configure backups
+>- Track a backup job
+
+>[!Note]
+>See [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) for more information about the supported configurations and scenarios.
+
+## Before you start
+
+### Policy
+
+As per SAP recommendation, it's mandatory to have a weekly full backup for all the databases within an instance that's protected by snapshots. Currently, logs are also mandatory for a database when you create a policy. Because snapshots are taken daily, there's no need for incremental/differential backups in the database policy. Therefore, all databases under the database instance that's protected by snapshots should have a database policy of *weekly fulls + logs only*, along with daily snapshots at the instance level.
+
+>[!Warning]
+>As the policy doesn't have differentials/incrementals, we do NOT recommend triggering on-demand differential backups from any client.
+
+**Summary**:
+
+- Always protect all the databases within an Instance with a database policy before applying daily snapshots to the database Instance.
+- Make sure that all database policies have only *Weekly fulls + logs*. No differential/incremental backups.
+- Do NOT trigger on-demand Backint based streaming differential/incremental backups for these databases.
+
+### Permissions required for backup
+
+You must assign the required permissions to the Azure Backup service (residing within the HANA VM) so that it can take snapshots of the managed disks and place them in the user-specified resource group mentioned in the policy. To do so, you can use the system-assigned managed identity (MSI) of the source VM.
+
+The following table lists the resource, permissions, and scope.
+
+Entity | Built-in role | Scope of permission | Description
+ | | |
+Source VM | Virtual Machine Contributor | Backup admin who configures/runs HANA snapshot backup | Configures HANA instance.
Source disk resource group (where all disks are present for backup) | Disk Backup Reader | Source VM's MSI | Creates disk snapshots.
Source snapshot resource group | Disk Snapshot Contributor | Source VM's MSI | Creates disk snapshots and stores them in the source snapshot resource group.
Source snapshot resource group | Disk Snapshot Contributor | Backup Management Service | Deletes old snapshots in the source snapshot resource group.
+
+>[!Note]
+>- The credentials used should have permissions to grant roles to other resources and should be Owner or User Access Administrator [as mentioned here](../role-based-access-control/role-assignments-steps.md#step-4-check-your-prerequisites).
+>- During backup configuration, you can use the Azure portal to assign all of the above permissions, except the *Disk Snapshot Contributor* role for the *Backup Management Service* principal on the snapshot resource group. You need to assign that permission manually.
+>- We recommend that you don't change the resource groups once they're assigned to Azure Backup, because doing so simplifies permissions handling.
+
+Learn about the [permissions required for snapshot restore](sap-hana-database-instances-restore.md#permissions-required-for-snapshot-restore).
++
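As a non-authoritative sketch of how one of these role assignments could be made with the Azure CLI (all resource names are placeholders; the portal flow described later can assign most of them for you):

```bash
# Object ID of the source VM's system-assigned managed identity.
principalId=$(az vm identity show --resource-group MyHanaRG --name MyHanaVM --query principalId -o tsv)

# Grant Disk Snapshot Contributor on the snapshot resource group to the VM's identity.
az role assignment create \
  --assignee-object-id "$principalId" \
  --assignee-principal-type ServicePrincipal \
  --role "Disk Snapshot Contributor" \
  --scope "$(az group show --name MySnapshotRG --query id -o tsv)"
```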
+## Create a policy
+
+To create a policy for SAP HANA database instance backup, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com/), select a Recovery Services vault.
+
+1. Under **Backup**, select **Backup Policies**.
+
+1. Select **+Add**.
+
+1. On **Select policy type**, select **SAP HANA in Azure VM (DB Instance via snapshot) [Preview]**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-backup/select-sap-hana-instance-policy-type.png" alt-text="Screenshot showing to select the policy type.":::
+
+1. On **Create policy**, perform the following actions:
+
+ - **Policy name**: Enter a unique policy name.
+ - **Snapshot Backup**: Set the *Time* and *Timezone* for backup from the drop-down lists. The default selections are *10:30 PM* and *(UTC) Coordinated Universal Time*, respectively.
+
+ >[!Note]
+ >Azure Backup currently supports Daily backup frequency only.
+
+ - **Instant Restore**: Set the retention of recovery snapshot from *1* to *35* days. The default value is set to *2*.
+ - **Resource group**: Select the appropriate resource group from the drop-down list.
+ - **Managed Identity**: Select a managed identity from the drop-down to assign permissions to take snapshots of the managed disks and to place them in the Resource Group you select in the policy.
+ >[!Note]
+ >You need to manually assign the permission for Azure Backup service to delete the snapshots as per the policy. Other [permissions are assigned by the Azure portal](#configure-snapshot-backup).
+ >
+ >To assign the *Disk Snapshot Contributor* role to *Backup Management Service* manually on the snapshot resource group, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current).
+
+1. Select **Create**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-backup/create-policy.png" alt-text="Screenshot showing how to create the policy.":::
+
+Also, you need to [create a policy for SAP HANA database backup](backup-azure-sap-hana-database.md#create-a-backup-policy).
+
## Discover the database instances
+
+To discover the database instance where the snapshot is present, see the [process to discover a database instance](backup-azure-sap-hana-database.md#discover-the-databases).
+
+## Configure snapshot backup
+
+Before configuring backup for the snapshot, [configure backup for the database](backup-azure-sap-hana-database.md#configure-backup).
+
+Once done, follow these steps:
+
+1. Go to the **Recovery Services vault** and select **+Backup**.
+
+1. Select **SAP HANA in Azure VM** as the data source type, select a **Recovery Services vault** to use for backup, and then select **Continue**.
+
+1. In **Step 2**, select **DB Instance via snapshot (Preview)** > **Configure Backup**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-backup/select-db-instance-via-snapshot.png" alt-text="Screenshot showing to select the DB Instance via snapshot option.":::
+
+1. On **Configure Backup**, select the database instance policy from the **Backup policy** drop-down list, and then select **Add/Edit** to check the available database instances.
+
+ :::image type="content" source="./media/sap-hana-database-instances-backup/add-database-instance-backup-policy.png" alt-text="Screenshot showing to select and add a database instance policy.":::
+
+ To edit a DB instance selection, select the checkbox corresponding to the instance name and select **Add/Edit**.
+
+1. On **Select items to backup**, select the database instances and select **OK**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-backup/select-database-instance-for-backup.png" alt-text="Screenshot showing to select a database instance for backup.":::
+
+ Once you select HANA instances for backup, the Azure portal checks for missing permissions in the managed system identity (MSI) that's assigned to the policy to perform snapshot backup.
+
+1. If the permissions aren't present, select **Assign missing roles/identity** to assign all permissions.
+
+ Azure portal then automatically re-validates and shows *Backup readiness* as successful.
+
+1. Once the *backup readiness check* is successful, select **Enable backup**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-backup/enable-hana-database-instance-backup.png" alt-text="Screenshot showing to enable HANA database instance backup.":::
+
+## Run an on-demand backup
+
+Follow these steps:
+
+1. In the Azure portal, go to **Recovery Services vault**.
+
+1. In the Recovery Services vault, select **Backup items** in the left pane.
+
+1. By default **Primary Region** is selected. Select **SAP HANA in Azure VM**.
+
+1. On the **Backup Items** page, select **View details** corresponding to the SAP HANA snapshot instance.
+
+ :::image type="content" source="./media/sap-hana-database-instances-backup/hana-snapshot-view-details.png" alt-text="Screenshot showing to select View Details of HANA database snapshot instance.":::
+
+1. Select **Backup now**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-backup/start-backup-hana-snapshot.png" alt-text="Screenshot showing to start backup of HANA database snapshot instance.":::
+
+1. On the **Backup Now** page, select **OK**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-backup/trigger-backup-hana-snapshot.png" alt-text="Screenshot showing to trigger HANA database snapshot instance backup.":::
+
+## Track a backup job
+
+Azure Backup service creates a job for scheduled backups or if you trigger on-demand backup operation for tracking. To view the backup job status, follow these steps:
+
+1. In the Recovery Services vault, select **Backup Jobs** in the left pane.
+
+ It shows the jobs dashboard with the operation and status of the jobs triggered in the *past 24 hours*. To modify the time range, select **Filter** and make the required changes.
+
+1. To review the job details of a job, select **View details** corresponding to the job.
+
+## Next steps
+
+- [Learn how to restore SAP HANA databases' instance snapshots in Azure VMs (preview)](sap-hana-database-instances-restore.md).
+- [Learn how to manage SAP HANA databases on Azure VMs (preview)](sap-hana-database-manage.md).
backup Sap Hana Database Instances Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-instances-restore.md
+
+ Title: Restore SAP HANA database instances on Azure VMs
+description: In this article, discover how to restore SAP HANA database instances that are running on Azure Virtual Machines.
+ Last updated : 10/05/2022++++++
+# Restore SAP HANA databases' instance snapshots in Azure VMs (preview)
+
+This article describes how to restore a backed-up SAP HANA database instance to another target VM via snapshots.
+
+>[!Note]
+>If you want to do an in-place restore (overwrite the backed-up VM by detaching the existing disks and attaching new disks), detach the existing disks and see the following sections for restore.
+
+You can restore the HANA snapshot and storage snapshot as disks by selecting **Attach and mount** on the target machine. However, Azure Backup doesn't automatically restore the HANA system to the required point.
+
+Here are the two workflows:
+
+- [Restore the entire HANA system (system database and all tenant databases) to a single snapshot-based restore point](#restore-entire-system-to-snapshot-restore-point).
+- [Restore the system database and all tenant databases to a different log point-in-time over a snapshot](#restore-database-to-a-different-log-point-in-time-over-snapshot).
+
+>[!Note]
+>SAP HANA recommends recovering the entire system during a snapshot restore, which means that you must also restore the system database. When the system database is restored, the user/access information is overwritten or updated, and subsequent attempts to recover the tenant databases might fail. There are two ways to resolve this issue:
+>
+>- The backed-up VM and the target VM have the same backup key (including username and password). In this case, the HANA backup service can connect with the same credentials and continue to recover the tenant databases.
+>- If the backed-up VM and the target VM have different keys, run the pre-registration script after the system database recovery. This updates the credentials on the target VM so that the tenant databases can then be recovered.
+
+## Prerequisites
+
+#### Permissions required for snapshot restore
+
+During restore, Azure Backup uses the target VM's managed identity to read disk snapshots from a user-specified resource group, create disks in a target resource group, and attach them to the target VM.
+
+The following table lists the resource, permissions, and scope.
+
+>[!Note]
+>Once restore is completed, you can revoke these permissions.
+
+Entity | Built-in role | Scope of permission | Description
+--- | --- | --- | ---
+Target VM | Virtual Machine Contributor | Backup admin who configures/runs HANA snapshot restore, and the target VM's MSI. | Restores from disk snapshots to create new managed disks and attach/mount them to the target VM/OS.
+Source snapshot resource group | Disk Snapshot Contributor | Target VM's MSI | Restores from disk snapshots.
+Target disk resource group (where all existing disks of the target VM are present, for revert). <br><br> Target disk resource group (where all new disks will be created during restore). | Disk Restore Operator | Target VM's MSI | Restores from disk snapshots to create new managed disks and attach/mount them to the target VM/OS.
+
+>[!Note]
+>
+>- The credentials used should have permissions to grant roles to other resources, such as Owner or User Access Administrator, [as mentioned here](../role-based-access-control/role-assignments-steps.md#step-4-check-your-prerequisites).
+>- You can use the Azure portal to assign all the above permissions during restore, or use the Azure CLI as in the sketch that follows this note.
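+
+The following is a minimal sketch for assigning these roles to the target VM's system-assigned managed identity; the subscription, resource group, and VM names are placeholders, and the scopes should match the table above.
+
+```bash
+# Get the principal ID of the target VM's system-assigned managed identity.
+principalId=$(az vm show --resource-group <target-vm-rg> --name <target-vm-name> \
+  --query identity.principalId --output tsv)
+
+# Allow the identity to read snapshots in the source snapshot resource group.
+az role assignment create --assignee "$principalId" \
+  --role "Disk Snapshot Contributor" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<snapshot-rg>"
+
+# Allow the identity to create and attach disks in the target disk resource group(s).
+az role assignment create --assignee "$principalId" \
+  --role "Disk Restore Operator" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<target-disk-rg>"
+```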
+
+## Restore entire system to snapshot restore point
+
+In the following sections, you'll learn how to restore the system to snapshot restore point.
+
+### Select and mount the snapshot
+
+Follow these steps:
+
+1. In the Azure portal, go to Recovery Services vault.
+
+1. In the left pane, select **Backup items**.
+
+1. Select **Primary Region** and select **SAP HANA in Azure VM**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-restore/select-vm-in-primary-region.png" alt-text="Screenshot showing to select the primary region option for VM selection.":::
+
+1. On the **Backup Items** page, select **View details** corresponding to the SAP HANA snapshot instance.
+
+ :::image type="content" source="./media/sap-hana-database-instances-restore/select-view-details.png" alt-text="Screenshot showing to select view details of HANA database snapshot.":::
+
+1. Select **Restore**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-restore/restore-hana-snapshot.png" alt-text="Screenshot showing to select the Restore option for HANA database snapshot.":::
+
+1. On the **Restore** page, select the target VM to which the disks should be attached, the required HANA instance, and the resource group.
+
+1. In **Restore Point**, choose **Select**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-restore/restore-system-database-restore-point.png" alt-text="Screenshot showing to select HANA snapshot recovery point.":::
+
+1. In the **Select restore point** pane, select a recovery point and select **OK**.
+
+1. Select the corresponding resource group and the *managed identity* to which all permissions are assigned for restore.
+
+1. Select **Validate** to check whether all the permissions are assigned to the managed identity for the relevant scopes.
+
+1. If the permissions aren't assigned, select **Assign missing roles/identity**.
+
+   After the roles are assigned, the Azure portal automatically revalidates the permissions and shows the validation as successful.
+
+1. Select **Attach and mount snapshot** to attach the disks to the VM.
+
+1. Select **OK** to create disks from snapshots, attach them to the target VM and mount them.
+
+### Restore System DB
+
+Recover the system database from the data snapshot by using HANA Studio. For more information, see [this SAP documentation](https://help.sap.com/docs/SAP_HANA_COCKPIT/afa922439b204e9caf22c78b6b69e4f2/9fd053d58cb94ac69655b4ebc41d7b05.html).
+
+>[!Note]
+>After you restore the system database, run the pre-registration script on the target VM to update the user credentials.
+
+### Restore Tenant databases
+
+After the system database recovery is done, recover all tenant databases from the data snapshot by using HANA Studio; a minimal hdbsql sketch also follows. For more information, see [this HANA documentation](https://help.sap.com/docs/SAP_HANA_COCKPIT/afa922439b204e9caf22c78b6b69e4f2/b2c283094b9041e7bdc0830c06b77bf8.html).
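+
+This sketch assumes the hdbuserstore key created by the pre-registration script (`AZUREWLBACKUPHANAUSER`) and runs against the system database; the tenant name is a placeholder.
+
+```bash
+# Stop the tenant database before recovery.
+hdbsql -U AZUREWLBACKUPHANAUSER -d SYSTEMDB \
+  "ALTER SYSTEM STOP DATABASE <TENANT_DB> IMMEDIATE"
+
+# Recover the tenant from the attached/mounted data snapshot and clear the log area.
+hdbsql -U AZUREWLBACKUPHANAUSER -d SYSTEMDB \
+  "RECOVER DATA FOR <TENANT_DB> USING SNAPSHOT CLEAR LOG"
+```
+
+Repeat the recovery for every tenant database in the system.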
+
+## Restore database to a different log point-in-time over snapshot
+
+Perform the following actions.
+
+### Select and mount the nearest snapshot
+
+First, identify the snapshot nearest to the required log point-in-time. Then [attach and mount that snapshot](#select-and-mount-the-snapshot) to the target VM.
+
+### Restore system database
+
+To select and restore the required point-in-time for System DB, follow these steps:
+
+1. Go to Recovery Services vault and select **Backup items** from the left pane.
+
+1. Select **Primary Region** and select **SAP HANA in Azure VM**.
+
+1. On the **Backup Items** page, select **View details** corresponding to the related system database instance.
+
+ :::image type="content" source="./media/sap-hana-database-instances-restore/system-database-view-details.png" alt-text="Screenshot showing to view details of system database instance.":::
+
+1. On the system database items page, select **Restore**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-restore/open-system-database-restore-blade.png" alt-text="Screenshot showing to open the restore page of system database instance.":::
+
+1. On the **Restore** page, select **Restore logs over snapshot**.
+
+1. Select the required VM and resource group.
+
+1. On **Restore Point**, choose **Select**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-restore/restore-logs-over-snapshot-restore-point.png" alt-text="Screenshot showing how to select log restore points of system database instance for restore.":::
+
+1. On the **Select restore point** pane, select the restore point and select **OK**.
+
+ >[!Note]
+   >The logs appear after the snapshot point that you previously restored.
+
+1. Select **OK**.
++
+### Restore tenant database
+
+Follow these steps:
+
+1. In the Azure portal, go to Recovery Services vault.
+
+1. In the left pane, select **Backup items**.
+
+1. Select **Primary Region** -> **SAP HANA in Azure VM**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-restore/select-vm-in-primary-region.png" alt-text="Screenshot showing to select the primary region option to back up tenant DB.":::
+
+1. On the **Backup Items** page, select **View details** corresponding to the SAP HANA tenant database.
+
+ :::image type="content" source="./media/sap-hana-database-instances-restore/select-view-details-of-tenant-database.png" alt-text="Screenshot showing to select view details of HANA tenant database.":::
+
+1. Select **Restore**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-restore/restore-hana-snapshot.png" alt-text="Screenshot showing to select the Restore option for HANA tenant database.":::
+
+1. On the **Restore** page, select the target VM to which the disks should be attached, the required HANA instance, and the resource group.
+
+ :::image type="content" source="./media/sap-hana-database-instances-restore/log-over-snapshots-for-tenant-database-restore-point.png" alt-text="Screenshot showing how to select restore point of log over snapshots for tenant database.":::
+
+ Ensure that the target VM and target disk resource group have relevant permissions using the PowerShell/CLI script.
+
+1. In **Restore Point**, choose **Select**.
+
+1. On the **Select restore point** pane, select the restore point and select **OK**.
+
+ >[!Note]
+ >The logs appear after the snapshot point that you previously restored.
+
+1. Select **OK**.
+
+>[!Note]
+>Ensure that you restore all tenant databases as per SAP HANA guidelines.
+
+## Cross region restore
+
+Managed disk snapshots aren't transferred to the Recovery Services vault. So, cross-region [restore is only possible via Backint stream backups](sap-hana-db-restore.md#cross-region-restore).
+
+## Next steps
+
+- [About SAP HANA database backup in Azure VMs](sap-hana-db-about.md).
+- [Manage SAP HANA database instances in Azure VMs (preview)](sap-hana-database-manage.md).
backup Sap Hana Database Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-manage.md
+
+ Title: Manage backed up SAP HANA databases on Azure VMs
+description: In this article, learn common tasks for managing and monitoring SAP HANA databases that are running on Azure virtual machines.
+ Last updated : 10/08/2022++++++
+# Manage and monitor backed up SAP HANA databases
+
+This article describes common tasks for managing and monitoring SAP HANA databases that are running on an Azure virtual machine (VM) and that are backed up to an Azure Backup Recovery Services vault by the [Azure Backup](./backup-overview.md) service. You'll learn how to monitor jobs and alerts, trigger an on-demand backup, edit policies, stop and resume database protection, and unregister a VM from backups.
+
+>[!Note]
+>Support for HANA Instance snapshot and support for HANA System Replication mode are in preview.
+
+If you haven't configured backups yet for your SAP HANA databases, see [Back up SAP HANA databases on Azure VMs](./backup-azure-sap-hana-database.md). Learn more about the [supported configurations and scenarios](sap-hana-backup-support-matrix.md).
+
+## Monitor manual backup jobs using the Azure portal
+
+Azure Backup shows all manually triggered jobs in the **Backup jobs** section in **Backup center**.
++
+The jobs you see in this portal include database discovery and registration, and backup and restore operations. Scheduled jobs, including log backups, aren't shown in this section. Manually triggered backups from the SAP HANA native clients (Studio/Cockpit/DBA Cockpit) also don't show up here.
++
+To learn more about monitoring, go to [Monitoring in the Azure portal](./backup-azure-monitoring-built-in-monitor.md) and [Monitoring using Azure Monitor](./backup-azure-monitoring-use-azuremonitor.md).
+
+## View backup alerts
+
+Alerts are an easy means of monitoring backups of SAP HANA databases. Alerts help you focus on the events you care about the most without getting lost in the multitude of events that a backup generates. Azure Backup allows you to set alerts, and they can be monitored as follows:
+
+* Sign in to the [Azure portal](https://portal.azure.com/).
+* On the vault dashboard, select **Backup Alerts**.
+
+ ![Backup alerts on vault dashboard](./media/sap-hana-db-manage/backup-alerts-dashboard.png)
+
+* You'll be able to see the alerts:
+
+ ![List of backup alerts](./media/sap-hana-db-manage/backup-alerts-list.png)
+
+* Select the alerts to see more details:
+
+ ![Alert details](./media/sap-hana-db-manage/alert-details.png)
+
+Today, Azure Backup allows the sending of alerts through email. These alerts are:
+
+* Triggered for all backup failures.
+* Consolidated at the database level by error code.
+* Sent only for a database's first backup failure.
+
+To learn more about monitoring, go to [Monitoring in the Azure portal](./backup-azure-monitoring-built-in-monitor.md) and [Monitoring using Azure Monitor](./backup-azure-monitoring-use-azuremonitor.md).
+
+## Manage operations
+
+Azure Backup makes management of a backed-up SAP HANA database easy with an abundance of management operations that it supports. These operations are discussed in more detail in the following sections.
+
+### Run an on-demand backup
+
+Backups run in accordance with the policy schedule. You can run a backup on-demand as follows:
+
+1. In the vault menu, select **Backup items**.
+1. In **Backup Items**, select the VM running the SAP HANA database, and then select **Backup now**.
+1. In **Backup Now**, choose the type of backup you want to perform. Then select **OK**.
+
+ The retention period of this backup is determined by the type of on-demand backup you have run.
+
+ - *On-demand full backups* are retained for a minimum of *45 days* and a maximum of *99 years*.
+ - *On-demand differential backups* are retained as per the *log retention set in the policy*.
+ - *On-demand incremental backups* aren't currently supported.
+
+1. Monitor the portal notifications. You can monitor the job progress in the vault dashboard > **Backup Jobs** > **In progress**. Depending on the size of your database, creating the initial backup may take a while.
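+
+If you prefer the command line, the following is a minimal Azure CLI sketch of the same operation; the resource group, vault, container, and item names are placeholders, and you should verify the parameters against the `az backup` reference for your CLI version.
+
+```bash
+# Find the container and item names of the protected HANA database.
+az backup item list \
+  --resource-group <rg-name> --vault-name <vault-name> \
+  --backup-management-type AzureWorkload --workload-type SAPHanaDatabase \
+  --output table
+
+# Trigger an on-demand backup; --backup-type can also be Differential or Log.
+az backup protection backup-now \
+  --resource-group <rg-name> --vault-name <vault-name> \
+  --container-name <container-name> --item-name <item-name> \
+  --backup-management-type AzureWorkload --workload-type SAPHanaDatabase \
+  --backup-type Full
+```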
+
+### HANA native client integration
+
+In the following sections, you'll learn how to trigger the backup and restore operations from non-Azure clients, such as HANA studio.
+
+>[!Note]
+>HANA native clients are integrated for Backint based operations only. Snapshots and HANA System Replication mode related operations are currently not supported.
+
+#### Backup
+
+On-demand backups triggered from any of the HANA native clients (to **Backint**) will show up in the backup list on the **Backup Instances** page.
+
+![Last backups run](./media/sap-hana-db-manage/last-backups.png)
+
+You can also [monitor these backups](#monitor-manual-backup-jobs-using-the-azure-portal) from the **Backup jobs** page.
+
+These on-demand backups will also show up in the list of restore points for restore.
+
+![List of restore points](./media/sap-hana-db-manage/list-restore-points.png)
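+
+For reference, the following is a minimal sketch of triggering a Backint-based on-demand backup from hdbsql. It assumes the hdbuserstore key created by the pre-registration script (`AZUREWLBACKUPHANAUSER`); the tenant name and backup prefix are placeholders.
+
+```bash
+# Trigger an on-demand Backint backup of a tenant database (run against the system database).
+hdbsql -U AZUREWLBACKUPHANAUSER -d SYSTEMDB \
+  "BACKUP DATA FOR <TENANT_DB> USING BACKINT ('on_demand_backup')"
+```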
+
+#### Restore
+
+Restores triggered from HANA native clients (using **Backint**) to restore to **the same machine** can be [monitored](#monitor-manual-backup-jobs-using-the-azure-portal) from the **Backup jobs** page.
+Restores triggered from HANA native clients to restore to another machine aren't allowed, because the Azure Backup service can't authenticate the target server for restore, as per Azure RBAC rules.
+
+#### Delete
+
+The delete operation from HANA native clients is **not** supported by Azure Backup, because the backup policy determines the lifecycle of backups in the Azure Recovery Services vault.
+
+### Change policy
+
+You can change the underlying policy for an SAP HANA backup item.
+
+>[!Note]
+>In the case of HANA snapshot, the new HANA instance policy can have a different resource group and/or another user-assigned managed identity. Currently, the Azure portal performs all validations during the backup configuration. So, you must assign the required roles on the new snapshot resource group and/or the new user-assigned identity using the [CLI scripts](https://github.com/Azure/Azure-Workload-Backup-Troubleshooting-Scripts/tree/main/SnapshotPreReqCLIScripts).
+
+In the **Backup center** dashboard, go to **Backup Instances**:
+
+* Choose **SAP HANA in Azure VM** as the datasource type.
+
+ :::image type="content" source="./media/sap-hana-db-manage/hana-backup-instances-inline.png" alt-text="Screenshot showing to choose SAP HANA in Azure VM." lightbox="./media/sap-hana-db-manage/hana-backup-instances-expanded.png":::
+
+* Choose the backup item whose underlying policy you want to change.
+* Select the existing Backup policy.
+
+ ![Select existing backup policy](./media/sap-hana-db-manage/existing-backup-policy.png)
+
+* Change the policy, choosing from the list. [Create a new backup policy](./backup-azure-sap-hana-database.md#create-a-backup-policy) if needed.
+
+ ![Choose policy from drop-down list](./media/sap-hana-db-manage/choose-backup-policy.png)
+
+* Save the changes.
+
+ ![Save the changes](./media/sap-hana-db-manage/save-changes.png)
+
+* Policy modification will impact all the associated Backup Items and trigger corresponding **configure protection** jobs.
+
+>[!NOTE]
+> Any change in the retention period will be applied retrospectively to all the older recovery points besides the new ones.
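+
+If you manage protection from the Azure CLI, the following is a minimal sketch of switching a protected database to another policy; the names are placeholders, and you should verify the parameters against the `az backup item set-policy` reference for your CLI version.
+
+```bash
+# Assign a different backup policy to a protected SAP HANA database.
+az backup item set-policy \
+  --resource-group <rg-name> --vault-name <vault-name> \
+  --container-name <container-name> --name <item-name> \
+  --backup-management-type AzureWorkload --workload-type SAPHanaDatabase \
+  --policy-name <new-policy-name>
+```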
+
+### Modify Policy
+
+To modify policy to change backup types, frequencies, and retention range, follow these steps:
+
+>[!NOTE]
+>- Any change in the retention period will be applied retroactively to all the older recovery points, in addition to the new ones.
+>
+>- In the case of HANA snapshot, you can edit the HANA instance policy to have a different resource group and/or another user-assigned managed identity. Currently, the Azure portal performs all validations during backup configuration only. So, you must assign the required roles on the new snapshot resource group and/or the new user-assigned identity using the [CLI scripts](https://github.com/Azure/Azure-Workload-Backup-Troubleshooting-Scripts/tree/main/SnapshotPreReqCLIScripts).
+
+1. In the **Backup center** dashboard, go to **Backup Policies** and choose the policy you want to edit.
+
+ :::image type="content" source="./media/sap-hana-db-manage/backup-center-policies-inline.png" alt-text="Screenshot showing to choose the policy to edit." lightbox="./media/sap-hana-db-manage/backup-center-policies-expanded.png":::
+
+1. Select **Modify**.
+
+ ![Select Modify](./media/sap-hana-db-manage/modify-policy.png)
+
+1. Choose the frequency for the backup types.
+
+ ![Choose backup frequency](./media/sap-hana-db-manage/choose-frequency.png)
+
+Policy modification will impact all the associated backup items and trigger corresponding **configure protection** jobs.
+
+### Inconsistent policy
+
+Occasionally, a modify policy operation can lead to an **inconsistent** policy version for some backup items. This happens when the corresponding **configure protection** job fails for a backup item after the modify policy operation is triggered. It appears as follows in the backup item view:
+
+![Inconsistent policy](./media/sap-hana-db-manage/inconsistent-policy.png)
+
+You can fix the policy version for all the impacted items with one click:
+
+![Fix policy version](./media/sap-hana-db-manage/fix-policy-version.png)
+
+### Stop protection for an SAP HANA database/ HANA Instance
+
+You can stop protecting an SAP HANA database in a couple of ways:
+
+* Stop all future backup jobs and delete all recovery points.
+* Stop all future backup jobs and leave the recovery points intact.
+
+If you choose to leave recovery points, keep these details in mind:
+
+* All recovery points will remain intact forever, and all pruning of recovery points stops when you stop protection with the retain data option.
+* You'll be charged for the protected instance and the consumed storage. For more information, see [Azure Backup pricing](https://azure.microsoft.com/pricing/details/backup/).
+* If you delete a data source without stopping backups, new backups will fail.
+
+>[!Note]
+>In the case of HANA instances, first stop protection of the HANA instance, and then stop protection of all related databases; otherwise, the stop protection operation will fail.
+
+To stop protection for a database:
+
+1. In the **Backup center** dashboard, select **Backup Instances**.
+1. Select **SAP HANA in Azure VM** as the datasource type.
+
+ :::image type="content" source="./media/sap-hana-db-manage/hana-backup-instances-inline.png" alt-text="Screenshot showing to select SAP HANA in Azure VM." lightbox="./media/sap-hana-db-manage/hana-backup-instances-expanded.png":::
+
+1. Select the database for which you want to stop protection.
+
+1. In the database menu, select **Stop backup**.
+
+ :::image type="content" source="./media/sap-hana-db-manage/stop-backup.png" alt-text="Screenshot showing to select stop backup.":::
+
+1. In the **Stop Backup** menu, select whether to retain or delete data. If you want, provide a reason and comment.
+
+ :::image type="content" source="./media/sap-hana-db-manage/retain-backup-data.png" alt-text="Screenshot showing to select retain or delete data.":::
+
+1. Select **Stop backup**.
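+
+The following is a minimal Azure CLI sketch of stopping protection while retaining data, and of resuming it later (as described in the next section); the names are placeholders, and you should verify the parameters against the `az backup protection` reference for your CLI version.
+
+```bash
+# Stop protection but keep the existing recovery points.
+az backup protection disable \
+  --resource-group <rg-name> --vault-name <vault-name> \
+  --container-name <container-name> --item-name <item-name> \
+  --backup-management-type AzureWorkload --workload-type SAPHanaDatabase \
+  --delete-backup-data false --yes
+
+# Resume protection later by associating a backup policy again.
+az backup protection resume \
+  --resource-group <rg-name> --vault-name <vault-name> \
+  --container-name <container-name> --item-name <item-name> \
+  --backup-management-type AzureWorkload --workload-type SAPHanaDatabase \
+  --policy-name <policy-name>
+```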
+
+### Resume protection for an SAP HANA database/ HANA Instance
+
+If you select the **Retain Backup Data** option when you stop protection for the SAP HANA database or SAP HANA instance, you can resume protection later. If you don't retain the backed-up data, you can't resume protection.
+
+To resume protection for an SAP HANA database:
+
+* Open the backup item and select **Resume backup**.
+
+ ![Select resume backup](./media/sap-hana-db-manage/resume-backup.png)
+
+* On the **Backup policy** menu, select a policy, and then select **Save**.
+
+### Upgrading from SDC to MDC
+
+Learn how to continue backup for an SAP HANA database [after upgrading from SDC to MDC](backup-azure-sap-hana-database-troubleshoot.md#sdc-to-mdc-upgrade-with-a-change-in-sid).
+
+### Upgrading from SDC to MDC without a SID change
+
+Learn how to continue backup of an SAP HANA database whose [SID hasn't changed after upgrade from SDC to MDC](backup-azure-sap-hana-database-troubleshoot.md#sdc-to-mdc-upgrade-with-no-change-in-sid).
+
+### Upgrading to a new version in either SDC or MDC
+
+Learn how to continue backup of an SAP HANA database [whose version is being upgraded](backup-azure-sap-hana-database-troubleshoot.md#sdc-version-upgrade-or-mdc-version-upgrade-on-the-same-vm).
+
+### Unregister an SAP HANA instance
+
+Unregister an SAP HANA instance after you disable protection but before you delete the vault:
+
+* On the vault dashboard, under **Manage**, select **Backup Infrastructure**.
+
+ ![Select Backup Infrastructure](./media/sap-hana-db-manage/backup-infrastructure.png)
+
+* Select the **Backup Management type** as **Workload in Azure VM**
+
+ ![Select the Backup Management type as Workload in Azure VM](./media/sap-hana-db-manage/backup-management-type.png)
+
+* In **Protected Servers**, select the instance to unregister. To delete the vault, you must unregister all servers/ instances.
+
+* Right-click the protected instance and select **Unregister**.
+
+ ![Select unregister](./media/sap-hana-db-manage/unregister.png)
+
+### Re-register extension on the SAP HANA server VM
+
+Sometimes, the workload extension on the VM might be affected for one reason or another. In such cases, all the operations triggered on the VM begin to fail. You might then need to re-register the extension on the VM. The re-register operation reinstalls the workload backup extension on the VM so that operations can continue.
+
+Use this option with caution: when triggered on a VM with an already healthy extension, this operation restarts the extension, which might cause all the in-progress jobs to fail. Check for one or more of the [symptoms](backup-azure-sap-hana-database-troubleshoot.md#re-registration-failures) before triggering the re-register operation.
+
+## Next steps
+
+* Learn how to [troubleshoot common issues when backing up SAP HANA databases.](./backup-azure-sap-hana-database-troubleshoot.md)
backup Sap Hana Database Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-restore.md
+
+ Title: Restore SAP HANA databases on Azure VMs
+description: In this article, discover how to restore SAP HANA databases that are running on Azure Virtual Machines. You can also use Cross Region Restore to restore your databases to a secondary region.
+ Last updated : 10/07/2022++++++
+# Restore SAP HANA databases on Azure VMs
+
+This article describes how to restore SAP HANA databases running on an Azure virtual machine (VM) that the Azure Backup service has backed up to a Recovery Services vault. You can use the restored data to create copies for development/test scenarios or to return to a previous state.
+
+Azure Backup now supports backup/restore of SAP HANA System Replication (HSR) databases (preview).
+
+>[!Note]
+>The restore process of HANA databases with HANA System Replication (HSR) is the same as restore of HANA databases without HSR. As per SAP advisories, you can restore databases with HANA System Replication mode as *standalone* databases. If the target system has the HANA System Replication mode enabled, first disable this mode, and then restore the database.
+
+For information about the supported configurations and scenarios, see [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md).
+
+## Restore to a point in time or to a recovery point
+
+Azure Backup can restore SAP HANA databases that are running on Azure VMs as follows:
+
+* Restore to a specific date or time (to the second) by using log backups. Azure Backup automatically determines the appropriate full and differential backups and the chain of log backups required for the restore, based on the selected time.
+
+* Restore to a specific full or differential backup to restore to a specific recovery point.
+
+## Prerequisites
+
+Before restoring a database, note the following:
+
+* You can restore the database only to an SAP HANA instance that's in the same region.
+
+* The target instance must be registered with the same vault as the source. [Learn more](backup-azure-sap-hana-database.md#discover-the-databases).
+
+* Azure Backup can't identify two different SAP HANA instances on the same VM. So restoring data from one instance to another on the same VM isn't possible.
+
+* To ensure that the target SAP HANA instance is ready for restore, check its **Backup readiness** status:
+
+ 1. In the Azure portal, go to **Backup center** and click **+Backup**.
+
+ :::image type="content" source="./media/sap-hana-db-restore/backup-center-configure-inline.png" alt-text="Screenshot showing to start the process to check if the target SAP HANA instance is ready for restore." lightbox="./media/sap-hana-db-restore/backup-center-configure-expanded.png":::
+
+ 1. Select **SAP HANA in Azure VM** as the datasource type, select the vault to which the SAP HANA instance is registered, and then click **Continue**.
+
+ :::image type="content" source="./media/sap-hana-db-restore/hana-select-vault.png" alt-text="Screenshot showing to select SAP HANA in Azure VM.":::
+
+ 1. Under **Discover DBs in VMs**, select **View details**.
+
+ :::image type="content" source="./media/sap-hana-db-restore/hana-discover-databases.png" alt-text="Screenshot showing to view database details.":::
+
+ 1. Review the **Backup Readiness** of the target VM.
+
+ :::image type="content" source="./media/sap-hana-db-restore/hana-select-virtual-machines-inline.png" alt-text="Screenshot showing protected servers." lightbox="./media/sap-hana-db-restore/hana-select-virtual-machines-expanded.png":::
+
+* To learn more about the restore types that SAP HANA supports, see SAP HANA Note [1642148](https://launchpad.support.sap.com/#/notes/1642148).
+
+## Restore a database
+
+To restore, you need the following permissions (a CLI sketch for assigning them follows this list):
+
+* **Backup Operator** permissions in the vault where you're doing the restore.
+* **Contributor (write)** access to the source VM that's backed up.
+* **Contributor (write)** access to the target VM:
+ * If you're restoring to the same VM, this is the source VM.
+ * If you're restoring to an alternate location, this is the new target VM.
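+
+The following is a minimal Azure CLI sketch for assigning these roles; the object ID, subscription, resource group, vault, and VM names are placeholders.
+
+```bash
+# Grant Backup Operator on the vault to the user or identity that performs the restore.
+az role assignment create --assignee <user-or-identity-object-id> \
+  --role "Backup Operator" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<vault-rg>/providers/Microsoft.RecoveryServices/vaults/<vault-name>"
+
+# Grant Contributor (write) access on the target VM.
+az role assignment create --assignee <user-or-identity-object-id> \
+  --role "Contributor" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<vm-rg>/providers/Microsoft.Compute/virtualMachines/<target-vm-name>"
+```
+
+Then follow these steps in the Azure portal: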
+
+1. In the Azure portal, go to **Backup center** and click **Restore**.
+
+ :::image type="content" source="./media/sap-hana-db-restore/backup-center-restore-inline.png" alt-text="Screenshot showing to start restoring an SAP HANA database." lightbox="./media/sap-hana-db-restore/backup-center-restore-expanded.png":::
+
+1. Select **SAP HANA in Azure VM** as the datasource type, select the database you wish to restore, and then click **Continue**.
+
+ :::image type="content" source="./media/sap-hana-db-restore/hana-restore-select-database.png" alt-text="Screenshot showing to restore the backup items.":::
+
+1. Under **Restore Configuration**, specify where (or how) to restore data:
+
+ * **Alternate Location**: Restore the database to an alternate location and keep the original source database.
+
+ * **Overwrite DB**: Restore the data to the same SAP HANA instance as the original source. This option overwrites the original database.
+
+ :::image type="content" source="./media/sap-hana-db-restore/hana-restore-configuration.png" alt-text="Screenshot showing to restore configuration.":::
+
+### Restore to alternate location
+
+1. In the **Restore Configuration** menu, under **Where to Restore**, select **Alternate Location**.
+
+ :::image type="content" source="./media/sap-hana-db-restore/hana-alternate-location-recovery.png" alt-text="Screenshot showing to restore to alternate location.":::
+
+1. Select the SAP HANA host name and instance name to which you want to restore the database.
+1. Check that the target SAP HANA instance is ready for restore by verifying its **Backup Readiness**. For more information, see the [prerequisites section](#prerequisites).
+1. In the **Restored DB Name** box, enter the name of the target database.
+
+ > [!NOTE]
+ > Single Database Container (SDC) restores must follow these [checks](backup-azure-sap-hana-database-troubleshoot.md#single-container-database-sdc-restore).
+
+1. If applicable, select **Overwrite if the DB with the same name already exists on selected HANA instance**.
+
+1. In **Select restore point**, select **Logs (Point in Time)** to [restore to a specific point in time](#restore-to-a-specific-point-in-time). Or select **Full & Differential** to [restore to a specific recovery point](#restore-to-a-specific-recovery-point).
+
+### Restore and overwrite
+
+1. In the **Restore Configuration** menu, under **Where to Restore**, select **Overwrite DB** > **OK**.
+
+ :::image type="content" source="./media/sap-hana-db-restore/hana-overwrite-database.png" alt-text="Screenshot showing to overwrite database.":::
+
+1. In **Select restore point**, select **Logs (Point in Time)** to [restore to a specific point in time](#restore-to-a-specific-point-in-time). Or select **Full & Differential** to [restore to a specific recovery point](#restore-to-a-specific-recovery-point).
+
+### Restore as files
+
+>[!Note]
+>Restore as files doesn't work on CIFS shares, but it works for NFS.
+
+To restore the backup data as files instead of a database, choose **Restore as Files**. Once the files are dumped to a specified path, you can take these files to any SAP HANA machine where you want to restore them as a database. Because you can move these files to any machine, you can now restore the data across subscriptions and regions.
+
+1. In the **Restore Configuration** menu, under **Where and how to Restore**, select **Restore as files**.
+1. Select the **host** / HANA Server name to which you want to restore the backup files.
+1. In the **Destination path on the server**, enter the folder path on the server selected in step 2. This is the location where the service will dump all the necessary backup files.
+
+ The files that are dumped are:
+
+ * Database backup files
+ * JSON metadata files (for each backup file that's involved)
+
+   Typically, specifying a network share path, or the path of a mounted Azure file share, as the destination path enables easier access to these files by other machines in the same network, or by machines that have the same Azure file share mounted on them.
+
+ >[!NOTE]
+   >To restore the database backup files on an Azure file share mounted on the target registered VM, make sure that the root account has read/write permissions on the Azure file share.
+
+ :::image type="content" source="./media/sap-hana-db-restore/hana-restore-as-files.png" alt-text="Screenshot showing to choose destination path.":::
+
+1. Select the **Restore Point** for which all the backup files and folders will be restored.
+
+ :::image type="content" source="./media/sap-hana-db-restore/hana-select-recovery-point-inline.png" alt-text="Screenshot showing to select restore point." lightbox="./media/sap-hana-db-restore/hana-select-recovery-point-expanded.png":::
+
+1. All the backup files associated with the selected restore point are dumped into the destination path.
+1. Based on the type of restore point chosen (**Point in time** or **Full & Differential**), you'll see one or more folders created in the destination path. One of the folders, named `Data_<date and time of restore>`, contains the full backups, and the other folder, named `Log`, contains the log backups and other backups (such as differential and incremental backups).
+
+ >[!Note]
+ >If you've selected **Restore to a point in time**, the log files (dumped to the target VM) may sometimes contain logs beyond the point-in-time chosen for restore. Azure Backup does this to ensure that log backups for all HANA services are available for consistent and successful restore to the chosen point-in-time.
+
+1. Move these restored files to the SAP HANA server where you want to restore them as a database.
+1. Then follow these steps:
+ 1. Set permissions on the folder / directory where the backup files are stored using the following command:
+
+ ```bash
+ chown -R <SID>adm:sapsys <directory>
+ ```
+
+   1. Run the next set of commands as `<SID>adm`:
+
+ ```bash
+ su - <sid>adm
+ ```
+
+   1. Generate the catalog file for restore. Extract the **BackupId** from the JSON metadata file for the full backup; it's used later in the restore operation. Make sure that the full and log backups are in different folders (log backups aren't present for a full backup recovery), and delete the JSON metadata files in these folders.
+
+ ```bash
+ hdbbackupdiag --generate --dataDir <DataFileDir> --logDirs <LogFilesDir> -d <PathToPlaceCatalogFile>
+ ```
+
+ In the command above:
+
+ * `<DataFileDir>` - the folder that contains the full backups.
+   * `<LogFilesDir>` - the folder that contains the log backups, and the differential and incremental backups. For a full backup restore, the Log folder isn't created; add an empty directory in that case.
+   * `<PathToPlaceCatalogFile>` - the folder where the generated catalog file must be placed.
+
+   1. Restore by using the newly generated catalog file through HANA Studio, or run the HDBSQL restore query with this newly generated catalog. The HDBSQL queries are listed below:
+
+   * To open the hdbsql prompt, run the following command:
+
+ ```bash
+ hdbsql -U AZUREWLBACKUPHANAUSER -d systemDB
+ ```
+
+ * To restore to a point-in-time:
+
+ If you're creating a new restored database, run the HDBSQL command to create a new database `<DatabaseName>` and then stop the database for restore using the command `ALTER SYSTEM STOP DATABASE <db> IMMEDIATE`. However, if you're only restoring an existing database, run the HDBSQL command to stop the database.
+
+ Then run the following command to restore the database:
+
+ ```hdbsql
+ RECOVER DATABASE FOR <db> UNTIL TIMESTAMP <t1> USING CATALOG PATH <path> USING LOG PATH <path> USING DATA PATH <path> USING BACKUP_ID <bkId> CHECK ACCESS USING FILE
+ ```
+
+ * `<DatabaseName>` - Name of the new database or existing database that you want to restore
+ * `<Timestamp>` - Exact timestamp of the Point in time restore
+ * `<DatabaseName@HostName>` - Name of the database whose backup is used for restore and the **host** / SAP HANA server name on which this database resides. The `USING SOURCE <DatabaseName@HostName>` option specifies that the data backup (used for restore) is of a database with a different SID or name than the target SAP HANA machine. So, it doesn't need to be specified for restores done on the same HANA server from where the backup is taken.
+ * `<PathToGeneratedCatalogInStep3>` - Path to the catalog file generated in **Step C**
+ * `<DataFileDir>` - the folder that contains the full backups
+ * `<LogFilesDir>` - the folder that contains the log backups, differential and incremental backups (if any)
+ * `<BackupIdFromJsonFile>` - the **BackupId** extracted in **Step C**
+
+ * To restore to a particular full or differential backup:
+
+ If you're creating a new restored database, run the HDBSQL command to create a new database `<DatabaseName>` and then stop the database for restore using the command `ALTER SYSTEM STOP DATABASE <db> IMMEDIATE`. However, if you're only restoring an existing database, run the HDBSQL command to stop the database:
+
+ ```hdbsql
+ RECOVER DATA FOR <DatabaseName> USING BACKUP_ID <BackupIdFromJsonFile> USING SOURCE '<DatabaseName@HostName>' USING CATALOG PATH ('<PathToGeneratedCatalogInStep3>') USING DATA PATH ('<DataFileDir>') CLEAR LOG
+ ```
+
+ * `<DatabaseName>` - the name of the new database or existing database that you want to restore
+ * `<Timestamp>` - the exact timestamp of the Point in time restore
+ * `<DatabaseName@HostName>` - the name of the database whose backup is used for restore and the **host** / SAP HANA server name on which this database resides. The `USING SOURCE <DatabaseName@HostName>` option specifies that the data backup (used for restore) is of a database with a different SID or name than the target SAP HANA machine. So it need not be specified for restores done on the same HANA server from where the backup is taken.
+ * `<PathToGeneratedCatalogInStep3>` - the path to the catalog file generated in **Step C**
+ * `<DataFileDir>` - the folder that contains the full backups
+ * `<LogFilesDir>` - the folder that contains the log backups, differential and incremental backups (if any)
+ * `<BackupIdFromJsonFile>` - the **BackupId** extracted in **Step C**
+ * To restore using backup ID:
+
+ ```hdbsql
+ RECOVER DATA FOR <db> USING BACKUP_ID <bkId> USING CATALOG PATH <path> USING LOG PATH <path> USING DATA PATH <path> CHECK ACCESS USING FILE
+ ```
+
+ Examples:
+
+   SAP HANA SYSTEM restoration on the same server
+
+ ```hdbsql
+ RECOVER DATABASE FOR SYSTEM UNTIL TIMESTAMP '2022-01-12T08:51:54.023' USING CATALOG PATH ('/restore/catalo_gen') USING LOG PATH ('/restore/Log/') USING DATA PATH ('/restore/Data_2022-01-12_08-51-54/') USING BACKUP_ID 1641977514020 CHECK ACCESS USING FILE
+ ```
+
+   SAP HANA tenant restoration on the same server
+
+ ```hdbsql
+ RECOVER DATABASE FOR DHI UNTIL TIMESTAMP '2022-01-12T08:51:54.023' USING CATALOG PATH ('/restore/catalo_gen') USING LOG PATH ('/restore/Log/') USING DATA PATH ('/restore/Data_2022-01-12_08-51-54/') USING BACKUP_ID 1641977514020 CHECK ACCESS USING FILE
+ ```
+
+   SAP HANA SYSTEM restoration on a different server
+
+ ```hdbsql
+ RECOVER DATABASE FOR SYSTEM UNTIL TIMESTAMP '2022-01-12T08:51:54.023' USING SOURCE <sourceSID> USING CATALOG PATH ('/restore/catalo_gen') USING LOG PATH ('/restore/Log/') USING DATA PATH ('/restore/Data_2022-01-12_08-51-54/') USING BACKUP_ID 1641977514020 CHECK ACCESS USING FILE
+ ```
+
+   SAP HANA tenant restoration on a different server
+
+ ```hdbsql
+ RECOVER DATABASE FOR DHI UNTIL TIMESTAMP '2022-01-12T08:51:54.023' USING SOURCE <sourceSID> USING CATALOG PATH ('/restore/catalo_gen') USING LOG PATH ('/restore/Log/') USING DATA PATH ('/restore/Data_2022-01-12_08-51-54/') USING BACKUP_ID 1641977514020 CHECK ACCESS USING FILE
+ ```
+
+### Partial restore as files
+
+The Azure Backup service decides the chain of files to be downloaded during a restore as files operation. But there are scenarios where you might not want to download the entire content again.
+
+For example, say you have a backup policy of weekly full backups, daily differential backups, and logs, and you've already downloaded the files for a particular differential backup. You then find that this isn't the right recovery point and decide to download the next day's differential backup. Now you only need the differential file, because you already have the starting full backup. With the partial restore as files capability provided by Azure Backup, you can exclude the full backup from the download chain and download only the differential backup.
+
+#### Excluding backup file types
+
+The **ExtensionSettingOverrides.json** file is a JSON (JavaScript Object Notation) file that contains overrides for multiple settings of the Azure Backup service for SAP HANA. For a "Partial Restore as files" operation, a new JSON field `RecoveryPointsToBeExcludedForRestoreAsFiles` must be added. This field holds a string value that denotes which recovery point types should be excluded in the next restore as files operation.
+
+1. In the target machine where the files are to be downloaded, go to the `opt/msawb/bin` folder.
+2. Create a new JSON file named `ExtensionSettingOverrides.json` in that folder, if it doesn't already exist.
+3. Add the following JSON key-value pair:
+
+ ```json
+ {
+ "RecoveryPointsToBeExcludedForRestoreAsFiles": "ExcludeFull"
+ }
+ ```
+
+4. Change the permissions and ownership of the file as follows:
+
+ ```bash
+ chmod 750 ExtensionSettingsOverrides.json
+ chown root:msawb ExtensionSettingsOverrides.json
+ ```
+
+5. No restart of any service is required. The Azure Backup service will attempt to exclude backup types in the restore chain as mentioned in this file.
+
+The `RecoveryPointsToBeExcludedForRestoreAsFiles` field only takes specific values, which denote the recovery points to be excluded during restore. For SAP HANA, these values are:
+
+- `ExcludeFull` (other backup types, such as differential, incremental, and log backups, will be downloaded if they're present in the restore point chain)
+- `ExcludeFullAndDifferential` (other backup types, such as incremental and log backups, will be downloaded if they're present in the restore point chain)
+- `ExcludeFullAndIncremental` (other backup types, such as differential and log backups, will be downloaded if they're present in the restore point chain)
+- `ExcludeFullAndDifferentialAndIncremental` (other backup types, such as log backups, will be downloaded if they're present in the restore point chain)
+
+### Restore to a specific point in time
+
+If you've selected **Logs (Point in Time)** as the restore type, do the following:
+
+1. Select a recovery point from the log graph and select **OK** to choose the point of restore.
+
+ ![Restore point](media/sap-hana-db-restore/restore-point.png)
+
+1. On the **Restore** menu, select **Restore** to start the restore job.
+
+ ![Select restore](media/sap-hana-db-restore/restore-restore.png)
+
+1. Track the restore progress in the **Notifications** area or track it by selecting **Restore jobs** on the database menu.
+
+ ![Restore triggered successfully](media/sap-hana-db-restore/restore-triggered.png)
+
+### Restore to a specific recovery point
+
+If you've selected **Full & Differential** as the restore type, do the following:
+
+1. Select a recovery point from the list and select **OK** to choose the point of restore.
+
+ ![Restore specific recovery point](media/sap-hana-db-restore/specific-recovery-point.png)
+
+1. On the **Restore** menu, select **Restore** to start the restore job.
+
+ ![Start restore job](media/sap-hana-db-restore/restore-specific.png)
+
+1. Track the restore progress in the **Notifications** area or track it by selecting **Restore jobs** on the database menu.
+
+ ![Restore progress](media/sap-hana-db-restore/restore-progress.png)
+
+ > [!NOTE]
+   > In Multiple Database Container (MDC) restores, after the system DB is restored to a target instance, you need to run the pre-registration script again. Only then will the subsequent tenant DB restores succeed. To learn more, see [Troubleshooting - MDC Restore](backup-azure-sap-hana-database-troubleshoot.md#multiple-container-database-mdc-restore).
+
+## Cross Region Restore
+
+As one of the restore options, Cross Region Restore (CRR) allows you to restore SAP HANA databases hosted on Azure VMs in a secondary region, which is an Azure paired region.
+
+To onboard to the feature, read the [Before You Begin section](./backup-create-rs-vault.md#set-cross-region-restore).
+
+To see if CRR is enabled, follow the instructions in [Configure Cross Region Restore](backup-create-rs-vault.md#set-cross-region-restore)
+
+### View backup items in secondary region
+
+If CRR is enabled, you can view the backup items in the secondary region.
+
+1. From the portal, go to **Recovery Services vault** > **Backup items**.
+1. Select **Secondary Region** to view the items in the secondary region.
+
+>[!NOTE]
+>Only Backup Management Types that support the CRR feature are shown in the list. Currently, only restoring secondary-region data to the secondary region is supported.
+
+![Backup items in secondary region](./media/sap-hana-db-restore/backup-items-secondary-region.png)
+
+![Databases in secondary region](./media/sap-hana-db-restore/databases-secondary-region.png)
+
+### Restore in secondary region
+
+The secondary region restore user experience is similar to the primary region restore experience. When you configure the details in the **Restore Configuration** pane, you're prompted to provide only secondary region parameters. A vault should exist in the secondary region, and the SAP HANA server should be registered to the vault in the secondary region.
+
+![Where and how to restore](./media/sap-hana-db-restore/restore-secondary-region.png)
+
+![Trigger restore in progress notification](./media/backup-azure-arm-restore-vms/restorenotifications.png)
+
+>[!NOTE]
+>* After the restore is triggered and in the data transfer phase, the restore job can't be cancelled.
+>* The roles required to perform the restore operation across regions are the _Backup Operator_ role in the subscription and _Contributor (write)_ access on the source and target virtual machines. To view backup jobs, _Backup Reader_ is the minimum permission required in the subscription.
+>* The RPO for the backup data to be available in secondary region is 12 hours. Therefore, when you turn on CRR, the RPO for the secondary region is 12 hours + log frequency duration (that can be set to a minimum of 15 minutes).
+
+### Monitoring secondary region restore jobs
+
+1. In the Azure portal, go to **Backup center** > **Backup Jobs**.
+1. Filter **Operation** for the value **CrossRegionRestore** to view the jobs in the secondary region. A CLI-based sketch follows the screenshot.
+
+ :::image type="content" source="./media/sap-hana-db-restore/hana-view-jobs-inline.png" alt-text="Screenshot showing filtered Backup jobs." lightbox="./media/sap-hana-db-restore/hana-view-jobs-expanded.png":::
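+
+The following is a minimal Azure CLI sketch, assuming the `--use-secondary-region` parameter of `az backup job list`; the resource group and vault names are placeholders.
+
+```bash
+# List backup jobs tracked in the secondary region for the vault.
+az backup job list \
+  --resource-group <rg-name> --vault-name <vault-name> \
+  --use-secondary-region --output table
+```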
+
+## Next steps
+
+- [Learn how](sap-hana-db-manage.md) to manage SAP HANA databases that are backed up by using Azure Backup.
+- [About backup of SAP HANA databases in Azure VMs](sap-hana-database-about.md).
backup Sap Hana Database With Hana System Replication Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-with-hana-system-replication-backup.md
+
+ Title: Back up SAP HANA System Replication database on Azure VMs (preview)
+description: In this article, discover how to back up SAP HANA database with HANA System Replication enabled.
+ Last updated : 10/05/2022++++++
+# Back up SAP HANA System Replication databases in Azure VMs (preview)
+
+SAP HANA databases are critical workloads that require a low recovery-point objective (RPO) and long-term retention. You can back up SAP HANA databases running on Azure virtual machines (VMs) by using [Azure Backup](backup-overview.md).
+
+This article describes how to back up SAP HANA System Replication (HSR) databases running in Azure VMs to an Azure Backup Recovery Services vault.
+
+In this article, you'll learn how to:
+
+>[!div class="checklist"]
+>- Create and configure a Recovery Services vault
+>- Create a policy
+>- Discover databases
+>- Run the pre-registration script
+>- Configure backups
+>- Run an on-demand backup
+>- Run SAP HANA Studio backup
+
+>[!Note]
+>See [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) for more information about the supported configurations and scenarios.
+
+## Prerequisites
+
+- Identify/create a Recovery Services vault in the same region and subscription as the two VMs/nodes of the HSR.
+- Allow connectivity from each of the VMs/nodes to the internet for communication with Azure.
+
+>[!Important]
+>Ensure that the combined length of the SAP HANA Server VM name and the Resource Group name doesn't exceed 84 characters for Azure Resource Manager (ARM) VMs and 77 characters for classic VMs. This is because some characters are reserved by the service.
+++
+## Discover the databases
+
+To discover the HSR database, follow these steps:
+
+1. In the Azure portal, go to **Backup center** and select **+ Backup**.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/initiate-database-discovery.png" alt-text="Screenshot showing about how to start database discovery.":::
+
+1. Select **SAP HANA in Azure VM** as the data source type, select the Recovery Services vault to use for the backup, and then select **Continue**.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/configure-backup.png" alt-text="Screenshot showing how to configure database backup.":::
+
+1. Select **Start Discovery**.
+
+ This initiates discovery of unprotected Linux VMs in the vault region.
+ - After discovery, unprotected VMs appear in the portal, listed by name and resource group.
+ - If a VM isn't listed as expected, check whether it's already backed up in a vault.
+ - Multiple VMs can have the same name, but they belong to different resource groups.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/discover-hana-database.png" alt-text="Screenshot showing how to discover the HANA database.":::
+
+1. In **Select Virtual Machines**, select the *link to download the script* that provides permissions to the Azure Backup service to access the SAP HANA VMs for database discovery.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/download-script.png" alt-text="Screenshot showing the link location to download the script.":::
+
+1. Run the script on each VM hosting SAP HANA databases that you want to back up.
+
+1. After running the script on the VMs, in **Select Virtual Machines**, select the VMs > **Discover DBs**.
+
+ Azure Backup discovers all SAP HANA databases on the VM. During discovery, Azure Backup registers the VM with the vault, and installs an extension on the VM. It doesn't install any agent on the database.
+
+   To view the details of all the databases on each discovered VM, select **View details** under the **Step 1: Discover DBs in VMs** section.
+
+## Run the pre-registration script
+
+1. When a failover occurs, the users are replicated to the new primary node, but the *hdbuserstore* isn't replicated. So, you need to create the same key on all nodes of the HSR setup, which allows the Azure Backup service to connect to any new primary node automatically, without manual intervention.
+
+1. Create a custom backup user in the HANA system with the following roles and permissions:
+
+ | Role | Permission | Description |
+ | | | |
+ | MDC | DATABASE ADMIN and BACKUP ADMIN (HANA 2.0 SPS05 and higher) | Creates new databases during restore. |
+ | SDC | BACKUP ADMIN | Reads the backup catalog. |
+   | SAP_INTERNAL_HANA_SUPPORT |  | Accesses a few private tables. <br><br> This is only required for SDC and MDC versions lower than HANA 2.0 SPS04 Rev 46. It isn't required for HANA 2.0 SPS04 Rev 46 and higher, because the required information is now received from public tables after the fix from the HANA team. |
+
+1. Add the key to *hdbuserstore* for your custom backup user; the key enables the HANA backup plug-in to manage all operations (database queries, restore operations, configuring, and running backups). Pass the custom backup user key to the script as a parameter: `-bk CUSTOM_BACKUP_KEY_NAME` or `-backup-key CUSTOM_BACKUP_KEY_NAME`. If the password of this custom backup key expires, backup and restore operations can fail.
+
+1. Create the same custom backup user (with the same password) and key (in *hdbuserstore*) on both nodes/VMs.
+
+1. Run the SAP HANA backup configuration script (pre-registration script) in the VMs where HANA is installed as the root user. This script sets up the HANA system for backup. For more information about the script actions, see the [What the pre-registration script does](tutorial-backup-sap-hana-db.md#what-the-pre-registration-script-does) section.
+
+1. There's no HANA generated unique ID for an HSR setup. So, you need to provide a unique ID that helps the backup service to group all nodes of an HSR as a single data source. Provide a unique HSR ID as input to the script: `-hn HSR_UNIQUE_VALUE` or `--hsr-unique-value HSR_Unique_Value`. You must provide the same HSR ID on both the VMs/nodes. This ID must be unique within a vault and should be an alphanumeric value (containing at least one digit, lower-case, and upper-case character) with a length of *6* to *35* characters.
+
+1. When you run the pre-registration script on the secondary node, you must specify the SDC/MDC port as input, because SQL commands to identify the SDC/MDC setup can't be run on the secondary node. Provide the port number as a parameter: `-p PORT_NUMBER` or `--port_number PORT_NUMBER`.
+
+   - For MDC, use the format `3<instancenumber>13`.
+   - For SDC, use the format `3<instancenumber>15`.
+1. If your HANA setup uses private endpoints, run the pre-registration script with the `-sn` or `--skip-network-checks` parameter. After the pre-registration script runs successfully, proceed to the next steps.
+
+To set up the database for backup, see the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and the [What the pre-registration script does](tutorial-backup-sap-hana-db.md#what-the-pre-registration-script-does) sections. A consolidated command sketch of the user, key, and script steps follows.
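+
+The following is a minimal sketch that consolidates the user, key, and script steps on one node. The backup user, key name, password, instance number, HSR ID, and the pre-registration script file name are placeholders; create the same user and key on both nodes, and run the script as root.
+
+```bash
+# 1. Create the custom backup user in the system database with the roles listed in the table above.
+hdbsql -i <InstanceNumber> -d SYSTEMDB -u SYSTEM \
+  "CREATE USER <BACKUP_USER> PASSWORD '<StrongPassword>' NO FORCE_FIRST_PASSWORD_CHANGE"
+hdbsql -i <InstanceNumber> -d SYSTEMDB -u SYSTEM \
+  "GRANT DATABASE ADMIN, BACKUP ADMIN TO <BACKUP_USER>"
+
+# 2. Store the same key on BOTH nodes so the backup plug-in can reconnect after a failover.
+hdbuserstore SET <CUSTOM_BACKUP_KEY_NAME> <hostname>:3<InstanceNumber>13 <BACKUP_USER> '<StrongPassword>'
+
+# 3. Run the downloaded pre-registration script as root on the primary node.
+./<pre-registration-script> -bk <CUSTOM_BACKUP_KEY_NAME> -hn <HSR_UNIQUE_VALUE>
+
+# 4. Run it on the secondary node, adding the SDC/MDC port (MDC format shown; use 3<InstanceNumber>15 for SDC).
+./<pre-registration-script> -bk <CUSTOM_BACKUP_KEY_NAME> -hn <HSR_UNIQUE_VALUE> -p 3<InstanceNumber>13
+```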
+
+## Configure backup
+
+To enable the backup, follow these steps:
+
+1. In **Step 2**, select **Configure Backup**.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/configure-database-backup.png" alt-text="Screenshot showing how to start backup configuration.":::
+
+1. In **Select items to back up**, select all the databases you want to protect and select **OK**.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/select-virtual-machines-for-protection.png" alt-text="Screenshot showing how to select virtual machines for protection.":::
+
+1. Select **Backup Policy** > **+Add**.
+1. Choose and configure a backup policy type and select **Add** to create a new backup policy for the databases.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/create-backup-policy.png" alt-text="Screenshot showing how to select and add a backup policy.":::
+
+1. After creating the policy, on the **Backup** menu, select **Enable backup**.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/enable-backup.png" alt-text="Screenshot showing how to enable backup of database.":::
+
+1. Track the backup configuration progress under **Notifications** in Azure portal.
+
+## Create a backup policy
+
+A backup policy defines the backup schedules and the backup retention duration.
+
+>[!Note]
+>- A policy is created at the vault level.
+>- Multiple vaults can use the same backup policy, but you must apply the backup policy to each vault.
+>- Azure Backup doesn't automatically adjust for daylight saving time changes when backing up a SAP HANA database running in an Azure VM. Modify the policy manually as needed.
+
+To configure the policy settings, follow these steps:
+
+1. In **Policy name**, enter a name for the new policy.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/add-policy-name.png" alt-text="Screenshot showing how to add a policy name.":::
+
+1. In **Full Backup policy**, for **Backup Frequency**, select **Daily** or **Weekly**.
+
+ - **Daily**: Select the hour and time zone in which the backup job must begin.
+ - You must run a full backup. You can't turn off this option.
+ - Select **Full Backup** to view the policy.
+ - You can't create differential backups for daily full backups.
+
+ - **Weekly**: Select the day of the week, hour, and time zone in which the backup job must run.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/select-backup-frequency.png" alt-text="Screenshot showing how to configure backup frequency.":::
+
+1. In **Retention Range**, configure retention settings for the full backup.
+
+ - By default, all options are selected. Clear any retention range limits you don't want to use and set them as required.
+ - The minimum retention period for any type of backup (full/differential/log) is seven days.
+ - Recovery points are tagged for retention based on their retention range. For example, if you select a daily full backup, only one full backup is triggered each day.
+ - The backup data for a specific day is tagged and retained based on the weekly retention range and settings.
+
+1. In the **Full Backup** policy menu, select **OK** to save the policy settings.
+1. Select **Differential Backup** to add a differential policy.
+1. In **Differential Backup policy**, select **Enable** to open the frequency and retention controls.
+
+ - You can trigger a maximum of one differential backup per day.
+ - You can retain differential backups for a maximum of 180 days. If you need a longer retention, you must use full backups.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/configure-differential-backup-policy.png" alt-text="Screenshot showing how to configure differential backup policy for database.":::
+
+ >[!Note]
+   >At any point, you can choose either a differential or an incremental backup as the daily backup, but not both.
+
+1. In **Incremental Backup policy**, select **Enable** to open the frequency and retention controls.
+
+ - You can trigger a maximum of one incremental backup per day.
+ - You can retain incremental backups for a maximum of 180 days. If you need a longer retention, you must use full backups.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/enable-incremental-backup-policy.png" alt-text="Screenshot showing how to enable incremental backup policy.":::
+
+1. Select **OK** to save the policy and return to the main **Backup policy** menu.
+1. Select **Log Backup** to add a transactional log backup policy.
+
+ - In **Log Backup**, select **Enable**.
+
+ You can't disable this option, because SAP HANA manages all log backups.
+
+ - Set the frequency and retention controls.
+
+ >[!Note]
+      >Log backup streaming begins only after a successful full backup is complete.
+
+1. Select **OK** to save the policy and return to the main **Backup policy** menu.
+1. After the backup policy configuration is complete, select **OK**.
+
+>[!Note]
+>All log backups are chained to the previous full backup to form a recovery chain. A full backup is retained until the last log backup that depends on it expires. This means the full backup is retained for an extra period to ensure that all logs can be recovered.
+>
+>For example, consider a weekly full backup, daily differential backups, and log backups every 2 hours, all retained for 30 days. The weekly full backup is deleted only after the next full backup is available, that is, after 30 + 7 days.
+>
+>If a weekly full backup happens on November 16, then, as per the retention policy, it should be retained until *December 16*. The last log backup that depends on this full backup happens before the next scheduled full backup, on *November 22*. Until this log expires on *December 22*, the November 16 full backup isn't deleted. So, the November 16 full backup is retained until *December 22*.
+
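+If you prefer to create the policy programmatically, the following Azure CLI sketch uses an existing SAP HANA policy in the vault as a JSON template. The resource group, vault, and policy names are placeholders, and the edited JSON must keep the schema of an AzureWorkload SAP HANA policy.
+
+```bash
+# Export an existing SAP HANA policy as a JSON template (placeholder names).
+az backup policy show \
+    --resource-group hanaRG --vault-name hanaVault \
+    --name hanaPolicy > hana-policy.json
+
+# Edit hana-policy.json (schedule, retention), then create a new policy from it.
+az backup policy create \
+    --resource-group hanaRG --vault-name hanaVault \
+    --name hanaHsrPolicy \
+    --backup-management-type AzureWorkload --workload-type SAPHANA \
+    --policy hana-policy.json
+```
+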
+## Run an on-demand backup
+
+The backup operations run in accordance with the policy schedule. You can also run an on-demand backup.
+
+To run an on-demand backup, follow these steps:
+
+1. In the vault menu, select **Backup items**.
+
+1. In **Backup Items**, select the *VM running the SAP HANA database* > **Backup now**.
+
+1. In **Backup now**, choose the *type of backup* you want to perform, and then select **OK**.
+
+ This backup will be retained for 45 days.
+
+To monitor the portal notifications and the job progress in the vault dashboard, select **Backup Jobs** > **In progress**.
+
+>[!Note]
+>The time taken to create the initial backup depends on the size of your database.
+
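+You can also trigger an on-demand backup with the Azure CLI. The following is a sketch with placeholder resource group, vault, container, and item names; the container and item name formats shown follow the conventions used for standalone HANA backups.
+
+```bash
+# List protected HANA items to find the container and item names (placeholder names).
+az backup item list \
+    --resource-group hanaRG --vault-name hanaVault \
+    --backup-management-type AzureWorkload --workload-type SAPHANA --output table
+
+# Trigger an on-demand full backup, retained until the given date (dd-mm-yyyy).
+az backup protection backup-now \
+    --resource-group hanaRG --vault-name hanaVault \
+    --backup-management-type AzureWorkload \
+    --container-name "VMAppContainer;Compute;hanaRG;hanaVM" \
+    --item-name "SAPHanaDatabase;hxe;systemdb" \
+    --backup-type Full --retain-until 01-01-2024
+```
+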
+## Run SAP HANA Studio backup on a database with Azure Backup enabled
+
+To take a local backup (by using HANA Studio) of a database that's backed up by Azure Backup, follow these steps:
+
+1. After full or log backups for the database are complete, check the status in *SAP HANA Studio* or *SAP HANA Cockpit*.
+
+1. Disable the log backups and set the backup catalog to the file system for the relevant database (a scripted alternative using `hdbsql` appears after these steps).
+
+ To do so, double-click **systemdb** > **Configuration** > **Select Database** > **Filter (Log)** and:
+ - Set **enable_auto_log_backup** to **No**.
+ - Set **log_backup_using_backint** to **False**.
+ - Set **catalog_backup_using_backint** to **False**.
+
+1. Run an on-demand full backup of the database.
+
+1. Once the full and catalog backups are complete, change the settings to point to Azure:
+
+ - Set **enable_auto_log_backup** to **Yes**.
+ - Set **log_backup_using_backint** to **True**.
+ - Set **catalog_backup_using_backint** to **True**.
+
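+The following `hdbsql` sketch shows the same parameter changes made from the command line. The *hdbuserstore* key and tenant database name are placeholders, and the INI sections shown (`persistence` for `enable_auto_log_backup`, `backup` for the Backint settings) are assumptions based on the standard SAP HANA `global.ini` layout; verify them against your system before running.
+
+```bash
+# Switch log and catalog backups to the file system before taking the local backup.
+# CUSTOM_BACKUP_KEY_NAME and <tenantdb> are placeholders.
+hdbsql -U CUSTOM_BACKUP_KEY_NAME -d SYSTEMDB \
+  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','DATABASE','<tenantdb>') SET ('persistence','enable_auto_log_backup') = 'no' WITH RECONFIGURE"
+hdbsql -U CUSTOM_BACKUP_KEY_NAME -d SYSTEMDB \
+  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','DATABASE','<tenantdb>') SET ('backup','log_backup_using_backint') = 'false', ('backup','catalog_backup_using_backint') = 'false' WITH RECONFIGURE"
+
+# After the local full and catalog backups finish, point the settings back to Backint.
+hdbsql -U CUSTOM_BACKUP_KEY_NAME -d SYSTEMDB \
+  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','DATABASE','<tenantdb>') SET ('persistence','enable_auto_log_backup') = 'yes' WITH RECONFIGURE"
+hdbsql -U CUSTOM_BACKUP_KEY_NAME -d SYSTEMDB \
+  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','DATABASE','<tenantdb>') SET ('backup','log_backup_using_backint') = 'true', ('backup','catalog_backup_using_backint') = 'true' WITH RECONFIGURE"
+```
+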
+## Next steps
+
+- [Restore SAP HANA System Replication databases in Azure VMs (preview)](sap-hana-database-restore.md).
+- [About backup of SAP HANA System Replication databases in Azure VMs (preview)](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled-preview).
backup Sap Hana Db About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-db-about.md
- Title: About SAP HANA database backup in Azure VMs
-description: In this article, learn about backing up SAP HANA databases that are running on Azure virtual machines.
- Previously updated : 08/11/2022--
-# About SAP HANA database backup in Azure VMs
-
-SAP HANA databases are mission critical workloads that require a low recovery point objective (RPO) and a fast recovery time objective (RTO). You can now [back up SAP HANA databases running on Azure VMs](./tutorial-backup-sap-hana-db.md) using [Azure Backup](./backup-overview.md).
-
-Azure Backup is [Backint certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/d/solutions?id=8f3fd455-a2d7-4086-aa28-51d8870acaa5) by SAP, to provide native backup support by leveraging SAP HANA's native APIs. This offering from Azure Backup aligns with Azure Backup's mantra of **zero-infrastructure** backups, eliminating the need to deploy and manage backup infrastructure. You can now seamlessly back up and restore SAP HANA databases running on Azure VMs ([M series VMs](../virtual-machines/m-series.md) also supported now!) and leverage enterprise management capabilities that Azure Backup provides.
-
-## Added value
-
-Using Azure Backup to back up and restore SAP HANA databases, gives the following advantages:
-
-* **15-minute Recovery Point Objective (RPO)**: Recovery of critical data of up to 15 minutes is now possible.
-* **One-click, point-in-time restores**: Restore of production data to alternate HANA servers is made easy. Chaining of backups and catalogs to perform restores is all managed by Azure behind the scenes.
-* **Long-term retention**: For rigorous compliance and audit needs. Retain your backups for years, based on the retention duration, beyond which the recovery points will be pruned automatically by the built-in lifecycle management capability.
-* **Backup Management from Azure**: Use Azure Backup's management and monitoring capabilities for improved management experience. Azure CLI is also supported.
-
-To view the backup and restore scenarios that we support today, see the [SAP HANA scenario support matrix](./sap-hana-backup-support-matrix.md#scenario-support).
-
-## Backup architecture
-
-You can back up SAP HANA databases running inside an Azure VM and stream backup data directly to the Azure Recovery Services vault.
-
-![Backup architecture diagram](./media/sap-hana-db-about/backup-architecture.png)
-
-* The backup process begins by [creating a Recovery Services vault](./tutorial-backup-sap-hana-db.md#create-a-recovery-services-vault) in Azure. This vault will be used to store the backups and recovery points created over time.
-* The Azure VM running SAP HANA server is registered with the vault, and the databases to be backed-up are [discovered](./tutorial-backup-sap-hana-db.md#discover-the-databases). To enable the Azure Backup service to discover databases, a [preregistration script](https://go.microsoft.com/fwlink/?linkid=2173610) must be run on the HANA server as a root user.
-* This script creates the **AZUREWLBACKUPHANAUSER** database user/uses the custom Backup user you have already created, and then creates a corresponding key with the same name in **hdbuserstore**. [Learn more](./tutorial-backup-sap-hana-db.md#what-the-pre-registration-script-does) about the functionality of the script.
-* Azure Backup Service now installs the **Azure Backup Plugin for HANA** on the registered SAP HANA server.
-* The **AZUREWLBACKUPHANAUSER** database user created by the pre-registration script/custom Backup user that you've created (and added as input to the pre-registration script) is used by the **Azure Backup Plugin for HANA** to perform all backup and restore operations. If you attempt to configure backup for SAP HANA databases without running this script, you might receive the **UserErrorHanaScriptNotRun** error.
-* To [configure backup](./tutorial-backup-sap-hana-db.md#configure-backup) on the databases that are discovered, choose the required backup policy and enable backups.
-
-* Once the backup is configured, Azure Backup service sets up the following Backint parameters at the DATABASE level on the protected SAP HANA server:
- * [catalog_backup_using_backint:true]
- * [enable_accumulated_catalog_backup:false]
- * [parallel_data_backup_backint_channels:1]
- * [log_backup_timeout_s:900)]
- * [backint_response_timeout:7200]
-
->[!NOTE]
->Ensure that these parameters are *not* present at HOST level. Host-level parameters will override these parameters and might cause unexpected behavior.
->
-
-* The **Azure Backup Plugin for HANA** maintains all the backup schedules and policy details. It triggers the scheduled backups and communicates with the **HANA Backup Engine** through the Backint APIs.
-* The **HANA Backup Engine** returns a Backint stream with the data to be backed up.
-* All the scheduled backups and on-demand backups (triggered from the Azure portal) that are either full or differential are initiated by the **Azure Backup Plugin for HANA**. However, log backups are managed and triggered by **HANA Backup Engine** itself.
-* Azure Backup for SAP HANA, being a BackInt certified solution, doesn't depend on underlying disk or VM types. The backup is performed by streams generated by HANA.
-
-## Using Azure VM backup with Azure SAP HANA backup
-
-In addition to using the SAP HANA backup in Azure that provides database level backup and recovery, you can use the Azure VM backup solution to back up the OS and non-database disks.
-
-The [Backint certified Azure SAP HANA backup solution](#backup-architecture) can be used for database backup and recovery.
-
-[Azure VM backup](backup-azure-vms-introduction.md) can be used to back up the OS and other non-database disks. The VM backup is taken once every day and it backups up all the disks (except **Write Accelerator (WA) OS disks** and **ultra disks**). Since the database is being backed up using the Azure SAP HANA backup solution, you can take a file-consistent backup of only the OS and non-database disks using the [Selective disk backup and restore for Azure VMs](selective-disk-backup-restore.md) feature.
-
-To restore a VM running SAP HANA, follow these steps:
-
-* [Restore a new VM from Azure VM backup](backup-azure-arm-restore-vms.md) from the latest recovery point. Or create a new empty VM and attach the disks from the latest recovery point.
-* If WA disks are excluded, they aren't restored. In this case, create empty WA disks and log area.
-* After all the other configurations (such as IP, system name, and so on) are set, the VM is set to receive DB data from Azure Backup.
-* Now restore the DB into the VM from the [Azure SAP HANA DB backup](sap-hana-db-restore.md#restore-to-a-point-in-time-or-to-a-recovery-point) to the desired point-in-time.
-
-## Next steps
-
-* Learn how to [restore an SAP HANA database running on an Azure VM](./sap-hana-db-restore.md)
-* Learn how to [manage SAP HANA databases that are backed up using Azure Backup](./sap-hana-db-manage.md)
backup Sap Hana Db Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-db-manage.md
- Title: Manage backed up SAP HANA databases on Azure VMs
-description: In this article, learn common tasks for managing and monitoring SAP HANA databases that are running on Azure virtual machines.
- Previously updated : 08/11/2022-----
-# Manage and monitor backed up SAP HANA databases
-
-This article describes common tasks for managing and monitoring SAP HANA databases that are running on an Azure virtual machine (VM) and that are backed up to an Azure Backup Recovery Services vault by the [Azure Backup](./backup-overview.md) service. You'll learn how to monitor jobs and alerts, trigger an on-demand backup, edit policies, stop and resume database protection and unregister a VM from backups.
-
-If you haven't configured backups yet for your SAP HANA databases, see [Back up SAP HANA databases on Azure VMs](./backup-azure-sap-hana-database.md).
-
->[!Note]
->See the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) to know more about the supported configurations and scenarios.
-
-## Monitor manual backup jobs in the portal
-
-Azure Backup shows all manually triggered jobs in the **Backup jobs** section in **Backup center**.
--
-The jobs you see in this portal include database discovery and registering, and backup and restore operations. Scheduled jobs, including log backups, aren't shown in this section. Manually triggered backups from the SAP HANA native clients (Studio/Cockpit/DBA Cockpit) also don't show up here.
--
-To learn more about monitoring, go to [Monitoring in the Azure portal](./backup-azure-monitoring-built-in-monitor.md) and [Monitoring using Azure Monitor](./backup-azure-monitoring-use-azuremonitor.md).
-
-## View backup alerts
-
-Alerts are an easy means of monitoring backups of SAP HANA databases. Alerts help you focus on the events you care about the most without getting lost in the multitude of events that a backup generates. Azure Backup allows you to set alerts, and they can be monitored as follows:
-
-* Sign in to the [Azure portal](https://portal.azure.com/).
-* On the vault dashboard, select **Backup Alerts**.
-
- ![Backup alerts on vault dashboard](./media/sap-hana-db-manage/backup-alerts-dashboard.png)
-
-* You'll be able to see the alerts:
-
- ![List of backup alerts](./media/sap-hana-db-manage/backup-alerts-list.png)
-
-* Select the alerts to see more details:
-
- ![Alert details](./media/sap-hana-db-manage/alert-details.png)
-
-Today, Azure Backup allows the sending of alerts through email. These alerts are:
-
-* Triggered for all backup failures.
-* Consolidated at the database level by error code.
-* Sent only for a database's first backup failure.
-
-To learn more about monitoring, go to [Monitoring in the Azure portal](./backup-azure-monitoring-built-in-monitor.md) and [Monitoring using Azure Monitor](./backup-azure-monitoring-use-azuremonitor.md).
-
-## Management Operations
-
-Azure Backup makes management of a backed-up SAP HANA database easy with an abundance of management operations that it supports. These operations are discussed in more detail in the following sections.
-
-### Run an on-demand backup
-
-Backups run in accordance with the policy schedule. You can run a backup on-demand as follows:
-
-1. In the vault menu, select **Backup items**.
-1. In **Backup Items**, select the VM running the SAP HANA database, and then select **Backup now**.
-1. In **Backup Now**, choose the type of backup you want to perform. Then select **OK**.
-
- The retention period of this backup is determined by the type of on-demand backup you have run.
-
- - *On-demand full backups* are retained for a minimum of *45 days* and a maximum of *99 years*.
- - *On-demand differential backups* are retained as per the *log retention set in the policy*.
- - *On-demand incremental backups* aren't currently supported.
-
-1. Monitor the portal notifications. You can monitor the job progress in the vault dashboard > **Backup Jobs** > **In progress**. Depending on the size of your database, creating the initial backup may take a while.
-
-### HANA native client integration
-
-#### Backup
-
-On-demand backups triggered from any of the HANA native clients (to **Backint**) will show up in the backup list on the **Backup Instances** page.
-
-![Last backups run](./media/sap-hana-db-manage/last-backups.png)
-
-You can also [monitor these backups](#monitor-manual-backup-jobs-in-the-portal) from the **Backup jobs** page.
-
-These on-demand backups will also show up in the list of restore points for restore.
-
-![List of restore points](./media/sap-hana-db-manage/list-restore-points.png)
-
-#### Restore
-
-Restores triggered from HANA native clients (using **Backint**) to restore to **the same machine** can be [monitored](#monitor-manual-backup-jobs-in-the-portal) from the **Backup jobs** page.
-Restores triggered from HANA native clients to restore to another machine are not allowed. This is because Azure Backup service cannot authenticate the target server, as per Azure RBAC rules, for restore.
-
-#### Delete
-
-Delete operation from HANA native is **NOT** supported by Azure Backup since the backup policy determines the lifecycle of backups in Azure Recovery services vault.
-
-### Change policy
-
-You can change the underlying policy for an SAP HANA backup item.
-
-In the **Backup center** dashboard, go to **Backup Instances**:
-
-* Choose **SAP HANA in Azure VM** as the datasource type.
-
- :::image type="content" source="./media/sap-hana-db-manage/hana-backup-instances-inline.png" alt-text="Screenshot showing to choose SAP HANA in Azure VM." lightbox="./media/sap-hana-db-manage/hana-backup-instances-expanded.png":::
-
-* Choose the backup item whose underlying policy you want to change.
-* Select the existing Backup policy.
-
- ![Select existing backup policy](./media/sap-hana-db-manage/existing-backup-policy.png)
-
-* Change the policy, choosing from the list. [Create a new backup policy](./backup-azure-sap-hana-database.md#create-a-backup-policy) if needed.
-
- ![Choose policy from drop-down list](./media/sap-hana-db-manage/choose-backup-policy.png)
-
-* Save the changes.
-
- ![Save the changes](./media/sap-hana-db-manage/save-changes.png)
-
-* Policy modification will impact all the associated Backup Items and trigger corresponding **configure protection** jobs.
-
->[!NOTE]
-> Any change in the retention period will be applied retrospectively to all the older recovery points besides the new ones.
-
-### Modify Policy
-
-Modify policy to change backup types, frequencies, and retention range.
-
->[!NOTE]
->Any change in the retention period will be applied retroactively to all the older recovery points, in addition to the new ones.
-
-1. In the **Backup center** dashboard, go to **Backup Policies** and choose the policy you want to edit.
-
- :::image type="content" source="./media/sap-hana-db-manage/backup-center-policies-inline.png" alt-text="Screenshot showing to choose the policy to edit." lightbox="./media/sap-hana-db-manage/backup-center-policies-expanded.png":::
-
-1. Select **Modify**.
-
- ![Select Modify](./media/sap-hana-db-manage/modify-policy.png)
-
-1. Choose the frequency for the backup types.
-
- ![Choose backup frequency](./media/sap-hana-db-manage/choose-frequency.png)
-
-Policy modification will impact all the associated backup items and trigger corresponding **configure protection** jobs.
-
-### Inconsistent policy
-
-Occasionally a modify policy operation can lead to an **inconsistent** policy version for some backup items. This happens when the corresponding **configure protection** job fails for the backup item after a modify policy operation is triggered. It appears as follows in the backup item view:
-
-![Inconsistent policy](./media/sap-hana-db-manage/inconsistent-policy.png)
-
-You can fix the policy version for all the impacted items in one click:
-
-![Fix policy version](./media/sap-hana-db-manage/fix-policy-version.png)
-
-### Stop protection for an SAP HANA database
-
-You can stop protecting an SAP HANA database in a couple of ways:
-
-* Stop all future backup jobs and delete all recovery points.
-* Stop all future backup jobs and leave the recovery points intact.
-
-If you choose to leave recovery points, keep these details in mind:
-
-* All recovery points will remain intact forever, and all pruning will stop at stop protection with retain data.
-* You'll be charged for the protected instance and the consumed storage. For more information, see [Azure Backup pricing](https://azure.microsoft.com/pricing/details/backup/).
-* If you delete a data source without stopping backups, new backups will fail.
-
-To stop protection for a database:
-
-1. In the **Backup center** dashboard, select **Backup Instances**.
-1. Select **SAP HANA in Azure VM** as the datasource type.
-
- :::image type="content" source="./media/sap-hana-db-manage/hana-backup-instances-inline.png" alt-text="Screenshot showing to select SAP HANA in Azure VM." lightbox="./media/sap-hana-db-manage/hana-backup-instances-expanded.png":::
-
-1. Select the database for which you want to stop protection on.
-
-1. In the database menu, select **Stop backup**.
-
- :::image type="content" source="./media/sap-hana-db-manage/stop-backup.png" alt-text="Screenshot showing to select stop backup.":::
-
-1. In the **Stop Backup** menu, select whether to retain or delete data. If you want, provide a reason and comment.
-
- :::image type="content" source="./media/sap-hana-db-manage/retain-backup-data.png" alt-text="Screenshot showing to select retain or delete data.":::
-
-1. Select **Stop backup**.
-
-### Resume protection for an SAP HANA database
-
-When you stop protection for the SAP HANA database, if you select the **Retain Backup Data** option, you can later resume protection. If you don't retain the backed-up data, you can't resume protection.
-
-To resume protection for an SAP HANA database:
-
-* Open the backup item and select **Resume backup**.
-
- ![Select resume backup](./media/sap-hana-db-manage/resume-backup.png)
-
-* On the **Backup policy** menu, select a policy, and then select **Save**.
-
-### Upgrading from SDC to MDC
-
-Learn how to continue backup for an SAP HANA database [after upgrading from SDC to MDC](backup-azure-sap-hana-database-troubleshoot.md#sdc-to-mdc-upgrade-with-a-change-in-sid).
-
-### Upgrading from SDC to MDC without a SID change
-
-Learn how to continue backup of an SAP HANA database whose [SID hasn't changed after upgrade from SDC to MDC](backup-azure-sap-hana-database-troubleshoot.md#sdc-to-mdc-upgrade-with-no-change-in-sid).
-
-### Upgrading to a new version in either SDC or MDC
-
-Learn how to continue backup of an SAP HANA database [whose version is being upgraded](backup-azure-sap-hana-database-troubleshoot.md#sdc-version-upgrade-or-mdc-version-upgrade-on-the-same-vm).
-
-### Unregister an SAP HANA instance
-
-Unregister an SAP HANA instance after you disable protection but before you delete the vault:
-
-* On the vault dashboard, under **Manage**, select **Backup Infrastructure**.
-
- ![Select Backup Infrastructure](./media/sap-hana-db-manage/backup-infrastructure.png)
-
-* Select the **Backup Management type** as **Workload in Azure VM**
-
- ![Select the Backup Management type as Workload in Azure VM](./media/sap-hana-db-manage/backup-management-type.png)
-
-* In **Protected Servers**, select the instance to unregister. To delete the vault, you must unregister all servers/ instances.
-
-* Right-click the protected instance and select **Unregister**.
-
- ![Select unregister](./media/sap-hana-db-manage/unregister.png)
-
-### Re-register extension on the SAP HANA server VM
-
-Sometimes the workload extension on the VM may get impacted for one reason or another. In such cases, all the operations triggered on the VM will begin to fail. You may then need to re-register the extension on the VM. Re-register operation reinstalls the workload backup extension on the VM for operations to continue.
-
-Use this option with caution: when triggered on a VM with an already healthy extension, this operation will cause the extension to get restarted. This may cause all the in-progress jobs to fail. Check for one or more of the [symptoms](backup-azure-sap-hana-database-troubleshoot.md#re-registration-failures) before triggering the re-register operation.
-
-## Next steps
-
-* Learn how to [troubleshoot common issues when backing up SAP HANA databases.](./backup-azure-sap-hana-database-troubleshoot.md)
backup Sap Hana Db Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-db-restore.md
- Title: Restore SAP HANA databases on Azure VMs
-description: In this article, discover how to restore SAP HANA databases that are running on Azure Virtual Machines. You can also use Cross Region Restore to restore your databases to a secondary region.
- Previously updated : 08/11/2022-----
-# Restore SAP HANA databases on Azure VMs
-
-This article describes how to restore SAP HANA databases running on an Azure Virtual Machine (VM), which the Azure Backup service has backed up to a Recovery Services vault. Restores can be used to create copies of the data for dev / test scenarios or to return to a previous state.
-
-For more information, on how to back up SAP HANA databases, see [Back up SAP HANA databases on Azure VMs](./backup-azure-sap-hana-database.md).
-
->[!Note]
->See the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) to know more about the supported configurations and scenarios.
-
-## Restore to a point in time or to a recovery point
-
-Azure Backup can restore SAP HANA databases that are running on Azure VMs as follows:
-
-* Restore to a specific date or time (to the second) by using log backups. Azure Backup automatically determines the appropriate full, differential backups and the chain of log backups that are required to restore based on the selected time.
-
-* Restore to a specific full or differential backup to restore to a specific recovery point.
-
-## Prerequisites
-
-Before restoring a database, note the following:
-
-* You can restore the database only to an SAP HANA instance that's in the same region.
-
-* The target instance must be registered with the same vault as the source. [Learn more](backup-azure-sap-hana-database.md#discover-the-databases).
-
-* Azure Backup can't identify two different SAP HANA instances on the same VM. So restoring data from one instance to another on the same VM isn't possible.
-
-* To ensure that the target SAP HANA instance is ready for restore, check its **Backup readiness** status:
-
- 1. In the Azure portal, go to **Backup center** and click **+Backup**.
-
- :::image type="content" source="./media/sap-hana-db-restore/backup-center-configure-inline.png" alt-text="Screenshot showing to start the process to check if the target SAP HANA instance is ready for restore." lightbox="./media/sap-hana-db-restore/backup-center-configure-expanded.png":::
-
- 1. Select **SAP HANA in Azure VM** as the datasource type, select the vault to which the SAP HANA instance is registered, and then click **Continue**.
-
- :::image type="content" source="./media/sap-hana-db-restore/hana-select-vault.png" alt-text="Screenshot showing to select SAP HANA in Azure VM.":::
-
- 1. Under **Discover DBs in VMs**, select **View details**.
-
- :::image type="content" source="./media/sap-hana-db-restore/hana-discover-databases.png" alt-text="Screenshot showing to view database details.":::
-
- 1. Review the **Backup Readiness** of the target VM.
-
- :::image type="content" source="./media/sap-hana-db-restore/hana-select-virtual-machines-inline.png" alt-text="Screenshot showing protected servers." lightbox="./media/sap-hana-db-restore/hana-select-virtual-machines-expanded.png":::
-
-* To learn more about the restore types that SAP HANA supports, refer to the SAP HANA Note [1642148](https://launchpad.support.sap.com/#/notes/1642148)
-
-## Restore a database
-
-To restore, you need the following permissions:
-
-* **Backup Operator** permissions in the vault where you're doing the restore.
-* **Contributor (write)** access to the source VM that's backed up.
-* **Contributor (write**) access to the target VM:
- * If you're restoring to the same VM, this is the source VM.
- * If you're restoring to an alternate location, this is the new target VM.
-
-1. In the Azure portal, go to **Backup center** and click **Restore**.
-
- :::image type="content" source="./media/sap-hana-db-restore/backup-center-restore-inline.png" alt-text="Screenshot showing to start restoring an SAP HANA database." lightbox="./media/sap-hana-db-restore/backup-center-restore-expanded.png":::
-
-1. Select **SAP HANA in Azure VM** as the datasource type, select the database you wish to restore, and then click **Continue**.
-
- :::image type="content" source="./media/sap-hana-db-restore/hana-restore-select-database.png" alt-text="Screenshot showing to restore the backup items.":::
-
-1. Under **Restore Configuration**, specify where (or how) to restore data:
-
- * **Alternate Location**: Restore the database to an alternate location and keep the original source database.
-
- * **Overwrite DB**: Restore the data to the same SAP HANA instance as the original source. This option overwrites the original database.
-
- :::image type="content" source="./media/sap-hana-db-restore/hana-restore-configuration.png" alt-text="Screenshot showing to restore configuration.":::
-
-### Restore to alternate location
-
-1. In the **Restore Configuration** menu, under **Where to Restore**, select **Alternate Location**.
-
- :::image type="content" source="./media/sap-hana-db-restore/hana-alternate-location-recovery.png" alt-text="Screenshot showing to restore to alternate location.":::
-
-1. Select the SAP HANA host name and instance name to which you want to restore the database.
-1. Check if the target SAP HANA instance is ready for restore by ensuring its **Backup Readiness.** Refer to the [prerequisites section](#prerequisites) for more details.
-1. In the **Restored DB Name** box, enter the name of the target database.
-
- > [!NOTE]
- > Single Database Container (SDC) restores must follow these [checks](backup-azure-sap-hana-database-troubleshoot.md#single-container-database-sdc-restore).
-
-1. If applicable, select **Overwrite if the DB with the same name already exists on selected HANA instance**.
-
-1. In **Select restore point**, select **Logs (Point in Time)** to [restore to a specific point in time](#restore-to-a-specific-point-in-time). Or select **Full & Differential** to [restore to a specific recovery point](#restore-to-a-specific-recovery-point).
-
-### Restore and overwrite
-
-1. In the **Restore Configuration** menu, under **Where to Restore**, select **Overwrite DB** > **OK**.
-
- :::image type="content" source="./media/sap-hana-db-restore/hana-overwrite-database.png" alt-text="Screenshot showing to overwrite database.":::
-
-1. In **Select restore point**, select **Logs (Point in Time)** to [restore to a specific point in time](#restore-to-a-specific-point-in-time). Or select **Full & Differential** to [restore to a specific recovery point](#restore-to-a-specific-recovery-point).
-
-### Restore as files
-
->[!Note]
->Restore as files doesn't work on CIFS share, but works for NFS.
-
-To restore the backup data as files instead of a database, choose **Restore as Files**. Once the files are dumped to a specified path, you can take these files to any SAP HANA machine where you want to restore them as a database. Because you can move these files to any machine, you can now restore the data across subscriptions and regions.
-
-1. In the **Restore Configuration** menu, under **Where and how to Restore**, select **Restore as files**.
-1. Select the **host** / HANA Server name to which you want to restore the backup files.
-1. In the **Destination path on the server**, enter the folder path on the server selected in step 2. This is the location where the service will dump all the necessary backup files.
-
- The files that are dumped are:
-
- * Database backup files
- * JSON metadata files (for each backup file that's involved)
-
- Typically, a network share path, or path of a mounted Azure file share when specified as the destination path, enables easier access to these files by other machines in the same network or with the same Azure file share mounted on them.
-
- >[!NOTE]
- >To restore the database backup files on an Azure file share mounted on the target registered VM, make sure that root account has read/ write permissions on the Azure file share.
-
- :::image type="content" source="./media/sap-hana-db-restore/hana-restore-as-files.png" alt-text="Screenshot showing to choose destination path.":::
-
-1. Select the **Restore Point** corresponding to which all the backup files and folders will be restored.
-
- :::image type="content" source="./media/sap-hana-db-restore/hana-select-recovery-point-inline.png" alt-text="Screenshot showing to select restore point." lightbox="./media/sap-hana-db-restore/hana-select-recovery-point-expanded.png":::
-
-1. All the backup files associated with the selected restore point are dumped into the destination path.
-1. Based on the type of restore point chosen (**Point in time** or **Full & Differential**), you'll see one or more folders created in the destination path. One of the folders named `Data_<date and time of restore>` contains the full backups, and the other folder named `Log` contains the log backups and other backups (such as differential, and incremental).
-
- >[!Note]
- >If you've selected **Restore to a point in time**, the log files (dumped to the target VM) may sometimes contain logs beyond the point-in-time chosen for restore. Azure Backup does this to ensure that log backups for all HANA services are available for consistent and successful restore to the chosen point-in-time.
-
-1. Move these restored files to the SAP HANA server where you want to restore them as a database.
-1. Then follow these steps:
- 1. Set permissions on the folder / directory where the backup files are stored using the following command:
-
- ```bash
- chown -R <SID>adm:sapsys <directory>
- ```
-
- 1. Run the next set of commands as `<SID>adm`
-
- ```bash
- su - <sid>adm
- ```
-
- 1. Generate the catalog file for restore. Extract the **BackupId** from the JSON metadata file for the full backup, which will be used later in the restore operation. Make sure that the full and log backups (not present for Full Backup Recovery) are in different folders and delete the JSON metadata files in these folders.
-
- ```bash
- hdbbackupdiag --generate --dataDir <DataFileDir> --logDirs <LogFilesDir> -d <PathToPlaceCatalogFile>
- ```
-
- In the command above:
-
- * `<DataFileDir>` - the folder that contains the full backups.
- * `<LogFilesDir>` - the folder that contains the log backups, differential and incremental backups. For Full BackUp Restore, Log folder isn't created. Add an empty directory in that case.
- * `<PathToPlaceCatalogFile>` - the folder where the catalog file generated must be placed.
-
- 1. Restore using the newly generated catalog file through HANA Studio or run the HDBSQL restore query with this newly generated catalog. HDBSQL queries are listed below:
-
- * To open hdsql prompt, run the following command:
-
- ```bash
- hdbsql -U AZUREWLBACKUPHANAUSER -d systemDB
- ```
-
- * To restore to a point-in-time:
-
- If you're creating a new restored database, run the HDBSQL command to create a new database `<DatabaseName>` and then stop the database for restore using the command `ALTER SYSTEM STOP DATABASE <db> IMMEDIATE`. However, if you're only restoring an existing database, run the HDBSQL command to stop the database.
-
- Then run the following command to restore the database:
-
- ```hdbsql
- RECOVER DATABASE FOR <db> UNTIL TIMESTAMP <t1> USING CATALOG PATH <path> USING LOG PATH <path> USING DATA PATH <path> USING BACKUP_ID <bkId> CHECK ACCESS USING FILE
- ```
-
- * `<DatabaseName>` - Name of the new database or existing database that you want to restore
- * `<Timestamp>` - Exact timestamp of the Point in time restore
- * `<DatabaseName@HostName>` - Name of the database whose backup is used for restore and the **host** / SAP HANA server name on which this database resides. The `USING SOURCE <DatabaseName@HostName>` option specifies that the data backup (used for restore) is of a database with a different SID or name than the target SAP HANA machine. So, it doesn't need to be specified for restores done on the same HANA server from where the backup is taken.
- * `<PathToGeneratedCatalogInStep3>` - Path to the catalog file generated in **Step C**
- * `<DataFileDir>` - the folder that contains the full backups
- * `<LogFilesDir>` - the folder that contains the log backups, differential and incremental backups (if any)
- * `<BackupIdFromJsonFile>` - the **BackupId** extracted in **Step C**
-
- * To restore to a particular full or differential backup:
-
- If you're creating a new restored database, run the HDBSQL command to create a new database `<DatabaseName>` and then stop the database for restore using the command `ALTER SYSTEM STOP DATABASE <db> IMMEDIATE`. However, if you're only restoring an existing database, run the HDBSQL command to stop the database:
-
- ```hdbsql
- RECOVER DATA FOR <DatabaseName> USING BACKUP_ID <BackupIdFromJsonFile> USING SOURCE '<DatabaseName@HostName>' USING CATALOG PATH ('<PathToGeneratedCatalogInStep3>') USING DATA PATH ('<DataFileDir>') CLEAR LOG
- ```
-
- * `<DatabaseName>` - the name of the new database or existing database that you want to restore
- * `<Timestamp>` - the exact timestamp of the Point in time restore
- * `<DatabaseName@HostName>` - the name of the database whose backup is used for restore and the **host** / SAP HANA server name on which this database resides. The `USING SOURCE <DatabaseName@HostName>` option specifies that the data backup (used for restore) is of a database with a different SID or name than the target SAP HANA machine. So it need not be specified for restores done on the same HANA server from where the backup is taken.
- * `<PathToGeneratedCatalogInStep3>` - the path to the catalog file generated in **Step C**
- * `<DataFileDir>` - the folder that contains the full backups
- * `<LogFilesDir>` - the folder that contains the log backups, differential and incremental backups (if any)
- * `<BackupIdFromJsonFile>` - the **BackupId** extracted in **Step C**
- * To restore using backup ID:
-
- ```hdbsql
- RECOVER DATA FOR <db> USING BACKUP_ID <bkId> USING CATALOG PATH <path> USING LOG PATH <path> USING DATA PATH <path> CHECK ACCESS USING FILE
- ```
-
- Examples:
-
- SAP HANA SYSTEM restoration on same server
-
- ```hdbsql
- RECOVER DATABASE FOR SYSTEM UNTIL TIMESTAMP '2022-01-12T08:51:54.023' USING CATALOG PATH ('/restore/catalo_gen') USING LOG PATH ('/restore/Log/') USING DATA PATH ('/restore/Data_2022-01-12_08-51-54/') USING BACKUP_ID 1641977514020 CHECK ACCESS USING FILE
- ```
-
- SAP HANA tenant restoration on same server
-
- ```hdbsql
- RECOVER DATABASE FOR DHI UNTIL TIMESTAMP '2022-01-12T08:51:54.023' USING CATALOG PATH ('/restore/catalo_gen') USING LOG PATH ('/restore/Log/') USING DATA PATH ('/restore/Data_2022-01-12_08-51-54/') USING BACKUP_ID 1641977514020 CHECK ACCESS USING FILE
- ```
-
- SAP HANA SYSTEM restoration on different server
-
- ```hdbsql
- RECOVER DATABASE FOR SYSTEM UNTIL TIMESTAMP '2022-01-12T08:51:54.023' USING SOURCE <sourceSID> USING CATALOG PATH ('/restore/catalo_gen') USING LOG PATH ('/restore/Log/') USING DATA PATH ('/restore/Data_2022-01-12_08-51-54/') USING BACKUP_ID 1641977514020 CHECK ACCESS USING FILE
- ```
-
- SAP HANA tenant restoration on different server
-
- ```hdbsql
- RECOVER DATABASE FOR DHI UNTIL TIMESTAMP '2022-01-12T08:51:54.023' USING SOURCE <sourceSID> USING CATALOG PATH ('/restore/catalo_gen') USING LOG PATH ('/restore/Log/') USING DATA PATH ('/restore/Data_2022-01-12_08-51-54/') USING BACKUP_ID 1641977514020 CHECK ACCESS USING FILE
- ```
-
-### Partial restore as files
-
-The Azure Backup service decides the chain of files to be downloaded during restore as files. But there are scenarios where you might not want to download the entire content again.
-
-For example, suppose you have a backup policy of weekly fulls, daily differentials, and logs, and you've already downloaded the files for a particular differential. If you find that it isn't the right recovery point and decide to download the next day's differential, you only need the differential file, because you already have the starting full. With the partial restore as files ability provided by Azure Backup, you can exclude the full from the download chain and download only the differential.
-
-#### Excluding backup file types
-
-The **ExtensionSettingOverrides.json** is a JSON (JavaScript Object Notation) file that contains overrides for multiple settings of the Azure Backup service for SQL. For "Partial Restore as files" operation, a new JSON field ` RecoveryPointsToBeExcludedForRestoreAsFiles ` must be added. This field holds a string value that denotes which recovery point types should be excluded in the next restore as files operation.
-
-1. In the target machine where files are to be downloaded, go to "opt/msawb/bin" folder
-2. Create a new JSON file named "ExtensionSettingOverrides.JSON", if it doesn't already exist.
-3. Add the following JSON key value pair
-
- ```json
- {
- "RecoveryPointsToBeExcludedForRestoreAsFiles": "ExcludeFull"
- }
- ```
-
-4. Change the permissions and ownership of the file as follows:
-
- ```bash
- chmod 750 ExtensionSettingsOverrides.json
- chown root:msawb ExtensionSettingsOverrides.json
- ```
-
-5. No restart of any service is required. The Azure Backup service will attempt to exclude backup types in the restore chain as mentioned in this file.
-
-The ``` RecoveryPointsToBeExcludedForRestoreAsFiles ``` only takes specific values which denote the recovery points to be excluded during restore. For SAP HANA, these values are:
-- ExcludeFull (Other backup types such as differential, incremental, and logs will be downloaded, if they are present in the restore point chain.)
-- ExcludeFullAndDifferential (Other backup types such as incremental and logs will be downloaded, if they are present in the restore point chain.)
-- ExcludeFullAndIncremental (Other backup types such as differential and logs will be downloaded, if they are present in the restore point chain.)
-- ExcludeFullAndDifferentialAndIncremental (Other backup types such as logs will be downloaded, if they are present in the restore point chain.)
-
-### Restore to a specific point in time
-
-If you've selected **Logs (Point in Time)** as the restore type, do the following:
-
-1. Select a recovery point from the log graph and select **OK** to choose the point of restore.
-
- ![Restore point](media/sap-hana-db-restore/restore-point.png)
-
-1. On the **Restore** menu, select **Restore** to start the restore job.
-
- ![Select restore](media/sap-hana-db-restore/restore-restore.png)
-
-1. Track the restore progress in the **Notifications** area or track it by selecting **Restore jobs** on the database menu.
-
- ![Restore triggered successfully](media/sap-hana-db-restore/restore-triggered.png)
-
-### Restore to a specific recovery point
-
-If you've selected **Full & Differential** as the restore type, do the following:
-
-1. Select a recovery point from the list and select **OK** to choose the point of restore.
-
- ![Restore specific recovery point](media/sap-hana-db-restore/specific-recovery-point.png)
-
-1. On the **Restore** menu, select **Restore** to start the restore job.
-
- ![Start restore job](media/sap-hana-db-restore/restore-specific.png)
-
-1. Track the restore progress in the **Notifications** area or track it by selecting **Restore jobs** on the database menu.
-
- ![Restore progress](media/sap-hana-db-restore/restore-progress.png)
-
- > [!NOTE]
- > In Multiple Database Container (MDC) restores after the system DB is restored to a target instance, one needs to run the pre-registration script again. Only then the subsequent tenant DB restores will succeed. To learn more refer to [Troubleshooting ΓÇô MDC Restore](backup-azure-sap-hana-database-troubleshoot.md#multiple-container-database-mdc-restore).
-
-## Cross Region Restore
-
-As one of the restore options, Cross Region Restore (CRR) allows you to restore SAP HANA databases hosted on Azure VMs in a secondary region, which is an Azure paired region.
-
-To onboard to the feature, read the [Before You Begin section](./backup-create-rs-vault.md#set-cross-region-restore).
-
-To see if CRR is enabled, follow the instructions in [Configure Cross Region Restore](backup-create-rs-vault.md#set-cross-region-restore)
-
-### View backup items in secondary region
-
-If CRR is enabled, you can view the backup items in the secondary region.
-
-1. From the portal, go to **Recovery Services vault** > **Backup items**.
-1. Select **Secondary Region** to view the items in the secondary region.
-
->[!NOTE]
->Only Backup Management Types supporting the CRR feature will be shown in the list. Currently, only support for restoring secondary region data to a secondary region is allowed.
-
-![Backup items in secondary region](./media/sap-hana-db-restore/backup-items-secondary-region.png)
-
-![Databases in secondary region](./media/sap-hana-db-restore/databases-secondary-region.png)
-
-### Restore in secondary region
-
-The secondary region restore user experience will be similar to the primary region restore user experience. When configuring details in the Restore Configuration pane to configure your restore, you'll be prompted to provide only secondary region parameters. A vault should exist in the secondary region and the SAP HANA server should be registered to the vault in the secondary region.
-
-![Where and how to restore](./media/sap-hana-db-restore/restore-secondary-region.png)
-
-![Trigger restore in progress notification](./media/backup-azure-arm-restore-vms/restorenotifications.png)
-
->[!NOTE]
->* After the restore is triggered and in the data transfer phase, the restore job can't be cancelled.
->* The role/access level required to perform a restore operation across regions is the _Backup Operator_ role in the subscription and _Contributor (write)_ access on the source and target virtual machines. To view backup jobs, _Backup Reader_ is the minimum permission required in the subscription.
->* The RPO for the backup data to be available in secondary region is 12 hours. Therefore, when you turn on CRR, the RPO for the secondary region is 12 hours + log frequency duration (that can be set to a minimum of 15 minutes).
-
-### Monitoring secondary region restore jobs
-
-1. In the Azure portal, go to **Backup center** > **Backup Jobs**.
-1. Filter **Operation** for value **CrossRegionRestore** to view the jobs in the secondary region.
-
- :::image type="content" source="./media/sap-hana-db-restore/hana-view-jobs-inline.png" alt-text="Screenshot showing filtered Backup jobs." lightbox="./media/sap-hana-db-restore/hana-view-jobs-expanded.png":::
-
-## Next steps
-
-* [Learn how](sap-hana-db-manage.md) to manage SAP HANA databases backed up using Azure Backup
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
backup Sql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sql-support-matrix.md
_*The database size limit depends on the data transfer rate that we support and
## Backup throughput performance
-Azure Backup supports a consistent data transfer rate of 200 Mbps for full and differential backups of large SQL databases (of 500 GB). To utilize the optimum performance, ensure that:
+Azure Backup supports a consistent data transfer rate of 200 MBps for full and differential backups of large SQL databases (of 500 GB). To utilize the optimum performance, ensure that:
-- The underlying VM (containing the SQL Server instance, which hosts the database) is configured with the required network throughput. If the maximum throughput of the VM is less than 200 Mbps, Azure Backup can't transfer data at the optimum speed.<br>Also, the disk that contains the database files must have enough throughput provisioned. [Learn more](../virtual-machines/disks-performance.md) about disk throughput and performance in Azure VMs.
+- The underlying VM (containing the SQL Server instance, which hosts the database) is configured with the required network throughput. If the maximum throughput of the VM is less than 200 MBps, Azure Backup can't transfer data at the optimum speed.<br>Also, the disk that contains the database files must have enough throughput provisioned. [Learn more](../virtual-machines/disks-performance.md) about disk throughput and performance in Azure VMs.
- Processes running in the VM aren't consuming the VM bandwidth. - The backup schedules are spread across a subset of databases. Multiple backups running concurrently on a VM share the network consumption rate between the backups. [Learn more](faq-backup-sql-server.yml#can-i-control-how-many-concurrent-backups-run-on-the-sql-server-) about how to control the number of concurrent backups.
Azure Backup supports a consistent data transfer rate of 200 Mbps for full and d
## Next steps
-Learn how to [back up a SQL Server database](backup-azure-sql-database.md) that's running on an Azure VM.
+Learn how to [back up a SQL Server database](backup-azure-sql-database.md) that's running on an Azure VM.
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary
+- October 2022
+ - [SAP HANA instance snapshot backup support (preview)](#sap-hana-instance-snapshot-backup-support-preview)
+ - [SAP HANA System Replication database backup support (preview)](#sap-hana-system-replication-database-backup-support-preview)
- September 2022 - [Built-in Azure Monitor alerting for Azure Backup is now generally available](#built-in-azure-monitor-alerting-for-azure-backup-is-now-generally-available) - June 2022
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## SAP HANA instance snapshot backup support (preview)
+
+Azure Backup now supports SAP HANA instance snapshot backup, which provides a cost-effective backup solution using managed disk incremental snapshots. Because the instance backup uses snapshots, the effect on the database is minimal.
+
+You can now take an instant snapshot of the entire HANA instance and back up logs for all databases with a single solution. It also enables you to instantly restore the entire instance, with point-in-time recovery by applying logs over the snapshot.
+
+For more information, see [Back up databases' instance snapshots (preview)](sap-hana-database-about.md#back-up-database-instance-snapshots-preview).
+
+## SAP HANA System Replication database backup support (preview)
+
+Azure Backup now supports backup of HANA databases with HANA System Replication enabled. Log backups from the new primary node are accepted immediately, which provides continuous, automatic protection of the database.
+
+This eliminates the need for manual intervention to continue backups on the new primary node during a failover. Because you no longer need to trigger a full backup after every failover, you can save costs and reduce the time it takes to resume protection.
+
+For more information, see [Back up a HANA system with replication enabled (preview)](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled-preview).
+ ## Built-in Azure Monitor alerting for Azure Backup is now generally available Azure Backup now offers a new and improved alerting solution via Azure Monitor. This solution provides multiple benefits, such as:
baremetal-infrastructure About Nc2 On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/about-nc2-on-azure.md
We offer two SKUs: AN36 and AN36P. For specifications, see [SKUs](skus.md).
* Microsoft Azure Consumption Contract (MACC) credits
-> [!NOTE]
-> During BareMetal Infrastructure for Nutanix Cloud Clusters on Azure, RI is not supported.
-An additional discount may be available.
- ## Support Nutanix (for software-related issues) and Microsoft (for infrastructure-related issues) will provide end-user support.
+## Release notes
+
+[Nutanix Cloud Clusters on Azure Release Notes](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Cloud-Clusters-On-Azure-Release-Notes:Nutanix-Cloud-Clusters-On-Azure-Release-Notes)
+ ## Next steps Learn more:
baremetal-infrastructure Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/get-started.md
Last updated 07/01/2021
Learn how to sign up for, set up, and use Nutanix Cloud Clusters (NC2) on Azure.
-## Sign up NC2
+## Sign up for NC2
-Once you've satisfied the [requirements](requirements.md), go to [Nutanix Cloud Clusters
-on Azure Deployment
-and User Guide](https://download.nutanix.com/documentation/hosted/Nutanix-Cloud-Clusters-Azure.pdf) to sign up.
+Once you've satisfied the [requirements](requirements.md), go to
+[Nutanix Cloud Clusters
+on Azure Deployment and User Guide](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Cloud-Clusters-Azure:Nutanix-Cloud-Clusters-Azure) to sign up.
-## Set up NC2 on Azure
-
-To set up NC2 on Azure, go to [Nutanix Cloud Clusters
-on Azure Deployment and User Guide](https://download.nutanix.com/documentation/hosted/Nutanix-Cloud-Clusters-Azure.pdf).
+To learn about Microsoft BareMetal hardware pricing, and to purchase Nutanix software, go to [Azure Marketplace](https://aka.ms/Nutanix-AzureMarketplace).
-## Use NC2 on Azure
+## Set up NC2 on Azure
-For more information about using NC2 on Azure, see [Nutanix Cloud Clusters
-on Azure Deployment
-and User Guide](https://download.nutanix.com/documentation/hosted/Nutanix-Cloud-Clusters-Azure.pdf).
+To set up and use NC2 on Azure, go to [Nutanix Cloud Clusters
+on Azure Deployment and User Guide](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Cloud-Clusters-Azure:Nutanix-Cloud-Clusters-Azure).
## Next steps
baremetal-infrastructure Supported Instances And Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/supported-instances-and-regions.md
NC2 on Azure supports the following region using AN36:
* East US * West US 2
-NC2 on Azure supports the North Central US Azure region using AN36P.
+NC2 on Azure supports the following regions using AN36P:
+
+* North Central US
+* East US 2
## Next steps
batch Batch Certificate Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-certificate-migration-guide.md
Title: Migrate Batch certificates to Azure Key Vault
-description: Learn how to migrate access management from using certificates in Azure Batch to Azure Key Vault and plan for feature end of support.
+ Title: Migrate Batch account certificates to Azure Key Vault
+description: Learn how to migrate Batch account certificates to Azure Key Vault and plan for feature end of support.
Previously updated : 08/15/2022 Last updated : 10/07/2022
-# Migrate Batch certificates to Azure Key Vault
+# Migrate Batch account certificates to Azure Key Vault
-On *February 29, 2024*, the certificates feature for Azure Batch access management will be retired. Learn how to migrate your access management approach from using certificates in Azure Batch to using Azure Key Vault.
+On *February 29, 2024*, the Azure Batch account certificates feature will be retired. In this article, learn how to migrate certificates on your Batch accounts to Azure Key Vault.
## About the feature
-Often, you need to store secure data for an application. Your data must be securely managed so that only administrators or authorized users can access it.
-
-Currently, Azure Batch offers two ways to secure access. You can use a certificate that you create and manage in Azure Batch or you can use Azure Key Vault to store an access key. Using a key vault is an Azure-standard way to deliver more controlled secure access management.
-
-You can use a certificate at the account level in Azure Batch. You must generate the certificate and upload it manually to Batch by using the Azure portal. To access the certificate, the certificate must be associated with and installed for only the current user. A certificate typically is valid for one year, and it must be updated each year.
+Certificates are often required in various scenarios, such as decrypting a secret, securing communication channels, or [accessing another service](credential-access-key-vault.md). Currently, Azure Batch offers two ways to manage certificates on Batch pools: you can add certificates to a Batch account, or you can use the Azure Key Vault VM extension to manage certificates on Batch pools. Only the [certificate functionality on an Azure Batch account](https://learn.microsoft.com/rest/api/batchservice/certificate) is being retired, along with the functionality it extends to Batch pools via `CertificateReference` in [Add Pool](https://learn.microsoft.com/rest/api/batchservice/pool/add#certificatereference), [Patch Pool](https://learn.microsoft.com/rest/api/batchservice/pool/patch#certificatereference), and [Update Properties](https://learn.microsoft.com/rest/api/batchservice/pool/update-properties#certificatereference), and the corresponding references on the Get and List Pool APIs.
## Feature end of support
-To move toward a simpler, standardized way to secure access to your Batch resources, on February 29, 2024, we'll retire the certificates feature in Azure Batch. We recommend that you use Azure Key Vault as a standard and more modern method to secure your resources in Batch.
-
-In Key Vault, you get these benefits:
--- Reduced manual maintenance and streamlined maintenance overall-- Reduced access to and readability of the key that's generated-- Advanced security-
-After the certificates feature in Azure Batch is retired on February 29, 2024, a certificate in Batch might not work as expected. After that date, you won't be able to create a pool by using a certificate. Pools that continue to use certificates after the feature is retired might increase in size and cost.
-
-## Alternative: Use Key Vault
-
-Azure Key Vault is an Azure service you can use to store and manage secrets, certificates, tokens, keys, and other configuration values that give authenticated users access to secure applications and services. Key Vault is based on the idea that security is improved and standardized when you remove hard-coded secrets and keys from application code that's deployed.
-
-Key Vault provides security at the transport layer by ensuring that any data flow from the key vault to the client application is encrypted. Azure Key Vault stores secrets and keys with such strong encryption that even Microsoft can't read key vault-protected keys and secrets.
-
-Azure Key Vault gives you a secure way to store essential access information and to set fine-grained access control. You can manage all secrets from one dashboard. Choose to store a key in either software-protected or hardware-protected hardware security modules (HSMs). You also can set Key Vault to auto-renew certificates
-
-## Create a key vault
-
-To create a key vault to manage access for Batch resources, use one of the following options:
--- Azure portal-- PowerShell-- Azure CLI-
-### Create a key vault by using the Azure portal
--- **Prerequisites**: To create a key vault by using the Azure portal, you must have a valid Azure subscription and Owner or Contributor access for Azure Key Vault.-
-To create a key vault:
-
-1. Sign in to the Azure portal.
-
-1. Search for **key vaults**.
-
-1. In the Key Vault dashboard, select **Create**.
-
-1. Enter or select your subscription, a resource group name, a key vault name, the pricing tier (Standard or Premium), and the region closest to your users. Each key vault name must be unique in Azure.
-
-1. Select **Review**, and then select **Create** to create the key vault account.
-
-1. Go to the key vault you created. The key vault name and the URI you use to access the vault are shown under deployment details.
-
-For more information, see [Quickstart: Create a key vault by using the Azure portal](../key-vault/general/quick-create-portal.md).
-
-### Create a key vault by using PowerShell
-
-1. Use the PowerShell option in Azure Cloud Shell to sign in to your account:
-
- ```powershell
- Login-AzAccount
- ```
-
-1. Use the following command to create a new resource group in the region that's closest to your users. For the `<placeholder>` values, enter the information for the Key Vault instance you want to create.
-
- ```powershell
- New-AzResourceGroup -Name <ResourceGroupName> -Location <Location>
- ```
-
-1. Use the following cmdlet to create the key vault. For the `<placeholder>` values, use the use key vault name, resource group name, and region for the key vault you want to create.
-
- ```powershell
- New-AzKeyVault -Name <KeyVaultName> -ResourceGroupName <ResourceGroupName> -Location <Location>
- ```
-
-For more information, see [Quickstart: Create a key vault by using PowerShell](../key-vault/general/quick-create-powershell.md).
-
-### Create a key vault by using the Azure CLI
-
-1. Use the Bash option in the Azure CLI to create a new resource group in the region that's closest to your users. For the `<placeholder>` values, enter the information for the Key Vault instance you want to create.
+[Azure Key Vault](../key-vault/general/overview.md) is the standard, recommended mechanism for storing and accessing secrets and certificates across Azure securely. Therefore, on February 29, 2024, we'll retire the Batch account certificates feature in Azure Batch. The alternative is to use the Azure Key Vault VM Extension and a user-assigned managed identity on the pool to securely access and install certificates on your Batch pools.
- ```bash
- az group create -name <ResourceGroupName> -l <Location>
- ```
+After the certificates feature in Azure Batch is retired on February 29, 2024, a certificate in Batch won't work as expected. After that date, you'll no longer be able to add certificates to a Batch account or link these certificates to Batch pools. Pools that continue to use this feature after that date may not behave as expected; for example, certificate references may not update, and existing certificate references may fail to install.
-1. Create the key vault by using the following command. For the `<placeholder>` values, use the use key vault name, resource group name, and region for the key vault you want to create.
+## Alternative: Use Azure Key Vault VM Extension with Pool User-assigned Managed Identity
- ```bash
- az keyvault create -name <KeyVaultName> -resource-group <ResourceGroupName> -location <Location>
- ```
+Azure Key Vault is a fully managed Azure service that provides controlled access to store and manage secrets, certificates, tokens, and keys. Key Vault provides security at the transport layer by ensuring that any data flow from the key vault to the client application is encrypted. Azure Key Vault gives you a secure way to store essential access information and to set fine-grained access control. You can manage all secrets from one dashboard. You can choose to protect keys in software or by using hardware security modules (HSMs), and you can set Key Vault to automatically renew certificates.
-For more information, see [Quickstart: Create a key vault by using the Azure CLI](../key-vault/general/quick-create-cli.md).
+For a complete guide on how to enable Azure Key Vault VM Extension with Pool User-assigned Managed Identity, see [Enable automatic certificate rotation in a Batch pool](automatic-certificate-rotation.md).
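
As a rough sketch only, the shape of the extension entry looks like the following. The field names are drawn from the automatic certificate rotation guide linked above; the extension name, vault URL, and client ID are placeholders, and the fragment belongs inside the pool's `virtualMachineConfiguration.extensions` array alongside a user-assigned managed identity on the pool.

```bash
# Illustrative fragment only (not a complete pool definition): an Azure Key Vault VM
# extension entry for a Linux Batch pool that authenticates with the pool's
# user-assigned managed identity. Verify field names against the linked guide.
cat > keyvault-extension-fragment.json <<'EOF'
{
  "name": "myKeyVaultExtension",
  "publisher": "Microsoft.Azure.KeyVault",
  "type": "KeyVaultForLinux",
  "typeHandlerVersion": "2.0",
  "autoUpgradeMinorVersion": true,
  "settings": {
    "secretsManagementSettings": {
      "pollingIntervalInS": "300",
      "certificateStoreLocation": "/var/lib/waagent/Microsoft.Azure.KeyVault",
      "observedCertificates": [ "https://<your-vault>.vault.azure.net/secrets/<your-certificate>" ]
    },
    "authenticationSettings": {
      "msiEndpoint": "http://169.254.169.254/metadata/identity",
      "msiClientId": "<client-id-of-the-pool-user-assigned-identity>"
    }
  }
}
EOF
```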
## FAQs -- Does Microsoft recommend using Azure Key Vault for access management in Batch?
+- Do `CloudServiceConfiguration` pools support Azure Key Vault VM extension and managed identity on pools?
- Yes. We recommend that you use Azure Key Vault as part of your approach to essential data protection in the cloud.
+ No. `CloudServiceConfiguration` pools will be [retired](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/) on the same date as the Batch account certificates feature, February 29, 2024. We recommend that you migrate to `VirtualMachineConfiguration` pools before that date, where you'll be able to use these solutions.
-- Does user subscription mode support Azure Key Vault?
+- Do Batch accounts in user subscription pool allocation mode support Azure Key Vault?
- Yes. In user subscription mode, you must create the key vault at the time you create the Batch account.
+ Yes. You can use the same Key Vault that's specified for your Batch account with your pools, or you can use an entirely separate Key Vault for the certificates on your Batch pools.
- Where can I find best practices for using Azure Key Vault?
For more information, see [Quickstart: Create a key vault by using the Azure CLI
## Next steps
-For more information, see [Key Vault certificate access control](../key-vault/certificates/certificate-access-control.md).
+For more information, see [Key Vault certificate access control](../key-vault/certificates/certificate-access-control.md). For more information about Batch functionality related to this migration, see [Azure Batch Pool extensions](create-pool-extensions.md) and [Azure Batch Pool Managed Identity](managed-identity-pools.md).
batch Credential Access Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/credential-access-key-vault.md
Last updated 06/22/2022
-# Use certificates and securely access Azure Key Vault with Batch
+# Use certificates to securely access Azure Key Vault with Batch
-In this article, you'll learn how to set up Batch nodes to securely access credentials stored in [Azure Key Vault](../key-vault/general/overview.md).
+> [!WARNING]
+> Batch account certificates as detailed in this article are [deprecated](batch-certificate-migration-guide.md). To securely access Azure Key Vault, use [pool managed identities](managed-identity-pools.md) with the appropriate access permissions configured for the user-assigned managed identity. If you need to provision certificates on Batch nodes, use the Azure Key Vault VM extension together with the pool managed identity to install and manage certificates on your Batch pool. For more information on deploying certificates from Azure Key Vault with a managed identity on Batch pools, see [Enable automatic certificate rotation in a Batch pool](automatic-certificate-rotation.md).
+>
+> `CloudServiceConfiguration` pools do not provide the ability to specify either Managed Identity or the Azure Key Vault VM extension, and these pools are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). You should migrate to `VirtualMachineConfiguration` pools which provide the aforementioned alternatives.
+
+In this article, you'll learn how to set up Batch nodes with certificates to securely access credentials stored in [Azure Key Vault](../key-vault/general/overview.md).
To authenticate to Azure Key Vault from a Batch node, you need:
To authenticate to Azure Key Vault from a Batch node, you need:
- A Batch account - A Batch pool with at least one node
-> [!IMPORTANT]
-> Batch now offers an improved option for accessing credentials stored in Azure Key Vault. By creating your pool with a user-assigned managed identity that can access the certificate in Azure Key Vault, you don't need to send the certificate content to the Batch Service, which enhances security. We recommend using automatic certificate rotation instead of the method described in this topic. For more information, see [Enable automatic certificate rotation in a Batch pool](automatic-certificate-rotation.md).
- ## Obtain a certificate If you don't already have a certificate, [use the PowerShell cmdlet `New-SelfSignedCertificate`](/powershell/module/pki/new-selfsignedcertificate) to make a new self-signed certificate.
batch Job Pool Lifetime Statistics Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/job-pool-lifetime-statistics-migration-guide.md
Previously updated : 08/15/2022 Last updated : 10/06/2022 # Migrate from job and pool lifetime statistics API to logs in Batch
The Azure Batch lifetime statistics API for jobs and pools will be retired on *A
## About the feature
-Currently, you can use API to retrieve lifetime statistics for jobs and pools in Batch. You can use the API to get lifetime statistics for all the jobs and pools in a Batch account or for a specific job or pool. The API collects statistical data from when the Batch account was created until the last time the account was updated or from when a job or pool was created. A customer might use the job and pool lifetime statistics API to help them analyze and evaluate their Batch usage.
+Currently, you can use the API to retrieve lifetime statistics for [jobs](https://learn.microsoft.com/rest/api/batchservice/job/get-all-lifetime-statistics) and [pools](https://learn.microsoft.com/rest/api/batchservice/pool/get-all-lifetime-statistics) in Batch. The API collects statistical data for all jobs and pools created over the lifetime of the Batch account, starting from when the account was created.
-To make statistical data available to customers, the Batch service allocates batch pools and schedule jobs with an in-house MapReduce implementation to do a periodic, background rollup of statistics. The aggregation is performed for all Batch accounts, pools, and jobs in each region, regardless of whether a customer needs or queries the stats for their account, pool, or job. The operating cost includes 11 VMs allocated in each region to execute MapReduce aggregation jobs. For busy regions, we had to increase the pool size further to accommodate the extra aggregation load.
-
-The MapReduce aggregation logic was implemented by using legacy code, and no new features are being added or improvised due to technical challenges with legacy code. Still, the legacy code and its hosting repository need to be updated frequently to accommodate increased loads in production and to meet security and compliance requirements. Also, because the API is featured to provide lifetime statistics, the data is growing and demands more storage and performance issues, even though most customers don't use the API. The Batch service currently uses all the compute and storage usage charges that are associated with MapReduce pools and jobs.
+To make statistical data available to customers, the Batch service performs aggregation and roll-ups on a periodic basis. Because these lifetime statistics APIs are rarely used by Batch customers and alternatives exist, the APIs are being retired.
## Feature end of support
-The lifetime statistics API is designed and maintained to help you troubleshoot your Batch services. However, not many customers actually use the API. The customers who use the API are interested in extracting details for not more than a month. More advanced ways of getting data about logs, pools, and jobs can be collected and used on a need basis by using Azure portal logs, alerts, log export, and other methods.
+The lifetime statistics API is designed and maintained to help you gather information about usage of your Batch pools and jobs across the lifetime of your Batch account. Alternatives exist to gather data at a fine-grained level on a [per job](https://learn.microsoft.com/rest/api/batchservice/job/get#jobstatistics) or [per pool](https://learn.microsoft.com/rest/api/batchservice/pool/get#poolstatistics) basis. Only the lifetime statistics APIs are being retired.
When the job and pool lifetime statistics API is retired on April 30, 2023, the API will no longer work, and it will return an appropriate HTTP response error code to the client.
-## Alternative: Set up logs in the Azure portal
+## Alternatives
+
+### Aggregate with per job or per pool statistics
+
+You can get statistics for any active job or pool in a Batch account. For jobs, you can issue a [Get Job](https://learn.microsoft.com/rest/api/batchservice/job/get) request and view the [JobStatistics object](https://learn.microsoft.com/rest/api/batchservice/job/get#jobstatistics). For pools, you can issue a [Get Pool](https://learn.microsoft.com/rest/api/batchservice/pool/get) request and view the [PoolStatistics object](https://learn.microsoft.com/rest/api/batchservice/pool/get#poolstatistics). You can then aggregate these results as needed across the jobs and pools that are relevant to your analysis workflow.
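
As a minimal sketch, you can pull per-job statistics over REST with an Azure AD token and aggregate them yourself. The api-version string and the `$expand=stats` query shown here are assumptions to verify against the Get Job reference, and `<account>`, `<region>`, and `<job-id>` are placeholders.

```bash
# Acquire an Azure AD access token for the Batch data plane.
TOKEN=$(az account get-access-token --resource https://batch.core.windows.net/ --query accessToken -o tsv)

# Get one job and expand its statistics; repeat per job (or per pool) and aggregate the results.
curl -s "https://<account>.<region>.batch.azure.com/jobs/<job-id>?api-version=2022-10-01.16.0&\$expand=stats" \
  -H "Authorization: Bearer $TOKEN" | jq '.stats'
```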
+
+### Set up logs in the Azure portal
-The Azure portal has various options to enable the logs. System logs and diagnostic logs can provide statistical data. For more information, see [Monitor Batch solutions](./monitoring-overview.md).
+The Azure portal has various options to enable monitoring and logs. System logs and diagnostic logs can provide statistical data that you can aggregate with custom solutions. For more information, see [Monitor Batch solutions](./monitoring-overview.md).
## FAQs
The Azure portal has various options to enable the logs. System logs and diagnos
## Next steps
-For more information, see [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md).
+For more information, see the Batch [Job](https://learn.microsoft.com/rest/api/batchservice/job) and [Pool](https://learn.microsoft.com/rest/api/batchservice/pool) APIs. For Azure Monitor logs, see [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md).
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
center-sap-solutions Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/install-software.md
Before you can download the software, set up an Azure Storage account for storin
1. Select **Create**.
- 1. Grant the ACSS application *Azure SAP Workloads Management* **Storage Blob Data Reader** and **Reader and Data Access** role access on this storage account.
 + 1. Grant the **user-assigned managed identity** that was used during infrastructure deployment the **Storage Blob Data Reader** and **Reader and Data Access** roles on this storage account.
### Download SAP media
center-sap-solutions Register Existing System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/register-existing-system.md
ACSS uses this user-assigned managed identity to install VM extensions on the AS
To provide permissions to the SAP system resources to a user-assigned managed identity:
-1. Create a new user-assigned managed identity if needed or use an existing one.
-1. Assign **Contributor** role access to the user-assigned managed identity on all Resource Groups in which the SAP system resources exist. That is, Compute, Network and Storage Resource Groups.
+1. [Create a new user-assigned managed identity](https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) if needed or use an existing one.
+1. [Assign **Contributor** role access](https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#manage-access-to-user-assigned-managed-identities) to the user-assigned managed identity on all Resource Groups in which the SAP system resources exist. That is, Compute, Network and Storage Resource Groups.
1. Once the permissions are assigned, this managed identity can be used in ACSS to register and manage SAP systems. ## Register SAP system
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Last updated 06/16/2022 -+ # Chaos Studio fault and action library
Known issues on Linux:
} ```
-## Cosmos DB failover
+## Azure Cosmos DB failover
| Property | Value | |-|-| | Capability Name | Failover-1.0 | | Target type | Microsoft-CosmosDB |
-| Description | Causes a Cosmos DB account with a single write region to fail over to a specified read region to simulate a [write region outage](../cosmos-db/high-availability.md) |
+| Description | Causes an Azure Cosmos DB account with a single write region to fail over to a specified read region to simulate a [write region outage](../cosmos-db/high-availability.md) |
| Prerequisites | None. |
-| Urn | urn:csci:microsoft:cosmosDB:failover/1.0 |
+| Urn | `urn:csci:microsoft:cosmosDB:failover/1.0` |
| Parameters (key, value) | | | readRegion | The read region that should be promoted to write region during the failover, for example, "East US 2" |
chaos-studio Chaos Studio Faults Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-faults-actions.md
Last updated 11/01/2021-+ # Faults and actions in Azure Chaos Studio
There are two varieties of actions in Chaos Studio:
## Faults
-Faults are the most common action in Chaos Studio. Faults cause a disruption in a system, allowing you to verify that the system effectively handles that disruption without impacting availability. Faults can be destructive (for example, killing a process), apply pressure (for example, adding virtual memory pressure), add latency, or cause a configuration change. In addition to a name and type, faults may also have a *duration*, if continuous, and *parameters*. Parameters describe how the fault should be applied and are specific to the fault name. For example, a parameter for the Cosmos DB failover fault is the read region that will be promoted to the write region during the write region failure. Some parameters are required while others are optional.
+Faults are the most common action in Chaos Studio. Faults cause a disruption in a system, allowing you to verify that the system effectively handles that disruption without impacting availability. Faults can be destructive (for example, killing a process), apply pressure (for example, adding virtual memory pressure), add latency, or cause a configuration change. In addition to a name and type, faults may also have a *duration*, if continuous, and *parameters*. Parameters describe how the fault should be applied and are specific to the fault name. For example, a parameter for the Azure Cosmos DB failover fault is the read region that will be promoted to the write region during the write region failure. Some parameters are required while others are optional.
Faults are either *agent-based* or *service-direct* depending on the target type. An agent-based fault requires the Chaos Studio agent to be installed on a virtual machine or virtual machine scale set. The agent is available for both Windows and Linux, but not all faults are available on both operating systems. See the [fault library](chaos-studio-fault-library.md) for information on which faults are supported on each operating system. Service-direct faults do not require any agent - they run directly against an Azure resource.
chaos-studio Chaos Studio Targets Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-targets-capabilities.md
Last updated 11/01/2021-+ # Targets and capabilities in Azure Chaos Studio
Before you can inject a fault against an Azure resource, the resource must first
A chaos **target** enables Chaos Studio to interact with a resource for a particular target type. A **target type** represents the method of injecting faults against a resource. Resource types that only support service-direct faults have one target type, for example the `Microsoft-CosmosDB` type for Azure Cosmos DB. Resource types that support service-direct and agent-based faults have two target types: one for the service-direct faults (for example, `Microsoft-VirtualMachine`), and one for the agent-based faults (always `Microsoft-Agent`).
-A target is an extension resource created as a child of the resource that is being onboarded to Chaos Studio (for example, a Virtual Machine or Network Security Group). A target defines the target type that is enabled on the resource. For example, if onboarding a Cosmos DB instance with this resource ID:
+A target is an extension resource created as a child of the resource that is being onboarded to Chaos Studio (for example, a Virtual Machine or Network Security Group). A target defines the target type that is enabled on the resource. For example, if onboarding an Azure Cosmos DB instance with this resource ID:
``` /subscriptions/fd9ccc83-faf6-4121-9aff-2a2d685ca2a2/resourceGroups/chaosstudiodemo/providers/Microsoft.DocumentDB/databaseAccounts/myDB ```
-The Cosmos DB resource will have a child resource formatted like this:
+The Azure Cosmos DB resource will have a child resource formatted like this:
``` /subscriptions/fd9ccc83-faf6-4121-9aff-2a2d685ca2a2/resourceGroups/chaosstudiodemo/providers/Microsoft.DocumentDB/databaseAccounts/myDB/providers/Microsoft.Chaos/targets/Microsoft-CosmosDB
chaos-studio Chaos Studio Tutorial Service Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-service-direct-cli.md
Last updated 11/10/2021-+ ms.devlang: azurecli
With your Azure Cosmos DB account now onboarded, you can create your experiment.
## Give experiment permission to your Azure Cosmos DB account When you create a chaos experiment, Chaos Studio creates a system-assigned managed identity that executes faults against your target resources. This identity must be given [appropriate permissions](chaos-studio-fault-providers.md) to the target resource for the experiment to run successfully.
-Give the experiment access to your resource(s) using the command below, replacing `$EXPERIMENT_PRINCIPAL_ID` with the principalId from the previous step and `$RESOURCE_ID` with the resource ID of the target resource (in this case, the Cosmos DB instance resource ID). Change the role to the appropriate [built-in role for that resource type](chaos-studio-fault-providers.md). Run this command for each resource targeted in your experiment.
+Give the experiment access to your resource(s) using the command below, replacing `$EXPERIMENT_PRINCIPAL_ID` with the principalId from the previous step and `$RESOURCE_ID` with the resource ID of the target resource (in this case, the Azure Cosmos DB instance resource ID). Change the role to the appropriate [built-in role for that resource type](chaos-studio-fault-providers.md). Run this command for each resource targeted in your experiment.
```azurecli-interactive az role assignment create --role "Cosmos DB Operator" --assignee-object-id $EXPERIMENT_PRINCIPAL_ID --scope $RESOURCE_ID
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 9/29/2022 Last updated : 10/11/2022 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## October 2022 Guest OS
+
+> [!NOTE]
+> The October Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the October Guest OS. This list is subject to change.
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 22-10 | [5016623] | Latest Cumulative Update(LCU) | 6.49 | Aug 9, 2022 |
+| Rel 22-10 | [5016618] | IE Cumulative Updates | 2.129, 3.116, 4.109 | Aug 9, 2022 |
+| Rel 22-10 | [5016627] | Latest Cumulative Update(LCU) | 7.17 | Aug 9, 2022 |
+| Rel 22-10 | [5016622] | Latest Cumulative Update(LCU) | 5.73 | Aug 9, 2022 |
+| Rel 22-10 | [5013637] | .NET Framework 3.5 Security and Quality Rollup LKG | 2.129 | Oct 11, 2022 |
+| Rel 22-10 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 2.129 | May 10, 2022 |
+| Rel 22-10 | [5013638] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.109 | Jun 14, 2022 |
+| Rel 22-10 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 4.109 | May 10, 2022 |
+| Rel 22-10 | [5013635] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.116 | Oct 11, 2022 |
+| Rel 22-10 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 3.116 | May 10, 2022 |
+| Rel 22-10 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update LKG | 6.49 | May 10, 2022 |
+| Rel 22-10 | [5017028] | .NET Framework 4.8 Security and Quality Rollup LKG | 7.17 | Sep 13, 2022 |
+| Rel 22-10 | [5018454] | Monthly Rollup | 2.129 | Oct 11, 2022 |
+| Rel 22-10 | [5018457] | Monthly Rollup | 3.116 | Oct 11, 2022 |
+| Rel 22-10 | [5018474] | Monthly Rollup | 4.109 | Oct 11, 2022 |
+| Rel 22-10 | [5016263] | Servicing Stack update | 3.116 | Jul 12, 2022 |
+| Rel 22-10 | [5018922] | Servicing Stack update | 4.109 | Oct 11, 2022 |
+| Rel 22-10 | [4578013] | OOB Standalone Security Update | 4.109 | Aug 19, 2020 |
+| Rel 22-10 | [5017396] | Servicing Stack update | 5.73 | Sep 13, 2022 |
+| Rel 22-10 | [5017397] | Servicing Stack update | 2.129 | Sep 13, 2022 |
+| Rel 22-10 | [4494175] | Microcode | 5.73 | Sep 1, 2020 |
+| Rel 22-10 | [4494174] | Microcode | 6.49 | Sep 1, 2020 |
+
+[5016623]: https://support.microsoft.com/kb/5016623
+[5016618]: https://support.microsoft.com/kb/5016618
+[5016627]: https://support.microsoft.com/kb/5016627
+[5016622]: https://support.microsoft.com/kb/5016622
+[5013637]: https://support.microsoft.com/kb/5013637
+[5013644]: https://support.microsoft.com/kb/5013644
+[5013638]: https://support.microsoft.com/kb/5013638
+[5013643]: https://support.microsoft.com/kb/5013643
+[5013635]: https://support.microsoft.com/kb/5013635
+[5013642]: https://support.microsoft.com/kb/5013642
+[5013641]: https://support.microsoft.com/kb/5013641
+[5017028]: https://support.microsoft.com/kb/5017028
+[5018454]: https://support.microsoft.com/kb/5018454
+[5018457]: https://support.microsoft.com/kb/5018457
+[5018474]: https://support.microsoft.com/kb/5018474
+[5016263]: https://support.microsoft.com/kb/5016263
+[5018922]: https://support.microsoft.com/kb/5018922
+[4578013]: https://support.microsoft.com/kb/4578013
+[5017396]: https://support.microsoft.com/kb/5017396
+[5017397]: https://support.microsoft.com/kb/5017397
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
++ ## September 2022 Guest OS
cloud-shell Cloud Shell Windows Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/cloud-shell-windows-users.md
tags: azure-resource-manager ms.assetid:-+ vm-linux
cloud-shell Embed Cloud Shell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/embed-cloud-shell.md
tags: azure-resource-manager ms.assetid: -+ vm-linux
cloud-shell Example Terraform Bash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/example-terraform-bash.md
tags: azure-cloud-shell -+ vm-linux
cloud-shell Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/features.md
tags: azure-resource-manager ms.assetid:-+ vm-linux
cloud-shell Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/limitations.md
tags: azure-resource-manager ms.assetid: -+ vm-linux
cloud-shell Msi Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/msi-authorization.md
tags: azure-resource-manager-+ vm-linux
cloud-shell Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/overview.md
tags: azure-resource-manager ms.assetid: -+ vm-linux
cloud-shell Persisting Shell Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/persisting-shell-storage.md
tags: azure-resource-manager ms.assetid: -+ vm-linux
cloud-shell Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/pricing.md
tags: azure-resource-manager ms.assetid: -+ vm-linux
cloud-shell Private Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/private-vnet.md
tags: azure-resource-manager ms.assetid:-+ vm-linux
cloud-shell Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart-powershell.md
description: Learn how to use the PowerShell in your browser with Azure Cloud Sh
tags: azure-resource-manager-+ vm-linux
cloud-shell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart.md
description: Learn how to use the Bash command line in your browser with Azure C
tags: azure-resource-manager-+ vm-linux
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
tags: azure-resource-manager ms.assetid: --+ vm-linux
cloud-shell Using Cloud Shell Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/using-cloud-shell-editor.md
tags: azure-resource-manager ms.assetid: -+ vm-linux
cloud-shell Using The Shell Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/using-the-shell-window.md
tags: azure-resource-manager ms.assetid: -+ vm-linux
cognitive-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
Title: Install Read OCR Docker containers from Computer Vision
+ Title: Computer Vision 3.2 GA Read OCR container
-description: Use the Read OCR Docker containers from Computer Vision to extract text from images and documents, on-premises.
+description: Use the Read 3.2 OCR containers from Computer Vision to extract text from images and documents, on-premises.
keywords: on-premises, OCR, Docker, container
-# Install Read OCR Docker containers
+# Install Computer Vision 3.2 GA Read OCR container
[!INCLUDE [container hosting on the Microsoft Container Registry](../containers/includes/gated-container-hosting.md)]
cognitive-services Concept Describing Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-describing-images.md
Previously updated : 06/13/2022 Last updated : 09/20/2022 -+ # Image description generation
The following JSON response illustrates what the Analyze API returns when descri
![A black and white picture of buildings in Manhattan](./Images/bw_buildings.png)
+#### [Version 3.2](#tab/3-2)
+ ```json {
- "description": {
- "tags": ["outdoor", "building", "photo", "city", "white", "black", "large", "sitting", "old", "water", "skyscraper", "many", "boat", "river", "group", "street", "people", "field", "tall", "bird", "standing"],
- "captions": [
- {
- "text": "a black and white photo of a city",
- "confidence": 0.95301952483304808
- },
- {
- "text": "a black and white photo of a large city",
- "confidence": 0.94085190563213816
- },
+ "description":{
+ "tags":[
+ "outdoor",
+ "city",
+ "white"
+ ],
+ "captions":[
+ {
+ "text":"a city with tall buildings",
+ "confidence":0.48468858003616333
+ }
+ ]
+ },
+ "requestId":"7e5e5cac-ef16-43ca-a0c4-02bd49d379e9",
+ "metadata":{
+ "height":300,
+ "width":239,
+ "format":"Png"
+ },
+ "modelVersion":"2021-05-01"
+}
+```
+#### [Version 4.0](#tab/4-0)
+
+```json
+{
+ "metadata":
+ {
+ "width": 239,
+ "height": 300
+ },
+ "descriptionResult":
+ {
+ "values":
+ [
{
- "text": "a large white building in a city",
- "confidence": 0.93108362931954824
+ "text": "a city with tall buildings",
+ "confidence": 0.3551448881626129
} ]
- },
- "requestId": "b20bfc83-fb25-4b8d-a3f8-b2a1f084b159",
- "metadata": {
- "height": 300,
- "width": 239,
- "format": "Jpeg"
} } ```+ ## Use the API
-The image description feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Description` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"description"` section.
+#### [Version 3.2](#tab/3-2)
+
+The image description feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Description` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"description"` section.
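
For example, a minimal REST sketch with `curl` (the resource name, key, and image URL are placeholders):

```bash
# Analyze an image by URL and return only the description section of the response.
curl -s -X POST "https://<your-resource>.cognitiveservices.azure.com/vision/v3.2/analyze?visualFeatures=Description" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/house.png"}' | jq '.description'
```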
+
+#### [Version 4.0](#tab/4-0)
+
+The image description feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Description` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"description"` section.
++ * [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Detecting Adult Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-detecting-adult-content.md
Last updated 07/05/2022 -+ # Adult content detection
The "adult" classification contains several different categories:
## Use the API
+#### [Version 3.2](#tab/3-2)
+ You can detect adult content with the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. When you add the value of `Adult` to the **visualFeatures** query parameter, the API returns three boolean properties&mdash;`isAdultContent`, `isRacyContent`, and `isGoryContent`&mdash;in its JSON response. The method also returns corresponding properties&mdash;`adultScore`, `racyScore`, and `goreScore`&mdash;which represent confidence scores between zero and one for each respective category.
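
As a quick sketch, you can also send the image bytes directly instead of a URL (the resource name, key, and local file are placeholders):

```bash
# Analyze a local image for adult content and return the adult classification scores.
curl -s -X POST "https://<your-resource>.cognitiveservices.azure.com/vision/v3.2/analyze?visualFeatures=Adult" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @image.jpg | jq '.adult'
```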
+#### [Version 4.0](#tab/4-0)
+
+You can detect adult content with the [Analyze Image](https://aka.ms/vision-4-0-ref) API. When you add the value of `Adult` to the **features** query parameter, the API returns three properties&mdash;`adult`, `racy`, and `gore`&mdash;in its JSON response. Each of these properties contains a boolean value and confidence scores between zero and one.
+++ - [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Generating Thumbnails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-generating-thumbnails.md
Last updated 03/11/2018 -+ # Smart-cropped thumbnails
-A thumbnail is a reduced-size representation of an image. Thumbnails are used to represent images and other data in a more economical, layout-friendly way. The Computer Vision API uses smart cropping, together with resizing the image, to create intuitive thumbnails for a given image.
+A thumbnail is a reduced-size representation of an image. Thumbnails are used to represent images and other data in a more economical, layout-friendly way. The Computer Vision API uses smart cropping to create intuitive image thumbnails that include the key objects in the image.
+#### [Version 3.2](#tab/3-2)
The Computer Vision thumbnail generation algorithm works as follows: 1. Remove distracting elements from the image and identify the _area of interest_&mdash;the area of the image in which the main object(s) appears.
The generated thumbnail can vary widely depending on what you specify for height
![A mountain image next to various cropping configurations](./Images/thumbnail-demo.png)
-The following table illustrates typical thumbnails generated by Computer Vision for the example images. The thumbnails were generated for a specified target height and width of 50 pixels, with smart cropping enabled.
+The following table illustrates thumbnails defined by smart-cropping for the example images. The thumbnails were generated for a specified target height and width of 50 pixels, with smart cropping enabled.
| Image | Thumbnail | |-|--|
The following table illustrates typical thumbnails generated by Computer Vision
|![A white flower with a green background](./Images/flower.png) | ![Vision Analyze Flower thumbnail](./Images/flower_thumbnail.png) | |![A woman on the roof of an apartment building](./Images/woman_roof.png) | ![thumbnail of a woman on the roof of an apartment building](./Images/woman_roof_thumbnail.png) |
+#### [Version 4.0](#tab/4-0)
+
+The Computer Vision smart-cropping utility takes a given aspect ratio (or several) and returns the bounding box coordinates (in pixels) of the region(s) identified. Your app can then crop and return the image using those coordinates.
+
+> [!IMPORTANT]
+> This feature uses face detection to help determine important regions in the image. The detection does not involve distinguishing one face from another face, predicting or classifying facial attributes, or creating a facial template (a unique set of numbers generated from an image that represents the distinctive features of a face).
+++ ## Use the API
+#### [Version 3.2](#tab/3-2)
+ The generate thumbnail feature is available through the [Get Thumbnail](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20c) and [Get Area of Interest](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/b156d0f5e11e492d9f64418d) APIs. You can call this API through a native SDK or through REST calls.
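
For example, a minimal `curl` sketch of the Get Thumbnail call (the resource name, key, and image URL are placeholders):

```bash
# Request a 50x50 smart-cropped thumbnail and save the returned image bytes to a file.
curl -s -X POST "https://<your-resource>.cognitiveservices.azure.com/vision/v3.2/generateThumbnail?width=50&height=50&smartCropping=true" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/mountain.png"}' \
  --output thumbnail.jpg
```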
+#### [Version 4.0](#tab/4-0)
+
+The smart cropping feature is available through the [Analyze](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `SmartCrops` in the **features** query parameter. Also include a **smartcrops-aspect-ratios** query parameter, and set it to a decimal value for the aspect ratio you want (defined as width / height). Multiple aspect ratio values should be comma-separated.
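
A minimal sketch of such a call follows; the preview endpoint path, api-version, and exact parameter casing are assumptions to check against the [Analyze Image 4.0 reference](https://aka.ms/vision-4-0-ref), and the resource name, key, and image URL are placeholders.

```bash
# Request smart-crop regions for a 0.75 (width/height) aspect ratio.
curl -s -X POST "https://<your-resource>.cognitiveservices.azure.com/computervision/imageanalysis:analyze?api-version=2022-10-12-preview&features=SmartCrops&smartcrops-aspect-ratios=0.75" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/mountain.png"}'
```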
+++ * [Generate a thumbnail (how-to)](./how-to/generate-thumbnail.md)
cognitive-services Concept Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-object-detection.md
Previously updated : 06/13/2022 Last updated : 09/20/2022 -+ # Object detection Object detection is similar to [tagging](concept-tagging-images.md), but the API returns the bounding box coordinates (in pixels) for each object found in the image. For example, if an image contains a dog, cat and person, the Detect operation will list those objects with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image. It also lets you determine whether there are multiple instances of the same object in an image.
-The Detect API applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the Detect API only finds objects and living things, while the Tag API can also include contextual terms like "indoor", which can't be localized with bounding boxes.
+The object detection function applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the object detection function only finds objects and living things, while the tag function can also include contextual terms like "indoor", which can't be localized with bounding boxes.
Try out the capabilities of object detection quickly and easily in your browser using Vision Studio.
The following JSON response illustrates what the Analyze API returns when detect
![A woman using a Microsoft Surface device in a kitchen](./Images/windows-kitchen.jpg)
+#### [Version 3.2](#tab/3-2)
+ ```json { "objects":[
The following JSON response illustrates what the Analyze API returns when detect
"confidence":0.855 } ],
- "requestId":"a7fde8fd-cc18-4f5f-99d3-897dcd07b308",
+ "requestId":"25018882-a494-4e64-8196-f627a35c1135",
"metadata":{
- "width":1260,
"height":473,
+ "width":1260,
"format":"Jpeg"
- }
+ },
+ "modelVersion":"2021-05-01"
} ```
+#### [Version 4.0](#tab/4-0)
+
+```json
+{
+ "metadata":
+ {
+ "width": 1260,
+ "height": 473
+ },
+ "objectsResult":
+ {
+ "values":
+ [
+ {
+ "name": "kitchen appliance",
+ "confidence": 0.501,
+ "boundingBox": {"x":730,"y":66,"w":135,"h":85}
+ },
+ {
+ "name": "computer keyboard",
+ "confidence": 0.51,
+ "boundingBox": {"x":523,"y":377,"w":185,"h":46}
+ },
+ {
+ "name": "Laptop",
+ "confidence": 0.85,
+ "boundingBox": {"x":471,"y":218,"w":289,"h":226}
+ },
+ {
+ "name": "person",
+ "confidence": 0.855,
+ "boundingBox": {"x":654,"y":0,"w":584,"h":473}
+ }
+ ]
+ }
+}
+```
++ ## Limitations It's important to note the limitations of object detection so you can avoid or mitigate the effects of false negatives (missed objects) and limited detail.
It's important to note the limitations of object detection so you can avoid or m
## Use the API
-The object detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Objects` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"objects"` section.
+#### [Version 3.2](#tab/3-2)
+
+The object detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Objects` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"objects"` section.
+
+#### [Version 4.0](#tab/4-0)
+
+The object detection feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Objects` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"objects"` section.
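
As a brief sketch (preview endpoint path and api-version shown as assumptions; resource name, key, and image URL are placeholders):

```bash
# Detect objects and print just the detected object names from the response.
curl -s -X POST "https://<your-resource>.cognitiveservices.azure.com/computervision/imageanalysis:analyze?api-version=2022-10-12-preview&features=Objects" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/kitchen.jpg"}' | jq -r '.objectsResult.values[].name'
```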
++ * [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-ocr.md
+
+ Title: Reading text - Computer Vision
+
+description: Learn concepts related to the Read feature of the Computer Vision API - usage and limits.
++++++++ Last updated : 09/12/2022+++
+# Reading text (preview)
+
+Version 4.0 of Image Analysis offers the ability to extract text from images. Contextual information like line number and position is also returned. Text reading is also available through the [OCR service](overview-ocr.md), but the latest model version is available through Image Analysis. This version is optimized for image inputs as opposed to documents.
++
+## Reading text example
+
+The following JSON response illustrates what the Analyze API returns when reading text in the given image.
+
+![Photo of a sticky note with writing on it.](./Images/handwritten-note.jpg)
+
+```json
+{
+ "metadata":
+ {
+ "width": 1000,
+ "height": 945
+ },
+ "readResult":
+ {
+ "stringIndexType": "TextElements",
+ "content": "You must be the change you\nWish to see in the world !\nEverything has its beauty , but\nnot everyone sees it !",
+ "pages":
+ [
+ {
+ "height": 945,
+ "width": 1000,
+ "angle": -1.099,
+ "pageNumber": 1,
+ "words":
+ [
+ {
+ "content": "You",
+ "boundingBox": [253,268,301,267,304,318,256,318],
+ "confidence": 0.998,
+ "span": {"offset":0,"length":3}
+ },
+ {
+ "content": "must",
+ "boundingBox": [310,266,376,265,378,316,313,317],
+ "confidence": 0.988,
+ "span": {"offset":4,"length":4}
+ },
+ {
+ "content": "be",
+ "boundingBox": [385,264,426,264,428,314,388,316],
+ "confidence": 0.928,
+ "span": {"offset":9,"length":2}
+ },
+ {
+ "content": "the",
+ "boundingBox": [435,263,494,263,496,311,437,314],
+ "confidence": 0.997,
+ "span": {"offset":12,"length":3}
+ },
+ {
+ "content": "change",
+ "boundingBox": [503,263,600,262,602,306,506,311],
+ "confidence": 0.995,
+ "span": {"offset":16,"length":6}
+ },
+ {
+ "content": "you",
+ "boundingBox": [609,262,665,263,666,302,611,305],
+ "confidence": 0.998,
+ "span": {"offset":23,"length":3}
+ },
+ {
+ "content": "Wish",
+ "boundingBox": [327,348,391,343,392,380,328,382],
+ "confidence": 0.98,
+ "span": {"offset":27,"length":4}
+ },
+ {
+ "content": "to",
+ "boundingBox": [406,342,438,340,439,378,407,379],
+ "confidence": 0.997,
+ "span": {"offset":32,"length":2}
+ },
+ {
+ "content": "see",
+ "boundingBox": [446,340,492,337,494,376,447,378],
+ "confidence": 0.998,
+ "span": {"offset":35,"length":3}
+ },
+ {
+ "content": "in",
+ "boundingBox": [500,337,527,336,529,375,501,376],
+ "confidence": 0.983,
+ "span": {"offset":39,"length":2}
+ },
+ {
+ "content": "the",
+ "boundingBox": [534,336,588,334,590,373,536,375],
+ "confidence": 0.993,
+ "span": {"offset":42,"length":3}
+ },
+ {
+ "content": "world",
+ "boundingBox": [599,334,655,333,658,371,601,373],
+ "confidence": 0.998,
+ "span": {"offset":46,"length":5}
+ },
+ {
+ "content": "!",
+ "boundingBox": [663,333,687,333,690,370,666,371],
+ "confidence": 0.915,
+ "span": {"offset":52,"length":1}
+ },
+ {
+ "content": "Everything",
+ "boundingBox": [255,446,371,441,372,490,256,494],
+ "confidence": 0.97,
+ "span": {"offset":54,"length":10}
+ },
+ {
+ "content": "has",
+ "boundingBox": [380,441,421,440,421,488,381,489],
+ "confidence": 0.793,
+ "span": {"offset":65,"length":3}
+ },
+ {
+ "content": "its",
+ "boundingBox": [430,440,471,439,471,487,431,488],
+ "confidence": 0.998,
+ "span": {"offset":69,"length":3}
+ },
+ {
+ "content": "beauty",
+ "boundingBox": [480,439,552,439,552,485,481,487],
+ "confidence": 0.296,
+ "span": {"offset":73,"length":6}
+ },
+ {
+ "content": ",",
+ "boundingBox": [561,439,571,439,571,485,562,485],
+ "confidence": 0.742,
+ "span": {"offset":80,"length":1}
+ },
+ {
+ "content": "but",
+ "boundingBox": [580,439,636,439,636,485,580,485],
+ "confidence": 0.885,
+ "span": {"offset":82,"length":3}
+ },
+ {
+ "content": "not",
+ "boundingBox": [364,516,412,512,413,546,366,549],
+ "confidence": 0.994,
+ "span": {"offset":86,"length":3}
+ },
+ {
+ "content": "everyone",
+ "boundingBox": [422,511,520,504,521,540,423,545],
+ "confidence": 0.993,
+ "span": {"offset":90,"length":8}
+ },
+ {
+ "content": "sees",
+ "boundingBox": [530,503,586,500,588,538,531,540],
+ "confidence": 0.988,
+ "span": {"offset":99,"length":4}
+ },
+ {
+ "content": "it",
+ "boundingBox": [596,500,627,498,628,536,598,537],
+ "confidence": 0.998,
+ "span": {"offset":104,"length":2}
+ },
+ {
+ "content": "!",
+ "boundingBox": [634,498,657,497,659,536,635,536],
+ "confidence": 0.994,
+ "span": {"offset":107,"length":1}
+ }
+ ],
+ "spans":
+ [
+ {
+ "offset": 0,
+ "length": 108
+ }
+ ],
+ "lines":
+ [
+ {
+ "content": "You must be the change you",
+ "boundingBox": [253,267,670,262,671,307,254,318],
+ "spans": [{"offset":0,"length":26}]
+ },
+ {
+ "content": "Wish to see in the world !",
+ "boundingBox": [326,343,691,332,693,369,327,382],
+ "spans": [{"offset":27,"length":26}]
+ },
+ {
+ "content": "Everything has its beauty , but",
+ "boundingBox": [254,443,640,438,641,485,255,493],
+ "spans": [{"offset":54,"length":31}]
+ },
+ {
+ "content": "not everyone sees it !",
+ "boundingBox": [364,512,658,496,660,534,365,549],
+ "spans": [{"offset":86,"length":22}]
+ }
+ ]
+ }
+ ],
+ "styles":
+ [
+ {
+ "isHandwritten": true,
+ "spans":
+ [
+ {
+ "offset": 0,
+ "length": 26
+ }
+ ],
+ "confidence": 0.95
+ },
+ {
+ "isHandwritten": true,
+ "spans":
+ [
+ {
+ "offset": 27,
+ "length": 58
+ }
+ ],
+ "confidence": 1
+ },
+ {
+ "isHandwritten": true,
+ "spans":
+ [
+ {
+ "offset": 86,
+ "length": 22
+ }
+ ],
+ "confidence": 0.9
+ }
+ ]
+ }
+}
+```
+
+## Use the API
+
+The text reading feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Read` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"readResult"` section.
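
For illustration, a minimal sketch of the call (the preview endpoint path and api-version are assumptions; resource name, key, and image URL are placeholders):

```bash
# Read text from an image and print the concatenated text content.
curl -s -X POST "https://<your-resource>.cognitiveservices.azure.com/computervision/imageanalysis:analyze?api-version=2022-10-12-preview&features=Read" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/handwritten-note.jpg"}' | jq -r '.readResult.content'
```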
+
+## Next steps
+
+Follow the [quickstart](./quickstarts-sdk/image-analysis-client-library.md) to read text from an image using the Analyze API.
cognitive-services Concept People Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-people-detection.md
+
+ Title: People detection - Computer Vision
+
+description: Learn concepts related to the people detection feature of the Computer Vision API - usage and limits.
++++++++ Last updated : 09/12/2022+++
+# People detection (preview)
+
+Version 4.0 of Image Analysis offers the ability to detect people appearing in images. The bounding box coordinates of each detected person are returned, along with a confidence score.
+
+> [!IMPORTANT]
+> We built this model by enhancing our object detection model for person detection scenarios. People detection does not involve distinguishing one face from another face, predicting or classifying facial attributes, or creating a facial template (a unique set of numbers generated from an image that represents the distinctive features of a face).
+
+## People detection example
+
+The following JSON response illustrates what the Analyze API returns when detecting people in the example image.
+
+![Photo of a woman in a kitchen.](./Images/windows-kitchen.jpg)
+
+```json
+{
+ "metadata":
+ {
+ "width": 1260,
+ "height": 473
+ },
+ "peopleResult":
+ {
+ "values":
+ [
+ {
+ "boundingBox":
+ {
+ "x": 660,
+ "y": 0,
+ "w": 582,
+ "h": 473
+ },
+ "confidence": 0.9680353999137878
+ }
+ ]
+ }
+}
+```
+
+## Use the API
+
+The people detection feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `People` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"people"` section.
+
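+For illustration only, the following Python sketch reads the detections out of a response shaped like the sample above. The saved file name and the 0.75 confidence threshold are placeholders.
+
+```python
+import json
+
+# Hypothetical file holding a saved Analyze response like the sample above.
+with open("analyze-response.json") as f:
+    result = json.load(f)
+
+for person in result.get("peopleResult", {}).get("values", []):
+    if person["confidence"] < 0.75:  # arbitrary illustrative threshold
+        continue
+    box = person["boundingBox"]
+    print(f"Person at ({box['x']}, {box['y']}), size {box['w']}x{box['h']} px, "
+          f"confidence {person['confidence']:.2f}")
+```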
+## Next steps
+
+Learn the related concept of [Face detection](concept-face-detection.md).
cognitive-services Concept Tagging Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-tagging-images.md
Previously updated : 06/13/2022 Last updated : 09/20/2022 -+ # Image tagging
The following JSON response illustrates what Computer Vision returns when taggin
![A blue house and the front yard](./Images/house_yard.png).
+#### [Version 3.2](#tab/3-2)
+
+```json
+{
+ "tags":[
+ {
+ "name":"grass",
+ "confidence":0.9960499405860901
+ },
+ {
+ "name":"outdoor",
+ "confidence":0.9956876635551453
+ },
+ {
+ "name":"building",
+ "confidence":0.9893627166748047
+ },
+ {
+ "name":"property",
+ "confidence":0.9853052496910095
+ },
+ {
+ "name":"plant",
+ "confidence":0.9791355133056641
+ },
+ {
+ "name":"sky",
+ "confidence":0.9764555096626282
+ },
+ {
+ "name":"home",
+ "confidence":0.9732913970947266
+ },
+ {
+ "name":"house",
+ "confidence":0.9726772904396057
+ },
+ {
+ "name":"real estate",
+ "confidence":0.972320556640625
+ },
+ {
+ "name":"yard",
+ "confidence":0.9480282068252563
+ },
+ {
+ "name":"siding",
+ "confidence":0.945357620716095
+ },
+ {
+ "name":"porch",
+ "confidence":0.9410697221755981
+ },
+ {
+ "name":"cottage",
+ "confidence":0.9143695831298828
+ },
+ {
+ "name":"tree",
+ "confidence":0.9111741185188293
+ },
+ {
+ "name":"farmhouse",
+ "confidence":0.8988939523696899
+ },
+ {
+ "name":"window",
+ "confidence":0.894851565361023
+ },
+ {
+ "name":"lawn",
+ "confidence":0.8940501809120178
+ },
+ {
+ "name":"backyard",
+ "confidence":0.8931854963302612
+ },
+ {
+ "name":"garden buildings",
+ "confidence":0.885913610458374
+ },
+ {
+ "name":"roof",
+ "confidence":0.8695329427719116
+ },
+ {
+ "name":"driveway",
+ "confidence":0.8670971393585205
+ },
+ {
+ "name":"land lot",
+ "confidence":0.8564285039901733
+ },
+ {
+ "name":"landscaping",
+ "confidence":0.8540750741958618
+ }
+ ],
+ "requestId":"d60ac02b-966d-4f62-bc24-fbb1fec8bd5d",
+ "metadata":{
+ "height":200,
+ "width":300,
+ "format":"Png"
+ },
+ "modelVersion":"2021-05-01"
+}
+```
+
+#### [Version 4.0](#tab/4-0)
+ ```json {
- "tags": [
- {
- "name": "grass",
- "confidence": 0.9999995231628418
- },
- {
- "name": "outdoor",
- "confidence": 0.99992108345031738
- },
- {
- "name": "house",
- "confidence": 0.99685388803482056
- },
- {
- "name": "sky",
- "confidence": 0.99532157182693481
- },
- {
- "name": "building",
- "confidence": 0.99436837434768677
- },
- {
- "name": "tree",
- "confidence": 0.98880356550216675
- },
- {
- "name": "lawn",
- "confidence": 0.788884699344635
- },
- {
- "name": "green",
- "confidence": 0.71250593662261963
- },
- {
- "name": "residential",
- "confidence": 0.70859086513519287
- },
- {
- "name": "grassy",
- "confidence": 0.46624681353569031
- }
- ],
- "requestId": "06f39352-e445-42dc-96fb-0a1288ad9cf1",
- "metadata": {
- "height": 200,
+ "metadata":
+ {
"width": 300,
- "format": "Jpeg"
+ "height": 200
+ },
+ "tagsResult":
+ {
+ "values":
+ [
+ {
+ "name": "grass",
+ "confidence": 0.9960499405860901
+ },
+ {
+ "name": "outdoor",
+ "confidence": 0.9956876635551453
+ },
+ {
+ "name": "building",
+ "confidence": 0.9893627166748047
+ },
+ {
+ "name": "property",
+ "confidence": 0.9853052496910095
+ },
+ {
+ "name": "plant",
+ "confidence": 0.9791355729103088
+ },
+ {
+ "name": "sky",
+ "confidence": 0.976455569267273
+ },
+ {
+ "name": "home",
+ "confidence": 0.9732913374900818
+ },
+ {
+ "name": "house",
+ "confidence": 0.9726771116256714
+ },
+ {
+ "name": "real estate",
+ "confidence": 0.972320556640625
+ },
+ {
+ "name": "yard",
+ "confidence": 0.9480281472206116
+ },
+ {
+ "name": "siding",
+ "confidence": 0.945357620716095
+ },
+ {
+ "name": "porch",
+ "confidence": 0.9410697221755981
+ },
+ {
+ "name": "cottage",
+ "confidence": 0.9143695831298828
+ },
+ {
+ "name": "tree",
+ "confidence": 0.9111745357513428
+ },
+ {
+ "name": "farmhouse",
+ "confidence": 0.8988940119743347
+ },
+ {
+ "name": "window",
+ "confidence": 0.894851803779602
+ },
+ {
+ "name": "lawn",
+ "confidence": 0.894050121307373
+ },
+ {
+ "name": "backyard",
+ "confidence": 0.8931854963302612
+ },
+ {
+ "name": "garden buildings",
+ "confidence": 0.8859137296676636
+ },
+ {
+ "name": "roof",
+ "confidence": 0.8695330619812012
+ },
+ {
+ "name": "driveway",
+ "confidence": 0.8670969009399414
+ },
+ {
+ "name": "land lot",
+ "confidence": 0.856428861618042
+ },
+ {
+ "name": "landscaping",
+ "confidence": 0.8540748357772827
+ }
+ ]
} } ```+ ## Use the API
-The tagging feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Tags` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"tags"` section.
+#### [Version 3.2](#tab/3-2)
+
+The tagging feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Tags` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"tags"` section.
+
+#### [Version 4.0](#tab/4-0)
+
+The tagging feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Tags` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"tags"` section.
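+
+For illustration, the following Python sketch uploads a local image as binary data and prints the returned tags. The endpoint, key, and file name are placeholders, and the request shape is an assumption modeled on the populated URL shown in the how-to guide; verify the path, parameter names, and body format against the [Analyze Image](https://aka.ms/vision-4-0-ref) reference.
+
+```python
+import requests
+
+endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"  # placeholder
+key = "<your-key>"                                                     # placeholder
+
+# Assumption: binary uploads use an octet-stream body, as in other Analyze
+# Image versions; check the 4.0 preview reference for your API version.
+with open("house_yard.png", "rb") as image_data:  # placeholder local image
+    response = requests.post(
+        f"{endpoint}/computervision/imageanalysis:analyze",
+        params={"api-version": "2022-10-12-preview", "features": "Tags"},
+        headers={"Ocp-Apim-Subscription-Key": key,
+                 "Content-Type": "application/octet-stream"},
+        data=image_data,
+    )
+response.raise_for_status()
+
+for tag in response.json().get("tagsResult", {}).get("values", []):
+    print(f"{tag['name']}: {tag['confidence']:.3f}")
+```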
++ * [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
cognitive-services Call Analyze Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image.md
The Analyze API gives you access to all of the service's image analysis features
#### [REST](#tab/rest)
-You can specify which features you want to use by setting the URL query parameters of the [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). A parameter can have multiple values, separated by commas. Each feature you specify will require more computation time, so only specify what you need.
+You can specify which features you want to use by setting the URL query parameters of the [Analyze API](https://aka.ms/vision-4-0-ref). A parameter can have multiple values, separated by commas. Each feature you specify will require more computation time, so only specify what you need.
|URL parameter | Value | Description| |||--|
-|`visualFeatures`|`Adult` | detects if the image is pornographic in nature (depicts nudity or a sex act), or is gory (depicts extreme violence or blood). Sexually suggestive content ("racy" content) is also detected.|
-|`visualFeatures`|`Brands` | detects various brands within an image, including the approximate location. The Brands argument is only available in English.|
-|`visualFeatures`|`Categories` | categorizes image content according to a taxonomy defined in documentation. This value is the default value of `visualFeatures`.|
-|`visualFeatures`|`Color` | determines the accent color, dominant color, and whether an image is black&white.|
-|`visualFeatures`|`Description` | describes the image content with a complete sentence in supported languages.|
-|`visualFeatures`|`Faces` | detects if faces are present. If present, generate coordinates, gender and age.|
-|`visualFeatures`|`ImageType` | detects if image is clip art or a line drawing.|
-|`visualFeatures`|`Objects` | detects various objects within an image, including the approximate location. The Objects argument is only available in English.|
-|`visualFeatures`|`Tags` | tags the image with a detailed list of words related to the image content.|
-|`details`| `Celebrities` | identifies celebrities if detected in the image.|
-|`details`|`Landmarks` |identifies landmarks if detected in the image.|
+|`features`|`Read` | reads the visible text in the image and outputs it as structured JSON data.|
+|`features`|`Description` | describes the image content with a complete sentence in supported languages.|
+|`features`|`SmartCrops` | finds the rectangle coordinates that would crop the image to a desired aspect ratio while preserving the area of interest.|
+|`features`|`Objects` | detects various objects within an image, including the approximate location. The Objects argument is only available in English.|
+|`features`|`Tags` | tags the image with a detailed list of words related to the image content.|
A populated URL might look like this:
-`https://{endpoint}/vision/v2.1/analyze?visualFeatures=Description,Tags&details=Celebrities`
+`https://{endpoint}/computervision/imageanalysis:analyze?api-version=2022-10-12-preview&features=Tags`
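+
+For illustration, a minimal Python sketch that calls a URL like the one above follows. The endpoint, key, and image URL are placeholders, and the `{"url": ...}` body mirrors how other Analyze Image versions accept remote images, so confirm the body format in the [API reference](https://aka.ms/vision-4-0-ref) for your version.
+
+```python
+import requests
+
+endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"  # placeholder
+key = "<your-key>"                                                     # placeholder
+
+response = requests.post(
+    f"{endpoint}/computervision/imageanalysis:analyze",
+    # Multiple features are comma-separated, as described in the table above.
+    params={"api-version": "2022-10-12-preview", "features": "Tags,Read"},
+    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
+    json={"url": "https://example.com/image.jpg"},  # placeholder public image URL
+)
+response.raise_for_status()
+analysis = response.json()  # the response format is covered later in this article
+```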
#### [C#](#tab/csharp)
The following URL query parameter specifies the language. The default value is `
A populated URL might look like this:
-`https://{endpoint}/vision/v2.1/analyze?visualFeatures=Description,Tags&details=Celebrities&language=en`
+`https://{endpoint}/computervision/imageanalysis:analyze?api-version=2022-10-12-preview&features=Tags&language=en`
#### [C#](#tab/csharp)
This section shows you how to parse the results of the API call. It includes the
The service returns a `200` HTTP response, and the body contains the returned data in the form of a JSON string. The following text is an example of a JSON response. ```json
-{
- "tags":[
- {
- "name":"outdoor",
- "score":0.976
+{
+ "metadata":
+ {
+ "width": 300,
+ "height": 200
},
- {
- "name":"bird",
- "score":0.95
+ "tagsResult":
+ {
+ "values":
+ [
+ {
+ "name": "grass",
+ "confidence": 0.9960499405860901
+ },
+ {
+ "name": "outdoor",
+ "confidence": 0.9956876635551453
+ },
+ {
+ "name": "building",
+ "confidence": 0.9893627166748047
+ },
+ {
+ "name": "property",
+ "confidence": 0.9853052496910095
+ },
+ {
+ "name": "plant",
+ "confidence": 0.9791355729103088
+ }
+ ]
}
- ],
- "description":{
- "tags":[
- "outdoor",
- "bird"
- ],
- "captions":[
- {
- "text":"partridge in a pear tree",
- "confidence":0.96
- }
- ]
- }
} ```
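+
+For illustration, the following Python sketch pulls the image metadata and the higher-confidence tags out of a parsed response shaped like the example above. The inline `result` literal repeats only a few values from that example, and the 0.9 cutoff is arbitrary.
+
+```python
+# A few values copied from the example response above, to keep the sketch self-contained.
+result = {
+    "metadata": {"width": 300, "height": 200},
+    "tagsResult": {"values": [
+        {"name": "grass", "confidence": 0.9960499405860901},
+        {"name": "outdoor", "confidence": 0.9956876635551453},
+        {"name": "plant", "confidence": 0.9791355729103088},
+    ]},
+}
+
+meta = result["metadata"]
+print(f"Image size: {meta['width']} x {meta['height']}")
+
+# Keep only reasonably confident tags; 0.9 is an arbitrary illustrative cutoff.
+confident = [t["name"] for t in result["tagsResult"]["values"] if t["confidence"] >= 0.9]
+print("Tags:", ", ".join(confident))
+```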
-See the following table for explanations of the fields in this example:
-
-Field | Type | Content
-|||
-Tags | `object` | The top-level object for an array of tags.
-tags[].Name | `string` | The keyword from the tags classifier.
-tags[].Score | `number` | The confidence score, between 0 and 1.
-description | `object` | The top-level object for an image description.
-description.tags[] | `string` | The list of tags. If there is insufficient confidence in the ability to produce a caption, the tags might be the only information available to the caller.
-description.captions[].text | `string` | A phrase describing the image.
-description.captions[].confidence | `number` | The confidence score for the phrase.
- ### Error codes See the following list of possible errors and their causes:
The following code calls the Image Analysis API and prints the results to the co
## Next steps * Explore the [concept articles](../concept-object-detection.md) to learn more about each feature.
-* See the [API reference](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) to learn more about the API functionality.
+* See the [API reference](https://aka.ms/vision-4-0-ref) to learn more about the API functionality.
cognitive-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-read-api.md
+ Last updated 06/13/2022
-# Call the Read API
+# Call the Computer Vision 3.2 GA Read API
-In this guide, you'll learn how to call the Read API to extract text from images. You'll learn the different ways you can configure the behavior of this API to meet your needs.
+In this guide, you'll learn how to call the v3.2 GA Read API to extract text from images. You'll learn the different ways you can configure the behavior of this API to meet your needs. This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource</a> and obtained a key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/client-library.md) to get started.
-This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">create a Computer Vision resource </a> and obtained a key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/client-library.md) to get started.
+
+## Input requirements
+
+The **Read** call takes images and documents as its input. They have the following requirements:
+
+* Supported file formats: JPEG, PNG, BMP, PDF, and TIFF
+* For PDF and TIFF files, up to 2000 pages (only the first two pages for the free tier) are processed.
+* The file size of images must be less than 500 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels. PDF files do not have a size limit.
+* The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 image. This corresponds to about 8-point font text at 150 DPI.
## Determine how to process the data (optional)
When using the Read operation, use the following values for the optional `model-
| Not provided | Latest GA model | | latest | Latest GA model| | [2022-04-30](../whats-new.md#may-2022) | Latest GA model. 164 languages for print text and 9 languages for handwritten text along with several enhancements on quality and performance |
-| [2022-01-30-preview](../whats-new.md#february-2022) | Preview model adds print text support for Hindi, Arabic and related languages. For handwriitten text, adds support for Japanese and Korean. |
-| [2021-09-30-preview](../whats-new.md#september-2021) | Preview model adds print text support for Russian and other Cyrillic languages, For handwriitten text, adds support for Chinese Simplified, French, German, Italian, Portuguese, and Spanish. |
+| [2022-01-30-preview](../whats-new.md#february-2022) | Preview model adds print text support for Hindi, Arabic and related languages. For handwritten text, adds support for Japanese and Korean. |
+| [2021-09-30-preview](../whats-new.md#september-2021) | Preview model adds print text support for Russian and other Cyrillic languages. For handwritten text, adds support for Chinese Simplified, French, German, Italian, Portuguese, and Spanish. |
| 2021-04-12 | 2021 GA model | ### Input language
cognitive-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
Last updated 06/13/2022-+ # What is Spatial Analysis?
Try out the capabilities of Spatial Analysis quickly and easily in your browser
<!--This documentation contains the following types of articles: * The [quickstarts](./quickstarts-sdk/analyze-image-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time. * The [how-to guides](./how-to/call-analyze-image.md) contain instructions for using the service in more specific or customized ways.
-* The [conceptual articles](tbd) provide in-depth explanations of the service's functionality and features.
+* The [conceptual articles]() provide in-depth explanations of the service's functionality and features.
* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.--> ## What it does
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/language-support.md
+ Last updated 05/02/2022
Some capabilities of Computer Vision support multiple languages; any capabilitie
## Optical Character Recognition (OCR)
-The Computer Vision [Read API](./overview-ocr.md#read-api) supports many languages. The `Read` API can extract text from images and documents with mixed languages, including from the same text line, without requiring a language parameter.
+The Computer Vision [Read API](./overview-ocr.md) supports many languages. The `Read` API can extract text from images and documents with mixed languages, including from the same text line, without requiring a language parameter.
> [!NOTE] > **Language code optional**
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
+ Last updated 06/13/2022
keywords: computer vision, computer vision applications, computer vision service
The Computer Vision Image Analysis service can extract a wide variety of visual features from your images. For example, it can determine whether an image contains adult content, find specific brands or objects, or find human faces.
-You can use Image Analysis through a client library SDK or by calling the REST API directly. Follow the quickstart to get started.
+The latest version of Image Analysis, 4.0, which is now in public preview, has new features like synchronous OCR and people detection. We recommend you use this version going forward.
+
+You can use Image Analysis through a client library SDK or by calling the [REST API](https://aka.ms/vision-4-0-ref) directly. Follow the [quickstart](quickstarts-sdk/image-analysis-client-library.md) to get started.
> [!div class="nextstepaction"] > [Quickstart](quickstarts-sdk/image-analysis-client-library.md)
Use domain models to detect and identify domain-specific content in an image, su
Analyze color usage within an image. Computer Vision can determine whether an image is black & white or color and, for color images, identify the dominant and accent colors. [Detect the color scheme](concept-detecting-color-schemes.md) -- ### Generate a thumbnail Analyze the contents of an image to generate an appropriate thumbnail for that image. Computer Vision first generates a high-quality thumbnail and then analyzes the objects within the image to determine the *area of interest*. Computer Vision then crops the image to fit the requirements of the area of interest. The generated thumbnail can be presented using an aspect ratio that is different from the aspect ratio of the original image, depending on your needs. [Generate a thumbnail](concept-generating-thumbnails.md)
Analyze the contents of an image to return the coordinates of the *area of inter
You can use Computer Vision to [detect adult content](concept-detecting-adult-content.md) in an image and return confidence scores for different classifications. The threshold for flagging content can be set on a sliding scale to accommodate your preferences.
+### Read text in images (preview)
+
+Version 4.0 of Image Analysis offers the ability to extract text from images. Contextual information like line number and position is also returned. Text reading is also available through the main [OCR service](overview-ocr.md), but in Image Analysis this feature is optimized for image inputs as opposed to documents. [Reading text in images](concept-ocr.md)
+
+### Detect people in images (preview)
+
+Version 4.0 of Image Analysis offers the ability to detect people appearing in images. The bounding box coordinates of each detected person are returned, along with a confidence score. [People detection](concept-people-detection.md)
+ ## Image requirements
+#### [Version 3.2](#tab/3-2)
+ Image Analysis works on images that meet the following requirements: - The image must be presented in JPEG, PNG, GIF, or BMP format - The file size of the image must be less than 4 megabytes (MB)-- The dimensions of the image must be greater than 50 x 50 pixels
+- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels
+
+#### [Version 4.0](#tab/4-0)
+
+Image Analysis works on images that meet the following requirements:
+
+- The image must be presented in JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, or MPO format
+- The file size of the image must be less than 20 megabytes (MB)
+- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels
++ ## Data privacy and security
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
Previously updated : 06/13/2022 Last updated : 09/23/2022 -+ # What is Optical character recognition?
-Optical character recognition (OCR) allows you to extract printed or handwritten text from images, such as photos of street signs and products, as well as from documents&mdash;invoices, bills, financial reports, articles, and more. Microsoft's OCR technologies support extracting printed text in [several languages](./language-support.md).
+Optical character recognition (OCR) allows you to extract printed or handwritten text from images, such as posters, street signs and product labels, as well as from documents like articles, reports, forms, and invoices. Microsoft's **Read** OCR technology is built on top of multiple deep learning models supported by universal script-based models for global language support. These models extract printed and handwritten text in [several languages](./language-support.md), including mixed languages and writing styles. **Read** is available as a cloud service and as an on-premises container for deployment flexibility. With the latest preview, it's also available as a synchronous API for single, non-document, image-only scenarios with performance enhancements that make it easier to implement OCR-assisted user experiences.
-Follow a [quickstart](./quickstarts-sdk/client-library.md) to get started with the REST API or a client SDK. Or, try out the capabilities of OCR quickly and easily in your browser using Vision Studio.
+## How is OCR related to intelligent document processing (IDP)?
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-![OCR demos](./Images/ocr-demo.gif)
-
-This documentation contains the following types of articles:
-* The [quickstarts](./quickstarts-sdk/client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./how-to/call-read-api.md) contain instructions for using the service in more specific or customized ways.
-<!--* The [conceptual articles](how-to/call-read-api.md) provide in-depth explanations of the service's functionality and features.
-* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. -->
+OCR typically refers to the foundational technology focusing on extracting text, while delegating the extraction of structure, relationships, key-values, entities, and other document-centric insights to an intelligent document processing service like [Form Recognizer](../../applied-ai-services/form-recognizer/overview.md). Form Recognizer includes a document-optimized version of **Read** as its OCR engine while delegating to other models for higher-end insights. If you're extracting text from scanned and digital documents, use [Form Recognizer Read OCR](../../applied-ai-services/form-recognizer/concept-read.md).
-## Read API
-The Computer Vision [Read API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) is Azure's latest OCR technology ([learn what's new](./whats-new.md)) that extracts printed text (in several languages), handwritten text (in several languages), digits, and currency symbols from images and multi-page PDF documents. It's optimized to extract text from text-heavy images and multi-page PDF documents with mixed languages. It supports extracting both printed and handwritten text in the same image or document.
+## Start with Vision Studio
-![How OCR extracts text from images and documents.](./Images/how-ocr-works.svg)
+Try out OCR by using Vision Studio.
-## Input requirements
-
-The **Read** call takes images and documents as its input. They have the following requirements:
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-* Supported file formats: JPEG, PNG, BMP, PDF, and TIFF
-* For PDF and TIFF files, up to 2000 pages (only the first two pages for the free tier) are processed.
-* The file size of images must be less than 500 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels. PDF files do not have a dimensions limit.
-* The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 image. This corresponds to about 8 font point text at 150 DPI.
## Supported languages
-The Read API latest generally available (GA) model supports 164 languages for print text and 9 languages for handwritten text.
-
-OCR for print text includes support for English, French, German, Italian, Portuguese, Spanish, Chinese, Japanese, Korean, Russian, Arabic, Hindi, and other international languages that use Latin, Cyrillic, Arabic, and Devanagari scripts.
-OCR for handwritten text includes support for English, Chinese Simplified, French, German, Italian, Japanese, Korean, Portuguese, Spanish languages.
+Both **Read** versions available today in Computer Vision support several languages for printed and handwritten text. OCR for printed text includes support for English, French, German, Italian, Portuguese, Spanish, Chinese, Japanese, Korean, Russian, Arabic, Hindi, and other international languages that use Latin, Cyrillic, Arabic, and Devanagari scripts. OCR for handwritten text includes support for English, Chinese Simplified, French, German, Italian, Japanese, Korean, Portuguese, and Spanish languages.
-See [How to specify the model version](./how-to/call-read-api.md#determine-how-to-process-the-data-optional) to use the preview languages and features. Refer to the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
+Refer to the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
-## Key features
+## Read OCR common features
-The Read API includes the following features.
+The Read OCR model is available in Computer Vision and Form Recognizer with common baseline capabilities while optimizing for respective scenarios. The following list summarizes the common features:
-* Print text extraction in 164 languages
-* Handwritten text extraction in nine languages
-* Text lines and words with location and confidence scores
-* No language identification required
+* Printed and handwritten text extraction in supported languages
+* Pages, text lines and words with location and confidence scores
* Support for mixed languages, mixed mode (print and handwritten)
-* Select pages and page ranges from large, multi-page documents
-* Natural reading order option for text line output (Latin only)
-* Handwriting classification for text lines (Latin only)
* Available as Distroless Docker container for on-premises deployment
-Learn [how to use the OCR features](./how-to/call-read-api.md).
+## Use the cloud APIs or deploy on-premises
-## Use the cloud API or deploy on-premises
-The Read 3.x cloud APIs are the preferred option for most customers because of ease of integration and fast productivity out of the box. Azure and the Computer Vision service handle scale, performance, data security, and compliance needs while you focus on meeting your customers' needs.
+The cloud APIs are the preferred option for most customers because of their ease of integration and fast productivity out of the box. Azure and the Computer Vision service handle scale, performance, data security, and compliance needs while you focus on meeting your customers' needs.
-For on-premises deployment, the [Read Docker container (preview)](./computer-vision-how-to-install-containers.md) enables you to deploy the new OCR capabilities in your own local environment. Containers are great for specific security and data governance requirements.
+For on-premises deployment, the [Read Docker container (preview)](./computer-vision-how-to-install-containers.md) enables you to deploy the Computer Vision v3.2 generally available OCR capabilities in your own local environment. Containers are great for specific security and data governance requirements.
> [!WARNING]
-> The Computer Vision [RecognizeText](https://westus.dev.cognitive.microsoft.com/docs/services/5cd27ec07268f6c679a3e641/operations/587f2c6a1540550560080311) and [ocr](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) operations are no longer maintained, and are in the process of being deprecated in favor of the new [Read API](#read-api) covered in this article. Existing customers should [transition to using Read operations](upgrade-api-versions.md).
+> The Computer Vision [ocr](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) and [RecognizeText](https://westus.dev.cognitive.microsoft.com/docs/services/5cd27ec07268f6c679a3e641/operations/587f2c6a1540550560080311) operations are no longer supported and should not be used.
## Data privacy and security
As with all of the Cognitive Services, developers using the Computer Vision serv
## Next steps -- Get started with the [OCR (Read) REST API or client library quickstarts](./quickstarts-sdk/client-library.md).-- Learn about the [Read 3.2 REST API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005).
+- For general (non-document) images, try the [Computer Vision 4.0 preview Image Analysis REST API quickstart](./concept-ocr.md).
+- For PDF, Office and HTML documents and document images, start with [Form Recognizer Read](../../applied-ai-services/form-recognizer/concept-read.md).
+- Looking for the previous GA version? Refer to the [Computer Vision 3.2 GA SDK or REST API quickstarts](./quickstarts-sdk/client-library.md).
cognitive-services Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/client-library.md
Last updated 06/13/2022 ms.devlang: csharp, golang, java, javascript, python-+ zone_pivot_groups: programming-languages-ocr keywords: computer vision, computer vision service
-# Quickstart: Optical character recognition (OCR)
+# Quickstart: Computer Vision v3.2 GA Read
+ Get started with the Computer Vision Read REST API or client libraries. The Read API provides you with AI algorithms for extracting text from images and returning it as structured strings. Follow these steps to install a package to your application and try out the sample code for basic tasks.
Get started with the Computer Vision Read REST API or client libraries. The Read
[!INCLUDE [Vision Studio quickstart](../includes/ocr-studio-quickstart.md)]
cognitive-services Vehicle Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/vehicle-analysis.md
+
+ Title: Configure vehicle analysis containers
+
+description: Vehicle analysis provides each container with a common configuration framework, so that you can easily configure and manage compute, AI insight egress, logging, and security settings.
+++++++ Last updated : 09/28/2022+++
+# Install and run vehicle analysis (preview)
+
+Vehicle analysis is a set of capabilities that, when used with the Spatial Analysis container, enable you to analyze real-time streaming video to understand vehicle characteristics and placement. In this article, you'll learn how to use the capabilities of the spatial analysis container to deploy vehicle analysis operations.
+
+## Prerequisites
+
+* To utilize the operations of vehicle analysis, you must first follow the steps to [install and run spatial analysis container](./spatial-analysis-container.md) including configuring your host machine, downloading and configuring your [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) file, executing the deployment, and setting up device [logging](spatial-analysis-logging.md).
+ * When you configure your [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) file, refer to the steps below to add the graph configurations for vehicle analysis to your manifest prior to deploying the container. Or, once the spatial analysis container is up and running, you may add the graph configurations and follow the steps to redeploy. The steps below will outline how to properly configure your container.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+> [!NOTE]
+> Make sure that the edge device has at least 50GB disk space available before deploying the Spatial Analysis module.
+
+## Vehicle analysis operations
+
+Similar to Spatial Analysis, vehicle analysis enables the analysis of real-time streaming video from camera devices. For each camera device you configure, the operations for vehicle analysis will generate an output stream of JSON messages that are being sent to your instance of Azure IoT Hub.
+
+The following operations for vehicle analysis are available in the current Spatial Analysis container. Vehicle analysis offers operations optimized for both GPU and CPU (CPU operations include the *.cpu* distinction).
+
+| Operation identifier | Description |
+| -- | - |
+| **cognitiveservices.vision.vehicleanalysis-vehiclecount-preview** and **cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview** | Counts vehicles parked in a designated zone in the camera's field of view. </br> Emits an initial _vehicleCountEvent_ event and then _vehicleCountEvent_ events when the count changes. |
+| **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview** and **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview** | Identifies when a vehicle parks in a designated parking region in the camera's field of view. </br> Emits a _vehicleInPolygonEvent_ event when the vehicle is parked inside a parking space. |
+
+In addition to exposing the vehicle location, other estimated attributes for **cognitiveservices.vision.vehicleanalysis-vehiclecount-preview**, **cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview**, **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview** and **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview** include vehicle color and vehicle type. All of the possible values for these attributes are found in the output section (below).
+
+### Operation parameters for vehicle analysis
+
+The following table shows the parameters required by each of the vehicle analysis operations. Many are shared with Spatial Analysis; the only one not shared is the `PARKING_REGIONS` setting. The full list of Spatial Analysis operation parameters can be found in the [Spatial Analysis container](/azure/cognitive-services/computer-vision/spatial-analysis-container?tabs=azure-stack-edge#iot-deployment-manifest) guide.
+
+| Operation parameters| Description|
+|||
+| Operation ID | The Operation Identifier from table above.|
+| enabled | Boolean: true or false|
+| VIDEO_URL| The RTSP URL for the camera device (for example: `rtsp://username:password@url`). Spatial Analysis supports H.264 encoded streams either through RTSP, HTTP, or MP4. |
+| VIDEO_SOURCE_ID | A friendly name for the camera device or video stream. This will be returned with the event JSON output.|
+| VIDEO_IS_LIVE| True for camera devices; false for recorded videos.|
+| VIDEO_DECODE_GPU_INDEX| Index specifying which GPU will decode the video frame. By default it is 0. This should be the same as the `gpu_index` in other node configurations like `VICA_NODE_CONFIG`, `DETECTOR_NODE_CONFIG`.|
+| PARKING_REGIONS | JSON configuration for zone and line as outlined below. </br> PARKING_REGIONS must contain four points in normalized coordinates ([0, 1]) that define a convex region (the points follow a clockwise or counterclockwise order).|
+| EVENT_OUTPUT_MODE | Can be ON_INPUT_RATE or ON_CHANGE. ON_INPUT_RATE will generate an output on every single frame received (one FPS). ON_CHANGE will generate an output when something changes (number of vehicles or parking spot occupancy). |
+| PARKING_SPOT_METHOD | Can be BOX or PROJECTION. BOX will use an overlap between the detected bounding box and a reference bounding box. PROJECTION will project the centroid point into the parking spot polygon drawn on the floor. This parameter is used only for the parking spot (vehicle in polygon) operations and can be omitted otherwise.|
+
+This is an example of a valid `PARKING_REGIONS` configuration:
+
+```json
+"{\"parking_slot1\": {\"type\": \"SingleSpot\", \"region\": [[0.20833333, 0.46203704], [0.3015625 , 0.66203704], [0.13229167, 0.7287037 ], [0.07395833, 0.51574074]]}}"
+```
+
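+If you build this value programmatically, the escaped quotes come from serializing the zone definition to a JSON string and then embedding that string in the (also JSON) deployment manifest. Here is a minimal Python sketch of that double serialization:
+
+```python
+import json
+
+# The zone definition as a plain dictionary (normalized coordinates).
+zones = {
+    "parking_slot1": {
+        "type": "SingleSpot",
+        "region": [
+            [0.20833333, 0.46203704],
+            [0.3015625, 0.66203704],
+            [0.13229167, 0.7287037],
+            [0.07395833, 0.51574074],
+        ],
+    }
+}
+
+# First dump: the zone definition becomes the string value of PARKING_REGIONS.
+parameters = {"PARKING_REGIONS": json.dumps(zones)}
+
+# Second dump: embedding that string in the manifest JSON escapes the inner
+# quotes, producing the \" sequences shown in the example above.
+print(json.dumps(parameters, indent=2))
+```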
+### Zone configuration for cognitiveservices.vision.vehicleanalysis-vehiclecount-preview and cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview
+
+This is an example of a JSON input for the `PARKING_REGIONS` parameter that configures a zone. You may configure multiple zones for this operation.
+
+```json
+{
+ "zone1": {
+    "type": "Queue",
+    "region": [[x1, y1], [x2, y2], [x3, y3], [x4, y4]]
+ }
+}
+```
+
+| Name | Type| Description|
+||||
+| `zones` | dictionary | Keys are the zone names and the values are a field with type and region.|
+| `name` | string| Friendly name for this zone.|
+| `region` | list| Each value pair represents the x,y coordinates for a vertex of the polygon. The polygon represents the areas in which vehicles are tracked or counted. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x,y values, multiply these values by the frame size.|
+| `type` | string| For **cognitiveservices.vision.vehicleanalysis-vehiclecount** this should be "Queue".|
++
+### Zone configuration for cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview and cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview
+
+This is an example of a JSON input for the `PARKING_REGIONS` parameter that configures a zone. You may configure multiple zones for this operation.
+
+```json
+{
+ "zone1": {
+    "type": "SingleSpot",
+    "region": [[x1, y1], [x2, y2], [x3, y3], [x4, y4]]
+ }
+}
+```
+
+| Name | Type| Description|
+||||
+| `zones` | dictionary | Keys are the zone names and the values are a field with type and region.|
+| `name` | string| Friendly name for this zone.|
+| `region` | list| Each value pair represents the x,y coordinates for a vertex of the polygon. The polygon represents the areas in which vehicles are tracked or counted. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x,y values, multiply these values by the frame size.|
+| `type` | string| For **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview** and **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview** this should be "SingleSpot".|
+
+## Configuring the vehicle analysis operations
+
+You must configure the graphs for vehicle analysis in your [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) file to enable the vehicle analysis operations. Below are sample graphs for vehicle analysis. You can add these JSON snippets to your deployment manifest in the "graphs" configuration section, configure the parameters for your video stream, and deploy the module. If you only intend to utilize the vehicle analysis capabilities, you may replace the existing graphs in the [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) with the vehicle analysis graphs.
+
+Below is the graph optimized for the **vehicle count** operation.
+
+```json
+"vehiclecount": {
+ "operationId": "cognitiveservices.vision.vehicleanalysis-vehiclecount-preview",
+ "version": 1,
+ "enabled": true,
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL here>",
+ "VIDEO_SOURCE_ID": "vehiclecountgraph",
+ "VIDEO_DECODE_GPU_INDEX": 0,
+ "VIDEO_IS_LIVE":true,
+        "PARKING_REGIONS": "{\"1\": {\"type\": \"Queue\", \"region\": [[0.20833333, 0.46203704], [0.3015625 , 0.66203704], [0.13229167, 0.7287037 ], [0.07395833, 0.51574074]]}}"
+ }
+}
+```
+
+Below is the graph optimized for the **vehicle in polygon** operation, utilized for vehicles in a parking spot.
+
+```json
+"vehicleinpolygon": {
+ "operationId": "cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview",
+ "version": 1,
+ "enabled": true,
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL here>",
+        "VIDEO_SOURCE_ID": "vehicleinpolygon",
+ "VIDEO_DECODE_GPU_INDEX": 0,
+ "VIDEO_IS_LIVE":true,
+        "PARKING_REGIONS": "{\"1\": {\"type\": \"SingleSpot\", \"region\": [[0.20833333, 0.46203704], [0.3015625 , 0.66203704], [0.13229167, 0.7287037 ], [0.07395833, 0.51574074]]}}"
+ }
+}
+```
+
+## Sample cognitiveservices.vision.vehicleanalysis-vehiclecount-preview and cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview output
+
+The JSON below demonstrates an example of the vehicle count operation graph output.
+
+```json
+{
+ "events": [
+ {
+ "id": "95144671-15f7-3816-95cf-2b62f7ff078b",
+ "type": "vehicleCountEvent",
+ "detectionIds": [
+ "249acdb1-65a0-4aa4-9403-319c39e94817",
+ "200900c1-9248-4487-8fed-4b0c48c0b78d"
+ ],
+ "properties": {
+ "@type": "type.googleapis.com/microsoft.rtcv.insights.VehicleCountEventMetadata",
+ "detectionCount": 3
+ },
+ "zone": "1",
+ "trigger": ""
+ }
+ ],
+ "sourceInfo": {
+ "id": "vehiclecountTest",
+ "timestamp": "2022-09-21T19:31:05.558Z",
+ "frameId": "4",
+ "width": 0,
+ "height": 0,
+ "imagePath": ""
+ },
+ "detections": [
+ {
+ "type": "vehicle",
+ "id": "200900c1-9248-4487-8fed-4b0c48c0b78d",
+ "region": {
+ "type": "RECTANGLE",
+ "points": [
+ {
+ "x": 0.5962499976158142,
+ "y": 0.46250003576278687,
+ "visible": false,
+ "label": ""
+ },
+ {
+ "x": 0.7544531226158142,
+ "y": 0.64000004529953,
+ "visible": false,
+ "label": ""
+ }
+ ],
+ "name": "",
+ "normalizationType": "UNSPECIFIED_NORMALIZATION"
+ },
+ "confidence": 0.9934938549995422,
+ "attributes": [
+ {
+ "task": "VehicleType",
+ "label": "Bicycle",
+ "confidence": 0.00012480001896619797
+ },
+ {
+ "task": "VehicleType",
+ "label": "Bus",
+ "confidence": 1.4998147889855318e-05
+ },
+ {
+ "task": "VehicleType",
+ "label": "Car",
+ "confidence": 0.9200984239578247
+ },
+ {
+ "task": "VehicleType",
+ "label": "Motorcycle",
+ "confidence": 0.0058081308379769325
+ },
+ {
+ "task": "VehicleType",
+ "label": "Pickup_Truck",
+ "confidence": 0.0001521655503893271
+ },
+ {
+ "task": "VehicleType",
+ "label": "SUV",
+ "confidence": 0.04790870100259781
+ },
+ {
+ "task": "VehicleType",
+ "label": "Truck",
+ "confidence": 1.346438511973247e-05
+ },
+ {
+ "task": "VehicleType",
+ "label": "Van/Minivan",
+ "confidence": 0.02388562448322773
+ },
+ {
+ "task": "VehicleType",
+ "label": "type_other",
+ "confidence": 0.0019937530159950256
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Black",
+ "confidence": 0.49258527159690857
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Blue",
+ "confidence": 0.47634875774383545
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Brown/Beige",
+ "confidence": 0.007451261859387159
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Green",
+ "confidence": 0.0002614705008454621
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Grey",
+ "confidence": 0.0005819533253088593
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Red",
+ "confidence": 0.0026496786158531904
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Silver",
+ "confidence": 0.012039118446409702
+ },
+ {
+ "task": "VehicleColor",
+ "label": "White",
+ "confidence": 0.007863214239478111
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Yellow/Gold",
+ "confidence": 4.345366687630303e-05
+ },
+ {
+ "task": "VehicleColor",
+ "label": "color_other",
+ "confidence": 0.0001758455764502287
+ }
+ ],
+ "metadata": {
+ "tracking_id": ""
+ }
+ },
+ {
+ "type": "vehicle",
+ "id": "249acdb1-65a0-4aa4-9403-319c39e94817",
+ "region": {
+ "type": "RECTANGLE",
+ "points": [
+ {
+ "x": 0.44859376549720764,
+ "y": 0.5375000238418579,
+ "visible": false,
+ "label": ""
+ },
+ {
+ "x": 0.6053906679153442,
+ "y": 0.7537500262260437,
+ "visible": false,
+ "label": ""
+ }
+ ],
+ "name": "",
+ "normalizationType": "UNSPECIFIED_NORMALIZATION"
+ },
+ "confidence": 0.9893689751625061,
+ "attributes": [
+ {
+ "task": "VehicleType",
+ "label": "Bicycle",
+ "confidence": 0.0003215899341739714
+ },
+ {
+ "task": "VehicleType",
+ "label": "Bus",
+ "confidence": 3.258735432609683e-06
+ },
+ {
+ "task": "VehicleType",
+ "label": "Car",
+ "confidence": 0.825579047203064
+ },
+ {
+ "task": "VehicleType",
+ "label": "Motorcycle",
+ "confidence": 0.14065399765968323
+ },
+ {
+ "task": "VehicleType",
+ "label": "Pickup_Truck",
+ "confidence": 0.00044341650209389627
+ },
+ {
+ "task": "VehicleType",
+ "label": "SUV",
+ "confidence": 0.02949284389615059
+ },
+ {
+ "task": "VehicleType",
+ "label": "Truck",
+ "confidence": 1.625348158995621e-05
+ },
+ {
+ "task": "VehicleType",
+ "label": "Van/Minivan",
+ "confidence": 0.003406822681427002
+ },
+ {
+ "task": "VehicleType",
+ "label": "type_other",
+ "confidence": 8.27941985335201e-05
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Black",
+ "confidence": 2.028317430813331e-05
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Blue",
+ "confidence": 0.00022600525699090213
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Brown/Beige",
+ "confidence": 3.327144668219262e-06
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Green",
+ "confidence": 5.160827640793286e-05
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Grey",
+ "confidence": 5.614096517092548e-05
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Red",
+ "confidence": 1.0396311012073056e-07
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Silver",
+ "confidence": 0.9996315240859985
+ },
+ {
+ "task": "VehicleColor",
+ "label": "White",
+ "confidence": 1.0256461791868787e-05
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Yellow/Gold",
+ "confidence": 1.8006812751991674e-07
+ },
+ {
+ "task": "VehicleColor",
+ "label": "color_other",
+ "confidence": 5.103976263853838e-07
+ }
+ ],
+ "metadata": {
+ "tracking_id": ""
+ }
+ }
+ ],
+ "schemaVersion": "2.0"
+}
+```
+
+| Event Field Name | Type| Description|
+||||
+| `id` | string| Event ID|
+| `type` | string| Event type|
+| `detectionIds` | array| Array of unique identifiers of the vehicle detections that triggered this event|
+| `properties` | collection| Collection of values; includes `detectionCount`|
+| `zone` | string | The "name" field of the polygon that represents the zone that was crossed|
+| `trigger` | string| Not used |
+
+| Detections Field Name | Type| Description|
+||||
+| `id` | string| Detection ID|
+| `type` | string| Detection type|
+| `region` | collection| Collection of values|
+| `type` | string| Type of region|
+| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
+| `confidence` | float| Algorithm confidence|
+
+| Attribute | Type | Description |
+||||
+| `VehicleType` | float | Detected vehicle types. Possible detections include "VehicleType_Bicycle", "VehicleType_Bus", "VehicleType_Car", "VehicleType_Motorcycle", "VehicleType_Pickup_Truck", "VehicleType_SUV", "VehicleType_Truck", "VehicleType_Van/Minivan", "VehicleType_type_other" |
+| `VehicleColor` | float | Detected vehicle colors. Possible detections include "VehicleColor_Black", "VehicleColor_Blue", "VehicleColor_Brown/Beige", "VehicleColor_Green", "VehicleColor_Grey", "VehicleColor_Red", "VehicleColor_Silver", "VehicleColor_White", "VehicleColor_Yellow/Gold", "VehicleColor_color_other" |
+| `confidence` | float| Algorithm confidence|
+
+| SourceInfo Field Name | Type| Description|
+||||
+| `id` | string| Camera ID|
+| `timestamp` | date| UTC date when the JSON payload was emitted in format YYYY-MM-DDTHH:MM:SS.ssZ |
+| `width` | int | Video frame width|
+| `height` | int | Video frame height|
+| `frameId` | int | Frame identifier|
+
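+As an illustration of consuming this output, the following Python sketch takes one message shaped like the sample above and reports the vehicle count per zone along with the most likely type and color of each detection. The file name is a placeholder for however you receive messages (for example, from your Azure IoT Hub instance).
+
+```python
+import json
+
+# Hypothetical saved message shaped like the sample output above.
+with open("vehiclecount-message.json") as f:
+    message = json.load(f)
+
+for event in message.get("events", []):
+    if event["type"] == "vehicleCountEvent":
+        count = event["properties"]["detectionCount"]
+        print(f"Zone {event['zone']}: {count} vehicle(s)")
+
+def best_label(detection, task):
+    # Pick the highest-confidence attribute for a task (VehicleType or VehicleColor).
+    candidates = [a for a in detection.get("attributes", []) if a["task"] == task]
+    return max(candidates, key=lambda a: a["confidence"])["label"] if candidates else "unknown"
+
+for det in message.get("detections", []):
+    print(f"  {best_label(det, 'VehicleType')}, {best_label(det, 'VehicleColor')}, "
+          f"detection confidence {det['confidence']:.2f}")
+```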
+## Sample cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview and cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview output
+
+The JSON below demonstrates an example of the vehicle in polygon operation graph output.
+
+```json
+{
+ "events": [
+ {
+ "id": "1b812a6c-1fa3-3827-9769-7773aeae733f",
+ "type": "vehicleInPolygonEvent",
+ "detectionIds": [
+ "de4256ea-8b38-4883-9394-bdbfbfa5bd41"
+ ],
+ "properties": {
+ "@type": "type.googleapis.com/microsoft.rtcv.insights.VehicleInPolygonEventMetadata",
+ "status": "PARKED"
+ },
+ "zone": "1",
+ "trigger": ""
+ }
+ ],
+ "sourceInfo": {
+ "id": "vehicleInPolygonTest",
+ "timestamp": "2022-09-21T20:36:47.737Z",
+ "frameId": "3",
+ "width": 0,
+ "height": 0,
+ "imagePath": ""
+ },
+ "detections": [
+ {
+ "type": "vehicle",
+ "id": "de4256ea-8b38-4883-9394-bdbfbfa5bd41",
+ "region": {
+ "type": "RECTANGLE",
+ "points": [
+ {
+ "x": 0.18703125417232513,
+ "y": 0.32875001430511475,
+ "visible": false,
+ "label": ""
+ },
+ {
+ "x": 0.2650781273841858,
+ "y": 0.42500001192092896,
+ "visible": false,
+ "label": ""
+ }
+ ],
+ "name": "",
+ "normalizationType": "UNSPECIFIED_NORMALIZATION"
+ },
+ "confidence": 0.9583820700645447,
+ "attributes": [
+ {
+ "task": "VehicleType",
+ "label": "Bicycle",
+ "confidence": 0.0005135730025358498
+ },
+ {
+ "task": "VehicleType",
+ "label": "Bus",
+ "confidence": 2.502854385966202e-07
+ },
+ {
+ "task": "VehicleType",
+ "label": "Car",
+ "confidence": 0.9575894474983215
+ },
+ {
+ "task": "VehicleType",
+ "label": "Motorcycle",
+ "confidence": 0.03809007629752159
+ },
+ {
+ "task": "VehicleType",
+ "label": "Pickup_Truck",
+ "confidence": 6.314369238680229e-05
+ },
+ {
+ "task": "VehicleType",
+ "label": "SUV",
+ "confidence": 0.003204471431672573
+ },
+ {
+ "task": "VehicleType",
+ "label": "Truck",
+ "confidence": 4.916510079056025e-07
+ },
+ {
+ "task": "VehicleType",
+ "label": "Van/Minivan",
+ "confidence": 0.00029918691143393517
+ },
+ {
+ "task": "VehicleType",
+ "label": "type_other",
+ "confidence": 0.00023934587079565972
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Black",
+ "confidence": 0.7501943111419678
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Blue",
+ "confidence": 0.02153826877474785
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Brown/Beige",
+ "confidence": 0.0013857109006494284
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Green",
+ "confidence": 0.0006621106876991689
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Grey",
+ "confidence": 0.007349356077611446
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Red",
+ "confidence": 0.1460476964712143
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Silver",
+ "confidence": 0.015320491977036
+ },
+ {
+ "task": "VehicleColor",
+ "label": "White",
+ "confidence": 0.053948428481817245
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Yellow/Gold",
+ "confidence": 0.0030805091373622417
+ },
+ {
+ "task": "VehicleColor",
+ "label": "color_other",
+ "confidence": 0.0004731453664135188
+ }
+ ],
+ "metadata": {
+ "tracking_id": ""
+ }
+ }
+ ],
+ "schemaVersion": "2.0"
+}
+```
+
+| Event Field Name | Type| Description|
+||||
+| `id` | string| Event ID|
+| `type` | string| Event type|
+| `detectionIds` | array| Array of unique identifiers of the vehicle detection that triggered this event|
+| `properties` | collection| Collection of values; `status` is either PARKED or EXITED|
+| `zone` | string | The "name" field of the polygon that represents the zone that was crossed|
+| `trigger` | string| Not used |
+
+| Detections Field Name | Type| Description|
+||||
+| `id` | string| Detection ID|
+| `type` | string| Detection type|
+| `region` | collection| Collection of values|
+| `type` | string| Type of region|
+| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
+| `confidence` | float| Algorithm confidence|
+
+| Attribute | Type | Description |
+||||
+| `VehicleType` | float | Detected vehicle types. Possible detections include "VehicleType_Bicycle", "VehicleType_Bus", "VehicleType_Car", "VehicleType_Motorcycle", "VehicleType_Pickup_Truck", "VehicleType_SUV", "VehicleType_Truck", "VehicleType_Van/Minivan", "VehicleType_type_other" |
+| `VehicleColor` | float | Detected vehicle colors. Possible detections include "VehicleColor_Black", "VehicleColor_Blue", "VehicleColor_Brown/Beige", "VehicleColor_Green", "VehicleColor_Grey", "VehicleColor_Red", "VehicleColor_Silver", "VehicleColor_White", "VehicleColor_Yellow/Gold", "VehicleColor_color_other" |
+| `confidence` | float| Algorithm confidence|
+
+| SourceInfo Field Name | Type| Description|
+||||
+| `id` | string| Camera ID|
+| `timestamp` | date| UTC date when the JSON payload was emitted in format YYYY-MM-DDTHH:MM:SS.ssZ |
+| `width` | int | Video frame width|
+| `height` | int | Video frame height|
+| `frameId` | int | Frame identifier|
+
+## Zone and line configuration for vehicle analysis
+
+For guidelines on where to place your zones for vehicle analysis, you can refer to the [zone and line placement](spatial-analysis-zone-line-placement.md) guide for spatial analysis. Configuring zones for vehicle analysis can be more straightforward than zones for spatial analysis if the parking spaces are already defined in the zone which you're analyzing.
+
+## Camera placement for vehicle analysis
+
+For guidelines on where and how to place your camera for vehicle analysis, refer to the [camera placement](spatial-analysis-camera-placement.md) guide found in the spatial analysis documentation. Other limitations to consider include the height of the camera mounted in the parking lot space. When you analyze vehicle patterns, a higher vantage point is ideal to ensure that the camera's field of view is wide enough to accommodate one or more vehicles, depending on your scenario.
+
+## Billing
+
+The vehicle analysis container sends billing information to Azure, using a Computer Vision resource on your Azure account. The use of vehicle analysis in public preview is currently free.
+
+Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. You must enable the containers to communicate billing information with the billing endpoint at all times. Cognitive Services containers don't send customer data, such as the video or image that's being analyzed, to Microsoft.
+
+## Next steps
+
+* Set up a [Spatial Analysis container](spatial-analysis-container.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md
Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+## October 2022
+
+### Computer Vision Image Analysis 4.0 public preview
+
+Version 4.0 of Computer Vision has been released in public preview. The new API includes image captioning, image tagging, object detection, people detection, and Read OCR functionality, available in the same Analyze Image operation. The OCR is optimized for general, non-document images in a performance-enhanced synchronous API that makes it easier to embed OCR-powered experiences in your workflows.
+ ## September 2022 ### Computer Vision 3.0/3.1 Read previews deprecation
Computer Vision's [OCR (Read) API](overview-ocr.md) latest model with [164 suppo
* Improved processing of digital PDF documents. * Input file size limit increased 10x to 500 MB. * Performance and latency improvements.
-* Available as [cloud service](overview-ocr.md#read-api) and [Docker container](computer-vision-how-to-install-containers.md).
+* Available as [cloud service](overview-ocr.md) and [Docker container](computer-vision-how-to-install-containers.md).
See the [OCR how-to guide](how-to/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the GA model.
cognitive-services How To Custom Voice Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
You can prepare recordings of individual utterances and the matching transcript
To produce a good voice model, create the recordings in a quiet room with a high-quality microphone. Consistent volume, speaking rate, speaking pitch, and expressive mannerisms of speech are essential.
+For data format examples, refer to the sample training set on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/Sample%20Data). The sample training set includes the sample script and the associated audio files.
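If it helps to see the shape of a training set in code, here's a minimal Python sketch that writes a tab-separated transcript to pair with per-utterance recordings; the exact file naming and layout are assumptions here and should be verified against the sample training set linked above.

```python
from pathlib import Path

# Hypothetical utterance IDs and transcripts; each ID is assumed to match a
# recording such as 0001.wav in the same training set.
utterances = {
    "0001": "Welcome to Contoso customer support.",
    "0002": "How can I help you today?",
}

# Write one "file name<TAB>transcript" line per utterance, mirroring the
# layout used in the sample training set.
lines = [f"{name}\t{text}" for name, text in utterances.items()]
Path("transcript.txt").write_text("\n".join(lines) + "\n", encoding="utf-8")
```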
+ > [!TIP] > To create a voice for production use, we recommend you use a professional recording studio and voice talent. For more information, see [record voice samples to create a custom neural voice](record-custom-voice-samples.md).
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
To request syllable-level results along with phonemes, set the granularity [conf
## Phoneme alphabet format
-The phoneme name is provided together with the score, to help identity which phonemes were pronounced accurately or inaccurately. For the [supported languages](language-support.md?tabs=stt-tts), you can get the phoneme name in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) format, and for the `en-US` locale, you can also get the phoneme name in [IPA](https://en.wikipedia.org/wiki/IPA) format.
+For some locales, the phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. The phoneme name in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) format is available for the `en-GB` and `en-US` locales. The phoneme name in [IPA](https://en.wikipedia.org/wiki/IPA) format is only available for the `en-US` locale. For other locales, you can only get the phoneme score.
The following table compares example SAPI phonemes with the corresponding IPA phonemes.
using (var speechRecognizer = new SpeechRecognizer(
::: zone pivot="programming-language-cpp"
-Word, syllable, and phoneme results aren't available via SDK objects with the Speech SDK foc C++. Word, syllable, and phoneme results are only available in the JSON string.
+Word, syllable, and phoneme results aren't available via SDK objects with the Speech SDK for C++. Word, syllable, and phoneme results are only available in the JSON string.
```cpp auto speechRecognizer = SpeechRecognizer::FromConfig(
auto pronunciationAssessmentResultJson = speechRecognitionResult->Properties.Get
::: zone-end ::: zone pivot="programming-language-java"
-For Android application development, the word, syllable, and phoneme results are available via SDK objects with the Speech SDK foc Java. The results are also available in the JSON string. For Java Runtime (JRE) application development, the word, syllable, and phoneme results are only available in the JSON string.
+For Android application development, the word, syllable, and phoneme results are available via SDK objects with the Speech SDK for Java. The results are also available in the JSON string. For Java Runtime (JRE) application development, the word, syllable, and phoneme results are only available in the JSON string.
```Java SpeechRecognizer speechRecognizer = new SpeechRecognizer(
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The `say-as` element is optional. It indicates the content type, such as number
| `detail` | Indicates the level of detail to be spoken. For example, this attribute might request that the speech synthesis engine pronounce punctuation marks. There are no standard values defined for `detail`. | Optional | The following content types are supported for the `interpret-as` and `format` attributes. Include the `format` attribute only if the `format` column isn't empty in the table below. | interpret-as | format | Interpretation | | --- | --- | --- |
-| `address`| None | The text is spoken as an address. The speech synthesis engine pronounces:<br /><br />`I'm at <say-as interpret-as="address">150th CT NE, Redmond, WA</say-as>`<br /><br />As "I'm at 150th Court Northeast Redmond Washington."|
-| `cardinal`, `number` |None| The text is spoken as a cardinal number. The speech synthesis engine pronounces:<br /><br />`There are <say-as interpret-as="cardinal">3</say-as> alternatives`<br /><br />As "There are three alternatives."|
| `characters`, `spell-out` | | The text is spoken as individual letters (spelled out). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="characters">test</say-as>`<br /><br />As "T E S T." |
-| `date` | dmy, mdy, ymd, ydm, ym, my, md, dm, d, m, y | The text is spoken as a date. The `format` attribute specifies the date's format (*d=day, m=month, and y=year*). The speech synthesis engine pronounces:<br /><br />`Today is <say-as interpret-as="date" format="mdy">10-19-2016</say-as>`<br /><br />As "Today is October nineteenth two thousand sixteen." |
+| `cardinal`, `number` | None| The text is spoken as a cardinal number. The speech synthesis engine pronounces:<br /><br />`There are <say-as interpret-as="cardinal">10</say-as> options`<br /><br />As "There are ten options."|
+| `ordinal` | None | The text is spoken as an ordinal number. The speech synthesis engine pronounces:<br /><br />`Select the <say-as interpret-as="ordinal">3rd</say-as> option`<br /><br />As "Select the third option."|
| `digits`, `number_digit` | None | The text is spoken as a sequence of individual digits. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="number_digit">123456789</say-as>`<br /><br />As "1 2 3 4 5 6 7 8 9." | | `fraction` | None | The text is spoken as a fractional number. The speech synthesis engine pronounces:<br /><br /> `<say-as interpret-as="fraction">3/8</say-as> of an inch`<br /><br />As "three eighths of an inch." |
-| `ordinal` |None | The text is spoken as an ordinal number. The speech synthesis engine pronounces:<br /><br />`Select the <say-as interpret-as="ordinal">3rd</say-as> option`<br /><br />As "Select the third option."|
-| `telephone` | None | The text is spoken as a telephone number. The `format` attribute can contain digits that represent a country code. Examples are "1" for the United States or "39" for Italy. The speech synthesis engine can use this information to guide its pronunciation of a phone number. The phone number might also include the country code, and if so, takes precedence over the country code in the `format` attribute. The speech synthesis engine pronounces:<br /><br />`The number is <say-as interpret-as="telephone" format="1">(888) 555-1212</say-as>`<br /><br />As "My number is area code eight eight eight five five five one two one two." |
+| `date` | dmy, mdy, ymd, ydm, ym, my, md, dm, d, m, y | The text is spoken as a date. The `format` attribute specifies the date's format (*d=day, m=month, and y=year*). The speech synthesis engine pronounces:<br /><br />`Today is <say-as interpret-as="date" format="mdy">10-19-2016</say-as>`<br /><br />As "Today is October nineteenth two thousand sixteen." |
| `time` | hms12, hms24 | The text is spoken as a time. The `format` attribute specifies whether the time is specified by using a 12-hour clock (hms12) or a 24-hour clock (hms24). Use a colon to separate numbers representing hours, minutes, and seconds. Here are some valid time examples: 12:35, 1:14:32, 08:15, and 02:50:45. The speech synthesis engine pronounces:<br /><br />`The train departs at <say-as interpret-as="time" format="hms12">4:00am</say-as>`<br /><br />As "The train departs at four A M." | | `duration` | hms, hm, ms | The text is spoken as a duration. The `format` attribute specifies the duration's format (*h=hour, m=minute, and s=second*). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="duration">01:18:30</say-as>`<br /><br /> As "one hour eighteen minutes and thirty seconds".<br />Pronounces:<br /><br />`<say-as interpret-as="duration" format="ms">01:18</say-as>`<br /><br /> As "one minute and eighteen seconds".<br />This tag is only supported on English and Spanish. |
-| `name` |None | The text is spoken as a person's name. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="name">ED</say-as>`<br /><br />As [æd]. <br />In Chinese names, some characters pronounce differently when they appear in a family name. For example, the speech synthesis engine says 仇 in <br /><br />`<say-as interpret-as="name">仇先生</say-as>`<br /><br /> As [qiú] instead of [chóu]. |
+| `telephone` | None | The text is spoken as a telephone number. The `format` attribute can contain digits that represent a country code. Examples are "1" for the United States or "39" for Italy. The speech synthesis engine can use this information to guide its pronunciation of a phone number. The phone number might also include the country code, and if so, takes precedence over the country code in the `format` attribute. The speech synthesis engine pronounces:<br /><br />`The number is <say-as interpret-as="telephone" format="1">(888) 555-1212</say-as>`<br /><br />As "My number is area code eight eight eight five five five one two one two." |
+| `currency` | None | The text is spoken as a currency. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="currency">99.9 USD</say-as>`<br /><br />As "ninety-nine US dollars and ninety cents."|
+| `address`| None | The text is spoken as an address. The speech synthesis engine pronounces:<br /><br />`I'm at <say-as interpret-as="address">150th CT NE, Redmond, WA</say-as>`<br /><br />As "I'm at 150th Court Northeast Redmond Washington."|
+| `name` | None | The text is spoken as a person's name. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="name">ED</say-as>`<br /><br />As [æd]. <br />In Chinese names, some characters pronounce differently when they appear in a family name. For example, the speech synthesis engine says 仇 in <br /><br />`<say-as interpret-as="name">仇先生</say-as>`<br /><br /> As [qiú] instead of [chóu]. |
**Usage**
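For illustration only, here's a minimal Python Speech SDK sketch that speaks SSML using two of the `say-as` interpretations from the table above; the subscription key, region, and voice name are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; substitute your own Speech resource values.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# SSML that exercises the "date" and "ordinal" interpretations from the table.
ssml = """
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    Today is <say-as interpret-as="date" format="mdy">10-19-2016</say-as>.
    Select the <say-as interpret-as="ordinal">3rd</say-as> option.
  </voice>
</speak>
"""

result = synthesizer.speak_ssml_async(ssml).get()
print(result.reason)
```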
cognitive-services V3 0 Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-reference.md
Version 3 of the Translator provides a modern JSON-based Web API. It improves us
## Base URLs
-Requests to Translator are, in most cases, handled by the datacenter that is closest to where the request originated. If there is a datacenter failure when using the global endpoint, the request may be routed outside of the geography.
+Requests to Translator are, in most cases, handled by the datacenter that is closest to where the request originated. If there's a datacenter failure when using the global endpoint, the request may be routed outside of the geography.
To force the request to be handled within a specific geography, use the desired geographical endpoint. All requests are processed among the datacenters within the geography.
There are three headers that you can use to authenticate your subscription. This
|Headers|Description| |:-|:-|
-|Ocp-Apim-Subscription-Key|*Use with Cognitive Services subscription if you are passing your secret key*.<br/>The value is the Azure secret key for your subscription to Translator.|
-|Authorization|*Use with Cognitive Services subscription if you are passing an authentication token.*<br/>The value is the Bearer token: `Bearer <token>`.|
+|Ocp-Apim-Subscription-Key|*Use with Cognitive Services subscription if you're passing your secret key*.<br/>The value is the Azure secret key for your subscription to Translator.|
+|Authorization|*Use with Cognitive Services subscription if you're passing an authentication token.*<br/>The value is the Bearer token: `Bearer <token>`.|
|Ocp-Apim-Subscription-Region|*Use with Cognitive Services multi-service and regional translator resource.*<br/>The value is the region of the multi-service or regional translator resource. This value is optional when using a global translator resource.| ### Secret key
curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-versio
#### Authenticating with a regional resource
-When you use a [regional translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation).
-There are two headers that you need to call the Translator.
+When you use a [regional translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation),
+there are two headers that you need to call the Translator.
|Headers|Description| |:--|:-|
When you use a multi-service secret key, you must include two authentication hea
|Ocp-Apim-Subscription-Key| The value is the Azure secret key for your multi-service resource.| |Ocp-Apim-Subscription-Region| The value is the region of the multi-service resource. |
-Region is required for the multi-service Text API subscription. The region you select is the only region that you can use for text translation when using the multi-service key, and must be the same region you selected when you signed up for your multi-service subscription through the Azure portal.
+Region is required for the multi-service Text API subscription. The region you select is the only region that you can use for text translation when using the multi-service key. It must be the same region you selected when you signed up for your multi-service subscription through the Azure portal.
If you pass the secret key in the query string with the parameter `Subscription-Key`, then you must specify the region with query parameter `Subscription-Region`.
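To make the header requirements concrete, here's a minimal Python sketch of a translate call that passes the secret key and region as headers; the key and region values are placeholders.

```python
import uuid

import requests

# Placeholder values for a regional or multi-service Translator resource.
key = "<your-translator-key>"
region = "<your-resource-region>"  # for example, "westus2"

url = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "from": "en", "to": "de"}
headers = {
    "Ocp-Apim-Subscription-Key": key,
    # Required for regional and multi-service resources; optional for a global resource.
    "Ocp-Apim-Subscription-Region": region,
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),
}
body = [{"Text": "Hello, world."}]

response = requests.post(url, params=params, headers=headers, json=body)
print(response.json())
```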
An authentication token is valid for 10 minutes. The token should be reused when
|Header|Value| |:--|:-|
-|Authorization| The value is an access **bearer token** generated by Azure AD.</br><ul><li> The bearer token provides proof of authentication and validates the client's authorization to use the resource.</li><li> An authentication token is valid for 10 minutes and should be reused when making multiple calls to Translator.</br></li>*See* [Authenticating with an access token](#authenticating-with-an-access-token), above. </ul>|
+|Authorization| The value is an access **bearer token** generated by Azure AD.</br><ul><li> The bearer token provides proof of authentication and validates the client's authorization to use the resource.</li><li> An authentication token is valid for 10 minutes and should be reused when making multiple calls to Translator.</br></li>*See* [Sample request: 2. Get a token](../../authentication.md?tabs=powershell#sample-request)</ul>|
|Ocp-Apim-Subscription-Region| The value is the region of the **translator resource**.</br><ul><li> This value is optional if the resource is global.</li></ul>| |Ocp-Apim-ResourceId| The value is the Resource ID for your Translator resource instance.</br><ul><li>You'll find the Resource ID in the Azure portal at **Translator Resource → Properties**. </li><li>Resource ID format: </br>/subscriptions/<**subscriptionId**>/resourceGroups/<**resourceGroupName**>/providers/Microsoft.CognitiveServices/accounts/<**resourceName**>/</li></ul>|
The error code is a 6-digit number combining the 3-digit HTTP status code follow
| 400043| The client trace ID (ClientTraceId field or X-ClientTranceId header) is missing or invalid.| | 400050| The input text is too long. View [request limits](../request-limits.md).| | 400064| The "translation" parameter is missing or invalid.|
-| 400070| The number of target scripts (ToScript parameter) does not match the number of target languages (To parameter).|
+| 400070| The number of target scripts (ToScript parameter) doesn't match the number of target languages (To parameter).|
| 400071| The value isn't valid for TextType.| | 400072| The array of input text has too many elements.| | 400073| The script parameter isn't valid.| | 400074| The body of the request isn't valid JSON.| | 400075| The language pair and category combination isn't valid.| | 400077| The maximum request size has been exceeded. View [request limits](../request-limits.md).|
-| 400079| The custom system requested for translation between from and to language does not exist.|
+| 400079| The custom system requested for translation between from and to language doesn't exist.|
| 400080| Transliteration isn't supported for the language or script.| | 401000| The request isn't authorized because credentials are missing or invalid.| | 401015| "The credentials provided are for the Speech API. This request requires credentials for the Text API. Use a subscription to Translator."|
The error code is a 6-digit number combining the 3-digit HTTP status code follow
| 403001| The operation isn't allowed because the subscription has exceeded its free quota.| | 405000| The request method isn't supported for the requested resource.| | 408001| The translation system requested is being prepared. Retry in a few minutes.|
-| 408002| Request timed out waiting on incoming stream. The client did not produce a request within the time that the server was prepared to wait. The client may repeat the request without modifications at any later time.|
+| 408002| Request timed out waiting on incoming stream. The client didn't produce a request within the time that the server was prepared to wait. The client may repeat the request without modifications at any later time.|
| 415000| The Content-Type header is missing or invalid.| | 429000, 429001, 429002| The server rejected the request because the client has exceeded request limits.| | 500000| An unexpected error occurred. If the error persists, report it with date/time of error, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId.|
Metrics allow you to view the translator usage and availability information in A
![Translator Metrics](../media/translatormetrics.png)
-This table lists available metrics with description of how they are used to monitor translation API calls.
+This table lists the available metrics with descriptions of how they're used to monitor translation API calls.
| Metrics | Description | |:-|:--|
cognitive-services Cognitive Services For Big Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/cognitive-services-for-big-data.md
+ Last updated 10/28/2021
Cognitive Services for big data is an example of how we can integrate intelligen
- [The Azure Cognitive Services on Spark: Clusters with Embedded Intelligent Services](https://databricks.com/session/the-azure-cognitive-services-on-spark-clusters-with-embedded-intelligent-services) - [Spark Summit Keynote: Scalable AI for Good](https://databricks.com/session_eu19/scalable-ai-for-good)-- [Cognitive Services for big data in Cosmos DB](https://medius.studios.ms/Embed/Video-nc/B19-BRK3004?latestplayer=true&l=2571.208093)
+- [Cognitive Services for big data in Azure Cosmos DB](https://medius.studios.ms/Embed/Video-nc/B19-BRK3004?latestplayer=true&l=2571.208093)
- [Lightning Talk on Large Scale Intelligent Microservices](https://www.youtube.com/watch?v=BtuhmdIy9Fk&t=6s) ## Next steps
cognitive-services Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/recipes/anomaly-detection.md
Last updated 07/06/2020 ms.devlang: python-+ # Recipe: Predictive maintenance with the Cognitive Services for big data
-This recipe shows how you can use Azure Synapse Analytics and Cognitive Services on Apache Spark for predictive maintenance of IoT devices. We'll follow along with the [Cosmos DB and Synapse Link](https://github.com/Azure-Samples/cosmosdb-synapse-link-samples) sample. To keep things simple, in this recipe we'll read the data straight from a CSV file rather than getting streamed data through Cosmos DB and Synapse Link. We strongly encourage you to look over the Synapse Link sample.
+This recipe shows how you can use Azure Synapse Analytics and Cognitive Services on Apache Spark for predictive maintenance of IoT devices. We'll follow along with the [Azure Cosmos DB and Synapse Link](https://github.com/Azure-Samples/cosmosdb-synapse-link-samples) sample. To keep things simple, in this recipe we'll read the data straight from a CSV file rather than getting streamed data through Azure Cosmos DB and Synapse Link. We strongly encourage you to look over the Synapse Link sample.
## Hypothetical scenario
cognitive-services Data Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/data-limits.md
-+ Previously updated : 06/02/2022 Last updated : 10/05/2022
The following limit specifies the maximum number of characters that can be in a
| Feature | Value | |||
-| Conversation summarization | 7,000 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements).|
-| Conversation PII | 40,000 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements).|
+| Conversation issue and resolution summarization | 40,000 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements).|
| Text Analytics for health | 30,720 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| All other pre-configured features (synchronous) | 5,120 as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
+| All other pre-configured features (synchronous) | 5,120 as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). If you need to submit larger documents, consider using the feature asynchronously (described below). |
| All other pre-configured features ([asynchronous](use-asynchronously.md)) | 125,000 characters across all submitted documents, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements) (maximum of 25 documents). | If a document exceeds the character limit, the API will behave differently depending on how you're sending requests.
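As a rough illustration, the following Python sketch routes documents between the synchronous and asynchronous limits above; it uses plain code-point counts, which only approximate the text-element measure the service applies, so treat results near the limits with caution.

```python
SYNC_LIMIT = 5_120      # per document, synchronous pre-configured features
ASYNC_LIMIT = 125_000   # across all documents in one asynchronous request
MAX_ASYNC_DOCS = 25     # maximum documents per asynchronous request

def choose_mode(documents: list[str]) -> str:
    """Rough routing helper; len() counts Unicode code points, an approximation."""
    total = sum(len(doc) for doc in documents)
    if all(len(doc) <= SYNC_LIMIT for doc in documents):
        return "synchronous"
    if total <= ASYNC_LIMIT and len(documents) <= MAX_ASYNC_DOCS:
        return "asynchronous"
    return "split the input into smaller batches"

print(choose_mode(["A short review.", "Another short document."]))
```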
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/language-support.md
+
+ Title: Language support for language features
+
+description: This article explains which natural languages are supported by the different features of Azure Cognitive Service for Language.
++++++ Last updated : 10/06/2022+++
+# Language support for Language features
+
+Use this article to learn about the languages currently supported by different features.
+
+> [!NOTE]
+> Some of the languages listed below are only supported in some [model versions](../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data). See the linked feature-level language support article for details.
++
+| Language | Language code | [Custom text classification](../custom-text-classification/language-support.md) | [Custom named entity recognition(NER)](../custom-named-entity-recognition/language-support.md) | [Conversational language understanding](../conversational-language-understanding/language-support.md) | [Entity linking](../entity-linking/language-support.md) | [Language detection](../language-detection/language-support.md) | [Key phrase extraction](../key-phrase-extraction/language-support.md) | [Named entity recognition(NER)](../named-entity-recognition/language-support.md) | [Orchestration workflow](../orchestration-workflow/language-support.md) | [Personally Identifiable Information (PII)](../personally-identifiable-information/language-support.md?tabs=documents) | [Conversation PII](../personally-identifiable-information/language-support.md?tabs=conversations) | [Question answering](../question-answering/language-support.md) | [Sentiment analysis](../sentiment-opinion-mining/language-support.md#sentiment-analysis-language-support) | [Opinion mining](../sentiment-opinion-mining/language-support.md#opinion-mining-language-support) | [Text Analytics for health](../text-analytics-for-health/language-support.md) | [Summarization](../summarization/language-support.md?tabs=document-summarization) | [Conversation summarization](../summarization/language-support.md?tabs=conversation-summarization) |
+|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+| Afrikaans | `af` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Albanian | `sq` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Amharic | `am` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Arabic | `ar` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Armenian | `hy` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Assamese | `as` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Azerbaijani | `az` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Basque | `eu` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Belarusian | `be` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Bengali | `bn` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Bosnian | `bs` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Breton | `br` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Bulgarian | `bg` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Burmese | `my` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Catalan | `ca` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Central Khmer | `km` | | | | | &check; | | | | | | | | | | | |
+| Chinese (Simplified) | `zh-hans` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | | | &check; | |
+| Chinese (Traditional) | `zh-hant` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Corsican | `co` | | | | | &check; | | | | | | | | | | | |
+| Croatian | `hr` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Czech | `cs` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Danish | `da` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Dari | `prs` | | | | | &check; | | | | | | | | | | | |
+| Divehi | `dv` | | | | | &check; | | | | | | | | | | | |
+| Dutch | `nl` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| English (UK) | `en-gb` | &check; | &check; | &check; | | &check; | | | | | | | | | | | |
+| English (US) | `en-us` | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; |
+| Esperanto | `eo` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Estonian | `et` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Fijian | `fj` | | | | | &check; | | | | | | | | | | | |
+| Filipino | `tl` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Finnish | `fi` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| French | `fr` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
+| Galician | `gl` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Georgian | `ka` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| German | `de` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
+| Greek | `el` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Gujarati | `gu` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Haitian | `ht` | | | | | &check; | | | | | | | | | | | |
+| Hausa | `ha` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Hebrew | `he` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | &check; | | |
+| Hindi | `hi` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Hmong Daw | `mww` | | | | | &check; | | | | | | | | | | | |
+| Hungarian | `hu` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Icelandic | `is` | | | | | &check; | | | | | | &check; | | | | | |
+| Igbo | `ig` | | | | | &check; | | | | | | | | | | | |
+| Indonesian | `id` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Inuktitut | `iu` | | | | | &check; | | | | | | | | | | | |
+| Irish | `ga` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Italian | `it` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
+| Japanese | `ja` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | | | &check; | |
+| Javanese | `jv` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Kannada | `kn` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Kazakh | `kk` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Khmer | `km` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Kinyarwanda | `rw` | | | | | &check; | | | | | | | | | | | |
+| Korean | `ko` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | | | &check; | |
+| Kurdish (Kurmanji) | `ku` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Kyrgyz | `ky` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Lao | `lo` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Latin | `la` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Latvian | `lv` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Lithuanian | `lt` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Luxembourgish | `lb` | | | | | &check; | | | | | | | | | | | |
+| Macedonian | `mk` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Malagasy | `mg` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Malay | `ms` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Malayalam | `ml` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Maltese | `mt` | | | | | &check; | | | | | | | | | | | |
+| Maori | `mi` | | | | | &check; | | | | | | | | | | | |
+| Marathi | `mr` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Mongolian | `mn` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Nepali | `ne` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Norwegian (Bokmal) | `nb` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | | | | | |
+| Norwegian | `no` | | | | | &check; | | | | | | | &check; | | | | |
+| Norwegian Nynorsk | `nn` | | | | | &check; | | | | | | | | | | | |
+| Oriya | `or` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Oromo | `om` | | | | | | &check; | | | | | | &check; | | | | |
+| Pashto | `ps` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Persian (Farsi) | `fa` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Polish | `pl` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Portuguese (Brazil) | `pt-br` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
+| Portuguese (Portugal) | `pt-pt` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | | &check; | &check; | | &check; | |
+| Punjabi | `pa` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Queretaro Otomi | `otq` | | | | | &check; | | | | | | | | | | | |
+| Romanian | `ro` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Russian | `ru` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Samoan | `sm` | | | | | &check; | | | | | | | | | | | |
+| Sanskrit | `sa` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Scottish Gaelic | `gd` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Serbian | `sr` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | | | | | |
+| Shona | `sn` | | | | | &check; | | | | | | | | | | | |
+| Sindhi | `sd` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Sinhala | `si` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Slovak | `sk` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Slovenian | `sl` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Somali | `so` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Spanish | `es` | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | | |
+| Sundanese | `su` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Swahili | `sw` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Swedish | `sv` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Tahitian | `ty` | | | | | &check; | | | | | | | | | | | |
+| Tajik | `tg` | | | | | &check; | | | | | | | | | | | |
+| Tamil | `ta` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Tatar | `tt` | | | | | &check; | | | | | | | | | | | |
+| Telugu | `te` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Thai | `th` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Tibetan | `bo` | | | | | &check; | | | | | | | | | | | |
+| Tigrinya | `ti` | | | | | &check; | | | | | | | | | | | |
+| Tongan | `to` | | | | | &check; | | | | | | | | | | | |
+| Turkish | `tr` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Turkmen | `tk` | | | | | &check; | | | | | | | | | | | |
+| Ukrainian | `uk` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Urdu | `ur` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Uyghur | `ug` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Uzbek | `uz` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Vietnamese | `vi` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Welsh | `cy` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Western Frisian | `fy` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Xhosa | `xh` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Yiddish | `yi` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Yoruba | `yo` | | | | | &check; | | | | | | | | | | | |
+| Yucatec Maya | `yua` | | | | | &check; | | | | | | | | | | | |
+| Zulu | `zu` | &check; | &check; | &check; | | &check; | | | | | | | | | | | |
+
+## See also
+
+See the following service-level language support articles for information on model version support for each language:
+* [Custom text classification](../custom-text-classification/language-support.md)
+* [Custom named entity recognition(NER)](../custom-named-entity-recognition/language-support.md)
+* [Conversational language understanding](../conversational-language-understanding/language-support.md)
+* [Entity linking](../entity-linking/language-support.md)
+* [Language detection](../language-detection/language-support.md)
+* [Key phrase extraction](../key-phrase-extraction/language-support.md)
+* [Named entity recognition(NER)](../named-entity-recognition/language-support.md)
+* [Orchestration workflow](../orchestration-workflow/language-support.md)
+* [Personally Identifiable Information (PII)](../personally-identifiable-information/language-support.md?tabs=documents)
+* [Conversation PII](../personally-identifiable-information/language-support.md?tabs=conversations)
+* [Question answering](../question-answering/language-support.md)
+* [Sentiment analysis](../sentiment-opinion-mining/language-support.md#sentiment-analysis-language-support)
+* [Opinion mining](../sentiment-opinion-mining/language-support.md#opinion-mining-language-support)
+* [Text Analytics for health](../text-analytics-for-health/language-support.md)
+* [Summarization](../summarization/language-support.md?tabs=document-summarization)
+* [Conversation summarization](../summarization/language-support.md?tabs=conversation-summarization)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/language-support.md
Last updated 07/28/2022 -+ # Language support for Key Phrase Extraction
Use this article to find the natural languages supported by Key Phrase Analysis.
> [!NOTE] > Languages are added as new [model versions](how-to/call-api.md#specify-the-key-phrase-extraction-model) are released for specific features. The current model version for Key Phrase Extraction is `2022-07-01`.
-Total supported language codes: 31
+Total supported language codes: 94
-| Language | Language code | Starting with model version | Notes |
-|:-|:-:|:--:|::|
+| Language | Language code | Starting with model version | Notes |
+|---|---|---|---|
| Afrikaans      |     `af`  |                2020-07-01                 |                    |
+| Albanian     |     `sq`  |                2022-10-01                 |                    |
+| Amharic     |     `am`  |                2022-10-01                 |                    |
+| Arabic    |     `ar`  |                2022-10-01                 |                    |
+| Armenian    |     `hy`  |                2022-10-01                 |                    |
+| Assamese    |     `as`  |                2022-10-01                 |                    |
+| Azerbaijani    |     `az`  |                2022-10-01                 |                    |
+| Basque    |     `eu`  |                2022-10-01                 |                    |
+| Belarusian |     `be`  |                2022-10-01                 |                    |
+| Bengali     |     `bn`  |                2022-10-01                 |                    |
+| Bosnian    |     `bs`  |                2022-10-01                 |                    |
+| Breton    |     `br`  |                2022-10-01                 |                    |
| Bulgarian      |     `bg`  |                2020-07-01                 |                    |
+| Burmese    |     `my`  |                2022-10-01                 |                    |
| Catalan    |     `ca`  |                2020-07-01                 |                    | | Chinese-Simplified    |     `zh-hans` |                2021-06-01                 |                    |
+| Chinese-Traditional |     `zh-hant` |                2022-10-01                 |                    |
| Croatian | `hr` | 2020-07-01 | |
-| Danish | `da` | 2019-10-01 | |
+| Czech    |     `cs`  |                2022-10-01                 |                    |
+| Danish | `da` | 2019-10-01 | |
| Dutch                 |     `nl`      |                2019-10-01                 |                    | | English               |     `en`      |                2019-10-01                 |                    |
+| Esperanto    |     `eo`  |                2022-10-01                 |                    |
| Estonian              |     `et`      |                2020-07-01                 |                    |
+| Filipino    |     `fil`  |                2022-10-01                 |                    |
| Finnish               |     `fi`      |                2019-10-01                 |                    | | French                |     `fr`      |                2019-10-01                 |                    |
+| Galician    |     `gl`  |                2022-10-01                 |                    |
+| Georgian    |     `ka`  |                2022-10-01                 |                    |
| German                |     `de`      |                2019-10-01                 |                    | | Greek    |     `el`  |                2020-07-01                 |                    |
+| Gujarati    |     `gu`  |                2022-10-01                 |                    |
+| Hausa      |     `ha`  |                2022-10-01                 |                    |
+| Hebrew    |     `he`  |                2022-10-01                 |                    |
+| Hindi      |     `hi`  |                2022-10-01                 |                    |
| Hungarian    |     `hu`  |                2020-07-01                 |                    |
-| Italian               |     `it`      |                2019-10-01                 |                    |
| Indonesian            |     `id`      |                2020-07-01                 |                    |
+| Irish            |     `ga`      |                2022-10-01                 |                    |
+| Italian               |     `it`      |                2019-10-01                 |                    |
| Japanese              |     `ja`      |                2019-10-01                 |                    |
+| Javanese            |     `jv`      |                2022-10-01                 |                    |
+| Kannada            |     `kn`      |                2022-10-01                 |                    |
+| Kazakh            |     `kk`      |                2022-10-01                 |                    |
+| Khmer            |     `km`      |                2022-10-01                 |                    |
| Korean                |     `ko`      |                2019-10-01                 |                    |
+| Kurdish (Kurmanji)   |     `ku`      |                2022-10-01                 |                    |
+| Kyrgyz            |     `ky`      |                2022-10-01                 |                    |
+| Lao            |     `lo`      |                2022-10-01                 |                    |
+| Latin            |     `la`      |                2022-10-01                 |                    |
| Latvian               |     `lv`      |                2020-07-01                 |                    |
-| Norwegian  (Bokmål)   |     `no`      |                2020-07-01                 | `nb` also accepted |
+| Lithuanian            |     `lt`      |                2022-10-01                 |                    |
+| Macedonian            |     `mk`      |                2022-10-01                 |                    |
+| Malagasy            |     `mg`      |                2022-10-01                 |                    |
+| Malay            |     `ms`      |                2022-10-01                 |                    |
+| Malayalam            |     `ml`      |                2022-10-01                 |                    |
+| Marathi            |     `mr`      |                2022-10-01                 |                    |
+| Mongolian            |     `mn`      |                2022-10-01                 |                    |
+| Nepali            |     `ne`      |                2022-10-01                 |                    |
+| Norwegian (Bokmål)    |     `no`      |                2020-07-01                 | `nb` also accepted |
+| Oriya            |     `or`      |                2022-10-01                 |                    |
+| Oromo            |     `om`      |                2022-10-01                 |                    |
+| Pashto            |     `ps`      |                2022-10-01                 |                    |
+| Persian (Farsi)       |     `fa`      |                2022-10-01                 |                    |
| Polish                |     `pl`      |                2019-10-01                 |                    | | Portuguese (Brazil)   |    `pt-BR`    |                2019-10-01                 |                    | | Portuguese (Portugal) |    `pt-PT`    |                2019-10-01                 | `pt` also accepted |
+| Punjabi            |     `pa`      |                2022-10-01                 |                    |
| Romanian              |     `ro`      |                2020-07-01                 |                    | | Russian               |     `ru`      |                2019-10-01                 |                    |
-| Spanish               |     `es`      |                2019-10-01                 |                    |
+| Sanskrit            |     `sa`      |                2022-10-01                 |                    |
+| Scottish Gaelic       |     `gd`      |                2022-10-01                 |                    |
+| Serbian            |     `sr`      |                2022-10-01                 |                    |
+| Sindhi            |     `sd`      |                2022-10-01                 |                    |
+| Sinhala            |     `si`      |                2022-10-01                 |                    |
| Slovak                |     `sk`      |                2020-07-01                 |                    | | Slovenian             |     `sl`      |                2020-07-01                 |                    |
+| Somali            |     `so`      |                2022-10-01                 |                    |
+| Spanish               |     `es`      |                2019-10-01                 |                    |
+| Sundanese            |     `su`      |                2022-10-01                 |                    |
+| Swahili            |     `sw`      |                2022-10-01                 |                    |
| Swedish               |     `sv`      |                2019-10-01                 |                    |
+| Tamil            |     `ta`      |                2022-10-01                 |                    |
+| Telugu           |     `te`      |                2022-10-01                 |                    |
+| Thai            |     `th`      |                2022-10-01                 |                    |
| Turkish              |     `tr`      |                2020-07-01                 |                    |
+| Ukrainian           |     `uk`      |                2022-10-01                 |                    |
+| Urdu            |     `ur`      |                2022-10-01                 |                    |
+| Uyghur            |     `ug`      |                2022-10-01                 |                    |
+| Uzbek            |     `uz`      |                2022-10-01                 |                    |
+| Vietnamese            |     `vi`      |                2022-10-01                 |                    |
+| Welsh            |     `cy`      |                2022-10-01                 |                    |
+| Western Frisian       |     `fy`      |                2022-10-01                 |                    |
+| Xhosa            |     `xh`      |                2022-10-01                 |                    |
+| Yiddish            |     `yi`      |                2022-10-01                 |                    |
## Next steps
-* [how to call the API](how-to/call-api.md) for more information.
-* [Quickstart: Use the key phrase extraction client library and REST API](quickstart.md)
+* See [How to call the API](how-to/call-api.md) for more information.
+* [Quickstart: Use the key phrase extraction client library and REST API](quickstart.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/language-support.md
Last updated 06/27/2022 -+ # Named Entity Recognition (NER) language support
Use this article to learn which natural languages are supported by the NER featu
| Finnish* | `fi` | 2019-10-01 | | | French | `fr` | 2021-01-15 | | | German | `de` | 2021-01-15 | |
+| Hebrew | `he` | 2022-10-01 | |
+| Hindi | `hi` | 2022-10-01 | |
| Hungarian* | `hu` | 2019-10-01 | | | Italian | `it` | 2021-01-15 | | | Japanese | `ja` | 2021-01-15 | |
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/language-support.md
Last updated 07/27/2022 -+ # Sentiment Analysis and Opinion Mining language support
Use this article to learn which languages are supported by Sentiment Analysis and Opinion Mining. > [!NOTE]
-> Languages are added as new model versions are released. The current generally available model version for Sentiment Analysis is `2022-06-01`.
+> Languages are added as new [model versions](../concepts/model-lifecycle.md) are released.
## Sentiment Analysis language support
-Total supported language codes: 21
+Total supported language codes: 94
| Language | Language code | Starting with model version | Notes |
-|-|-|--|-|
-| Chinese-Simplified | `zh-hans` | 2019-10-01 | `zh` also accepted |
-| Chinese-Traditional | `zh-hant` | 2019-10-01 | |
-| Dutch | `nl` | 2019-10-01 | |
-| English | `en` | 2019-10-01 | |
-| French | `fr` | 2019-10-01 | |
-| German | `de` | 2019-10-01 | |
-| Hindi | `hi` | 2020-04-01 | |
-| Italian | `it` | 2019-10-01 | |
-| Japanese | `ja` | 2019-10-01 | |
-| Korean | `ko` | 2019-10-01 | |
-| Norwegian (Bokmål) | `no` | 2019-10-01 | |
-| Portuguese (Brazil) | `pt-BR` | 2019-10-01 | |
-| Portuguese (Portugal) | `pt-PT` | 2019-10-01 | `pt` also accepted |
-| Spanish | `es` | 2019-10-01 | |
-| Arabic | `ar` | 2022-06-01 | |
-| Danish | `da` | 2022-06-01 | |
-| Greek | `el` | 2022-06-01 | |
-| Finnish | `fi` | 2022-06-01 | |
-| Polish | `pl` | 2022-06-01 | |
-| Russian | `ru` | 2022-06-01 | |
-| Swedish | `sv` | 2022-06-01 | |
+|-|-|-|-|
+| Afrikaans | `af` | 2022-10-01 | |
+| Albanian | `sq` | 2022-10-01 | |
+| Amharic | `am` | 2022-10-01 | |
+| Arabic | `ar` | 2022-06-01 | |
+| Armenian | `hy` | 2022-10-01 | |
+| Assamese | `as` | 2022-10-01 | |
+| Azerbaijani | `az` | 2022-10-01 | |
+| Basque | `eu` | 2022-10-01 | |
+| Belarusian | `be` | 2022-10-01 | |
+| Bengali | `bn` | 2022-10-01 | |
+| Bosnian | `bs` | 2022-10-01 | |
+| Breton | `br` | 2022-10-01 | |
+| Bulgarian | `bg` | 2022-10-01 | |
+| Burmese | `my` | 2022-10-01 | |
+| Catalan | `ca` | 2022-10-01 | |
+| Chinese (Simplified) | `zh-hans` | 2019-10-01 | `zh` also accepted |
+| Chinese (Traditional) | `zh-hant` | 2019-10-01 | |
+| Croatian | `hr` | 2022-10-01 | |
+| Czech | `cs` | 2022-10-01 | |
+| Danish | `da` | 2022-06-01 | |
+| Dutch | `nl` | 2019-10-01 | |
+| English | `en` | 2019-10-01 | |
+| Esperanto | `eo` | 2022-10-01 | |
+| Estonian | `et` | 2022-10-01 | |
+| Filipino | `fil` | 2022-10-01 | |
+| Finnish | `fi` | 2022-06-01 | |
+| French | `fr` | 2019-10-01 | |
+| Galician | `gl` | 2022-10-01 | |
+| Georgian | `ka` | 2022-10-01 | |
+| German | `de` | 2019-10-01 | |
+| Greek | `el` | 2022-06-01 | |
+| Gujarati | `gu` | 2022-10-01 | |
+| Hausa | `ha` | 2022-10-01 | |
+| Hebrew | `he` | 2022-10-01 | |
+| Hindi | `hi` | 2020-04-01 | |
+| Hungarian | `hu` | 2022-10-01 | |
+| Indonesian | `id` | 2022-10-01 | |
+| Irish | `ga` | 2022-10-01 | |
+| Italian | `it` | 2019-10-01 | |
+| Japanese | `ja` | 2019-10-01 | |
+| Javanese | `jv` | 2022-10-01 | |
+| Kannada | `kn` | 2022-10-01 | |
+| Kazakh | `kk` | 2022-10-01 | |
+| Khmer | `km` | 2022-10-01 | |
+| Korean | `ko` | 2019-10-01 | |
+| Kurdish (Kurmanji) | `ku` | 2022-10-01 | |
+| Kyrgyz | `ky` | 2022-10-01 | |
+| Lao | `lo` | 2022-10-01 | |
+| Latin | `la` | 2022-10-01 | |
+| Latvian | `lv` | 2022-10-01 | |
+| Lithuanian | `lt` | 2022-10-01 | |
+| Macedonian | `mk` | 2022-10-01 | |
+| Malagasy | `mg` | 2022-10-01 | |
+| Malay | `ms` | 2022-10-01 | |
+| Malayalam | `ml` | 2022-10-01 | |
+| Marathi | `mr` | 2022-10-01 | |
+| Mongolian | `mn` | 2022-10-01 | |
+| Nepali | `ne` | 2022-10-01 | |
+| Norwegian | `no` | 2019-10-01 | |
+| Oriya | `or` | 2022-10-01 | |
+| Oromo | `om` | 2022-10-01 | |
+| Pashto | `ps` | 2022-10-01 | |
+| Persian (Farsi) | `fa` | 2022-10-01 | |
+| Polish | `pl` | 2022-06-01 | |
+| Portuguese (Portugal) | `pt-PT` | 2019-10-01 | `pt` also accepted |
+| Portuguese (Brazil) | `pt-BR` | 2019-10-01 | |
+| Punjabi | `pa` | 2022-10-01 | |
+| Romanian | `ro` | 2022-10-01 | |
+| Russian | `ru` | 2022-06-01 | |
+| Sanskrit | `sa` | 2022-10-01 | |
+| Scottish Gaelic | `gd` | 2022-10-01 | |
+| Serbian | `sr` | 2022-10-01 | |
+| Sindhi | `sd` | 2022-10-01 | |
+| Sinhala | `si` | 2022-10-01 | |
+| Slovak | `sk` | 2022-10-01 | |
+| Slovenian | `sl` | 2022-10-01 | |
+| Somali | `so` | 2022-10-01 | |
+| Spanish | `es` | 2019-10-01 | |
+| Sundanese | `su` | 2022-10-01 | |
+| Swahili | `sw` | 2022-10-01 | |
+| Swedish | `sv` | 2022-06-01 | |
+| Tamil | `ta` | 2022-10-01 | |
+| Telugu | `te` | 2022-10-01 | |
+| Thai | `th` | 2022-10-01 | |
+| Turkish | `tr` | 2022-10-01 | |
+| Ukrainian | `uk` | 2022-10-01 | |
+| Urdu | `ur` | 2022-10-01 | |
+| Uyghur | `ug` | 2022-10-01 | |
+| Uzbek | `uz` | 2022-10-01 | |
+| Vietnamese | `vi` | 2022-10-01 | |
+| Welsh | `cy` | 2022-10-01 | |
+| Western Frisian | `fy` | 2022-10-01 | |
+| Xhosa | `xh` | 2022-10-01 | |
+| Yiddish | `yi` | 2022-10-01 | |
### Opinion Mining language support
Total supported language codes: 7
## Next steps * See [how to call the API](how-to/call-api.md#specify-the-sentiment-analysis-model) for more information.
-* [Quickstart: Use the Sentiment Analysis client library and REST API](quickstart.md)
+* [Quickstart: Use the Sentiment Analysis client library and REST API](quickstart.md)
cognitive-services Conversation Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/conversation-summarization.md
Title: Summarize text with the conversation summarization API
description: This article will show you how to summarize chat logs with the conversation summarization API. -+ Previously updated : 08/18/2022-- Last updated : 09/26/2022++ # How to use conversation summarization (preview)
> [!IMPORTANT] > The conversation summarization feature is a preview capability provided "AS IS" and "WITH ALL FAULTS." As such, Conversation Summarization (preview) should not be implemented or deployed in any production use. The customer is solely responsible for any use of conversation summarization.
-Conversation summarization is designed to summarize text chat logs between customers and customer-service agents. This feature is capable of providing both issues and resolutions present in these logs.
+## Conversation summarization types
+
+- Chapter title and narrative (general conversation) summarization is designed to summarize a conversation into chapter titles and a narrative summary of the conversation's contents. This summarization type works on conversations with any number of parties.
+
+- Issue and resolution (call center focused) summarization is designed to summarize text chat logs between customers and customer-service agents. This feature can provide both the issues and the resolutions present in these two-party conversations.
+ The AI models used by the API are provided by the service; you just have to send content for analysis. ## Features
-The conversation summarization API uses natural language processing techniques to locate key issues and resolutions in text-based chat logs. Conversation summarization will return issues and resolutions found from the text input.
+The conversation summarization API uses natural language processing techniques to produce a shorter summary of a conversation per request. Conversation summarization can summarize the issues and resolutions discussed in a two-party conversation, or segment a long conversation into chapters with a short narrative for each chapter.
-There's another feature in Azure Cognitive Service for Language named [document summarization](../overview.md?tabs=document-summarization) that can summarize sentences from large documents. When you're deciding between document summarization and conversation summarization, consider the following points:
-* Extractive summarization returns sentences that collectively represent the most important or relevant information within the original content.
-* Conversation summarization returns summaries based on full chat logs including a reason for the chat (a problem), and the resolution. For example, a chat log between a customer and a customer service agent.
+There's another feature in Azure Cognitive Service for Language named [document summarization](../overview.md?tabs=document-summarization) that is better suited to condensing documents into concise summaries. When you're deciding between document summarization and conversation summarization, consider the following points:
+* Input genre: Conversation summarization can operate on both chat text and speech transcripts, which have speakers and their utterances. Document summarization operates on plain text.
+* Purpose of summarization: For example, conversation issue and resolution summarization returns the reason for, and the resolution of, a chat between a customer and a customer service agent.
## Submitting data
When you submit data to conversation summarization, we recommend sending one cha
### Get summaries from text chats
-You can use conversation summarization to get summaries from 2-person chats between customer service agents, and customers. To see an example using text chats, see the [quickstart article](../quickstart.md).
+You can use conversation issue and resolution summarization to get summaries of two-party text chats, such as those between customer service agents and customers. To see an example using text chats, see the [quickstart article](../quickstart.md).
### Get summaries from speech transcriptions
-Conversation summarization also enables you to get summaries from speech transcripts by using the [Speech service's speech-to-text feature](../../../Speech-Service/call-center-overview.md). The following example shows a short conversation that you might include in your API requests.
+Conversation issue and resolution summarization also enables you to get summaries from speech transcripts by using the [Speech service's speech-to-text feature](../../../Speech-Service/call-center-overview.md). The following example shows a short conversation that you might include in your API requests.
```json "conversations":[ { "id":"abcdefgh-1234-1234-1234-1234abcdefgh",
- "language":"En",
+ "language":"en",
"modality":"transcript", "conversationItems":[ {
Conversation summarization also enables you to get summaries from speech transcr
] ```
-## Getting conversation summarization results
+### Get chapter titles
+Conversation chapter title summarization lets you get chapter titles from input conversations. A guided example scenario is provided below:
+
+1. Copy the command below into a text editor. The BASH example uses the `\` line continuation character. If your console or terminal uses a different line continuation character, use that character.
+
+```bash
+curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-conversations/jobs?api-version=2022-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
+-d \
+'
+{
+ "displayName": "Conversation Task Example",
+ "analysisInput": {
+ "conversations": [
+ {
+ "conversationItems": [
+ {
+          "text": "Hello, you're chatting with Rene. How may I help you?",
+ "id": "1",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ },
+ {
+          "text": "Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didn't work.",
+ "id": "2",
+ "role": "Customer",
+ "participantId": "Customer_1"
+ },
+ {
+          "text": "I'm sorry to hear that. Let's see what we can do to fix this issue. Could you please try the following steps for me? First, could you push the wifi connection button, hold for 3 seconds, then let me know if the power light is slowly blinking on and off every second?",
+ "id": "3",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ },
+ {
+ "text": "Yes, I pushed the wifi connection button, and now the power light is slowly blinking.",
+ "id": "4",
+ "role": "Customer",
+ "participantId": "Customer_1"
+ },
+ {
+ "text": "Great. Thank you! Now, please check in your Contoso Coffee app. Does it prompt to ask you to connect with the machine? ",
+ "id": "5",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ },
+ {
+ "text": "No. Nothing happened.",
+ "id": "6",
+ "role": "Customer",
+ "participantId": "Customer_1"
+ },
+ {
+          "text": "I'm very sorry to hear that. Let me see if there's another way to fix the issue. Please hold on for a minute.",
+ "id": "7",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ }
+ ],
+ "modality": "text",
+ "id": "conversation1",
+ "language": "en"
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "taskName": "Conversation Task 1",
+ "kind": "ConversationalSummarizationTask",
+ "parameters": {
+ "summaryAspects": [
+ "chapterTitle"
+ ]
+ }
+ }
+ ]
+}
+'
+```
+
+2. Make the following changes in the command where needed:
+- Replace the value `your-language-resource-key` with your key.
+- Replace the first part of the request URL `your-language-resource-endpoint` with your endpoint URL.
+
+3. Open a command prompt window (for example: BASH).
+
+4. Paste the command from the text editor into the command prompt window, then run the command.
+
+5. Get the `operation-location` from the response header. The value will look similar to the following URL:
+
+```http
+https://<your-language-resource-endpoint>/language/analyze-conversations/jobs/12345678-1234-1234-1234-12345678?api-version=2022-10-01-preview
+```
+
+6. To get the results of the request, use the following cURL command. Be sure to replace `<my-job-id>` with the GUID value you received from the previous `operation-location` response header:
+
+```bash
+curl -X GET https://<your-language-resource-endpoint>/language/analyze-conversations/jobs/<my-job-id>?api-version=2022-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>"
+```
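The job runs asynchronously, so the GET request may initially report a status of `notStarted` or `running` rather than `succeeded`. If you'd rather wait for completion from the shell instead of rerunning the request by hand, the following is a minimal polling sketch (not part of the documented steps); it assumes `jq` is installed and uses the same placeholder endpoint, key, and job ID as above.

```bash
# Poll the summarization job until it reaches a terminal state (succeeded, failed, or cancelled).
while true; do
  status=$(curl -s -X GET "https://<your-language-resource-endpoint>/language/analyze-conversations/jobs/<my-job-id>?api-version=2022-10-01-preview" \
    -H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" | jq -r '.status')
  echo "Job status: $status"
  if [ "$status" != "notStarted" ] && [ "$status" != "running" ]; then
    break
  fi
  sleep 5   # wait a few seconds between polls
done
```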
+
+Example chapter title summarization JSON response:
+
+```json
+{
+ "jobId": "d874a98c-bf31-4ac5-8b94-5c236f786754",
+ "lastUpdatedDateTime": "2022-09-29T17:36:42Z",
+ "createdDateTime": "2022-09-29T17:36:39Z",
+ "expirationDateTime": "2022-09-30T17:36:39Z",
+ "status": "succeeded",
+ "errors": [],
+ "displayName": "Conversation Task Example",
+ "tasks": {
+ "completed": 1,
+ "failed": 0,
+ "inProgress": 0,
+ "total": 1,
+ "items": [
+ {
+ "kind": "conversationalSummarizationResults",
+ "taskName": "Conversation Task 1",
+ "lastUpdateDateTime": "2022-09-29T17:36:42.895694Z",
+ "status": "succeeded",
+ "results": {
+ "conversations": [
+ {
+ "summaries": [
+ {
+ "aspect": "chapterTitle",
+ "text": "Smart Brew 300 Espresso Machine WiFi Connection",
+ "contexts": [
+ { "conversationItemId": "1", "offset": 0, "length": 53 },
+ { "conversationItemId": "2", "offset": 0, "length": 94 },
+ { "conversationItemId": "3", "offset": 0, "length": 266 },
+ { "conversationItemId": "4", "offset": 0, "length": 85 },
+ { "conversationItemId": "5", "offset": 0, "length": 119 },
+ { "conversationItemId": "6", "offset": 0, "length": 21 },
+ { "conversationItemId": "7", "offset": 0, "length": 109 }
+ ]
+ }
+ ],
+ "id": "conversation1",
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "latest"
+ }
+ }
+ ]
+ }
+}
+```
+For long conversations, the model might segment them into multiple cohesive parts and summarize each segment. There is also a lengthy `contexts` field for each summary, which indicates the range of the input conversation that was used to generate the summary.
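As an illustration of how to read the `contexts` field, the following sketch (not part of the documented steps) pulls each summary and the conversation item IDs it covers out of a saved copy of the response. The `response.json` file name is hypothetical, and `jq` is assumed to be installed; the field names follow the example response shown above.

```bash
# Print each summary aspect, its text, and the conversation items it was generated from.
jq -r '.tasks.items[]
  | select(.kind == "conversationalSummarizationResults")
  | .results.conversations[]
  | .summaries[]
  | .aspect + ": " + .text + " (items " + ([.contexts[].conversationItemId] | join(", ")) + ")"' response.json
```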
+
+ ### Get narrative summarization
-When you get results from language detection, you can stream the results to an application or save the output to a file on the local system.
+Conversation summarization also lets you get narrative summaries from input conversations. A guided example scenario is provided below:
-The following text is an example of content you might submit for summarization. This is only an example, the API can accept much longer input text. See [data limits](../../concepts/data-limits.md) for more information.
+1. Copy the command below into a text editor. The BASH example uses the `\` line continuation character. If your console or terminal uses a different line continuation character, use that character.
+
+```bash
+curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-conversations/jobs?api-version=2022-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
+-d \
+'
+{
+ "displayName": "Conversation Task Example",
+ "analysisInput": {
+ "conversations": [
+ {
+ "conversationItems": [
+ {
+          "text": "Hello, you're chatting with Rene. How may I help you?",
+ "id": "1",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ },
+ {
+          "text": "Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didn't work.",
+ "id": "2",
+ "role": "Customer",
+ "participantId": "Customer_1"
+ },
+ {
+          "text": "I'm sorry to hear that. Let's see what we can do to fix this issue. Could you please try the following steps for me? First, could you push the wifi connection button, hold for 3 seconds, then let me know if the power light is slowly blinking on and off every second?",
+ "id": "3",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ },
+ {
+ "text": "Yes, I pushed the wifi connection button, and now the power light is slowly blinking.",
+ "id": "4",
+ "role": "Customer",
+ "participantId": "Customer_1"
+ },
+ {
+ "text": "Great. Thank you! Now, please check in your Contoso Coffee app. Does it prompt to ask you to connect with the machine? ",
+ "id": "5",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ },
+ {
+ "text": "No. Nothing happened.",
+ "id": "6",
+ "role": "Customer",
+ "participantId": "Customer_1"
+ },
+ {
+          "text": "I'm very sorry to hear that. Let me see if there's another way to fix the issue. Please hold on for a minute.",
+ "id": "7",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ }
+ ],
+ "modality": "text",
+ "id": "conversation1",
+ "language": "en"
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "taskName": "Conversation Task 1",
+ "kind": "ConversationalSummarizationTask",
+ "parameters": {
+ "summaryAspects": [
+ "narrative"
+ ]
+ }
+ }
+ ]
+}
+'
+```
+
+2. Make the following changes in the command where needed:
+- Replace the value `your-language-resource-key` with your key.
+- Replace the first part of the request URL `your-language-resource-endpoint` with your endpoint URL.
+
+3. Open a command prompt window (for example: BASH).
+
+4. Paste the command from the text editor into the command prompt window, then run the command.
+
+5. Get the `operation-location` from the response header. The value will look similar to the following URL:
+
+```http
+https://<your-language-resource-endpoint>/language/analyze-conversations/jobs/12345678-1234-1234-1234-12345678?api-version=2022-10-01-preview
+```
+
+6. To get the results of a request, use the following cURL command. Be sure to replace `<my-job-id>` with the GUID value you received from the previous `operation-location` response header:
+
+```bash
+curl -X GET https://<your-language-resource-endpoint>/language/analyze-conversations/jobs/<my-job-id>?api-version=2022-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>"
+```
+
+Example narrative summarization JSON response:
+
+```json
+{
+ "jobId": "d874a98c-bf31-4ac5-8b94-5c236f786754",
+ "lastUpdatedDateTime": "2022-09-29T17:36:42Z",
+ "createdDateTime": "2022-09-29T17:36:39Z",
+ "expirationDateTime": "2022-09-30T17:36:39Z",
+ "status": "succeeded",
+ "errors": [],
+ "displayName": "Conversation Task Example",
+ "tasks": {
+ "completed": 1,
+ "failed": 0,
+ "inProgress": 0,
+ "total": 1,
+ "items": [
+ {
+ "kind": "conversationalSummarizationResults",
+ "taskName": "Conversation Task 1",
+ "lastUpdateDateTime": "2022-09-29T17:36:42.895694Z",
+ "status": "succeeded",
+ "results": {
+ "conversations": [
+ {
+ "summaries": [
+ {
+ "aspect": "narrative",
+ "text": "Agent_1 helps customer to set up wifi connection for Smart Brew 300 espresso machine.",
+ "contexts": [
+ { "conversationItemId": "1", "offset": 0, "length": 53 },
+ { "conversationItemId": "2", "offset": 0, "length": 94 },
+ { "conversationItemId": "3", "offset": 0, "length": 266 },
+ { "conversationItemId": "4", "offset": 0, "length": 85 },
+ { "conversationItemId": "5", "offset": 0, "length": 119 },
+ { "conversationItemId": "6", "offset": 0, "length": 21 },
+ { "conversationItemId": "7", "offset": 0, "length": 109 }
+ ]
+ }
+ ],
+ "id": "conversation1",
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "latest"
+ }
+ }
+ ]
+ }
+}
+```
+
+For long conversations, the model might segment them into multiple cohesive parts and summarize each segment. There is also a lengthy `contexts` field for each summary, which indicates the range of the input conversation that was used to generate the summary.
+
+## Getting conversation issue and resolution summarization results
+
+The following text is an example of content you might submit for conversation issue and resolution summarization. This is only an example; the API can accept much longer input text. See [data limits](../../concepts/data-limits.md) for more information.
**Agent**: "*Hello, how can I help you*?"
In the above example, the API might return the following summarized sentences:
| "Customer wants to upgrade their subscription. Customer doesn't know how." | issue | | "Customer needs to press upgrade button, and sign in." | resolution | - ## See also
-* [Summarization overview](../overview.md)
+* [Summarization overview](../overview.md)
cognitive-services Document Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/document-summarization.md
Title: Summarize text with the extractive summarization API
description: This article will show you how to summarize text with the extractive summarization API. -+ Previously updated : 05/26/2022-- Last updated : 09/26/2022++ # How to use document summarization (preview) > [!IMPORTANT]
-> The extractive summarization feature is a preview capability provided ΓÇ£AS ISΓÇ¥ and ΓÇ£WITH ALL FAULTS.ΓÇ¥ As such, Extractive Summarization (preview) should not be implemented or deployed in any production use. The customer is solely responsible for any use of extractive summarization.
+> The summarization features described in this documentation are preview capabilities provided “AS IS” and “WITH ALL FAULTS.” As such, document summarization (preview) should not be implemented or deployed in any production use. The customer is solely responsible for any use of document summarization.
-In general, there are two approaches for automatic document summarization: extractive and abstractive. This API provides extractive summarization.
+Document summarization is designed to shorten content that users consider too long to read. Both extractive and abstractive summarization condense articles, papers, or documents to key sentences.
-Extractive summarization is a feature that produces a summary by extracting sentences that collectively represent the most important or relevant information within the original content.
+**Extractive summarization**: Produces a summary by extracting sentences that collectively represent the most important or relevant information within the original content.
-This feature is designed to shorten content that users consider too long to read. Extractive summarization condenses articles, papers, or documents to key sentences.
+**Abstractive summarization**: Produces a summary by generating concise sentences that capture the main idea of the document.
The AI models used by the API are provided by the service; you just have to send content for analysis. ## Features > [!TIP]
-> If you want to start using this feature, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md) without needing to write code.
+> If you want to start using these features, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md) without needing to write code.
-The extractive summarization API uses natural language processing techniques to locate key sentences in an unstructured text document. These sentences collectively convey the main idea of the document.
+The document summarization API uses natural language processing techniques to locate key sentences in an unstructured text document. These sentences collectively convey the main idea of the document.
-Extractive summarization returns a rank score as a part of the system response along with extracted sentences and their position in the original documents. A rank score is an indicator of how relevant a sentence is determined to be, to the main idea of a document. The model gives a score between 0 and 1 (inclusive) to each sentence and returns the highest scored sentences per request. For example, if you request a three-sentence summary, the service returns the three highest scored sentences.
+Document summarization returns a rank score as part of the system response, along with the extracted sentences and their position in the original documents. A rank score is an indicator of how relevant a sentence is to the main idea of the document. The model gives a score between 0 and 1 (inclusive) to each sentence and returns the highest scored sentences per request. For example, if you request a three-sentence summary, the service returns the three highest scored sentences.
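To see those scores directly, you could reorder the sentences in a saved extractive summarization response by rank score with a small `jq` filter. This is only a sketch: the `response.json` file name is hypothetical, and it assumes each extracted sentence is returned with `text` and `rankScore` fields, as in the extractive summarization result shape.

```bash
# List the extracted sentences from a saved response, highest rank score first.
jq -r '.tasks.items[].results.documents[].sentences
  | sort_by(-.rankScore)[]
  | "\(.rankScore)  \(.text)"' response.json
```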
There is another feature in Azure Cognitive Service for Language, [key phrases extraction](./../../key-phrase-extraction/how-to/call-api.md), that can extract key information. When deciding between key phrase extraction and extractive summarization, consider the following:
-* key phrase extraction returns phrases while extractive summarization returns sentences
-* extractive summarization returns sentences together with a rank score, and. Top ranked sentences will be returned per request
-* extractive summarization also returns the following positional information:
- * offset: The start position of each extracted sentence, and
- * Length: is the length of each extracted sentence.
+* Key phrase extraction returns phrases while extractive summarization returns sentences.
+* Extractive summarization returns sentences together with a rank score, and top ranked sentences will be returned per request.
+* Extractive summarization also returns the following positional information:
+ * Offset: The start position of each extracted sentence.
+ * Length: The length of each extracted sentence.
## Determine how to process the data (optional)
-### Specify the document summarization model
+### Submitting data
-By default, document summarization will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../../concepts/model-lifecycle.md).
+You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is [asynchronous](../../concepts/use-asynchronously.md), there may be a delay between sending an API request and receiving the results.
-### Input languages
+When using this feature, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
-When you submit documents to be processed by key phrase extraction, you can specify which of [the supported languages](../language-support.md) they're written in. if you don't specify a language, key phrase extraction will default to English. The API may return offsets in the response to support different [multilingual and emoji encodings](../../concepts/multilingual-emoji-support.md).
+### Getting document summarization results
-## Submitting data
+When you get results from document summarization, you can stream the results to an application or save the output to a file on the local system.
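For instance, the following is a minimal sketch of saving a completed job's output locally before it expires (the placeholders and the `summarization-results.json` file name are illustrative, not part of the documented steps):

```bash
# Fetch a completed document summarization job and save the JSON output to a local file.
curl -s -X GET "https://<your-language-resource-endpoint>/language/analyze-text/jobs/<my-job-id>?api-version=2022-10-01-preview" \
  -H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
  | tee summarization-results.json
```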
-You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is [asynchronous](../../concepts/use-asynchronously.md), there may be a delay between sending an API request, and receiving the results.
+The following is an example of content you might submit for summarization, which is extracted from the Microsoft blog article [A holistic representation toward integrative AI](https://www.microsoft.com/research/blog/a-holistic-representation-toward-integrative-ai/). This article is only an example; the API can accept much longer input text. See the data limits section for more information.
+
+*"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI Cognitive Services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, thereΓÇÖs magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."*
-When using this feature, the API results are available for 24 hours from the time the request was ingested, and is indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
+The document summarization API request is processed upon receipt by creating a job for the API backend. If the job succeeded, the output of the API will be returned. The output will be available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../../concepts/multilingual-emoji-support.md) for more information.
+
+Using the above example, the API might return the following summarized sentences:
+
+**Extractive summarization**:
+- "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."
+- "We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages."
+- "The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today."
+
+**Abstractive summarization**:
+- "Microsoft is taking a more holistic, human-centric approach to learning and understanding. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. Over the past five years, we have achieved human performance on benchmarks in."
+
+### Try document extractive summarization
+
+You can use document extractive summarization to get summaries of articles, papers, or documents. To see an example, see the [quickstart article](../quickstart.md).
You can use the `sentenceCount` parameter to specify how many sentences will be returned, with `3` being the default. The range is from 1 to 20. You can also use the `sortby` parameter to specify in what order the extracted sentences will be returned - either `Offset` or `Rank`, with `Offset` being the default.

|parameter value |Description |
|--|--|
|Rank | Order sentences according to their relevance to the input document, as decided by the service. |
|Offset | Keeps the original order in which the sentences appear in the input document. |
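As a hedged sketch of how these parameters can be passed, the request below is modeled on the abstractive example later in this article; the `ExtractiveSummarization` task kind and the `sortBy` parameter casing are assumptions to verify against the REST reference rather than facts documented here.

```bash
# Request an extractive summary of up to five sentences, ordered by rank score.
curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-text/jobs?api-version=2022-10-01-preview \
-H "Content-Type: application/json" \
-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
-d \
'
{
  "displayName": "Document Extractive Summarization Task Example",
  "analysisInput": {
    "documents": [
      {
        "id": "1",
        "language": "en",
        "text": "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."
      }
    ]
  },
  "tasks": [
    {
      "kind": "ExtractiveSummarization",
      "taskName": "Document Extractive Summarization Task 1",
      "parameters": {
        "sentenceCount": 5,
        "sortBy": "Rank"
      }
    }
  ]
}
'
```

If the parameters are omitted, the service falls back to the defaults described above: three sentences, returned in `Offset` order.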
-## Getting document summarization results
-
-When you get results from language detection, you can stream the results to an application or save the output to a file on the local system.
-
-The following is an example of content you might submit for summarization, which is extracted using the Microsoft blog article [A holistic representation toward integrative AI](https://www.microsoft.com/research/blog/a-holistic-representation-toward-integrative-ai/). This article is only an example, the API can accept much longer input text. See the data limits section for more information.
-
-*"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI Cognitive Services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, thereΓÇÖs magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."*
-
-The extractive summarization API is performed upon receipt of the request by creating a job for the API backend. If the job succeeded, the output of the API will be returned. The output will be available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../../concepts/multilingual-emoji-support.md) for more information.
-
-Using the above example, the API might return the following summarized sentences:
-
-*"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."*
+### Try document abstractive summarization
+
+[Reference documentation](https://go.microsoft.com/fwlink/?linkid=2211684)
+
+The following example will get you started with document abstractive summarization:
+
+1. Copy the command below into a text editor. The BASH example uses the `\` line continuation character. If your console or terminal uses a different line continuation character, use that character instead.
+
+```bash
+curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-text/jobs?api-version=2022-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
+-d \
+'
+{
+ "displayName": "Document Abstractive Summarization Task Example",
+ "analysisInput": {
+ "documents": [
+ {
+ "id": "1",
+ "language": "en",
+        "text": "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI Cognitive Services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there's magic—what we call XYZ-code as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "kind": "AbstractiveSummarization",
+ "taskName": "Document Abstractive Summarization Task 1"
+ }
+ ]
+}
+'
+```
+2. Make the following changes in the command where needed:
+- Replace the value `your-language-resource-key` with your key.
+- Replace the first part of the request URL `your-language-resource-endpoint` with your endpoint URL.
+
+3. Open a command prompt window (for example: BASH).
+
+4. Paste the command from the text editor into the command prompt window, then run the command.
+
+5. Get the `operation-location` from the response header. The value will look similar to the following URL:
+
+```http
+https://<your-language-resource-endpoint>/language/analyze-text/jobs/12345678-1234-1234-1234-12345678?api-version=2022-10-01-preview
+```
+
+6. To get the results of the request, use the following cURL command. Be sure to replace `<my-job-id>` with the GUID value you received from the previous `operation-location` response header:
+
+```bash
+curl -X GET https://<your-language-resource-endpoint>/language/analyze-text/jobs/<my-job-id>?api-version=2022-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>"
+```
+
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST API&Pillar=Language&Product=Summarization&Page=quickstart&Section=Document-summarization" target="_target">I ran into an issue</a>
+
+### Abstractive document summarization example JSON response
+
+```json
+{
+ "jobId": "cd6418fe-db86-4350-aec1-f0d7c91442a6",
+ "lastUpdateDateTime": "2022-09-08T16:45:14Z",
+ "createdDateTime": "2022-09-08T16:44:53Z",
+ "expirationDateTime": "2022-09-09T16:44:53Z",
+ "status": "succeeded",
+ "errors": [],
+ "displayName": "Document Abstractive Summarization Task Example",
+ "tasks": {
+ "completed": 1,
+ "failed": 0,
+ "inProgress": 0,
+ "total": 1,
+ "items": [
+ {
+ "kind": "AbstractiveSummarizationLROResults",
+ "taskName": "Document Abstractive Summarization Task 1",
+ "lastUpdateDateTime": "2022-09-08T16:45:14.0717206Z",
+ "status": "succeeded",
+ "results": {
+ "documents": [
+ {
+ "summaries": [
+ {
+ "text": "Microsoft is taking a more holistic, human-centric approach to AI. We've developed a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We've achieved human performance on benchmarks in conversational speech recognition, machine translation, ...... and image captions.",
+ "contexts": [
+ {
+ "offset": 0,
+ "length": 247
+ }
+ ]
+ }
+ ],
+ "id": "1"
+ }
+ ],
+ "errors": [],
+ "modelVersion": "latest"
+ }
+ }
+ ]
+ }
+}
+```
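To pull just the generated summary text (and the input range it covers) out of a response like the one above, a small `jq` filter is enough. This is a sketch only; the `response.json` file name is hypothetical and `jq` is assumed to be installed.

```bash
# Print each abstractive summary and the document range it was generated from.
jq -r '.tasks.items[]
  | select(.kind == "AbstractiveSummarizationLROResults")
  | .results.documents[]
  | .summaries[]
  | "\(.text)  [offset \(.contexts[0].offset), length \(.contexts[0].length)]"' response.json
```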
+
+|parameter |Description |
+|||
+|`-X POST <endpoint>` | Specifies your endpoint for accessing the API. |
+|`-H Content-Type: application/json` | The content type for sending JSON data. |
+|`-H "Ocp-Apim-Subscription-Key:<key>` | Specifies the key for accessing the API. |
+|`-d <documents>` | The JSON containing the documents you want to send. |
-*"In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z)."*
+The following cURL commands are executed from a BASH shell. Edit these commands with your own resource name, resource key, and JSON values.
-*"At the intersection of all three, thereΓÇÖs magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better."*
## Service and data limits
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/language-support.md
Title: Summarization language support
description: Learn about which languages are supported by document summarization. -+ Previously updated : 06/02/2022-- Last updated : 09/28/2022++ # Summarization language support
Use this article to learn which natural languages are supported by document and
# [Document summarization](#tab/document-summarization)
-## Languages supported by document summarization
+## Languages supported by extractive document summarization
-Document summarization supports the following languages:
+| Language | Language code | Notes |
+|--|--|--|
+| Chinese-Simplified | `zh-hans` | `zh` also accepted |
+| English | `en` | |
+| French | `fr` | |
+| German | `de` | |
+| Italian | `it` | |
+| Japanese | `ja` | |
+| Korean | `ko` | |
+| Spanish | `es` | |
+| Portuguese (Brazil) | `pt-BR` | |
+| Portuguese (Portugal) | `pt-PT` | `pt` also accepted |
-| Language | Language code | Starting with v3 model version | Notes |
-|:-|:-:|:-:|::|
-| Chinese-Simplified | `zh-hans` | 2021-08-01 | `zh` also accepted |
-| English | `en` | 2021-08-01 | |
-| French | `fr` | 2021-08-01 | |
-| German | `de` | 2021-08-01 | |
-| Italian | `it` | 2021-08-01 | |
-| Japanese | `ja` | 2021-08-01 | |
-| Korean | `ko` | 2021-08-01 | |
-| Spanish | `es` | 2021-08-01 | |
-| Portuguese (Brazil) | `pt-BR` | 2021-08-01 | |
-| Portuguese (Portugal) | `pt-PT` | 2021-08-01 | `pt` also accepted |
+## Languages supported by abstractive document summarization (preview)
+
+| Language | Language code | Notes |
+|--|--|--|
+| English | `en` | |
# [Conversation summarization (preview)](#tab/conversation-summarization)
Document summarization supports the following languages:
Conversation summarization supports the following languages:
-| Language | Language code | Starting with model version | Notes |
-|:-|:-:|:-:|::|
-| English | `en` | `2022-05-15` | |
+| Language | Language code | Notes |
+|--|--|--|
+| English | `en` | |
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md
Title: What is document and conversation summarization (preview)?
description: Learn about summarizing text. -+ Previously updated : 08/18/2022-- Last updated : 09/26/2022++ # What is document and conversation summarization (preview)?
This documentation contains the following article types:
* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=document-summarization)** are getting-started instructions to guide you through making requests to the service. * **[How-to guides](how-to/document-summarization.md)** contain instructions for using the service in more specific or customized ways.
-Text summarization is a broad topic, consisting of several approaches to represent relevant information in text. The document summarization feature described in this documentation enables you to use extractive text summarization to produce a summary of a document. It extracts sentences that collectively represent the most important or relevant information within the original content. This feature is designed to shorten content that could be considered too long to read. For example, it can condense articles, papers, or documents to key sentences.
+Document summarization uses natural language processing techniques to generate a summary for documents. There are two general approaches to automatic summarization, both of which are supported by the API: extractive and abstractive.
-As an example, consider the following paragraph of text:
+Extractive summarization extracts sentences that collectively represent the most important or relevant information within the original content. Abstractive summarization generates a summary with concise, coherent sentences or words that are not verbatim extracts of sentences from the original document. These features are designed to shorten content that could be considered too long to read.
-*"WeΓÇÖre delighted to announce that Cognitive Service for Language service now supports extractive summarization! In general, there are two approaches for automatic document summarization: extractive and abstractive. This feature provides extractive summarization. Document summarization is a feature that produces a text summary by extracting sentences that collectively represent the most important or relevant information within the original content. This feature is designed to shorten content that could be considered too long to read. Extractive summarization condenses articles, papers, or documents to key sentences."*
+## Key features
-The document summarization feature would simplify the text into the following key sentences:
+This API provides two types of document summarization:
+* **Extractive summarization**: Produces a summary by extracting salient sentences within the document.
+  * Multiple extracted sentences: These sentences collectively convey the main idea of the document. They're original sentences extracted from the input document's content.
+ * Rank score: The rank score indicates how relevant a sentence is to a document's main topic. Document summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
+  * Multiple returned sentences: Determine the maximum number of sentences to be returned. For example, if you request a three-sentence summary, extractive summarization will return the three highest scored sentences.
+ * Positional information: The start position and length of extracted sentences.
+* **Abstractive summarization**: Generates a summary that may not use the same words as those in the document, but captures the main idea.
+  * Summary texts: Abstractive summarization returns a summary for each contextual input range within the document. A long document may be segmented, so multiple groups of summary texts may be returned, each with its contextual input range.
+ * Contextual input range: The range within the input document that was used to generate the summary text.
-## Key features
+As an example, consider the following paragraph of text:
+
+*"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI Cognitive Services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, thereΓÇÖs magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."*
-Document summarization supports the following features:
+The document summarization API request is processed upon receipt by creating a job for the API backend. If the job succeeded, the output of the API will be returned. The output will be available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../concepts/multilingual-emoji-support.md) for more information.
-* **Extracted sentences**: These sentences collectively convey the main idea of the document. TheyΓÇÖre original sentences extracted from the input documentΓÇÖs content.
-* **Rank score**: The rank score indicates how relevant a sentence is to a document's main topic. Document summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
-* **Maximum sentences**: Determine the maximum number of sentences to be returned. For example, if you request a three-sentence summary Document summarization will return the three highest scored sentences.
-* **Positional information**: The start position and length of extracted sentences.
+Using the above example, the API might return the following summarized sentences:
+
+**Extractive summarization**:
+- "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."
+- "We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages."
+- "The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today."
+
+**Abstractive summarization**:
+- "Microsoft is taking a more holistic, human-centric approach to learning and understanding. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. Over the past five years, we have achieved human performance on benchmarks in."
# [Conversation summarization](#tab/conversation-summarization)
+> [!IMPORTANT]
+> Conversation summarization is only available in English.
+ This documentation contains the following article types: * **[Quickstarts](quickstart.md?pivots=rest-api&tabs=conversation-summarization)** are getting-started instructions to guide you through making requests to the service. * **[How-to guides](how-to/conversation-summarization.md)** contain instructions for using the service in more specific or customized ways.
-Conversation summarization is a broad topic, consisting of several approaches to represent relevant information in text. The conversation summarization feature described in this documentation enables you to use abstractive text summarization to produce a summary of issues and resolutions in transcripts of web chats and service call transcripts between customer-service agents, and your customers.
+## Key features
+
+Conversation summarization supports the following features:
+* **Issue/resolution summarization**: A call center specific feature that gives a summary of issues and resolutions in conversations between customer-service agents and your customers.
+* **Chapter title summarization**: Gives suggested chapter titles of the input conversation.
+* **Narrative summarization**: Gives call notes, meeting notes or chat summaries of the input conversation.
-## When to use conversation summarization
+## When to use issue and resolution summarization
* When there are aspects of an “issue” and “resolution”, such as: * The reason for a service chat/call (the issue).
Conversation summarization feature would simplify the text into the following:
|Example summary | Format | Conversation aspect |
|-|-|-|
-| Customer wants to use the wifi connection on their Smart Brew 300. But it didn't work. | One or two sentences | issue |
+| Customer wants to use the wifi connection on their Smart Brew 300. But it didn't work. | One or two sentences | issue |
| Checked if the power light is blinking slowly. Checked the Contoso coffee app. It had no prompt. Tried to do a factory reset. | One or more sentences, generated from multiple lines of the transcript. | resolution |
To use this feature, you submit raw text for analysis and handle the API output
|Development option |Description | Links | ||||
-| REST API | Integrate conversation summarization into your applications using the REST API. | [Quickstart: Use conversation summarization](quickstart.md) |
+| REST API | Integrate conversation summarization into your applications using the REST API. | [Quickstart: Use conversation summarization](quickstart.md?tabs=conversation-summarization&pivots=rest-api) |
As you use document summarization in your applications, see the following refere
|Development option / language |Reference documentation |Samples | ||||
-|REST API | [REST API documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-2-Preview-2/operations/Analyze) | |
+|REST API | [REST API documentation](https://go.microsoft.com/fwlink/?linkid=2211684) | |
|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) | | Java | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) | |JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) |
An AI system includes not only the technology, but also the people who will use
* [Transparency note for Azure Cognitive Service for Language](/legal/cognitive-services/language-service/transparency-note?context=/azure/cognitive-services/language-service/context/context) * [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use-summarization?context=/azure/cognitive-services/language-service/context/context) * [Characteristics and limitations of summarization](/legal/cognitive-services/language-service/characteristics-and-limitations-summarization?context=/azure/cognitive-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/cognitive-services/language-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/cognitive-services/language-service/context/context)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/language-support.md
json
## Details of the supported model versions for each language:
-| Language code | model version: | Featured Tag | Specific Tag |
+| Language Code | Model Version: | Featured Tag | Specific Tag |
|:--|:-:|:-:|:-:|
| en | 2022-03-01 | latest | 3.0.59413252-onprem-amd64 |
-| en,es,it,fr,de,pt | 2022-08-15-preview | latin | 3.0.59413252-latin-onprem-amd64 |
-| he | 2022-08-15-preview | semitic | 3.0.59413252-semitic-onprem-amd64 |
+| en,es,it,fr,de,pt | 2022-08-15-preview | latin | 3.0.60903415-latin-onprem-amd64 |
+| he | 2022-08-15-preview | semitic | 3.0.60903415-semitic-onprem-amd64 |
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Previously updated : 09/19/2022 Last updated : 10/04/2022
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## October 2022
+
+* The summarization feature now has the following capabilities:
+ * [Document summarization](./summarization/overview.md):
+ * Abstractive summarization, which generates a summary of a document that may not use the same words as those in the document, but captures the main idea.
+  * [Conversation summarization](./summarization/overview.md?tabs=conversation-summarization)
+ * Chapter title summarization, which returns suggested chapter titles of input conversations.
+ * Narrative summarization, which returns call notes, meeting notes or chat summaries of input conversations.
+* Expanded language support for:
+ * [Sentiment analysis](./sentiment-opinion-mining/language-support.md)
+ * [Key phrase extraction](./key-phrase-extraction/language-support.md)
+  * [Named entity recognition](./named-entity-recognition/language-support.md)
+ ## September 2022 * [Conversational language understanding](./conversational-language-understanding/overview.md) is available in the following regions:
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
communication-services Advisor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/advisor-overview.md
Title: Leverage Azure Advisor for Azure Communication Services
+ Title: Use Azure Advisor for Azure Communication Services
description: Learn about Azure Advisor offerings for Azure Communication Services.
Previously updated : 09/30/2021 Last updated : 10/10/2022
## Install the latest SDKs
-To ensure all the recent fixes and updates, it's recommended you always stay up to date with the latest SDKs available. If there is a newer version of the SDK(s) you are using available, you will see a recommendation show up in the **Performance** category to update to the latest SDK.
+To ensure you have all the recent fixes and updates, it's recommended that you always stay up to date with the latest SDKs available. If there's a newer version of the SDK(s) you're using, you'll see a recommendation show up in the **Performance** category to update to the latest SDK.
![Azure Advisor example showing recommendation to update chat SDK.](./media/advisor-chat-sdk-update-example.png)
-The following SDKs are supported for this feature, along with all their supported languages. Note that this feature will only send recommendations for the newest generally available major release versions of the SDKs. Beta or preview versions will not trigger any recommendations or alerts. You can learn more about the [SDK options](./sdk-options.md) available.
+The following SDKs are supported for this feature, along with all their supported languages. This feature will only send recommendations for the newest generally available major release versions of the SDKs. Beta or preview versions won't trigger any recommendations or alerts. You can learn more about the [SDK options](./sdk-options.md) available.
* Calling (client) * Chat
The following SDKs are supported for this feature, along with all their supporte
* Phone Numbers * Management * Network Traversal
-* Call Automation
## Next steps
communication-services Call Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/call-logs-azure-monitor.md
Title: Azure Communication Services - Call Logs Preview
+ Title: Azure Communication Services - Call Logs
description: Learn about Call Summary and Call Diagnostic Logs in Azure Monitor
-# Call Summary and Call Diagnostic Logs Preview
+# Call Summary and Call Diagnostic Logs
> [!IMPORTANT] > The following refers to logs enabled through [Azure Monitor](../../../azure-monitor/overview.md) (see also [FAQ](../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](./enable-logging.md) ## Data Concepts
-The following are high level descriptions of data concepts specific to Voice and Video calling within you Communications Services that are important to review in order to understand the meaning of the data captured in the logs.
+The following are high level descriptions of data concepts specific to Voice and Video calling within your Communications Services that are important to review in order to understand the meaning of the data captured in the logs.
### Entities and IDs
A *Call*, as it relates to the entities represented in the data, is an abstracti
A *Participant* (`participantId`) is present only when the Call is a *Group* Call, as it represents the connection between an Endpoint and the server.
-An *Endpoint* is the most unique entity, represented by `endpointId`. `EndpointType` tells you whether the Endpoint represents a human user (PSTN, VoIP), a Bot (Bot), or the server that is managing multiple Participants within a Call. When an `endpointType` is `"Server"`, the Endpoint will not be assigned a unique ID. By looking at `endpointType` and the number of `endpointId`s, you can always determine how many users and other non-human Participants (bots, servers) are on the Call. Native SDKs (like the Android calling SDK) reuse the same `endpointId` for a user across multiple Calls, thus enabling an understanding of experience across sessions. This differs from web-based Endpoints, which will always generate a new `endpointId` for each new Call.
+An *Endpoint* is the most unique entity, represented by `endpointId`. `EndpointType` tells you whether the Endpoint represents a human user (PSTN, VoIP), a Bot (Bot), or the server that is managing multiple Participants within a Call. When an `endpointType` is `"Server"`, the Endpoint will not be assigned a unique ID. By analyzing `endpointType` and the number of `endpointIds`, you can determine how many users and other non-human Participants (bots, servers) join a Call. Our native SDKs (Android, iOS) reuse the same `endpointId` for a user across multiple Calls, thus enabling an understanding of experience across sessions. This differs from web-based Endpoints, which will always generate a new `endpointId` for each new Call.
A *Stream* is the most granular entity, as there is one Stream per direction (inbound/outbound) and `mediaType` (e.g. audio, video).
-### P2P vs. Group Calls
-
-There are two types of Calls (represented by `callType`): P2P and Group.
-
-**P2P** calls are a connection between only two Endpoints, with no server Endpoint. P2P calls are initiated as a Call between those Endpoints and are not created as a group Call event prior to the connection.
-
- :::image type="content" source="media\call-logs-azure-monitor\p2p-diagram.png" alt-text="p2p call":::
--
-**Group** Calls include any Call that's created ahead of time as a meeting/calendar event and any Call that has more than 2 Endpoints connected. Group Calls will include a server Endpoint, and the connection between each Endpoint and the server constitutes a Participant. P2P Calls that add an additional Endpoint during the Call cease to be P2P, and they become a Group Call. By viewing the `participantStartTime` and `participantDuration`, the timeline of when each Endpoint joined the Call can be determined.
--
- :::image type="content" source="media\call-logs-azure-monitor\group-call-version-a.png" alt-text="Group Call":::
-
-## Log Structure
-Two types of logs are created: **Call Summary** logs and **Call Diagnostic** logs.
-
-Call Summary Logs contain basic information about the Call, including all the relevant IDs, timestamps, Endpoint and SDK information. For each Endpoint within a Call (not counting the Server), a distinct Call Summary Log will be created.
-
-Call Diagnostic Logs contain information about the Stream as well as a set of metrics that indicate quality of experience measurements. For each Endpoint within a Call (including the server), a distinct Call Diagnostic Log is created for each data stream (audio, video, etc.) between Endpoints. In a P2P Call, each log contains data relating to each of the outbound stream(s) associated with each Endpoint. In a Group Call, each stream associated with `endpointType`= `"Server"` will create a log containing data for the inbound streams, and all other streams will create logs containing data for the outbound streams for all non-sever endpoints. In Group Calls, use the `participantId` as the key to join the related inbound/outbound logs into a distinct Participant connection.
-
-### Example 1: P2P Call
-
-The below diagram represents two endpoints connected directly in a P2P Call. In this example, 2 Call Summary Logs would be created (1 per `endpointId`) and 4 Call Diagnostic Logs would be created (1 per media stream). Each log will contain data relating to the outbound stream of the `endpointId`.
---
-### Example 2: Group Call
-The below diagram represents a Group Call example with three `particpantIds`, which means three `endpointIds` (`endpointIds` can potentially appear in multiple Participants, e.g. when rejoining a Call from the same device) and a Server Endpoint. For `participantId` 1, two Call Summary Logs would be created: one for for `endpointId`, and another for the server. Four Call Diagnostic Logs would be created relating to `participantId` 1, one for each media stream. The three logs with `endpointId` 1 would contain data relating to the outbound media streams, and the one log with `endpointId = null, endpointType = "Server"` would contain data relating to the inbound stream.
- ## Data Definitions ### Call Summary Log The Call Summary Log contains data to help you identify key properties of all Calls. A different Call Summary Log will be created per each `participantId` (`endpointId` in the case of P2P calls) in the Call.
+> [!IMPORTANT]
+> Participant information in the call summary log will vary based on the participant's tenant. The SDK and OS version will be redacted if the participant is not within the same tenant (also referred to as cross-tenant) as the ACS resource. Cross-tenant participants are classified as external users invited by a resource tenant to join and collaborate during a call.
+ | Property | Description | |-|| | time | The timestamp (UTC) of when the log was generated. | | operationName | The operation associated with log record. | | operationVersion | The api-version associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. | | category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
-| correlationIdentifier | `correlationIdentifier` is the unique ID for a Call. The `correlationIdentifier` identifies correlated events from all of the participants and endpoints that connect during a single Call, and it can be used to join data from different logs. If you ever need to open a support case with Microsoft, the `correlationID` will be used to easily identify the Call you're troubleshooting. |
+| correlationId | `correlationId` is the unique ID for a Call. The `correlationId` identifies correlated events from all of the participants and endpoints that connect during a single Call, and it can be used to join data from different logs. If you ever need to open a support case with Microsoft, the `correlationId` will be used to easily identify the Call you're troubleshooting. |
| identifier | This is the unique ID for the user. The identity can be an Azure Communications Services user, Azure AD user ID, Teams anonymous user ID or Teams bot ID. You can use this ID to correlate user events across different logs. | | callStartTime | A timestamp for the start of the call, based on the first attempted connection from any Endpoint. | | callDuration | The duration of the Call expressed in seconds, based on the first attempted connection and end of the last connection between two endpoints. |
The Call Summary Log contains data to help you identify key properties of all Ca
| participantStartTime | Timestamp for beginning of the first connection attempt by the participant. | | participantDuration | The duration of each Participant connection in seconds, from `participantStartTime` to the timestamp when the connection is ended. | | participantEndReason | Contains Calling SDK error codes emitted by the SDK when relevant for each `participantId`. See Calling SDK error codes below. |
-| endpointId | Unique ID that represents each Endpoint connected to the call, where the Endpoint type is defined by `endpointType`. When the value is `null`, the connected entity is the Communication Services server (`endpointType`= `"Server"`). `EndpointId` can sometimes persist for the same user across multiple calls (`correlationIdentifier`) for native clients. The number of `endpointId`s will determine the number of Call Summary Logs. A distinct Summary Log is created for each `endpointId`. |
+| endpointId | Unique ID that represents each Endpoint connected to the call, where the Endpoint type is defined by `endpointType`. When the value is `null`, the connected entity is the Communication Services server (`endpointType`= `"Server"`). `EndpointId` can sometimes persist for the same user across multiple calls (`correlationId`) for native clients. The number of `endpointId`s will determine the number of Call Summary Logs. A distinct Summary Log is created for each `endpointId`. |
| endpointType | This value describes the properties of each Endpoint connected to the Call. Can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, or `"Unknown"`. | | sdkVersion | Version string for the Communication Services Calling SDK version used by each relevant Endpoint. (Example: `"1.1.00.20212500"`) | | osVersion | String that represents the operating system and version of each Endpoint device. |
+| participantTenantId | The ID of the Microsoft tenant associated with the participant. This field is used to guide cross-tenant redaction. |
### Call Diagnostic Log
-Call Diagnostic Logs provide important information about the Endpoints and the media transfers for each Participant, as well as measurements that help to understand quality issues.
+Call Diagnostic Logs provide important information about the Endpoints and the media transfers for each Participant, as well as measurements that help to understand quality issues.
+For each Endpoint within a Call, a distinct Call Diagnostic Log is created for outbound media streams (audio, video, etc.) between Endpoints.
+In a P2P Call, each log contains data relating to each of the outbound stream(s) associated with each Endpoint. In Group Calls, the `participantId` serves as the key identifier to join the related outbound logs into a distinct Participant connection. Call Diagnostic Logs remain intact and are the same regardless of the participant's tenant.
+> Note: In this document, P2P and group calls are within the same tenant by default. Call scenarios that are cross-tenant are called out explicitly throughout the document.
| Property | Description | ||-| | operationName | The operation associated with log record. | | operationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. | | category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
-| correlationIdentifier | The `correlationIdentifier` identifies correlated events from all of the participants and endpoints that connect during a single Call. `correlationIdentifier` is the unique ID for a Call. If you ever need to open a support case with Microsoft, the `correlationID` will be used to easily identify the Call you're troubleshooting. |
+| correlationId | The `correlationId` identifies correlated events from all of the participants and endpoints that connect during a single Call. `correlationId` is the unique ID for a Call. If you ever need to open a support case with Microsoft, the `correlationId` will be used to easily identify the Call you're troubleshooting. |
| participantId | This ID is generated to represent the two-way connection between a "Participant" Endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there is a direct connection between two endpoints, and no `participantId` is generated. |
-| identifier | This is the unique ID for the user. The identity can be an Azure Communications Services user, Azure AD user ID, Teams anonymous user ID or Teams bot ID. You can use this ID to correlate user events across different logs. |
-| endpointId | Unique ID that represents each Endpoint connected to the call, with Endpoint type defined by `endpointType`. When the value is `null`, it means that the connected entity is the Communication Services server. `EndpointId` can persist for the same user across multiple calls (`correlationIdentifier`) for native clients but will be unique for every Call when the client is a web browser. |
-| endpointType | This value describes the properties of each `endpointId`. Can contain `ΓÇ£ServerΓÇ¥`, `ΓÇ£VOIPΓÇ¥`, `ΓÇ£PSTNΓÇ¥`, `ΓÇ£BOTΓÇ¥`, or `ΓÇ£UnknownΓÇ¥`. |
+| identifier | This is the unique ID for the user. The identity can be an Azure Communications Services user, Azure AD user ID, Teams object ID or Teams bot ID. You can use this ID to correlate user events across different logs. |
+| endpointId | Unique ID that represents each Endpoint connected to the call, with Endpoint type defined by `endpointType`. When the value is `null`, it means that the connected entity is the Communication Services server. `EndpointId` can persist for the same user across multiple calls (`correlationId`) for native clients but will be unique for every Call when the client is a web browser. |
+| endpointType | This value describes the properties of each `endpointId`. Can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, `"Voicemail"`, `"Anonymous"`, or `"Unknown"`. |
| mediaType | This string value describes the type of media being transmitted between endpoints within each stream. Possible values include `"Audio"`, `"Video"`, `"VBSS"` (Video-Based Screen Sharing), and `"AppSharing"`. | | streamId | Non-unique integer which, together with `mediaType`, can be used to uniquely identify streams of the same `participantId`. | | transportType | String value which describes the network transport protocol per `participantId`. Can contain `"UDP"`, `"TCP"`, or `"Unrecognized"`. `"Unrecognized"` indicates that the system could not determine if the `transportType` was TCP or UDP. |
Call Diagnostic Logs provide important information about the Endpoints and the m
| jitterAvg | This is the average change in delay between successive packets. Azure Communication Services can adapt to some levels of jitter through buffering. It's only when the jitter exceeds the buffering, which is approximately at `jitterAvg` >30 ms, that a negative quality impact is likely occurring. The packets arriving at different speeds cause a speaker's voice to sound robotic. This is measured per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. | | jitterMax | This is the maximum jitter value measured between packets per media stream. Bursts in network conditions can cause issues in the audio/video traffic flow. | | packetLossRateAvg | This is the average percentage of packets that are lost. Packet loss directly affects audio quality, from small, individual lost packets that have almost no impact to back-to-back burst losses that cause audio to cut out completely. The packets being dropped and not arriving at their intended destination cause gaps in the media, resulting in missed syllables and words, and choppy video and sharing. A packet loss rate of greater than 10% (0.1) should be considered a rate that's likely having a negative quality impact. This is measured per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. |
-| packetLossRateMax | This value represents the maximum packet loss rate (%) per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. Bursts in network conditions can cause issues in the audio/video traffic flow. |
-| | |
+| packetLossRateMax | This value represents the maximum packet loss rate (%) per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. Bursts in network conditions can cause issues in the audio/video traffic flow. |
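The jitter and packet loss thresholds described above (roughly 30 ms average jitter and a 10% average packet loss rate) can be turned into a simple post-processing check. The following JavaScript sketch is illustrative only; it assumes you've already exported the Call Diagnostic Log `properties` objects into a plain array, and the function name is a placeholder.

```javascript
// Illustrative only: flag streams whose average metrics exceed the thresholds
// described above (jitterAvg > 30 ms, packetLossRateAvg > 0.1).
// Assumes `diagnosticLogs` is an array of Call Diagnostic Log `properties` objects.
function flagPoorQualityStreams(diagnosticLogs) {
  return diagnosticLogs
    .map((log) => ({
      participantId: log.participantId,
      mediaType: log.mediaType,
      highJitter: Number(log.jitterAvg) > 30,             // ~30 ms average jitter
      highPacketLoss: Number(log.packetLossRateAvg) > 0.1 // >10% average packet loss
    }))
    .filter((stream) => stream.highJitter || stream.highPacketLoss);
}
```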
+### P2P vs. Group Calls
-### Error Codes
-The `participantEndReason` will contain a value from the set of Calling SDK error codes. You can refer to these codes to troubleshoot issues during the call, per Endpoint.
-
-| Error code | Description | Action to take |
-|--|--|--|
-| 0 | Success | Call (P2P) or Participant (Group) terminated correctly. |
-| 403 | Forbidden / Authentication failure. | Ensure that your Communication Services token is valid and not expired. If you are using Teams Interoperability, make sure your Teams tenant has been added to the preview access allowlist. To enable/disable Teams tenant interoperability, complete this form. |
-| 404 | Call not found. | Ensure that the number you're calling (or Call you're joining) exists. |
-| 408 | Call controller timed out. | Call Controller timed out waiting for protocol messages from user endpoints. Ensure clients are connected and available. |
-| 410 | Local media stack or media infrastructure error. | Ensure that you're using the latest SDK in a supported environment. |
-| 430 | Unable to deliver message to client application. | Ensure that the client application is running and available. |
-| 480 | Remote client Endpoint not registered. | Ensure that the remote Endpoint is available. |
-| 481 | Failed to handle incoming Call. | File a support request through the Azure portal. |
-| 487 | Call canceled, locally declined, ended due to an Endpoint mismatch issue, or failed to generate media offer. | Expected behavior. |
-| 490, 491, 496, 487, 498 | Local Endpoint network issues. | Check your network. |
-| 500, 503, 504 | Communication Services infrastructure error. | File a support request through the Azure portal. |
-| 603 | Call globally declined by remote Communication Services Participant. | Expected behavior. |
-| Unknown | Non-standard end reason (not part of the standard SIP codes). | |
-
-## Call Examples and Sample Data
+There are two types of Calls (represented by `callType`): P2P and Group.
+
+**P2P** calls are a connection between only two Endpoints, with no server Endpoint. P2P calls are initiated as a Call between those Endpoints and are not created as a group Call event prior to the connection.
+
+ :::image type="content" source="media\call-logs-azure-monitor\p2p-diagram.png" alt-text="Screenshot displays P2P call across 2 endpoints.":::
+
+**Group** Calls include any Call that has more than 2 Endpoints connected. Group Calls will include a server Endpoint, and the connection between each Endpoint and the server constitutes a Participant. P2P Calls that add an additional Endpoint during the Call cease to be P2P, and they become a Group Call. By viewing the `participantStartTime` and `participantDuration`, the timeline of when each Endpoint joined the Call can be determined.
++
+ :::image type="content" source="media\call-logs-azure-monitor\group-call-version-a.png" alt-text="Screenshot displays group call across multiple endpoints.":::
++
+## Log Structure
+
+Two types of logs are created: **Call Summary** logs and **Call Diagnostic** logs.
+
+Call Summary Logs contain basic information about the Call, including all the relevant IDs, timestamps, Endpoint and SDK information. For each participant within a call, a distinct call summary log is created (if someone rejoins a call, they will have the same `endpointId`, but a different `participantId`, so there will be two Call Summary Logs for that endpoint).
+
+Call Diagnostic Logs contain information about the Stream as well as a set of metrics that indicate quality of experience measurements. For each Endpoint within a Call (including the server), a distinct Call Diagnostic Log is created for each media stream (audio, video, etc.) between Endpoints. In a P2P Call, each log contains data relating to each of the outbound stream(s) associated with each Endpoint. In a Group Call, each stream associated with `endpointType`= `"Server"` will create a log containing data for the inbound streams, and all other streams will create logs containing data for the outbound streams for all non-server endpoints. In Group Calls, use the `participantId` as the key to join the related inbound/outbound logs into a distinct Participant connection.
+
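To make that joining step concrete, here's a minimal, illustrative JavaScript sketch. It assumes you've already exported the `properties` objects of the Call Summary and Call Diagnostic Logs for a single Call (the same `correlationId`) into plain arrays; the function and variable names are placeholders, not part of any SDK.

```javascript
// Illustrative only: pair each participant's Call Summary Log with its related
// Call Diagnostic Logs by using participantId as the join key. `summaries` and
// `diagnostics` are arrays of `properties` objects from logs sharing one correlationId.
function buildParticipantView(summaries, diagnostics) {
  const participants = new Map();
  for (const summary of summaries) {
    participants.set(summary.participantId, { summary, streams: [] });
  }
  for (const diagnostic of diagnostics) {
    const entry = participants.get(diagnostic.participantId);
    if (entry) {
      entry.streams.push(diagnostic);
    }
  }
  return participants; // Map of participantId -> { summary, streams }
}
```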
+### Example 1: P2P Call
+
+The below diagram represents two endpoints connected directly in a P2P Call. In this example, two Call Summary Logs would be created (one per `participantId`) and four Call Diagnostic Logs would be created (one per media stream). Each log will contain data relating to the outbound stream of the `participantId`.
+++
+### Example 2: Group Call
+
+The below diagram represents a Group Call example with three `participantIds`, which means three `endpointIds` (`endpointIds` can potentially appear in multiple Participants, for example when rejoining a Call from the same device) and a Server Endpoint. One Call Summary Log would be created per `participantId`, and four Call Diagnostic Logs would be created relating to each `participantId`, one for each media stream.
+
+
+### Example 3: P2P Call cross-tenant
+The below diagram represents two participants across multiple tenants that are connected directly in a P2P Call. In this example, one Call Summary Log would be created per participant (with redacted OS and SDK versions) and four Call Diagnostic Logs would be created (one per media stream). Each log will contain data relating to the outbound stream of the `participantId`.
+
++
+### Example 4: Group Call cross-tenant
+The below diagram represents a Group Call example with three `participantIds` across multiple tenants. One Call Summary Log would be created per participant (with redacted OS and SDK versions), and four Call Diagnostic Logs would be created relating to each `participantId`, one for each media stream.
+++
+> [!NOTE]
+> Only outbound diagnostic logs will be supported in this release.
+> Please note that participant and bot identities are treated the same way; as a result, the OS and SDK version associated with the bot and the participant will be redacted.
++
+
+## Sample Data
### P2P Call Shared fields for all logs in the call: ```json "time": "2021-07-19T18:46:50.188Z",
-"resourceId": "SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/ACS-PROD-CCTS-TESTS/PROVIDERS/MICROSOFT.COMMUNICATION/COMMUNICATIONSERVICES/ACS-PROD-CCTS-TESTS",
+"resourceId": "SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/ACS-TEST-RG/PROVIDERS/MICROSOFT.COMMUNICATION/COMMUNICATIONSERVICES/ACS-PROD-CCTS-TESTS",
"correlationId": "8d1a8374-344d-4502-b54b-ba2d6daaf0ae", ```
Call Summary Logs have shared operation and category information:
```json "operationName": "CallSummary", "operationVersion": "1.0",
-"category": "CallSummaryPRIVATEPREVIEW",
+"category": "CallSummary",
``` Call Summary for VoIP user 1
Call summary for VoIP user 2
"osVersion": "null" } ```
+Cross-tenant Call Summary Logs: Call summary for VoIP user 1
+```json
+"properties": {
+ "identifier": "1e4c59e1-r1rr-49bc-893d-990dsds8f9f5",
+ "callStartTime": "2022-08-14T06:18:27.010Z",
+ "callDuration": 520,
+ "callType": "P2P",
+ "teamsThreadId": "null",
+ "participantId": "null",
+ "participantTenantId": "02cbdb3c-155a-4b95-b829-6d56a45787ca",
+ "participantStartTime": "2022-08-14T06:18:27.010Z",
+ "participantDuration": "520",
+ "participantEndReason": "0",
+ "endpointId": "02cbdb3c-155a-4d98-b829-aaaaa61d44ea",
+ "endpointType": "VoIP",
+ "sdkVersion": "Redacted",
+ "osVersion": "Redacted"
+}
+```
+Call summary for a PSTN call (**Please note:** P2P or group call logs emitted will have the OS and SDK version redacted regardless of the participant's or bot's tenant)
+```json
+"properties": {
+ "identifier": "b1999c3e-bbbb-4650-9b23-9999bdabab47",
+ "callStartTime": "2022-08-07T13:53:12Z",
+ "callDuration": 1470,
+ "callType": "Group",
+ "teamsThreadId": "19:36ec5177126fff000aaa521670c804a3@thread.v2",
+ "participantId": " b25cf111-73df-4e0a-a888-640000abe34d",
+ "participantStartTime": "2022-08-07T13:56:45Z",
+ "participantDuration": 960,
+ "participantEndReason": "0",
+ "endpointId": "8731d003-6c1e-4808-8159-effff000aaa2",
+ "endpointType": "PSTN",
+ "sdkVersion": "Redacted",
+ "osVersion": "Redacted"
+}
+```
+ #### Call Diagnostic Logs Call diagnostics logs share operation information: ```json "operationName": "CallDiagnostics", "operationVersion": "1.0",
-"category": "CallDiagnosticsPRIVATEPREVIEW",
+"category": "CallDiagnostics",
``` Diagnostic log for audio stream from VoIP Endpoint 1 to VoIP Endpoint 2: ```json
Diagnostic log for video stream from VoIP Endpoint 1 to VoIP Endpoint 2:
} ``` ### Group Call
-In the following example, there are three users in a Group Call, two connected via VOIP, and one connected via PSTN. All are using only Audio.
-
-The data would be generated in three Call Summary Logs and 6 Call Diagnostic Logs.
-
-Shared fields for all logs in the Call:
+The data would be generated in three Call Summary Logs and six Call Diagnostic Logs. Shared fields for all logs in the Call:
```json "time": "2021-07-05T06:30:06.402Z",
-"resourceId": "SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/ACS-PROD-CCTS-TESTS/PROVIDERS/MICROSOFT.COMMUNICATION/COMMUNICATIONSERVICES/ACS-PROD-CCTS-TESTS",
+"resourceId": "SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/ACS-TEST-RG/PROVIDERS/MICROSOFT.COMMUNICATION/COMMUNICATIONSERVICES/ACS-PROD-CCTS-TESTS",
"correlationId": "341acde7-8aa5-445b-a3da-2ddadca47d22", ```
Call Summary Logs have shared operation and category information:
```json "operationName": "CallSummary", "operationVersion": "1.0",
-"category": "CallSummaryPRIVATEPREVIEW",
+"category": "CallSummary",
``` Call summary for VoIP Endpoint 1:
Call summary for VoIP Endpoint 1:
"callStartTime": "2021-07-05T06:16:40.240Z", "callDuration": 87, "callType": "Group",
- "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLTk2ZDUtYTZlM2I2ZjgxOTkw@thread.v2",
+ "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLT77777OOOOO99999jgxOTkw@thread.v2",
"participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba", "participantStartTime": "2021-07-05T06:16:44.235Z", "participantDuration": "82",
Call summary for PSTN Endpoint 2:
"callStartTime": "2021-07-05T06:16:40.240Z", "callDuration": 87, "callType": "Group",
- "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLTk2ZDUtYTZlM2I2ZjgxOTkw@thread.v2",
+ "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLT77777OOOOO99999jgxOTkw@thread.v2",
"participantId": "515650f7-8204-4079-ac9d-d8f4bf07b04c", "participantStartTime": "2021-07-05T06:17:10.447Z", "participantDuration": "52",
Call summary for PSTN Endpoint 2:
"osVersion": "null" } ```
+Cross-tenant Call Summary Log:
+```json
+"properties": {
+ "identifier": "1e4c59e1-r1rr-49bc-893d-990dsds8f9f5",
+ "callStartTime": "2022-08-14T06:18:27.010Z",
+ "callDuration": 912,
+ "callType": "Group",
+ "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLT77777OOOOO99999jgxOTkw@thread.v2",
+ "participantId": "aa1dd7da-5922-4bb1-a4fa-e350a111fd9c",
+ "participantTenantId": "02cbdb3c-155a-4b95-b829-6d56a45787ca",
+ "participantStartTime": "2022-08-14T06:18:27.010Z",
+ "participantDuration": "902",
+ "participantEndReason": "0",
+ "endpointId": "02cbdb3c-155a-4d98-b829-aaaaa61d44ea",
+ "endpointType": "VoIP",
+ "sdkVersion": "Redacted",
+ "osVersion": "Redacted"
+}
+```
+Cross-tenant call summary log with a bot as a participant:
+Call summary for bot
+```json
+
+"properties": {
+ "identifier": "b1902c3e-b9f7-4650-9b23-9999bdabab47",
+ "callStartTime": "2022-08-09T16:00:32Z",
+ "callDuration": 1470,
+ "callType": "Group",
+ "teamsThreadId": "19:meeting_MmQwZDcwYTQtZ000HWE6NzI4LTg1YTAtNXXXXX99999ZZZZZ@thread.v2",
+ "participantId": "66e9d9a7-a434-4663-d91d-fb1ea73ff31e",
+ "participantStartTime": "2022-08-09T16:14:18Z",
+ "participantDuration": 644,
+ "participantEndReason": "0",
+ "endpointId": "69680ec2-5ac0-4a3c-9574-eaaa77720b82",
+ "endpointType": "Bot",
+ "sdkVersion": "Redacted",
+ "osVersion": "Redacted"
+}
+```
#### Call Diagnostic Logs Call diagnostics logs share operation information: ```json "operationName": "CallDiagnostics", "operationVersion": "1.0",
-"category": "CallDiagnosticsPRIVATEPREVIEW",
+"category": "CallDiagnostics",
``` Diagnostic log for audio stream from VoIP Endpoint 1 to Server Endpoint: ```json
Diagnostic log for audio stream from Server Endpoint to VoIP Endpoint 3:
"jitterMax": "4", "packetLossRateAvg": "0", ```
+### Error Codes
+The `participantEndReason` will contain a value from the set of Calling SDK error codes. You can refer to these codes to troubleshoot issues during the call, per Endpoint. See [Calling SDK error codes in the Azure Communication Services troubleshooting guide](https://docs.microsoft.com/azure/communication-services/concepts/troubleshooting-info?tabs=csharp%2Cios%2Cdotnet#calling-sdk-error-codes).
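For example, when post-processing Call Summary Logs you might translate a few of these codes into readable text. The JavaScript snippet below is illustrative only and covers just a handful of codes; rely on the linked troubleshooting guide for the authoritative list.

```javascript
// Illustrative only: map a few common participantEndReason codes to readable text
// while post-processing Call Summary Logs.
const endReasonDescriptions = {
  "0": "Call or Participant terminated correctly",
  "487": "Call canceled or locally declined",
  "603": "Call declined by the remote participant"
};

function describeEndReason(code) {
  return endReasonDescriptions[String(code)] ?? `Unrecognized end reason (${code})`;
}
```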
communication-services Credentials Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/credentials-best-practices.md
This article provides best practices for managing [User Access Tokens](./authent
Communication Token Credential (Credential) is an authentication primitive that wraps User Access Tokens. It's used to authenticate users in Communication Services, such as Chat or Calling. Additionally, it provides built-in token refreshing functionality for the convenience of the developer.
-## Initialization
+## Choosing the session lifetime
-Depending on your scenario, you may want to initialize the Credential with a [static token](#static-token) or a [callback function](#callback-function) returning tokens.
-No matter which method you choose, you can supply the tokens to the Credential via the Azure Communication Identity API.
+Depending on your scenario, you may want to adjust the lifespan of tokens issued for your application. The following best practices, alone or in combination, can help you achieve the optimal solution for your scenario:
+- [Customize the token expiration time](#set-a-custom-token-expiration-time) to your specific needs.
+- Initialize the Credential with a [static token](#static-token) for one-off Chat messages or time-limited Calling sessions.
+- Use a [callback function](#callback-function) for agents using the application for longer periods of time.
+
+### Set a custom token expiration time
+When requesting a new token, we recommend using short lifetime tokens for one-off Chat messages or time-limited Calling sessions, and longer lifetime tokens for agents using the application for longer periods of time. The default token expiration time is 24 hours, but you can customize it by providing a value between one hour and 24 hours to the optional parameter as follows:
+
+```javascript
+const tokenOptions = { tokenExpiresInMinutes: 60 };
+const user = { communicationUserId: userId };
+const scopes = ["chat"];
+let communicationIdentityToken = await identityClient.getToken(user, scopes, tokenOptions);
+```
### Static token For short-lived clients, initialize the Credential with a static token. This approach is suitable for scenarios such as sending one-off Chat messages or time-limited Calling sessions. ```javascript
-const tokenCredential = new AzureCommunicationTokenCredential("<user_access_token>");
+let communicationIdentityToken = await identityClient.getToken({ communicationUserId: userId }, ["chat", "voip"]);
+const tokenCredential = new AzureCommunicationTokenCredential(communicationIdentityToken.token);
``` ### Callback function
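For longer-lived clients that use the callback-function approach, a minimal sketch might look like the following. Here `fetchTokenFromMyServer` and `initialToken` are placeholders for your own token-issuing helper and any token you may already hold; they are not part of the SDK.

```javascript
const { AzureCommunicationTokenCredential } = require("@azure/communication-common");

// `fetchTokenFromMyServer` is a placeholder for your own helper that calls your backend,
// which in turn uses the Communication Identity API to issue a fresh user access token.
const tokenCredential = new AzureCommunicationTokenCredential({
  tokenRefresher: async () => fetchTokenFromMyServer("<user-id>"),
  refreshProactively: true, // refresh shortly before the current token expires
  token: initialToken       // optional: a token you already have, to skip the initial fetch
});
```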
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md
Previously updated : 06/30/2021 Last updated : 10/10/2022
Development of Calling and Chat applications can be accelerated by the [Azure C
| Email | [REST](/rest/api/communication/Email) | Service|Send and get status on Email messages| | Chat | [REST](/rest/api/communication/) with proprietary signaling | Client & Service | Add real-time text chat to your applications | | Calling | Proprietary transport | Client | Voice, video, screen-sharing, and other real-time communication |
-| Calling Server | [REST](/rest/api/communication/callautomation/server-calls) | Service| Make and manage calls, play audio, and configure recording |
| Network Traversal | [REST](./network-traversal.md)| Service| Access TURN servers for low-level data transport | | UI Library | N/A | Client | Production-ready UI components for chat and calling apps |
communication-services Known Limitations Acs Telephony https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/known-limitations-acs-telephony.md
This article provides information about limitations and known issues related to
## Azure Communication Services direct routing known limitations -- Anonymous calling isn't supported
+- Anonymous calling isn't supported.
- will be fixed in GA release - Different set of Media Processors (MP) is used with different IP addresses. Currently [any Azure IP address](./direct-routing-infrastructure.md#media-traffic-ip-and-port-ranges) can be used for media connection between Azure MP and Session Border Controller (SBC). - will be fixed in GA release-- Azure Communication Services SBC Fully Qualified Domain Name (FQDN) must be different from Teams Direct Routing SBC FQDN
+- Azure Communication Services SBC Fully Qualified Domain Name (FQDN) must be different from Teams Direct Routing SBC FQDN.
+- One SBC FQDN can be connected to a single resource only. Unique SBC FQDNs are required for pairing to different resources.
- Wildcard SBC certificates require extra workaround. Contact Azure support for details. - will be fixed in GA release-- Media bypass/optimization isn't supported-- No indication of SBC connection status/details in Azure portal
+- Media bypass/optimization isn't supported.
+- No indication of SBC connection status/details in Azure portal.
- will be fixed in GA release-- Azure Communication Services direct routing isn't available in Government Clouds-- Multi-tenant trunks aren't supported-- Location-based routing isn't supported-- No quality dashboard is available for customers-- Enhanced 911 isn't supported -- PSTN numbers missing from Call Summary logs
+- Azure Communication Services direct routing isn't available in Government Clouds.
+- Multi-tenant trunks aren't supported.
+- Location-based routing isn't supported.
+- No quality dashboard is available for customers.
+- Enhanced 911 isn't supported.
+- PSTN numbers missing from Call Summary logs.
## Next steps
communication-services Call Automation Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-automation-apis.md
- Title: Azure Communication Services Call Automation API overview-
-description: Provides an overview of the Call Automation feature and APIs.
----- Previously updated : 06/30/2021----
-# Call Automation overview
--
-Call Automation APIs enable you to access voice and video calling capabilities from **services**. You can use these APIs to create service applications that drive automated outbound reminder calls for appointments or provide proactive notifications for events like power outages or wildfires. Service applications that join a call can monitor updates such as participants joining or leaving, allowing you to implement rich reporting and logging capabilities.
-
-![in and out-of-call apps](../media/call-automation-apps.png)
-
-Call Automation APIs are provided for both in-call (application-participant or app-participant) actions, and out-of-call actions. Two key differences between these sets of APIs are:
-- In-call APIs require your application to join the call as a participant. App-participants are billed at [standard PSTN and VoIP rates](https://azure.microsoft.com/pricing/details/communication-services/).-- In-call APIs use the `callConnectionId` associated with app-participant, while Out-of-Call APIs use the `serverCallId` associated with the call instance. -
-## Use cases
-| Use Case | In-Call (App-participant) | Out-of-Call |
-| | - | - |
-| Place or receive 1:1 calls between bot and human participants | X | |
-| Play audio prompts and listen for responses | X | |
-| Monitor in-call events | X | |
-| Create calls with multiple participants | X | |
-| Get call participants and participant details | X | |
-| Add or remove call participants | X | X |
-| Server-side actions in peer-to-peer calls (e.g. recording) | | X |
-| Play audio announcements to all participants | | X |
-| Start and manage call recording | | X |
-
-## In-Call (App-Participant) APIs
-
-In-call APIs enable an application to take actions in a call as an app-participant. When an application answers or joins a call, a `callConnectionId` is assigned, which is used for in-call actions such as:
-- Add or remove call participants-- Play audio prompts and listen for DTMF responses-- Listen to call roster updates and events-- Hang-up a call-
-![in-call application](../media/call-automation-in-call.png)
-
-### In-Call Events
-Event notifications are sent as JSON payloads to the calling application via the `callbackUri` set when joining the call. Events sent to in-call app-participants are:
-- CallState events (Establishing, Established, Terminating, Terminated)-- DTMF received-- Play audio result-- Cancel media processing-- Invite participant result-- Participants updated-
-## Out-of-Call APIs
-Out-of-Call APIs enable you to perform actions on a call or meeting without having an app-participant present. Out-of-Call APIs use the `serverCallId`, provided by either the Calling Client SDK or generated when a call is created via the Call Automation APIs. Because out-of-call APIs do not require an app-participant, they are useful for implementing server-side business logic into peer-to-peer calls. For example, consider a support call scenario that started as a peer-to-peer call, and the agent (participant B) wants to bring in a subject matter expert to assist. Participant B triggers an event in the client interface for a server application to identify an available subject matter expert and invite them to the call. The end-state of the flow shown below is a group call with three human participants.
-
-![out-of-call application](../media/call-automation-out-of-call.png)
-
-Out-of-call APIs are available for actions such as:
-- Add or remove call participants-- Start/stop/pause/resume call recording
-
-### Out-of-Call Events
-Event notifications are sent as JSON payloads to the calling application via the `callbackUri` providing in the originating API call. Actions with corresponding out-of-call events are:
-- Call Recording (Start, Stop, Pause, Resume)-- Invite participant result-
-## Next steps
-Check out the [Call Automation Quickstart](../../quickstarts/voice-video-calling/call-automation-api-sample.md) to learn more.
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-automation.md
> Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly. > Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
-Azure Communication Services Call Automation provides developers the ability to build server-based, intelligent call workflows for voice and PSTN channels. The SDKs, available for .NET and Java, uses an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, etc.) to steer and control calls based on your business logic.
+Azure Communication Services Call Automation provides developers the ability to build server-based, intelligent call workflows for voice and PSTN channels. The SDKs, available for .NET and Java, use an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, etc.) to steer and control calls based on your business logic.
## Common Use Cases
Some of the common use cases that can be built using Call Automation include:
ACS Call Automation can be used to build calling workflows for customer service scenarios, as depicted in the high-level architecture below. You can answer inbound calls or make outbound calls. Execute actions like playing a welcome message and connecting the customer to a live agent on an ACS Calling SDK client app to answer the incoming call request. With support for ACS PSTN or Direct Routing, you can then connect this workflow back to your contact center.
-![Diagram of calling flow for a customer service scenario.](./call-automation-architecture.png)
+![Diagram of calling flow for a customer service scenario.](./media/call-automation-architecture.png)
## Capabilities
The following list presents the set of features that are currently available in
Call Automation uses a REST API interface to receive requests and provide responses to all actions performed within the service. Due to the asynchronous nature of calling, most actions will have corresponding events that are triggered when the action completes successfully or fails.
-Event Grid ΓÇô Azure Communication Services uses Event Grid to deliver the IncomingCall event. This event can be triggered:
-- by an inbound PSTN call to a number you've acquired in the portal,-- by connecting your telephony infrastructure using an SBC,-- for one-on-one calls between Communication Service users,-- when a Communication Services user is added to an existing call (group call),-- an existing 1:1 call is transferred to a Communication Service user.
+Azure Communication Services uses Event Grid to deliver the [IncomingCall event](../../concepts/voice-video-calling/incoming-call-notification.md) and HTTPS Webhooks for all mid-call action callbacks.
-Web hooks ΓÇô Calling Automation SDKs use standard web hook HTTP/S callbacks for call state change events and responses to mid-call actions.
-
-![Screenshot of flow for incoming call and actions.](./action-architecture.png)
+![Screenshot of flow for incoming call and actions.](./media/action-architecture.png)
## Call Actions
The Call Automation events are sent to the web hook callback URI specified when
## Next Steps > [!div class="nextstepaction"]
-> [Get started with Call Automation](./../../quickstarts/voice-video-calling/Callflows-for-customer-interactions.md)
+> [Get started with Call Automation](./../../quickstarts/voice-video-calling/Callflows-for-customer-interactions.md)
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
Call Recording provides a set of APIs to start, stop, pause and resume recording. These APIs can be accessed from server-side business logic or via events triggered by user actions. Recorded media output is in MP4 Audio+Video format, which is the same format that Teams uses to record media. Notifications related to media and metadata are emitted via Event Grid. Recordings are stored for 48 hours on built-in temporary storage for retrieval and movement to a long-term storage solution of choice. Call Recording supports all Azure Communication Services data regions.
-![Call recording concept diagram](../media/call-recording-concept.png)
+![Call recording concept diagram](../media/call-recording-conceptual-diagram.png)
## Media output types Call recording currently supports mixed audio+video MP4 and mixed audio MP3/WAV output formats in Public Preview. The mixed audio+video output media matches meeting recordings produced via Microsoft Teams recording. | Content Type | Content Format | Channel Type | Video | Audio | | :-- | :- | :-- | :- | : |
-| audio + video | mp4 | mixed | 1920x1080 8 FPS video of all participants in default tile arrangement | 16kHz mp4a mixed audio of all participants |
-| audio| mp3/wav | mixed | N/A | 16kHz mp3/wav mixed audio of all participants |
-| audio| wav | unmixed | N/A | 16kHz wav, 0-5 channels, 1 for each participant |
+| audio + video | mp4 | mixed | 1920x1080 eight (8) FPS video of all participants in default tile arrangement | 16 kHz mp4 mixed audio of all participants |
+| audio| mp3/wav | mixed | N/A | 16 kHz mp3/wav mixed audio of all participants |
+| audio| wav | unmixed | N/A | 16 kHz wav, 0-5 channels, 1 for each participant |
## Channel types > [!NOTE]
Call recording currently supports mixed audio+video MP4 and mixed audio MP3/WAV
| **Unmixed audio** | wav | Single file, up to 5 wav channels | Quality Assurance Analytics | **Private Preview** | ## Run-time Control APIs
-Run-time control APIs can be used to manage recording via internal business logic triggers, such as an application creating a group call and recording the conversation, or from a user-triggered action that tells the server application to start recording. Call Recording APIs are [Out-of-Call APIs](./call-automation-apis.md#out-of-call-apis), using the `serverCallId` to initiate recording. When creating a call, a `serverCallId` is returned via the `Microsoft.Communication.CallLegStateChanged` event after a call has been established. The `serverCallId` can be found in the `data.serverCallId` field. See our [Call Recording Quickstart Sample](../../quickstarts/voice-video-calling/call-recording-sample.md) to learn about retrieving the `serverCallId` from the Calling Client SDK. A `recordingOperationId` is returned when recording is started, which is then used for follow-on operations like pause and resume.
+Run-time control APIs can be used to manage recording via internal business logic triggers, such as an application creating a group call and recording the conversation. Also, recordings can be triggered by a user action that tells the server application to start recording. Call Recording APIs are [Out-of-Call APIs](./call-automation-apis.md#out-of-call-apis), using the `serverCallId` to initiate recording. Once a call is created, a `serverCallId` is returned via the `Microsoft.Communication.CallLegStateChanged` event after a call has been established. The `serverCallId` can be found in the `data.serverCallId` field. See our [Call Recording Quickstart Sample](../../quickstarts/voice-video-calling/call-recording-sample.md) to learn about retrieving the `serverCallId` from the Calling Client SDK. A `recordingOperationId` is returned when recording is started, which is then used for follow-on operations like pause and resume.
| Operation | Operates On | Comments | | :-- | : | :-- | | Start Recording | `serverCallId` | Returns `recordingOperationId` | | Get Recording State | `recordingOperationId` | Returns `recordingState` | | Pause Recording | `recordingOperationId` | Pausing and resuming call recording enables you to skip recording a portion of a call or meeting, and resume recording to a single file. |
-| Resume Recording | `recordingOperationId` | Resumes a Paused a recording operation. Content is included in the same file as content from prior to pausing. |
+| Resume Recording | `recordingOperationId` | Resumes a Paused recording operation. Content is included in the same file as content from prior to pausing. |
| Stop Recording | `recordingOperationId` | Stops recording, and initiates final media processing for file download. |
Run-time control APIs can be used to manage recording via internal business logi
> [!NOTE] > Azure Communication Services provides short term media storage for recordings. **Recordings will be available to download for 48 hours.** After 48 hours, recordings will no longer be available.
-An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated` is published when a recording is ready for retrieval, typically a few minutes after the recording process has completed (e.g. meeting ended, recording stopped). Recording event notifications include `contentLocation` and `metadataLocation`, which are used to retrieve both recorded media and a recording metadata file.
+An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated` is published when a recording is ready for retrieval, typically a few minutes after the recording process has completed (for example, meeting ended, recording stopped). Recording event notifications include `contentLocation` and `metadataLocation`, which are used to retrieve both recorded media and a recording metadata file.
### Notification Schema Reference ```typescript
An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated`
## Regulatory and privacy concerns
-Many countries and states have laws and regulations that apply to the recording of PSTN, voice, and video calls, which often require that users consent to the recording of their communications. It is your responsibility to use the call recording capabilities in compliance with the law. You must obtain consent from the parties of recorded communications in a manner that complies with the laws applicable to each participant.
+Many countries and states have laws and regulations that apply to call recording. PSTN, voice, and video calls often require that users consent to the recording of their communications. It is your responsibility to use the call recording capabilities in compliance with the law. You must obtain consent from the parties of recorded communications in a manner that complies with the laws applicable to each participant.
Regulations around the maintenance of personal data require the ability to export user data. In order to support these requirements, recording metadata files include the participantId for each call participant in the `participants` array. You can cross-reference the MRIs in the `participants` array with your internal user identities to identify participants in a call. An example of a recording metadata file is provided below for reference.
communication-services Incoming Call Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/incoming-call-notification.md
+
+ Title: Incoming call concepts
+
+description: Learn about Azure Communication Services IncomingCall notification
++++ Last updated : 09/26/2022++++
+# Incoming call concepts
+
+> [!IMPORTANT]
+> Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly. Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
+
+Azure Communication Services Call Automation provides developers the ability to build applications that can make and receive calls. Azure Communication Services relies on Event Grid subscriptions to deliver each `IncomingCall` event, so setting up your environment to receive these notifications is critical to your application being able to redirect or answer a call.
+
+## Calling scenarios
+
+First, we need to define which scenarios can trigger an `IncomingCall` event. The primary concept to remember is that a call to an Azure Communication Services identity or Public Switched Telephone Network (PSTN) number will trigger an `IncomingCall` event. The following are examples of these resources:
+
+1. An Azure Communication Services identity
+2. A PSTN phone number owned by your Azure Communication Services resource
+
+Given the above examples, the following scenarios will trigger an `IncomingCall` event sent to Event Grid:
+
+| Source | Destination | Scenario(s) |
+| | -- | -- |
+| Azure Communication Services identity | Azure Communication Services identity | Call, Redirect, Add Participant, Transfer |
+| Azure Communication Services identity | PSTN number owned by your Azure Communication Services resource | Call, Redirect, Add Participant |
+| Public PSTN | PSTN number owned by your Azure Communication Services resource | Call, Redirect, Add Participant, Transfer |
+
+> [!NOTE]
+> An important concept to remember is that an Azure Communication Services identity can be a user or application. Although there is no ability to explicitly assign an identity to a user or application in the platform, this can be done by your own application or supporting infrastructure. Please review the [identity concepts guide](../identity-model.md) for more information on this topic.
+
+## Receiving an incoming call notification from Event Grid
+
+Since Azure Communication Services relies on Event Grid to deliver the `IncomingCall` notification through a subscription, how you choose to handle the notification is up to you. Additionally, since the Call Automation API relies specifically on Webhook callbacks for events, a common Event Grid subscription used would be a 'Webhook'. However, you could choose any one of the available subscription types offered by the service.
+
+This architecture has the following benefits:
+
+- Using Event Grid subscription filters, you can route the `IncomingCall` notification to specific applications.
+- PSTN number assignment and routing logic can exist in your application versus being statically configured online.
+- As identified in the above [calling scenarios](#calling-scenarios) section, your application can be notified even when users make calls between each other. You can then combine this scenario together with the [Call Recording APIs](../voice-video-calling/call-recording.md) to meet compliance needs.
+
+To subscribe to the `IncomingCall` notification from Event Grid, [follow this how-to guide](../../how-tos/call-automation-sdk/subscribe-to-incoming-call.md).
+
+## Call routing in Call Automation or Event Grid
+
+You can use [advanced filters](../../../event-grid/event-filtering.md) in your Event Grid subscription to subscribe to an `IncomingCall` notification for a specific source/destination phone number or Azure Communication Services identity and send it to an endpoint such as a Webhook subscription. That endpoint application can then make a decision to **redirect** the call using the Call Automation SDK to another Azure Communication Services identity or to the PSTN.
+
+## Number assignment
+
+Since the `IncomingCall` notification doesn't have a specific destination other than the Event Grid subscription you've created, you're free to associate any particular number to any endpoint in Azure Communication Services. For example, if you acquired a PSTN phone number of `+14255551212` and want to assign it to a user with an identity of `375f0e2f-e8db-4449-9bf7-2054b02e42b4` in your application, you'll maintain a mapping of that number to the identity. When an `IncomingCall` notification is sent matching the phone number in the **to** field, you'll invoke the `Redirect` API and supply the identity of the user. In other words, you maintain the number assignment within your application and route or answer calls at runtime.
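As an illustration only, the number-to-identity mapping and the redirect decision could be sketched as follows. `getToPhoneNumber` and `redirectCall` are hypothetical helpers standing in for your own code that parses the `IncomingCall` event payload and invokes the Call Automation redirect operation; the number and identity values mirror the example above.

```javascript
// Illustrative sketch only. `getToPhoneNumber` and `redirectCall` are hypothetical
// stand-ins for your own event parsing and Call Automation redirect code.
const numberToIdentity = {
  "+14255551212": "375f0e2f-e8db-4449-9bf7-2054b02e42b4"
};

async function handleIncomingCall(event) {
  const toPhoneNumber = getToPhoneNumber(event);   // read the "to" field from the notification
  const targetIdentity = numberToIdentity[toPhoneNumber];
  if (targetIdentity) {
    await redirectCall(event, targetIdentity);     // redirect the call to the mapped identity
  }
}
```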
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/play-action.md
The play action provided through the call automation SDK allows you to play audio prompts to participants in the call. This action can be accessed through the server-side implementation of your application. The play action allows you to provide ACS access to your pre-recorded audio files with support for authentication.
-``Note: ACS currently only supports files of WAV, mono, 16KHz format.``
+> [!NOTE]
+> ACS currently only supports WAV files formatted as mono channel audio recorded at 16 kHz.
The Play action allows you to provide access to a pre-recorded audio file of WAV format that ACS can access with support for authentication.
Your application might want to play some sort of announcement when a participant
In scenarios with IVRs and virtual assistants, you can use your application or bots to play audio prompts to callers. This prompt can be in the form of a menu to guide the caller through their interaction. ### Hold music
-The play action can also be used to play hold music for callers. This action can be set up in a loop so that the music keeps playing untill an agent is available to assist the caller.
+The play action can also be used to play hold music for callers. This action can be set up in a loop so that the music keeps playing until an agent is available to assist the caller.
### Playing compliance messages As part of compliance requirements in various industries, vendors are expected to play legal or compliance messages to callers, for example, ΓÇ£This call will be recorded for quality purposesΓÇ¥. ## How the play action workflow looks
-![Screenshot of flow for play action.](./play-action-flow.png)
+![Screenshot of flow for play action.](./media/play-action-flow.png)
## Known Issues/Limitations - Play action isn't enabled to work with Teams Interoperability. - Play won't support loop for targeted playing. ## What's coming up next for Play action
-As we invest more into this functionality, we recommend developers sign up to our TAP program that allows you to get early access to the newest feature releases. Over the coming months the play action will add new capabilities that use our integration with Azure Cognitive Services to provide AI capabilities such as Text-to-Speech and fine tuning Text-to-Speech with SSML. With these capabilities you can improve customer interactions to create more personalized messages.
+As we invest more into this functionality, we recommend developers sign up for our TAP program, which gives you early access to the newest feature releases. Over the coming months, the play action will add new capabilities that use our integration with Azure Cognitive Services to provide AI capabilities such as Text-to-Speech and fine-tuning Text-to-Speech with SSML. With these capabilities, you can improve customer interactions to create more personalized messages.
## Next Steps Check out the [Play action quickstart](../../quickstarts/voice-video-calling/Play-Action.md) to learn more.
communication-services Redirect Inbound Telephony Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation-sdk/redirect-inbound-telephony-calls.md
+
+ Title: Azure Communication Services Call Automation how-to for redirecting inbound PSTN calls
+
+description: Provides a how-to for redirecting inbound telephony calls with Call Automation.
++++ Last updated : 09/06/2022+++
+zone_pivot_groups: acs-csharp-java
++
+# Redirect inbound telephony calls with Call Automation
+> [!IMPORTANT]
+> Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
+> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
+
+Get started with Azure Communication Services by using the Call Automation SDKs to build automated calling workflows that listen for and manage inbound calls placed to a phone number or received via Direct Routing.
+++
+## Testing the application
+
+1. Place a call to the number you acquired in the Azure portal (see prerequisites above).
+2. Your Event Grid subscription to the IncomingCall should execute and call your web server.
+3. The call will be redirected to the endpoint(s) you specified in your application.
+
+Since this call flow redirects the call instead of answering it, pre-call web hook callbacks that notify your application that the other endpoint accepted the call aren't published.
+
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+- Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md) and its features.
+- Learn more about [Play action](../../concepts/voice-video-calling/play-Action.md).
+- Learn how to build a [call workflow](../../quickstarts/voice-video-calling/callflows-for-customer-interactions.md) for a customer support scenario.
communication-services Subscribe To Incoming Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation-sdk/subscribe-to-incoming-call.md
+
+ Title: Subscribe to IncomingCall for Call Automation
+
+description: Learn how to subscribe to the IncomingCall event from Event Grid for the Call Automation SDK
+++++ Last updated : 09/26/2022+++
+# Subscribe to IncomingCall for Call Automation
+
+> [!IMPORTANT]
+> Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly. Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
+
+As described in the [Incoming Call concepts guide](../../concepts/voice-video-calling/incoming-call-notification.md), your Event Grid subscription to the `IncomingCall` notification is critical to using the Call Automation SDK for scenarios involving answering, redirecting, or rejecting a call.
+
+## Choosing the right subscription
+
+Event Grid offers several choices for receiving events, including Azure Functions, Azure Service Bus, and simple HTTP/S web hooks. Because the Call Automation platform relies on web hook callbacks for mid-call events such as `CallConnected`, `CallTransferAccepted`, or `PlayCompleted`, the optimal choice is a **Webhook** subscription, since you need a web API for the mid-call events anyway.
+
+> [!IMPORTANT]
+> When using a Webhook subscription, you must validate your web service endpoint as described in [these Event Grid instructions](../../../event-grid/webhook-event-delivery.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription.
+- A deployed [Communication Service resource](../../quickstarts/create-communication-resource.md) and a valid connection string
+- The [ARMClient application](https://github.com/projectkudu/ARMClient), used to configure the Event Grid subscription.
+
+## Configure an Event Grid subscription
+
+> [!NOTE]
+> The following steps will not be necessary once the `IncomingCall` event is published to the Event Grid portal.
+
+1. Locate and copy the following values to use in the armclient command-line statement below:
+    - Azure subscription ID
+    - Resource group name
+    - Communication Service resource name
+
+    The following screenshot shows the required fields:
+
+    :::image type="content" source="./media/portal.png" alt-text="Screenshot of Communication Services resource page on Azure portal.":::
+
+2. Determine the local development HTTP port used by your web service application.
+3. Start your web service, making sure you've followed the steps outlined in the above note regarding validation of your Webhook from Event Grid.
+4. Since the `IncomingCall` event isn't yet published in the Azure portal, you must run the following command-line statements to configure your subscription:
+
+ ``` console
+ armclient login
+
+ armclient put "/subscriptions/<your_azure_subscription_guid>/resourceGroups/<your_resource_group_name>/providers/Microsoft.Communication/CommunicationServices/<your_acs_resource_name>/providers/Microsoft.EventGrid/eventSubscriptions/<subscription_name>?api-version=2022-06-15" "{'properties':{'destination':{'properties':{'endpointUrl':'<your_ngrok_uri>'},'endpointType':'WebHook'},'filter':{'includedEventTypes': ['Microsoft.Communication.IncomingCall']}}}" -verbose
+
+ ```
+
+### How do you know it worked?
+
+1. Click on the **Events** section of your Azure Communication Services resource.
+2. Locate your subscription and check that the **Provisioning state** says **Succeeded**.
+
+ :::image type="content" source="./media/subscription-validation.png" alt-text="Event Grid Subscription Validation":::
+
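+As an alternative check, you can query the subscription's provisioning state from the command line with the Azure CLI. The placeholders below match the ones used in the armclient statement above; adjust them for your environment.
+
+``` console
+az eventgrid event-subscription show \
+  --name <subscription_name> \
+  --source-resource-id "/subscriptions/<your_azure_subscription_guid>/resourceGroups/<your_resource_group_name>/providers/Microsoft.Communication/CommunicationServices/<your_acs_resource_name>" \
+  --query provisioningState
+```
+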
+>[!IMPORTANT]
+> If you use the Azure portal to modify your Event Grid subscription, whether by adding or removing an event or by modifying any aspect of the subscription such as an advanced filter, the `IncomingCall` subscription will be removed. This is a known issue that only exists during private preview. If this happens, use the above command-line statements to recreate your subscription.
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Build a Call Automation application](../../quickstarts/voice-video-calling/callflows-for-customer-interactions.md)
+> [Redirect an inbound PSTN call](../../how-tos/call-automation-sdk/redirect-inbound-telephony-calls.md)
communication-services Manage Inbound Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/manage-inbound-calls.md
- Title: Azure Communication Services Call Automation quickstart for PSTN calls-
-description: Provides a quickstart for managing inbound telephony calls with Call Automation.
---- Previously updated : 09/06/2022---
-zone_pivot_groups: acs-csharp-java
--
-# Quickstart: Manage inbound telephony calls with Call Automation
-> [!IMPORTANT]
-> Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
-> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
-
-Get started with Azure Communication Services by using the Call Automation SDKs to build automated calling workflows that listen for and manage inbound calls placed to a phone number or received via Direct Routing.
---
-## Clean up resources
-
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
-
-## Next steps
--- Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md) and its features. -- Learn more about [Play action](../../concepts/voice-video-calling/play-Action.md).-- Learn how to build a [call workflow](../voice-video-calling/callflows-for-customer-interactions.md) for a customer support scenario.
communication-services Call Automation Api Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/call-automation-api-sample.md
- Title: Azure Communication Services Call Automation API quickstart-
-description: Provides a quickstart sample for the Call Automation APIs.
---- Previously updated : 06/30/2021---
-zone_pivot_groups: acs-csharp-java
---
-# Quickstart: Use the call automation APIs
---
-Get started with Azure Communication Services by using the Communication Services Calling server SDKs to build an automated call routing solution.
---
-## Clean up resources
-
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
-
-## Next steps
-
-For more information, see the following articles:
--- Check out our [calling hero sample](../../samples/calling-hero-sample.md)-- Learn about [Calling SDK capabilities](./getting-started-with-calling.md)-- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Callflows For Customer Interactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/callflows-for-customer-interactions.md
Title: Azure Communication Services Call Automation API tutorial for VoIP calls-
-description: Tutorial on how to use Call Automation to build call flow for customer interactions.
+ Title: Build a customer interaction workflow using Call Automation
+
+description: Quickstart on how to use Call Automation to answer a call, recognize DTMF input, and add a participant to a call.
zone_pivot_groups: acs-csharp-java
-# Tutorial: Build call workflows for customer interactions
+# Build a customer interaction workflow using Call Automation
> [!IMPORTANT]
-> Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
-> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
-
-In this tutorial, you'll learn how to build applications that use Azure Communication Services Call Automation to handle common customer support scenarios, such as:
-- receiving notifications for incoming calls to a phone number using Event Grid-- answering the call and playing an audio file using Call Automation SDK-- adding a communication user to the call using Call Automation SDK. This user can be a customer service agent who uses a web application built using Calling SDKs to connect to Azure Communication Services
+> Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly. Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
+In this quickstart, you'll learn how to build an application that uses the Azure Communication Services Call Automation SDK to handle the following scenario:
+- handling the `IncomingCall` event from Event Grid
+- answering a call
+- playing an audio file
+- adding a communication user to the call, such as a customer service agent who uses a web application built with the Calling SDKs to connect to Azure Communication Services
::: zone pivot="programming-language-csharp" [!INCLUDE [Call flows for customer interactions with .NET](./includes/call-automation/Callflow-for-customer-interactions-csharp.md)]
In this tutorial, you'll learn how to build applications that use Azure Communic
[!INCLUDE [Call flows for customer interactions with Java](./includes/call-automation/Callflow-for-customer-interactions-java.md)] ::: zone-end
+## Testing the application
+
+1. Place a call to the number you acquired in the Azure portal (see prerequisites above).
+2. Your Event Grid subscription to the `IncomingCall` should execute and call your web server.
+3. The call will be answered, and an asynchronous web hook callback will be sent to the NGROK callback URI.
+4. When the call is connected, a `CallConnected` event, wrapped in a `CloudEvent` schema, will be sent to your web server and can be deserialized using the Call Automation SDK parser. At this point, the application will request audio to be played and ask for input from a targeted phone number.
+5. When the input has been received and recognized, the web server will make a request to add a participant to the call.
+ ## Clean up resources If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources). ## Next steps - Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md) and its features. -- Learn how to [manage inbound telephony calls](../telephony/manage-inbound-calls.md) with Call Automation.
+- Learn how to [redirect inbound telephony calls](../../how-tos/call-automation-sdk/redirect-inbound-telephony-calls.md) with Call Automation.
- Learn more about [Play action](../../concepts/voice-video-calling/play-action.md). - Learn more about [Recognize action](../../concepts/voice-video-calling/recognize-action.md).
communication-services Hmac Header Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/hmac-header-tutorial.md
Title: Learn how to sign an HTTP request with C#
+ Title: Learn how to sign an HTTP request with HMAC
-description: Learn how to sign an HTTP request for Azure Communication Services via C#.
+description: Learn how to sign an HTTP request for Azure Communication Services using HMAC.
confidential-computing Confidential Node Pool Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-node-pool-aks.md
Title: Confidential VM node pools support on AKS with AMD SEV-SNP confidential VMs - Preview
+ Title: Confidential VM node pools support on AKS with AMD SEV-SNP confidential VMs
description: Learn about confidential node pool support on AKS with AMD SEV-SNP confidential VMs Previously updated : 8/1/2022 Last updated : 10/04/2022 -+
-# Confidential VM node pool support on AKS with AMD SEV-SNP confidential VMs - Preview
+# Confidential VM node pool support on AKS with AMD SEV-SNP confidential VMs
[Azure Kubernetes Service (AKS)](../aks/index.yml) makes it simple to deploy a managed Kubernetes cluster in Azure. In AKS, nodes of the same configuration are grouped together into node pools. These node pools contain the underlying VMs that run your applications.
-AKS now supports confidential VM node pools with Azure confidential VMs. These confidential VMs are the [generally available DCasv5 and ECasv5 confidential VM-series](https://aka.ms/AMD-ACC-VMs-GA-Inspire-2022) utilizing 3rd Gen AMD EPYC<sup>TM</sup> processors with Secure Encrypted Virtualization-Secure Nested Paging ([SEV-SNP](https://www.amd.com/en/technologies/infinity-guard)) security features. To read more about this offering, head to our [announcement](https://aka.ms/ACC-AKS-AMD-SEV-SNP-Preview-Blog).
+AKS now supports confidential VM node pools with Azure confidential VMs. These confidential VMs are the [generally available DCasv5 and ECasv5 confidential VM-series](https://aka.ms/AMD-ACC-VMs-GA-Inspire-2022) utilizing 3rd Gen AMD EPYC<sup>TM</sup> processors with Secure Encrypted Virtualization-Secure Nested Paging ([SEV-SNP](https://www.amd.com/en/technologies/infinity-guard)) security features. To read more about this offering, [see the announcement](https://aka.ms/Ignite2022-CVM-Node-Pools-on-AKS-GA).
## Benefits+ Confidential node pools leverage VMs with a hardware-based Trusted Execution Environment (TEE). AMD SEV-SNP confidential VMs deny the hypervisor and other host management code access to VM memory and state, and add defense-in-depth protections against operator access. In addition to the hardened security profile, confidential node pools on AKS also enable: - Lift and Shift with full AKS feature support - to enable a seamless lift-and-shift of Linux container workloads - Heterogeneous Node Pools - to store sensitive data in a VM-level TEE node pool with memory encryption keys generated from the chipset itself
+- Cryptographically attest that your code will be executed on AMD SEV-SNP hardware with [an application to generate the hardware attestation report](https://github.com/Azure/confidential-computing-cvm-guest-attestation/blob/main/aks-linux-sample/cvm-attestation.yaml).
:::image type="content" source="media/confidential-vm-node-pools-on-aks/snp-on-aks-architecture-image.png" alt-text="Graphic of VM nodes in AKS with encrypted code and data in confidential VM node pools 1 and 2, on top of the hypervisor":::
If you have questions about container offerings, please reach out to <acconaks@m
## Next steps - [Deploy a confidential node pool in your AKS cluster](../aks/use-cvm.md)-- Learn more about sizes and specs for [general purpose](../virtual-machines/dcasv5-dcadsv5-series.md) and [memory-optimized](../virtual-machines/ecasv5-ecadsv5-series.md) confidential VMs.
+- Learn more about sizes and specs for [general purpose](../virtual-machines/dcasv5-dcadsv5-series.md) and [memory-optimized](../virtual-machines/ecasv5-ecadsv5-series.md) confidential VMs.
confidential-computing Guest Attestation Confidential Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/guest-attestation-confidential-vms.md
+
+ Title: What is guest attestation for confidential VMs?
+description: Learn how you can use guest attestation for assurance that your software inside an Azure confidential virtual machine runs on the expected hardware platform.
+++++ Last updated : 09/29/2022+++
+# What is guest attestation for confidential VMs?
+
+[Confidential virtual machines (confidential VMs)](confidential-vm-overview.md) are an offering within Azure Confidential Computing. You can use this offering when you have stringent security and confidentiality requirements for your VMs.
+
+The *guest attestation* feature helps you to confirm that a confidential VM runs on a hardware-based trusted execution environment (TEE) with security features enabled for isolation and integrity.
+
+Guest attestation gives you and the relying party increased confidence that the workloads are running on a hardware-based TEE.
+
+You can use guest attestation to:
+
+- Make sure that the confidential VM runs on the expected confidential hardware platform (AMD SEV-SNP)
+- Check that a confidential VM has secure boot enabled. This setting protects lower layers of the VM (firmware, boot loader, kernel) from malware (rootkits, bootkits).
+- Get evidence for a relying party that the confidential VM runs on confidential hardware
++
+## Scenarios
+
+The major components and services involved in guest attestation are:
+
+- The workload
+- The guest attestation library
+- Hardware (for reporting). For example, AMD SEV-SNP.
+- The [Microsoft Azure Attestation service](../attestation/overview.md)
+- JSON web token response
+
+ This diagram shows an Azure confidential VM that contains a customer application and the C++ guest attestation library. First, the customer application requests the attestation from the library. Next, the library gets the AMD-SEVSNP report from the hardware. Then, the library attests the AMD-SEVSNP report to the Microsoft Azure Attestation service. Next, the attestation service returns a JSON web token response to the library. Last, the library returns the JSON web token response to the customer application. An extract of the JSON web token report shows the parameter "x-ms-attestation-type" value as "sevsnpvm".
+
+Typical operational scenarios incorporate the client library to make attestation requests as follows.
+
+### Scenario: request in separate workload
+
+In this example scenario, attestation requests are made in a separate workload. The requests determine if the confidential VM runs on the correct hardware platform before a workload is launched.
+
+A workload (**Platform checker client** in the diagram) must integrate with the attestation library and run inside the confidential VM to do the attestation. After the program makes a request to the attestation library, the workload parses the response to determine if the VM runs on the correct hardware platform and/or secure boot setting before launching the sensitive workload.
+
+ This diagram shows a customer's confidential VM that contains a customer-provided workload, platform checker client, and Microsoft's guest attestation library. First, the platform checker client requests attestation from the library. The library gets a platform report from the hardware, then attests the report to the Microsoft Azure Attestation service. The attestation service returns the JSON web token response to the library, which returns the response to the platform checker client. Last, the platform checker client launches the customer workload. An extract of the JSON web token response shows the parameter "x-ms-attestation-type" value as "sevsnpvm", a virtual TPM claim parameter "kid" value of "TpmEphermeralEncryptionKey", and a secure boot claim with the parameter "secureboot" value of "true".
+
+This scenario is similar to the [following scenario](#scenario-request-from-inside-workload). The main difference is how each scenario achieves the same goal based on the location of the request.
+
+### Scenario: request from inside workload
+
+In this example scenario, attestation requests are made inside the workload at the start of the program. The requests check if the confidential VM runs on the correct hardware platform before a workload is launched.
+
+This scenario is similar to the [previous scenario](#scenario-request-in-separate-workload). The main difference is how each scenario achieves the same goal based on the location of the request.
+
+The customer workload must integrate with the attestation library and run inside the confidential VM. After the customer workload makes a request to the attestation library, the customer workload parses the response to determine if the VM runs on the correct hardware platform and/or secure boot setting before fully setting up the sensitive workload.
+
+ This diagram shows a customer's confidential VM that contains a customer workload and Microsoft's guest attestation library. The workload makes an attestation request to the library. The library gets a platform report from the hardware, then attests that platform report to the Microsoft Azure Attestation service. The attestation service returns a JSON web token response to the library. The library then returns the JSON web token response to the customer workload. An extract of the JSON web token response shows the parameter "x-ms-attestation-type" value as "sevsnpvm", a virtual TPM claim parameter "kid" value of "TpmEphermeralEncryptionKey", and a secure boot claim with the parameter "secureboot" value of "true".
+
+### Scenario: relying party handshake
+
+In this example scenario, the confidential VM must prove that it runs on a confidential platform before a relying party will engage. The confidential VM presents an attestation token to the relying party to start the engagement.
+
+Some examples of engagements are:
+
+- The confidential VM wants secrets from a secret management service.
+- A client wants to make sure that the confidential VM runs on a confidential platform before revealing personal data to the confidential VM for processing.
+
+The following diagram shows the handshake between a confidential VM and the relying party.
+
+ This diagram shows a customer's confidential VM that contains a customer workload and Microsoft's guest attestation library. There's also another confidential VM or regular VM that contains the relying party, such as a secrets manager. The customer or the customer's customer provides this VM. The customer's workload requests the attestation from the library. The library then gets the platform report from the hardware, and attests that report to the Microsoft Azure Attestation service. The attestation service returns a JSON web token response to the library, which returns the response to the customer's workload. The workload returns the response to the relying party, which then shares sensitive information with the customer's confidential VM that contains the workload.
+
+The following sequence diagram further explains the relying party scenario. The request/response between the involved systems uses the guest attestation library APIs. The confidential VM interacts with the secrets manager to bootstrap itself by using the received secrets.
+
+ This diagram shows the secrets manager part of the relying party scenario from the previous diagram. The confidential VM gets the nonce from the secrets manager service. The confidential VM attests the API. The secret manager service attests the artifacts and nonce to the Microsoft Azure Attestation service. The attestation service returns a JSON web token to the confidential VM.
+
+ The secrets manager service also validates the JSON web token and nonce. Then, the secrets manager creates PFX, creates an AES key, encrypts PFX using the AES key. The encryption API also extracts the RSA key from the JSON web token and encrypts the AES key using an RSA key.
+
+ The secrets manager returns the encrypted PFX and AES keys to the confidential VM. The confidential VM decryption API then decrypts the AES key using a trusted platform module (TPM). Then, the confidential VM decrypts PFX using the AES key.
++
+## APIs
+
+Microsoft provides the guest attestation library with APIs to [perform attestations](#attest-api), and both [encrypt](#encrypt-api) and [decrypt](#decrypt-api) data. There's also an API to [reclaim memory](#free-api).
+
+You can use these APIs for the different [scenarios](#scenarios) described previously.
+
+### Attest API
+
+The **Attest** API takes the `ClientParameters` object as input and returns a decrypted attestation token. For example:
+
+```rest
+AttestationResult Attest([in] ClientParameters client_params,
+
+ [out] buffer jwt_token);
+```
+
+| Parameter | Information |
+| | -- |
+| `ClientParameters` (type: object) | Object that takes the version (type: `uint32_t`), attestation tenant URI (type: unsigned character), and client payload (type: unsigned character). The client payload is zero or more key-value pairs for any client or customer metadata that is returned in the response payload. The key-value pairs must be in JSON string format `"{\"key1\":\"value1\",\"key2\":\"value2\"}"`. For example, the attestation freshness key-value might look like `{\"Nonce\":\"011510062022\"}`. |
+| `buffer` | [JSON web token](#json-web-token) that contains attestation information. |
+
+The **Attest** API returns an `AttestationResult` (type: structure).
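+
+As a purely illustrative sketch (not library source code; the type and function names come from the signature and parameter table above, and the values are placeholders), a caller might drive the **Attest** API as follows:
+
+```rest
+// Hypothetical usage based on the signature above
+ClientParameters client_params = { version, attestation_uri, "{\"Nonce\":\"011510062022\"}" };
+buffer jwt_token;
+
+AttestationResult result = Attest(client_params, jwt_token);
+// On success, jwt_token holds the JSON web token described later in this article
+```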
+
+### Encrypt API
+
+The **Encrypt** API takes data to be encrypted and a JSON web token as input. The API encrypts the data using the public ephemeral key that is present in the JSON web token. For example:
+
+```rest
+AttestationResult Encrypt(
+
+ [enum] encryption_type,
+
+ [in] const unsigned char* jwt_token,
+
+ [in] const unsigned char* data,
+
+ [in] uint32_t data_size,
+
+ [out] unsigned char** encrypted_data,
+
+ [out] uint32_t* encrypted_data_size,
+
+ [out] unsigned char** encryption_metadata,
+
+ [out] uint32_t encryption_metadata_size);
+```
+
+| Parameter | Explanation |
+| | -- |
+| `encryption_type` | None. |
+| `const unsigned char* jwt_token` | [JSON web token](#json-web-token) that contains attestation information. |
+| `const unsigned char* data` | Data to be encrypted |
+| `uint32_t data_size` | Size of data to be encrypted. |
+| `unsigned char** encrypted_data` | Encrypted data. |
+| `uint32_t* encrypted_data_size` | Size of encrypted data. |
+| `unsigned char** encryption_metadata` | Encryption metadata. |
+| `uint32_t encryption_metadata_size` | Size of encryption metadata. |
+
+The **Encrypt** API returns an `AttestationResult` (type: structure).
+
+### Decrypt API
+
+The **Decrypt** API takes encrypted data as input and decrypts the data using the private ephemeral key that is sealed to the Trusted Platform Module (TPM). For example:
+
+```rest
+AttestationResult Decrypt([enum] encryption_type,
+
+ [in] const unsigned char* encrypted_data,
+
+ [in] uint32_t encrypted_data_size,
+
+ [in] const unsigned char* encryption_metadata,
+
+ [in] uint32_t encryption_metadata_size,
+
+ [out] unsigned char** decrypted_data,
+
+ [out] uint32_t decrypted_data_size);
+```
+
+| Parameter | Explanation |
+| | -- |
+| `encryption_type` | None. |
+| `const unsigned char* encrypted_data` | Data to be decrypted. |
+| `uint32_t encrypted_data_size` | Size of data to be decrypted. |
+| `const unsigned char* encryption_metadata` | Encryption metadata. |
+| `uint32_t encryption_metadata_size` | Size of encryption metadata. |
+| `unsigned char** decrypted_data` | Decrypted data. |
+| `uint32_t decrypted_data_size` | Size of decrypted data. |
+
+The **Decrypt** API returns an `AttestationResult` (type: structure).
+
+### Free API
+
+The **Free** API reclaims memory that is held by data. For example:
+
+```rest
+Free([in] buffer data);
+```
+
+| Parameter | Explanation |
+| | -- |
+| `data` | Reclaim memory held by data. |
+
+The **Free** API doesn't return anything.
+
+## Error codes
+
+The APIs can return the following error codes:
+
+| Error code | Description |
+| - | -- |
+| 1 | Error initializing failure. |
+| 2 | Error parsing response. |
+| 3 | [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) token not found. |
+| 4 | Request exceeded retries. |
+| 5 | Request failed. |
+| 6 | Attestation failed. |
+| 7 | Send request failed. |
+| 8 | Invalid input parameter. |
+| 9 | Attestation parameters validation failed. |
+| 10 | Memory allocation failed. |
+| 11 | Failed to get operating system (OS) information. |
+| 12 | TPM internal failure. |
+| 13 | TPM operation failed. |
+| 14 | JSON web token decryption failed. |
+| 15 | JSON web token decryption TPM error. |
+| 16 | Invalid JSON response. |
+| 17 | Empty Versioned Chip Endorsement Key (VCEK) certificate. |
+| 18 | Empty response. |
+| 19 | Empty request body. |
+| 20 | Report parsing failure. |
+| 21 | Report empty. |
+| 22 | Error extracting JSON web token information. |
+| 23 | Error converting JSON web token to RSA public key. |
+| 24 | **EVP_PKEY** encryption initialization failed. |
+| 25 | **EVP_PKEY** encryption failed. |
+| 26 | Data decryption TPM error. |
+| 27 | Error parsing DNS info. |
+
+## JSON web token
+
+You can extract different parts of the JSON web token for the [different API scenarios](#apis) described previously. The following are important fields for the guest attestation feature:
+
+| Claim | Attribute | Example value |
+| -- | | |
+| - | `x-ms-azurevm-vmid` | `2DEDC52A-6832-46CE-9910-E8C9980BF5A7 ` |
+| AMD SEV-SNP hardware | `x-ms-isolation-tee`| `sevsnpvm` |
+| AMD SEV-SNP hardware | `x-ms-compliance-status` (under `x-ms-isolation-tee`) | `azure-compliant-cvm` |
+| Secure boot | `secure-boot` (under `x-ms-runtime` &gt; `vm-configuration`) | `true` |
+| Virtual TPM | `tpm-enabled` (under `x-ms-runtime` &gt; `vm-configuration`) | `true` |
+| Virtual TPM | `kid` (under `x-ms-runtime` &gt; `keys`) | `TpmEphemeralEncryptionKey` |
+
+```json
+{
+ "exp": 1653021894,
+ "iat": 1652993094,
+ "iss": "https://sharedeus.eus.test.attest.azure.net",
+ "jti": "<value>",
+ "nbf": 1652993094,
+ "secureboot": true,
+ "x-ms-attestation-type": "azurevm",
+ "x-ms-azurevm-attestation-protocol-ver": "2.0",
+ "x-ms-azurevm-attested-pcrs": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4,
+ 5,
+ 6,
+ 7,
+ 11,
+ 12,
+ 13
+ ],
+ "x-ms-azurevm-bootdebug-enabled": false,
+ "x-ms-azurevm-dbvalidated": true,
+ "x-ms-azurevm-dbxvalidated": true,
+ "x-ms-azurevm-debuggersdisabled": true,
+ "x-ms-azurevm-default-securebootkeysvalidated": true,
+ "x-ms-azurevm-elam-enabled": true,
+ "x-ms-azurevm-flightsigning-enabled": false,
+ "x-ms-azurevm-hvci-policy": 0,
+ "x-ms-azurevm-hypervisordebug-enabled": false,
+ "x-ms-azurevm-is-windows": true,
+ "x-ms-azurevm-kerneldebug-enabled": false,
+ "x-ms-azurevm-osbuild": "NotApplicable",
+ "x-ms-azurevm-osdistro": "Microsoft",
+ "x-ms-azurevm-ostype": "Windows",
+ "x-ms-azurevm-osversion-major": 10,
+ "x-ms-azurevm-osversion-minor": 0,
+ "x-ms-azurevm-signingdisabled": true,
+ "x-ms-azurevm-testsigning-enabled": false,
+ "x-ms-azurevm-vmid": "<value>",
+ "x-ms-isolation-tee": {
+ "x-ms-attestation-type": "sevsnpvm",
+ "x-ms-compliance-status": "azure-compliant-cvm",
+ "x-ms-runtime": {
+ "keys": [
+ {
+ "e": "AQAB",
+ "key_ops": [
+ "encrypt"
+ ],
+ "kid": "HCLAkPub",
+ "kty": "RSA",
+ "n": "<value>"
+ }
+ ],
+ "vm-configuration": {
+ "console-enabled": true,
+ "current-time": 1652993091,
+ "secure-boot": true,
+ "tpm-enabled": true,
+ "vmUniqueId": "<value>"
+ }
+ },
+ "x-ms-sevsnpvm-authorkeydigest": "<value>",
+ "x-ms-sevsnpvm-bootloader-svn": 2,
+ "x-ms-sevsnpvm-familyId": "<value>",
+ "x-ms-sevsnpvm-guestsvn": 1,
+ "x-ms-sevsnpvm-hostdata": "<value>",
+ "x-ms-sevsnpvm-idkeydigest": "<value>",
+ "x-ms-sevsnpvm-imageId": "<value>",
+ "x-ms-sevsnpvm-is-debuggable": false,
+ "x-ms-sevsnpvm-launchmeasurement": "<value>",
+ "x-ms-sevsnpvm-microcode-svn": 55,
+ "x-ms-sevsnpvm-migration-allowed": false,
+ "x-ms-sevsnpvm-reportdata": "<value>",
+ "x-ms-sevsnpvm-reportid": "<value>",
+ "x-ms-sevsnpvm-smt-allowed": true,
+ "x-ms-sevsnpvm-snpfw-svn": 2,
+ "x-ms-sevsnpvm-tee-svn": 0,
+ "x-ms-sevsnpvm-vmpl": 0
+ },
+ "x-ms-policy-hash": "<value>",
+ "x-ms-runtime": {
+ "keys": [
+ {
+ "e": "AQAB",
+ "key_ops": [
+ "encrypt"
+ ],
+ "kid": "TpmEphemeralEncryptionKey",
+ "kty": "RSA",
+ "n": "<value>"
+ }
+ ]
+ },
+ "x-ms-ver": "1.0"
+}
+```
+
+## Next steps
+
+- [Learn to use a sample application with the guest attestation APIs](guest-attestation-example.md)
+- [Learn how to use Microsoft Defender for Cloud integration with confidential VMs with guest attestation installed](guest-attestation-defender-for-cloud.md)
+- [Learn about Azure confidential VMs](confidential-vm-overview.md)
confidential-computing Guest Attestation Defender For Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/guest-attestation-defender-for-cloud.md
+
+ Title: Use Microsoft Defender for Cloud with guest attestation for Azure confidential VMs
+description: Learn how you can use Microsoft Defender for Cloud with your Azure confidential VMs with the guest attestation feature installed.
+++++ Last updated : 09/29/2022+++
+# Microsoft Defender for Cloud integration
+
+Azure *confidential virtual machines (confidential VMs)* are integrated with [Microsoft Defender for Cloud](../defender-for-cloud/defender-for-cloud-introduction.md). Defender for Cloud continuously checks that your confidential VM is set up correctly and provides relevant recommendations and alerts.
+
+To use Defender for Cloud with your confidential VM, you must have the [*guest attestation* feature](guest-attestation-confidential-vms.md) installed on the VM. For more information, see the [sample application for guest attestation](guest-attestation-example.md) to learn how to install the feature extension.
+
+## Recommendations
+
+If there's a configuration problem with your confidential VM, Defender for Cloud recommends changes.
+
+### Enable secure boot
+
+**Secure Boot should be enabled on supported Windows/Linux virtual machines**
+
+This low-severity recommendation means that your confidential VM supports secure boot, but this feature is currently disabled.
+
+This recommendation only applies to confidential VMs.
+
+### Install guest attestation extension
+
+**Guest attestation extension should be installed on supported Windows/Linux virtual machines**
+
+This low-severity recommendation shows that your confidential VM doesn't have the guest attestation extension installed. However, secure boot and vTPM are already enabled. When you install this extension, Defender for Cloud can attest and monitor the *boot integrity* of your VMs proactively. Boot integrity is validated through remote attestation.
+
+When you enable boot integrity monitoring, Defender for Cloud issues an assessment with the status of the remote attestation.
+
+This feature is supported for Windows and Linux single VMs and uniform scale sets.
+
+## Alerts
+
+Defender for Cloud also detects and alerts you to VM health problems.
+
+### VM attestation failure
+
+**Attestation failed your virtual machine**
+
+This medium-severity alert means that attestation failed for your VM. Defender for Cloud performs attestation on your VMs periodically and after the VM boots.
+
+> [!NOTE]
+> This alert is only available for VMs with vTPM enabled and the guest attestation extension installed. Secure boot must also be enabled for the attestation to succeed. If you need to disable secure boot, you can choose to suppress this alert to avoid false positives.
+
+Reasons for attestation failure include:
+
+- The attested information, which includes the boot log, deviates from a trusted baseline. This problem might indicate that untrusted modules have loaded and the OS might be compromised.
+- The attestation quote can't be verified to originate from the vTPM of the attested VM. This problem might indicate that malware is present, which might indicate that traffic to the vTPM is being intercepted.
+
+## Next steps
+
+- [Learn more about the guest attestation feature](guest-attestation-confidential-vms.md)
+- [Learn to use a sample application with the guest attestation APIs](guest-attestation-example.md)
+- [Learn about Azure confidential VMs](confidential-vm-overview.md)
confidential-computing Guest Attestation Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/guest-attestation-example.md
+
+ Title: Use sample application for guest attestation in confidential VMs
+description: Learn how to use a sample Linux or Windows application for use with the guest attestation feature APIs.
+++++ Last updated : 09/29/2022++
+
+# Use sample application for guest attestation
+
+The [*guest attestation*](guest-attestation-confidential-vms.md) feature helps you to confirm that a confidential VM runs on a hardware-based trusted execution environment (TEE) with security features enabled for isolation and integrity.
+
+Sample applications for use with the guest attestation APIs are [available on GitHub](https://github.com/Azure/confidential-computing-cvm-guest-attestation) for [Linux](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-guest-attestation-linux-app) and [Windows](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-guest-attestation-windows-app).
+
+Depending on your [type of scenario](guest-attestation-confidential-vms.md#scenarios), you can reuse the sample code in your client program or workload code.
+
+## Prerequisites
+
+- An Azure subscription.
+- An Azure [confidential VM](quick-create-confidential-vm-portal-amd.md) or a [VM with trusted launch enabled](../virtual-machines/trusted-launch-portal.md). You can use a Linux or Windows VM.
+## Use sample application
+
+To use a sample application in C++ for use with the guest attestation APIs, follow the instructions for your operating system (OS).
+
+#### [Linux](#tab/linux)
+
+1. Sign in to your VM.
+
+1. Clone the [sample Linux application](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-guest-attestation-linux-app).
+
+1. Install the `build-essential` package. This package installs everything required for compiling the sample application.
+
+ ```bash
+ sudo apt-get install build-essential
+ ```
+
+1. Install the `libcurl4-openssl-dev` and `libjsoncpp-dev` packages.
+
+ ```bash
+ sudo apt-get install libcurl4-openssl-dev
+ ```
+
+ ```bash
+ sudo apt-get install libjsoncpp-dev
+ ```
+
+1. Download the attestation package from <https://packages.microsoft.com/repos/azurecore/pool/main/a/azguestattestation1/>.
+
+1. Install the attestation package. Make sure to replace `<latest-version>` with the version that you downloaded.
+
+ ```bash
+ sudo dpkg -i azguestattestation1_<latest-version>_amd64.deb
+ ```
+
+#### [Windows](#tab/windows)
+
+1. Install Visual Studio with the [**Desktop development with C++** workload](/cpp/build/vscpp-step-0-installation).
+1. Clone the [sample Windows application](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-guest-attestation-windows-app).
+1. Build your project. From the **Build** menu, select **Build Solution**.
+1. After the build succeeds, go to the `Release` build folder.
+1. Run the application by running `AttestationClientApp.exe`.
++++
+## Next steps
+
+- [Learn how to use Microsoft Defender for Cloud integration with confidential VMs with guest attestation installed](guest-attestation-defender-for-cloud.md)
+- [Learn more about the guest attestation feature](guest-attestation-confidential-vms.md)
+- [Learn about Azure confidential VMs](confidential-vm-overview.md)
connectors Connectors Create Api Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-cosmos-db.md
Last updated 08/23/2022 tags: connectors+ # Process and create Azure Cosmos DB documents using Azure Logic Apps
You can connect to Azure Cosmos DB from both **Logic App (Consumption)** and **L
- Currently, only stateful workflows in a **Logic App (Standard)** resource can use both the managed connector operations and built-in operations. Stateless workflows can use only built-in operations. -- The Azure Cosmos DB connector supports only Azure Cosmos DB accounts created with the [Core (SQL) API](../cosmos-db/choose-api.md#coresql-api).
+- The Azure Cosmos DB connector supports only Azure Cosmos DB accounts created with [Azure Cosmos DB for NoSQL](../cosmos-db/choose-api.md#coresql-api).
## Prerequisites
To add an Azure Cosmos DB action to a logic app workflow in multi-tenant Azure L
1. If your workflow is blank, add any trigger that you want.
- This example starts with the [**When a HTTP request is received** trigger](connectors-native-reqres.md#add-request-trigger).
+ This example starts with the [**When an HTTP request is received** trigger](connectors-native-reqres.md#add-request-trigger).
1. Under the trigger or action where you want to add the Azure Cosmos DB action, select **New step** or **Add an action**, if between steps.
To add an Azure Cosmos DB action to a logic app workflow in single-tenant Azure
1. If your workflow is blank, add any trigger that you want.
- This example starts with the [**When a HTTP request is received** trigger](connectors-native-reqres.md#add-request-trigger), which uses a basic schema definition to represent the item that you want to create.
+ This example starts with the [**When an HTTP request is received** trigger](connectors-native-reqres.md#add-request-trigger), which uses a basic schema definition to represent the item that you want to create.
:::image type="content" source="./media/connectors-create-api-cosmos-db/standard-http-trigger.png" alt-text="Screenshot showing the Azure portal and designer for a Standard logic app workflow with the 'When a HTTP request is received' trigger and parameters configuration.":::
In a **Logic App (Consumption)** workflow, an Azure Cosmos DB connection require
| Property | Required | Value | Description | |-|-|-|-| | **Connection name** | Yes | <*connection-name*> | The name to use for your connection. |
-| **Authentication Type** | Yes | <*connection-type*> | The authentication type that you want to use. This example uses **Access key**. <p><p>- If you select **Access Key**, provide the remaining required property values to create the connection. <p><p>- If you select **Azure AD Integrated**, no other property values are required, but you have to configure your connection by following the steps for [Azure AD authentication and Cosmos DB connector](/connectors/documentdb/#azure-ad-authentication-and-cosmos-db-connector). |
-| **Access key to your Cosmos DB account** | Yes | <*access-key*> | The access key for the Azure Cosmos DB account to use for this connection. This value is either a read-write key or a read-only key. <p><p>**Note**: To find the key, go to the Azure Cosmos DB account page. In the navigation menu, under **Settings**, select **Keys**. Copy one of the available key values. |
+| **Authentication Type** | Yes | <*connection-type*> | The authentication type that you want to use. This example uses **Access key**. <p><p>- If you select **Access Key**, provide the remaining required property values to create the connection. <p><p>- If you select **Azure AD Integrated**, no other property values are required, but you have to configure your connection by following the steps for [Azure AD authentication and Azure Cosmos DB connector](/connectors/documentdb/#azure-ad-authentication-and-cosmos-db-connector). |
+| **Access key to your Azure Cosmos DB account** | Yes | <*access-key*> | The access key for the Azure Cosmos DB account to use for this connection. This value is either a read-write key or a read-only key. <p><p>**Note**: To find the key, go to the Azure Cosmos DB account page. In the navigation menu, under **Settings**, select **Keys**. Copy one of the available key values. |
| **Account Id** | Yes | <*account-ID*> | The name for the Azure Cosmos DB account to use for this connection. | |||||
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
A resource's `properties` object has the following properties:
### <a name="container-apps-environment-examples"></a>Examples
-# [ARM template](#tab/arm-template)
- The following example ARM template deploys a Container Apps environment.
+> [!NOTE]
+> The commands to create container app environments don't support YAML configuration input.
+ ```json { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
The following example ARM template deploys a Container Apps environment.
} ```
-# [YAML](#tab/yaml)
-
-YAML input isn't currently used by Azure CLI commands to specify a Container Apps environment.
--- ## Container app The following tables describe the properties available in the container app resource.
The following example ARM template deploys a container app.
}, "registry_password": { "type": "SecureString"
- },
- "storage_share_name": {
- "type": "String"
} }, "variables": {},
container-apps Communicate Between Microservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/communicate-between-microservices.md
az containerapp create \
--target-port 3000 \ --env-vars API_BASE_URL=https://$API_BASE_URL \ --ingress 'external' \
- --registry-server $ACR_NAME.azurecr.io \
- --query configuration.ingress.fqdn
+ --registry-server $ACR_NAME.azurecr.io
``` # [PowerShell](#tab/powershell)
az containerapp create `
--env-vars API_BASE_URL=https://$API_BASE_URL ` --target-port 3000 ` --ingress 'external' `
- --registry-server "$ACR_NAME.azurecr.io" `
- --query configuration.ingress.fqdn
+ --registry-server "$ACR_NAME.azurecr.io"
```
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
description: Learn more about using Dapr on your Azure Container App service to
-+ Previously updated : 08/18/2022 Last updated : 09/29/2022 # Dapr integration with Azure Container Apps
-The Distributed Application Runtime ([Dapr][dapr-concepts]) is a set of incrementally adoptable APIs that simplify the authoring of distributed, microservice-based applications. For example, Dapr provides capabilities for enabling application intercommunication, whether through messaging via pub/sub or reliable and secure service-to-service calls. Once Dapr is enabled in Container Apps, it exposes its HTTP and gRPC APIs via a sidecar: a process that runs in tandem with each of your Container Apps.
+The Distributed Application Runtime ([Dapr][dapr-concepts]) is a set of incrementally adoptable features that simplify the authoring of distributed, microservice-based applications. For example, Dapr provides capabilities for enabling application intercommunication, whether through messaging via pub/sub or reliable and secure service-to-service calls. Once Dapr is enabled for a container app, a secondary process will be created alongside your application code that will enable communication with Dapr via HTTP or gRPC.
-Dapr APIs, also referred to as building blocks, are built on best practice industry standards, that:
+Dapr's APIs are built on best-practice industry standards that:
- Seamlessly fit with your preferred language or framework-- Are incrementally adoptable; you can use one, several, or all of the building blocks depending on your needs
+- Are incrementally adoptable; you can use one, several, or all Dapr capabilities depending on your application's needs
-## Dapr building blocks
+Dapr is an open source, [Cloud Native Computing Foundation (CNCF)][dapr-cncf] project. The CNCF is part of the Linux Foundation and provides support, oversight, and direction for fast-growing, cloud native projects. As an alternative to deploying and managing the Dapr OSS project yourself, the Container Apps platform:
+- Provides a managed and supported Dapr integration
+- Handles Dapr version upgrades seamlessly
+- Exposes a simplified Dapr interaction model to increase developer productivity
-| Building block | Description |
-| -- | -- |
-| [**Service-to-service invocation**][dapr-serviceinvo] | Discover services and perform reliable, direct service-to-service calls with automatic mTLS authentication and encryption. |
-| [**State management**][dapr-statemgmt] | Provides state management capabilities for transactions and CRUD operations. |
-| [**Pub/sub**][dapr-pubsub] | Allows publisher and subscriber container apps to intercommunicate via an intermediary message broker. |
-| [**Bindings**][dapr-bindings] | Trigger your application with incoming or outgoing events, without SDK or library dependencies. |
-| [**Actors**][dapr-actors] | Dapr actors apply the scalability and reliability that the underlying platform provides. |
-| [**Observability**](./observability.md) | Send tracing information to an Application Insights backend. |
+This guide provides insight into core Dapr concepts and details regarding the Dapr interaction model in Container Apps.
-## Dapr settings
+## Dapr APIs
-The following Pub/sub example demonstrates how Dapr works alongside your container app:
+| Dapr API | Description |
+| -- | |
+| [**Service-to-service invocation**][dapr-serviceinvo] | Discover services and perform reliable, direct service-to-service calls with automatic mTLS authentication and encryption. |
+| [**State management**][dapr-statemgmt] | Provides state management capabilities for transactions and CRUD operations. |
+| [**Pub/sub**][dapr-pubsub] | Allows publisher and subscriber container apps to intercommunicate via an intermediary message broker. |
+| [**Bindings**][dapr-bindings] | Trigger your applications based on events |
+| [**Actors**][dapr-actors] | Dapr actors are message-driven, single-threaded, units of work designed to quickly scale. For example, in burst-heavy workload situations. |
+| [**Observability**](./observability.md) | Send tracing information to an Application Insights backend. |
+| [**Secrets**][dapr-secrets] | Access secrets from your application code or reference secure values in your Dapr components. |
-| Label | Dapr settings | Description |
-| -- | - | -- |
-| 1 | Container Apps with Dapr enabled | Dapr is enabled at the container app level by configuring Dapr settings. Dapr settings apply across all revisions of a given container app. |
-| 2 | Dapr sidecar | Fully managed Dapr APIs are exposed to your container app via the Dapr sidecar. These APIs are available through HTTP and gRPC protocols. By default, the sidecar runs on port 3500 in Container Apps. |
-| 3 | Dapr component | Dapr components can be shared by multiple container apps. The Dapr sidecar uses scopes to determine which components to load for a given container app at runtime. |
+## Dapr concepts overview
-### Enable Dapr
+The following example based on the Pub/sub API is used to illustrate core concepts related to Dapr in Azure Container Apps.
-You can define the Dapr configuration for a container app through the Azure CLI or using Infrastructure as Code templates like a bicep or an Azure Resource Manager (ARM) template. You can enable Dapr in your app with the following settings:
-| CLI Parameter | Template field | Description |
-| -- | -- | -- |
-| `--enable-dapr` | `dapr.enabled` | Enables Dapr on the container app. |
-| `--dapr-app-port` | `dapr.appPort` | Identifies which port your application is listening. |
-| `--dapr-app-protocol` | `dapr.appProtocol` | Tells Dapr which protocol your application is using. Valid options are `http` or `grpc`. Default is `http`. |
-| `--dapr-app-id` | `dapr.appId` | The unique ID of the application. Used for service discovery, state encapsulation, and the pub/sub consumer ID. |
+| Label | Dapr settings | Description |
+| -- | -- | |
+| 1 | Container Apps with Dapr enabled | Dapr is enabled at the container app level by configuring a set of Dapr arguments. These values apply to all revisions of a given container app when running in multiple revisions mode. |
+| 2 | Dapr | The fully managed Dapr APIs are exposed to each container app through a Dapr sidecar. The Dapr APIs can be invoked from your container app via HTTP or gRPC. The Dapr sidecar runs on HTTP port 3500 and gRPC port 50001. |
+| 3 | Dapr component configuration | Dapr uses a modular design where functionality is delivered as a component. Dapr components can be shared across multiple container apps. The Dapr app identifiers provided in the scopes array dictate which Dapr-enabled container apps will load a given component at runtime. |
-The following example shows how to define a Dapr configuration in a template by adding the Dapr configuration to the `properties.configuration` section of your container apps resource declaration.
+## Dapr enablement
+
+You can configure Dapr using various [arguments and annotations][dapr-args] based on the runtime context. Azure Container Apps provides three channels through which you can configure Dapr:
+
+- Container Apps CLI
+- Infrastructure as Code (IaC) templates, as in Bicep or Azure Resource Manager (ARM) templates
+- The Azure portal
+
+The table below outlines the currently supported list of Dapr sidecar configurations in Container Apps:
+
+| Container Apps CLI | Template field | Description |
+| - | - | - |
+| `--enable-dapr` | `dapr.enabled` | Enables Dapr on the container app. |
+| `--dapr-app-port` | `dapr.appPort` | The port your application is listening on, which Dapr uses to communicate with your application. |
+| `--dapr-app-protocol` | `dapr.appProtocol` | Tells Dapr which protocol your application is using. Valid options are `http` or `grpc`. Default is `http`. |
+| `--dapr-app-id` | `dapr.appId` | A unique Dapr identifier for your container app, used for service discovery, state encapsulation, and the pub/sub consumer ID. |
+| `--dapr-max-request-size` | `dapr.httpMaxRequestSize` | Sets the max size of the request body the http and grpc servers can handle, to support uploading large files. Default is 4 MB. |
+| `--dapr-read-buffer-size` | `dapr.httpReadBufferSize` | Sets the max size of the http header read buffer, to handle sending multi-KB headers. Default is 4 KB. |
+| `--dapr-api-logging` | `dapr.enableApiLogging` | Enables viewing the API calls from your application to the Dapr sidecar. |
+| `--dapr-log-level` | `dapr.logLevel` | Set the log level for the Dapr sidecar. Allowed values: debug, error, info, warn. Default is `info`. |
+
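For example, the CLI arguments above might be combined when creating a Dapr-enabled container app. The following is a sketch; the resource, environment, image, and app names are placeholders:

```azurecli
az containerapp create \
  --name my-dapr-app \
  --resource-group my-resource-group \
  --environment my-environment \
  --image <CONTAINER_IMAGE> \
  --enable-dapr \
  --dapr-app-id my-dapr-app \
  --dapr-app-port 3000 \
  --dapr-app-protocol http
```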
+When using an IaC template, specify the following arguments in the `properties.configuration` section of the container app resource definition.
# [Bicep](#tab/bicep1)
The following example shows how to define a Dapr configuration in a template by
"appProcotol": "http", "appPort": 3000 }
-
```
-Since Dapr settings are considered application-scope changes, new revisions aren't created when you change Dapr setting. However, when changing Dapr settings, the container app revisions and replicas are automatically restarted.
+The above Dapr configuration values are considered application-scope changes. When you run a container app in multiple revision mode, changes to these settings won't create a new revision. Instead, all existing revisions will be restarted to ensure they're configured with the most up-to-date values.
+
+## Dapr components
+
+Dapr uses a modular design where functionality is delivered as a [component][dapr-component]. The use of Dapr components is optional and dictated exclusively by the needs of your application.
+
+Dapr components in container apps are environment-level resources that:
+
+- Can provide a pluggable abstraction model for connecting to supporting external services.
+- Can be shared across container apps or scoped to specific container apps.
+- Can use Dapr secrets to securely retrieve configuration metadata.
+
+### Component schema
+
+All Dapr OSS components conform to the following basic [schema][dapr-component-spec].
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: [COMPONENT-NAME]
+ namespace: [COMPONENT-NAMESPACE]
+spec:
+ type: [COMPONENT-TYPE]
+ version: v1
+ initTimeout: [TIMEOUT-DURATION]
+ ignoreErrors: [BOOLEAN]
+ metadata:
+ - name: [METADATA-NAME]
+ value: [METADATA-VALUE]
+```
+
+In Container Apps, the above schema has been slightly simplified to support Dapr components and remove unnecessary fields, including `apiVersion`, `kind`, and redundant metadata and spec properties.
+
+```yaml
+componentType: [COMPONENT-TYPE]
+version: v1
+initTimeout: [TIMEOUT-DURATION]
+ignoreErrors: [BOOLEAN]
+metadata:
+ - name: [METADATA-NAME]
+ value: [METADATA-VALUE]
+```
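As an illustration, a filled-in Container Apps component manifest for an Azure Blob Storage state store might look like the following sketch. The storage account and container names are placeholders, and a real configuration would also need to supply credentials, for example through managed identity or a secret reference as described later in this article:

```yaml
componentType: state.azure.blobstorage
version: v1
initTimeout: 5s
ignoreErrors: false
metadata:
  - name: accountName
    value: mystorageaccount
  - name: containerName
    value: mycontainer
```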
+
+### Component scopes
+
+By default, all Dapr-enabled container apps within the same environment will load the full set of deployed components. To ensure components are loaded at runtime by only the appropriate container apps, application scopes should be used. In the example below, the component will only be loaded by the two Dapr-enabled container apps with Dapr application IDs `APP-ID-1` and `APP-ID-2`:
+
+> [!NOTE]
+> Dapr component scopes correspond to the Dapr application ID of a container app, not the container app name.
+
+```yaml
+componentType: [COMPONENT-TYPE]
+version: v1
+initTimeout: [TIMEOUT-DURATION]
+ignoreErrors: [BOOLEAN]
+metadata:
+ - name: [METADATA-NAME]
+ value: [METADATA-VALUE]
+scopes:
+ - [APP-ID-1]
+ - [APP-ID-2]
+```
+
+### Connecting to external services via Dapr
+
+There are a few approaches supported in Container Apps to securely establish connections to external services for Dapr components:
+
+1. Using managed identity
+2. Using a Dapr secret store component reference
+3. Using platform-managed Kubernetes secrets
-### Configure Dapr components
+#### Using managed identity
-Once Dapr is enabled on your container app, you're able to plug in and use the [Dapr APIs](#dapr-building-blocks) as needed. You can also create **Dapr components**, which are specific implementations of a given building block. Dapr components are environment-level resources, meaning they can be shared across Dapr-enabled container apps. Components are pluggable modules that:
+For Azure-hosted services, Dapr can use the managed identity of the scoped container apps to authenticate to the backend service provider. When using managed identity, you don't need to include secret information in a component manifest. Using managed identity is preferred as it eliminates storage of sensitive input in components and doesn't require managing a secret store.
-- Allow you to use the individual Dapr building block APIs.-- Can be scoped to specific container apps.-- Can be easily modified to point to any one of the component implementations.-- Can reference secure configuration values using Container Apps secrets.
+#### Using a Dapr secret store component reference
-Based on your needs, you can "plug in" certain Dapr component types like state stores, pub/sub brokers, and more. In the examples below, you'll find the various schemas available for defining a Dapr component in Azure Container Apps. The Container Apps manifests differ sightly from the Dapr OSS manifests in order to simplify the component creation experience.
+When you create Dapr components for non-AD enabled services, certain metadata fields require sensitive input values. The recommended approach for retrieving these secrets is to reference an existing Dapr secret store component that securely accesses secret information.
+
+Here are the steps to set up a reference:
+
+1. Create a Dapr secret store component using the Container Apps schema. The component type for all supported Dapr secret stores begins with `secretstores.`.
+1. Create extra components as needed which reference this Dapr secret store component to retrieve the sensitive metadata input.
+
+When creating a secret store component in container apps, you can provide sensitive information in the metadata section in either of the following ways:
+
+- For an **Azure Key Vault secret store**, use managed identity to establish the connection.
+- For **non-Azure secret stores**, use platform-managed Kubernetes secrets that are defined directly as part of the component manifest.
+
+The following component showcases the simplest possible secret store configuration. This example assumes the publisher and subscriber applications are both configured with a system-assigned or user-assigned managed identity that has the appropriate permissions on the Azure Key Vault instance.
+
+```yaml
+componentType: secretstores.azure.keyvault
+version: v1
+metadata:
+ - name: vaultName
+ value: [your_keyvault_name]
+ - name: azureEnvironment
+ value: "AZUREPUBLICCLOUD"
+ - name: azureClientId
+ value: [your_managed_identity_client_id]
+scopes:
+ - publisher-app
+ - subscriber-app
+```
> [!NOTE]
-> By default, all Dapr-enabled container apps within the same environment will load the full set of deployed components. By adding scopes to a component, you tell the Dapr sidecars for each respective container app which components to load at runtime. Using scopes is recommended for production workloads.
+> The Kubernetes secrets, local environment variables, and local file Dapr secret stores aren't supported in Container Apps. As an alternative to the upstream Dapr default Kubernetes secret store, Container Apps provides a platform-managed approach for creating and using Kubernetes secrets.
+
+#### Using platform-managed Kubernetes secrets
+
+This component configuration defines the sensitive value as a secret parameter that can be referenced from the metadata section. This approach can be used to connect to non-Azure services or in dev/test scenarios for quickly deploying components via the CLI without setting up a secret store or managed identity.
+
+```yaml
+componentType: secretstores.azure.keyvault
+version: v1
+metadata:
+ - name: vaultName
+ value: [your_keyvault_name]
+ - name: azureEnvironment
+ value: "AZUREPUBLICCLOUD"
+ - name: azureTenantId
+ value: "[your_tenant_id]"
+ - name: azureClientId
+ value: "[your_client_id]"
+ - name: azureClientSecret
+ secretRef: azClientSecret
+secrets:
+ - name: azClientSecret
+ value: "[your_client_secret]"
+scopes:
+ - publisher-app
+ - subscriber-app
+```
+
+#### Referencing Dapr secret store components
+
+Once you've created a Dapr secret store using one of the above approaches, you can reference that secret store from other Dapr components in the same environment. In the following example, the `secretStoreComponent` field is populated with the name of the secret store specified above, where the `sb-root-connectionstring` is stored.
+
+```yaml
+componentType: pubsub.azure.servicebus
+version: v1
+secretStoreComponent: "my-secret-store"
+metadata:
+ - name: connectionString
+ secretRef: sb-root-connectionstring
+scopes:
+ - publisher-app
+ - subscriber-app
+```
+
+### Component examples
# [YAML](#tab/yaml)
-When defining a Dapr component via YAML, you'll pass your component manifest into the Azure CLI. For example, deploy a `pubsub.yaml` component using the following command:
+To create a Dapr component via the Container Apps CLI, you can use a container apps YAML manifest. When configuring multiple components, you must create and apply a separate YAML file for each component.
```azurecli az containerapp env dapr-component set --name ENVIRONMENT_NAME --resource-group RESOURCE_GROUP_NAME --dapr-component-name pubsub --yaml "./pubsub.yaml" ```
-The `pubsub.yaml` spec will be scoped to the dapr-enabled container apps with app IDs `publisher-app` and `subscriber-app`.
- ```yaml # pubsub.yaml for Azure Service Bus component componentType: pubsub.azure.servicebus version: v1
+secretStoreComponent: "my-secret-store"
metadata:-- name: connectionString
- secretRef: sb-root-connectionstring
-secrets:
-- name: sb-root-connectionstring
- value: "value"
-# Application scopes
+ - name: connectionString
+ secretRef: sb-root-connectionstring
scopes: - publisher-app - subscriber-app
scopes:
# [Bicep](#tab/bicep)
-This resource defines a Dapr component called `dapr-pubsub` via Bicep. The Dapr component is defined as a child resource of your Container Apps environment. The `dapr-pubsub` component is scoped to the Dapr-enabled container apps with app IDs `publisher-app` and `subscriber-app`:
+This resource defines a Dapr component called `dapr-pubsub` via Bicep. The Dapr component is defined as a child resource of the Container Apps environment. To define multiple components, you can add a `daprComponent` resource for each.
```bicep resource daprComponent 'daprComponents@2022-03-01' = {
resource daprComponent 'daprComponents@2022-03-01' = {
properties: { componentType: 'pubsub.azure.servicebus' version: 'v1'
- secrets: [
- {
- name: 'sb-root-connectionstring'
- value: 'value'
- }
- ]
+ secretStoreComponent: 'my-secret-store'
metadata: [ { name: 'connectionString' secretRef: 'sb-root-connectionstring' } ]
- // Application scopes
scopes: [ 'publisher-app' 'subscriber-app'
resource daprComponent 'daprComponents@2022-03-01' = {
# [ARM](#tab/arm)
-This resource defines a Dapr component called `dapr-pubsub` via ARM. The Dapr component is defined as a child resource of your Container Apps environment. The `dapr-pubsub` component will be scoped to the Dapr-enabled container apps with app IDs `publisher-app` and `subscriber-app`:
+This resource defines a Dapr component called `dapr-pubsub` via ARM.
```json {
This resource defines a Dapr component called `dapr-pubsub` via ARM. The Dapr co
"properties": { "componentType": "pubsub.azure.servicebus", "version": "v1",
- "secrets": [
- {
- "name": "sb-root-connectionstring",
- "value": "value"
- }
- ],
+ "secretScoreComponent": "my-secret-store",
"metadata": [ { "name": "connectionString", "secretRef": "sb-root-connectionstring" } ],
- // Application scopes
"scopes": ["publisher-app", "subscriber-app"]- } } ]
This resource defines a Dapr component called `dapr-pubsub` via ARM. The Dapr co
-For comparison, a Dapr OSS `pubsub.yaml` file would include:
-
-```yml
-apiVersion: dapr.io/v1alpha1
-kind: Component
-metadata:
- name: dapr-pubsub
-spec:
- type: pubsub.azure.servicebus
- version: v1
- metadata:
- - name: connectionString
- secretKeyRef:
- name: sb-root-connectionstring
- key: "value"
-# Application scopes
-scopes:
-- publisher-app-- subscriber-app
-```
-
-## Current supported Dapr version
-
-Azure Container Apps supports Dapr version 1.8.3.
-
-Version upgrades are handled transparently by Azure Container Apps. You can find the current version via the Azure portal and the CLI.
- ## Limitations ### Unsupported Dapr capabilities -- **Dapr Secrets Management API**: Use [Container Apps secret mechanism][aca-secrets] as an alternative. - **Custom configuration for Dapr Observability**: Instrument your environment with Application Insights to visualize distributed tracing. - **Dapr Configuration spec**: Any capabilities that require use of the Dapr configuration spec.-- **Advanced Dapr sidecar configurations**: Container Apps allows you to specify sidecar settings including `app-protocol`, `app-port`, and `app-id`. For a list of unsupported configuration options, see [the Dapr documentation](https://docs.dapr.io/reference/arguments-annotations-overview/).
+- **Declarative pub/sub subscriptions**
+- **Any Dapr sidecar annotations not listed above**
### Known limitations -- **Declarative pub/sub subscriptions** - **Actor reminders**: Require a minReplicas of 1+ to ensure reminders will always be active and fire correctly. ## Next Steps
Now that you've learned about Dapr and some of the challenges it solves:
- Walk through a tutorial [using GitHub Actions to automate changes for a multi-revision, Dapr-enabled container app][dapr-github-actions]. <!-- Links Internal -->+ [dapr-quickstart]: ./microservices-dapr.md [dapr-arm-quickstart]: ./microservices-dapr-azure-resource-manager.md
-[aca-secrets]: ./manage-secrets.md
[dapr-github-actions]: ./dapr-github-actions.md <!-- Links External -->+ [dapr-concepts]: https://docs.dapr.io/concepts/overview/ [dapr-pubsub]: https://docs.dapr.io/developing-applications/building-blocks/pubsub/pubsub-overview [dapr-statemgmt]: https://docs.dapr.io/developing-applications/building-blocks/state-management/state-management-overview/ [dapr-serviceinvo]: https://docs.dapr.io/developing-applications/building-blocks/service-invocation/service-invocation-overview/ [dapr-bindings]: https://docs.dapr.io/developing-applications/building-blocks/bindings/bindings-overview/ [dapr-actors]: https://docs.dapr.io/developing-applications/building-blocks/actors/actors-overview/
+[dapr-secrets]: https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-overview/
+[dapr-cncf]: https://www.cncf.io/projects/dapr/
+[dapr-args]: https://docs.dapr.io/reference/arguments-annotations-overview/
+[dapr-component]: https://docs.dapr.io/concepts/components-concept/
+[dapr-component-spec]: https://docs.dapr.io/operations/components/component-schema/
container-apps Deploy Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio-code.md
Last updated 09/01/2022
-# Tutorial: Deploy to Azure Container Apps using Visual Studio Code
+# Quickstart: Deploy to Azure Container Apps using Visual Studio Code
Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.
container-apps Get Started Existing Container Image Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image-portal.md
This article demonstrates how to deploy an existing container to Azure Container
- If you don't have one, you [can create one for free](https://azure.microsoft.com/free/). ## Setup
-> [!NOTE]
-> An Azure Container Apps environment can be deployed as a zone redundant resource in regions where support is available. This is a deployment-time only configuration option.
- Begin by signing in to the [Azure portal](https://portal.azure.com). ## Create a container app
container-apps Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/health-probes.md
description: Check startup, liveness, and readiness with Azure Container Apps he
-+ Last updated 03/30/2022
The following example demonstrates how to configure the liveness and readiness p
## Next steps > [!div class="nextstepaction"]
-> [Monitor an app](monitor.md)
+> [Application logging](logging.md)
container-apps Log Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-monitoring.md
description: Monitor your container app logs with Log Analytics
-+ Last updated 08/30/2022
# Monitor logs in Azure Container Apps with Log Analytics
-Azure Container Apps is integrated with Azure Monitor Log Analytics to monitor and analyze your container app's logs. Each Container Apps environment includes a Log Analytics workspace that provides a common place to store the system and application log data from all container apps running in the environment.
+Azure Container Apps is integrated with Azure Monitor Log Analytics to monitor and analyze your container app's logs. When selected as your log monitoring solution, your Container Apps environment includes a Log Analytics workspace that provides a common place to store the system and application log data from all container apps running in the environment.
Log entries are accessible by querying Log Analytics tables through the Azure portal or a command shell using the [Azure CLI](/cli/azure/monitor/log-analytics).
$queryResults.Results
-For more information about using Azure CLI to view container app logs, see [Viewing Logs](monitor.md#viewing-logs).
- ## Next steps > [!div class="nextstepaction"]
container-apps Log Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-options.md
+
+ Title: Log storage and monitoring options in Azure Container Apps
+description: Description of logging options in Azure Container Apps
++++ Last updated : 09/29/2022+++
+# Log storage and monitoring options in Azure Container Apps
+
+Azure Container Apps gives you options for storing and viewing your application logs. Logging options are configured in your Container Apps environment where you select the log destination.
+
+Container Apps application logs consist of two different categories:
+
+- Container console output (`stdout`/`stderr`) messages.
+- System logs generated by Azure Container Apps.
+
+You can choose between these log destinations:
+
+- **Log Analytics**: Azure Monitor Log Analytics is the default storage and viewing option. Your logs are stored in a Log Analytics workspace where they can be viewed and analyzed using Log Analytics queries. To learn more about Log Analytics, see [Azure Monitor Log Analytics](log-monitoring.md).
+- **Azure Monitor**: Azure Monitor routes logs to one or more destinations:
+ - Log Analytics workspace for viewing and analysis.
+ - Azure storage account to archive.
+ - Azure event hub for data ingestion and analytic services. For more information, see [Azure Event Hubs](../event-hubs/event-hubs-about.md).
+  - An Azure partner monitoring solution such as Datadog, Elastic, Logz.io, and others. For more information, see [Partner solutions](../partner-solutions/overview.md).
+- **None**: You can disable the storage of log data. You'll still be able to view real-time container logs via the **Logs stream** feature in your container app. For more information, see [Log streaming](log-streaming.md).
+
+When *None* or the *Azure Monitor* destination is selected, the **Logs** menu item providing the Log Analytics query editor in the Azure portal is disabled.
+
+## Configure options via the Azure portal
+
+Use these steps to configure the logging options for your Container Apps environment in the Azure portal:
+
+1. Go to **Logging options** on your Container Apps environment window in the portal.
+ :::image type="content" source="media/observability/log-opts-screenshot-log-analytics.png" alt-text="Screenshot of logs destinations.":::
+1. You can choose from the following **Logs Destination** options:
+ - **Log Analytics**: With this option, you select a Log Analytics workspace to store your log data. Your logs can be viewed through Log Analytics queries. To learn more about Log Analytics, see [Azure Monitor Log Analytics](log-monitoring.md).
+ - **Azure Monitor**: Azure Monitor routes your logs to a destination. When you select this option, you must select **Diagnostic settings** to complete the configuration after you select **Save** on this page.
+ - **None**: This option disables the storage of log data.
+1. Select **Save**.
+ :::image type="content" source="media/observability/log-opts-screenshot-page-save-button.png" alt-text="Screenshot Logging options page.":::
+1. If you have selected **Azure Monitor** as your logs destination, you must configure **Diagnostic settings**. The **Diagnostic settings** item will appear below the **Logging options** menu item.
+
+### Diagnostic settings
+
+When you select **Azure Monitor** as your logs destination, you must configure the destination details. Select **Diagnostic settings** from the left side menu of the Container Apps Environment window in the portal.
++
+Destination details are saved as *diagnostic settings*. You can create up to five diagnostic settings for your container app environment. You can configure different log categories for each diagnostic setting. For example, create one diagnostic setting to send the system logs category to one destination, and another to send the container console logs category to another destination.
+
+To create a new *diagnostic setting*:
+
+1. Select **Add diagnostic setting**.
+ :::image type="content" source="media/observability/diag-setting-new-diag-setting.png" alt-text="Screenshot Diagnostic setting Add new diagnostic setting.":::
+1. Enter a name for your diagnostic setting.
+ :::image type="content" source="media/observability/diag-setting-dialog.png" alt-text="Screenshot Diagnostics settings dialog.":::
+1. Select the log **Category groups** or **Categories** you want to send to this destination. You can select one or more categories.
+
+1. Select one or more **Destination details**:
+ - **Send to Log Analytics workspace**: Select from existing Log Analytics workspaces.
+ :::image type="content" source="media/observability/diag-setting-log-analytics-console-log.png" alt-text="Screenshot diagnostic settings Log Analytics destination.":::
+ - **Archive to a storage account**: You can choose from existing storage accounts. When the individual log categories are selected, you can set the **Retention (days)** for each category.
+ :::image type="content" source="media/observability/diag-setting-storage-acct.png" alt-text="Screenshot Diagnostic settings storage destination.":::
+ - **Stream to an event hub**: Select from Azure event hubs.
+ :::image type="content" source="media/observability/diag-settings-event-hub.png" alt-text="Screenshot Diagnostic settings event hub destination.":::
+ - **Send to a partner solution**: Select from Azure partner solutions.
+1. Select **Save**.
+
+For more information about Diagnostic settings, see [Diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md).
+
+## Configure options using the Azure CLI
+
+Configure the logs destination for your Container Apps environment using the Azure CLI `az containerapp env create` and `az containerapp env update` commands with the `--logs-destination` argument.
+
+The destination values are: `log-analytics`, `azure-monitor`, and `none`.
+
+For example, to create a Container Apps environment using an existing Log Analytics workspace as the logs destination, you must provide the `--logs-destination` argument with the value `log-analytics` and the `--logs-destination-id` argument with the value of the Log Analytics workspace resource ID. You can get the resource ID from the Log Analytics workspace page in the Azure portal or from the ```az monitor log-analytics workspace show``` command.
+
+Replace \<PLACEHOLDERS\> with your values:
+
+```azurecli
+az containerapp env create \
+ --name <ENVIRONMENT_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --logs-destination log-analytics \
+ --logs-workspace-id <WORKSPACE_ID>
+```
+
+To update an existing Container Apps environment to use Azure Monitor as the logs destination:
+
+Replace \<PLACEHOLDERS\> with your values:
+
+```azurecli
+az containerapp env update \
+ --name <ENVIRONMENT_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --logs-destination azure-monitor
+
+```
+
+When `--logs-destination` is set to `azure-monitor`, create diagnostic settings to configure the destination details for the log categories with the `az monitor diagnostic-settings` command.
+
+For more information about Azure Monitor diagnostic settings commands, see [az monitor diagnostic-settings](/cli/azure/monitor/diagnostic-settings). Container Apps log categories are `ContainerAppConsoleLogs` and `ContainerAppSystemLogs`.
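For instance, a diagnostic setting that routes both Container Apps log categories to a Log Analytics workspace might be created with a command similar to the following sketch, where the setting name and resource IDs are placeholders:

```azurecli
az monitor diagnostic-settings create \
  --name my-container-apps-logs \
  --resource <CONTAINER_APPS_ENVIRONMENT_RESOURCE_ID> \
  --workspace <LOG_ANALYTICS_WORKSPACE_RESOURCE_ID> \
  --logs '[{"category":"ContainerAppConsoleLogs","enabled":true},{"category":"ContainerAppSystemLogs","enabled":true}]'
```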
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Monitor logs with Log Analytics](log-monitoring.md)
container-apps Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/logging.md
+
+ Title: Application logging in Azure Container Apps
+description: Description of logging in Azure Container Apps
+++++ Last updated : 09/29/2022+++
+# Application Logging in Azure Container Apps
+
+Azure Container Apps provides two categories of application logging:
+
+- [Container console logs](#container-console-logs): Log streams from your container console.
+- [System logs](#system-logs): Logs generated by the Azure Container Apps service.
++
+## Container console logs
+
+Container console logs are written by your application to the `stdout` and `stderr` output streams of the application's container. By implementing detailed logging in your application, you'll be able to troubleshoot issues and monitor the health of your application.
+
+You can view your container console logs through [Logs streaming](log-streaming.md). For other options to store and monitor your log data, see [Logging options](log-options.md).
+
+## System logs
+
+System logs are generated by the Azure Container Apps service to inform you of the status of service-level events. Log messages include the following information:
+
+- Successfully created dapr component
+- Successfully updated dapr component
+- Error creating dapr component
+- Successfully mounted volume
+- Error mounting volume
+- Successfully bound Domain
+- Auth enabled on app. Creating authentication config
+- Auth config created successfully
+- Setting a traffic weight
+- Creating a new revision:
+- Successfully provisioned revision
+- Deactivating Old revisions
+- Error provisioning revision
+
+The system log data can be stored and monitored through the Container Apps logging options. For more information, see [Logging options](log-options.md).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Logging options](log-options.md)
container-apps Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md
Previously updated : 07/28/2022 Last updated : 09/29/2022 -+ # Manage secrets in Azure Container Apps
-Azure Container Apps allows your application to securely store sensitive configuration values. Once defined at the application level, secured values are available to containers, inside scale rules, and via Dapr.
+Azure Container Apps allows your application to securely store sensitive configuration values. Once secrets are defined at the application level, secured values are available to container apps. Specifically, you can reference secured values inside scale rules. For information on using secrets with Dapr, refer to [Dapr integration](./dapr-overview.md).
- Secrets are scoped to an application, outside of any specific revision of an application.-- Adding, removing, or changing secrets does not generate new revisions.
+- Adding, removing, or changing secrets doesn't generate new revisions.
- Each application revision can reference one or more secrets. - Multiple revisions can reference the same secret(s).
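For example, a secret and a container app that references it might be created together with the Container Apps CLI. The following is a sketch with placeholder names and values, not a complete deployment:

```azurecli
az containerapp create \
  --name my-app \
  --resource-group my-resource-group \
  --environment my-environment \
  --image <CONTAINER_IMAGE> \
  --secrets "queue-connection-string=<CONNECTION_STRING_VALUE>" \
  --env-vars "QueueConnectionString=secretref:queue-connection-string"
```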
-An updated or deleted secret does not automatically impact existing revisions in your app. When a secret is updated or deleted, you can respond to changes in one of two ways:
+An updated or deleted secret doesn't automatically affect existing revisions in your app. When a secret is updated or deleted, you can respond to changes in one of two ways:
- 1. Deploy a new revision.
- 2. Restart an existing revision.
+1. Deploy a new revision.
+2. Restart an existing revision.
Before you delete a secret, deploy a new revision that no longer references the old secret. Then deactivate all revisions that reference the secret.
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
description: Using managed identities in Container Apps
-+ Previously updated : 06/02/2022 Last updated : 09/29/2022 # Managed identities in Azure Container Apps
-A managed identity from Azure Active Directory (Azure AD) allows your container app to access other Azure AD-protected resources. For more about managed identities in Azure AD, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+A managed identity from Azure Active Directory (Azure AD) allows your container app to access other Azure AD-protected resources. For more about managed identities in Azure AD, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
Your container app can be granted two types of identities:
Your container app can be granted two types of identities:
## Why use a managed identity?
+You can use a managed identity in a running container app to authenticate to any [service that supports Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
+
+With managed identities:
-- **Authentication service options**: You can use a managed identity in a running container app to authenticate to any [service that supports Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication). - Your app connects to resources with the managed identity. You don't need to manage credentials in your container app. - You can use role-based access control to grant specific permissions to a managed identity. - System-assigned identities are automatically created and managed. They're deleted when your container app is deleted. - You can add and delete user-assigned identities and assign them to multiple resources. They're independent of your container app's life cycle.-- You can use managed identity to pull images from a private Azure Container Registry without a username and password. For more information, see [Azure Container Apps image pull with managed identity](managed-identity-image-pull.md).
+- You can use managed identity to [authenticate with a private Azure Container Registry](containers.md#container-registries) without a username and password to pull containers for your Container App.
+- You can use [managed identity to create connections for Dapr-enabled applications via Dapr components](./dapr-overview.md)
### Common use cases
User-assigned identities are ideal for workloads that:
## Limitations
-The identity is only available within a running container, which means you can't use a managed identity in scaling rules or Dapr configuration. To access resources that require a connection string or key, such as storage resources, you'll still need to include the connection string or key in the `secretRef` of the scaling rule.
+Using managed identities in scale rules isn't supported. You'll still need to include the connection string or key in the `secretRef` of the scaling rule.
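As a rough sketch of what that looks like, a scale rule in a container app's template can reference a secret for its connection metadata. The rule type, metadata values, and secret name below are illustrative assumptions rather than a definitive schema:

```json
"scale": {
  "minReplicas": 0,
  "maxReplicas": 5,
  "rules": [
    {
      "name": "queue-scale-rule",
      "custom": {
        "type": "azure-queue",
        "metadata": {
          "queueName": "my-queue",
          "queueLength": "10"
        },
        "auth": [
          {
            "secretRef": "queue-connection-string",
            "triggerParameter": "connection"
          }
        ]
      }
    }
  ]
}
```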
## Configure managed identities
-You can configure your managed identities through:
+You can configure your managed identities through:
- the Azure portal - the Azure CLI
You can configure your managed identities through:
When a managed identity is added, deleted, or modified on a running container app, the app doesn't automatically restart and a new revision isn't created. > [!NOTE]
-> When adding a managed identity to a container app deployed before April 11, 2022, you must create a new revision.
+> When adding a managed identity to a container app deployed before April 11, 2022, you must create a new revision.
### Add a system-assigned identity
An ARM template can be used to automate deployment of your container app and res
```json "identity": {
- "type": "SystemAssigned"
+ "type": "SystemAssigned"
} ``` Adding the system-assigned type tells Azure to create and manage the identity for your application. For a complete ARM template example, see [ARM API Specification](azure-resource-manager-api-spec.md?tabs=arm-template#container-app-examples). + ### Add a user-assigned identity
-Configuring a container app with a user-assigned identity requires that you first create the identity then add its resource identifier to your container app's configuration. You can create user-assigned identities via the Azure portal or the Azure CLI. For information on creating and managing user-assigned identities, see [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+Configuring a container app with a user-assigned identity requires that you first create the identity then add its resource identifier to your container app's configuration. You can create user-assigned identities via the Azure portal or the Azure CLI. For information on creating and managing user-assigned identities, see [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
# [Azure portal](#tab/portal)
First, you'll need to create a user-assigned identity resource.
1. Create a user-assigned identity.
- ```azurecli
- az identity create --resource-group <GROUP_NAME> --name <IDENTITY_NAME> --output json
- ```
+ ```azurecli
+ az identity create --resource-group <GROUP_NAME> --name <IDENTITY_NAME> --output json
+ ```
- Note the `id` property of the new identity.
+ Note the `id` property of the new identity.
1. Run the `az containerapp identity assign` command to assign the identity to the app. The identities parameter is a space separated list.
- ```azurecli
- az containerapp identity assign --resource-group <GROUP_NAME> --name <APP_NAME> \
- --user-assigned <IDENTITY_RESOURCE_ID>
- ```
+ ```azurecli
+ az containerapp identity assign --resource-group <GROUP_NAME> --name <APP_NAME> \
+ --user-assigned <IDENTITY_RESOURCE_ID>
+ ```
- Replace `<IDENTITY_RESOURCE_ID>` with the `id` property of the identity. To assign more than one user-assigned identity, supply a space-separated list of identity IDs to the `--user-assigned` parameter.
+ Replace `<IDENTITY_RESOURCE_ID>` with the `id` property of the identity. To assign more than one user-assigned identity, supply a space-separated list of identity IDs to the `--user-assigned` parameter.
# [ARM template](#tab/arm)
For a complete ARM template example, see [ARM API Specification](azure-resource-
> [!NOTE] > An application can have both system-assigned and user-assigned identities at the same time. In this case, the type property would be `SystemAssigned,UserAssigned`. + ## Configure a target resource
-For some resources, you'll need to configure role assignments for your app's managed identity to grant access. Otherwise, calls from your app to services, such as Azure Key Vault and Azure SQL Database, will be rejected even if you use a valid token for that identity. To learn more about Azure role-based access control (Azure RBAC), see [What is RBAC?](../role-based-access-control/overview.md). To learn more about which resources support Azure Active Directory tokens, see [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
+For some resources, you'll need to configure role assignments for your app's managed identity to grant access. Otherwise, calls from your app to services, such as Azure Key Vault and Azure SQL Database, will be rejected even if you use a valid token for that identity. To learn more about Azure role-based access control (Azure RBAC), see [What is RBAC?](../role-based-access-control/overview.md). To learn more about which resources support Azure Active Directory tokens, see [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
> [!IMPORTANT] > The back-end services for managed identities maintain a cache per resource URI for around 24 hours. If you update the access policy of a particular target resource and immediately retrieve a token for that resource, you may continue to get a cached token with outdated permissions until that token expires. There's currently no way to force a token refresh.
Container Apps provides an internally accessible [REST endpoint](managed-identit
# [.NET](#tab/dotnet) > [!NOTE]
-> When connecting to Azure SQL data sources with [Entity Framework Core](/ef/core/), consider [using Microsoft.Data.SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication), which provides special connection strings for managed identity connectivity.
+> When connecting to Azure SQL data sources with [Entity Framework Core](/ef/core/), consider [using Microsoft.Data.SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication), which provides special connection strings for managed identity connectivity.
For .NET apps, the simplest way to work with a managed identity is through the [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme). See the respective documentation headings of the client library for information:
For more code examples of the Azure Identity client library for Java, see [Azure
# [PowerShell](#tab/powershell)
-Use the following script to retrieve a token from the local endpoint by specifying a resource URI of an Azure service. Replace the place holder with the resource URI to obtain the token.
+Use the following script to retrieve a token from the local endpoint by specifying a resource URI of an Azure service. Replace the place holder with the resource URI to obtain the token.
```powershell $resourceURI = "https://<AAD-resource-URI>"
A container app with a managed identity exposes the identity endpoint by definin
To get a token for a resource, make an HTTP GET request to the endpoint, including the following parameters:
-| Parameter name | In | Description|
-||||
-| resource | Query | The Azure AD resource URI of the resource for which a token should be obtained. The resource could be one of the [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication) or any other resource URI. |
-| api-version | Query | The version of the token API to be used. Use "2019-08-01" or later. |
-| X-IDENTITY-HEADER | Header | The value of the `IDENTITY_HEADER` environment variable. This header mitigates server-side request forgery (SSRF) attacks. |
-| client_id | Query | (Optional) The client ID of the user-assigned identity to be used. Can't be used on a request that includes `principal_id`, `mi_res_id`, or `object_id`. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used.|
-| principal_id | Query | (Optional) The principal ID of the user-assigned identity to be used. `object_id` is an alias that may be used instead. Can't be used on a request that includes client_id, mi_res_id, or object_id. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used. |
-| mi_res_id| Query | (Optional) The Azure resource ID of the user-assigned identity to be used. Can't be used on a request that includes `principal_id`, `client_id`, or `object_id`. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used.|
+| Parameter name | In | Description |
+| -- | | - |
+| resource | Query | The Azure AD resource URI of the resource for which a token should be obtained. The resource could be one of the [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication) or any other resource URI. |
+| api-version | Query | The version of the token API to be used. Use "2019-08-01" or later. |
+| X-IDENTITY-HEADER | Header | The value of the `IDENTITY_HEADER` environment variable. This header mitigates server-side request forgery (SSRF) attacks. |
+| client_id | Query | (Optional) The client ID of the user-assigned identity to be used. Can't be used on a request that includes `principal_id`, `mi_res_id`, or `object_id`. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used. |
+| principal_id | Query | (Optional) The principal ID of the user-assigned identity to be used. `object_id` is an alias that may be used instead. Can't be used on a request that includes client_id, mi_res_id, or object_id. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used. |
+| mi_res_id | Query | (Optional) The Azure resource ID of the user-assigned identity to be used. Can't be used on a request that includes `principal_id`, `client_id`, or `object_id`. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used. |
> [!IMPORTANT] > If you are attempting to obtain tokens for user-assigned identities, you must include one of the optional properties. Otherwise the token service will attempt to obtain a token for a system-assigned identity, which may or may not exist. For more information on the REST endpoint, see [REST endpoint reference](#rest-endpoint-reference). ++ ## View managed identities
-You can show the system-assigned and user-assigned managed identities using the following Azure CLI command. The output will show the managed identity type, tenant IDs and principal IDs of all managed identities assigned to your container app.
+You can show the system-assigned and user-assigned managed identities using the following Azure CLI command. The output shows the managed identity type, tenant IDs and principal IDs of all managed identities assigned to your container app.
```azurecli az containerapp identity show --name <APP_NAME> --resource-group <GROUP_NAME>
az containerapp identity show --name <APP_NAME> --resource-group <GROUP_NAME>
## Remove a managed identity
-When you remove a system-assigned identity, it's deleted from Azure Active Directory. System-assigned identities are also automatically removed from Azure Active Directory when you delete the container app resource itself. Removing user-assigned managed identities from your container app doesn't remove them from Azure Active Directory.
+When you remove a system-assigned identity, it's deleted from Azure Active Directory. System-assigned identities are also automatically removed from Azure Active Directory when you delete the container app resource itself. Removing user-assigned managed identities from your container app doesn't remove them from Azure Active Directory.
# [Azure portal](#tab/portal)
When you remove a system-assigned identity, it's deleted from Azure Active Direc
1. Select **Identity**. Then follow the steps based on the identity type:
- - **System-assigned identity**: Within the **System assigned** tab, switch **Status** to **Off**. Select **Save**.
- - **User-assigned identity**: Select the **User assigned** tab, select the checkbox for the identity, and select **Remove**. Select **Yes** to confirm.
+ - **System-assigned identity**: Within the **System assigned** tab, switch **Status** to **Off**. Select **Save**.
+ - **User-assigned identity**: Select the **User assigned** tab, select the checkbox for the identity, and select **Remove**. Select **Yes** to confirm.
# [Azure CLI](#tab/cli)
To remove all identities, set the `type` of the container app's identity to `Non
} ``` + ## Next steps
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
Title: "Tutorial: Deploy a Dapr application to Azure Container Apps with an ARM or Bicep template"
-description: Deploy a Dapr application to Azure Container Apps with an ARM or Bicep template.
+ Title: "Tutorial: Deploy a Dapr application to Azure Container Apps with an Azure Resource Manager or Bicep template"
+description: Deploy a Dapr application to Azure Container Apps with an Azure Resource Manager or Bicep template.
Previously updated : 06/23/2022 Last updated : 06/29/2022 -+ zone_pivot_groups: container-apps # Tutorial: Deploy a Dapr application to Azure Container Apps with an Azure Resource Manager or Bicep template
-[Dapr](https://dapr.io/) (Distributed Application Runtime) is a runtime that helps you build resilient stateless and stateful microservices. In this tutorial, a sample Dapr application is deployed to Azure Container Apps via an Azure Resource Manager (ARM) or Bicep template.
+[Dapr](https://dapr.io/) (Distributed Application Runtime) is a runtime that helps you build resilient stateless and stateful microservices. In this tutorial, a sample Dapr solution is deployed to Azure Container Apps via an Azure Resource Manager (ARM) or Bicep template.
You learn how to: > [!div class="checklist"]
-> * Create an Azure Blob Storage for use as a Dapr state store
-> * Deploy a Container Apps environment to host container apps
-> * Deploy two dapr-enabled container apps: one that produces orders and one that consumes orders and stores them
-> * Verify the interaction between the two microservices.
+>
+> - Create an Azure Blob Storage for use as a Dapr state store
+> - Deploy a Container Apps environment to host container apps
+> - Deploy two dapr-enabled container apps: one that produces orders and one that consumes orders and stores them
+> - Assign a user-assigned identity to a container app and supply it with the appropriate role assignment to authenticate to the Dapr state store
+> - Verify the interaction between the two microservices.
-With Azure Container Apps, you get a [fully managed version of the Dapr APIs](./dapr-overview.md) when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities. Available Dapr APIs include [Service to Service calls](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/), [Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/), [Event Bindings](https://docs.dapr.io/developing-applications/building-blocks/bindings/), [State Stores](https://docs.dapr.io/developing-applications/building-blocks/state-management/), and [Actors](https://docs.dapr.io/developing-applications/building-blocks/actors/).
+With Azure Container Apps, you get a [fully managed version of the Dapr APIs](./dapr-overview.md) when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities.
-In this tutorial, you deploy the same applications from the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world) quickstart.
+In this tutorial, you deploy the solution from the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world) quickstart.
The application consists of:
The following architecture diagram illustrates the components that make up this
## Prerequisites - Install [Azure CLI](/cli/azure/install-azure-cli)
+- Install [Git](https://git-scm.com/downloads)
::: zone pivot="container-apps-bicep"
The following architecture diagram illustrates the components that make up this
::: zone-end - An Azure account with an active subscription is required. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-
-## Before you begin
-
-This guide uses the following environment variables:
-
-# [Bash](#tab/bash)
-
-```bash
-RESOURCE_GROUP="my-containerapps"
-LOCATION="canadacentral"
-CONTAINERAPPS_ENVIRONMENT="containerapps-env"
-STORAGE_ACCOUNT_CONTAINER="mycontainer"
-```
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-$RESOURCE_GROUP="my-containerapps"
-$LOCATION="canadacentral"
-$CONTAINERAPPS_ENVIRONMENT="containerapps-env"
-$STORAGE_ACCOUNT_CONTAINER="mycontainer"
-```
---
-# [Bash](#tab/bash)
-
-```bash
-STORAGE_ACCOUNT="<storage account name>"
-```
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-$STORAGE_ACCOUNT="<storage account name>"
-```
---
-Choose a name for `STORAGE_ACCOUNT`. Storage account names must be _unique within Azure_. Be from 3 to 24 characters in length and contain numbers and lowercase letters only.
+- A GitHub Account. If you don't already have one, sign up for [free](https://github.com/join).
## Setup
First, sign in to Azure.
az login ```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
+```azurepowershell
Connect-AzAccount ```
-Ensure you're running the latest version of the CLI via the upgrade command.
- # [Bash](#tab/bash)
-```azurecli
-az upgrade
-```
-
-# [PowerShell](#tab/powershell)
+Ensure you're running the latest version of the CLI via the upgrade command and then install the Azure Container Apps extension for the Azure CLI.
```azurecli az upgrade
-```
-
+az extension add --name containerapp --upgrade
+```
-Next, install the Azure Container Apps extension for the Azure CLI.
+# [Azure PowerShell](#tab/azure-powershell)
-# [Bash](#tab/bash)
+You must have the latest `Az` module installed. Ignore any warnings about modules currently in use.
-```azurecli
-az extension add --name containerapp --upgrade
+```azurepowershell
+Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
```
-# [PowerShell](#tab/powershell)
+Now install the Az.App module.
-```azurecli
-az extension add --name containerapp --upgrade
+```azurepowershell
+Install-Module -Name Az.App
```
-Now that the extension is installed, register the `Microsoft.App` namespace.
-
-> [!NOTE]
-> Azure Container Apps resources have migrated from the `Microsoft.Web` namespace to the `Microsoft.App` namespace. Refer to [Namespace migration from Microsoft.Web to Microsoft.App in March 2022](https://github.com/microsoft/azure-container-apps/issues/109) for more details.
+Now that the current extension or module is installed, register the `Microsoft.App` namespace.
# [Bash](#tab/bash)
Now that the extension is installed, register the `Microsoft.App` namespace.
az provider register --namespace Microsoft.App ```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
+```azurepowershell
Register-AzResourceProvider -ProviderNamespace Microsoft.App ```
-Create a resource group to organize the services related to your container apps.
+Next, set the following environment variables:
# [Bash](#tab/bash) ```azurecli
-az group create \
- --name $RESOURCE_GROUP \
- --location "$LOCATION"
+RESOURCE_GROUP="my-container-apps"
+LOCATION="centralus"
+CONTAINERAPPS_ENVIRONMENT="my-environment"
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-New-AzResourceGroup -Name $RESOURCE_GROUP -Location $LOCATION
+```azurepowershell
+$ResourceGroupName = 'my-container-apps'
+$Location = 'centralus'
+$ContainerAppsEnvironment = 'my-environment'
```
-## Set up a state store
-
-### Create an Azure Blob Storage account
-
-Use the following command to create an Azure Storage account.
+With these variables defined, you can create a resource group to organize the services needed for this tutorial.
# [Bash](#tab/bash) ```azurecli
-az storage account create \
- --name $STORAGE_ACCOUNT \
- --resource-group $RESOURCE_GROUP \
- --location "$LOCATION" \
- --sku Standard_RAGRS \
- --kind StorageV2
+az group create \
+ --name $RESOURCE_GROUP \
+ --location $LOCATION
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-New-AzStorageAccount -ResourceGroupName $RESOURCE_GROUP `
- -Name $STORAGE_ACCOUNT `
- -Location $LOCATION `
- -SkuName Standard_RAGRS
+```azurepowershell
+New-AzResourceGroup -Location $Location -Name $ResourceGroupName
```
-Once your Azure Blob Storage account is created, you'll create a template where these storage parameters will use environment variable values. The values are passed in via the `parameters` argument when you deploy your apps with the `az deployment group create` command.
--- `storage_account_name` uses the value of the `STORAGE_ACCOUNT` variable.--- `storage_container_name` uses the value of the `STORAGE_ACCOUNT_CONTAINER` variable. Dapr creates a container with this name when it doesn't already exist in your Azure Storage account.--
-### Create Azure Resource Manager (ARM) template
-
-Create an ARM template to deploy a Container Apps environment that includes:
-
-* the associated Log Analytics workspace
-* the Application Insights resource for distributed tracing
-* a dapr component for the state store
-* the two dapr-enabled container apps: [hello-k8s-node](https://hub.docker.com/r/dapriosamples/hello-k8s-node) and [hello-k8s-python](https://hub.docker.com/r/dapriosamples/hello-k8s-python)
-
-
-Save the following file as _hello-world.json_:
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "environment_name": {
- "type": "string"
- },
- "location": {
- "defaultValue": "canadacentral",
- "type": "string"
- },
- "storage_account_name": {
- "type": "string"
- },
- "storage_container_name": {
- "type": "string"
- }
- },
- "variables": {
- "logAnalyticsWorkspaceName": "[concat('logs-', parameters('environment_name'))]",
- "appInsightsName": "[concat('appins-', parameters('environment_name'))]"
- },
- "resources": [
- {
- "type": "Microsoft.OperationalInsights/workspaces",
- "apiVersion": "2021-06-01",
- "name": "[variables('logAnalyticsWorkspaceName')]",
- "location": "[parameters('location')]",
- "properties": {
- "retentionInDays": 30,
- "features": {
- "searchVersion": 1
- },
- "sku": {
- "name": "PerGB2018"
- }
- }
- },
- {
- "type": "Microsoft.Insights/components",
- "apiVersion": "2020-02-02",
- "name": "[variables('appInsightsName')]",
- "location": "[parameters('location')]",
- "kind": "web",
- "dependsOn": [
- "[resourceId('Microsoft.OperationalInsights/workspaces/', variables('logAnalyticsWorkspaceName'))]"
- ],
- "properties": {
- "Application_Type": "web",
- "WorkspaceResourceId": "[resourceId('Microsoft.OperationalInsights/workspaces/', variables('logAnalyticsWorkspaceName'))]"
- }
- },
- {
- "type": "Microsoft.App/managedEnvironments",
- "apiVersion": "2022-03-01",
- "name": "[parameters('environment_name')]",
- "location": "[parameters('location')]",
- "dependsOn": [
- "[resourceId('Microsoft.Insights/components/', variables('appInsightsName'))]"
- ],
- "properties": {
- "daprAIInstrumentationKey": "[reference(resourceId('Microsoft.Insights/components/', variables('appInsightsName')), '2020-02-02').InstrumentationKey]",
- "appLogsConfiguration": {
- "destination": "log-analytics",
- "logAnalyticsConfiguration": {
- "customerId": "[reference(resourceId('Microsoft.OperationalInsights/workspaces/', variables('logAnalyticsWorkspaceName')), '2021-06-01').customerId]",
- "sharedKey": "[listKeys(resourceId('Microsoft.OperationalInsights/workspaces/', variables('logAnalyticsWorkspaceName')), '2021-06-01').primarySharedKey]"
- }
- }
- },
- "resources": [
- {
- "type": "daprComponents",
- "name": "statestore",
- "apiVersion": "2022-03-01",
- "dependsOn": [
- "[resourceId('Microsoft.App/managedEnvironments/', parameters('environment_name'))]"
- ],
- "properties": {
- "componentType": "state.azure.blobstorage",
- "version": "v1",
- "ignoreErrors": false,
- "initTimeout": "5s",
- "secrets": [
- {
- "name": "storageaccountkey",
- "value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts/', parameters('storage_account_name')), '2021-09-01').keys[0].value]"
- }
- ],
- "metadata": [
- {
- "name": "accountName",
- "value": "[parameters('storage_account_name')]"
- },
- {
- "name": "containerName",
- "value": "[parameters('storage_container_name')]"
- },
- {
- "name": "accountKey",
- "secretRef": "storageaccountkey"
- }
- ],
- "scopes": ["nodeapp"]
- }
- }
- ]
- },
- {
- "type": "Microsoft.App/containerApps",
- "apiVersion": "2022-03-01",
- "name": "nodeapp",
- "location": "[parameters('location')]",
- "dependsOn": [
- "[resourceId('Microsoft.App/managedEnvironments/', parameters('environment_name'))]"
- ],
- "properties": {
- "managedEnvironmentId": "[resourceId('Microsoft.App/managedEnvironments/', parameters('environment_name'))]",
- "configuration": {
- "ingress": {
- "external": false,
- "targetPort": 3000
- },
- "dapr": {
- "enabled": true,
- "appId": "nodeapp",
-          "appProtocol": "http",
- "appPort": 3000
- }
- },
- "template": {
- "containers": [
- {
- "image": "dapriosamples/hello-k8s-node:latest",
- "name": "hello-k8s-node",
- "env": [
- {
- "name": "APP_PORT",
- "value": "3000"
- }
- ],
- "resources": {
- "cpu": 0.5,
- "memory": "1.0Gi"
- }
- }
- ],
- "scale": {
- "minReplicas": 1,
- "maxReplicas": 1
- }
- }
- }
- },
- {
- "type": "Microsoft.App/containerApps",
- "apiVersion": "2022-03-01",
- "name": "pythonapp",
- "location": "[parameters('location')]",
- "dependsOn": [
- "[resourceId('Microsoft.App/managedEnvironments/', parameters('environment_name'))]",
- "[resourceId('Microsoft.App/containerApps/', 'nodeapp')]"
- ],
- "properties": {
- "managedEnvironmentId": "[resourceId('Microsoft.App/managedEnvironments/', parameters('environment_name'))]",
- "configuration": {
- "dapr": {
- "enabled": true,
- "appId": "pythonapp"
- }
- },
- "template": {
- "containers": [
- {
- "image": "dapriosamples/hello-k8s-python:latest",
- "name": "hello-k8s-python",
- "resources": {
- "cpu": 0.5,
- "memory": "1.0Gi"
- }
- }
- ],
- "scale": {
- "minReplicas": 1,
- "maxReplicas": 1
- }
- }
- }
- }
- ]
-}
-```
-
+## Prepare the GitHub repository
+Go to the repository holding the ARM and Bicep templates that's used to deploy the solution.
-### Create Azure Bicep templates
-
-Create a bicep template to deploy a Container Apps environment that includes:
-
-* the associated Log Analytics workspace
-* the Application Insights resource for distributed tracing
-* a dapr component for the state store
-* the two dapr-enabled container apps: [hello-k8s-node](https://hub.docker.com/r/dapriosamples/hello-k8s-node) and [hello-k8s-python](https://hub.docker.com/r/dapriosamples/hello-k8s-python)
-
-Save the following file as _hello-world.bicep_:
-
-```bicep
-param environment_name string
-param location string = 'canadacentral'
-param storage_account_name string
-param storage_container_name string
-
-var logAnalyticsWorkspaceName = 'logs-${environment_name}'
-var appInsightsName = 'appins-${environment_name}'
-
-resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
- name: logAnalyticsWorkspaceName
- location: location
- properties: any({
- retentionInDays: 30
- features: {
- searchVersion: 1
- }
- sku: {
- name: 'PerGB2018'
- }
- })
-}
+Select the **Fork** button at the top of the [repository](https://github.com/Azure-Samples/Tutorial-Deploy-Dapr-Microservices-ACA) to fork the repo to your account.
-resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
- name: appInsightsName
- location: location
- kind: 'web'
- properties: {
- Application_Type: 'web'
- WorkspaceResourceId: logAnalyticsWorkspace.id
- }
-}
+Now you can clone your fork to work with it locally.
-resource environment 'Microsoft.App/managedEnvironments@2022-03-01' = {
- name: environment_name
- location: location
- properties: {
- daprAIInstrumentationKey: reference(appInsights.id, '2020-02-02').InstrumentationKey
- appLogsConfiguration: {
- destination: 'log-analytics'
- logAnalyticsConfiguration: {
- customerId: reference(logAnalyticsWorkspace.id, '2021-06-01').customerId
- sharedKey: listKeys(logAnalyticsWorkspace.id, '2021-06-01').primarySharedKey
- }
- }
- }
- resource daprComponent 'daprComponents@2022-03-01' = {
- name: 'statestore'
- properties: {
- componentType: 'state.azure.blobstorage'
- version: 'v1'
- ignoreErrors: false
- initTimeout: '5s'
- secrets: [
- {
- name: 'storageaccountkey'
- value: listKeys(resourceId('Microsoft.Storage/storageAccounts/', storage_account_name), '2021-09-01').keys[0].value
- }
- ]
- metadata: [
- {
- name: 'accountName'
- value: storage_account_name
- }
- {
- name: 'containerName'
- value: storage_container_name
- }
- {
- name: 'accountKey'
- secretRef: 'storageaccountkey'
- }
- ]
- scopes: [
- 'nodeapp'
- ]
- }
- }
-}
+Use the following git command to clone your forked repo into the _acadapr-templates_ directory.
-resource nodeapp 'Microsoft.App/containerApps@2022-03-01' = {
- name: 'nodeapp'
- location: location
- properties: {
- managedEnvironmentId: environment.id
- configuration: {
- ingress: {
- external: false
- targetPort: 3000
- }
- dapr: {
- enabled: true
- appId: 'nodeapp'
- appProtocol: 'http'
- appPort: 3000
- }
- }
- template: {
- containers: [
- {
- image: 'dapriosamples/hello-k8s-node:latest'
- name: 'hello-k8s-node'
- env: [
- {
- name: 'APP_PORT'
- value: '3000'
- }
- ]
- resources: {
- cpu: json('0.5')
- memory: '1.0Gi'
- }
- }
- ]
- scale: {
- minReplicas: 1
- maxReplicas: 1
- }
- }
- }
-}
-
-resource pythonapp 'Microsoft.App/containerApps@2022-03-01' = {
- name: 'pythonapp'
- location: location
- properties: {
- managedEnvironmentId: environment.id
- configuration: {
- dapr: {
- enabled: true
- appId: 'pythonapp'
- }
- }
- template: {
- containers: [
- {
- image: 'dapriosamples/hello-k8s-python:latest'
- name: 'hello-k8s-python'
- resources: {
- cpu: json('0.5')
- memory: '1.0Gi'
- }
- }
- ]
- scale: {
- minReplicas: 1
- maxReplicas: 1
- }
- }
- }
- dependsOn: [
- nodeapp
- ]
-}
+```git
+git clone https://github.com/$GITHUB_USERNAME/Tutorial-Deploy-Dapr-Microservices-ACA.git acadapr-templates
```
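The clone command above assumes a `GITHUB_USERNAME` shell variable. If you haven't already set one, you can define it first (a minimal sketch; substitute your own GitHub account name):

```bash
GITHUB_USERNAME="<your-github-account>"
```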
+## Deploy
-> [!NOTE]
-> Container Apps does not currently support the native [Dapr components schema](https://docs.dapr.io/operations/components/component-schema/). The above example uses the supported schema.
+The template deploys:
-## Deploy
+- a Container Apps environment
+- a Log Analytics workspace associated with the Container Apps environment
+- an Application Insights resource for distributed tracing
+- a blob storage account and a default storage container
+- a Dapr component for the blob storage account
+- the node, Dapr-enabled container app with a user-assigned managed identity: [hello-k8s-node](https://hub.docker.com/r/dapriosamples/hello-k8s-node)
+- the python, Dapr-enabled container app: [hello-k8s-python](https://hub.docker.com/r/dapriosamples/hello-k8s-python)
+- an Active Directory role assignment for the node app used by the Dapr component to establish a connection to blob storage
+Navigate to the _acadapr-templates_ directory and run the following command:
-Navigate to the directory in which you stored the ARM template file and run the following command:
# [Bash](#tab/bash)

```azurecli
az deployment group create \
  --resource-group "$RESOURCE_GROUP" \
- --template-file ./hello-world.json \
- --parameters \
- environment_name="$CONTAINERAPPS_ENVIRONMENT" \
- location="$LOCATION" \
- storage_account_name="$STORAGE_ACCOUNT" \
- storage_container_name="$STORAGE_ACCOUNT_CONTAINER"
+ --template-file ./azuredeploy.json \
+ --parameters environment_name="$CONTAINERAPPS_ENVIRONMENT"
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
+```azurepowershell
$params = @{
- environment_name = $CONTAINERAPPS_ENVIRONMENT
- location = $LOCATION
- storage_account_name = $STORAGE_ACCOUNT
- storage_container_name = $STORAGE_ACCOUNT_CONTAINER
+ environment_name = $ContainerAppsEnvironment
+}

New-AzResourceGroupDeployment `
- -ResourceGroupName $RESOURCE_GROUP `
+ -ResourceGroupName $ResourceGroupName `
-TemplateParameterObject $params `
- -TemplateFile ./hello-world.json `
+ -TemplateFile ./azuredeploy.json `
  -SkipTemplateParameterPrompt
```
New-AzResourceGroupDeployment `
::: zone pivot="container-apps-bicep"
-Navigate to the directory in which you stored the Bicep template file and run the following command:
-
A warning (BCP081) might be displayed. This warning has no effect on the successful deployment of the application.

# [Bash](#tab/bash)
A warning (BCP081) might be displayed. This warning has no effect on the success
```azurecli
az deployment group create \
  --resource-group "$RESOURCE_GROUP" \
- --template-file ./hello-world.bicep \
- --parameters \
- environment_name="$CONTAINERAPPS_ENVIRONMENT" \
- location="$LOCATION" \
- storage_account_name="$STORAGE_ACCOUNT" \
- storage_container_name="$STORAGE_ACCOUNT_CONTAINER"
+ --template-file ./azuredeploy.bicep \
+ --parameters environment_name="$CONTAINERAPPS_ENVIRONMENT"
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
+```azurepowershell
$params = @{
- environment_name = $CONTAINERAPPS_ENVIRONMENT
- location = $LOCATION
- storage_account_name = $STORAGE_ACCOUNT
- storage_container_name = $STORAGE_ACCOUNT_CONTAINER
+ environment_name = $ContainerAppsEnvironment
+}

New-AzResourceGroupDeployment `
- -ResourceGroupName $RESOURCE_GROUP `
+ -ResourceGroupName $ResourceGroupName `
-TemplateParameterObject $params `
- -TemplateFile ./hello-world.bicep `
+ -TemplateFile ./azuredeploy.bicep `
  -SkipTemplateParameterPrompt
```
New-AzResourceGroupDeployment `
This command deploys:

-- the Container Apps environment and associated Log Analytics workspace for hosting the hello world dapr solution
+- the Container Apps environment and associated Log Analytics workspace for hosting the hello world Dapr solution
- an Application Insights instance for Dapr distributed tracing
-- the `nodeapp` app server running on `targetPort: 3000` with dapr enabled and configured using: `"appId": "nodeapp"` and `"appPort": 3000`
-- the `daprComponents` object of `"type": "state.azure.blobstorage"` scoped for use by the `nodeapp` for storing state
-- the headless `pythonapp` with no ingress and dapr enabled that calls the `nodeapp` service via dapr service-to-service communication
+- the `nodeapp` app server running on `targetPort: 3000` with Dapr enabled and configured using: `"appId": "nodeapp"` and `"appPort": 3000`, and a user-assigned identity with access to the Azure Blob storage via a Storage Data Contributor role assignment
+- A Dapr component of `"type": "state.azure.blobstorage"` scoped for use by the `nodeapp` for storing state
+- the Dapr-enabled, headless `pythonapp` that invokes the `nodeapp` service using Dapr service invocation
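Before moving on to verification, you can optionally confirm that the deployment finished successfully. The following is a minimal sketch using the Azure CLI; the deployment name `azuredeploy` is an assumption based on the default naming derived from the template file name.

```azurecli
az deployment group show \
  --resource-group "$RESOURCE_GROUP" \
  --name azuredeploy \
  --query properties.provisioningState \
  --output tsv
```

A value of `Succeeded` indicates the resources are ready.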
## Verify the result
You can confirm that the services are working correctly by viewing data in your
1. Open the [Azure portal](https://portal.azure.com) in your browser.
-1. Navigate to your storage account.
+1. Go to the newly created storage account in your resource group.
1. Select **Containers** from the menu on the left side.
-1. Select **mycontainer**.
+1. Select the created container.
1. Verify that you can see the file named `order` in the container.
-1. Select on the file.
+1. Select the file.
1. Select the **Edit** tab.
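As an alternative to the portal, you can list the container's blobs from the command line. The following is a sketch using the Azure CLI; the storage account and container names are placeholders for the ones the template created, and `--auth-mode login` assumes your signed-in account has blob data access.

```azurecli
az storage blob list \
  --account-name <STORAGE_ACCOUNT_NAME> \
  --container-name <CONTAINER_NAME> \
  --auth-mode login \
  --query "[].name" \
  --output tsv
```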
az monitor log-analytics query \
--out table ```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-$LOG_ANALYTICS_WORKSPACE_CLIENT_ID=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv)
+```azurepowershell
+$WorkspaceId = (Get-AzContainerAppManagedEnv -ResourceGroupName $ResourceGroupName -EnvName $ContainerAppsEnvironment).LogAnalyticConfigurationCustomerId
```
-```powershell
-$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $LOG_ANALYTICS_WORKSPACE_CLIENT_ID -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5"
+```azurepowershell
+$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $WorkspaceId -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5"
$queryResults.Results ```
az group delete \
--resource-group $RESOURCE_GROUP ```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
+```azurepowershell
Remove-AzResourceGroup -Name $RESOURCE_GROUP -Force ```
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
Title: 'Tutorial: Deploy a Dapr application to Azure Container Apps using the Azure CLI'
+ Title: "Tutorial: Deploy a Dapr application to Azure Container Apps using the Azure CLI"
description: Deploy a Dapr application to Azure Container Apps using the Azure CLI. Previously updated : 08/31/2022 Last updated : 09/29/2022 -+ ms.devlang: azurecli # Tutorial: Deploy a Dapr application to Azure Container Apps using the Azure CLI
-[Dapr](https://dapr.io/) (Distributed Application Runtime) is a runtime that helps build resilient, stateless, and stateful microservices. In this tutorial, a sample Dapr application is deployed to Azure Container Apps.
+[Dapr](https://dapr.io/) (Distributed Application Runtime) helps developers build resilient, reliable microservices. In this tutorial, a sample Dapr application is deployed to Azure Container Apps.
You learn how to: > [!div class="checklist"]
+> - Create a Container Apps environment to host your container apps
+> - Create an Azure Blob Storage account
+> - Create a Dapr state store component for the Azure Blob storage
+> - Deploy two container apps: one that produces messages, and one that consumes messages and persists them in the state store
+> - Verify the solution is up and running
-> * Create a Container Apps environment for your container apps
-> * Create an Azure Blob Storage state store for the container app
-> * Deploy two apps that produce and consume messages and persist them in the state store
-> * Verify the interaction between the two microservices.
-
-With Azure Container Apps, you get a [fully managed version of the Dapr APIs](./dapr-overview.md) when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities. Available Dapr APIs include [Service to Service calls](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/), [Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/), [Event Bindings](https://docs.dapr.io/developing-applications/building-blocks/bindings/), [State Stores](https://docs.dapr.io/developing-applications/building-blocks/state-management/), and [Actors](https://docs.dapr.io/developing-applications/building-blocks/actors/).
-
-In this tutorial, you deploy the same applications from the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) quickstart.
+In this tutorial, you deploy the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world) quickstart.
The application consists of:
-* a client (Python) app that generates messages
-* a service (Node) app that consumes and persists those messages in a configured state store
+- A client (Python) container app to generate messages.
+- A service (Node) container app to consume and persist those messages in a state store
The following architecture diagram illustrates the components that make up this tutorial:
The following architecture diagram illustrates the components that make up this
-
# [Bash](#tab/bash)

Individual container apps are deployed to an Azure Container Apps environment. To create the environment, run the following command:
az containerapp env create \
# [Azure PowerShell](#tab/azure-powershell)
-A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to environment variables.
-
+Individual container apps are deployed to an Azure Container Apps environment. A Log Analytics workspace is deployed as the logging backend before the environment is deployed. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to environment variables.
```azurepowershell
$WorkspaceArgs = @{
New-AzContainerAppManagedEnv @EnvArgs
### Create an Azure Blob Storage account -
-Choose a name for storage account. Storage account names must be *unique within Azure*, from 3 to 24 characters in length and must contain numbers and lowercase letters only.
-
+With the environment deployed, the next step is to deploy an Azure Blob Storage account that is used by one of the microservices to store data. Before deploying the service, you need to choose a name for the storage account. Storage account names must be _unique within Azure_, from 3 to 24 characters in length and must contain numbers and lowercase letters only.
# [Bash](#tab/bash)
STORAGE_ACCOUNT_NAME="<storage account name>"
# [Azure PowerShell](#tab/azure-powershell)

```azurepowershell
-$StorageAcctName = "<storage account name>"
+$StorageAcctName = '<storage account name>'
``` -
-# [Bash](#tab/bash)
-
-Set the `STORAGE_ACCOUNT_CONTAINER` name.
-
-```bash
-STORAGE_ACCOUNT_CONTAINER="mycontainer"
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-Set the storage account container name.
-
-```azurepowershell
-$StorageAcctContainerName = 'mycontainer'
-```
---
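Optionally, you can check whether the name you chose is available before creating the account. This is a minimal sketch using the Azure CLI and assumes the `STORAGE_ACCOUNT_NAME` variable set above (if you're following the Azure PowerShell tab, the `Get-AzStorageAccountNameAvailability` cmdlet with `$StorageAcctName` serves the same purpose).

```azurecli
az storage account check-name --name $STORAGE_ACCOUNT_NAME --query nameAvailable
```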
-Use the following command to create an Azure Storage account.
+Use the following command to create the Azure Storage account.
# [Bash](#tab/bash)
az storage account create \
# [Azure PowerShell](#tab/azure-powershell)

```azurepowershell
+Install-Module Az.Storage
+
$StorageAcctArgs = @{
  Name = $StorageAcctName
  ResourceGroupName = $ResourceGroupName
  Location = $Location
  SkuName = 'Standard_RAGRS'
- Kind = "StorageV2"
+ Kind = 'StorageV2'
}

$StorageAccount = New-AzStorageAccount @StorageAcctArgs
```
-Get the storage account key with the following command:
+## Configure a user-assigned identity for the node app
+
+While Container Apps supports both user-assigned and system-assigned managed identity, a user-assigned identity provides the Dapr-enabled node app with permissions to access the blob storage account.
+
+1. Create a user-assigned identity.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az identity create --resource-group $RESOURCE_GROUP --name "nodeAppIdentity" --output json
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Install-Module -Name AZ.ManagedServiceIdentity
+
+New-AzUserAssignedIdentity -ResourceGroupName $ResourceGroupName -Name 'nodeAppIdentity' -Location $Location
+
+```
+++
+Retrieve the `principalId` and `id` properties and store in variables.
# [Bash](#tab/bash)

```azurecli
-STORAGE_ACCOUNT_KEY=`az storage account keys list --resource-group $RESOURCE_GROUP --account-name $STORAGE_ACCOUNT_NAME --query '[0].value' --out tsv`
+PRINCIPAL_ID=$(az identity show -n "nodeAppIdentity" --resource-group $RESOURCE_GROUP --query principalId | tr -d \")
+IDENTITY_ID=$(az identity show -n "nodeAppIdentity" --resource-group $RESOURCE_GROUP --query id | tr -d \")
+CLIENT_ID=$(az identity show -n "nodeAppIdentity" --resource-group $RESOURCE_GROUP --query clientId | tr -d \")
```

# [Azure PowerShell](#tab/azure-powershell)

```azurepowershell
-$StorageAcctKey = (Get-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAcctName)| Where-Object {$_.KeyName -eq "key1"}
+$PrincipalId = (Get-AzUserAssignedIdentity -ResourceGroupName $ResourceGroupName -Name 'nodeAppIdentity').PrincipalId
+$IdentityId = (Get-AzUserAssignedIdentity -ResourceGroupName $ResourceGroupName -Name 'nodeAppIdentity').Id
+$ClientId = (Get-AzUserAssignedIdentity -ResourceGroupName $ResourceGroupName -Name 'nodeAppIdentity').ClientId
+```
+++
+2. Assign the `Storage Blob Data Contributor` role to the user-assigned identity
+
+Retrieve the subscription ID for your current subscription.
+
+# [Bash](#tab/bash)
+
+```azurecli
+SUBSCRIPTION_ID=$(az account show --query id --output tsv)
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$SubscriptionId=$(Get-AzContext).Subscription.id
+```
+++
+# [Bash](#tab/bash)
+
+```azurecli
+az role assignment create --assignee $PRINCIPAL_ID \
+--role "Storage Blob Data Contributor" \
+--scope "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/$STORAGE_ACCOUNT_NAME"
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Install-Module Az.Resources
+
+New-AzRoleAssignment -ObjectId $PrincipalId -RoleDefinitionName 'Storage Blob Data Contributor' -Scope "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Storage/storageAccounts/$StorageAcctName"
```

### Configure the state store component
+There are multiple ways to authenticate to external resources via Dapr. This example doesn't use the Dapr Secrets API at runtime, but uses an Azure-based state store. Therefore, you can forgo creating a secret store component and instead provide direct access from the node app to the blob store using Managed Identity. If you want to use a non-Azure state store or the Dapr Secrets API at runtime, you could create a secret store component. This component would load runtime secrets so you can reference them at runtime.
+ # [Bash](#tab/bash)
-Create a config file named *statestore.yaml* with the properties that you sourced from the previous steps. This file helps enable your Dapr app to access your state store. The following example shows how your *statestore.yaml* file should look when configured for your Azure Blob Storage account:
+Create a config file named **statestore.yaml** with the properties that you sourced from the previous steps. This file helps enable your Dapr app to access your state store. Since the application is authenticating directly via Managed Identity, there's no need to include the storage account key directly within the component. The following example shows how your **statestore.yaml** file should look when configured for your Azure Blob Storage account:
```yaml
# statestore.yaml for Azure Blob storage component
componentType: state.azure.blobstorage
version: v1
metadata:
-- name: accountName
- value: "<STORAGE_ACCOUNT>"
-- name: accountKey
- secretRef: account-key
-- name: containerName
- value: <STORAGE_ACCOUNT_CONTAINER>
-secrets:
-- name: account-key
- value: "<STORAGE_ACCOUNT_KEY>"
+ - name: accountName
+ value: "<STORAGE_ACCOUNT_NAME>"
+ - name: containerName
+ value: mycontainer
+ - name: azureClientId
+ value: "<MANAGED_IDENTITY_CLIENT_ID>"
scopes:
-- nodeapp
+ - nodeapp
```

To use this file, update the placeholders:
-* Replace `<STORAGE_ACCOUNT>` with the value of the `STORAGE_ACCOUNT_NAME` variable you defined. To obtain its value, run the following command:
+- Replace `<STORAGE_ACCOUNT_NAME>` with the value of the `STORAGE_ACCOUNT_NAME` variable you defined. To obtain its value, run the following command:
- ```azurecli
- echo $STORAGE_ACCOUNT_NAME
- ```
-
-* Replace `<STORAGE_ACCOUNT_KEY>` with the storage account key. To obtain its value, run the following command:
-
- ```azurecli
- echo $STORAGE_ACCOUNT_KEY
- ```
-
-* Replace `<STORAGE_ACCOUNT_CONTAINER>` with the storage account container name. To obtain its value, run the following command:
+```azurecli
+echo $STORAGE_ACCOUNT_NAME
+```
- ```azurecli
- echo $STORAGE_ACCOUNT_CONTAINER
- ```
+- Replace `<MANAGED_IDENTITY_CLIENT_ID>` with the value of the `CLIENT_ID` variable you defined. To obtain its value, run the following command:
-> [!NOTE]
-> Container Apps does not currently support the native [Dapr components schema](https://docs.dapr.io/operations/components/component-schema/). The above example uses the supported schema.
-
-Navigate to the directory in which you stored the *statestore.yaml* file and run the following command to configure the Dapr component in the Container Apps environment.
+```azurecli
+echo $CLIENT_ID
+```
-If you need to add multiple components, create a separate YAML file for each component, and run the `az containerapp env dapr-component set` command multiple times to add each component. For more information about configuring Dapr components, see [Configure Dapr components](dapr-overview.md#configure-dapr-components).
+Navigate to the directory in which you stored the component yaml file and run the following command to configure the Dapr component in the Container Apps environment. For more information about configuring Dapr components, see [Configure Dapr components](dapr-overview.md).
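For reference, the full Azure CLI invocation typically looks like the following sketch, assuming the component file from the previous step is saved as *statestore.yaml* in the current directory and the Bash variables from earlier steps are set:

```azurecli
az containerapp env dapr-component set \
  --name $CONTAINERAPPS_ENVIRONMENT \
  --resource-group $RESOURCE_GROUP \
  --dapr-component-name statestore \
  --yaml statestore.yaml
```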
```azurecli az containerapp env dapr-component set \
az containerapp env dapr-component set \
```azurepowershell
-$AcctName = New-AzContainerAppDaprMetadataObject -Name "accountName" -Value $StorageAcctName
+$AcctName = New-AzContainerAppDaprMetadataObject -Name "accountName" -Value $StorageAcctName
-$AcctKey = New-AzContainerAppDaprMetadataObject -Name "accountKey" -SecretRef "account-key"
+$ContainerName = New-AzContainerAppDaprMetadataObject -Name "containerName" -Value 'mycontainer'
-$ContainerName = New-AzContainerAppDaprMetadataObject -Name "containerName" -Value $StorageAcctContainerName
-
-$Secret = New-AzContainerAppSecretObject -Name "account-key" -Value $StorageAcctKey.Value
+$ClientId = New-AzContainerAppDaprMetadataObject -Name "azureClientId" -Value $ClientId
$DaprArgs = @{
  EnvName = $ContainerAppsEnvironment
  ResourceGroupName = $ResourceGroupName
  DaprName = 'statestore'
- Metadata = $AcctName, $AcctKey, $ContainerName
- Secret = $Secret
+ Metadata = $AcctName, $ContainerName, $ClientId
Scope = 'nodeapp'
- Version = "v1"
+ Version = 'v1'
  ComponentType = 'state.azure.blobstorage'
}
New-AzContainerAppManagedEnvDapr @DaprArgs
-Your state store is configured using the Dapr component type of `state.azure.blobstorage`. The component is scoped to a container app named `nodeapp` and isn't available to other container apps.
-
## Deploy the service application (HTTP web server)
-
# [Bash](#tab/bash)

```azurecli
az containerapp create \
  --name nodeapp \
  --resource-group $RESOURCE_GROUP \
+ --user-assigned $IDENTITY_ID \
  --environment $CONTAINERAPPS_ENVIRONMENT \
  --image dapriosamples/hello-k8s-node:latest \
  --target-port 3000 \
az containerapp create \
  --env-vars 'APP_PORT=3000'
```
-This command deploys:
-
-* The service (Node) app server on `--target-port 3000` (the app port)
-* Its accompanying Dapr sidecar configured with `--dapr-app-id nodeapp` and `--dapr-app-port 3000` for service discovery and invocation
--
# [Azure PowerShell](#tab/azure-powershell)

```azurepowershell
$EnvId = (Get-AzContainerAppManagedEnv -ResourceGroupName $ResourceGroupName -En
$EnvVars = New-AzContainerAppEnvironmentVarObject -Name APP_PORT -Value 3000
-$TemplateArgs = @{
+$TemplateArgs = @{
  Name = 'nodeapp'
  Image = 'dapriosamples/hello-k8s-node:latest'
  Env = $EnvVars
$TemplateArgs = @{
$ServiceTemplateObj = New-AzContainerAppTemplateObject @TemplateArgs

$ServiceArgs = @{
- Name = "nodeapp"
+ Name = 'nodeapp'
  ResourceGroupName = $ResourceGroupName
  Location = $Location
  ManagedEnvironmentId = $EnvId
$ServiceArgs = @{
  DaprEnabled = $true
  DaprAppId = 'nodeapp'
  DaprAppPort = 3000
+ IdentityType = 'UserAssigned'
+ IdentityUserAssignedIdentity = $IdentityId
}

New-AzContainerApp @ServiceArgs
```
-This command deploys:
-
-* the service (Node) app server on `DaprAppPort 3000` (the app port)
-* its accompanying Dapr sidecar configured with `-DaprAppId nodeapp` and `-DaprAppPort 3000` for service discovery and invocation
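After the container app is created, you can optionally confirm that its Dapr sidecar is configured as expected. The following is a sketch using the Azure CLI and assumes the Bash variables from the earlier steps:

```azurecli
az containerapp show \
  --name nodeapp \
  --resource-group $RESOURCE_GROUP \
  --query properties.configuration.dapr
```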
By default, the image is pulled from [Docker Hub](https://hub.docker.com/r/dapriosamples/hello-k8s-node).
-
## Deploy the client application (headless client)

Run the following command to deploy the client container app.
az containerapp create \
```azurepowershell
-$TemplateArgs = @{
+$TemplateArgs = @{
  Name = 'pythonapp'
  Image = 'dapriosamples/hello-k8s-python:latest'
}
New-AzContainerApp @ClientArgs
-By default, the image is pulled from [Docker Hub](https://hub.docker.com/r/dapriosamples/hello-k8s-python).
-
-This command deploys `pythonapp` that also runs with a Dapr sidecar that is used to look up and securely call the Dapr sidecar for `nodeapp`. As this app is headless, there's no need to specify a target port, nor is there a need to enable external ingress.
-
## Verify the result

### Confirm successful state persistence
You can confirm that the services are working correctly by viewing data in your
### View Logs
-Data logged via a container app are stored in the `ContainerAppConsoleLogs_CL` custom table in the Log Analytics workspace. You can view logs through the Azure portal or with the CLI. Wait a few minutes for the analytics to arrive for the first time before you're able to query the logged data.
+Logs from container apps are stored in the `ContainerAppConsoleLogs_CL` custom table in the Log Analytics workspace. You can view logs through the Azure portal or via the CLI. There may be a small delay initially for the table to appear in the workspace.
-Use the following CLI command to view logs on the command line.
+Use the following CLI command to view logs using the command line.
# [Bash](#tab/bash)
az monitor log-analytics query \
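For the Bash tab, the full query typically looks like the following sketch; it assumes a `LOG_ANALYTICS_WORKSPACE_CLIENT_ID` variable holding the workspace customer ID retrieved earlier in the tutorial:

```azurecli
az monitor log-analytics query \
  --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
  --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5" \
  --out table
```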
# [Azure PowerShell](#tab/azure-powershell)

```azurepowershell
$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $WorkspaceId -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5"
$queryResults.Results
nodeapp Got a new order! Order ID: 63 PrimaryResult 2021-10-22
## Clean up resources
-Once you're done, run the following command to delete your resource group along with all the resources you created in this tutorial.
+Congratulations! You've completed this tutorial. If you'd like to delete the resources created as a part of this walkthrough, run the following command.
->[!CAUTION]
-> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this tutorial exist in the specified resource group, they will also be deleted.
+> [!CAUTION]
+> This command deletes the specified resource group and all resources contained within it. If resources outside the scope of this tutorial exist in the specified resource group, they will also be deleted.
# [Bash](#tab/bash)
Remove-AzResourceGroup -Name $ResourceGroupName -Force
> [!div class="nextstepaction"] > [Application lifecycle management](application-lifecycle-management.md)-
container-apps Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/monitor.md
- Title: Write and view application logs in Azure Container Apps
-description: Learn write and view logs in Azure Container Apps.
---- Previously updated : 11/02/2021----
-# Monitor an app in Azure Container Apps
-
-Azure Container Apps gathers a broad set of data about your container app and stores it using [Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md). This article describes the available logs, and how to write and view logs.
-
-## Writing to a log
-
-When you write to the [Standard output (stdout) or standard error (stderr) streams](https://wikipedia.org/wiki/Standard_streams), the Container Apps logging agents write logs for each message.
-
-As a message is logged, the following information is gathered in the log table:
-
-| Property | Remarks |
-|||
-| `RevisionName` | |
-| `ContainerAppName` | |
-| `ContainerGroupID` | |
-| `ContainerGroupName` | |
-| `ContainerImage` | |
-| `ContainerID` | The container's unique identifier. You can use this value to help identify container crashes. |
-| `Stream` | Shows whether `stdout` or `stderr` is used for logging. |
-| `EnvironmentName` | |
-
-### Simple text vs structured data
-
-You can log a single text string or line of serialized JSON data. Information is displayed differently depending on what type of data you log.
-
-| Data type | Description |
-|||
-| A single line of text | Text appears in the `Log_s` column. |
-| Serialized JSON | Data is parsed by the logging agent and displayed in columns that match the JSON object property names. |
-
-## Viewing Logs
-
-Data logged via a container app are stored in the `ContainerAppConsoleLogs_CL` custom table in the Log Analytics workspace. You can view logs through the Azure portal or with the CLI.
-
-Set the name of your resource group and Log Analytics workspace, and then retrieve the `LOG_ANALYTICS_WORKSPACE_CLIENT_ID` with the following commands.
-
-# [Bash](#tab/bash)
-
-```azurecli
-RESOURCE_GROUP="my-containerapps"
-LOG_ANALYTICS_WORKSPACE="containerapps-logs"
-LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az monitor log-analytics workspace show --query customerId -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE --out tsv`
-```
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-$RESOURCE_GROUP="my-containerapps"
-$LOG_ANALYTICS_WORKSPACE="containerapps-logs"
-$LOG_ANALYTICS_WORKSPACE_CLIENT_ID = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $RESOURCE_GROUP -Name $LOG_ANALYTICS_WORKSPACE).CustomerId
-```
---
-Use the following CLI command to view logs on the command line.
-
-# [Bash](#tab/bash)
-
-```azurecli
-az monitor log-analytics query \
- --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
- --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'my-container-app' | project ContainerAppName_s, Log_s, TimeGenerated | take 3" \
- --out table
-```
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $LOG_ANALYTICS_WORKSPACE_CLIENT_ID -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'my-container-app' | project ContainerAppName_s, Log_s, TimeGenerated | take 3"
-$queryResults.Results
-```
---
-The following output demonstrates the type of response to expect from the CLI command.
-
-```console
-ContainerAppName_s Log_s TableName TimeGenerated
-- -
-my-container-app listening on port 80 PrimaryResult 2021-10-23T02:09:00.168Z
-my-container-app listening on port 80 PrimaryResult 2021-10-23T02:11:36.197Z
-my-container-app listening on port 80 PrimaryResult 2021-10-23T02:11:43.171Z
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Monitor logs in Azure Container Apps with Log Analytics](log-monitoring.md)
container-apps Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/observability.md
description: Monitor your running app in Azure Container Apps
+ Last updated 07/29/2022
These features include:
|[Log streaming](log-streaming.md) | View streaming console logs from a container in near real-time. |
|[Container console](container-console.md) | Connect to the Linux console in your containers to debug your application from inside the container. |
|[Azure Monitor metrics](metrics.md)| View and analyze your application's compute and network usage through metric data. |
+|[Application logging](logging.md) | Monitor, analyze and debug your app using log data.|
|[Azure Monitor Log Analytics](log-monitoring.md) | Run queries to view and analyze your app's system and application logs. |
|[Azure Monitor alerts](alerts.md) | Create and manage alerts to notify you of events and conditions based on metric and log data.|
Container Apps manages updates to your container app by creating [revisions](rev
## Next steps

-- [Monitor an app in Azure Container Apps](monitor.md)
-- [Health probes in Azure Container Apps](health-probes.md)
+> [!div class="nextstepaction"]
+> [Health probes in Azure Container Apps](health-probes.md)
container-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/overview.md
Last updated 06/23/2022 -+ # Azure Container Apps overview
With Azure Container Apps, you can:
- [**Securely manage secrets**](manage-secrets.md) directly in your application.

-- [**Monitor your apps**](monitor.md) using Azure Log Analytics.
+- [**Monitor logs**](log-monitoring.md) using Azure Log Analytics.
- [**Generous quotas**](quotas.md) which are overridable to increase limits on a per-account basis.
container-apps Quickstart Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-code-to-cloud.md
az containerapp create \
  --image $ACR_NAME.azurecr.io/$API_NAME \
  --target-port 3500 \
  --ingress 'external' \
- --registry-server $ACR_NAME.azurecr.io \
- --query configuration.ingress.fqdn
+ --registry-server $ACR_NAME.azurecr.io
```

# [PowerShell](#tab/powershell)
az containerapp create `
--image "$ACR_NAME.azurecr.io/$API_NAME" ` --target-port 3500 ` --ingress 'external' `
- --registry-server "$ACR_NAME.azurecr.io" `
- --query configuration.ingress.fqdn
+ --registry-server "$ACR_NAME.azurecr.io"
```
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
The following quotas are on a per subscription basis for Azure Container Apps.
-| Feature | Quantity | Scope | Remarks |
-|--|--|--|--|
-| Environments | 5 | For a subscription per region | |
-| Container Apps | 20 | Environment | |
-| Revisions | 100 | Container app | |
-| Replicas | 30 | Revision | |
-| Cores | 2 | Replica | Maximum number of cores that can be requested by a revision replica. |
-| Cores | 20 | Environment | Calculated by the total cores an environment can accommodate. For instance, the sum of cores requested by each active replica of all revisions in an environment. |
+| Feature | Quantity | Scope | Limit can be extended | Remarks |
+|--|--|--|--|--|
+| Environments | 5 | For a subscription per region | Yes | |
+| Container Apps | 20 | Environment | Yes | |
+| Revisions | 100 | Container app | No | |
+| Replicas | 30 | Revision | No | |
+| Cores | 2 | Replica | No | Maximum number of cores that can be requested by a revision replica. |
+| Memory | 4 GiB | Replica | No | Maximum amount of memory that can be requested by a revision replica. |
+| Cores | 20 | Environment | Yes| Calculated by the total cores an environment can accommodate. For instance, the sum of cores requested by each active replica of all revisions in an environment. |
## Considerations
container-instances Container Instances Encrypt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-encrypt-data.md
This article reviews two flows for encrypting data with a customer-managed key:
* Encrypt data with a customer-managed key stored in a standard Azure Key Vault * Encrypt data with a customer-managed key stored in a network-protected Azure Key Vault with [Trusted Services](../key-vault/general/network-security.md) enabled.
-## Encrypt data with a customer-managed key stored in a standard Azure Key Vau
+## Encrypt data with a customer-managed key stored in a standard Azure Key Vault
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] ### Create Service Principal for ACI
container-instances Container Instances Region Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-region-availability.md
The following regions and maximum resources are available to container groups wi
| Brazil South | 4 | 16 | 2 | 16 | 50 | N/A | Y |
| Brazil South | 4 | 16 | 2 | 8 | 50 | N/A | Y |
| Canada Central | 4 | 16 | 4 | 16 | 50 | N/A | N |
-| Canada East | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| Canada East | 4 | 16 | 4 | 16 | 50 | N/A | N |
| Central India | 4 | 16 | 4 | 4 | 50 | V100 | N |
| Central US | 4 | 16 | 4 | 16 | 50 | N/A | Y |
| East Asia | 4 | 16 | 4 | 16 | 50 | N/A | N |
The following regions and maximum resources are available to container groups wi
| West US | 4 | 16 | 4 | 16 | 50 | N/A | N |
| West US 2 | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y |
| West US 3 | 4 | 16 | 4 | 16 | 50 | N/A | N |
-| West US 3 | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| West US 3 | 4 | 16 | 4 | 16 | 50 | N/A | N |
The following maximum resources are available to a container group deployed with [GPU resources](container-instances-gpu.md) (preview).
The following regions and maximum resources are available to container groups wi
| Central US | 4 | 16 | 20 | Y |
| East Asia | 4 | 16 | 20 | N |
| East US | 4 | 16 | 20 | Y |
-| East US 2 | 2 | 3.5 | 20 | Y |
+| East US 2 | 4 | 16 | 20 | Y |
| France Central | 4 | 16 | 20 | Y |
| Japan East | 4 | 16 | 20 | Y |
| Korea Central | 4 | 16 | 20 | N |
The following regions and maximum resources are available to container groups wi
| West Europe | 4 | 16 | 20 | Y |
| West US | 4 | 16 | 20 | N |
| West US 2 | 4 | 16 | 20 | Y |
+| West US 3| 4 | 16 | 20 | Y |
## Next steps
container-registry Allow Access Trusted Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/allow-access-trusted-services.md
description: Enable a trusted Azure service instance to securely access a networ
Previously updated : 01/26/2022 Last updated : 10/11/2022 # Allow trusted services to securely access a network-restricted container registry
container-registry Anonymous Pull Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/anonymous-pull-access.md
Title: Enable anonymous pull access description: Optionally enable anonymous pull access to make content in your Azure container registry publicly available Previously updated : 09/17/2021-++ Last updated : 10/11/2022 # Make your container registry content publicly available
container-registry Authenticate Aks Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/authenticate-aks-cross-tenant.md
Title: Authenticate from AKS cluster to Azure container registry in different AD tenant description: Configure an AKS cluster's service principal with permissions to access your Azure container registry in a different AD tenant -- Previously updated : 09/13/2021++ Last updated : 10/11/2022 # Pull images from a container registry to an AKS cluster in a different Azure AD tenant
container-registry Authenticate Kubernetes Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/authenticate-kubernetes-options.md
Title: Scenarios to authenticate with Azure Container Registry from Kubernetes description: Overview of options and scenarios to authenticate to an Azure container registry from a Kubernetes cluster to pull container images -- Previously updated : 09/20/2021++ Last updated : 10/11/2022 # Scenarios to authenticate with Azure Container Registry from Kubernetes
container-registry Buffer Gate Public Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/buffer-gate-public-content.md
Title: Manage public content in private container registry description: Practices and workflows in Azure Container Registry to manage dependencies on public images from Docker Hub and other public content- + Previously updated : 02/01/2022 Last updated : 10/11/2022 # Manage public content with Azure Container Registry
container-registry Container Registry Access Selected Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-access-selected-networks.md
Title: Configure public registry access description: Configure IP rules to enable access to an Azure container registry from selected public IP addresses or address ranges. Previously updated : 07/30/2021++ Last updated : 10/11/2022 # Configure public IP network rules
container-registry Container Registry Auth Aci https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-aci.md
Title: Access from Container Instances description: Learn how to provide access to images in your private container registry from Azure Container Instances by using an Azure Active Directory service principal. Previously updated : 04/23/2018++ Last updated : 10/11/2022 # Authenticate with Azure Container Registry from Azure Container Instances
container-registry Container Registry Auth Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-kubernetes.md
description: Learn how to provide a Kubernetes cluster with access to images in
- Previously updated : 06/02/2021 Last updated : 10/11/2022 # Pull images from an Azure container registry to a Kubernetes cluster using a pull secret
container-registry Container Registry Auth Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-service-principal.md
Title: Authenticate with service principal description: Provide access to images in your private container registry by using an Azure Active Directory service principal. Previously updated : 03/15/2021++ Last updated : 10/11/2022 # Azure Container Registry authentication with service principals
container-registry Container Registry Authentication Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication-managed-identity.md
Title: Authenticate with managed identity description: Provide access to images in your private container registry by using a user-assigned or system-assigned managed Azure identity. Previously updated : 06/30/2021++ Last updated : 10/11/2022 # Use an Azure managed identity to authenticate to an Azure container registry
container-registry Container Registry Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication.md
Title: Registry authentication options description: Authentication options for a private Azure container registry, including signing in with an Azure Active Directory identity, using service principals, and using optional admin credentials. Previously updated : 06/16/2021++ Last updated : 10/11/2022
container-registry Container Registry Auto Purge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auto-purge.md
Title: Purge tags and manifests description: Use a purge command to delete multiple tags and manifests from an Azure container registry based on age and a tag filter, and optionally schedule purge operations. Previously updated : 05/07/2021++ Last updated : 10/11/2022 # Automatically purge images from an Azure container registry
container-registry Container Registry Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-azure-policy.md
Title: Compliance using Azure Policy description: Assign built-in policy definitions in Azure Policy to audit compliance of your Azure container registries Previously updated : 08/10/2021++ Last updated : 10/11/2022 # Audit compliance of Azure container registries using Azure Policy
container-registry Container Registry Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-best-practices.md
Title: Registry best practices description: Learn how to use your Azure container registry effectively by following these best practices. Previously updated : 08/13/2021++ Last updated : 10/11/2022 # Best practices for Azure Container Registry
container-registry Container Registry Check Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-check-health.md
Title: Check registry health description: Learn how to run a quick diagnostic command to identify common problems when using an Azure container registry, including local Docker configuration and connectivity to the registry Previously updated : 07/14/2021++ Last updated : 10/11/2022 # Check the health of an Azure container registry
container-registry Container Registry Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-concepts.md
Title: About registries, repositories, images, and artifacts description: Introduction to key concepts of Azure container registries, repositories, container images, and other artifacts. Previously updated : 01/29/2021++ Last updated : 10/11/2022 # About registries, repositories, and artifacts
container-registry Container Registry Content Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-content-trust.md
Title: Manage signed images description: Learn how to enable content trust for your Azure container registry, and push and pull signed images. Content trust implements Docker content trust and is a feature of the Premium service tier. Previously updated : 07/26/2021++ Last updated : 10/11/2022 ms.devlang: azurecli
container-registry Container Registry Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-customer-managed-keys.md
- Title: Encrypt registry with a customer-managed key
-description: Learn about encryption-at-rest of your Azure container registry, and how to encrypt your Premium registry with a customer-managed key stored in Azure Key Vault
- Previously updated : 09/13/2021---
-# Encrypt registry using a customer-managed key
-
-When you store images and other artifacts in an Azure container registry, Azure automatically encrypts the registry content at rest with [service-managed keys](../security/fundamentals/encryption-models.md). You can supplement default encryption with an additional encryption layer using a key that you create and manage in Azure Key Vault (a customer-managed key). This article walks you through the steps using the Azure CLI, the Azure portal, or a Resource Manager template.
-
-Server-side encryption with customer-managed keys is supported through integration with [Azure Key Vault](../key-vault/general/overview.md):
-
-* You can create your own encryption keys and store them in a key vault, or use Azure Key Vault APIs to generate keys.
-* With Azure Key Vault, you can also audit key usage.
-* Azure Container Registry supports automatic rotation of registry encryption keys when a new key version is available in Azure Key Vault. You can also manually rotate registry encryption keys.
-
-This feature is available in the **Premium** container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry service tiers](container-registry-skus.md).
--
-## Things to know
-
-* You can currently enable a customer-managed key only when you create a registry. When enabling the key, you configure a *user-assigned* managed identity to access the key vault. Later, you can enable the registry's system-managed identity for key vault access if needed.
-* After enabling encryption with a customer-managed key on a registry, you can't disable the encryption.
-* Azure Container Registry supports only RSA or RSA-HSM keys. Elliptic curve keys aren't currently supported.
-* [Content trust](container-registry-content-trust.md) is currently not supported in a registry encrypted with a customer-managed key.
-* In a registry encrypted with a customer-managed key, run logs for [ACR Tasks](container-registry-tasks-overview.md) are currently retained for only 24 hours. If you need to retain logs for a longer period, see guidance to [export and store task run logs](container-registry-tasks-logs.md#alternative-log-storage).
-
-## Automatic or manual update of key versions
-
-An important consideration for the security of a registry encrypted with a customer-managed key is how frequently you update (rotate) the encryption key. Your organization might have compliance policies that require regularly updating key [versions](../key-vault/general/about-keys-secrets-certificates.md#objects-identifiers-and-versioning) stored in Azure Key Vault when used as customer-managed keys.
-
-When you configure registry encryption with a customer-managed key, you have two options for updating the key version used for encryption:
-
-* **Automatically update the key version** - To automatically update a customer-managed key when a new version is available in Azure Key Vault, omit the key version when you enable registry encryption with a customer-managed key. When a registry is encrypted with a non-versioned key, Azure Container Registry regularly checks the key vault for a new key version and updates the customer-managed key within 1 hour. Azure Container Registry automatically uses the latest version of the key.
-
-* **Manually update the key version** - To use a specific version of a key for registry encryption, specify that key version when you enable registry encryption with a customer-managed key. When a registry is encrypted with a specific key version, Azure Container Registry uses that version for encryption until you manually rotate the customer-managed key.
-
-For details, see [Choose key ID with or without key version](#choose-key-id-with-or-without-key-version) and [Update key version](#update-key-version), later in this article.
-
-## Prerequisites
-
-To use the Azure CLI steps in this article, you need Azure CLI version 2.2.0 or later, or Azure Cloud Shell. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
-## Enable customer-managed key - CLI
-
-### Create a resource group
-
-If needed, run the [az group create][az-group-create] command to create a resource group for creating the key vault, container registry, and other required resources.
-
-```azurecli
-az group create --name <resource-group-name> --location <location>
-```
-
-### Create a user-assigned managed identity
-
-Create a user-assigned [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) with the [az identity create][az-identity-create] command. This identity will be used by your registry to access the Key Vault service.
-
-```azurecli
-az identity create \
- --resource-group <resource-group-name> \
- --name <managed-identity-name>
-```
-
-In the command output, take note of the following values: `id` and `principalId`. You need these values in later steps to configure registry access to the key vault.
-
-```JSON
-{
- "clientId": "xxxx2bac-xxxx-xxxx-xxxx-192cxxxx6273",
- "clientSecretUrl": "https://control-eastus.identity.azure.net/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myidentityname/credentials?tid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&oid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&aid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myresourcegroup",
- "location": "eastus",
- "name": "myidentityname",
- "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "resourceGroup": "myresourcegroup",
- "tags": {},
- "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
-}
-```
-
-For convenience, store these values in environment variables:
-
-```azurecli
-identityID=$(az identity show --resource-group <resource-group-name> --name <managed-identity-name> --query 'id' --output tsv)
-
-identityPrincipalID=$(az identity show --resource-group <resource-group-name> --name <managed-identity-name> --query 'principalId' --output tsv)
-```
-
-### Create a key vault
-
-Create a key vault with [az keyvault create][az-keyvault-create] to store a customer-managed key for registry encryption.
-
-By default, the **soft delete** setting is enabled in a new key vault. To prevent data loss caused by accidental key or key vault deletions, also enable the **purge protection** setting.
-
-```azurecli
-az keyvault create --name <key-vault-name> \
- --resource-group <resource-group-name> \
- --enable-purge-protection
-```
-
-For use in later steps, get the resource ID of the key vault:
-
-```azurecli
-keyvaultID=$(az keyvault show --resource-group <resource-group-name> --name <key-vault-name> --query 'id' --output tsv)
-```
-
-### Enable key vault access by trusted services
-
-If the key vault is protected with a firewall or virtual network (private endpoint), enable the network setting to allow access by [trusted Azure services](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services).
-
-For more information, see [Configure Azure Key Vault networking settings](../key-vault/general/how-to-azure-key-vault-network-security.md?tabs=azure-cli).
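-
-For example, one way to allow trusted services while keeping the vault's firewall in place is to update its network rules with the CLI. This is a sketch only, and it assumes you want the vault to deny other public network access by default:
-
-```azurecli
-# Allow trusted Azure services to bypass the key vault firewall,
-# while denying other public network access by default
-az keyvault update \
-  --resource-group <resource-group-name> \
-  --name <key-vault-name> \
-  --bypass AzureServices \
-  --default-action Deny
-```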
--
-### Enable key vault access by managed identity
-
-#### Enable key vault access policy
-
-One option is to configure a policy for the key vault so that the identity can access it. In the following [az keyvault set-policy][az-keyvault-set-policy] command, you pass the principal ID of the managed identity that you created, stored previously in an environment variable. Set key permissions to **get**, **unwrapKey**, and **wrapKey**.
-
-```azurecli
-az keyvault set-policy \
- --resource-group <resource-group-name> \
- --name <key-vault-name> \
- --object-id $identityPrincipalID \
- --key-permissions get unwrapKey wrapKey
-```
-
-#### Assign RBAC role
-
-Alternatively, use [Azure RBAC for Key Vault](../key-vault/general/rbac-guide.md) to assign permissions to the identity to access the key vault. For example, assign the Key Vault Crypto Service Encryption role to the identity using the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command:
-
-```azurecli
-az role assignment create --assignee $identityPrincipalID \
- --role "Key Vault Crypto Service Encryption User" \
- --scope $keyvaultID
-```
-
-### Create key and get key ID
-
-Run the [az keyvault key create][az-keyvault-key-create] command to create a key in the key vault.
-
-```azurecli
-az keyvault key create \
- --name <key-name> \
- --vault-name <key-vault-name>
-```
-
-In the command output, take note of the key's ID, `kid`. You use this ID in the next step:
-
-```output
-[...]
- "key": {
- "crv": null,
- "d": null,
- "dp": null,
- "dq": null,
- "e": "AQAB",
- "k": null,
- "keyOps": [
- "encrypt",
- "decrypt",
- "sign",
- "verify",
- "wrapKey",
- "unwrapKey"
- ],
- "kid": "https://mykeyvault.vault.azure.net/keys/mykey/<version>",
- "kty": "RSA",
-[...]
-```
-
-### Choose key ID with or without key version
-
-For convenience, store the key ID format you choose in the $keyID environment variable. You can use a key ID that includes a version or one that omits it.
-
-#### Manual key rotation - key ID with version
-
-When you encrypt a registry with a key ID that includes a version, Azure Container Registry allows only manual key rotation.
-
-This example stores the key's `kid` property:
-
-```azurecli
-keyID=$(az keyvault key show \
- --name <keyname> \
- --vault-name <key-vault-name> \
- --query 'key.kid' --output tsv)
-```
-
-#### Automatic key rotation - key ID omitting version
-
-When you encrypt a registry with a key ID that omits the version, Azure Container Registry automatically rotates the customer-managed key when a new key version is detected in Azure Key Vault.
-
-This example removes the version from the key's `kid` property:
-
-```azurecli
-keyID=$(az keyvault key show \
- --name <keyname> \
- --vault-name <key-vault-name> \
- --query 'key.kid' --output tsv)
-
-keyID=$(echo $keyID | sed -e "s/\/[^/]*$//")
-```
-
-### Create a registry with customer-managed key
-
-Run the [az acr create][az-acr-create] command to create a registry in the Premium service tier and enable the customer-managed key. Pass the managed identity ID and the key ID, stored previously in environment variables:
-
-```azurecli
-az acr create \
- --resource-group <resource-group-name> \
- --name <container-registry-name> \
- --identity $identityID \
- --key-encryption-key $keyID \
- --sku Premium
-```
-
-### Show encryption status
-
-To show whether registry encryption with a customer-managed key is enabled, run the [az acr encryption show][az-acr-encryption-show] command:
-
-```azurecli
-az acr encryption show --name <registry-name>
-```
-
-Depending on the key used to encrypt the registry, output is similar to:
-
-```console
-{
- "keyVaultProperties": {
- "identity": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "keyIdentifier": "https://myvault.vault.azure.net/keys/myresourcegroup/abcdefg123456789...",
- "keyRotationEnabled": true,
- "lastKeyRotationTimestamp": xxxxxxxx
- "versionedKeyIdentifier": "https://myvault.vault.azure.net/keys/myresourcegroup/abcdefg123456789...",
- },
- "status": "enabled"
-}
-```
-
-## Enable customer-managed key - portal
-
-### Create a managed identity
-
-Create a user-assigned [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) in the Azure portal. For steps, see [Create a user-assigned identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity).
-
-You use the identity's name in later steps.
--
-### Create a key vault
-
-For steps to create a key vault, see [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md).
-
-When creating a key vault for a customer-managed key, in the **Basics** tab, enable the **Purge protection** setting. This setting helps prevent data loss caused by accidental key or key vault deletions.
--
-### Enable key vault access by trusted services
-
-If the key vault is protected with a firewall or virtual network (private endpoint), enable the network setting to allow access by [trusted Azure services](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services).
-
-For more information, see [Configure Azure Key Vault networking settings](../key-vault/general/how-to-azure-key-vault-network-security.md?tabs=azure-portal).
-
-### Enable key vault access by managed identity
-
-#### Enable key vault access policy
-
-One option is to configure a policy for the key vault so that the identity can access it.
-
-1. Navigate to your key vault.
-1. Select **Settings** > **Access policies** > **+ Add Access Policy**.
-1. Select **Key permissions**, and select **Get**, **Unwrap Key**, and **Wrap Key**.
-1. In **Select principal**, select the resource name of your user-assigned managed identity.
-1. Select **Add**, then select **Save**.
--
-#### Assign RBAC role
-
-Alternatively, assign the Key Vault Crypto Service Encryption User role to the user-assigned managed identity at the key vault scope.
-
-For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-
-### Create key (optional)
-
-Optionally, create a key in the key vault to use for encrypting the registry. Follow these steps if you want to select a specific key version as a customer-managed key. You may also need to create a key before creating the registry if key vault access is restricted to a private endpoint or selected networks.
-
-1. Navigate to your key vault.
-1. Select **Settings** > **Keys**.
-1. Select **+Generate/Import** and enter a unique name for the key.
-1. Accept the remaining default values and select **Create**.
-1. After creation, select the key and then select the current version. Copy the **Key identifier** for the key version.
-
-### Create Azure container registry
-
-1. Select **Create a resource** > **Containers** > **Container Registry**.
-1. In the **Basics** tab, select or create a resource group, and enter a registry name. In **SKU**, select **Premium**.
-1. In the **Encryption** tab, in **Customer-managed key**, select **Enabled**.
-1. In **Identity**, select the managed identity you created.
-1. In **Encryption**, choose either of the following:
- * Select **Select from Key Vault**, and select an existing key vault and key, or **Create new**. The key you select is non-versioned and enables automatic key rotation.
- * Select **Enter key URI**, and provide the identifier of an existing key. You can provide either a versioned key URI (for a key that must be rotated manually) or a non-versioned key URI (which enables automatic key rotation). See the previous section for steps to create a key.
-1. In the **Encryption** tab, select **Review + create**.
-1. Select **Create** to deploy the registry instance.
--
-To see the encryption status of your registry in the portal, navigate to your registry. Under **Settings**, select **Encryption**.
-
-## Enable customer-managed key - template
-
-You can also use a Resource Manager template to create a registry and enable encryption with a customer-managed key.
-
-The following template creates a new container registry, a user-assigned managed identity, and a key vault access policy that grants the identity access to keys in the vault. Copy the following contents to a new file and save it using a filename such as `CMKtemplate.json`.
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "vault_name": {
- "defaultValue": "",
- "type": "String"
- },
- "registry_name": {
- "defaultValue": "",
- "type": "String"
- },
- "identity_name": {
- "defaultValue": "",
- "type": "String"
- },
- "kek_id": {
- "type": "String"
- }
- },
- "variables": {},
- "resources": [
- {
- "type": "Microsoft.ContainerRegistry/registries",
- "apiVersion": "2019-12-01-preview",
- "name": "[parameters('registry_name')]",
- "location": "[resourceGroup().location]",
- "sku": {
- "name": "Premium",
- "tier": "Premium"
- },
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name'))]": {}
- }
- },
- "dependsOn": [
- "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name'))]"
- ],
- "properties": {
- "adminUserEnabled": false,
- "encryption": {
- "status": "enabled",
- "keyVaultProperties": {
- "identity": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name')), '2018-11-30').clientId]",
- "KeyIdentifier": "[parameters('kek_id')]"
- }
- },
- "networkRuleSet": {
- "defaultAction": "Allow",
- "virtualNetworkRules": [],
- "ipRules": []
- },
- "policies": {
- "quarantinePolicy": {
- "status": "disabled"
- },
- "trustPolicy": {
- "type": "Notary",
- "status": "disabled"
- },
- "retentionPolicy": {
- "days": 7,
- "status": "disabled"
- }
- }
- }
- },
- {
- "type": "Microsoft.KeyVault/vaults/accessPolicies",
- "apiVersion": "2018-02-14",
- "name": "[concat(parameters('vault_name'), '/add')]",
- "dependsOn": [
- "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name'))]"
- ],
- "properties": {
- "accessPolicies": [
- {
- "tenantId": "[subscription().tenantId]",
- "objectId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name')), '2018-11-30').principalId]",
- "permissions": {
- "keys": [
- "get",
- "unwrapKey",
- "wrapKey"
- ]
- }
- }
- ]
- }
- },
- {
- "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
- "apiVersion": "2018-11-30",
- "name": "[parameters('identity_name')]",
- "location": "[resourceGroup().location]"
- }
- ]
-}
-```
-
-Follow the steps in the previous sections to create the following resources:
-
-* Key vault, identified by name
-* Key vault key, identified by key ID
-
-Run the following [az deployment group create][az-deployment-group-create] command to create the registry using the preceding template file. Where indicated, provide a new registry name and managed identity name, as well as the key vault name and key ID you created.
-
-```azurecli
-az deployment group create \
- --resource-group <resource-group-name> \
- --template-file CMKtemplate.json \
- --parameters \
- registry_name=<registry-name> \
- identity_name=<managed-identity> \
- vault_name=<key-vault-name> \
- kek_id=<key-vault-key-id>
-```
-
-### Show encryption status
-
-To show the status of registry encryption, run the [az acr encryption show][az-acr-encryption-show] command:
-
-```azurecli
-az acr encryption show --name <registry-name>
-```
-
-## Use the registry
-
-After enabling a customer-managed key in a registry, you can perform the same registry operations that you perform in a registry that's not encrypted with a customer-managed key. For example, you can authenticate with the registry and push Docker images. See example commands in [Push and pull an image](container-registry-get-started-docker-cli.md).
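-
-For example, a minimal sketch of signing in and pushing an image, assuming Docker is installed locally and your account has push permissions on the registry:
-
-```azurecli
-# Authenticate the Docker CLI with the registry
-az acr login --name <registry-name>
-
-# Tag and push a sample public image to the registry
-docker pull mcr.microsoft.com/hello-world
-docker tag mcr.microsoft.com/hello-world <registry-name>.azurecr.io/hello-world:v1
-docker push <registry-name>.azurecr.io/hello-world:v1
-```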
-
-## Rotate key
-
-Update the key version in Azure Key Vault, or create a new key, and then update the registry to encrypt data using the key. You can perform these steps using the Azure CLI or in the portal.
-
-When rotating a key, typically you specify the same identity used when creating the registry. Optionally, configure a new user-assigned identity for key access, or enable and specify the registry's system-assigned identity.
-
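-If you plan to use the registry's system-assigned identity for key access, you can also enable it with the CLI. A minimal sketch (the identity still needs the key vault access described earlier):
-
-```azurecli
-# Enable the registry's system-assigned managed identity
-az acr identity assign --name <registry-name> --identities [system]
-
-# Get the identity's principal ID so you can grant it access to the key vault
-az acr identity show --name <registry-name> --query principalId --output tsv
-```
-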
-> [!NOTE]
-> * To enable the registry's system-assigned identity in the portal, select **Settings** > **Identity** and set the system-assigned identity's status to **On**.
-> * Ensure that the required [key vault access](#enable-key-vault-access-by-managed-identity) is set for the identity you configure for key access.
-
-### Update key version
-
-A common scenario is to update the version of the key used as a customer-managed key. Depending on how the registry encryption is configured, the customer-managed key in Azure Container Registry is automatically updated, or must be manually updated.
-
-### Azure CLI
-
-Use [az keyvault key][az-keyvault-key] commands to create or manage your key vault keys. To create a new key version, run the [az keyvault key create][az-keyvault-key-create] command:
-
-```azurecli
-# Create new version of existing key
-az keyvault key create \
- --name <key-name> \
- --vault-name <key-vault-name>
-```
-
-The next step depends on how the registry encryption is configured:
-
-* If the registry is configured to detect key version updates, the customer-managed key is updated automatically within 1 hour.
-
-* If the registry is configured to require manual updating for a new key version, run the [az acr encryption rotate-key][az-acr-encryption-rotate-key] command, passing the new key ID and the identity you want to configure.
-
-To update the customer-managed key version manually:
-
-```azurecli
-# Rotate key and use user-assigned identity
-az acr encryption rotate-key \
- --name <registry-name> \
- --key-encryption-key <new-key-id> \
- --identity <principal-id-user-assigned-identity>
-
-# Rotate key and use system-assigned identity
-az acr encryption rotate-key \
- --name <registry-name> \
- --key-encryption-key <new-key-id> \
- --identity [system]
-```
-
-> [!TIP]
-> When you run `az acr encryption rotate-key`, you can pass either a versioned key ID or a non-versioned key ID. If you use a non-versioned key ID, the registry is then configured to automatically detect later key version updates.
-
-### Portal
-
-Use the registry's **Encryption** settings to update the key vault, key, or identity settings used for the customer-managed key.
-
-For example, to configure a new key:
-
-1. In the portal, navigate to your registry.
-1. Under **Settings**, select **Encryption** > **Change key**.
-
- :::image type="content" source="media/container-registry-customer-managed-keys/rotate-key.png" alt-text="Rotate key in the Azure portal":::
-1. In **Encryption**, choose one of the following:
- * Select **Select from Key Vault**, and select an existing key vault and key, or **Create new**. The key you select is non-versioned and enables automatic key rotation.
- * Select **Enter key URI**, and provide a key identifier directly. You can provide either a versioned key URI (for a key that must be rotated manually) or a non-versioned key URI (which enables automatic key rotation).
-1. Complete the key selection and select **Save**.
-
-## Revoke key
-
-Revoke the customer-managed encryption key by changing the access policy or permissions on the key vault or by deleting the key. For example, use the [az keyvault delete-policy][az-keyvault-delete-policy] command to change the access policy of the managed identity used by your registry:
-
-```azurecli
-az keyvault delete-policy \
- --resource-group <resource-group-name> \
- --name <key-vault-name> \
- --object-id $identityPrincipalID
-```
-
-Revoking the key effectively blocks access to all registry data, because the registry can't access the encryption key. If access to the key is restored or the deleted key is recovered, your registry picks up the key again so that you can access the encrypted registry data.
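-
-For example, if you revoked access by removing the access policy, one way to restore it is to re-add the same policy shown earlier. A sketch that reuses the $identityPrincipalID variable from the CLI steps:
-
-```azurecli
-# Re-add the key vault access policy for the registry's managed identity
-az keyvault set-policy \
-  --resource-group <resource-group-name> \
-  --name <key-vault-name> \
-  --object-id $identityPrincipalID \
-  --key-permissions get unwrapKey wrapKey
-```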
-
-## Troubleshoot
-
-### Removing managed identity
--
-If you try to remove a user-assigned or system-assigned managed identity that a registry uses to configure encryption, you might see an error message similar to:
-
-```
-Azure resource '/subscriptions/xxxx/resourcegroups/myGroup/providers/Microsoft.ContainerRegistry/registries/myRegistry' does not have access to identity 'xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx' Try forcibly adding the identity to the registry <registry name>. For more information on bring your own key, please visit 'https://aka.ms/acr/cmk'.
-```
-
-You will also be unable to change (rotate) the encryption key. The resolution steps depend on the type of identity used for encryption.
-
-**User-assigned identity**
-
-If this issue occurs with a user-assigned identity, first reassign the identity using the [az acr identity assign](/cli/azure/acr/identity/#az-acr-identity-assign) command. Pass the identity's resource ID, or use the identity's name when it is in the same resource group as the registry. For example:
-
-```azurecli
-az acr identity assign -n myRegistry \
- --identities "/subscriptions/mysubscription/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myidentity"
-```
-
-Then, after changing the key and assigning a different identity, you can remove the original user-assigned identity.
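-
-For example, a sketch of that cleanup step, using the same hypothetical identity resource ID as above:
-
-```azurecli
-# Remove the original user-assigned identity after the key has been rotated
-# to use a different identity
-az acr identity remove -n myRegistry \
-    --identities "/subscriptions/mysubscription/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myidentity"
-```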
-
-**System-assigned identity**
-
-If this issue occurs with a system-assigned identity, please [create an Azure support ticket](https://azure.microsoft.com/support/create-ticket/) for assistance in restoring the identity.
-
-### Enabling key vault firewall
-
-If you enable a key vault firewall or virtual network after creating an encrypted registry, you might see HTTP 403 or other errors with image import or automated key rotation. To correct this problem, reconfigure the managed identity and key you used initially for encryption. See steps in [Rotate key](#rotate-key).
-
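-One quick check is whether the key vault's network rules allow trusted Azure services to bypass the firewall. A sketch using the CLI:
-
-```azurecli
-# Inspect the key vault's network rules; 'bypass' should include AzureServices
-az keyvault show \
-  --name <key-vault-name> \
-  --query properties.networkAcls
-```
-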
-If the problem persists, please contact Azure Support.
-
-### Accidental deletion of key vault or key
-
-Deleting the key vault or the key used to encrypt a registry with a customer-managed key makes the registry's content inaccessible. If [soft delete](../key-vault/general/soft-delete-overview.md) is enabled in the key vault (the default option), you can recover a deleted vault or key vault object and resume registry operations.
-
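-For example, if only the key was deleted and soft delete is enabled, one way to recover it is with the CLI. A minimal sketch (a soft-deleted vault itself can be recovered with `az keyvault recover`):
-
-```azurecli
-# Recover a soft-deleted key in the key vault
-az keyvault key recover \
-  --name <key-name> \
-  --vault-name <key-vault-name>
-```
-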
-For key vault deletion and recovery scenarios, see [Azure Key Vault recovery management with soft delete and purge protection](../key-vault/general/key-vault-recovery.md).
-
-## Next steps
-
-* Learn more about [encryption at rest in Azure](../security/fundamentals/encryption-atrest.md).
-* Learn more about access policies and how to [secure access to a key vault](../key-vault/general/security-features.md).
--
-<!-- LINKS - external -->
-
-<!-- LINKS - internal -->
-
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-show]: /cli/azure/feature#az_feature_show
-[az-group-create]: /cli/azure/group#az_group_create
-[az-identity-create]: /cli/azure/identity#az_identity_create
-[az-deployment-group-create]: /cli/azure/deployment/group#az_deployment_group_create
-[az-keyvault-create]: /cli/azure/keyvault#az_keyvault_create
-[az-keyvault-key-create]: /cli/azure/keyvault/key#az_keyvault_key_create
-[az-keyvault-key]: /cli/azure/keyvault/key
-[az-keyvault-set-policy]: /cli/azure/keyvault#az_keyvault_set_policy
-[az-keyvault-delete-policy]: /cli/azure/keyvault#az_keyvault_delete_policy
-[az-resource-show]: /cli/azure/resource#az_resource_show
-[az-acr-create]: /cli/azure/acr#az_acr_create
-[az-acr-show]: /cli/azure/acr#az_acr_show
-[az-acr-encryption-rotate-key]: /cli/azure/acr/encryption#az_acr_encryption_rotate_key
-[az-acr-encryption-show]: /cli/azure/acr/encryption#az_acr_encryption_show
container-registry Container Registry Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-delete.md
Title: Delete image resources description: Details on how to effectively manage registry size by deleting container image data using Azure CLI commands. Previously updated : 05/07/2021++ Last updated : 10/11/2022 # Delete container images in Azure Container Registry
container-registry Container Registry Event Grid Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-event-grid-quickstart.md
Title: Quickstart - Send events to Event Grid description: In this quickstart, you enable Event Grid events for your container registry, then send container image push and delete events to a sample application. Previously updated : 08/23/2018++ Last updated : 10/11/2022 # Customer intent: As a container registry owner, I want to send events to Event Grid when container images are pushed to or deleted from my container registry so that downstream applications can react to those events.
container-registry Container Registry Firewall Access Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-firewall-access-rules.md
Title: Firewall access rules description: Configure rules to access an Azure container registry from behind a firewall, by allowing access to REST API and data endpoint domain names or service-specific IP address ranges. Previously updated : 05/18/2020++ Last updated : 10/11/2022 # Configure rules to access an Azure container registry behind a firewall
container-registry Container Registry Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-geo-replication.md
Title: Geo-replicate a registry
description: Get started creating and managing a geo-replicated Azure container registry, which enables the registry to serve multiple regions with multi-primary regional replicas. Geo-replication is a feature of the Premium service tier. Previously updated : 06/28/2021-+ Last updated : 10/11/2022+ # Geo-replication in Azure Container Registry
ACR begins syncing images across the configured replicas. Once complete, the por
* For high availability and resiliency, we recommend creating a registry in a region that supports enabling [zone redundancy](zone-redundancy.md). Enabling zone redundancy in each replica region is also recommended. * If an outage occurs in the registry's home region (the region where it was created) or one of its replica regions, a geo-replicated registry remains available for data plane operations such as pushing or pulling container images. * If the registry's home region becomes unavailable, you may be unable to carry out registry management operations including configuring network rules, enabling availability zones, and managing replicas.
-* To plan for high availablity of a geo-replicated registry encrypted with a [customer-managed key](container-registry-customer-managed-keys.md) stored in an Azure key vault, review the guidance for key vault [failover and redundancy](../key-vault/general/disaster-recovery-guidance.md).
+* To plan for high availability of a geo-replicated registry encrypted with a [customer-managed key](tutorial-enable-customer-managed-keys.md) stored in an Azure key vault, review the guidance for key vault [failover and redundancy](../key-vault/general/disaster-recovery-guidance.md).
## Delete a replica
container-registry Container Registry Get Started Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-azure-cli.md
Title: Quickstart - Create registry - Azure CLI description: Quickly learn to create a private Docker container registry with the Azure CLI. Previously updated : 06/12/2020++ Last updated : 10/11/2022 # Quickstart: Create a private container registry using the Azure CLI
az group create --name myResourceGroup --location eastus
In this quickstart you create a *Basic* registry, which is a cost-optimized option for developers learning about Azure Container Registry. For details on available service tiers, see [Container registry service tiers][container-registry-skus].
-Create an ACR instance using the [az acr create][az-acr-create] command. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. In the following example, *myContainerRegistry007* is used. Update this to a unique value.
+Create an ACR instance using the [az acr create][az-acr-create] command. The registry name must be unique within Azure, and contain 5-50 lowercase alphanumeric characters. In the following example, *mycontainerregistry* is used. Update this to a unique value.
```azurecli az acr create --resource-group myResourceGroup \
- --name myContainerRegistry007 --sku Basic
+ --name mycontainerregistry --sku Basic
``` When the registry is created, the output is similar to the following:
When the registry is created, the output is similar to the following:
{ "adminUserEnabled": false, "creationDate": "2019-01-08T22:32:13.175925+00:00",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/myContainerRegistry007",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/mycontainerregistry",
"location": "eastus",
- "loginServer": "mycontainerregistry007.azurecr.io",
- "name": "myContainerRegistry007",
+ "loginServer": "mycontainerregistry.azurecr.io",
+ "name": "mycontainerregistry",
"provisioningState": "Succeeded", "resourceGroup": "myResourceGroup", "sku": {
container-registry Container Registry Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-bicep.md
description: Learn how to create an Azure container registry by using a Bicep fi
Previously updated : 09/27/2021 Last updated : 10/11/2022
container-registry Container Registry Get Started Docker Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-docker-cli.md
Title: Push & pull container image description: Push and pull Docker images to your private container registry in Azure using the Docker CLI Previously updated : 05/12/2021 Last updated : 10/11/2022++
container-registry Container Registry Get Started Geo Replication Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-geo-replication-template.md
description: Learn how to create a geo-replicated Azure container registry by us
Previously updated : 08/15/2022 Last updated : 10/11/2022
container-registry Container Registry Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-portal.md
Title: Quickstart - Create registry in portal description: Quickly learn to create a private Azure container registry using the Azure portal. Previously updated : 06/23/2021++ Last updated : 10/11/2022
container-registry Container Registry Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-powershell.md
Title: Quickstart - Create registry - PowerShell description: Quickly learn to create a private Docker registry in Azure Container Registry with PowerShell Previously updated : 06/03/2021++ Last updated : 10/11/2022
New-AzResourceGroup -Name myResourceGroup -Location EastUS
Next, create a container registry in your new resource group with the [New-AzContainerRegistry][New-AzContainerRegistry] command.
-The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. The following example creates a registry named "myContainerRegistry007." Replace *myContainerRegistry007* in the following command, then run it to create the registry:
+The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. The following example creates a registry named "mycontainerregistry." Replace *mycontainerregistry* in the following command, then run it to create the registry:
```powershell
-$registry = New-AzContainerRegistry -ResourceGroupName "myResourceGroup" -Name "myContainerRegistry007" -EnableAdminUser -Sku Basic
+$registry = New-AzContainerRegistry -ResourceGroupName "myResourceGroup" -Name "mycontainerregistry" -EnableAdminUser -Sku Basic
``` [!INCLUDE [container-registry-quickstart-sku](../../includes/container-registry-quickstart-sku.md)]
container-registry Container Registry Health Error Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-health-error-reference.md
Title: Error reference for registry health checks description: Error codes and possible solutions to problems found by running the az acr check-health diagnostic command in Azure Container Registry Previously updated : 01/25/2021++ Last updated : 10/11/2022 # Health check error reference
This error means that the CLI was unable to determine the Helm version installed
This error means that the registry can't access the user-assigned or system-assigned managed identity used to configure registry encryption with a customer-managed key. The managed identity might have been deleted.
-*Potential solution*: To resolve the issue and rotate the key using a different managed identity, see steps to troubleshoot [the user-assigned identity](container-registry-customer-managed-keys.md#troubleshoot).
+*Potential solution*: To resolve the issue and rotate the key using a different managed identity, see steps to troubleshoot [the user-assigned identity](tutorial-troubleshoot-customer-managed-keys.md).
## CONNECTIVITY_DNS_ERROR
container-registry Container Registry Helm Repos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-helm-repos.md
Title: Store Helm charts description: Learn how to store Helm charts for your Kubernetes applications using repositories in Azure Container Registry Previously updated : 10/20/2021++ Last updated : 10/11/2022 # Push and pull Helm charts to an Azure container registry
container-registry Container Registry Image Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-image-formats.md
Title: Supported content formats description: Learn about content formats supported by Azure Container Registry, including Docker-compatible container images, Helm charts, OCI images, and OCI artifacts. Previously updated : 03/02/2021++ Last updated : 10/11/2022 # Content formats supported in Azure Container Registry
container-registry Container Registry Image Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-image-lock.md
Title: Lock images description: Set attributes for a container image or repository so it can't be deleted or overwritten in an Azure container registry. Previously updated : 09/30/2019++ Last updated : 10/11/2022 # Lock a container image in an Azure container registry
container-registry Container Registry Image Tag Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-image-tag-version.md
Title: Image tag best practices
description: Best practices for tagging and versioning Docker container images when pushing images to and pulling images from an Azure container registry Previously updated : 07/10/2019 Last updated : 10/11/2022
container-registry Container Registry Import Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-import-images.md
Title: Import container images description: Import container images to an Azure container registry by using Azure APIs, without needing to run Docker commands. Previously updated : 09/13/2021++ Last updated : 10/11/2022
container-registry Container Registry Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-intro.md
Title: Managed container registries
description: Introduction to the Azure Container Registry service, providing cloud-based, managed registries. Previously updated : 02/10/2020 Last updated : 10/11/2022
container-registry Container Registry Java Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-java-quickstart.md
Title: Quickstart - Build and push container images of the Java Spring Boot App
description: Learn to build and push a containerized Java Spring Boot app to the Azure Container Registry using Maven and Jib plugin. Previously updated : 01/18/2022 Last updated : 10/11/2022
container-registry Container Registry Oci Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-oci-artifacts.md
description: Push and pull Open Container Initiative (OCI) artifacts using a pri
Previously updated : 02/03/2021 Last updated : 10/11/2022
container-registry Container Registry Oras Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-oras-artifacts.md
description: Attach, push, and pull supply chain artifacts using Azure Registry
Previously updated : 11/11/2021 Last updated : 10/11/2022
container-registry Container Registry Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-private-link.md
Title: Set up private endpoint with private link description: Set up a private endpoint on a container registry and enable access over a private link in a local virtual network. Private link access is a feature of the Premium service tier. Previously updated : 7/26/2022+ Last updated : 10/11/2022+ # Connect privately to an Azure container registry using Azure Private Link
container-registry Container Registry Quickstart Task Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-quickstart-task-cli.md
Title: Quickstart - Build a container image on-demand in Azure description: Use Azure Container Registry commands to quickly build, push, and run a Docker container image on-demand, in the Azure cloud. Previously updated : 09/25/2020++ Last updated : 10/11/2022
container-registry Container Registry Repositories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-repositories.md
Title: View repositories in portal description: Use the Azure portal to view Azure Container Registry repositories, which host Docker container images and other supported artifacts. Previously updated : 01/05/2018++ Last updated : 10/11/2022 # View container registry repositories in the Azure portal
container-registry Container Registry Repository Scoped Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-repository-scoped-permissions.md
Title: Permissions to repositories in Azure Container Registry description: Create a token with permissions scoped to specific repositories in a Premium registry to pull or push images, or perform other actions Previously updated : 02/04/2021++ Last updated : 10/11/2022 ms.devlang: azurecli
container-registry Container Registry Retention Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-retention-policy.md
Title: Policy to retain untagged manifests description: Learn how to enable a retention policy in your Premium Azure container registry, for automatic deletion of untagged manifests after a defined period. Previously updated : 04/26/2021++ Last updated : 10/11/2022 # Set a retention policy for untagged manifests
container-registry Container Registry Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-roles.md
Title: Registry roles and permissions description: Use Azure role-based access control (Azure RBAC) and identity and access management (IAM) to provide fine-grained permissions to resources in an Azure container registry. Previously updated : 09/02/2021++ Last updated : 10/11/2022
container-registry Container Registry Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-skus.md
Title: Registry service tiers and features description: Learn about the features and limits (quotas) in the Basic, Standard, and Premium service tiers (SKUs) of Azure Container Registry. Previously updated : 08/12/2021++ Last updated : 10/11/2022 # Azure Container Registry service tiers
container-registry Container Registry Soft Delete Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-soft-delete-policy.md
For example, after five days of soft deleting the artifact, if the user changes
## Known issues
->* Enabling the soft delete policy with AZ through ARM template leaves the registry stuck in the `creation` state. If you see this error, please delete and recreate the registry disabling either soft delete policy or Geo-replication on the registry.
+>* Enabling the soft delete policy with Availability Zones through an ARM template leaves the registry stuck in the `creation` state. If you see this error, delete the registry and re-create it with geo-replication disabled.
>* Accessing the manage deleted artifacts blade after disabling the soft delete policy will throw an error message with 405 status. >* The customers with restrictions on permissions to restore, will see an issue as File not found. ## Enable soft delete policy for registry - CLI
container-registry Container Registry Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-storage.md
Title: Container image storage description: Details on how your container images and other artifacts are stored in Azure Container Registry, including security, redundancy, and capacity. Previously updated : 03/24/2021++ Last updated : 10/11/2022
Every [Basic, Standard, and Premium](container-registry-skus.md) Azure container
## Encryption-at-rest
-All container images and other artifacts in your registry are encrypted at rest. Azure automatically encrypts an image before storing it, and decrypts it on-the-fly when you or your applications and services pull the image. Optionally apply an extra encryption layer with a [customer-managed key](container-registry-customer-managed-keys.md).
+All container images and other artifacts in your registry are encrypted at rest. Azure automatically encrypts an image before storing it, and decrypts it on-the-fly when you or your applications and services pull the image. Optionally apply an extra encryption layer with a [customer-managed key](tutorial-enable-customer-managed-keys.md).
## Regional storage
container-registry Container Registry Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-support-policies.md
Title: Azure Container Registry technical support policies description: Learn about Azure Container Registry (ACR) technical support policies Previously updated : 02/18/2022+ Last updated : 10/11/2022 #Customer intent: As a developer, I want to understand what ACR components I need to manage, what components are managed by Microsoft.
This article provides details about Azure Container Registry (ACR) support polic
>* [Connect to ACR using Azure private link](container-registry-private-link.md) >* [Push and pull Helm charts to ACR](container-registry-helm-repos.md)
->* [Encrypt using Customer managed keys](container-registry-customer-managed-keys.md)
+>* [Encrypt using Customer managed keys](tutorial-enable-customer-managed-keys.md)
>* [Enable Content trust](container-registry-content-trust.md) >* [Scan Images using Azure Security Center](../defender-for-cloud/defender-for-container-registries-introduction.md) >* [ACR Tasks](./container-registry-tasks-overview.md)
container-registry Container Registry Task Run Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-task-run-template.md
Title: Quick task run with template description: Queue an ACR task run to build an image using an Azure Resource Manager template Previously updated : 04/22/2020++ Last updated : 10/11/2022 # Run ACR Tasks using Resource Manager templates
container-registry Container Registry Tasks Authentication Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-authentication-key-vault.md
Title: External authentication from ACR task description: Configure an Azure Container Registry Task (ACR Task) to read Docker Hub credentials stored in an Azure key vault, by using a managed identity for Azure resources. Previously updated : 07/06/2020++ Last updated : 10/11/2022 # External authentication in an ACR task using an Azure-managed identity
container-registry Container Registry Tasks Authentication Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-authentication-managed-identity.md
Title: Managed identity in ACR task description: Enable a managed identity for Azure Resources in an Azure Container Registry task to allow the task to access other Azure resources including other private container registries. --- Previously updated : 01/14/2020+ Last updated : 10/11/2022 # Use an Azure-managed identity in ACR Tasks
container-registry Container Registry Tasks Base Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-base-images.md
Title: Base image updates - Tasks description: Learn about base images for application container images, and about how a base image update can trigger an Azure Container Registry task. Previously updated : 01/22/2019++ Last updated : 10/11/2022 # About base image updates for ACR Tasks
container-registry Container Registry Tasks Cross Registry Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-cross-registry-authentication.md
Title: Cross-registry authentication from ACR task description: Configure an Azure Container Registry Task (ACR Task) to access another private Azure container registry by using a managed identity for Azure resources Previously updated : 07/06/2020++ Last updated : 10/11/2022 # Cross-registry authentication in an ACR task using an Azure-managed identity
container-registry Container Registry Tasks Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-logs.md
Title: View task run logs - Tasks description: How to view and manage run logs generated by ACR Tasks. Previously updated : 03/09/2020++ Last updated : 10/11/2022 # View and manage task run logs
container-registry Container Registry Tasks Multi Step https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-multi-step.md
Title: Multi-step task to build, test & patch image description: Introduction to multi-step tasks, a feature of ACR Tasks in Azure Container Registry that provides task-based workflows for building, testing, and patching container images in the cloud. Previously updated : 03/28/2019++ Last updated : 10/11/2022 # Run multi-step build, test, and patch tasks in ACR Tasks
container-registry Container Registry Tasks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-overview.md
Title: ACR Tasks overview description: An introduction to ACR Tasks, a suite of features in Azure Container Registry that provides secure, automated container image build, management, and patching in the cloud. Previously updated : 06/14/2021++ Last updated : 10/11/2022 # Automate container image builds and maintenance with ACR Tasks
container-registry Container Registry Tasks Pack Build https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-pack-build.md
Title: Build image with Cloud Native Buildpack description: Use the az acr pack build command to build a container image from an app and push to Azure Container Registry, without using a Dockerfile. Previously updated : 06/24/2021++ Last updated : 10/11/2022
container-registry Container Registry Tasks Reference Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-reference-yaml.md
Title: YAML reference - ACR Tasks description: Reference for defining tasks in YAML for ACR Tasks, including task properties, step types, step properties, and built-in variables. Previously updated : 07/08/2020++ Last updated : 10/11/2022 # ACR Tasks reference: YAML
container-registry Container Registry Tasks Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-samples.md
Title: ACR task samples description: Sample Azure Container Registry Tasks (ACR Tasks) to build, run, and patch container images Previously updated : 11/14/2019++ Last updated : 10/11/2022 # ACR Tasks samples
container-registry Container Registry Tasks Scheduled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-scheduled.md
Title: Tutorial - Schedule an ACR task description: In this tutorial, learn how to run an Azure Container Registry Task on a defined schedule by setting one or more timer triggers Previously updated : 11/24/2020++ Last updated : 10/11/2022 # Tutorial: Run an ACR task on a defined schedule
container-registry Container Registry Transfer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-cli.md
Title: ACR Transfer with Az CLI description: Use ACR Transfer with Az CLI Previously updated : 11/18/2021++ Last updated : 10/11/2022
container-registry Container Registry Transfer Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-images.md
Title: ACR Transfer with Arm Templates description: ACR Transfer with Az CLI with ARM templates Previously updated : 10/07/2020++ Last updated : 10/11/2022
container-registry Container Registry Transfer Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-prerequisites.md
Title: Transfer artifacts description: Overview of ACR Transfer and prerequisites Previously updated : 11/18/2021++ Last updated : 10/11/2022
container-registry Container Registry Transfer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-troubleshooting.md
Title: ACR Transfer Troubleshooting description: Troubleshoot ACR Transfer++ Last updated : 10/11/2022 Previously updated : 09/24/2022- # ACR Transfer troubleshooting
container-registry Container Registry Troubleshoot Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-troubleshoot-access.md
Title: Troubleshoot network issues with registry description: Symptoms, causes, and resolution of common problems when accessing an Azure container registry in a virtual network or behind a firewall Previously updated : 05/10/2021++ Last updated : 10/11/2022 # Troubleshoot network issues with registry
container-registry Container Registry Troubleshoot Login https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-troubleshoot-login.md
Title: Troubleshoot login to registry description: Symptoms, causes, and resolution of common problems when logging into an Azure container registry Previously updated : 08/11/2020++ Last updated : 10/11/2022 # Troubleshoot registry login
container-registry Container Registry Troubleshoot Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-troubleshoot-performance.md
Title: Troubleshoot registry performance description: Symptoms, causes, and resolution of common problems with the performance of a registry Previously updated : 08/11/2020++ Last updated : 10/11/2022 # Troubleshoot registry performance
container-registry Container Registry Tutorial Base Image Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-base-image-update.md
Title: Tutorial - Trigger image build on base image update description: In this tutorial, you learn how to configure an Azure Container Registry Task to automatically trigger container image builds in the cloud when a base image is updated in the same registry. Previously updated : 11/24/2020++ Last updated : 10/11/2022 # Customer intent: As a developer or devops engineer, I want container images to be built automatically when the base image of a container is updated in the registry.
container-registry Container Registry Tutorial Build Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-build-task.md
Title: Tutorial - Build image on code commit description: In this tutorial, you learn how to configure an Azure Container Registry Task to automatically trigger container image builds in the cloud when you commit source code to a Git repository. Previously updated : 11/24/2020++ Last updated : 10/11/2022 # Customer intent: As a developer or devops engineer, I want to trigger container image builds automatically when I commit code to a Git repo.
container-registry Container Registry Tutorial Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-deploy-app.md
Title: Tutorial - Deploy from geo-replicated registry description: Deploy a Linux-based web app to two different Azure regions using a container image from a geo-replicated Azure container registry. Part two of a three-part series. Previously updated : 08/20/2018++ Last updated : 10/11/2022
container-registry Container Registry Tutorial Deploy Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-deploy-update.md
Title: Tutorial - Push update to geo-replicated registry description: Push an updated Docker image to your geo-replicated Azure container registry, then see the changes automatically deployed to web apps running in multiple regions. Part three of a three-part series. Previously updated : 04/30/2018++ Last updated : 10/11/2022
container-registry Container Registry Tutorial Multistep Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-multistep-task.md
Title: Tutorial - Multi-step ACR task description: In this tutorial, you learn how to configure an Azure Container Registry Task to automatically trigger a multi-step workflow to build, run, and push container images in the cloud when you commit source code to a Git repository. Previously updated : 11/24/2020++ Last updated : 10/11/2022 # Customer intent: As a developer or devops engineer, I want to trigger a multi-step container workflow automatically when I commit code to a Git repo.
container-registry Container Registry Tutorial Prepare Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-prepare-registry.md
Title: Tutorial - Create geo-replicated registry description: Create an Azure container registry, configure geo-replication, prepare a Docker image, and deploy it to the registry. Part one of a three-part series. Previously updated : 06/30/2020++ Last updated : 10/11/2022
container-registry Container Registry Tutorial Private Base Image Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-private-base-image-update.md
Title: Tutorial - Trigger image build by private base image update description: In this tutorial, you configure an Azure Container Registry Task to automatically trigger container image builds in the cloud when a base image in another private Azure container registry is updated. Previously updated : 11/20/2020++ Last updated : 10/11/2022
container-registry Container Registry Tutorial Quick Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-quick-task.md
Title: Tutorial - Quick container image build description: In this tutorial, you learn how to build a Docker container image in Azure with Azure Container Registry Tasks (ACR Tasks), then deploy it to Azure Container Instances. Previously updated : 07/20/2021++ Last updated : 10/11/2022 # Customer intent: As a developer or devops engineer, I want to quickly build container images in Azure, without having to install dependencies like Docker Engine, so that I can simplify my inner-loop development pipeline.
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
Previously updated : 05/08/2022 Last updated : 10/11/2022 # Build, sign, and verify container images using Notary and Azure Key Vault (Preview)
container-registry Container Registry Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-vnet.md
Title: Restrict access using a service endpoint description: Restrict access to an Azure container registry using a service endpoint in an Azure virtual network. Service endpoint access is a feature of the Premium service tier. Previously updated : 05/04/2020+ Last updated : 10/11/2022 # Restrict access to a container registry using a service endpoint in an Azure virtual network
container-registry Container Registry Webhook Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-webhook-reference.md
Title: Registry webhook schema reference description: Reference for JSON payload for webhook requests in an Azure container registry, which are generated when webhooks are enabled for artifact push or delete events Previously updated : 03/05/2019++ Last updated : 10/11/2022 # Azure Container Registry webhook reference
container-registry Container Registry Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-webhook.md
Title: Webhooks to respond to registry actions description: Learn how to use webhooks to trigger events when push or pull actions occur in your registry repositories. Previously updated : 05/24/2019++ Last updated : 10/11/2022 # Using Azure Container Registry webhooks
container-registry Data Loss Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/data-loss-prevention.md
Title: Disable export of artifacts description: Set a registry property to prevent data exfiltration from a Premium Azure container registry. Previously updated : 07/27/2021++ Last updated : 10/11/2022 # Disable export of artifacts from an Azure container registry
container-registry Github Action Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/github-action-scan.md
Title: Scan container images using GitHub Actions description: Learn how to scan the container images using Container Scan action-- Previously updated : 05/20/2021++ Last updated : 10/11/2022
container-registry Intro Connected Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/intro-connected-registry.md
description: Overview and scenarios of the connected registry feature of Azure C
Previously updated : 10/21/2021 Last updated : 10/11/2022
container-registry Manual Regional Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/manual-regional-move.md
Title: Move Azure container registry to another region description: Manually move Azure container registry settings and data to another Azure region. Previously updated : 06/08/2021++ Last updated : 10/11/2022 # Manually move a container registry to another region
Azure CLI
* Use steps in this article to move the registry to a different region in the same subscription. More configuration may be needed to move a registry to a different Azure subscription in the same Active Directory tenant. * Exporting and using a Resource Manager template can help re-create many registry settings. You can edit the template to configure more settings, or update the target registry after creation.
-* Currently, Azure Container Registry doesn't support a registry move to a different Active Directory tenant. This limitation applies to both registries encrypted with a [customer-managed key](container-registry-customer-managed-keys.md) and unencrypted registries.
+* Currently, Azure Container Registry doesn't support a registry move to a different Active Directory tenant. This limitation applies to both registries encrypted with a [customer-managed key](tutorial-enable-customer-managed-keys.md) and unencrypted registries.
* If you are unable to move a registry by using the steps outlined in this article, create a new registry, manually recreate settings, and [Import registry content in the target registry](#import-registry-content-in-target-registry). * You can find the steps to move registry resources to a new resource group in the same subscription or to a [new subscription](/azure/azure-resource-manager/management/move-resource-group-and-subscription).
For more information, see [Use exported template from the Azure portal](../azure
> [!IMPORTANT] > If you want to encrypt the target registry using a customer-managed key, make sure to update the template with settings for the required managed identity, key vault, and key. You can only enable the customer-managed key when you deploy the registry. >
-> For more information, see [Encrypt registry using customer-managed key](./container-registry-customer-managed-keys.md#enable-customer-managed-keytemplate).
+> For more information, see [Encrypt registry using customer-managed key](./tutorial-enable-customer-managed-keys.md#enable-a-customer-managed-key-by-using-a-resource-manager-template).
### Create resource group
container-registry Monitor Service Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/monitor-service-reference.md
Previously updated : 03/19/2021 Last updated : 10/11/2022 # Monitoring Azure Container Registry data reference
container-registry Monitor Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/monitor-service.md
Title: Monitor Azure Container Registry description: Start here to learn how to monitor your Azure container registry using features of Azure Monitor-- Previously updated : 08/13/2021++ Last updated : 10/11/2022 # Monitor Azure Container Registry
container-registry Overview Connected Registry Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/overview-connected-registry-access.md
Previously updated : 10/21/2021 Last updated : 10/11/2022
container-registry Overview Connected Registry And Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/overview-connected-registry-and-iot-edge.md
Previously updated : 08/24/2021 Last updated : 10/11/2022
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/12/2022++ Last updated : 10/11/2022 --
container-registry Pull Images From Connected Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/pull-images-from-connected-registry.md
Title: Pull images from a connected registry description: Use Azure Container Registry CLI commands to configure a client token and pull images from a connected registry on an IoT Edge device. Previously updated : 10/21/2021--++ Last updated : 10/11/2022 ms.devlang: azurecli
container-registry Push Multi Architecture Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/push-multi-architecture-images.md
Title: Multi-architecture images in your registry description: Use your Azure container registry to build, import, store, and deploy multi-architecture (multi-arch) images Previously updated : 02/07/2021++ Last updated : 10/11/2022
container-registry Quickstart Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-client-libraries.md
Title: Quickstart - Manage container registry content with client libraries description: Use this quickstart to manage repositories, images, and artifacts using the Azure Container Registry client libraries- Previously updated : 10/05/2021-++ Last updated : 10/11/2022 zone_pivot_groups: programming-languages-set-ten ms.devlang: azurecli
container-registry Quickstart Connected Registry Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-connected-registry-cli.md
Title: Quickstart - Create connected registry using the CLI description: Use Azure CLI commands to create a connected Azure container registry resource that can synchronize images and other artifacts with the cloud registry. Previously updated : 10/21/2021 Last updated : 10/11/2022
container-registry Quickstart Connected Registry Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-connected-registry-portal.md
Title: Quickstart - Create connected registry using the portal description: Use Azure portal to create a connected Azure container registry resource that can synchronize images and other artifacts with the cloud registry. Previously updated : 10/21/2021 Last updated : 10/11/2022
container-registry Quickstart Deploy Connected Registry Iot Edge Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-deploy-connected-registry-iot-edge-cli.md
Title: Quickstart - Deploy a connected registry to an IoT Edge device description: Use Azure CLI commands and Azure portal to deploy a connected Azure container registry to an Azure IoT Edge device. Previously updated : 10/21/2021 Last updated : 10/11/2022
container-registry Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Container Registry description: Sample Azure Resource Graph queries for Azure Container Registry showing use of resource types and tables to access Azure Container Registry related resources and properties. Previously updated : 07/07/2022 --++ Last updated : 10/11/2022
container-registry Scan Images Defender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/scan-images-defender.md
Title: Scan registry images with Microsoft Defender for Cloud description: Learn about using Microsoft Defender for container registries to scan images in your Azure container registries Previously updated : 05/19/2021++ Last updated : 10/11/2022 # Scan registry images with Microsoft Defender for Cloud
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 --++ Last updated : 10/10/2022
container-registry Tasks Agent Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tasks-agent-pools.md
Title: Use dedicated pool to run task - Tasks description: Set up a dedicated compute pool (agent pool) in your registry to run an Azure Container Registry task. Previously updated : 10/12/2020++ Last updated : 10/11/2022
container-registry Tasks Consume Public Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tasks-consume-public-content.md
description: Create an automated Azure Container Registry Tasks workflow to trac
Previously updated : 10/29/2020 Last updated : 10/11/2022
container-registry Tutorial Deploy Connected Registry Nested Iot Edge Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-deploy-connected-registry-nested-iot-edge-cli.md
Title: 'Tutorial: Deploy a connected registry to an IoT Edge hierarchy' description: In this tutorial, use Azure CLI commands to create a two-layer hierarchy of Azure IoT Edge devices and deploy a connected registry as a module at each layer. Previously updated : 06/07/2022 Last updated : 10/11/2022
container-registry Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/zone-redundancy.md
Title: Zone-redundant registry for high availability description: Learn about enabling zone redundancy in Azure Container Registry. Create a container registry or replication in an Azure availability zone. Zone redundancy is a feature of the Premium service tier. Previously updated : 09/13/2021-+ Last updated : 10/11/2022++ # Enable zone redundancy in Azure Container Registry for resiliency and high availability
cosmos-db Access Key Vault Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/access-key-vault-managed-identity.md
ms.devlang: csharp+ Last updated 06/01/2022 # Access Azure Key Vault from Azure Cosmos DB using a managed identity Azure Cosmos DB may need to read secret/key data from Azure Key Vault. For example, your Azure Cosmos DB may require a customer-managed key stored in Azure Key Vault. To do this, Azure Cosmos DB should be configured with a managed identity, and then an Azure Key Vault access policy should grant the managed identity access. ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An existing Azure Cosmos DB SQL API account. [Create an Azure Cosmos DB SQL API account](sql/create-cosmosdb-resources-portal.md)
+- An existing Azure Cosmos DB API for NoSQL account. [Create an Azure Cosmos DB API for NoSQL account](nosql/quickstart-portal.md)
- An existing Azure Key Vault resource. [Create a key vault using the Azure CLI](../key-vault/general/quick-create-cli.md) - To perform the steps in this article, install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in to Azure](/cli/azure/authenticate-azure-cli).
Azure Cosmos DB may need to read secret/key data from Azure Key Vault. For examp
# Variable for function app name keyVaultName="msdocs-keyvault"
- # Variable for Cosmos DB account name
+ # Variable for Azure Cosmos DB account name
cosmosName="msdocs-cosmos-app" # Variable for resource group name
In this step, create an access policy in Azure Key Vault using the previously ma
## Next steps
-* To use customer-managed keys in Azure Key Vault with your Azure Cosmos account, see [configure customer-managed keys](how-to-setup-cmk.md#using-managed-identity)
+* To use customer-managed keys in Azure Key Vault with your Azure Cosmos DB account, see [configure customer-managed keys](how-to-setup-cmk.md#using-managed-identity)
* To use Azure Key Vault to manage secrets, see [secure credentials](access-secrets-from-keyvault.md).
cosmos-db Access Previews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/access-previews.md
Title: Request access to Azure Cosmos DB previews
description: Learn how to request access to Azure Cosmos DB previews + Last updated 04/13/2022
# Access Azure Cosmos DB Preview Features ## Steps to register for a preview feature from the portal
Azure Cosmos DB offers several preview features that you can request access to.
3. Click on the feature you would like access to in the list of available preview features. 4. Click the **Register** button at the bottom of the page to join the preview. ## Next steps - Learn [how to choose an API](choose-api.md) in Azure Cosmos DB-- [Get started with Azure Cosmos DB SQL API](create-sql-api-dotnet.md)-- [Get started with Azure Cosmos DB API for MongoDB](mongodb/create-mongodb-nodejs.md)-- [Get started with Azure Cosmos DB Cassandra API](cassandr)-- [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md)-- [Get started with Azure Cosmos DB Table API](table/create-table-dotnet.md)
+- [Get started with Azure Cosmos DB for NoSQL](nosql/quickstart-dotnet.md)
+- [Get started with Azure Cosmos DB for MongoDB](mongodb/create-mongodb-nodejs.md)
+- [Get started with Azure Cosmos DB for Cassandra](cassandr)
+- [Get started with Azure Cosmos DB for Gremlin](gremlin/quickstart-dotnet.md)
+- [Get started with Azure Cosmos DB for Table](table/quickstart-dotnet.md)
cosmos-db Access Secrets From Keyvault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/access-secrets-from-keyvault.md
ms.devlang: csharp+ Last updated 06/01/2022
-# Secure Azure Cosmos credentials using Azure Key Vault
+# Secure Azure Cosmos DB credentials using Azure Key Vault
>[!IMPORTANT] > The recommended solution to access Azure Cosmos DB is to use a [system-assigned managed identity](managed-identity-based-authentication.md). If your service cannot take advantage of managed identities then use the [cert based solution](certificate-based-authentication.md). If both the managed identity solution and cert based solution do not meet your needs, please use the key vault solution below.
The following steps are required to store and read Azure Cosmos DB access keys f
* Select **Manual** for **Upload options**. * Provide a **Name** for your secret
- * Provide the connection string of your Cosmos DB account into the **Value** field. And then select **Create**.
+ * Provide the connection string of your Azure Cosmos DB account in the **Value** field, and then select **Create**.
:::image type="content" source="./media/access-secrets-from-keyvault/create-a-secret.png" alt-text="Screenshot of the Create a secret dialog in the Azure portal.":::
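Once the secret is in place, application code can retrieve the connection string at startup instead of hard-coding it. The following is a minimal Python sketch using the `azure-identity`, `azure-keyvault-secrets`, and `azure-cosmos` libraries; the vault URL, secret name, and database name are hypothetical placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.cosmos import CosmosClient

# Hypothetical vault URL and secret name; substitute your own values.
credential = DefaultAzureCredential()
secret_client = SecretClient(
    vault_url="https://<your-vault>.vault.azure.net/", credential=credential
)

# Read the Azure Cosmos DB connection string stored as a Key Vault secret.
connection_string = secret_client.get_secret("cosmos-connection-string").value

# Create the Azure Cosmos DB client from the retrieved connection string.
client = CosmosClient.from_connection_string(connection_string)
database = client.get_database_client("appdb")
```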
cosmos-db Account Databases Containers Items https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/account-databases-containers-items.md
Title: Azure Cosmos DB resource model
-description: This article describes Azure Cosmos DB resource model which includes the Azure Cosmos account, database, container, and the items. It also covers the hierarchy of these elements in an Azure Cosmos account.
+description: This article describes Azure Cosmos DB resource model which includes the Azure Cosmos DB account, database, container, and the items. It also covers the hierarchy of these elements in an Azure Cosmos DB account.
+ Last updated 08/03/2022 # Azure Cosmos DB resource model Azure Cosmos DB is a fully managed platform-as-a-service (PaaS). To begin using Azure Cosmos DB, create an Azure Cosmos DB account in an Azure resource group in your subscription. You then create databases and containers within the account.
The following image shows the hierarchy of different entities in an Azure Cosmos
In Azure Cosmos DB, a database is similar to a namespace. A database is simply a group of containers. The following table shows how a database is mapped to various API-specific entities:
-| Azure Cosmos DB entity | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API |
+| Azure Cosmos DB entity | API for NoSQL | API for Apache Cassandra | API for MongoDB | API for Apache Gremlin | API for Table |
| | | | | | |
-|Azure Cosmos database | Database | Keyspace | Database | Database | NA |
+|Azure Cosmos DB database | Database | Keyspace | Database | Database | NA |
> [!NOTE]
-> With Table API accounts, to maintain compatibility with Azure Storage Tables, tables in Azure Cosmos DB are created at the account level.
+> With API for Table accounts, to maintain compatibility with Azure Storage Tables, tables in Azure Cosmos DB are created at the account level.
## Azure Cosmos DB containers
Data within a container must have a unique `id` property value for each logical
A container is specialized into API-specific entities as shown in the following table:
-| Azure Cosmos entity | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API |
+| Azure Cosmos DB entity | API for NoSQL | API for Cassandra | API for MongoDB | API for Gremlin | API for Table |
| | | | | | |
-|Azure Cosmos container | Container | Table | Collection | Graph | Table |
+|Azure Cosmos DB container | Container | Table | Collection | Graph | Table |
> [!NOTE] > When creating containers, make sure you don't create two containers with the same name but different casing. Some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
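To make the database, container, and item concepts concrete, here is a minimal sketch using the Azure Cosmos DB Python SDK for the API for NoSQL; the endpoint, key, and resource names are hypothetical placeholders.

```python
from azure.cosmos import CosmosClient, PartitionKey

# Hypothetical endpoint and key; substitute your own account values.
client = CosmosClient(
    "https://<your-account>.documents.azure.com:443/", credential="<your-key>"
)

# A database is a group of containers.
database = client.create_database_if_not_exists(id="appdb")

# A container holds items; each partition key value maps to a logical partition.
container = database.create_container_if_not_exists(
    id="products",
    partition_key=PartitionKey(path="/category"),
)

# Each item needs a unique `id` value within its logical partition.
container.upsert_item({"id": "1", "category": "books", "name": "Sample product"})
```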
A container is specialized into API-specific entities as shown in the following
An Azure Cosmos DB container has a set of system-defined properties. Depending on which API you use, some properties might not be directly exposed. The following table describes the list of system-defined properties:
-| System-defined property | System-generated or user-configurable | Purpose | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API |
+| System-defined property | System-generated or user-configurable | Purpose | API for NoSQL | API for Cassandra | API for MongoDB | API for Gremlin | API for Table |
| | | | | | | | | |\_rid | System-generated | Unique identifier of container | Yes | No | No | No | No | |\_etag | System-generated | Entity tag used for optimistic concurrency control | Yes | No | No | No | No |
An Azure Cosmos DB container has a set of system-defined properties. Depending o
## Azure Cosmos DB items
-Depending on which API you use, data can represent either an item in a container, a document in a collection, a row in a table, or a node or edge in a graph. The following table shows the mapping of API-specific entities to an Azure Cosmos item:
+Depending on which API you use, data can represent either an item in a container, a document in a collection, a row in a table, or a node or edge in a graph. The following table shows the mapping of API-specific entities to an Azure Cosmos DB item:
-| Cosmos entity | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API |
+| Azure Cosmos DB entity | API for NoSQL | API for Cassandra | API for MongoDB | API for Gremlin | API for Table |
| | | | | | | | Azure Cosmos DB item | Item | Row | Document | Node or edge | Item | ### Properties of an item
-Every Azure Cosmos item has the following system-defined properties. Depending on which API you use, some of them might not be directly exposed.
+Every Azure Cosmos DB item has the following system-defined properties. Depending on which API you use, some of them might not be directly exposed.
-| System-defined property | System-generated or user-defined| Purpose | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API |
+| System-defined property | System-generated or user-defined | Purpose | API for NoSQL | API for Cassandra | API for MongoDB | API for Gremlin | API for Table |
| | | | | | | | | |\_rid | System-generated | Unique identifier of the item | Yes | No | No | No | No | |\_etag | System-generated | Entity tag used for optimistic concurrency control | Yes | No | No | No | No |
Every Azure Cosmos item has the following system-defined properties. Depending o
### Operations on items
-Azure Cosmos items support the following operations. You can use any of the Azure Cosmos APIs to perform the operations.
+Azure Cosmos DB items support the following operations. You can use any of the Azure Cosmos DB APIs to perform the operations.
-| Operation | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API |
+| Operation | API for NoSQL | API for Cassandra | API for MongoDB | API for Gremlin | API for Table |
| | | | | | | | Insert, Replace, Delete, Upsert, Read | Yes | Yes | Yes | Yes | Yes | ## Next steps
-Learn how to manage your Azure Cosmos account and other concepts:
+Learn how to manage your Azure Cosmos DB account and other concepts:
-* To learn more, see the [Azure Cosmos DB SQL API](/training/modules/intro-to-azure-cosmos-db-core-api/) training module.
+* To learn more, see the [Azure Cosmos DB API for NoSQL](/training/modules/intro-to-azure-cosmos-db-core-api/) training module.
* [How-to manage your Azure Cosmos DB account](how-to-manage-database-account.md) * [Global distribution](distribute-data-globally.md) * [Consistency levels](consistency-levels.md)
cosmos-db Advanced Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/advanced-queries.md
+
+ Title: Troubleshoot issues with advanced diagnostics queries (API for NoSQL)
+
+description: Learn how to query diagnostics logs to troubleshoot data stored in Azure Cosmos DB - API for NoSQL.
+++++ Last updated : 06/12/2021+++
+# Troubleshoot issues with advanced diagnostics queries for the API for NoSQL
++
+> [!div class="op_single_selector"]
+> * [API for NoSQL](advanced-queries.md)
+> * [API for MongoDB](mongodb/diagnostic-queries.md)
+> * [API for Cassandra](cassandr)
+> * [API for Gremlin](gremlin/diagnostic-queries.md)
+>
+
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostic logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
+
+For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+
+For [resource-specific tables](monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
+
+- Makes it much easier to work with the data.
+- Provides better discoverability of the schemas.
+- Improves performance across both ingestion latency and query times.
+
+## Common queries
+Common queries are shown in the resource-specific and Azure Diagnostics tables.
+
+### Top N(10) queries ordered by Request Unit (RU) consumption in a specific time frame
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ let topRequestsByRUcharge = CDBDataPlaneRequests
+ | where TimeGenerated > ago(24h)
+ | project RequestCharge , TimeGenerated, ActivityId;
+ CDBQueryRuntimeStatistics
+ | project QueryText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner topRequestsByRUcharge on ActivityId
+ | project DatabaseName , CollectionName , QueryText , RequestCharge, TimeGenerated
+ | order by RequestCharge desc
+ | take 10
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ let topRequestsByRUcharge = AzureDiagnostics
+ | where Category == "DataPlaneRequests" and TimeGenerated > ago(24h)
+ | project requestCharge_s , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "QueryRuntimeStatistics"
+ | project querytext_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner topRequestsByRUcharge on activityId_g
+ | project databasename_s , collectionname_s , querytext_s , requestCharge_s, TimeGenerated
+ | order by requestCharge_s desc
+ | take 10
+ ```
++
+### Requests throttled (statusCode = 429) in a specific time window
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ let throttledRequests = CDBDataPlaneRequests
+ | where StatusCode == "429"
+ | project OperationName , TimeGenerated, ActivityId;
+ CDBQueryRuntimeStatistics
+ | project QueryText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner throttledRequests on ActivityId
+ | project DatabaseName , CollectionName , QueryText , OperationName, TimeGenerated
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ let throttledRequests = AzureDiagnostics
+ | where Category == "DataPlaneRequests" and statusCode_s == "429"
+ | project OperationName , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "QueryRuntimeStatistics"
+ | project querytext_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner throttledRequests on activityId_g
+ | project databasename_s , collectionname_s , querytext_s , OperationName, TimeGenerated
+ ```
++
+### Queries with the largest response lengths (payload size of the server response)
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ let operationsbyUserAgent = CDBDataPlaneRequests
+ | project OperationName, DurationMs, RequestCharge, ResponseLength, ActivityId;
+ CDBQueryRuntimeStatistics
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on ActivityId
+ | summarize max(ResponseLength) by QueryText
+ | order by max_ResponseLength desc
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ let operationsbyUserAgent = AzureDiagnostics
+ | where Category=="DataPlaneRequests"
+ | project OperationName, duration_s, requestCharge_s, responseLength_s, activityId_g;
+ AzureDiagnostics
+ | where Category == "QueryRuntimeStatistics"
+ //specify collection and database
+ //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on activityId_g
+ | summarize max(responseLength_s) by querytext_s
+ | order by max_responseLength_s desc
+ ```
++
+### RU consumption by physical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by toint(PartitionKeyRangeId)
+ | render columnchart
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+ //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
+ | render columnchart
+ ```
++
+### RU consumption by logical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
+ | render columnchart
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+ //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
+ | render columnchart
+ ```
++
+## Next steps
+* For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](monitor-resource-logs.md).
+* For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md).
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
Last updated 03/24/2022 -+ # What is Azure Cosmos DB analytical store? Azure Cosmos DB analytical store is a fully isolated column store for enabling large-scale analytics against operational data in your Azure Cosmos DB, without any impact to your transactional workloads.
The following constraints are applicable on the operational data in Azure Cosmos
} ```
-* While JSON documents (and Cosmos DB collections/containers) are case-sensitive from the uniqueness perspective, analytical store isn't.
+* While JSON documents (and Azure Cosmos DB collections/containers) are case-sensitive from the uniqueness perspective, analytical store isn't.
* **In the same document:** Property names at the same level should be unique when compared case-insensitively. For example, the following JSON document has "Name" and "name" at the same level. While it's a valid JSON document, it doesn't satisfy the uniqueness constraint and hence won't be fully represented in the analytical store. In this example, "Name" and "name" are the same when compared in a case-insensitive manner. Only `"Name": "fred"` will be represented in analytical store, because it's the first occurrence. And `"name": "john"` won't be represented at all.
df = spark.read\
There are two types of schema representation in the analytical store. These types define the schema representation method for all containers in the database account and have tradeoffs between the simplicity of query experience versus the convenience of a more inclusive columnar representation for polymorphic schemas.
-* Well-defined schema representation, default option for SQL (CORE) and Gremlin API accounts.
-* Full fidelity schema representation, default option for Azure Cosmos DB API for MongoDB accounts.
+* Well-defined schema representation, default option for API for NoSQL and Gremlin accounts.
+* Full fidelity schema representation, default option for API for MongoDB accounts.
#### Well-defined schema representation
FROM OPENROWSET('CosmosDB',
                HTAP) WITH (_id VARCHAR(1000)) as HTAP ```
-#### Full fidelity schema for SQL or Gremlin API accounts
+#### Full fidelity schema for API for NoSQL or Gremlin accounts
-It's possible to use full fidelity Schema for SQL (Core) API accounts, instead of the default option, by setting the schema type when enabling Synapse Link on a Cosmos DB account for the first time. Here are the considerations about changing the default schema representation type:
+It's possible to use full fidelity schema for API for NoSQL accounts, instead of the default option, by setting the schema type when enabling Synapse Link on an Azure Cosmos DB account for the first time. Here are the considerations about changing the default schema representation type:
* This option is only valid for accounts that **don't** have Synapse Link already enabled. * It isn't possible to reset the schema representation type, from well-defined to full fidelity or vice-versa.
- * Currently Azure Cosmos DB API for MongoDB isn't compatible with this possibility of changing the schema representation. All MongoDB accounts will always have full fidelity schema representation type.
+ * Currently, Azure Cosmos DB for MongoDB isn't compatible with changing the schema representation. All MongoDB accounts will always have the full fidelity schema representation type.
* Currently this change can't be made through the Azure portal. All database accounts that have Synapse Link enabled by the Azure portal will have the default schema representation type, well-defined schema. The schema representation type decision must be made at the same time that Synapse Link is enabled on the account, using Azure CLI or PowerShell.
Synapse Link, and analytical store by consequence, has different compatibility l
### Backup Policies
-There are two possible backup polices and to understand how to use them, the following details about Cosmos DB backups are very important:
+There are two possible backup policies. To understand how to use them, the following details about Azure Cosmos DB backups are important:
* The original container is restored without analytical store in both backup modes.
- * Cosmos DB doesn't support containers overwrite from a restore.
+ * Azure Cosmos DB doesn't support overwriting containers from a restore.
Now let's see how to use backup and restores from the analytical store perspective.
Analytical store partitioning is completely independent of partitioning in
## Security
-* **Authentication with the analytical store** is the same as the transactional store for a given database. You can use primary, secondary, or read-only keys for authentication. You can leverage linked service in Synapse Studio to prevent pasting the Azure Cosmos DB keys in the Spark notebooks. For Azure Synapse SQL serverless, you can use SQL credentials to also prevent pasting the Azure Cosmos DB keys in the SQL notebooks. The Access to these Linked Services or to these SQL credentials are available to anyone who has access to the workspace. Please note that the Cosmos DB read only key can also be used.
+* **Authentication with the analytical store** is the same as the transactional store for a given database. You can use primary, secondary, or read-only keys for authentication. You can leverage a linked service in Synapse Studio to avoid pasting the Azure Cosmos DB keys in Spark notebooks. For Azure Synapse SQL serverless, you can use SQL credentials to also avoid pasting the Azure Cosmos DB keys in SQL notebooks. Access to these linked services or SQL credentials is available to anyone who has access to the workspace. Note that the Azure Cosmos DB read-only key can also be used.
* **Network isolation using private endpoints** - You can control network access to the data in the transactional and analytical stores independently. Network isolation is done using separate managed private endpoints for each store, within managed virtual networks in Azure Synapse workspaces. To learn more, see how to [Configure private endpoints for analytical store](analytical-store-private-endpoints.md) article.
The analytical store is optimized to provide scalability, elasticity, and perfor
By decoupling the analytical storage system from the analytical compute system, data in Azure Cosmos DB analytical store can be queried simultaneously from the different analytics runtimes supported by Azure Synapse Analytics. As of today, Azure Synapse Analytics supports Apache Spark and serverless SQL pool with Azure Cosmos DB analytical store. > [!NOTE]
-> You can only read from analytical store using Azure Synapse Analytics runtimes. And the opposite is also true, Azure Synapse Analytics runtimes can only read from analytical store. Only the auto-sync process can change data in analytical store. You can write data back to Cosmos DB transactional store using Azure Synapse Analytics Spark pool, using the built-in Azure Cosmos DB OLTP SDK.
+> You can only read from analytical store using Azure Synapse Analytics runtimes. And the opposite is also true, Azure Synapse Analytics runtimes can only read from analytical store. Only the auto-sync process can change data in analytical store. You can write data back to Azure Cosmos DB transactional store using Azure Synapse Analytics Spark pool, using the built-in Azure Cosmos DB OLTP SDK.
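As a rough illustration of that read/write split, here is a PySpark sketch for an Azure Synapse Spark notebook (where the `spark` session is predefined). The linked service and container names are hypothetical, and the `cosmos.olap`/`cosmos.oltp` formats and `spark.synapse.linkedService`/`spark.cosmos.container` options follow the Synapse Link Spark connector; treat this as a sketch rather than a definitive implementation.

```python
from pyspark.sql.functions import col

# Read from the analytical store (cosmos.olap) with the Synapse Spark connector.
# "CosmosDbLinkedService", "sales", and "salesSummary" are hypothetical names.
df = (spark.read.format("cosmos.olap")
      .option("spark.synapse.linkedService", "CosmosDbLinkedService")
      .option("spark.cosmos.container", "sales")
      .load())

# Aggregate, and add the `id` field that Azure Cosmos DB items require.
summary = (df.groupBy("storeId").count()
           .withColumn("id", col("storeId").cast("string")))

# Write the results back to the transactional store (cosmos.oltp).
(summary.write.format("cosmos.oltp")
 .option("spark.synapse.linkedService", "CosmosDbLinkedService")
 .option("spark.cosmos.container", "salesSummary")
 .mode("append")
 .save())
```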
## <a id="analytical-store-pricing"></a> Pricing
Azure Synapse serverless SQL pools. See [Azure Synapse Analytics pricing page](h
In order to get a high-level cost estimate to enable analytical store on an Azure Cosmos DB container, from the analytical store perspective, you can use the [Azure Cosmos DB Capacity planner](https://cosmos.azure.com/capacitycalculator/) and get an estimate of your analytical storage and write operations costs. Analytical read operations costs depends on the analytics workload characteristics but as a high-level estimate, scan of 1 TB of data in analytical store typically results in 130,000 analytical read operations, and results in a cost of $0.065. > [!NOTE]
-> Analytical store read operations estimates aren't included in the Cosmos DB cost calculator since they are a function of your analytical workload. While the above estimate is for scanning 1TB of data in analytical store, applying filters reduces the volume of data scanned and this determines the exact number of analytical read operations given the consumption pricing model. A proof-of-concept around the analytical workload would provide a more finer estimate of analytical read operations. This estimate doesn't include the cost of Azure Synapse Analytics.
+> Analytical store read operations estimates aren't included in the Azure Cosmos DB cost calculator since they are a function of your analytical workload. While the above estimate is for scanning 1 TB of data in analytical store, applying filters reduces the volume of data scanned, and this determines the exact number of analytical read operations given the consumption pricing model. A proof-of-concept around the analytical workload would provide a finer estimate of analytical read operations. This estimate doesn't include the cost of Azure Synapse Analytics.
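To make the arithmetic behind that estimate explicit, here is a small Python sketch. The per-10,000-operations rate is inferred from the figures quoted above (130,000 operations for $0.065) and should be checked against the current pricing page.

```python
# Back-of-the-envelope estimate implied by the numbers above:
# scanning 1 TB ~ 130,000 analytical read operations.
read_ops = 130_000
price_per_10k_ops = 0.005  # assumed rate, inferred from the $0.065 figure above

estimated_cost = (read_ops / 10_000) * price_per_10k_ops
print(f"Estimated analytical read cost: ${estimated_cost:.3f}")  # -> $0.065
```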
cosmos-db Analytical Store Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-private-endpoints.md
Title: Configure private endpoints for Azure Cosmos DB analytical store.
description: Learn how to set up managed private endpoints for Azure Cosmos DB analytical store to restrict network access. + Last updated 09/29/2022 - # Configure Azure Private Link for Azure Cosmos DB analytical store In this article, you will learn how to set up managed private endpoints for Azure Cosmos DB analytical store. If you are using the transactional store, see [Private endpoints for the transactional store](how-to-configure-private-endpoints.md) article. Using [managed private endpoints](../synapse-analytics/security/synapse-workspace-managed-private-endpoints.md), you can restrict network access of your Azure Cosmos DB analytical store, to a Managed Virtual Network associated with your Azure Synapse workspace. Managed private endpoints establish a private link to your analytical store. > [!NOTE]
-> If you are using Private DNS Zones for Cosmos DB and wish to create a Synapse managed private endpoint to the analytical store sub-resource, you must first create a DNS zone for the analytical store (`privatelink.analytics.cosmos.azure.com`) linked to your Cosmos DB's virtual network.
+> If you are using Private DNS Zones for Azure Cosmos DB and wish to create a Synapse managed private endpoint to the analytical store sub-resource, you must first create a DNS zone for the analytical store (`privatelink.analytics.cosmos.azure.com`) linked to your Azure Cosmos DB's virtual network.
> [!NOTE]
-> Synapse Link for Gremlin API is now in preview. You can enable Synapse Link in your new or existing graphs using Azure CLI. For more information on how to configure it, click [here](configure-synapse-link.md).
+> Synapse Link for API for Gremlin is now in preview. You can enable Synapse Link in your new or existing graphs using Azure CLI. For more information on how to configure it, click [here](configure-synapse-link.md).
## Enable a private endpoint for the analytical store
The following access restrictions are applicable when data-exfiltration protecti
:::image type="content" source="./media/analytical-store-private-endpoints/create-new-private-endpoint.png" alt-text="Create a new private endpoint for analytical store." border="true":::
-1. Select **Azure Cosmos DB(SQL API)** account type > **Continue**.
+1. Select **Azure Cosmos DB (API for NoSQL)** account type > **Continue**.
- :::image type="content" source="./media/analytical-store-private-endpoints/select-private-endpoint.png" alt-text="Select Azure Cosmos DB SQL API to create a private endpoint." border="true":::
+ :::image type="content" source="./media/analytical-store-private-endpoints/select-private-endpoint.png" alt-text="Select Azure Cosmos DB API for NoSQL to create a private endpoint." border="true":::
1. Fill out the **New managed private endpoint** form with the following details:
The following access restrictions are applicable when data-exfiltration protecti
* **Description** - Provide a friendly description to identify your private endpoint. * **Azure subscription** - Select an Azure Cosmos DB account from the list of available accounts in your Azure subscriptions. * **Azure Cosmos DB account name** - Select an existing Azure Cosmos DB account of type SQL or MongoDB.
- * **Target sub-resouce** - Select one of the following options:
+ * **Target sub-resource** - Select one of the following options:
**Analytical**: If you want to add the private endpoint for Azure Cosmos DB analytical store.
- **Sql** (or **MongoDB**): If you want to add OLTP or transactional account endpoint.
+ **NoSQL** (or **MongoDB**): If you want to add OLTP or transactional account endpoint.
> [!NOTE] > You can add both transactional store and analytical store private endpoints to the same Azure Cosmos DB account in an Azure Synapse Analytics workspace. If you only want to run analytical queries, you may only want to map the analytical private endpoint.
cosmos-db Attachments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/attachments.md
description: This article presents an overview of Azure Cosmos DB Attachments.
-++ Last updated 08/07/2020 # Azure Cosmos DB Attachments Azure Cosmos DB attachments are special items that contain references to an associated metadata with an external blob or media file.
Azure Cosmos DB's managed attachments are distinct from its support for standa
- Managed attachments aren't compatible with Azure Cosmos DB's global distribution, and they aren't replicated across regions. > [!NOTE]
-> Azure Cosmos DB API for MongoDB version 3.2 utilizes managed attachments for GridFS and are subject to the same limitations.
+> Azure Cosmos DB for MongoDB version 3.2 utilizes managed attachments for GridFS and is subject to the same limitations.
>
-> We recommend developers using the MongoDB GridFS feature set to upgrade to Azure Cosmos DB API for MongoDB version 3.6 or higher, which is decoupled from attachments and provides a better experience. Alternatively, developers using the MongoDB GridFS feature set should also consider using Azure Blob Storage - which is purpose-built for storing blob content and offers expanded functionality at lower cost compared to GridFS.
+> We recommend developers using the MongoDB GridFS feature set to upgrade to Azure Cosmos DB for MongoDB version 3.6 or higher, which is decoupled from attachments and provides a better experience. Alternatively, developers using the MongoDB GridFS feature set should also consider using Azure Blob Storage - which is purpose-built for storing blob content and offers expanded functionality at lower cost compared to GridFS.
## Migrating Attachments to Azure Blob Storage
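As a starting point for the Blob Storage side of such a migration, here is a minimal Python sketch using the `azure-storage-blob` library. The connection string, container name, and file name are hypothetical placeholders, and how you export your existing attachment or GridFS content is up to your application.

```python
from azure.storage.blob import BlobServiceClient
from azure.core.exceptions import ResourceExistsError

# Hypothetical storage connection string and names; substitute your own.
blob_service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container_client = blob_service.get_container_client("migrated-attachments")

try:
    container_client.create_container()
except ResourceExistsError:
    pass  # The container already exists.

# Upload previously exported attachment content as a block blob.
with open("attachment-0001.bin", "rb") as data:
    container_client.upload_blob(name="attachment-0001.bin", data=data, overwrite=True)
```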
cosmos-db Audit Control Plane Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/audit-control-plane-logs.md
description: Learn how to audit the control plane operations such as add a regio
+ Last updated 08/13/2021- # How to audit Azure Cosmos DB control plane operations
-Control Plane in Azure Cosmos DB is a RESTful service that enables you to perform a diverse set of operations on the Azure Cosmos account. It exposes a public resource model (for example: database, account) and various operations to the end users to perform actions on the resource model. The control plane operations include changes to the Azure Cosmos account or container. For example, operations such as create an Azure Cosmos account, add a region, update throughput, region failover, add a VNet etc. are some of the control plane operations. This article explains how to audit the control plane operations in Azure Cosmos DB. You can run the control plane operations on Azure Cosmos accounts by using Azure CLI, PowerShell or Azure portal, whereas for containers, use Azure CLI or PowerShell.
+Control Plane in Azure Cosmos DB is a RESTful service that enables you to perform a diverse set of operations on the Azure Cosmos DB account. It exposes a public resource model (for example: database, account) and various operations to the end users to perform actions on the resource model. The control plane operations include changes to the Azure Cosmos DB account or container. For example, operations such as create an Azure Cosmos DB account, add a region, update throughput, region failover, add a VNet etc. are some of the control plane operations. This article explains how to audit the control plane operations in Azure Cosmos DB. You can run the control plane operations on Azure Cosmos DB accounts by using Azure CLI, PowerShell or Azure portal, whereas for containers, use Azure CLI or PowerShell.
The following are some example scenarios where auditing control plane operations is helpful:
-* You want to get an alert when the firewall rules for your Azure Cosmos account are modified. The alert is required to find unauthorized modifications to rules that govern the network security of your Azure Cosmos account and take quick action.
+* You want to get an alert when the firewall rules for your Azure Cosmos DB account are modified. The alert is required to find unauthorized modifications to rules that govern the network security of your Azure Cosmos DB account and take quick action.
-* You want to get an alert if a new region is added or removed from your Azure Cosmos account. Adding or removing regions has implications on billing and data sovereignty requirements. This alert will help you detect an accidental addition or removal of region on your account.
+* You want to get an alert if a new region is added or removed from your Azure Cosmos DB account. Adding or removing regions has implications on billing and data sovereignty requirements. This alert will help you detect an accidental addition or removal of region on your account.
* You want to get more details from the diagnostic logs on what has changed. For example, a VNet was changed. ## Disable key based metadata write access
-Before you audit the control plane operations in Azure Cosmos DB, disable the key-based metadata write access on your account. When key based metadata write access is disabled, clients connecting to the Azure Cosmos account through account keys are prevented from accessing the account. You can disable write access by setting the `disableKeyBasedMetadataWriteAccess` property to true. After you set this property, changes to any resource can happen from a user with the proper Azure role and credentials. To learn more on how to set this property, see the [Preventing changes from SDKs](role-based-access-control.md#prevent-sdk-changes) article.
+Before you audit the control plane operations in Azure Cosmos DB, disable the key-based metadata write access on your account. When key based metadata write access is disabled, clients connecting to the Azure Cosmos DB account through account keys are prevented from accessing the account. You can disable write access by setting the `disableKeyBasedMetadataWriteAccess` property to true. After you set this property, changes to any resource can happen from a user with the proper Azure role and credentials. To learn more on how to set this property, see the [Preventing changes from SDKs](role-based-access-control.md#prevent-sdk-changes) article.
-After the `disableKeyBasedMetadataWriteAccess` is turned on, if the SDK based clients run create or update operations, an error *"Operation 'POST' on resource 'ContainerNameorDatabaseName' is not allowed through Azure Cosmos DB endpoint* is returned. You have to turn on access to such operations for your account, or perform the create/update operations through Azure Resource Manager, Azure CLI or Azure PowerShell. To switch back, set the disableKeyBasedMetadataWriteAccess to **false** by using Azure CLI as described in the [Preventing changes from Cosmos SDK](role-based-access-control.md#prevent-sdk-changes) article. Make sure to change the value of `disableKeyBasedMetadataWriteAccess` to false instead of true.
+After `disableKeyBasedMetadataWriteAccess` is turned on, if SDK-based clients run create or update operations, an error *"Operation 'POST' on resource 'ContainerNameorDatabaseName' is not allowed through Azure Cosmos DB endpoint"* is returned. You have to turn on access to such operations for your account, or perform the create/update operations through Azure Resource Manager, Azure CLI, or Azure PowerShell. To switch back, set `disableKeyBasedMetadataWriteAccess` to **false** by using Azure CLI as described in the [Preventing changes from Azure Cosmos DB SDK](role-based-access-control.md#prevent-sdk-changes) article. Make sure to change the value of `disableKeyBasedMetadataWriteAccess` to false instead of true.
Consider the following points when turning off the metadata write access:
You can enable diagnostic logs for control plane operations by using the Azure p
Use the following steps to enable logging on control plane operations:
-1. Sign into [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos account.
+1. Sign into [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos DB account.
1. Open the **Diagnostic settings** pane, provide a **Name** for the logs to create.
After you turn on logging, use the following steps to track down operations for
| where TimeGenerated >= ago(1h) ```
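If you prefer to run such queries outside the portal, the following is a minimal Python sketch using the `azure-monitor-query` library. The Log Analytics workspace ID is a placeholder, and the query assumes the `ControlPlaneRequests` category is enabled in your diagnostic settings.

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Hypothetical Log Analytics workspace ID; substitute your own.
workspace_id = "<your-workspace-id>"

query = """
AzureDiagnostics
| where Category == "ControlPlaneRequests"
| where TimeGenerated >= ago(1h)
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(workspace_id, query, timespan=timedelta(hours=1))

# Print each returned row of control plane log data.
for table in response.tables:
    for row in table.rows:
        print(row)
```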
- The following screenshots capture logs when a consistency level is changed for an Azure Cosmos account. The `activityId_g` value from results is different from the activity ID of an operation:
+ The following screenshots capture logs when a consistency level is changed for an Azure Cosmos DB account. The `activityId_g` value from results is different from the activity ID of an operation:
:::image type="content" source="./media/audit-control-plane-logs/add-ip-filter-logs.png" alt-text="Control plane logs when a VNet is added":::
If you want to debug further, you can identify a specific operation in the **Act
:::image type="content" source="./media/audit-control-plane-logs/find-operations-with-activity-id.png" alt-text="Use the activity ID and find the operations":::
-## Control plane operations for Azure Cosmos account
+## Control plane operations for Azure Cosmos DB account
The following are the control plane operations available at the account level. Most of the operations are tracked at account level. These operations are available as metrics in Azure monitor:
AzureDiagnostics
## Next steps * [Prevent Azure Cosmos DB resources from being deleted or changed](resource-locks.md)
-* [Explore Azure Monitor for Azure Cosmos DB](cosmosdb-insights-overview.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json)
+* [Explore Azure Monitor for Azure Cosmos DB](insights-overview.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json)
* [Monitor and debug with metrics in Azure Cosmos DB](use-metrics.md)
cosmos-db Audit Restore Continuous https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/audit-restore-continuous.md
Title: Auditing the point in time restore action for continuous backup mode in A
description: This article provides details available to audit Azure Cosmos DB's point in time restore feature in continuous backup mode. + Last updated 04/18/2022
# Audit the point in time restore action for continuous backup mode in Azure Cosmos DB
-Azure Cosmos DB provides you the list of all the point in time restores for continuous mode that were performed on a Cosmos DB account using [Activity Logs](../azure-monitor/essentials/activity-log.md). Activity logs can be viewed for any Cosmos DB account from the **Activity Logs** page in the Azure portal. The Activity Log shows all the operations that were triggered on the specific account. When a point in time restore is triggered, it shows up as `Restore Database Account` operation on the source account as well as the target account. The Activity Log for the source account can be used to audit restore events, and the activity logs on the target account can be used to get the updates about the progress of the restore.
+Azure Cosmos DB provides you the list of all the point in time restores for continuous mode that were performed on an Azure Cosmos DB account using [Activity Logs](../azure-monitor/essentials/activity-log.md). Activity logs can be viewed for any Azure Cosmos DB account from the **Activity Logs** page in the Azure portal. The Activity Log shows all the operations that were triggered on the specific account. When a point in time restore is triggered, it shows up as `Restore Database Account` operation on the source account as well as the target account. The Activity Log for the source account can be used to audit restore events, and the activity logs on the target account can be used to get the updates about the progress of the restore.
## Audit the restores that were triggered on a live database account
The account status would be *Creating*, but it would have an Activity Log page.
* Provision an account with continuous backup by using the [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), the [Azure CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template). * [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode. * Learn about the [resource model of continuous backup mode](continuous-backup-restore-resource-model.md).
- * Explore the [Frequently asked questions for continuous mode](continuous-backup-restore-frequently-asked-questions.yml).
+ * Explore the [Frequently asked questions for continuous mode](continuous-backup-restore-frequently-asked-questions.yml).
cosmos-db Automated Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/automated-recommendations.md
description: Learn how to view customized performance, cost, security, and other
+ Last updated 08/26/2021 - # Automated recommendations for Azure Cosmos DB All the cloud services, including Azure Cosmos DB, get frequent updates with new features, capabilities, and improvements. It's important for your application to keep up with the latest performance and security updates. The Azure portal offers customized recommendations that enable you to maximize the performance of your application. Azure Cosmos DB's advisory engine continuously analyzes the usage history of your Azure Cosmos DB resources and provides recommendations based on your workload patterns. These recommendations correspond to areas like partitioning, indexing, network, and security. These customized recommendations help you to improve the performance of your application.
All the cloud services including Azure Cosmos DB get frequent updates with new f
You can view recommendations for Azure Cosmos DB in the following ways: -- One way to view the recommendations is within the notifications tab. If there are new recommendations, you will see a message bar. Sign into your [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos account. Within your Azure Cosmos account, open the **Notifications** pane and then select the **Recommendations** tab. You can select the message and view recommendations.
+- One way to view the recommendations is within the notifications tab. If there are new recommendations, you will see a message bar. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos DB account. Within your Azure Cosmos DB account, open the **Notifications** pane and then select the **Recommendations** tab. You can select the message and view recommendations.
:::image type="content" source="./media/automated-recommendations/cosmos-db-pane-recommendations.png" alt-text="View recommendations from Azure Cosmos DB pane":::
In this category, the advisor detects the query execution and identifies that th
## Next steps
-* [Tuning query performance in Azure Cosmos DB](sql-api-query-metrics.md)
+* [Tuning query performance in Azure Cosmos DB](nosql/query-metrics.md)
* [Troubleshoot query issues](troubleshoot-query-performance.md) when using Azure Cosmos DB * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
cosmos-db Bulk Executor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/bulk-executor-overview.md
Title: Azure Cosmos DB bulk executor library overview
description: Perform bulk operations in Azure Cosmos DB through bulk import and bulk update APIs offered by the bulk executor library. + Last updated 05/28/2019
# Azure Cosmos DB bulk executor library overview Azure Cosmos DB is a fast, flexible, and globally distributed database service that is designed to elastically scale out to support:
Azure Cosmos DB is a fast, flexible, and globally distributed database service t
The bulk executor library helps you leverage this massive throughput and storage. It allows you to perform bulk operations in Azure Cosmos DB through bulk import and bulk update APIs. You can read more about the features of the bulk executor library in the following sections. > [!NOTE]
-> Currently, bulk executor library supports import and update operations and this library is supported by Azure Cosmos DB SQL API and Gremlin API accounts only.
+> Currently, bulk executor library supports import and update operations and this library is supported by Azure Cosmos DB API for NoSQL and Gremlin accounts only.
> [!IMPORTANT] > The bulk executor library is not currently supported on [serverless](serverless.md) accounts. On .NET, it is recommended to use the [bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) available in the V3 version of the SDK.
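As a rough, hedged sketch (not the official sample; the endpoint, key, database, container, and item shape are placeholders), the V3 .NET SDK bulk support mentioned above boils down to enabling `AllowBulkExecution` and issuing many point operations concurrently:

```C#
// Minimal sketch of .NET V3 SDK bulk support; all names and values are placeholders.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient(
    "https://<account>.documents.azure.com:443/",
    "<account-key>",
    new CosmosClientOptions { AllowBulkExecution = true });

Container container = client.GetContainer("<database>", "<container>");

var tasks = new List<Task>();
for (int i = 0; i < 1000; i++)
{
    var item = new { id = Guid.NewGuid().ToString(), pk = $"partition-{i % 10}" };
    tasks.Add(container.CreateItemAsync(item, new PartitionKey(item.pk)));
}

// The SDK transparently groups these concurrent operations into bulk batches.
await Task.WhenAll(tasks);
```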
The bulk executor library helps you leverage this massive throughput and storage
* It can bulk import more than a terabyte of data within an hour by using a scale-out architecture.
-* It can bulk update existing data in Azure Cosmos containers as patches.
+* It can bulk update existing data in Azure Cosmos DB containers as patches.
## How does the bulk executor operate?
The bulk executor library makes sure to maximally utilize the throughput allocat
## Next Steps
-* Learn more by trying out the sample applications consuming the bulk executor library in [.NET](bulk-executor-dot-net.md) and [Java](bulk-executor-java.md).
-* Check out the bulk executor SDK information and release notes in [.NET](sql-api-sdk-bulk-executor-dot-net.md) and [Java](sql/sql-api-sdk-bulk-executor-java.md).
-* The bulk executor library is integrated into the Cosmos DB Spark connector, to learn more, see [Azure Cosmos DB Spark connector](./create-sql-api-spark.md) article.
-* The bulk executor library is also integrated into a new version of [Azure Cosmos DB connector](../data-factory/connector-azure-cosmos-db.md) for Azure Data Factory to copy data.
+* Learn more by trying out the sample applications consuming the bulk executor library in [.NET](nosql/bulk-executor-dotnet.md) and [Java](bulk-executor-java.md).
+* Check out the bulk executor SDK information and release notes in [.NET](nosql/sdk-dotnet-bulk-executor-v2.md) and [Java](nosql/sdk-java-bulk-executor-v2.md).
+* The bulk executor library is integrated into the Azure Cosmos DB Spark connector, to learn more, see [Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md) article.
+* The bulk executor library is also integrated into a new version of [Azure Cosmos DB connector](../data-factory/connector-azure-cosmos-db.md) for Azure Data Factory to copy data.
cosmos-db Burst Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/burst-capacity.md
description: Learn more about burst capacity in Azure Cosmos DB
-+ Last updated 05/09/2022 # Burst capacity in Azure Cosmos DB (preview) Azure Cosmos DB burst capacity (preview) allows you to take advantage of your database or container's idle throughput capacity to handle spikes of traffic. With burst capacity, each physical partition can accumulate up to 5 minutes of idle capacity, which can be consumed at a rate up to 3000 RU/s. With burst capacity, requests that would have otherwise been rate limited can now be served with burst capacity while it's available.
To check whether an Azure Cosmos DB account is eligible for the preview, you can
## Limitations ### Preview eligibility criteria
-To enroll in the preview, your Cosmos account must meet all the following criteria:
- - Your Cosmos account is using provisioned throughput (manual or autoscale). Burst capacity doesn't apply to serverless accounts.
- - If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When burst capacity is enabled on your account, all requests sent from non .NET SDKs, or older .NET SDK versions won't be accepted.
- - There are no SDK or driver requirements to use the feature with Cassandra API, Gremlin API, or API for MongoDB.
- - Your Cosmos account isn't using any unsupported connectors
+To enroll in the preview, your Azure Cosmos DB account must meet all the following criteria:
+ - Your Azure Cosmos DB account is using provisioned throughput (manual or autoscale). Burst capacity doesn't apply to serverless accounts.
+ - If you're using API for NoSQL, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When burst capacity is enabled on your account, all requests sent from non-.NET SDKs or older .NET SDK versions won't be accepted.
+ - There are no SDK or driver requirements to use the feature with API for Cassandra, Gremlin, or MongoDB.
+ - Your Azure Cosmos DB account isn't using any unsupported connectors
- Azure Data Factory - Azure Stream Analytics - Logic Apps
To enroll in the preview, your Cosmos account must meet all the following criter
- Azure Cosmos DB data migration tool - Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
-### SDK requirements (SQL and Table API only)
-#### SQL API
-For SQL API accounts, burst capacity is supported only in the latest version of the .NET v3 SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. There are no driver or SDK requirements to use burst capacity with Gremlin API, Cassandra API, or API for MongoDB.
+### SDK requirements (API for NoSQL and Table only)
+#### API for NoSQL
+For API for NoSQL accounts, burst capacity is supported only in the latest version of the .NET v3 SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. There are no driver or SDK requirements to use burst capacity with API for Gremlin, Cassandra, or MongoDB.
Find the latest version of the supported SDK:
Find the latest version of the supported SDK:
| | | | | **.NET SDK v3** | *>= 3.27.0* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/> |
-Support for other SQL API SDKs is planned for the future.
+Support for other API for NoSQL SDKs is planned for the future.
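As a hedged sketch, meeting this requirement is mostly a packaging concern: reference `Microsoft.Azure.Cosmos` 3.27.0 or later and create the client as usual. The endpoint and key below are placeholders, and no burst-specific client setting is involved:

```C#
// Sketch only: the preview requires the .NET V3 SDK, version 3.27.0 or higher.
// For example: dotnet add package Microsoft.Azure.Cosmos --version 3.27.0
using Microsoft.Azure.Cosmos;

// Burst capacity is enabled at the account level; a supported SDK version is all
// the client needs. The endpoint and key below are placeholders.
CosmosClient client = new CosmosClient(
    "https://<account>.documents.azure.com:443/",
    "<account-key>");
```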
> [!TIP] > You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using the legacy .NET V2 SDK, follow the [.NET SDK v3 migration guide](sql/migrate-dotnet-v3.md). #### Table API
-For Table API accounts, burst capacity is supported only when using the latest version of the Tables SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. The legacy SDK with namespace `Microsoft.Azure.CosmosDB.Table` isn't supported. Follow the [migration guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/tables/Azure.Data.Tables/MigrationGuide.md) to upgrade to the latest SDK.
+For API for Table accounts, burst capacity is supported only when using the latest version of the Tables SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. The legacy SDK with namespace `Microsoft.Azure.CosmosDB.Table` isn't supported. Follow the [migration guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/tables/Azure.Data.Tables/MigrationGuide.md) to upgrade to the latest SDK.
| SDK | Supported versions | Package manager link | | | | |
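For reference, here is a minimal hedged sketch of connecting with the supported `Azure.Data.Tables` SDK instead of the legacy `Microsoft.Azure.CosmosDB.Table` namespace; the endpoint, account name, key, and table name are placeholders:

```C#
// Sketch only: use the current Azure.Data.Tables SDK rather than the legacy
// Microsoft.Azure.CosmosDB.Table namespace. All names below are placeholders.
using System;
using Azure.Data.Tables;

var serviceClient = new TableServiceClient(
    new Uri("https://<account>.table.cosmos.azure.com:443/"),
    new TableSharedKeyCredential("<account-name>", "<account-key>"));

TableClient table = serviceClient.GetTableClient("<table-name>");
await table.CreateIfNotExistsAsync();
```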
cosmos-db Access Data Spring Data App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/access-data-spring-data-app.md
Title: How to use Spring Data Apache Cassandra API with Azure Cosmos DB
-description: Learn how to use Spring Data Apache Cassandra API with Azure Cosmos DB.
+ Title: How to use Spring Data API for Apache Cassandra with Azure Cosmos DB for Apache Cassandra
+description: Learn how to use Spring Data API for Apache Cassandra with Azure Cosmos DB for Apache Cassandra.
-+ ms.devlang: java+ Last updated 07/17/2021
-# How to use Spring Data Apache Cassandra API with Azure Cosmos DB
+# How to use Spring Data API for Apache Cassandra with Azure Cosmos DB for Apache Cassandra
-This article demonstrates creating a sample application that uses [Spring Data] to store and retrieve information using the [Azure Cosmos DB Cassandra API](/azure/cosmos-db/cassandra-introduction).
+This article demonstrates creating a sample application that uses [Spring Data] to store and retrieve information using the [Azure Cosmos DB for Apache Cassandra](/azure/cosmos-db/cassandra-introduction).
## Prerequisites
The following prerequisites are required in order to complete the steps in this
* A [Git](https://git-scm.com/downloads) client. > [!NOTE]
-> The samples mentioned below implement custom extensions for a better experience when using Azure Cosmos DB Cassandra API. They include custom retry and load balancing policies, as well as implementing recommended connection settings. For a more extensive exploration of how the custom policies are used, see Java samples for [version 3](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample) and [version 4](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4).
+> The samples mentioned below implement custom extensions for a better experience when using Azure Cosmos DB for Apache Cassandra. They include custom retry and load balancing policies, as well as implementing recommended connection settings. For a more extensive exploration of how the custom policies are used, see Java samples for [version 3](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample) and [version 4](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4).
-## Create a Cosmos DB Cassandra API account
+## Create an Azure Cosmos DB for Apache Cassandra account
[!INCLUDE [cosmos-db-create-dbaccount-cassandra](../includes/cosmos-db-create-dbaccount-cassandra.md)]
cosmos-db Adoption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/adoption.md
+
+ Title: How to adapt to Azure Cosmos DB for Apache Cassandra from Apache Cassandra
+description: Learn best practices and ways to successfully use the Azure Cosmos DB for Apache Cassandra with Apache Cassandra applications.
+++++ Last updated : 03/24/2022++++
+# How to adapt to Azure Cosmos DB for Apache Cassandra if you are coming from Apache Cassandra
++
+The Azure Cosmos DB for Apache Cassandra provides wire protocol compatibility with existing Cassandra SDKs and tools. You can run applications that are designed to connect to Apache Cassandra by using the API for Cassandra with minimal changes.
+
+When you use the API for Cassandra, it's important to be aware of differences between Apache Cassandra and Azure Cosmos DB. If you're familiar with native [Apache Cassandra](https://cassandra.apache.org/), this article can help you begin to use the Azure Cosmos DB for Apache Cassandra.
+
+## Feature support
+
+The API for Cassandra supports a large number of Apache Cassandra features. Some features aren't supported or they have limitations. Before you migrate, be sure that the [Azure Cosmos DB for Apache Cassandra features](support.md) you need are supported.
+
+## Replication
+
+When you plan for replication, it's important to look at both migration and consistency.
+
+Although you can communicate with the API for Cassandra through the Cassandra Query Language (CQL) binary protocol v4 wire protocol, Azure Cosmos DB implements its own internal replication protocol. You can't use the Cassandra gossip protocol for live migration or replication. For more information, see [Live-migrate from Apache Cassandra to the API for Cassandra by using dual writes](migrate-data-dual-write-proxy.md).
+
+For information about offline migration, see [Migrate data from Cassandra to an Azure Cosmos DB for Apache Cassandra account by using Azure Databricks](migrate-data-databricks.md).
+
+Although the approaches to replication consistency in Apache Cassandra and Azure Cosmos DB are similar, it's important to understand how they're different. A [mapping document](consistency-mapping.md) compares Apache Cassandra and Azure Cosmos DB approaches to replication consistency. However, we highly recommend that you specifically review [Azure Cosmos DB consistency settings](../consistency-levels.md) or watch a brief [video guide to understanding consistency settings in the Azure Cosmos DB platform](https://aka.ms/docs.consistency-levels).
+
+## Recommended client configurations
+
+When you use the API for Cassandra, you don't need to make substantial code changes to existing applications that run Apache Cassandra. We recommend some approaches and configuration settings for the API for Cassandra in Azure Cosmos DB. Review the blog post [API for Cassandra recommendations for Java](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java/).
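+
+The recommendations in that post are Java-specific, but the same themes (TLS, credentials, port 10350, and sensible timeouts and retries) apply across drivers. As a hedged illustration only, here is roughly what a connection looks like with the DataStax C# driver; the host, credentials, and tuning values are placeholders rather than prescriptive settings:
+
+```C#
+// Illustrative sketch using the DataStax C# driver; all names and values are placeholders.
+using Cassandra;
+
+var cluster = Cluster.Builder()
+    .AddContactPoint("<account>.cassandra.cosmos.azure.com")   // placeholder contact point
+    .WithPort(10350)                                            // API for Cassandra port
+    .WithCredentials("<account-name>", "<account-key>")
+    .WithSSL()                                                  // TLS is required
+    .WithQueryOptions(new QueryOptions().SetConsistencyLevel(ConsistencyLevel.LocalQuorum))
+    .Build();
+
+ISession session = cluster.Connect("<keyspace>");
+```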
+
+## Code samples
+
+The API for Cassandra is designed to work with your existing application code. If you encounter connectivity-related errors, use the [quickstart samples](manage-data-java-v4-sdk.md) as a starting point to discover minor setup changes you might need to make in your existing code.
+
+We also have more in-depth samples for [Java v3](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample) and [Java v4](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4) drivers. These code samples implement custom [extensions](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.0), which in turn implement recommended client configurations.
+
+You also can use samples for [Java Spring Boot (v3 driver)](https://github.com/Azure-Samples/spring-data-cassandra-on-azure-extension-v3) and [Java Spring Boot (v4 driver)](https://github.com/Azure-Samples/spring-data-cassandra-on-azure-extension-v4.git).
+
+## Storage
+
+The API for Cassandra is backed by Azure Cosmos DB, which is a document-oriented NoSQL database engine. Azure Cosmos DB maintains metadata, which might result in a change in the amount of physical storage required for a specific workload.
+
+The difference in storage requirements between native Apache Cassandra and Azure Cosmos DB is most noticeable in small row sizes. In some cases, the difference might be offset because Azure Cosmos DB doesn't implement compaction or tombstones. This factor depends significantly on the workload. If you're uncertain about storage requirements, we recommend that you first create a proof of concept.
+
+## Multi-region deployments
+
+Native Apache Cassandra is a multi-master system by default. Apache Cassandra doesn't have an option for single-master with multi-region replication for reads only. The concept of application-level failover to another region for writes is redundant in Apache Cassandra. All nodes are independent, and there's no single point of failure. However, Azure Cosmos DB provides the out-of-box ability to configure either single-master or multi-master regions for writes.
+
+An advantage of having a single-master region for writes is avoiding cross-region conflict scenarios. It gives you the option to maintain strong consistency across multiple regions while maintaining a level of high availability.
+
+> [!NOTE]
+> Strong consistency across regions and a Recovery Point Objective (RPO) of zero isn't possible for native Apache Cassandra because all nodes are capable of serving writes. You can configure Azure Cosmos DB for strong consistency across regions in a *single write region* configuration. However, like with native Apache Cassandra, you can't configure an Azure Cosmos DB account that's configured with multiple write regions for strong consistency. A distributed system can't provide an RPO of zero *and* a Recovery Time Objective (RTO) of zero.
+
+For more information, see [Load balancing policy](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java/#load-balancing-policy) in our [API for Cassandra recommendations for Java blog](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java). Also, see [Failover scenarios](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4#failover-scenarios) in our official [code sample for the Cassandra Java v4 driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4).
+
+## Request units
+
+One of the major differences between running a native Apache Cassandra cluster and provisioning an Azure Cosmos DB account is how database capacity is provisioned. In traditional databases, capacity is expressed in terms of CPU cores, RAM, and IOPS. Azure Cosmos DB is a multi-tenant platform-as-a-service database. Capacity is expressed by using a single normalized metric called [request units](../request-units.md). Every request sent to the database has a request unit cost (RU cost). Each request can be profiled to determine its cost.
+
+The benefit of using request units as a metric is that database capacity can be provisioned deterministically for highly predictable performance and efficiency. After you profile the cost of each request, you can use request units to directly associate the number of requests sent to the database with the capacity you need to provision. The challenge with this way of provisioning capacity is that you need to have a solid understanding of the throughput characteristics of your workload.
+
+We highly recommend that you profile your requests. Use that information to help you estimate the number of request units you'll need to provision. Here are some articles that might help you make the estimate:
+
+- [Request units in Azure Cosmos DB](../request-units.md)
+- [Find the request unit charge for operations executed in the Azure Cosmos DB for Apache Cassandra](find-request-unit-charge.md)
+- [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)
+
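+As a hedged illustration of request profiling, the API for Cassandra surfaces the request charge in the response's custom payload; the authoritative per-driver examples are in the "Find the request unit charge" article above. The following rough C# sketch assumes an existing `ISession` and that the charge arrives under the `RequestCharge` payload key as a big-endian double:
+
+```C#
+// Rough sketch: read the RU charge of a query from the incoming custom payload.
+// The payload key and encoding are assumptions; treat the linked article as authoritative.
+using System;
+using System.Linq;
+using Cassandra;
+
+RowSet rows = session.Execute("SELECT * FROM <keyspace>.<table> LIMIT 10");
+
+if (rows.Info.IncomingPayload != null &&
+    rows.Info.IncomingPayload.TryGetValue("RequestCharge", out byte[] bytes))
+{
+    // The value is a big-endian double, so reverse the bytes on little-endian hosts.
+    double requestCharge = BitConverter.ToDouble(bytes.Reverse().ToArray(), 0);
+    Console.WriteLine($"Request charge: {requestCharge} RUs");
+}
+```
+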
+## Capacity provisioning models
+
+In traditional database provisioning, a fixed capacity is provisioned up front to handle the anticipated throughput. Azure Cosmos DB offers a capacity-based model called [provisioned throughput](../set-throughput.md). As a multi-tenant service, Azure Cosmos DB also offers *consumption-based* models in [autoscale](../provision-throughput-autoscale.md) mode and [serverless](../serverless.md) mode. The extent to which a workload might benefit from either of these consumption-based provisioning models depends on the predictability of throughput for the workload.
+
+In general, steady-state workloads that have predictable throughput benefit most from provisioned throughput. Workloads that have large periods of dormancy benefit from serverless mode. Workloads that have a continuous level of minimal throughput, but with unpredictable spikes, benefit most from autoscale mode. We recommend that you review the following articles for a clear understanding of the best capacity model for your throughput needs:
+
+- [Introduction to provisioned throughput in Azure Cosmos DB](../set-throughput.md)
+- [Create Azure Cosmos DB containers and databases with autoscale throughput](../provision-throughput-autoscale.md)
+- [Azure Cosmos DB serverless](../serverless.md)
+
+## Partitioning
+
+Partitioning in Azure Cosmos DB is similar to partitioning in Apache Cassandra. One of the main differences is that Azure Cosmos DB is more optimized for *horizontal scale*. In Azure Cosmos DB, limits are placed on the amount of *vertical throughput* capacity that's available in any physical partition. The effect of this optimization is most noticeable when an existing data model has significant throughput skew.
+
+Take steps to ensure that your partition key design results in a relatively uniform distribution of requests. For more information about how logical and physical partitioning work and limits on throughput capacity per partition, see [Partitioning in the Azure Cosmos DB for Apache Cassandra](partitioning.md).
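+
+For example (a hedged sketch; the keyspace, table, and columns are hypothetical), a composite partition key that spreads writes across many values tends to distribute requests more uniformly than a single hot key:
+
+```C#
+// Hypothetical example: partitioning orders by (store_id, order_date) spreads load
+// across many logical partitions instead of concentrating it on a single value.
+session.Execute(@"
+    CREATE TABLE IF NOT EXISTS shop.orders (
+        store_id   int,
+        order_date date,
+        order_id   uuid,
+        total      decimal,
+        PRIMARY KEY ((store_id, order_date), order_id)
+    )");
+```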
+
+## Scaling
+
+In native Apache Cassandra, increasing capacity and scale involves adding new nodes to a cluster and ensuring that the nodes are properly added to the Cassandra ring. In Azure Cosmos DB, adding nodes is transparent and automatic. Scaling is a function of how many [request units](../request-units.md) are provisioned for your keyspace or table. Scaling in physical machines occurs when either physical storage or required throughput reaches limits allowed for a logical or a physical partition. For more information, see [Partitioning in the Azure Cosmos DB for Apache Cassandra](partitioning.md).
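+
+As a hedged example, throughput for a table can be adjusted with the `cosmosdb_provisioned_throughput` option documented for the API for Cassandra; the keyspace, table, and RU/s value below are illustrative:
+
+```C#
+// Illustrative sketch: scale a table's provisioned throughput (RU/s) from application code.
+// The table name and throughput value are placeholders.
+session.Execute("ALTER TABLE shop.orders WITH cosmosdb_provisioned_throughput = 10000");
+```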
+
+## Rate limiting
+
+A challenge of provisioning [request units](../request-units.md), particularly if you're using [provisioned throughput](../set-throughput.md), is rate limiting. Azure Cosmos DB returns rate-limited errors if clients consume more resources than the amount you provisioned. The API for Cassandra in Azure Cosmos DB translates these exceptions to overloaded errors on the Cassandra native protocol. For information about how to avoid rate limiting in your application, see [Prevent rate-limiting errors for Azure Cosmos DB for Apache Cassandra operations](prevent-rate-limiting-errors.md).
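+
+As a rough sketch only (the `OverloadedException` type and the backoff values are assumptions for illustration; prefer the approach in the linked article and the driver extensions mentioned earlier), an application-level retry on overloaded errors might look like this:
+
+```C#
+// Rough sketch: retry a statement when the service reports it as overloaded (rate limited).
+using System;
+using System.Threading.Tasks;
+using Cassandra;
+
+async Task<RowSet> ExecuteWithRetryAsync(ISession session, IStatement statement, int maxRetries = 5)
+{
+    for (int attempt = 0; ; attempt++)
+    {
+        try
+        {
+            return await session.ExecuteAsync(statement);
+        }
+        catch (OverloadedException) when (attempt < maxRetries)
+        {
+            // Back off before retrying; in production, honor any server-provided retry hints.
+            await Task.Delay(TimeSpan.FromMilliseconds(500 * (attempt + 1)));
+        }
+    }
+}
+```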
+
+## Apache Spark connector
+
+Many Apache Cassandra users use the Apache Spark Cassandra connector to query their data for analytical and data movement needs. You can connect to the API for Cassandra the same way and by using the same connector. Before you connect to the API for Cassandra, review [Connect to the Azure Cosmos DB for Apache Cassandra from Spark](connect-spark-configuration.md). In particular, see the section [Optimize Spark connector throughput configuration](connect-spark-configuration.md#optimizing-spark-connector-throughput-configuration).
+
+## Troubleshoot common issues
+
+For solutions to common issues, see [Troubleshoot common issues in the Azure Cosmos DB for Apache Cassandra](troubleshoot-common-issues.md).
+
+## Next steps
+
+- Learn about [partitioning and horizontal scaling in Azure Cosmos DB](../partitioning-overview.md).
+- Learn about [provisioned throughput in Azure Cosmos DB](../request-units.md).
+- Learn about [global distribution in Azure Cosmos DB](../distribute-data-globally.md).
cosmos-db Apache Cassandra Consistency Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/apache-cassandra-consistency-mapping.md
- Title: Apache Cassandra and Azure Cosmos DB consistency levels
-description: Apache Cassandra and Azure Cosmos DB consistency levels.
----- Previously updated : 03/24/2022---
-# Apache Cassandra and Azure Cosmos DB Cassandra API consistency levels
-
-Unlike Azure Cosmos DB, Apache Cassandra does not natively provide precisely defined consistency guarantees. Instead, Apache Cassandra provides a write consistency level and a read consistency level, to enable the high availability, consistency, and latency tradeoffs. When using Azure Cosmos DB's Cassandra API:
-
-* The write consistency level of Apache Cassandra is mapped to the default consistency level configured on your Azure Cosmos account. Consistency for a write operation (CL) can't be changed on a per-request basis.
-
-* Azure Cosmos DB will dynamically map the read consistency level specified by the Cassandra client driver to one of the Azure Cosmos DB consistency levels configured dynamically on a read request.
-
-## Multi-region writes vs single-region writes
-
-Apache Cassandra database is a multi-master system by default, and does not provide an out-of-box option for single-region writes with multi-region replication for reads. However, Azure Cosmos DB provides turnkey ability to have either single region, or [multi-region](../how-to-multi-master.md) write configurations. One of the advantages of being able to choose a single region write configuration across multiple regions is the avoidance of cross-region conflict scenarios, and the option of maintaining strong consistency across multiple regions.
-
-With single-region writes, you can maintain strong consistency, while still maintaining a level of high availability across regions with [service-managed failover](../high-availability.md#region-outages). In this configuration, you can still exploit data locality to reduce read latency by downgrading to eventual consistency on a per request basis. In addition to these capabilities, the Azure Cosmos DB platform also provides the ability to enable [zone redundancy](/azure/architecture/reliability/architect) when selecting a region. Thus, unlike native Apache Cassandra, Azure Cosmos DB allows you to navigate the CAP Theorem [trade-off spectrum](../consistency-levels.md#rto) with more granularity.
-
-## Mapping consistency levels
-
-The Azure Cosmos DB platform provides a set of five well-defined, business use-case oriented consistency settings with respect to replication and the tradeoffs defined by the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem) and [PACELC theorem](https://en.wikipedia.org/wiki/PACELC_theorem). As this approach differs significantly from Apache Cassandra, we recommend that you take time to review and understand [Azure Cosmos DB consistency](../consistency-levels.md), or watch this short [video guide to understanding consistency settings](https://aka.ms/docs.consistency-levels) in the Azure Cosmos DB platform.
-
-The following table illustrates the possible mappings between Apache Cassandra and Azure Cosmos DB consistency levels when using Cassandra API. This shows configurations for single region, multi-region reads with single-region writes, and multi-region writes.
-
-> [!NOTE]
-> These are not exact mappings. Rather, we have provided the closest analogues to Apache Cassandra, and disambiguated any qualitative differences in the rightmost column. As mentioned above, we recommend reviewing Azure Cosmos DB's [consistency settings](../consistency-levels.md).
---
-If your Azure Cosmos account is configured with a consistency level other than strong consistency, you can find out the probability that your clients may get strong and consistent reads for your workloads by looking at the *Probabilistically Bounded Staleness* (PBS) metric. This metric is exposed in the Azure portal. To learn more, see [Monitor Probabilistically Bounded Staleness (PBS) metric](../how-to-manage-consistency.md#monitor-probabilistically-bounded-staleness-pbs-metric).
-
-Probabilistic bounded staleness shows how eventual your eventual consistency is. This metric provides insight into how often you can get stronger consistency than the consistency level that you have currently configured on your Azure Cosmos account. In other words, you can see the probability (measured in milliseconds) of getting strongly consistent reads for a combination of write and read regions.
-
-## Next steps
-
-Learn more about global distribution and consistency levels for Azure Cosmos DB:
-
-* [Global distribution overview](../distribute-data-globally.md)
-* [Consistency Level overview](../consistency-levels.md)
-* [High availability](../high-availability.md)
cosmos-db Cassandra Adoption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-adoption.md
- Title: How to adapt to the Cassandra API from Apache Cassandra
-description: Learn best practices and ways to successfully use the Azure Cosmos DB Cassandra API with Apache Cassandra applications.
----- Previously updated : 03/24/2022----
-# How to adapt to the Cassandra API if you are coming from Apache Cassandra
--
-The Azure Cosmos DB Cassandra API provides wire protocol compatibility with existing Cassandra SDKs and tools. You can run applications that are designed to connect to Apache Cassandra by using the Cassandra API with minimal changes.
-
-When you use the Cassandra API, it's important to be aware of differences between Apache Cassandra and Azure Cosmos DB. If you're familiar with native [Apache Cassandra](https://cassandra.apache.org/), this article can help you begin to use the Azure Cosmos DB Cassandra API.
-
-## Feature support
-
-The Cassandra API supports a large number of Apache Cassandra features. Some features aren't supported or they have limitations. Before you migrate, be sure that the [Azure Cosmos DB Cassandra API features](cassandra-support.md) you need are supported.
-
-## Replication
-
-When you plan for replication, it's important to look at both migration and consistency.
-
-Although you can communicate with the Cassandra API through the Cassandra Query Language (CQL) binary protocol v4 wire protocol, Azure Cosmos DB implements its own internal replication protocol. You can't use the Cassandra gossip protocol for live migration or replication. For more information, see [Live-migrate from Apache Cassandra to the Cassandra API by using dual writes](migrate-data-dual-write-proxy.md).
-
-For information about offline migration, see [Migrate data from Cassandra to an Azure Cosmos DB Cassandra API account by using Azure Databricks](migrate-data-databricks.md).
-
-Although the approaches to replication consistency in Apache Cassandra and Azure Cosmos DB are similar, it's important to understand how they're different. A [mapping document](apache-cassandra-consistency-mapping.md) compares Apache Cassandra and Azure Cosmos DB approaches to replication consistency. However, we highly recommend that you specifically review [Azure Cosmos DB consistency settings](../consistency-levels.md) or watch a brief [video guide to understanding consistency settings in the Azure Cosmos DB platform](https://aka.ms/docs.consistency-levels).
-
-## Recommended client configurations
-
-When you use the Cassandra API, you don't need to make substantial code changes to existing applications that run Apache Cassandra. We recommend some approaches and configuration settings for the Cassandra API in Azure Cosmos DB. Review the blog post [Cassandra API recommendations for Java](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java/).
-
-## Code samples
-
-The Cassandra API is designed to work with your existing application code. If you encounter connectivity-related errors, use the [quickstart samples](manage-data-java-v4-sdk.md) as a starting point to discover minor setup changes you might need to make in your existing code.
-
-We also have more in-depth samples for [Java v3](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample) and [Java v4](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4) drivers. These code samples implement custom [extensions](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.0), which in turn implement recommended client configurations.
-
-You also can use samples for [Java Spring Boot (v3 driver)](https://github.com/Azure-Samples/spring-data-cassandra-on-azure-extension-v3) and [Java Spring Boot (v4 driver)](https://github.com/Azure-Samples/spring-data-cassandra-on-azure-extension-v4.git).
-
-## Storage
-
-The Cassandra API is backed by Azure Cosmos DB, which is a document-oriented NoSQL database engine. Azure Cosmos DB maintains metadata, which might result in a change in the amount of physical storage required for a specific workload.
-
-The difference in storage requirements between native Apache Cassandra and Azure Cosmos DB is most noticeable in small row sizes. In some cases, the difference might be offset because Azure Cosmos DB doesn't implement compaction or tombstones. This factor depends significantly on the workload. If you're uncertain about storage requirements, we recommend that you first create a proof of concept.
-
-## Multi-region deployments
-
-Native Apache Cassandra is a multi-master system by default. Apache Cassandra doesn't have an option for single-master with multi-region replication for reads only. The concept of application-level failover to another region for writes is redundant in Apache Cassandra. All nodes are independent, and there's no single point of failure. However, Azure Cosmos DB provides the out-of-box ability to configure either single-master or multi-master regions for writes.
-
-An advantage of having a single-master region for writes is avoiding cross-region conflict scenarios. It gives you the option to maintain strong consistency across multiple regions while maintaining a level of high availability.
-
-> [!NOTE]
-> Strong consistency across regions and a Recovery Point Objective (RPO) of zero isn't possible for native Apache Cassandra because all nodes are capable of serving writes. You can configure Azure Cosmos DB for strong consistency across regions in a *single write region* configuration. However, like with native Apache Cassandra, you can't configure an Azure Cosmos DB account that's configured with multiple write regions for strong consistency. A distributed system can't provide an RPO of zero *and* a Recovery Time Objective (RTO) of zero.
-
-For more information, see [Load balancing policy](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java/#load-balancing-policy) in our [Cassandra API recommendations for Java blog](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java). Also, see [Failover scenarios](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4#failover-scenarios) in our official [code sample for the Cassandra Java v4 driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4).
-
-## Request units
-
-One of the major differences between running a native Apache Cassandra cluster and provisioning an Azure Cosmos DB account is how database capacity is provisioned. In traditional databases, capacity is expressed in terms of CPU cores, RAM, and IOPS. Azure Cosmos DB is a multi-tenant platform-as-a-service database. Capacity is expressed by using a single normalized metric called [request units](../request-units.md). Every request sent to the database has a request unit cost (RU cost). Each request can be profiled to determine its cost.
-
-The benefit of using request units as a metric is that database capacity can be provisioned deterministically for highly predictable performance and efficiency. After you profile the cost of each request, you can use request units to directly associate the number of requests sent to the database with the capacity you need to provision. The challenge with this way of provisioning capacity is that you need to have a solid understanding of the throughput characteristics of your workload.
-
-We highly recommend that you profile your requests. Use that information to help you estimate the number of request units you'll need to provision. Here are some articles that might help you make the estimate:
--- [Request units in Azure Cosmos DB](../request-units.md)-- [Find the request unit charge for operations executed in the Azure Cosmos DB Cassandra API](find-request-unit-charge-cassandra.md)-- [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)-
-## Capacity provisioning models
-
-In traditional database provisioning, a fixed capacity is provisioned up front to handle the anticipated throughput. Azure Cosmos DB offers a capacity-based model called [provisioned throughput](../set-throughput.md). As a multi-tenant service, Azure Cosmos DB also offers *consumption-based* models in [autoscale](../provision-throughput-autoscale.md) mode and [serverless](../serverless.md) mode. The extent to which a workload might benefit from either of these consumption-based provisioning models depends on the predictability of throughput for the workload.
-
-In general, steady-state workloads that have predictable throughput benefit most from provisioned throughput. Workloads that have large periods of dormancy benefit from serverless mode. Workloads that have a continuous level of minimal throughput, but with unpredictable spikes, benefit most from autoscale mode. We recommend that you review the following articles for a clear understanding of the best capacity model for your throughput needs:
--- [Introduction to provisioned throughput in Azure Cosmos DB](../set-throughput.md)-- [Create Azure Cosmos containers and databases with autoscale throughput](../provision-throughput-autoscale.md)-- [Azure Cosmos DB serverless](../serverless.md)-
-## Partitioning
-
-Partitioning in Azure Cosmos DB is similar to partitioning in Apache Cassandra. One of the main differences is that Azure Cosmos DB is more optimized for *horizontal scale*. In Azure Cosmos DB, limits are placed on the amount of *vertical throughput* capacity that's available in any physical partition. The effect of this optimization is most noticeable when an existing data model has significant throughput skew.
-
-Take steps to ensure that your partition key design results in a relatively uniform distribution of requests. For more information about how logical and physical partitioning work and limits on throughput capacity per partition, see [Partitioning in the Azure Cosmos DB Cassandra API](cassandra-partitioning.md).
-
-## Scaling
-
-In native Apache Cassandra, increasing capacity and scale involves adding new nodes to a cluster and ensuring that the nodes are properly added to the Cassandra ring. In Azure Cosmos DB, adding nodes is transparent and automatic. Scaling is a function of how many [request units](../request-units.md) are provisioned for your keyspace or table. Scaling in physical machines occurs when either physical storage or required throughput reaches limits allowed for a logical or a physical partition. For more information, see [Partitioning in the Azure Cosmos DB Cassandra API](cassandra-partitioning.md).
-
-## Rate limiting
-
-A challenge of provisioning [request units](../request-units.md), particularly if you're using [provisioned throughput](../set-throughput.md), is rate limiting. Azure Cosmos DB returns rate-limited errors if clients consume more resources than the amount you provisioned. The Cassandra API in Azure Cosmos DB translates these exceptions to overloaded errors on the Cassandra native protocol. For information about how to avoid rate limiting in your application, see [Prevent rate-limiting errors for Azure Cosmos DB API for Cassandra operations](prevent-rate-limiting-errors.md).
-
-## Apache Spark connector
-
-Many Apache Cassandra users use the Apache Spark Cassandra connector to query their data for analytical and data movement needs. You can connect to the Cassandra API the same way and by using the same connector. Before you connect to the Cassandra API, review [Connect to the Azure Cosmos DB Cassandra API from Spark](connect-spark-configuration.md). In particular, see the section [Optimize Spark connector throughput configuration](connect-spark-configuration.md#optimizing-spark-connector-throughput-configuration).
-
-## Troubleshoot common issues
-
-For solutions to common issues, see [Troubleshoot common issues in the Azure Cosmos DB Cassandra API](troubleshoot-common-issues.md).
-
-## Next steps
--- Learn about [partitioning and horizontal scaling in Azure Cosmos DB](../partitioning-overview.md).-- Learn about [provisioned throughput in Azure Cosmos DB](../request-units.md).-- Learn about [global distribution in Azure Cosmos DB](../distribute-data-globally.md).
cosmos-db Cassandra Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-change-feed.md
- Title: Change feed in the Azure Cosmos DB API for Cassandra
-description: Learn how to use change feed in the Azure Cosmos DB API for Cassandra to get the changes made to your data.
--- Previously updated : 11/25/2019----
-# Change feed in the Azure Cosmos DB API for Cassandra
-
-[Change feed](../change-feed.md) support in the Azure Cosmos DB API for Cassandra is available through the query predicates in the Cassandra Query Language (CQL). Using these predicate conditions, you can query the change feed API. Applications can get the changes made to a table using the primary key (also known as the partition key) as is required in CQL. You can then take further actions based on the results. Changes to the rows in the table are captured in the order of their modification time and the sort order per partition key.
-
-The following example shows how to get a change feed on all the rows in a Cassandra API keyspace table using .NET. The predicate COSMOS_CHANGEFEED_START_TIME() is used directly within CQL to query items in the change feed from a specified start time (in this case, the current datetime). You can download the full sample for C# [here](/samples/azure-samples/azure-cosmos-db-cassandra-change-feed/cassandra-change-feed/) and for Java [here](https://github.com/Azure-Samples/cosmos-changefeed-cassandra-java).
-
-In each iteration, the query resumes at the last point changes were read, using the paging state. We can see a continuous stream of new changes to the table in the keyspace. We will see changes to rows that are inserted or updated. Watching for delete operations using change feed in the Cassandra API is currently not supported.
-
-> [!NOTE]
-> Reusing a token after dropping a collection and then recreating it with the same name results in an error.
-> We advise you to set the pageState to null when creating a new collection and reusing the collection name.
-
-# [Java](#tab/java)
-
-```java
- Session cassandraSession = utils.getSession();
-
- try {
- DateTimeFormatter dtf = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
- LocalDateTime now = LocalDateTime.now().minusHours(6).minusMinutes(30);
- String query="SELECT * FROM uprofile.user where COSMOS_CHANGEFEED_START_TIME()='"
- + dtf.format(now)+ "'";
-
- byte[] token=null;
- System.out.println(query);
- while(true)
- {
- SimpleStatement st=new SimpleStatement(query);
- st.setFetchSize(100);
- if(token!=null)
- st.setPagingStateUnsafe(token);
-
- ResultSet result=cassandraSession.execute(st) ;
- token=result.getExecutionInfo().getPagingState().toBytes();
-
- for(Row row:result)
- {
- System.out.println(row.getString("user_name"));
- }
- }
- } finally {
- utils.close();
- LOGGER.info("Please delete your table after verifying the presence of the data in portal or from CQL");
- }
-```
-
-# [C#](#tab/csharp)
-
-```C#
- //set initial start time for pulling the change feed
- DateTime timeBegin = DateTime.UtcNow;
-
- //initialise variable to store the continuation token
- byte[] pageState = null;
- while (true)
- {
- try
- {
-
- //Return the latest change for all rows in 'user' table
- IStatement changeFeedQueryStatement = new SimpleStatement(
- $"SELECT * FROM uprofile.user where COSMOS_CHANGEFEED_START_TIME() = '{timeBegin.ToString("yyyy-MM-ddTHH:mm:ss.fffZ", CultureInfo.InvariantCulture)}'");
- if (pageState != null)
- {
- changeFeedQueryStatement = changeFeedQueryStatement.SetPagingState(pageState);
- }
- Console.WriteLine("getting records from change feed at last page state....");
- RowSet rowSet = session.Execute(changeFeedQueryStatement);
-
- //store the continuation token here
- pageState = rowSet.PagingState;
-
- List<Row> rowList = rowSet.ToList();
- if (rowList.Count != 0)
- {
- for (int i = 0; i < rowList.Count; i++)
- {
- string value = rowList[i].GetValue<string>("user_name");
- int key = rowList[i].GetValue<int>("user_id");
- // do something with the data - e.g. compute, forward to another event, function, etc.
- // here, we just print the user name field
- Console.WriteLine("user_name: " + value);
- }
- }
- else
- {
- Console.WriteLine("zero documents read");
- }
- }
- catch (Exception e)
- {
- Console.WriteLine("Exception " + e);
- }
- }
-
-```
--
-In order to get the changes to a single row by primary key, you can add the primary key in the query. The following example shows how to track changes for the row where "user_id = 1"
-
-# [C#](#tab/csharp)
-
-```C#
- //Return the latest change for all row in 'user' table where user_id = 1
- IStatement changeFeedQueryStatement = new SimpleStatement(
- $"SELECT * FROM uprofile.user where user_id = 1 AND COSMOS_CHANGEFEED_START_TIME() = '{timeBegin.ToString("yyyy-MM-ddTHH:mm:ss.fffZ", CultureInfo.InvariantCulture)}'");
-
-```
-
-# [Java](#tab/java)
-
-```java
- String query="SELECT * FROM uprofile.user where user_id=1 and COSMOS_CHANGEFEED_START_TIME()='"
- + dtf.format(now)+ "'";
- SimpleStatement st=new SimpleStatement(query);
-```
-
-## Current limitations
-
-The following limitations are applicable when using change feed with Cassandra API:
-
-* Inserts and updates are currently supported. Delete operation is not yet supported. As a workaround, you can add a soft marker on rows that are being deleted. For example, add a field in the row called "deleted" and set it to "true".
-* Last update is persisted as in core SQL API and intermediate updates to the entity are not available.
--
-## Error handling
-
-The following error codes and messages are supported when using change feed in Cassandra API:
-
-* **HTTP error code 429** - When the change feed is rate limited, it returns an empty page.
-
-## Next steps
-
-* [Manage Azure Cosmos DB Cassandra API resources using Azure Resource Manager templates](templates-samples.md)
cosmos-db Cassandra Driver Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-driver-extensions.md
- Title: Azure Cosmos DB Extension Driver Recommended settings
-description: Learn about Azure Cosmos DB Cassandra API extension driver and the recommended settings.
----- Previously updated : 01/27/2022---
-# Azure Cosmos DB Cassandra API driver extension
-
-Azure Cosmos DB offers a driver extension for DataStax Java Driver 3 and 4. These driver extensions provide developers with different features to help improve the performance and reliability of your application and optimize your workloads on Azure Cosmos DB.
-
-This article focuses on Java v4 of the DataStax Java Driver. The extension can be used without any changes to your code; it only requires an update to the `pom.xml` and `application.conf` files. This article shares the default values for all configuration options set by the Cosmos Cassandra extensions and the cases in which you might wish to override them.
--
-## Recommended settings for Java SDK
-The following settings are specifically for Cassandra client driver Java version 4.
-
-### Authentication
-PlainTextAuthProvider is used by default. This is because the Cosmos DB Cassandra API requires authentication and uses plain text authentication.
-
-```java
- auth-provider {
- class = PlainTextAuthProvider
- }
-```
-
-### Connection
-Cosmos DB load-balances requests against a large number of backend nodes. The default settings in the extension for local and remote node sizes work well in development, test, and low-volume production or staging environments. In high-volume environments, you should consider increasing these values to 50 or 100.
-```java
- connection {
- pool {
- local {
- size = 10
- }
- remote {
- size = 10
- }
- }
- }
-```
-
-### Token map
-The session token map is used internally by the driver to send requests to the optimal coordinator when token-aware routing is enabled. This is an effective optimization when you are connected to an Apache Cassandra instance. It is irrelevant and generates spurious error messages when you are connected to a Cosmos Cassandra endpoint. Hence, we recommend disabling the session token map when you are connected to a Cosmos DB Cassandra API instance.
-```yml
- metadata {
- token-map {
- enabled = false
- }
- }
-```
-
-### Reconnection policy
-We recommend using the `ConstantReconnectionPolicy` for Cassandra API, with a `base-delay` of 2 seconds.
-
-```java
- reconnection-policy {
- class = ConstantReconnectionPolicy
- base-delay = 2 second
- }
-```
-
-### Retry policy
-The default retry policy in the Java Driver does not handle the `OverLoadedException`. We have created a custom policy for Cassandra API to help handle this exception.
-The parameters for the retry policy are defined within the [reference.conf](https://github.com/Azure/azure-cosmos-cassandra-extensions/blob/release/java-driver-4/1.1.2/driver-4/src/main/resources/reference.conf) of the Azure Cosmos DB extension.
-
-```java
- retry-policy {
- class = com.azure.cosmos.cassandra.CosmosRetryPolicy
- max-retries = 5
- fixed-backoff-time = 5000
- growing-backoff-time = 1000
- }
-```
-
-### Balancing policy and preferred regions
-The default load balancing policy in the v4 driver restricts application-level failover, and specifying a single local datacenter for the `CqlSession` object is required by the policy. This provides a good out-of-box experience for communicating with Cosmos Cassandra instances. In addition to setting the load balancing policy, you can use the `preferred-regions` parameter to configure failover to specified regions in a multi-region-writes deployment if there are regional outages.
-
-```java
- load-balancing-policy {
- multi-region-writes=false
- preferred-regions=["Australia East","UK West"]
-}
-```
-
-### SSL connection and timeouts
-The `DefaultSslEngineFactory` is used by default. This is because the Cosmos DB Cassandra API requires SSL:
-```java
- ssl-engine-factory {
- class = DefaultSslEngineFactory
- }
-```
-A request timeout of 60 seconds provides a better out-of-box experience than the default value of 2 seconds. Adjust this value up or down based on workload and Cosmos Cassandra throughput provisioning. The more throughput you provision, the lower you might set this value.
-``` java
- request {
- timeout = "60 seconds"
- }
-```
--
-## Next steps
-- [Create a Cosmos DB Cassandra API Account](create-account-java.md)-- [Implement Azure Cosmos DB Cassandra API Extensions](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4)
cosmos-db Cassandra Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-introduction.md
- Title: Introduction to the Azure Cosmos DB Cassandra API
-description: Learn how you can use Azure Cosmos DB to "lift-and-shift" existing applications and build new applications by using the Cassandra drivers and CQL
------ Previously updated : 11/25/2020--
-# Introduction to the Azure Cosmos DB Cassandra API
-
-Azure Cosmos DB Cassandra API can be used as the data store for apps written for [Apache Cassandra](https://cassandra.apache.org). This means that by using existing [Apache drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html?highlight=driver) compliant with CQLv4, your existing Cassandra application can now communicate with the Azure Cosmos DB Cassandra API. In many cases, you can switch from using Apache Cassandra to using Azure Cosmos DB's Cassandra API, by just changing a connection string.
-
-The Cassandra API enables you to interact with data stored in Azure Cosmos DB using the Cassandra Query Language (CQL), Cassandra-based tools (like cqlsh), and Cassandra client drivers that you're already familiar with.
-
-> [!NOTE]
-> The [serverless capacity mode](../serverless.md) is now available on Azure Cosmos DB's Cassandra API.
-
-## What is the benefit of using Apache Cassandra API for Azure Cosmos DB?
-
-**No operations management**: As a fully managed cloud service, Azure Cosmos DB Cassandra API removes the overhead of managing and monitoring a myriad of settings across OS, JVM, and yaml files and their interactions. Azure Cosmos DB provides monitoring of throughput, latency, storage, availability, and configurable alerts.
-
-**Open source standard**: Despite being a fully managed service, Cassandra API still supports a large surface area of the native [Apache Cassandra wire protocol](cassandra-support.md), allowing you to build applications on a widely used and cloud agnostic open source standard.
-
-**Performance management**: Azure Cosmos DB provides guaranteed low latency reads and writes at the 99th percentile, backed up by the SLAs. Users do not have to worry about operational overhead to ensure high performance and low latency reads and writes. This means that users do not need to deal with scheduling compaction, managing tombstones, setting up bloom filters and replicas manually. Azure Cosmos DB removes the overhead to manage these issues and lets you focus on the application logic.
-
-**Ability to use existing code and tools**: Azure Cosmos DB provides wire protocol level compatibility with existing Cassandra SDKs and tools. This compatibility ensures you can use your existing codebase with Azure Cosmos DB Cassandra API with trivial changes.
-
-**Throughput and storage elasticity**: Azure Cosmos DB provides throughput across all regions and can scale the provisioned throughput with Azure portal, PowerShell, or CLI operations. You can [elastically scale](scale-account-throughput.md) storage and throughput for your tables as needed with predictable performance.
-
-**Global distribution and availability**: Azure Cosmos DB provides the ability to globally distribute data across all Azure regions and serve the data locally while ensuring low latency data access and high availability. Azure Cosmos DB provides 99.99% high availability within a region and 99.999% read and write availability across multiple regions with no operations overhead. Learn more in [Distribute data globally](../distribute-data-globally.md) article.
-
-**Choice of consistency**: Azure Cosmos DB provides the choice of five well-defined consistency levels to achieve optimal tradeoffs between consistency and performance. These consistency levels are strong, bounded-staleness, session, consistent prefix and eventual. These well-defined, practical, and intuitive consistency levels allow developers to make precise trade-offs between consistency, availability, and latency. Learn more in [consistency levels](../consistency-levels.md) article.
-
-**Enterprise grade**: Azure Cosmos DB provides [compliance certifications](https://www.microsoft.com/trustcenter) to ensure users can use the platform securely. Azure Cosmos DB also provides encryption at rest and in motion, IP firewall, and audit logs for control plane activities.
-
-**Event Sourcing**: Cassandra API provides access to a persistent change log, the [Change Feed](cassandra-change-feed.md), which can facilitate event sourcing directly from the database. In Apache Cassandra, the only equivalent is change data capture (CDC), which is merely a mechanism to flag specific tables for archival and to reject writes to those tables once a configurable size-on-disk for the CDC log is reached. (These capabilities are redundant in Cosmos DB, because the relevant aspects are automatically governed.)
-
-## Next steps
-
-* You can quickly get started with building the following language-specific apps to create and manage Cassandra API data:
- - [Node.js app](manage-data-nodejs.md)
- - [.NET app](manage-data-dotnet.md)
- - [Python app](manage-data-python.md)
-
-* Get started with [creating a Cassandra API account, database, and a table](create-account-java.md) by using a Java application.
-
-* [Load sample data to the Cassandra API table](load-data-table.md) by using a Java application.
-
-* [Query data from the Cassandra API account](query-data.md) by using a Java application.
-
-* To learn about Apache Cassandra features supported by Azure Cosmos DB Cassandra API, see the [Cassandra support](cassandra-support.md) article.
-
-* Read the [Frequently Asked Questions](cassandra-faq.yml).
cosmos-db Cassandra Monitor Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-monitor-insights.md
- Title: Monitor and debug with insights in Azure Cosmos DB Cassandra API
-description: Learn how to debug and monitor your Azure Cosmos DB Cassandra API account using insights
----- Previously updated : 05/02/2022---
-# Monitor and debug with insights in Azure Cosmos DB Cassandra API
-
-Azure Cosmos DB helps provide insights into your application's performance using the Azure Monitor API. Azure Monitor for Azure Cosmos DB provides a metrics view to monitor your Cassandra API account and create dashboards.
-
-This article walks through some common use cases and how best to use Azure Cosmos DB insights to analyze and debug your Cassandra API account.
-> [!NOTE]
-> The Azure Cosmos DB metrics are collected by default; this feature does not require you to enable or configure anything.
--
-## Availability
-The availability chart shows the percentage of successful requests over the total requests per hour. Use it to monitor service availability for a specified Cassandra API account.
---
-## Latency
-The charts below show the read and write latency observed by your Cassandra API account in the region where your account is operating. You can visualize latency across regions for a geo-replicated account. This metric doesn't represent the end-to-end request latency. Use diagnostic logs for cases where you experience high latency for query operations.
-
-The server-side latency (avg) by region chart also surfaces sudden latency spikes on the server. It can help you differentiate between a client-side latency spike and a server-side latency spike.
--
-You can also view server-side latency by operation in a specific keyspace.
-----
-Is your application experiencing any throttling? The chart below shows the total number of requests that failed with a 429 response code.
-Exceeding provisioned throughput could be one of the reasons. Enable [Server Side Retry](./prevent-rate-limiting-errors.md) when your application experiences high throttling due to consuming more request units than are allocated.
----
-## System and management operations
-The system view shows the count of metadata requests by primary partition and helps identify throttled requests. The management operations view shows account activities such as creation, deletion, key, network, and replication settings, along with the request volume per status code over a time period.
-- Metric chart for account diagnostic, network, and replication settings over a specified period, filtered by keyspace.
-- Metric chart to view account key rotation.
-You can view changes to the primary or secondary password for your Cassandra API account.
---
-## Storage
-The storage charts show the distribution of raw and index storage, and a count of documents in the Cassandra API account.
--
-They also show the maximum request unit consumption for an account over a defined time period.
---
-## Throughput and requests
-The Total Request Units metric displays the request unit usage based on operation types.
-
-These operations can be analyzed within a given time interval, keyspace, or table.
---
-The Normalized RU Consumption metric is a metric between 0% and 100% that is used to help measure the utilization of provisioned throughput on a database or container. The metric can also be used to view the utilization of individual partition key ranges on a database or container. One of the main factors of a scalable application is having a good cardinality of partition keys.
-The chart below shows whether your application's high RU consumption is caused by a hot partition.
--
-The chart below shows a breakdown of requests by status code. To understand the meaning of the different codes, see [Cassandra API error codes](./error-codes-solution.md).
---
-## Next steps
-- [Monitor and debug with insights in Azure Cosmos DB](../use-metrics.md)
-- [Create alerts for Azure Cosmos DB using Azure Monitor](../create-alerts.md)
cosmos-db Cassandra Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-partitioning.md
- Title: Partitioning in Azure Cosmos DB Cassandra API
-description: Learn about partitioning in Azure Cosmos DB Cassandra API
----- Previously updated : 09/03/2021---
-# Partitioning in Azure Cosmos DB Cassandra API
-
-This article describes how partitioning works in Azure Cosmos DB Cassandra API.
-
-Cassandra API uses partitioning to scale the individual tables in a keyspace to meet the performance needs of your application. Partitions are formed based on the value of a partition key that is associated with each record in a table. All the records in a partition have the same partition key value. Azure Cosmos DB transparently and automatically manages the placement of partitions across the physical resources to efficiently satisfy the scalability and performance needs of the table. As the throughput and storage requirements of an application increase, Azure Cosmos DB moves and balances the data across a greater number of physical machines.
-
-From the developer perspective, partitioning behaves in the same way for Azure Cosmos DB Cassandra API as it does in native [Apache Cassandra](https://cassandra.apache.org/). However, there are some differences behind the scenes.
--
-## Differences between Apache Cassandra and Azure Cosmos DB
-
-In Azure Cosmos DB, each machine on which partitions are stored is itself referred to as a [physical partition](../partitioning-overview.md#physical-partitions). The physical partition is akin to a virtual machine: a dedicated compute unit, or set of physical resources. Each partition stored on this compute unit is referred to as a [logical partition](../partitioning-overview.md#logical-partitions) in Azure Cosmos DB. If you are already familiar with Apache Cassandra, you can think of logical partitions in the same way that you think of regular partitions in Cassandra.
-
-Apache Cassandra recommends a 100-MB limit on the size of the data that can be stored in a partition. The Cassandra API for Azure Cosmos DB allows up to 20 GB per logical partition, and up to 30 GB of data per physical partition. In Azure Cosmos DB, unlike Apache Cassandra, the compute capacity available in the physical partition is expressed using a single metric called [request units](../request-units.md), which allows you to think of your workload in terms of requests (reads or writes) per second, rather than cores, memory, or IOPS. This can make capacity planning more straightforward, once you understand the cost of each request. Each physical partition can have up to 10,000 RUs of compute available to it. You can learn more about scalability options by reading our article on [elastic scale](scale-account-throughput.md) in Cassandra API.
-
-In Azure Cosmos DB, each physical partition consists of a set of replicas, also known as replica sets, with at least 4 replicas per partition. This is in contrast to Apache Cassandra, where setting a replication factor of 1 is possible. However, this leads to low availability if the only node with the data goes down. In Cassandra API there is always a replication factor of 4 (quorum of 3). Azure Cosmos DB automatically manages replica sets, while these need to be maintained using various tools in Apache Cassandra.
-
-Apache Cassandra has a concept of tokens, which are hashes of partition keys. The tokens are based on a Murmur3 64-bit hash, with values ranging from -2^63 to 2^63 - 1. This range is commonly referred to as the "token ring" in Apache Cassandra. The token ring is distributed into token ranges, and these ranges are divided amongst the nodes present in a native Apache Cassandra cluster. Partitioning for Azure Cosmos DB is implemented in a similar way, except it uses a different hash algorithm and has a larger internal token ring. However, externally we expose the same token range as Apache Cassandra, that is, -2^63 to 2^63 - 1.
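-As a minimal illustration (using the `uprofile.user` table defined in the next section, where `id` is the partition key), you can project the token computed for each partition key value, and use `token()` on the left-hand side of a filter:
-
-```shell
-SELECT token(id), user, message FROM uprofile.user;
-SELECT * FROM uprofile.user WHERE token(id) > 0;
-```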
--
-## Primary key
-
-All tables in Cassandra API must have a `primary key` defined. The syntax for a primary key is shown below:
-
-```shell
-column_name cql_type_definition PRIMARY KEY
-```
-
-Suppose we want to create a user table, which stores messages for different users:
-
-```shell
-CREATE TABLE uprofile.user (
- id UUID PRIMARY KEY,
- user text,
- message text);
-```
-
-In this design, we have defined the `id` field as the primary key. The primary key functions as the identifier for the record in the table and it is also used as the partition key in Azure Cosmos DB. If the primary key is defined in the previously described way, there will only be a single record in each partition. This will result in a perfectly horizontal and scalable distribution when writing data to the database, and is ideal for key-value lookup use cases. The application should provide the primary key whenever reading data from the table, to maximize read performance.
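-For example, a point read against this table supplies the primary key, so the request is routed directly to the partition that holds the record (the UUID literal below is illustrative):
-
-```shell
-INSERT INTO uprofile.user (id, user, message)
-VALUES (5132b130-ae79-11e4-ab27-0800200c9a66, 'theo', 'hello');
-
-SELECT * FROM uprofile.user WHERE id = 5132b130-ae79-11e4-ab27-0800200c9a66;
-```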
---
-## Compound primary key
-
-Apache Cassandra also has a concept of `compound keys`. A compound `primary key` consists of more than one column; the first column is the `partition key`, and any additional columns are the `clustering keys`. The syntax for a `compound primary key` is shown below:
-
-```shell
-PRIMARY KEY (partition_key_column_name, clustering_column_name [, ...])
-```
-
-Suppose we want to change the above design and make it possible to efficiently retrieve messages for a given user:
-
-```shell
-CREATE TABLE uprofile.user (
- user text,
- id int,
- message text,
- PRIMARY KEY (user, id));
-```
-
-In this design, we are now defining `user` as the partition key and `id` as the clustering key. You can define as many clustering keys as you wish, but each value (or combination of values) for the clustering key must be unique within the partition. This is what allows multiple records to be added to the same partition, for example:
-
-```shell
-insert into uprofile.user (user, id, message) values ('theo', 1, 'hello');
-insert into uprofile.user (user, id, message) values ('theo', 2, 'hello again');
-```
-
-When data is returned, it is sorted by the clustering key, as expected in Apache Cassandra:
--
-> [!WARNING]
-> When querying data in a table that has a compound primary key, if you want to filter on the partition key *and* any other non-indexed fields aside from the clustering key, ensure that you *explicitly add a secondary index on the partition key*:
->
-> ```shell
-> CREATE INDEX ON uprofile.user (user);
-> ```
->
-> Azure Cosmos DB Cassandra API does not apply indexes to partition keys by default, and the index in this scenario may significantly improve query performance. Review our article on [secondary indexing](secondary-indexing.md) for more information.
-
-With data modeled in this way, multiple records can be assigned to each partition, grouped by user. We can thus issue a query that is efficiently routed by the `partition key` (in this case, `user`) to get all the messages for a given user.
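-For example, the following query is routed to the single partition that holds the messages for the user `theo`:
-
-```shell
-SELECT * FROM uprofile.user WHERE user = 'theo';
-```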
-----
-## Composite partition key
-
-Composite partition keys work essentially the same way as compound keys, except that you can specify multiple columns as a composite partition key. The syntax of composite partition keys is shown below:
-
-```shell
-PRIMARY KEY (
- (partition_key_column_name[, ...]),
- clustering_column_name [, ...]);
-```
-For example, you can have the following, where the unique combination of `firstname` and `lastname` would form the partition key, and `id` is the clustering key:
-
-```shell
-CREATE TABLE uprofile.user (
- firstname text,
- lastname text,
- id int,
- message text,
- PRIMARY KEY ((firstname, lastname), id) );
-```
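-With a composite partition key, an efficiently routed query supplies values for all the partition key columns, for example (the values below are illustrative):
-
-```shell
-SELECT * FROM uprofile.user WHERE firstname = 'theo' AND lastname = 'van gogh';
-```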
-
-## Next steps
-
-* Learn about [partitioning and horizontal scaling in Azure Cosmos DB](../partitioning-overview.md).
-* Learn about [provisioned throughput in Azure Cosmos DB](../request-units.md).
-* Learn about [global distribution in Azure Cosmos DB](../distribute-data-globally.md).
cosmos-db Cassandra Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-support.md
- Title: Apache Cassandra features supported by Azure Cosmos DB Cassandra API
-description: Learn about the Apache Cassandra feature support in Azure Cosmos DB Cassandra API
------ Previously updated : 09/14/2020--
-# Apache Cassandra features supported by Azure Cosmos DB Cassandra API
-
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB Cassandra API through the Cassandra Query Language (CQL) Binary Protocol v4 [wire protocol](https://github.com/apache/cassandra/blob/trunk/doc/native_protocol_v4.spec) compliant open-source Cassandra client [drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html?highlight=driver).
-
-By using the Azure Cosmos DB Cassandra API, you can enjoy the benefits of the Apache Cassandra APIs and the enterprise capabilities that Azure Cosmos DB provides. The enterprise capabilities include [global distribution](../distribute-data-globally.md), [automatic scale out partitioning](cassandra-partitioning.md), availability and latency guarantees, encryption at rest, backups, and much more.
-
-## Cassandra protocol
-
-The Azure Cosmos DB Cassandra API is compatible with Cassandra Query Language (CQL) v3.11 API (backward-compatible with version 2.x). The supported CQL commands, tools, limitations, and exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB Cassandra API.
-
-## Cassandra driver
-
-The following versions of Cassandra drivers are supported by Azure Cosmos DB Cassandra API:
-
-* [Java 3.5+](https://github.com/datastax/java-driver)
-* [C# 3.5+](https://github.com/datastax/csharp-driver)
-* [Nodejs 3.5+](https://github.com/datastax/nodejs-driver)
-* [Python 3.15+](https://github.com/datastax/python-driver)
-* [C++ 2.9](https://github.com/datastax/cpp-driver)
-* [PHP 1.3](https://github.com/datastax/php-driver)
-* [Gocql](https://github.com/gocql/gocql)
-
-
-## CQL data types
-
-Azure Cosmos DB Cassandra API supports the following CQL data types:
-
-|Type |Supported |
-|||
-| `ascii` | Yes |
-| `bigint` | Yes |
-| `blob` | Yes |
-| `boolean` | Yes |
-| `counter` | Yes |
-| `date` | Yes |
-| `decimal` | Yes |
-| `double` | Yes |
-| `float` | Yes |
-| `frozen` | Yes |
-| `inet` | Yes |
-| `int` | Yes |
-| `list` | Yes |
-| `set` | Yes |
-| `smallint` | Yes |
-| `text` | Yes |
-| `time` | Yes |
-| `timestamp` | Yes |
-| `timeuuid` | Yes |
-| `tinyint` | Yes |
-| `tuple` | Yes |
-| `uuid` | Yes |
-| `varchar` | Yes |
-| `varint` | Yes |
-| `tuples` | Yes |
-| `udts` | Yes |
-| `map` | Yes |
-
-Static is supported for data type declaration.
-
-## CQL functions
-
-Azure Cosmos DB Cassandra API supports the following CQL functions:
-
-|Command |Supported |
-|||
-| `Token` * | Yes |
-| `ttl` *** | Yes |
-| `writetime` *** | Yes |
-| `cast` ** | Yes |
-
-> [!NOTE]
-> \* Cassandra API supports token as a projection/selector, and only allows token(pk) on the left-hand side of a where clause. For example, `WHERE token(pk) > 1024` is supported, but `WHERE token(pk) > token(100)` is **not** supported.
-> \*\* The `cast()` function is not nestable in Cassandra API. For example, `SELECT cast(count as double) FROM myTable` is supported, but `SELECT avg(cast(count as double)) FROM myTable` is **not** supported.
-> \*\*\* Custom timestamps and TTL specified with the `USING` option are applied at a row level (and not per cell).
---
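-For example, the following sketches (against a hypothetical table `myTable` with partition key `pk` and a regular column `count`) illustrate the supported usage described in the note above:
-
-```shell
-SELECT token(pk) FROM myTable;
-SELECT * FROM myTable WHERE token(pk) > 1024;
-SELECT ttl(count), writetime(count) FROM myTable;
-SELECT cast(count as double) FROM myTable;
-```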
-Aggregate functions:
-
-|Command |Supported |
-|||
-| `avg` | Yes |
-| `count` | Yes |
-| `min` | Yes |
-| `max` | Yes |
-| `sum` | Yes |
-
-> [!NOTE]
-> Aggregate functions work on regular columns, but aggregates on clustering columns are **not** supported.
--
-Blob conversion functions:
-
-|Command |Supported |
-|||
-| `typeAsBlob(value)` | Yes |
-| `blobAsType(value)` | Yes |
--
-UUID and timeuuid functions:
-
-|Command |Supported |
-|||
-| `dateOf()` | Yes |
-| `now()` | Yes |
-| `minTimeuuid()` | Yes |
-| `unixTimestampOf()` | Yes |
-| `toDate(timeuuid)` | Yes |
-| `toTimestamp(timeuuid)` | Yes |
-| `toUnixTimestamp(timeuuid)` | Yes |
-| `toDate(timestamp)` | Yes |
-| `toUnixTimestamp(timestamp)` | Yes |
-| `toTimestamp(date)` | Yes |
-| `toUnixTimestamp(date)` | Yes |
--
-
-## CQL commands
-
-Azure Cosmos DB supports the following database commands on Cassandra API accounts.
-
-|Command |Supported |
-|||
-| `ALLOW FILTERING` | Yes |
-| `ALTER KEYSPACE` | N/A (PaaS service, replication managed internally)|
-| `ALTER MATERIALIZED VIEW` | No |
-| `ALTER ROLE` | No |
-| `ALTER TABLE` | Yes |
-| `ALTER TYPE` | No |
-| `ALTER USER` | No |
-| `BATCH` | Yes (unlogged batch only; see the example after this table)|
-| `COMPACT STORAGE` | N/A (PaaS service) |
-| `CREATE AGGREGATE` | No |
-| `CREATE CUSTOM INDEX (SASI)` | No |
-| `CREATE INDEX` | Yes (without [specifying index name](secondary-indexing.md), and indexes on clustering keys or full FROZEN collection not supported) |
-| `CREATE FUNCTION` | No |
-| `CREATE KEYSPACE` (replication settings ignored) | Yes |
-| `CREATE MATERIALIZED VIEW` | No |
-| `CREATE TABLE` | Yes |
-| `CREATE TRIGGER` | No |
-| `CREATE TYPE` | Yes |
-| `CREATE ROLE` | No |
-| `CREATE USER` (Deprecated in native Apache Cassandra) | No |
-| `DELETE` | Yes |
-| `DISTINCT` | No |
-| `DROP AGGREGATE` | No |
-| `DROP FUNCTION` | No |
-| `DROP INDEX` | Yes |
-| `DROP KEYSPACE` | Yes |
-| `DROP MATERIALIZED VIEW` | No |
-| `DROP ROLE` | No |
-| `DROP TABLE` | Yes |
-| `DROP TRIGGER` | No |
-| `DROP TYPE` | Yes |
-| `DROP USER` (Deprecated in native Apache Cassandra) | No |
-| `GRANT` | No |
-| `INSERT` | Yes |
-| `LIST PERMISSIONS` | No |
-| `LIST ROLES` | No |
-| `LIST USERS` (Deprecated in native Apache Cassandra) | No |
-| `REVOKE` | No |
-| `SELECT` | Yes |
-| `UPDATE` | Yes |
-| `TRUNCATE` | Yes |
-| `USE` | Yes |
-
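-For example, a minimal unlogged batch sketch (using the hypothetical `uprofile.user` table from the partitioning article) groups multiple writes to the same partition:
-
-```shell
-BEGIN UNLOGGED BATCH
-  INSERT INTO uprofile.user (user, id, message) VALUES ('theo', 10, 'first');
-  INSERT INTO uprofile.user (user, id, message) VALUES ('theo', 11, 'second');
-APPLY BATCH;
-```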
-## Lightweight Transactions (LWT)
-
-| Component |Supported |
-|||
-| `DELETE IF EXISTS` | Yes |
-| `DELETE conditions` | Yes |
-| `INSERT IF NOT EXISTS` | Yes |
-| `UPDATE IF EXISTS` | Yes |
-| `UPDATE IF NOT EXISTS` | Yes |
-| `UPDATE conditions` | Yes |
-
-> [!NOTE]
-> Lightweight transactions currently aren't supported for accounts that have multi-region writes enabled.
-
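-A minimal sketch of the supported conditional (lightweight transaction) operations, again using the hypothetical `uprofile.user` table:
-
-```shell
-INSERT INTO uprofile.user (user, id, message) VALUES ('theo', 12, 'hello') IF NOT EXISTS;
-UPDATE uprofile.user SET message = 'hi' WHERE user = 'theo' AND id = 12 IF EXISTS;
-DELETE FROM uprofile.user WHERE user = 'theo' AND id = 12 IF EXISTS;
-```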
-## CQL Shell commands
-
-Azure Cosmos DB supports the following CQL shell commands on Cassandra API accounts.
-
-|Command |Supported |
-|||
-| `CAPTURE` | Yes |
-| `CLEAR` | Yes |
-| `CONSISTENCY` * | N/A |
-| `COPY` | No |
-| `DESCRIBE` | Yes |
-| `cqlshExpand` | No |
-| `EXIT` | Yes |
-| `LOGIN` | N/A (CQL function `USER` is not supported, hence `LOGIN` is redundant) |
-| `PAGING` | Yes |
-| `SERIAL CONSISTENCY` * | N/A |
-| `SHOW` | Yes |
-| `SOURCE` | Yes |
-| `TRACING` | N/A (Cassandra API is backed by Azure Cosmos DB - use [diagnostic logging](../cosmosdb-monitor-resource-logs.md) for troubleshooting) |
-
-> [!NOTE]
-> Consistency works differently in Azure Cosmos DB, see [here](apache-cassandra-consistency-mapping.md) for more information.
--
-## JSON Support
-|Command |Supported |
-|||
-| `SELECT JSON` | Yes |
-| `INSERT JSON` | Yes |
-| `fromJson()` | No |
-| `toJson()` | No |
--
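-For example, a minimal sketch of the supported JSON syntax (again using the hypothetical `uprofile.user` table):
-
-```shell
-INSERT INTO uprofile.user JSON '{"user": "theo", "id": 14, "message": "hello"}';
-SELECT JSON * FROM uprofile.user WHERE user = 'theo';
-```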
-## Cassandra API limits
-
-Azure Cosmos DB Cassandra API does not have any limits on the size of data stored in a table. Hundreds of terabytes or petabytes of data can be stored while ensuring partition key limits are honored. Similarly, there is no limit on the number of columns per entity (row equivalent). However, the total size of the entity should not exceed 2 MB, and the data per partition key cannot exceed 20 GB, as in all other APIs.
-
-## Tools
-
-Azure Cosmos DB Cassandra API is a managed service platform. The platform does not require any management overhead or utilities such as the garbage collector, Java Virtual Machine (JVM), and nodetool to manage the cluster. Tools such as cqlsh that utilize Binary CQLv4 compatibility are supported.
-
-* Azure portal's data explorer, metrics, log diagnostics, PowerShell, and CLI are other supported mechanisms to manage the account.
-
-## CQL shell
-
-<!-- You can open a hosted native Cassandra shell (CQLSH v5.0.1) directly from the Data Explorer in the [Azure portal](../data-explorer.md) or the [Azure Cosmos DB Explorer](https://cosmos.azure.com/). Before enabling the CQL shell, you must [enable the Notebooks](../notebooks-overview.md) feature in your account (if not already enabled, you will be prompted when clicking on `Open Cassandra Shell`).
--
-You can connect to the Cassandra API in Azure Cosmos DB by using the CQLSH installed on a local machine. It comes with Apache Cassandra 3.11 and works out of the box by setting the environment variables. The following sections include the instructions to install, configure, and connect to Cassandra API in Azure Cosmos DB, on Windows or Linux using CQLSH.
-
-> [!WARNING]
-> Connections to Azure Cosmos DB Cassandra API will not work with DataStax Enterprise (DSE) or Cassandra 4.0 versions of CQLSH. Please ensure you use only v3.11 open source Apache Cassandra versions of CQLSH when connecting to Cassandra API.
-
-**Windows:**
-
-<!-- If using windows, we recommend you enable the [Windows filesystem for Linux](/windows/wsl/install-win10#install-the-windows-subsystem-for-linux). You can then follow the linux commands below. -->
-
-1. Install [Python 3](https://www.python.org/downloads/windows/)
-1. Install PIP
- 1. Before installing PIP, download the get-pip.py file.
- 1. Launch a command prompt if it isn't already open. To do so, open the Windows search bar, type cmd and select the icon.
- 1. Then, run the following command to download the get-pip.py file:
- ```bash
- curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
- ```
-1. Install PIP on Windows
-```bash
-python get-pip.py
-```
-1. Verify the PIP installation (look for a message from the previous step to confirm which folder PIP was installed in, and then navigate to that folder and run the command `pip help`).
-1. Install CQLSH using PIP
-```bash
-pip3 install cqlsh==5.0.3
-```
-1. Install [Python 2](https://www.python.org/downloads/windows/)
-1. Run the [CQLSH using the authentication mechanism](manage-data-cqlsh.md#update-your-connection-string).
-
-> [!NOTE]
-> You would need to set the environment variables to point to the Python 2 folder.
-
-**Install on Unix/Linux/Mac:**
-
-```bash
-# Install default-jre and default-jdk
-sudo apt install default-jre
-sudo apt-get update
-sudo apt install default-jdk
-
-# Import the Baltimore CyberTrust root certificate:
-curl https://cacert.omniroot.com/bc2025.crt > bc2025.crt
-keytool -importcert -alias bc2025ca -file bc2025.crt
-
-# Install the Cassandra libraries in order to get CQLSH:
-echo "deb https://downloads.apache.org/cassandra/debian 311x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
-curl https://downloads.apache.org/cassandra/KEYS | sudo apt-key add -
-sudo apt-get update
-sudo apt-get install cassandra
-```
-
-**Connect with Unix/Linux/Mac:**
-```bash
-# Export the SSL variables:
-export SSL_VERSION=TLSv1_2
-export SSL_VALIDATE=false
-
-# Connect to Azure Cosmos DB API for Cassandra:
-cqlsh <YOUR_ACCOUNT_NAME>.cassandra.cosmosdb.azure.com 10350 -u <YOUR_ACCOUNT_NAME> -p <YOUR_ACCOUNT_PASSWORD> --ssl --protocol-version=4
-```
-**Connect with Docker:**
-```bash
-docker run -it --rm -e SSL_VALIDATE=false -e SSL_VERSION=TLSv1_2 cassandra:3.11 cqlsh <account_name>.cassandra.cosmos.azure.com 10350 -u <YOUR_ACCOUNT_NAME> -p <YOUR_ACCOUNT_PASSWORD> --ssl
-```
-
-All CRUD operations that are executed through a CQL v4 compatible SDK will return extra information about error and request units consumed. The DELETE and UPDATE commands should be handled with resource governance taken into consideration, to ensure the most efficient use of the provisioned throughput.
-
-* Note: the `gc_grace_seconds` value must be zero if specified.
-
-```csharp
-var tableInsertStatement = table.Insert(sampleEntity);
-var insertResult = await tableInsertStatement.ExecuteAsync();
-
-// The IncomingPayload dictionary returned by the Cassandra API carries extra information, such as the request charge.
-foreach (var entry in insertResult.Info.IncomingPayload)
- {
-     byte[] valueInBytes = entry.Value;
-     string value = Encoding.UTF8.GetString(valueInBytes);
-     Console.WriteLine($"CustomPayload: {entry.Key}: {value}");
- }
-```
-
-## Consistency mapping
-
-Azure Cosmos DB Cassandra API provides a choice of consistency levels for read operations. The consistency mapping is detailed [here](apache-cassandra-consistency-mapping.md#mapping-consistency-levels).
-
-## Permission and role management
-
-Azure Cosmos DB supports Azure role-based access control (Azure RBAC) for provisioning, rotating keys, viewing metrics, and viewing the read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com). Azure Cosmos DB does not support roles for CRUD activities.
-
-## Keyspace and Table options
-
-The options for region name, class, replication_factor, and datacenter in the "Create Keyspace" command are currently ignored. The system uses the underlying Azure Cosmos DB [global distribution](../global-dist-under-the-hood.md) replication method to add the regions. If you need the cross-region presence of data, you can enable it at the account level with PowerShell, CLI, or the portal. To learn more, see the [how to add regions](../how-to-manage-database-account.md#addremove-regions-from-your-database-account) article. Durable_writes can't be disabled because Azure Cosmos DB ensures every write is durable. In every region, Azure Cosmos DB replicates the data across a replica set that is made up of four replicas, and this replica set [configuration](../global-dist-under-the-hood.md) can't be modified.
-
-All the options are ignored when creating the table, except gc_grace_seconds, which should be set to zero.
-The keyspace and table have an extra option named "cosmosdb_provisioned_throughput" with a minimum value of 400 RU/s. The keyspace throughput allows sharing throughput across multiple tables, which is useful for scenarios when all tables are not utilizing the provisioned throughput. The ALTER TABLE command allows changing the provisioned throughput across the regions.
-
-```shell
-CREATE KEYSPACE sampleks WITH REPLICATION = { 'class' : 'SimpleStrategy'} AND cosmosdb_provisioned_throughput=2000;
-
-CREATE TABLE sampleks.t1(user_id int PRIMARY KEY, lastname text) WITH cosmosdb_provisioned_throughput=2000;
-
-ALTER TABLE sampleks.t1 WITH cosmosdb_provisioned_throughput=10000;
-
-```
-## Secondary Index
-Cassandra API supports secondary indexes on all data types except frozen collection types, decimal, and varint types.
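-For example, a minimal sketch (on the hypothetical `uprofile.user` table) creates an index on a regular column without specifying an index name:
-
-```shell
-CREATE INDEX ON uprofile.user (message);
-```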
-
-## Usage of Cassandra retry connection policy
-
-Azure Cosmos DB is a resource governed system. You can do a certain number of operations in a given second based on the request units consumed by the operations. If an application exceeds that limit in a given second, requests are rate-limited and exceptions will be thrown. The Cassandra API in Azure Cosmos DB translates these exceptions to overloaded errors on the Cassandra native protocol. To ensure that your application can intercept and retry requests in case of rate limitation, the [spark](https://mvnrepository.com/artifact/com.microsoft.azure.cosmosdb/azure-cosmos-cassandra-spark-helper) and the [Java](https://github.com/Azure/azure-cosmos-cassandra-extensions) extensions are provided. See also Java code samples for [version 3](https://github.com/Azure-Samples/azure-cosmos-cassandra-java-retry-sample) and [version 4](https://github.com/Azure-Samples/azure-cosmos-cassandra-java-retry-sample-v4) Datastax drivers, when connecting to Cassandra API in Azure Cosmos DB. If you use other SDKs to access Cassandra API in Azure Cosmos DB, create a retry policy to retry on these exceptions. Alternatively, [enable server-side retries](prevent-rate-limiting-errors.md) for Cassandra API.
-
-## Next steps
--- Get started with [creating a Cassandra API account, database, and a table](create-account-java.md) by using a Java application
cosmos-db Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/change-feed.md
+
+ Title: Change feed in the Azure Cosmos DB for Apache Cassandra
+description: Learn how to use change feed in the Azure Cosmos DB for Apache Cassandra to get the changes made to your data.
++++ Last updated : 11/25/2019++++
+# Change feed in the Azure Cosmos DB for Apache Cassandra
+
+[Change feed](../change-feed.md) support in the Azure Cosmos DB for Apache Cassandra is available through the query predicates in the Cassandra Query Language (CQL). Using these predicate conditions, you can query the change feed API. Applications can get the changes made to a table using the primary key (also known as the partition key) as is required in CQL. You can then take further actions based on the results. Changes to the rows in the table are captured in the order of their modification time and the sort order per partition key.
+
+The following example shows how to get a change feed on all the rows in an API for Cassandra keyspace's table using .NET. The predicate COSMOS_CHANGEFEED_START_TIME() is used directly within CQL to query items in the change feed from a specified start time (in this case, the current datetime). You can download the full sample for C# [here](/samples/azure-samples/azure-cosmos-db-cassandra-change-feed/cassandra-change-feed/) and for Java [here](https://github.com/Azure-Samples/cosmos-changefeed-cassandra-java).
+
+In each iteration, the query resumes at the last point changes were read, using paging state. We can see a continuous stream of new changes to the table in the keyspace. We will see changes to rows that are inserted or updated. Watching for delete operations using change feed in API for Cassandra is currently not supported.
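+For reference, the raw CQL issued by the samples below looks like the following sketch (the table and the start timestamp are illustrative):
+
+```shell
+SELECT * FROM uprofile.user WHERE COSMOS_CHANGEFEED_START_TIME() = '2022-10-01 00:00:00';
+```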
+
+> [!NOTE]
+> Reusing a token after dropping a collection and then recreating it with the same name results in an error.
+> We advise you to set the pageState to null when creating a new collection and reusing the collection name.
+
+# [Java](#tab/java)
+
+```java
+ Session cassandraSession = utils.getSession();
+
+ try {
+ DateTimeFormatter dtf = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
+ LocalDateTime now = LocalDateTime.now().minusHours(6).minusMinutes(30);
+ String query="SELECT * FROM uprofile.user where COSMOS_CHANGEFEED_START_TIME()='"
+ + dtf.format(now)+ "'";
+
+ byte[] token=null;
+ System.out.println(query);
+ while(true)
+ {
+ SimpleStatement st=new SimpleStatement(query);
+ st.setFetchSize(100);
+ if(token!=null)
+ st.setPagingStateUnsafe(token);
+
+ ResultSet result=cassandraSession.execute(st) ;
+ token=result.getExecutionInfo().getPagingState().toBytes();
+
+ for(Row row:result)
+ {
+ System.out.println(row.getString("user_name"));
+ }
+ }
+ } finally {
+ utils.close();
+ LOGGER.info("Please delete your table after verifying the presence of the data in portal or from CQL");
+ }
+```
+
+# [C#](#tab/csharp)
+
+```C#
+ //set initial start time for pulling the change feed
+ DateTime timeBegin = DateTime.UtcNow;
+
+ //initialise variable to store the continuation token
+ byte[] pageState = null;
+ while (true)
+ {
+ try
+ {
+
+ //Return the latest change for all rows in 'user' table
+ IStatement changeFeedQueryStatement = new SimpleStatement(
+ $"SELECT * FROM uprofile.user where COSMOS_CHANGEFEED_START_TIME() = '{timeBegin.ToString("yyyy-MM-ddTHH:mm:ss.fffZ", CultureInfo.InvariantCulture)}'");
+ if (pageState != null)
+ {
+ changeFeedQueryStatement = changeFeedQueryStatement.SetPagingState(pageState);
+ }
+ Console.WriteLine("getting records from change feed at last page state....");
+ RowSet rowSet = session.Execute(changeFeedQueryStatement);
+
+ //store the continuation token here
+ pageState = rowSet.PagingState;
+
+ List<Row> rowList = rowSet.ToList();
+ if (rowList.Count != 0)
+ {
+ for (int i = 0; i < rowList.Count; i++)
+ {
+ string value = rowList[i].GetValue<string>("user_name");
+ int key = rowList[i].GetValue<int>("user_id");
+ // do something with the data - e.g. compute, forward to another event, function, etc.
+ // here, we just print the user name field
+ Console.WriteLine("user_name: " + value);
+ }
+ }
+ else
+ {
+ Console.WriteLine("zero documents read");
+ }
+ }
+ catch (Exception e)
+ {
+ Console.WriteLine("Exception " + e);
+ }
+ }
+
+```
++
+In order to get the changes to a single row by primary key, you can add the primary key in the query. The following example shows how to track changes for the row where "user_id = 1".
+
+# [C#](#tab/csharp)
+
+```C#
+ //Return the latest change for all row in 'user' table where user_id = 1
+ IStatement changeFeedQueryStatement = new SimpleStatement(
+ $"SELECT * FROM uprofile.user where user_id = 1 AND COSMOS_CHANGEFEED_START_TIME() = '{timeBegin.ToString("yyyy-MM-ddTHH:mm:ss.fffZ", CultureInfo.InvariantCulture)}'");
+
+```
+
+# [Java](#tab/java)
+
+```java
+ String query="SELECT * FROM uprofile.user where user_id=1 and COSMOS_CHANGEFEED_START_TIME()='"
+ + dtf.format(now)+ "'";
+ SimpleStatement st=new SimpleStatement(query);
+```
+
+## Current limitations
+
+The following limitations are applicable when using change feed with API for Cassandra:
+
+* Inserts and updates are currently supported. The delete operation is not yet supported. As a workaround, you can add a soft marker on rows that are being deleted. For example, add a field in the row called "deleted" and set it to "true".
+* Only the last update is persisted (as in the core API for NoSQL); intermediate updates to the entity are not available.
++
+## Error handling
+
+The following error codes and messages are supported when using change feed in API for Cassandra:
+
+* **HTTP error code 429** - When the change feed is rate limited, it returns an empty page.
+
+## Next steps
+
+* [Manage Azure Cosmos DB for Apache Cassandra resources using Azure Resource Manager templates](templates-samples.md)
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB Cassandra API
-description: Azure CLI Samples for Azure Cosmos DB Cassandra API
+ Title: Azure CLI Samples for Azure Cosmos DB for Apache Cassandra
+description: Azure CLI Samples for Azure Cosmos DB for Apache Cassandra
-+ Last updated 08/19/2022 -+
-# Azure CLI samples for Azure Cosmos DB Cassandra API
+# Azure CLI samples for Azure Cosmos DB for Apache Cassandra
-The following tables include links to sample Azure CLI scripts for the Azure Cosmos DB Cassandra API and to sample Azure CLI scripts that apply to all Cosmos DB APIs. Common samples are the same across all APIs.
+The following tables include links to sample Azure CLI scripts for the Azure Cosmos DB for Apache Cassandra and to sample Azure CLI scripts that apply to all Azure Cosmos DB APIs. Common samples are the same across all APIs.
These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
-## Cassandra API Samples
+## API for Cassandra Samples
|Task | Description | |||
-| [Create an Azure Cosmos account, keyspace and table](../scripts/cli/cassandr)| Creates an Azure Cosmos DB account, keyspace, and table for Cassandra API. |
-| [Create a serverless Azure Cosmos account for Cassandra API, keyspace and table](../scripts/cli/cassandr)| Creates a serverless Azure Cosmos DB account, keyspace, and table for Cassandra API. |
-| [Create an Azure Cosmos account, keyspace and table with autoscale](../scripts/cli/cassandr)| Creates an Azure Cosmos DB account, keyspace, and table with autoscale for Cassandra API. |
+| [Create an Azure Cosmos DB account, keyspace and table](../scripts/cli/cassandr)| Creates an Azure Cosmos DB account, keyspace, and table for API for Cassandra. |
+| [Create a serverless Azure Cosmos DB account for API for Cassandra, keyspace and table](../scripts/cli/cassandr)| Creates a serverless Azure Cosmos DB account, keyspace, and table for API for Cassandra. |
+| [Create an Azure Cosmos DB account, keyspace and table with autoscale](../scripts/cli/cassandr)| Creates an Azure Cosmos DB account, keyspace, and table with autoscale for API for Cassandra. |
| [Perform throughput operations](../scripts/cli/cassandr) | Read, update and migrate between autoscale and standard throughput on a keyspace and table.| | [Lock resources from deletion](../scripts/cli/cassandr)| Prevent resources from being deleted with resource locks.| ||| ## Common API Samples
-These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core) API account, but these operations are identical across all database APIs in Cosmos DB.
+These samples apply to all Azure Cosmos DB APIs. These samples use an API for NoSQL account, but these operations are identical across all database APIs in Azure Cosmos DB.
|Task | Description | ||| | [Add or fail over regions](../scripts/cli/common/regions.md) | Add a region, change failover priority, trigger a manual failover.| | [Perform account key operations](../scripts/cli/common/keys.md) | List account keys, read-only keys, regenerate keys and list connection strings.|
-| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.|
-| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.|
-| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create an Azure Cosmos DB account with IP firewall configured.|
+| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create an Azure Cosmos DB account and secure with service-endpoints.|
+| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update an Azure Cosmos DB account to secure with service-endpoints when the subnet is eventually configured.|
| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.| |||
Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure
For Azure CLI samples for other APIs see: - [CLI Samples for Gremlin](../graph/cli-samples.md)-- [CLI Samples for MongoDB API](../mongodb/cli-samples.md)
+- [CLI Samples for API for MongoDB](../mongodb/cli-samples.md)
- [CLI Samples for SQL](../sql/cli-samples.md) - [CLI Samples for Table](../table/cli-samples.md)
cosmos-db Connect Spark Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/connect-spark-configuration.md
Title: Working with Azure Cosmos DB Cassandra API from Spark
-description: This article is the main page for Cosmos DB Cassandra API integration from Spark.
+ Title: Working with Azure Cosmos DB for Apache Cassandra from Spark
+description: This article is the main page for Azure Cosmos DB for Apache Cassandra integration from Spark.
-++ Last updated 09/01/2019-
-# Connect to Azure Cosmos DB Cassandra API from Spark
+# Connect to Azure Cosmos DB for Apache Cassandra from Spark
-This article is one among a series of articles on Azure Cosmos DB Cassandra API integration from Spark. The articles cover connectivity, Data Definition Language(DDL) operations, basic Data Manipulation Language(DML) operations, and advanced Azure Cosmos DB Cassandra API integration from Spark.
+This article is one among a series of articles on Azure Cosmos DB for Apache Cassandra integration from Spark. The articles cover connectivity, Data Definition Language(DDL) operations, basic Data Manipulation Language(DML) operations, and advanced Azure Cosmos DB for Apache Cassandra integration from Spark.
## Prerequisites
-* [Provision an Azure Cosmos DB Cassandra API account.](manage-data-dotnet.md#create-a-database-account)
+* [Provision an Azure Cosmos DB for Apache Cassandra account.](manage-data-dotnet.md#create-a-database-account)
* Provision your choice of Spark environment [[Azure Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal) | [Azure HDInsight-Spark](../../hdinsight/spark/apache-spark-jupyter-spark-sql.md) | Others]. ## Dependencies for connectivity * **Spark connector for Cassandra:**
- Spark connector is used to connect to Azure Cosmos DB Cassandra API. Identify and use the version of the connector located in [Maven central](https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector-assembly) that is compatible with the Spark and Scala versions of your Spark environment. We recommend an environment that supports Spark 3.2.1 or higher, and the spark connector available at maven coordinates `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0`. If using Spark 2.x, we recommend an environment with Spark version 2.4.5, using spark connector at maven coordinates `com.datastax.spark:spark-cassandra-connector_2.11:2.4.3`.
+ Spark connector is used to connect to Azure Cosmos DB for Apache Cassandra. Identify and use the version of the connector located in [Maven central](https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector-assembly) that is compatible with the Spark and Scala versions of your Spark environment. We recommend an environment that supports Spark 3.2.1 or higher, and the spark connector available at maven coordinates `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0`. If using Spark 2.x, we recommend an environment with Spark version 2.4.5, using spark connector at maven coordinates `com.datastax.spark:spark-cassandra-connector_2.11:2.4.3`.
-* **Azure Cosmos DB helper library for Cassandra API:**
+* **Azure Cosmos DB helper library for API for Cassandra:**
If you're using a version Spark 2.x, then in addition to the Spark connector, you need another library called [azure-cosmos-cassandra-spark-helper]( https://search.maven.org/artifact/com.microsoft.azure.cosmosdb/azure-cosmos-cassandra-spark-helper/1.2.0/jar) with maven coordinates `com.microsoft.azure.cosmosdb:azure-cosmos-cassandra-spark-helper:1.2.0` from Azure Cosmos DB in order to handle [rate limiting](./scale-account-throughput.md#handling-rate-limiting-429-errors). This library contains custom connection factory and retry policy classes.
- The retry policy in Azure Cosmos DB is configured to handle HTTP status code 429("Request Rate Large") exceptions. The Azure Cosmos DB Cassandra API translates these exceptions into overloaded errors on the Cassandra native protocol, and you can retry with back-offs. Because Azure Cosmos DB uses provisioned throughput model, request rate limiting exceptions occur when the ingress/egress rates increase. The retry policy protects your spark jobs against data spikes that momentarily exceed the throughput allocated for your container. If using the Spark 3.x connector, implementing this library isn't required.
+ The retry policy in Azure Cosmos DB is configured to handle HTTP status code 429 ("Request Rate Large") exceptions. The Azure Cosmos DB for Apache Cassandra translates these exceptions into overloaded errors on the Cassandra native protocol, and you can retry with back-offs. Because Azure Cosmos DB uses a provisioned throughput model, request rate limiting exceptions occur when the ingress/egress rates increase. The retry policy protects your spark jobs against data spikes that momentarily exceed the throughput allocated for your container. If using the Spark 3.x connector, implementing this library isn't required.
> [!NOTE] > The retry policy can protect your spark jobs against momentary spikes only. If you have not configured enough RUs required to run your workload, then the retry policy is not applicable and the retry policy class rethrows the exception.
-* **Azure Cosmos DB account connection details:** Your Azure Cassandra API account name, account endpoint, and key.
+* **Azure Cosmos DB account connection details:** Your Azure API for Cassandra account name, account endpoint, and key.
## Optimizing Spark connector throughput configuration
The optimal value of these configurations depends on four factors:
- The amount of throughput (Request Units) configured for the table that data is being ingested into. - The number of workers in your Spark cluster. - The number of executors configured for your spark job (which can be controlled using `spark.cassandra.connection.connections_per_executor_max` or `spark.cassandra.connection.remoteConnectionsPerExecutor` depending on Spark version)-- The average latency of each request to Cosmos DB, if you're collocated in the same Data Center. Assume this value to be 10 ms for writes and 3 ms for reads.
+- The average latency of each request to Azure Cosmos DB, if you're collocated in the same Data Center. Assume this value to be 10 ms for writes and 3 ms for reads.
As an example, if we have five workers and a value of `spark.cassandra.output.concurrent.writes`= 1, and a value of `spark.cassandra.connection.remoteConnectionsPerExecutor` = 1, then we have five workers that are concurrently writing into the table, each with one thread. If it takes 10 ms to perform a single write, then we can send 100 requests (1000 milliseconds divided by 10) per second, per thread. With five workers, this would be 500 writes per second. At an average cost of five request units (RUs) per write, the target table would need a minimum 2500 request units provisioned (5 RUs x 500 writes per second).
-Increasing the number of executors can increase the number of threads in a given job, which can in turn increase throughput. However, the exact impact of this can be variable depending on the job, while controlling throughput with number of workers is more deterministic. You can also determine the exact cost of a given request by profiling it to get the Request Unit (RU) charge. This will help you to be more accurate when provisioning throughput for your table or keyspace. Have a look at our article [here](./find-request-unit-charge-cassandra.md) to understand how to get request unit charges at a per request level.
+Increasing the number of executors can increase the number of threads in a given job, which can in turn increase throughput. However, the exact impact of this can be variable depending on the job, while controlling throughput with number of workers is more deterministic. You can also determine the exact cost of a given request by profiling it to get the Request Unit (RU) charge. This will help you to be more accurate when provisioning throughput for your table or keyspace. Have a look at our article [here](./find-request-unit-charge.md) to understand how to get request unit charges at a per request level.
### Scaling throughput in the database The Cassandra Spark connector will saturate throughput in Azure Cosmos DB efficiently. As a result, even with effective retries, you'll need to ensure you have sufficient throughput (RUs) provisioned at the table or keyspace level to prevent rate limiting related errors. The minimum setting of 400 RUs in a given table or keyspace won't be sufficient. Even at minimum throughput configuration settings, the Spark connector can write at a rate corresponding to around **6000 request units** or more.
-If the RU setting required for data movement using Spark is higher than what is required for your steady state workload, you can easily scale throughput up and down systematically in Azure Cosmos DB to meet the needs of your workload for a given time period. Read our article on [elastic scale in Cassandra API](scale-account-throughput.md) to understand the different options for scaling programmatically and dynamically.
+If the RU setting required for data movement using Spark is higher than what is required for your steady state workload, you can easily scale throughput up and down systematically in Azure Cosmos DB to meet the needs of your workload for a given time period. Read our article on [elastic scale in API for Cassandra](scale-account-throughput.md) to understand the different options for scaling programmatically and dynamically.
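+For example, before a large Spark ingestion job you might raise the table throughput and then scale it back down afterwards; a sketch using the `cosmosdb_provisioned_throughput` option (the keyspace and table names are illustrative):
+
+```shell
+ALTER TABLE books_ks.books WITH cosmosdb_provisioned_throughput=10000;
+// after the Spark job completes, scale back down
+ALTER TABLE books_ks.books WITH cosmosdb_provisioned_throughput=4000;
+```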
> [!NOTE] > The guidance above assumes a reasonably uniform distribution of data. If you have a significant skew in the data (that is, an inordinately large number of reads/writes to the same partition key value), then you might still experience bottlenecks, even if you have a large number of [request units](../request-units.md) provisioned in your table. Request units are divided equally among physical partitions, and heavy data skew can cause a bottleneck of requests to a single partition. ## Spark connector throughput configuration parameters
-The following table lists Azure Cosmos DB Cassandra API-specific throughput configuration parameters provided by the connector. For a detailed list of all configuration parameters, see [configuration reference](https://github.com/datastax/spark-cassandra-connector/blob/master/doc/reference.md) page of the Spark Cassandra Connector GitHub repository.
+The following table lists Azure Cosmos DB for Apache Cassandra-specific throughput configuration parameters provided by the connector. For a detailed list of all configuration parameters, see [configuration reference](https://github.com/datastax/spark-cassandra-connector/blob/master/doc/reference.md) page of the Spark Cassandra Connector GitHub repository.
| **Property Name** | **Default value** | **Description** | ||||
The following table lists Azure Cosmos DB Cassandra API-specific throughput conf
| spark.cassandra.connection.connections_per_executor_max (Spark 2.x) spark.cassandra.connection.remoteConnectionsPerExecutor (Spark 3.x) | None | Maximum number of connections per node per executor. 10*n is equivalent to 10 connections per node in an n-node Cassandra cluster. So, if you require five connections per node per executor for a five node Cassandra cluster, then you should set this configuration to 25. Modify this value based on the degree of parallelism or the number of executors that your spark jobs are configured for. | | spark.cassandra.output.concurrent.writes | 100 | Defines the number of parallel writes that can occur per executor. Because you set "batch.size.rows" to 1, make sure to scale up this value accordingly. Modify this value based on the degree of parallelism or the throughput that you want to achieve for your workload. | | spark.cassandra.concurrent.reads | 512 | Defines the number of parallel reads that can occur per executor. Modify this value based on the degree of parallelism or the throughput that you want to achieve for your workload |
-| spark.cassandra.output.throughput_mb_per_sec | None | Defines the total write throughput per executor. This parameter can be used as an upper limit for your spark job throughput, and base it on the provisioned throughput of your Cosmos container. |
-| spark.cassandra.input.reads_per_sec| None | Defines the total read throughput per executor. This parameter can be used as an upper limit for your spark job throughput, and base it on the provisioned throughput of your Cosmos container. |
-| spark.cassandra.output.batch.grouping.buffer.size | 1000 | Defines the number of batches per single spark task that can be stored in memory before sending to Cassandra API |
+| spark.cassandra.output.throughput_mb_per_sec | None | Defines the total write throughput per executor. This parameter can be used as an upper limit for your spark job throughput, and base it on the provisioned throughput of your Azure Cosmos DB container. |
+| spark.cassandra.input.reads_per_sec| None | Defines the total read throughput per executor. This parameter can be used as an upper limit for your spark job throughput, and base it on the provisioned throughput of your Azure Cosmos DB container. |
+| spark.cassandra.output.batch.grouping.buffer.size | 1000 | Defines the number of batches per single spark task that can be stored in memory before sending to API for Cassandra |
| spark.cassandra.connection.keep_alive_ms | 60000 | Defines the period of time until which unused connections are available. |
-Adjust the throughput and degree of parallelism of these parameters based on the workload you expect for your spark jobs, and the throughput you've provisioned for your Cosmos DB account.
+Adjust the throughput and degree of parallelism of these parameters based on the workload you expect for your spark jobs, and the throughput you've provisioned for your Azure Cosmos DB account.
-## Connecting to Azure Cosmos DB Cassandra API from Spark
+## <a id="connecting-to-azure-cosmos-db-cassandra-api-from-spark"></a>Connecting to Azure Cosmos DB for Apache Cassandra from Spark
### cqlsh
-The following commands detail how to connect to Azure Cosmos DB Cassandra API from cqlsh. This is useful for validation as you run through the samples in Spark.<br>
+The following commands detail how to connect to Azure Cosmos DB for Apache Cassandra from cqlsh. This is useful for validation as you run through the samples in Spark.<br>
**From Linux/Unix/Mac:** ```bash
cqlsh.py YOUR-COSMOSDB-ACCOUNT-NAME.cassandra.cosmosdb.azure.com 10350 -u YOUR-C
``` ### 1. Azure Databricks
-The article below covers Azure Databricks cluster provisioning, cluster configuration for connecting to Azure Cosmos DB Cassandra API, and several sample notebooks that cover DDL operations, DML operations and more.<BR>
-[Work with Azure Cosmos DB Cassandra API from Azure Databricks](spark-databricks.md)<BR>
+The article below covers Azure Databricks cluster provisioning, cluster configuration for connecting to Azure Cosmos DB for Apache Cassandra, and several sample notebooks that cover DDL operations, DML operations and more.<BR>
+[Work with Azure Cosmos DB for Apache Cassandra from Azure Databricks](spark-databricks.md)<BR>
### 2. Azure HDInsight-Spark
-The article below covers HDinsight-Spark service, provisioning, cluster configuration for connecting to Azure Cosmos DB Cassandra API, and several sample notebooks that cover DDL operations, DML operations and more.<BR>
-[Work with Azure Cosmos DB Cassandra API from Azure HDInsight-Spark](spark-hdinsight.md)
+The article below covers HDinsight-Spark service, provisioning, cluster configuration for connecting to Azure Cosmos DB for Apache Cassandra, and several sample notebooks that cover DDL operations, DML operations and more.<BR>
+[Work with Azure Cosmos DB for Apache Cassandra from Azure HDInsight-Spark](spark-hdinsight.md)
### 3. Spark environment in general While the sections above were specific to Azure Spark-based PaaS services, this section covers any general Spark environment. Connector dependencies, imports, and Spark session configuration are detailed below. The "Next steps" section covers code samples for DDL operations, DML operations and more.
While the sections above were specific to Azure Spark-based PaaS services, this
#### Connector dependencies: 1. Add the maven coordinates to get the [Cassandra connector for Spark](connect-spark-configuration.md#dependencies-for-connectivity)
-2. Add the maven coordinates for the [Azure Cosmos DB helper library](connect-spark-configuration.md#dependencies-for-connectivity) for Cassandra API
+2. Add the maven coordinates for the [Azure Cosmos DB helper library](connect-spark-configuration.md#dependencies-for-connectivity) for API for Cassandra
#### Imports:
import com.microsoft.azure.cosmosdb.cassandra
## Next steps
-The following articles demonstrate Spark integration with Azure Cosmos DB Cassandra API.
+The following articles demonstrate Spark integration with Azure Cosmos DB for Apache Cassandra.
* [DDL operations](spark-ddl-operations.md) * [Create/insert operations](spark-create-operations.md)
The following articles demonstrate Spark integration with Azure Cosmos DB Cassan
* [Upsert operations](spark-upsert-operations.md) * [Delete operations](spark-delete-operation.md) * [Aggregation operations](spark-aggregation-operations.md)
-* [Table copy operations](spark-table-copy-operations.md)
+* [Table copy operations](spark-table-copy-operations.md)
cosmos-db Consistency Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/consistency-mapping.md
+
+ Title: Apache Cassandra and Azure Cosmos DB consistency levels
+description: Apache Cassandra and Azure Cosmos DB consistency levels.
++++++ Last updated : 03/24/2022+++
+# Apache Cassandra and Azure Cosmos DB for Apache Cassandra consistency levels
+
+Unlike Azure Cosmos DB, Apache Cassandra does not natively provide precisely defined consistency guarantees. Instead, Apache Cassandra provides a write consistency level and a read consistency level, to enable the high availability, consistency, and latency tradeoffs. When using Azure Cosmos DB's API for Cassandra:
+
+* The write consistency level of Apache Cassandra is mapped to the default consistency level configured on your Azure Cosmos DB account. Consistency for a write operation (CL) can't be changed on a per-request basis.
+
+* Azure Cosmos DB will dynamically map the read consistency level specified by the Cassandra client driver to one of the Azure Cosmos DB consistency levels configured dynamically on a read request (see the sketch after this list).
+
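+The following is a minimal sketch (not part of this article's samples) of how a client might set a per-request read consistency level, assuming the DataStax Java driver 4.x. The keyspace, table, and column names are hypothetical, and connection settings (contact point, SSL, credentials) are omitted for brevity:
+
+```java
+// Minimal sketch: a driver-specified read consistency level that Azure Cosmos DB
+// maps dynamically on the read request. Names below are placeholders.
+import com.datastax.oss.driver.api.core.CqlSession;
+import com.datastax.oss.driver.api.core.DefaultConsistencyLevel;
+import com.datastax.oss.driver.api.core.cql.SimpleStatement;
+
+public class ReadConsistencyExample {
+    public static void main(String[] args) {
+        try (CqlSession session = CqlSession.builder().build()) {
+            SimpleStatement read = SimpleStatement
+                .newInstance("SELECT * FROM mykeyspace.mytable WHERE id = ?", 1)
+                .setConsistencyLevel(DefaultConsistencyLevel.LOCAL_QUORUM);
+            session.execute(read);
+        }
+    }
+}
+```
+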
+## Multi-region writes vs single-region writes
+
+Apache Cassandra database is a multi-master system by default, and does not provide an out-of-box option for single-region writes with multi-region replication for reads. However, Azure Cosmos DB provides turnkey ability to have either single region, or [multi-region](../how-to-multi-master.md) write configurations. One of the advantages of being able to choose a single region write configuration across multiple regions is the avoidance of cross-region conflict scenarios, and the option of maintaining strong consistency across multiple regions.
+
+With single-region writes, you can maintain strong consistency, while still maintaining a level of high availability across regions with [service-managed failover](../high-availability.md#region-outages). In this configuration, you can still exploit data locality to reduce read latency by downgrading to eventual consistency on a per request basis. In addition to these capabilities, the Azure Cosmos DB platform also provides the ability to enable [zone redundancy](/azure/architecture/reliability/architect) when selecting a region. Thus, unlike native Apache Cassandra, Azure Cosmos DB allows you to navigate the CAP Theorem [trade-off spectrum](../consistency-levels.md#rto) with more granularity.
+
+## Mapping consistency levels
+
+The Azure Cosmos DB platform provides a set of five well-defined, business use-case oriented consistency settings with respect to replication and the tradeoffs defined by the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem) and [PACELC theorem](https://en.wikipedia.org/wiki/PACELC_theorem). As this approach differs significantly from Apache Cassandra, we would recommend that you take time to review and understand [Azure Cosmos DB consistency](../consistency-levels.md), or watch this short [video guide to understanding consistency settings](https://aka.ms/docs.consistency-levels) in the Azure Cosmos DB platform.
+
+The following table illustrates the possible mappings between Apache Cassandra and Azure Cosmos DB consistency levels when using API for Cassandra. This shows configurations for single region, multi-region reads with single-region writes, and multi-region writes.
+
+> [!NOTE]
+> These are not exact mappings. Rather, we have provided the closest analogues to Apache Cassandra, and disambiguated any qualitative differences in the rightmost column. As mentioned above, we recommend reviewing Azure Cosmos DB's [consistency settings](../consistency-levels.md).
+++
+If your Azure Cosmos DB account is configured with a consistency level other than strong consistency, you can find out the probability that your clients may get strong and consistent reads for your workloads by looking at the *Probabilistically Bounded Staleness* (PBS) metric. This metric is exposed in the Azure portal. To learn more, see [Monitor Probabilistically Bounded Staleness (PBS) metric](../how-to-manage-consistency.md#monitor-probabilistically-bounded-staleness-pbs-metric).
+
+Probabilistically bounded staleness shows how eventual your eventual consistency is. This metric provides an insight into how often you can get a stronger consistency than the consistency level that you have currently configured on your Azure Cosmos DB account. In other words, you can see the probability (measured in milliseconds) of getting strongly consistent reads for a combination of write and read regions.
+
+## Next steps
+
+Learn more about global distribution and consistency levels for Azure Cosmos DB:
+
+* [Global distribution overview](../distribute-data-globally.md)
+* [Consistency Level overview](../consistency-levels.md)
+* [High availability](../high-availability.md)
cosmos-db Create Account Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/create-account-java.md
Title: 'Tutorial: Build Java app to create Azure Cosmos DB Cassandra API account'
-description: This tutorial shows how to create a Cassandra API account, add a database (also called a keyspace), and add a table to that account by using a Java application.
+ Title: 'Tutorial: Build Java app to create Azure Cosmos DB for Apache Cassandra account'
+description: This tutorial shows how to create an API for Cassandra account, add a database (also called a keyspace), and add a table to that account by using a Java application.
-+ Last updated 12/06/2018 ms.devlang: java-+ #Customer intent: As a developer, I want to build a Java application to access and manage Azure Cosmos DB resources so that customers can store key/value data and utilize the global distribution, elastic scaling, multi-region writes, and other capabilities offered by Azure Cosmos DB.
-# Tutorial: Create a Cassandra API account in Azure Cosmos DB by using a Java application to store key/value data
+# Tutorial: Create an API for Cassandra account in Azure Cosmos DB by using a Java application to store key/value data
-As a developer, you might have applications that use key/value pairs. You can use a Cassandra API account in Azure Cosmos DB to store the key/value data. This tutorial describes how to use a Java application to create a Cassandra API account in Azure Cosmos DB, add a database (also called a keyspace), and add a table. The Java application uses the [Java driver](https://github.com/datastax/java-driver) to create a user database that contains details such as user ID, user name, and user city.
+As a developer, you might have applications that use key/value pairs. You can use an API for Cassandra account in Azure Cosmos DB to store the key/value data. This tutorial describes how to use a Java application to create an API for Cassandra account in Azure Cosmos DB, add a database (also called a keyspace), and add a table. The Java application uses the [Java driver](https://github.com/datastax/java-driver) to create a user database that contains details such as user ID, user name, and user city.
This tutorial covers the following tasks:
This tutorial covers the following tasks:
Get the connection string information from the Azure portal, and copy it into the Java configuration file. The connection string enables your app to communicate with your hosted database.
-1. From the [Azure portal](https://portal.azure.com/), go to your Azure Cosmos account.
+1. From the [Azure portal](https://portal.azure.com/), go to your Azure Cosmos DB account.
2. Open the **Connection String** pane.
Use the following steps to build the sample from scratch:
3. Under the `cassandra-demo\src\main` folder, create a new folder named `resources`. Under the resources folder, add the config.properties and log4j.properties files:
- - The [config.properties](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-java-getting-started/blob/main/src/main/resources/config.properties) file stores the connection endpoint and key values of the Cassandra API account.
+ - The [config.properties](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-java-getting-started/blob/main/src/main/resources/config.properties) file stores the connection endpoint and key values of the API for Cassandra account.
- - The [log4j.properties](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-java-getting-started/blob/main/src/main/resources/log4j.properties) file defines the level of logging required for interacting with the Cassandra API.
+ - The [log4j.properties](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-java-getting-started/blob/main/src/main/resources/log4j.properties) file defines the level of logging required for interacting with the API for Cassandra.
-4. Browse to the `src/main/java/com/azure/cosmosdb/cassandra/` folder. Within the cassandra folder, create another folder named `utils`. The new folder stores the utility classes required to connect to the Cassandra API account.
+4. Browse to the `src/main/java/com/azure/cosmosdb/cassandra/` folder. Within the cassandra folder, create another folder named `utils`. The new folder stores the utility classes required to connect to the API for Cassandra account.
- Add the [CassandraUtils](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-java-getting-started/blob/main/src/main/java/com/azure/cosmosdb/cassandra/util/CassandraUtils.java) class to create the cluster and to open and close Cassandra sessions. The cluster connects to the Cassandra API account in Azure Cosmos DB and returns a session to access. Use the [Configurations](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-java-getting-started/blob/main/src/main/java/com/azure/cosmosdb/cassandra/util/Configurations.java) class to read connection string information from the config.properties file.
+   Add the [CassandraUtils](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-java-getting-started/blob/main/src/main/java/com/azure/cosmosdb/cassandra/util/CassandraUtils.java) class to create the cluster and to open and close Cassandra sessions. The cluster connects to the API for Cassandra account in Azure Cosmos DB and returns a session for access. Use the [Configurations](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-java-getting-started/blob/main/src/main/java/com/azure/cosmosdb/cassandra/util/Configurations.java) class to read connection string information from the config.properties file. A minimal connection sketch is shown after this list.
5. The Java sample creates a database with user information such as user name, user ID, and user city. You need to define get and set methods to access user details in the main function.
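+
+The following is a minimal connection sketch only. It assumes the DataStax Java driver 3.x (`cassandra-driver-core`); the placeholder values are illustrative stand-ins for what the sample reads from config.properties, so refer to the linked CassandraUtils and Configurations classes for the exact implementation.
+
+```java
+import com.datastax.driver.core.Cluster;
+import com.datastax.driver.core.Session;
+
+public class CassandraConnectionSketch {
+    public static void main(String[] args) {
+        // Placeholder values; in the sample these are read from config.properties.
+        String host = "YOUR-COSMOSDB-ACCOUNT-NAME.cassandra.cosmosdb.azure.com";
+        int port = 10350;
+        String username = "YOUR-COSMOSDB-ACCOUNT-NAME";
+        String password = "YOUR-COSMOSDB-ACCOUNT-KEY";
+
+        Cluster cluster = Cluster.builder()
+                .addContactPoint(host)
+                .withPort(port)
+                .withCredentials(username, password)
+                .withSSL() // Azure Cosmos DB for Apache Cassandra requires SSL
+                .build();
+        Session session = cluster.connect();
+        session.close();
+        cluster.close();
+    }
+}
+```
+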
This section describes how to add a database (keyspace) and a table, by using CQ
## Next steps
-In this tutorial, you've learned how to create a Cassandra API account in Azure Cosmos DB, a database, and a table by using a Java application. You can now proceed to the next article:
+In this tutorial, you've learned how to create an API for Cassandra account in Azure Cosmos DB, a database, and a table by using a Java application. You can now proceed to the next article:
> [!div class="nextstepaction"]
-> [load sample data to the Cassandra API table](load-data-table.md).
+> [load sample data to the API for Cassandra table](load-data-table.md).
cosmos-db Diagnostic Queries Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/diagnostic-queries-cassandra.md
- Title: Troubleshoot issues with advanced diagnostics queries for Cassandra API-
-description: Learn how to use Azure Log Analytics to improve the performance and health of your Azure Cosmos DB Cassandra API account.
---- Previously updated : 06/12/2021---
-# Troubleshoot issues with advanced diagnostics queries for the Cassandra API
--
-> [!div class="op_single_selector"]
-> * [SQL (Core) API](../cosmos-db-advanced-queries.md)
-> * [MongoDB API](../mongodb/diagnostic-queries-mongodb.md)
-> * [Cassandra API](diagnostic-queries-cassandra.md)
-> * [Gremlin API](../queries-gremlin.md)
--
-In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB Cassansra API account by using diagnostics logs sent to **resource-specific** tables.
-
-For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
-
-For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
--- Makes it much easier to work with the data. -- Provides better discoverability of the schemas.-- Improves performance across both ingestion latency and query times.--
-## Prerequisites
--- Create [Cassandra API account](create-account-java.md)-- Create a [Log Analytics Workspace](../../azure-monitor/logs/quick-create-workspace.md).-- Create [Diagnostic Settings](../cosmosdb-monitor-resource-logs.md).-
-> [!WARNING]
-> When creating a Diagnostic Setting for the Cassandra API account, ensure that "DataPlaneRequests" remain unselected. In addition, for the Destination table, ensure "Resource specific" is chosen as it offers significant cost savings over "Azure diagnostics.
-
-> [!NOTE]
-> Note that enabling full text diagnostics, the queries returned will contain PII data.
-> This feature will not only log the skeleton of the query with obfuscated parameters but log the values of the parameters themselves.
-> This can help in diagnosing whether queries on a specific Primary Key (or set of Primary Keys) are consuming far more RUs than queries on other Primary Keys.
-
-## Log Analytics queries with different scenarios
--
-### RU consumption
-- Cassandra operations that are consuming high RU/s.
-```kusto
-CDBCassandraRequests
-| where DatabaseName=="azure_comos" and CollectionName=="user"
-| project TimeGenerated, RequestCharge, OperationName,
-requestType=split(split(PIICommandText,'"')[3], ' ')[0]
-| summarize max(RequestCharge) by bin(TimeGenerated, 10m), tostring(requestType), OperationName;
-```
--- Monitoring RU consumption per operation on logical partition keys.
-```kusto
-CDBPartitionKeyRUConsumption
-| where DatabaseName=="azure_comos" and CollectionName=="user"
-| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
-| order by TotalRequestCharge;
-
-CDBPartitionKeyRUConsumption
-| where DatabaseName=="azure_comos" and CollectionName=="user"
-| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by OperationName, PartitionKey
-| order by TotalRequestCharge;
-
-CDBPartitionKeyRUConsumption
-| where DatabaseName=="azure_comos" and CollectionName=="user"
-| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by bin(TimeGenerated, 1m), PartitionKey
-| render timechart;
-```
--- What are the top queries impacting RU consumption?
-```kusto
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| where TimeGenerated > ago(24h)
-| project ActivityId, DatabaseName, CollectionName, queryText=split(split(PIICommandText,'"')[3], ' ')[0], RequestCharge, TimeGenerated
-| order by RequestCharge desc;
-```
-- RU consumption based on variations in payload sizes for read and write operations.
-```kusto
-// This query is looking at read operations
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
-| where cassandraOperationName =="SELECT"
-| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
-
-// This query is looking at write operations
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
-| where cassandraOperationName in ("CREATE", "UPDATE", "INSERT", "DELETE", "DROP")
-| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
-
-// Write operations over a time period.
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
-| where cassandraOperationName in ("CREATE", "UPDATE", "INSERT", "DELETE", "DROP")
-| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
-| render timechart;
-
-// Read operations over a time period.
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
-| where cassandraOperationName =="SELECT"
-| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
-| render timechart;
-```
--- RU consumption based on read and write operations by logical partition.
-```kusto
-CDBPartitionKeyRUConsumption
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| where OperationName in ("Delete", "Read", "Upsert")
-| summarize totalRU=max(RequestCharge) by OperationName, PartitionKeyRangeId
-```
--- RU consumption by physical and logical partition.
-```kusto
-CDBPartitionKeyRUConsumption
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| summarize totalRequestCharge=sum(RequestCharge) by PartitionKey, PartitionKeyRangeId;
-```
--- Is a hot partition leading to high RU consumption?
-```kusto
-CDBPartitionKeyStatistics
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| where TimeGenerated > now(-8h)
-| summarize StorageUsed = sum(SizeKb) by PartitionKey
-| order by StorageUsed desc
-```
--- How does the partition key affect RU consumption?
-```kusto
-let storageUtilizationPerPartitionKey =
-CDBPartitionKeyStatistics
-| project AccountName=tolower(AccountName), PartitionKey, SizeKb;
-CDBCassandraRequests
-| project AccountName=tolower(AccountName),RequestCharge, ErrorCode, OperationName, ActivityId, DatabaseName, CollectionName, PIICommandText, RegionName
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| join kind=inner storageUtilizationPerPartitionKey on $left.AccountName==$right.AccountName
-| where ErrorCode != -1 //successful
-| project AccountName, PartitionKey,ErrorCode,RequestCharge,SizeKb, OperationName, ActivityId, DatabaseName, CollectionName, PIICommandText, RegionName;
-```
-
-### Latency
-- Number of server-side timeouts (Status Code - 408) seen in the time window.
-```kusto
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| where ErrorCode in (4608, 4352) //Corresponding code in Cassandra
-| summarize max(DurationMs) by bin(TimeGenerated, 10m), ErrorCode
-| render timechart;
-```
--- Do we observe spikes in server-side latencies in the specified time window?
-```kusto
-CDBCassandraRequests
-| where TimeGenerated > now(-6h)
-| DatabaseName=="azure_cosmos" and CollectionName=="user"
-| summarize max(DurationMs) by bin(TimeGenerated, 10m)
-| render timechart;
-```
--- Operations that are getting throttled.
-```kusto
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| project RequestLength, ResponseLength,
-RequestCharge, DurationMs, TimeGenerated, OperationName,
-query=split(split(PIICommandText,'"')[3], ' ')[0]
-| summarize max(DurationMs) by bin(TimeGenerated, 10m), RequestCharge, tostring(query),
-RequestLength, OperationName
-| order by RequestLength, RequestCharge;
-```
-
-### Throttling
-- Is your application experiencing any throttling?
-```kusto
-CDBCassandraRequests
-| where RetriedDueToRateLimiting != false and RateLimitingDelayMs > 0;
-```
-- What queries are causing your application to throttle with a specified time period looking specifically at 429.
-```kusto
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| where ErrorCode==4097 // Corresponding error code in Cassandra
-| project DatabaseName , CollectionName , CassandraCommands=split(split(PIICommandText,'"')[3], ' ')[0] , OperationName, TimeGenerated;
-```
--
-## Next steps
-- Enable [log analytics](../../azure-monitor/logs/log-analytics-overview.md) on your Cassandra API account.-- Overview [error code definition](error-codes-solution.md).
cosmos-db Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/diagnostic-queries.md
+
+ Title: Troubleshoot issues with advanced diagnostics queries for API for Cassandra
+
+description: Learn how to use Azure Log Analytics to improve the performance and health of your Azure Cosmos DB for Apache Cassandra account.
+++++ Last updated : 06/12/2021+++
+# Troubleshoot issues with advanced diagnostics queries for the API for Cassandra
++
+> [!div class="op_single_selector"]
+> * [API for NoSQL](../advanced-queries.md)
+> * [API for MongoDB](../mongodb/diagnostic-queries.md)
+> * [API for Cassandra](diagnostic-queries.md)
+> * [API for Gremlin](../queries-gremlin.md)
++
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB for Apache Cassandra account by using diagnostics logs sent to **resource-specific** tables.
+
+For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+
+For [resource-specific tables](../monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
+
+- Makes it much easier to work with the data.
+- Provides better discoverability of the schemas.
+- Improves performance across both ingestion latency and query times.
++
+## Prerequisites
+
+- Create an [API for Cassandra account](create-account-java.md).
+- Create a [Log Analytics Workspace](../../azure-monitor/logs/quick-create-workspace.md).
+- Create [Diagnostic Settings](../monitor-resource-logs.md).
+
+> [!WARNING]
+> When creating a Diagnostic Setting for the API for Cassandra account, ensure that "DataPlaneRequests" remains unselected. In addition, for the Destination table, ensure "Resource specific" is chosen as it offers significant cost savings over "Azure diagnostics".
+
+> [!NOTE]
+> Note that when you enable full-text diagnostics, the queries returned will contain PII data.
+> This feature will not only log the skeleton of the query with obfuscated parameters but log the values of the parameters themselves.
+> This can help in diagnosing whether queries on a specific Primary Key (or set of Primary Keys) are consuming far more RUs than queries on other Primary Keys.
+
+## Log Analytics queries with different scenarios
++
+### RU consumption
+- Cassandra operations that are consuming high RU/s.
+```kusto
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| project TimeGenerated, RequestCharge, OperationName,
+requestType=split(split(PIICommandText,'"')[3], ' ')[0]
+| summarize max(RequestCharge) by bin(TimeGenerated, 10m), tostring(requestType), OperationName;
+```
+
+- Monitoring RU consumption per operation on logical partition keys.
+```kusto
+CDBPartitionKeyRUConsumption
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
+| order by TotalRequestCharge;
+
+CDBPartitionKeyRUConsumption
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by OperationName, PartitionKey
+| order by TotalRequestCharge;
+
+CDBPartitionKeyRUConsumption
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by bin(TimeGenerated, 1m), PartitionKey
+| render timechart;
+```
+
+- What are the top queries impacting RU consumption?
+```kusto
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| where TimeGenerated > ago(24h)
+| project ActivityId, DatabaseName, CollectionName, queryText=split(split(PIICommandText,'"')[3], ' ')[0], RequestCharge, TimeGenerated
+| order by RequestCharge desc;
+```
+- RU consumption based on variations in payload sizes for read and write operations.
+```kusto
+// This query is looking at read operations
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+| where cassandraOperationName =="SELECT"
+| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
+
+// This query is looking at write operations
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+| where cassandraOperationName in ("CREATE", "UPDATE", "INSERT", "DELETE", "DROP")
+| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
+
+// Write operations over a time period.
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+| where cassandraOperationName in ("CREATE", "UPDATE", "INSERT", "DELETE", "DROP")
+| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
+| render timechart;
+
+// Read operations over a time period.
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+| where cassandraOperationName =="SELECT"
+| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
+| render timechart;
+```
+
+- RU consumption based on read and write operations by logical partition.
+```kusto
+CDBPartitionKeyRUConsumption
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| where OperationName in ("Delete", "Read", "Upsert")
+| summarize totalRU=max(RequestCharge) by OperationName, PartitionKeyRangeId
+```
+
+- RU consumption by physical and logical partition.
+```kusto
+CDBPartitionKeyRUConsumption
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| summarize totalRequestCharge=sum(RequestCharge) by PartitionKey, PartitionKeyRangeId;
+```
+
+- Is a hot partition leading to high RU consumption?
+```kusto
+CDBPartitionKeyStatistics
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| where TimeGenerated > now(-8h)
+| summarize StorageUsed = sum(SizeKb) by PartitionKey
+| order by StorageUsed desc
+```
+
+- How does the partition key affect RU consumption?
+```kusto
+let storageUtilizationPerPartitionKey =
+CDBPartitionKeyStatistics
+| project AccountName=tolower(AccountName), PartitionKey, SizeKb;
+CDBCassandraRequests
+| project AccountName=tolower(AccountName),RequestCharge, ErrorCode, OperationName, ActivityId, DatabaseName, CollectionName, PIICommandText, RegionName
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| join kind=inner storageUtilizationPerPartitionKey on $left.AccountName==$right.AccountName
+| where ErrorCode != -1 //successful
+| project AccountName, PartitionKey,ErrorCode,RequestCharge,SizeKb, OperationName, ActivityId, DatabaseName, CollectionName, PIICommandText, RegionName;
+```
+
+### Latency
+- Number of server-side timeouts (Status Code - 408) seen in the time window.
+```kusto
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| where ErrorCode in (4608, 4352) //Corresponding code in Cassandra
+| summarize max(DurationMs) by bin(TimeGenerated, 10m), ErrorCode
+| render timechart;
+```
+
+- Do we observe spikes in server-side latencies in the specified time window?
+```kusto
+CDBCassandraRequests
+| where TimeGenerated > now(-6h)
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| summarize max(DurationMs) by bin(TimeGenerated, 10m)
+| render timechart;
+```
+
+- Operations that are getting throttled.
+```kusto
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| project RequestLength, ResponseLength,
+RequestCharge, DurationMs, TimeGenerated, OperationName,
+query=split(split(PIICommandText,'"')[3], ' ')[0]
+| summarize max(DurationMs) by bin(TimeGenerated, 10m), RequestCharge, tostring(query),
+RequestLength, OperationName
+| order by RequestLength, RequestCharge;
+```
+
+### Throttling
+- Is your application experiencing any throttling?
+```kusto
+CDBCassandraRequests
+| where RetriedDueToRateLimiting != false and RateLimitingDelayMs > 0;
+```
+- Which queries are causing your application to throttle within a specified time period, looking specifically at 429 errors?
+```kusto
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| where ErrorCode==4097 // Corresponding error code in Cassandra
+| project DatabaseName , CollectionName , CassandraCommands=split(split(PIICommandText,'"')[3], ' ')[0] , OperationName, TimeGenerated;
+```
++
+## Next steps
+- Enable [log analytics](../../azure-monitor/logs/log-analytics-overview.md) on your API for Cassandra account.
+- Review the [error code definitions](error-codes-solution.md).
cosmos-db Driver Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/driver-extensions.md
+
+ Title: Azure Cosmos DB Extension Driver Recommended settings
+description: Learn about Azure Cosmos DB for Apache Cassandra extension driver and the recommended settings.
+++++ Last updated : 01/27/2022+++
+# Azure Cosmos DB for Apache Cassandra driver extension
+
+Azure Cosmos DB offers a driver extension for DataStax Java Driver 3 and 4. These driver extensions provide developers with different features to help improve the performance and reliability of your application and optimize your workloads on Azure Cosmos DB.
+
+This article focuses on Java v4 of the DataStax Java Driver. The extension can be implemented without any changes to your code; you only need to update the `pom.xml` and `application.conf` files. We also share the default values for all configuration options set by the Azure Cosmos DB Cassandra extensions and the cases in which you might want to override them.
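+
+As a brief illustration (an assumption about typical usage, not a snippet from the extension's documentation), the DataStax Java driver 4 loads `application.conf` from the classpath automatically, so once the extension dependency and your configuration overrides are in place, creating the session requires no code changes:
+
+```java
+import com.datastax.oss.driver.api.core.CqlSession;
+import com.datastax.oss.driver.api.core.cql.Row;
+
+public class SessionFromConfig {
+    public static void main(String[] args) {
+        // The driver merges reference.conf defaults (including those shipped in the
+        // extension jar) with the overrides in your classpath application.conf.
+        try (CqlSession session = CqlSession.builder().build()) {
+            Row row = session.execute("SELECT release_version FROM system.local").one();
+            System.out.println("Connected, release_version = " + row.getString("release_version"));
+        }
+    }
+}
+```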
++
+## Recommended settings for Java SDK
+The following settings are specifically for Cassandra client driver Java version 4.
+
+### Authentication
+`PlainTextAuthProvider` is used by default, because Azure Cosmos DB for Apache Cassandra requires authentication and uses plain text authentication.
+
+```java
+ auth-provider {
+ class = PlainTextAuthProvider
+ }
+```
+
+### Connection
+Azure Cosmos DB load-balances requests against a large number of backend nodes. The default settings in the extension for local and remote node sizes work well in development, test, and low-volume production or staging environments. In high-volume environments, you should consider increasing these values to 50 or 100.
+```java
+ connection {
+ pool {
+ local {
+ size = 10
+ }
+ remote {
+ size = 10
+ }
+ }
+ }
+```
+
+### Token map
+The session token map is used internally by the driver to send requests to the optimal coordinator when token-aware routing is enabled. This is an effective optimization when you are connected to an Apache Cassandra instance. It is irrelevant and generates spurious error messages when you are connected to an Azure Cosmos DB Cassandra endpoint. Hence, we recommend disabling the session token map when you are connected to an Azure Cosmos DB for Apache Cassandra instance.
+```yml
+ metadata {
+ token-map {
+ enabled = false
+ }
+ }
+```
+
+### Reconnection policy
+We recommend using the `ConstantReconnectionPolicy` for API for Cassandra, with a `base-delay` of 2 seconds.
+
+```java
+ reconnection-policy {
+ class = ConstantReconnectionPolicy
+ base-delay = 2 second
+ }
+```
+
+### Retry policy
+The default retry policy in the Java Driver does not handle the `OverLoadedException`. We have created a custom policy for API for Cassandra to help handle this exception.
+The parameters for the retry policy are defined within the [reference.conf](https://github.com/Azure/azure-cosmos-cassandra-extensions/blob/release/java-driver-4/1.1.2/driver-4/src/main/resources/reference.conf) of the Azure Cosmos DB extension.
+
+```java
+ retry-policy {
+ class = com.azure.cosmos.cassandra.CosmosRetryPolicy
+ max-retries = 5
+ fixed-backoff-time = 5000
+ growing-backoff-time = 1000
+ }
+```
+
+### Balancing policy and preferred regions
+The default load balancing policy in the v4 driver restricts application-level failover, and specifying a single local datacenter for the `CqlSession` object is required by the policy. This provides a good out-of-box experience for communicating with Azure Cosmos DB for Apache Cassandra instances. In addition to setting the load balancing policy, you can configure failover to specified regions in a multi-region-writes deployment, in case of regional outages, by using the `preferred-regions` parameter.
+
+```java
+ load-balancing-policy {
+ multi-region-writes=false
+ preferred-regions=["Australia East","UK West"]
+}
+```
+
+### SSL connection and timeouts
+The `DefaultSslEngineFactory` is used by default, because Azure Cosmos DB for Apache Cassandra requires SSL:
+```java
+ ssl-engine-factory {
+ class = DefaultSslEngineFactory
+ }
+```
+A request timeout of 60 seconds provides a better out-of-box experience than the default value of 2 seconds. Adjust this value up or down based on your workload and the throughput provisioned for your Azure Cosmos DB for Apache Cassandra account. The more throughput you provide, the lower you might set this value.
+``` java
+ request {
+ timeout = "60 seconds"
+ }
+```
++
+## Next steps
+- [Create an Azure Cosmos DB for Apache Cassandra Account](create-account-java.md)
+- [Implement Azure Cosmos DB for Apache Cassandra Extensions](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4)
cosmos-db Error Codes Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/error-codes-solution.md
Title: Server diagnostics for Azure Cosmos DB Cassandra API
-description: This article explains some common error codes in Azure Cosmos DB's Cassandra API and how to troubleshoot using Log Analytics
+ Title: Server diagnostics for Azure Cosmos DB for Apache Cassandra
+description: This article explains some common error codes in Azure Cosmos DB's API for Cassandra and how to troubleshoot using Log Analytics
-+ Last updated 10/12/2021-+
-# Server diagnostics for Azure Cosmos DB Cassandra API
+# Server diagnostics for Azure Cosmos DB for Apache Cassandra
-Log Analytics is a tool in the Azure portal that helps you run server diagnostics on your Cassandra API account. Run log queries from data collected by Azure Monitor Logs and interactively analyze their results. Records retrieved from Log Analytics queries help provide various insights into your data.
+Log Analytics is a tool in the Azure portal that helps you run server diagnostics on your API for Cassandra account. Run log queries from data collected by Azure Monitor Logs and interactively analyze their results. Records retrieved from Log Analytics queries help provide various insights into your data.
## Prerequisites - Create a [Log Analytics Workspace](../../azure-monitor/logs/quick-create-workspace.md).-- Create [Diagnostic Settings](../cosmosdb-monitor-resource-logs.md).-- Start [log analytics](../../azure-monitor/logs/log-analytics-overview.md) on your Cassandra API account.
+- Create [Diagnostic Settings](../monitor-resource-logs.md).
+- Start [log analytics](../../azure-monitor/logs/log-analytics-overview.md) on your API for Cassandra account.
## Use Log Analytics After you've completed the log analytics setup, you can begin to explore your logs to gain more insights. ### Explore Data Plane Operations
-Use the CDBCassandraRequests table to see data plane operations specifically for your Cassandra API account. A sample query to see the topN(10) consuming request and get detailed information on each request made.
+Use the CDBCassandraRequests table to see data plane operations specifically for your API for Cassandra account. The following sample query shows the top 10 RU-consuming requests and detailed information on each request made.
```Kusto CDBCassandraRequests
CDBPartitionKeyRUConsumption
``` ### Explore Control Plane Operations
+The CDBControlPlaneRequests table contains details on control plane operations, specifically for API for Cassandra accounts.
+The CBDControlPlaneRequests table contains details on control plane operations, specifically for API for Cassandra accounts.
```Kusto CDBControlPlaneRequests
CDBControlPlaneRequests
## Next steps - Learn more about [Log Analytics](../../azure-monitor/logs/log-analytics-tutorial.md).-- Learn how to [migrate from native Apache Cassandra to Azure Cosmos DB Cassandra API](migrate-data-databricks.md).
+- Learn how to [migrate from native Apache Cassandra to Azure Cosmos DB for Apache Cassandra](migrate-data-databricks.md).
cosmos-db Find Request Unit Charge Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/find-request-unit-charge-cassandra.md
- Title: Find request unit (RU) charge for a Cassandra API query in Azure Cosmos DB
-description: Learn how to find the request unit (RU) charge for Cassandra queries executed against an Azure Cosmos container. You can use the Azure portal, .NET and Java drivers to find the RU charge.
----- Previously updated : 10/14/2020--
-# Find the request unit charge for operations executed in Azure Cosmos DB Cassandra API
-
-Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
-
-The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and it's considerations](../request-units.md) article.
-
-This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Cassandra API. If you are using a different API, see [API for MongoDB](../mongodb/find-request-unit-charge-mongodb.md), [SQL API](../find-request-unit-charge.md), [Gremlin API](../find-request-unit-charge-gremlin.md), and [Table API](../table/find-request-unit-charge.md) articles to find the RU/s charge.
-
-When you perform operations against the Azure Cosmos DB Cassandra API, the RU charge is returned in the incoming payload as a field named `RequestCharge`. You have multiple options for retrieving the RU charge.
-
-## Use a Cassandra Driver
-
-### [.NET Driver](#tab/dotnet-driver)
-
-When you use the [.NET SDK](https://www.nuget.org/packages/CassandraCSharpDriver/), you can retrieve the incoming payload under the `Info` property of a `RowSet` object:
-
-```csharp
-RowSet rowSet = session.Execute("SELECT table_name FROM system_schema.tables;");
-double requestCharge = BitConverter.ToDouble(rowSet.Info.IncomingPayload["RequestCharge"].Reverse().ToArray(), 0);
-```
-
-For more information, see [Quickstart: Build a Cassandra app by using the .NET SDK and Azure Cosmos DB](manage-data-dotnet.md).
-
-### [Java Driver](#tab/java-driver)
-
-When you use the [Java SDK](https://mvnrepository.com/artifact/com.datastax.cassandra/cassandra-driver-core), you can retrieve the incoming payload by calling the `getExecutionInfo()` method on a `ResultSet` object:
-
-```java
-ResultSet resultSet = session.execute("SELECT table_name FROM system_schema.tables;");
-Double requestCharge = resultSet.getExecutionInfo().getIncomingPayload().get("RequestCharge").getDouble();
-```
-
-For more information, see [Quickstart: Build a Cassandra app by using the Java SDK and Azure Cosmos DB](manage-data-java.md).
-
-### [GOCQL Driver](#tab/gocql-driver)
-
-When you use the [GOCQL driver](https://github.com/gocql/gocql), you can retrieve the incoming payload by calling the `GetCustomPayload()` method on a [`Iter`](https://pkg.go.dev/github.com/gocql/gocql#Iter) type:
-
-```go
-query := session.Query(fmt.Sprintf("SELECT * FROM <keyspace.table> where <value> = ?", keyspace, table)).Bind(<value>)
-iter := query.Iter()
-requestCharge := iter.GetCustomPayload()["RequestCharge"]
-requestChargeBits := binary.BigEndian.Uint64(requestCharge)
-requestChargeValue := math.Float64frombits(requestChargeBits)
-fmt.Printf("%v\n", requestChargeValue)
-```
-
-For more information, see [Quickstart: Build a Cassandra app by using GOCQL and Azure Cosmos DB](manage-data-go.md).
--
-## Next steps
-
-To learn about optimizing your RU consumption, see these articles:
-
-* [Request units and throughput in Azure Cosmos DB](../request-units.md)
-* [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)
-* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/find-request-unit-charge.md
+
+ Title: Find request unit (RU) charge for an API for Cassandra query in Azure Cosmos DB
+description: Learn how to find the request unit (RU) charge for Cassandra queries executed against an Azure Cosmos DB container. You can use the Azure portal, .NET and Java drivers to find the RU charge.
+++++ Last updated : 10/14/2020
+ms.devlang: csharp, java, golang
++
+# Find the request unit charge for operations executed in Azure Cosmos DB for Apache Cassandra
+
+Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
+
+The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos DB container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and their considerations](../request-units.md) article.
+
+This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB for Apache Cassandra. If you are using a different API, see [API for MongoDB](../mongodb/find-request-unit-charge.md), [API for NoSQL](../find-request-unit-charge.md), [API for Gremlin](../gremlin/find-request-unit-charge.md), and [API for Table](../table/find-request-unit-charge.md) articles to find the RU/s charge.
+
+When you perform operations against the Azure Cosmos DB for Apache Cassandra, the RU charge is returned in the incoming payload as a field named `RequestCharge`. You have multiple options for retrieving the RU charge.
+
+## Use a Cassandra Driver
+
+### [.NET Driver](#tab/dotnet-driver)
+
+When you use the [.NET SDK](https://www.nuget.org/packages/CassandraCSharpDriver/), you can retrieve the incoming payload under the `Info` property of a `RowSet` object:
+
+```csharp
+RowSet rowSet = session.Execute("SELECT table_name FROM system_schema.tables;");
+double requestCharge = BitConverter.ToDouble(rowSet.Info.IncomingPayload["RequestCharge"].Reverse().ToArray(), 0);
+```
+
+For more information, see [Quickstart: Build a Cassandra app by using the .NET SDK and Azure Cosmos DB](manage-data-dotnet.md).
+
+### [Java Driver](#tab/java-driver)
+
+When you use the [Java SDK](https://mvnrepository.com/artifact/com.datastax.cassandra/cassandra-driver-core), you can retrieve the incoming payload by calling the `getExecutionInfo()` method on a `ResultSet` object:
+
+```java
+ResultSet resultSet = session.execute("SELECT table_name FROM system_schema.tables;");
+Double requestCharge = resultSet.getExecutionInfo().getIncomingPayload().get("RequestCharge").getDouble();
+```
+
+For more information, see [Quickstart: Build a Cassandra app by using the Java SDK and Azure Cosmos DB](manage-data-java.md).
+
+### [GOCQL Driver](#tab/gocql-driver)
+
+When you use the [GOCQL driver](https://github.com/gocql/gocql), you can retrieve the incoming payload by calling the `GetCustomPayload()` method on an [`Iter`](https://pkg.go.dev/github.com/gocql/gocql#Iter) type:
+
+```go
+query := session.Query(fmt.Sprintf("SELECT * FROM <keyspace.table> where <value> = ?", keyspace, table)).Bind(<value>)
+iter := query.Iter()
+requestCharge := iter.GetCustomPayload()["RequestCharge"]
+requestChargeBits := binary.BigEndian.Uint64(requestCharge)
+requestChargeValue := math.Float64frombits(requestChargeBits)
+fmt.Printf("%v\n", requestChargeValue)
+```
+
+For more information, see [Quickstart: Build a Cassandra app by using GOCQL and Azure Cosmos DB](manage-data-go.md).
++
+## Next steps
+
+To learn about optimizing your RU consumption, see these articles:
+
+* [Request units and throughput in Azure Cosmos DB](../request-units.md)
+* [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)
+* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
cosmos-db Glowroot Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/glowroot-cassandra.md
- Title: Run Glowroot on Azure Cosmos DB Cassandra API(preview)
-description: This article details how to run Glowroot in Azure Cosmos DB Cassandra API.
----- Previously updated : 10/02/2021---
-# Run Glowroot on Azure Cosmos DB Cassandra API
-
-Glowroot is an application performance management tool used to optimize and monitor the performance of your applications. This article explains how you can now use Glowroot within Azure Cosmos DB Cassandra API to monitor your application's performance.
-
-## Prerequisites and Setup
-
-* [Create an Azure Cosmos DB Cassandra API account](manage-data-java.md#create-a-database-account).
-* [Install Java (version 8) for Windows](https://developers.redhat.com/products/openjdk/download)
-> [!NOTE]
-> Note that there are certain known incompatible build targets with newer versions. If you already have a newer version of Java, you can still download JDK8.
-> If you have newer Java installed in addition to JDK8: Set the %JAVA_HOME% variable in the local command prompt to target JDK8. This will only change Java version for the current session and leave global machine settings intact.
-* [Install maven](https://maven.apache.org/download.cgi)
- * Verify successful installation by running: `mvn --version`
-
-## Run Glowroot central collector with Cosmos DB endpoint
-Once the endpoint configuration has been completed.
-1. [Download Glowroot central collector distribution](https://github.com/glowroot/glowroot)
-2. In the glowroot-central.properties file, populate the following properties from your Cosmos DB Cassandra API endpoint
- * cassandra.contactPoints
- * cassandra.username
- * cassandra.password
-3. Set properties `cassandra.ssl=true`, `cassandra.gcGraceSeconds=0`, and `cassandra.port=10350`.
-4. Ensure that the glowroot-central.properties is in the same folder as the glowroot-central.jar.
-5. Run `java -jar glowroot-central.jar` to begin running Glowroot.
-
-## FAQs
-Open a support ticket if you have issues running or testing Glowroot. Providing the subscription ID and account name where your Glowroot test will be running.
-
-## Next steps
-- Get started with [creating a Cassandra API account, database, and a table](create-account-java.md) by using a Java application.-- Learn about [supported features](cassandra-support.md) in the Azure Cosmos DB Cassandra API.
cosmos-db Glowroot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/glowroot.md
+
+ Title: Run Glowroot on Azure Cosmos DB for Apache Cassandra (preview)
+description: This article details how to run Glowroot in Azure Cosmos DB for Apache Cassandra.
+++++ Last updated : 10/02/2021+++
+# Run Glowroot on Azure Cosmos DB for Apache Cassandra
+
+Glowroot is an application performance management tool used to optimize and monitor the performance of your applications. This article explains how you can now use Glowroot within Azure Cosmos DB for Apache Cassandra to monitor your application's performance.
+
+## Prerequisites and Setup
+
+* [Create an Azure Cosmos DB for Apache Cassandra account](manage-data-java.md#create-a-database-account).
+* [Install Java (version 8) for Windows](https://developers.redhat.com/products/openjdk/download)
+> [!NOTE]
+> Note that there are certain known incompatible build targets with newer versions. If you already have a newer version of Java, you can still download JDK8.
+> If you have newer Java installed in addition to JDK8: Set the %JAVA_HOME% variable in the local command prompt to target JDK8. This will only change Java version for the current session and leave global machine settings intact.
+* [Install maven](https://maven.apache.org/download.cgi)
+ * Verify successful installation by running: `mvn --version`
+
+## Run Glowroot central collector with Azure Cosmos DB endpoint
+Once the endpoint configuration has been completed, follow these steps:
+1. [Download Glowroot central collector distribution](https://github.com/glowroot/glowroot)
+2. In the glowroot-central.properties file, populate the following properties from your Azure Cosmos DB for Apache Cassandra endpoint
+ * cassandra.contactPoints
+ * cassandra.username
+ * cassandra.password
+3. Set properties `cassandra.ssl=true`, `cassandra.gcGraceSeconds=0`, and `cassandra.port=10350`.
+4. Ensure that the glowroot-central.properties is in the same folder as the glowroot-central.jar.
+5. Run `java -jar glowroot-central.jar` to begin running Glowroot.
+
+## FAQs
+Open a support ticket if you have issues running or testing Glowroot, providing the subscription ID and account name where your Glowroot test will be running.
+
+## Next steps
+- Get started with [creating an API for Cassandra account, database, and a table](create-account-java.md) by using a Java application.
+- Learn about [supported features](support.md) in the Azure Cosmos DB for Apache Cassandra.
cosmos-db How To Create Container Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/how-to-create-container-cassandra.md
- Title: Create a container in Azure Cosmos DB Cassandra API
-description: Learn how to create a container in Azure Cosmos DB Cassandra API by using Azure portal, .NET, Java, Python, Node.js, and other SDKs.
----- Previously updated : 10/16/2020---
-# Create a container in Azure Cosmos DB Cassandra API
-
-This article explains the different ways to create a container in Azure Cosmos DB Cassandra API. It shows how to create a container using Azure portal, Azure CLI, PowerShell, or supported SDKs. This article demonstrates how to create a container, specify the partition key, and provision throughput.
-
-This article explains the different ways to create a container in Azure Cosmos DB Cassandra API. If you are using a different API, see [API for MongoDB](../mongodb/how-to-create-container-mongodb.md), [Gremlin API](../how-to-create-container-gremlin.md), [Table API](../table/how-to-create-container.md), and [SQL API](../how-to-create-container.md) articles to create the container.
-
-> [!NOTE]
-> When creating containers, make sure you donΓÇÖt create two containers with the same name but different casing. ThatΓÇÖs because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
-
-## <a id="portal-cassandra"></a>Create using Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. [Create a new Azure Cosmos account](manage-data-dotnet.md#create-a-database-account), or select an existing account.
-
-1. Open the **Data Explorer** pane, and select **New Table**. Next, provide the following details:
-
- * Indicate whether you are creating a new keyspace, or using an existing one.
- * Enter a table name.
- * Enter the properties and specify a primary key.
- * Enter a throughput to be provisioned (for example, 1000 RUs).
- * Select **OK**.
-
- :::image type="content" source="../media/how-to-create-container/partitioned-collection-create-cassandra.png" alt-text="Screenshot of Cassandra API, Add Table dialog box":::
-
-> [!NOTE]
-> For Cassandra API, the primary key is used as the partition key.
-
-## <a id="dotnet-cassandra"></a>Create using .NET SDK
-
-```csharp
-// Create a Cassandra table with a partition/primary key and provision 1000 RU/s throughput.
-session.Execute(CREATE TABLE myKeySpace.myTable(
- user_id int PRIMARY KEY,
- firstName text,
- lastName text) WITH cosmosdb_provisioned_throughput=1000);
-```
-
-If you encounter timeout exception when creating a collection, do a read operation to validate if the collection was created successfully. The read operation throws an exception until the collection create operation is successful. For the list of status codes supported by the create operation see the [HTTP Status Codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) article.
-
-## <a id="cli-mongodb"></a>Create using Azure CLI
-
-[Create a Cassandra table with Azure CLI](../scripts/cli/cassandr).
-
-## Create using PowerShell
-
-[Create a Cassandra table with PowerShell](../scripts/powershell/cassandr)
-
-## Next steps
-
-* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
-* [Request Units in Azure Cosmos DB](../request-units.md)
-* [Provision throughput on containers and databases](../set-throughput.md)
-* [Work with Azure Cosmos account](../account-databases-containers-items.md)
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/how-to-create-container.md
+
+ Title: Create a container in Azure Cosmos DB for Apache Cassandra
+description: Learn how to create a container in Azure Cosmos DB for Apache Cassandra by using Azure portal, .NET, Java, Python, Node.js, and other SDKs.
+ Last updated : 10/16/2020
+ms.devlang: csharp
+# Create a container in Azure Cosmos DB for Apache Cassandra
+
+This article explains the different ways to create a container in Azure Cosmos DB for Apache Cassandra. It shows how to create a container by using the Azure portal, Azure CLI, PowerShell, or supported SDKs, how to specify the partition key, and how to provision throughput.
+
+If you are using a different API, see the [API for MongoDB](../mongodb/how-to-create-container.md), [API for Gremlin](../gremlin/how-to-create-container.md), [API for Table](../table/how-to-create-container.md), or [API for NoSQL](../how-to-create-container.md) article to create the container.
+
+> [!NOTE]
+> When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
+
+## <a id="portal-cassandra"></a>Create using Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. [Create a new Azure Cosmos DB account](manage-data-dotnet.md#create-a-database-account), or select an existing account.
+
+1. Open the **Data Explorer** pane, and select **New Table**. Next, provide the following details:
+
+ * Indicate whether you are creating a new keyspace, or using an existing one.
+ * Enter a table name.
+ * Enter the properties and specify a primary key.
+ * Enter a throughput to be provisioned (for example, 1000 RUs).
+ * Select **OK**.
+
+ :::image type="content" source="../media/how-to-create-container/partitioned-collection-create-cassandra.png" alt-text="Screenshot of API for Cassandra, Add Table dialog box":::
+
+> [!NOTE]
+> For API for Cassandra, the primary key is used as the partition key.
+
+## <a id="dotnet-cassandra"></a>Create using .NET SDK
+
+```csharp
+// Create a Cassandra table with a partition/primary key and provision 1000 RU/s throughput.
+session.Execute(@"CREATE TABLE myKeySpace.myTable(
+    user_id int PRIMARY KEY,
+    firstName text,
+    lastName text) WITH cosmosdb_provisioned_throughput=1000");
+```
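+
+If you need a partition key that is separate from the clustering columns, you can declare a compound primary key. The following is a minimal sketch that reuses the `session.Execute` pattern above; the table and column names are illustrative only:
+
+```csharp
+// The first component of the primary key (user_id) is the partition key;
+// order_id is a clustering column within each partition.
+session.Execute(@"CREATE TABLE myKeySpace.myOrders(
+    user_id int,
+    order_id int,
+    amount decimal,
+    PRIMARY KEY ((user_id), order_id)) WITH cosmosdb_provisioned_throughput=1000");
+```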
+
+If you encounter a timeout exception when creating a collection, do a read operation to validate whether the collection was created successfully. The read operation throws an exception until the collection create operation succeeds. For the list of status codes supported by the create operation, see the [HTTP Status Codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) article.
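+
+As a minimal sketch of that validation, assuming the DataStax C# driver and the same open `session` and table as above, you can retry a simple read until it stops throwing:
+
+```csharp
+// Poll with a lightweight read; it throws until the table exists, then returns normally.
+var created = false;
+while (!created)
+{
+    try
+    {
+        session.Execute("SELECT * FROM myKeySpace.myTable LIMIT 1");
+        created = true;
+    }
+    catch (Exception)
+    {
+        // Table not available yet; wait briefly before retrying.
+        System.Threading.Thread.Sleep(1000);
+    }
+}
+```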
+
+## <a id="cli-mongodb"></a>Create using Azure CLI
+
+[Create a Cassandra table with Azure CLI](../scripts/cli/cassandr).
+
+## Create using PowerShell
+
+[Create a Cassandra table with PowerShell](../scripts/powershell/cassandr)
+
+## Next steps
+
+* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
+* [Request Units in Azure Cosmos DB](../request-units.md)
+* [Provision throughput on containers and databases](../set-throughput.md)
+* [Work with Azure Cosmos DB account](../account-databases-containers-items.md)
cosmos-db How To Provision Throughput Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/how-to-provision-throughput-cassandra.md
- Title: Provision throughput on Azure Cosmos DB Cassandra API resources
-description: Learn how to provision container, database, and autoscale throughput in Azure Cosmos DB Cassandra API resources. You will use Azure portal, CLI, PowerShell and various other SDKs.
- Previously updated : 10/15/2020
-# Provision database, container or autoscale throughput on Azure Cosmos DB Cassandra API resources
-
-This article explains how to provision throughput in Azure Cosmos DB Cassandra API. You can provision standard(manual) or autoscale throughput on a container, or a database and share it among the containers within the database. You can provision throughput using Azure portal, Azure CLI, or Azure Cosmos DB SDKs.
-
-If you are using a different API, see [SQL API](../how-to-provision-container-throughput.md), [API for MongoDB](../mongodb/how-to-provision-throughput-mongodb.md), [Gremlin API](../how-to-provision-throughput-gremlin.md) articles to provision the throughput.
-
-## <a id="portal-cassandra"></a> Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. [Create a new Azure Cosmos account](../mongodb/create-mongodb-dotnet.md#create-an-azure-cosmos-db-account), or select an existing Azure Cosmos account.
-
-1. Open the **Data Explorer** pane, and select **New Table**. Next, provide the following details:
-
- * Indicate whether you are creating a new keyspace or using an existing one. Select the **Provision database throughput** option if you want to provision throughput at the keyspace level.
- * Enter the table ID within the CQL command.
- * Enter a primary key value (for example, `/userrID`).
- * Enter a throughput that you want to provision (for example, 1000 RUs).
- * Select **OK**.
-
- :::image type="content" source="./media/how-to-provision-throughput-cassandra/provision-table-throughput-portal-cassandra-api.png" alt-text="Screenshot of Data Explorer, when creating a new collection with database level throughput":::
-
-> [!Note]
-> If you are provisioning throughput on a container in an Azure Cosmos account configured with Cassandra API, use `/myPrimaryKey` for the partition key path.
-
-## <a id="dotnet-cassandra"></a> .NET SDK
-
-### Provision throughput for a Cassandra table
-
-```csharp
-// Create a Cassandra table with a partition (primary) key and provision throughput of 400 RU/s
-session.Execute("CREATE TABLE myKeySpace.myTable(
- user_id int PRIMARY KEY,
- firstName text,
- lastName text) WITH cosmosdb_provisioned_throughput=400");
-
-```
-Similar commands can be issued through any CQL-compliant driver.
-
-### Alter or change throughput for a Cassandra table
-
-```csharp
-// Altering the throughput too can be done through code by issuing following command
-session.Execute("ALTER TABLE myKeySpace.myTable WITH cosmosdb_provisioned_throughput=5000");
-```
-
-Similar command can be executed through any CQL compliant driver.
-
-```csharp
-// Create a Cassandra keyspace and provision throughput of 400 RU/s
-session.Execute("CREATE KEYSPACE IF NOT EXISTS myKeySpace WITH cosmosdb_provisioned_throughput=400");
-```
-
-## Azure Resource Manager
-
-Azure Resource Manager templates can be used to provision autoscale throughput on database or container-level resources for all Azure Cosmos DB APIs. See [Azure Resource Manager templates for Azure Cosmos DB](templates-samples.md) for samples.
-
-## Azure CLI
-
-Azure CLI can be used to provision autoscale throughput on a database or container-level resources for all Azure Cosmos DB APIs. For samples see [Azure CLI Samples for Azure Cosmos DB](cli-samples.md).
-
-## Azure PowerShell
-
-Azure PowerShell can be used to provision autoscale throughput on a database or container-level resources for all Azure Cosmos DB APIs. For samples see [Azure PowerShell samples for Azure Cosmos DB](powershell-samples.md).
-
-## Next steps
-
-See the following articles to learn about throughput provisioning in Azure Cosmos DB:
-
-* [Request units and throughput in Azure Cosmos DB](../request-units.md)
cosmos-db How To Provision Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/how-to-provision-throughput.md
+
+ Title: Provision throughput on Azure Cosmos DB for Apache Cassandra resources
+description: Learn how to provision container, database, and autoscale throughput in Azure Cosmos DB for Apache Cassandra resources. You will use Azure portal, CLI, PowerShell and various other SDKs.
+ Last updated : 10/15/2020
+ms.devlang: csharp
+# Provision database, container or autoscale throughput on Azure Cosmos DB for Apache Cassandra resources
+
+This article explains how to provision throughput in Azure Cosmos DB for Apache Cassandra. You can provision standard (manual) or autoscale throughput on a container, or on a database and share it among the containers within that database. You can provision throughput by using the Azure portal, Azure CLI, or Azure Cosmos DB SDKs.
+
+If you are using a different API, see the [API for NoSQL](../how-to-provision-container-throughput.md), [API for MongoDB](../mongodb/how-to-provision-throughput.md), or [API for Gremlin](../gremlin/how-to-provision-throughput.md) article to provision throughput.
+
+## <a id="portal-cassandra"></a> Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. [Create a new Azure Cosmos DB account](../mongodb/create-mongodb-dotnet.md#create-an-azure-cosmos-db-account), or select an existing Azure Cosmos DB account.
+
+1. Open the **Data Explorer** pane, and select **New Table**. Next, provide the following details:
+
+ * Indicate whether you are creating a new keyspace or using an existing one. Select the **Provision database throughput** option if you want to provision throughput at the keyspace level.
+ * Enter the table ID within the CQL command.
+ * Enter a primary key value (for example, `/userID`).
+ * Enter a throughput that you want to provision (for example, 1000 RUs).
+ * Select **OK**.
+
+ :::image type="content" source="./media/how-to-provision-throughput/provision-table-throughput-portal-cassandra-api.png" alt-text="Screenshot of Data Explorer, when creating a new collection with database level throughput":::
+
+> [!Note]
+> If you are provisioning throughput on a container in an Azure Cosmos DB account configured with API for Cassandra, use `/myPrimaryKey` for the partition key path.
+
+## <a id="dotnet-cassandra"></a> .NET SDK
+
+### Provision throughput for a Cassandra table
+
+```csharp
+// Create a Cassandra table with a partition (primary) key and provision throughput of 400 RU/s
+session.Execute("CREATE TABLE myKeySpace.myTable(
+ user_id int PRIMARY KEY,
+ firstName text,
+ lastName text) WITH cosmosdb_provisioned_throughput=400");
+
+```
+Similar commands can be issued through any CQL-compliant driver.
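+
+For autoscale throughput, a table can be created the same way. This is a hedged sketch assuming the `cosmosdb_autoscale_max_throughput` option is enabled for your account:
+
+```csharp
+// Create a Cassandra table with autoscale throughput that can scale up to 4000 RU/s.
+session.Execute(@"CREATE TABLE myKeySpace.myAutoscaleTable(
+    user_id int PRIMARY KEY,
+    firstName text,
+    lastName text) WITH cosmosdb_autoscale_max_throughput=4000");
+```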
+
+### Alter or change throughput for a Cassandra table
+
+```csharp
+// Altering the throughput too can be done through code by issuing following command
+session.Execute("ALTER TABLE myKeySpace.myTable WITH cosmosdb_provisioned_throughput=5000");
+```
+
+A similar command can be executed through any CQL-compliant driver.
+
+```csharp
+// Create a Cassandra keyspace and provision throughput of 400 RU/s
+session.Execute("CREATE KEYSPACE IF NOT EXISTS myKeySpace WITH cosmosdb_provisioned_throughput=400");
+```
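+
+The shared throughput of a keyspace can be changed in the same way as a table's. As a minimal sketch, assuming the `ALTER ... WITH cosmosdb_provisioned_throughput` pattern shown above also applies at the keyspace level:
+
+```csharp
+// Change the throughput shared by the tables in the keyspace to 1000 RU/s.
+session.Execute("ALTER KEYSPACE myKeySpace WITH cosmosdb_provisioned_throughput=1000");
+```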
+
+## Azure Resource Manager
+
+Azure Resource Manager templates can be used to provision autoscale throughput on database or container-level resources for all Azure Cosmos DB APIs. See [Azure Resource Manager templates for Azure Cosmos DB](templates-samples.md) for samples.
+
+## Azure CLI
+
+Azure CLI can be used to provision autoscale throughput on database-level or container-level resources for all Azure Cosmos DB APIs. For samples, see [Azure CLI Samples for Azure Cosmos DB](cli-samples.md).
+
+## Azure PowerShell
+
+Azure PowerShell can be used to provision autoscale throughput on database-level or container-level resources for all Azure Cosmos DB APIs. For samples, see [Azure PowerShell samples for Azure Cosmos DB](powershell-samples.md).
+
+## Next steps
+
+See the following articles to learn about throughput provisioning in Azure Cosmos DB:
+
+* [Request units and throughput in Azure Cosmos DB](../request-units.md)
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/introduction.md
+
+ Title: Introduction to Azure Cosmos DB for Apache Cassandra
+description: Learn how you can use Azure Cosmos DB to "lift-and-shift" existing applications and build new applications by using the Cassandra drivers and CQL
+ Last updated : 11/25/2020
+# Introduction to Azure Cosmos DB for Apache Cassandra
+
+Azure Cosmos DB for Apache Cassandra can be used as the data store for apps written for [Apache Cassandra](https://cassandra.apache.org). This means that by using existing [Apache drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html?highlight=driver) compliant with CQLv4, your existing Cassandra application can now communicate with Azure Cosmos DB for Apache Cassandra. In many cases, you can switch from using Apache Cassandra to using Azure Cosmos DB's API for Cassandra just by changing a connection string.
+
+The API for Cassandra enables you to interact with data stored in Azure Cosmos DB by using the Cassandra Query Language (CQL), Cassandra-based tools (like cqlsh), and Cassandra client drivers that you're already familiar with.
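+
+As a minimal C# sketch of that driver compatibility, assuming the DataStax CassandraCSharpDriver package and placeholder credentials from your account's connection string page, connecting looks the same as connecting to any Cassandra cluster, apart from the Azure Cosmos DB endpoint, port 10350, and TLS 1.2:
+
+```csharp
+using System.Security.Authentication;
+using Cassandra;
+
+// Placeholder values; replace them with the values from the connection string page.
+var sslOptions = new SSLOptions(SslProtocols.Tls12, true, null);
+var cluster = Cluster.Builder()
+    .WithCredentials("<username>", "<password>")
+    .WithPort(10350)
+    .AddContactPoint("<account-name>.cassandra.cosmos.azure.com")
+    .WithSSL(sslOptions)
+    .Build();
+
+// A session issues regular CQL, just as it would against Apache Cassandra.
+var session = cluster.Connect();
+session.Execute("SELECT * FROM system.local");
+```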
+
+> [!NOTE]
+> The [serverless capacity mode](../serverless.md) is now available on Azure Cosmos DB's API for Cassandra.
+
+## What is the benefit of using API for Apache Cassandra for Azure Cosmos DB?
+
+**No operations management**: As a fully managed cloud service, Azure Cosmos DB for Apache Cassandra removes the overhead of managing and monitoring a myriad of settings across OS, JVM, and yaml files and their interactions. Azure Cosmos DB provides monitoring of throughput, latency, storage, availability, and configurable alerts.
+
+**Open source standard**: Despite being a fully managed service, API for Cassandra still supports a large surface area of the native [Apache Cassandra wire protocol](support.md), allowing you to build applications on a widely used and cloud agnostic open source standard.
+
+**Performance management**: Azure Cosmos DB provides guaranteed low latency reads and writes at the 99th percentile, backed up by the SLAs. Users do not have to worry about operational overhead to ensure high performance and low latency reads and writes. This means that users do not need to deal with scheduling compaction, managing tombstones, setting up bloom filters and replicas manually. Azure Cosmos DB removes the overhead to manage these issues and lets you focus on the application logic.
+
+**Ability to use existing code and tools**: Azure Cosmos DB provides wire protocol level compatibility with existing Cassandra SDKs and tools. This compatibility ensures you can use your existing codebase with Azure Cosmos DB for Apache Cassandra with trivial changes.
+
+**Throughput and storage elasticity**: Azure Cosmos DB provides throughput across all regions and can scale the provisioned throughput with Azure portal, PowerShell, or CLI operations. You can [elastically scale](scale-account-throughput.md) storage and throughput for your tables as needed with predictable performance.
+
+**Global distribution and availability**: Azure Cosmos DB provides the ability to globally distribute data across all Azure regions and serve the data locally while ensuring low latency data access and high availability. Azure Cosmos DB provides 99.99% high availability within a region and 99.999% read and write availability across multiple regions with no operations overhead. Learn more in [Distribute data globally](../distribute-data-globally.md) article.
+
+**Choice of consistency**: Azure Cosmos DB provides the choice of five well-defined consistency levels to achieve optimal tradeoffs between consistency and performance. These consistency levels are strong, bounded-staleness, session, consistent prefix and eventual. These well-defined, practical, and intuitive consistency levels allow developers to make precise trade-offs between consistency, availability, and latency. Learn more in [consistency levels](../consistency-levels.md) article.
+
+**Enterprise grade**: Azure Cosmos DB provides [compliance certifications](https://www.microsoft.com/trustcenter) to ensure users can use the platform securely. Azure Cosmos DB also provides encryption at rest and in motion, IP firewall, and audit logs for control plane activities.
+
+**Event Sourcing**: API for Cassandra provides access to a persistent change log, the [Change Feed](change-feed.md), which can facilitate event sourcing directly from the database. In Apache Cassandra, the only equivalent is change data capture (CDC), which is merely a mechanism to flag specific tables for archival as well as rejecting writes to those tables once a configurable size-on-disk for the CDC log is reached (these capabilities are redundant in Azure Cosmos DB as the relevant aspects are automatically governed).
+
+## Next steps
+
+* You can quickly get started with building the following language-specific apps to create and manage API for Cassandra data:
+ - [Node.js app](manage-data-nodejs.md)
+ - [.NET app](manage-data-dotnet.md)
+ - [Python app](manage-data-python.md)
+
+* Get started with [creating an API for Cassandra account, database, and a table](create-account-java.md) by using a Java application.
+
+* [Load sample data to the API for Cassandra table](load-data-table.md) by using a Java application.
+
+* [Query data from the API for Cassandra account](query-data.md) by using a Java application.
+
+* To learn about Apache Cassandra features supported by Azure Cosmos DB for Apache Cassandra, see the [Cassandra support](support.md) article.
+
+* Read the [Frequently Asked Questions](cassandra-faq.yml).
cosmos-db Kafka Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/kafka-connect.md
Title: Integrate Apache Kafka and Azure Cosmos DB Cassandra API using Kafka Connect
-description: Learn how to ingest data from Kafka to Azure Cosmos DB Cassandra API using DataStax Apache Kafka Connector
+ Title: Integrate Apache Kafka and Azure Cosmos DB for Apache Cassandra using Kafka Connect
+description: Learn how to ingest data from Kafka to Azure Cosmos DB for Apache Cassandra using DataStax Apache Kafka Connector
-++ Last updated 12/14/2020
-# Ingest data from Apache Kafka into Azure Cosmos DB Cassandra API using Kafka Connect
+# Ingest data from Apache Kafka into Azure Cosmos DB for Apache Cassandra using Kafka Connect
-Existing Cassandra applications can easily work with the [Azure Cosmos DB Cassandra API](cassandra-introduction.md) because of its [CQLv4 driver compatibility](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html?highlight=driver). You leverage this capability to integrate with streaming platforms such as [Apache Kafka](https://kafka.apache.org/) and bring data into Azure Cosmos DB.
+Existing Cassandra applications can easily work with [Azure Cosmos DB for Apache Cassandra](introduction.md) because of its [CQLv4 driver compatibility](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html?highlight=driver). You can leverage this capability to integrate with streaming platforms such as [Apache Kafka](https://kafka.apache.org/) and bring data into Azure Cosmos DB.
-Data in Apache Kafka (topics) is only useful when consumed by other applications or ingested into other systems. It's possible to build a solution using the [Kafka Producer/Consumer](https://kafka.apache.org/documentation/#api) APIs [using a language and client SDK of your choice](https://cwiki.apache.org/confluence/display/KAFKA/Clients). Kafka Connect provides an alternative solution. It's a platform to stream data between Apache Kafka and other systems in a scalable and reliable manner. Since Kafka Connect supports off the shelf connectors which includes Cassandra, you don't need to write custom code to integrate Kafka with Azure Cosmos DB Cassandra API.
+Data in Apache Kafka (topics) is only useful when consumed by other applications or ingested into other systems. It's possible to build a solution using the [Kafka Producer/Consumer](https://kafka.apache.org/documentation/#api) APIs [using a language and client SDK of your choice](https://cwiki.apache.org/confluence/display/KAFKA/Clients). Kafka Connect provides an alternative solution. It's a platform to stream data between Apache Kafka and other systems in a scalable and reliable manner. Since Kafka Connect supports off-the-shelf connectors, which include Cassandra, you don't need to write custom code to integrate Kafka with Azure Cosmos DB for Apache Cassandra.
In this article, we will be using the open-source [DataStax Apache Kafka connector](https://docs.datastax.com/en/kafka/doc/kafka/kafkaIntro.html), that works on top of Kafka Connect framework to ingest records from a Kafka topic into rows of one or more Cassandra tables. The example provides a reusable setup using Docker Compose. This is quite convenient since it enables you to bootstrap all the required components locally with a single command. These components include Kafka, Zookeeper, Kafka Connect worker, and the sample data generator application.
Here is a breakdown of the components and their service definitions - you can re
## Prerequisites
-* [Provision an Azure Cosmos DB Cassandra API account](manage-data-dotnet.md#create-a-database-account)
+* [Provision an Azure Cosmos DB for Apache Cassandra account](manage-data-dotnet.md#create-a-database-account)
-* [Use cqlsh for validation](cassandra-support.md#cql-shell)
+* [Use cqlsh for validation](support.md#cql-shell)
* Install [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install)
Here is a summary of the attributes:
**Basic connectivity**
-- `contactPoints`: enter the contact point for Cosmos DB Cassandra
-- `loadBalancing.localDc`: enter the region for Cosmos DB account e.g. Southeast Asia
+- `contactPoints`: enter the contact point for Azure Cosmos DB Cassandra
+- `loadBalancing.localDc`: enter the region for Azure Cosmos DB account e.g. Southeast Asia
- `auth.username`: enter the username
- `auth.password`: enter the password
- `port`: enter the port value (this is `10350`, not `9042`. leave it as is)
cosmos-db Lightweight Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/lightweight-transactions.md
+
+ Title: Lightweight Transactions in Azure Cosmos DB for Apache Cassandra
+description: Learn about Lightweight Transaction support in Azure Cosmos DB for Apache Cassandra
+ Last updated : 11/19/2021
+# Azure Cosmos DB for Apache Cassandra Lightweight Transactions with Conditions
+
+Apache Cassandra, like most NoSQL database platforms, gives precedence to availability and partition tolerance over consistency, and it does not support ACID transactions as relational databases do. For details on how consistency levels work with LWT, see [Azure Cosmos DB for Apache Cassandra consistency levels](consistency-mapping.md). Cassandra supports lightweight transactions (LWT), which border on ACID. An LWT performs a read before a write, for operations that require the inserted or updated data to be unique.
+
+## LWT support within Azure Cosmos DB for Apache Cassandra
+To use LWT within Azure Cosmos DB for Apache Cassandra, we advise setting the following flags when you create the table.
+
+```sql
+with cosmosdb_cell_level_timestamp=true and cosmosdb_cell_level_timestamp_tombstones=true and cosmosdb_cell_level_timetolive=true
+```
+You might experience reduced performance on full row inserts compared to not using the flags.
+
+> [!NOTE]
+> LWTs are not supported for multi-region write scenarios.
+
+## LWT with flags enabled
+```sql
+CREATE KEYSPACE IF NOT EXISTS lwttesting WITH REPLICATION= {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor' : '1'};
+```
+
+```sql
+CREATE TABLE IF NOT EXISTS lwttesting.users (
+ name text,
+ userID int,
+ address text,
+ phoneCode int,
+ vendorName text STATIC,
+ PRIMARY KEY ((name), userID)) with cosmosdb_cell_level_timestamp=true and cosmosdb_cell_level_timestamp_tombstones=true and cosmosdb_cell_level_timetolive=true;
+```
+
+The query below returns TRUE.
+```sql
+INSERT INTO lwttesting.users(name, userID, phoneCode, vendorName)
+VALUES('Sara', 103, 832, 'vendor21') IF NOT EXISTS;
+```
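+
+When the same statement is issued from a driver, the outcome is returned in the result set rather than printed. As a minimal sketch with the DataStax C# driver and an open `session` (the `[applied]` column is standard Cassandra LWT behavior, assumed to carry over here):
+
+```csharp
+// Execute the lightweight transaction and inspect whether it was applied.
+var resultSet = session.Execute(
+    "INSERT INTO lwttesting.users(name, userID, phoneCode, vendorName) " +
+    "VALUES('Sara', 103, 832, 'vendor21') IF NOT EXISTS");
+foreach (var row in resultSet)
+{
+    // The [applied] column reports whether the conditional insert took effect.
+    var applied = row.GetValue<bool>("[applied]");
+    Console.WriteLine($"Applied: {applied}");
+}
+```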
+
+There are some known limitations with the flags enabled. If a row has already been inserted into the table, an attempt to insert a static row returns FALSE.
+```sql
+INSERT INTO lwttesting.users (userID, vendorName)
+VALUES (104, 'staticVendor') IF NOT EXISTS;
+```
+The above query currently returns FALSE but should be TRUE.
+
+## LWT with flags disabled
+A row delete combined with an IF condition is not supported if the flags are not enabled.
+
+```sql
+CREATE TABLE IF NOT EXISTS lwttesting.vendor_users (
+ name text,
+ userID int,
+ areaCode int,
+ vendor text STATIC,
+ PRIMARY KEY ((userID), name)
+);
+```
+
+```sql
+DELETE FROM lwttesting.vendor_users
+WHERE userID =103 AND name = 'Sara'
+IF areaCode = 832;
+```
+This query returns the error message: `Conditional delete of an entire row is not supported`.
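+
+From a driver, this limitation surfaces as an error on execution rather than a FALSE result. The sketch below uses the DataStax C# driver; treating it as an `InvalidQueryException` is an assumption about how the server error is wrapped:
+
+```csharp
+try
+{
+    // Attempt the conditional full-row delete; Azure Cosmos DB rejects it when the flags are disabled.
+    session.Execute(
+        "DELETE FROM lwttesting.vendor_users " +
+        "WHERE userID = 103 AND name = 'Sara' IF areaCode = 832");
+}
+catch (InvalidQueryException ex)
+{
+    Console.WriteLine(ex.Message);
+}
+```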
+
+## LWT with flags enabled or disabled
+Any request that combines assignments and conditions on both a static and a regular column is unsupported with the IF condition.
+The following query does not return an error message, because both columns are regular.
+```sql
+DELETE areaCode
+FROM lwttesting.vendor_users
+WHERE name= 'Sara'
+AND userID = 103 IF areaCode = 832;
+```
+However, the query below returns an error message
+`Conditions and assignments containing combination of Static and Regular columns are not supported.`
+```sql
+DELETE areaCode
+FROM lwttesting.vendor_users
+WHERE name= 'Sara' AND userID = 103 IF vendor = 'vendor21';
+```
+
+## Next steps
+In this tutorial, you've learned how lightweight transactions work within Azure Cosmos DB for Apache Cassandra. You can proceed to the next article:
+- [Migrate your data to an API for Cassandra account](migrate-data.md)
+- [Run Glowroot on Azure Cosmos DB for Apache Cassandra](glowroot.md)
cosmos-db Load Data Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/load-data-table.md
Title: 'Tutorial: Java app to load sample data into a Cassandra API table in Azure Cosmos DB'
-description: This tutorial shows how to load sample user data to a Cassandra API table in Azure Cosmos DB by using a Java application.
+ Title: 'Tutorial: Java app to load sample data into an API for Cassandra table in Azure Cosmos DB'
+description: This tutorial shows how to load sample user data to an API for Cassandra table in Azure Cosmos DB by using a Java application.
-+ Last updated 05/20/2019 ms.devlang: java
-#Customer intent: As a developer, I want to build a Java application to load data to a Cassandra API table in Azure Cosmos DB so that customers can store and manage the key/value data and utilize the global distribution, elastic scaling, multi-region , and other capabilities offered by Azure Cosmos DB.
-
+
+#Customer intent: As a developer, I want to build a Java application to load data to an API for Cassandra table in Azure Cosmos DB so that customers can store and manage the key/value data and utilize the global distribution, elastic scaling, multi-region, and other capabilities offered by Azure Cosmos DB.
-# Tutorial: Load sample data into a Cassandra API table in Azure Cosmos DB
+# Tutorial: Load sample data into an API for Cassandra table in Azure Cosmos DB
-As a developer, you might have applications that use key/value pairs. You can use Cassandra API account in Azure Cosmos DB to store and manage key/value data. This tutorial shows how to load sample user data to a table in a Cassandra API account in Azure Cosmos DB by using a Java application. The Java application uses the [Java driver](https://github.com/datastax/java-driver) and loads user data such as user ID, user name, and user city.
+As a developer, you might have applications that use key/value pairs. You can use an API for Cassandra account in Azure Cosmos DB to store and manage key/value data. This tutorial shows how to load sample user data to a table in an API for Cassandra account in Azure Cosmos DB by using a Java application. The Java application uses the [Java driver](https://github.com/datastax/java-driver) and loads user data such as user ID, user name, and user city.
This tutorial covers the following tasks:
If you don't have an Azure subscription, create a [free account](https://azure
## Prerequisites
-* This article belongs to a multi-part tutorial. Before you start with this doc, make sure to [create the Cassandra API account, keyspace, and table](create-account-java.md).
+* This article belongs to a multi-part tutorial. Before you start with this doc, make sure to [create the API for Cassandra account, keyspace, and table](create-account-java.md).
## Load data into the table
-Use the following steps to load data into your Cassandra API table:
+Use the following steps to load data into your API for Cassandra table:
1. Open the "UserRepository.java" file under the "src\main\java\com\azure\cosmosdb\cassandra" folder and append the code to insert the user_id, user_name and user_bcity fields into the table:
Use the following steps to load data into your Cassandra API table:
} ```
-2. Open the ΓÇ£UserProfile.javaΓÇ¥ file under the ΓÇ£src\main\java\com\azure\cosmosdb\cassandraΓÇ¥ folder. This class contains the main method that calls the createKeyspace and createTable methods you defined earlier. Now append the following code to insert some sample data into the Cassandra API table.
+2. Open the "UserProfile.java" file under the "src\main\java\com\azure\cosmosdb\cassandra" folder. This class contains the main method that calls the createKeyspace and createTable methods you defined earlier. Now append the following code to insert some sample data into the API for Cassandra table.
```java //Insert rows into user table
You can now open Data Explorer in the Azure portal to confirm that the user info
## Next steps
-In this tutorial, you've learned how to load sample data to a Cassandra API account in Azure Cosmos DB. You can now proceed to the next article:
+In this tutorial, you've learned how to load sample data to an API for Cassandra account in Azure Cosmos DB. You can now proceed to the next article:
> [!div class="nextstepaction"]
-> [Query data from the Cassandra API account](query-data.md)
+> [Query data from the API for Cassandra account](query-data.md)
cosmos-db Lwt Cassandra Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/lwt-cassandra-api.md
- Title: Lightweight Transactions in Azure Cosmos DB Cassandra API
-description: Learn about Lightweight Transaction support in Azure Cosmos DB Cassandra API
- Previously updated : 11/19/2021
-# Azure Cosmos DB Cassandra API Lightweight Transactions with Conditions
-
-Apache Cassandra as most NoSQL database platforms gives precedence to availability and partition-tolerance above consistency as it does not support ACID transactions as in relational database. For details on how consistency level works with LWT see [Azure Cosmos DB Cassandra API consistency levels](apache-cassandra-consistency-mapping.md). Cassandra supports lightweight transactions(LWT) which borders on ACID. It helps perform a read before write, for operations that require the data insert or update must be unique.
-
-## LWT support within Azure Cosmos DB Cassandra API
-To use LWT within Azure Cosmos DB Cassandra API, we advise that the following flags are set at the create table level.
-
-```sql
-with cosmosdb_cell_level_timestamp=true and cosmosdb_cell_level_timestamp_tombstones=true and cosmosdb_cell_level_timetolive=true
-```
-You might experience reduced performance on full row inserts compared to not using the flags.
-
-> [!NOTE]
-> LWTs are not supported for multi-region write scenarios.
-
-## LWT with flags enabled
-```sql
-CREATE KEYSPACE IF NOT EXISTS lwttesting WITH REPLICATION= {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor' : '1'};
-```
-
-```sql
-CREATE TABLE IF NOT EXISTS lwttesting.users (
- name text,
- userID int,
- address text,
- phoneCode int,
- vendorName text STATIC,
- PRIMARY KEY ((name), userID)) with cosmosdb_cell_level_timestamp=true and cosmosdb_cell_level_timestamp_tombstones=true and cosmosdb_cell_level_timetolive=true;
-```
-
-This query below returns TRUE.
-```sql
-INSERT INTO lwttesting.users(name, userID, phoneCode, vendorName)
-VALUES('Sara', 103, 832, 'vendor21') IF NOT EXISTS;
-```
-
-There are some known limitations with flag enabled. If a row has been inserted into the table, an attempt to insert a static row will return FALSE.
-```sql
-INSERT INTO lwttesting.users (userID, vendorName)
-VALUES (104, 'staticVendor') IF NOT EXISTS;
-```
-The above query currently returns FALSE but should be TRUE.
-
-## LWT with flags disabled
-Row delete combined with IF condition is not supported if the flags are not enabled.
-
-```sql
-CREATE TABLE IF NOT EXISTS lwttesting.vendor_users (
- name text,
- userID int,
- areaCode int,
- vendor text STATIC,
- PRIMARY KEY ((userID), name)
-);
-```
-
-```sql
-DELETE FROM lwttesting.vendor_users
-WHERE userID =103 AND name = 'Sara'
-IF areaCode = 832;
-```
-An error message: Conditional delete of an entire row is not supported.
-
-## LWT with flags enabled or disabled
-Any request containing assignment and condition combination of a static and regular column is unsupported with the IF condition.
-This query will not return an error message as both columns are regular.
-```sql
-DELETE areaCode
-FROM lwttesting.vendor_users
-WHERE name= 'Sara'
-AND userID = 103 IF areaCode = 832;
-```
-However, the query below returns an error message
-`Conditions and assignments containing combination of Static and Regular columns are not supported.`
-```sql
-DELETE areaCode
-FROM lwttesting.vendor_users
-WHERE name= 'Sara' AND userID = 103 IF vendor = 'vendor21';
-```
-
-## Next steps
-In this tutorial, you've learned how Lightweight Transaction works within Azure Cosmos DB Cassandra API. You can proceed to the next article:
-- [Migrate your data to a Cassandra API account](migrate-data.md)
-- [Run Glowroot on Azure Cosmos DB Cassandra API](glowroot-cassandra.md)
cosmos-db Manage Data Cqlsh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-cqlsh.md
Title: 'Quickstart: Cassandra API with CQLSH - Azure Cosmos DB'
-description: This quickstart shows how to use the Azure Cosmos DB's Apache Cassandra API to create a profile application using CQLSH.
+ Title: 'Quickstart: API for Cassandra with CQLSH - Azure Cosmos DB'
+description: This quickstart shows how to use Azure Cosmos DB's API for Apache Cassandra to create a profile application using CQLSH.
Last updated 01/24/2022

# Quickstart: Build a Cassandra app with CQLSH and Azure Cosmos DB

> [!div class="op_single_selector"]
> * [.NET](manage-data-dotnet.md)
> * [Python](manage-data-python.md)
> * [Golang](manage-data-go.md)
-In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use CQLSH to create a Cassandra database and container. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+In this quickstart, you create an Azure Cosmos DB for Apache Cassandra account, and use CQLSH to create a Cassandra database and container. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
## Prerequisites
Before you can create a document database, you need to create a Cassandra accoun
## Install standalone CQLSH tool
-Refer to [CQL shell](cassandra-support.md#cql-shell) on steps on how to launch a standalone cqlsh tool.
+Refer to [CQL shell](support.md#cql-shell) on steps on how to launch a standalone cqlsh tool.
## Update your connection string
Now go back to the Azure portal to get your connection string information and co
export SSL_VALIDATE=false ```
-4. Connect to Azure Cosmos DB API for Cassandra:
+4. Connect to Azure Cosmos DB for Apache Cassandra:
- Paste the USERNAME and PASSWORD value into the command. ```sql cqlsh <USERNAME>.cassandra.cosmos.azure.com 10350 -u <USERNAME> -p <PASSWORD> --ssl --protocol-version=4
In the Azure portal, open **Data Explorer** to query, modify, and work with this
## Next steps
+In this quickstart, you learned how to create an Azure Cosmos DB account with the API for Cassandra, and how to use CQLSH to create a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account.
+In this quickstart, you learned how to create an Azure Cosmos DB account with API for Cassandra using CQLSH that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account.
> [!div class="nextstepaction"]
-> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
+> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage Data Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-dotnet-core.md
Title: 'Quickstart: Cassandra API with .NET Core - Azure Cosmos DB'
-description: This quickstart shows how to use the Azure Cosmos DB Cassandra API to create a profile application with the Azure portal and .NET Core
+ Title: 'Quickstart: API for Cassandra with .NET Core - Azure Cosmos DB'
+description: This quickstart shows how to use the Azure Cosmos DB for Apache Cassandra to create a profile application with the Azure portal and .NET Core
ms.devlang: csharp
Last updated 05/02/2020

# Quickstart: Build a Cassandra app with .NET Core and Azure Cosmos DB

> [!div class="op_single_selector"]
> * [.NET](manage-data-dotnet.md)
> * [Golang](manage-data-go.md)
>
-This quickstart shows how to use .NET Core and the Azure Cosmos DB [Cassandra API](cassandra-introduction.md) to build a profile app by cloning an example from GitHub. This quickstart also shows you how to use the web-based Azure portal to create an Azure Cosmos DB account.
+This quickstart shows how to use .NET Core and the Azure Cosmos DB [API for Cassandra](introduction.md) to build a profile app by cloning an example from GitHub. This quickstart also shows you how to use the web-based Azure portal to create an Azure Cosmos DB account.
Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, table, key-value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
In addition, you need:
## Clone the sample application
-Now let's switch to working with code. Let's clone a Cassandra API app from GitHub, set the connection string, and run it. You'll see how easily you can work with data programmatically.
+Now let's switch to working with code. Let's clone an API for Cassandra app from GitHub, set the connection string, and run it. You'll see how easily you can work with data programmatically.
1. Open a command prompt. Create a new folder named `git-samples`. Then, close the command prompt.
Now let's switch to working with code. Let's clone a Cassandra API app from GitH
This step is optional. If you're interested to learn how the code creates the database resources, you can review the following snippets. The snippets are all taken from the `Program.cs` file within `async Task ProcessAsync()` method, installed in the `C:\git-samples\azure-cosmos-db-cassandra-dotnet-core-getting-started\CassandraQuickStart` folder. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
-* Initialize the session by connecting to a Cassandra cluster endpoint. The Cassandra API on Azure Cosmos DB supports only TLSv1.2.
+* Initialize the session by connecting to a Cassandra cluster endpoint. The API for Cassandra on Azure Cosmos DB supports only TLSv1.2.
```csharp var options = new Cassandra.SSLOptions(SslProtocols.Tls12, true, ValidateServerCertificate);
Now go back to the Azure portal to get your connection string information and co
## Next steps
-In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a web app. You can now import other data to your Cosmos DB account.
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a web app. You can now import other data to your Azure Cosmos DB account.
> [!div class="nextstepaction"]
-> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
+> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage Data Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-dotnet.md
Title: 'Quickstart: Cassandra API with .NET - Azure Cosmos DB'
-description: This quickstart shows how to use the Azure Cosmos DB Cassandra API to create a profile application with the Azure portal and .NET
+ Title: 'Quickstart: Azure Cosmos DB for Apache Cassandra with .NET - Azure Cosmos DB'
+description: This quickstart shows how to use the Azure Cosmos DB for Apache Cassandra to create a profile application with the Azure portal and .NET
-+ ms.devlang: csharp Last updated 05/02/2022-+
-# Quickstart: Build a Cassandra app with .NET SDK and Azure Cosmos DB
+# Quickstart: Build an Apache Cassandra app with .NET SDK and Azure Cosmos DB
> [!div class="op_single_selector"] > * [.NET](manage-data-dotnet.md)
> * [Golang](manage-data-go.md) >
-This quickstart shows how to use .NET and the Azure Cosmos DB [Cassandra API](cassandra-introduction.md) to build a profile app by cloning an example from GitHub. This quickstart also shows you how to use the web-based Azure portal to create an Azure Cosmos DB account.
+This quickstart shows how to use .NET and the Azure Cosmos DB [API for Cassandra](introduction.md) to build a profile app by cloning an example from GitHub. This quickstart also shows you how to use the web-based Azure portal to create an Azure Cosmos DB account.
Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, table, key-value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
In addition, you need:
## Clone the sample application
-Now let's switch to working with code. Let's clone a Cassandra API app from GitHub, set the connection string, and run it. You'll see how easily you can work with data programmatically.
+Now let's switch to working with code. Let's clone an API for Cassandra app from GitHub, set the connection string, and run it. You'll see how easily you can work with data programmatically.
1. Open a command prompt. Create a new folder named `git-samples`. Then, close the command prompt.
Now let's switch to working with code. Let's clone a Cassandra API app from GitH
This step is optional. If you're interested to learn how the code creates the database resources, you can review the following snippets. The snippets are all taken from the `Program.cs` file installed in the `C:\git-samples\azure-cosmos-db-cassandra-dotnet-getting-started\CassandraQuickStartSample` folder. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
-* Initialize the session by connecting to a Cassandra cluster endpoint. The Cassandra API on Azure Cosmos DB supports only TLSv1.2.
+* Initialize the session by connecting to a Cassandra cluster endpoint. The API for Cassandra on Azure Cosmos DB supports only TLSv1.2.
```csharp var options = new Cassandra.SSLOptions(SslProtocols.Tls12, true, ValidateServerCertificate);
Now go back to the Azure portal to get your connection string information and co
## Next steps
-In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a web app. You can now import other data to your Cosmos DB account.
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a web app. You can now import other data to your Azure Cosmos DB account.
> [!div class="nextstepaction"]
-> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
+> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage Data Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-go.md
Title: Build a Go app with Azure Cosmos DB Cassandra API using the gocql client
-description: This quickstart shows how to use a Go client to interact with Azure Cosmos DB Cassandra API
+ Title: Build a Go app with Azure Cosmos DB for Apache Cassandra using the gocql client
+description: This quickstart shows how to use a Go client to interact with Azure Cosmos DB for Apache Cassandra
-+ ms.devlang: golang Last updated 07/14/2020-+
-# Quickstart: Build a Go app with the `gocql` client to manage Azure Cosmos DB Cassandra API data
+# Quickstart: Build a Go app with the `gocql` client to manage Azure Cosmos DB for Apache Cassandra data
> [!div class="op_single_selector"] > * [.NET](manage-data-dotnet.md)
> * [Golang](manage-data-go.md) >
-Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities. In this quickstart, you will start by creating an Azure Cosmos DB Cassandra API account. You will then run a Go application to create a Cassandra keyspace, table, and execute a few operations. This Go app uses [gocql](https://github.com/gocql/gocql), which is a Cassandra client for the Go language.
+Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities. In this quickstart, you will start by creating an Azure Cosmos DB for Apache Cassandra account. You will then run a Go application to create a Cassandra keyspace and table, and execute a few operations. This Go app uses [gocql](https://github.com/gocql/gocql), which is a Cassandra client for the Go language.
## Prerequisites
go run main.go
## Next steps
-In this quickstart, you learned how to create an Azure Cosmos DB account with Cassandra API, and run a Go app that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account.
+In this quickstart, you learned how to create an Azure Cosmos DB account with API for Cassandra, and run a Go app that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account.
> [!div class="nextstepaction"] > [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage Data Java V4 Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-java-v4-sdk.md
Title: Java app with Azure Cosmos DB Cassandra API using Java 4.0 SDK
-description: This quickstart shows how to use the Azure Cosmos DB Cassandra API to create a profile application with the Azure portal and Java 4.0 SDK.
+ Title: Java app with Azure Cosmos DB for Apache Cassandra using Java 4.0 SDK
+description: This quickstart shows how to use the Azure Cosmos DB for Apache Cassandra to create a profile application with the Azure portal and Java 4.0 SDK.
-+ ms.devlang: java Last updated 05/18/2020-+
-# Quickstart: Build a Java app to manage Azure Cosmos DB Cassandra API data (v4 Driver)
+# Quickstart: Build a Java app to manage Azure Cosmos DB for Apache Cassandra data (v4 Driver)
> [!div class="op_single_selector"] > * [.NET](manage-data-dotnet.md)
> * [Golang](manage-data-go.md) >
-In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use a Cassandra Java app cloned from GitHub to create a Cassandra database and container using the [v4.x Apache Cassandra drivers](https://github.com/datastax/java-driver/tree/4.x) for Java. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+In this quickstart, you create an Azure Cosmos DB for Apache Cassandra account, and use a Cassandra Java app cloned from GitHub to create a Cassandra database and container using the [v4.x Apache Cassandra drivers](https://github.com/datastax/java-driver/tree/4.x) for Java. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
## Prerequisites
In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use
- [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git. > [!NOTE]
-> This is a simple quickstart which uses [version 4](https://github.com/datastax/java-driver/tree/4.x) of the open-source Apache Cassandra driver for Java. In most cases, you should be able to connect an existing Apache Cassandra dependent Java application to Azure Cosmos DB Cassandra API without any changes to your existing code. However, we recommend adding our [custom Java extension](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.1), which includes custom retry and load balancing policies, as well as recommended connection settings, for a better overall experience. This is to handle [rate limiting](scale-account-throughput.md#handling-rate-limiting-429-errors) and application level failover in Azure Cosmos DB where required. You can find a comprehensive sample which implements the extension [here](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4).
+> This is a simple quickstart which uses [version 4](https://github.com/datastax/java-driver/tree/4.x) of the open-source Apache Cassandra driver for Java. In most cases, you should be able to connect an existing Apache Cassandra dependent Java application to Azure Cosmos DB for Apache Cassandra without any changes to your existing code. However, we recommend adding our [custom Java extension](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.1), which includes custom retry and load balancing policies, as well as recommended connection settings, for a better overall experience. This is to handle [rate limiting](scale-account-throughput.md#handling-rate-limiting-429-errors) and application level failover in Azure Cosmos DB where required. You can find a comprehensive sample which implements the extension [here](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4).
## Create a database account
Now let's switch to working with code. Let's clone a Cassandra app from GitHub,
This step is optional. If you're interested to learn how the code creates the database resources, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string). These snippets are all taken from the *src/main/java/com/azure/cosmosdb/cassandra/util/CassandraUtils.java* file.
-* The `CqlSession` connects to the Azure Cosmos DB Cassandra API and returns a session to access (`Cluster` object from v3 driver is now obsolete). Cassandra Host, Port, User name and password is set using the connection string page in the Azure portal.
+* The `CqlSession` connects to Azure Cosmos DB for Apache Cassandra and returns a session to access (the `Cluster` object from the v3 driver is now obsolete). The Cassandra host, port, user name, and password are set using the connection string page in the Azure portal.
```java this.session = CqlSession.builder().withSslContext(sc)
Now go back to the Azure portal to get your connection string information and co
`region=West US`
- This is because the v.4x driver only allows one local DC to be paired with the contact point. If you want to add a region other than the default (which is the region that was given when the Cosmos DB account was first created), you will need to use regional suffix when adding contact point, e.g. `host-westus.cassandra.cosmos.azure.com`.
+ This is because the v4.x driver only allows one local DC to be paired with the contact point. If you want to add a region other than the default (which is the region that was given when the Azure Cosmos DB account was first created), you will need to use a regional suffix when adding the contact point, e.g. `host-westus.cassandra.cosmos.azure.com`.
8. Save the *config.properties* file.
Now go back to the Azure portal to get your connection string information and co
## Next steps
-In this quickstart, you learned how to create an Azure Cosmos DB account with Cassandra API, and run a Cassandra Java app that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account.
+In this quickstart, you learned how to create an Azure Cosmos DB account with API for Cassandra, and run a Cassandra Java app that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account.
> [!div class="nextstepaction"]
-> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
+> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage Data Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-java.md
Title: Java app with Azure Cosmos DB Cassandra API using Java 3.0 SDK
-description: This quickstart shows how to use the Azure Cosmos DB Cassandra API to create a profile application with the Azure portal and Java 3.0 SDK.
+ Title: Java app with Azure Cosmos DB for Apache Cassandra using Java 3.0 SDK
+description: This quickstart shows how to use the Azure Cosmos DB for Apache Cassandra to create a profile application with the Azure portal and Java 3.0 SDK.
-+ ms.devlang: java Last updated 07/17/2021-+
-# Quickstart: Build a Java app to manage Azure Cosmos DB Cassandra API data (v3 Driver)
+# Quickstart: Build a Java app to manage Azure Cosmos DB for Apache Cassandra data (v3 Driver)
> [!div class="op_single_selector"] > * [.NET](manage-data-dotnet.md)
> * [Golang](manage-data-go.md) >
-In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use a Cassandra Java app cloned from GitHub to create a Cassandra database and container using the [v3.x Apache Cassandra drivers](https://github.com/datastax/java-driver/tree/3.x) for Java. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+In this quickstart, you create an Azure Cosmos DB for Apache Cassandra account, and use a Cassandra Java app cloned from GitHub to create a Cassandra database and container using the [v3.x Apache Cassandra drivers](https://github.com/datastax/java-driver/tree/3.x) for Java. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
## Prerequisites
In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use
- [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git. > [!NOTE]
-> This is a simple quickstart which uses [version 3](https://github.com/datastax/java-driver/tree/3.x) of the open-source Apache Cassandra driver for Java. In most cases, you should be able to connect an existing Apache Cassandra dependent Java application to Azure Cosmos DB Cassandra API without any changes to your existing code. However, we recommend adding our [custom Java extension](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/feature/java-driver-3%2F1.0.0), which includes custom retry and load balancing policies, for a better overall experience. This is to handle [rate limiting](./scale-account-throughput.md#handling-rate-limiting-429-errors) and application level failover in Azure Cosmos DB respectively. You can find a comprehensive sample which implements the extension [here](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample).
+> This is a simple quickstart which uses [version 3](https://github.com/datastax/java-driver/tree/3.x) of the open-source Apache Cassandra driver for Java. In most cases, you should be able to connect an existing Apache Cassandra dependent Java application to Azure Cosmos DB for Apache Cassandra without any changes to your existing code. However, we recommend adding our [custom Java extension](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/feature/java-driver-3%2F1.0.0), which includes custom retry and load balancing policies, for a better overall experience. This is to handle [rate limiting](./scale-account-throughput.md#handling-rate-limiting-429-errors) and application level failover in Azure Cosmos DB respectively. You can find a comprehensive sample which implements the extension [here](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample).
## Create a database account
This step is optional. If you're interested to learn how the code creates the da
cluster = Cluster.builder().addContactPoint(cassandraHost).withPort(cassandraPort).withCredentials(cassandraUsername, cassandraPassword).withSSL(sslOptions).build(); ```
-* The `cluster` connects to the Azure Cosmos DB Cassandra API and returns a session to access.
+* The `cluster` connects to Azure Cosmos DB for Apache Cassandra and returns a session for data access.
```java return cluster.connect();
Now go back to the Azure portal to get your connection string information and co
## Next steps
-In this quickstart, you learned how to create an Azure Cosmos DB account with Cassandra API, and run a Cassandra Java app that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account.
+In this quickstart, you learned how to create an Azure Cosmos DB account with API for Cassandra, and run a Cassandra Java app that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account.
> [!div class="nextstepaction"]
-> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
+> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage Data Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-nodejs.md
Title: Create a profile app with Node.js using the Cassandra API
-description: This quickstart shows how to use the Azure Cosmos DB Cassandra API to create a profile application with Node.js.
+ Title: Create a profile app with Node.js using the API for Cassandra
+description: This quickstart shows how to use the Azure Cosmos DB for Apache Cassandra to create a profile application with Node.js.
-+ ms.devlang: javascript Last updated 02/10/2021--- devx-track-js-- mode-api-- kr2b-contr-experiment+
# Quickstart: Build a Cassandra app with Node.js SDK and Azure Cosmos DB
> [!div class="op_single_selector"] > * [.NET](manage-data-dotnet.md)
> * [Golang](manage-data-go.md) >
-In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use a Cassandra Node.js app cloned from GitHub to create a Cassandra database and container. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+In this quickstart, you create an Azure Cosmos DB for Apache Cassandra account, and use a Cassandra Node.js app cloned from GitHub to create a Cassandra database and container. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
## Prerequisites
Before you can create a document database, you need to create a Cassandra accoun
## Clone the sample application
-Clone a Cassandra API app from GitHub, set the connection string, and run it.
+Clone an API for Cassandra app from GitHub, set the connection string, and run it.
1. Open a Command Prompt window. Create a new folder named `git-samples`. Then, close the window.
This step is optional. If you're interested to learn how the code creates the da
}); ```
-* The `client` connects to the Azure Cosmos DB Cassandra API.
+* The `client` connects to Azure Cosmos DB for Apache Cassandra.
```javascript client.connect();
Go to the Azure portal to get your connection string information and copy it int
## Next steps
-In this quickstart, you learned how to create an Azure Cosmos DB account with Cassandra API, and run a Cassandra Node.js app that creates a Cassandra database and container. You can now import more data into your Azure Cosmos DB account.
+In this quickstart, you learned how to create an Azure Cosmos DB account with API for Cassandra, and run a Cassandra Node.js app that creates a Cassandra database and container. You can now import more data into your Azure Cosmos DB account.
> [!div class="nextstepaction"]
-> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
+> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage Data Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-python.md
Title: 'Quickstart: Cassandra API with Python - Azure Cosmos DB'
-description: This quickstart shows how to use the Azure Cosmos DB's Apache Cassandra API to create a profile application with Python.
+ Title: 'Quickstart: API for Cassandra with Python - Azure Cosmos DB'
+description: This quickstart shows how to use the Azure Cosmos DB's API for Apache Cassandra to create a profile application with Python.
-+ ms.devlang: python Last updated 08/13/2020-+
# Quickstart: Build a Cassandra app with Python SDK and Azure Cosmos DB
> [!div class="op_single_selector"] > * [.NET](manage-data-dotnet.md)
> * [Golang](manage-data-go.md) >
-In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use a Cassandra Python app cloned from GitHub to create a Cassandra database and container. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+In this quickstart, you create an Azure Cosmos DB for Apache Cassandra account, and use a Cassandra Python app cloned from GitHub to create a Cassandra database and container. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
## Prerequisites
Before you can create a document database, you need to create a Cassandra accoun
## Clone the sample application
-Now let's clone a Cassandra API app from GitHub, set the connection string, and run it. You see how easy it is to work with data programmatically.
+Now let's clone an API for Cassandra app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
1. Open a command prompt. Create a new folder named `git-samples`. Then, close the command prompt.
Now let's clone a Cassandra API app from GitHub, set the connection string, and
This step is optional. If you're interested in learning how the code creates the database resources, you can review the following snippets. The snippets are all taken from the *pyquickstart.py* file. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
-* The `cluster` is initialized with `contactPoint` and `port` information that is retrieved from the Azure portal. The `cluster` then connects to the Azure Cosmos DB Cassandra API by using the `connect()` method. An authorized connection is established by using the username, password, and the default certificate or an explicit certificate if you provide one within the config file.
+* The `cluster` is initialized with `contactPoint` and `port` information that is retrieved from the Azure portal. The `cluster` then connects to Azure Cosmos DB for Apache Cassandra by using the `connect()` method. An authorized connection is established by using the username, password, and the default certificate, or an explicit certificate if you provide one within the config file.
:::code language="python" source="~/cosmosdb-cassandra-python-sample/pyquickstart.py" id="authenticateAndConnect":::
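If you want a rough picture of what that included snippet does without opening the sample, the following is a minimal sketch of such a connection using the open-source Python driver. The host, username, and password values are placeholders for values from your account's **Connection String** pane, and the variable names are illustrative rather than taken from the sample, which reads its settings from a config file.

```python
import ssl

from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

# Placeholder values: copy the real ones from your account's Connection String pane.
contact_point = "<your-account>.cassandra.cosmos.azure.com"
port = 10350
username = "<your-account>"
password = "<your-primary-password>"

# The API for Cassandra requires TLS; the default certificate bundle is typically enough.
ssl_context = ssl.create_default_context()

auth_provider = PlainTextAuthProvider(username=username, password=password)
cluster = Cluster(
    [contact_point],
    port=port,
    auth_provider=auth_provider,
    ssl_context=ssl_context,
)
session = cluster.connect()  # the session is then used for subsequent CQL statements
```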
Now go back to the Azure portal to get your connection string information and co
``` > [!NOTE]
- > We recommend Python driver version **3.20.2** for use with Cassandra API. Higher versions may cause errors.
+ > We recommend Python driver version **3.20.2** for use with API for Cassandra. Higher versions may cause errors.
2. Run the following command to start your Python application:
Now go back to the Azure portal to get your connection string information and co
## Next steps
-In this quickstart, you learned how to create an Azure Cosmos DB account with Cassandra API, and run a Cassandra Python app that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account.
+In this quickstart, you learned how to create an Azure Cosmos DB account with API for Cassandra, and run a Cassandra Python app that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account.
> [!div class="nextstepaction"] > [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-with-bicep.md
Title: Create and manage Azure Cosmos DB Cassandra API with Bicep
-description: Use Bicep to create and configure Azure Cosmos DB Cassandra API.
+ Title: Create and manage Azure Cosmos DB for Apache Cassandra with Bicep
+description: Use Bicep to create and configure Azure Cosmos DB for Apache Cassandra.
-++ Last updated 9/13/2021
-# Manage Azure Cosmos DB Cassandra API resources using Bicep
+# Manage Azure Cosmos DB for Apache Cassandra resources using Bicep
-In this article, you learn how to use Bicep to deploy and manage your Azure Cosmos DB Cassandra API accounts, keyspaces, and tables.
+In this article, you learn how to use Bicep to deploy and manage your Azure Cosmos DB for Apache Cassandra accounts, keyspaces, and tables.
-This article shows Bicep samples for Cassandra API accounts. You can also find Bicep samples for [SQL](../sql/manage-with-bicep.md), [Gremlin](../graph/manage-with-bicep.md), [MongoDB](../mongodb/manage-with-bicep.md), and [Table](../table/manage-with-bicep.md) APIs.
+This article shows Bicep samples for API for Cassandra accounts. You can also find Bicep samples for [SQL](../sql/manage-with-bicep.md), [Gremlin](../graph/manage-with-bicep.md), [MongoDB](../mongodb/manage-with-bicep.md), and [Table](../table/manage-with-bicep.md) APIs.
> [!IMPORTANT]
>
> * Account names are limited to 44 characters, all lowercase.
> * To change the throughput values, redeploy the template with updated RU/s.
-> * When you add or remove locations to an Azure Cosmos account, you can't simultaneously modify other properties. These operations must be done separately.
+> * When you add or remove locations to an Azure Cosmos DB account, you can't simultaneously modify other properties. These operations must be done separately.
To create any of the Azure Cosmos DB resources below, copy the following example into a new Bicep file. You can optionally create a parameters file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Azure Resource Manager templates, including [Azure CLI](../../azure-resource-manager/bicep/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/bicep/deploy-powershell.md), and [Cloud Shell](../../azure-resource-manager/bicep/deploy-cloud-shell.md). <a id="create-autoscale"></a>
-## Cassandra API with autoscale provisioned throughput
+## API for Cassandra with autoscale provisioned throughput
-Create an Azure Cosmos account in two regions with options for consistency and failover, with a keyspace and table configured for autoscale throughput.
+Create an Azure Cosmos DB account in two regions with options for consistency and failover, with a keyspace and table configured for autoscale throughput.
:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.documentdb/cosmosdb-cassandra-autoscale/main.bicep"::: <a id="create-manual"></a>
-## Cassandra API with standard provisioned throughput
+## API for Cassandra with standard provisioned throughput
-Create an Azure Cosmos account in two regions with options for consistency and failover, with a keyspace and table configured for standard throughput.
+Create an Azure Cosmos DB account in two regions with options for consistency and failover, with a keyspace and table configured for standard throughput.
:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.documentdb/cosmosdb-cassandra/main.bicep":::
cosmos-db Materialized Views Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/materialized-views-cassandra.md
cosmos-db Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/materialized-views.md
+
+ Title: Materialized Views for Azure Cosmos DB for Apache Cassandra (Preview)
+description: This documentation is provided as a resource for participants in the preview of Azure Cosmos DB for Apache Cassandra Materialized View.
+++++ Last updated : 01/06/2022+++
+# Enable materialized views for Azure Cosmos DB for Apache Cassandra operations (Preview)
+
+> [!IMPORTANT]
+> Materialized Views for Azure Cosmos DB for Apache Cassandra is currently in gated preview. Please send an email to mv-preview@microsoft.com to try this feature.
+> The materialized views preview is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Feature overview
+
+Materialized views provide a means to efficiently query a base table (a container in Azure Cosmos DB) with filters that aren't on the primary key. When users write to the base table, the materialized view is built automatically in the background. The view can have a different primary key for lookups, and it contains only the columns projected from the base table. It's a read-only table.
+
+You can query a column store without specifying a partition key by using secondary indexes. However, the query won't be effective for columns with high cardinality (scanning through all data for a small result set) or for columns with low cardinality. Such queries end up being expensive because they become cross-partition queries.
+
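As an illustration only (the `shopping.orders` table, its `city` column, and the `session` object are hypothetical and not part of this article), this is the kind of non-primary-key filter that would otherwise force an expensive cross-partition scan, shown here with open-source Cassandra semantics:

```python
# Filtering on a column that isn't part of the primary key scans across partitions,
# so cost grows with the amount of data scanned rather than the size of the result.
rows = session.execute(
    "SELECT order_id, total FROM shopping.orders "
    "WHERE city = %s ALLOW FILTERING",
    ["Seattle"],
)
```

A materialized view keyed on `city` would turn this into a single-partition lookup.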
+With materialized views, you can:
+- Use them as global secondary indexes and avoid expensive cross-partition scans.
+- Provide a SQL-based conditional predicate to populate only certain columns and only the data that meets the precondition.
+- Simplify real-time, event-based scenarios where customers today use a change feed trigger for precondition checks to populate new collections.
+
+## Main benefits
+
+- With materialized views (server-side denormalization), you can avoid multiple independent tables and client-side denormalization.
+- The materialized view feature takes on the responsibility of updating views to keep them consistent with the base table. With this feature, you can avoid dual writes to the base table and the view.
+- Materialized views help optimize read performance.
+- You can specify throughput for the materialized view independently.
+- Based on the requirements to hydrate the view, you can configure the MV builder layer appropriately.
+- Write operations are faster because data only needs to be written to the base table.
+- Additionally, this implementation on Azure Cosmos DB is based on a pull model, which doesn't affect writer performance.
+++
+## How to get started?
+
+New API for Cassandra accounts with materialized views enabled can be provisioned on your subscription by using REST API calls from the Azure CLI.
+
+### Log in to the Azure command line interface
+
+Install the Azure CLI as described in [How to install the Azure CLI | Microsoft Docs](/cli/azure/install-azure-cli), and then sign in:
+ ```azurecli-interactive
+ az login
+ ```
+
+### Create an account
+
+To create an account with support for customer managed keys and materialized views, skip ahead to the **Create an account with support for customer managed keys and materialized views** section.
+
+To create an account, use the following command after creating body.txt with the below content, replacing {{subscriptionId}} with your subscription ID, {{resourceGroup}} with a resource group name that you should have created in advance, and {{accountName}} with a name for your API for Cassandra account.
+
+ ```azurecli-interactive
+ az rest --method PUT --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-11-15-preview --body @body.txt
+ body.txt content:
+ {
+ "location": "East US",
+ "properties":
+ {
+ "databaseAccountOfferType": "Standard",
+ "locations": [ { "locationName": "East US" } ],
+ "capabilities": [ { "name": "EnableCassandra" }, { "name": "CassandraEnableMaterializedViews" }],
+ "enableMaterializedViews": true
+ }
+ }
+ ```
+
+ Wait a few minutes, and then check for completion by using the following command. The provisioningState in the output should be Succeeded:
+ ```
+ az rest --method GET --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-11-15-preview
+ ```
+### Create an account with support for customer managed keys and materialized views
+
+This step is optional; you can skip it if you don't want to use customer managed keys for your Azure Cosmos DB account.
+
+To use the customer managed keys feature and materialized views together on an Azure Cosmos DB account, you must first configure managed identities with Azure Active Directory for your account and then enable support for materialized views.
+
+You can use the documentation [here](../how-to-setup-cmk.md) to configure your Azure Cosmos DB for Apache Cassandra account with customer managed keys and to set up managed identity access to the key vault. Make sure you follow all the steps in [Using a managed identity in Azure key vault access policy](../how-to-setup-managed-identity.md). The next step is to enable materialized views on the account.
+
+Once your account is set up with CMK and managed identity, you can enable materialized views on the account by setting the "enableMaterializedViews" property to true in the request body.
+
+ ```azurecli-interactive
+ az rest --method PATCH --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-07-01-preview --body @body.txt
++
+body.txt content:
+{
+ "properties":
+ {
+ "enableMaterializedViews": true
+ }
+}
+ ```
++
+ Wait a few minutes, and then check for completion by using the following command. The provisioningState in the output should be Succeeded:
+ ```
+az rest --method GET --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-07-01-preview
+```
+
+Perform another PATCH to set the "CassandraEnableMaterializedViews" capability, and wait for it to succeed:
+
+```
+az rest --method PATCH --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-07-01-preview --body @body.txt
+
+body.txt content:
+{
+ "properties":
+ {
+ "capabilities":
+[{"name":"EnableCassandra"},
+ {"name":"CassandraEnableMaterializedViews"}]
+ }
+}
+```
+
+### Create materialized view builder
+
+Following this step, you'll also need to provision a Materialized View Builder:
+
+```
+az rest --method PUT --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}/services/materializedViewsBuilder?api-version=2021-07-01-preview --body @body.txt
+
+body.txt content:
+{
+ "properties":
+ {
+ "serviceType": "materializedViewsBuilder",
+ "instanceCount": 1,
+ "instanceSize": "Cosmos.D4s"
+ }
+}
+```
+
+Wait a couple of minutes, and then check the status by using the following command. The status in the output should be Running:
+
+```
+az rest --method GET --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}/services/materializedViewsBuilder?api-version=2021-07-01-preview
+```
+
+## Caveats and current limitations
+
+Once your account and materialized view builder are set up, you should be able to create materialized views per the documentation [here](https://cassandra.apache.org/doc/latest/cql/mvs.html):
+
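Continuing the hypothetical `shopping.orders` example from earlier (none of these keyspace, table, or column names come from this article, and `session` is assumed to be an existing Python driver session), a view creation might look like the following sketch. Note that the WHERE clause uses only IS NOT NULL filters, in line with the caveats below.

```python
# Hypothetical base table: shopping.orders with primary key ((order_id))
# and non-key columns including customer_id and total.
create_view = """
CREATE MATERIALIZED VIEW IF NOT EXISTS shopping.orders_by_customer AS
    SELECT customer_id, order_id, total
    FROM shopping.orders
    WHERE customer_id IS NOT NULL AND order_id IS NOT NULL
    PRIMARY KEY ((customer_id), order_id)
"""
session.execute(create_view)

# Reads can then target the view directly by its partition key.
rows = session.execute(
    "SELECT order_id, total FROM shopping.orders_by_customer WHERE customer_id = %s",
    ["contoso-user-1"],
)
for row in rows:
    print(row.order_id, row.total)
```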
+However, there are a few caveats with the Azure Cosmos DB for Apache Cassandra preview implementation of materialized views:
+- Materialized views can't be created on a table that existed before the account was onboarded to support materialized views. After the account is onboarded, create a new table on which materialized views can be defined.
+- For the MV definition's WHERE clause, only "IS NOT NULL" filters are currently allowed.
+- After a materialized view is created against a base table, ALTER TABLE ADD operations aren't allowed on the base table's schema; they're allowed only if none of the MVs have select * in their definition.
+
+In addition to the above, note the following limitations.
+
+### Availability zones limitations
+
+- Materialized views can't be enabled on an account that has availability zone-enabled regions.
+- Adding a new region with availability zones isn't supported once "enableMaterializedViews" is set to true on the account.
+
+### Periodic backup and restore limitations
+
+Materialized views aren't automatically restored with the restore process. You need to re-create the materialized views after the restore process is complete. Before creating them, enable materialized views (enableMaterializedViews) on the restored account and provision the builders so that the materialized views can be built.
+
+Other limitations similar to **open-source Apache Cassandra** behavior:
+
+- Defining a conflict resolution policy on materialized views is not allowed.
+- Write operations from the customer aren't allowed on materialized views.
+- Cross-document queries and use of aggregate functions aren't supported on materialized views.
+- Modifying MaterializedViewDefinitionString after MV creation is not supported.
+- Deleting the base table is not allowed if at least one MV is defined on it. All the MVs must first be deleted, and then the base table can be deleted.
+- Defining materialized views on containers with static columns is not allowed.
+
+## Under the hood
+
+Azure Cosmos DB for Apache Cassandra uses an MV builder compute layer to maintain materialized views. You have the flexibility to configure the MV builder compute instances depending on the latency and lag requirements for hydrating the views. The compute containers are shared among all MVs within the database account. Each provisioned compute container spawns multiple tasks that read the change feed from base table partitions and, for every MV in the database account, write data to the MV (which is just another table) after transforming it per the MV definition.
+
+## Frequently asked questions (FAQs)
++
+### What transformations/actions are supported?
+
+- Specifying a partition key that is different from the base table's partition key.
+- Support for projecting a selected subset of columns from the base table.
+- Determining whether a row from the base table can be part of the materialized view, based on conditions evaluated on the primary key columns of the base table row. Supported filters: equalities, inequalities, contains. (Planned for GA)
+
+### What consistency levels will be supported?
+
+Data in the materialized view is eventually consistent. Users might read stale rows compared to the data in the base table, due to the redo of some operations on MVs. This behavior is acceptable because only eventual consistency is guaranteed on the MV. Customers can configure (scale up and scale down) the MV builder layer depending on the latency requirement for the view to be consistent with the base table.
+
+### Will there be an autoscale layer for the MV builder instances?
+
+Autoscaling for the MV builder is not available right now. The MV builder instances can be manually scaled by modifying the instance count (scale out) or instance size (scale up).
+
+### Details on the billing model
+
+The proposed billing model is to charge customers for:
+
+**MV builder compute nodes**: The MV builder compute, a single-tenant layer.
+
+**Storage**: The OLTP storage of the base table and MV, based on the existing storage meter for containers. LogStore won't be charged.
+
+**Request units**: The provisioned RUs for the base container and the materialized view.
+
+### What are the different SKUs that will be available?
+Refer to [Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/) and check the instances listed under Dedicated Gateway.
+
+### What type of TTL support do we have?
+
+Setting a table-level TTL on the MV is not allowed. The TTL from base table rows is applied to the MV as well.
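As a sketch only (same hypothetical `shopping.orders` table and `session` as above, and assuming row-level TTL is enabled for the base table), a TTL set on a base-table write is what carries over to the corresponding view row:

```python
# The TTL is specified on the base table insert; the derived row in the
# materialized view expires on the same schedule, since a table-level TTL
# can't be set on the view itself.
session.execute(
    "INSERT INTO shopping.orders (order_id, customer_id, total) "
    "VALUES (%s, %s, %s) USING TTL 86400",  # 24 hours
    ["order-123", "contoso-user-1", 42.50],
)
```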
++
+### Initial troubleshooting if MVs aren't up to date
+- Check whether the MV builder instances are provisioned.
+- Check whether enough RUs are provisioned on the base table.
+- Check for unavailability on the base table or the MV.
+
+### What type of monitoring is available in addition to the existing monitoring for API for Cassandra?
+
+- Max Materialized View Catchup Gap in Minutes: a value of *t* indicates that rows written to the base table in the last *t* minutes are yet to be propagated to the MV.
+- Metrics related to RUs consumed on the base table for the MV build (read change feed cost)
+- Metrics related to RUs consumed on the MV for the MV build (write cost)
+- Metrics related to resource consumption on the MV builders (CPU, memory usage metrics)
++
+### What are the restore options available for MVs?
+MVs can't be restored. Hence, MVs will need to be recreated once the base table is restored.
+
+### Can you create more than one view on a base table?
+
+Multiple views can be created on the same base table. A limit of five views is enforced.
+
+### How is uniqueness enforced on the materialized view? What does the mapping between records in the base table and records in the materialized view look like?
+
+The partition and clustering keys of the base table are always part of the primary key of any materialized view defined on it, which enforces uniqueness of the primary key after data repartitioning.
+
+### Can we add or remove columns on the base table once materialized view is defined?
+
+You can add a column to the base table, but you can't remove a column. After an MV is created against a base table, ALTER TABLE ADD operations aren't allowed on the base table; they're allowed only if none of the MVs have select * in their definition. Cassandra doesn't support dropping columns on the base table if it has a materialized view defined on it.
+
+### Can we create MV on existing base table?
+
+No. Materialized views can't be created on a table that existed before the account was onboarded to support materialized views. After the account is onboarded, create a new table on which materialized views can be defined. Support for MVs on existing tables is planned for the future.
+
+### What are the conditions on which records won't make it to MV and how to identify such records?
+
+Below are some of the identified cases where data from the base table can't be written to the MV because it violates constraints on the MV table:
+- Rows that don't satisfy the partition key size limit in the materialized view
+- Rows that don't satisfy the clustering key size limit in the materialized view
+
+Currently, these rows are dropped. We plan to expose details related to dropped rows in the future so that you can reconcile the missing data.
cosmos-db Migrate Data Arcion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data-arcion.md
Title: Migrate data from Cassandra to Azure Cosmos DB Cassandra API using Arcion
-description: Learn how to migrate data from Apache Cassandra database to Azure Cosmos DB Cassandra API using Arcion.
+ Title: Migrate data from Cassandra to Azure Cosmos DB for Apache Cassandra using Arcion
+description: Learn how to migrate data from Apache Cassandra database to Azure Cosmos DB for Apache Cassandra using Arcion.
-++ Last updated 04/04/2022
-# Migrate data from Cassandra to Azure Cosmos DB Cassandra API account using Arcion
+# Migrate data from Cassandra to Azure Cosmos DB for Apache Cassandra account using Arcion
-Cassandra API in Azure Cosmos DB has become a great choice for enterprise workloads running on Apache Cassandra for many reasons such as:
+API for Cassandra in Azure Cosmos DB has become a great choice for enterprise workloads running on Apache Cassandra for many reasons such as:
* **No overhead of managing and monitoring:** It eliminates the overhead of managing and monitoring a myriad of settings across OS, JVM, and yaml files and their interactions.
* **Significant cost savings:** You can save cost with Azure Cosmos DB, which includes the cost of VMs, bandwidth, and any applicable licenses. Additionally, you don't have to manage the data centers, servers, SSD storage, networking, and electricity costs.
-* **Ability to use existing code and tools:** Azure Cosmos DB provides wire protocol level compatibility with existing Cassandra SDKs and tools. This compatibility ensures you can use your existing codebase with Azure Cosmos DB Cassandra API with trivial changes.
+* **Ability to use existing code and tools:** Azure Cosmos DB provides wire protocol level compatibility with existing Cassandra SDKs and tools. This compatibility ensures you can use your existing codebase with Azure Cosmos DB for Apache Cassandra with trivial changes.
-There are various ways to migrate database workloads from one platform to another. [Arcion](https://www.arcion.io) is a tool that offers a secure and reliable way to perform zero downtime migration from other databases to Azure Cosmos DB. This article describes the steps required to migrate data from Apache Cassandra database to Azure Cosmos DB Cassandra API using Arcion.
+There are various ways to migrate database workloads from one platform to another. [Arcion](https://www.arcion.io) is a tool that offers a secure and reliable way to perform zero downtime migration from other databases to Azure Cosmos DB. This article describes the steps required to migrate data from Apache Cassandra database to Azure Cosmos DB for Apache Cassandra using Arcion.
> [!NOTE] > This offering from Arcion is currently in beta. For more information, please contact them at [Arcion Support](mailto:support@arcion.io)
This section describes the steps required to set up Arcion and migrates data fro
After filling out the database filter details, save and close the file.
-1. Next you will set up the destination database configuration. Before you define the configuration, [create an Azure Cosmos DB Cassandra API account](manage-data-dotnet.md#create-a-database-account) and then create a Keyspace, and a table to store the migrated data. Because you're migrating from Apache Cassandra to Cassandra API in Azure Cosmos DB, you can use the same partition key that you've used with Apache cassandra.
+1. Next, you set up the destination database configuration. Before you define the configuration, [create an Azure Cosmos DB for Apache Cassandra account](manage-data-dotnet.md#create-a-database-account), and then create a keyspace and a table to store the migrated data. Because you're migrating from Apache Cassandra to API for Cassandra in Azure Cosmos DB, you can use the same partition key that you've used with Apache Cassandra.
1. Before migrating the data, increase the container throughput to the amount required for your application to migrate quickly. For example, you can increase the throughput to 100000 RUs. Scaling the throughput before starting the migration will help you to migrate your data in less time.
- :::image type="content" source="./media/migrate-data-arcion/scale-throughput.png" alt-text="Scale Azure Cosmos container throughout":::
 :::image type="content" source="./media/migrate-data-arcion/scale-throughput.png" alt-text="Scale Azure Cosmos DB container throughput":::
Decrease the throughput after the migration is complete. Based on the amount of data stored and RUs required for each operation, you can estimate the throughput required after data migration. To learn more on how to estimate the RUs required, see [Provision throughput on containers and databases](../set-throughput.md) and [Estimate RU/s using the Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md) articles.
-1. Get the **Contact Point, Port, Username**, and **Primary Password** of your Azure Cosmos account from the **Connection String** pane. You'll use these values in the configuration file.
+1. Get the **Contact Point, Port, Username**, and **Primary Password** of your Azure Cosmos DB account from the **Connection String** pane. You'll use these values in the configuration file.
1. From the CLI terminal, set up the destination database configuration. Open the configuration file using **`vi conf/conn/cosmosdb.yml`** command and add a comma-separated list of host URI, port number, username, password, and other required parameters. The following example shows the contents of the configuration file: ```bash type: COSMOSDB
+ host: '<Azure Cosmos DB account contact point>'
+ host: '<Azure Cosmos DB accountΓÇÖs Contact point>'
port: 10350 username: 'arciondemo'
- password: '<Your Azure Cosmos accountΓÇÖs primary password>'
+ password: '<Your Azure Cosmos DB account primary password>'
max-connections: 30 ``` 1. Next migrate the data using Arcion. You can run the Arcion replicant in **full** or **snapshot** mode:
- * **Full mode** ΓÇô In this mode, the replicant continues to run after migration and it listens for any changes on the source Apache Cassandra system. If it detects any changes, they're replicated on the target Azure Cosmos account in real time.
+ * **Full mode**: In this mode, the replicant continues to run after migration, and it listens for any changes on the source Apache Cassandra system. If it detects any changes, they're replicated on the target Azure Cosmos DB account in real time.
 * **Snapshot mode**: In this mode, you can perform schema migration and one-time data replication. Real-time replication isn't supported with this option.
This section describes the steps required to set up Arcion and migrates data fro
./bin/replicant full conf/conn/cassandra.yaml conf/conn/cosmosdb.yaml --filter filter/cassandra_filter.yaml --replace-existing ```
- The replicant UI shows the replication progress. Once the schema migration and snapshot operation are done, the progress shows 100%. After the migration is complete, you can validate the data on the target Azure Cosmos database.
+ The replicant UI shows the replication progress. Once the schema migration and snapshot operation are done, the progress shows 100%. After the migration is complete, you can validate the data on the target Azure Cosmos DB database.
:::image type="content" source="./media/migrate-data-arcion/cassandra-data-migration-output.png" alt-text="Cassandra data migration output":::
-1. Because you've used full mode for migration, you can perform operations such as insert, update, or delete data on the source Apache Cassandra database. Later validate that they're replicated real time on the target Azure Cosmos database. After the migration, make sure to decrease the throughput configured for your Azure Cosmos container.
+1. Because you've used full mode for migration, you can perform operations such as insert, update, or delete data on the source Apache Cassandra database. Later, validate that they're replicated in real time on the target Azure Cosmos DB database. After the migration, make sure to decrease the throughput configured for your Azure Cosmos DB container.
1. You can stop the replicant at any point and restart it with the **--resume** switch. The replication resumes from the point where it stopped, without compromising data consistency. The following command shows how to use the resume switch.
cosmos-db Migrate Data Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data-databricks.md
Title: Migrate data from Apache Cassandra to the Azure Cosmos DB Cassandra API by using Databricks (Spark)
-description: Learn how to migrate data from an Apache Cassandra database to the Azure Cosmos DB Cassandra API by using Azure Databricks and Spark.
+ Title: Migrate data from Apache Cassandra to the Azure Cosmos DB for Apache Cassandra by using Databricks (Spark)
+description: Learn how to migrate data from an Apache Cassandra database to the Azure Cosmos DB for Apache Cassandra by using Azure Databricks and Spark.
-++ Last updated 03/10/2021
-# Migrate data from Cassandra to an Azure Cosmos DB Cassandra API account by using Azure Databricks
+# Migrate data from Cassandra to an Azure Cosmos DB for Apache Cassandra account by using Azure Databricks
-Cassandra API in Azure Cosmos DB has become a great choice for enterprise workloads running on Apache Cassandra for several reasons:
+API for Cassandra in Azure Cosmos DB has become a great choice for enterprise workloads running on Apache Cassandra for several reasons:
* **No overhead of managing and monitoring:** It eliminates the overhead of managing and monitoring settings across OS, JVM, and YAML files and their interactions. * **Significant cost savings:** You can save costs with the Azure Cosmos DB, which includes the cost of VMs, bandwidth, and any applicable licenses. You don't have to manage datacenters, servers, SSD storage, networking, and electricity costs.
-* **Ability to use existing code and tools:** The Azure Cosmos DB provides wire protocol-level compatibility with existing Cassandra SDKs and tools. This compatibility ensures that you can use your existing codebase with the Azure Cosmos DB Cassandra API with trivial changes.
+* **Ability to use existing code and tools:** The Azure Cosmos DB provides wire protocol-level compatibility with existing Cassandra SDKs and tools. This compatibility ensures that you can use your existing codebase with the Azure Cosmos DB for Apache Cassandra with trivial changes.
-There are many ways to migrate database workloads from one platform to another. [Azure Databricks](https://azure.microsoft.com/services/databricks/) is a platform as a service (PaaS) offering for [Apache Spark](https://spark.apache.org/) that offers a way to perform offline migrations on a large scale. This article describes the steps required to migrate data from native Apache Cassandra keyspaces and tables into the Azure Cosmos DB Cassandra API by using Azure Databricks.
+There are many ways to migrate database workloads from one platform to another. [Azure Databricks](https://azure.microsoft.com/services/databricks/) is a platform as a service (PaaS) offering for [Apache Spark](https://spark.apache.org/) that offers a way to perform offline migrations on a large scale. This article describes the steps required to migrate data from native Apache Cassandra keyspaces and tables into the Azure Cosmos DB for Apache Cassandra by using Azure Databricks.
## Prerequisites
-* [Provision an Azure Cosmos DB Cassandra API account](manage-data-dotnet.md#create-a-database-account).
+* [Provision an Azure Cosmos DB for Apache Cassandra account](manage-data-dotnet.md#create-a-database-account).
-* [Review the basics of connecting to an Azure Cosmos DB Cassandra API](connect-spark-configuration.md).
+* [Review the basics of connecting to an Azure Cosmos DB for Apache Cassandra](connect-spark-configuration.md).
-* Review the [supported features in the Azure Cosmos DB Cassandra API](cassandra-support.md) to ensure compatibility.
+* Review the [supported features in the Azure Cosmos DB for Apache Cassandra](support.md) to ensure compatibility.
-* Ensure you've already created empty keyspaces and tables in your target Azure Cosmos DB Cassandra API account.
+* Ensure you've already created empty keyspaces and tables in your target Azure Cosmos DB for Apache Cassandra account.
-* [Use cqlsh for validation](cassandra-support.md#cql-shell).
+* [Use cqlsh for validation](support.md#cql-shell).
## Provision an Azure Databricks cluster
You might see a 429 error code or "request rate is large" error text even if you
* [Provision throughput on containers and databases](../set-throughput.md) * [Partition key best practices](../partitioning-overview.md#choose-partitionkey) * [Estimate RU/s using the Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md)
-* [Elastic Scale in Azure Cosmos DB Cassandra API](scale-account-throughput.md)
+* [Elastic Scale in Azure Cosmos DB for Apache Cassandra](scale-account-throughput.md)
cosmos-db Migrate Data Dual Write Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data-dual-write-proxy.md
Title: Live migrate data from Apache Cassandra to the Azure Cosmos DB Cassandra API by using dual-write proxy and Apache Spark
-description: Learn how to live migrate data from an Apache Cassandra database to the Azure Cosmos DB Cassandra API by using dual-write proxy and Apache Spark
+ Title: Live migrate data from Apache Cassandra to the Azure Cosmos DB for Apache Cassandra by using dual-write proxy and Apache Spark
+description: Learn how to live migrate data from an Apache Cassandra database to the Azure Cosmos DB for Apache Cassandra by using dual-write proxy and Apache Spark
-++ Last updated 11/25/2021
-# Live migrate data from Apache Cassandra to the Azure Cosmos DB Cassandra API by using dual-write proxy and Apache Spark
+# Live migrate data from Apache Cassandra to the Azure Cosmos DB for Apache Cassandra by using dual-write proxy and Apache Spark
-Cassandra API in Azure Cosmos DB has become a great choice for enterprise workloads running on Apache Cassandra for a variety of reasons such as:
+API for Cassandra in Azure Cosmos DB has become a great choice for enterprise workloads running on Apache Cassandra for a variety of reasons such as:
* **No overhead of managing and monitoring:** It eliminates the overhead of managing and monitoring a myriad of settings across OS, JVM, and yaml files and their interactions. * **Significant cost savings:** You can save cost with Azure Cosmos DB, which includes the cost of VMs, bandwidth, and any applicable licenses. Additionally, you don't have to manage the data centers, servers, SSD storage, networking, and electricity costs.
-* **Ability to use existing code and tools:** Azure Cosmos DB provides wire protocol level compatibility with existing Cassandra SDKs and tools. This compatibility ensures you can use your existing codebase with Azure Cosmos DB Cassandra API with trivial changes.
+* **Ability to use existing code and tools:** Azure Cosmos DB provides wire protocol level compatibility with existing Cassandra SDKs and tools. This compatibility ensures you can use your existing codebase with Azure Cosmos DB for Apache Cassandra with trivial changes.
-Azure Cosmos DB does not support the native Apache Cassandra gossip protocol for replication. Therefore, where zero downtime is a requirement for migration, a different approach is necessary. This tutorial describes how to live migrate data to Azure Cosmos DB Cassandra API from a native Apache Cassandra cluster using a [dual-write proxy](https://github.com/Azure-Samples/cassandra-proxy) and [Apache Spark](https://spark.apache.org/).
+Azure Cosmos DB does not support the native Apache Cassandra gossip protocol for replication. Therefore, where zero downtime is a requirement for migration, a different approach is necessary. This tutorial describes how to live migrate data to Azure Cosmos DB for Apache Cassandra from a native Apache Cassandra cluster using a [dual-write proxy](https://github.com/Azure-Samples/cassandra-proxy) and [Apache Spark](https://spark.apache.org/).
-The following image illustrates the pattern. The dual-write proxy is used to capture live changes, while historical data is copied in bulk using Apache Spark. The proxy can accept connections from your application code with few or no configuration changes. It will route all requests to your source database and asynchronously route writes to Cassandra API while bulk copy is happening.
+The following image illustrates the pattern. The dual-write proxy is used to capture live changes, while historical data is copied in bulk using Apache Spark. The proxy can accept connections from your application code with few or no configuration changes. It will route all requests to your source database and asynchronously route writes to API for Cassandra while bulk copy is happening.
:::image type="content" source="../../managed-instance-apache-cassandra/media/migration/live-migration.gif" alt-text="Animation that shows the live migration of data to Azure Managed Instance for Apache Cassandra." border="false"::: ## Prerequisites
-* [Provision an Azure Cosmos DB Cassandra API account](manage-data-dotnet.md#create-a-database-account).
+* [Provision an Azure Cosmos DB for Apache Cassandra account](manage-data-dotnet.md#create-a-database-account).
-* [Review the basics of connecting to an Azure Cosmos DB Cassandra API](connect-spark-configuration.md).
+* [Review the basics of connecting to an Azure Cosmos DB for Apache Cassandra](connect-spark-configuration.md).
-* Review the [supported features in the Azure Cosmos DB Cassandra API](cassandra-support.md) to ensure compatibility.
+* Review the [supported features in the Azure Cosmos DB for Apache Cassandra](support.md) to ensure compatibility.
-* [Use cqlsh for validation](cassandra-support.md#cql-shell).
+* [Use cqlsh for validation](support.md#cql-shell).
-* Ensure you have network connectivity between your source cluster and target Cassandra API endpoint.
+* Ensure you have network connectivity between your source cluster and target API for Cassandra endpoint.
-* Ensure that you've already migrated the keyspace/table scheme from your source Cassandra database to your target Cassandra API account.
+* Ensure that you've already migrated the keyspace/table schema from your source Cassandra database to your target API for Cassandra account.
>[!IMPORTANT] > If you have a requirement to preserve Apache Cassandra `writetime` during migration, the following flags must be set when creating tables:
java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server>
### Configure the credentials and port
-By default, the source credentials will be passed through from your client app. The proxy will use the credentials for making connections to the source and target clusters. As mentioned earlier, this process assumes that the source and target credentials are the same. It will be necessary to specify a different username and password for the target Cassandra API endpoint separately when starting the proxy:
+By default, the source credentials will be passed through from your client app. The proxy will use these credentials for making connections to the source and target clusters. As mentioned earlier, this process assumes that the source and target credentials are the same. If they are different, you will need to specify a username and password for the target API for Cassandra endpoint separately when starting the proxy:
```bash java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server> --proxy-jks-file <path to JKS file> --proxy-jks-password <keystore password> --target-username <username> --target-password <password> ```
-The default source and target ports, when not specified, will be 9042. In this case, Cassandra API runs on port `10350`, so you need to use `--source-port` or `--target-port` to specify port numbers:
+The default source and target ports, when not specified, will be 9042. In this case, API for Cassandra runs on port `10350`, so you need to use `--source-port` or `--target-port` to specify port numbers:
```bash java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server> --source-port 9042 --target-port 10350 --proxy-jks-file <path to JKS file> --proxy-jks-password <keystore password> --target-username <username> --target-password <password>
After the dual-write proxy is running, you'll need to change the port on your ap
To load the data, create a Scala notebook in your Azure Databricks account. Replace your source and target Cassandra configurations with the corresponding credentials, and replace the source and target keyspaces and tables. Add more variables for each table as required to the following sample, and then run. After your application starts sending requests to the dual-write proxy, you're ready to migrate historical data. >[!IMPORTANT]
-> Before migrating the data, increase the container throughput to the amount required for your application to migrate quickly. Scaling the throughput before starting the migration will help you to migrate your data in less time. To help safeguard against rate-limiting during the historical data load, you may wish to enable server-side retries (SSR) in Cassandra API. See our article [here](prevent-rate-limiting-errors.md) for more information, and instructions on how to enable SSR.
+> Before migrating the data, increase the container throughput to the amount required for your application to migrate quickly. Scaling the throughput before starting the migration will help you to migrate your data in less time. To help safeguard against rate-limiting during the historical data load, you may wish to enable server-side retries (SSR) in API for Cassandra. See our article [here](prevent-rate-limiting-errors.md) for more information, and instructions on how to enable SSR.
```scala import com.datastax.spark.connector._
After the historical data load is complete, your databases should be in sync and
## Next steps > [!div class="nextstepaction"]
-> [Introduction to the Azure Cosmos DB Cassandra API](cassandra-introduction.md)
+> [Introduction to the Azure Cosmos DB for Apache Cassandra](introduction.md)
cosmos-db Migrate Data Striim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data-striim.md
Title: Migrate data to Azure Cosmos DB Cassandra API account using Striim
-description: Learn how to use Striim to migrate data from an Oracle database to an Azure Cosmos DB Cassandra API account.
+ Title: Migrate data to Azure Cosmos DB for Apache Cassandra account using Striim
+description: Learn how to use Striim to migrate data from an Oracle database to an Azure Cosmos DB for Apache Cassandra account.
-++ Last updated 12/09/2021
-# Migrate data to Azure Cosmos DB Cassandra API account using Striim
+# Migrate data to Azure Cosmos DB for Apache Cassandra account using Striim
-The Striim image in the Azure marketplace offers continuous real-time data movement from data warehouses and databases to Azure. While moving the data, you can perform in-line denormalization, data transformation, enable real-time analytics, and data reporting scenarios. It's easy to get started with Striim to continuously move enterprise data to Azure Cosmos DB Cassandra API. Azure provides a marketplace offering that makes it easy to deploy Striim and migrate data to Azure Cosmos DB.
+The Striim image in the Azure marketplace offers continuous real-time data movement from data warehouses and databases to Azure. While moving the data, you can perform in-line denormalization and data transformation, and enable real-time analytics and data reporting scenarios. It's easy to get started with Striim to continuously move enterprise data to Azure Cosmos DB for Apache Cassandra. Azure provides a marketplace offering that makes it easy to deploy Striim and migrate data to Azure Cosmos DB.
-This article shows how to use Striim to migrate data from an **Oracle database** to an **Azure Cosmos DB Cassandra API account**.
+This article shows how to use Striim to migrate data from an **Oracle database** to an **Azure Cosmos DB for Apache Cassandra account**.
## Prerequisites
This article shows how to use Striim to migrate data from an **Oracle database**
1. Select **Create a resource** and search for **Striim** in the Azure marketplace. Select the first option and **Create**.
- :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/striim-azure-marketplace.png" alt-text="Find Striim marketplace item":::
+ :::image type="content" source="media/migrate-data-striim/striim-azure-marketplace.png" alt-text="Find Striim marketplace item":::
1. Next, enter the configuration properties of the Striim instance. The Striim environment is deployed in a virtual machine. From the **Basics** pane, enter the **VM user name** and **VM password** (this password is used to SSH into the VM). Select your **Subscription**, **Resource Group**, and **Location details** where you'd like to deploy Striim. Once complete, select **OK**.
- :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/striim-configure-basic-settings.png" alt-text="Configure basic settings for Striim":::
+ :::image type="content" source="media/migrate-data-striim/striim-configure-basic-settings.png" alt-text="Configure basic settings for Striim":::
1. In the **Striim Cluster settings** pane, choose the type of Striim deployment and the virtual machine size.
This article shows how to use Striim to migrate data from an **Oracle database**
1. In the **Striim access settings** pane, configure the **Public IP address** (choose the default values), the **Domain name for Striim**, and the **Admin password** that you'd like to use to log in to the Striim UI. Configure a VNET and subnet (choose the default values). After filling in the details, select **OK** to continue.
- :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/striim-access-settings.png" alt-text="Striim access settings":::
+ :::image type="content" source="media/migrate-data-striim/striim-access-settings.png" alt-text="Striim access settings":::
1. Azure will validate the deployment and make sure everything looks good; validation takes a few minutes to complete. After the validation is completed, select **OK**.
In this section, you configure the Oracle database as the source for data moveme
## Configure target database
-In this section, you will configure the Azure Cosmos DB Cassandra API account as the target for data movement.
+In this section, you will configure the Azure Cosmos DB for Apache Cassandra account as the target for data movement.
-1. Create an [Azure Cosmos DB Cassandra API account](manage-data-dotnet.md#create-a-database-account) using the Azure portal.
+1. Create an [Azure Cosmos DB for Apache Cassandra account](manage-data-dotnet.md#create-a-database-account) using the Azure portal.
-1. Navigate to the **Data Explorer** pane in your Azure Cosmos account. Select **New Table** to create a new container. Assume that you are migrating *products* and *orders* data from Oracle database to Azure Cosmos DB. Create a new Keyspace named **StriimDemo** with an Orders container. Provision the container with **1000 RUs**(this example uses 1000 RUs, but you should use the throughput estimated for your workload), and **/ORDER_ID** as the primary key. These values will differ depending on your source data.
+1. Navigate to the **Data Explorer** pane in your Azure Cosmos DB account. Select **New Table** to create a new container. Assume that you are migrating *products* and *orders* data from an Oracle database to Azure Cosmos DB. Create a new keyspace named **StriimDemo** with an Orders container. Provision the container with **1000 RUs** (this example uses 1000 RUs, but you should use the throughput estimated for your workload), and **/ORDER_ID** as the primary key. These values will differ depending on your source data.
- :::image type="content" source="./media/migrate-data-striim/create-cassandra-api-account.png" alt-text="Create Cassandra API account":::
+ :::image type="content" source="./media/migrate-data-striim/create-cassandra-api-account.png" alt-text="Create API for Cassandra account":::
## Configure Oracle to Azure Cosmos DB data flow 1. Navigate to the Striim instance that you deployed in the Azure portal. Select the **Connect** button in the upper menu bar and from the **SSH** tab, copy the URL in **Login using VM local account** field.
- :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/get-ssh-url.png" alt-text="Get the SSH URL":::
+ :::image type="content" source="media/migrate-data-striim/get-ssh-url.png" alt-text="Get the SSH URL":::
1. Open a new terminal window and run the SSH command you copied from the Azure portal. This article uses the terminal on macOS; you can follow similar instructions using PuTTY or a different SSH client on a Windows machine. When prompted, type **yes** to continue, and enter the **password** you set for the virtual machine in the previous step.
- :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/striim-vm-connect.png" alt-text="Connect to Striim VM":::
+ :::image type="content" source="media/migrate-data-striim/striim-vm-connect.png" alt-text="Connect to Striim VM":::
1. From the same terminal window, restart the Striim server by executing the following commands:
In this section, you will configure the Azure Cosmos DB Cassandra API account as
1. Now, navigate back to Azure and copy the Public IP address of your Striim VM.
- :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/copy-public-ip-address.png" alt-text="Copy Striim VM IP address":::
+ :::image type="content" source="media/migrate-data-striim/copy-public-ip-address.png" alt-text="Copy Striim VM IP address":::
1. To navigate to Striim's web UI, open a new tab in a browser and enter the public IP address followed by `:9080`. Sign in by using the **admin** username, along with the admin password you specified in the Azure portal.
- :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/striim-login-ui.png" alt-text="Sign in to Striim":::
+ :::image type="content" source="media/migrate-data-striim/striim-login-ui.png" alt-text="Sign in to Striim":::
1. Now you'll arrive at Striim's home page. There are three different panes: **Dashboards**, **Apps**, and **SourcePreview**. The Dashboards pane allows you to move data in real time and visualize it. The Apps pane contains your streaming data pipelines, or data flows. On the right-hand side of the page is SourcePreview, where you can preview your data before moving it. 1. Select the **Apps** pane; we'll focus on this pane for now. There are a variety of sample apps that you can use to learn about Striim; however, in this article you will create your own. Select the **Add App** button in the top right-hand corner.
- :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/add-striim-app.png" alt-text="Add the Striim app":::
+ :::image type="content" source="media/migrate-data-striim/add-striim-app.png" alt-text="Add the Striim app":::
1. There are a few different ways to create Striim applications. Select **Start from Scratch** for this scenario.
In this section, you will configure the Azure Cosmos DB Cassandra API account as
1. Enter the configuration properties of your target Azure Cosmos DB instance and select **Save** to continue. Here are the key parameters to note:
- * **Adapter** - Use **DatabaseWriter**. When writing to Azure Cosmos DB Cassandra API, DatabaseWriter is required. The Cassandra driver 3.6.0 is bundled with Striim. If the DatabaseWriter exceeds the number of RUs provisioned on your Azure Cosmos container, the application will crash.
+ * **Adapter** - Use **DatabaseWriter**. When writing to Azure Cosmos DB for Apache Cassandra, DatabaseWriter is required. The Cassandra driver 3.6.0 is bundled with Striim. If the DatabaseWriter exceeds the number of RUs provisioned on your Azure Cosmos DB container, the application will crash.
* **Connection URL** - Specify your Azure Cosmos DB JDBC connection URL. The URL is in the format `jdbc:cassandra://<contactpoint>:10350/<databaseName>?SSL=true`
- * **Username** - Specify your Azure Cosmos account name.
+ * **Username** - Specify your Azure Cosmos DB account name.
- * **Password** - Specify the primary key of your Azure Cosmos account.
+ * **Password** - Specify the primary key of your Azure Cosmos DB account.
* **Tables** - Target tables must have primary keys, and primary keys cannot be updated.
In this section, you will configure the Azure Cosmos DB Cassandra API account as
:::image type="content" source="./media/migrate-data-striim/setup-cdc-pipeline.png" alt-text="Set up the CDC pipeline":::
-1. Finally, let's sign into Azure and navigate to your Azure Cosmos account. Refresh the Data Explorer, and you can see that data has arrived.
+1. Finally, sign in to Azure and navigate to your Azure Cosmos DB account. Refresh the Data Explorer, and you can see that the data has arrived.
By using the Striim solution in Azure, you can continuously migrate data to Azure Cosmos DB from various sources such as Oracle, Cassandra, and MongoDB. To learn more, visit the [Striim website](https://www.striim.com/), [download a free 30-day trial of Striim](https://go2.striim.com/download-free-trial), and for any issues when setting up the migration path with Striim, file a [support request](https://go2.striim.com/request-support-striim). ## Next steps
-* If you are migrating data to Azure Cosmos DB SQL API, see [how to migrate data to Cassandra API account using Striim](../cosmosdb-sql-api-migrate-data-striim.md)
+* If you are migrating data to Azure Cosmos DB for NoSQL, see [how to migrate data to an API for NoSQL account using Striim](../cosmosdb-sql-api-migrate-data-striim.md)
-* [Monitor and debug your data with Azure Cosmos DB metrics](../use-metrics.md)
+* [Monitor and debug your data with Azure Cosmos DB metrics](../use-metrics.md)
cosmos-db Migrate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data.md
Title: 'Migrate your data to a Cassandra API account in Azure Cosmos DB- Tutorial'
-description: In this tutorial, learn how to copy data from Apache Cassandra to a Cassandra API account in Azure Cosmos DB.
+ Title: 'Migrate your data to an API for Cassandra account in Azure Cosmos DB- Tutorial'
+description: In this tutorial, learn how to copy data from Apache Cassandra to an API for Cassandra account in Azure Cosmos DB.
-+ Last updated 12/03/2018 ms.devlang: csharp-+ #Customer intent: As a developer, I want to migrate my existing Cassandra workloads to Azure Cosmos DB so that the overhead to manage resources, clusters, and garbage collection is automatically handled by Azure Cosmos DB.
-# Tutorial: Migrate your data to a Cassandra API account
+# Tutorial: Migrate your data to an API for Cassandra account
-As a developer, you might have existing Cassandra workloads that are running on-premises or in the cloud, and you might want to migrate them to Azure. You can migrate such workloads to a Cassandra API account in Azure Cosmos DB. This tutorial provides instructions on different options available to migrate Apache Cassandra data into the Cassandra API account in Azure Cosmos DB.
+As a developer, you might have existing Cassandra workloads that are running on-premises or in the cloud, and you might want to migrate them to Azure. You can migrate such workloads to an API for Cassandra account in Azure Cosmos DB. This tutorial provides instructions on different options available to migrate Apache Cassandra data into the API for Cassandra account in Azure Cosmos DB.
This tutorial covers the following tasks:
If you don't have an Azure subscription, create a [free account](https://azure
## Prerequisites for migration
-* **Estimate your throughput needs:** Before migrating data to the Cassandra API account in Azure Cosmos DB, you should estimate the throughput needs of your workload. In general, start with the average throughput required by the CRUD operations, and then include the additional throughput required for the Extract Transform Load or spiky operations. You need the following details to plan for migration:
+* **Estimate your throughput needs:** Before migrating data to the API for Cassandra account in Azure Cosmos DB, you should estimate the throughput needs of your workload. In general, start with the average throughput required by the CRUD operations, and then include the additional throughput required for extract-transform-load (ETL) or spiky operations. You need the following details to plan for migration:
* **Existing data size or estimated data size:** Defines the minimum database size and throughput requirement. If you are estimating data size for a new application, you can assume that the data is uniformly distributed across the rows, and estimate the value by multiplying with the data size.
If you don't have an Azure subscription, create a [free account](https://azure
After you identify the requirements of your existing workload, create an Azure Cosmos DB account, database, and containers, according to the gathered throughput requirements.
- * **Determine the RU charge for an operation:** You can determine the RUs by using any of the SDKs supported by the Cassandra API. This example shows the .NET version of getting RU charges.
+ * **Determine the RU charge for an operation:** You can determine the RUs by using any of the SDKs supported by the API for Cassandra. This example shows the .NET version of getting RU charges.
```csharp var tableInsertStatement = table.Insert(sampleEntity);
If you don't have an Azure subscription, create a [free account](https://azure
* **Allocate the required throughput:** Azure Cosmos DB can automatically scale storage and throughput as your requirements grow. You can estimate your throughput needs by using the [Azure Cosmos DB request unit calculator](https://www.documentdb.com/capacityplanner).
-* **Create tables in the Cassandra API account:** Before you start migrating data, pre-create all your tables from the Azure portal or from `cqlsh`. If you're migrating to an Azure Cosmos DB account that has database-level throughput, make sure to provide a partition key when you create the containers.
+* **Create tables in the API for Cassandra account:** Before you start migrating data, pre-create all your tables from the Azure portal or from `cqlsh` (see the example after this list). If you're migrating to an Azure Cosmos DB account that has database-level throughput, make sure to provide a partition key when you create the containers.
* **Increase throughput:** The duration of your data migration depends on the amount of throughput you provisioned for the tables in Azure Cosmos DB. Increase the throughput for the duration of migration. With the higher throughput, you can avoid rate limiting and migrate in less time. After you've completed the migration, decrease the throughput to save costs. We also recommend that you have the Azure Cosmos DB account in the same region as your source database.
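For instance, here is a minimal sketch of pre-creating a keyspace and table from `cqlsh`. The keyspace, table, and column names are illustrative placeholders, so substitute your own schema; the replication clause is required by CQL syntax, but Azure Cosmos DB manages replication for you.

```sql
CREATE KEYSPACE exampleks WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 1};

CREATE TABLE exampleks.tablename (
    key text PRIMARY KEY,
    "C0" text,
    "C1" text
);
```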
You can move data from existing Cassandra workloads to Azure Cosmos DB by using
### Migrate data by using the cqlsh COPY command
-Use the [CQL COPY command](https://cassandra.apache.org/doc/latest/cassandra/tools/cqlsh.html#cqlshrc) to copy local data to the Cassandra API account in Azure Cosmos DB.
+Use the [CQL COPY command](https://cassandra.apache.org/doc/latest/cassandra/tools/cqlsh.html#cqlshrc) to copy local data to the API for Cassandra account in Azure Cosmos DB.
> [!WARNING] > Only use the CQL COPY to migrate small datasets. To move large datasets, [migrate data by using Spark](#migrate-data-by-using-spark).
Use the [CQL COPY command](https://cassandra.apache.org/doc/latest/cassandra/too
COPY exampleks.tablename TO 'data.csv' WITH HEADER = TRUE; ```
-1. Now get your Cassandra API account's connection string information:
+1. Now get your API for Cassandra account's connection string information:
* Sign in to the [Azure portal](https://portal.azure.com), and go to your Azure Cosmos DB account.
- * Open the **Connection String** pane. Here you see all the information that you need to connect to your Cassandra API account from `cqlsh`.
+ * Open the **Connection String** pane. Here you see all the information that you need to connect to your API for Cassandra account from `cqlsh`.
1. Sign in to `cqlsh` by using the connection information from the portal.
Use the [CQL COPY command](https://cassandra.apache.org/doc/latest/cassandra/too
COPY exampleks.tablename FROM 'data.csv' WITH HEADER = TRUE; ``` > [!NOTE]
-> Cassandra API supports protocol version 4, which shipped with Cassandra 3.11. There may be issues with using later protocol versions with our API. COPY FROM with later protocol version can go into a loop and return duplicate rows.
+> API for Cassandra supports protocol version 4, which shipped with Cassandra 3.11. There may be issues with using later protocol versions with our API. COPY FROM with a later protocol version can go into a loop and return duplicate rows.
> Add the protocol-version to the cqlsh command. ```sql cqlsh <USERNAME>.cassandra.cosmos.azure.com 10350 -u <USERNAME> -p <PASSWORD> --ssl --protocol-version=4
We recommend the below configuration (at minimum) for a collection at 20,000 RUs
###### Example commands -- Copying data from Cassandra API to local csv file
+- Copying data from API for Cassandra to local csv file
```sql COPY standard1 (key, "C0", "C1", "C2", "C3", "C4") TO 'backup.csv' WITH PAGESIZE=100 AND MAXREQUESTS=1 ; ``` -- Copying data from local csv file to Cassandra API
+- Copying data from local csv file to API for Cassandra
```sql COPY standard2 (key, "C0", "C1", "C2", "C3", "C4") FROM 'backup.csv' WITH CHUNKSIZE=100 AND INGESTRATE=100 AND MAXATTEMPTS=10; ```
COPY standard2 (key, "C0", "C1", "C2", "C3", "C4") FROM 'backup.csv' WITH CHUNKS
### Migrate data by using Spark
-Use the following steps to migrate data to the Cassandra API account with Spark:
+Use the following steps to migrate data to the API for Cassandra account with Spark:
1. Provision an [Azure Databricks cluster](spark-databricks.md) or an [Azure HDInsight cluster](spark-hdinsight.md).
-1. Move data to the destination Cassandra API endpoint. Refer to this [how-to guide](migrate-data-databricks.md) for migration with Azure Databricks.
+1. Move data to the destination API for Cassandra endpoint. Refer to this [how-to guide](migrate-data-databricks.md) for migration with Azure Databricks.
Migrating data by using Spark jobs is a recommended option if you have data residing in an existing cluster in Azure virtual machines or any other cloud. To do this, you must set up Spark as an intermediary for one-time or regular ingestion. You can accelerate this migration by using Azure ExpressRoute connectivity between your on-premises environment and Azure.
When they're no longer needed, you can delete the resource group, the Azure Cosm
## Next steps
-In this tutorial, you've learned how to migrate your data to a Cassandra API account in Azure Cosmos DB. You can now learn about other concepts in Azure Cosmos DB:
+In this tutorial, you've learned how to migrate your data to an API for Cassandra account in Azure Cosmos DB. You can now learn about other concepts in Azure Cosmos DB:
> [!div class="nextstepaction"] > [Tunable data consistency levels in Azure Cosmos DB](../consistency-levels.md)
cosmos-db Monitor Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/monitor-insights.md
+
+ Title: Monitor and debug with insights in Azure Cosmos DB for Apache Cassandra
+description: Learn how to debug and monitor your Azure Cosmos DB for Apache Cassandra account using insights
+++++ Last updated : 05/02/2022+++
+# Monitor and debug with insights in Azure Cosmos DB for Apache Cassandra
+
+Azure Cosmos DB provides insights into your application's performance by using the Azure Monitor API. Azure Monitor for Azure Cosmos DB provides a metrics view to monitor your API for Cassandra account and create dashboards.
+
+This article walks through some common use cases and how best to use Azure Cosmos DB insights to analyze and debug your API for Cassandra account.
+> [!NOTE]
+> The Azure Cosmos DB metrics are collected by default; this feature does not require you to enable or configure anything.
++
+## Availability
+The availability chart shows the percentage of successful requests out of the total requests per hour. Use it to monitor service availability for a specified API for Cassandra account.
+++
+## Latency
+The charts below show the read and write latency observed by your API for Cassandra account in the region where your account is operating. You can visualize latency across regions for a geo-replicated account. This metric doesn't represent the end-to-end request latency. Use diagnostic logs for cases where you experience high latency for query operations.
+
+The server-side latency (avg) by region chart also surfaces sudden latency spikes on the server. It can help you differentiate between a client-side latency spike and a server-side latency spike.
++
+You can also view server-side latency by operation in a specific keyspace.
+++++
+Is your application experiencing any throttling? The chart below shows the total number of requests that failed with a 429 response code.
+Exceeding provisioned throughput could be one of the reasons. Enable [Server Side Retry](./prevent-rate-limiting-errors.md) when your application experiences high throttling due to consuming more request units than are allocated.
++++
+## System and management operations
+The system view shows the metadata request count by primary partition and helps identify throttled requests. The management operations view shows account activities such as creation, deletion, and updates to keys, network, and replication settings, along with request volume per status code over a time period.
++
+- Metric chart for account diagnostic, network, and replication settings over a specified period, filtered by keyspace.
+++
+- Metric chart to view account key rotation.
+
+You can view changes to the primary or secondary password for your API for Cassandra account.
+++
+## Storage
+These charts show the storage distribution for raw and index storage, along with a count of documents in the API for Cassandra account.
++
+You can also view the maximum request unit consumption for an account over a defined time period.
+++
+## Throughput and requests
+The Total Request Units metric displays the request unit usage based on operation types.
+
+These operations can be analyzed within a given time interval, for a defined keyspace or table.
+++
+The Normalized RU Consumption metric is a value between 0% and 100% that is used to help measure the utilization of provisioned throughput on a database or container. The metric can also be used to view the utilization of individual partition key ranges on a database or container. One of the main factors of a scalable application is having good cardinality of partition keys.
+The chart below shows whether your application's high RU consumption is caused by a hot partition.
++
+The chart below shows a breakdown of requests by status code. To understand the meaning of each code, see [API for Cassandra error codes](./error-codes-solution.md).
+++
+## Next steps
+- [Monitor and debug with insights in Azure Cosmos DB](../use-metrics.md)
+- [Create alerts for Azure Cosmos DB using Azure Monitor](../create-alerts.md)
cosmos-db Oracle Migrate Cosmos Db Arcion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/oracle-migrate-cosmos-db-arcion.md
Title: Migrate data from Oracle to Azure Cosmos DB Cassandra API using Arcion
-description: Learn how to migrate data from Oracle database to Azure Cosmos DB Cassandra API using Arcion.
+ Title: Migrate data from Oracle to Azure Cosmos DB for Apache Cassandra using Arcion
+description: Learn how to migrate data from Oracle database to Azure Cosmos DB for Apache Cassandra using Arcion.
-++ Last updated 04/04/2022
-# Migrate data from Oracle to Azure Cosmos DB Cassandra API account using Arcion
+# Migrate data from Oracle to Azure Cosmos DB for Apache Cassandra account using Arcion
-Cassandra API in Azure Cosmos DB has become a great choice for enterprise workloads that are running on Oracle for reasons such as:
+API for Cassandra in Azure Cosmos DB has become a great choice for enterprise workloads that are running on Oracle for reasons such as:
* **Better scalability and availability:** It eliminates single points of failure, better scalability, and availability for your applications.
Cassandra API in Azure Cosmos DB has become a great choice for enterprise worklo
* **No overhead of managing and monitoring:** As a fully managed cloud service, Azure Cosmos DB removes the overhead of managing and monitoring a myriad of settings.
-There are various ways to migrate database workloads from one platform to another. [Arcion](https://www.arcion.io) is a tool that offers a secure and reliable way to perform zero downtime migration from other databases to Azure Cosmos DB. This article describes the steps required to migrate data from Oracle database to Azure Cosmos DB Cassandra API using Arcion.
+There are various ways to migrate database workloads from one platform to another. [Arcion](https://www.arcion.io) is a tool that offers a secure and reliable way to perform zero downtime migration from other databases to Azure Cosmos DB. This article describes the steps required to migrate data from Oracle database to Azure Cosmos DB for Apache Cassandra using Arcion.
> [!NOTE] > This offering from Arcion is currently in beta. For more information, please contact them at [Arcion Support](mailto:support@arcion.io)
This section describes the steps required to setup Arcion and migrates data from
After filling out the database filter details, save and close the file.
-1. Next you will set up the configuration of the destination database. Before you define the configuration, [create an Azure Cosmos DB Cassandra API account](manage-data-dotnet.md#create-a-database-account). [Choose the right partition key](../partitioning-overview.md#choose-partitionkey) from your data and then create a Keyspace, and a table to store the migrated data.
+1. Next you will set up the configuration of the destination database. Before you define the configuration, [create an Azure Cosmos DB for Apache Cassandra account](manage-data-dotnet.md#create-a-database-account). [Choose the right partition key](../partitioning-overview.md#choose-partitionkey) from your data and then create a Keyspace, and a table to store the migrated data.
1. Before migrating the data, increase the container throughput to the amount required for your application to migrate quickly. For example, you can increase the throughput to 100000 RUs. Scaling the throughput before starting the migration will help you to migrate your data in less time.
- :::image type="content" source="./media/oracle-migrate-cosmos-db-arcion/scale-throughput.png" alt-text="Scale Azure Cosmos container throughout":::
+ :::image type="content" source="./media/oracle-migrate-cosmos-db-arcion/scale-throughput.png" alt-text="Scale Azure Cosmos DB container throughput":::
You must decrease the throughput after the migration is complete. Based on the amount of data stored and RUs required for each operation, you can estimate the throughput required after data migration. To learn more on how to estimate the RUs required, see [Provision throughput on containers and databases](../set-throughput.md) and [Estimate RU/s using the Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md) articles.
-1. Get the **Contact Point, Port, Username**, and **Primary Password** of your Azure Cosmos account from the **Connection String** pane. You will use these values in the configuration file.
+1. Get the **Contact Point, Port, Username**, and **Primary Password** of your Azure Cosmos DB account from the **Connection String** pane. You will use these values in the configuration file.
1. From the CLI terminal, set up the destination database configuration. Open the configuration file using the **`vi conf/conn/cosmosdb.yml`** command and add a comma-separated list of host URI, port number, username, password, and other required parameters. The following is an example of the contents of the configuration file: ```bash type: COSMOSDB
- host: `<Azure Cosmos account's Contact point>`
+ host: `<Azure Cosmos DB account's Contact point>`
port: 10350 username: 'arciondemo'
- password: `<Your Azure Cosmos account's primary password>`
+ password: `<Your Azure Cosmos DB account's primary password>`
max-connections: 30 use-ssl: false
This section describes the steps required to setup Arcion and migrates data from
1. Next migrate the data using Arcion. You can run the Arcion replicant in **full** or **snapshot** mode:
- * **Full mode** - In this mode, the replicant continues to run after migration and it listens for any changes on the source Oracle system. If it detects any changes, they're replicated on the target Azure Cosmos account in real time.
+ * **Full mode** - In this mode, the replicant continues to run after migration and it listens for any changes on the source Oracle system. If it detects any changes, they're replicated on the target Azure Cosmos DB account in real time.
* **Snapshot mode** - In this mode, you can perform schema migration and one-time data replication. Real-time replication isn't supported with this option.
This section describes the steps required to setup Arcion and migrates data from
./bin/replicant full conf/conn/oracle.yaml conf/conn/cosmosdb.yaml --filter filter/oracle_filter.yaml --replace-existing ```
- The replicant UI shows the replication progress. Once the schema migration and snapshot operation are done, the progress shows 100%. After the migration is complete, you can validate the data on the target Azure Cosmos database.
+ The replicant UI shows the replication progress. Once the schema migration and snapshot operation are done, the progress shows 100%. After the migration is complete, you can validate the data on the target Azure Cosmos DB database.
:::image type="content" source="./media/oracle-migrate-cosmos-db-arcion/oracle-data-migration-output.png" alt-text="Oracle data migration output":::
-1. Because you have used full mode for migration, you can perform operations such as insert, update, or delete data on the source Oracle database. Later you can validate that they're replicated real time on the target Azure Cosmos database. After the migration, make sure to decrease the throughput configured for your Azure Cosmos container.
+1. Because you have used full mode for migration, you can perform operations such as inserting, updating, or deleting data on the source Oracle database. Later you can validate that they're replicated in real time on the target Azure Cosmos DB database. After the migration, make sure to decrease the throughput configured for your Azure Cosmos DB container.
1. You can stop the replicant at any point and restart it with the **--resume** switch. The replication resumes from the point where it stopped without compromising data consistency. The following command shows how to use the resume switch.
cosmos-db Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/partitioning.md
+
+ Title: Partitioning in Azure Cosmos DB for Apache Cassandra
+description: Learn about partitioning in Azure Cosmos DB for Apache Cassandra
++++++ Last updated : 09/03/2021++
+# Partitioning in Azure Cosmos DB for Apache Cassandra
+
+This article describes how partitioning works in Azure Cosmos DB for Apache Cassandra.
+
+API for Cassandra uses partitioning to scale the individual tables in a keyspace to meet the performance needs of your application. Partitions are formed based on the value of a partition key that is associated with each record in a table. All the records in a partition have the same partition key value. Azure Cosmos DB transparently and automatically manages the placement of partitions across the physical resources to efficiently satisfy the scalability and performance needs of the table. As the throughput and storage requirements of an application increase, Azure Cosmos DB moves and balances the data across a greater number of physical machines.
+
+From the developer perspective, partitioning behaves in the same way for Azure Cosmos DB for Apache Cassandra as it does in native [Apache Cassandra](https://cassandra.apache.org/). However, there are some differences behind the scenes.
++
+## Differences between Apache Cassandra and Azure Cosmos DB
+
+In Azure Cosmos DB, each machine on which partitions are stored is itself referred to as a [physical partition](../partitioning-overview.md#physical-partitions). The physical partition is akin to a virtual machine: a dedicated compute unit, or set of physical resources. Each partition stored on this compute unit is referred to as a [logical partition](../partitioning-overview.md#logical-partitions) in Azure Cosmos DB. If you are already familiar with Apache Cassandra, you can think of logical partitions in the same way that you think of regular partitions in Cassandra.
+
+Apache Cassandra recommends a 100-MB limit on the size of the data that can be stored in a partition. The API for Cassandra in Azure Cosmos DB allows up to 20 GB per logical partition, and up to 30 GB of data per physical partition. In Azure Cosmos DB, unlike Apache Cassandra, the compute capacity available in the physical partition is expressed using a single metric called [request units](../request-units.md), which allows you to think of your workload in terms of requests (reads or writes) per second, rather than cores, memory, or IOPS. This can make capacity planning more straightforward, once you understand the cost of each request. Each physical partition can have up to 10,000 RUs of compute available to it. You can learn more about scalability options by reading our article on [elastic scale](scale-account-throughput.md) in API for Cassandra.
+
+In Azure Cosmos DB, each physical partition consists of a set of replicas, also known as replica sets, with at least 4 replicas per partition. This is in contrast to Apache Cassandra, where setting a replication factor of 1 is possible. However, this leads to low availability if the only node with the data goes down. In API for Cassandra there is always a replication factor of 4 (quorum of 3). Azure Cosmos DB automatically manages replica sets, while these need to be maintained using various tools in Apache Cassandra.
+
+Apache Cassandra has a concept of tokens, which are hashes of partition keys. The tokens are based on a Murmur3 64-bit hash, with values ranging from -2^63 to 2^63 - 1. This range is commonly referred to as the "token ring" in Apache Cassandra. The token ring is distributed into token ranges, and these ranges are divided amongst the nodes present in a native Apache Cassandra cluster. Partitioning for Azure Cosmos DB is implemented in a similar way, except it uses a different hash algorithm and has a larger internal token ring. However, externally we expose the same token range as Apache Cassandra, that is, -2^63 to 2^63 - 1.
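+In native Apache Cassandra, you can inspect the token computed for a row's partition key with the built-in `token()` function. A minimal sketch, using the `uprofile.user` table defined in the sections that follow:
+
+```shell
+SELECT id, token(id) FROM uprofile.user;
+```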
++
+## Primary key
+
+All tables in API for Cassandra must have a `primary key` defined. The syntax for a primary key is shown below:
+
+```shell
+column_name cql_type_definition PRIMARY KEY
+```
+
+Suppose we want to create a user table, which stores messages for different users:
+
+```shell
+CREATE TABLE uprofile.user (
+ id UUID PRIMARY KEY,
+ user text,
+ message text);
+```
+
+In this design, we have defined the `id` field as the primary key. The primary key functions as the identifier for the record in the table and it is also used as the partition key in Azure Cosmos DB. If the primary key is defined in the previously described way, there will only be a single record in each partition. This will result in a perfectly horizontal and scalable distribution when writing data to the database, and is ideal for key-value lookup use cases. The application should provide the primary key whenever reading data from the table, to maximize read performance.
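+For example, a read that supplies the primary key is routed to a single partition. A minimal sketch, where the `id` value is a hypothetical UUID:
+
+```shell
+SELECT * FROM uprofile.user WHERE id = 550e8400-e29b-41d4-a716-446655440000;
+```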
+++
+## Compound primary key
+
+Apache Cassandra also has a concept of `compound keys`. A compound `primary key` consists of more than one column; the first column is the `partition key`, and any additional columns are the `clustering keys`. The syntax for a `compound primary key` is shown below:
+
+```shell
+PRIMARY KEY (partition_key_column_name, clustering_column_name [, ...])
+```
+
+Suppose we want to change the above design and make it possible to efficiently retrieve messages for a given user:
+
+```shell
+CREATE TABLE uprofile.user (
+ user text,
+ id int,
+ message text,
+ PRIMARY KEY (user, id));
+```
+
+In this design, we are now defining `user` as the partition key and `id` as the clustering key. You can define as many clustering keys as you wish, but each clustering key value (or combination of values) must be unique within the partition; this is what allows multiple records to be added to the same partition, for example:
+
+```shell
+insert into uprofile.user (user, id, message) values ('theo', 1, 'hello');
+insert into uprofile.user (user, id, message) values ('theo', 2, 'hello again');
+```
+
+When data is returned, it is sorted by the clustering key, as expected in Apache Cassandra:
++
+> [!WARNING]
+> When querying data in a table that has a compound primary key, if you want to filter on the partition key *and* any other non-indexed fields aside from the clustering key, ensure that you *explicitly add a secondary index on the partition key*:
+>
+> ```shell
+> CREATE INDEX ON uprofile.user (user);
+> ```
+>
+> Azure Cosmos DB for Apache Cassandra does not apply indexes to partition keys by default, and the index in this scenario may significantly improve query performance. Review our article on [secondary indexing](secondary-indexing.md) for more information.
+
+With data modeled in this way, multiple records can be assigned to each partition, grouped by user. We can thus issue a query that is efficiently routed by the `partition key` (in this case, `user`) to get all the messages for a given user.
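+For example, the following query (a sketch based on the sample rows inserted above) is routed to the single partition that holds all of theo's messages, and the results are returned sorted by the clustering key `id`:
+
+```shell
+SELECT * FROM uprofile.user WHERE user = 'theo';
+```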
+++++
+## Composite partition key
+
+Composite partition keys work essentially the same way as compound keys, except that you can specify multiple columns as a composite partition key. The syntax of composite partition keys is shown below:
+
+```shell
+PRIMARY KEY (
+ (partition_key_column_name[, ...]),
+ clustering_column_name [, ...]);
+```
+For example, you can have the following, where the unique combination of `firstname` and `lastname` would form the partition key, and `id` is the clustering key:
+
+```shell
+CREATE TABLE uprofile.user (
+ firstname text,
+ lastname text,
+ id int,
+ message text,
+ PRIMARY KEY ((firstname, lastname), id) );
+```
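+With a composite partition key, a single-partition read must supply every partition key column with an equality predicate. For example (a sketch with hypothetical values):
+
+```shell
+SELECT * FROM uprofile.user WHERE firstname = 'theo' AND lastname = 'brown';
+```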
+
+## Next steps
+
+* Learn about [partitioning and horizontal scaling in Azure Cosmos DB](../partitioning-overview.md).
+* Learn about [provisioned throughput in Azure Cosmos DB](../request-units.md).
+* Learn about [global distribution in Azure Cosmos DB](../distribute-data-globally.md).
cosmos-db Postgres Migrate Cosmos Db Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/postgres-migrate-cosmos-db-kafka.md
Title: Migrate data from PostgreSQL to Azure Cosmos DB Cassandra API account using Apache Kafka
-description: Learn how to use Kafka Connect to synchronize data from PostgreSQL to Azure Cosmos DB Cassandra API in real time.
+ Title: Migrate data from PostgreSQL to Azure Cosmos DB for Apache Cassandra account using Apache Kafka
+description: Learn how to use Kafka Connect to synchronize data from PostgreSQL to Azure Cosmos DB for Apache Cassandra in real time.
-++ Last updated 04/02/2022
-# Migrate data from PostgreSQL to Azure Cosmos DB Cassandra API account using Apache Kafka
+# Migrate data from PostgreSQL to Azure Cosmos DB for Apache Cassandra account using Apache Kafka
-Cassandra API in Azure Cosmos DB has become a great choice for enterprise workloads running on Apache Cassandra for various reasons such as:
+API for Cassandra in Azure Cosmos DB has become a great choice for enterprise workloads running on Apache Cassandra for various reasons such as:
* **Significant cost savings:** You can save cost with Azure Cosmos DB, which includes the cost of VMs, bandwidth, and any applicable Oracle licenses. Additionally, you don't have to manage the data centers, servers, SSD storage, networking, and electricity costs.
Cassandra API in Azure Cosmos DB has become a great choice for enterprise worklo
[Kafka Connect](https://kafka.apache.org/documentation/#connect) is a platform to stream data between [Apache Kafka](https://kafka.apache.org/) and other systems in a scalable and reliable manner. It supports several off the shelf connectors, which means that you don't need custom code to integrate external systems with Apache Kafka.
-This article will demonstrate how to use a combination of Kafka connectors to set up a data pipeline to continuously synchronize records from a relational database such as [PostgreSQL](https://www.postgresql.org/) to [Azure Cosmos DB Cassandra API](cassandra-introduction.md).
+This article will demonstrate how to use a combination of Kafka connectors to set up a data pipeline to continuously synchronize records from a relational database such as [PostgreSQL](https://www.postgresql.org/) to [Azure Cosmos DB for Apache Cassandra](introduction.md).
## Overview Here is a high-level overview of the end-to-end flow presented in this article.
-Data in PostgreSQL table will be pushed to Apache Kafka using the [Debezium PostgreSQL connector](https://debezium.io/documentation/reference/1.2/connectors/postgresql.html), which is a Kafka Connect **source** connector. Inserts, updates, or deletion to records in the PostgreSQL table will be captured as `change data` events and sent to Kafka topic(s). The [DataStax Apache Kafka connector](https://docs.datastax.com/en/kafka/doc/kafka/kafkaIntro.html) (Kafka Connect **sink** connector), forms the second part of the pipeline. It will synchronize the change data events from Kafka topic to Azure Cosmos DB Cassandra API tables.
+Data in the PostgreSQL table will be pushed to Apache Kafka using the [Debezium PostgreSQL connector](https://debezium.io/documentation/reference/1.2/connectors/postgresql.html), which is a Kafka Connect **source** connector. Inserts, updates, and deletions of records in the PostgreSQL table will be captured as `change data` events and sent to Kafka topic(s). The [DataStax Apache Kafka connector](https://docs.datastax.com/en/kafka/doc/kafka/kafkaIntro.html) (Kafka Connect **sink** connector) forms the second part of the pipeline. It will synchronize the change data events from the Kafka topic to Azure Cosmos DB for Apache Cassandra tables.
> [!NOTE] > Using specific features of the DataStax Apache Kafka connector allows us to push data to multiple tables. In this example, the connector will help us persist change data records to two Cassandra tables that can support different query requirements. ## Prerequisites
-* [Provision an Azure Cosmos DB Cassandra API account](manage-data-dotnet.md#create-a-database-account)
-* [Use cqlsh for validation](cassandra-support.md#cql-shell)
+* [Provision an Azure Cosmos DB for Apache Cassandra account](manage-data-dotnet.md#create-a-database-account)
+* [Use cqlsh for validation](support.md#cql-shell)
* JDK 8 or above * [Docker](https://www.docker.com/) (optional)
You can continue to insert more data into PostgreSQL and confirm that the record
## Next steps
-* [Integrate Apache Kafka and Azure Cosmos DB Cassandra API using Kafka Connect](kafka-connect.md)
+* [Integrate Apache Kafka and Azure Cosmos DB for Apache Cassandra using Kafka Connect](kafka-connect.md)
* [Integrate Apache Kafka Connect on Azure Event Hubs (Preview) with Debezium for Change Data Capture](../../event-hubs/event-hubs-kafka-connect-debezium.md)
-* [Migrate data from Oracle to Azure Cosmos DB Cassandra API using Arcion](oracle-migrate-cosmos-db-arcion.md)
+* [Migrate data from Oracle to Azure Cosmos DB for Apache Cassandra using Arcion](oracle-migrate-cosmos-db-arcion.md)
* [Provision throughput on containers and databases](../set-throughput.md) * [Partition key best practices](../partitioning-overview.md#choose-partitionkey) * [Estimate RU/s using the Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md)
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/powershell-samples.md
Title: Azure PowerShell samples for Azure Cosmos DB Cassandra API
-description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB Cassandra API
+ Title: Azure PowerShell samples for Azure Cosmos DB for Apache Cassandra
+description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB for Apache Cassandra
-++ Last updated 01/20/2021
-# Azure PowerShell samples for Azure Cosmos DB Cassandra API
+# Azure PowerShell samples for Azure Cosmos DB for Apache Cassandra
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Azure Cosmos DB from our GitHub repository, [Azure Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
## Common Samples |Task | Description | |||
-|[Update an account](../scripts/powershell/common/account-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's default consistency level. |
-|[Update an account's regions](../scripts/powershell/common/update-region.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's regions. |
-|[Change failover priority or trigger failover](../scripts/powershell/common/failover-priority-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Change the regional failover priority of an Azure Cosmos account or trigger a manual failover. |
+|[Update an account](../scripts/powershell/common/account-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update an Azure Cosmos DB account's default consistency level. |
+|[Update an account's regions](../scripts/powershell/common/update-region.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update an Azure Cosmos DB account's regions. |
+|[Change failover priority or trigger failover](../scripts/powershell/common/failover-priority-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Change the regional failover priority of an Azure Cosmos DB account or trigger a manual failover. |
|[Account keys or connection strings](../scripts/powershell/common/keys-connection-strings.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Get primary and secondary keys, connection strings or regenerate an account key of an Azure Cosmos DB account. |
-|[Create a Cosmos Account with IP Firewall](../scripts/powershell/common/firewall-create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account with IP Firewall enabled. |
+|[Create an Azure Cosmos DB Account with IP Firewall](../scripts/powershell/common/firewall-create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account with IP Firewall enabled. |
|||
-## Cassandra API Samples
+## API for Cassandra Samples
|Task | Description | |||
-|[Create an account, keyspace and table](../scripts/powershell/cassandr?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account, keyspace and table. |
-|[Create an account, keyspace and table with autoscale](../scripts/powershell/cassandr?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account, keyspace and table with autoscale. |
+|[Create an account, keyspace and table](../scripts/powershell/cassandr?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos DB account, keyspace and table. |
+|[Create an account, keyspace and table with autoscale](../scripts/powershell/cassandr?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos DB account, keyspace and table with autoscale. |
|[List or get keyspaces or tables](../scripts/powershell/cassandr?toc=%2fpowershell%2fmodule%2ftoc.json)| List or get keyspaces or tables. | |[Perform throughput operations](../scripts/powershell/cassandr?toc=%2fpowershell%2fmodule%2ftoc.json)| Perform throughput operations for a keyspace or table including get, update and migrate between autoscale and standard throughput. | |[Lock resources from deletion](../scripts/powershell/cassandr?toc=%2fpowershell%2fmodule%2ftoc.json)| Prevent resources from being deleted with resource locks. |
cosmos-db Prevent Rate Limiting Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/prevent-rate-limiting-errors.md
Title: Prevent rate-limiting errors for Azure Cosmos DB API for Cassandra.
-description: Prevent your Azure Cosmos DB API for Cassandra operations from hitting rate limiting errors with the SSR (server-side retry) feature
+ Title: Prevent rate-limiting errors for Azure Cosmos DB for Apache Cassandra.
+description: Prevent your Azure Cosmos DB for Apache Cassandra operations from hitting rate limiting errors with the SSR (server-side retry) feature
-++ Last updated 10/11/2021
-# Prevent rate-limiting errors for Azure Cosmos DB API for Cassandra operations
+# Prevent rate-limiting errors for Azure Cosmos DB for Apache Cassandra operations
The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (RU). Request unit is a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB.
-Azure Cosmos DB Cassandra API operations may fail with rate-limiting (OverloadedException/429) errors if they exceed a table's throughput limit (RUs). This can be handled by client side as described [here](scale-account-throughput.md#handling-rate-limiting-429-errors). If the client retry policy cannot be implemented to handle the failure due to rate limiting error, then we can make use of the Server-side retry (SSR) feature where operations that exceed a table's throughput limit will be retried automatically after a short delay. This is an account level setting and applies to all Key spaces and Tables in the account.
+Azure Cosmos DB for Apache Cassandra operations may fail with rate-limiting (OverloadedException/429) errors if they exceed a table's throughput limit (RUs). You can handle these errors on the client side as described in [Handling rate limiting (429 errors)](scale-account-throughput.md#handling-rate-limiting-429-errors). If a client retry policy can't be implemented, you can use the server-side retry (SSR) feature, which automatically retries operations that exceed a table's throughput limit after a short delay. This is an account-level setting that applies to all keyspaces and tables in the account.
## Use the Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Navigate to your Azure Cosmos DB API for Cassandra account.
+2. Navigate to your Azure Cosmos DB for Apache Cassandra account.
3. Go to the **Features** pane underneath the **Settings** section.
Azure Cosmos DB Cassandra API operations may fail with rate-limiting (Overloaded
5. Click **Enable** to enable this feature for all collections in your account. ## Use the Azure CLI
ProgrammaticDriverConfigLoaderBuilder configBuilder = DriverConfigLoader.program
### How can I monitor the effects of a server-side retry?
-You can view the rate limiting errors (429) that are retried server-side in the Cosmos DB Metrics pane. These errors don't go to the client when SSR is enabled, since they are handled and retried server-side.
+You can view the rate limiting errors (429) that are retried server-side in the Azure Cosmos DB Metrics pane. These errors don't go to the client when SSR is enabled, since they are handled and retried server-side.
-You can search for log entries containing *estimatedDelayFromRateLimitingInMilliseconds* in your [Cosmos DB resource logs](../cosmosdb-monitor-resource-logs.md).
+You can search for log entries containing *estimatedDelayFromRateLimitingInMilliseconds* in your [Azure Cosmos DB resource logs](../monitor-resource-logs.md).
### Will server-side retry affect my consistency level?
To learn more about troubleshooting common errors, see this article:
See the following articles to learn about throughput provisioning in Azure Cosmos DB: * [Request units and throughput in Azure Cosmos DB](../request-units.md)
-* [Provision throughput on containers and databases](../how-to-provision-throughput-cassandra.md)
-* [Partition key best practices](../cassandra-partitioning.md)
-
+* [Provision throughput on containers and databases](how-to-provision-throughput.md)
+* [Partition key best practices](partitioning.md)
cosmos-db Query Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/query-data.md
Title: 'Tutorial: Query data from a Cassandra API account in Azure Cosmos DB'
-description: This tutorial shows how to query user data from an Azure Cosmos DB Cassandra API account by using a Java application.
+ Title: 'Tutorial: Query data from an API for Cassandra account in Azure Cosmos DB'
+description: This tutorial shows how to query user data from an Azure Cosmos DB for Apache Cassandra account by using a Java application.
-++ Last updated 09/24/2018
-#Customer intent: As a developer, I want to build a Java application to query data stored in a Cassandra API account of Azure Cosmos DB so that customers can manage the key/value data and utilize the global distribution, elastic scaling, multiple write regions, and other capabilities offered by Azure Cosmos DB.
+#Customer intent: As a developer, I want to build a Java application to query data stored in an API for Cassandra account of Azure Cosmos DB so that customers can manage the key/value data and utilize the global distribution, elastic scaling, multiple write regions, and other capabilities offered by Azure Cosmos DB.
-# Tutorial: Query data from a Cassandra API account in Azure Cosmos DB
+# Tutorial: Query data from an API for Cassandra account in Azure Cosmos DB
-As a developer, you might have applications that use key/value pairs. You can use a Cassandra API account in Azure Cosmos DB to store and query the key/value data. This tutorial shows how to query user data from a Cassandra API account in Azure Cosmos DB by using a Java application. The Java application uses the [Java driver](https://github.com/datastax/java-driver) and queries user data such as user ID, user name, and user city.
+As a developer, you might have applications that use key/value pairs. You can use an API for Cassandra account in Azure Cosmos DB to store and query the key/value data. This tutorial shows how to query user data from an API for Cassandra account in Azure Cosmos DB by using a Java application. The Java application uses the [Java driver](https://github.com/datastax/java-driver) and queries user data such as user ID, user name, and user city.
This tutorial covers the following tasks:
If you don't have an Azure subscription, create a [free account](https://azure
## Prerequisites
-* This article belongs to a multi-part tutorial. Before you start, make sure to complete the previous steps to create the Cassandra API account, keyspace, table, and [load sample data into the table](load-data-table.md).
+* This article belongs to a multi-part tutorial. Before you start, make sure to complete the previous steps to create the API for Cassandra account, keyspace, table, and [load sample data into the table](load-data-table.md).
## Query data
-Use the following steps to query data from your Cassandra API account:
+Use the following steps to query data from your API for Cassandra account:
1. Open the `UserRepository.java` file under the folder `src\main\java\com\azure\cosmosdb\cassandra`. Append the following code block. This code provides three methods:
Use the following steps to query data from your Cassandra API account:
## Clean up resources
-When they're no longer needed, you can delete the resource group, Azure Cosmos account, and all the related resources. To do so, select the resource group for the virtual machine, select **Delete**, and then confirm the name of the resource group to delete.
+When they're no longer needed, you can delete the resource group, Azure Cosmos DB account, and all the related resources. To do so, select the resource group for the virtual machine, select **Delete**, and then confirm the name of the resource group to delete.
## Next steps
-In this tutorial, you've learned how to query data from a Cassandra API account in Azure Cosmos DB. You can now proceed to the next article:
+In this tutorial, you've learned how to query data from an API for Cassandra account in Azure Cosmos DB. You can now proceed to the next article:
> [!div class="nextstepaction"]
-> [Migrate data to Cassandra API account](migrate-data.md)
--
+> [Migrate data to an API for Cassandra account](migrate-data.md)
cosmos-db Scale Account Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/scale-account-throughput.md
Title: Elastically scale with Cassandra API in Azure Cosmos DB
-description: Learn about the options available to scale an Azure Cosmos DB Cassandra API account and their advantages/disadvantages
+ Title: Elastically scale with API for Cassandra in Azure Cosmos DB
+description: Learn about the options available to scale an Azure Cosmos DB for Apache Cassandra account and their advantages/disadvantages
-++ Last updated 07/29/2020
-# Elastically scale an Azure Cosmos DB Cassandra API account
+# Elastically scale an Azure Cosmos DB for Apache Cassandra account
-There are a variety of options to explore the elastic nature of the Azure Cosmos DB API for Cassandra. To understand how to scale effectively in Azure Cosmos DB, it is important to understand how to provision the right amount of request units (RU/s) to account for the performance demands in your system. To learn more about request units, see the [request units](../request-units.md) article.
+There are a variety of options to explore the elastic nature of Azure Cosmos DB for Apache Cassandra. To scale effectively in Azure Cosmos DB, it's important to understand how to provision the right amount of request units (RU/s) for the performance demands of your system. To learn more about request units, see the [request units](../request-units.md) article.
-For the Cassandra API, you can retrieve the Request Unit charge for individual queries using the [.NET and Java SDKs](./find-request-unit-charge-cassandra.md). This is helpful in determining the amount of RU/s you will need to provision in the service.
+For the API for Cassandra, you can retrieve the Request Unit charge for individual queries using the [.NET and Java SDKs](./find-request-unit-charge.md). This is helpful in determining the amount of RU/s you will need to provision in the service.
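For illustration, a minimal sketch with the DataStax Java driver 4.x (callable from Scala) might read the charge from the response's custom payload. The `RequestCharge` payload key is an assumption here; see the linked article for the exact mechanism your driver version uses.

```scala
import java.nio.ByteBuffer
import com.datastax.oss.driver.api.core.CqlSession

// Sketch only: assumes the service returns the request charge in the custom
// payload under the "RequestCharge" key as an 8-byte double.
def requestChargeOf(session: CqlSession, cql: String): Option[Double] = {
  val rs = session.execute(cql)
  Option(rs.getExecutionInfo.getIncomingPayload.get("RequestCharge"))
    .map((buf: ByteBuffer) => buf.getDouble)
}
```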
:::image type="content" source="../media/request-units/request-units.png" alt-text="Database operations consume Request Units" border="false"::: ## Handling rate limiting (429 errors)
-Azure Cosmos DB will return rate-limited (429) errors if clients consume more resources (RU/s) than the amount that you have provisioned. The Cassandra API in Azure Cosmos DB translates these exceptions to overloaded errors on the Cassandra native protocol.
+Azure Cosmos DB will return rate-limited (429) errors if clients consume more resources (RU/s) than the amount that you have provisioned. The API for Cassandra in Azure Cosmos DB translates these exceptions to overloaded errors on the Cassandra native protocol.
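As a rough sketch (assuming the Java driver 4.x, where these surface as `OverloadedException`), a client can catch the overloaded error and retry with a growing delay; the retry count and delays below are illustrative, not recommended values:

```scala
import com.datastax.oss.driver.api.core.CqlSession
import com.datastax.oss.driver.api.core.cql.ResultSet
import com.datastax.oss.driver.api.core.servererrors.OverloadedException

// Retry an overloaded (rate-limited) statement a few times with backoff.
def executeWithRetry(session: CqlSession, cql: String, attempt: Int = 0, maxRetries: Int = 5): ResultSet =
  try session.execute(cql)
  catch {
    case _: OverloadedException if attempt < maxRetries =>
      Thread.sleep(math.min(200L * (1L << attempt), 5000L)) // illustrative backoff
      executeWithRetry(session, cql, attempt + 1, maxRetries)
  }
```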
If your system is not sensitive to latency, it may be sufficient to handle the throughput rate-limiting by using retries. See Java code samples for [version 3](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample) and [version 4](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4) of the Apache Cassandra Java drivers for how to handle rate limiting transparently. These samples implement a custom version of the default [Cassandra retry policy](https://docs.datastax.com/en/developer/java-driver/4.4/manual/core/retries/) in Java. You can also use the [Spark extension](https://mvnrepository.com/artifact/com.microsoft.azure.cosmosdb/azure-cosmos-cassandra-spark-helper) to handle rate-limiting. When using Spark, ensure you follow our guidance on [Optimizing Spark connector throughput configuration](connect-spark-configuration.md#optimizing-spark-connector-throughput-configuration). ## Manage scaling
-If you need to minimize latency, there is a spectrum of options for managing scale and provisioning throughput (RUs) in the Cassandra API:
+If you need to minimize latency, there is a spectrum of options for managing scale and provisioning throughput (RUs) in the API for Cassandra:
* [Manually by using the Azure portal](#use-azure-portal) * [Programmatically by using the control plane features](#use-control-plane)
The following sections explain the advantages and disadvantages of each approach
## <a id="use-azure-portal"></a>Use the Azure portal
-You can scale the resources in Azure Cosmos DB Cassandra API account by using Azure portal. To learn more, see the article on [Provision throughput on containers and databases](../set-throughput.md). This article explains the relative benefits of setting throughput at either [database](../set-throughput.md#set-throughput-on-a-database) or [container](../set-throughput.md#set-throughput-on-a-container) level in the Azure portal. The terms "database" and "container" mentioned in these articles map to "keyspace" and "table" respectively for the Cassandra API.
+You can scale the resources in an Azure Cosmos DB for Apache Cassandra account by using the Azure portal. To learn more, see the article on [Provision throughput on containers and databases](../set-throughput.md). This article explains the relative benefits of setting throughput at either [database](../set-throughput.md#set-throughput-on-a-database) or [container](../set-throughput.md#set-throughput-on-a-container) level in the Azure portal. The terms "database" and "container" mentioned in these articles map to "keyspace" and "table" respectively for the API for Cassandra.
The advantage of this method is that it is a straightforward turnkey way to manage throughput capacity on the database. However, the disadvantage is that in many cases, your approach to scaling may require certain levels of automation to be both cost-effective and high performing. The next sections explain the relevant scenarios and methods.
A disadvantage with this approach may be that you cannot respond to unpredictabl
## <a id="use-cql-queries"></a>Use CQL queries with a specific SDK
-You can scale the system dynamically with code by executing the [CQL ALTER commands](cassandra-support.md#keyspace-and-table-options) for the given database or container.
+You can scale the system dynamically with code by executing the [CQL ALTER commands](support.md#keyspace-and-table-options) for the given database or container.
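For illustration, a minimal sketch issues the ALTER through the Spark connector's session (one possible client); the keyspace, table, and RU/s value are placeholders, and `cosmosdb_provisioned_throughput` is the table option for manual throughput described in the linked options article:

```scala
import com.datastax.spark.connector.cql.CassandraConnector

// Sketch: scale a table's provisioned throughput from code with a CQL ALTER.
val cdbConnector = CassandraConnector(sc)
cdbConnector.withSessionDo { session =>
  session.execute("ALTER TABLE books_ks.books WITH cosmosdb_provisioned_throughput = 8000")
}
```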
The advantage of this approach is that it allows you to respond to scale needs dynamically and in a custom way that suits your application. With this approach, you can still leverage the standard RU/s charges and rates. If your system's scale needs are mostly predictable (around 70% or more), using an SDK with CQL may be a more cost-effective method of auto-scaling than using autoscale. The disadvantage of this approach is that it can be quite complex to implement retries, while rate limiting may increase latency. ## Use autoscale provisioned throughput
-In addition to standard (manual) or programmatic way of provisioning throughput, you can also configure Azure cosmos containers in autoscale provisioned throughput. Autoscale will automatically and instantly scale to your consumption needs within specified RU ranges without compromising SLAs. To learn more, see the [Create Azure Cosmos containers and databases in autoscale](../provision-throughput-autoscale.md) article.
+In addition to the standard (manual) or programmatic ways of provisioning throughput, you can also configure Azure Cosmos DB containers with autoscale provisioned throughput. Autoscale will automatically and instantly scale to your consumption needs within specified RU ranges without compromising SLAs. To learn more, see the [Create Azure Cosmos DB containers and databases in autoscale](../provision-throughput-autoscale.md) article.
The advantage of this approach is that it is the easiest way to manage the scaling needs in your system. It will not apply rate-limiting **within the configured RU ranges**. The disadvantage is that, if the scaling needs in your system are predictable, autoscale may be a less cost-effective way of handling your scaling needs than using the bespoke control plane or SDK level approaches mentioned above.
alter table <keyspace name>.<table name> WITH cosmosdb_autoscale_max_throughput=
## Next steps -- Get started with [creating a Cassandra API account, database, and a table](create-account-java.md) by using a Java application
+- Get started with [creating an API for Cassandra account, database, and a table](create-account-java.md) by using a Java application
cosmos-db Secondary Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/secondary-indexing.md
Title: Indexing in Azure Cosmos DB Cassandra API account
-description: Learn how secondary indexing works in Azure Azure Cosmos DB Cassandra API account.
+ Title: Indexing in Azure Cosmos DB for Apache Cassandra account
+description: Learn how secondary indexing works in an Azure Cosmos DB for Apache Cassandra account.
-++ Last updated 09/03/2021
-# Secondary indexing in Azure Cosmos DB Cassandra API
+# Secondary indexing in Azure Cosmos DB for Apache Cassandra
-The Cassandra API in Azure Cosmos DB leverages the underlying indexing infrastructure to expose the indexing strength that is inherent in the platform. However, unlike the core SQL API, Cassandra API in Azure Cosmos DB does not index all attributes by default. Instead, it supports secondary indexing to create an index on certain attributes, which behaves the same way as Apache Cassandra.
+The API for Cassandra in Azure Cosmos DB leverages the underlying indexing infrastructure to expose the indexing strength that is inherent in the platform. However, unlike the core API for NoSQL, API for Cassandra in Azure Cosmos DB does not index all attributes by default. Instead, it supports secondary indexing to create an index on certain attributes, which behaves the same way as Apache Cassandra.
In general, it's not advised to execute filter queries on the columns that aren't partitioned. You must use ALLOW FILTERING syntax explicitly, which results in an operation that may not perform well. In Azure Cosmos DB you can run such queries on low cardinality attributes because they fan out across partitions to retrieve the results.
It's not advised to create an index on a frequently updated column. It is pruden
> - Clustering keys > [!WARNING]
-> Partition keys are not indexed by default in Cassandra API. If you have a [compound primary key](cassandra-partitioning.md#compound-primary-key) in your table, and you filter either on partition key and clustering key, or just partition key, this will give the desired behaviour. However, if you filter on partition key and any other non-indexed fields aside from the clustering key, this will result in a partition key fan-out - even if the other non-indexed fields have a secondary index. If you have a compound primary key in your table, and you want to filter on both the partition key value element of the compound primary key, plus another field that is not the partition key or clustering key, please ensure that you explicitly add a secondary index on the *partition key*. The index in this scenario should significantly improve query performance, even if the other non-partition key and non-clustering key fields have no index. Review our article on [partitioning](cassandra-partitioning.md) for more information.
+> Partition keys are not indexed by default in API for Cassandra. If you have a [compound primary key](partitioning.md#compound-primary-key) in your table, and you filter either on partition key and clustering key, or just partition key, this will give the desired behaviour. However, if you filter on partition key and any other non-indexed fields aside from the clustering key, this will result in a partition key fan-out - even if the other non-indexed fields have a secondary index. If you have a compound primary key in your table, and you want to filter on both the partition key value element of the compound primary key, plus another field that is not the partition key or clustering key, please ensure that you explicitly add a secondary index on the *partition key*. The index in this scenario should significantly improve query performance, even if the other non-partition key and non-clustering key fields have no index. Review our article on [partitioning](partitioning.md) for more information.
## Indexing example
If you try executing the following statement, you will run into an error that as
select user_id, lastname from sampleks.t1 where lastname='nishu'; ```
-Although the Cassandra API supports ALLOW FILTERING, as mentioned in the previous section, it's not recommended. You should instead create an index in the as shown in the following example:
+Although the API for Cassandra supports ALLOW FILTERING, as mentioned in the previous section, it's not recommended. You should instead create an index as shown in the following example:
```shell CREATE INDEX ON sampleks.t1 (lastname); ```
-After creating an index on the "lastname" field, you can now run the previous query successfully. With Cassandra API in Azure Cosmos DB, you do not have to provide an index name. A default index with format `tablename_columnname_idx` is used. For example, ` t1_lastname_idx` is the index name for the previous table.
+After creating an index on the "lastname" field, you can now run the previous query successfully. With the API for Cassandra in Azure Cosmos DB, you do not have to provide an index name. A default index with format `tablename_columnname_idx` is used. For example, `t1_lastname_idx` is the index name for the previous table.
## Dropping the index You need to know what the index name is to drop the index. Run the `desc schema` command to get the description of your table. The output of this command includes the index name in the format `CREATE INDEX tablename_columnname_idx ON keyspacename.tablename(columnname)`. You can then use the index name to drop the index as shown in the following example:
drop index sampleks.t1_lastname_idx;
## Next steps * Learn how [automatic indexing](../index-overview.md) works in Azure Cosmos DB
-* [Apache Cassandra features supported by Azure Cosmos DB Cassandra API](cassandra-support.md)
+* [Apache Cassandra features supported by Azure Cosmos DB for Apache Cassandra](support.md)
cosmos-db Spark Aggregation Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-aggregation-operations.md
Title: Aggregate operations on Azure Cosmos DB Cassandra API tables from Spark
-description: This article covers basic aggregation operations against Azure Cosmos DB Cassandra API tables from Spark
+ Title: Aggregate operations on Azure Cosmos DB for Apache Cassandra tables from Spark
+description: This article covers basic aggregation operations against Azure Cosmos DB for Apache Cassandra tables from Spark
-++ Last updated 09/24/2018-
-# Aggregate operations on Azure Cosmos DB Cassandra API tables from Spark
+# Aggregate operations on Azure Cosmos DB for Apache Cassandra tables from Spark
-This article describes basic aggregation operations against Azure Cosmos DB Cassandra API tables from Spark.
+This article describes basic aggregation operations against Azure Cosmos DB for Apache Cassandra tables from Spark.
> [!NOTE]
-> Server-side filtering, and server-side aggregation is currently not supported in Azure Cosmos DB Cassandra API.
+> Server-side filtering, and server-side aggregation is currently not supported in Azure Cosmos DB for Apache Cassandra.
-## Cassandra API configuration
+## API for Cassandra configuration
Set the following Spark configuration in your notebook cluster. It's a one-time activity. ```scala //Connection-related
Set below spark configuration in your notebook cluster. It's one time activity.
``` > [!NOTE]
-> If you are using Spark 3.x, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
+> If you are using Spark 3.x, you do not need to install the Azure Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
> [!WARNING] > The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
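Because server-side aggregation isn't supported (see the note above), one approach is to load the rows into a DataFrame and aggregate on the Spark side. This is a sketch that reuses the `books_ks.books` sample table, with column names assumed from these articles:

```scala
import org.apache.spark.sql.functions.{count, max}

// Sketch: read the sample table and compute the aggregates in Spark.
val booksDF = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("table" -> "books", "keyspace" -> "books_ks"))
  .load()

booksDF.agg(count("book_id"), max("book_pub_year")).show()
```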
cosmos-db Spark Create Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-create-operations.md
Title: Create or insert data into Azure Cosmos DB Cassandra API from Spark
-description: This article details how to insert sample data into Azure Cosmos DB Cassandra API tables
+ Title: Create or insert data into Azure Cosmos DB for Apache Cassandra from Spark
+description: This article details how to insert sample data into Azure Cosmos DB for Apache Cassandra tables
-++ Last updated 09/24/2018-
-# Create/Insert data into Azure Cosmos DB Cassandra API from Spark
+# Create/Insert data into Azure Cosmos DB for Apache Cassandra from Spark
-This article describes how to insert sample data into a table in Azure Cosmos DB Cassandra API from Spark.
+This article describes how to insert sample data into a table in Azure Cosmos DB for Apache Cassandra from Spark.
-## Cassandra API configuration
+## API for Cassandra configuration
Set the following Spark configuration in your notebook cluster. It's a one-time activity. ```scala
Set below spark configuration in your notebook cluster. It's one time activity.
``` > [!NOTE]
-> If you are using Spark 3.x, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
+> If you are using Spark 3.x, you do not need to install the Azure Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
> [!WARNING] > The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
booksDF.show
> [!NOTE] > "Create if not exists" functionality, at a row level, is not yet supported.
-### Persist to Azure Cosmos DB Cassandra API
+### Persist to Azure Cosmos DB for Apache Cassandra
When saving data, you can also set time-to-live and consistency policy settings as shown in the following example:
booksRDD.take(2).foreach(println)
> [!NOTE] > Create if not exists functionality is not yet supported.
-### Persist to Azure Cosmos DB Cassandra API
+### Persist to Azure Cosmos DB for Apache Cassandra
-When saving data to Cassandra API, you can also set time-to-live and consistency policy settings as shown in the following example:
+When saving data to the API for Cassandra, you can also set time-to-live and consistency policy settings as shown in the following example:
```scala import com.datastax.spark.connector.writer._
select * from books;
## Next steps
-After inserting data into the Azure Cosmos DB Cassandra API table, proceed to the following articles to perform other operations on the data stored in Cosmos DB Cassandra API:
+After inserting data into the Azure Cosmos DB for Apache Cassandra table, proceed to the following articles to perform other operations on the data stored in Azure Cosmos DB for Apache Cassandra:
* [Read operations](spark-read-operation.md) * [Upsert operations](spark-upsert-operations.md)
cosmos-db Spark Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-databricks.md
Title: Access Azure Cosmos DB Cassandra API from Azure Databricks
-description: This article covers how to work with Azure Cosmos DB Cassandra API from Azure Databricks.
+ Title: Access Azure Cosmos DB for Apache Cassandra from Azure Databricks
+description: This article covers how to work with Azure Cosmos DB for Apache Cassandra from Azure Databricks.
-+ Last updated 09/24/2018 ms.devlang: scala+
-# Access Azure Cosmos DB Cassandra API data from Azure Databricks
+# Access Azure Cosmos DB for Apache Cassandra data from Azure Databricks
-This article details how to work with Azure Cosmos DB Cassandra API from Spark on [Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks).
+This article details how to work with Azure Cosmos DB for Apache Cassandra from Spark on [Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks).
## Prerequisites
-* [Provision an Azure Cosmos DB Cassandra API account](manage-data-dotnet.md#create-a-database-account)
+* [Provision an Azure Cosmos DB for Apache Cassandra account](manage-data-dotnet.md#create-a-database-account)
-* [Review the basics of connecting to Azure Cosmos DB Cassandra API](connect-spark-configuration.md)
+* [Review the basics of connecting to Azure Cosmos DB for Apache Cassandra](connect-spark-configuration.md)
* [Provision an Azure Databricks cluster](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal)
-* [Review the code samples for working with Cassandra API](connect-spark-configuration.md#next-steps)
+* [Review the code samples for working with API for Cassandra](connect-spark-configuration.md#next-steps)
* [Use cqlsh for validation if you so prefer](connect-spark-configuration.md#connecting-to-azure-cosmos-db-cassandra-api-from-spark)
-* **Cassandra API instance configuration for Cassandra connector:**
+* **API for Cassandra instance configuration for Cassandra connector:**
- The connector for Cassandra API requires the Cassandra connection details to be initialized as part of the spark context. When you launch a Databricks notebook, the spark context is already initialized, and it isn't advisable to stop and reinitialize it. One solution is to add the Cassandra API instance configuration at a cluster level, in the cluster spark configuration. It's one-time activity per cluster. Add the following code to the Spark configuration as a space separated key value pair:
+ The connector for the API for Cassandra requires the Cassandra connection details to be initialized as part of the Spark context. When you launch a Databricks notebook, the Spark context is already initialized, and it isn't advisable to stop and reinitialize it. One solution is to add the API for Cassandra instance configuration at a cluster level, in the cluster Spark configuration. It's a one-time activity per cluster. Add the following code to the Spark configuration as a space-separated key-value pair:
```scala spark.cassandra.connection.host YOUR_COSMOSDB_ACCOUNT_NAME.cassandra.cosmosdb.azure.com
This article details how to work with Azure Cosmos DB Cassandra API from Spark o
## Add the required dependencies
-* **Cassandra Spark connector:** - To integrate Azure Cosmos DB Cassandra API with Spark, the Cassandra connector should be attached to the Azure Databricks cluster. To attach the cluster:
+* **Cassandra Spark connector:** - To integrate Azure Cosmos DB for Apache Cassandra with Spark, the Cassandra connector should be attached to the Azure Databricks cluster. To attach the cluster:
* Review the Databricks runtime version, the Spark version. Then find the [maven coordinates](https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector-assembly) that are compatible with the Cassandra Spark connector, and attach it to the cluster. See the ["Upload a Maven package or Spark package"](https://docs.databricks.com/user-guide/libraries.html) article to attach the connector library to the cluster. We recommend selecting Databricks runtime version 10.4 LTS, which supports Spark 3.2.1. To add the Apache Spark Cassandra Connector to your cluster, select **Libraries** > **Install New** > **Maven**, and then add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0` in Maven coordinates. If using Spark 2.x, we recommend an environment with Spark version 2.4.5, using the spark connector at maven coordinates `com.datastax.spark:spark-cassandra-connector_2.11:2.4.3`.
-* **Azure Cosmos DB Cassandra API-specific library:** - If you're using Spark 2.x, a custom connection factory is required to configure the retry policy from the Cassandra Spark connector to Azure Cosmos DB Cassandra API. Add the `com.microsoft.azure.cosmosdb:azure-cosmos-cassandra-spark-helper:1.2.0`[maven coordinates](https://search.maven.org/artifact/com.microsoft.azure.cosmosdb/azure-cosmos-cassandra-spark-helper/1.2.0/jar) to attach the library to the cluster.
+* **Azure Cosmos DB for Apache Cassandra-specific library:** - If you're using Spark 2.x, a custom connection factory is required to configure the retry policy from the Cassandra Spark connector to Azure Cosmos DB for Apache Cassandra. Add the `com.microsoft.azure.cosmosdb:azure-cosmos-cassandra-spark-helper:1.2.0`[maven coordinates](https://search.maven.org/artifact/com.microsoft.azure.cosmosdb/azure-cosmos-cassandra-spark-helper/1.2.0/jar) to attach the library to the cluster.
> [!NOTE]
-> If you are using Spark 3.x, you do not need to install the Cosmos DB Cassandra API-specific library mentioned above.
+> If you are using Spark 3.x, you do not need to install the Azure Cosmos DB for Apache Cassandra-specific library mentioned above.
> [!WARNING] > The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0**. Later versions of Spark and/or the Cassandra connector may not function as expected. ## Sample notebooks
-A list of Azure Databricks [sample notebooks](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-notebooks-databricks/tree/main/notebooks/scala) is available in GitHub repo for you to download. These samples include how to connect to Azure Cosmos DB Cassandra API from Spark and perform different CRUD operations on the data. You can also [import all the notebooks](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-notebooks-databricks/tree/main/dbc) into your Databricks cluster workspace and run it.
+A list of Azure Databricks [sample notebooks](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-notebooks-databricks/tree/main/notebooks/scala) is available in the GitHub repo for you to download. These samples include how to connect to Azure Cosmos DB for Apache Cassandra from Spark and perform different CRUD operations on the data. You can also [import all the notebooks](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-notebooks-databricks/tree/main/dbc) into your Databricks cluster workspace and run them.
-## Accessing Azure Cosmos DB Cassandra API from Spark Scala programs
+## Accessing Azure Cosmos DB for Apache Cassandra from Spark Scala programs
Spark programs to be run as automated processes on Azure Databricks are submitted to the cluster by using [spark-submit](https://spark.apache.org/docs/latest/submitting-applications.html) and scheduled to run through Azure Databricks jobs.
-The following are links to help you get started building Spark Scala programs to interact with Azure Cosmos DB Cassandra API.
-* [How to connect to Azure Cosmos DB Cassandra API from a Spark Scala program](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-connector-sample/blob/main/src/main/scala/com/microsoft/azure/cosmosdb/cassandra/SampleCosmosDBApp.scala)
+The following are links to help you get started building Spark Scala programs to interact with Azure Cosmos DB for Apache Cassandra.
+* [How to connect to Azure Cosmos DB for Apache Cassandra from a Spark Scala program](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-connector-sample/blob/main/src/main/scala/com/microsoft/azure/cosmosdb/cassandra/SampleCosmosDBApp.scala)
* [How to run a Spark Scala program as an automated job on Azure Databricks](/azure/databricks/jobs)
-* [Complete list of code samples for working with Cassandra API](connect-spark-configuration.md#next-steps)
+* [Complete list of code samples for working with API for Cassandra](connect-spark-configuration.md#next-steps)
## Next steps
-Get started with [creating a Cassandra API account, database, and a table](create-account-java.md) by using a Java application.
+Get started with [creating an API for Cassandra account, database, and a table](create-account-java.md) by using a Java application.
cosmos-db Spark Ddl Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-ddl-operations.md
Title: DDL operations in Azure Cosmos DB Cassandra API from Spark
-description: This article details keyspace and table DDL operations against Azure Cosmos DB Cassandra API from Spark.
+ Title: DDL operations in Azure Cosmos DB for Apache Cassandra from Spark
+description: This article details keyspace and table DDL operations against Azure Cosmos DB for Apache Cassandra from Spark.
-+ Last updated 10/07/2020 ms.devlang: scala+
-# DDL operations in Azure Cosmos DB Cassandra API from Spark
+# DDL operations in Azure Cosmos DB for Apache Cassandra from Spark
-This article details keyspace and table DDL operations against Azure Cosmos DB Cassandra API from Spark.
+This article details keyspace and table DDL operations against Azure Cosmos DB for Apache Cassandra from Spark.
## Spark context
- The connector for Cassandra API requires the Cassandra connection details to be initialized as part of the spark context. When you launch a notebook, the spark context is already initialized, and it isn't advisable to stop and reinitialize it. One solution is to add the Cassandra API instance configuration at a cluster level, in the cluster spark configuration. It's one-time activity per cluster. Add the following code to the Spark configuration as a space separated key value pair:
+ The connector for the API for Cassandra requires the Cassandra connection details to be initialized as part of the Spark context. When you launch a notebook, the Spark context is already initialized, and it isn't advisable to stop and reinitialize it. One solution is to add the API for Cassandra instance configuration at a cluster level, in the cluster Spark configuration. It's a one-time activity per cluster. Add the following code to the Spark configuration as a space-separated key-value pair:
```scala spark.cassandra.connection.host YOUR_COSMOSDB_ACCOUNT_NAME.cassandra.cosmosdb.azure.com
This article details keyspace and table DDL operations against Azure Cosmos DB C
spark.cassandra.connection.keep_alive_ms 600000000 ```
-## Cassandra API-related configuration
+## API for Cassandra-related configuration
```scala import org.apache.spark.sql.cassandra._
import com.datastax.spark.connector.cql.CassandraConnector
``` > [!NOTE]
-> If you are using Spark 3.x, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
+> If you are using Spark 3.x, you do not need to install the Azure Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
> [!WARNING] > The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.1**. Later versions of Spark and/or the Cassandra connector may not function as expected.
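With that configuration in place, a minimal sketch of the kind of DDL this article walks through looks like the following; the keyspace, table, columns, and RU/s value are placeholders, and `cosmosdb_provisioned_throughput` is the option for provisioned throughput:

```scala
import com.datastax.spark.connector.cql.CassandraConnector

val cdbConnector = CassandraConnector(sc)
cdbConnector.withSessionDo { session =>
  // Create a keyspace; the replication clause is required by CQL syntax but
  // replication itself is managed by the service.
  session.execute(
    "CREATE KEYSPACE IF NOT EXISTS books_ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}")
  // Create a table with an illustrative provisioned throughput value.
  session.execute(
    "CREATE TABLE IF NOT EXISTS books_ks.books (book_id TEXT PRIMARY KEY, book_author TEXT, book_name TEXT, book_pub_year INT) " +
      "WITH cosmosdb_provisioned_throughput = 4000")
}
```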
After creating the keyspace and the table, proceed to the following articles for
* [Upsert operations](spark-upsert-operations.md) * [Delete operations](spark-delete-operation.md) * [Aggregation operations](spark-aggregation-operations.md)
-* [Table copy operations](spark-table-copy-operations.md)
+* [Table copy operations](spark-table-copy-operations.md)
cosmos-db Spark Delete Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-delete-operation.md
Title: Delete operations on Azure Cosmos DB Cassandra API from Spark
-description: This article details how to delete data in tables in Azure Cosmos DB Cassandra API from Spark
+ Title: Delete operations on Azure Cosmos DB for Apache Cassandra from Spark
+description: This article details how to delete data in tables in Azure Cosmos DB for Apache Cassandra from Spark
-+ Last updated 09/24/2018 ms.devlang: scala+
-# Delete data in Azure Cosmos DB Cassandra API tables from Spark
+# Delete data in Azure Cosmos DB for Apache Cassandra tables from Spark
-This article describes how to delete data in Azure Cosmos DB Cassandra API tables from Spark.
+This article describes how to delete data in Azure Cosmos DB for Apache Cassandra tables from Spark.
-## Cassandra API configuration
+## API for Cassandra configuration
Set the following Spark configuration in your notebook cluster. It's a one-time activity. ```scala //Connection-related
Set below spark configuration in your notebook cluster. It's one time activity.
``` > [!NOTE]
-> If you are using Spark 3.x, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
+> If you are using Spark 3.x, you do not need to install the Azure Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
> [!WARNING] > The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
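With the configuration above in place, a minimal delete sketch might look like the following; the keyspace, table, and ids follow the `books_ks.books` sample, and `deleteFromCassandra` is the connector's RDD-level delete:

```scala
import com.datastax.spark.connector._
import com.datastax.spark.connector.cql.CassandraConnector

// Option 1: delete specific rows through the connector's RDD API.
sc.cassandraTable("books_ks", "books")
  .where("book_id IN ('b00300', 'b00001')")
  .deleteFromCassandra("books_ks", "books")

// Option 2: issue a CQL DELETE directly on a session.
val cdbConnector = CassandraConnector(sc)
cdbConnector.withSessionDo(session =>
  session.execute("DELETE FROM books_ks.books WHERE book_id = 'b00300'"))
```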
cosmos-db Spark Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-hdinsight.md
Title: Access Azure Cosmos DB Cassandra API on YARN with HDInsight
-description: This article covers how to work with Azure Cosmos DB Cassandra API from Spark on YARN with HDInsight.
+ Title: Access Azure Cosmos DB for Apache Cassandra on YARN with HDInsight
+description: This article covers how to work with Azure Cosmos DB for Apache Cassandra from Spark on YARN with HDInsight.
-+ Last updated 09/24/2018 ms.devlang: scala-+
-# Access Azure Cosmos DB Cassandra API from Spark on YARN with HDInsight
+# Access Azure Cosmos DB for Apache Cassandra from Spark on YARN with HDInsight
-This article covers how to access Azure Cosmos DB Cassandra API from Spark on YARN with HDInsight-Spark from `spark-shell`. HDInsight is Microsoft's Hortonworks Hadoop PaaS on Azure. It uses object storage for HDFS and comes in several flavors, including [Spark](../../hdinsight/spark/apache-spark-overview.md). While this article refers to HDInsight-Spark, it applies to all Hadoop distributions.
+This article covers how to access Azure Cosmos DB for Apache Cassandra from Spark on YARN with HDInsight-Spark from `spark-shell`. HDInsight is Microsoft's Hortonworks Hadoop PaaS on Azure. It uses object storage for HDFS and comes in several flavors, including [Spark](../../hdinsight/spark/apache-spark-overview.md). While this article refers to HDInsight-Spark, it applies to all Hadoop distributions.
## Prerequisites
-Before you begin, [review the basics of connecting to Azure Cosmos DB Cassandra API](connect-spark-configuration.md).
+Before you begin, [review the basics of connecting to Azure Cosmos DB for Apache Cassandra](connect-spark-configuration.md).
You need the following prerequisites:
-* Provision Azure Cosmos DB Cassandra API. See [Create a database account](manage-data-dotnet.md#create-a-database-account).
+* Provision Azure Cosmos DB for Apache Cassandra. See [Create a database account](manage-data-dotnet.md#create-a-database-account).
* Provision an HDInsight-Spark cluster. See [Create Apache Spark cluster in Azure HDInsight using ARM template](../../hdinsight/spark/apache-spark-jupyter-spark-sql.md).
-* Cassandra API configuration in Spark2. The Spark connector for Cassandra requires that the Cassandra connection details to be initialized as part of the Spark context. When you launch a Jupyter notebook, the spark session and context are already initialized. Don't stop and reinitialize the Spark context unless it's complete with every configuration set as part of the HDInsight default Jupyter notebook start-up. One workaround is to add the Cassandra instance details to Ambari, Spark2 service configuration, directly. This approach is a one-time activity per cluster that requires a Spark2 service restart.
+* API for Cassandra configuration in Spark2. The Spark connector for Cassandra requires the Cassandra connection details to be initialized as part of the Spark context. When you launch a Jupyter notebook, the Spark session and context are already initialized. Don't stop and reinitialize the Spark context unless it's complete with every configuration set as part of the HDInsight default Jupyter notebook start-up. One workaround is to add the Cassandra instance details to Ambari, Spark2 service configuration, directly. This approach is a one-time activity per cluster that requires a Spark2 service restart.
1. Go to Ambari, Spark2 service and select configs.
You need the following prerequisites:
spark.cassandra.auth.password=YOUR_COSMOSDB_KEY<br> ```
-You can use `cqlsh` for validation. For more information, see [Connecting to Azure Cosmos DB Cassandra API from Spark](connect-spark-configuration.md#connecting-to-azure-cosmos-db-cassandra-api-from-spark).
+You can use `cqlsh` for validation. For more information, see [Connecting to Azure Cosmos DB for Apache Cassandra from Spark](connect-spark-configuration.md#connecting-to-azure-cosmos-db-cassandra-api-from-spark).
-## Access Azure Cosmos DB Cassandra API from Spark shell
+## Access Azure Cosmos DB for Apache Cassandra from Spark shell
Spark shell is used for testing and exploration.
Spark shell is used for testing and exploration.
spark.read.format("org.apache.spark.sql.cassandra").options(Map( "table" -> "books", "keyspace" -> "books_ks")).load.show ```
-## Access Azure Cosmos DB Cassandra API from Jupyter notebooks
+## Access Azure Cosmos DB for Apache Cassandra from Jupyter notebooks
HDInsight-Spark comes with Zeppelin and Jupyter notebook services. They're both web-based notebook environments that support Scala and Python. Notebooks are great for interactive exploratory analytics and collaboration, but not meant for operational or production processes.
-The following Jupyter notebooks can be uploaded into your HDInsight Spark cluster and provide ready samples for working with Azure Cosmos DB Cassandra API. Be sure to review the first notebook `1.0-ReadMe.ipynb` to review Spark service configuration for connecting to Azure Cosmos DB Cassandra API.
+The following Jupyter notebooks can be uploaded into your HDInsight Spark cluster and provide ready samples for working with Azure Cosmos DB for Apache Cassandra. Be sure to review the first notebook, `1.0-ReadMe.ipynb`, for the Spark service configuration for connecting to Azure Cosmos DB for Apache Cassandra.
Download the notebooks under [azure-cosmos-db-cassandra-api-spark-notebooks-jupyter](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-notebooks-jupyter/blob/main/scala/) to your machine.
When you launch Jupyter, navigate to Scala. Create a directory and then upload t
Go through the notebooks, and each notebook cell sequentially. Select the **Run** button at the top of each notebook to run all cells, or **Shift**+**Enter** for each cell.
-## Access with Azure Cosmos DB Cassandra API from your Spark Scala program
+## Access with Azure Cosmos DB for Apache Cassandra from your Spark Scala program
For automated processes in production, Spark programs are submitted to the cluster by using [spark-submit](https://spark.apache.org/docs/latest/submitting-applications.html).
For automated processes in production, Spark programs are submitted to the clust
* [Create a Scala Maven application for Apache Spark in HDInsight using IntelliJ](../../hdinsight/spark/apache-spark-create-standalone-application.md) * [SampleCosmosDBApp.scala](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-connector-sample/blob/main/src/main/scala/com/microsoft/azure/cosmosdb/cassandra/SampleCosmosDBApp.scala)
-* [Connect to Azure Cosmos DB Cassandra API from Spark](connect-spark-configuration.md)
+* [Connect to Azure Cosmos DB for Apache Cassandra from Spark](connect-spark-configuration.md)
cosmos-db Spark Read Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-read-operation.md
Title: Read Cassandra API table data using Spark
+ Title: Read API for Cassandra table data using Spark
titleSuffix: Azure Cosmos DB
-description: This article describes how to read data from Cassandra API tables in Azure Cosmos DB.
+description: This article describes how to read data from API for Cassandra tables in Azure Cosmos DB.
-+ Last updated 06/02/2020 ms.devlang: scala-+
-# Read data from Azure Cosmos DB Cassandra API tables using Spark
+# Read data from Azure Cosmos DB for Apache Cassandra tables using Spark
- This article describes how to read data stored in Azure Cosmos DB Cassandra API from Spark.
+ This article describes how to read data stored in Azure Cosmos DB for Apache Cassandra from Spark.
-## Cassandra API configuration
+## API for Cassandra configuration
Set the following Spark configuration in your notebook cluster. It's a one-time activity. ```scala //Connection-related
Set below spark configuration in your notebook cluster. It's one time activity.
``` > [!NOTE]
-> If you are using Spark 3.x, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
+> If you are using Spark 3.x, you do not need to install the Azure Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
> [!WARNING] > The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
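For example, a minimal read sketch loads the sample table into a DataFrame and registers the temporary view that the SQL shown next queries (names follow the `books_ks.books` sample):

```scala
// Sketch: read the table into a DataFrame and expose it as a temp view.
val readBooksDF = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("table" -> "books", "keyspace" -> "books_ks"))
  .load()

readBooksDF.createOrReplaceTempView("books_vw")
```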
select * from books_vw where book_pub_year > 1891
## Next steps
-The following are additional articles on working with Azure Cosmos DB Cassandra API from Spark:
+The following are additional articles on working with Azure Cosmos DB for Apache Cassandra from Spark:
* [Upsert operations](spark-upsert-operations.md) * [Delete operations](spark-delete-operation.md) * [Aggregation operations](spark-aggregation-operations.md) * [Table copy operations](spark-table-copy-operations.md)-
cosmos-db Spark Table Copy Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-table-copy-operations.md
Title: Table copy operations on Azure Cosmos DB Cassandra API from Spark
-description: This article details how to copy data between tables in Azure Cosmos DB Cassandra API
+ Title: Table copy operations on Azure Cosmos DB for Apache Cassandra from Spark
+description: This article details how to copy data between tables in Azure Cosmos DB for Apache Cassandra
-+ Last updated 09/24/2018 ms.devlang: scala+
-# Table copy operations on Azure Cosmos DB Cassandra API from Spark
+# Table copy operations on Azure Cosmos DB for Apache Cassandra from Spark
-This article describes how to copy data between tables in Azure Cosmos DB Cassandra API from Spark. The commands described in this article can also be used to copy data from Apache Cassandra tables to Azure Cosmos DB Cassandra API tables.
+This article describes how to copy data between tables in Azure Cosmos DB for Apache Cassandra from Spark. The commands described in this article can also be used to copy data from Apache Cassandra tables to Azure Cosmos DB for Apache Cassandra tables.
-## Cassandra API configuration
+## API for Cassandra configuration
Set the following Spark configuration in your notebook cluster. It's a one-time activity. ```scala //Connection-related
Set below spark configuration in your notebook cluster. It's one time activity.
``` > [!NOTE]
-> If you are using Spark 3.x, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
+> If you are using Spark 3.x, you do not need to install the Azure Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
> [!WARNING] > The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
newBooksDF: org.apache.spark.sql.DataFrame = [book_id: string, book_
## Next steps
- * Get started with [creating a Cassandra API account, database, and a table](create-account-java.md) by using a Java application.
- * [Load sample data to the Cassandra API table](load-data-table.md) by using a Java application.
- * [Query data from the Cassandra API account](query-data.md) by using a Java application.
+ * Get started with [creating an API for Cassandra account, database, and a table](create-account-java.md) by using a Java application.
+ * [Load sample data to the API for Cassandra table](load-data-table.md) by using a Java application.
+ * [Query data from the API for Cassandra account](query-data.md) by using a Java application.
cosmos-db Spark Upsert Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-upsert-operations.md
Title: Upsert data into Azure Cosmos DB Cassandra API from Spark
-description: This article details how to upsert into tables in Azure Cosmos DB Cassandra API from Spark
+ Title: Upsert data into Azure Cosmos DB for Apache Cassandra from Spark
+description: This article details how to upsert into tables in Azure Cosmos DB for Apache Cassandra from Spark
-+ Last updated 09/24/2018 ms.devlang: scala+
-# Upsert data into Azure Cosmos DB Cassandra API from Spark
+# Upsert data into Azure Cosmos DB for Apache Cassandra from Spark
-This article describes how to upsert data into Azure Cosmos DB Cassandra API from Spark.
+This article describes how to upsert data into Azure Cosmos DB for Apache Cassandra from Spark.
-## Cassandra API configuration
+## API for Cassandra configuration
Set the following Spark configuration in your notebook cluster. It's a one-time activity. ```scala //Connection-related
Set the following Spark configuration in your notebook cluster. It's a one-time activity.
``` > [!NOTE]
-> If you are using Spark 3.x, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
+> If you are using Spark 3.x, you do not need to install the Azure Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
> [!WARNING] > The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
cdbConnector.withSessionDo(session => session.execute("update books_ks.books set
## Next steps
-Proceed to the following articles to perform other operations on the data stored in Azure Cosmos DB Cassandra API tables:
+Proceed to the following articles to perform other operations on the data stored in Azure Cosmos DB for Apache Cassandra tables:
* [Delete operations](spark-delete-operation.md) * [Aggregation operations](spark-aggregation-operations.md)
cosmos-db Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/support.md
+
+ Title: Apache Cassandra features supported by Azure Cosmos DB for Apache Cassandra
+description: Learn about the Apache Cassandra feature support in Azure Cosmos DB for Apache Cassandra
+++++++ Last updated : 09/14/2020++
+# Apache Cassandra features supported by Azure Cosmos DB for Apache Cassandra
+
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with Azure Cosmos DB for Apache Cassandra through open-source Cassandra client [drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html?highlight=driver) that are compliant with the Cassandra Query Language (CQL) Binary Protocol v4 [wire protocol](https://github.com/apache/cassandra/blob/trunk/doc/native_protocol_v4.spec).
+
+By using the Azure Cosmos DB for Apache Cassandra, you can enjoy the benefits of the Apache Cassandra APIs and the enterprise capabilities that Azure Cosmos DB provides. The enterprise capabilities include [global distribution](../distribute-data-globally.md), [automatic scale out partitioning](partitioning.md), availability and latency guarantees, encryption at rest, backups, and much more.
+
+## Cassandra protocol
+
+The Azure Cosmos DB for Apache Cassandra is compatible with Cassandra Query Language (CQL) v3.11 API (backward-compatible with version 2.x). The supported CQL commands, tools, limitations, and exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for Apache Cassandra.
+
+## Cassandra driver
+
+The following versions of Cassandra drivers are supported by Azure Cosmos DB for Apache Cassandra:
+
+* [Java 3.5+](https://github.com/datastax/java-driver)
+* [C# 3.5+](https://github.com/datastax/csharp-driver)
+* [Nodejs 3.5+](https://github.com/datastax/nodejs-driver)
+* [Python 3.15+](https://github.com/datastax/python-driver)
+* [C++ 2.9](https://github.com/datastax/cpp-driver)
+* [PHP 1.3](https://github.com/datastax/php-driver)
+* [Gocql](https://github.com/gocql/gocql)
+
+
+## CQL data types
+
+Azure Cosmos DB for Apache Cassandra supports the following CQL data types:
+
+|Type |Supported |
+|||
+| `ascii` | Yes |
+| `bigint` | Yes |
+| `blob` | Yes |
+| `boolean` | Yes |
+| `counter` | Yes |
+| `date` | Yes |
+| `decimal` | Yes |
+| `double` | Yes |
+| `float` | Yes |
+| `frozen` | Yes |
+| `inet` | Yes |
+| `int` | Yes |
+| `list` | Yes |
+| `set` | Yes |
+| `smallint` | Yes |
+| `text` | Yes |
+| `time` | Yes |
+| `timestamp` | Yes |
+| `timeuuid` | Yes |
+| `tinyint` | Yes |
+| `tuple` | Yes |
+| `uuid` | Yes |
+| `varchar` | Yes |
+| `varint` | Yes |
+| `tuples` | Yes |
+| `udts` | Yes |
+| `map` | Yes |
+
+The `static` keyword is supported in data type declarations.
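+
+As a quick illustration, the following minimal Java sketch (DataStax Java driver 4) creates a table with a static column; the `sampleks.orders` keyspace, table, and column names are hypothetical, and an already connected `CqlSession` is assumed:
+
+```java
+import com.datastax.oss.driver.api.core.CqlSession;
+
+public class StaticColumnSample {
+    // 'customer_tier' is declared STATIC: one value per partition (customer_id),
+    // shared by every order row in that partition.
+    public static void createTableWithStaticColumn(CqlSession session) {
+        session.execute(
+            "CREATE TABLE IF NOT EXISTS sampleks.orders ("
+            + " customer_id int,"
+            + " order_id int,"
+            + " customer_tier text STATIC,"
+            + " total decimal,"
+            + " PRIMARY KEY (customer_id, order_id))");
+    }
+}
+```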
+
+## CQL functions
+
+Azure Cosmos DB for Apache Cassandra supports the following CQL functions:
+
+|Command |Supported |
+|||
+| `Token` * | Yes |
+| `ttl` *** | Yes |
+| `writetime` *** | Yes |
+| `cast` ** | Yes |
+
+> [!NOTE]
+> \* API for Cassandra supports token as a projection/selector, and only allows token(pk) on the left-hand side of a where clause. For example, `WHERE token(pk) > 1024` is supported, but `WHERE token(pk) > token(100)` is **not** supported.
+> \*\* The `cast()` function is not nestable in API for Cassandra. For example, `SELECT cast(count as double) FROM myTable` is supported, but `SELECT avg(cast(count as double)) FROM myTable` is **not** supported.
+> \*\*\* Custom timestamps and TTL specified with the `USING` option are applied at a row level (and not per cell).
+++
+Aggregate functions:
+
+|Command |Supported |
+|||
+| `avg` | Yes |
+| `count` | Yes |
+| `min` | Yes |
+| `max` | Yes |
+| `sum` | Yes |
+
+> [!NOTE]
+> Aggregate functions work on regular columns, but aggregates on clustering columns are **not** supported.
++
+Blob conversion functions:
+
+|Command |Supported |
+|||
+| `typeAsBlob(value)` | Yes |
+| `blobAsType(value)` | Yes |
++
+UUID and timeuuid functions:
+
+|Command |Supported |
+|||
+| `dateOf()` | Yes |
+| `now()` | Yes |
+| `minTimeuuid()` | Yes |
+| `unixTimestampOf()` | Yes |
+| `toDate(timeuuid)` | Yes |
+| `toTimestamp(timeuuid)` | Yes |
+| `toUnixTimestamp(timeuuid)` | Yes |
+| `toDate(timestamp)` | Yes |
+| `toUnixTimestamp(timestamp)` | Yes |
+| `toTimestamp(date)` | Yes |
+| `toUnixTimestamp(date)` | Yes |
++
+
+## CQL commands
+
+Azure Cosmos DB supports the following database commands on API for Cassandra accounts.
+
+|Command |Supported |
+|||
+| `ALLOW FILTERING` | Yes |
+| `ALTER KEYSPACE` | N/A (PaaS service, replication managed internally)|
+| `ALTER MATERIALIZED VIEW` | No |
+| `ALTER ROLE` | No |
+| `ALTER TABLE` | Yes |
+| `ALTER TYPE` | No |
+| `ALTER USER` | No |
+| `BATCH` | Yes (unlogged batch only)|
+| `COMPACT STORAGE` | N/A (PaaS service) |
+| `CREATE AGGREGATE` | No |
+| `CREATE CUSTOM INDEX (SASI)` | No |
+| `CREATE INDEX` | Yes (without [specifying index name](secondary-indexing.md), and indexes on clustering keys or full FROZEN collection not supported) |
+| `CREATE FUNCTION` | No |
+| `CREATE KEYSPACE` (replication settings ignored) | Yes |
+| `CREATE MATERIALIZED VIEW` | No |
+| `CREATE TABLE` | Yes |
+| `CREATE TRIGGER` | No |
+| `CREATE TYPE` | Yes |
+| `CREATE ROLE` | No |
+| `CREATE USER` (Deprecated in native Apache Cassandra) | No |
+| `DELETE` | Yes |
+| `DISTINCT` | No |
+| `DROP AGGREGATE` | No |
+| `DROP FUNCTION` | No |
+| `DROP INDEX` | Yes |
+| `DROP KEYSPACE` | Yes |
+| `DROP MATERIALIZED VIEW` | No |
+| `DROP ROLE` | No |
+| `DROP TABLE` | Yes |
+| `DROP TRIGGER` | No |
+| `DROP TYPE` | Yes |
+| `DROP USER` (Deprecated in native Apache Cassandra) | No |
+| `GRANT` | No |
+| `INSERT` | Yes |
+| `LIST PERMISSIONS` | No |
+| `LIST ROLES` | No |
+| `LIST USERS` (Deprecated in native Apache Cassandra) | No |
+| `REVOKE` | No |
+| `SELECT` | Yes |
+| `UPDATE` | Yes |
+| `TRUNCATE` | Yes |
+| `USE` | Yes |
+
+## Lightweight Transactions (LWT)
+
+| Component |Supported |
+|||
+| `DELETE IF EXISTS` | Yes |
+| `DELETE conditions` | Yes |
+| `INSERT IF NOT EXISTS` | Yes |
+| `UPDATE IF EXISTS` | Yes |
+| `UPDATE IF NOT EXISTS` | Yes |
+| `UPDATE conditions` | Yes |
+
+> [!NOTE]
+> Lightweight transactions currently aren't supported for accounts that have multi-region writes enabled.
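+
+For example, the following minimal Java sketch (DataStax Java driver 4, single-write-region account assumed) issues a conditional insert against a hypothetical `sampleks.users` table and checks whether it was applied:
+
+```java
+import com.datastax.oss.driver.api.core.CqlSession;
+import com.datastax.oss.driver.api.core.cql.ResultSet;
+import com.datastax.oss.driver.api.core.cql.SimpleStatement;
+
+public class LightweightTransactionSample {
+    public static boolean insertIfNotExists(CqlSession session, int userId, String lastName) {
+        // IF NOT EXISTS turns the insert into a lightweight transaction;
+        // wasApplied() reports whether the conditional write actually happened.
+        ResultSet rs = session.execute(SimpleStatement.newInstance(
+            "INSERT INTO sampleks.users (user_id, lastname) VALUES (?, ?) IF NOT EXISTS",
+            userId, lastName));
+        return rs.wasApplied();
+    }
+}
+```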
+
+## CQL Shell commands
+
+Azure Cosmos DB supports the following database commands on API for Cassandra accounts.
+
+|Command |Supported |
+|||
+| `CAPTURE` | Yes |
+| `CLEAR` | Yes |
+| `CONSISTENCY` * | N/A |
+| `COPY` | No |
+| `DESCRIBE` | Yes |
+| `cqlshExpand` | No |
+| `EXIT` | Yes |
+| `LOGIN` | N/A (CQL function `USER` is not supported, hence `LOGIN` is redundant) |
+| `PAGING` | Yes |
+| `SERIAL CONSISTENCY` * | N/A |
+| `SHOW` | Yes |
+| `SOURCE` | Yes |
+| `TRACING` | N/A (API for Cassandra is backed by Azure Cosmos DB - use [diagnostic logging](../monitor-resource-logs.md) for troubleshooting) |
+
+> [!NOTE]
+> Consistency works differently in Azure Cosmos DB. For more information, see [consistency mapping](consistency-mapping.md).
++
+## JSON Support
+|Command |Supported |
+|||
+| `SELECT JSON` | Yes |
+| `INSERT JSON` | Yes |
+| `fromJson()` | No |
+| `toJson()` | No |
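+
+As a short illustration of the supported statements, this hedged Java sketch (DataStax Java driver 4, hypothetical `sampleks.books` table, connected `CqlSession` assumed) inserts and reads a row as JSON:
+
+```java
+import com.datastax.oss.driver.api.core.CqlSession;
+import com.datastax.oss.driver.api.core.cql.Row;
+
+public class JsonSupportSample {
+    public static void insertAndReadJson(CqlSession session) {
+        // INSERT JSON maps the fields of the JSON document to the table columns.
+        session.execute(
+            "INSERT INTO sampleks.books JSON '{\"book_id\": \"b1\", \"book_name\": \"The Jungle Book\"}'");
+
+        // SELECT JSON returns each row as a single JSON-encoded text column
+        // (named [json] in Apache Cassandra).
+        Row row = session.execute(
+            "SELECT JSON * FROM sampleks.books WHERE book_id = 'b1'").one();
+        System.out.println(row.getString("[json]"));
+    }
+}
+```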
++
+## API for Cassandra limits
+
+Azure Cosmos DB for Apache Cassandra does not have any limits on the size of data stored in a table. Hundreds of terabytes or even petabytes of data can be stored, as long as partition key limits are honored. Similarly, every entity (row equivalent) does not have any limit on the number of columns. However, the total size of an entity should not exceed 2 MB, and the data per partition key cannot exceed 20 GB, as in all other APIs.
+
+## Tools
+
+Azure Cosmos DB for Apache Cassandra is a managed service platform. The platform does not require any management overhead or utilities such as the garbage collector, the Java Virtual Machine (JVM), or nodetool to manage the cluster. Tools such as cqlsh that rely on Binary CQLv4 compatibility are supported.
+
+* Azure portal's data explorer, metrics, log diagnostics, PowerShell, and CLI are other supported mechanisms to manage the account.
+
+## CQL shell
+
+<!-- You can open a hosted native Cassandra shell (CQLSH v5.0.1) directly from the Data Explorer in the [Azure portal](../data-explorer.md) or the [Azure Cosmos DB Explorer](https://cosmos.azure.com/). Before enabling the CQL shell, you must [enable the Notebooks](../notebooks-overview.md) feature in your account (if not already enabled, you will be prompted when clicking on `Open Cassandra Shell`). -->
++
+You can connect to the API for Cassandra in Azure Cosmos DB by using CQLSH installed on a local machine. CQLSH comes with Apache Cassandra 3.11 and works out of the box once you set the environment variables. The following sections include instructions to install, configure, and connect to the API for Cassandra in Azure Cosmos DB on Windows or Linux using CQLSH.
+
+> [!WARNING]
+> Connections to Azure Cosmos DB for Apache Cassandra will not work with DataStax Enterprise (DSE) or Cassandra 4.0 versions of CQLSH. Please ensure you use only v3.11 open source Apache Cassandra versions of CQLSH when connecting to API for Cassandra.
+
+**Windows:**
+
+<!-- If using windows, we recommend you enable the [Windows filesystem for Linux](/windows/wsl/install-win10#install-the-windows-subsystem-for-linux). You can then follow the linux commands below. -->
+
+1. Install [Python 3](https://www.python.org/downloads/windows/)
+1. Install PIP
+ 1. Before installing PIP, download the get-pip.py file.
+ 1. Launch a command prompt if it isn't already open. To do so, open the Windows search bar, type cmd and select the icon.
+ 1. Then, run the following command to download the get-pip.py file:
+ ```bash
+ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
+ ```
+1. Install PIP on Windows:
+   ```bash
+   python get-pip.py
+   ```
+1. Verify the PIP installation (look for a message from step 3 to confirm which folder PIP was installed in, then navigate to that folder and run the command `pip help`).
+1. Install CQLSH using PIP:
+   ```bash
+   pip3 install cqlsh==5.0.3
+   ```
+1. Install [Python 2](https://www.python.org/downloads/windows/)
+1. Run the [CQLSH using the authentication mechanism](manage-data-cqlsh.md#update-your-connection-string).
+
+> [!NOTE]
+> You would need to set the environment variables to point to the Python 2 folder.
+
+**Install on Unix/Linux/Mac:**
+
+```bash
+# Install default-jre and default-jdk
+sudo apt install default-jre
+sudo apt-get update
+sudo apt install default-jdk
+
+# Import the Baltimore CyberTrust root certificate:
+curl https://cacert.omniroot.com/bc2025.crt > bc2025.crt
+keytool -importcert -alias bc2025ca -file bc2025.crt
+
+# Install the Cassandra libraries in order to get CQLSH:
+echo "deb https://downloads.apache.org/cassandra/debian 311x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
+curl https://downloads.apache.org/cassandra/KEYS | sudo apt-key add -
+sudo apt-get update
+sudo apt-get install cassandra
+```
+
+**Connect with Unix/Linux/Mac:**
+```bash
+# Export the SSL variables:
+export SSL_VERSION=TLSv1_2
+export SSL_VALIDATE=false
+
+# Connect to Azure Cosmos DB for Apache Cassandra:
+cqlsh <YOUR_ACCOUNT_NAME>.cassandra.cosmosdb.azure.com 10350 -u <YOUR_ACCOUNT_NAME> -p <YOUR_ACCOUNT_PASSWORD> --ssl --protocol-version=4
+```
+**Connect with Docker:**
+```bash
+docker run -it --rm -e SSL_VALIDATE=false -e SSL_VERSION=TLSv1_2 cassandra:3.11 cqlsh <account_name>.cassandra.cosmos.azure.com 10350 -u <YOUR_ACCOUNT_NAME> -p <YOUR_ACCOUNT_PASSWORD> --ssl
+```
+
+All CRUD operations that are executed through a CQL v4 compatible SDK return extra information about errors and the request units consumed. The DELETE and UPDATE commands should be handled with resource governance taken into consideration, to ensure the most efficient use of the provisioned throughput.
+
+* Note: the `gc_grace_seconds` value must be zero if specified.
+
+```csharp
+var tableInsertStatement = table.Insert(sampleEntity);
+var insertResult = await tableInsertStatement.ExecuteAsync();
+
+// IncomingPayload is a dictionary of custom payload entries returned by the API for Cassandra;
+// decode each value from its UTF-8 byte representation and print it.
+foreach (string key in insertResult.Info.IncomingPayload.Keys)
+{
+    byte[] valueInBytes = insertResult.Info.IncomingPayload[key];
+    string value = Encoding.UTF8.GetString(valueInBytes);
+    Console.WriteLine($"CustomPayload: {key}: {value}");
+}
+```
+
+## Consistency mapping
+
+Azure Cosmos DB for Apache Cassandra provides a choice of consistency levels for read operations. The consistency mapping is detailed [here](consistency-mapping.md#mapping-consistency-levels).
+
+## Permission and role management
+
+Azure Cosmos DB supports Azure role-based access control (Azure RBAC) for provisioning, rotating keys, viewing metrics, and obtaining the read-write and read-only passwords/keys through the [Azure portal](https://portal.azure.com). Azure Cosmos DB does not support roles for CRUD activities.
+
+## Keyspace and Table options
+
+The options for region name, class, replication_factor, and datacenter in the `CREATE KEYSPACE` command are currently ignored. The system uses the underlying Azure Cosmos DB [global distribution](../global-dist-under-the-hood.md) replication method to add the regions. If you need the cross-region presence of data, you can enable it at the account level with PowerShell, the CLI, or the portal; to learn more, see the [how to add regions](../how-to-manage-database-account.md#addremove-regions-from-your-database-account) article. `durable_writes` can't be disabled because Azure Cosmos DB ensures every write is durable. In every region, Azure Cosmos DB replicates the data across a replica set that is made up of four replicas, and this replica set [configuration](../global-dist-under-the-hood.md) can't be modified.
+
+All the options are ignored when creating the table, except `gc_grace_seconds`, which should be set to zero.
+The keyspace and table have an extra option named `cosmosdb_provisioned_throughput` with a minimum value of 400 RU/s. The keyspace throughput allows sharing throughput across multiple tables, and it's useful for scenarios where not all tables fully utilize the provisioned throughput. The `ALTER TABLE` command allows changing the provisioned throughput across the regions.
+
+```
+CREATE KEYSPACE sampleks WITH REPLICATION = { 'class' : 'SimpleStrategy'} AND cosmosdb_provisioned_throughput=2000;
+
+CREATE TABLE sampleks.t1(user_id int PRIMARY KEY, lastname text) WITH cosmosdb_provisioned_throughput=2000;
+
+ALTER TABLE sampleks.t1 WITH cosmosdb_provisioned_throughput=10000;
+
+```
+## Secondary Index
+API for Cassandra supports secondary indexes on all data types except frozen collection types, decimal, and variant types.
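+
+As a small, hedged example, the following Java sketch creates a secondary index without specifying an index name, per the `CREATE INDEX` support note in the CQL commands table above; the `sampleks.users` table is hypothetical and a connected `CqlSession` is assumed:
+
+```java
+import com.datastax.oss.driver.api.core.CqlSession;
+
+public class SecondaryIndexSample {
+    public static void createIndexOnLastName(CqlSession session) {
+        // An index name is deliberately omitted, as required by the API for Cassandra.
+        session.execute("CREATE INDEX ON sampleks.users (lastname)");
+    }
+}
+```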
+
+## Usage of Cassandra retry connection policy
+
+Azure Cosmos DB is a resource-governed system. You can do a certain number of operations in a given second based on the request units consumed by the operations. If an application exceeds that limit in a given second, requests are rate-limited and exceptions are thrown. The API for Cassandra in Azure Cosmos DB translates these exceptions to overloaded errors on the Cassandra native protocol. To ensure that your application can intercept and retry requests when it's rate-limited, the [Spark](https://mvnrepository.com/artifact/com.microsoft.azure.cosmosdb/azure-cosmos-cassandra-spark-helper) and [Java](https://github.com/Azure/azure-cosmos-cassandra-extensions) extensions are provided. See also the Java code samples for the [version 3](https://github.com/Azure-Samples/azure-cosmos-cassandra-java-retry-sample) and [version 4](https://github.com/Azure-Samples/azure-cosmos-cassandra-java-retry-sample-v4) DataStax drivers when connecting to the API for Cassandra in Azure Cosmos DB. If you use other SDKs to access the API for Cassandra in Azure Cosmos DB, create a retry policy to retry on these exceptions. Alternatively, [enable server-side retries](prevent-rate-limiting-errors.md) for the API for Cassandra.
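+
+The following is a minimal sketch, not a definitive implementation, of wiring the `CosmosRetryPolicy` from the Java driver 3 extension into a `Cluster`; the back-off values are illustrative, and the import path should be checked against the extension library version you use:
+
+```java
+import com.datastax.driver.core.Cluster;
+// Package path below is an assumption; check the azure-cosmos-cassandra-extensions library you depend on.
+import com.microsoft.azure.cosmosdb.cassandra.CosmosRetryPolicy;
+
+public class RetryPolicySample {
+    public static Cluster buildCluster(String contactPoint, String username, String password) {
+        // Retries requests that fail with overloaded (rate-limiting) errors,
+        // using a bounded back-off instead of failing the application immediately.
+        CosmosRetryPolicy retryPolicy = CosmosRetryPolicy.builder()
+            .withFixedBackOffTimeInMillis(5000)
+            .withGrowingBackOffTimeInMillis(1000)
+            .build();
+
+        return Cluster.builder()
+            .addContactPoint(contactPoint)   // <account>.cassandra.cosmos.azure.com
+            .withPort(10350)
+            .withCredentials(username, password)
+            .withSSL()
+            .withRetryPolicy(retryPolicy)
+            .build();
+    }
+}
+```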
+
+## Next steps
+
+- Get started with [creating an API for Cassandra account, database, and a table](create-account-java.md) by using a Java application
cosmos-db Templates Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/templates-samples.md
Title: Resource Manager templates for Azure Cosmos DB Cassandra API
-description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB Cassandra API.
+ Title: Resource Manager templates for Azure Cosmos DB for Apache Cassandra
+description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB for Apache Cassandra.
-++ Last updated 10/14/2020
-# Manage Azure Cosmos DB Cassandra API resources using Azure Resource Manager templates
+# Manage Azure Cosmos DB for Apache Cassandra resources using Azure Resource Manager templates
In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts, keyspaces, and tables.
-This article has examples for Cassandra API accounts only, to find examples for other API type accounts see: use Azure Resource Manager templates with Azure Cosmos DB's API for [SQL](../templates-samples-sql.md), [Gremlin](../templates-samples-gremlin.md), [MongoDB](../mongodb/resource-manager-template-samples.md), [Table](../table/resource-manager-templates.md) articles.
+This article has examples for API for Cassandra accounts only. To find examples for other API types, see the articles on using Azure Resource Manager templates with Azure Cosmos DB's API for [NoSQL](../nosql/samples-resource-manager-templates.md), [Gremlin](../templates-samples-gremlin.md), [MongoDB](../mongodb/resource-manager-template-samples.md), and [Table](../table/resource-manager-templates.md).
> [!IMPORTANT] > > * Account names are limited to 44 characters, all lowercase. > * To change the throughput values, redeploy the template with updated RU/s.
-> * When you add or remove locations to an Azure Cosmos account, you can't simultaneously modify other properties. These operations must be done separately.
+> * When you add or remove locations to an Azure Cosmos DB account, you can't simultaneously modify other properties. These operations must be done separately.
To create any of the Azure Cosmos DB resources below, copy the following example template into a new json file. You can optionally create a parameters json file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Azure Resource Manager templates including, [Azure portal](../../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../../azure-resource-manager/templates/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md) and [GitHub](../../azure-resource-manager/templates/deploy-to-azure-button.md). <a id="create-autoscale"></a>
-## Azure Cosmos account for Cassandra with autoscale provisioned throughput
+## Azure Cosmos DB account for Cassandra with autoscale provisioned throughput
-This template creates an Azure Cosmos account in two regions with options for consistency and failover, with a keyspace and table configured for autoscale throughput. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+This template creates an Azure Cosmos DB account in two regions with options for consistency and failover, with a keyspace and table configured for autoscale throughput. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-cassandra-autoscale%2Fazuredeploy.json)
This template creates an Azure Cosmos account in two regions with options for co
<a id="create-manual"></a>
-## Azure Cosmos account for Cassandra with standard provisioned throughput
+## Azure Cosmos DB account for Cassandra with standard provisioned throughput
-This template creates an Azure Cosmos account in two regions with options for consistency and failover, with a keyspace and table configured for standard throughput. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+This template creates an Azure Cosmos DB account in two regions with options for consistency and failover, with a keyspace and table configured for standard throughput. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-cassandra%2Fazuredeploy.json)
Here are some additional resources:
* [Azure Resource Manager documentation](../../azure-resource-manager/index.yml) * [Azure Cosmos DB resource provider schema](/azure/templates/microsoft.documentdb/allversions) * [Azure Cosmos DB Quickstart templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.DocumentDB&pageNumber=1&sort=Popular)
-* [Troubleshoot common Azure Resource Manager deployment errors](../../azure-resource-manager/templates/common-deployment-errors.md)
+* [Troubleshoot common Azure Resource Manager deployment errors](../../azure-resource-manager/templates/common-deployment-errors.md)
cosmos-db Troubleshoot Common Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/troubleshoot-common-issues.md
Title: Troubleshoot common errors in the Azure Cosmos DB Cassandra API
-description: This article discusses common issues in the Azure Cosmos DB Cassandra API and how to troubleshoot them.
+ Title: Troubleshoot common errors in the Azure Cosmos DB for Apache Cassandra
+description: This article discusses common issues in the Azure Cosmos DB for Apache Cassandra and how to troubleshoot them.
-+ Last updated 03/02/2021 ms.devlang: java+
-# Troubleshoot common issues in the Azure Cosmos DB Cassandra API
+# Troubleshoot common issues in the Azure Cosmos DB for Apache Cassandra
-The Cassandra API in [Azure Cosmos DB](../introduction.md) is a compatibility layer that provides [wire protocol support](cassandra-support.md) for the open-source Apache Cassandra database.
+The API for Cassandra in [Azure Cosmos DB](../introduction.md) is a compatibility layer that provides [wire protocol support](support.md) for the open-source Apache Cassandra database.
-This article describes common errors and solutions for applications that use the Azure Cosmos DB Cassandra API. If your error isn't listed and you experience an error when you execute a [supported operation in Cassandra](cassandra-support.md), but the error isn't present when using native Apache Cassandra, [create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
+This article describes common errors and solutions for applications that use the Azure Cosmos DB for Apache Cassandra. If your error isn't listed and you experience an error when you execute a [supported operation in Cassandra](support.md), but the error isn't present when using native Apache Cassandra, [create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
>[!NOTE]
->As a fully managed cloud-native service, Azure Cosmos DB provides [guarantees on availability, throughput, and consistency](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_3/) for the Cassandra API. The Cassandra API also facilitates zero-maintenance platform operations and zero-downtime patching.
+>As a fully managed cloud-native service, Azure Cosmos DB provides [guarantees on availability, throughput, and consistency](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_3/) for the API for Cassandra. The API for Cassandra also facilitates zero-maintenance platform operations and zero-downtime patching.
>
->These guarantees aren't possible in previous implementations of Apache Cassandra, so many of the Cassandra API back-end operations differ from Apache Cassandra. We recommend particular settings and approaches to help avoid common errors.
+>These guarantees aren't possible in previous implementations of Apache Cassandra, so many of the API for Cassandra back-end operations differ from Apache Cassandra. We recommend particular settings and approaches to help avoid common errors.
## NoNodeAvailableException
See [troubleshoot NoHostAvailableException](troubleshoot-nohostavailable-excepti
Requests are throttled because the total number of request units consumed is higher than the number of request units that you provisioned on the keyspace or table.
-Consider scaling the throughput assigned to a keyspace or table from the Azure portal (see [Elastically scale an Azure Cosmos DB Cassandra API account](scale-account-throughput.md)) or implementing a retry policy.
+Consider scaling the throughput assigned to a keyspace or table from the Azure portal (see [Elastically scale an Azure Cosmos DB for Apache Cassandra account](scale-account-throughput.md)) or implementing a retry policy.
-For Java, see retry samples for the [v3.x driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-java-retry-sample) and the [v4.x driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-java-retry-sample-v4). See also [Azure Cosmos Cassandra Extensions for Java](https://github.com/Azure/azure-cosmos-cassandra-extensions).
+For Java, see retry samples for the [v3.x driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-java-retry-sample) and the [v4.x driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-java-retry-sample-v4). See also [Azure Cosmos DB Cassandra Extensions for Java](https://github.com/Azure/azure-cosmos-cassandra-extensions).
### OverloadedException despite sufficient throughput The system seems to be throttling requests even though enough throughput is provisioned for request volume or consumed request unit cost. There are two possible causes: -- **Schema level operations**: The Cassandra API implements a system throughput budget for schema-level operations (CREATE TABLE, ALTER TABLE, DROP TABLE). This budget should be enough for schema operations in a production system. However, if you have a high number of schema-level operations, you might exceed this limit.
+- **Schema level operations**: The API for Cassandra implements a system throughput budget for schema-level operations (CREATE TABLE, ALTER TABLE, DROP TABLE). This budget should be enough for schema operations in a production system. However, if you have a high number of schema-level operations, you might exceed this limit.
Because the budget isn't user-controlled, consider lowering the number of schema operations that you run. If that action doesn't resolve the issue or it isn't feasible for your workload, [create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). -- **Data skew**: When throughput is provisioned in the Cassandra API, it's divided equally between physical partitions, and each physical partition has an upper limit. If you have a high amount of data being inserted or queried from one particular partition, it might be rate-limited even if you provision a large amount of overall throughput (request units) for that table.
+- **Data skew**: When throughput is provisioned in the API for Cassandra, it's divided equally between physical partitions, and each physical partition has an upper limit. If you have a high amount of data being inserted or queried from one particular partition, it might be rate-limited even if you provision a large amount of overall throughput (request units) for that table.
Review your data model and ensure you don't have excessive skew that might cause hot partitions.
The system seems to be throttling requests even though enough throughput is prov
Connection drops or times out unexpectedly.
-The Apache Cassandra drivers for Java provide two native reconnection policies: `ExponentialReconnectionPolicy` and `ConstantReconnectionPolicy`. The default is `ExponentialReconnectionPolicy`. However, for Azure Cosmos DB Cassandra API, we recommend `ConstantReconnectionPolicy` with a two-second delay.
+The Apache Cassandra drivers for Java provide two native reconnection policies: `ExponentialReconnectionPolicy` and `ConstantReconnectionPolicy`. The default is `ExponentialReconnectionPolicy`. However, for Azure Cosmos DB for Apache Cassandra, we recommend `ConstantReconnectionPolicy` with a two-second delay.
See the [documentation for the Java 4.x driver](https://docs.datastax.com/en/developer/java-driver/4.9/manual/core/reconnection/), the [documentation for the Java 3.x driver](https://docs.datastax.com/en/developer/java-driver/3.7/manual/reconnection/), or [Configuring ReconnectionPolicy for the Java driver](#configure-reconnectionpolicy-for-the-java-driver) examples.
datastax-java-driver {
advanced { reconnection-policy{ # The driver provides two implementations out of the box: ExponentialReconnectionPolicy and
- # ConstantReconnectionPolicy. We recommend ConstantReconnectionPolicy for Cassandra API, with
+ # ConstantReconnectionPolicy. We recommend ConstantReconnectionPolicy for API for Cassandra, with
# base-delay of 2 seconds. class = ConstantReconnectionPolicy base-delay = 2 second
datastax-java-driver {
## Next steps -- Learn about [supported features](cassandra-support.md) in the Azure Cosmos DB Cassandra API.-- Learn how to [migrate from native Apache Cassandra to Azure Cosmos DB Cassandra API](migrate-data-databricks.md).
+- Learn about [supported features](support.md) in the Azure Cosmos DB for Apache Cassandra.
+- Learn how to [migrate from native Apache Cassandra to Azure Cosmos DB for Apache Cassandra](migrate-data-databricks.md).
cosmos-db Troubleshoot Nohostavailable Exception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/troubleshoot-nohostavailable-exception.md
Title: Troubleshoot NoHostAvailableException and NoNodeAvailableException
description: This article discusses the various reasons for having a NoHostException and ways to handle it. -+ Last updated 12/02/2021 ms.devlang: csharp, java+ # Troubleshoot NoHostAvailableException and NoNodeAvailableException
Use CosmosLoadBalancingPolicy in [Java driver 3](https://github.com/Azure/azure-
.setCoreConnectionsPerHost(HostDistance.REMOTE, 10) // default 1 .setMaxConnectionsPerHost(HostDistance.REMOTE, 10); //default 1
- // cosmos load balancing policy
+ // Azure Cosmos DB load balancing policy
String Region = "West US"; CosmosLoadBalancingPolicy cosmosLoadBalancingPolicy = CosmosLoadBalancingPolicy.builder() .withWriteDC(Region) .withReadDC(Region) .build();
- // cosmos retry policy
+ // Azure Cosmos DB retry policy
CosmosRetryPolicy retryPolicy = CosmosRetryPolicy.builder() .withFixedBackOffTimeInMillis(5000) .withGrowingBackOffTimeInMillis(1000)
Use CosmosLoadBalancingPolicy in [Java driver 3](https://github.com/Azure/azure-
## Next steps * To understand the various error codes and their meaning, see [Server-side diagnostics](error-codes-solution.md).
-* See [Diagnose and troubleshoot issues with the Azure Cosmos DB .NET SDK](../sql/troubleshoot-dot-net-sdk.md).
-* Learn about performance guidelines for [.NET v3](../sql/performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](../sql/performance-tips.md).
-* See [Troubleshoot issues with the Azure Cosmos DB Java SDK v4 with SQL API accounts](../sql/troubleshoot-java-sdk-v4-sql.md).
-* See [Performance tips for the Azure Cosmos DB Java SDK v4](../sql/performance-tips-java-sdk-v4-sql.md).
+* See [Diagnose and troubleshoot issues with the Azure Cosmos DB .NET SDK](../nosql/troubleshoot-dotnet-sdk.md).
+* Learn about performance guidelines for [.NET v3](../nosql/performance-tips-dotnet-sdk-v3.md) and [.NET v2](../nosql/performance-tips.md).
+* See [Troubleshoot issues with the Azure Cosmos DB Java SDK v4 with API for NoSQL accounts](../nosql/troubleshoot-java-sdk-v4.md).
+* See [Performance tips for the Azure Cosmos DB Java SDK v4](../nosql/performance-tips-java-sdk-v4.md).
cosmos-db Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/change-feed.md
Last updated 06/07/2021-+ # Change feed in Azure Cosmos DB
-Change feed in Azure Cosmos DB is a persistent record of changes to a container in the order they occur. Change feed support in Azure Cosmos DB works by listening to an Azure Cosmos container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified. The persisted changes can be processed asynchronously and incrementally, and the output can be distributed across one or more consumers for parallel processing.
+Change feed in Azure Cosmos DB is a persistent record of changes to a container in the order they occur. Change feed support in Azure Cosmos DB works by listening to an Azure Cosmos DB container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified. The persisted changes can be processed asynchronously and incrementally, and the output can be distributed across one or more consumers for parallel processing.
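+
+As a hedged illustration of one way to consume the change feed, the following minimal Java sketch assumes the Azure Cosmos DB Java SDK v4 (API for NoSQL) change feed processor; the container names and setup are omitted and hypothetical:
+
+```java
+import com.azure.cosmos.ChangeFeedProcessor;
+import com.azure.cosmos.ChangeFeedProcessorBuilder;
+import com.azure.cosmos.CosmosAsyncContainer;
+
+public class ChangeFeedSketch {
+    public static ChangeFeedProcessor startProcessor(
+            CosmosAsyncContainer feedContainer, CosmosAsyncContainer leaseContainer) {
+        ChangeFeedProcessor processor = new ChangeFeedProcessorBuilder()
+            .hostName("consumer-host-1")      // identifies this consumer instance
+            .feedContainer(feedContainer)     // container whose changes are read
+            .leaseContainer(leaseContainer)   // container that stores lease/progress state
+            .handleChanges(changes ->
+                changes.forEach(item -> System.out.println("Changed item: " + item)))
+            .buildChangeFeedProcessor();
+
+        processor.start().subscribe();        // begins pulling changes asynchronously
+        return processor;
+    }
+}
+```
+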
Learn more about [change feed design patterns](change-feed-design-patterns.md).
Learn more about [change feed design patterns](change-feed-design-patterns.md).
This feature is currently supported by the following Azure Cosmos DB APIs and client SDKs.
-| **Client drivers** | **SQL API** | **Azure Cosmos DB API for Cassandra** | **Azure Cosmos DB API for MongoDB** | **Gremlin API**|**Table API** |
+| **Client drivers** | **NoSQL** | **Apache Cassandra** | **MongoDB** | **Apache Gremlin** | **Table** |
| --- | --- | --- | --- | --- | --- |
| .NET | Yes | Yes | Yes | Yes | No |
| Java | Yes | Yes | Yes | Yes | No |
Change feed items come in the order of their modification time. This sort order
While consuming the change feed in an Eventual consistency level, there could be duplicate events in-between subsequent change feed read operations (the last event of one read operation appears as the first of the next).
-### Change feed in multi-region Azure Cosmos accounts
+### Change feed in multi-region Azure Cosmos DB accounts
-In a multi-region Azure Cosmos account, if a write-region fails over, change feed will work across the manual failover operation and it will be contiguous.
+In a multi-region Azure Cosmos DB account, if a write-region fails over, change feed will work across the manual failover operation and it will be contiguous.
### Change feed and Time to Live (TTL)
Change feed is available for each logical partition key within the container, an
## Features of change feed
-* Change feed is enabled by default for all Azure Cosmos accounts.
+* Change feed is enabled by default for all Azure Cosmos DB accounts.
-* You can use your [provisioned throughput](request-units.md) to read from the change feed, just like any other Azure Cosmos DB operation, in any of the regions associated with your Azure Cosmos database.
+* You can use your [provisioned throughput](request-units.md) to read from the change feed, just like any other Azure Cosmos DB operation, in any of the regions associated with your Azure Cosmos DB database.
* The change feed includes inserts and update operations made to items within the container. You can capture deletes by setting a "soft-delete" flag within your items (for example, documents) in place of deletes. Alternatively, you can set a finite expiration period for your items with the [TTL capability](time-to-live.md). For example, 24 hours and use the value of that property to capture deletes. With this solution, you have to process the changes within a shorter time interval than the TTL expiration period.
Change feed is available for each logical partition key within the container, an
* Changes can be synchronized from any point-in-time, that is there is no fixed data retention period for which changes are available.
-* Changes are available in parallel for all logical partition keys of an Azure Cosmos container. This capability allows changes from large containers to be processed in parallel by multiple consumers.
+* Changes are available in parallel for all logical partition keys of an Azure Cosmos DB container. This capability allows changes from large containers to be processed in parallel by multiple consumers.
* Applications can request multiple change feeds on the same container simultaneously. ChangeFeedOptions.StartTime can be used to provide an initial starting point. For example, to find the continuation token corresponding to a given clock time. The ContinuationToken, if specified, takes precedence over the StartTime and StartFromBeginning values. The precision of ChangeFeedOptions.StartTime is ~5 secs. ## Change feed in APIs for Cassandra and MongoDB
-Change feed functionality is surfaced as change stream in MongoDB API and Query with predicate in Cassandra API. To learn more about the implementation details for MongoDB API, see the [Change streams in the Azure Cosmos DB API for MongoDB](mongodb/change-streams.md).
+Change feed functionality is surfaced as change stream in API for MongoDB and Query with predicate in API for Cassandra. To learn more about the implementation details for API for MongoDB, see the [Change streams in the Azure Cosmos DB API for MongoDB](mongodb/change-streams.md).
-Native Apache Cassandra provides change data capture (CDC), a mechanism to flag specific tables for archival as well as rejecting writes to those tables once a configurable size-on-disk for the CDC log is reached. The change feed feature in Azure Cosmos DB API for Cassandra enhances the ability to query the changes with predicate via CQL. To learn more about the implementation details, see [Change feed in the Azure Cosmos DB API for Cassandra](cassandr).
+Native Apache Cassandra provides change data capture (CDC), a mechanism to flag specific tables for archival as well as rejecting writes to those tables once a configurable size-on-disk for the CDC log is reached. The change feed feature in Azure Cosmos DB for Apache Cassandra enhances the ability to query the changes with predicate via CQL. To learn more about the implementation details, see [Change feed in the Azure Cosmos DB for Apache Cassandra](cassandr).
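+
+As a hedged sketch of querying changes with a predicate, the following Java snippet assumes a connected DataStax Java driver 4 `CqlSession`, a hypothetical `sampleks.books` table, and the `COSMOS_CHANGEFEED_START_TIME()` predicate described in the linked article:
+
+```java
+import com.datastax.oss.driver.api.core.CqlSession;
+import com.datastax.oss.driver.api.core.cql.ResultSet;
+import com.datastax.oss.driver.api.core.cql.Row;
+
+public class CassandraChangeFeedSketch {
+    public static void readChanges(CqlSession session) {
+        // The predicate restricts results to rows changed after the given UTC timestamp.
+        ResultSet rs = session.execute(
+            "SELECT * FROM sampleks.books WHERE COSMOS_CHANGEFEED_START_TIME() = '2022-01-01 00:00:00.000+0000'");
+
+        for (Row row : rs) {
+            System.out.println("Changed row: " + row.getFormattedContents());
+        }
+        // To poll incrementally, reuse the paging state returned with each page
+        // (rs.getExecutionInfo().getPagingState()) on subsequent queries.
+    }
+}
+```
+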
## Next steps
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
Title: Choose an API in Azure Cosmos DB
-description: Learn how to choose between SQL/Core, MongoDB, Cassandra, Gremlin, and table APIs in Azure Cosmos DB based on your workload requirements.
+description: Learn how to choose between APIs for NoSQL, MongoDB, Cassandra, Gremlin, and Table in Azure Cosmos DB based on your workload requirements.
- Previously updated : 12/08/2021-+ Last updated : 10/05/2021+ adobe-target: true # Choose an API in Azure Cosmos DB+ Azure Cosmos DB is a fully managed NoSQL database for modern app development. Azure Cosmos DB takes database administration off your hands with automatic management, updates, and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand.
Azure Cosmos DB is a fully managed NoSQL database for modern app development. Az
## APIs in Azure Cosmos DB
-Azure Cosmos DB offers multiple database APIs, which include the Core (SQL) API, API for MongoDB, Cassandra API, Gremlin API, and Table API. By using these APIs, you can model real world data using documents, key-value, graph, and column-family data models. These APIs allow your applications to treat Azure Cosmos DB as if it were various other databases technologies, without the overhead of management, and scaling approaches. Using these APIs, Azure Cosmos DB helps you to use the ecosystems, tools, and skills you already have for data modeling and querying.
+Azure Cosmos DB offers multiple database APIs, which include NoSQL, MongoDB, Cassandra, Gremlin, and Table. By using these APIs, you can model real world data using documents, key-value, graph, and column-family data models. These APIs allow your applications to treat Azure Cosmos DB as if it were various other databases technologies, without the overhead of management, and scaling approaches. Azure Cosmos DB helps you to use the ecosystems, tools, and skills you already have for data modeling and querying with its various APIs.
-All the APIs offer automatic scaling of storage and throughput, flexibility, and performance guarantees. There is no one best API, and you may choose any one of the APIs to build your application. This article will help you choose an API based on your workload and team requirements.
+All the APIs offer automatic scaling of storage and throughput, flexibility, and performance guarantees. There's no one best API, and you may choose any one of the APIs to build your application. This article will help you choose an API based on your workload and team requirements.
## Considerations when choosing an API
-Core(SQL) API is native to Azure Cosmos DB.
+API for NoSQL is native to Azure Cosmos DB.
API for MongoDB, Cassandra, Gremlin, and Table implement the wire protocol of open-source database engines. These APIs are best suited if the following conditions are true:
-* If you have existing MongoDB, Cassandra, or Gremlin applications.
-* If you don't want to rewrite your entire data access layer.
-* If you want to use the open-source developer ecosystem, client-drivers, expertise, and resources for your database.
-* If you want to use the Azure Cosmos DB key features such as global distribution, elastic scaling of storage and throughput, performance, low latency, ability to run transactional and analytical workload, and use a fully managed platform.
-* If you are developing modernized apps on a multi-cloud environment.
+* If you have existing MongoDB, Cassandra, or Gremlin applications
+* If you don't want to rewrite your entire data access layer
+* If you want to use the open-source developer ecosystem, client-drivers, expertise, and resources for your database
+* If you want to use the Azure Cosmos DB core features such as:
+ * Global distribution
+ * Elastic scaling of storage and throughput
+ * High performance at scale
+ * Low latency
+ * Ability to run transactional and analytical workloads
+ * Fully managed platform
+* If you're developing modernized apps on a multicloud environment
You can build new applications with these APIs or migrate your existing data. To run the migrated apps, change the connection string of your application and continue to run as before. When migrating existing apps, make sure to evaluate the feature support of these APIs.
Based on your workload, you must choose the API that fits your requirement. The
:::image type="content" source="./media/choose-api/choose-api-decision-tree.png" alt-text="Decision tree to choose an API in Azure Cosmos DB." lightbox="./media/choose-api/choose-api-decision-tree.png":::
-## Core(SQL) API
+## <a id="coresql-api"></a> API for NoSQL
-This API stores data in document format. It offers the best end-to-end experience as we have full control over the interface, service, and the SDK client libraries. Any new feature that is rolled out to Azure Cosmos DB is first available on SQL API accounts. Azure Cosmos DB SQL API accounts provide support for querying items using the Structured Query Language (SQL) syntax, one of the most familiar and popular query languages to query JSON objects. To learn more, see the [Azure Cosmos DB SQL API](/training/modules/intro-to-azure-cosmos-db-core-api/) training module and [getting started with SQL queries](sql-query-getting-started.md) article.
+The Azure Cosmos DB API for NoSQL stores data in document format. It offers the best end-to-end experience as we have full control over the interface, service, and the SDK client libraries. Any new feature that is rolled out to Azure Cosmos DB is first available on API for NoSQL accounts. NoSQL accounts provide support for querying items using the Structured Query Language (SQL) syntax, one of the most familiar and popular query languages to query JSON objects. To learn more, see the [Azure Cosmos DB API for NoSQL](/training/modules/intro-to-azure-cosmos-db-core-api/) training module and [getting started with SQL queries](nosql/query/getting-started.md) article.
-If you are migrating from other databases such as Oracle, DynamoDB, HBase etc. and if you want to use the modernized technologies to build your apps, SQL API is the recommended option. SQL API supports analytics and offers performance isolation between operational and analytical workloads.
+If you're migrating from other databases such as Oracle, DynamoDB, HBase etc. and if you want to use the modernized technologies to build your apps, API for NoSQL is the recommended option. API for NoSQL supports analytics and offers performance isolation between operational and analytical workloads.
## API for MongoDB
-This API stores data in a document structure, via BSON format. It is compatible with MongoDB wire protocol; however, it does not use any native MongoDB related code. This API is a great choice if you want to use the broader MongoDB ecosystem and skills, without compromising on using Azure Cosmos DB features such as scaling, high availability, geo-replication, multiple write locations, automatic and transparent shard management, transparent replication between operational and analytical stores, and more.
+The Azure Cosmos DB API for MongoDB stores data in a document structure, via BSON format. It's compatible with MongoDB wire protocol; however, it doesn't use any native MongoDB related code. The API for MongoDB is a great choice if you want to use the broader MongoDB ecosystem and skills, without compromising on using Azure Cosmos DB features.
-You can use your existing MongoDB apps with API for MongoDB by just changing the connection string. You can move any existing data using native MongoDB tools such as mongodump & mongorestore or using our Azure Database Migration tool. Tools, such as the MongoDB shell, [MongoDB Compass](mongodb/connect-using-compass.md), and [Robo3T](mongodb/connect-using-robomongo.md), can run queries and work with data as they do with native MongoDB. To learn more, see [API for MongoDB](mongodb/mongodb-introduction.md) article.
+The features that Azure Cosmos DB provides, that you don't have to compromise on includes:
-## Cassandra API
+* Scaling
+* High availability
+* Geo-replication
+* Multiple write locations
+* Automatic and transparent shard management
+* Transparent replication between operational and analytical stores
-This API stores data in column-oriented schema. Apache Cassandra offers a highly distributed, horizontally scaling approach to storing large volumes of data while offering a flexible approach to a column-oriented schema. Cassandra API in Azure Cosmos DB aligns with this philosophy to approaching distributed NoSQL databases. Cassandra API is wire protocol compatible with the Apache Cassandra. You should consider Cassandra API if you want to benefit the elasticity and fully managed nature of Azure Cosmos DB and still use most of the native Apache Cassandra features, tools, and ecosystem. This means on Cassandra API you don't need to manage the OS, Java VM, garbage collector, read/write performance, nodes, clusters, etc.
+You can use your existing MongoDB apps with API for MongoDB by just changing the connection string. You can move any existing data using native MongoDB tools such as mongodump & mongorestore or using our Azure Database Migration tool. Tools, such as the MongoDB shell, [MongoDB Compass](mongodb/connect-using-compass.md), and [Robo3T](mongodb/connect-using-robomongo.md), can run queries and work with data as they do with native MongoDB. To learn more, see [API for MongoDB](mongodb/introduction.md) article.
-You can use Apache Cassandra client drivers to connect to the Cassandra API. The Cassandra API enables you to interact with data using the Cassandra Query Language (CQL), and tools like CQL shell, Cassandra client drivers that you're already familiar with. Cassandra API currently only supports OLTP scenarios. Using Cassandra API, you can also use the unique features of Azure Cosmos DB such as [change feed](cassandra-change-feed.md). To learn more, see [Cassandra API](cassandra-introduction.md) article. If you're already familiar with Apache Cassandra, but new to Azure Cosmos DB, we recommend our article on [how to adapt to the Cassandra API if you are coming from Apache Cassandra](./cassandr).
+## <a id="cassandra-api"></a> API for Apache Cassandra
-## Gremlin API
+The Azure Cosmos DB API for Cassandra stores data in column-oriented schema. Apache Cassandra offers a highly distributed, horizontally scaling approach to storing large volumes of data while offering a flexible approach to a column-oriented schema. API for Cassandra in Azure Cosmos DB aligns with this philosophy to approaching distributed NoSQL databases. This API for Cassandra is wire protocol compatible with native Apache Cassandra. You should consider API for Cassandra if you want to benefit from the elasticity and fully managed nature of Azure Cosmos DB and still use most of the native Apache Cassandra features, tools, and ecosystem. This fully managed nature means on API for Cassandra you don't need to manage the OS, Java VM, garbage collector, read/write performance, nodes, clusters, etc.
-This API allows users to make graph queries and stores data as edges and vertices. Use this API for scenarios involving dynamic data, data with complex relations, data that is too complex to be modeled with relational databases, and if you want to use the existing Gremlin ecosystem and skills. The Azure Cosmos DB Gremlin API combines the power of graph database algorithms with highly scalable, managed infrastructure. It provides a unique, flexible solution to most common data problems associated with lack of flexibility and relational approaches. Gremlin API currently only supports OLTP scenarios.
+You can use Apache Cassandra client drivers to connect to the API for Cassandra. The API for Cassandra enables you to interact with data using the Cassandra Query Language (CQL), and tools like CQL shell, Cassandra client drivers that you're already familiar with. API for Cassandra currently only supports OLTP scenarios. Using API for Cassandra, you can also use the unique features of Azure Cosmos DB such as [change feed](cassandr).
-The Azure Cosmos DB Gremlin API is based on the [Apache TinkerPop](https://tinkerpop.apache.org/) graph computing framework. Gremlin API uses the same Graph query language to ingest and query data. It uses the Azure Cosmos DB partition strategy to do the read/write operations from the Graph database engine. Gremlin API has a wire protocol support with the open-source Gremlin, so you can use the open-source Gremlin SDKs to build your application. Azure Cosmos DB Gremlin API also works with Apache Spark and [GraphFrames](https://github.com/graphframes/graphframes) for complex analytical graph scenarios. To learn more, see [Gremlin API](graph-introduction.md) article.
+## <a id="gremlin-api"></a> API for Apache Gremlin
-## Table API
+The Azure Cosmos DB API for Gremlin allows users to make graph queries and stores data as edges and vertices.
-This API stores data in key/value format. If you are currently using Azure Table storage, you may see some limitations in latency, scaling, throughput, global distribution, index management, low query performance. Table API overcomes these limitations and it's recommended to migrate your app if you want to use the benefits of Azure Cosmos DB. Table API only supports OLTP scenarios.
+Use the API for Gremlin for scenarios:
-Applications written for Azure Table storage can migrate to the Table API with little code changes and take advantage of premium capabilities. To learn more, see [Table API](table/introduction.md) article.
+* Involving dynamic data
+* Involving data with complex relations
+* Involving data that is too complex to be modeled with relational databases
+* If you want to use the existing Gremlin ecosystem and skills
-## Capacity planning when migrating data
+The API for Gremlin combines the power of graph database algorithms with highly scalable, managed infrastructure. This API provides a unique and flexible solution to common data problems associated with lack of flexibility or relational approaches. API for Gremlin currently only supports OLTP scenarios.
+
+The API for Gremlin is based on the [Apache TinkerPop](https://tinkerpop.apache.org/) graph computing framework. API for Gremlin uses the same Graph query language to ingest and query data. It uses the Azure Cosmos DB partition strategy to do the read/write operations from the Graph database engine. API for Gremlin has a wire protocol support with the open-source Gremlin, so you can use the open-source Gremlin SDKs to build your application. API for Gremlin also works with Apache Spark and [GraphFrames](https://github.com/graphframes/graphframes) for complex analytical graph scenarios. To learn more, see [API for Gremlin](gremlin/introduction.md) article.
+
+## <a id="table-api"></a> API for Table
-Trying to do capacity planning for a migration to Azure Cosmos DB API for SQL or MongoDB from an existing database cluster? You can use information about your existing database cluster for capacity planning.
+The Azure Cosmos DB API for Table stores data in key/value format. If you're currently using Azure Table storage, you may see limitations in latency, scaling, throughput, global distribution, index management, and query performance. The API for Table overcomes these limitations, and migrating your app is recommended if you want to use the benefits of Azure Cosmos DB. The API for Table supports only OLTP scenarios.
+
+Applications written for Azure Table storage can migrate to the API for Table with few code changes and take advantage of premium capabilities. To learn more, see the [API for Table](table/introduction.md) article.
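
As a hedged example of moving a Table storage workload onto the API for Table, an account with the Table capability and a table can be created with the Azure CLI (placeholder names, existing resource group assumed):

```azurecli
# Create an Azure Cosmos DB account that exposes the API for Table
az cosmosdb create \
    --name mytableaccount \
    --resource-group myresourcegroup \
    --capabilities EnableTable

# Create a table with provisioned throughput
az cosmosdb table create \
    --account-name mytableaccount \
    --resource-group myresourcegroup \
    --name mytable \
    --throughput 400
```

Existing Azure Tables SDK code can then be pointed at the new account's connection string.
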
+
+## API for PostgreSQL
+
+Azure Cosmos DB for PostgreSQL is a managed service for PostgreSQL that is extended with the Citus open-source extension, enabling you to run distributed PostgreSQL at any scale.
+
+## Capacity planning when migrating data
-* If all you know is the number of vcores and servers in your existing sharded and replicated database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md).
+Trying to do capacity planning for a migration to Azure Cosmos DB for NoSQL or MongoDB from an existing database cluster? You can use information about your existing database cluster for capacity planning.
-* If you know typical request rates for your current database workload, read about estimating request units using Azure Cosmos DB [capacity planner for SQL API](./sql/estimate-ru-with-capacity-planner.md) and [API for MongoDB](./mongodb/estimate-ru-capacity-planner.md)
+* If all you know is the number of vCores and servers in your existing sharded and replicated database cluster, see [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md).
+* If you know typical request rates for your current database workload, see the [capacity planner for API for NoSQL](./sql/estimate-ru-with-capacity-planner.md) and the [capacity planner for API for MongoDB](./mongodb/estimate-ru-capacity-planner.md).
## Next steps
-* [Get started with Azure Cosmos DB SQL API](create-sql-api-dotnet.md)
-* [Get started with Azure Cosmos DB API for MongoDB](mongodb/create-mongodb-nodejs.md)
-* [Get started with Azure Cosmos DB Cassandra API](cassandr)
-* [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md)
-* [Get started with Azure Cosmos DB Table API](create-table-dotnet.md)
+* [Get started with Azure Cosmos DB for NoSQL](nosql/quickstart-dotnet.md)
+* [Get started with Azure Cosmos DB for MongoDB](mongodb/create-mongodb-nodejs.md)
+* [Get started with Azure Cosmos DB for Cassandra](cassandr)
+* [Get started with Azure Cosmos DB for Gremlin](gremlin/quickstart-dotnet.md)
+* [Get started with Azure Cosmos DB for Table](table/quickstart-dotnet.md)
* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](./sql/estimate-ru-with-capacity-planner.md)
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](./sql/estimate-ru-with-capacity-planner.md)
cosmos-db Common Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/common-cli-samples.md
- Title: Azure CLI Samples common to all Azure Cosmos DB APIs
-description: Azure CLI Samples common to all Azure Cosmos DB APIs
--- Previously updated : 02/22/2022------
-# Azure CLI samples for Azure Cosmos DB API
--
-The following table includes links to sample Azure CLI scripts that apply to all Cosmos DB APIs. For API specific samples, see [API specific samples](#api-specific-samples). Common samples are the same across all APIs.
-
-These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
-
-## Common API Samples
-
-These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core) API account, but these operations are identical across all database APIs in Cosmos DB.
-
-|Task | Description |
-|||
-| [Add or fail over regions](scripts/cli/common/regions.md) | Add a region, change failover priority, trigger a manual failover.|
-| [Perform account key operations](scripts/cli/common/keys.md) | List account keys, read-only keys, regenerate keys and list connection strings.|
-| [Secure with IP firewall](scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.|
-| [Secure new account with service endpoints](scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.|
-| [Secure existing account with service endpoints](scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
-|||
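
The key operations referenced in the table above map to the `az cosmosdb keys` command group; a minimal sketch, with placeholder account and resource group names:

```azurecli
# List account keys and read-only keys
az cosmosdb keys list --name myaccount --resource-group myresourcegroup --type keys
az cosmosdb keys list --name myaccount --resource-group myresourcegroup --type read-only-keys

# List connection strings
az cosmosdb keys list --name myaccount --resource-group myresourcegroup --type connection-strings

# Regenerate the secondary key
az cosmosdb keys regenerate --name myaccount --resource-group myresourcegroup --key-kind secondary
```
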
-
-## API specific samples
--- [Cassandra API samples](cassandr)-- [Gremlin API samples](graph/cli-samples.md)-- [MongoDB API samples](mongodb/cli-samples.md)-- [SQL API samples](sql/cli-samples.md)-- [Table API samples](table/cli-samples.md)-
-## Next steps
-
-Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb).
cosmos-db Common Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/common-powershell-samples.md
- Title: Azure PowerShell samples common to all Azure Cosmos DB APIs
-description: Azure PowerShell Samples common to all Azure Cosmos DB APIs
-- Previously updated : 05/02/2022------
-# Azure PowerShell samples for Azure Cosmos DB API
--
-The following table includes links to sample Azure PowerShell scripts that apply to all Cosmos DB APIs. For API specific samples, see [API specific samples](#api-specific-samples). Common samples are the same across all APIs.
--
-These samples require Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
-
-## Common API Samples
-
-These samples use a SQL (Core) API account. To use these samples for other APIs, copy the related properties and apply to your API specific scripts.
-
-|Task | Description |
-|||
-| [Account keys or connection strings](scripts/powershell/common/keys-connection-strings.md)| Get primary and secondary keys and connection strings, or regenerate an account key.|
-| [Change failover priority or trigger failover](scripts/powershell/common/failover-priority-update.md)| Change the regional failover priority or trigger a manual failover.|
-| [Create an account with IP Firewall](scripts/powershell/common/firewall-create.md)| Create an Azure Cosmos DB account with IP Firewall enabled.|
-| [Update account](scripts/powershell/common/account-update.md) | Update an account's default consistency level.|
-| [Update an account's regions](scripts/powershell/common/update-region.md) | Add regions to an account or change regional failover order.|
-
-## API specific samples
--- [Cassandra API samples](cassandr)-- [Gremlin API samples](graph/powershell-samples.md)-- [MongoDB API samples](mongodb/powershell-samples.md)-- [SQL API samples](sql/powershell-samples.md)-- [Table API samples](table/powershell-samples.md)-
-## Next steps
--- [Azure PowerShell documentation](/powershell)
cosmos-db Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/compliance.md
description: This article describes compliance coverage for Azure Cosmos DB.
+ Last updated 09/11/2021
# Compliance in Azure Cosmos DB Azure Cosmos DB is available in all Azure regions. Microsoft makes the following Azure cloud environments available to customers:
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
+ Last updated 05/30/2022 # Azure Cosmos DB service quotas This article provides an overview of the default quotas offered to different resources in the Azure Cosmos DB. ## Storage and database operations
-After you create an Azure Cosmos account under your subscription, you can manage data in your account by [creating databases, containers, and items](account-databases-containers-items.md).
+After you create an Azure Cosmos DB account under your subscription, you can manage data in your account by [creating databases, containers, and items](account-databases-containers-items.md).
### Provisioned throughput
You can provision throughput at a container-level or a database-level in terms o
### Minimum throughput limits
-A Cosmos container (or shared throughput database) using manual throughput must have a minimum throughput of 400 RU/s. As the container grows, Cosmos DB requires a minimum throughput to ensure the database or container has sufficient resource for its operations.
+An Azure Cosmos DB container (or shared throughput database) using manual throughput must have a minimum throughput of 400 RU/s. As the container grows, Azure Cosmos DB requires a minimum throughput to ensure the database or container has sufficient resources for its operations.
The current and minimum throughput of a container or a database can be retrieved from the Azure portal or the SDKs. For more information, see [Provision throughput on containers and databases](set-throughput.md).
-The actual minimum RU/s may vary depending on your account configuration. You can use [Azure Monitor metrics](monitor-cosmos-db.md#view-operation-level-metrics-for-azure-cosmos-db) to view the history of provisioned throughput (RU/s) and storage on a resource.
+The actual minimum RU/s may vary depending on your account configuration. You can use [Azure Monitor metrics](monitor.md#view-operation-level-metrics-for-azure-cosmos-db) to view the history of provisioned throughput (RU/s) and storage on a resource.
#### Minimum throughput on container
In summary, here are the minimum provisioned RU limits when using manual through
| Minimum RUs per container ([dedicated throughput provisioned mode with manual throughput](./account-databases-containers-items.md#azure-cosmos-db-containers)) | 400 | | Minimum RUs per database ([shared throughput provisioned mode with manual throughput](./account-databases-containers-items.md#azure-cosmos-db-containers)) | 400 RU/s for first 25 containers. |
-Cosmos DB supports programmatic scaling of throughput (RU/s) per container or database via the SDKs or portal.
+Azure Cosmos DB supports programmatic scaling of throughput (RU/s) per container or database via the SDKs or portal.
Depending on the current RU/s provisioned and resource settings, each resource can scale synchronously and immediately from the minimum RU/s up to 100x the minimum RU/s. If the requested throughput value is outside that range, scaling is performed asynchronously. Asynchronous scaling may take minutes to hours to complete depending on the requested throughput and data storage size in the container. [Learn more.](scaling-provisioned-throughput-best-practices.md#background-on-scaling-rus)
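
For example, a container's current throughput can be read and then scaled with the Azure CLI; the sketch below targets an API for NoSQL container and uses placeholder account, database, and container names:

```azurecli
# Show the container's current provisioned throughput settings
az cosmosdb sql container throughput show \
    --account-name myaccount \
    --resource-group myresourcegroup \
    --database-name mydatabase \
    --name mycontainer

# Scale the container's manual throughput to a new RU/s value
az cosmosdb sql container throughput update \
    --account-name myaccount \
    --resource-group myresourcegroup \
    --database-name mydatabase \
    --name mycontainer \
    --throughput 1000
```
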
Depending on the current RU/s provisioned and resource settings, each resource c
| | | | Maximum RU/s per container | 5,000 | | Maximum storage across all items per (logical) partition | 20 GB |
-| Maximum storage per container (SQL API, Mongo API, Table API, Gremlin API)| 50 GB (default)<sup>1</sup> |
-| Maximum storage per container (Cassandra API)| 30 GB (default)<sup>1</sup> |
+| Maximum storage per container (API for NoSQL, MongoDB, Table, and Gremlin)| 50 GB (default)<sup>1</sup> |
+| Maximum storage per container (API for Cassandra)| 30 GB (default)<sup>1</sup> |
<sup>1</sup> Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md). ## Control plane operations
-You can [provision and manage your Azure Cosmos account](how-to-manage-database-account.md) using the Azure portal, Azure PowerShell, Azure CLI, and Azure Resource Manager templates. The following table lists the limits per subscription, account, and number of operations.
+You can [provision and manage your Azure Cosmos DB account](how-to-manage-database-account.md) using the Azure portal, Azure PowerShell, Azure CLI, and Azure Resource Manager templates. The following table lists the limits per subscription, account, and number of operations.
| Resource | Limit | | | |
You can [provision and manage your Azure Cosmos account](how-to-manage-database-
<sup>2</sup> Regional failovers only apply to single region writes accounts. Multi-region write accounts don't require or have any limits on changing the write region.
-Cosmos DB automatically takes backups of your data at regular intervals. For details on backup retention intervals and windows, see [Online backup and on-demand data restore in Azure Cosmos DB](online-backup-and-restore.md).
+Azure Cosmos DB automatically takes backups of your data at regular intervals. For details on backup retention intervals and windows, see [Online backup and on-demand data restore in Azure Cosmos DB](online-backup-and-restore.md).
## Per-account limits
Cosmos DB automatically takes backups of your data at regular intervals. For det
## Per-container limits
-Depending on which API you use, an Azure Cosmos container can represent either a collection, a table, or graph. Containers support configurations for [unique key constraints](unique-keys.md), [stored procedures, triggers, and UDFs](stored-procedures-triggers-udfs.md), and [indexing policy](how-to-manage-indexing-policy.md). The following table lists the limits specific to configurations within a container.
+Depending on which API you use, an Azure Cosmos DB container can represent either a collection, a table, or a graph. Containers support configurations for [unique key constraints](unique-keys.md), [stored procedures, triggers, and UDFs](stored-procedures-triggers-udfs.md), and [indexing policy](how-to-manage-indexing-policy.md). The following table lists the limits specific to configurations within a container.
| Resource | Limit | | | |
Depending on which API you use, an Azure Cosmos container can represent either a
## Per-item limits
-An Azure Cosmos item can represent either a document in a collection, a row in a table, or a node or edge in a graph; depending on which API you use. The following table shows the limits per item in Cosmos DB.
+An Azure Cosmos DB item can represent either a document in a collection, a row in a table, or a node or edge in a graph, depending on which API you use. The following table shows the limits per item in Azure Cosmos DB.
| Resource | Limit | | | |
An Azure Cosmos item can represent either a document in a collection, a row in a
| Maximum TTL value |2147483647 | | Maximum precision/range for numbers in [JSON (to ensure safe interoperability)](https://www.rfc-editor.org/rfc/rfc8259#section-6) | [IEEE 754 binary64](https://www.rfc-editor.org/rfc/rfc8259#ref-IEEE754) |
-<sup>1</sup> Large document sizes up to 16 Mb are currently in preview with Azure Cosmos DB API for MongoDB only. Sign-up for the feature "Azure Cosmos DB API For MongoDB 16 MB Document Support" from [Preview Features the Azure portal](./access-previews.md), to try the new feature.
+<sup>1</sup> Large document sizes up to 16 MB are supported with Azure Cosmos DB for MongoDB only. Read the [feature documentation](../cosmos-db/mongodb/feature-support-42.md#data-types) to learn more.
-There are no restrictions on the item payloads (like number of properties and nesting depth), except for the length restrictions on partition key and ID values, and the overall size restriction of 2 MB. You may have to configure indexing policy for containers with large or complex item structures to reduce RU consumption. See [Modeling items in Cosmos DB](how-to-model-partition-example.md) for a real-world example, and patterns to manage large items.
+There are no restrictions on the item payloads (like number of properties and nesting depth), except for the length restrictions on partition key and ID values, and the overall size restriction of 2 MB. You may have to configure indexing policy for containers with large or complex item structures to reduce RU consumption. See [Modeling items in Azure Cosmos DB](how-to-model-partition-example.md) for a real-world example, and patterns to manage large items.
## Per-request limits
Azure Cosmos DB supports [CRUD and query operations](/rest/api/cosmos-db/) again
Once an operation like query reaches the execution timeout or response size limit, it returns a page of results and a continuation token to the client to resume execution. There's no practical limit on the duration a single query can run across pages/continuations.
-Cosmos DB uses HMAC for authorization. You can use either a primary key, or a [resource token](secure-access-to-data.md) for fine-grained access control to resources. These resources can include containers, partition keys, or items. The following table lists limits for authorization tokens in Cosmos DB.
+Azure Cosmos DB uses HMAC for authorization. You can use either a primary key, or a [resource token](secure-access-to-data.md) for fine-grained access control to resources. These resources can include containers, partition keys, or items. The following table lists limits for authorization tokens in Azure Cosmos DB.
| Resource | Limit | | | |
Cosmos DB uses HMAC for authorization. You can use either a primary key, or a [r
<sup>1</sup> You can increase it by [filing an Azure support ticket](create-support-request-quota-increase.md)
-Cosmos DB supports execution of triggers during writes. The service supports a maximum of one pre-trigger and one post-trigger per write operation.
+Azure Cosmos DB supports execution of triggers during writes. The service supports a maximum of one pre-trigger and one post-trigger per write operation.
## Metadata request limits
See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article
## SQL query limits
-Cosmos DB supports querying items using [SQL](./sql-query-getting-started.md). The following table describes restrictions in query statements, for example in terms of number of clauses or query length.
+Azure Cosmos DB supports querying items using [SQL](nosql/query/getting-started.md). The following table describes restrictions in query statements, for example in terms of number of clauses or query length.
| Resource | Limit | | | |
Cosmos DB supports querying items using [SQL](./sql-query-getting-started.md). T
<sup>1</sup> You can increase any of these SQL query limits by creating an [Azure Support request](create-support-request-quota-increase.md).
-## MongoDB API-specific limits
+## API for MongoDB-specific limits
-Cosmos DB supports the MongoDB wire protocol for applications written against MongoDB. You can find the supported commands and protocol versions at [Supported MongoDB features and syntax](mongodb/feature-support-32.md).
+Azure Cosmos DB supports the MongoDB wire protocol for applications written against MongoDB. You can find the supported commands and protocol versions at [Supported MongoDB features and syntax](mongodb/feature-support-32.md).
-The following table lists the limits specific to MongoDB feature support. Other service limits mentioned for the SQL (core) API also apply to the MongoDB API.
+The following table lists the limits specific to MongoDB feature support. Other service limits mentioned for the API for NoSQL also apply to the API for MongoDB.
| Resource | Limit | | | |
The following table lists the limits specific to MongoDB feature support. Other
<sup>1</sup> We recommend that client applications set the idle connection timeout in the driver settings to 2-3 minutes because the [default timeout for Azure LoadBalancer is 4 minutes](../load-balancer/load-balancer-tcp-idle-timeout.md). This timeout will ensure that idle connections aren't closed by an intermediate load balancer between the client machine and Azure Cosmos DB.
-## Try Cosmos DB Free limits
+## Try Azure Cosmos DB Free limits
The following table lists the limits for the [Try Azure Cosmos DB for Free](https://azure.microsoft.com/try/cosmosdb/) trial. | Resource | Limit | | | | | Duration of the trial | 30 days (a new trial can be requested after expiration) <br> After expiration, the information stored is deleted. |
-| Maximum containers per subscription (SQL, Gremlin, Table API) | 1 |
-| Maximum containers per subscription (MongoDB API) | 3 |
+| Maximum containers per subscription (APIs for NoSQL, Gremlin, and Table) | 1 |
+| Maximum containers per subscription (API for MongoDB) | 3 |
| Maximum throughput per container | 5000 | | Maximum throughput per shared-throughput database | 20000 | | Maximum total storage per account | 10 GB |
-Try Cosmos DB supports global distribution in only the Central US, North Europe, and Southeast Asia regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
+Try Azure Cosmos DB supports global distribution in only the Central US, North Europe, and Southeast Asia regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
## Azure Cosmos DB free tier account limits
In addition to the above, the [Per-account limits](#per-account-limits) also app
## Next steps
-Read more about Cosmos DB's core concepts [global distribution](distribute-data-globally.md) and [partitioning](partitioning-overview.md) and [provisioned throughput](request-units.md).
+Read more about Azure Cosmos DB's core concepts: [global distribution](distribute-data-globally.md), [partitioning](partitioning-overview.md), and [provisioned throughput](request-units.md).
Get started with Azure Cosmos DB with one of our quickstarts:
-* [Get started with Azure Cosmos DB SQL API](create-sql-api-dotnet.md)
-* [Get started with Azure Cosmos DB API for MongoDB](mongodb/create-mongodb-nodejs.md)
-* [Get started with Azure Cosmos DB Cassandra API](cassandr)
-* [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md)
-* [Get started with Azure Cosmos DB Table API](table/create-table-dotnet.md)
+* [Get started with Azure Cosmos DB for NoSQL](nosql/quickstart-dotnet.md)
+* [Get started with Azure Cosmos DB for MongoDB](mongodb/create-mongodb-nodejs.md)
+* [Get started with Azure Cosmos DB for Cassandra](cassandr)
+* [Get started with Azure Cosmos DB for Gremlin](gremlin/quickstart-dotnet.md)
+* [Get started with Azure Cosmos DB for Table](table/quickstart-dotnet.md)
* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Configure Custom Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-custom-partitioning.md
Last updated 09/29/2022 -+ # Configure custom partitioning to partition analytical store data (Preview)- Custom partitioning enables you to partition analytical store data, on fields that are commonly used as filters in analytical queries, resulting in improved query performance. To learn more about custom partitioning, see [what is custom partitioning](custom-partitioning-analytical-store.md) article.
cosmos-db Configure Periodic Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-periodic-backup-restore.md
Title: Configure Azure Cosmos DB account with periodic backup
description: This article describes how to configure Azure Cosmos DB accounts with periodic backup, including the backup interval and retention, and how to contact support to restore your data. + Last updated 12/09/2021 - # Configure Azure Cosmos DB account with periodic backup Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service, and those backups are globally replicated for resiliency against regional disasters. With Azure Cosmos DB, not only your data, but also the backups of your data are highly redundant and resilient to regional disasters. The following steps show how Azure Cosmos DB performs data backup:
-* Azure Cosmos DB automatically takes a full backup of your database every 4 hours and at any point of time, only the latest two backups are stored by default. If the default intervals aren't sufficient for your workloads, you can change the backup interval and the retention period from the Azure portal. You can change the backup configuration during or after the Azure Cosmos account is created. If the container or database is deleted, Azure Cosmos DB retains the existing snapshots of a given container or database for 30 days.
+* Azure Cosmos DB automatically takes a full backup of your database every 4 hours, and at any point in time, only the latest two backups are stored by default. If the default intervals aren't sufficient for your workloads, you can change the backup interval and the retention period from the Azure portal. You can change the backup configuration during or after the Azure Cosmos DB account is created. If the container or database is deleted, Azure Cosmos DB retains the existing snapshots of a given container or database for 30 days.
* Azure Cosmos DB stores these backups in Azure Blob storage whereas the actual data resides locally within Azure Cosmos DB. * To guarantee low latency, the snapshot of your backup is stored in Azure Blob storage in the same region as the current write region (or **one** of the write regions, in case you have a multi-region write configuration). For resiliency against regional disaster, each snapshot of the backup data in Azure Blob storage is again replicated to another region through geo-redundant storage (GRS). The region to which the backup is replicated is based on your source region and the regional pair associated with the source region. To learn more, see the [list of geo-redundant pairs of Azure regions](../availability-zones/cross-region-replication-azure.md) article. You cannot access this backup directly. The Azure Cosmos DB team will restore your backup when you request it through a support request.
- The following image shows how an Azure Cosmos container with all the three primary physical partitions in West US is backed up in a remote Azure Blob Storage account in West US and then replicated to East US:
+ The following image shows how an Azure Cosmos DB container with all the three primary physical partitions in West US is backed up in a remote Azure Blob Storage account in West US and then replicated to East US:
- :::image type="content" source="./media/configure-periodic-backup-restore/automatic-backup.png" alt-text="Periodic full backups of all Cosmos DB entities in GRS Azure Storage." lightbox="./media/configure-periodic-backup-restore/automatic-backup.png" border="false":::
+ :::image type="content" source="./media/configure-periodic-backup-restore/automatic-backup.png" alt-text="Periodic full backups of all Azure Cosmos DB entities in GRS Azure Storage." lightbox="./media/configure-periodic-backup-restore/automatic-backup.png" border="false":::
* The backups are taken without affecting the performance or availability of your application. Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database.
You can configure storage redundancy for periodic backup mode at the time of acc
## <a id="configure-backup-interval-retention"></a>Modify the backup interval and retention period
-Azure Cosmos DB automatically takes a full backup of your data for every 4 hours and at any point of time, the latest two backups are stored. This configuration is the default option and it's offered without any extra cost. You can change the default backup interval and retention period during the Azure Cosmos account creation or after the account is created. The backup configuration is set at the Azure Cosmos account level and you need to configure it on each account. After you configure the backup options for an account, it's applied to all the containers within that account. You can modify these settings using the Azure portal as described below, or via [PowerShell](configure-periodic-backup-restore.md#modify-backup-options-using-azure-powershell) or the [Azure CLI](configure-periodic-backup-restore.md#modify-backup-options-using-azure-cli).
+Azure Cosmos DB automatically takes a full backup of your data every 4 hours, and at any point in time, the latest two backups are stored. This configuration is the default option and it's offered without any extra cost. You can change the default backup interval and retention period during Azure Cosmos DB account creation or after the account is created. The backup configuration is set at the Azure Cosmos DB account level and you need to configure it on each account. After you configure the backup options for an account, they're applied to all the containers within that account. You can modify these settings using the Azure portal as described below, or via [PowerShell](configure-periodic-backup-restore.md#modify-backup-options-using-azure-powershell) or the [Azure CLI](configure-periodic-backup-restore.md#modify-backup-options-using-azure-cli).
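
For instance, a hedged Azure CLI sketch that updates the backup interval (in minutes) and retention (in hours) on an existing account; the account and resource group names are placeholders:

```azurecli
# Set an 8-hour backup interval and 16-hour retention on an existing account
az cosmosdb update \
    --name myaccount \
    --resource-group myresourcegroup \
    --backup-interval 480 \
    --backup-retention 16
```
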
If you have accidentally deleted or corrupted your data, **before you create a support request to restore the data, make sure to increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way, the Azure Cosmos DB team has enough time to restore your account. ### Modify backup options using Azure portal - Existing account
-Use the following steps to change the default backup options for an existing Azure Cosmos account:
+Use the following steps to change the default backup options for an existing Azure Cosmos DB account:
1. Sign into the [Azure portal.](https://portal.azure.com/)
-1. Navigate to your Azure Cosmos account and open the **Backup & Restore** pane. Update the backup interval and the backup retention period as required.
+1. Navigate to your Azure Cosmos DB account and open the **Backup & Restore** pane. Update the backup interval and the backup retention period as required.
* **Backup Interval** - It's the interval at which Azure Cosmos DB attempts to take a backup of your data. Backup takes a non-zero amount of time and in some cases it could potentially fail due to downstream dependencies. Azure Cosmos DB tries its best to take a backup at the configured interval; however, it doesn't guarantee that the backup completes within that time interval. You can configure this value in hours or minutes. The backup interval cannot be less than 1 hour or greater than 24 hours. When you change this interval, the new interval takes effect starting from the time when the last backup was taken.
Use the following steps to change the default backup options for an existing Azu
* **Backup storage redundancy** - Choose the required storage redundancy option; see the [Backup storage redundancy](#backup-storage-redundancy) section for available options. By default, your existing periodic backup mode accounts have geo-redundant storage if the region where the account is being provisioned supports it. Otherwise, the account falls back to the highest redundancy option available. You can choose other storage, such as locally redundant storage, to ensure the backup is not replicated to another region. The changes made to an existing account are applied only to future backups. After the backup storage redundancy of an existing account is updated, it may take up to twice the backup interval time for the changes to take effect, and **you will lose access to restore the older backups immediately.** > [!NOTE]
- > You must have the Azure [Cosmos DB Operator role](../role-based-access-control/built-in-roles.md#cosmos-db-operator) role assigned at the subscription level to configure backup storage redundancy.
+ > You must have the [Azure Cosmos DB Operator](../role-based-access-control/built-in-roles.md#cosmos-db-operator) role assigned at the subscription level to configure backup storage redundancy.
- :::image type="content" source="./media/configure-periodic-backup-restore/configure-backup-options-existing-accounts.png" alt-text="Configure backup interval, retention, and storage redundancy for an existing Azure Cosmos account." border="true":::
+ :::image type="content" source="./media/configure-periodic-backup-restore/configure-backup-options-existing-accounts.png" alt-text="Configure backup interval, retention, and storage redundancy for an existing Azure Cosmos DB account." border="true":::
### Modify backup options using Azure portal - New account When provisioning a new account, from the **Backup Policy** tab, select **Periodic*** backup policy. The periodic policy allows you to configure the backup interval, backup retention, and backup storage redundancy. For example, you can choose **locally redundant backup storage** or **Zone redundant backup storage** options to prevent backup data replication outside your region. ### Modify backup options using Azure PowerShell
You should have the following details before requesting a restore:
* If the entire Azure Cosmos DB account is deleted, you need to provide the name of the deleted account. If you create another account with the same name as the deleted account, share that with the support team because it helps to determine the right account to choose. It's recommended to file different support tickets for each deleted account because it minimizes the confusion for the state of restore.
-* If one or more databases are deleted, you should provide the Azure Cosmos account, and the Azure Cosmos database names and specify if a new database with the same name exists.
+* If one or more databases are deleted, you should provide the Azure Cosmos DB account name and the database names, and specify whether a new database with the same name exists.
-* If one or more containers are deleted, you should provide the Azure Cosmos account name, database names, and the container names. And specify if a container with the same name exists.
+* If one or more containers are deleted, you should provide the Azure Cosmos DB account name, database names, and container names, and specify whether a container with the same name exists.
* If you have accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. **Before you create a support request to restore the data, make sure to [increase the backup retention](#configure-backup-interval-retention) for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way the Azure Cosmos DB support team will have enough time to restore your account.
-In addition to Azure Cosmos account name, database names, container names, you should specify the point in time to which the data can be restored to. It is important to be as precise as possible to help us determine the best available backups at that time. **It is also important to specify the time in UTC.**
+In addition to the Azure Cosmos DB account name, database names, and container names, you should specify the point in time to which the data should be restored. It is important to be as precise as possible to help us determine the best available backups at that time. **It is also important to specify the time in UTC.**
The following screenshot illustrates how to create a support request for a container(collection/graph/table) to restore data by using Azure portal. Provide other details such as type of data, purpose of the restore, time when the data was deleted to help us prioritize the request.
The following screenshot illustrates how to create a support request for a conta
You may accidentally delete or modify your data in one of the following scenarios:
-* Delete the entire Azure Cosmos account.
+* Delete the entire Azure Cosmos DB account.
-* Delete one or more Azure Cosmos databases.
+* Delete one or more Azure Cosmos DB databases.
-* Delete one or more Azure Cosmos containers.
+* Delete one or more Azure Cosmos DB containers.
-* Delete or modify the Azure Cosmos items (for example, documents) within a container. This specific case is typically referred to as data corruption.
+* Delete or modify the Azure Cosmos DB items (for example, documents) within a container. This specific case is typically referred to as data corruption.
* A shared offer database or containers within a shared offer database are deleted or corrupted.
-Azure Cosmos DB can restore data in all the above scenarios. When restoring, a new Azure Cosmos account is created to hold the restored data. The name of the new account, if it's not specified, will have the format `<Azure_Cosmos_account_original_name>-restored1`. The last digit is incremented when multiple restores are attempted. You can't restore data to a pre-created Azure Cosmos account.
+Azure Cosmos DB can restore data in all the above scenarios. When restoring, a new Azure Cosmos DB account is created to hold the restored data. The name of the new account, if it's not specified, will have the format `<Azure_Cosmos_account_original_name>-restored1`. The last digit is incremented when multiple restores are attempted. You can't restore data to a pre-created Azure Cosmos DB account.
-When you accidentally delete an Azure Cosmos account, we can restore the data into a new account with the same name, if the account name is not in use. So, we recommend that you don't re-create the account after deleting it. Because it not only prevents the restored data to use the same name, but also makes discovering the right account to restore from difficult.
+When you accidentally delete an Azure Cosmos DB account, we can restore the data into a new account with the same name, if the account name is not in use. So, we recommend that you don't re-create the account after deleting it, because re-creating it not only prevents the restored data from using the same name, but also makes it difficult to discover the right account to restore from.
-When you accidentally delete an Azure Cosmos database, we can restore the whole database or a subset of the containers within that database. It is also possible to select specific containers across databases and restore them to a new Azure Cosmos account.
+When you accidentally delete an Azure Cosmos DB database, we can restore the whole database or a subset of the containers within that database. It is also possible to select specific containers across databases and restore them to a new Azure Cosmos DB account.
When you accidentally delete or modify one or more items within a container (the data corruption case), you need to specify the time to restore to. Time is important if there is data corruption. Because the container is live, the backup is still running, so if you wait beyond the retention period (the default is eight hours) the backups would be overwritten. In order to prevent the backup from being overwritten, increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of the data corruption.
Get-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabas
## Options to manage your own backups
-With Azure Cosmos DB SQL API accounts, you can also maintain your own backups by using one of the following approaches:
+With Azure Cosmos DB API for NoSQL accounts, you can also maintain your own backups by using one of the following approaches:
* Use [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) to move data periodically to a storage of your choice.
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
Title: Configure and use Azure Synapse Link for Azure Cosmos DB
-description: Learn how to enable Synapse link for Azure Cosmos DB accounts, create a container with analytical store enabled, connect the Azure Cosmos database to Synapse workspace, and run queries.
+description: Learn how to enable Synapse link for Azure Cosmos DB accounts, create a container with analytical store enabled, connect the Azure Cosmos DB database to Synapse workspace, and run queries.
Last updated 09/26/2022 -+ # Configure and use Azure Synapse Link for Azure Cosmos DB [Azure Synapse Link for Azure Cosmos DB](synapse-link.md) is a cloud-native hybrid transactional and analytical processing (HTAP) capability that enables you to run near real-time analytics over operational data in Azure Cosmos DB. Synapse Link creates a tight seamless integration between Azure Cosmos DB and Azure Synapse Analytics. > [!NOTE]
-> Synapse Link for Gremlin API is now in preview. You can enable Synapse Link in your new or existing Graphs using Azure CLI.
+> Synapse Link for API for Gremlin is now in preview. You can enable Synapse Link in your new or existing Graphs using Azure CLI.
-Azure Synapse Link is available for Azure Cosmos DB SQL API or for Azure Cosmos DB API for Mongo DB accounts. Use the following steps to run analytical queries with the Azure Synapse Link for Azure Cosmos DB:
+Azure Synapse Link is available for Azure Cosmos DB for NoSQL or MongoDB accounts. Use the following steps to run analytical queries with the Azure Synapse Link for Azure Cosmos DB:
* [Enable Azure Synapse Link for your Azure Cosmos DB accounts](#enable-synapse-link) * [Enable Azure Synapse Link for your containers](#update-analytical-ttl)
-* [Connect your Azure Cosmos database to an Azure Synapse workspace](#connect-to-cosmos-database)
+* [Connect your Azure Cosmos DB database to an Azure Synapse workspace](#connect-to-cosmos-database)
* [Query analytical store using Azure Synapse Analytics](#query) * [Improve performance with Best Practices](#best) * [Use Azure Synapse serverless SQL pool to analyze and visualize data in Power BI](#analyze-with-powerbi)
The first step to use Synapse Link is to enable it for your Azure Cosmos DB data
> If you want to use customer-managed keys with Azure Synapse Link, you must configure your account's managed identity in your Azure Key Vault access policy before enabling Synapse Link on your account. To learn more, see how to [Configure customer-managed keys using Azure Cosmos DB accounts' managed identities](how-to-setup-cmk.md#using-managed-identity) article. > [!NOTE]
-> If you want to use Full Fidelity Schema for SQL (CORE) API accounts, you can't use the Azure portal to enable Synapse Link. This option can't be changed after Synapse Link is enabled in your account and to set it you must use Azure CLI or PowerShell. For more information, check [analytical store schema representation documentation](analytical-store-introduction.md#schema-representation).
+> If you want to use Full Fidelity Schema for API for NoSQL accounts, you can't use the Azure portal to enable Synapse Link. This option can't be changed after Synapse Link is enabled in your account and to set it you must use Azure CLI or PowerShell. For more information, check [analytical store schema representation documentation](analytical-store-introduction.md#schema-representation).
### Azure portal
The first step to use Synapse Link is to enable it for your Azure Cosmos DB data
### Command-Line Tools
-Enable Synapse Link in your Cosmos DB SQL API or MongoDB API account using Azure CLI or PowerShell.
+Enable Synapse Link in your Azure Cosmos DB API for NoSQL or MongoDB account using Azure CLI or PowerShell.
#### Azure CLI
-Use `--enable-analytical-storage true` for both **create** or **update** operations. You also need to choose the representation schema type. For SQL API accounts you can use `--analytical-storage-schema-type` with the values `FullFidelity` or `WellDefined`. For MongoDB API accounts, always use `--analytical-storage-schema-type FullFidelity`.
+Use `--enable-analytical-storage true` for both **create** or **update** operations. You also need to choose the representation schema type. For API for NoSQL accounts you can use `--analytical-storage-schema-type` with the values `FullFidelity` or `WellDefined`. For API for MongoDB accounts, always use `--analytical-storage-schema-type FullFidelity`.
* [Create a new Azure Cosmos DB account with Synapse Link enabled](/cli/azure/cosmosdb#az-cosmosdb-create-optional-parameters) * [Update an existing Azure Cosmos DB account to enable Synapse Link](/cli/azure/cosmosdb#az-cosmosdb-update-optional-parameters)
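
Putting the flags above together, a minimal sketch for an existing API for NoSQL account; the account and resource group names are placeholders:

```azurecli
# Enable Synapse Link (analytical store) on an existing account
# and choose the analytical store schema representation type
az cosmosdb update \
    --name myaccount \
    --resource-group myresourcegroup \
    --enable-analytical-storage true \
    --analytical-storage-schema-type WellDefined
```
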
For existing Gremlin API accounts, replace `create` with `update`.
#### PowerShell
-Use `EnableAnalyticalStorage true` for both **create** or **update** operations. You also need to choose the representation schema type. For SQL API accounts you can use `--analytical-storage-schema-type` with the values `FullFidelity` or `WellDefined`. For MongoDB API accounts, always use `-AnalyticalStorageSchemaType FullFidelity`.
+Use `EnableAnalyticalStorage true` for both **create** or **update** operations. You also need to choose the representation schema type. For API for NoSQL accounts you can use `--analytical-storage-schema-type` with the values `FullFidelity` or `WellDefined`. For API for MongoDB accounts, always use `-AnalyticalStorageSchemaType FullFidelity`.
* [Create a new Azure Cosmos DB account with Synapse Link enabled](/powershell/module/az.cosmosdb/new-azcosmosdbaccount#description) * [Update an existing Azure Cosmos DB account to enable Synapse Link](/powershell/module/az.cosmosdb/update-azcosmosdbaccount)
Serverless SQL pool allows you to query and analyze data in your Azure Cosmos DB
## <a id="analyze-with-powerbi"></a>Use serverless SQL pool to analyze and visualize data in Power BI
-You can use integrated BI experience in Azure Cosmos DB portal, to build BI dashboards using Synapse Link with just a few clicks. To learn more, see [how to build BI dashboards using Synapse Link](integrated-power-bi-synapse-link.md). This integrated experience will create simple T-SQL views in Synapse serverless SQL pools, for your Cosmos DB containers. You can build BI dashboards over these views, which will query your Azure Cosmos DB containers in real-time, using [Direct Query](/power-bi/connect-data/service-dataset-modes-understand#directquery-mode), reflecting latest changes to your data. There is no performance or cost impact to your transactional workloads, and no complexity of managing ETL pipelines.
+You can use the integrated BI experience in the Azure Cosmos DB portal to build BI dashboards using Synapse Link with just a few clicks. To learn more, see [how to build BI dashboards using Synapse Link](integrated-power-bi-synapse-link.md). This integrated experience creates simple T-SQL views in Synapse serverless SQL pools for your Azure Cosmos DB containers. You can build BI dashboards over these views, which query your Azure Cosmos DB containers in real time using [Direct Query](/power-bi/connect-data/service-dataset-modes-understand#directquery-mode), reflecting the latest changes to your data. There is no performance or cost impact to your transactional workloads, and no complexity of managing ETL pipelines.
If you want to use advanced T-SQL views with joins across your containers or build BI dashboards in import](/power-bi/connect-dat) article.
Use [this](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/b
## <a id="cosmosdb-synapse-link-samples"></a> Getting started with Azure Synapse Link - Samples
-You can find samples to get started with Azure Synapse Link on [GitHub](https://aka.ms/cosmosdb-synapselink-samples). These showcase end-to-end solutions with IoT and retail scenarios. You can also find the samples corresponding to Azure Cosmos DB API for MongoDB in the same repo under the [MongoDB](https://github.com/Azure-Samples/Synapse/tree/main/Notebooks/PySpark/Synapse%20Link%20for%20Cosmos%20DB%20samples) folder.
+You can find samples to get started with Azure Synapse Link on [GitHub](https://aka.ms/cosmosdb-synapselink-samples). These showcase end-to-end solutions with IoT and retail scenarios. You can also find the samples corresponding to Azure Cosmos DB for MongoDB in the same repo under the [MongoDB](https://github.com/Azure-Samples/Synapse/tree/main/Notebooks/PySpark/Synapse%20Link%20for%20Cosmos%20DB%20samples) folder.
## Next steps
cosmos-db Conflict Resolution Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/conflict-resolution-policies.md
Title: Conflict resolution types and resolution policies in Azure Cosmos DB
description: This article describes the conflict categories and conflict resolution policies in Azure Cosmos DB. -++ Last updated 04/20/2020
# Conflict types and resolution policies when using multiple write regions Conflicts and conflict resolution policies are applicable if your Azure Cosmos DB account is configured with multiple write regions.
-For Azure Cosmos accounts configured with multiple write regions, update conflicts can occur when writers concurrently update the same item in multiple regions. Update conflicts can be of the following three types:
+For Azure Cosmos DB accounts configured with multiple write regions, update conflicts can occur when writers concurrently update the same item in multiple regions. Update conflicts can be of the following three types:
* **Insert conflicts**: These conflicts can occur when an application simultaneously inserts two or more items with the same unique index in two or more regions. For example, this conflict might occur with an ID property.
For Azure Cosmos accounts configured with multiple write regions, update conflic
## Conflict resolution policies
-Azure Cosmos DB offers a flexible policy-driven mechanism to resolve write conflicts. You can select from two conflict resolution policies on an Azure Cosmos container:
+Azure Cosmos DB offers a flexible policy-driven mechanism to resolve write conflicts. You can select from two conflict resolution policies on an Azure Cosmos DB container:
-* **Last Write Wins (LWW)**: This resolution policy, by default, uses a system-defined timestamp property. It's based on the time-synchronization clock protocol. If you use the SQL API, you can specify any other custom numerical property (e.g., your own notion of a timestamp) to be used for conflict resolution. A custom numerical property is also referred to as the *conflict resolution path*.
+* **Last Write Wins (LWW)**: This resolution policy, by default, uses a system-defined timestamp property. It's based on the time-synchronization clock protocol. If you use the API for NoSQL, you can specify any other custom numerical property (e.g., your own notion of a timestamp) to be used for conflict resolution. A custom numerical property is also referred to as the *conflict resolution path*.
If two or more items conflict on insert or replace operations, the item with the highest value for the conflict resolution path becomes the winner. The system determines the winner if multiple items have the same numeric value for the conflict resolution path. All regions will converge to a single winner and end up with the same version of the committed item. When delete conflicts are involved, the deleted version always wins over either insert or replace conflicts. This outcome occurs no matter what the value of the conflict resolution path is. > [!NOTE]
- > Last Write Wins is the default conflict resolution policy and uses timestamp `_ts` for the following APIs: SQL, MongoDB, Cassandra, Gremlin and Table. Custom numerical property is available only for SQL API.
+ > Last Write Wins is the default conflict resolution policy and uses the timestamp `_ts` for the following APIs: NoSQL, MongoDB, Cassandra, Gremlin, and Table. Custom numerical property is available only for API for NoSQL.
To learn more, see [examples that use LWW conflict resolution policies](how-to-manage-conflicts.md).
-* **Custom**: This resolution policy is designed for application-defined semantics for reconciliation of conflicts. When you set this policy on your Azure Cosmos container, you also need to register a *merge stored procedure*. This procedure is automatically invoked when conflicts are detected under a database transaction on the server. The system provides exactly once guarantee for the execution of a merge procedure as part of the commitment protocol.
+* **Custom**: This resolution policy is designed for application-defined semantics for reconciliation of conflicts. When you set this policy on your Azure Cosmos DB container, you also need to register a *merge stored procedure*. This procedure is automatically invoked when conflicts are detected under a database transaction on the server. The system provides exactly once guarantee for the execution of a merge procedure as part of the commitment protocol.
If you configure your container with the custom resolution option, and you fail to register a merge procedure on the container or the merge procedure throws an exception at runtime, the conflicts are written to the *conflicts feed*. Your application then needs to manually resolve the conflicts in the conflicts feed. To learn more, see [examples of how to use the custom resolution policy and how to use the conflicts feed](how-to-manage-conflicts.md). > [!NOTE]
- > Custom conflict resolution policy is available only for SQL API accounts and can be set only at creation time. It is not possible to set a custom resolution policy on an existing container.
+ > Custom conflict resolution policy is available only for API for NoSQL accounts and can be set only at creation time. It is not possible to set a custom resolution policy on an existing container.
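
These policies only come into play on accounts that have multiple write regions. As a hedged sketch, multi-region writes can be turned on for an existing account with the Azure CLI (account and resource group names are placeholders):

```azurecli
# Enable multi-region writes on an existing account
az cosmosdb update \
    --name myaccount \
    --resource-group myresourcegroup \
    --enable-multiple-write-locations true
```
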
## Next steps
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md
Last updated 09/26/2022-+ # Consistency levels in Azure Cosmos DB Distributed databases that rely on replication for high availability, low latency, or both, must make a fundamental tradeoff between the read consistency, availability, latency, and throughput as defined by the [PACELC theorem](https://en.wikipedia.org/wiki/PACELC_theorem). The linearizability of the strong consistency model is the gold standard of data programmability. But it adds a steep price from higher write latencies due to data having to replicate and commit across large distances. Strong consistency may also suffer from reduced availability (during failures) because data cannot replicate and commit in every region. Eventual consistency offers higher availability and better performance, but it's more difficult to program applications because data may not be completely consistent across all regions.
Each level provides availability and performance tradeoffs. The following image
## Consistency levels and Azure Cosmos DB APIs
-Azure Cosmos DB provides native support for wire protocol-compatible APIs for popular databases. These include MongoDB, Apache Cassandra, Gremlin, and Azure Table storage. When using Gremlin API and Table API, the default consistency level configured on the Azure Cosmos account is used. For details on consistency level mapping between Cassandra API or the API for MongoDB and Azure Cosmos DB's consistency levels see, [Cassandra API consistency mapping](cassandr).
+Azure Cosmos DB provides native support for wire protocol-compatible APIs for popular databases. These include MongoDB, Apache Cassandra, Apache Gremlin, and Azure Table Storage. When using API for Gremlin or Table, the default consistency level configured on the Azure Cosmos DB account is used. For details on consistency level mapping between Apache Cassandra and Azure Cosmos DB, see [API for Cassandra consistency mapping](cassandr).
## Scope of the read consistency
Read consistency applies to a single read operation scoped within a logical part
## Configure the default consistency level
-You can configure the default consistency level on your Azure Cosmos account at any time. The default consistency level configured on your account applies to all Azure Cosmos databases and containers under that account. All reads and queries issued against a container or a database use the specified consistency level by default. To learn more, see how to [configure the default consistency level](how-to-manage-consistency.md#configure-the-default-consistency-level). You can also override the default consistency level for a specific request, to learn more, see how to [Override the default consistency level](how-to-manage-consistency.md?#override-the-default-consistency-level) article.
+You can configure the default consistency level on your Azure Cosmos DB account at any time. The default consistency level configured on your account applies to all Azure Cosmos DB databases and containers under that account. All reads and queries issued against a container or a database use the specified consistency level by default. To learn more, see [Configure the default consistency level](how-to-manage-consistency.md#configure-the-default-consistency-level). You can also override the default consistency level for a specific request. To learn more, see [Override the default consistency level](how-to-manage-consistency.md?#override-the-default-consistency-level).
> [!TIP] > Overriding the default consistency level only applies to reads within the SDK client. An account configured for strong consistency by default will still write and replicate data synchronously to every region in the account. When the SDK client instance or request overrides this with Session or weaker consistency, reads will be performed using a single replica. For more information, see [Consistency levels and throughput](consistency-levels.md#consistency-levels-and-throughput).
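As a concrete illustration of the tip above, here's a minimal sketch of relaxing read consistency for a single SDK client instance, assuming the `azure-cosmos` Python SDK, where `CosmosClient` accepts a `consistency_level` keyword; the endpoint, key, database, container, and item values are placeholders.

```python
# Minimal sketch: relax read consistency for one SDK client instance.
# Assumes the azure-cosmos Python SDK, where CosmosClient accepts a
# consistency_level keyword; endpoint, key, and names are placeholders.
from azure.cosmos import CosmosClient

ACCOUNT_ENDPOINT = "https://<your-account>.documents.azure.com:443/"
ACCOUNT_KEY = "<your-key>"

# Reads issued through this client use Session consistency even if the
# account default is stronger; writes still replicate according to the
# account's default consistency level.
client = CosmosClient(
    ACCOUNT_ENDPOINT,
    credential=ACCOUNT_KEY,
    consistency_level="Session",
)

container = client.get_database_client("SalesDb").get_container_client("Orders")
item = container.read_item(item="order-1", partition_key="customer-42")
```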
In practice, you may often get stronger consistency guarantees. Consistency guar
If there are no write operations on the database, a read operation with **eventual**, **session**, or **consistent prefix** consistency levels is likely to yield the same results as a read operation with strong consistency level.
-If your Azure Cosmos account is configured with a consistency level other than the strong consistency, you can find out the probability that your clients may get strong and consistent reads for your workloads by looking at the *Probabilistically Bounded Staleness* (PBS) metric. This metric is exposed in the Azure portal, to learn more, see [Monitor Probabilistically Bounded Staleness (PBS) metric](how-to-manage-consistency.md#monitor-probabilistically-bounded-staleness-pbs-metric).
+If your Azure Cosmos DB account is configured with a consistency level other than strong consistency, you can find out the probability that your clients may get strong and consistent reads for your workloads by looking at the *Probabilistically Bounded Staleness* (PBS) metric. This metric is exposed in the Azure portal. To learn more, see [Monitor Probabilistically Bounded Staleness (PBS) metric](how-to-manage-consistency.md#monitor-probabilistically-bounded-staleness-pbs-metric).
-Probabilistic bounded staleness shows how eventual is your eventual consistency. This metric provides an insight into how often you can get a stronger consistency than the consistency level that you have currently configured on your Azure Cosmos account. In other words, you can see the probability (measured in milliseconds) of getting strongly consistent reads for a combination of write and read regions.
+Probabilistically bounded staleness shows how eventual your eventual consistency is. This metric provides insight into how often you can get stronger consistency than the consistency level that you have currently configured on your Azure Cosmos DB account. In other words, you can see the probability (measured in milliseconds) of getting strongly consistent reads for a combination of write and read regions.
## Consistency levels and latency The read latency for all consistency levels is always guaranteed to be less than 10 milliseconds at the 99th percentile. The average read latency, at the 50th percentile, is typically 4 milliseconds or less.
-The write latency for all consistency levels is always guaranteed to be less than 10 milliseconds at the 99th percentile. The average write latency, at the 50th percentile, is usually 5 milliseconds or less. Azure Cosmos accounts that span several regions and are configured with strong consistency are an exception to this guarantee.
+The write latency for all consistency levels is always guaranteed to be less than 10 milliseconds at the 99th percentile. The average write latency, at the 50th percentile, is usually 5 milliseconds or less. Azure Cosmos DB accounts that span several regions and are configured with strong consistency are an exception to this guarantee.
### Write latency and Strong consistency
-For Azure Cosmos accounts configured with strong consistency with more than one region, the write latency is equal to two times round-trip time (RTT) between any of the two farthest regions, plus 10 milliseconds at the 99th percentile. High network RTT between the regions will translate to higher latency for Cosmos DB requests since strong consistency completes an operation only after ensuring that it has been committed to all regions within an account.
+For Azure Cosmos DB accounts configured with strong consistency and more than one region, the write latency is equal to two times the round-trip time (RTT) between the two farthest regions, plus 10 milliseconds at the 99th percentile. High network RTT between the regions translates to higher latency for Azure Cosmos DB requests, because strong consistency completes an operation only after ensuring that it has been committed to all regions within an account.
-The exact RTT latency is a function of speed-of-light distance and the Azure networking topology. Azure networking doesn't provide any latency SLAs for the RTT between any two Azure regions, however it does publish [Azure network round-trip latency statistics](../networking/azure-network-latency.md). For your Azure Cosmos account, replication latencies are displayed in the Azure portal. You can use the Azure portal (go to the Metrics blade, select Consistency tab) to monitor the replication latencies between various regions that are associated with your Azure Cosmos account.
+The exact RTT latency is a function of speed-of-light distance and the Azure networking topology. Azure networking doesn't provide any latency SLAs for the RTT between any two Azure regions; however, it does publish [Azure network round-trip latency statistics](../networking/azure-network-latency.md). For your Azure Cosmos DB account, replication latencies are displayed in the Azure portal. You can use the Azure portal (go to the Metrics blade and select the Consistency tab) to monitor the replication latencies between the various regions that are associated with your Azure Cosmos DB account.
> [!IMPORTANT] > Strong consistency for accounts with regions spanning more than 5000 miles (8000 kilometers) is blocked by default due to high write latency. To enable this capability please contact support.
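To make the rule of thumb above concrete, here's a small sketch of the estimate; the 70 ms RTT is a hypothetical example value, not a measured Azure figure.

```python
# Minimal sketch of the write-latency rule of thumb described above for
# multi-region accounts with strong consistency: roughly 2 x the RTT between
# the two farthest regions, plus 10 ms at the 99th percentile.
def estimated_write_latency_p99_ms(farthest_region_rtt_ms: float) -> float:
    return 2 * farthest_region_rtt_ms + 10

# Example: if the farthest pair of regions has an RTT of about 70 ms,
# the estimated P99 write latency is 2 * 70 + 10 = 150 ms.
print(estimated_write_latency_p99_ms(70))  # 150.0
```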
For a single region account, the minimum value of *K* and *T* is 10 write operat
## Strong consistency and multiple write regions
-Cosmos accounts configured with multiple write regions cannot be configured for strong consistency as it is not possible for a distributed system to provide an RPO of zero and an RTO of zero. Additionally, there are no write latency benefits on using strong consistency with multiple write regions because a write to any region must be replicated and committed to all configured regions within the account. This results in the same write latency as a single write region account.
+Azure Cosmos DB accounts configured with multiple write regions cannot be configured for strong consistency, because it isn't possible for a distributed system to provide both an RPO of zero and an RTO of zero. Additionally, there's no write latency benefit to using strong consistency with multiple write regions, because a write to any region must be replicated and committed to all configured regions within the account. This results in the same write latency as for a single write region account.
## Additional reading
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
Last updated 08/24/2022 -+ # Continuous backup with point-in-time restore in Azure Cosmos DB Azure Cosmos DB's point-in-time restore feature helps in multiple scenarios including:
The time window available for restore (also known as retention period) is the lo
The selected option depends on the chosen tier of continuous backup. The point in time for restore can be any timestamp within the retention period no further back than the point when the resource was created. In strong consistency mode, backups taken in the write region are more up to date when compared to the read regions. Read regions can lag behind due to network or other transient issues. While doing restore, you can [get the latest restorable timestamp](get-latest-restore-timestamp.md) for a given resource in a specific region. Getting the latest timestamp ensures that the resource has taken backups up to the given timestamp, and can restore in that region.
-Currently, you can restore an Azure Cosmos DB account (SQL API or API for MongoDB) contents at a specific point in time to another account. You can perform this restore operation via the [Azure portal](restore-account-continuous-backup.md#restore-account-portal), the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) (Azure CLI), [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell), or [Azure Resource Manager templates](restore-account-continuous-backup.md#restore-arm-template). Table API or Gremlin APIs are in preview and supported through [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) (Azure CLI) and [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell).
+Currently, you can restore the contents of an Azure Cosmos DB account (API for NoSQL or MongoDB) at a specific point in time to another account. You can perform this restore operation via the [Azure portal](restore-account-continuous-backup.md#restore-account-portal), the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli), [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell), or [Azure Resource Manager templates](restore-account-continuous-backup.md#restore-arm-template). The APIs for Table and Gremlin are in preview and supported through the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) and [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell).
## Backup storage redundancy
Azure Cosmos DB allows you to isolate and restrict the restore permissions for c
Azure Cosmos DB accounts that have continuous 30-day backup enabled will incur an extra monthly charge to *store the backup*. Both the 30-day and 7-day tiers of continuous backup incur charges to *restore your data*. The restore cost is added every time the restore operation is initiated. If you configure an account with continuous backup but don't restore the data, only the backup storage cost is included in your bill.
-The following example is based on the price for an Azure Cosmos account deployed in West US. The pricing and calculation can vary depending on the region you're using, see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for latest pricing information.
+The following example is based on the price for an Azure Cosmos DB account deployed in West US. The pricing and calculation can vary depending on the region you're using. See the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the latest pricing information.
* All accounts enabled with the continuous backup policy with the 30-day tier incur a monthly charge for backup storage that is calculated as follows:
For example, if you have 1 TB of data in two regions then:
* Restore cost is calculated as (1000 \* 0.15) = $150 per restore > [!TIP]
-> For more information about measuring the current data usage of your Azure Cosmos DB account, see [Explore Azure Monitor Cosmos DB insights](cosmosdb-insights-overview.md#view-utilization-and-performance-metrics-for-azure-cosmos-db). Continuous 7-day tier does not incur charges for backup of the data.
+> For more information about measuring the current data usage of your Azure Cosmos DB account, see [Explore Azure Monitor Azure Cosmos DB insights](insights-overview.md#view-utilization-and-performance-metrics-for-azure-cosmos-db). Continuous 7-day tier does not incur charges for backup of the data.
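The restore-cost arithmetic in the example above can be expressed as a short sketch; the $0.15 per GB figure is the illustrative rate from this article's example, so check the pricing page for current, region-specific rates.

```python
# Minimal sketch of the restore-cost arithmetic shown in the example above.
# The $0.15/GB rate is the illustrative figure from this article; actual
# rates vary by region, so check the Azure Cosmos DB pricing page.
def restore_cost_usd(data_gb: float, rate_per_gb: float = 0.15) -> float:
    return data_gb * rate_per_gb

# 1 TB (1,000 GB) of restored data at $0.15/GB:
print(restore_cost_usd(1000))  # 150.0
```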
## Continuous 30-day tier vs Continuous 7-day tier
See [How do customer-managed keys affect continuous backups?](./how-to-setup-cmk
Currently the point in time restore functionality has the following limitations:
-* Azure Cosmos DB APIs for SQL and MongoDB are supported for continuous backup. Cassandra API isn't supported now.
+* The Azure Cosmos DB APIs for NoSQL and MongoDB are supported for continuous backup. The API for Cassandra isn't currently supported.
-* Table API and Gremlin API are in preview and supported via PowerShell and Azure CLI.
+* The APIs for Table and Gremlin are in preview and supported via PowerShell and the Azure CLI.
* Multi-regions write accounts aren't supported.
Currently the point in time restore functionality has the following limitations:
* While a restore is in progress, don't modify or delete the Identity and Access Management (IAM) policies. These policies grant the permissions needed for the account to change any virtual network or firewall configuration.
-* Azure Cosmos DB API for SQL or MongoDB accounts that create unique index after the container is created aren't supported for continuous backup. Only containers that create unique index as a part of the initial container creation are supported. For MongoDB accounts, you create unique index using [extension commands](mongodb/custom-commands.md).
+* Azure Cosmos DB for NoSQL or MongoDB accounts that create a unique index after the container is created aren't supported for continuous backup. Only containers that create a unique index as part of the initial container creation are supported. For MongoDB accounts, you create a unique index by using [extension commands](mongodb/custom-commands.md).
-* The point-in-time restore functionality always restores to a new Azure Cosmos account. Restoring to an existing account is currently not supported. If you're interested in providing feedback about in-place restore, contact the Azure Cosmos DB team via your account representative.
+* The point-in-time restore functionality always restores to a new Azure Cosmos DB account. Restoring to an existing account is currently not supported. If you're interested in providing feedback about in-place restore, contact the Azure Cosmos DB team via your account representative.
* After restoring, it's possible that for certain collections the consistent index may be rebuilding. You can check the status of the rebuild operation via the [IndexTransformationProgress](how-to-manage-indexing-policy.md) property.
cosmos-db Continuous Backup Restore Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-permissions.md
Last updated 02/28/2022 -+ # Manage permissions to restore an Azure Cosmos DB account Azure Cosmos DB allows you to isolate and restrict the restore permissions for continuous backup account to a specific role or a principal. The owner of the account can trigger a restore and assign a role to other principals to perform the restore operation. These permissions can be applied at the subscription scope as shown in the following image:
The RestorableAction below represents a custom role. You have to explicitly crea
"assignableScopes": [ "/subscriptions/23587e98-b6ac-4328-a753-03bcd3c8e744" ],
- "description": "Can do a restore request for any Cosmos DB database account with continuous backup",
+ "description": "Can do a restore request for any Azure Cosmos DB database account with continuous backup",
"permissions": [ { "actions": [
cosmos-db Continuous Backup Restore Resource Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-resource-model.md
Title: Resource model for the Azure Cosmos DB point-in-time restore feature.
-description: This article explains the resource model for the Azure Cosmos DB point-in-time restore feature. It explains the parameters that support the continuous backup and resources that can be restored in Azure Cosmos DB API for SQL and MongoDB accounts.
+description: This article explains the resource model for the Azure Cosmos DB point-in-time restore feature. It explains the parameters that support the continuous backup and resources that can be restored in Azure Cosmos DB for NoSQL and MongoDB accounts.
+ Last updated 06/28/2022
# Resource model for the Azure Cosmos DB point-in-time restore feature
-This article explains the resource model for the Azure Cosmos DB point-in-time restore feature. It explains the parameters that support the continuous backup and resources that can be restored. This feature is supported in Azure Cosmos DB API for SQL and the Cosmos DB API for MongoDB. Currently, this feature is in preview for Azure Cosmos DB Gremlin API and Table API accounts.
+This article explains the resource model for the Azure Cosmos DB point-in-time restore feature. It explains the parameters that support the continuous backup and resources that can be restored. This feature is supported in Azure Cosmos DB for NoSQL and Azure Cosmos DB for MongoDB. Currently, this feature is in preview for API for Gremlin and API for Table accounts.
## Database account's resource model
The database account's resource model is updated with a few extra properties to
A new property in the account level backup policy named ``Type`` under the ``backuppolicy`` parameter enables continuous backup and point-in-time restore. This mode is referred to as **continuous backup**. You can set this mode when creating the account or while [migrating an account from periodic to continuous mode](migrate-continuous-backup.md). After continuous mode is enabled, all the containers and databases created within this account will have point-in-time restore and continuous backup enabled by default. The continuous backup tier can be set to ``Continuous7Days`` or ``Continuous30Days``. By default, if no tier is provided, ``Continuous30Days`` is applied on the account. > [!NOTE]
-> Currently the point-in-time restore feature is available for Azure Cosmos DB API for MongoDB and SQL API accounts. It is also available for Table API and Gremlin API in preview. After you create an account with continuous mode you can't switch it to a periodic mode. The ``Continuous7Days`` tier is in preview.
+> Currently the point-in-time restore feature is available for Azure Cosmos DB for MongoDB and API for NoSQL accounts. It is also available for API for Table and API for Gremlin in preview. After you create an account with continuous mode you can't switch it to a periodic mode. The ``Continuous7Days`` tier is in preview.
### CreateMode
Each resource contains information of a mutation event such as creation and dele
> * ``SystemOperation``: database modification event triggered by the system. This event isn't initiated by the user >
-To get a list of all database mutations, see [Restorable Sql Databases - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-databases/list) article.
+To get a list of all database mutations, see the [Restorable NoSQL Databases - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-databases/list) article.
### Restorable SQL container
Each resource contains information of a mutation event such as creation and dele
> * ``SystemOperation``: container modification event triggered by the system. This event isn't initiated by the user >
-To get a list of all container mutations under the same database, see [Restorable Sql Containers - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-containers/list) article.
+To get a list of all container mutations under the same database, see the [Restorable NoSQL Containers - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-containers/list) article.
### Restorable SQL resources
Each resource represents a single database and all the containers under that dat
| ``databaseName`` | The name of the SQL database. | ``collectionNames`` | The list of SQL containers under this database.|
-To get a list of SQL database and container combo that exist on the account at the given timestamp and location, see [Restorable Sql Resources - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-resources/list) article.
+To get a list of the SQL database and container combinations that exist on the account at the given timestamp and location, see the [Restorable NoSQL Resources - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-resources/list) article.
### Restorable MongoDB database
To get a list of all container mutations under the same database, see graph [Res
### Restorable Table resources
-Lists all the restorable Azure Cosmos DB Tables available for a specific database account at a given time and location. Note the Table API doesn't specify an explicit database.
+Lists all the restorable Azure Cosmos DB tables available for a specific database account at a given time and location. Note that the API for Table doesn't specify an explicit database.
| Property Name | Description | | | |
cosmos-db Convert Vcore To Request Unit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/convert-vcore-to-request-unit.md
-++ Last updated 08/26/2021 # Convert the number of vCores or vCPUs in your nonrelational database to Azure Cosmos DB RU/s This article explains how to estimate Azure Cosmos DB request units (RU/s) when you are considering data migration but all you know is the total vCore or vCPU count in your existing database replica set(s). When you migrate one or more replica sets to Azure Cosmos DB, each collection held in those replica sets will be stored as an Azure Cosmos DB collection consisting of a sharded cluster with a 4x replication factor. You can read more about our architecture in this [partitioning and scaling guide](partitioning-overview.md). Request units are how throughput capacity is provisioned on a collection; you can [read the request units guide](request-units.md) and the RU/s [provisioning guide](set-throughput.md) to learn more. When you migrate a collection, Azure Cosmos DB provisions enough shards to serve your provisioned request units and store your data. Therefore estimating RU/s for collections is an important step in scoping out the scale of your planned Azure Cosmos DB data estate prior to migration. Based on our experience with thousands of customers, we have found this formula helps us arrive at a rough starting-point RU/s estimate from vCores or vCPUs:
Provisioned RU/s = C*T/R
* *T*: Total vCores and/or vCPUs in your existing database **data-bearing** replica set(s). * *R*: Replication factor of your existing **data-bearing** replica set(s). * *C*: Recommended provisioned RU/s per vCore or vCPU. This value derives from the architecture of Azure Cosmos DB:
- * *C = 600 RU/s/vCore* for Azure Cosmos DB SQL API
- * *C = 1000 RU/s/vCore* for Azure Cosmos DB API for MongoDB v4.0
- * *C* estimates for Cassandra API, Gremlin API, or other APIs are not currently available
+ * *C* = 600 RU/s/vCore for Azure Cosmos DB for NoSQL
+ * *C* = 1000 RU/s/vCore for Azure Cosmos DB for MongoDB v4.0
+ * *C* estimates for API for Cassandra, Gremlin, or other APIs are not currently available
Values for *C* are provided above. ***T* must be determined by examining the number of vCores or vCPUs in each data-bearing replica set of your existing database, and summing to get the total**; if you cannot estimate *T* then consider following our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) instead of this guide. *T* should not include *vCores* or *vCPUs* associated with your existing database's routing server or configuration cluster, if it has those components. For *R*, we recommend plugging in the average replication factor of your database replica sets; if this information is not available then *R=3* is a good rule of thumb.
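The formula can also be expressed as a short sketch; the function and dictionary names below are illustrative, and the *C* constants are the per-API values listed above.

```python
# Minimal sketch of the Provisioned RU/s = C * T / R formula described above.
# C values are the per-API constants from this article; T and R come from your
# existing data-bearing replica set(s).
RU_PER_VCORE = {
    "nosql": 600,     # Azure Cosmos DB for NoSQL
    "mongodb": 1000,  # Azure Cosmos DB for MongoDB v4.0
}

def estimate_provisioned_rus(total_vcores: int, replication_factor: int, api: str = "nosql") -> float:
    """Rough starting-point RU/s estimate from total data-bearing vCores/vCPUs."""
    c = RU_PER_VCORE[api]
    return c * total_vcores / replication_factor

# Matches the single replica set worked example later in this article:
# 12 vCores with a replication factor of 3.
print(estimate_provisioned_rus(12, 3, "nosql"))    # 2400.0
print(estimate_provisioned_rus(12, 3, "mongodb"))  # 4000.0
```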
-Azure Cosmos DB interop APIs run on top of the SQL API and implement their own unique architectures; thus Azure Cosmos DB API for MongoDB v4.0 has a different *C*-value than Azure Cosmos DB SQL API.
+Azure Cosmos DB interop APIs run on top of the API for NoSQL and implement their own unique architectures; thus Azure Cosmos DB for MongoDB v4.0 has a different *C*-value than Azure Cosmos DB API for NoSQL.
## Worked example: estimate RU/s for single replica set migration
Consider a single replica set with a replication factor of *R=3* based on a four
* *T* = 12 vCores * *R* = 3
-Then the recommended request units for Azure Cosmos DB SQL API are
+Then the recommended request units for Azure Cosmos DB API for NoSQL are
`
-Provisioned RU/s, SQL API = (600 RU/s/vCore) * (12 vCores) / (3) = 2,400 RU/s
+Provisioned RU/s, API for NoSQL = (600 RU/s/vCore) * (12 vCores) / (3) = 2,400 RU/s
`
-And the recommended request units for Azure Cosmos DB API for MongoDB are
+And the recommended request units for Azure Cosmos DB for MongoDB are
` Provisioned RU/s, API for MongoDB = (1,000 RU/s/vCore) * (12 vCores) / (3) = 4,000 RU/s
Consider a sharded and replicated cluster comprising three replica sets each wit
* *T* = 36 vCores * *R* = 3
-Then the recommended request units for Azure Cosmos DB SQL API are
+Then the recommended request units for Azure Cosmos DB API for NoSQL are
`
-Provisioned RU/s, SQL API = (600 RU/s/vCore) * (36 vCores) / (3) = 7,200 RU/s
+Provisioned RU/s, API for NoSQL = (600 RU/s/vCore) * (36 vCores) / (3) = 7,200 RU/s
`
-And the recommended request units for Azure Cosmos DB API for MongoDB are
+And the recommended request units for Azure Cosmos DB for MongoDB are
` Provisioned RU/s, API for MongoDB = (1,000 RU/s/vCore) * (36 vCores) / (3) = 12,000 RU/s
Consider a sharded and replicated cluster comprising three replica sets, in whic
* *T* = 36 vCores * *Ravg* = (3+1+5)/3 = 3
-Then the recommended request units for Azure Cosmos DB SQL API are
+Then the recommended request units for Azure Cosmos DB API for NoSQL are
`
-Provisioned RU/s, SQL API = (600 RU/s/vCore) * (36 vCores) / (3) = 7,200 RU/s
+Provisioned RU/s, API for NoSQL = (600 RU/s/vCore) * (36 vCores) / (3) = 7,200 RU/s
`
-And the recommended request units for Azure Cosmos DB API for MongoDB are
+And the recommended request units for Azure Cosmos DB for MongoDB are
` Provisioned RU/s, API for MongoDB = (1,000 RU/s/vCore) * (36 vCores) / (3) = 12,000 RU/s
Provisioned RU/s, API for MongoDB = (1,000 RU/s/vCore) * (36 vCores) / (3) = 12,
Estimating RU/s from *vCores* or *vCPUs* requires collecting information about total *vCores*/*vCPUs* and replication factor from your existing database replica set(s). Then you can use known relationships between *vcores*/*vCPUs* and throughput to estimate Azure Cosmos DB request units (RU/s). Finding this request unit estimate will be an important step in anticipating the scale of your Azure Cosmos DB data estate after migration.
-The table below summarizes the relationship between *vCores* and *vCPUs* for Azure Cosmos DB SQL API and API for MongoDB v4.0:
+The table below summarizes the relationship between *vCores* and *vCPUs* for Azure Cosmos DB API for NoSQL and API for MongoDB v4.0:
-| vCores | RU/s (SQL API)<br> (rep. factor=3) | RU/s (API for MongoDB v4.0)<br> (rep. factor=3) |
+| vCores | RU/s (API for NoSQL)<br> (rep. factor=3) | RU/s (API for MongoDB v4.0)<br> (rep. factor=3) |
|-|-|-|
| 3 | 600 | 1000 |
| 6 | 1200 | 2000 |
The table below summarizes the relationship between *vCores* and *vCPUs* for Azu
## Next steps * [Learn about Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/) * [Learn how to plan and manage costs for Azure Cosmos DB](plan-manage-costs.md)
-* [Review options for migrating to Azure Cosmos DB](cosmosdb-migrationchoices.md)
-* [Migrate to Azure Cosmos DB SQL API](import-data.md)
-* [Plan your migration to Azure Cosmos DB API for MongoDB](mongodb/pre-migration-steps.md). This doc includes links to different migration tools that you can use once you are finished planning.
+* [Review options for migrating to Azure Cosmos DB](migration-choices.md)
+* [Migrate to Azure Cosmos DB for NoSQL](import-data.md)
+* [Plan your migration to Azure Cosmos DB for MongoDB](mongodb/pre-migration-steps.md). This doc includes links to different migration tools that you can use once you are finished planning.
-[regions]: https://azure.microsoft.com/regions/
+[regions]: https://azure.microsoft.com/regions/
cosmos-db Cosmos Db Advanced Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cosmos-db-advanced-queries.md
- Title: Troubleshoot issues with advanced diagnostics queries (SQL API)-
-description: Learn how to query diagnostics logs to troubleshoot data stored in Azure Cosmos DB - SQL API.
---- Previously updated : 06/12/2021---
-# Troubleshoot issues with advanced diagnostics queries for the SQL (Core) API
--
-> [!div class="op_single_selector"]
-> * [SQL (Core) API](cosmos-db-advanced-queries.md)
-> * [MongoDB API](mongodb/diagnostic-queries-mongodb.md)
-> * [Cassandra API](cassandr)
-> * [Gremlin API](graph/diagnostic-queries-gremlin.md)
->
-
-In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview**) tables.
-
-For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
-
-For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
--- Makes it much easier to work with the data. -- Provides better discoverability of the schemas.-- Improves performance across both ingestion latency and query times.-
-## Common queries
-Common queries are shown in the resource-specific and Azure Diagnostics tables.
-
-### Top N(10) queries ordered by Request Unit (RU) consumption in a specific time frame
-
-# [Resource-specific](#tab/resource-specific)
-
- ```Kusto
- let topRequestsByRUcharge = CDBDataPlaneRequests
- | where TimeGenerated > ago(24h)
- | project RequestCharge , TimeGenerated, ActivityId;
- CDBQueryRuntimeStatistics
- | project QueryText, ActivityId, DatabaseName , CollectionName
- | join kind=inner topRequestsByRUcharge on ActivityId
- | project DatabaseName , CollectionName , QueryText , RequestCharge, TimeGenerated
- | order by RequestCharge desc
- | take 10
- ```
-# [Azure Diagnostics](#tab/azure-diagnostics)
-
- ```Kusto
- let topRequestsByRUcharge = AzureDiagnostics
- | where Category == "DataPlaneRequests" and TimeGenerated > ago(24h)
- | project requestCharge_s , TimeGenerated, activityId_g;
- AzureDiagnostics
- | where Category == "QueryRuntimeStatistics"
- | project querytext_s, activityId_g, databasename_s , collectionname_s
- | join kind=inner topRequestsByRUcharge on activityId_g
- | project databasename_s , collectionname_s , querytext_s , requestCharge_s, TimeGenerated
- | order by requestCharge_s desc
- | take 10
- ```
--
-### Requests throttled (statusCode = 429) in a specific time window
-
-# [Resource-specific](#tab/resource-specific)
-
- ```Kusto
- let throttledRequests = CDBDataPlaneRequests
- | where StatusCode == "429"
- | project OperationName , TimeGenerated, ActivityId;
- CDBQueryRuntimeStatistics
- | project QueryText, ActivityId, DatabaseName , CollectionName
- | join kind=inner throttledRequests on ActivityId
- | project DatabaseName , CollectionName , QueryText , OperationName, TimeGenerated
- ```
-# [Azure Diagnostics](#tab/azure-diagnostics)
-
- ```Kusto
- let throttledRequests = AzureDiagnostics
- | where Category == "DataPlaneRequests" and statusCode_s == "429"
- | project OperationName , TimeGenerated, activityId_g;
- AzureDiagnostics
- | where Category == "QueryRuntimeStatistics"
- | project querytext_s, activityId_g, databasename_s , collectionname_s
- | join kind=inner throttledRequests on activityId_g
- | project databasename_s , collectionname_s , querytext_s , OperationName, TimeGenerated
- ```
--
-### Queries with the largest response lengths (payload size of the server response)
-
-# [Resource-specific](#tab/resource-specific)
-
- ```Kusto
- let operationsbyUserAgent = CDBDataPlaneRequests
- | project OperationName, DurationMs, RequestCharge, ResponseLength, ActivityId;
- CDBQueryRuntimeStatistics
- //specify collection and database
- //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
- | join kind=inner operationsbyUserAgent on ActivityId
- | summarize max(ResponseLength) by QueryText
- | order by max_ResponseLength desc
- ```
-# [Azure Diagnostics](#tab/azure-diagnostics)
-
- ```Kusto
- let operationsbyUserAgent = AzureDiagnostics
- | where Category=="DataPlaneRequests"
- | project OperationName, duration_s, requestCharge_s, responseLength_s, activityId_g;
- AzureDiagnostics
- | where Category == "QueryRuntimeStatistics"
- //specify collection and database
- //| where databasename_s == "DBNAME" and collectioname_s == "COLLECTIONNAME"
- | join kind=inner operationsbyUserAgent on activityId_g
- | summarize max(responseLength_s1) by querytext_s
- | order by max_responseLength_s1 desc
- ```
--
-### RU consumption by physical partition (across all replicas in the replica set)
-
-# [Resource-specific](#tab/resource-specific)
-
- ```Kusto
- CDBPartitionKeyRUConsumption
- | where TimeGenerated >= now(-1d)
- //specify collection and database
- //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(RequestCharge)) by toint(PartitionKeyRangeId)
- | render columnchart
- ```
-# [Azure Diagnostics](#tab/azure-diagnostics)
-
- ```Kusto
- AzureDiagnostics
- | where TimeGenerated >= now(-1d)
- | where Category == 'PartitionKeyRUConsumption'
- //specify collection and database
- //| where databasename_s == "DBNAME" and collectioname_s == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
- | render columnchart
- ```
--
-### RU consumption by logical partition (across all replicas in the replica set)
-
-# [Resource-specific](#tab/resource-specific)
-
- ```Kusto
- CDBPartitionKeyRUConsumption
- | where TimeGenerated >= now(-1d)
- //specify collection and database
- //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
- | render columnchart
- ```
-# [Azure Diagnostics](#tab/azure-diagnostics)
-
- ```Kusto
- AzureDiagnostics
- | where TimeGenerated >= now(-1d)
- | where Category == 'PartitionKeyRUConsumption'
- //specify collection and database
- //| where databasename_s == "DBNAME" and collectioname_s == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
- | render columnchart
- ```
--
-## Next steps
-* For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](cosmosdb-monitor-resource-logs.md).
-* For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md).
cosmos-db Cosmos Db Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cosmos-db-reserved-capacity.md
- Title: Reserved capacity in Azure Cosmos DB to Optimize cost
-description: Learn how to buy Azure Cosmos DB reserved capacity to save on your compute costs.
--- Previously updated : 08/26/2021----
-# Optimize cost with reserved capacity in Azure Cosmos DB
-
-Azure Cosmos DB reserved capacity helps you save money by committing to a reservation for Azure Cosmos DB resources for either one year or three years. With Azure Cosmos DB reserved capacity, you can get a discount on the throughput provisioned for Cosmos DB resources. Examples of resources are databases and containers (tables, collections, and graphs).
-
-Azure Cosmos DB reserved capacity can significantly reduce your Cosmos DB costs&mdash;up to 65 percent on regular prices with a one-year or three-year upfront commitment. Reserved capacity provides a billing discount and doesn't affect the runtime state of your Azure Cosmos DB resources.
-
-Azure Cosmos DB reserved capacity covers throughput provisioned for your resources. It doesn't cover the storage and networking charges. As soon as you buy a reservation, the throughput charges that match the reservation attributes are no longer charged at the pay-as-you go rates. For more information on reservations, see the [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) article.
-
-You can buy Azure Cosmos DB reserved capacity from the [Azure portal](https://portal.azure.com). Pay for the reservation [up front or with monthly payments](../cost-management-billing/reservations/prepare-buy-reservation.md). To buy reserved capacity:
-
-* You must be in the Owner role for at least one Enterprise or individual subscription with pay-as-you-go rates.
-* For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com). Or, if that setting is disabled, you must be an EA Admin on the subscription.
-* For the Cloud Solution Provider (CSP) program, only admin agents or sales agents can buy Azure Cosmos DB reserved capacity.
-
-## Determine the required throughput before purchase
-
-The size of the reserved capacity purchase should be based on the total amount of throughput that the existing or soon-to-be-deployed Azure Cosmos DB resources will use on an hourly basis. For example: Purchase 30,000 RU/s reserved capacity if that's your consistent hourly usage pattern. In this example, any provisioned throughput above 30,000 RU/s will be billed using your Pay-as-you-go rate. If provisioned throughput is below 30,000 RU/s in an hour, then the extra reserved capacity for that hour will be wasted.
-
-We calculate purchase recommendations based on your hourly usage pattern. Usage over last 7, 30 and 60 days is analyzed, and reserved capacity purchase that maximizes your savings is recommended. You can view recommended reservation sizes in the Azure portal using the following steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. Select **All services** > **Reservations** > **Add**.
-
-3. From the **Purchase reservations** pane, choose **Azure Cosmos DB**.
-
-4. Select the **Recommended** tab to view recommended reservations:
-
-You can filter recommendations by the following attributes:
--- **Term** (1 year or 3 years)-- **Billing frequency** (Monthly or Upfront)-- **Throughput Type** (RU/s vs multi-region write RU/s)-
-Additionally, you can scope recommendations to be within a single resource group, single subscription, or your entire Azure enrollment.
-
-Here's an example recommendation:
--
-This recommendation to purchase a 30,000 RU/s reservation indicates that, among 3 year reservations, a 30,000 RU/s reservation size will maximize savings. In this case, the recommendation is calculated based on the past 30 days of Azure Cosmos DB usage. If this customer expects that the past 30 days of Azure Cosmos DB usage is representative of future use, they would maximize savings by purchasing a 30,000 RU/s reservation.
-
-## Buy Azure Cosmos DB reserved capacity
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. Select **All services** > **Reservations** > **Add**.
-
-3. From the **Purchase reservations** pane, choose **Azure Cosmos DB** to buy a new reservation.
-
-4. Fill in the required fields as described in the following table:
-
- :::image type="content" source="./media/cosmos-db-reserved-capacity/fill-reserved-capacity-form.png" alt-text="Fill the reserved capacity form":::
-
- |Field |Description |
- |||
- |Scope | Option that controls how many subscriptions can use the billing benefit associated with the reservation. It also controls how the reservation is applied to specific subscriptions. <br/><br/> If you select **Shared**, the reservation discount is applied to Azure Cosmos DB instances that run in any subscription within your billing context. The billing context is based on how you signed up for Azure. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For pay-as-you-go customers, the shared scope is all individual subscriptions with pay-as-you-go rates created by the account administrator. </br></br>If you select **Management group**, the reservation discount is applied to Azure Cosmos DB instances that run in any of the subscriptions that are a part of both the management group and billing scope. <br/><br/> If you select **Single subscription**, the reservation discount is applied to Azure Cosmos DB instances in the selected subscription. <br/><br/> If you select **Single resource group**, the reservation discount is applied to Azure Cosmos DB instances in the selected subscription and the selected resource group within that subscription. <br/><br/> You can change the reservation scope after you buy the reserved capacity. |
- |Subscription | Subscription that's used to pay for the Azure Cosmos DB reserved capacity. The payment method on the selected subscription is used in charging the costs. The subscription must be one of the following types: <br/><br/> Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P): For an Enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. <br/><br/> Individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P): For an individual subscription with pay-as-you-go rates, the charges are billed to the credit card or invoice payment method on the subscription. |
- | Resource Group | Resource group to which the reserved capacity discount is applied. |
- |Term | One year or three years. |
- |Throughput Type | Throughput is provisioned as request units. You can buy a reservation for the provisioned throughput for both setups - single region writes as well as multiple region writes. The throughput type has two values to choose from: 100 RU/s per hour and 100 multi-region writes RU/s per hour.|
- | Reserved Capacity Units| The amount of throughput that you want to reserve. You can calculate this value by determining the throughput needed for all your Cosmos DB resources (for example, databases or containers) per region. You then multiply it by the number of regions that you'll associate with your Cosmos database. For example: If you have five regions with 1 million RU/sec in every region, select 5 million RU/sec for the reservation capacity purchase. |
--
-5. After you fill the form, the price required to purchase the reserved capacity is calculated. The output also shows the percentage of discount you get with the chosen options. Next click **Select**
-
-6. In the **Purchase reservations** pane, review the discount and the price of the reservation. This reservation price applies to Azure Cosmos DB resources with throughput provisioned across all regions.
-
- :::image type="content" source="./media/cosmos-db-reserved-capacity/reserved-capacity-summary.png" alt-text="Reserved capacity summary":::
-
-7. Select **Review + buy** and then **buy now**. You see the following page when the purchase is successful:
-
-After you buy a reservation, it's applied immediately to any existing Azure Cosmos DB resources that match the terms of the reservation. If you don't have any existing Azure Cosmos DB resources, the reservation will apply when you deploy a new Cosmos DB instance that matches the terms of the reservation. In both cases, the period of the reservation starts immediately after a successful purchase.
-
-When your reservation expires, your Azure Cosmos DB instances continue to run and are billed at the regular pay-as-you-go rates.
-
-## Cancel, exchange, or refund reservations
-
-You can cancel, exchange, or refund reservations with certain limitations. For more information, see [Self-service exchanges and refunds for Azure Reservations](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
-
-## Next steps
-
-The reservation discount is applied automatically to the Azure Cosmos DB resources that match the reservation scope and attributes. You can update the scope of the reservation through the Azure portal, PowerShell, Azure CLI, or the API.
-
-* To learn how reserved capacity discounts are applied to Azure Cosmos DB, see [Understand the Azure reservation discount](../cost-management-billing/reservations/understand-cosmosdb-reservation-charges.md).
-
-* To learn more about Azure reservations, see the following articles:
-
- * [What are Azure reservations?](../cost-management-billing/reservations/save-compute-costs-reservations.md)
- * [Manage Azure reservations](../cost-management-billing/reservations/manage-reserved-vm-instance.md)
- * [Understand reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
- * [Understand reservation usage for your Pay-As-You-Go subscription](../cost-management-billing/reservations/understand-reserved-instance-usage.md)
- * [Azure reservations in the Partner Center CSP program](/partner-center/azure-reservations)
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-## Need help? Contact us.
-
-If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
cosmos-db Cosmosdb Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cosmosdb-insights-overview.md
- Title: Monitor Azure Cosmos DB with Azure Monitor Cosmos DB insights| Microsoft Docs
-description: This article describes the Cosmos DB insights feature of Azure Monitor that provides Cosmos DB owners with a quick understanding of performance and utilization issues with their Cosmos DB accounts.
---- Previously updated : 05/11/2020-----
-# Explore Azure Monitor Cosmos DB insights
-
-Cosmos DB insights provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. This article will help you understand the benefits of this new monitoring experience, and how you can modify and adapt the experience to fit the unique needs of your organization.
-
-## Introduction
-
-Before diving into the experience, you should understand how it presents and visualizes information.
-
-It delivers:
-
-* **At scale perspective** of your Azure Cosmos DB resources across all your subscriptions in a single location, with the ability to selectively scope to only those subscriptions and resources you are interested in evaluating.
-
-* **Drill down analysis** of a particular Azure Cosmos DB resource to help diagnose issues or perform detailed analysis by category - utilization, failures, capacity, and operations. Selecting any one of those options provides an in-depth view of the relevant Azure Cosmos DB metrics.
-
-* **Customizable** - This experience is built on top of Azure Monitor workbook templates allowing you to change what metrics are displayed, modify or set thresholds that align with your limits, and then save into a custom workbook. Charts in the workbooks can then be pinned to Azure dashboards.
-
-This feature does not require you to enable or configure anything, these Azure Cosmos DB metrics are collected by default.
-
->[!NOTE]
->There is no charge to access this feature and you will only be charged for the Azure Monitor essential features you configure or enable, as described on the [Azure Monitor pricing details](https://azure.microsoft.com/pricing/details/monitor/) page.
-
-## View utilization and performance metrics for Azure Cosmos DB
-
-To view the utilization and performance of your storage accounts across all of your subscriptions, perform the following steps.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. Search for **Monitor** and select **Monitor**.
-
- ![Search box with the word "Monitor" and a dropdown that says Services "Monitor" with a speedometer style image](./media/cosmosdb-insights-overview/search-monitor.png)
-
-3. Select **Cosmos DB**.
-
- ![Screenshot of Cosmos DB overview workbook](./media/cosmosdb-insights-overview/cosmos-db.png)
-
-### Overview
-
-On **Overview**, the table displays interactive Azure Cosmos DB metrics. You can filter the results based on the options you select from the following drop-down lists:
-
-* **Subscriptions** - only subscriptions that have an Azure Cosmos DB resource are listed.
-
-* **Cosmos DB** - You can select all, a subset, or single Azure Cosmos DB resource.
-
-* **Time Range** - by default, displays the last 4 hours of information based on the corresponding selections made.
-
-The counter tile under the drop-down lists rolls-up the total number of Azure Cosmos DB resources are in the selected subscriptions. There is conditional color-coding or heatmaps for columns in the workbook that report transaction metrics. The deepest color has the highest value and a lighter color is based on the lowest values.
-
-Selecting a drop-down arrow next to one of the Azure Cosmos DB resources will reveal a breakdown of the performance metrics at the individual database container level:
-
-![Expanded drop down revealing individual database containers and associated performance breakdown](./media/cosmosdb-insights-overview/container-view.png)
-
-Selecting the Azure Cosmos DB resource name highlighted in blue will take you to the default **Overview** for the associated Azure Cosmos DB account.
-
-### Failures
-
-Select **Failures** at the top of the page and the **Failures** portion of the workbook template opens. It shows you total requests with the distribution of responses that make up those requests:
-
-![Screenshot of failures with breakdown by HTTP request type](./media/cosmosdb-insights-overview/failures.png)
-
-| Code | Description |
-|--|:--|
-| `200 OK` | One of the following REST operations were successful: </br>- GET on a resource. </br> - PUT on a resource. </br> - POST on a resource. </br> - POST on a stored procedure resource to execute the stored procedure.|
-| `201 Created` | A POST operation to create a resource is successful. |
-| `404 Not Found` | The operation is attempting to act on a resource that no longer exists. For example, the resource may have already been deleted. |
-
-For a full list of status codes, consult the [Azure Cosmos DB HTTP status code article](/rest/api/cosmos-db/http-status-codes-for-cosmosdb).
-
-### Capacity
-
-Select **Capacity** at the top of the page and the **Capacity** portion of the workbook template opens. It shows you how many documents you have, your document growth over time, data usage, and the total amount of available storage that you have left. This can be used to help identify potential storage and data utilization issues.
-
-![Capacity workbook](./media/cosmosdb-insights-overview/capacity.png)
-
-As with the overview workbook, selecting the drop-down next to an Azure Cosmos DB resource in the **Subscription** column will reveal a breakdown by the individual containers that make up the database.
-
-### Operations
-
-Select **Operations** at the top of the page and the **Operations** portion of the workbook template opens. It gives you the ability to see your requests broken down by the type of requests made.
-
-So in the example below you see that `eastus-billingint` is predominantly receiving read requests, but with a small number of upsert and create requests. Whereas `westeurope-billingint` is read-only from a request perspective, at least over the past four hours that the workbook is currently scoped to via its time range parameter.
-
-![Operations workbook](./media/cosmosdb-insights-overview/operation.png)
-
-## View from an Azure Cosmos DB resource
-
-1. Search for or select any of your existing Azure Cosmos DB accounts.
--
-2. Once you've navigated to your Azure Cosmos DB account, in the Monitoring section select **Insights (preview)** or **Workbooks** to perform further analysis on throughput, requests, storage, availability, latency, system, and account management.
--
-### Time range
-
-By default, the **Time Range** field displays data from the **Last 24 hours**. You can modify the time range to display data anywhere from the last 5 minutes to the last seven days. The time range selector also includes a **Custom** mode that allows you to type in the start/end dates to view a custom time frame based on available data for the selected account.
--
-### Insights overview
-
-The **Overview** tab provides the most common metrics for the selected Azure Cosmos DB account including:
-
-* Total Requests
-* Failed Requests (429s)
-* Normalized RU Consumption (max)
-* Data & Index Usage
-* Cosmos DB Account Metrics by Collection
-
-**Total Requests:** This graph provides a view of the total requests for the account broken down by status code. The units at the bottom of the graph are a sum of the total requests for the period.
--
-**Failed Requests (429s)**: This graph provides a view of failed requests with a status code of 429. The units at the bottom of the graph are a sum of the total failed requests for the period.
--
-**Normalized RU Consumption (max)**: This graph provides the max percentage between 0-100% of Normalized RU Consumption units for the specified period.
--
-## Pin, export, and expand
-
-You can pin any one of the metric sections to an [Azure Dashboard](../azure-portal/azure-portal-dashboards.md) by selecting the pushpin icon at the top right of the section.
-
-![Metric section pin to dashboard example](./media/cosmosdb-insights-overview/pin.png)
-
-To export your data into the Excel format, select the down arrow icon to the left of the pushpin icon.
-
-![Export workbook icon](./media/cosmosdb-insights-overview/export.png)
-
-To expand or collapse all drop-down views in the workbook, select the expand icon to the left of the export icon:
-
-![Expand workbook icon](./media/cosmosdb-insights-overview/expand.png)
-
-## Customize Cosmos DB insights
-
-Since this experience is built on top of Azure Monitor workbook templates, you have the ability to **Customize** > **Edit** and **Save** a copy of your modified version into a custom workbook.
-
-![Customize bar](./media/cosmosdb-insights-overview/customize.png)
-
-Workbooks are saved within a resource group, either in the **My Reports** section that's private to you or in the **Shared Reports** section that's accessible to everyone with access to the resource group. After you save the custom workbook, you need to go to the workbook gallery to launch it.
-
-![Launch workbook gallery from command bar](./media/cosmosdb-insights-overview/gallery.png)
-
-## Troubleshooting
-
-For troubleshooting guidance, refer to the dedicated workbook-based insights [troubleshooting article](../azure-monitor/insights/troubleshoot-workbooks.md).
-
-## Next steps
-
-* Configure [metric alerts](../azure-monitor/alerts/alerts-metric.md) and [service health notifications](../service-health/alerts-activity-log-service-notifications-portal.md) to set up automated alerting to aid in detecting issues.
-
-* Learn the scenarios workbooks are designed to support, how to author new and customize existing reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md).
cosmos-db Cosmosdb Migrationchoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cosmosdb-migrationchoices.md
- Title: Cosmos DB Migration options
-description: This article describes the various options to migrate your on-premises or cloud data to Azure Cosmos DB.
Previously updated : 04/02/2022
-# Options to migrate your on-premises or cloud data to Azure Cosmos DB
-
-You can load data from various data sources to Azure Cosmos DB. Since Azure Cosmos DB supports multiple APIs, the targets can be any of the existing APIs. The following are some scenarios where you migrate data to Azure Cosmos DB:
-
-* Move data from one Azure Cosmos DB container to another container in the same database or a different database.
-* Move data from dedicated containers to shared database containers.
-* Move data from an Azure Cosmos DB account in one region to another Azure Cosmos DB account in the same or a different region.
-* Move data from a source such as Azure Blob storage, a JSON file, an Oracle database, Couchbase, or DynamoDB to Azure Cosmos DB.
-
-In order to support migration paths from the various sources to the different Azure Cosmos DB APIs, there are multiple solutions that provide specialized handling for each migration path. This document lists the available solutions and describes their advantages and limitations.
-
-## Factors affecting the choice of migration tool
-
-The following factors determine the choice of the migration tool:
-
-* **Online vs offline migration**: Many migration tools provide a path to do a one-time migration only. This means that the applications accessing the database might experience a period of downtime. Some migration solutions provide a way to do a live migration where there is a replication pipeline set up between the source and the target.
-
-* **Data source**: The existing data can be in various data sources like Oracle, DB2, DataStax Cassandra, Azure SQL Database, PostgreSQL, etc. The data can also be in an existing Azure Cosmos DB account, and the intent of the migration can be to change the data model or to repartition the data in a container with a different partition key.
-
-* **Azure Cosmos DB API**: For the SQL API in Azure Cosmos DB, there are a variety of tools developed by the Azure Cosmos DB team which aid in the different migration scenarios. All of the other APIs have their own specialized set of tools developed and maintained by the community. Since Azure Cosmos DB supports these APIs at a wire protocol level, these tools should work as-is while migrating data into Azure Cosmos DB too. However, they might require custom handling for throttles as this concept is specific to Azure Cosmos DB.
-
-* **Size of data**: Most migration tools work very well for smaller datasets. When the data set exceeds a few hundred gigabytes, the choices of migration tools are limited.
-
-* **Expected migration duration**: Migrations can be configured to run at a slow, incremental pace that consumes less throughput, or they can consume the entire throughput provisioned on the target Azure Cosmos DB container and complete the migration in less time, as the rough estimate after this list illustrates.
-
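As a rough way to reason about that trade-off, you can estimate the wall-clock time of an offline copy from the document count, an assumed RU cost per write, and the RU/s provisioned on the target container. The numbers below are purely illustrative (a 10 RU per-document write cost is an assumption; the real cost depends on document size and indexing policy):

```kusto
// Back-of-the-envelope estimate: 100 million documents at ~10 RU per write,
// copied into a container provisioned with 50,000 RU/s
print migrationHours = (100000000.0 * 10.0) / 50000.0 / 3600.0  // ~5.6 hours
```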
-## Azure Cosmos DB SQL API
-
-* If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
-* If you are migrating from a vCores- or server-based platform and you need guidance on estimating request units, consider reading our [guide to estimating RU/s based on vCores](convert-vcore-to-request-unit.md).
-
->[!IMPORTANT]
-> The [Custom Migration Service using ChangeFeed](https://github.com/Azure-Samples/azure-cosmosdb-live-data-migrator) is an open-source tool for live container migrations that implements change feed and bulk support. However, please note that the user interface application code for this tool is not supported or actively maintained by Microsoft. For Azure Cosmos DB SQL API live container migrations, we recommend using the Spark Connector + Change Feed approach illustrated in the [sample](https://github.com/Azure/azure-sdk-for-jav), which is fully supported by Microsoft.
-
-|Migration type|Solution|Supported sources|Supported targets|Considerations|
-||||||
-|Offline|[Data Migration Tool](import-data.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB SQL API<br/>&bull;MongoDB<br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;AWS DynamoDB<br/>&bull;Azure Blob Storage|&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB Tables API<br/>&bull;JSON Files |&bull; Easy to set up and supports multiple sources. <br/>&bull; Not suitable for large datasets.|
-|Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB API for MongoDB<br/>&bull;MongoDB <br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;Azure Blob Storage <br/> <br/>See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources.|&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB API for MongoDB<br/>&bull;JSON Files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets. |&bull; Easy to set up and supports multiple sources.<br/>&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Lack of checkpointing - It means that if an issue occurs during the course of migration, you need to restart the whole migration process.<br/>&bull; Lack of a dead letter queue - It means that a few erroneous files can stop the entire migration process.|
-|Offline|[Azure Cosmos DB Spark connector](./create-sql-api-spark.md)|Azure Cosmos DB SQL API. <br/><br/>You can use other sources with additional connectors from the Spark ecosystem.| Azure Cosmos DB SQL API. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Needs a custom Spark setup. <br/>&bull; Spark is sensitive to schema inconsistencies and this can be a problem during migration. |
-|Online|[Azure Cosmos DB Spark connector + Change Feed](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/DatabricksLiveContainerMigration)|Azure Cosmos DB SQL API. <br/><br/>Uses Azure Cosmos DB Change Feed to stream all historic data as well as live updates.| Azure Cosmos DB SQL API. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Needs a custom Spark setup. <br/>&bull; Spark is sensitive to schema inconsistencies and this can be a problem during migration. |
-|Offline|[Custom tool with Cosmos DB bulk executor library](migrate-cosmosdb-data.md)| The source depends on your custom code | Azure Cosmos DB SQL API| &bull; Provides checkpointing, dead-lettering capabilities which increases migration resiliency. <br/>&bull; Suitable for very large datasets (10 TB+). <br/>&bull; Requires custom setup of this tool running as an App Service. |
-|Online|[Cosmos DB Functions + ChangeFeed API](change-feed-functions.md)| Azure Cosmos DB SQL API | Azure Cosmos DB SQL API| &bull; Easy to set up. <br/>&bull; Works only if the source is an Azure Cosmos DB container. <br/>&bull; Not suitable for large datasets. <br/>&bull; Does not capture deletes from the source container. |
-|Online|[Custom Migration Service using ChangeFeed](https://github.com/Azure-Samples/azure-cosmosdb-live-data-migrator)| Azure Cosmos DB SQL API | Azure Cosmos DB SQL API| &bull; Provides progress tracking. <br/>&bull; Works only if the source is an Azure Cosmos DB container. <br/>&bull; Works for larger datasets as well.<br/>&bull; Requires the user to set up an App Service to host the Change feed processor. <br/>&bull; Does not capture deletes from the source container.|
-|Online|[Striim](cosmosdb-sql-api-migrate-data-striim.md)| &bull;Oracle <br/>&bull;Apache Cassandra<br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported sources. |&bull;Azure Cosmos DB SQL API <br/>&bull; Azure Cosmos DB Cassandra API<br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported targets. | &bull; Works with a large variety of sources like Oracle, DB2, SQL Server.<br/>&bull; Easy to build ETL pipelines and provides a dashboard for monitoring. <br/>&bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.|
-
-## Azure Cosmos DB Mongo API
-
-Follow the [pre-migration guide](mongodb/pre-migration-steps.md) to plan your migration.
-* If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
-* If you are migrating from a vCores- or server-based platform and you need guidance on estimating request units, consider reading our [guide to estimating RU/s based on vCores](convert-vcore-to-request-unit.md).
-
-When you are ready to migrate, you can find detailed guidance on migration tools below:
-* [Offline migration using MongoDB native tools](mongodb/tutorial-mongotools-cosmos-db.md)
-* [Offline migration using Azure database migration service (DMS)](../dms/tutorial-mongodb-cosmos-db.md)
-* [Online migration using Azure database migration service (DMS)](../dms/tutorial-mongodb-cosmos-db-online.md)
-* [Offline/online migration using Azure Databricks and Spark](mongodb/migrate-databricks.md)
-
-Then, follow our [post-migration guide](mongodb/post-migration-optimization.md) to optimize your Azure Cosmos DB data estate once you have migrated.
-
-A summary of migration pathways from your current solution to Azure Cosmos DB API for MongoDB is provided below:
-
-|Migration type|Solution|Supported sources|Supported targets|Considerations|
-||||||
-|Online|[Azure Database Migration Service](../dms/tutorial-mongodb-cosmos-db-online.md)| MongoDB|Azure Cosmos DB API for MongoDB |&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets and takes care of replicating live changes. <br/>&bull; Works only with other MongoDB sources.|
-|Offline|[Azure Database Migration Service](../dms/tutorial-mongodb-cosmos-db.md)| MongoDB| Azure Cosmos DB API for MongoDB| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Works only with other MongoDB sources.|
-|Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db-mongodb-api.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB API for MongoDB <br/>&bull;MongoDB<br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;Azure Blob Storage <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources. | &bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB API for MongoDB <br/>&bull; JSON files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets.| &bull; Easy to set up and supports multiple sources. <br/>&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Lack of checkpointing means that any issue during the course of migration would require a restart of the whole migration process.<br/>&bull; Lack of a dead letter queue would mean that a few erroneous files could stop the entire migration process. <br/>&bull; Needs custom code to increase read throughput for certain data sources.|
-|Offline|Existing Mongo Tools ([mongodump](mongodb/tutorial-mongotools-cosmos-db.md#mongodumpmongorestore), [mongorestore](mongodb/tutorial-mongotools-cosmos-db.md#mongodumpmongorestore), [Studio3T](mongodb/connect-using-mongochef.md))|&bull;MongoDB<br/>&bull;Azure Cosmos DB API for MongoDB<br/> | Azure Cosmos DB API for MongoDB| &bull; Easy to set up and integrate. <br/>&bull; Needs custom handling for throttles.|
-
-## Azure Cosmos DB Cassandra API
-
-If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
-
-|Migration type|Solution|Supported sources|Supported targets|Considerations|
-||||||
-|Offline|[cqlsh COPY command](cassandr#migrate-data-by-using-the-cqlsh-copy-command)|CSV Files | Azure Cosmos DB Cassandra API| &bull; Easy to set up. <br/>&bull; Not suitable for large datasets. <br/>&bull; Works only when the source is a Cassandra table.|
-|Offline|[Copy table with Spark](cassandr#migrate-data-by-using-spark) | &bull;Apache Cassandra<br/> | Azure Cosmos DB Cassandra API | &bull; Can make use of Spark capabilities to parallelize transformation and ingestion. <br/>&bull; Needs configuration with a custom retry policy to handle throttles.|
-|Online|[Dual-write proxy + Spark](cassandr)| &bull;Apache Cassandra<br/>|&bull;Azure Cosmos DB Cassandra API <br/>| &bull; Supports larger datasets, but careful attention required for setup and validation. <br/>&bull; Open-source tools, no purchase required.|
-|Online|[Striim (from Oracle DB/Apache Cassandra)](cassandr)| &bull;Oracle<br/>&bull;Apache Cassandra<br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported sources.|&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB Cassandra API <br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported targets.| &bull; Works with a large variety of sources like Oracle, DB2, SQL Server. <br/>&bull; Easy to build ETL pipelines and provides a dashboard for monitoring. <br/>&bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.|
-|Online|[Arcion (from Oracle DB/Apache Cassandra)](cassandr)|&bull;Oracle<br/>&bull;Apache Cassandra<br/><br/>See the [Arcion website](https://www.arcion.io/) for other supported sources. |Azure Cosmos DB Cassandra API. <br/><br/>See the [Arcion website](https://www.arcion.io/) for other supported targets. | &bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.|
-
-## Other APIs
-
-For APIs other than the SQL API, the Mongo API, and the Cassandra API, there are various tools supported by each API's existing ecosystem.
-
-**Table API**
-
-* [Data Migration Tool](table/table-import.md#data-migration-tool)
-
-**Gremlin API**
-
-* [Graph bulk executor library](bulk-executor-graph-dotnet.md)
-* [Gremlin Spark](https://github.com/Azure/azure-cosmosdb-spark/blob/2.4/samples/graphframes/main.scala)
-
-## Next steps
-
-* Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-* Learn more by trying out the sample applications consuming the bulk executor library in [.NET](bulk-executor-dot-net.md) and [Java](bulk-executor-java.md).
-* The bulk executor library is integrated into the Azure Cosmos DB Spark connector. To learn more, see the [Azure Cosmos DB Spark connector](./create-sql-api-spark.md) article.
-* Contact the Azure Cosmos DB product team by opening a support ticket under the "General Advisory" problem type and "Large (TB+) migrations" problem subtype for additional help with large scale migrations.
cosmos-db Cosmosdb Monitor Logs Basic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cosmosdb-monitor-logs-basic-queries.md
- Title: Troubleshoot issues with diagnostics queries
-description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB
Previously updated : 05/12/2021
-# Troubleshoot issues with diagnostics queries
-
-In this article, we'll cover how to write simple queries to help troubleshoot issues with your Azure Cosmos DB account using diagnostics logs sent to **AzureDiagnostics (legacy)** and **Resource-specific (preview)** tables.
-
-For Azure Diagnostics tables, all data is written into a single table, and users need to specify which category they'd like to query.
-
-For resource-specific tables, data is written into individual tables for each category of the resource (not available for table API). We recommend this mode since it makes it much easier to work with the data, provides better discoverability of the schemas, and improves performance across both ingestion latency and query times.
-
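The practical difference shows up in every query that follows: with AzureDiagnostics you filter a single shared table down to the resource provider and category you need, whereas in resource-specific mode you query a dedicated table per category directly. A minimal side-by-side sketch:

```kusto
// Legacy AzureDiagnostics mode: one shared table, filtered by provider and category
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DOCUMENTDB" and Category == "DataPlaneRequests"
| take 10

// Resource-specific mode: a dedicated table per category
CDBDataPlaneRequests
| take 10
```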
-## <a id="azure-diagnostics-queries"></a> AzureDiagnostics Queries
--- How to query for the operations that are taking longer than 3 milliseconds to run:-
- ```Kusto
- AzureDiagnostics
- | where toint(duration_s) > 3 and ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests"
- | summarize count() by clientIpAddress_s, TimeGenerated
- ```
--- How to query for the user agent that is running the operations:-
- ```Kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests"
- | summarize count() by OperationName, userAgent_s
- ```
--- How to query for the long-running operations:-
- ```Kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests"
- | project TimeGenerated , duration_s
- | summarize count() by bin(TimeGenerated, 5s)
- | render timechart
- ```
-
-- How to get partition key statistics to evaluate skew across top 3 partitions for a database account:-
- ```Kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="PartitionKeyStatistics"
- | project SubscriptionId, regionName_s, databaseName_s, collectionName_s, partitionKey_s, sizeKb_d, ResourceId
- ```
--- How to get the request charges for expensive queries?-
- ```Kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests" and todouble(requestCharge_s) > 10.0
- | project activityId_g, requestCharge_s
- | join kind= inner (
- AzureDiagnostics
- | where ResourceProvider =="MICROSOFT.DOCUMENTDB" and Category == "QueryRuntimeStatistics"
- | project activityId_g, querytext_s
- ) on $left.activityId_g == $right.activityId_g
- | order by requestCharge_s desc
- | limit 100
- ```
--- How to find which operations are taking most of RU/s?-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests"
- | where TimeGenerated >= ago(2h)
- | summarize max(responseLength_s), max(requestLength_s), max(requestCharge_s), count = count() by OperationName, requestResourceType_s, userAgent_s, collectionRid_s, bin(TimeGenerated, 1h)
- ```
--- How to get all queries that are consuming more than 100 RU/s joined with data from **DataPlaneRequests** and **QueryRunTimeStatistics**.-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests" and todouble(requestCharge_s) > 100.0
- | project activityId_g, requestCharge_s
- | join kind= inner (
- AzureDiagnostics
- | where ResourceProvider =="MICROSOFT.DOCUMENTDB" and Category == "QueryRuntimeStatistics"
- | project activityId_g, querytext_s
- ) on $left.activityId_g == $right.activityId_g
- | order by requestCharge_s desc
- | limit 100
- ```
--- How to get the request charges and the execution duration of a query?-
- ```kusto
- AzureDiagnostics
- | where TimeGenerated >= ago(24hr)
- | where Category == "QueryRuntimeStatistics"
- | join (
- AzureDiagnostics
- | where TimeGenerated >= ago(24hr)
- | where Category == "DataPlaneRequests"
- ) on $left.activityId_g == $right.activityId_g
- | project databasename_s, collectionname_s, OperationName1 , querytext_s,requestCharge_s1, duration_s1, bin(TimeGenerated, 1min)
- ```
--- How to get the distribution for different operations?-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests"
- | where TimeGenerated >= ago(2h)
- | summarize count = count() by OperationName, requestResourceType_s, bin(TimeGenerated, 1h)
- ```
--- What is the maximum throughput that a partition has consumed?
-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests"
- | where TimeGenerated >= ago(2h)
- | summarize max(requestCharge_s) by bin(TimeGenerated, 1h), partitionId_g
- ```
--- How to get the information about the partition keys RU/s consumption per second?-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider == "MICROSOFT.DOCUMENTDB" and Category == "PartitionKeyRUConsumption"
- | summarize total = sum(todouble(requestCharge_s)) by databaseName_s, collectionName_s, partitionKey_s, TimeGenerated
- | order by TimeGenerated asc
- ```
--- How to get the request charge for a specific partition key-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider == "MICROSOFT.DOCUMENTDB" and Category == "PartitionKeyRUConsumption"
- | where parse_json(partitionKey_s)[0] == "2"
- ```
--- How to get the top partition keys with most RU/s consumed in a specific period?-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider == "MICROSOFT.DOCUMENTDB" and Category == "PartitionKeyRUConsumption"
- | where TimeGenerated >= datetime("11/26/2019, 11:20:00.000 PM") and TimeGenerated <= datetime("11/26/2019, 11:30:00.000 PM")
- | summarize total = sum(todouble(requestCharge_s)) by databaseName_s, collectionName_s, partitionKey_s
- | order by total desc
- ```
--- How to get the logs for the partition keys whose storage size is greater than 8 GB?-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="PartitionKeyStatistics"
- | where todouble(sizeKb_d) > 8000000
- ```
--- How to get P99 or P50 replication latencies for operations, request charge or the length of the response?-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests"
- | where TimeGenerated >= ago(2d)
- | summarize percentile(todouble(responseLength_s), 50), percentile(todouble(responseLength_s), 99), max(responseLength_s), percentile(todouble(requestCharge_s), 50), percentile(todouble(requestCharge_s), 99), max(requestCharge_s), percentile(todouble(duration_s), 50), percentile(todouble(duration_s), 99), max(duration_s), count() by OperationName, requestResourceType_s, userAgent_s, collectionRid_s, bin(TimeGenerated, 1h)
- ```
-
-- How to get ControlPlane logs?
-
- Remember to switch on the flag as described in the [Disable key-based metadata write access](audit-control-plane-logs.md#disable-key-based-metadata-write-access) article, and execute the operations by using Azure PowerShell, the Azure CLI, or Azure Resource Manager.
-
- ```kusto
- AzureDiagnostics
- | where Category =="ControlPlaneRequests"
- | summarize by OperationName
- ```
--
-## <a id="resource-specific-queries"></a> Resource-specific Queries
--- How to query for the operations that are taking longer than 3 milliseconds to run:-
- ```kusto
- CDBDataPlaneRequests
- | where toint(DurationMs) > 3
- | summarize count() by ClientIpAddress, TimeGenerated
- ```
--- How to query for the user agent that is running the operations:-
- ```kusto
- CDBDataPlaneRequests
- | summarize count() by OperationName, UserAgent
- ```
--- How to query for the long-running operations:-
- ```kusto
- CDBDataPlaneRequests
- | project TimeGenerated , DurationMs
- | summarize count() by bin(TimeGenerated, 5s)
- | render timechart
- ```
-
-- How to get partition key statistics to evaluate skew across top 3 partitions for a database account:-
- ```kusto
- CDBPartitionKeyStatistics
- | project RegionName, DatabaseName, CollectionName, PartitionKey, SizeKb
- ```
--- How to get the request charges for expensive queries?-
- ```kusto
- CDBDataPlaneRequests
- | where todouble(RequestCharge) > 10.0
- | project ActivityId, RequestCharge
- | join kind= inner (
- CDBQueryRuntimeStatistics
- | project ActivityId, QueryText
- ) on $left.ActivityId == $right.ActivityId
- | order by RequestCharge desc
- | limit 100
- ```
--- How to find which operations are taking most of RU/s?-
- ```kusto
- CDBDataPlaneRequests
- | where TimeGenerated >= ago(2h)
- | summarize max(ResponseLength), max(RequestLength), max(RequestCharge), count = count() by OperationName, RequestResourceType, UserAgent, CollectionName, bin(TimeGenerated, 1h)
- ```
--- How to get all queries that are consuming more than 100 RU/s joined with data from **DataPlaneRequests** and **QueryRunTimeStatistics**.-
- ```kusto
- CDBDataPlaneRequests
- | where todouble(RequestCharge) > 100.0
- | project ActivityId, RequestCharge
- | join kind= inner (
- CDBQueryRuntimeStatistics
- | project ActivityId, QueryText
- ) on $left.ActivityId == $right.ActivityId
- | order by RequestCharge desc
- | limit 100
- ```
--- How to get the request charges and the execution duration of a query?-
- ```kusto
- CDBQueryRuntimeStatistics
- | join kind= inner (
- CDBDataPlaneRequests
- ) on $left.ActivityId == $right.ActivityId
- | project DatabaseName, CollectionName, OperationName , QueryText, RequestCharge, DurationMs, bin(TimeGenerated, 1min)
- ```
--- How to get the distribution for different operations?-
- ```kusto
- CDBDataPlaneRequests
- | where TimeGenerated >= ago(2h)
- | summarize count = count() by OperationName, RequestResourceType, bin(TimeGenerated, 1h)
- ```
--- What is the maximum throughput that a partition has consumed?-
- ```kusto
- CDBDataPlaneRequests
- | where TimeGenerated >= ago(2h)
- | summarize max(RequestCharge) by bin(TimeGenerated, 1h), PartitionId
- ```
--- How to get the information about the partition keys RU/s consumption per second?-
- ```kusto
- CDBPartitionKeyRUConsumption
- | summarize total = sum(todouble(RequestCharge)) by DatabaseName, CollectionName, PartitionKey, TimeGenerated
- | order by TimeGenerated asc
- ```
--- How to get the request charge for a specific partition key?-
- ```kusto
- CDBPartitionKeyRUConsumption
- | where parse_json(PartitionKey)[0] == "2"
- ```
--- How to get the top partition keys with most RU/s consumed in a specific period? -
- ```kusto
- CDBPartitionKeyRUConsumption
- | where TimeGenerated >= datetime("02/12/2021, 11:20:00.000 PM") and TimeGenerated <= datetime("05/12/2021, 11:30:00.000 PM")
- | summarize total = sum(todouble(RequestCharge)) by DatabaseName, CollectionName, PartitionKey
- | order by total desc
- ```
--- How to get the logs for the partition keys whose storage size is greater than 8 GB?-
- ```kusto
- CDBPartitionKeyStatistics
- | where todouble(SizeKb) > 8000000
- ```
--- How to get P99 or P50 replication latencies for operations, request charge or the length of the response?-
- ```kusto
- CDBDataPlaneRequests
- | where TimeGenerated >= ago(2d)
- | summarize percentile(todouble(ResponseLength), 50), percentile(todouble(ResponseLength), 99), max(ResponseLength), percentile(todouble(RequestCharge), 50), percentile(todouble(RequestCharge), 99), max(RequestCharge), percentile(todouble(DurationMs), 50), percentile(todouble(DurationMs), 99), max(DurationMs),count() by OperationName, RequestResourceType, UserAgent, CollectionName, bin(TimeGenerated, 1h)
- ```
-
-- How to get ControlPlane logs?
-
- Remember to switch on the flag as described in the [Disable key-based metadata write access](audit-control-plane-logs.md#disable-key-based-metadata-write-access) article, and execute the operations by using Azure PowerShell, the Azure CLI, or Azure Resource Manager.
-
- ```kusto
- CDBControlPlaneRequests
- | summarize by OperationName
- ```
-
-## Next steps
-* For more information on how to create diagnostic settings for Azure Cosmos DB, see the [Creating diagnostic settings](cosmosdb-monitor-resource-logs.md) article.
-
-* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see the [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
cosmos-db Cosmosdb Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cosmosdb-monitor-resource-logs.md
- Title: Monitor Azure Cosmos DB data by using Azure Diagnostic settings
-description: Learn how to use Azure diagnostic settings to monitor the performance and availability of data stored in Azure Cosmos DB
Previously updated : 05/20/2021
-# Monitor Azure Cosmos DB data by using diagnostic settings in Azure
-
-Diagnostic settings in Azure are used to collect resource logs. Azure resource logs are emitted by a resource and provide rich, frequent data about the operation of that resource. These logs are captured per request and are also referred to as "data plane logs". Some examples of data plane operations include delete, insert, and readFeed. The content of these logs varies by resource type.
-
-Platform metrics and the Activity log are collected automatically, whereas you must create a diagnostic setting to collect resource logs or forward them outside of Azure Monitor. You can turn on diagnostic settings for Azure Cosmos DB accounts and send resource logs to the following sources:
-- Log Analytics workspaces
- - Data sent to Log Analytics can be written into **Azure Diagnostics (legacy)** or **Resource-specific (preview)** tables
-- Event hub
-- Storage Account
-
-> [!NOTE]
-> We recommend creating the diagnostic setting in resource-specific mode (for all APIs except Table API) [following our instructions for creating diagnostics setting via REST API](cosmosdb-monitor-resource-logs.md#create-diagnostic-setting). This option provides additional cost-optimizations with an improved view for handling data.
-
-## <a id="create-setting-portal"></a> Create diagnostics settings via the Azure portal
-
-1. Sign into the [Azure portal](https://portal.azure.com).
-
-2. Navigate to your Azure Cosmos DB account. Open the **Diagnostic settings** pane under the **Monitoring** section, and then select the **Add diagnostic setting** option.
-
- :::image type="content" source="./media/monitor-cosmos-db/diagnostics-settings-selection.png" alt-text="Select diagnostics":::
--
-3. In the **Diagnostic settings** pane, fill the form with your preferred categories.
-
-### Choose log categories
-
- |Category |API | Definition | Key Properties |
- |||||
- |DataPlaneRequests | All APIs | Logs back-end requests as data plane operations which are requests executed to create, update, delete or retrieve data within the account. | `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId` `resourceTokenPermissionMode` |
- |MongoRequests | Mongo | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB API for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` |
- |CassandraRequests | Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB API for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
- |GremlinRequests | Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB API for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
- |QueryRuntimeStatistics | SQL | This table details query operations executed against a SQL API account. By default, the query text and its parameters are obfuscated to avoid logging personal data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
- |PartitionKeyStatistics | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: <br/><ul><li> At least 1% of the documents in the physical partition have same logical partition key. </li><li> Out of all the keys in the physical partition, the top 3 keys with largest storage size are captured by the PartitionKeyStatistics log. </li></ul> If the previous conditions are not met, the partition key statistics data is not available. It's okay if the above conditions are not met for your account, which typically indicates you have no logical partition storage skew. <br/><br/>Note: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes are not uniform in the physical partition, the estimated partition key size may not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |
- |PartitionKeyRUConsumption | SQL API | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for SQL API accounts only and for point read/write and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` |
- |ControlPlaneRequests | All APIs | Logs details on control plane operations i.e. creating an account, adding or removing a region, updating account replication settings etc. | `operationName`, `httpstatusCode`, `httpMethod`, `region` |
- |TableApiRequests | Table API | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB API for Table. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
-
-4. Once you select your **Categories details**, send your logs to your preferred destination. If you're sending logs to a **Log Analytics Workspace**, make sure to select **Resource specific** as the Destination table.
-
- :::image type="content" source="./media/monitor-cosmos-db/diagnostics-resource-specific.png" alt-text="Select enable resource-specific":::
-
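After the diagnostic setting is created, each category that you enabled in resource-specific mode lands in its own table. For example, here is a short sketch for the **MongoRequests** category; it assumes the resource-specific table name `CDBMongoRequests` and its `RequestCharge` and `OpCode` columns, which you should verify in your workspace:

```kusto
// Find the most expensive MongoDB operations captured by the MongoRequests category
CDBMongoRequests
| where TimeGenerated >= ago(1h)
| summarize maxRequestCharge = max(todouble(RequestCharge)) by OpCode
| order by maxRequestCharge desc
```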
-## <a id="create-diagnostic-setting"></a> Create diagnostic setting via REST API
-Use the [Azure Monitor REST API](/rest/api/monitor/diagnosticsettings/createorupdate) for creating a diagnostic setting via the interactive console.
-> [!Note]
-> We recommend setting the **logAnalyticsDestinationType** property to **Dedicated** for enabling resource specific tables.
-
-### Request
-
- ```HTTP
- PUT
- https://management.azure.com/{resource-id}/providers/microsoft.insights/diagnosticSettings/service?api-version={api-version}
- ```
-
-### Headers
-
- |Parameters/Headers | Value/Description |
- |||
- |name | The name of your Diagnostic setting. |
- |resourceUri | subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME}/providers/microsoft.insights/diagnosticSettings/{DIAGNOSTIC_SETTING_NAME} |
- |api-version | 2017-05-01-preview |
- |Content-Type | application/json |
-
-### Body
-
-```json
-{
- "id": "/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME}/providers/microsoft.insights/diagnosticSettings/{DIAGNOSTIC_SETTING_NAME}",
- "type": "Microsoft.Insights/diagnosticSettings",
- "name": "name",
- "location": null,
- "kind": null,
- "tags": null,
- "properties": {
- "storageAccountId": null,
- "serviceBusRuleId": null,
- "workspaceId": "/subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/microsoft.operationalinsights/workspaces/{WORKSPACE_NAME}",
- "eventHubAuthorizationRuleId": null,
- "eventHubName": null,
- "logs": [
- {
- "category": "DataPlaneRequests",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "enabled": false,
- "days": 0
- }
- },
- {
- "category": "QueryRuntimeStatistics",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "enabled": false,
- "days": 0
- }
- },
- {
- "category": "PartitionKeyStatistics",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "enabled": false,
- "days": 0
- }
- },
- {
- "category": "PartitionKeyRUConsumption",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "enabled": false,
- "days": 0
- }
- },
- {
- "category": "ControlPlaneRequests",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "enabled": false,
- "days": 0
- }
- }
- ],
- "logAnalyticsDestinationType": "Dedicated"
- },
- "identity": null
-}
-```
-
-## Create diagnostic setting via Azure CLI
-Use the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command to create a diagnostic setting with the Azure CLI. See the documentation for this command for descriptions of its parameters.
-
-> [!Note]
-> If you are using SQL API, we recommend setting the **export-to-resource-specific** property to **true**.
-
- ```azurecli-interactive
 az monitor diagnostic-settings create --resource /subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME} --name {DIAGNOSTIC_SETTING_NAME} --export-to-resource-specific true --logs '[{"category": "QueryRuntimeStatistics","categoryGroup": null,"enabled": true,"retentionPolicy": {"enabled": false,"days": 0}}]' --workspace /subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/microsoft.operationalinsights/workspaces/{WORKSPACE_NAME}
- ```
-## <a id="full-text-query"></a> Enable full-text query for logging query text
-
-> [!Note]
-> Enabling this feature may result in additional logging costs. For pricing details, visit [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). It is recommended to disable this feature after troubleshooting.
-
-Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabling full-text query, you'll be able to view the deobfuscated query for all requests within your Azure Cosmos DB account. You'll also give permission for Azure Cosmos DB to access and surface this data in your logs.
-
-1. To enable this feature, navigate to the `Features` blade in your Cosmos DB account.
-
- :::image type="content" source="./media/monitor-cosmos-db/full-text-query-features.png" alt-text="Navigate to Features blade":::
-
-2. Select `Enable`. This setting is applied within the next few minutes. All newly ingested logs will have the full-text or PIICommand text for each request.
-
- :::image type="content" source="./media/monitor-cosmos-db/select-enable-full-text.png" alt-text="Select enable full-text":::
-
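Once full-text query is enabled, newly ingested logs carry the deobfuscated query text, so a query like the following sketch surfaces the most expensive query statements (it assumes logs are flowing to resource-specific tables):

```kusto
// Show the deobfuscated text of the most expensive recent queries
CDBQueryRuntimeStatistics
| join kind= inner (
    CDBDataPlaneRequests
    | project ActivityId, RequestCharge
) on $left.ActivityId == $right.ActivityId
| extend Charge = todouble(RequestCharge)
| project TimeGenerated, QueryText, Charge
| order by Charge desc
| take 20
```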
-To learn how to query using this newly enabled feature, visit [advanced queries](cosmos-db-advanced-queries.md).
-
-## Next steps
-* For more information on how to query resource-specific tables, see [troubleshooting using resource-specific tables](cosmosdb-monitor-logs-basic-queries.md#resource-specific-queries).
-
-* For more information on how to query AzureDiagnostics tables, see [troubleshooting using AzureDiagnostics tables](cosmosdb-monitor-logs-basic-queries.md#azure-diagnostics-queries).
-
-* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see the [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
cosmos-db Create Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/create-alerts.md
description: Learn how to set up alerts for Azure Cosmos DB using Azure Monitor.
+ Last updated 02/08/2022

# Create alerts for Azure Cosmos DB using Azure Monitor

Alerts are used to set up recurring tests to monitor the availability and responsiveness of your Azure Cosmos DB resources. Alerts can send you a notification in the form of an email, or execute an Azure Function when one of your metrics reaches the threshold or if a specific event is logged in the activity log.
-You can receive an alert based on the metrics, activity log events, or Log Analytics logs on your Azure Cosmos account:
+You can receive an alert based on the metrics, activity log events, or Log Analytics logs on your Azure Cosmos DB account:
-* **Metrics** - The alert triggers when the value of a specified metric crosses a threshold you assign. For example, when the total request units consumed exceed 1000 RU/s. This alert is triggered both when the condition is first met and then afterwards when that condition is no longer being met. See the [monitoring data reference](monitor-cosmos-db-reference.md#metrics) article for different metrics available in Azure Cosmos DB.
+* **Metrics** - The alert triggers when the value of a specified metric crosses a threshold you assign. For example, when the total request units consumed exceed 1000 RU/s. This alert is triggered both when the condition is first met and then afterwards when that condition is no longer being met. See the [monitoring data reference](monitor-reference.md#metrics) article for different metrics available in Azure Cosmos DB.
-* **Activity log events** – This alert triggers when a certain event occurs. For example, when the keys of your Azure Cosmos account are accessed or refreshed.
+* **Activity log events** – This alert triggers when a certain event occurs. For example, when the keys of your Azure Cosmos DB account are accessed or refreshed.
* **Log Analytics** – This alert triggers when the value of a specified property in the results of a Log Analytics query crosses a threshold you assign. For example, you can write a Log Analytics query to [monitor if the storage for a logical partition key is reaching the 20 GB logical partition key storage limit](how-to-alert-on-logical-partition-key-storage-size.md) in Azure Cosmos DB.
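For the logical partition scenario, the alert condition is typically a Log Analytics query whose result count crosses a threshold you choose. A minimal sketch of such a query follows; the 18 GB value is an illustrative early-warning threshold, and it assumes the `PartitionKeyStatistics` category is routed to the workspace in resource-specific mode:

```kusto
// Logical partition keys approaching the 20 GB limit (early-warning threshold: 18 GB)
CDBPartitionKeyStatistics
| where todouble(SizeKb) > 18.0 * 1024.0 * 1024.0
| project TimeGenerated, DatabaseName, CollectionName, PartitionKey, SizeKb
```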
This section shows how to create an alert when you receive an HTTP status code 4
* Select **Azure Cosmos DB accounts** for the **resource type**.
- * The **location** of your Azure Cosmos account.
+ * The **location** of your Azure Cosmos DB account.
- * After filling in the details, a list of Azure Cosmos accounts in the selected scope is displayed. Choose the one for which you want to configure alerts and select **Done**.
+ * After filling in the details, a list of Azure Cosmos DB accounts in the selected scope is displayed. Choose the one for which you want to configure alerts and select **Done**.
1. Fill out the **Condition** section:
This section shows how to create an alert when you receive an HTTP status code 4
* Choose a **Signal name**. To get an alert for HTTP status codes, choose the **Total Request Units** signal.
- * Now, you can define the logic for triggering an alert and use the chart to view trends of your Azure Cosmos account. The **Total Request Units** metric supports dimensions. These dimensions allow you to filter on the metric. For example, you can use dimensions to filter to a specific database or container you want to monitor. If you don't select any dimension, this value is ignored.
+ * Now, you can define the logic for triggering an alert and use the chart to view trends of your Azure Cosmos DB account. The **Total Request Units** metric supports dimensions. These dimensions allow you to filter on the metric. For example, you can use dimensions to filter to a specific database or container you want to monitor. If you don't select any dimension, this value is ignored.
* Choose **StatusCode** as the **Dimension name**. Select **Add custom value** and set the status code to 429.
After creating the alert, it will be active within 10 minutes.
The following are some scenarios where you can use alerts:
-* When the keys of an Azure Cosmos account are updated.
+* When the keys of an Azure Cosmos DB account are updated.
* When the data or index usage of a container, database, or a region exceeds a certain number of bytes.
* [When the storage for a logical partition key is reaching the Azure Cosmos DB 20 GB logical partition storage limit.](how-to-alert-on-logical-partition-key-storage-size.md)
* When the normalized RU/s consumption is greater than a certain percentage. The normalized RU consumption metric gives the maximum throughput utilization within a replica set. To learn more, see the [How to monitor normalized RU/s](monitor-normalized-request-units.md) article.
The following are some scenarios where you can use alerts:
## Next steps
-* How to [monitor normalized RU/s metric](monitor-normalized-request-units.md) in Azure Cosmos container.
-* How to [monitor throughput or request unit usage](monitor-request-unit-usage.md) of an operation in Azure Cosmos DB.
+* How to [monitor normalized RU/s metric](monitor-normalized-request-units.md) in Azure Cosmos DB container.
+* How to [monitor throughput or request unit usage](monitor-request-unit-usage.md) of an operation in Azure Cosmos DB.
cosmos-db Custom Partitioning Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/custom-partitioning-analytical-store.md
Last updated 11/02/2021

# Custom partitioning in Azure Synapse Link for Azure Cosmos DB (Preview)

Custom partitioning enables you to partition analytical store data on fields that are commonly used as filters in analytical queries, resulting in improved query performance.
You could use one or more partition keys for your analytical data. If you are us
* Currently partitioned store can only point to the primary storage account associated with the Synapse workspace. Selecting custom storage accounts is not supported at this point.
-* Custom partitioning is only available for SQL API in Cosmos DB. API for Mongo DB, Gremlin and Cassandra are not supported at this time.
+* Custom partitioning is only available for API for NoSQL in Azure Cosmos DB. API for MongoDB, Gremlin and Cassandra are not supported at this time.
## Pricing
cosmos-db Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/data-explorer.md
Title: Use Azure Cosmos DB Explorer to manage your data
description: Azure Cosmos DB Explorer is a standalone web-based interface that allows you to view and manage the data stored in Azure Cosmos DB.
+ Last updated 09/23/2020

# Work with data using Azure Cosmos DB Explorer

Azure Cosmos DB Explorer is a standalone web-based interface that allows you to view and manage the data stored in Azure Cosmos DB. Azure Cosmos DB Explorer is equivalent to the existing **Data Explorer** tab that is available in the Azure portal when you create an Azure Cosmos DB account. The key advantages of Azure Cosmos DB Explorer over the existing Data Explorer are:
Azure Cosmos DB Explorer is a standalone web-based interface that allows you to
## Known issues
-Currently the **Open Full Screen** experience that allows you to share temporary read-write or read access is not yet supported for Azure Cosmos DB Gremlin and Table API accounts. You can still view your Gremlin and Table API accounts by passing the connection string to Azure Cosmos DB Explorer.
+Currently the **Open Full Screen** experience that allows you to share temporary read-write or read access is not yet supported for Azure Cosmos DB API for Gremlin and Table accounts. You can still view your Gremlin and API for Table accounts by passing the connection string to Azure Cosmos DB Explorer.
Currently, viewing documents that contain a UUID is not supported in Data Explorer. This does not affect loading collections, only viewing individual documents or queries that include these documents. To view and manage these documents, users should continue to use the tool that was originally used to create these documents.
Customers receiving HTTP-401 errors may be due to insufficient Azure RBAC permis
Now that you have learned how to get started with Azure Cosmos DB Explorer to manage your data, next you can:
-* Start defining [queries](./sql-query-getting-started.md) using SQL syntax and perform [server side programming](stored-procedures-triggers-udfs.md) by using stored procedures, UDFs, triggers.
+* Start defining [queries](nosql/query/getting-started.md) using SQL syntax and perform [server side programming](stored-procedures-triggers-udfs.md) by using stored procedures, UDFs, triggers.
cosmos-db Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/data-residency.md
Title: How to meet data residency requirements in Azure Cosmos DB
description: Learn how to meet data residency requirements in Azure Cosmos DB for your data and backups to remain in a single region.
+ Last updated 04/05/2021

# How to meet data residency requirements in Azure Cosmos DB

In Azure Cosmos DB, you can configure your data and backups to remain in a single region to meet the [residency requirements](https://azure.microsoft.com/global-infrastructure/data-residency/).
Azure Policy is a service that you can use to create, assign, and manage policie
* Provision continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
* Restore continuous backup account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template).
* [Migrate to an account from periodic backup to continuous backup](migrate-continuous-backup.md).
cosmos-db Database Encryption At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/database-encryption-at-rest.md
Last updated 10/26/2021

# Data encryption in Azure Cosmos DB
-Encryption at rest is a phrase that commonly refers to the encryption of data on nonvolatile storage devices, such as solid state drives (SSDs) and hard disk drives (HDDs). Cosmos DB stores its primary databases on SSDs. Its media attachments and backups are stored in Azure Blob storage, which is generally backed up by HDDs. With the release of encryption at rest for Cosmos DB, all your databases, media attachments, and backups are encrypted. Your data is now encrypted in transit (over the network) and at rest (nonvolatile storage), giving you end-to-end encryption.
+Encryption at rest is a phrase that commonly refers to the encryption of data on nonvolatile storage devices, such as solid state drives (SSDs) and hard disk drives (HDDs). Azure Cosmos DB stores its primary databases on SSDs. Its media attachments and backups are stored in Azure Blob storage, which is generally backed up by HDDs. With the release of encryption at rest for Azure Cosmos DB, all your databases, media attachments, and backups are encrypted. Your data is now encrypted in transit (over the network) and at rest (nonvolatile storage), giving you end-to-end encryption.
-As a PaaS service, Azure Cosmos DB is very easy to use. Because all user data stored in Azure Cosmos DB is encrypted at rest and in transport, you don't have to take any action. Another way to put this is that encryption at rest is "on" by default. There are no controls to turn it off or on. Azure Cosmos DB uses AES-256 encryption on all regions where the account is running. We provide this feature while we continue to meet our [availability and performance SLAs](https://azure.microsoft.com/support/legal/sl) article.
+As a PaaS service, Azure Cosmos DB is very easy to use. Because all user data stored in Azure Cosmos DB is encrypted at rest and in transport, you don't have to take any action. Another way to put this is that encryption at rest is "on" by default. There are no controls to turn it off or on. Azure Cosmos DB uses AES-256 encryption on all regions where the account is running. We provide this feature while we continue to meet our [availability and performance SLAs](https://azure.microsoft.com/support/legal/sl) article.
## Implementation of encryption at rest for Azure Cosmos DB
Encryption at rest is implemented by using a number of security technologies, in
The basic flow of a user request is as follows: - The user database account is made ready, and storage keys are retrieved via a request to the Management Service Resource Provider.-- A user creates a connection to Cosmos DB via HTTPS/secure transport. (The SDKs abstract the details.)
+- A user creates a connection to Azure Cosmos DB via HTTPS/secure transport. (The SDKs abstract the details.)
- The user sends a JSON document to be stored over the previously created secure connection. - The JSON document is indexed unless the user has turned off indexing. - Both the JSON document and index data are written to secure storage.
The basic flow of a user request is as follows:
A: There is no additional cost. ### Q: Who manages the encryption keys?
-A: Data stored in your Azure Cosmos account is automatically and seamlessly encrypted with keys managed by Microsoft using service-managed keys. Optionally, you can choose to add a second layer of encryption with keys you manage using [customer-managed keys or CMK](how-to-setup-cmk.md).
+A: Data stored in your Azure Cosmos DB account is automatically and seamlessly encrypted with keys managed by Microsoft using service-managed keys. Optionally, you can choose to add a second layer of encryption with keys you manage using [customer-managed keys or CMK](how-to-setup-cmk.md).
### Q: How often are encryption keys rotated?
-A: Microsoft has a set of internal guidelines for encryption key rotation, which Cosmos DB follows. The specific guidelines are not published. Microsoft does publish the [Security Development Lifecycle (SDL)](https://www.microsoft.com/sdl/default.aspx), which is seen as a subset of internal guidance and has useful best practices for developers.
+A: Microsoft has a set of internal guidelines for encryption key rotation, which Azure Cosmos DB follows. The specific guidelines are not published. Microsoft does publish the [Security Development Lifecycle (SDL)](https://www.microsoft.com/sdl/default.aspx), which is seen as a subset of internal guidance and has useful best practices for developers.
### Q: Can I use my own encryption keys? A: Yes, this feature is now available for new Azure Cosmos DB accounts and this should be done at the time of account creation. Please go through [Customer-managed Keys](./how-to-setup-cmk.md) document for more information.
A: Yes, this feature is now available for new Azure Cosmos DB accounts and this
A: All Azure Cosmos DB regions have encryption turned on for all user data. ### Q: Does encryption affect the performance latency and throughput SLAs?
-A: There is no impact or changes to the performance SLAs now that encryption at rest is enabled for all existing and new accounts. You can read more on the [SLA for Cosmos DB](https://azure.microsoft.com/support/legal/sla/cosmos-db) page to see the latest guarantees.
+A: There is no impact on, and no change to, the performance SLAs now that encryption at rest is enabled for all existing and new accounts. You can read more on the [SLA for Azure Cosmos DB](https://azure.microsoft.com/support/legal/sla/cosmos-db) page to see the latest guarantees.
### Q: Does the local emulator support encryption at rest?
-A: The emulator is a standalone dev/test tool and does not use the key management services that the managed Cosmos DB service uses. Our recommendation is to enable BitLocker on drives where you are storing sensitive emulator test data. The [emulator supports changing the default data directory](local-emulator.md) as well as using a well-known location.
+A: The emulator is a standalone dev/test tool and does not use the key management services that the managed Azure Cosmos DB service uses. Our recommendation is to enable BitLocker on drives where you are storing sensitive emulator test data. The [emulator supports changing the default data directory](local-emulator.md) as well as using a well-known location.
## Next steps * You can choose to add a second layer of encryption with your own keys. To learn more, see the [customer-managed keys](how-to-setup-cmk.md) article.
-* For an overview of Cosmos DB security and the latest improvements, see [Azure Cosmos database security](database-security.md).
-* For more information about Microsoft certifications, see the [Azure Trust Center](https://azure.microsoft.com/support/trust-center/).
+* For an overview of Azure Cosmos DB security and the latest improvements, see [Azure Cosmos DB database security](database-security.md).
+* For more information about Microsoft certifications, see the [Azure Trust Center](https://azure.microsoft.com/support/trust-center/).
cosmos-db Database Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/database-security.md
description: Learn how Azure Cosmos DB provides database protection and data sec
+ Last updated 07/18/2022 # Security in Azure Cosmos DB - overview This article discusses database security best practices and key features offered by Azure Cosmos DB to help you prevent, detect, and respond to database breaches. ## What's new in Azure Cosmos DB security
-Encryption at rest is now available for documents and backups stored in Azure Cosmos DB in all Azure regions. Encryption at rest is applied automatically for both new and existing customers in these regions. There is no need to configure anything; and you get the same great latency, throughput, availability, and functionality as before with the benefit of knowing your data is safe and secure with encryption at rest. Data stored in your Azure Cosmos account is automatically and seamlessly encrypted with keys managed by Microsoft using service-managed keys. Optionally, you can choose to add a second layer of encryption with keys you manage using [customer-managed keys or CMK](how-to-setup-cmk.md).
+Encryption at rest is now available for documents and backups stored in Azure Cosmos DB in all Azure regions. Encryption at rest is applied automatically for both new and existing customers in these regions. There is no need to configure anything, and you get the same great latency, throughput, availability, and functionality as before with the benefit of knowing your data is safe and secure with encryption at rest. Data stored in your Azure Cosmos DB account is automatically and seamlessly encrypted with keys managed by Microsoft using service-managed keys. Optionally, you can choose to add a second layer of encryption with keys you manage using [customer-managed keys or CMK](how-to-setup-cmk.md).
## How do I secure my database
Let's dig into each one in detail.
|Security requirement|Azure Cosmos DB's security approach| |||
-|Network security|Using an IP firewall is the first layer of protection to secure your database. Azure Cosmos DB supports policy driven IP-based access controls for inbound firewall support. The IP-based access controls are similar to the firewall rules used by traditional database systems, but they are expanded so that an Azure Cosmos database account is only accessible from an approved set of machines or cloud services. Learn more in [Azure Cosmos DB firewall support](how-to-configure-firewall.md) article.<br><br>Azure Cosmos DB enables you to enable a specific IP address (168.61.48.0), an IP range (168.61.48.0/8), and combinations of IPs and ranges. <br><br>All requests originating from machines outside this allowed list are blocked by Azure Cosmos DB. Requests from approved machines and cloud services then must complete the authentication process to be given access control to the resources.<br><br> You can use [virtual network service tags](../virtual-network/service-tags-overview.md) to achieve network isolation and protect your Azure Cosmos DB resources from the general Internet. Use service tags in place of specific IP addresses when you create security rules. By specifying the service tag name (for example, AzureCosmosDB) in the appropriate source or destination field of a rule, you can allow or deny the traffic for the corresponding service.|
+|Network security|Using an IP firewall is the first layer of protection to secure your database. Azure Cosmos DB supports policy-driven IP-based access controls for inbound firewall support. The IP-based access controls are similar to the firewall rules used by traditional database systems, but they are expanded so that an Azure Cosmos DB database account is only accessible from an approved set of machines or cloud services. Learn more in the [Azure Cosmos DB firewall support](how-to-configure-firewall.md) article.<br><br>Azure Cosmos DB lets you allow a specific IP address (168.61.48.0), an IP range (168.61.48.0/8), and combinations of IPs and ranges. <br><br>All requests originating from machines outside this allowed list are blocked by Azure Cosmos DB. Requests from approved machines and cloud services then must complete the authentication process to be given access control to the resources.<br><br> You can use [virtual network service tags](../virtual-network/service-tags-overview.md) to achieve network isolation and protect your Azure Cosmos DB resources from the general Internet. Use service tags in place of specific IP addresses when you create security rules. By specifying the service tag name (for example, AzureCosmosDB) in the appropriate source or destination field of a rule, you can allow or deny the traffic for the corresponding service.|
|Authorization|Azure Cosmos DB uses hash-based message authentication code (HMAC) for authorization. <br><br>Each request is hashed using the secret account key, and the subsequent base-64 encoded hash is sent with each call to Azure Cosmos DB. To validate the request, the Azure Cosmos DB service uses the correct secret key and properties to generate a hash, then it compares the value with the one in the request. If the two values match, the operation is authorized successfully and the request is processed, otherwise there is an authorization failure and the request is rejected.<br><br>You can use either a [primary key](#primary-keys), or a [resource token](secure-access-to-data.md#resource-tokens) allowing fine-grained access to a resource such as a document.<br><br>Learn more in [Securing access to Azure Cosmos DB resources](secure-access-to-data.md).| |Users and permissions|Using the primary key for the account, you can create user resources and permission resources per database. A resource token is associated with a permission in a database and determines whether the user has access (read-write, read-only, or no access) to an application resource in the database. Application resources include container, documents, attachments, stored procedures, triggers, and UDFs. The resource token is then used during authentication to provide or deny access to the resource.<br><br>Learn more in [Securing access to Azure Cosmos DB resources](secure-access-to-data.md).|
-|Active directory integration (Azure RBAC)| You can also provide or restrict access to the Cosmos account, database, container, and offers (throughput) using Access control (IAM) in the Azure portal. IAM provides role-based access control and integrates with Active Directory. You can use built in roles or custom roles for individuals and groups. See [Active Directory integration](role-based-access-control.md) article for more information.|
+|Active directory integration (Azure RBAC)| You can also provide or restrict access to the Azure Cosmos DB account, database, container, and offers (throughput) using Access control (IAM) in the Azure portal. IAM provides role-based access control and integrates with Active Directory. You can use built-in roles or custom roles for individuals and groups. See the [Active Directory integration](role-based-access-control.md) article for more information.|
|Global replication|Azure Cosmos DB offers turnkey global distribution, which enables you to replicate your data to any one of Azure's world-wide datacenters with the click of a button. Global replication lets you scale globally and provide low-latency access to your data around the world.<br><br>In the context of security, global replication ensures data protection against regional failures.<br><br>Learn more in [Distribute data globally](distribute-data-globally.md).| |Regional failovers|If you have replicated your data in more than one data center, Azure Cosmos DB automatically rolls over your operations should a regional data center go offline. You can create a prioritized list of failover regions using the regions in which your data is replicated. <br><br>Learn more in [Regional Failovers in Azure Cosmos DB](high-availability.md).| |Local replication|Even within a single data center, Azure Cosmos DB automatically replicates data for high availability giving you the choice of [consistency levels](consistency-levels.md). This replication guarantees a 99.99% [availability SLA](https://azure.microsoft.com/support/legal/sla/cosmos-db) for all single region accounts and all multi-region accounts with relaxed consistency, and 99.999% read availability on all multi-region database accounts.|
-|Automated online backups|Azure Cosmos databases are backed up regularly and stored in a geo redundant store. <br><br>Learn more in [Automatic online backup and restore with Azure Cosmos DB](online-backup-and-restore.md).|
+|Automated online backups|Azure Cosmos DB databases are backed up regularly and stored in a geo-redundant store. <br><br>Learn more in [Automatic online backup and restore with Azure Cosmos DB](online-backup-and-restore.md).|
|Restore deleted data|The automated online backups can be used to recover data you may have accidentally deleted up to ~30 days after the event. <br><br>Learn more in [Automatic online backup and restore with Azure Cosmos DB](online-backup-and-restore.md).| |Protect and isolate sensitive data|All data in the regions listed in What's new? is now encrypted at rest.<br><br>Personal data and other confidential data can be isolated to specific containers, and read-write or read-only access can be limited to specific users.|
-|Monitor for attacks|By using [audit logging and activity logs](./monitor-cosmos-db.md), you can monitor your account for normal and abnormal activity. You can view what operations were performed on your resources, who initiated the operation, when the operation occurred, the status of the operation, and much more as shown in the screenshot following this table.|
+|Monitor for attacks|By using [audit logging and activity logs](./monitor.md), you can monitor your account for normal and abnormal activity. You can view what operations were performed on your resources, who initiated the operation, when the operation occurred, the status of the operation, and much more as shown in the screenshot following this table.|
|Respond to attacks|Once you have contacted Azure support to report a potential attack, a 5-step incident response process is kicked off. The goal of the 5-step process is to restore normal service security and operations as quickly as possible after an issue is detected and an investigation is started.<br><br>Learn more in [Microsoft Azure Security Response in the Cloud](https://azure.microsoft.com/resources/shared-responsibilities-for-cloud-computing/).| |Geo-fencing|Azure Cosmos DB ensures data governance for sovereign regions (for example, Germany, China, US Gov).| |Protected facilities|Data in Azure Cosmos DB is stored on SSDs in Azure's protected data centers.<br><br>Learn more in [Microsoft global datacenters](https://www.microsoft.com/en-us/cloud-platform/global-datacenters)|
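To make the Authorization row above more concrete, the following Python sketch builds a master-key token in the format documented for the Azure Cosmos DB REST API. The SDKs generate this signature for you; the resource values here are illustrative, and you should verify the details against the current REST reference before using it directly.

```python
import base64
import hashlib
import hmac
import urllib.parse
from datetime import datetime, timezone


def master_key_auth_token(verb: str, resource_type: str, resource_link: str,
                          date_rfc1123: str, master_key_b64: str) -> str:
    """Build the 'type=master&ver=1.0&sig=...' value for the Authorization header."""
    key = base64.b64decode(master_key_b64)
    # The payload combines the lowercased verb, resource type, and date with the resource link.
    payload = (f"{verb.lower()}\n{resource_type.lower()}\n{resource_link}\n"
               f"{date_rfc1123.lower()}\n\n")
    signature = base64.b64encode(
        hmac.new(key, payload.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    return urllib.parse.quote(f"type=master&ver=1.0&sig={signature}", safe="")


# Illustrative usage for a GET on a container; the same date string must also
# be sent in the x-ms-date request header.
utc_now = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
token = master_key_auth_token("GET", "colls", "dbs/demo-db/colls/demo-container",
                              utc_now, "<base64-account-key>")
```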
Primary/secondary keys come in two versions: read-write and read-only. The read-
The process of key rotation and regeneration is simple. First, make sure that **your application is consistently using either the primary key or the secondary key** to access your Azure Cosmos DB account. Then, follow the steps outlined below. To monitor your account for key updates and key regeneration, see [monitor key updates with metrics and alerts](monitor-account-key-updates.md) article.
-# [SQL API](#tab/sql-api)
+# [API for NoSQL](#tab/sql-api)
#### If your application is currently using the primary key
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
-1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
1. Replace your primary key with the secondary key in your application.
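The validation step can be scripted before you change application configuration. A minimal sketch, assuming the azure-cosmos Python SDK and placeholder endpoint/key values:

```python
from azure.cosmos import CosmosClient
from azure.cosmos.exceptions import CosmosHttpResponseError

ENDPOINT = "https://my-account.documents.azure.com:443/"
NEW_SECONDARY_KEY = "<regenerated-secondary-key>"

# A lightweight metadata read confirms the regenerated key is live
# before the application switches over to it.
try:
    client = CosmosClient(ENDPOINT, credential=NEW_SECONDARY_KEY)
    databases = [db["id"] for db in client.list_databases()]
    print(f"Secondary key validated; visible databases: {databases}")
except CosmosHttpResponseError as err:
    print(f"Key not usable yet (regeneration may still be in progress): {err.status_code}")
```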
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
-1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
1. Replace your secondary key with the primary key in your application.
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
-# [Azure Cosmos DB API for MongoDB](#tab/mongo-api)
+# [Azure Cosmos DB for MongoDB](#tab/mongo-api)
#### If your application is currently using the primary key
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-secondary-key-mongo.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
-1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
1. Replace your primary key with the secondary key in your application.
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-primary-key-mongo.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
-1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
1. Replace your secondary key with the primary key in your application.
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-secondary-key-mongo.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
-# [Cassandra API](#tab/cassandra-api)
+# [API for Cassandra](#tab/cassandra-api)
#### If your application is currently using the primary key
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-secondary-key-cassandra.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
-1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
1. Replace your primary key with the secondary key in your application.
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-primary-key-cassandra.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
-1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
1. Replace your secondary key with the primary key in your application.
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-secondary-key-cassandra.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
-# [Gremlin API](#tab/gremlin-api)
+# [API for Gremlin](#tab/gremlin-api)
#### If your application is currently using the primary key
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-secondary-key-gremlin.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
-1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
1. Replace your primary key with the secondary key in your application.
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-primary-key-gremlin.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
-1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
1. Replace your secondary key with the primary key in your application.
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-secondary-key-gremlin.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
-# [Table API](#tab/table-api)
+# [API for Table](#tab/table-api)
#### If your application is currently using the primary key
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-secondary-key-table.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
-1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
1. Replace your primary key with the secondary key in your application.
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-primary-key-table.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
-1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
1. Replace your secondary key with the primary key in your application.
After you rotate or regenerate a key, you can track its status from the Activit
For more information about primary keys and resource tokens, see [Securing access to Azure Cosmos DB data](secure-access-to-data.md).
-For more information about audit logging, see [Azure Cosmos DB diagnostic logging](./monitor-cosmos-db.md).
+For more information about audit logging, see [Azure Cosmos DB diagnostic logging](./monitor.md).
-For more information about Microsoft certifications, see [Azure Trust Center](https://azure.microsoft.com/support/trust-center/).
+For more information about Microsoft certifications, see [Azure Trust Center](https://azure.microsoft.com/support/trust-center/).
cosmos-db Dedicated Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/dedicated-gateway.md
Title: Azure Cosmos DB dedicated gateway
description: A dedicated gateway is compute that is a front-end to your Azure Cosmos DB account. When you connect to the dedicated gateway, it routes requests and caches data. -++ Last updated 08/29/2022
# Azure Cosmos DB dedicated gateway - Overview A dedicated gateway is server-side compute that is a front-end to your Azure Cosmos DB account. When you connect to the dedicated gateway, it both routes requests and caches data. Like provisioned throughput, the dedicated gateway is billed hourly.
cosmoscachefeedback@microsoft.com
## Connection modes
-There are two [connectivity modes](./sql/sql-sdk-connection-modes.md) for Azure Cosmos DB, Direct mode and Gateway mode. With Gateway mode you can connect to either the standard gateway or the dedicated gateway depending on the endpoint you configure.
+There are two [connectivity modes](./nosql/sdk-connection-modes.md) for Azure Cosmos DB, Direct mode and Gateway mode. With Gateway mode you can connect to either the standard gateway or the dedicated gateway depending on the endpoint you configure.
### Connect to Azure Cosmos DB using direct mode
If you connect to Azure Cosmos DB using gateway mode, your application will conn
When connecting to Azure Cosmos DB with gateway mode, you can connect with either of the following options: * **Standard gateway** - While the backend, which includes your provisioned throughput and storage, has dedicated capacity per container, the standard gateway is shared between many Azure Cosmos DB accounts. It is practical for many customers to share a standard gateway since the compute resources consumed by each individual customer are small.
-* **Dedicated gateway** - In this gateway, the backend and gateway both have dedicated capacity. The integrated cache requires a dedicated gateway because it requires significant CPU and memory that is specific to your Azure Cosmos account.
+* **Dedicated gateway** - In this gateway, the backend and gateway both have dedicated capacity. The integrated cache requires a dedicated gateway because it requires significant CPU and memory that is specific to your Azure Cosmos DB account.
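As a rough sketch, connecting through the dedicated gateway only requires pointing the client at the dedicated gateway endpoint instead of the standard one (the host name below is illustrative; copy the real endpoint from your account, as described in the next paragraph). This assumes the azure-cosmos Python SDK.

```python
from azure.cosmos import CosmosClient

# Illustrative dedicated gateway endpoint - the actual value is shown in your
# account's dedicated gateway settings and differs from the standard endpoint.
DEDICATED_GATEWAY_ENDPOINT = "https://my-account.sqlx.cosmos.azure.com/"
KEY = "<account-key>"

# Requests issued by this client are routed through the dedicated gateway,
# which can serve results from the integrated cache when possible.
client = CosmosClient(DEDICATED_GATEWAY_ENDPOINT, credential=KEY)
container = client.get_database_client("demo-db").get_container_client("demo-container")
item = container.read_item(item="item-1", partition_key="item-1")
```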
You must connect to Azure Cosmos DB using the dedicated gateway in order to use the integrated cache. The dedicated gateway has a different endpoint from the standard one provided with your Azure Cosmos DB account, but requests are routed in the same way. When you connect to your dedicated gateway endpoint, your application sends a request to the dedicated gateway, which then routes the request to different backend nodes. If possible, the integrated cache will serve the result. Diagram of gateway mode connection with a dedicated gateway: ## Provisioning the dedicated gateway
-A dedicated gateway cluster can be provisioned in Core (SQL) API accounts. A dedicated gateway cluster can have up to five nodes by default and you can add or remove nodes at any time. All dedicated gateway nodes within your account [share the same connection string](how-to-configure-integrated-cache.md#configuring-the-integrated-cache).
+A dedicated gateway cluster can be provisioned in API for NoSQL accounts. A dedicated gateway cluster can have up to five nodes by default and you can add or remove nodes at any time. All dedicated gateway nodes within your account [share the same connection string](how-to-configure-integrated-cache.md#configuring-the-integrated-cache).
Dedicated gateway nodes are independent from one another. When you provision multiple dedicated gateway nodes, any single node can route any given request. In addition, each node has a separate integrated cache from the others. The cached data within each node depends on the data that was recently [written or read](integrated-cache.md#item-cache) through that specific node. If an item or query is cached on one node, it isn't necessarily cached on the others.
Like nodes within a cluster, dedicated gateway nodes across regions are independ
The dedicated gateway has the following limitations: -- Dedicated gateways are only supported on SQL API accounts
+- Dedicated gateways are only supported on API for NoSQL accounts
- You can't provision a dedicated gateway in Azure Cosmos DB accounts with [availability zones](../availability-zones/az-region.md). - You can't use [role-based access control (RBAC)](how-to-setup-rbac.md) to authenticate data plane requests routed through the dedicated gateway
Read more about dedicated gateway usage in the following articles:
- [Integrated cache FAQ](integrated-cache-faq.md) - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Distribute Data Globally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/distribute-data-globally.md
Last updated 01/06/2021-+ adobe-target: true # Distribute your data globally with Azure Cosmos DB Today's applications are required to be highly responsive and always online. To achieve low latency and high availability, instances of these applications need to be deployed in datacenters that are close to their users. These applications are typically deployed in multiple datacenters and are called globally distributed. Globally distributed applications need a globally distributed database that can transparently replicate the data anywhere in the world to enable the applications to operate on a copy of the data that's close to its users.
-Azure Cosmos DB is a globally distributed database system that allows you to read and write data from the local replicas of your database. Azure Cosmos DB transparently replicates the data to all the regions associated with your Cosmos account. Azure Cosmos DB is a globally distributed database service that's designed to provide low latency, elastic scalability of throughput, well-defined semantics for data consistency, and high availability. In short, if your application needs fast response time anywhere in the world, if it's required to be always online, and needs unlimited and elastic scalability of throughput and storage, you should build your application on Azure Cosmos DB.
+Azure Cosmos DB is a globally distributed database system that allows you to read and write data from the local replicas of your database. Azure Cosmos DB transparently replicates the data to all the regions associated with your Azure Cosmos DB account. Azure Cosmos DB is a globally distributed database service that's designed to provide low latency, elastic scalability of throughput, well-defined semantics for data consistency, and high availability. In short, if your application needs fast response time anywhere in the world, if it's required to be always online, and needs unlimited and elastic scalability of throughput and storage, you should build your application on Azure Cosmos DB.
-You can configure your databases to be globally distributed and available in [any of the Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=cosmos-db&regions=all). To lower the latency, place the data close to where your users are. Choosing the required regions depends on the global reach of your application and where your users are located. Cosmos DB transparently replicates the data to all the regions associated with your Cosmos account. It provides a single system image of your globally distributed Azure Cosmos database and containers that your application can read and write to locally.
+You can configure your databases to be globally distributed and available in [any of the Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=cosmos-db&regions=all). To lower the latency, place the data close to where your users are. Choosing the required regions depends on the global reach of your application and where your users are located. Azure Cosmos DB transparently replicates the data to all the regions associated with your Azure Cosmos DB account. It provides a single system image of your globally distributed Azure Cosmos DB database and containers that your application can read and write to locally.
-With Azure Cosmos DB, you can add or remove the regions associated with your account at any time. Your application doesn't need to be paused or redeployed to add or remove a region. Cosmos DB is available in all five distinct Azure cloud environments available to customers:
+With Azure Cosmos DB, you can add or remove the regions associated with your account at any time. Your application doesn't need to be paused or redeployed to add or remove a region. Azure Cosmos DB is available in all five distinct Azure cloud environments available to customers:
* **Azure public** cloud, which is available globally.
With Azure Cosmos DB, you can add or remove the regions associated with your acc
- 99.999% read and write availability all around the world. - Guaranteed reads and writes served in less than 10 milliseconds at the 99th percentile.
-As you add and remove regions to and from your Azure Cosmos account, your application does not need to be redeployed or paused, it continues to be highly available at all times.
+As you add and remove regions to and from your Azure Cosmos DB account, your application does not need to be redeployed or paused, it continues to be highly available at all times.
**Build highly responsive apps.** Your application can perform near real-time reads and writes against all the regions you chose for your database. Azure Cosmos DB internally handles the data replication between regions with consistency level guarantees of the level you've selected. **Build highly available apps.** Running a database in multiple regions worldwide increases the availability of a database. If one region is unavailable, other regions automatically handle application requests. Azure Cosmos DB offers 99.999% read and write availability for multi-region databases.
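For example, an application instance can list the regions closest to it as preferred read locations so the SDK talks to the nearest replica. A minimal sketch, assuming the azure-cosmos Python SDK and example region names:

```python
from azure.cosmos import CosmosClient

ENDPOINT = "https://my-account.documents.azure.com:443/"
KEY = "<account-key>"

# Reads are served from the first available region in this ordered list,
# so an instance deployed in West Europe lists that region first.
client = CosmosClient(
    ENDPOINT,
    credential=KEY,
    preferred_locations=["West Europe", "North Europe"],
)
```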
-**Maintain business continuity during regional outages.** Azure Cosmos DB supports [service-managed failover](how-to-manage-database-account.md#automatic-failover) during a regional outage. During a regional outage, Azure Cosmos DB continues to maintain its latency, availability, consistency, and throughput SLAs. To help make sure that your entire application is highly available, Cosmos DB offers a manual failover API to simulate a regional outage. By using this API, you can carry out regular business continuity drills.
+**Maintain business continuity during regional outages.** Azure Cosmos DB supports [service-managed failover](how-to-manage-database-account.md#automatic-failover) during a regional outage. During a regional outage, Azure Cosmos DB continues to maintain its latency, availability, consistency, and throughput SLAs. To help make sure that your entire application is highly available, Azure Cosmos DB offers a manual failover API to simulate a regional outage. By using this API, you can carry out regular business continuity drills.
-**Scale read and write throughput globally.** You can enable every region to be writable and elastically scale reads and writes all around the world. The throughput that your application configures on an Azure Cosmos database or a container is provisioned across all regions associated with your Azure Cosmos account. The provisioned throughput is guaranteed up by [financially backed SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_3/).
+**Scale read and write throughput globally.** You can enable every region to be writable and elastically scale reads and writes all around the world. The throughput that your application configures on an Azure Cosmos DB database or a container is provisioned across all regions associated with your Azure Cosmos DB account. The provisioned throughput is guaranteed by [financially backed SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_3/).
**Choose from several well-defined consistency models.** The Azure Cosmos DB replication protocol offers five well-defined, practical, and intuitive consistency models. Each model has a tradeoff between consistency and performance. Use these consistency models to build globally distributed applications with ease.
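For instance (a sketch assuming the azure-cosmos Python SDK), a client can relax the consistency level for its own requests when the account default is stronger than the application needs:

```python
from azure.cosmos import CosmosClient

# "Session" is the default for most accounts; "Eventual", "ConsistentPrefix",
# "BoundedStaleness", and "Strong" are the other levels described above.
client = CosmosClient(
    "https://my-account.documents.azure.com:443/",
    credential="<account-key>",
    consistency_level="Session",
)
```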
Read more about global distribution in the following articles:
* [How to configure multi-region writes in your applications](how-to-multi-master.md) * [Configure clients for multihoming](how-to-manage-database-account.md#configure-multiple-write-regions) * [Add or remove regions from your Azure Cosmos DB account](how-to-manage-database-account.md#addremove-regions-from-your-database-account)
-* [Create a custom conflict resolution policy for SQL API accounts](how-to-manage-conflicts.md#create-a-custom-conflict-resolution-policy)
-* [Programmable consistency models in Cosmos DB](consistency-levels.md)
+* [Create a custom conflict resolution policy for API for NoSQL accounts](how-to-manage-conflicts.md#create-a-custom-conflict-resolution-policy)
+* [Programmable consistency models in Azure Cosmos DB](consistency-levels.md)
* [Choose the right consistency level for your application](./consistency-levels.md) * [Consistency levels across Azure Cosmos DB APIs](./consistency-levels.md) * [Availability and performance tradeoffs for various consistency levels](./consistency-levels.md)
cosmos-db Emulator Command Line Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/emulator-command-line-parameters.md
Last updated 09/17/2020-+ # Command-line and PowerShell reference for Azure Cosmos DB Emulator The Azure Cosmos DB Emulator provides a local environment that emulates the Azure Cosmos DB service for local development purposes. After [installing the emulator](local-emulator.md), you can control the emulator with command line and PowerShell commands. This article describes how to use the command-line and PowerShell commands to start and stop the emulator, configure options, and perform other operations. You have to run the commands from the installation location.
To view the list of options, type `Microsoft.Azure.Cosmos.Emulator.exe /?` at th
|DataPath | Specifies the path in which to store data files. Default value is %LocalAppdata%\CosmosDBEmulator. | Microsoft.Azure.Cosmos.Emulator.exe /DataPath=\<datapath\> | \<datapath\>: An accessible path | |Port | Specifies the port number to use for the emulator. Default value is 8081. |Microsoft.Azure.Cosmos.Emulator.exe /Port=\<port\> | \<port\>: Single port number | | ComputePort | Specifies the port number to use for the Compute Interop Gateway service. The Gateway's HTTP endpoint probe port is calculated as ComputePort + 79. Hence, ComputePort and ComputePort + 79 must be open and available. The default value is 8900. | Microsoft.Azure.Cosmos.Emulator.exe /ComputePort=\<computeport\> | \<computeport\>: Single port number |
-| EnableMongoDbEndpoint=3.2 | Enables MongoDB API 3.2 | Microsoft.Azure.Cosmos.Emulator.exe /EnableMongoDbEndpoint=3.2 | |
-| EnableMongoDbEndpoint=3.6 | Enables MongoDB API 3.6 | Microsoft.Azure.Cosmos.Emulator.exe /EnableMongoDbEndpoint=3.6 | |
-| EnableMongoDbEndpoint=4.0 | Enables MongoDB API 4.0 | Microsoft.Azure.Cosmos.Emulator.exe /EnableMongoDbEndpoint=4.0 | |
+| EnableMongoDbEndpoint=3.2 | Enables API for MongoDB 3.2 | Microsoft.Azure.Cosmos.Emulator.exe /EnableMongoDbEndpoint=3.2 | |
+| EnableMongoDbEndpoint=3.6 | Enables API for MongoDB 3.6 | Microsoft.Azure.Cosmos.Emulator.exe /EnableMongoDbEndpoint=3.6 | |
+| EnableMongoDbEndpoint=4.0 | Enables API for MongoDB 4.0 | Microsoft.Azure.Cosmos.Emulator.exe /EnableMongoDbEndpoint=4.0 | |
| MongoPort | Specifies the port number to use for MongoDB compatibility API. Default value is 10255. |Microsoft.Azure.Cosmos.Emulator.exe /MongoPort=\<mongoport\>|\<mongoport\>: Single port number|
-| EnableCassandraEndpoint | Enables Cassandra API | Microsoft.Azure.Cosmos.Emulator.exe /EnableCassandraEndpoint | |
+| EnableCassandraEndpoint | Enables API for Cassandra | Microsoft.Azure.Cosmos.Emulator.exe /EnableCassandraEndpoint | |
| CassandraPort | Specifies the port number to use for the Cassandra endpoint. Default value is 10350. | Microsoft.Azure.Cosmos.Emulator.exe /CassandraPort=\<cassandraport\> | \<cassandraport\>: Single port number |
-| EnableGremlinEndpoint | Enables Gremlin API | Microsoft.Azure.Cosmos.Emulator.exe /EnableGremlinEndpoint | |
+| EnableGremlinEndpoint | Enables API for Gremlin | Microsoft.Azure.Cosmos.Emulator.exe /EnableGremlinEndpoint | |
| GremlinPort | Port number to use for the Gremlin Endpoint. Default value is 8901. | Microsoft.Azure.Cosmos.Emulator.exe /GremlinPort=\<port\> | \<port\>: Single port number |
-|EnableTableEndpoint | Enables Azure Table API | Microsoft.Azure.Cosmos.Emulator.exe /EnableTableEndpoint | |
+|EnableTableEndpoint | Enables API for Table | Microsoft.Azure.Cosmos.Emulator.exe /EnableTableEndpoint | |
|TablePort | Port number to use for the Azure Table Endpoint. Default value is 8902. | Microsoft.Azure.Cosmos.Emulator.exe /TablePort=\<port\> | \<port\>: Single port number| | KeyFile | Read authorization key from the specified file. Use the /GenKeyFile option to generate a keyfile | Microsoft.Azure.Cosmos.Emulator.exe /KeyFile=\<file_name\> | \<file_name\>: Path to the file | | ResetDataPath | Recursively removes all the files in the specified path. If you don't specify a path, it defaults to %LOCALAPPDATA%\CosmosDbEmulator | Microsoft.Azure.Cosmos.Emulator.exe /ResetDataPath=\<path> | \<path\>: File path |
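After the emulator is started with the options you need, applications connect to it like any other account. A minimal sketch, assuming the azure-cosmos Python SDK, the default port from the table above, and the well-known emulator key published in the emulator documentation:

```python
from azure.cosmos import CosmosClient

# Default local endpoint (see the Port option above) and the well-known
# emulator key; this key is public and used only for local dev/test.
EMULATOR_ENDPOINT = "https://localhost:8081/"
EMULATOR_KEY = (
    "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw=="
)

# connection_verify=False skips TLS validation for the emulator's self-signed
# certificate; never disable verification against a real account.
client = CosmosClient(EMULATOR_ENDPOINT, credential=EMULATOR_KEY, connection_verify=False)
print([db["id"] for db in client.list_databases()])
```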
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/free-tier.md
description: Use Azure Cosmos DB free tier to get started, develop, test your ap
+ Last updated 07/08/2022 # Azure Cosmos DB free tier Azure Cosmos DB free tier makes it easy to get started, develop, test your applications, or even run small production workloads for free. When free tier is enabled on an account, you'll get the first 1000 RU/s and 25 GB of storage in the account for free. The throughput and storage consumed beyond these limits are billed at regular price. Free tier is available for all API accounts with provisioned throughput, autoscale throughput, single, or multiple write regions.
You can create a free tier account from the Azure portal, PowerShell, CLI, or Az
### Azure portal
-When creating the account using the Azure portal, set the **Apply Free Tier Discount** option to **Apply**. See [create a new account with free tier](create-cosmosdb-resources-portal.md) article for step-by-step guidance.
+When creating the account using the Azure portal, set the **Apply Free Tier Discount** option to **Apply**. See the [create a new account with free tier](nosql/quickstart-portal.md) article for step-by-step guidance.
### ARM template
To create a free tier account by using an ARM template, set the property `"enabl
To create an account with free tier using CLI, set the `--enable-free-tier` parameter to true: ```azurecli-interactive
-# Create a free tier account for SQL API
+# Create a free tier account for API for NoSQL
az cosmosdb create \ -n "Myaccount" \ -g "MyResourcegroup" \
az cosmosdb create \
To create an account with free tier using Azure PowerShell, set the `-EnableFreeTier` parameter to true: ```powershell-interactive
-# Create a free tier account for SQL API.
+# Create a free tier account for API for NoSQL.
New-AzCosmosDBAccount -ResourceGroupName "MyResourcegroup" ` -Name "Myaccount" ` -ApiKind "sql" `
If the option to create a free-tier account is disabled or if you receive an err
After you create a free tier account, you can start building apps with Azure Cosmos DB with the following articles: * [Build a console app using the .NET V4 SDK](create-sql-api-dotnet-v4.md) to manage Azure Cosmos DB resources.
-* [Build a .NET web app using Azure Cosmos DB API for MongoDB](mongodb/create-mongodb-dotnet.md)
+* [Build a .NET web app using Azure Cosmos DB for MongoDB](mongodb/create-mongodb-dotnet.md)
* [Create a Jupyter notebook](notebooks-overview.md) and analyze your data. * Learn more about [Understanding your Azure Cosmos DB bill](understand-your-bill.md)
cosmos-db Get Latest Restore Timestamp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-latest-restore-timestamp.md
description: Learn how to get the latest restorable timestamp for accounts enabl
-++ Last updated 04/08/2022 # Get the latest restorable timestamp for continuous backup accounts This article describes how to get the [latest restorable timestamp](latest-restore-timestamp-continuous-backup.md) for accounts with continuous backup mode. It explains how to get the latest restorable time using Azure PowerShell and Azure CLI, and provides the request and response format for the PowerShell and CLI commands.
-This feature is supported for Cosmos DB SQL API containers and Cosmos DB MongoDB API collections. This feature is in preview for Table API tables and Gremlin API graphs.
+This feature is supported for Azure Cosmos DB API for NoSQL containers and Azure Cosmos DB API for MongoDB collections. This feature is in preview for API for Table tables and API for Gremlin graphs.
## SQL container
cosmos-db Global Dist Under The Hood https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/global-dist-under-the-hood.md
Title: Global distribution with Azure Cosmos DB- under the hood
description: This article provides technical details relating to global distribution of Azure Cosmos DB + Last updated 07/02/2020
# Global data distribution with Azure Cosmos DB - under the hood Azure Cosmos DB is a foundational service in Azure, so it's deployed across all Azure regions worldwide including the public, sovereign, Department of Defense (DoD) and government clouds. At a high level, Azure Cosmos DB container data is [horizontally partitioned](partitioning-overview.md) into many replica-sets, which replicate writes, in each region. Replica-sets durably commit writes using a majority quorum.
-Each region contains all the data partitions of an Azure Cosmos container and can serve reads as well as serve writes when multi-region writes is enabled. If your Azure Cosmos account is distributed across *N* Azure regions, there will be at least *N* x 4 copies of all your data.
+Each region contains all the data partitions of an Azure Cosmos DB container and can serve reads as well as serve writes when multi-region writes is enabled. If your Azure Cosmos DB account is distributed across *N* Azure regions, there will be at least *N* x 4 copies of all your data.
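A quick back-of-the-envelope illustration of that replica count (plain arithmetic, not an API call):

```python
regions = 3               # N: regions the account is replicated to
replicas_per_region = 4   # minimum replicas per partition in each region
print(regions * replicas_per_region)  # at least 12 copies of every item
```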
-Within a data center, we deploy and manage the Azure Cosmos DB on massive stamps of machines, each with dedicated local storage. Within a data center, Azure Cosmos DB is deployed across many clusters, each potentially running multiple generations of hardware. Machines within a cluster are typically spread across 10-20 fault domains for high availability within a region. The following image shows the Cosmos DB global distribution system topology:
+Within a data center, we deploy and manage Azure Cosmos DB on massive stamps of machines, each with dedicated local storage. Within a data center, Azure Cosmos DB is deployed across many clusters, each potentially running multiple generations of hardware. Machines within a cluster are typically spread across 10-20 fault domains for high availability within a region. The following image shows the Azure Cosmos DB global distribution system topology:
:::image type="content" source="./media/global-dist-under-the-hood/distributed-system-topology.png" alt-text="System Topology" border="false":::
-**Global distribution in Azure Cosmos DB is turnkey:** At any time, with a few clicks or programmatically with a single API call, you can add or remove the geographical regions associated with your Cosmos database. A Cosmos database, in turn, consists of a set of Cosmos containers. In Cosmos DB, containers serve as the logical units of distribution and scalability. The collections, tables, and graphs you create are (internally) just Cosmos containers. Containers are completely schema-agnostic and provide a scope for a query. Data in a Cosmos container is automatically indexed upon ingestion. Automatic indexing enables users to query the data without the hassles of schema or index management, especially in a globally distributed setup.
+**Global distribution in Azure Cosmos DB is turnkey:** At any time, with a few clicks or programmatically with a single API call, you can add or remove the geographical regions associated with your Azure Cosmos DB database. An Azure Cosmos DB database, in turn, consists of a set of Azure Cosmos DB containers. In Azure Cosmos DB, containers serve as the logical units of distribution and scalability. The collections, tables, and graphs you create are (internally) just Azure Cosmos DB containers. Containers are completely schema-agnostic and provide a scope for a query. Data in an Azure Cosmos DB container is automatically indexed upon ingestion. Automatic indexing enables users to query the data without the hassles of schema or index management, especially in a globally distributed setup.
- In a given region, data within a container is distributed by using a partition-key, which you provide and is transparently managed by the underlying physical partitions (*local distribution*). - Each physical partition is also replicated across geographical regions (*global distribution*).
-When an app using Cosmos DB elastically scales throughput on a Cosmos container or consumes more storage, Cosmos DB transparently handles partition management operations (split, clone, delete) across all the regions. Independent of the scale, distribution, or failures, Cosmos DB continues to provide a single system image of the data within the containers, which are globally distributed across any number of regions.
+When an app using Azure Cosmos DB elastically scales throughput on an Azure Cosmos DB container or consumes more storage, Azure Cosmos DB transparently handles partition management operations (split, clone, delete) across all the regions. Independent of the scale, distribution, or failures, Azure Cosmos DB continues to provide a single system image of the data within the containers, which are globally distributed across any number of regions.
As shown in the following image, the data within a container is distributed along two dimensions - within a region and across regions, worldwide:
As shown in the following image, the data within a container is distributed alon
A physical partition is implemented by a group of replicas, called a *replica-set*. Each machine hosts hundreds of replicas that correspond to various physical partitions within a fixed set of processes as shown in the image above. Replicas corresponding to the physical partitions are dynamically placed and load balanced across the machines within a cluster and data centers within a region.
-A replica uniquely belongs to an Azure Cosmos DB tenant. Each replica hosts an instance of Cosmos DB’s [database engine](https://www.vldb.org/pvldb/vol8/p1668-shukla.pdf), which manages the resources as well as the associated indexes. The Cosmos database engine operates on an atom-record-sequence (ARS) based type system. The engine is agnostic to the concept of a schema, blurring the boundary between the structure and instance values of records. Cosmos DB achieves full schema agnosticism by automatically indexing everything upon ingestion in an efficient manner, which allows users to query their globally distributed data without having to deal with schema or index management.
+A replica uniquely belongs to an Azure Cosmos DB tenant. Each replica hosts an instance of Azure Cosmos DB’s [database engine](https://www.vldb.org/pvldb/vol8/p1668-shukla.pdf), which manages the resources as well as the associated indexes. The Azure Cosmos DB database engine operates on an atom-record-sequence (ARS) based type system. The engine is agnostic to the concept of a schema, blurring the boundary between the structure and instance values of records. Azure Cosmos DB achieves full schema agnosticism by automatically indexing everything upon ingestion in an efficient manner, which allows users to query their globally distributed data without having to deal with schema or index management.
-The Cosmos database engine consists of components including implementation of several coordination primitives, language runtimes, the query processor, and the storage and indexing subsystems responsible for transactional storage and indexing of data, respectively. To provide durability and high availability, the database engine persists its data and index on SSDs and replicates it among the database engine instances within the replica-set(s) respectively. Larger tenants correspond to higher scale of throughput and storage and have either bigger or more replicas or both. Every component of the system is fully asynchronous – no thread ever blocks, and each thread does short-lived work without incurring any unnecessary thread switches. Rate-limiting and back-pressure are plumbed across the entire stack from the admission control to all I/O paths. Cosmos database engine is designed to exploit fine-grained concurrency and to deliver high throughput while operating within frugal amounts of system resources.
+The Azure Cosmos DB database engine consists of components including implementation of several coordination primitives, language runtimes, the query processor, and the storage and indexing subsystems responsible for transactional storage and indexing of data, respectively. To provide durability and high availability, the database engine persists its data and index on SSDs and replicates it among the database engine instances within the replica-set(s) respectively. Larger tenants correspond to higher scale of throughput and storage and have either bigger or more replicas or both. Every component of the system is fully asynchronous – no thread ever blocks, and each thread does short-lived work without incurring any unnecessary thread switches. Rate-limiting and back-pressure are plumbed across the entire stack from the admission control to all I/O paths. Azure Cosmos DB database engine is designed to exploit fine-grained concurrency and to deliver high throughput while operating within frugal amounts of system resources.
-Cosmos DB’s global distribution relies on two key abstractions – *replica-sets* and *partition-sets*. A replica-set is a modular Lego block for coordination, and a partition-set is a dynamic overlay of one or more geographically distributed physical partitions. To understand how global distribution works, we need to understand these two key abstractions.
+Azure Cosmos DB’s global distribution relies on two key abstractions – *replica-sets* and *partition-sets*. A replica-set is a modular Lego block for coordination, and a partition-set is a dynamic overlay of one or more geographically distributed physical partitions. To understand how global distribution works, we need to understand these two key abstractions.
## Replica-sets
A physical partition is materialized as a self-managed and dynamically load-bala
- First, the cost of processing the write requests on the leader is higher than the cost of applying the updates on the follower. Correspondingly, the leader is budgeted more system resources than the followers. -- Secondly, as far as possible, the read quorum for a given consistency level is composed exclusively of the follower replicas. We avoid contacting the leader for serving reads unless required. We employ a number of ideas from the research done on the relationship of [load and capacity](https://www.cs.utexas.edu/~lorenzo/corsi/cs395t/04S/notes/naor98load.pdf) in the quorum-based systems for the [five consistency models](consistency-levels.md) that Cosmos DB supports.
+- Secondly, as far as possible, the read quorum for a given consistency level is composed exclusively of the follower replicas. We avoid contacting the leader for serving reads unless required. We employ a number of ideas from the research done on the relationship of [load and capacity](https://www.cs.utexas.edu/~lorenzo/corsi/cs395t/04S/notes/naor98load.pdf) in the quorum-based systems for the [five consistency models](consistency-levels.md) that Azure Cosmos DB supports.
## Partition-sets
-A group of physical partitions, one from each of the configured with the Cosmos database regions, is composed to manage the same set of keys replicated across all the configured regions. This higher coordination primitive is called a *partition-set* - a geographically distributed dynamic overlay of physical partitions managing a given set of keys. While a given physical partition (a replica-set) is scoped within a cluster, a partition-set can span clusters, data centers, and geographical regions as shown in the image below:
+A group of physical partitions, one from each of the regions configured for the Azure Cosmos DB database, is composed to manage the same set of keys replicated across all the configured regions. This higher coordination primitive is called a *partition-set* - a geographically distributed dynamic overlay of physical partitions managing a given set of keys. While a given physical partition (a replica-set) is scoped within a cluster, a partition-set can span clusters, data centers, and geographical regions as shown in the image below:
:::image type="content" source="./media/global-dist-under-the-hood/dynamic-overlay-of-resource-partitions.png" alt-text="Partition Sets" border="false":::
-You can think of a partition-set as a geographically dispersed "super replica-set", which is composed of multiple replica-sets owning the same set of keys. Similar to a replica-set, a partition-set's membership is also dynamic – it fluctuates based on implicit physical partition management operations to add/remove new partitions to/from a given partition-set (for instance, when you scale out throughput on a container, add/remove a region to your Cosmos database, or when failures occur). By virtue of having each of the partitions (of a partition-set) manage the partition-set membership within its own replica-set, the membership is fully decentralized and highly available. During the reconfiguration of a partition-set, the topology of the overlay between physical partitions is also established. The topology is dynamically selected based on the consistency level, geographical distance, and available network bandwidth between the source and the target physical partitions.
+You can think of a partition-set as a geographically dispersed "super replica-set", which is composed of multiple replica-sets owning the same set of keys. Similar to a replica-set, a partition-set's membership is also dynamic – it fluctuates based on implicit physical partition management operations to add/remove new partitions to/from a given partition-set (for instance, when you scale out throughput on a container, add/remove a region to your Azure Cosmos DB database, or when failures occur). By virtue of having each of the partitions (of a partition-set) manage the partition-set membership within its own replica-set, the membership is fully decentralized and highly available. During the reconfiguration of a partition-set, the topology of the overlay between physical partitions is also established. The topology is dynamically selected based on the consistency level, geographical distance, and available network bandwidth between the source and the target physical partitions.
-The service allows you to configure your Cosmos databases with either a single write region or multiple write regions, and depending on the choice, partition-sets are configured to accept writes in exactly one or all regions. The system employs a two-level, nested consensus protocol – one level operates within the replicas of a replica-set of a physical partition accepting the writes, and the other operates at the level of a partition-set to provide complete ordering guarantees for all the committed writes within the partition-set. This multi-layered, nested consensus is critical for the implementation of our stringent SLAs for high availability, as well as the implementation of the consistency models, which Cosmos DB offers to its customers.
+The service allows you to configure your Azure Cosmos DB databases with either a single write region or multiple write regions, and depending on the choice, partition-sets are configured to accept writes in exactly one or all regions. The system employs a two-level, nested consensus protocol – one level operates within the replicas of a replica-set of a physical partition accepting the writes, and the other operates at the level of a partition-set to provide complete ordering guarantees for all the committed writes within the partition-set. This multi-layered, nested consensus is critical for the implementation of our stringent SLAs for high availability, as well as the implementation of the consistency models, which Azure Cosmos DB offers to its customers.
## Conflict resolution
-Our design for the update propagation, conflict resolution, and causality tracking is inspired from the prior work on [epidemic algorithms](https://www.kth.se/social/upload/51647982f276546170461c46/4-gossip.pdf) and the [Bayou](https://people.cs.umass.edu/~mcorner/courses/691M/papers/terry.pdf) system. While the kernels of the ideas have survived and provide a convenient frame of reference for communicating the Cosmos DB's system design, they have also undergone significant transformation as we applied them to the Cosmos DB system. This was needed, because the previous systems were designed neither with the resource governance nor with the scale at which Cosmos DB needs to operate, nor to provide the capabilities (for example, bounded staleness consistency) and the stringent and comprehensive SLAs that Cosmos DB delivers to its customers.
+Our design for the update propagation, conflict resolution, and causality tracking is inspired by the prior work on [epidemic algorithms](https://www.kth.se/social/upload/51647982f276546170461c46/4-gossip.pdf) and the [Bayou](https://people.cs.umass.edu/~mcorner/courses/691M/papers/terry.pdf) system. While the kernels of the ideas have survived and provide a convenient frame of reference for communicating Azure Cosmos DB's system design, they have also undergone significant transformation as we applied them to the Azure Cosmos DB system. This was needed because the previous systems were designed neither with the resource governance nor with the scale at which Azure Cosmos DB needs to operate, nor to provide the capabilities (for example, bounded staleness consistency) and the stringent and comprehensive SLAs that Azure Cosmos DB delivers to its customers.
-Recall that a partition-set is distributed across multiple regions and follows Cosmos DBs (multi-region writes) replication protocol to replicate the data among the physical partitions comprising a given partition-set. Each physical partition (of a partition-set) accepts writes and serves reads typically to the clients that are local to that region. Writes accepted by a physical partition within a region are durably committed and made highly available within the physical partition before they are acknowledged to the client. These are tentative writes and are propagated to other physical partitions within the partition-set using an anti-entropy channel. Clients can request either tentative or committed writes by passing a request header. The anti-entropy propagation (including the frequency of propagation) is dynamic, based on the topology of the partition-set, regional proximity of the physical partitions, and the consistency level configured. Within a partition-set, Cosmos DB follows a primary commit scheme with a dynamically selected arbiter partition. The arbiter selection is dynamic and is an integral part of the reconfiguration of the partition-set based on the topology of the overlay. The committed writes (including multi-row/batched updates) are guaranteed to be ordered.
+Recall that a partition-set is distributed across multiple regions and follows Azure Cosmos DB's (multi-region writes) replication protocol to replicate the data among the physical partitions comprising a given partition-set. Each physical partition (of a partition-set) accepts writes and serves reads typically to the clients that are local to that region. Writes accepted by a physical partition within a region are durably committed and made highly available within the physical partition before they are acknowledged to the client. These are tentative writes and are propagated to other physical partitions within the partition-set using an anti-entropy channel. Clients can request either tentative or committed writes by passing a request header. The anti-entropy propagation (including the frequency of propagation) is dynamic, based on the topology of the partition-set, regional proximity of the physical partitions, and the consistency level configured. Within a partition-set, Azure Cosmos DB follows a primary commit scheme with a dynamically selected arbiter partition. The arbiter selection is dynamic and is an integral part of the reconfiguration of the partition-set based on the topology of the overlay. The committed writes (including multi-row/batched updates) are guaranteed to be ordered.
We employ encoded vector clocks (containing region ID and logical clocks corresponding to each level of consensus at the replica-set and partition-set, respectively) for causality tracking and version vectors to detect and resolve update conflicts. The topology and the peer selection algorithm are designed to ensure fixed and minimal storage and minimal network overhead of version vectors. The algorithm guarantees the strict convergence property.
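To make the version-vector idea concrete, the following is a minimal, illustrative Java sketch of how a per-region version vector can distinguish causally ordered updates from concurrent (conflicting) ones. This isn't Azure Cosmos DB's internal implementation: the class, method names, and region labels are invented for the example, and the service's encoded vector clocks carry additional state (such as per-level logical clocks) in a far more compact form.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative only: a simplified version vector keyed by region name.
public final class VersionVector {
    public enum Order { BEFORE, AFTER, EQUAL, CONCURRENT }

    private final Map<String, Long> counters = new HashMap<>();

    public void recordWrite(String region) {
        counters.merge(region, 1L, Long::sum);
    }

    // Decides whether one update causally dominates the other, or whether the
    // two updates are concurrent - a write-write conflict that must be resolved.
    public static Order compare(VersionVector a, VersionVector b) {
        Set<String> regions = new HashSet<>(a.counters.keySet());
        regions.addAll(b.counters.keySet());
        boolean aAhead = false, bAhead = false;
        for (String region : regions) {
            long av = a.counters.getOrDefault(region, 0L);
            long bv = b.counters.getOrDefault(region, 0L);
            if (av > bv) aAhead = true;
            if (bv > av) bAhead = true;
        }
        if (aAhead && bAhead) return Order.CONCURRENT;
        if (aAhead) return Order.AFTER;
        if (bAhead) return Order.BEFORE;
        return Order.EQUAL;
    }

    public static void main(String[] args) {
        VersionVector writeInEastUs = new VersionVector();
        VersionVector writeInWestEurope = new VersionVector();
        writeInEastUs.recordWrite("East US");
        writeInWestEurope.recordWrite("West Europe");
        // Neither write has observed the other, so they are concurrent and a
        // conflict resolution policy has to pick a winner.
        System.out.println(compare(writeInEastUs, writeInWestEurope)); // CONCURRENT
    }
}
```

The sketch only shows why a small, per-region counter is enough to detect conflicting updates; in the service, the equivalent comparison is performed by the replication protocol itself.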
-For the Cosmos databases configured with multiple write regions, the system offers a number of flexible automatic conflict resolution policies for the developers to choose from, including:
+For the Azure Cosmos DB databases configured with multiple write regions, the system offers a number of flexible automatic conflict resolution policies for the developers to choose from, including:
-- **Last-Write-Wins (LWW)**, which, by default, uses a system-defined timestamp property (which is based on the time-synchronization clock protocol). Cosmos DB also allows you to specify any other custom numerical property to be used for conflict resolution.
+- **Last-Write-Wins (LWW)**, which, by default, uses a system-defined timestamp property (which is based on the time-synchronization clock protocol). Azure Cosmos DB also allows you to specify any other custom numerical property to be used for conflict resolution.
- **Application-defined (Custom) conflict resolution policy** (expressed via merge procedures), which is designed for application-defined semantics reconciliation of conflicts. These procedures get invoked upon detection of the write-write conflicts under the auspices of a database transaction on the server side. The system provides exactly once guarantee for the execution of a merge procedure as a part of the commitment protocol. There are [several conflict resolution samples](how-to-manage-conflicts.md) available for you to play with.

## Consistency Models
-Whether you configure your Cosmos database with a single or multiple write regions, you can choose from the five well-defined consistency models. With multiple write regions, the following are a few notable aspects of the consistency levels:
+Whether you configure your Azure Cosmos DB database with a single or multiple write regions, you can choose from the five well-defined consistency models. With multiple write regions, the following are a few notable aspects of the consistency levels:
- The bounded staleness consistency guarantees that all reads will be within *K* prefixes or *T* seconds from the latest write in any of the regions. Furthermore, reads with bounded staleness consistency are guaranteed to be monotonic and with consistent prefix guarantees. The anti-entropy protocol operates in a rate-limited manner and ensures that the prefixes do not accumulate and the backpressure on the writes does not have to be applied.
- Session consistency guarantees monotonic read, monotonic write, read your own writes, write follows read, and consistent prefix guarantees, worldwide.
- For the databases configured with strong consistency, the benefits (low write latency, high write availability) of multiple write regions do not apply, because of synchronous replication across regions.
-The semantics of the five consistency models in Cosmos DB are described [here](consistency-levels.md), and mathematically described using a high-level TLA+ specifications [here](https://github.com/Azure/azure-cosmos-tla).
+The semantics of the five consistency models in Azure Cosmos DB are described [here](consistency-levels.md), and mathematically described using high-level TLA+ specifications [here](https://github.com/Azure/azure-cosmos-tla).
## Next steps
cosmos-db Access System Properties Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/access-system-properties-gremlin.md
- Title: Access system document properties via Azure Cosmos DB Graph
-description: Learn how to read and write Cosmos DB system document properties via Gremlin API
--- Previously updated : 09/16/2021----
-# System document properties
-
-Azure Cosmos DB has [system properties](/rest/api/cosmos-db/databases) such as ```_ts```, ```_self```, ```_attachments```, ```_rid```, and ```_etag``` on every document. Additionally, the Gremlin engine adds ```inVPartition``` and ```outVPartition``` properties on edges. By default, these properties aren't available for traversal. However, it's possible to include specific properties, or all of them, in a Gremlin traversal.
-
-```console
-g.withStrategies(ProjectionStrategy.build().IncludeSystemProperties('_ts').create())
-```
-
-## E-Tag
-
-This property is used for optimistic concurrency control. If an application needs to break an operation into a few separate traversals, it can use the eTag property to avoid data loss from concurrent writes.
-
-```console
-g.withStrategies(ProjectionStrategy.build().IncludeSystemProperties('_etag').create()).V('1').has('_etag', '"00000100-0000-0800-0000-5d03edac0000"').property('test', '1')
-```
-
-## Time-to-live (TTL)
-
-If a collection has document expiration enabled and documents have the `ttl` property set on them, then this property will be available in a Gremlin traversal as a regular vertex or edge property. `ProjectionStrategy` isn't necessary to enable time-to-live property exposure.
-
-* Use the following command to set time-to-live on a new vertex:
-
- ```console
- g.addV(<ID>).property('ttl', <expirationTime>)
- ```
-
- For example, a vertex created with the following traversal is automatically deleted after *123 seconds*:
-
- ```console
- g.addV('vertex-one').property('ttl', 123)
- ```
-
-* Use the following command to set time-to-live on an existing vertex:
-
- ```console
- g.V().hasId(<ID>).has('pk', <pk>).property('ttl', <expirationTime>)
- ```
-
-* Applying the time-to-live property on vertices doesn't automatically apply it to edges, because edges are independent records in the database store. Use the following command to set time-to-live on a vertex and all of its incoming and outgoing edges:
-
- ```console
- g.V().hasId(<ID>).has('pk', <pk>).as('v').bothE().hasNot('ttl').property('ttl', <expirationTime>)
- ```
-
-If you set the TTL on the container to -1, or set it to **On (no default)** from the Azure portal, the TTL is infinite for any item unless the item has a TTL value explicitly set.
-
-## Next steps
-* [Cosmos DB Optimistic Concurrency](../faq.yml#how-does-the-sql-api-provide-concurrency-)
-* [Time to Live (TTL)](../time-to-live.md) in Azure Cosmos DB
cosmos-db Bulk Executor Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/bulk-executor-graph-dotnet.md
- Title: Ingest data in bulk in the Azure Cosmos DB Gremlin API by using a bulk executor library
-description: Learn how to use a bulk executor library to massively import graph data into an Azure Cosmos DB Gremlin API container.
--- Previously updated : 05/10/2022-----
-# Ingest data in bulk in the Azure Cosmos DB Gremlin API by using a bulk executor library
--
-Graph databases often need to ingest data in bulk to refresh an entire graph or update a portion of it. Azure Cosmos DB, a distributed database and the backbone of the Azure Cosmos DB Gremlin API, is meant to perform best when the loads are well distributed. Bulk executor libraries in Azure Cosmos DB are designed to exploit this unique capability of Azure Cosmos DB and provide optimal performance. For more information, see [Introducing bulk support in the .NET SDK](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk).
-
-In this tutorial, you learn how to use the Azure Cosmos DB bulk executor library to import and update *graph* objects into an Azure Cosmos DB Gremlin API container. During this process, you use the library to create *vertex* and *edge* objects programmatically and then insert multiple objects per network request.
-
-Instead of sending Gremlin queries to a database, where the commands are evaluated and then executed one at a time, you use the bulk executor library to create and validate the objects locally. After the library initializes the graph objects, it allows you to send them to the database service sequentially.
-
-By using this method, you can increase data ingestion speeds as much as a hundredfold, which makes it an ideal way to perform initial data migrations or periodic data movement operations.
-
-The bulk executor library now comes in the following varieties.
-
-## .NET
-### Prerequisites
-
-Before you begin, make sure that you have the following:
-
-* Visual Studio 2019 with the Azure development workload. You can get started with the [Visual Studio 2019 Community Edition](https://visualstudio.microsoft.com/downloads/) for free.
-
-* An Azure subscription. If you don't already have a subscription, you can [create a free Azure account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=cosmos-db).
-
- Alternatively, you can [create a free Azure Cosmos DB account](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.
-
-* An Azure Cosmos DB Gremlin API database with an *unlimited collection*. To get started, go to [Azure Cosmos DB Gremlin API in .NET](./create-graph-dotnet.md).
-
-* Git. To begin, go to the [git downloads](https://git-scm.com/downloads) page.
-
-#### Clone
-
-To use this sample, run the following command:
-
-```bash
-git clone https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor.git
-```
-
-To get the sample, go to `.\azure-cosmos-graph-bulk-executor\dotnet\src\`.
-
-#### Sample
-
-```csharp
-
-IGraphBulkExecutor graphBulkExecutor = new GraphBulkExecutor("MyConnectionString", "myDatabase", "myContainer");
-
-List<IGremlinElement> gremlinElements = new List<IGremlinElement>();
-gremlinElements.AddRange(Program.GenerateVertices(Program.documentsToInsert));
-gremlinElements.AddRange(Program.GenerateEdges(Program.documentsToInsert));
-BulkOperationResponse bulkOperationResponse = await graphBulkExecutor.BulkImportAsync(
- gremlinElements: gremlinElements,
- enableUpsert: true);
-```
-
-### Execute
-
-Modify the parameters, as described in the following table:
-
-| Parameter|Description |
-|||
-|`ConnectionString`| Your .NET SDK endpoint, which you'll find in the **Overview** section of your Azure Cosmos DB Gremlin API database account. It's formatted as `https://your-graph-database-account.documents.azure.com:443/`.|
-|`DatabaseName`, `ContainerName`|The names of the target database and container.|
-|`DocumentsToInsert`| The number of documents to be generated (relevant only to synthetic data).|
-|`PartitionKey` | Ensures that a partition key is specified with each document during data ingestion.|
-|`NumberOfRUs` | Is relevant only if a container doesn't already exist and it needs to be created during execution.|
-
-[Download the full sample application in .NET](https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor/tree/main/dotnet).
-
-## Java
-
-### Sample usage
-
-The following sample application illustrates how to use the GraphBulkExecutor package. The samples use either the *domain* object annotations or the *POJO* (plain old Java object) objects directly. We recommend that you try both approaches to determine which one better meets your implementation and performance demands.
-
-### Clone
-
-To use the sample, run the following command:
-
-```bash
-git clone https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor.git
-```
-To get the sample, go to `.\azure-cosmos-graph-bulk-executor\java\`.
-
-### Prerequisites
-
-To run this sample, you need to have the following software:
-
-* OpenJDK 11
-* Maven
-* An Azure Cosmos DB account that's configured to use the Gremlin API
-
-### Sample
-
-```java
-private static void executeWithPOJO(Stream<GremlinVertex> vertices,
- Stream<GremlinEdge> edges,
- boolean createDocs) {
- results.transitionState("Configure Database");
- UploadWithBulkLoader loader = new UploadWithBulkLoader();
- results.transitionState("Write Documents");
- loader.uploadDocuments(vertices, edges, createDocs);
- }
-```
-
-### Configuration
-
-To run the sample, refer to the following configuration and modify it as needed.
-
-The */resources/application.properties* file defines the data that's required to configure Azure Cosmos DB. The required values are described in the following table:
-
-| Property | Description |
-| | |
-| `sample.sql.host` | The value that's provided by Azure Cosmos DB. Ensure that you're using the .NET SDK URI, which you'll find in the **Overview** section of the Azure Cosmos DB account.|
-| `sample.sql.key` | You can get the primary or secondary key from the **Keys** section of the Azure Cosmos DB account. |
-| `sample.sql.database.name` | The name of the database within the Azure Cosmos DB account to run the sample against. If the database isn't found, the sample code creates it. |
-| `sample.sql.container.name` | The name of the container within the database to run the sample against. If the container isn't found, the sample code creates it. |
-| `sample.sql.partition.path` | If you need to create the container, use this value to define the `partitionKey` path. |
-| `sample.sql.allow.throughput` | The container will be updated to use the throughput value that's defined here. If you're exploring various throughput options to meet your performance demands, be sure to reset the throughput on the container when you're done with your exploration. There are costs associated with leaving the container provisioned with a higher throughput. |
-
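As an illustration only, an *application.properties* file that uses these keys might look like the following sketch. Every value shown is a placeholder rather than a real endpoint, key, or recommended throughput; substitute the settings for your own account.

```properties
# Placeholder values for illustration - replace each one with your own settings.
sample.sql.host=https://your-account.documents.azure.com:443/
sample.sql.key=your-primary-or-secondary-key
sample.sql.database.name=sample-database
sample.sql.container.name=sample-graph
sample.sql.partition.path=/country
sample.sql.allow.throughput=400
```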
-### Execute
-
-After you've modified the configuration according to your environment, run the following command:
-
-```bash
-mvn clean package
-```
-
-For added safety, you can also run the integration tests by changing the `skipIntegrationTests` value in the *pom.xml* file to `false`.
-
-After you've run the unit tests successfully, you can run the sample code:
-
-```bash
-java -jar target/azure-cosmos-graph-bulk-executor-1.0-jar-with-dependencies.jar -v 1000 -e 10 -d
-```
-
-Running the preceding command executes the sample with a small batch (1,000 vertices and roughly 5,000 edges). Use the command-line arguments in the following sections to tweak the volumes that are run and which sample version to run.
-
-### Command-line arguments
-
-Several command-line arguments are available while you're running this sample, as described in the following table:
-
-| Argument&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description |
-| | |
-| `--vertexCount` (`-v`) | Tells the application how many person vertices to generate. |
-| `--edgeMax` (`-e`) | Tells the application the maximum number of edges to generate for each vertex. The generator randomly selects a number from 1 to the value you provide. |
-| `--domainSample` (`-d`) | Tells the application to run the sample by using the person and relationship domain structures instead of the `GraphBulkExecutors`, `GremlinVertex`, and `GremlinEdge` POJOs. |
-| `--createDocuments` (`-c`) | Tells the application to use `create` operations. If the argument isn't present, the application defaults to using `upsert` operations. |
-
-### Detailed sample information
-
-#### Person vertex
-
-The person class is a simple domain object that's been decorated with several annotations to help a transformation into the `GremlinVertex` class, as described in the following table:
-
-| Class annotation | Description |
-| | |
-| `GremlinVertex` | Uses the optional `label` parameter to define all vertices that you create by using this class. |
-| `GremlinId` | Used to define which field will be used as the `ID` value. The field name on the person class is ID, but it isn't required. |
-| `GremlinProperty` | Used on the `email` field to change the name of the property when it's stored in the database.
-| `GremlinPartitionKey` | Used to define which field on the class contains the partition key. The field name you provide should match the value that's defined by the partition path on the container. |
-| `GremlinIgnore` | Used to exclude the `isSpecial` field from the property that's being written to the database. |
-
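As a rough sketch of how these annotations fit together, a hypothetical `Person` class might look like the following. The annotation names come from the table above, but the import package, annotation parameters, and field names are assumptions made for illustration; refer to the sample repository for the authoritative class definitions.

```java
// Hypothetical sketch only - the import package and annotation parameters are
// assumptions; check the bulk executor sample repository for the real API.
import com.azure.graph.bulk.impl.annotations.GremlinId;
import com.azure.graph.bulk.impl.annotations.GremlinIgnore;
import com.azure.graph.bulk.impl.annotations.GremlinPartitionKey;
import com.azure.graph.bulk.impl.annotations.GremlinProperty;
import com.azure.graph.bulk.impl.annotations.GremlinVertex;

@GremlinVertex(label = "PERSON")       // every instance becomes a vertex with this label
public class Person {
    @GremlinId
    private String id;                 // used as the vertex ID

    @GremlinPartitionKey
    private String country;            // must match the partition path defined on the container

    @GremlinProperty(name = "e-mail")  // stored under a different property name in the database
    private String email;

    private String firstName;          // plain fields become regular vertex properties

    @GremlinIgnore
    private boolean isSpecial;         // excluded from the document that's written
}
```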
-#### The RelationshipEdge class
-
-The `RelationshipEdge` class is a versatile domain object. By using the field level label annotation, you can create a dynamic collection of edge types, as shown in the following table:
-
-| Class annotation | Description |
-| | |
-| `GremlinEdge` | The `GremlinEdge` decoration on the class defines the name of the field for the specified partition key. When you create an edge document, the assigned value comes from the source vertex information. |
-| `GremlinEdgeVertex` | Two instances of `GremlinEdgeVertex` are defined, one for each side of the edge (source and destination). Our sample has the field's data type as `GremlinEdgeVertexInfo`. The information provided by the `GremlinEdgeVertex` class is required for the edge to be created correctly in the database. Another option would be to have the data type of the vertices be a class that has been decorated with the `GremlinVertex` annotations. |
-| `GremlinLabel` | The sample edge uses a field to define the `label` value. It allows various labels to be defined, because it uses the same base domain class. |
-
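Again purely as a sketch, a `RelationshipEdge` class built from these annotations might look roughly like the following. The annotation parameters, field names, and import paths are assumptions for illustration, not the sample's exact API.

```java
// Hypothetical sketch only - parameter names and packages are assumptions.
import com.azure.graph.bulk.impl.annotations.GremlinEdge;
import com.azure.graph.bulk.impl.annotations.GremlinEdgeVertex;
import com.azure.graph.bulk.impl.annotations.GremlinLabel;
import com.azure.graph.bulk.impl.model.GremlinEdgeVertexInfo;

@GremlinEdge(partitionKeyFieldName = "country") // the edge document takes its partition key from the source vertex
public class RelationshipEdge {
    @GremlinEdgeVertex                          // one side of the edge (source)
    private GremlinEdgeVertexInfo source;

    @GremlinEdgeVertex                          // the other side of the edge (destination)
    private GremlinEdgeVertexInfo destination;

    @GremlinLabel
    private String relationshipType;            // for example, "knows"; lets one class produce edges with different labels
}
```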
-### Output explained
-
-The console finishes its run with a JSON string that describes the run times of the sample. The JSON string contains the following information:
-
-| JSON string | Description |
-| | |
-| startTime | The `System.nanoTime()` when the process started. |
-| endTime | The `System.nanoTime()` when the process finished. |
-| durationInNanoSeconds | The difference between the `endTime` and `startTime` values. |
-| durationInMinutes | The `durationInNanoSeconds` value, converted into minutes. The `durationInMinutes` value is represented as a float number, not a time value. For example, a value of 2.5 represents 2 minutes and 30 seconds. |
-| vertexCount | The volume of generated vertices, which should match the value that's passed into the command-line execution. |
-| edgeCount | The volume of generated edges, which isn't static and is built with an element of randomness. |
-| exception | Populated only if an exception is thrown when you attempt to make the run. |
-
-#### States array
-
-The states array gives insight into how long each step within the execution takes. The steps are described in the following table:
-
-| Execution&nbsp;step | Description |
-| | |
-| Build&nbsp;sample&nbsp;vertices | The amount of time it takes to fabricate the requested volume of person objects. |
-| Build sample edges | The amount of time it takes to fabricate the relationship objects. |
-| Configure database | The amount of time it takes to configure the database, based on the values that are provided in `application.properties`. |
-| Write documents | The amount of time it takes to write the documents to the database. |
-
-Each state contains the following values:
-
-| State value | Description |
-| | |
-| `stateName` | The name of the state that's being reported. |
-| `startTime` | The `System.nanoTime()` value when the state started. |
-| `endTime` | The `System.nanoTime()` value when the state finished. |
-| `durationInNanoSeconds` | The difference between the `endTime` and `startTime` values. |
-| `durationInMinutes` | The `durationInNanoSeconds` value, converted into minutes. The `durationInMinutes` value is represented as a float number, not a time value. For example, a value of 2.5 represents 2 minutes and 30 seconds. |
-
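Pulling the two tables together, the overall shape of the output might look something like the following. This is an illustrative sketch with placeholder numbers (kept arithmetically consistent), not actual output from a run.

```json
{
  "startTime": 1000000000,
  "endTime": 121000000000,
  "durationInNanoSeconds": 120000000000,
  "durationInMinutes": 2.0,
  "vertexCount": 1000,
  "edgeCount": 5021,
  "exception": null,
  "states": [
    {
      "stateName": "Write Documents",
      "startTime": 61000000000,
      "endTime": 121000000000,
      "durationInNanoSeconds": 60000000000,
      "durationInMinutes": 1.0
    }
  ]
}
```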
-## Next steps
-
-* For more information about the classes and methods that are defined in this namespace, review the [BulkExecutor Java open source documentation](https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor/tree/main/java/src/main/java/com/azure/graph/bulk/impl).
-* See [Bulk import data to the Azure Cosmos DB SQL API account by using the .NET SDK](../sql/tutorial-sql-api-dotnet-bulk-import.md) article. This bulk mode documentation is part of the .NET V3 SDK.
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/cli-samples.md
- Title: Azure CLI Samples for Azure Cosmos DB Gremlin API
-description: Azure CLI Samples for Azure Cosmos DB Gremlin API
---- Previously updated : 08/19/2022-----
-# Azure CLI samples for Azure Cosmos DB Gremlin API
--
-The following tables include links to sample Azure CLI scripts for the Azure Cosmos DB Gremlin API and to sample Azure CLI scripts that apply to all Cosmos DB APIs. Common samples are the same across all APIs.
-
-These samples require Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
-
-## Gremlin API Samples
-
-|Task | Description |
-|||
-| [Create an Azure Cosmos account, database and graph](../scripts/cli/gremlin/create.md)| Creates an Azure Cosmos DB account, database, and graph for Gremlin API. |
-| [Create a serverless Azure Cosmos account for Gremlin API, database and graph](../scripts/cli/gremlin/serverless.md)| Creates a serverless Azure Cosmos DB account, database, and graph for Gremlin API. |
-| [Create an Azure Cosmos account, database and graph with autoscale](../scripts/cli/gremlin/autoscale.md)| Creates an Azure Cosmos DB account, database, and graph with autoscale for Gremlin API. |
-| [Perform throughput operations](../scripts/cli/gremlin/throughput.md) | Read, update and migrate between autoscale and standard throughput on a database and graph.|
-| [Lock resources from deletion](../scripts/cli/gremlin/lock.md)| Prevent resources from being deleted with resource locks.|
-|||
-
-## Common API Samples
-
-These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core) API account, but these operations are identical across all database APIs in Cosmos DB.
-
-|Task | Description |
-|||
-| [Add or fail over regions](../scripts/cli/common/regions.md) | Add a region, change failover priority, trigger a manual failover.|
-| [Perform account key operations](../scripts/cli/common/keys.md) | List account keys, read-only keys, regenerate keys and list connection strings.|
-| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.|
-| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.|
-| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
-| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.|
-|||
-
-## Next steps
-
-Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb).
-
-For Azure CLI samples for other APIs see:
--- [CLI Samples for Cassandra](../cassandr)-- [CLI Samples for MongoDB API](../mongodb/cli-samples.md)-- [CLI Samples for SQL](../sql/cli-samples.md)-- [CLI Samples for Table](../table/cli-samples.md)
cosmos-db Create Graph Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/create-graph-console.md
- Title: 'Query with Azure Cosmos DB Gremlin API using TinkerPop Gremlin Console: Tutorial'
-description: An Azure Cosmos DB quickstart that creates vertices, edges, and queries using the Azure Cosmos DB Gremlin API.
--- Previously updated : 07/10/2020----
-# Quickstart: Create, query, and traverse an Azure Cosmos DB graph database using the Gremlin console
-
-> [!div class="op_single_selector"]
-> * [Gremlin console](create-graph-console.md)
-> * [.NET](create-graph-dotnet.md)
-> * [Java](create-graph-java.md)
-> * [Node.js](create-graph-nodejs.md)
-> * [Python](create-graph-python.md)
-> * [PHP](create-graph-php.md)
->
-
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
-
-This quickstart demonstrates how to create an Azure Cosmos DB [Gremlin API](graph-introduction.md) account, database, and graph (container) using the Azure portal and then use the [Gremlin Console](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console) from [Apache TinkerPop](https://tinkerpop.apache.org) to work with Gremlin API data. In this tutorial, you create and query vertices and edges, update a vertex property, query vertices, traverse the graph, and drop a vertex.
--
-The Gremlin console is Groovy/Java based and runs on Linux, Mac, and Windows. You can download it from the [Apache TinkerPop site](https://tinkerpop.apache.org/download.html).
-
-## Prerequisites
-
-You need to have an Azure subscription to create an Azure Cosmos DB account for this quickstart.
--
-You also need to install the [Gremlin Console](https://tinkerpop.apache.org/download.html). The **recommended version is v3.4.13**. (To use the Gremlin Console on Windows, you need to install a [Java Runtime](https://www.oracle.com/technetwork/java/javase/overview/index.html); Java 8 is the minimum requirement, but Java 11 is preferable.)
-
-## Create a database account
--
-## Add a graph
--
-## <a id="ConnectAppService"></a>Connect to your app service/Graph
-
-1. Before starting the Gremlin Console, create or modify the remote-secure.yaml configuration file in the `apache-tinkerpop-gremlin-console-3.2.5/conf` directory.
-2. Fill in your *host*, *port*, *username*, *password*, *connectionPool*, and *serializer* configurations as defined in the following table:
-
- Setting|Suggested value|Description
- ||
- hosts|[*account-name*.**gremlin**.cosmos.azure.com]|See the following screenshot. This is the **Gremlin URI** value on the Overview page of the Azure portal, in square brackets, with the trailing :443/ removed. Note: Be sure to use the Gremlin value, and **not** the URI that ends with [*account-name*.documents.azure.com] which would likely result in a "Host did not respond in a timely fashion" exception when attempting to execute Gremlin queries later.
- port|443|Set to 443.
- username|*Your username*|The resource of the form `/dbs/<db>/colls/<coll>` where `<db>` is your database name and `<coll>` is your collection name.
- password|*Your primary key*| See second screenshot below. This is your primary key, which you can retrieve from the Keys page of the Azure portal, in the Primary Key box. Use the copy button on the left side of the box to copy the value.
- connectionPool|{enableSsl: true}|Your connection pool setting for TLS.
- serializer|{ className: org.apache.tinkerpop.gremlin.<br>driver.ser.GraphSONMessageSerializerV2d0,<br> config: { serializeResultToString: true }}|Set to this value and delete any `\n` line breaks when pasting in the value.
-
- For the hosts value, copy the **Gremlin URI** value from the **Overview** page:
-
- :::image type="content" source="./media/create-graph-console/gremlin-uri.png" alt-text="View and copy the Gremlin URI value on the Overview page in the Azure portal":::
-
- For the password value, copy the **Primary key** from the **Keys** page:
-
- :::image type="content" source="./media/create-graph-console/keys.png" alt-text="View and copy your primary key in the Azure portal, Keys page":::
-
- Your remote-secure.yaml file should look like this:
-
- ```yaml
- hosts: [your_database_server.gremlin.cosmos.azure.com]
- port: 443
- username: /dbs/your_database/colls/your_collection
- password: your_primary_key
- connectionPool: {
- enableSsl: true
- }
- serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV2d0, config: { serializeResultToString: true }}
- ```
-
    Make sure to wrap the value of the hosts parameter within brackets [].
-
-1. In your terminal, run `bin/gremlin.bat` or `bin/gremlin.sh` to start the [Gremlin Console](https://tinkerpop.apache.org/docs/3.2.5/tutorials/getting-started/).
-
-1. In your terminal, run `:remote connect tinkerpop.server conf/remote-secure.yaml` to connect to your app service.
-
- > [!TIP]
- > If you receive the error `No appenders could be found for logger` ensure that you updated the serializer value in the remote-secure.yaml file as described in step 2. If your configuration is correct, then this warning can be safely ignored as it should not impact the use of the console.
-
-1. Next run `:remote console` to redirect all console commands to the remote server.
-
- > [!NOTE]
- > If you don't run the `:remote console` command but would like to redirect all console commands to the remote server, you should prefix the command with `:>`, for example you should run the command as `:> g.V().count()`. This prefix is a part of the command and it is important when using the Gremlin console with Azure Cosmos DB. Omitting this prefix instructs the console to execute the command locally, often against an in-memory graph. Using this prefix `:>` tells the console to execute a remote command, in this case against Azure Cosmos DB (either the localhost emulator, or an Azure instance).
-
-Great! Now that we finished the setup, let's start running some console commands.
-
-Let's try a simple count() command. Type the following into the console at the prompt:
-
-```console
-g.V().count()
-```
-
-## Create vertices and edges
-
-Let's begin by adding five person vertices for *Thomas*, *Mary Kay*, *Robin*, *Ben*, and *Jack*.
-
-Input (Thomas):
-
-```console
-g.addV('person').property('firstName', 'Thomas').property('lastName', 'Andersen').property('age', 44).property('userid', 1).property('pk', 'pk')
-```
-
-Output:
-
-```bash
-==>[id:796cdccc-2acd-4e58-a324-91d6f6f5ed6d,label:person,type:vertex,properties:[firstName:[[id:f02a749f-b67c-4016-850e-910242d68953,value:Thomas]],lastName:[[id:f5fa3126-8818-4fda-88b0-9bb55145ce5c,value:Andersen]],age:[[id:f6390f9c-e563-433e-acbf-25627628016e,value:44]],userid:[[id:796cdccc-2acd-4e58-a324-91d6f6f5ed6d|userid,value:1]]]]
-```
-
-Input (Mary Kay):
-
-```console
-g.addV('person').property('firstName', 'Mary Kay').property('lastName', 'Andersen').property('age', 39).property('userid', 2).property('pk', 'pk')
-
-```
-
-Output:
-
-```bash
-==>[id:0ac9be25-a476-4a30-8da8-e79f0119ea5e,label:person,type:vertex,properties:[firstName:[[id:ea0604f8-14ee-4513-a48a-1734a1f28dc0,value:Mary Kay]],lastName:[[id:86d3bba5-fd60-4856-9396-c195ef7d7f4b,value:Andersen]],age:[[id:bc81b78d-30c4-4e03-8f40-50f72eb5f6da,value:39]],userid:[[id:0ac9be25-a476-4a30-8da8-e79f0119ea5e|userid,value:2]]]]
-
-```
-
-Input (Robin):
-
-```console
-g.addV('person').property('firstName', 'Robin').property('lastName', 'Wakefield').property('userid', 3).property('pk', 'pk')
-```
-
-Output:
-
-```bash
-==>[id:8dc14d6a-8683-4a54-8d74-7eef1fb43a3e,label:person,type:vertex,properties:[firstName:[[id:ec65f078-7a43-4cbe-bc06-e50f2640dc4e,value:Robin]],lastName:[[id:a3937d07-0e88-45d3-a442-26fcdfb042ce,value:Wakefield]],userid:[[id:8dc14d6a-8683-4a54-8d74-7eef1fb43a3e|userid,value:3]]]]
-```
-
-Input (Ben):
-
-```console
-g.addV('person').property('firstName', 'Ben').property('lastName', 'Miller').property('userid', 4).property('pk', 'pk')
-
-```
-
-Output:
-
-```bash
-==>[id:ee86b670-4d24-4966-9a39-30529284b66f,label:person,type:vertex,properties:[firstName:[[id:a632469b-30fc-4157-840c-b80260871e9a,value:Ben]],lastName:[[id:4a08d307-0719-47c6-84ae-1b0b06630928,value:Miller]],userid:[[id:ee86b670-4d24-4966-9a39-30529284b66f|userid,value:4]]]]
-```
-
-Input (Jack):
-
-```console
-g.addV('person').property('firstName', 'Jack').property('lastName', 'Connor').property('userid', 5).property('pk', 'pk')
-```
-
-Output:
-
-```bash
-==>[id:4c835f2a-ea5b-43bb-9b6b-215488ad8469,label:person,type:vertex,properties:[firstName:[[id:4250824e-4b72-417f-af98-8034aa15559f,value:Jack]],lastName:[[id:44c1d5e1-a831-480a-bf94-5167d133549e,value:Connor]],userid:[[id:4c835f2a-ea5b-43bb-9b6b-215488ad8469|userid,value:5]]]]
-```
--
-Next, let's add edges for relationships between our people.
-
-Input (Thomas -> Mary Kay):
-
-```console
-g.V().hasLabel('person').has('firstName', 'Thomas').addE('knows').to(g.V().hasLabel('person').has('firstName', 'Mary Kay'))
-```
-
-Output:
-
-```bash
-==>[id:c12bf9fb-96a1-4cb7-a3f8-431e196e702f,label:knows,type:edge,inVLabel:person,outVLabel:person,inV:0d1fa428-780c-49a5-bd3a-a68d96391d5c,outV:1ce821c6-aa3d-4170-a0b7-d14d2a4d18c3]
-```
-
-Input (Thomas -> Robin):
-
-```console
-g.V().hasLabel('person').has('firstName', 'Thomas').addE('knows').to(g.V().hasLabel('person').has('firstName', 'Robin'))
-```
-
-Output:
-
-```bash
-==>[id:58319bdd-1d3e-4f17-a106-0ddf18719d15,label:knows,type:edge,inVLabel:person,outVLabel:person,inV:3e324073-ccfc-4ae1-8675-d450858ca116,outV:1ce821c6-aa3d-4170-a0b7-d14d2a4d18c3]
-```
-
-Input (Robin -> Ben):
-
-```console
-g.V().hasLabel('person').has('firstName', 'Robin').addE('knows').to(g.V().hasLabel('person').has('firstName', 'Ben'))
-```
-
-Output:
-
-```bash
-==>[id:889c4d3c-549e-4d35-bc21-a3d1bfa11e00,label:knows,type:edge,inVLabel:person,outVLabel:person,inV:40fd641d-546e-412a-abcc-58fe53891aab,outV:3e324073-ccfc-4ae1-8675-d450858ca116]
-```
-
-## Update a vertex
-
-Let's update the *Thomas* vertex with a new age of *45*.
-
-Input:
-```console
-g.V().hasLabel('person').has('firstName', 'Thomas').property('age', 45)
-```
-Output:
-
-```bash
-==>[id:ae36f938-210e-445a-92df-519f2b64c8ec,label:person,type:vertex,properties:[firstName:[[id:872090b6-6a77-456a-9a55-a59141d4ebc2,value:Thomas]],lastName:[[id:7ee7a39a-a414-4127-89b4-870bc4ef99f3,value:Andersen]],age:[[id:a2a75d5a-ae70-4095-806d-a35abcbfe71d,value:45]]]]
-```
-
-## Query your graph
-
-Now, let's run a variety of queries against your graph.
-
-First, let's try a query with a filter to return only people who are older than 40 years old.
-
-Input (filter query):
-
-```console
-g.V().hasLabel('person').has('age', gt(40))
-```
-
-Output:
-
-```bash
-==>[id:ae36f938-210e-445a-92df-519f2b64c8ec,label:person,type:vertex,properties:[firstName:[[id:872090b6-6a77-456a-9a55-a59141d4ebc2,value:Thomas]],lastName:[[id:7ee7a39a-a414-4127-89b4-870bc4ef99f3,value:Andersen]],age:[[id:a2a75d5a-ae70-4095-806d-a35abcbfe71d,value:45]]]]
-```
-
-Next, let's project the first name for the people who are older than 40 years old.
-
-Input (filter + projection query):
-
-```console
-g.V().hasLabel('person').has('age', gt(40)).values('firstName')
-```
-
-Output:
-
-```bash
-==>Thomas
-```
-
-## Traverse your graph
-
-Let's traverse the graph to return all of Thomas's friends.
-
-Input (friends of Thomas):
-
-```console
-g.V().hasLabel('person').has('firstName', 'Thomas').outE('knows').inV().hasLabel('person')
-```
-
-Output:
-
-```bash
-==>[id:f04bc00b-cb56-46c4-a3bb-a5870c42f7ff,label:person,type:vertex,properties:[firstName:[[id:14feedec-b070-444e-b544-62be15c7167c,value:Mary Kay]],lastName:[[id:107ab421-7208-45d4-b969-bbc54481992a,value:Andersen]],age:[[id:4b08d6e4-58f5-45df-8e69-6b790b692e0a,value:39]]]]
-==>[id:91605c63-4988-4b60-9a30-5144719ae326,label:person,type:vertex,properties:[firstName:[[id:f760e0e6-652a-481a-92b0-1767d9bf372e,value:Robin]],lastName:[[id:352a4caa-bad6-47e3-a7dc-90ff342cf870,value:Wakefield]]]]
-```
-
-Next, let's get the next layer of vertices. Traverse the graph to return all the friends of Thomas's friends.
-
-Input (friends of friends of Thomas):
-
-```console
-g.V().hasLabel('person').has('firstName', 'Thomas').outE('knows').inV().hasLabel('person').outE('knows').inV().hasLabel('person')
-```
-Output:
-
-```bash
-==>[id:a801a0cb-ee85-44ee-a502-271685ef212e,label:person,type:vertex,properties:[firstName:[[id:b9489902-d29a-4673-8c09-c2b3fe7f8b94,value:Ben]],lastName:[[id:e084f933-9a4b-4dbc-8273-f0171265cf1d,value:Miller]]]]
-```
-
-## Drop a vertex
-
-Let's now delete a vertex from the graph database.
-
-Input (drop Jack vertex):
-
-```console
-g.V().hasLabel('person').has('firstName', 'Jack').drop()
-```
-
-## Clear your graph
-
-Finally, let's clear the database of all vertices and edges.
-
-Input:
-
-```console
-g.E().drop()
-g.V().drop()
-```
-
-Congratulations! You've completed this Azure Cosmos DB: Gremlin API tutorial!
-
-## Review SLAs in the Azure portal
--
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you've learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, create vertices and edges, and traverse your graph using the Gremlin console. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
-
-> [!div class="nextstepaction"]
-> [Query using Gremlin](tutorial-query-graph.md)
cosmos-db Create Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/create-graph-dotnet.md
- Title: Build an Azure Cosmos DB .NET Framework, Core application using the Gremlin API
-description: Presents a .NET Framework/Core code sample you can use to connect to and query Azure Cosmos DB
----- Previously updated : 05/02/2020--
-# Quickstart: Build a .NET Framework or Core application using the Azure Cosmos DB Gremlin API account
-
-> [!div class="op_single_selector"]
-> * [Gremlin console](create-graph-console.md)
-> * [.NET](create-graph-dotnet.md)
-> * [Java](create-graph-java.md)
-> * [Node.js](create-graph-nodejs.md)
-> * [Python](create-graph-python.md)
-> * [PHP](create-graph-php.md)
->
-
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
-
-This quickstart demonstrates how to create an Azure Cosmos DB [Gremlin API](graph-introduction.md) account, database, and graph (container) using the Azure portal. You then build and run a console app built using the open-source driver [Gremlin.Net](https://tinkerpop.apache.org/docs/3.2.7/reference/#gremlin-DotNet).
-
-## Prerequisites
-
-Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
--
-## Create a database account
--
-## Add a graph
--
-## Clone the sample application
-
-Now let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
-
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
-
- ```bash
- md "C:\git-samples"
- ```
-
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
-
- ```bash
- cd "C:\git-samples"
- ```
-
-3. Run the following command to clone the sample repository. The ``git clone`` command creates a copy of the sample app on your computer.
-
- ```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-gremlindotnet-getting-started.git
- ```
-
-4. Then open Visual Studio and open the solution file.
-
-5. Restore the NuGet packages in the project. The restore operation should include the Gremlin.Net driver, and the Newtonsoft.Json package.
-
-6. You can also install the Gremlin.Net@v3.4.13 driver manually using the NuGet package manager, or the [NuGet command-line utility](/nuget/install-nuget-client-tools):
-
- ```bash
- nuget install Gremlin.NET -Version 3.4.13
- ```
-
-> [!NOTE]
-> The supported Gremlin.NET driver version for Gremlin API is available [here](gremlin-support.md#compatible-client-libraries). Latest released versions of Gremlin.NET may see incompatibilities, so please check the linked table for compatibility updates.
-
-## Review the code
-
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
-
-The following snippets are all taken from the Program.cs file.
-
-* Set your connection parameters based on the account created above:
-
- :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="configureConnectivity":::
-
-* The Gremlin commands to be executed are listed in a Dictionary:
-
- :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="defineQueries":::
-
-* Create new `GremlinServer` and `GremlinClient` connection objects using the parameters provided above:
-
- :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="defineClientandServerObjects":::
-
-* Execute each Gremlin query using the `GremlinClient` object with an async task. You can read the Gremlin queries from the dictionary defined in the previous step and execute them. Later get the result and read the values, which are formatted as a dictionary, using the `JsonSerializer` class from Newtonsoft.Json package:
-
- :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="executeQueries":::
-
-## Update your connection string
-
-Now go back to the Azure portal to get your connection string information and copy it into the app.
-
-1. From the [Azure portal](https://portal.azure.com/), navigate to your graph database account. In the **Overview** tab, you can see two endpoints-
-
- **.NET SDK URI** - This value is used when you connect to the graph account by using Microsoft.Azure.Graphs library.
-
- **Gremlin Endpoint** - This value is used when you connect to the graph account by using Gremlin.Net library.
-
- :::image type="content" source="./media/create-graph-dotnet/endpoint.png" alt-text="Copy the endpoint":::
-
- For this sample, record the *Host* value of the **Gremlin Endpoint**. For example, if the URI is ``https://graphtest.gremlin.cosmosdb.azure.com``, the *Host* value would be ``graphtest.gremlin.cosmosdb.azure.com``.
-
-1. Next, navigate to the **Keys** tab and record the *PRIMARY KEY* value from the Azure portal.
-
-1. After you've copied the URI and PRIMARY KEY of your account, save them to a new environment variable on the local machine running the application. To set the environment variable, open a command prompt window, and run the following command. Make sure to replace ``<cosmos-account-name>`` and ``<cosmos-account-primary-key>`` values.
-
- ### [Windows](#tab/windows)
-
- ```powershell
- setx Host "<cosmos-account-name>.gremlin.cosmosdb.azure.com"
- setx PrimaryKey "<cosmos-account-primary-key>"
- ```
-
- ### [Linux / macOS](#tab/linux+macos)
-
- ```bash
- export Host=<cosmos-account-name>.gremlin.cosmosdb.azure.com
- export PrimaryKey=<cosmos-account-primary-key>
- ```
-
-
-
-1. Open the *Program.cs* file and update the "database" and "container" variables with the database and container (which is also the graph name) names created above.
-
- `private static string database = "your-database-name";`
- `private static string container = "your-container-or-graph-name";`
-
-1. Save the Program.cs file.
-
-You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
-
-## Run the console app
-
-Select CTRL + F5 to run the application. The application will print both the Gremlin query commands and results in the console.
-
- The console window displays the vertexes and edges being added to the graph. When the script completes, press ENTER to close the console window.
-
-## Browse using the Data Explorer
-
-You can now go back to Data Explorer in the Azure portal and browse and query your new graph data.
-
-1. In Data Explorer, the new database appears in the Graphs pane. Expand the database and container nodes, and then select **Graph**.
-
-2. Select the **Apply Filter** button to use the default query to view all the vertices in the graph. The data generated by the sample app is displayed in the Graphs pane.
-
- You can zoom in and out of the graph, you can expand the graph display space, add extra vertices, and move vertices on the display surface.
-
- :::image type="content" source="./media/create-graph-dotnet/graph-explorer.png" alt-text="View the graph in Data Explorer in the Azure portal":::
-
-## Review SLAs in the Azure portal
--
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you've learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, and run an app. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
-
-> [!div class="nextstepaction"]
-> [Query using Gremlin](tutorial-query-graph.md)
cosmos-db Create Graph Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/create-graph-java.md
- Title: Build a graph database with Java in Azure Cosmos DB
-description: Presents a Java code sample you can use to connect to and query graph data in Azure Cosmos DB using Gremlin.
--- Previously updated : 03/26/2019-----
-# Quickstart: Build a graph database with the Java SDK and the Azure Cosmos DB Gremlin API
-
-> [!div class="op_single_selector"]
-> * [Gremlin console](create-graph-console.md)
-> * [.NET](create-graph-dotnet.md)
-> * [Java](create-graph-java.md)
-> * [Node.js](create-graph-nodejs.md)
-> * [Python](create-graph-python.md)
-> * [PHP](create-graph-php.md)
->
-
-In this quickstart, you create and manage an Azure Cosmos DB Gremlin (graph) API account from the Azure portal, and add data by using a Java app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-
-## Prerequisites
-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.-- A [Maven binary archive](https://maven.apache.org/download.cgi). -- [Git](https://www.git-scm.com/downloads). -- [Gremlin-driver 3.4.13](https://mvnrepository.com/artifact/org.apache.tinkerpop/gremlin-driver/3.4.13), this dependency is mentioned in the quickstart sample's pom.xml-
-## Create a database account
-
-Before you can create a graph database, you need to create a Gremlin (Graph) database account with Azure Cosmos DB.
--
-## Add a graph
--
-## Clone the sample application
-
-Now let's switch to working with code. Let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
-
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
-
- ```bash
- md "C:\git-samples"
- ```
-
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to a folder to install the sample app.
-
- ```bash
- cd "C:\git-samples"
- ```
-
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
-
- ```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-java-getting-started.git
- ```
-
-## Review the code
-
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-information).
-
-The following snippets are all taken from the *C:\git-samples\azure-cosmos-db-graph-java-getting-started\src\GetStarted\Program.java* file.
-
-This Java console app uses a [Gremlin API](graph-introduction.md) database with the OSS [Apache TinkerPop](https://tinkerpop.apache.org/) driver.
-
-- The Gremlin `Client` is initialized from the configuration in the *C:\git-samples\azure-cosmos-db-graph-java-getting-started\src\remote.yaml* file.
-
- ```java
- cluster = Cluster.build(new File("src/remote.yaml")).create();
- ...
- client = cluster.connect();
- ```
-
-- A series of Gremlin steps is executed using the `client.submit` method.
-
- ```java
- ResultSet results = client.submit(gremlin);
-
- CompletableFuture<List<Result>> completableFutureResults = results.all();
- List<Result> resultList = completableFutureResults.get();
-
- for (Result result : resultList) {
- System.out.println(result.toString());
- }
- ```
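-
-If you'd rather not rely on *remote.yaml*, the same connection can also be built in code. The following is a minimal sketch, not part of the sample: the host, key, and the `sample-database`/`sample-graph` names are placeholders you replace with your own values, and it also shows passing bindings to `client.submit` instead of concatenating values into the query string.
-
-```java
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import org.apache.tinkerpop.gremlin.driver.Client;
-import org.apache.tinkerpop.gremlin.driver.Cluster;
-import org.apache.tinkerpop.gremlin.driver.Result;
-import org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV2d0;
-
-public class ParameterizedQuery {
-    public static void main(String[] args) throws Exception {
-        // Build the cluster programmatically instead of loading remote.yaml.
-        Cluster cluster = Cluster.build()
-            .addContactPoint("<your-account>.gremlin.cosmosdb.azure.com") // placeholder host
-            .port(443)
-            .enableSsl(true)
-            .serializer(new GraphSONMessageSerializerV2d0())
-            .credentials("/dbs/sample-database/colls/sample-graph", "<your-primary-key>")
-            .create();
-        Client client = cluster.connect();
-
-        // Bindings keep the values out of the Gremlin string itself.
-        Map<String, Object> bindings = new HashMap<>();
-        bindings.put("vertexId", "ben"); // example id; use any vertex id from your graph
-        List<Result> results = client.submit("g.V(vertexId).out()", bindings).all().get();
-        for (Result result : results) {
-            System.out.println(result.toString());
-        }
-
-        cluster.close();
-    }
-}
-```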
-
-## Update your connection information
-
-Now go back to the Azure portal to get your connection information and copy it into the app. These settings enable your app to communicate with your hosted database.
-
-1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Keys**.
-
- Copy the first portion of the URI value.
-
- :::image type="content" source="./media/create-graph-java/copy-access-key-azure-portal.png" alt-text="View and copy an access key in the Azure portal, Keys page":::
-
-2. Open the *src/remote.yaml* file and paste the unique ID value over `$name$` in `hosts: [$name$.graphs.azure.com]`.
-
- Line 1 of *remote.yaml* should now look similar to
-
- `hosts: [test-graph.graphs.azure.com]`
-
-3. In the `hosts` value, change `graphs` to `gremlin.cosmosdb`. (If you created your graph database account before December 20, 2017, make no changes to the `hosts` value and continue to the next step.)
-
-   Line 1 of *remote.yaml* should now look like this:
-
-   `hosts: [test-graph.gremlin.cosmosdb.azure.com]`
-
-4. In the Azure portal, use the copy button to copy the PRIMARY KEY and paste it over `$masterKey$` in `password: $masterKey$`.
-
- Line 4 of *remote.yaml* should now look similar to
-
- `password: 2Ggkr662ifxz2Mg==`
-
-5. Change line 3 of *remote.yaml* from
-
- `username: /dbs/$database$/colls/$collection$`
-
- to
-
- `username: /dbs/sample-database/colls/sample-graph`
-
- If you used a unique name for your sample database or graph, update the values as appropriate.
-
-6. Save the *remote.yaml* file.
-
-## Run the console app
-
-1. In the git terminal window, `cd` to the azure-cosmos-db-graph-java-getting-started folder.
-
- ```git
- cd "C:\git-samples\azure-cosmos-db-graph-java-getting-started"
- ```
-
-2. In the git terminal window, use the following command to install the required Java packages.
-
- ```git
- mvn package
- ```
-
-3. In the git terminal window, use the following command to start the Java application.
-
- ```git
- mvn exec:java -D exec.mainClass=GetStarted.Program
- ```
-
- The terminal window displays the vertices being added to the graph.
-
- If you experience timeout errors, check that you updated the connection information correctly in [Update your connection information](#update-your-connection-information), and also try running the last command again.
-
-   Once the program stops, press Enter, then switch back to the Azure portal in your internet browser.
-
-<a id="add-sample-data"></a>
-## Review and add sample data
-
-You can now go back to Data Explorer and see the vertices added to the graph, and add additional data points.
-
-1. In your Azure Cosmos DB account in the Azure portal, select **Data Explorer**, expand **sample-graph**, select **Graph**, and then select **Apply Filter**.
-
- :::image type="content" source="./media/create-graph-java/azure-cosmosdb-data-explorer-expanded.png" alt-text="Screenshot shows Graph selected from the A P I with the option to Apply Filter.":::
-
-2. In the **Results** list, notice the new users added to the graph. Select **ben** and notice that the user is connected to robin. You can move the vertices around by dragging and dropping, zoom in and out by scrolling the wheel of your mouse, and expand the size of the graph with the double-arrow.
-
- :::image type="content" source="./media/create-graph-java/azure-cosmosdb-graph-explorer-new.png" alt-text="New vertices in the graph in Data Explorer in the Azure portal":::
-
-3. Let's add a few new users. Select **New Vertex** to add data to your graph.
-
- :::image type="content" source="./media/create-graph-java/azure-cosmosdb-data-explorer-new-vertex.png" alt-text="Screenshot shows the New Vertex pane where you can enter values.":::
-
-4. In the label box, enter *person*.
-
-5. Select **Add property** to add each of the following properties. Notice that you can create unique properties for each person in your graph. Only the id key is required.
-
- key|value|Notes
- -|-|-
- id|ashley|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
- gender|female|
- tech | java |
-
- > [!NOTE]
- > In this quickstart you create a non-partitioned collection. However, if you create a partitioned collection by specifying a partition key during the collection creation, then you need to include the partition key as a key in each new vertex.
-
-6. Select **OK**. You may need to expand your screen to see **OK** on the bottom of the screen.
-
-7. Select **New Vertex** again and add an additional new user.
-
-8. Enter a label of *person*.
-
-9. Select **Add property** to add each of the following properties:
-
- key|value|Notes
- -|-|-
- id|rakesh|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
- gender|male|
- school|MIT|
-
-10. Select **OK**.
-
-11. Select the **Apply Filter** button with the default `g.V()` filter to display all the values in the graph. All of the users now show in the **Results** list.
-
- As you add more data, you can use filters to limit your results. By default, Data Explorer uses `g.V()` to retrieve all vertices in a graph. You can change it to a different [graph query](tutorial-query-graph.md), such as `g.V().count()`, to return a count of all the vertices in the graph in JSON format. If you changed the filter, change the filter back to `g.V()` and select **Apply Filter** to display all the results again.
-
-12. Now you can connect rakesh and ashley. Ensure **ashley** is selected in the **Results** list, then select :::image type="content" source="./media/create-graph-java/edit-pencil-button.png" alt-text="Change the target of a vertex in a graph"::: next to **Targets** on the lower right side. You may need to widen your window to see the button.
-
- :::image type="content" source="./media/create-graph-java/azure-cosmosdb-data-explorer-edit-target.png" alt-text="Change the target of a vertex in a graph - Azure CosmosDB":::
-
-13. In the **Target** box enter *rakesh*, and in the **Edge label** box enter *knows*, and then select the check box.
-
- :::image type="content" source="./media/create-graph-java/azure-cosmosdb-data-explorer-set-target.png" alt-text="Add a connection in Data Explorer - Azure CosmosDB":::
-
-14. Now select **rakesh** from the results list and see that ashley and rakesh are connected.
-
- :::image type="content" source="./media/create-graph-java/azure-cosmosdb-graph-explorer.png" alt-text="Two vertices connected in Data Explorer - Azure CosmosDB":::
-
-That completes the resource creation part of this quickstart. You can continue to add vertices to your graph, modify the existing vertices, or change the queries. Now let's review the metrics Azure Cosmos DB provides, and then clean up the resources.
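-
-If you want to experiment with the queries first, here are a few traversals you can try as the Data Explorer filter. This is only a sketch; the vertex IDs assume the **ashley** and **rakesh** vertices you created earlier in this quickstart.
-
-```java
-// count the person vertices in the graph
-g.V().hasLabel('person').count()
-
-// list who ashley knows, following the 'knows' edge you added
-g.V('ashley').out('knows').values('id')
-
-// return only the vertices whose gender property is 'female'
-g.V().has('gender', 'female')
-```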
-
-## Review SLAs in the Azure portal
--
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, and run a Java app that adds data to the graph. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
-
-> [!div class="nextstepaction"]
-> [Query using Gremlin](tutorial-query-graph.md)
cosmos-db Create Graph Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/create-graph-nodejs.md
- Title: Build an Azure Cosmos DB Node.js application by using Gremlin API
-description: Presents a Node.js code sample you can use to connect to and query Azure Cosmos DB
- Previously updated : 06/05/2019
-# Quickstart: Build a Node.js application by using Azure Cosmos DB Gremlin API account
-
-> [!div class="op_single_selector"]
-> * [Gremlin console](create-graph-console.md)
-> * [.NET](create-graph-dotnet.md)
-> * [Java](create-graph-java.md)
-> * [Node.js](create-graph-nodejs.md)
-> * [Python](create-graph-python.md)
-> * [PHP](create-graph-php.md)
->
-
-In this quickstart, you create and manage an Azure Cosmos DB Gremlin (graph) API account from the Azure portal, and add data by using a Node.js app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-
-## Prerequisites
-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-- [Node.js 0.10.29+](https://nodejs.org/).
-- [Git](https://git-scm.com/downloads).
-
-## Create a database account
--
-## Add a graph
--
-## Clone the sample application
-
-Now let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
-
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
-
- ```bash
- md "C:\git-samples"
- ```
-
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
-
- ```bash
- cd "C:\git-samples"
- ```
-
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
-
- ```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-nodejs-getting-started.git
- ```
-
-4. Open the solution file in Visual Studio.
-
-## Review the code
-
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
-
-The following snippets are all taken from the *app.js* file.
-
-This console app uses the open-source [Gremlin Node.js](https://www.npmjs.com/package/gremlin) driver.
-
-* The Gremlin client is created.
-
- ```javascript
- const authenticator = new Gremlin.driver.auth.PlainTextSaslAuthenticator(
- `/dbs/${config.database}/colls/${config.collection}`,
- config.primaryKey
- )
-
- const client = new Gremlin.driver.Client(
- config.endpoint,
- {
- authenticator,
- traversalsource : "g",
- rejectUnauthorized : true,
- mimeType : "application/vnd.gremlin-v2.0+json"
- }
- );
-
- ```
-
- The configurations are all in *config.js*, which we edit in the [following section](#update-your-connection-string).
-
-* A series of functions are defined to execute different Gremlin operations. This is one of them:
-
- ```javascript
- function addVertex1()
- {
- console.log('Running Add Vertex1');
- return client.submit("g.addV(label).property('id', id).property('firstName', firstName).property('age', age).property('userid', userid).property('pk', 'pk')", {
- label:"person",
- id:"thomas",
- firstName:"Thomas",
- age:44, userid: 1
- }).then(function (result) {
- console.log("Result: %s\n", JSON.stringify(result));
- });
- }
- ```
-
-* Each function executes a `client.submit` method with a Gremlin query string parameter. Here is an example of how `g.V().count()` is executed:
-
- ```javascript
- function countVertices()
- {
- console.log('Running Count');
- return client.submit("g.V().count()", { }).then(function (result) {
- console.log("Result: %s\n", JSON.stringify(result));
- });
- }
- ```
-
-* At the end of the file, all methods are then invoked. This will execute them one after the other:
-
- ```javascript
- client.open()
- .then(dropGraph)
- .then(addVertex1)
- .then(addVertex2)
- .then(addEdge)
- .then(countVertices)
- .catch((err) => {
- console.error("Error running query...");
- console.error(err)
- }).then((res) => {
- client.close();
- finish();
- }).catch((err) =>
- console.error("Fatal error:", err)
- );
- ```
--
-## Update your connection string
-
-1. Open the *config.js* file.
-
-2. In *config.js*, fill in the `config.endpoint` key with the **Gremlin Endpoint** value from the **Overview** page of your Cosmos DB account in the Azure portal.
-
- `config.endpoint = "https://<your_Gremlin_account_name>.gremlin.cosmosdb.azure.com:443/";`
-
- :::image type="content" source="./media/create-graph-nodejs/gremlin-uri.png" alt-text="View and copy an access key in the Azure portal, Overview page":::
-
-3. In *config.js*, fill in the `config.primaryKey` value with the **Primary Key** value from the **Keys** page of your Cosmos DB account in the Azure portal.
-
- `config.primaryKey = "PRIMARYKEY";`
-
- :::image type="content" source="./media/create-graph-nodejs/keys.png" alt-text="Azure portal keys blade":::
-
-4. Enter the database name and graph (container) name for the values of `config.database` and `config.collection`.
-
-Here's an example of what your completed *config.js* file should look like:
-
-```javascript
-var config = {}
-
-// Note that this must include the protocol (HTTPS:// for .NET SDK URI or wss:// for Gremlin Endpoint) and the port number
-config.endpoint = "https://testgraphacct.gremlin.cosmosdb.azure.com:443/";
-config.primaryKey = "Pams6e7LEUS7LJ2Qk0fjZf3eGo65JdMWHmyn65i52w8ozPX2oxY3iP0yu05t9v1WymAHNcMwPIqNAEv3XDFsEg==";
-config.database = "graphdb"
-config.collection = "Persons"
-
-module.exports = config;
-```
-
-## Run the console app
-
-1. Open a terminal window and use the `cd` command to change to the folder that contains the project's *package.json* file.
-
-2. Run `npm install` to install the required npm modules, including `gremlin`.
-
-3. Run `node app.js` in a terminal to start your node application.
-
-## Browse with Data Explorer
-
-You can now go back to Data Explorer in the Azure portal to view, query, modify, and work with your new graph data.
-
-In Data Explorer, the new database appears in the **Graphs** pane. Expand the database, followed by the container, and then select **Graph**.
-
-The data generated by the sample app is displayed in the next pane within the **Graph** tab when you select **Apply Filter**.
-
-Try appending `.has('firstName', 'Thomas')` to `g.V()` to test the filter. Note that the value is case sensitive.
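-
-For example, the completed filter is the following Gremlin traversal:
-
-```java
-// return only the vertices whose firstName property is exactly 'Thomas'
-g.V().has('firstName', 'Thomas')
-```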
-
-## Review SLAs in the Azure portal
--
-## Clean up your resources
--
-## Next steps
-
-In this article, you learned how to create an Azure Cosmos DB account, create a graph by using Data Explorer, and run a Node.js app to add data to the graph. You can now build more complex queries and implement powerful graph traversal logic by using Gremlin.
-
-> [!div class="nextstepaction"]
-> [Query by using Gremlin](tutorial-query-graph.md)
cosmos-db Create Graph Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/create-graph-php.md
- Title: 'Quickstart: Gremlin API with PHP - Azure Cosmos DB'
-description: Follow this quickstart to run a PHP console application that populates an Azure Cosmos DB Gremlin API database in the Azure portal.
- Previously updated : 06/29/2022
-# Quickstart: Create an Azure Cosmos DB graph database with PHP and the Azure portal
--
-> [!div class="op_single_selector"]
-> * [Gremlin console](create-graph-console.md)
-> * [.NET](create-graph-dotnet.md)
-> * [Java](create-graph-java.md)
-> * [Node.js](create-graph-nodejs.md)
-> * [Python](create-graph-python.md)
-> * [PHP](create-graph-php.md)
->
-
-In this quickstart, you create and use an Azure Cosmos DB [Gremlin (Graph) API](graph-introduction.md) database by using PHP and the Azure portal.
-
-Azure Cosmos DB is Microsoft's multi-model database service that lets you quickly create and query document, table, key-value, and graph databases, with global distribution and horizontal scale capabilities. Azure Cosmos DB provides five APIs: Core (SQL), MongoDB, Gremlin, Azure Table, and Cassandra.
-
-You must create a separate account to use each API. In this article, you create an account for the Gremlin (Graph) API.
-
-This quickstart walks you through the following steps:
-
-- Use the Azure portal to create an Azure Cosmos DB Gremlin (Graph) API account and database.
-- Clone a sample Gremlin API PHP console app from GitHub, and run it to populate your database.
-- Use Data Explorer in the Azure portal to query, add, and connect data in your database.
-
-## Prerequisites
-
-- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] Alternatively, you can [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb) without an Azure subscription.
-- [PHP](https://php.net/) 5.6 or newer installed.
-- [Composer](https://getcomposer.org/download) open-source dependency management tool for PHP installed.
-
-## Create a Gremlin (Graph) database account
-
-First, create a Gremlin (Graph) database account for Azure Cosmos DB.
-
-1. In the [Azure portal](https://portal.azure.com), select **Create a resource** from the left menu.
-
- :::image type="content" source="../includes/media/cosmos-db-create-dbaccount-graph/create-nosql-db-databases-json-tutorial-0.png" alt-text="Screenshot of Create a resource in the Azure portal.":::
-
-1. On the **New** page, select **Databases** > **Azure Cosmos DB**.
-
-1. On the **Select API Option** page, under **Gremlin (Graph)**, select **Create**.
-
-1. On the **Create Azure Cosmos DB Account - Gremlin (Graph)** page, enter the following required settings for the new account:
-
- - **Subscription**: Select the Azure subscription that you want to use for this account.
- - **Resource Group**: Select **Create new**, then enter a unique name for the new resource group.
- - **Account Name**: Enter a unique name between 3-44 characters, using only lowercase letters, numbers, and hyphens. Your account URI is *gremlin.azure.com* appended to your unique account name.
- - **Location**: Select the Azure region to host your Azure Cosmos DB account. Use the location that's closest to your users to give them the fastest access to the data.
-
- :::image type="content" source="../includes/media/cosmos-db-create-dbaccount-graph/azure-cosmos-db-create-new-account.png" alt-text="Screenshot showing the Create Account page for Azure Cosmos DB for a Gremlin (Graph) account.":::
-
-1. For this quickstart, you can leave the other fields and tabs at their default values. Optionally, you can configure more details for the account. See [Optional account settings](#optional-account-settings).
-
-1. Select **Review + create**, and then select **Create**. Deployment takes a few minutes.
-
-1. When the **Your deployment is complete** message appears, select **Go to resource**.
-
- You go to the **Overview** page for the new Azure Cosmos DB account.
-
- :::image type="content" source="../includes/media/cosmos-db-create-dbaccount-graph/azure-cosmos-db-graph-created.png" alt-text="Screenshot showing the Azure Cosmos DB Quick start page.":::
-
-### Optional account settings
-
-Optionally, you can also configure the following settings on the **Create Azure Cosmos DB Account - Gremlin (Graph)** page.
-
-- On the **Basics** tab:
-
- |Setting|Value|Description |
- ||||
- |**Capacity mode**|**Provisioned throughput** or **Serverless**|Select **Provisioned throughput** to create an account in [provisioned throughput](../set-throughput.md) mode. Select **Serverless** to create an account in [serverless](../serverless.md) mode.|
- |**Apply Azure Cosmos DB free tier discount**|**Apply** or **Do not apply**|With Azure Cosmos DB free tier, you get the first 1000 RU/s and 25 GB of storage for free in an account. Learn more about [free tier](https://azure.microsoft.com/pricing/details/cosmos-db/).|
-
- > [!NOTE]
- > You can have up to one free tier Azure Cosmos DB account per Azure subscription and must opt-in when creating the account. If you don't see the option to apply the free tier discount, this means another account in the subscription has already been enabled with free tier.
-
-- On the **Global Distribution** tab:
-
- |Setting|Value|Description |
- ||||
- |**Geo-redundancy**|**Enable** or **Disable**|Enable or disable global distribution on your account by pairing your region with a pair region. You can add more regions to your account later.|
- |**Multi-region Writes**|**Enable** or **Disable**|Multi-region writes capability allows you to take advantage of the provisioned throughput for your databases and containers across the globe.|
-
- > [!NOTE]
- > The following options aren't available if you select **Serverless** as the **Capacity mode**:
- > - **Apply Free Tier Discount**
- > - **Geo-redundancy**
- > - **Multi-region Writes**
-
-- Other tabs:
-
- - **Networking**: Configure [access from a virtual network](../how-to-configure-vnet-service-endpoint.md).
- - **Backup Policy**: Configure either [periodic](../configure-periodic-backup-restore.md) or [continuous](../provision-account-continuous-backup.md) backup policy.
- - **Encryption**: Use either a service-managed key or a [customer-managed key](../how-to-setup-cmk.md#create-a-new-azure-cosmos-account).
- - **Tags**: Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups.
-
-## Add a graph
-
-1. On the Azure Cosmos DB account **Overview** page, select **Add Graph**.
-
- :::image type="content" source="../includes/media/cosmos-db-create-dbaccount-graph/azure-cosmos-db-add-graph.png" alt-text="Screenshot showing the Add Graph on the Azure Cosmos DB account page.":::
-
-1. Fill out the **New Graph** form. For this quickstart, use the following values:
-
- - **Database id**: Enter *sample-database*. Database names must be between 1 and 255 characters, and can't contain `/ \ # ?` or a trailing space.
- - **Database Throughput**: Select **Manual**, so you can set the throughput to a low value.
- - **Database Max RU/s**: Change the throughput to *400* request units per second (RU/s). If you want to reduce latency, you can scale up throughput later.
- - **Graph id**: Enter *sample-graph*. Graph names have the same character requirements as database IDs.
- - **Partition key**: Enter */pk*. All Cosmos DB accounts need a partition key to horizontally scale. To learn how to select an appropriate partition key, see [Use a partitioned graph in Azure Cosmos DB](../graph-partitioning.md).
-
- :::image type="content" source="../includes/media/cosmos-db-create-graph/azure-cosmosdb-data-explorer-graph.png" alt-text="Screenshot showing the Azure Cosmos DB Data Explorer, New Graph page.":::
-
-1. Select **OK**. The new graph database is created.
-
-### Get the connection keys
-
-Get the Azure Cosmos DB account connection keys to use later in this quickstart.
-
-1. On the Azure Cosmos DB account page, select **Keys** under **Settings** in the left navigation.
-
-1. Copy and save the following values to use later in the quickstart:
-
- - The first part (Azure Cosmos DB account name) of the **.NET SDK URI**.
- - The **PRIMARY KEY** value.
-
- :::image type="content" source="media/create-graph-php/keys.png" alt-text="Screenshot that shows the access keys for the Azure Cosmos DB account.":::
--
-## Clone the sample application
-
-Now, switch to working with code. Clone a Gremlin API app from GitHub, set the connection string, and run the app to see how easy it is to work with data programmatically.
-
-1. In a git terminal window, such as git bash, create a new folder named *git-samples*.
-
- ```bash
- mkdir "C:\git-samples"
- ```
-
-1. Switch to the new folder.
-
- ```bash
- cd "C:\git-samples"
- ```
-
-1. Run the following command to clone the sample repository and create a copy of the sample app on your computer.
-
- ```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-php-getting-started.git
- ```
-
-Optionally, you can now review the PHP code you cloned. Otherwise, go to [Update your connection information](#update-your-connection-information).
-
-### Review the code
-
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. The snippets are all taken from the *connect.php* file in the *C:\git-samples\azure-cosmos-db-graph-php-getting-started* folder.
-
-- The Gremlin `Connection` is initialized at the beginning of the *connect.php* file, using the `$db` object.
-
- ```php
- $db = new Connection([
- 'host' => '<your_server_address>.graphs.azure.com',
- 'username' => '/dbs/<db>/colls/<coll>',
- 'password' => 'your_primary_key'
- ,'port' => '443'
-
- // Required parameter
- ,'ssl' => TRUE
- ]);
- ```
-
-- A series of Gremlin steps execute, using the `$db->send($query);` method.
-
- ```php
- $query = "g.V().drop()";
- ...
- $result = $db->send($query);
- $errors = array_filter($result);
- }
- ```
-
-## Update your connection information
-
-1. Open the *connect.php* file in the *C:\git-samples\azure-cosmos-db-graph-php-getting-started* folder.
-
-1. In the `host` parameter, replace `<your_server_address>` with the Azure Cosmos DB account name value you saved from the Azure portal.
-
-1. In the `username` parameter, replace `<db>` and `<coll>` with your database and graph name. If you used the recommended values of `sample-database` and `sample-graph`, it should look like the following code:
-
- `'username' => '/dbs/sample-database/colls/sample-graph'`
-
-1. In the `password` parameter, replace `your_primary_key` with the PRIMARY KEY value you saved from the Azure portal.
-
- The `Connection` object initialization should now look like the following code:
-
- ```php
- $db = new Connection([
- 'host' => 'testgraphacct.graphs.azure.com',
- 'username' => '/dbs/sample-database/colls/sample-graph',
- 'password' => '2Ggkr662ifxz2Mg==',
- 'port' => '443'
-
- // Required parameter
- ,'ssl' => TRUE
- ]);
- ```
-
-1. Save the *connect.php* file.
-
-## Run the console app
-
-1. In the git terminal window, `cd` to the *azure-cosmos-db-graph-php-getting-started* folder.
-
- ```git
- cd "C:\git-samples\azure-cosmos-db-graph-php-getting-started"
- ```
-
-1. Use the following command to install the required PHP dependencies.
-
- ```
- composer install
- ```
-
-1. Use the following command to start the PHP application.
-
- ```
- php connect.php
- ```
-
- The terminal window displays the vertices being added to the graph.
-
- If you experience timeout errors, check that you updated the connection information correctly in [Update your connection information](#update-your-connection-information), and also try running the last command again.
-
- Once the program stops, press Enter.
-
-<a id="add-sample-data"></a>
-## Review and add sample data
-
-You can now go back to Data Explorer in the Azure portal, see the vertices added to the graph, and add more data points.
-
-1. In your Azure Cosmos DB account in the Azure portal, select **Data Explorer**, expand **sample-database** and **sample-graph**, select **Graph**, and then select **Execute Gremlin Query**.
-
- :::image type="content" source="./media/create-graph-php/azure-cosmosdb-data-explorer-expanded.png" alt-text="Screenshot that shows Graph selected with the option to Execute Gremlin Query.":::
-
-1. In the **Results** list, notice the new users added to the graph. Select **ben**, and notice that they're connected to **robin**. You can move the vertices around by dragging and dropping, zoom in and out by scrolling the wheel of your mouse, and expand the size of the graph with the double-arrow.
-
- :::image type="content" source="./media/create-graph-php/azure-cosmosdb-graph-explorer-new.png" alt-text="Screenshot that shows new vertices in the graph in Data Explorer.":::
-
-1. Add a new user. Select the **New Vertex** button to add data to your graph.
-
- :::image type="content" source="./media/create-graph-php/azure-cosmosdb-data-explorer-new-vertex.png" alt-text="Screenshot that shows the New Vertex pane where you can enter values.":::
-
-1. Enter a label of *person*.
-
-1. Select **Add property** to add each of the following properties. You can create unique properties for each person in your graph. Only the **id** key is required.
-
- Key | Value | Notes
- -|-|-
- **id** | ashley | The unique identifier for the vertex. If you don't specify an id, one is generated for you.
- **gender** | female |
- **tech** | java |
-
- > [!NOTE]
- > In this quickstart you create a non-partitioned collection. However, if you create a partitioned collection by specifying a partition key during the collection creation, then you need to include the partition key as a key in each new vertex.
-
-1. Select **OK**.
-
-1. Select **New Vertex** again and add another new user.
-
-1. Enter a label of *person*.
-
-1. Select **Add property** to add each of the following properties:
-
- Key | Value | Notes
- -|-|-
- **id** | rakesh | The unique identifier for the vertex. If you don't specify an id, one is generated for you.
- **gender** | male |
- **school** | MIT |
-
-1. Select **OK**.
-
-1. Select **Execute Gremlin Query** with the default `g.V()` filter to display all the values in the graph. All the users now show in the **Results** list.
-
- As you add more data, you can use filters to limit your results. By default, Data Explorer uses `g.V()` to retrieve all vertices in a graph. You can change to a different [graph query](tutorial-query-graph.md), such as `g.V().count()`, to return a count of all the vertices in the graph in JSON format. If you changed the filter, change the filter back to `g.V()` and select **Execute Gremlin Query** to display all the results again.
-
-1. Now you can connect rakesh and ashley. Ensure **ashley** is selected in the **Results** list, then select the edit icon next to **Targets** at lower right.
-
- :::image type="content" source="./media/create-graph-php/azure-cosmosdb-data-explorer-edit-target.png" alt-text="Screenshot that shows changing the target of a vertex in a graph.":::
-
-1. In the **Target** box, type *rakesh*, and in the **Edge label** box type *knows*, and then select the check mark.
-
- :::image type="content" source="./media/create-graph-php/azure-cosmosdb-data-explorer-set-target.png" alt-text="Screenshot that shows adding a connection between ashley and rakesh in Data Explorer.":::
-
-1. Now select **rakesh** from the results list, and see that ashley and rakesh are connected.
-
- :::image type="content" source="./media/create-graph-php/azure-cosmosdb-graph-explorer.png" alt-text="Screenshot that shows two vertices connected in Data Explorer.":::
-
-You've completed the resource creation part of this quickstart. You can continue to add vertices to your graph, modify the existing vertices, or change the queries.
-
-You can review the metrics that Azure Cosmos DB provides, and then clean up the resources you created.
-
-## Review SLAs in the Azure portal
--
-## Clean up resources
--
-This action deletes the resource group and all resources within it, including the Azure Cosmos DB Gremlin (Graph) account and database.
-
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB Gremlin (Graph) account and database, clone and run a PHP app, and work with your database using the Data Explorer. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
-
-> [!div class="nextstepaction"]
-> [Query using Gremlin](tutorial-query-graph.md)
-
cosmos-db Create Graph Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/create-graph-python.md
- Title: 'Quickstart: Gremlin API with Python - Azure Cosmos DB'
-description: This quickstart shows how to use the Azure Cosmos DB Gremlin API to create a console application with the Azure portal and Python
- Previously updated : 03/29/2021
-# Quickstart: Create a graph database in Azure Cosmos DB using Python and the Azure portal
-
-> [!div class="op_single_selector"]
-> * [Gremlin console](create-graph-console.md)
-> * [.NET](create-graph-dotnet.md)
-> * [Java](create-graph-java.md)
-> * [Node.js](create-graph-nodejs.md)
-> * [Python](create-graph-python.md)
-> * [PHP](create-graph-php.md)
->
-
-In this quickstart, you create and manage an Azure Cosmos DB Gremlin (graph) API account from the Azure portal, and add data by using a Python app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-
-## Prerequisites
-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.
-- [Python 3.6+](https://www.python.org/downloads/) including the [pip](https://pip.pypa.io/en/stable/installing/) package installer.
-- [Python Driver for Gremlin](https://github.com/apache/tinkerpop/tree/master/gremlin-python).
-
- You can also install the Python driver for Gremlin by using the `pip` command line:
-
- ```bash
- pip install gremlinpython==3.4.13
- ```
-
-- [Git](https://git-scm.com/downloads).
-
-> [!NOTE]
-> This quickstart requires a graph database account created after December 20, 2017. Existing accounts will support Python once they're migrated to general availability.
-
-> [!NOTE]
-> We currently recommend using gremlinpython==3.4.13 with Gremlin (Graph) API as we haven't fully tested all language-specific libraries of version 3.5.* for use with the service.
-
-## Create a database account
-
-Before you can create a graph database, you need to create a Gremlin (Graph) database account with Azure Cosmos DB.
--
-## Add a graph
--
-## Clone the sample application
-
-Now let's switch to working with code. Let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
-
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
-
- ```bash
- mkdir "./git-samples"
- ```
-
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to a folder to install the sample app.
-
- ```bash
- cd "./git-samples"
- ```
-
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
-
- ```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-python-getting-started.git
- ```
-
-## Review the code
-
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. The snippets are all taken from the *connect.py* file in the *C:\git-samples\azure-cosmos-db-graph-python-getting-started\\* folder. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-information).
-
-* The Gremlin `client` is initialized in line 155 in *connect.py*. Make sure to replace `<YOUR_DATABASE>` and `<YOUR_CONTAINER_OR_GRAPH>` with the values of your account's database name and graph name:
-
- ```python
- ...
- client = client.Client('wss://<YOUR_ENDPOINT>.gremlin.cosmosdb.azure.com:443/','g',
- username="/dbs/<YOUR_DATABASE>/colls/<YOUR_CONTAINER_OR_GRAPH>",
- password="<YOUR_PASSWORD>")
- ...
- ```
-
-* A series of Gremlin steps are declared at the beginning of the *connect.py* file. They are then executed using the `client.submitAsync()` method:
-
- ```python
- client.submitAsync(_gremlin_cleanup_graph)
- ```
-
-## Update your connection information
-
-Now go back to the Azure portal to get your connection information and copy it into the app. These settings enable your app to communicate with your hosted database.
-
-1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Keys**.
-
- Copy the first portion of the URI value.
-
- :::image type="content" source="./media/create-graph-python/keys.png" alt-text="View and copy an access key in the Azure portal, Keys page":::
-
-2. Open the *connect.py* file, and in line 155, paste the URI value over `<YOUR_ENDPOINT>`:
-
- ```python
- client = client.Client('wss://<YOUR_ENDPOINT>.gremlin.cosmosdb.azure.com:443/','g',
- username="/dbs/<YOUR_DATABASE>/colls/<YOUR_COLLECTION_OR_GRAPH>",
- password="<YOUR_PASSWORD>")
- ```
-
- The URI portion of the client object should now look similar to this code:
-
- ```python
- client = client.Client('wss://test.gremlin.cosmosdb.azure.com:443/','g',
- username="/dbs/<YOUR_DATABASE>/colls/<YOUR_COLLECTION_OR_GRAPH>",
- password="<YOUR_PASSWORD>")
- ```
-
-3. Change the second parameter of the `client` object to replace the `<YOUR_DATABASE>` and `<YOUR_COLLECTION_OR_GRAPH>` strings. If you used the suggested values, the parameter should look like this code:
-
- `username="/dbs/sample-database/colls/sample-graph"`
-
- The entire `client` object should now look like this code:
-
- ```python
- client = client.Client('wss://test.gremlin.cosmosdb.azure.com:443/','g',
- username="/dbs/sample-database/colls/sample-graph",
- password="<YOUR_PASSWORD>")
- ```
-
-4. On the **Keys** page, use the copy button to copy the PRIMARY KEY and paste it over `<YOUR_PASSWORD>` in the `password=<YOUR_PASSWORD>` parameter.
-
- The entire `client` object definition should now look like this code:
- ```python
- client = client.Client('wss://test.gremlin.cosmosdb.azure.com:443/','g',
- username="/dbs/sample-database/colls/sample-graph",
- password="asdb13Fadsf14FASc22Ggkr662ifxz2Mg==")
- ```
-
-5. Save the *connect.py* file.
-
-## Run the console app
-
-1. In the git terminal window, `cd` to the azure-cosmos-db-graph-python-getting-started folder.
-
- ```git
-   cd "./git-samples/azure-cosmos-db-graph-python-getting-started"
- ```
-
-2. In the git terminal window, use the following command to install the required Python packages.
-
- ```
- pip install -r requirements.txt
- ```
-
-3. In the git terminal window, use the following command to start the Python application.
-
- ```
- python connect.py
- ```
-
- The terminal window displays the vertices and edges being added to the graph.
-
- If you experience timeout errors, check that you updated the connection information correctly in [Update your connection information](#update-your-connection-information), and also try running the last command again.
-
- Once the program stops, press Enter, then switch back to the Azure portal in your internet browser.
-
-<a id="add-sample-data"></a>
-## Review and add sample data
-
-After the vertices and edges are inserted, you can now go back to Data Explorer and see the vertices added to the graph, and add additional data points.
-
-1. In your Azure Cosmos DB account in the Azure portal, select **Data Explorer**, expand **sample-graph**, select **Graph**, and then select **Apply Filter**.
-
- :::image type="content" source="./media/create-graph-python/azure-cosmosdb-data-explorer-expanded.png" alt-text="Screenshot shows Graph selected from the A P I with the option to Apply Filter.":::
-
-2. In the **Results** list, notice three new users are added to the graph. You can move the vertices around by dragging and dropping, zoom in and out by scrolling the wheel of your mouse, and expand the size of the graph with the double-arrow.
-
- :::image type="content" source="./media/create-graph-python/azure-cosmosdb-graph-explorer-new.png" alt-text="New vertices in the graph in Data Explorer in the Azure portal":::
-
-3. Let's add a few new users. Select the **New Vertex** button to add data to your graph.
-
- :::image type="content" source="./media/create-graph-python/azure-cosmosdb-data-explorer-new-vertex.png" alt-text="Screenshot shows the New Vertex pane where you can enter values.":::
-
-4. Enter a label of *person*.
-
-5. Select **Add property** to add each of the following properties. Notice that you can create unique properties for each person in your graph. Only the id key is required.
-
- key|value|Notes
- -|-|-
- pk|/pk|
- id|ashley|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
- gender|female|
- tech | java |
-
- > [!NOTE]
-   > In this quickstart, you create a non-partitioned collection. However, if you create a partitioned collection by specifying a partition key during the collection creation, then you need to include the partition key as a key in each new vertex.
-
-6. Select **OK**. You may need to expand your screen to see **OK** on the bottom of the screen.
-
-7. Select **New Vertex** again and add an additional new user.
-
-8. Enter a label of *person*.
-
-9. Select **Add property** to add each of the following properties:
-
- key|value|Notes
- -|-|-
- pk|/pk|
- id|rakesh|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
- gender|male|
- school|MIT|
-
-10. Select **OK**.
-
-11. Select the **Apply Filter** button with the default `g.V()` filter to display all the values in the graph. All of the users now show in the **Results** list.
-
- As you add more data, you can use filters to limit your results. By default, Data Explorer uses `g.V()` to retrieve all vertices in a graph. You can change it to a different [graph query](tutorial-query-graph.md), such as `g.V().count()`, to return a count of all the vertices in the graph in JSON format. If you changed the filter, change the filter back to `g.V()` and select **Apply Filter** to display all the results again.
-
-12. Now we can connect rakesh and ashley. Ensure **ashley** is selected in the **Results** list, then select the edit button next to **Targets** on lower right side. You may need to widen your window to see the **Properties** area.
-
- :::image type="content" source="./media/create-graph-python/azure-cosmosdb-data-explorer-edit-target.png" alt-text="Change the target of a vertex in a graph":::
-
-13. In the **Target** box type *rakesh*, and in the **Edge label** box type *knows*, and then select the check.
-
- :::image type="content" source="./media/create-graph-python/azure-cosmosdb-data-explorer-set-target.png" alt-text="Add a connection between ashley and rakesh in Data Explorer":::
-
-14. Now select **rakesh** from the results list and see that ashley and rakesh are connected.
-
- :::image type="content" source="./media/create-graph-python/azure-cosmosdb-graph-explorer.png" alt-text="Two vertices connected in Data Explorer":::
-
-That completes the resource creation part of this quickstart. You can continue to add vertices to your graph, modify the existing vertices, or change the queries. Now let's review the metrics Azure Cosmos DB provides, and then clean up the resources.
-
-## Review SLAs in the Azure portal
--
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, and run a Python app to add data to the graph. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
-
-> [!div class="nextstepaction"]
-> [Query using Gremlin](tutorial-query-graph.md)
cosmos-db Diagnostic Queries Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/diagnostic-queries-gremlin.md
- Title: Troubleshoot issues with advanced diagnostics queries for Gremlin API
-description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for the Gremlin API.
- Previously updated : 06/12/2021
-# Troubleshoot issues with advanced diagnostics queries for the Gremlin API
--
-> [!div class="op_single_selector"]
-> * [SQL (Core) API](../cosmos-db-advanced-queries.md)
-> * [MongoDB API](../mongodb/diagnostic-queries-mongodb.md)
-> * [Cassandra API](../cassandr)
-> * [Gremlin API](diagnostic-queries-gremlin.md)
->
-
-In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
-
-For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
-
-For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
-- Makes it much easier to work with the data.
-- Provides better discoverability of the schemas.
-- Improves performance across both ingestion latency and query times.
-
-## Common queries
-Common queries are shown in the resource-specific and Azure Diagnostics tables.
-
-### Top N(10) Request Unit (RU) consuming requests or queries in a specific time frame
-
-# [Resource-specific](#tab/resource-specific)
-
- ```Kusto
- let topRequestsByRUcharge = CDBDataPlaneRequests
- | where TimeGenerated > ago(24h)
- | project RequestCharge , TimeGenerated, ActivityId;
- CDBGremlinRequests
- | project PIICommandText, ActivityId, DatabaseName , CollectionName
- | join kind=inner topRequestsByRUcharge on ActivityId
- | project DatabaseName , CollectionName , PIICommandText , RequestCharge, TimeGenerated
- | order by RequestCharge desc
- | take 10
- ```
-# [Azure Diagnostics](#tab/azure-diagnostics)
-
- ```Kusto
- let topRequestsByRUcharge = AzureDiagnostics
- | where Category == "DataPlaneRequests" and TimeGenerated > ago(1h)
- | project requestCharge_s , TimeGenerated, activityId_g;
- AzureDiagnostics
- | where Category == "GremlinRequests"
- | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
- | join kind=inner topRequestsByRUcharge on activityId_g
- | project databasename_s , collectionname_s , piiCommandText_s , requestCharge_s, TimeGenerated
- | order by requestCharge_s desc
- | take 10
- ```
--
-### Requests throttled (statusCode = 429) in a specific time window
-
-# [Resource-specific](#tab/resource-specific)
- ```Kusto
- let throttledRequests = CDBDataPlaneRequests
- | where StatusCode == "429"
- | project OperationName , TimeGenerated, ActivityId;
- CDBGremlinRequests
- | project PIICommandText, ActivityId, DatabaseName , CollectionName
- | join kind=inner throttledRequests on ActivityId
- | project DatabaseName , CollectionName , PIICommandText , OperationName, TimeGenerated
- ```
-# [Azure Diagnostics](#tab/azure-diagnostics)
- ```Kusto
- let throttledRequests = AzureDiagnostics
- | where Category == "DataPlaneRequests"
- | where statusCode_s == "429"
- | project OperationName , TimeGenerated, activityId_g;
- AzureDiagnostics
- | where Category == "GremlinRequests"
- | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
- | join kind=inner throttledRequests on activityId_g
- | project databasename_s , collectionname_s , piiCommandText_s , OperationName, TimeGenerated
- ```
--
-### Queries with large response lengths (payload size of the server response)
-
-# [Resource-specific](#tab/resource-specific)
- ```Kusto
- let operationsbyUserAgent = CDBDataPlaneRequests
- | project OperationName, DurationMs, RequestCharge, ResponseLength, ActivityId;
- CDBGremlinRequests
- //specify collection and database
- //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
- | join kind=inner operationsbyUserAgent on ActivityId
- | summarize max(ResponseLength) by PIICommandText
- | order by max_ResponseLength desc
- ```
-
-# [Azure Diagnostics](#tab/azure-diagnostics)
- ```Kusto
- let operationsbyUserAgent = AzureDiagnostics
- | where Category=="DataPlaneRequests"
- | project OperationName, duration_s, requestCharge_s, responseLength_s, activityId_g;
- AzureDiagnostics
- | where Category == "GremlinRequests"
- //| where databasename_s == "DBNAME" and collectioname_s == "COLLECTIONNAME"
- | join kind=inner operationsbyUserAgent on activityId_g
- | summarize max(responseLength_s1) by piiCommandText_s
- | order by max_responseLength_s1 desc
- ```
--
-### RU consumption by physical partition (across all replicas in the replica set)
-
-# [Resource-specific](#tab/resource-specific)
- ```Kusto
- CDBPartitionKeyRUConsumption
- | where TimeGenerated >= now(-1d)
- //specify collection and database
- //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(RequestCharge)) by toint(PartitionKeyRangeId)
- | render columnchart
- ```
-# [Azure Diagnostics](#tab/azure-diagnostics)
- ```Kusto
- AzureDiagnostics
- | where TimeGenerated >= now(-1d)
- | where Category == 'PartitionKeyRUConsumption'
- //specify collection and database
- //| where databasename_s == "DBNAME" and collectioname_s == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
- | render columnchart
- ```
--
-### RU consumption by logical partition (across all replicas in the replica set)
-
-# [Resource-specific](#tab/resource-specific)
- ```Kusto
- CDBPartitionKeyRUConsumption
- | where TimeGenerated >= now(-1d)
- //specify collection and database
- //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
- | render columnchart
- ```
-# [Azure Diagnostics](#tab/azure-diagnostics)
- ```Kusto
- AzureDiagnostics
- | where TimeGenerated >= now(-1d)
- | where Category == 'PartitionKeyRUConsumption'
- //specify collection and database
- //| where databasename_s == "DBNAME" and collectioname_s == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
- | render columnchart
- ```
--
-## Next steps
-* For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](../cosmosdb-monitor-resource-logs.md).
-* For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
cosmos-db Find Request Unit Charge Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/find-request-unit-charge-gremlin.md
- Title: Find request unit (RU) charge for Gremlin API queries in Azure Cosmos DB
-description: Learn how to find the request unit (RU) charge for Gremlin queries executed against an Azure Cosmos container. You can use the Azure portal, .NET, Java drivers to find the RU charge.
- Previously updated : 10/14/2020
-# Find the request unit charge for operations executed in Azure Cosmos DB Gremlin API
-
-Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
-
-The cost of all database operations is normalized by Azure Cosmos DB and is expressed in Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources, such as CPU, IOPS, and memory, that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, and whether the operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and their considerations](../request-units.md) article.
-
-This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Gremlin API. If you are using a different API, see the [API for MongoDB](../mongodb/find-request-unit-charge-mongodb.md) or [Cassandra API](../cassandr) articles to find the RU/s charge.
-
-Headers returned by the Gremlin API are mapped to custom status attributes, which currently are surfaced by the Gremlin .NET and Java SDK. The request charge is available under the `x-ms-request-charge` key. When you use the Gremlin API, you have multiple options for finding the RU consumption for an operation against an Azure Cosmos container.
-
-## Use the Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. [Create a new Azure Cosmos account](create-graph-console.md#create-a-database-account) and feed it with data, or select an existing account that already contains data.
-
-1. Go to the **Data Explorer** pane, and then select the container you want to work on.
-
-1. Enter a valid query, and then select **Execute Gremlin Query**.
-
-1. Select **Query Stats** to display the actual request charge for the request you executed.
--
-## Use the .NET SDK driver
-
-When you use the [Gremlin.NET SDK](https://www.nuget.org/packages/Gremlin.Net/), status attributes are available under the `StatusAttributes` property of the `ResultSet<>` object:
-
-```csharp
-ResultSet<dynamic> results = client.SubmitAsync<dynamic>("g.V().count()").Result;
-double requestCharge = (double)results.StatusAttributes["x-ms-request-charge"];
-```
-
-For more information, see [Quickstart: Build a .NET Framework or Core application by using an Azure Cosmos DB Gremlin API account](create-graph-dotnet.md).
-
-## Use the Java SDK driver
-
-When you use the [Gremlin Java SDK](https://mvnrepository.com/artifact/org.apache.tinkerpop/gremlin-driver), you can retrieve status attributes by calling the `statusAttributes()` method on the `ResultSet` object:
-
-```java
-ResultSet results = client.submit("g.V().count()");
-Double requestCharge = (Double)results.statusAttributes().get().get("x-ms-request-charge");
-```
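-
-As a convenience, you could wrap this pattern in a small helper that runs a traversal and logs its request charge next to the results. The following is a hedged sketch; `submitAndLogCharge` is a hypothetical name and not part of the SDK:
-
-```java
-import java.util.List;
-import java.util.Map;
-import org.apache.tinkerpop.gremlin.driver.Client;
-import org.apache.tinkerpop.gremlin.driver.Result;
-import org.apache.tinkerpop.gremlin.driver.ResultSet;
-
-public class RequestChargeLogger {
-    // Hypothetical helper: submit a traversal, wait for its results, and print the RU charge.
-    static List<Result> submitAndLogCharge(Client client, String gremlin) throws Exception {
-        ResultSet results = client.submit(gremlin);
-        List<Result> resultList = results.all().get();                     // wait for all results
-        Map<String, Object> attributes = results.statusAttributes().get(); // Cosmos DB status attributes
-        System.out.println(gremlin + " consumed " + attributes.get("x-ms-request-charge") + " RUs");
-        return resultList;
-    }
-}
-```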
-
-For more information, see [Quickstart: Create a graph database in Azure Cosmos DB by using the Java SDK](create-graph-java.md).
-
-## Next steps
-
-To learn about optimizing your RU consumption, see these articles:
-
-* [Request units and throughput in Azure Cosmos DB](../request-units.md)
-* [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)
-* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
cosmos-db Graph Execution Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/graph-execution-profile.md
- Title: Use the execution profile to evaluate queries in Azure Cosmos DB Gremlin API
-description: Learn how to troubleshoot and improve your Gremlin queries using the execution profile step.
- Previously updated : 03/27/2019
-# How to use the execution profile step to evaluate your Gremlin queries
-
-This article provides an overview of how to use the execution profile step for Azure Cosmos DB Gremlin API graph databases. This step provides relevant information for troubleshooting and query optimizations, and it is compatible with any Gremlin query that can be executed against a Cosmos DB Gremlin API account.
-
-To use this step, simply append the `executionProfile()` function call at the end of your Gremlin query. **Your Gremlin query will be executed** and the result of the operation will return a JSON response object with the query execution profile.
-
-For example:
-
-```java
- // Basic traversal
- g.V('mary').out()
-
- // Basic traversal with execution profile call
- g.V('mary').out().executionProfile()
-```
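-
-If you run the profiled traversal through a driver rather than the Data Explorer, the profile comes back as an ordinary query result. For example, here is a minimal sketch with the Gremlin Java driver, assuming a `client` that is already connected as shown in the quickstarts:
-
-```java
-// ResultSet and Result come from org.apache.tinkerpop.gremlin.driver.
-// Each result returned for the profiled traversal is the JSON execution profile document.
-ResultSet results = client.submit("g.V('mary').out().executionProfile()");
-for (Result result : results) {
-    System.out.println(result.getString());
-}
-```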
-
-After calling the `executionProfile()` step, the response is a JSON object that includes the executed Gremlin statement, the total time it took, and an array of the Cosmos DB runtime operators that the statement invoked.
-
-> [!NOTE]
-> This implementation for Execution Profile is not defined in the Apache Tinkerpop specification. It is specific to Azure Cosmos DB Gremlin API's implementation.
--
-## Response Example
-
-The following is an annotated example of the output that will be returned:
-
-> [!NOTE]
-> This example is annotated with comments that explain the general structure of the response. An actual executionProfile response won't contain any comments.
-
-```json
-[
- {
- // The Gremlin statement that was executed.
- "gremlin": "g.V('mary').out().executionProfile()",
-
- // Amount of time in milliseconds that the entire operation took.
- "totalTime": 28,
-
- // An array containing metrics for each of the steps that were executed.
- // Each Gremlin step will translate to one or more of these steps.
- // This list is sorted in order of execution.
- "metrics": [
- {
- // This operation obtains a set of Vertex objects.
- // The metrics include: time, percentTime of total execution time, resultCount,
- // fanoutFactor, count, size (in bytes) and time.
- "name": "GetVertices",
- "time": 24,
- "annotations": {
- "percentTime": 85.71
- },
- "counts": {
- "resultCount": 2
- },
- "storeOps": [
- {
- "fanoutFactor": 1,
- "count": 2,
- "size": 696,
- "time": 0.4
- }
- ]
- },
- {
- // This operation obtains a set of Edge objects.
- // Depending on the query, these might be directly adjacent to a set of vertices,
- // or separate, in the case of an E() query.
- //
- // The metrics include: time, percentTime of total execution time, resultCount,
- // fanoutFactor, count, size (in bytes) and time.
- "name": "GetEdges",
- "time": 4,
- "annotations": {
- "percentTime": 14.29
- },
- "counts": {
- "resultCount": 1
- },
- "storeOps": [
- {
- "fanoutFactor": 1,
- "count": 1,
- "size": 419,
- "time": 0.67
- }
- ]
- },
- {
- // This operation obtains the vertices that a set of edges point at.
- // The metrics include: time, percentTime of total execution time and resultCount.
- "name": "GetNeighborVertices",
- "time": 0,
- "annotations": {
- "percentTime": 0
- },
- "counts": {
- "resultCount": 1
- }
- },
- {
- // This operation represents the serialization and preparation for a result from
- // the preceding graph operations. The metrics include: time, percentTime of total
- // execution time and resultCount.
- "name": "ProjectOperator",
- "time": 0,
- "annotations": {
- "percentTime": 0
- },
- "counts": {
- "resultCount": 1
- }
- }
- ]
- }
-]
-```
-
-> [!NOTE]
-> The executionProfile step will execute the Gremlin query. This includes the `addV` or `addE` steps, which will create and commit the changes specified in the query. As a result, the Request Units generated by the Gremlin query will also be charged.
-
-## Execution profile response objects
-
-The response of an executionProfile() function will yield a hierarchy of JSON objects with the following structure:
- - **Gremlin operation object**: Represents the entire Gremlin operation that was executed. Contains the following properties.
- - `gremlin`: The explicit Gremlin statement that was executed.
- - `totalTime`: The total time, in milliseconds, that the operation took to execute.
- - `metrics`: An array that contains each of the Cosmos DB runtime operators that were executed to fulfill the query. This list is sorted in order of execution.
-
- - **Cosmos DB runtime operators**: Represents each of the components of the entire Gremlin operation. This list is sorted in order of execution. Each object contains the following properties:
- - `name`: Name of the operator. This is the type of step that was evaluated and executed. Read more in the table below.
- - `time`: Amount of time, in milliseconds, that a given operator took.
- - `annotations`: Contains additional information, specific to the operator that was executed.
- - `annotations.percentTime`: Percentage of the total time that it took to execute the specific operator.
- - `counts`: Number of objects that were returned from the storage layer by this operator. This is contained in the `counts.resultCount` scalar value within.
- - `storeOps`: Represents a storage operation that can span one or multiple partitions.
- - `storeOps.fanoutFactor`: Represents the number of partitions that this specific storage operation accessed.
- - `storeOps.count`: Represents the number of results that this storage operation returned.
- - `storeOps.size`: Represents the size in bytes of the result of a given storage operation.
-
-Cosmos DB Gremlin Runtime Operator|Description
-|
-`GetVertices`| This step obtains a predicated set of objects from the persistence layer.
-`GetEdges`| This step obtains the edges that are adjacent to a set of vertices. This step can result in one or many storage operations.
-`GetNeighborVertices`| This step obtains the vertices that are connected to a set of edges. The edges contain the partition keys and IDs of both their source and target vertices.
-`Coalesce`| This step accounts for the evaluation of two operations whenever the `coalesce()` Gremlin step is executed.
-`CartesianProductOperator`| This step computes a cartesian product between two datasets. Usually executed whenever the predicates `to()` or `from()` are used.
-`ConstantSourceOperator`| This step computes an expression to produce a constant value as a result.
-`ProjectOperator`| This step prepares and serializes a response using the result of preceding operations.
-`ProjectAggregation`| This step prepares and serializes a response for an aggregate operation.
-
-> [!NOTE]
-> This list will continue to be updated as new operators are added.
-
-## Examples on how to analyze an execution profile response
-
-The following are examples of common optimizations that can be spotted using the Execution Profile response:
- - Blind fan-out query.
- - Unfiltered query.
-
-### Blind fan-out query patterns
-
-Assume the following execution profile response from a **partitioned graph**:
-
-```json
-[
- {
- "gremlin": "g.V('tt0093640').executionProfile()",
- "totalTime": 46,
- "metrics": [
- {
- "name": "GetVertices",
- "time": 46,
- "annotations": {
- "percentTime": 100
- },
- "counts": {
- "resultCount": 1
- },
- "storeOps": [
- {
- "fanoutFactor": 5,
- "count": 1,
- "size": 589,
- "time": 75.61
- }
- ]
- },
- {
- "name": "ProjectOperator",
- "time": 0,
- "annotations": {
- "percentTime": 0
- },
- "counts": {
- "resultCount": 1
- }
- }
- ]
- }
-]
-```
-
-The following conclusions can be made from it:
-- The query is a single ID lookup, since the Gremlin statement follows the pattern `g.V('id')`.
-- Judging from the `time` metric, the latency of this query seems to be high since it's [more than 10ms for a single point-read operation](../introduction.md#guaranteed-speed-at-any-scale).
-- If we look into the `storeOps` object, we can see that the `fanoutFactor` is `5`, which means that [5 partitions](../partitioning-overview.md) were accessed by this operation.
-
-As a conclusion of this analysis, we can determine that the first query accesses more partitions than necessary. You can address this by specifying the partition key in the query as a predicate, which results in lower latency and lower cost per query. Learn more about [graph partitioning](../graph-partitioning.md). A more optimal query would be `g.V('tt0093640').has('partitionKey', 't1001')`.
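-
-To verify the improvement, you can run both forms of the query with `executionProfile()` appended and compare the `fanoutFactor` values in the two responses. A minimal sketch, reusing the partition key name and value from the example above:
-
-```java
-// Fan-out lookup: the engine has to check every partition for this ID.
-g.V('tt0093640').executionProfile()
-
-// Scoped lookup: the partition key predicate confines the read to a single partition.
-g.V('tt0093640').has('partitionKey', 't1001').executionProfile()
-```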
-
-### Unfiltered query patterns
-
-Compare the following two execution profile responses. For simplicity, these examples use a graph with a single partition.
-
-This first query retrieves all vertices with the label `tweet` and then obtains their neighboring vertices:
-
-```json
-[
- {
- "gremlin": "g.V().hasLabel('tweet').out().executionProfile()",
- "totalTime": 42,
- "metrics": [
- {
- "name": "GetVertices",
- "time": 31,
- "annotations": {
- "percentTime": 73.81
- },
- "counts": {
- "resultCount": 30
- },
- "storeOps": [
- {
- "fanoutFactor": 1,
- "count": 13,
- "size": 6819,
- "time": 1.02
- }
- ]
- },
- {
- "name": "GetEdges",
- "time": 6,
- "annotations": {
- "percentTime": 14.29
- },
- "counts": {
- "resultCount": 18
- },
- "storeOps": [
- {
- "fanoutFactor": 1,
- "count": 20,
- "size": 7950,
- "time": 1.98
- }
- ]
- },
- {
- "name": "GetNeighborVertices",
- "time": 5,
- "annotations": {
- "percentTime": 11.9
- },
- "counts": {
- "resultCount": 20
- },
- "storeOps": [
- {
- "fanoutFactor": 1,
- "count": 4,
- "size": 1070,
- "time": 1.19
- }
- ]
- },
- {
- "name": "ProjectOperator",
- "time": 0,
- "annotations": {
- "percentTime": 0
- },
- "counts": {
- "resultCount": 20
- }
- }
- ]
- }
-]
-```
-
-Notice the profile of the same query, but now with an additional filter, `has('lang', 'en')`, before exploring the adjacent vertices:
-
-```json
-[
- {
- "gremlin": "g.V().hasLabel('tweet').has('lang', 'en').out().executionProfile()",
- "totalTime": 14,
- "metrics": [
- {
- "name": "GetVertices",
- "time": 14,
- "annotations": {
- "percentTime": 58.33
- },
- "counts": {
- "resultCount": 11
- },
- "storeOps": [
- {
- "fanoutFactor": 1,
- "count": 11,
- "size": 4807,
- "time": 1.27
- }
- ]
- },
- {
- "name": "GetEdges",
- "time": 5,
- "annotations": {
- "percentTime": 20.83
- },
- "counts": {
- "resultCount": 18
- },
- "storeOps": [
- {
- "fanoutFactor": 1,
- "count": 18,
- "size": 7159,
- "time": 1.7
- }
- ]
- },
- {
- "name": "GetNeighborVertices",
- "time": 5,
- "annotations": {
- "percentTime": 20.83
- },
- "counts": {
- "resultCount": 18
- },
- "storeOps": [
- {
- "fanoutFactor": 1,
- "count": 4,
- "size": 1070,
- "time": 1.01
- }
- ]
- },
- {
- "name": "ProjectOperator",
- "time": 0,
- "annotations": {
- "percentTime": 0
- },
- "counts": {
- "resultCount": 18
- }
- }
- ]
- }
-]
-```
-
-These two queries reached the same result; however, the first one requires more Request Units because it needs to iterate over a larger initial dataset before querying the adjacent items. We can see indicators of this behavior when comparing the following parameters from both responses (the two traversals are shown again after this list):
-- The `metrics[0].time` value is higher in the first response, which indicates that this single step took longer to resolve.
-- The `metrics[0].counts.resultCount` value is higher as well in the first response, which indicates that the initial working dataset was larger.
-
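-For reference, these are the two traversals that produced the profiles above; the only difference is the `has('lang', 'en')` filter that narrows the initial dataset:
-
-```java
-// Unfiltered: every 'tweet' vertex is retrieved before traversing out().
-g.V().hasLabel('tweet').out().executionProfile()
-
-// Filtered: the predicate reduces the initial working dataset, which lowers the RU cost.
-g.V().hasLabel('tweet').has('lang', 'en').out().executionProfile()
-```
-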
-## Next steps
-* Learn about the [supported Gremlin features](../gremlin-support.md) in Azure Cosmos DB.
-* Learn more about the [Gremlin API in Azure Cosmos DB](../graph-introduction.md).
cosmos-db Graph Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/graph-introduction.md
- Title: 'Introduction to Azure Cosmos DB Gremlin API'
-description: Learn how you can use Azure Cosmos DB to store, query, and traverse massive graphs with low latency by using the Gremlin graph query language of Apache TinkerPop.
--- Previously updated : 07/26/2021---
-# Introduction to Gremlin API in Azure Cosmos DB
-
-[Azure Cosmos DB](../introduction.md) is the globally distributed, multi-model database service from Microsoft for mission-critical applications. It supports document, key-value, graph, and column-family data models. Azure Cosmos DB provides a graph database service via the Gremlin API on a fully managed database service designed for any scale.
--
-This article provides an overview of the Azure Cosmos DB Gremlin API and explains how to use it to store massive graphs with billions of vertices and edges. You can query the graphs with millisecond latency and evolve the graph structure easily. Azure Cosmos DB's Gremlin API is built on [Apache TinkerPop](https://tinkerpop.apache.org), a graph computing framework. The Gremlin API in Azure Cosmos DB uses the Gremlin query language.
-
-Azure Cosmos DB's Gremlin API combines the power of graph database algorithms with highly scalable, managed infrastructure to provide a unique, flexible solution to common data problems associated with inflexible, relational approaches.
-
-> [!NOTE]
-> The Azure Cosmos DB graph engine closely follows the Apache TinkerPop specification. However, there are some differences in the implementation details that are specific to Azure Cosmos DB. Some features supported by Apache TinkerPop are not available in Azure Cosmos DB. To learn more about the unsupported features, see the [compatibility with Apache TinkerPop](gremlin-support.md) article.
-
-## Features of Azure Cosmos DB's Gremlin API
-
-Azure Cosmos DB is a fully managed graph database that offers global distribution, elastic scaling of storage and throughput, automatic indexing and query, tunable consistency levels, and support for the TinkerPop standard.
-
-> [!NOTE]
-> The [serverless capacity mode](../serverless.md) is now available on Azure Cosmos DB's Gremlin API.
-
-The following are the differentiated features that Azure Cosmos DB Gremlin API offers:
-
-* **Elastically scalable throughput and storage**
-
- Graphs in the real world need to scale beyond the capacity of a single server. Azure Cosmos DB supports horizontally scalable graph databases that can have a virtually unlimited size in terms of storage and provisioned throughput. As the graph database scale grows, the data will be automatically distributed using [graph partitioning](./graph-partitioning.md).
-
-* **Multi-region replication**
-
- Azure Cosmos DB can automatically replicate your graph data to any Azure region worldwide. Global replication simplifies the development of applications that require global access to data. In addition to minimizing read and write latency anywhere around the world, Azure Cosmos DB provides an automatic regional failover mechanism that can ensure the continuity of your application in the rare case of a service interruption in a region.
-
-* **Fast queries and traversals with the most widely adopted graph query standard**
-
- Store heterogeneous vertices and edges and query them through a familiar Gremlin syntax. Gremlin is an imperative, functional query language that provides a rich interface to implement common graph algorithms.
-
- Azure Cosmos DB enables rich real-time queries and traversals without the need to specify schema hints, secondary indexes, or views. Learn more in [Query graphs by using Gremlin](gremlin-support.md).
-
-* **Fully managed graph database**
-
- Azure Cosmos DB eliminates the need to manage database and machine resources. Most existing graph database platforms are bound to the limitations of their infrastructure and often require a high degree of maintenance to ensure their operation.
-
- As a fully managed service, Cosmos DB removes the need to manage virtual machines, update runtime software, manage sharding or replication, or deal with complex data-tier upgrades. Every graph is automatically backed up and protected against regional failures. This allows developers to focus on delivering application value instead of operating and managing their graph databases.
-
-* **Automatic indexing**
-
- By default, Azure Cosmos DB automatically indexes all the properties within nodes (also called vertices) and edges in the graph and doesn't expect or require any schema or creation of secondary indices. Learn more about [indexing in Azure Cosmos DB](../index-overview.md).
-
-* **Compatibility with Apache TinkerPop**
-
- Azure Cosmos DB supports the [open-source Apache TinkerPop standard](https://tinkerpop.apache.org/). The Tinkerpop standard has an ample ecosystem of applications and libraries that can be easily integrated with Azure Cosmos DB's Gremlin API.
-
-* **Tunable consistency levels**
-
- Azure Cosmos DB provides five well-defined consistency levels so you can achieve the right tradeoff between consistency and performance for your application. For queries and read operations, the available consistency levels are: strong, bounded staleness, session, consistent prefix, and eventual. These granular, well-defined consistency levels allow you to make sound tradeoffs among consistency, availability, and latency. Learn more in [Tunable data consistency levels in Azure Cosmos DB](../consistency-levels.md).
-
-## Scenarios that use Gremlin API
-
-Here are some scenarios where graph support of Azure Cosmos DB can be useful:
-
-* **Social networks/Customer 365**
-
- By combining data about your customers and their interactions with other people, you can develop personalized experiences, predict customer behavior, or connect people with others with similar interests. Azure Cosmos DB can be used to manage social networks and track customer preferences and data.
-
-* **Recommendation engines**
-
- This scenario is commonly used in the retail industry. By combining information about products, users, and user interactions, like purchasing, browsing, or rating an item, you can build customized recommendations. The low latency, elastic scale, and native graph support of Azure Cosmos DB make it ideal for these scenarios.
-
-* **Geospatial**
-
- Many applications in telecommunications, logistics, and travel planning need to find a location of interest within an area or locate the shortest/optimal route between two locations. Azure Cosmos DB is a natural fit for these problems.
-
-* **Internet of Things**
-
- With the network and connections between IoT devices modeled as a graph, you can build a better understanding of the state of your devices and assets. You also can learn how changes in one part of the network can potentially affect another part.
-
-## Introduction to graph databases
-
-Data as it appears in the real world is naturally connected. Traditional data modeling focuses on defining entities separately and computing their relationships at runtime. While this model has its advantages, highly connected data can be challenging to manage under its constraints.
-
-A graph database approach relies on persisting relationships in the storage layer instead, which leads to highly efficient graph retrieval operations. Azure Cosmos DB's Gremlin API supports the [property graph model](https://tinkerpop.apache.org/docs/current/reference/#intro).
-
-### Property graph objects
-
-A property [graph](http://mathworld.wolfram.com/Graph.html) is a structure that's composed of [vertices](http://mathworld.wolfram.com/GraphVertex.html) and [edges](http://mathworld.wolfram.com/GraphEdge.html). Both objects can have an arbitrary number of key-value pairs as properties.
-
-* **Vertices/nodes** - Vertices denote discrete entities, such as a person, a place, or an event.
-
-* **Edges/relationships** - Edges denote relationships between vertices. For example, a person might know another person, be involved in an event, or have recently been at a location.
-
-* **Properties** - Properties express information about the vertices and edges. There can be any number of properties in either vertices or edges, and they can be used to describe and filter the objects in a query. For example, a vertex might have name and age properties, and an edge might have a time stamp and/or a weight.
-
-* **Label** - A label is a name or the identifier of a vertex or an edge. Labels can group multiple vertices or edges such that all the vertices/edges in a group have a certain label. For example, a graph can have multiple vertices of label type "person".
-
-Graph databases are often included within the NoSQL or non-relational database category, since there is no dependency on a schema or constrained data model. This lack of schema allows for modeling and storing connected structures naturally and efficiently.
-
-### Graph database by example
-
-Let's use a sample graph to understand how queries can be expressed in Gremlin. The following figure shows a business application that manages data about users, interests, and devices in the form of a graph.
--
-This graph has the following *vertex* types (also called *labels* in Gremlin):
-
-* **People**: The graph has three people, Robin, Thomas, and Ben
-* **Interests**: Their interests, in this example, the game of Football
-* **Devices**: The devices that people use
-* **Operating Systems**: The operating systems that the devices run on
-* **Place**: The places from which the devices are accessed
-
-We represent the relationships between these entities via the following *edge* types:
-
-* **Knows**: For example, "Thomas knows Robin"
-* **Interested**: To represent the interests of the people in our graph, for example, "Ben is interested in Football"
-* **RunsOS**: Laptop runs the Windows OS
-* **Uses**: To represent which device a person uses. For example, Robin uses a Motorola phone with serial number 77
-* **Located**: To represent the location from which the devices are accessed
-
-The Gremlin Console is an interactive terminal offered by Apache TinkerPop, and it's used to interact with the graph data. To learn more, see the quickstart doc on [how to use the Gremlin console](create-graph-console.md). You can also perform these operations using Gremlin drivers in the platform of your choice (Java, Node.js, Python, or .NET). The following examples show how to run queries against this graph data using the Gremlin Console.
-
-First let's look at CRUD. The following Gremlin statement inserts the "Thomas" vertex into the graph:
-
-```java
-:> g.addV('person').property('id', 'thomas.1').property('firstName', 'Thomas').property('lastName', 'Andersen').property('age', 44)
-```
-
-Next, the following Gremlin statement inserts a "knows" edge between Thomas and Robin.
-
-```java
-:> g.V('thomas.1').addE('knows').to(g.V('robin.1'))
-```
-
-The following query returns the "person" vertices in descending order of their first names:
-
-```java
-:> g.V().hasLabel('person').order().by('firstName', decr)
-```
-
-Where graphs shine is when you need to answer questions like "What operating systems do friends of Thomas use?". You can run this Gremlin traversal to get that information from the graph:
-
-```java
-:> g.V('thomas.1').out('knows').out('uses').out('runsos').group().by('name').by(count())
-```
-
-## Next steps
-
-To learn more about graph support in Azure Cosmos DB, see:
-
-* Get started with the [Azure Cosmos DB graph tutorial](create-graph-dotnet.md).
-* Learn about how to [query graphs in Azure Cosmos DB by using Gremlin](gremlin-support.md).
cosmos-db Graph Modeling Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/graph-modeling-tools.md
- Title: Third-party data modeling tools for Azure Cosmos DB graph data
-description: This article describes various tools to design the Graph data model.
---- Previously updated : 05/25/2021---
-# Third-party data modeling tools for Azure Cosmos DB graph data
--
-Designing a data model is important, and maintaining it is just as important. The following third-party visual design tools can help you design and maintain your graph data model.
-
-> [!IMPORTANT]
-> The solutions mentioned in this article are for informational purposes only; ownership lies with the individual solution owners. We recommend that you evaluate them thoroughly and select the one that best suits your needs.
-
-## Hackolade
-
-Hackolade is a data modeling and schema design tool for NoSQL databases. Its data modeling studio helps you manage schemas for data at rest and data in motion.
-
-### How it works
-This tool provides data modeling of vertices and edges and their respective properties. It supports several use cases, including:
-
-- Start from a blank page and think through different options to graphically build your Cosmos DB Gremlin model. Then forward-engineer the model to your Azure instance to evaluate the result and continue the evolution. All of this without writing a single line of code.
-- Reverse-engineer an existing graph on Azure to clearly understand its structure, so you can also query your graph effectively. Then enrich the data model with descriptions, metadata, and constraints to produce documentation. It supports HTML, Markdown, or PDF format, and feeds into corporate data governance or dictionary systems.
-- Migrate from a relational database to NoSQL through the de-normalization of data structures.
-- Integrate with a CI/CD pipeline via a command-line interface.
-- Collaborate and version using Git.
-- And much more…
-
-### Sample
-
-The animation in Figure-2 demonstrates reverse engineering: entities are extracted from the RDBMS, Hackolade discovers relationships from the foreign key relationships, and the model can then be modified.
-
-A sample DDL for SQL Server as the source is available [here](https://github.com/Azure-Samples/northwind-ddl-sample/blob/main/nw.sql).
--
-**Figure-1:** Graph Diagram (extracted the graph data model)
-
-After the data model is modified, the tool can generate the Gremlin script, which may include a custom Cosmos DB index script to ensure optimal indexes are created. Refer to Figure-2 for the full flow.
-
-The following image demonstrates reverse engineering from RDBMS & Hackolade in action:
-
-**Figure-2:** Hackolade in action (demonstrating SQL to Gremlin data model conversion)
-### Useful links
-- [Download a 14-day free trial](https://hackolade.com/download.html)
-- [Get more data models](https://hackolade.com/samplemodels.html#cosmosdb)
-- [Documentation of Hackolade](https://hackolade.com/help/CosmosDBGremlin.html)
-
-## Next steps
-- [Visualizing the data](./graph-visualization-partners.md)
cosmos-db Graph Modeling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/graph-modeling.md
- Title: 'Graph data modeling for Azure Cosmos DB Gremlin API'
-description: Learn how to model a graph database by using Azure Cosmos DB Gremlin API. This article describes when to use a graph database and best practices to model entities and relationships.
--- Previously updated : 12/02/2019----
-# Graph data modeling for Azure Cosmos DB Gremlin API
-
-This article provides graph data modeling recommendations. This step is vital to ensure the scalability and performance of a graph database system as the data evolves. An efficient data model is especially important with large-scale graphs.
-
-## Requirements
-
-The process outlined in this guide is based on the following assumptions:
- * The **entities** in the problem-space are identified. These entities are meant to be consumed _atomically_ for each request. In other words, the database system isn't designed to retrieve a single entity's data in multiple query requests.
- * There is an understanding of **read and write requirements** for the database system. These requirements will guide the optimizations needed for the graph data model.
- * The principles of the [Apache Tinkerpop property graph standard](https://tinkerpop.apache.org/docs/current/reference/#graph-computing) are well understood.
-
-## When do I need a graph database?
-
-A graph database solution can be optimally applied if the entities and relationships in a data domain have any of the following characteristics:
-
-* The entities are **highly connected** through descriptive relationships. The benefit in this scenario is the fact that the relationships are persisted in storage.
-* There are **cyclic relationships** or **self-referenced entities**. This pattern is often a challenge when using relational or document databases.
-* There are **dynamically evolving relationships** between entities. This pattern is especially applicable to hierarchical or tree-structured data with many levels.
-* There are **many-to-many relationships** between entities.
-* There are **write and read requirements on both entities and relationships**.
-
-If the above criteria are satisfied, it's likely that a graph database approach will provide advantages for **query complexity**, **data model scalability**, and **query performance**.
-
-The next step is to determine whether the graph is going to be used for analytic or transactional purposes. If the graph is intended for heavy computation and data processing workloads, it's worth exploring the [Cosmos DB Spark connector](../create-sql-api-spark.md) and the [GraphX library](https://spark.apache.org/graphx/).
-
-## How to use graph objects
-
-The [Apache Tinkerpop property graph standard](https://tinkerpop.apache.org/docs/current/reference/#graph-computing) defines two types of objects: **vertices** and **edges**.
-
-The following are the best practices for the properties in the graph objects:
-
-| Object | Property | Type | Notes |
-| --- | --- | --- | --- |
-| Vertex | ID | String | Uniquely enforced per partition. If a value isn't supplied upon insertion, an auto-generated GUID will be stored. |
-| Vertex | label | String | This property is used to define the type of entity that the vertex represents. If a value isn't supplied, a default value "vertex" will be used. |
-| Vertex | properties | String, Boolean, Numeric | A list of separate properties stored as key-value pairs in each vertex. |
-| Vertex | partition key | String, Boolean, Numeric | This property defines where the vertex and its outgoing edges will be stored. Read more about [graph partitioning](graph-partitioning.md). |
-| Edge | ID | String | Uniquely enforced per partition. Auto-generated by default. Edges usually don't have the need to be uniquely retrieved by an ID. |
-| Edge | label | String | This property is used to define the type of relationship that two vertices have. |
-| Edge | properties | String, Boolean, Numeric | A list of separate properties stored as key-value pairs in each edge. |
-
-> [!NOTE]
-> Edges don't require a partition key value, since their value is automatically assigned based on the source vertex. Learn more in the [graph partitioning](graph-partitioning.md) article.
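-
-To illustrate the table above, the following sketch inserts two vertices and an edge that supply these properties explicitly. The partition key property name (`pk`), the IDs, and the property values are hypothetical:
-
-```java
-// Vertices: ID, label, partition key, and regular properties supplied explicitly.
-g.addV('person').property('id', 'ashley.1').property('pk', 'ashley.1').property('age', 31)
-g.addV('person').property('id', 'terry.1').property('pk', 'terry.1').property('age', 28)
-
-// Edge: label and properties only. The edge ID is auto-generated, and the edge is
-// stored with (and partitioned by) its source vertex 'ashley.1'.
-g.V('ashley.1').addE('knows').to(g.V('terry.1')).property('since', 2015)
-```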
-
-## Entity and relationship modeling guidelines
-
-The following are a set of guidelines to approach data modeling for an Azure Cosmos DB Gremlin API graph database. These guidelines assume that there's an existing definition of a data domain and queries for it.
-
-> [!NOTE]
-> The steps outlined below are presented as recommendations. The final model should be evaluated and tested before its consideration as production-ready. Additionally, the recommendations below are specific to Azure Cosmos DB's Gremlin API implementation.
-
-### Modeling vertices and properties
-
-The first step for a graph data model is to map every identified entity to a **vertex object**. A one-to-one mapping of all entities to vertices should be the initial step, and it's subject to change.
-
-One common pitfall is to map properties of a single entity as separate vertices. Consider the example below, where the same entity is represented in two different ways:
-
-* **Vertex-based properties**: In this approach, the entity uses three separate vertices and two edges to describe its properties. While this approach might reduce redundancy, it increases model complexity. An increase in model complexity can result in added latency, query complexity, and computation cost. This model can also present challenges in partitioning.
--
-* **Property-embedded vertices**: This approach takes advantage of the key-value pair list to represent all the properties of the entity inside a vertex. This approach provides reduced model complexity, which will lead to simpler queries and more cost-efficient traversals.
--
-> [!NOTE]
-> The above examples show a simplified graph model to only show the comparison between the two ways of dividing entity properties.
-
-The **property-embedded vertices** pattern generally provides a more performant and scalable approach. The default approach to a new graph data model should gravitate towards this pattern.
-
-However, there are scenarios where referencing a property might provide advantages, for example, if the referenced property is updated frequently. Using a separate vertex to represent a constantly changing property minimizes the amount of write operations that each update requires.
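-
-The following sketch contrasts the two patterns. The partition key property (`pk`), the IDs, and the `city` property are illustrative only:
-
-```java
-// Property-embedded (recommended default): the city is just another key-value pair.
-g.addV('person').property('id', 'thomas.1').property('pk', 'thomas.1').property('city', 'Seattle')
-
-// Vertex-referenced property: worth considering only if the property changes very frequently.
-g.addV('city').property('id', 'seattle').property('pk', 'seattle').property('name', 'Seattle')
-g.V('thomas.1').addE('livesIn').to(g.V('seattle'))
-```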
-
-### Relationship modeling with edge directions
-
-After the vertices are modeled, the edges can be added to denote the relationships between them. The first aspect that needs to be evaluated is the **direction of the relationship**.
-
-Edge objects have a default direction that is followed by a traversal when using the `out()` or `outE()` function. Using this natural direction results in an efficient operation, since all vertices are stored with their outgoing edges.
-
-However, traversing in the opposite direction of an edge, using the `in()` function, will always result in a cross-partition query. Learn more about [graph partitioning](graph-partitioning.md). If there's a need to constantly traverse using the `in()` function, it's recommended to add edges in both directions.
-
-You can determine the edge direction by applying the `.to()` or `.from()` predicates to the `.addE()` Gremlin step, or by using the [bulk executor library for Gremlin API](bulk-executor-graph-dotnet.md).
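-
-For example, a minimal sketch that adds an edge in its natural direction and, when `in()` traversals are frequent, a reverse edge as well (the `knownBy` label and the IDs are illustrative):
-
-```java
-// Natural (outgoing) direction: the edge is stored with the source vertex 'thomas.1'.
-g.V('thomas.1').addE('knows').to(g.V('robin.1'))
-
-// Optional reverse edge, so traversals starting at 'robin.1' can use out()
-// instead of a cross-partition in() query.
-g.V('robin.1').addE('knownBy').to(g.V('thomas.1'))
-```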
-
-> [!NOTE]
-> Edge objects have a direction by default.
-
-### Relationship labeling
-
-Using descriptive relationship labels can improve the efficiency of edge resolution operations. This pattern can be applied in the following ways:
-* Use non-generic terms to label a relationship.
-* Associate the label of the source vertex to the label of the target vertex with the relationship name.
--
-The more specific the label that the traverser will use to filter the edges, the better. This decision can have a significant impact on query cost as well. You can evaluate the query cost at any time [using the executionProfile step](graph-execution-profile.md).
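-
-As a hypothetical example, compare a generic label with a descriptive one that associates the source and target entity types:
-
-```java
-// Generic label: the traverser has to resolve every 'related' edge.
-g.V('thomas.1').out('related')
-
-// Specific label: edge resolution is narrowed, which typically reduces query cost.
-g.V('thomas.1').out('person-worksAt-company')
-```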
--
-## Next steps:
-* Check out the list of supported [Gremlin steps](gremlin-support.md).
-* Learn about [graph database partitioning](graph-partitioning.md) to deal with large-scale graphs.
-* Evaluate your Gremlin queries using the [Execution Profile step](graph-execution-profile.md).
-* Third-party Graph [design data model](graph-modeling-tools.md)
cosmos-db Graph Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/graph-partitioning.md
- Title: Data partitioning in Azure Cosmos DB Gremlin API
-description: Learn how you can use a partitioned graph in Azure Cosmos DB. This article also describes the requirements and best practices for a partitioned graph.
----- Previously updated : 06/24/2019--
-# Using a partitioned graph in Azure Cosmos DB
-
-One of the key features of the Gremlin API in Azure Cosmos DB is the ability to handle large-scale graphs through horizontal scaling. The containers can scale independently in terms of storage and throughput. You can create containers in Azure Cosmos DB that are automatically scaled to store graph data. The data is automatically balanced based on the specified **partition key**.
-
-Partitioning is done internally if the container is expected to store more than 20 GB of data or if you want to allocate more than 10,000 request units per second (RU/s). Data is automatically partitioned based on the partition key you specify. A partition key is required if you create graph containers from the Azure portal or with the 3.x or higher versions of the Gremlin drivers. A partition key is not required if you use 2.x or lower versions of the Gremlin drivers.
-
-The same general principles from the [Azure Cosmos DB partitioning mechanism](../partitioning-overview.md) apply with a few graph-specific optimizations described below.
--
-## Graph partitioning mechanism
-
-The following guidelines describe how the partitioning strategy in Azure Cosmos DB operates:
--- **Both vertices and edges are stored as JSON documents**.--- **Vertices require a partition key**. This key will determine in which partition the vertex will be stored through a hashing algorithm. The partition key property name is defined when creating a new container and it has a format: `/partitioning-key-name`.--- **Edges will be stored with their source vertex**. In other words, for each vertex its partition key defines where they are stored along with its outgoing edges. This optimization is done to avoid cross-partition queries when using the `out()` cardinality in graph queries.--- **Edges contain references to the vertices they point to**. All edges are stored with the partition keys and IDs of the vertices that they are pointing to. This computation makes all `out()` direction queries always be a scoped partitioned query, and not a blind cross-partition query.--- **Graph queries need to specify a partition key**. To take full advantage of the horizontal partitioning in Azure Cosmos DB, the partition key should be specified when a single vertex is selected, whenever it's possible. The following are queries for selecting one or multiple vertices in a partitioned graph:-
- - `/id` and `/label` are not supported as partition keys for a container in Gremlin API.
--
- - Selecting a vertex by ID, then **using the `.has()` step to specify the partition key property**:
-
- ```java
- g.V('vertex_id').has('partitionKey', 'partitionKey_value')
- ```
-
- - Selecting a vertex by **specifying a tuple including partition key value and ID**:
-
- ```java
- g.V(['partitionKey_value', 'vertex_id'])
- ```
-
- - Selecting a set of vertices with their IDs and **specifying a list of partition key values**:
-
- ```java
- g.V('vertex_id0', 'vertex_id1', 'vertex_id2', …).has('partitionKey', within('partitionKey_value0', 'partitionKey_value01', 'partitionKey_value02', …))
- ```
-
- - Using the **Partition strategy** at the beginning of a query and specifying a partition for the scope of the rest of the Gremlin query:
-
- ```java
- g.withStrategies(PartitionStrategy.build().partitionKey('partitionKey').readPartitions('partitionKey_value').create()).V()
- ```
-
-## Best practices when using a partitioned graph
-
-Use the following guidelines to ensure performance and scalability when using partitioned graphs with unlimited containers:
--- **Always specify the partition key value when querying a vertex**. Getting vertex from a known partition is a way to achieve performance. All subsequent adjacency operations will always be scoped to a partition since Edges contain reference ID and partition key to their target vertices.--- **Use the outgoing direction when querying edges whenever it's possible**. As mentioned above, edges are stored with their source vertices in the outgoing direction. So the chances of resorting to cross-partition queries are minimized when the data and queries are designed with this pattern in mind. On the contrary, the `in()` query will always be an expensive fan-out query.--- **Choose a partition key that will evenly distribute data across partitions**. This decision heavily depends on the data model of the solution. Read more about creating an appropriate partition key in [Partitioning and scale in Azure Cosmos DB](../partitioning-overview.md).--- **Optimize queries to obtain data within the boundaries of a partition**. An optimal partitioning strategy would be aligned to the querying patterns. Queries that obtain data from a single partition provide the best possible performance.-
-## Next steps
-
-Next you can proceed to read the following articles:
-
-* Learn about [Partition and scale in Azure Cosmos DB](../partitioning-overview.md).
-* Learn about the [Gremlin support in Gremlin API](gremlin-support.md).
-* Learn about [Introduction to Gremlin API](graph-introduction.md).
cosmos-db Graph Visualization Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/graph-visualization-partners.md
- Title: Visualize Azure Cosmos DB Gremlin API data using partner solutions
-description: Learn how to integrate Azure Cosmos DB graph data with different third-party visualization solutions.
----- Previously updated : 07/22/2021--
-# Visualize graph data stored in Azure Cosmos DB Gremlin API with data visualization solutions
-
-You can visualize data stored in Azure Cosmos DB Gremlin API by using various data visualization solutions.
-
-> [!IMPORTANT]
-> The solutions mentioned in this article are for informational purposes only; ownership lies with the individual solution owners. We recommend that you evaluate them thoroughly and select the one that best suits your needs.
-
-## Linkurious Enterprise
-
-[Linkurious Enterprise](https://linkurio.us/product/) uses graph technology and data visualization to turn complex datasets into interactive visual networks. The platform connects to your data sources and enables investigators to seamlessly navigate across billions of entities and relationships. The result is a new ability to detect suspicious relationships without juggling with queries or tables.
-
-The interactive interface of Linkurious Enterprise offers an easy way to investigate complex data. You can search for specific entities, expand connections to uncover hidden relationships, and apply layouts of your choice to untangle complex networks. Linkurious Enterprise is now compatible with Azure Cosmos DB Gremlin API. It's suitable for end-to-end graph visualization scenarios and supports read and write capabilities from the user interface. You can request a [demo of Linkurious with Azure Cosmos DB](https://linkurio.us/contact/).
--
-<b>Figure:</b> Linkurious Enterprise visualization flow
-### Useful links
-
-* [Product details](https://linkurio.us/product/)
-* [Documentation](https://doc.linkurio.us/)
-* [Demo](https://linkurious.com/demo/)
-* [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/linkurious.lke_st?tab=Overview)
-
-## Cambridge Intelligence
-
-[Cambridge Intelligence's](https://cambridge-intelligence.com/products/) graph visualization toolkits support Azure Cosmos DB. The following two visualization toolkits are supported:
-
-* [KeyLines for JavaScript developers](https://cambridge-intelligence.com/keylines/)
-
-* [Re-Graph for React developers](https://cambridge-intelligence.com/regraph/)
--
-<b>Figure:</b> KeyLines visualization example at various levels of detail.
-
-These toolkits let you design high-performance graph visualization and analysis applications. They harness powerful Web Graphics Library (WebGL) rendering and carefully crafted code to give users a fast and insightful visualization experience. These tools are compatible with any browser, device, server, or database, and come with step-by-step tutorials, fully documented APIs, and interactive demos.
--
-<b>Figure:</b> Re-Graph visualization example at various levels of details
-### Useful links
-
-* [Try the toolkits](https://cambridge-intelligence.com/try/)
-* [KeyLines technology overview](https://cambridge-intelligence.com/keylines/technology/)
-* [Re-Graph technology overview](https://cambridge-intelligence.com/regraph/technology/)
-* [Graph visualization use cases](https://cambridge-intelligence.com/use-cases/)
-
-## Tom Sawyer
-
-[Tom Sawyer Perspectives](https://www.tomsawyer.com/perspectives/) is a robust platform for building enterprise grade graph data visualization and analysis applications. It is a low-code graph & data visualization development platform, which includes integrated design, preview interface, and extensive API libraries. The platform integrates enterprise data sources with powerful graph visualization, layout, and analysis technology to solve big data problems.
-
-Perspectives enables developers to quickly develop production-quality, data-oriented visualization applications. Two graphic modules, the "Designer" and the "Previewer" are used to build applications to visualize and analyze the specific data that drives each project. When used together, the Designer and Previewer provide an efficient round-trip process that dramatically speeds up application development. To visualize Azure Cosmos DB Gremlin API data using this platform, request a [free 60-day evaluation](https://www.tomsawyer.com/get-started) of this tool.
--
-<b>Figure:</b> Tom Sawyer Perspectives in action
-
-[Tom Sawyer Graph Database Browser](https://www.tomsawyer.com/graph-database-browser/) makes it easy to visualize and analyze data in Azure Cosmos DB Gremlin API. The Graph Database Browser helps you see and understand connections in your data without extensive knowledge of the query language or the schema. You can manually define the schema for your project or use schema extraction to create it. So, even less technical users can interact with the data by loading the neighbors of selected nodes and building the visualization in whatever direction they need. Advanced users can execute queries using Gremlin, Cypher, or SPARQL to gain other insights. When you define the schema then you can load the Azure Cosmos DB data into the Perspectives model. With the help of integrator definition, you can specify the location and configuration for the Gremlin endpoint. Later you can bind elements from the Azure Cosmos DB data source to elements in the Perspectives model and visualize your data.
-
-Users of all skill levels can take advantage of five unique graph layouts to display the graph in a way that provides the most meaning. And there are built-in centrality, clustering, and path-finding analyses to reveal previously unseen patterns. Using these techniques, organizations can identify critical patterns in areas like fraud detection, customer intelligence, and cybersecurity. Pattern recognition is very important for network analysts in areas such as general IT and network management, logistics, legacy system migration, and business transformation. Try a live demo of Tom Sawyer Graph Database Browser.
--
-<b>Figure:</b> Tom Sawyer Database Browser's visualization capabilities
-### Useful links
-
-* [Documentation](https://www.tomsawyer.com/graph-database-browser/)
-
-* [Trial for Tom Sawyer Perspectives](https://www.tomsawyer.com/get-started)
-
-* [Live Demo for Tom Sawyer Databrowser](https://support.tomsawyer.com/demonstrations/graph.database.browser.demo/)
-
-## Graphistry
-
-Graphistry automatically transforms your data into interactive, visual investigation maps built for the needs of analysts. It can quickly surface relationships between events and entities without having to write queries or wrangle data. You can harness your data without worrying about scale. From security, fraud, and IT investigations to 360° views of customers and supply chains, Graphistry turns the potential of your data into human insight and value.
--
-<b>Figure:</b> Graphistry Visualization snapshot
-
-With Graphistry's GPU client/cloud technology, you can do interactive visualization. By using a standard browser and the cloud, you can work with all the data you want and still remain fast, responsive, and interactive. If you want to run it on your own hardware, it's as easy as installing Docker. That way you get the analytical power of GPUs without having to think about GPUs.
--
-<b>Figure:</b> Graphistry in action
-
-### Useful links
-
-* [Documentation](https://www.graphistry.com/docs)
-
-* [Video Guides](https://www.graphistry.com/videos)
-
-* [Deploy on Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/graphistry.graphistry-core-2-24-9)
-
-## Graphlytic
-
-Graphlytic is a highly customizable web application for graph visualization and analysis. Users can interactively explore the graph, look for patterns with the Gremlin language, or use filters to find answers to any graph question. Graph rendering is done with the 'Cytoscape.js' library, which allows Graphlytic to render tens of thousands of nodes and hundreds of thousands of relationships at once.
-
-Graphlytic is compatible with Azure Cosmos DB and can be deployed to Azure in minutes. Graphlytic's UI can be customized and extended in many ways, for instance the default [visualization configuration](https://graphlytic.biz/doc/latest/Visualization_Settings.html), [data schema](https://graphlytic.biz/doc/latest/Data_Schema.html), [style mappings](https://graphlytic.biz/doc/latest/Style_Mappers.html), [virtual properties](https://graphlytic.biz/doc/latest/Virtual_properties.html) in the visualization, or custom implemented [widgets](https://graphlytic.biz/doc/latest/Widgets.html) that can enhance the visualization features with bespoke reports or integrations.
-
-The following are two example scenarios:
-
-* **IT Management use case**
-Companies that run IT operations on their own infrastructure, telcos, and IP providers all need solid network documentation and functional configuration management. Impact analyses that describe the interdependencies among network elements (active and passive) are developed to prevent blackouts, which cause significant financial losses, and even single outages that cause low or no availability of service. Bottlenecks and single points of failure are identified and resolved. Endpoint and route redundancies are implemented.
-Graphlytic property graph visualization is a perfect enabler for all of the points mentioned above: network documentation, network configuration management, impact analysis, and asset management. It stores and depicts all relevant network configuration information in one place, bringing completely new added value to IT managers and field technicians.
-
- :::image type="content" source="./media/graph-visualization-partners/graphlytic/it-management.gif" alt-text="Graphlytic IT Management use case demo" :::
-
-<b>Figure:</b> Graphlytic IT management use case
-
-* **Anti-fraud use case**
-Fraud patterns are well known to every insurance company, bank, and e-commerce enterprise. Modern fraudsters build sophisticated fraud rings and schemes that are hard to unveil with traditional tools. If not detected properly and on time, fraud can cause serious losses. On the other hand, traditional red-flag systems with overly strict criteria must be adjusted to eliminate false positive indicators, because they would otherwise produce overwhelming numbers of fraud indications. Large amounts of time are spent trying to detect complex fraud, paralyzing investigators in their daily tasks.
-The basic idea behind Graphlytic is that the human eye can distinguish and find patterns in a graphical form much more easily than in a table or data set. This means that an anti-fraud analyst can capture fraud schemes within a graph visualization more easily, faster, and smarter than with traditional tools alone.
-
- :::image type="content" source="./media/graph-visualization-partners/graphlytic/antifraud.gif" alt-text="Graphlytic Fraud detection use case demo":::
-
-<b>Figure:</b> Graphlytic Fraud detection use case demo
-
-### Useful links
-
-* [Documentation](https://graphlytic.biz/doc/)
-* [Free Online Demo](https://graphlytic.biz/demo)
-* [Blog](https://graphlytic.biz/blog)
-* [REST API documentation](https://graphlytic.biz/doc/latest/REST_API.html)
-* [ETL job drivers & examples](https://graphlytic.biz/doc/latest/ETL_jobs.html)
-* [SMTP Email Server Integration](https://graphlytic.biz/doc/latest/SMTP_Email_Server_Connection.html)
-* [Geo Map Server Integration](https://graphlytic.biz/doc/latest/Geo_Map_Server_Integration.html)
-* [Single Sign-on Configuration](https://graphlytic.biz/doc/latest/Single_sign-on.html)
-
-## yWorks
-
-yWorks specializes in the development of professional software solutions that enable the clear visualization of graphs, diagrams, and networks. yWorks has brought together efficient data structures, complex algorithms, and advanced techniques that provide excellent user interaction on a multitude of target platforms. This allows the user to experience highly versatile and sophisticated diagram visualization in applications across many diverse areas.
-
-Azure Cosmos DB can be queried for data by using Gremlin, an efficient graph traversal language. You can query the database for the stored entities and use the relationships to traverse the connected neighborhood. This approach requires in-depth technical knowledge of the database itself and of the Gremlin query language. With yWorks visualization, in contrast, you can visually explore the Azure Cosmos DB data, identify significant structures, and get a better understanding of relationships. Besides visual exploration, you can also interactively edit the stored data by modifying the diagram, without any knowledge of the associated query language. This way it provides high-quality visualization and can analyze large data sets from Azure Cosmos DB. You can use yFiles to add visualization capabilities to your own applications, dashboards, and reports, or to create new, white-label apps and tools for both in-house and customer-facing products.
--
-<b>Figure:</b> yWorks visualization snapshot
-
-With yWorks, you can create meaningful visualizations that help users gain insights into the data quickly and easily. Build interactive user-interfaces that match your company's corporate design and easily connect to existing infrastructure and services. Use highly sophisticated automatic graph layouts to generate clear visualizations of the data hidden in your Azure Cosmos DB account. Efficient implementations of the most important graph analysis algorithms enable the creation of responsive user interfaces that highlight the information the user is interested in or needs to be aware of. Use yFiles to create interactive apps that work on desktops, and mobile devices alike.
-
-Typical use-cases and data models include:
-
-* Social networks, money laundering data, and cash-flow networks, where similar entities are connected to each other
-* Process data where entities are being processed and move from one state to another
-* Organizational charts and networks, showing team hierarchies, but also majority ownership dependencies and relationships between companies or customers
-* Data lineage and compliance data that can be visualized, reviewed, and audited
-* Computer network logs, website logs, and customer journey logs
-* Knowledge graphs, stored as triplets and in other formats
-* Product Lifecycle Management data
-* Bill of Material lists and Supply Chain data
-
-### Useful links
-
-* [Pricing](https://www.yworks.com/products/yfiles-for-html/pricing)
-* [Visualizing a Microsoft Azure Cosmos DB](https://www.yworks.com/pages/visualizing-a-microsoft-azure-cosmos-db)
-* [yFiles - the diagramming library](https://www.yworks.com/yfiles-overview)
-* [yWorks - Demos](https://www.yworks.com/products/yfiles/demos)
-
-### Next Steps
-
-* [Cosmos DB - Gremlin API Pricing](../how-pricing-works.md)
cosmos-db Gremlin Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/gremlin-headers.md
- Title: Azure Cosmos DB Gremlin response headers
-description: Reference documentation for server response metadata that enables additional troubleshooting
--- Previously updated : 09/03/2019----
-# Azure Cosmos DB Gremlin server response headers
-
-This article covers the headers that the Cosmos DB Gremlin server returns to the caller upon request execution. These headers are useful for troubleshooting request performance, building applications that integrate natively with the Cosmos DB service, and simplifying customer support.
-
-Keep in mind that by taking a dependency on these headers, you limit the portability of your application to other Gremlin implementations. In return, you gain tighter integration with Cosmos DB Gremlin. These headers are not a TinkerPop standard.
-
-## Headers
-
-| Header | Type | Sample Value | When Included | Explanation |
-| --- | --- | --- | --- | --- |
-| **x-ms-request-charge** | double | 11.3243 | Success and Failure | Amount of collection or database throughput consumed in [request units (RU/s or RUs)](../request-units.md) for a partial response message. This header is present in every continuation for requests that have multiple chunks. It reflects the charge of a particular response chunk. Only for requests that consist of a single response chunk does this header match the total cost of the traversal. However, for the majority of complex traversals, this value represents a partial cost. |
-| **x-ms-total-request-charge** | double | 423.987 | Success and Failure | Amount of collection or database throughput consumed in [request units (RU/s or RUs)](../request-units.md) for entire request. This header is present in every continuation for requests that have multiple chunks. It indicates cumulative charge since the beginning of request. Value of this header in the last chunk indicates complete request charge. |
-| **x-ms-server-time-ms** | double | 13.75 | Success and Failure | This header is included for latency troubleshooting purposes. It indicates the amount of time, in milliseconds, that Cosmos DB Gremlin server took to execute and produce a partial response message. Using value of this header and comparing it to overall request latency applications can calculate network latency overhead. |
-| **x-ms-total-server-time-ms** | double | 130.512 | Success and Failure | Total time, in milliseconds, that Cosmos DB Gremlin server took to execute entire traversal. This header is included in every partial response. It represents cumulative execution time since the start of request. The last response indicates total execution time. This header is useful to differentiate between client and server as a source of latency. You can compare traversal execution time on the client to the value of this header. |
-| **x-ms-status-code** | long | 200 | Success and Failure | Header indicates internal reason for request completion or termination. Application is advised to look at the value of this header and take corrective action. |
-| **x-ms-substatus-code** | long | 1003 | Failure Only | Cosmos DB is a multi-model database that is built on top of unified storage layer. This header contains additional insights about the failure reason when failure occurs within lower layers of high availability stack. Application is advised to store this header and use it when contacting Cosmos DB customer support. Value of this header is useful for Cosmos DB engineer for quick troubleshooting. |
-| **x-ms-retry-after-ms** | string (TimeSpan) | "00:00:03.9500000" | Failure Only | This header is a string representation of a .NET [TimeSpan](/dotnet/api/system.timespan) type. This value will only be included in requests failed due provisioned throughput exhaustion. Application should resubmit traversal again after instructed period of time. |
-| **x-ms-activity-id** | string (Guid) | "A9218E01-3A3A-4716-9636-5BD86B056613" | Success and Failure | Header contains a unique server-side identifier of a request. Each request is assigned a unique identifier by the server for tracking purposes. Applications should log activity identifiers returned by the server for requests that customers may want to contact customer support about. Cosmos DB support personnel can find specific requests by these identifiers in Cosmos DB service telemetry. |
-
-## Status codes
-
-The most common codes the server returns for the `x-ms-status-code` status attribute are listed below.
-
-| Status | Explanation |
-| | |
-| **401** | The error message `"Unauthorized: Invalid credentials provided"` is returned when the authentication password doesn't match the Cosmos DB account key. Navigate to your Cosmos DB Gremlin account in the Azure portal and confirm that the key is correct.|
-| **404** | Returned for concurrent operations that attempt to delete and update the same edge or vertex simultaneously. The error message `"Owner resource does not exist"` indicates that the database or collection specified in the connection parameters (in `/dbs/<database name>/colls/<collection or graph name>` format) is incorrect.|
-| **409** | `"Conflicting request to resource has been attempted. Retry to avoid conflicts."` This usually happens when a vertex or an edge with the same identifier already exists in the graph.|
-| **412** | Status code is complemented with the error message `"PreconditionFailedException": One of the specified pre-condition is not met`. This error indicates an optimistic concurrency control violation between reading an edge or vertex and writing it back to the store after modification. The most common situation in which this error occurs is property modification, for example `g.V('identifier').property('name','value')`. The Gremlin engine reads the vertex, modifies it, and writes it back. If another traversal running in parallel tries to write the same vertex or edge, one of them receives this error. The application should submit the traversal to the server again.|
-| **429** | Request was throttled and should be retried after the interval specified in the **x-ms-retry-after-ms** header.|
-| **500** | An error message that contains `"NotFoundException: Entity with the specified id does not exist in the system."` indicates that a database and/or collection was re-created with the same name. This error disappears within 5 minutes as the change propagates and invalidates caches in different Cosmos DB components. To avoid this issue, use unique database and collection names every time.|
-| **1000** | This status code is returned when the server successfully parses a message but isn't able to execute it. It usually indicates a problem with the query.|
-| **1001** | This code is returned when the server completes traversal execution but fails to serialize the response back to the client. This error can happen when the traversal generates a complex result that is too large or doesn't conform to the TinkerPop protocol specification. The application should simplify the traversal when it encounters this error. |
-| **1003** | `"Query exceeded memory limit. Bytes Consumed: XXX, Max: YYY"` is returned when a traversal exceeds the allowed memory limit. The memory limit is **2 GB** per traversal.|
-| **1004** | This status code indicates a malformed graph request. A request can be malformed when it fails deserialization, when a non-value type is deserialized as a value type, or when an unsupported Gremlin operation is requested. The application shouldn't retry the request because it won't be successful. |
-| **1007** | Usually this status code is returned with the error message `"Could not process request. Underlying connection has been closed."`. This situation can happen if the client driver attempts to use a connection that is being closed by the server. The application should retry the traversal on a different connection.|
-| **1008** | The Cosmos DB Gremlin server can terminate connections to rebalance traffic in the cluster. Client drivers should handle this situation and use only live connections to send requests to the server. Occasionally client drivers may not detect that a connection was closed. When the application encounters the error `"Connection is too busy. Please retry after sometime or open more connections."`, it should retry the traversal on a different connection.|
-| **1009** | The operation didn't complete in the allotted time and was canceled by the server. Optimize your traversals to run quickly by filtering vertices or edges on every hop of the traversal to narrow the search scope. The default request timeout is **60 seconds**. |
-
-## Samples
-
-A sample client application based on Gremlin.Net that reads status attributes from the server response:
-
-```csharp
-// Following example reads a status code and total request charge from server response attributes.
-// Variable "server" is assumed to be assigned to an instance of a GremlinServer that is connected to Cosmos DB account.
-using (GremlinClient client = new GremlinClient(server, new GraphSON2Reader(), new GraphSON2Writer(), GremlinClient.GraphSON2MimeType))
-{
- ResultSet<dynamic> responseResultSet = await GremlinClientExtensions.SubmitAsync<dynamic>(client, requestScript: "g.V().count()");
- long statusCode = (long)responseResultSet.StatusAttributes["x-ms-status-code"];
- double totalRequestCharge = (double)responseResultSet.StatusAttributes["x-ms-total-request-charge"];
-
- // Status code and request charge are logged into application telemetry.
-}
-```
-
-An example that demonstrates how to read status attributes from the Gremlin Java client:
-
-```java
-try {
-    ResultSet resultSet = this.client.submit("g.addV().property('id', '13')");
-    List<Result> results = resultSet.all().get();
-
-    // Process and consume results
-
-} catch (ResponseException re) {
-    // Check for known errors that need to be retried or skipped
-    if (re.getStatusAttributes().isPresent()) {
-        Map<String, Object> attributes = re.getStatusAttributes().get();
-        int statusCode = (int) attributes.getOrDefault("x-ms-status-code", -1);
-
-        // Now we can check for specific conditions
-        if (statusCode == 409) {
-            // Handle conflicting writes
-        }
-
-        // Check if we need to delay the retry
-        if (attributes.containsKey("x-ms-retry-after-ms")) {
-            // Read the value of the attribute as is
-            String retryAfterTimeSpan = (String) attributes.get("x-ms-retry-after-ms");
-
-            // Convert the value into an actionable duration
-            LocalTime localTime = LocalTime.parse(retryAfterTimeSpan);
-            Duration duration = Duration.between(LocalTime.MIN, localTime);
-
-            // Perform a retry after "duration" interval of time has elapsed
-        }
-    }
-}
-
-```
-
-## Next steps
-* [HTTP status codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb)
-* [Common Azure Cosmos DB REST response headers](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers)
-* [TinkerPop Graph Driver Provider Requirements]( http://tinkerpop.apache.org/docs/current/dev/provider/#_graph_driver_provider_requirements)
cosmos-db Gremlin Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/gremlin-limits.md
- Title: Limits of Azure Cosmos DB Gremlin
-description: Reference documentation for runtime limitations of Graph engine
--- Previously updated : 10/04/2019----
-# Azure Cosmos DB Gremlin limits
-
-This article describes the limits of the Azure Cosmos DB Gremlin engine and explains how they may impact customer traversals.
-
-Because Cosmos DB Gremlin is built on top of the Cosmos DB infrastructure, all limits explained in [Azure Cosmos DB service limits](../concepts-limits.md) still apply.
-
-## Limits
-
-When a Gremlin limit is reached, the traversal is canceled with an **x-ms-status-code** of 429, indicating a throttling error. For more information, see [Gremlin server response headers](gremlin-headers.md); a minimal retry sketch follows the table below.
-
-**Resource** | **Default limit** | **Explanation**
- | |
-*Script length* | **64 KB** | Maximum length of a Gremlin traversal script per request.
-*Operator depth* | **400** | Total number of unique steps in a traversal. For example, ```g.V().out()``` has an operator count of 2: V() and out(). ```g.V('label').repeat(out()).times(100)``` has an operator depth of 3: V(), repeat(), and out(), because ```.times(100)``` is a parameter to the ```.repeat()``` operator.
-*Degree of parallelism* | **32** | Maximum number of storage partitions queried in a single request to the storage layer. Graphs with hundreds of partitions will be impacted by this limit.
-*Repeat limit* | **32** | Maximum number of iterations a ```.repeat()``` operator can execute. Each iteration of the ```.repeat()``` step in most cases runs a breadth-first traversal, which means that any traversal is limited to at most 32 hops between vertices.
-*Traversal timeout* | **30 seconds** | A traversal is canceled when it exceeds this time. Cosmos DB Graph is an OLTP database, and the vast majority of traversals complete within milliseconds. To run OLAP queries on Cosmos DB Graph, use [Apache Spark](https://azure.microsoft.com/services/cosmos-db/) with [Graph Data Frames](https://spark.apache.org/docs/latest/sql-programming-guide.html#datasets-and-dataframes) and the [Cosmos DB Spark Connector](https://github.com/Azure/azure-cosmosdb-spark).
-*Idle connection timeout* | **1 hour** | Amount of time the Gremlin service keeps idle websocket connections open. TCP keep-alive packets or HTTP keep-alive requests don't extend a connection's lifespan beyond this limit. The Cosmos DB Graph engine considers websocket connections to be idle if there are no active Gremlin requests running on them.
-*Resource token per hour* | **100** | Number of unique resource tokens used by Gremlin clients to connect to the Gremlin account in a region. When the application exceeds the hourly unique token limit, `"Exceeded allowed resource token limit of 100 that can be used concurrently"` will be returned on the next authentication request.
-
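-When a traversal is throttled, the client application should back off and then resubmit it. The following is a minimal retry sketch using Gremlin.Net; it assumes an already connected `GremlinClient` named `client` and that a failed traversal surfaces as a `ResponseException`. In production code, prefer the interval returned in the **x-ms-retry-after-ms** status attribute over a fixed delay.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using Gremlin.Net.Driver;
-using Gremlin.Net.Driver.Exceptions;
-
-// Minimal sketch: retry a traversal a few times with a growing delay.
-// "client" is assumed to be a GremlinClient connected to the Cosmos DB Gremlin account.
-static async Task<ResultSet<dynamic>> SubmitWithRetryAsync(GremlinClient client, string traversal, int maxAttempts = 3)
-{
-    for (int attempt = 1; ; attempt++)
-    {
-        try
-        {
-            return await client.SubmitAsync<dynamic>(traversal);
-        }
-        catch (ResponseException) when (attempt < maxAttempts)
-        {
-            // Back off before retrying. When available, use the x-ms-retry-after-ms
-            // value from the response attributes instead of this fixed delay.
-            await Task.Delay(TimeSpan.FromMilliseconds(500 * attempt));
-        }
-    }
-}
-```
-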
-## Next steps
-* [Azure Cosmos DB Gremlin response headers](gremlin-headers.md)
-* [Azure Cosmos DB Resource Tokens with Gremlin](how-to-use-resource-tokens-gremlin.md)
cosmos-db Gremlin Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/gremlin-support.md
- Title: Azure Cosmos DB Gremlin support and compatibility with TinkerPop features
-description: Learn about the Gremlin language from Apache TinkerPop. Learn which features and steps are available in Azure Cosmos DB and the TinkerPop Graph engine compatibility differences.
--- Previously updated : 07/06/2021----
-# Azure Cosmos DB Gremlin graph support and compatibility with TinkerPop features
-
-Azure Cosmos DB supports [Apache TinkerPop's](https://tinkerpop.apache.org) graph traversal language, known as [Gremlin](https://tinkerpop.apache.org/docs/3.3.2/reference/#graph-traversal-steps). You can use the Gremlin language to create graph entities (vertices and edges), modify properties within those entities, perform queries and traversals, and delete entities.
-
-The Azure Cosmos DB Graph engine closely follows the [Apache TinkerPop](https://tinkerpop.apache.org/docs/current/reference/#graph-traversal-steps) traversal steps specification, but there are differences in the implementation that are specific to Azure Cosmos DB. In this article, we provide a quick walkthrough of Gremlin and enumerate the Gremlin features that are supported by the Gremlin API.
-
-## Compatible client libraries
-
-The following table shows popular Gremlin drivers that you can use against Azure Cosmos DB:
-
-| Download | Source | Getting Started | Supported/Recommended connector version |
-| | | | |
-| [.NET](https://tinkerpop.apache.org/docs/3.4.13/reference/#gremlin-DotNet) | [Gremlin.NET on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-dotnet) | [Create Graph using .NET](create-graph-dotnet.md) | 3.4.13 |
-| [Java](https://mvnrepository.com/artifact/com.tinkerpop.gremlin/gremlin-java) | [Gremlin JavaDoc](https://tinkerpop.apache.org/javadocs/current/full/) | [Create Graph using Java](create-graph-java.md) | 3.4.13 |
-| [Python](https://tinkerpop.apache.org/docs/3.4.13/reference/#gremlin-python) | [Gremlin-Python on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-python) | [Create Graph using Python](create-graph-python.md) | 3.4.13 |
-| [Gremlin console](https://tinkerpop.apache.org/download.html) | [TinkerPop docs](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console) | [Create Graph using Gremlin Console](create-graph-console.md) | 3.4.13 |
-| [Node.js](https://www.npmjs.com/package/gremlin) | [Gremlin-JavaScript on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-javascript) | [Create Graph using Node.js](create-graph-nodejs.md) | 3.4.13 |
-| [PHP](https://packagist.org/packages/brightzone/gremlin-php) | [Gremlin-PHP on GitHub](https://github.com/PommeVerte/gremlin-php) | [Create Graph using PHP](create-graph-php.md) | 3.1.0 |
-| [Go Lang](https://github.com/supplyon/gremcos/) | [Go Lang](https://github.com/supplyon/gremcos/) | | This library is built by external contributors. The Azure Cosmos DB team doesn't offer any support or maintain the library. |
-
-> [!NOTE]
-> Gremlin client driver versions __3.5.*__ and __3.6.*__ have known compatibility issues, so we recommend using the latest supported 3.4.* driver versions listed above.
-> This table will be updated when compatibility issues have been addressed for these newer driver versions.
-
-## Supported Graph Objects
-
-TinkerPop is a standard that covers a wide range of graph technologies. Therefore, it has standard terminology to describe what features are provided by a graph provider. Azure Cosmos DB provides a persistent, high concurrency, writeable graph database that can be partitioned across multiple servers or clusters.
-
-The following table lists the TinkerPop features that are implemented by Azure Cosmos DB:
-
-| Category | Azure Cosmos DB implementation | Notes |
-| | | |
-| Graph features | Provides Persistence and ConcurrentAccess. Designed to support Transactions | Computer methods can be implemented via the Spark connector. |
-| Variable features | Supports Boolean, Byte, Double, Float, Integer, Long, String | Supports primitive types, is compatible with complex types via data model |
-| Vertex features | Supports RemoveVertices, MetaProperties, AddVertices, MultiProperties, StringIds, UserSuppliedIds, AddProperty, RemoveProperty | Supports creating, modifying, and deleting vertices |
-| Vertex property features | StringIds, UserSuppliedIds, AddProperty, RemoveProperty, BooleanValues, ByteValues, DoubleValues, FloatValues, IntegerValues, LongValues, StringValues | Supports creating, modifying, and deleting vertex properties |
-| Edge features | AddEdges, RemoveEdges, StringIds, UserSuppliedIds, AddProperty, RemoveProperty | Supports creating, modifying, and deleting edges |
-| Edge property features | Properties, BooleanValues, ByteValues, DoubleValues, FloatValues, IntegerValues, LongValues, StringValues | Supports creating, modifying, and deleting edge properties |
-
-## Gremlin wire format
-
-Azure Cosmos DB uses the JSON format when returning results from Gremlin operations. For example, the following snippet shows a JSON representation of a vertex *returned to the client* from Azure Cosmos DB:
-
-```json
- {
- "id": "a7111ba7-0ea1-43c9-b6b2-efc5e3aea4c0",
- "label": "person",
- "type": "vertex",
- "outE": {
- "knows": [
- {
- "id": "3ee53a60-c561-4c5e-9a9f-9c7924bc9aef",
- "inV": "04779300-1c8e-489d-9493-50fd1325a658"
- },
- {
- "id": "21984248-ee9e-43a8-a7f6-30642bc14609",
- "inV": "a8e3e741-2ef7-4c01-b7c8-199f8e43e3bc"
- }
- ]
- },
- "properties": {
- "firstName": [
- {
- "value": "Thomas"
- }
- ],
- "lastName": [
- {
- "value": "Andersen"
- }
- ],
- "age": [
- {
- "value": 45
- }
- ]
- }
- }
-```
-
-The properties used by the JSON format for vertices are described below:
-
-| Property | Description |
-| | |
-| `id` | The ID for the vertex. Must be unique (in combination with the value of `_partition` if applicable). If no value is provided, it will be automatically supplied with a GUID |
-| `label` | The label of the vertex. This property is used to describe the entity type. |
-| `type` | Used to distinguish vertices from non-graph documents |
-| `properties` | Bag of user-defined properties associated with the vertex. Each property can have multiple values. |
-| `_partition` | The partition key of the vertex. Used for [graph partitioning](graph-partitioning.md). |
-| `outE` | This property contains a list of out edges from a vertex. Storing the adjacency information with the vertex allows for fast execution of traversals. Edges are grouped based on their labels. |
-
-Each property can store multiple values within an array.
-
-| Property | Description |
-| | |
-| `value` | The value of the property |
-
-And the edge contains the following information to help with navigation to other parts of the graph.
-
-| Property | Description |
-| | |
-| `id` | The ID for the edge. Must be unique (in combination with the value of `_partition` if applicable) |
-| `label` | The label of the edge. This property is optional, and used to describe the relationship type. |
-| `inV` | This property contains a list of in vertices for an edge. Storing the adjacency information with the edge allows for fast execution of traversals. Vertices are grouped based on their labels. |
-| `properties` | Bag of user-defined properties associated with the edge. |
-
-## Gremlin steps
-
-Now let's look at the Gremlin steps supported by Azure Cosmos DB. For a complete reference on Gremlin, see [TinkerPop reference](https://tinkerpop.apache.org/docs/3.3.2/reference).
-
-| step | Description | TinkerPop 3.3.2 Documentation |
-| | | |
-| `addE` | Adds an edge between two vertices | [addE step](https://tinkerpop.apache.org/docs/3.3.2/reference/#addedge-step) |
-| `addV` | Adds a vertex to the graph | [addV step](https://tinkerpop.apache.org/docs/3.3.2/reference/#addvertex-step) |
-| `and` | Ensures that all the traversals return a value | [and step](https://tinkerpop.apache.org/docs/3.3.2/reference/#and-step) |
-| `as` | A step modulator to assign a variable to the output of a step | [as step](https://tinkerpop.apache.org/docs/3.3.2/reference/#as-step) |
-| `by` | A step modulator used with `group` and `order` | [by step](https://tinkerpop.apache.org/docs/3.3.2/reference/#by-step) |
-| `coalesce` | Returns the first traversal that returns a result | [coalesce step](https://tinkerpop.apache.org/docs/3.3.2/reference/#coalesce-step) |
-| `constant` | Returns a constant value. Used with `coalesce`| [constant step](https://tinkerpop.apache.org/docs/3.3.2/reference/#constant-step) |
-| `count` | Returns the count from the traversal | [count step](https://tinkerpop.apache.org/docs/3.3.2/reference/#count-step) |
-| `dedup` | Returns the values with the duplicates removed | [dedup step](https://tinkerpop.apache.org/docs/3.3.2/reference/#dedup-step) |
-| `drop` | Drops the values (vertex/edge) | [drop step](https://tinkerpop.apache.org/docs/3.3.2/reference/#drop-step) |
-| `executionProfile` | Creates a description of all operations generated by the executed Gremlin step | [executionProfile step](graph-execution-profile.md) |
-| `fold` | Acts as a barrier that computes the aggregate of results| [fold step](https://tinkerpop.apache.org/docs/3.3.2/reference/#fold-step) |
-| `group` | Groups the values based on the labels specified| [group step](https://tinkerpop.apache.org/docs/3.3.2/reference/#group-step) |
-| `has` | Used to filter properties, vertices, and edges. Supports `hasLabel`, `hasId`, `hasNot`, and `has` variants. | [has step](https://tinkerpop.apache.org/docs/3.3.2/reference/#has-step) |
-| `inject` | Inject values into a stream| [inject step](https://tinkerpop.apache.org/docs/3.3.2/reference/#inject-step) |
-| `is` | Used to perform a filter using a boolean expression | [is step](https://tinkerpop.apache.org/docs/3.3.2/reference/#is-step) |
-| `limit` | Used to limit number of items in the traversal| [limit step](https://tinkerpop.apache.org/docs/3.3.2/reference/#limit-step) |
-| `local` | Local wraps a section of a traversal, similar to a subquery | [local step](https://tinkerpop.apache.org/docs/3.3.2/reference/#local-step) |
-| `not` | Used to produce the negation of a filter | [not step](https://tinkerpop.apache.org/docs/3.3.2/reference/#not-step) |
-| `optional` | Returns the result of the specified traversal if it yields a result else it returns the calling element | [optional step](https://tinkerpop.apache.org/docs/3.3.2/reference/#optional-step) |
-| `or` | Ensures at least one of the traversals returns a value | [or step](https://tinkerpop.apache.org/docs/3.3.2/reference/#or-step) |
-| `order` | Returns results in the specified sort order | [order step](https://tinkerpop.apache.org/docs/3.3.2/reference/#order-step) |
-| `path` | Returns the full path of the traversal | [path step](https://tinkerpop.apache.org/docs/3.3.2/reference/#path-step) |
-| `project` | Projects the properties as a Map | [project step](https://tinkerpop.apache.org/docs/3.3.2/reference/#project-step) |
-| `properties` | Returns the properties for the specified labels | [properties step](https://tinkerpop.apache.org/docs/3.3.2/reference/#_properties_step) |
-| `range` | Filters to the specified range of values| [range step](https://tinkerpop.apache.org/docs/3.3.2/reference/#range-step) |
-| `repeat` | Repeats the step for the specified number of times. Used for looping | [repeat step](https://tinkerpop.apache.org/docs/3.3.2/reference/#repeat-step) |
-| `sample` | Used to sample results from the traversal | [sample step](https://tinkerpop.apache.org/docs/3.3.2/reference/#sample-step) |
-| `select` | Used to project results from the traversal | [select step](https://tinkerpop.apache.org/docs/3.3.2/reference/#select-step) |
-| `store` | Used for non-blocking aggregates from the traversal | [store step](https://tinkerpop.apache.org/docs/3.3.2/reference/#store-step) |
-| `TextP.startingWith(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property with the beginning of a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
-| `TextP.endingWith(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property with the ending of a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
-| `TextP.containing(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property with the contents of a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
-| `TextP.notStartingWith(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property that doesn't start with a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
-| `TextP.notEndingWith(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property that doesn't end with a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
-| `TextP.notContaining(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property that doesn't contain a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
-| `tree` | Aggregate paths from a vertex into a tree | [tree step](https://tinkerpop.apache.org/docs/3.3.2/reference/#tree-step) |
-| `unfold` | Unroll an iterator as a step| [unfold step](https://tinkerpop.apache.org/docs/3.3.2/reference/#unfold-step) |
-| `union` | Merge results from multiple traversals| [union step](https://tinkerpop.apache.org/docs/3.3.2/reference/#union-step) |
-| `V` | Includes the steps necessary for traversals between vertices and edges: `V`, `E`, `out`, `in`, `both`, `outE`, `inE`, `bothE`, `outV`, `inV`, `bothV`, and `otherV` | [vertex steps](https://tinkerpop.apache.org/docs/3.3.2/reference/#vertex-steps) |
-| `where` | Used to filter results from the traversal. Supports `eq`, `neq`, `lt`, `lte`, `gt`, `gte`, and `between` operators | [where step](https://tinkerpop.apache.org/docs/3.3.2/reference/#where-step) |
-
-The write-optimized engine provided by Azure Cosmos DB supports automatic indexing of all properties within vertices and edges by default. Therefore, queries with filters, range queries, sorting, or aggregates on any property are processed from the index, and served efficiently. For more information on how indexing works in Azure Cosmos DB, see our paper on [schema-agnostic indexing](https://www.vldb.org/pvldb/vol8/p1668-shukla.pdf).
-
-## Behavior differences
-
-* Azure Cosmos DB Graph engine runs ***breadth-first*** traversal while TinkerPop Gremlin is depth-first. This behavior achieves better performance in a horizontally scalable system like Cosmos DB.
-
-## Unsupported features
-
-* ***[Gremlin Bytecode](https://tinkerpop.apache.org/docs/current/tutorials/gremlin-language-variants/)*** is a programming language agnostic specification for graph traversals. Cosmos DB Graph doesn't support it yet. Use `GremlinClient.SubmitAsync()` and pass the traversal as a text string, as shown in the sketch after this list.
-
-* ***`property(set, 'xyz', 1)`*** set cardinality isn't supported today. Use `property(list, 'xyz', 1)` instead. To learn more, see [Vertex properties with TinkerPop](http://tinkerpop.apache.org/docs/current/reference/#vertex-properties).
-
-* The ***`match()` step*** isn't currently available. This step provides declarative querying capabilities.
-
-* ***Objects as properties*** on vertices or edges aren't supported. Properties can only be primitive types or arrays.
-
-* ***Sorting by array properties*** `order().by(<array property>)` isn't supported. Sorting is supported only by primitive types.
-
-* ***Non-primitive JSON types*** aren't supported. Use `string`, `number`, or `true`/`false` types. `null` values aren't supported.
-
-* ***GraphSONv3*** serializer isn't currently supported. Use `GraphSONv2` Serializer, Reader, and Writer classes in the connection configuration. The results returned by the Azure Cosmos DB Gremlin API don't have the same format as the GraphSON format.
-
-* **Lambda expressions and functions** aren't currently supported. This includes the `.map{<expression>}`, the `.by{<expression>}`, and the `.filter{<expression>}` functions. To learn more, and to learn how to rewrite them using Gremlin steps, see [A Note on Lambdas](http://tinkerpop.apache.org/docs/current/reference/#a-note-on-lambdas).
-
-* ***Transactions*** aren't supported because of the distributed nature of the system. Configure an appropriate consistency model on the Gremlin account to "read your own writes", and use optimistic concurrency to resolve conflicting writes.
-
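-The following minimal sketch shows the string-based submission pattern described in the Gremlin Bytecode item above. It assumes a `GremlinServer` instance named `server` that's already configured for your Gremlin account; the traversal text is illustrative only.
-
-```csharp
-using Gremlin.Net.Driver;
-using Gremlin.Net.Structure.IO.GraphSON;
-
-// Submit the traversal as a text string instead of using Gremlin Bytecode (GLV).
-// "server" is assumed to be a GremlinServer configured for your Cosmos DB Gremlin account.
-using (var gremlinClient = new GremlinClient(server, new GraphSON2Reader(), new GraphSON2Writer(), GremlinClient.GraphSON2MimeType))
-{
-    ResultSet<dynamic> results = await gremlinClient.SubmitAsync<dynamic>("g.V().has('firstName', 'Thomas').out('knows').values('firstName')");
-}
-```
-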
-## Known limitations
-
-* **Index utilization for Gremlin queries with mid-traversal `.V()` steps**: Currently, only the first `.V()` call of a traversal will make use of the index to resolve any filters or predicates attached to it. Subsequent calls will not consult the index, which might increase the latency and cost of the query.
-
-Assuming default indexing, a typical read Gremlin query that starts with the `.V()` step would use parameters in its attached filtering steps, such as `.has()` or `.where()`, to optimize the cost and performance of the query. For example:
-
-```java
-g.V().has('category', 'A')
-```
-
-However, when more than one `.V()` step is included in the Gremlin query, the resolution of the data for the query might not be optimal. Take the following query as an example:
-
-```java
-g.V().has('category', 'A').as('a').V().has('category', 'B').as('b').select('a', 'b')
-```
-
-This query will return two groups of vertices based on their property called `category`. In this case, only the first call, `g.V().has('category', 'A')`, makes use of the index to resolve the vertices based on the values of their properties.
-
-A workaround for this query is to use subtraversal steps such as `.map()` and `.union()`, as shown below:
-
-```java
-// Query workaround using .map()
-g.V().has('category', 'A').as('a').map(__.V().has('category', 'B')).as('b').select('a','b')
-
-// Query workaround using .union()
-g.V().has('category', 'A').fold().union(unfold(), __.V().has('category', 'B'))
-```
-
-You can review the performance of the queries by using the [Gremlin `executionProfile()` step](graph-execution-profile.md).
-
-## Next steps
-
-* Get started building a graph application [using our SDKs](create-graph-dotnet.md)
-* Learn more about [graph support](graph-introduction.md) in Azure Cosmos DB
cosmos-db How To Create Container Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/how-to-create-container-gremlin.md
- Title: Create a container in Azure Cosmos DB Gremlin API
-description: Learn how to create a container in Azure Cosmos DB Gremlin API by using Azure portal, .NET and other SDKs.
--- Previously updated : 10/16/2020-----
-# Create a container in Azure Cosmos DB Gremlin API
-
-This article explains the different ways to create a container in Azure Cosmos DB Gremlin API. It shows how to create a container using the Azure portal, Azure CLI, PowerShell, or supported SDKs, and demonstrates how to specify the partition key and provision throughput.
-
-If you are using a different API, see the [API for MongoDB](../mongodb/how-to-create-container-mongodb.md) or [Cassandra API](../cassandr) articles to create the container.
-
-> [!NOTE]
-> When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
-
-## <a id="portal-gremlin"></a>Create using Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. [Create a new Azure Cosmos account](create-graph-dotnet.md#create-a-database-account), or select an existing account.
-
-1. Open the **Data Explorer** pane, and select **New Graph**. Next, provide the following details:
-
- * Indicate whether you are creating a new database, or using an existing one.
- * Enter a Graph ID.
- * Select **Unlimited** storage capacity.
- * Enter a partition key for vertices.
- * Enter a throughput to be provisioned (for example, 1000 RUs).
- * Select **OK**.
-
- :::image type="content" source="../media/how-to-create-container/partitioned-collection-create-gremlin.png" alt-text="Screenshot of Gremlin API, Add Graph dialog box":::
-
-## <a id="dotnet-sql-graph"></a>Create using .NET SDK
-
-If you encounter a timeout exception when creating a collection, do a read operation to validate whether the collection was created successfully. The read operation throws an exception until the collection create operation succeeds. For the list of status codes supported by the create operation, see the [HTTP Status Codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) article. A minimal validation sketch follows the create example below.
-
-```csharp
-// Create a container with a partition key and provision 1000 RU/s throughput.
-DocumentCollection myCollection = new DocumentCollection();
-myCollection.Id = "myContainerName";
-myCollection.PartitionKey.Paths.Add("/myPartitionKey");
-
-await client.CreateDocumentCollectionAsync(
- UriFactory.CreateDatabaseUri("myDatabaseName"),
- myCollection,
- new RequestOptions { OfferThroughput = 1000 });
-```
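-
-If the create call times out, the following sketch shows the read-based validation described earlier. It reuses the placeholder database and container names from the create example above.
-
-```csharp
-// Validate whether the container exists, for example after a timeout on the create call.
-try
-{
-    await client.ReadDocumentCollectionAsync(
-        UriFactory.CreateDocumentCollectionUri("myDatabaseName", "myContainerName"));
-    // The read succeeded, so the container was created.
-}
-catch (DocumentClientException ex) when (ex.StatusCode == System.Net.HttpStatusCode.NotFound)
-{
-    // The container doesn't exist yet; retry the create operation.
-}
-```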
-
-## <a id="cli-mongodb"></a>Create using Azure CLI
-
-[Create a Gremlin graph with Azure CLI](../scripts/cli/gremlin/create.md). For a listing of all Azure CLI samples across all Azure Cosmos DB APIs, see [Azure CLI samples for Azure Cosmos DB](cli-samples.md).
-
-## Create using PowerShell
-
-[Create a Gremlin graph with PowerShell](../scripts/powershell/gremlin/create.md). For a listing of all PowerShell samples across all Azure Cosmos DB APIs, see [PowerShell Samples](powershell-samples.md).
-
-## Next steps
-
-* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
-* [Request Units in Azure Cosmos DB](../request-units.md)
-* [Provision throughput on containers and databases](../set-throughput.md)
-* [Work with Azure Cosmos account](../account-databases-containers-items.md)
cosmos-db How To Provision Throughput Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/how-to-provision-throughput-gremlin.md
- Title: Provision throughput on Azure Cosmos DB Gremlin API resources
-description: Learn how to provision container, database, and autoscale throughput in Azure Cosmos DB Gremlin API resources. You will use Azure portal, CLI, PowerShell and various other SDKs.
--- Previously updated : 10/15/2020-----
-# Provision database, container or autoscale throughput on Azure Cosmos DB Gremlin API resources
-
-This article explains how to provision throughput in Azure Cosmos DB Gremlin API. You can provision standard (manual) or autoscale throughput on a container, or on a database and share it among the containers within the database. You can provision throughput using the Azure portal, Azure CLI, or Azure Cosmos DB SDKs.
-
-If you are using a different API, see the [SQL API](../how-to-provision-container-throughput.md) or [Cassandra API](../cassandr) articles to provision the throughput.
-
-## <a id="portal-gremlin"></a> Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. [Create a new Azure Cosmos account](../mongodb/create-mongodb-dotnet.md#create-an-azure-cosmos-db-account), or select an existing Azure Cosmos account.
-
-1. Open the **Data Explorer** pane, and select **New Graph**. Next, provide the following details:
-
- * Indicate whether you are creating a new database or using an existing one. Select the **Provision database throughput** option if you want to provision throughput at the database level.
- * Enter a graph ID.
- * Enter a partition key value (for example, `/ItemID`).
- * Enter a throughput that you want to provision (for example, 1000 RUs).
- * Select **OK**.
-
- :::image type="content" source="./media/how-to-provision-throughput-gremlin/provision-database-throughput-portal-gremlin-api.png" alt-text="Screenshot of Data Explorer, when creating a new graph with database level throughput":::
-
-## .NET SDK
-
-> [!Note]
-> Use the Cosmos SDKs for SQL API to provision throughput for all Azure Cosmos DB APIs, except Cassandra and MongoDB API.
-
-### Provision container level throughput
-
-# [.NET SDK V2](#tab/dotnetv2)
-
-```csharp
-// Create a container with a partition key and provision throughput of 400 RU/s
-DocumentCollection myCollection = new DocumentCollection();
-myCollection.Id = "myContainerName";
-myCollection.PartitionKey.Paths.Add("/myPartitionKey");
-
-await client.CreateDocumentCollectionAsync(
- UriFactory.CreateDatabaseUri("myDatabaseName"),
- myCollection,
- new RequestOptions { OfferThroughput = 400 });
-```
-
-# [.NET SDK V3](#tab/dotnetv3)
-
-[!code-csharp[](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos/tests/Microsoft.Azure.Cosmos.Tests/SampleCodeForDocs/ContainerDocsSampleCode.cs?name=ContainerCreateWithThroughput)]
---
-### Provision database level throughput
-
-# [.NET SDK V2](#tab/dotnetv2)
-
-```csharp
-//set the throughput for the database
-RequestOptions options = new RequestOptions
-{
- OfferThroughput = 500
-};
-
-//create the database
-await client.CreateDatabaseIfNotExistsAsync(
- new Database {Id = databaseName},
- options);
-```
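-
-Containers created in that database without their own `OfferThroughput` value share the database-level throughput. A minimal sketch, using the same placeholder names as above, is shown below.
-
-```csharp
-// Create a container that shares the database-level throughput
-// by omitting OfferThroughput from the request options.
-DocumentCollection sharedCollection = new DocumentCollection();
-sharedCollection.Id = "mySharedGraphName";
-sharedCollection.PartitionKey.Paths.Add("/myPartitionKey");
-
-await client.CreateDocumentCollectionAsync(
-    UriFactory.CreateDatabaseUri(databaseName),
-    sharedCollection);
-```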
-
-# [.NET SDK V3](#tab/dotnetv3)
-
-[!code-csharp[](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos/tests/Microsoft.Azure.Cosmos.Tests/SampleCodeForDocs/DatabaseDocsSampleCode.cs?name=DatabaseCreateWithThroughput)]
---
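-
-### Provision autoscale throughput
-
-Autoscale throughput can also be provisioned from the .NET SDK. The following is a minimal sketch using the v3 SDK (`Microsoft.Azure.Cosmos`); the endpoint, key, and resource names are placeholders, and `ThroughputProperties.CreateAutoscaleThroughput` sets the autoscale maximum RU/s.
-
-```csharp
-using Microsoft.Azure.Cosmos;
-
-// Placeholders: replace the endpoint, key, and resource names with your own values.
-CosmosClient client = new CosmosClient("<account endpoint>", "<account key>");
-
-// Create a database with autoscale throughput (maximum 4,000 RU/s).
-Database database = await client.CreateDatabaseIfNotExistsAsync(
-    "myDatabaseName",
-    ThroughputProperties.CreateAutoscaleThroughput(4000));
-
-// Create a graph (container) with dedicated autoscale throughput.
-Container graph = await database.CreateContainerIfNotExistsAsync(
-    new ContainerProperties("myGraphName", "/myPartitionKey"),
-    ThroughputProperties.CreateAutoscaleThroughput(4000));
-```
-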
-## Azure Resource Manager
-
-Azure Resource Manager templates can be used to provision autoscale throughput on database or container-level resources for all Azure Cosmos DB APIs. See [Azure Resource Manager templates for Azure Cosmos DB](resource-manager-template-samples.md) for samples.
-
-## Azure CLI
-
-Azure CLI can be used to provision autoscale throughput on database or container-level resources for all Azure Cosmos DB APIs. For samples, see [Azure CLI Samples for Azure Cosmos DB](cli-samples.md).
-
-## Azure PowerShell
-
-Azure PowerShell can be used to provision autoscale throughput on database or container-level resources for all Azure Cosmos DB APIs. For samples, see [Azure PowerShell samples for Azure Cosmos DB](powershell-samples.md).
-
-## Next steps
-
-See the following articles to learn about throughput provisioning in Azure Cosmos DB:
-
-* [Request units and throughput in Azure Cosmos DB](../request-units.md)
cosmos-db How To Use Resource Tokens Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/how-to-use-resource-tokens-gremlin.md
- Title: Use Azure Cosmos DB resource tokens with the Gremlin SDK
-description: Learn how to create resource tokens and use them to access the Graph database.
----- Previously updated : 09/06/2019---
-# Use Azure Cosmos DB resource tokens with the Gremlin SDK
-
-This article explains how to use [Azure Cosmos DB resource tokens](../secure-access-to-data.md) to access the Graph database through the Gremlin SDK.
-
-## Create a resource token
-
-The Apache TinkerPop Gremlin SDK doesn't have an API for creating resource tokens. The term *resource token* is an Azure Cosmos DB concept. To create resource tokens, download the [Azure Cosmos DB SDK](../sql-api-sdk-dotnet.md). If your application needs to create resource tokens and use them to access the Graph database, it requires two separate SDKs.
-
-The object model hierarchy above resource tokens is illustrated in the following outline:
--- **Azure Cosmos DB account** - The top-level entity that has a DNS associated with it (for example, `contoso.gremlin.cosmos.azure.com`).
- - **Azure Cosmos DB database**
- - **User**
- - **Permission**
- - **Token** - A Permission object property that denotes what actions are allowed or denied.
-
-A resource token uses the following format: `"type=resource&ver=1&sig=<base64 string>;<base64 string>;"`. This string is opaque for the clients and should be used as is, without modification or interpretation.
-
-```csharp
-// Notice that document client is created against .NET SDK endpoint, rather than Gremlin.
-DocumentClient client = new DocumentClient(
- new Uri("https://contoso.documents.azure.com:443/"),
- "<primary key>",
- new ConnectionPolicy
- {
- EnableEndpointDiscovery = false,
- ConnectionMode = ConnectionMode.Direct
- });
-
-// Read the specific permission to obtain a token.
-// The token isn't returned during the ReadPermissionAsync() call.
-// The call succeeds only if database id, user id, and permission id already exist.
-// Note that <database id> is not a database name. It is a base64 string that represents the database identifier, for example "KalVAA==".
-// Similar comment applies to <user id> and <permission id>.
-Permission permission = await client.ReadPermissionAsync(UriFactory.CreatePermissionUri("<database id>", "<user id>", "<permission id>"));
-
-Console.WriteLine("Obtained token {0}", permission.Token);
-```
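-
-If the user and permission don't exist yet, you can create them with the same .NET SDK before reading the token. The following is a minimal sketch with placeholder names; adjust `PermissionMode` and `ResourceLink` to match the access you want to grant.
-
-```csharp
-// One-time setup sketch: create a user and a read-only permission on a graph.
-// All names below are placeholders.
-User user = await client.CreateUserAsync(
-    UriFactory.CreateDatabaseUri("<database name>"),
-    new User { Id = "graphReader" });
-
-DocumentCollection graph = await client.ReadDocumentCollectionAsync(
-    UriFactory.CreateDocumentCollectionUri("<database name>", "<graph name>"));
-
-Permission permission = await client.CreatePermissionAsync(
-    UriFactory.CreateUserUri("<database name>", "graphReader"),
-    new Permission
-    {
-        Id = "graphReadPermission",
-        PermissionMode = PermissionMode.Read,
-        ResourceLink = graph.SelfLink
-    });
-
-Console.WriteLine("Created token {0}", permission.Token);
-```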
-
-## Use a resource token
-You can use a resource token directly as the "password" property when you construct the `GremlinServer` object.
-
-```csharp
-// The Gremlin application needs to be given a resource token. It can't discover the token on its own.
-// You can obtain the token for a given permission by using the Azure Cosmos DB SDK, or you can pass it into the application as a command line argument or configuration value.
-string resourceToken = GetResourceToken();
-
-// Configure the Gremlin server to use a resource token rather than a primary key.
-GremlinServer server = new GremlinServer(
- "contoso.gremlin.cosmosdb.azure.com",
- port: 443,
- enableSsl: true,
- username: "/dbs/<database name>/colls/<collection name>",
-
- // The format of the token is "type=resource&ver=1&sig=<base64 string>;<base64 string>;".
- password: resourceToken);
-
-using (GremlinClient gremlinClient = new GremlinClient(server, new GraphSON2Reader(), new GraphSON2Writer(), GremlinClient.GraphSON2MimeType))
-{
-    await gremlinClient.SubmitAsync("g.V().limit(1)");
-}
-```
-
-The same approach works in all TinkerPop Gremlin SDKs.
-
-```java
-Cluster.Builder builder = Cluster.build();
-
-AuthProperties authenticationProperties = new AuthProperties();
-authenticationProperties.with(AuthProperties.Property.USERNAME,
- String.format("/dbs/%s/colls/%s", "<database name>", "<collection name>"));
-
-// The format of the token is "type=resource&ver=1&sig=<base64 string>;<base64 string>;".
-authenticationProperties.with(AuthProperties.Property.PASSWORD, resourceToken);
-
-builder.authProperties(authenticationProperties);
-```
-
-## Limit
-
-With a single Gremlin account, you can issue an unlimited number of tokens. However, you can use only up to 100 tokens concurrently within 1 hour. If an application exceeds the token limit per hour, an authentication request is denied, and you receive the following error message: "Exceeded allowed resource token limit of 100 that can be used concurrently." Closing active connections that use specific tokens doesn't free up slots for new tokens. The Azure Cosmos DB Gremlin database engine keeps track of unique tokens during the hour immediately prior to the authentication request.
-
-## Permission
-
-A common error that applications encounter while they're using resource tokens is, "Insufficient permissions provided in the authorization header for the corresponding request. Please retry with another authorization header." This error is returned when a Gremlin traversal attempts to write an edge or a vertex but the resource token grants *Read* permissions only. Inspect your traversal to see whether it contains any of the following steps: *.addV()*, *.addE()*, *.drop()*, or *.property()*.
-
-## Next steps
-* [Azure role-based access control (Azure RBAC)](../role-based-access-control.md) in Azure Cosmos DB
-* [Learn how to secure access to data](../secure-access-to-data.md) in Azure Cosmos DB
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/manage-with-bicep.md
- Title: Create and manage Azure Cosmos DB Gremlin API with Bicep
-description: Use Bicep to create and configure Azure Cosmos DB Gremlin API.
---- Previously updated : 9/13/2021----
-# Manage Azure Cosmos DB Gremlin API resources using Bicep
--
-In this article, you learn how to use Bicep to deploy and manage your Azure Cosmos DB Gremlin API accounts, databases, and graphs.
-
-This article shows Bicep samples for Gremlin API accounts. You can also find Bicep samples for the [SQL](../sql/manage-with-bicep.md) and [Cassandra](../cassandr) APIs.
-
-> [!IMPORTANT]
->
-> * Account names are limited to 44 characters, all lowercase.
-> * To change the throughput values, redeploy the template with updated RU/s.
-> * When you add or remove locations to an Azure Cosmos account, you can't simultaneously modify other properties. These operations must be done separately.
-
-To create any of the Azure Cosmos DB resources below, copy the following example into a new Bicep file. You can optionally create a parameters file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Bicep files, including [Azure CLI](../../azure-resource-manager/bicep/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/bicep/deploy-powershell.md), and [Cloud Shell](../../azure-resource-manager/bicep/deploy-cloud-shell.md).
-
-<a id="create-autoscale"></a>
-
-## Gremlin API with autoscale provisioned throughput
-
-Create an Azure Cosmos account for Gremlin API with a database and graph with autoscale throughput.
--
-<a id="create-manual"></a>
-
-## Gremlin API with standard provisioned throughput
-
-Create an Azure Cosmos account for Gremlin API with a database and graph with standard provisioned throughput.
--
-## Next steps
-
-Here are some additional resources:
-
-* [Bicep documentation](../../azure-resource-manager/bicep/index.yml)
-* [Install Bicep tools](../../azure-resource-manager/bicep/install.md)
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/powershell-samples.md
- Title: Azure PowerShell samples for Azure Cosmos DB Gremlin API
-description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB Gremlin API
---- Previously updated : 01/20/2021----
-# Azure PowerShell samples for Azure Cosmos DB Gremlin API
-
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API-specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of the Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
-
-## Common Samples
-
-|Task | Description |
-|||
-|[Update an account](../scripts/powershell/common/account-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's default consistency level. |
-|[Update an account's regions](../scripts/powershell/common/update-region.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's regions. |
-|[Change failover priority or trigger failover](../scripts/powershell/common/failover-priority-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Change the regional failover priority of an Azure Cosmos account or trigger a manual failover. |
-|[Account keys or connection strings](../scripts/powershell/common/keys-connection-strings.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Get primary and secondary keys, connection strings or regenerate an account key of an Azure Cosmos DB account. |
-|[Create a Cosmos Account with IP Firewall](../scripts/powershell/common/firewall-create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account with IP Firewall enabled. |
-|||
-
-## Gremlin API Samples
-
-|Task | Description |
-|||
-|[Create an account, database and graph](../scripts/powershell/gremlin/create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account, database and graph. |
-|[Create an account, database and graph with autoscale](../scripts/powershell/gremlin/autoscale.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account, database and graph with autoscale. |
-|[List or get databases or graphs](../scripts/powershell/gremlin/list-get.md?toc=%2fpowershell%2fmodule%2ftoc.json)| List or get database or graph. |
-|[Perform throughput operations](../scripts/powershell/gremlin/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Perform throughput operations for a database or graph including get, update and migrate between autoscale and standard throughput. |
-|[Lock resources from deletion](../scripts/powershell/gremlin/lock.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Prevent resources from being deleted with resource locks. |
-|||
cosmos-db Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/resource-manager-template-samples.md
- Title: Resource Manager templates for Azure Cosmos DB Gremlin API
-description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB Gremlin API.
---- Previously updated : 10/14/2020----
-# Manage Azure Cosmos DB Gremlin API resources using Azure Resource Manager templates
-
-In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts, databases, and graphs.
-
-This article has examples for Gremlin API accounts only. To find examples for other API accounts, see the articles on using Azure Resource Manager templates with Azure Cosmos DB's API for [Cassandra](../cassandr).
-
-> [!IMPORTANT]
->
-> * Account names are limited to 44 characters, all lowercase.
-> * To change the throughput values, redeploy the template with updated RU/s.
-> * When you add or remove locations to an Azure Cosmos account, you can't simultaneously modify other properties. These operations must be done separately.
-
-To create any of the Azure Cosmos DB resources below, copy the following example template into a new JSON file. You can optionally create a parameters JSON file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Azure Resource Manager templates, including the [Azure portal](../../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../../azure-resource-manager/templates/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md), and [GitHub](../../azure-resource-manager/templates/deploy-to-azure-button.md).
-
-<a id="create-autoscale"></a>
-
-## Azure Cosmos DB account for Gremlin with autoscale provisioned throughput
-
-This template creates an Azure Cosmos account for the Gremlin API with a database and graph with autoscale throughput. This template is also available for one-click deployment from the Azure Quickstart Templates gallery.
-
-[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-gremlin-autoscale%2Fazuredeploy.json)
--
-## Next steps
-
-Here are some additional resources:
-
-* [Azure Resource Manager documentation](../../azure-resource-manager/index.yml)
-* [Azure Cosmos DB resource provider schema](/azure/templates/microsoft.documentdb/allversions)
-* [Azure Cosmos DB Quickstart templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.DocumentDB&pageNumber=1&sort=Popular)
-* [Troubleshoot common Azure Resource Manager deployment errors](../../azure-resource-manager/templates/common-deployment-errors.md)
cosmos-db Supply Chain Traceability Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/supply-chain-traceability-solution.md
- Title: Infosys supply chain traceability solution using Azure Cosmos DB Gremlin API
-description: The Infosys solution for traceability in global supply chains uses the Azure Cosmos DB Gremlin API and other Azure services. It provides track-and-trace capability in graph form for finished goods.
--- Previously updated : 10/07/2021----
-# Solution for supply chain traceability using the Azure Cosmos DB Gremlin API
--
-This article provides an overview of the [traceability graph solution implemented by Infosys](https://azuremarketplace.microsoft.com/marketplace/apps/infosysltd.infosys-traceability-knowledge-graph?tab=Overview). This solution uses the Azure Cosmos DB Gremlin API and other Azure capabilities to provide a track-and-trace capability for finished goods in global supply chains.
-
-In this article, you'll learn:
-
-* What traceability is in the context of a supply chain.
-* The architecture of a global traceability solution delivered through Azure capabilities.
-* How the Azure Cosmos DB graph database helps you track intricate relationships between raw materials and finished goods in a global supply chain.
-* How Azure integration platform services such as Azure API Management and Event Hubs help you integrate diverse application ecosystems for supply chains.
-* How you can get help from Infosys to use this solution for your traceability needs.
-
-## Overview
-
-In the food supply chain, traceability is the ability to *track and trace* a product across the supply chain throughout the product's lifecycle. The supply chain includes supply, manufacturing, and distribution. Traceability is vital for food safety, brand protection, and managing regulatory exposure.
-
-In the past, some organizations failed to track and trace products effectively in their supply chains. Results included expensive recalls, fines, and consumer health issues.
-
-Traceability solutions had to address the needs of data harmonization and data ingestion at various velocities and veracities. They also had to follow the inventory cycle. These objectives weren't possible with traditional platforms.
-
-## Solution architecture
-
-Supply chain traceability commonly shares patterns in ingesting pallet movements, handling quality incidents, and tracing/analyzing store data. Infosys developed an end-to-end traceability solution that uses Azure application services, integration services, and database services. The solution provides these capabilities:
-
-* Receive streaming data from factories, warehouses, and distribution centers across geographies.
-* Ingest and process parallel stock-movement events.
-* View a knowledge graph that analyzes relationships between raw materials, production batches, pallets of finished goods, multilevel parent/child relationships of pallets (copack/repack), and movement of goods.
-* Provide access to a user portal with a search capability that includes wildcards and specific keywords.
-* Identify impacts of a quality incident, such as affected raw materials, batches, pallets, and locations of pallets.
-* Capture the history of events across multiple markets, including product recall information.
-
-The Infosys traceability solution supports cloud-native, API-first, and data-driven capabilities. The following diagram illustrates the architecture of this solution:
--
-The architecture uses the following Azure services to help with specialized tasks:
-
-* Azure Cosmos DB enables you to scale performance up or down elastically. By using the Gremlin API, you can create and query complex relationships between raw materials, finished goods, and warehouses.
-* Azure API Management provides APIs for stock movement events to third-party logistics (3PL) providers and warehouse management systems (WMSs).
-* Azure Event Hubs provides the ability to gather large numbers of concurrent events from 3PL providers and WMSs for further processing.
-* Azure Functions (through function apps) processes events and ingests data for Azure Cosmos DB by using the Gremlin API.
-* Azure Search enables complex searches and the filtering of pallet information.
-* Azure Databricks reads the change feed and creates models in Azure Synapse Analytics for self-service reporting for users in Power BI.
-* Azure App Service and its Web Apps feature enable the deployment of a user portal.
-* Azure Storage stores archived data for long-term regulatory needs.
-
-## Graph database and its data design
-
-The production and distribution of goods require maintaining a complex and dynamic set of relationships. An adaptive data model in the form of a traceability graph allows storing these relationships through all the steps in the supply chain. Here's a high-level visualization of the process:
--
-The preceding diagram is a simplified view of a complex process. However, getting stock-movement information from the factories and warehouses in real time makes it possible to create an elaborate graph that connects all these disparate pieces of information:
-
-1. The traceability process starts when the supplier sends raw materials to the factories. The solution creates the initial nodes (vertices) of the graph and relationships (edges).
-
-1. The finished goods are produced from raw materials and packed into pallets.
-
-1. The pallets are moved to factory warehouses or market warehouses according to customer orders. The warehouses might be owned by the company or by 3PL providers.
-
-1. The pallets are shipped to various other warehouses according to customer orders. Depending on customers' needs, child pallets or child-of-child pallets are created to accommodate the ordered quantity.
-
- Sometimes, a whole new item is made by mixing multiple items. For example, in a copack scenario that produces a variety pack, sometimes the same item is repacked to smaller or larger quantities in a different pallet as part of a customer order.
-
- :::image type="content" source="./media/supply-chain-traceability-solution/pallet-relationship.png" alt-text="Pallet relationship in the solution for supply chain traceability." lightbox="./media/supply-chain-traceability-solution/pallet-relationship.png" border="true":::
-
-1. Pallets travel through the supply chain network and eventually reach the customer warehouse. During that process, the pallets can be further broken down or combined with other pallets to produce new pallets to fulfill customer orders.
-
-1. Eventually, the system creates a complex graph that holds relationship information for quality incident management.
-
- :::image type="content" source="./media/supply-chain-traceability-solution/supply-chain-object-relationship.png" alt-text="Diagram that shows the complete architecture for the supply chain object relationship." lightbox="./media/supply-chain-traceability-solution/supply-chain-object-relationship.png" border="true":::
-
- These intricate relationships are vital in a quality incident where the system can track and trace pallets across the supply chain. The graph and its traversals provide the required information for this. For example, if there's an issue with one raw material, the graph can show the affected pallets and the current location.
-
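-The following Gremlin traversals are a minimal, hypothetical sketch of how relationships like these could be created in the graph. The labels, IDs, and the `pk` partition key property are illustrative only and aren't part of the Infosys solution:
-
-```
-// Raw material received from a supplier
-g.addV('rawMaterial').property('id', 'rm-001').property('pk', 'plant-01')
-
-// Production batch that consumes the raw material
-g.addV('batch').property('id', 'batch-100').property('pk', 'plant-01')
-g.V('rm-001').addE('consumedBy').to(g.V('batch-100'))
-
-// Pallet of finished goods produced from the batch
-g.addV('pallet').property('id', 'pallet-500').property('pk', 'plant-01')
-g.V('batch-100').addE('packedInto').to(g.V('pallet-500'))
-```
-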
-## Next steps
-
-* Learn about [Infosys Integrate+ for Azure](https://azuremarketplace.microsoft.com/marketplace/apps/infosysltd.infosys-integrate-for-azure).
-* To visualize graph data, see the [Gremlin API visualization solutions](graph-visualization-partners.md).
-* To model your graph data, see the [Gremlin API modeling solutions](graph-modeling-tools.md).
cosmos-db Tutorial Query Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/tutorial-query-graph.md
- Title: How to query graph data in Azure Cosmos DB?
-description: Learn how to query graph data from Azure Cosmos DB using Gremlin queries
----- Previously updated : 02/16/2022----
-# Tutorial: Query Azure Cosmos DB Gremlin API by using Gremlin
-
-The Azure Cosmos DB [Gremlin API](graph-introduction.md) supports [Gremlin](https://github.com/tinkerpop/gremlin/wiki) queries. This article provides sample documents and queries to get you started. A detailed Gremlin reference is provided in the [Gremlin support](gremlin-support.md) article.
-
-This article covers the following tasks:
-
-> [!div class="checklist"]
-> * Querying data with Gremlin
-
-## Prerequisites
-
-For these queries to work, you must have an Azure Cosmos DB account and have graph data in the container. Don't have any of those? Complete the [5-minute quickstart](create-graph-dotnet.md) to create an account and populate your database. You can run the following queries using the [Gremlin console](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console), or your favorite Gremlin driver.
-
-## Count vertices in the graph
-
-The following snippet shows how to count the number of vertices in the graph:
-
-```
-g.V().count()
-```
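-
-Similarly, a traversal that starts from the edge set counts the edges in the graph:
-
-```
-g.E().count()
-```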
-
-## Filters
-
-You can perform filters using Gremlin's `has` and `hasLabel` steps, and combine them using `and`, `or`, and `not` to build more complex filters. Azure Cosmos DB provides schema-agnostic indexing of all properties within your vertices and edges for fast queries:
-
-```
-g.V().hasLabel('person').has('age', gt(40))
-```
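-
-The following traversal is a small sketch of a compound filter that combines `and` and `not`; the `person` label and property names reuse the earlier examples, and the value `'Thomas'` is illustrative:
-
-```
-g.V().hasLabel('person').and(has('age', gt(40)), not(has('name', 'Thomas')))
-```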
-
-## Projection
-
-You can project certain properties in the query results using the `values` step:
-
-```
-g.V().hasLabel('person').values('name')
-```
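-
-The `values` step also accepts multiple property keys, so you can project several properties at once (a sketch that reuses the property names from the preceding examples):
-
-```
-g.V().hasLabel('person').values('name', 'age')
-```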
-
-## Find related edges and vertices
-
-So far, we've only seen query operators that work in any database. Graphs are fast and efficient for traversal operations when you need to navigate to related edges and vertices. Let's find all friends of Thomas. We do this by using Gremlin's `outE` step to find all the out-edges from Thomas, then traversing to the in-vertices from those edges using Gremlin's `inV` step:
-
-```cs
-g.V('thomas').outE('knows').inV().hasLabel('person')
-```
-
-The next query performs two hops to find all of Thomas' "friends of friends", by calling `outE` and `inV` two times.
-
-```cs
-g.V('thomas').outE('knows').inV().hasLabel('person').outE('knows').inV().hasLabel('person')
-```
-
-You can build more complex queries and implement powerful graph traversal logic using Gremlin, including mixing filter expressions, performing looping using the `loop` step, and implementing conditional navigation using the `choose` step. Learn more about what you can do with [Gremlin support](gremlin-support.md)!
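-
-As a hedged sketch of conditional navigation, the following traversal uses the `choose` step to return a person's `name` when the age filter matches and the `age` value otherwise (labels and property names reuse the earlier examples):
-
-```
-g.V().hasLabel('person').choose(has('age', gt(40)), values('name'), values('age'))
-```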
-
-## Next steps
-
-In this tutorial, you've done the following:
-
-> [!div class="checklist"]
-> * Learned how to query graph data by using Gremlin
-
-You can now proceed to the Concepts section for more information about Cosmos DB.
-
-> [!div class="nextstepaction"]
-> [Global distribution](../distribute-data-globally.md)
-
cosmos-db Use Regional Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/use-regional-endpoints.md
- Title: Regional endpoints for Azure Cosmos DB Graph database
-description: Learn how to connect to nearest Graph database endpoint for your application
----- Previously updated : 09/09/2019---
-# Regional endpoints for Azure Cosmos DB Graph account
-
-Azure Cosmos DB Graph database is [globally distributed](../distribute-data-globally.md), so applications can use multiple read endpoints. Applications that need write access in multiple locations should enable the [multi-region writes](../how-to-multi-master.md) capability.
-
-Reasons to choose more than one region:
-1. **Horizontal read scalability** - as application load increases it may be prudent to route read traffic to different Azure regions.
-2. **Lower latency** - you can reduce network latency overhead of each traversal by routing read and write traffic to the nearest Azure region.
-
-A **data residency** requirement is met by setting an Azure Resource Manager policy on the Azure Cosmos DB account. Customers can limit the regions into which Azure Cosmos DB replicates data.
-
-## Traffic routing
-
-The Azure Cosmos DB Graph database engine runs in multiple regions, each of which contains multiple clusters. Each cluster has hundreds of machines. The Azure Cosmos DB Graph account DNS CNAME *accountname.gremlin.cosmos.azure.com* resolves to the DNS A record of a cluster. A single load-balancer IP address hides the internal cluster topology.
-
-A regional DNS CNAME record is created for every region of an Azure Cosmos DB Graph account. The format of the regional endpoint is *accountname-region.gremlin.cosmos.azure.com*. The region segment of the regional endpoint is obtained by removing all spaces from the [Azure region](https://azure.microsoft.com/global-infrastructure/regions) name. For example, the `"East US 2"` region for the `"contoso"` global database account would have the DNS CNAME *contoso-eastus2.gremlin.cosmos.azure.com*.
-
-The TinkerPop Gremlin client is designed to work with a single server. An application can use the global writable DNS CNAME for read and write traffic. Region-aware applications should use a regional endpoint for read traffic. Use a regional endpoint for write traffic only if the specific region is configured to accept writes.
-
-> [!NOTE]
-> The Azure Cosmos DB Graph engine can accept write operations in a read region by proxying traffic to the write region. We don't recommend sending writes to a read-only region, because doing so increases traversal latency and might be subject to restrictions in the future.
-
-The global database account CNAME always points to a valid write region. During a server-side failover of the write region, Azure Cosmos DB updates the global database account CNAME to point to the new region. If the application can't handle traffic rerouting after failover, it should use the global database account DNS CNAME.
-
-> [!NOTE]
-> Cosmos DB does not route traffic based on geographic proximity of the caller. It is up to each application to select the right region according to unique application needs.
-
-## Portal endpoint discovery
-
-The easiest way to get the list of regions for an Azure Cosmos DB Graph account is the **Overview** blade in the Azure portal. This approach works for applications that don't change regions often, or that have a way to update the list through application configuration.
--
-The example below demonstrates the general principles of accessing a regional Gremlin endpoint. The application should consider the number of regions to send traffic to and the number of corresponding Gremlin clients to instantiate.
-
-```csharp
-// Example value: Central US, West US and UK West. This can be found in the overview blade of your Azure Cosmos DB Gremlin account.
-// Look for Write Locations in the overview blade. You can click to copy and paste.
-string[] gremlinAccountRegions = new string[] {"Central US", "West US" ,"UK West"};
-string gremlinAccountName = "PUT-COSMOSDB-ACCOUNT-NAME-HERE";
-string gremlinAccountKey = "PUT-ACCOUNT-KEY-HERE";
-string databaseName = "PUT-DATABASE-NAME-HERE";
-string graphName = "PUT-GRAPH-NAME-HERE";
-
-foreach (string gremlinAccountRegion in gremlinAccountRegions)
-{
- // Convert preferred read location to the form "[accountname]-[region].gremlin.cosmos.azure.com".
- string regionalGremlinEndPoint = $"{gremlinAccountName}-{gremlinAccountRegion.ToLowerInvariant().Replace(" ", string.Empty)}.gremlin.cosmos.azure.com";
-
- GremlinServer regionalGremlinServer = new GremlinServer(
- hostname: regionalGremlinEndPoint,
- port: 443,
- enableSsl: true,
- username: "/dbs/" + databaseName + "/colls/" + graphName,
- password: gremlinAccountKey);
-
- GremlinClient regionalGremlinClient = new GremlinClient(
- gremlinServer: regionalGremlinServer,
- graphSONReader: new GraphSON2Reader(),
- graphSONWriter: new GraphSON2Writer(),
- mimeType: GremlinClient.GraphSON2MimeType);
-}
-```
-
-## SDK endpoint discovery
-
-Application can use [Azure Cosmos DB SDK](../sql-api-sdk-dotnet.md) to discover read and write locations for Graph account. These locations can change at any time through manual reconfiguration on the server side or service-managed failover.
-
-TinkerPop Gremlin SDK doesn't have an API to discover Cosmos DB Graph database account regions. Applications that need runtime endpoint discovery need to host 2 separate SDKs in the process space.
-
-```csharp
-// Depending on the version and the language of the SDK (.NET vs Java vs Python)
-// the API to get readLocations and writeLocations may vary.
-IDocumentClient documentClient = new DocumentClient(
- new Uri(cosmosUrl),
- cosmosPrimaryKey,
- connectionPolicy,
- consistencyLevel);
-
-DatabaseAccount databaseAccount = await documentClient.GetDatabaseAccountAsync();
-
-IEnumerable<DatabaseAccountLocation> writeLocations = databaseAccount.WritableLocations;
-IEnumerable<DatabaseAccountLocation> readLocations = databaseAccount.ReadableLocations;
-
-// Pick write or read locations to construct regional endpoints for.
-foreach (DatabaseAccountLocation location in readLocations)
-{
- // Convert the read location endpoint, for example "https://accountname-region.documents.azure.com:443/",
- // to the form "accountname-region.gremlin.cosmos.azure.com".
- string regionalGremlinEndPoint = location.Endpoint
- .Replace("https://", string.Empty)
- .Replace("documents.azure.com:443/", "gremlin.cosmos.azure.com");
-
- // Use code from the previous sample to instantiate Gremlin client.
-}
-```
-
-## Next steps
-* [How to manage database accounts control](../how-to-manage-database-account.md) in Azure Cosmos DB
-* [High availability](../high-availability.md) in Azure Cosmos DB
-* [Global distribution with Azure Cosmos DB - under the hood](../global-dist-under-the-hood.md)
-* [Azure CLI Samples](cli-samples.md) for Azure Cosmos DB
cosmos-db Access System Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/access-system-properties.md
+
+ Title: Access system document properties via Azure Cosmos DB Graph
+description: Learn how to read and write Azure Cosmos DB system document properties via the API for Gremlin
++++ Last updated : 09/16/2021++++
+# System document properties
+
+Azure Cosmos DB has [system properties](/rest/api/cosmos-db/databases) such as ```_ts```, ```_self```, ```_attachments```, ```_rid```, and ```_etag``` on every document. Additionally, the Gremlin engine adds ```inVPartition``` and ```outVPartition``` properties on edges. By default, these properties aren't available for traversal. However, it's possible to include specific properties, or all of them, in a Gremlin traversal.
+
+```console
+g.withStrategies(ProjectionStrategy.build().IncludeSystemProperties('_ts').create())
+```
+
+## E-Tag
+
+This property is used for optimistic concurrency control. If an application needs to break an operation into a few separate traversals, it can use the eTag property to avoid data loss from concurrent writes.
+
+```console
+g.withStrategies(ProjectionStrategy.build().IncludeSystemProperties('_etag').create()).V('1').has('_etag', '"00000100-0000-0800-0000-5d03edac0000"').property('test', '1')
+```
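+
+For example, an application might first read the current value of `_etag` in one traversal and then pass that value into a conditional update like the one shown above. The following sketch assumes a vertex with ID `1`:
+
+```console
+g.withStrategies(ProjectionStrategy.build().IncludeSystemProperties('_etag').create()).V('1').values('_etag')
+```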
+
+## Time-to-live (TTL)
+
+If the collection has document expiration enabled and documents have the `ttl` property set on them, then this property will be available in a Gremlin traversal as a regular vertex or edge property. `ProjectionStrategy` isn't necessary to enable time-to-live property exposure.
+
+* Use the following command to set time-to-live on a new vertex:
+
+ ```console
+ g.addV(<ID>).property('ttl', <expirationTime>)
+ ```
+
+ For example, a vertex created with the following traversal is automatically deleted after *123 seconds*:
+
+ ```console
+ g.addV('vertex-one').property('ttl', 123)
+ ```
+
+* Use the following command to set time-to-live on an existing vertex:
+
+ ```console
+ g.V().hasId(<ID>).has('pk', <pk>).property('ttl', <expirationTime>)
+ ```
+
+* Applying the time-to-live property to vertices doesn't automatically apply it to edges, because edges are independent records in the database store. Use the following command to set time-to-live on a vertex and all its incoming and outgoing edges:
+
+ ```console
+ g.V().hasId(<ID>).has('pk', <pk>).as('v').bothE().hasNot('ttl').property('ttl', <expirationTime>)
+ ```
+
+If you set the TTL on the container to -1, or set it to **On (no default)** from the Azure portal, the TTL is infinite for any item unless the item has a TTL value explicitly set.
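+
+Because `ttl` is exposed as a regular property, you can read it back with an ordinary traversal. For example, the following sketch checks the value on the `vertex-one` vertex created earlier:
+
+```console
+g.V('vertex-one').values('ttl')
+```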
+
+## Next steps
+* [Azure Cosmos DB Optimistic Concurrency](../faq.yml#how-does-the-api-for-nosql-provide-concurrency-)
+* [Time to Live (TTL)](../time-to-live.md) in Azure Cosmos DB
cosmos-db Bulk Executor Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/bulk-executor-dotnet.md
+
+ Title: Ingest data in bulk in Azure Cosmos DB for Gremlin by using a bulk executor library
+description: Learn how to use a bulk executor library to massively import graph data into an Azure Cosmos DB for Gremlin container.
+++ Last updated : 05/10/2022++
+ms.devlang: csharp, java
+++
+# Ingest data in bulk in the Azure Cosmos DB for Gremlin by using a bulk executor library
++
+Graph databases often need to ingest data in bulk to refresh an entire graph or update a portion of it. Azure Cosmos DB, a distributed database and the backbone of the Azure Cosmos DB for Gremlin, is meant to perform best when the loads are well distributed. Bulk executor libraries in Azure Cosmos DB are designed to exploit this unique capability of Azure Cosmos DB and provide optimal performance. For more information, see [Introducing bulk support in the .NET SDK](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk).
+
+In this tutorial, you learn how to use the Azure Cosmos DB bulk executor library to import and update *graph* objects into an Azure Cosmos DB for Gremlin container. During this process, you use the library to create *vertex* and *edge* objects programmatically and then insert multiple objects per network request.
+
+Instead of sending Gremlin queries to a database, where the commands are evaluated and then executed one at a time, you use the bulk executor library to create and validate the objects locally. After the library initializes the graph objects, it allows you to send them to the database service sequentially.
+
+By using this method, you can increase data ingestion speeds as much as a hundredfold, which makes it an ideal way to perform initial data migrations or periodic data movement operations.
+
+The bulk executor library now comes in the following varieties.
+
+## .NET
+### Prerequisites
+
+Before you begin, make sure that you have the following:
+
+* Visual Studio 2019 with the Azure development workload. You can get started with the [Visual Studio 2019 Community Edition](https://visualstudio.microsoft.com/downloads/) for free.
+
+* An Azure subscription. If you don't already have a subscription, you can [create a free Azure account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=cosmos-db).
+
+ Alternatively, you can [create a free Azure Cosmos DB account](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.
+
+* An Azure Cosmos DB for Gremlin database with an *unlimited collection*. To get started, go to [Azure Cosmos DB for Gremlin in .NET](./quickstart-dotnet.md).
+
+* Git. To begin, go to the [git downloads](https://git-scm.com/downloads) page.
+
+#### Clone
+
+To use this sample, run the following command:
+
+```bash
+git clone https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor.git
+```
+
+To get the sample, go to `.\azure-cosmos-graph-bulk-executor\dotnet\src\`.
+
+#### Sample
+
+```csharp
+
+IGraphBulkExecutor graphBulkExecutor = new GraphBulkExecutor("MyConnectionString", "myDatabase", "myContainer");
+
+List<IGremlinElement> gremlinElements = new List<IGremlinElement>();
+gremlinElements.AddRange(Program.GenerateVertices(Program.documentsToInsert));
+gremlinElements.AddRange(Program.GenerateEdges(Program.documentsToInsert));
+BulkOperationResponse bulkOperationResponse = await graphBulkExecutor.BulkImportAsync(
+ gremlinElements: gremlinElements,
+ enableUpsert: true);
+```
+
+### Execute
+
+Modify the parameters, as described in the following table:
+
+| Parameter|Description |
+|||
+|`ConnectionString`| Your .NET SDK endpoint, which you'll find in the **Overview** section of your Azure Cosmos DB for Gremlin database account. It's formatted as `https://your-graph-database-account.documents.azure.com:443/`.|
+|`DatabaseName`, `ContainerName`|The names of the target database and container.|
+|`DocumentsToInsert`| The number of documents to be generated (relevant only to synthetic data).|
+|`PartitionKey` | Ensures that a partition key is specified with each document during data ingestion.|
+|`NumberOfRUs` | Is relevant only if a container doesn't already exist and it needs to be created during execution.|
+
+[Download the full sample application in .NET](https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor/tree/main/dotnet).
+
+## Java
+
+### Sample usage
+
+The following sample application illustrates how to use the GraphBulkExecutor package. The samples use either the *domain* object annotations or the *POJO* (plain old Java object) objects directly. We recommend that you try both approaches to determine which one better meets your implementation and performance demands.
+
+### Clone
+
+To use the sample, run the following command:
+
+```bash
+git clone https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor.git
+```
+To get the sample, go to `.\azure-cosmos-graph-bulk-executor\java\`.
+
+### Prerequisites
+
+To run this sample, you need to have the following software:
+
+* OpenJDK 11
+* Maven
+* An Azure Cosmos DB account that's configured to use the Gremlin API
+
+### Sample
+
+```java
+private static void executeWithPOJO(Stream<GremlinVertex> vertices,
+ Stream<GremlinEdge> edges,
+ boolean createDocs) {
+ results.transitionState("Configure Database");
+ UploadWithBulkLoader loader = new UploadWithBulkLoader();
+ results.transitionState("Write Documents");
+ loader.uploadDocuments(vertices, edges, createDocs);
+ }
+```
+
+### Configuration
+
+To run the sample, refer to the following configuration and modify it as needed.
+
+The */resources/application.properties* file defines the data that's required to configure Azure Cosmos DB. The required values are described in the following table:
+
+| Property | Description |
+| | |
+| `sample.sql.host` | The value that's provided by Azure Cosmos DB. Ensure that you're using the .NET SDK URI, which you'll find in the **Overview** section of the Azure Cosmos DB account.|
+| `sample.sql.key` | You can get the primary or secondary key from the **Keys** section of the Azure Cosmos DB account. |
+| `sample.sql.database.name` | The name of the database within the Azure Cosmos DB account to run the sample against. If the database isn't found, the sample code creates it. |
+| `sample.sql.container.name` | The name of the container within the database to run the sample against. If the container isn't found, the sample code creates it. |
+| `sample.sql.partition.path` | If you need to create the container, use this value to define the `partitionKey` path. |
+| `sample.sql.allow.throughput` | The container will be updated to use the throughput value that's defined here. If you're exploring various throughput options to meet your performance demands, be sure to reset the throughput on the container when you're done with your exploration. There are costs associated with leaving the container provisioned with a higher throughput. |
+
+### Execute
+
+After you've modified the configuration according to your environment, run the following command:
+
+```bash
+mvn clean package
+```
+
+For added safety, you can also run the integration tests by changing the `skipIntegrationTests` value in the *pom.xml* file to `false`.
+
+After you've run the unit tests successfully, you can run the sample code:
+
+```bash
+java -jar target/azure-cosmos-graph-bulk-executor-1.0-jar-with-dependencies.jar -v 1000 -e 10 -d
+```
+
+Running the preceding command executes the sample with a small batch (1,000 vertices and roughly 5,000 edges). Use the command-line arguments in the following sections to tweak the volumes that are run and which sample version to run.
+
+### Command-line arguments
+
+Several command-line arguments are available while you're running this sample, as described in the following table:
+
+| Argument&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description |
+| | |
+| `--vertexCount` (`-v`) | Tells the application how many person vertices to generate. |
+| `--edgeMax` (`-e`) | Tells the application the maximum number of edges to generate for each vertex. The generator randomly selects a number from 1 to the value you provide. |
+| `--domainSample` (`-d`) | Tells the application to run the sample by using the person and relationship domain structures instead of the `GraphBulkExecutors`, `GremlinVertex`, and `GremlinEdge` POJOs. |
+| `--createDocuments` (`-c`) | Tells the application to use `create` operations. If the argument isn't present, the application defaults to using `upsert` operations. |
+
+### Detailed sample information
+
+#### Person vertex
+
+The person class is a simple domain object that's been decorated with several annotations to help a transformation into the `GremlinVertex` class, as described in the following table:
+
+| Class annotation | Description |
+| | |
+| `GremlinVertex` | Uses the optional `label` parameter to define all vertices that you create by using this class. |
+| `GremlinId` | Used to define which field will be used as the `ID` value. The field on the person class is named ID, but that specific name isn't required. |
+| `GremlinProperty` | Used on the `email` field to change the name of the property when it's stored in the database.
+| `GremlinPartitionKey` | Used to define which field on the class contains the partition key. The field name you provide should match the value that's defined by the partition path on the container. |
+| `GremlinIgnore` | Used to exclude the `isSpecial` field from the property that's being written to the database. |
+
+#### The RelationshipEdge class
+
+The `RelationshipEdge` class is a versatile domain object. By using the field level label annotation, you can create a dynamic collection of edge types, as shown in the following table:
+
+| Class annotation | Description |
+| | |
+| `GremlinEdge` | The `GremlinEdge` decoration on the class defines the name of the field for the specified partition key. When you create an edge document, the assigned value comes from the source vertex information. |
+| `GremlinEdgeVertex` | Two instances of `GremlinEdgeVertex` are defined, one for each side of the edge (source and destination). Our sample has the field's data type as `GremlinEdgeVertexInfo`. The information provided by the `GremlinEdgeVertex` class is required for the edge to be created correctly in the database. Another option would be to have the data type of the vertices be a class that has been decorated with the `GremlinVertex` annotations. |
+| `GremlinLabel` | The sample edge uses a field to define the `label` value. It allows various labels to be defined, because it uses the same base domain class. |
+
+### Output explained
+
+The console finishes its run with a JSON string that describes the run times of the sample. The JSON string contains the following information:
+
+| JSON string | Description |
+| | |
+| startTime | The `System.nanoTime()` when the process started. |
+| endTime | The `System.nanoTime()` when the process finished. |
+| durationInNanoSeconds | The difference between the `endTime` and `startTime` values. |
+| durationInMinutes | The `durationInNanoSeconds` value, converted into minutes. The `durationInMinutes` value is represented as a float number, not a time value. For example, a value of 2.5 represents 2 minutes and 30 seconds. |
+| vertexCount | The volume of generated vertices, which should match the value that's passed into the command-line execution. |
+| edgeCount | The volume of generated edges, which isn't static and is built with an element of randomness. |
+| exception | Populated only if an exception is thrown when you attempt to make the run. |
+
+#### States array
+
+The states array gives insight into how long each step within the execution takes. The steps are described in the following table:
+
+| Execution&nbsp;step | Description |
+| | |
+| Build&nbsp;sample&nbsp;vertices | The amount of time it takes to fabricate the requested volume of person objects. |
+| Build sample edges | The amount of time it takes to fabricate the relationship objects. |
+| Configure database | The amount of time it takes to configure the database, based on the values that are provided in `application.properties`. |
+| Write documents | The amount of time it takes to write the documents to the database. |
+
+Each state contains the following values:
+
+| State value | Description |
+| | |
+| `stateName` | The name of the state that's being reported. |
+| `startTime` | The `System.nanoTime()` value when the state started. |
+| `endTime` | The `System.nanoTime()` value when the state finished. |
+| `durationInNanoSeconds` | The difference between the `endTime` and `startTime` values. |
+| `durationInMinutes` | The `durationInNanoSeconds` value, converted into minutes. The `durationInMinutes` value is represented as a float number, not a time value. For example, a value of 2.5 represents 2 minutes and 30 seconds. |
+
+## Next steps
+
+* For more information about the classes and methods that are defined in this namespace, review the [BulkExecutor Java open source documentation](https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor/tree/main/java/src/main/java/com/azure/graph/bulk/impl).
+* See [Bulk import data to the Azure Cosmos DB SQL API account by using the .NET SDK](../nosql/tutorial-dotnet-bulk-import.md) article. This bulk mode documentation is part of the .NET V3 SDK.
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/cli-samples.md
+
+ Title: Azure CLI Samples for Azure Cosmos DB for Gremlin
+description: Azure CLI Samples for Azure Cosmos DB for Gremlin
++++ Last updated : 08/19/2022+++++
+# Azure CLI samples for Azure Cosmos DB for Gremlin
++
+The following tables include links to sample Azure CLI scripts for the Azure Cosmos DB for Gremlin and to sample Azure CLI scripts that apply to all Cosmos DB APIs. Common samples are the same across all APIs.
+
+These samples require Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+
+## Gremlin API Samples
+
+|Task | Description |
+|||
+| [Create an Azure Cosmos account, database and graph](../scripts/cli/gremlin/create.md)| Creates an Azure Cosmos DB account, database, and graph for Gremlin API. |
+| [Create a serverless Azure Cosmos account for Gremlin API, database and graph](../scripts/cli/gremlin/serverless.md)| Creates a serverless Azure Cosmos DB account, database, and graph for Gremlin API. |
+| [Create an Azure Cosmos account, database and graph with autoscale](../scripts/cli/gremlin/autoscale.md)| Creates an Azure Cosmos DB account, database, and graph with autoscale for Gremlin API. |
+| [Perform throughput operations](../scripts/cli/gremlin/throughput.md) | Read, update and migrate between autoscale and standard throughput on a database and graph.|
+| [Lock resources from deletion](../scripts/cli/gremlin/lock.md)| Prevent resources from being deleted with resource locks.|
+|||
+
+## Common API Samples
+
+These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core) API account, but these operations are identical across all database APIs in Cosmos DB.
+
+|Task | Description |
+|||
+| [Add or fail over regions](../scripts/cli/common/regions.md) | Add a region, change failover priority, trigger a manual failover.|
+| [Perform account key operations](../scripts/cli/common/keys.md) | List account keys, read-only keys, regenerate keys and list connection strings.|
+| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.|
+| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.|
+| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.|
+|||
+
+## Next steps
+
+Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb).
+
+For Azure CLI samples for other APIs see:
+
+- [CLI Samples for Cassandra](../cassandr)
+- [CLI Samples for MongoDB API](../mongodb/cli-samples.md)
+- [CLI Samples for SQL](../sql/cli-samples.md)
+- [CLI Samples for Table](../table/cli-samples.md)
cosmos-db Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/diagnostic-queries.md
+
+ Title: Troubleshoot issues with advanced diagnostics queries for API for Gremlin
+
+description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for the API for Gremlin.
+++++ Last updated : 06/12/2021+++
+# Troubleshoot issues with advanced diagnostics queries for the API for Gremlin
++
+> [!div class="op_single_selector"]
+> * [API for NoSQL](../advanced-queries.md)
+> * [API for MongoDB](../mongodb/diagnostic-queries.md)
+> * [API for Cassandra](../cassandr)
+> * [API for Gremlin](diagnostic-queries.md)
+>
+
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
+
+For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+
+For [resource-specific tables](../monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
+
+- Makes it much easier to work with the data.
+- Provides better discoverability of the schemas.
+- Improves performance across both ingestion latency and query times.
+
+## Common queries
+Common queries are shown in the resource-specific and Azure Diagnostics tables.
+
+### Top N(10) Request Unit (RU) consuming requests or queries in a specific time frame
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ let topRequestsByRUcharge = CDBDataPlaneRequests
+ | where TimeGenerated > ago(24h)
+ | project RequestCharge , TimeGenerated, ActivityId;
+ CDBGremlinRequests
+ | project PIICommandText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner topRequestsByRUcharge on ActivityId
+ | project DatabaseName , CollectionName , PIICommandText , RequestCharge, TimeGenerated
+ | order by RequestCharge desc
+ | take 10
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ let topRequestsByRUcharge = AzureDiagnostics
+ | where Category == "DataPlaneRequests" and TimeGenerated > ago(1h)
+ | project requestCharge_s , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "GremlinRequests"
+ | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner topRequestsByRUcharge on activityId_g
+ | project databasename_s , collectionname_s , piiCommandText_s , requestCharge_s, TimeGenerated
+ | order by requestCharge_s desc
+ | take 10
+ ```
++
+### Requests throttled (statusCode = 429) in a specific time window
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ let throttledRequests = CDBDataPlaneRequests
+ | where StatusCode == "429"
+ | project OperationName , TimeGenerated, ActivityId;
+ CDBGremlinRequests
+ | project PIICommandText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner throttledRequests on ActivityId
+ | project DatabaseName , CollectionName , PIICommandText , OperationName, TimeGenerated
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ let throttledRequests = AzureDiagnostics
+ | where Category == "DataPlaneRequests"
+ | where statusCode_s == "429"
+ | project OperationName , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "GremlinRequests"
+ | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner throttledRequests on activityId_g
+ | project databasename_s , collectionname_s , piiCommandText_s , OperationName, TimeGenerated
+ ```
++
+### Queries with large response lengths (payload size of the server response)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ let operationsbyUserAgent = CDBDataPlaneRequests
+ | project OperationName, DurationMs, RequestCharge, ResponseLength, ActivityId;
+ CDBGremlinRequests
+ //specify collection and database
+ //| where DatabaseName == "DB NAME" and CollectionName == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on ActivityId
+ | summarize max(ResponseLength) by PIICommandText
+ | order by max_ResponseLength desc
+ ```
+
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ let operationsbyUserAgent = AzureDiagnostics
+ | where Category=="DataPlaneRequests"
+ | project OperationName, duration_s, requestCharge_s, responseLength_s, activityId_g;
+ AzureDiagnostics
+ | where Category == "GremlinRequests"
+ //| where databasename_s == "DB NAME" and collectionname_s == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on activityId_g
+ | summarize max(responseLength_s1) by piiCommandText_s
+ | order by max_responseLength_s1 desc
+ ```
++
+### RU consumption by physical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DB NAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by toint(PartitionKeyRangeId)
+ | render columnchart
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+ //| where databasename_s == "DB NAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
+ | render columnchart
+ ```
++
+### RU consumption by logical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DB NAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
+ | render columnchart
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+ //| where databasename_s == "DB NAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
+ | render columnchart
+ ```
++
+## Next steps
+* For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](../monitor-resource-logs.md).
+* For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
cosmos-db Execution Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/execution-profile.md
+
+ Title: Use the execution profile to evaluate queries in Azure Cosmos DB for Gremlin
+description: Learn how to troubleshoot and improve your Gremlin queries using the execution profile step.
+++++ Last updated : 03/27/2019++++
+# How to use the execution profile step to evaluate your Gremlin queries
+
+This article provides an overview of how to use the execution profile step for Azure Cosmos DB for Gremlin graph databases. This step provides relevant information for troubleshooting and query optimizations, and it is compatible with any Gremlin query that can be executed against a Cosmos DB Gremlin API account.
+
+To use this step, simply append the `executionProfile()` function call at the end of your Gremlin query. **Your Gremlin query will be executed** and the result of the operation will return a JSON response object with the query execution profile.
+
+For example:
+
+```java
+ // Basic traversal
+ g.V('mary').out()
+
+ // Basic traversal with execution profile call
+ g.V('mary').out().executionProfile()
+```
+
+After calling the `executionProfile()` step, the response will be a JSON object that includes the executed Gremlin step, the total time it took, and an array of the Cosmos DB runtime operators that the statement resulted in.
+
+> [!NOTE]
+> This implementation for Execution Profile is not defined in the Apache Tinkerpop specification. It is specific to Azure Cosmos DB for Gremlin's implementation.
++
+## Response Example
+
+The following is an annotated example of the output that will be returned:
+
+> [!NOTE]
+> This example is annotated with comments that explain the general structure of the response. An actual executionProfile response won't contain any comments.
+
+```json
+[
+ {
+ // The Gremlin statement that was executed.
+ "gremlin": "g.V('mary').out().executionProfile()",
+
+ // Amount of time in milliseconds that the entire operation took.
+ "totalTime": 28,
+
+ // An array containing metrics for each of the steps that were executed.
+ // Each Gremlin step will translate to one or more of these steps.
+ // This list is sorted in order of execution.
+ "metrics": [
+ {
+ // This operation obtains a set of Vertex objects.
+ // The metrics include: time, percentTime of total execution time, resultCount,
+ // fanoutFactor, count, size (in bytes) and time.
+ "name": "GetVertices",
+ "time": 24,
+ "annotations": {
+ "percentTime": 85.71
+ },
+ "counts": {
+ "resultCount": 2
+ },
+ "storeOps": [
+ {
+ "fanoutFactor": 1,
+ "count": 2,
+ "size": 696,
+ "time": 0.4
+ }
+ ]
+ },
+ {
+ // This operation obtains a set of Edge objects.
+ // Depending on the query, these might be directly adjacent to a set of vertices,
+ // or separate, in the case of an E() query.
+ //
+ // The metrics include: time, percentTime of total execution time, resultCount,
+ // fanoutFactor, count, size (in bytes) and time.
+ "name": "GetEdges",
+ "time": 4,
+ "annotations": {
+ "percentTime": 14.29
+ },
+ "counts": {
+ "resultCount": 1
+ },
+ "storeOps": [
+ {
+ "fanoutFactor": 1,
+ "count": 1,
+ "size": 419,
+ "time": 0.67
+ }
+ ]
+ },
+ {
+ // This operation obtains the vertices that a set of edges point at.
+ // The metrics include: time, percentTime of total execution time and resultCount.
+ "name": "GetNeighborVertices",
+ "time": 0,
+ "annotations": {
+ "percentTime": 0
+ },
+ "counts": {
+ "resultCount": 1
+ }
+ },
+ {
+ // This operation represents the serialization and preparation for a result from
+ // the preceding graph operations. The metrics include: time, percentTime of total
+ // execution time and resultCount.
+ "name": "ProjectOperator",
+ "time": 0,
+ "annotations": {
+ "percentTime": 0
+ },
+ "counts": {
+ "resultCount": 1
+ }
+ }
+ ]
+ }
+]
+```
+
+> [!NOTE]
+> The executionProfile step will execute the Gremlin query. This includes the `addV` or `addE`steps, which will result in the creation and will commit the changes specified in the query. As a result, the Request Units generated by the Gremlin query will also be charged.
+
+## Execution profile response objects
+
+The response of an executionProfile() function will yield a hierarchy of JSON objects with the following structure:
+ - **Gremlin operation object**: Represents the entire Gremlin operation that was executed. Contains the following properties.
+ - `gremlin`: The explicit Gremlin statement that was executed.
+ - `totalTime`: The total time, in milliseconds, that the operation took to execute.
+ - `metrics`: An array that contains each of the Cosmos DB runtime operators that were executed to fulfill the query. This list is sorted in order of execution.
+
+ - **Cosmos DB runtime operators**: Represents each of the components of the entire Gremlin operation. This list is sorted in order of execution. Each object contains the following properties:
+ - `name`: Name of the operator. This is the type of step that was evaluated and executed. Read more in the table below.
+ - `time`: Amount of time, in milliseconds, that a given operator took.
+ - `annotations`: Contains additional information, specific to the operator that was executed.
+ - `annotations.percentTime`: Percentage of the total time that it took to execute the specific operator.
+ - `counts`: Number of objects that were returned from the storage layer by this operator. This value is contained in the `counts.resultCount` scalar.
+ - `storeOps`: Represents a storage operation that can span one or multiple partitions.
+ - `storeOps.fanoutFactor`: Represents the number of partitions that this specific storage operation accessed.
+ - `storeOps.count`: Represents the number of results that this storage operation returned.
+ - `storeOps.size`: Represents the size in bytes of the result of a given storage operation.
+
+Cosmos DB Gremlin Runtime Operator|Description
+|
+`GetVertices`| This step obtains a predicated set of objects from the persistence layer.
+`GetEdges`| This step obtains the edges that are adjacent to a set of vertices. This step can result in one or many storage operations.
+`GetNeighborVertices`| This step obtains the vertices that are connected to a set of edges. The edges contain the partition keys and IDs of both their source and target vertices.
+`Coalesce`| This step accounts for the evaluation of two operations whenever the `coalesce()` Gremlin step is executed.
+`CartesianProductOperator`| This step computes a cartesian product between two datasets. Usually executed whenever the predicates `to()` or `from()` are used.
+`ConstantSourceOperator`| This step computes an expression to produce a constant value as a result.
+`ProjectOperator`| This step prepares and serializes a response using the result of preceding operations.
+`ProjectAggregation`| This step prepares and serializes a response for an aggregate operation.
+
+> [!NOTE]
+> This list will continue to be updated as new operators are added.
+
+## Examples on how to analyze an execution profile response
+
+The following are examples of common optimizations that can be spotted using the Execution Profile response:
+ - Blind fan-out query.
+ - Unfiltered query.
+
+### Blind fan-out query patterns
+
+Assume the following execution profile response from a **partitioned graph**:
+
+```json
+[
+ {
+ "gremlin": "g.V('tt0093640').executionProfile()",
+ "totalTime": 46,
+ "metrics": [
+ {
+ "name": "GetVertices",
+ "time": 46,
+ "annotations": {
+ "percentTime": 100
+ },
+ "counts": {
+ "resultCount": 1
+ },
+ "storeOps": [
+ {
+ "fanoutFactor": 5,
+ "count": 1,
+ "size": 589,
+ "time": 75.61
+ }
+ ]
+ },
+ {
+ "name": "ProjectOperator",
+ "time": 0,
+ "annotations": {
+ "percentTime": 0
+ },
+ "counts": {
+ "resultCount": 1
+ }
+ }
+ ]
+ }
+]
+```
+
+The following conclusions can be made from it:
+- The query is a single ID lookup, since the Gremlin statement follows the pattern `g.V('id')`.
+- Judging from the `time` metric, the latency of this query seems to be high since it's [more than 10ms for a single point-read operation](../introduction.md#guaranteed-speed-at-any-scale).
+- If we look into the `storeOps` object, we can see that the `fanoutFactor` is `5`, which means that [5 partitions](../partitioning-overview.md) were accessed by this operation.
+
+As a conclusion of this analysis, we can determine that the first query is accessing more partitions than necessary. This can be addressed by specifying the partition key in the query as a predicate, which leads to lower latency and lower cost per query. Learn more about [graph partitioning](partitioning.md). A more optimal query would be `g.V('tt0093640').has('partitionKey', 't1001')`.
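+
+To verify the improvement, you could run the profile again on the more selective query (the partition key name and value here are the illustrative ones from the query above) and confirm that the `fanoutFactor` reported under `storeOps` drops to `1`:
+
+```java
+g.V('tt0093640').has('partitionKey', 't1001').executionProfile()
+```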
+
+### Unfiltered query patterns
+
+Compare the following two execution profile responses. For simplicity, these examples use a single partitioned graph.
+
+This first query retrieves all vertices with the label `tweet` and then obtains their neighboring vertices:
+
+```json
+[
+ {
+ "gremlin": "g.V().hasLabel('tweet').out().executionProfile()",
+ "totalTime": 42,
+ "metrics": [
+ {
+ "name": "GetVertices",
+ "time": 31,
+ "annotations": {
+ "percentTime": 73.81
+ },
+ "counts": {
+ "resultCount": 30
+ },
+ "storeOps": [
+ {
+ "fanoutFactor": 1,
+ "count": 13,
+ "size": 6819,
+ "time": 1.02
+ }
+ ]
+ },
+ {
+ "name": "GetEdges",
+ "time": 6,
+ "annotations": {
+ "percentTime": 14.29
+ },
+ "counts": {
+ "resultCount": 18
+ },
+ "storeOps": [
+ {
+ "fanoutFactor": 1,
+ "count": 20,
+ "size": 7950,
+ "time": 1.98
+ }
+ ]
+ },
+ {
+ "name": "GetNeighborVertices",
+ "time": 5,
+ "annotations": {
+ "percentTime": 11.9
+ },
+ "counts": {
+ "resultCount": 20
+ },
+ "storeOps": [
+ {
+ "fanoutFactor": 1,
+ "count": 4,
+ "size": 1070,
+ "time": 1.19
+ }
+ ]
+ },
+ {
+ "name": "ProjectOperator",
+ "time": 0,
+ "annotations": {
+ "percentTime": 0
+ },
+ "counts": {
+ "resultCount": 20
+ }
+ }
+ ]
+ }
+]
+```
+
+Notice the profile of the same query, but now with an additional filter, `has('lang', 'en')`, before exploring the adjacent vertices:
+
+```json
+[
+ {
+ "gremlin": "g.V().hasLabel('tweet').has('lang', 'en').out().executionProfile()",
+ "totalTime": 14,
+ "metrics": [
+ {
+ "name": "GetVertices",
+ "time": 14,
+ "annotations": {
+ "percentTime": 58.33
+ },
+ "counts": {
+ "resultCount": 11
+ },
+ "storeOps": [
+ {
+ "fanoutFactor": 1,
+ "count": 11,
+ "size": 4807,
+ "time": 1.27
+ }
+ ]
+ },
+ {
+ "name": "GetEdges",
+ "time": 5,
+ "annotations": {
+ "percentTime": 20.83
+ },
+ "counts": {
+ "resultCount": 18
+ },
+ "storeOps": [
+ {
+ "fanoutFactor": 1,
+ "count": 18,
+ "size": 7159,
+ "time": 1.7
+ }
+ ]
+ },
+ {
+ "name": "GetNeighborVertices",
+ "time": 5,
+ "annotations": {
+ "percentTime": 20.83
+ },
+ "counts": {
+ "resultCount": 18
+ },
+ "storeOps": [
+ {
+ "fanoutFactor": 1,
+ "count": 4,
+ "size": 1070,
+ "time": 1.01
+ }
+ ]
+ },
+ {
+ "name": "ProjectOperator",
+ "time": 0,
+ "annotations": {
+ "percentTime": 0
+ },
+ "counts": {
+ "resultCount": 18
+ }
+ }
+ ]
+ }
+]
+```
+
+These two queries reached the same result; however, the first one requires more Request Units because it needed to iterate over a larger initial dataset before querying the adjacent items. We can see indicators of this behavior when comparing the following parameters from both responses:
+- The `metrics[0].time` value is higher in the first response, which indicates that this single step took longer to resolve.
+- The `metrics[0].counts.resultCount` value is higher as well in the first response, which indicates that the initial working dataset was larger.
+
+## Next steps
+* Learn about the [supported Gremlin features](support.md) in Azure Cosmos DB.
+* Learn more about the [Gremlin API in Azure Cosmos DB](introduction.md).
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/find-request-unit-charge.md
+
+ Title: Find request unit (RU) charge for Gremlin API queries in Azure Cosmos DB
+description: Learn how to find the request unit (RU) charge for Gremlin queries executed against an Azure Cosmos container. You can use the Azure portal or the .NET and Java drivers to find the RU charge.
+++ Last updated : 10/14/2020++
+ms.devlang: csharp, java
++
+# Find the request unit charge for operations executed in Azure Cosmos DB for Gremlin
+
+Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
+
+The cost of all database operations is normalized by Azure Cosmos DB and is expressed in Request Units (or RUs, for short). The request charge is the number of request units consumed by your database operations. You can think of RUs as a performance currency abstracting the system resources, such as CPU, IOPS, and memory, that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, and whether the operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](../request-units.md) article.
+
+This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB for Gremlin. If you're using a different API, see the [API for MongoDB](../mongodb/find-request-unit-charge.md) and [Cassandra API](../cassandr) articles to find the RU charge.
+
+Headers returned by the Gremlin API are mapped to custom status attributes, which currently are surfaced by the Gremlin .NET and Java SDK. The request charge is available under the `x-ms-request-charge` key. When you use the Gremlin API, you have multiple options for finding the RU consumption for an operation against an Azure Cosmos container.
+
+## Use the Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. [Create a new Azure Cosmos account](quickstart-console.md#create-a-database-account) and feed it with data, or select an existing account that already contains data.
+
+1. Go to the **Data Explorer** pane, and then select the container you want to work on.
+
+1. Enter a valid query, and then select **Execute Gremlin Query**.
+
+1. Select **Query Stats** to display the actual request charge for the request you executed.
++
+## Use the .NET SDK driver
+
+When you use the [Gremlin.NET SDK](https://www.nuget.org/packages/Gremlin.Net/), status attributes are available under the `StatusAttributes` property of the `ResultSet<>` object:
+
+```csharp
+ResultSet<dynamic> results = client.SubmitAsync<dynamic>("g.V().count()").Result;
+double requestCharge = (double)results.StatusAttributes["x-ms-request-charge"];
+```
+
+For more information, see [Quickstart: Build a .NET Framework or Core application by using an Azure Cosmos DB for Gremlin account](quickstart-dotnet.md).
+
+## Use the Java SDK driver
+
+When you use the [Gremlin Java SDK](https://mvnrepository.com/artifact/org.apache.tinkerpop/gremlin-driver), you can retrieve status attributes by calling the `statusAttributes()` method on the `ResultSet` object:
+
+```java
+ResultSet results = client.submit("g.V().count()");
+Double requestCharge = (Double)results.statusAttributes().get().get("x-ms-request-charge");
+```
+
+For more information, see [Quickstart: Create a graph database in Azure Cosmos DB by using the Java SDK](quickstart-java.md).
+
+## Next steps
+
+To learn about optimizing your RU consumption, see these articles:
+
+* [Request units and throughput in Azure Cosmos DB](../request-units.md)
+* [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)
+* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
cosmos-db Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/headers.md
+
+ Title: Azure Cosmos DB for Gremlin response headers
+description: Reference documentation for server response metadata that enables additional troubleshooting
++++ Last updated : 09/03/2019++++
+# Azure Cosmos DB for Gremlin server response headers
+
+This article covers the headers that the Azure Cosmos DB for Gremlin server returns to the caller upon request execution. These headers are useful for troubleshooting request performance, building applications that integrate natively with the Azure Cosmos DB service, and simplifying customer support.
+
+Keep in mind that by taking a dependency on these headers, you limit the portability of your application to other Gremlin implementations. In return, you gain tighter integration with Azure Cosmos DB for Gremlin. These headers aren't a TinkerPop standard.
+
+## Headers
+
+| Header | Type | Sample Value | When Included | Explanation |
+| | | | | |
+| **x-ms-request-charge** | double | 11.3243 | Success and Failure | Amount of collection or database throughput consumed in [request units (RU/s or RUs)](../request-units.md) for a partial response message. This header is present in every continuation for requests that have multiple chunks. It reflects the charge of a particular response chunk. Only for requests that consist of a single response chunk does this header match the total cost of the traversal. For the majority of complex traversals, this value represents a partial cost. |
+| **x-ms-total-request-charge** | double | 423.987 | Success and Failure | Amount of collection or database throughput consumed in [request units (RU/s or RUs)](../request-units.md) for entire request. This header is present in every continuation for requests that have multiple chunks. It indicates cumulative charge since the beginning of request. Value of this header in the last chunk indicates complete request charge. |
+| **x-ms-server-time-ms** | double | 13.75 | Success and Failure | This header is included for latency troubleshooting purposes. It indicates the amount of time, in milliseconds, that Azure Cosmos DB for Gremlin server took to execute and produce a partial response message. Using value of this header and comparing it to overall request latency applications can calculate network latency overhead. |
+| **x-ms-total-server-time-ms** | double | 130.512 | Success and Failure | Total time, in milliseconds, that Azure Cosmos DB for Gremlin server took to execute entire traversal. This header is included in every partial response. It represents cumulative execution time since the start of request. The last response indicates total execution time. This header is useful to differentiate between client and server as a source of latency. You can compare traversal execution time on the client to the value of this header. |
+| **x-ms-status-code** | long | 200 | Success and Failure | Header indicates internal reason for request completion or termination. Application is advised to look at the value of this header and take corrective action. |
+| **x-ms-substatus-code** | long | 1003 | Failure Only | Azure Cosmos DB is a multi-model database that is built on top of unified storage layer. This header contains additional insights about the failure reason when failure occurs within lower layers of high availability stack. Application is advised to store this header and use it when contacting Azure Cosmos DB customer support. Value of this header is useful for Azure Cosmos DB engineer for quick troubleshooting. |
+| **x-ms-retry-after-ms** | string (TimeSpan) | "00:00:03.9500000" | Failure Only | This header is a string representation of a .NET [TimeSpan](/dotnet/api/system.timespan) type. This value will only be included in requests failed due provisioned throughput exhaustion. Application should resubmit traversal again after instructed period of time. |
+| **x-ms-activity-id** | string (Guid) | "A9218E01-3A3A-4716-9636-5BD86B056613" | Success and Failure | Header contains a unique server-side identifier of a request. Each request is assigned a unique identifier by the server for tracking purposes. Applications should log activity identifiers returned by the server for requests that customers may want to contact customer support about. Azure Cosmos DB support personnel can find specific requests by these identifiers in Azure Cosmos DB service telemetry. |
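+
+For example, an application that's throttled can convert the `x-ms-retry-after-ms` value into a delay before it resubmits the traversal. The following is a minimal C# sketch; the `RetryHelper` class is illustrative, and the value is assumed to come from the status attributes shown in the samples later in this article.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+
+public static class RetryHelper
+{
+    // Waits for the duration indicated by the x-ms-retry-after-ms value,
+    // which is a string representation of a .NET TimeSpan (for example, "00:00:03.9500000").
+    public static async Task WaitBeforeRetryAsync(string retryAfterValue)
+    {
+        TimeSpan retryAfter = TimeSpan.Parse(retryAfterValue);
+        await Task.Delay(retryAfter);
+    }
+}
+```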
+
+## Status codes
+
+The most common codes that the server returns for the `x-ms-status-code` status attribute are listed below.
+
+| Status | Explanation |
+| | |
+| **401** | The error message `"Unauthorized: Invalid credentials provided"` is returned when the authentication password doesn't match the Azure Cosmos DB account key. Navigate to your Azure Cosmos DB for Gremlin account in the Azure portal and confirm that the key is correct.|
+| **404** | Returned by concurrent operations that attempt to delete and update the same edge or vertex simultaneously. The error message `"Owner resource does not exist"` indicates that the database or collection specified in the connection parameters, in `/dbs/<database name>/colls/<collection or graph name>` format, is incorrect.|
+| **409** | `"Conflicting request to resource has been attempted. Retry to avoid conflicts."` This error usually happens when a vertex or an edge with the same identifier already exists in the graph.|
+| **412** | The status code is complemented with the error message `"PreconditionFailedException": One of the specified pre-condition is not met`. This error indicates an optimistic concurrency control violation between reading an edge or vertex and writing it back to the store after modification. The most common situation in which this error occurs is property modification, for example `g.V('identifier').property('name','value')`. The Gremlin engine reads the vertex, modifies it, and writes it back. If another traversal running in parallel tries to write the same vertex or edge, one of them will receive this error. The application should submit the traversal to the server again.|
+| **429** | The request was throttled and should be retried after the duration specified in **x-ms-retry-after-ms**.|
+| **500** | An error message that contains `"NotFoundException: Entity with the specified id does not exist in the system."` indicates that a database and/or collection was re-created with the same name. This error will disappear within 5 minutes as the change propagates and invalidates caches in different Azure Cosmos DB components. To avoid this issue, use unique database and collection names every time.|
+| **1000** | This status code is returned when the server successfully parses a message but isn't able to execute it. It usually indicates a problem with the query.|
+| **1001** | This code is returned when the server completes traversal execution but fails to serialize the response back to the client. This error can happen when the traversal generates a complex result that is too large or doesn't conform to the TinkerPop protocol specification. The application should simplify the traversal when it encounters this error. |
+| **1003** | `"Query exceeded memory limit. Bytes Consumed: XXX, Max: YYY"` is returned when a traversal exceeds the allowed memory limit. The memory limit is **2 GB** per traversal.|
+| **1004** | This status code indicates a malformed graph request. A request can be malformed when it fails deserialization, when a non-value type is deserialized as a value type, or when an unsupported Gremlin operation is requested. The application shouldn't retry the request because it won't be successful. |
+| **1007** | Usually this status code is returned with the error message `"Could not process request. Underlying connection has been closed."`. This situation can happen if the client driver attempts to use a connection that is being closed by the server. The application should retry the traversal on a different connection. |
+| **1008** | The Azure Cosmos DB for Gremlin server can terminate connections to rebalance traffic in the cluster. Client drivers should handle this situation and use only live connections to send requests to the server. Occasionally, client drivers may not detect that a connection was closed. When the application encounters the error `"Connection is too busy. Please retry after sometime or open more connections."`, it should retry the traversal on a different connection. |
+| **1009** | The operation did not complete in the allotted time and was canceled by the server. Optimize your traversals to run quickly by filtering vertices or edges on every hop of the traversal to narrow the search scope. The default request timeout is **60 seconds**. |
+
+## Samples
+
+A sample client application based on Gremlin.Net that reads status attributes:
+
+```csharp
+// Following example reads a status code and total request charge from server response attributes.
+// Variable "server" is assumed to be assigned to an instance of a GremlinServer that is connected to Azure Cosmos DB account.
+using (GremlinClient client = new GremlinClient(server, new GraphSON2Reader(), new GraphSON2Writer(), GremlinClient.GraphSON2MimeType))
+{
+ ResultSet<dynamic> responseResultSet = await GremlinClientExtensions.SubmitAsync<dynamic>(client, requestScript: "g.V().count()");
+ long statusCode = (long)responseResultSet.StatusAttributes["x-ms-status-code"];
+ double totalRequestCharge = (double)responseResultSet.StatusAttributes["x-ms-total-request-charge"];
+
+ // Status code and request charge are logged into application telemetry.
+}
+```
+
+An example that demonstrates how to read status attributes from the Gremlin Java client:
+
+```java
+try {
+    ResultSet resultSet = this.client.submit("g.addV().property('id', '13')");
+    List<Result> results = resultSet.all().get();
+
+    // Process and consume results
+
+} catch (ResponseException re) {
+    // Check for known errors that need to be retried or skipped
+    if (re.getStatusAttributes().isPresent()) {
+        Map<String, Object> attributes = re.getStatusAttributes().get();
+        int statusCode = (int) attributes.getOrDefault("x-ms-status-code", -1);
+
+        // Now we can check for specific conditions
+        if (statusCode == 409) {
+            // Handle conflicting writes
+        }
+
+        // Check if we need to delay the retry
+        if (attributes.containsKey("x-ms-retry-after-ms")) {
+            // Read the value of the attribute as is
+            String retryAfterTimeSpan = (String) attributes.get("x-ms-retry-after-ms");
+
+            // Convert the value into an actionable duration
+            LocalTime localTime = LocalTime.parse(retryAfterTimeSpan);
+            Duration duration = Duration.between(LocalTime.MIN, localTime);
+
+            // Perform a retry after the "duration" interval of time has elapsed
+        }
+    }
+}
+```
+
+## Next steps
+* [HTTP status codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb)
+* [Common Azure Cosmos DB REST response headers](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers)
+* [TinkerPop Graph Driver Provider Requirements](http://tinkerpop.apache.org/docs/current/dev/provider/#_graph_driver_provider_requirements)
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/how-to-create-container.md
+
+ Title: Create a container in Azure Cosmos DB for Gremlin
+description: Learn how to create a container in Azure Cosmos DB for Gremlin by using Azure portal, .NET and other SDKs.
+++ Last updated : 10/16/2020++
+ms.devlang: csharp
+++
+# Create a container in Azure Cosmos DB for Gremlin
+
+This article explains the different ways to create a container in Azure Cosmos DB for Gremlin. It shows how to create a container by using the Azure portal, Azure CLI, PowerShell, or supported SDKs, and demonstrates how to specify the partition key and provision throughput.
+
+If you're using a different API, see the [API for MongoDB](../mongodb/how-to-create-container.md) and [API for Cassandra](../cassandr) articles to create the container.
+
+> [!NOTE]
+> When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
+
+## <a id="portal-gremlin"></a>Create using Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. [Create a new Azure Cosmos DB account](quickstart-dotnet.md#create-a-database-account), or select an existing account.
+
+1. Open the **Data Explorer** pane, and select **New Graph**. Next, provide the following details:
+
+ * Indicate whether you are creating a new database, or using an existing one.
+ * Enter a Graph ID.
+ * Select **Unlimited** storage capacity.
+ * Enter a partition key for vertices.
+ * Enter a throughput to be provisioned (for example, 1000 RUs).
+ * Select **OK**.
+
+ :::image type="content" source="../media/how-to-create-container/partitioned-collection-create-gremlin.png" alt-text="Screenshot of API for Gremlin, Add Graph dialog box":::
+
+## <a id="dotnet-sql-graph"></a>Create using .NET SDK
+
+If you encounter a timeout exception when creating a collection, do a read operation to validate whether the collection was created successfully (see the sketch after the following example). The read operation throws an exception until the collection create operation succeeds. For the list of status codes supported by the create operation, see the [HTTP Status Codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) article.
+
+```csharp
+// Create a container with a partition key and provision 1000 RU/s throughput.
+DocumentCollection myCollection = new DocumentCollection();
+myCollection.Id = "myContainerName";
+myCollection.PartitionKey.Paths.Add("/myPartitionKey");
+
+await client.CreateDocumentCollectionAsync(
+ UriFactory.CreateDatabaseUri("myDatabaseName"),
+ myCollection,
+ new RequestOptions { OfferThroughput = 1000 });
+```
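+
+If the create call above times out, a follow-up read can confirm whether the container now exists. The following is a minimal sketch that assumes the same `client`, database, and container names as the previous example:
+
+```csharp
+// Validate that the container was created by reading it back.
+// The read throws a DocumentClientException with a 404 status until the create operation completes.
+try
+{
+    ResourceResponse<DocumentCollection> response = await client.ReadDocumentCollectionAsync(
+        UriFactory.CreateDocumentCollectionUri("myDatabaseName", "myContainerName"));
+
+    Console.WriteLine("Container exists with resource ID {0}.", response.Resource.ResourceId);
+}
+catch (DocumentClientException ex) when (ex.StatusCode == System.Net.HttpStatusCode.NotFound)
+{
+    // The create operation hasn't completed yet; retry the read after a short delay.
+}
+```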
+
+## <a id="cli-mongodb"></a>Create using Azure CLI
+
+[Create a Gremlin graph with Azure CLI](../scripts/cli/gremlin/create.md). For a listing of all Azure CLI samples across all Azure Cosmos DB APIs, see [Azure CLI samples for Azure Cosmos DB](cli-samples.md).
+
+## Create using PowerShell
+
+[Create a Gremlin graph with PowerShell](../scripts/powershell/gremlin/create.md). For a listing of all PowerShell samples across all Azure Cosmos DB APIs, see [PowerShell samples](powershell-samples.md).
+
+## Next steps
+
+* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
+* [Request Units in Azure Cosmos DB](../request-units.md)
+* [Provision throughput on containers and databases](../set-throughput.md)
+* [Work with Azure Cosmos DB account](../account-databases-containers-items.md)
cosmos-db How To Provision Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/how-to-provision-throughput.md
+
+ Title: Provision throughput on Azure Cosmos DB for Gremlin resources
+description: Learn how to provision container, database, and autoscale throughput in Azure Cosmos DB for Gremlin resources. You will use Azure portal, CLI, PowerShell and various other SDKs.
+++ Last updated : 10/15/2020++
+ms.devlang: csharp
+++
+# Provision database, container or autoscale throughput on Azure Cosmos DB for Gremlin resources
+
+This article explains how to provision throughput in Azure Cosmos DB for Gremlin. You can provision standard (manual) or autoscale throughput on a container, or on a database and share it among the containers within the database. You can provision throughput by using the Azure portal, Azure CLI, or Azure Cosmos DB SDKs.
+
+If you are using a different API, see the [API for NoSQL](../how-to-provision-container-throughput.md) and [API for Cassandra](../cassandr) articles to provision the throughput.
+
+## <a id="portal-gremlin"></a> Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. [Create a new Azure Cosmos DB account](../mongodb/create-mongodb-dotnet.md#create-an-azure-cosmos-db-account), or select an existing Azure Cosmos DB account.
+
+1. Open the **Data Explorer** pane, and select **New Graph**. Next, provide the following details:
+
+ * Indicate whether you are creating a new database or using an existing one. Select the **Provision database throughput** option if you want to provision throughput at the database level.
+ * Enter a graph ID.
+ * Enter a partition key value (for example, `/ItemID`).
+ * Enter a throughput that you want to provision (for example, 1000 RUs).
+ * Select **OK**.
+
+ :::image type="content" source="./media/how-to-provision-throughput/provision-database-throughput-portal-gremlin-api.png" alt-text="Screenshot of Data Explorer, when creating a new graph with database level throughput":::
+
+## .NET SDK
+
+> [!Note]
+> Use the Azure Cosmos DB SDKs for API for NoSQL to provision throughput for all Azure Cosmos DB APIs, except Cassandra and API for MongoDB.
+
+### Provision container level throughput
+
+# [.NET SDK V2](#tab/dotnetv2)
+
+```csharp
+// Create a container with a partition key and provision throughput of 400 RU/s
+DocumentCollection myCollection = new DocumentCollection();
+myCollection.Id = "myContainerName";
+myCollection.PartitionKey.Paths.Add("/myPartitionKey");
+
+await client.CreateDocumentCollectionAsync(
+ UriFactory.CreateDatabaseUri("myDatabaseName"),
+ myCollection,
+ new RequestOptions { OfferThroughput = 400 });
+```
+
+# [.NET SDK V3](#tab/dotnetv3)
+
+[!code-csharp[](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos/tests/Microsoft.Azure.Cosmos.Tests/SampleCodeForDocs/ContainerDocsSampleCode.cs?name=ContainerCreateWithThroughput)]
+++
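+
+With the v3 .NET SDK, you can also provision autoscale throughput when you create a container. The following is a minimal sketch, assuming the Microsoft.Azure.Cosmos package version 3.9 or later; the endpoint, key, and resource names are placeholders:
+
+```csharp
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+
+public static class AutoscaleContainerExample
+{
+    // Creates a database and a container with autoscale throughput (maximum 4,000 RU/s)
+    // by using the v3 .NET SDK. The endpoint, key, and resource names are placeholders.
+    public static async Task CreateAsync()
+    {
+        using CosmosClient client = new CosmosClient("<account endpoint>", "<account key>");
+
+        Database database = await client.CreateDatabaseIfNotExistsAsync("myDatabaseName");
+
+        ContainerProperties properties = new ContainerProperties("myContainerName", "/myPartitionKey");
+        ThroughputProperties autoscale = ThroughputProperties.CreateAutoscaleThroughput(4000);
+
+        await database.CreateContainerIfNotExistsAsync(properties, autoscale);
+    }
+}
+```
+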
+### Provision database level throughput
+
+# [.NET SDK V2](#tab/dotnetv2)
+
+```csharp
+//set the throughput for the database
+RequestOptions options = new RequestOptions
+{
+ OfferThroughput = 500
+};
+
+//create the database
+await client.CreateDatabaseIfNotExistsAsync(
+ new Database {Id = databaseName},
+ options);
+```
+
+# [.NET SDK V3](#tab/dotnetv3)
+
+[!code-csharp[](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos/tests/Microsoft.Azure.Cosmos.Tests/SampleCodeForDocs/DatabaseDocsSampleCode.cs?name=DatabaseCreateWithThroughput)]
+++
+## Azure Resource Manager
+
+Azure Resource Manager templates can be used to provision autoscale throughput on database or container-level resources for all Azure Cosmos DB APIs. See [Azure Resource Manager templates for Azure Cosmos DB](resource-manager-template-samples.md) for samples.
+
+## Azure CLI
+
+Azure CLI can be used to provision autoscale throughput on database-level or container-level resources for all Azure Cosmos DB APIs. For samples, see [Azure CLI Samples for Azure Cosmos DB](cli-samples.md).
+
+## Azure PowerShell
+
+Azure PowerShell can be used to provision autoscale throughput on database-level or container-level resources for all Azure Cosmos DB APIs. For samples, see [Azure PowerShell samples for Azure Cosmos DB](powershell-samples.md).
+
+## Next steps
+
+See the following articles to learn about throughput provisioning in Azure Cosmos DB:
+
+* [Request units and throughput in Azure Cosmos DB](../request-units.md)
cosmos-db How To Use Resource Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/how-to-use-resource-tokens.md
+
+ Title: Use Azure Cosmos DB resource tokens with the Gremlin SDK
+description: Learn how to create resource tokens and use them to access the Graph database.
+++++ Last updated : 09/06/2019
+ms.devlang: csharp
+++
+# Use Azure Cosmos DB resource tokens with the Gremlin SDK
+
+This article explains how to use [Azure Cosmos DB resource tokens](../secure-access-to-data.md) to access the Graph database through the Gremlin SDK.
+
+## Create a resource token
+
+The Apache TinkerPop Gremlin SDK doesn't have an API to use to create resource tokens. The term *resource token* is an Azure Cosmos DB concept. To create resource tokens, download the [Azure Cosmos DB SDK](../nosql/sdk-dotnet-v3.md). If your application needs to create resource tokens and use them to access the Graph database, it requires two separate SDKs.
+
+The object model hierarchy above resource tokens is illustrated in the following outline:
+
+- **Azure Cosmos DB account** - The top-level entity that has a DNS associated with it (for example, `contoso.gremlin.cosmos.azure.com`).
+ - **Azure Cosmos DB database**
+ - **User**
+ - **Permission**
+ - **Token** - A Permission object property that denotes what actions are allowed or denied.
+
+A resource token uses the following format: `"type=resource&ver=1&sig=<base64 string>;<base64 string>;"`. This string is opaque for the clients and should be used as is, without modification or interpretation.
+
+```csharp
+// Notice that the document client is created against the .NET SDK endpoint, rather than the Gremlin endpoint.
+DocumentClient client = new DocumentClient(
+    new Uri("https://contoso.documents.azure.com:443/"),
+    "<primary key>",
+    new ConnectionPolicy
+    {
+        EnableEndpointDiscovery = false,
+        ConnectionMode = ConnectionMode.Direct
+    });
+
+// Read a specific permission to obtain a token.
+// The token is available as the Token property of the Permission that ReadPermissionAsync() returns.
+// The call succeeds only if the database id, user id, and permission id already exist.
+// Note that <database id> is not a database name. It is a base64 string that represents the database identifier, for example "KalVAA==".
+// A similar comment applies to <user id> and <permission id>.
+Permission permission = await client.ReadPermissionAsync(UriFactory.CreatePermissionUri("<database id>", "<user id>", "<permission id>"));
+
+Console.WriteLine("Obtained token {0}", permission.Token);
+```
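+
+If the user and permission don't exist yet, you can create them with the same .NET SDK before reading the token. The following is a minimal sketch rather than a complete setup; the database, graph, user, and permission identifiers are placeholders, and the permission grants read-only access to a single graph:
+
+```csharp
+// Create a user in the database.
+await client.CreateUserAsync(
+    UriFactory.CreateDatabaseUri("<database name>"),
+    new User { Id = "<user id>" });
+
+// Read the graph so its self-link can be used as the permission's resource link.
+ResourceResponse<DocumentCollection> graph = await client.ReadDocumentCollectionAsync(
+    UriFactory.CreateDocumentCollectionUri("<database name>", "<graph name>"));
+
+// Grant the user read-only access to the graph.
+ResourceResponse<Permission> permission = await client.CreatePermissionAsync(
+    UriFactory.CreateUserUri("<database name>", "<user id>"),
+    new Permission
+    {
+        Id = "<permission id>",
+        PermissionMode = PermissionMode.Read,
+        ResourceLink = graph.Resource.SelfLink
+    });
+
+// The resource token is available on the created permission.
+Console.WriteLine("Created token {0}", permission.Resource.Token);
+```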
+
+## Use a resource token
+You can use resource tokens directly as a "password" property when you construct the GremlinServer class.
+
+```csharp
+// The Gremlin application needs to be given a resource token. It can't discover the token on its own.
+// You can obtain the token for a given permission by using the Azure Cosmos DB SDK, or you can pass it into the application as a command line argument or configuration value.
+string resourceToken = GetResourceToken();
+
+// Configure the Gremlin server to use a resource token rather than a primary key.
+GremlinServer server = new GremlinServer(
+ "contoso.gremlin.cosmosdb.azure.com",
+ port: 443,
+ enableSsl: true,
+ username: "/dbs/<database name>/colls/<collection name>",
+
+ // The format of the token is "type=resource&ver=1&sig=<base64 string>;<base64 string>;".
+ password: resourceToken);
+
+ using (GremlinClient gremlinClient = new GremlinClient(server, new GraphSON2Reader(), new GraphSON2Writer(), GremlinClient.GraphSON2MimeType))
+ {
+ await gremlinClient.SubmitAsync("g.V().limit(1)");
+ }
+```
+
+The same approach works in all TinkerPop Gremlin SDKs.
+
+```java
+Cluster.Builder builder = Cluster.build();
+
+AuthProperties authenticationProperties = new AuthProperties();
+authenticationProperties.with(AuthProperties.Property.USERNAME,
+ String.format("/dbs/%s/colls/%s", "<database name>", "<collection name>"));
+
+// The format of the token is "type=resource&ver=1&sig=<base64 string>;<base64 string>;".
+authenticationProperties.with(AuthProperties.Property.PASSWORD, resourceToken);
+
+builder.authProperties(authenticationProperties);
+```
+
+## Limit
+
+With a single Gremlin account, you can issue an unlimited number of tokens. However, you can use only up to 100 tokens concurrently within 1 hour. If an application exceeds the token limit per hour, an authentication request is denied, and you receive the following error message: "Exceeded allowed resource token limit of 100 that can be used concurrently." Closing active connections that use specific tokens doesn't free up slots for new tokens, because the Azure Cosmos DB for Gremlin database engine keeps track of unique tokens used during the hour immediately prior to the authentication request.
+
+## Permission
+
+A common error that applications encounter while they're using resource tokens is, "Insufficient permissions provided in the authorization header for the corresponding request. Please retry with another authorization header." This error is returned when a Gremlin traversal attempts to write an edge or a vertex but the resource token grants *Read* permissions only. Inspect your traversal to see whether it contains any of the following steps: *.addV()*, *.addE()*, *.drop()*, or *.property()*.
+
+## Next steps
+* [Azure role-based access control (Azure RBAC)](../role-based-access-control.md) in Azure Cosmos DB
+* [Learn how to secure access to data](../secure-access-to-data.md) in Azure Cosmos DB
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/introduction.md
+
+ Title: 'Introduction to Azure Cosmos DB for Gremlin'
+description: Learn how you can use Azure Cosmos DB to store, query, and traverse massive graphs with low latency by using the Gremlin graph query language of Apache TinkerPop.
++++ Last updated : 07/26/2021+++
+# Introduction to Gremlin API in Azure Cosmos DB
+
+[Azure Cosmos DB](../introduction.md) is Microsoft's globally distributed, multi-model database service for mission-critical applications. It supports document, key-value, graph, and column-family data models. Azure Cosmos DB provides a graph database service via the API for Gremlin on a fully managed platform designed for any scale.
++
+This article provides an overview of Azure Cosmos DB for Gremlin and explains how to use it to store massive graphs with billions of vertices and edges. You can query the graphs with millisecond latency and evolve the graph structure easily. Azure Cosmos DB's Gremlin API is built on [Apache TinkerPop](https://tinkerpop.apache.org), a graph computing framework, and uses the Gremlin query language.
+
+Azure Cosmos DB's Gremlin API combines the power of graph database algorithms with highly scalable, managed infrastructure to provide a unique, flexible solution to common data problems associated with the lack of flexibility of relational approaches.
+
+> [!NOTE]
+> The Azure Cosmos DB graph engine closely follows the Apache TinkerPop specification. However, there are some differences in the implementation details that are specific to Azure Cosmos DB. Some features supported by Apache TinkerPop aren't available in Azure Cosmos DB. To learn more about the unsupported features, see the [compatibility with Apache TinkerPop](support.md) article.
+
+## Features of Azure Cosmos DB's Gremlin API
+
+Azure Cosmos DB is a fully managed graph database that offers global distribution, elastic scaling of storage and throughput, automatic indexing and query, tunable consistency levels, and support for the TinkerPop standard.
+
+> [!NOTE]
+> The [serverless capacity mode](../serverless.md) is now available on Azure Cosmos DB's Gremlin API.
+
+The following are the differentiated features that Azure Cosmos DB for Gremlin offers:
+
+* **Elastically scalable throughput and storage**
+
+ Graphs in the real world need to scale beyond the capacity of a single server. Azure Cosmos DB supports horizontally scalable graph databases that can have a virtually unlimited size in terms of storage and provisioned throughput. As the graph database scale grows, the data will be automatically distributed using [graph partitioning](./partitioning.md).
+
+* **Multi-region replication**
+
+ Azure Cosmos DB can automatically replicate your graph data to any Azure region worldwide. Global replication simplifies the development of applications that require global access to data. In addition to minimizing read and write latency anywhere around the world, Azure Cosmos DB provides automatic regional failover mechanism that can ensure the continuity of your application in the rare case of a service interruption in a region.
+
+* **Fast queries and traversals with the most widely adopted graph query standard**
+
+ Store heterogeneous vertices and edges and query them through a familiar Gremlin syntax. Gremlin is an imperative, functional query language that provides a rich interface to implement common graph algorithms.
+
+ Azure Cosmos DB enables rich real-time queries and traversals without the need to specify schema hints, secondary indexes, or views. Learn more in [Query graphs by using Gremlin](support.md).
+
+* **Fully managed graph database**
+
+ Azure Cosmos DB eliminates the need to manage database and machine resources. Most existing graph database platforms are bound to the limitations of their infrastructure and often require a high degree of maintenance to ensure their operation.
+
+ As a fully managed service, Cosmos DB removes the need to manage virtual machines, update runtime software, manage sharding or replication, or deal with complex data-tier upgrades. Every graph is automatically backed up and protected against regional failures. This allows developers to focus on delivering application value instead of operating and managing their graph databases.
+
+* **Automatic indexing**
+
+ By default, Azure Cosmos DB automatically indexes all the properties within nodes (also called vertices) and edges in the graph, and doesn't expect or require any schema or creation of secondary indexes. Learn more about [indexing in Azure Cosmos DB](../index-overview.md).
+
+* **Compatibility with Apache TinkerPop**
+
+ Azure Cosmos DB supports the [open-source Apache TinkerPop standard](https://tinkerpop.apache.org/). The TinkerPop standard has a rich ecosystem of applications and libraries that can be easily integrated with Azure Cosmos DB's Gremlin API.
+
+* **Tunable consistency levels**
+
+ Azure Cosmos DB provides five well-defined consistency levels to achieve the right tradeoff between consistency and performance for your application. For queries and read operations, Azure Cosmos DB offers five distinct consistency levels: strong, bounded-staleness, session, consistent prefix, and eventual. These granular, well-defined consistency levels allow you to make sound tradeoffs among consistency, availability, and latency. Learn more in [Tunable data consistency levels in Azure Cosmos DB](../consistency-levels.md).
+
+## Scenarios that use Gremlin API
+
+Here are some scenarios where graph support of Azure Cosmos DB can be useful:
+
+* **Social networks/Customer 365**
+
+ By combining data about your customers and their interactions with other people, you can develop personalized experiences, predict customer behavior, or connect people with others with similar interests. Azure Cosmos DB can be used to manage social networks and track customer preferences and data.
+
+* **Recommendation engines**
+
+ This scenario is commonly used in the retail industry. By combining information about products, users, and user interactions, like purchasing, browsing, or rating an item, you can build customized recommendations. The low latency, elastic scale, and native graph support of Azure Cosmos DB make it ideal for these scenarios.
+
+* **Geospatial**
+
+ Many applications in telecommunications, logistics, and travel planning need to find a location of interest within an area or locate the shortest/optimal route between two locations. Azure Cosmos DB is a natural fit for these problems.
+
+* **Internet of Things**
+
+ With the network and connections between IoT devices modeled as a graph, you can build a better understanding of the state of your devices and assets. You also can learn how changes in one part of the network can potentially affect another part.
+
+## Introduction to graph databases
+
+Data as it appears in the real world is naturally connected. Traditional data modeling focuses on defining entities separately and computing their relationships at runtime. While this model has its advantages, highly connected data can be challenging to manage under its constraints.
+
+A graph database approach relies on persisting relationships in the storage layer instead, which leads to highly efficient graph retrieval operations. Azure Cosmos DB's Gremlin API supports the [property graph model](https://tinkerpop.apache.org/docs/current/reference/#intro).
+
+### Property graph objects
+
+A property [graph](http://mathworld.wolfram.com/Graph.html) is a structure that's composed of [vertices](http://mathworld.wolfram.com/GraphVertex.html) and [edges](http://mathworld.wolfram.com/GraphEdge.html). Both objects can have an arbitrary number of key-value pairs as properties.
+
+* **Vertices/nodes** - Vertices denote discrete entities, such as a person, a place, or an event.
+
+* **Edges/relationships** - Edges denote relationships between vertices. For example, a person might know another person, be involved in an event, or have recently been at a location.
+
+* **Properties** - Properties express information about the vertices and edges. There can be any number of properties on either vertices or edges, and they can be used to describe and filter the objects in a query. Example properties include a vertex that has a name and an age, or an edge that has a time stamp and/or a weight.
+
+* **Label** - A label is a name or the identifier of a vertex or an edge. Labels can group multiple vertices or edges such that all the vertices/edges in a group have a certain label. For example, a graph can have multiple vertices of label type "person".
+
+Graph databases are often included within the NoSQL or non-relational database category, since there is no dependency on a schema or constrained data model. This lack of schema allows for modeling and storing connected structures naturally and efficiently.
+
+### Graph database by example
+
+Let's use a sample graph to understand how queries can be expressed in Gremlin. The following figure shows a business application that manages data about users, interests, and devices in the form of a graph.
++
+This graph has the following *vertex* types (these are also called *labels* in Gremlin):
+
+* **People**: The graph has three people, Robin, Thomas, and Ben
+* **Interests**: Their interests, in this example, the game of Football
+* **Devices**: The devices that people use
+* **Operating Systems**: The operating systems that the devices run on
+* **Place**: The places from which the devices are accessed
+
+We represent the relationships between these entities via the following *edge* types:
+
+* **Knows**: For example, "Thomas knows Robin"
+* **Interested**: To represent the interests of the people in our graph, for example, "Ben is interested in Football"
+* **RunsOS**: Laptop runs the Windows OS
+* **Uses**: To represent which device a person uses. For example, Robin uses a Motorola phone with serial number 77
+* **Located**: To represent the location from which the devices are accessed
+
+The Gremlin Console is an interactive terminal offered by Apache TinkerPop that you can use to interact with graph data. To learn more, see the quickstart doc on [how to use the Gremlin console](quickstart-console.md). You can also perform these operations by using Gremlin drivers in the platform of your choice (Java, Node.js, Python, or .NET). The following examples show how to run queries against this graph data by using the Gremlin Console.
+
+First, let's look at CRUD. The following Gremlin statement inserts the "Thomas" vertex into the graph:
+
+```java
+:> g.addV('person').property('id', 'thomas.1').property('firstName', 'Thomas').property('lastName', 'Andersen').property('age', 44)
+```
+
+Next, the following Gremlin statement inserts a "knows" edge between Thomas and Robin.
+
+```java
+:> g.V('thomas.1').addE('knows').to(g.V('robin.1'))
+```
+
+The following query returns the "person" vertices in descending order of their first names:
+
+```java
+:> g.V().hasLabel('person').order().by('firstName', decr)
+```
+
+Where graphs shine is when you need to answer questions like "What operating systems do friends of Thomas use?". You can run this Gremlin traversal to get that information from the graph:
+
+```java
+:> g.V('thomas.1').out('knows').out('uses').out('runsos').group().by('name').by(count())
+```
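+
+The same traversals can also be submitted from application code. The following is a minimal Gremlin.NET sketch that runs the last query above; the host name, database, graph, and key values are placeholders:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Gremlin.Net.Driver;
+using Gremlin.Net.Structure.IO.GraphSON;
+
+public static class FriendOsQuery
+{
+    // Submits the "operating systems used by friends of Thomas" traversal from application code.
+    public static async Task RunAsync()
+    {
+        GremlinServer server = new GremlinServer(
+            "contoso.gremlin.cosmos.azure.com", 443, enableSsl: true,
+            username: "/dbs/<database name>/colls/<graph name>",
+            password: "<primary key>");
+
+        using (GremlinClient client = new GremlinClient(server, new GraphSON2Reader(), new GraphSON2Writer(), GremlinClient.GraphSON2MimeType))
+        {
+            ResultSet<dynamic> results = await client.SubmitAsync<dynamic>(
+                "g.V('thomas.1').out('knows').out('uses').out('runsos').group().by('name').by(count())");
+
+            foreach (dynamic result in results)
+            {
+                Console.WriteLine(result);
+            }
+        }
+    }
+}
+```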
+
+## Next steps
+
+To learn more about graph support in Azure Cosmos DB, see:
+
+* Get started with the [Azure Cosmos DB graph tutorial](quickstart-dotnet.md).
+* Learn about how to [query graphs in Azure Cosmos DB by using Gremlin](support.md).
cosmos-db Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/limits.md
+
+ Title: Limits of Azure Cosmos DB for Gremlin
+description: Reference documentation for runtime limitations of Graph engine
++++ Last updated : 10/04/2019++++
+# Azure Cosmos DB for Gremlin limits
+
+This article describes the limits of the Azure Cosmos DB for Gremlin engine and explains how they may impact customer traversals.
+
+Azure Cosmos DB for Gremlin is built on top of Azure Cosmos DB infrastructure. Due to this, all limits explained in [Azure Cosmos DB service limits](../concepts-limits.md) still apply.
+
+## Limits
+
+When a Gremlin limit is reached, the traversal is canceled with an **x-ms-status-code** of 429, indicating a throttling error. For more information, see [Azure Cosmos DB for Gremlin response headers](headers.md).
+
+**Resource** | **Default limit** | **Explanation**
+ | |
+*Script length* | **64 KB** | Maximum length of a Gremlin traversal script per request.
+*Operator depth* | **400** | Total number of unique steps in a traversal. For example, ```g.V().out()``` has an operator count of 2: V() and out(). ```g.V('label').repeat(out()).times(100)``` has an operator depth of 3: V(), repeat(), and out(), because ```.times(100)``` is a parameter to the ```.repeat()``` operator.
+*Degree of parallelism* | **32** | Maximum number of storage partitions queried in a single request to storage layer. Graphs with hundreds of partitions will be impacted by this limit.
+*Repeat limit* | **32** | Maximum number of iterations a ```.repeat()``` operator can execute. Each iteration of ```.repeat()``` step in most cases runs breadth-first traversal, which means that any traversal is limited to at most 32 hops between vertices.
+*Traversal timeout* | **30 seconds** | Traversal will be canceled when it exceeds this time. Azure Cosmos DB Graph is an OLTP database with vast majority of traversals completing within milliseconds. To run OLAP queries on Azure Cosmos DB Graph, use [Apache Spark](https://azure.microsoft.com/services/cosmos-db/) with [Graph Data Frames](https://spark.apache.org/docs/latest/sql-programming-guide.html#datasets-and-dataframes) and [Azure Cosmos DB Spark Connector](https://github.com/Azure/azure-cosmosdb-spark).
+*Idle connection timeout* | **1 hour** | Amount of time the Gremlin service will keep idle websocket connections open. TCP keep-alive packets or HTTP keep-alive requests don't extend connection lifespan beyond this limit. Azure Cosmos DB Graph engine considers websocket connections to be idle if there are no active Gremlin requests running on it.
+*Resource token per hour* | **100** | Number of unique resource tokens used by Gremlin clients to connect to Gremlin account in a region. When the application exceeds hourly unique token limit, `"Exceeded allowed resource token limit of 100 that can be used concurrently"` will be returned on the next authentication request.
+
+## Next steps
+* [Azure Cosmos DB for Gremlin response headers](headers.md)
+* [Azure Cosmos DB Resource Tokens with Gremlin](how-to-use-resource-tokens.md)
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/manage-with-bicep.md
+
+ Title: Create and manage Azure Cosmos DB for Gremlin with Bicep
+description: Use Bicep to create and configure Azure Cosmos DB for Gremlin.
+++++ Last updated : 9/13/2021++++
+# Manage Azure Cosmos DB for Gremlin resources using Bicep
++
+In this article, you learn how to use Bicep to deploy and manage your Azure Cosmos DB for Gremlin accounts, databases, and graphs.
+
+This article shows Bicep samples for API for Gremlin accounts. You can also find Bicep samples for the [SQL](../sql/manage-with-bicep.md) and [Cassandra](../cassandr) APIs.
+
+> [!IMPORTANT]
+>
+> * Account names are limited to 44 characters, all lowercase.
+> * To change the throughput values, redeploy the template with updated RU/s.
+> * When you add or remove locations to an Azure Cosmos DB account, you can't simultaneously modify other properties. These operations must be done separately.
+
+To create any of the Azure Cosmos DB resources below, copy the following example into a new Bicep file. You can optionally create a parameters file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Bicep files, including the [Azure CLI](../../azure-resource-manager/bicep/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/bicep/deploy-powershell.md), and [Cloud Shell](../../azure-resource-manager/bicep/deploy-cloud-shell.md).
+
+<a id="create-autoscale"></a>
+
+## API for Gremlin with autoscale provisioned throughput
+
+Create an Azure Cosmos DB account for API for Gremlin with a database and graph with autoscale throughput.
++
+<a id="create-manual"></a>
+
+## API for Gremlin with standard provisioned throughput
+
+Create an Azure Cosmos DB account for API for Gremlin with a database and graph with standard provisioned throughput.
++
+## Next steps
+
+Here are some additional resources:
+
+* [Bicep documentation](../../azure-resource-manager/bicep/index.yml)
+* [Install Bicep tools](../../azure-resource-manager/bicep/install.md)
cosmos-db Modeling Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/modeling-tools.md
+
+ Title: Third-party data modeling tools for Azure Cosmos DB graph data
+description: This article describes various tools to design the Graph data model.
+++++ Last updated : 05/25/2021++
+# Third-party data modeling tools for Azure Cosmos DB graph data
++
+It is important to design the data model, and it's equally important to maintain it. Here is a set of third-party visual design tools that help in designing and maintaining the graph data model.
+
+> [!IMPORTANT]
+> The solutions mentioned in this article are for informational purposes only; ownership lies with the individual solution owners. We recommend that you evaluate them thoroughly and select the one most suitable for you.
+
+## Hackolade
+
+Hackolade is a data modeling and schema design tool for NoSQL databases. It has a data modeling studio that helps in the management of schemas for data-at-rest and data-in-motion.
+
+### How it works
+This tool provides data modeling of vertices and edges and their respective properties. It supports several use cases, some of which are:
+- Start from a blank page and think through different options to graphically build your Azure Cosmos DB Gremlin model. Then forward-engineer the model to your Azure instance to evaluate the result and continue the evolution, all without writing a single line of code.
+- Reverse-engineer an existing graph on Azure to clearly understand its structure so that you can query your graph effectively. Then enrich the data model with descriptions, metadata, and constraints to produce documentation. It supports HTML, Markdown, or PDF format, and feeds corporate data governance or dictionary systems.
+- Migrate from a relational database to NoSQL through the de-normalization of data structures.
+- Integrate with a CI/CD pipeline via a command-line interface.
+- Collaborate and manage versions by using Git.
+- And much more…
+
+### Sample
+
+The animation in Figure 2 demonstrates reverse engineering: entities are extracted from the RDBMS, Hackolade discovers relations from the foreign key relationships, and the model is then modified.
+
+Sample DDL for a SQL Server source is available [here](https://github.com/Azure-Samples/northwind-ddl-sample/blob/main/nw.sql).
++
+**Figure 1:** Graph diagram (the extracted graph data model)
+
+After the data model is modified, the tool can generate a Gremlin script, which may include a custom Azure Cosmos DB index script to ensure that optimal indexes are created. Refer to Figure 2 for the full flow.
+
+The following image demonstrates reverse engineering from RDBMS & Hackolade in action:
+
+**Figure 2:** Hackolade in action (demonstrating SQL to Gremlin data model conversion)
+
+### Useful links
+- [Download a 14-day free trial](https://hackolade.com/download.html)
+- [Get more data models](https://hackolade.com/samplemodels.html#cosmosdb).
+- [Documentation of Hackolade](https://hackolade.com/help/CosmosDBGremlin.html)
+
+## Next steps
+- [Visualizing the data](./visualization-partners.md)
cosmos-db Modeling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/modeling.md
+
+ Title: 'Graph data modeling for Azure Cosmos DB for Gremlin'
+description: Learn how to model a graph database by using Azure Cosmos DB for Gremlin. This article describes when to use a graph database and best practices to model entities and relationships.
++++ Last updated : 12/02/2019++++
+# Graph data modeling for Azure Cosmos DB for Gremlin
+
+The following document is designed to provide graph data modeling recommendations. This step is vital in order to ensure the scalability and performance of a graph database system as the data evolves. An efficient data model is especially important with large-scale graphs.
+
+## Requirements
+
+The process outlined in this guide is based on the following assumptions:
+ * The **entities** in the problem-space are identified. These entities are meant to be consumed _atomically_ for each request. In other words, the database system isn't designed to retrieve a single entity's data in multiple query requests.
+ * There is an understanding of **read and write requirements** for the database system. These requirements will guide the optimizations needed for the graph data model.
+ * The principles of the [Apache Tinkerpop property graph standard](https://tinkerpop.apache.org/docs/current/reference/#graph-computing) are well understood.
+
+## When do I need a graph database?
+
+A graph database solution can be optimally applied if the entities and relationships in a data domain have any of the following characteristics:
+
+* The entities are **highly connected** through descriptive relationships. The benefit in this scenario is the fact that the relationships are persisted in storage.
+* There are **cyclic relationships** or **self-referenced entities**. This pattern is often a challenge when using relational or document databases.
+* There are **dynamically evolving relationships** between entities. This pattern is especially applicable to hierarchical or tree-structured data with many levels.
+* There are **many-to-many relationships** between entities.
+* There are **write and read requirements on both entities and relationships**.
+
+If the above criteria is satisfied, it's likely that a graph database approach will provide advantages for **query complexity**, **data model scalability**, and **query performance**.
+
+The next step is to determine whether the graph is going to be used for analytic or transactional purposes. If the graph is intended for heavy computation and data processing workloads, it's worth exploring the [Cosmos DB Spark connector](../nosql/quickstart-spark.md) and the use of the [GraphX library](https://spark.apache.org/graphx/).
+
+## How to use graph objects
+
+The [Apache Tinkerpop property graph standard](https://tinkerpop.apache.org/docs/current/reference/#graph-computing) defines two types of objects: **vertices** and **edges**.
+
+The following are the best practices for the properties in the graph objects:
+
+| Object | Property | Type | Notes |
+| | | | |
+| Vertex | ID | String | Uniquely enforced per partition. If a value isn't supplied upon insertion, an auto-generated GUID will be stored. |
+| Vertex | label | String | This property is used to define the type of entity that the vertex represents. If a value isn't supplied, a default value "vertex" will be used. |
+| Vertex | properties | String, Boolean, Numeric | A list of separate properties stored as key-value pairs in each vertex. |
+| Vertex | partition key | String, Boolean, Numeric | This property defines where the vertex and its outgoing edges will be stored. Read more about [graph partitioning](partitioning.md). |
+| Edge | ID | String | Uniquely enforced per partition. Auto-generated by default. Edges usually don't have the need to be uniquely retrieved by an ID. |
+| Edge | label | String | This property is used to define the type of relationship that two vertices have. |
+| Edge | properties | String, Boolean, Numeric | A list of separate properties stored as key-value pairs in each edge. |
+
+> [!NOTE]
+> Edges don't require a partition key value, since its value is automatically assigned based on their source vertex. Learn more in the [graph partitioning](partitioning.md) article.
+
+## Entity and relationship modeling guidelines
+
+The following are a set of guidelines to approach data modeling for an Azure Cosmos DB for Gremlin graph database. These guidelines assume that there's an existing definition of a data domain and queries for it.
+
+> [!NOTE]
+> The steps outlined below are presented as recommendations. The final model should be evaluated and tested before its consideration as production-ready. Additionally, the recommendations below are specific to Azure Cosmos DB's Gremlin API implementation.
+
+### Modeling vertices and properties
+
+The first step for a graph data model is to map every identified entity to a **vertex object**. A one to one mapping of all entities to vertices should be an initial step and subject to change.
+
+One common pitfall is to map properties of a single entity as separate vertices. Consider the example below, where the same entity is represented in two different ways:
+
+* **Vertex-based properties**: In this approach, the entity uses three separate vertices and two edges to describe its properties. While this approach might reduce redundancy, it increases model complexity. An increase in model complexity can result in added latency, query complexity, and computation cost. This model can also present challenges in partitioning.
++
+* **Property-embedded vertices**: This approach takes advantage of the key-value pair list to represent all the properties of the entity inside a vertex. This approach provides reduced model complexity, which will lead to simpler queries and more cost-efficient traversals.
++
+> [!NOTE]
+> The above examples show a simplified graph model to only show the comparison between the two ways of dividing entity properties.
+
+The **property-embedded vertices** pattern generally provides a more performant and scalable approach. The default approach to a new graph data model should gravitate towards this pattern.
+
+However, there are scenarios where referencing to a property might provide advantages. For example: if the referenced property is updated frequently. Using a separate vertex to represent a property that is constantly changed would minimize the amount of write operations that the update would require.
+
+### Relationship modeling with edge directions
+
+After the vertices are modeled, the edges can be added to denote the relationships between them. The first aspect that needs to be evaluated is the **direction of the relationship**.
+
+Edge objects have a default direction that is followed by a traversal when using the `out()` or `outE()` function. Using this natural direction results in an efficient operation, since all vertices are stored with their outgoing edges.
+
+However, traversing in the opposite direction of an edge, using the `in()` function, will always result in a cross-partition query. Learn more about [graph partitioning](partitioning.md). If there's a need to constantly traverse using the `in()` function, it's recommended to add edges in both directions.
+
+You can set the edge direction by using the `.to()` or `.from()` modulators with the `.addE()` Gremlin step, or by using the [bulk executor library for Gremlin API](bulk-executor-dotnet.md). See the sketch after the following note.
+
+> [!NOTE]
+> Edge objects have a direction by default.
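+
+The following is a minimal Gremlin.NET sketch that adds an edge in both directions so traversals from either vertex can use `out()`. The vertex IDs, partition key values, and `knows` label are placeholders, and `client` is assumed to be a connected `GremlinClient`:
+
+```csharp
+using System.Threading.Tasks;
+using Gremlin.Net.Driver;
+
+public static class EdgeDirectionExample
+{
+    // Adds a "knows" edge in both directions so that either vertex can reach the
+    // other with out(), avoiding cross-partition in() traversals.
+    public static async Task AddBidirectionalEdgeAsync(GremlinClient client)
+    {
+        await client.SubmitAsync<dynamic>(
+            "g.V(['thomas.1', 'thomas.1']).addE('knows').to(g.V(['robin.1', 'robin.1']))");
+
+        await client.SubmitAsync<dynamic>(
+            "g.V(['robin.1', 'robin.1']).addE('knows').to(g.V(['thomas.1', 'thomas.1']))");
+    }
+}
+```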
+
+### Relationship labeling
+
+Using descriptive relationship labels can improve the efficiency of edge resolution operations. This pattern can be applied in the following ways:
+* Use non-generic terms to label a relationship.
+* Associate the label of the source vertex to the label of the target vertex with the relationship name.
++
+The more specific the label that the traverser will use to filter the edges, the better. This decision can have a significant impact on query cost as well. You can evaluate the query cost at any time [using the executionProfile step](execution-profile.md).
++
+## Next steps
+* Check out the list of supported [Gremlin steps](support.md).
+* Learn about [graph database partitioning](partitioning.md) to deal with large-scale graphs.
+* Evaluate your Gremlin queries using the [Execution Profile step](execution-profile.md).
+* Third-party Graph [design data model](modeling-tools.md)
cosmos-db Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/partitioning.md
+
+ Title: Data partitioning in Azure Cosmos DB for Gremlin
+description: Learn how you can use a partitioned graph in Azure Cosmos DB. This article also describes the requirements and best practices for a partitioned graph.
+++++ Last updated : 06/24/2019
+ms.devlang: java
++
+# Using a partitioned graph in Azure Cosmos DB
+
+One of the key features of the API for Gremlin in Azure Cosmos DB is the ability to handle large-scale graphs through horizontal scaling. Containers can scale independently in terms of storage and throughput. You can create containers in Azure Cosmos DB that are automatically scaled to store graph data. The data is automatically balanced based on the specified **partition key**.
+
+Partitioning is done internally if the container is expected to store more than 20 GB of data or if you want to allocate more than 10,000 request units per second (RU/s). Data is automatically partitioned based on the partition key you specify. A partition key is required if you create graph containers from the Azure portal or with 3.x or higher versions of Gremlin drivers. A partition key isn't required if you use 2.x or lower versions of Gremlin drivers.
+
+The same general principles from the [Azure Cosmos DB partitioning mechanism](../partitioning-overview.md) apply with a few graph-specific optimizations described below.
++
+## Graph partitioning mechanism
+
+The following guidelines describe how the partitioning strategy in Azure Cosmos DB operates:
+
+- **Both vertices and edges are stored as JSON documents**.
+
+- **Vertices require a partition key**. This key will determine in which partition the vertex will be stored through a hashing algorithm. The partition key property name is defined when creating a new container and it has a format: `/partitioning-key-name`.
+
+- **Edges will be stored with their source vertex**. In other words, for each vertex, its partition key defines where the vertex and its outgoing edges are stored. This optimization is done to avoid cross-partition queries when using the `out()` cardinality in graph queries.
+
+- **Edges contain references to the vertices they point to**. All edges are stored with the partition keys and IDs of the vertices that they are pointing to. This computation makes all `out()` direction queries always be a scoped partitioned query, and not a blind cross-partition query.
+
+- **Graph queries need to specify a partition key**. To take full advantage of the horizontal partitioning in Azure Cosmos DB, the partition key should be specified when a single vertex is selected, whenever it's possible. The following are queries for selecting one or multiple vertices in a partitioned graph:
+
+ - `/id` and `/label` are not supported as partition keys for a container in API for Gremlin.
++
+ - Selecting a vertex by ID, then **using the `.has()` step to specify the partition key property**:
+
+ ```java
+ g.V('vertex_id').has('partitionKey', 'partitionKey_value')
+ ```
+
+ - Selecting a vertex by **specifying a tuple including partition key value and ID**:
+
+ ```java
+ g.V(['partitionKey_value', 'vertex_id'])
+ ```
+
+ - Selecting a set of vertices with their IDs and **specifying a list of partition key values**:
+
+ ```java
+ g.V('vertex_id0', 'vertex_id1', 'vertex_id2', …).has('partitionKey', within('partitionKey_value0', 'partitionKey_value01', 'partitionKey_value02', …))
+ ```
+
+ - Using the **Partition strategy** at the beginning of a query and specifying a partition for the scope of the rest of the Gremlin query:
+
+ ```java
+ g.withStrategies(PartitionStrategy.build().partitionKey('partitionKey').readPartitions('partitionKey_value').create()).V()
+ ```
+
+## Best practices when using a partitioned graph
+
+Use the following guidelines to ensure performance and scalability when using partitioned graphs with unlimited containers:
+
+- **Always specify the partition key value when querying a vertex**. Getting a vertex from a known partition is the most efficient way to retrieve it (see the sketch after this list). All subsequent adjacency operations will always be scoped to a partition, since edges contain the reference ID and partition key of their target vertices.
+
+- **Use the outgoing direction when querying edges whenever it's possible**. As mentioned above, edges are stored with their source vertices in the outgoing direction. So the chances of resorting to cross-partition queries are minimized when the data and queries are designed with this pattern in mind. On the contrary, the `in()` query will always be an expensive fan-out query.
+
+- **Choose a partition key that will evenly distribute data across partitions**. This decision heavily depends on the data model of the solution. Read more about creating an appropriate partition key in [Partitioning and scale in Azure Cosmos DB](../partitioning-overview.md).
+
+- **Optimize queries to obtain data within the boundaries of a partition**. An optimal partitioning strategy would be aligned to the querying patterns. Queries that obtain data from a single partition provide the best possible performance.
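+
+The following is a minimal Gremlin.NET sketch of the first two practices: it inserts a vertex with an explicit partition key property and then reads it back with a partition-scoped query. The container is assumed to use the partition key path `/partitionKey`, and `client` is assumed to be a connected `GremlinClient`:
+
+```csharp
+using System.Threading.Tasks;
+using Gremlin.Net.Driver;
+
+public static class PartitionedGraphQueries
+{
+    // Inserts a vertex with an explicit partition key value, then reads it back
+    // by supplying both the partition key value and the id, which keeps the
+    // query scoped to a single partition.
+    public static async Task AddAndReadVertexAsync(GremlinClient client)
+    {
+        await client.SubmitAsync<dynamic>(
+            "g.addV('person').property('id', 'thomas.1').property('partitionKey', 'thomas')");
+
+        await client.SubmitAsync<dynamic>("g.V(['thomas', 'thomas.1'])");
+    }
+}
+```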
+
+## Next steps
+
+Next you can proceed to read the following articles:
+
+* Learn about [Partition and scale in Azure Cosmos DB](../partitioning-overview.md).
+* Learn about the [Gremlin support in API for Gremlin](support.md).
+* Learn about [Introduction to API for Gremlin](introduction.md).
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/powershell-samples.md
+
+ Title: Azure PowerShell samples for Azure Cosmos DB for Gremlin
+description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB for Gremlin
+++++ Last updated : 01/20/2021++++
+# Azure PowerShell samples for Azure Cosmos DB for Gremlin
+
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API-specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of the Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0), or fork these PowerShell samples for Azure Cosmos DB from the GitHub repository, [Azure Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+
+## Common Samples
+
+|Task | Description |
+|---|---|
+|[Update an account](../scripts/powershell/common/account-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update an Azure Cosmos DB account's default consistency level. |
+|[Update an account's regions](../scripts/powershell/common/update-region.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update an Azure Cosmos DB account's regions. |
+|[Change failover priority or trigger failover](../scripts/powershell/common/failover-priority-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Change the regional failover priority of an Azure Cosmos DB account or trigger a manual failover. |
+|[Account keys or connection strings](../scripts/powershell/common/keys-connection-strings.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Get primary and secondary keys, connection strings or regenerate an account key of an Azure Cosmos DB account. |
+|[Create an Azure Cosmos DB Account with IP Firewall](../scripts/powershell/common/firewall-create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account with IP Firewall enabled. |
+|||
+
+## API for Gremlin Samples
+
+|Task | Description |
+|---|---|
+|[Create an account, database and graph](../scripts/powershell/gremlin/create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos DB account, database and graph. |
+|[Create an account, database and graph with autoscale](../scripts/powershell/gremlin/autoscale.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos DB account, database and graph with autoscale. |
+|[List or get databases or graphs](../scripts/powershell/gremlin/list-get.md?toc=%2fpowershell%2fmodule%2ftoc.json)| List or get database or graph. |
+|[Perform throughput operations](../scripts/powershell/gremlin/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Perform throughput operations for a database or graph including get, update and migrate between autoscale and standard throughput. |
+|[Lock resources from deletion](../scripts/powershell/gremlin/lock.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Prevent resources from being deleted with resource locks. |
+|||
cosmos-db Quickstart Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-console.md
+
+ Title: 'Query with Azure Cosmos DB for Gremlin using TinkerPop Gremlin Console: Tutorial'
+description: An Azure Cosmos DB quickstart that creates vertices, edges, and queries by using the Azure Cosmos DB for Gremlin.
+++ Last updated : 07/10/2020++++
+# Quickstart: Create, query, and traverse an Azure Cosmos DB graph database using the Gremlin console
+
+> [!div class="op_single_selector"]
+> * [Gremlin console](quickstart-console.md)
+> * [.NET](quickstart-dotnet.md)
+> * [Java](quickstart-java.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Python](quickstart-python.md)
+> * [PHP](quickstart-php.md)
+>
+
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
+
+This quickstart demonstrates how to create an Azure Cosmos DB [Gremlin API](introduction.md) account, database, and graph (container) using the Azure portal and then use the [Gremlin Console](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console) from [Apache TinkerPop](https://tinkerpop.apache.org) to work with Gremlin API data. In this tutorial, you create and query vertices and edges, update a vertex property, query vertices, traverse the graph, and drop a vertex.
++
+The Gremlin console is Groovy/Java based and runs on Linux, Mac, and Windows. You can download it from the [Apache TinkerPop site](https://tinkerpop.apache.org/download.html).
+
+## Prerequisites
+
+You need to have an Azure subscription to create an Azure Cosmos DB account for this quickstart.
++
+You also need to install the [Gremlin Console](https://tinkerpop.apache.org/download.html). The **recommended version is v3.4.13**. (To use the Gremlin Console on Windows, you need to install the [Java Runtime](https://www.oracle.com/technetwork/java/javase/overview/index.html); Java 8 is the minimum requirement, but Java 11 is preferable.)
+
+## Create a database account
++
+## Add a graph
++
+## <a id="ConnectAppService"></a>Connect to your app service/Graph
+
+1. Before starting the Gremlin Console, create or modify the *remote-secure.yaml* configuration file in the `conf` directory of your Gremlin Console installation (for example, `apache-tinkerpop-gremlin-console-3.4.13/conf`).
+2. Fill in your *host*, *port*, *username*, *password*, *connectionPool*, and *serializer* configurations as defined in the following table:
+
+ Setting|Suggested value|Description
+    ---|---|---
+ hosts|[*account-name*.**gremlin**.cosmos.azure.com]|See the following screenshot. This is the **Gremlin URI** value on the Overview page of the Azure portal, in square brackets, with the trailing :443/ removed. Note: Be sure to use the Gremlin value, and **not** the URI that ends with [*account-name*.documents.azure.com] which would likely result in a "Host did not respond in a timely fashion" exception when attempting to execute Gremlin queries later.
+ port|443|Set to 443.
+ username|*Your username*|The resource of the form `/dbs/<db>/colls/<coll>` where `<db>` is your database name and `<coll>` is your collection name.
+ password|*Your primary key*| See second screenshot below. This is your primary key, which you can retrieve from the Keys page of the Azure portal, in the Primary Key box. Use the copy button on the left side of the box to copy the value.
+ connectionPool|{enableSsl: true}|Your connection pool setting for TLS.
+ serializer|{ className: org.apache.tinkerpop.gremlin.<br>driver.ser.GraphSONMessageSerializerV2d0,<br> config: { serializeResultToString: true }}|Set to this value and delete any `\n` line breaks when pasting in the value.
+
+ For the hosts value, copy the **Gremlin URI** value from the **Overview** page:
+
+ :::image type="content" source="./media/quickstart-console/gremlin-uri.png" alt-text="View and copy the Gremlin URI value on the Overview page in the Azure portal":::
+
+ For the password value, copy the **Primary key** from the **Keys** page:
+
+ :::image type="content" source="./media/quickstart-console/keys.png" alt-text="View and copy your primary key in the Azure portal, Keys page":::
+
+ Your remote-secure.yaml file should look like this:
+
+ ```yaml
+ hosts: [your_database_server.gremlin.cosmos.azure.com]
+ port: 443
+ username: /dbs/your_database/colls/your_collection
+ password: your_primary_key
+ connectionPool: {
+ enableSsl: true
+ }
+ serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV2d0, config: { serializeResultToString: true }}
+ ```
+
+    Make sure to wrap the value of the `hosts` parameter in square brackets ([]).
+
+1. In your terminal, run `bin/gremlin.bat` or `bin/gremlin.sh` to start the [Gremlin Console](https://tinkerpop.apache.org/docs/3.2.5/tutorials/getting-started/).
+
+1. In your terminal, run `:remote connect tinkerpop.server conf/remote-secure.yaml` to connect to your app service.
+
+ > [!TIP]
+    > If you receive the error `No appenders could be found for logger`, ensure that you updated the serializer value in the *remote-secure.yaml* file as described in step 2. If your configuration is correct, this warning can be safely ignored because it shouldn't affect the use of the console.
+
+1. Next, run `:remote console` to redirect all console commands to the remote server.
+
+    > [!NOTE]
+    > If you don't run the `:remote console` command but still want to send commands to the remote server, prefix each command with `:>`, for example `:> g.V().count()`. This prefix is part of the command, and it's important when using the Gremlin console with Azure Cosmos DB. Omitting the prefix instructs the console to execute the command locally, often against an in-memory graph. Including the `:>` prefix tells the console to execute the command remotely, in this case against Azure Cosmos DB (either the local emulator or an Azure instance). A sample session is shown after these steps.
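+
+For reference, a minimal end-to-end session (assuming the *remote-secure.yaml* file shown earlier) looks like the following once you've started the console with `bin/gremlin.bat` or `bin/gremlin.sh`:
+
+```console
+// Connect to the Azure Cosmos DB Gremlin endpoint defined in remote-secure.yaml
+:remote connect tinkerpop.server conf/remote-secure.yaml
+
+// Redirect all subsequent commands to the remote server
+:remote console
+
+// Count all vertices in the graph; without ":remote console" you would run ":> g.V().count()" instead
+g.V().count()
+```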
+
+Great! Now that we finished the setup, let's start running some console commands.
+
+Let's try a simple count() command. Type the following into the console at the prompt:
+
+```console
+g.V().count()
+```
+
+## Create vertices and edges
+
+Let's begin by adding five person vertices for *Thomas*, *Mary Kay*, *Robin*, *Ben*, and *Jack*.
+
+Input (Thomas):
+
+```console
+g.addV('person').property('firstName', 'Thomas').property('lastName', 'Andersen').property('age', 44).property('userid', 1).property('pk', 'pk')
+```
+
+Output:
+
+```bash
+==>[id:796cdccc-2acd-4e58-a324-91d6f6f5ed6d,label:person,type:vertex,properties:[firstName:[[id:f02a749f-b67c-4016-850e-910242d68953,value:Thomas]],lastName:[[id:f5fa3126-8818-4fda-88b0-9bb55145ce5c,value:Andersen]],age:[[id:f6390f9c-e563-433e-acbf-25627628016e,value:44]],userid:[[id:796cdccc-2acd-4e58-a324-91d6f6f5ed6d|userid,value:1]]]]
+```
+
+Input (Mary Kay):
+
+```console
+g.addV('person').property('firstName', 'Mary Kay').property('lastName', 'Andersen').property('age', 39).property('userid', 2).property('pk', 'pk')
+
+```
+
+Output:
+
+```bash
+==>[id:0ac9be25-a476-4a30-8da8-e79f0119ea5e,label:person,type:vertex,properties:[firstName:[[id:ea0604f8-14ee-4513-a48a-1734a1f28dc0,value:Mary Kay]],lastName:[[id:86d3bba5-fd60-4856-9396-c195ef7d7f4b,value:Andersen]],age:[[id:bc81b78d-30c4-4e03-8f40-50f72eb5f6da,value:39]],userid:[[id:0ac9be25-a476-4a30-8da8-e79f0119ea5e|userid,value:2]]]]
+
+```
+
+Input (Robin):
+
+```console
+g.addV('person').property('firstName', 'Robin').property('lastName', 'Wakefield').property('userid', 3).property('pk', 'pk')
+```
+
+Output:
+
+```bash
+==>[id:8dc14d6a-8683-4a54-8d74-7eef1fb43a3e,label:person,type:vertex,properties:[firstName:[[id:ec65f078-7a43-4cbe-bc06-e50f2640dc4e,value:Robin]],lastName:[[id:a3937d07-0e88-45d3-a442-26fcdfb042ce,value:Wakefield]],userid:[[id:8dc14d6a-8683-4a54-8d74-7eef1fb43a3e|userid,value:3]]]]
+```
+
+Input (Ben):
+
+```console
+g.addV('person').property('firstName', 'Ben').property('lastName', 'Miller').property('userid', 4).property('pk', 'pk')
+
+```
+
+Output:
+
+```bash
+==>[id:ee86b670-4d24-4966-9a39-30529284b66f,label:person,type:vertex,properties:[firstName:[[id:a632469b-30fc-4157-840c-b80260871e9a,value:Ben]],lastName:[[id:4a08d307-0719-47c6-84ae-1b0b06630928,value:Miller]],userid:[[id:ee86b670-4d24-4966-9a39-30529284b66f|userid,value:4]]]]
+```
+
+Input (Jack):
+
+```console
+g.addV('person').property('firstName', 'Jack').property('lastName', 'Connor').property('userid', 5).property('pk', 'pk')
+```
+
+Output:
+
+```bash
+==>[id:4c835f2a-ea5b-43bb-9b6b-215488ad8469,label:person,type:vertex,properties:[firstName:[[id:4250824e-4b72-417f-af98-8034aa15559f,value:Jack]],lastName:[[id:44c1d5e1-a831-480a-bf94-5167d133549e,value:Connor]],userid:[[id:4c835f2a-ea5b-43bb-9b6b-215488ad8469|userid,value:5]]]]
+```
++
+Next, let's add edges for relationships between our people.
+
+Input (Thomas -> Mary Kay):
+
+```console
+g.V().hasLabel('person').has('firstName', 'Thomas').addE('knows').to(g.V().hasLabel('person').has('firstName', 'Mary Kay'))
+```
+
+Output:
+
+```bash
+==>[id:c12bf9fb-96a1-4cb7-a3f8-431e196e702f,label:knows,type:edge,inVLabel:person,outVLabel:person,inV:0d1fa428-780c-49a5-bd3a-a68d96391d5c,outV:1ce821c6-aa3d-4170-a0b7-d14d2a4d18c3]
+```
+
+Input (Thomas -> Robin):
+
+```console
+g.V().hasLabel('person').has('firstName', 'Thomas').addE('knows').to(g.V().hasLabel('person').has('firstName', 'Robin'))
+```
+
+Output:
+
+```bash
+==>[id:58319bdd-1d3e-4f17-a106-0ddf18719d15,label:knows,type:edge,inVLabel:person,outVLabel:person,inV:3e324073-ccfc-4ae1-8675-d450858ca116,outV:1ce821c6-aa3d-4170-a0b7-d14d2a4d18c3]
+```
+
+Input (Robin -> Ben):
+
+```console
+g.V().hasLabel('person').has('firstName', 'Robin').addE('knows').to(g.V().hasLabel('person').has('firstName', 'Ben'))
+```
+
+Output:
+
+```bash
+==>[id:889c4d3c-549e-4d35-bc21-a3d1bfa11e00,label:knows,type:edge,inVLabel:person,outVLabel:person,inV:40fd641d-546e-412a-abcc-58fe53891aab,outV:3e324073-ccfc-4ae1-8675-d450858ca116]
+```
+
+## Update a vertex
+
+Let's update the *Thomas* vertex with a new age of *45*.
+
+Input:
+```console
+g.V().hasLabel('person').has('firstName', 'Thomas').property('age', 45)
+```
+Output:
+
+```bash
+==>[id:ae36f938-210e-445a-92df-519f2b64c8ec,label:person,type:vertex,properties:[firstName:[[id:872090b6-6a77-456a-9a55-a59141d4ebc2,value:Thomas]],lastName:[[id:7ee7a39a-a414-4127-89b4-870bc4ef99f3,value:Andersen]],age:[[id:a2a75d5a-ae70-4095-806d-a35abcbfe71d,value:45]]]]
+```
+
+## Query your graph
+
+Now, let's run a variety of queries against your graph.
+
+First, let's try a query with a filter to return only people who are older than 40 years old.
+
+Input (filter query):
+
+```console
+g.V().hasLabel('person').has('age', gt(40))
+```
+
+Output:
+
+```bash
+==>[id:ae36f938-210e-445a-92df-519f2b64c8ec,label:person,type:vertex,properties:[firstName:[[id:872090b6-6a77-456a-9a55-a59141d4ebc2,value:Thomas]],lastName:[[id:7ee7a39a-a414-4127-89b4-870bc4ef99f3,value:Andersen]],age:[[id:a2a75d5a-ae70-4095-806d-a35abcbfe71d,value:45]]]]
+```
+
+Next, let's project the first name for the people who are older than 40 years old.
+
+Input (filter + projection query):
+
+```console
+g.V().hasLabel('person').has('age', gt(40)).values('firstName')
+```
+
+Output:
+
+```bash
+==>Thomas
+```
+
+## Traverse your graph
+
+Let's traverse the graph to return all of Thomas's friends.
+
+Input (friends of Thomas):
+
+```console
+g.V().hasLabel('person').has('firstName', 'Thomas').outE('knows').inV().hasLabel('person')
+```
+
+Output:
+
+```bash
+==>[id:f04bc00b-cb56-46c4-a3bb-a5870c42f7ff,label:person,type:vertex,properties:[firstName:[[id:14feedec-b070-444e-b544-62be15c7167c,value:Mary Kay]],lastName:[[id:107ab421-7208-45d4-b969-bbc54481992a,value:Andersen]],age:[[id:4b08d6e4-58f5-45df-8e69-6b790b692e0a,value:39]]]]
+==>[id:91605c63-4988-4b60-9a30-5144719ae326,label:person,type:vertex,properties:[firstName:[[id:f760e0e6-652a-481a-92b0-1767d9bf372e,value:Robin]],lastName:[[id:352a4caa-bad6-47e3-a7dc-90ff342cf870,value:Wakefield]]]]
+```
+
+Next, let's get the next layer of vertices. Traverse the graph to return all the friends of Thomas's friends.
+
+Input (friends of friends of Thomas):
+
+```console
+g.V().hasLabel('person').has('firstName', 'Thomas').outE('knows').inV().hasLabel('person').outE('knows').inV().hasLabel('person')
+```
+Output:
+
+```bash
+==>[id:a801a0cb-ee85-44ee-a502-271685ef212e,label:person,type:vertex,properties:[firstName:[[id:b9489902-d29a-4673-8c09-c2b3fe7f8b94,value:Ben]],lastName:[[id:e084f933-9a4b-4dbc-8273-f0171265cf1d,value:Miller]]]]
+```
+
+## Drop a vertex
+
+Let's now delete a vertex from the graph database.
+
+Input (drop Jack vertex):
+
+```console
+g.V().hasLabel('person').has('firstName', 'Jack').drop()
+```
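+
+To confirm that the vertex was removed, you can query for it again; the following filter (using the same pattern as the earlier queries) should return no results:
+
+```console
+g.V().hasLabel('person').has('firstName', 'Jack')
+```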
+
+## Clear your graph
+
+Finally, let's clear the database of all vertices and edges.
+
+Input:
+
+```console
+g.E().drop()
+g.V().drop()
+```
+
+Congratulations! You've completed this Azure Cosmos DB: Gremlin API tutorial!
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, create vertices and edges, and traverse your graph using the Gremlin console. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
+
+> [!div class="nextstepaction"]
+> [Query using Gremlin](tutorial-query.md)
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-dotnet.md
+
+ Title: Build an Azure Cosmos DB .NET Framework, Core application using the Gremlin API
+description: Presents a .NET Framework/Core code sample you can use to connect to and query Azure Cosmos DB
++++
+ms.devlang: csharp
+ Last updated : 05/02/2020++
+# Quickstart: Build a .NET Framework or Core application using the Azure Cosmos DB for Gremlin account
+
+> [!div class="op_single_selector"]
+> * [Gremlin console](quickstart-console.md)
+> * [.NET](quickstart-dotnet.md)
+> * [Java](quickstart-java.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Python](quickstart-python.md)
+> * [PHP](quickstart-php.md)
+>
+
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
+
+This quickstart demonstrates how to create an Azure Cosmos DB [Gremlin API](introduction.md) account, database, and graph (container) using the Azure portal. You then build and run a console app built using the open-source driver [Gremlin.Net](https://tinkerpop.apache.org/docs/3.2.7/reference/#gremlin-DotNet).
+
+## Prerequisites
+
+Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
++
+## Create a database account
++
+## Add a graph
++
+## Clone the sample application
+
+Now let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```bash
+ md "C:\git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. The ``git clone`` command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-gremlindotnet-getting-started.git
+ ```
+
+4. Then open Visual Studio and open the solution file.
+
+5. Restore the NuGet packages in the project. The restore operation should include the Gremlin.Net driver and the Newtonsoft.Json package.
+
+6. You can also install the Gremlin.Net@v3.4.13 driver manually using the NuGet package manager, or the [NuGet command-line utility](/nuget/install-nuget-client-tools):
+
+ ```bash
+ nuget install Gremlin.NET -Version 3.4.13
+ ```
+
+> [!NOTE]
+> The supported Gremlin.NET driver version for the Gremlin API is listed in [Compatible client libraries](support.md#compatible-client-libraries). The latest released versions of Gremlin.NET might be incompatible, so check the linked table for compatibility updates.
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
+
+The following snippets are all taken from the Program.cs file.
+
+* Set your connection parameters based on the account created above:
+
+ :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="configureConnectivity":::
+
+* The Gremlin commands to be executed are listed in a Dictionary:
+
+ :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="defineQueries":::
+
+* Create new `GremlinServer` and `GremlinClient` connection objects by using the parameters provided above:
+
+ :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="defineClientandServerObjects":::
+
+* Execute each Gremlin query by using the `GremlinClient` object with an async task. You can read the Gremlin queries from the dictionary defined in the previous step and execute them. Then get the result and read the values, which are formatted as a dictionary, by using the `JsonSerializer` class from the Newtonsoft.Json package:
+
+ :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="executeQueries":::
+
+## Update your connection string
+
+Now go back to the Azure portal to get your connection string information and copy it into the app.
+
+1. From the [Azure portal](https://portal.azure.com/), navigate to your graph database account. On the **Overview** tab, you can see two endpoints:
+
+ **.NET SDK URI** - This value is used when you connect to the graph account by using Microsoft.Azure.Graphs library.
+
+ **Gremlin Endpoint** - This value is used when you connect to the graph account by using Gremlin.Net library.
+
+ :::image type="content" source="./media/quickstart-dotnet/endpoint.png" alt-text="Copy the endpoint":::
+
+ For this sample, record the *Host* value of the **Gremlin Endpoint**. For example, if the URI is ``https://graphtest.gremlin.cosmosdb.azure.com``, the *Host* value would be ``graphtest.gremlin.cosmosdb.azure.com``.
+
+1. Next, navigate to the **Keys** tab and record the *PRIMARY KEY* value from the Azure portal.
+
+1. After you've copied the URI and PRIMARY KEY of your account, save them to a new environment variable on the local machine running the application. To set the environment variable, open a command prompt window, and run the following command. Make sure to replace ``<cosmos-account-name>`` and ``<cosmos-account-primary-key>`` values.
+
+ ### [Windows](#tab/windows)
+
+ ```powershell
+ setx Host "<cosmos-account-name>.gremlin.cosmosdb.azure.com"
+ setx PrimaryKey "<cosmos-account-primary-key>"
+ ```
+
+ ### [Linux / macOS](#tab/linux+macos)
+
+ ```bash
+ export Host=<cosmos-account-name>.gremlin.cosmosdb.azure.com
+ export PrimaryKey=<cosmos-account-primary-key>
+ ```
+
+
+
+1. Open the *Program.cs* file and update the "database" and "container" variables with the names of the database and container (which is also the graph name) that you created above.
+
+ `private static string database = "your-database-name";`
+ `private static string container = "your-container-or-graph-name";`
+
+1. Save the Program.cs file.
+
+You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
+
+## Run the console app
+
+Select CTRL + F5 to run the application. The application will print both the Gremlin query commands and results in the console.
+
+ The console window displays the vertexes and edges being added to the graph. When the script completes, press ENTER to close the console window.
+
+## Browse using the Data Explorer
+
+You can now go back to Data Explorer in the Azure portal and browse and query your new graph data.
+
+1. In Data Explorer, the new database appears in the Graphs pane. Expand the database and container nodes, and then select **Graph**.
+
+2. Select the **Apply Filter** button to use the default query to view all the vertices in the graph. The data generated by the sample app is displayed in the Graphs pane.
+
+    You can zoom in and out of the graph, expand the graph display space, add extra vertices, and move vertices on the display surface.
+
+ :::image type="content" source="./media/quickstart-dotnet/graph-explorer.png" alt-text="View the graph in Data Explorer in the Azure portal":::
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, and run an app. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
+
+> [!div class="nextstepaction"]
+> [Query using Gremlin](tutorial-query.md)
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-java.md
+
+ Title: Build a graph database with Java in Azure Cosmos DB
+description: Presents a Java code sample you can use to connect to and query graph data in Azure Cosmos DB using Gremlin.
++
+ms.devlang: java
+ Last updated : 03/26/2019+++++
+# Quickstart: Build a graph database with the Java SDK and the Azure Cosmos DB for Gremlin
+
+> [!div class="op_single_selector"]
+> * [Gremlin console](quickstart-console.md)
+> * [.NET](quickstart-dotnet.md)
+> * [Java](quickstart-java.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Python](quickstart-python.md)
+> * [PHP](quickstart-php.md)
+>
+
+In this quickstart, you create and manage an Azure Cosmos DB for Gremlin (graph) API account from the Azure portal, and add data by using a Java app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+## Prerequisites
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.
+- A [Maven binary archive](https://maven.apache.org/download.cgi).
+- [Git](https://www.git-scm.com/downloads).
+- [Gremlin-driver 3.4.13](https://mvnrepository.com/artifact/org.apache.tinkerpop/gremlin-driver/3.4.13). This dependency is referenced in the quickstart sample's *pom.xml* file.
+
+## Create a database account
+
+Before you can create a graph database, you need to create a Gremlin (Graph) database account with Azure Cosmos DB.
++
+## Add a graph
++
+## Clone the sample application
+
+Now let's switch to working with code. Let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```bash
+ md "C:\git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to a folder to install the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-java-getting-started.git
+ ```
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-information).
+
+The following snippets are all taken from the *C:\git-samples\azure-cosmos-db-graph-java-getting-started\src\GetStarted\Program.java* file.
+
+This Java console app uses a [Gremlin API](introduction.md) database with the OSS [Apache TinkerPop](https://tinkerpop.apache.org/) driver.
+
+- The Gremlin `Client` is initialized from the configuration in the *C:\git-samples\azure-cosmos-db-graph-java-getting-started\src\remote.yaml* file.
+
+ ```java
+ cluster = Cluster.build(new File("src/remote.yaml")).create();
+ ...
+ client = cluster.connect();
+ ```
+
+- A series of Gremlin steps is executed by using the `client.submit` method.
+
+ ```java
+ ResultSet results = client.submit(gremlin);
+
+ CompletableFuture<List<Result>> completableFutureResults = results.all();
+ List<Result> resultList = completableFutureResults.get();
+
+ for (Result result : resultList) {
+ System.out.println(result.toString());
+ }
+ ```
+
+## Update your connection information
+
+Now go back to the Azure portal to get your connection information and copy it into the app. These settings enable your app to communicate with your hosted database.
+
+1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Keys**.
+
+ Copy the first portion of the URI value.
+
+ :::image type="content" source="./media/quickstart-java/copy-access-key-azure-portal.png" alt-text="View and copy an access key in the Azure portal, Keys page":::
+
+2. Open the *src/remote.yaml* file and paste the unique ID value over `$name$` in `hosts: [$name$.graphs.azure.com]`.
+
+ Line 1 of *remote.yaml* should now look similar to
+
+ `hosts: [test-graph.graphs.azure.com]`
+
+3. Change `graphs` to `gremlin.cosmosdb` in the `endpoint` value. (If you created your graph database account before December 20, 2017, make no changes to the endpoint value and continue to the next step.)
+
+ The endpoint value should now look like this:
+
+ `"endpoint": "https://testgraphacct.gremlin.cosmosdb.azure.com:443/"`
+
+4. In the Azure portal, use the copy button to copy the PRIMARY KEY and paste it over `$masterKey$` in `password: $masterKey$`.
+
+ Line 4 of *remote.yaml* should now look similar to
+
+ `password: 2Ggkr662ifxz2Mg==`
+
+5. Change line 3 of *remote.yaml* from
+
+ `username: /dbs/$database$/colls/$collection$`
+
+ to
+
+ `username: /dbs/sample-database/colls/sample-graph`
+
+ If you used a unique name for your sample database or graph, update the values as appropriate.
+
+6. Save the *remote.yaml* file.
+
+## Run the console app
+
+1. In the git terminal window, `cd` to the azure-cosmos-db-graph-java-getting-started folder.
+
+ ```git
+ cd "C:\git-samples\azure-cosmos-db-graph-java-getting-started"
+ ```
+
+2. In the git terminal window, use the following command to install the required Java packages.
+
+ ```git
+ mvn package
+ ```
+
+3. In the git terminal window, use the following command to start the Java application.
+
+ ```git
+ mvn exec:java -D exec.mainClass=GetStarted.Program
+ ```
+
+ The terminal window displays the vertices being added to the graph.
+
+ If you experience timeout errors, check that you updated the connection information correctly in [Update your connection information](#update-your-connection-information), and also try running the last command again.
+
+ Once the program stops, select Enter, then switch back to the Azure portal in your internet browser.
+
+<a id="add-sample-data"></a>
+## Review and add sample data
+
+You can now go back to Data Explorer and see the vertices added to the graph, and add additional data points.
+
+1. In your Azure Cosmos DB account in the Azure portal, select **Data Explorer**, expand **sample-graph**, select **Graph**, and then select **Apply Filter**.
+
+ :::image type="content" source="./media/quickstart-java/azure-cosmosdb-data-explorer-expanded.png" alt-text="Screenshot shows Graph selected from the A P I with the option to Apply Filter.":::
+
+2. In the **Results** list, notice the new users added to the graph. Select **ben** and notice that the user is connected to robin. You can move the vertices around by dragging and dropping, zoom in and out by scrolling the wheel of your mouse, and expand the size of the graph with the double-arrow.
+
+ :::image type="content" source="./media/quickstart-java/azure-cosmosdb-graph-explorer-new.png" alt-text="New vertices in the graph in Data Explorer in the Azure portal":::
+
+3. Let's add a few new users. Select **New Vertex** to add data to your graph.
+
+ :::image type="content" source="./media/quickstart-java/azure-cosmosdb-data-explorer-new-vertex.png" alt-text="Screenshot shows the New Vertex pane where you can enter values.":::
+
+4. In the label box, enter *person*.
+
+5. Select **Add property** to add each of the following properties. Notice that you can create unique properties for each person in your graph. Only the id key is required.
+
+ key|value|Notes
+ -|-|-
+ id|ashley|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
+ gender|female|
+ tech | java |
+
+ > [!NOTE]
+ > In this quickstart you create a non-partitioned collection. However, if you create a partitioned collection by specifying a partition key during the collection creation, then you need to include the partition key as a key in each new vertex.
+
+6. Select **OK**. You may need to expand your screen to see **OK** on the bottom of the screen.
+
+7. Select **New Vertex** again and add an additional new user.
+
+8. Enter a label of *person*.
+
+9. Select **Add property** to add each of the following properties:
+
+ key|value|Notes
+ -|-|-
+ id|rakesh|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
+ gender|male|
+ school|MIT|
+
+10. Select **OK**.
+
+11. Select the **Apply Filter** button with the default `g.V()` filter to display all the values in the graph. All of the users now show in the **Results** list.
+
+ As you add more data, you can use filters to limit your results. By default, Data Explorer uses `g.V()` to retrieve all vertices in a graph. You can change it to a different [graph query](tutorial-query.md), such as `g.V().count()`, to return a count of all the vertices in the graph in JSON format. If you changed the filter, change the filter back to `g.V()` and select **Apply Filter** to display all the results again.
+
+12. Now you can connect rakesh and ashley. Ensure **ashley** is selected in the **Results** list, then select :::image type="content" source="./media/quickstart-java/edit-pencil-button.png" alt-text="Change the target of a vertex in a graph"::: next to **Targets** on the lower right side. You may need to widen your window to see the button.
+
+ :::image type="content" source="./media/quickstart-java/azure-cosmosdb-data-explorer-edit-target.png" alt-text="Change the target of a vertex in a graph - Azure CosmosDB":::
+
+13. In the **Target** box enter *rakesh*, and in the **Edge label** box enter *knows*, and then select the check box.
+
+ :::image type="content" source="./media/quickstart-java/azure-cosmosdb-data-explorer-set-target.png" alt-text="Add a connection in Data Explorer - Azure CosmosDB":::
+
+14. Now select **rakesh** from the results list and see that ashley and rakesh are connected.
+
+ :::image type="content" source="./media/quickstart-java/azure-cosmosdb-graph-explorer.png" alt-text="Two vertices connected in Data Explorer - Azure CosmosDB":::
+
+That completes the resource creation part of this tutorial. You can continue to add vertexes to your graph, modify the existing vertexes, or change the queries. Now let's review the metrics Azure Cosmos DB provides, and then clean up the resources.
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, and run a Java app that adds data to the graph. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
+
+> [!div class="nextstepaction"]
+> [Query using Gremlin](tutorial-query.md)
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-nodejs.md
+
+ Title: Build an Azure Cosmos DB Node.js application by using Gremlin API
+description: Presents a Node.js code sample you can use to connect to and query Azure Cosmos DB
++
+ms.devlang: javascript
+ Last updated : 06/05/2019++++
+# Quickstart: Build a Node.js application by using Azure Cosmos DB for Gremlin account
+
+> [!div class="op_single_selector"]
+> * [Gremlin console](quickstart-console.md)
+> * [.NET](quickstart-dotnet.md)
+> * [Java](quickstart-java.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Python](quickstart-python.md)
+> * [PHP](quickstart-php.md)
+>
+
+In this quickstart, you create and manage an Azure Cosmos DB for Gremlin (graph) API account from the Azure portal, and add data by using a Node.js app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+## Prerequisites
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+- [Node.js 0.10.29+](https://nodejs.org/).
+- [Git](https://git-scm.com/downloads).
+
+## Create a database account
++
+## Add a graph
++
+## Clone the sample application
+
+Now let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```bash
+ md "C:\git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-nodejs-getting-started.git
+ ```
+
+3. Open the solution file in Visual Studio.
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
+
+The following snippets are all taken from the *app.js* file.
+
+This console app uses the open-source [Gremlin Node.js](https://www.npmjs.com/package/gremlin) driver.
+
+* The Gremlin client is created.
+
+ ```javascript
+ const authenticator = new Gremlin.driver.auth.PlainTextSaslAuthenticator(
+ `/dbs/${config.database}/colls/${config.collection}`,
+ config.primaryKey
+ )
++
+ const client = new Gremlin.driver.Client(
+ config.endpoint,
+ {
+ authenticator,
+ traversalsource : "g",
+ rejectUnauthorized : true,
+ mimeType : "application/vnd.gremlin-v2.0+json"
+ }
+ );
+
+ ```
+
+ The configurations are all in *config.js*, which we edit in the [following section](#update-your-connection-string).
+
+* A series of functions are defined to execute different Gremlin operations. This is one of them:
+
+ ```javascript
+ function addVertex1()
+ {
+ console.log('Running Add Vertex1');
+ return client.submit("g.addV(label).property('id', id).property('firstName', firstName).property('age', age).property('userid', userid).property('pk', 'pk')", {
+ label:"person",
+ id:"thomas",
+ firstName:"Thomas",
+ age:44, userid: 1
+ }).then(function (result) {
+ console.log("Result: %s\n", JSON.stringify(result));
+ });
+ }
+ ```
+
+* Each function executes a `client.submit` method with a Gremlin query string parameter. Here's an example of how `g.V().count()` is executed:
+
+ ```javascript
+ function countVertices()
+ {
+ console.log('Running Count');
+ return client.submit("g.V().count()", { }).then(function (result) {
+ console.log("Result: %s\n", JSON.stringify(result));
+ });
+ }
+ ```
+
+* At the end of the file, all methods are then invoked. This will execute them one after the other:
+
+ ```javascript
+ client.open()
+ .then(dropGraph)
+ .then(addVertex1)
+ .then(addVertex2)
+ .then(addEdge)
+ .then(countVertices)
+ .catch((err) => {
+ console.error("Error running query...");
+ console.error(err)
+ }).then((res) => {
+ client.close();
+ finish();
+ }).catch((err) =>
+ console.error("Fatal error:", err)
+ );
+ ```
++
+## Update your connection string
+
+1. Open the *config.js* file.
+
+2. In *config.js*, fill in the `config.endpoint` key with the **Gremlin Endpoint** value from the **Overview** page of your Cosmos DB account in the Azure portal.
+
+ `config.endpoint = "https://<your_Gremlin_account_name>.gremlin.cosmosdb.azure.com:443/";`
+
+ :::image type="content" source="./media/quickstart-nodejs/gremlin-uri.png" alt-text="View and copy an access key in the Azure portal, Overview page":::
+
+3. In *config.js*, fill in the config.primaryKey value with the **Primary Key** value from the **Keys** page of your Cosmos DB account in the Azure portal.
+
+ `config.primaryKey = "PRIMARYKEY";`
+
+ :::image type="content" source="./media/quickstart-nodejs/keys.png" alt-text="Azure portal keys blade":::
+
+4. Fill in the `config.database` and `config.collection` values with your database name and your graph (container) name.
+
+Here's an example of what your completed *config.js* file should look like:
+
+```javascript
+var config = {}
+
+// Note that this must include the protocol (HTTPS:// for .NET SDK URI or wss:// for Gremlin Endpoint) and the port number
+config.endpoint = "https://testgraphacct.gremlin.cosmosdb.azure.com:443/";
+config.primaryKey = "Pams6e7LEUS7LJ2Qk0fjZf3eGo65JdMWHmyn65i52w8ozPX2oxY3iP0yu05t9v1WymAHNcMwPIqNAEv3XDFsEg==";
+config.database = "graphdb"
+config.collection = "Persons"
+
+module.exports = config;
+```
+
+## Run the console app
+
+1. Open a terminal window and use the `cd` command to change to the directory that contains the project's *package.json* file.
+
+2. Run `npm install` to install the required npm modules, including `gremlin`.
+
+3. Run `node app.js` in a terminal to start your node application.
+
+## Browse with Data Explorer
+
+You can now go back to Data Explorer in the Azure portal to view, query, modify, and work with your new graph data.
+
+In Data Explorer, the new database appears in the **Graphs** pane. Expand the database, followed by the container, and then select **Graph**.
+
+The data generated by the sample app is displayed in the next pane within the **Graph** tab when you select **Apply Filter**.
+
+Try completing `g.V()` with `.has('firstName', 'Thomas')` to test the filter. Note that the value is case sensitive.
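+
+For example, the completed filter would be:
+
+```console
+g.V().has('firstName', 'Thomas')
+```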
+
+## Review SLAs in the Azure portal
++
+## Clean up your resources
++
+## Next steps
+
+In this article, you learned how to create an Azure Cosmos DB account, create a graph by using Data Explorer, and run a Node.js app to add data to the graph. You can now build more complex queries and implement powerful graph traversal logic by using Gremlin.
+
+> [!div class="nextstepaction"]
+> [Query by using Gremlin](tutorial-query.md)
cosmos-db Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-php.md
+
+ Title: 'Quickstart: Gremlin API with PHP - Azure Cosmos DB'
+description: Follow this quickstart to run a PHP console application that populates an Azure Cosmos DB for Gremlin database in the Azure portal.
++
+ms.devlang: php
+ Last updated : 06/29/2022++++
+# Quickstart: Create an Azure Cosmos DB graph database with PHP and the Azure portal
++
+> [!div class="op_single_selector"]
+> * [Gremlin console](quickstart-console.md)
+> * [.NET](quickstart-dotnet.md)
+> * [Java](quickstart-java.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Python](quickstart-python.md)
+> * [PHP](quickstart-php.md)
+>
+
+In this quickstart, you create and use an Azure Cosmos DB [Gremlin (Graph) API](introduction.md) database by using PHP and the Azure portal.
+
+Azure Cosmos DB is Microsoft's multi-model database service that lets you quickly create and query document, table, key-value, and graph databases, with global distribution and horizontal scale capabilities. Azure Cosmos DB provides five APIs: Core (SQL), MongoDB, Gremlin, Azure Table, and Cassandra.
+
+You must create a separate account to use each API. In this article, you create an account for the Gremlin (Graph) API.
+
+This quickstart walks you through the following steps:
+
+- Use the Azure portal to create an Azure Cosmos DB for Gremlin (Graph) API account and database.
+- Clone a sample Gremlin API PHP console app from GitHub, and run it to populate your database.
+- Use Data Explorer in the Azure portal to query, add, and connect data in your database.
+
+## Prerequisites
+
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] Alternatively, you can [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb) without an Azure subscription.
+- [PHP](https://php.net/) 5.6 or newer installed.
+- [Composer](https://getcomposer.org/download) open-source dependency management tool for PHP installed.
+
+## Create a Gremlin (Graph) database account
+
+First, create a Gremlin (Graph) database account for Azure Cosmos DB.
+
+1. In the [Azure portal](https://portal.azure.com), select **Create a resource** from the left menu.
+
+ :::image type="content" source="../includes/media/cosmos-db-create-dbaccount-graph/create-nosql-db-databases-json-tutorial-0.png" alt-text="Screenshot of Create a resource in the Azure portal.":::
+
+1. On the **New** page, select **Databases** > **Azure Cosmos DB**.
+
+1. On the **Select API Option** page, under **Gremlin (Graph)**, select **Create**.
+
+1. On the **Create Azure Cosmos DB Account - Gremlin (Graph)** page, enter the following required settings for the new account:
+
+ - **Subscription**: Select the Azure subscription that you want to use for this account.
+ - **Resource Group**: Select **Create new**, then enter a unique name for the new resource group.
+ - **Account Name**: Enter a unique name between 3-44 characters, using only lowercase letters, numbers, and hyphens. Your account URI is *gremlin.azure.com* appended to your unique account name.
+ - **Location**: Select the Azure region to host your Azure Cosmos DB account. Use the location that's closest to your users to give them the fastest access to the data.
+
+ :::image type="content" source="../includes/media/cosmos-db-create-dbaccount-graph/azure-cosmos-db-create-new-account.png" alt-text="Screenshot showing the Create Account page for Azure Cosmos DB for a Gremlin (Graph) account.":::
+
+1. For this quickstart, you can leave the other fields and tabs at their default values. Optionally, you can configure more details for the account. See [Optional account settings](#optional-account-settings).
+
+1. Select **Review + create**, and then select **Create**. Deployment takes a few minutes.
+
+1. When the **Your deployment is complete** message appears, select **Go to resource**.
+
+ You go to the **Overview** page for the new Azure Cosmos DB account.
+
+ :::image type="content" source="../includes/media/cosmos-db-create-dbaccount-graph/azure-cosmos-db-graph-created.png" alt-text="Screenshot showing the Azure Cosmos DB Quick start page.":::
+
+### Optional account settings
+
+Optionally, you can also configure the following settings on the **Create Azure Cosmos DB Account - Gremlin (Graph)** page.
+
+- On the **Basics** tab:
+
+ |Setting|Value|Description |
+ ||||
+ |**Capacity mode**|**Provisioned throughput** or **Serverless**|Select **Provisioned throughput** to create an account in [provisioned throughput](../set-throughput.md) mode. Select **Serverless** to create an account in [serverless](../serverless.md) mode.|
+ |**Apply Azure Cosmos DB free tier discount**|**Apply** or **Do not apply**|With Azure Cosmos DB free tier, you get the first 1000 RU/s and 25 GB of storage for free in an account. Learn more about [free tier](https://azure.microsoft.com/pricing/details/cosmos-db/).|
+
+ > [!NOTE]
+ > You can have up to one free tier Azure Cosmos DB account per Azure subscription and must opt-in when creating the account. If you don't see the option to apply the free tier discount, this means another account in the subscription has already been enabled with free tier.
+
+- On the **Global Distribution** tab:
+
+ |Setting|Value|Description |
+ ||||
+ |**Geo-redundancy**|**Enable** or **Disable**|Enable or disable global distribution on your account by pairing your region with a pair region. You can add more regions to your account later.|
+ |**Multi-region Writes**|**Enable** or **Disable**|Multi-region writes capability allows you to take advantage of the provisioned throughput for your databases and containers across the globe.|
+
+ > [!NOTE]
+ > The following options aren't available if you select **Serverless** as the **Capacity mode**:
+ > - **Apply Free Tier Discount**
+ > - **Geo-redundancy**
+ > - **Multi-region Writes**
+
+- Other tabs:
+
+ - **Networking**: Configure [access from a virtual network](../how-to-configure-vnet-service-endpoint.md).
+ - **Backup Policy**: Configure either [periodic](../configure-periodic-backup-restore.md) or [continuous](../provision-account-continuous-backup.md) backup policy.
+ - **Encryption**: Use either a service-managed key or a [customer-managed key](../how-to-setup-cmk.md#create-a-new-azure-cosmos-account).
+ - **Tags**: Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups.
+
+## Add a graph
+
+1. On the Azure Cosmos DB account **Overview** page, select **Add Graph**.
+
+ :::image type="content" source="../includes/media/cosmos-db-create-dbaccount-graph/azure-cosmos-db-add-graph.png" alt-text="Screenshot showing the Add Graph on the Azure Cosmos DB account page.":::
+
+1. Fill out the **New Graph** form. For this quickstart, use the following values:
+
+ - **Database id**: Enter *sample-database*. Database names must be between 1 and 255 characters, and can't contain `/ \ # ?` or a trailing space.
+ - **Database Throughput**: Select **Manual**, so you can set the throughput to a low value.
+ - **Database Max RU/s**: Change the throughput to *400* request units per second (RU/s). If you want to reduce latency, you can scale up throughput later.
+ - **Graph id**: Enter *sample-graph*. Graph names have the same character requirements as database IDs.
+ - **Partition key**: Enter */pk*. All Cosmos DB accounts need a partition key to horizontally scale. To learn how to select an appropriate partition key, see [Use a partitioned graph in Azure Cosmos DB](partitioning.md).
+
+ :::image type="content" source="../includes/media/cosmos-db-create-graph/azure-cosmosdb-data-explorer-graph.png" alt-text="Screenshot showing the Azure Cosmos DB Data Explorer, New Graph page.":::
+
+1. Select **OK**. The new graph database is created.
+
+### Get the connection keys
+
+Get the Azure Cosmos DB account connection keys to use later in this quickstart.
+
+1. On the Azure Cosmos DB account page, select **Keys** under **Settings** in the left navigation.
+
+1. Copy and save the following values to use later in the quickstart:
+
+ - The first part (Azure Cosmos DB account name) of the **.NET SDK URI**.
+ - The **PRIMARY KEY** value.
+
+ :::image type="content" source="media/quickstart-php/keys.png" alt-text="Screenshot that shows the access keys for the Azure Cosmos DB account.":::
++
+## Clone the sample application
+
+Now, switch to working with code. Clone a Gremlin API app from GitHub, set the connection string, and run the app to see how easy it is to work with data programmatically.
+
+1. In a git terminal window, such as git bash, create a new folder named *git-samples*.
+
+ ```bash
+ mkdir "C:\git-samples"
+ ```
+
+1. Switch to the new folder.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+1. Run the following command to clone the sample repository and create a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-php-getting-started.git
+ ```
+
+Optionally, you can now review the PHP code you cloned. Otherwise, go to [Update your connection information](#update-your-connection-information).
+
+### Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. The snippets are all taken from the *connect.php* file in the *C:\git-samples\azure-cosmos-db-graph-php-getting-started* folder.
+
+- The Gremlin `Connection` is initialized at the beginning of the *connect.php* file, using the `$db` object.
+
+ ```php
+ $db = new Connection([
+ 'host' => '<your_server_address>.graphs.azure.com',
+ 'username' => '/dbs/<db>/colls/<coll>',
+ 'password' => 'your_primary_key'
+ ,'port' => '443'
+
+ // Required parameter
+ ,'ssl' => TRUE
+ ]);
+ ```
+
+- A series of Gremlin steps is executed by using the `$db->send($query);` method.
+
+ ```php
+ $query = "g.V().drop()";
+ ...
+ $result = $db->send($query);
+ $errors = array_filter($result);
+ }
+ ```
+
+## Update your connection information
+
+1. Open the *connect.php* file in the *C:\git-samples\azure-cosmos-db-graph-php-getting-started* folder.
+
+1. In the `host` parameter, replace `<your_server_address>` with the Azure Cosmos DB account name value you saved from the Azure portal.
+
+1. In the `username` parameter, replace `<db>` and `<coll>` with your database and graph name. If you used the recommended values of `sample-database` and `sample-graph`, it should look like the following code:
+
+ `'username' => '/dbs/sample-database/colls/sample-graph'`
+
+1. In the `password` parameter, replace `your_primary_key` with the PRIMARY KEY value you saved from the Azure portal.
+
+ The `Connection` object initialization should now look like the following code:
+
+ ```php
+ $db = new Connection([
+ 'host' => 'testgraphacct.graphs.azure.com',
+ 'username' => '/dbs/sample-database/colls/sample-graph',
+ 'password' => '2Ggkr662ifxz2Mg==',
+ 'port' => '443'
+
+ // Required parameter
+ ,'ssl' => TRUE
+ ]);
+ ```
+
+1. Save the *connect.php* file.
+
+## Run the console app
+
+1. In the git terminal window, `cd` to the *azure-cosmos-db-graph-php-getting-started* folder.
+
+ ```git
+ cd "C:\git-samples\azure-cosmos-db-graph-php-getting-started"
+ ```
+
+1. Use the following command to install the required PHP dependencies.
+
+ ```
+ composer install
+ ```
+
+1. Use the following command to start the PHP application.
+
+ ```
+ php connect.php
+ ```
+
+ The terminal window displays the vertices being added to the graph.
+
+ If you experience timeout errors, check that you updated the connection information correctly in [Update your connection information](#update-your-connection-information), and also try running the last command again.
+
+ Once the program stops, press Enter.
+
+<a id="add-sample-data"></a>
+## Review and add sample data
+
+You can now go back to Data Explorer in the Azure portal, see the vertices added to the graph, and add more data points.
+
+1. In your Azure Cosmos DB account in the Azure portal, select **Data Explorer**, expand **sample-database** and **sample-graph**, select **Graph**, and then select **Execute Gremlin Query**.
+
+ :::image type="content" source="./media/quickstart-php/azure-cosmosdb-data-explorer-expanded.png" alt-text="Screenshot that shows Graph selected with the option to Execute Gremlin Query.":::
+
+1. In the **Results** list, notice the new users added to the graph. Select **ben**, and notice that they're connected to **robin**. You can move the vertices around by dragging and dropping, zoom in and out by scrolling the wheel of your mouse, and expand the size of the graph with the double-arrow.
+
+ :::image type="content" source="./media/quickstart-php/azure-cosmosdb-graph-explorer-new.png" alt-text="Screenshot that shows new vertices in the graph in Data Explorer.":::
+
+1. Add a new user. Select the **New Vertex** button to add data to your graph.
+
+ :::image type="content" source="./media/quickstart-php/azure-cosmosdb-data-explorer-new-vertex.png" alt-text="Screenshot that shows the New Vertex pane where you can enter values.":::
+
+1. Enter a label of *person*.
+
+1. Select **Add property** to add each of the following properties. You can create unique properties for each person in your graph. Only the **id** key is required.
+
+ Key | Value | Notes
+ -|-|-
+ **id** | ashley | The unique identifier for the vertex. If you don't specify an id, one is generated for you.
+ **gender** | female |
+ **tech** | java |
+
+ > [!NOTE]
+ > In this quickstart you create a non-partitioned collection. However, if you create a partitioned collection by specifying a partition key during the collection creation, then you need to include the partition key as a key in each new vertex.
+
+1. Select **OK**.
+
+1. Select **New Vertex** again and add another new user.
+
+1. Enter a label of *person*.
+
+1. Select **Add property** to add each of the following properties:
+
+ Key | Value | Notes
+ -|-|-
+ **id** | rakesh | The unique identifier for the vertex. If you don't specify an id, one is generated for you.
+ **gender** | male |
+ **school** | MIT |
+
+1. Select **OK**.
+
+1. Select **Execute Gremlin Query** with the default `g.V()` filter to display all the values in the graph. All the users now show in the **Results** list.
+
+ As you add more data, you can use filters to limit your results. By default, Data Explorer uses `g.V()` to retrieve all vertices in a graph. You can change to a different [graph query](tutorial-query.md), such as `g.V().count()`, to return a count of all the vertices in the graph in JSON format. If you changed the filter, change the filter back to `g.V()` and select **Execute Gremlin Query** to display all the results again.
+
+1. Now you can connect rakesh and ashley. Ensure **ashley** is selected in the **Results** list, then select the edit icon next to **Targets** at lower right.
+
+ :::image type="content" source="./media/quickstart-php/azure-cosmosdb-data-explorer-edit-target.png" alt-text="Screenshot that shows changing the target of a vertex in a graph.":::
+
+1. In the **Target** box, type *rakesh*, and in the **Edge label** box type *knows*, and then select the check mark.
+
+ :::image type="content" source="./media/quickstart-php/azure-cosmosdb-data-explorer-set-target.png" alt-text="Screenshot that shows adding a connection between ashley and rakesh in Data Explorer.":::
+
+1. Now select **rakesh** from the results list, and see that ashley and rakesh are connected.
+
+ :::image type="content" source="./media/quickstart-php/azure-cosmosdb-graph-explorer.png" alt-text="Screenshot that shows two vertices connected in Data Explorer.":::
+
+You've completed the resource creation part of this quickstart. You can continue to add vertices to your graph, modify the existing vertices, or change the queries.
+
+You can review the metrics that Azure Cosmos DB provides, and then clean up the resources you created.
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+This action deletes the resource group and all resources within it, including the Azure Cosmos DB for Gremlin (Graph) account and database.
+
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB for Gremlin (Graph) account and database, clone and run a PHP app, and work with your database using the Data Explorer. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
+
+> [!div class="nextstepaction"]
+> [Query using Gremlin](tutorial-query.md)
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-python.md
+
+ Title: 'Quickstart: Gremlin API with Python - Azure Cosmos DB'
+description: This quickstart shows how to use the Azure Cosmos DB for Gremlin to create a console application with the Azure portal and Python
++
+ms.devlang: python
+ Last updated : 03/29/2021++++
+# Quickstart: Create a graph database in Azure Cosmos DB using Python and the Azure portal
+
+> [!div class="op_single_selector"]
+> * [Gremlin console](quickstart-console.md)
+> * [.NET](quickstart-dotnet.md)
+> * [Java](quickstart-java.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Python](quickstart-python.md)
+> * [PHP](quickstart-php.md)
+>
+
+In this quickstart, you create and manage an Azure Cosmos DB for Gremlin (graph) API account from the Azure portal, and add data by using a Python app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+## Prerequisites
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.
+- [Python 3.6+](https://www.python.org/downloads/) including [pip](https://pip.pypa.io/en/stable/installing/) package installer.
+- [Python Driver for Gremlin](https://github.com/apache/tinkerpop/tree/master/gremlin-python).
+
+ You can also install the Python driver for Gremlin by using the `pip` command line:
+
+ ```bash
+ pip install gremlinpython==3.4.13
+ ```
+
+- [Git](https://git-scm.com/downloads).
+
+> [!NOTE]
+> This quickstart requires a graph database account created after December 20, 2017. Existing accounts will support Python once they're migrated to general availability.
+
+> [!NOTE]
+> We currently recommend using gremlinpython==3.4.13 with Gremlin (Graph) API as we haven't fully tested all language-specific libraries of version 3.5.* for use with the service.
+
+## Create a database account
+
+Before you can create a graph database, you need to create a Gremlin (Graph) database account with Azure Cosmos DB.
++
+## Add a graph
++
+## Clone the sample application
+
+Now let's switch to working with code. Let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```bash
+ mkdir "./git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to a folder to install the sample app.
+
+ ```bash
+ cd "./git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-python-getting-started.git
+ ```
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. The snippets are all taken from the *connect.py* file in the *C:\git-samples\azure-cosmos-db-graph-python-getting-started\\* folder. Otherwise, you can skip ahead to [Update your connection information](#update-your-connection-information).
+
+* The Gremlin `client` is initialized in line 155 in *connect.py*. Make sure to replace `<YOUR_DATABASE>` and `<YOUR_CONTAINER_OR_GRAPH>` with the values of your account's database name and graph name:
+
+ ```python
+ ...
+ client = client.Client('wss://<YOUR_ENDPOINT>.gremlin.cosmosdb.azure.com:443/','g',
+ username="/dbs/<YOUR_DATABASE>/colls/<YOUR_CONTAINER_OR_GRAPH>",
+ password="<YOUR_PASSWORD>")
+ ...
+ ```
+
+* A series of Gremlin steps is declared at the beginning of the *connect.py* file and then executed by using the `client.submitAsync()` method (a sketch of how the results are read back follows this snippet):
+
+ ```python
+ client.submitAsync(_gremlin_cleanup_graph)
+ ```
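+
+The asynchronous call returns a future. As a hedged sketch, assuming `client` is the Gremlin client initialized earlier and using an illustrative query string rather than one taken verbatim from *connect.py*, the results can be read back like this:
+
+```python
+# Submit a Gremlin query as a text string and block until all results arrive.
+callback = client.submitAsync("g.V().count()")
+if callback.result() is not None:
+    results = callback.result().all().result()  # list of values returned by the server
+    print(results)
+```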
+
+## Update your connection information
+
+Now go back to the Azure portal to get your connection information and copy it into the app. These settings enable your app to communicate with your hosted database.
+
+1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Keys**.
+
+ Copy the first portion of the URI value.
+
+ :::image type="content" source="./media/quickstart-python/keys.png" alt-text="View and copy an access key in the Azure portal, Keys page":::
+
+2. Open the *connect.py* file, and in line 155, paste the URI value over `<YOUR_ENDPOINT>`:
+
+ ```python
+ client = client.Client('wss://<YOUR_ENDPOINT>.gremlin.cosmosdb.azure.com:443/','g',
+ username="/dbs/<YOUR_DATABASE>/colls/<YOUR_COLLECTION_OR_GRAPH>",
+ password="<YOUR_PASSWORD>")
+ ```
+
+ The URI portion of the client object should now look similar to this code:
+
+ ```python
+ client = client.Client('wss://test.gremlin.cosmosdb.azure.com:443/','g',
+ username="/dbs/<YOUR_DATABASE>/colls/<YOUR_COLLECTION_OR_GRAPH>",
+ password="<YOUR_PASSWORD>")
+ ```
+
+3. Change the second parameter of the `client` object to replace the `<YOUR_DATABASE>` and `<YOUR_COLLECTION_OR_GRAPH>` strings. If you used the suggested values, the parameter should look like this code:
+
+ `username="/dbs/sample-database/colls/sample-graph"`
+
+ The entire `client` object should now look like this code:
+
+ ```python
+ client = client.Client('wss://test.gremlin.cosmosdb.azure.com:443/','g',
+ username="/dbs/sample-database/colls/sample-graph",
+ password="<YOUR_PASSWORD>")
+ ```
+
+4. On the **Keys** page, use the copy button to copy the PRIMARY KEY and paste it over `<YOUR_PASSWORD>` in the `password=<YOUR_PASSWORD>` parameter.
+
+ The entire `client` object definition should now look like this code:
+ ```python
+ client = client.Client('wss://test.gremlin.cosmosdb.azure.com:443/','g',
+ username="/dbs/sample-database/colls/sample-graph",
+ password="asdb13Fadsf14FASc22Ggkr662ifxz2Mg==")
+ ```
+
+5. Save the *connect.py* file.
+
+## Run the console app
+
+1. In the git terminal window, `cd` to the azure-cosmos-db-graph-python-getting-started folder.
+
+ ```git
+    cd "./git-samples/azure-cosmos-db-graph-python-getting-started"
+ ```
+
+2. In the git terminal window, use the following command to install the required Python packages.
+
+ ```
+ pip install -r requirements.txt
+ ```
+
+3. In the git terminal window, use the following command to start the Python application.
+
+ ```
+ python connect.py
+ ```
+
+ The terminal window displays the vertices and edges being added to the graph.
+
+ If you experience timeout errors, check that you updated the connection information correctly in [Update your connection information](#update-your-connection-information), and also try running the last command again.
+
+ Once the program stops, press Enter, then switch back to the Azure portal in your internet browser.
+
+<a id="add-sample-data"></a>
+## Review and add sample data
+
+After the vertices and edges are inserted, you can go back to Data Explorer in the Azure portal, see the vertices added to the graph, and add more data points.
+
+1. In your Azure Cosmos DB account in the Azure portal, select **Data Explorer**, expand **sample-graph**, select **Graph**, and then select **Apply Filter**.
+
+ :::image type="content" source="./media/quickstart-python/azure-cosmosdb-data-explorer-expanded.png" alt-text="Screenshot shows Graph selected from the A P I with the option to Apply Filter.":::
+
+2. In the **Results** list, notice three new users are added to the graph. You can move the vertices around by dragging and dropping, zoom in and out by scrolling the wheel of your mouse, and expand the size of the graph with the double-arrow.
+
+ :::image type="content" source="./media/quickstart-python/azure-cosmosdb-graph-explorer-new.png" alt-text="New vertices in the graph in Data Explorer in the Azure portal":::
+
+3. Let's add a few new users. Select the **New Vertex** button to add data to your graph.
+
+ :::image type="content" source="./media/quickstart-python/azure-cosmosdb-data-explorer-new-vertex.png" alt-text="Screenshot shows the New Vertex pane where you can enter values.":::
+
+4. Enter a label of *person*.
+
+5. Select **Add property** to add each of the following properties. Notice that you can create unique properties for each person in your graph. Only the id key is required.
+
+ key|value|Notes
+ -|-|-
+ pk|/pk|
+ id|ashley|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
+ gender|female|
+ tech | java |
+
+ > [!NOTE]
+    > In this quickstart, you create a non-partitioned collection. However, if you create a partitioned collection by specifying a partition key during the collection creation, then you need to include the partition key as a key in each new vertex.
+
+6. Select **OK**. You may need to expand your screen to see **OK** on the bottom of the screen.
+
+7. Select **New Vertex** again and add another new user.
+
+8. Enter a label of *person*.
+
+9. Select **Add property** to add each of the following properties:
+
+ key|value|Notes
+ -|-|-
+ pk|/pk|
+ id|rakesh|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
+ gender|male|
+ school|MIT|
+
+10. Select **OK**.
+
+11. Select the **Apply Filter** button with the default `g.V()` filter to display all the values in the graph. All of the users now show in the **Results** list.
+
+ As you add more data, you can use filters to limit your results. By default, Data Explorer uses `g.V()` to retrieve all vertices in a graph. You can change it to a different [graph query](tutorial-query.md), such as `g.V().count()`, to return a count of all the vertices in the graph in JSON format. If you changed the filter, change the filter back to `g.V()` and select **Apply Filter** to display all the results again.
+
+12. Now we can connect rakesh and ashley. Ensure **ashley** is selected in the **Results** list, then select the edit button next to **Targets** on the lower-right side. You may need to widen your window to see the **Properties** area.
+
+ :::image type="content" source="./media/quickstart-python/azure-cosmosdb-data-explorer-edit-target.png" alt-text="Change the target of a vertex in a graph":::
+
+13. In the **Target** box, type *rakesh*; in the **Edge label** box, type *knows*; and then select the check mark.
+
+ :::image type="content" source="./media/quickstart-python/azure-cosmosdb-data-explorer-set-target.png" alt-text="Add a connection between ashley and rakesh in Data Explorer":::
+
+14. Now select **rakesh** from the results list and see that ashley and rakesh are connected.
+
+ :::image type="content" source="./media/quickstart-python/azure-cosmosdb-graph-explorer.png" alt-text="Two vertices connected in Data Explorer":::
+
+That completes the resource creation part of this quickstart. You can continue to add vertices to your graph, modify the existing vertices, or change the queries. Now let's review the metrics Azure Cosmos DB provides, and then clean up the resources.
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, and run a Python app to add data to the graph. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
+
+> [!div class="nextstepaction"]
+> [Query using Gremlin](tutorial-query.md)
cosmos-db Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/resource-manager-template-samples.md
+
+ Title: Resource Manager templates for Azure Cosmos DB for Gremlin
+description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB for Gremlin.
+++++ Last updated : 10/14/2020++++
+# Manage Azure Cosmos DB for Gremlin resources using Azure Resource Manager templates
+
+In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts, databases, and graphs.
+
+This article has examples for API for Gremlin accounts only. To find examples for other API accounts, see the articles on using Azure Resource Manager templates with the Azure Cosmos DB API for [Cassandra](../cassandr).
+
+> [!IMPORTANT]
+>
+> * Account names are limited to 44 characters, all lowercase.
+> * To change the throughput values, redeploy the template with updated RU/s.
+> * When you add or remove locations to an Azure Cosmos DB account, you can't simultaneously modify other properties. These operations must be done separately.
+
+To create any of the Azure Cosmos DB resources below, copy the following example template into a new JSON file. You can optionally create a parameters JSON file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Azure Resource Manager templates, including the [Azure portal](../../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../../azure-resource-manager/templates/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md), and [GitHub](../../azure-resource-manager/templates/deploy-to-azure-button.md).
+
+<a id="create-autoscale"></a>
+
+## Azure Cosmos DB account for Gremlin with autoscale provisioned throughput
+
+This template creates an Azure Cosmos DB for Gremlin account with a database and a graph that use autoscale throughput. This template is also available for one-click deployment from the Azure Quickstart Templates gallery.
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-gremlin-autoscale%2Fazuredeploy.json)
++
+## Next steps
+
+Here are some additional resources:
+
+* [Azure Resource Manager documentation](../../azure-resource-manager/index.yml)
+* [Azure Cosmos DB resource provider schema](/azure/templates/microsoft.documentdb/allversions)
+* [Azure Cosmos DB Quickstart templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.DocumentDB&pageNumber=1&sort=Popular)
+* [Troubleshoot common Azure Resource Manager deployment errors](../../azure-resource-manager/templates/common-deployment-errors.md)
cosmos-db Supply Chain Traceability Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/supply-chain-traceability-solution.md
+
+ Title: Infosys supply chain traceability solution using Azure Cosmos DB for Gremlin
+description: The Infosys solution for traceability in global supply chains uses the Azure Cosmos DB for Gremlin and other Azure services. It provides track-and-trace capability in graph form for finished goods.
+++ Last updated : 10/07/2021++++
+# Solution for supply chain traceability using the Azure Cosmos DB for Gremlin
++
+This article provides an overview of the [traceability graph solution implemented by Infosys](https://azuremarketplace.microsoft.com/marketplace/apps/infosysltd.infosys-traceability-knowledge-graph?tab=Overview). This solution uses the Azure Cosmos DB for Gremlin and other Azure capabilities to provide a track-and-trace capability for finished goods in global supply chains.
+
+In this article, you'll learn:
+
+* What traceability is in the context of a supply chain.
+* The architecture of a global traceability solution delivered through Azure capabilities.
+* How the Azure Cosmos DB graph database helps you track intricate relationships between raw materials and finished goods in a global supply chain.
+* How Azure integration platform services such as Azure API Management and Event Hubs help you integrate diverse application ecosystems for supply chains.
+* How you can get help from Infosys to use this solution for your traceability needs.
+
+## Overview
+
+In the food supply chain, traceability is the ability to *track and trace* a product across the supply chain throughout the product's lifecycle. The supply chain includes supply, manufacturing, and distribution. Traceability is vital for managing food safety, brand, and regulatory exposure.
+
+In the past, some organizations failed to track and trace products effectively in their supply chains. Results included expensive recalls, fines, and consumer health issues.
+
+Traceability solutions had to address the needs of data harmonization and data ingestion at various velocities and veracities. They also had to follow the inventory cycle. These objectives weren't possible with traditional platforms.
+
+## Solution architecture
+
+Supply chain traceability commonly shares patterns in ingesting pallet movements, handling quality incidents, and tracing/analyzing store data. Infosys developed an end-to-end traceability solution that uses Azure application services, integration services, and database services. The solution provides these capabilities:
+
+* Receive streaming data from factories, warehouses, and distribution centers across geographies.
+* Ingest and process parallel stock-movement events.
+* View a knowledge graph that analyzes relationships between raw materials, production batches, pallets of finished goods, multilevel parent/child relationships of pallets (copack/repack), and movement of goods.
+* Access a user portal with a search capability that includes wildcards and specific keywords.
+* Identify impacts of a quality incident, such as affected raw materials, batches, pallets, and locations of pallets.
+* Capture the history of events across multiple markets, including product recall information.
+
+The Infosys traceability solution supports cloud-native, API-first, and data-driven capabilities. The following diagram illustrates the architecture of this solution:
++
+The architecture uses the following Azure services to help with specialized tasks:
+
+* Azure Cosmos DB enables you to scale performance up or down elastically. By using the API for Gremlin, you can create and query complex relationships between raw materials, finished goods, and warehouses.
+* Azure API Management provides APIs for stock movement events to third-party logistics (3PL) providers and warehouse management systems (WMSs).
+* Azure Event Hubs provides the ability to gather large numbers of concurrent events from 3PL providers and WMSs for further processing.
+* Azure Functions (through function apps) processes events and ingests data for Azure Cosmos DB by using the API for Gremlin.
+* Azure Search enables complex searches and the filtering of pallet information.
+* Azure Databricks reads the change feed and creates models in Azure Synapse Analytics for self-service reporting for users in Power BI.
+* Azure App Service and its Web Apps feature enable the deployment of a user portal.
+* Azure Storage stores archived data for long-term regulatory needs.
+
+## Graph database and its data design
+
+The production and distribution of goods require maintaining a complex and dynamic set of relationships. An adaptive data model in the form of a traceability graph allows storing these relationships through all the steps in the supply chain. Here's a high-level visualization of the process:
++
+The preceding diagram is a simplified view of a complex process. However, getting stock-movement information from the factories and warehouses in real time makes it possible to create an elaborate graph that connects all these disparate pieces of information:
+
+1. The traceability process starts when the supplier sends raw materials to the factories. The solution creates the initial nodes (vertices) of the graph and relationships (edges).
+
+1. The finished goods are produced from raw materials and packed into pallets.
+
+1. The pallets are moved to factory warehouses or market warehouses according to customer orders. The warehouses might be owned by the company or by 3PL providers.
+
+1. The pallets are shipped to various other warehouses according to customer orders. Depending on customers' needs, child pallets or child-of-child pallets are created to accommodate the ordered quantity.
+
+ Sometimes, a whole new item is made by mixing multiple items. For example, in a copack scenario that produces a variety pack, sometimes the same item is repacked to smaller or larger quantities in a different pallet as part of a customer order.
+
+ :::image type="content" source="./media/supply-chain-traceability-solution/pallet-relationship.png" alt-text="Pallet relationship in the solution for supply chain traceability." lightbox="./media/supply-chain-traceability-solution/pallet-relationship.png" border="true":::
+
+1. Pallets travel through the supply chain network and eventually reach the customer warehouse. During that process, the pallets can be further broken down or combined with other pallets to produce new pallets to fulfill customer orders.
+
+1. Eventually, the system creates a complex graph that holds relationship information for quality incident management.
+
+ :::image type="content" source="./media/supply-chain-traceability-solution/supply-chain-object-relationship.png" alt-text="Diagram that shows the complete architecture for the supply chain object relationship." lightbox="./media/supply-chain-traceability-solution/supply-chain-object-relationship.png" border="true":::
+
+ These intricate relationships are vital in a quality incident where the system can track and trace pallets across the supply chain. The graph and its traversals provide the required information for this. For example, if there's an issue with one raw material, the graph can show the affected pallets and the current location.
+
+## Next steps
+
+* Learn about [Infosys Integrate+ for Azure](https://azuremarketplace.microsoft.com/marketplace/apps/infosysltd.infosys-integrate-for-azure).
+* To visualize graph data, see the [API for Gremlin visualization solutions](visualization-partners.md).
+* To model your graph data, see the [API for Gremlin modeling solutions](modeling-tools.md).
cosmos-db Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/support.md
+
+ Title: Azure Cosmos DB for Gremlin support and compatibility with TinkerPop features
+description: Learn about the Gremlin language from Apache TinkerPop. Learn which features and steps are available in Azure Cosmos DB and the TinkerPop Graph engine compatibility differences.
++++ Last updated : 07/06/2021++++
+# Azure Cosmos DB for Gremlin graph support and compatibility with TinkerPop features
+
+Azure Cosmos DB supports [Apache Tinkerpop's](https://tinkerpop.apache.org) graph traversal language, known as [Gremlin](https://tinkerpop.apache.org/docs/3.3.2/reference/#graph-traversal-steps). You can use the Gremlin language to create graph entities (vertices and edges), modify properties within those entities, perform queries and traversals, and delete entities.
+
+The Azure Cosmos DB Graph engine closely follows the [Apache TinkerPop](https://tinkerpop.apache.org/docs/current/reference/#graph-traversal-steps) traversal steps specification, but there are implementation differences that are specific to Azure Cosmos DB. In this article, we provide a quick walkthrough of Gremlin and enumerate the Gremlin features that the API for Gremlin supports.
+
+## Compatible client libraries
+
+The following table shows popular Gremlin drivers that you can use against Azure Cosmos DB:
+
+| Download | Source | Getting Started | Supported/Recommended connector version |
+| | | | |
+| [.NET](https://tinkerpop.apache.org/docs/3.4.13/reference/#gremlin-DotNet) | [Gremlin.NET on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-dotnet) | [Create Graph using .NET](quickstart-dotnet.md) | 3.4.13 |
+| [Java](https://mvnrepository.com/artifact/com.tinkerpop.gremlin/gremlin-java) | [Gremlin JavaDoc](https://tinkerpop.apache.org/javadocs/current/full/) | [Create Graph using Java](quickstart-java.md) | 3.4.13 |
+| [Python](https://tinkerpop.apache.org/docs/3.4.13/reference/#gremlin-python) | [Gremlin-Python on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-python) | [Create Graph using Python](quickstart-python.md) | 3.4.13 |
+| [Gremlin console](https://tinkerpop.apache.org/download.html) | [TinkerPop docs](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console) | [Create Graph using Gremlin Console](quickstart-console.md) | 3.4.13 |
+| [Node.js](https://www.npmjs.com/package/gremlin) | [Gremlin-JavaScript on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-javascript) | [Create Graph using Node.js](quickstart-nodejs.md) | 3.4.13 |
+| [PHP](https://packagist.org/packages/brightzone/gremlin-php) | [Gremlin-PHP on GitHub](https://github.com/PommeVerte/gremlin-php) | [Create Graph using PHP](quickstart-php.md) | 3.1.0 |
+| [Go Lang](https://github.com/supplyon/gremcos/) | [Go Lang](https://github.com/supplyon/gremcos/) | | This library is built by external contributors. The Azure Cosmos DB team doesn't offer any support or maintain the library. |
+
+> [!NOTE]
+> Gremlin client driver versions __3.5.*__ and __3.6.*__ have known compatibility issues, so we recommend using the latest supported 3.4.* driver versions listed above.
+> This table will be updated when compatibility issues have been addressed for these newer driver versions.
+
+## Supported Graph Objects
+
+TinkerPop is a standard that covers a wide range of graph technologies. Therefore, it has standard terminology to describe what features are provided by a graph provider. Azure Cosmos DB provides a persistent, high concurrency, writeable graph database that can be partitioned across multiple servers or clusters.
+
+The following table lists the TinkerPop features that are implemented by Azure Cosmos DB:
+
+| Category | Azure Cosmos DB implementation | Notes |
+| | | |
+| Graph features | Provides Persistence and ConcurrentAccess. Designed to support Transactions | Computer methods can be implemented via the Spark connector. |
+| Variable features | Supports Boolean, Integer, Byte, Double, Float, Long, String | Supports primitive types, is compatible with complex types via data model |
+| Vertex features | Supports RemoveVertices, MetaProperties, AddVertices, MultiProperties, StringIds, UserSuppliedIds, AddProperty, RemoveProperty | Supports creating, modifying, and deleting vertices |
+| Vertex property features | StringIds, UserSuppliedIds, AddProperty, RemoveProperty, BooleanValues, ByteValues, DoubleValues, FloatValues, IntegerValues, LongValues, StringValues | Supports creating, modifying, and deleting vertex properties |
+| Edge features | AddEdges, RemoveEdges, StringIds, UserSuppliedIds, AddProperty, RemoveProperty | Supports creating, modifying, and deleting edges |
+| Edge property features | Properties, BooleanValues, ByteValues, DoubleValues, FloatValues, IntegerValues, LongValues, StringValues | Supports creating, modifying, and deleting edge properties |
+
+## Gremlin wire format
+
+Azure Cosmos DB uses the JSON format when returning results from Gremlin operations. For example, the following snippet shows a JSON representation of a vertex *returned to the client* from Azure Cosmos DB:
+
+```json
+ {
+ "id": "a7111ba7-0ea1-43c9-b6b2-efc5e3aea4c0",
+ "label": "person",
+ "type": "vertex",
+ "outE": {
+ "knows": [
+ {
+ "id": "3ee53a60-c561-4c5e-9a9f-9c7924bc9aef",
+ "inV": "04779300-1c8e-489d-9493-50fd1325a658"
+ },
+ {
+ "id": "21984248-ee9e-43a8-a7f6-30642bc14609",
+ "inV": "a8e3e741-2ef7-4c01-b7c8-199f8e43e3bc"
+ }
+ ]
+ },
+ "properties": {
+ "firstName": [
+ {
+ "value": "Thomas"
+ }
+ ],
+ "lastName": [
+ {
+ "value": "Andersen"
+ }
+ ],
+ "age": [
+ {
+ "value": 45
+ }
+ ]
+ }
+ }
+```
+
+The properties used by the JSON format for vertices are described below:
+
+| Property | Description |
+| | |
+| `id` | The ID for the vertex. Must be unique (in combination with the value of `_partition` if applicable). If no value is provided, it will be automatically supplied with a GUID |
+| `label` | The label of the vertex. This property is used to describe the entity type. |
+| `type` | Used to distinguish vertices from non-graph documents |
+| `properties` | Bag of user-defined properties associated with the vertex. Each property can have multiple values. |
+| `_partition` | The partition key of the vertex. Used for [graph partitioning](partitioning.md). |
+| `outE` | This property contains a list of out edges from a vertex. Storing the adjacency information with vertex allows for fast execution of traversals. Edges are grouped based on their labels. |
+
+Each property can store multiple values within an array.
+
+| Property | Description |
+| | |
+| `value` | The value of the property |
+
+And the edge contains the following information to help with navigation to other parts of the graph.
+
+| Property | Description |
+| | |
+| `id` | The ID for the edge. Must be unique (in combination with the value of `_partition` if applicable) |
+| `label` | The label of the edge. This property is optional, and used to describe the relationship type. |
+| `inV` | This property contains a list of in vertices for an edge. Storing the adjacency information with the edge allows for fast execution of traversals. Vertices are grouped based on their labels. |
+| `properties` | Bag of user-defined properties associated with the edge. |
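+
+As a quick illustration of how this adjacency information can be consumed, the following hedged sketch walks the example vertex JSON above in Python and prints each outgoing edge (the dictionary literal simply reuses values from that example):
+
+```python
+# Reuse the example vertex shown above, trimmed to the fields needed here.
+vertex = {
+    "id": "a7111ba7-0ea1-43c9-b6b2-efc5e3aea4c0",
+    "label": "person",
+    "outE": {
+        "knows": [
+            {"id": "3ee53a60-c561-4c5e-9a9f-9c7924bc9aef", "inV": "04779300-1c8e-489d-9493-50fd1325a658"},
+            {"id": "21984248-ee9e-43a8-a7f6-30642bc14609", "inV": "a8e3e741-2ef7-4c01-b7c8-199f8e43e3bc"},
+        ]
+    },
+}
+
+# Print one line per outgoing edge: source vertex, edge label, and target vertex ID.
+for edge_label, edges in vertex["outE"].items():
+    for edge in edges:
+        print(f"{vertex['id']} -[{edge_label}]-> {edge['inV']}")
+```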
+
+## Gremlin steps
+
+Now let's look at the Gremlin steps supported by Azure Cosmos DB. For a complete reference on Gremlin, see [TinkerPop reference](https://tinkerpop.apache.org/docs/3.3.2/reference).
+
+| step | Description | TinkerPop 3.2 Documentation |
+| | | |
+| `addE` | Adds an edge between two vertices | [addE step](https://tinkerpop.apache.org/docs/3.3.2/reference/#addedge-step) |
+| `addV` | Adds a vertex to the graph | [addV step](https://tinkerpop.apache.org/docs/3.3.2/reference/#addvertex-step) |
+| `and` | Ensures that all the traversals return a value | [and step](https://tinkerpop.apache.org/docs/3.3.2/reference/#and-step) |
+| `as` | A step modulator to assign a variable to the output of a step | [as step](https://tinkerpop.apache.org/docs/3.3.2/reference/#as-step) |
+| `by` | A step modulator used with `group` and `order` | [by step](https://tinkerpop.apache.org/docs/3.3.2/reference/#by-step) |
+| `coalesce` | Returns the first traversal that returns a result | [coalesce step](https://tinkerpop.apache.org/docs/3.3.2/reference/#coalesce-step) |
+| `constant` | Returns a constant value. Used with `coalesce`| [constant step](https://tinkerpop.apache.org/docs/3.3.2/reference/#constant-step) |
+| `count` | Returns the count from the traversal | [count step](https://tinkerpop.apache.org/docs/3.3.2/reference/#count-step) |
+| `dedup` | Returns the values with the duplicates removed | [dedup step](https://tinkerpop.apache.org/docs/3.3.2/reference/#dedup-step) |
+| `drop` | Drops the values (vertex/edge) | [drop step](https://tinkerpop.apache.org/docs/3.3.2/reference/#drop-step) |
+| `executionProfile` | Creates a description of all operations generated by the executed Gremlin step | [executionProfile step](execution-profile.md) |
+| `fold` | Acts as a barrier that computes the aggregate of results| [fold step](https://tinkerpop.apache.org/docs/3.3.2/reference/#fold-step) |
+| `group` | Groups the values based on the labels specified| [group step](https://tinkerpop.apache.org/docs/3.3.2/reference/#group-step) |
+| `has` | Used to filter properties, vertices, and edges. Supports `hasLabel`, `hasId`, `hasNot`, and `has` variants. | [has step](https://tinkerpop.apache.org/docs/3.3.2/reference/#has-step) |
+| `inject` | Inject values into a stream| [inject step](https://tinkerpop.apache.org/docs/3.3.2/reference/#inject-step) |
+| `is` | Used to perform a filter using a boolean expression | [is step](https://tinkerpop.apache.org/docs/3.3.2/reference/#is-step) |
+| `limit` | Used to limit number of items in the traversal| [limit step](https://tinkerpop.apache.org/docs/3.3.2/reference/#limit-step) |
+| `local` | Local wraps a section of a traversal, similar to a subquery | [local step](https://tinkerpop.apache.org/docs/3.3.2/reference/#local-step) |
+| `not` | Used to produce the negation of a filter | [not step](https://tinkerpop.apache.org/docs/3.3.2/reference/#not-step) |
+| `optional` | Returns the result of the specified traversal if it yields a result else it returns the calling element | [optional step](https://tinkerpop.apache.org/docs/3.3.2/reference/#optional-step) |
+| `or` | Ensures at least one of the traversals returns a value | [or step](https://tinkerpop.apache.org/docs/3.3.2/reference/#or-step) |
+| `order` | Returns results in the specified sort order | [order step](https://tinkerpop.apache.org/docs/3.3.2/reference/#order-step) |
+| `path` | Returns the full path of the traversal | [path step](https://tinkerpop.apache.org/docs/3.3.2/reference/#path-step) |
+| `project` | Projects the properties as a Map | [project step](https://tinkerpop.apache.org/docs/3.3.2/reference/#project-step) |
+| `properties` | Returns the properties for the specified labels | [properties step](https://tinkerpop.apache.org/docs/3.3.2/reference/#_properties_step) |
+| `range` | Filters to the specified range of values| [range step](https://tinkerpop.apache.org/docs/3.3.2/reference/#range-step) |
+| `repeat` | Repeats the step for the specified number of times. Used for looping | [repeat step](https://tinkerpop.apache.org/docs/3.3.2/reference/#repeat-step) |
+| `sample` | Used to sample results from the traversal | [sample step](https://tinkerpop.apache.org/docs/3.3.2/reference/#sample-step) |
+| `select` | Used to project results from the traversal | [select step](https://tinkerpop.apache.org/docs/3.3.2/reference/#select-step) |
+| `store` | Used for non-blocking aggregates from the traversal | [store step](https://tinkerpop.apache.org/docs/3.3.2/reference/#store-step) |
+| `TextP.startingWith(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property with the beginning of a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
+| `TextP.endingWith(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property with the ending of a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
+| `TextP.containing(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property with the contents of a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
+| `TextP.notStartingWith(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property that doesn't start with a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
+| `TextP.notEndingWith(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property that doesn't end with a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
+| `TextP.notContaining(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property that doesn't contain a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
+| `tree` | Aggregate paths from a vertex into a tree | [tree step](https://tinkerpop.apache.org/docs/3.3.2/reference/#tree-step) |
+| `unfold` | Unroll an iterator as a step| [unfold step](https://tinkerpop.apache.org/docs/3.3.2/reference/#unfold-step) |
+| `union` | Merge results from multiple traversals| [union step](https://tinkerpop.apache.org/docs/3.3.2/reference/#union-step) |
+| `V` | Includes the steps necessary for traversals between vertices and edges: `V`, `E`, `out`, `in`, `both`, `outE`, `inE`, `bothE`, `outV`, `inV`, `bothV`, and `otherV` | [vertex steps](https://tinkerpop.apache.org/docs/3.3.2/reference/#vertex-steps) |
+| `where` | Used to filter results from the traversal. Supports `eq`, `neq`, `lt`, `lte`, `gt`, `gte`, and `between` operators | [where step](https://tinkerpop.apache.org/docs/3.3.2/reference/#where-step) |
+
+The write-optimized engine provided by Azure Cosmos DB supports automatic indexing of all properties within vertices and edges by default. Therefore, queries with filters, range queries, sorting, or aggregates on any property are processed from the index, and served efficiently. For more information on how indexing works in Azure Cosmos DB, see our paper on [schema-agnostic indexing](https://www.vldb.org/pvldb/vol8/p1668-shukla.pdf).
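+
+For example, the following hedged, illustrative query strings, which use property names from the example vertex earlier in this article, all resolve against the automatic index without any extra index configuration (submit them with your Gremlin client):
+
+```python
+# Each filter, range, sort, and aggregate below is served from the automatic index.
+equality_filter = "g.V().has('firstName', 'Thomas')"
+range_filter = "g.V().has('age', gt(40))"
+sorted_people = "g.V().hasLabel('person').order().by('age')"
+person_count = "g.V().hasLabel('person').count()"
+```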
+
+## Behavior differences
+
+* The Azure Cosmos DB Graph engine runs ***breadth-first*** traversal while TinkerPop Gremlin is depth-first. This behavior achieves better performance in a horizontally scalable system like Azure Cosmos DB.
+
+## Unsupported features
+
+* ***[Gremlin Bytecode](https://tinkerpop.apache.org/docs/current/tutorials/gremlin-language-variants/)*** is a programming-language-agnostic specification for graph traversals. Azure Cosmos DB Graph doesn't support it yet. Use `GremlinClient.SubmitAsync()` and pass the traversal as a text string, as shown in the sketch after this list.
+
+* ***`property(set, 'xyz', 1)`*** set cardinality isn't supported today. Use `property(list, 'xyz', 1)` instead. To learn more, see [Vertex properties with TinkerPop](http://tinkerpop.apache.org/docs/current/reference/#vertex-properties).
+
+* The ***`match()` step*** isn't currently available. This step provides declarative querying capabilities.
+
+* ***Objects as properties*** on vertices or edges aren't supported. Properties can only be primitive types or arrays.
+
+* ***Sorting by array properties*** `order().by(<array property>)` isn't supported. Sorting is supported only by primitive types.
+
+* ***Non-primitive JSON types*** aren't supported. Use `string`, `number`, or `true`/`false` types. `null` values aren't supported.
+
+* ***GraphSONv3*** serializer isn't currently supported. Use the `GraphSONv2` Serializer, Reader, and Writer classes in the connection configuration. The results returned by Azure Cosmos DB for Gremlin don't have the same format as the GraphSON format.
+
+* **Lambda expressions and functions** aren't currently supported. This includes the `.map{<expression>}`, the `.by{<expression>}`, and the `.filter{<expression>}` functions. To learn more, and to learn how to rewrite them using Gremlin steps, see [A Note on Lambdas](http://tinkerpop.apache.org/docs/current/reference/#a-note-on-lambdas).
+
+* ***Transactions*** aren't supported because of the distributed nature of the system. Configure an appropriate consistency model on the Gremlin account to "read your own writes", and use optimistic concurrency to resolve conflicting writes.
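+
+Because bytecode and the GraphSON v3 serializer aren't supported, a client typically submits traversals as text strings and configures a GraphSON v2 serializer. The following minimal sketch uses the `gremlinpython` driver; the account name, key, database, and graph values are placeholders:
+
+```python
+from gremlin_python.driver import client, serializer
+
+# Placeholder connection values; replace them with your own account details.
+gremlin_client = client.Client(
+    'wss://<account-name>.gremlin.cosmos.azure.com:443/', 'g',
+    username='/dbs/sample-database/colls/sample-graph',
+    password='<primary-key>',
+    message_serializer=serializer.GraphSONSerializersV2d0())
+
+# Pass the traversal as a text string instead of bytecode.
+result_set = gremlin_client.submit("g.V().has('category', 'A').count()")
+print(result_set.all().result())
+```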
+
+## Known limitations
+
+* **Index utilization for Gremlin queries with mid-traversal `.V()` steps**: Currently, only the first `.V()` call of a traversal will make use of the index to resolve any filters or predicates attached to it. Subsequent calls will not consult the index, which might increase the latency and cost of the query.
+
+Assuming default indexing, a typical read Gremlin query that starts with the `.V()` step uses parameters in its attached filtering steps, such as `.has()` or `.where()`, to optimize the cost and performance of the query. For example:
+
+```java
+g.V().has('category', 'A')
+```
+
+However, when more than one `.V()` step is included in the Gremlin query, the resolution of the data for the query might not be optimal. Take the following query as an example:
+
+```java
+g.V().has('category', 'A').as('a').V().has('category', 'B').as('b').select('a', 'b')
+```
+
+This query will return two groups of vertices based on their property called `category`. In this case, only the first call, `g.V().has('category', 'A')`, will make use of the index to resolve the vertices based on the values of their properties.
+
+A workaround for this query is to use subtraversal steps such as `.map()` and `.union()`, as shown in the following examples:
+
+```java
+// Query workaround using .map()
+g.V().has('category', 'A').as('a').map(__.V().has('category', 'B')).as('b').select('a','b')
+
+// Query workaround using .union()
+g.V().has('category', 'A').fold().union(unfold(), __.V().has('category', 'B'))
+```
+
+You can review the performance of the queries by using the [Gremlin `executionProfile()` step](execution-profile.md).
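+
+For example, appending the step to one of the illustrative queries above returns the execution profile instead of the query results:
+
+```python
+# Submit this string with your Gremlin client to inspect how the query executed.
+profile_query = "g.V().has('category', 'A').executionProfile()"
+```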
+
+## Next steps
+
+* Get started building a graph application [using our SDKs](quickstart-dotnet.md)
+* Learn more about [graph support](introduction.md) in Azure Cosmos DB
cosmos-db Tutorial Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/tutorial-query.md
+
+ Title: How to query graph data in Azure Cosmos DB?
+description: Learn how to query graph data from Azure Cosmos DB using Gremlin queries
+++++ Last updated : 02/16/2022+
+ms.devlang: csharp
+++
+# Tutorial: Query Azure Cosmos DB for Gremlin by using Gremlin
+
+The Azure Cosmos DB [API for Gremlin](introduction.md) supports [Gremlin](https://github.com/tinkerpop/gremlin/wiki) queries. This article provides sample documents and queries to get you started. A detailed Gremlin reference is provided in the [Gremlin support](support.md) article.
+
+This article covers the following tasks:
+
+> [!div class="checklist"]
+> * Querying data with Gremlin
+
+## Prerequisites
+
+For these queries to work, you must have an Azure Cosmos DB account and have graph data in the container. Don't have any of those? Complete the [5-minute quickstart](quickstart-dotnet.md) to create an account and populate your database. You can run the following queries using the [Gremlin console](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console), or your favorite Gremlin driver.
+
+## Count vertices in the graph
+
+The following snippet shows how to count the number of vertices in the graph:
+
+```
+g.V().count()
+```
+
+## Filters
+
+You can perform filters using Gremlin's `has` and `hasLabel` steps, and combine them using `and`, `or`, and `not` to build more complex filters. Azure Cosmos DB provides schema-agnostic indexing of all properties within your vertices and edges for fast queries:
+
+```
+g.V().hasLabel('person').has('age', gt(40))
+```
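+
+The `and`, `or`, and `not` steps mentioned above compose filters from sub-traversals. The following query strings are hedged, illustrative examples that assume the sample graph used elsewhere in this article:
+
+```python
+# Combine filters with boolean steps; submit these strings with your Gremlin client.
+both_filters = "g.V().hasLabel('person').and(__.has('age', gt(40)), __.has('firstName', 'Thomas'))"
+either_filter = "g.V().or(__.has('age', gt(40)), __.has('firstName', 'Thomas'))"
+negated_filter = "g.V().not(__.hasLabel('person'))"
+```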
+
+## Projection
+
+You can project certain properties in the query results using the `values` step:
+
+```
+g.V().hasLabel('person').values('name')
+```
+
+## Find related edges and vertices
+
+So far, we've only seen query operators that work in any database. Graphs are fast and efficient for traversal operations when you need to navigate to related edges and vertices. Let's find all friends of Thomas. We do this by using Gremlin's `outE` step to find all the out-edges from Thomas, then traversing to the in-vertices from those edges using Gremlin's `inV` step:
+
+```cs
+g.V('thomas').outE('knows').inV().hasLabel('person')
+```
+
+The next query performs two hops to find all of Thomas' "friends of friends" by calling `outE` and `inV` twice.
+
+```cs
+g.V('thomas').outE('knows').inV().hasLabel('person').outE('knows').inV().hasLabel('person')
+```
+
+You can build more complex queries and implement powerful graph traversal logic using Gremlin, including mixing filter expressions, performing looping using the `repeat` step, and implementing conditional navigation using the `choose` step; a short sketch using `repeat` follows. Learn more about what you can do with [Gremlin support](support.md)!
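+
+For instance, the "friends of friends" traversal above can also be written with `repeat`. The following hedged sketch submits it through the `gremlinpython` driver, where `gremlin_client` is assumed to be a `Client` configured for your account as shown in the Python quickstart:
+
+```python
+# Two hops expressed with the repeat step, submitted as a query string.
+two_hop = "g.V('thomas').repeat(__.outE('knows').inV().hasLabel('person')).times(2).dedup()"
+friends_of_friends = gremlin_client.submit(two_hop).all().result()
+print(friends_of_friends)
+```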
+
+## Next steps
+
+In this tutorial, you've done the following:
+
+> [!div class="checklist"]
+> * Learned how to query graph data by using Gremlin
+
+You can now proceed to the Concepts section for more information about Azure Cosmos DB.
+
+> [!div class="nextstepaction"]
+> [Global distribution](../distribute-data-globally.md)
cosmos-db Use Regional Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/use-regional-endpoints.md
+
+ Title: Regional endpoints for Azure Cosmos DB Graph database
+description: Learn how to connect to nearest Graph database endpoint for your application
+++++ Last updated : 09/09/2019
+ms.devlang: csharp
+++
+# Regional endpoints for Azure Cosmos DB Graph account
+
+Azure Cosmos DB Graph database is [globally distributed](../distribute-data-globally.md) so applications can use multiple read endpoints. Applications that need write access in multiple locations should enable [multi-region writes](../how-to-multi-master.md) capability.
+
+Reasons to choose more than one region:
+1. **Horizontal read scalability** - as application load increases it may be prudent to route read traffic to different Azure regions.
+2. **Lower latency** - you can reduce network latency overhead of each traversal by routing read and write traffic to the nearest Azure region.
+
+A **data residency** requirement is met by setting an Azure Resource Manager policy on the Azure Cosmos DB account. A customer can limit the regions into which Azure Cosmos DB replicates data.
+
+## Traffic routing
+
+The Azure Cosmos DB Graph database engine runs in multiple regions, each of which contains multiple clusters. Each cluster has hundreds of machines. The Azure Cosmos DB Graph account DNS CNAME *accountname.gremlin.cosmos.azure.com* resolves to the DNS A record of a cluster. A single IP address of a load balancer hides the internal cluster topology.
+
+A regional DNS CNAME record is created for every region of an Azure Cosmos DB Graph account. The format of the regional endpoint is *accountname-region.gremlin.cosmos.azure.com*. The region segment of the regional endpoint is obtained by removing all spaces from the [Azure region](https://azure.microsoft.com/global-infrastructure/regions) name and converting it to lowercase. For example, the `"East US 2"` region for the `"contoso"` global database account would have the DNS CNAME *contoso-eastus2.gremlin.cosmos.azure.com*.
+
+The TinkerPop Gremlin client is designed to work with a single server. An application can use the global writable DNS CNAME for read and write traffic. Region-aware applications should use a regional endpoint for read traffic. Use a regional endpoint for write traffic only if the specific region is configured to accept writes.
+
+> [!NOTE]
+> The Azure Cosmos DB Graph engine can accept write operations in a read region by proxying traffic to the write region. We don't recommend sending writes to a read-only region, because doing so increases traversal latency and is subject to restrictions in the future.
+
+The global database account CNAME always points to a valid write region. During a server-side failover of the write region, Azure Cosmos DB updates the global database account CNAME to point to the new region. If the application can't handle traffic rerouting after the failover, it should use the global database account DNS CNAME.
+
+> [!NOTE]
+> Azure Cosmos DB does not route traffic based on geographic proximity of the caller. It is up to each application to select the right region according to unique application needs.
+
+## Portal endpoint discovery
+
+The easiest way to get the list of regions for an Azure Cosmos DB Graph account is the overview blade in the Azure portal. This approach works for applications that don't change regions often, or that have a way to update the list via application configuration.
++
+The example below demonstrates the general principles of accessing a regional Gremlin endpoint. The application should consider the number of regions to send traffic to and the number of corresponding Gremlin clients to instantiate.
+
+```csharp
+// Example value: Central US, West US and UK West. This can be found in the overview blade of your Azure Cosmos DB for Gremlin account.
+// Look for Write Locations in the overview blade. You can click to copy and paste.
+string[] gremlinAccountRegions = new string[] {"Central US", "West US" ,"UK West"};
+string gremlinAccountName = "PUT-COSMOSDB-ACCOUNT-NAME-HERE";
+string gremlinAccountKey = "PUT-ACCOUNT-KEY-HERE";
+string databaseName = "PUT-DATABASE-NAME-HERE";
+string graphName = "PUT-GRAPH-NAME-HERE";
+
+foreach (string gremlinAccountRegion in gremlinAccountRegions)
+{
+    // Convert preferred read location to the form "[accountname]-[region].gremlin.cosmos.azure.com".
+ string regionalGremlinEndPoint = $"{gremlinAccountName}-{gremlinAccountRegion.ToLowerInvariant().Replace(" ", string.Empty)}.gremlin.cosmos.azure.com";
+
+ GremlinServer regionalGremlinServer = new GremlinServer(
+ hostname: regionalGremlinEndPoint,
+ port: 443,
+ enableSsl: true,
+ username: "/dbs/" + databaseName + "/colls/" + graphName,
+ password: gremlinAccountKey);
+
+ GremlinClient regionalGremlinClient = new GremlinClient(
+ gremlinServer: regionalGremlinServer,
+ graphSONReader: new GraphSON2Reader(),
+ graphSONWriter: new GraphSON2Writer(),
+ mimeType: GremlinClient.GraphSON2MimeType);
+}
+```
+
+## SDK endpoint discovery
+
+An application can use the [Azure Cosmos DB SDK](../nosql/sdk-dotnet-v3.md) to discover read and write locations for the Graph account. These locations can change at any time through manual reconfiguration on the server side or service-managed failover.
+
+The TinkerPop Gremlin SDK doesn't have an API to discover Azure Cosmos DB Graph database account regions. Applications that need runtime endpoint discovery need to host two separate SDKs in the process space.
+
+```csharp
+// Depending on the version and the language of the SDK (.NET vs Java vs Python)
+// the API to get readLocations and writeLocations may vary.
+IDocumentClient documentClient = new DocumentClient(
+ new Uri(cosmosUrl),
+ cosmosPrimaryKey,
+ connectionPolicy,
+ consistencyLevel);
+
+DatabaseAccount databaseAccount = await documentClient.GetDatabaseAccountAsync();
+
+IEnumerable<DatabaseAccountLocation> writeLocations = databaseAccount.WritableLocations;
+IEnumerable<DatabaseAccountLocation> readLocations = databaseAccount.ReadableLocations;
+
+// Pick write or read locations to construct regional endpoints for.
+foreach (DatabaseAccountLocation location in readLocations)
+{
+    // Convert the location's endpoint to the form "[accountname]-[region].gremlin.cosmos.azure.com".
+    string regionalGremlinEndPoint = location.Endpoint
+        .Replace("https://", string.Empty)
+        .Replace("documents.azure.com:443/", "gremlin.cosmos.azure.com");
+
+ // Use code from the previous sample to instantiate Gremlin client.
+}
+```
+
+## Next steps
+* [How to manage database accounts control](../how-to-manage-database-account.md) in Azure Cosmos DB
+* [High availability](../high-availability.md) in Azure Cosmos DB
+* [Global distribution with Azure Cosmos DB - under the hood](../global-dist-under-the-hood.md)
+* [Azure CLI Samples](cli-samples.md) for Azure Cosmos DB
cosmos-db Visualization Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/visualization-partners.md
+
+ Title: Visualize Azure Cosmos DB for Gremlin data using partner solutions
+description: Learn how to integrate Azure Cosmos DB graph data with different third-party visualization solutions.
++++++ Last updated : 07/22/2021++
+# Visualize graph data stored in Azure Cosmos DB for Gremlin with data visualization solutions
+
+You can visualize data stored in Azure Cosmos DB for Gremlin by using various data visualization solutions.
+
+> [!IMPORTANT]
+> Solutions mentioned in this article are for information purposes only; ownership lies with the individual solution owners. We recommend that you evaluate these solutions thoroughly and select the one most suitable for you.
+
+## Linkurious Enterprise
+
+[Linkurious Enterprise](https://linkurio.us/product/) uses graph technology and data visualization to turn complex datasets into interactive visual networks. The platform connects to your data sources and enables investigators to seamlessly navigate across billions of entities and relationships. The result is a new ability to detect suspicious relationships without juggling with queries or tables.
+
+The interactive interface of Linkurious Enterprise offers an easy way to investigate complex data. You can search for specific entities, expand connections to uncover hidden relationships, and apply layouts of your choice to untangle complex networks. Linkurious Enterprise is now compatible with Azure Cosmos DB for Gremlin. It's suitable for end-to-end graph visualization scenarios and supports read and write capabilities from the user interface. You can request a [demo of Linkurious with Azure Cosmos DB](https://linkurio.us/contact/)
++
+<b>Figure:</b> Linkurious Enterprise visualization flow
+### Useful links
+
+* [Product details](https://linkurio.us/product/)
+* [Documentation](https://doc.linkurio.us/)
+* [Demo](https://linkurious.com/demo/)
+* [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/linkurious.lke_st?tab=Overview)
+
+## Cambridge Intelligence
+
+Azure Cosmos DB supports the following two graph visualization toolkits from [Cambridge Intelligence](https://cambridge-intelligence.com/products/):
+
+* [KeyLines for JavaScript developers](https://cambridge-intelligence.com/keylines/)
+
+* [Re-Graph for React developers](https://cambridge-intelligence.com/regraph/)
++
+<b>Figure:</b> KeyLines visualization example at various levels of detail.
+
+These toolkits let you design high-performance graph visualization and analysis applications. They harness powerful Web Graphics Library (WebGL) rendering and carefully crafted code to give users a fast and insightful visualization experience. These tools are compatible with any browser, device, server, or database, and come with step-by-step tutorials, fully documented APIs, and interactive demos.
++
+<b>Figure:</b> Re-Graph visualization example at various levels of details
+### Useful links
+
+* [Try the toolkits](https://cambridge-intelligence.com/try/)
+* [KeyLines technology overview](https://cambridge-intelligence.com/keylines/technology/)
+* [Re-Graph technology overview](https://cambridge-intelligence.com/regraph/technology/)
+* [Graph visualization use cases](https://cambridge-intelligence.com/use-cases/)
+
+## Tom Sawyer
+
+[Tom Sawyer Perspectives](https://www.tomsawyer.com/perspectives/) is a robust platform for building enterprise-grade graph data visualization and analysis applications. It's a low-code graph and data visualization development platform that includes an integrated design and preview interface and extensive API libraries. The platform integrates enterprise data sources with powerful graph visualization, layout, and analysis technology to solve big data problems.
+
+Perspectives enables developers to quickly develop production-quality, data-oriented visualization applications. Two graphic modules, the "Designer" and the "Previewer," are used to build applications to visualize and analyze the specific data that drives each project. When used together, the Designer and Previewer provide an efficient round-trip process that dramatically speeds up application development. To visualize Azure Cosmos DB for Gremlin data using this platform, request a [free 60-day evaluation](https://www.tomsawyer.com/get-started) of this tool.
++
+<b>Figure:</b> Tom Sawyer Perspectives in action
+
+[Tom Sawyer Graph Database Browser](https://www.tomsawyer.com/graph-database-browser/) makes it easy to visualize and analyze data in Azure Cosmos DB for Gremlin. The Graph Database Browser helps you see and understand connections in your data without extensive knowledge of the query language or the schema. You can manually define the schema for your project or use schema extraction to create it. As a result, even less technical users can interact with the data by loading the neighbors of selected nodes and building the visualization in whatever direction they need. Advanced users can execute queries using Gremlin, Cypher, or SPARQL to gain other insights. After you define the schema, you can load the Azure Cosmos DB data into the Perspectives model. With an integrator definition, you can specify the location and configuration of the Gremlin endpoint. You can then bind elements from the Azure Cosmos DB data source to elements in the Perspectives model and visualize your data.
+
+Users of all skill levels can take advantage of five unique graph layouts to display the graph in a way that provides the most meaning. And there are built-in centrality, clustering, and path-finding analyses to reveal previously unseen patterns. Using these techniques, organizations can identify critical patterns in areas like fraud detection, customer intelligence, and cybersecurity. Pattern recognition is very important for network analysts in areas such as general IT and network management, logistics, legacy system migration, and business transformation. Try a live demo of Tom Sawyer Graph Database Browser.
++
+<b>Figure:</b> Tom Sawyer Database Browser's visualization capabilities
+### Useful links
+
+* [Documentation](https://www.tomsawyer.com/graph-database-browser/)
+
+* [Trial for Tom Sawyer Perspectives](https://www.tomsawyer.com/get-started)
+
+* [Live Demo for Tom Sawyer Graph Database Browser](https://support.tomsawyer.com/demonstrations/graph.database.browser.demo/)
+
+## Graphistry
+
+Graphistry automatically transforms your data into interactive, visual investigation maps built for the needs of analysts. It can quickly surface relationships between events and entities without having to write queries or wrangle data. You can harness your data without worrying about scale. From security, fraud, and IT investigations to 360-degree views of customers and supply chains, Graphistry turns the potential of your data into human insight and value.
++
+<b>Figure:</b> Graphistry Visualization snapshot
+
+With Graphistry's GPU client/cloud technology, you can do interactive visualization. By using a standard browser and the cloud, you can use all the data you want and still remain fast, responsive, and interactive. If you want to run it on your own hardware, it's as easy as installing Docker. That way you get the analytical power of GPUs without having to think about GPUs.
++
+<b>Figure:</b> Graphistry in action
+
+### Useful links
+
+* [Documentation](https://www.graphistry.com/docs)
+
+* [Video Guides](https://www.graphistry.com/videos)
+
+* [Deploy on Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/graphistry.graphistry-core-2-24-9)
+
+## Graphlytic
+
+Graphlytic is a highly customizable web application for graph visualization and analysis. Users can interactively explore the graph, look for patterns with the Gremlin language, or use filters to find answers to any graph question. Graph rendering is done with the 'Cytoscape.js' library, which allows Graphlytic to render tens of thousands of nodes and hundreds of thousands of relationships at once.
+
+Graphlytic is compatible with Azure Cosmos DB and can be deployed to Azure in minutes. Graphlytic's UI can be customized and extended in many ways: for instance, the default [visualization configuration](https://graphlytic.biz/doc/latest/Visualization_Settings.html), [data schema](https://graphlytic.biz/doc/latest/Data_Schema.html), [style mappings](https://graphlytic.biz/doc/latest/Style_Mappers.html), [virtual properties](https://graphlytic.biz/doc/latest/Virtual_properties.html) in the visualization, or custom-implemented [widgets](https://graphlytic.biz/doc/latest/Widgets.html) that can enhance the visualization features with bespoke reports or integrations.
+
+The following are two example scenarios:
+
+* **IT Management use case**
+Companies that run IT operations on their own infrastructure, telcos, and IP providers all need solid network documentation and functional configuration management. Impact analyses describing the interdependencies among network elements (active and passive) are developed to overcome blackouts, which cause significant financial losses, or even single outages that cause low or no availability of service. Bottlenecks and single points of failure are determined and solved, and endpoint as well as route redundancies are implemented.
+Graphlytic property graph visualization is a perfect enabler for all of the points mentioned above: network documentation, network configuration management, impact analysis, and asset management. It stores and depicts all relevant network configuration information in one place, bringing new value to IT managers and field technicians.
+
+ :::image type="content" source="./media/visualization-partners/graphlytic/it-management.gif" alt-text="Graphlytic IT Management use case demo" :::
+
+<b>Figure:</b> Graphlytic IT management use case
+
+* **Anti-fraud use case**
+Fraud patterns are well known to every insurance company, bank, and e-commerce enterprise. Modern fraudsters build sophisticated fraud rings and schemes that are hard to unveil with traditional tools, and fraud that isn't detected properly and on time can cause serious losses. On the other hand, traditional red-flag systems with overly strict criteria must be adjusted to eliminate false positive indicators, which would otherwise lead to overwhelming fraud indications. Great amounts of time are spent trying to detect complex fraud, paralyzing investigators in their daily tasks.
+The basic idea behind Graphlytic is that the human eye can distinguish and find patterns in a graphical form much more easily than in a table or data set. This means that an anti-fraud analyst can capture fraud schemes within a graph visualization more easily and quickly than with traditional tools alone.
+
+ :::image type="content" source="./media/visualization-partners/graphlytic/antifraud.gif" alt-text="Graphlytic Fraud detection use case demo":::
+
+<b>Figure:</b> Graphlytic Fraud detection use case demo
+
+### Useful links
+
+* [Documentation](https://graphlytic.biz/doc/)
+* [Free Online Demo](https://graphlytic.biz/demo)
+* [Blog](https://graphlytic.biz/blog)
+* [REST API documentation](https://graphlytic.biz/doc/latest/REST_API.html)
+* [ETL job drivers & examples](https://graphlytic.biz/doc/latest/ETL_jobs.html)
+* [SMTP Email Server Integration](https://graphlytic.biz/doc/latest/SMTP_Email_Server_Connection.html)
+* [Geo Map Server Integration](https://graphlytic.biz/doc/latest/Geo_Map_Server_Integration.html)
+* [Single Sign-on Configuration](https://graphlytic.biz/doc/latest/Single_sign-on.html)
+
+## yWorks
+
+yWorks specializes in the development of professional software solutions that enable the clear visualization of graphs, diagrams, and networks. yWorks has brought together efficient data structures, complex algorithms, and advanced techniques that provide excellent user interaction on a multitude of target platforms. This allows the user to experience highly versatile and sophisticated diagram visualization in applications across many diverse areas.
+
+Azure Cosmos DB can be queried for data using Gremlin, an efficient graph traversal language. The user can query the database for the stored entities and use the relationships to traverse the connected neighborhood. This approach requires in-depth technical knowledge of the database itself and of the Gremlin query language to explore the stored data. With yWorks visualization, by contrast, you can visually explore the Azure Cosmos DB data, identify significant structures, and get a better understanding of relationships. Besides the visual exploration, you can also interactively edit the stored data by modifying the diagram, without any knowledge of the associated query language such as Gremlin. In this way, it provides high-quality visualization and can analyze large data sets from Azure Cosmos DB. You can use yFiles to add visualization capabilities to your own applications, dashboards, and reports, or to create new, white-label apps and tools for both in-house and customer-facing products.
++
+<b>Figure:</b> yWorks visualization snapshot
+
+With yWorks, you can create meaningful visualizations that help users gain insights into the data quickly and easily. Build interactive user interfaces that match your company's corporate design and easily connect to existing infrastructure and services. Use highly sophisticated automatic graph layouts to generate clear visualizations of the data hidden in your Azure Cosmos DB account. Efficient implementations of the most important graph analysis algorithms enable the creation of responsive user interfaces that highlight the information the user is interested in or needs to be aware of. Use yFiles to create interactive apps that work on desktop and mobile devices alike.
+
+Typical use-cases and data models include:
+
+* Social networks, money laundering data, and cash-flow networks, where similar entities are connected to each other
+* Process data, where entities are being processed and move from one state to another
+* Organizational charts and networks, showing team hierarchies as well as majority ownership dependencies and relationships between companies or customers
+* Data lineage and compliance data that can be visualized, reviewed, and audited
+* Computer network logs, website logs, and customer journey logs
+* Knowledge graphs, stored as triplets and in other formats
+* Product lifecycle management data
+* Bill of materials lists and supply chain data
+
+### Useful links
+
+* [Pricing](https://www.yworks.com/products/yfiles-for-html/pricing)
+* [Visualizing a Microsoft Azure Cosmos DB](https://www.yworks.com/pages/visualizing-a-microsoft-azure-cosmos-db)
+* [yFiles - the diagramming library](https://www.yworks.com/yfiles-overview)
+* [yWorks - Demos](https://www.yworks.com/products/yfiles/demos)
+
+## Next steps
+
+* [Azure Cosmos DB - API for Gremlin Pricing](../how-pricing-works.md)
cosmos-db Hierarchical Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md
description: Learn about subpartitioning in Azure Cosmos DB, how to use the feat
-+ Last updated 05/09/2022 # Hierarchical partition keys in Azure Cosmos DB (preview) Azure Cosmos DB distributes your data across logical and physical partitions based on your partition key to enable horizontal scaling. With hierarchical partition keys, or subpartitioning, you can now configure up to a three-level hierarchy for your partition keys to further optimize data distribution and enable higher scale.
If you use synthetic keys today or have scenarios where partition keys can excee
Suppose you have a multi-tenant scenario where you store event information for users in each tenant. This event information could include event occurrences such as sign-in, clickstream, or payment events.
-In a real world scenario, some tenants can grow large with thousands of users, while the many other tenants are smaller with a few users. Partitioning by **/TenantId** may lead to exceeding Cosmos DB's 20-GB storage limit on a single logical partition, while partitioning by **/UserId** will make all queries on a tenant cross-partition. Both approaches have significant downsides.
+In a real-world scenario, some tenants can grow large with thousands of users, while many other tenants are smaller with a few users. Partitioning by **/TenantId** may lead to exceeding Azure Cosmos DB's 20-GB storage limit on a single logical partition, while partitioning by **/UserId** will make all queries on a tenant cross-partition. Both approaches have significant downsides.
Using a synthetic partition key that combines **TenantId** and **UserId** adds complexity to the application. Additionally, the synthetic partition key queries for a tenant will still be cross-partition, unless all users are known and specified in advance.
-With hierarchical partition keys, we can partition first on **TenantId**, and then **UserId**. We can even partition further down to another level, such as **SessionId**, as long as the overall depth doesn't exceed three levels. When a physical partition exceeds 50 GB of storage, Cosmos DB will automatically split the physical partition so that roughly half of the data will be on one physical partition, and half on the other. Effectively, subpartitioning means that a single TenantId can exceed 20 GB of data, and it's possible for a TenantId's data to span multiple physical partitions.
+With hierarchical partition keys, we can partition first on **TenantId**, and then **UserId**. We can even partition further down to another level, such as **SessionId**, as long as the overall depth doesn't exceed three levels. When a physical partition exceeds 50 GB of storage, Azure Cosmos DB will automatically split the physical partition so that roughly half of the data will be on one physical partition, and half on the other. Effectively, subpartitioning means that a single TenantId can exceed 20 GB of data, and it's possible for a TenantId's data to span multiple physical partitions.
Queries that specify either the **TenantId**, or both **TenantId** and **UserId** will be efficiently routed to only the subset of physical partitions that contain the relevant data. Specifying the full or prefix subpartitioned partition key path effectively avoids a full fan-out query. For example, if the container had 1000 physical partitions, but a particular **TenantId** was only on five of them, the query would only be routed to the much smaller number of relevant physical partitions.
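
As a rough sketch of what subpartitioning looks like at container-creation time (assuming a .NET SDK v3 preview version that accepts a list of partition key paths; the endpoint, key, database, and container names below are hypothetical placeholders, and the exact constructor may differ by preview version):

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Cosmos;

// Sketch only: endpoint, key, and names are hypothetical placeholders.
CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>");
Database database = await client.CreateDatabaseIfNotExistsAsync("eventsdb");

// Up to three partition key paths, ordered from the top level down.
ContainerProperties containerProperties = new ContainerProperties(
    "UserEvents",
    new List<string> { "/TenantId", "/UserId", "/SessionId" });

Container container = await database.CreateContainerIfNotExistsAsync(containerProperties, throughput: 400);
```

Once the container exists, items whose documents carry `TenantId`, `UserId`, and `SessionId` values are placed according to the full three-level key, and queries that supply a key prefix (for example, just `TenantId`) can be routed to the relevant physical partitions.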
For more information, see [Azure Cosmos DB emulator](./local-emulator.md).
* In the Data Explorer in the portal, you currently can't view documents in a container with hierarchical partition keys. You can read or edit these documents with the supported .NET v3 or Java v4 SDK version\[s\]. * You can only specify hierarchical partition keys up to three layers in depth. * Hierarchical partition keys can currently only be enabled on new containers. The desired partition key paths must be specified at the time of container creation and can't be changed later.
-* Hierarchical partition keys are currently supported only for SQL API accounts (API for MongoDB and Cassandra API aren't currently supported).
+* Hierarchical partition keys are currently supported only for API for NoSQL accounts (API for MongoDB and Cassandra aren't currently supported).
## Next steps
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md
Title: High availability in Azure Cosmos DB
-description: This article describes how to build a highly available solution using Cosmos DB
+description: This article describes how to build a highly available solution using Azure Cosmos DB
+ Last updated 02/24/2022
-# Achieve high availability with Cosmos DB
+# Achieve high availability with Azure Cosmos DB
-To build a solution with high-availability, you have to evaluate the reliability characteristics of all its components. Cosmos DB is designed to provide multiple features and configuration options to achieve high availability for all solutions' availability needs.
+To build a solution with high-availability, you have to evaluate the reliability characteristics of all its components. Azure Cosmos DB is designed to provide multiple features and configuration options to achieve high availability for all solutions' availability needs.
-We'll use the terms **RTO** (Recovery Time Objective), to indicate the time between the beginning of an outage impacting Cosmos DB and the recovery to full availability, and **RPO** (Recovery Point Objective), to indicate the time between the last write correctly restored and the time of the beginning of the outage affecting Cosmos DB.
+We'll use the terms **RTO** (Recovery Time Objective), to indicate the time between the beginning of an outage impacting Azure Cosmos DB and the recovery to full availability, and **RPO** (Recovery Point Objective), to indicate the time between the last write correctly restored and the time of the beginning of the outage affecting Azure Cosmos DB.
> [!NOTE]
-> Expected and maximum RPOs and RTOs depend on the kind of outage that Cosmos DB is experiencing. For instance, an outage of a single node will have different expected RTO and RPO than a whole region outage.
+> Expected and maximum RPOs and RTOs depend on the kind of outage that Azure Cosmos DB is experiencing. For instance, an outage of a single node will have different expected RTO and RPO than a whole region outage.
-This article details the events that can affect Cosmos DB availability and the corresponding Cosmos DB configuration options to achieve the availability characteristics required by your solution.
+This article details the events that can affect Azure Cosmos DB availability and the corresponding Azure Cosmos DB configuration options to achieve the availability characteristics required by your solution.
## Replica maintenance
-Cosmos DB is a managed multi-tenant service that manages all details of individual compute nodes transparently. Users don't have to worry about any kind of patching and planned maintenance. Cosmos DB guarantees SLAs for availability and P99 latency through all automatic maintenance operations performed by the system.
+Azure Cosmos DB is a managed multi-tenant service that manages all details of individual compute nodes transparently. Users don't have to worry about any kind of patching and planned maintenance. Azure Cosmos DB guarantees SLAs for availability and P99 latency through all automatic maintenance operations performed by the system.
Refer to the [SLAs section](#slas) for the guaranteed availability SLAs. ## Replica outages
-Replica outages refer to outages of individual nodes in a Cosmos DB cluster deployed in an Azure region.
-Cosmos DB automatically mitigates replica outages by guaranteeing at least three replicas of your data in each Azure region for your account within a four replica quorum.
+Replica outages refer to outages of individual nodes in an Azure Cosmos DB cluster deployed in an Azure region.
+Azure Cosmos DB automatically mitigates replica outages by guaranteeing at least three replicas of your data in each Azure region for your account within a four replica quorum.
This results in RTO = 0 and RPO = 0, for individual node outages, with no application changes or configurations required.
-In many Azure regions, it's possible to distribute your Cosmos DB cluster across **availability zones**, which results increased SLAs, as availability zones are physically separate and provide distinct power source, network, and cooling. See [Availability Zones](/azure/architecture/reliability/architect).
-When a Cosmos DB account is deployed using availability zones, Cosmos DB provides RTO = 0 and RPO = 0 even in a zone outage.
+In many Azure regions, it's possible to distribute your Azure Cosmos DB cluster across **availability zones**, which results in increased SLAs, as availability zones are physically separate and provide distinct power sources, network, and cooling. See [Availability Zones](/azure/architecture/reliability/architect).
+When an Azure Cosmos DB account is deployed using availability zones, Azure Cosmos DB provides RTO = 0 and RPO = 0 even in a zone outage.
-When users deploy in a single Azure region, with no extra user input, Cosmos DB is resilient to node outages. Enabling redundancy across availability zones makes Cosmos DB resilient to zone outages at the cost of increased charges. Both SLAs and price are reported in the [SLAs section](#slas).
+When users deploy in a single Azure region, with no extra user input, Azure Cosmos DB is resilient to node outages. Enabling redundancy across availability zones makes Azure Cosmos DB resilient to zone outages at the cost of increased charges. Both SLAs and price are reported in the [SLAs section](#slas).
-Zone redundancy can only be configured when adding a new region to an Azure Cosmos account. For existing regions, zone redundancy can be enabled by removing the region then adding it back with the zone redundancy enabled. For a single region account, this requires adding a region to temporarily fail over to, then removing and adding the desired region with zone redundancy enabled.
+Zone redundancy can only be configured when adding a new region to an Azure Cosmos DB account. For existing regions, zone redundancy can be enabled by removing the region then adding it back with the zone redundancy enabled. For a single region account, this requires adding a region to temporarily fail over to, then removing and adding the desired region with zone redundancy enabled.
-By default, a Cosmos DB account doesn't use multiple availability zones. You can enable deployment across multiple availability zones in the following ways:
+By default, an Azure Cosmos DB account doesn't use multiple availability zones. You can enable deployment across multiple availability zones in the following ways:
* [Azure portal](how-to-manage-database-account.md#addremove-regions-from-your-database-account)
-If your Azure Cosmos account is distributed across *N* Azure regions, there will be *N* x 4 copies of all your data. For a more detailed overview of data distribution, see [Global data distribution under the hood](global-dist-under-the-hood.md). Having an Azure Cosmos account in more than 2 regions improves the availability of your application and provides low latency across the associated regions.
+If your Azure Cosmos DB account is distributed across *N* Azure regions, there will be *N* x 4 copies of all your data. For a more detailed overview of data distribution, see [Global data distribution under the hood](global-dist-under-the-hood.md). Having an Azure Cosmos DB account in more than 2 regions improves the availability of your application and provides low latency across the associated regions.
* [Azure CLI](sql/manage-with-cli.md#add-or-remove-regions) * [Azure Resource Manager templates](./manage-with-templates.md)
-Refer to [Global distribution with Azure Cosmos DB- under the hood](./global-dist-under-the-hood.md) for more information on how Cosmos DB replicates data in each region.
+Refer to [Global distribution with Azure Cosmos DB- under the hood](./global-dist-under-the-hood.md) for more information on how Azure Cosmos DB replicates data in each region.
## Region outages
-Region outages refer to outages that affect all Cosmos DB nodes in an Azure region, across all availability zones.
-In the rare cases of region outages, Cosmos DB can be configured to support various outcomes of durability and availability.
+Region outages refer to outages that affect all Azure Cosmos DB nodes in an Azure region, across all availability zones.
+In the rare cases of region outages, Azure Cosmos DB can be configured to support various outcomes of durability and availability.
### Durability
-When a Cosmos DB account is deployed in a single region, generally no data loss occurs and data access is restored after Cosmos DB services recovers in the affected region. Data loss may occur only with an unrecoverable disaster in the Cosmos DB region.
+When an Azure Cosmos DB account is deployed in a single region, generally no data loss occurs and data access is restored after the Azure Cosmos DB service recovers in the affected region. Data loss may occur only with an unrecoverable disaster in the Azure Cosmos DB region.
To protect against complete data loss that may result from catastrophic disasters in a region, Azure Cosmos DB provides 2 different backup modes: - [Continuous backups](./continuous-backup-restore-introduction.md) ensure the backup is taken in each region every 100 seconds and provide the ability to restore your data to any desired point in time with second granularity. In each region, the backup is dependent on the data committed in that region. - [Periodic backups](./configure-periodic-backup-restore.md) take full backups of all partitions from all containers under your account, with no synchronization across partitions. The minimum backup interval is 1 hour.
-When a Cosmos DB account is deployed in multiple regions, data durability depends on the consistency level configured on the account. The following table details, for all consistency levels, the RPO of Cosmos DB account deployed in at least 2 regions.
+When an Azure Cosmos DB account is deployed in multiple regions, data durability depends on the consistency level configured on the account. The following table details, for all consistency levels, the RPO of an Azure Cosmos DB account deployed in at least two regions.
|**Consistency level**|**RPO in case of region outage**| |||
For multi-region accounts, the minimum value of *K* and *T* is 100,000 write ope
Refer to [Consistency levels](./consistency-levels.md) for more information on the differences between consistency levels. ### Availability
-If your solution requires continuous availability during region outages, Cosmos DB can be configured to replicate your data across multiple regions and to transparently fail over to available regions when required.
+If your solution requires continuous availability during region outages, Azure Cosmos DB can be configured to replicate your data across multiple regions and to transparently fail over to available regions when required.
Single-region accounts may lose availability following a regional outage. To ensure high availability at all times it's recommended to set up your Azure Cosmos DB account with **a single write region and at least a second (read) region** and enable **Service-Managed failover**.
-Service-managed failover allows Cosmos DB to fail over the write region of multi-region account, in order to preserve availability at the cost of data loss as per [durability section](#durability). Regional failovers are detected and handled in the Azure Cosmos DB client. They don't require any changes from the application.
+Service-managed failover allows Azure Cosmos DB to fail over the write region of a multi-region account, in order to preserve availability at the cost of the data loss described in the [durability section](#durability). Regional failovers are detected and handled in the Azure Cosmos DB client. They don't require any changes from the application.
Refer to [How to manage an Azure Cosmos DB account](./how-to-manage-database-account.md) for the instructions on how to enable multiple read regions and service-managed failover. > [!IMPORTANT]
-> It is strongly recommended that you configure the Azure Cosmos accounts used for production workloads to **enable service-managed failover**. This enables Cosmos DB to failover the account databases to available regions automatically. In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover will not succeed due to lack of region connectivity.
+> It is strongly recommended that you configure the Azure Cosmos DB accounts used for production workloads to **enable service-managed failover**. This enables Azure Cosmos DB to fail over the account databases to available regions automatically. In the absence of this configuration, the account will experience loss of write availability for the entire duration of the write region outage, as manual failover will not succeed due to lack of region connectivity.
### Multiple write regions
-Azure Cosmos DB can be configured to accept writes in multiple regions. This is useful to reduce write latency in geographically distributed applications. When a Cosmos DB account is configured for multiple write regions, strong consistency isn't supported and write conflicts may arise. Refer to [Conflict types and resolution policies when using multiple write regions](./conflict-resolution-policies.md) for more information on how to resolve conflicts in multiple write region configurations.
+Azure Cosmos DB can be configured to accept writes in multiple regions. This is useful to reduce write latency in geographically distributed applications. When an Azure Cosmos DB account is configured for multiple write regions, strong consistency isn't supported and write conflicts may arise. Refer to [Conflict types and resolution policies when using multiple write regions](./conflict-resolution-policies.md) for more information on how to resolve conflicts in multiple write region configurations.
Given the internal Azure Cosmos DB architecture, using multiple write regions doesn't guarantee write availability during a region outage. The best configuration to achieve high availability during a region outage is single write region with service-managed failover. #### Conflict-resolution region
-When a Cosmos DB account is configured with multi-region writes, one of the regions will act as an arbiter in case of write conflicts. When such conflicts happen, they're routed to this region for consistent resolution.
+When an Azure Cosmos DB account is configured with multi-region writes, one of the regions will act as an arbiter in case of write conflicts. When such conflicts happen, they're routed to this region for consistent resolution.
### What to expect during a region outage Clients of single-region accounts will experience loss of read and write availability until the service is restored.
Multi-region accounts will experience different behaviors depending on the follo
| Configuration | Outage | Availability impact | Durability impact| What to do | | -- | -- | -- | -- | -- | | Single write region | Read region outage | All clients will automatically redirect reads to other regions. No read or write availability loss for all configurations, except 2 regions with strong consistency which loses write availability until the service is restored or, if **service-managed failover** is enabled, the region is marked as failed and a failover occurs. | No data loss. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. |
-| Single write region | Write region outage | Clients will redirect reads to other regions. <p/> **Without service-managed failover**, clients will experience write availability loss, until write availability is restored automatically when the outage ends. <p/> **With service-managed failover** clients will experience write availability loss until the services manages a failover to a new write region selected according to your preferences. | If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. Accounts using SQL APIs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
-| Multiple write regions | Any regional outage | Possibility of temporary write availability loss, analogously to single write region with service-managed failover. The failover of the [conflict-resolution region](#conflict-resolution-region) may also cause a loss of write availability if a high number of conflicting writes happen at the time of the outage. | Recently updated data in the failed region may be unavailable in the remaining active regions, depending on the selected [consistency level](consistency-levels.md). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for SQL API accounts, and Last Write Wins for accounts using other APIs. |
+| Single write region | Write region outage | Clients will redirect reads to other regions. <p/> **Without service-managed failover**, clients will experience write availability loss until write availability is restored automatically when the outage ends. <p/> **With service-managed failover**, clients will experience write availability loss until the service manages a failover to a new write region selected according to your preferences. | If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. Accounts using the API for NoSQL may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
+| Multiple write regions | Any regional outage | Possibility of temporary write availability loss, analogously to single write region with service-managed failover. The failover of the [conflict-resolution region](#conflict-resolution-region) may also cause a loss of write availability if a high number of conflicting writes happen at the time of the outage. | Recently updated data in the failed region may be unavailable in the remaining active regions, depending on the selected [consistency level](consistency-levels.md). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Azure Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for API for NoSQL accounts, and Last Write Wins for accounts using other APIs. |
### Additional information on read region outages
-* The impacted region is automatically disconnected and will be marked offline. The [Azure Cosmos DB SDKs](sql-api-sdk-dotnet.md) will redirect read calls to the next available region in the preferred region list.
+* The impacted region is automatically disconnected and will be marked offline. The [Azure Cosmos DB SDKs](nosql/sdk-dotnet-v3.md) will redirect read calls to the next available region in the preferred region list (a configuration sketch follows this list).
* If none of the regions in the preferred region list is available, calls automatically fall back to the current write region.
Multi-region accounts will experience different behaviors depending on the follo
* Subsequent reads are redirected to the recovered region without requiring any changes to your application code. During both failover and rejoining of a previously failed region, read consistency guarantees continue to be honored by Azure Cosmos DB.
-* Even in a rare and unfortunate event when the Azure region is permanently irrecoverable, there's no data loss if your multi-region Azure Cosmos account is configured with *Strong* consistency. In the rare event of a permanently irrecoverable write region, a multi-region Azure Cosmos account has the durability characteristics specified in the [Durability](#durability) section.
+* Even in a rare and unfortunate event when the Azure region is permanently irrecoverable, there's no data loss if your multi-region Azure Cosmos DB account is configured with *Strong* consistency. In the rare event of a permanently irrecoverable write region, a multi-region Azure Cosmos DB account has the durability characteristics specified in the [Durability](#durability) section.
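
The bullets above mention the preferred region list that the SDK walks through during a read region outage. As a rough sketch (using the .NET SDK v3; the endpoint, key, and region values are placeholders rather than recommendations), configuring that list on the client might look like this:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Cosmos;

// Sketch only: the endpoint, key, and region names are placeholders.
CosmosClientOptions options = new CosmosClientOptions
{
    // Reads are served from the first available region in this ordered list;
    // the SDK falls back to the next entry if a region is unavailable.
    ApplicationPreferredRegions = new List<string> { Regions.WestUS2, Regions.EastUS2 }
};

CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>", options);
```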
### Additional information on write region outages
-* During a write region outage, the Azure Cosmos account will automatically promote a secondary region to be the new primary write region when **automatic (service-managed) failover** is configured on the Azure Cosmos account. The failover will occur to another region in the order of region priority you've specified.
+* During a write region outage, the Azure Cosmos DB account will automatically promote a secondary region to be the new primary write region when **automatic (service-managed) failover** is configured on the Azure Cosmos DB account. The failover will occur to another region in the order of region priority you've specified.
* Note that manual failover shouldn't be triggered and will not succeed in the presence of an outage of the source or destination region. This is because of a consistency check required by the failover procedure, which requires connectivity between the regions.
-* When the previously impacted region is back online, any write data that wasn't replicated when the region failed, is made available through the [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). Applications can read the conflicts feed, resolve the conflicts based on the application-specific logic, and write the updated data back to the Azure Cosmos container as appropriate.
+* When the previously impacted region is back online, any write data that wasn't replicated when the region failed is made available through the [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). Applications can read the conflicts feed, resolve the conflicts based on the application-specific logic, and write the updated data back to the Azure Cosmos DB container as appropriate.
* Once the previously impacted write region recovers, it becomes automatically available as a read region. You can switch back to the recovered region as the write region. You can switch the regions by using [PowerShell, Azure CLI or Azure portal](how-to-manage-database-account.md#manual-failover). There is **no data or availability loss** before, during or after you switch the write region and your application continues to be highly available.
The following table summarizes the high availability capability of various accou
## Building highly available applications
-* Review the expected [behavior of the Azure Cosmos SDKs](troubleshoot-sdk-availability.md) during these events and which are the configurations that affect it.
+* Review the expected [behavior of the Azure Cosmos DB SDKs](troubleshoot-sdk-availability.md) during these events and the configurations that affect that behavior.
-* To ensure high write and read availability, configure your Azure Cosmos account to span at least two regions and three, if using strong consistency. Remember that the best configuration to achieve high availability for a region outage is single write region with service-managed failover. To learn more, see [Tutorial: Set up Azure Cosmos DB global distribution using the SQL API](tutorial-global-distribution-sql-api.md).
+* To ensure high write and read availability, configure your Azure Cosmos DB account to span at least two regions (or three, if you're using strong consistency). Remember that the best configuration to achieve high availability for a region outage is a single write region with service-managed failover. To learn more, see [Tutorial: Set up Azure Cosmos DB global distribution using the API for NoSQL](nosql/tutorial-global-distribution.md).
-* For multi-region Azure Cosmos accounts that are configured with a single-write region, [enable service-managed failover by using Azure CLI or Azure portal](how-to-manage-database-account.md#automatic-failover). After you enable service-managed failover, whenever there's a regional disaster, Cosmos DB will fail over your account without any user inputs.
+* For multi-region Azure Cosmos DB accounts that are configured with a single-write region, [enable service-managed failover by using Azure CLI or Azure portal](how-to-manage-database-account.md#automatic-failover). After you enable service-managed failover, whenever there's a regional disaster, Azure Cosmos DB will fail over your account without any user inputs.
-* Even if your Azure Cosmos account is highly available, your application may not be correctly designed to remain highly available. To test the end-to-end high availability of your application, as a part of your application testing or disaster recovery (DR) drills, temporarily disable service-managed failover for the account, invoke the [manual failover by using PowerShell, Azure CLI or Azure portal](how-to-manage-database-account.md#manual-failover), then monitor your application's failover. Once complete, you can fail back over to the primary region and restore service-managed failover for the account.
+* Even if your Azure Cosmos DB account is highly available, your application may not be correctly designed to remain highly available. To test the end-to-end high availability of your application, as a part of your application testing or disaster recovery (DR) drills, temporarily disable service-managed failover for the account, invoke the [manual failover by using PowerShell, Azure CLI or Azure portal](how-to-manage-database-account.md#manual-failover), then monitor your application's failover. Once complete, you can fail back over to the primary region and restore service-managed failover for the account.
> [!IMPORTANT]
-> Do not invoke manual failover during a Cosmos DB outage on either the source or destination regions, as it requires regions connectivity to maintain data consistency and it will not succeed.
+> Do not invoke manual failover during an Azure Cosmos DB outage on either the source or destination regions, as it requires region connectivity to maintain data consistency, and it will not succeed.
* Within a globally distributed database environment, there's a direct relationship between the consistency level and data durability in the presence of a region-wide outage. As you develop your business continuity plan, you need to understand the maximum acceptable time before the application fully recovers after a disruptive event. The time required for an application to fully recover is known as recovery time objective (RTO). You also need to understand the maximum period of recent data updates the application can tolerate losing when recovering after a disruptive event. The time period of updates that you might afford to lose is known as recovery point objective (RPO). To see the RPO and RTO for Azure Cosmos DB, see [Consistency levels and data durability](./consistency-levels.md#rto)
-## What to expect during a Cosmos DB region outage
+## What to expect during an Azure Cosmos DB region outage
For single-region accounts, clients will experience loss of read and write availability.
Multi-region accounts will experience different behaviors depending on the follo
| Write regions | Service-Managed failover | What to expect | What to do | | -- | -- | -- | -- |
-| Single write region | Not enabled | In case of outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can impact write availability if fewer than two read regions remaining.<p/> In case of an outage in the write region, clients will experience write availability loss. If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. <p/> Cosmos DB will restore write availability automatically when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. |
-| Single write region | Enabled | In case of outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can impact write availability if fewer than two read regions remaining.<p/> In case of an outage in the write region, clients will experience write availability loss until Cosmos DB automatically elects a new region as the new write region according to your preferences. If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, you may move the write region back to the original region, and re-adjust provisioned RUs as appropriate. Accounts using SQL APIs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
-| Multiple write regions | Not applicable | Recently updated data in the failed region may be unavailable in the remaining active regions. Eventual, consistent prefix, and session consistency levels guarantee a staleness of <15mins. Bounded staleness guarantees less than K updates or T seconds, depending on the configuration. If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for SQL API accounts, and Last Write Wins for accounts using other APIs. |
+| Single write region | Not enabled | In case of an outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, a read region outage can impact write availability if fewer than two read regions remain.<p/> In case of an outage in the write region, clients will experience write availability loss. If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. <p/> Azure Cosmos DB will restore write availability automatically when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. |
+| Single write region | Enabled | In case of an outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, a read region outage can impact write availability if fewer than two read regions remain.<p/> In case of an outage in the write region, clients will experience write availability loss until Azure Cosmos DB automatically elects a new region as the new write region according to your preferences. If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, you may move the write region back to the original region, and re-adjust provisioned RUs as appropriate. Accounts using the API for NoSQL may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
+| Multiple write regions | Not applicable | Recently updated data in the failed region may be unavailable in the remaining active regions. Eventual, consistent prefix, and session consistency levels guarantee a staleness of <15mins. Bounded staleness guarantees less than K updates or T seconds, depending on the configuration. If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Azure Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for API for NoSQL accounts, and Last Write Wins for accounts using other APIs. |
## Next steps
Next you can read the following articles:
* [Consistency levels in Azure Cosmos DB](consistency-levels.md)
-* [How to configure your Cosmos account with multiple write regions](how-to-multi-master.md)
+* [How to configure your Azure Cosmos DB account with multiple write regions](how-to-multi-master.md)
* [SDK behavior on multi-regional environments](troubleshoot-sdk-availability.md)
cosmos-db How Pricing Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-pricing-works.md
Last updated 03/24/2022-+ # Pricing model in Azure Cosmos DB The pricing model of Azure Cosmos DB simplifies the cost management and planning. With Azure Cosmos DB, you pay for the operations you perform against the database and for the storage consumed by your data. > > [!VIDEO https://aka.ms/docs.how-pricing-works] -- **Database operations**: The way you get charged for your database operations depends on the type of Azure Cosmos account you're using.
+- **Database operations**: The way you get charged for your database operations depends on the type of Azure Cosmos DB account you're using.
- **Provisioned Throughput**: [Provisioned throughput](set-throughput.md) (also called reserved throughput) provides high performance at any scale. You specify the throughput that you need in [Request Units](request-units.md) per second (RU/s), and Azure Cosmos DB dedicates the resources required to provide the configured throughput. You can [provision throughput on either a database or a container](set-throughput.md). Based on your workload needs, you can scale throughput up/down at any time or use [autoscale](provision-throughput-autoscale.md) (although there's a minimum throughput required on a database or a container to guarantee the SLAs). You're billed hourly for the maximum provisioned throughput for a given hour. > [!NOTE] > Because the provisioned throughput model dedicates resources to your container or database, you will be charged for the throughput you have provisioned even if you don't run any workloads.
- - **Serverless**: In [serverless](serverless.md) mode, you don't have to provision any throughput when creating resources in your Azure Cosmos account. At the end of your billing period, you get billed for the number of Request Units that has been consumed by your database operations.
+ - **Serverless**: In [serverless](serverless.md) mode, you don't have to provision any throughput when creating resources in your Azure Cosmos DB account. At the end of your billing period, you get billed for the number of Request Units that has been consumed by your database operations.
- **Storage**: You're billed a flat rate for the total amount of storage (in GBs) consumed by your data and indexes for a given hour. Storage is billed on a consumption basis, so you don't have to reserve any storage in advance. You're billed only for the storage you consume.
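
For illustration only, the following small sketch shows the hourly billing arithmetic for the provisioned throughput and storage bullets above. The rates are hypothetical placeholders, not actual Azure Cosmos DB prices; check the Azure pricing page for current numbers.

```csharp
// Hypothetical rates for illustration only; they are not actual Azure Cosmos DB prices.
const double hourlyRatePer100RUs = 0.008; // assumed USD per 100 RU/s per hour
const double monthlyRatePerGB = 0.25;     // assumed USD per GB per month

int provisionedRUs = 1000;  // RU/s provisioned on a container for a given hour
double hourlyThroughputCost = (provisionedRUs / 100.0) * hourlyRatePer100RUs; // 0.08
double monthlyThroughputCost = hourlyThroughputCost * 730;                    // ~58.40 for a 730-hour month

double storageGB = 10;                                    // data plus indexes
double monthlyStorageCost = storageGB * monthlyRatePerGB; // 2.50
```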
Azure Cosmos DB offers many options for developers to use it for free. These options
## Pricing with reserved capacity
-Azure Cosmos DB [reserved capacity](cosmos-db-reserved-capacity.md) helps you save money when using the provisioned throughput mode by pre-paying for Azure Cosmos DB resources for either one year or three years. You can significantly reduce your costs with one-year or three-year upfront commitments and save between 20-65% discounts when compared to the regular pricing. Azure Cosmos DB reserved capacity helps you lower costs by pre-paying for the provisioned throughput (RU/s) for one year or three years and you get a discount on the throughput provisioned.
+Azure Cosmos DB [reserved capacity](reserved-capacity.md) helps you save money when using the provisioned throughput mode by pre-paying for Azure Cosmos DB resources for either one year or three years. With one-year or three-year upfront commitments, you can significantly reduce your costs and receive discounts of 20-65% compared to the regular pricing. Azure Cosmos DB reserved capacity lowers costs by pre-paying for the provisioned throughput (RU/s) for one year or three years, and you get a discount on the throughput provisioned.
-Reserved capacity provides a billing discount and doesn't affect the runtime state of your Azure Cosmos DB resources. Reserved capacity is available consistently to all APIs, which includes MongoDB, Cassandra, SQL, Gremlin, and Azure Tables and all regions worldwide. You can learn more about reserved capacity in [Prepay for Azure Cosmos DB resources with reserved capacity](cosmos-db-reserved-capacity.md) article and buy reserved capacity from the [Azure portal](https://portal.azure.com/).
+Reserved capacity provides a billing discount and doesn't affect the runtime state of your Azure Cosmos DB resources. Reserved capacity is available consistently across all APIs, including MongoDB, Cassandra, SQL, Gremlin, and Azure Tables, and in all regions worldwide. You can learn more about reserved capacity in the [Prepay for Azure Cosmos DB resources with reserved capacity](reserved-capacity.md) article and buy reserved capacity from the [Azure portal](https://portal.azure.com/).
## Next steps
You can learn more about optimizing the costs for your Azure Cosmos DB resources
* Learn more about [Optimizing storage cost](optimize-cost-storage.md) * Learn more about [Optimizing the cost of reads and writes](optimize-cost-reads-writes.md) * Learn more about [Optimizing the cost of queries](./optimize-cost-reads-writes.md)
-* Learn more about [Optimizing the cost of multi-region Cosmos accounts](optimize-cost-regions.md)
-* Learn about [Azure Cosmos DB reserved capacity](cosmos-db-reserved-capacity.md)
-* Learn about [Azure Cosmos DB Emulator](local-emulator.md)
+* Learn more about [Optimizing the cost of multi-region Azure Cosmos DB accounts](optimize-cost-regions.md)
+* Learn about [Azure Cosmos DB reserved capacity](reserved-capacity.md)
+* Learn about [Azure Cosmos DB Emulator](local-emulator.md)
cosmos-db How To Alert On Logical Partition Key Storage Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-alert-on-logical-partition-key-storage-size.md
description: Learn how to set up alerts for Azure Cosmos DB using Log Analytics
+ Last updated 02/08/2022 # Create alerts to monitor if storage for a logical partition key is approaching 20 GB Azure Cosmos DB enforces a maximum logical partition key size of 20 GB. For example, if you have a container/collection partitioned by **UserId**, the data within the "Alice" logical partition can store up to 20 GB of data.
In this article, we'll create an alert that will trigger if the storage for a
We'll be using data from the **PartitionKeyStatistics** log category in Diagnostic Logs to create our alert. Diagnostic Logs is an opt-in feature, so you'll need to enable it before proceeding. In our example, we'll use the recommended Resource Specific Logs option.
-Follow the instructions in [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](cosmosdb-monitor-resource-logs.md) to ensure:
+Follow the instructions in [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](monitor-resource-logs.md) to ensure:
- Diagnostic Logs is enabled on the Azure Cosmos DB account(s) you want to monitor - You have configured collection of the **PartitionKeyStatistics** log category - The Diagnostic logs are being sent to a Log Analytics workspace
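As a hedged sketch of those prerequisites, the following Azure CLI commands enable a diagnostic setting that sends the **PartitionKeyStatistics** category to a Log Analytics workspace in resource-specific mode. The account, workspace, and setting names are placeholder assumptions, and the exact parameter spellings may vary by CLI version.

```azurecli-interactive
# Placeholder names; replace with your own values.
cosmosAccountId=$(az cosmosdb show \
    --name 'mycosmosaccount' \
    --resource-group 'MyResourceGroup' \
    --query id -o tsv)

workspaceId=$(az monitor log-analytics workspace show \
    --workspace-name 'myworkspace' \
    --resource-group 'MyResourceGroup' \
    --query id -o tsv)

# Send PartitionKeyStatistics to the workspace using resource-specific tables.
az monitor diagnostic-settings create \
    --name 'partition-key-stats' \
    --resource $cosmosAccountId \
    --workspace $workspaceId \
    --export-to-resource-specific true \
    --logs '[{"category":"PartitionKeyStatistics","enabled":true}]'
```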
Follow the instructions in [Monitor Azure Cosmos DB data by using diagnostic set
* Select **Azure Cosmos DB accounts** for the **resource type**.
- * The **location** of your Azure Cosmos account.
+ * The **location** of your Azure Cosmos DB account.
- * After filling in the details, a list of Azure Cosmos accounts in the selected scope is displayed. Choose the one for which you want to configure alerts and select **Done**.
+ * After filling in the details, a list of Azure Cosmos DB accounts in the selected scope is displayed. Choose the one for which you want to configure alerts and select **Done**.
1. Fill out the **Condition** section:
Follow the instructions in [Monitor Azure Cosmos DB data by using diagnostic set
* For all other dimensions: * If you want to monitor only a specific Azure Cosmos DB account, database, collection, or partition key, select the specific value or **Add custom value** if the value doesn't currently appear in the dropdown.
- * Otherwise, select **Select all current and future values**. For example, if your Cosmos account currently has two databases and five collections, selecting all current and feature values for the Database and CollectionName dimension will ensure that the alert will apply to all existing databases and collections, as well as any you may create in the future.
+ * Otherwise, select **Select all current and future values**. For example, if your Azure Cosmos DB account currently has two databases and five collections, selecting all current and future values for the Database and CollectionName dimensions ensures that the alert applies to all existing databases and collections, as well as any you may create in the future.
* In the Alert logic section:
To learn about best practices for managing workloads that have partition keys re
## Next steps * How to [create alerts for Azure Cosmos DB using Azure Monitor](create-alerts.md).
-* How to [monitor normalized RU/s metric](monitor-normalized-request-units.md) in Azure Cosmos container.
+* How to [monitor normalized RU/s metric](monitor-normalized-request-units.md) in an Azure Cosmos DB container.
* How to [monitor throughput or request unit usage](monitor-request-unit-usage.md) of an operation in Azure Cosmos DB.
-* How to [interpret and debut 429 exceptions](sql/troubleshoot-request-rate-too-large.md) in Azure Cosmos container.
-
+* How to [interpret and debug 429 exceptions](sql/troubleshoot-request-rate-too-large.md) in an Azure Cosmos DB container.
cosmos-db How To Always Encrypted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-always-encrypted.md
Title: Use client-side encryption with Always Encrypted for Azure Cosmos DB description: Learn how to use client-side encryption with Always Encrypted for Azure Cosmos DB + Last updated 04/04/2022
# Use client-side encryption with Always Encrypted for Azure Cosmos DB > [!IMPORTANT] > A breaking change has been introduced with the 1.0 release of our encryption packages. If you created data encryption keys and encryption-enabled containers with prior versions, you will need to re-create your databases and containers after migrating your client code to 1.0 packages.
If you have flexibility in the way new encrypted properties can be added from a
## Next steps - Get an overview of [secure access to data in Azure Cosmos DB](secure-access-to-data.md).-- Learn more about [customer-managed keys for encryption-at-rest](how-to-setup-cmk.md)
+- Learn more about [customer-managed keys for encryption-at-rest](how-to-setup-cmk.md)
cosmos-db How To Choose Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-choose-offer.md
Title: How to choose between manual and autoscale on Azure Cosmos DB
description: Learn how to choose between standard (manual) provisioned throughput and autoscale provisioned throughput for your workload. + Last updated 04/01/2022 # How to choose between standard (manual) and autoscale provisioned throughput Azure Cosmos DB supports two types or offers of provisioned throughput: standard (manual) and autoscale. Both throughput types are suitable for mission-critical workloads that require high performance and scale, and are backed by the same Azure Cosmos DB SLAs on throughput, availability, latency, and consistency.
Use the Azure Cosmos DB [capacity calculator](estimate-ru-with-capacity-planner.
### Existing applications ###
-If you have an existing application using standard (manual) provisioned throughput, you can use [Azure Monitor metrics](cosmosdb-insights-overview.md) to determine if your traffic pattern is suitable for autoscale.
+If you have an existing application using standard (manual) provisioned throughput, you can use [Azure Monitor metrics](insights-overview.md) to determine if your traffic pattern is suitable for autoscale.
First, find the [normalized request unit consumption metric](monitor-normalized-request-units.md#view-the-normalized-request-unit-consumption-metric) of your database or container. Normalized utilization is a measure of how much you are currently using your standard (manual) provisioned throughput. The closer the number is to 100%, the more you are fully using your provisioned RU/s. [Learn more](monitor-normalized-request-units.md#view-the-normalized-request-unit-consumption-metric) about the metric.
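If you prefer to pull this metric from the command line instead of the portal, the following is a minimal Azure CLI sketch (not part of the original article) that queries the `NormalizedRUConsumption` metric for an account over the last day. The account and resource group names are placeholder assumptions.

```azurecli-interactive
# Placeholder names; replace with your own values.
cosmosAccountId=$(az cosmosdb show \
    --name 'mycosmosaccount' \
    --resource-group 'MyResourceGroup' \
    --query id -o tsv)

# Hourly maximum of normalized RU consumption (percentage of provisioned RU/s used).
az monitor metrics list \
    --resource $cosmosAccountId \
    --metric 'NormalizedRUConsumption' \
    --interval PT1H \
    --aggregation Maximum \
    --offset 1d
```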
Next, determine how the normalized utilization varies over time. Find the highes
Let's take a look at two different example workloads and analyze if they are suitable for manual or autoscale throughput. To illustrate the general approach, we'll analyze three hours of history to determine the cost difference between using manual and autoscale. For production workloads, it's recommended to use 7 to 30 days of history (or longer if available) to establish a pattern of RU/s usage. > [!NOTE]
-> All the examples shown in this doc are based on the price for an Azure Cosmos account deployed in a non-government region in the US. The pricing and calculation vary depending on the region you are using, see the Azure Cosmos DB [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the latest pricing information.
+> All the examples shown in this doc are based on the price for an Azure Cosmos DB account deployed in a non-government region in the US. The pricing and calculation vary depending on the region you are using; see the Azure Cosmos DB [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the latest pricing information.
Assumptions: - Suppose we currently have manual throughput of 30,000 RU/s.
When using autoscale, use Azure Monitor to see the provisioned autoscale max RU/
## Next steps * Use [RU calculator](https://cosmos.azure.com/capacitycalculator/) to estimate throughput for new workloads.
-* Use [Azure Monitor](monitor-cosmos-db.md#view-operation-level-metrics-for-azure-cosmos-db) to monitor your existing workloads.
-* Learn how to [provision autoscale throughput on an Azure Cosmos database or container](how-to-provision-autoscale-throughput.md).
+* Use [Azure Monitor](monitor.md#view-operation-level-metrics-for-azure-cosmos-db) to monitor your existing workloads.
+* Learn how to [provision autoscale throughput on an Azure Cosmos DB database or container](how-to-provision-autoscale-throughput.md).
* Review the [autoscale FAQ](autoscale-faq.yml). * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db How To Configure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-firewall.md
Title: Configure an IP firewall for your Azure Cosmos DB account
-description: Learn how to configure IP access control policies for firewall support on Azure Cosmos accounts.
+description: Learn how to configure IP access control policies for firewall support on Azure Cosmos DB accounts.
Last updated 02/18/2022 -+ # Configure IP firewall in Azure Cosmos DB
-To secure the data stored in your account, Azure Cosmos DB supports a secret based authorization model that utilizes a strong Hash-based Message Authentication Code (HMAC). Additionally, Azure Cosmos DB supports IP-based access controls for inbound firewall support. This model is similar to the firewall rules of a traditional database system and provides an additional level of security to your account. With firewalls, you can configure your Azure Cosmos account to be accessible only from an approved set of machines and/or cloud services. Access to data stored in your Azure Cosmos database from these approved sets of machines and services will still require the caller to present a valid authorization token.
+To secure the data stored in your account, Azure Cosmos DB supports a secret-based authorization model that utilizes a strong Hash-based Message Authentication Code (HMAC). Additionally, Azure Cosmos DB supports IP-based access controls for inbound firewall support. This model is similar to the firewall rules of a traditional database system and provides an additional level of security to your account. With firewalls, you can configure your Azure Cosmos DB account to be accessible only from an approved set of machines and/or cloud services. Access to data stored in your Azure Cosmos DB database from these approved sets of machines and services will still require the caller to present a valid authorization token.
## <a id="ip-access-control-overview"></a>IP access control
-By default, your Azure Cosmos account is accessible from internet, as long as the request is accompanied by a valid authorization token. To configure IP policy-based access control, the user must provide the set of IP addresses or IP address ranges in CIDR (Classless Inter-Domain Routing) form to be included as the allowed list of client IPs to access a given Azure Cosmos account. Once this configuration is applied, any requests originating from machines outside this allowed list receive 403 (Forbidden) response. When using IP firewall, it is recommended to allow Azure portal to access your account. Access is required to allow use of data explorer as well as to retrieve metrics for your account that show up on the Azure portal. When using data explorer, in addition to allowing Azure portal to access your account, you also need to update your firewall settings to add your current IP address to the firewall rules. Note that firewall changes may take up to 15 minutes to propagate and the firewall may exhibit an inconsistent behavior during this period.
+By default, your Azure Cosmos DB account is accessible from the internet, as long as the request is accompanied by a valid authorization token. To configure IP policy-based access control, you must provide the set of IP addresses or IP address ranges in CIDR (Classless Inter-Domain Routing) form to be included as the allowed list of client IPs for a given Azure Cosmos DB account. Once this configuration is applied, any requests originating from machines outside this allowed list receive a 403 (Forbidden) response. When using the IP firewall, it's recommended to allow the Azure portal to access your account. This access is required to use the Data Explorer and to retrieve metrics for your account that show up in the Azure portal. When using the Data Explorer, in addition to allowing the Azure portal to access your account, you also need to update your firewall settings to add your current IP address to the firewall rules. Note that firewall changes may take up to 15 minutes to propagate, and the firewall may exhibit inconsistent behavior during this period.
You can combine IP-based firewall with subnet and VNET access control. By combining them, you can limit access to any source that has a public IP and/or from a specific subnet within VNET. To learn more about using subnet and VNET-based access control see [Access Azure Cosmos DB resources from virtual networks](./how-to-configure-vnet-service-endpoint.md).
-To summarize, authorization token is always required to access an Azure Cosmos account. If IP firewall and VNET Access Control List (ACLs) are not set up, the Azure Cosmos account can be accessed with the authorization token. After the IP firewall or VNET ACLs or both are set up on the Azure Cosmos account, only requests originating from the sources you have specified (and with the authorization token) get valid responses.
+To summarize, an authorization token is always required to access an Azure Cosmos DB account. If the IP firewall and VNET Access Control Lists (ACLs) are not set up, the Azure Cosmos DB account can be accessed with just the authorization token. After the IP firewall, the VNET ACLs, or both are set up on the Azure Cosmos DB account, only requests originating from the sources you have specified (and that carry the authorization token) get valid responses.
You can secure the data stored in your Azure Cosmos DB account by using IP firewalls. Azure Cosmos DB supports IP-based access controls for inbound firewall support. You can set an IP firewall on the Azure Cosmos DB account by using one of the following ways:
When you scale out your cloud service by adding role instances, those new instan
### Requests from virtual machines
-You can also use [virtual machines](https://azure.microsoft.com/services/virtual-machines/) or [virtual machine scale sets](../virtual-machine-scale-sets/overview.md) to host middle-tier services by using Azure Cosmos DB. To configure your Cosmos DB account such that it allows access from virtual machines, you must configure the public IP address of the virtual machine and/or virtual machine scale set as one of the allowed IP addresses for your Azure Cosmos DB account by [configuring the IP access control policy](#configure-ip-policy).
+You can also use [virtual machines](https://azure.microsoft.com/services/virtual-machines/) or [virtual machine scale sets](../virtual-machine-scale-sets/overview.md) to host middle-tier services by using Azure Cosmos DB. To configure your Azure Cosmos DB account such that it allows access from virtual machines, you must configure the public IP address of the virtual machine and/or virtual machine scale set as one of the allowed IP addresses for your Azure Cosmos DB account by [configuring the IP access control policy](#configure-ip-policy).
You can retrieve IP addresses for virtual machines in the Azure portal, as shown in the following screenshot:
To automate the list, please see [Use the Service Tag Discovery API](../virtual-
## <a id="configure-ip-firewall-arm"></a>Configure an IP firewall by using a Resource Manager template
-To configure access control to your Azure Cosmos DB account, make sure that the Resource Manager template specifies the **ipRules** property with an array of allowed IP ranges. If configuring IP Firewall to an already deployed Cosmos account, ensure the `locations` array matches what is currently deployed. You cannot simultaneously modify the `locations` array and other properties. For more information and samples of Azure Resource Manager templates for Azure Cosmos DB see, [Azure Resource Manager templates for Azure Cosmos DB](./templates-samples-sql.md)
+To configure access control to your Azure Cosmos DB account, make sure that the Resource Manager template specifies the **ipRules** property with an array of allowed IP ranges. If you're configuring an IP firewall on an already deployed Azure Cosmos DB account, ensure the `locations` array matches what is currently deployed. You cannot simultaneously modify the `locations` array and other properties. For more information and samples of Azure Resource Manager templates for Azure Cosmos DB, see [Azure Resource Manager templates for Azure Cosmos DB](./nosql/samples-resource-manager-templates.md).
> [!IMPORTANT] > The **ipRules** property has been introduced with API version 2020-04-01. Previous versions exposed an **ipRangeFilter** property instead, which is a list of comma-separated IP addresses.
Here's the same example for any API version prior to 2020-04-01:
The following command shows how to create an Azure Cosmos DB account with IP access control: ```azurecli-interactive
-# Create a Cosmos DB account with default values and IP Firewall enabled
+# Create an Azure Cosmos DB account with default values and IP Firewall enabled
resourceGroupName='MyResourceGroup' accountName='mycosmosaccount' ipRangeFilter='192.168.221.17,183.240.196.255,40.76.54.131'
az cosmosdb create \
The following script shows how to create an Azure Cosmos DB account with IP access control: ```azurepowershell-interactive
-# Create a Cosmos DB account with default values and IP Firewall enabled
+# Create an Azure Cosmos DB account with default values and IP Firewall enabled
$resourceGroupName = "myResourceGroup" $accountName = "mycosmosaccount" $ipRules = @("192.168.221.17","183.240.196.255","40.76.54.131")
When you access Azure Cosmos DB resources by using SDKs from machines that are n
### Source IPs in blocked requests
-Enable diagnostic logging on your Azure Cosmos DB account. These logs show each request and response. The firewall-related messages are logged with a 403 return code. By filtering these messages, you can see the source IPs for the blocked requests. See [Azure Cosmos DB diagnostic logging](./monitor-cosmos-db.md).
+Enable diagnostic logging on your Azure Cosmos DB account. These logs show each request and response. The firewall-related messages are logged with a 403 return code. By filtering these messages, you can see the source IPs for the blocked requests. See [Azure Cosmos DB diagnostic logging](./monitor.md).
### Requests from a subnet with a service endpoint for Azure Cosmos DB enabled
Requests from a subnet in a virtual network that has a service endpoint for Azur
### Private IP addresses in list of allowed addresses
-Creating or updating an Azure Cosmos account with a list of allowed addresses containing private IP addresses will fail. Make sure that no private IP address is specified in the list.
+Creating or updating an Azure Cosmos DB account with a list of allowed addresses containing private IP addresses will fail. Make sure that no private IP address is specified in the list.
## Next steps
cosmos-db How To Configure Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-integrated-cache.md
Title: How to configure the Azure Cosmos DB integrated cache
description: Learn how to configure the Azure Cosmos DB integrated cache -++ Last updated 08/29/2022
# How to configure the Azure Cosmos DB integrated cache This article describes how to provision a dedicated gateway, configure the integrated cache, and connect your application.
This article describes how to provision a dedicated gateway, configure the integ
- If you don't have an [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin. - An existing application that uses Azure Cosmos DB. If you don't have one, [here are some examples](https://github.com/AzureCosmosDB/labs).-- An existing [Azure Cosmos DB SQL (core) API account](create-cosmosdb-resources-portal.md).
+- An existing [Azure Cosmos DB API for NoSQL account](nosql/quickstart-portal.md).
## Provision the dedicated gateway
When you create a dedicated gateway, an integrated cache is automatically provis
You don't need to modify the connection string in all applications using the same Azure Cosmos DB account. For example, you could have one `CosmosClient` connect using gateway mode and the dedicated gateway endpoint while another `CosmosClient` uses direct mode. In other words, adding a dedicated gateway doesn't impact the existing ways of connecting to Azure Cosmos DB.
-2. If you're using the .NET or Java SDK, set the connection mode to [gateway mode](sql-sdk-connection-modes.md#available-connectivity-modes). This step isn't necessary for the Python and Node.js SDKs since they don't have additional options of connecting besides gateway mode.
+2. If you're using the .NET or Java SDK, set the connection mode to [gateway mode](nosql/sdk-connection-modes.md#available-connectivity-modes). This step isn't necessary for the Python and Node.js SDKs since they don't have additional options of connecting besides gateway mode.
> [!NOTE] > If you are using the latest .NET or Java SDK version, the default connection mode is direct mode. In order to use the integrated cache, you must override this default.
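The dedicated gateway that backs the integrated cache can also be provisioned from the command line. The following is only a sketch under assumptions: the `az cosmosdb service create` parameter spellings and the instance size shown here are best-guess placeholder values and may differ in your CLI version, so treat the steps in this article as authoritative.

```azurecli-interactive
# Assumed parameter spellings and placeholder names; verify against `az cosmosdb service create --help`.
az cosmosdb service create \
    --account-name 'mycosmosaccount' \
    --resource-group 'MyResourceGroup' \
    --name 'SqlDedicatedGateway' \
    --kind 'SqlDedicatedGateway' \
    --count 1 \
    --size 'Cosmos.D4s'
```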
cosmoscachefeedback@microsoft.com
- [Integrated cache FAQ](integrated-cache-faq.md) - [Integrated cache overview](integrated-cache.md)-- [Dedicated gateway](dedicated-gateway.md)
+- [Dedicated gateway](dedicated-gateway.md)
cosmos-db How To Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-private-endpoints.md
Title: Configure Azure Private Link for an Azure Cosmos account
-description: Learn how to set up Azure Private Link to access an Azure Cosmos account by using a private IP address in a virtual network.
+ Title: Configure Azure Private Link for an Azure Cosmos DB account
+description: Learn how to set up Azure Private Link to access an Azure Cosmos DB account by using a private IP address in a virtual network.
Last updated 06/08/2021 -+
-# Configure Azure Private Link for an Azure Cosmos account
+# Configure Azure Private Link for an Azure Cosmos DB account
-By using Azure Private Link, you can connect to an Azure Cosmos account via a private endpoint. The private endpoint is a set of private IP addresses in a subnet within your virtual network. You can then limit access to an Azure Cosmos account over private IP addresses. When Private Link is combined with restricted NSG policies, it helps reduce the risk of data exfiltration. To learn more about private endpoints, see the [Azure Private Link](../private-link/private-link-overview.md) article.
+By using Azure Private Link, you can connect to an Azure Cosmos DB account via a private endpoint. The private endpoint is a set of private IP addresses in a subnet within your virtual network. You can then limit access to an Azure Cosmos DB account over private IP addresses. When Private Link is combined with restricted NSG policies, it helps reduce the risk of data exfiltration. To learn more about private endpoints, see the [Azure Private Link](../private-link/private-link-overview.md) article.
> [!NOTE]
-> Private Link doesn't prevent your Azure Cosmos endpoints from being resolved by public DNS. Filtering of incoming requests happens at application level, not transport or network level.
+> Private Link doesn't prevent your Azure Cosmos DB endpoints from being resolved by public DNS. Filtering of incoming requests happens at application level, not transport or network level.
-Private Link allows users to access an Azure Cosmos account from within the virtual network or from any peered virtual network. Resources mapped to Private Link are also accessible on-premises over private peering through VPN or Azure ExpressRoute.
+Private Link allows users to access an Azure Cosmos DB account from within the virtual network or from any peered virtual network. Resources mapped to Private Link are also accessible on-premises over private peering through VPN or Azure ExpressRoute.
-You can connect to an Azure Cosmos account configured with Private Link by using the automatic or manual approval method. To learn more, see the [Approval workflow](../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow) section of the Private Link documentation.
+You can connect to an Azure Cosmos DB account configured with Private Link by using the automatic or manual approval method. To learn more, see the [Approval workflow](../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow) section of the Private Link documentation.
This article describes how to set up private endpoints for Azure Cosmos DB transactional store. It assumes that you're using the automatic approval method. If you are using the analytical store, see [Private endpoints for the analytical store](analytical-store-private-endpoints.md) article. ## Create a private endpoint by using the Azure portal
-Use the following steps to create a private endpoint for an existing Azure Cosmos account by using the Azure portal:
+Use the following steps to create a private endpoint for an existing Azure Cosmos DB account by using the Azure portal:
-1. From the **All resources** pane, choose an Azure Cosmos account.
+1. From the **All resources** pane, choose an Azure Cosmos DB account.
1. Select **Private Endpoint Connections** from the list of settings, and then select **Private endpoint**:
Use the following steps to create a private endpoint for an existing Azure Cosmo
|Connection method | Select **Connect to an Azure resource in my directory**. <br/><br/> You can then choose one of your resources to set up Private Link. Or you can connect to someone else's resource by using a resource ID or alias that they've shared with you.| | Subscription| Select your subscription. | | Resource type | Select **Microsoft.AzureCosmosDB/databaseAccounts**. |
- | Resource |Select your Azure Cosmos account. |
- |Target sub-resource |Select the Azure Cosmos DB API type that you want to map. This defaults to only one choice for the SQL, MongoDB, and Cassandra APIs. For the Gremlin and Table APIs, you can also choose **Sql** because these APIs are interoperable with the SQL API. If you have a [dedicated gateway](./dedicated-gateway.md) provisioned for a SQL API account, you will also see an option for **SqlDedicated**. |
+ | Resource |Select your Azure Cosmos DB account. |
+ |Target sub-resource |Select the Azure Cosmos DB API type that you want to map. This defaults to only one choice for the APIs for SQL, MongoDB, and Cassandra. For the APIs for Gremlin and Table, you can also choose **NoSQL** because these APIs are interoperable with the API for NoSQL. If you have a [dedicated gateway](./dedicated-gateway.md) provisioned for an API for NoSQL account, you will also see an option for **SqlDedicated**. |
||| 1. Select **Next: Configuration**.
Use the following steps to create a private endpoint for an existing Azure Cosmo
1. Select **Review + create**. On the **Review + create** page, Azure validates your configuration. 1. When you see the **Validation passed** message, select **Create**.
-When you have approved Private Link for an Azure Cosmos account, in the Azure portal, the **All networks** option in the **Firewall and virtual networks** pane is unavailable.
+When you have approved Private Link for an Azure Cosmos DB account, in the Azure portal, the **All networks** option in the **Firewall and virtual networks** pane is unavailable.
## <a id="private-zone-name-mapping"></a>API types and private zone names
-The following table shows the mapping between different Azure Cosmos account API types, supported sub-resources, and the corresponding private zone names. You can also access the Gremlin and Table API accounts through the SQL API, so there are two entries for these APIs. There is also an extra entry for the SQL API for accounts using the [dedicated gateway](./dedicated-gateway.md).
+The following table shows the mapping between different Azure Cosmos DB account API types, supported sub-resources, and the corresponding private zone names. You can also access the API for Gremlin and API for Table accounts through the API for NoSQL, so there are two entries for these APIs. There is also an extra entry for the API for NoSQL for accounts using the [dedicated gateway](./dedicated-gateway.md).
-|Azure Cosmos account API type |Supported sub-resources (or group IDs) |Private zone name |
+|Azure Cosmos DB account API type |Supported sub-resources (or group IDs) |Private zone name |
||||
-|Sql | Sql | privatelink.documents.azure.com |
-|Sql | SqlDedicated | privatelink.sqlx.cosmos.azure.com |
+|NoSQL | Sql | privatelink.documents.azure.com |
+|NoSQL | SqlDedicated | privatelink.sqlx.cosmos.azure.com |
|Cassandra | Cassandra | privatelink.cassandra.cosmos.azure.com | |Mongo | MongoDB | privatelink.mongo.cosmos.azure.com | |Gremlin | Gremlin | privatelink.gremlin.cosmos.azure.com |
After the private endpoint is provisioned, you can query the IP addresses. To vi
Multiple IP addresses are created per private endpoint:
-* One for the global (region-agnostic) endpoint of the Azure Cosmos account
-* One for each region where the Azure Cosmos account is deployed
+* One for the global (region-agnostic) endpoint of the Azure Cosmos DB account
+* One for each region where the Azure Cosmos DB account is deployed
## Create a private endpoint by using Azure PowerShell
-Run the following PowerShell script to create a private endpoint named "MyPrivateEndpoint" for an existing Azure Cosmos account. Replace the variable values with the details for your environment.
+Run the following PowerShell script to create a private endpoint named "MyPrivateEndpoint" for an existing Azure Cosmos DB account. Replace the variable values with the details for your environment.
```azurepowershell-interactive $SubscriptionId = "<your Azure subscription ID>"
-# Resource group where the Azure Cosmos account and virtual network resources are located
+# Resource group where the Azure Cosmos DB account and virtual network resources are located
$ResourceGroupName = "myResourceGroup"
-# Name of the Azure Cosmos account
+# Name of the Azure Cosmos DB account
$CosmosDbAccountName = "mycosmosaccount"
-# Resource for the Azure Cosmos account: Sql, SqlDedicated, MongoDB, Cassandra, Gremlin, or Table
+# Resource for the Azure Cosmos DB account: Sql, SqlDedicated, MongoDB, Cassandra, Gremlin, or Table
$CosmosDbSubResourceType = "Sql" # Name of the existing virtual network $VNetName = "myVnet"
foreach ($IPConfiguration in $networkInterface.IpConfigurations)
## Create a private endpoint by using Azure CLI
-Run the following Azure CLI script to create a private endpoint named "myPrivateEndpoint" for an existing Azure Cosmos account. Replace the variable values with the details for your environment.
+Run the following Azure CLI script to create a private endpoint named "myPrivateEndpoint" for an existing Azure Cosmos DB account. Replace the variable values with the details for your environment.
```azurecli-interactive
-# Resource group where the Azure Cosmos account and virtual network resources are located
+# Resource group where the Azure Cosmos DB account and virtual network resources are located
ResourceGroupName="myResourceGroup"
-# Subscription ID where the Azure Cosmos account and virtual network resources are located
+# Subscription ID where the Azure Cosmos DB account and virtual network resources are located
SubscriptionId="<your Azure subscription ID>"
-# Name of the existing Azure Cosmos account
+# Name of the existing Azure Cosmos DB account
CosmosDbAccountName="mycosmosaccount"
-# API type of your Azure Cosmos account: Sql, SqlDedicated, MongoDB, Cassandra, Gremlin, or Table
+# API type of your Azure Cosmos DB account: Sql, SqlDedicated, MongoDB, Cassandra, Gremlin, or Table
CosmosDbSubResourceType="Sql" # Name of the virtual network to create
az network private-endpoint dns-zone-group create \
You can set up Private Link by creating a private endpoint in a virtual network subnet. You achieve this by using an Azure Resource Manager template.
-Use the following code to create a Resource Manager template named "PrivateEndpoint_template.json." This template creates a private endpoint for an existing Azure Cosmos SQL API account in an existing virtual network.
+Use the following code to create a Resource Manager template named "PrivateEndpoint_template.json." This template creates a private endpoint for an existing Azure Cosmos DB API for NoSQL account in an existing virtual network.
```json {
Create a parameters file for the template, and name it "PrivateEndpoint_paramete
Create a PowerShell script by using the following code. Before you run the script, replace the subscription ID, resource group name, and other variable values with the details for your environment. ```azurepowershell-interactive
-### This script creates a private endpoint for an existing Azure Cosmos account in an existing virtual network
+### This script creates a private endpoint for an existing Azure Cosmos DB account in an existing virtual network
## Step 1: Fill in these details. Replace the variable values with the details for your environment. $SubscriptionId = "<your Azure subscription ID>"
-# Resource group where the Azure Cosmos account and virtual network resources are located
+# Resource group where the Azure Cosmos DB account and virtual network resources are located
$ResourceGroupName = "myResourceGroup"
-# Name of the Azure Cosmos account
+# Name of the Azure Cosmos DB account
$CosmosDbAccountName = "mycosmosaccount"
-# API type of the Azure Cosmos account. It can be one of the following: "Sql", "SqlDedicated", "MongoDB", "Cassandra", "Gremlin", "Table"
+# API type of the Azure Cosmos DB account. It can be one of the following: "Sql", "SqlDedicated", "MongoDB", "Cassandra", "Gremlin", "Table"
$CosmosDbSubResourceType = "Sql" # Name of the existing virtual network $VNetName = "myVnet"
$deploymentOutput = New-AzResourceGroupDeployment -Name "PrivateCosmosDbEndpoint
$deploymentOutput ```
-In the PowerShell script, the `GroupId` variable can contain only one value. That value is the API type of the account. Allowed values are: `Sql`, `SqlDedicated`, `MongoDB`, `Cassandra`, `Gremlin`, and `Table`. Some Azure Cosmos account types are accessible through multiple APIs. For example:
+In the PowerShell script, the `GroupId` variable can contain only one value. That value is the API type of the account. Allowed values are: `Sql`, `SqlDedicated`, `MongoDB`, `Cassandra`, `Gremlin`, and `Table`. Some Azure Cosmos DB account types are accessible through multiple APIs. For example:
-* A SQL API account has an added option for accounts configured to use the [Dedicated Gateway](./dedicated-gateway.md).
-* A Gremlin API account can be accessed from both Gremlin and SQL API accounts.
-* A Table API account can be accessed from both Table and SQL API accounts.
+* An API for NoSQL account has an added option for accounts configured to use the [Dedicated Gateway](./dedicated-gateway.md).
+* An API for Gremlin account can be accessed from both Gremlin and API for NoSQL accounts.
+* An API for Table account can be accessed from both Table and API for NoSQL accounts.
For those accounts, you must create one private endpoint for each API type. If you are creating a private endpoint for `SqlDedicated`, you only need to add a second endpoint for `Sql` if you want to also connect to your account using the standard gateway. The corresponding API type is specified in the `GroupId` array.
After the template is deployed successfully, you can see an output similar to wh
:::image type="content" source="./media/how-to-configure-private-endpoints/resource-manager-template-deployment-output.png" alt-text="Deployment output for the Resource Manager template":::
-After the template is deployed, the private IP addresses are reserved within the subnet. The firewall rule of the Azure Cosmos account is configured to accept connections from the private endpoint only.
+After the template is deployed, the private IP addresses are reserved within the subnet. The firewall rule of the Azure Cosmos DB account is configured to accept connections from the private endpoint only.
### Integrate the private endpoint with a Private DNS zone
-Use the following code to create a Resource Manager template named "PrivateZone_template.json." This template creates a private DNS zone for an existing Azure Cosmos SQL API account in an existing virtual network.
+Use the following code to create a Resource Manager template named "PrivateZone_template.json." This template creates a private DNS zone for an existing Azure Cosmos DB API for NoSQL account in an existing virtual network.
```json {
Create the following two parameters file for the template. Create the "PrivateZo
} ```
-Use the following code to create a Resource Manager template named "PrivateZoneGroup_template.json." This template creates a private DNS zone group for an existing Azure Cosmos SQL API account in an existing virtual network.
+Use the following code to create a Resource Manager template named "PrivateZoneGroup_template.json." This template creates a private DNS zone group for an existing Azure Cosmos DB API for NoSQL account in an existing virtual network.
```json {
Create a PowerShell script by using the following code. Before you run the scrip
```azurepowershell-interactive ### This script: ### - creates a private zone
-### - creates a private endpoint for an existing Cosmos DB account in an existing VNet
+### - creates a private endpoint for an existing Azure Cosmos DB account in an existing VNet
### - maps the private endpoint to the private zone ## Step 1: Fill in these details. Replace the variable values with the details for your environment. $SubscriptionId = "<your Azure subscription ID>"
-# Resource group where the Azure Cosmos account and virtual network resources are located
+# Resource group where the Azure Cosmos DB account and virtual network resources are located
$ResourceGroupName = "myResourceGroup"
-# Name of the Azure Cosmos account
+# Name of the Azure Cosmos DB account
$CosmosDbAccountName = "mycosmosaccount"
-# API type of the Azure Cosmos account. It can be one of the following: "Sql", "SqlDedicated", "MongoDB", "Cassandra", "Gremlin", "Table"
+# API type of the Azure Cosmos DB account. It can be one of the following: "Sql", "SqlDedicated", "MongoDB", "Cassandra", "Gremlin", "Table"
$CosmosDbSubResourceType = "Sql" # Name of the existing virtual network $VNetName = "myVnet"
When you're creating the private endpoint, you can integrate it with a private D
The following situations and outcomes are possible when you use Private Link in combination with firewall rules:
-* If you don't configure any firewall rules, then by default, all traffic can access an Azure Cosmos account.
+* If you don't configure any firewall rules, then by default, all traffic can access an Azure Cosmos DB account.
* If you configure public traffic or a service endpoint and you create private endpoints, then different types of incoming traffic are authorized by the corresponding type of firewall rule. If a private endpoint is configured in a subnet where service endpoint is also configured: * traffic to the database account mapped by the private endpoint is routed via private endpoint, * traffic to other database accounts from the subnet is routed via service endpoint.
-* If you don't configure any public traffic or service endpoint and you create private endpoints, then the Azure Cosmos account is accessible only through the private endpoints. If you don't configure public traffic or a service endpoint, after all approved private endpoints are rejected or deleted, the account is open to the entire network unless PublicNetworkAccess is set to Disabled (see section below).
+* If you don't configure any public traffic or service endpoint and you create private endpoints, then the Azure Cosmos DB account is accessible only through the private endpoints. If you don't configure public traffic or a service endpoint, after all approved private endpoints are rejected or deleted, the account is open to the entire network unless PublicNetworkAccess is set to Disabled (see section below).
## Blocking public network access during account creation
-As described in the previous section, and unless specific firewall rules have been set, adding a private endpoint makes your Azure Cosmos account accessible through private endpoints only. This means that the Azure Cosmos account could be reached from public traffic after it is created and before a private endpoint gets added. To make sure that public network access is disabled even before the creation of private endpoints, you can set the `publicNetworkAccess` flag to `Disabled` during account creation. Note that this flag takes precedence over any IP or virtual network rule; all public and virtual network traffic is blocked when the flag is set to `Disabled`, even if the source IP or virtual network is allowed in the firewall configuration.
+As described in the previous section, and unless specific firewall rules have been set, adding a private endpoint makes your Azure Cosmos DB account accessible through private endpoints only. This means that the Azure Cosmos DB account could be reached from public traffic after it is created and before a private endpoint gets added. To make sure that public network access is disabled even before the creation of private endpoints, you can set the `publicNetworkAccess` flag to `Disabled` during account creation. Note that this flag takes precedence over any IP or virtual network rule; all public and virtual network traffic is blocked when the flag is set to `Disabled`, even if the source IP or virtual network is allowed in the firewall configuration.
See [this Azure Resource Manager template](https://azure.microsoft.com/resources/templates/cosmosdb-private-endpoint/) for an example showing how to use this flag.
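A CLI-based equivalent, offered here as a hedged sketch rather than content from the linked template: the `--public-network-access` parameter on `az cosmosdb create` should have the same effect, with the account name, resource group, and region below being placeholder assumptions.

```azurecli-interactive
# Placeholder names; replace with your own values.
# Create the account with public network access blocked from the start.
az cosmosdb create \
    --name 'mycosmosaccount' \
    --resource-group 'MyResourceGroup' \
    --locations regionName='westus2' \
    --public-network-access Disabled
```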
-## Adding private endpoints to an existing Cosmos account with no downtime
+## Adding private endpoints to an existing Azure Cosmos DB account with no downtime
By default, adding a private endpoint to an existing account results in a short downtime of approximately 5 minutes. Follow the instructions below to avoid this downtime:
By default, adding a private endpoint to an existing account results in a short
## Port range when using direct mode
-When you're using Private Link with an Azure Cosmos account through a direct mode connection, you need to ensure that the full range of TCP ports (0 - 65535) is open.
+When you're using Private Link with an Azure Cosmos DB account through a direct mode connection, you need to ensure that the full range of TCP ports (0 - 65535) is open.
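If a network security group sits between your clients and the private endpoint, the rule below is a minimal sketch of what opening that range could look like; the NSG name, rule name, priority, and destination subnet prefix are placeholder assumptions, not values from this article.

```azurecli-interactive
# Placeholder names, priority, and subnet prefix; replace with your own values.
# Allow outbound TCP on all ports toward the subnet hosting the private endpoint.
az network nsg rule create \
    --resource-group 'MyResourceGroup' \
    --nsg-name 'myNsg' \
    --name 'allow-cosmos-direct-mode' \
    --priority 200 \
    --direction Outbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges '0-65535' \
    --destination-address-prefixes '10.0.2.0/24'
```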
## Update a private endpoint when you add or remove a region
-For example, if you deploy an Azure Cosmos account in three regions: "West US," "Central US," and "West Europe." When you create a private endpoint for your account, four private IPs are reserved in the subnet. There's one IP for each of the three regions, and there's one IP for the global/region-agnostic endpoint. Later, you might add a new region (for example, "East US") to the Azure Cosmos account. The private DNS zone is updated as follows:
+For example, suppose you deploy an Azure Cosmos DB account in three regions: "West US," "Central US," and "West Europe." When you create a private endpoint for your account, four private IPs are reserved in the subnet. There's one IP for each of the three regions, and there's one IP for the global/region-agnostic endpoint. Later, you might add a new region (for example, "East US") to the Azure Cosmos DB account. The private DNS zone is updated as follows:
* **If private DNS zone group is used:**
For example, if you deploy an Azure Cosmos account in three regions: "West US,"
* **If private DNS zone group is not used:**
- If you are not using a private DNS zone group, adding or removing regions to an Azure Cosmos account requires you to add or remove DNS entries for that account. After regions have been added or removed, you can update the subnet's private DNS zone to reflect the added or removed DNS entries and their corresponding private IP addresses.
+ If you are not using a private DNS zone group, adding regions to or removing regions from an Azure Cosmos DB account requires you to add or remove DNS entries for that account. After regions have been added or removed, you can update the subnet's private DNS zone to reflect the added or removed DNS entries and their corresponding private IP addresses.
In the previous example, after adding the new region, you need to add a corresponding DNS record to either your private DNS zone or your custom DNS. You can use the same steps when you remove a region. After removing the region, you need to remove the corresponding DNS record from either your private DNS zone or your custom DNS. ## Current limitations
-The following limitations apply when you're using Private Link with an Azure Cosmos account:
+The following limitations apply when you're using Private Link with an Azure Cosmos DB account:
-* You can't have more than 200 private endpoints on a single Azure Cosmos account.
+* You can't have more than 200 private endpoints on a single Azure Cosmos DB account.
-* When you're using Private Link with an Azure Cosmos account through a direct mode connection, you can use only the TCP protocol. The HTTP protocol is not currently supported.
+* When you're using Private Link with an Azure Cosmos DB account through a direct mode connection, you can use only the TCP protocol. The HTTP protocol is not currently supported.
* When you're using Azure Cosmos DB's API for MongoDB accounts, a private endpoint is supported for accounts on server version 3.6 or higher (that is, accounts using the endpoint in the format `*.mongo.cosmos.azure.com`). Private Link is not supported for accounts on server version 3.2 (that is, accounts using the endpoint in the format `*.documents.azure.com`). To use Private Link, you should migrate old accounts to the new version. * When you're using an Azure Cosmos DB's API for MongoDB account that has a Private Link, tools/libraries must support Service Name Identification (SNI) or pass the `appName` parameter from the connection string to properly connect. Some older tools/libraries may not be compatible to use the Private Link feature.
-* A network administrator should be granted at least the `Microsoft.DocumentDB/databaseAccounts/PrivateEndpointConnectionsApproval/action` permission at the Azure Cosmos account scope to create automatically approved private endpoints.
+* A network administrator should be granted at least the `Microsoft.DocumentDB/databaseAccounts/PrivateEndpointConnectionsApproval/action` permission at the Azure Cosmos DB account scope to create automatically approved private endpoints.
-* Currently, you can't approve a rejected private endpoint connection. Instead, re-create the private endpoint to resume the private connectivity. The Cosmos DB private link service automatically approves the re-created private endpoint.
+* Currently, you can't approve a rejected private endpoint connection. Instead, re-create the private endpoint to resume the private connectivity. The Azure Cosmos DB private link service automatically approves the re-created private endpoint.
### Limitations to private DNS zone integration
-Unless you're using a private DNS zone group, DNS records in the private DNS zone are not removed automatically when you delete a private endpoint or you remove a region from the Azure Cosmos account. You must manually remove the DNS records before:
+Unless you're using a private DNS zone group, DNS records in the private DNS zone are not removed automatically when you delete a private endpoint or you remove a region from the Azure Cosmos DB account. You must manually remove the DNS records before:
* Adding a new private endpoint linked to this private DNS zone. * Adding a new region to any database account that has private endpoints linked to this private DNS zone.
To learn more about Azure Cosmos DB security features, see the following article
* To configure a firewall for Azure Cosmos DB, see [Firewall support](how-to-configure-firewall.md).
-* To learn how to configure a virtual network service endpoint for your Azure Cosmos account, see [Configure access from virtual networks](how-to-configure-vnet-service-endpoint.md).
+* To learn how to configure a virtual network service endpoint for your Azure Cosmos DB account, see [Configure access from virtual networks](how-to-configure-vnet-service-endpoint.md).
* To learn more about Private Link, see the [Azure Private Link](../private-link/private-link-overview.md) documentation.
cosmos-db How To Configure Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-vnet-service-endpoint.md
Title: Configure virtual network based access for an Azure Cosmos account
+ Title: Configure virtual network based access for an Azure Cosmos DB account
description: This document describes the steps required to set up a virtual network service endpoint for Azure Cosmos DB. Last updated 08/25/2022 --+ # Configure access to Azure Cosmos DB from virtual networks (VNet)
-You can configure the Azure Cosmos account to allow access only from a specific subnet of a virtual network (VNET). Enable [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) on a subnet within a virtual network to control access to Azure Cosmos DB. The traffic from that subnet is sent to Azure Cosmos DB with the identity of the subnet and Virtual Network. Once the Azure Cosmos DB service endpoint is enabled, you can limit access to the subnet by adding it to your Azure Cosmos account.
+You can configure the Azure Cosmos DB account to allow access only from a specific subnet of a virtual network (VNET). Enable a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) on a subnet within a virtual network to control access to Azure Cosmos DB. The traffic from that subnet is sent to Azure Cosmos DB with the identity of the subnet and virtual network. Once the Azure Cosmos DB service endpoint is enabled, you can limit access to your Azure Cosmos DB account to that subnet by adding the subnet to the account.
-By default, an Azure Cosmos account is accessible from any source if the request is accompanied by a valid authorization token. When you add one or more subnets within VNets, only requests originating from those subnets will get a valid response. Requests originating from any other source will receive a 403 (Forbidden) response.
+By default, an Azure Cosmos DB account is accessible from any source if the request is accompanied by a valid authorization token. When you add one or more subnets within VNets, only requests originating from those subnets will get a valid response. Requests originating from any other source will receive a 403 (Forbidden) response.
You can configure Azure Cosmos DB accounts to allow access from only a specific subnet of an Azure virtual network. To limit access to an Azure Cosmos DB account with connections from a subnet in a virtual network:
Use the following steps to configure a service endpoint to an Azure Cosmos DB ac
$subnetId = $vnet.Id + "/subnets/" + $subnetName ```
-1. Prepare a Cosmos DB Virtual Network Rule
+1. Prepare an Azure Cosmos DB Virtual Network Rule
```powershell $vnetRule = New-AzCosmosDBVirtualNetworkRule `
Use the following steps to configure a service endpoint to an Azure Cosmos DB ac
1. Update Azure Cosmos DB account properties with the new Virtual Network endpoint configuration: ```powershell
- $accountName = "<Cosmos DB account name>"
+ $accountName = "<Azure Cosmos DB account name>"
Update-AzCosmosDBAccount ` -ResourceGroupName $resourceGroupName `
Use the following steps to configure a service endpoint to an Azure Cosmos DB ac
## <a id="configure-using-cli"></a>Configure a service endpoint by using the Azure CLI
-Azure Cosmos accounts can be configured for service endpoints when they're created or updated at a later time if the subnet is already configured for them. Service endpoints can also be enabled on the Cosmos account where the subnet isn't yet configured. Then the service endpoint will begin to work when the subnet is configured later. This flexibility allows for administrators who don't have access to both the Cosmos account and virtual network resources to make their configurations independent of each other.
+Azure Cosmos DB accounts can be configured for service endpoints when they're created or updated at a later time if the subnet is already configured for them. Service endpoints can also be enabled on the Azure Cosmos DB account where the subnet isn't yet configured. Then the service endpoint will begin to work when the subnet is configured later. This flexibility allows for administrators who don't have access to both the Azure Cosmos DB account and virtual network resources to make their configurations independent of each other.
-### Create a new Cosmos account and connect it to a back end subnet for a new virtual network
+### Create a new Azure Cosmos DB account and connect it to a back end subnet for a new virtual network
In this example, the virtual network and subnet are created with service endpoints enabled for both when they're created. ```azurecli-interactive
-# Create an Azure Cosmos Account with a service endpoint connected to a backend subnet
+# Create an Azure Cosmos DB Account with a service endpoint connected to a backend subnet
-# Resource group and Cosmos account variables
+# Resource group and Azure Cosmos DB account variables
resourceGroupName='MyResourceGroup' location='West US 2' accountName='mycosmosaccount'
az network vnet create \
--subnet-name $frontEnd \ --subnet-prefix 10.0.1.0/24
-# Create a back-end subnet with service endpoints enabled for Cosmos DB
+# Create a back-end subnet with service endpoints enabled for Azure Cosmos DB
az network vnet subnet create \ -n $backEnd \ -g $resourceGroupName \
az network vnet subnet create \
svcEndpoint=$(az network vnet subnet show -g $resourceGroupName -n $backEnd --vnet-name $vnetName --query 'id' -o tsv)
-# Create a Cosmos DB account with default values and service endpoints
+# Create an Azure Cosmos DB account with default values and service endpoints
az cosmosdb create \ -n $accountName \ -g $resourceGroupName \
az cosmosdb create \
--virtual-network-rules $svcEndpoint ```
-### Connect and configure a Cosmos account to a back end subnet independently
+### Connect and configure an Azure Cosmos DB account to a back end subnet independently
-This sample is intended to show how to connect an Azure Cosmos account to an existing or new virtual network. In this example, the subnet isn't yet configured for service endpoints. Configure the service endpoint by using the `--ignore-missing-vnet-service-endpoint` parameter. This configuration allows the Cosmos DB account to complete without error before the configuration to the virtual network's subnet is complete. Once the subnet configuration is complete, the Cosmos account will then be accessible through the configured subnet.
+This sample is intended to show how to connect an Azure Cosmos DB account to an existing or new virtual network. In this example, the subnet isn't yet configured for service endpoints. Configure the service endpoint by using the `--ignore-missing-vnet-service-endpoint` parameter. This configuration allows the Azure Cosmos DB account creation to complete without error before the configuration of the virtual network's subnet is complete. Once the subnet configuration is complete, the Azure Cosmos DB account will be accessible through the configured subnet.
```azurecli-interactive
-# Create an Azure Cosmos Account with a service endpoint connected to a backend subnet
+# Create an Azure Cosmos DB Account with a service endpoint connected to a backend subnet
# that is not yet enabled for service endpoints.
-# Resource group and Cosmos account variables
+# Resource group and Azure Cosmos DB account variables
resourceGroupName='MyResourceGroup' location='West US 2' accountName='mycosmosaccount'
az network vnet subnet create \
svcEndpoint=$(az network vnet subnet show -g $resourceGroupName -n $backEnd --vnet-name $vnetName --query 'id' -o tsv)
-# Create a Cosmos DB account with default values
+# Create an Azure Cosmos DB account with default values
az cosmosdb create -n $accountName -g $resourceGroupName # Add the virtual network rule but ignore the missing service endpoint on the subnet
az network vnet subnet update \
## Port range when using direct mode
-When you're using service endpoints with an Azure Cosmos account through a direct mode connection, you need to ensure that the TCP port range from 10000 to 20000 is open.
+When you're using service endpoints with an Azure Cosmos DB account through a direct mode connection, you need to ensure that the TCP port range from 10000 to 20000 is open.
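If a network security group on the subnet restricts outbound traffic, you might open that range with a rule similar to the sketch below. The resource names are placeholders, and the use of the `AzureCosmosDB` service tag and the rule priority are assumptions to adapt to your environment.

```azurecli-interactive
# Allow outbound TCP 443 plus the 10000-20000 range used by direct mode connections
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyNsg \
  --name allow-cosmosdb-direct-mode \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes AzureCosmosDB \
  --destination-port-ranges 443 10000-20000
```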
## <a id="migrate-from-firewall-to-vnet"></a>Migrating from an IP firewall rule to a virtual network ACL
Here are some frequently asked questions about configuring access from virtual n
### Are Notebooks and Mongo/Cassandra Shell currently compatible with Virtual Network enabled accounts?
-At the moment the [Mongo shell](https://devblogs.microsoft.com/cosmosdb/preview-native-mongo-shell/) and [Cassandra shell](https://devblogs.microsoft.com/cosmosdb/announcing-native-cassandra-shell-preview/) integrations in the Cosmos DB Data Explorer, and the [Jupyter Notebooks service](./notebooks-overview.md), aren't supported with VNET access. This integration is currently in active development.
+At the moment, the [Mongo shell](https://devblogs.microsoft.com/cosmosdb/preview-native-mongo-shell/) and [Cassandra shell](https://devblogs.microsoft.com/cosmosdb/announcing-native-cassandra-shell-preview/) integrations in the Azure Cosmos DB Data Explorer, and the [Jupyter Notebooks service](./notebooks-overview.md), aren't supported with VNET access. These integrations are currently in active development.
-### Can I specify both virtual network service endpoint and IP access control policy on an Azure Cosmos account?
+### Can I specify both virtual network service endpoint and IP access control policy on an Azure Cosmos DB account?
-You can enable both the virtual network service endpoint and an IP access control policy (also known as firewall) on your Azure Cosmos account. These two features are complementary and collectively ensure isolation and security of your Azure Cosmos account. Using IP firewall ensures that static IPs can access your account.
+You can enable both the virtual network service endpoint and an IP access control policy (also known as firewall) on your Azure Cosmos DB account. These two features are complementary and collectively ensure isolation and security of your Azure Cosmos DB account. Using IP firewall ensures that static IPs can access your account.
### How do I limit access to a subnet within a virtual network?
-There are two steps required to limit access to Azure Cosmos account from a subnet. First, you allow traffic from subnet to carry its subnet and virtual network identity to Azure Cosmos DB. Changing the identity of the traffic is done by enabling service endpoint for Azure Cosmos DB on the subnet. Next is adding a rule in the Azure Cosmos account specifying this subnet as a source from which account can be accessed.
+There are two steps required to limit access to an Azure Cosmos DB account from a subnet. First, you allow traffic from the subnet to carry its subnet and virtual network identity to Azure Cosmos DB, which you do by enabling the service endpoint for Azure Cosmos DB on the subnet. Next, you add a rule to the Azure Cosmos DB account that specifies this subnet as a source from which the account can be accessed.
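As a rough Azure CLI sketch of those two steps (the resource names are placeholders, not values from this article):

```azurecli-interactive
# Step 1: enable the Azure Cosmos DB service endpoint on the subnet so its
# traffic carries the virtual network and subnet identity.
az network vnet subnet update \
  --resource-group MyResourceGroup \
  --vnet-name MyVnet \
  --name MySubnet \
  --service-endpoints Microsoft.AzureCosmosDB

# Step 2: add that subnet as an allowed source on the account.
subnetId=$(az network vnet subnet show \
  --resource-group MyResourceGroup \
  --vnet-name MyVnet \
  --name MySubnet \
  --query id -o tsv)

az cosmosdb network-rule add \
  --resource-group MyResourceGroup \
  --name mycosmosaccount \
  --subnet $subnetId
```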
### Will virtual network ACLs and IP Firewall reject requests or connections?
-When IP firewall or virtual network access rules are added, only requests from allowed sources get valid responses. Other requests are rejected with a 403 (Forbidden). It's important to distinguish Azure Cosmos account's firewall from a connection level firewall. The source can still connect to the service and the connections themselves aren't rejected.
+When IP firewall or virtual network access rules are added, only requests from allowed sources get valid responses. Other requests are rejected with a 403 (Forbidden). It's important to distinguish the Azure Cosmos DB account's firewall from a connection-level firewall. The source can still connect to the service, and the connections themselves aren't rejected.
### My requests started getting blocked when I enabled service endpoint to Azure Cosmos DB on the subnet. What happened?
-Once service endpoint for Azure Cosmos DB is enabled on a subnet, the source of the traffic reaching the account switches from public IP to virtual network and subnet. If your Azure Cosmos account has IP-based firewall only, traffic from service enabled subnet would no longer match the IP firewall rules, and therefore be rejected. Go over the steps to seamlessly migrate from IP-based firewall to virtual network-based access control.
+Once the service endpoint for Azure Cosmos DB is enabled on a subnet, the source of the traffic reaching the account switches from a public IP to the virtual network and subnet. If your Azure Cosmos DB account has an IP-based firewall only, traffic from the service-enabled subnet no longer matches the IP firewall rules and is therefore rejected. Go over the steps to seamlessly migrate from an IP-based firewall to virtual network-based access control.
-### Are extra Azure role-based access control permissions needed for Azure Cosmos accounts with VNET service endpoints?
+### Are extra Azure role-based access control permissions needed for Azure Cosmos DB accounts with VNET service endpoints?
-After you add the VNet service endpoints to an Azure Cosmos account, to make any changes to the account settings, you need access to the `Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action` action for all the VNETs configured on your Azure Cosmos account. This permission is required because the authorization process validates access to resources (such as database and virtual network resources) before evaluating any properties.
+After you add the VNet service endpoints to an Azure Cosmos DB account, to make any changes to the account settings, you need access to the `Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action` action for all the VNETs configured on your Azure Cosmos DB account. This permission is required because the authorization process validates access to resources (such as database and virtual network resources) before evaluating any properties.
-The authorization validates permission for VNet resource action even if the user doesn't specify the VNET ACLs using Azure CLI. Currently, the Azure Cosmos account's control plane supports setting the complete state of the Azure Cosmos account. One of the parameters to the control plane calls is `virtualNetworkRules`. If this parameter isn't specified, the Azure CLI makes a get database call to retrieve the `virtualNetworkRules` and uses this value in the update call.
+The authorization validates permission for VNet resource action even if the user doesn't specify the VNET ACLs using Azure CLI. Currently, the Azure Cosmos DB account's control plane supports setting the complete state of the Azure Cosmos DB account. One of the parameters to the control plane calls is `virtualNetworkRules`. If this parameter isn't specified, the Azure CLI makes a get database call to retrieve the `virtualNetworkRules` and uses this value in the update call.
-### Do the peered virtual networks also have access to Azure Cosmos account?
+### Do the peered virtual networks also have access to Azure Cosmos DB account?
-Only virtual network and their subnets added to Azure Cosmos account have access. Their peered VNets can't access the account until the subnets within peered virtual networks are added to the account.
+Only the virtual networks and their subnets that are added to the Azure Cosmos DB account have access. Peered VNets can't access the account until the subnets within the peered virtual networks are added to the account.
-### What is the maximum number of subnets allowed to access a single Cosmos account?
+### What is the maximum number of subnets allowed to access a single Azure Cosmos DB account?
-Currently, you can have at most 256 subnets allowed for an Azure Cosmos account.
+Currently, you can have at most 256 subnets allowed for an Azure Cosmos DB account.
### Can I enable access from VPN and Express Route?
-For accessing Azure Cosmos account over Express route from on premises, you would need to enable Microsoft peering. Once you put IP firewall or virtual network access rules, you can add the public IP addresses used for Microsoft peering on your Azure Cosmos account IP firewall to allow on premises services access to Azure Cosmos account.
+To access an Azure Cosmos DB account over ExpressRoute from on-premises, you need to enable Microsoft peering. Once you put IP firewall or virtual network access rules in place, you can add the public IP addresses used for Microsoft peering to your Azure Cosmos DB account's IP firewall to allow on-premises services access to the Azure Cosmos DB account.
### Do I need to update the Network Security Groups (NSG) rules?
-NSG rules are used to limit connectivity to and from a subnet with virtual network. When you add service endpoint for Azure Cosmos DB to the subnet, there's no need to open outbound connectivity in NSG for your Azure Cosmos account.
+NSG rules are used to limit connectivity to and from a subnet within a virtual network. When you add the service endpoint for Azure Cosmos DB to the subnet, there's no need to open outbound connectivity in the NSG for your Azure Cosmos DB account.
### Are service endpoints available for all VNets?
cosmos-db How To Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-container-copy.md
Title: Create and manage intra-account container copy jobs in Azure Cosmos DB
description: Learn how to create, monitor, and manage container copy jobs within an Azure Cosmos DB account using CLI commands.
Last updated 08/01/2022
# Create and manage intra-account container copy jobs in Azure Cosmos DB (Preview)
[Container copy jobs](intra-account-container-copy.md) help create offline copies of containers within an Azure Cosmos DB account.
This article describes how to create, monitor, and manage intra-account containe
* Currently, container copy is only supported in [these regions](intra-account-container-copy.md#supported-regions). Make sure your account's write region belongs to this list.
-## Install the Cosmos DB preview extension
+## Install the Azure Cosmos DB preview extension
This extension contains the container copy commands.
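Assuming the extension is published under the name `cosmosdb-preview`, it can be installed (or refreshed) as follows:

```azurecli-interactive
# Install the preview extension that provides the container copy (dts) commands;
# --upgrade refreshes it if an older version is already installed.
az extension add --name cosmosdb-preview --upgrade
```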
$destinationDatabase = ""
$destinationContainer = "" ```
-## Create an intra-account container copy job for SQL API account
+## Create an intra-account container copy job for API for NoSQL account
-Create a job to copy a container within an Azure Cosmos DB SQL API account:
+Create a job to copy a container within an Azure Cosmos DB API for NoSQL account:
```azurepowershell-interactive az cosmosdb dts copy `
az cosmosdb dts copy `
--dest-sql-container database=$destinationDatabase container=$destinationContainer ```
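Putting the fragments above together, a complete invocation might look like the sketch below. It uses Bash-style Azure CLI line continuations, and the `--account-name`, `--job-name`, and `--source-sql-container` parameter names are assumptions based on the preview extension, so check `az cosmosdb dts copy --help` for the exact signature.

```azurecli-interactive
# Copy a container within the same account (all values are placeholders)
az cosmosdb dts copy \
  --resource-group MyResourceGroup \
  --account-name mycosmosaccount \
  --job-name copy-job-01 \
  --source-sql-container database=sourceDb container=sourceContainer \
  --dest-sql-container database=destDb container=destContainer

# Check the job's progress afterwards (assumed monitoring command from the same extension)
az cosmosdb dts show \
  --resource-group MyResourceGroup \
  --account-name mycosmosaccount \
  --job-name copy-job-01
```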
-## Create intra-account container copy job for Cassandra API account
+## Create intra-account container copy job for API for Cassandra account
-Create a job to copy a container within an Azure Cosmos DB Cassandra API account:
+Create a job to copy a container within an Azure Cosmos DB API for Cassandra account:
```azurepowershell-interactive az cosmosdb dts copy `
cosmos-db How To Define Unique Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-define-unique-keys.md
Title: Define unique keys for an Azure Cosmos container
-description: Learn how to define unique keys for an Azure Cosmos container using Azure portal, PowerShell, .NET, Java, and various other SDKs.
+ Title: Define unique keys for an Azure Cosmos DB container
+description: Learn how to define unique keys for an Azure Cosmos DB container using Azure portal, PowerShell, .NET, Java, and various other SDKs.
Last updated 12/02/2019
-# Define unique keys for an Azure Cosmos container
+# Define unique keys for an Azure Cosmos DB container
-This article presents the different ways to define [unique keys](unique-keys.md) when creating an Azure Cosmos container. It's currently possible to perform this operation either by using the Azure portal or through one of the SDKs.
+This article presents the different ways to define [unique keys](unique-keys.md) when creating an Azure Cosmos DB container. It's currently possible to perform this operation either by using the Azure portal or through one of the SDKs.
## Use the Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. [Create a new Azure Cosmos account](create-sql-api-dotnet.md#create-account) or select an existing one.
+1. [Create a new Azure Cosmos DB account](create-sql-api-dotnet.md#create-account) or select an existing one.
1. Open the **Data Explorer** pane and select the container that you want to work on.
This article presents the different ways to define [unique keys](unique-keys.md)
## Use PowerShell
-To create a container with unique keys see, [Create an Azure Cosmos container with unique key and TTL](manage-with-powershell.md#create-container-unique-key-ttl)
+To create a container with unique keys, see [Create an Azure Cosmos DB container with unique key and TTL](manage-with-powershell.md#create-container-unique-key-ttl).
## Use the .NET SDK
cosmos-db How To Manage Database Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-manage-database-account.md
Title: Learn how to manage database accounts in Azure Cosmos DB
description: Learn how to manage Azure Cosmos DB resources by using the Azure portal, PowerShell, CLI, and Azure Resource Manager templates
Last updated 09/13/2021
-# Manage an Azure Cosmos account using the Azure portal
+# Manage an Azure Cosmos DB account using the Azure portal
-This article describes how to manage various tasks on an Azure Cosmos account using the Azure portal.
+This article describes how to manage various tasks on an Azure Cosmos DB account using the Azure portal.
> [!TIP] > Azure Cosmos DB can also be managed with other Azure management clients including [Azure PowerShell](manage-with-powershell.md), [Azure CLI](sql/manage-with-cli.md), [Azure Resource Manager templates](./manage-with-templates.md), and [Bicep](sql/manage-with-bicep.md).
This article describes how to manage various tasks on an Azure Cosmos account us
1. Sign in to [Azure portal](https://portal.azure.com).
-1. Go to your Azure Cosmos account, and select **Replicate data globally** in the resource menu.
+1. Go to your Azure Cosmos DB account, and select **Replicate data globally** in the resource menu.
1. To add regions, select the hexagons on the map with the **+** label that corresponds to your desired region(s). Alternatively, to add a region, select the **+ Add region** option and choose a region from the drop-down menu.
In a multi-region write mode, you can add or remove any region, if you have at l
Open the **Replicate Data Globally** tab and select **Enable** to enable multi-region writes. After you enable multi-region writes, all the read regions that you currently have on the account will become read and write regions.
-## <a id="automatic-failover"></a>Enable service-managed failover for your Azure Cosmos account
+## <a id="automatic-failover"></a>Enable service-managed failover for your Azure Cosmos DB account
The service-managed failover option allows Azure Cosmos DB to fail over to the region with the highest failover priority, with no user action, should a region become unavailable. When service-managed failover is enabled, region priority can be modified. The account must have two or more regions to enable service-managed failover. A CLI sketch of the same setting appears after the steps below.
-1. From your Azure Cosmos account, open the **Replicate data globally** pane.
+1. From your Azure Cosmos DB account, open the **Replicate data globally** pane.
2. At the top of the pane, select **Automatic Failover**.
The Service-Managed failover option allows Azure Cosmos DB to fail over to the r
:::image type="content" source="./media/how-to-manage-database-account/automatic-failover.png" alt-text="Automatic failover portal menu":::
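Although this article walks through the Azure portal, the same setting can also be scripted. A minimal Azure CLI sketch with placeholder names:

```azurecli-interactive
# Turn on service-managed (automatic) failover for an existing account
az cosmosdb update \
  --resource-group MyResourceGroup \
  --name mycosmosaccount \
  --enable-automatic-failover true
```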
-## Set failover priorities for your Azure Cosmos account
+## Set failover priorities for your Azure Cosmos DB account
-After a Cosmos account is configured for automatic failover, the failover priority for regions can be changed.
+After an Azure Cosmos DB account is configured for automatic failover, the failover priority for regions can be changed.
> [!IMPORTANT] > You cannot modify the write region (failover priority of zero) when the account is configured for service-managed failover. To change the write region, you must disable service-managed failover and do a manual failover.
-1. From your Azure Cosmos account, open the **Replicate data globally** pane.
+1. From your Azure Cosmos DB account, open the **Replicate data globally** pane.
2. At the top of the pane, select **Automatic Failover**.
After a Cosmos account is configured for automatic failover, the failover priori
:::image type="content" source="./media/how-to-manage-database-account/automatic-failover.png" alt-text="Automatic failover portal menu":::
-## <a id="manual-failover"></a>Perform manual failover on an Azure Cosmos account
+## <a id="manual-failover"></a>Perform manual failover on an Azure Cosmos DB account
> [!IMPORTANT]
-> The Azure Cosmos account must be configured for manual failover for this operation to succeed.
+> The Azure Cosmos DB account must be configured for manual failover for this operation to succeed.
> [!NOTE] > If you perform a manual failover operation while an [asynchronous throughput scaling operation](scaling-provisioned-throughput-best-practices.md#background-on-scaling-rus) is in progress, the throughput scale-up operation will be paused. It will resume automatically when the failover operation is complete.
-1. Go to your Azure Cosmos account, and open the **Replicate data globally** menu.
+1. Go to your Azure Cosmos DB account, and open the **Replicate data globally** menu.
2. At the top of the menu, select **Manual Failover**.
After a Cosmos account is configured for automatic failover, the failover priori
## Next steps
-For more information and examples on how to manage the Azure Cosmos account as well as database and containers, read the following articles:
+For more information and examples on how to manage the Azure Cosmos DB account as well as databases and containers, read the following articles:
* [Manage Azure Cosmos DB using Azure PowerShell](manage-with-powershell.md) * [Manage Azure Cosmos DB using Azure CLI](sql/manage-with-cli.md)
cosmos-db How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-move-regions.md
Title: Move an Azure Cosmos DB account to another region
description: Learn how to move an Azure Cosmos DB account to another region.
Last updated 03/15/2022
# Move an Azure Cosmos DB account to another region
This article describes how to either:
Azure Cosmos DB supports data replication natively, so moving data from one regi
When the region that's being removed is currently the write region for the account, you'll need to start a failover to the new region added in the previous step. This is a zero-downtime operation. If you're moving a read region in a multiple-region account, you can skip this step.
- To start a failover, see [Perform manual failover on an Azure Cosmos account](how-to-manage-database-account.md#manual-failover).
+ To start a failover, see [Perform manual failover on an Azure Cosmos DB account](how-to-manage-database-account.md#manual-failover).
1. Remove the original region.
Azure Cosmos DB does not natively support migrating account metadata from one re
> [!IMPORTANT] > It is not necessary to migrate the account metadata if the data is stored or moved to a different region. The region in which the account metadata resides has no impact on the performance, security or any other operational aspects of your Azure Cosmos DB account.
-A near-zero-downtime migration for the SQL API requires the use of the [change feed](change-feed.md) or a tool that uses it. If you're migrating the MongoDB API, the Cassandra API, or another API, or to learn more about options for migrating data between accounts, see [Options to migrate your on-premises or cloud data to Azure Cosmos DB](cosmosdb-migrationchoices.md).
+A near-zero-downtime migration for the API for NoSQL requires the use of the [change feed](change-feed.md) or a tool that uses it. If you're migrating from the API for MongoDB, Cassandra, or another API, or to learn more about options for migrating data between accounts, see [Options to migrate your on-premises or cloud data to Azure Cosmos DB](migration-choices.md).
-The following steps demonstrate how to migrate an Azure Cosmos DB account for the SQL API and its data from one region to another:
+The following steps demonstrate how to migrate an Azure Cosmos DB account for the API for NoSQL and its data from one region to another:
1. Create a new Azure Cosmos DB account in the desired region.
The following steps demonstrate how to migrate an Azure Cosmos DB account for th
1. Create a new database and container.
- To create a new database and container, see [Create an Azure Cosmos container](how-to-create-container.md).
+ To create a new database and container, see [Create an Azure Cosmos DB container](how-to-create-container.md).
1. Migrate data by using the Azure Cosmos DB Live Data Migrator tool.
The following steps demonstrate how to migrate an Azure Cosmos DB account for th
## Next steps
-For more information and examples on how to manage the Azure Cosmos account as well as databases and containers, read the following articles:
+For more information and examples on how to manage the Azure Cosmos DB account as well as databases and containers, read the following articles:
-* [Manage an Azure Cosmos account](how-to-manage-database-account.md)
+* [Manage an Azure Cosmos DB account](how-to-manage-database-account.md)
* [Change feed in Azure Cosmos DB](change-feed.md)
cosmos-db How To Restrict User Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-restrict-user-data.md
Last updated 12/9/2019
# Restrict user access to data operations in Azure Cosmos DB
In Azure Cosmos DB, there are two ways to authenticate your interactions with the database service:
The next sections of this article show how to perform these steps.
> In order to execute the commands in the next sections, you need to install Azure PowerShell Module 3.0.0 or later, as well as the [Azure Owner Role](../role-based-access-control/built-in-roles.md#owner) on the subscription that you are trying to modify. In the PowerShell scripts in the next sections, substitute the following placeholders with values specific to your environment:-- `$MySubscriptionId` - The subscription ID that contains the Azure Cosmos account where you want to limit the permissions. For example: `e5c8766a-eeb0-40e8-af56-0eb142ebf78e`.-- `$MyResourceGroupName` - The resource group containing the Azure Cosmos account. For example: `myresourcegroup`.-- `$MyAzureCosmosDBAccountName` - The name of your Azure Cosmos account. For example: `mycosmosdbsaccount`.
+- `$MySubscriptionId` - The subscription ID that contains the Azure Cosmos DB account where you want to limit the permissions. For example: `e5c8766a-eeb0-40e8-af56-0eb142ebf78e`.
+- `$MyResourceGroupName` - The resource group containing the Azure Cosmos DB account. For example: `myresourcegroup`.
+- `$MyAzureCosmosDBAccountName` - The name of your Azure Cosmos DB account. For example: `mycosmosdbsaccount`.
- `$MyUserName` - The login (username@domain) of the user for whom you want to limit access. For example: `cosmosdbuser@contoso.com`. ## Select your Azure subscription
Select-AzSubscription $MySubscriptionId
## Create the custom Azure Active Directory role
-The following script creates an Azure Active Directory role assignment with "Key Only" access for Azure Cosmos accounts. The role is based on [Azure custom roles](../role-based-access-control/custom-roles.md) and [Granular actions for Azure Cosmos DB](../role-based-access-control/resource-provider-operations.md#microsoftdocumentdb). These roles and actions are part of the `Microsoft.DocumentDB` Azure Active Directory namespace.
+The following script creates an Azure Active Directory role assignment with "Key Only" access for Azure Cosmos DB accounts. The role is based on [Azure custom roles](../role-based-access-control/custom-roles.md) and [Granular actions for Azure Cosmos DB](../role-based-access-control/resource-provider-operations.md#microsoftdocumentdb). These roles and actions are part of the `Microsoft.DocumentDB` Azure Active Directory namespace.
1. First, create a JSON document named `AzureCosmosKeyOnlyAccess.json` with the following content:
$cdba | Set-AzResource -Force
## Next steps -- Learn more about [Cosmos DB's role-based access control](role-based-access-control.md)-- Get an overview of [secure access to data in Cosmos DB](secure-access-to-data.md)
+- Learn more about [Azure Cosmos DB's role-based access control](role-based-access-control.md)
+- Get an overview of [secure access to data in Azure Cosmos DB](secure-access-to-data.md)
cosmos-db How To Setup Cross Tenant Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-cross-tenant-customer-managed-keys.md
description: Learn how to configure encryption with customer-managed keys for Az
+ Last updated 09/27/2022
# Configure cross-tenant customer-managed keys for your Azure Cosmos DB account with Azure Key Vault (preview)
Data stored in your Azure Cosmos DB account is automatically and seamlessly encrypted with service-managed keys managed by Microsoft. However, you can choose to add a second layer of encryption with keys you manage. These keys are known as customer-managed keys (or CMK). Customer-managed keys are stored in an Azure Key Vault instance.
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
Last updated 07/20/2022
ms.devlang: azurecli
-# Configure customer-managed keys for your Azure Cosmos account with Azure Key Vault
+# Configure customer-managed keys for your Azure Cosmos DB account with Azure Key Vault
-Data stored in your Azure Cosmos account is automatically and seamlessly encrypted with keys managed by Microsoft (**service-managed keys**). Optionally, you can choose to add a second layer of encryption with keys you manage (**customer-managed keys** or CMK).
+
+Data stored in your Azure Cosmos DB account is automatically and seamlessly encrypted with keys managed by Microsoft (**service-managed keys**). Optionally, you can choose to add a second layer of encryption with keys you manage (**customer-managed keys** or CMK).
:::image type="content" source="./media/how-to-setup-cmk/cmk-intro.png" alt-text="Layers of encryption around customer data":::
-You must store customer-managed keys in [Azure Key Vault](../key-vault/general/overview.md) and provide a key for each Azure Cosmos account that is enabled with customer-managed keys. This key is used to encrypt all the data stored in that account.
+You must store customer-managed keys in [Azure Key Vault](../key-vault/general/overview.md) and provide a key for each Azure Cosmos DB account that is enabled with customer-managed keys. This key is used to encrypt all the data stored in that account.
> [!NOTE]
-> Currently, customer-managed keys are available only for new Azure Cosmos accounts. You should configure them during account creation.
+> Currently, customer-managed keys are available only for new Azure Cosmos DB accounts. You should configure them during account creation.
## <a id="register-resource-provider"></a> Register the Azure Cosmos DB resource provider for your Azure subscription
If you're using an existing Azure Key Vault instance, you can verify that these
:::image type="content" source="./media/how-to-setup-cmk/portal-akv-keyid.png" alt-text="Copying the key's key identifier":::
-## Create a new Azure Cosmos account
+## <a id="create-a-new-azure-cosmos-account"></a>Create a new Azure Cosmos DB account
### Using the Azure portal
Get-AzResource -ResourceGroupName $resourceGroupName -Name $accountName `
### Using an Azure Resource Manager template
-When you create a new Azure Cosmos account through an Azure Resource Manager template:
+When you create a new Azure Cosmos DB account through an Azure Resource Manager template:
- Pass the URI of the Azure Key Vault key that you copied earlier under the **keyVaultKeyUri** property in the **properties** object.
New-AzResourceGroupDeployment `
### <a id="using-azure-cli"></a> Using the Azure CLI
-When you create a new Azure Cosmos account through the Azure CLI, pass the URI of the Azure Key Vault key that you copied earlier under the `--key-uri` parameter.
+When you create a new Azure Cosmos DB account through the Azure CLI, pass the URI of the Azure Key Vault key that you copied earlier under the `--key-uri` parameter.
```azurecli-interactive resourceGroupName='myResourceGroup'
az cosmosdb create \
### To create a continuous backup account by using an Azure Resource Manager template
-When you create a new Azure Cosmos account through an Azure Resource Manager template:
+When you create a new Azure Cosmos DB account through an Azure Resource Manager template:
- Pass the URI of the Azure Key Vault key that you copied earlier under the **keyVaultKeyUri** property in the **properties** object. - Use **2021-11-15** or later as the API version.
Double encryption only applies to the main Azure Cosmos DB transactional storage
## Key rotation
-Rotating the customer-managed key used by your Azure Cosmos account can be done in two ways.
+Rotating the customer-managed key used by your Azure Cosmos DB account can be done in two ways.
- Create a new version of the key currently used from Azure Key Vault: :::image type="content" source="./media/how-to-setup-cmk/portal-akv-rot.png" alt-text="Screenshot of the New Version option in the Versions page of the Azure portal."::: -- Swap the key currently used with a different one by updating the key URI on your account. From the Azure portal, go to your Azure Cosmos account and select **Data Encryption** from the left menu:
+- Swap the key currently used with a different one by updating the key URI on your account (a CLI sketch of this option follows). From the Azure portal, go to your Azure Cosmos DB account and select **Data Encryption** from the left menu:
:::image type="content" source="./media/how-to-setup-cmk/portal-data-encryption.png" alt-text="Screenshot of the Data Encryption menu option in the Azure portal.":::
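If you prefer scripting the swap, here's a hedged sketch with placeholder names; it assumes your Azure CLI version exposes the `--key-uri` parameter on `az cosmosdb update`:

```azurecli-interactive
# Point the account at a different key (or a new key version) by updating the key URI
az cosmosdb update \
  --resource-group MyResourceGroup \
  --name mycosmosaccount \
  --key-uri "https://mykeyvault.vault.azure.net/keys/mynewkey"
```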
No, there's no charge to enable this feature.
### What data gets encrypted with the customer-managed keys?
-All the data stored in your Azure Cosmos account is encrypted with the customer-managed keys, except for the following metadata:
+All the data stored in your Azure Cosmos DB account is encrypted with the customer-managed keys, except for the following metadata:
- The names of your Azure Cosmos DB [accounts, databases, and containers](./account-databases-containers-items.md#elements-in-an-azure-cosmos-db-account)
All the data stored in your Azure Cosmos account is encrypted with the customer-
- The values of your containers' [partition keys](./partitioning-overview.md)
-### Are customer-managed keys supported for existing Azure Cosmos accounts?
+### Are customer-managed keys supported for existing Azure Cosmos DB accounts?
This feature is currently available only for new accounts.
Yes, Azure Synapse Link only supports configuring customer-managed keys using yo
Not currently, but container-level keys are being considered.
-### How can I tell if customer-managed keys are enabled on my Azure Cosmos account?
+### How can I tell if customer-managed keys are enabled on my Azure Cosmos DB account?
-From the Azure portal, go to your Azure Cosmos account and watch for the **Data Encryption** entry in the left menu; if this entry exists, customer-managed keys are enabled on your account:
+From the Azure portal, go to your Azure Cosmos DB account and watch for the **Data Encryption** entry in the left menu; if this entry exists, customer-managed keys are enabled on your account:
:::image type="content" source="./media/how-to-setup-cmk/portal-data-encryption.png" alt-text="The Data Encryption menu entry":::
-You can also programmatically fetch the details of your Azure Cosmos account and look for the presence of the `keyVaultKeyUri` property. See above for ways to do that [in PowerShell](#using-powershell) and [using the Azure CLI](#using-azure-cli).
+You can also programmatically fetch the details of your Azure Cosmos DB account and look for the presence of the `keyVaultKeyUri` property. See above for ways to do that [in PowerShell](#using-powershell) and [using the Azure CLI](#using-azure-cli).
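For example, from the Azure CLI (placeholder names; this assumes the account's key URI surfaces as `keyVaultKeyUri` in the command output):

```azurecli-interactive
# An empty result means customer-managed keys aren't enabled on the account
az cosmosdb show \
  --resource-group MyResourceGroup \
  --name mycosmosaccount \
  --query keyVaultKeyUri \
  --output tsv
```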
### How do customer-managed keys affect periodic backups?
The only operation possible when the encryption key has been revoked is account
## Next steps - Learn more about [data encryption in Azure Cosmos DB](./database-encryption-at-rest.md).-- Get an overview of [secure access to data in Cosmos DB](secure-access-to-data.md).
+- Get an overview of [secure access to data in Azure Cosmos DB](secure-access-to-data.md).
cosmos-db How To Setup Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-managed-identity.md
Title: Configure managed identities with Azure AD for your Azure Cosmos DB accou
description: Learn how to configure managed identities with Azure Active Directory for your Azure Cosmos DB account
Last updated 10/15/2021
# Configure managed identities with Azure Active Directory for your Azure Cosmos DB account
Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure Active Directory. This article shows how to create a managed identity for Azure Cosmos DB accounts.
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
Title: Configure role-based access control for your Azure Cosmos DB account with
description: Learn how to configure role-based access control with Azure Active Directory for your Azure Cosmos DB account
Last updated 02/16/2022
# Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account
> [!NOTE]
> This article is about role-based access control for data plane operations in Azure Cosmos DB. If you are using management plane operations, see the [role-based access control](role-based-access-control.md) article that applies to management plane operations.
The table below lists all the actions exposed by the permission model.
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/replace` | Replace an existing item. |
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/upsert` | "Upsert" an item, which means to create or insert an item if it doesn't already exist, or to update or replace an item if it exists. |
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/delete` | Delete an item. |
-| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/executeQuery` | Execute a [SQL query](sql-query-getting-started.md). |
+| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/executeQuery` | Execute a [SQL query](nosql/query/getting-started.md). |
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/readChangeFeed` | Read from the container's [change feed](read-change-feed.md). |
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/executeStoredProcedure` | Execute a [stored procedure](stored-procedures-triggers-udfs.md). |
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/manageConflicts` | Manage [conflicts](conflict-resolution-policies.md) for multi-write region accounts (that is, list and delete items from the conflict feed). |
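The actions in the table above are grouped into role definitions, which are then assigned to Azure AD identities. As a sketch with placeholder names and IDs (`role-definition.json` is a hypothetical file containing the role's name and its list of allowed data actions):

```azurecli-interactive
# Create a custom role definition from a JSON body listing the allowed data actions
az cosmosdb sql role definition create \
  --resource-group MyResourceGroup \
  --account-name mycosmosaccount \
  --body @role-definition.json

# Assign that role to an Azure AD principal at the account scope ("/")
az cosmosdb sql role assignment create \
  --resource-group MyResourceGroup \
  --account-name mycosmosaccount \
  --role-definition-id "<role-definition-id>" \
  --principal-id "<aad-object-id>" \
  --scope "/"
```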
The examples below use a service principal with a `ClientSecretCredential` insta
### In .NET
-The Azure Cosmos DB RBAC is currently supported in the [.NET SDK V3](sql-api-sdk-dotnet-standard.md).
+The Azure Cosmos DB RBAC is currently supported in the [.NET SDK V3](nosql/sdk-dotnet-v3.md).
```csharp TokenCredential servicePrincipal = new ClientSecretCredential(
CosmosClient client = new CosmosClient("<account-endpoint>", servicePrincipal);
### In Java
-The Azure Cosmos DB RBAC is currently supported in the [Java SDK V4](sql-api-sdk-java-v4.md).
+The Azure Cosmos DB RBAC is currently supported in the [Java SDK V4](nosql/sdk-java-v4.md).
```java TokenCredential ServicePrincipal = new ClientSecretCredentialBuilder()
CosmosAsyncClient Client = new CosmosClientBuilder()
### In JavaScript
-The Azure Cosmos DB RBAC is currently supported in the [JavaScript SDK V3](sql-api-sdk-node.md).
+The Azure Cosmos DB RBAC is currently supported in the [JavaScript SDK V3](nosql/sdk-nodejs.md).
```javascript const servicePrincipal = new ClientSecretCredential(
const client = new CosmosClient({
### In Python
-The Azure Cosmos DB RBAC is supported in the [Python SDK versions 4.3.0b4](sql-api-sdk-python.md) and higher.
+The Azure Cosmos DB RBAC is supported in the [Python SDK versions 4.3.0b4](nosql/sdk-python.md) and higher.
```python aad_credentials = ClientSecretCredential(
When you access the [Azure Cosmos DB Explorer](https://cosmos.azure.com/?feature
## Audit data requests
-When using the Azure Cosmos DB RBAC, [diagnostic logs](cosmosdb-monitor-resource-logs.md) get augmented with identity and authorization information for each data operation. This lets you perform detailed auditing and retrieve the Azure AD identity used for every data request sent to your Azure Cosmos DB account.
+When using the Azure Cosmos DB RBAC, [diagnostic logs](monitor-resource-logs.md) get augmented with identity and authorization information for each data operation. This lets you perform detailed auditing and retrieve the Azure AD identity used for every data request sent to your Azure Cosmos DB account.
This additional information flows in the **DataPlaneRequests** log category and consists of two extra columns:
When creating or updating your Azure Cosmos DB account using Azure Resource Mana
### Which Azure Cosmos DB APIs are supported by RBAC?
-Only the SQL API is currently supported.
+Only the API for NoSQL is currently supported.
### Is it possible to manage role definitions and role assignments from the Azure portal? Azure portal support for role management is not available yet.
-### Which SDKs in Azure Cosmos DB SQL API support RBAC?
+### Which SDKs in Azure Cosmos DB API for NoSQL support RBAC?
-The [.NET V3](sql-api-sdk-dotnet-standard.md), [Java V4](sql-api-sdk-java-v4.md), [JavaScript V3](sql-api-sdk-node.md) and [Python V4.3+](sql-api-sdk-python.md) SDKs are currently supported.
+The [.NET V3](nosql/sdk-dotnet-v3.md), [Java V4](nosql/sdk-java-v4.md), [JavaScript V3](nosql/sdk-nodejs.md) and [Python V4.3+](nosql/sdk-python.md) SDKs are currently supported.
### Is the Azure AD token automatically refreshed by the Azure Cosmos DB SDKs when it expires?
Yes, see [Enforcing RBAC as the only authentication method](#disable-local-auth)
## Next steps -- Get an overview of [secure access to data in Cosmos DB](secure-access-to-data.md).
+- Get an overview of [secure access to data in Azure Cosmos DB](secure-access-to-data.md).
- Learn more about [RBAC for Azure Cosmos DB management](role-based-access-control.md).
cosmos-db Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/import-data.md
Last updated 08/26/2021
# Tutorial: Use Data migration tool to migrate your data to Azure Cosmos DB
-This tutorial provides instructions on using the Azure Cosmos DB Data Migration tool, which can import data from various sources into Azure Cosmos containers and tables. You can import from JSON files, CSV files, SQL, MongoDB, Azure Table storage, Amazon DynamoDB, and even Azure Cosmos DB SQL API collections. You migrate that data to collections and tables for use with Azure Cosmos DB. The Data Migration tool can also be used when migrating from a single partition collection to a multi-partition collection for the SQL API.
+This tutorial provides instructions on using the Azure Cosmos DB Data Migration tool, which can import data from various sources into Azure Cosmos DB containers and tables. You can import from JSON files, CSV files, SQL, MongoDB, Azure Table storage, Amazon DynamoDB, and even Azure Cosmos DB API for NoSQL collections. You migrate that data to collections and tables for use with Azure Cosmos DB. The Data Migration tool can also be used when migrating from a single partition collection to a multi-partition collection for the API for NoSQL.
> [!NOTE]
-> The Azure Cosmos DB Data Migration tool is an open source tool designed for small migrations. For larger migrations, view our [guide for ingesting data](cosmosdb-migrationchoices.md).
+> The Azure Cosmos DB Data Migration tool is an open source tool designed for small migrations. For larger migrations, view our [guide for ingesting data](migration-choices.md).
-* **[SQL API](./introduction.md)** - You can use any of the source options provided in the Data Migration tool to import data at a small scale. [Learn about migration options for importing data at a large scale](cosmosdb-migrationchoices.md).
-* **[Table API](table/introduction.md)** - You can use the Data Migration tool to import data. For more information, see [Import data for use with the Azure Cosmos DB Table API](table/table-import.md).
-* **[Azure Cosmos DB's API for MongoDB](mongodb/mongodb-introduction.md)** - The Data Migration tool doesn't support Azure Cosmos DB's API for MongoDB either as a source or as a target. If you want to migrate the data in or out of collections in Azure Cosmos DB, refer to [How to migrate MongoDB data to a Cosmos database with Azure Cosmos DB's API for MongoDB](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json) for instructions. You can still use the Data Migration tool to export data from MongoDB to Azure Cosmos DB SQL API collections for use with the SQL API.
-* **[Cassandra API](graph-introduction.md)** - The Data Migration tool isn't a supported import tool for Cassandra API accounts. [Learn about migration options for importing data into Cassandra API](cosmosdb-migrationchoices.md#azure-cosmos-db-cassandra-api)
-* **[Gremlin API](graph-introduction.md)** - The Data Migration tool isn't a supported import tool for Gremlin API accounts at this time. [Learn about migration options for importing data into Gremlin API](cosmosdb-migrationchoices.md#other-apis)
+* **[API for NoSQL](./introduction.md)** - You can use any of the source options provided in the Data Migration tool to import data at a small scale. [Learn about migration options for importing data at a large scale](migration-choices.md).
+* **[API for Table](table/introduction.md)** - You can use the Data Migration tool to import data. For more information, see [Import data for use with the Azure Cosmos DB API for Table](table/import.md).
+* **[Azure Cosmos DB's API for MongoDB](mongodb/introduction.md)** - The Data Migration tool doesn't support Azure Cosmos DB's API for MongoDB either as a source or as a target. If you want to migrate the data in or out of collections in Azure Cosmos DB, refer to [How to migrate MongoDB data to an Azure Cosmos DB database with Azure Cosmos DB's API for MongoDB](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json) for instructions. You can still use the Data Migration tool to export data from MongoDB to Azure Cosmos DB API for NoSQL collections for use with the API for NoSQL.
+* **[API for Cassandra](cassandra/introduction.md)** - The Data Migration tool isn't a supported import tool for API for Cassandra accounts. [Learn about migration options for importing data into API for Cassandra](migration-choices.md#azure-cosmos-db-api-for-cassandra)
+* **[API for Gremlin](gremlin/introduction.md)** - The Data Migration tool isn't a supported import tool for API for Gremlin accounts at this time. [Learn about migration options for importing data into API for Gremlin](migration-choices.md#other-apis)
This tutorial covers the following tasks:
Before following the instructions in this article, ensure that you do the follow
* **Increase throughput:** The duration of your data migration depends on the amount of throughput you set up for an individual collection or a set of collections. Be sure to increase the throughput for larger data migrations. After you've completed the migration, decrease the throughput to save costs. For more information about increasing throughput in the Azure portal, see [performance levels](performance-levels.md) and [pricing tiers](https://azure.microsoft.com/pricing/details/cosmos-db/) in Azure Cosmos DB.
-* **Create Azure Cosmos DB resources:** Before you start the migrating data, pre-create all your collections from the Azure portal. To migrate to an Azure Cosmos DB account that has database level throughput, provide a partition key when you create the Azure Cosmos containers.
+* **Create Azure Cosmos DB resources:** Before you start migrating the data, pre-create all your collections from the Azure portal. To migrate to an Azure Cosmos DB account that has database level throughput, provide a partition key when you create the Azure Cosmos DB containers.
> [!IMPORTANT]
-> To make sure that the Data migration tool uses Transport Layer Security (TLS) 1.2 when connecting to your Azure Cosmos accounts, use the .NET Framework version 4.7 or follow the instructions found in [this article](/dotnet/framework/network-programming/tls).
+> To make sure that the Data migration tool uses Transport Layer Security (TLS) 1.2 when connecting to your Azure Cosmos DB accounts, use the .NET Framework version 4.7 or follow the instructions found in [this article](/dotnet/framework/network-programming/tls).
## <a id="Overviewl"></a>Overview
The Data Migration tool is an open-source solution that imports data to Azure Co
* Azure Table storage * Amazon DynamoDB * HBase
-* Azure Cosmos containers
+* Azure Cosmos DB containers
While the import tool includes a graphical user interface (dtui.exe), it can also be driven from the command-line (dt.exe). In fact, there's an option to output the associated command after setting up an import through the UI. You can transform tabular source data, such as SQL Server or CSV files, to create hierarchical relationships (subdocuments) during import. Keep reading to learn more about source options, sample commands to import from each source, target options, and viewing import results. > [!NOTE]
-> You should only use the Azure Cosmos DB migration tool for small migrations. For large migrations, view our [guide for ingesting data](cosmosdb-migrationchoices.md).
+> You should only use the Azure Cosmos DB migration tool for small migrations. For large migrations, view our [guide for ingesting data](migration-choices.md).
## <a id="Install"></a>Installation
Once you've installed the tool, it's time to import your data. What kind of data
* [Azure Table storage](#AzureTableSource) * [Amazon DynamoDB](#DynamoDBSource) * [Blob](#BlobImport)
-* [Azure Cosmos containers](#SQLSource)
+* [Azure Cosmos DB containers](#SQLSource)
* [HBase](#HBaseSource) * [Azure Cosmos DB bulk import](#SQLBulkTarget) * [Azure Cosmos DB sequential record import](#SQLSeqTarget)
The connection string is in the following format:
`AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>`
-* The `<CosmosDB Endpoint>` is the endpoint URI. You can get this value from the Azure portal. Navigate to your Azure Cosmos account. Open the **Overview** pane and copy the **URI** value.
-* The `<AccountKey>` is the "Password" or **PRIMARY KEY**. You can get this value from the Azure portal. Navigate to your Azure Cosmos account. Open the **Connection Strings** or **Keys** pane, and copy the "Password" or **PRIMARY KEY** value.
+* The `<CosmosDB Endpoint>` is the endpoint URI. You can get this value from the Azure portal. Navigate to your Azure Cosmos DB account. Open the **Overview** pane and copy the **URI** value.
+* The `<AccountKey>` is the "Password" or **PRIMARY KEY**. You can get this value from the Azure portal. Navigate to your Azure Cosmos DB account. Open the **Connection Strings** or **Keys** pane, and copy the "Password" or **PRIMARY KEY** value.
* The `<CosmosDB Database>` is the CosmosDB database name. Example: `AccountEndpoint=https://myCosmosDBName.documents.azure.com:443/;AccountKey=wJmFRYna6ttQ79ATmrTMKql8vPri84QBiHTt6oinFkZRvoe7Vv81x9sn6zlVlBY10bEPMgGM982wfYXpWXWB9w==;Database=myDatabaseName` > [!NOTE]
-> Use the Verify command to ensure that the Cosmos DB account specified in the connection string field can be accessed.
+> Use the Verify command to ensure that the Azure Cosmos DB account specified in the connection string field can be accessed.
Here are some command-line samples to import JSON files:
dt.exe /s:JsonFile /s.Files:D:\\CompanyData\\Companies.json /t:DocumentDBBulk /t
## <a id="MongoDB"></a>Import from MongoDB > [!IMPORTANT]
-> If you're importing to a Cosmos account configured with Azure Cosmos DB's API for MongoDB, follow these [instructions](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json).
+> If you're importing to an Azure Cosmos DB account configured with Azure Cosmos DB's API for MongoDB, follow these [instructions](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json).
With the MongoDB source importer option, you can import from a single MongoDB collection, optionally filter documents using a query, and modify the document structure by using a projection.
dt.exe /s:CsvFile /s.Files:.\Employees.csv /t:DocumentDBBulk /t.ConnectionString
The Azure Table storage source importer option allows you to import from an individual Azure Table storage table. Optionally, you can filter the table entities to be imported.
-You may output data that was imported from Azure Table Storage to Azure Cosmos DB tables and entities for use with the Table API. Imported data can also be output to collections and documents for use with the SQL API. However, Table API is only available as a target in the command-line utility. You can't export to Table API by using the Data Migration tool user interface. For more information, see [Import data for use with the Azure Cosmos DB Table API](table/table-import.md).
+You may output data that was imported from Azure Table Storage to Azure Cosmos DB tables and entities for use with the API for Table. Imported data can also be output to collections and documents for use with the API for NoSQL. However, the API for Table is only available as a target in the command-line utility. You can't export to the API for Table by using the Data Migration tool user interface. For more information, see [Import data for use with the Azure Cosmos DB API for Table](table/import.md).
:::image type="content" source="./media/import-data/azuretablesource.png" alt-text="Screenshot of Azure Table storage source options":::
The format of the Amazon DynamoDB connection string is:
Here is a command-line sample to import from Amazon DynamoDB: ```console
-dt.exe /s:DynamoDB /s.ConnectionString:ServiceURL=https://dynamodb.us-east-1.amazonaws.com;AccessKey=<accessKey>;SecretKey=<secretKey> /s.Request:"{ """TableName""": """ProductCatalog""" }" /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<Azure Cosmos DB Endpoint>;AccountKey=<Azure Cosmos DB Key>;Database=<Azure Cosmos database>;" /t.Collection:catalogCollection /t.CollectionThroughput:2500
+dt.exe /s:DynamoDB /s.ConnectionString:ServiceURL=https://dynamodb.us-east-1.amazonaws.com;AccessKey=<accessKey>;SecretKey=<secretKey> /s.Request:"{ """TableName""": """ProductCatalog""" }" /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<Azure Cosmos DB Endpoint>;AccountKey=<Azure Cosmos DB Key>;Database=<Azure Cosmos DB database>;" /t.Collection:catalogCollection /t.CollectionThroughput:2500
``` ## <a id="BlobImport"></a>Import from Azure Blob storage
Here is command-line sample to import JSON files from Azure Blob storage:
dt.exe /s:JsonFile /s.Files:"blobs://<account key>@account.blob.core.windows.net:443/importcontainer/.*" /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:doctest ```
-## <a id="SQLSource"></a>Import from a SQL API collection
+## <a id="SQLSource"></a>Import from an API for NoSQL collection
-The Azure Cosmos DB source importer option allows you to import data from one or more Azure Cosmos containers and optionally filter documents using a query.
+The Azure Cosmos DB source importer option allows you to import data from one or more Azure Cosmos DB containers and optionally filter documents using a query.
:::image type="content" source="./media/import-data/documentdbsource.png" alt-text="Screenshot of Azure Cosmos DB source options":::
You can retrieve the Azure Cosmos DB account connection string from the Keys pag
> [!NOTE] > Use the Verify command to ensure that the Azure Cosmos DB instance specified in the connection string field can be accessed.
-To import from a single Azure Cosmos container, enter the name of the collection to import data from. To import from more than one Azure Cosmos container, provide a regular expression to match one or more collection names (for example, collection01 | collection02 | collection03). You may optionally specify, or provide a file for, a query to both filter and shape the data that you're importing.
+To import from a single Azure Cosmos DB container, enter the name of the collection to import data from. To import from more than one Azure Cosmos DB container, provide a regular expression to match one or more collection names (for example, collection01 | collection02 | collection03). You may optionally specify, or provide a file for, a query to both filter and shape the data that you're importing.
> [!NOTE] > Since the collection field accepts regular expressions, if you're importing from a single collection whose name has regular expression characters, then those characters must be escaped accordingly.
The Azure Cosmos DB source importer option has the following advanced options:
Here are some command-line samples to import from Azure Cosmos DB: ```console
-#Migrate data from one Azure Cosmos container to another Azure Cosmos containers
+#Migrate data from one Azure Cosmos DB container to another Azure Cosmos DB container
dt.exe /s:DocumentDB /s.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /s.Collection:TEColl /t:DocumentDBBulk /t.ConnectionString:" AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:TESessions /t.CollectionThroughput:2500
-#Migrate data from more than one Azure Cosmos container to a single Azure Cosmos container
+#Migrate data from more than one Azure Cosmos DB container to a single Azure Cosmos DB container
dt.exe /s:DocumentDB /s.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /s.Collection:comp1|comp2|comp3|comp4 /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:singleCollection /t.CollectionThroughput:2500
-#Export an Azure Cosmos container to a JSON file
+#Export an Azure Cosmos DB container to a JSON file
dt.exe /s:DocumentDB /s.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /s.Collection:StoresSub /t:JsonFile /t.File:StoresExport.json /t.Overwrite ```
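As noted above, the source importer can also take a query to filter and shape the data it reads. The following sketch assumes the tool exposes this through an `/s.Query` option; the option name and the query itself are illustrative, so check the tool's command-line help for the exact name:

```console
#Export a filtered subset of a container to a JSON file (the /s.Query option name is an assumption)
dt.exe /s:DocumentDB /s.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /s.Collection:StoresSub /s.Query:"SELECT * FROM c WHERE c.type = 'store'" /t:JsonFile /t.File:FilteredStores.json /t.Overwrite
```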
Here is a command-line sample to import from HBase:
dt.exe /s:HBase /s.ConnectionString:ServiceURL=<server-address>;Username=<username>;Password=<password> /s.Table:Contacts /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:hbaseimport ```
-## <a id="SQLBulkTarget"></a>Import to the SQL API (Bulk Import)
+## <a id="SQLBulkTarget"></a>Import to the API for NoSQL (Bulk Import)
-The Azure Cosmos DB Bulk importer allows you to import from any of the available source options, using an Azure Cosmos DB stored procedure for efficiency. The tool supports import to one single-partitioned Azure Cosmos container. It also supports sharded import whereby data is partitioned across more than one single-partitioned Azure Cosmos container. For more information about partitioning data, see [Partitioning and scaling in Azure Cosmos DB](partitioning-overview.md). The tool creates, executes, and then deletes the stored procedure from the target collection(s).
+The Azure Cosmos DB Bulk importer allows you to import from any of the available source options, using an Azure Cosmos DB stored procedure for efficiency. The tool supports import to one single-partitioned Azure Cosmos DB container. It also supports sharded import whereby data is partitioned across more than one single-partitioned Azure Cosmos DB container. For more information about partitioning data, see [Partitioning and scaling in Azure Cosmos DB](partitioning-overview.md). The tool creates, executes, and then deletes the stored procedure from the target collection(s).
:::image type="content" source="./media/import-data/documentdbbulk.png" alt-text="Screenshot of Azure Cosmos DB bulk options":::
The Azure Cosmos DB Bulk importer has the following additional advanced options:
> [!TIP] > The import tool defaults to connection mode DirectTcp. If you experience firewall issues, switch to connection mode Gateway, as it only requires port 443.
-## <a id="SQLSeqTarget"></a>Import to the SQL API (Sequential Record Import)
+## <a id="SQLSeqTarget"></a>Import to the API for NoSQL (Sequential Record Import)
-The Azure Cosmos DB sequential record importer allows you to import from an available source option on a record-by-record basis. You might choose this option if you're importing to an existing collection that has reached its quota of stored procedures. The tool supports import to a single (both single-partition and multi-partition) Azure Cosmos container. It also supports sharded import whereby data is partitioned across more than one single-partition or multi-partition Azure Cosmos container. For more information about partitioning data, see [Partitioning and scaling in Azure Cosmos DB](partitioning-overview.md).
+The Azure Cosmos DB sequential record importer allows you to import from an available source option on a record-by-record basis. You might choose this option if you're importing to an existing collection that has reached its quota of stored procedures. The tool supports import to a single (both single-partition and multi-partition) Azure Cosmos DB container. It also supports sharded import whereby data is partitioned across more than one single-partition or multi-partition Azure Cosmos DB container. For more information about partitioning data, see [Partitioning and scaling in Azure Cosmos DB](partitioning-overview.md).
:::image type="content" source="./media/import-data/documentdbsequential.png" alt-text="Screenshot of Azure Cosmos DB sequential record import options":::
The format of the Azure Cosmos DB connection string is:
You can retrieve the connection string for the Azure Cosmos DB account from the Keys page of the Azure portal, as described in [How to manage an Azure Cosmos DB account](./how-to-manage-database-account.md). However, the name of the database needs to be appended to the connection string in the following format:
-`Database=<Azure Cosmos database>;`
+`Database=<Azure Cosmos DB database>;`
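Put together, a complete Azure Cosmos DB connection string for the tool looks like the following (all values are placeholders):

```console
AccountEndpoint=https://<account-name>.documents.azure.com:443/;AccountKey=<Azure Cosmos DB Key>;Database=<Azure Cosmos DB database>;
```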
> [!NOTE] > Use the Verify command to ensure that the Azure Cosmos DB instance specified in the connection string field can be accessed.
The Azure Cosmos DB - Sequential record importer has the following additional ad
## <a id="IndexingPolicy"></a>Specify an indexing policy
-When you allow the migration tool to create Azure Cosmos DB SQL API collections during import, you can specify the indexing policy of the collections. In the advanced options section of the Azure Cosmos DB Bulk import and Azure Cosmos DB Sequential record options, navigate to the Indexing Policy section.
+When you allow the migration tool to create Azure Cosmos DB API for NoSQL collections during import, you can specify the indexing policy of the collections. In the advanced options section of the Azure Cosmos DB Bulk import and Azure Cosmos DB Sequential record options, navigate to the Indexing Policy section.
:::image type="content" source="./media/import-data/indexingpolicy1.png" alt-text="Screenshot of Azure Cosmos DB Indexing Policy advanced options.":::
Trying to do capacity planning for a migration to Azure Cosmos DB?
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) > [!div class="nextstepaction"]
->[How to query data?](../cosmos-db/tutorial-query-sql-api.md)
+>[How to query data?](nosql/tutorial-query.md)
cosmos-db Index Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-overview.md
Title: Indexing in Azure Cosmos DB
description: Understand how indexing works in Azure Cosmos DB, different types of indexes such as Range, Spatial, composite indexes supported. -++ Last updated 08/26/2021
# Indexing in Azure Cosmos DB - Overview Azure Cosmos DB is a schema-agnostic database that allows you to iterate on your application without having to deal with schema or index management. By default, Azure Cosmos DB automatically indexes every property for all items in your [container](account-databases-containers-items.md#azure-cosmos-db-containers) without having to define any schema or configure secondary indexes.
cosmos-db Index Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-policy.md
Title: Azure Cosmos DB indexing policies
description: Learn how to configure and change the default indexing policy for automatic indexing and greater performance in Azure Cosmos DB. -++ Last updated 12/07/2021
# Indexing policies in Azure Cosmos DB In Azure Cosmos DB, every container has an indexing policy that dictates how the container's items should be indexed. The default indexing policy for newly created containers indexes every property of every item and enforces range indexes for any string or number. This allows you to get good query performance without having to think about indexing and index management upfront. In some situations, you may want to override this automatic behavior to better suit your requirements. You can customize a container's indexing policy by setting its *indexing mode*, and include or exclude *property paths*. > [!NOTE]
-> The method of updating indexing policies described in this article only applies to Azure Cosmos DB's SQL (Core) API. Learn about indexing in [Azure Cosmos DB API for MongoDB](mongodb/mongodb-indexing.md)
+> The method of updating indexing policies described in this article only applies to Azure Cosmos DB API for NoSQL. Learn about indexing in [Azure Cosmos DB API for MongoDB](mongodb/indexing.md)
## Indexing mode
Azure Cosmos DB supports two indexing modes:
- **None**: Indexing is disabled on the container. This is commonly used when a container is used as a pure key-value store without the need for secondary indexes. It can also be used to improve the performance of bulk operations. After the bulk operations are complete, the index mode can be set to Consistent and then monitored using the [IndexTransformationProgress](how-to-manage-indexing-policy.md#dotnet-sdk) until complete. > [!NOTE]
-> Azure Cosmos DB also supports a Lazy indexing mode. Lazy indexing performs updates to the index at a much lower priority level when the engine is not doing any other work. This can result in **inconsistent or incomplete** query results. If you plan to query a Cosmos container, you should not select lazy indexing. New containers cannot select lazy indexing. You can request an exemption by contacting cosmoslazyindexing@microsoft.com (except if you are using an Azure Cosmos account in [serverless](serverless.md) mode which doesn't support lazy indexing).
+> Azure Cosmos DB also supports a Lazy indexing mode. Lazy indexing performs updates to the index at a much lower priority level when the engine is not doing any other work. This can result in **inconsistent or incomplete** query results. If you plan to query an Azure Cosmos DB container, you should not select lazy indexing. New containers cannot select lazy indexing. You can request an exemption by contacting cosmoslazyindexing@microsoft.com (except if you are using an Azure Cosmos DB account in [serverless](serverless.md) mode which doesn't support lazy indexing).
By default, indexing policy is set to `automatic`. It's achieved by setting the `automatic` property in the indexing policy to `true`. Setting this property to `true` allows Azure Cosmos DB to automatically index documents as they are written.
cosmos-db Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/insights-overview.md
+
+ Title: Monitor Azure Cosmos DB with Azure Monitor Azure Cosmos DB insights| Microsoft Docs
+description: This article describes the Azure Cosmos DB insights feature of Azure Monitor that provides Azure Cosmos DB owners with a quick understanding of performance and utilization issues with their Azure Cosmos DB accounts.
++++ Last updated : 05/11/2020++++
+# Explore Azure Monitor Azure Cosmos DB insights
+
+Azure Cosmos DB insights provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. This article will help you understand the benefits of this new monitoring experience, and how you can modify and adapt the experience to fit the unique needs of your organization.
+
+## Introduction
+
+Before diving into the experience, you should understand how it presents and visualizes information.
+
+It delivers:
+
+* **At scale perspective** of your Azure Cosmos DB resources across all your subscriptions in a single location, with the ability to selectively scope to only those subscriptions and resources you are interested in evaluating.
+
+* **Drill down analysis** of a particular Azure Cosmos DB resource to help diagnose issues or perform detailed analysis by category - utilization, failures, capacity, and operations. Selecting any one of those options provides an in-depth view of the relevant Azure Cosmos DB metrics.
+
+* **Customizable** - This experience is built on top of Azure Monitor workbook templates allowing you to change what metrics are displayed, modify or set thresholds that align with your limits, and then save into a custom workbook. Charts in the workbooks can then be pinned to Azure dashboards.
+
+This feature does not require you to enable or configure anything; these Azure Cosmos DB metrics are collected by default.
+
+>[!NOTE]
+>There is no charge to access this feature and you will only be charged for the Azure Monitor essential features you configure or enable, as described on the [Azure Monitor pricing details](https://azure.microsoft.com/pricing/details/monitor/) page.
+
+## View utilization and performance metrics for Azure Cosmos DB
+
+To view the utilization and performance of your Azure Cosmos DB accounts across all of your subscriptions, perform the following steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Search for **Monitor** and select **Monitor**.
+
+ ![Search box with the word "Monitor" and a dropdown that says Services "Monitor" with a speedometer style image](./media/insights-overview/search-monitor.png)
+
+3. Select **Azure Cosmos DB**.
+
+ ![Screenshot of Azure Cosmos DB overview workbook](./media/insights-overview/cosmos-db.png)
+
+### Overview
+
+On **Overview**, the table displays interactive Azure Cosmos DB metrics. You can filter the results based on the options you select from the following drop-down lists:
+
+* **Subscriptions** - only subscriptions that have an Azure Cosmos DB resource are listed.
+
+* **Azure Cosmos DB** - You can select all, a subset, or single Azure Cosmos DB resource.
+
+* **Time Range** - by default, displays the last 4 hours of information based on the corresponding selections made.
+
+The counter tile under the drop-down lists rolls up the total number of Azure Cosmos DB resources in the selected subscriptions. There is conditional color-coding, or heatmaps, for columns in the workbook that report transaction metrics. The deepest color represents the highest value, and lighter colors represent lower values.
+
+Selecting a drop-down arrow next to one of the Azure Cosmos DB resources will reveal a breakdown of the performance metrics at the individual database container level:
+
+![Expanded drop down revealing individual database containers and associated performance breakdown](./media/insights-overview/container-view.png)
+
+Selecting the Azure Cosmos DB resource name highlighted in blue will take you to the default **Overview** for the associated Azure Cosmos DB account.
+
+### Failures
+
+Select **Failures** at the top of the page and the **Failures** portion of the workbook template opens. It shows you total requests with the distribution of responses that make up those requests:
+
+![Screenshot of failures with breakdown by HTTP request type](./media/insights-overview/failures.png)
+
+| Code | Description |
+|--|:--|
+| `200 OK` | One of the following REST operations were successful: </br>- GET on a resource. </br> - PUT on a resource. </br> - POST on a resource. </br> - POST on a stored procedure resource to execute the stored procedure.|
+| `201 Created` | A POST operation to create a resource is successful. |
+| `404 Not Found` | The operation is attempting to act on a resource that no longer exists. For example, the resource may have already been deleted. |
+
+For a full list of status codes, consult the [Azure Cosmos DB HTTP status code article](/rest/api/cosmos-db/http-status-codes-for-cosmosdb).
+
+### Capacity
+
+Select **Capacity** at the top of the page and the **Capacity** portion of the workbook template opens. It shows you how many documents you have, your document growth over time, data usage, and the total amount of available storage remaining. This information can help you identify potential storage and data utilization issues.
+
+![Capacity workbook](./media/insights-overview/capacity.png)
+
+As with the overview workbook, selecting the drop-down next to an Azure Cosmos DB resource in the **Subscription** column will reveal a breakdown by the individual containers that make up the database.
+
+### Operations
+
+Select **Operations** at the top of the page and the **Operations** portion of the workbook template opens. It gives you the ability to see your requests broken down by the type of requests made.
+
+In the example below, you can see that `eastus-billingint` predominantly receives read requests, with a small number of upsert and create requests, whereas `westeurope-billingint` is read-only from a request perspective, at least over the past four hours that the workbook is currently scoped to via its time range parameter.
+
+![Operations workbook](./media/insights-overview/operation.png)
+
+## View from an Azure Cosmos DB resource
+
+1. Search for or select any of your existing Azure Cosmos DB accounts.
++
+2. Once you've navigated to your Azure Cosmos DB account, in the Monitoring section select **Insights (preview)** or **Workbooks** to perform further analysis on throughput, requests, storage, availability, latency, system, and account management.
++
+### Time range
+
+By default, the **Time Range** field displays data from the **Last 24 hours**. You can modify the time range to display data anywhere from the last five minutes to the last seven days. The time range selector also includes a **Custom** mode that allows you to type in the start/end dates to view a custom time frame based on available data for the selected account.
++
+### Insights overview
+
+The **Overview** tab provides the most common metrics for the selected Azure Cosmos DB account including:
+
+* Total Requests
+* Failed Requests (429s)
+* Normalized RU Consumption (max)
+* Data & Index Usage
+* Azure Cosmos DB Account Metrics by Collection
+
+**Total Requests:** This graph provides a view of the total requests for the account broken down by status code. The units at the bottom of the graph are a sum of the total requests for the period.
++
+**Failed Requests (429s)**: This graph provides a view of failed requests with a status code of 429. The units at the bottom of the graph are a sum of the total failed requests for the period.
++
+**Normalized RU Consumption (max)**: This graph shows the maximum Normalized RU Consumption percentage (between 0% and 100%) for the specified period.
++
+## Pin, export, and expand
+
+You can pin any one of the metric sections to an [Azure Dashboard](../azure-portal/azure-portal-dashboards.md) by selecting the pushpin icon at the top right of the section.
+
+![Metric section pin to dashboard example](./media/insights-overview/pin.png)
+
+To export your data into the Excel format, select the down arrow icon to the left of the pushpin icon.
+
+![Export workbook icon](./media/insights-overview/export.png)
+
+To expand or collapse all drop-down views in the workbook, select the expand icon to the left of the export icon:
+
+![Expand workbook icon](./media/insights-overview/expand.png)
+
+## Customize Azure Cosmos DB insights
+
+Since this experience is built on top of Azure Monitor workbook templates, you have the ability to **Customize** > **Edit** and **Save** a copy of your modified version into a custom workbook.
+
+![Customize bar](./media/insights-overview/customize.png)
+
+Workbooks are saved within a resource group, either in the **My Reports** section that's private to you or in the **Shared Reports** section that's accessible to everyone with access to the resource group. After you save the custom workbook, you need to go to the workbook gallery to launch it.
+
+![Launch workbook gallery from command bar](./media/insights-overview/gallery.png)
+
+## Troubleshooting
+
+For troubleshooting guidance, refer to the dedicated workbook-based insights [troubleshooting article](../azure-monitor/insights/troubleshoot-workbooks.md).
+
+## Next steps
+
+* Configure [metric alerts](../azure-monitor/alerts/alerts-metric.md) and [service health notifications](../service-health/alerts-activity-log-service-notifications-portal.md) to set up automated alerting to aid in detecting issues.
+
+* Learn the scenarios workbooks are designed to support, how to author new and customize existing reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md).
cosmos-db Integrated Cache Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-cache-faq.md
Title: Azure Cosmos DB integrated cache frequently asked questions
description: Frequently asked questions about the Azure Cosmos DB integrated cache. -++ Last updated 08/29/2022
# Azure Cosmos DB integrated cache frequently asked questions The Azure Cosmos DB integrated cache is an in-memory cache that is built into Azure Cosmos DB. This article answers commonly asked questions about the Azure Cosmos DB integrated cache.
If your app previously used gateway mode with the standard gateway, the integrat
For scenarios that require high availability and in order to be covered by the Azure Cosmos DB availability SLA, you should provision at least 3 dedicated gateway nodes. For example, if one dedicated gateway node is needed in production, you should provision two additional dedicated gateway nodes to account for possible downtime, outages and upgrades. If only one dedicated gateway node is provisioned, you will temporarily lose availability in these scenarios. Additionally, [ensure your dedicated gateway has enough nodes](./integrated-cache.md#i-want-to-understand-if-i-need-to-add-more-dedicated-gateway-nodes) to serve your workload.
-### The integrated cache is only available for SQL (Core) API right now. Are you planning on releasing it for other APIs as well?
+### The integrated cache is only available for API for NoSQL right now. Are you planning on releasing it for other APIs as well?
-Expanding the integrated cache beyond SQL API is planned on the long-term roadmap but is beyond the initial scope of the integrated cache.
+Expanding the integrated cache beyond API for NoSQL is planned on the long-term roadmap but is beyond the initial scope of the integrated cache.
### What consistency does the integrated cache support?
The integrated cache supports both session and eventual consistency. You can als
- [Dedicated gateway](dedicated-gateway.md) - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-cache.md
Title: Azure Cosmos DB integrated cache
description: The Azure Cosmos DB integrated cache is an in-memory cache that helps you ensure manageable costs and low latency as your request volume grows. -++ Last updated 08/29/2022
# Azure Cosmos DB integrated cache - Overview The Azure Cosmos DB integrated cache is an in-memory cache that helps you ensure manageable costs and low latency as your request volume grows. The integrated cache is easy to set up and you don't need to spend time writing custom code for cache invalidation or managing backend infrastructure. Your integrated cache uses a [dedicated gateway](dedicated-gateway.md) within your Azure Cosmos DB account. The integrated cache is the first of many Azure Cosmos DB features that will utilize a dedicated gateway for improved performance. You can choose from three possible dedicated gateway sizes based on the number of cores and memory needed for your workload.
Check the `DedicatedGatewayRequests`. This metric includes all requests that use
### I can't tell if my requests are hitting the integrated cache
-Check the `IntegratedCacheItemHitRate` and `IntegratedCacheQueryHitRate`. If both of these values are zero, then requests are not hitting the integrated cache. Check that you are using the dedicated gateway connection string, [connecting with gateway mode](sql-sdk-connection-modes.md), and [have set session or eventual consistency](consistency-levels.md#configure-the-default-consistency-level).
+Check the `IntegratedCacheItemHitRate` and `IntegratedCacheQueryHitRate`. If both of these values are zero, then requests are not hitting the integrated cache. Check that you are using the dedicated gateway connection string, [connecting with gateway mode](nosql/sdk-connection-modes.md), and [have set session or eventual consistency](consistency-levels.md#configure-the-default-consistency-level).
### I want to understand if my dedicated gateway is too small
In some cases, if latency is unexpectedly high, you may need more dedicated gate
- [Dedicated gateway](dedicated-gateway.md) - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Integrated Power Bi Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-power-bi-synapse-link.md
Last updated 11/23/2021 -+ # Integrated Power BI experience in Azure Cosmos DB portal for Synapse Link enabled accounts With the integrated Power BI experience, you can visualize your Azure Cosmos DB data in near real time in just a few clicks. It uses the built-in Power BI integration feature in the Azure portal along with [Azure Synapse Link](synapse-link.md).
Use the following steps to build a Power BI report from Azure Cosmos DB data in
1. From the **Integrations** section, open the **Power BI** pane and select **Get started**. > [!NOTE]
- > Currently, this option is only available for SQL API accounts.
+ > Currently, this option is only available for API for NoSQL accounts.
1. From the **Enable Azure Synapse Link** tab, if your account is not already enabled with Synapse Link, you can enable it from **Enable Azure Synapse link for this account** section. If Synapse Link is already enabled for your account, you will not see this tab.
Use the following steps to build a Power BI report from Azure Cosmos DB data in
:::image type="content" source="./media/integrated-power-bi-synapse-link/synapse-link-existing-containers-registration-complete.png" alt-text="Synapse Link successfully enabled on the selected containers." border="true" lightbox="./media/integrated-power-bi-synapse-link/synapse-link-existing-containers-registration-complete.png":::
-1. From the **Select workspace** tab, choose the Azure Synapse Analytics workspace and select **Next**. This will automatically create T-SQL views in Synapse Analytics, for the containers selected earlier. For more information on T-SQL views required to connect your Cosmos DB to Power BI, see [Prepare views](../synapse-analytics/sql/tutorial-connect-power-bi-desktop.md#3prepare-view) article.
+1. From the **Select workspace** tab, choose the Azure Synapse Analytics workspace and select **Next**. This will automatically create T-SQL views in Synapse Analytics for the containers selected earlier. For more information on the T-SQL views required to connect your Azure Cosmos DB data to Power BI, see the [Prepare views](../synapse-analytics/sql/tutorial-connect-power-bi-desktop.md#3prepare-view) article.
> [!NOTE]
- > Your Cosmos DB container proprieties will be represented as columns in T-SQL views, including deep nested JSON data. This is a quick start for your BI dashboards. These views will be available in your Synapse workspace/database; you can also use these exact same views in Synapse Workspace for data exploration, data science, data engineering, etc. Please note that advanced scenarios may demand more complex views or fine tuning of these views, for better performance. For more information. see [best practices for Synapse Link when using Synapse serverless SQL pools](../synapse-analytics/sql/resources-self-help-sql-on-demand.md#azure-cosmos-db-performance-issues) article.
+ > Your Azure Cosmos DB container properties will be represented as columns in T-SQL views, including deep nested JSON data. This is a quick start for your BI dashboards. These views will be available in your Synapse workspace/database; you can also use these exact same views in Synapse Workspace for data exploration, data science, data engineering, etc. Please note that advanced scenarios may demand more complex views or fine-tuning of these views for better performance. For more information, see the [best practices for Synapse Link when using Synapse serverless SQL pools](../synapse-analytics/sql/resources-self-help-sql-on-demand.md#azure-cosmos-db-performance-issues) article.
1. You can either choose an existing workspace or create a new one. To select an existing workspace, provide the **Subscription**, **Workspace**, and the **Database** details. Azure portal will use your Azure AD credentials to automatically connect to your Synapse workspace and create T-SQL views. Make sure you have "Synapse administrator" permissions to this workspace.
cosmos-db Intra Account Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/intra-account-container-copy.md
Last updated 08/1/2022 -+ # Intra-account container copy jobs in Azure Cosmos DB (Preview) You can perform offline container copy within an Azure Cosmos DB account using container copy jobs.
Intra-account container copy jobs can be [created and managed using CLI commands
## Get started
-To get started using container copy jobs, register for "Intra-account offline container copy (Cassandra & SQL)" preview from the ['Preview Features'](access-previews.md) list in the Azure portal. Once the registration is complete, the preview will be effective for all Cassandra and SQL API accounts in the subscription.
+To get started using container copy jobs, register for "Intra-account offline container copy (Cassandra & SQL)" preview from the ['Preview Features'](access-previews.md) list in the Azure portal. Once the registration is complete, the preview will be effective for all Cassandra and API for NoSQL accounts in the subscription.
## Overview of steps needed to do container copy
-1. Create the target Cosmos DB container with the desired settings (partition key, throughput granularity, RUs, unique key, etc.).
+1. Create the target Azure Cosmos DB container with the desired settings (partition key, throughput granularity, RUs, unique key, etc.), as in the sample command shown after these steps.
2. Stop the operations on the source container by pausing the application instances or any clients connecting to it. 3. [Create the container copy job](how-to-container-copy.md). 4. [Monitor the progress of the container copy job](how-to-container-copy.md#monitor-the-progress-of-a-container-copy-job) and wait until it's completed.
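For step 1, one way to create the target container ahead of time is with the Azure CLI. The following is only a sketch; the resource names, partition key path, and throughput value are placeholders you would replace with the settings your copy requires:

```console
az cosmosdb sql container create \
    --resource-group <resource-group-name> \
    --account-name <cosmos-account-name> \
    --database-name <database-name> \
    --name <target-container-name> \
    --partition-key-path "/myPartitionKey" \
    --throughput 400
```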
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Previously updated : 08/26/2021- Last updated : 10/05/2021+ adobe-target: true # Welcome to Azure Cosmos DB+ Today's applications are required to be highly responsive and always online. To achieve low latency and high availability, instances of these applications need to be deployed in datacenters that are close to their users. Applications need to respond in real time to large changes in usage at peak hours, store ever increasing volumes of data, and make this data available to users in milliseconds.
-Azure Cosmos DB is a fully managed NoSQL database for modern app development. Single-digit millisecond response times, and automatic and instant scalability, guarantee speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security. App development is faster and more productive thanks to turnkey multi region data distribution anywhere in the world, open source APIs and SDKs for popular languages. As a fully managed service, Azure Cosmos DB takes database administration off your hands with automatic management, updates and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand.
+Azure Cosmos DB is a fully managed NoSQL database for modern app development. Single-digit millisecond response times, and automatic and instant scalability, guarantee speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security.
+
+App development is faster and more productive thanks to:
+
+- Turnkey multi-region data distribution anywhere in the world
+- Open source APIs
+- SDKs for popular languages
+
+As a fully managed service, Azure Cosmos DB takes database administration off your hands with automatic management, updates and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand.
> > [!VIDEO https://aka.ms/docs.essential-introduction]
You can [Try Azure Cosmos DB for Free](https://azure.microsoft.com/try/cosmosdb/
Gain unparalleled [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) speed and throughput, fast global access, and instant elasticity. - Real-time access with fast read and write latencies globally, and throughput and consistency all backed by [SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db)-- Multi-region writes and data distribution to any Azure region with the click of a button.
+- Multi-region writes and data distribution to any Azure region with just a button.
- Independently and elastically scale storage and throughput across any Azure region, even during unpredictable traffic bursts, for unlimited scale worldwide. ### Simplified application development
Gain unparalleled [SLA-backed](https://azure.microsoft.com/support/legal/sla/cos
Build fast with open source APIs, multiple SDKs, schemaless data and no-ETL analytics over operational data. - Deeply integrated with key Azure services used in modern (cloud-native) app development including Azure Functions, IoT Hub, AKS (Azure Kubernetes Service), App Service, and more.-- Choose from multiple database APIs including the native Core (SQL) API, API for MongoDB, Cassandra API, Gremlin API, and Table API.-- Build apps on Core (SQL) API using the languages of your choice with SDKs for .NET, Java, Node.js and Python. Or your choice of drivers for any of the other database APIs.
+- Choose from multiple database APIs including the native API for NoSQL and the APIs for MongoDB, Apache Cassandra, Apache Gremlin, and Table.
+- Build apps on API for NoSQL using the languages of your choice with SDKs for .NET, Java, Node.js and Python. Or your choice of drivers for any of the other database APIs.
- Change feed makes it easy to track and manage changes to database containers and create triggered events with Azure Functions. - Azure Cosmos DB's schema-less service automatically indexes all your data, regardless of the data model, to deliver blazing fast queries.
End-to-end database management, with serverless and automatic scaling matching y
- Reduced analytics complexity with No ETL jobs to manage. - Near real-time insights into your operational data.-- No impact on operational workloads.
+- No effect on operational workloads.
- Optimized for large-scale analytics workloads. - Cost effective. - Analytics for locally available, globally distributed, multi-region writes. - Native integration with Azure Synapse Analytics. - ## Solutions that benefit from Azure Cosmos DB
-Any [web, mobile, gaming, and IoT application](use-cases.md) that needs to handle massive amounts of data, reads, and writes at a [global scale](distribute-data-globally.md) with near-real response times for a variety of data will benefit from Cosmos DB's [guaranteed high availability](https://azure.microsoft.com/support/legal/sl#web-and-mobile-applications).
+[Web, mobile, gaming, and IoT applications](use-cases.md) that handle massive amounts of data, reads, and writes at a [global scale](distribute-data-globally.md) with near-real-time response times for various data will benefit from Azure Cosmos DB and its [guaranteed high availability](https://azure.microsoft.com/support/legal/sl#web-and-mobile-applications).
## Next steps Get started with Azure Cosmos DB with one of our quickstarts: - Learn [how to choose an API](choose-api.md) in Azure Cosmos DB-- [Get started with Azure Cosmos DB SQL API](create-sql-api-dotnet.md)-- [Get started with Azure Cosmos DB API for MongoDB](mongodb/create-mongodb-nodejs.md)-- [Get started with Azure Cosmos DB Cassandra API](cassandr)-- [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md)-- [Get started with Azure Cosmos DB Table API](table/create-table-dotnet.md)
+- [Get started with Azure Cosmos DB for NoSQL](nosql/quickstart-dotnet.md)
+- [Get started with Azure Cosmos DB for MongoDB](mongodb/create-mongodb-nodejs.md)
+- [Get started with Azure Cosmos DB for Apache Cassandra](cassandr)
+- [Get started with Azure Cosmos DB for Apache Gremlin](gremlin/quickstart-dotnet.md)
+- [Get started with Azure Cosmos DB for Table](table/quickstart-dotnet.md)
- [A whitepaper on next-gen app development with Azure Cosmos DB](https://azure.microsoft.com/resources/microsoft-azure-cosmos-db-flexible-reliable-cloud-nosql-at-any-scale/) - Trying to do capacity planning for a migration to Azure Cosmos DB?
- - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
> [!div class="nextstepaction"] > [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/)
cosmos-db Key Value Store Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/key-value-store-cost.md
description: Learn about the request unit charges of Azure Cosmos DB for simple
-+ Last updated 08/23/2019-+ # Azure Cosmos DB as a key value store ΓÇô cost overview
-Azure Cosmos DB is a globally distributed, multi-model database service for building highly available, large-scale applications easily. By default, Azure Cosmos DB automatically and efficiently indexes all the data it ingests. This enables fast and consistent [SQL](./sql-query-getting-started.md) (and [JavaScript](stored-procedures-triggers-udfs.md)) queries on the data.
+Azure Cosmos DB is a globally distributed, multi-model database service for building highly available, large-scale applications easily. By default, Azure Cosmos DB automatically and efficiently indexes all the data it ingests. This enables fast and consistent [SQL](nosql/query/getting-started.md) (and [JavaScript](stored-procedures-triggers-udfs.md)) queries on the data.
This article describes the cost of Azure Cosmos DB for simple write and read operations when it's used as a key/value store. Write operations include inserts, replaces, deletes, and upserts of data items. Besides guaranteeing a 99.999% availability SLA for all multi-region accounts, Azure Cosmos DB offers guaranteed <10-ms latency for reads and for the (indexed) writes, at the 99th percentile.
This article describes the cost of Azure Cosmos DB for simple write and read ope
Azure Cosmos DB performance is based on the amount of provisioned throughput expressed in [Request Units](request-units.md) (RU/s). The provisioning is at a second granularity and is purchased in RU/s ([not to be confused with the hourly billing](https://azure.microsoft.com/pricing/details/cosmos-db/)). RUs should be considered as a logical abstraction (a currency) that simplifies the provisioning of required throughput for the application. Users do not have to differentiate between read and write throughput. The single currency model of RUs creates efficiencies to share the provisioned capacity between reads and writes. This provisioned capacity model enables the service to provide a **predictable and consistent throughput, guaranteed low latency, and high availability**. Finally, while the RU model is used to depict throughput, each provisioned RU also has a defined amount of resources (e.g., memory, cores/CPU and IOPS).
-As a globally distributed database system, Cosmos DB is the only Azure service that provides comprehensive SLAs covering latency, throughput, consistency and high availability. The throughput you provision is applied to each of the regions associated with your Cosmos account. For reads, Cosmos DB offers multiple, well-defined [consistency levels](consistency-levels.md) for you to choose from.
+As a globally distributed database system, Azure Cosmos DB is the only Azure service that provides comprehensive SLAs covering latency, throughput, consistency and high availability. The throughput you provision is applied to each of the regions associated with your Azure Cosmos DB account. For reads, Azure Cosmos DB offers multiple, well-defined [consistency levels](consistency-levels.md) for you to choose from.
The following table shows the number of RUs required to perform read and write operations based on data items of size 1 KB and 100 KB with default automatic indexing turned off.
If you provision 1,000 RU/s, this amounts to 3.6 million RU/hour and will cost $
|100 KB|$0.222|$1.111|
-Most of the basic blob or object stores services charge $0.40 per million read transaction and $5 per million write transaction. If used optimally, Cosmos DB can be up to 98% cheaper than these other solutions (for 1 KB transactions).
+Most of the basic blob or object stores services charge $0.40 per million read transaction and $5 per million write transaction. If used optimally, Azure Cosmos DB can be up to 98% cheaper than these other solutions (for 1 KB transactions).
## Next steps
-* Use [RU calculator](https://cosmos.azure.com/capacitycalculator/) to estimate throughput for your workloads.
+* Use [RU calculator](https://cosmos.azure.com/capacitycalculator/) to estimate throughput for your workloads.
cosmos-db Large Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/large-partition-keys.md
Title: Create Azure Cosmos containers with large partition key
+ Title: Create Azure Cosmos DB containers with large partition key
description: Learn how to create a container in Azure Cosmos DB with large partition key using Azure portal and different SDKs. -+ Last updated 12/8/2019 -+ # Create containers with large partition key
-Azure Cosmos DB uses hash-based partitioning scheme to achieve horizontal scaling of data. All Azure Cosmos containers created before May 3, 2019 use a hash function that computes hash based on the first 101 bytes of the partition key. If there are multiple partition keys that have the same first 101 bytes, then those logical partitions are considered as the same logical partition by the service. This can lead to issues like partition size quota being incorrect, unique indexes being incorrectly applied across the partition keys, and uneven distribution of storage. Large partition keys are introduced to solve this issue. Azure Cosmos DB now supports large partition keys with values up to 2 KB.
+Azure Cosmos DB uses a hash-based partitioning scheme to achieve horizontal scaling of data. All Azure Cosmos DB containers created before May 3, 2019 use a hash function that computes the hash based on the first 101 bytes of the partition key. If there are multiple partition keys that have the same first 101 bytes, then those logical partitions are considered as the same logical partition by the service. This can lead to issues like the partition size quota being incorrect, unique indexes being incorrectly applied across the partition keys, and uneven distribution of storage. Large partition keys are introduced to solve this issue. Azure Cosmos DB now supports large partition keys with values up to 2 KB.
Large partition keys are supported by enabling an enhanced version of the hash function, which can generate a unique hash from large partition keys up to 2 KB.
-As a best practice, unless you need support for an [older Cosmos SDK or application that does not support this feature](#supported-sdk-versions), it is always recommended to configure your container with support for large partition keys.
+As a best practice, unless you need to support an [older Azure Cosmos DB SDK or application that does not support this feature](#supported-sdk-versions), it is recommended to configure your container with support for large partition keys.
## Create a large partition key (Azure portal)
To create a large partition key, when you create a new container using the Azure
To create a container with large partition key support, see:
-* [Create an Azure Cosmos container with a large partition key size](manage-with-powershell.md#create-container-big-pk)
+* [Create an Azure Cosmos DB container with a large partition key size](manage-with-powershell.md#create-container-big-pk)
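As a quick reference, a minimal sketch of that PowerShell call is shown below. Large partition key support is enabled by setting the partition key version to 2; the account, database, container, and partition key names are placeholders:

```console
New-AzCosmosDBSqlContainer `
    -ResourceGroupName "myResourceGroup" `
    -AccountName "mycosmosaccount" `
    -DatabaseName "myDatabase" `
    -Name "myLargePartitionKeyContainer" `
    -PartitionKeyKind Hash `
    -PartitionKeyPath "/myPartitionKey" `
    -PartitionKeyVersion 2
```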
## Create a large partition key (.NET SDK)
Currently, you cannot use containers with a large partition key within Power BI
* [Partitioning in Azure Cosmos DB](partitioning-overview.md) * [Request Units in Azure Cosmos DB](request-units.md) * [Provision throughput on containers and databases](set-throughput.md)
-* [Work with Azure Cosmos account](./account-databases-containers-items.md)
+* [Work with Azure Cosmos DB account](./account-databases-containers-items.md)
cosmos-db Latest Restore Timestamp Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/latest-restore-timestamp-continuous-backup.md
description: The latest restorable timestamp API provides the latest restorable
-++ Last updated 04/08/2022 # Latest restorable timestamp for Azure Cosmos DB accounts with continuous backup mode Azure Cosmos DB offers an API to get the latest restorable timestamp of a container. This API is available for accounts that have continuous backup mode enabled. Latest restorable timestamp represents the latest timestamp in UTC format up to which your data has been successfully backed up. Using this API, you can get the restorable timestamp to trigger the live account restore or monitor that your data is being backed up on time. This API also takes the account location as an input parameter and returns the latest restorable timestamp for the given container in this location. If an account exists in multiple locations, then the latest restorable timestamp for a container in different locations could be different because the backups in each location are taken independently.
-By default, the API only works at the container level, but it can be easily extended to work at the database or account level. This article helps you understand the semantics of latest restorable timestamp api, how it gets calculated and use cases for it. To learn more, see [how to get the latest restore timestamp](get-latest-restore-timestamp.md) for SQL API, MongoDB API, Table API (preview), Gremlin API (preview) accounts.
+By default, the API only works at the container level, but it can be easily extended to work at the database or account level. This article helps you understand the semantics of the latest restorable timestamp API, how it gets calculated, and its use cases. To learn more, see [how to get the latest restore timestamp](get-latest-restore-timestamp.md) for API for NoSQL, MongoDB, Table, and Gremlin accounts.
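For example, for an API for NoSQL container, the latest restorable timestamp can be read with PowerShell. The cmdlet and parameter names below are assumptions based on the Az.CosmosDB module's naming pattern; verify them against the linked article before use:

```console
# Assumed Az.CosmosDB cmdlet; verify the exact name and parameters against the linked article.
Get-AzCosmosDBSqlContainerBackupInformation `
    -ResourceGroupName "myResourceGroup" `
    -AccountName "mycosmosaccount" `
    -DatabaseName "myDatabase" `
    -Name "myContainer" `
    -Location "West US"
```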
## Use cases
cosmos-db Limit Total Account Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/limit-total-account-throughput.md
Last updated 03/31/2022 -+ # Limit the total throughput provisioned on your Azure Cosmos DB account When using an Azure Cosmos DB account in [provisioned throughput](./set-throughput.md) mode, most of your costs usually come from the amount of throughput that you have provisioned across your account. In particular, these costs are directly influenced by:
cosmos-db Linux Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/linux-emulator.md
Title: Run the Azure Cosmos DB Emulator on Docker for Linux description: Learn how to run and use the Azure Cosmos DB Linux Emulator on Linux, and macOS. Using the emulator you can develop and test your application locally for free, without an Azure subscription. -+
Last updated 05/09/2022
# Run the emulator on Docker for Linux (Preview)
-The Azure Cosmos DB Linux Emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Currently, the Linux emulator only supports SQL API and MongoDB API. Using the Azure Cosmos DB Emulator, you can develop and test your application locally, without creating an Azure subscription or incurring any costs. When you're satisfied with how your application is working in the Azure Cosmos DB Linux Emulator, you can switch to using an Azure Cosmos DB account in the cloud. This article describes how to install and use the emulator on macOS and Linux environments.
+The Azure Cosmos DB Linux Emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Currently, the Linux emulator only supports API for NoSQL and MongoDB. Using the Azure Cosmos DB Emulator, you can develop and test your application locally, without creating an Azure subscription or incurring any costs. When you're satisfied with how your application is working in the Azure Cosmos DB Linux Emulator, you can switch to using an Azure Cosmos DB account in the cloud. This article describes how to install and use the emulator on macOS and Linux environments.
> [!NOTE]
-> The Cosmos DB Linux Emulator is currently in preview mode and supports only the SQL and MongoDB APIs. Users may experience slight performance degradations in terms of the number of requests per second processed by the emulator when compared to the Windows version. The default number of physical partitions which directly impacts the number of containers that can be provisioned is 10.
+> The Azure Cosmos DB Linux Emulator is currently in preview mode and supports only the APIs for NoSQL and MongoDB. Users may experience slight performance degradations in terms of the number of requests per second processed by the emulator when compared to the Windows version. The default number of physical partitions, which directly impacts the number of containers that can be provisioned, is 10.
> > We do not recommend use of the emulator (Preview) in production. For heavier workloads, use our [Windows emulator](local-emulator.md).
Functionality that relies on the Azure infrastructure like global replication, s
## Differences between the Linux Emulator and the cloud service
-Since the Azure Cosmos DB Emulator provides an emulated environment that runs on the local developer workstation, there are some differences in functionality between the emulator and an Azure Cosmos account in the cloud:
+Since the Azure Cosmos DB Emulator provides an emulated environment that runs on the local developer workstation, there are some differences in functionality between the emulator and an Azure Cosmos DB account in the cloud:
-- Currently, the **Data Explorer** pane in the emulator fully supports SQL and MongoDB API clients only.
+- Currently, the **Data Explorer** pane in the emulator fully supports API for NoSQL and MongoDB clients only.
-- With the Linux emulator, you can create an Azure Cosmos account in [provisioned throughput](set-throughput.md) mode only; currently it doesn't support [serverless](serverless.md) mode.
+- With the Linux emulator, you can create an Azure Cosmos DB account in [provisioned throughput](set-throughput.md) mode only; currently it doesn't support [serverless](serverless.md) mode.
- The Linux emulator isn't a scalable service and it doesn't support a large number of containers. When using the Azure Cosmos DB Emulator, by default, you can create up to 10 fixed size containers at 400 RU/s (only supported using Azure Cosmos DB SDKs), or 5 unlimited containers. For more information on how to change this value, see [Set the PartitionCount value](emulator-command-line-parameters.md#set-partitioncount) article. -- While [consistency levels](consistency-levels.md) can be adjusted using command-line arguments for testing scenarios only (default setting is Session), a user might not expect the same behavior as in the cloud service. For instance, Strong and Bounded staleness consistency has no effect on the emulator, other than signaling to the Cosmos DB SDK the default consistency of the account.
+- While [consistency levels](consistency-levels.md) can be adjusted using command-line arguments for testing scenarios only (default setting is Session), a user might not expect the same behavior as in the cloud service. For instance, Strong and Bounded staleness consistency has no effect on the emulator, other than signaling to the Azure Cosmos DB SDK the default consistency of the account.
- The Linux emulator doesn't offer [multi-region replication](distribute-data-globally.md).
Use the following steps to run the emulator on Linux:
| Memory: `-m` | | At least 3 GB of memory is required. | | Cores: `--cpus` | | Make sure to allocate enough memory and CPU cores. At least four cores are recommended. | |`AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE` | false | This setting, used by itself, helps persist data between container restarts. |
-|`AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT` | | This setting enables the MongoDB API endpoint for the emulator and configures the MongoDB server version. (Valid server version values include ``3.2``, ``3.6``, ``4.0`` and ``4.2``) |
+|`AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT` | | This setting enables the API for MongoDB endpoint for the emulator and configures the MongoDB server version. (Valid server version values include ``3.2``, ``3.6``, ``4.0`` and ``4.2``) |
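Putting these settings together, a typical startup command looks like the following sketch. The image name and port mappings are assumptions based on the emulator's published Docker image and default endpoints; adjust them to match the pull instructions for the image you use:

```console
docker run --detach \
    --publish 8081:8081 \
    --publish 10251-10254:10251-10254 \
    --memory 3g --cpus=4.0 \
    --name=linux-emulator \
    --env AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true \
    mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
```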
## Troubleshoot issues
TLS verification can be disabled by setting the environment variable `NODE_TLS_R
NODE_TLS_REJECT_UNAUTHORIZED=0 ```
-This flag is only recommended for local development as it disables TLS for Node.js. More information can be found on in [Node.js documentation](https://nodejs.org/api/cli.html#cli_node_tls_reject_unauthorized_value) and the [Cosmos DB Emulator Certificates documentation](local-emulator-export-ssl-certificates.md#how-to-use-the-certificate-in-nodejs).
+This flag is only recommended for local development because it disables TLS certificate verification for Node.js. More information can be found in the [Node.js documentation](https://nodejs.org/api/cli.html#cli_node_tls_reject_unauthorized_value) and the [Azure Cosmos DB Emulator Certificates documentation](local-emulator-export-ssl-certificates.md#how-to-use-the-certificate-in-nodejs).
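For example, the variable can be scoped to a single local run instead of being exported globally. This is a sketch only; `app.js` is a hypothetical entry point for an app that connects to the emulator.

```bash
# Scope the override to one invocation so TLS verification stays enabled elsewhere.
# app.js is a hypothetical entry point for an app that talks to the local emulator.
NODE_TLS_REJECT_UNAUTHORIZED=0 node app.js
```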
#### The Docker container failed to start
The number of physical partitions provisioned on the emulator is too low. Either
- The emulator fails to start:
- - Make sure you're [running the latest image of the Cosmos DB emulator for Linux](#refresh-linux-container). Otherwise, see the section above regarding connectivity-related issues.
+ - Make sure you're [running the latest image of the Azure Cosmos DB emulator for Linux](#refresh-linux-container). Otherwise, see the section above regarding connectivity-related issues.
- - If the Cosmos DB emulator data folder is "volume mounted", ensure that the volume has enough space and is read/write.
+ - If the Azure Cosmos DB emulator data folder is "volume mounted", ensure that the volume has enough space and is read/write.
- Confirm that creating a container with the recommended settings works. If yes, most likely the cause of failure was the extra settings passed via the respective Docker command upon starting the container.
The number of physical partitions provisioned on the emulator is too low. Either
"Failed loading Emulator secrets certificate. Error: 0x8009000f or similar, a new policy might have been added to your host that prevents an application such as Azure Cosmos DB Emulator from creating and adding self signed certificate files into your certificate store." ```
- This failure can occur even when you run in Administrator context, since the specific policy usually added by your IT department takes priority over the local Administrator. Using a Docker image for the emulator instead might help in this case. The image can help as long as you still have the permission to add the self-signed emulator TLS/SSL certificate into your host machine context. The self-signed certificate is required by Java and .NET Cosmos SDK client applications.
+   This failure can occur even when you run in an Administrator context, since the specific policy usually added by your IT department takes priority over the local Administrator. Using a Docker image for the emulator instead might help in this case. The image can help as long as you still have permission to add the self-signed emulator TLS/SSL certificate into your host machine context. The self-signed certificate is required by Java and .NET Azure Cosmos DB SDK client applications.
- The emulator is crashing:
The number of physical partitions provisioned on the emulator is too low. Either
- Make sure that the self-signed emulator certificate is properly imported and manually trusted in order for your browser to access the data explorer page.
- - Try creating a database/container and inserting an item using the Data Explorer. If successful, most likely the cause of the issue resides within your application. If not, [contact the Cosmos DB team](#report-an-emulator-issue).
+ - Try creating a database/container and inserting an item using the Data Explorer. If successful, most likely the cause of the issue resides within your application. If not, [contact the Azure Cosmos DB team](#report-an-emulator-issue).
### Performance issues
Use the following steps to refresh the Linux container:
docker rmi ID_OF_IMAGE_FROM_ABOVE
```
-1. Pull the latest image of the Cosmos DB Linux Emulator.
+1. Pull the latest image of the Azure Cosmos DB Linux Emulator.
```bash
docker pull mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
```
cosmos-db Local Emulator Export Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/local-emulator-export-ssl-certificates.md
Last updated 09/17/2020 -+

# Export the Azure Cosmos DB Emulator certificates for use with Java, Python, and Node.js apps

The Azure Cosmos DB Emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Azure Cosmos DB Emulator supports only secure communication through TLS connections.
Once the "CosmosDBEmulatorCertificate" TLS/SSL certificate is installed, your ap
## Use the certificate with Python apps
-When connecting to the emulator from Python apps, TLS verification is disabled. By default the [Python SDK(version 2.0.0 or higher)](sql-api-sdk-python.md) for the SQL API will not try to use the TLS/SSL certificate when connecting to the local emulator. If however you want to use TLS validation, you can follow the examples in the [Python socket wrappers](https://docs.python.org/2/library/ssl.html) documentation.
+When connecting to the emulator from Python apps, TLS verification is disabled. By default, the [Python SDK (version 2.0.0 or higher)](nosql/sdk-python.md) for the API for NoSQL will not try to use the TLS/SSL certificate when connecting to the local emulator. However, if you want to use TLS validation, you can follow the examples in the [Python socket wrappers](https://docs.python.org/2/library/ssl.html) documentation.
## How to use the certificate in Node.js
-When connecting to the emulator from Node.js SDKs, TLS verification is disabled. By default the [Node.js SDK(version 1.10.1 or higher)](sql-api-sdk-node.md) for the SQL API will not try to use the TLS/SSL certificate when connecting to the local emulator. If however you want to use TLS validation, you can follow the examples in the [Node.js documentation](https://nodejs.org/api/tls.html#tls_tls_connect_options_callback).
+When connecting to the emulator from Node.js SDKs, TLS verification is disabled. By default, the [Node.js SDK (version 1.10.1 or higher)](nosql/sdk-nodejs.md) for the API for NoSQL will not try to use the TLS/SSL certificate when connecting to the local emulator. However, if you want to use TLS validation, you can follow the examples in the [Node.js documentation](https://nodejs.org/api/tls.html#tls_tls_connect_options_callback).
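As a sketch of one way to opt into TLS validation from Node.js, the emulator's self-signed certificate can be captured with `openssl` and passed to Node.js through the standard `NODE_EXTRA_CA_CERTS` variable. The endpoint, the output file name, and the `app.js` entry point are assumptions.

```bash
# Capture the emulator's self-signed certificate (default endpoint assumed: localhost:8081).
openssl s_client -connect localhost:8081 </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > emulatorcert.pem

# Tell Node.js to trust that certificate in addition to its built-in CA bundle.
NODE_EXTRA_CA_CERTS="$PWD/emulatorcert.pem" node app.js
```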
## Rotate emulator certificates
cosmos-db Local Emulator On Docker Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/local-emulator-on-docker-windows.md
Title: Running the emulator on Docker for Windows
titleSuffix: Running the Azure Cosmos DB emulator on Docker for Windows description: Learn how to run and use the Azure Cosmos DB Emulator on Docker for Windows. Using the emulator you can develop and test your application locally for free, without creating an Azure subscription. +
Last updated 04/20/2021
# <a id="run-on-windows-docker"></a>Use the emulator on Docker for Windows

You can run the Azure Cosmos DB Emulator on a Windows Docker container. See [GitHub](https://github.com/Azure/azure-cosmos-db-emulator-docker) for the `Dockerfile` and more information. Currently, the emulator does not work on Docker for Oracle Linux. Use the following instructions to run the emulator on Docker for Windows:
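A minimal sketch of those instructions follows; the Windows image name and flags are assumptions based on the emulator's published defaults, so check the linked GitHub repository for the authoritative commands.

```bash
# Sketch only: pull and run the Windows emulator container (requires Windows containers).
# Image name and port list are assumptions; see the linked GitHub repository for the exact commands.
docker pull mcr.microsoft.com/cosmosdb/windows/azure-cosmos-emulator
docker run --name azure-cosmos-emulator \
  --memory 2GB \
  --interactive --tty \
  -p 8081:8081 -p 10251-10255:10251-10255 \
  mcr.microsoft.com/cosmosdb/windows/azure-cosmos-emulator
```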
cosmos-db Local Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/local-emulator-release-notes.md
# Azure Cosmos DB Emulator - release notes and download information

This article lists the released versions of the Azure Cosmos DB Emulator and details the latest updates. The download center only has the latest version of the emulator available to download.
This article shows the Azure Cosmos DB Emulator released versions and it details
- This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of Azure Cosmos DB. In addition to this update, there are a couple of issues addressed in this release:
  - Update Data Explorer to the latest content and fix a broken link for the quick start sample documentation.
- - Add option to enable the Mongo API version for the Linux Cosmos DB emulator by setting the environment variable: `AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT` in the Docker container. Valid settings are: `3.2`, `3.6`, `4.0` and `4.2`
+  - Add option to enable the API for MongoDB and configure its version for the Linux Azure Cosmos DB emulator by setting the environment variable `AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT` in the Docker container. Valid settings are `3.2`, `3.6`, `4.0`, and `4.2`.
### `2.14.6` (March 7, 2022)

- This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of Azure Cosmos DB. In addition to this update, there are a couple of issues addressed in this release:
  - Fix for an issue related to high CPU usage when the emulator is running.
- - Add PowerShell option to set the Mongo API version: `-MongoApiVersion`. Valid settings are: `3.2`, `3.6` and `4.0`
+  - Add PowerShell option to enable the API for MongoDB and set its version: `-MongoApiVersion`. Valid settings are `3.2`, `3.6`, and `4.0`.
### `2.14.5` (January 18, 2022)
This article shows the Azure Cosmos DB Emulator released versions and it details
### `2.14.3` (September 8, 2021) -- This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of the Azure Cosmos DB. It also addresses issues with performance data that's collected and resets the base image for the Linux Cosmos emulator Docker image.
+- This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of Azure Cosmos DB. It also addresses issues with the performance data that's collected and resets the base image for the Linux Azure Cosmos DB emulator Docker image.
### `2.14.2` (August 12, 2021) -- This release updates the local Data Explorer content to latest Microsoft Azure version and resets the base for the Linux Cosmos emulator Docker image.
+- This release updates the local Data Explorer content to the latest Microsoft Azure version and resets the base image for the Linux Azure Cosmos DB emulator Docker image.
### `2.14.1` (June 18, 2021)
This article shows the Azure Cosmos DB Emulator released versions and it details
- This release adds two new Azure Cosmos DB Emulator startup options:
  - `/EnablePreview` - Enables preview features for the Azure Cosmos DB Emulator. These preview features are still under development and are available via CI and sample writing.
- - `/EnableAadAuthentication` - Enables the emulator to accept custom Azure Active Directory tokens as an alternative to the Azure Cosmos primary keys. This feature is still under development; specific role assignments and other permission-related settings aren't currently supported.
+ - `/EnableAadAuthentication` - Enables the emulator to accept custom Azure Active Directory tokens as an alternative to the Azure Cosmos DB primary keys. This feature is still under development; specific role assignments and other permission-related settings aren't currently supported.
### `2.11.2` (July 7, 2020)
This article shows the Azure Cosmos DB Emulator released versions and it details
### `2.7.0` -- This release fixes a regression in the Azure Cosmos DB Emulator that prevented users from executing SQL related queries. This issue impacts emulator users that configured SQL API endpoint and they're using .NET core or x86 .NET based client applications.
+- This release fixes a regression in the Azure Cosmos DB Emulator that prevented users from executing SQL-related queries. This issue impacts emulator users who configured the API for NoSQL endpoint and are using .NET Core or x86 .NET-based client applications.
### `2.4.6` -- This release provides parity with the features in the Azure Cosmos service as of July 2019, with the exceptions noted in [Develop locally with Azure Cosmos DB Emulator](local-emulator.md). It also fixes several bugs related to emulator shut down when invoked via command line and internal IP address overrides for SDK clients using direct mode connectivity.
+- This release provides parity with the features in the Azure Cosmos DB service as of July 2019, with the exceptions noted in [Develop locally with Azure Cosmos DB Emulator](local-emulator.md). It also fixes several bugs related to emulator shutdown when invoked via the command line and internal IP address overrides for SDK clients using direct mode connectivity.
### `2.4.3`
cosmos-db Local Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/local-emulator.md
Last updated 09/22/2020-+

# Install and use the Azure Cosmos DB Emulator for local development and testing
-The Azure Cosmos DB Emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Using the Azure Cosmos DB Emulator, you can develop and test your application locally, without creating an Azure subscription or incurring any costs. When you're satisfied with how your application is working in the Azure Cosmos DB Emulator, you can switch to using an Azure Cosmos account in the cloud. This article describes how to install and use the emulator on Windows, Linux, macOS, and Windows docker environments.
+The Azure Cosmos DB Emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Using the Azure Cosmos DB Emulator, you can develop and test your application locally, without creating an Azure subscription or incurring any costs. When you're satisfied with how your application is working in the Azure Cosmos DB Emulator, you can switch to using an Azure Cosmos DB account in the cloud. This article describes how to install and use the emulator on Windows, Linux, macOS, and Windows Docker environments.
## Download the emulator
To get started, download and install the latest version of Azure Cosmos DB Emula
:::image type="icon" source="media/local-emulator/download-icon.png" border="false":::**[Download the Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator)**
-You can develop applications using Azure Cosmos DB Emulator with the [SQL](local-emulator.md#sql-api), [Cassandra](local-emulator.md#cassandra-api), [MongoDB](local-emulator.md#azure-cosmos-dbs-api-for-mongodb), [Gremlin](local-emulator.md#gremlin-api), and [Table](local-emulator.md#table-api) API accounts. Currently the data explorer in the emulator fully supports viewing SQL data only; the data created using MongoDB, Gremlin/Graph and Cassandra client applications it is not viewable at this time. To learn more, see [how to connect to the emulator endpoint](#connect-with-emulator-apis) from different APIs.
+You can develop applications using the Azure Cosmos DB Emulator with accounts that use the APIs for [NoSQL](local-emulator.md#api-for-nosql), [Apache Cassandra](local-emulator.md#api-for-cassandra), [MongoDB](local-emulator.md#api-for-mongodb), [Apache Gremlin](local-emulator.md#api-for-gremlin), and [Table](local-emulator.md#api-for-table). Currently, the data explorer in the emulator fully supports viewing SQL data only; data created using MongoDB, Gremlin/Graph, and Cassandra client applications isn't viewable at this time. To learn more, see [how to connect to the emulator endpoint](#connect-with-emulator-apis) from different APIs.
## How does the emulator work?
You can migrate data between the Azure Cosmos DB Emulator and the Azure Cosmos D
## Differences between the emulator and the cloud service
-Because the Azure Cosmos DB Emulator provides an emulated environment that runs on the local developer workstation, there are some differences in functionality between the emulator and an Azure Cosmos account in the cloud:
+Because the Azure Cosmos DB Emulator provides an emulated environment that runs on the local developer workstation, there are some differences in functionality between the emulator and an Azure Cosmos DB account in the cloud:
-* Currently the **Data Explorer** pane in the emulator fully supports SQL API clients only. The **Data Explorer** view and operations for Azure Cosmos DB APIs such as MongoDB, Table, Graph, and Cassandra APIs are not fully supported.
+* Currently, the **Data Explorer** pane in the emulator fully supports API for NoSQL clients only. The **Data Explorer** view and operations for the other Azure Cosmos DB APIs, such as MongoDB, Table, Graph, and Cassandra, aren't fully supported.
* The emulator supports only a single fixed account and a well-known primary key. You can't regenerate the key when using the Azure Cosmos DB Emulator; however, you can change the default key by using the [command-line](emulator-command-line-parameters.md) option.
-* With the emulator, you can create an Azure Cosmos account in [provisioned throughput](set-throughput.md) mode only; currently it doesn't support [serverless](serverless.md) mode.
+* With the emulator, you can create an Azure Cosmos DB account in [provisioned throughput](set-throughput.md) mode only; currently it doesn't support [serverless](serverless.md) mode.
* The emulator is not a scalable service and it doesn't support a large number of containers. When using the Azure Cosmos DB Emulator, by default, you can create up to 25 fixed size containers at 400 RU/s (only supported using Azure Cosmos DB SDKs), or 5 unlimited containers. For more information on how to change this value, see the [Set the PartitionCount value](emulator-command-line-parameters.md#set-partitioncount) article.
Depending upon your system requirements, you can run the emulator on [Windows](#
Each version of the emulator comes with a set of feature updates or bug fixes. To see the available versions, read the [emulator release notes](local-emulator-release-notes.md) article.
-After installation, if you have used the default settings, the data corresponding to the emulator is saved at %LOCALAPPDATA%\CosmosDBEmulator location. You can configure a different location by using the optional data path settings; that is the `/DataPath=PREFERRED_LOCATION` as the [command-line parameter](emulator-command-line-parameters.md). The data created in one version of the Azure Cosmos DB Emulator is not guaranteed to be accessible when using a different version. If you need to persist your data for the long term, it is recommended that you store that data in an Azure Cosmos account, instead of the Azure Cosmos DB Emulator.
+After installation, if you have used the default settings, the data corresponding to the emulator is saved at the %LOCALAPPDATA%\CosmosDBEmulator location. You can configure a different location by using the optional `/DataPath=PREFERRED_LOCATION` [command-line parameter](emulator-command-line-parameters.md). The data created in one version of the Azure Cosmos DB Emulator is not guaranteed to be accessible when using a different version. If you need to persist your data for the long term, it is recommended that you store that data in an Azure Cosmos DB account, instead of the Azure Cosmos DB Emulator.
## <a id="run-on-windows"></a>Use the emulator on Windows
The Azure Cosmos DB Emulator is installed at `C:\Program Files\Azure Cosmos DB E
:::image type="content" source="./media/local-emulator/database-local-emulator-start.png" alt-text="Select the Start button or press the Windows key, begin typing Azure Cosmos DB Emulator, and select the emulator from the list of applications":::
-When the emulator has started, you'll see an icon in the Windows taskbar notification area. It automatically opens the Azure Cosmos data explorer in your browser at this URL `https://localhost:8081/_explorer/https://docsupdatetracker.net/index.html` URL.
+When the emulator has started, you'll see an icon in the Windows taskbar notification area. It automatically opens the Azure Cosmos DB data explorer in your browser at `https://localhost:8081/_explorer/index.html`.
:::image type="content" source="./media/local-emulator/database-local-emulator-taskbar.png" alt-text="Azure Cosmos DB local emulator task bar notification":::
You can also start and stop the emulator from the command-line or PowerShell com
The Azure Cosmos DB Emulator by default runs on the local machine ("localhost") listening on port 8081. The address appears as `https://localhost:8081/_explorer/index.html`. If you close the explorer and would like to reopen it later, you can either open the URL in your browser or launch it from the Azure Cosmos DB Emulator in the Windows tray icon as shown below.

## <a id="run-on-linux-macos"></a>Use the emulator on Linux or macOS
Account key: C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZ
## <a id="connect-with-emulator-apis"></a>Connect to different APIs with the emulator
-### SQL API
+### API for NoSQL
-Once you have the Azure Cosmos DB Emulator running on your desktop, you can use any supported [Azure Cosmos DB SDK](sql-api-sdk-dotnet-standard.md) or the [Azure Cosmos DB REST API](/rest/api/cosmos-db/) to interact with the emulator. The Azure Cosmos DB Emulator also includes a built-in data explorer that lets you create containers for SQL API or Azure Cosmos DB for Mongo DB API. By using the data explorer, you can view and edit items without writing any code.
+Once you have the Azure Cosmos DB Emulator running on your desktop, you can use any supported [Azure Cosmos DB SDK](nosql/sdk-dotnet-v3.md) or the [Azure Cosmos DB REST API](/rest/api/cosmos-db/) to interact with the emulator. The Azure Cosmos DB Emulator also includes a built-in data explorer that lets you create containers for API for NoSQL or MongoDB. By using the data explorer, you can view and edit items without writing any code.
```csharp
// Connect to the Azure Cosmos DB Emulator running locally, using its well-known endpoint and primary key.
CosmosClient client = new CosmosClient(
    "https://localhost:8081",
    "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==");
```
-### Azure Cosmos DB's API for MongoDB
+### API for MongoDB
-Once you have the Azure Cosmos DB Emulator running on your desktop, you can use the [Azure Cosmos DB's API for MongoDB](mongodb/mongodb-introduction.md) to interact with the emulator. Start the emulator from [command prompt](emulator-command-line-parameters.md) as an administrator with "/EnableMongoDbEndpoint". Then use the following connection string to connect to the MongoDB API account:
+Once you have the Azure Cosmos DB Emulator running on your desktop, you can use the [Azure Cosmos DB API for MongoDB](mongodb/introduction.md) to interact with the emulator. Start the emulator from a [command prompt](emulator-command-line-parameters.md) as an administrator with "/EnableMongoDbEndpoint". Then use the following connection string to connect to the API for MongoDB account:
```bash
mongodb://localhost:C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==@localhost:10255/admin?ssl=true
```
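For a quick connectivity check, the same connection string can be passed to a MongoDB shell. This is a sketch only; the `--tlsAllowInvalidCertificates` flag is used here on the assumption that the emulator presents its self-signed certificate.

```bash
# Sketch: connect mongosh to the emulator's API for MongoDB endpoint.
# --tlsAllowInvalidCertificates accepts the emulator's self-signed certificate; use it only locally.
mongosh "mongodb://localhost:C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==@localhost:10255/admin?ssl=true" \
  --tlsAllowInvalidCertificates
```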
-### Table API
+### API for Table
-Once you have the Azure Cosmos DB Emulator running on your desktop, you can use the [Azure Cosmos DB Table API SDK](./table/tutorial-develop-table-dotnet.md) to interact with the emulator. Start the emulator from [command prompt](emulator-command-line-parameters.md) as an administrator with "/EnableTableEndpoint". Next run the following code to connect to the table API account:
+Once you have the Azure Cosmos DB Emulator running on your desktop, you can use the [Azure Cosmos DB API for Table SDK](./table/tutorial-develop-table-dotnet.md) to interact with the emulator. Start the emulator from a [command prompt](emulator-command-line-parameters.md) as an administrator with "/EnableTableEndpoint". Next, run the following code to connect to the API for Table account:
```csharp
using Microsoft.WindowsAzure.Storage;
table.CreateIfNotExists();
table.Execute(TableOperation.Insert(new DynamicTableEntity("partitionKey", "rowKey")));
```
-### Cassandra API
+### API for Cassandra
Start the emulator from an administrator [command prompt](emulator-command-line-parameters.md) with "/EnableCassandraEndpoint". Alternatively, you can set the environment variable `AZURE_COSMOS_EMULATOR_CASSANDRA_ENDPOINT=true`.
Start emulator from an administrator [command prompt](emulator-command-line-para
EXIT ```
-### Gremlin API
+### API for Gremlin
Start the emulator from an administrator [command prompt](emulator-command-line-parameters.md) with "/EnableGremlinEndpoint". Alternatively, you can set the environment variable `AZURE_COSMOS_EMULATOR_GREMLIN_ENDPOINT=true`.
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/managed-identity-based-authentication.md
Title: Use system-assigned managed identities to access Azure Cosmos DB data
description: Learn how to configure an Azure Active Directory (Azure AD) system-assigned managed identity (managed service identity) to access keys from Azure Cosmos DB. -+ Last updated 06/01/2022 -+

# Use system-assigned managed identities to access Azure Cosmos DB data

In this article, you'll set up a *robust, key rotation agnostic* solution to access Azure Cosmos DB keys by using [managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md) and [data plane role-based access control](how-to-setup-rbac.md). The example in this article uses Azure Functions, but you can use any service that supports managed identities.
You'll learn how to create a function app that can access Azure Cosmos DB data w
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An existing Azure Cosmos DB SQL API account. [Create an Azure Cosmos DB SQL API account](sql/create-cosmosdb-resources-portal.md)
+- An existing Azure Cosmos DB API for NoSQL account. [Create an Azure Cosmos DB API for NoSQL account](nosql/quickstart-portal.md)
- An existing Azure Functions function app. [Create your first function in the Azure portal](../azure-functions/functions-create-function-app-portal.md)
- A system-assigned managed identity for the function app. [Add a system-assigned identity](../app-service/overview-managed-identity.md#add-a-system-assigned-identity)
- [Azure Functions Core Tools](../azure-functions/functions-run-local.md)
You'll learn how to create a function app that can access Azure Cosmos DB data w
# Variable for function app name functionName="msdocs-function-app"
- # Variable for Cosmos DB account name
+ # Variable for Azure Cosmos DB account name
cosmosName="msdocs-cosmos-app" # Variable for resource group name
You'll learn how to create a function app that can access Azure Cosmos DB data w
--name $functionName ```
-1. View the Cosmos DB account's properties using [``az cosmosdb show``](/cli/azure/cosmosdb#az-cosmosdb-show).
+1. View the Azure Cosmos DB account's properties using [``az cosmosdb show``](/cli/azure/cosmosdb#az-cosmosdb-show).
```azurecli-interactive
az cosmosdb show \
You'll learn how to create a function app that can access Azure Cosmos DB data w
    --name $cosmosName
```
-## Create Cosmos DB SQL API databases
+## Create Azure Cosmos DB API for NoSQL databases
In this step, you'll create two databases.
In this step, you'll create two databases.
--account-name $cosmosName ```
-## Get Cosmos DB SQL API endpoint
+## Get Azure Cosmos DB API for NoSQL endpoint
-In this step, you'll query the document endpoint for the SQL API account.
+In this step, you'll query the document endpoint for the API for NoSQL account.
1. Use ``az cosmosdb show`` with the **query** parameter set to ``documentEndpoint``. Record the result. You'll use this value in a later step.
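A sketch of that query is shown below. It assumes the resource group and account name variables defined earlier in this article and captures the endpoint for reuse.

```bash
# Sketch: capture the account's document endpoint into a shell variable for later steps.
cosmosEndpoint=$(az cosmosdb show \
    --resource-group $resourceGroupName \
    --name $cosmosName \
    --query documentEndpoint \
    --output tsv)

echo $cosmosEndpoint
```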
In this step, you'll query the document endpoint for the SQL API account.
> [!NOTE] > This variable will be re-used in a later step.
-## Grant access to your Azure Cosmos account
+## Grant access to your Azure Cosmos DB account
-In this step, you'll assign a role to the function app's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity. For this solution, you'll use the [Cosmos DB Built-in Data Reader](how-to-setup-rbac.md#built-in-role-definitions) role.
+In this step, you'll assign a role to the function app's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity. For this solution, you'll use the [Azure Cosmos DB Built-in Data Reader](how-to-setup-rbac.md#built-in-role-definitions) role.
> [!TIP] > When you assign roles, assign only the needed access. If your service requires only reading data, then assign the **Cosmos DB Built-in Data Reader** role to the managed identity. For more information about the importance of least privilege access, see the [Lower exposure of privileged accounts](../security/fundamentals/identity-management-best-practices.md#lower-exposure-of-privileged-accounts) article.
In this step, you'll assign a role to the function app's system-assigned managed
```json {
- "RoleName": "Read Cosmos Metadata",
+ "RoleName": "Read Azure Cosmos DB Metadata",
"Type": "CustomRole", "AssignableScopes": ["/"], "Permissions": [{
In this step, you'll assign a role to the function app's system-assigned managed
} ```
-1. Use [``az cosmosdb sql role definition create``](/cli/azure/cosmosdb/sql/role/definition#az-cosmosdb-sql-role-definition-create) to create a new role definition named ``Read Cosmos Metadata`` using the custom JSON object.
+1. Use [``az cosmosdb sql role definition create``](/cli/azure/cosmosdb/sql/role/definition#az-cosmosdb-sql-role-definition-create) to create a new role definition named ``Read Azure Cosmos DB Metadata`` using the custom JSON object.
```azurecli-interactive
az cosmosdb sql role definition create \
In this step, you'll assign a role to the function app's system-assigned managed
> [!NOTE] > In this example, the role definition is defined in a file named **definition.json**.
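A hedged sketch of the full command from the previous step follows, assuming the variables defined earlier in this article and the **definition.json** file from the note above.

```bash
# Sketch: create the custom role definition from the definition.json file.
# Variable names are assumptions carried over from earlier steps in this article.
az cosmosdb sql role definition create \
    --resource-group $resourceGroupName \
    --account-name $cosmosName \
    --body @definition.json
```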
-1. Use [``az role assignment create``](/cli/azure/cosmosdb/sql/role/assignment#az-cosmosdb-sql-role-assignment-create) to assign the ``Read Cosmos Metadata`` role to the system-assigned managed identity.
+1. Use [``az role assignment create``](/cli/azure/cosmosdb/sql/role/assignment#az-cosmosdb-sql-role-assignment-create) to assign the ``Read Azure Cosmos DB Metadata`` role to the system-assigned managed identity.
```azurecli-interactive
az cosmosdb sql role assignment create \
    --resource-group $resourceGroupName \
    --account-name $cosmosName \
- --role-definition-name "Read Cosmos Metadata" \
+ --role-definition-name "Read Azure Cosmos DB Metadata" \
    --principal-id $principal \
    --scope $scope
```
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
description: Learn more about the merge partitions capability in Azure Cosmos DB
-+ Last updated 05/09/2022

# Merge partitions in Azure Cosmos DB (preview)

Merging partitions in Azure Cosmos DB (preview) allows you to reduce the number of physical partitions used for your container in place. With merge, containers that are fragmented in throughput (have low RU/s per partition) or storage (have low storage per partition) can have their physical partitions reworked. If a container's throughput has been scaled up and needs to be scaled back down, merge can help resolve throughput fragmentation issues. For the same amount of provisioned RU/s, having fewer physical partitions means each physical partition gets more of the overall RU/s. Minimizing partitions reduces the chance of rate limiting if a large quantity of data is removed from a container and RU/s per partition is low. Merge can help clear out unused or empty partitions, effectively resolving storage fragmentation problems.
In PowerShell, when the flag `-WhatIf` is passed in, Azure Cosmos DB will run a
// Add the preview extension Install-Module -Name Az.CosmosDB -AllowPrerelease -Force
-// SQL API
+// API for NoSQL
Invoke-AzCosmosDBSqlContainerMerge ` -ResourceGroupName "<resource-group-name>" ` -AccountName "<cosmos-account-name>" `
Invoke-AzCosmosDBMongoDBCollectionMerge `
// Add the preview extension az extension add --name cosmosdb-preview
-// SQL API
+// API for NoSQL
az cosmosdb sql container merge \
    --resource-group '<resource-group-name>' \
    --account-name '<cosmos-account-name>' \
You can track whether merge is still in progress by checking the **Activity Log*
## Limitations ### Preview eligibility criteria
-To enroll in the preview, your Cosmos account must meet all the following criteria:
-* Your Cosmos account uses SQL API or API for MongoDB with version >=3.6.
-* Your Cosmos account is using provisioned throughput (manual or autoscale). Merge doesn't apply to serverless accounts.
+To enroll in the preview, your Azure Cosmos DB account must meet all the following criteria:
+* Your Azure Cosmos DB account uses API for NoSQL or MongoDB with version >=3.6.
+* Your Azure Cosmos DB account is using provisioned throughput (manual or autoscale). Merge doesn't apply to serverless accounts.
* Currently, merge isn't supported for shared throughput databases. You may enroll an account that has both shared throughput databases and containers with dedicated throughput (manual or autoscale).
  * However, only the containers with dedicated throughput can be merged.
-* Your Cosmos account is a single-write region account (merge isn't currently supported for multi-region write accounts).
-* Your Cosmos account doesn't use any of the following features:
+* Your Azure Cosmos DB account is a single-write region account (merge isn't currently supported for multi-region write accounts).
+* Your Azure Cosmos DB account doesn't use any of the following features:
* [Point-in-time restore](continuous-backup-restore-introduction.md) * [Customer-managed keys](how-to-setup-cmk.md) * [Analytical store](analytical-store-introduction.md)
-* Your Cosmos account uses bounded staleness, session, consistent prefix, or eventual consistency (merge isn't currently supported for strong consistency).
-* If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When merge preview enabled on your account, all requests sent from non .NET SDKs or older .NET SDK versions won't be accepted.
+* Your Azure Cosmos DB account uses bounded staleness, session, consistent prefix, or eventual consistency (merge isn't currently supported for strong consistency).
+* If you're using the API for NoSQL, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When the merge preview is enabled on your account, all requests sent from non-.NET SDKs or older .NET SDK versions won't be accepted.
* There are no SDK or driver requirements to use the feature with API for MongoDB.
-* Your Cosmos account doesn't use any currently unsupported connectors:
+* Your Azure Cosmos DB account doesn't use any currently unsupported connectors:
* Azure Data Factory * Azure Stream Analytics * Logic Apps
To enroll in the preview, your Cosmos account must meet all the following criter
* Any third-party library or tool that has a dependency on an Azure Cosmos DB SDK that is not the .NET V3 SDK v3.27.0 or higher

### Account resources and configuration
-* Merge is only available for SQL API and API for MongoDB accounts. For API for MongoDB accounts, the MongoDB account version must be 3.6 or greater.
+* Merge is only available for API for NoSQL and MongoDB accounts. For API for MongoDB accounts, the MongoDB account version must be 3.6 or greater.
* Merge is only available for single-region write accounts. Multi-region write account support isn't available. * Accounts using merge functionality can't also use these features (if these features are added to a merge enabled account, resources in the account will no longer be able to be merged): * [Point-in-time restore](continuous-backup-restore-introduction.md)
To enroll in the preview, your Cosmos account must meet all the following criter
* Merge is only available for accounts using bounded staleness, session, consistent prefix, or eventual consistency. It isn't currently supported for strong consistency.
* After a container has been merged, it isn't possible to read the change feed with a start time. Support for this feature is planned for the future.
-### SDK requirements (SQL API only)
+### SDK requirements (API for NoSQL only)
Accounts with the merge feature enabled are supported only when you use the latest version of the .NET v3 SDK. When the feature is enabled on your account (regardless of whether you run the merge), you must use only the supported SDK with the account. Requests sent from other SDKs or earlier versions won't be accepted. As long as you're using the supported SDK, your application can continue to run while a merge is ongoing.
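As a sketch of one way to satisfy that requirement in a .NET project, you might pin the SDK package to a supported version; the version shown is the minimum called out above.

```bash
# Sketch: pin the Azure Cosmos DB .NET V3 SDK to a merge-supported version in your project.
dotnet add package Microsoft.Azure.Cosmos --version 3.27.0
```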
cosmos-db Migrate Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md
description: Azure Cosmos DB currently supports a one-way migration from periodi
-++ Last updated 08/24/2022
# Migrate an Azure Cosmos DB account from periodic to continuous backup mode Azure Cosmos DB accounts with periodic mode backup policy can be migrated to continuous mode using [Azure portal](#portal), [CLI](#cli), [PowerShell](#powershell), or [Resource Manager templates](#ARM-template). Migration from periodic to continuous mode is a one-way migration and itΓÇÖs not reversible. After migrating from periodic to continuous mode, you can apply the benefits of continuous mode.
The following are the key reasons to migrate into continuous mode:
> > You can migrate an account to continuous backup mode only if the following conditions are true. Also checkout the [point in time restore limitations](continuous-backup-restore-introduction.md#current-limitations) before migrating your account: >
-> * If the account is of type SQL API or API for MongoDB.
-> * If the account is of type Table API or Gremlin API. These two APIs are in preview.
+> * If the account is of type API for NoSQL or MongoDB.
+> * If the account is of type API for Table or Gremlin. Support for these two APIs is in preview.
> * If the account has a single write region. > * If the account isn't enabled with analytical store. >
az deployment group create -g <ResourceGroup> --template-file <ProvisionTemplate
You can switch between ``Continuous30Days`` and ``Continuous7Days`` in Azure PowerShell, Azure CLI, or the Azure portal.
-In the portal for the given Cosmos DB account, choose **Point in Time Restore** pane, select on change link next to Backup policy mode to show you the option of Continuous (30 days) or Continuous (7 days). Choose the required target and select on **Save**.
+In the portal for the given Azure Cosmos DB account, open the **Point in Time Restore** pane, and select the change link next to the backup policy mode to show the Continuous (30 days) and Continuous (7 days) options. Choose the required target and select **Save**.
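For the CLI path, a hedged sketch is shown below; the `--continuous-tier` parameter name is an assumption and may require a recent Azure CLI version or the cosmosdb-preview extension.

```bash
# Sketch only: switch an account that is already in continuous mode to the 7-day tier.
# The --continuous-tier parameter is an assumption; verify it against your Azure CLI version.
az cosmosdb update \
    --resource-group <ResourceGroup> \
    --name <AccountName> \
    --backup-policy-type Continuous \
    --continuous-tier Continuous7Days
```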
:::image type="content" source="./media/migrate-continuous-backup/migrate-continuous-mode-tiers.png" lightbox="./media/migrate-continuous-backup/migrate-continuous-mode-tiers.png" alt-text="Screenshot of dialog to select tier of continuous mode.":::
Yes.
### Which accounts can be targeted for backup migration?
-Currently, SQL API and API for MongoDB accounts with single write region that have shared, provisioned, or autoscale provisioned throughput support migration. Table API and Gremlin API are in preview.
+Currently, API for NoSQL and MongoDB accounts with a single write region that have shared, provisioned, or autoscale provisioned throughput support migration. Support for the API for Table and Gremlin is in preview.
Accounts enabled with analytical storage and multiple-write regions aren't supported for migration.
cosmos-db Migrate Cosmosdb Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-cosmosdb-data.md
- Title: Migrate hundreds of terabytes of data into Azure Cosmos DB
-description: This doc describes how you can migrate 100s of terabytes of data into Cosmos DB
------ Previously updated : 08/26/2021---
-# Migrate hundreds of terabytes of data into Azure Cosmos DB
-
-Azure Cosmos DB can store terabytes of data. You can perform a large-scale data migration to move your production workload to Azure Cosmos DB. This article describes the challenges involved in moving large-scale data to Azure Cosmos DB and introduces you to the tool that helps with the challenges and migrates data to Azure Cosmos DB. In this case study, the customer used the Cosmos DB SQL API.
-
-Before you migrate the entire workload to Azure Cosmos DB, you can migrate a subset of data to validate some of the aspects like partition key choice, query performance, and data modeling. After you validate the proof of concept, you can move the entire workload to Azure Cosmos DB.
-
-## Tools for data migration
-
-Azure Cosmos DB migration strategies currently differ based on the API choice and the size of the data. To migrate smaller datasets ΓÇô for validating data modeling, query performance, partition key choice etc. ΓÇô you can choose the [Data Migration Tool](import-data.md) or [Azure Data FactoryΓÇÖs Azure Cosmos DB connector](../data-factory/connector-azure-cosmos-db.md). If you are familiar with Spark, you can also choose to use the [Azure Cosmos DB Spark connector](./create-sql-api-spark.md) to migrate data.
-
-## Challenges for large-scale migrations
-
-The existing tools for migrating data to Azure Cosmos DB have some limitations that become especially apparent at large scales:
-
- * **Limited scale out capabilities**: In order to migrate terabytes of data into Azure Cosmos DB as quickly as possible, and to effectively consume the entire provisioned throughput, the migration clients should have the ability to scale out indefinitely.
-
-* **Lack of progress tracking and check-pointing**: It is important to track the migration progress and have check-pointing while migrating large data sets. Otherwise, any error that occurs during the migration will stop the migration, and you have to start the process from scratch. It would be not productive to restart the whole migration process when 99% of it has already completed.
-
-* **Lack of dead letter queue**: Within large data sets, in some cases there could be issues with parts of the source data. Additionally, there might be transient issues with the client or the network. Either of these cases should not cause the entire migration to fail. Even though most migration tools have robust retry capabilities that guard against intermittent issues, it is not always enough. For example, if less than 0.01% of the source data documents are greater than 2 MB in size, it will cause the document write to fail in Azure Cosmos DB. Ideally, it is useful for the migration tool to persist these ΓÇÿfailedΓÇÖ documents to another dead letter queue, which can be processed post migration.
-
-Many of these limitations are being fixed for tools like Azure Data factory, Azure Data Migration services.
-
-## Custom tool with bulk executor library
-
-The challenges described in the above section, can be solved by using a custom tool that can be easily scaled out across multiple instances and it is resilient to transient failures. Additionally, the custom tool can pause and resume migration at various checkpoints. Azure Cosmos DB already provides the [bulk executor library](./bulk-executor-overview.md) that incorporates some of these features. For example, the bulk executor library already has the functionality to handle transient errors and can scale out threads in a single node to consume about 500 K RUs per node. The bulk executor library also partitions the source dataset into micro-batches that are operated independently as a form of checkpointing.
-
-The custom tool uses the bulk executor library and supports scaling out across multiple clients and to track errors during the ingestion process. To use this tool, the source data should be partitioned into distinct files in Azure Data Lake Storage (ADLS) so that different migration workers can pick up each file and ingest them into Azure Cosmos DB. The custom tool makes use of a separate collection, which stores metadata about the migration progress for each individual source file in ADLS and tracks any errors associated with them.
-
-The following image describes the migration process using this custom tool. The tool is running on a set of virtual machines, and each virtual machine queries the tracking collection in Azure Cosmos DB to acquire a lease on one of the source data partitions. Once this is done, the source data partition is read by the tool and ingested into Azure Cosmos DB by using the bulk executor library. Next, the tracking collection is updated to record the progress of data ingestion and any errors encountered. After a data partition is processed, the tool attempts to query for the next available source partition. It continues to process the next source partition until all the data is migrated. The source code for the tool is available at the [Azure Cosmos DB bulk ingestion](https://github.com/Azure-Samples/azure-cosmosdb-bulkingestion) repo.
-
-
-
-
-
-
-The tracking collection contains documents as shown in the following example. You will see such documents one for each partition in the source data. Each document contains the metadata for the source data partition such as its location, migration status, and errors (if any):
-
-```json
-{
- "owner": "25812@bulkimporttest07",
- "jsonStoreEntityImportResponse": {
- "numberOfDocumentsReceived": 446688,
- "isError": false,
- "totalRequestUnitsConsumed": 3950252.2800000003,
- "errorInfo": [],
- "totalTimeTakenInSeconds": 188,
- "numberOfDocumentsImported": 446688
- },
- "storeType": "AZURE_BLOB",
- "name": "sourceDataPartition",
- "location": "sourceDataPartitionLocation",
- "id": "sourceDataPartitionId",
- "isInProgress": false,
- "operation": "unpartitioned-writes",
- "createDate": {
- "seconds": 1561667225,
- "nanos": 146000000
- },
- "completeDate": {
- "seconds": 1561667515,
- "nanos": 180000000
- },
- "isComplete": true
-}
-```
-
-
-## Prerequisites for data migration
-
-Before the data migration starts, there are a few prerequisites to consider:
-
-#### Estimate the data size:
-
-The source data size may not exactly map to the data size in Azure Cosmos DB. A few sample documents from the source can be inserted to check their data size in Azure Cosmos DB. Depending on the sample document size, the total data size in Azure Cosmos DB post-migration, can be estimated.
-
-For example, if each document after migration in Azure Cosmos DB is around 1 KB and if there are around 60 billion documents in the source dataset, it would mean that the estimated size in Azure Cosmos DB would be close to 60 TB.
-
-
-
-#### Pre-create containers with enough RUs:
-
-Although Azure Cosmos DB scales out storage automatically, it is not advisable to start from the smallest container size. Smaller containers have lower throughput availability, which means that the migration would take much longer to complete. Instead, it is useful to create the containers with the final data size (as estimated in the previous step) and make sure that the migration workload is fully consuming the provisioned throughput.
-
-In the previous step. since the data size was estimated to be around 60 TB, a container of at least 2.4 M RUs is required to accommodate the entire dataset.
-
-
-
-#### Estimate the migration speed:
-
-Assuming that the migration workload can consume the entire provisioned throughput, the provisioned throughout would provide an estimation of the migration speed. Continuing the previous example, 5 RUs are required for writing a 1-KB document to Azure Cosmos DB SQL API account. 2.4 million RUs would allow a transfer of 480,000 documents per second (or 480 MB/s). This means that the complete migration of 60 TB will take 125,000 seconds or about 34 hours.
-
-In case you want the migration to be completed within a day, you should increase the provisioned throughput to 5 million RUs.
-
-
-
-#### Turn off the indexing:
-
-Since the migration should be completed as soon as possible, it is advisable to minimize time and RUs spent on creating indexes for each of the documents ingested. Azure Cosmos DB automatically indexes all properties, it is worthwhile to minimize indexing to a selected few terms or turn it off completely for the course of migration. You can turn off the containerΓÇÖs indexing policy by changing the indexingMode to none as shown below:
-
-
-```
- {
- "indexingMode": "none"
- }
-```
-
-
-After the migration is complete, you can update the indexing.
-
-## Migration process
-
-After the prerequisites are completed, you can migrate data with the following steps:
-
-1. First import the data from source to Azure Blob Storage. To increase the speed of migration, it is helpful to parallelize across distinct source partitions. Before starting the migration, the source data set should be partitioned into files with size around 200 MB size.
-
-2. The bulk executor library can scale up, to consume 500,000 RUs in a single client VM. Since the available throughput is 5 million RUs, 10 Ubuntu 16.04 VMs (Standard_D32_v3) should be provisioned in the same region where your Azure Cosmos database is located. You should prepare these VMs with the migration tool and its settings file.
-
-3. Run the queue step on one of the client virtual machines. This step creates the tracking collection, which scans the ADLS container and creates a progress-tracking document for each of the source data setΓÇÖs partition files.
-
-4. Next, run the import step on all the client VMs. Each of the clients can take ownership on a source partition and ingest its data into Azure Cosmos DB. Once itΓÇÖs completed and its status is updated in the tracking collection, the clients can then query for the next available source partition in the tracking collection.
-
-5. This process continues until the entire set of source partitions were ingested. Once all the source partitions are processed, the tool should be rerun on the error-correction mode on the same tracking collection. This step is required to identify the source partitions that should to be re-processed due to errors.
-
-6. Some of these errors could be due to incorrect documents in the source data. These should be identified and fixed. Next, you should rerun the import step on the failed partitions to reingest them.
-
-Once the migration is completed, you can validate that the document count in Azure Cosmos DB is same as the document count in the source database. In this example, the total size in Azure Cosmos DB turned out to 65 terabytes. Post migration, indexing can be selectively turned on and the RUs can be lowered to the level required by the workloadΓÇÖs operations.
-
-## Next steps
-
-* Learn more by trying out the sample applications consuming the bulk executor library in [.NET](bulk-executor-dot-net.md) and [Java](bulk-executor-java.md).
-* The bulk executor library is integrated into the Cosmos DB Spark connector, to learn more, see [Azure Cosmos DB Spark connector](./create-sql-api-spark.md) article.
-* Contact the Azure Cosmos DB product team by opening a support ticket under the "General Advisory" problem type and "Large (TB+) migrations" problem subtype for additional help with large scale migrations.
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate.md
+
+ Title: Migrate hundreds of terabytes of data into Azure Cosmos DB
+description: This doc describes how you can migrate 100s of terabytes of data into Azure Cosmos DB
+++++++ Last updated : 08/26/2021++
+# Migrate hundreds of terabytes of data into Azure Cosmos DB
+
+Azure Cosmos DB can store terabytes of data. You can perform a large-scale data migration to move your production workload to Azure Cosmos DB. This article describes the challenges involved in moving large-scale data to Azure Cosmos DB and introduces you to the tool that helps with the challenges and migrates data to Azure Cosmos DB. In this case study, the customer used the Azure Cosmos DB API for NoSQL.
+
+Before you migrate the entire workload to Azure Cosmos DB, you can migrate a subset of data to validate some of the aspects like partition key choice, query performance, and data modeling. After you validate the proof of concept, you can move the entire workload to Azure Cosmos DB.
+
+## Tools for data migration
+
+Azure Cosmos DB migration strategies currently differ based on the API choice and the size of the data. To migrate smaller datasets (for validating data modeling, query performance, partition key choice, and so on), you can choose the [Data Migration Tool](import-data.md) or [Azure Data Factory's Azure Cosmos DB connector](../data-factory/connector-azure-cosmos-db.md). If you are familiar with Spark, you can also choose to use the [Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md) to migrate data.
+
+## Challenges for large-scale migrations
+
+The existing tools for migrating data to Azure Cosmos DB have some limitations that become especially apparent at large scales:
+
+ * **Limited scale out capabilities**: In order to migrate terabytes of data into Azure Cosmos DB as quickly as possible, and to effectively consume the entire provisioned throughput, the migration clients should have the ability to scale out indefinitely.
+
+* **Lack of progress tracking and check-pointing**: It is important to track the migration progress and have check-pointing while migrating large data sets. Otherwise, any error that occurs during the migration will stop the migration, and you have to start the process from scratch. It would not be productive to restart the whole migration process when 99% of it has already completed.
+
+* **Lack of dead letter queue**: Within large data sets, in some cases there could be issues with parts of the source data. Additionally, there might be transient issues with the client or the network. Either of these cases should not cause the entire migration to fail. Even though most migration tools have robust retry capabilities that guard against intermittent issues, it is not always enough. For example, if less than 0.01% of the source data documents are greater than 2 MB in size, it will cause the document write to fail in Azure Cosmos DB. Ideally, it is useful for the migration tool to persist these 'failed' documents to another dead letter queue, which can be processed post migration.
+
+Many of these limitations are being fixed for tools like Azure Data Factory and Azure Data Migration services.
+
+## Custom tool with bulk executor library
+
+The challenges described in the above section can be solved by using a custom tool that can be easily scaled out across multiple instances and is resilient to transient failures. Additionally, the custom tool can pause and resume migration at various checkpoints. Azure Cosmos DB already provides the [bulk executor library](./bulk-executor-overview.md) that incorporates some of these features. For example, the bulk executor library already has the functionality to handle transient errors and can scale out threads in a single node to consume about 500 K RUs per node. The bulk executor library also partitions the source dataset into micro-batches that are operated independently as a form of checkpointing.
+
+The custom tool uses the bulk executor library and supports scaling out across multiple clients and tracking errors during the ingestion process. To use this tool, the source data should be partitioned into distinct files in Azure Data Lake Storage (ADLS) so that different migration workers can pick up each file and ingest them into Azure Cosmos DB. The custom tool makes use of a separate collection, which stores metadata about the migration progress for each individual source file in ADLS and tracks any errors associated with them.
+
+The following image describes the migration process using this custom tool. The tool is running on a set of virtual machines, and each virtual machine queries the tracking collection in Azure Cosmos DB to acquire a lease on one of the source data partitions. Once this is done, the source data partition is read by the tool and ingested into Azure Cosmos DB by using the bulk executor library. Next, the tracking collection is updated to record the progress of data ingestion and any errors encountered. After a data partition is processed, the tool attempts to query for the next available source partition. It continues to process the next source partition until all the data is migrated. The source code for the tool is available at the [Azure Cosmos DB bulk ingestion](https://github.com/Azure-Samples/azure-cosmosdb-bulkingestion) repo.
+
+
+
+
+
+
+The tracking collection contains documents like the one shown in the following example. There is one such document for each partition in the source data. Each document contains the metadata for the source data partition, such as its location, migration status, and errors (if any):
+
+```json
+{
+ "owner": "25812@bulkimporttest07",
+ "jsonStoreEntityImportResponse": {
+ "numberOfDocumentsReceived": 446688,
+ "isError": false,
+ "totalRequestUnitsConsumed": 3950252.2800000003,
+ "errorInfo": [],
+ "totalTimeTakenInSeconds": 188,
+ "numberOfDocumentsImported": 446688
+ },
+ "storeType": "AZURE_BLOB",
+ "name": "sourceDataPartition",
+ "location": "sourceDataPartitionLocation",
+ "id": "sourceDataPartitionId",
+ "isInProgress": false,
+ "operation": "unpartitioned-writes",
+ "createDate": {
+ "seconds": 1561667225,
+ "nanos": 146000000
+ },
+ "completeDate": {
+ "seconds": 1561667515,
+ "nanos": 180000000
+ },
+ "isComplete": true
+}
+```
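+
+Based on the document shape above, the following is a minimal Python sketch (not the published tool's code) of how a migration worker could claim the next unprocessed source partition from the tracking collection. The endpoint, key, database and container names, and the assumption that unclaimed partitions carry `isInProgress` and `isComplete` set to `false` are illustrative placeholders; the actual tool also uses the bulk executor library and optimistic concurrency, which this sketch omits.
+
+```python
+# Hedged sketch: claim one unprocessed source partition from the tracking collection.
+# The endpoint, key, and database/container names are placeholders.
+from azure.cosmos import CosmosClient
+
+client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
+tracking = client.get_database_client("migrationdb").get_container_client("tracking")
+
+# Assumes unclaimed partitions have both flags set to false.
+query = "SELECT * FROM c WHERE c.isInProgress = false AND c.isComplete = false"
+candidate = next(iter(tracking.query_items(query=query, enable_cross_partition_query=True)), None)
+
+if candidate:
+    candidate["isInProgress"] = True
+    candidate["owner"] = "worker-01"  # hypothetical worker identifier
+    # A production tool would pass the document's _etag as a precondition to avoid
+    # two workers claiming the same partition; omitted here for brevity.
+    tracking.replace_item(item=candidate["id"], body=candidate)
+    # ...read the file at candidate["location"] and bulk ingest it into the target container...
+```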
+
+
+## Prerequisites for data migration
+
+Before the data migration starts, there are a few prerequisites to consider:
+
+#### Estimate the data size:
+
+The source data size may not exactly map to the data size in Azure Cosmos DB. You can insert a few sample documents from the source to check their data size in Azure Cosmos DB. Based on the sample document size, you can estimate the total data size in Azure Cosmos DB after the migration.
+
+For example, if each document after migration in Azure Cosmos DB is around 1 KB and if there are around 60 billion documents in the source dataset, it would mean that the estimated size in Azure Cosmos DB would be close to 60 TB.
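+
+As a quick back-of-the-envelope check, the same estimate can be written out as follows (the document count and average size are the example figures above, not measured values):
+
+```python
+# Back-of-the-envelope size estimate using the example figures above.
+doc_count = 60_000_000_000                                    # ~60 billion source documents
+avg_doc_size_kb = 1                                           # average migrated document size in Azure Cosmos DB
+estimated_tb = doc_count * avg_doc_size_kb / 1_000_000_000    # KB -> TB (decimal units)
+print(f"Estimated post-migration size: ~{estimated_tb:.0f} TB")  # ~60 TB
+```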
+
+
+
+#### Pre-create containers with enough RUs:
+
+Although Azure Cosmos DB scales out storage automatically, it is not advisable to start from the smallest container size. Smaller containers have lower throughput availability, which means that the migration would take much longer to complete. Instead, create the containers with throughput sized for the final data size (as estimated in the previous step) and make sure that the migration workload fully consumes the provisioned throughput.
+
+In the previous step, the data size was estimated to be around 60 TB, so a container of at least 2.4 million RUs is required to accommodate the entire dataset.
+
+
+
+#### Estimate the migration speed:
+
+Assuming that the migration workload can consume the entire provisioned throughput, the provisioned throughput provides an estimate of the migration speed. Continuing the previous example, 5 RUs are required to write a 1-KB document to an Azure Cosmos DB API for NoSQL account. 2.4 million RUs would allow a transfer of 480,000 documents per second (or 480 MB/s). This means that the complete migration of 60 TB takes 125,000 seconds, or about 34 hours.
+
+If you want the migration to be completed within a day, you should increase the provisioned throughput to 5 million RUs.
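+
+The throughput arithmetic above can be sketched as follows; the 5 RU cost per 1-KB write is the example figure from this article, and actual RU charges depend on your documents and indexing policy:
+
+```python
+# Estimate migration duration from provisioned throughput, per the example above.
+ru_per_write = 5                   # example RU cost of writing a 1-KB document
+total_docs = 60_000_000_000        # ~60 TB at ~1 KB per document
+
+for provisioned_rus in (2_400_000, 5_000_000):
+    docs_per_second = provisioned_rus / ru_per_write
+    hours = total_docs / docs_per_second / 3600
+    print(f"{provisioned_rus:>9,} RU/s -> {docs_per_second:,.0f} docs/s, ~{hours:.1f} hours")
+# 2,400,000 RU/s ->   480,000 docs/s, ~34.7 hours
+# 5,000,000 RU/s -> 1,000,000 docs/s, ~16.7 hours
+```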
+
+
+
+#### Turn off the indexing:
+
+Since the migration should be completed as soon as possible, it is advisable to minimize the time and RUs spent on creating indexes for each of the ingested documents. Because Azure Cosmos DB automatically indexes all properties, it is worthwhile to limit indexing to a selected few properties or to turn it off completely for the duration of the migration. You can turn off the container's indexing by changing the indexingMode to none, as shown below:
+
+
+```json
+ {
+ "indexingMode": "none"
+ }
+```
+
+
+After the migration is complete, you can update the indexing policy.
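+
+As one way to apply this change programmatically, the following sketch uses the Python `azure-cosmos` SDK to turn indexing off before the migration and restore it afterwards. The account details, database, container, and `/partitionKey` path are placeholders; you can also make the same change in the Azure portal.
+
+```python
+# Hedged sketch: disable indexing for the migration and restore it afterwards.
+# The account endpoint, key, names, and partition key path are placeholders.
+from azure.cosmos import CosmosClient, PartitionKey
+
+client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
+database = client.get_database_client("<database>")
+
+# Before the migration: no indexing at all.
+database.replace_container(
+    "<container>",
+    partition_key=PartitionKey(path="/partitionKey"),
+    indexing_policy={"indexingMode": "none", "automatic": False},
+)
+
+# After the migration: restore consistent indexing (add included/excluded paths as needed).
+database.replace_container(
+    "<container>",
+    partition_key=PartitionKey(path="/partitionKey"),
+    indexing_policy={"indexingMode": "consistent", "automatic": True},
+)
+```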
+
+## Migration process
+
+After the prerequisites are completed, you can migrate data with the following steps:
+
+1. First, import the data from the source to Azure Blob Storage. To increase the speed of migration, it is helpful to parallelize across distinct source partitions. Before starting the migration, the source data set should be partitioned into files of around 200 MB each.
+
+2. The bulk executor library can scale up to consume 500,000 RUs in a single client VM. Since the available throughput is 5 million RUs, 10 Ubuntu 16.04 VMs (Standard_D32_v3) should be provisioned in the same region where your Azure Cosmos DB database is located. You should prepare these VMs with the migration tool and its settings file.
+
+3. Run the queue step on one of the client virtual machines. This step creates the tracking collection, scans the ADLS container, and creates a progress-tracking document for each of the source data set's partition files.
+
+4. Next, run the import step on all the client VMs. Each client can take ownership of a source partition and ingest its data into Azure Cosmos DB. Once the partition is completed and its status is updated in the tracking collection, the client can query for the next available source partition in the tracking collection.
+
+5. This process continues until the entire set of source partitions has been ingested. Once all the source partitions are processed, the tool should be rerun in error-correction mode on the same tracking collection. This step identifies the source partitions that need to be reprocessed because of errors.
+
+6. Some of these errors could be due to incorrect documents in the source data. These should be identified and fixed (see the query sketch after this list for one way to locate the failed partitions). Next, rerun the import step on the failed partitions to reingest them.
+
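+A hedged sketch of how the failed partitions could be located, using the tracking-document fields shown earlier (`jsonStoreEntityImportResponse.isError` and `errorInfo`); the account details and names are placeholders, and this is not the tool's own error-correction code:
+
+```python
+# Hedged sketch: list source partitions whose import reported errors.
+from azure.cosmos import CosmosClient
+
+client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
+tracking = client.get_database_client("migrationdb").get_container_client("tracking")
+
+query = (
+    "SELECT c.id, c.location, c.jsonStoreEntityImportResponse.errorInfo "
+    "FROM c WHERE c.jsonStoreEntityImportResponse.isError = true"
+)
+for failed in tracking.query_items(query=query, enable_cross_partition_query=True):
+    print(failed["id"], failed["location"], failed["errorInfo"])
+```
+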
+Once the migration is completed, you can validate that the document count in Azure Cosmos DB is the same as the document count in the source database. In this example, the total size in Azure Cosmos DB turned out to be 65 terabytes. Post-migration, indexing can be selectively turned on and the RUs can be lowered to the level required by the workload's operations.
+
+## Next steps
+
+* Learn more by trying out the sample applications consuming the bulk executor library in [.NET](nosql/bulk-executor-dotnet.md) and [Java](bulk-executor-java.md).
+* The bulk executor library is integrated into the Azure Cosmos DB Spark connector. To learn more, see the [Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md) article.
+* Contact the Azure Cosmos DB product team by opening a support ticket under the "General Advisory" problem type and "Large (TB+) migrations" problem subtype for additional help with large scale migrations.
+* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Migration Choices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migration-choices.md
+
+ Title: Azure Cosmos DB Migration options
+description: This doc describes the various options to migrate your on-premises or cloud data to Azure Cosmos DB
++++++ Last updated : 04/02/2022+
+# Options to migrate your on-premises or cloud data to Azure Cosmos DB
+
+You can load data from various data sources to Azure Cosmos DB. Since Azure Cosmos DB supports multiple APIs, the targets can be any of the existing APIs. The following are some scenarios where you migrate data to Azure Cosmos DB:
+
+* Move data from one Azure Cosmos DB container to another container in the same database or a different database.
+* Move data from dedicated containers to shared database containers.
+* Move data from an Azure Cosmos DB account located in region1 to another Azure Cosmos DB account in the same or a different region.
+* Move data from a source such as Azure Blob storage, a JSON file, an Oracle database, Couchbase, or DynamoDB to Azure Cosmos DB.
+
+In order to support migration paths from the various sources to the different Azure Cosmos DB APIs, there are multiple solutions that provide specialized handling for each migration path. This document lists the available solutions and describes their advantages and limitations.
+
+## Factors affecting the choice of migration tool
+
+The following factors determine the choice of the migration tool:
+
+* **Online vs offline migration**: Many migration tools provide a path to do a one-time migration only. This means that the applications accessing the database might experience a period of downtime. Some migration solutions provide a way to do a live migration where there is a replication pipeline set up between the source and the target.
+
+* **Data source**: The existing data can be in various data sources like Oracle, DB2, Datastax Cassandra, Azure SQL Database, PostgreSQL, etc. The data can also be in an existing Azure Cosmos DB account, and the intent of the migration can be to change the data model or repartition the data in a container with a different partition key.
+
+* **Azure Cosmos DB API**: For the API for NoSQL in Azure Cosmos DB, there are a variety of tools developed by the Azure Cosmos DB team which aid in the different migration scenarios. All of the other APIs have their own specialized set of tools developed and maintained by the community. Since Azure Cosmos DB supports these APIs at a wire protocol level, these tools should work as-is while migrating data into Azure Cosmos DB too. However, they might require custom handling for throttles as this concept is specific to Azure Cosmos DB.
+
+* **Size of data**: Most migration tools work very well for smaller datasets. When the data set exceeds a few hundred gigabytes, the choices of migration tools are limited.
+
+* **Expected migration duration**: Migrations can be configured to take place at a slow, incremental pace that consumes less throughput or can consume the entire throughput provisioned on the target Azure Cosmos DB container and complete the migration in less time.
+
+## Azure Cosmos DB API for NoSQL
+
+If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
+* If you are migrating from a vCores- or server-based platform and you need guidance on estimating request units, consider reading our [guide to estimating RU/s based on vCores](convert-vcore-to-request-unit.md).
+
+>[!IMPORTANT]
+> The [Custom Migration Service using ChangeFeed](https://github.com/Azure-Samples/azure-cosmosdb-live-data-migrator) is an open-source tool for live container migrations that implements change feed and bulk support. However, please note that the user interface application code for this tool is not supported or actively maintained by Microsoft. For Azure Cosmos DB API for NoSQL live container migrations, we recommend using the Spark Connector + Change Feed as illustrated in the [sample](https://github.com/Azure/azure-sdk-for-jav), which is fully supported by Microsoft.
+
+|Migration type|Solution|Supported sources|Supported targets|Considerations|
+||||||
+|Offline|[Data Migration Tool](import-data.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB for NoSQL<br/>&bull;MongoDB<br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;AWS DynamoDB<br/>&bull;Azure Blob Storage|&bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB Tables API<br/>&bull;JSON Files |&bull; Easy to set up and supports multiple sources. <br/>&bull; Not suitable for large datasets.|
+|Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB for MongoDB<br/>&bull;MongoDB <br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;Azure Blob Storage <br/> <br/>See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources.|&bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB for MongoDB<br/>&bull;JSON Files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets. |&bull; Easy to set up and supports multiple sources.<br/>&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Lack of checkpointing - It means that if an issue occurs during the course of migration, you need to restart the whole migration process.<br/>&bull; Lack of a dead letter queue - It means that a few erroneous files can stop the entire migration process.|
+|Offline|[Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md)|Azure Cosmos DB for NoSQL. <br/><br/>You can use other sources with additional connectors from the Spark ecosystem.| Azure Cosmos DB for NoSQL. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Needs a custom Spark setup. <br/>&bull; Spark is sensitive to schema inconsistencies and this can be a problem during migration. |
+|Online|[Azure Cosmos DB Spark connector + Change Feed](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/DatabricksLiveContainerMigration)|Azure Cosmos DB for NoSQL. <br/><br/>Uses Azure Cosmos DB Change Feed to stream all historic data as well as live updates.| Azure Cosmos DB for NoSQL. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Needs a custom Spark setup. <br/>&bull; Spark is sensitive to schema inconsistencies and this can be a problem during migration. |
+|Offline|[Custom tool with Azure Cosmos DB bulk executor library](migrate.md)| The source depends on your custom code | Azure Cosmos DB for NoSQL| &bull; Provides checkpointing, dead-lettering capabilities which increases migration resiliency. <br/>&bull; Suitable for very large datasets (10 TB+). <br/>&bull; Requires custom setup of this tool running as an App Service. |
+|Online|[Azure Cosmos DB Functions + ChangeFeed API](change-feed-functions.md)| Azure Cosmos DB for NoSQL | Azure Cosmos DB for NoSQL| &bull; Easy to set up. <br/>&bull; Works only if the source is an Azure Cosmos DB container. <br/>&bull; Not suitable for large datasets. <br/>&bull; Does not capture deletes from the source container. |
+|Online|[Custom Migration Service using ChangeFeed](https://github.com/Azure-Samples/azure-cosmosdb-live-data-migrator)| Azure Cosmos DB for NoSQL | Azure Cosmos DB for NoSQL| &bull; Provides progress tracking. <br/>&bull; Works only if the source is an Azure Cosmos DB container. <br/>&bull; Works for larger datasets as well.<br/>&bull; Requires the user to set up an App Service to host the Change feed processor. <br/>&bull; Does not capture deletes from the source container.|
+|Online|[Striim](cosmosdb-sql-api-migrate-data-striim.md)| &bull;Oracle <br/>&bull;Apache Cassandra<br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported sources. |&bull;Azure Cosmos DB for NoSQL <br/>&bull; Azure Cosmos DB for Cassandra<br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported targets. | &bull; Works with a large variety of sources like Oracle, DB2, SQL Server.<br/>&bull; Easy to build ETL pipelines and provides a dashboard for monitoring. <br/>&bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.|
+
+## Azure Cosmos DB API for MongoDB
+
+Follow the [pre-migration guide](mongodb/pre-migration-steps.md) to plan your migration.
+* If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
+* If you are migrating from a vCores- or server-based platform and you need guidance on estimating request units, consider reading our [guide to estimating RU/s based on vCores](convert-vcore-to-request-unit.md).
+
+When you are ready to migrate, you can find detailed guidance on migration tools below:
+* [Offline migration using MongoDB native tools](mongodb/tutorial-mongotools-cosmos-db.md)
+* [Offline migration using Azure database migration service (DMS)](../dms/tutorial-mongodb-cosmos-db.md)
+* [Online migration using Azure database migration service (DMS)](../dms/tutorial-mongodb-cosmos-db-online.md)
+* [Offline/online migration using Azure Databricks and Spark](mongodb/migrate-databricks.md)
+
+Then, follow our [post-migration guide](mongodb/post-migration-optimization.md) to optimize your Azure Cosmos DB data estate once you have migrated.
+
+A summary of migration pathways from your current solution to Azure Cosmos DB for MongoDB is provided below:
+
+|Migration type|Solution|Supported sources|Supported targets|Considerations|
+||||||
+|Online|[Azure Database Migration Service](../dms/tutorial-mongodb-cosmos-db-online.md)| MongoDB|Azure Cosmos DB for MongoDB |&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets and takes care of replicating live changes. <br/>&bull; Works only with other MongoDB sources.|
+|Offline|[Azure Database Migration Service](../dms/tutorial-mongodb-cosmos-db-online.md)| MongoDB| Azure Cosmos DB for MongoDB| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets and takes care of replicating live changes. <br/>&bull; Works only with other MongoDB sources.|
+|Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db-mongodb-api.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB for MongoDB <br/>&bull;MongoDB<br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;Azure Blob Storage <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources. | &bull;Azure Cosmos DB for NoSQL<br/>&bull;Azure Cosmos DB for MongoDB <br/>&bull; JSON files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets.| &bull; Easy to set up and supports multiple sources. <br/>&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Lack of checkpointing means that any issue during the course of migration would require a restart of the whole migration process.<br/>&bull; Lack of a dead letter queue would mean that a few erroneous files could stop the entire migration process. <br/>&bull; Needs custom code to increase read throughput for certain data sources.|
+|Offline|Existing Mongo Tools ([mongodump](mongodb/tutorial-mongotools-cosmos-db.md#mongodumpmongorestore), [mongorestore](mongodb/tutorial-mongotools-cosmos-db.md#mongodumpmongorestore), [Studio3T](mongodb/connect-using-mongochef.md))|&bull;MongoDB<br/>&bull;Azure Cosmos DB for MongoDB<br/> | Azure Cosmos DB for MongoDB| &bull; Easy to set up and integrate. <br/>&bull; Needs custom handling for throttles.|
+
+## Azure Cosmos DB API for Cassandra
+
+If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
+
+|Migration type|Solution|Supported sources|Supported targets|Considerations|
+||||||
+|Offline|[cqlsh COPY command](cassandr#migrate-data-by-using-the-cqlsh-copy-command)|CSV Files | Azure Cosmos DB API for Cassandra| &bull; Easy to set up. <br/>&bull; Not suitable for large datasets. <br/>&bull; Works only when the source is a Cassandra table.|
+|Offline|[Copy table with Spark](cassandr#migrate-data-by-using-spark) | &bull;Apache Cassandra<br/> | Azure Cosmos DB API for Cassandra | &bull; Can make use of Spark capabilities to parallelize transformation and ingestion. <br/>&bull; Needs configuration with a custom retry policy to handle throttles.|
+|Online|[Dual-write proxy + Spark](cassandr)| &bull;Apache Cassandra<br/>|&bull;Azure Cosmos DB API for Cassandra <br/>| &bull; Supports larger datasets, but careful attention required for setup and validation. <br/>&bull; Open-source tools, no purchase required.|
+|Online|[Striim (from Oracle DB/Apache Cassandra)](cassandr)| &bull;Oracle<br/>&bull;Apache Cassandra<br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported sources.|&bull;Azure Cosmos DB API for NoSQL<br/>&bull;Azure Cosmos DB API for Cassandra <br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported targets.| &bull; Works with a large variety of sources like Oracle, DB2, SQL Server. <br/>&bull; Easy to build ETL pipelines and provides a dashboard for monitoring. <br/>&bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.|
+|Online|[Arcion (from Oracle DB/Apache Cassandra)](cassandr)|&bull;Oracle<br/>&bull;Apache Cassandra<br/><br/>See the [Arcion website](https://www.arcion.io/) for other supported sources. |Azure Cosmos DB API for Cassandra. <br/><br/>See the [Arcion website](https://www.arcion.io/) for other supported targets. | &bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.|
+
+## Other APIs
+
+For APIs other than the API for NoSQL, API for MongoDB and the API for Cassandra, there are various tools supported by each of the API's existing ecosystems.
+
+**API for Table**
+
+* [Data Migration Tool](table/import.md#data-migration-tool)
+
+**API for Gremlin**
+
+* [Graph bulk executor library](gremlin/bulk-executor-dotnet.md)
+* [Gremlin Spark](https://github.com/Azure/azure-cosmosdb-spark/blob/2.4/samples/graphframes/main.scala)
+
+## Next steps
+
+* Trying to do capacity planning for a migration to Azure Cosmos DB?
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+* Learn more by trying out the sample applications consuming the bulk executor library in [.NET](nosql/bulk-executor-dotnet.md) and [Java](bulk-executor-java.md).
+* The bulk executor library is integrated into the Azure Cosmos DB Spark connector. To learn more, see the [Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md) article.
+* Contact the Azure Cosmos DB product team by opening a support ticket under the "General Advisory" problem type and "Large (TB+) migrations" problem subtype for additional help with large scale migrations.
cosmos-db Change Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/change-log.md
Title: Change log for Azure CosmosDB API for MongoDB description: Notifies our customers of any minor/medium updates that were pushed -+ Last updated 06/22/2022
-# Change log for Azure Cosmos DB API for MongoDB
+# Change log for Azure Cosmos DB for MongoDB
The Change log for the API for MongoDB is meant to inform you about our feature updates. This document covers more granular updates and complements [Azure Updates](https://azure.microsoft.com/updates/).
-## Cosmos DB's API for MongoDB updates
+## Azure Cosmos DB's API for MongoDB updates
### Azure Data Studio MongoDB extension for Azure Cosmos DB (Preview) You can now use the free and lightweight tool feature to manage and query your MongoDB resources using mongo shell. Azure Data Studio MongoDB extension for Azure Cosmos DB allows you to manage multiple accounts all in one view by
You can now use the free and lightweight tool feature to manage and query your M
[Learn more](https://aka.ms/cosmosdb-ads)
-### Linux emulator with Azure Cosmos DB API for MongoDB
+### Linux emulator with Azure Cosmos DB for MongoDB
The Azure Cosmos DB Linux emulator with API for MongoDB support provides a local environment that emulates the Azure Cosmos DB service for development purposes on Linux and macOS. Using the emulator, you can develop and test your MongoDB applications locally, without creating an Azure subscription or incurring any costs. [Learn more](https://aka.ms/linux-emulator-mongo) ### 16-MB limit per document in API for MongoDB (Preview)
-The 16-MB document limit in the Azure Cosmos DB API for MongoDB provides developers the flexibility to store more data per document. This ease-of-use feature will speed up your development process in these cases.
+The 16-MB document limit in the Azure Cosmos DB for MongoDB provides developers the flexibility to store more data per document. This ease-of-use feature will speed up your development process in these cases.
-[Learn more](./mongodb-introduction.md)
+[Learn more](./introduction.md)
-### Azure Cosmos DB API for MongoDB data plane Role-Based Access Control (RBAC) (Preview)
+### Azure Cosmos DB for MongoDB data plane Role-Based Access Control (RBAC) (Preview)
The API for MongoDB now offers a built-in role-based access control (RBAC) that allows you to authorize your data requests with a fine-grained, role-based permission model. Using this role-based access control (RBAC) allows you access with more options for control, security, and auditability of your database account data. [Learn more](./how-to-setup-rbac.md)
+### Azure Cosmos DB for MongoDB supports version 4.2
-### Azure Cosmos DB API for MongoDB supports version 4.2
-The Azure Cosmos DB API for MongoDB version 4.2 includes new aggregation functionality and improved security features such as client-side field encryption. These features help you accelerate development by applying the new functionality instead of developing it yourself.
+The Azure Cosmos DB for MongoDB version 4.2 includes new aggregation functionality and improved security features such as client-side field encryption. These features help you accelerate development by applying the new functionality instead of developing it yourself.
[Learn more](./feature-support-42.md) - ### Support $expr in Mongo 3.6+ `$expr` allows the use of [aggregation expressions](https://www.mongodb.com/docs/manual/meta/aggregation-quick-reference/#std-label-aggregation-expressions) within the query language. `$expr` can build query expressions that compare fields from the same document in a `$match` stage.
The Azure Cosmos DB API for MongoDB version 4.2 includes new aggregation functio
## Next steps -- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB API for MongoDB.-- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB API for MongoDB.-- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB API for MongoDB.
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB.
- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md). - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
cosmos-db Change Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/change-streams.md
Title: Change streams in Azure Cosmos DB's API for MongoDB description: Learn how to use change streams in Azure Cosmos DB's API for MongoDB to get the changes made to your data. -+ Last updated 03/02/2021 ms.devlang: csharp, javascript-+ # Change streams in Azure Cosmos DB's API for MongoDB [Change feed](../change-feed.md) support in Azure Cosmos DB's API for MongoDB is available by using the change streams API. By using the change streams API, your applications can get the changes made to the collection or to the items in a single shard. Later you can take further actions based on the results. Changes to the items in the collection are captured in the order of their modification time and the sort order is guaranteed per shard key.
The following limitations are applicable when using change streams:
Due to these limitations, the $match stage, $project stage, and fullDocument options are required as shown in the previous examples.
-Unlike the change feed in Azure Cosmos DB's SQL API, there is not a separate [Change Feed Processor Library](../change-feed-processor.md) to consume change streams or a need for a leases container. There is not currently support for [Azure Functions triggers](../change-feed-functions.md) to process change streams.
+Unlike the change feed in Azure Cosmos DB's API for NoSQL, there is not a separate [Change Feed Processor Library](../change-feed-processor.md) to consume change streams or a need for a leases container. There is not currently support for [Azure Functions triggers](../change-feed-functions.md) to process change streams.
## Error handling
The following error codes and messages are supported when using change streams:
## Next steps
-* [Use time to live to expire data automatically in Azure Cosmos DB's API for MongoDB](mongodb-time-to-live.md)
-* [Indexing in Azure Cosmos DB's API for MongoDB](mongodb-indexing.md)
+* [Use time to live to expire data automatically in Azure Cosmos DB's API for MongoDB](time-to-live.md)
+* [Indexing in Azure Cosmos DB's API for MongoDB](indexing.md)
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB API for MongoDB
-description: Azure CLI Samples for Azure Cosmos DB API for MongoDB
+ Title: Azure CLI Samples for Azure Cosmos DB for MongoDB
+description: Azure CLI Samples for Azure Cosmos DB for MongoDB
-+ Last updated 08/18/2022 -+
-# Azure CLI samples for Azure Cosmos DB API for MongoDB
+# Azure CLI samples for Azure Cosmos DB for MongoDB
-The following tables include links to sample Azure CLI scripts for the Azure Cosmos DB MongoDB API and to sample Azure CLI scripts that apply to all Cosmos DB APIs. Common samples are the same across all APIs.
+The following tables include links to sample Azure CLI scripts for the Azure Cosmos DB for MongoDB and to sample Azure CLI scripts that apply to all Azure Cosmos DB APIs. Common samples are the same across all APIs.
These samples require Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
-## MongoDB API Samples
+## API for MongoDB Samples
|Task | Description | |||
-| [Create an Azure Cosmos account, database and collection](../scripts/cli/mongodb/create.md)| Creates an Azure Cosmos DB account, database, and collection for MongoDB API. |
-| [Create a serverless Azure Cosmos account, database and collection](../scripts/cli/mongodb/serverless.md)| Creates a serverless Azure Cosmos DB account, database, and collection for MongoDB API. |
-| [Create an Azure Cosmos account, database with autoscale and two collections with shared throughput](../scripts/cli/mongodb/autoscale.md)| Creates an Azure Cosmos DB account, database with autoscale and two collections with shared throughput for MongoDB API. |
+| [Create an Azure Cosmos DB account, database and collection](../scripts/cli/mongodb/create.md)| Creates an Azure Cosmos DB account, database, and collection for API for MongoDB. |
+| [Create a serverless Azure Cosmos DB account, database and collection](../scripts/cli/mongodb/serverless.md)| Creates a serverless Azure Cosmos DB account, database, and collection for API for MongoDB. |
+| [Create an Azure Cosmos DB account, database with autoscale and two collections with shared throughput](../scripts/cli/mongodb/autoscale.md)| Creates an Azure Cosmos DB account, database with autoscale and two collections with shared throughput for API for MongoDB. |
| [Perform throughput operations](../scripts/cli/mongodb/throughput.md) | Read, update and migrate between autoscale and standard throughput on a database and collection.| | [Lock resources from deletion](../scripts/cli/mongodb/lock.md)| Prevent resources from being deleted with resource locks.| ||| ## Common API Samples
-These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core) API account, but these operations are identical across all database APIs in Cosmos DB.
+These samples apply to all Azure Cosmos DB APIs. These samples use an API for NoSQL account, but these operations are identical across all database APIs in Azure Cosmos DB.
|Task | Description | ||| | [Add or fail over regions](../scripts/cli/common/regions.md) | Add a region, change failover priority, trigger a manual failover.| | [Perform account key operations](../scripts/cli/common/keys.md) | List account keys, read-only keys, regenerate keys and list connection strings.|
-| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.|
-| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.|
-| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create an Azure Cosmos DB account with IP firewall configured.|
+| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create an Azure Cosmos DB account and secure with service-endpoints.|
+| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update an Azure Cosmos DB account to secure with service-endpoints when the subnet is eventually configured.|
| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.| |||
cosmos-db Compression Cost Savings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/compression-cost-savings.md
Title: Improve performance and optimize costs when upgrading to Azure Cosmos DB
description: Learn how upgrading your API for MongoDB account to versions 4.0+ saves you money on queries and storage. + Last updated 09/06/2022 # Improve performance and optimize costs when upgrading to Azure Cosmos DB API for MongoDB 4.0+ Azure Cosmos DB API for MongoDB introduced a new data compression algorithm in versions 4.0+ that saves up to 90% on RU and storage costs. Upgrading your database account to versions 4.0+ and following this guide will help you realize the maximum performance and cost improvements. ## How it works The API for MongoDB charges users based on how many [request units](../request-units.md) (RUs) are consumed for each operation. With the new compression format, a reduction in storage size and query size directly results in a reduction in RU usage, saving you money. Performance and costs are coupled in Cosmos DB.
-When [upgrading](upgrade-mongodb-version.md) from an API for MongoDB database account versions 3.6 or 3.2 to version 4.0 or greater, all new documents (data) written to that account will be stored in the improved compression format. Older documents, written before the account was upgraded, remain fully backwards compatible, but will remain stored in the older compression format.
+When [upgrading](upgrade-version.md) from an API for MongoDB database account versions 3.6 or 3.2 to version 4.0 or greater, all new documents (data) written to that account will be stored in the improved compression format. Older documents, written before the account was upgraded, remain fully backwards compatible, but will remain stored in the older compression format.
## Upgrading older documents When upgrading your database account to versions 4.0+, it's a good idea to consider upgrading your older documents as well. Doing so will provide you with efficiency improvements on your older data as well as new data that gets written to the account after the upgrade. The following steps upgrade your older documents to the new compression format:
-1. [Upgrade](upgrade-mongodb-version.md) your database account to 4.0 or higher. Any new data that's written to any collection in the account will be written in the new format. All formats are backwards compatible.
+1. [Upgrade](upgrade-version.md) your database account to 4.0 or higher. Any new data that's written to any collection in the account will be written in the new format. All formats are backwards compatible.
2. Update at least one field in each old document (from before the upgrade) to a new value or change the document in a different way, such as adding a new field. Don't rewrite the exact same document since the Cosmos DB optimizer will ignore it. 3. Repeat step two for each document. When a document is updated, it will be written in the new format. ## Next steps Learn more about upgrading and the API for MongoDB versions:
-* [Introduction to the API for MongoDB](mongodb-introduction.md)
-* [Upgrade guide](upgrade-mongodb-version.md)
+* [Introduction to the API for MongoDB](introduction.md)
+* [Upgrade guide](upgrade-version.md)
* [Version 4.2](feature-support-42.md)
cosmos-db Connect Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/connect-account.md
+
+ Title: Connect a MongoDB application to Azure Cosmos DB
+description: Learn how to connect a MongoDB app to Azure Cosmos DB by getting the connection string from Azure portal
++++++ Last updated : 08/26/2021+
+adobe-target: true
+adobe-target-activity: DocsExp-A/B-384740-MongoDB-2.8.2021
+adobe-target-experience: Experience B
+adobe-target-content: ./connect-mongodb-account-experimental
+
+# Connect a MongoDB application to Azure Cosmos DB
+
+Learn how to connect your MongoDB app to an Azure Cosmos DB account by using a MongoDB connection string. You can then use an Azure Cosmos DB database as the data store for your MongoDB app.
+
+This tutorial provides two ways to retrieve connection string information:
+
+- [The quickstart method](#get-the-mongodb-connection-string-by-using-the-quick-start), for use with .NET, Node.js, MongoDB Shell, Java, and Python drivers
+- [The custom connection string method](#get-the-mongodb-connection-string-to-customize), for use with other drivers
+
+## Prerequisites
+
+- An Azure account. If you don't have an Azure account, create a [free Azure account](https://azure.microsoft.com/free/) now.
+- An Azure Cosmos DB account. For instructions, see [Build a web app using Azure Cosmos DB's API for MongoDB and .NET SDK](create-mongodb-dotnet.md).
+
+## Get the MongoDB connection string by using the quick start
+
+1. In an Internet browser, sign in to the [Azure portal](https://portal.azure.com).
+2. In the **Azure Cosmos DB** blade, select the API.
+3. In the left pane of the account blade, click **Quick start**.
+4. Choose your platform (**.NET**, **Node.js**, **MongoDB Shell**, **Java**, **Python**). If you don't see your driver or tool listed, don't worry--we continuously document more connection code snippets. Please comment below on what you'd like to see. To learn how to craft your own connection, read [Get the account's connection string information](#get-the-mongodb-connection-string-to-customize).
+5. Copy and paste the code snippet into your MongoDB app.
+
+ :::image type="content" source="./media/connect-account/quickstart-blade.png" alt-text="Quick start blade":::
+
+## Get the MongoDB connection string to customize
+
+1. In an Internet browser, sign in to the [Azure portal](https://portal.azure.com).
+2. In the **Azure Cosmos DB** blade, select the API.
+3. In the left pane of the account blade, click **Connection String**.
+4. The **Connection String** blade opens. It has all the information necessary to connect to the account by using a driver for MongoDB, including a preconstructed connection string.
+
+ :::image type="content" source="./media/connect-account/connection-string-blade.png" alt-text="Connection String blade" lightbox= "./media/connect-account/connection-string-blade.png" :::
+
+## Connection string requirements
+
+> [!Important]
+> Azure Cosmos DB has strict security requirements and standards. Azure Cosmos DB accounts require authentication and secure communication via *TLS*.
+
+Azure Cosmos DB supports the standard MongoDB connection string URI format, with a couple of specific requirements: Azure Cosmos DB accounts require authentication and secure communication via TLS. So, the connection string format is:
+
+`mongodb://username:password@host:port/[database]?ssl=true`
+
+The values of this string are available in the **Connection String** blade shown earlier:
+
+* Username (required): Azure Cosmos DB account name.
+* Password (required): Azure Cosmos DB account password.
+* Host (required): FQDN of the Azure Cosmos DB account.
+* Port (required): 10255.
+* Database (optional): The database that the connection uses. If no database is provided, the default database is "test."
+* ssl=true (required)
+
+For example, consider the account shown in the **Connection String** blade. A valid connection string is:
+
+`mongodb://contoso123:0Fc3IolnL12312asdfawejunASDF@asdfYXX2t8a97kghVcUzcDv98hawelufhawefafnoQRGwNj2nMPL1Y9qsIr9Srdw==@contoso123.documents.azure.com:10255/mydatabase?ssl=true`
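+
+As an illustrative check of this format (not taken from this article), a Python client could connect with a string like the one above; the account name, password, database, and collection below are placeholders:
+
+```python
+# Hedged sketch: connect to an Azure Cosmos DB for MongoDB account with PyMongo.
+# The connection string is a placeholder in the format shown above.
+from pymongo import MongoClient
+
+connection_string = (
+    "mongodb://<account-name>:<password>@<account-name>.documents.azure.com:10255/"
+    "mydatabase?ssl=true"
+)
+client = MongoClient(connection_string)
+client.admin.command("ping")   # raises an error if the account is unreachable or credentials are wrong
+
+collection = client["mydatabase"]["mycollection"]
+collection.insert_one({"greeting": "hello from pymongo"})
+print(collection.count_documents({}))
+```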
+
+## Driver Requirements
+
+All drivers that support wire protocol version 3.4 or greater will support Azure Cosmos DB for MongoDB.
+
+Specifically, client drivers must support the Service Name Identification (SNI) TLS extension and/or the appName connection string option. If the `appName` parameter is provided, it must be included as found in the connection string value in the Azure portal.
+
+## Next steps
+
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB's API for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB's API for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB.
+- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Connect Mongodb Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/connect-mongodb-account.md
- Title: Connect a MongoDB application to Azure Cosmos DB
-description: Learn how to connect a MongoDB app to Azure Cosmos DB by getting the connection string from Azure portal
----- Previously updated : 08/26/2021-
-adobe-target: true
-adobe-target-activity: DocsExp-A/B-384740-MongoDB-2.8.2021
-adobe-target-experience: Experience B
-adobe-target-content: ./connect-mongodb-account-experimental
--
-# Connect a MongoDB application to Azure Cosmos DB
-
-Learn how to connect your MongoDB app to an Azure Cosmos DB by using a MongoDB connection string. You can then use an Azure Cosmos database as the data store for your MongoDB app.
-
-This tutorial provides two ways to retrieve connection string information:
--- [The quickstart method](#get-the-mongodb-connection-string-by-using-the-quick-start), for use with .NET, Node.js, MongoDB Shell, Java, and Python drivers-- [The custom connection string method](#get-the-mongodb-connection-string-to-customize), for use with other drivers-
-## Prerequisites
--- An Azure account. If you don't have an Azure account, create a [free Azure account](https://azure.microsoft.com/free/) now.-- A Cosmos account. For instructions, see [Build a web app using Azure Cosmos DB's API for MongoDB and .NET SDK](create-mongodb-dotnet.md).-
-## Get the MongoDB connection string by using the quick start
-
-1. In an Internet browser, sign in to the [Azure portal](https://portal.azure.com).
-2. In the **Azure Cosmos DB** blade, select the API.
-3. In the left pane of the account blade, click **Quick start**.
-4. Choose your platform (**.NET**, **Node.js**, **MongoDB Shell**, **Java**, **Python**). If you don't see your driver or tool listed, don't worry--we continuously document more connection code snippets. Please comment below on what you'd like to see. To learn how to craft your own connection, read [Get the account's connection string information](#get-the-mongodb-connection-string-to-customize).
-5. Copy and paste the code snippet into your MongoDB app.
-
- :::image type="content" source="./media/connect-mongodb-account/quickstart-blade.png" alt-text="Quick start blade":::
-
-## Get the MongoDB connection string to customize
-
-1. In an Internet browser, sign in to the [Azure portal](https://portal.azure.com).
-2. In the **Azure Cosmos DB** blade, select the API.
-3. In the left pane of the account blade, click **Connection String**.
-4. The **Connection String** blade opens. It has all the information necessary to connect to the account by using a driver for MongoDB, including a preconstructed connection string.
-
- :::image type="content" source="./media/connect-mongodb-account/connection-string-blade.png" alt-text="Connection String blade" lightbox= "./media/connect-mongodb-account/connection-string-blade.png" :::
-
-## Connection string requirements
-
-> [!Important]
-> Azure Cosmos DB has strict security requirements and standards. Azure Cosmos DB accounts require authentication and secure communication via *TLS*.
-
-Azure Cosmos DB supports the standard MongoDB connection string URI format, with a couple of specific requirements: Azure Cosmos DB accounts require authentication and secure communication via TLS. So, the connection string format is:
-
-`mongodb://username:password@host:port/[database]?ssl=true`
-
-The values of this string are available in the **Connection String** blade shown earlier:
-
-* Username (required): Cosmos account name.
-* Password (required): Cosmos account password.
-* Host (required): FQDN of the Cosmos account.
-* Port (required): 10255.
-* Database (optional): The database that the connection uses. If no database is provided, the default database is "test."
-* ssl=true (required)
-
-For example, consider the account shown in the **Connection String** blade. A valid connection string is:
-
-`mongodb://contoso123:0Fc3IolnL12312asdfawejunASDF@asdfYXX2t8a97kghVcUzcDv98hawelufhawefafnoQRGwNj2nMPL1Y9qsIr9Srdw==@contoso123.documents.azure.com:10255/mydatabase?ssl=true`
-
-## Driver Requirements
-
-All drivers that support wire protocol version 3.4 or greater will support Azure Cosmos DB API for MongoDB.
-
-Specifically, client drivers must support the Service Name Identification (SNI) TLS extension and/or the appName connection string option. If the `appName` parameter is provided, it must be included as found in the connection string value in the Azure portal.
-
-## Next steps
--- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB's API for MongoDB.-- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB's API for MongoDB.-- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB.-- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Connect Using Compass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/connect-using-compass.md
Title: Connect to Azure Cosmos DB using Compass description: Learn how to use MongoDB Compass to store and manage data in Azure Cosmos DB. -++ Last updated 08/26/2021
# Use MongoDB Compass to connect to Azure Cosmos DB's API for MongoDB
-This tutorial demonstrates how to use [MongoDB Compass](https://www.mongodb.com/products/compass) when storing and/or managing data in Cosmos DB. We use the Azure Cosmos DB's API for MongoDB for this walk-through. For those of you unfamiliar, Compass is a GUI for MongoDB. It is commonly used to visualize your data, run ad-hoc queries, along with managing your data.
+This tutorial demonstrates how to use [MongoDB Compass](https://www.mongodb.com/products/compass) when storing and/or managing data in Azure Cosmos DB. We use Azure Cosmos DB's API for MongoDB for this walk-through. For those of you unfamiliar, Compass is a GUI for MongoDB. It is commonly used to visualize your data, run ad-hoc queries, and manage your data.
-Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Cosmos DB.
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
## Pre-requisites
-To connect to your Cosmos DB account using MongoDB Compass, you must:
+To connect to your Azure Cosmos DB account using MongoDB Compass, you must:
* Download and install [Compass](https://www.mongodb.com/download-center/compass?jmp=hero)
-* Have your Cosmos DB [connection string](connect-mongodb-account.md) information
+* Have your Azure Cosmos DB [connection string](connect-account.md) information
-## Connect to Cosmos DB's API for MongoDB
+## Connect to Azure Cosmos DB's API for MongoDB
-To connect your Cosmos DB account to Compass, you can follow the below steps:
+To connect your Azure Cosmos DB account to Compass, you can follow the below steps:
-1. Retrieve the connection information for your Cosmos account configured with Azure Cosmos DB's API MongoDB using the instructions [here](connect-mongodb-account.md).
+1. Retrieve the connection information for your Azure Cosmos DB account configured with Azure Cosmos DB's API for MongoDB using the instructions [here](connect-account.md).
:::image type="content" source="./media/connect-using-compass/mongodb-compass-connection.png" alt-text="Screenshot of the connection string blade":::
-2. Click on the button that says **Copy to clipboard** next to your **Primary/Secondary connection string** in Cosmos DB. Clicking this button will copy your entire connection string to your clipboard.
+2. Click on the button that says **Copy to clipboard** next to your **Primary/Secondary connection string** in Azure Cosmos DB. Clicking this button will copy your entire connection string to your clipboard.
:::image type="content" source="./media/connect-using-compass/mongodb-connection-copy.png" alt-text="Screenshot of the copy to clipboard button":::
To connect your Cosmos DB account to Compass, you can follow the below steps:
:::image type="content" source="./media/connect-using-compass/mongodb-compass-replica.png" alt-text="Screenshot shows the Replica Set Name text box.":::
-6. Click on **Connect** at the bottom of the page. Your Cosmos DB account and databases should now be visible within MongoDB Compass.
+6. Click on **Connect** at the bottom of the page. Your Azure Cosmos DB account and databases should now be visible within MongoDB Compass.
## Next steps
cosmos-db Connect Using Mongochef https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/connect-using-mongochef.md
Title: Use Studio 3T to connect to Azure Cosmos DB's API for MongoDB description: Learn how to connect to an Azure Cosmos DB's API for MongoDB using Studio 3T. -+ Last updated 08/26/2021 -+
-# Connect to an Azure Cosmos account using Studio 3T
+# Connect to an Azure Cosmos DB account using Studio 3T
To connect to an Azure Cosmos DB's API for MongoDB using Studio 3T, you must: * Download and install [Studio 3T](https://studio3t.com/).
-* Have your Azure Cosmos account's [connection string](connect-mongodb-account.md) information.
+* Have your Azure Cosmos DB account's [connection string](connect-account.md) information.
## Create the connection in Studio 3T
-To add your Azure Cosmos account to the Studio 3T connection manager, use the following steps:
+To add your Azure Cosmos DB account to the Studio 3T connection manager, use the following steps:
-1. Retrieve the connection information for your Azure Cosmos DB's API for MongoDB account using the instructions in the [Connect a MongoDB application to Azure Cosmos DB](connect-mongodb-account.md) article.
+1. Retrieve the connection information for your Azure Cosmos DB's API for MongoDB account using the instructions in the [Connect a MongoDB application to Azure Cosmos DB](connect-account.md) article.
:::image type="content" source="./media/connect-using-mongochef/connection-string-blade.png" alt-text="Screenshot of the connection string page"::: 2. Click **Connect** to open the Connection Manager, then click **New Connection** :::image type="content" source="./media/connect-using-mongochef/connection-manager.png" alt-text="Screenshot of the Studio 3T connection manager that highlights the New Connection button.":::
-3. In the **New Connection** window, on the **Server** tab, enter the HOST (FQDN) of the Azure Cosmos account and the PORT.
+3. In the **New Connection** window, on the **Server** tab, enter the HOST (FQDN) of the Azure Cosmos DB account and the PORT.
:::image type="content" source="./media/connect-using-mongochef/connection-manager-server-tab.png" alt-text="Screenshot of the Studio 3T connection manager server tab"::: 4. In the **New Connection** window, on the **Authentication** tab, choose Authentication Mode **Basic (MONGODB-CR or SCRAM-SHA-1)** and enter the USERNAME and PASSWORD. Accept the default authentication db (admin) or provide your own value.
To create a database, collection, and documents using Studio 3T, perform the fol
- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB. - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Connect Using Mongoose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/connect-using-mongoose.md
Title: Connect a Node.js Mongoose application to Azure Cosmos DB description: Learn how to use the Mongoose Framework to store and manage data in Azure Cosmos DB. -+ ms.devlang: javascript Last updated 08/26/2021 -+ # Connect a Node.js Mongoose application to Azure Cosmos DB
-This tutorial demonstrates how to use the [Mongoose Framework](https://mongoosejs.com/) when storing data in Cosmos DB. We use the Azure Cosmos DB's API for MongoDB for this walkthrough. For those of you unfamiliar, Mongoose is an object modeling framework for MongoDB in Node.js and provides a straight-forward, schema-based solution to model your application data.
+This tutorial demonstrates how to use the [Mongoose Framework](https://mongoosejs.com/) when storing data in Azure Cosmos DB. We use the Azure Cosmos DB's API for MongoDB for this walkthrough. For those of you unfamiliar, Mongoose is an object modeling framework for MongoDB in Node.js and provides a straight-forward, schema-based solution to model your application data.
-Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Cosmos DB.
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
## Prerequisites
Cosmos DB is Microsoft's globally distributed multi-model database service. You
[Node.js](https://nodejs.org/) version 0.10.29 or higher.
-## Create a Cosmos account
+## Create an Azure Cosmos DB account
-Let's create a Cosmos account. If you already have an account you want to use, you can skip ahead to Set up your Node.js application. If you are using the Azure Cosmos DB Emulator, follow the steps at [Azure Cosmos DB Emulator](../local-emulator.md) to set up the emulator and skip ahead to Set up your Node.js application.
+Let's create an Azure Cosmos DB account. If you already have an account you want to use, you can skip ahead to Set up your Node.js application. If you are using the Azure Cosmos DB Emulator, follow the steps at [Azure Cosmos DB Emulator](../local-emulator.md) to set up the emulator and skip ahead to Set up your Node.js application.
[!INCLUDE [cosmos-db-create-dbaccount-mongodb](../includes/cosmos-db-create-dbaccount-mongodb.md)]
In this application we will cover two ways of creating collections in Azure Cosm
:::image type="content" source="./media/connect-using-mongoose/db-level-throughput.png" alt-text="Node.js tutorial - Screenshot of the Azure portal, showing how to create a database in the Data Explorer for an Azure Cosmos DB account, for use with the Mongoose Node module"::: -- **Storing all object models in a single Cosmos DB collection**: If you'd prefer to store all models in a single collection, you can just create a new database without selecting the Provision Throughput option. Using this capacity model will create each collection with its own throughput capacity for every object model.
+- **Storing all object models in a single Azure Cosmos DB collection**: If you'd prefer to store all models in a single collection, you can just create a new database without selecting the Provision Throughput option. Using this capacity model will create each collection with its own throughput capacity for every object model.
After you create the database, you'll use the name in the `COSMOSDB_DBNAME` environment variable below.
After you create the database, you'll use the name in the `COSMOSDB_DBNAME` envi
var env = require('dotenv').config(); //Use the .env file to load the variables ```
-5. Add your Cosmos DB connection string and Cosmos DB Name to the ```.env``` file. Replace the placeholders {cosmos-account-name} and {dbname} with your own Cosmos account name and database name, without the brace symbols.
+5. Add your Azure Cosmos DB connection string and database name to the ```.env``` file. Replace the placeholders {cosmos-account-name} and {dbname} with your own Azure Cosmos DB account name and database name, without the brace symbols.
```javascript
- // You can get the following connection details from the Azure portal. You can find the details on the Connection string pane of your Azure Cosmos account.
+ // You can get the following connection details from the Azure portal. You can find the details on the Connection string pane of your Azure Cosmos DB account.
- COSMOSDB_USER = "<Azure Cosmos account's user name, usually the database account name>"
- COSMOSDB_PASSWORD = "<Azure Cosmos account password, this is one of the keys specified in your account>"
- COSMOSDB_DBNAME = "<Azure Cosmos database name>"
- COSMOSDB_HOST= "<Azure Cosmos Host name>"
+ COSMOSDB_USER = "<Azure Cosmos DB account's user name, usually the database account name>"
+ COSMOSDB_PASSWORD = "<Azure Cosmos DB account password, this is one of the keys specified in your account>"
+ COSMOSDB_DBNAME = "<Azure Cosmos DB database name>"
+ COSMOSDB_HOST= "<Azure Cosmos DB Host name>"
COSMOSDB_PORT=10255 ```
-6. Connect to Cosmos DB using the Mongoose framework by adding the following code to the end of index.js.
+6. Connect to Azure Cosmos DB using the Mongoose framework by adding the following code to the end of index.js.
```javascript mongoose.connect("mongodb://"+process.env.COSMOSDB_HOST+":"+process.env.COSMOSDB_PORT+"/"+process.env.COSMOSDB_DBNAME+"?ssl=true&replicaSet=globaldb", {
After you create the database, you'll use the name in the `COSMOSDB_DBNAME` envi
Once you're connected to Azure Cosmos DB, you can start setting up object models in Mongoose.
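For reference, a minimal sketch of the completed connection, using the environment variables defined above, might look like the following. Exact option names can vary slightly across Mongoose and MongoDB driver versions, so treat this as illustrative rather than definitive.

```javascript
// Minimal connection sketch; assumes the .env values shown above are set.
// Option names (for example, auth.username vs. auth.user) vary slightly by Mongoose version.
var mongoose = require('mongoose');
require('dotenv').config();

mongoose.connect(
  "mongodb://" + process.env.COSMOSDB_HOST + ":" + process.env.COSMOSDB_PORT + "/" +
  process.env.COSMOSDB_DBNAME + "?ssl=true&replicaSet=globaldb",
  {
    auth: {
      username: process.env.COSMOSDB_USER,
      password: process.env.COSMOSDB_PASSWORD
    }
  })
  .then(function () { console.log('Connected to Azure Cosmos DB'); })
  .catch(function (err) { console.error(err); });
```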
-## Best practices for using Mongoose with Cosmos DB
+## Best practices for using Mongoose with Azure Cosmos DB
For every model you create, Mongoose creates a new collection. This is best addressed using the [Database Level Throughput option](../set-throughput.md#set-throughput-on-a-database), which was previously discussed. To use a single collection, you need to use Mongoose [Discriminators](https://mongoosejs.com/docs/discriminators.html). Discriminators are a schema inheritance mechanism. They enable you to have multiple models with overlapping schemas on top of the same underlying MongoDB collection.
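As a compact, hedged sketch of that pattern (the model and field names here are illustrative, not the article's own), discriminators expose separate models while sharing a single physical collection:

```javascript
// Illustrative discriminator sketch: two models backed by one shared collection.
// Assumes an open Mongoose connection (see the connection snippet earlier).
var mongoose = require('mongoose');

var baseOptions = {
  discriminatorKey: '_type',   // field Mongoose adds to tell the models apart
  collection: 'alldata'        // single underlying collection shared by all models
};

var Base = mongoose.model('Base', new mongoose.Schema({}, baseOptions));

var Family = Base.discriminator('Family',
  new mongoose.Schema({ lastName: String }, baseOptions));
var VacationDestination = Base.discriminator('VacationDestination',
  new mongoose.Schema({ name: String, country: String }, baseOptions));

// Both writes land in the 'alldata' collection; reads through a model are
// automatically filtered by the discriminator key.
new Family({ lastName: 'Volum' }).save()
  .then(function () { return new VacationDestination({ name: 'Sanibel Island', country: 'USA' }).save(); })
  .then(function () { return Family.find({}); })
  .then(function (docs) { console.log(docs); });
```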
This section explores how to achieve this with Azure Cosmos DB's API for MongoDB
}); ```
-1. Finally, let's save the object to Cosmos DB. This creates a collection underneath the covers.
+1. Finally, let's save the object to Azure Cosmos DB. This creates a collection underneath the covers.
```JavaScript family.save((err, saveFamily) => {
This section explores how to achieve this with Azure Cosmos DB's API for MongoDB
}); ```
-1. Now, going into the Azure portal, you notice two collections created in Cosmos DB.
+1. Now, if you go to the Azure portal, you'll notice two collections created in Azure Cosmos DB.
:::image type="content" source="./media/connect-using-mongoose/mongo-mutliple-collections.png" alt-text="Node.js tutorial - Screenshot of the Azure portal, showing an Azure Cosmos DB account, with multiple collection names highlighted - Node database":::
-1. Finally, let's read the data from Cosmos DB. Since we're using the default Mongoose operating model, the reads are the same as any other reads with Mongoose.
+1. Finally, let's read the data from Azure Cosmos DB. Since we're using the default Mongoose operating model, the reads are the same as any other reads with Mongoose.
```JavaScript Family.find({ 'children.gender' : "male"}, function(err, foundFamily){
cosmos-db Connect Using Robomongo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/connect-using-robomongo.md
Title: Use Robo 3T to connect to Azure Cosmos DB description: Learn how to connect to Azure Cosmos DB using Robo 3T and Azure Cosmos DB's API for MongoDB -++ Last updated 08/26/2021 - # Use Robo 3T with Azure Cosmos DB's API for MongoDB
-To connect to Cosmos account using Robo 3T, you must:
+To connect to an Azure Cosmos DB account using Robo 3T, you must:
* Download and install [Robo 3T](https://robomongo.org/)
-* Have your Cosmos DB [connection string](connect-mongodb-account.md) information
+* Have your Azure Cosmos DB [connection string](connect-account.md) information
> [!NOTE]
-> Currently, Robo 3T v1.2 and lower versions are supported with Cosmos DB's API for MongoDB.
+> Currently, Robo 3T v1.2 and earlier versions are supported with Azure Cosmos DB's API for MongoDB.
## Connect using Robo 3T
-To add your Cosmos account to the Robo 3T connection manager, perform the following steps:
+To add your Azure Cosmos DB account to the Robo 3T connection manager, perform the following steps:
-1. Retrieve the connection information for your Cosmos account configured with Azure Cosmos DB's API MongoDB using the instructions [here](connect-mongodb-account.md).
+1. Retrieve the connection information for your Azure Cosmos DB account configured with Azure Cosmos DB's API for MongoDB by using the instructions [here](connect-account.md).
:::image type="content" source="./media/connect-using-robomongo/connectionstringblade.png" alt-text="Screenshot of the connection string blade"::: 2. Run the *Robomongo* application.
Both **User Name** and **Password** can be found in your connection information
- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB. - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Consistency Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/consistency-mapping.md
Title: Mapping consistency levels for Azure Cosmos DB API for MongoDB
-description: Mapping consistency levels for Azure Cosmos DB API for MongoDB.
+ Title: Mapping consistency levels for Azure Cosmos DB for MongoDB
+description: Mapping consistency levels for Azure Cosmos DB for MongoDB.
-++ Last updated 10/12/2020 - # Consistency levels for Azure Cosmos DB and the API for MongoDB Unlike Azure Cosmos DB, the native MongoDB does not provide precisely defined consistency guarantees. Instead, native MongoDB allows users to configure the following consistency guarantees: a write concern, a read concern, and the isMaster directive - to direct the read operations to either primary or secondary replicas to achieve the desired consistency level.
-When using Azure Cosmos DB's API for MongoDB, the MongoDB driver treats your write region as the primary replica and all other regions are read replica. You can choose which region associated with your Azure Cosmos account as a primary replica.
+When using Azure Cosmos DB's API for MongoDB, the MongoDB driver treats your write region as the primary replica and all other regions as read replicas. You can choose which region associated with your Azure Cosmos DB account acts as the primary replica.
> [!NOTE] > The default consistency model for Azure Cosmos DB is Session. Session is a client-centric consistency model which is not natively supported by either Cassandra or MongoDB. For more information on which consistency model to choose, see [Consistency levels in Azure Cosmos DB](../consistency-levels.md). While using Azure Cosmos DB's API for MongoDB:
-* The write concern is mapped to the default consistency level configured on your Azure Cosmos account.
+* The write concern is mapped to the default consistency level configured on your Azure Cosmos DB account.
* Azure Cosmos DB dynamically maps the read concern specified by the MongoDB client driver to one of the Azure Cosmos DB consistency levels on each read request.
-* You can annotate a specific region associated with your Azure Cosmos account as "Primary" by making the region as the first writable region.
+* You can annotate a specific region associated with your Azure Cosmos DB account as "Primary" by making that region the first writable region.
## Mapping consistency levels
-The following table illustrates how the native MongoDB write/read concerns are mapped to the Azure Cosmos consistency levels when using Azure Cosmos DB's API for MongoDB:
+The following table illustrates how the native MongoDB write/read concerns are mapped to the Azure Cosmos DB consistency levels when using Azure Cosmos DB's API for MongoDB:
:::image type="content" source="../media/consistency-levels-across-apis/consistency-model-mapping-mongodb.png" alt-text="MongoDB consistency model mapping" lightbox= "../media/consistency-levels-across-apis/consistency-model-mapping-mongodb.png":::
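To make the mapping concrete from the client side, here's a hedged Node.js sketch of where an application states its write and read concerns, which Azure Cosmos DB then maps as described above. The connection string, database, and collection names are placeholders, not part of the original article.

```javascript
// Hedged sketch (Node.js MongoDB driver); connection string, database, and collection names are placeholders.
const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient(process.env.COSMOSDB_CONNECTION_STRING);
  await client.connect();
  const coll = client.db('mydb').collection('mycoll');

  // The write concern on a write request is mapped to the account's default consistency level.
  await coll.insertOne({ status: 'pending' }, { writeConcern: { w: 'majority' } });

  // The read concern specified by the driver is mapped dynamically to an
  // Azure Cosmos DB consistency level for this read request.
  const doc = await coll.findOne({ status: 'pending' }, { readConcern: { level: 'majority' } });
  console.log(doc);

  await client.close();
}

main().catch(console.error);
```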
-If your Azure Cosmos account is configured with a consistency level other than the strong consistency, you can find out the probability that your clients may get strong and consistent reads for your workloads by looking at the *Probabilistically Bounded Staleness* (PBS) metric. This metric is exposed in the Azure portal, to learn more, see [Monitor Probabilistically Bounded Staleness (PBS) metric](../how-to-manage-consistency.md#monitor-probabilistically-bounded-staleness-pbs-metric).
+If your Azure Cosmos DB account is configured with a consistency level other than strong consistency, you can find out the probability that your clients get strong and consistent reads for your workloads by looking at the *Probabilistically Bounded Staleness* (PBS) metric. This metric is exposed in the Azure portal. To learn more, see [Monitor Probabilistically Bounded Staleness (PBS) metric](../how-to-manage-consistency.md#monitor-probabilistically-bounded-staleness-pbs-metric).
-Probabilistic bounded staleness shows how eventual is your eventual consistency. This metric provides an insight into how often you can get a stronger consistency than the consistency level that you have currently configured on your Azure Cosmos account. In other words, you can see the probability (measured in milliseconds) of getting strongly consistent reads for a combination of write and read regions.
+Probabilistic bounded staleness shows how eventual your eventual consistency is. This metric provides insight into how often you can get stronger consistency than the consistency level that you have currently configured on your Azure Cosmos DB account. In other words, you can see the probability (measured in milliseconds) of getting strongly consistent reads for a combination of write and read regions.
## Next steps
cosmos-db Create Mongodb Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-go.md
- Title: Connect a Go application to Azure Cosmos DB's API for MongoDB
-description: This quickstart demonstrates how to connect an existing Go application to Azure Cosmos DB's API for MongoDB.
----- Previously updated : 04/26/2022--
-# Quickstart: Connect a Go application to Azure Cosmos DB's API for MongoDB
-
-> [!div class="op_single_selector"]
-> * [.NET](create-mongodb-dotnet.md)
-> * [Python](create-mongodb-python.md)
-> * [Java](create-mongodb-java.md)
-> * [Node.js](create-mongodb-nodejs.md)
-> * [Golang](create-mongodb-go.md)
->
-
-Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities. In this quickstart, you create and manage an Azure Cosmos DB account by using the Azure Cloud Shell, clone an existing sample application from GitHub and configure it to work with Azure Cosmos DB.
-
-The sample application is a command-line based `todo` management tool written in Go. Azure Cosmos DB's API for MongoDB is [compatible with the MongoDB wire protocol](./mongodb-introduction.md), making it possible for any MongoDB client driver to connect to it. This application uses the [Go driver for MongoDB](https://github.com/mongodb/mongo-go-driver) in a way that is transparent to the application that the data is stored in an Azure Cosmos DB database.
-
-## Prerequisites
-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with the connection string `.mongodb://localhost:C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==@localhost:10255/admin?ssl=true`.-- [Go](https://go.dev/) installed on your computer, and a working knowledge of Go.-- [Git](https://git-scm.com/downloads).-
-## Clone the sample application
-
-Run the following commands to clone the sample repository.
-
-1. Open a command prompt, create a new folder named `git-samples`, then close the command prompt.
-
- ```bash
- mkdir "C:\git-samples"
- ```
-
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
-
- ```bash
- cd "C:\git-samples"
- ```
-
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
-
- ```bash
- git clone https://github.com/Azure-Samples/cosmosdb-go-mongodb-quickstart
- ```
-
-## Review the code
-
-This step is optional. If you're interested in learning how the application works, you can review the following snippets. Otherwise, you can skip ahead to [Run the application](#run-the-application). The application layout is as follows:
-
-```bash
-.
-├── go.mod
-├── go.sum
-└── todo.go
-```
-
-The following snippets are all taken from the `todo.go` file.
-
-### Connecting the Go app to Azure Cosmos DB
-
-[`clientOptions`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo/options?tab=doc#ClientOptions) encapsulates the connection string for Azure Cosmos DB, which is passed in using an environment variable (details in the upcoming section). The connection is initialized using [`mongo.NewClient`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#NewClient) to which the `clientOptions` instance is passed. [`Ping` function](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Client.Ping) is invoked to confirm successful connectivity (it is a fail-fast strategy)
-
-```go
- ctx, cancel := context.WithTimeout(context.Background(), time.Second*10)
- defer cancel()
-
- clientOptions := options.Client().ApplyURI(mongoDBConnectionString).SetDirect(true)
-
- c, err := mongo.NewClient(clientOptions)
- err = c.Connect(ctx)
- if err != nil {
- log.Fatalf("unable to initialize connection %v", err)
- }
-
- err = c.Ping(ctx, nil)
- if err != nil {
- log.Fatalf("unable to connect %v", err)
- }
-```
-
-> [!NOTE]
-> Using the [`SetDirect(true)`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo/options?tab=doc#ClientOptions.SetDirect) configuration is important, without which you will get the following connectivity error: `unable to connect connection(cdb-ms-prod-<azure-region>-cm1.documents.azure.com:10255[-4]) connection is closed`
->
-
-### Create a `todo` item
-
-To create a `todo`, we get a handle to a [`mongo.Collection`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Collection) and invoke the [`InsertOne`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Collection.InsertOne) function.
-
-```go
-func create(desc string) {
- c := connect()
- ctx := context.Background()
- defer c.Disconnect(ctx)
-
- todoCollection := c.Database(database).Collection(collection)
- r, err := todoCollection.InsertOne(ctx, Todo{Description: desc, Status: statusPending})
- if err != nil {
- log.Fatalf("failed to add todo %v", err)
- }
-```
-
-We pass in a `Todo` struct that contains the description and the status (which is initially set to `pending`)
-
-```go
-type Todo struct {
- ID primitive.ObjectID `bson:"_id,omitempty"`
- Description string `bson:"description"`
- Status string `bson:"status"`
-}
-```
-### List `todo` items
-
-We can list TODOs based on criteria. A [`bson.D`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/bson?tab=doc#D) is created to encapsulate the filter criteria
-
-```go
-func list(status string) {
- .....
- var filter interface{}
- switch status {
- case listAllCriteria:
- filter = bson.D{}
- case statusCompleted:
- filter = bson.D{{statusAttribute, statusCompleted}}
- case statusPending:
- filter = bson.D{{statusAttribute, statusPending}}
- default:
- log.Fatal("invalid criteria for listing todo(s)")
- }
-```
-
-[`Find`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Collection.Find) is used to search for documents based on the filter and the result is converted into a slice of `Todo`
-
-```go
- todoCollection := c.Database(database).Collection(collection)
- rs, err := todoCollection.Find(ctx, filter)
- if err != nil {
- log.Fatalf("failed to list todo(s) %v", err)
- }
- var todos []Todo
- err = rs.All(ctx, &todos)
- if err != nil {
- log.Fatalf("failed to list todo(s) %v", err)
- }
-```
-
-Finally, the information is rendered in tabular format
-
-```go
- todoTable := [][]string{}
-
- for _, todo := range todos {
- s, _ := todo.ID.MarshalJSON()
- todoTable = append(todoTable, []string{string(s), todo.Description, todo.Status})
- }
-
- table := tablewriter.NewWriter(os.Stdout)
- table.SetHeader([]string{"ID", "Description", "Status"})
-
- for _, v := range todoTable {
- table.Append(v)
- }
- table.Render()
-```
-
-### Update a `todo` item
-
-A `todo` can be updated based on its `_id`. A [`bson.D`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/bson?tab=doc#D) filter is created based on the `_id` and another one is created for the updated information, which is a new status (`completed` or `pending`) in this case. Finally, the [`UpdateOne`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Collection.UpdateOne) function is invoked with the filter and the updated document
-
-```go
-func update(todoid, newStatus string) {
-....
- todoCollection := c.Database(database).Collection(collection)
- oid, err := primitive.ObjectIDFromHex(todoid)
- if err != nil {
- log.Fatalf("failed to update todo %v", err)
- }
- filter := bson.D{{"_id", oid}}
- update := bson.D{{"$set", bson.D{{statusAttribute, newStatus}}}}
- _, err = todoCollection.UpdateOne(ctx, filter, update)
- if err != nil {
- log.Fatalf("failed to update todo %v", err)
- }
-```
-
-### Delete a `todo`
-
-A `todo` is deleted based on its `_id` and it is encapsulated in the form of a [`bson.D`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/bson?tab=doc#D) instance. [`DeleteOne`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Collection.DeleteOne) is invoked to delete the document.
-
-```go
-func delete(todoid string) {
-....
- todoCollection := c.Database(database).Collection(collection)
- oid, err := primitive.ObjectIDFromHex(todoid)
- if err != nil {
- log.Fatalf("invalid todo ID %v", err)
- }
- filter := bson.D{{"_id", oid}}
- _, err = todoCollection.DeleteOne(ctx, filter)
- if err != nil {
- log.Fatalf("failed to delete todo %v", err)
- }
-}
-```
-
-## Build the application
-
-Change into the directory where you cloned the application and build it (using `go build`).
-
-```bash
-cd monogdb-go-quickstart
-go build -o todo
-```
-
-To confirm that the application was built properly.
-
-```bash
-./todo --help
-```
-
-## Setup Azure Cosmos DB
-
-### Sign in to Azure
-
-If you choose to install and use the CLI locally, this topic requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI].
-
-If you are using an installed Azure CLI, sign in to your Azure subscription with the [az login](/cli/azure/reference-index#az-login) command and follow the on-screen directions. You can skip this step if you're using the Azure Cloud Shell.
-
-```azurecli
-az login
-```
-
-### Add the Azure Cosmos DB module
-
-If you are using an installed Azure CLI, check to see if the `cosmosdb` component is already installed by running the `az` command. If `cosmosdb` is in the list of base commands, proceed to the next command. You can skip this step if you're using the Azure Cloud Shell.
-
-If `cosmosdb` is not in the list of base commands, reinstall [Azure CLI](/cli/azure/install-azure-cli).
-
-### Create a resource group
-
-Create a [resource group](../../azure-resource-manager/management/overview.md) with the [az group create](/cli/azure/group#az-group-create). An Azure resource group is a logical container into which Azure resources like web apps, databases and storage accounts are deployed and managed.
-
-The following example creates a resource group in the West Europe region. Choose a unique name for the resource group.
-
-If you are using Azure Cloud Shell, select **Try It**, follow the onscreen prompts to login, then copy the command into the command prompt.
-
-```azurecli-interactive
-az group create --name myResourceGroup --location "West Europe"
-```
-
-### Create an Azure Cosmos DB account
-
-Create a Cosmos account with the [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) command.
-
-In the following command, please substitute your own unique Cosmos account name where you see the `<cosmosdb-name>` placeholder. This unique name will be used as part of your Cosmos DB endpoint (`https://<cosmosdb-name>.documents.azure.com/`), so the name needs to be unique across all Cosmos accounts in Azure.
-
-```azurecli-interactive
-az cosmosdb create --name <cosmosdb-name> --resource-group myResourceGroup --kind MongoDB
-```
-
-The `--kind MongoDB` parameter enables MongoDB client connections.
-
-When the Azure Cosmos DB account is created, the Azure CLI shows information similar to the following example.
-
-> [!NOTE]
-> This example uses JSON as the Azure CLI output format, which is the default. To use another output format, see [Output formats for Azure CLI commands](/cli/azure/format-output-azure-cli).
-
-```json
-{
- "databaseAccountOfferType": "Standard",
- "documentEndpoint": "https://<cosmosdb-name>.documents.azure.com:443/",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Document
-DB/databaseAccounts/<cosmosdb-name>",
- "kind": "MongoDB",
- "location": "West Europe",
- "name": "<cosmosdb-name>",
- "readLocations": [
- {
- "documentEndpoint": "https://<cosmosdb-name>-westeurope.documents.azure.com:443/",
- "failoverPriority": 0,
- "id": "<cosmosdb-name>-westeurope",
- "locationName": "West Europe",
- "provisioningState": "Succeeded"
- }
- ],
- "resourceGroup": "myResourceGroup",
- "type": "Microsoft.DocumentDB/databaseAccounts",
- "writeLocations": [
- {
- "documentEndpoint": "https://<cosmosdb-name>-westeurope.documents.azure.com:443/",
- "failoverPriority": 0,
- "id": "<cosmosdb-name>-westeurope",
- "locationName": "West Europe",
- "provisioningState": "Succeeded"
- }
- ]
-}
-```
-
-### Retrieve the database key
-
-In order to connect to a Cosmos database, you need the database key. Use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command to retrieve the primary key.
-
-```azurecli-interactive
-az cosmosdb keys list --name <cosmosdb-name> --resource-group myResourceGroup --query "primaryMasterKey"
-```
-
-The Azure CLI outputs information similar to the following example.
-
-```json
-"RUayjYjixJDWG5xTqIiXjC..."
-```
-
-## Configure the application
-
-<a name="devconfig"></a>
-### Export the connection string, MongoDB database and collection names as environment variables.
-
-```bash
-export MONGODB_CONNECTION_STRING="mongodb://<COSMOSDB_ACCOUNT_NAME>:<COSMOSDB_PASSWORD>@<COSMOSDB_ACCOUNT_NAME>.documents.azure.com:10255/?ssl=true&replicaSet=globaldb&maxIdleTimeMS=120000&appName=@<COSMOSDB_ACCOUNT_NAME>@"
-```
-
-> [!NOTE]
-> The `ssl=true` option is important because of Cosmos DB requirements. For more information, see [Connection string requirements](connect-mongodb-account.md#connection-string-requirements).
->
-
-For the `MONGODB_CONNECTION_STRING` environment variable, replace the placeholders for `<COSMOSDB_ACCOUNT_NAME>` and `<COSMOSDB_PASSWORD>`
-
-1. `<COSMOSDB_ACCOUNT_NAME>`: The name of the Azure Cosmos DB account you created
-2. `<COSMOSDB_PASSWORD>`: The database key extracted in the previous step
-
-```bash
-export MONGODB_DATABASE=todo-db
-export MONGODB_COLLECTION=todos
-```
-
-You can choose your preferred values for `MONGODB_DATABASE` and `MONGODB_COLLECTION` or leave them as is.
-
-## Run the application
-
-To create a `todo`
-
-```bash
-./todo --create "Create an Azure Cosmos DB database account"
-```
-
-If successful, you should see an output with the MongoDB `_id` of the newly created document:
-
-```bash
-added todo ObjectID("5e9fd6befd2f076d1f03bd8a")
-```
-
-Create another `todo`
-
-```bash
-./todo --create "Get the MongoDB connection string using the Azure CLI"
-```
-
-List all the `todo`s
-
-```bash
-./todo --list all
-```
-
-You should see the ones you just added in a tabular format as such
-
-```bash
-+-+--+--+
-| ID | DESCRIPTION | STATUS |
-+-+--+--+
-| "5e9fd6b1bcd2fa6bd267d4c4" | Create an Azure Cosmos DB | pending |
-| | database account | |
-| "5e9fd6befd2f076d1f03bd8a" | Get the MongoDB connection | pending |
-| | string using the Azure CLI | |
-+-+--+--+
-```
-
-To update the status of a `todo` (e.g. change it to `completed` status), use the `todo` ID
-
-```bash
-./todo --update 5e9fd6b1bcd2fa6bd267d4c4,completed
-```
-
-List only the completed `todo`s
-
-```bash
-./todo --list completed
-```
-
-You should see the one you just updated
-
-```bash
-+-+--+--+
-| ID | DESCRIPTION | STATUS |
-+-+--+--+
-| "5e9fd6b1bcd2fa6bd267d4c4" | Create an Azure Cosmos DB | completed |
-| | database account | |
-+-+--+--+
-```
-
-### View data in Data Explorer
-
-Data stored in Azure Cosmos DB is available to view and query in the Azure portal.
-
-To view, query, and work with the user data created in the previous step, login to the [Azure portal](https://portal.azure.com) in your web browser.
-
-In the top Search box, enter **Azure Cosmos DB**. When your Cosmos account blade opens, select your Cosmos account. In the left navigation, select **Data Explorer**. Expand your collection in the Collections pane, and then you can view the documents in the collection, query the data, and even create and run stored procedures, triggers, and UDFs.
---
-Delete a `todo` using it's ID
-
-```bash
-./todo --delete 5e9fd6b1bcd2fa6bd267d4c4,completed
-```
-
-List the `todo`s to confirm
-
-```bash
-./todo --list all
-```
-
-The `todo` you just deleted should not be present
-
-```bash
-+-+--+--+
-| ID | DESCRIPTION | STATUS |
-+-+--+--+
-| "5e9fd6befd2f076d1f03bd8a" | Get the MongoDB connection | pending |
-| | string using the Azure CLI | |
-+-+--+--+
-```
-
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB MongoDB API account using the Azure Cloud Shell, and create and run a Go command-line app to manage `todo`s. You can now import additional data to your Azure Cosmos DB account.
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db Create Mongodb Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-java.md
- Title: 'Quickstart: Build a web app using the Azure Cosmos DB API for Mongo DB and Java SDK'
-description: Learn to build a Java code sample you can use to connect to and query using Azure Cosmos DB's API for MongoDB.
----- Previously updated : 04/26/2022---
-# Quickstart: Create a console app with Java and the MongoDB API in Azure Cosmos DB
-
-> [!div class="op_single_selector"]
-> * [.NET](create-mongodb-dotnet.md)
-> * [Python](create-mongodb-python.md)
-> * [Java](create-mongodb-java.md)
-> * [Node.js](create-mongodb-nodejs.md)
-> * [Golang](create-mongodb-go.md)
->
-
-In this quickstart, you create and manage an Azure Cosmos DB for MongoDB API account from the Azure portal, and add data by using a Java SDK app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-
-## Prerequisites
-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with the connection string `.mongodb://localhost:C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==@localhost:10255/admin?ssl=true`.-- [Java Development Kit (JDK) version 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). -- [Maven](https://maven.apache.org/download.cgi). Or run `apt-get install maven` to install Maven.-- [Git](https://git-scm.com/downloads). -
-## Create a database account
--
-## Add a collection
-
-Name your new database **db**, and your new collection **coll**.
--
-## Clone the sample application
-
-Now let's clone an app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
-
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
-
- ```bash
- md "C:\git-samples"
- ```
-
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
-
- ```bash
- cd "C:\git-samples"
- ```
-
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
-
- ```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-mongodb-java-getting-started.git
- ```
-
-3. Then open the code in your favorite editor.
-
-## Review the code
-
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
-
-The following snippets are all taken from the *Program.java* file.
-
-This console app uses the [MongoDB Java driver](https://www.mongodb.com/docs/drivers/java-drivers/).
-
-* The DocumentClient is initialized.
-
- ```java
- MongoClientURI uri = new MongoClientURI("FILLME");`
-
- MongoClient mongoClient = new MongoClient(uri);
- ```
-
-* A new database and collection are created.
-
- ```java
- MongoDatabase database = mongoClient.getDatabase("db");
-
- MongoCollection<Document> collection = database.getCollection("coll");
- ```
-
-* Some documents are inserted using `MongoCollection.insertOne`
-
- ```java
- Document document = new Document("fruit", "apple")
- collection.insertOne(document);
- ```
-
-* Some queries are performed using `MongoCollection.find`
-
- ```java
- Document queryResult = collection.find(Filters.eq("fruit", "apple")).first();
- System.out.println(queryResult.toJson());
- ```
-
-## Update your connection string
-
-Now go back to the Azure portal to get your connection string information and copy it into the app.
-
-1. From your Azure Cosmos DB account, select **Quick Start**, select **Java**, then copy the connection string to your clipboard.
-
-2. Open the *Program.java* file, replace the argument to the MongoClientURI constructor with the connection string. You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
-
-## Run the console app
-
-1. Run `mvn package` in a terminal to install required npm modules
-
-2. Run `mvn exec:java -D exec.mainClass=GetStarted.Program` in a terminal to start your Java application.
-
-You can now use [Robomongo](connect-using-robomongo.md) / [Studio 3T](connect-using-mongochef.md) to query, modify, and work with this new data.
-
-## Review SLAs in the Azure portal
--
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB API for Mongo DB account, add a database and container using Data Explorer, and add data using a Java console app. You can now import additional data to your Cosmos database.
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db Create Mongodb Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-python.md
- Title: Get started using Azure Cosmos DB API for MongoDB and Python
-description: Presents a Python code sample you can use to connect to and query using Azure Cosmos DB's API for MongoDB.
----- Previously updated : 04/26/2022---
-# Quickstart: Get started using Azure Cosmos DB API for MongoDB and Python
-
-> [!div class="op_single_selector"]
-> * [.NET](create-mongodb-dotnet.md)
-> * [Python](create-mongodb-python.md)
-> * [Java](create-mongodb-java.md)
-> * [Node.js](create-mongodb-nodejs.md)
-> * [Golang](create-mongodb-go.md)
->
-
-This [quickstart](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) demonstrates how to:
-1. Create an [Azure Cosmos DB API for MongoDB account](mongodb-introduction.md)
-2. Connect to your account using PyMongo
-3. Create a sample database and collection
-4. Perform CRUD operations in the sample collection
-
-## Prerequisites to run the sample app
-
-* [Python](https://www.python.org/downloads/) 3.9+ (It's best to run the [sample code](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) described in this article with this recommended version. Although it may work on older versions of Python 3.)
-* [PyMongo](https://pypi.org/project/pymongo/) installed on your machine
-
-<a id="create-account"></a>
-## Create a database account
--
-## Learn the object model
-
-Before you continue building the application, let's look into the hierarchy of resources in the API for MongoDB and the object model that's used to create and access these resources. The API for MongoDB creates resources in the following order:
-
-* Azure Cosmos DB API for MongoDB account
-* Databases
-* Collections
-* Documents
-
-To learn more about the hierarchy of entities, see the [Azure Cosmos DB resource model](../account-databases-containers-items.md) article.
-
-## Get the code
-
-Download the sample Python code [from the repository](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) or use git clone:
-
-```shell
-git clone https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started
-```
-
-## Retrieve your connection string
-
-When running the sample code, you have to enter your API for MongoDB account's connection string. Use the following steps to find it:
-
-1. In the [Azure portal](https://portal.azure.com/), select your Cosmos DB account.
-
-2. In the left navigation select **Connection String**, and then select **Read-write Keys**. You'll use the copy buttons on the right side of the screen to copy the primary connection string.
-
-> [!WARNING]
-> Never check passwords or other sensitive data into source code.
--
-## Run the code
-
-```shell
-python run.py
-```
-
-## Understand how it works
-
-### Connecting
-
-The following code prompts the user for the connection string. It's never a good idea to have your connection string in code since it enables anyone with it to read or write to your database.
-
-```python
-CONNECTION_STRING = getpass.getpass(prompt='Enter your primary connection string: ') # Prompts user for connection string
-```
-
-The following code creates a client connection your API for MongoDB and tests to make sure it's valid.
-
-```python
-client = pymongo.MongoClient(CONNECTION_STRING)
-try:
- client.server_info() # validate connection string
-except pymongo.errors.ServerSelectionTimeoutError:
- raise TimeoutError("Invalid API for MongoDB connection string or timed out when attempting to connect")
-```
-
-### Resource creation
-The following code creates the sample database and collection that will be used to perform CRUD operations. When creating resources programmatically, it's recommended to use the API for MongoDB extension commands (as shown here) because these commands have the ability to set the resource throughput (RU/s) and configure sharding.
-
-Implicitly creating resources will work but will default to recommended values for throughput and will not be sharded.
-
-```python
-# Database with 400 RU throughput that can be shared across the DB's collections
-db.command({'customAction': "CreateDatabase", 'offerThroughput': 400})
-```
-
-```python
- # Creates a unsharded collection that uses the DBs shared throughput
-db.command({'customAction': "CreateCollection", 'collection': UNSHARDED_COLLECTION_NAME})
-```
-
-### Writing a document
-The following inserts a sample document we will continue to use throughout the sample. We get its unique _id field value so that we can query it in subsequent operations.
-
-```python
-"""Insert a sample document and return the contents of its _id field"""
-document_id = collection.insert_one({SAMPLE_FIELD_NAME: randint(50, 500)}).inserted_id
-```
-
-### Reading/Updating a document
-The following queries, updates, and again queries for the document that we previously inserted.
-
-```python
-print("Found a document with _id {}: {}".format(document_id, collection.find_one({"_id": document_id})))
-
-collection.update_one({"_id": document_id}, {"$set":{SAMPLE_FIELD_NAME: "Updated!"}})
-print("Updated document with _id {}: {}".format(document_id, collection.find_one({"_id": document_id})))
-```
-
-### Deleting a document
-Lastly, we delete the document we created from the collection.
-```python
-"""Delete the document containing document_id from the collection"""
-collection.delete_one({"_id": document_id})
-```
-
-## Next steps
-In this quickstart, you've learned how to create an API for MongoDB account, create a database and a collection with code, and perform CRUD operations.
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db Custom Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/custom-commands.md
description: This article describes how to use MongoDB extension commands to man
-+ Last updated 07/30/2021 ms.devlang: javascript-+ # Use MongoDB extension commands to manage data stored in Azure Cosmos DB's API for MongoDB The following document contains the custom action commands that are specific to Azure Cosmos DB's API for MongoDB. These commands can be used to create and obtain database resources that are specific to the [Azure Cosmos DB capacity model](../account-databases-containers-items.md).
-By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits Cosmos DB such as global distribution, automatic sharding, high availability, latency guarantees, automatic, encryption at rest, backups, and many more, while preserving your investments in your MongoDB app. You can communicate with the Azure Cosmos DB's API for MongoDB by using any of the open-source [MongoDB client drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB's API for MongoDB enables the use of existing client drivers by adhering to the [MongoDB wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
+By using Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of Azure Cosmos DB, such as global distribution, automatic sharding, high availability, latency guarantees, automatic encryption at rest, backups, and more, while preserving your investments in your MongoDB app. You can communicate with Azure Cosmos DB's API for MongoDB by using any of the open-source [MongoDB client drivers](https://docs.mongodb.org/ecosystem/drivers). Azure Cosmos DB's API for MongoDB enables the use of existing client drivers by adhering to the [MongoDB wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
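For context, these custom action commands are issued as regular database commands through any MongoDB driver. A hedged Node.js sketch follows; the connection string, database and collection names, and RU/s value are illustrative, and the custom-action shape mirrors the `customAction` examples shown elsewhere in this digest.

```javascript
// Hedged sketch: issuing Azure Cosmos DB extension (custom action) commands with the Node.js driver.
// The connection string, names, and throughput value are placeholders.
const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient(process.env.COSMOSDB_CONNECTION_STRING);
  await client.connect();
  const db = client.db('mydb');

  // Create the database with 400 RU/s of shared throughput.
  await db.command({ customAction: 'CreateDatabase', offerThroughput: 400 });

  // Create a collection that uses the database's shared throughput.
  await db.command({ customAction: 'CreateCollection', collection: 'mycoll' });

  await client.close();
}

main().catch(console.error);
```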
## MongoDB protocol support
cosmos-db Diagnostic Queries Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/diagnostic-queries-mongodb.md
- Title: Troubleshoot issues with advanced diagnostics queries for Mongo API-
-description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for the MongoDB API.
--- Previously updated : 06/12/2021----
-# Troubleshoot issues with advanced diagnostics queries for the MongoDB API
--
-> [!div class="op_single_selector"]
-> * [SQL (Core) API](../cosmos-db-advanced-queries.md)
-> * [MongoDB API](diagnostic-queries-mongodb.md)
-> * [Cassandra API](../cassandr)
-> * [Gremlin API](../queries-gremlin.md)
->
-
-In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
-
-For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
-
-For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
--- Makes it much easier to work with the data. -- Provides better discoverability of the schemas.-- Improves performance across both ingestion latency and query times.-
-## Common queries
-Common queries are shown in the resource-specific and Azure Diagnostics tables.
-
-### Top N(10) Request Unit (RU) consuming requests or queries in a specific time frame
-
-# [Resource-specific](#tab/resource-specific)
- ```Kusto
- //Enable full-text query to view entire query text
- CDBMongoRequests
- | where TimeGenerated > ago(24h)
- | project PIICommandText, ActivityId, DatabaseName , CollectionName, RequestCharge
- | order by RequestCharge desc
- | take 10
- ```
-
-# [Azure Diagnostics](#tab/azure-diagnostics)
- ```Kusto
- AzureDiagnostics
- | where Category == "MongoRequests"
- | where TimeGenerated > ago(24h)
- | project piiCommandText_s, activityId_g, databaseName_s , collectionName_s, requestCharge_s
- | order by requestCharge_s desc
- | take 10
- ```
--
-### Requests throttled (statusCode = 429 or 16500) in a specific time window
-
-# [Resource-specific](#tab/resource-specific)
- ```Kusto
- CDBMongoRequests
- | where TimeGenerated > ago(24h)
- | where ErrorCode == "429" or ErrorCode == "16500"
- | project DatabaseName, CollectionName, PIICommandText, OperationName, TimeGenerated
- ```
-
-# [Azure Diagnostics](#tab/azure-diagnostics)
- ```Kusto
- AzureDiagnostics
- | where Category == "MongoRequests" and TimeGenerated > ago(24h)
- | where ErrorCode == "429" or ErrorCode == "16500"
- | project databaseName_s , collectionName_s , piiCommandText_s , OperationName, TimeGenerated
- ```
--
-### Timed-out requests (statusCode = 50) in a specific time window
-
-# [Resource-specific](#tab/resource-specific)
- ```Kusto
- CDBMongoRequests
- | where TimeGenerated > ago(24h)
- | where ErrorCode == "50"
- | project DatabaseName, CollectionName, PIICommandText, OperationName, TimeGenerated
- ```
-# [Azure Diagnostics](#tab/azure-diagnostics)
- ```Kusto
- AzureDiagnostics
- | where Category == "MongoRequests" and TimeGenerated > ago(24h)
- | where ErrorCode == "50"
- | project databaseName_s , collectionName_s , piiCommandText_s , OperationName, TimeGenerated
- ```
--
-### Queries with large response lengths (payload size of the server response)
-
-# [Resource-specific](#tab/resource-specific)
- ```Kusto
- CDBMongoRequests
- //specify collection and database
- //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
- | summarize max(ResponseLength) by PIICommandText, RequestCharge, DurationMs, OperationName, TimeGenerated
- | order by max_ResponseLength desc
- ```
-# [Azure Diagnostics](#tab/azure-diagnostics)
- ```Kusto
- AzureDiagnostics
- | where Category == "MongoRequests"
- //specify collection and database
- //| where databaseName_s == "DBNAME" and collectionName_s == "COLLECTIONNAME"
- | summarize max(responseLength_s) by piiCommandText_s, OperationName, duration_s, requestCharge_s
- | order by max_responseLength_s desc
- ```
--
-### RU consumption by physical partition (across all replicas in the replica set)
-
-# [Resource-specific](#tab/resource-specific)
- ```Kusto
- CDBPartitionKeyRUConsumption
- | where TimeGenerated >= now(-1d)
- //specify collection and database
- //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(RequestCharge)) by toint(PartitionKeyRangeId)
- | render columnchart
- ```
-
-# [Azure Diagnostics](#tab/azure-diagnostics)
- ```Kusto
- AzureDiagnostics
- | where TimeGenerated >= now(-1d)
- | where Category == 'PartitionKeyRUConsumption'
- //specify collection and database
- //| where databaseName_s == "DBNAME" and collectionName_s == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
- | render columnchart
- ```
--
-### RU consumption by logical partition (across all replicas in the replica set)
-
-# [Resource-specific](#tab/resource-specific)
- ```Kusto
- CDBPartitionKeyRUConsumption
- | where TimeGenerated >= now(-1d)
- //specify collection and database
- //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
- | render columnchart
- ```
-# [Azure Diagnostics](#tab/azure-diagnostics)
- ```Kusto
- AzureDiagnostics
- | where TimeGenerated >= now(-1d)
- | where Category == 'PartitionKeyRUConsumption'
- //specify collection and database
- //| where databaseName_s == "DBNAME" and collectionName_s == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
- | render columnchart
- ```
--
-## Next steps
-* For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](../cosmosdb-monitor-resource-logs.md).
-* For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
cosmos-db Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/diagnostic-queries.md
+
+ Title: Troubleshoot issues with advanced diagnostics queries for API for MongoDB
+
+description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for MongoDB.
++++ Last updated : 06/12/2021++++
+# Troubleshoot issues with advanced diagnostics queries for the API for MongoDB
++
+> [!div class="op_single_selector"]
+> * [API for NoSQL](../advanced-queries.md)
+> * [API for MongoDB](diagnostic-queries.md)
+> * [API for Cassandra](../cassandr)
+> * [API for Gremlin](../queries-gremlin.md)
+>
+
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
+
+For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+
+For [resource-specific tables](../monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
+
+- Makes it much easier to work with the data.
+- Provides better discoverability of the schemas.
+- Improves performance across both ingestion latency and query times.
+
+## Common queries
+Common queries are shown in the resource-specific and Azure Diagnostics tables.
+
+### Top N(10) Request Unit (RU) consuming requests or queries in a specific time frame
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ //Enable full-text query to view entire query text
+ CDBMongoRequests
+ | where TimeGenerated > ago(24h)
+ | project PIICommandText, ActivityId, DatabaseName , CollectionName, RequestCharge
+ | order by RequestCharge desc
+ | take 10
+ ```
+
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where Category == "MongoRequests"
+ | where TimeGenerated > ago(24h)
+ | project piiCommandText_s, activityId_g, databaseName_s , collectionName_s, requestCharge_s
+ | order by requestCharge_s desc
+ | take 10
+ ```
++
+### Requests throttled (statusCode = 429 or 16500) in a specific time window
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBMongoRequests
+ | where TimeGenerated > ago(24h)
+ | where ErrorCode == "429" or ErrorCode == "16500"
+ | project DatabaseName, CollectionName, PIICommandText, OperationName, TimeGenerated
+ ```
+
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where Category == "MongoRequests" and TimeGenerated > ago(24h)
+ | where ErrorCode == "429" or ErrorCode == "16500"
+ | project databaseName_s , collectionName_s , piiCommandText_s , OperationName, TimeGenerated
+ ```
++
+### Timed-out requests (statusCode = 50) in a specific time window
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBMongoRequests
+ | where TimeGenerated > ago(24h)
+ | where ErrorCode == "50"
+ | project DatabaseName, CollectionName, PIICommandText, OperationName, TimeGenerated
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where Category == "MongoRequests" and TimeGenerated > ago(24h)
+ | where ErrorCode == "50"
+ | project databaseName_s , collectionName_s , piiCommandText_s , OperationName, TimeGenerated
+ ```
++
+### Queries with large response lengths (payload size of the server response)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBMongoRequests
+ //specify collection and database
+ //| where DatabaseName == "DB NAME" and CollectionName == "COLLECTIONNAME"
+ | summarize max(ResponseLength) by PIICommandText, RequestCharge, DurationMs, OperationName, TimeGenerated
+ | order by max_ResponseLength desc
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where Category == "MongoRequests"
+ //specify collection and database
+ //| where databaseName_s == "DB NAME" and collectionName_s == "COLLECTIONNAME"
+ | summarize MaxResponseLength = max(todouble(responseLength_s)) by piiCommandText_s, OperationName, duration_s, requestCharge_s
+ | order by MaxResponseLength desc
+ ```
++
+### RU consumption by physical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DB NAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where OperationName == 'Create'
+ | summarize sum(todouble(RequestCharge)) by toint(PartitionKeyRangeId)
+ | render columnchart
+ ```
+
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+ //| where databaseName_s == "DB NAME" and collectionName_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
+ | render columnchart
+ ```
++
+### RU consumption by logical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DB NAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where OperationName == 'Create'
+ | summarize sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
+ | render columnchart
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+ //| where databaseName_s == "DB NAME" and collectionName_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
+ | render columnchart
+ ```
++
+## Next steps
+* For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](../monitor-resource-logs.md).
+* For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
cosmos-db Error Codes Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/error-codes-solutions.md
Title: Troubleshoot common errors in Azure Cosmos DB's API for Mongo DB
+ Title: Troubleshoot common errors in Azure Cosmos DB's API for MongoDB
description: This doc discusses the ways to troubleshoot common issues encountered in Azure Cosmos DB's API for MongoDB. -++ Last updated 07/15/2020 - # Troubleshoot common issues in Azure Cosmos DB's API for MongoDB
-The following article describes common errors and solutions for deployments using the Azure Cosmos DB API for MongoDB.
+The following article describes common errors and solutions for deployments that use Azure Cosmos DB for MongoDB.
>[!Note] > Azure Cosmos DB does not host the MongoDB engine. It provides an implementation of the MongoDB [wire protocol version 4.0](feature-support-40.md), [3.6](feature-support-36.md), and legacy support for [wire protocol version 3.2](feature-support-32.md). Therefore, some of these errors are only found in Azure Cosmos DB's API for MongoDB.
The following article describes common errors and solutions for deployments usin
| Code | Error | Description | Solution | ||-|--|--|
-| 2 | BadValue | One common cause is that an index path corresponding to the specified order-by item is excluded or the order by query does not have a corresponding composite index that it can be served from. The query requests a sort on a field that is not indexed. | Create a matching index (or composite index) for the sort query being attempted. |
-| 2 | Transaction is not active | The multi-document transaction surpassed the fixed 5 second time limit. | Retry the multi-document transaction or limit the scope of operations within the multi-document transaction to make it complete within the 5 second time limit. |
-| 13 | Unauthorized | The request lacks the permissions to complete. | Ensure you are using the correct keys. |
-| 26 | NamespaceNotFound | The database or collection being referenced in the query cannot be found. | Ensure your database/collection name precisely matches the name in your query.|
-| 50 | ExceededTimeLimit | The request has exceeded the timeout of 60 seconds of execution. | There can be many causes for this error. One of the causes is when the currently allocated request units capacity is not sufficient to complete the request. This can be solved by increasing the request units of that collection or database. In other cases, this error can be worked-around by splitting a large request into smaller ones. Retrying a write operation that has received this error may result in a duplicate write. <br><br>If you are trying to delete large amounts of data without impacting RUs: <br>- Consider using TTL (Based on Timestamp): [Expire data with Azure Cosmos DB's API for MongoDB](mongodb-time-to-live.md) <br>- Use Cursor/Batch size to perform the delete. You can fetch a single document at a time and delete it through a loop. This will help you slowly delete data without impacting your production application.|
-| 61 | ShardKeyNotFound | The document in your request did not contain the collection's shard key (Azure Cosmos DB partition key). | Ensure the collection's shard key is being used in the request.|
-| 66 | ImmutableField | The request is attempting to change an immutable field | "_id" fields are immutable. Ensure that your request does not attempt to update that field or the shard key field. |
-| 67 | CannotCreateIndex | The request to create an index cannot be completed. | Up to 500 single field indexes can be created in a container. Up to eight fields can be included in a compound index (compound indexes are supported in version 3.6+). |
+| 2 | BadValue | One common cause is that an index path corresponding to the specified order-by item is excluded or the order by query doesn't have a corresponding composite index that it can be served from. The query requests a sort on a field that isn't indexed. | Create a matching index (or composite index) for the sort query being attempted. |
+| 2 | Transaction isn't active | The multi-document transaction surpassed the fixed 5-second time limit. | Retry the multi-document transaction or limit the scope of operations within the multi-document transaction to make it complete within the 5-second time limit. |
+| 13 | Unauthorized | The request lacks the permissions to complete. | Ensure you're using the correct keys. |
+| 26 | NamespaceNotFound | The database or collection being referenced in the query can't be found. | Ensure your database/collection name precisely matches the name in your query.|
+| 50 | ExceededTimeLimit | The request has exceeded the timeout of 60 seconds of execution. | There can be many causes for this error. One of the causes is when the currently allocated request units capacity isn't sufficient to complete the request. This can be solved by increasing the request units of that collection or database. In other cases, this error can be worked around by splitting a large request into smaller ones. Retrying a write operation that has received this error may result in a duplicate write. <br><br>If you're trying to delete large amounts of data without impacting RUs: <br>- Consider using TTL (Based on Timestamp): [Expire data with Azure Cosmos DB's API for MongoDB](time-to-live.md) <br>- Use Cursor/Batch size to perform the delete. You can fetch a single document at a time and delete it through a loop. This will help you slowly delete data without impacting your production application.|
+| 61 | ShardKeyNotFound | The document in your request didn't contain the collection's shard key (Azure Cosmos DB partition key). | Ensure the collection's shard key is being used in the request.|
+| 66 | ImmutableField | The request is attempting to change an immutable field. | "_id" fields are immutable. Ensure that your request doesn't attempt to update that field or the shard key field. |
+| 67 | CannotCreateIndex | The request to create an index can't be completed. | Up to 500 single field indexes can be created in a container. Up to eight fields can be included in a compound index (compound indexes are supported in version 3.6+). |
| 112 | WriteConflict | The multi-document transaction failed due to a conflicting multi-document transaction | Retry the multi-document transaction until it succeeds. |
-| 115 | CommandNotSupported | The request attempted is not supported. | Additional details should be provided in the error. If this functionality is important for your deployments, create a support ticket in the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) and the Azure Cosmos DB team will get back to you. |
-| 11000 | DuplicateKey | The shard key (Azure Cosmos DB partition key) of the document you're inserting already exists in the collection or a unique index field constraint has been violated. | Use the update() function to update an existing document. If the unique index field constraint has been violated, insert or update the document with a field value that does not exist in the shard/partition yet. Another option would be to use a field containing a combination of the id and shard key fields. |
+| 115 | CommandNotSupported | The attempted request isn't supported. | Other details should be provided in the error. If this functionality is important for your deployments, create a support ticket in the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade), and the Azure Cosmos DB team will get back to you. |
+| 11000 | DuplicateKey | The shard key (Azure Cosmos DB partition key) of the document you're inserting already exists in the collection or a unique index field constraint has been violated. | Use the update() function to update an existing document. If the unique index field constraint has been violated, insert or update the document with a field value that doesn't exist in the shard/partition yet. Another option would be to use a field containing a combination of the ID and shard key fields. |
| 16500 | TooManyRequests | The total number of request units consumed is more than the provisioned request-unit rate for the collection and has been throttled. | Consider scaling the throughput assigned to a container or a set of containers from the Azure portal or you can retry the operation. If you enable [SSR (server-side retry)](prevent-rate-limiting-errors.md), Azure Cosmos DB automatically retries the requests that fail due to this error. |
-| 16501 | ExceededMemoryLimit | As a multi-tenant service, the operation has gone over the client's memory allotment. This is only applicable to Azure Cosmos DB API for MongoDB version 3.2. | Reduce the scope of the operation through more restrictive query criteria or contact support from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). Example: `db.getCollection('users').aggregate([{$match: {name: "Andy"}}, {$sort: {age: -1}}]))` |
-| 40324 | Unrecognized pipeline stage name. | The stage name in your aggregation pipeline request was not recognized. | Ensure that all aggregation pipeline names are valid in your request. |
-| - | MongoDB wire version issues | The older versions of MongoDB drivers are unable to detect the Azure Cosmos account's name in the connection strings. | Append `appName=@accountName@` at the end of your connection string, where `accountName` is your Azure Cosmos DB account name. |
-| - | MongoDB client networking issues (such as socket or endOfStream exceptions)| The network request has failed. This is often caused by an inactive TCP connection that the MongoDB client is attempting to use. MongoDB drivers often utilize connection pooling, which results in a random connection chosen from the pool being used for a request. Inactive connections typically timeout on the Azure Cosmos DB end after four minutes. | You can either retry these failed requests in your application code, change your MongoDB client (driver) settings to teardown inactive TCP connections before the four-minute timeout window, or configure your OS `keepalive` settings to maintain the TCP connections in an active state.<br><br>To avoid connectivity messages, you may want to change the connection string to set `maxConnectionIdleTime` to 1-2 minutes.<br>- Mongo driver: configure `maxIdleTimeMS=120000` <br>- Node.JS: configure `socketTimeoutMS=120000`, `autoReconnect` = true, `keepAlive` = true, `keepAliveInitialDelay` = 3 minutes
-| - | Mongo Shell not working in the Azure portal | When user is trying to open a Mongo shell, nothing happens and the tab stays blank. | Check Firewall. Firewall is not supported with the Mongo shell in the Azure portal. <br>- Install Mongo shell on the local computer within the firewall rules <br>- Use legacy Mongo shell
-| - | Unable to connect with connection string | The connection string has changed when upgrading from 3.2 -> 3.6 | Note that when using Azure Cosmos DB's API for MongoDB accounts, the 3.6 version of accounts have the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of accounts have the endpoint in the format `*.documents.azure.com`.
+| 16501 | ExceededMemoryLimit | As a multi-tenant service, the operation has gone over the client's memory allotment. This is only applicable to Azure Cosmos DB for MongoDB version 3.2. | Reduce the scope of the operation through more restrictive query criteria or contact support from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). Example: `db.getCollection('users').aggregate([{$match: {name: "Andy"}}, {$sort: {age: -1}}]))` |
+| 40324 | Unrecognized pipeline stage name. | The stage name in your aggregation pipeline request wasn't recognized. | Ensure that all aggregation pipeline names are valid in your request. |
+| - | MongoDB wire version issues | The older versions of MongoDB drivers are unable to detect the Azure Cosmos DB account's name in the connection strings. | Append `appName=@accountName@` at the end of your connection string, where `accountName` is your Azure Cosmos DB account name. |
+| - | MongoDB client networking issues (such as socket or endOfStream exceptions)| The network request has failed. This is often caused by an inactive TCP connection that the MongoDB client is attempting to use. MongoDB drivers often utilize connection pooling, which results in a random connection chosen from the pool being used for a request. Inactive connections typically time out on the Azure Cosmos DB end after four minutes. | You can either retry these failed requests in your application code, change your MongoDB client (driver) settings to tear down inactive TCP connections before the four-minute timeout window, or configure your OS `keepalive` settings to maintain the TCP connections in an active state.<br><br>To avoid connectivity messages, you may want to change the connection string to set `maxConnectionIdleTime` to 1-2 minutes.<br>- Mongo driver: configure `maxIdleTimeMS=120000` <br>- Node.js: configure `socketTimeoutMS=120000`, `autoReconnect` = true, `keepAlive` = true, `keepAliveInitialDelay` = 3 minutes <br><br>A Node.js connection sketch is shown after this table.
+| - | Mongo shell not working in the Azure portal | When a user tries to open a Mongo shell, nothing happens and the tab stays blank. | Check your firewall settings. The Mongo shell in the Azure portal isn't supported when a firewall is enabled. <br>- Install the Mongo shell on a local computer within the firewall rules <br>- Use the legacy Mongo shell
+| - | Unable to connect with connection string | The connection string has changed when upgrading from 3.2 -> 3.6 | When using Azure Cosmos DB's API for MongoDB accounts, the 3.6 version of accounts has the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of accounts has the endpoint in the format `*.documents.azure.com`.
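
The connection settings called out in the table above can be sketched as follows with the MongoDB Node.js driver. This is a minimal, hedged example: the account name, key, and database name are placeholders, and option names and defaults vary by driver version, so verify them against your driver's documentation.

```javascript
// Sketch only: <account> and <key> are placeholders; option support varies by driver version.
const { MongoClient } = require("mongodb");

const uri =
  "mongodb://<account>:<key>@<account>.mongo.cosmos.azure.com:10255/" +
  "?ssl=true&replicaSet=globaldb&retrywrites=false" +
  "&maxIdleTimeMS=120000"; // tear down idle connections before the four-minute idle timeout

// Bound how long an individual socket operation can stay inactive.
const client = new MongoClient(uri, { socketTimeoutMS: 120000 });

async function main() {
  await client.connect();
  const db = client.db("testdb"); // placeholder database name
  console.log(await db.command({ ping: 1 }));
  await client.close();
}

main().catch(console.error);
```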
## Next steps
cosmos-db Estimate Ru Capacity Planner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/estimate-ru-capacity-planner.md
Title: Estimate costs using the Azure Cosmos DB capacity planner - API for Mongo DB
-description: The Azure Cosmos DB capacity planner allows you to estimate the throughput (RU/s) required and cost for your workload. This article describes how to use the capacity planner to estimate the throughput and cost required when using Azure Cosmos DB API for MongoDB.
+ Title: Estimate costs using the Azure Cosmos DB capacity planner - API for MongoDB
+description: The Azure Cosmos DB capacity planner allows you to estimate the throughput (RU/s) required and cost for your workload. This article describes how to use the capacity planner to estimate the throughput and cost required when using Azure Cosmos DB for MongoDB.
-++ Last updated 08/26/2021 -
-# Estimate RU/s using the Azure Cosmos DB capacity planner - Azure Cosmos DB API for MongoDB
+# Estimate RU/s using the Azure Cosmos DB capacity planner - Azure Cosmos DB for MongoDB
> [!NOTE] > If you are planning a data migration to Azure Cosmos DB and all that you know is the number of vcores and servers in your existing sharded and replicated database cluster, please also read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) >
-Configuring your databases and collections with the right amount of provisioned throughput, or [Request Units (RU/s)](../request-units.md), for your workload is essential to optimizing cost and performance. This article describes how to use the Azure Cosmos DB [capacity planner](https://cosmos.azure.com/capacitycalculator/) to get an estimate of the required RU/s and cost of your workload when using the Azure Cosmos DB API for MongoDB. If you are using SQL API, see how to [use capacity calculator with SQL API](../estimate-ru-with-capacity-planner.md) article.
+Configuring your databases and collections with the right amount of provisioned throughput, or [Request Units (RU/s)](../request-units.md), for your workload is essential to optimizing cost and performance. This article describes how to use the Azure Cosmos DB [capacity planner](https://cosmos.azure.com/capacitycalculator/) to get an estimate of the required RU/s and cost of your workload when using Azure Cosmos DB for MongoDB. If you're using the API for NoSQL, see [how to use the capacity calculator with the API for NoSQL](../estimate-ru-with-capacity-planner.md).
[!INCLUDE [capacity planner modes](../includes/capacity-planner-modes.md)]
To get a quick estimate for your workload using the basic mode, navigate to the
|**Input** |**Description** | |||
-| API |Choose MongoDB API |
-|Number of regions|Azure Cosmos DB API for MongoDB is available in all Azure regions. Select the number of regions required for your workload. You can associate any number of regions with your account. See [global distribution](../distribute-data-globally.md) for more details.|
+| API |Choose API for MongoDB |
+|Number of regions|Azure Cosmos DB for MongoDB is available in all Azure regions. Select the number of regions required for your workload. You can associate any number of regions with your account. See [global distribution](../distribute-data-globally.md) for more details.|
|Multi-region writes|If you enable [multi-region writes](../distribute-data-globally.md#key-benefits-of-global-distribution), your application can read and write to any Azure region. If you disable multi-region writes, your application can write data to a single region. <br/><br/> Enable multi-region writes if you expect to have an active-active workload that requires low-latency writes in different regions. For example, an IoT workload that writes data to the database at high volumes in different regions. <br/><br/> Multi-region writes guarantee 99.999% read and write availability. Multi-region writes require more throughput when compared to single-write regions. To learn more, see the [how RUs are different for single and multiple-write regions](../optimize-cost-regions.md) article.| |Total data stored in transactional store |Total estimated data stored (GB) in the transactional store in a single region.| |Use analytical store| Choose **On** if you want to use [Synapse analytical store](../synapse-link.md). Enter the **Total data stored in analytical store**; it represents the estimated data stored (GB) in the analytical store in a single region. |
After you sign in, you can see more fields compared to the fields in basic mode.
|**Input** |**Description** | |||
-|API|Azure Cosmos DB is a multi-model and multi-API service. Choose MongoDB API. |
-|Number of regions|Azure Cosmos DB API for MongoDB is available in all Azure regions. Select the number of regions required for your workload. You can associate any number of regions with your Cosmos account. See [global distribution](../distribute-data-globally.md) for more details.|
+|API|Azure Cosmos DB is a multi-model and multi-API service. Choose API for MongoDB. |
+|Number of regions|Azure Cosmos DB for MongoDB is available in all Azure regions. Select the number of regions required for your workload. You can associate any number of regions with your Azure Cosmos DB account. See [global distribution](../distribute-data-globally.md) for more details.|
|Multi-region writes|If you enable [multi-region writes](../distribute-data-globally.md#key-benefits-of-global-distribution), your application can read and write to any Azure region. If you disable multi-region writes, your application can write data to a single region. <br/><br/> Enable multi-region writes if you expect to have an active-active workload that requires low-latency writes in different regions. For example, an IoT workload that writes data to the database at high volumes in different regions. <br/><br/> Multi-region writes guarantee 99.999% read and write availability. Multi-region writes require more throughput when compared to single-write regions. To learn more, see the [how RUs are different for single and multiple-write regions](../optimize-cost-regions.md) article.|
-|Default consistency|Azure Cosmos DB API for MongoDB supports 5 consistency levels, to allow developers to balance the tradeoff between consistency, availability, and latency tradeoffs. To learn more, see the [consistency levels](../consistency-levels.md) article. <br/><br/> By default, API for MongoDB uses session consistency, which guarantees the ability to read your own writes in a session. <br/><br/> Choosing strong or bounded staleness will require double the required RU/s for reads, when compared to session, consistent prefix, and eventual consistency. Strong consistency with multi-region writes is not supported and will automatically default to single-region writes with strong consistency. |
-|Indexing policy| If you choose **Off** option, none of the properties are indexed. This results in the lowest RU charge for writes. Turn off the indexing policy if you only plan to query using the _id field and the shard key for every query (both per query).<br/><br/> If you choose the **Automatic** option, the 3.6 and higher versions of API for MongoDB automatically index the _id filed. When you choose automatic indexing, it is the equivalent of setting a wildcard index (where every property gets auto-indexed). Use wildcard indexes for all fields for flexible and efficient queries.<br/><br/>If you choose the **Custom** option, you can set how many properties are indexed with multi-key indexes or compound indexes. You can enter the number of properties indexed later in the form. To learn more, see [index management](../mongodb-indexing.md) in API for MongoDB.|
+|Default consistency|Azure Cosmos DB for MongoDB supports five consistency levels that allow developers to balance the tradeoffs between consistency, availability, and latency. To learn more, see the [consistency levels](../consistency-levels.md) article. <br/><br/> By default, API for MongoDB uses session consistency, which guarantees the ability to read your own writes in a session. <br/><br/> Choosing strong or bounded staleness will require double the required RU/s for reads, when compared to session, consistent prefix, and eventual consistency. Strong consistency with multi-region writes is not supported and will automatically default to single-region writes with strong consistency. |
+|Indexing policy| If you choose the **Off** option, none of the properties are indexed. This results in the lowest RU charge for writes. Turn off the indexing policy if you only plan to query using the _id field and the shard key for every query (both per query).<br/><br/> If you choose the **Automatic** option, the 3.6 and higher versions of API for MongoDB automatically index the _id field. When you choose automatic indexing, it's the equivalent of setting a wildcard index (where every property gets auto-indexed). Use wildcard indexes for all fields for flexible and efficient queries.<br/><br/>If you choose the **Custom** option, you can set how many properties are indexed with multi-key indexes or compound indexes. You can enter the number of properties indexed later in the form. To learn more, see [index management](../mongodb/indexing.md) in API for MongoDB.|
|Total data stored in transactional store |Total estimated data stored (GB) in the transactional store in a single region.| |Use analytical store| Choose **On** if you want to use [Synapse analytical store](../synapse-link.md). Enter the **Total data stored in analytical store**; it represents the estimated data stored (GB) in the analytical store in a single region. | |Workload mode|Select the **Steady** option if your workload volume is constant. <br/><br/> Select the **Variable** option if your workload volume changes over time, for example, during a specific day or a month. The following setting is available if you choose the variable workload option:<ul><li>Percentage of time at peak: Percentage of time in a month where your workload requires peak (highest) throughput. </li></ul> <br/><br/> For example, if you have a workload that has high activity during 9am to 6pm weekday business hours, then the percentage of time at peak is: 45 hours at peak / 730 hours / month = ~6%.<br/><br/>With peak and off-peak intervals, you can optimize your cost by [programmatically scaling your provisioned throughput](../set-throughput.md#update-throughput-on-a-database-or-a-container) up and down accordingly. A mongo shell sketch of such an update follows this table.|
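
As a rough illustration of programmatic scaling, the following mongo shell sketch uses the Azure Cosmos DB for MongoDB extension command `UpdateCollection` to change a collection's provisioned RU/s before a peak window. The database name, collection name, and RU/s value are placeholders, and the exact options depend on how throughput is provisioned (manual versus autoscale), so treat this only as a starting point.

```javascript
// Illustrative only: "mydb", "orders", and the RU/s value are placeholders.
db.getSiblingDB("mydb").runCommand({
    customAction: "UpdateCollection",   // Azure Cosmos DB extension command
    collection: "orders",
    offerThroughput: 1000               // scale up for the peak window; scale back down off-peak
});
```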
The prices shown in the capacity planner are estimates based on the public prici
* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) * Learn more about [Azure Cosmos DB's pricing model](../how-pricing-works.md).
-* Create a new [Cosmos account, database, and container](../create-cosmosdb-resources-portal.md).
+* Create a new [Azure Cosmos DB account, database, and container](../nosql/quickstart-portal.md).
* Learn how to [optimize provisioned throughput cost](../optimize-cost-throughput.md).
-* Learn how to [optimize cost with reserved capacity](../cosmos-db-reserved-capacity.md).
+* Learn how to [optimize cost with reserved capacity](../reserved-capacity.md).
* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
cosmos-db Feature Support 32 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-32.md
Title: Azure Cosmos DB's API for MongoDB (3.2 version) supported features and syntax description: Learn about Azure Cosmos DB's API for MongoDB (3.2 version) supported features and syntax. -++ Last updated 10/16/2019
# Azure Cosmos DB's API for MongoDB (3.2 version): supported features and syntax Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB's API for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB's API for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
-By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, automatic indexing of every field, encryption at rest, backups, and much more.
+By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, automatic indexing of every field, encryption at rest, backups, and much more.
> [!NOTE]
-> Version 3.2 of the Cosmos DB API for MongoDB has no current plans for end-of-life (EOL). The minimum notice for a future EOL is three years.
+> Version 3.2 of the Azure Cosmos DB for MongoDB has no current plans for end-of-life (EOL). The minimum notice for a future EOL is three years.
## Protocol Support All new accounts for Azure Cosmos DB's API for MongoDB are compatible with MongoDB server version **3.6**. This article covers MongoDB version 3.2. The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB's API for MongoDB.
-Azure Cosmos DB's API for MongoDB also offers a seamless upgrade experience for qualifying accounts. Learn more on the [MongoDB version upgrade guide](upgrade-mongodb-version.md).
+Azure Cosmos DB's API for MongoDB also offers a seamless upgrade experience for qualifying accounts. Learn more on the [MongoDB version upgrade guide](upgrade-version.md).
## Query language support
cursor.sort() | ```cursor.sort({ "Elevation": -1 })``` | Documents without sort
## Unique indexes
-Cosmos DB indexes every field in documents that are written to the database by default. Unique indexes ensure that a specific field doesn't have duplicate values across all documents in a collection, similar to the way uniqueness is preserved on the default `_id` key. You can create custom indexes in Cosmos DB by using the createIndex command, including the 'unique' constraint.
+Azure Cosmos DB indexes every field in documents that are written to the database by default. Unique indexes ensure that a specific field doesn't have duplicate values across all documents in a collection, similar to the way uniqueness is preserved on the default `_id` key. You can create custom indexes in Azure Cosmos DB by using the createIndex command, including the 'unique' constraint.
-Unique indexes are available for all Cosmos accounts using Azure Cosmos DB's API for MongoDB.
+Unique indexes are available for all Azure Cosmos DB accounts using Azure Cosmos DB's API for MongoDB.
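
For example, a unique single-field index can be created with the standard `createIndex` command. The collection and field names below are illustrative only, and a unique index typically needs to be created while the collection is still empty:

```javascript
// Illustrative names: reject any insert that reuses an existing "email" value.
db.users.createIndex({ "email": 1 }, { unique: true });
```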
## Time-to-live (TTL)
-Cosmos DB supports a time-to-live (TTL) based on the timestamp of the document. TTL can be enabled for collections by going to the [Azure portal](https://portal.azure.com).
+Azure Cosmos DB supports a time-to-live (TTL) based on the timestamp of the document. TTL can be enabled for collections by going to the [Azure portal](https://portal.azure.com).
## User and role management
-Cosmos DB does not yet support users and roles. However, Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
+Azure Cosmos DB does not yet support users and roles. However, Azure Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
## Replication
-Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Cosmos DB does not support manual replication commands.
+Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB does not support manual replication commands.
## Write Concern
-Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/) that specifies the number of responses required during a write operation. Due to how Cosmos DB handles replication in the background all writes are all automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
+Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/) that specifies the number of responses required during a write operation. Because of how Azure Cosmos DB handles replication in the background, all writes are automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
## Sharding
cosmos-db Feature Support 36 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-36.md
Title: Azure Cosmos DB's API for MongoDB (3.6 version) supported features and syntax description: Learn about Azure Cosmos DB's API for MongoDB (3.6 version) supported features and syntax. -++ Last updated 04/04/2022
# Azure Cosmos DB's API for MongoDB (3.6 version): supported features and syntax Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB's API for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB's API for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
-By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more.
+By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more.
> [!NOTE]
-> Version 3.6 of the Cosmos DB API for MongoDB has no current plans for end-of-life (EOL). The minimum notice for a future EOL is three years.
+> Version 3.6 of the Azure Cosmos DB for MongoDB has no current plans for end-of-life (EOL). The minimum notice for a future EOL is three years.
## Protocol Support
$polygon | No |
When using the `findOneAndUpdate` operation, sort operations on a single field are supported but sort operations on multiple fields are not supported. ## Indexing
-The API for MongoDB [supports a variety of indexes](mongodb-indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
+The API for MongoDB [supports a variety of indexes](indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
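
For instance, sorting on more than one field requires a compound index created with `createIndex`. The collection and field names in this sketch are illustrative only:

```javascript
// Illustrative names: a compound index that supports sorting by customer, then by date.
db.orders.createIndex({ "customerId": 1, "orderDate": -1 });

// A query whose sort can be served by the index above.
db.orders.find({ "customerId": "c-42" }).sort({ "customerId": 1, "orderDate": -1 });
```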
## GridFS
Azure Cosmos DB supports GridFS through any GridFS-compatible MongoDB driver.
## Replication
-Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Cosmos DB does not support manual replication commands.
+Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB does not support manual replication commands.
## Retryable Writes
cosmos-db Feature Support 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-40.md
Title: 4.0 server version supported features and syntax in Azure Cosmos DB's API for MongoDB description: Learn about Azure Cosmos DB's API for MongoDB 4.0 server version supported features and syntax. Learn about the database commands, query language support, datatypes, aggregation pipeline commands, and operators supported. -++ Last updated 04/05/2022
# Azure Cosmos DB's API for MongoDB (4.0 server version): supported features and syntax Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB's API for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB's API for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
-By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more.
+By using the Azure Cosmos DB's API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more.
## Protocol Support
Azure Cosmos DB's API for MongoDB supports the following database commands:
## Data types
-Azure Cosmos DB's API for MongoDB supports documents encoded in MongoDB BSON format. The 4.0 API version enhances the internal usage of this format to improve performance and reduce costs. Documents written or updated through an endpoint running 4.0 benefit from this.
+Azure Cosmos DB for MongoDB supports documents encoded in MongoDB BSON format. The 4.0 API version enhances the internal usage of this format to improve performance and reduce costs. Documents written or updated through an endpoint running 4.0+ benefit from this.
-In an [upgrade scenario](upgrade-mongodb-version.md), documents written prior to the upgrade to version 4.0 will not benefit from the enhanced performance until they are updated via a write operation through the 4.0 endpoint.
+In an [upgrade scenario](upgrade-version.md), documents written prior to the upgrade to version 4.0+ will not benefit from the enhanced performance until they are updated via a write operation through the 4.0+ endpoint.
+
+16 MB document support raises the size limit for your documents from 2 MB to 16 MB. This limit applies only to collections created after the feature has been enabled. Once this feature is enabled for your database account, it can't be disabled. This feature isn't compatible with Azure Synapse Link or continuous backup.
+
+Enabling 16 MB document support can be done on the features tab in the Azure portal or programmatically by [adding the "EnableMongo16MBDocumentSupport" capability](how-to-configure-capabilities.md).
+
+We recommend enabling Server Side Retry to ensure requests with larger documents succeed. If necessary, raising your DB/Collection RUs may also help performance.
| Command | Supported | |||
$polygon | No |
## Sort operations
-When using the `findOneAndUpdate` operation with Mongo API version 4.0, sort operations on a single field and multiple fields are supported. Sort operations on multiple fields was a limitation of previous wire protocols.
+When using the `findOneAndUpdate` operation with API for MongoDB version 4.0, sort operations on a single field and on multiple fields are supported. Sorting on multiple fields was a limitation of previous wire protocols.
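
As a small, hedged illustration (collection, field, and value names are hypothetical), a multiple-field sort can be passed to `findOneAndUpdate` to control which matching document gets updated:

```javascript
// Illustrative: claim the highest-priority, oldest open ticket in a single atomic operation.
db.tickets.findOneAndUpdate(
    { status: "open" },
    { $set: { status: "assigned", assignee: "worker-1" } },
    { sort: { priority: -1, createdAt: 1 } }   // multiple-field sort, supported with the 4.0 wire protocol
);
```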
## Indexing
-The API for MongoDB [supports a variety of indexes](mongodb-indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
+The API for MongoDB [supports a variety of indexes](indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
## GridFS
Azure Cosmos DB supports GridFS through any GridFS-compatible Mongo driver.
## Replication
-Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Cosmos DB does not support manual replication commands.
+Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB does not support manual replication commands.
## Retryable Writes
-Cosmos DB does not yet support retryable writes. Client drivers must add the 'retryWrites=false' URL parameter to their connection string. More URL parameters can be added by prefixing them with an '&'.
+Retryable writes enables MongoDB drivers to automatically retry certain write operations in case of failure, but results in more stringent requirements for certain operations, which match MongoDB protocol requirements. With this feature enabled, update operations, including deletes, in sharded collections will require the shard key to be included in the query filter or update statement.
+
+For example, with a sharded collection, sharded on key "country": To delete all the documents with the field city = "NYC", the application will need to execute the operation for all shard key (country) values if Retryable writes is enabled.
+
+- `db.coll.deleteMany({"country": "USA", "city": "NYC"})` - **Success**
+- `db.coll.deleteMany({"city": "NYC"})` - **Fails with error `ShardKeyNotFound(61)`**
+
+To enable the feature, [add the EnableMongoRetryableWrites capability](how-to-configure-capabilities.md) to your database account. This feature can also be enabled in the features tab in the Azure portal.
## Sharding
Multi-document transactions are supported within an unsharded collection. Multi-
## User and role management
-Azure Cosmos DB does not yet support users and roles. However, Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
+Azure Cosmos DB does not yet support users and roles. However, Azure Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
## Write Concern
-Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/), which specifies the number of responses required during a write operation. Due to how Cosmos DB handles replication in the background all writes are all automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
+Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/), which specifies the number of responses required during a write operation. Because of how Azure Cosmos DB handles replication in the background, all writes are automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
## Next steps
cosmos-db Feature Support 42 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-42.md
Title: 4.2 server version supported features and syntax in Azure Cosmos DB API for MongoDB
-description: Learn about Azure Cosmos DB API for MongoDB 4.2 server version supported features and syntax. Learn about the database commands, query language support, datatypes, aggregation pipeline commands, and operators supported.
+ Title: 4.2 server version supported features and syntax in Azure Cosmos DB for MongoDB
+description: Learn about Azure Cosmos DB for MongoDB 4.2 server version supported features and syntax. Learn about the database commands, query language support, datatypes, aggregation pipeline commands, and operators supported.
-++ Last updated 04/05/2022
-# Azure Cosmos DB API for MongoDB (4.2 server version): supported features and syntax
+# Azure Cosmos DB for MongoDB (4.2 server version): supported features and syntax
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service, offering [multiple database APIs](../choose-api.md). You can communicate with the Azure Cosmos DB API for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB API for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service, offering [multiple database APIs](../choose-api.md). You can communicate with the Azure Cosmos DB for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
-By using the Azure Cosmos DB API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more.
+By using the Azure Cosmos DB for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more.
## Protocol Support
-The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB API for MongoDB. When using Azure Cosmos DB API for MongoDB accounts, the 3.6+ versions of accounts have the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of accounts has the endpoint in the format `*.documents.azure.com`.
+The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for MongoDB. When using Azure Cosmos DB for MongoDB accounts, the 3.6+ versions of accounts have the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of accounts has the endpoint in the format `*.documents.azure.com`.
> [!NOTE]
-> This article only lists the supported server commands, and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with the Azure Cosmos DB API for MongoDB.
+> This article only lists the supported server commands, and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with the Azure Cosmos DB for MongoDB.
## Query language support
-Azure Cosmos DB API for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options.
+Azure Cosmos DB for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options.
## Database commands
-Azure Cosmos DB API for MongoDB supports the following database commands:
+Azure Cosmos DB for MongoDB supports the following database commands:
### Query and write operation commands
Azure Cosmos DB API for MongoDB supports the following database commands:
## Data types
-Azure Cosmos DB API for MongoDB supports documents encoded in MongoDB BSON format. The 4.2 API version enhances the internal usage of this format to improve performance and reduce costs. Documents written or updated through an endpoint running 4.2 benefit from this.
+Azure Cosmos DB for MongoDB supports documents encoded in MongoDB BSON format. Versions 4.0 and higher (4.0+) enhance the internal usage of this format to improve performance and reduce costs. Documents written or updated through an endpoint running 4.0+ benefit from this.
-In an [upgrade scenario](upgrade-mongodb-version.md), documents written prior to the upgrade to version 4.2 will not benefit from the enhanced performance until they are updated via a write operation through the 4.2 endpoint.
+In an [upgrade scenario](upgrade-version.md), documents written prior to the upgrade to version 4.0+ will not benefit from the enhanced performance until they are updated via a write operation through the 4.0+ endpoint.
+
+16 MB document support raises the size limit for your documents from 2 MB to 16 MB. This limit applies only to collections created after the feature has been enabled. Once this feature is enabled for your database account, it can't be disabled. This feature isn't compatible with Azure Synapse Link or continuous backup.
+
+Enabling 16 MB can be done in the features tab in the Azure portal or programmatically by [adding the `EnableMongo16MBDocumentSupport` capability](how-to-configure-capabilities.md).
+
+We recommend enabling Server Side Retry to ensure requests with larger documents succeed. If necessary, raising your DB/Collection RUs may also help performance.
| Command | Supported | |||
$polygon | No |
When using the `findOneAndUpdate` operation, sort operations on a single field are supported but sort operations on multiple fields are not supported. ## Indexing
-The API for MongoDB [supports a variety of indexes](mongodb-indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
+The API for MongoDB [supports a variety of indexes](indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
## Client-side field level encryption
Azure Cosmos DB supports GridFS through any GridFS-compatible Mongo driver.
## Replication
-Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Cosmos DB does not support manual replication commands.
+Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Azure Cosmos DB does not support manual replication commands.
-## Retryable Writes (preview)
+## Retryable Writes
Retryable writes enables MongoDB drivers to automatically retry certain write operations in case of failure, but results in more stringent requirements for certain operations, which match MongoDB protocol requirements. With this feature enabled, update operations, including deletes, in sharded collections will require the shard key to be included in the query filter or update statement. For example, with a sharded collection, sharded on key "country": To delete all the documents with the field city = "NYC", the application will need to execute the operation for all shard key (country) values if Retryable writes is enabled.
db.coll.deleteMany({"country": "USA", "city": "NYC"}) ΓÇô Success
db.coll.deleteMany({"city": "NYC"})- Fails with error ShardKeyNotFound(61)
-To enable the feature, [add the EnableMongoRetryableWrites capability](how-to-configure-capabilities.md) to your database account.
+To enable the feature, [add the `EnableMongoRetryableWrites` capability](how-to-configure-capabilities.md) to your database account. This feature can also be enabled in the features tab in the Azure portal.
## Sharding
Multi-document transactions are supported within an unsharded collection. Multi-
## User and role management
-Azure Cosmos DB does not yet support users and roles. However, Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
+Azure Cosmos DB does not yet support users and roles. However, Azure Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
## Write Concern
-Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/), which specifies the number of responses required during a write operation. Due to how Cosmos DB handles replication in the background all writes are all automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
+Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/), which specifies the number of responses required during a write operation. Because of how Azure Cosmos DB handles replication in the background, all writes are automatically Quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
## Next steps -- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB API for MongoDB.-- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB API for MongoDB.-- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB API for MongoDB.
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB.
- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md). - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
cosmos-db Find Request Unit Charge Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/find-request-unit-charge-mongodb.md
- Title: Find request unit charge for Azure Cosmos DB API for MongoDB operations
-description: Learn how to find the request unit (RU) charge for MongoDB queries executed against an Azure Cosmos container. You can use the Azure portal, MongoDB .NET, Java, Node.js drivers.
----- Previously updated : 05/12/2022---
-# Find the request unit charge for operations executed in Azure Cosmos DB API for MongoDB
-
-Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
-
-The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and it's considerations](../request-units.md) article.
-
-This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB API for MongoDB. If you're using a different API, see [SQL API](../find-request-unit-charge.md), [Cassandra API](../cassandr) articles to find the RU/s charge.
-
-The RU charge is exposed by a custom [database command](https://docs.mongodb.com/manual/reference/command/) named `getLastRequestStatistics`. The command returns a document that contains the name of the last operation executed, its request charge, and its duration. If you use the Azure Cosmos DB API for MongoDB, you have multiple options for retrieving the RU charge.
-
-## Use the Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. [Create a new Azure Cosmos account](create-mongodb-dotnet.md#create-an-azure-cosmos-db-account) and feed it with data, or select an existing account that already contains data.
-
-1. Go to the **Data Explorer** pane, and then select the container you want to work on.
-
-1. Select the **...** next to the container name and select **New Query**.
-
-1. Enter a valid query, and then select **Execute Query**.
-
-1. Select **Query Stats** to display the actual request charge for the request you executed. This query editor allows you to run and view request unit charges for only query predicates. You can't use this editor for data manipulation commands such as insert statements.
-
- :::image type="content" source="../media/find-request-unit-charge/portal-mongodb-query.png" alt-text="Screenshot of a MongoDB query request charge in the Azure portal":::
-
-1. To get request charges for data manipulation commands, run the `getLastRequestStatistics` command from a shell based UI such as Mongo shell, [Robo 3T](connect-using-robomongo.md), [MongoDB Compass](connect-using-compass.md), or a VS Code extension with shell scripting.
-
- `db.runCommand({getLastRequestStatistics: 1})`
-
-## Use a MongoDB driver
-
-### [.NET driver](#tab/dotnet-driver)
-
-When you use the [official MongoDB .NET driver](https://docs.mongodb.com/ecosystem/drivers/csharp/), you can execute commands by calling the `RunCommand` method on a `IMongoDatabase` object. This method requires an implementation of the `Command<>` abstract class:
-
-```csharp
-class GetLastRequestStatisticsCommand : Command<Dictionary<string, object>>
-{
- public override RenderedCommand<Dictionary<string, object>> Render(IBsonSerializerRegistry serializerRegistry)
- {
- return new RenderedCommand<Dictionary<string, object>>(new BsonDocument("getLastRequestStatistics", 1), serializerRegistry.GetSerializer<Dictionary<string, object>>());
- }
-}
-
-Dictionary<string, object> stats = database.RunCommand(new GetLastRequestStatisticsCommand());
-double requestCharge = (double)stats["RequestCharge"];
-```
-
-For more information, see [Quickstart: Build a .NET web app by using an Azure Cosmos DB API for MongoDB](create-mongodb-dotnet.md).
-
-### [Java driver](#tab/java-driver)
-
-When you use the [official MongoDB Java driver](https://mongodb.github.io/mongo-java-driver/), you can execute commands by calling the `runCommand` method on a `MongoDatabase` object:
-
-```java
-Document stats = database.runCommand(new Document("getLastRequestStatistics", 1));
-Double requestCharge = stats.getDouble("RequestCharge");
-```
-
-For more information, see [Quickstart: Build a web app by using the Azure Cosmos DB API for MongoDB and the Java SDK](create-mongodb-java.md).
-
-### [Node.js driver](#tab/node-driver)
-
-When you use the [official MongoDB Node.js driver](https://mongodb.github.io/node-mongodb-native/), you can execute commands by calling the `command` method on a `db` object:
-
-```javascript
-db.command({ getLastRequestStatistics: 1 }, function(err, result) {
- assert.equal(err, null);
- const requestCharge = result['RequestCharge'];
-});
-```
-
-For more information, see [Quickstart: Migrate an existing MongoDB Node.js web app to Azure Cosmos DB](create-mongodb-nodejs.md).
-
-### [Python driver](#tab/python-driver)
-
-```python
-response = db.command('getLastRequestStatistics')
-requestCharge = response['RequestCharge']
-```
---
-## Next steps
-
-To learn about optimizing your RU consumption, see these articles:
-
-* [Request units and throughput in Azure Cosmos DB](../request-units.md)
-* [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)
-* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/find-request-unit-charge.md
+
+ Title: Find request unit charge for Azure Cosmos DB for MongoDB operations
+description: Learn how to find the request unit (RU) charge for MongoDB queries executed against an Azure Cosmos DB container. You can use the Azure portal, MongoDB .NET, Java, Node.js drivers.
+++++ Last updated : 05/12/2022
+ms.devlang: csharp, java, javascript
+++
+# Find the request unit charge for operations executed in Azure Cosmos DB for MongoDB
+
+Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
+
+The cost of all database operations is normalized by Azure Cosmos DB and is expressed in Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency that abstracts the system resources, such as CPU, IOPS, and memory, required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos DB container, and whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](../request-units.md) article.
+
+This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB for MongoDB. If you're using a different API, see the [API for NoSQL](../find-request-unit-charge.md) or [API for Cassandra](../cassandr) articles to find the RU/s charge.
+
+The RU charge is exposed by a custom [database command](https://docs.mongodb.com/manual/reference/command/) named `getLastRequestStatistics`. The command returns a document that contains the name of the last operation executed, its request charge, and its duration. If you use Azure Cosmos DB for MongoDB, you have multiple options for retrieving the RU charge.
+
+## Use the Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. [Create a new Azure Cosmos DB account](create-mongodb-dotnet.md#create-an-azure-cosmos-db-account) and feed it with data, or select an existing account that already contains data.
+
+1. Go to the **Data Explorer** pane, and then select the container you want to work on.
+
+1. Select the **...** next to the container name and select **New Query**.
+
+1. Enter a valid query, and then select **Execute Query**.
+
+1. Select **Query Stats** to display the actual request charge for the request you executed. This query editor allows you to run and view request unit charges for only query predicates. You can't use this editor for data manipulation commands such as insert statements.
+
+ :::image type="content" source="../media/find-request-unit-charge/portal-mongodb-query.png" alt-text="Screenshot of a MongoDB query request charge in the Azure portal":::
+
+1. To get request charges for data manipulation commands, run the `getLastRequestStatistics` command from a shell-based UI such as the MongoDB shell, [Robo 3T](connect-using-robomongo.md), [MongoDB Compass](connect-using-compass.md), or a VS Code extension with shell scripting.
+
+ `db.runCommand({getLastRequestStatistics: 1})`
+
+## Use a MongoDB driver
+
+### [.NET driver](#tab/dotnet-driver)
+
+When you use the [official MongoDB .NET driver](https://docs.mongodb.com/ecosystem/drivers/csharp/), you can execute commands by calling the `RunCommand` method on an `IMongoDatabase` object. This method requires an implementation of the `Command<>` abstract class:
+
+```csharp
+class GetLastRequestStatisticsCommand : Command<Dictionary<string, object>>
+{
+ public override RenderedCommand<Dictionary<string, object>> Render(IBsonSerializerRegistry serializerRegistry)
+ {
+ return new RenderedCommand<Dictionary<string, object>>(new BsonDocument("getLastRequestStatistics", 1), serializerRegistry.GetSerializer<Dictionary<string, object>>());
+ }
+}
+
+Dictionary<string, object> stats = database.RunCommand(new GetLastRequestStatisticsCommand());
+double requestCharge = (double)stats["RequestCharge"];
+```
+
+For more information, see [Quickstart: Build a .NET web app by using Azure Cosmos DB for MongoDB](create-mongodb-dotnet.md).
+
+### [Java driver](#tab/java-driver)
+
+When you use the [official MongoDB Java driver](https://mongodb.github.io/mongo-java-driver/), you can execute commands by calling the `runCommand` method on a `MongoDatabase` object:
+
+```java
+Document stats = database.runCommand(new Document("getLastRequestStatistics", 1));
+Double requestCharge = stats.getDouble("RequestCharge");
+```
+
+For more information, see [Quickstart: Build a web app by using Azure Cosmos DB for MongoDB and the Java SDK](quickstart-java.md).
+
+### [Node.js driver](#tab/node-driver)
+
+When you use the [official MongoDB Node.js driver](https://mongodb.github.io/node-mongodb-native/), you can execute commands by calling the `command` method on a `db` object:
+
+```javascript
+db.command({ getLastRequestStatistics: 1 }, function(err, result) {
+ assert.equal(err, null);
+ const requestCharge = result['RequestCharge'];
+});
+```
+
+For more information, see [Quickstart: Migrate an existing MongoDB Node.js web app to Azure Cosmos DB](create-mongodb-nodejs.md).
+
+### [Python driver](#tab/python-driver)
+
+When you use the official MongoDB Python driver, you can execute commands by calling the `command` method on a database object:
+
+```python
+response = db.command('getLastRequestStatistics')
+requestCharge = response['RequestCharge']
+```
+++
+## Next steps
+
+To learn about optimizing your RU consumption, see these articles:
+
+* [Request units and throughput in Azure Cosmos DB](../request-units.md)
+* [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)
+* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
+* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db How To Configure Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-configure-capabilities.md
Title: Configure your API for MongoDB account capabilities
description: Learn how to configure your API for MongoDB account capabilities + Last updated 09/06/2022 # Configure your API for MongoDB account capabilities Capabilities are features that can be added or removed to your API for MongoDB account. Many of these features affect account behavior so it's important to be fully aware of the impact a capability will have before enabling or disabling it. Several capabilities are set on API for MongoDB accounts by default, and cannot be changed or removed. One example is the EnableMongo capability. This article will demonstrate how to enable and disable a capability.
az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities @(
## Next steps -- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB API for MongoDB.-- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB API for MongoDB.-- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB API for MongoDB.
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB.
- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md). - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
cosmos-db How To Create Container Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-create-container-mongodb.md
- Title: Create a collection in Azure Cosmos DB API for MongoDB
-description: Learn how to create a collection in Azure Cosmos DB API for MongoDB by using Azure portal, .NET, Java, Node.js, and other SDKs.
--- Previously updated : 04/07/2022-----
-# Create a collection in Azure Cosmos DB API for MongoDB
-
-This article explains the different ways to create a collection in Azure Cosmos DB API for MongoDB. It shows how to create a collection using Azure portal, Azure CLI, PowerShell, or supported SDKs. This article demonstrates how to create a collection, specify the partition key, and provision throughput.
-
->[!NOTE]
-> **Containers** and **collections** are similar to a table in a relational database. We refer to **containers** in the Cosmos DB SQL API and throughout the Azure portal, while we use **collections** in the context of the Cosmos DB MongoDB API to match the terminology used in Mongo DB.
-
-This article explains the different ways to create a collection in Azure Cosmos DB API for MongoDB. If you are using a different API, see [SQL API](../how-to-create-container.md), [Cassandra API](../cassandr) articles to create the collection.
-
-> [!NOTE]
-> When creating collections, make sure you don't create two collections with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on collections with such names.
-
-## <a id="portal-mongodb"></a>Create using Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. [Create a new Azure Cosmos account](create-mongodb-dotnet.md#create-an-azure-cosmos-db-account), or select an existing account.
-
-1. Open the **Data Explorer** pane, and select **New Container**. Next, provide the following details:
-
- * Indicate whether you are creating a new database or using an existing one.
- * Enter a container ID.
- * Enter a shard key.
- * Enter a throughput to be provisioned (for example, 1000 RUs).
- * Select **OK**.
-
- :::image type="content" source="../media/how-to-create-container/partitioned-collection-create-mongodb.png" alt-text="Screenshot of Azure Cosmos DB API for MongoDB, Add Container dialog box":::
-
-## <a id="dotnet-mongodb"></a>Create using .NET SDK
-
-```csharp
-var bson = new BsonDocument
-{
- { "customAction", "CreateCollection" },
- { "collection", "<CollectionName>" },//update CollectionName
- { "shardKey", "<ShardKeyName>" }, //update ShardKey
- { "offerThroughput", 400} //update Throughput
-};
-var shellCommand = new BsonDocumentCommand<BsonDocument>(bson);
-// Create a collection with a partition key by using Mongo Driver:
-db.RunCommand(shellCommand);
-```
-
-If you encounter timeout exception when creating a collection, do a read operation to validate if the collection was created successfully. The read operation throws an exception until the collection create operation is successful. For the list of status codes supported by the create operation see the [HTTP Status Codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) article.
-
-## <a id="cli-mongodb"></a>Create using Azure CLI
-
-[Create a collection for Azure Cosmos DB for MongoDB API with Azure CLI](../scripts/cli/mongodb/create.md). For a listing of all Azure CLI samples across all Azure Cosmos DB APIs see, [Azure CLI samples for Azure Cosmos DB](cli-samples.md).
-
-## Create using PowerShell
-
-[Create a collection for Azure Cosmos DB for MongoDB API with PowerShell](../scripts/powershell/mongodb/create.md). For a listing of all PowerShell samples across all Azure Cosmos DB APIs see, [PowerShell Samples](powershell-samples.md)
-
-## Create a collection using Azure Resource Manager templates
-
-[Create a collection for Azure Cosmos DB for MongoDB API with Resource Manager template](../manage-with-templates.md#azure-cosmos-account-with-standard-provisioned-throughput).
-
-## Next steps
-
-* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
-* [Request Units in Azure Cosmos DB](../request-units.md)
-* [Provision throughput on containers and databases](../set-throughput.md)
-* [Work with Azure Cosmos account](../account-databases-containers-items.md)
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-create-container.md
+
+ Title: Create a collection in Azure Cosmos DB for MongoDB
+description: Learn how to create a collection in Azure Cosmos DB for MongoDB by using Azure portal, .NET, Java, Node.js, and other SDKs.
+++ Last updated : 04/07/2022++
+ms.devlang: csharp
+++
+# Create a collection in Azure Cosmos DB for MongoDB
+
+This article explains the different ways to create a collection in Azure Cosmos DB for MongoDB. It shows how to create a collection, specify the partition key, and provision throughput by using the Azure portal, Azure CLI, PowerShell, or supported SDKs.
+
+>[!NOTE]
+> **Containers** and **collections** are similar to tables in a relational database. We refer to **containers** in Azure Cosmos DB for NoSQL and throughout the Azure portal, while we use **collections** in the context of Azure Cosmos DB for MongoDB to match the terminology used in MongoDB.
+
+If you are using a different API, see the [API for NoSQL](../how-to-create-container.md) or [API for Cassandra](../cassandr) articles to create the collection.
+
+> [!NOTE]
+> When creating collections, make sure you don't create two collections with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on collections with such names.
+
+## <a id="portal-mongodb"></a>Create using Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. [Create a new Azure Cosmos DB account](create-mongodb-dotnet.md#create-an-azure-cosmos-db-account), or select an existing account.
+
+1. Open the **Data Explorer** pane, and select **New Container**. Next, provide the following details:
+
+ * Indicate whether you are creating a new database or using an existing one.
+ * Enter a container ID.
+ * Enter a shard key.
+ * Enter a throughput to be provisioned (for example, 1000 RUs).
+ * Select **OK**.
+
+ :::image type="content" source="../media/how-to-create-container/partitioned-collection-create-mongodb.png" alt-text="Screenshot of Azure Cosmos DB for MongoDB, Add Container dialog box":::
+
+## <a id="dotnet-mongodb"></a>Create using .NET SDK
+
+```csharp
+var bson = new BsonDocument
+{
+ { "customAction", "CreateCollection" },
+ { "collection", "<CollectionName>" },//update CollectionName
+ { "shardKey", "<ShardKeyName>" }, //update ShardKey
+ { "offerThroughput", 400} //update Throughput
+};
+var shellCommand = new BsonDocumentCommand<BsonDocument>(bson);
+// Create a collection with a partition key by using Mongo Driver:
+db.RunCommand(shellCommand);
+```
+
+If you encounter a timeout exception when creating a collection, do a read operation to validate whether the collection was created successfully. The read operation throws an exception until the collection create operation succeeds. For the list of status codes supported by the create operation, see the [HTTP Status Codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) article.
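A minimal sketch of that validation read, assuming `db` is the same `IMongoDatabase` used above and `<CollectionName>` matches the name passed to the create command:

```csharp
using System;
using MongoDB.Bson;
using MongoDB.Driver;

// Sketch: after a timeout on the create command, check whether the
// collection exists by listing collection names with a filter.
var options = new ListCollectionNamesOptions
{
    Filter = new BsonDocument("name", "<CollectionName>") // same placeholder as above
};

bool exists = db.ListCollectionNames(options).Any();
Console.WriteLine($"Collection created: {exists}");
```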
+
+## <a id="cli-mongodb"></a>Create using Azure CLI
+
+[Create a collection for Azure Cosmos DB for MongoDB with Azure CLI](../scripts/cli/mongodb/create.md). For a listing of all Azure CLI samples across all Azure Cosmos DB APIs, see [Azure CLI samples for Azure Cosmos DB](cli-samples.md).
+
+## Create using PowerShell
+
+[Create a collection for Azure Cosmos DB for MongoDB with PowerShell](../scripts/powershell/mongodb/create.md). For a listing of all PowerShell samples across all Azure Cosmos DB APIs, see [PowerShell Samples](powershell-samples.md).
+
+## Create a collection using Azure Resource Manager templates
+
+[Create a collection for Azure Cosmos DB for MongoDB with an Azure Resource Manager template](../manage-with-templates.md#azure-cosmos-account-with-standard-provisioned-throughput).
+
+## Next steps
+
+* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
+* [Request Units in Azure Cosmos DB](../request-units.md)
+* [Provision throughput on containers and databases](../set-throughput.md)
+* [Work with an Azure Cosmos DB account](../account-databases-containers-items.md)
+* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-get-started.md
Title: Get started with Azure Cosmos DB MongoDB API and .NET
-description: Get started developing a .NET application that works with Azure Cosmos DB MongoDB API. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB MongoDB API database.
+ Title: Get started with Azure Cosmos DB for MongoDB and .NET
+description: Get started developing a .NET application that works with Azure Cosmos DB for MongoDB. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for MongoDB database.
-+ ms.devlang: dotnet Last updated 07/22/2022-+
-# Get started with Azure Cosmos DB MongoDB API and .NET Core
+# Get started with Azure Cosmos DB for MongoDB and .NET Core
-This article shows you how to connect to Azure Cosmos DB MongoDB API using .NET Core and the relevant NuGet packages. Once connected, you can perform operations on databases, collections, and documents.
+This article shows you how to connect to Azure Cosmos DB for MongoDB using .NET Core and the relevant NuGet packages. Once connected, you can perform operations on databases, collections, and documents.
> [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-dotnet-samples) are available on GitHub as a .NET Core project.
-[MongoDB API reference documentation](https://docs.mongodb.com/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
+[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
## Prerequisites
This article shows you how to connect to Azure Cosmos DB MongoDB API using .NET
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free). * [.NET 6.0](https://dotnet.microsoft.com/en-us/download) * [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
-* [Azure Cosmos DB MongoDB API resource](quickstart-dotnet.md#create-an-azure-cosmos-db-account)
+* [Azure Cosmos DB for MongoDB resource](quickstart-dotnet.md#create-an-azure-cosmos-db-account)
## Create a new .NET Core app
This article shows you how to connect to Azure Cosmos DB MongoDB API using .NET
dotnet run ```
-## Connect to Azure Cosmos DB MongoDB API with the MongoDB native driver
+## Connect to Azure Cosmos DB for MongoDB with the MongoDB native driver
To connect to Azure Cosmos DB with the MongoDB native driver, create an instance of the [``MongoClient``](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/T_MongoDB_Driver_MongoClient.htm) class. This class is the starting point to perform all operations against MongoDb databases. The most common constructor for **MongoClient** accepts a connection string, which you can retrieve using the following steps:
Define a new instance of the ``MongoClient`` class using the constructor and the
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/101-manage-connection/program.cs" id="client_credentials":::
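As a minimal sketch of that pattern (the environment variable name is an assumption, not something defined in this article):

```csharp
using System;
using MongoDB.Driver;

// Sketch: read the connection string from an environment variable
// (variable name assumed) and construct the client.
string connectionString = Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING")
    ?? throw new InvalidOperationException("Set the COSMOS_CONNECTION_STRING environment variable first.");

var client = new MongoClient(connectionString);
```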
-## Use the MongoDB client classes with Cosmos DB for MongoDB API
+## Use the MongoDB client classes with Azure Cosmos DB for MongoDB
[!INCLUDE [Conceptual object model](<./includes/conceptual-object-model.md>)]
Each type of resource is represented by one or more associated C# classes. Here'
| Class | Description | |||
-|[``MongoClient``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoClient.htm)|This class provides a client-side logical representation for the MongoDB API layer on Cosmos DB. The client object is used to configure and execute requests against the service.|
+|[``MongoClient``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoClient.htm)|This class provides a client-side logical representation for the API for MongoDB layer on Azure Cosmos DB. The client object is used to configure and execute requests against the service.|
|[``MongoDatabase``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoDatabase.htm)|This class is a reference to a database that may, or may not, exist in the service yet. The database is validated or created server-side when you attempt to perform an operation against it.| |[``Collection``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoCollection.htm)|This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.|
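Putting the three classes together, a short sketch might look like the following; the environment variable, database name, and collection name are assumptions for illustration only.

```csharp
using System;
using MongoDB.Bson;
using MongoDB.Driver;

// Sketch: client -> database -> collection, then a simple round trip.
var client = new MongoClient(Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
IMongoDatabase database = client.GetDatabase("adventureworks");                               // assumed name
IMongoCollection<BsonDocument> collection = database.GetCollection<BsonDocument>("products"); // assumed name

collection.InsertOne(new BsonDocument { { "name", "sample" }, { "category", "gear" } });
long count = collection.CountDocuments(FilterDefinition<BsonDocument>.Empty);
Console.WriteLine($"Documents in collection: {count}");
```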
The following guides show you how to use each of these classes to build your app
## Next steps
-Now that you've connected to a MongoDB API account, use the next guide to create and manage databases.
+Now that you've connected to an API for MongoDB account, use the next guide to create and manage databases.
> [!div class="nextstepaction"]
-> [Create a database in Azure Cosmos DB MongoDB API using .NET](how-to-dotnet-manage-databases.md)
+> [Create a database in Azure Cosmos DB for MongoDB using .NET](how-to-dotnet-manage-databases.md)
cosmos-db How To Dotnet Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-collections.md
Title: Create a collection in Azure Cosmos DB MongoDB API using .NET
-description: Learn how to work with a collection in your Azure Cosmos DB MongoDB API database using the .NET SDK.
+ Title: Create a collection in Azure Cosmos DB for MongoDB using .NET
+description: Learn how to work with a collection in your Azure Cosmos DB for MongoDB database using the .NET SDK.
-+ ms.devlang: dotnet Last updated 07/22/2022-+
-# Manage a collection in Azure Cosmos DB MongoDB API using .NET
+# Manage a collection in Azure Cosmos DB for MongoDB using .NET
-Manage your MongoDB collection stored in Cosmos DB with the native MongoDB client driver.
+Manage your MongoDB collection stored in Azure Cosmos DB with the native MongoDB client driver.
> [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-dotnet-samples) are available on GitHub as a .NET project.
-[MongoDB API reference documentation](https://docs.mongodb.com/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
+[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
## Name a collection
An index is used by the MongoDB query engine to improve performance to database
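For illustration, creating a single-field index with the .NET driver can look like this sketch; `collection` is an existing `IMongoCollection<BsonDocument>`, and the field name is an assumption.

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

// Sketch: create an ascending single-field index on an assumed field.
var keys = Builders<BsonDocument>.IndexKeys.Ascending("name");
var model = new CreateIndexModel<BsonDocument>(keys);

string indexName = collection.Indexes.CreateOne(model);
```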
## See also -- [Get started with Azure Cosmos DB MongoDB API and .NET](how-to-dotnet-get-started.md)
+- [Get started with Azure Cosmos DB for MongoDB and .NET](how-to-dotnet-get-started.md)
- [Create a database](how-to-dotnet-manage-databases.md)
cosmos-db How To Dotnet Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-databases.md
Title: Manage a MongoDB database using .NET
-description: Learn how to manage your Cosmos DB resource when it provides the MongoDB API with a .NET SDK.
+description: Learn how to manage your Azure Cosmos DB resource when it provides the API for MongoDB with a .NET SDK.
-+ ms.devlang: dotnet Last updated 07/22/2022-+ # Manage a MongoDB database using .NET Your MongoDB server in Azure Cosmos DB is available from the [MongoDB](https://www.nuget.org/packages/MongoDB.Driver) NuGet package. > [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-dotnet-samples) are available on GitHub as a .NET project.
-[MongoDB API reference documentation](https://docs.mongodb.com/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
+[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
## Name a database
A database is removed from the server using the `DropDatabase` method on the DB
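A minimal sketch of that removal with the .NET driver; the environment variable and database name are assumptions for illustration.

```csharp
using System;
using MongoDB.Driver;

// Sketch: remove a database by name from the client.
var client = new MongoClient(Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
client.DropDatabase("adventureworks"); // "adventureworks" is a placeholder name
```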
## See also -- [Get started with Azure Cosmos DB MongoDB API and .NET](how-to-dotnet-get-started.md)
+- [Get started with Azure Cosmos DB for MongoDB and .NET](how-to-dotnet-get-started.md)
- [Work with a collection](how-to-dotnet-manage-collections.md)
cosmos-db How To Dotnet Manage Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-documents.md
Title: Create a document in Azure Cosmos DB MongoDB API using .NET
-description: Learn how to work with a document in your Azure Cosmos DB MongoDB API database using the .NET SDK.
+ Title: Create a document in Azure Cosmos DB for MongoDB using .NET
+description: Learn how to work with a document in your Azure Cosmos DB for MongoDB database using the .NET SDK.
-+ ms.devlang: dotnet Last updated 07/22/2022-+
-# Manage a document in Azure Cosmos DB MongoDB API using .NET
+# Manage a document in Azure Cosmos DB for MongoDB using .NET
Manage your MongoDB documents with the ability to insert, update, and delete documents. > [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-dotnet-samples) are available on GitHub as a .NET project.
-[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
+[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
## Insert a document
To update a document, specify the query filter used to find the document along w
## Bulk updates to a collection
-You can perform several different types of operations at once with the **bulkWrite** operation. Learn more about how to [optimize bulk writes for Cosmos DB](optimize-write-performance.md#tune-for-the-optimal-batch-size-and-thread-count).
+You can perform several different types of operations at once with the **bulkWrite** operation. Learn more about how to [optimize bulk writes for Azure Cosmos DB](optimize-write-performance.md#tune-for-the-optimal-batch-size-and-thread-count).
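As a sketch of what such a call can look like with the .NET driver (the collection and field names are assumptions, and `collection` is an existing `IMongoCollection<BsonDocument>`):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

// Sketch: mix insert, update, and delete models in one bulkWrite call.
var models = new WriteModel<BsonDocument>[]
{
    new InsertOneModel<BsonDocument>(new BsonDocument("name", "widget")),
    new UpdateOneModel<BsonDocument>(
        Builders<BsonDocument>.Filter.Eq("name", "widget"),
        Builders<BsonDocument>.Update.Set("inStock", true)),
    new DeleteOneModel<BsonDocument>(
        Builders<BsonDocument>.Filter.Eq("discontinued", true))
};

BulkWriteResult<BsonDocument> result = collection.BulkWrite(models);
```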
The following bulk operations are available:
To delete documents, use a query to define how the documents are found.
## See also -- [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)
+- [Get started with Azure Cosmos DB for MongoDB and JavaScript](how-to-javascript-get-started.md)
- [Create a database](how-to-javascript-manage-databases.md)
cosmos-db How To Dotnet Manage Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-queries.md
Title: Query documents in Azure Cosmos DB MongoDB API using .NET
-description: Learn how to query documents in your Azure Cosmos DB MongoDB API database using the .NET SDK.
+ Title: Query documents in Azure Cosmos DB for MongoDB using .NET
+description: Learn how to query documents in your Azure Cosmos DB for MongoDB database using the .NET SDK.
-+ ms.devlang: dotnet Last updated 07/22/2022-+
-# Query documents in Azure Cosmos DB MongoDB API using .NET
+# Query documents in Azure Cosmos DB for MongoDB using .NET
Use queries to find documents in a collection. > [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-dotnet-samples) are available on GitHub as a .NET project.
-[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
+[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
## Query for documents
To find documents, use a query filter on the collection to define how the docume
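As a minimal sketch of such a filter with the .NET driver (the field name and value are assumptions, and `collection` is an existing `IMongoCollection<BsonDocument>`):

```csharp
using System;
using System.Collections.Generic;
using MongoDB.Bson;
using MongoDB.Driver;

// Sketch: find documents whose assumed "category" field equals "gear".
var filter = Builders<BsonDocument>.Filter.Eq("category", "gear");
List<BsonDocument> results = collection.Find(filter).ToList();

Console.WriteLine($"Matched {results.Count} documents.");
```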
## See also -- [Get started with Azure Cosmos DB MongoDB API and .NET](how-to-dotnet-get-started.md)
+- [Get started with Azure Cosmos DB for MongoDB and .NET](how-to-dotnet-get-started.md)
- [Create a database](how-to-dotnet-manage-databases.md)
cosmos-db How To Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-get-started.md
Title: Get started with Azure Cosmos DB MongoDB API and JavaScript
-description: Get started developing a JavaScript application that works with Azure Cosmos DB MongoDB API. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB MongoDB API database.
+ Title: Get started with Azure Cosmos DB for MongoDB and JavaScript
+description: Get started developing a JavaScript application that works with Azure Cosmos DB for MongoDB. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for MongoDB database.
-+ ms.devlang: javascript Last updated 06/23/2022--+
-# Get started with Azure Cosmos DB MongoDB API and JavaScript
+# Get started with Azure Cosmos DB for MongoDB and JavaScript
-This article shows you how to connect to Azure Cosmos DB MongoDB API using the native MongoDB npm package. Once connected, you can perform operations on databases, collections, and docs.
+This article shows you how to connect to Azure Cosmos DB for MongoDB using the native MongoDB npm package. Once connected, you can perform operations on databases, collections, and documents.
> [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
-[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
+[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
## Prerequisites
This article shows you how to connect to Azure Cosmos DB MongoDB API using the n
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free). * [Node.js LTS](https://nodejs.org/en/download/) * [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
-* [Azure Cosmos DB MongoDB API resource](quickstart-javascript.md#create-an-azure-cosmos-db-account)
+* [Azure Cosmos DB for MongoDB resource](quickstart-nodejs.md#create-an-azure-cosmos-db-account)
## Create a new JavaScript app
This article shows you how to connect to Azure Cosmos DB MongoDB API using the n
node index.js ```
-## Connect with MongoDB native driver to Azure Cosmos DB MongoDB API
+## Connect with MongoDB native driver to Azure Cosmos DB for MongoDB
To connect with the MongoDB native driver to Azure Cosmos DB, create an instance of the [``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#connect) class. This class is the starting point to perform all operations against databases.
The most common constructor for **MongoClient** has two parameters:
| Parameter | Example value | Description | | | | |
-| ``url`` | ``COSMOS_CONNECTION_STRIN`` environment variable | MongoDB API connection string to use for all requests |
+| ``url`` | ``COSMOS_CONNECTION_STRING`` environment variable | API for MongoDB connection string to use for all requests |
| ``options`` | `{ssl: true, tls: true, }` | [MongoDB Options](https://mongodb.github.io/node-mongodb-native/4.5/interfaces/MongoClientOptions.html) for the connection. | Refer to the [Troubleshooting guide](error-codes-solutions.md) for connection issues.
When your application is finished with the connection remember to close it. That
client.close() ```
-## Use MongoDB client classes with Cosmos DB for MongoDB API
+## Use MongoDB client classes with Azure Cosmos DB for MongoDB
[!INCLUDE [Conceptual object model](<./includes/conceptual-object-model.md>)]
Each type of resource is represented by one or more associated JavaScript classe
| Class | Description | |||
-|[``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html)|This class provides a client-side logical representation for the MongoDB API layer on Cosmos DB. The client object is used to configure and execute requests against the service.|
+|[``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html)|This class provides a client-side logical representation for the API for MongoDB layer on Azure Cosmos DB. The client object is used to configure and execute requests against the service.|
|[``Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html)|This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.| |[``Collection``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html)|This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.|
The following guides show you how to use each of these classes to build your app
## Next steps
-Now that you've connected to a MongoDB API account, use the next guide to create and manage databases.
+Now that you've connected to an API for MongoDB account, use the next guide to create and manage databases.
> [!div class="nextstepaction"]
-> [Create a database in Azure Cosmos DB MongoDB API using JavaScript](how-to-javascript-manage-databases.md)
+> [Create a database in Azure Cosmos DB for MongoDB using JavaScript](how-to-javascript-manage-databases.md)
cosmos-db How To Javascript Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-collections.md
Title: Create a collection in Azure Cosmos DB MongoDB API using JavaScript
-description: Learn how to work with a collection in your Azure Cosmos DB MongoDB API database using the JavaScript SDK.
+ Title: Create a collection in Azure Cosmos DB for MongoDB using JavaScript
+description: Learn how to work with a collection in your Azure Cosmos DB for MongoDB database using the JavaScript SDK.
-+ ms.devlang: javascript Last updated 06/23/2022-+
-# Manage a collection in Azure Cosmos DB MongoDB API using JavaScript
+# Manage a collection in Azure Cosmos DB for MongoDB using JavaScript
-Manage your MongoDB collection stored in Cosmos DB with the native MongoDB client driver.
+Manage your MongoDB collection stored in Azure Cosmos DB with the native MongoDB client driver.
> [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
-[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
+[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
## Name a collection
The preceding code snippet displays the following example console output:
## See also -- [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)-- [Create a database](how-to-javascript-manage-databases.md)
+- [Get started with Azure Cosmos DB for MongoDB and JavaScript](how-to-javascript-get-started.md)
+- [Create a database](how-to-javascript-manage-databases.md)
cosmos-db How To Javascript Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-databases.md
Title: Manage a MongoDB database using JavaScript
-description: Learn how to manage your Cosmos DB resource when it provides the MongoDB API with a JavaScript SDK.
+description: Learn how to manage your Azure Cosmos DB resource when it provides the API for MongoDB with a JavaScript SDK.
-+ ms.devlang: javascript Last updated 06/23/2022-+ # Manage a MongoDB database using JavaScript Your MongoDB server in Azure Cosmos DB is available from the common npm packages for MongoDB such as:
Your MongoDB server in Azure Cosmos DB is available from the common npm packages
> [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
-[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
+[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
## Name a database
The preceding code snippet displays the following example console output:
## See also -- [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)
+- [Get started with Azure Cosmos DB for MongoDB and JavaScript](how-to-javascript-get-started.md)
- [Work with a collection](how-to-javascript-manage-collections.md)
cosmos-db How To Javascript Manage Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-documents.md
Title: Create a document in Azure Cosmos DB MongoDB API using JavaScript
-description: Learn how to work with a document in your Azure Cosmos DB MongoDB API database using the JavaScript SDK.
+ Title: Create a document in Azure Cosmos DB for MongoDB using JavaScript
+description: Learn how to work with a document in your Azure Cosmos DB for MongoDB database using the JavaScript SDK.
-+ ms.devlang: javascript Last updated 06/23/2022-+
-# Manage a document in Azure Cosmos DB MongoDB API using JavaScript
+# Manage a document in Azure Cosmos DB for MongoDB using JavaScript
Manage your MongoDB documents with the ability to insert, update, and delete documents. > [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
-[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
+[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
## Insert a document
The preceding code snippet displays the following example console output for an
## Bulk updates to a collection
-You can perform several operations at once with the **bulkWrite** operation. Learn more about how to [optimize bulk writes for Cosmos DB](optimize-write-performance.md#tune-for-the-optimal-batch-size-and-thread-count).
+You can perform several operations at once with the **bulkWrite** operation. Learn more about how to [optimize bulk writes for Azure Cosmos DB](optimize-write-performance.md#tune-for-the-optimal-batch-size-and-thread-count).
The following bulk operations are available:
The preceding code snippet displays the following example console output:
## See also -- [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)
+- [Get started with Azure Cosmos DB for MongoDB and JavaScript](how-to-javascript-get-started.md)
- [Create a database](how-to-javascript-manage-databases.md)
cosmos-db How To Javascript Manage Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-queries.md
Title: Use a query in Azure Cosmos DB MongoDB API using JavaScript
-description: Learn how to use a query in your Azure Cosmos DB MongoDB API database using the JavaScript SDK.
+ Title: Use a query in Azure Cosmos DB for MongoDB using JavaScript
+description: Learn how to use a query in your Azure Cosmos DB for MongoDB database using the JavaScript SDK.
-+ ms.devlang: javascript Last updated 07/29/2022-+
-# Query data in Azure Cosmos DB MongoDB API using JavaScript
+# Query data in Azure Cosmos DB for MongoDB using JavaScript
Use [queries](#query-for-documents) and [aggregation pipelines](#aggregation-pipelines) to find and manipulate documents in a collection. > [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
-[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
+[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
## Query for documents
The preceding code snippet displays the following example console output:
## Aggregation pipelines
-Aggregation pipelines are useful to isolate expensive query computation, transformations, and other processing on your Cosmos DB server, instead of performing these operations on the client.
+Aggregation pipelines are useful for isolating expensive query computation, transformations, and other processing on your Azure Cosmos DB server, instead of performing these operations on the client.
For specific **aggregation pipeline support**, refer to the following:
Use the following [sample code](https://github.com/Azure-Samples/cosmos-db-mongo
## See also -- [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)
+- [Get started with Azure Cosmos DB for MongoDB and JavaScript](how-to-javascript-get-started.md)
- [Create a database](how-to-javascript-manage-databases.md)
cosmos-db How To Provision Throughput Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-provision-throughput-mongodb.md
- Title: Provision throughput on Azure Cosmos DB API for MongoDB resources
-description: Learn how to provision container, database, and autoscale throughput in Azure Cosmos DB API for MongoDB resources. You will use Azure portal, CLI, PowerShell and various other SDKs.
--- Previously updated : 11/17/2021-----
-# Provision database, container or autoscale throughput on Azure Cosmos DB API for MongoDB resources
-
-This article explains how to provision throughput in Azure Cosmos DB API for MongoDB. You can provision standard(manual) or autoscale throughput on a container, or a database and share it among the containers within the database. You can provision throughput using Azure portal, Azure CLI, or Azure Cosmos DB SDKs.
-
-If you are using a different API, see [SQL API](../how-to-provision-container-throughput.md), [Cassandra API](../cassandr) articles to provision the throughput.
-
-## <a id="portal-mongodb"></a> Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. [Create a new Azure Cosmos account](create-mongodb-dotnet.md#create-an-azure-cosmos-db-account), or select an existing Azure Cosmos account.
-
-1. Open the **Data Explorer** pane, and select **New Collection**. Next, provide the following details:
-
- * Indicate whether you are creating a new database or using an existing one. Select the **Provision database throughput** option if you want to provision throughput at the database level.
- * Enter a collection ID.
- * Enter a partition key value (for example, `ItemID`).
- * Enter a throughput that you want to provision (for example, 1000 RUs).
- * Select **OK**.
-
- :::image type="content" source="./media/how-to-provision-throughput-mongodb/provision-database-throughput-portal-mongodb-api.png" alt-text="Screenshot of Data Explorer, when creating a new collection with database level throughput":::
-
-> [!Note]
-> If you are provisioning throughput on a container in an Azure Cosmos account configured with the Azure Cosmos DB API for MongoDB, use `myShardKey` for the partition key path.
-
-## <a id="dotnet-mongodb"></a> .NET SDK
-
-```csharp
-// refer to MongoDB .NET Driver
-// https://docs.mongodb.com/drivers/csharp
-
-// Create a new Client
-String mongoConnectionString = "mongodb://DBAccountName:Password@DBAccountName.documents.azure.com:10255/?ssl=true&replicaSet=globaldb";
-mongoUrl = new MongoUrl(mongoConnectionString);
-mongoClientSettings = MongoClientSettings.FromUrl(mongoUrl);
-mongoClient = new MongoClient(mongoClientSettings);
-
-// Change the database name
-mongoDatabase = mongoClient.GetDatabase("testdb");
-
-// Change the collection name, throughput value then update via MongoDB extension commands
-// https://learn.microsoft.com/azure/cosmos-db/mongodb-custom-commands#update-collection
-
-var result = mongoDatabase.RunCommand<BsonDocument>(@"{customAction: ""UpdateCollection"", collection: ""testcollection"", offerThroughput: 400}");
-```
-
-## Azure Resource Manager
-
-Azure Resource Manager templates can be used to provision autoscale throughput on database or container-level resources for all Azure Cosmos DB APIs. See [Azure Resource Manager templates for Azure Cosmos DB](resource-manager-template-samples.md) for samples.
-
-## Azure CLI
-
-Azure CLI can be used to provision autoscale throughput on a database or container-level resources for all Azure Cosmos DB APIs. For samples see [Azure CLI Samples for Azure Cosmos DB](cli-samples.md).
-
-## Azure PowerShell
-
-Azure PowerShell can be used to provision autoscale throughput on a database or container-level resources for all Azure Cosmos DB APIs. For samples see [Azure PowerShell samples for Azure Cosmos DB](powershell-samples.md).
-
-## Next steps
-
-See the following articles to learn about throughput provisioning in Azure Cosmos DB:
-
-* [Request units and throughput in Azure Cosmos DB](../request-units.md)
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db How To Provision Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-provision-throughput.md
+
+ Title: Provision throughput on Azure Cosmos DB for MongoDB resources
+description: Learn how to provision container, database, and autoscale throughput in Azure Cosmos DB for MongoDB resources. You will use Azure portal, CLI, PowerShell and various other SDKs.
+++ Last updated : 11/17/2021++
+ms.devlang: csharp
+++
+# Provision database, container or autoscale throughput on Azure Cosmos DB for MongoDB resources
+
+This article explains how to provision throughput in Azure Cosmos DB for MongoDB. You can provision standard (manual) or autoscale throughput on a container, or on a database and share it among the containers within the database. You can provision throughput by using the Azure portal, Azure CLI, or Azure Cosmos DB SDKs.
+
+If you are using a different API, see the [API for NoSQL](../how-to-provision-container-throughput.md) or [API for Cassandra](../cassandr) articles to provision the throughput.
+
+## <a id="portal-mongodb"></a> Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. [Create a new Azure Cosmos DB account](create-mongodb-dotnet.md#create-an-azure-cosmos-db-account), or select an existing Azure Cosmos DB account.
+
+1. Open the **Data Explorer** pane, and select **New Collection**. Next, provide the following details:
+
+ * Indicate whether you are creating a new database or using an existing one. Select the **Provision database throughput** option if you want to provision throughput at the database level.
+ * Enter a collection ID.
+ * Enter a partition key value (for example, `ItemID`).
+ * Enter a throughput that you want to provision (for example, 1000 RUs).
+ * Select **OK**.
+
+ :::image type="content" source="media/how-to-provision-throughput/provision-database-throughput-portal-mongodb-api.png" alt-text="Screenshot of Data Explorer, when creating a new collection with database level throughput":::
+
+> [!Note]
+> If you're provisioning throughput on a container in an Azure Cosmos DB account configured with Azure Cosmos DB for MongoDB, use `myShardKey` for the partition key path.
+
+## <a id="dotnet-mongodb"></a> .NET SDK
+
+```csharp
+// Uses the MongoDB .NET driver (MongoDB.Driver NuGet package)
+// https://docs.mongodb.com/drivers/csharp
+using MongoDB.Bson;
+using MongoDB.Driver;
+
+// Create a new client. Replace the account name and password placeholders with your own values.
+string mongoConnectionString = "mongodb://DB AccountName:Password@DB AccountName.documents.azure.com:10255/?ssl=true&replicaSet=globaldb";
+var mongoUrl = new MongoUrl(mongoConnectionString);
+var mongoClientSettings = MongoClientSettings.FromUrl(mongoUrl);
+var mongoClient = new MongoClient(mongoClientSettings);
+
+// Change the database name as needed
+var mongoDatabase = mongoClient.GetDatabase("testdb");
+
+// Change the collection name and throughput value, then update via the MongoDB extension commands
+// https://learn.microsoft.com/azure/cosmos-db/mongodb-custom-commands#update-collection
+
+var result = mongoDatabase.RunCommand<BsonDocument>(@"{customAction: ""UpdateCollection"", collection: ""testcollection"", offerThroughput: 400}");
+```
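+
+If you prefer the mongo shell over the .NET SDK, the same `UpdateCollection` extension command can be run directly. This is a minimal sketch that assumes you're connected to the account and have already switched to the `testdb` database (for example, with `use testdb`):
+
+```javascript
+// Sets the collection's provisioned throughput to 400 RU/s via the Azure Cosmos DB extension command
+db.runCommand({ customAction: "UpdateCollection", collection: "testcollection", offerThroughput: 400 })
+```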
+
+## Azure Resource Manager
+
+Azure Resource Manager templates can be used to provision autoscale throughput on database-level or container-level resources for all Azure Cosmos DB APIs. See [Azure Resource Manager templates for Azure Cosmos DB](resource-manager-template-samples.md) for samples.
+
+## Azure CLI
+
+Azure CLI can be used to provision autoscale throughput on database-level or container-level resources for all Azure Cosmos DB APIs. For samples, see [Azure CLI Samples for Azure Cosmos DB](cli-samples.md).
+
+## Azure PowerShell
+
+Azure PowerShell can be used to provision autoscale throughput on database-level or container-level resources for all Azure Cosmos DB APIs. For samples, see [Azure PowerShell samples for Azure Cosmos DB](powershell-samples.md).
+
+## Next steps
+
+See the following articles to learn about throughput provisioning in Azure Cosmos DB:
+
+* [Request units and throughput in Azure Cosmos DB](../request-units.md)
+* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-setup-rbac.md
Title: Configure role-based access control for your Azure Cosmos DB API for MongoDB database (preview)
-description: Learn how to configure native role-based access control in the API for MongoDB
+ Title: Configure role-based access control in Azure Cosmos DB for MongoDB database
+description: Learn how to configure native role-based access control in Azure Cosmos DB for MongoDB
+ Previously updated : 04/07/2022 Last updated : 09/26/2022
-# Configure role-based access control for your Azure Cosmos DB API for MongoDB (preview)
+# Configure role-based access control in Azure Cosmos DB for MongoDB
-This article is about role-based access control for data plane operations in Azure Cosmos DB API for MongoDB, currently in public preview.
+This article is about role-based access control for data plane operations in Azure Cosmos DB for MongoDB.
If you're using management plane operations, see the [role-based access control](../role-based-access-control.md) article, which applies to management plane operations.
-The API for MongoDB exposes a built-in role-based access control (RBAC) system that lets you authorize your data requests with a fine-grained, role-based permission model. Users and roles reside within a database and are managed using the Azure CLI, Azure PowerShell, or ARM for this preview feature.
+Azure Cosmos DB for MongoDB exposes a built-in role-based access control (RBAC) system that lets you authorize your data requests with a fine-grained, role-based permission model. Users and roles reside within a database and are managed using the Azure CLI, Azure PowerShell, or Azure Resource Manager (ARM).
## Concepts
Privileges are actions that can be performed on a specific resource. For example
A role has one or more privileges. Roles are assigned to users (zero or more) to enable them to perform the actions defined in those privileges. Roles are stored within a single database. ### Diagnostic log auditing
-An additional column called `userId` has been added to the `MongoRequests` table in the Azure portal Diagnostics feature. This column will identify which user performed which data plan operation. The value in this column is empty when RBAC is not enabled.
+An additional column called `userId` has been added to the `MongoRequests` table in the Azure portal Diagnostics feature. This column identifies which user performed which data plane operation. The value in this column is empty when RBAC is not enabled.
## Available Privileges #### Query and Write
Has the following privileges: collStats, createCollection, createIndex, dbStats,
### dbOwner Has the following privileges: collStats, createCollection, createIndex, dbStats, dropCollection, dropDatabase, dropIndex, listCollections, listIndexes, reIndex, find, insert, killCursors, listIndexes, listCollections, remove, update
-## Azure CLI Setup
+## Azure CLI Setup (Quickstart)
We recommend using cmd when using Windows. 1. Make sure you have the latest CLI version (not the extension) installed locally. Try the `az upgrade` command.
-2. Check if you have dev extension version already installed: `az extension show -n cosmosdb-preview`. If it shows your local version, remove it using the following command: `az extension remove -n cosmosdb-preview`. It may ask you to remove it from python virtual env. If that's the case, launch your local CLI extension python env and run `azdev extension remove cosmosdb-preview` (no -n here).
-3. List the available extensions and make sure the list shows the preview version and corresponding "Compatible" flag is true.
-4. Install the latest preview version: `az extension add -n cosmosdb-preview`.
-5. Check if the preview version is installed using this: `az extension list`.
-6. Connect to your subscription.
+2. Connect to your subscription.
```powershell az cloud set -n AzureCloud az login az account set --subscription <your subscription ID> ```
-7. Enable the RBAC capability on your existing API for MongoDB database account. You'll need to [add the capability](how-to-configure-capabilities.md) "EnableMongoRoleBasedAccessControl" to your database account.
+3. Enable the RBAC capability on your existing API for MongoDB database account. You'll need to [add the capability](how-to-configure-capabilities.md) "EnableMongoRoleBasedAccessControl" to your database account. You can also enable RBAC from the features tab in the Azure portal.
If you prefer a new database account instead, create a new database account with the RBAC capability set to true. ```powershell az cosmosdb create -n <account_name> -g <azure_resource_group> --kind MongoDB --capabilities EnableMongoRoleBasedAccessControl ```
-8. Create a database for users to connect to in the Azure portal.
-9. Create an RBAC user with built-in read role.
+4. Create a database for users to connect to in the Azure portal.
+5. Create an RBAC user with built-in read role.
```powershell az cosmosdb mongodb user definition create --account-name <YOUR_DB_ACCOUNT> --resource-group <YOUR_RG> --body {\"Id\":\"<YOUR_DB_NAME>.<YOUR_USERNAME>\",\"UserName\":\"<YOUR_USERNAME>\",\"Password\":\"<YOUR_PASSWORD>\",\"DatabaseName\":\"<YOUR_DB_NAME>\",\"CustomData\":\"Some_Random_Info\",\"Mechanisms\":\"SCRAM-SHA-256\",\"Roles\":[{\"Role\":\"read\",\"Db\":\"<YOUR_DB_NAME>\"}]} ``` - ## Authenticate using pymongo
-Sending the appName parameter is required to authenticate as a user in the preview. Here is an example of how to do so:
```python from pymongo import MongoClient client = MongoClient("mongodb://<YOUR_HOSTNAME>:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000", username="<YOUR_USER>", password="<YOUR_PASSWORD>", authSource='<YOUR_DATABASE>', authMechanism='SCRAM-SHA-256', appName="<YOUR appName FROM CONNECTION STRING IN AZURE PORTAL>") ```
+## Authenticate using Node.js driver
+```javascript
+// Requires the MongoDB Node.js driver (npm install mongodb); run this inside an async function
+const mongodb = require("mongodb");
+
+const connectionString = "mongodb://" + "<YOUR_USER>" + ":" + "<YOUR_PASSWORD>" + "@" + "<YOUR_HOSTNAME>" + ":10255/" + "<YOUR_DATABASE>" + "?ssl=true&retrywrites=false&replicaSet=globaldb&authmechanism=SCRAM-SHA-256&appname=@" + "<YOUR appName FROM CONNECTION STRING IN AZURE PORTAL>" + "@";
+const client = await mongodb.MongoClient.connect(connectionString, { useNewUrlParser: true, useUnifiedTopology: true });
+```
+
+## Authenticate using Java driver
+```java
+// Uses MongoClient and MongoClientURI from the legacy MongoDB Java driver (org.mongodb:mongodb-driver)
+String connectionString = "mongodb://" + "<YOUR_USER>" + ":" + "<YOUR_PASSWORD>" + "@" + "<YOUR_HOSTNAME>" + ":10255/" + "<YOUR_DATABASE>" + "?ssl=true&retrywrites=false&replicaSet=globaldb&authmechanism=SCRAM-SHA-256&appname=@" + "<YOUR appName FROM CONNECTION STRING IN AZURE PORTAL>" + "@";
+MongoClientURI uri = new MongoClientURI(connectionString);
+MongoClient client = new MongoClient(uri);
+```
+ ## Azure CLI RBAC Commands
-The RBAC management commands will only work with a preview version of the Azure CLI installed. See the Quickstart above on how to get started.
+The RBAC management commands only work with newer versions of the Azure CLI. See the Quickstart above to get started.
#### Create Role Definition ```powershell
az cosmosdb mongodb role definition create --account-name <account-name> --resou
```powershell az cosmosdb mongodb role definition create --account-name <account-name> --resource-group <resource-group-name> --body role.json ```
+##### JSON file
+```json
+{
+ "Id": "test.My_Read_Only_Role101",
+ "RoleName": "My_Read_Only_Role101",
+ "Type": "CustomRole",
+ "DatabaseName": "test",
+ "Privileges": [{
+ "Resource": {
+ "Db": "test",
+ "Collection": "test"
+ },
+ "Actions": ["insert", "find"]
+ }],
+ "Roles": []
+}
+```
#### Update Role Definition ```powershell
az cosmosdb mongodb role definition update --account-name <account-name> --resou
```powershell az cosmosdb mongodb role definition update --account-name <account-name> --resource-group <resource-group-name> --body role.json ```
+##### JSON file
+```json
+{
+ "Id": "test.My_Read_Only_Role101",
+ "RoleName": "My_Read_Only_Role101",
+ "Type": "CustomRole",
+ "DatabaseName": "test",
+ "Privileges": [{
+ "Resource": {
+ "Db": "test",
+ "Collection": "test"
+ },
+ "Actions": ["insert", "find"]
+ }],
+ "Roles": []
+}
+```
#### List roles ```powershell
az cosmosdb mongodb user definition create --account-name <account-name> --resou
```powershell az cosmosdb mongodb user definition create --account-name <account-name> --resource-group <resource-group-name> --body user.json ```
+##### JSON file
+```json
+{
+ "Id": "test.myName",
+ "UserName": "myName",
+ "Password": "pass",
+ "DatabaseName": "test",
+ "CustomData": "Some_Random_Info",
+ "Mechanisms": "SCRAM-SHA-256",
+ "Roles": [{
+ "Role": "My_Read_Only_Role101",
+ "Db": "test"
+ }]
+}
+```
#### Update user definition To update the user's password, send the new password in the password field.
az cosmosdb mongodb user definition update --account-name <account-name> --resou
```powershell az cosmosdb mongodb user definition update --account-name <account-name> --resource-group <resource-group-name> --body user.json ```
+##### JSON file
+```json
+{
+ "Id": "test.myName",
+ "UserName": "myName",
+ "Password": "pass",
+ "DatabaseName": "test",
+ "CustomData": "Some_Random_Info",
+ "Mechanisms": "SCRAM-SHA-256",
+ "Roles": [{
+ "Role": "My_Read_Only_Role101",
+ "Db": "test"
+ }]
+}
+```
#### List users ```powershell
az cosmosdb mongodb user definition exists --account-name <account-name> --resou
az cosmosdb mongodb user definition delete --account-name <account-name> --resource-group <resource-group-name> --id test.myName ```
-## <a id="disable-local-auth"></a> Enforcing RBAC as the only authentication method
-
-In situations where you want to force clients to connect to Azure Cosmos DB through RBAC exclusively, you have the option to disable the account's primary/secondary keys. When doing so, any incoming request using either a primary/secondary key or a resource token will be actively rejected.
-
-### Using Azure Resource Manager templates
-
-When creating or updating your Azure Cosmos DB account using Azure Resource Manager templates, set the `disableLocalAuth` property to `true`:
-
-```json
-"resources": [
- {
- "type": " Microsoft.DocumentDB/databaseAccounts",
- "properties": {
- "disableLocalAuth": true,
- },
- },
- ]
-```
- ## Limitations - The number of users and roles you can create must equal less than 10,000. -- The commands listCollections, listDatabases, killCursors, and currentOp are excluded from RBAC in the preview.-- Backup/Restore is not supported in the preview.-- [Azure Synapse Link for Azure Cosmos DB](../synapse-link.md) is not supported in the preview.-- Users and Roles across databases are not supported in the preview.-- Users must connect with a tool that support the appName parameter in the preview. Mongo shell and many GUI tools are not supported in the preview. MongoDB drivers are supported.-- A user's password can only be set/reset by through the Azure CLI / PowerShell in the preview.
+- The commands listCollections, listDatabases, killCursors, and currentOp are excluded from RBAC.
+- Users and Roles across databases are not supported.
+- A user's password can only be set or reset through the Azure CLI or Azure PowerShell.
- Configuring Users and Roles is only supported through Azure CLI / PowerShell.
+- Disabling primary/secondary key authentication is not supported. We recommend rotating your keys to prevent access when enabling RBAC.
## Frequently asked questions (FAQs)
-### Which Azure Cosmos DB APIs are supported by RBAC?
-
-The API for MongoDB (preview) and the SQL API.
- ### Is it possible to manage role definitions and role assignments from the Azure portal?
-Azure portal support for role management is not available yet.
-
-### Is it possible to disable the usage of the account primary/secondary keys when using RBAC?
-
-Yes, see [Enforcing RBAC as the only authentication method](#disable-local-auth).
+Azure portal support for role management is not available. However, RBAC can be enabled via the features tab in the Azure portal.
### How do I change a user's password? Update the user definition with the new password.
+### Which Azure Cosmos DB for MongoDB versions support role-based access control (RBAC)?
+
+Versions 3.6 and higher support RBAC.
+ ## Next steps -- Get an overview of [secure access to data in Cosmos DB](../secure-access-to-data.md).
+- Get an overview of [secure access to data in Azure Cosmos DB](../secure-access-to-data.md).
- Learn more about [RBAC for Azure Cosmos DB management](../role-based-access-control.md).
cosmos-db Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/indexing.md
+
+ Title: Manage indexing in Azure Cosmos DB for MongoDB
+description: This article presents an overview of Azure Cosmos DB indexing capabilities using Azure Cosmos DB for MongoDB
++
+ms.devlang: javascript
+ Last updated : 4/5/2022++++
+# Manage indexing in Azure Cosmos DB for MongoDB
+
+Azure Cosmos DB for MongoDB takes advantage of the core index-management capabilities of Azure Cosmos DB. This article focuses on how to add indexes using Azure Cosmos DB for MongoDB. Indexes are specialized data structures that make querying your data roughly an order of magnitude faster.
+
+>
+> [!VIDEO https://aka.ms/docs.mongo-indexing]
+
+## Indexing for MongoDB server version 3.6 and higher
+
+Azure Cosmos DB for MongoDB server version 3.6+ automatically indexes the `_id` field and the shard key (only in sharded collections). The API automatically enforces the uniqueness of the `_id` field per shard key.
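+
+You can confirm this from the mongo shell; a newly created collection reports only the automatically created `_id` index. The output shown in the comment below is illustrative and can vary by server version:
+
+```javascript
+db.coll.getIndexes()
+// [ { "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "testdb.coll" } ]
+```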
+
+The API for MongoDB behaves differently from Azure Cosmos DB for NoSQL, which indexes all fields by default.
+
+### Editing indexing policy
+
+We recommend editing your indexing policy in the Data Explorer within the Azure portal. You can add single field and wildcard indexes from the indexing policy editor in the Data Explorer:
++
+> [!NOTE]
+> You can't create compound indexes using the indexing policy editor in the Data Explorer.
+
+## Index types
+
+### Single field
+
+You can create indexes on any single field. The sort order of the single field index does not matter. The following command creates an index on the field `name`:
+
+`db.coll.createIndex({name:1})`
+
+You could create the same single field index on `name` in the Azure portal:
++
+A single query can use multiple single field indexes where they're available. You can create up to 500 single field indexes per collection.
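+
+For example, here's a hedged sketch (reusing the `name` and `age` fields from this article; the values are hypothetical) in which two single field indexes serve a query that filters on both fields:
+
+```javascript
+// Two independent single field indexes
+db.coll.createIndex({ name: 1 })
+db.coll.createIndex({ age: 1 })
+
+// A query that filters on both fields can use both indexes
+db.coll.find({ name: "Luke", age: 38 })
+```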
+
+### Compound indexes (MongoDB server version 3.6+)
+In the API for MongoDB, compound indexes are **required** if your query needs the ability to sort on multiple fields at once. For queries with multiple filters that don't need to sort, create multiple single field indexes instead of a compound index to save on indexing costs.
+
+For filtering in queries, a compound index and separate single field indexes on each of the compound index's fields result in the same performance.
++
+> [!NOTE]
+> You can't create compound indexes on nested properties or arrays.
+
+The following command creates a compound index on the fields `name` and `age`:
+
+`db.coll.createIndex({name:1,age:1})`
+
+You can use compound indexes to sort efficiently on multiple fields at once, as shown in the following example:
+
+`db.coll.find().sort({name:1,age:1})`
+
+You can also use the preceding compound index to efficiently sort on a query with the opposite sort order on all fields. Here's an example:
+
+`db.coll.find().sort({name:-1,age:-1})`
+
+However, the sequence of the paths in the compound index must exactly match the query. Here's an example of a query that would require an additional compound index:
+
+`db.coll.find().sort({age:1,name:1})`
+
+> [!NOTE]
+> Compound indexes are only used in queries that sort results. For queries that have multiple filters that don't need to sort, create multiple single field indexes.
+
+### Multikey indexes
+
+Azure Cosmos DB creates multikey indexes to index content stored in arrays. If you index a field with an array value, Azure Cosmos DB automatically indexes every element in the array.
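+
+For example (a hedged sketch where `tags` is a hypothetical array field), a single field index on an array path automatically becomes a multikey index, and equality filters on array elements can use it:
+
+```javascript
+// Every element of the tags array is indexed
+db.coll.createIndex({ tags: 1 })
+
+// Matches documents where any element of tags equals "blue"
+db.coll.find({ tags: "blue" })
+```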
+
+### Geospatial indexes
+
+Many geospatial operators will benefit from geospatial indexes. Currently, Azure Cosmos DB for MongoDB supports `2dsphere` indexes. The API does not yet support `2d` indexes.
+
+Here's an example of creating a geospatial index on the `location` field:
+
+`db.coll.createIndex({ location : "2dsphere" })`
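+
+As a hedged follow-up sketch, a `$geoWithin` query over GeoJSON geometry can use that `2dsphere` index. The coordinates are illustrative; check the feature support article for the geospatial operators available in your account's server version:
+
+```javascript
+// Finds documents whose location falls within the given polygon
+db.coll.find({
+  location: {
+    $geoWithin: {
+      $geometry: {
+        type: "Polygon",
+        coordinates: [[[ -122.14, 47.63 ], [ -122.14, 47.72 ], [ -122.03, 47.72 ], [ -122.03, 47.63 ], [ -122.14, 47.63 ]]]
+      }
+    }
+  }
+})
+```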
+
+### Text indexes
+
+Azure Cosmos DB for MongoDB does not currently support text indexes. For text search queries on strings, you should use [Azure Cognitive Search](../../search/search-howto-index-cosmosdb.md) integration with Azure Cosmos DB.
+
+## Wildcard indexes
+
+You can use wildcard indexes to support queries against unknown fields. Let's imagine you have a collection that holds data about families.
+
+Here is part of an example document in that collection:
+
+```json
+"children": [
+ {
+ "firstName": "Henriette Thaulow",
+ "grade": "5"
+ }
+]
+```
+
+Here's another example, this time with a slightly different set of properties in `children`:
+
+```json
+"children": [
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "pets": [
+ { "givenName": "Goofy" },
+ { "givenName": "Shadow" }
+ ]
+ },
+ {
+ "familyName": "Merriam",
+ "givenName": "John",
+ }
+]
+```
+
+In this collection, documents can have many different possible properties. If you want to index all the data in the `children` array, you have two options: create separate indexes for each individual property or create one wildcard index for the entire `children` array.
+
+### Create a wildcard index
+
+The following command creates a wildcard index on any properties within `children`:
+
+`db.coll.createIndex({"children.$**" : 1})`
+
+**Unlike in MongoDB, wildcard indexes can support multiple fields in query predicates**. There will not be a difference in query performance if you use one single wildcard index instead of creating a separate index for each property.
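+
+As a hedged illustration that uses the example documents above, a single query filtering on two different properties under `children` can be served by that one wildcard index:
+
+```javascript
+// Both predicates are covered by the children.$** wildcard index
+db.coll.find({ "children.familyName": "Merriam", "children.givenName": "Jesse" })
+```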
+
+You can create the following index types using wildcard syntax:
+
+* Single field
+* Geospatial
+
+### Indexing all properties
+
+Here's how you can create a wildcard index on all fields:
+
+`db.coll.createIndex( { "$**" : 1 } )`
+
+You can also create wildcard indexes using the Data Explorer in the Azure portal:
++
+> [!NOTE]
+> If you are just starting development, we **strongly** recommend starting off with a wildcard index on all fields. This can simplify development and make it easier to optimize queries.
+
+Documents with many fields may have a high Request Unit (RU) charge for writes and updates. Therefore, if you have a write-heavy workload, you should opt to individually index paths as opposed to using wildcard indexes.
+
+### Limitations
+
+Wildcard indexes do not support any of the following index types or properties:
+
+* Compound
+* TTL
+* Unique
+
+**Unlike in MongoDB**, in Azure Cosmos DB for MongoDB you **can't** use wildcard indexes for:
+
+* Creating a wildcard index that includes multiple specific fields
+
+  ```javascript
+ db.coll.createIndex(
+ { "$**" : 1 },
+    { "wildcardProjection" :
+ {
+ "children.givenName" : 1,
+ "children.grade" : 1
+ }
+ }
+ )
+ ```
+
+* Creating a wildcard index that excludes multiple specific fields
+
+  ```javascript
+ db.coll.createIndex(
+ { "$**" : 1 },
+ { "wildcardProjection" :
+ {
+ "children.givenName" : 0,
+ "children.grade" : 0
+ }
+ }
+ )
+ ```
+
+As an alternative, you could create multiple wildcard indexes.
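+
+For example, here's a hypothetical sketch that creates one wildcard index per sub-tree instead of a single projected wildcard index (the `address` field is an assumed second top-level property, not part of the sample documents above):
+
+```javascript
+// One wildcard index per sub-tree
+db.coll.createIndex({ "children.$**": 1 })
+db.coll.createIndex({ "address.$**": 1 })
+```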
+
+## Index properties
+
+The following operations are common for accounts serving wire protocol version 4.0 and accounts serving earlier versions. You can learn more about [supported indexes and indexed properties](feature-support-40.md#indexes-and-index-properties).
+
+### Unique indexes
+
+[Unique indexes](../unique-keys.md) are useful for enforcing that two or more documents do not contain the same value for indexed fields.
+
+The following command creates a unique index on the field `student_id`:
+
+```shell
+globaldb:PRIMARY> db.coll.createIndex( { "student_id" : 1 }, {unique:true} )
+{
+ "_t" : "CreateIndexesResponse",
+ "ok" : 1,
+ "createdCollectionAutomatically" : false,
+ "numIndexesBefore" : 1,
+ "numIndexesAfter" : 4
+}
+```
+
+For sharded collections, you must provide the shard (partition) key to create a unique index. In other words, all unique indexes on a sharded collection are compound indexes where one of the fields is the partition key.
+
+The following commands create a sharded collection ```coll``` (the shard key is ```university```) with a unique index on the fields `student_id` and `university`:
+
+```shell
+globaldb:PRIMARY> db.runCommand({shardCollection: db.coll._fullName, key: { university: "hashed"}});
+{
+ "_t" : "ShardCollectionResponse",
+ "ok" : 1,
+ "collectionsharded" : "test.coll"
+}
+globaldb:PRIMARY> db.coll.createIndex( { "university" : 1, "student_id" : 1 }, {unique:true});
+{
+ "_t" : "CreateIndexesResponse",
+ "ok" : 1,
+ "createdCollectionAutomatically" : false,
+ "numIndexesBefore" : 3,
+ "numIndexesAfter" : 4
+}
+```
+
+In the preceding example, omitting the ```"university":1``` clause returns an error with the following message:
+
+`cannot create unique index over {student_id : 1.0} with shard key pattern { university : 1.0 }`
+
+#### Limitations
+
+Unique indexes need to be created while the collection is empty.
+
+### TTL indexes
+
+To enable document expiration in a particular collection, you need to create a [time-to-live (TTL) index](../time-to-live.md). A TTL index is an index on the `_ts` field with an `expireAfterSeconds` value.
+
+Example:
+
+```JavaScript
+globaldb:PRIMARY> db.coll.createIndex({"_ts":1}, {expireAfterSeconds: 10})
+```
+
+The preceding command deletes any documents in the ```db.coll``` collection that have not been modified in the last 10 seconds.
+
+> [!NOTE]
+> The **_ts** field is specific to Azure Cosmos DB and is not accessible from MongoDB clients. It is a reserved (system) property that contains the time stamp of the document's last modification.
+
+## Track index progress
+
+Version 3.6+ of Azure Cosmos DB for MongoDB supports the `currentOp()` command to track index progress on a database instance. This command returns a document that contains information about in-progress operations on a database instance. In native MongoDB, you use the `currentOp` command to track all in-progress operations; in Azure Cosmos DB for MongoDB, this command only supports tracking the index operation.
+
+Here are some examples that show how to use the `currentOp` command to track index progress:
+
+* Get the index progress for a collection:
+
+ ```shell
+ db.currentOp({"command.createIndexes": <collectionName>, "command.$db": <databaseName>})
+ ```
+
+* Get the index progress for all collections in a database:
+
+ ```shell
+ db.currentOp({"command.$db": <databaseName>})
+ ```
+
+* Get the index progress for all databases and collections in an Azure Cosmos DB account:
+
+ ```shell
+ db.currentOp({"command.createIndexes": { $exists : true } })
+ ```
+
+### Examples of index progress output
+
+The index progress details show the percentage of progress for the current index operation. Here's an example that shows the output document format for different stages of index progress:
+
+* An index operation on a "foo" collection and "bar" database that is 60 percent complete will have the following output document. The `inprog[0].progress.total` field shows 100 as the target completion percentage.
+
+ ```json
+ {
+ "inprog" : [
+ {
+ ………………...
+ "command" : {
+ "createIndexes" : foo
+ "indexes" :[ ],
+ "$db" : bar
+ },
+ "msg" : "Index Build (background) Index Build (background): 60 %",
+ "progress" : {
+ "done" : 60,
+ "total" : 100
+ },
+ …………..…..
+ }
+ ],
+ "ok" : 1
+ }
+ ```
+
+* If an index operation has just started on a "foo" collection and "bar" database, the output document might show 0 percent progress until it reaches a measurable level.
+
+ ```json
+ {
+ "inprog" : [
+ {
+ ………………...
+ "command" : {
+ "createIndexes" : foo
+ "indexes" :[ ],
+ "$db" : bar
+ },
+ "msg" : "Index Build (background) Index Build (background): 0 %",
+ "progress" : {
+ "done" : 0,
+ "total" : 100
+ },
+ …………..…..
+ }
+ ],
+ "ok" : 1
+ }
+ ```
+
+* When the in-progress index operation finishes, the output document shows empty `inprog` operations.
+
+ ```json
+ {
+ "inprog" : [],
+ "ok" : 1
+ }
+ ```
+
+## Background index updates
+
+Regardless of the value specified for the **Background** index property, index updates are always done in the background. Because index updates consume Request Units (RUs) at a lower priority than other database operations, index changes won't result in any downtime for writes, updates, or deletes.
+
+There is no impact to read availability when adding a new index. Queries will only utilize new indexes once the index transformation is complete. During the index transformation, the query engine will continue to use existing indexes, so you'll observe similar read performance during the indexing transformation to what you had observed before initiating the indexing change. When adding new indexes, there is also no risk of incomplete or inconsistent query results.
+
+When you remove indexes and immediately run queries that have filters on the dropped indexes, results might be inconsistent and incomplete until the index transformation finishes. The query engine doesn't provide consistent or complete results when queries filter on newly removed indexes. Most developers don't drop indexes and then immediately try to query them, so in practice this situation is unlikely.
+
+> [!NOTE]
+> You can [track index progress](#track-index-progress).
+
+## ReIndex command
+
+The `reIndex` command will recreate all indexes on a collection. In some rare cases, query performance or other index issues in your collection may be solved by running the `reIndex` command. If you're experiencing issues with indexing, recreating the indexes with the `reIndex` command is a recommended approach.
+
+You can run the `reIndex` command using the following syntax:
+
+`db.runCommand({ reIndex: <collection> })`
+
+You can use the following syntax to check whether running the `reIndex` command would improve query performance in your collection:
+
+`db.runCommand({"customAction":"GetCollection",collection:<collection>, showIndexes:true})`
+
+Sample output:
+
+```
+{
+ "database" : "myDB",
+ "collection" : "myCollection",
+ "provisionedThroughput" : 400,
+ "indexes" : [
+ {
+ "v" : 1,
+ "key" : {
+ "_id" : 1
+ },
+ "name" : "_id_",
+ "ns" : "myDB.myCollection",
+ "requiresReIndex" : true
+ },
+ {
+ "v" : 1,
+ "key" : {
+ "b.$**" : 1
+ },
+ "name" : "b.$**_1",
+ "ns" : "myDB.myCollection",
+ "requiresReIndex" : true
+ }
+ ],
+ "ok" : 1
+}
+```
+
+If `reIndex` will improve query performance, **requiresReIndex** will be true. If `reIndex` won't improve query performance, this property will be omitted.
+
+## Migrate collections with indexes
+
+Currently, you can only create unique indexes when the collection contains no documents. Popular MongoDB migration tools try to create the unique indexes after importing the data. To circumvent this issue, you can manually create the corresponding collections and unique indexes before running the migration tool, as sketched below. (You can achieve this behavior for ```mongorestore``` by using the `--noIndexRestore` flag in the command line.)
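+
+A minimal sketch of pre-creating a collection and its unique index on the target account before importing data (the collection and field names are hypothetical):
+
+```javascript
+// Run against the target Azure Cosmos DB for MongoDB account before importing
+db.createCollection("orders")
+db.orders.createIndex({ orderNumber: 1 }, { unique: true })
+```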
+
+## Indexing for MongoDB version 3.2
+
+Available indexing features and defaults are different for Azure Cosmos DB accounts that are compatible with version 3.2 of the MongoDB wire protocol. You can [check your account's version](feature-support-36.md#protocol-support) and [upgrade to version 3.6](upgrade-version.md).
+
+If you're using version 3.2, this section outlines key differences with versions 3.6+.
+
+### Dropping default indexes (version 3.2)
+
+Unlike the 3.6+ versions of Azure Cosmos DB for MongoDB, version 3.2 indexes every property by default. You can use the following command to drop these default indexes for a collection (```coll```):
+
+```JavaScript
+> db.coll.dropIndexes()
+{ "_t" : "DropIndexesResponse", "ok" : 1, "nIndexesWas" : 3 }
+```
+
+After dropping the default indexes, you can add more indexes as you would in version 3.6+.
+
+### Compound indexes (version 3.2)
+
+Compound indexes hold references to multiple fields of a document. If you want to create a compound index, [upgrade to version 3.6 or 4.0](upgrade-version.md).
+
+### Wildcard indexes (version 3.2)
+
+If you want to create a wildcard index, [upgrade to version 4.0 or 3.6](upgrade-version.md).
+
+## Next steps
+
+* [Indexing in Azure Cosmos DB](../index-policy.md)
+* [Expire data in Azure Cosmos DB automatically with time to live](../time-to-live.md)
+* To learn about the relationship between partitioning and indexing, see the [Query an Azure Cosmos DB container](../how-to-query-container.md) article.
+* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Integrations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/integrations-overview.md
Title: Integrations overview in Azure Cosmos DB API for MongoDB
-description: Learn how to integrate Azure Cosmos DB API for MongoDB account with other Azure services.
+ Title: Integrations overview in Azure Cosmos DB for MongoDB
+description: Learn how to integrate Azure Cosmos DB for MongoDB account with other Azure services.
+ Last updated 07/25/2022
-# Integrate Azure Cosmos DB API for MongoDB with Azure services
+# Integrate Azure Cosmos DB for MongoDB with Azure services
-Azure Cosmos DB API for MongoDB is a cloud-native offering and can be integrated seamlessly with other Azure services to build enterprise-grade modern applications.
+Azure Cosmos DB for MongoDB is a cloud-native offering and can be integrated seamlessly with other Azure services to build enterprise-grade modern applications.
## Compute services to run your application
Hosting options and deployment scenarios include several services and tools for
Azure App Service allows you to fully configure and manage the web server without needing to manage the underlying environment. Samples to get started:
-* [Quickstart: ToDo Application with a Node.js API and Azure Cosmos DB API for MongoDB on Azure App Service](https://github.com/azure-samples/todo-nodejs-mongo) to get started. \
-This sample includes everything you need to build, deploy, and monitor an Azure solution using React.js for the Web application, Node.js for the API, Azure Cosmos DB API for MongoDB for storage, and Azure Monitor for monitoring and logging.
+* [Quickstart: ToDo Application with a Node.js API and Azure Cosmos DB for MongoDB on Azure App Service](https://github.com/azure-samples/todo-nodejs-mongo) to get started. \
+This sample includes everything you need to build, deploy, and monitor an Azure solution using React.js for the Web application, Node.js for the API, Azure Cosmos DB for MongoDB for storage, and Azure Monitor for monitoring and logging.
-* [Quickstart: ToDo Application with a C# API and Azure Cosmos DB API for MongoDB on Azure App Service](https://github.com/Azure-Samples/todo-csharp-cosmos-sql) \
-This sample demonstrates how to build an Azure solution using C#, Azure Cosmos DB API for MongoDB for storage, and Azure Monitor for monitoring and logging.
+* [Quickstart: ToDo Application with a C# API and Azure Cosmos DB for MongoDB on Azure App Service](https://github.com/Azure-Samples/todo-csharp-cosmos-sql) \
+This sample demonstrates how to build an Azure solution using C#, Azure Cosmos DB for MongoDB for storage, and Azure Monitor for monitoring and logging.
-* [Quickstart: ToDo Application with a Python API and Azure Cosmos DB API for MongoDB on Azure App Service](https://github.com/Azure-Samples/todo-python-mongo) \
-This sample includes everything you need to build, deploy, and monitor an Azure solution using React.js for the Web application, Python (FastAPI) for the API, Azure Cosmos DB API for MongoDB for storage, and Azure Monitor for monitoring and logging.
+* [Quickstart: ToDo Application with a Python API and Azure Cosmos DB for MongoDB on Azure App Service](https://github.com/Azure-Samples/todo-python-mongo) \
+This sample includes everything you need to build, deploy, and monitor an Azure solution using React.js for the Web application, Python (FastAPI) for the API, Azure Cosmos DB for MongoDB for storage, and Azure Monitor for monitoring and logging.
### Azure Functions & Static Web Apps
Azure Functions hosts serverless API endpoints or microservices for event-driven
Samples to get started:
-* [Quickstart: ToDo Application with a Node.js API and Azure Cosmos DB API for MongoDB on Static Web Apps and Functions](https://github.com/Azure-Samples/todo-nodejs-mongo-swa-func) \
-This sample includes everything you need to build, deploy, and monitor an Azure solution using React.js for the Web application, Node.js for the API, Azure Cosmos DB API for MongoDB for storage, and Azure Monitor for monitoring and logging.
+* [Quickstart: ToDo Application with a Node.js API and Azure Cosmos DB for MongoDB on Static Web Apps and Functions](https://github.com/Azure-Samples/todo-nodejs-mongo-swa-func) \
+This sample includes everything you need to build, deploy, and monitor an Azure solution using React.js for the Web application, Node.js for the API, Azure Cosmos DB for MongoDB for storage, and Azure Monitor for monitoring and logging.
-* [Quickstart: ToDo Application with a Python API and Azure Cosmos DB API for MongoDB on Azure App Service](https://github.com/Azure-Samples/todo-python-mongo-swa-func) \
-This sample includes everything you need to build, deploy, and monitor an Azure solution using React.js for the Web application, Python (FastAPI) for the API, Azure Cosmos DB API for MongoDB for storage, and Azure Monitor for monitoring and logging.
+* [Quickstart: ToDo Application with a Python API and Azure Cosmos DB for MongoDB on Static Web Apps and Functions](https://github.com/Azure-Samples/todo-python-mongo-swa-func) \
+This sample includes everything you need to build, deploy, and monitor an Azure solution using React.js for the Web application, Python (FastAPI) for the API, Azure Cosmos DB for MongoDB for storage, and Azure Monitor for monitoring and logging.
### Azure Container Apps
Azure Container Apps provide a fully managed serverless container service for bu
Samples to get started:
-* [Quickstart: ToDo Application with a Node.js API and Azure Cosmos DB API for MongoDB on Azure Container Apps](https://github.com/Azure-Samples/todo-nodejs-mongo-aca)\
-This sample includes everything you need to build, deploy, and monitor an Azure solution using React.js for the Web application, Node.js for the API, Azure Cosmos DB API for MongoDB for storage, and Azure Monitor for monitoring and logging.
+* [Quickstart: ToDo Application with a Node.js API and Azure Cosmos DB for MongoDB on Azure Container Apps](https://github.com/Azure-Samples/todo-nodejs-mongo-aca)\
+This sample includes everything you need to build, deploy, and monitor an Azure solution using React.js for the Web application, Node.js for the API, Azure Cosmos DB for MongoDB for storage, and Azure Monitor for monitoring and logging.
-* [Quickstart: ToDo Application with a Python API and Azure Cosmos DB API for MongoDB on Azure Container Apps](https://github.com/Azure-Samples/todo-python-mongo-aca) \
-This sample includes everything you need to build, deploy, and monitor an Azure solution using React.js for the Web application, Python (FastAPI) for the API, Azure Cosmos DB API for MongoDB for storage, and Azure Monitor for monitoring and logging.
+* [Quickstart: ToDo Application with a Python API and Azure Cosmos DB for MongoDB on Azure Container Apps](https://github.com/Azure-Samples/todo-python-mongo-aca) \
+This sample includes everything you need to build, deploy, and monitor an Azure solution using React.js for the Web application, Python (FastAPI) for the API, Azure Cosmos DB for MongoDB for storage, and Azure Monitor for monitoring and logging.
### Azure Virtual Machines Azure Virtual Machines allow you to have full control on the compute environment running the application. You may also choose to scale from one to thousands of VM instances in minutes with Azure Virtual Machine Scale Sets.
Read more about [how to choose the right compute service on Azure](/azure/archit
### Azure Cognitive Search Azure Cognitive Search is a fully managed cloud search service that provides auto-complete, geospatial search, filtering, and faceting capabilities for a rich user experience.
-Here's how you can [index data from the Azure Cosmos DB API for MongoDB account](/azure/search/search-howto-index-cosmosdb-mongodb) to use with Azure Cognitive Search.
+Here's how you can [index data from the Azure Cosmos DB for MongoDB account](/azure/search/search-howto-index-cosmosdb-mongodb) to use with Azure Cognitive Search.
## Improve database security ### Azure Networking Azure Networking features allow you to connect and deliver your hybrid and cloud-native applications with low-latency, Zero Trust based networking services -
-* [Configure the Azure Cosmos API for MongoDB account to allow access only from a specific subnet of virtual network (VNet)](../how-to-configure-vnet-service-endpoint.md)
+* [Configure the Azure Cosmos DB for MongoDB account to allow access only from a specific subnet of virtual network (VNet)](../how-to-configure-vnet-service-endpoint.md)
* [Configure IP-based access controls for inbound firewall.](../how-to-configure-firewall.md) * [Configure connectivity to the account via a private endpoint.](../how-to-configure-private-endpoints.md) ### Azure Key Vault Azure Key Vault helps you to securely store and manage application secrets. You can use Azure Key Vault to -
-* [Secure Azure Cosmos DB API for MongoDB account credentials.](../access-secrets-from-keyvault.md)
+* [Secure Azure Cosmos DB for MongoDB account credentials.](../access-secrets-from-keyvault.md)
* [Configure customer-managed keys for your account.](../how-to-setup-cmk.md) ### Azure AD
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/introduction.md
+
+ Title: Introduction to Azure Cosmos DB for MongoDB
+description: Learn how you can use Azure Cosmos DB to store and query massive amounts of data using Azure Cosmos DB's API for MongoDB.
++++ Last updated : 08/26/2021+++
+# Azure Cosmos DB for MongoDB
+
+Azure Cosmos DB for MongoDB makes it easy to use Azure Cosmos DB as if it were a MongoDB database. You can leverage your MongoDB experience and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the API for MongoDB account's connection string.
+
+## Why choose the API for MongoDB
+
+Because it's built on [Azure Cosmos DB](../introduction.md), the API for MongoDB has numerous added benefits when compared to service offerings such as MongoDB Atlas:
+
+* **Instantaneous scalability**: By enabling the [Autoscale](../provision-throughput-autoscale.md) feature, your database can scale up/down with zero warmup period.
+* **Automatic and transparent sharding**: The API for MongoDB manages all of the infrastructure for you. This includes sharding and the number of shards, unlike other MongoDB offerings such as MongoDB Atlas, which require you to specify and manage sharding to horizontally scale. This gives you more time to focus on developing applications for your users.
+* **Five 9's of availability**: [99.999% availability](../high-availability.md) is easily configurable to ensure your data is always there for you.
+* **Cost efficient, granular, unlimited scalability**: Sharded collections can scale to any size, unlike other MongoDB service offerings. API for MongoDB users are running databases with over 600 TB of storage today. Scaling is done in a cost-efficient manner because, unlike other MongoDB service offerings, the Azure Cosmos DB platform can scale in increments as small as 1/100th of a VM due to economies of scale and resource governance.
+* **Serverless deployments**: Unlike MongoDB Atlas, the API for MongoDB is a cloud native database that offers a [serverless capacity mode](../serverless.md). With [Serverless](../serverless.md), you are only charged per operation, and don't pay for the database when you don't use it.
+* **Free Tier**: With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of storage in your account for free forever, applied at the account level.
+* **Upgrades take seconds**: All API versions are contained within one codebase, making version changes as simple as [flipping a switch](upgrade-version.md), with zero downtime.
+* **Real time analytics (HTAP) at any scale**: The API for MongoDB offers the ability to run complex analytical queries for use cases such as business intelligence against your database data in real time, with no impact to your database. This is fast and inexpensive because it uses the cloud-native analytical columnar store, with no ETL pipelines. Learn more about [Azure Synapse Link](../synapse-link.md).
+
+> [!NOTE]
+> [You can use Azure Cosmos DB for MongoDB for free with the free Tier!](../free-tier.md). With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of storage in your account for free, applied at the account level.
++
+## How the API works
+
+Azure Cosmos DB for MongoDB implements the wire protocol for MongoDB. This implementation allows transparent compatibility with native MongoDB client SDKs, drivers, and tools. Azure Cosmos DB does not host the MongoDB database engine. Any MongoDB client driver compatible with the API version you are using should be able to connect, with no special configuration.
+
+MongoDB feature compatibility:
+
+Azure Cosmos DB for MongoDB is compatible with the following MongoDB server versions:
+- [Version 4.2](feature-support-42.md)
+- [Version 4.0](feature-support-40.md)
+- [Version 3.6](feature-support-36.md)
+- [Version 3.2](feature-support-32.md)
+
+All API for MongoDB versions run on the same codebase, making upgrades a simple task that can be completed in seconds with zero downtime. Azure Cosmos DB simply flips a few feature flags to go from one version to another. The feature flags also enable continued support for older API versions such as 3.2 and 3.6. You can choose the server version that works best for you.
++
+## What you need to know to get started
+
+* You are not billed for virtual machines in a cluster. [Pricing](../how-pricing-works.md) is based on throughput in request units (RUs) configured on a per database or per collection basis. The first 1000 RUs per second are free with [Free Tier](../free-tier.md).
+
+* There are three ways to deploy Azure Cosmos DB for MongoDB:
+ * [Provisioned throughput](../set-throughput.md): Set a RU/sec number and change it manually. This model best fits consistent workloads.
+ * [Autoscale](../provision-throughput-autoscale.md): Set an upper bound on the throughput you need. Throughput instantly scales to match your needs. This model best fits workloads that change frequently and optimizes their costs.
+ * [Serverless](../serverless.md): Only pay for the throughput you use, period. This model best fits dev/test workloads.
+
+* Sharded cluster performance is dependent on the shard key you choose when creating a collection. Choose a shard key carefully to ensure that your data is evenly distributed across shards (see the sketch after this list).
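+
+For example, here's a minimal sketch of creating a sharded collection with an explicit shard key from the mongo shell; the database, collection, and key names are hypothetical:
+
+```javascript
+// Creates the orders collection sharded (partitioned) on customerId
+db.runCommand({ shardCollection: "retail.orders", key: { customerId: "hashed" } })
+```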
+
+### Capacity planning
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md)
+
+## Quickstart
+
+* [Migrate an existing MongoDB Node.js web app](create-mongodb-nodejs.md).
+* [Build a web app using Azure Cosmos DB's API for MongoDB and .NET SDK](create-mongodb-dotnet.md)
+* [Build a console app using Azure Cosmos DB's API for MongoDB and Java SDK](quickstart-java.md)
+* [Estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* [Estimating request units using Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md)
+
+## Next steps
+
+* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+* Follow the [Connect a MongoDB application to Azure Cosmos DB](connect-account.md) tutorial to learn how to get your account connection string information.
+* Follow the [Use Studio 3T with Azure Cosmos DB](connect-using-mongochef.md) tutorial to learn how to create a connection between your Azure Cosmos DB database and MongoDB app in Studio 3T.
+* Follow the [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json) tutorial to import your data to an Azure Cosmos DB database.
+* Connect to an Azure Cosmos DB account using [Robo 3T](connect-using-robomongo.md).
+* Learn how to [Configure read preferences for globally distributed apps](tutorial-global-distribution.md).
+* Find the solutions to commonly found errors in our [Troubleshooting guide](error-codes-solutions.md)
+* Configure near real time analytics with [Azure Synapse Link for Azure Cosmos DB](../configure-synapse-link.md)
++
+<sup>Note: This article describes a feature of Azure Cosmos DB that provides wire protocol compatibility with MongoDB databases. Microsoft does not run MongoDB databases to provide this service. Azure Cosmos DB is not affiliated with MongoDB, Inc.</sup>
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/manage-with-bicep.md
Title: Create and manage MongoDB API for Azure Cosmos DB with Bicep
-description: Use Bicep to create and configure MongoDB API Azure Cosmos DB API.
+ Title: Create and manage Azure Cosmos DB for MongoDB with Bicep
+description: Use Bicep to create and configure Azure Cosmos DB for MongoDB.
-++ Last updated 05/23/2022
-# Manage Azure Cosmos DB MongoDB API resources using Bicep
+# Manage Azure Cosmos DB for MongoDB resources using Bicep
-In this article, you learn how to use Bicep to deploy and manage your Azure Cosmos DB accounts for MongoDB API, databases, and collections.
+In this article, you learn how to use Bicep to deploy and manage your Azure Cosmos DB for MongoDB accounts, databases, and collections.
-This article shows Bicep samples for MongoDB API accounts. You can also find Bicep samples for [SQL](../sql/manage-with-bicep.md), [Cassandra](../cassandr) APIs.
+This article shows Bicep samples for API for MongoDB accounts. You can also find Bicep samples for the [SQL](../sql/manage-with-bicep.md) and [Cassandra](../cassandr) APIs.
> [!IMPORTANT] > > * Account names are limited to 44 characters, all lowercase. > * To change the throughput values, redeploy the template with updated RU/s.
-> * When you add or remove locations to an Azure Cosmos account, you can't simultaneously modify other properties. These operations must be done separately.
+> * When you add or remove locations to an Azure Cosmos DB account, you can't simultaneously modify other properties. These operations must be done separately.
To create any of the Azure Cosmos DB resources below, copy the following example into a new bicep file. You can optionally create a parameters file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Azure Resource Manager templates including, [Azure CLI](../../azure-resource-manager/bicep/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/bicep/deploy-powershell.md) and [Cloud Shell](../../azure-resource-manager/bicep/deploy-cloud-shell.md). <a id="create-autoscale"></a>
-## MongoDB API with autoscale provisioned throughput
+## API for MongoDB with autoscale provisioned throughput
-This template will create an Azure Cosmos account for MongoDB API (3.2, 3.6, 4.0, or 4.2) with two collections that share autoscale throughput at the database level.
+This template will create an Azure Cosmos DB account for API for MongoDB (3.2, 3.6, 4.0, or 4.2) with two collections that share autoscale throughput at the database level.
:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.documentdb/cosmosdb-mongodb-autoscale/main.bicep"::: <a id="create-manual"></a>
-## MongoDB API with standard provisioned throughput
+## API for MongoDB with standard provisioned throughput
-Create an Azure Cosmos account for MongoDB API (3.2, 3.6, 4.0, or 4.2) with two collections that share 400 RU/s standard (manual) throughput at the database level.
+Create an Azure Cosmos DB account for API for MongoDB (3.2, 3.6, 4.0, or 4.2) with two collections that share 400 RU/s standard (manual) throughput at the database level.
:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.documentdb/cosmosdb-mongodb/main.bicep":::
cosmos-db Migrate Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/migrate-databricks.md
Title: Migrate from MongoDB to Azure Cosmos DB API for MongoDB, using Databricks and Spark
+ Title: Migrate from MongoDB to Azure Cosmos DB for MongoDB, using Databricks and Spark
description: Learn how to use Databricks Spark to migrate large datasets from MongoDB instances to Azure Cosmos DB. -++ Last updated 08/26/2021
-# Migrate data from MongoDB to an Azure Cosmos DB API for MongoDB account by using Azure Databricks
+# Migrate data from MongoDB to an Azure Cosmos DB for MongoDB account by using Azure Databricks
+This migration guide is part of a series on migrating databases from MongoDB to Azure Cosmos DB for MongoDB. The critical migration steps are [pre-migration](pre-migration-steps.md), migration, and [post-migration](post-migration-optimization.md), as shown below.
This migration guide is part of series on migrating databases from MongoDB to Az
## Data migration using Azure Databricks
-[Azure Databricks](https://azure.microsoft.com/services/databricks/) is a platform as a service (PaaS) offering for [Apache Spark](https://spark.apache.org/). It offers a way to do offline migrations on a large-scale dataset. You can use Azure Databricks to do an offline migration of databases from MongoDB to Azure Cosmos DB API for MongoDB.
+[Azure Databricks](https://azure.microsoft.com/services/databricks/) is a platform as a service (PaaS) offering for [Apache Spark](https://spark.apache.org/). It offers a way to do offline migrations on a large-scale dataset. You can use Azure Databricks to do an offline migration of databases from MongoDB to Azure Cosmos DB for MongoDB.
In this tutorial, you will learn how to:
In this tutorial, you will learn how to:
To complete this tutorial, you need to: - [Complete the pre-migration](pre-migration-steps.md) steps such as estimating throughput and choosing a shard key.-- [Create an Azure Cosmos DB API for MongoDB account](https://portal.azure.com/#create/Microsoft.DocumentDB).
+- [Create an Azure Cosmos DB for MongoDB account](https://portal.azure.com/#create/Microsoft.DocumentDB).
## Provision an Azure Databricks cluster
You can follow instructions to [provision an Azure Databricks cluster](/azure/da
## Add dependencies
-Add the MongoDB Connector for Spark library to your cluster to connect to both native MongoDB and Azure Cosmos DB API for MongoDB endpoints. In your cluster, select **Libraries** > **Install New** > **Maven**, and then add `org.mongodb.spark:mongo-spark-connector_2.12:3.0.1` Maven coordinates.
+Add the MongoDB Connector for Spark library to your cluster to connect to both native MongoDB and Azure Cosmos DB for MongoDB endpoints. In your cluster, select **Libraries** > **Install New** > **Maven**, and then add `org.mongodb.spark:mongo-spark-connector_2.12:3.0.1` Maven coordinates.
:::image type="content" source="./media/migrate-databricks/databricks-cluster-dependencies.png" alt-text="Diagram of adding databricks cluster dependencies.":::
import org.apache.spark._
import org.apache.spark.sql._ var sourceConnectionString = "mongodb://<USERNAME>:<PASSWORD>@<HOST>:<PORT>/<AUTHDB>"
-var sourceDb = "<DBNAME>"
+var sourceDb = "<DB NAME>"
var sourceCollection = "<COLLECTIONNAME>" var targetConnectionString = "mongodb://<ACCOUNTNAME>:<PASSWORD>@<ACCOUNTNAME>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<ACCOUNTNAME>@"
-var targetDb = "<DBNAME>"
+var targetDb = "<DB NAME>"
var targetCollection = "<COLLECTIONNAME>" val readConfig = ReadConfig(Map(
Create a Python Notebook in Databricks. Make sure to enter the right values for
from pyspark.sql import SparkSession sourceConnectionString = "mongodb://<USERNAME>:<PASSWORD>@<HOST>:<PORT>/<AUTHDB>"
-sourceDb = "<DBNAME>"
+sourceDb = "<DB NAME>"
sourceCollection = "<COLLECTIONNAME>" targetConnectionString = "mongodb://<ACCOUNTNAME>:<PASSWORD>@<ACCOUNTNAME>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<ACCOUNTNAME>@"
-targetDb = "<DBNAME>"
+targetDb = "<DB NAME>"
targetCollection = "<COLLECTIONNAME>" my_spark = SparkSession \
The migration performance can be adjusted through these configurations:
## Troubleshoot ### Timeout Error (Error code 50)
-You might see a 50 error code for operations against the Cosmos DB API for MongoDB database. The following scenarios can cause timeout errors:
+You might see a 50 error code for operations against the Azure Cosmos DB for MongoDB database. The following scenarios can cause timeout errors:
- **Throughput allocated to the database is low**: Ensure that the target collection has sufficient throughput assigned to it. - **Excessive data skew with large data volume**: If you have a large amount of data to migrate into a given collection but have significant skew in the data, you might still experience rate limiting even if you have several [request units](../request-units.md) provisioned for that collection. Request units are divided equally among physical partitions, and heavy data skew can cause a bottleneck of requests to a single shard. Data skew means a large number of records for the same shard key value. ### Rate limiting (Error code 16500)
-You might see a 16500 error code for operations against the Cosmos DB API for MongoDB database. These are rate limiting errors and may be observed on older accounts or accounts where server-side retry feature is disabled.
+You might see a 16500 error code for operations against the Azure Cosmos DB for MongoDB database. These are rate-limiting errors and may be observed on older accounts or accounts where the server-side retry feature is disabled.
- **Enable Server-side retry**: Enable the Server Side Retry (SSR) feature and let the server retry the rate limited operations automatically.
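If server-side retry isn't an option for your account, client-side backoff is the usual alternative. The sketch below is illustrative PyMongo only, assuming a `collection` object and treating the retry limits as arbitrary choices:

```python
import time
from pymongo.errors import OperationFailure

def run_with_backoff(operation, max_retries=10, base_delay_ms=50):
    """Retry a callable when Azure Cosmos DB for MongoDB returns error code 16500 (rate limiting)."""
    for attempt in range(max_retries):
        try:
            return operation()
        except OperationFailure as err:
            if err.code != 16500:
                raise
            # Exponential backoff before retrying the rate-limited operation.
            time.sleep(base_delay_ms * (2 ** attempt) / 1000.0)
    raise RuntimeError("Operation still rate limited after retries")

# Example usage, assuming `collection` is a pymongo Collection for the target account:
# run_with_backoff(lambda: collection.insert_one({"_id": "1", "name": "example"}))
```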
After you migrate the data, you can connect to Azure Cosmos DB and manage the da
## Next steps
-* [Manage indexing in Azure Cosmos DB's API for MongoDB](mongodb-indexing.md)
-* [Find the request unit charge for operations](find-request-unit-charge-mongodb.md)
+* [Manage indexing in Azure Cosmos DB's API for MongoDB](indexing.md)
+* [Find the request unit charge for operations](find-request-unit-charge.md)
cosmos-db Mongodb Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/mongodb-indexing.md
- Title: Manage indexing in Azure Cosmos DB API for MongoDB
-description: This article presents an overview of Azure Cosmos DB indexing capabilities using Azure Cosmos DB API for MongoDB
--- Previously updated : 4/5/2022----
-# Manage indexing in Azure Cosmos DB API for MongoDB
-
-Azure Cosmos DB API for MongoDB takes advantage of the core index-management capabilities of Azure Cosmos DB. This article focuses on how to add indexes using Azure Cosmos DB API for MongoDB. Indexes are specialized data structures that make querying your data roughly an order of magnitude faster.
-
->
-> [!VIDEO https://aka.ms/docs.mongo-indexing]
-
-## Indexing for MongoDB server version 3.6 and higher
-
-Azure Cosmos DB API for MongoDB server version 3.6+ automatically indexes the `_id` field and the shard key (only in sharded collections). The API automatically enforces the uniqueness of the `_id` field per shard key.
-
-The API for MongoDB behaves differently from the Azure Cosmos DB SQL API, which indexes all fields by default.
-
-### Editing indexing policy
-
-We recommend editing your indexing policy in the Data Explorer within the Azure portal. You can add single field and wildcard indexes from the indexing policy editor in the Data Explorer:
--
-> [!NOTE]
-> You can't create compound indexes using the indexing policy editor in the Data Explorer.
-
-## Index types
-
-### Single field
-
-You can create indexes on any single field. The sort order of the single field index does not matter. The following command creates an index on the field `name`:
-
-`db.coll.createIndex({name:1})`
-
-You could create the same single field index on `name` in the Azure portal:
--
-One query uses multiple single field indexes where available. You can create up to 500 single field indexes per collection.
-
-### Compound indexes (MongoDB server version 3.6+)
-In the API for MongoDB, compound indexes are **required** if your query needs the ability to sort on multiple fields at once. For queries with multiple filters that don't need to sort, create multiple single field indexes instead of a compound index to save on indexing costs.
-
-A compound index or single field indexes for each field in the compound index will result in the same performance for filtering in queries.
--
-> [!NOTE]
-> You can't create compound indexes on nested properties or arrays.
-
-The following command creates a compound index on the fields `name` and `age`:
-
-`db.coll.createIndex({name:1,age:1})`
-
-You can use compound indexes to sort efficiently on multiple fields at once, as shown in the following example:
-
-`db.coll.find().sort({name:1,age:1})`
-
-You can also use the preceding compound index to efficiently sort on a query with the opposite sort order on all fields. Here's an example:
-
-`db.coll.find().sort({name:-1,age:-1})`
-
-However, the sequence of the paths in the compound index must exactly match the query. Here's an example of a query that would require an additional compound index:
-
-`db.coll.find().sort({age:1,name:1})`
-
-> [!NOTE]
-> Compound indexes are only used in queries that sort results. For queries that have multiple filters that don't need to sort, create multiple single field indexes.
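The same index and sorted query can also be issued from a driver. Here's a minimal PyMongo sketch; the connection string, database, and collection names are placeholders:

```python
from pymongo import MongoClient, ASCENDING

coll = MongoClient("<CONNECTION STRING>")["mydb"]["coll"]  # placeholder connection string

# Equivalent of db.coll.createIndex({name:1, age:1}).
coll.create_index([("name", ASCENDING), ("age", ASCENDING)])

# Sort on both fields; this compound index supports this order and its exact reverse.
for doc in coll.find().sort([("name", ASCENDING), ("age", ASCENDING)]):
    print(doc)
```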
-
-### Multikey indexes
-
-Azure Cosmos DB creates multikey indexes to index content stored in arrays. If you index a field with an array value, Azure Cosmos DB automatically indexes every element in the array.
-
-### Geospatial indexes
-
-Many geospatial operators will benefit from geospatial indexes. Currently, Azure Cosmos DB API for MongoDB supports `2dsphere` indexes. The API does not yet support `2d` indexes.
-
-Here's an example of creating a geospatial index on the `location` field:
-
-`db.coll.createIndex({ location : "2dsphere" })`
-
-### Text indexes
-
-Azure Cosmos DB API for MongoDB does not currently support text indexes. For text search queries on strings, you should use [Azure Cognitive Search](../../search/search-howto-index-cosmosdb.md) integration with Azure Cosmos DB.
-
-## Wildcard indexes
-
-You can use wildcard indexes to support queries against unknown fields. Let's imagine you have a collection that holds data about families.
-
-Here is part of an example document in that collection:
-
-```json
-"children": [
- {
- "firstName": "Henriette Thaulow",
- "grade": "5"
- }
-]
-```
-
-Here's another example, this time with a slightly different set of properties in `children`:
-
-```json
-"children": [
- {
- "familyName": "Merriam",
- "givenName": "Jesse",
- "pets": [
- { "givenName": "Goofy" },
- { "givenName": "Shadow" }
- ]
- },
- {
- "familyName": "Merriam",
- "givenName": "John",
- }
-]
-```
-
-In this collection, documents can have many different possible properties. If you wanted to index all the data in the `children` array, you have two options: create separate indexes for each individual property or create one wildcard index for the entire `children` array.
-
-### Create a wildcard index
-
-The following command creates a wildcard index on any properties within `children`:
-
-`db.coll.createIndex({"children.$**" : 1})`
-
-**Unlike in MongoDB, wildcard indexes can support multiple fields in query predicates**. There will not be a difference in query performance if you use one single wildcard index instead of creating a separate index for each property.
-
-You can create the following index types using wildcard syntax:
-
-* Single field
-* Geospatial
-
-### Indexing all properties
-
-Here's how you can create a wildcard index on all fields:
-
-`db.coll.createIndex( { "$**" : 1 } )`
-
-You can also create wildcard indexes using the Data Explorer in the Azure portal:
--
-> [!NOTE]
-> If you are just starting development, we **strongly** recommend starting off with a wildcard index on all fields. This can simplify development and make it easier to optimize queries.
-
-Documents with many fields may have a high Request Unit (RU) charge for writes and updates. Therefore, if you have a write-heavy workload, you should opt to individually index paths as opposed to using wildcard indexes.
-
-### Limitations
-
-Wildcard indexes do not support any of the following index types or properties:
-
-* Compound
-* TTL
-* Unique
-
-**Unlike in MongoDB**, in Azure Cosmos DB API for MongoDB you **can't** use wildcard indexes for:
-
-* Creating a wildcard index that includes multiple specific fields
-
- ```json
- db.coll.createIndex(
- { "$**" : 1 },
- { "wildcardProjection " :
- {
- "children.givenName" : 1,
- "children.grade" : 1
- }
- }
- )
- ```
-
-* Creating a wildcard index that excludes multiple specific fields
-
- ```json
- db.coll.createIndex(
- { "$**" : 1 },
- { "wildcardProjection" :
- {
- "children.givenName" : 0,
- "children.grade" : 0
- }
- }
- )
- ```
-
-As an alternative, you could create multiple wildcard indexes.
-
-## Index properties
-
-The following operations are common for accounts serving wire protocol version 4.0 and accounts serving earlier versions. You can learn more about [supported indexes and indexed properties](feature-support-40.md#indexes-and-index-properties).
-
-### Unique indexes
-
-[Unique indexes](../unique-keys.md) are useful for enforcing that two or more documents do not contain the same value for indexed fields.
-
-The following command creates a unique index on the field `student_id`:
-
-```shell
-globaldb:PRIMARY> db.coll.createIndex( { "student_id" : 1 }, {unique:true} )
-{
- "_t" : "CreateIndexesResponse",
- "ok" : 1,
- "createdCollectionAutomatically" : false,
- "numIndexesBefore" : 1,
- "numIndexesAfter" : 4
-}
-```
-
-For sharded collections, you must provide the shard (partition) key to create a unique index. In other words, all unique indexes on a sharded collection are compound indexes where one of the fields is the partition key.
-
-The following commands create a sharded collection ```coll``` (the shard key is ```university```) with a unique index on the fields `student_id` and `university`:
-
-```shell
-globaldb:PRIMARY> db.runCommand({shardCollection: db.coll._fullName, key: { university: "hashed"}});
-{
- "_t" : "ShardCollectionResponse",
- "ok" : 1,
- "collectionsharded" : "test.coll"
-}
-globaldb:PRIMARY> db.coll.createIndex( { "university" : 1, "student_id" : 1 }, {unique:true});
-{
- "_t" : "CreateIndexesResponse",
- "ok" : 1,
- "createdCollectionAutomatically" : false,
- "numIndexesBefore" : 3,
- "numIndexesAfter" : 4
-}
-```
-
-In the preceding example, omitting the ```"university":1``` clause returns an error with the following message:
-
-`cannot create unique index over {student_id : 1.0} with shard key pattern { university : 1.0 }`
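For reference, the same two steps expressed through PyMongo might look like the following sketch (placeholder connection string; Python 3.7+ dictionaries keep key order, which the `shardCollection` command relies on):

```python
from pymongo import MongoClient, ASCENDING

db = MongoClient("<CONNECTION STRING>")["test"]  # placeholder connection string

# Shard (partition) the collection on the university field.
db.command({"shardCollection": "test.coll", "key": {"university": "hashed"}})

# On a sharded collection, a unique index must include the shard key.
db["coll"].create_index([("university", ASCENDING), ("student_id", ASCENDING)], unique=True)
```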
-
-#### Limitations
-
-Unique indexes need to be created while the collection is empty.
-
-### TTL indexes
-
-To enable document expiration in a particular collection, you need to create a [time-to-live (TTL) index](../time-to-live.md). A TTL index is an index on the `_ts` field with an `expireAfterSeconds` value.
-
-Example:
-
-```JavaScript
-globaldb:PRIMARY> db.coll.createIndex({"_ts":1}, {expireAfterSeconds: 10})
-```
-
-The preceding command deletes any documents in the ```db.coll``` collection that have not been modified in the last 10 seconds.
-
-> [!NOTE]
-> The **_ts** field is specific to Azure Cosmos DB and is not accessible from MongoDB clients. It is a reserved (system) property that contains the time stamp of the document's last modification.
-
-## Track index progress
-
-Version 3.6+ of Azure Cosmos DB API for MongoDB supports the `currentOp()` command to track index progress on a database instance. This command returns a document that contains information about in-progress operations on a database instance. You use the `currentOp` command to track all in-progress operations in native MongoDB. In Azure Cosmos DB API for MongoDB, this command only supports tracking the index operation.
-
-Here are some examples that show how to use the `currentOp` command to track index progress:
-
-* Get the index progress for a collection:
-
- ```shell
- db.currentOp({"command.createIndexes": <collectionName>, "command.$db": <databaseName>})
- ```
-
-* Get the index progress for all collections in a database:
-
- ```shell
- db.currentOp({"command.$db": <databaseName>})
- ```
-
-* Get the index progress for all databases and collections in an Azure Cosmos account:
-
- ```shell
- db.currentOp({"command.createIndexes": { $exists : true } })
- ```
-
-### Examples of index progress output
-
-The index progress details show the percentage of progress for the current index operation. Here's an example that shows the output document format for different stages of index progress:
-
-* An index operation on a "foo" collection and "bar" database that is 60 percent complete will have the following output document. The `Inprog[0].progress.total` field shows 100 as the target completion percentage.
-
- ```json
- {
- "inprog" : [
- {
- ………………...
- "command" : {
- "createIndexes" : foo
- "indexes" :[ ],
- "$db" : bar
- },
- "msg" : "Index Build (background) Index Build (background): 60 %",
- "progress" : {
- "done" : 60,
- "total" : 100
- },
- …………..…..
- }
- ],
- "ok" : 1
- }
- ```
-
-* If an index operation has just started on a "foo" collection and "bar" database, the output document might show 0 percent progress until it reaches a measurable level.
-
- ```json
- {
- "inprog" : [
- {
- ………………...
- "command" : {
- "createIndexes" : foo
- "indexes" :[ ],
- "$db" : bar
- },
- "msg" : "Index Build (background) Index Build (background): 0 %",
- "progress" : {
- "done" : 0,
- "total" : 100
- },
- …………..…..
- }
- ],
- "ok" : 1
- }
- ```
-
-* When the in-progress index operation finishes, the output document shows empty `inprog` operations.
-
- ```json
- {
- "inprog" : [],
- "ok" : 1
- }
- ```
-
-## Background index updates
-
-Regardless of the value specified for the **Background** index property, index updates are always done in the background. Because index updates consume Request Units (RUs) at a lower priority than other database operations, index changes won't result in any downtime for writes, updates, or deletes.
-
-There is no impact to read availability when adding a new index. Queries will only utilize new indexes once the index transformation is complete. During the index transformation, the query engine will continue to use existing indexes, so you'll observe similar read performance during the indexing transformation to what you had observed before initiating the indexing change. When adding new indexes, there is also no risk of incomplete or inconsistent query results.
-
-When removing indexes and immediately running queries that have filters on the dropped indexes, results might be inconsistent and incomplete until the index transformation finishes. If you remove indexes, the query engine does not provide consistent or complete results when queries filter on these newly removed indexes. Most developers do not drop indexes and then immediately try to query them, so in practice this situation is unlikely.
-
-> [!NOTE]
-> You can [track index progress](#track-index-progress).
-
-## ReIndex command
-
-The `reIndex` command will recreate all indexes on a collection. In some rare cases, query performance or other index issues in your collection may be solved by running the `reIndex` command. If you're experiencing issues with indexing, recreating the indexes with the `reIndex` command is a recommended approach.
-
-You can run the `reIndex` command using the following syntax:
-
-`db.runCommand({ reIndex: <collection> })`
-
-You can use the below syntax to check if running the `reIndex` command would improve query performance in your collection:
-
-`db.runCommand({"customAction":"GetCollection",collection:<collection>, showIndexes:true})`
-
-Sample output:
-
-```
-{
- "database" : "myDB",
- "collection" : "myCollection",
- "provisionedThroughput" : 400,
- "indexes" : [
- {
- "v" : 1,
- "key" : {
- "_id" : 1
- },
- "name" : "_id_",
- "ns" : "myDB.myCollection",
- "requiresReIndex" : true
- },
- {
- "v" : 1,
- "key" : {
- "b.$**" : 1
- },
- "name" : "b.$**_1",
- "ns" : "myDB.myCollection",
- "requiresReIndex" : true
- }
- ],
- "ok" : 1
-}
-```
-
-If `reIndex` will improve query performance, **requiresReIndex** will be true. If `reIndex` won't improve query performance, this property will be omitted.
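Both commands can also be run through a driver. The following PyMongo sketch mirrors the shell syntax above, with a placeholder connection string and collection name:

```python
from pymongo import MongoClient

db = MongoClient("<CONNECTION STRING>")["myDB"]  # placeholder connection string

# Check whether recreating indexes is expected to help (same custom action shown above).
status = db.command({"customAction": "GetCollection", "collection": "myCollection", "showIndexes": True})
print([ix for ix in status.get("indexes", []) if ix.get("requiresReIndex")])

# Recreate all indexes on the collection.
db.command("reIndex", "myCollection")
```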
-
-## Migrate collections with indexes
-
-Currently, you can only create unique indexes when the collection contains no documents. Popular MongoDB migration tools try to create the unique indexes after importing the data. To circumvent this issue, you can manually create the corresponding collections and unique indexes instead of allowing the migration tool to try. (You can achieve this behavior for ```mongorestore``` by using the `--noIndexRestore` flag in the command line.)
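One way to do this, sketched below with PyMongo and placeholder names, is to create the collection and its unique index up front and then run the import with index restoration disabled:

```python
from pymongo import MongoClient, ASCENDING

db = MongoClient("<CONNECTION STRING>")["mydb"]  # placeholder target connection string

# Create the empty collection and its unique index before any documents are imported.
db.create_collection("coll")
db["coll"].create_index([("student_id", ASCENDING)], unique=True)

# Then import the data separately, for example:
#   mongorestore --uri "<CONNECTION STRING>" --noIndexRestore dump/
```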
-
-## Indexing for MongoDB version 3.2
-
-Available indexing features and defaults are different for Azure Cosmos accounts that are compatible with version 3.2 of the MongoDB wire protocol. You can [check your account's version](feature-support-36.md#protocol-support) and [upgrade to version 3.6](upgrade-mongodb-version.md).
-
-If you're using version 3.2, this section outlines key differences with versions 3.6+.
-
-### Dropping default indexes (version 3.2)
-
-Unlike the 3.6+ versions of Azure Cosmos DB API for MongoDB, version 3.2 indexes every property by default. You can use the following command to drop these default indexes for a collection (```coll```):
-
-```JavaScript
-> db.coll.dropIndexes()
-{ "_t" : "DropIndexesResponse", "ok" : 1, "nIndexesWas" : 3 }
-```
-
-After dropping the default indexes, you can add more indexes as you would in version 3.6+.
-
-### Compound indexes (version 3.2)
-
-Compound indexes hold references to multiple fields of a document. If you want to create a compound index, [upgrade to version 3.6 or 4.0](upgrade-mongodb-version.md).
-
-### Wildcard indexes (version 3.2)
-
-If you want to create a wildcard index, [upgrade to version 4.0 or 3.6](upgrade-mongodb-version.md).
-
-## Next steps
-
-* [Indexing in Azure Cosmos DB](../index-policy.md)
-* [Expire data in Azure Cosmos DB automatically with time to live](../time-to-live.md)
-* To learn about the relationship between partitioning and indexing, see the [Query an Azure Cosmos container](../how-to-query-container.md) article.
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Mongodb Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/mongodb-introduction.md
- Title: Introduction to Azure Cosmos DB API for MongoDB
-description: Learn how you can use Azure Cosmos DB to store and query massive amounts of data using Azure Cosmos DB's API for MongoDB.
--- Previously updated : 08/26/2021----
-# Azure Cosmos DB API for MongoDB
-
-The Azure Cosmos DB API for MongoDB makes it easy to use Cosmos DB as if it were a MongoDB database. You can leverage your MongoDB experience and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the API for MongoDB account's connection string.
-
-## Why choose the API for MongoDB
-
-The API for MongoDB has numerous added benefits of being built on [Azure Cosmos DB](../introduction.md) when compared to service offerings such as MongoDB Atlas:
-
-* **Instantaneous scalability**: By enabling the [Autoscale](../provision-throughput-autoscale.md) feature, your database can scale up/down with zero warmup period.
-* **Automatic and transparent sharding**: The API for MongoDB manages all of the infrastructure for you. This includes sharding and the number of shards, unlike other MongoDB offerings such as MongoDB Atlas, which require you to specify and manage sharding to horizontally scale. This gives you more time to focus on developing applications for your users.
-* **Five 9's of availability**: [99.999% availability](../high-availability.md) is easily configurable to ensure your data is always there for you.
-* **Cost efficient, granular, unlimited scalability**: Sharded collections can scale to any size, unlike other MongoDB service offerings. API for MongoDB users are running databases with over 600 TB of storage today. Scaling is done in a cost-efficient manner, since unlike other MongoDB service offerings, the Cosmos DB platform can scale in increments as small as 1/100th of a VM due to economies of scale and resource governance.
-* **Serverless deployments**: Unlike MongoDB Atlas, the API for MongoDB is a cloud native database that offers a [serverless capacity mode](../serverless.md). With [Serverless](../serverless.md), you are only charged per operation, and don't pay for the database when you don't use it.
-* **Free Tier**: With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of storage in your account for free forever, applied at the account level.
-* **Upgrades take seconds**: All API versions are contained within one codebase, making version changes as simple as [flipping a switch](upgrade-mongodb-version.md), with zero downtime.
-* **Real time analytics (HTAP) at any scale**: The API for MongoDB offers the ability to run complex analytical queries for use cases such as business intelligence against your database data in real time with no impact to your database. This is fast and cheap, due to the cloud native analytical columnar store being utilized, with no ETL pipelines. Learn more about the [Azure Synapse Link](../synapse-link.md).
-
-> [!NOTE]
-> [You can use Azure Cosmos DB API for MongoDB for free with the free Tier!](../free-tier.md). With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of storage in your account for free, applied at the account level.
--
-## How the API works
-
-Azure Cosmos DB API for MongoDB implements the wire protocol for MongoDB. This implementation allows transparent compatibility with native MongoDB client SDKs, drivers, and tools. Azure Cosmos DB does not host the MongoDB database engine. Any MongoDB client driver compatible with the API version you are using should be able to connect, with no special configuration.
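For example, connecting with a standard driver needs nothing more than the account's connection string. A minimal PyMongo sketch (placeholder connection string) looks like this:

```python
from pymongo import MongoClient

# Placeholder - copy the connection string from the account's Connection String pane in the Azure portal.
client = MongoClient("mongodb://<ACCOUNTNAME>:<PASSWORD>@<ACCOUNTNAME>.mongo.cosmos.azure.com:10255/?ssl=true&retrywrites=false")

coll = client["mydb"]["mycoll"]
coll.insert_one({"greeting": "hello"})
print(coll.find_one())
```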
-
-MongoDB feature compatibility:
-
-Azure Cosmos DB API for MongoDB is compatible with the following MongoDB server versions:
-- [Version 4.2](feature-support-42.md)-- [Version 4.0](feature-support-40.md)-- [Version 3.6](feature-support-36.md)-- [Version 3.2](feature-support-32.md)-
-All the APIs for MongoDB versions run on the same codebase, making upgrades a simple task that can be completed in seconds with zero downtime. Azure Cosmos DB simply flips a few feature flags to go from one version to another. The feature flags also enable continued support for older API versions such as 3.2 and 3.6. You can choose the server version that works best for you.
--
-## What you need to know to get started
-
-* You are not billed for virtual machines in a cluster. [Pricing](../how-pricing-works.md) is based on throughput in request units (RUs) configured on a per database or per collection basis. The first 1000 RUs per second are free with [Free Tier](../free-tier.md).
-
-* There are three ways to deploy Azure Cosmos DB API for MongoDB:
- * [Provisioned throughput](../set-throughput.md): Set a RU/sec number and change it manually. This model best fits consistent workloads.
- * [Autoscale](../provision-throughput-autoscale.md): Set an upper bound on the throughput you need. Throughput instantly scales to match your needs. This model best fits workloads that change frequently and optimizes their costs.
- * [Serverless](../serverless.md): Only pay for the throughput you use, period. This model best fits dev/test workloads.
-
-* Sharded cluster performance is dependent on the shard key you choose when creating a collection. Choose a shard key carefully to ensure that your data is evenly distributed across shards.
-
-### Capacity planning
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md)
-
-## Quickstart
-
-* [Migrate an existing MongoDB Node.js web app](create-mongodb-nodejs.md).
-* [Build a web app using Azure Cosmos DB's API for MongoDB and .NET SDK](create-mongodb-dotnet.md)
-* [Build a console app using Azure Cosmos DB's API for MongoDB and Java SDK](create-mongodb-java.md)
-* [Estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* [Estimating request units using Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md)
-
-## Next steps
-
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
-* Follow the [Connect a MongoDB application to Azure Cosmos DB](connect-mongodb-account.md) tutorial to learn how to get your account connection string information.
-* Follow the [Use Studio 3T with Azure Cosmos DB](connect-using-mongochef.md) tutorial to learn how to create a connection between your Cosmos database and MongoDB app in Studio 3T.
-* Follow the [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json) tutorial to import your data to a Cosmos database.
-* Connect to a Cosmos account using [Robo 3T](connect-using-robomongo.md).
-* Learn how to [Configure read preferences for globally distributed apps](tutorial-global-distribution-mongodb.md).
-* Find the solutions to commonly found errors in our [Troubleshooting guide](error-codes-solutions.md)
-* Configure near real time analytics with [Azure Synapse Link for Azure Cosmos DB](../configure-synapse-link.md)
--
-<sup>Note: This article describes a feature of Azure Cosmos DB that provides wire protocol compatibility with MongoDB databases. Microsoft does not run MongoDB databases to provide this service. Azure Cosmos DB is not affiliated with MongoDB, Inc.</sup>
cosmos-db Mongodb Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/mongodb-time-to-live.md
- Title: MongoDB per-document TTL feature in Azure Cosmos DB
-description: Learn how to set time to live value for documents using Azure Cosmos DB's API for MongoDB, to automatically purge them from the system after a period of time.
----- Previously updated : 02/16/2022--
-# Expire data with Azure Cosmos DB's API for MongoDB
-
-Time-to-live (TTL) functionality allows the database to automatically expire data. Azure Cosmos DB's API for MongoDB utilizes Cosmos DB's core TTL capabilities. Two modes are supported: setting a default TTL value on the whole collection, and setting individual TTL values for each document. The logic governing TTL indexes and per-document TTL values in Cosmos DB's API for MongoDB is the [same as in Cosmos DB](mongodb-indexing.md).
-
-## TTL indexes
-To enable TTL universally on a collection, a ["TTL index" (time-to-live index)](mongodb-indexing.md) needs to be created. The TTL index is an index on the `_ts` field with an "expireAfterSeconds" value.
-
-MongoShell example:
-
-```
-globaldb:PRIMARY> db.coll.createIndex({"_ts":1}, {expireAfterSeconds: 10})
-```
-The command in the above example will create an index with TTL functionality.
-
-The output of the command includes various metadata:
-
-```output
-{
- "_t" : "CreateIndexesResponse",
- "ok" : 1,
- "createdCollectionAutomatically" : true,
- "numIndexesBefore" : 1,
- "numIndexesAfter" : 4
-}
-```
-
- Once the index is created, the database will automatically delete any documents in that collection that have not been modified in the last 10 seconds.
-
-> [!NOTE]
-> `_ts` is a Cosmos DB-specific field and is not accessible from MongoDB clients. It is a reserved (system) property that contains the timestamp of the document's last modification.
-
-Java example:
-
-```java
-MongoCollection collection = mongoDB.getCollection("collectionName");
-String index = collection.createIndex(Indexes.ascending("_ts"),
-new IndexOptions().expireAfter(10L, TimeUnit.SECONDS));
-```
-
-C# example:
-
-```csharp
-var options = new CreateIndexOptions {ExpireAfter = TimeSpan.FromSeconds(10)};
-var field = new StringFieldDefinition<BsonDocument>("_ts");
-var indexDefinition = new IndexKeysDefinitionBuilder<BsonDocument>().Ascending(field);
-await collection.Indexes.CreateOneAsync(indexDefinition, options);
-```
-
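A Python (PyMongo) version of the same TTL index, shown here as a sketch with a placeholder connection string, follows the same pattern as the Java and C# examples:

```python
from pymongo import MongoClient, ASCENDING

coll = MongoClient("<CONNECTION STRING>")["mydb"]["coll"]  # placeholder connection string

# TTL index on the system _ts field; documents expire 10 seconds after their last modification.
coll.create_index([("_ts", ASCENDING)], expireAfterSeconds=10)
```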
-## Set time to live value for a document
-Per-document TTL values are also supported. The document(s) must contain a root-level property "ttl" (lower-case), and a TTL index as described above must have been created for that collection. TTL values set on a document will override the collection's TTL value.
-
-The TTL value must be an int32, or alternatively an int64 that fits in an int32 or a double with no decimal part that fits in an int32. Values for the TTL property that do not conform to these specifications are allowed but not treated as a meaningful document TTL value.
-
-The TTL value for the document is optional; documents without a TTL value can be inserted into the collection. In this case, the collection's TTL value will be honored.
-
-The following documents have valid TTL values. Once the documents are inserted, the document TTL values override the collection's TTL values. So, the documents will be removed after 20 seconds.
-
-```JavaScript
-globaldb:PRIMARY> db.coll.insert({id:1, location: "Paris", ttl: 20.0})
-globaldb:PRIMARY> db.coll.insert({id:1, location: "Paris", ttl: NumberInt(20)})
-globaldb:PRIMARY> db.coll.insert({id:1, location: "Paris", ttl: NumberLong(20)})
-```
-
-The following documents have invalid TTL values. The documents will be inserted, but the document TTL value will not be honored. So, the documents will be removed after 10 seconds because of the collection's TTL value.
-
-```JavaScript
-globaldb:PRIMARY> db.coll.insert({id:1, location: "Paris", ttl: 20.5}) //TTL value contains non-zero decimal part.
-globaldb:PRIMARY> db.coll.insert({id:1, location: "Paris", ttl: NumberLong(2147483649)}) //TTL value is greater than Int32.MaxValue (2,147,483,647).
-```
-
-## Next steps
-* [Expire data in Azure Cosmos DB automatically with time to live](../time-to-live.md)
-* [Indexing your Cosmos database configured with Azure Cosmos DB's API for MongoDB](../mongodb-indexing.md)
cosmos-db Nodejs Console App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/nodejs-console-app.md
Title: Use Azure Cosmos DB's API for MongoDB to build a Node.js app description: A tutorial that creates an online database using the Azure Cosmos DB's API for MongoDB. -+ ms.devlang: javascript Last updated 08/26/2021 -+ # Build an app using Node.js and Azure Cosmos DB's API for MongoDB -
-> [!div class="op_single_selector"]
-> * [.NET](../sql-api-get-started.md)
-> * [.NET Core](../sql-api-get-started.md)
-> * [Java](../create-sql-api-java.md)
-> * [Node.js for MongoDB](nodejs-console-app.md)
-> * [Node.js](../sql-api-nodejs-get-started.md)
->
This example shows you how to build a console app using Node.js and Azure Cosmos DB's API for MongoDB. To use this example, you must:
-* [Create](create-mongodb-dotnet.md#create-an-azure-cosmos-db-account) a Cosmos account configured to use Azure Cosmos DB's API for MongoDB.
-* Retrieve your [connection string](connect-mongodb-account.md) information.
+* [Create](create-mongodb-dotnet.md#create-an-azure-cosmos-db-account) an Azure Cosmos DB account configured to use Azure Cosmos DB's API for MongoDB.
+* Retrieve your [connection string](connect-account.md) information.
## Create the app
To use this example, you must:
}); ```
-2. Modify the following variables in the *app.js* file per your account settings (Learn how to find your [connection string](connect-mongodb-account.md)):
+2. Modify the following variables in the *app.js* file per your account settings (Learn how to find your [connection string](connect-account.md)):
> [!IMPORTANT]
- > The **MongoDB Node.js 3.0 driver** requires encoding special characters in the Cosmos DB password. Make sure to encode '=' characters as %3D
+ > The **MongoDB Node.js 3.0 driver** requires encoding special characters in the Azure Cosmos DB password. Make sure to encode '=' characters as %3D
> > Example: The password *jm1HbNdLg5zxEuyD86ajvINRFrFCUX0bIWP15ATK3BvSv==* encodes to *jm1HbNdLg5zxEuyD86ajvINRFrFCUX0bIWP15ATK3BvSv%3D%3D* >
- > The **MongoDB Node.js 2.2 driver** does not require encoding special characters in the Cosmos DB password.
+ > The **MongoDB Node.js 2.2 driver** does not require encoding special characters in the Azure Cosmos DB password.
> >
To use this example, you must:
- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB. - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Optimize Write Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/optimize-write-performance.md
Title: Optimize write performance in the Azure Cosmos DB API for MongoDB
-description: This article describes how to optimize write performance in the Azure Cosmos DB API for MongoDB to get the most throughput possible for the lowest cost.
+ Title: Optimize write performance in the Azure Cosmos DB for MongoDB
+description: This article describes how to optimize write performance in the Azure Cosmos DB for MongoDB to get the most throughput possible for the lowest cost.
-++ Last updated 08/26/2021 -
-# Optimize write performance in Azure Cosmos DB API for MongoDB
+# Optimize write performance in Azure Cosmos DB for MongoDB
-Optimizing write performance helps you get the most out of Azure Cosmos DB API for MongoDB's unlimited scale. Unlike other managed MongoDB services, the API for MongoDB automatically and transparently shards your collections for you (when using sharded collections) to scale infinitely.
+Optimizing write performance helps you get the most out of Azure Cosmos DB for MongoDB's unlimited scale. Unlike other managed MongoDB services, the API for MongoDB automatically and transparently shards your collections for you (when using sharded collections) to scale infinitely.
The way you write data needs to be mindful of this by parallelizing and spreading data across shards to get the most writes out of your databases and collections. This article explains best practices to optimize write performance.
If your application writes a massive amount of data to a single shard, this won'
One example of doing this would be a product catalog application that is sharded on the category field. Instead of writing to one category (shard) at a time, it's better to write to all categories simultaneously to achieve the maximum write throughput. ## Reduce the number of indexes
-[Indexing](../mongodb-indexing.md) is a great feature to drastically reduce the time it takes to query your data. For the most flexible query experience, the API for MongoDB enables a wildcard index on your data by default to make queries against all fields blazing-fast. However, all indexes, which include wildcard indexes introduce additional load when writing data because writes change the collection and indexes.
+[Indexing](../mongodb/indexing.md) is a great feature to drastically reduce the time it takes to query your data. For the most flexible query experience, the API for MongoDB enables a wildcard index on your data by default to make queries against all fields blazing-fast. However, all indexes, including wildcard indexes, introduce additional load when writing data because writes change the collection and indexes.
Reducing the number of indexes to only the indexes you need to support your queries will make your writes faster and cheaper. As a general rule, we recommend the following:
If you are writing more than 1,000 documents at a time per process/thread, clien
## Next steps
-* Learn more about [indexing in the API for MongoDB](../mongodb-indexing.md).
+* Learn more about [indexing in the API for MongoDB](../mongodb/indexing.md).
* Learn more about [Azure Cosmos DB's sharding/partitioning](../partitioning-overview.md). * Learn more about [troubleshooting common issues](error-codes-solutions.md). * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
cosmos-db Post Migration Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/post-migration-optimization.md
Title: Post-migration optimization steps with Azure Cosmos DB's API for MongoDB
-description: This doc provides the post-migration optimization techniques from MongoDB to Azure Cosmos DB's APi for Mongo DB.
+description: This doc provides the post-migration optimization techniques from MongoDB to Azure Cosmos DB's API for MongoDB.
-++ Last updated 08/26/2021 - # Post-migration optimization steps when using Azure Cosmos DB's API for MongoDB > [!IMPORTANT] > Please read this entire guide before carrying out your post-migration steps.
In this guide, we assume that you are maintaining a record of your migration's p
In order to optimize price and performance, we recommend that you step through your data estate migration spreadsheet and design an index configuration for each resource. 1. We actually recommend [planning your indexes during the pre-migration phase](pre-migration-steps.md#post-migration). Add a column to your data estate migration spreadsheet for index settings.
- * The Azure Cosmos DB API for MongoDB server versions 3.6 and higher automatically index the _id field only. This field can't be dropped. It automatically enforces the uniqueness of the _id field per shard key. To index additional fields, you apply the MongoDB index-management commands. This default indexing policy differs from the Azure Cosmos DB SQL API, which indexes all fields by default.
+ * The Azure Cosmos DB for MongoDB server versions 3.6 and higher automatically index the _id field only. This field can't be dropped. It automatically enforces the uniqueness of the _id field per shard key. To index additional fields, you apply the MongoDB index-management commands. This default indexing policy differs from the Azure Cosmos DB for NoSQL, which indexes all fields by default.
- * For the Azure Cosmos DB API for MongoDB server version 3.2, all data fields are automatically indexed, by default, during the migration of data to Azure Cosmos DB. In many cases, this default indexing policy is acceptable. In general, removing indexes optimizes write requests and having the default indexing policy (i.e., automatic indexing) optimizes read requests.
+ * For the Azure Cosmos DB for MongoDB server version 3.2, all data fields are automatically indexed, by default, during the migration of data to Azure Cosmos DB. In many cases, this default indexing policy is acceptable. In general, removing indexes optimizes write requests and having the default indexing policy (i.e., automatic indexing) optimizes read requests.
* The indexing capabilities provided by Azure Cosmos DB include adding compound indices, unique indices and time-to-live (TTL) indices. The index management interface is mapped to the createIndex() command. Learn more at Indexing in Azure Cosmos DB and Indexing in Azure Cosmos DB's API for MongoDB. 2. Apply these index settings during post-migration.
In order to optimize price and performance, we recommend that you step through y
## Globally distribute your data Azure Cosmos DB is available in all [Azure regions](https://azure.microsoft.com/regions/#services) worldwide.
-1. Follow the guidance in the article [Distribute data globally on Azure Cosmos DB's API for MongoDB](tutorial-global-distribution-mongodb.md) in order to globally distribute your data. After selecting the default consistency level for your Azure Cosmos DB account, you can associate one or more Azure regions (depending on your global distribution needs). For high availability and business continuity, we always recommend running in at least 2 regions. You can review the tips for [optimizing cost of multi-region deployments in Azure Cosmos DB](../optimize-cost-regions.md).
+1. Follow the guidance in the article [Distribute data globally on Azure Cosmos DB's API for MongoDB](tutorial-global-distribution.md) in order to globally distribute your data. After selecting the default consistency level for your Azure Cosmos DB account, you can associate one or more Azure regions (depending on your global distribution needs). For high availability and business continuity, we always recommend running in at least 2 regions. You can review the tips for [optimizing cost of multi-region deployments in Azure Cosmos DB](../optimize-cost-regions.md).
## Set consistency level
The processing of cutting-over or connecting your application allows you to swit
4. Use the connection information in your application's configuration (or other relevant places) to reflect the Azure Cosmos DB's API for MongoDB connection in your app. :::image type="content" source="./media/post-migration-optimization/connection-string.png" alt-text="Screenshot shows the settings for a Connection String.":::
-For more details, please see the [Connect a MongoDB application to Azure Cosmos DB](connect-mongodb-account.md) page.
+For more details, please see the [Connect a MongoDB application to Azure Cosmos DB](connect-account.md) page.
## Tune for optimal performance
One convenient fact about [indexing](#optimize-the-indexing-policy), [global dis
* Trying to do capacity planning for a migration to Azure Cosmos DB? * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
-* [Connect a MongoDB application to Azure Cosmos DB](connect-mongodb-account.md)
+* [Connect a MongoDB application to Azure Cosmos DB](connect-account.md)
* [Connect to Azure Cosmos DB account using Studio 3T](connect-using-mongochef.md) * [How to globally distribute reads using Azure Cosmos DB's API for MongoDB](readpreference-global-distribution.md)
-* [Expire data with Azure Cosmos DB's API for MongoDB](mongodb-time-to-live.md)
+* [Expire data with Azure Cosmos DB's API for MongoDB](time-to-live.md)
* [Consistency Levels in Azure Cosmos DB](../consistency-levels.md) * [Indexing in Azure Cosmos DB](../index-overview.md)
-* [Request Units in Azure Cosmos DB](../request-units.md)
+* [Request Units in Azure Cosmos DB](../request-units.md)
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/powershell-samples.md
Title: Azure PowerShell samples for Azure Cosmos DB API for MongoDB
-description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB API for MongoDB
+ Title: Azure PowerShell samples for Azure Cosmos DB for MongoDB
+description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB for MongoDB
-++ Last updated 08/26/2021
-# Azure PowerShell samples for Azure Cosmos DB API for MongoDB
+# Azure PowerShell samples for Azure Cosmos DB for MongoDB
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Azure Cosmos DB from our GitHub repository, [Azure Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
## Common Samples |Task | Description | |||
-|[Update an account](../scripts/powershell/common/account-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's default consistency level. |
-|[Update an account's regions](../scripts/powershell/common/update-region.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's regions. |
-|[Change failover priority or trigger failover](../scripts/powershell/common/failover-priority-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Change the regional failover priority of an Azure Cosmos account or trigger a manual failover. |
+|[Update an account](../scripts/powershell/common/account-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update an Azure Cosmos DB account's default consistency level. |
+|[Update an account's regions](../scripts/powershell/common/update-region.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update an Azure Cosmos DB account's regions. |
+|[Change failover priority or trigger failover](../scripts/powershell/common/failover-priority-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Change the regional failover priority of an Azure Cosmos DB account or trigger a manual failover. |
|[Account keys or connection strings](../scripts/powershell/common/keys-connection-strings.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Get primary and secondary keys, connection strings or regenerate an account key of an Azure Cosmos DB account. |
-|[Create a Cosmos Account with IP Firewall](../scripts/powershell/common/firewall-create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account with IP Firewall enabled. |
+|[Create an Azure Cosmos DB Account with IP Firewall](../scripts/powershell/common/firewall-create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account with IP Firewall enabled. |
|||
-## Mongo DB API Samples
+## MongoDB API Samples
|Task | Description | |||
-|[Create an account, database and collection](../scripts/powershell/mongodb/create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account, database and collection. |
-|[Create an account, database and collection with autoscale](../scripts/powershell/mongodb/autoscale.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account, database and collection with autoscale. |
+|[Create an account, database and collection](../scripts/powershell/mongodb/create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos DB account, database and collection. |
+|[Create an account, database and collection with autoscale](../scripts/powershell/mongodb/autoscale.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos DB account, database and collection with autoscale. |
|[List or get databases or collections](../scripts/powershell/mongodb/list-get.md?toc=%2fpowershell%2fmodule%2ftoc.json)| List or get database or collection. | |[Perform throughput operations](../scripts/powershell/mongodb/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Perform throughput operations for a database or collection including get, update and migrate between autoscale and standard throughput. | |[Lock resources from deletion](../scripts/powershell/mongodb/lock.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Prevent resources from being deleted with resource locks. |
The following table includes links to commonly used Azure PowerShell scripts for
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Pre Migration Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/pre-migration-steps.md
Title: Pre-migration steps for data migration to Azure Cosmos DB's API for MongoDB
-description: This doc provides an overview of the prerequisites for a data migration from MongoDB to Cosmos DB.
+description: This doc provides an overview of the prerequisites for a data migration from MongoDB to Azure Cosmos DB.
-++ Last updated 04/05/2022
# Pre-migration steps for data migrations from MongoDB to Azure Cosmos DB's API for MongoDB > [!IMPORTANT] > Please read this entire guide before carrying out your pre-migration steps.
The results are printed as an output in the DMA notebook and saved to a CSV file
> [!NOTE] > Database Migration Assistant is a preliminary utility meant to assist you with the pre-migration steps. It does not perform an end-to-end assessment.
-> In addition to running the DMA, we also recommend you to go through [the supported features and syntax](./feature-support-42.md), [Cosmos DB limits and quotas](../concepts-limits.md#per-account-limits) in detail, as well as perform a proof-of-concept prior to the actual migration.
+> In addition to running the DMA, we also recommend that you go through [the supported features and syntax](./feature-support-42.md) and [Azure Cosmos DB limits and quotas](../concepts-limits.md#per-account-limits) in detail, as well as perform a proof-of-concept prior to the actual migration.
## Pre-migration mapping
Figure out what Azure Cosmos DB resources you'll create. This means stepping thr
* Anticipate that each MongoDB database will become an Azure Cosmos DB database. * Anticipate that each MongoDB collection will become an Azure Cosmos DB collection. * Choose a naming convention for your Azure Cosmos DB resources. Barring any change in the structure of databases and collections, keeping the same resource names is usually a fine choice.
-* Determine whether you'll be using sharded or unsharded collections in Cosmos DB. The unsharded collection limit is 20 GB. Sharding, on the other hand, helps achieve horizontal scale that is critical to the performance of many workloads.
+* Determine whether you'll be using sharded or unsharded collections in Azure Cosmos DB. The unsharded collection limit is 20 GB. Sharding, on the other hand, helps achieve horizontal scale that is critical to the performance of many workloads.
* If using sharded collections, *do not assume that your MongoDB collection shard key becomes your Azure Cosmos DB collection shard key. Do not assume that your existing MongoDB data model/document structure is what you'll employ on Azure Cosmos DB.* * Shard key is the single most important setting for optimizing the scalability and performance of Azure Cosmos DB, and data modeling is the second most important. Both of these settings are immutable and cannot be changed once they are set; therefore it is highly important to optimize them in the planning phase. Follow the guidance in the [Immutable decisions](#immutable-decisions) section for more information. * Azure Cosmos DB does not recognize certain MongoDB collection types such as capped collections. For these resources, just create normal Azure Cosmos DB collections.
The following Azure Cosmos DB configuration choices cannot be modified or undone
* The following are key factors that affect the number of required RUs: * **Document size**: As the size of an item/document increases, the number of RUs consumed to read or write the item/document also increases.
- * **Document property count**:The number of RUs consumed to create or update a document is related to the number, complexity and length of its properties. You can reduce the request unit consumption for write operations by [limiting the number of indexed properties](mongodb-indexing.md).
+ * **Document property count**: The number of RUs consumed to create or update a document is related to the number, complexity and length of its properties. You can reduce the request unit consumption for write operations by [limiting the number of indexed properties](indexing.md).
* **Query patterns**: The complexity of a query affects how many request units are consumed by the query.
-* The best way to understand the cost of queries is to use sample data in Azure Cosmos DB, [and run sample queries from the MongoDB Shell](connect-mongodb-account.md) using the `getLastRequestStastistics` command to get the request charge, which will output the number of RUs consumed:
+* The best way to understand the cost of queries is to use sample data in Azure Cosmos DB, [and run sample queries from the MongoDB Shell](connect-account.md) using the `getLastRequestStatistics` command to get the request charge, which will output the number of RUs consumed (see the driver-side sketch after this list):
`db.runCommand({getLastRequestStatistics: 1})`
The following Azure Cosmos DB configuration choices cannot be modified or undone
`{ "_t": "GetRequestStatisticsResponse", "ok": 1, "CommandName": "find", "RequestCharge": 10.1, "RequestDurationInMilliSeconds": 7.2}`
-* You can also use [the diagnostic settings](../cosmosdb-monitor-resource-logs.md) to understand the frequency and patterns of the queries executed against Azure Cosmos DB. The results from the diagnostic logs can be sent to a storage account, an EventHub instance or [Azure Log Analytics](../../azure-monitor/logs/log-analytics-tutorial.md).
+* You can also use [the diagnostic settings](../monitor-resource-logs.md) to understand the frequency and patterns of the queries executed against Azure Cosmos DB. The results from the diagnostic logs can be sent to a storage account, an EventHub instance or [Azure Log Analytics](../../azure-monitor/logs/log-analytics-tutorial.md).
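The driver-side sketch referenced above, shown here in PyMongo with placeholder names, runs a representative query and then reads back its request charge:

```python
from pymongo import MongoClient

db = MongoClient("<CONNECTION STRING>")["mydb"]  # placeholder connection string

# Run a representative query, then ask for the RU charge of that last request.
list(db["coll"].find({"category": "books"}).limit(10))
stats = db.command("getLastRequestStatistics")
print(stats.get("CommandName"), stats.get("RequestCharge"))
```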
## Pre-migration logistics planning
In the pre-migration phase, spend some time to plan what steps you will take tow
* Trying to do capacity planning for a migration to Azure Cosmos DB?
  * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
  * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
-* Migrate to Azure Cosmos DB API for MongoDB
+* Migrate to Azure Cosmos DB for MongoDB
  * [Offline migration using MongoDB native tools](tutorial-mongotools-cosmos-db.md)
  * [Offline migration using Azure Database Migration Service (DMS)](../../dms/tutorial-mongodb-cosmos-db.md)
  * [Online migration using Azure Database Migration Service (DMS)](../../dms/tutorial-mongodb-cosmos-db-online.md)
  * [Offline/online migration using Azure Databricks and Spark](migrate-databricks.md)
-* [Post-migration guide](post-migration-optimization.md) - optimize steps once you have migrated to Azure Cosmos DB API for MongoDB
-* [Provision throughput on Azure Cosmos containers and databases](../set-throughput.md)
+* [Post-migration guide](post-migration-optimization.md) - optimization steps once you've migrated to Azure Cosmos DB for MongoDB
+* [Provision throughput on Azure Cosmos DB containers and databases](../set-throughput.md)
* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
* [Global Distribution in Azure Cosmos DB](../distribute-data-globally.md)
* [Indexing in Azure Cosmos DB](../index-overview.md)
cosmos-db Prevent Rate Limiting Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/prevent-rate-limiting-errors.md
Title: Prevent rate-limiting errors for Azure Cosmos DB API for MongoDB operations.
-description: Learn how to prevent your Azure Cosmos DB API for MongoDB operations from hitting rate limiting errors with the SSR (server-side retry) feature.
+ Title: Prevent rate-limiting errors for Azure Cosmos DB for MongoDB operations.
+description: Learn how to prevent your Azure Cosmos DB for MongoDB operations from hitting rate limiting errors with the SSR (server-side retry) feature.
-++ Last updated 08/26/2021
-# Prevent rate-limiting errors for Azure Cosmos DB API for MongoDB operations
+# Prevent rate-limiting errors for Azure Cosmos DB for MongoDB operations
-Azure Cosmos DB API for MongoDB operations may fail with rate-limiting (16500/429) errors if they exceed a collection's throughput limit (RUs).
+Azure Cosmos DB for MongoDB operations may fail with rate-limiting (16500/429) errors if they exceed a collection's throughput limit (RUs).
You can enable the Server Side Retry (SSR) feature and let the server retry these operations automatically. The requests are retried after a short delay for all collections in your account. This feature is a convenient alternative to handling rate-limiting errors in the client application.
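Without SSR, the client application has to detect and retry these errors itself. The following is a minimal Go sketch of that client-side alternative, assuming the MongoDB Go driver plus the standard `context`, `errors`, and `time` packages; the attempt count and backoff values are arbitrary, and `coll` is an already-initialized `*mongo.Collection`:

```go
// insertWithRetry retries a write when the API for MongoDB reports the
// rate-limiting error (code 16500, surfaced to MongoDB clients instead of HTTP 429).
func insertWithRetry(ctx context.Context, coll *mongo.Collection, doc interface{}) error {
	backoff := 100 * time.Millisecond
	for attempt := 0; attempt < 5; attempt++ {
		if _, err := coll.InsertOne(ctx, doc); err == nil {
			return nil
		} else if !isRateLimited(err) {
			return err
		}
		time.Sleep(backoff) // arbitrary exponential backoff
		backoff *= 2
	}
	return errors.New("gave up after repeated rate-limiting (16500) errors")
}

// isRateLimited reports whether err contains the 16500 rate-limiting write error.
func isRateLimited(err error) bool {
	var we mongo.WriteException
	if errors.As(err, &we) {
		for _, e := range we.WriteErrors {
			if e.Code == 16500 {
				return true
			}
		}
	}
	return false
}
```

Enabling SSR removes the need for this kind of loop, because the retry happens inside the service before the error ever reaches the driver.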
You can enable the Server Side Retry (SSR) feature and let the server retry thes
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Navigate to your Azure Cosmos DB API for MongoDB account.
+1. Navigate to your Azure Cosmos DB for MongoDB account.
1. Go to the **Features** pane underneath the **Settings** section.
You can enable the Server Side Retry (SSR) feature and let the server retry thes
1. Click **Enable** to enable this feature for all collections in your account.

## Use the Azure CLI
Requests are retried continuously (over and over again) until a 60-second timeou
### How can I monitor the effects of a server-side retry?
-You can view the rate limiting errors (429) that are retried server-side in the Cosmos DB Metrics pane. Keep in mind that these errors don't go to the client when SSR is enabled, since they are handled and retried server-side.
+You can view the rate limiting errors (429) that are retried server-side in the Azure Cosmos DB Metrics pane. Keep in mind that these errors don't go to the client when SSR is enabled, since they are handled and retried server-side.
-You can search for log entries containing *estimatedDelayFromRateLimitingInMilliseconds* in your [Cosmos DB resource logs](../cosmosdb-monitor-resource-logs.md).
+You can search for log entries containing *estimatedDelayFromRateLimitingInMilliseconds* in your [Azure Cosmos DB resource logs](../monitor-resource-logs.md).
### Will server-side retry affect my consistency level?
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-dotnet.md
Title: Quickstart - Azure Cosmos DB MongoDB API for .NET with MongoDB drier
-description: Learn how to build a .NET app to manage Azure Cosmos DB MongoDB API account resources in this quickstart.
+ Title: Quickstart - Azure Cosmos DB for MongoDB for .NET with MongoDB driver
+description: Learn how to build a .NET app to manage Azure Cosmos DB for MongoDB account resources in this quickstart.
-+ ms.devlang: dotnet Last updated 07/06/2022-+
-# Quickstart: Azure Cosmos DB MongoDB API for .NET with the MongoDB driver
+# Quickstart: Azure Cosmos DB for MongoDB for .NET with the MongoDB driver
-Get started with MongoDB to create databases, collections, and docs within your Cosmos DB resource. Follow these steps to install the package and try out example code for basic tasks.
+Get started with MongoDB to create databases, collections, and docs within your Azure Cosmos DB resource. Follow these steps to install the package and try out example code for basic tasks.
> [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-dotnet-samples) are available on GitHub as a .NET project.
-[MongoDB API reference documentation](https://www.mongodb.com/docs/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
+[API for MongoDB reference documentation](https://www.mongodb.com/docs/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
## Prerequisites
Get started with MongoDB to create databases, collections, and docs within your
## Setting up
-This section walks you through creating an Azure Cosmos account and setting up a project that uses the MongoDB NuGet packages.
+This section walks you through creating an Azure Cosmos DB account and setting up a project that uses the MongoDB NuGet packages.
### Create an Azure Cosmos DB account
-This quickstart will create a single Azure Cosmos DB account using the MongoDB API.
+This quickstart will create a single Azure Cosmos DB account using the API for MongoDB.
#### [Azure CLI](#tab/azure-cli)
dotnet add package MongoDB.Driver
Before you start building the application, let's look into the hierarchy of resources in Azure Cosmos DB. Azure Cosmos DB has a specific object model used to create and access resources. Azure Cosmos DB creates resources in a hierarchy that consists of accounts, databases, collections, and docs.

Hierarchical diagram showing an Azure Cosmos DB account at the top. The account has two child database nodes. One of the database nodes includes two child collection nodes. The other database node includes a single child collection node. That single collection node has three child doc nodes.

You'll use the following MongoDB classes to interact with these resources:
-* [``MongoClient``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoClient.htm) - This class provides a client-side logical representation for the MongoDB API layer on Cosmos DB. The client object is used to configure and execute requests against the service.
+* [``MongoClient``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoClient.htm) - This class provides a client-side logical representation for the API for MongoDB layer on Azure Cosmos DB. The client object is used to configure and execute requests against the service.
* [``MongoDatabase``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoDatabase.htm) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
* [``Collection``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoCollection.htm) - This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.
After you insert an item, you can run a query to get all items that match a spec
## Run the code
-This app creates an Azure Cosmos MongoDb API database and collection. The example then creates an item and then reads the exact same item back. Finally, the example creates a second item and then performs a query that should return multiple items. With each step, the example outputs metadata to the console about the steps it has performed.
+This app creates an Azure Cosmos DB for MongoDB database and collection. The example then creates an item and reads the exact same item back. Finally, the example creates a second item and then performs a query that should return multiple items. With each step, the example outputs metadata to the console about the steps it has performed.
To run the app, use a terminal to navigate to the application directory and run the application.
Sand Surfboard
## Clean up resources
-When you no longer need the Azure Cosmos DB SQL API account, you can delete the corresponding resource group.
+When you no longer need the Azure Cosmos DB for MongoDB account, you can delete the corresponding resource group.
### [Azure CLI / Resource Manager template](#tab/azure-cli)
Remove-AzResourceGroup @parameters
> In this quickstart, we recommended the name ``msdocs-cosmos-quickstart-rg``.

1. Select **Delete resource group**.
- :::image type="content" source="media/delete-account-portal/delete-resource-group-option.png" lightbox="media/delete-account-portal/delete-resource-group-option.png" alt-text="Screenshot of the Delete resource group option in the navigation bar for a resource group.":::
+ :::image type="content" source="media/quickstart-dotnet/delete-resource-group-option.png" lightbox="media/quickstart-dotnet/delete-resource-group-option.png" alt-text="Screenshot of the Delete resource group option in the navigation bar for a resource group.":::
1. On the **Are you sure you want to delete** dialog, enter the name of the resource group, and then select **Delete**.
- :::image type="content" source="media/delete-account-portal/delete-confirmation.png" lightbox="media/delete-account-portal/delete-confirmation.png" alt-text="Screenshot of the delete confirmation page for a resource group.":::
+ :::image type="content" source="media/quickstart-dotnet/delete-confirmation.png" lightbox="media/quickstart-dotnet/delete-confirmation.png" alt-text="Screenshot of the delete confirmation page for a resource group.":::
cosmos-db Quickstart Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-go.md
+
+ Title: Connect a Go application to Azure Cosmos DB's API for MongoDB
+description: This quickstart demonstrates how to connect an existing Go application to Azure Cosmos DB's API for MongoDB.
++++
+ms.devlang: golang
+ Last updated : 04/26/2022++
+# Quickstart: Connect a Go application to Azure Cosmos DB's API for MongoDB
+
+> [!div class="op_single_selector"]
+> * [.NET](create-mongodb-dotnet.md)
+> * [Python](quickstart-python.md)
+> * [Java](quickstart-java.md)
+> * [Node.js](create-mongodb-nodejs.md)
+> * [Golang](quickstart-go.md)
+>
+
+Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities. In this quickstart, you create and manage an Azure Cosmos DB account by using the Azure Cloud Shell, clone an existing sample application from GitHub and configure it to work with Azure Cosmos DB.
+
+The sample application is a command-line based `todo` management tool written in Go. Azure Cosmos DB's API for MongoDB is [compatible with the MongoDB wire protocol](./introduction.md), making it possible for any MongoDB client driver to connect to it. This application uses the [Go driver for MongoDB](https://github.com/mongodb/mongo-go-driver); the fact that the data is stored in an Azure Cosmos DB database is transparent to the application.
+
+## Prerequisites
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with the connection string `mongodb://localhost:C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==@localhost:10255/admin?ssl=true`.
+- [Go](https://go.dev/) installed on your computer, and a working knowledge of Go.
+- [Git](https://git-scm.com/downloads).
+
+## Clone the sample application
+
+Run the following commands to clone the sample repository.
+
+1. Open a command prompt, create a new folder named `git-samples`, then close the command prompt.
+
+ ```bash
+ mkdir "C:\git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/cosmosdb-go-mongodb-quickstart
+ ```
+
+## Review the code
+
+This step is optional. If you're interested in learning how the application works, you can review the following snippets. Otherwise, you can skip ahead to [Run the application](#run-the-application). The application layout is as follows:
+
+```bash
+.
+├── go.mod
+├── go.sum
+└── todo.go
+```
+
+The following snippets are all taken from the `todo.go` file.
+
+### Connecting the Go app to Azure Cosmos DB
+
+[`clientOptions`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo/options?tab=doc#ClientOptions) encapsulates the connection string for Azure Cosmos DB, which is passed in using an environment variable (details in the upcoming section). The connection is initialized using [`mongo.NewClient`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#NewClient), to which the `clientOptions` instance is passed. The [`Ping` function](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Client.Ping) is then invoked to confirm successful connectivity (it is a fail-fast strategy).
+
+```go
+ ctx, cancel := context.WithTimeout(context.Background(), time.Second*10)
+ defer cancel()
+
+ clientOptions := options.Client().ApplyURI(mongoDBConnectionString).SetDirect(true)
+
+ c, err := mongo.NewClient(clientOptions)
+ if err != nil {
+ log.Fatalf("unable to create client %v", err)
+ }
+ err = c.Connect(ctx)
+ if err != nil {
+ log.Fatalf("unable to initialize connection %v", err)
+ }
+
+ err = c.Ping(ctx, nil)
+ if err != nil {
+ log.Fatalf("unable to connect %v", err)
+ }
+```
+
+> [!NOTE]
+> Using the [`SetDirect(true)`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo/options?tab=doc#ClientOptions.SetDirect) configuration is important, without which you will get the following connectivity error: `unable to connect connection(cdb-ms-prod-<azure-region>-cm1.documents.azure.com:10255[-4]) connection is closed`
+>
+
+### Create a `todo` item
+
+To create a `todo`, we get a handle to a [`mongo.Collection`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Collection) and invoke the [`InsertOne`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Collection.InsertOne) function.
+
+```go
+func create(desc string) {
+ c := connect()
+ ctx := context.Background()
+ defer c.Disconnect(ctx)
+
+ todoCollection := c.Database(database).Collection(collection)
+ r, err := todoCollection.InsertOne(ctx, Todo{Description: desc, Status: statusPending})
+ if err != nil {
+ log.Fatalf("failed to add todo %v", err)
+ }
+```
+
+We pass in a `Todo` struct that contains the description and the status (which is initially set to `pending`)
+
+```go
+type Todo struct {
+ ID primitive.ObjectID `bson:"_id,omitempty"`
+ Description string `bson:"description"`
+ Status string `bson:"status"`
+}
+```
+### List `todo` items
+
+We can list TODOs based on criteria. A [`bson.D`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/bson?tab=doc#D) is created to encapsulate the filter criteria
+
+```go
+func list(status string) {
+ .....
+ var filter interface{}
+ switch status {
+ case listAllCriteria:
+ filter = bson.D{}
+ case statusCompleted:
+ filter = bson.D{{statusAttribute, statusCompleted}}
+ case statusPending:
+ filter = bson.D{{statusAttribute, statusPending}}
+ default:
+ log.Fatal("invalid criteria for listing todo(s)")
+ }
+```
+
+[`Find`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Collection.Find) is used to search for documents based on the filter and the result is converted into a slice of `Todo`
+
+```go
+ todoCollection := c.Database(database).Collection(collection)
+ rs, err := todoCollection.Find(ctx, filter)
+ if err != nil {
+ log.Fatalf("failed to list todo(s) %v", err)
+ }
+ var todos []Todo
+ err = rs.All(ctx, &todos)
+ if err != nil {
+ log.Fatalf("failed to list todo(s) %v", err)
+ }
+```
+
+Finally, the information is rendered in tabular format
+
+```go
+ todoTable := [][]string{}
+
+ for _, todo := range todos {
+ s, _ := todo.ID.MarshalJSON()
+ todoTable = append(todoTable, []string{string(s), todo.Description, todo.Status})
+ }
+
+ table := tablewriter.NewWriter(os.Stdout)
+ table.SetHeader([]string{"ID", "Description", "Status"})
+
+ for _, v := range todoTable {
+ table.Append(v)
+ }
+ table.Render()
+```
+
+### Update a `todo` item
+
+A `todo` can be updated based on its `_id`. A [`bson.D`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/bson?tab=doc#D) filter is created based on the `_id` and another one is created for the updated information, which is a new status (`completed` or `pending`) in this case. Finally, the [`UpdateOne`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Collection.UpdateOne) function is invoked with the filter and the updated document
+
+```go
+func update(todoid, newStatus string) {
+....
+ todoCollection := c.Database(database).Collection(collection)
+ oid, err := primitive.ObjectIDFromHex(todoid)
+ if err != nil {
+ log.Fatalf("failed to update todo %v", err)
+ }
+ filter := bson.D{{"_id", oid}}
+ update := bson.D{{"$set", bson.D{{statusAttribute, newStatus}}}}
+ _, err = todoCollection.UpdateOne(ctx, filter, update)
+ if err != nil {
+ log.Fatalf("failed to update todo %v", err)
+ }
+```
+
+### Delete a `todo`
+
+A `todo` is deleted based on its `_id` and it is encapsulated in the form of a [`bson.D`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/bson?tab=doc#D) instance. [`DeleteOne`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Collection.DeleteOne) is invoked to delete the document.
+
+```go
+func delete(todoid string) {
+....
+ todoCollection := c.Database(database).Collection(collection)
+ oid, err := primitive.ObjectIDFromHex(todoid)
+ if err != nil {
+ log.Fatalf("invalid todo ID %v", err)
+ }
+ filter := bson.D{{"_id", oid}}
+ _, err = todoCollection.DeleteOne(ctx, filter)
+ if err != nil {
+ log.Fatalf("failed to delete todo %v", err)
+ }
+}
+```
+
+## Build the application
+
+Change into the directory where you cloned the application and build it (using `go build`).
+
+```bash
+cd cosmosdb-go-mongodb-quickstart
+go build -o todo
+```
+
+To confirm that the application was built properly, run:
+
+```bash
+./todo --help
+```
+
+## Set up Azure Cosmos DB
+
+### Sign in to Azure
+
+If you choose to install and use the CLI locally, this topic requires that you run Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+If you are using an installed Azure CLI, sign in to your Azure subscription with the [az login](/cli/azure/reference-index#az-login) command and follow the on-screen directions. You can skip this step if you're using the Azure Cloud Shell.
+
+```azurecli
+az login
+```
+
+### Add the Azure Cosmos DB module
+
+If you are using an installed Azure CLI, check to see if the `cosmosdb` component is already installed by running the `az` command. If `cosmosdb` is in the list of base commands, proceed to the next command. You can skip this step if you're using the Azure Cloud Shell.
+
+If `cosmosdb` is not in the list of base commands, reinstall [Azure CLI](/cli/azure/install-azure-cli).
+
+### Create a resource group
+
+Create a [resource group](../../azure-resource-manager/management/overview.md) with the [az group create](/cli/azure/group#az-group-create). An Azure resource group is a logical container into which Azure resources like web apps, databases and storage accounts are deployed and managed.
+
+The following example creates a resource group in the West Europe region. Choose a unique name for the resource group.
+
+If you are using Azure Cloud Shell, select **Try It**, follow the onscreen prompts to sign in, and then copy the command into the command prompt.
+
+```azurecli-interactive
+az group create --name myResourceGroup --location "West Europe"
+```
+
+### Create an Azure Cosmos DB account
+
+Create an Azure Cosmos DB account with the [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) command.
+
+In the following command, please substitute your own unique Azure Cosmos DB account name where you see the `<cosmosdb-name>` placeholder. This unique name will be used as part of your Azure Cosmos DB endpoint (`https://<cosmosdb-name>.documents.azure.com/`), so the name needs to be unique across all Azure Cosmos DB accounts in Azure.
+
+```azurecli-interactive
+az cosmosdb create --name <cosmosdb-name> --resource-group myResourceGroup --kind MongoDB
+```
+
+The `--kind MongoDB` parameter enables MongoDB client connections.
+
+When the Azure Cosmos DB account is created, the Azure CLI shows information similar to the following example.
+
+> [!NOTE]
+> This example uses JSON as the Azure CLI output format, which is the default. To use another output format, see [Output formats for Azure CLI commands](/cli/azure/format-output-azure-cli).
+
+```json
+{
+ "databaseAccountOfferType": "Standard",
+ "documentEndpoint": "https://<cosmosdb-name>.documents.azure.com:443/",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.DocumentDB/databaseAccounts/<cosmosdb-name>",
+ "kind": "MongoDB",
+ "location": "West Europe",
+ "name": "<cosmosdb-name>",
+ "readLocations": [
+ {
+ "documentEndpoint": "https://<cosmosdb-name>-westeurope.documents.azure.com:443/",
+ "failoverPriority": 0,
+ "id": "<cosmosdb-name>-westeurope",
+ "locationName": "West Europe",
+ "provisioningState": "Succeeded"
+ }
+ ],
+ "resourceGroup": "myResourceGroup",
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "writeLocations": [
+ {
+ "documentEndpoint": "https://<cosmosdb-name>-westeurope.documents.azure.com:443/",
+ "failoverPriority": 0,
+ "id": "<cosmosdb-name>-westeurope",
+ "locationName": "West Europe",
+ "provisioningState": "Succeeded"
+ }
+ ]
+}
+```
+
+### Retrieve the database key
+
+In order to connect to an Azure Cosmos DB database, you need the database key. Use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command to retrieve the primary key.
+
+```azurecli-interactive
+az cosmosdb keys list --name <cosmosdb-name> --resource-group myResourceGroup --query "primaryMasterKey"
+```
+
+The Azure CLI outputs information similar to the following example.
+
+```json
+"RUayjYjixJDWG5xTqIiXjC..."
+```
+
+## Configure the application
+
+<a name="devconfig"></a>
+### Export the connection string, MongoDB database and collection names as environment variables.
+
+```bash
+export MONGODB_CONNECTION_STRING="mongodb://<COSMOSDB_ACCOUNT_NAME>:<COSMOSDB_PASSWORD>@<COSMOSDB_ACCOUNT_NAME>.documents.azure.com:10255/?ssl=true&replicaSet=globaldb&maxIdleTimeMS=120000&appName=@<COSMOSDB_ACCOUNT_NAME>@"
+```
+
+> [!NOTE]
+> The `ssl=true` option is important because of Azure Cosmos DB requirements. For more information, see [Connection string requirements](connect-account.md#connection-string-requirements).
+>
+
+For the `MONGODB_CONNECTION_STRING` environment variable, replace the placeholders for `<COSMOSDB_ACCOUNT_NAME>` and `<COSMOSDB_PASSWORD>`
+
+1. `<COSMOSDB_ACCOUNT_NAME>`: The name of the Azure Cosmos DB account you created
+2. `<COSMOSDB_PASSWORD>`: The database key extracted in the previous step
+
+```bash
+export MONGODB_DATABASE=todo-db
+export MONGODB_COLLECTION=todos
+```
+
+You can choose your preferred values for `MONGODB_DATABASE` and `MONGODB_COLLECTION` or leave them as is.
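At startup, the application picks these values up from the environment. A minimal sketch of how that typically looks in Go with `os.Getenv` (the variable names match the exports above; the sample's actual handling and any defaults may differ slightly):

```go
// Read the connection string and target database/collection from the environment.
connectionString := os.Getenv("MONGODB_CONNECTION_STRING")
database := os.Getenv("MONGODB_DATABASE")     // for example, todo-db
collection := os.Getenv("MONGODB_COLLECTION") // for example, todos

if connectionString == "" || database == "" || collection == "" {
	log.Fatal("MONGODB_CONNECTION_STRING, MONGODB_DATABASE, and MONGODB_COLLECTION must be set")
}

// SetDirect(true) is required for Azure Cosmos DB, as noted earlier.
clientOptions := options.Client().ApplyURI(connectionString).SetDirect(true)
```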
+
+## Run the application
+
+To create a `todo`
+
+```bash
+./todo --create "Create an Azure Cosmos DB database account"
+```
+
+If successful, you should see an output with the MongoDB `_id` of the newly created document:
+
+```bash
+added todo ObjectID("5e9fd6befd2f076d1f03bd8a")
+```
+
+Create another `todo`
+
+```bash
+./todo --create "Get the MongoDB connection string using the Azure CLI"
+```
+
+List all the `todo`s
+
+```bash
+./todo --list all
+```
+
+You should see the ones you just added in a tabular format as such
+
+```bash
++----------------------------+----------------------------+---------+
+|             ID             |        DESCRIPTION         | STATUS  |
++----------------------------+----------------------------+---------+
+| "5e9fd6b1bcd2fa6bd267d4c4" | Create an Azure Cosmos DB  | pending |
+|                            | database account           |         |
+| "5e9fd6befd2f076d1f03bd8a" | Get the MongoDB connection | pending |
+|                            | string using the Azure CLI |         |
++----------------------------+----------------------------+---------+
+```
+
+To update the status of a `todo` (e.g. change it to `completed` status), use the `todo` ID
+
+```bash
+./todo --update 5e9fd6b1bcd2fa6bd267d4c4,completed
+```
+
+List only the completed `todo`s
+
+```bash
+./todo --list completed
+```
+
+You should see the one you just updated
+
+```bash
++----------------------------+----------------------------+-----------+
+|             ID             |        DESCRIPTION         |  STATUS   |
++----------------------------+----------------------------+-----------+
+| "5e9fd6b1bcd2fa6bd267d4c4" | Create an Azure Cosmos DB  | completed |
+|                            | database account           |           |
++----------------------------+----------------------------+-----------+
+```
+
+### View data in Data Explorer
+
+Data stored in Azure Cosmos DB is available to view and query in the Azure portal.
+
+To view, query, and work with the user data created in the previous step, login to the [Azure portal](https://portal.azure.com) in your web browser.
+
+In the top Search box, enter **Azure Cosmos DB**. When your Azure Cosmos DB account blade opens, select your Azure Cosmos DB account. In the left navigation, select **Data Explorer**. Expand your collection in the Collections pane, and then you can view the documents in the collection, query the data, and even create and run stored procedures, triggers, and UDFs.
+++
+Delete a `todo` using its ID
+
+```bash
+./todo --delete 5e9fd6b1bcd2fa6bd267d4c4,completed
+```
+
+List the `todo`s to confirm
+
+```bash
+./todo --list all
+```
+
+The `todo` you just deleted should not be present
+
+```bash
++----------------------------+----------------------------+---------+
+|             ID             |        DESCRIPTION         | STATUS  |
++----------------------------+----------------------------+---------+
+| "5e9fd6befd2f076d1f03bd8a" | Get the MongoDB connection | pending |
+|                            | string using the Azure CLI |         |
++----------------------------+----------------------------+---------+
+```
+
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB for MongoDB account using the Azure Cloud Shell, and create and run a Go command-line app to manage `todo`s. You can now import additional data to your Azure Cosmos DB account.
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-java.md
+
+ Title: 'Quickstart: Build a web app using the Azure Cosmos DB for MongoDB and Java SDK'
+description: Learn to build a Java code sample you can use to connect to and query using Azure Cosmos DB's API for MongoDB.
++++
+ms.devlang: java
+ Last updated : 04/26/2022+++
+# Quickstart: Create a console app with Java and the API for MongoDB in Azure Cosmos DB
+
+> [!div class="op_single_selector"]
+> * [.NET](create-mongodb-dotnet.md)
+> * [Python](quickstart-python.md)
+> * [Java](quickstart-java.md)
+> * [Node.js](create-mongodb-nodejs.md)
+> * [Golang](quickstart-go.md)
+>
+
+In this quickstart, you create and manage an Azure Cosmos DB for MongoDB account from the Azure portal, and add data by using a Java SDK app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+## Prerequisites
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with the connection string `mongodb://localhost:C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==@localhost:10255/admin?ssl=true`.
+- [Java Development Kit (JDK) version 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk).
+- [Maven](https://maven.apache.org/download.cgi). Or run `apt-get install maven` to install Maven.
+- [Git](https://git-scm.com/downloads).
+
+## Create a database account
++
+## Add a collection
+
+Name your new database **db**, and your new collection **coll**.
++
+## Clone the sample application
+
+Now let's clone an app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```bash
+ md "C:\git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-mongodb-java-getting-started.git
+ ```
+
+4. Then open the code in your favorite editor.
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
+
+The following snippets are all taken from the *Program.java* file.
+
+This console app uses the [MongoDB Java driver](https://www.mongodb.com/docs/drivers/java-drivers/).
+
+* The `MongoClient` is initialized.
+
+ ```java
+ MongoClientURI uri = new MongoClientURI("FILLME");
+
+ MongoClient mongoClient = new MongoClient(uri);
+ ```
+
+* A new database and collection are created.
+
+ ```java
+ MongoDatabase database = mongoClient.getDatabase("db");
+
+ MongoCollection<Document> collection = database.getCollection("coll");
+ ```
+
+* Some documents are inserted using `MongoCollection.insertOne`
+
+ ```java
+ Document document = new Document("fruit", "apple");
+ collection.insertOne(document);
+ ```
+
+* Some queries are performed using `MongoCollection.find`
+
+ ```java
+ Document queryResult = collection.find(Filters.eq("fruit", "apple")).first();
+ System.out.println(queryResult.toJson());
+ ```
+
+## Update your connection string
+
+Now go back to the Azure portal to get your connection string information and copy it into the app.
+
+1. From your Azure Cosmos DB account, select **Quick Start**, select **Java**, then copy the connection string to your clipboard.
+
+2. Open the *Program.java* file, replace the argument to the MongoClientURI constructor with the connection string. You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
+
+## Run the console app
+
+1. Run `mvn package` in a terminal to download the required dependencies and build the application.
+
+2. Run `mvn exec:java -D exec.mainClass=GetStarted.Program` in a terminal to start your Java application.
+
+You can now use [Robomongo](connect-using-robomongo.md) / [Studio 3T](connect-using-mongochef.md) to query, modify, and work with this new data.
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB for MongoDB account, add a database and container using Data Explorer, and add data using a Java console app. You can now import additional data to your Azure Cosmos DB database.
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db Quickstart Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-javascript.md
- Title: Quickstart - Azure Cosmos DB MongoDB API for JavaScript with MongoDB drier
-description: Learn how to build a JavaScript app to manage Azure Cosmos DB MongoDB API account resources in this quickstart.
------ Previously updated : 07/06/2022---
-# Quickstart: Azure Cosmos DB MongoDB API for JavaScript with MongoDB driver
--
-Get started with the MongoDB npm package to create databases, collections, and docs within your Cosmos DB resource. Follow these steps to install the package and try out example code for basic tasks.
-
-> [!NOTE]
-> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
-
-[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (NuGet)](https://www.npmjs.com/package/mongodb)
-
-## Prerequisites
-
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* [Node.js LTS](https://nodejs.org/en/download/)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
-
-### Prerequisite check
-
-* In a terminal or command window, run ``node --version`` to check that Node.js is one of the LTS versions.
-* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
-
-## Setting up
-
-This section walks you through creating an Azure Cosmos account and setting up a project that uses the MongoDB npm package.
-
-### Create an Azure Cosmos DB account
-
-This quickstart will create a single Azure Cosmos DB account using the MongoDB API.
-
-#### [Azure CLI](#tab/azure-cli)
--
-#### [PowerShell](#tab/azure-powershell)
--
-#### [Portal](#tab/azure-portal)
----
-### Get MongoDB connection string
-
-#### [Azure CLI](#tab/azure-cli)
--
-#### [PowerShell](#tab/azure-powershell)
--
-#### [Portal](#tab/azure-portal)
----
-### Create a new JavaScript app
-
-Create a new JavaScript application in an empty folder using your preferred terminal. Use the [``npm init``](https://docs.npmjs.com/cli/v8/commands/npm-init) command to begin the prompts to create the `package.json` file. Accept the defaults for the prompts.
-
-```console
-npm init
-```
-
-### Install the package
-
-Add the [MongoDB](https://www.npmjs.com/package/mongodb) npm package to the JavaScript project. Use the [``npm install package``](https://docs.npmjs.com/cli/v8/commands/npm-install) command specifying the name of the npm package. The `dotenv` package is used to read the environment variables from a `.env` file during local development.
-
-```console
-npm install mongodb dotenv
-```
-
-### Configure environment variables
--
-## Object model
-
-Before you start building the application, let's look into the hierarchy of resources in Azure Cosmos DB. Azure Cosmos DB has a specific object model used to create and access resources. The Azure Cosmos DB creates resources in a hierarchy that consists of accounts, databases, collections, and docs.
-
- Hierarchical diagram showing an Azure Cosmos DB account at the top. The account has two child database nodes. One of the database nodes includes two child collection nodes. The other database node includes a single child collection node. That single collection node has three child doc nodes.
-
-You'll use the following MongoDB classes to interact with these resources:
-
-* [``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html) - This class provides a client-side logical representation for the MongoDB API layer on Cosmos DB. The client object is used to configure and execute requests against the service.
-* [``Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
-* [``Collection``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html) - This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.
-
-## Code examples
-
-* [Authenticate the client](#authenticate-the-client)
-* [Get database instance](#get-database-instance)
-* [Get collection instance](#get-collection-instance)
-* [Chained instances](#chained-instances)
-* [Create an index](#create-an-index)
-* [Create a doc](#create-a-doc)
-* [Get an doc](#get-a-doc)
-* [Query docs](#query-docs)
-
-The sample code described in this article creates a database named ``adventureworks`` with a collection named ``products``. The ``products`` collection is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
-
-For this procedure, the database won't use sharding.
-
-### Authenticate the client
-
-1. From the project directory, create an *index.js* file. In your editor, add requires statements to reference the MongoDB and DotEnv npm packages.
-
- :::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="package_dependencies":::
-
-2. Define a new instance of the ``MongoClient,`` class using the constructor, and [``process.env.``](https://nodejs.org/dist/latest-v8.x/docs/api/process.html#process_process_env) to read the environment variable you created earlier.
-
- :::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="client_credentials":::
-
-For more information on different ways to create a ``MongoClient`` instance, see [MongoDB NodeJS Driver Quick Start](https://www.npmjs.com/package/mongodb#quick-start).
-
-### Set up asynchronous operations
-
-In the ``index.js`` file, add the following code to support the asynchronous operations:
-
-```javascript
-async function main(){
-
-// The remaining operations are added here
-// in the main function
-
-}
-
-main()
- .then(console.log)
- .catch(console.error)
- .finally(() => client.close());
-```
-
-The following code snippets should be added into the *main* function in order to handle the async/await syntax.
-
-### Connect to the database
-
-Use the [``MongoClient.connect``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#connect) method to connect to your Cosmos DB API for MongoDB resource. The connect method returns a reference to the database.
--
-### Get database instance
-
-Use the [``MongoClient.db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#db) gets a reference to a database.
--
-### Get collection instance
-
-The [``MongoClient.Db.collection``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html#collection) gets a reference to a collection.
--
-### Chained instances
-
-You can chain the client, database, and collection together. Chaining is more convenient if you need to access multiple databases or collections.
-
-```javascript
-const db = await client.db(`adventureworks`).collection('products').updateOne(query, update, options)
-```
-
-### Create an index
-
-Use the [``Collection.createIndex``](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#createIndex) to create an index on the document's properties you intend to use for sorting with the MongoDB's [``FindCursor.sort``](https://mongodb.github.io/node-mongodb-native/4.7/classes/FindCursor.html#sort) method.
--
-### Create a doc
-
-Create a doc with the *product* properties for the `adventureworks` database:
-
-* An _id property for the unique identifier of the product.
-* A *category* property. This property can be used as the logical partition key.
-* A *name* property.
-* An inventory *quantity* property.
-* A *sale* property, indicating whether the product is on sale.
--
-Create an doc in the collect by calling [``Collection.UpdateOne``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html#updateOne). In this example, we chose to *upsert* instead of *create* a new doc in case you run this sample code more than once.
-
-### Get a doc
-
-In Azure Cosmos DB, you can perform a less-expensive [point read](https://devblogs.microsoft.com/cosmosdb/point-reads-versus-queries/) operation by using both the unique identifier (``_id``) and partition key (``category``).
--
-### Query docs
-
-After you insert a doc, you can run a query to get all docs that match a specific filter. This example finds all docs that match a specific category: `gear-surf-surfboards`. Once the query is defined, call [``Collection.find``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html#find) to get a [``FindCursor``](https://mongodb.github.io/node-mongodb-native/4.7/classes/FindCursor.html) result. Convert the cursor into an array to use JavaScript array methods.
--
-Troubleshooting:
-
-* If you get an error such as `The index path corresponding to the specified order-by item is excluded.`, make sure you [created the index](#create-an-index).
-
-## Run the code
-
-This app creates a MongoDB API database and collection and creates a doc and then reads the exact same doc back. Finally, the example issues a query that should only return that single doc. With each step, the example outputs information to the console about the steps it has performed.
-
-To run the app, use a terminal to navigate to the application directory and run the application.
-
-```console
-node index.js
-```
-
-The output of the app should be similar to this example:
--
-## Clean up resources
-
-When you no longer need the Azure Cosmos DB SQL API account, you can delete the corresponding resource group.
-
-### [Azure CLI](#tab/azure-cli)
-
-Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delete the resource group.
-
-```azurecli-interactive
-az group delete --name $resourceGroupName
-```
-
-### [PowerShell](#tab/azure-powershell)
-
-Use the [``Remove-AzResourceGroup``](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to delete the resource group.
-
-```azurepowershell-interactive
-$parameters = @{
- Name = $RESOURCE_GROUP_NAME
-}
-Remove-AzResourceGroup @parameters
-```
-
-### [Portal](#tab/azure-portal)
-
-1. Navigate to the resource group you previously created in the Azure portal.
-
- > [!TIP]
- > In this quickstart, we recommended the name ``msdocs-cosmos-javascript-quickstart-rg``.
-1. Select **Delete resource group**.
-
- :::image type="content" source="media/quickstart-javascript/delete-resource-group-option.png" lightbox="media/quickstart-javascript/delete-resource-group-option.png" alt-text="Screenshot of the Delete resource group option in the navigation bar for a resource group.":::
-
-1. On the **Are you sure you want to delete** dialog, enter the name of the resource group, and then select **Delete**.
-
- :::image type="content" source="media/quickstart-javascript/delete-confirmation.png" lightbox="media/quickstart-javascript/delete-confirmation.png" alt-text="Screenshot of the delete confirmation page for a resource group.":::
---
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB MongoDB API account, create a database, and create a collection using the MongoDB driver. You can now dive deeper into the Cosmos DB MongoDB API to import more data, perform complex queries, and manage your Azure Cosmos DB MongoDB resources.
-
-> [!div class="nextstepaction"]
-> [Migrate MongoDB to Azure Cosmos DB API for MongoDB offline](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%3ftoc%3d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-nodejs.md
+
+ Title: Quickstart - Azure Cosmos DB for MongoDB for JavaScript with MongoDB driver
+description: Learn how to build a JavaScript app to manage Azure Cosmos DB for MongoDB account resources in this quickstart.
+++++
+ms.devlang: javascript
+ Last updated : 07/06/2022+++
+# Quickstart: Azure Cosmos DB for MongoDB for JavaScript with MongoDB driver
++
+Get started with the MongoDB npm package to create databases, collections, and docs within your Azure Cosmos DB resource. Follow these steps to install the package and try out example code for basic tasks.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
+
+[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (NuGet)](https://www.npmjs.com/package/mongodb)
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* [Node.js LTS](https://nodejs.org/en/download/)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+
+### Prerequisite check
+
+* In a terminal or command window, run ``node --version`` to check that Node.js is one of the LTS versions.
+* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
+
+## Setting up
+
+This section walks you through creating an Azure Cosmos DB account and setting up a project that uses the MongoDB npm package.
+
+### Create an Azure Cosmos DB account
+
+This quickstart will create a single Azure Cosmos DB account using the API for MongoDB.
+
+#### [Azure CLI](#tab/azure-cli)
++
+#### [PowerShell](#tab/azure-powershell)
++
+#### [Portal](#tab/azure-portal)
++++
+### Get MongoDB connection string
+
+#### [Azure CLI](#tab/azure-cli)
++
+#### [PowerShell](#tab/azure-powershell)
++
+#### [Portal](#tab/azure-portal)
++++
+### Create a new JavaScript app
+
+Create a new JavaScript application in an empty folder using your preferred terminal. Use the [``npm init``](https://docs.npmjs.com/cli/v8/commands/npm-init) command to begin the prompts to create the `package.json` file. Accept the defaults for the prompts.
+
+```console
+npm init
+```
+
+### Install the package
+
+Add the [MongoDB](https://www.npmjs.com/package/mongodb) npm package to the JavaScript project. Use the [``npm install package``](https://docs.npmjs.com/cli/v8/commands/npm-install) command specifying the name of the npm package. The `dotenv` package is used to read the environment variables from a `.env` file during local development.
+
+```console
+npm install mongodb dotenv
+```
+
+### Configure environment variables
++
+## Object model
+
+Before you start building the application, let's look into the hierarchy of resources in Azure Cosmos DB. Azure Cosmos DB has a specific object model used to create and access resources. Azure Cosmos DB creates resources in a hierarchy that consists of accounts, databases, collections, and docs.
+
+ Hierarchical diagram showing an Azure Cosmos DB account at the top. The account has two child database nodes. One of the database nodes includes two child collection nodes. The other database node includes a single child collection node. That single collection node has three child doc nodes.
+
+You'll use the following MongoDB classes to interact with these resources:
+
+* [``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html) - This class provides a client-side logical representation for the API for MongoDB layer on Azure Cosmos DB. The client object is used to configure and execute requests against the service.
+* [``Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
+* [``Collection``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html) - This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.
+
+## Code examples
+
+* [Authenticate the client](#authenticate-the-client)
+* [Get database instance](#get-database-instance)
+* [Get collection instance](#get-collection-instance)
+* [Chained instances](#chained-instances)
+* [Create an index](#create-an-index)
+* [Create a doc](#create-a-doc)
+* [Get a doc](#get-a-doc)
+* [Query docs](#query-docs)
+
+The sample code described in this article creates a database named ``adventureworks`` with a collection named ``products``. The ``products`` collection is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
+
+For this procedure, the database won't use sharding.
+
+### Authenticate the client
+
+1. From the project directory, create an *index.js* file. In your editor, add require statements to reference the MongoDB and DotEnv npm packages.
+
+ :::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="package_dependencies":::
+
+2. Define a new instance of the ``MongoClient`` class using the constructor, and [``process.env.``](https://nodejs.org/dist/latest-v8.x/docs/api/process.html#process_process_env) to read the environment variable you created earlier.
+
+ :::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="client_credentials":::
+
+For more information on different ways to create a ``MongoClient`` instance, see [MongoDB NodeJS Driver Quick Start](https://www.npmjs.com/package/mongodb#quick-start).
+
+### Set up asynchronous operations
+
+In the ``index.js`` file, add the following code to support the asynchronous operations:
+
+```javascript
+async function main(){
+
+// The remaining operations are added here
+// in the main function
+
+}
+
+main()
+ .then(console.log)
+ .catch(console.error)
+ .finally(() => client.close());
+```
+
+The following code snippets should be added into the *main* function in order to handle the async/await syntax.
+
+### Connect to the database
+
+Use the [``MongoClient.connect``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#connect) method to connect to your Azure Cosmos DB for MongoDB resource. The connect method returns a reference to the database.
++
+### Get database instance
+
+Use the [``MongoClient.db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#db) method to get a reference to a database.
++
+### Get collection instance
+
+The [``MongoClient.Db.collection``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html#collection) method gets a reference to a collection.
++
+### Chained instances
+
+You can chain the client, database, and collection together. Chaining is more convenient if you need to access multiple databases or collections.
+
+```javascript
+const db = await client.db(`adventureworks`).collection('products').updateOne(query, update, options)
+```
+
+### Create an index
+
+Use the [``Collection.createIndex``](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#createIndex) method to create an index on the document properties you intend to use for sorting with MongoDB's [``FindCursor.sort``](https://mongodb.github.io/node-mongodb-native/4.7/classes/FindCursor.html#sort) method.
++
+### Create a doc
+
+Create a doc with the *product* properties for the `adventureworks` database:
+
+* An _id property for the unique identifier of the product.
+* A *category* property. This property can be used as the logical partition key.
+* A *name* property.
+* An inventory *quantity* property.
+* A *sale* property, indicating whether the product is on sale.
++
+Create a doc in the collection by calling [``Collection.UpdateOne``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html#updateOne). In this example, we chose to *upsert* instead of *create* a new doc in case you run this sample code more than once.
+
+### Get a doc
+
+In Azure Cosmos DB, you can perform a less-expensive [point read](https://devblogs.microsoft.com/cosmosdb/point-reads-versus-queries/) operation by using both the unique identifier (``_id``) and partition key (``category``).
++
+### Query docs
+
+After you insert a doc, you can run a query to get all docs that match a specific filter. This example finds all docs that match a specific category: `gear-surf-surfboards`. Once the query is defined, call [``Collection.find``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html#find) to get a [``FindCursor``](https://mongodb.github.io/node-mongodb-native/4.7/classes/FindCursor.html) result. Convert the cursor into an array to use JavaScript array methods.
++
+Troubleshooting:
+
+* If you get an error such as `The index path corresponding to the specified order-by item is excluded.`, make sure you [created the index](#create-an-index).
+
+## Run the code
+
+This app creates an API for MongoDB database and collection, creates a doc, and then reads the exact same doc back. Finally, the example issues a query that should only return that single doc. With each step, the example outputs information to the console about the steps it has performed.
+
+To run the app, use a terminal to navigate to the application directory and run the application.
+
+```console
+node index.js
+```
+
+The output of the app should be similar to this example:
++
+## Clean up resources
+
+When you no longer need the Azure Cosmos DB for MongoDB account, you can delete the corresponding resource group.
+
+### [Azure CLI](#tab/azure-cli)
+
+Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delete the resource group.
+
+```azurecli-interactive
+az group delete --name $resourceGroupName
+```
+
+### [PowerShell](#tab/azure-powershell)
+
+Use the [``Remove-AzResourceGroup``](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to delete the resource group.
+
+```azurepowershell-interactive
+$parameters = @{
+ Name = $RESOURCE_GROUP_NAME
+}
+Remove-AzResourceGroup @parameters
+```
+
+### [Portal](#tab/azure-portal)
+
+1. Navigate to the resource group you previously created in the Azure portal.
+
+ > [!TIP]
+ > In this quickstart, we recommended the name ``msdocs-cosmos-javascript-quickstart-rg``.
+1. Select **Delete resource group**.
+
+ :::image type="content" source="media/quickstart-nodejs/delete-resource-group-option.png" lightbox="media/quickstart-nodejs/delete-resource-group-option.png" alt-text="Screenshot of the Delete resource group option in the navigation bar for a resource group.":::
+
+1. On the **Are you sure you want to delete** dialog, enter the name of the resource group, and then select **Delete**.
+
+ :::image type="content" source="media/quickstart-nodejs/delete-confirmation.png" lightbox="media/quickstart-nodejs/delete-confirmation.png" alt-text="Screenshot of the delete confirmation page for a resource group.":::
+++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB for MongoDB account, create a database, and create a collection using the MongoDB driver. You can now dive deeper into Azure Cosmos DB for MongoDB to import more data, perform complex queries, and manage your Azure Cosmos DB for MongoDB resources.
+
+> [!div class="nextstepaction"]
+> [Migrate MongoDB to Azure Cosmos DB for MongoDB offline](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%3ftoc%3d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-python.md
+
+ Title: Get started using Azure Cosmos DB for MongoDB and Python
+description: Presents a Python code sample you can use to connect to and query using Azure Cosmos DB's API for MongoDB.
+++++ Last updated : 04/26/2022
+ms.devlang: python
+++
+# Quickstart: Get started using Azure Cosmos DB for MongoDB and Python
+
+> [!div class="op_single_selector"]
+> * [.NET](create-mongodb-dotnet.md)
+> * [Python](quickstart-python.md)
+> * [Java](quickstart-java.md)
+> * [Node.js](create-mongodb-nodejs.md)
+> * [Golang](quickstart-go.md)
+>
+
+This [quickstart](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) demonstrates how to:
+1. Create an [Azure Cosmos DB for MongoDB account](introduction.md)
+2. Connect to your account using PyMongo
+3. Create a sample database and collection
+4. Perform CRUD operations in the sample collection
+
+## Prerequisites to run the sample app
+
+* [Python](https://www.python.org/downloads/) 3.9+. (The [sample code](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) described in this article is best run with this recommended version, although it may work on older versions of Python 3.)
+* [PyMongo](https://pypi.org/project/pymongo/) installed on your machine
+
+<a id="create-account"></a>
+## Create a database account
++
+## Learn the object model
+
+Before you continue building the application, let's look into the hierarchy of resources in the API for MongoDB and the object model that's used to create and access these resources. The API for MongoDB creates resources in the following order:
+
+* Azure Cosmos DB for MongoDB account
+* Databases
+* Collections
+* Documents
+
+To learn more about the hierarchy of entities, see the [Azure Cosmos DB resource model](../account-databases-containers-items.md) article.
+
+## Get the code
+
+Download the sample Python code [from the repository](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) or use git clone:
+
+```shell
+git clone https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started
+```
+
+## Retrieve your connection string
+
+When running the sample code, you have to enter your API for MongoDB account's connection string. Use the following steps to find it:
+
+1. In the [Azure portal](https://portal.azure.com/), select your Azure Cosmos DB account.
+
+2. In the left navigation select **Connection String**, and then select **Read-write Keys**. You'll use the copy buttons on the right side of the screen to copy the primary connection string.
+
+> [!WARNING]
+> Never check passwords or other sensitive data into source code.
++
+## Run the code
+
+```shell
+python run.py
+```
+
+## Understand how it works
+
+### Connecting
+
+The following code prompts the user for the connection string. It's never a good idea to have your connection string in code since it enables anyone with it to read or write to your database.
+
+```python
+CONNECTION_STRING = getpass.getpass(prompt='Enter your primary connection string: ') # Prompts user for connection string
+```
+
+The following code creates a client connection to your API for MongoDB account and tests to make sure it's valid.
+
+```python
+client = pymongo.MongoClient(CONNECTION_STRING)
+try:
+ client.server_info() # validate connection string
+except pymongo.errors.ServerSelectionTimeoutError:
+ raise TimeoutError("Invalid API for MongoDB connection string or timed out when attempting to connect")
+```
+
+### Resource creation
+The following code creates the sample database and collection that will be used to perform CRUD operations. When creating resources programmatically, it's recommended to use the API for MongoDB extension commands (as shown here) because these commands have the ability to set the resource throughput (RU/s) and configure sharding.
+
+Implicitly creating resources will work but will default to recommended values for throughput and will not be sharded.
+
+```python
+# Database with 400 RU throughput that can be shared across the DB's collections
+db.command({'customAction': "CreateDatabase", 'offerThroughput': 400})
+```
+
+```python
+# Creates an unsharded collection that uses the DB's shared throughput
+db.command({'customAction': "CreateCollection", 'collection': UNSHARDED_COLLECTION_NAME})
+```
+
+### Writing a document
+The following code inserts a sample document that we'll continue to use throughout the sample. We get its unique `_id` field value so that we can query it in subsequent operations.
+
+```python
+"""Insert a sample document and return the contents of its _id field"""
+document_id = collection.insert_one({SAMPLE_FIELD_NAME: randint(50, 500)}).inserted_id
+```
+
+### Reading/Updating a document
+The following code queries for, updates, and then queries again for the document that we previously inserted.
+
+```python
+print("Found a document with _id {}: {}".format(document_id, collection.find_one({"_id": document_id})))
+
+collection.update_one({"_id": document_id}, {"$set":{SAMPLE_FIELD_NAME: "Updated!"}})
+print("Updated document with _id {}: {}".format(document_id, collection.find_one({"_id": document_id})))
+```
+
+### Deleting a document
+Lastly, we delete the document we created from the collection.
+```python
+"""Delete the document containing document_id from the collection"""
+collection.delete_one({"_id": document_id})
+```
+
+## Next steps
+In this quickstart, you've learned how to create an API for MongoDB account, create a database and a collection with code, and perform CRUD operations.
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db Readpreference Global Distribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/readpreference-global-distribution.md
description: Learn how to use MongoDB Read Preference with the Azure Cosmos DB's
-+ ms.devlang: javascript Last updated 02/26/2019-+ # How to globally distribute reads using Azure Cosmos DB's API for MongoDB This article shows how to globally distribute read operations with [MongoDB Read Preference](https://docs.mongodb.com/manual/core/read-preference/) settings using Azure Cosmos DB's API for MongoDB.
This article shows how to globally distribute read operations with [MongoDB Read
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. [!INCLUDE [cosmos-db-emulator-mongodb](../includes/cosmos-db-emulator-mongodb.md)]
-Refer to this [Quickstart](tutorial-global-distribution-mongodb.md) article for instructions on using the Azure portal to set up a Cosmos account with global distribution and then connect to it.
+Refer to this [Quickstart](tutorial-global-distribution.md) article for instructions on using the Azure portal to set up an Azure Cosmos DB account with global distribution and then connect to it.
## Clone the sample application
cd mean
npm install node index.js ```
-The application tries to connect to a MongoDB source and fails because the connection string is invalid. Follow the steps in the README to update the connection string `url`. Also, update the `readFromRegion` to a read region in your Cosmos account. The following instructions are from the NodeJS sample:
+The application tries to connect to a MongoDB source and fails because the connection string is invalid. Follow the steps in the README to update the connection string `url`. Also, update the `readFromRegion` to a read region in your Azure Cosmos DB account. The following instructions are from the NodeJS sample:
```
-* Next, substitute the `url`, `readFromRegion` in App.Config with your Cosmos account's values.
+* Next, substitute the `url`, `readFromRegion` in App.Config with your Azure Cosmos DB account's values.
``` After following these steps, the sample application runs and produces the following output:
MongoDB protocol provides the following Read Preference modes for clients to use
4. SECONDARY_PREFERRED 5. NEAREST
-Refer to the detailed [MongoDB Read Preference behavior](https://docs.mongodb.com/manual/core/read-preference-mechanics/#replica-set-read-preference-behavior) documentation for details on the behavior of each of these read preference modes. In Cosmos DB, primary maps to WRITE region and secondary maps to READ region.
+Refer to the detailed [MongoDB Read Preference behavior](https://docs.mongodb.com/manual/core/read-preference-mechanics/#replica-set-read-preference-behavior) documentation for details on the behavior of each of these read preference modes. In Azure Cosmos DB, primary maps to WRITE region and secondary maps to READ region.
Based on common scenarios, we recommend using the following settings:
Refer to the corresponding sample application repos for other platforms, such as
## Read using tags
-In addition to the Read Preference mode, MongoDB protocol allows the use of tags to direct read operations. In Cosmos DB's API for MongoDB, the `region` tag is included by default as a part of the `isMaster` response:
+In addition to the Read Preference mode, MongoDB protocol allows the use of tags to direct read operations. In Azure Cosmos DB's API for MongoDB, the `region` tag is included by default as a part of the `isMaster` response:
```json "tags": {
In addition to the Read Preference mode, MongoDB protocol allows the use of tags
} ```
-Hence, MongoClient can use the `region` tag along with the region name to direct read operations to specific regions. For Cosmos accounts, region names can be found in Azure portal on the left under **Settings->Replica data globally**. This setting is useful for achieving **read isolation** - cases in which client application want to direct read operations to a specific region only. This setting is ideal for non-production/analytics type scenarios, which run in the background and are not production critical services.
+Hence, MongoClient can use the `region` tag along with the region name to direct read operations to specific regions. For Azure Cosmos DB accounts, region names can be found in the Azure portal on the left under **Settings > Replicate data globally**. This setting is useful for achieving **read isolation** - cases in which client applications want to direct read operations to a specific region only. This setting is ideal for non-production/analytics type scenarios, which run in the background and are not production-critical services.
The following snippet from the sample application shows how to configure the Read Preference with tags in NodeJS:
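
As a generic illustration (not the sample's actual snippet), a tag-directed read preference with the Node.js MongoDB driver might look like the following; the region value is illustrative:

```javascript
const { MongoClient, ReadPreference } = require('mongodb');

// Connection string (the sample's README refers to this value as `url`)
const url = process.env.COSMOS_CONNECTION_STRING;

// Direct reads to the replica tagged with the "East US" region
const readPreference = new ReadPreference('nearest', [{ region: 'East US' }]);
const client = new MongoClient(url, { readPreference });
```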
If you're not going to continue to use this app, delete all resources created by
## Next steps * [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
-* [Setup a globally distributed database with Azure Cosmos DB's API for MongoDB](tutorial-global-distribution-mongodb.md)
+* [Setup a globally distributed database with Azure Cosmos DB's API for MongoDB](tutorial-global-distribution.md)
* [Develop locally with the Azure Cosmos DB Emulator](../local-emulator.md)
cosmos-db Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/resource-manager-template-samples.md
Title: Resource Manager templates for Azure Cosmos DB API for MongoDB
-description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB API for MongoDB.
+ Title: Resource Manager templates for Azure Cosmos DB for MongoDB
+description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB for MongoDB.
-++ Last updated 05/23/2022
-# Manage Azure Cosmos DB MongoDB API resources using Azure Resource Manager templates
+# Manage Azure Cosmos DB for MongoDB resources using Azure Resource Manager templates
-In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts for MongoDB API, databases, and collections.
+In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB for MongoDB accounts, databases, and collections.
-This article has examples for Azure Cosmos DB's API for MongoDB only, to find examples for other API type accounts see: use Azure Resource Manager templates with Azure Cosmos DB's API for [Cassandra](../cassandr) articles.
+This article has examples for Azure Cosmos DB's API for MongoDB only. To find examples for other API type accounts, see the articles on using Azure Resource Manager templates with Azure Cosmos DB's API for [Cassandra](../cassandr).
> [!IMPORTANT] > > * Account names are limited to 44 characters, all lowercase. > * To change the throughput values, redeploy the template with updated RU/s.
-> * When you add or remove locations to an Azure Cosmos account, you can't simultaneously modify other properties. These operations must be done separately.
+> * When you add or remove locations to an Azure Cosmos DB account, you can't simultaneously modify other properties. These operations must be done separately.
To create any of the Azure Cosmos DB resources below, copy the following example template into a new JSON file. You can optionally create a parameters JSON file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Azure Resource Manager templates, including the [Azure portal](../../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../../azure-resource-manager/templates/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md), and [GitHub](../../azure-resource-manager/templates/deploy-to-azure-button.md). <a id="create-autoscale"></a>
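
For example, a hedged sketch of deploying a saved template with the Azure CLI; the resource group and file names are illustrative:

```azurecli-interactive
az deployment group create \
  --resource-group my-resource-group \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json
```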
-## Azure Cosmos account for MongoDB with autoscale provisioned throughput
+## Azure Cosmos DB account for MongoDB with autoscale provisioned throughput
-This template will create an Azure Cosmos DB account for MongoDB API (3.2, 3.6, 4.0 and 4.2) with two collections that share autoscale throughput at the database level. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+This template will create an Azure Cosmos DB for MongoDB account (3.2, 3.6, 4.0, and 4.2) with two collections that share autoscale throughput at the database level. This template is also available for one-click deploy from the Azure Quickstart Templates Gallery.
[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-mongodb-autoscale%2Fazuredeploy.json)
This template will create an Azure Cosmos DB account for MongoDB API (3.2, 3.6,
<a id="create-manual"></a>
-## Azure Cosmos account for MongoDB with standard provisioned throughput
+## Azure Cosmos DB account for MongoDB with standard provisioned throughput
-This template will create an Azure Cosmos DB account for MongoDB API (3.2, 3.6, 4.0 and 4.2) with two collections that share 400 RU/s standard (manual) throughput at the database level. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+This template will create an Azure Cosmos DB for MongoDB account (3.2, 3.6, 4.0, and 4.2) with two collections that share 400 RU/s standard (manual) throughput at the database level. This template is also available for one-click deploy from the Azure Quickstart Templates Gallery.
[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-mongodb%2Fazuredeploy.json)
Here are some additional resources:
* [Troubleshoot common Azure Resource Manager deployment errors](../../azure-resource-manager/templates/common-deployment-errors.md) * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/time-to-live.md
+
+ Title: MongoDB per-document TTL feature in Azure Cosmos DB
+description: Learn how to set time to live value for documents using Azure Cosmos DB's API for MongoDB, to automatically purge them from the system after a period of time.
++++
+ms.devlang: csharp, java, javascript
+ Last updated : 02/16/2022++
+# Expire data with Azure Cosmos DB's API for MongoDB
+
+Time-to-live (TTL) functionality allows the database to automatically expire data. Azure Cosmos DB's API for MongoDB utilizes Azure Cosmos DB's core TTL capabilities. Two modes are supported: setting a default TTL value on the whole collection, and setting individual TTL values for each document. The logic governing TTL indexes and per-document TTL values in Azure Cosmos DB's API for MongoDB is the [same as in Azure Cosmos DB](indexing.md).
+
+## TTL indexes
+To enable TTL universally on a collection, a ["TTL index" (time-to-live index)](indexing.md) needs to be created. The TTL index is an index on the `_ts` field with an "expireAfterSeconds" value.
+
+MongoShell example:
+
+```
+globaldb:PRIMARY> db.coll.createIndex({"_ts":1}, {expireAfterSeconds: 10})
+```
+The command in the above example will create an index with TTL functionality.
+
+The output of the command includes various metadata:
+
+```output
+{
+ "_t" : "CreateIndexesResponse",
+ "ok" : 1,
+ "createdCollectionAutomatically" : true,
+ "numIndexesBefore" : 1,
+ "numIndexesAfter" : 4
+}
+```
+
+ Once the index is created, the database will automatically delete any documents in that collection that have not been modified in the last 10 seconds.
+
+> [!NOTE]
+> `_ts` is an Azure Cosmos DB-specific field and is not accessible from MongoDB clients. It is a reserved (system) property that contains the timestamp of the document's last modification.
+
+Java example:
+
+```java
+MongoCollection collection = mongoDB.getCollection("collectionName");
+String index = collection.createIndex(Indexes.ascending("_ts"),
+new IndexOptions().expireAfter(10L, TimeUnit.SECONDS));
+```
+
+C# example:
+
+```csharp
+var options = new CreateIndexOptions {ExpireAfter = TimeSpan.FromSeconds(10)};
+var field = new StringFieldDefinition<BsonDocument>("_ts");
+var indexDefinition = new IndexKeysDefinitionBuilder<BsonDocument>().Ascending(field);
+await collection.Indexes.CreateOneAsync(indexDefinition, options);
+```
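+
+A hedged Node.js sketch of the same index creation, assuming an existing `collection` obtained from the MongoDB Node.js driver:
+
+```javascript
+// Create a TTL index on _ts so documents expire 10 seconds after their last modification
+await collection.createIndex({ _ts: 1 }, { expireAfterSeconds: 10 });
+```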
+
+## Set time to live value for a document
+Per-document TTL values are also supported. The document(s) must contain a root-level property "ttl" (lower-case), and a TTL index as described above must have been created for that collection. TTL values set on a document will override the collection's TTL value.
+
+The TTL value must be an int32; alternatively, it can be an int64 that fits in an int32, or a double with no decimal part that fits in an int32. Values for the TTL property that don't conform to these specifications are allowed but aren't treated as a meaningful document TTL value.
+
+The TTL value for the document is optional; documents without a TTL value can be inserted into the collection. In this case, the collection's TTL value will be honored.
+
+The following documents have valid TTL values. Once the documents are inserted, the document TTL values override the collection's TTL values. So, the documents will be removed after 20 seconds.
+
+```JavaScript
+globaldb:PRIMARY> db.coll.insert({id:1, location: "Paris", ttl: 20.0})
+globaldb:PRIMARY> db.coll.insert({id:1, location: "Paris", ttl: NumberInt(20)})
+globaldb:PRIMARY> db.coll.insert({id:1, location: "Paris", ttl: NumberLong(20)})
+```
+
+The following documents have invalid TTL values. The documents will be inserted, but the document TTL value will not be honored. So, the documents will be removed after 10 seconds because of the collection's TTL value.
+
+```JavaScript
+globaldb:PRIMARY> db.coll.insert({id:1, location: "Paris", ttl: 20.5}) //TTL value contains non-zero decimal part.
+globaldb:PRIMARY> db.coll.insert({id:1, location: "Paris", ttl: NumberLong(2147483649)}) //TTL value is greater than Int32.MaxValue (2,147,483,647).
+```
+
+## Next steps
+* [Expire data in Azure Cosmos DB automatically with time to live](../mongodb/time-to-live.md)
+* [Indexing your Azure Cosmos DB database configured with Azure Cosmos DB's API for MongoDB](../mongodb/indexing.md)
cosmos-db Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/troubleshoot-query-performance.md
Title: Troubleshoot query issues when using the Azure Cosmos DB API for MongoDB
+ Title: Troubleshoot query issues when using the Azure Cosmos DB for MongoDB
description: Learn how to identify, diagnose, and troubleshoot Azure Cosmos DB's API for MongoDB query issues. -++ Last updated 08/26/2021
-# Troubleshoot query issues when using the Azure Cosmos DB API for MongoDB
+# Troubleshoot query issues when using the Azure Cosmos DB for MongoDB
-This article walks through a general recommended approach for troubleshooting queries in Azure Cosmos DB. Although you shouldn't consider the steps outlined in this article a complete defense against potential query issues, we've included the most common performance tips here. You should use this article as a starting place for troubleshooting slow or expensive queries in Azure Cosmos DB's API for MongoDB. If you are using the Azure Cosmos DB core (SQL) API, see the [SQL API query troubleshooting guide](troubleshoot-query-performance.md) article.
+This article walks through a general recommended approach for troubleshooting queries in Azure Cosmos DB. Although you shouldn't consider the steps outlined in this article a complete defense against potential query issues, we've included the most common performance tips here. You should use this article as a starting place for troubleshooting slow or expensive queries in Azure Cosmos DB's API for MongoDB. If you're using Azure Cosmos DB for NoSQL, see the [API for NoSQL query troubleshooting guide](troubleshoot-query-performance.md) article.
Query optimizations in Azure Cosmos DB are broadly categorized as follows:
This article provides examples that you can re-create by using the [nutrition da
## Use $explain command to get metrics
-When you optimize a query in Azure Cosmos DB, the first step is always to [obtain the RU charge](find-request-unit-charge-mongodb.md) for your query. As a rough guideline, you should explore ways to lower the RU charge for queries with charges greater than 50 RUs.
+When you optimize a query in Azure Cosmos DB, the first step is always to [obtain the RU charge](find-request-unit-charge.md) for your query. As a rough guideline, you should explore ways to lower the RU charge for queries with charges greater than 50 RUs.
In addition to obtaining the RU charge, you should use the `$explain` command to obtain the query and index usage metrics. Here is an example that runs a query and uses the `$explain` command to show query and index usage metrics:
You should check the `pathsNotIndexed` array and add these indexes. In this exam
Indexing best practices in Azure Cosmos DB's API for MongoDB are different from MongoDB. In Azure Cosmos DB's API for MongoDB, compound indexes are only used in queries that need to efficiently sort by multiple properties. If you have queries with filters on multiple properties, you should create single field indexes for each of these properties. Query predicates can use multiple single field indexes.
-[Wildcard indexes](mongodb-indexing.md#wildcard-indexes) can simplify indexing. Unlike in MongoDB, wildcard indexes can support multiple fields in query predicates. There will not be a difference in query performance if you use one single wildcard index instead of creating a separate index for each property. Adding a wildcard index for all properties is the easiest way to optimize all of your queries.
+[Wildcard indexes](indexing.md#wildcard-indexes) can simplify indexing. Unlike in MongoDB, wildcard indexes can support multiple fields in query predicates. There will not be a difference in query performance if you use one single wildcard index instead of creating a separate index for each property. Adding a wildcard index for all properties is the easiest way to optimize all of your queries.
You can add new indexes at any time, with no effect on write or read availability. You can [track index transformation progress](../how-to-manage-indexing-policy.md#dotnet-sdk).
In many cases, the RU charge might be acceptable when query latency is still too
### Improve proximity
-Queries that are run from a different region than the Azure Cosmos DB account will have higher latency than if they were run inside the same region. For example, if you're running code on your desktop computer, you should expect latency to be tens or hundreds of milliseconds higher (or more) than if the query came from a virtual machine within the same Azure region as Azure Cosmos DB. It's simple to [globally distribute data in Azure Cosmos DB](tutorial-global-distribution-mongodb.md) to ensure you can bring your data closer to your app.
+Queries that are run from a different region than the Azure Cosmos DB account will have higher latency than if they were run inside the same region. For example, if you're running code on your desktop computer, you should expect latency to be tens or hundreds of milliseconds higher (or more) than if the query came from a virtual machine within the same Azure region as Azure Cosmos DB. It's simple to [globally distribute data in Azure Cosmos DB](tutorial-global-distribution.md) to ensure you can bring your data closer to your app.
### Increase provisioned throughput
The value `estimatedDelayFromRateLimitingInMilliseconds` gives a sense of the po
## Next steps
-* [Troubleshoot query performance (SQL API)](troubleshoot-query-performance.md)
-* [Manage indexing in Azure Cosmos DB's API for MongoDB](mongodb-indexing.md)
+* [Troubleshoot query performance (API for NoSQL)](troubleshoot-query-performance.md)
+* [Manage indexing in Azure Cosmos DB's API for MongoDB](indexing.md)
* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Tutorial Develop Mongodb React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-mongodb-react.md
- Title: "MongoDB, React, and Node.js tutorial for Azure"
-description: Learn how to create a MongoDB app with React and Node.js on Azure Cosmos DB using the exact same APIs you use for MongoDB with this video based tutorial series.
---- Previously updated : 08/26/2021----
-# Create a MongoDB app with React and Azure Cosmos DB
-
-This multi-part video tutorial demonstrates how to create a hero tracking app with a React front-end. The app used Node and Express for the server, connects to Cosmos database configured with the [Azure Cosmos DB's API for MongoDB](mongodb-introduction.md), and then connects the React front-end to the server portion of the app. The tutorial also demonstrates how to do point-and-click scaling of Cosmos DB in the Azure portal and how to deploy the app to the internet so everyone can track their favorite heroes.
-
-[Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) supports wire protocol compatibility with MongoDB, enabling clients to use Azure Cosmos DB in place of MongoDB.
-
-This multi-part tutorial covers the following tasks:
-
-> [!div class="checklist"]
-> * Introduction
-> * Setup the project
-> * Build the UI with React
-> * Create an Azure Cosmos DB account using the Azure portal
-> * Use Mongoose to connect to Azure Cosmos DB
-> * Add React, Create, Update, and Delete operations to the app
-
-Want to do build this same app with Angular? See the [Angular tutorial video series](tutorial-develop-nodejs-part-1.md).
-
-## Prerequisites
-* [Node.js](https://www.nodejs.org)
-
-### Finished Project
-Get the completed application [from GitHub](https://github.com/Azure-Samples/react-cosmosdb).
-
-## Introduction
-
-In this video, Burke Holland gives an introduction to Azure Cosmos DB and walks you through the app that is created in this video series.
-
-> [!VIDEO https://www.youtube.com/embed/58IflnJbYJc]
-
-## Project setup
-
-This video shows how set up the Express and React in the same project. Burke then provides a walkthrough of the code in the project.
-
-> [!VIDEO https://www.youtube.com/embed/ytFUPStJJds]
-
-## Build the UI
-
-This video shows how to create the application's user interface (UI) with React.
-
-> [!NOTE]
-> The CSS referenced in this video can be found in the [react-cosmosdb GitHub repo](https://github.com/Azure-Samples/react-cosmosdb/blob/master/src/index.css).
-
-> [!VIDEO https://www.youtube.com/embed/SzHzX0fTUUQ]
-
-## Connect to Azure Cosmos DB
-
-This video shows how to create an Azure Cosmos DB account in the Azure portal, install the MongoDB and Mongoose packages, and then connect the app to the newly created account using the Azure Cosmos DB connection string.
-
-> [!VIDEO https://www.youtube.com/embed/0U2jV1thfvs]
-
-## Read and create heroes in the app
-
-This video shows how to read heroes and create heroes in the Cosmos database, as well as how to test those methods using Postman and the React UI.
-
-> [!VIDEO https://www.youtube.com/embed/AQK9n_8fsQI]
-
-## Delete and update heroes in the app
-
-This video shows how to delete and update heroes from the app and display the updates in the UI.
-
-> [!VIDEO https://www.youtube.com/embed/YmaGT7ztTQM]
-
-## Complete the app
-
-This video shows how to complete the app and finish hooking the UI up to the backend API.
-
-> [!VIDEO https://www.youtube.com/embed/TcSm2ISfTu8]
-
-## Clean up resources
-
-If you're not going to continue to use this app, use the following steps to delete all resources created by this tutorial in the Azure portal.
-
-1. From the left-hand menu in the Azure portal, click **Resource groups** and then click the name of the resource you created.
-2. On your resource group page, click **Delete**, type the name of the resource to delete in the text box, and then click **Delete**.
-
-## Next steps
-
-In this tutorial, you've learned how to:
-
-> [!div class="checklist"]
-> * Create an app with React, Node, Express, and Azure Cosmos DB
-> * Create an Azure Cosmos DB account
-> * Connect the app to the Azure Cosmos DB account
-> * Test the app using Postman
-> * Run the application and add heroes to the database
-
-You can proceed to the next tutorial and learn how to import MongoDB data into Azure Cosmos DB.
-
-> [!div class="nextstepaction"]
-> [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Tutorial Develop Nodejs Part 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-1.md
Title: Node.Js, Angular app using Azure Cosmos DB's API for MongoB (Part1) description: Learn how to create a MongoDB app with Angular and Node on Azure Cosmos DB using the exact same APIs you use for MongoDB with this video based tutorial series.-+ -+ ms.devlang: javascript Last updated 08/26/2021--++ # Create an Angular app with Azure Cosmos DB's API for MongoDB
-This multi-part tutorial demonstrates how to create a new app written in Node.js with Express and Angular and then connect it to your [Cosmos account configured with Cosmos DB's API for MongoDB](mongodb-introduction.md).
+This multi-part tutorial demonstrates how to create a new app written in Node.js with Express and Angular and then connect it to your [Azure Cosmos DB account configured with Azure Cosmos DB's API for MongoDB](introduction.md).
Azure Cosmos DB is MicrosoftΓÇÖs fast NoSQL database with open APIs for any scale. It allows you to develop modern apps with SLA-backed speed and availability, automatic and instant scalability, and open source APIs for many NoSQL engines.
This multi-part tutorial covers the following tasks:
> * [Use Mongoose to connect to Azure Cosmos DB](tutorial-develop-nodejs-part-5.md) > * [Add Post, Put, and Delete functions to the app](tutorial-develop-nodejs-part-6.md)
-Want to do build this same app with React? See the [React tutorial video series](tutorial-develop-mongodb-react.md).
+Want to build this same app with React? See the [React tutorial video series](tutorial-develop-react.md).
## Video walkthrough
cosmos-db Tutorial Develop Nodejs Part 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-2.md
Title: Create Node.js Express app with Azure Cosmos DB's API for MongoDB (Part2) description: Part 2 of the tutorial series on creating a MongoDB app with Angular and Node on Azure Cosmos DB using the exact same APIs you use for MongoDB.-+ -+ ms.devlang: javascript Last updated 08/26/2021--++ # Create an Angular app with Azure Cosmos DB's API for MongoDB - Create a Node.js Express app
-This multi-part tutorial demonstrates how to create a new app written in Node.js with Express and Angular and then connect it to your [Cosmos account configured with Cosmos DB's API for MongoDB](mongodb-introduction.md).
+This multi-part tutorial demonstrates how to create a new app written in Node.js with Express and Angular and then connect it to your [Azure Cosmos DB account configured with Azure Cosmos DB's API for MongoDB](introduction.md).
Part 2 of the tutorial builds on [the introduction](tutorial-develop-nodejs-part-1.md) and covers the following tasks:
cosmos-db Tutorial Develop Nodejs Part 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-3.md
Title: Create the Angular app UI with Azure Cosmos DB's API for MongoDB (Part3) description: Part 3 of the tutorial series on creating a MongoDB app with Angular and Node on Azure Cosmos DB using the exact same APIs you use for MongoDB. -+ -+ ms.devlang: javascript Last updated 08/26/2021--++ # Create an Angular app with Azure Cosmos DB's API for MongoDB - Build the UI with Angular
-This multi-part tutorial demonstrates how to create a new app written in Node.js with Express and Angular and then connect it to your [Cosmos account configured with Cosmos DB's API for MongoDB](mongodb-introduction.md).
+This multi-part tutorial demonstrates how to create a new app written in Node.js with Express and Angular and then connect it to your [Azure Cosmos DB account configured with Azure Cosmos DB's API for MongoDB](introduction.md).
Part 3 of the tutorial builds on [Part 2](tutorial-develop-nodejs-part-2.md) and covers the following tasks:
cosmos-db Tutorial Develop Nodejs Part 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-4.md
Title: Create an Angular app with Azure Cosmos DB's API for MongoDB (Part1) description: Part 4 of the tutorial series on creating a MongoDB app with Angular and Node on Azure Cosmos DB using the exact same APIs you use for MongoDB -+ -+ ms.devlang: javascript Last updated 08/26/2021--++ -
-# Create an Angular app with Azure Cosmos DB's API for MongoDB - Create a Cosmos account
+# Create an Angular app with Azure Cosmos DB's API for MongoDB - Create an Azure Cosmos DB account
-This multi-part tutorial demonstrates how to create a new app written in Node.js with Express and Angular and then connect it to your [Cosmos account configured with Cosmos DB's API for MongoDB](mongodb-introduction.md).
+This multi-part tutorial demonstrates how to create a new app written in Node.js with Express and Angular and then connect it to your [Azure Cosmos DB account configured with Azure Cosmos DB's API for MongoDB](introduction.md).
Part 4 of the tutorial builds on [Part 3](tutorial-develop-nodejs-part-3.md) and covers the following tasks: > [!div class="checklist"] > * Create an Azure resource group using the Azure CLI
-> * Create a Cosmos account using the Azure CLI
+> * Create an Azure Cosmos DB account using the Azure CLI
## Video walkthrough
It may take a minute or two for the command to complete. When it's done, the ter
Once the Azure Cosmos DB account has been created: 1. Open a new browser window and go to [https://portal.azure.com](https://portal.azure.com)
-1. Click the Azure Cosmos DB logo :::image type="icon" source="./media/tutorial-develop-nodejs-part-4/azure-cosmos-db-icon.png"::: on the left bar, and it shows you all the Azure Cosmos DBs you have.
+1. Click the Azure Cosmos DB logo :::image type="icon" source="./media/tutorial-develop-nodejs-part-4/azure-cosmos-db-icon.png"::: on the left bar, and it shows you all the Azure Cosmos DB accounts you have.
1. Click on the Azure Cosmos DB account you just created, select the **Overview** tab and scroll down to view the map where the database is located.
- :::image type="content" source="./media/tutorial-develop-nodejs-part-4/azure-cosmos-db-angular-portal.png" alt-text="Screenshot shows the Overview of an Azure Cosmos D B Account.":::
+ :::image type="content" source="./media/tutorial-develop-nodejs-part-4/azure-cosmos-db-angular-portal.png" alt-text="Screenshot shows the Overview of an Azure Cosmos DB account.":::
4. Scroll down on the left navigation and click the **Replicate data globally** tab. This displays a map where you can see the different areas you can replicate into. For example, you can click Australia Southeast or Australia East and replicate your data to Australia. You can learn more about global replication in [How to distribute data globally with Azure Cosmos DB](../distribute-data-globally.md). For now, let's just keep the one instance, and when we want to replicate, we know how.
- :::image type="content" source="./media/tutorial-develop-nodejs-part-4/azure-cosmos-db-replicate-portal.png" alt-text="Screenshot shows an Azure Cosmos D B Account with Replicate data globally selected.":::
+ :::image type="content" source="./media/tutorial-develop-nodejs-part-4/azure-cosmos-db-replicate-portal.png" alt-text="Screenshot shows an Azure Cosmos DB account with Replicate data globally selected.":::
## Next steps
cosmos-db Tutorial Develop Nodejs Part 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-5.md
Title: Connect the Angular app to Azure Cosmos DB's API for MongoDB using Mongoose description: This tutorial describes how to build a Node.js application by using Angular and Express to manage the data stored in Cosmos DB. In this part, you use Mongoose to connect to Azure Cosmos DB.-+ -+ ms.devlang: javascript Last updated 08/26/2021--++
-#Customer intent: As a developer, I want to build a Node.js application, so that I can manage the data stored in Cosmos DB.
+#Customer intent: As a developer, I want to build a Node.js application, so that I can manage the data stored in Azure Cosmos DB.
-# Create an Angular app with Azure Cosmos DB's API for MongoDB - Use Mongoose to connect to Cosmos DB
+# Create an Angular app with Azure Cosmos DB's API for MongoDB - Use Mongoose to connect to Azure Cosmos DB
-This multi-part tutorial demonstrates how to create a Node.js app with Express and Angular, and connect it to it to your [Cosmos account configured with Cosmos DB's API for MongoDB](mongodb-introduction.md). This article describes Part 5 of the tutorial and builds on [Part 4](tutorial-develop-nodejs-part-4.md).
+This multi-part tutorial demonstrates how to create a Node.js app with Express and Angular, and connect it to your [Azure Cosmos DB account configured with Azure Cosmos DB's API for MongoDB](introduction.md). This article describes Part 5 of the tutorial and builds on [Part 4](tutorial-develop-nodejs-part-4.md).
In this part of the tutorial, you will: > [!div class="checklist"]
-> * Use Mongoose to connect to Cosmos DB.
-> * Get your Cosmos DB connection string.
+> * Use Mongoose to connect to Azure Cosmos DB.
+> * Get your Azure Cosmos DB connection string.
> * Create the Hero model. > * Create the Hero service to get Hero data. > * Run the app locally.
Next, run the app by using the following steps:
:::image type="content" source="./media/tutorial-develop-nodejs-part-5/azure-cosmos-db-heroes-app.png" alt-text="New Azure Cosmos DB account in the Azure portal":::
-There are no heroes stored yet in the app. In the next part of this tutorial, we'll add put, push, and delete functionality. Then we can add, update, and delete heroes from the UI by using Mongoose connections to our Azure Cosmos database.
+There are no heroes stored yet in the app. In the next part of this tutorial, we'll add put, push, and delete functionality. Then we can add, update, and delete heroes from the UI by using Mongoose connections to our Azure Cosmos DB database.
## Clean up resources
Continue to Part 6 of the tutorial to add Post, Put, and Delete functions to the
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Tutorial Develop Nodejs Part 6 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-6.md
Title: Add CRUD functions to an Angular app with Azure Cosmos DB's API for MongoDB description: Part 6 of the tutorial series on creating a MongoDB app with Angular and Node on Azure Cosmos DB using the exact same APIs you use for MongoDB-+ -+ ms.devlang: javascript Last updated 08/26/2021--++ # Create an Angular app with Azure Cosmos DB's API for MongoDB - Add CRUD functions to the app
-This multi-part tutorial demonstrates how to create a new app written in Node.js with Express and Angular and then connect it to your [Cosmos account configured with Cosmos DB's API for MongoDB](mongodb-introduction.md). Part 6 of the tutorial builds on [Part 5](tutorial-develop-nodejs-part-5.md) and covers the following tasks:
+This multi-part tutorial demonstrates how to create a new app written in Node.js with Express and Angular and then connect it to your [Azure Cosmos DB account configured with Azure Cosmos DB's API for MongoDB](introduction.md). Part 6 of the tutorial builds on [Part 5](tutorial-develop-nodejs-part-5.md) and covers the following tasks:
> [!div class="checklist"] > * Create Post, Put, and Delete functions for the hero service
Check back soon for additional videos in this tutorial series.
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)-
cosmos-db Tutorial Develop React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-react.md
+
+ Title: "MongoDB, React, and Node.js tutorial for Azure"
+description: Learn how to create a MongoDB app with React and Node.js on Azure Cosmos DB using the exact same APIs you use for MongoDB with this video based tutorial series.
+++
+ms.devlang: javascript
+ Last updated : 08/26/2021++++
+# Create a MongoDB app with React and Azure Cosmos DB
+
+This multi-part video tutorial demonstrates how to create a hero tracking app with a React front-end. The app uses Node and Express for the server, connects to an Azure Cosmos DB database configured with the [Azure Cosmos DB's API for MongoDB](introduction.md), and then connects the React front-end to the server portion of the app. The tutorial also demonstrates how to do point-and-click scaling of Azure Cosmos DB in the Azure portal and how to deploy the app to the internet so everyone can track their favorite heroes.
+
+[Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) supports wire protocol compatibility with MongoDB, enabling clients to use Azure Cosmos DB in place of MongoDB.
+
+This multi-part tutorial covers the following tasks:
+
+> [!div class="checklist"]
+> * Introduction
+> * Setup the project
+> * Build the UI with React
+> * Create an Azure Cosmos DB account using the Azure portal
+> * Use Mongoose to connect to Azure Cosmos DB
+> * Add React, Create, Update, and Delete operations to the app
+
+Want to build this same app with Angular? See the [Angular tutorial video series](tutorial-develop-nodejs-part-1.md).
+
+## Prerequisites
+* [Node.js](https://www.nodejs.org)
+
+### Finished Project
+Get the completed application [from GitHub](https://github.com/Azure-Samples/react-cosmosdb).
+
+## Introduction
+
+In this video, Burke Holland gives an introduction to Azure Cosmos DB and walks you through the app that is created in this video series.
+
+> [!VIDEO https://www.youtube.com/embed/58IflnJbYJc]
+
+## Project setup
+
+This video shows how to set up Express and React in the same project. Burke then provides a walkthrough of the code in the project.
+
+> [!VIDEO https://www.youtube.com/embed/ytFUPStJJds]
+
+## Build the UI
+
+This video shows how to create the application's user interface (UI) with React.
+
+> [!NOTE]
+> The CSS referenced in this video can be found in the [react-cosmosdb GitHub repo](https://github.com/Azure-Samples/react-cosmosdb/blob/master/src/index.css).
+
+> [!VIDEO https://www.youtube.com/embed/SzHzX0fTUUQ]
+
+## Connect to Azure Cosmos DB
+
+This video shows how to create an Azure Cosmos DB account in the Azure portal, install the MongoDB and Mongoose packages, and then connect the app to the newly created account using the Azure Cosmos DB connection string.
+
+> [!VIDEO https://www.youtube.com/embed/0U2jV1thfvs]
+
+## Read and create heroes in the app
+
+This video shows how to read heroes and create heroes in the Azure Cosmos DB database, as well as how to test those methods using Postman and the React UI.
+
+> [!VIDEO https://www.youtube.com/embed/AQK9n_8fsQI]
+
+## Delete and update heroes in the app
+
+This video shows how to delete and update heroes from the app and display the updates in the UI.
+
+> [!VIDEO https://www.youtube.com/embed/YmaGT7ztTQM]
+
+## Complete the app
+
+This video shows how to complete the app and finish hooking the UI up to the backend API.
+
+> [!VIDEO https://www.youtube.com/embed/TcSm2ISfTu8]
+
+## Clean up resources
+
+If you're not going to continue to use this app, use the following steps to delete all resources created by this tutorial in the Azure portal.
+
+1. From the left-hand menu in the Azure portal, click **Resource groups** and then click the name of the resource you created.
+2. On your resource group page, click **Delete**, type the name of the resource to delete in the text box, and then click **Delete**.
+
+## Next steps
+
+In this tutorial, you've learned how to:
+
+> [!div class="checklist"]
+> * Create an app with React, Node, Express, and Azure Cosmos DB
+> * Create an Azure Cosmos DB account
+> * Connect the app to the Azure Cosmos DB account
+> * Test the app using Postman
+> * Run the application and add heroes to the database
+
+You can proceed to the next tutorial and learn how to import MongoDB data into Azure Cosmos DB.
+
+> [!div class="nextstepaction"]
+> [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Tutorial Global Distribution Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-global-distribution-mongodb.md
- Title: 'Tutorial to set up global distribution with Azure Cosmos DB API for MongoDB'
-description: Learn how to set up global distribution using Azure Cosmos DB's API for MongoDB.
----- Previously updated : 08/26/2021----
-# Set up global distributed database using Azure Cosmos DB's API for MongoDB
-
-In this article, we show how to use the Azure portal to setup a global distributed database and connect to it using Azure Cosmos DB's API for MongoDB.
-
-This article covers the following tasks:
-
-> [!div class="checklist"]
-> * Configure global distribution using the Azure portal
-> * Configure global distribution using the [Azure Cosmos DB's API for MongoDB](mongodb-introduction.md)
--
-## Verifying your regional setup
-A simple way to check your global configuration with Cosmos DB's API for MongoDB is to run the *isMaster()* command from the Mongo Shell.
-
-From your Mongo Shell:
-
- ```
- db.isMaster()
- ```
-
-Example results:
-
- ```JSON
- {
- "_t": "IsMasterResponse",
- "ok": 1,
- "ismaster": true,
- "maxMessageSizeBytes": 4194304,
- "maxWriteBatchSize": 1000,
- "minWireVersion": 0,
- "maxWireVersion": 2,
- "tags": {
- "region": "South India"
- },
- "hosts": [
- "vishi-api-for-mongodb-southcentralus.documents.azure.com:10255",
- "vishi-api-for-mongodb-westeurope.documents.azure.com:10255",
- "vishi-api-for-mongodb-southindia.documents.azure.com:10255"
- ],
- "setName": "globaldb",
- "setVersion": 1,
- "primary": "vishi-api-for-mongodb-southindia.documents.azure.com:10255",
- "me": "vishi-api-for-mongodb-southindia.documents.azure.com:10255"
- }
- ```
-
-## Connecting to a preferred region
-
-The Azure Cosmos DB's API for MongoDB enables you to specify your collection's read preference for a globally distributed database. For both low latency reads and global high availability, we recommend setting your collection's read preference to *nearest*. A read preference of *nearest* is configured to read from the closest region.
-
-```csharp
-var collection = database.GetCollection<BsonDocument>(collectionName);
-collection = collection.WithReadPreference(new ReadPreference(ReadPreferenceMode.Nearest));
-```
-
-For applications with a primary read/write region and a secondary region for disaster recovery (DR) scenarios, we recommend setting your collection's read preference to *secondary preferred*. A read preference of *secondary preferred* is configured to read from the secondary region when the primary region is unavailable.
-
-```csharp
-var collection = database.GetCollection<BsonDocument>(collectionName);
-collection = collection.WithReadPreference(new ReadPreference(ReadPreferenceMode.SecondaryPreferred));
-```
-
-Lastly, if you would like to manually specify your read regions. You can set the region Tag within your read preference.
-
-```csharp
-var collection = database.GetCollection<BsonDocument>(collectionName);
-var tag = new Tag("region", "Southeast Asia");
-collection = collection.WithReadPreference(new ReadPreference(ReadPreferenceMode.Secondary, new[] { new TagSet(new[] { tag }) }));
-```
-
-That's it, that completes this tutorial. You can learn how to manage the consistency of your globally replicated account by reading [Consistency levels in Azure Cosmos DB](../consistency-levels.md). And for more information about how global database replication works in Azure Cosmos DB, see [Distribute data globally with Azure Cosmos DB](../distribute-data-globally.md).
-
-## Next steps
-
-In this tutorial, you've done the following:
-
-> [!div class="checklist"]
-> * Configure global distribution using the Azure portal
-> * Configure global distribution using the Cosmos DB's API for MongoDB
-
-You can now proceed to the next tutorial to learn how to develop locally using the Azure Cosmos DB local emulator.
-
-> [!div class="nextstepaction"]
-> [Develop locally with the Azure Cosmos DB emulator](../local-emulator.md)
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Tutorial Global Distribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-global-distribution.md
+
+ Title: 'Tutorial to set up global distribution with Azure Cosmos DB for MongoDB'
+description: Learn how to set up global distribution using Azure Cosmos DB's API for MongoDB.
+++++ Last updated : 08/26/2021+
+ms.devlang: csharp
++
+# Set up a globally distributed database using Azure Cosmos DB's API for MongoDB
+
+In this article, we show how to use the Azure portal to set up a globally distributed database and connect to it by using Azure Cosmos DB's API for MongoDB.
+
+This article covers the following tasks:
+
+> [!div class="checklist"]
+> * Configure global distribution using the Azure portal
+> * Configure global distribution using the [Azure Cosmos DB's API for MongoDB](introduction.md)
++
+## Verifying your regional setup
+A simple way to check your global configuration with Azure Cosmos DB's API for MongoDB is to run the *isMaster()* command from the Mongo Shell.
+
+From your Mongo Shell:
+
+ ```
+ db.isMaster()
+ ```
+
+Example results:
+
+ ```JSON
+ {
+ "_t": "IsMasterResponse",
+ "ok": 1,
+ "ismaster": true,
+ "maxMessageSizeBytes": 4194304,
+ "maxWriteBatchSize": 1000,
+ "minWireVersion": 0,
+ "maxWireVersion": 2,
+ "tags": {
+ "region": "South India"
+ },
+ "hosts": [
+ "vishi-api-for-mongodb-southcentralus.documents.azure.com:10255",
+ "vishi-api-for-mongodb-westeurope.documents.azure.com:10255",
+ "vishi-api-for-mongodb-southindia.documents.azure.com:10255"
+ ],
+ "setName": "globaldb",
+ "setVersion": 1,
+ "primary": "vishi-api-for-mongodb-southindia.documents.azure.com:10255",
+ "me": "vishi-api-for-mongodb-southindia.documents.azure.com:10255"
+ }
+ ```
+
+## Connecting to a preferred region
+
+The Azure Cosmos DB's API for MongoDB enables you to specify your collection's read preference for a globally distributed database. For both low latency reads and global high availability, we recommend setting your collection's read preference to *nearest*. A read preference of *nearest* is configured to read from the closest region.
+
+```csharp
+var collection = database.GetCollection<BsonDocument>(collectionName);
+collection = collection.WithReadPreference(new ReadPreference(ReadPreferenceMode.Nearest));
+```
+
+For applications with a primary read/write region and a secondary region for disaster recovery (DR) scenarios, we recommend setting your collection's read preference to *secondary preferred*. A read preference of *secondary preferred* is configured to read from the secondary region when the primary region is unavailable.
+
+```csharp
+var collection = database.GetCollection<BsonDocument>(collectionName);
+collection = collection.WithReadPreference(new ReadPreference(ReadPreferenceMode.SecondaryPreferred));
+```
+
+Lastly, if you would like to manually specify your read regions, you can set the region tag within your read preference.
+
+```csharp
+var collection = database.GetCollection<BsonDocument>(collectionName);
+var tag = new Tag("region", "Southeast Asia");
+collection = collection.WithReadPreference(new ReadPreference(ReadPreferenceMode.Secondary, new[] { new TagSet(new[] { tag }) }));
+```
+
+That completes this tutorial. To learn how to manage the consistency of your globally replicated account, see [Consistency levels in Azure Cosmos DB](../consistency-levels.md). For more information about how global database replication works in Azure Cosmos DB, see [Distribute data globally with Azure Cosmos DB](../distribute-data-globally.md).
+
+## Next steps
+
+In this tutorial, you've done the following:
+
+> [!div class="checklist"]
+> * Configure global distribution using the Azure portal
+> * Configure global distribution using the Azure Cosmos DB's API for MongoDB
+
+You can now proceed to the next tutorial to learn how to develop locally using the Azure Cosmos DB local emulator.
+
+> [!div class="nextstepaction"]
+> [Develop locally with the Azure Cosmos DB emulator](../local-emulator.md)
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Tutorial Mongotools Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-mongotools-cosmos-db.md
Title: Migrate MongoDB offline to Azure Cosmos DB API for MongoDB, using MongoDB native tools
+ Title: Migrate MongoDB offline to Azure Cosmos DB for MongoDB, using MongoDB native tools
description: Learn how MongoDB native tools can be used to migrate small datasets from MongoDB instances to Azure Cosmos DB -++ Last updated 08/26/2021 # Tutorial: Migrate MongoDB to Azure Cosmos DB's API for MongoDB offline using MongoDB native tools > [!IMPORTANT] > Please read this entire guide before carrying out your migration steps.
In this tutorial, you learn how to:
> * Monitor the migration. > * Verify that migration was successful.
-In this tutorial, you migrate a dataset in MongoDB hosted in an Azure Virtual Machine to Azure Cosmos DB's API for MongoDB by using MongoDB native tools. The MongoDB native tools are a set of binaries that facilitate data manipulation on an existing MongoDB instance. Since Azure Cosmos DB exposes a Mongo API, the MongoDB native tools are able to insert data into Azure Cosmos DB. The focus of this doc is on migrating data out of a MongoDB instance using *mongoexport/mongoimport* or *mongodump/mongorestore*. Since the native tools connect to MongoDB using connection strings, you can run the tools anywhere, however we recommend running these tools within the same network as the MongoDB instance to avoid firewall issues.
+In this tutorial, you migrate a dataset in MongoDB hosted in an Azure virtual machine to Azure Cosmos DB's API for MongoDB by using MongoDB native tools. The MongoDB native tools are a set of binaries that facilitate data manipulation on an existing MongoDB instance. Because Azure Cosmos DB exposes an API for MongoDB, the MongoDB native tools can insert data into Azure Cosmos DB. This document focuses on migrating data out of a MongoDB instance by using *mongoexport/mongoimport* or *mongodump/mongorestore*. Because the native tools connect to MongoDB by using connection strings, you can run the tools anywhere; however, we recommend running them within the same network as the MongoDB instance to avoid firewall issues.
The MongoDB native tools can move data only as fast as the host hardware allows; the native tools can be the simplest solution for small datasets where total migration time is not a concern. [MongoDB Spark connector](https://docs.mongodb.com/spark-connector/current/), [Azure Data Migration Service (DMS)](../../dms/tutorial-mongodb-cosmos-db.md), or [Azure Data Factory (ADF)](../../data-factory/connector-azure-cosmos-db-mongodb-api.md) can be better alternatives if you need a scalable migration pipeline.
If you don't have a MongoDB source set up already, see the article [Install and
To complete this tutorial, you need to: * [Complete the pre-migration](pre-migration-steps.md) steps such as estimating throughput, choosing a partition key, and the indexing policy.
-* [Create an Azure Cosmos DB API for MongoDB account](https://portal.azure.com/#create/Microsoft.DocumentDB).
+* [Create an Azure Cosmos DB for MongoDB account](https://portal.azure.com/#create/Microsoft.DocumentDB).
* Log into your MongoDB instance * [Download and install the MongoDB native tools from this link](https://www.mongodb.com/try/download/database-tools). * **Ensure that your MongoDB native tools version matches your existing MongoDB instance.**
- * If your MongoDB instance has a different version than Azure Cosmos DB Mongo API, then **install both MongoDB native tool versions and use the appropriate tool version for MongoDB and Azure Cosmos DB Mongo API, respectively.**
+ * If your MongoDB instance has a different version than Azure Cosmos DB for MongoDB, then **install both MongoDB native tool versions and use the appropriate tool version for MongoDB and Azure Cosmos DB for MongoDB, respectively.**
* Add a user with `readWrite` permissions, unless one already exists. Later in this tutorial, provide this username/password to the *mongoexport* and *mongodump* tools. ## Configure Azure Cosmos DB Server Side Retries
And if it is *Disabled*, then we recommend you enable it as shown below
* *mongodump/mongorestore* is the best pair of migration tools for migrating your entire MongoDB database. The compact BSON format will make more efficient use of network resources as the data is inserted into Azure Cosmos DB. * *mongodump* exports your existing data as a BSON file. * *mongorestore* imports your BSON file dump into Azure Cosmos DB.
-* As an aside - if you simply have a small JSON file that you want to import into Azure Cosmos DB Mongo API, the *mongoimport* tool is a quick solution for ingesting the data.
+* As an aside, if you simply have a small JSON file that you want to import into Azure Cosmos DB for MongoDB, the *mongoimport* tool is a quick solution for ingesting the data, as the sketch after this list shows.
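+
+For illustration, the following is a minimal sketch of a *mongoimport* run against Azure Cosmos DB for MongoDB. The host, user name, password, database, collection, and file name are placeholders; substitute the values from your account's *Connection String* blade, and note that exact flags can vary by tool version.
+
+```bash
+# Hypothetical host, credentials, database, collection, and file; replace them with your own values.
+mongoimport --host your-account.mongo.cosmos.azure.com:10255 \
+  --username your-account --password "<your-primary-key>" \
+  --ssl \
+  --db edx --collection importedQuery \
+  --type json --file small-dataset.json
+```
+
+The `--ssl` flag is included because Azure Cosmos DB accepts only TLS-encrypted connections.
+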
-## Collect the Azure Cosmos DB Mongo API credentials
+## Collect the Azure Cosmos DB for MongoDB credentials
-Azure Cosmos DB Mongo API provides compatible access credentials which MongoDB native tools can utilize. You will need to have these access credentials on-hand in order to migrate data into Azure Cosmos DB Mongo API. To find these credentials:
+Azure Cosmos DB for MongoDB provides compatible access credentials that the MongoDB native tools can use. You need these credentials on hand to migrate data into Azure Cosmos DB for MongoDB. To find them:
1. Open the Azure portal
-1. Navigate to your Azure Cosmos DB Mongo API account
+1. Navigate to your Azure Cosmos DB for MongoDB account
1. In the left nav, select the *Connection String* blade, and you should see a display similar to the below: ![Screenshot of Azure Cosmos DB credentials.](media/tutorial-mongotools-cosmos-db/cosmos-mongo-credentials.png)
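
If you prefer the command line, you can list the same connection strings with the Azure CLI. The following is a sketch with placeholder account and resource group names:

```bash
# Placeholder names; replace them with your own account and resource group.
az cosmosdb keys list \
  --name my-mongo-account \
  --resource-group my-resource-group \
  --type connection-strings
```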
The rest of this section will guide you through using the pair of tools you sele
1. Finally, examine Azure Cosmos DB to **validate** that migration was successful. Open the Azure Cosmos DB portal and navigate to Data Explorer. You should see (1) that an *edx* database with an *importedQuery* collection has been created, and (2) if you exported only a subset of data, *importedQuery* should contain *only* docs matching the desired subset of the data. In the example below, only one doc matched the filter `{"field1":"value1"}`:
- ![Screenshot of Cosmos DB data verification.](media/tutorial-mongotools-cosmos-db/mongo-review-cosmos.png)
+ ![Screenshot of Azure Cosmos DB data verification.](media/tutorial-mongotools-cosmos-db/mongo-review-cosmos.png)
### *mongodump/mongorestore*
The rest of this section will guide you through using the pair of tools you sele
1. Finally, examine Azure Cosmos DB to **validate** that migration was successful. Open the Azure Cosmos DB portal and navigate to Data Explorer. You should see (1) that an *edx* database with an *importedQuery* collection has been created, and (2) *importedQuery* should contain the *entire* dataset from the source collection:
- ![Screenshot of verifying Cosmos DB mongorestore data.](media/tutorial-mongotools-cosmos-db/mongo-review-cosmos-restore.png)
+ ![Screenshot of verifying Azure Cosmos DB mongorestore data.](media/tutorial-mongotools-cosmos-db/mongo-review-cosmos-restore.png)
## Post-migration optimization
After you migrate the data stored in MongoDB database to Azure Cosmos DB's API
## Additional resources
-* [Cosmos DB service information](https://azure.microsoft.com/services/cosmos-db/)
+* [Azure Cosmos DB service information](https://azure.microsoft.com/services/cosmos-db/)
* [MongoDB database tools documentation](https://docs.mongodb.com/database-tools/) * Trying to do capacity planning for a migration to Azure Cosmos DB? * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
After you migrate the data stored in MongoDB database to Azure Cosmos DB's API
## Next steps
-* Review migration guidance for additional scenarios in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
+* Review migration guidance for additional scenarios in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
cosmos-db Tutorial Query Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-query-mongodb.md
- Title: Query data with Azure Cosmos DB's API for MongoDB
-description: Learn how to query data from Azure Cosmos DB's API for MongoDB by using MongoDB shell commands
----- Previously updated : 12/03/2019---
-# Query data by using Azure Cosmos DB's API for MongoDB
-
-The [Azure Cosmos DB's API for MongoDB](mongodb-introduction.md) supports [MongoDB queries](https://docs.mongodb.com/manual/tutorial/query-documents/).
-
-This article covers the following tasks:
-
-> [!div class="checklist"]
-> * Querying data stored in your Cosmos database using MongoDB shell
-
-You can get started by using the examples in this document, and watch the [Query Azure Cosmos DB with MongoDB shell](https://azure.microsoft.com/resources/videos/query-azure-cosmos-db-data-by-using-the-mongodb-shell/) video.
-
-## Sample document
-
-The queries in this article use the following sample document.
-
-```json
-{
- "id": "WakefieldFamily",
- "parents": [
- { "familyName": "Wakefield", "givenName": "Robin" },
- { "familyName": "Miller", "givenName": "Ben" }
- ],
- "children": [
- {
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female", "grade": 1,
- "pets": [
- { "givenName": "Goofy" },
- { "givenName": "Shadow" }
- ]
- },
- {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8 }
- ],
- "address": { "state": "NY", "county": "Manhattan", "city": "NY" },
- "creationDate": 1431620462,
- "isRegistered": false
-}
-```
-## <a id="examplequery1"></a> Example query 1
-
-Given the sample family document above, the following query returns the documents where the id field matches `WakefieldFamily`.
-
-**Query**
-
-```bash
-db.families.find({ id: "WakefieldFamily"})
-```
-
-**Results**
-
-```json
-{
- "_id": "ObjectId(\"58f65e1198f3a12c7090e68c\")",
- "id": "WakefieldFamily",
- "parents": [
- {
- "familyName": "Wakefield",
- "givenName": "Robin"
- },
- {
- "familyName": "Miller",
- "givenName": "Ben"
- }
- ],
- "children": [
- {
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female",
- "grade": 1,
- "pets": [
- { "givenName": "Goofy" },
- { "givenName": "Shadow" }
- ]
- },
- {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8
- }
- ],
- "address": {
- "state": "NY",
- "county": "Manhattan",
- "city": "NY"
- },
- "creationDate": 1431620462,
- "isRegistered": false
-}
-```
-
-## <a id="examplequery2"></a>Example query 2
-
-The next query returns all the children in the family.
-
-**Query**
-
-```bash
-db.families.find( { id: "WakefieldFamily" }, { children: true } )
-```
-
-**Results**
-
-```json
-{
- "_id": "ObjectId("58f65e1198f3a12c7090e68c")",
- "children": [
- {
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female",
- "grade": 1,
- "pets": [
- { "givenName": "Goofy" },
- { "givenName": "Shadow" }
- ]
- },
- {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8
- }
- ]
-}
-```
-
-## <a id="examplequery3"></a>Example query 3
-
-The next query returns all the families that are registered.
-
-**Query**
-
-```bash
-db.families.find( { "isRegistered" : true })
-```
-
-**Results**
-
-No document will be returned.
-
-## <a id="examplequery4"></a>Example query 4
-
-The next query returns all the families that are not registered.
-
-**Query**
-
-```bash
-db.families.find( { "isRegistered" : false })
-```
-
-**Results**
-
-```json
-{
- "_id": ObjectId("58f65e1198f3a12c7090e68c"),
- "id": "WakefieldFamily",
- "parents": [{
- "familyName": "Wakefield",
- "givenName": "Robin"
- }, {
- "familyName": "Miller",
- "givenName": "Ben"
- }],
- "children": [{
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female",
- "grade": 1,
- "pets": [{
- "givenName": "Goofy"
- }, {
- "givenName": "Shadow"
- }]
- }, {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8
- }],
- "address": {
- "state": "NY",
- "county": "Manhattan",
- "city": "NY"
- },
- "creationDate": 1431620462,
- "isRegistered": false
-}
-```
-
-## <a id="examplequery5"></a>Example query 5
-
-The next query returns all the families that are not registered and state is NY.
-
-**Query**
-
-```bash
-db.families.find( { "isRegistered" : false, "address.state" : "NY" })
-```
-
-**Results**
-
-```json
-{
- "_id": ObjectId("58f65e1198f3a12c7090e68c"),
- "id": "WakefieldFamily",
- "parents": [{
- "familyName": "Wakefield",
- "givenName": "Robin"
- }, {
- "familyName": "Miller",
- "givenName": "Ben"
- }],
- "children": [{
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female",
- "grade": 1,
- "pets": [{
- "givenName": "Goofy"
- }, {
- "givenName": "Shadow"
- }]
- }, {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8
- }],
- "address": {
- "state": "NY",
- "county": "Manhattan",
- "city": "NY"
- },
- "creationDate": 1431620462,
- "isRegistered": false
-}
-```
-
-## <a id="examplequery6"></a>Example query 6
-
-The next query returns all the families where any child's grade is 8.
-
-**Query**
-
-```bash
-db.families.find( { children : { $elemMatch: { grade : 8 }} } )
-```
-
-**Results**
-
-```json
-{
- "_id": ObjectId("58f65e1198f3a12c7090e68c"),
- "id": "WakefieldFamily",
- "parents": [{
- "familyName": "Wakefield",
- "givenName": "Robin"
- }, {
- "familyName": "Miller",
- "givenName": "Ben"
- }],
- "children": [{
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female",
- "grade": 1,
- "pets": [{
- "givenName": "Goofy"
- }, {
- "givenName": "Shadow"
- }]
- }, {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8
- }],
- "address": {
- "state": "NY",
- "county": "Manhattan",
- "city": "NY"
- },
- "creationDate": 1431620462,
- "isRegistered": false
-}
-```
-
-## <a id="examplequery7"></a>Example query 7
-
-The next query returns all the families where the size of the children array is 3.
-
-**Query**
-
-```bash
-db.families.find( { children: { $size: 3 } } )
-```
-
-**Results**
-
-No results are returned because no family has more than two children. Only when the parameter is 2 does this query succeed and return the full document.
-
-## Next steps
-
-In this tutorial, you've done the following:
-
-> [!div class="checklist"]
-> * Learned how to query using Cosmos DB's API for MongoDB
-
-You can now proceed to the next tutorial to learn how to distribute your data globally.
-
-> [!div class="nextstepaction"]
-> [Distribute your data globally](../tutorial-global-distribution-sql-api.md)
-
cosmos-db Tutorial Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-query.md
+
+ Title: Query data with Azure Cosmos DB's API for MongoDB
+description: Learn how to query data from Azure Cosmos DB's API for MongoDB by using MongoDB shell commands
++++++ Last updated : 12/03/2019+++
+# Query data by using Azure Cosmos DB's API for MongoDB
+
+The [Azure Cosmos DB's API for MongoDB](introduction.md) supports [MongoDB queries](https://docs.mongodb.com/manual/tutorial/query-documents/).
+
+This article covers the following tasks:
+
+> [!div class="checklist"]
+> * Querying data stored in your Azure Cosmos DB database using MongoDB shell
+
+You can get started by using the examples in this document, and watch the [Query Azure Cosmos DB with MongoDB shell](https://azure.microsoft.com/resources/videos/query-azure-cosmos-db-data-by-using-the-mongodb-shell/) video.
+
+## Sample document
+
+The queries in this article use the following sample document.
+
+```json
+{
+ "id": "WakefieldFamily",
+ "parents": [
+ { "familyName": "Wakefield", "givenName": "Robin" },
+ { "familyName": "Miller", "givenName": "Ben" }
+ ],
+ "children": [
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female", "grade": 1,
+ "pets": [
+ { "givenName": "Goofy" },
+ { "givenName": "Shadow" }
+ ]
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8 }
+ ],
+ "address": { "state": "NY", "county": "Manhattan", "city": "NY" },
+ "creationDate": 1431620462,
+ "isRegistered": false
+}
+```
+## <a id="examplequery1"></a> Example query 1
+
+Given the sample family document above, the following query returns the documents where the id field matches `WakefieldFamily`.
+
+**Query**
+
+```bash
+db.families.find({ id: "WakefieldFamily"})
+```
+
+**Results**
+
+```json
+{
+ "_id": "ObjectId(\"58f65e1198f3a12c7090e68c\")",
+ "id": "WakefieldFamily",
+ "parents": [
+ {
+ "familyName": "Wakefield",
+ "givenName": "Robin"
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Ben"
+ }
+ ],
+ "children": [
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1,
+ "pets": [
+ { "givenName": "Goofy" },
+ { "givenName": "Shadow" }
+ ]
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }
+ ],
+ "address": {
+ "state": "NY",
+ "county": "Manhattan",
+ "city": "NY"
+ },
+ "creationDate": 1431620462,
+ "isRegistered": false
+}
+```
+
+## <a id="examplequery2"></a>Example query 2
+
+The next query returns all the children in the family.
+
+**Query**
+
+```bash
+db.families.find( { id: "WakefieldFamily" }, { children: true } )
+```
+
+**Results**
+
+```json
+{
+ "_id": "ObjectId("58f65e1198f3a12c7090e68c")",
+ "children": [
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1,
+ "pets": [
+ { "givenName": "Goofy" },
+ { "givenName": "Shadow" }
+ ]
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }
+ ]
+}
+```
+
+## <a id="examplequery3"></a>Example query 3
+
+The next query returns all the families that are registered.
+
+**Query**
+
+```bash
+db.families.find( { "isRegistered" : true })
+```
+
+**Results**
+
+No document will be returned.
+
+## <a id="examplequery4"></a>Example query 4
+
+The next query returns all the families that are not registered.
+
+**Query**
+
+```bash
+db.families.find( { "isRegistered" : false })
+```
+
+**Results**
+
+```json
+{
+ "_id": ObjectId("58f65e1198f3a12c7090e68c"),
+ "id": "WakefieldFamily",
+ "parents": [{
+ "familyName": "Wakefield",
+ "givenName": "Robin"
+ }, {
+ "familyName": "Miller",
+ "givenName": "Ben"
+ }],
+ "children": [{
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1,
+ "pets": [{
+ "givenName": "Goofy"
+ }, {
+ "givenName": "Shadow"
+ }]
+ }, {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }],
+ "address": {
+ "state": "NY",
+ "county": "Manhattan",
+ "city": "NY"
+ },
+ "creationDate": 1431620462,
+ "isRegistered": false
+}
+```
+
+## <a id="examplequery5"></a>Example query 5
+
+The next query returns all the families that are not registered and state is NY.
+
+**Query**
+
+```bash
+db.families.find( { "isRegistered" : false, "address.state" : "NY" })
+```
+
+**Results**
+
+```json
+{
+ "_id": ObjectId("58f65e1198f3a12c7090e68c"),
+ "id": "WakefieldFamily",
+ "parents": [{
+ "familyName": "Wakefield",
+ "givenName": "Robin"
+ }, {
+ "familyName": "Miller",
+ "givenName": "Ben"
+ }],
+ "children": [{
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1,
+ "pets": [{
+ "givenName": "Goofy"
+ }, {
+ "givenName": "Shadow"
+ }]
+ }, {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }],
+ "address": {
+ "state": "NY",
+ "county": "Manhattan",
+ "city": "NY"
+ },
+ "creationDate": 1431620462,
+ "isRegistered": false
+}
+```
+
+## <a id="examplequery6"></a>Example query 6
+
+The next query returns all the families where any child's grade is 8.
+
+**Query**
+
+```bash
+db.families.find( { children : { $elemMatch: { grade : 8 }} } )
+```
+
+**Results**
+
+```json
+{
+ "_id": ObjectId("58f65e1198f3a12c7090e68c"),
+ "id": "WakefieldFamily",
+ "parents": [{
+ "familyName": "Wakefield",
+ "givenName": "Robin"
+ }, {
+ "familyName": "Miller",
+ "givenName": "Ben"
+ }],
+ "children": [{
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1,
+ "pets": [{
+ "givenName": "Goofy"
+ }, {
+ "givenName": "Shadow"
+ }]
+ }, {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }],
+ "address": {
+ "state": "NY",
+ "county": "Manhattan",
+ "city": "NY"
+ },
+ "creationDate": 1431620462,
+ "isRegistered": false
+}
+```
+
+## <a id="examplequery7"></a>Example query 7
+
+The next query returns all the families where the size of the children array is 3.
+
+**Query**
+
+```bash
+db.families.find( { children: { $size: 3 } } )
+```
+
+**Results**
+
+No results are returned because no family has more than two children. Only when the parameter is 2 does this query succeed and return the full document, as shown in the sketch below.
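+
+A sketch of the same query with the parameter changed to 2, which matches the two children in the sample document:
+
+```bash
+db.families.find( { children: { $size: 2 } } )
+```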
+
+## Next steps
+
+In this tutorial, you've done the following:
+
+> [!div class="checklist"]
+> * Learned how to query using Azure Cosmos DB's API for MongoDB
+
+You can now proceed to the next tutorial to learn how to distribute your data globally.
+
+> [!div class="nextstepaction"]
+> [Distribute your data globally](../nosql/tutorial-global-distribution.md)
cosmos-db Upgrade Mongodb Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/upgrade-mongodb-version.md
- Title: Upgrade the Mongo version of your Azure Cosmos DB's API for MongoDB account
-description: How to upgrade the MongoDB wire-protocol version for your existing Azure Cosmos DB's API for MongoDB accounts seamlessly
--- Previously updated : 02/23/2022-----
-# Upgrade the API version of your Azure Cosmos DB API for MongoDB account
-
-This article describes how to upgrade the API version of your Azure Cosmos DB's API for MongoDB account. After you upgrade, you can use the latest functionality in Azure Cosmos DB's API for MongoDB. The upgrade process doesn't interrupt the availability of your account and it doesn't consume RU/s or decrease the capacity of the database at any point. No existing data or indexes will be affected by this process.
-
-When upgrading to a new API version, start with development/test workloads before upgrading production workloads. It's important to upgrade your clients to a version compatible with the API version you are upgrading to before upgrading your Azure Cosmos DB API for MongoDB account.
-
->[!Note]
-> At this moment, only qualifying accounts using the server version 3.2 can be upgraded to version 3.6 and higher. If your account doesn't show the upgrade option, please [file a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
-
-## Upgrading to 4.2, 4.0, or 3.6
-### Benefits of upgrading to version 4.2:
-- Several major improvements to the aggregation pipeline such as support for `$merge`, Trigonometry, arithmetic expressions, and more.-- Support for client side field encyption which further secures your database by enabling individual fields to be selectively encrypted and maintaining privacy of the encrypted data from database users and hosting providers.---
-### Benefits of upgrading to version 4.0
-
-The following are the new features included in version 4.0:
-- Support for multi-document transactions within unsharded collections.-- New aggregation operators-- Enhanced scan performance-- Faster, more efficient storage-
-### Benefits of upgrading to version 3.6
-
-The following are the new features included in version 3.6:
-- Enhanced performance and stability-- Support for new database commands-- Support for aggregation pipeline by default and new aggregation stages-- Support for Change Streams-- Support for compound Indexes-- Cross-partition support for the following operations: update, delete, count and sort-- Improved performance for the following aggregate operations: $count, $skip, $limit and $group-- Wildcard indexing is now supported-
-### Changes from version 3.2
--- By default, the [Server Side Retry (SSR)](prevent-rate-limiting-errors.md) feature is enabled, so that requests from the client application will not return 16500 errors. Instead requests will resume until they complete or hit the 60 second timeout.-- Per request timeout is set to 60 seconds.-- MongoDB collections created on the new wire protocol version will only have the `_id` property indexed by default.-
-### Action required when upgrading from 3.2
-
-When upgrading from 3.2, the database account endpoint suffix will be updated to the following format:
-
-```
-<your_database_account_name>.mongo.cosmos.azure.com
-```
-
-If you are upgrading from version 3.2, you will need to replace the existing endpoint in your applications and drivers that connect with this database account. **Only connections that are using the new endpoint will have access to the features in the new API version**. The previous 3.2 endpoint should have the suffix `.documents.azure.com`.
-
-When upgrading from 3.2 to newer versions, [compound indexes](mongodb-indexing.md) are now required to perform sort operations on multiple fields to ensure stable, high performance for these queries. Ensure that these compound indexes are created so that your multi-field sorts succeed.
-
->[!Note]
-> This endpoint might have slight differences if your account was created in a Sovereign, Government or Restricted Azure Cloud.
-
-## How to upgrade
-
-1. Sign into the [Azure portal.](https://portal.azure.com/)
-
-1. Navigate to your Azure Cosmos DB API for MongoDB account. Open the **Overview** pane and verify that your current **Server version** is either 3.2 or 3.6.
-
- :::image type="content" source="./media/upgrade-mongodb-version/check-current-version.png" alt-text="Check the current version of your MongoDB account from the Azure portal." border="true":::
-
-1. From the left menu, open the `Features` pane. This pane shows the account level features that are available for your database account.
-
-1. Select the `Upgrade MongoDB server version` row. If you don't see this option, your account might not be eligible for this upgrade. Please file [a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) if that is the case.
-
- :::image type="content" source="./media/upgrade-mongodb-version/upgrade-server-version.png" alt-text="Open the Features blade and upgrade your account." border="true":::
-
-1. Review the information displayed about the upgrade. Select `Set server version to 4.2` (or 4.0 or 3.6 depending upon your current version).
-
- :::image type="content" source="./media/upgrade-mongodb-version/select-upgrade.png" alt-text="Review upgrade guidance and select upgrade." border="true":::
-
-1. After you start the upgrade, the **Feature** menu is greyed out and the status is set to *Pending*. The upgrade takes around 15 minutes to complete. This process will not affect the existing functionality or operations of your database account. After it's complete, the **Update MongoDB server version** status will show the upgraded version. Please [contact support](https://azure.microsoft.com/support/create-ticket/) if there was an issue processing your request.
-
-1. The following are some considerations after upgrading your account:
-
- 1. If you upgraded from 3.2, go back to the **Overview** pane, and copy the new connection string to use in your application. The old connection string running 3.2 will not be interrupted. To ensure a consistent experience, all your applications must use the new endpoint.
-
- 1. If you upgraded from 3.6, your existing connection string will be upgraded to the version specified and should continue to be used.
-
-## How to downgrade
-
-You may also downgrade your account to 4.0 or 3.6 via the same steps in the 'How to Upgrade' section.
-
-If you upgraded from 3.2 and wish to downgrade back to 3.2, you can switch back to your previous (3.2) connection string with the host `accountname.documents.azure.com`, which remains active post-upgrade and continues to run version 3.2.
-
-## Next steps
--- Learn about the supported and unsupported [features of MongoDB version 4.2](feature-support-42.md).-- Learn about the supported and unsupported [features of MongoDB version 4.0](feature-support-40.md).-- Learn about the supported and unsupported [features of MongoDB version 3.6](feature-support-36.md).-- For further information check [Mongo 3.6 version features](https://devblogs.microsoft.com/cosmosdb/azure-cosmos-dbs-api-for-mongodb-now-supports-server-version-3-6/)-- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Upgrade Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/upgrade-version.md
+
+ Title: Upgrade the Mongo version of your Azure Cosmos DB's API for MongoDB account
+description: How to upgrade the MongoDB wire-protocol version for your existing Azure Cosmos DB's API for MongoDB accounts seamlessly
++++ Last updated : 02/23/2022++++
+# Upgrade the API version of your Azure Cosmos DB for MongoDB account
+
+This article describes how to upgrade the API version of your Azure Cosmos DB's API for MongoDB account. After you upgrade, you can use the latest functionality in Azure Cosmos DB's API for MongoDB. The upgrade process doesn't interrupt the availability of your account and it doesn't consume RU/s or decrease the capacity of the database at any point. No existing data or indexes will be affected by this process.
+
+When upgrading to a new API version, start with development/test workloads before upgrading production workloads. It's important to upgrade your clients to a version compatible with the API version you are upgrading to before upgrading your Azure Cosmos DB for MongoDB account.
+
+>[!Note]
+> At this moment, only qualifying accounts using the server version 3.2 can be upgraded to version 3.6 and higher. If your account doesn't show the upgrade option, please [file a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+
+## Upgrading to 4.2, 4.0, or 3.6
+### Benefits of upgrading to version 4.2
+- Several major improvements to the aggregation pipeline, such as support for `$merge`, trigonometry operators, arithmetic expressions, and more.
+- Support for client-side field encryption, which further secures your database by enabling individual fields to be selectively encrypted and keeping the encrypted data private from database users and hosting providers.
+++
+### Benefits of upgrading to version 4.0
+
+The following are the new features included in version 4.0:
+- Support for multi-document transactions within unsharded collections.
+- New aggregation operators
+- Enhanced scan performance
+- Faster, more efficient storage
+
+### Benefits of upgrading to version 3.6
+
+The following are the new features included in version 3.6:
+- Enhanced performance and stability
+- Support for new database commands
+- Support for aggregation pipeline by default and new aggregation stages
+- Support for Change Streams
+- Support for compound Indexes
+- Cross-partition support for the following operations: update, delete, count and sort
+- Improved performance for the following aggregate operations: $count, $skip, $limit and $group
+- Wildcard indexing is now supported
+
+### Changes from version 3.2
+
+- By default, the [Server Side Retry (SSR)](prevent-rate-limiting-errors.md) feature is enabled, so that requests from the client application will not return 16500 errors. Instead, requests are retried until they complete or hit the 60-second timeout.
+- Per request timeout is set to 60 seconds.
+- MongoDB collections created on the new wire protocol version will only have the `_id` property indexed by default.
+
+### Action required when upgrading from 3.2
+
+When upgrading from 3.2, the database account endpoint suffix will be updated to the following format:
+
+```
+<your_database_account_name>.mongo.cosmos.azure.com
+```
+
+If you are upgrading from version 3.2, you will need to replace the existing endpoint in your applications and drivers that connect with this database account. **Only connections that are using the new endpoint will have access to the features in the new API version**. The previous 3.2 endpoint should have the suffix `.documents.azure.com`.
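+
+As a sketch, for a hypothetical account named `accountname`, only the host portion of the connection string changes; the account key (and, in this sketch, the port) are assumed to stay the same:
+
+```bash
+# Hypothetical account name; only the host suffix changes after the upgrade.
+OLD_HOST="accountname.documents.azure.com:10255"       # version 3.2 endpoint
+NEW_HOST="accountname.mongo.cosmos.azure.com:10255"    # endpoint after the upgrade
+```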
+
+When you upgrade from 3.2 to a newer version, [compound indexes](indexing.md) are required to perform sort operations on multiple fields, which ensures stable, high performance for these queries. Create these compound indexes so that your multi-field sorts succeed.
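+
+As a minimal sketch with hypothetical collection and field names, the following Mongo shell commands create a compound index and then run a two-field sort that depends on it:
+
+```bash
+// Hypothetical collection and fields; create the compound index before sorting on both fields.
+db.orders.createIndex( { "customerId": 1, "orderDate": -1 } )
+db.orders.find().sort( { "customerId": 1, "orderDate": -1 } )
+```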
+
+>[!Note]
+> This endpoint might have slight differences if your account was created in a Sovereign, Government or Restricted Azure Cloud.
+
+## How to upgrade
+
+1. Sign into the [Azure portal.](https://portal.azure.com/)
+
+1. Navigate to your Azure Cosmos DB for MongoDB account. Open the **Overview** pane and verify that your current **Server version** is either 3.2 or 3.6.
+
+ :::image type="content" source="media/upgrade-version/check-current-version.png" alt-text="Check the current version of your MongoDB account from the Azure portal." border="true":::
+
+1. From the left menu, open the `Features` pane. This pane shows the account level features that are available for your database account.
+
+1. Select the `Upgrade MongoDB server version` row. If you don't see this option, your account might not be eligible for this upgrade. Please file [a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) if that is the case.
+
+ :::image type="content" source="media/upgrade-version/upgrade-server-version.png" alt-text="Open the Features blade and upgrade your account." border="true":::
+
+1. Review the information displayed about the upgrade. Select `Set server version to 4.2` (or 4.0 or 3.6 depending upon your current version).
+
+ :::image type="content" source="media/upgrade-version/select-upgrade.png" alt-text="Review upgrade guidance and select upgrade." border="true":::
+
+1. After you start the upgrade, the **Feature** menu is greyed out and the status is set to *Pending*. The upgrade takes around 15 minutes to complete. This process will not affect the existing functionality or operations of your database account. After it's complete, the **Update MongoDB server version** status will show the upgraded version. Please [contact support](https://azure.microsoft.com/support/create-ticket/) if there was an issue processing your request.
+
+1. The following are some considerations after upgrading your account:
+
+ 1. If you upgraded from 3.2, go back to the **Overview** pane, and copy the new connection string to use in your application. The old connection string running 3.2 will not be interrupted. To ensure a consistent experience, all your applications must use the new endpoint.
+
+ 1. If you upgraded from 3.6, your existing connection string will be upgraded to the version specified and should continue to be used.
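+
+The portal flow above is the documented path. As an alternative sketch, newer Azure CLI versions expose the server version on `az cosmosdb update`; the names below are placeholders, and you should confirm that your installed CLI version supports the `--server-version` parameter before relying on it.
+
+```bash
+# Placeholder names; verify --server-version support in your installed Azure CLI version.
+az cosmosdb update \
+  --name my-mongo-account \
+  --resource-group my-resource-group \
+  --server-version 4.2
+```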
+
+## How to downgrade
+
+You may also downgrade your account to 4.0 or 3.6 via the same steps in the 'How to Upgrade' section.
+
+If you upgraded from 3.2 and wish to downgrade back to 3.2, you can switch back to your previous (3.2) connection string with the host `accountname.documents.azure.com`, which remains active post-upgrade and continues to run version 3.2.
+
+## Next steps
+
+- Learn about the supported and unsupported [features of MongoDB version 4.2](feature-support-42.md).
+- Learn about the supported and unsupported [features of MongoDB version 4.0](feature-support-40.md).
+- Learn about the supported and unsupported [features of MongoDB version 3.6](feature-support-36.md).
+- For further information check [Mongo 3.6 version features](https://devblogs.microsoft.com/cosmosdb/azure-cosmos-dbs-api-for-mongodb-now-supports-server-version-3-6/)
+- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Use Multi Document Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/use-multi-document-transactions.md
Title: Use multi-document transactions in Azure Cosmos DB API for MongoDB
-description: Learn how to create a sample Mongo shell app that can execute a multi-document transaction (all-or-nothing semantic) on a fixed collection in Azure Cosmos DB API for MongoDB 4.0.
+ Title: Use multi-document transactions in Azure Cosmos DB for MongoDB
+description: Learn how to create a sample Mongo shell app that can execute a multi-document transaction (all-or-nothing semantic) on a fixed collection in Azure Cosmos DB for MongoDB 4.0.
-+ Last updated 03/02/2021 ms.devlang: javascript+
-# Use Multi-document transactions in Azure Cosmos DB API for MongoDB
+# Use Multi-document transactions in Azure Cosmos DB for MongoDB
-In this article, you'll create a Mongo shell app that executes a multi-document transaction on a fixed collection in Azure Cosmos DB API for MongoDB with server version 4.0.
+In this article, you'll create a Mongo shell app that executes a multi-document transaction on a fixed collection in Azure Cosmos DB for MongoDB with server version 4.0.
## What are multi-document transactions?
-In Azure Cosmos DB API for MongoDB, operations on a single document are atomic. Multi-document transactions enable applications to execute atomic operations across multiple documents. It offers "all-or-nothing" semantics to the operations. On commit, the changes made inside the transactions are persisted and if the transaction fails, all changes inside the transaction are discarded.
+In Azure Cosmos DB for MongoDB, operations on a single document are atomic. Multi-document transactions enable applications to execute atomic operations across multiple documents. They offer "all-or-nothing" semantics: on commit, the changes made inside the transaction are persisted, and if the transaction fails, all changes inside the transaction are discarded.
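+
+As a minimal sketch of this all-or-nothing behavior in the Mongo shell (the database and collection names `mydb` and `friends` are hypothetical):
+
+```bash
+// Hypothetical database and collection; both inserts persist only if the commit succeeds.
+var session = db.getMongo().startSession();
+var friends = session.getDatabase("mydb").friends;
+
+session.startTransaction();
+try {
+    friends.insertOne({ name: "Tom" });
+    friends.insertOne({ name: "Mike" });
+    session.commitTransaction();   // changes are persisted
+} catch (error) {
+    session.abortTransaction();    // all changes are discarded
+}
+```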
Multi-document transactions follow **ACID** semantics:
Multi-document transactions follow **ACID** semantics:
Multi-document transactions are supported within an unsharded collection in API version 4.0. Multi-document transactions are not supported across collections or in sharded collections in 4.0. The timeout for transactions is a fixed 5 seconds.
-All drivers that support wire protocol version 4.0 or greater will support Azure Cosmos DB API for MongoDB multi-document transactions.
+All drivers that support wire protocol version 4.0 or greater will support Azure Cosmos DB for MongoDB multi-document transactions.
## Run multi-document transactions in MongoDB shell > [!Note]
All drivers that support wire protocol version 4.0 or greater will support Azure
## Next steps
-Explore what's new in [Azure Cosmos DB API for MongoDB 4.0](feature-support-40.md)
+Explore what's new in [Azure Cosmos DB for MongoDB 4.0](feature-support-40.md)
cosmos-db Monitor Account Key Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-account-key-updates.md
Title: Monitor your Azure Cosmos DB account for key updates and key regeneration description: Use the Account Keys Updated metric to monitor your account for key updates. This metric shows you the number of times the primary and secondary keys are updated for an account and the time when they were changed. + Last updated 09/16/2021- # Monitor your Azure Cosmos DB account for key updates and key regeneration
Azure Monitor for Azure Cosmos DB provides metrics, alerts, and logs to monitor
1. First choose the required **subscription**. For the **Resource type** field, select **Azure Cosmos DB accounts**. A list of resource groups where the Azure Cosmos DB accounts are located is displayed.
- 1. Choose a **Resource Group** and select one of your existing Azure Cosmos accounts. Select Apply.
+ 1. Choose a **Resource Group** and select one of your existing Azure Cosmos DB accounts. Select Apply.
:::image type="content" source="./media/monitor-account-key-updates/select-account-scope.png" alt-text="Select the account scope to view metrics" border="true":::
The following screenshot shows how to configure an alert of type critical when t
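
The key updates that this metric counts can be triggered from the Azure portal or from the Azure CLI. The following is a sketch of regenerating the secondary key with the CLI, using placeholder account and resource group names:

```bash
# Placeholder names; regenerating a key is the kind of update the Account Keys Updated metric counts.
az cosmosdb keys regenerate \
  --name my-cosmos-account \
  --resource-group my-resource-group \
  --key-kind secondary
```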
## Next steps
-* Monitor Azure Cosmos DB data by using [diagnostic settings](cosmosdb-monitor-resource-logs.md) in Azure.
-* [Audit Azure Cosmos DB control plane operations](audit-control-plane-logs.md)
+* Monitor Azure Cosmos DB data by using [diagnostic settings](monitor-resource-logs.md) in Azure.
+* [Audit Azure Cosmos DB control plane operations](audit-control-plane-logs.md)
cosmos-db Monitor Cosmos Db Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-cosmos-db-reference.md
- Title: Monitoring Azure Cosmos DB data reference | Microsoft Docs
-description: Important reference material needed when you monitor logs and metrics in Azure Cosmos DB.
---- Previously updated : 12/07/2020---
-# Monitoring Azure Cosmos DB data reference
--
-This article provides a reference of log and metric data collected to analyze the performance and availability of Azure Cosmos DB. See the [Monitor Azure Cosmos DB](monitor-cosmos-db.md) article for details on collecting and analyzing monitoring data for Azure Cosmos DB.
-
-## Metrics
-
-All the metrics corresponding to Azure Cosmos DB are stored in the namespace **Cosmos DB standard metrics**. For a list of all Azure Monitor support metrics (including Azure Cosmos DB), see [Azure Monitor supported metrics](../azure-monitor/essentials/metrics-supported.md). This section lists all the automatically collected platform metrics collected for Azure Cosmos DB.
-
-### Request metrics
-
-|Metric (Metric Display Name)|Unit (Aggregation Type) |Description|Dimensions| Time granularities| Legacy metric mapping | Usage |
-||||| | | |
-| TotalRequests (Total Requests) | Count (Count) | Number of requests made| DatabaseName, CollectionName, Region, StatusCode| All | TotalRequests, Http 2xx, Http 3xx, Http 400, Http 401, Internal Server error, Service Unavailable, Throttled Requests, Average Requests per Second | Used to monitor requests per status code, container at a minute granularity. To get average requests per second, use Count aggregation at minute and divide by 60. |
-| MetadataRequests (Metadata Requests) |Count (Count) | Count of metadata requests. Azure Cosmos DB maintains system metadata container for each account, that allows you to enumerate collections, databases, etc., and their configurations, free of charge. | DatabaseName, CollectionName, Region, StatusCode| All| |Used to monitor throttles due to metadata requests.|
-| MongoRequests (Mongo Requests) | Count (Count) | Number of Mongo Requests Made | DatabaseName, CollectionName, Region, CommandName, ErrorCode| All |Mongo Query Request Rate, Mongo Update Request Rate, Mongo Delete Request Rate, Mongo Insert Request Rate, Mongo Count Request Rate| Used to monitor Mongo request errors, usages per command type. |
-
-### Request Unit metrics
-
-|Metric (Metric Display Name)|Unit (Aggregation Type)|Description|Dimensions| Time granularities| Legacy metric mapping | Usage |
-||||| | | |
-| MongoRequestCharge (Mongo Request Charge) | Count (Total) |Mongo Request Units Consumed| DatabaseName, CollectionName, Region, CommandName, ErrorCode| All |Mongo Query Request Charge, Mongo Update Request Charge, Mongo Delete Request Charge, Mongo Insert Request Charge, Mongo Count Request Charge| Used to monitor Mongo resource RUs in a minute.|
-| TotalRequestUnits (Total Request Units)| Count (Total) | Request Units consumed| DatabaseName, CollectionName, Region, StatusCode |All| TotalRequestUnits| Used to monitor Total RU usage at a minute granularity. To get average RU consumed per second, use Sum aggregation at minute interval/level and divide by 60.|
-| ProvisionedThroughput (Provisioned Throughput)| Count (Maximum) |Provisioned throughput at container granularity| DatabaseName, ContainerName| 5M| | Used to monitor provisioned throughput per container.|
-
-### Storage metrics
-
-|Metric (Metric Display Name)|Unit (Aggregation Type)|Description|Dimensions| Time granularities| Legacy metric mapping | Usage |
-||||| | | |
-| AvailableStorage (Available Storage) |Bytes (Total) | Total available storage reported at 5-minutes granularity per region| DatabaseName, CollectionName, Region| 5M| Available Storage| Used to monitor available storage capacity (applicable only for fixed storage collections) Minimum granularity should be 5 minutes.|
-| DataUsage (Data Usage) |Bytes (Total) |Total data usage reported at 5-minutes granularity per region| DatabaseName, CollectionName, Region| 5M |Data size | Used to monitor total data usage at container and region, minimum granularity should be 5 minutes.|
-| IndexUsage (Index Usage) | Bytes (Total) |Total Index usage reported at 5-minutes granularity per region| DatabaseName, CollectionName, Region| 5M| Index Size| Used to monitor total data usage at container and region, minimum granularity should be 5 minutes. |
-| DocumentQuota (Document Quota) | Bytes (Total) | Total storage quota reported at 5-minutes granularity per region.| DatabaseName, CollectionName, Region| 5M |Storage Capacity| Used to monitor total quota at container and region, minimum granularity should be 5 minutes.|
-| DocumentCount (Document Count) | Count (Total) |Total document count reported at 5-minutes granularity per region| DatabaseName, CollectionName, Region| 5M |Document Count|Used to monitor document count at container and region, minimum granularity should be 5 minutes.|
-
-### Latency metrics
-
-|Metric (Metric Display Name)|Unit (Aggregation Type)|Description|Dimensions| Time granularities| Usage |
-||||| | |
-| ReplicationLatency (Replication Latency)| MilliSeconds (Minimum, Maximum, Average) | P99 Replication Latency across source and target regions for geo-enabled account| SourceRegion, TargetRegion| All | Used to monitor P99 replication latency between any two regions for a geo-replicated account. |
-| Server Side Latency| MilliSeconds (Average) | Time taken by the server to process the request. | CollectionName, ConnectionMode, DatabaseName, OperationType, PublicAPIType, Region | All | Used to monitor the request latency on the Azure Cosmos DB server. |
-
-### Availability metrics
-
-|Metric (Metric Display Name) |Unit (Aggregation Type)|Description| Time granularities| Legacy metric mapping | Usage |
-||||| | |
-| ServiceAvailability (Service Availability)| Percent (Minimum, Maximum) | Account requests availability at one hour granularity| 1H | Service Availability | Represents the percent of total passed requests. A request is considered failed due to a system error if the status code is 410, 500, or 503. Used to monitor availability of the account at hour granularity. |
-
-### Cassandra API metrics
-
-|Metric (Metric Display Name)|Unit (Aggregation Type)|Description|Dimensions| Time granularities| Usage |
-||||| | |
-| CassandraRequests (Cassandra Requests) | Count (Count) | Number of Cassandra API requests made| DatabaseName, CollectionName, ErrorCode, Region, OperationType, ResourceType| All| Used to monitor Cassandra requests at a minute granularity. To get average requests per second, use Count aggregation at minute and divide by 60.|
-| CassandraRequestCharges (Cassandra Request Charges) | Count (Sum, Min, Max, Avg) | Request units consumed by the Cassandra API | DatabaseName, CollectionName, Region, OperationType, ResourceType| All| Used to monitor RUs used per minute by a Cassandra API account.|
-| CassandraConnectionClosures (Cassandra Connection Closures) |Count (Count) |Number of Cassandra Connections closed| ClosureReason, Region| All | Used to monitor the connectivity between clients and the Azure Cosmos DB Cassandra API.|
-
-For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
-
-## Resource logs
-
-The following table lists the properties of resource logs in Azure Cosmos DB. The resource logs are collected into Azure Monitor Logs or Azure Storage. In Azure Monitor, logs are collected in the **AzureDiagnostics** table under the resource provider name of `MICROSOFT.DOCUMENTDB`.
-
-| Azure Storage field or property | Azure Monitor Logs property | Description |
-| | | |
-| **time** | **TimeGenerated** | The date and time (UTC) when the operation occurred. |
-| **resourceId** | **Resource** | The Azure Cosmos DB account for which logs are enabled.|
-| **category** | **Category** | For Azure Cosmos DB, **DataPlaneRequests**, **MongoRequests**, **QueryRuntimeStatistics**, **PartitionKeyStatistics**, **PartitionKeyRUConsumption**, **ControlPlaneRequests**, **CassandraRequests**, **GremlinRequests** are the available log types. |
-| **operationName** | **OperationName** | Name of the operation. The operation name can be `Create`, `Update`, `Read`, `ReadFeed`, `Delete`, `Replace`, `Execute`, `SqlQuery`, `Query`, `JSQuery`, `Head`, `HeadFeed`, or `Upsert`. |
-| **properties** | n/a | The contents of this field are described in the rows that follow. |
-| **activityId** | **activityId_g** | The unique GUID for the logged operation. |
-| **userAgent** | **userAgent_s** | A string that specifies the client user agent from which, the request was sent. The format of user agent is `{user agent name}/{version}`.|
-| **requestResourceType** | **requestResourceType_s** | The type of the resource accessed. This value can be database, container, document, attachment, user, permission, stored procedure, trigger, user-defined function, or an offer. |
-| **statusCode** | **statusCode_s** | The response status of the operation. |
-| **requestResourceId** | **ResourceId** | The resourceId that pertains to the request. Depending on the operation performed, this value may point to `databaseRid`, `collectionRid`, or `documentRid`.|
-| **clientIpAddress** | **clientIpAddress_s** | The client's IP address. |
-| **requestCharge** | **requestCharge_s** | The number of RUs that are used by the operation |
-| **collectionRid** | **collectionId_s** | The unique ID for the collection.|
-| **duration** | **duration_d** | The duration of the operation, in milliseconds. |
-| **requestLength** | **requestLength_s** | The length of the request, in bytes. |
-| **responseLength** | **responseLength_s** | The length of the response, in bytes.|
-| **resourceTokenPermissionId** | **resourceTokenPermissionId_s** | This property indicates the resource token permission Id that you have specified. To learn more about permissions, see the [Secure access to your data](./secure-access-to-data.md#permissions) article. |
-| **resourceTokenPermissionMode** | **resourceTokenPermissionMode_s** | This property indicates the permission mode that you have set when creating the resource token. The permission mode can have values such as "all" or "read". To learn more about permissions, see the [Secure access to your data](./secure-access-to-data.md#permissions) article. |
-| **resourceTokenUserRid** | **resourceTokenUserRid_s** | This value is non-empty when [resource tokens](./secure-access-to-data.md#resource-tokens) are used for authentication. The value points to the resource ID of the user. |
-
-For a list of all Azure Monitor log categories and links to associated schemas, see [Azure Monitor Logs categories and schemas](../azure-monitor/essentials/resource-logs-schema.md).
-
-## Azure Monitor Logs tables
-
-Azure Cosmos DB uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a list of Kusto tables Cosmos DB uses, see the [Azure Monitor Logs table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-cosmos-db) article.
-
-## See Also
--- See [Monitoring Azure Cosmos DB](monitor-cosmos-db.md) for a description of monitoring Azure Cosmos DB.-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
cosmos-db Monitor Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-cosmos-db.md
- Title: Monitor Azure Cosmos DB | Microsoft Docs
-description: Learn how to monitor the performance and availability of Azure Cosmos DB.
----- Previously updated : 05/03/2020---
-# Monitor Azure Cosmos DB
-
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Azure Cosmos databases and how you can use the features of Azure Monitor to analyze and alert on this data.
-
-You can monitor your data with client-side and server-side metrics. When using server-side metrics, you can monitor the data stored in Azure Cosmos DB with the following options:
-
-* **Monitor from Azure Cosmos DB portal:** You can monitor with the metrics available within the **Metrics** tab of the Azure Cosmos account. The metrics on this tab include throughput, storage, availability, latency, consistency, and system level metrics. By default, these metrics have a retention period of seven days. To learn more, see the [Monitoring data collected from Azure Cosmos DB](#monitoring-data) section of this article.
-
-* **Monitor with metrics in Azure Monitor:** You can monitor the metrics of your Azure Cosmos account and create dashboards from Azure Monitor. Azure Monitor collects Azure Cosmos DB metrics by default, so you don't need to explicitly configure anything. These metrics are collected with one-minute granularity; the granularity may vary based on the metric you choose. By default, these metrics have a retention period of 30 days. Most of the metrics that are available from the previous options are also available here. The dimension values for the metrics, such as the container name, are case-insensitive, so use case-insensitive comparison when doing string comparisons on these dimension values. To learn more, see the [Analyze metric data](#analyzing-metrics) section of this article.
-
-* **Monitor with diagnostic logs in Azure Monitor:** You can monitor the logs of your Azure Cosmos account and create dashboards from Azure Monitor. Data such as events and traces that occur at a second granularity are stored as logs. For example, if the throughput of a container changes or the properties of an Azure Cosmos account are changed, these events are captured within the logs. You can analyze these logs by running queries on the gathered data. To learn more, see the [Analyze log data](#analyzing-logs) section of this article.
-
-* **Monitor programmatically with SDKs:** You can monitor your Azure Cosmos account programmatically by using the .NET, Java, Python, Node.js SDKs, and the headers in REST API. To learn more, see the [Monitoring Azure Cosmos DB programmatically](#monitor-azure-cosmos-db-programmatically) section of this article.
-
-The following image shows different options available to monitor Azure Cosmos DB account through Azure portal:
--
-When using Azure Cosmos DB, on the client side you can collect details such as the request charge, activity ID, exception/stack trace information, HTTP status/sub-status code, and diagnostic string to debug any issue that might occur. This information is also required if you need to reach out to the Azure Cosmos DB support team.
-
-## Monitor overview
-
-The **Overview** page in the Azure portal for each Azure Cosmos DB account includes a brief view of the resource usage, such as total requests, requests that resulted in a specific HTTP status code, and hourly billing. This information is helpful; however, only a small amount of the monitoring data is available from this pane. Some of this data is collected automatically and is available for analysis as soon as you create the resource. You can enable other types of data collection with some configuration.
-
-## What is Azure Monitor?
-
-Azure Cosmos DB creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full stack monitoring service in Azure that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premises.
-
-If you're not already familiar with monitoring Azure services, start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md), which describes the following concepts:
-
-* What is Azure Monitor?
-* Costs associated with monitoring
-* Monitoring data collected in Azure
-* Configuring data collection
-* Standard tools in Azure for analyzing and alerting on monitoring data
-
-The following sections build on this article by describing the specific data gathered from Azure Cosmos DB and providing examples for configuring data collection and analyzing this data with Azure tools.
-
-## Cosmos DB insights
-
-Cosmos DB insights is a feature based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) and uses the same monitoring data collected for Azure Cosmos DB described in the sections below. Use Azure Monitor for a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience, and use the other features of Azure Monitor for detailed analysis and alerting. To learn more, see the [Explore Cosmos DB insights](cosmosdb-insights-overview.md) article.
-
-> [!NOTE]
-> When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
-
-## Monitoring data
-
-Azure Cosmos DB collects the same kinds of monitoring data as other Azure resources, which are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data). See [Azure Cosmos DB monitoring data reference](monitor-cosmos-db-reference.md) for a detailed reference of the logs and metrics created by Azure Cosmos DB.
-
-The **Overview** page in the Azure portal for each Azure Cosmos database includes a brief view of the database usage, including its request and hourly billing usage. This is useful information, but only a small amount of the monitoring data is available there. Some of this data is collected automatically and is available for analysis as soon as you create the database, and you can enable more data collection with some configuration.
--
-## Collection and routing
-
-Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
-
-Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
-
-See [Create diagnostic setting to collect platform logs and metrics in Azure](cosmosdb-monitor-resource-logs.md) for the detailed process for creating a diagnostic setting using the Azure portal and some diagnostic query examples. When you create a diagnostic setting, you specify which categories of logs to collect.
-
-The metrics and logs you can collect are discussed in the following sections.
-
-## Analyzing metrics
-
-Azure Cosmos DB provides a custom experience for working with metrics. You can analyze metrics for Azure Cosmos DB with metrics from other Azure services using Metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool. You can also check out how to monitor [server-side latency](monitor-server-side-latency.md), [request unit usage](monitor-request-unit-usage.md), and [normalized request unit usage](monitor-normalized-request-units.md) for your Azure Cosmos DB resources.
-
-For a list of the platform metrics collected for Azure Cosmos DB, see [Monitoring Azure Cosmos DB data reference metrics](monitor-cosmos-db-reference.md#metrics) article.
-
-All metrics for Azure Cosmos DB are in the namespace **Cosmos DB standard metrics**. You can use the following dimensions with these metrics when adding a filter to a chart:
-
-* CollectionName
-* DatabaseName
-* OperationType
-* Region
-* StatusCode
-
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
-
-### View operation level metrics for Azure Cosmos DB
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. Select **Monitor** from the left-hand navigation bar, and select **Metrics**.
-
- :::image type="content" source="./media/monitor-cosmos-db/monitor-metrics-blade.png" alt-text="Metrics pane in Azure Monitor":::
-
-1. From the **Metrics** pane > **Select a resource** > choose the required **subscription**, and **resource group**. For the **Resource type**, select **Azure Cosmos DB accounts**, choose one of your existing Azure Cosmos accounts, and select **Apply**.
-
- :::image type="content" source="./media/monitor-cosmos-db/select-cosmosdb-account.png" alt-text="Choose a Cosmos DB account to view metrics":::
-
-1. Next you can select a metric from the list of available metrics. You can select metrics specific to request units, storage, latency, availability, Cassandra, and others. To learn in detail about all the available metrics in this list, see the [Metrics by category](monitor-cosmos-db-reference.md) article. In this example, let's select **Request units** and **Avg** as the aggregation value.
-
- In addition to these details, you can also select the **Time range** and **Time granularity** of the metrics. At max, you can view metrics for the past 30 days. After you apply the filter, a chart is displayed based on your filter. You can see the average number of request units consumed per minute for the selected period.
-
- :::image type="content" source="./media/monitor-cosmos-db/metric-types.png" alt-text="Choose a metric from the Azure portal":::
-
-### Add filters to metrics
-
-You can also filter metrics and the chart displayed by a specific **CollectionName**, **DatabaseName**, **OperationType**, **Region**, and **StatusCode**. To filter the metrics, select **Add filter** and choose the required property such as **OperationType** and select a value such as **Query**. The graph then displays the request units consumed for the query operation for the selected period. Operations executed via stored procedures aren't logged, so they aren't available under the **OperationType** dimension.
--
-You can group metrics by using the **Apply splitting** option. For example, you can group the request units per operation type and view the graph for all the operations at once as shown in the following image:
--
-## Analyzing logs
-
-Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). For a list of the types of resource logs collected for Azure Cosmos DB, see [Monitoring Azure Cosmos DB data reference](monitor-cosmos-db-reference.md#resource-logs).
-
-The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform log that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
-
-Azure Cosmos DB stores data in the following tables.
-
-| Table | Description |
-|:|:|
-| AzureDiagnostics | Common table used by multiple services to store Resource logs. Resource logs from Azure Cosmos DB can be identified with `MICROSOFT.DOCUMENTDB`. |
-| AzureActivity | Common table that stores all records from the Activity log. |
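-
-If you route the Activity log to a Log Analytics workspace, a quick sketch like the following can show which Azure Cosmos DB control-plane operations it has recorded. This assumes the standard **AzureActivity** columns `ResourceProviderValue`, `OperationNameValue`, and `ActivityStatusValue`; adjust the names if your workspace differs.
-
-```kusto
-// Sketch: count Activity log records for the Azure Cosmos DB resource provider.
-// Column names are assumed to be the standard AzureActivity columns.
-AzureActivity
-| where ResourceProviderValue =~ "Microsoft.DocumentDB"
-| summarize count() by OperationNameValue, ActivityStatusValue
-```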
-
-### Sample Kusto queries
-
-Prior to using Log Analytics to issue Kusto queries, you must [enable diagnostic logs for control plane operations](./audit-control-plane-logs.md#enable-diagnostic-logs-for-control-plane-operations). When enabling diagnostic logs, you will select between storing your data in a single [AzureDiagnostics table (legacy)](../azure-monitor/essentials/resource-logs.md#azure-diagnostics-mode) or [resource-specific tables](../azure-monitor/essentials/resource-logs.md#resource-specific).
-
-When you select **Logs** from the Azure Cosmos DB menu, Log Analytics is opened with the query scope set to the current Azure Cosmos DB account. Log queries will only include data from that resource.
-
-> [!IMPORTANT]
-> If you want to run a query that includes data from other accounts or data from other Azure services, select **Logs** from the **Azure Monitor** menu. For more information, see [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md).
-
-Here are some queries that you can enter into the **Log search** search bar to help you monitor your Azure Cosmos resources. The exact text of the queries will depend on the [collection mode](../azure-monitor/essentials/resource-logs.md#select-the-collection-mode) you selected when you enabled diagnostics logs.
-
-#### [AzureDiagnostics table (legacy)](#tab/azure-diagnostics)
-
-* To query for all control-plane logs from Azure Cosmos DB:
-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
- | where Category=="ControlPlaneRequests"
- ```
-
-* To query for all data-plane logs from Azure Cosmos DB:
-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
- | where Category=="DataPlaneRequests"
- ```
-
-* To query for a filtered list of data-plane logs, specific to a single resource:
-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
- | where Category=="DataPlaneRequests"
- | where Resource=="<account-name>"
- ```
-
- > [!IMPORTANT]
- > In the **AzureDiagnostics** table, many fields are case-sensitive and uppercase including, but not limited to; *ResourceId*, *ResourceGroup*, *ResourceProvider*, and *Resource*.
-
-* To get a count of data-plane logs, grouped by resource:
-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
- | where Category=="DataPlaneRequests"
- | summarize count() by Resource
- ```
-
-* To generate a chart for data-plane logs, grouped by the type of operation:
-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
- | where Category=="DataPlaneRequests"
- | summarize count() by OperationName
- | render columnchart
- ```
-
-#### [Resource-specific table](#tab/resource-specific-diagnostics)
-
-* To query for all control-plane logs from Azure Cosmos DB:
-
- ```kusto
- CDBControlPlaneRequests
- ```
-
-* To query for all data-plane logs from Azure Cosmos DB:
-
- ```kusto
- CDBDataPlaneRequests
- ```
-
-* To query for a filtered list of data-plane logs, specific to a single resource:
-
- ```kusto
- CDBDataPlaneRequests
- | where AccountName=="<account-name>"
- ```
-
-* To get a count of data-plane logs, grouped by resource:
-
- ```kusto
- CDBDataPlaneRequests
- | summarize count() by AccountName
- ```
-
-* To generate a chart for data-plane logs, grouped by the type of operation:
-
- ```kusto
- CDBDataPlaneRequests
- | summarize count() by OperationName
- | render piechart
- ```
---
-These examples are just a small sampling of the rich queries that can be performed in Azure Monitor using the Kusto Query Language. For more information, see [samples for Kusto queries](/azure/data-explorer/kusto/query/samples?pivots=azuremonitor).
-
-## Alerts
-
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Each type of alert has benefits and drawbacks.
-
-For example, the following table lists a few alert rules for your resources. You can find a detailed list of alert rules from the Azure portal. To learn more, see the [how to configure alerts](create-alerts.md) article.
-
-| Alert type | Condition | Description |
-|:|:|:|
-|Rate limiting on request units (metric alert) |Dimension name: StatusCode, Operator: Equals, Dimension values: 429 | Alerts if the container or a database has exceeded the provisioned throughput limit. |
-|Region failed over |Operator: Greater than, Aggregation type: Count, Threshold value: 1 | When a single region is failed over. This alert is helpful if you didn't enable service-managed failover. |
-| Rotate keys(activity log alert)| Event level: Informational, Status: started| Alerts when the account keys are rotated. You can update your application with the new keys. |
-
-## Monitor Azure Cosmos DB programmatically
-
-The account level metrics available in the portal, such as account storage usage and total requests, aren't available via the SQL APIs. However, you can retrieve usage data at the collection level by using the SQL APIs. To retrieve collection level data, do the following:
-
-* To use the REST API, [perform a GET on the collection](/rest/api/cosmos-db/get-a-collection). The quota and usage information for the collection is returned in the x-ms-resource-quota and x-ms-resource-usage headers in the response.
-
-* To use the .NET SDK, use the [DocumentClient.ReadDocumentCollectionAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readdocumentcollectionasync) method, which returns a [ResourceResponse](/dotnet/api/microsoft.azure.documents.client.resourceresponse-1) that contains many usage properties such as **CollectionSizeUsage**, **DatabaseUsage**, **DocumentUsage**, and more.
-
-To access more metrics, use the [Azure Monitor SDK](https://www.nuget.org/packages/Microsoft.Azure.Insights). Available metric definitions can be retrieved by calling:
-
-```http
-https://management.azure.com/subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroup}/providers/Microsoft.DocumentDb/databaseAccounts/{DocumentDBAccountName}/providers/microsoft.insights/metricDefinitions?api-version=2018-01-01
-```
-
-To retrieve individual metrics, use the following format:
-
-```http
-https://management.azure.com/subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroup}/providers/Microsoft.DocumentDb/databaseAccounts/{DocumentDBAccountName}/providers/microsoft.insights/metrics?timespan={StartTime}/{EndTime}&interval={AggregationInterval}&metricnames={MetricName}&aggregation={AggregationType}&$filter={Filter}&api-version=2018-01-01
-```
-
-To learn more, see the [Azure monitoring REST API](../azure-monitor/essentials/rest-api-walkthrough.md) article.
-
-## Next steps
-
-* See [Azure Cosmos DB monitoring data reference](monitor-cosmos-db-reference.md) for a reference of the logs and metrics created by Azure Cosmos DB.
-* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
cosmos-db Monitor Logs Basic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-logs-basic-queries.md
+
+ Title: Troubleshoot issues with diagnostics queries
+
+description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB
+++++ Last updated : 05/12/2021+++
+# Troubleshoot issues with diagnostics queries
+
+In this article, we'll cover how to write simple queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostic logs sent to the **AzureDiagnostics (legacy)** and **Resource-specific (preview)** tables.
+
+For Azure Diagnostics tables, all data is written into a single table, and you'll need to specify which category you'd like to query.
+
+For resource-specific tables, data is written into individual tables for each category of the resource (not available for table API). We recommend this mode since it makes it much easier to work with the data, provides better discoverability of the schemas, and improves performance across both ingestion latency and query times.
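+
+Before running the category-specific queries below, it can help to confirm which Azure Cosmos DB log categories are actually flowing into the legacy table. The following minimal sketch only uses columns shown elsewhere in this article:
+
+```kusto
+// Sketch: list the Azure Cosmos DB diagnostic log categories present in the legacy table.
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.DOCUMENTDB"
+| summarize count() by Category
+```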
+
+## <a id="azure-diagnostics-queries"></a> AzureDiagnostics Queries
+
+- How to query for the operations that are taking longer than 3 milliseconds to run:
+
+ ```Kusto
+ AzureDiagnostics
+ | where toint(duration_s) > 3 and ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests"
+ | summarize count() by clientIpAddress_s, TimeGenerated
+ ```
+
+- How to query for the user agent that is running the operations:
+
+ ```Kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests"
+ | summarize count() by OperationName, userAgent_s
+ ```
+
+- How to query for the long-running operations:
+
+ ```Kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests"
+ | project TimeGenerated , duration_s
+ | summarize count() by bin(TimeGenerated, 5s)
+ | render timechart
+ ```
+
+- How to get partition key statistics to evaluate skew across top 3 partitions for a database account:
+
+ ```Kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="PartitionKeyStatistics"
+ | project SubscriptionId, regionName_s, databaseName_s, collectionName_s, partitionKey_s, sizeKb_d, ResourceId
+ ```
+
+- How to get the request charges for expensive queries?
+
+ ```Kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests" and todouble(requestCharge_s) > 10.0
+ | project activityId_g, requestCharge_s
+ | join kind= inner (
+ AzureDiagnostics
+ | where ResourceProvider =="MICROSOFT.DOCUMENTDB" and Category == "QueryRuntimeStatistics"
+ | project activityId_g, querytext_s
+ ) on $left.activityId_g == $right.activityId_g
+ | order by requestCharge_s desc
+ | limit 100
+ ```
+
+- How to find which operations are consuming the most RU/s?
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests"
+ | where TimeGenerated >= ago(2h)
+ | summarize max(responseLength_s), max(requestLength_s), max(requestCharge_s), count = count() by OperationName, requestResourceType_s, userAgent_s, collectionRid_s, bin(TimeGenerated, 1h)
+ ```
+
+- How to get all queries that are consuming more than 100 RU/s joined with data from **DataPlaneRequests** and **QueryRunTimeStatistics**.
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests" and todouble(requestCharge_s) > 100.0
+ | project activityId_g, requestCharge_s
+ | join kind= inner (
+ AzureDiagnostics
+ | where ResourceProvider =="MICROSOFT.DOCUMENTDB" and Category == "QueryRuntimeStatistics"
+ | project activityId_g, querytext_s
+ ) on $left.activityId_g == $right.activityId_g
+ | order by requestCharge_s desc
+ | limit 100
+ ```
+
+- How to get the request charges and the execution duration of a query?
+
+ ```kusto
+ AzureDiagnostics
+ | where TimeGenerated >= ago(24hr)
+ | where Category == "QueryRuntimeStatistics"
+ | join (
+ AzureDiagnostics
+ | where TimeGenerated >= ago(24hr)
+ | where Category == "DataPlaneRequests"
+ ) on $left.activityId_g == $right.activityId_g
+ | project databasename_s, collectionname_s, OperationName1 , querytext_s,requestCharge_s1, duration_s1, bin(TimeGenerated, 1min)
+ ```
+
+- How to get the distribution for different operations?
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests"
+ | where TimeGenerated >= ago(2h)
+ | summarize count = count() by OperationName, requestResourceType_s, bin(TimeGenerated, 1h)
+ ```
+
+- What is the maximum throughput that a partition has consumed?
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests"
+ | where TimeGenerated >= ago(2h)
+ | summarize max(requestCharge_s) by bin(TimeGenerated, 1h), partitionId_g
+ ```
+
+- How to get the information about the partition keys RU/s consumption per second?
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider == "MICROSOFT.DOCUMENTDB" and Category == "PartitionKeyRUConsumption"
+ | summarize total = sum(todouble(requestCharge_s)) by databaseName_s, collectionName_s, partitionKey_s, TimeGenerated
+ | order by TimeGenerated asc
+ ```
+
+- How to get the request charge for a specific partition key
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider == "MICROSOFT.DOCUMENTDB" and Category == "PartitionKeyRUConsumption"
+ | where parse_json(partitionKey_s)[0] == "2"
+ ```
+
+- How to get the top partition keys with most RU/s consumed in a specific period?
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider == "MICROSOFT.DOCUMENTDB" and Category == "PartitionKeyRUConsumption"
+ | where TimeGenerated >= datetime("11/26/2019, 11:20:00.000 PM") and TimeGenerated <= datetime("11/26/2019, 11:30:00.000 PM")
+ | summarize total = sum(todouble(requestCharge_s)) by databaseName_s, collectionName_s, partitionKey_s
+ | order by total desc
+ ```
+
+- How to get the logs for the partition keys whose storage size is greater than 8 GB?
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="PartitionKeyStatistics"
+    | where todouble(sizeKb_d) > 8000000
+ ```
+
+- How to get the P50 and P99 latency (duration), request charge, and response length for operations?
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="DataPlaneRequests"
+ | where TimeGenerated >= ago(2d)
+ | summarize percentile(todouble(responseLength_s), 50), percentile(todouble(responseLength_s), 99), max(responseLength_s), percentile(todouble(requestCharge_s), 50), percentile(todouble(requestCharge_s), 99), max(requestCharge_s), percentile(todouble(duration_s), 50), percentile(todouble(duration_s), 99), max(duration_s), count() by OperationName, requestResourceType_s, userAgent_s, collectionRid_s, bin(TimeGenerated, 1h)
+ ```
+
+- How to get ControlPlane logs?
+
+ Remember to switch on the flag as described in the [Disable key-based metadata write access](audit-control-plane-logs.md#disable-key-based-metadata-write-access) article, and execute the operations by using Azure PowerShell, the Azure CLI, or Azure Resource Manager.
+
+ ```kusto
+ AzureDiagnostics
+ | where Category =="ControlPlaneRequests"
+ | summarize by OperationName
+ ```
++
+## <a id="resource-specific-queries"></a> Resource-specific Queries
+
+- How to query for the operations that are taking longer than 3 milliseconds to run:
+
+ ```kusto
+ CDBDataPlaneRequests
+ | where toint(DurationMs) > 3
+ | summarize count() by ClientIpAddress, TimeGenerated
+ ```
+
+- How to query for the user agent that is running the operations:
+
+ ```kusto
+ CDBDataPlaneRequests
+ | summarize count() by OperationName, UserAgent
+ ```
+
+- How to query for the long-running operations:
+
+ ```kusto
+ CDBDataPlaneRequests
+ | project TimeGenerated , DurationMs
+ | summarize count() by bin(TimeGenerated, 5s)
+ | render timechart
+ ```
+
+- How to get partition key statistics to evaluate skew across top 3 partitions for a database account:
+
+ ```kusto
+ CDBPartitionKeyStatistics
+ | project RegionName, DatabaseName, CollectionName, PartitionKey, SizeKb
+ ```
+
+- How to get the request charges for expensive queries?
+
+ ```kusto
+ CDBDataPlaneRequests
+ | where todouble(RequestCharge) > 10.0
+ | project ActivityId, RequestCharge
+ | join kind= inner (
+ CDBQueryRuntimeStatistics
+ | project ActivityId, QueryText
+ ) on $left.ActivityId == $right.ActivityId
+ | order by RequestCharge desc
+ | limit 100
+ ```
+
+- How to find which operations are consuming the most RU/s?
+
+ ```kusto
+ CDBDataPlaneRequests
+ | where TimeGenerated >= ago(2h)
+ | summarize max(ResponseLength), max(RequestLength), max(RequestCharge), count = count() by OperationName, RequestResourceType, UserAgent, CollectionName, bin(TimeGenerated, 1h)
+ ```
+
+- How to get all queries that are consuming more than 100 RU/s joined with data from **DataPlaneRequests** and **QueryRunTimeStatistics**.
+
+ ```kusto
+ CDBDataPlaneRequests
+ | where todouble(RequestCharge) > 100.0
+ | project ActivityId, RequestCharge
+ | join kind= inner (
+ CDBQueryRuntimeStatistics
+ | project ActivityId, QueryText
+ ) on $left.ActivityId == $right.ActivityId
+ | order by RequestCharge desc
+ | limit 100
+ ```
+
+- How to get the request charges and the execution duration of a query?
+
+ ```kusto
+ CDBQueryRuntimeStatistics
+ | join kind= inner (
+ CDBDataPlaneRequests
+ ) on $left.ActivityId == $right.ActivityId
+ | project DatabaseName, CollectionName, OperationName , QueryText, RequestCharge, DurationMs, bin(TimeGenerated, 1min)
+ ```
+
+- How to get the distribution for different operations?
+
+ ```kusto
+ CDBDataPlaneRequests
+ | where TimeGenerated >= ago(2h)
+ | summarize count = count() by OperationName, RequestResourceType, bin(TimeGenerated, 1h)
+ ```
+
+- What is the maximum throughput that a partition has consumed?
+
+ ```kusto
+ CDBDataPlaneRequests
+ | where TimeGenerated >= ago(2h)
+ | summarize max(RequestCharge) by bin(TimeGenerated, 1h), PartitionId
+ ```
+
+- How to get the information about the partition keys RU/s consumption per second?
+
+ ```kusto
+ CDBPartitionKeyRUConsumption
+ | summarize total = sum(todouble(RequestCharge)) by DatabaseName, CollectionName, PartitionKey, TimeGenerated
+ | order by TimeGenerated asc
+ ```
+
+- How to get the request charge for a specific partition key?
+
+ ```kusto
+ CDBPartitionKeyRUConsumption
+ | where parse_json(PartitionKey)[0] == "2"
+ ```
+
+- How to get the top partition keys with most RU/s consumed in a specific period?
+
+ ```kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= datetime("02/12/2021, 11:20:00.000 PM") and TimeGenerated <= datetime("05/12/2021, 11:30:00.000 PM")
+ | summarize total = sum(todouble(RequestCharge)) by DatabaseName, CollectionName, PartitionKey
+ | order by total desc
+ ```
+
+- How to get the logs for the partition keys whose storage size is greater than 8 GB?
+
+ ```kusto
+ CDBPartitionKeyStatistics
+    | where todouble(SizeKb) > 8000000
+ ```
+
+- How to get the P50 and P99 latency (duration), request charge, and response length for operations?
+
+ ```kusto
+ CDBDataPlaneRequests
+ | where TimeGenerated >= ago(2d)
+ | summarize percentile(todouble(ResponseLength), 50), percentile(todouble(ResponseLength), 99), max(ResponseLength), percentile(todouble(RequestCharge), 50), percentile(todouble(RequestCharge), 99), max(RequestCharge), percentile(todouble(DurationMs), 50), percentile(todouble(DurationMs), 99), max(DurationMs),count() by OperationName, RequestResourceType, UserAgent, CollectionName, bin(TimeGenerated, 1h)
+ ```
+
+- How to get ControlPlane logs?
+
+ Remember to switch on the flag as described in the [Disable key-based metadata write access](audit-control-plane-logs.md#disable-key-based-metadata-write-access) article, and execute the operations by using Azure PowerShell, the Azure CLI, or Azure Resource Manager.
+
+ ```kusto
+ CDBControlPlaneRequests
+ | summarize by OperationName
+ ```
+
+## Next steps
+* For more information on how to create diagnostic settings for Azure Cosmos DB, see the [Creating diagnostics settings](monitor-resource-logs.md) article.
+
+* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see the [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
cosmos-db Monitor Normalized Request Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-normalized-request-units.md
Title: Monitor normalized RU/s for an Azure Cosmos container or an account
+ Title: Monitor normalized RU/s for an Azure Cosmos DB container or an account
description: Learn how to monitor the normalized request unit usage of an operation in Azure Cosmos DB. Owners of an Azure Cosmos DB account can understand which operations are consuming more request units. + Last updated 03/03/2022-
-# How to monitor normalized RU/s for an Azure Cosmos container or an account
+# How to monitor normalized RU/s for an Azure Cosmos DB container or an account
Azure Monitor for Azure Cosmos DB provides a metrics view to monitor your account and create dashboards. The Azure Cosmos DB metrics are collected by default; this feature doesn't require you to enable or configure anything explicitly.
The **Normalized RU Consumption** metric is a metric between 0% to 100% that is
For example, suppose you have a container where you set [autoscale max throughput](provision-throughput-autoscale.md#how-autoscale-provisioned-throughput-works) of 20,000 RU/s (scales between 2000 - 20,000 RU/s) and you have two partition key ranges (physical partitions) *P1* and *P2*. Because Azure Cosmos DB distributes the provisioned throughput equally across all the partition key ranges, *P1* and *P2* each can scale between 1000 - 10,000 RU/s. Suppose in a 1 minute interval, in a given second, *P1* consumed 6000 request units and *P2* consumed 8000 request units. The normalized RU consumption of P1 is 60% and 80% for *P2*. The overall normalized RU consumption of the entire container is MAX(60%, 80%) = 80%.
-If you're interested in seeing the request unit consumption at a per second interval, along with operation type, you can use the opt-in feature [Diagnostic Logs](cosmosdb-monitor-resource-logs.md) and query the **PartitionKeyRUConsumption** table. To get a high-level overview of the operations and status code your application is performing on the Azure Cosmos DB resource, you can use the built-in Azure Monitor **Total Requests** (SQL API), **Mongo Requests**, **Gremlin Requests**, or **Cassandra Requests** metric. Later you can filter on these requests by the 429 status code and split them by **Operation Type**.
+If you're interested in seeing the request unit consumption at a per second interval, along with operation type, you can use the opt-in feature [Diagnostic Logs](monitor-resource-logs.md) and query the **PartitionKeyRUConsumption** table. To get a high-level overview of the operations and status code your application is performing on the Azure Cosmos DB resource, you can use the built-in Azure Monitor **Total Requests** (API for NoSQL), **Mongo Requests**, **Gremlin Requests**, or **Cassandra Requests** metric. Later you can filter on these requests by the 429 status code and split them by **Operation Type**.
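+
+If you've enabled resource-specific diagnostic tables, a minimal sketch of that 429 breakdown might look like the following. It assumes the **CDBDataPlaneRequests** table exposes `StatusCode` and `OperationName` columns; adjust if your schema differs.
+
+```kusto
+// Sketch: share of rate-limited (429) requests per operation type over the last hour.
+// StatusCode and OperationName are assumed columns of CDBDataPlaneRequests.
+CDBDataPlaneRequests
+| where TimeGenerated >= ago(1h)
+| summarize TotalRequests = count(), RateLimited = countif(toint(StatusCode) == 429) by OperationName
+| extend RateLimitedPercent = round(100.0 * RateLimited / TotalRequests, 2)
+| order by RateLimitedPercent desc
+```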
## What to expect and do when normalized RU/s is higher
This doesn't necessarily mean there's a problem with your resource. By default,
In general, for a production workload, if you see between 1-5% of requests with 429s, and your end to end latency is acceptable, this is a healthy sign that the RU/s are being fully utilized. In this case, the normalized RU consumption metric reaching 100% only means that in a given second, at least one partition key range used all its provisioned throughput. This is acceptable because the overall rate of 429s is still low. No further action is required.
-To determine what percent of your requests to your database or container resulted in 429s, from your Azure Cosmos DB account blade, navigate to **Insights** > **Requests** > **Total Requests by Status Code**. Filter to a specific database and container. For Gremlin API, use the **Gremlin Requests** metric.
+To determine what percent of your requests to your database or container resulted in 429s, from your Azure Cosmos DB account blade, navigate to **Insights** > **Requests** > **Total Requests by Status Code**. Filter to a specific database and container. For API for Gremlin, use the **Gremlin Requests** metric.
-If the normalized RU consumption metric is consistently 100% across multiple partition key ranges and the rate of 429s is greater than 5%, it's recommended to increase the throughput. You can find out which operations are heavy and what their peak usage is by using the [Azure monitor metrics and Azure monitor diagnostic logs](sql/troubleshoot-request-rate-too-large.md#step-3-determine-what-requests-are-returning-429s). Follow the [best practices for scaling provisioned throughput (RU/s)](scaling-provisioned-throughput-best-practices.md).
+If the normalized RU consumption metric is consistently 100% across multiple partition key ranges and the rate of 429s is greater than 5%, it's recommended to increase the throughput. You can find out which operations are heavy and what their peak usage is by using the [Azure monitor metrics and Azure monitor diagnostic logs](nosql/troubleshoot-request-rate-too-large.md#step-3-determine-what-requests-are-returning-429-responses). Follow the [best practices for scaling provisioned throughput (RU/s)](scaling-provisioned-throughput-best-practices.md).
It isn't always the case that you'll see a 429 rate limiting error just because the normalized RU has reached 100%. That's because the normalized RU is a single value that represents the max usage over all partition key ranges. One partition key range may be busy but the other partition key ranges can serve requests without issues. For example, a single operation such as a stored procedure that consumes all the RU/s on a partition key range will lead to a short spike in the normalized RU consumption metric. In such cases, there won't be any immediate rate limiting errors if the overall request rate is low or requests are made to other partitions on different partition key ranges.
-Learn more about how to [interpret and debug 429 rate limiting errors](sql/troubleshoot-request-rate-too-large.md).
+Learn more about how to [interpret and debug 429 rate limiting errors](nosql/troubleshoot-request-rate-too-large.md).
## How to monitor for hot partitions The normalized RU consumption metric can be used to monitor if your workload has a hot partition. A hot partition arises when one or a few logical partition keys consume a disproportionate amount of the total RU/s due to higher request volume. This can be caused by a partition key design that doesn't evenly distribute requests. It results in many requests being directed to a small subset of logical partitions (which implies partition key ranges) that become "hot." Because all data for a logical partition resides on one partition key range and total RU/s is evenly distributed among all the partition key ranges, a hot partition can lead to 429s and inefficient use of throughput.
To verify if there's a hot partition, navigate to **Insights** > **Throughput**
Each PartitionKeyRangeId maps to one physical partition. If there's one PartitionKeyRangeId that has significantly higher normalized RU consumption than others (for example, one is consistently at 100%, but others are at 30% or less), this can be a sign of a hot partition.
-To identify the logical partitions that are consuming the most RU/s, as well as recommended solutions, see the article [Diagnose and troubleshoot Azure Cosmos DB request rate too large (429) exceptions](sql/troubleshoot-request-rate-too-large.md#how-to-identify-the-hot-partition).
+To identify the logical partitions that are consuming the most RU/s, as well as recommended solutions, see the article [Diagnose and troubleshoot Azure Cosmos DB request rate too large (429) exceptions](nosql/troubleshoot-request-rate-too-large.md#how-to-identify-the-hot-partition).
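+
+If you've opted in to diagnostic logs, a rough way to compare physical partitions from the logs is sketched below. This assumes the **CDBPartitionKeyRUConsumption** table exposes a `PartitionKeyRangeId` column alongside the `RequestCharge` column used elsewhere in this article; treat the column name as an assumption and adjust to your schema.
+
+```kusto
+// Sketch: RUs consumed per partition key range (physical partition), per minute.
+// PartitionKeyRangeId is assumed to be present in CDBPartitionKeyRUConsumption.
+CDBPartitionKeyRUConsumption
+| where TimeGenerated >= ago(1h)
+| summarize TotalRUs = sum(todouble(RequestCharge)) by PartitionKeyRangeId, bin(TimeGenerated, 1m)
+| order by TimeGenerated asc, TotalRUs desc
+```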
## Normalized RU Consumption and autoscale
In general, for a production workload using autoscale, if you see between 1-5% o
:::image type="content" source="./media/monitor-normalized-request-units/monitor-metrics-blade.png" alt-text="Metrics pane in Azure Monitor" border="true":::
-3. From the **Metrics** pane > **Select a resource** > choose the required **subscription**, and **resource group**. For the **Resource type**, select **Azure Cosmos DB accounts**, choose one of your existing Azure Cosmos accounts, and select **Apply**.
+3. From the **Metrics** pane > **Select a resource** > choose the required **subscription**, and **resource group**. For the **Resource type**, select **Azure Cosmos DB accounts**, choose one of your existing Azure Cosmos DB accounts, and select **Apply**.
:::image type="content" source="./media/monitor-account-key-updates/select-account-scope.png" alt-text="Select the account scope to view metrics" border="true":::
-4. Next you can select a metric from the list of available metrics. You can select metrics specific to request units, storage, latency, availability, Cassandra, and others. To learn in detail about all the available metrics in this list, see the [Metrics by category](monitor-cosmos-db-reference.md) article. In this example, letΓÇÖs select **Normalized RU Consumption** metric and **Max** as the aggregation value.
+4. Next you can select a metric from the list of available metrics. You can select metrics specific to request units, storage, latency, availability, Cassandra, and others. To learn in detail about all the available metrics in this list, see the [Metrics by category](monitor-reference.md) article. In this example, let's select the **Normalized RU Consumption** metric and **Max** as the aggregation value.
In addition to these details, you can also select the **Time range** and **Time granularity** of the metrics. At max, you can view metrics for the past 30 days. After you apply the filter, a chart is displayed based on your filter.
The normalized request unit consumption metric for each container is displayed a
## Next steps
-* Monitor Azure Cosmos DB data by using [diagnostic settings](cosmosdb-monitor-resource-logs.md) in Azure.
+* Monitor Azure Cosmos DB data by using [diagnostic settings](monitor-resource-logs.md) in Azure.
* [Audit Azure Cosmos DB control plane operations](audit-control-plane-logs.md)
-* [Diagnose and troubleshoot Azure Cosmos DB request rate too large (429) exceptions](sql/troubleshoot-request-rate-too-large.md)
+* [Diagnose and troubleshoot Azure Cosmos DB request rate too large (429) exceptions](nosql/troubleshoot-request-rate-too-large.md)
cosmos-db Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-reference.md
+
+ Title: Monitoring Azure Cosmos DB data reference | Microsoft Docs
+description: Important reference material needed when you monitor logs and metrics in Azure Cosmos DB.
++++ Last updated : 12/07/2020+++
+# Monitoring Azure Cosmos DB data reference
++
+This article provides a reference of log and metric data collected to analyze the performance and availability of Azure Cosmos DB. See the [Monitor Azure Cosmos DB](monitor.md) article for details on collecting and analyzing monitoring data for Azure Cosmos DB.
+
+## Metrics
+
+All the metrics corresponding to Azure Cosmos DB are stored in the namespace **Azure Cosmos DB standard metrics**. For a list of all Azure Monitor supported metrics (including Azure Cosmos DB), see [Azure Monitor supported metrics](../azure-monitor/essentials/metrics-supported.md). This section lists all the automatically collected platform metrics for Azure Cosmos DB.
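+
+If you also route platform metrics to a Log Analytics workspace through a diagnostic setting, a quick sketch like the following can show which Azure Cosmos DB metrics are arriving. This assumes the standard **AzureMetrics** table and its `MetricName` and `ResourceProvider` columns.
+
+```kusto
+// Sketch: metrics received in the AzureMetrics table for the Azure Cosmos DB resource provider.
+AzureMetrics
+| where TimeGenerated >= ago(1d)
+| where ResourceProvider == "MICROSOFT.DOCUMENTDB"
+| summarize count() by MetricName
+```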
+
+### Request metrics
+
+|Metric (Metric Display Name)|Unit (Aggregation Type) |Description|Dimensions| Time granularities| Legacy metric mapping | Usage |
+||||| | | |
+| TotalRequests (Total Requests) | Count (Count) | Number of requests made| DatabaseName, CollectionName, Region, StatusCode| All | TotalRequests, Http 2xx, Http 3xx, Http 400, Http 401, Internal Server error, Service Unavailable, Throttled Requests, Average Requests per Second | Used to monitor requests per status code, container at a minute granularity. To get average requests per second, use Count aggregation at minute and divide by 60. |
+| MetadataRequests (Metadata Requests) |Count (Count) | Count of metadata requests. Azure Cosmos DB maintains a system metadata container for each account that allows you to enumerate collections, databases, and so on, and their configurations, free of charge. | DatabaseName, CollectionName, Region, StatusCode| All| |Used to monitor throttles due to metadata requests.|
+| MongoRequests (Mongo Requests) | Count (Count) | Number of Mongo Requests Made | DatabaseName, CollectionName, Region, CommandName, ErrorCode| All |Mongo Query Request Rate, Mongo Update Request Rate, Mongo Delete Request Rate, Mongo Insert Request Rate, Mongo Count Request Rate| Used to monitor Mongo request errors, usages per command type. |
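+
+As the table notes, the per-minute request count can be turned into an approximate requests-per-second figure by dividing by 60. If you also send **DataPlaneRequests** logs to a workspace, a minimal sketch of the same calculation from the legacy table is:
+
+```kusto
+// Sketch: approximate average requests per second, derived from a per-minute count.
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.DOCUMENTDB" and Category == "DataPlaneRequests"
+| summarize RequestsPerMinute = count() by bin(TimeGenerated, 1m)
+| extend AvgRequestsPerSecond = RequestsPerMinute / 60.0
+```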
+
+### Request Unit metrics
+
+|Metric (Metric Display Name)|Unit (Aggregation Type)|Description|Dimensions| Time granularities| Legacy metric mapping | Usage |
+||||| | | |
+| MongoRequestCharge (Mongo Request Charge) | Count (Total) |Mongo Request Units Consumed| DatabaseName, CollectionName, Region, CommandName, ErrorCode| All |Mongo Query Request Charge, Mongo Update Request Charge, Mongo Delete Request Charge, Mongo Insert Request Charge, Mongo Count Request Charge| Used to monitor Mongo resource RUs in a minute.|
+| TotalRequestUnits (Total Request Units)| Count (Total) | Request Units consumed| DatabaseName, CollectionName, Region, StatusCode |All| TotalRequestUnits| Used to monitor Total RU usage at a minute granularity. To get average RU consumed per second, use Sum aggregation at minute interval/level and divide by 60.|
+| ProvisionedThroughput (Provisioned Throughput)| Count (Maximum) |Provisioned throughput at container granularity| DatabaseName, ContainerName| 5M| | Used to monitor provisioned throughput per container.|
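+
+Similarly, the per-minute RU total can be divided by 60 to approximate RU/s. A sketch using the resource-specific **CDBDataPlaneRequests** table (with the `RequestCharge` column used elsewhere in this documentation) might look like this:
+
+```kusto
+// Sketch: approximate average RU/s, derived from the summed request charge per minute.
+CDBDataPlaneRequests
+| where TimeGenerated >= ago(1h)
+| summarize RUsPerMinute = sum(todouble(RequestCharge)) by bin(TimeGenerated, 1m)
+| extend AvgRUsPerSecond = RUsPerMinute / 60.0
+```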
+
+### Storage metrics
+
+|Metric (Metric Display Name)|Unit (Aggregation Type)|Description|Dimensions| Time granularities| Legacy metric mapping | Usage |
+||||| | | |
+| AvailableStorage (Available Storage) |Bytes (Total) | Total available storage reported at 5-minutes granularity per region| DatabaseName, CollectionName, Region| 5M| Available Storage| Used to monitor available storage capacity (applicable only for fixed storage collections) Minimum granularity should be 5 minutes.|
+| DataUsage (Data Usage) |Bytes (Total) |Total data usage reported at 5-minutes granularity per region| DatabaseName, CollectionName, Region| 5M |Data size | Used to monitor total data usage at container and region, minimum granularity should be 5 minutes.|
+| IndexUsage (Index Usage) | Bytes (Total) |Total Index usage reported at 5-minutes granularity per region| DatabaseName, CollectionName, Region| 5M| Index Size| Used to monitor total index usage at container and region, minimum granularity should be 5 minutes. |
+| DocumentQuota (Document Quota) | Bytes (Total) | Total storage quota reported at 5-minutes granularity per region.| DatabaseName, CollectionName, Region| 5M |Storage Capacity| Used to monitor total quota at container and region, minimum granularity should be 5 minutes.|
+| DocumentCount (Document Count) | Count (Total) |Total document count reported at 5-minutes granularity per region| DatabaseName, CollectionName, Region| 5M |Document Count|Used to monitor document count at container and region, minimum granularity should be 5 minutes.|
+
+### Latency metrics
+
+|Metric (Metric Display Name)|Unit (Aggregation Type)|Description|Dimensions| Time granularities| Usage |
+||||| | |
+| ReplicationLatency (Replication Latency)| MilliSeconds (Minimum, Maximum, Average) | P99 Replication Latency across source and target regions for geo-enabled account| SourceRegion, TargetRegion| All | Used to monitor P99 replication latency between any two regions for a geo-replicated account. |
+| Server Side Latency| MilliSeconds (Average) | Time taken by the server to process the request. | CollectionName, ConnectionMode, DatabaseName, OperationType, PublicAPIType, Region | All | Used to monitor the request latency on the Azure Cosmos DB server. |
+
+### Availability metrics
+
+|Metric (Metric Display Name) |Unit (Aggregation Type)|Description| Time granularities| Legacy metric mapping | Usage |
+||||| | |
+| ServiceAvailability (Service Availability)| Percent (Minimum, Maximum) | Account requests availability at one hour granularity| 1H | Service Availability | Represents the percentage of total requests that passed. A request is considered failed due to a system error if the status code is 410, 500, or 503. Used to monitor availability of the account at hour granularity. |
+
+### API for Cassandra metrics
+
+|Metric (Metric Display Name)|Unit (Aggregation Type)|Description|Dimensions| Time granularities| Usage |
+||||| | |
+| CassandraRequests (Cassandra Requests) | Count (Count) | Number of API for Cassandra requests made| DatabaseName, CollectionName, ErrorCode, Region, OperationType, ResourceType| All| Used to monitor Cassandra requests at a minute granularity. To get average requests per second, use Count aggregation at minute and divide by 60.|
+| CassandraRequestCharges (Cassandra Request Charges) | Count (Sum, Min, Max, Avg) | Request units consumed by the API for Cassandra | DatabaseName, CollectionName, Region, OperationType, ResourceType| All| Used to monitor RUs used per minute by an API for Cassandra account.|
+| CassandraConnectionClosures (Cassandra Connection Closures) |Count (Count) |Number of Cassandra Connections closed| ClosureReason, Region| All | Used to monitor the connectivity between clients and the Azure Cosmos DB API for Cassandra.|
+
+For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
+
+## Resource logs
+
+The following table lists the properties of resource logs in Azure Cosmos DB. The resource logs are collected into Azure Monitor Logs or Azure Storage. In Azure Monitor, logs are collected in the **AzureDiagnostics** table under the resource provider name of `MICROSOFT.DOCUMENTDB`.
+
+| Azure Storage field or property | Azure Monitor Logs property | Description |
+| | | |
+| **time** | **TimeGenerated** | The date and time (UTC) when the operation occurred. |
+| **resourceId** | **Resource** | The Azure Cosmos DB account for which logs are enabled.|
+| **category** | **Category** | For Azure Cosmos DB, **DataPlaneRequests**, **MongoRequests**, **QueryRuntimeStatistics**, **PartitionKeyStatistics**, **PartitionKeyRUConsumption**, **ControlPlaneRequests**, **CassandraRequests**, **GremlinRequests** are the available log types. |
+| **operationName** | **OperationName** | Name of the operation. The operation name can be `Create`, `Update`, `Read`, `ReadFeed`, `Delete`, `Replace`, `Execute`, `SqlQuery`, `Query`, `JSQuery`, `Head`, `HeadFeed`, or `Upsert`. |
+| **properties** | n/a | The contents of this field are described in the rows that follow. |
+| **activityId** | **activityId_g** | The unique GUID for the logged operation. |
+| **userAgent** | **userAgent_s** | A string that specifies the client user agent from which the request was sent. The format of the user agent is `{user agent name}/{version}`.|
+| **requestResourceType** | **requestResourceType_s** | The type of the resource accessed. This value can be database, container, document, attachment, user, permission, stored procedure, trigger, user-defined function, or an offer. |
+| **statusCode** | **statusCode_s** | The response status of the operation. |
+| **requestResourceId** | **ResourceId** | The resourceId that pertains to the request. Depending on the operation performed, this value may point to `databaseRid`, `collectionRid`, or `documentRid`.|
+| **clientIpAddress** | **clientIpAddress_s** | The client's IP address. |
+| **requestCharge** | **requestCharge_s** | The number of RUs that are used by the operation |
+| **collectionRid** | **collectionId_s** | The unique ID for the collection.|
+| **duration** | **duration_d** | The duration of the operation, in milliseconds. |
+| **requestLength** | **requestLength_s** | The length of the request, in bytes. |
+| **responseLength** | **responseLength_s** | The length of the response, in bytes.|
+| **resourceTokenPermissionId** | **resourceTokenPermissionId_s** | This property indicates the resource token permission Id that you have specified. To learn more about permissions, see the [Secure access to your data](./secure-access-to-data.md#permissions) article. |
+| **resourceTokenPermissionMode** | **resourceTokenPermissionMode_s** | This property indicates the permission mode that you have set when creating the resource token. The permission mode can have values such as "all" or "read". To learn more about permissions, see the [Secure access to your data](./secure-access-to-data.md#permissions) article. |
+| **resourceTokenUserRid** | **resourceTokenUserRid_s** | This value is non-empty when [resource tokens](./secure-access-to-data.md#resource-tokens) are used for authentication. The value points to the resource ID of the user. |
+
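+A short sketch that projects a few of the properties listed above from the legacy **AzureDiagnostics** table (field names follow the table above; some fields only appear for certain categories):
+
+```kusto
+// Sketch: inspect a handful of data-plane log records and commonly used properties.
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.DOCUMENTDB" and Category == "DataPlaneRequests"
+| project TimeGenerated, OperationName, requestResourceType_s, statusCode_s, requestCharge_s, clientIpAddress_s, userAgent_s
+| take 10
+```
+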
+For a list of all Azure Monitor log categories and links to associated schemas, see [Azure Monitor Logs categories and schemas](../azure-monitor/essentials/resource-logs-schema.md).
+
+## Azure Monitor Logs tables
+
+Azure Cosmos DB uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a list of the Kusto tables Azure Cosmos DB uses, see the [Azure Monitor Logs table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-cosmos-db) article.
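+
+If your account writes to resource-specific tables, a quick sketch to see which of those tables are receiving data (assuming the usual `CDB*` table naming shown in this article and the built-in `Type` column):
+
+```kusto
+// Sketch: count records per resource-specific Azure Cosmos DB table over the last day.
+union CDB*
+| where TimeGenerated >= ago(1d)
+| summarize Records = count() by Type
+```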
+
+## See Also
+
+- See [Monitoring Azure Cosmos DB](monitor.md) for a description of monitoring Azure Cosmos DB.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
cosmos-db Monitor Request Unit Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-request-unit-usage.md
Title: Monitor the throughput usage of an operation in Azure Cosmos DB description: Learn how to monitor the throughput or request unit usage of an operation in Azure Cosmos DB. Owners of an Azure Cosmos DB account can understand which operations are taking more request units. +
Last updated 09/16/2021
# How to monitor throughput or request unit usage of an operation in Azure Cosmos DB Azure Monitor for Azure Cosmos DB provides a metrics view to monitor your account and create dashboards. The Azure Cosmos DB metrics are collected by default; this feature does not require you to enable or configure anything explicitly. The **Total Request Units** metric is used to get the request unit usage for different types of operations. Later you can analyze which operations used most of the throughput. By default, the throughput data is aggregated at a one-minute interval. However, you can change the aggregation unit by changing the time granularity option.
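
If you've also enabled diagnostic logs with resource-specific tables, a rough log-based sketch of the same breakdown (total request units per operation type, per minute) could look like the following; it reuses the `CDBDataPlaneRequests` columns shown elsewhere in these articles:

```kusto
// Sketch: total request units per operation type at one-minute granularity.
CDBDataPlaneRequests
| where TimeGenerated >= ago(1h)
| summarize TotalRUs = sum(todouble(RequestCharge)) by OperationName, bin(TimeGenerated, 1m)
| order by TimeGenerated asc, TotalRUs desc
```
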
If you notice certain queries are taking more request units, you can take action
* Modify the query to use index with filter clause. * Perform less expensive UDF function calls. * Define partition keys to minimize the fan out of query into different partitions.
-* You can also use the query metrics returned in the call response, the diagnostic log details and refer to [query performance tuning](sql-api-query-metrics.md) article to learn more about the query execution.
+* You can also use the query metrics returned in the call response and the diagnostic log details, and refer to the [query performance tuning](nosql/query-metrics.md) article to learn more about query execution.
* You can start from sum and then look at avg utilization using the right dimension. ## View the total request unit usage metric
If you notice certain queries are taking more request units, you can take action
:::image type="content" source="./media/monitor-request-unit-usage/monitor-metrics-blade.png" alt-text="Metrics pane in Azure Monitor":::
-1. From the **Metrics** pane > **Select a resource** > choose the required **subscription**, and **resource group**. For the **Resource type**, select **Azure Cosmos DB accounts**, choose one of your existing Azure Cosmos accounts, and select **Apply**.
+1. From the **Metrics** pane > **Select a resource** > choose the required **subscription**, and **resource group**. For the **Resource type**, select **Azure Cosmos DB accounts**, choose one of your existing Azure Cosmos DB accounts, and select **Apply**.
:::image type="content" source="./media/monitor-account-key-updates/select-account-scope.png" alt-text="Select the account scope to view metrics" border="true":::
-1. Next select the **Total Request Units** metric from the list of available metrics. To learn in detail about all the available metrics in this list, see the [Metrics by category](monitor-cosmos-db-reference.md) article. In this example, let's select **Total Request Units** and **Avg** as the aggregation value. In addition to these details, you can also select the **Time range** and **Time granularity** of the metrics. At max, you can view metrics for the past 30 days. After you apply the filter, a chart is displayed based on your filter. You can see the average number of request units consumed per minute for the selected period.
+1. Next select the **Total Request Units** metric from the list of available metrics. To learn in detail about all the available metrics in this list, see the [Metrics by category](monitor-reference.md) article. In this example, let's select **Total Request Units** and **Avg** as the aggregation value. In addition to these details, you can also select the **Time range** and **Time granularity** of the metrics. At max, you can view metrics for the past 30 days. After you apply the filter, a chart is displayed based on your filter. You can see the average number of request units consumed per minute for the selected period.
:::image type="content" source="./media/monitor-request-unit-usage/request-unit-usage-metric.png" alt-text="Choose a metric from the Azure portal" border="true":::
You can also filter metrics and get the charts displayed by a specific **Collect
To get the request unit usage of each operation either by total(sum) or average, select **Apply splitting** and choose **Operation type** and the filter value as shown in the following image:
- :::image type="content" source="./media/monitor-request-unit-usage/request-unit-usage-operations.png" alt-text="Cosmos DB Request units for operations in Azure monitor":::
+ :::image type="content" source="./media/monitor-request-unit-usage/request-unit-usage-operations.png" alt-text="Azure Cosmos DB Request units for operations in Azure monitor":::
If you want to see the request unit usage by collection, select **Apply splitting** and choose the collection name as a filter. You will see a chart like the following with a choice of collections within the dashboard. You can then select a specific collection name to view more details:
- :::image type="content" source="./media/monitor-request-unit-usage/request-unit-usage-collection.png" alt-text="Cosmos DB Request units for all operations by the collection in Azure monitor" border="true":::
+ :::image type="content" source="./media/monitor-request-unit-usage/request-unit-usage-collection.png" alt-text="Azure Cosmos DB Request units for all operations by the collection in Azure monitor" border="true":::
## Next steps
-* Monitor Azure Cosmos DB data by using [diagnostic settings](cosmosdb-monitor-resource-logs.md) in Azure.
+* Monitor Azure Cosmos DB data by using [diagnostic settings](monitor-resource-logs.md) in Azure.
* [Audit Azure Cosmos DB control plane operations](audit-control-plane-logs.md)
cosmos-db Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md
+
+ Title: Monitor Azure Cosmos DB data by using Azure Diagnostic settings
+description: Learn how to use Azure diagnostic settings to monitor the performance and availability of data stored in Azure Cosmos DB
++++++ Last updated : 05/20/2021++
+# Monitor Azure Cosmos DB data by using diagnostic settings in Azure
+
+Diagnostic settings in Azure are used to collect resource logs. Azure resource Logs are emitted by a resource and provide rich, frequent data about the operation of that resource. These logs are captured per request and they are also referred to as "data plane logs". Some examples of the data plane operations include delete, insert, and readFeed. The content of these logs varies by resource type.
+
+Platform metrics and the Activity logs are collected automatically, whereas you must create a diagnostic setting to collect resource logs or forward them outside of Azure Monitor. You can turn on diagnostic setting for Azure Cosmos DB accounts and send resource logs to the following sources:
+- Log Analytics workspaces
+ - Data sent to Log Analytics can be written into **Azure Diagnostics (legacy)** or **Resource-specific (preview)** tables
+- Event hub
+- Storage Account
+
+> [!NOTE]
+> We recommend creating the diagnostic setting in resource-specific mode (for all APIs except API for Table) [following our instructions for creating diagnostics setting via REST API](monitor-resource-logs.md#create-diagnostic-setting). This option provides additional cost-optimizations with an improved view for handling data.
+
+## <a id="create-setting-portal"></a> Create diagnostics settings via the Azure portal
+
+1. Sign into the [Azure portal](https://portal.azure.com).
+
+2. Navigate to your Azure Cosmos DB account. Open the **Diagnostic settings** pane under the **Monitoring section**, and then select **Add diagnostic setting** option.
+
+ :::image type="content" source="./media/monitor/diagnostics-settings-selection.png" alt-text="Select diagnostics":::
++
+3. In the **Diagnostic settings** pane, fill the form with your preferred categories.
+
+### Choose log categories
+
+ |Category |API | Definition | Key Properties |
+ |||||
+ |DataPlaneRequests | All APIs | Logs back-end requests as data plane operations which are requests executed to create, update, delete or retrieve data within the account. | `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId` `resourceTokenPermissionMode` |
+ |MongoRequests | Mongo | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` |
+ |CassandraRequests | Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
+ |GremlinRequests | Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
+ |QueryRuntimeStatistics | SQL | This table details query operations executed against an API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging personal data; full-text query logging is available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
+ |PartitionKeyStatistics | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: <br/><ul><li> At least 1% of the documents in the physical partition have same logical partition key. </li><li> Out of all the keys in the physical partition, the top 3 keys with largest storage size are captured by the PartitionKeyStatistics log. </li></ul> If the previous conditions are not met, the partition key statistics data is not available. It's okay if the above conditions are not met for your account, which typically indicates you have no logical partition storage skew. <br/><br/>Note: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes are not uniform in the physical partition, the estimated partition key size may not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |
+ |PartitionKeyRUConsumption | API for NoSQL | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for API for NoSQL accounts only and for point read/write and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` |
+ |ControlPlaneRequests | All APIs | Logs details on control plane operations, such as creating an account, adding or removing a region, and updating account replication settings. | `operationName`, `httpstatusCode`, `httpMethod`, `region` |
+ |TableApiRequests | API for Table | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Table. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
+
+4. After you select your **Categories details**, send your logs to your preferred destination. If you're sending logs to a **Log Analytics Workspace**, make sure to select **Resource specific** as the destination table.
+
+ :::image type="content" source="./media/monitor/diagnostics-resource-specific.png" alt-text="Select enable resource-specific":::
+
+## <a id="create-diagnostic-setting"></a> Create diagnostic setting via REST API
+Use the [Azure Monitor REST API](/rest/api/monitor/diagnosticsettings/createorupdate) for creating a diagnostic setting via the interactive console.
+> [!Note]
+> We recommend setting the **logAnalyticsDestinationType** property to **Dedicated** for enabling resource specific tables.
+
+### Request
+
+ ```HTTP
+ PUT
+ https://management.azure.com/{resource-id}/providers/microsoft.insights/diagnosticSettings/service?api-version={api-version}
+ ```
+
+### Headers
+
+ |Parameters/Headers | Value/Description |
+ |||
+ |name | The name of your Diagnostic setting. |
+ |resourceUri | subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME}/providers/microsoft.insights/diagnosticSettings/{DIAGNOSTIC_SETTING_NAME} |
+ |api-version | 2017-05-01-preview |
+ |Content-Type | application/json |
+
+### Body
+
+```json
+{
+ "id": "/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME}/providers/microsoft.insights/diagnosticSettings/{DIAGNOSTIC_SETTING_NAME}",
+ "type": "Microsoft.Insights/diagnosticSettings",
+ "name": "name",
+ "location": null,
+ "kind": null,
+ "tags": null,
+ "properties": {
+ "storageAccountId": null,
+ "serviceBusRuleId": null,
+ "workspaceId": "/subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/microsoft.operationalinsights/workspaces/{WORKSPACE_NAME}",
+ "eventHubAuthorizationRuleId": null,
+ "eventHubName": null,
+ "logs": [
+ {
+ "category": "DataPlaneRequests",
+ "categoryGroup": null,
+ "enabled": true,
+ "retentionPolicy": {
+ "enabled": false,
+ "days": 0
+ }
+ },
+ {
+ "category": "QueryRuntimeStatistics",
+ "categoryGroup": null,
+ "enabled": true,
+ "retentionPolicy": {
+ "enabled": false,
+ "days": 0
+ }
+ },
+ {
+ "category": "PartitionKeyStatistics",
+ "categoryGroup": null,
+ "enabled": true,
+ "retentionPolicy": {
+ "enabled": false,
+ "days": 0
+ }
+ },
+ {
+ "category": "PartitionKeyRUConsumption",
+ "categoryGroup": null,
+ "enabled": true,
+ "retentionPolicy": {
+ "enabled": false,
+ "days": 0
+ }
+ },
+ {
+ "category": "ControlPlaneRequests",
+ "categoryGroup": null,
+ "enabled": true,
+ "retentionPolicy": {
+ "enabled": false,
+ "days": 0
+ }
+ }
+ ],
+ "logAnalyticsDestinationType": "Dedicated"
+ },
+ "identity": null
+}
+```
+
+## Create diagnostic setting via Azure CLI
+Use the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command to create a diagnostic setting with the Azure CLI. See the documentation for this command for descriptions of its parameters.
+
+> [!Note]
+> If you are using API for NoSQL, we recommend setting the **export-to-resource-specific** property to **true**.
+
+ ```azurecli-interactive
+ az monitor diagnostic-settings create --resource /subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME} --name {DIAGNOSTIC_SETTING_NAME} --export-to-resource-specific true --logs '[{"category": "QueryRuntimeStatistics","categoryGroup": null,"enabled": true,"retentionPolicy": {"enabled": false,"days": 0}}]' --workspace /subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/microsoft.operationalinsights/workspaces/{WORKSPACE_NAME}
+ ```
+## <a id="full-text-query"></a> Enable full-text query for logging query text
+
+> [!Note]
+> Enabling this feature may result in additional logging costs. For pricing details, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). We recommend disabling this feature after troubleshooting.
+
+Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabling full-text query, you'll be able to view the deobfuscated query for all requests within your Azure Cosmos DB account. You'll also give permission for Azure Cosmos DB to access and surface this data in your logs.
+
+1. To enable this feature, navigate to the `Features` blade in your Azure Cosmos DB account.
+
+ :::image type="content" source="./media/monitor/full-text-query-features.png" alt-text="Navigate to Features blade":::
+
+2. Select `Enable`. This setting is then applied within the next few minutes. All newly ingested logs will have the full-text or PIICommand text for each request.
+
+ :::image type="content" source="./media/monitor/select-enable-full-text.png" alt-text="Select enable full-text":::
+
+To learn how to query using this newly enabled feature, see [advanced queries](advanced-queries.md).
+
+## Next steps
+* For more information on how to query resource-specific tables see [troubleshooting using resource-specific tables](monitor-logs-basic-queries.md#resource-specific-queries).
+
+* For more information on how to query AzureDiagnostics tables see [troubleshooting using AzureDiagnostics tables](monitor-logs-basic-queries.md#azure-diagnostics-queries).
+
+* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
cosmos-db Monitor Server Side Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-server-side-latency.md
Title: How to monitor the server-side latency for operations in Azure Cosmos DB
-description: Learn how to monitor server latency for operations in Azure Cosmos DB account or a container. Owners of an Azure Cosmos DB account can understand the server-side latency issues with your Azure Cosmos accounts.
+description: Learn how to monitor server latency for operations in Azure Cosmos DB account or a container. Owners of an Azure Cosmos DB account can understand the server-side latency issues with your Azure Cosmos DB accounts.
+
Last updated 09/23/2022
# How to monitor the server-side latency for operations in an Azure Cosmos DB container or account -
-Azure Monitor for Azure Cosmos DB provides a metrics view to monitor your account and create dashboards. The Azure Cosmos DB metrics are collected by default, this feature doesn't require you to enable or configure anything explicitly. The server-side latency metric direct and server-side latency gateway metrics are used to view the server-side latency of an operation in two different connection modes. Use server-side latency gateway metric if your request operation is in gateway connectivity mode. Use server-side latency direct metric if your request operation is in direct connectivity mode. Azure Cosmos DB provides SLA of less than 10 ms for point read/write operations with direct connectivity. For point read and point write operations, the SLAs are calculated as detailed in the [SLA document](https://azure.microsoft.com/support/legal/sl) article.
+Azure Monitor for Azure Cosmos DB provides a metrics view to monitor your account and create dashboards. The Azure Cosmos DB metrics are collected by default; this feature doesn't require you to enable or configure anything explicitly. The server-side latency direct and server-side latency gateway metrics are used to view the server-side latency of an operation in two different connection modes. Use the server-side latency gateway metric if your request operation is in gateway connectivity mode. Use the server-side latency direct metric if your request operation is in direct connectivity mode. Azure Cosmos DB provides an SLA of less than 10 ms for point read/write operations with direct connectivity. For point read and point write operations, the SLAs are calculated as detailed in the [SLA document](https://azure.microsoft.com/support/legal/sl) article.
The following table indicates which API supports server-side latency metrics (Direct versus Gateway):
You can monitor server-side latency metrics if you see unusually high latency fo
* A read or write operation or * A query
-You can look up the diagnostic log to see the size of the data returned. If you see a sustained high latency for query operations, you should look up the diagnostic log for higher [throughput or RU/s](cosmosdb-monitor-logs-basic-queries.md) used. Server side latency shows the amount of time spent on the backend infrastructure before the data was returned to the client. It's important to look at this metric to rule out any backend latency issues.
+You can look up the diagnostic log to see the size of the data returned. If you see a sustained high latency for query operations, you should look up the diagnostic log for higher [throughput or RU/s](monitor-logs-basic-queries.md) used. Server side latency shows the amount of time spent on the backend infrastructure before the data was returned to the client. It's important to look at this metric to rule out any backend latency issues.
## View the server-side latency metrics
You can look up the diagnostic log to see the size of the data returned. If you
:::image type="content" source="./media/monitor-server-side-latency/monitor-metrics-blade.png" alt-text="Metrics pane in Azure Monitor" border="true":::
-1. From the **Metrics** pane > **Select a resource** > choose the required **subscription**, and **resource group**. For the **Resource type**, select **Azure Cosmos DB accounts**, choose one of your existing Azure Cosmos accounts, and select **Apply**.
+1. From the **Metrics** pane > **Select a resource** > choose the required **subscription**, and **resource group**. For the **Resource type**, select **Azure Cosmos DB accounts**, choose one of your existing Azure Cosmos DB accounts, and select **Apply**.
:::image type="content" source="./media/monitor-account-key-updates/select-account-scope.png" alt-text="Select the account scope to view metrics" border="true":::
-1. Next select the **Server Side Latency Gateway** metric from the list of available metrics, if your operation is in gateway connectivity mode. Select the **Server Side Latency Direct** metric, if your operation is in direct connectivity mode. To learn in detail about all the available metrics in this list, see the [Metrics by category](monitor-cosmos-db-reference.md) article. In this example, let's select **Server Side Latency Gateway** and **Avg** as the aggregation value. In addition to these details, you can also select the **Time range** and **Time granularity** of the metrics. At max, you can view metrics for the past 30 days. After you apply the filter, a chart is displayed based on your filter. You can see the server-side latency in gateway connectivity mode per 5 minutes for the selected period.
+1. Next select the **Server Side Latency Gateway** metric from the list of available metrics, if your operation is in gateway connectivity mode. Select the **Server Side Latency Direct** metric, if your operation is in direct connectivity mode. To learn in detail about all the available metrics in this list, see the [Metrics by category](monitor-reference.md) article. In this example, let's select **Server Side Latency Gateway** and **Avg** as the aggregation value. In addition to these details, you can also select the **Time range** and **Time granularity** of the metrics. At max, you can view metrics for the past 30 days. After you apply the filter, a chart is displayed based on your filter. You can see the server-side latency in gateway connectivity mode per 5 minutes for the selected period.
:::image type="content" source="./media/monitor-server-side-latency/server-side-latency-gateway-metric.png" alt-text="Choose the Server-Side Latency Gateway metric from the Azure portal" border="true" lightbox="./media/monitor-server-side-latency/server-side-latency-gateway-metric.png":::
You can also group the metrics by using the **Apply splitting** option.
## Next steps
-* Monitor Azure Cosmos DB data by using [diagnostic settings](cosmosdb-monitor-resource-logs.md) in Azure.
+* Monitor Azure Cosmos DB data by using [diagnostic settings](monitor-resource-logs.md) in Azure.
* [Audit Azure Cosmos DB control plane operations](audit-control-plane-logs.md)
cosmos-db Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor.md
+
+ Title: Monitor Azure Cosmos DB | Microsoft Docs
+description: Learn how to monitor the performance and availability of Azure Cosmos DB.
+++++ Last updated : 05/03/2020+++
+# Monitor Azure Cosmos DB
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Azure Cosmos DB databases and how you can use the features of Azure Monitor to analyze and alert on this data.
+
+You can monitor your data with client-side and server-side metrics. When using server-side metrics, you can monitor the data stored in Azure Cosmos DB with the following options:
+
+* **Monitor from Azure Cosmos DB portal:** You can monitor with the metrics available within the **Metrics** tab of the Azure Cosmos DB account. The metrics on this tab include throughput, storage, availability, latency, consistency, and system level metrics. By default, these metrics have a retention period of seven days. To learn more, see the [Monitoring data collected from Azure Cosmos DB](#monitoring-data) section of this article.
+
+* **Monitor with metrics in Azure Monitor:** You can monitor the metrics of your Azure Cosmos DB account and create dashboards from Azure Monitor. Azure Monitor collects the Azure Cosmos DB metrics by default; you don't need to explicitly configure anything. These metrics are collected with one-minute granularity; the granularity may vary based on the metric you choose. By default, these metrics have a retention period of 30 days. Most of the metrics that are available from the previous options are also available in these metrics. The dimension values for the metrics, such as container name, are case-insensitive. So you need to use case-insensitive comparison when doing string comparisons on these dimension values. To learn more, see the [Analyze metric data](#analyzing-metrics) section of this article.
+
+* **Monitor with diagnostic logs in Azure Monitor:** You can monitor the logs of your Azure Cosmos DB account and create dashboards from Azure Monitor. Data such as events and traces that occur at a second granularity are stored as logs. For example, if the throughput of a container changes or the properties of an Azure Cosmos DB account are changed, these events are captured within the logs. You can analyze these logs by running queries on the gathered data. To learn more, see the [Analyze log data](#analyzing-logs) section of this article.
+
+* **Monitor programmatically with SDKs:** You can monitor your Azure Cosmos DB account programmatically by using the .NET, Java, Python, Node.js SDKs, and the headers in REST API. To learn more, see the [Monitoring Azure Cosmos DB programmatically](#monitor-azure-cosmos-db-programmatically) section of this article.
+
+The following image shows different options available to monitor Azure Cosmos DB account through Azure portal:
++
+When using Azure Cosmos DB, at the client-side you can collect the details for request charge, activity ID, exception/stack trace information, HTTP status/sub-status code, diagnostic string to debug any issue that might occur. This information is also required if you need to reach out to the Azure Cosmos DB support team.
+
+## Monitor overview
+
+The **Overview** page in the Azure portal for each Azure Cosmos DB account includes a brief view of the resource usage, such as total requests, requests that resulted in a specific HTTP status code, and hourly billing. This information is helpful, however only a small amount of the monitoring data is available from this pane. Some of this data is collected automatically and is available for analysis as soon as you create the resource. You can enable other types of data collection with some configuration.
+
+## What is Azure Monitor?
+
+Azure Cosmos DB creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full stack monitoring service in Azure that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premises.
+
+If you're not already familiar with monitoring Azure services, start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md), which describes the following concepts:
+
+* What is Azure Monitor?
+* Costs associated with monitoring
+* Monitoring data collected in Azure
+* Configuring data collection
+* Standard tools in Azure for analyzing and alerting on monitoring data
+
+The following sections build on this article by describing the specific data gathered from Azure Cosmos DB and providing examples for configuring data collection and analyzing this data with Azure tools.
+
+## Azure Cosmos DB insights
+
+Azure Cosmos DB insights is a feature based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) and uses the same monitoring data collected for Azure Cosmos DB described in the sections below. Use Azure Monitor for a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience, and use the other features of Azure Monitor for detailed analysis and alerting. To learn more, see the [Explore Azure Cosmos DB insights](insights-overview.md) article.
+
+> [!NOTE]
+> When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
+
+## Monitoring data
+
+Azure Cosmos DB collects the same kinds of monitoring data as other Azure resources, which are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data). See [Azure Cosmos DB monitoring data reference](monitor-reference.md) for a detailed reference of the logs and metrics created by Azure Cosmos DB.
+
+The **Overview** page in the Azure portal for each Azure Cosmos DB database includes a brief view of the database usage, including its request and hourly billing usage. This is useful information, but only a small amount of the available monitoring data. Some of this data is collected automatically and is available for analysis as soon as you create the database, while you can enable more data collection with some configuration.
++
+## Collection and routing
+
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](monitor-resource-logs.md) for the detailed process for creating a diagnostic setting using the Azure portal and some diagnostic query examples. When you create a diagnostic setting, you specify which categories of logs to collect.
+
+The metrics and logs you can collect are discussed in the following sections.
+
+## Analyzing metrics
+
+Azure Cosmos DB provides a custom experience for working with metrics. You can analyze metrics for Azure Cosmos DB with metrics from other Azure services using Metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool. You can also check out how to monitor [server-side latency](monitor-server-side-latency.md), [request unit usage](monitor-request-unit-usage.md), and [normalized request unit usage](monitor-normalized-request-units.md) for your Azure Cosmos DB resources.
+
+For a list of the platform metrics collected for Azure Cosmos DB, see [Monitoring Azure Cosmos DB data reference metrics](monitor-reference.md#metrics) article.
+
+All metrics for Azure Cosmos DB are in the namespace **Azure Cosmos DB standard metrics**. You can use the following dimensions with these metrics when adding a filter to a chart:
+
+* CollectionName
+* DatabaseName
+* OperationType
+* Region
+* StatusCode
+
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
+
+### View operation level metrics for Azure Cosmos DB
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Select **Monitor** from the left-hand navigation bar, and select **Metrics**.
+
+ :::image type="content" source="./media/monitor/monitor-metrics-blade.png" alt-text="Metrics pane in Azure Monitor":::
+
+1. From the **Metrics** pane > **Select a resource** > choose the required **subscription**, and **resource group**. For the **Resource type**, select **Azure Cosmos DB accounts**, choose one of your existing Azure Cosmos DB accounts, and select **Apply**.
+
+ :::image type="content" source="./media/monitor/select-cosmosdb-account.png" alt-text="Choose an Azure Cosmos DB account to view metrics":::
+
+1. Next you can select a metric from the list of available metrics. You can select metrics specific to request units, storage, latency, availability, Cassandra, and others. To learn in detail about all the available metrics in this list, see the [Metrics by category](monitor-reference.md) article. In this example, let's select **Request units** and **Avg** as the aggregation value.
+
+ In addition to these details, you can also select the **Time range** and **Time granularity** of the metrics. At max, you can view metrics for the past 30 days. After you apply the filter, a chart is displayed based on your filter. You can see the average number of request units consumed per minute for the selected period.
+
+ :::image type="content" source="./media/monitor/metric-types.png" alt-text="Choose a metric from the Azure portal":::
+
+### Add filters to metrics
+
+You can also filter metrics and the chart displayed by a specific **CollectionName**, **DatabaseName**, **OperationType**, **Region**, and **StatusCode**. To filter the metrics, select **Add filter** and choose the required property such as **OperationType** and select a value such as **Query**. The graph then displays the request units consumed for the query operation for the selected period. The operations executed via Stored procedure aren't logged so they aren't available under the OperationType metric.
++
+You can group metrics by using the **Apply splitting** option. For example, you can group the request units per operation type and view the graph for all the operations at once as shown in the following image:
++
+## Analyzing logs
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). For a list of the types of resource logs collected for Azure Cosmos DB, see [Monitoring Azure Cosmos DB data reference](monitor-reference.md#resource-logs).
+
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+
+Azure Cosmos DB stores data in the following tables.
+
+| Table | Description |
+|:|:|
+| AzureDiagnostics | Common table used by multiple services to store Resource logs. Resource logs from Azure Cosmos DB can be identified with `MICROSOFT.DOCUMENTDB`. |
+| AzureActivity | Common table that stores all records from the Activity log.
+
+### Sample Kusto queries
+
+Prior to using Log Analytics to issue Kusto queries, you must [enable diagnostic logs for control plane operations](./audit-control-plane-logs.md#enable-diagnostic-logs-for-control-plane-operations). When enabling diagnostic logs, you will select between storing your data in a single [AzureDiagnostics table (legacy)](../azure-monitor/essentials/resource-logs.md#azure-diagnostics-mode) or [resource-specific tables](../azure-monitor/essentials/resource-logs.md#resource-specific).
+
+When you select **Logs** from the Azure Cosmos DB menu, Log Analytics is opened with the query scope set to the current Azure Cosmos DB account. Log queries will only include data from that resource.
+
+> [!IMPORTANT]
+> If you want to run a query that includes data from other accounts or data from other Azure services, select **Logs** from the **Azure Monitor** menu. For more information, see [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md).
+
+Here are some queries that you can enter into the **Log search** search bar to help you monitor your Azure Cosmos DB resources. The exact text of the queries will depend on the [collection mode](../azure-monitor/essentials/resource-logs.md#select-the-collection-mode) you selected when you enabled diagnostics logs.
+
+#### [AzureDiagnostics table (legacy)](#tab/azure-diagnostics)
+
+* To query for all control-plane logs from Azure Cosmos DB:
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
+ | where Category=="ControlPlaneRequests"
+ ```
+
+* To query for all data-plane logs from Azure Cosmos DB:
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
+ | where Category=="DataPlaneRequests"
+ ```
+
+* To query for a filtered list of data-plane logs, specific to a single resource:
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
+ | where Category=="DataPlaneRequests"
+ | where Resource=="<account-name>"
+ ```
+
+ > [!IMPORTANT]
+ > In the **AzureDiagnostics** table, many fields are case-sensitive and uppercase including, but not limited to; *ResourceId*, *ResourceGroup*, *ResourceProvider*, and *Resource*.
+
+* To get a count of data-plane logs, grouped by resource:
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
+ | where Category=="DataPlaneRequests"
+ | summarize count() by Resource
+ ```
+
+* To generate a chart for data-plane logs, grouped by the type of operation:
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
+ | where Category=="DataPlaneRequests"
+ | summarize count() by OperationName
+ | render columnchart
+ ```
+
+#### [Resource-specific table](#tab/resource-specific-diagnostics)
+
+* To query for all control-plane logs from Azure Cosmos DB:
+
+ ```kusto
+ CDBControlPlaneRequests
+ ```
+
+* To query for all data-plane logs from Azure Cosmos DB:
+
+ ```kusto
+ CDBDataPlaneRequests
+ ```
+
+* To query for a filtered list of data-plane logs, specific to a single resource:
+
+ ```kusto
+ CDBDataPlaneRequests
+ | where AccountName=="<account-name>"
+ ```
+
+* To get a count of data-plane logs, grouped by resource:
+
+ ```kusto
+ CDBDataPlaneRequests
+ | summarize count() by AccountName
+ ```
+
+* To generate a chart for data-plane logs, grouped by the type of operation:
+
+ ```kusto
+ CDBDataPlaneRequests
+ | summarize count() by OperationName
+ | render piechart
+ ```
+++
+These examples are just a small sampling of the rich queries that can be performed in Azure Monitor using the Kusto Query Language. For more information, see [samples for Kusto queries](/azure/data-explorer/kusto/query/samples?pivots=azuremonitor).
+
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
+
+For example, the following table lists a few alert rules for your resources. You can find a detailed list of alert rules in the Azure portal. To learn more, see the [how to configure alerts](create-alerts.md) article.
+
+| Alert type | Condition | Description |
+|:|:|:|
+|Rate limiting on request units (metric alert) |Dimension name: StatusCode, Operator: Equals, Dimension values: 429 | Alerts if the container or a database has exceeded the provisioned throughput limit. |
+|Region failed over |Operator: Greater than, Aggregation type: Count, Threshold value: 1 | When a single region is failed over. This alert is helpful if you didn't enable service-managed failover. |
+| Rotate keys(activity log alert)| Event level: Informational, Status: started| Alerts when the account keys are rotated. You can update your application with the new keys. |
+
+## Monitor Azure Cosmos DB programmatically
+
+The account level metrics available in the portal, such as account storage usage and total requests, aren't available via the API for NoSQL. However, you can retrieve usage data at the collection level by using the API for NoSQL. To retrieve collection level data, do the following:
+
+* To use the REST API, [perform a GET on the collection](/rest/api/cosmos-db/get-a-collection). The quota and usage information for the collection is returned in the x-ms-resource-quota and x-ms-resource-usage headers in the response.
+
+* To use the .NET SDK, use the [DocumentClient.ReadDocumentCollectionAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readdocumentcollectionasync) method, which returns a [ResourceResponse](/dotnet/api/microsoft.azure.documents.client.resourceresponse-1) that contains many usage properties such as **CollectionSizeUsage**, **DatabaseUsage**, **DocumentUsage**, and more.
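+
+As a minimal sketch of the .NET option above (legacy .NET SDK v2 `DocumentClient`; the endpoint, key, database, and collection names are placeholders), you can read the collection and print the usage values surfaced on the response:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.Azure.Documents.Client;
+
+class CollectionUsageSample
+{
+    static async Task Main()
+    {
+        using (var client = new DocumentClient(
+            new Uri("https://<account-name>.documents.azure.com:443/"), "<account-key>"))
+        {
+            var response = await client.ReadDocumentCollectionAsync(
+                UriFactory.CreateDocumentCollectionUri("<database-id>", "<collection-id>"));
+
+            // Usage properties surfaced by the SDK, backed by the
+            // x-ms-resource-quota and x-ms-resource-usage response headers.
+            Console.WriteLine($"Collection size usage (KB): {response.CollectionSizeUsage}");
+            Console.WriteLine($"Document count: {response.DocumentUsage}");
+            Console.WriteLine($"Raw usage header: {response.ResponseHeaders["x-ms-resource-usage"]}");
+        }
+    }
+}
+```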
+
+To access more metrics, use the [Azure Monitor SDK](https://www.nuget.org/packages/Microsoft.Azure.Insights). Available metric definitions can be retrieved by calling:
+
+```http
+https://management.azure.com/subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroup}/providers/Microsoft.DocumentDb/databaseAccounts/{DocumentDBAccountName}/providers/microsoft.insights/metricDefinitions?api-version=2018-01-01
+```
+
+To retrieve individual metrics, use the following format:
+
+```http
+https://management.azure.com/subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroup}/providers/Microsoft.DocumentDb/databaseAccounts/{DocumentDBAccountName}/providers/microsoft.insights/metrics?timespan={StartTime}/{EndTime}&interval={AggregationInterval}&metricnames={MetricName}&aggregation={AggregationType}&`$filter={Filter}&api-version=2018-01-01
+```
+
+To learn more, see the [Azure monitoring REST API](../azure-monitor/essentials/rest-api-walkthrough.md) article.
+
+## Next steps
+
+* See [Azure Cosmos DB monitoring data reference](monitor-reference.md) for a reference of the logs and metrics created by Azure Cosmos DB.
+* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
cosmos-db Monitoring Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitoring-solutions.md
Title: Monitoring Azure Cosmos DB using third-party monitoring tools
-description: This article will describe monitoring third-party tools helps monitoring Cosmos DB.
+description: This article describes how third-party monitoring tools help you monitor Azure Cosmos DB.
+ Last updated 07/28/2021 # Monitoring Azure Cosmos DB using third-party solutions
-Apart from Azure Monitor, you can use third party monitoring solutions to monitor your Cosmos DB instances.
+Apart from Azure Monitor, you can use third-party monitoring solutions to monitor your Azure Cosmos DB instances.
> [!IMPORTANT] > Solutions mentioned in this article are for informational purposes only; ownership lies with the individual solution owners. We recommend that you evaluate the solutions thoroughly and select the one most suitable for you. ## Datadog
-{Supports: SQL API, Azure Cosmos DB API for MongoDB, Gremlin API, Cassandra API & Table API}
+{Supports: API for NoSQL, MongoDB, Gremlin, Cassandra & Table}
[Datadog](https://www.datadoghq.com/) is a fully unified platform encompassing infrastructure monitoring, application performance monitoring, log management, user-experience monitoring, and more. By bringing together data from every tool and service in your company's stack, Datadog provides a single source of truth for troubleshooting, optimizing performance, and cross-team collaboration. Everything in Datadog is organized under the same set of tags, so all the data relevant to a particular issue is automatically correlated. By eliminating the blind spots, Datadog reduces the risk of overlooked errors, mitigates the burden of ongoing service maintenance, and accelerates digital transformations. Datadog collects over 40 different gauge and count metrics from CosmosDB, including the total available storage per region, the number of SQL databases created, and more. These metrics are collected through the Datadog Azure integration, and appear in the platform 40% faster than the rest of the industry. Datadog also provides an out-of-the-box dashboard for CosmosDB, which provides immediate insight into the performance of CosmosDB instances. Users can visualize platform-level metrics, such as total request units consumed, as well as API-level metrics, such as the number of Cassandra keyspaces created, to better understand their CosmosDB usage.
-Datadog is being used by various Cosmos DB customers, which include
+Datadog is being used by various Azure Cosmos DB customers, which include
- Maersk - PWC - PayScale
Useful links:
## Dynatrace
-{Supports: SQL API & Azure Cosmos DB API for MongoDB}
+{Supports: API for NoSQL & MongoDB}
[Dynatrace](https://www.dynatrace.com/platform/) delivers software intelligence for the cloud to tame cloud complexity and accelerate digital transformation. With automatic and intelligent observability at scale, the Dynatrace all-in-one Software Intelligence Platform delivers precise answers about the performance and security of applications, the underlying infrastructure, and the experience of all users, so teams can automate cloud operations, release better software faster, and deliver unrivaled digital experiences.
-Using the Mongo API, Dynatrace collects and delivers CosmosDB metrics, which includes the numbers of calls and response timesΓÇöall visualized according to aggregation, commands, read-, and write operations. It also tells you exact database statements executed in your environment. Lastly with the power of [Davis AI Engine](https://www.dynatrace.com/davis), it can detect exactly which database statement is the root cause of degradation and can see the database identified as the root cause.
+Using the API for MongoDB, Dynatrace collects and delivers CosmosDB metrics, which include the number of calls and response times, all visualized according to aggregation, commands, read, and write operations. It also tells you the exact database statements executed in your environment. Lastly, with the power of the [Davis AI Engine](https://www.dynatrace.com/davis), it can detect exactly which database statement is the root cause of degradation and show the database identified as the root cause.
**Figure:** Dynatrace in Action ### Useful links - [Try Dynatrace with 15 days free trial](https://www.dynatrace.com/trial) - [Launch from Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/dynatrace.dynatrace-managed)-- [Documentation on how to Cosmos DB with Azure Monitor](https://www.dynatrace.com/support/help/setup-and-configuration/setup-on-cloud-platforms/microsoft-azure-services)-- [Cosmos DB - Dynatrace Integration details](https://www.dynatrace.com/news/blog/azure-services-explained-part-4-azure-cosmos-db/?_ga=2.185016301.559899881.1623174355-748416177.1603817475)
+- [Documentation on how to use Azure Cosmos DB with Azure Monitor](https://www.dynatrace.com/support/help/setup-and-configuration/setup-on-cloud-platforms/microsoft-azure-services)
+- [Azure Cosmos DB - Dynatrace Integration details](https://www.dynatrace.com/news/blog/azure-services-explained-part-4-azure-cosmos-db/?_ga=2.185016301.559899881.1623174355-748416177.1603817475)
- [Dynatrace Monitoring for Azure databases](https://www.dynatrace.com/technologies/azure-monitoring/azure-database-performance/) - [Dynatrace for Azure solution overview](https://www.dynatrace.com/technologies/azure-monitoring/) - [Solution Partners](https://www.dynatrace.com/partners/solution-partners/) ## Next steps-- [Monitoring Cosmos DB data reference](./monitor-cosmos-db-reference.md)
+- [Monitoring Azure Cosmos DB data reference](./monitor-reference.md)
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/best-practice-dotnet.md
+
+ Title: Azure Cosmos DB best practices for .NET SDK v3
+description: Learn the best practices for using the Azure Cosmos DB .NET SDK v3
++++ Last updated : 08/30/2022+++++
+# Best practices for Azure Cosmos DB .NET SDK
+
+This article walks through the best practices for using the Azure Cosmos DB .NET SDK. Following these practices will help improve your latency and availability, and boost overall performance.
+
+Watch the video below to learn more about using the .NET SDK from an Azure Cosmos DB engineer!
+
+>
+> [!VIDEO https://aka.ms/docs.dotnet-best-practices]
+
+## Checklist
+|Checked | Subject |Details/Links |
+||||
|<input type="checkbox"/> | SDK Version | Always use the [latest version](sdk-dotnet-v3.md) of the Azure Cosmos DB SDK available for optimal performance. |
+| <input type="checkbox"/> | Singleton Client | Use a [single instance](/dotnet/api/microsoft.azure.cosmos.cosmosclient?view=azure-dotnet&preserve-view=true) of `CosmosClient` for the lifetime of your application for [better performance](performance-tips-dotnet-sdk-v3.md#sdk-usage). |
+| <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account, whenever possible to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [automatic failover](../how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover won't succeed due to lack of region connectivity. To learn how to add multiple regions using the .NET SDK visit [here](tutorial-global-distribution.md) |
+| <input type="checkbox"/> | Availability and Failovers | Set the [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions?view=azure-dotnet&preserve-view=true) or [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion?view=azure-dotnet&preserve-view=true) in the v3 SDK, and the [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations?view=azure-dotnet&preserve-view=true) in the v2 SDK using the [preferred regions list](./tutorial-global-distribution.md?tabs=dotnetv3%2capi-async#preferred-locations). During failovers, write operations are sent to the current write region and all reads are sent to the first region within your preferred regions list. For more information about regional failover mechanics, see the [availability troubleshooting guide](troubleshoot-sdk-availability.md). |
+| <input type="checkbox"/> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is high. |
+| <input type="checkbox"/> | Hosting | Use [Windows 64-bit host](performance-tips.md#hosting) processing for best performance, whenever possible. |
+| <input type="checkbox"/> | Connectivity Modes | Use [Direct mode](sdk-connection-modes.md) for the best performance. For instructions on how to do this, see the [V3 SDK documentation](performance-tips-dotnet-sdk-v3.md#networking) or the [V2 SDK documentation](performance-tips.md#networking).|
+|<input type="checkbox"/> | Networking | If using a virtual machine to run your application, enable [Accelerated Networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) on your VM to help with bottlenecks due to high traffic and reduce latency or CPU jitter. You might also want to consider using a higher end Virtual Machine where the max CPU usage is under 70%. |
|<input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, set the [`IdleConnectionTimeout`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout?view=azure-dotnet&preserve-view=true) and [`PortReuseMode`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode?view=azure-dotnet&preserve-view=true) to `PrivatePortPool`. The `IdleConnectionTimeout` property controls how long unused connections stay open before they're closed, which reduces the number of unused connections. By default, idle connections are kept open indefinitely. The value set must be greater than or equal to 10 minutes. We recommend values between 20 minutes and 24 hours. The `PortReuseMode` property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints. A client configuration sketch that combines several of these options follows this checklist. |
+|<input type="checkbox"/> | Use Async/Await | Avoid blocking calls: `Task.Result`, `Task.Wait`, and `Task.GetAwaiter().GetResult()`. The entire call stack is asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times. |
|<input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use both `RequestTimeout` and `CancellationToken` parameters. For more details on timeouts with Azure Cosmos DB, see the [request timeout troubleshooting guide](troubleshoot-dotnet-sdk-request-timeout.md). |
+|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK won't retry on writes for transient failures as writes aren't idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](conceptual-resilient-sdk-applications.md#should-my-application-retry-on-errors) |
+|<input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `ReadDatabaseAsync` or `ReadDocumentCollectionAsync` and `CreateDatabaseQuery` or `CreateDocumentCollectionQuery` will result in metadata calls to the service, which consume from the system-reserved RU limit. `CreateIfNotExist` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. |
+|<input type="checkbox"/> | Bulk Support | In scenarios where you may not need to optimize for latency, we recommend enabling [Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) for dumping large volumes of data. |
+| <input type="checkbox"/> | Parallel Queries | The Azure Cosmos DB SDK supports [running queries in parallel](performance-tips-query-sdk.md?pivots=programming-language-csharp) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within the `QueryRequestsOptions` to the number of partitions you have. If you aren't aware of the number of partitions, start by using `int.MaxValue`, which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
+| <input type="checkbox"/> | Performance Testing Backoffs | When performing testing on your application, you should implement backoffs at [`RetryAfter`](performance-tips-dotnet-sdk-v3.md#sdk-usage) intervals. Respecting the backoff helps ensure that you'll spend a minimal amount of time waiting between retries. |
+| <input type="checkbox"/> | Indexing | The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths (IndexingPolicy.IncludedPaths and IndexingPolicy.ExcludedPaths). Ensure that you exclude unused paths from indexing for faster writes. For a sample on how to create indexes using the SDK [visit](performance-tips-dotnet-sdk-v3.md#indexing-policy) |
+| <input type="checkbox"/> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. |
+| <input type="checkbox"/> | Increase the number of threads/tasks | Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of concurrency of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the [.NET Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB. |
+| <input type="checkbox"/> | Enabling Query Metrics | For more logging of your backend query executions, you can enable SQL Query Metrics by using the .NET SDK. For instructions on how to collect SQL Query Metrics, see [how to collect query metrics](query-metrics-performance.md). |
+| <input type="checkbox"/> | SDK Logging | Log [SDK diagnostics](#capture-diagnostics) for outstanding scenarios, such as exceptions or when requests go beyond an expected latency. |
+| <input type="checkbox"/> | DefaultTraceListener | The DefaultTraceListener causes performance issues in production environments, leading to high CPU and I/O bottlenecks. Make sure you're using the latest SDK versions, or [remove the DefaultTraceListener from your application](performance-tips-dotnet-sdk-v3.md#logging-and-tracing). |
+
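+The following snippet is a minimal sketch of the parallel query options described in the checklist, not a definitive implementation. It assumes an existing `Container` instance and a hypothetical `MyItem` type with a `status` property; tune `MaxConcurrency` and `MaxBufferedItemCount` for your own workload.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+
+// Hypothetical item type used only for illustration.
+public class MyItem
+{
+    public string id { get; set; }
+    public string status { get; set; }
+}
+
+public static class ParallelQuerySample
+{
+    // Runs a cross-partition query with tuned parallelism options.
+    public static async Task QueryActiveItemsAsync(Container container)
+    {
+        QueryRequestOptions queryOptions = new QueryRequestOptions
+        {
+            // Start with int.MaxValue when the partition count is unknown, then tune down.
+            MaxConcurrency = int.MaxValue,
+            // Limit pre-fetching to roughly the number of results you expect.
+            MaxBufferedItemCount = 100
+        };
+
+        FeedIterator<MyItem> iterator = container.GetItemQueryIterator<MyItem>(
+            "SELECT * FROM c WHERE c.status = 'active'",
+            requestOptions: queryOptions);
+
+        while (iterator.HasMoreResults)
+        {
+            FeedResponse<MyItem> page = await iterator.ReadNextAsync();
+            foreach (MyItem item in page)
+            {
+                Console.WriteLine($"Read item {item.id}");
+            }
+        }
+    }
+}
+```
+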
+## Capture diagnostics
++
+## Best practices when using Gateway mode
+
+Increase `System.Net MaxConnections` per host when you use Gateway mode. Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for `ServicePointManager.DefaultConnectionLimit` is 50. To change the value, you can set `Documents.Client.ConnectionPolicy.MaxConnectionLimit` to a higher value.
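+
+As a rough sketch of one way to do this with the .NET SDK v2 `DocumentClient`, the following snippet raises the per-endpoint connection limit for Gateway mode. The endpoint and key values are placeholders.
+
+```csharp
+using System;
+using Microsoft.Azure.Documents.Client;
+
+// Raise the per-endpoint connection limit above the default of 50 (SDK 1.8.0 and later).
+ConnectionPolicy gatewayPolicy = new ConnectionPolicy
+{
+    ConnectionMode = ConnectionMode.Gateway,
+    ConnectionProtocol = Protocol.Https,
+    MaxConnectionLimit = 200
+};
+
+DocumentClient client = new DocumentClient(
+    new Uri("<your-account-endpoint>"), // placeholder endpoint
+    "<your-account-key>",               // placeholder key
+    gatewayPolicy);
+```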
+
+## Best practices for write-heavy workloads
+
+For workloads that have heavy create payloads, set the `EnableContentResponseOnWrite` request option to `false`. The service will no longer return the created or updated resource to the SDK. Normally, because the application has the object that's being created, it doesn't need the service to return it. The header values are still accessible, like a request charge. Disabling the content response can help improve performance, because the SDK no longer needs to allocate memory or serialize the body of the response. It also reduces the network bandwidth usage to further help performance.
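+
+The following snippet is a minimal sketch of disabling the content response on writes with the .NET SDK v3. The `Container` instance, the item, and the partition key value are assumed to exist in your application.
+
+```csharp
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+
+public static class WriteHeavySample
+{
+    // Creates an item without asking the service to echo the resource back.
+    public static async Task<double> CreateWithoutContentResponseAsync<T>(
+        Container container, T newItem, string partitionKeyValue)
+    {
+        ItemRequestOptions writeOptions = new ItemRequestOptions
+        {
+            // Skip the response body on writes; headers such as the request charge remain available.
+            EnableContentResponseOnWrite = false
+        };
+
+        ItemResponse<T> response = await container.CreateItemAsync(
+            newItem,
+            new PartitionKey(partitionKeyValue),
+            writeOptions);
+
+        // response.Resource is null here because the content response was disabled.
+        return response.RequestCharge;
+    }
+}
+```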
+
+## Next steps
+
+For a sample application that's used to evaluate Azure Cosmos DB for high-performance scenarios on a few client machines, see [Performance and scale testing with Azure Cosmos DB](performance-testing.md).
+
+To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Best Practice Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/best-practice-java.md
+
+ Title: Azure Cosmos DB best practices for Java SDK v4
+description: Learn the best practices for using the Azure Cosmos DB Java SDK v4
++++ Last updated : 04/01/2022++++
+# Best practices for Azure Cosmos DB Java SDK
+
+This article walks through the best practices for using the Azure Cosmos DB Java SDK. Following these practices will help improve your latency and availability, and boost overall performance.
+
+## Checklist
+|Checked | Topic |Details/Links |
+||||
+|<input type="checkbox"/> | SDK Version | Always use the [latest version](sdk-java-v4.md) of the Azure Cosmos DB SDK available for optimal performance. |
+| <input type="checkbox"/> | Singleton Client | Use a [single instance](/jav#sdk-usage). |
+| <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account whenever possible to reduce latency. Enable 2-4 regions and replicate your account in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [automatic failover](../how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for the entire duration of the write region outage, because manual failover won't succeed due to lack of region connectivity. To learn how to add multiple regions by using the Java SDK, see [this tutorial](tutorial-global-distribution.md). |
+| <input type="checkbox"/> | Availability and Failovers | Set the [preferredRegions](/jav). |
+| <input type="checkbox"/> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is very high. |
+| <input type="checkbox"/> | Hosting | For most common cases of production workloads, we highly recommend using VMs with at least 4 cores and 8 GB of memory whenever possible. |
+| <input type="checkbox"/> | Connectivity Modes | Use [Direct mode](sdk-connection-modes.md) for the best performance. For instructions on how to do this, see the [V4 SDK documentation](performance-tips-java-sdk-v4.md#networking).|
+| <input type="checkbox"/> | Networking | If using a virtual machine to run your application, enable [Accelerated Networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) on your VM to help with bottlenecks due to high traffic and reduce latency or CPU jitter. You might also want to consider using a higher end Virtual Machine where the max CPU usage is under 70%. |
+| <input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, we recommend setting the [`idleEndpointTimeout`](/java/api/com.azure.cosmos.directconnectionconfig.setidleendpointtimeout?view=azure-java-stable#com-azure-cosmos-directconnectionconfig-setidleendpointtimeout(java-time-duration)&preserve-view=true) to a higher value. The `idleEndpointTimeout` property in `DirectConnectionConfig` controls how long unused connections are kept open before they're closed, which reduces the number of unused connections. By default, idle connections to an endpoint are kept open for 1 hour. If there are no requests to a specific endpoint for the idle endpoint timeout duration, the direct client closes all connections to that endpoint to save resources and I/O cost. |
+| <input type="checkbox"/> | Use Appropriate Scheduler (Avoid stealing Event loop IO Netty threads) | Avoid blocking calls such as `.block()`. Keep the entire call stack asynchronous to benefit from [async API](https://projectreactor.io/docs/core/release/reference/#intro-reactive) patterns, and use appropriate [threading and schedulers](https://projectreactor.io/docs/core/release/reference/#schedulers). |
+| <input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use [Project Reactor's timeout API](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#timeout-java.time.Duration-). For more details on timeouts with Azure Cosmos DB, see [this troubleshooting guide](troubleshoot-java-sdk-request-timeout.md). |
+| <input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK won't retry on writes for transient failures because writes aren't idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on, see [this guidance](conceptual-resilient-sdk-applications.md#should-my-application-retry-on-errors). |
+| <input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `CosmosAsyncDatabase#read()` or `CosmosAsyncContainer#read()` will result in metadata calls to the service, which consume from the system-reserved RU limit. `createDatabaseIfNotExists()` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. |
+| <input type="checkbox"/> | Parallel Queries | The Azure Cosmos DB SDK supports [running queries in parallel](performance-tips-query-sdk.md?pivots=programming-language-java) for better latency and throughput on your queries. We recommend setting the `maxDegreeOfParallelism` property within `CosmosQueryRequestOptions` to the number of partitions you have. If you aren't aware of the number of partitions, set the value to `-1`, which will give you the best latency. Also, set `maxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
+| <input type="checkbox"/> | Performance Testing Backoffs | When performing testing on your application, you should implement backoffs at [`RetryAfter`](performance-tips-java-sdk-v4.md#sdk-usage) intervals. Respecting the backoff helps ensure that you'll spend a minimal amount of time waiting between retries. |
+| <input type="checkbox"/> | Indexing | The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using the indexing paths `IndexingPolicy#getIncludedPaths()` and `IndexingPolicy#getExcludedPaths()`. Ensure that you exclude unused paths from indexing for faster writes. For a sample that shows how to create indexes by using the SDK, see [Indexing policy](performance-tips-java-sdk-v4.md#indexing-policy). |
+| <input type="checkbox"/> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. |
+| <input type="checkbox"/> | Enabling Query Metrics | For additional logging of your backend query executions, follow the instructions on how to capture SQL Query Metrics by using the [Java SDK](troubleshoot-java-sdk-v4.md#query-operations). |
+| <input type="checkbox"/> | SDK Logging | Use SDK logging to capture additional diagnostics information and troubleshoot latency issues. Log the [CosmosDiagnostics](/jav#capture-the-diagnostics). |
+
+## Best practices when using Gateway mode
+Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to tweak [maxConnectionPoolSize](/java/api/com.azure.cosmos.gatewayconnectionconfig.setmaxconnectionpoolsize?view=azure-java-stable#com-azure-cosmos-gatewayconnectionconfig-setmaxconnectionpoolsize(int)&preserve-view=true) to a different value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In Java v4 SDK, the default value for `GatewayConnectionConfig#maxConnectionPoolSize` is 1000. To change the value, you can set `GatewayConnectionConfig#maxConnectionPoolSize` to a different value.
+
+## Best practices for write-heavy workloads
+For workloads that have heavy create payloads, set the `CosmosClientBuilder#contentResponseOnWriteEnabled()` request option to `false`. The service will no longer return the created or updated resource to the SDK. Normally, because the application has the object that's being created, it doesn't need the service to return it. The header values are still accessible, like a request charge. Disabling the content response can help improve performance, because the SDK no longer needs to allocate memory or serialize the body of the response. It also reduces the network bandwidth usage to further help performance.
+
+## Next steps
+To learn more about performance tips for Java SDK, see [Performance tips for Azure Cosmos DB Java SDK v4](performance-tips-java-sdk-v4.md).
+
+To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Bicep Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/bicep-samples.md
+
+ Title: Bicep samples for Azure Cosmos DB for NoSQL
+description: Use Bicep to create and configure Azure Cosmos DB.
+++++ Last updated : 09/13/2021++++
+# Bicep for Azure Cosmos DB
++
+This article shows Bicep samples for API for NoSQL accounts. You can also find Bicep samples for [Cassandra](../cassandr) APIs.
+
+## API for NoSQL
+
+|**Sample**|**Description**|
+|||
+|[Create an Azure Cosmos DB account, database, container with autoscale throughput](manage-with-bicep.md#create-autoscale) | Create an API for NoSQL account in two regions, a database and container with autoscale throughput. |
+|[Create an Azure Cosmos DB account, database, container with analytical store](manage-with-bicep.md#create-analytical-store) | Create an API for NoSQL account in one region with a container configured with analytical TTL enabled and the option to use manual or autoscale throughput. |
+|[Create an Azure Cosmos DB account, database, container with standard (manual) throughput](manage-with-bicep.md#create-manual) | Create an API for NoSQL account in two regions, a database and container with standard throughput. |
+|[Create an Azure Cosmos DB account, database and container with a stored procedure, trigger and UDF](manage-with-bicep.md#create-sproc) | Create an API for NoSQL account in two regions with a stored procedure, trigger and UDF for a container. |
+|[Create an Azure Cosmos DB account with Azure AD identity, Role Definitions and Role Assignment](manage-with-bicep.md#create-rbac) | Create an API for NoSQL account with Azure AD identity, role definitions, and a role assignment on a service principal. |
+|[Create a free-tier Azure Cosmos DB account](manage-with-bicep.md#free-tier) | Create an Azure Cosmos DB for NoSQL account on free-tier. |
+
+## Next steps
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Bulk Executor Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/bulk-executor-dotnet.md
+
+ Title: Use bulk executor .NET library in Azure Cosmos DB for bulk import and update operations
+description: Bulk import and update the Azure Cosmos DB documents using the bulk executor .NET library.
++++
+ms.devlang: csharp
+ Last updated : 05/02/2020++++
+# Use the bulk executor .NET library to perform bulk operations in Azure Cosmos DB
+
+> [!NOTE]
+> The bulk executor library described in this article is maintained for applications that use the .NET SDK 2.x version. For new applications, you can use the **bulk support** that's directly available with the [.NET SDK version 3.x](tutorial-dotnet-bulk-import.md); it doesn't require any external library.
+>
+> If you're currently using the bulk executor library and planning to migrate to bulk support on the newer SDK, use the steps in the [Migration guide](how-to-migrate-from-bulk-executor-library.md) to migrate your application.
+
+This tutorial provides instructions on using the bulk executor .NET library to import and update documents in an Azure Cosmos DB container. To learn about the bulk executor library and how it helps you use massive throughput and storage, see the [bulk executor library overview](../bulk-executor-overview.md) article. In this tutorial, you'll see a sample .NET application that bulk imports randomly generated documents into an Azure Cosmos DB container. After importing the data, the sample shows how you can bulk update the imported data by specifying patches as operations to perform on specific document fields.
+
+Currently, the bulk executor library is supported only by Azure Cosmos DB for NoSQL and API for Gremlin accounts. This article describes how to use the bulk executor .NET library with API for NoSQL accounts. To learn about using the bulk executor .NET library with API for Gremlin accounts, see [perform bulk operations in the Azure Cosmos DB for Gremlin](../gremlin/bulk-executor-dotnet.md).
+
+## Prerequisites
+
+* Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+
+* You can [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments. Or, you can use the [Azure Cosmos DB Emulator](../local-emulator.md) with the `https://localhost:8081` endpoint. The Primary Key is provided in [Authenticating requests](../local-emulator.md#authenticate-requests).
+
+* Create an Azure Cosmos DB for NoSQL account by using the steps described in the [create a database account](quickstart-dotnet.md#create-account) section of the .NET quickstart article.
+
+## Clone the sample application
+
+Now let's switch to working with code by downloading a sample .NET application from GitHub. This application performs bulk operations on the data stored in your Azure Cosmos DB account. To clone the application, open a command prompt, navigate to the directory where you want to copy it and run the following command:
+
+```bash
+git clone https://github.com/Azure/azure-cosmosdb-bulkexecutor-dotnet-getting-started.git
+```
+
+The cloned repository contains two samples, "BulkImportSample" and "BulkUpdateSample". You can open either of the sample applications, update the connection strings in the App.config file with your Azure Cosmos DB account's connection strings, build the solution, and run it.
+
+The "BulkImportSample" application generates random documents and bulk imports them to your Azure Cosmos DB account. The "BulkUpdateSample" application bulk updates the imported documents by specifying patches as operations to perform on specific document fields. In the next sections, you'll review the code in each of these sample apps.
+
+## <a id="bulk-import-data-to-an-azure-cosmos-account"></a>Bulk import data to an Azure Cosmos DB account
+
+1. Navigate to the "BulkImportSample" folder and open the "BulkImportSample.sln" file.
+
+2. The Azure Cosmos DB connection strings are retrieved from the App.config file, as shown in the following code:
+
+ ```csharp
+ private static readonly string EndpointUrl = ConfigurationManager.AppSettings["EndPointUrl"];
+ private static readonly string AuthorizationKey = ConfigurationManager.AppSettings["AuthorizationKey"];
+ private static readonly string DatabaseName = ConfigurationManager.AppSettings["DatabaseName"];
+ private static readonly string CollectionName = ConfigurationManager.AppSettings["CollectionName"];
+ private static readonly int CollectionThroughput = int.Parse(ConfigurationManager.AppSettings["CollectionThroughput"]);
+ ```
+
+ The bulk importer creates a new database and a container with the database name, container name, and the throughput values specified in the App.config file.
+
+3. Next, the DocumentClient object is initialized with Direct TCP connection mode:
+
+ ```csharp
+ ConnectionPolicy connectionPolicy = new ConnectionPolicy
+ {
+ ConnectionMode = ConnectionMode.Direct,
+ ConnectionProtocol = Protocol.Tcp
+ };
+ DocumentClient client = new DocumentClient(new Uri(endpointUrl), authorizationKey, connectionPolicy);
+ ```
+
+4. The BulkExecutor object is initialized with high retry values for wait time and throttled requests. The retry options are then set to 0 to pass congestion control to BulkExecutor for its lifetime.
+
+ ```csharp
+ // Set retry options high during initialization (default values).
+ client.ConnectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 30;
+ client.ConnectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 9;
+
+ IBulkExecutor bulkExecutor = new BulkExecutor(client, dataCollection);
+ await bulkExecutor.InitializeAsync();
+
+ // Set retries to 0 to pass complete control to bulk executor.
+ client.ConnectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 0;
+ client.ConnectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 0;
+ ```
+
+5. The application invokes the BulkImportAsync API. The .NET library provides two overloads of the bulk import API - one that accepts a list of serialized JSON documents and the other that accepts a list of deserialized POCO documents. To learn more about the definitions of each of these overloaded methods, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkexecutor.bulkimportasync).
+
+ ```csharp
+ BulkImportResponse bulkImportResponse = await bulkExecutor.BulkImportAsync(
+ documents: documentsToImportInBatch,
+ enableUpsert: true,
+ disableAutomaticIdGeneration: true,
+ maxConcurrencyPerPartitionKeyRange: null,
+ maxInMemorySortingBatchSize: null,
+ cancellationToken: token);
+ ```
+ **BulkImportAsync method accepts the following parameters:**
+
+ |**Parameter** |**Description** |
+ |||
+ |enableUpsert | A flag to enable upsert operations on the documents. If a document with the given ID already exists, it's updated. By default, it's set to false. |
+ |disableAutomaticIdGeneration | A flag to disable automatic generation of ID. By default, it's set to true. |
+ |maxConcurrencyPerPartitionKeyRange | The maximum degree of concurrency per partition key range. Setting this parameter to null causes the library to use a default value of 20. |
+ |maxInMemorySortingBatchSize | The maximum number of documents that are pulled from the document enumerator, which is passed to the API call in each stage. For the in-memory sorting phase that happens before bulk importing, setting this parameter to null causes the library to use the default minimum value of (documents.count, 1000000). |
+ |cancellationToken | The cancellation token to gracefully exit the bulk import operation. |
+
+ **Bulk import response object definition**
+ The result of the bulk import API call contains the following attributes:
+
+ |**Parameter** |**Description** |
+ |||
+ |NumberOfDocumentsImported (long) | The total number of documents that were successfully imported out of the total documents supplied to the bulk import API call. |
+ |TotalRequestUnitsConsumed (double) | The total request units (RU) consumed by the bulk import API call. |
+ |TotalTimeTaken (TimeSpan) | The total time taken by the bulk import API call to complete the execution. |
+ |BadInputDocuments (List\<object>) | The list of bad-format documents that weren't successfully imported in the bulk import API call. Fix the documents returned and retry import. Bad-formatted documents include documents whose ID value isn't a string (null or any other datatype is considered invalid). |
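+
+ As a quick illustration of the attributes in the preceding table, the following sketch reads them from the `bulkImportResponse` object returned by the call in step 5; the console output format is only an example.
+
+ ```csharp
+ // Minimal sketch: inspect the bulk import response returned above.
+ Console.WriteLine($"Imported {bulkImportResponse.NumberOfDocumentsImported} documents " +
+     $"in {bulkImportResponse.TotalTimeTaken.TotalSeconds} seconds, " +
+     $"consuming {bulkImportResponse.TotalRequestUnitsConsumed} RUs.");
+
+ // Fix and retry any documents that were rejected because of bad formatting.
+ foreach (object badDocument in bulkImportResponse.BadInputDocuments)
+ {
+     Console.WriteLine($"Bad input document: {badDocument}");
+ }
+ ```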
+
+## Bulk update data in your Azure Cosmos DB account
+
+You can update existing documents by using the BulkUpdateAsync API. In this example, you'll set the `Name` field to a new value and remove the `Description` field from the existing documents. For the full set of supported update operations, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkupdate).
+
+1. Navigate to the "BulkUpdateSample" folder and open the "BulkUpdateSample.sln" file.
+
+2. Define the update items along with the corresponding field update operations. In this example, you'll use `SetUpdateOperation` to update the `Name` field and `UnsetUpdateOperation` to remove the `Description` field from all the documents. You can also perform other operations like increment a document field by a specific value, push specific values into an array field, or remove a specific value from an array field. To learn about different methods provided by the bulk update API, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkupdate).
+
+ ```csharp
+ SetUpdateOperation<string> nameUpdate = new SetUpdateOperation<string>("Name", "UpdatedDoc");
+ UnsetUpdateOperation descriptionUpdate = new UnsetUpdateOperation("description");
+
+ List<UpdateOperation> updateOperations = new List<UpdateOperation>();
+ updateOperations.Add(nameUpdate);
+ updateOperations.Add(descriptionUpdate);
+
+ List<UpdateItem> updateItems = new List<UpdateItem>();
+ for (int i = 0; i < 10; i++)
+ {
+ updateItems.Add(new UpdateItem(i.ToString(), i.ToString(), updateOperations));
+ }
+ ```
+
+3. The application invokes the BulkUpdateAsync API. To learn about the definition of the BulkUpdateAsync method, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.ibulkexecutor.bulkupdateasync).
+
+ ```csharp
+ BulkUpdateResponse bulkUpdateResponse = await bulkExecutor.BulkUpdateAsync(
+ updateItems: updateItems,
+ maxConcurrencyPerPartitionKeyRange: null,
+ maxInMemorySortingBatchSize: null,
+ cancellationToken: token);
+ ```
+ **BulkUpdateAsync method accepts the following parameters:**
+
+ |**Parameter** |**Description** |
+ |||
+ |maxConcurrencyPerPartitionKeyRange | The maximum degree of concurrency per partition key range. Setting this parameter to null causes the library to use the default value (20). |
+ |maxInMemorySortingBatchSize | The maximum number of update items pulled from the update items enumerator passed to the API call in each stage. For the in-memory sorting phase that happens before bulk updating, setting this parameter to null causes the library to use the default minimum value of (updateItems.count, 1000000). |
+ | cancellationToken|The cancellation token to gracefully exit the bulk update operation. |
+
+ **Bulk update response object definition**
+ The result of the bulk update API call contains the following attributes:
+
+ |**Parameter** |**Description** |
+ |||
+ |NumberOfDocumentsUpdated (long) | The number of documents that were successfully updated out of the total documents supplied to the bulk update API call. |
+ |TotalRequestUnitsConsumed (double) | The total request units (RUs) consumed by the bulk update API call. |
+ |TotalTimeTaken (TimeSpan) | The total time taken by the bulk update API call to complete the execution. |
+
+## Performance tips
+
+Consider the following points for better performance when using the bulk executor library:
+
+* For best performance, run your application from an Azure virtual machine that is in the same region as your Azure Cosmos DB account's write region.
+
+* It's recommended that you instantiate a single `BulkExecutor` object for the whole application within a single virtual machine that corresponds to a specific Azure Cosmos DB container.
+
+* A single bulk operation API execution consumes a large chunk of the client machine's CPU and network I/O, because it spawns multiple tasks internally. Avoid spawning multiple concurrent tasks within your application process that execute bulk operation API calls. If a single bulk operation API call running on a single virtual machine is unable to consume the entire container's throughput (if your container's throughput > 1 million RU/s), it's preferable to create separate virtual machines to concurrently execute the bulk operation API calls.
+
+* Ensure the `InitializeAsync()` method is invoked after instantiating a BulkExecutor object to fetch the target Azure Cosmos DB container's partition map.
+
+* In your application's App.config, ensure that **gcServer** is enabled for better performance:
+ ```xml
+ <runtime>
+ <gcServer enabled="true" />
+ </runtime>
+ ```
+* The library emits traces that can be collected either into a log file or on the console. To enable both, add the following code to your application's App.Config file.
+
+ ```xml
+ <system.diagnostics>
+ <trace autoflush="false" indentsize="4">
+ <listeners>
+ <add name="logListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="application.log" />
+ <add name="consoleListener" type="System.Diagnostics.ConsoleTraceListener" />
+ </listeners>
+ </trace>
+ </system.diagnostics>
+ ```
+
+## Next steps
+
+* To learn about the NuGet package details and the release notes, see the [bulk executor SDK details](sdk-dotnet-bulk-executor-v2.md).
cosmos-db Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/bulk-executor-java.md
+
+ Title: Use bulk executor Java library in Azure Cosmos DB to perform bulk import and update operations
+description: Bulk import and update Azure Cosmos DB documents using bulk executor Java library
++++
+ms.devlang: java
+ Last updated : 03/07/2022++++
+# Perform bulk operations on Azure Cosmos DB data
+
+This tutorial provides instructions on performing bulk operations in the [Azure Cosmos DB Java V4 SDK](sdk-java-v4.md). This version of the SDK comes with the bulk executor library built-in. If you're using an older version of Java SDK, it's recommended to [migrate to the latest version](migrate-java-v4-sdk.md). Azure Cosmos DB Java V4 SDK is the current recommended solution for Java bulk support.
+
+Currently, the bulk executor library is supported only by Azure Cosmos DB for NoSQL and API for Gremlin accounts. To learn about using bulk executor .NET library with API for Gremlin, see [perform bulk operations in Azure Cosmos DB for Gremlin](../gremlin/bulk-executor-dotnet.md).
++
+## Prerequisites
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+
+* You can [try Azure Cosmos DB for free](../try-free.md) without an Azure subscription, free of charge and commitments. Or, you can use the [Azure Cosmos DB Emulator](../local-emulator.md) with the `https://localhost:8081` endpoint. The Primary Key is provided in [Authenticating requests](../local-emulator.md#authenticate-requests).
+
+* [Java Development Kit (JDK) 1.8+](/java/azure/jdk/)
+ - On Ubuntu, run `apt-get install default-jdk` to install the JDK.
+
+ - Be sure to set the JAVA_HOME environment variable to point to the folder where the JDK is installed.
+
+* [Download](https://maven.apache.org/download.cgi) and [install](https://maven.apache.org/install.html) a [Maven](https://maven.apache.org/) binary archive
+
+ - On Ubuntu, you can run `apt-get install maven` to install Maven.
+
+* Create an Azure Cosmos DB for NoSQL account by using the steps described in the [create database account](quickstart-java.md#create-a-database-account) section of the Java quickstart article.
+
+## Clone the sample application
+
+Now let's switch to working with code by downloading a generic samples repository for Java V4 SDK for Azure Cosmos DB from GitHub. These sample applications perform CRUD operations and other common operations on Azure Cosmos DB. To clone the repository, open a command prompt, navigate to the directory where you want to copy the application and run the following command:
+
+```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples
+```
+
+The cloned repository contains a sample `SampleBulkQuickStartAsync.java` in the `/azure-cosmos-java-sql-api-samples/tree/main/src/main/java/com/azure/cosmos/examples/bulk/async` folder. The application generates documents and executes operations to bulk create, upsert, replace and delete items in Azure Cosmos DB. In the next sections, we will review the code in the sample app.
+
+## Bulk execution in Azure Cosmos DB
+
+1. The Azure Cosmos DB connection strings are read as arguments and assigned to variables defined in the `examples/common/AccountSettings.java` file. These environment variables must be set:
+
+```
+ACCOUNT_HOST=your account hostname;ACCOUNT_KEY=your account primary key
+```
+
+To run the bulk sample, specify its Main Class:
+
+```
+com.azure.cosmos.examples.bulk.async.SampleBulkQuickStartAsync
+```
+
+2. The `CosmosAsyncClient` object is initialized by using the following statements:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=CreateAsyncClient)]
++
+3. The sample creates an async database and container. It then creates multiple documents on which bulk operations will be executed. It adds these documents to a `Flux<Family>` reactive stream object:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=AddDocsToStream)]
++
+4. The sample contains methods for bulk create, upsert, replace, and delete. In each method, we map the family documents in the BulkWriter `Flux<Family>` stream to multiple method calls in `CosmosBulkOperations`. These operations are added to another reactive stream object, `Flux<CosmosItemOperation>`. The stream is then passed to the `executeBulkOperations` method of the async `container` we created at the beginning, and the operations are executed in bulk. See the bulk create method below as an example:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkCreateItems)]
++
+5. There's also a class `BulkWriter.java` in the same directory as the sample application. This class demonstrates how to handle rate limiting (429) and timeout (408) errors that may occur during bulk execution, and how to retry those operations effectively. It's implemented in the methods below, which also show how to implement local and global throughput control.
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkWriterAbstraction)]
++
+6. Additionally, there are bulk create methods in the sample, which illustrate how to add response processing, and set execution options:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkCreateItemsWithResponseProcessingAndExecutionOptions)]
++
+ <!-- The importAll method accepts the following parameters:
+
+ |**Parameter** |**Description** |
+ |||
+ |isUpsert | A flag to enable upsert of the documents. If a document with given ID already exists, it's updated. |
+ |disableAutomaticIdGeneration | A flag to disable automatic generation of ID. By default, it is set to true. |
+ |maxConcurrencyPerPartitionRange | The maximum degree of concurrency per partition key range. The default value is 20. |
+
+ **Bulk import response object definition**
+ The result of the bulk import API call contains the following get methods:
+
+ |**Parameter** |**Description** |
+ |||
+ |int getNumberOfDocumentsImported() | The total number of documents that were successfully imported out of the documents supplied to the bulk import API call. |
+ |double getTotalRequestUnitsConsumed() | The total request units (RU) consumed by the bulk import API call. |
+ |Duration getTotalTimeTaken() | The total time taken by the bulk import API call to complete execution. |
+ |List\<Exception> getErrors() | Gets the list of errors if some documents out of the batch supplied to the bulk import API call failed to get inserted. |
+ |List\<Object> getBadInputDocuments() | The list of bad-format documents that were not successfully imported in the bulk import API call. User should fix the documents returned and retry import. Bad-formatted documents include documents whose ID value is not a string (null or any other datatype is considered invalid). |
+
+<!-- 5. After you have the bulk import application ready, build the command-line tool from source by using the 'mvn clean package' command. This command generates a jar file in the target folder:
+
+ ```bash
+ mvn clean package
+ ```
+
+6. After the target dependencies are generated, you can invoke the bulk importer application by using the following command:
+
+ ```bash
+ java -Xmx12G -jar bulkexecutor-sample-1.0-SNAPSHOT-jar-with-dependencies.jar -serviceEndpoint *<Fill in your Azure Cosmos DB's endpoint>* -masterKey *<Fill in your Azure Cosmos DB's primary key>* -databaseId bulkImportDb -collectionId bulkImportColl -operation import -shouldCreateCollection -collectionThroughput 1000000 -partitionKey /profileid -maxConnectionPoolSize 6000 -numberOfDocumentsForEachCheckpoint 1000000 -numberOfCheckpoints 10
+ ```
+
+ The bulk importer creates a new database and a collection with the database name, collection name, and throughput values specified in the App.config file.
+
+## Bulk update data in Azure Cosmos DB
+
+You can update existing documents by using the BulkUpdateAsync API. In this example, you will set the Name field to a new value and remove the Description field from the existing documents. For the full set of supported field update operations, see [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor).
+
+1. Defines the update items along with corresponding field update operations. In this example, you will use SetUpdateOperation to update the Name field and UnsetUpdateOperation to remove the Description field from all the documents. You can also perform other operations like increment a document field by a specific value, push specific values into an array field, or remove a specific value from an array field. To learn about different methods provided by the bulk update API, see the [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor).
+
+ ```java
+ SetUpdateOperation<String> nameUpdate = new SetUpdateOperation<>("Name","UpdatedDocValue");
+ UnsetUpdateOperation descriptionUpdate = new UnsetUpdateOperation("description");
+
+ ArrayList<UpdateOperationBase> updateOperations = new ArrayList<>();
+ updateOperations.add(nameUpdate);
+ updateOperations.add(descriptionUpdate);
+
+ List<UpdateItem> updateItems = new ArrayList<>(cfg.getNumberOfDocumentsForEachCheckpoint());
+ IntStream.range(0, cfg.getNumberOfDocumentsForEachCheckpoint()).mapToObj(j -> {
+ return new UpdateItem(Long.toString(prefix + j), Long.toString(prefix + j), updateOperations);
+ }).collect(Collectors.toCollection(() -> updateItems));
+ ```
+
+2. Call the updateAll API that generates random documents to be then bulk imported into an Azure Cosmos DB container. You can configure the command-line configurations to be passed in CmdLineConfiguration.java file.
+
+ ```java
+ BulkUpdateResponse bulkUpdateResponse = bulkExecutor.updateAll(updateItems, null)
+ ```
+
+ The bulk update API accepts a collection of items to be updated. Each update item specifies the list of field update operations to be performed on a document identified by an ID and a partition key value. for more information, see the [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor):
+
+ ```java
+ public BulkUpdateResponse updateAll(
+ Collection<UpdateItem> updateItems,
+ Integer maxConcurrencyPerPartitionRange) throws DocumentClientException;
+ ```
+
+ The updateAll method accepts the following parameters:
+
+ |**Parameter** |**Description** |
+ |||
+ |maxConcurrencyPerPartitionRange | The maximum degree of concurrency per partition key range. The default value is 20. |
+
+ **Bulk import response object definition**
+ The result of the bulk import API call contains the following get methods:
+
+ |**Parameter** |**Description** |
+ |||
+ |int getNumberOfDocumentsUpdated() | The total number of documents that were successfully updated out of the documents supplied to the bulk update API call. |
+ |double getTotalRequestUnitsConsumed() | The total request units (RU) consumed by the bulk update API call. |
+ |Duration getTotalTimeTaken() | The total time taken by the bulk update API call to complete execution. |
+ |List\<Exception> getErrors() | Gets the list of operational or networking issues related to the update operation. |
+ |List\<BulkUpdateFailure> getFailedUpdates() | Gets the list of updates, which could not be completed along with the specific exceptions leading to the failures.|
+
+3. After you have the bulk update application ready, build the command-line tool from source by using the 'mvn clean package' command. This command generates a jar file in the target folder:
+
+ ```bash
+ mvn clean package
+ ```
+
+4. After the target dependencies are generated, you can invoke the bulk update application by using the following command:
+
+ ```bash
+ java -Xmx12G -jar bulkexecutor-sample-1.0-SNAPSHOT-jar-with-dependencies.jar -serviceEndpoint **<Fill in your Azure Cosmos DB's endpoint>* -masterKey **<Fill in your Azure Cosmos DB's primary key>* -databaseId bulkUpdateDb -collectionId bulkUpdateColl -operation update -collectionThroughput 1000000 -partitionKey /profileid -maxConnectionPoolSize 6000 -numberOfDocumentsForEachCheckpoint 1000000 -numberOfCheckpoints 10
+ ``` -->
+
+## Performance tips
+
+Consider the following points for better performance when using bulk executor library:
+
+* For best performance, run your application from an Azure VM in the same region as your Azure Cosmos DB account write region.
+* For achieving higher throughput:
+
+ * Set the JVM's heap size to a large enough value to avoid any memory issues when handling a large number of documents. Suggested heap size: max(3 GB, 3 * sizeof(all documents passed to the bulk import API in one batch)).
+ * There's a preprocessing cost, so you'll get higher throughput when you perform bulk operations with a large number of documents. If you want to import 10,000,000 documents, running bulk import 10 times on batches of 1,000,000 documents each is preferable to running bulk import 100 times on batches of 100,000 documents each.
+
+* It is recommended to instantiate a single CosmosAsyncClient object for the entire application within a single virtual machine that corresponds to a specific Azure Cosmos DB container.
+
+* A single bulk operation API execution consumes a large chunk of the client machine's CPU and network I/O because it spawns multiple tasks internally. Avoid spawning multiple concurrent tasks within your application process that each execute bulk operation API calls. If a single bulk operation API call running on a single virtual machine is unable to consume your entire container's throughput (if your container's throughput > 1 million RU/s), it's preferable to create separate virtual machines to concurrently execute bulk operation API calls.
+
+
+## Next steps
+* For an overview of bulk executor functionality, see [bulk executor overview](../bulk-executor-overview.md).
cosmos-db Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/certificate-based-authentication.md
+
+ Title: Certificate-based authentication with Azure Cosmos DB and Active Directory
+description: Learn how to configure an Azure AD identity for certificate-based authentication to access keys from Azure Cosmos DB.
++++ Last updated : 06/11/2019+++++
+# Certificate-based authentication for an Azure AD identity to access keys from an Azure Cosmos DB account
+
+Certificate-based authentication enables your client application to be authenticated by using Azure Active Directory (Azure AD) with a client certificate. You can perform certificate-based authentication on a machine where you need an identity, such as an on-premises machine or a virtual machine in Azure. Your application can then read Azure Cosmos DB keys without having the keys directly in the application. This article describes how to create a sample Azure AD application, configure it for certificate-based authentication, sign in to Azure by using the new application identity, and then retrieve the keys from your Azure Cosmos DB account. This article uses Azure PowerShell to set up the identities and provides a C# sample app that authenticates and accesses keys from your Azure Cosmos DB account.
+
+## Prerequisites
+
+* Install the [latest version](/powershell/azure/install-az-ps) of Azure PowerShell.
+
+* If you don't have an [Azure subscription](../../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+
+## Register an app in Azure AD
+
+In this step, you will register a sample web application in your Azure AD account. This application is later used to read the keys from your Azure Cosmos DB account. Use the following steps to register an application:
+
+1. Sign into the [Azure portal](https://portal.azure.com/).
+
+1. Open the Azure **Active Directory** pane, go to **App registrations** pane, and select **New registration**.
+
+ :::image type="content" source="./media/certificate-based-authentication/new-app-registration.png" alt-text="New application registration in Active Directory":::
+
+1. Fill the **Register an application** form with the following details:
+
+ * **Name** – Provide a name for your application. It can be any name, such as "sampleApp".
+ * **Supported account types** – Choose **Accounts in this organizational directory only (Default Directory)** to allow resources in your current directory to access this application.
+ * **Redirect URL** – Choose an application of type **Web** and provide a URL where your application is hosted. It can be any URL. For this example, you can provide a test URL such as `https://sampleApp.com`; it's okay even if the app doesn't exist.
+
+ :::image type="content" source="./media/certificate-based-authentication/register-sample-web-app.png" alt-text="Registering a sample web application":::
+
+1. Select **Register** after you fill the form.
+
+1. After the app is registered, make a note of the **Application (client) ID** and **Object ID**; you'll use these details in the next steps.
+
+ :::image type="content" source="./media/certificate-based-authentication/get-app-object-ids.png" alt-text="Get the application and object IDs":::
+
+## Install the AzureAD module
+
+In this step, you will install the Azure AD PowerShell module. This module is required to get the ID of the application you registered in the previous step and associate a self-signed certificate to that application.
+
+1. Open Windows PowerShell ISE with administrator rights. If you haven't already done so, install the Az PowerShell module and connect to your subscription. If you have multiple subscriptions, you can set the context of the current subscription as shown in the following commands:
+
+ ```powershell
+ Install-Module -Name Az -AllowClobber
+ Connect-AzAccount
+
+ Get-AzSubscription
+ $context = Get-AzSubscription -SubscriptionId <Your_Subscription_ID>
+ Set-AzContext $context
+ ```
+
+1. Install and import the [AzureAD](/powershell/module/azuread/) module
+
+ ```powershell
+ Install-Module AzureAD
+ Import-Module AzureAD
+ # On PowerShell 7.x, use the -UseWindowsPowerShell parameter
+ # Import-Module AzureAD -UseWindowsPowerShell
+ ```
+
+## Sign into your Azure AD
+
+Sign in to the Azure AD tenant where you registered the application. Use the Connect-AzureAD command to sign in to your account, and enter your Azure account credentials in the pop-up window.
+
+```powershell
+Connect-AzureAD
+```
+
+## Create a self-signed certificate
+
+Open another instance of Windows PowerShell ISE, and run the following commands to create a self-signed certificate and read the key associated with the certificate:
+
+```powershell
+$cert = New-SelfSignedCertificate -CertStoreLocation "Cert:\CurrentUser\My" -Subject "CN=sampleAppCert" -KeySpec KeyExchange
+$keyValue = [System.Convert]::ToBase64String($cert.GetRawCertData())
+```
+
+## Create the certificate-based credential
+
+Next, run the following commands to get the object ID of your application and create the certificate-based credential. In this example, we set the certificate to expire after a year; you can set it to any required end date.
+
+```powershell
+$application = Get-AzureADApplication -ObjectId <Object_ID_of_Your_Application>
+
+New-AzureADApplicationKeyCredential -ObjectId $application.ObjectId -CustomKeyIdentifier "Key1" -Type AsymmetricX509Cert -Usage Verify -Value $keyValue -EndDate "2020-01-01"
+```
+
+The above command results in output similar to the following screenshot:
++
+## Configure your Azure Cosmos DB account to use the new identity
+
+1. Sign into the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to your Azure Cosmos DB account.
+
+1. Assign the Contributor role to the sample app you created in the previous section.
+
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+## Register your certificate with Azure AD
+
+You can associate the certificate-based credential with the client application in Azure AD from the Azure portal. To associate the credential, you must upload the certificate file with the following steps:
+
+In the Azure app registration for the client application:
+
+1. Sign into the [Azure portal](https://portal.azure.com/).
+
+1. Open the Azure **Active Directory** pane, go to the **App registrations** pane, and open the sample app you created in the previous step.
+
+1. Select **Certificates & secrets** and then **Upload certificate**. Browse to the certificate file you created in the previous step and upload it.
+
+1. Select **Add**. After the certificate is uploaded, the thumbprint, start date, and expiration values are displayed.
+
+## Access the keys from PowerShell
+
+In this step, you will sign into Azure by using the application and the certificate you created and access your Azure Cosmos DB account's keys.
+
+1. First, clear the Azure account credentials that you used to sign in to your account. You can clear the credentials by using the following command:
+
+ ```powershell
+ Disconnect-AzAccount -Username <Your_Azure_account_email_id>
+ ```
+
+1. Next, validate that you can sign in to Azure by using the application's credentials and access the Azure Cosmos DB keys:
+
+ ```powershell
+ Login-AzAccount -ApplicationId <Your_Application_ID> -CertificateThumbprint $cert.Thumbprint -ServicePrincipal -Tenant <Tenant_ID_of_your_application>
+
+ Get-AzCosmosDBAccountKey `
+ -ResourceGroupName "<Resource_Group_Name_of_your_Azure_Cosmos_account>" `
+ -Name "<Your_Azure_Cosmos_Account_Name>" `
+ -Type "Keys"
+ ```
+
+The previous command displays the primary and secondary keys of your Azure Cosmos DB account. You can view the Activity log of your Azure Cosmos DB account to validate that the get keys request succeeded and that the event was initiated by the "sampleApp" application.
++
+## Access the keys from a C# application
+
+You can also validate this scenario by accessing keys from a C# application. The following C# console application can access Azure Cosmos DB keys by using the app registered in Azure Active Directory. Make sure to update the tenantId, clientId, certName, resource group name, subscription ID, and Azure Cosmos DB account name details before you run the code.
+
+```csharp
+using System;
+using Microsoft.IdentityModel.Clients.ActiveDirectory;
+using System.Linq;
+using System.Net.Http;
+using System.Security.Cryptography.X509Certificates;
+using System.Threading;
+using System.Threading.Tasks;
+
+namespace TodoListDaemonWithCert
+{
+ class Program
+ {
+ private static string aadInstance = "https://login.windows.net/";
+ private static string tenantId = "<Your_Tenant_ID>";
+ private static string clientId = "<Your_Client_ID>";
+ private static string certName = "<Your_Certificate_Name>";
+
+ private static int errorCode = 0;
+ static int Main(string[] args)
+ {
+ MainAsync().Wait();
+ Console.ReadKey();
+
+ return 0;
+ }
+
+ static async Task MainAsync()
+ {
+ string authContextURL = aadInstance + tenantId;
+ AuthenticationContext authContext = new AuthenticationContext(authContextURL);
+ X509Certificate2 cert = ReadCertificateFromStore(certName);
+
+ ClientAssertionCertificate credential = new ClientAssertionCertificate(clientId, cert);
+ AuthenticationResult result = await authContext.AcquireTokenAsync("https://management.azure.com/", credential);
+ if (result == null)
+ {
+ throw new InvalidOperationException("Failed to obtain the JWT token");
+ }
+
+ string token = result.AccessToken;
+ string subscriptionId = "<Your_Subscription_ID>";
+ string rgName = "<ResourceGroup_of_your_Cosmos_account>";
+ string accountName = "<Your_Cosmos_account_name>";
+ string cosmosDBRestCall = $"https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rgName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}/listKeys?api-version=2015-04-08";
+
+ Uri restCall = new Uri(cosmosDBRestCall);
+ HttpClient httpClient = new HttpClient();
+ httpClient.DefaultRequestHeaders.Remove("Authorization");
+ httpClient.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
+ HttpResponseMessage response = await httpClient.PostAsync(restCall, null);
+
+ Console.WriteLine("Got result {0} and keys {1}", response.StatusCode.ToString(), response.Content.ReadAsStringAsync().Result);
+ }
+
+ /// <summary>
+ /// Reads the certificate
+ /// </summary>
+ private static X509Certificate2 ReadCertificateFromStore(string certName)
+ {
+ X509Certificate2 cert = null;
+ X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
+ store.Open(OpenFlags.ReadOnly);
+ X509Certificate2Collection certCollection = store.Certificates;
+
+ // Find unexpired certificates.
+ X509Certificate2Collection currentCerts = certCollection.Find(X509FindType.FindByTimeValid, DateTime.Now, false);
+
+ // From the collection of unexpired certificates, find the ones with the correct name.
+ X509Certificate2Collection signingCert = currentCerts.Find(X509FindType.FindBySubjectName, certName, false);
+
+ // Return the most recently issued certificate in the collection that has the right name and is currently valid.
+ cert = signingCert.OfType<X509Certificate2>().OrderByDescending(c => c.NotBefore).FirstOrDefault();
+ store.Close();
+ return cert;
+ }
+ }
+}
+```
+
+This script outputs the primary and secondary keys as shown in the following screenshot:
++
+Similar to the previous section, you can view the Activity log of your Azure Cosmos DB account to validate that the get keys request event is initiated by the "sampleApp" application.
++
+## Next steps
+
+* [Secure Azure Cosmos DB keys using Azure Key Vault](../access-secrets-from-keyvault.md)
+
+* [Security baseline for Azure Cosmos DB](../security-baseline.md)
cosmos-db Change Feed Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-design-patterns.md
+
+ Title: Change feed design patterns in Azure Cosmos DB
+description: Overview of common change feed design patterns
++++++ Last updated : 03/24/2022++
+# Change feed design patterns in Azure Cosmos DB
+
+The Azure Cosmos DB change feed enables efficient processing of large datasets with a high volume of writes. Change feed also offers an alternative to querying an entire dataset to identify what has changed. This document focuses on common change feed design patterns, design tradeoffs, and change feed limitations.
+
+>
+> [!VIDEO https://aka.ms/docs.change-feed-azure-functions]
++
+Azure Cosmos DB is well-suited for IoT, gaming, retail, and operational logging applications. A common design pattern in these applications is to use changes to the data to trigger additional actions. Examples of additional actions include:
+
+* Triggering a notification or a call to an API, when an item is inserted or updated.
+* Real-time stream processing for IoT or real-time analytics processing on operational data.
+* Data movement such as synchronizing with a cache, a search engine, a data warehouse, or cold storage.
+
+The change feed in Azure Cosmos DB enables you to build efficient and scalable solutions for each of these patterns, as shown in the following image:
++
+## Event computing and notifications
+
+The Azure Cosmos DB change feed can simplify scenarios that need to trigger a notification or send a call to an API based on a certain event. You can use the [Change Feed Processor Library](change-feed-processor.md) to automatically poll your container for changes and call an external API each time there is a write or update.
+
+You can also selectively trigger a notification or send a call to an API based on specific criteria. For example, if you are reading from the change feed using [Azure Functions](change-feed-functions.md), you can put logic into the function to only send a notification if specific criteria have been met. While the Azure Function code executes for each write and update, the notification is sent only when those criteria are met.
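+
+For example, the following sketch shows an in-process Azure Function (using version 3 of the Azure Functions extension for Azure Cosmos DB) that only sends a notification when an order total crosses a threshold. The database, container, connection setting, and property names are hypothetical:
+
+```csharp
+using System.Collections.Generic;
+using Microsoft.Azure.Documents;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Extensions.Logging;
+
+public static class HighValueOrderNotifier
+{
+    [FunctionName("HighValueOrderNotifier")]
+    public static void Run(
+        [CosmosDBTrigger(
+            databaseName: "StoreDatabase",
+            collectionName: "Orders",
+            ConnectionStringSetting = "CosmosDBConnection",
+            LeaseCollectionName = "leases",
+            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changes,
+        ILogger log)
+    {
+        foreach (Document document in changes)
+        {
+            // The function runs for every write and update, but the notification
+            // is only sent when the criteria below are met.
+            double orderTotal = document.GetPropertyValue<double>("orderTotal");
+            if (orderTotal > 1000)
+            {
+                log.LogInformation($"Notifying about high-value order {document.Id} ({orderTotal})");
+                // Call your notification service or external API here.
+            }
+        }
+    }
+}
+```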
+
+## Real-time stream processing
+
+The Azure Cosmos DB change feed can be used for real-time stream processing for IoT or real-time analytics processing on operational data.
+For example, you might receive and store event data from devices, sensors, infrastructure, and applications, and process these events in real time using [Spark](../../hdinsight/spark/apache-spark-overview.md). The following image shows how you can implement a lambda architecture using Azure Cosmos DB and the change feed:
++
+In many cases, stream processing implementations first receive a high volume of incoming data into a temporary message queue such as Azure Event Hubs or Apache Kafka. The change feed is a great alternative due to Azure Cosmos DB's ability to support a sustained high rate of data ingestion with guaranteed low read and write latency. The advantages of the Azure Cosmos DB change feed over a message queue include:
+
+### Data persistence
+
+Data written to Azure Cosmos DB will show up in the change feed and be retained until deleted. Message queues typically have a maximum retention period. For example, [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) offers a maximum data retention of 90 days.
+
+### Querying ability
+
+In addition to reading from an Azure Cosmos DB container's change feed, you can also run SQL queries on the data stored in Azure Cosmos DB. The change feed isn't a duplication of data already in the container but rather just a different mechanism of reading the data. Therefore, if you read data from the change feed, it will always be consistent with queries of the same Azure Cosmos DB container.
+
+### High availability
+
+Azure Cosmos DB offers up to 99.999% read and write availability. Unlike many message queues, Azure Cosmos DB data can be easily globally distributed and configured with an [RTO (Recovery Time Objective)](../consistency-levels.md#rto) of zero.
+
+After processing items in the change feed, you can build a materialized view and persist aggregated values back in Azure Cosmos DB. If you're using Azure Cosmos DB to build a game, you can, for example, use change feed to implement real-time leaderboards based on scores from completed games.
+
+## Data movement
+
+You can also read from the change feed for real-time data movement.
+
+For example, the change feed helps you perform the following tasks efficiently:
+
+* Update a cache, search index, or data warehouse with data stored in Azure Cosmos DB.
+
+* Perform zero down-time migrations to another Azure Cosmos DB account or another Azure Cosmos DB container with a different logical partition key.
+
+* Implement application-level data tiering and archival. For example, you can store "hot data" in Azure Cosmos DB and age out "cold data" to other storage systems such as [Azure Blob Storage](../../storage/common/storage-introduction.md).
+
+When you have to [denormalize data across partitions and containers](how-to-model-partition-example.md#v2-introducing-denormalization-to-optimize-read-queries), you can read from your container's change feed as a source for this data replication. Real-time data replication with the change feed can only guarantee eventual consistency. You can [monitor how far the Change Feed Processor lags behind](how-to-use-change-feed-estimator.md) in processing changes in your Azure Cosmos DB container.
+
+## Event sourcing
+
+The [event sourcing pattern](/azure/architecture/patterns/event-sourcing) involves using an append-only store to record the full series of actions on that data. Azure Cosmos DB's change feed is a great choice as a central data store in event sourcing architectures where all data ingestion is modeled as writes (no updates or deletes). In this case, each write to Azure Cosmos DB is an "event" and you'll have a full record of past events in the change feed. Typical uses of the events published by the central event store are for maintaining materialized views or for integration with external systems. Because there is no time limit for retention in the change feed, you can replay all past events by reading from the beginning of your Azure Cosmos DB container's change feed.
+
+You can have [multiple change feed consumers subscribe to the same container's change feed](how-to-create-multiple-cosmos-db-triggers.md#optimizing-containers-for-multiple-triggers). Aside from the [lease container's](change-feed-processor.md#components-of-the-change-feed-processor) provisioned throughput, there is no cost to utilize the change feed. The change feed is available in every container regardless of whether it is utilized.
+
+Azure Cosmos DB is a great central append-only persistent data store in the event sourcing pattern because of its strengths in horizontal scalability and high availability. In addition, the Change Feed Processor library offers an ["at least once"](change-feed-processor.md#error-handling) guarantee, ensuring that you won't miss processing any events.
+
+## Current limitations
+
+The change feed has important limitations that you should understand. While items in an Azure Cosmos DB container will always remain in the change feed, the change feed is not a full operation log. There are important areas to consider when designing an application that utilizes the change feed.
+
+### Intermediate updates
+
+Only the most recent change for a given item is included in the change feed. When processing changes, you will read the latest available item version. If there are multiple updates to the same item in a short period of time, it is possible to miss processing intermediate updates. If you would like to track updates and be able to replay past updates to an item, we recommend modeling these updates as a series of writes instead.
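+
+For example, instead of updating a single reading document in place, you might write each update as its own item so that every version appears in the change feed. The following is only a minimal sketch; the `DeviceReading` type, its properties, the `container` instance, and the partition key are hypothetical:
+
+```csharp
+// Each update is written as a new item with its own id, so no version is lost.
+DeviceReading reading = new DeviceReading
+{
+    id = Guid.NewGuid().ToString(),   // a new id per update instead of overwriting one item
+    deviceId = "thermostat-42",
+    temperature = 21.5,
+    recordedAt = DateTime.UtcNow
+};
+
+await container.CreateItemAsync(reading, new PartitionKey(reading.deviceId));
+```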
+
+### Deletes
+
+The change feed does not capture deletes. If you delete an item from your container, it is also removed from the change feed. The most common method of handling this is adding a soft marker on the items that are being deleted. You can add a property called "deleted" and set it to "true" at the time of deletion. This document update will show up in the change feed. You can set a TTL on this item so that it can be automatically deleted later.
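+
+For example, a soft delete with the .NET SDK could look like the following sketch. The `SalesOrder` type and its `deleted` and `ttl` properties are hypothetical, and time to live must be enabled on the container for the automatic cleanup to happen:
+
+```csharp
+// Mark the item as deleted instead of deleting it, so the change shows up in the change feed.
+ItemResponse<SalesOrder> response = await container.ReadItemAsync<SalesOrder>("order-123", new PartitionKey("order-123"));
+SalesOrder order = response.Resource;
+
+order.deleted = true;
+order.ttl = 60 * 60 * 24;   // let Azure Cosmos DB remove the item automatically after one day
+
+await container.ReplaceItemAsync(order, order.id, new PartitionKey(order.id));
+```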
+
+### Guaranteed order
+
+There is guaranteed order in the change feed within a partition key value but not across partition key values. You should select a partition key that gives you a meaningful order guarantee.
+
+For example, consider a retail application using the event sourcing design pattern. In this application, different user actions are each "events" which are modeled as writes to Azure Cosmos DB. Imagine if some example events occurred in the following sequence:
+
+1. Customer adds Item A to their shopping cart
+2. Customer adds Item B to their shopping cart
+3. Customer removes Item A from their shopping cart
+4. Customer checks out and shopping cart contents are shipped
+
+A materialized view of current shopping cart contents is maintained for each customer. This application must ensure that these events are processed in the order in which they occur. If, for example, the cart checkout were to be processed before Item A's removal, it is likely that the customer would have had Item A shipped, as opposed to the desired Item B. In order to guarantee that these four events are processed in order of their occurrence, they should fall within the same partition key value. If you select **username** (each customer has a unique username) as the partition key, you can guarantee that these events show up in the change feed in the same order in which they are written to Azure Cosmos DB.
+
+## Examples
+
+Here are some real-world change feed code examples that extend beyond the scope of the provided samples:
+
+- [Introduction to the change feed](https://azurecosmosdb.github.io/labs/dotnet/labs/08-change_feed_with_azure_functions.html)
+- [IoT use case centered around the change feed](https://github.com/AzureCosmosDB/scenario-based-labs)
+- [Retail use case centered around the change feed](https://github.com/AzureCosmosDB/scenario-based-labs)
+
+## Next steps
+
+* [Change feed overview](../change-feed.md)
+* [Options to read change feed](read-change-feed.md)
+* [Using change feed with Azure Functions](change-feed-functions.md)
+* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Change Feed Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-functions.md
+
+ Title: How to use Azure Cosmos DB change feed with Azure Functions
+description: Use Azure Functions to connect to Azure Cosmos DB change feed. Later you can create reactive Azure functions that are triggered on every new event.
+++++++ Last updated : 10/14/2021++
+# Serverless event-based architectures with Azure Cosmos DB and Azure Functions
+
+Azure Functions provides the simplest way to connect to the [change feed](../change-feed.md). You can create small reactive Azure Functions that will be automatically triggered on each new event in your Azure Cosmos DB container's change feed.
++
+With the [Azure Functions trigger for Azure Cosmos DB](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md), you can leverage the [Change Feed Processor](change-feed-processor.md)'s scaling and reliable event detection functionality without the need to maintain any [worker infrastructure](change-feed-processor.md). Just focus on your Azure Function's logic without worrying about the rest of the event-sourcing pipeline. You can even mix the Trigger with any other [Azure Functions bindings](../../azure-functions/functions-triggers-bindings.md#supported-bindings).
+
+> [!NOTE]
+> Currently, the Azure Functions trigger for Azure Cosmos DB is supported for use with the API for NoSQL only.
+
+## Requirements
+
+To implement a serverless event-based flow, you need:
+
+* **The monitored container**: The monitored container is the Azure Cosmos DB container being monitored, and it stores the data from which the change feed is generated. Any inserts or updates to the monitored container are reflected in the change feed of the container.
+* **The lease container**: The lease container maintains state across multiple and dynamic serverless Azure Function instances and enables dynamic scaling. You can create the lease container automatically with the Azure Functions trigger for Azure Cosmos DB. You can also create the lease container manually. To automatically create the lease container, set the *CreateLeaseCollectionIfNotExists* flag in the [configuration](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration). Partitioned lease containers are required to have a `/id` partition key definition.
+
+## Create your Azure Functions trigger for Azure Cosmos DB
+
+Creating your Azure Function with an Azure Functions trigger for Azure Cosmos DB is now supported across all Azure Functions IDE and CLI integrations:
+
+* [Visual Studio Extension](../../azure-functions/functions-develop-vs.md) for Visual Studio users.
+* [Visual Studio Code Extension](/azure/developer/javascript/tutorial-vscode-serverless-node-01) for Visual Studio Code users.
+* And finally [Core CLI tooling](../../azure-functions/functions-run-local.md#create-func) for a cross-platform, IDE-agnostic experience.
+
+## Run your trigger locally
+
+You can run your [Azure Function locally](../../azure-functions/functions-develop-local.md) with the [Azure Cosmos DB Emulator](../local-emulator.md) to create and develop your serverless event-based flows without an Azure Subscription or incurring any costs.
+
+If you want to test live scenarios in the cloud, you can [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without any credit card or Azure subscription required.
+
+## Next steps
+
+You can now continue to learn more about change feed in the following articles:
+
+* [Overview of change feed](../change-feed.md)
+* [Ways to read change feed](read-change-feed.md)
+* [Using change feed processor library](change-feed-processor.md)
+* [How to work with change feed processor library](change-feed-processor.md)
+* [Serverless database computing using Azure Cosmos DB and Azure Functions](serverless-computing-database.md)
cosmos-db Change Feed Processor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-processor.md
+
+ Title: Change feed processor in Azure Cosmos DB
+description: Learn how to use the Azure Cosmos DB change feed processor to read the change feed, and learn about the components of the change feed processor
+++++
+ms.devlang: csharp
+ Last updated : 04/05/2022+++
+# Change feed processor in Azure Cosmos DB
+
+The change feed processor is part of the Azure Cosmos DB [.NET V3](https://github.com/Azure/azure-cosmos-dotnet-v3) and [Java V4](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos) SDKs. It simplifies the process of reading the change feed and distributes the event processing across multiple consumers effectively.
+
+The main benefit of the change feed processor library is its fault-tolerant design, which assures "at-least-once" delivery of all the events in the change feed.
+
+## Components of the change feed processor
+
+There are four main components of implementing the change feed processor:
+
+1. **The monitored container:** The monitored container has the data from which the change feed is generated. Any inserts and updates to the monitored container are reflected in the change feed of the container.
+
+1. **The lease container:** The lease container acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account.
+
+1. **The compute instance**: A compute instance hosts the change feed processor to listen for changes. Depending on the platform, it could be a VM, a Kubernetes pod, an Azure App Service instance, or an actual physical machine. It has a unique identifier referenced as the *instance name* throughout this article.
+
+1. **The delegate:** The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads.
+
+To further understand how these four elements of the change feed processor work together, let's look at an example in the following diagram. The monitored container stores documents and uses 'City' as the partition key. The partition key values are distributed in ranges (each range representing a [physical partition](../partitioning-overview.md#physical-partitions)) that contain items.
+There are two compute instances, and the change feed processor assigns different ranges to each instance to maximize compute distribution; each instance has a unique name.
+Each range is read in parallel, and its progress is maintained separately from other ranges in the lease container through a *lease* document. The combination of the leases represents the current state of the change feed processor.
++
+## Implementing the change feed processor
+
+### [.NET](#tab/dotnet)
+
+The point of entry is always the monitored container; from a `Container` instance, you call `GetChangeFeedProcessorBuilder`:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-change-feed-processor/src/Program.cs?name=DefineProcessor)]
+
+The first parameter is a distinct name that describes the goal of this processor, and the second parameter is the delegate implementation that will handle changes.
+
+An example of a delegate would be:
++
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-change-feed-processor/src/Program.cs?name=Delegate)]
+
+Afterwards, you define the compute instance name or unique identifier with `WithInstanceName`; this value should be unique and different for each compute instance you deploy. Finally, you specify the container that maintains the lease state with `WithLeaseContainer`.
+
+Calling `Build` will give you the processor instance that you can start by calling `StartAsync`.
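+
+Putting those steps together, a condensed sketch could look like the following. The database, container, processor, and instance names are hypothetical, `MyItem` is a placeholder type, and `HandleChangesAsync` is a delegate like the one shown above:
+
+```csharp
+Container monitoredContainer = cosmosClient.GetContainer("databaseName", "monitoredContainer");
+Container leaseContainer = cosmosClient.GetContainer("databaseName", "leases");
+
+ChangeFeedProcessor changeFeedProcessor = monitoredContainer
+    .GetChangeFeedProcessorBuilder<MyItem>(processorName: "changeFeedSample", onChangesDelegate: HandleChangesAsync)
+    .WithInstanceName("consoleHost-1")   // unique per compute instance
+    .WithLeaseContainer(leaseContainer)
+    .Build();
+
+await changeFeedProcessor.StartAsync();
+```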
+
+## Processing life cycle
+
+The normal life cycle of a host instance is:
+
+1. Read the change feed.
+1. If there are no changes, sleep for a predefined amount of time (customizable with `WithPollInterval` in the Builder) and go to #1.
+1. If there are changes, send them to the **delegate**.
+1. When the delegate finishes processing the changes **successfully**, update the lease store with the latest processed point in time and go to #1.
+
+## Error handling
+
+The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread processing that particular batch of changes is stopped, and a new thread is created. The new thread checks the latest point in time that the lease store has saved for that range of partition key values and restarts from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly, and it's the reason the change feed processor has an "at least once" guarantee.
+
+> [!NOTE]
+> There is only one scenario where a batch of changes will not be retried. If the failure happens on the first ever delegate execution, the lease store has no previous saved state to be used on the retry. In those cases, the retry would use the [initial starting configuration](#starting-time), which might or might not include the last batch.
+
+To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to a dead-letter queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The dead-letter queue might be another Azure Cosmos DB container. The exact data store doesn't matter; what matters is that the unprocessed changes are persisted.
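+
+A minimal sketch of that pattern, assuming a hypothetical `deadLetterContainer`, a `ProcessItemAsync` method, and a `MyItem` type, could look like this:
+
+```csharp
+static async Task HandleChangesAsync(IReadOnlyCollection<MyItem> changes, CancellationToken cancellationToken)
+{
+    foreach (MyItem item in changes)
+    {
+        try
+        {
+            await ProcessItemAsync(item, cancellationToken);
+        }
+        catch (Exception exception)
+        {
+            // Persist the unprocessed change so the processor isn't stuck retrying the same batch.
+            Console.WriteLine($"Dead-lettering item {item.id}: {exception.Message}");
+            await deadLetterContainer.UpsertItemAsync(item, cancellationToken: cancellationToken);
+        }
+    }
+}
+```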
+
+In addition, you can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed or use the [life cycle notifications](#life-cycle-notifications) to detect underlying failures.
+
+## Life-cycle notifications
+
+The change feed processor lets you hook into relevant events in its [life cycle](#processing-life-cycle). You can choose to be notified of one or all of them. The recommendation is to at least register the error notification:
+
+* Register a handler for `WithLeaseAcquireNotification` to be notified when the current host acquires a lease to start processing it.
+* Register a handler for `WithLeaseReleaseNotification` to be notified when the current host releases a lease and stops processing it.
+* Register a handler for `WithErrorNotification` to be notified when the current host encounters an exception during processing, being able to distinguish if the source is the user delegate (unhandled exception) or an error the processor is encountering trying to access the monitored container (for example, networking issues).
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartWithNotifications)]
+
+## Deployment unit
+
+A single change feed processor deployment unit consists of one or more compute instances that share the same `processorName` and lease container configuration but have a different instance name each. You can have many deployment units, each with a different business flow for the changes and each consisting of one or more instances.
+
+For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified.
+
+## Dynamic scaling
+
+As mentioned before, within a deployment unit you can have one or more compute instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are:
+
+1. All instances should have the same lease container configuration.
+1. All instances should have the same `processorName`.
+1. Each instance needs to have a different instance name (`WithInstanceName`).
+
+If these three conditions apply, then the change feed processor will distribute all the leases in the lease container across all running instances of that deployment unit and parallelize compute using an equal distribution algorithm. One lease can only be owned by one instance at a given time, so the number of instances should not be greater than the number of leases.
+
+The number of instances can grow and shrink, and the change feed processor will dynamically adjust the load by redistributing accordingly.
+
+Moreover, the change feed processor can dynamically adjust to a container's scale due to throughput or storage increases. When your container grows, the change feed processor transparently handles these scenarios by dynamically increasing the leases and distributing the new leases among existing instances.
+
+## Change feed and provisioned throughput
+
+Change feed read operations on the monitored container will consume [request units](../request-units.md). Make sure your monitored container is not experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise, you will experience delays in receiving change feed events on your processors.
+
+Operations on the lease container (updating and maintaining state) consume [request units](../request-units.md). The higher the number of instances using the same lease container, the higher the potential request unit consumption will be. Make sure your lease container is not experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise, you will experience delays in receiving change feed events on your processors. In cases where throttling is high, the processors might stop processing completely.
+
+## Starting time
+
+By default, when a change feed processor starts the first time, it will initialize the leases container, and start its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor was initialized for the first time won't be detected.
+
+### Reading from a previous date and time
+
+It's possible to initialize the change feed processor to read changes starting at a **specific date and time**, by passing an instance of a `DateTime` to the `WithStartTime` builder extension:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=TimeInitialization)]
+
+The change feed processor will be initialized for that specific date and time and start reading the changes that happened after.
+
+> [!NOTE]
+> Starting the change feed processor at a specific date and time is not supported in multi-region write accounts.
+
+### Reading from the beginning
+
+In other scenarios like data migrations or analyzing the entire history of a container, we need to read the change feed from **the beginning of that container's lifetime**. To do that, we can use `WithStartTime` on the builder extension, but passing `DateTime.MinValue.ToUniversalTime()`, which would generate the UTC representation of the minimum `DateTime` value, like so:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartFromBeginningInitialization)]
+
+The change feed processor will be initialized and start reading changes from the beginning of the lifetime of the container.
+
+> [!NOTE]
+> These customization options only set up the starting point in time of the change feed processor. After the lease container is initialized for the first time, changing them has no effect.
+
+### [Java](#tab/java)
+
+An example of a delegate implementation would be:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=Delegate)]
+
+>[!NOTE]
+> In the example above, we pass a variable `options` of type `ChangeFeedProcessorOptions`, which can be used to set various values, including `setStartFromBeginning`:
+> [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=ChangeFeedProcessorOptions)]
+
+We assign this to a `changeFeedProcessorInstance`, passing the compute instance name (`hostName`), the monitored container (here called `feedContainer`), and the `leaseContainer` as parameters. We then start the change feed processor:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=StartChangeFeedProcessor)]
+
+>[!NOTE]
+> The above code snippets are taken from a sample in GitHub, which you can find [here](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java).
+
+## Processing life cycle
+
+The normal life cycle of a host instance is:
+
+1. Read the change feed.
+1. If there are no changes, sleep for a predefined amount of time (customizable with `options.setFeedPollDelay` in the Builder) and go to #1.
+1. If there are changes, send them to the **delegate**.
+1. When the delegate finishes processing the changes **successfully**, update the lease store with the latest processed point in time and go to #1.
+
+## Error handling
+
+The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread processing that particular batch of changes is stopped, and a new thread is created. The new thread checks the latest point in time that the lease store has saved for that range of partition key values and restarts from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly, and it's the reason the change feed processor has an "at least once" guarantee.
+
+> [!NOTE]
+> There is only one scenario where a batch of changes will not be retried. If the failure happens on the first ever delegate execution, the lease store has no previous saved state to be used on the retry. In those cases, the retry would use the [initial starting configuration](#starting-time), which might or might not include the last batch.
+
+To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to a dead-letter queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The dead-letter queue might be another Azure Cosmos DB container. The exact data store doesn't matter; what matters is that the unprocessed changes are persisted.
+
+In addition, you can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed.
+
+<!-- ## Life-cycle notifications
+
+The change feed processor lets you hook to relevant events in its [life cycle](#processing-life-cycle), you can choose to be notified to one or all of them. The recommendation is to at least register the error notification:
+
+* Register a handler for `WithLeaseAcquireNotification` to be notified when the current host acquires a lease to start processing it.
+* Register a handler for `WithLeaseReleaseNotification` to be notified when the current host releases a lease and stops processing it.
+* Register a handler for `WithErrorNotification` to be notified when the current host encounters an exception during processing, being able to distinguish if the source is the user delegate (unhandled exception) or an error the processor is encountering trying to access the monitored container (for example, networking issues).
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartWithNotifications)] -->
+
+## Deployment unit
+
+A single change feed processor deployment unit consists of one or more compute instances with the same lease container configuration and the same `leasePrefix`, but a different `hostName` for each. You can have many deployment units, each with a different business flow for the changes and each consisting of one or more instances.
+
+For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified.
+
+## Dynamic scaling
+
+As mentioned before, within a deployment unit you can have one or more compute instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are:
+
+1. All instances should have the same lease container configuration.
+1. All instances should have the same value set in `options.setLeasePrefix` (or none set at all).
+1. Each instance needs to have a different `hostName`.
+
+If these three conditions apply, then the change feed processor will distribute all the leases in the lease container across all running instances of that deployment unit and parallelize compute using an equal distribution algorithm. One lease can only be owned by one instance at a given time, so the number of instances should not be greater than the number of leases.
+
+The number of instances can grow and shrink, and the change feed processor will dynamically adjust the load by redistributing accordingly. Deployment units can share the same lease container, but they should each have a different `leasePrefix`.
+
+Moreover, the change feed processor can dynamically adjust to a container's scale due to throughput or storage increases. When your container grows, the change feed processor transparently handles these scenarios by dynamically increasing the leases and distributing the new leases among existing instances. Deployment units can share the same lease container, but they should each have a different `leasePrefix`.
+
+## Change feed and provisioned throughput
+
+Change feed read operations on the monitored container will consume [request units](../request-units.md). Make sure your monitored container is not experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise, you will experience delays in receiving change feed events on your processors.
+
+Operations on the lease container (updating and maintaining state) consume [request units](../request-units.md). The higher the number of instances using the same lease container, the higher the potential request unit consumption will be. Make sure your lease container is not experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise, you will experience delays in receiving change feed events on your processors. In cases where throttling is high, the processors might stop processing completely.
+
+## Starting time
+
+By default, when a change feed processor starts the first time, it will initialize the leases container, and start its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor was initialized for the first time won't be detected.
+
+### Reading from a previous date and time
+
+It's possible to initialize the change feed processor to read changes starting at a **specific date and time**, by setting `setStartTime` in `options`. The change feed processor will be initialized for that specific date and time and start reading the changes that happened after.
+
+> [!NOTE]
+> Starting the change feed processor at a specific date and time is not supported in multi-region write accounts.
+
+### Reading from the beginning
+
+In our above sample, we set `setStartFromBeginning` to `false`, which is the same as the default value. In other scenarios like data migrations or analyzing the entire history of a container, we need to read the change feed from **the beginning of that container's lifetime**. To do that, we can set `setStartFromBeginning` to `true`. The change feed processor will be initialized and start reading changes from the beginning of the lifetime of the container.
+
+> [!NOTE]
+> These customization options only set up the starting point in time of the change feed processor. After the lease container is initialized for the first time, changing them has no effect.
+++
+## Sharing the lease container
+
+You can share the lease container across multiple [deployment units](#deployment-unit); each deployment unit would listen to a different monitored container or have a different `processorName`. With this configuration, each deployment unit would maintain an independent state on the lease container. Review the [request unit consumption on the lease container](#change-feed-and-provisioned-throughput) to make sure the provisioned throughput is enough for all the deployment units.
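+
+In the .NET SDK, for example, two deployment units could share the same lease container as in the following sketch. The container references, delegates, and `processorName` values are hypothetical:
+
+```csharp
+ChangeFeedProcessor notificationsProcessor = monitoredContainer
+    .GetChangeFeedProcessorBuilder<Invoice>("invoiceNotifications", HandleNotificationChangesAsync)
+    .WithInstanceName("host-1")
+    .WithLeaseContainer(sharedLeaseContainer)   // same lease container...
+    .Build();
+
+ChangeFeedProcessor archivingProcessor = monitoredContainer
+    .GetChangeFeedProcessorBuilder<Invoice>("invoiceArchiving", HandleArchiveChangesAsync)
+    .WithInstanceName("host-1")
+    .WithLeaseContainer(sharedLeaseContainer)   // ...but a different processorName keeps state independent
+    .Build();
+```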
+
+## Where to host the change feed processor
+
+The change feed processor can be hosted in any platform that supports long running processes or tasks:
+
+* A continuous running [Azure WebJob](/training/modules/run-web-app-background-task-with-webjobs/).
+* A process in an [Azure Virtual Machine](/azure/architecture/best-practices/background-jobs#azure-virtual-machines).
+* A background job in [Azure Kubernetes Service](/azure/architecture/best-practices/background-jobs#azure-kubernetes-service).
+* A serverless function in [Azure Functions](/azure/architecture/best-practices/background-jobs#azure-functions).
+* An [ASP.NET hosted service](/aspnet/core/fundamentals/host/hosted-services).
+
+While the change feed processor can run in short-lived environments because the lease container maintains the state, the startup cycle of these environments adds delay to receiving the notifications (due to the overhead of starting the processor every time the environment is started).
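+
+For example, an ASP.NET hosted service could own the processor's lifetime. This is only a sketch; it assumes the `ChangeFeedProcessor` instance is built elsewhere (for example, with `GetChangeFeedProcessorBuilder`) and registered with dependency injection:
+
+```csharp
+using System.Threading;
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+using Microsoft.Extensions.Hosting;
+
+public class ChangeFeedHostedService : IHostedService
+{
+    private readonly ChangeFeedProcessor processor;
+
+    public ChangeFeedHostedService(ChangeFeedProcessor processor)
+    {
+        this.processor = processor;
+    }
+
+    // Start listening for changes when the application starts.
+    public Task StartAsync(CancellationToken cancellationToken) => this.processor.StartAsync();
+
+    // Stop the processor gracefully when the application shuts down.
+    public Task StopAsync(CancellationToken cancellationToken) => this.processor.StopAsync();
+}
+```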
+
+## Additional resources
+
+* [Azure Cosmos DB SDK](sdk-dotnet-v2.md)
+* [Complete sample application on GitHub](https://github.com/Azure-Samples/cosmos-dotnet-change-feed-processor)
+* [Additional usage samples on GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed)
+* [Azure Cosmos DB workshop labs for change feed processor](https://azurecosmosdb.github.io/labs/dotnet/labs/08-change_feed_with_azure_functions.html#consume-cosmos-db-change-feed-via-the-change-feed-processor)
+
+## Next steps
+
+You can now proceed to learn more about change feed processor in the following articles:
+
+* [Overview of change feed](../change-feed.md)
+* [Change feed pull model](change-feed-pull-model.md)
+* [How to migrate from the change feed processor library](how-to-migrate-from-change-feed-library.md)
+* [Using the change feed estimator](how-to-use-change-feed-estimator.md)
+* [Change feed processor start time](#starting-time)
cosmos-db Change Feed Pull Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-pull-model.md
+
+ Title: Change feed pull model
+description: Learn how to use the Azure Cosmos DB change feed pull model to read the change feed, and learn about the differences between the pull model and the change feed processor
+++++
+ms.devlang: csharp
+ Last updated : 04/07/2022+++
+# Change feed pull model in Azure Cosmos DB
+
+With the change feed pull model, you can consume the Azure Cosmos DB change feed at your own pace. As you can already do with the [change feed processor](change-feed-processor.md), you can use the change feed pull model to parallelize the processing of changes across multiple change feed consumers.
+
+## Comparing with change feed processor
+
+Many scenarios can process the change feed using either the [change feed processor](change-feed-processor.md) or the pull model. The pull model's continuation tokens and the change feed processor's lease container are both "bookmarks" for the last processed item (or batch of items) in the change feed.
+
+However, you can't convert continuation tokens to a lease container (or vice versa).
+
+> [!NOTE]
+> In most cases when you need to read from the change feed, the simplest option is to use the [change feed processor](change-feed-processor.md).
+
+You should consider using the pull model in these scenarios:
+
+- Read changes from a particular partition key
+- Control the pace at which your client receives changes for processing
+- Perform a one-time read of the existing data in the change feed (for example, to do a data migration)
+
+Here are some key differences between the change feed processor and the pull model:
+
+|Feature | Change feed processor| Pull model |
+| --- | --- | --- |
+| Keeping track of current point in processing change feed | Lease (stored in an Azure Cosmos DB container) | Continuation token (stored in memory or manually persisted) |
+| Ability to replay past changes | Yes, with push model | Yes, with pull model|
+| Polling for future changes | Automatically checks for changes based on user-specified `WithPollInterval` | Manual |
+| Behavior where there are no new changes | Automatically wait `WithPollInterval` and recheck | Must check status and manually recheck |
+| Process changes from entire container | Yes, and automatically parallelized across multiple threads/machine consuming from the same container| Yes, and manually parallelized using FeedRange |
+| Process changes from just a single partition key | Not supported | Yes|
+
+> [!NOTE]
+> Unlike when reading using the change feed processor, you must explicitly handle cases where there are no new changes.
+
+## Consuming an entire container's changes
+
+### [.NET](#tab/dotnet)
+
+You can create a `FeedIterator` to process the change feed using the pull model. When you initially create a `FeedIterator`, you must specify a required `ChangeFeedStartFrom` value, which consists of both the starting position for reading changes and the desired `FeedRange`. The `FeedRange` is a range of partition key values and specifies the items that will be read from the change feed using that specific `FeedIterator`.
+
+You can optionally specify `ChangeFeedRequestOptions` to set a `PageSizeHint`. When set, this property sets the maximum number of items received per page. If operations in the monitored collection are performed through stored procedures, transaction scope is preserved when reading items from the change feed. As a result, the number of items received could be higher than the specified value so that the items changed by the same transaction are returned as part of one atomic batch.
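+
+For example, here's a sketch that passes a `ChangeFeedRequestOptions` with a `PageSizeHint` when creating the iterator:
+
+```csharp
+ChangeFeedRequestOptions options = new ChangeFeedRequestOptions
+{
+    PageSizeHint = 100   // maximum number of items per page, subject to the transaction caveat above
+};
+
+FeedIterator<User> iteratorWithPageSize = container.GetChangeFeedIterator<User>(
+    ChangeFeedStartFrom.Beginning(),
+    ChangeFeedMode.Incremental,
+    options);
+```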
+
+The `FeedIterator` comes in two flavors. In addition to the examples below that return entity objects, you can also obtain the response with `Stream` support. Streams allow you to read data without having it first deserialized, saving on client resources.
+
+Here's an example for obtaining a `FeedIterator` that returns entity objects, in this case a `User` object:
+
+```csharp
+FeedIterator<User> iteratorWithPOCOs = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
+```
+
+Here's an example for obtaining a `FeedIterator` that returns a `Stream`:
+
+```csharp
+FeedIterator iteratorWithStreams = container.GetChangeFeedStreamIterator(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
+```
+
+If you don't supply a `FeedRange` to a `FeedIterator`, you can process an entire container's change feed at your own pace. Here's an example that reads all changes starting at the current time:
+
+```csharp
+FeedIterator<User> iteratorForTheEntireContainer = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Now(), ChangeFeedMode.Incremental);
+
+while (iteratorForTheEntireContainer.HasMoreResults)
+{
+ FeedResponse<User> response = await iteratorForTheEntireContainer.ReadNextAsync();
+
+ if (response.StatusCode == HttpStatusCode.NotModified)
+ {
+ Console.WriteLine($"No new changes");
+ await Task.Delay(TimeSpan.FromSeconds(5));
+ }
+ else
+ {
+ foreach (User user in response)
+ {
+ Console.WriteLine($"Detected change for user with id {user.id}");
+ }
+ }
+}
+```
+
+Because the change feed is effectively an infinite list of items encompassing all future writes and updates, the value of `HasMoreResults` is always true. When you try to read the change feed and there are no new changes available, you'll receive a response with `NotModified` status. In the above example, it is handled by waiting 5 seconds before rechecking for changes.
+
+## Consuming a partition key's changes
+
+In some cases, you may only want to process a specific partition key's changes. You can obtain a `FeedIterator` for a specific partition key and process the changes the same way that you can for an entire container.
+
+```csharp
+FeedIterator<User> iteratorForPartitionKey = container.GetChangeFeedIterator<User>(
+    ChangeFeedStartFrom.Beginning(FeedRange.FromPartitionKey(new PartitionKey("PartitionKeyValue"))), ChangeFeedMode.Incremental);
+
+while (iteratorForPartitionKey.HasMoreResults)
+{
+    FeedResponse<User> response = await iteratorForPartitionKey.ReadNextAsync();
+
+ if (response.StatusCode == HttpStatusCode.NotModified)
+ {
+ Console.WriteLine($"No new changes");
+ await Task.Delay(TimeSpan.FromSeconds(5));
+ }
+ else
+ {
+ foreach (User user in response)
+ {
+ Console.WriteLine($"Detected change for user with id {user.id}");
+ }
+ }
+}
+```
+
+## Using FeedRange for parallelization
+
+In the [change feed processor](change-feed-processor.md), work is automatically spread across multiple consumers. In the change feed pull model, you can use the `FeedRange` to parallelize the processing of the change feed. A `FeedRange` represents a range of partition key values.
+
+Here's an example showing how to obtain a list of ranges for your container:
+
+```csharp
+IReadOnlyList<FeedRange> ranges = await container.GetFeedRangesAsync();
+```
+
+When you obtain a list of FeedRanges for your container, you'll get one `FeedRange` per [physical partition](../partitioning-overview.md#physical-partitions).
+
+Using a `FeedRange`, you can then create a `FeedIterator` to parallelize the processing of the change feed across multiple machines or threads. Unlike the previous example that showed how to obtain a `FeedIterator` for the entire container or a single partition key, you can use FeedRanges to obtain multiple FeedIterators, which can process the change feed in parallel.
+
+In the case where you want to use FeedRanges, you need to have an orchestrator process that obtains FeedRanges and distributes them to those machines. This distribution could be:
+
+* Using `FeedRange.ToJsonString` and distributing this string value. The consumers can use this value with `FeedRange.FromJsonString` (see the sketch after this list).
+* If the distribution is in-process, passing the `FeedRange` object reference.
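+
+For example, here's a sketch of serializing a range on the orchestrator and rehydrating it on a consumer:
+
+```csharp
+// On the orchestrator: obtain the ranges and serialize one of them for a consumer.
+IReadOnlyList<FeedRange> ranges = await container.GetFeedRangesAsync();
+string serializedRange = ranges[0].ToJsonString();
+
+// On the consumer: rehydrate the range and read only that slice of the change feed.
+FeedRange range = FeedRange.FromJsonString(serializedRange);
+FeedIterator<User> iteratorForRange = container.GetChangeFeedIterator<User>(
+    ChangeFeedStartFrom.Beginning(range), ChangeFeedMode.Incremental);
+```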
+
+Here's a sample that shows how to read from the beginning of the container's change feed using two hypothetical separate machines that are reading in parallel:
+
+Machine 1:
+
+```csharp
+FeedIterator<User> iteratorA = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[0]), ChangeFeedMode.Incremental);
+while (iteratorA.HasMoreResults)
+{
+ FeedResponse<User> response = await iteratorA.ReadNextAsync();
+
+ if (response.StatusCode == HttpStatusCode.NotModified)
+ {
+ Console.WriteLine($"No new changes");
+ await Task.Delay(TimeSpan.FromSeconds(5));
+ }
+ else
+ {
+ foreach (User user in response)
+ {
+ Console.WriteLine($"Detected change for user with id {user.id}");
+ }
+ }
+}
+```
+
+Machine 2:
+
+```csharp
+FeedIterator<User> iteratorB = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[1]), ChangeFeedMode.Incremental);
+while (iteratorB.HasMoreResults)
+{
+    FeedResponse<User> response = await iteratorB.ReadNextAsync();
+
+ if (response.StatusCode == HttpStatusCode.NotModified)
+ {
+ Console.WriteLine($"No new changes");
+ await Task.Delay(TimeSpan.FromSeconds(5));
+ }
+ else
+ {
+ foreach (User user in response)
+ {
+ Console.WriteLine($"Detected change for user with id {user.id}");
+ }
+ }
+}
+```
+
+## Saving continuation tokens
+
+You can save the position of your `FeedIterator` by obtaining the continuation token. A continuation token is a string value that keeps track of your FeedIterator's last processed changes and allows the `FeedIterator` to resume at this point later. The following code will read through the change feed since container creation. After no more changes are available, it will persist a continuation token so that change feed consumption can be later resumed.
+
+```csharp
+FeedIterator<User> iterator = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
+
+string continuation = null;
+
+while (iterator.HasMoreResults)
+{
+ FeedResponse<User> response = await iterator.ReadNextAsync();
+
+ if (response.StatusCode == HttpStatusCode.NotModified)
+ {
+ Console.WriteLine($"No new changes");
+ continuation = response.ContinuationToken;
+ // Stop the consumption since there are no new changes
+ break;
+ }
+ else
+ {
+ foreach (User user in response)
+ {
+ Console.WriteLine($"Detected change for user with id {user.id}");
+ }
+ }
+}
+
+// Some time later when I want to check changes again
+FeedIterator<User> iteratorThatResumesFromLastPoint = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.ContinuationToken(continuation), ChangeFeedMode.Incremental);
+```
+
+As long as the Azure Cosmos DB container still exists, a FeedIterator's continuation token never expires.
+
+### [Java](#tab/java)
+
+You can create a `Iterator<FeedResponse<JsonNode>> responseIterator` to process the change feed using the pull model. When creating `CosmosChangeFeedRequestOptions` you must specify where to start reading the change feed from. You will also pass the desired `FeedRange`. The `FeedRange` is a range of partition key values and specifies the items that will be read from the change feed. If you specify `FeedRange.forFullRange()`, you can process an entire container's change feed at your own pace. You can optionally specify a value in `byPage()`. When set, this property sets the maximum number of items received per page. Below is an example for obtaining a `responseIterator`.
+
+>[!NOTE]
+> All of the below code snippets are taken from a sample in GitHub, which you can find [here](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java).
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=FeedResponseIterator)]
+
+We can then iterate over the results. Because the change feed is effectively an infinite list of items encompassing all future writes and updates, the value of `responseIterator.hasNext()` is always true. Below is an example, which reads all changes starting from the beginning. Each iteration persists a continuation token after processing all events, and will pick up from the last processed point in the change feed. This is handled using `createForProcessingFromContinuation`:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=AllFeedRanges)]
++
+## Consuming a partition key's changes
+
+In some cases, you may only want to process a specific partition key's changes. You can process the changes for a specific partition key in the same way that you can for an entire container. Here's an example:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=PartitionKeyProcessing)]
++
+## Using FeedRange for parallelization
+
+In the [change feed processor](change-feed-processor.md), work is automatically spread across multiple consumers. In the change feed pull model, you can use the `FeedRange` to parallelize the processing of the change feed. A `FeedRange` represents a range of partition key values.
+
+Here's an example showing how to obtain a list of ranges for your container:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=GetFeedRanges)]
+
+When you obtain a list of FeedRanges for your container, you'll get one `FeedRange` per [physical partition](../partitioning-overview.md#physical-partitions).
+
+Using a `FeedRange`, you can then parallelize the processing of the change feed across multiple machines or threads. Unlike the previous example that showed how to process changes for the entire container or a single partition key, you can use FeedRanges to process the change feed in parallel.
+
+In the case where you want to use FeedRanges, you need to have an orchestrator process that obtains FeedRanges and distributes them to those machines. This distribution could be:
+
+* Using `FeedRange.toString()` and distributing this string value.
+* If the distribution is in-process, passing the `FeedRange` object reference.
+
+Here's a sample that shows how to read from the beginning of the container's change feed using two hypothetical separate machines that are reading in parallel:
+
+Machine 1:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=Machine1)]
+
+Machine 2:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=Machine2)]
+++
+## Next steps
+
+* [Overview of change feed](../change-feed.md)
+* [Using the change feed processor](change-feed-processor.md)
+* [Trigger Azure Functions](change-feed-functions.md)
cosmos-db Changefeed Ecommerce Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/changefeed-ecommerce-solution.md
+
+ Title: Use Azure Cosmos DB change feed to visualize real-time data analytics
+description: This article describes how change feed can be used by a retail company to understand user patterns, perform real-time data analysis and visualization
+++++
+ms.devlang: java
+ Last updated : 03/24/2022+++
+# Use Azure Cosmos DB change feed to visualize real-time data analytics
+
+The Azure Cosmos DB change feed is a mechanism to get a continuous and incremental feed of records from an Azure Cosmos DB container as those records are created or modified. Change feed support works by listening to an Azure Cosmos DB container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified. To learn more about change feed, see the [working with change feed](../change-feed.md) article.
+
+This article describes how change feed can be used by an e-commerce company to understand user patterns and perform real-time data analysis and visualization. You will analyze events such as a user viewing an item, adding an item to their cart, or purchasing an item. When one of these events occurs, a new record is created, and the change feed logs that record. The change feed then triggers a series of steps resulting in visualization of metrics that analyze the company's performance and activity. Sample metrics that you can visualize include revenue, unique site visitors, most popular items, and the average price of the items that are viewed versus added to a cart versus purchased. These sample metrics can help an e-commerce company evaluate its site popularity, develop its advertising and pricing strategies, and make decisions regarding what inventory to invest in.
+
+>
+> [!VIDEO https://aka.ms/docs.ecomm-change-feed]
+>
+
+## Solution components
+The following diagram represents the data flow and components involved in the solution:
+
+
+1. **Data Generation:** A data simulator is used to generate retail data that represents events such as a user viewing an item, adding an item to their cart, and purchasing an item. You can generate a large set of sample data by using the data generator. The generated sample data contains documents in the following format:
+
+ ```json
+ {
+ "CartID": 2486,
+ "Action": "Viewed",
+ "Item": "Women's Denim Jacket",
+ "Price": 31.99
+ }
+ ```
+
+2. **Azure Cosmos DB:** The generated data is stored in an Azure Cosmos DB container.
+
+3. **Change Feed:** The change feed will listen for changes to the Azure Cosmos DB container. Each time a new document is added to the collection (that is, when an event occurs, such as a user viewing an item, adding an item to their cart, or purchasing an item), the change feed will trigger an [Azure Function](../../azure-functions/functions-overview.md).
+
+4. **Azure Function:** The Azure Function processes the new data and sends it to [Azure Event Hubs](../../event-hubs/event-hubs-about.md).
+
+5. **Azure event hub:** The event hub stores these events and sends them to [Azure Stream Analytics](../../stream-analytics/stream-analytics-introduction.md) to perform further analysis.
+
+6. **Azure Stream Analytics:** Azure Stream Analytics defines queries to process the events and perform real-time data analysis. This data is then sent to [Microsoft Power BI](/power-bi/desktop-what-is-desktop).
+
+7. **Power BI:** Power BI is used to visualize the data sent by Azure Stream Analytics. You can build a dashboard to see how the metrics change in real time.
+
+## Prerequisites
+
+* Microsoft .NET Framework 4.7.1 or higher
+
+* Microsoft .NET Core 2.1 (or higher)
+
+* Visual Studio with Universal Windows Platform development, .NET desktop development, and ASP.NET and web development workloads
+
+* Microsoft Azure Subscription
+
+* Microsoft Power BI Account
+
+* Download the [Azure Cosmos DB change feed lab](https://github.com/Azure-Samples/azure-cosmos-db-change-feed-dotnet-retail-sample) from GitHub.
+
+## Create Azure resources
+
+Create the Azure resources required by the solution: Azure Cosmos DB, a storage account, an event hub, and Stream Analytics. You will deploy these resources through an Azure Resource Manager template. Use the following steps to deploy these resources:
+
+1. Set the Windows PowerShell execution policy to **Unrestricted**. To do so, open **Windows PowerShell as an Administrator** and run the following commands:
+
+ ```powershell
+ Get-ExecutionPolicy
+ Set-ExecutionPolicy Unrestricted
+ ```
+
+2. From the GitHub repository you downloaded in the previous step, navigate to the **Azure Resource Manager** folder, and open the file called **parameters.json** file.
+
+3. Provide values for the `cosmosdbaccount_name`, `eventhubnamespace_name`, and `storageaccount_name` parameters as indicated in the **parameters.json** file. You'll need to use the names that you give to each of your resources later.
+
+4. From **Windows PowerShell**, navigate to the **Azure Resource Manager** folder and run the following command:
+
+ ```powershell
+ .\deploy.ps1
+ ```
+5. When prompted, enter your Azure **Subscription ID**, **changefeedlab** for the resource group name, and **run1** for the deployment name. Once the resources begin to deploy, it may take up to 10 minutes for the deployment to complete.
+
+## Create a database and the collection
+
+You will now create a collection to hold e-commerce site events. When a user views an item, adds an item to their cart, or purchases an item, the collection will receive a record that includes the action ("viewed", "added", or "purchased"), the name of the item involved, the price of the item involved, and the ID number of the user cart involved.
+
+1. Go to [Azure portal](https://portal.azure.com/) and find the **Azure Cosmos DB Account** that's been created by the template deployment.
+
+2. From the **Data Explorer** pane, select **New Collection** and fill the form with the following details:
+
+ * For the **Database id** field, select **Create new**, then enter **changefeedlabdatabase**. Leave the **Provision database throughput** box unchecked.
+ * For the **Collection** id field, enter **changefeedlabcollection**.
+ * For the **Partition key** field, enter **/Item**. This is case-sensitive, so make sure you enter it correctly.
+ * For the **Throughput** field, enter **10000**.
+ * Select the **OK** button.
+
+3. Next, create another collection named **leases** for change feed processing. The leases collection coordinates processing the change feed across multiple workers. A separate collection is used to store the leases, with one lease per partition.
+
+4. Return to the **Data Explorer** pane and select **New Collection** and fill the form with the following details:
+
+ * For the **Database id** field, select **Use existing**, then enter **changefeedlabdatabase**.
+ * For the **Collection id** field, enter **leases**.
+ * For **Storage capacity**, select **Fixed**.
+ * Leave the **Throughput** field set to its default value.
+ * Select the **OK** button.
+
+## Get the connection string and keys
+
+### Get the Azure Cosmos DB connection string
+
+1. Go to [Azure portal](https://portal.azure.com/) and find the **Azure Cosmos DB Account** that's created by the template deployment.
+
+2. Navigate to the **Keys** pane and copy the **PRIMARY CONNECTION STRING** to a notepad or another document that you'll have access to throughout the lab. You should label it **Azure Cosmos DB Connection String**. You'll need to copy the string into your code later, so take a note and remember where you're storing it.
+
+### Get the storage account key and connection string
+
+Azure Storage Accounts allow users to store data. In this lab, you will use a storage account to store data that is used by the Azure Function. The Azure Function is triggered when any modification is made to the collection.
+
+1. Return to your resource group and open the storage account that you created earlier.
+
+2. Select **Access keys** from the menu on the left-hand side.
+
+3. Copy the values under **key 1** to a notepad or another document that you will have access to throughout the lab. You should label the **Key** as **Storage Key** and the **Connection string** as **Storage Connection String**. You'll need to copy these strings into your code later, so take a note and remember where you are storing them.
+
+### Get the event hub namespace connection string
+
+An Azure event hub receives, stores, processes, and forwards event data. In this lab, the event hub receives a document every time a new event occurs (whenever an item is viewed by a user, added to a user's cart, or purchased by a user) and then forwards that document to Azure Stream Analytics.
+
+1. Return to your resource group and open the **Event Hubs Namespace** that you created and named in the prelab.
+
+2. Select **Shared access policies** from the menu on the left-hand side.
+
+3. Select **RootManageSharedAccessKey**. Copy the **Connection string-primary key** to a notepad or another document that you will have access to throughout the lab. You should label it **Event Hub Namespace** connection string. You'll need to copy the string into your code later, so take a note and remember where you are storing it.
+
+## Set up Azure Function to read the change feed
+
+When a new document is created, or a current document is modified in an Azure Cosmos DB container, the change feed automatically adds that modified document to its history of collection changes. You will now build and run an Azure Function that processes the change feed. When a document is created or modified in the collection you created, the Azure Function will be triggered by the change feed. Then the Azure Function will send the modified document to the event hub.
+
+1. Return to the repository that you cloned on your device.
+
+2. Right-click the file named **ChangeFeedLabSolution.sln** and select **Open With Visual Studio**.
+
+3. Navigate to **local.settings.json** in Visual Studio. Then use the values you recorded earlier to fill in the blanks.
+
+4. Navigate to **ChangeFeedProcessor.cs**. In the parameters for the **Run** function, perform the following actions:
+
+ * Replace the text **YOUR COLLECTION NAME HERE** with the name of your collection. If you followed earlier instructions, the name of your collection is changefeedlabcollection.
+ * Replace the text **YOUR LEASES COLLECTION NAME HERE** with the name of your leases collection. If you followed earlier instructions, the name of your leases collection is **leases**.
+ * At the top of Visual Studio, make sure that the Startup Project box on the left of the green arrow says **ChangeFeedFunction**.
+    * Select **Start** at the top of the page to run the program.
+ * You can confirm that the function is running when the console app says "Job host started".
+
+## Insert data into Azure Cosmos DB
+
+To see how the change feed processes new actions on an e-commerce site, you have to simulate data that represents users viewing items from the product catalog, adding those items to their carts, and purchasing the items in their carts. This data is arbitrary and replicates what data on an e-commerce site would look like.
+
+1. Navigate back to the repository in File Explorer, and right-click **ChangeFeedFunction.sln** to open it again in a new Visual Studio window.
+
+2. Navigate to the **App.config** file. Within the `<appSettings>` block, add the endpoint and the unique **PRIMARY KEY** of your Azure Cosmos DB account that you retrieved earlier.
+
+3. Add in the **collection** and **database** names. (These names should be **changefeedlabcollection** and **changefeedlabdatabase** unless you choose to name yours differently.)
+
+ :::image type="content" source="./media/changefeed-ecommerce-solution/update-connection-string.png" alt-text="Update connection strings":::
+
+4. Save the changes to all the files you edited.
+
+5. At the top of Visual Studio, make sure that the **Startup Project** box on the left of the green arrow says **DataGenerator**. Then select **Start** at the top of the page to run the program.
+
+6. Wait for the program to run. The stars mean that data is coming in! Keep the program running - it's important to collect a lot of data.
+
+7. If you navigate to the [Azure portal](https://portal.azure.com/), then to the Azure Cosmos DB account within your resource group, then to **Data Explorer**, you'll see the randomized data imported in your **changefeedlabcollection**.
+
+ :::image type="content" source="./media/changefeed-ecommerce-solution/data-generated-in-portal.png" alt-text="Data generated in portal":::
+
+## Set up a stream analytics job
+
+Azure Stream Analytics is a fully managed cloud service for real-time processing of streaming data. In this lab, you will use stream analytics to process new events from the event hub (when an item is viewed, added to a cart, or purchased), incorporate those events into real-time data analysis, and send them into Power BI for visualization.
+
+1. From the [Azure portal](https://portal.azure.com/), navigate to your resource group, then to **streamjob1** (the stream analytics job that you created in the prelab).
+
+2. Select **Inputs** as demonstrated below.
+
+ :::image type="content" source="./media/changefeed-ecommerce-solution/create-input.png" alt-text="Create input":::
+
+3. Select **+ Add stream input**. Then select **Event Hub** from the drop-down menu.
+
+4. Fill the new input form with the following details:
+
+   * In the **Input alias** field, enter **input**.
+   * Select the option for **Select Event Hub from your subscriptions**.
+   * Set the **Subscription** field to your subscription.
+   * In the **Event Hubs namespace** field, enter the name of the event hub namespace that you created during the prelab.
+   * In the **Event Hub name** field, select the option for **Use existing** and choose **event-hub1** from the drop-down menu.
+   * Leave the **Event Hub policy name** field set to its default value.
+   * Leave **Event serialization format** as **JSON**.
+   * Leave the **Encoding** field set to **UTF-8**.
+   * Leave the **Event compression type** field set to **None**.
+ * Select the **Save** button.
+
+5. Navigate back to the stream analytics job page, and select **Outputs**.
+
+6. Select **+ Add**. Then select **Power BI** from the drop-down menu.
+
+7. To create a new Power BI output to visualize average price, perform the following actions:
+
+ * In the **Output alias** field, enter **averagePriceOutput**.
+ * Leave the **Group workspace** field set to **Authorize connection to load workspaces**.
+ * In the **Dataset name** field, enter **averagePrice**.
+ * In the **Table name** field, enter **averagePrice**.
+ * Select the **Authorize** button, then follow the instructions to authorize the connection to Power BI.
+ * Select the **Save** button.
+
+8. Then go back to **streamjob1** and select **Edit query**.
+
+ :::image type="content" source="./media/changefeed-ecommerce-solution/edit-query.png" alt-text="Edit query":::
+
+9. Paste the following query into the query window. The **AVERAGE PRICE** query calculates the average price of all items that are viewed by users, the average price of all items that are added to users' carts, and the average price of all items that are purchased by users. This metric can help e-commerce companies decide what prices to sell items at and what inventory to invest in. For example, if the average price of items viewed is much higher than the average price of items purchased, then a company might choose to add less expensive items to its inventory.
+
+ ```sql
+ /*AVERAGE PRICE*/
+ SELECT System.TimeStamp AS Time, Action, AVG(Price)
+ INTO averagePriceOutput
+ FROM input
+ GROUP BY Action, TumblingWindow(second,5)
+ ```
+10. Then select **Save** in the upper left-hand corner.
+
+11. Now return to **streamjob1** and select the **Start** button at the top of the page. Azure Stream Analytics can take a few minutes to start up, but eventually you will see it change from "Starting" to "Running".
+
+## Connect to Power BI
+
+Power BI is a suite of business analytics tools to analyze data and share insights. It's a great example of how you can strategically visualize the analyzed data.
+
+1. Sign in to Power BI and navigate to **My Workspace** by opening the menu on the left-hand side of the page.
+
+2. Select **+ Create** in the top right-hand corner and then select **Dashboard** to create a dashboard.
+
+3. Select **+ Add tile** in the top right-hand corner.
+
+4. Select **Custom Streaming Data**, then select the **Next** button.
+
+5. Select **averagePrice** from **YOUR DATASETS**, then select **Next**.
+
+6. In the **Visualization Type** field, choose **Clustered bar chart** from the drop-down menu. Under **Axis**, add **Action**. Skip **Legend** without adding anything. Then, under the next section called **Value**, add **avg**. Select **Next**, then title your chart, and select **Apply**. You should see a new chart on your dashboard!
+
+7. Now, if you want to visualize more metrics, you can go back to **streamjob1** and create three more outputs with the following fields.
+
+    a. **Output alias**: incomingRevenueOutput, **Dataset name**: incomingRevenue, **Table name**: incomingRevenue
+    b. **Output alias**: top5Output, **Dataset name**: top5, **Table name**: top5
+    c. **Output alias**: uniqueVisitorCountOutput, **Dataset name**: uniqueVisitorCount, **Table name**: uniqueVisitorCount
+
+ Then select **Edit query** and paste the following queries **above** the one you already wrote.
+
+ ```sql
+ /*TOP 5*/
+ WITH Counter AS
+ (
+ SELECT Item, Price, Action, COUNT(*) AS countEvents
+ FROM input
+ WHERE Action = 'Purchased'
+ GROUP BY Item, Price, Action, TumblingWindow(second,30)
+ ),
+ top5 AS
+ (
+ SELECT DISTINCT
+ CollectTop(5) OVER (ORDER BY countEvents) AS topEvent
+ FROM Counter
+ GROUP BY TumblingWindow(second,30)
+ ),
+ arrayselect AS
+ (
+ SELECT arrayElement.ArrayValue
+ FROM top5
+ CROSS APPLY GetArrayElements(top5.topevent) AS arrayElement
+ )
+ SELECT arrayvalue.value.item, arrayvalue.value.price, arrayvalue.value.countEvents
+ INTO top5Output
+ FROM arrayselect
+
+ /*REVENUE*/
+ SELECT System.TimeStamp AS Time, SUM(Price)
+ INTO incomingRevenueOutput
+ FROM input
+ WHERE Action = 'Purchased'
+ GROUP BY TumblingWindow(hour, 1)
+
+ /*UNIQUE VISITORS*/
+ SELECT System.TimeStamp AS Time, COUNT(DISTINCT CartID) as uniqueVisitors
+ INTO uniqueVisitorCountOutput
+ FROM input
+ GROUP BY TumblingWindow(second, 5)
+ ```
+
+ The TOP 5 query calculates the top five items, ranked by the number of times that they have been purchased. This metric can help e-commerce companies evaluate which items are most popular and can influence the company's advertising, pricing, and inventory decisions.
+
+    The REVENUE query calculates revenue by summing up the prices of all items purchased in each one-hour tumbling window. This metric can help e-commerce companies evaluate their financial performance and understand which times of day contribute the most revenue. This can impact the overall company strategy, marketing in particular.
+
+    The UNIQUE VISITORS query calculates how many unique visitors are on the site every five seconds by detecting unique cart IDs. This metric can help e-commerce companies evaluate their site activity and strategize how to acquire more customers.
+
+8. You can now add tiles for these datasets as well.
+
+ * For Top 5, it would make sense to do a clustered column chart with the items as the axis and the count as the value.
+ * For Revenue, it would make sense to do a line chart with time as the axis and the sum of the prices as the value. The time window to display should be the largest possible in order to deliver as much information as possible.
+ * For Unique Visitors, it would make sense to do a card visualization with the number of unique visitors as the value.
+
+ This is how a sample dashboard looks with these charts:
+
+ :::image type="content" source="./media/changefeed-ecommerce-solution/visualizations.png" alt-text="Screenshot shows a sample dashboard with charts named Average Price of Items by Action, Unique Visitors, Revenue, and Top 5 Items Purchased.":::
+
+## Optional: Visualize with an E-commerce site
+
+You will now observe how you can use your new data analysis tool to connect with a real e-commerce site. To build the e-commerce site, you'll use an Azure Cosmos DB database to store the list of product categories, the product catalog, and a list of the most popular items.
+
+1. Navigate back to the [Azure portal](https://portal.azure.com/), then to your **Azure Cosmos DB account**, then to **Data Explorer**.
+
+ Add two collections under **changefeedlabdatabase** - **products** and **categories** with Fixed storage capacity.
+
+    Add another collection under **changefeedlabdatabase** named **topItems**, with **/Item** as the partition key.
+
+2. Select the **topItems** collection, and under **Scale and Settings** set the **Time to Live** to be **30 seconds** so that topItems updates every 30 seconds.
+
+ :::image type="content" source="./media/changefeed-ecommerce-solution/time-to-live.png" alt-text="Time to live":::
+
+3. In order to populate the **topItems** collection with the most frequently purchased items, navigate back to **streamjob1** and add a new **Output**. Select **Azure Cosmos DB**.
+
+4. Fill in the required fields as pictured below.
+
+ :::image type="content" source="./media/changefeed-ecommerce-solution/cosmos-output.png" alt-text="Azure Cosmos DB output":::
+
+5. If you added the optional TOP 5 query in the previous part of the lab, proceed to part 5a. If not, proceed to part 5b.
+
+ 5a. In **streamjob1**, select **Edit query** and paste the following query in your Azure Stream Analytics query editor below the TOP 5 query but above the rest of the queries.
+
+ ```sql
+ SELECT arrayvalue.value.item AS Item, arrayvalue.value.price, arrayvalue.value.countEvents
+ INTO topItems
+ FROM arrayselect
+ ```
+ 5b. In **streamjob1**, select **Edit query** and paste the following query in your Azure Stream Analytics query editor above all other queries.
+
+ ```sql
+ /*TOP 5*/
+ WITH Counter AS
+ (
+ SELECT Item, Price, Action, COUNT(*) AS countEvents
+ FROM input
+ WHERE Action = 'Purchased'
+ GROUP BY Item, Price, Action, TumblingWindow(second,30)
+ ),
+ top5 AS
+ (
+ SELECT DISTINCT
+ CollectTop(5) OVER (ORDER BY countEvents) AS topEvent
+ FROM Counter
+ GROUP BY TumblingWindow(second,30)
+ ),
+ arrayselect AS
+ (
+ SELECT arrayElement.ArrayValue
+ FROM top5
+ CROSS APPLY GetArrayElements(top5.topevent) AS arrayElement
+ )
+ SELECT arrayvalue.value.item AS Item, arrayvalue.value.price, arrayvalue.value.countEvents
+ INTO topItems
+ FROM arrayselect
+ ```
+
+6. Open **EcommerceWebApp.sln** and navigate to the **Web.config** file in the **Solution Explorer**.
+
+7. Within the `<appSettings>` block, add the **URI** and **PRIMARY KEY** that you saved earlier where it says **your URI here** and **your primary key here**. Then add in your **database name** and **collection name** as indicated. (These names should be **changefeedlabdatabase** and **changefeedlabcollection** unless you chose to name yours differently.)
+
+ Fill in your **products collection name**, **categories collection name**, and **top items collection name** as indicated. (These names should be **products, categories, and topItems** unless you chose to name yours differently.)
+
+8. Navigate to and open the **Checkout folder** within **EcommerceWebApp.sln.** Then open the **Web.config** file within that folder.
+
+9. Within the `<appSettings>` block, add the **URI** and **PRIMARY KEY** that you saved earlier, where indicated. Then, add in your **database name** and **collection name** as indicated. (These names should be **changefeedlabdatabase** and **changefeedlabcollection** unless you chose to name yours differently.)
+
+10. Press **Start** at the top of the page to run the program.
+
+11. Now you can play around on the e-commerce site. When you view an item, add an item to your cart, change the quantity of an item in your cart, or purchase an item, these events are passed through the Azure Cosmos DB change feed to the event hub, Stream Analytics, and then Power BI. We recommend continuing to run DataGenerator to generate significant web traffic data and provide a realistic set of "Hot Products" on the e-commerce site.
+
+## Delete the resources
+
+To delete the resources that you created during this lab, navigate to the resource group in the [Azure portal](https://portal.azure.com/), then select **Delete resource group** from the menu at the top of the page and follow the instructions provided.
+
+## Next steps
+
+* To learn more about change feed, see [Working with change feed support in Azure Cosmos DB](../change-feed.md).
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/cli-samples.md
+
+ Title: Azure CLI Samples for Azure Cosmos DB | Microsoft Docs
+description: This article lists several Azure CLI code samples available for interacting with Azure Cosmos DB. View API-specific CLI samples.
++++ Last updated : 08/19/2022+++
+keywords: cosmos-db, azure cli samples, azure cli code samples, azure cli script samples
++
+# Azure CLI samples for Azure Cosmos DB for NoSQL
++
+The following tables include links to sample Azure CLI scripts for Azure Cosmos DB for NoSQL and to sample Azure CLI scripts that apply to all Azure Cosmos DB APIs. Common samples are the same across all APIs.
+
+These samples require Azure CLI version 2.30 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If you're using Azure Cloud Shell, the latest version is already installed.
+
+## API for NoSQL Samples
+
+|Task | Description |
+|||
+| [Create an Azure Cosmos DB account, database, and container](../scripts/cli/sql/create.md)| Creates an Azure Cosmos DB account, database, and container for API for NoSQL. |
+| [Create a serverless Azure Cosmos DB account, database, and container](../scripts/cli/sql/serverless.md)| Creates a serverless Azure Cosmos DB account, database, and container for API for NoSQL. |
+| [Create an Azure Cosmos DB account, database, and container with autoscale](../scripts/cli/sql/autoscale.md)| Creates an Azure Cosmos DB account, database, and container with autoscale for API for NoSQL. |
+| [Perform throughput operations](../scripts/cli/sql/throughput.md) | Read, update, and migrate between autoscale and standard throughput on a database and container.|
+| [Lock resources from deletion](../scripts/cli/sql/lock.md)| Prevent resources from being deleted with resource locks.|
+|||
+
+## Common API Samples
+
+These samples apply to all Azure Cosmos DB APIs. These samples use an API for NoSQL account, but the operations are identical across all database APIs in Azure Cosmos DB.
+
+|Task | Description |
+|||
+| [Add or fail over regions](../scripts/cli/common/regions.md) | Add a region, change failover priority, trigger a manual failover.|
+| [Perform account key operations](../scripts/cli/common/keys.md) | List account keys, read-only keys, regenerate keys and list connection strings.|
+| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create an Azure Cosmos DB account with IP firewall configured.|
+| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create an Azure Cosmos DB account and secure with service-endpoints.|
+| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update an Azure Cosmos DB account to secure with service-endpoints when the subnet is eventually configured.|
+| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.|
+|||
+
+## Next steps
+
+Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb).
+
+For Azure CLI samples for other APIs, see:
+
+- [CLI Samples for Cassandra](../cassandr)
+- [CLI Samples for Gremlin](../graph/cli-samples.md)
+- [CLI Samples for API for MongoDB](../mongodb/cli-samples.md)
+- [CLI Samples for Table](../table/cli-samples.md)
cosmos-db Conceptual Resilient Sdk Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/conceptual-resilient-sdk-applications.md
+
+ Title: Designing resilient applications with Azure Cosmos DB SDKs
+description: Learn how to build resilient applications using the Azure Cosmos DB SDKs and which error status codes you should retry on.
++ Last updated : 05/05/2022++++
+# Designing resilient applications with Azure Cosmos DB SDKs
+
+When authoring client applications that interact with Azure Cosmos DB through any of the SDKs, it's important to understand a few key fundamentals. This article is a design guide to help you understand these fundamentals and design resilient applications.
+
+## Overview
+
+For a video overview of the concepts discussed in this article, see:
+
+> [!VIDEO https://www.youtube.com/embed/McZIQhZpvew?start=118]
+>
+
+## Connectivity modes
+
+Azure Cosmos DB SDKs can connect to the service in two [connectivity modes](sdk-connection-modes.md). The .NET and Java SDKs can connect to the service in both Gateway and Direct mode, while the others can only connect in Gateway mode. Gateway mode uses the HTTP protocol and Direct mode uses the TCP protocol.
+
+Gateway mode is always used to fetch metadata such as the account, container, and routing information, regardless of which mode the SDK is configured to use. This information is cached in memory and is used to connect to the [service replicas](../partitioning-overview.md#replica-sets).
+
+In summary, for SDKs in Gateway mode, you can expect HTTP traffic, while for SDKs in Direct mode, you can expect a combination of HTTP and TCP traffic depending on the circumstances (such as initialization, fetching metadata, or routing information).
+
+## Client instances and connections
+
+Regardless of the connectivity mode, it's critical to maintain a Singleton instance of the SDK client per account per application. Connections, both HTTP and TCP, are scoped to the client instance. Most compute environments have limitations in terms of the number of connections that can be open at the same time and when these limits are reached, connectivity will be affected.
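+
+As a minimal sketch with the Java SDK v4 (the `COSMOS_ENDPOINT` and `COSMOS_KEY` environment variable names are assumptions), a singleton client can be exposed like this:
+
+```java
+import com.azure.cosmos.ConsistencyLevel;
+import com.azure.cosmos.CosmosClient;
+import com.azure.cosmos.CosmosClientBuilder;
+
+public final class CosmosClientProvider {
+
+    // One client instance per account for the lifetime of the application,
+    // so HTTP and TCP connections are pooled and reused.
+    private static final CosmosClient CLIENT = new CosmosClientBuilder()
+            .endpoint(System.getenv("COSMOS_ENDPOINT"))
+            .key(System.getenv("COSMOS_KEY"))
+            .consistencyLevel(ConsistencyLevel.SESSION)
+            .buildClient();
+
+    private CosmosClientProvider() {
+    }
+
+    public static CosmosClient getClient() {
+        return CLIENT;
+    }
+}
+```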
+
+## Distributed applications and networks
+
+When you design distributed applications, there are three key components:
+
+* Your application and the environment it runs on.
+* The network, which includes any component between your application and the Azure Cosmos DB service endpoint.
+* The Azure Cosmos DB service endpoint.
+
+When failures occur, they often fall into one of these three areas, and it's important to understand that due to the distributed nature of the system, it's impractical to expect 100% availability for any of these components.
+
+Azure Cosmos DB has a [comprehensive set of availability SLAs](../high-availability.md#slas), but none of them is 100%. The network components that connect your application to the service endpoint can have transient hardware issues and lose packets. Even the compute environment where your application runs could have a CPU spike affecting operations. These failure conditions can affect the operations of the SDKs and normally surface as errors with particular codes.
+
+Your application should be resilient to a [certain degree](#when-to-contact-customer-support) of potential failures across these components by implementing [retry policies](#should-my-application-retry-on-errors) over the responses provided by the SDKs.
+
+## Should my application retry on errors?
+
+The short answer is **yes**, but it doesn't make sense to retry on all errors: some error or status codes aren't transient. The table below describes them in detail:
+
+| Status Code | Should add retry | SDKs retry | Description |
+|-|-|-|-|
+| 400 | No | No | [Bad request](troubleshoot-bad-request.md) |
+| 401 | No | No | [Not authorized](troubleshoot-unauthorized.md) |
+| 403 | Optional | No | [Forbidden](troubleshoot-forbidden.md) |
+| 404 | No | No | [Resource is not found](troubleshoot-not-found.md) |
+| 408 | Yes | Yes | [Request timed out](troubleshoot-dotnet-sdk-request-timeout.md) |
+| 409 | No | No | Conflict failure is when the identity (ID and partition key) provided for a resource on a write operation has been taken by an existing resource or when a [unique key constraint](../unique-keys.md) has been violated. |
+| 410 | Yes | Yes | Gone exceptions (transient failure that shouldn't violate SLA) |
+| 412 | No | No | Precondition failure is where the operation specified an eTag that is different from the version available at the server. It's an [optimistic concurrency](database-transactions-optimistic-concurrency.md#optimistic-concurrency-control) error. Retry the request after reading the latest version of the resource and updating the eTag on the request. |
+| 413 | No | No | [Request Entity Too Large](../concepts-limits.md#per-item-limits) |
+| 429 | Yes | Yes | It's safe to retry on a 429. Review the [guide to troubleshoot HTTP 429](troubleshoot-request-rate-too-large.md).|
+| 449 | Yes | Yes | Transient error that only occurs on write operations, and is safe to retry. This can point to a design issue where too many concurrent operations are trying to update the same object in Azure Cosmos DB. |
+| 500 | No | No | The operation failed due to an unexpected service error. Contact support by filing an [Azure support issue](https://aka.ms/azure-support). |
+| 503 | Yes | Yes | [Service unavailable](troubleshoot-service-unavailable.md) |
+
+In the table above, all the status codes marked with **Yes** on the second column should have some degree of retry coverage in your application.
+
+### HTTP 403
+
+The Azure Cosmos DB SDKs don't retry on HTTP 403 failures in general, but there are certain errors associated with HTTP 403 that your application might decide to react to. For example, if you receive an error indicating that [a Partition Key is full](troubleshoot-forbidden.md#partition-key-exceeding-storage), you might decide to alter the partition key of the document you're trying to write based on some business rule.
+
+### HTTP 429
+
+The Azure Cosmos DB SDKs retry on HTTP 429 errors by default, following the client configuration and honoring the service's `x-ms-retry-after-ms` response header: they wait the indicated time and then retry.
+
+When the SDK retries are exceeded, the error is returned to your application. Ideally, your application can inspect the `x-ms-retry-after-ms` header in the response and use it as a hint for how long to wait before retrying the request. Alternatives are an exponential back-off algorithm or configuring the client to extend the retries on HTTP 429.
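+
+As a sketch, the Java SDK v4 exposes `ThrottlingRetryOptions` for extending the built-in HTTP 429 retries; the limits below are illustrative values, not recommendations:
+
+```java
+import java.time.Duration;
+
+import com.azure.cosmos.CosmosClient;
+import com.azure.cosmos.CosmosClientBuilder;
+import com.azure.cosmos.ThrottlingRetryOptions;
+
+public final class ThrottlingRetryConfiguration {
+
+    public static CosmosClient buildClient(String endpoint, String key) {
+        // Extend the SDK's automatic HTTP 429 retries: up to 9 attempts,
+        // waiting no more than 30 seconds in total across retries.
+        ThrottlingRetryOptions retryOptions = new ThrottlingRetryOptions()
+                .setMaxRetryAttemptsOnThrottledRequests(9)
+                .setMaxRetryWaitTime(Duration.ofSeconds(30));
+
+        return new CosmosClientBuilder()
+                .endpoint(endpoint)
+                .key(key)
+                .throttlingRetryOptions(retryOptions)
+                .buildClient();
+    }
+}
+```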
+
+### HTTP 449
+
+The Azure Cosmos DB SDKs will retry on HTTP 449 with an incremental back-off during a fixed period of time to accommodate most scenarios.
+
+When the automatic SDK retries are exceeded, the error is returned to your application. HTTP 449 errors can be safely retried. Because of the highly concurrent nature of write operations, it's better to have a random back-off algorithm to avoid repeating the same degree of concurrency after a fixed interval.
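+
+For instance, a minimal back-off helper with jitter (plain Java, no SDK assumptions) spreads out concurrent retries so that writers don't collide again on the same schedule:
+
+```java
+import java.time.Duration;
+import java.util.concurrent.ThreadLocalRandom;
+
+public final class Backoff {
+
+    private Backoff() {
+    }
+
+    // Exponential back-off capped at 8 seconds, with random jitter so that
+    // concurrent writers don't retry with the same timing pattern.
+    public static Duration withJitter(int attempt) {
+        long baseMillis = Math.min(1000L * (1L << Math.min(attempt, 3)), 8000L);
+        long jitterMillis = ThreadLocalRandom.current().nextLong(baseMillis / 2, baseMillis + 1);
+        return Duration.ofMillis(jitterMillis);
+    }
+}
+```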
+
+### Timeouts and connectivity related failures (HTTP 408/503)
+
+Network timeouts and connectivity failures are among the most common errors. The Azure Cosmos DB SDKs are themselves resilient and will retry timeouts and connectivity issues across the HTTP and TCP protocols if the retry is feasible:
+
+* For read operations, the SDKs will retry any timeout or connectivity related error.
+* For write operations, the SDKs will **not** retry because these operations are **not idempotent**. When a timeout occurs waiting for the response, it's not possible to know if the request reached the service.
+
+If the account has multiple regions available, the SDKs will also attempt a [cross-region retry](troubleshoot-sdk-availability.md#transient-connectivity-issues-on-tcp-protocol).
+
+Because of the nature of timeouts and connectivity failures, these might not appear in your [account metrics](../monitor.md), as they only cover failures happening on the service side.
+
+It's recommended that applications have their own retry policy for these scenarios and take into consideration how to resolve write timeouts. For example, retrying on a Create timeout can yield an HTTP 409 (Conflict) if the previous request did reach the service, but it would succeed if it didn't.
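+
+A minimal sketch of such an application-level policy with the Java SDK v4: reads and writes that fail with 408 or 503 are retried, and a 409 returned by a retried create is treated as evidence that the original, timed-out attempt reached the service. Whether that assumption is safe depends on your data model, so treat this as a starting point rather than a drop-in policy.
+
+```java
+import com.azure.cosmos.CosmosContainer;
+import com.azure.cosmos.CosmosException;
+
+public final class WriteRetryPolicy {
+
+    private WriteRetryPolicy() {
+    }
+
+    public static <T> void createWithRetry(CosmosContainer container, T item, int maxAttempts) {
+        for (int attempt = 1; ; attempt++) {
+            try {
+                container.createItem(item);
+                return;
+            } catch (CosmosException e) {
+                if (e.getStatusCode() == 409 && attempt > 1) {
+                    // A previous timed-out attempt most likely reached the service; the item exists.
+                    return;
+                }
+                boolean transientError = e.getStatusCode() == 408 || e.getStatusCode() == 503;
+                if (!transientError || attempt >= maxAttempts) {
+                    throw e;
+                }
+                try {
+                    // Back off briefly before retrying; tune the delay for your workload.
+                    Thread.sleep(200L * attempt);
+                } catch (InterruptedException interrupted) {
+                    Thread.currentThread().interrupt();
+                    throw e;
+                }
+            }
+        }
+    }
+}
+```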
+
+### Language specific implementation details
+
+For further implementation details regarding a language see:
+
+* [.NET SDK implementation information](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/docs/)
+* [Java SDK implementation information](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos/docs/)
+
+## Do retries affect my latency?
+
+From the client perspective, any retries will affect the end to end latency of an operation. When your application P99 latency is being affected, understanding the retries that are happening and how to address them is important.
+
+Azure Cosmos DB SDKs provide detailed information in their logs and diagnostics that can help identify which retries are taking place. For more information, see [how to collect .NET SDK diagnostics](troubleshoot-dotnet-sdk-slow-request.md#capture-diagnostics) and [how to collect Java SDK diagnostics](troubleshoot-java-sdk-v4.md#capture-the-diagnostics).
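+
+As an illustrative sketch with the Java SDK v4, you could capture the diagnostics string only for requests that exceed a latency threshold; the 500 ms threshold is an assumption, and the diagnostics are simply printed here:
+
+```java
+import java.time.Duration;
+
+import com.azure.cosmos.CosmosContainer;
+import com.azure.cosmos.models.CosmosItemResponse;
+
+public final class DiagnosticsLogging {
+
+    private static final Duration SLOW_REQUEST_THRESHOLD = Duration.ofMillis(500);
+
+    private DiagnosticsLogging() {
+    }
+
+    public static <T> CosmosItemResponse<T> createAndLog(CosmosContainer container, T item) {
+        CosmosItemResponse<T> response = container.createItem(item);
+        // Capture the per-request diagnostics only when the call was slow,
+        // so retries and transit times can be inspected afterwards.
+        if (response.getDuration().compareTo(SLOW_REQUEST_THRESHOLD) > 0) {
+            System.out.println("Slow request diagnostics: " + response.getDiagnostics());
+        }
+        return response;
+    }
+}
+```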
+
+## What about regional outages?
+
+The Azure Cosmos DB SDKs cover regional availability and can perform retries on other account regions. Refer to the [multiregional environments retry scenarios and configurations](troubleshoot-sdk-availability.md) article to understand which scenarios involve other regions.
+
+## When to contact customer support
+
+Before contacting customer support, go through these steps:
+
+* What is the impact measured in volume of operations affected compared to the operations succeeding? Is it within the service SLAs?
+* Is the P99 latency affected?
+* Are the failures related to [error codes](#should-my-application-retry-on-errors) that my application should retry on and does the application cover such retries?
+* Are the failures affecting all your application instances or only a subset? When the issue is reduced to a subset of instances, it's commonly a problem related to those instances.
+* Have you gone through the related troubleshooting documents in the above table to rule out a problem on the application environment?
+
+If all the application instances are affected, or the percentage of affected operations is outside service SLAs, or affecting your own application SLAs and P99s, contact customer support.
+
+## Next steps
+
+* Learn about [multiregional environments retry scenarios and configurations](troubleshoot-sdk-availability.md)
+* Review the [Availability SLAs](../high-availability.md#slas)
+* Use the latest [.NET SDK](sdk-dotnet-v3.md)
+* Use the latest [Java SDK](sdk-java-v4.md)
+* Use the latest [Python SDK](sdk-python.md)
+* Use the latest [Node SDK](sdk-nodejs.md)
cosmos-db Couchbase Cosmos Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/couchbase-cosmos-migration.md
+
+ Title: 'Migrate from CouchBase to Azure Cosmos DB for NoSQL'
+description: Step-by-Step guidance for migrating from CouchBase to Azure Cosmos DB for NoSQL
+++ Last updated : 02/11/2020+++++
+# Migrate from CouchBase to Azure Cosmos DB for NoSQL
+
+Azure Cosmos DB is a scalable, globally distributed, fully managed database. It provides guaranteed low latency access to your data. To learn more about Azure Cosmos DB, see the [overview](../introduction.md) article. This article provides instructions to migrate Java applications that are connected to Couchbase to an API for NoSQL account in Azure Cosmos DB.
+
+## Differences in nomenclature
+
+The following table maps key concepts in Couchbase to their Azure Cosmos DB equivalents:
+
+| Couchbase | Azure Cosmos DB |
+|--|--|
+| Couchbase server | Account |
+| Bucket | Database |
+| Bucket | Container/Collection |
+| JSON Document | Item / Document |
+
+## Key differences
+
+* Azure Cosmos DB has an "ID" field within the document, whereas Couchbase has the ID as a part of the bucket. The "ID" field is unique within each logical partition.
+
+* Azure Cosmos DB scales by using partitioning or sharding, which means it splits the data into multiple shards/partitions. These partitions/shards are created based on the partition key property that you provide. You can select a partition key that optimizes read operations, write operations, or both. To learn more, see the [partitioning](../partitioning-overview.md) article.
+
+* In Azure Cosmos DB, the top-level hierarchy doesn't need to denote the collection, because the collection name already exists. This feature makes the JSON structure much simpler. The following example shows the differences in the data model between Couchbase and Azure Cosmos DB:
+
+ **Couchbase**: Document ID = "99FF4444"
+
+ ```json
+ {
+ "TravelDocument":
+ {
+ "Country":"India",
+ "Validity" : "2022-09-01",
+ "Person":
+ {
+ "Name": "Manish",
+ "Address": "AB Road, City-z"
+ },
+ "Visas":
+ [
+ {
+ "Country":"India",
+ "Type":"Multi-Entry",
+ "Validity":"2022-09-01"
+ },
+ {
+ "Country":"US",
+ "Type":"Single-Entry",
+ "Validity":"2022-08-01"
+ }
+ ]
+ }
+ }
+ ```
+
+    **Azure Cosmos DB**: Refer to the "id" field within the document, as shown below:
+
+ ```json
+ {
+ "id" : "99FF4444",
+
+ "Country":"India",
+ "Validity" : "2022-09-01",
+ "Person":
+ {
+ "Name": "Manish",
+ "Address": "AB Road, City-z"
+ },
+ "Visas":
+ [
+ {
+ "Country":"India",
+ "Type":"Multi-Entry",
+ "Validity":"2022-09-01"
+ },
+ {
+ "Country":"US",
+ "Type":"Single-Entry",
+ "Validity":"2022-08-01"
+ }
+ ]
+ }
+
+ ```
+
+## Java SDK support
+
+Azure Cosmos DB has the following SDKs to support different Java frameworks:
+
+* Async SDK
+* Spring Boot SDK
+
+The following sections describe when to use each of these SDKs, using an example with three types of workloads:
+
+## Couchbase as document repository & spring data-based custom queries
+
+If the workload that you are migrating is based on Spring Boot Based SDK, then you can use the following steps:
+
+1. Add parent to the POM.xml file:
+
+    ```xml
+ <parent>
+ <groupId>org.springframework.boot</groupId>
+ <artifactId>spring-boot-starter-parent</artifactId>
+ <version>2.1.5.RELEASE</version>
+ <relativePath/>
+ </parent>
+ ```
+
+1. Add properties to the POM.xml file:
+
+    ```xml
+ <azure.version>2.1.6</azure.version>
+ ```
+
+1. Add dependencies to the POM.xml file:
+
+    ```xml
+ <dependency>
+ <groupId>com.microsoft.azure</groupId>
+ <artifactId>azure-cosmosdb-spring-boot-starter</artifactId>
+ <version>2.1.6</version>
+ </dependency>
+ ```
+
+1. Add application properties under resources and specify the following. Make sure to replace the URL, key, and database name parameters:
+
+    ```properties
+ azure.cosmosdb.uri=<your-cosmosDB-URL>
+ azure.cosmosdb.key=<your-cosmosDB-key>
+ azure.cosmosdb.database=<your-cosmosDB-dbName>
+ ```
+
+1. Define the name of the collection in the model. You can also specify further annotations, for example, to denote the ID and partition key explicitly:
+
+ ```java
+    @Document(collection = "mycollection")
+    public class User {
+        @Id
+        private String id;
+        private String firstName;
+        @PartitionKey
+        private String lastName;
+    }
+ ```
+
+The following are the code snippets for CRUD operations:
+
+### Insert and update operations
+
+Where *_repo* is the repository object and *doc* is the POJO class's object. You can use `.save` to insert or upsert (if a document with the specified ID is found). The following code snippet shows how to insert or update a doc object:
+
+`_repo.save(doc);`
+
+### Delete Operation
+
+Consider the following code snippet, where the doc object must have the ID and partition key set so that the object can be located and deleted:
+
+`_repo.delete(doc);`
+
+### Read Operation
+
+You can read the document with or without specifying the partition key. If you don't specify the partition key, the query is treated as a cross-partition query. Consider the following code samples: the first one performs the operation using the ID and the partition key field, and the second one uses a regular field without specifying the partition key field.
+
+* ```_repo.findByIdAndName(objDoc.getId(),objDoc.getName());```
+* ```_repo.findAllByStatus(objDoc.getStatus());```
+
+That's it, you can now use your application with Azure Cosmos DB. The complete code sample for the example described in this doc is available in the [CouchbaseToCosmosDB-SpringCosmos](https://github.com/Azure-Samples/couchbaseTocosmosdb/tree/main/SpringCosmos) GitHub repo.
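+
+For reference, here's a minimal sketch of the repository interface that the `_repo` calls above assume. The Azure Spring Data starter provides its own Cosmos-specific base repository interface whose name depends on the SDK version, so this sketch simply extends Spring Data's generic `CrudRepository`, and it assumes the model also exposes `name` and `status` properties matching the derived query methods:
+
+```java
+import org.springframework.data.repository.CrudRepository;
+import org.springframework.stereotype.Repository;
+
+@Repository
+public interface UserRepository extends CrudRepository<User, String> {
+
+    // Derived queries: Spring Data builds the lookups from the method names.
+    User findByIdAndName(String id, String name);
+
+    Iterable<User> findAllByStatus(String status);
+}
+```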
+
+## Couchbase as a document repository & using N1QL queries
+
+N1QL is the query language used to define queries in Couchbase. The following table compares an N1QL query to its Azure Cosmos DB equivalent:
+
+|N1QL Query | Azure Cosmos DB Query|
+|-|-|
+|SELECT META(`TravelDocument`).id AS id, `TravelDocument`.* FROM `TravelDocument` WHERE `_type` = "com.xx.xx.xx.xxx.xxx.xxxx" and country = 'India' and ANY m in Visas SATISFIES m.type == 'Multi-Entry' and m.Country IN ['India', 'Bhutan'] ORDER BY `Validity` DESC LIMIT 25 OFFSET 0 | SELECT c.id, c FROM c JOIN m IN c.Visas WHERE c._type = "com.xx.xx.xx.xxx.xxx.xxxx" and c.country = 'India' and m.type = 'Multi-Entry' and m.Country IN ('India', 'Bhutan') ORDER BY c.Validity DESC OFFSET 0 LIMIT 25 |
+
+You can notice the following changes in your N1QL queries:
+
+* You don't need to use the META keyword or refer to the first-level document. Instead, you can create your own reference to the container. In this example, it's named "c" (it can be anything). This reference is used as a prefix for all the first-level fields, for example, c.id, c.country, and so on.
+
+* Instead of "ANY", you can now do a join on the subdocument and refer to it with a dedicated alias such as "m". Once you've created an alias for a subdocument, you need to use the alias. For example, m.Country.
+
+* The sequence of OFFSET is different in an Azure Cosmos DB query: first you specify OFFSET, then LIMIT.
+
+It's recommended not to use the Spring Data SDK if you mostly use custom-defined queries, because it can add unnecessary overhead at the client side while passing the query to Azure Cosmos DB. Instead, use the direct Async Java SDK, which can be used much more efficiently in this case.
+
+### Read operation
+
+Use the Async Java SDK with the following steps:
+
+1. Configure the following dependency onto the POM.xml file:
+
+    ```xml
+ <!-- https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb -->
+ <dependency>
+ <groupId>com.microsoft.azure</groupId>
+ <artifactId>azure-cosmos</artifactId>
+ <version>3.0.0</version>
+ </dependency>
+ ```
+
+1. Create a connection object for Azure Cosmos DB by using the `ConnectionBuilder` method as shown in the following example. Make sure you put this declaration into a bean so that the following code is executed only once:
+
+ ```java
+ ConnectionPolicy cp=new ConnectionPolicy();
+ cp.connectionMode(ConnectionMode.DIRECT);
+
+ if(client==null)
+ client= CosmosClient.builder()
+ .endpoint(Host)//(Host, PrimaryKey, dbName, collName).Builder()
+ .connectionPolicy(cp)
+ .key(PrimaryKey)
+ .consistencyLevel(ConsistencyLevel.EVENTUAL)
+ .build();
+
+ container = client.getDatabase(_dbName).getContainer(_collName);
+ ```
+
+1. To execute the query, you need to run the following code snippet:
+
+ ```java
+ Flux<FeedResponse<CosmosItemProperties>> objFlux= container.queryItems(query, fo);
+ ```
+
+Now, with the help of the above method, you can pass multiple queries and execute them without any hassle. In case you need to execute one large query that can be split into multiple queries, try the following code snippet instead of the previous one:
+
+```java
+for(SqlQuerySpec query:queries)
+{
+ objFlux= container.queryItems(query, fo);
+ objFlux .publishOn(Schedulers.elastic())
+ .subscribe(feedResponse->
+ {
+ if(feedResponse.results().size()>0)
+ {
+ _docs.addAll(feedResponse.results());
+ }
+
+ },
+ Throwable::printStackTrace,latch::countDown);
+ lstFlux.add(objFlux);
+}
+
+Flux.merge(lstFlux);
+latch.await();
+```
+
+With the previous code, you can run queries in parallel and increase the number of distributed executions to optimize throughput. You can also run insert and update operations:
+
+### Insert operation
+
+To insert the document, run the following code:
+
+```java
+Mono<CosmosItemResponse> objMono= container.createItem(doc,ro);
+```
+
+Then subscribe to Mono as:
+
+```java
+CountDownLatch latch=new CountDownLatch(1);
+objMono .subscribeOn(Schedulers.elastic())
+ .subscribe(resourceResponse->
+ {
+ if(resourceResponse.statusCode()!=successStatus)
+ {
+ throw new RuntimeException(resourceResponse.toString());
+ }
+ },
+ Throwable::printStackTrace,latch::countDown);
+latch.await();
+```
+
+### Upsert operation
+
+The upsert operation requires you to specify the document that needs to be updated. To fetch the complete document, you can use the snippet mentioned under the read operation heading and then modify the required field(s). The following code snippet upserts the document:
+
+```java
+Mono<CosmosItemResponse> obs= container.upsertItem(doc, ro);
+```
+Then subscribe to the Mono. Refer to the Mono subscription snippet in the insert operation.
+
+### Delete operation
+
+The following snippet performs the delete operation:
+
+```java
+CosmosItem objItem= container.getItem(doc.Id, doc.Tenant);
+Mono<CosmosItemResponse> objMono = objItem.delete(ro);
+```
+
+Then subscribe to the Mono; refer to the Mono subscription snippet in the insert operation. The complete code sample is available in the [CouchbaseToCosmosDB-AsyncInSpring](https://github.com/Azure-Samples/couchbaseTocosmosdb/tree/main/AsyncInSpring) GitHub repo.
+
+## Couchbase as a key/value pair
+
+This is a simple type of workload in which you can perform lookups instead of queries. Use the following steps for key/value pairs:
+
+1. Consider having "/ID" as the primary key, which makes sure you can perform the lookup operation directly in the specific partition. Create a collection and specify "/ID" as the partition key.
+
+1. Switch off indexing completely. Because you'll execute lookup operations, there's no point in carrying the indexing overhead. To turn off indexing, sign in to the Azure portal and go to your Azure Cosmos DB account. Open the **Data Explorer**, select your **Database** and the **Container**. Open the **Scale & Settings** tab and select the **Indexing Policy**. Currently the indexing policy looks like the following:
+
+ ```json
+ {
+ "indexingMode": "consistent",
+ "automatic": true,
+ "includedPaths": [
+ {
+ "path": "/*"
+ }
+ ],
+ "excludedPaths": [
+ {
+ "path": "/\"_etag\"/?"
+ }
+ ]
+ }
+    ```
+
+ Replace the above indexing policy with the following policy:
+
+ ```json
+ {
+ "indexingMode": "none",
+ "automatic": false,
+ "includedPaths": [],
+ "excludedPaths": []
+ }
+ ```
+
+1. Use the following code snippet to create the connection object, to be placed in a @Bean or made static:
+
+ ```java
+ ConnectionPolicy cp=new ConnectionPolicy();
+ cp.connectionMode(ConnectionMode.DIRECT);
+
+ if(client==null)
+ client= CosmosClient.builder()
+ .endpoint(Host)//(Host, PrimaryKey, dbName, collName).Builder()
+ .connectionPolicy(cp)
+ .key(PrimaryKey)
+ .consistencyLevel(ConsistencyLevel.EVENTUAL)
+ .build();
+
+ container = client.getDatabase(_dbName).getContainer(_collName);
+ ```
+
+Now you can execute the CRUD operations as follows:
+
+### Read operation
+
+To read the item, use the following snippet:
+
+```java
+CosmosItemRequestOptions ro=new CosmosItemRequestOptions();
+ro.partitionKey(new PartitionKey(documentId));
+CountDownLatch latch=new CountDownLatch(1);
+
+var objCosmosItem= container.getItem(documentId, documentId);
+Mono<CosmosItemResponse> objMono = objCosmosItem.read(ro);
+objMono .subscribeOn(Schedulers.elastic())
+ .subscribe(resourceResponse->
+ {
+ if(resourceResponse.item()!=null)
+ {
+ doc= resourceResponse.properties().toObject(UserModel.class);
+ }
+ },
+ Throwable::printStackTrace,latch::countDown);
+latch.await();
+```
+
+### Insert operation
+
+To insert an item, run the following code:
+
+```java
+Mono<CosmosItemResponse> objMono= container.createItem(doc,ro);
+```
+
+Then subscribe to mono as:
+
+```java
+CountDownLatch latch=new CountDownLatch(1);
+objMono.subscribeOn(Schedulers.elastic())
+ .subscribe(resourceResponse->
+ {
+ if(resourceResponse.statusCode()!=successStatus)
+ {
+ throw new RuntimeException(resourceResponse.toString());
+ }
+ },
+ Throwable::printStackTrace,latch::countDown);
+latch.await();
+```
+
+### Upsert operation
+
+To update the value of an item, refer to the code snippet below:
+
+```java
+Mono<CosmosItemResponse> obs= container.upsertItem(doc, ro);
+```
+Then subscribe to the Mono; refer to the Mono subscription snippet in the insert operation.
+
+### Delete operation
+
+Use the following snippet to execute the delete operation:
+
+```java
+CosmosItem objItem= container.getItem(id, id);
+Mono<CosmosItemResponse> objMono = objItem.delete(ro);
+```
+
+Then subscribe to the Mono; refer to the Mono subscription snippet in the insert operation. The complete code sample is available in the [CouchbaseToCosmosDB-AsyncKeyValue](https://github.com/Azure-Samples/couchbaseTocosmosdb/tree/main/AsyncKeyValue) GitHub repo.
+
+## Data Migration
+
+There are two ways to migrate data.
+
+* **Use Azure Data Factory:** This is the most recommended method to migrate the data. Configure the source as Couchbase and the sink as Azure Cosmos DB for NoSQL. For detailed steps, see the [Azure Cosmos DB Data Factory connector](../../data-factory/connector-azure-cosmos-db.md) article.
+
+* **Use the Azure Cosmos DB data import tool:** This option is recommended for migrating smaller amounts of data by using VMs. For detailed steps, see the [Data importer](../import-data.md) article.
+
+## Next Steps
+
+* To do performance testing, see the [Performance and scale testing with Azure Cosmos DB](./performance-testing.md) article.
+* To optimize the code, see the [Performance tips for Azure Cosmos DB](./performance-tips-async-java.md) article.
+* Explore the Java Async V3 SDK in the [SDK reference](https://github.com/Azure/azure-cosmosdb-java/tree/v3) GitHub repo.
cosmos-db Create Support Request Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/create-support-request-quota-increase.md
+
+ Title: How to request quota increase for Azure Cosmos DB resources
+description: Learn how to request a quota increase for Azure Cosmos DB resources. You will also learn how to enable a subscription to access a region.
+++++ Last updated : 04/27/2022++
+# How to request quota increase for Azure Cosmos DB resources
+
+The resources in Azure Cosmos DB have [default quotas/limits](../concepts-limits.md). However, there may be a case where your workload needs more quota than the default value. In that case, you must reach out to the Azure Cosmos DB team to request a quota increase. This article explains how to request a quota increase for Azure Cosmos DB resources. You'll also learn how to enable a subscription to access a region.
+
+## Create a new support request
+
+To request a quota increase, you must create a new support request with your workload details. The Azure Cosmos DB team will then evaluate your request and approve it. Use the following steps to create a new support request from the Azure portal:
+
+1. Sign in to the Azure portal.
+
+1. From the left-hand menu, select **Help + support** and then select **New support request**.
+
+1. In the **Basics** tab, fill in the following details:
+
+ * For **Issue type**, select **Service and subscription limits (quotas)**
+ * For **Subscription**, select the subscription for which you want to increase the quota.
+ * For **Quota type**, select **Azure Cosmos DB**
+
+ :::image type="content" source="./media/create-support-request-quota-increase/create-quota-increase-request.png" alt-text="Create a new Azure Cosmos DB support request for quota increase":::
+
+1. In the **Details** tab, enter the details corresponding to your quota request. The information provided on this tab will be used to further assess your issue and help the support engineer troubleshoot the problem.
+
+1. Fill the following details in this form:
+
+    * **Description**: Provide a short description of your request, such as your workload, why the default values aren't sufficient, and any error messages you're observing.
+
+    * **Quota specific fields**: Provide the requested information for your specific quota request.
+
+ * **File upload**: Upload the diagnostic files or any other files that you think are relevant to the support request. To learn more on the file upload guidance, see the [Azure support](../../azure-portal/supportability/how-to-manage-azure-support-request.md#upload-files) article.
+
+ * **Severity**: Choose one of the available severity levels based on the business impact.
+
+ * **Preferred contact method**: You can either choose to be contacted over **Email** or by **Phone**.
+
+1. Fill out the remaining details such as your availability, support language, contact information, email, and phone number on the form.
+
+1. Select **Next: Review+Create**. Validate the information provided and select **Create** to create a support request.
+
+Within 24 hours, the Azure Cosmos DB support team will evaluate your request and get back to you.
+
+## Next steps
+
+* See the [Azure Cosmos DB default service quotas](../concepts-limits.md)
cosmos-db Create Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/create-website.md
+
+ Title: Deploy a web app with a template - Azure Cosmos DB
+description: Learn how to deploy an Azure Cosmos DB account, Azure App Service Web Apps, and a sample web application using an Azure Resource Manager template.
+++++ Last updated : 06/19/2020+++
+# Deploy Azure Cosmos DB and Azure App Service with a web app from GitHub using an Azure Resource Manager Template
+
+This tutorial shows you how to do a "no touch" deployment of a web application that connects to Azure Cosmos DB on first run, without having to cut and paste any connection information from Azure Cosmos DB into `appsettings.json` or into the Azure App Service application settings in the Azure portal. All these actions are accomplished using an Azure Resource Manager template in a single operation. In the example here, we'll deploy the [Azure Cosmos DB ToDo sample](https://github.com/Azure-Samples/cosmos-dotnet-core-todo-app) from a [Web app tutorial](tutorial-dotnet-web-app.md).
+
+Resource Manager templates are quite flexible and allow you to compose complex deployments across any service in Azure. This includes advanced tasks such as deploying applications from GitHub and injecting connection information into Azure App Service's application settings in the Azure portal. This tutorial shows you how to do the following things using a single Resource Manager template.
+
+* Deploy an Azure Cosmos DB account.
+* Deploy an Azure App Service Hosting Plan.
+* Deploy an Azure App Service.
+* Inject the endpoint and keys from the Azure Cosmos DB account into the App Service application settings in the Azure portal.
+* Deploy a web application from a GitHub repository to the App Service.
+
+The resulting deployment has a fully functional web application that can connect to Azure Cosmos DB without having to cut and paste the Azure Cosmos DB's endpoint URL or authentication keys from the Azure portal.
+
+## Prerequisites
+
+> [!TIP]
+> While this tutorial does not assume prior experience with Azure Resource Manager templates or JSON, should you wish to modify the referenced templates or deployment options, then knowledge of each of these areas is required.
+
+## Step 1: Deploy the template
+
+First, select the **Deploy to Azure** button below to open the Azure portal to create a custom deployment. You can also view the Azure Resource Manager template from the [Azure Quickstart Templates Gallery](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.documentdb/cosmosdb-webapp).
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-webapp%2Fazuredeploy.json)
+
+Once in the Azure portal, select the subscription to deploy into and select or create a new resource group. Then fill in the following values.
++
+* **Region** - This is required by the Resource Manager. Enter the same region used by the location parameter where your resources are located.
+* **Application Name** - This name is used by all the resources for this deployment. Make sure to choose a unique name to avoid conflicts with existing Azure Cosmos DB and App Service accounts.
+* **Location** - The region where your resources are deployed.
+* **App Service Plan Tier** - App Service Plan's pricing tier.
+* **App Service Plan Instances** - The number of workers for the app service plan.
+* **Repository URL** - The repository to the web application on GitHub.
+* **Branch** - The branch for the GitHub repository.
+* **Database Name** - The Azure Cosmos DB database name.
+* **Container Name** - The Azure Cosmos DB container name.
+
+After filling in the values, select the **Create** button to start the deployment. This step should take between 5 and 10 minutes to complete.
+
+> [!TIP]
+> The template does not validate that the Azure App Service name and Azure Cosmos DB account name entered in the template are valid and available. It is highly recommended that you verify the availability of the names you plan to supply prior to submitting the deployment.
++
+## Step 2: Explore the resources
+
+### View the deployed resources
+
+After the template has deployed the resources, you can now see each of them in your resource group.
++
+### View Azure Cosmos DB endpoint and keys
+
+Next, open the Azure Cosmos DB account in the portal. The following screenshot shows the endpoint and keys for an Azure Cosmos DB account.
++
+### View the Azure Cosmos DB keys in application settings
+
+Next, navigate to the Azure App Service in the resource group. Click the **Configuration** tab to view the application settings for the App Service. The application settings contain the Azure Cosmos DB account endpoint and primary key values necessary to connect to Azure Cosmos DB, as well as the database and container names that were passed in from the template deployment.
++
+### View web app in Deployment Center
+
+Next, go to the Deployment Center for the App Service. Here you'll see that **Repository** points to the GitHub repository passed in to the template, and that **Status** indicates Success (Active), meaning the application successfully deployed and started.
++
+### Run the web application
+
+Click **Browse** at the top of Deployment Center to open the web application. The web application opens to the home screen. Click **Create New**, enter some data into the fields, and then click **Save**. The resulting screen shows the data saved to Azure Cosmos DB.
++
+## Step 3: How it works
+
+There are three elements necessary for this to work.
+
+### Reading app settings at runtime
+
+First, the application needs to request the Azure Cosmos DB endpoint and key in the `Startup` class of the ASP.NET MVC web application. The [Azure Cosmos DB To Do Sample](https://github.com/Azure-Samples/cosmos-dotnet-core-todo-app) can run locally, where you can enter the connection information into `appsettings.json`. When the app is deployed, this file deploys with it, but if the configuration code can't read the settings from `appsettings.json`, it falls back to the Application Settings in Azure App Service.
++
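+The following C# sketch shows one way an ASP.NET Core app can read these values through `IConfiguration`. The `CosmosDb:Account` key name matches the setting injected by the template; the `Key` setting name and the `CosmosClientFactory` helper are illustrative assumptions rather than code from the sample.
+
+```csharp
+using Microsoft.Azure.Cosmos;
+using Microsoft.Extensions.Configuration;
+
+public static class CosmosClientFactory
+{
+    // Hypothetical helper: builds a CosmosClient from the "CosmosDb" configuration section.
+    // Locally, the values come from appsettings.json; in Azure App Service, the injected
+    // application settings (for example, CosmosDb:Account) are surfaced through the same
+    // IConfiguration abstraction, so no code changes are needed after deployment.
+    public static CosmosClient Create(IConfiguration configuration)
+    {
+        IConfigurationSection section = configuration.GetSection("CosmosDb");
+
+        string account = section["Account"]; // endpoint URI injected by the template
+        string key = section["Key"];         // primary key injected by the template
+
+        return new CosmosClient(account, key);
+    }
+}
+```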
+### Using special Azure Resource Management functions
+
+For these values to be available to the application when deployed, the Azure Resource Manager template can request them from the Azure Cosmos DB account by using the special Azure Resource Manager functions [reference](../../azure-resource-manager/templates/template-functions-resource.md#reference) and [listKeys](../../azure-resource-manager/templates/template-functions-resource.md#listkeys). These functions grab the values from the Azure Cosmos DB account and insert them into the application settings, with key names that match what the application uses, in a '{section:key}' format. For example, `CosmosDb:Account`.
++
+### Deploying web apps from GitHub
+
+Lastly, we need to deploy the web application from GitHub into the App Service. This is done using the JSON below. Two things to be careful with are the type and name for this resource. Both the `"type": "sourcecontrols"` and `"name": "web"` property values are hard coded and should not be changed.
++
+## Next steps
+
+Congratulations! You've deployed Azure Cosmos DB, Azure App Service, and a sample web application that automatically has the connection info necessary to connect to Azure Cosmos DB, all in a single operation and without having to cut and paste sensitive information. Using this template as a starting point, you can modify it to deploy your own web applications the same way.
+
+* For the Azure Resource Manager template for this sample, go to the [Azure Quickstart Templates Gallery](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.documentdb/cosmosdb-webapp).
+* For the source code for the sample app go to [Azure Cosmos DB To Do App on GitHub](https://github.com/Azure-Samples/cosmos-dotnet-core-todo-app).
cosmos-db Database Transactions Optimistic Concurrency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/database-transactions-optimistic-concurrency.md
+
+ Title: Database transactions and optimistic concurrency control in Azure Cosmos DB
+description: This article describes database transactions and optimistic concurrency control in Azure Cosmos DB
+++++++ Last updated : 12/04/2019++
+# Transactions and optimistic concurrency control
+
+Database transactions provide a safe and predictable programming model for dealing with concurrent changes to data. Traditional relational databases, like SQL Server, allow you to write business logic using stored procedures and/or triggers and send it to the server for execution directly within the database engine. With traditional relational databases, you're required to deal with two different programming languages: the (non-transactional) application programming language, such as JavaScript, Python, C#, or Java, and the transactional programming language (such as T-SQL) that is natively executed by the database.
+
+The database engine in Azure Cosmos DB supports full ACID (Atomicity, Consistency, Isolation, Durability) compliant transactions with snapshot isolation. All the database operations within the scope of a container's [logical partition](../partitioning-overview.md) are transactionally executed within the database engine that is hosted by the replica of the partition. These operations include both write (updating one or more items within the logical partition) and read operations. The following table illustrates different operations and transaction types:
+
+| **Operation** | **Operation Type** | **Single or Multi Item Transaction** |
+||||
+| Insert (without a pre/post trigger) | Write | Single-item transaction |
+| Insert (with a pre/post trigger) | Write and Read | Multi-item transaction |
+| Replace (without a pre/post trigger) | Write | Single-item transaction |
+| Replace (with a pre/post trigger) | Write and Read | Multi-item transaction |
+| Upsert (without a pre/post trigger) | Write | Single-item transaction |
+| Upsert (with a pre/post trigger) | Write and Read | Multi-item transaction |
+| Delete (without a pre/post trigger) | Write | Single-item transaction |
+| Delete (with a pre/post trigger) | Write and Read | Multi-item transaction |
+| Execute stored procedure | Write and Read | Multi-item transaction |
+| System initiated execution of a merge procedure | Write | Multi-item transaction |
+| System initiated execution of deleting items based on expiration (TTL) of an item | Write | Multi-item transaction |
+| Read | Read | Single-item transaction |
+| Change Feed | Read | Multi-item transaction |
+| Paginated Read | Read | Multi-item transaction |
+| Paginated Query | Read | Multi-item transaction |
+| Execute UDF as part of the paginated query | Read | Multi-item transaction |
+
+## Multi-item transactions
+
+Azure Cosmos DB allows you to write [stored procedures, pre/post triggers, user-defined functions (UDFs)](stored-procedures-triggers-udfs.md) and merge procedures in JavaScript. Azure Cosmos DB natively supports JavaScript execution inside its database engine. You can register stored procedures, pre/post triggers, user-defined functions (UDFs), and merge procedures on a container and later execute them transactionally within the Azure Cosmos DB database engine. Writing application logic in JavaScript allows natural expression of control flow, variable scoping, assignment, and integration of exception-handling primitives within database transactions, directly in the JavaScript language.
+
+The JavaScript-based stored procedures, triggers, UDFs, and merge procedures are wrapped within an ambient ACID transaction with snapshot isolation across all items within the logical partition. During the course of its execution, if the JavaScript program throws an exception, the entire transaction is aborted and rolled-back. The resulting programming model is simple yet powerful. JavaScript developers get a durable programming model while still using their familiar language constructs and library primitives.
+
+The ability to execute JavaScript directly within the database engine provides performance and transactional execution of database operations against the items of a container. Furthermore, since the Azure Cosmos DB database engine natively supports JSON and JavaScript, there is no impedance mismatch between the type systems of an application and the database.
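+As an illustrative sketch (not code from this article), the following C# snippet registers a simple JavaScript stored procedure with the .NET SDK and executes it against one logical partition. The container, the `createTwoItems` procedure, and a `/artist` partition key are assumptions; because both writes happen inside the stored procedure, they either both commit or are both rolled back.
+
+```csharp
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+using Microsoft.Azure.Cosmos.Scripts;
+
+public static class MultiItemTransactionSample
+{
+    // Registers and runs a JavaScript stored procedure that writes two items in one transaction.
+    // Assumes the container is partitioned on /artist.
+    public static async Task CreateTwoItemsTransactionallyAsync(Container container)
+    {
+        const string body = @"
+function createTwoItems(item1, item2) {
+    var collection = getContext().getCollection();
+    createDoc(item1, function () {
+        createDoc(item2, function () {
+            getContext().getResponse().setBody('created two items');
+        });
+    });
+
+    function createDoc(doc, callback) {
+        var accepted = collection.createDocument(collection.getSelfLink(), doc, function (err) {
+            if (err) throw err; // any error aborts and rolls back the whole transaction
+            callback();
+        });
+        if (!accepted) throw new Error('The create request was not accepted.');
+    }
+}";
+
+        await container.Scripts.CreateStoredProcedureAsync(
+            new StoredProcedureProperties { Id = "createTwoItems", Body = body });
+
+        // Both items share the same partition key value, so the two writes run in a single
+        // ACID transaction; if either create fails, neither item is persisted.
+        await container.Scripts.ExecuteStoredProcedureAsync<string>(
+            "createTwoItems",
+            new PartitionKey("Beatles"),
+            new dynamic[]
+            {
+                new { id = "1", artist = "Beatles", title = "Let It Be" },
+                new { id = "2", artist = "Beatles", title = "Hey Jude" }
+            });
+    }
+}
+```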
+
+## Optimistic concurrency control
+
+Optimistic concurrency control allows you to prevent lost updates and deletes. Concurrent, conflicting operations are subjected to the regular pessimistic locking of the database engine hosted by the logical partition that owns the item. When two concurrent operations attempt to update the latest version of an item within a logical partition, one of them will win and the other will fail. However, if one or two operations attempting to concurrently update the same item had previously read an older value of the item, the database doesn't know if the previously read value by either or both the conflicting operations was indeed the latest value of the item. Fortunately, this situation can be detected with **Optimistic Concurrency Control (OCC)** before letting the two operations enter the transaction boundary inside the database engine. OCC protects your data from accidentally overwriting changes that were made by others. It also prevents others from accidentally overwriting your own changes.
+
+### Implementing optimistic concurrency control using ETag and HTTP headers
+
+Every item stored in an Azure Cosmos DB container has a system-defined `_etag` property. The value of the `_etag` is automatically generated and updated by the server every time the item is updated. `_etag` can be used with the client-supplied `if-match` request header to allow the server to decide whether an item can be conditionally updated. If the value of the `if-match` header matches the value of the `_etag` at the server, the item is updated. If the value of the `if-match` request header is no longer current, the server rejects the operation with an "HTTP 412 Precondition failure" response message. The client can then refetch the item to acquire the current version of the item on the server, or override the version of the item on the server with its own `_etag` value for the item. In addition, `_etag` can be used with the `if-none-match` header to determine whether a refetch of a resource is needed.
+
+The item's `_etag` value changes every time the item is updated. For replace item operations, `if-match` must be explicitly expressed as a part of the request options. For an example, see the sample code in [GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L791-L887). `_etag` values are implicitly checked for all written items touched by the stored procedure. If any conflict is detected, the stored procedure will roll back the transaction and throw an exception. With this method, either all or no writes within the stored procedure are applied atomically. This is a signal to the application to reapply updates and retry the original client request.
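+The following sketch shows the `if-match` pattern with the .NET SDK. The `TodoItem` class and its `Category` partition key are illustrative assumptions, not part of the linked sample: the `_etag` returned by a read is passed as `IfMatchEtag` on the replace, and an HTTP 412 surfaces as a `CosmosException` that the application can handle by re-reading and retrying.
+
+```csharp
+using System.Net;
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+using Newtonsoft.Json;
+
+// Hypothetical item type used only for this sketch.
+public class TodoItem
+{
+    [JsonProperty("id")] public string Id { get; set; }
+    public string Category { get; set; }    // partition key value in this sketch
+    public string Description { get; set; }
+}
+
+public static class OptimisticConcurrencySample
+{
+    public static async Task UpdateWithOptimisticConcurrencyAsync(
+        Container container, string id, string partitionKeyValue)
+    {
+        // Read the current version of the item; the response carries its _etag.
+        ItemResponse<TodoItem> read =
+            await container.ReadItemAsync<TodoItem>(id, new PartitionKey(partitionKeyValue));
+
+        TodoItem item = read.Resource;
+        item.Description = "updated description";
+
+        try
+        {
+            // The replace succeeds only if the item's _etag on the server still matches.
+            await container.ReplaceItemAsync(
+                item, id, new PartitionKey(partitionKeyValue),
+                new ItemRequestOptions { IfMatchEtag = read.ETag });
+        }
+        catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.PreconditionFailed)
+        {
+            // Another writer updated the item first (HTTP 412); re-read and retry the update.
+        }
+    }
+}
+```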
+
+### Optimistic concurrency control and global distribution
+
+The concurrent updates of an item are subjected to the OCC by Azure Cosmos DB's communication protocol layer. For Azure Cosmos DB accounts configured for **single-region writes**, Azure Cosmos DB ensures that the client-side version of the item that you are updating (or deleting) is the same as the version of the item in the Azure Cosmos DB container. This ensures that your writes are protected from being overwritten accidentally by the writes of others and vice versa. In a multi-user environment, the optimistic concurrency control protects you from accidentally deleting or updating the wrong version of an item. As such, items are protected against the infamous "lost update" or "lost delete" problems.
+
+In an Azure Cosmos DB account configured with **multi-region writes**, data can be committed independently into secondary regions if its `_etag` matches that of the data in the local region. Once new data is committed locally in a secondary region, it is then merged in the hub or primary region. If the conflict resolution policy merges the new data into the hub region, this data will then be replicated globally with the new `_etag`. If the conflict resolution policy rejects the new data, the secondary region will be rolled back to the original data and `_etag`.
+
+## Next steps
+
+Learn more about database transactions and optimistic concurrency control in the following articles:
+
+- [Working with Azure Cosmos DB databases, containers and items](../account-databases-containers-items.md)
+- [Consistency levels](../consistency-levels.md)
+- [Conflict types and resolution policies](../conflict-resolution-policies.md)
+- [Using TransactionalBatch](transactional-batch.md)
+- [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md)
cosmos-db Defender For Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/defender-for-cosmos-db.md
+
+ Title: 'Microsoft Defender for Azure Cosmos DB'
+description: Learn how Microsoft Defender provides advanced threat protection on Azure Cosmos DB.
+++ Last updated : 06/21/2022++++
+# Microsoft Defender for Azure Cosmos DB
+
+Microsoft Defender for Azure Cosmos DB provides an extra layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit Azure Cosmos DB accounts. This layer of protection allows you to address threats, even without being a security expert, and to integrate the alerts with central security monitoring systems.
+
+Security alerts are triggered when anomalies in activity occur. These security alerts show up in [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/). Subscription administrators also get these alerts over email, with details of the suspicious activity and recommendations on how to investigate and remediate the threats.
+
+> [!NOTE]
+>
+> * Microsoft Defender for Azure Cosmos DB is currently available only for the API for NoSQL.
+> * Microsoft Defender for Azure Cosmos DB is not currently available in Azure government and sovereign cloud regions.
+
+For a full investigation experience of the security alerts, we recommend enabling [diagnostic logging in Azure Cosmos DB](../monitor.md), which logs operations on the database itself, including CRUD operations on all documents, containers, and databases.
+
+## Threat types
+
+Microsoft Defender for Azure Cosmos DB detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. It can currently trigger the following alerts:
+
+- **Potential SQL injection attacks**: Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks can't work in Azure Cosmos DB. However, there are some variations of SQL injections that can succeed and may result in exfiltrating data from your Azure Cosmos DB accounts. Defender for Azure Cosmos DB detects both successful and failed attempts, and helps you harden your environment to prevent these threats.
+
+- **Anomalous database access patterns**: For example, access from a TOR exit node, known suspicious IP addresses, unusual applications, and unusual locations.
+
+- **Suspicious database activity**: For example, suspicious key-listing patterns that resemble known malicious lateral movement techniques and suspicious data extraction patterns.
+
+## Configure Microsoft Defender for Azure Cosmos DB
+
+See [Enable Microsoft Defender for Azure Cosmos DB](../../defender-for-cloud/defender-for-databases-enable-cosmos-protections.md).
+
+## Manage security alerts
+
+When Azure Cosmos DB activity anomalies occur, a security alert is triggered with information about the suspicious security event.
+
+ From Microsoft Defender for Cloud, you can review and manage your current [security alerts](../../security-center/security-center-alerts-overview.md). Click on a specific alert in [Defender for Cloud](https://portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/0) to view possible causes and recommended actions to investigate and mitigate the potential threat. An email notification is also sent with the alert details and recommended actions.
+
+## Azure Cosmos DB alerts
+
+ To see a list of the alerts generated when monitoring Azure Cosmos DB accounts, see the [Azure Cosmos DB alerts](../../security-center/alerts-reference.md#alerts-azurecosmos) section in the Microsoft Defender for Cloud documentation.
+
+## Next steps
+
+* Learn more about [Microsoft Defender for Azure Cosmos DB](../../defender-for-cloud/concept-defender-for-cosmos.md)
+* Learn more about [Diagnostic logging in Azure Cosmos DB](../monitor-resource-logs.md)
cosmos-db Distribute Throughput Across Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/distribute-throughput-across-partitions.md
+
+ Title: Redistribute throughput across partitions (preview) in Azure Cosmos DB
+description: Learn how to redistribute throughput across partitions (preview)
++++++ Last updated : 05/09/2022++
+# Redistribute throughput across partitions (preview)
+
+By default, Azure Cosmos DB distributes the provisioned throughput of a database or container equally across all physical partitions. However, scenarios may arise where, due to a skew in the workload or the choice of partition key, certain logical (and thus physical) partitions need more throughput than others. For these scenarios, Azure Cosmos DB gives you the ability to redistribute your provisioned throughput across physical partitions. Redistributing throughput across partitions helps you achieve better performance without having to configure your overall throughput based on the hottest partition.
+
+The throughput redistribution feature applies to databases and containers using provisioned throughput (manual and autoscale) and doesn't apply to serverless containers. You can change the throughput per physical partition using the Azure Cosmos DB PowerShell commands.
+
+## When to use this feature
+
+In general, usage of this feature is recommended for scenarios when both of the following are true:
+
+- You're consistently seeing an overall rate of 429 responses greater than 1-5%
+- You have a consistent, predictable hot partition
+
+If you aren't seeing 429 responses and your end-to-end latency is acceptable, then no action to reconfigure RU/s per partition is required. If you have a workload that has consistent traffic with occasional unpredictable spikes across *all your partitions*, it's recommended to use [autoscale](../provision-throughput-autoscale.md) and [burst capacity (preview)](../burst-capacity.md). Autoscale and burst capacity will ensure you can meet your throughput requirements. If you have a small amount of RU/s per partition, you can also use [partition merge (preview)](../merge.md) to reduce the number of partitions and ensure more RU/s per partition for the same total provisioned throughput.
+
+## Getting started
+
+To get started using throughput redistribution across partitions, enroll in the preview by submitting a request for the **Azure Cosmos DB Throughput Redistribution Across Partitions** feature via the [**Preview Features** page](../../azure-resource-manager/management/preview-features.md) in your Azure subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
+
+Before submitting your request:
+- Ensure that you have at least 1 Azure Cosmos DB account in the subscription. This may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.
+- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+
+The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
+
+To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Throughput redistribution across partition**. Run the **Check eligibility for throughput redistribution across partitions preview** diagnostic.
+++
+## Example scenario
+
+Suppose we have a workload that keeps track of transactions that take place in retail stores. Because most of our queries are by `StoreId`, we partition by `StoreId`. However, over time, we see that some stores have more activity than others and require more throughput to serve their workloads. We're seeing rate limiting (429) for requests against those StoreIds, and our [overall rate of 429 responses is greater than 1-5%](troubleshoot-request-rate-too-large.md#recommended-solution). Meanwhile, other stores are less active and require less throughput. Let's see how we can redistribute our throughput for better performance.
+
+## Step 1: Identify which physical partitions need more throughput
+
+There are two ways to identify if there's a hot partition.
+
+### Option 1: Use Azure Monitor metrics
+
+To verify if there's a hot partition, navigate to **Insights** > **Throughput** > **Normalized RU Consumption (%) By PartitionKeyRangeID**. Filter to a specific database and container.
+
+Each PartitionKeyRangeId maps to one physical partition. Look for one PartitionKeyRangeId that consistently has a higher normalized RU consumption than others. For example, one value is consistently at 100%, but others are at 30% or less. A pattern such as this can indicate a hot partition.
++
+### Option 2: Use Diagnostic Logs
+
+We can use the information from **CDBPartitionKeyRUConsumption** in Diagnostic Logs to get more information about the logical partition keys (and corresponding physical partitions) that are consuming the most RU/s at a second level granularity. Note the sample queries use 24 hours for illustrative purposes only - it's recommended to use at least seven days of history to understand the pattern.
+
+#### Find the physical partition (PartitionKeyRangeId) that is consuming the most RU/s over time
+
+```Kusto
+CDBPartitionKeyRUConsumption
+| where TimeGenerated >= ago(24hr)
+| where DatabaseName == "MyDB" and CollectionName == "MyCollection" // Replace with database and collection name
+| where isnotempty(PartitionKey) and isnotempty(PartitionKeyRangeId)
+| summarize sum(RequestCharge) by bin(TimeGenerated, 1s), PartitionKeyRangeId
+| render timechart
+```
+
+#### For a given physical partition, find the top 10 logical partition keys that are consuming the most RU/s over each hour
+
+```Kusto
+CDBPartitionKeyRUConsumption
+| where TimeGenerated >= ago(24hour)
+| where DatabaseName == "MyDB" and CollectionName == "MyCollection" // Replace with database and collection name
+| where isnotempty(PartitionKey) and isnotempty(PartitionKeyRangeId)
+| where PartitionKeyRangeId == 0 // Replace with PartitionKeyRangeId
+| summarize sum(RequestCharge) by bin(TimeGenerated, 1hour), PartitionKey
+| order by sum_RequestCharge desc | take 10
+```
+
+## Step 2: Determine the target RU/s for each physical partition
+
+### Determine current RU/s for each physical partition
+
+First, let's determine the current RU/s for each physical partition. You can use the new Azure Monitor metric **PhysicalPartitionThroughput** and split by the dimension **PhysicalPartitionId** to see how many RU/s you have per physical partition.
+
+Alternatively, if you haven't changed your throughput per partition before, you can use the formula:
+``Current RU/s per partition = Total RU/s / Number of physical partitions``
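+For example, under the default even distribution, a container provisioned with 6,000 RU/s that has three physical partitions has 6,000 / 3 = 2,000 RU/s on each physical partition.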
+
+Follow the guidance in the article [Best practices for scaling provisioned throughput (RU/s)](../scaling-provisioned-throughput-best-practices.md#step-1-find-the-current-number-of-physical-partitions) to determine the number of physical partitions.
+
+You can also use the PowerShell `Get-AzCosmosDBSqlContainerPerPartitionThroughput` and `Get-AzCosmosDBMongoDBCollectionPerPartitionThroughput` commands to read the current RU/s on each physical partition.
+
+```powershell
+# API for NoSQL
+$somePartitions = Get-AzCosmosDBSqlContainerPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-container-name>" `
+ -PhysicalPartitionIds ("<PartitionId>", "<PartitionId>")
+
+$allPartitions = Get-AzCosmosDBSqlContainerPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-container-name>" `
+ -AllPartitions
+
+# API for MongoDB
+$somePartitions = Get-AzCosmosDBMongoDBCollectionPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-collection-name>" `
+ -PhysicalPartitionIds ("<PartitionId>", "<PartitionId>", ...)
+
+$allPartitions = Get-AzCosmosDBMongoDBCollectionPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-collection-name>" `
+ -AllPartitions
+```
+### Determine RU/s for target partition
+
+Next, let's decide how many RU/s we want to give to our hottest physical partition(s). Let's call this set our target partition(s). The maximum RU/s any physical partition can have is 10,000 RU/s.
+
+The right approach depends on your workload requirements. General approaches include:
+- Increasing the RU/s by a percentage, measuring the rate of 429 responses, and repeating until the desired throughput is achieved.
+ - If you aren't sure of the right percentage, you can start with 10% to be conservative.
+ - If you already know this physical partition requires most of the throughput of the workload, you can start by doubling the RU/s or increasing it to the maximum of 10,000 RU/s, whichever is lower.
+- Increasing the RU/s to `Total consumed RU/s of the physical partition + (Number of 429 responses per second * Average RU charge per request to the partition)`
+ - This approach tries to estimate what the "real" RU/s consumption would have been if the requests hadn't been rate limited. A worked example follows this list.
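+For example (using illustrative numbers), if a hot partition has 2,000 RU/s, fully consumes them, and is also rate limiting about 100 requests per second at an average charge of 10 RU per request, the second approach estimates a target of 2,000 + (100 * 10) = 3,000 RU/s for that partition.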
+
+### Determine RU/s for source partition
+
+Finally, let's decide how many RU/s we want to keep on our other physical partitions. This selection will determine the partitions that the target physical partition takes throughput from.
+
+In the PowerShell APIs, we must specify at least one source partition to redistribute RU/s from. We can also specify a custom minimum throughput each physical partition should have after the redistribution. If not specified, by default, Azure Cosmos DB will ensure that each physical partition has at least 100 RU/s after the redistribution. It's recommended to explicitly specify the minimum throughput.
+
+The right approach depends on your workload requirements. General approaches include:
+- Taking RU/s equally from all source partitions (works best when there are <= 10 partitions); a worked example follows this list
+ - Calculate the amount we need to offset each source physical partition by. `Offset = (Total desired RU/s of target partition(s) - total current RU/s of target partition(s)) / (Total physical partitions - number of target partitions)`
+ - Assign the minimum throughput for each source partition = `Current RU/s of source partition - offset`
+- Taking RU/s from the least active partition(s)
+ - Use Azure Monitor metrics and Diagnostic Logs to determine which physical partition(s) have the least traffic/request volume
+ - Calculate the amount we need to offset each source physical partition by. `Offset = (Total desired RU/s of target partition(s) - total current RU/s of target partition(s)) / Number of source physical partitions`
+ - Assign the minimum throughput for each source partition = `Current RU/s of source partition - offset`
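+For example, with the layout used in Step 3 below (6,000 RU/s spread evenly across three physical partitions, so 2,000 RU/s each, and a desired 4,000 RU/s on the target partition), taking RU/s equally from all source partitions gives: offset = (4,000 - 2,000) / (3 - 1) = 1,000 RU/s, so each source partition is assigned a minimum of 2,000 - 1,000 = 1,000 RU/s.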
+
+## Step 3: Programmatically change the throughput across partitions
+
+You can use the PowerShell command `Update-AzCosmosDBSqlContainerPerPartitionThroughput` to redistribute throughput.
+
+For the example below, suppose we have a container that has 6000 RU/s total (either 6000 manual RU/s or autoscale 6000 RU/s) and 3 physical partitions. Based on our analysis, we want a layout where:
+
+- Physical partition 0: 1000 RU/s
+- Physical partition 1: 4000 RU/s
+- Physical partition 2: 1000 RU/s
+
+We specify partitions 0 and 2 as our source partitions, and specify that after the redistribution, they should have a minimum of 1000 RU/s each. Partition 1 is our target partition, which we specify should have 4000 RU/s.
+
+```powershell
+$SourcePhysicalPartitionObjects = @()
+$SourcePhysicalPartitionObjects += New-AzCosmosDBPhysicalPartitionThroughputObject -Id "0" -Throughput 1000
+$SourcePhysicalPartitionObjects += New-AzCosmosDBPhysicalPartitionThroughputObject -Id "2" -Throughput 1000
+
+$TargetPhysicalPartitionObjects = @()
+$TargetPhysicalPartitionObjects += New-AzCosmosDBPhysicalPartitionThroughputObject -Id "1" -Throughput 4000
+
+# API for NoSQL
+Update-AzCosmosDBSqlContainerPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-container-name>" `
+ -SourcePhysicalPartitionThroughputObject $SourcePhysicalPartitionObjects `
+ -TargetPhysicalPartitionThroughputObject $TargetPhysicalPartitionObjects
+
+# API for MongoDB
+Update-AzCosmosDBMongoDBCollectionPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-collection-name>" `
+ -SourcePhysicalPartitionThroughputObject $SourcePhysicalPartitionObjects `
+ -TargetPhysicalPartitionThroughputObject $TargetPhysicalPartitionObjects
+```
+
+
+If necessary, you can also reset the RU/s per physical partition so that the RU/s of your container are evenly distributed across all physical partitions.
+
+```powershell
+# API for NoSQL
+$resetPartitions = Update-AzCosmosDBSqlContainerPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-container-name>" `
+ -EqualDistributionPolicy
+
+# API for MongoDB
+$resetPartitions = Update-AzCosmosDBMongoDBCollectionPerPartitionThroughput `
+ -ResourceGroupName "<resource-group-name>" `
+ -AccountName "<cosmos-account-name>" `
+ -DatabaseName "<cosmos-database-name>" `
+ -Name "<cosmos-collection-name>" `
+ -EqualDistributionPolicy
+```
+
+## Step 4: Verify and monitor your RU/s consumption
+
+After you've completed the redistribution, you can verify the change by viewing the **PhysicalPartitionThroughput** metric in Azure Monitor. Split by the dimension **PhysicalPartitionId** to see how many RU/s you have per physical partition.
+
+It's recommended to monitor your overall rate of 429 responses and RU/s consumption. For more information, review [Step 1](#step-1-identify-which-physical-partitions-need-more-throughput) to validate you've achieved the performance you expect.
+
+After the changes, assuming your overall workload hasn't changed, you'll likely see that both the target and source physical partitions have higher [Normalized RU consumption](../monitor-normalized-request-units.md) than previously. Higher normalized RU consumption is expected behavior. Essentially, you have allocated RU/s closer to what each partition actually needs to consume, so higher normalized RU consumption means that each partition is fully utilizing its allocated RU/s. You should also expect to see a lower overall rate of 429 exceptions, as the hot partitions now have more RU/s to serve requests.
+
+## Limitations
+
+### Preview eligibility criteria
+To enroll in the preview, your Azure Cosmos DB account must meet all the following criteria:
+ - Your Azure Cosmos DB account is using API for NoSQL or API for MongoDB.
+ - If you're using API for MongoDB, the version must be >= 3.6.
+ - Your Azure Cosmos DB account is using provisioned throughput (manual or autoscale). Distribution of throughput across partitions doesn't apply to serverless accounts.
+ - If you're using API for NoSQL, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When the ability to redistribute throughput across partitions is enabled on your account, all requests sent from non .NET SDKs or older .NET SDK versions won't be accepted.
+ - Your Azure Cosmos DB account isn't using any unsupported connectors:
+ - Azure Data Factory
+ - Azure Stream Analytics
+ - Logic Apps
+ - Azure Functions
+ - Azure Search
+ - Azure Cosmos DB Spark connector
+ - Azure Cosmos DB data migration tool
+ - Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
+
+### SDK requirements (API for NoSQL only)
+
+Throughput redistribution across partitions is supported only with the latest version of the .NET v3 SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. There are no driver or SDK requirements to use this feature for API for MongoDB accounts.
+
+Find the latest preview version of the supported SDK:
+
+| SDK | Supported versions | Package manager link |
+| | | |
+| **.NET SDK v3** | *>= 3.27.0* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/> |
+
+Support for other SDKs is planned for the future.
+
+> [!TIP]
+> You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using the legacy .NET V2 SDK, follow the [.NET SDK v3 migration guide](migrate-dotnet-v3.md).
+
+### Unsupported connectors
+
+If you enroll in the preview, the following connectors will fail.
+
+* Azure Data Factory<sup>1</sup>
+* Azure Stream Analytics<sup>1</sup>
+* Logic Apps<sup>1</sup>
+* Azure Functions<sup>1</sup>
+* Azure Search<sup>1</sup>
+* Azure Cosmos DB Spark connector<sup>1</sup>
+* Azure Cosmos DB data migration tool
+* Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
+
+<sup>1</sup>Support for these connectors is planned for the future.
+
+## Next steps
+
+Learn about how to use provisioned throughput with the following articles:
+
+* Learn more about [provisioned throughput.](../set-throughput.md)
+* Learn more about [request units.](../request-units.md)
+* Need to monitor for hot partitions? See [monitoring request units.](../monitor-normalized-request-units.md#how-to-monitor-for-hot-partitions)
+* Want to learn the best practices? See [best practices for scaling provisioned throughput.](../scaling-provisioned-throughput-best-practices.md)
cosmos-db Dynamo To Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/dynamo-to-cosmos.md
+
+ Title: Migrate your application from Amazon DynamoDB to Azure Cosmos DB
+description: Learn how to migrate your .NET application from Amazon's DynamoDB to Azure Cosmos DB
++++ Last updated : 05/02/2020+++
+# Migrate your application from Amazon DynamoDB to Azure Cosmos DB
+
+Azure Cosmos DB is a scalable, globally distributed, fully managed database. It provides guaranteed low latency access to your data. To learn more about Azure Cosmos DB, see the [overview](../introduction.md) article. This article describes how to migrate your .NET application from DynamoDB to Azure Cosmos DB with minimal code changes.
+
+## Conceptual differences
+
+The following are the key conceptual differences between Azure Cosmos DB and DynamoDB:
+
+| DynamoDB | Azure Cosmos DB |
+|||
+|Not applicable| Database |
+|Table | Collection |
+| Item | Document |
+|Attribute|Field|
+|Secondary Index|Secondary Index|
+|Primary Key - Partition key|Partition Key|
+|Primary Key - Sort Key| Not Required |
+|Stream|ChangeFeed|
+|Write Compute Unit|Request Unit (Flexible, can be used for reads or writes)|
+|Read Compute Unit |Request Unit (Flexible, can be used for reads or writes)|
+|Global Tables| Not Required. You can directly select the region while provisioning the Azure Cosmos DB account (you can change the region later)|
+
+## Structural differences
+
+Azure Cosmos DB has a simpler JSON structure when compared to that of DynamoDB. The following example shows the differences
+
+**DynamoDB**:
+
+The following JSON object represents the data format in DynamoDB
+
+```json
+{
+    TableName: "Music",
+    KeySchema: [
+        {
+            AttributeName: "Artist",
+            KeyType: "HASH", //Partition key
+        },
+        {
+            AttributeName: "SongTitle",
+            KeyType: "RANGE" //Sort key
+        }
+    ],
+    AttributeDefinitions: [
+        {
+            AttributeName: "Artist",
+            AttributeType: "S"
+        },
+        {
+            AttributeName: "SongTitle",
+            AttributeType: "S"
+        }
+    ],
+    ProvisionedThroughput: {
+        ReadCapacityUnits: 1,
+        WriteCapacityUnits: 1
+    }
+}
+```
+
+**Azure Cosmos DB**:
+
+The following JSON object represents the data format in Azure Cosmos DB
+
+```json
+{
+    "Artist": "",
+    "SongTitle": "",
+    "AlbumTitle": "",
+    "Year": 9999,
+    "Price": 0.0,
+    "Genre": "",
+    "Tags": ""
+}
+```
+
+## Migrate your data
+
+There are various options available to migrate your data to Azure Cosmos DB. To learn more, see the [Options to migrate your on-premises or cloud data to Azure Cosmos DB](../migration-choices.md) article.
+
+## Migrate your code
+
+This article is scoped to migrating an application's code to Azure Cosmos DB, which is the critical aspect of database migration. To help reduce the learning curve, the following sections include a side-by-side code comparison between Amazon DynamoDB and Azure Cosmos DB's equivalent code snippets.
+
+To download the source code, clone the following repo:
+
+```bash
+git clone https://github.com/Azure-Samples/DynamoDB-to-CosmosDB
+```
+
+### Prerequisites
+
+- .NET Framework 4.7.2
+- Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
+- Access to an Azure Cosmos DB for NoSQL account
+- Local installation of Amazon DynamoDB
+- Java 8
+- Run the downloadable version of Amazon DynamoDB at port 8000 (you can change and configure the code)
+
+### Set up your code
+
+Add the following "NuGet package" to your project:
+
+```bash
+Install-Package Microsoft.Azure.Cosmos
+```
+
+### Establish connection
+
+**DynamoDB**:
+
+In Amazon DynamoDB, the following code is used to connect:
+
+```csharp
+ AmazonDynamoDBConfig addbConfig = new AmazonDynamoDBConfig();
+ addbConfig.ServiceURL = "endpoint";
+ try { aws_dynamodbclient = new AmazonDynamoDBClient( addbConfig ); }
+ catch (Exception ex) { Console.WriteLine("FAILED to create a DynamoDB client; " + ex.Message); }
+```
+
+**Azure Cosmos DB**:
+
+To connect to Azure Cosmos DB, update your code to:
+
+```csharp
+client_cosmosDB = new CosmosClient("your connection string from the Azure portal");
+```
+
+**Optimize the connection in Azure Cosmos DB**
+
+With Azure Cosmos DB, you can use the following options to optimize your connection:
+
+* **ConnectionMode** - Use direct connection mode to connect to the data nodes in the Azure Cosmos DB service. Use gateway mode only to initialize and cache the logical addresses and refresh on updates. For more information, see [connectivity modes](sdk-connection-modes.md).
+
+* **ApplicationRegion** - This option is used to set the preferred geo-replicated region that is used to interact with Azure Cosmos DB. For more information, see [global distribution](../distribute-data-globally.md).
+
+* **ConsistencyLevel** - This option is used to override default consistency level. For more information, see [consistency levels](../consistency-levels.md).
+
+* **BulkExecutionMode** - This option is used to execute bulk operations by setting the *AllowBulkExecution* property to true. For more information, see [bulk import](tutorial-dotnet-bulk-import.md).
+
+ ```csharp
+ client_cosmosDB = new CosmosClient(" Your connection string ",new CosmosClientOptions()
+ {
+ ConnectionMode=ConnectionMode.Direct,
+ ApplicationRegion=Regions.EastUS2,
+ ConsistencyLevel=ConsistencyLevel.Session,
+ AllowBulkExecution=true
+ });
+ ```
+
+### Create the container
+
+**DynamoDB**:
+
+To store the data into Amazon DynamoDB, you need to create the table first. In the table creation process, you define the schema, key type, and attributes, as shown in the following code:
+
+```csharp
+// movies_key_schema
+public static List<KeySchemaElement> movies_key_schema
+ = new List<KeySchemaElement>
+{
+ new KeySchemaElement
+ {
+ AttributeName = partition_key_name,
+ KeyType = "HASH"
+ },
+ new KeySchemaElement
+ {
+ AttributeName = sort_key_name,
+ KeyType = "RANGE"
+ }
+};
+
+// key names for the Movies table
+public const string partition_key_name = "year";
+public const string sort_key_name = "title";
+ public const int readUnits=1, writeUnits=1;
+
+ // movie_items_attributes
+ public static List<AttributeDefinition> movie_items_attributes
+ = new List<AttributeDefinition>
+{
+ new AttributeDefinition
+ {
+ AttributeName = partition_key_name,
+ AttributeType = "N"
+ },
+ new AttributeDefinition
+ {
+ AttributeName = sort_key_name,
+ AttributeType = "S"
+ }
+};
+
+CreateTableRequest request;
+CreateTableResponse response;
+
+// Build the 'CreateTableRequest' structure for the new table
+request = new CreateTableRequest
+{
+ TableName = table_name,
+ AttributeDefinitions = table_attributes,
+ KeySchema = table_key_schema,
+ // Provisioned-throughput settings are always required,
+ // although the local test version of DynamoDB ignores them.
+ ProvisionedThroughput = new ProvisionedThroughput( readUnits, writeUnits )
+};
+```
+
+**Azure Cosmos DB**:
+
+In Amazon DynamoDB, you need to provision read compute units and write compute units. In Azure Cosmos DB, you instead specify the throughput as [Request Units (RU/s)](../request-units.md), which can be used for any operation dynamically. The data is organized as database --> container --> item. You can specify the throughput at the database level, at the container level, or both.
+
+To create a database:
+
+```csharp
+await client_cosmosDB.CreateDatabaseIfNotExistsAsync(movies_table_name);
+```
+
+To create the container:
+
+```csharp
+await cosmosDatabase.CreateContainerIfNotExistsAsync(new ContainerProperties() { PartitionKeyPath = "/" + partitionKey, Id = new_collection_name }, provisionedThroughput);
+```
+
+### Load the data
+
+**DynamoDB**:
+
+The following code shows how to load the data in Amazon DynamoDB. The `moviesArray` consists of a list of JSON documents; you iterate through it and load each JSON document into Amazon DynamoDB:
+
+```csharp
+int n = moviesArray.Count;
+for( int i = 0, j = 99; i < n; i++ )
+ {
+ try
+ {
+ string itemJson = moviesArray[i].ToString();
+ Document doc = Document.FromJson(itemJson);
+ Task putItem = moviesTable.PutItemAsync(doc);
+ if( i >= j )
+ {
+ j++;
+ Console.Write( "{0,5:#,##0}, ", j );
+ if( j % 1000 == 0 )
+ Console.Write( "\n " );
+ j += 99;
+ }
+ await putItem;
+ }
+ catch( Exception ex )
+ {
+ Console.WriteLine( "\n ERROR: Could not write the movie record #{0:#,##0}, because:\n {1}", i, ex.Message );
+ }
+ }
+```
+
+**Azure Cosmos DB**:
+
+In Azure Cosmos DB, you can opt for stream and write with `moviesContainer.CreateItemStreamAsync()`. However, in this sample, the JSON will be deserialized into the *MovieModel* type to demonstrate the type-casting feature. The code is multi-threaded, which will use Azure Cosmos DB's distributed architecture and speed up the loading:
+
+```csharp
+List<Task> concurrentTasks = new List<Task>();
+for (int i = 0, j = 99; i < n; i++)
+{
+ try
+ {
+ MovieModel doc= JsonConvert.DeserializeObject<MovieModel>(moviesArray[i].ToString());
+ doc.Id = Guid.NewGuid().ToString();
+ concurrentTasks.Add(moviesContainer.CreateItemAsync(doc,new PartitionKey(doc.Year)));
+ if (i >= j)
+ {
+ j++;
+ Console.Write("{0,5:#,##0}, ", j);
+ if (j % 1000 == 0)
+ Console.Write("\n ");
+ j += 99;
+ }
+
+ }
+ catch (Exception ex)
+ {
+ Console.WriteLine("\n ERROR: Could not write the movie record #{0:#,##0}, because:\n {1}",
+ i, ex.Message);
+ operationFailed = true;
+ break;
+ }
+}
+await Task.WhenAll(concurrentTasks);
+```
+
+### Create a document
+
+**DynamoDB**:
+
+Writing a new document in Amazon DynamoDB isn't type safe; the following example uses `newItem` as the document type:
+
+```csharp
+Task<Document> writeNew = moviesTable.PutItemAsync(newItem, token);
+await writeNew;
+```
+
+**Azure Cosmos DB**:
+
+Azure Cosmos DB provides type safety via a data model. This example uses a data model named `MovieModel`:
+
+```csharp
+public class MovieModel
+{
+ [JsonProperty("id")]
+ public string Id { get; set; }
+ [JsonProperty("title")]
+ public string Title{ get; set; }
+ [JsonProperty("year")]
+ public int Year { get; set; }
+ public MovieModel(string title, int year)
+ {
+ this.Title = title;
+ this.Year = year;
+ }
+ public MovieModel()
+ {
+
+ }
+ [JsonProperty("info")]
+ public MovieInfo MovieInfo { get; set; }
+
+ internal string PrintInfo()
+ {
+ if(this.MovieInfo!=null)
+ return string.Format("\nMovie with Title: {1}\n Year: {2}, Actors: {3}\n Directors:{4}\n Rating:{5}\n", this.Id, this.Title, this.Year, String.Join(",",this.MovieInfo.Actors), this.MovieInfo, this.MovieInfo.Rating);
+ else
+ return string.Format("\nMovie with Title: {0}\n Year: {1}\n", this.Title, this.Year);
+ }
+}
+```
+
+In Azure Cosmos DB, `newItem` will be a `MovieModel`:
+
+```csharp
+ MovieModel movieModel = new MovieModel()
+ {
+ Id = Guid.NewGuid().ToString(),
+ Title = "The Big New Movie",
+ Year = 2018,
+ MovieInfo = new MovieInfo() { Plot = "Nothing happens at all.", Rating = 0 }
+ };
+ var writeNew= moviesContainer.CreateItemAsync(movieModel, new Microsoft.Azure.Cosmos.PartitionKey(movieModel.Year));
+ await writeNew;
+```
+
+### Read a document
+
+**DynamoDB**:
+
+To read in Amazon DynamoDB, you need to define primitives:
+
+```csharp
+// Create Primitives for the HASH and RANGE portions of the primary key
+Primitive hash = new Primitive(year.ToString(), true);
+Primitive range = new Primitive(title, false);
+
+ Task<Document> readMovie = moviesTable.GetItemAsync(hash, range, token);
+ movie_record = await readMovie;
+```
+
+**Azure Cosmos DB**:
+
+However, with Azure Cosmos DB the query is natural (LINQ):
+
+```csharp
+IQueryable<MovieModel> movieQuery = moviesContainer.GetItemLinqQueryable<MovieModel>(true)
+ .Where(f => f.Year == year && f.Title == title);
+// The query is executed synchronously here, but can also be executed asynchronously via the IDocumentQuery<T> interface
+ foreach (MovieModel movie in movieQuery)
+ {
+ movie_record_cosmosdb = movie;
+ }
+```
+
+The document collection in the above example will be:
+
+- type safe
+- queryable with a natural (LINQ) syntax
+
+### Update an item
+
+**DynamoDB**:
+
+To update the item in Amazon DynamoDB:
+
+```csharp
+updateResponse = await client.UpdateItemAsync( updateRequest );
+```
+
+**Azure Cosmos DB**:
+
+In Azure Cosmos DB, an update is treated as an upsert operation, meaning the document is inserted if it doesn't exist:
+
+```csharp
+await moviesContainer.UpsertItemAsync<MovieModel>(updatedMovieModel);
+```
+
+### Delete a document
+
+**DynamoDB**:
+
+To delete an item in Amazon DynamoDB, you again need to fall back on primitives:
+
+```csharp
+Primitive hash = new Primitive(year.ToString(), true);
+ Primitive range = new Primitive(title, false);
+ DeleteItemOperationConfig deleteConfig = new DeleteItemOperationConfig( );
+ deleteConfig.ConditionalExpression = condition;
+ deleteConfig.ReturnValues = ReturnValues.AllOldAttributes;
+
+ Task<Document> delItem = table.DeleteItemAsync( hash, range, deleteConfig );
+ deletedItem = await delItem;
+```
+
+**Azure Cosmos DB**:
+
+In Azure Cosmos DB, we can query for the documents and delete them asynchronously:
+
+```csharp
+var result= ReadingMovieItem_async_List_CosmosDB("select * from c where c.info.rating>7 AND c.year=2018 AND c.title='The Big New Movie'");
+while (result.HasMoreResults)
+{
+ var resultModel = await result.ReadNextAsync();
+ foreach (var movie in resultModel.ToList<MovieModel>())
+ {
+ await moviesContainer.DeleteItemAsync<MovieModel>(movie.Id, new PartitionKey(movie.Year));
+ }
+ }
+```
+
+### Query documents
+
+**DynamoDB**:
+
+In Amazon DynamoDB, API functions are required to query the data:
+
+```csharp
+QueryOperationConfig config = new QueryOperationConfig( );
+ config.Filter = new QueryFilter( );
+ config.Filter.AddCondition( "year", QueryOperator.Equal, new DynamoDBEntry[ ] { 1992 } );
+ config.Filter.AddCondition( "title", QueryOperator.Between, new DynamoDBEntry[ ] { "B", "Hzz" } );
+ config.AttributesToGet = new List<string> { "year", "title", "info" };
+ config.Select = SelectValues.SpecificAttributes;
+ search = moviesTable.Query( config );
+```
+
+**Azure Cosmos DB**:
+
+In Azure Cosmos DB, you can do projection and filtering within a simple SQL query:
+
+```csharp
+var result = moviesContainer.GetItemQueryIterator<MovieModel>(
+ "select c.Year, c.Title, c.info from c where Year=1998 AND (CONTAINS(Title,'B') OR CONTAINS(Title,'Hzz'))");
+```
+
+For range operations, for example, 'between', you need to do a scan in Amazon DynamoDB:
+
+```csharp
+ScanRequest sRequest = new ScanRequest
+{
+ TableName = "Movies",
+ ExpressionAttributeNames = new Dictionary<string, string>
+ {
+ { "#yr", "year" }
+ },
+ ExpressionAttributeValues = new Dictionary<string, AttributeValue>
+ {
+ { ":y_a", new AttributeValue { N = "1960" } },
+ { ":y_z", new AttributeValue { N = "1969" } },
+ },
+ FilterExpression = "#yr between :y_a and :y_z",
+ ProjectionExpression = "#yr, title, info.actors[0], info.directors, info.running_time_secs"
+};
+
+ClientScanning_async( sRequest ).Wait( );
+```
+
+In Azure Cosmos DB, you can use a SQL query in a single-line statement:
+
+```csharp
+var result = moviesContainer.GetItemQueryIterator<MovieModel>(
+ "select c.title, c.info.actors[0], c.info.directors,c.info.running_time_secs from c where BETWEEN year 1960 AND 1969");
+```
+
+### Delete a container
+
+**DynamoDB**:
+
+To delete the table in Amazon DynamoDB, you can specify:
+
+```csharp
+client.DeleteTableAsync( tableName );
+```
+
+**Azure Cosmos DB**:
+
+To delete the collection in Azure Cosmos DB, you can specify:
+
+```csharp
+await moviesContainer.DeleteContainerAsync();
+```
+Then delete the database too if you need:
+
+```csharp
+await cosmosDatabase.DeleteAsync();
+```
+
+As you can see, Azure Cosmos DB supports natural SQL queries, and operations are asynchronous and much easier. You can easily migrate your complex code to Azure Cosmos DB, and the code becomes simpler after the migration.
+
+## Next steps
+
+- Learn about [performance optimization](performance-tips.md).
+- Learn about [optimize reads and writes](../key-value-store-cost.md)
+- Learn about [Monitoring in Azure Cosmos DB](../monitor.md)
+
cosmos-db Estimate Ru With Capacity Planner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/estimate-ru-with-capacity-planner.md
+
+ Title: Estimate costs using the Azure Cosmos DB capacity planner - API for NoSQL
+description: The Azure Cosmos DB capacity planner allows you to estimate the throughput (RU/s) required and cost for your workload. This article describes how to use the capacity planner to estimate the throughput and cost required when using API for NoSQL.
++++ Last updated : 08/26/2021++++
+# Estimate RU/s using the Azure Cosmos DB capacity planner - API for NoSQL
+
+> [!NOTE]
+> If you are planning a data migration to Azure Cosmos DB and all that you know is the number of vcores and servers in your existing sharded and replicated database cluster, please also read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+>
+
+Configuring your Azure Cosmos DB databases and containers with the right amount of provisioned throughput, or [Request Units (RU/s)](../request-units.md), for your workload is essential to optimizing cost and performance. This article describes how to use the Azure Cosmos DB [capacity planner](https://cosmos.azure.com/capacitycalculator/) to get an estimate of the required RU/s and cost of your workload when using the API for NoSQL. If you're using the API for MongoDB, see the [use capacity calculator with MongoDB](../mongodb/estimate-ru-capacity-planner.md) article.
++
+## <a id="basic-mode"></a>Estimate provisioned throughput and cost using basic mode
+To get a quick estimate for your workload using the basic mode, navigate to the [capacity planner](https://cosmos.azure.com/capacitycalculator/). Enter in the following parameters based on your workload:
+
+|**Input** |**Description** |
+|||
+| API |Choose API for NoSQL |
+|Number of regions|Azure Cosmos DB is available in all Azure regions. Select the number of regions required for your workload. You can associate any number of regions with your Azure Cosmos DB account. See [global distribution](../distribute-data-globally.md) in Azure Cosmos DB for more details.|
+|Multi-region writes|If you enable [multi-region writes](../distribute-data-globally.md#key-benefits-of-global-distribution), your application can read and write to any Azure region. If you disable multi-region writes, your application can write data to a single region. <br/><br/> Enable multi-region writes if you expect to have an active-active workload that requires low latency writes in different regions. For example, an IOT workload that writes data to the database at high volumes in different regions. <br/><br/> Multi-region writes guarantees 99.999% read and write availability. Multi-region writes require more throughput when compared to the single write regions. To learn more, see [how RUs are different for single and multiple-write regions](../optimize-cost-regions.md) article.|
+|Total data stored in transactional store |Total estimated data stored (GB) in the transactional store in a single region.|
+|Use analytical store| Choose **On** if you want to use analytical store. Enter the **Total data stored in analytical store**, it represents the estimated data stored (GB) in the analytical store in a single region. |
+|Item size|The estimated size of the data item (for example, document), ranging from 1 KB to 2 MB. |
+|Queries/sec |Number of queries expected per second per region. The average RU charge to run a query is estimated at 10 RUs. |
+|Point reads/sec |Number of point read operations expected per second per region. Point reads are the key/value lookup on a single item ID and a partition key. To learn more about point reads, see the [options to read data](../optimize-cost-reads-writes.md#reading-data-point-reads-and-queries) article. |
+|Creates/sec |Number of create operations expected per second per region. |
+|Updates/sec |Number of update operations expected per second per region. When you choose automatic indexing, the estimated RU/s for the update operation is calculated as one property being changed per an update. |
+|Deletes/sec |Number of delete operations expected per second per region. |
+
+After filling the required details, select **Calculate**. The **Cost Estimate** tab shows the total cost for storage and provisioned throughput. You can expand the **Show Details** link in this tab to get the breakdown of the throughput required for different CRUD and query requests. Each time you change the value of any field, select **Calculate** to recalculate the estimated cost.
++
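+As a rough, illustrative check of how these inputs translate into RU/s (the calculator's own model is authoritative), assume 1 KB items: a point read of a 1 KB item costs about 1 RU, a create costs roughly 5 RUs, and the planner estimates a query at about 10 RUs. A workload with 500 point reads/sec, 100 queries/sec, and 100 creates/sec would therefore need on the order of (500 * 1) + (100 * 10) + (100 * 5) = 2,000 RU/s per region, before accounting for consistency level, indexing policy, and multi-region writes.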
+## <a id="advanced-mode"></a>Estimate provisioned throughput and cost using advanced mode
+
+Advanced mode allows you to provide more settings that impact the RU/s estimate. To use this option, navigate to the [capacity planner](https://cosmos.azure.com/capacitycalculator/) and **sign in** to the tool with an account you use for Azure. The sign-in option is available at the right-hand corner.
+
+After you sign in, you can see more fields compared to the fields in basic mode. Enter the other parameters based on your workload.
+
+|**Input** |**Description** |
+|||
+|API|Azure Cosmos DB is a multi-model and multi-API service. Choose API for NoSQL. |
+|Number of regions|Azure Cosmos DB is available in all Azure regions. Select the number of regions required for your workload. You can associate any number of regions with your Azure Cosmos DB account. See [global distribution](../distribute-data-globally.md) in Azure Cosmos DB for more details.|
+|Multi-region writes|If you enable [multi-region writes](../distribute-data-globally.md#key-benefits-of-global-distribution), your application can read and write to any Azure region. If you disable multi-region writes, your application can write data to a single region. <br/><br/> Enable multi-region writes if you expect to have an active-active workload that requires low-latency writes in different regions. For example, an IoT workload that writes data to the database at high volumes in different regions. <br/><br/> Multi-region writes guarantee 99.999% read and write availability. Multi-region writes require more throughput when compared to single write regions. To learn more, see the [how RUs are different for single and multiple-write regions](../optimize-cost-regions.md) article.|
+|Default consistency|Azure Cosmos DB supports five consistency levels that let developers balance the tradeoffs among consistency, availability, and latency. To learn more, see the [consistency levels](../consistency-levels.md) article. <br/><br/> By default, Azure Cosmos DB uses session consistency, which guarantees the ability to read your own writes in a session. <br/><br/> Choosing strong or bounded staleness requires double the RU/s for reads when compared to session, consistent prefix, and eventual consistency. Strong consistency with multi-region writes is not supported and will automatically default to single-region writes with strong consistency. |
+|Indexing policy|By default, Azure Cosmos DB [indexes all properties](../index-policy.md) in all items for flexible and efficient queries (maps to the **Automatic** indexing policy). <br/><br/> If you choose **off**, none of the properties are indexed. This results in the lowest RU charge for writes. Select the **off** policy if you expect to only do [point reads](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) (key value lookups) and/or writes, and no queries. <br/><br/> If you choose **Automatic**, Azure Cosmos DB automatically indexes all the items as they are written. <br/><br/> The **Custom** indexing policy allows you to include or exclude specific properties from the index for lower write throughput and storage. To learn more, see the [indexing policy](../index-overview.md) and [sample indexing policies](how-to-manage-indexing-policy.md#indexing-policy-examples) articles.|
+|Total data stored in transactional store |Total estimated data stored (GB) in the transactional store in a single region.|
+|Use analytical store| Choose **On** if you want to use the analytical store. Enter the **Total data stored in analytical store**, which represents the estimated data stored (GB) in the analytical store in a single region. |
+|Workload mode|Select the **Steady** option if your workload volume is constant. <br/><br/> Select the **Variable** option if your workload volume changes over time, for example, during a specific day or month. The following setting is available if you choose the variable workload option:<ul><li>Percentage of time at peak: Percentage of time in a month where your workload requires peak (highest) throughput. </li></ul> <br/><br/> For example, if you have a workload that has high activity during 9 AM to 6 PM weekday business hours, then the percentage of time at peak is: 45 hours at peak / 730 hours per month = ~6%.<br/><br/>With peak and off-peak intervals, you can optimize your cost by [programmatically scaling your provisioned throughput](../set-throughput.md#update-throughput-on-a-database-or-a-container) up and down accordingly.|
+|Item size|The size of the data item (for example, document), ranging from 1 KB to 2 MB. You can add estimates for multiple sample items. <br/><br/>You can also **Upload sample (JSON)** document for a more accurate estimate.<br/><br/>If your workload has multiple types of items (with different JSON content) in the same container, you can upload multiple JSON documents and get the estimate. Use the **Add new item** button to add multiple sample JSON documents.|
+| Number of properties | The average number of properties per item. |
+|Point reads/sec |Number of point read operations expected per second per region. Point reads are the key/value lookup on a single item ID and a partition key. Point read operations are different from query read operations. To learn more about point reads, see the [options to read data](../optimize-cost-reads-writes.md#reading-data-point-reads-and-queries) article. If your workload mode is **Variable**, you can provide the expected number of point read operations at peak and off peak. |
+|Creates/sec |Number of create operations expected per second per region. |
+|Updates/sec |Number of update operations expected per second per region. |
+|Deletes/sec |Number of delete operations expected per second per region. |
+|Queries/sec |Number of queries expected per second per region. For an accurate estimate, either use the average cost of queries or enter the RU/s your queries use based on the query stats in the Azure portal. |
+| Average RU/s charge per query | By default, the average cost of queries/sec per region is estimated at 10 RU/s. You can increase or decrease it based on your estimated query charge.|
+
+You can also use the **Save Estimate** button to download a CSV file containing the current estimate.
++
+The prices shown in the Azure Cosmos DB capacity planner are estimates based on the public pricing rates for throughput and storage. All prices are shown in US dollars. Refer to the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to see all rates by region.
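+As a rough illustration of how a steady throughput estimate translates into a monthly cost, the following sketch uses an illustrative single-region rate of $0.008 per 100 RU/s per hour and the 730 hours-per-month figure used earlier; always confirm current regional rates on the pricing page.
+
+```csharp
+// Rough monthly cost of steady provisioned throughput, excluding storage.
+// The hourly rate is illustrative; check the Azure Cosmos DB pricing page for current rates.
+const double hourlyRatePer100RUs = 0.008; // USD per 100 RU/s per hour (illustrative)
+const double hoursPerMonth = 730;
+
+double EstimateMonthlyThroughputCost(double provisionedRUs) =>
+    provisionedRUs / 100 * hourlyRatePer100RUs * hoursPerMonth;
+
+// Example: 1,000 RU/s => 10 * 0.008 * 730 = ~$58.40 per month, plus storage.
+```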
+
+## Next steps
+
+* If all you know is the number of vcores and servers in your existing sharded and replicated database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* Learn more about [Azure Cosmos DB's pricing model](../how-pricing-works.md).
+* Create a new [Azure Cosmos DB account, database, and container](quickstart-portal.md).
+* Learn how to [optimize provisioned throughput cost](../optimize-cost-throughput.md).
+* Learn how to [optimize cost with reserved capacity](../reserved-capacity.md).
+
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/find-request-unit-charge.md
+
+ Title: Find request unit charge for a SQL query in Azure Cosmos DB
+description: Find the request unit charge for SQL queries against containers created with Azure Cosmos DB, using the Azure portal, .NET, Java, Python, or Node.js.
++++ Last updated : 06/02/2022+
+ms.devlang: csharp, java, javascript, python
+++
+# Find the request unit charge for operations in Azure Cosmos DB for NoSQL
+
+Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
+
+The cost of all database operations is normalized by Azure Cosmos DB and is expressed by *request units* (RU). *Request charge* is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your container, costs are always measured in RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see [Request Units in Azure Cosmos DB](../request-units.md).
+
+This article presents the different ways that you can find the request unit consumption for any operation run against a container in Azure Cosmos DB for NoSQL. If you're using a different API, see the [API for MongoDB](../mongodb/find-request-unit-charge.md) or [API for Cassandra](../cassandr) articles.
+
+Currently, you can measure consumption only by using the Azure portal or by inspecting the response sent from Azure Cosmos DB through one of the SDKs. If you're using the API for NoSQL, you have multiple options for finding the request charge for an operation.
+
+## Use the Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. [Create a new Azure Cosmos DB account](quickstart-dotnet.md#create-account) and feed it with data, or select an existing Azure Cosmos DB account that already contains data.
+
+1. Go to the **Data Explorer** pane, and then select the container you want to work on.
+
+1. Select **New SQL Query**.
+
+1. Enter a valid query, and then select **Execute Query**.
+
+1. Select **Query Stats** to display the actual request charge for the request you executed.
+
+ :::image type="content" source="../media/find-request-unit-charge/portal-sql-query.png" alt-text="Screenshot of a SQL query request charge in the Azure portal.":::
+
+## Use the .NET SDK
+
+# [.NET SDK V2](#tab/dotnetv2)
+
+Objects that are returned from the [.NET SDK v2](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/) expose a `RequestCharge` property:
+
+```csharp
+ResourceResponse<Document> fetchDocumentResponse = await client.ReadDocumentAsync(
+ UriFactory.CreateDocumentUri("database", "container", "itemId"),
+ new RequestOptions
+ {
+ PartitionKey = new PartitionKey("partitionKey")
+ });
+var requestCharge = fetchDocumentResponse.RequestCharge;
+
+StoredProcedureResponse<string> storedProcedureCallResponse = await client.ExecuteStoredProcedureAsync<string>(
+ UriFactory.CreateStoredProcedureUri("database", "container", "storedProcedureId"),
+ new RequestOptions
+ {
+ PartitionKey = new PartitionKey("partitionKey")
+ });
+requestCharge = storedProcedureCallResponse.RequestCharge;
+
+IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
+ UriFactory.CreateDocumentCollectionUri("database", "container"),
+ "SELECT * FROM c",
+ new FeedOptions
+ {
+ PartitionKey = new PartitionKey("partitionKey")
+ }).AsDocumentQuery();
+while (query.HasMoreResults)
+{
+ FeedResponse<dynamic> queryResponse = await query.ExecuteNextAsync<dynamic>();
+ requestCharge = queryResponse.RequestCharge;
+}
+```
+
+# [.NET SDK V3](#tab/dotnetv3)
+
+Objects that are returned from the [.NET SDK v3](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/) expose a `RequestCharge` property:
+
+[!code-csharp[](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos/tests/Microsoft.Azure.Cosmos.Tests/SampleCodeForDocs/CustomDocsSampleCode.cs?name=GetRequestCharge)]
+
+For more information, see [Quickstart: Build a .NET web app by using an API for NoSQL account in Azure Cosmos DB](quickstart-dotnet.md).
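+For reference, here's a minimal sketch of the same pattern, assuming an existing `Container` instance named `container`:
+
+```csharp
+// Point read: the typed response exposes the request charge directly.
+ItemResponse<dynamic> readResponse = await container.ReadItemAsync<dynamic>(
+    "itemId",
+    new PartitionKey("partitionKey"));
+double requestCharge = readResponse.RequestCharge;
+
+// Query: each page of results carries its own request charge.
+FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
+    "SELECT * FROM c",
+    requestOptions: new QueryRequestOptions { PartitionKey = new PartitionKey("partitionKey") });
+while (iterator.HasMoreResults)
+{
+    FeedResponse<dynamic> page = await iterator.ReadNextAsync();
+    requestCharge = page.RequestCharge;
+}
+```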
+++
+## Use the Java SDK
+
+Objects that are returned from the [Java SDK](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb) expose a `getRequestCharge()` method:
+
+```java
+RequestOptions requestOptions = new RequestOptions();
+requestOptions.setPartitionKey(new PartitionKey("partitionKey"));
+
+Observable<ResourceResponse<Document>> readDocumentResponse = client.readDocument(String.format("/dbs/%s/colls/%s/docs/%s", "database", "container", "itemId"), requestOptions);
+readDocumentResponse.subscribe(result -> {
+ double requestCharge = result.getRequestCharge();
+});
+
+Observable<StoredProcedureResponse> storedProcedureResponse = client.executeStoredProcedure(String.format("/dbs/%s/colls/%s/sprocs/%s", "database", "container", "storedProcedureId"), requestOptions, null);
+storedProcedureResponse.subscribe(result -> {
+ double requestCharge = result.getRequestCharge();
+});
+
+FeedOptions feedOptions = new FeedOptions();
+feedOptions.setPartitionKey(new PartitionKey("partitionKey"));
+
+Observable<FeedResponse<Document>> feedResponse = client
+ .queryDocuments(String.format("/dbs/%s/colls/%s", "database", "container"), "SELECT * FROM c", feedOptions);
+feedResponse.forEach(result -> {
+ double requestCharge = result.getRequestCharge();
+});
+```
+
+For more information, see [Quickstart: Build a Java application by using an Azure Cosmos DB for NoSQL account](quickstart-java.md).
+
+## Use the Node.js SDK
+
+Objects that are returned from the [Node.js SDK](https://www.npmjs.com/package/@azure/cosmos) expose a `headers` subobject that maps all the headers returned by the underlying HTTP API. The request charge is available under the `x-ms-request-charge` key:
+
+```javascript
+const item = await client
+ .database('database')
+ .container('container')
+ .item('itemId', 'partitionKey')
+ .read();
+var requestCharge = item.headers['x-ms-request-charge'];
+
+const storedProcedureResult = await client
+ .database('database')
+ .container('container')
+ .storedProcedure('storedProcedureId')
+ .execute({
+ partitionKey: 'partitionKey'
+ });
+requestCharge = storedProcedureResult.headers['x-ms-request-charge'];
+
+const query = client.database('database')
+ .container('container')
+ .items
+ .query('SELECT * FROM c', {
+ partitionKey: 'partitionKey'
+ });
+while (query.hasMoreResults()) {
+ var result = await query.executeNext();
+ requestCharge = result.headers['x-ms-request-charge'];
+}
+```
+
+For more information, see [Quickstart: Build a Node.js app by using an Azure Cosmos DB for NoSQL account](quickstart-nodejs.md).
+
+## Use the Python SDK
+
+The `CosmosClient` object from the [Python SDK](https://pypi.org/project/azure-cosmos/) exposes a `last_response_headers` dictionary that maps all the headers returned by the underlying HTTP API for the last operation executed. The request charge is available under the `x-ms-request-charge` key:
+
+```python
+response = client.ReadItem(
+ 'dbs/database/colls/container/docs/itemId', {'partitionKey': 'partitionKey'})
+request_charge = client.last_response_headers['x-ms-request-charge']
+
+response = client.ExecuteStoredProcedure(
+ 'dbs/database/colls/container/sprocs/storedProcedureId', None, {'partitionKey': 'partitionKey'})
+request_charge = client.last_response_headers['x-ms-request-charge']
+```
+
+For more information, see [Quickstart: Build a Python app by using an Azure Cosmos DB for NoSQL account](quickstart-python.md).
+
+## Next steps
+
+To learn about optimizing your RU consumption, see these articles:
+
+* [Request Units in Azure Cosmos DB](../request-units.md)
+* [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)
+* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
+* [Globally scale provisioned throughput](../request-units.md)
+* [Introduction to provisioned throughput in Azure Cosmos DB](../set-throughput.md)
+* [Provision throughput for a container](how-to-provision-container-throughput.md)
+* [Monitor and debug with insights in Azure Cosmos DB](../use-metrics.md)
cosmos-db How To Configure Cosmos Db Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-configure-cosmos-db-trigger.md
+
+ Title: Azure Functions trigger for Azure Cosmos DB advanced configuration
+description: Learn how to configure logging and connection policy used by Azure Functions trigger for Azure Cosmos DB
++++ Last updated : 07/06/2022+++
+# How to configure logging and connectivity with the Azure Functions trigger for Azure Cosmos DB
+
+This article describes advanced configuration options you can set when using the Azure Functions trigger for Azure Cosmos DB.
+
+## Enabling trigger-specific logs
+
+The Azure Functions trigger for Azure Cosmos DB uses the [Change Feed Processor Library](change-feed-processor.md) internally, and the library generates a set of health logs that can be used to monitor internal operations for [troubleshooting purposes](./troubleshoot-changefeed-functions.md).
+
+The health logs describe how the Azure Functions trigger for Azure Cosmos DB behaves when attempting operations during load-balancing, initialization, and processing scenarios.
+
+### Enabling logging
+
+To enable logging when using Azure Functions trigger for Azure Cosmos DB, locate the `host.json` file in your Azure Functions project or Azure Functions App and [configure the level of required logging](../../azure-functions/functions-monitoring.md#log-levels-and-categories). Enable the traces for `Host.Triggers.CosmosDB` as shown in the following sample:
+
+```json
+{
+ "version": "2.0",
+ "logging": {
+ "fileLoggingMode": "always",
+ "logLevel": {
+ "Host.Triggers.CosmosDB": "Warning"
+ }
+ }
+}
+```
+
+After the Azure Function is deployed with the updated configuration, you'll see the Azure Functions trigger for Azure Cosmos DB logs as part of your traces. You can view the logs in your configured logging provider under the *Category* `Host.Triggers.CosmosDB`.
+
+### Which types of logs are emitted?
+
+Once enabled, there are three levels of log events that will be emitted:
+
+* Error:
+ * When there's an unknown or critical error on the Change Feed processing that is affecting the correct trigger functionality.
+
+* Warning:
+  * When your Function user code has an unhandled exception. Either there's a gap in your Function code and the Function isn't [resilient to errors](../../azure-functions/performance-reliability.md#write-defensive-functions), or there's a serialization error (for C# Functions, the raw JSON can't be deserialized to the selected C# type).
+ * When there are transient connectivity issues preventing the trigger from interacting with the Azure Cosmos DB account. The trigger will retry these [transient connectivity errors](troubleshoot-dotnet-sdk-request-timeout.md) but if they extend for a long period of time, there could be a network problem. You can enable Debug level traces to obtain the Diagnostics from the underlying Azure Cosmos DB SDK.
+
+* Debug:
+ * When a lease is acquired by an instance - The current instance will start processing the Change Feed for the lease.
+ * When a lease is released by an instance - The current instance has stopped processing the Change Feed for the lease.
+ * When new changes are delivered from the trigger to your Function code - Helps debug situations when your Function code might be having errors and you aren't sure if you're receiving changes or not.
+  * For Warning and Error traces, the trigger adds the Diagnostics information from the underlying Azure Cosmos DB SDK for troubleshooting purposes.
+
+You can also [refer to the source code](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/dev/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerHealthMonitor.cs) to see the full details.
+
+### Query the logs
+
+Run the following query to query the logs generated by the Azure Functions trigger for Azure Cosmos DB in [Azure Application Insights' Analytics](../../azure-monitor/logs/log-query-overview.md):
+
+```sql
+traces
+| where customDimensions.Category == "Host.Triggers.CosmosDB"
+```
+
+## Configuring the connection policy
+
+There are two connection modes - Direct mode and Gateway mode. To learn more about these connection modes, see the [connection modes](sdk-connection-modes.md) article. By default, **Gateway** is used to establish all connections on the Azure Functions trigger for Azure Cosmos DB. However, it might not be the best option for performance-driven scenarios.
+
+### Changing the connection mode and protocol
+
+There are two key configuration settings available to configure the client connection policy: the **connection mode** and the **connection protocol**. You can change the default connection mode and protocol used by the Azure Functions trigger for Azure Cosmos DB and all the [Azure Cosmos DB bindings](../../azure-functions/functions-bindings-cosmosdb-v2-output.md). To change the default settings, locate the `host.json` file in your Azure Functions project or Azure Functions App and add the following [extra setting](../../azure-functions/functions-bindings-cosmosdb-v2.md#hostjson-settings):
+
+```json
+{
+ "cosmosDB": {
+ "connectionMode": "Direct",
+ "protocol": "Tcp"
+ }
+}
+```
+
+Here, `connectionMode` sets the desired connection mode (Direct or Gateway) and `protocol` sets the desired connection protocol (Tcp for Direct mode or Https for Gateway mode).
+
+If your Azure Functions project is working with the Azure Functions V1 runtime, the configuration name differs slightly: use `documentDB` instead of `cosmosDB`:
+
+```json
+{
+ "documentDB": {
+ "connectionMode": "Direct",
+ "protocol": "Tcp"
+ }
+}
+```
+
+## Customizing the user agent
+
+The Azure Functions trigger for Azure Cosmos DB performs requests to the service that are reflected in your [monitoring](../monitor.md). You can customize the user agent used for the requests from an Azure Function by changing the `userAgentSuffix` in the `host.json` [extra settings](../../azure-functions/functions-bindings-cosmosdb-v2.md?tabs=extensionv4#hostjson-settings):
+
+```json
+{
+ "cosmosDB": {
+ "userAgentSuffix": "MyUniqueIdentifier"
+ }
+}
+```
+
+> [!NOTE]
+> When hosting your function app in a Consumption plan, each instance has a limit on the number of socket connections that it can maintain. When working with Direct/TCP mode, more connections are created by design and can hit the [Consumption plan limit](../../azure-functions/manage-connections.md#connection-limit). In that case, you can either use Gateway mode or host your function app in a [Premium plan](../../azure-functions/functions-premium-plan.md) or a [Dedicated (App Service) plan](../../azure-functions/dedicated-plan.md).
+
+## Next steps
+
+* [Connection limits in Azure Functions](../../azure-functions/manage-connections.md#connection-limit)
+* [Enable monitoring](../../azure-functions/functions-monitoring.md) in your Azure Functions applications.
+* Learn how to [Diagnose and troubleshoot common issues](./troubleshoot-changefeed-functions.md) when using the Azure Functions trigger for Azure Cosmos DB.
cosmos-db How To Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-configure-cross-origin-resource-sharing.md
+
+ Title: Cross-Origin Resource Sharing (CORS) in Azure Cosmos DB
+description: This article describes how to configure Cross-Origin Resource Sharing (CORS) in Azure Cosmos DB by using Azure portal and Azure Resource Manager templates.
++++ Last updated : 10/11/2019+
+ms.devlang: javascript
++
+# Configure Cross-Origin Resource Sharing (CORS)
+
+Cross-Origin Resource Sharing (CORS) is an HTTP feature that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as same-origin policy that prevents a web page from calling APIs in a different domain. However, CORS provides a secure way to allow the origin domain to call APIs in another domain. The API for NoSQL in Azure Cosmos DB now supports Cross-Origin Resource Sharing (CORS) by using the "allowedOrigins" header. After you enable the CORS support for your Azure Cosmos DB account, only authenticated requests are evaluated to determine whether they're allowed according to the rules you've specified.
+
+You can configure the Cross-origin resource sharing (CORS) setting from the Azure portal or from an Azure Resource Manager template. For Azure Cosmos DB accounts using the API for NoSQL, Azure Cosmos DB supports a JavaScript library that works in both Node.js and browser-based environments. This library can now take advantage of CORS support when using Gateway mode. There's no client-side configuration needed to use this feature. With CORS support, resources from a browser can directly access Azure Cosmos DB through the [JavaScript library](https://www.npmjs.com/package/@azure/cosmos) or directly from the [REST API](/rest/api/cosmos-db/) for simple operations.
+
+> [!NOTE]
+> CORS support is only applicable and supported for the Azure Cosmos DB for NoSQL. It is not applicable to the Azure Cosmos DB APIs for Cassandra, Gremlin, or MongoDB, as these protocols do not use HTTP for client-server communication.
+
+## Enable CORS support from Azure portal
+
+Follow these steps to enable Cross-Origin Resource Sharing by using Azure portal:
+
+1. Navigate to your Azure Cosmos DB account. Open the **CORS** page.
+
+2. Specify a comma-separated list of origins that can make cross-origin calls to your Azure Cosmos DB account. For example, `https://www.mydomain.com`, `https://mydomain.com`, `https://api.mydomain.com`. You can also use a wildcard "\*" to allow all origins and select **Submit**.
+
+ > [!NOTE]
+ > Currently, you cannot use wildcards as part of the domain name. For example `https://*.mydomain.net` format is not yet supported.
+
+ :::image type="content" source="./media/how-to-configure-cross-origin-resource-sharing/enable-cross-origin-resource-sharing-using-azure-portal.png" alt-text="Enable cross origin resource sharing using Azure portal":::
+
+## Enable CORS support from Resource Manager template
+
+To enable CORS by using a Resource Manager template, add the "cors" section with the "allowedOrigins" property to any existing template. This JSON is an example of a template that creates a new Azure Cosmos DB account with CORS enabled.
+
+```json
+{
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "name": "[variables('accountName')]",
+ "apiVersion": "2019-08-01",
+ "location": "[parameters('location')]",
+ "kind": "GlobalDocumentDB",
+ "properties": {
+ "consistencyPolicy": "[variables('consistencyPolicy')[parameters('defaultConsistencyLevel')]]",
+ "locations": "[variables('locations')]",
+ "databaseAccountOfferType": "Standard",
+ "cors": [
+ {
+ "allowedOrigins": "https://contoso.com"
+ }
+ ]
+ }
+}
+```
+
+## Using the Azure Cosmos DB JavaScript library from a browser
+
+Today, the Azure Cosmos DB JavaScript library ships only a CommonJS version in its package. To use this library from the browser, you have to use a tool such as Rollup or Webpack to create a browser-compatible library. Certain Node.js modules also need browser mocks. The following webpack configuration file has the necessary mock settings.
+
+```javascript
+const path = require("path");
+
+module.exports = {
+ entry: "./src/index.ts",
+ devtool: "inline-source-map",
+ node: {
+ net: "mock",
+ tls: "mock"
+ },
+ output: {
+ filename: "bundle.js",
+ path: path.resolve(__dirname, "dist")
+ }
+};
+```
+
+Here's a [code sample](https://github.com/christopheranderson/cosmos-browser-sample) that uses TypeScript and Webpack with the Azure Cosmos DB JavaScript SDK library. The sample builds a Todo app that sends real time updates when new items are created.
+
+As a best practice, don't use the primary key to communicate with Azure Cosmos DB from the browser. Instead, use resource tokens to communicate. For more information about resource tokens, see [Securing access to Azure Cosmos DB](../secure-access-to-data.md#resource-tokens) article.
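+For illustration, here's a minimal server-side sketch that issues a resource token with the .NET SDK. The user ID and permission ID are hypothetical, and it assumes existing `Database` and `Container` instances named `database` and `container`. The browser app then passes the returned token to the JavaScript SDK instead of the account's primary key.
+
+```csharp
+// Create a user that represents the browser client.
+UserResponse userResponse = await database.CreateUserAsync("webUser");
+User user = userResponse.User;
+
+// Grant the user read-only access to the container.
+PermissionProperties permission = new PermissionProperties(
+    "readContainer",
+    PermissionMode.Read,
+    container);
+
+PermissionResponse permissionResponse = await user.CreatePermissionAsync(permission);
+
+// Return this token to the browser instead of the primary key.
+string resourceToken = permissionResponse.Resource.Token;
+```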
+
+## Next steps
+
+To learn about other ways to secure your Azure Cosmos DB account, see the following articles:
+
+* [Configure a firewall for Azure Cosmos DB](../how-to-configure-firewall.md) article.
+
+* [Configure virtual network and subnet-based access for your Azure Cosmos DB account](../how-to-configure-vnet-service-endpoint.md)
cosmos-db How To Convert Session Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-convert-session-token.md
+
+ Title: How to convert session token formats in .NET SDK - Azure Cosmos DB
+description: Learn how to convert session token formats to ensure compatibilities between different .NET SDK versions
++++ Last updated : 04/30/2020+
+ms.devlang: csharp
+++
+# Convert session token formats in .NET SDK
+
+This article explains how to convert between different session token formats to ensure compatibility between SDK versions.
+
+> [!NOTE]
+> By default, the SDK keeps track of the session token automatically and it will use the most recent session token. For more information, please visit [Utilize session tokens](how-to-manage-consistency.md#utilize-session-tokens). The instructions in this article only apply with the following conditions:
+> * Your Azure Cosmos DB account uses Session consistency.
+> * You are managing the session tokens manually.
+> * You are using multiple versions of the SDK at the same time.
+
+## Session token formats
+
+There are two session token formats: **simple** and **vector**. These two formats aren't interchangeable, so the format should be converted when a token is passed to a client application that uses a different SDK version.
+- The **simple** session token format is used by the .NET SDK V1 (Microsoft.Azure.DocumentDB -version 1.x)
+- The **vector** session token format is used by the .NET SDK V2 (Microsoft.Azure.DocumentDB -version 2.x)
+
+### Simple session token
+
+A simple session token has this format: `{pkrangeid}:{globalLSN}`
+
+### Vector session token
+
+A vector session token has the following format:
+`{pkrangeid}:{Version}#{GlobalLSN}#{RegionId1}={LocalLsn1}#{RegionId2}={LocalLsn2}....#{RegionIdN}={LocalLsnN}`
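+For example, with hypothetical values, the simple token `0:12345` corresponds to the vector token `0:-2#12345` for a token with no per-region entries, which matches what the conversion sample below produces.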
+
+## Convert to Simple session token
+
+To pass a session token to a client that uses the .NET SDK V1, use the **simple** session token format. Use the following sample code to convert it.
+
+```csharp
+private static readonly char[] SegmentSeparator = (new[] { '#' });
+private static readonly char[] PkRangeSeparator = (new[] { ':' });
+
+// sessionTokenToConvert = session token from previous response
+string[] items = sessionTokenToConvert.Split(PkRangeSeparator, StringSplitOptions.RemoveEmptyEntries);
+string[] sessionTokenSegments = items[1].Split(SegmentSeparator, StringSplitOptions.RemoveEmptyEntries);
+
+string sessionTokenInSimpleFormat;
+
+if (sessionTokenSegments.Length == 1)
+{
+ // returning the same token since it already has the correct format
+ sessionTokenInSimpleFormat = sessionTokenToConvert;
+}
+else
+{
+ long version = 0;
+ long globalLSN = 0;
+
+ if (!long.TryParse(sessionTokenSegments[0], out version)
+ || !long.TryParse(sessionTokenSegments[1], out globalLSN))
+ {
+ throw new ArgumentException("Invalid session token format", sessionTokenToConvert);
+ }
+
+ sessionTokenInSimpleFormat = string.Format("{0}:{1}", items[0], globalLSN);
+}
+```
+
+## Convert to Vector session token
+
+To pass a session token to a client that uses the .NET SDK V2, use the **vector** session token format. Use the following sample code to convert it.
+
+```csharp
+
+private static readonly char[] SegmentSeparator = (new[] { '#' });
+private static readonly char[] PkRangeSeparator = (new[] { ':' });
+
+// sessionTokenToConvert = session token from previous response
+string[] items = sessionTokenToConvert.Split(PkRangeSeparator, StringSplitOptions.RemoveEmptyEntries);
+string[] sessionTokenSegments = items[1].Split(SegmentSeparator, StringSplitOptions.RemoveEmptyEntries);
+
+string sessionTokenInVectorFormat;
+
+if (sessionTokenSegments.Length == 1)
+{
+ long globalLSN = 0;
+ if (long.TryParse(sessionTokenSegments[0], out globalLSN))
+ {
+ sessionTokenInVectorFormat = string.Format("{0}:-2#{1}", items[0], globalLSN);
+ }
+ else
+ {
+ throw new ArgumentException("Invalid session token format", sessionTokenToConvert);
+ }
+}
+else
+{
+ // returning the same token since it already has the correct format
+ sessionTokenInVectorFormat = sessionTokenToConvert;
+}
+```
+
+## Next steps
+
+Read the following articles:
+
+* [Use session tokens to manage consistency in Azure Cosmos DB](how-to-manage-consistency.md#utilize-session-tokens)
+* [Choose the right consistency level in Azure Cosmos DB](../consistency-levels.md)
+* [Consistency, availability, and performance tradeoffs in Azure Cosmos DB](../consistency-levels.md)
+* [Availability and performance tradeoffs for various consistency levels](../consistency-levels.md)
cosmos-db How To Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-create-account.md
+
+ Title: Create an Azure Cosmos DB for NoSQL account
+description: Learn how to create a new Azure Cosmos DB for NoSQL account to store databases, containers, and items.
++++
+ms.devlang: csharp
++ Last updated : 06/08/2022++
+# Create an Azure Cosmos DB for NoSQL account
+
+An Azure Cosmos DB for NoSQL account contains all of your Azure Cosmos DB resources: databases, containers, and items. The account provides a unique endpoint for various tools and SDKs to connect to Azure Cosmos DB and perform everyday operations. For more information about the resources in Azure Cosmos DB, see [Azure Cosmos DB resource model](../account-databases-containers-items.md).
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+
+## Create an account
+
+Create a single Azure Cosmos DB account using the API for NoSQL.
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Create shell variables for *accountName*, *resourceGroupName*, and *location*.
+
+ ```azurecli-interactive
+ # Variable for resource group name
+ resourceGroupName="msdocs-cosmos"
+
+ # Variable for location
+ location="westus"
+
+    # Variable for account name with a randomly generated suffix
+ let suffix=$RANDOM*$RANDOM
+ accountName="msdocs-$suffix"
+ ```
+
+1. If you haven't already, sign in to the Azure CLI using the [``az login``](/cli/azure/reference-index#az-login) command.
+
+1. Use the [``az group create``](/cli/azure/group#az-group-create) command to create a new resource group in your subscription.
+
+ ```azurecli-interactive
+ az group create \
+ --name $resourceGroupName \
+ --location $location
+ ```
+
+1. Use the [``az cosmosdb create``](/cli/azure/cosmosdb#az-cosmosdb-create) command to create a new Azure Cosmos DB for NoSQL account with default settings.
+
+ ```azurecli-interactive
+ az cosmosdb create \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --locations regionName=$location
+ ```
+
+#### [PowerShell](#tab/azure-powershell)
+
+1. Create shell variables for *ACCOUNT_NAME*, *RESOURCE_GROUP_NAME*, and *LOCATION*.
+
+ ```azurepowershell-interactive
+ # Variable for resource group name
+ $RESOURCE_GROUP_NAME = "msdocs-cosmos"
+
+ # Variable for location
+ $LOCATION = "West US"
+
+    # Variable for account name with a randomly generated suffix
+ $SUFFIX = Get-Random
+ $ACCOUNT_NAME = "msdocs-$SUFFIX"
+ ```
+
+1. If you haven't already, sign in to Azure PowerShell using the [``Connect-AzAccount``](/powershell/module/az.accounts/connect-azaccount) cmdlet.
+
+1. Use the [``New-AzResourceGroup``](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a new resource group in your subscription.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ Name = $RESOURCE_GROUP_NAME
+ Location = $LOCATION
+ }
+ New-AzResourceGroup @parameters
+ ```
+
+1. Use the [``New-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new Azure Cosmos DB for NoSQL account with default settings.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ Location = $LOCATION
+ }
+ New-AzCosmosDBAccount @parameters
+ ```
+
+#### [Portal](#tab/azure-portal)
+
+> [!TIP]
+> For this guide, we recommend using the resource group name ``msdocs-cosmos``.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. From the Azure portal menu or the **Home page**, select **Create a resource**.
+
+1. On the **New** page, search for and select **Azure Cosmos DB**.
+
+1. On the **Select API option** page, select the **Create** option within the **NoSQL - Recommend** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the API for NoSQL](../index.yml).
+
+    :::image type="content" source="media/create-account-portal/cosmos-api-choices.png" lightbox="media/create-account-portal/cosmos-api-choices.png" alt-text="Screenshot of the select API option page for Azure Cosmos DB.":::
+
+1. On the **Create Azure Cosmos DB Account** page, enter the following information:
+
+ | Setting | Value | Description |
+ | | | |
+ | Subscription | Subscription name | Select the Azure subscription that you wish to use for this Azure Cosmos DB account. |
+ | Resource Group | Resource group name | Select a resource group, or select **Create new**, then enter a unique name for the new resource group. |
+ | Account Name | A unique name | Enter a name to identify your Azure Cosmos DB account. The name will be used as part of a fully qualified domain name (FQDN) with a suffix of *documents.azure.com*, so the name must be globally unique. The name can only contain lowercase letters, numbers, and the hyphen (-) character. The name must also be between 3-44 characters in length. |
+ | Location | The region closest to your users | Select a geographic location to host your Azure Cosmos DB account. Use the location that is closest to your users to give them the fastest access to the data. |
+ | Capacity mode |Provisioned throughput or Serverless|Select **Provisioned throughput** to create an account in [provisioned throughput](../set-throughput.md) mode. Select **Serverless** to create an account in [serverless](../serverless.md) mode. |
+ | Apply Azure Cosmos DB free tier discount | **Apply** or **Do not apply** |With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of storage for free in an account. Learn more about [free tier](https://azure.microsoft.com/pricing/details/cosmos-db/). |
+
+ > [!NOTE]
+ > You can have up to one free tier Azure Cosmos DB account per Azure subscription and must opt-in when creating the account. If you do not see the option to apply the free tier discount, this means another account in the subscription has already been enabled with free tier.
+
+    :::image type="content" source="media/create-account-portal/new-cosmos-account-page.png" lightbox="media/create-account-portal/new-cosmos-account-page.png" alt-text="Screenshot of the new account page for Azure Cosmos DB SQL API.":::
+
+1. Select **Review + create**.
+
+1. Review the settings you provide, and then select **Create**. It takes a few minutes to create the account. Wait for the portal page to display **Your deployment is complete** before moving on.
+
+1. Select **Go to resource** to go to the Azure Cosmos DB account page.
+
+    :::image type="content" source="media/create-account-portal/cosmos-deployment-complete.png" lightbox="media/create-account-portal/cosmos-deployment-complete.png" alt-text="Screenshot of the deployment page for an Azure Cosmos DB SQL API resource.":::
+++
+## Next steps
+
+In this guide, you learned how to create an Azure Cosmos DB for NoSQL account. You can now import additional data to your Azure Cosmos DB account.
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB for NoSQL](../import-data.md)
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-create-container.md
+
+ Title: Create a container in Azure Cosmos DB for NoSQL
+description: Learn how to create a container in Azure Cosmos DB for NoSQL by using Azure portal, .NET, Java, Python, Node.js, and other SDKs.
++++ Last updated : 01/03/2022++
+ms.devlang: csharp
+++
+# Create a container in Azure Cosmos DB for NoSQL
+
+This article explains the different ways to create a container in Azure Cosmos DB for NoSQL. It shows how to create a container using the Azure portal, Azure CLI, PowerShell, or supported SDKs, and how to specify the partition key and provision throughput.
+
+If you're using a different API, see the [API for MongoDB](../mongodb/how-to-create-container.md) or [API for Cassandra](../cassandr) articles to create the container.
+
+> [!NOTE]
+> When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
+
+## <a id="portal-sql"></a>Create a container using Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. [Create a new Azure Cosmos DB account](quickstart-dotnet.md#create-account), or select an existing account.
+
+1. Open the **Data Explorer** pane, and select **New Container**. Next, provide the following details:
+
+ * Indicate whether you are creating a new database or using an existing one.
+ * Enter a **Container Id**.
+ * Enter a **Partition key** value (for example, `/ItemID`).
+    * Select **Autoscale** or **Manual** throughput and enter the required **Container throughput** (for example, 1000 RU/s).
+ * Select **OK**.
+
+ :::image type="content" source="../media/how-to-provision-container-throughput/provision-container-throughput-portal-sql-api.png" alt-text="Screenshot of Data Explorer, with New Collection highlighted":::
+
+## <a id="cli-sql"></a>Create a container using Azure CLI
+
+[Create a container with Azure CLI](manage-with-cli.md#create-a-container). For a listing of all Azure CLI samples across all Azure Cosmos DB APIs, see [Azure CLI samples for Azure Cosmos DB](cli-samples.md).
+
+## Create a container using PowerShell
+
+[Create a container with PowerShell](manage-with-powershell.md#create-container). For a listing of all PowerShell samples across all Azure Cosmos DB APIs, see [PowerShell Samples](powershell-samples.md).
+
+## <a id="dotnet-sql"></a>Create a container using .NET SDK
+
+If you encounter a timeout exception when creating a container, do a read operation to validate that the container was created successfully (see the validation sketch after the following sample). The read operation throws an exception until the container create operation succeeds. For the list of status codes supported by the create operation, see the [HTTP Status Codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) article.
+
+```csharp
+// Create a container with a partition key and provision 400 RU/s manual throughput.
+CosmosClient client = new CosmosClient(connectionString, clientOptions);
+Database database = await client.CreateDatabaseIfNotExistsAsync(databaseId);
+
+ContainerProperties containerProperties = new ContainerProperties()
+{
+ Id = containerId,
+ PartitionKeyPath = "/myPartitionKey"
+};
+
+var throughput = ThroughputProperties.CreateManualThroughput(400);
+Container container = await database.CreateContainerIfNotExistsAsync(containerProperties, throughput);
+```
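+As noted above, if the create call times out you can read the container back to confirm whether it exists. Here's a minimal sketch, assuming the same `database` and `containerId` as in the previous sample:
+
+```csharp
+try
+{
+    // Throws a CosmosException with status 404 (NotFound) until the create operation has completed.
+    ContainerResponse readResponse = await database.GetContainer(containerId).ReadContainerAsync();
+    Console.WriteLine($"Container {readResponse.Resource.Id} exists.");
+}
+catch (CosmosException ex) when (ex.StatusCode == System.Net.HttpStatusCode.NotFound)
+{
+    // The container isn't available yet; retry the read (or the create) after a short delay.
+}
+```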
+
+## Next steps
+
+* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
+* [Request Units in Azure Cosmos DB](../request-units.md)
+* [Provision throughput on containers and databases](../set-throughput.md)
+* [Work with Azure Cosmos DB account](../account-databases-containers-items.md)
cosmos-db How To Create Multiple Cosmos Db Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-create-multiple-cosmos-db-triggers.md
+
+ Title: Create multiple independent Azure Functions triggers for Azure Cosmos DB
+description: Learn how to configure multiple independent Azure Functions triggers for Azure Cosmos DB to create event-driven architectures.
++++ Last updated : 07/17/2019+
+ms.devlang: csharp
+++
+# Create multiple Azure Functions triggers for Azure Cosmos DB
+
+This article describes how you can configure multiple Azure Functions triggers for Azure Cosmos DB to work in parallel and independently react to changes.
++
+## Event-based architecture requirements
+
+When building serverless architectures with [Azure Functions](../../azure-functions/functions-overview.md), it's [recommended](../../azure-functions/performance-reliability.md#avoid-long-running-functions) to create small function sets that work together instead of large long running functions.
+
+As you build event-based serverless flows using the [Azure Functions trigger for Azure Cosmos DB](./change-feed-functions.md), you'll run into scenarios where you want to do multiple things whenever there is a new event in a particular [Azure Cosmos DB container](../account-databases-containers-items.md#azure-cosmos-db-containers). If the actions you want to trigger are independent from one another, the ideal solution is to **create one Azure Functions trigger for Azure Cosmos DB per action** you want to do, all listening for changes on the same Azure Cosmos DB container.
+
+## Optimizing containers for multiple Triggers
+
+Given the *requirements* of the Azure Functions trigger for Azure Cosmos DB, we need a second container to store state, also called the *leases container*. Does this mean that you need a separate leases container for each Azure Function?
+
+Here, you have two options:
+
+* Create **one leases container per Function**: This approach can translate into additional costs, unless you're using a [shared throughput database](../set-throughput.md#set-throughput-on-a-database). Remember that the minimum throughput at the container level is 400 [Request Units](../request-units.md), and the leases container is only used to checkpoint the progress and maintain state.
+* Have **one lease container and share it** for all your Functions: This second option makes better use of the provisioned Request Units on the container, as it enables multiple Azure Functions to share and use the same provisioned throughput.
+
+The goal of this article is to guide you to accomplish the second option.
+
+## Configuring a shared leases container
+
+To configure the shared leases container, the only extra configuration you need to make on your triggers is to add the `LeaseCollectionPrefix` [attribute](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#attributes) if you are using C#, or the `leaseCollectionPrefix` [attribute](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md) if you are using JavaScript. The value of the attribute should be a logical descriptor of what that particular trigger does.
+
+For example, if you have three Triggers: one that sends emails, one that does an aggregation to create a materialized view, and one that sends the changes to another storage, for later analysis, you could assign the `LeaseCollectionPrefix` of "emails" to the first one, "materialized" to the second one, and "analytics" to the third one.
+
+The important part is that all three Triggers **can use the same leases container configuration** (account, database, and container name).
+
+A simple code sample using the `LeaseCollectionPrefix` attribute in C# would look like this:
+
+```cs
+using Microsoft.Azure.Documents;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Host;
+using System.Collections.Generic;
+using Microsoft.Extensions.Logging;
+
+[FunctionName("SendEmails")]
+public static void SendEmails([CosmosDBTrigger(
+ databaseName: "ToDoItems",
+ collectionName: "Items",
+ ConnectionStringSetting = "CosmosDBConnection",
+ LeaseCollectionName = "leases",
+ LeaseCollectionPrefix = "emails")]IReadOnlyList<Document> documents,
+ ILogger log)
+{
+ ...
+}
+
+[FunctionName("MaterializedViews")]
+public static void MaterializedViews([CosmosDBTrigger(
+ databaseName: "ToDoItems",
+ collectionName: "Items",
+ ConnectionStringSetting = "CosmosDBConnection",
+ LeaseCollectionName = "leases",
+ LeaseCollectionPrefix = "materialized")]IReadOnlyList<Document> documents,
+ ILogger log)
+{
+ ...
+}
+```
+
+And for JavaScript, you can apply the configuration on the `function.json` file, with the `leaseCollectionPrefix` attribute:
+
+```json
+{
+ "type": "cosmosDBTrigger",
+ "name": "documents",
+ "direction": "in",
+ "leaseCollectionName": "leases",
+ "connectionStringSetting": "CosmosDBConnection",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "leaseCollectionPrefix": "emails"
+},
+{
+ "type": "cosmosDBTrigger",
+ "name": "documents",
+ "direction": "in",
+ "leaseCollectionName": "leases",
+ "connectionStringSetting": "CosmosDBConnection",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "leaseCollectionPrefix": "materialized"
+}
+```
+
+> [!NOTE]
+> Always monitor the Request Units provisioned on your shared leases container. Each Trigger that shares it will increase the average throughput consumption, so you might need to increase the provisioned throughput as you increase the number of Azure Functions that are using it.
+
+## Next steps
+
+* See the full configuration for the [Azure Functions trigger for Azure Cosmos DB](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration)
+* Check the extended [list of samples](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md) for all the languages.
+* Visit the Serverless recipes with Azure Cosmos DB and Azure Functions [GitHub repository](https://github.com/ealsur/serverless-recipes/tree/master/cosmosdbtriggerscenarios) for more samples.
cosmos-db How To Delete By Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-delete-by-partition-key.md
+
+ Title: Delete items by partition key value using the Azure Cosmos DB SDK (preview)
+description: Learn how to delete items by partition key value using the Azure Cosmos DB SDKs
+++++ Last updated : 08/19/2022+++
+# Delete items by partition key value - API for NoSQL (preview)
+
+This article explains how to use the Azure Cosmos DB SDKs to delete all items by logical partition key value.
+
+## Feature overview
+
+The delete by partition key feature is an asynchronous, background operation that allows you to delete all documents with the same logical partition key value, using the Cosmos SDK.
+
+Because the number of documents to be deleted may be large, the operation runs in the background. Though the physical deletion operation runs in the background, the effects will be available immediately, as the documents to be deleted will not appear in the results of queries or read operations.
+
+To help limit the resources used by this background task, the delete by partition key operation is constrained to consume at most 10% of the total available RU/s on the container each second.
+
+## Getting started
+
+To use the feature, your Azure Cosmos DB account must be enrolled in the preview. To enroll, submit a request for the **DeleteAllItemsByPartitionKey** feature via the [**Preview Features** page](../../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
++
+#### [.NET](#tab/dotnet-example)
+
+## Sample code
+Use [version 3.25.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) (or a higher preview version) of the Azure Cosmos DB .NET SDK to delete items by partition key.
+
+```csharp
+// Suppose our container is partitioned by tenantId, and we want to delete all the data for a particular tenant Contoso
+
+// Get reference to the container
+var container = cosmosClient.GetContainer("DatabaseName", "ContainerName");
+
+// Delete by logical partition key
+ResponseMessage deleteResponse = await container.DeleteAllItemsByPartitionKeyStreamAsync(new PartitionKey("Contoso"));
+
+if (deleteResponse.IsSuccessStatusCode)
+{
+    Console.WriteLine($"Delete all documents with partition key operation has successfully started");
+}
+```
+#### [Java](#tab/java-example)
+
+Use [version 4.19.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos) (or a higher version) of the Azure Cosmos DB Java SDK to delete items by partition key. The delete by partition key API will be marked as beta.
++
+```java
+// Suppose our container is partitioned by tenantId, and we want to delete all the data for a particular tenant Contoso
+
+// Delete by logical partition key
+CosmosItemResponse<?> deleteResponse = container.deleteAllItemsByPartitionKey(
+ new PartitionKey("Contoso"), new CosmosItemRequestOptions()).block();
+```
+
+
+### Frequently asked questions (FAQ)
+#### Are the results of the delete by partition key operation reflected immediately?
+Yes, once the delete by partition key operation starts, the documents to be deleted will not appear in the results of queries or read operations. This also means that you can write a new document with the same ID and partition key as a deleted document without resulting in a conflict.
+
+See [Known issues](#known-issues) for exceptions.
+
+#### What happens if I issue a delete by partition key operation, and then immediately write a new document with the same partition key?
+When the delete by partition key operation is issued, only the documents that exist in the container at that point in time with the partition key value will be deleted. Any new documents that come in will not be in scope for the deletion.
+
+#### How is the delete by partition key operation prioritized among other operations against the container?
+By default, the delete by partition key value operation can consume up to a reserved fraction - 0.1, or 10% - of the overall RU/s on the resource. Any Request Units (RUs) in this bucket that are unused will be available for other non-background operations, such as reads, writes, and queries.
+
+For example, suppose you have provisioned 1000 RU/s on a container. There is an ongoing delete by partition key operation that consumes 100 RUs each second for 5 seconds. During each of these 5 seconds, there are 900 RUs available for non-background database operations. Once the delete operation is complete, all 1000 RU/s are now available again.
+
+### Known issues
+For certain scenarios, the effects of a delete by partition key operation aren't guaranteed to be immediately reflected. The effect may be partially seen as the operation progresses.
+
+- [Aggregate queries](query/aggregate-functions.md) that use the index - for example, COUNT queries - that are issued during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete.
+- Queries issued against the [analytical store](../analytical-store-introduction.md) during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete.
+- [Continuous backup (point in time restore)](../continuous-backup-restore-introduction.md) - a restore that is triggered during an ongoing delete by partition key operation may contain the results of the documents to be deleted in the restored collection. It is not recommended to use this preview feature if you have a scenario that requires continuous backup.
+
+## How to give feedback or report an issue/bug
+* Email cosmosPkDeleteFeedbk@microsoft.com with questions or feedback.
+
+### SDK requirements
+
+Find the latest version of the SDK that supports this feature.
+
+| SDK | Supported versions | Package manager link |
+| | | |
+| **.NET SDK v3** | *>= 3.25.0-preview (must be preview version)* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/> |
+| **Java SDK v4** | *>= 4.19.0 (API is marked as beta)* | <https://mvnrepository.com/artifact/com.azure/azure-cosmos> |
+
+Support for other SDKs is planned for the future.
+
+## Next steps
+
+See the following articles to learn about more SDK operations in Azure Cosmos DB.
+- [Query an Azure Cosmos DB container](how-to-query-container.md)
+- [Transactional batch operations in Azure Cosmos DB using the .NET SDK](transactional-batch.md)
cosmos-db How To Dotnet Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-create-container.md
+
+ Title: Create a container in Azure Cosmos DB for NoSQL using .NET
+description: Learn how to create a container in your Azure Cosmos DB for NoSQL database using the .NET SDK.
++++
+ms.devlang: csharp
+ Last updated : 07/06/2022+++
+# Create a container in Azure Cosmos DB for NoSQL using .NET
++
+Containers in Azure Cosmos DB store sets of items. Before you can create, query, or manage items, you must first create a container.
+
+## Name a container
+
+In Azure Cosmos DB, a container is analogous to a table in a relational database. When you create a container, the container name forms a segment of the URI used to access the container resource and any child items.
+
+Here are some quick rules when naming a container:
+
+* Keep container names between 3 and 63 characters long.
+* Container names can only contain lowercase letters, numbers, or the dash (-) character.
+* Container names must start with a lowercase letter or number.
+
+Once created, the URI for a container is in this format:
+
+``https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>/colls/<container-name>``
+
+## Create a container
+
+To create a container, call one of the following methods:
+
+* [``CreateContainerAsync``](#create-a-container-asynchronously)
+* [``CreateContainerIfNotExistsAsync``](#create-a-container-asynchronously-if-it-doesnt-already-exist)
+
+### Create a container asynchronously
+
+The following example creates a container asynchronously:
++
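+For illustration, a minimal sketch of this call follows. It assumes an existing [``Database``](/dotnet/api/microsoft.azure.cosmos.database) instance named ``database``; the container name, partition key path, and throughput are hypothetical example values.
+
+```csharp
+// Sketch only: "products", "/category", and 400 RU/s are example values.
+Container container = await database.CreateContainerAsync(
+    id: "products",
+    partitionKeyPath: "/category",
+    throughput: 400);
+```
+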
+The [``Database.CreateContainerAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerasync) method will throw an exception if a container with the same name already exists.
+
+### Create a container asynchronously if it doesn't already exist
+
+The following example creates a container asynchronously only if it doesn't already exist on the account:
++
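+As a sketch, assuming the same hypothetical ``database`` instance and example names as above:
+
+```csharp
+// Sketch only: returns the existing container if "products" was already created.
+Container container = await database.CreateContainerIfNotExistsAsync(
+    id: "products",
+    partitionKeyPath: "/category",
+    throughput: 400);
+```
+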
+The [``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) method will only create a new container if it doesn't already exist. This method is useful for avoiding errors if you run the same code multiple times.
+
+## Parsing the response
+
+In all examples so far, the response from the asynchronous request was cast immediately to the [``Container``](/dotnet/api/microsoft.azure.cosmos.container) type. You may want to parse metadata about the response including headers and the HTTP status code. The true return type for the **Database.CreateContainerAsync** and **Database.CreateContainerIfNotExistsAsync** methods is [``ContainerResponse``](/dotnet/api/microsoft.azure.cosmos.containerresponse).
+
+The following example shows the **Database.CreateContainerIfNotExistsAsync** method returning a **ContainerResponse**. Once returned, you can parse response properties and then eventually get the underlying **Container** object:
++
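+A minimal sketch of parsing the response, again assuming a hypothetical ``database`` instance and example names:
+
+```csharp
+ContainerResponse response = await database.CreateContainerIfNotExistsAsync(
+    id: "products",
+    partitionKeyPath: "/category");
+
+// Inspect response metadata such as the HTTP status code and request charge.
+Console.WriteLine($"Status code: {(int)response.StatusCode}");
+Console.WriteLine($"Request charge: {response.RequestCharge:0.00}");
+
+// Get the underlying Container object from the response.
+Container container = response.Container;
+```
+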
+## Next steps
+
+Now that you've created a container, use the next guide to create items.
+
+> [!div class="nextstepaction"]
+> [Create an item](how-to-dotnet-create-item.md)
cosmos-db How To Dotnet Create Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-create-database.md
+
+ Title: Create a database in Azure Cosmos DB for NoSQL using .NET
+description: Learn how to create a database in your Azure Cosmos DB for NoSQL account using the .NET SDK.
++++
+ms.devlang: csharp
+ Last updated : 07/06/2022+++
+# Create a database in Azure Cosmos DB for NoSQL using .NET
++
+Databases in Azure Cosmos DB are units of management for one or more containers. Before you can create or manage containers, you must first create a database.
+
+## Name a database
+
+In Azure Cosmos DB, a database is analogous to a namespace. When you create a database, the database name forms a segment of the URI used to access the database resource and any child resources.
+
+Here are some quick rules when naming a database:
+
+* Keep database names between 3 and 63 characters long.
+* Database names can only contain lowercase letters, numbers, or the dash (-) character.
+* Database names must start with a lowercase letter or number.
+
+Once created, the URI for a database is in this format:
+
+``https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>``
+
+## Create a database
+
+To create a database, call one of the following methods:
+
+* [``CreateDatabaseAsync``](#create-a-database-asynchronously)
+* [``CreateDatabaseIfNotExistsAsync``](#create-a-database-asynchronously-if-it-doesnt-already-exist)
+
+### Create a database asynchronously
+
+The following example creates a database asynchronously:
++
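+For illustration, a minimal sketch of this call follows. It assumes an existing [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) instance named ``client``; the database name ``cosmicworks`` is a hypothetical example.
+
+```csharp
+// Sketch only: throws a CosmosException if "cosmicworks" already exists.
+Database database = await client.CreateDatabaseAsync(id: "cosmicworks");
+```
+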
+The [``CosmosClient.CreateDatabaseAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseasync) method will throw an exception if a database with the same name already exists.
+
+### Create a database asynchronously if it doesn't already exist
+
+The following example creates a database asynchronously only if it doesn't already exist on the account:
++
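+As a sketch, with the same hypothetical ``client`` instance and database name:
+
+```csharp
+// Sketch only: returns the existing database if "cosmicworks" was already created.
+Database database = await client.CreateDatabaseIfNotExistsAsync(id: "cosmicworks");
+```
+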
+The [``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) method will only create a new database if it doesn't already exist. This method is useful for avoiding errors if you run the same code multiple times.
+
+## Parsing the response
+
+In all examples so far, the response from the asynchronous request was cast immediately to the [``Database``](/dotnet/api/microsoft.azure.cosmos.database) type. You may want to parse metadata about the response including headers and the HTTP status code. The true return type for the **CosmosClient.CreateDatabaseAsync** and **CosmosClient.CreateDatabaseIfNotExistsAsync** methods is [``DatabaseResponse``](/dotnet/api/microsoft.azure.cosmos.databaseresponse).
+
+The following example shows the **CosmosClient.CreateDatabaseIfNotExistsAsync** method returning a **DatabaseResponse**. Once returned, you can parse response properties and then eventually get the underlying **Database** object:
++
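+A minimal sketch of parsing the response, again assuming a hypothetical ``client`` instance and database name:
+
+```csharp
+DatabaseResponse response = await client.CreateDatabaseIfNotExistsAsync(id: "cosmicworks");
+
+// Inspect response metadata such as the HTTP status code and request charge.
+Console.WriteLine($"Status code: {(int)response.StatusCode}");
+Console.WriteLine($"Request charge: {response.RequestCharge:0.00}");
+
+// Get the underlying Database object from the response.
+Database database = response.Database;
+```
+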
+## Next steps
+
+Now that you've created a database, use the next guide to create containers.
+
+> [!div class="nextstepaction"]
+> [Create a container](how-to-dotnet-create-container.md)
cosmos-db How To Dotnet Create Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-create-item.md
+
+ Title: Create an item in Azure Cosmos DB for NoSQL using .NET
+description: Learn how to create, upsert, or replace an item in your Azure Cosmos DB for NoSQL container using the .NET SDK.
++++
+ms.devlang: csharp
+ Last updated : 07/06/2022+++
+# Create an item in Azure Cosmos DB for NoSQL using .NET
++
+Items in Azure Cosmos DB represent a specific entity stored within a container. In the API for NoSQL, an item consists of JSON-formatted data with a unique identifier.
+
+## Create a unique identifier for an item
+
+The unique identifier is a distinct string that identifies an item within a container. The ``id`` property is the only required property when creating a new JSON document. For example, this JSON document is a valid item in Azure Cosmos DB:
+
+```json
+{
+ "id": "unique-string-2309509"
+}
+```
+
+Within the scope of a container, two items can't share the same unique identifier.
+
+> [!IMPORTANT]
+> The ``id`` property is case-sensitive. Properties named ``ID``, ``Id``, ``iD``, and ``_id`` are treated as arbitrary JSON properties.
+
+Once created, the URI for an item is in this format:
+
+``https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>/docs/<item-resource-identifier>``
+
+When referencing the item using a URI, use the system-generated *resource identifier* instead of the ``id`` field. For more information about system-generated item properties in Azure Cosmos DB for NoSQL, see [properties of an item](../account-databases-containers-items.md#properties-of-an-item).
+
+## Create an item
+
+> [!NOTE]
+> The examples in this article assume that you have already defined a C# type to represent your data named **Product**:
+>
+> :::code language="csharp" source="~/azure-cosmos-dotnet-v3/250-create-item/Product.cs" id="type" :::
+>
+> The examples also assume that you have already created a new object of type **Product** named **newItem**:
+>
+> :::code language="csharp" source="~/azure-cosmos-dotnet-v3/250-create-item/Program.cs" id="create_object" :::
+>
+
+To create an item, call one of the following methods:
+
+* [``CreateItemAsync<>``](#create-an-item-asynchronously)
+* [``ReplaceItemAsync<>``](#replace-an-item-asynchronously)
+* [``UpsertItemAsync<>``](#create-or-replace-an-item-asynchronously)
+
+## Create an item asynchronously
+
+The following example creates a new item asynchronously:
++
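+A minimal sketch of this call, assuming an existing ``Container`` instance named ``container``, the ``newItem`` object from the note above, and a hypothetical partition key value:
+
+```csharp
+// Sketch only: the partition key value must match the item's partition key property.
+Product createdItem = await container.CreateItemAsync<Product>(
+    item: newItem,
+    partitionKey: new PartitionKey("gear-surf-surfboards"));
+```
+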
+The [``Container.CreateItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync) method will throw an exception if there's a conflict with the unique identifier of an existing item. To learn more about potential exceptions, see [``CreateItemAsync<>`` exceptions](/dotnet/api/microsoft.azure.cosmos.container.createitemasync#exceptions).
+
+## Replace an item asynchronously
+
+The following example replaces an existing item asynchronously:
++
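+As a sketch, using the same hypothetical ``container``, ``newItem``, and partition key value; the ``id`` property name depends on how your **Product** type is defined:
+
+```csharp
+// Sketch only: the id argument must match the unique identifier of newItem.
+Product replacedItem = await container.ReplaceItemAsync<Product>(
+    item: newItem,
+    id: newItem.id,
+    partitionKey: new PartitionKey("gear-surf-surfboards"));
+```
+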
+The [``Container.ReplaceItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.replaceitemasync) method requires the provided string for the ``id`` parameter to match the unique identifier of the ``item`` parameter.
+
+## Create or replace an item asynchronously
+
+The following example will create a new item or replace an existing item if an item already exists with the same unique identifier:
++
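+A minimal sketch, with the same assumptions as the previous examples:
+
+```csharp
+// Sketch only: creates the item if it doesn't exist, otherwise replaces it.
+Product upsertedItem = await container.UpsertItemAsync<Product>(
+    item: newItem,
+    partitionKey: new PartitionKey("gear-surf-surfboards"));
+```
+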
+The [``Container.UpsertItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.upsertitemasync) method will use the unique identifier of the ``item`` parameter to determine if there's a conflict with an existing item and to replace the item appropriately.
+
+## Next steps
+
+Now that you've created various items, use the next guide to read an item.
+
+> [!div class="nextstepaction"]
+> [Read an item](how-to-dotnet-read-item.md)
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-get-started.md
+
+ Title: Get started with Azure Cosmos DB for NoSQL and .NET
+description: Get started developing a .NET application that works with Azure Cosmos DB for NoSQL. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for NoSQL endpoint.
++++
+ms.devlang: csharp
+ Last updated : 07/06/2022+++
+# Get started with Azure Cosmos DB for NoSQL and .NET
++
+This article shows you how to connect to Azure Cosmos DB for NoSQL using the .NET SDK. Once connected, you can perform operations on databases, containers, and items.
+
+[Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | [Samples](samples-dotnet.md) | [API reference](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Give Feedback](https://github.com/Azure/azure-cosmos-dotnet-v3/issues)
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* Azure Cosmos DB for NoSQL account. [Create an API for NoSQL account](how-to-create-account.md).
+* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+
+## Set up your project
+
+### Create the .NET console application
+
+Create a new .NET application by using the [``dotnet new``](/dotnet/core/tools/dotnet-new) command with the **console** template.
+
+```dotnetcli
+dotnet new console
+```
+
+Import the [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) NuGet package using the [``dotnet add package``](/dotnet/core/tools/dotnet-add-package) command.
+
+```dotnetcli
+dotnet add package Microsoft.Azure.Cosmos
+```
+
+Build the project with the [``dotnet build``](/dotnet/core/tools/dotnet-build) command.
+
+```dotnetcli
+dotnet build
+```
+
+## <a id="connect-to-azure-cosmos-db-sql-api"></a>Connect to Azure Cosmos DB for NoSQL
+
+To connect to the API for NoSQL of Azure Cosmos DB, create an instance of the [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) class. This class is the starting point to perform all operations against databases. There are three core ways to connect to an API for NoSQL account using the **CosmosClient** class:
+
+* [Connect with an API for NoSQL endpoint and read/write key](#connect-with-an-endpoint-and-key)
+* [Connect with an API for NoSQL connection string](#connect-with-a-connection-string)
+* [Connect with Azure Active Directory](#connect-using-the-microsoft-identity-platform)
+
+### Connect with an endpoint and key
+
+The most common constructor for **CosmosClient** has two parameters:
+
+| Parameter | Example value | Description |
+|--|--|--|
+| ``accountEndpoint`` | ``COSMOS_ENDPOINT`` environment variable | API for NoSQL endpoint to use for all requests |
+| ``authKeyOrResourceToken`` | ``COSMOS_KEY`` environment variable | Account key or resource token to use when authenticating |
+
+#### Retrieve your account endpoint and key
+
+##### [Azure CLI](#tab/azure-cli)
+
+1. Create a shell variable for *resourceGroupName*.
+
+ ```azurecli-interactive
+ # Variable for resource group name
+ resourceGroupName="msdocs-cosmos-dotnet-howto-rg"
+ ```
+
+1. Use the [``az cosmosdb list``](/cli/azure/cosmosdb#az-cosmosdb-list) command to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *accountName* shell variable.
+
+ ```azurecli-interactive
+ # Retrieve most recently created account name
+ accountName=$(
+ az cosmosdb list \
+ --resource-group $resourceGroupName \
+ --query "[0].name" \
+ --output tsv
+ )
+ ```
+
+1. Get the API for NoSQL endpoint *URI* for the account using the [``az cosmosdb show``](/cli/azure/cosmosdb#az-cosmosdb-show) command.
+
+ ```azurecli-interactive
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --query "documentEndpoint"
+ ```
+
+1. Find the *PRIMARY KEY* from the list of keys for the account with the [`az-cosmosdb-keys-list`](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
+
+ ```azurecli-interactive
+ az cosmosdb keys list \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --type "keys" \
+ --query "primaryMasterKey"
+ ```
+
+1. Record the *URI* and *PRIMARY KEY* values. You'll use these credentials later.
+
+##### [PowerShell](#tab/azure-powershell)
+
+1. Create a shell variable for *RESOURCE_GROUP_NAME*.
+
+ ```azurepowershell-interactive
+ # Variable for resource group name
+ $RESOURCE_GROUP_NAME = "msdocs-cosmos-dotnet-howto-rg"
+ ```
+
+1. Use the [``Get-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) cmdlet to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *ACCOUNT_NAME* shell variable.
+
+ ```azurepowershell-interactive
+ # Retrieve most recently created account name
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ }
+ $ACCOUNT_NAME = (
+ Get-AzCosmosDBAccount @parameters |
+ Select-Object -Property Name -First 1
+ ).Name
+ ```
+
+1. Get the API for NoSQL endpoint *URI* for the account using the [``Get-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) cmdlet.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ }
+ Get-AzCosmosDBAccount @parameters |
+ Select-Object -Property "DocumentEndpoint"
+ ```
+
+1. Find the *PRIMARY KEY* from the list of keys for the account with the [``Get-AzCosmosDBAccountKey``](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ Type = "Keys"
+ }
+ Get-AzCosmosDBAccountKey @parameters |
+ Select-Object -Property "PrimaryMasterKey"
+ ```
+
+1. Record the *URI* and *PRIMARY KEY* values. You'll use these credentials later.
+
+##### [Portal](#tab/azure-portal)
+
+> [!TIP]
+> For this guide, we recommend using the resource group name ``msdocs-cosmos-dotnet-howto-rg``.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to the existing Azure Cosmos DB for NoSQL account page.
+
+1. From the Azure Cosmos DB for NoSQL account page, select the **Keys** navigation menu option.
+
+ :::image type="content" source="media/get-credentials-portal/cosmos-keys-option.png" lightbox="media/get-credentials-portal/cosmos-keys-option.png" alt-text="Screenshot of an Azure Cosmos DB SQL API account page. The Keys option is highlighted in the navigation menu.":::
+
+1. Record the values from the **URI** and **PRIMARY KEY** fields. You'll use these values in a later step.
+
+ :::image type="content" source="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" lightbox="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos DB SQL API account.":::
+++
+To use the **URI** and **PRIMARY KEY** values within your .NET code, persist them to new environment variables on the local machine running the application.
+
+#### [Windows](#tab/windows)
+
+```powershell
+$env:COSMOS_ENDPOINT = "<cosmos-account-URI>"
+$env:COSMOS_KEY = "<cosmos-account-PRIMARY-KEY>"
+```
+
+#### [Linux / macOS](#tab/linux+macos)
+
+```bash
+export COSMOS_ENDPOINT="<cosmos-account-URI>"
+export COSMOS_KEY="<cosmos-account-PRIMARY-KEY>"
+```
+++
+#### Create CosmosClient with account endpoint and key
+
+Create a new instance of the **CosmosClient** class with the ``COSMOS_ENDPOINT`` and ``COSMOS_KEY`` environment variables as parameters.
++
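+A minimal sketch of this constructor call, assuming the environment variables set earlier in this article:
+
+```csharp
+using Microsoft.Azure.Cosmos;
+
+// Sketch only: read the endpoint and key from the environment variables above.
+string endpoint = Environment.GetEnvironmentVariable("COSMOS_ENDPOINT")!;
+string key = Environment.GetEnvironmentVariable("COSMOS_KEY")!;
+
+CosmosClient client = new(
+    accountEndpoint: endpoint,
+    authKeyOrResourceToken: key);
+```
+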
+### Connect with a connection string
+
+Another constructor for **CosmosClient** only contains a single parameter:
+
+| Parameter | Example value | Description |
+|--|--|--|
+| ``connectionString`` | ``COSMOS_CONNECTION_STRING`` environment variable | Connection string to the API for NoSQL account |
+
+#### Retrieve your account connection string
+
+##### [Azure CLI](#tab/azure-cli)
+
+1. Use the [``az cosmosdb list``](/cli/azure/cosmosdb#az-cosmosdb-list) command to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *accountName* shell variable.
+
+ ```azurecli-interactive
+ # Retrieve most recently created account name
+ accountName=$(
+ az cosmosdb list \
+ --resource-group $resourceGroupName \
+ --query "[0].name" \
+ --output tsv
+ )
+ ```
+
+1. Find the *PRIMARY CONNECTION STRING* from the list of connection strings for the account with the [`az-cosmosdb-keys-list`](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
+
+ ```azurecli-interactive
+ az cosmosdb keys list \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --type "connection-strings" \
+ --query "connectionStrings[?description == \`Primary SQL Connection String\`] | [0].connectionString"
+ ```
+
+##### [PowerShell](#tab/azure-powershell)
+
+1. Use the [``Get-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) cmdlet to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *ACCOUNT_NAME* shell variable.
+
+ ```azurepowershell-interactive
+ # Retrieve most recently created account name
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ }
+ $ACCOUNT_NAME = (
+ Get-AzCosmosDBAccount @parameters |
+ Select-Object -Property Name -First 1
+ ).Name
+ ```
+
+1. Find the *PRIMARY CONNECTION STRING* from the list of connection strings for the account with the [``Get-AzCosmosDBAccountKey``](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ Type = "ConnectionStrings"
+ }
+ Get-AzCosmosDBAccountKey @parameters |
+ Select-Object -Property "Primary SQL Connection String" -First 1
+ ```
+
+##### [Portal](#tab/azure-portal)
+
+> [!TIP]
+> For this guide, we recommend using the resource group name ``msdocs-cosmos-dotnet-howto-rg``.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to the existing Azure Cosmos DB for NoSQL account page.
+
+1. From the Azure Cosmos DB for NoSQL account page, select the **Keys** navigation menu option.
+
+1. Record the value from the **PRIMARY CONNECTION STRING** field.
++
+To use the **PRIMARY CONNECTION STRING** value within your .NET code, persist it to a new environment variable on the local machine running the application.
+
+#### [Windows](#tab/windows)
+
+```powershell
+$env:COSMOS_CONNECTION_STRING = "<cosmos-account-PRIMARY-CONNECTION-STRING>"
+```
+
+#### [Linux / macOS](#tab/linux+macos)
+
+```bash
+export COSMOS_CONNECTION_STRING="<cosmos-account-PRIMARY-CONNECTION-STRING>"
+```
+++
+#### Create CosmosClient with connection string
+
+Create a new instance of the **CosmosClient** class with the ``COSMOS_CONNECTION_STRING`` environment variable as the only parameter.
++
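+As a sketch, assuming the ``COSMOS_CONNECTION_STRING`` environment variable set above:
+
+```csharp
+using Microsoft.Azure.Cosmos;
+
+// Sketch only: the connection string already encodes the endpoint and key.
+string connectionString = Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING")!;
+
+CosmosClient client = new(connectionString);
+```
+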
+### Connect using the Microsoft Identity Platform
+
+To connect to your API for NoSQL account using the Microsoft Identity Platform and Azure AD, use a security principal. The exact type of principal will depend on where you host your application code. The table below serves as a quick reference guide.
+
+| Where the application runs | Security principal |
+|--|--|
+| Local machine (developing and testing) | User identity or service principal |
+| Azure | Managed identity |
+| Servers or clients outside of Azure | Service principal |
+
+#### Import Azure.Identity
+
+The **Azure.Identity** NuGet package contains core authentication functionality that is shared among all Azure SDK libraries.
+
+Import the [Azure.Identity](https://www.nuget.org/packages/Azure.Identity) NuGet package using the ``dotnet add package`` command.
+
+```dotnetcli
+dotnet add package Azure.Identity
+```
+
+Rebuild the project with the ``dotnet build`` command.
+
+```dotnetcli
+dotnet build
+```
+
+In your code editor, add using directives for ``Azure.Core`` and ``Azure.Identity`` namespaces.
++
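+For example, the top of the file might contain directives like this minimal sketch:
+
+```csharp
+using Azure.Core;
+using Azure.Identity;
+```
+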
+#### Create CosmosClient with default credential implementation
+
+If you're testing on a local machine, or your application will run on Azure services with direct support for managed identities, obtain an OAuth token by creating a [``DefaultAzureCredential``](/dotnet/api/azure.identity.defaultazurecredential) instance.
+
+For this example, we saved the instance in a variable of type [``TokenCredential``](/dotnet/api/azure.core.tokencredential) as that's a more generic type that's reusable across SDKs.
++
+Create a new instance of the **CosmosClient** class with the ``COSMOS_ENDPOINT`` environment variable and the **TokenCredential** object as parameters.
++
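+A minimal sketch that combines both steps, assuming the ``COSMOS_ENDPOINT`` environment variable from earlier in this article:
+
+```csharp
+using Azure.Core;
+using Azure.Identity;
+using Microsoft.Azure.Cosmos;
+
+// Sketch only: DefaultAzureCredential tries the available credential sources
+// (environment, managed identity, Azure CLI, and so on) in order.
+TokenCredential credential = new DefaultAzureCredential();
+
+CosmosClient client = new(
+    Environment.GetEnvironmentVariable("COSMOS_ENDPOINT")!,
+    credential);
+```
+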
+#### Create CosmosClient with a custom credential implementation
+
+If you plan to deploy the application out of Azure, you can obtain an OAuth token by using other classes in the [Azure.Identity client library for .NET](/dotnet/api/overview/azure/identity-readme). These other classes also derive from the ``TokenCredential`` class.
+
+For this example, we create a [``ClientSecretCredential``](/dotnet/api/azure.identity.clientsecretcredential) instance by using client and tenant identifiers, along with a client secret.
++
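+A minimal sketch of creating the credential; the environment variable names used here are hypothetical placeholders for your own app registration values:
+
+```csharp
+using Azure.Core;
+using Azure.Identity;
+
+// Sketch only: supply the tenant ID, client ID, and client secret from your
+// Azure AD app registration.
+TokenCredential credential = new ClientSecretCredential(
+    tenantId: Environment.GetEnvironmentVariable("AZURE_TENANT_ID")!,
+    clientId: Environment.GetEnvironmentVariable("AZURE_CLIENT_ID")!,
+    clientSecret: Environment.GetEnvironmentVariable("AZURE_CLIENT_SECRET")!);
+```
+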
+You can obtain the client ID, tenant ID, and client secret when you register an application in Azure Active Directory (AD). For more information about registering Azure AD applications, see [Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
+
+Create a new instance of the **CosmosClient** class with the ``COSMOS_ENDPOINT`` environment variable and the **TokenCredential** object as parameters.
++
+## Build your application
+
+As you build your application, your code will primarily interact with four types of resources:
+
+* The API for NoSQL account, which is the unique top-level namespace for your Azure Cosmos DB data.
+
+* Databases, which organize the containers in your account.
+
+* Containers, which contain a set of individual items in your database.
+
+* Items, which represent a JSON document in your container.
+
+The following diagram shows the relationship between these resources.
+
+ *Diagram: An Azure Cosmos DB account at the top, with two child database nodes. One database node includes two child container nodes; the other includes a single child container node, which has three child item nodes.*
+
+Each type of resource is represented by one or more associated .NET classes. Here's a list of the most common classes:
+
+| Class | Description |
+|--|--|
+| [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) | This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service. |
+| [``Database``](/dotnet/api/microsoft.azure.cosmos.database) | This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it. |
+| [``Container``](/dotnet/api/microsoft.azure.cosmos.container) | This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it. |
+
+The following guides show you how to use each of these classes to build your application.
+
+| Guide | Description |
+|--|--|
+| [Create a database](how-to-dotnet-create-database.md) | Create databases |
+| [Create a container](how-to-dotnet-create-container.md) | Create containers |
+| [Read an item](how-to-dotnet-read-item.md) | Point read a specific item |
+| [Query items](how-to-dotnet-query-items.md) | Query multiple items |
+
+## See also
+
+* [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos)
+* [Samples](samples-dotnet.md)
+* [API reference](/dotnet/api/microsoft.azure.cosmos)
+* [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3)
+* [Give Feedback](https://github.com/Azure/azure-cosmos-dotnet-v3/issues)
+
+## Next steps
+
+Now that you've connected to an API for NoSQL account, use the next guide to create and manage databases.
+
+> [!div class="nextstepaction"]
+> [Create a database in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-create-database.md)
cosmos-db How To Dotnet Query Items https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-query-items.md
+
+ Title: Query items in Azure Cosmos DB for NoSQL using .NET
+description: Learn how to query items in your Azure Cosmos DB for NoSQL container using the .NET SDK.
++++
+ms.devlang: csharp
+ Last updated : 06/15/2022+++
+# Query items in Azure Cosmos DB for NoSQL using .NET
++
+Items in Azure Cosmos DB represent entities stored within a container. In the API for NoSQL, an item consists of JSON-formatted data with a unique identifier. When you issue queries using the API for NoSQL, results are returned as a JSON array of JSON documents.
+
+## Query items using SQL
+
+Azure Cosmos DB for NoSQL supports using Structured Query Language (SQL) to perform queries on items in containers. A simple SQL query like ``SELECT * FROM products`` returns all items and properties from a container. Queries can be even more complex and include specific field projections, filters, and other common SQL clauses:
+
+```sql
+SELECT
+ p.name,
+ p.description AS copy
+FROM
+ products p
+WHERE
+ p.price > 500
+```
+
+To learn more about the SQL syntax for Azure Cosmos DB for NoSQL, see [Getting started with SQL queries](query/getting-started.md).
+
+## Query an item
+
+> [!NOTE]
+> The examples in this article assume that you have already defined a C# type to represent your data named **Product**:
+>
+> :::code language="csharp" source="~/azure-cosmos-dotnet-v3/300-query-items/Product.cs" id="type" :::
+>
+
+To query items in a container, call one of the following methods:
+
+* [``GetItemQueryIterator<>``](#query-items-using-a-sql-query-asynchronously)
+* [``GetItemLinqQueryable<>``](#query-items-using-linq-asynchronously)
+
+## Query items using a SQL query asynchronously
+
+This example builds a SQL query using a simple string, retrieves a feed iterator, and then uses nested loops to iterate over results. The outer **while** loop will iterate through result pages, while the inner **foreach** loop iterates over results within a page.
++
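+A minimal sketch of this pattern, assuming an existing ``Container`` instance named ``container`` and a **Product** type that exposes ``name`` and ``price`` properties:
+
+```csharp
+// Sketch only: the query text and property names are example values.
+FeedIterator<Product> feed = container.GetItemQueryIterator<Product>(
+    "SELECT * FROM products p WHERE p.price > 500");
+
+while (feed.HasMoreResults)
+{
+    FeedResponse<Product> page = await feed.ReadNextAsync();
+    foreach (Product product in page)
+    {
+        Console.WriteLine($"Found item: {product.name}");
+    }
+}
+```
+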
+The [Container.GetItemQueryIterator<>](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) method returns a [``FeedIterator<>``](/dotnet/api/microsoft.azure.cosmos.feediterator-1) that is used to iterate through multi-page results. The ``HasMoreResults`` property indicates if there are more result pages left. The ``ReadNextAsync`` method gets the next page of results as an enumerable that is then used in a loop to iterate over results.
+
+Alternatively, use the [QueryDefinition](/dotnet/api/microsoft.azure.cosmos.querydefinition) to build a SQL query with parameterized input:
++
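+A sketch of the same query with parameterized input, under the same assumptions:
+
+```csharp
+// Sketch only: @lowerBound is bound to a value instead of being concatenated
+// into the query text.
+QueryDefinition query = new QueryDefinition(
+    "SELECT * FROM products p WHERE p.price > @lowerBound")
+    .WithParameter("@lowerBound", 500);
+
+FeedIterator<Product> feed = container.GetItemQueryIterator<Product>(query);
+
+while (feed.HasMoreResults)
+{
+    FeedResponse<Product> page = await feed.ReadNextAsync();
+    foreach (Product product in page)
+    {
+        Console.WriteLine($"Found item: {product.name}");
+    }
+}
+```
+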
+> [!TIP]
+> Parameterized input values can help prevent many common SQL query injection attacks.
+
+## Query items using LINQ asynchronously
+
+In this example, an [``IQueryable<>``](/dotnet/api/system.linq.iqueryable) object is used to construct a [Language Integrated Query (LINQ)](/dotnet/csharp/programming-guide/concepts/linq/) query. The results are then iterated over using a feed iterator.
++
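+A minimal sketch, again assuming a ``container`` instance and a **Product** type with ``name`` and ``price`` properties:
+
+```csharp
+using Microsoft.Azure.Cosmos.Linq;
+
+// Sketch only: build the query with LINQ, then convert it to a feed iterator.
+IQueryable<Product> queryable = container.GetItemLinqQueryable<Product>()
+    .Where(p => p.price > 500);
+
+FeedIterator<Product> feed = queryable.ToFeedIterator();
+
+while (feed.HasMoreResults)
+{
+    FeedResponse<Product> page = await feed.ReadNextAsync();
+    foreach (Product product in page)
+    {
+        Console.WriteLine($"Found item: {product.name}");
+    }
+}
+```
+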
+The [Container.GetItemLinqQueryable<>](/dotnet/api/microsoft.azure.cosmos.container.getitemlinqqueryable) method constructs an ``IQueryable`` to build the LINQ query. Then the ``ToFeedIterator<>`` method is used to convert the LINQ query expression into a [``FeedIterator<>``](/dotnet/api/microsoft.azure.cosmos.feediterator-1).
+
+> [!TIP]
+> While you can iterate over the ``IQueryable<>``, this operation is synchronous. Use the ``ToFeedIterator<>`` method to gather results asynchronously.
+
+## Next steps
+
+Now that you've queried multiple items, try one of our end-to-end tutorials with the API for NoSQL.
+
+> [!div class="nextstepaction"]
+> [Build an app that queries and adds data to Azure Cosmos DB for NoSQL](/training/modules/build-dotnet-app-cosmos-db-sql-api/)
cosmos-db How To Dotnet Read Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-read-item.md
+
+ Title: Read an item in Azure Cosmos DB for NoSQL using .NET
+description: Learn how to point read a specific item in your Azure Cosmos DB for NoSQL container using the .NET SDK.
++++
+ms.devlang: csharp
+ Last updated : 07/06/2022+++
+# Read an item in Azure Cosmos DB for NoSQL using .NET
++
+Items in Azure Cosmos DB represent a specific entity stored within a container. In the API for NoSQL, an item consists of JSON-formatted data with a unique identifier.
+
+## Reading items with unique identifiers
+
+Every item in Azure Cosmos DB for NoSQL has a unique identifier specified by the ``id`` property. Within the scope of a container, two items can't share the same unique identifier. However, Azure Cosmos DB requires both the unique identifier and the partition key value of an item to perform a quick *point read* of that item. If only the unique identifier is available, you would have to perform a less efficient [query](how-to-dotnet-query-items.md) to look up the item across multiple logical partitions. To learn more about point reads and queries, see [optimize request cost for reading data](../optimize-cost-reads-writes.md#reading-data-point-reads-and-queries).
+
+## Read an item
+
+> [!NOTE]
+> The examples in this article assume that you have already defined a C# type to represent your data named **Product**:
+>
+> :::code language="csharp" source="~/azure-cosmos-dotnet-v3/275-read-item/Product.cs" id="type" :::
+>
+
+To perform a point read of an item, call one of the following methods:
+
+* [``ReadItemAsync<>``](#read-an-item-asynchronously)
+* [``ReadItemStreamAsync<>``](#read-an-item-as-a-stream-asynchronously)
+* [``ReadManyItemsAsync<>``](#read-multiple-items-asynchronously)
+
+## Read an item asynchronously
+
+The following example point reads a single item asynchronously and returns a deserialized item using the provided generic type:
++
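+A minimal sketch of a point read, assuming an existing ``Container`` instance named ``container``; the ``id`` and partition key values are hypothetical:
+
+```csharp
+// Sketch only: both the unique identifier and the partition key value are required.
+Product item = await container.ReadItemAsync<Product>(
+    id: "68719518388",
+    partitionKey: new PartitionKey("gear-surf-surfboards"));
+```
+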
+The [``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) method reads an item and returns an object of type [``ItemResponse<>``](/dotnet/api/microsoft.azure.cosmos.itemresponse-1). The **ItemResponse<>** type inherits from the [``Response<>``](/dotnet/api/microsoft.azure.cosmos.response-1) type, which contains an implicit conversion operator to convert the object into the generic type. To learn more about implicit operators, see [user-defined conversion operators](/dotnet/csharp/language-reference/operators/user-defined-conversion-operators).
+
+Alternatively, you can return the **ItemResponse<>** generic type and explicitly get the resource. The more general **ItemResponse<>** type also contains useful metadata about the underlying API operation. In this example, metadata about the request unit charge for this operation is gathered using the **RequestCharge** property.
++
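+A sketch of the same read that keeps the **ItemResponse<>** wrapper, under the same assumptions:
+
+```csharp
+ItemResponse<Product> response = await container.ReadItemAsync<Product>(
+    id: "68719518388",
+    partitionKey: new PartitionKey("gear-surf-surfboards"));
+
+// Explicitly get the resource and inspect the request unit charge.
+Product item = response.Resource;
+Console.WriteLine($"Request charge: {response.RequestCharge:0.00}");
+```
+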
+## Read an item as a stream asynchronously
+
+This example reads an item as a data stream directly:
++
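+A minimal sketch of a stream read, with the same hypothetical ``container``, ``id``, and partition key values:
+
+```csharp
+// Sketch only: the response content is returned as a raw stream.
+using ResponseMessage response = await container.ReadItemStreamAsync(
+    id: "68719518388",
+    partitionKey: new PartitionKey("gear-surf-surfboards"));
+
+if (response.IsSuccessStatusCode)
+{
+    using StreamReader reader = new(response.Content);
+    string json = await reader.ReadToEndAsync();
+    Console.WriteLine(json);
+}
+```
+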
+The [``Container.ReadItemStreamAsync``](/dotnet/api/microsoft.azure.cosmos.container.readitemstreamasync) method returns the item as a [``Stream``](/dotnet/api/system.io.stream) without deserializing the contents.
+
+If you aren't planning to deserialize the items directly, using the stream APIs can improve performance by handing off the item as a stream directly to the next component of your application. For more tips on how to optimize the SDK for high performance scenarios, see [SDK performance tips](performance-tips-dotnet-sdk-v3.md#sdk-usage).
+
+## Read multiple items asynchronously
+
+In this example, a list of tuples containing unique identifier and partition key pairs is used to look up and retrieve multiple items:
++
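+A minimal sketch, assuming a hypothetical ``container`` instance, example identifier and partition key values, and a **Product** type with a ``name`` property:
+
+```csharp
+// Sketch only: each tuple pairs an item's unique identifier with its partition key.
+List<(string id, PartitionKey partitionKey)> itemsToRead = new()
+{
+    ("68719518388", new PartitionKey("gear-surf-surfboards")),
+    ("68719518398", new PartitionKey("gear-surf-surfboards"))
+};
+
+FeedResponse<Product> response = await container.ReadManyItemsAsync<Product>(itemsToRead);
+
+foreach (Product product in response)
+{
+    Console.WriteLine($"Found item: {product.name}");
+}
+```
+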
+[``Container.ReadManyItemsAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readmanyitemsasync) returns a list of items based on the unique identifiers and partition keys you provide. This operation is typically more performant than a query since you'll effectively perform a point read operation on all items in the list.
+
+## Next steps
+
+Now that you've read various items, use the next guide to query items.
+
+> [!div class="nextstepaction"]
+> [Query items](how-to-dotnet-query-items.md)
cosmos-db How To Java Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-java-change-feed.md
+
+ Title: Create an end-to-end Azure Cosmos DB Java SDK v4 application sample by using Change Feed
+description: This guide walks you through a simple Java API for NoSQL application, which inserts documents into an Azure Cosmos DB container, while maintaining a materialized view of the container using Change Feed.
+++
+ms.devlang: java
+ Last updated : 06/11/2020+++++
+# How to create a Java application that uses Azure Cosmos DB for NoSQL and change feed processor
+
+This how-to guide walks you through a simple Java application that uses Azure Cosmos DB for NoSQL to insert documents into an Azure Cosmos DB container, while maintaining a materialized view of the container using the change feed and the change feed processor. The Java application communicates with Azure Cosmos DB for NoSQL using the Azure Cosmos DB Java SDK v4.
+
+> [!IMPORTANT]
+> This tutorial is for Azure Cosmos DB Java SDK v4 only. Please view the Azure Cosmos DB Java SDK v4 [Release notes](sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4.md), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4.md) for more information. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
+>
+
+## Prerequisites
+
+* The URI and key for your Azure Cosmos DB account
+
+* Maven
+
+* Java 8
+
+## Background
+
+The Azure Cosmos DB change feed provides an event-driven interface to trigger actions in response to document insertion, and it has many uses. For example, in applications that are both read and write heavy, a chief use of the change feed is to create a real-time **materialized view** of a container as it ingests documents. The materialized view container holds the same data but is partitioned for efficient reads, making the application efficient for both reads and writes.
+
+The work of managing change feed events is largely taken care of by the change feed processor library built into the SDK. This library is powerful enough to distribute change feed events among multiple workers, if desired. All you have to do is provide the change feed processor library with a callback.
+
+This simple example demonstrates the change feed processor library with a single worker that creates and deletes documents in a materialized view.
+
+## Setup
+
+If you haven't already done so, clone the app example repo:
+
+```bash
+git clone https://github.com/Azure-Samples/azure-cosmos-java-sql-app-example.git
+```
+
+Open a terminal in the repo directory. Build the app by running
+
+```bash
+mvn clean package
+```
+
+## Walkthrough
+
+1. As a first check, you should have an Azure Cosmos DB account. Open the **Azure portal** in your browser, go to your Azure Cosmos DB account, and in the left pane navigate to **Data Explorer**.
+
+ :::image type="content" source="media/how-to-java-change-feed/cosmos_account_empty.JPG" alt-text="Azure Cosmos DB account":::
+
+1. Run the app in the terminal using the following command:
+
+ ```bash
+ mvn exec:java -Dexec.mainClass="com.azure.cosmos.workedappexample.SampleGroceryStore" -DACCOUNT_HOST="your-account-uri" -DACCOUNT_KEY="your-account-key" -Dexec.cleanupDaemonThreads=false
+ ```
+
+1. Press enter when you see
+
+ ```bash
+ Press enter to create the grocery store inventory system...
+ ```
+
+ then return to the Azure portal Data Explorer in your browser. You'll see a database **GroceryStoreDatabase** has been added with three empty containers:
+
+ * **InventoryContainer** - The inventory record for our example grocery store, partitioned on item ```id```, which is a UUID.
+ * **InventoryContainer-pktype** - A materialized view of the inventory record, optimized for queries over item ```type```
+ * **InventoryContainer-leases** - A leases container is always needed for change feed; leases track the app's progress in reading the change feed.
+
+ :::image type="content" source="media/how-to-java-change-feed/cosmos_account_resources_lease_empty.JPG" alt-text="Empty containers":::
+
+1. In the terminal, you should now see a prompt
+
+ ```bash
+ Press enter to start creating the materialized view...
+ ```
+
+ Press enter. Now the following block of code will execute and initialize the change feed processor on another thread:
+
+ ### <a id="java4-connection-policy-async"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+ [!code-java[](~/azure-cosmos-java-sql-app-example/src/main/java/com/azure/cosmos/workedappexample/SampleGroceryStore.java?name=InitializeCFP)]
+
+ ```"SampleHost_1"``` is the name of the Change Feed processor worker. ```changeFeedProcessorInstance.start()``` is what actually starts the Change Feed processor.
+
+ Return to the Azure portal Data Explorer in your browser. Under the **InventoryContainer-leases** container, select **items** to see its contents. You'll see that Change Feed Processor has populated the lease container, that is, the processor has assigned the ```SampleHost_1``` worker a lease on some partitions of the **InventoryContainer**.
+
+ :::image type="content" source="media/how-to-java-change-feed/cosmos_leases.JPG" alt-text="Leases":::
+
+1. Press enter again in the terminal. This will trigger 10 documents to be inserted into **InventoryContainer**. Each document insertion appears in the change feed as JSON; the following callback code handles these events by mirroring the JSON documents into a materialized view:
+
+ ### <a id="java4-connection-policy-async"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+ [!code-java[](~/azure-cosmos-java-sql-app-example/src/main/java/com/azure/cosmos/workedappexample/SampleGroceryStore.java?name=CFPCallback)]
+
+1. Allow the code to run for 5-10 seconds. Then return to the Azure portal Data Explorer and navigate to **InventoryContainer > items**. You should see that items are being inserted into the inventory container; note the partition key (```id```).
+
+ :::image type="content" source="media/how-to-java-change-feed/cosmos_items.JPG" alt-text="Feed container":::
+
+1. Now, in Data Explorer navigate to **InventoryContainer-pktype > items**. This is the materialized view - the items in this container mirror **InventoryContainer** because they were inserted programmatically by change feed. Note the partition key (```type```). So this materialized view is optimized for queries filtering over ```type```, which would be inefficient on **InventoryContainer** because it's partitioned on ```id```.
+
+ :::image type="content" source="media/how-to-java-change-feed/cosmos_materializedview2.JPG" alt-text="Screenshot shows the Data Explorer page for an Azure Cosmos DB account with Items selected.":::
+
+1. We're going to delete a document from both **InventoryContainer** and **InventoryContainer-pktype** using just a single ```upsertItem()``` call. First, take a look at Azure portal Data Explorer. We'll delete the document for which ```/type == "plums"```; it's encircled in red below
+
+ :::image type="content" source="media/how-to-java-change-feed/cosmos_materializedview-emph-todelete.JPG" alt-text="Screenshot shows the Data Explorer page for an Azure Cosmos DB account with a particular item ID selected.":::
+
+ Hit enter again to call the function ```deleteDocument()``` in the example code. This function, shown below, upserts a new version of the document with ```/ttl == 5```, which sets document Time-To-Live (TTL) to 5 sec.
+
+ ### <a id="java4-connection-policy-async"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+ [!code-java[](~/azure-cosmos-java-sql-app-example/src/main/java/com/azure/cosmos/workedappexample/SampleGroceryStore.java?name=DeleteWithTTL)]
+
+ The change feed ```feedPollDelay``` is set to 100 ms; therefore, change feed responds to this update almost instantly and calls ```updateInventoryTypeMaterializedView()``` shown above. That last function call will upsert the new document with TTL of 5 sec into **InventoryContainer-pktype**.
+
+ The effect is that after about 5 seconds, the document will expire and be deleted from both containers.
+
+ This procedure is necessary because change feed only issues events on item insertion or update, not on item deletion.
+
+1. Press enter one more time to close the program and clean up its resources.
cosmos-db How To Manage Conflicts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-conflicts.md
+
+ Title: Manage conflicts between regions in Azure Cosmos DB
+description: Learn how to manage conflicts in Azure Cosmos DB by creating the last-writer-wins or a custom conflict resolution policy
++++ Last updated : 06/11/2020++
+ms.devlang: csharp, java, javascript
+++
+# Manage conflict resolution policies in Azure Cosmos DB
+
+With multi-region writes, when multiple clients write to the same item, conflicts may occur. When a conflict occurs, you can resolve the conflict by using different conflict resolution policies. This article describes how to manage conflict resolution policies.
+
+> [!TIP]
+> Conflict resolution policy can only be specified at container creation time and cannot be modified after container creation.
+
+## Create a last-writer-wins conflict resolution policy
+
+These samples show how to set up a container with a last-writer-wins conflict resolution policy. The default path for last-writer-wins is the timestamp field or the `_ts` property. For API for NoSQL, this may also be set to a user-defined path with a numeric type. In a conflict, the highest value wins. If the path isn't set or it's invalid, it defaults to `_ts`. Conflicts resolved with this policy do not show up in the conflict feed. This policy can be used by all APIs.
+
+### <a id="create-custom-conflict-resolution-policy-lww-dotnet"></a>.NET SDK
+
+# [.NET SDK V2](#tab/dotnetv2)
+
+```csharp
+DocumentCollection lwwCollection = await createClient.CreateDocumentCollectionIfNotExistsAsync(
+ UriFactory.CreateDatabaseUri(this.databaseName), new DocumentCollection
+ {
+ Id = this.lwwCollectionName,
+ ConflictResolutionPolicy = new ConflictResolutionPolicy
+ {
+ Mode = ConflictResolutionMode.LastWriterWins,
+ ConflictResolutionPath = "/myCustomId",
+ },
+ });
+```
+
+# [.NET SDK V3](#tab/dotnetv3)
+
+```csharp
+Container container = await createClient.GetDatabase(this.databaseName)
+ .CreateContainerIfNotExistsAsync(new ContainerProperties(this.lwwCollectionName, "/partitionKey")
+ {
+ ConflictResolutionPolicy = new ConflictResolutionPolicy()
+ {
+ Mode = ConflictResolutionMode.LastWriterWins,
+ ResolutionPath = "/myCustomId",
+ }
+ });
+```
++
+### <a id="create-custom-conflict-resolution-policy-lww-javav4"></a> Java V4 SDK
+
+# [Async](#tab/api-async)
+
+ Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=ManageConflictResolutionLWWAsync)]
+
+# [Sync](#tab/api-sync)
+
+ Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=ManageConflictResolutionLWWSync)]
+
+
+
+### <a id="create-custom-conflict-resolution-policy-lww-javav2"></a>Java V2 SDKs
+
+# [Async Java V2 SDK](#tab/async)
+
+[Async Java V2 SDK](sdk-java-async-v2.md) (Maven [com.microsoft.azure::azure-cosmosdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb))
+
+```java
+DocumentCollection collection = new DocumentCollection();
+collection.setId(id);
+ConflictResolutionPolicy policy = ConflictResolutionPolicy.createLastWriterWinsPolicy("/myCustomId");
+collection.setConflictResolutionPolicy(policy);
+DocumentCollection createdCollection = client.createCollection(databaseUri, collection, null).toBlocking().value();
+```
+
+# [Sync Java V2 SDK](#tab/sync)
+
+[Sync Java V2 SDK](sdk-java-v2.md) (Maven [com.microsoft.azure::azure-documentdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-documentdb))
+
+```java
+DocumentCollection lwwCollection = new DocumentCollection();
+lwwCollection.setId(this.lwwCollectionName);
+ConflictResolutionPolicy lwwPolicy = ConflictResolutionPolicy.createLastWriterWinsPolicy("/myCustomId");
+lwwCollection.setConflictResolutionPolicy(lwwPolicy);
+DocumentCollection createdCollection = this.tryCreateDocumentCollection(createClient, database, lwwCollection);
+```
++
+### <a id="create-custom-conflict-resolution-policy-lww-javascript"></a>Node.js/JavaScript/TypeScript SDK
+
+```javascript
+const database = client.database(this.databaseName);
+const { container: lwwContainer } = await database.containers.createIfNotExists(
+ {
+ id: this.lwwContainerName,
+ conflictResolutionPolicy: {
+ mode: "LastWriterWins",
+ conflictResolutionPath: "/myCustomId"
+ }
+ }
+);
+```
+
+### <a id="create-custom-conflict-resolution-policy-lww-python"></a>Python SDK
+
+```python
+udp_collection = {
+ 'id': self.udp_collection_name,
+ 'conflictResolutionPolicy': {
+ 'mode': 'LastWriterWins',
+ 'conflictResolutionPath': '/myCustomId'
+ }
+}
+udp_collection = self.try_create_document_collection(
+ create_client, database, udp_collection)
+```
+
+## Create a custom conflict resolution policy using a stored procedure
+
+These samples show how to set up a container with a custom conflict resolution policy that uses a stored procedure to resolve the conflict. These conflicts don't show up in the conflict feed unless there's an error in your stored procedure. After you create the policy with the container, you need to create the stored procedure. The .NET SDK sample below shows an example. This policy is supported in the API for NoSQL only.
+
+### Sample custom conflict resolution stored procedure
+
+Custom conflict resolution stored procedures must be implemented using the function signature shown below. The function name doesn't need to match the name used when registering the stored procedure with the container, but matching them simplifies naming. The following parameters must be implemented for this stored procedure:
+
+- **incomingItem**: The item being inserted or updated in the commit that is generating the conflicts. This value is null for delete operations.
+- **existingItem**: The currently committed item. This value is non-null for updates and null for inserts or deletes.
+- **isTombstone**: Boolean indicating if the incomingItem is conflicting with a previously deleted item. When true, existingItem is also null.
+- **conflictingItems**: Array of the committed version of all items in the container that are conflicting with incomingItem on ID or any other unique index properties.
+
+> [!IMPORTANT]
+> Just as with any stored procedure, a custom conflict resolution procedure can access any data with the same partition key and can perform any insert, update or delete operation to resolve conflicts.
+
+This sample stored procedure resolves conflicts by selecting the lowest value from the `/myCustomId` path.
+
+```javascript
+function resolver(incomingItem, existingItem, isTombstone, conflictingItems) {
+ var collection = getContext().getCollection();
+
+ if (!incomingItem) {
+ if (existingItem) {
+
+ collection.deleteDocument(existingItem._self, {}, function (err, responseOptions) {
+ if (err) throw err;
+ });
+ }
+ } else if (isTombstone) {
+ // delete always wins.
+ } else {
+ if (existingItem) {
+ if (incomingItem.myCustomId > existingItem.myCustomId) {
+ return; // existing item wins
+ }
+ }
+
+ var i;
+ for (i = 0; i < conflictingItems.length; i++) {
+ if (incomingItem.myCustomId > conflictingItems[i].myCustomId) {
+ return; // existing conflict item wins
+ }
+ }
+
+ // incoming item wins - clear conflicts and replace existing with incoming.
+ tryDelete(conflictingItems, incomingItem, existingItem);
+ }
+
+ function tryDelete(documents, incoming, existing) {
+ if (documents.length > 0) {
+ collection.deleteDocument(documents[0]._self, {}, function (err, responseOptions) {
+ if (err) throw err;
+
+ documents.shift();
+ tryDelete(documents, incoming, existing);
+ });
+ } else if (existing) {
+ collection.replaceDocument(existing._self, incoming,
+ function (err, documentCreated) {
+ if (err) throw err;
+ });
+ } else {
+ collection.createDocument(collection.getSelfLink(), incoming,
+ function (err, documentCreated) {
+ if (err) throw err;
+ });
+ }
+ }
+}
+```
+
+### <a id="create-custom-conflict-resolution-policy-stored-proc-dotnet"></a>.NET SDK
+
+# [.NET SDK V2](#tab/dotnetv2)
+
+```csharp
+DocumentCollection udpCollection = await createClient.CreateDocumentCollectionIfNotExistsAsync(
+ UriFactory.CreateDatabaseUri(this.databaseName), new DocumentCollection
+ {
+ Id = this.udpCollectionName,
+ ConflictResolutionPolicy = new ConflictResolutionPolicy
+ {
+ Mode = ConflictResolutionMode.Custom,
+ ConflictResolutionProcedure = string.Format("dbs/{0}/colls/{1}/sprocs/{2}", this.databaseName, this.udpCollectionName, "resolver"),
+ },
+ });
+
+//Create the stored procedure
+await clients[0].CreateStoredProcedureAsync(
+UriFactory.CreateStoredProcedureUri(this.databaseName, this.udpCollectionName, "resolver"), new StoredProcedure
+{
+ Id = "resolver",
+ Body = File.ReadAllText(@"resolver.js")
+});
+```
+
+# [.NET SDK V3](#tab/dotnetv3)
+
+```csharp
+Container container = await createClient.GetDatabase(this.databaseName)
+ .CreateContainerIfNotExistsAsync(new ContainerProperties(this.udpCollectionName, "/partitionKey")
+ {
+ ConflictResolutionPolicy = new ConflictResolutionPolicy()
+ {
+ Mode = ConflictResolutionMode.Custom,
+ ResolutionProcedure = string.Format("dbs/{0}/colls/{1}/sprocs/{2}", this.databaseName, this.udpCollectionName, "resolver")
+ }
+ });
+
+await container.Scripts.CreateStoredProcedureAsync(
+ new StoredProcedureProperties("resolver", File.ReadAllText(@"resolver.js"))
+);
+```
++
+### <a id="create-custom-conflict-resolution-policy-stored-proc-javav4"></a> Java V4 SDK
+
+# [Async](#tab/api-async)
+
+ Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=ManageConflictResolutionSprocAsync)]
+
+# [Sync](#tab/api-sync)
+
+ Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=ManageConflictResolutionSprocSync)]
+
+
+
+### <a id="create-custom-conflict-resolution-policy-stored-proc-javav2"></a>Java V2 SDKs
+
+# [Async Java V2 SDK](#tab/async)
+
+[Async Java V2 SDK](sdk-java-async-v2.md) (Maven [com.microsoft.azure::azure-cosmosdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb))
+
+```java
+DocumentCollection collection = new DocumentCollection();
+collection.setId(id);
+ConflictResolutionPolicy policy = ConflictResolutionPolicy.createCustomPolicy("resolver");
+collection.setConflictResolutionPolicy(policy);
+DocumentCollection createdCollection = client.createCollection(databaseUri, collection, null).toBlocking().value();
+```
+
+# [Sync Java V2 SDK](#tab/sync)
+
+[Sync Java V2 SDK](sdk-java-v2.md) (Maven [com.microsoft.azure::azure-documentdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-documentdb))
+
+```java
+DocumentCollection udpCollection = new DocumentCollection();
+udpCollection.setId(this.udpCollectionName);
+ConflictResolutionPolicy udpPolicy = ConflictResolutionPolicy.createCustomPolicy(
+ String.format("dbs/%s/colls/%s/sprocs/%s", this.databaseName, this.udpCollectionName, "resolver"));
+udpCollection.setConflictResolutionPolicy(udpPolicy);
+DocumentCollection createdCollection = this.tryCreateDocumentCollection(createClient, database, udpCollection);
+```
++
+After your container is created, you must create the `resolver` stored procedure.
+
+### <a id="create-custom-conflict-resolution-policy-stored-proc-javascript"></a>Node.js/JavaScript/TypeScript SDK
+
+```javascript
+const database = client.database(this.databaseName);
+const { container: udpContainer } = await database.containers.createIfNotExists(
+ {
+ id: this.udpContainerName,
+ conflictResolutionPolicy: {
+ mode: "Custom",
+ conflictResolutionProcedure: `dbs/${this.databaseName}/colls/${
+ this.udpContainerName
+ }/sprocs/resolver`
+ }
+ }
+);
+```
+
+After your container is created, you must create the `resolver` stored procedure.
+
+### <a id="create-custom-conflict-resolution-policy-stored-proc-python"></a>Python SDK
+
+```python
+udp_collection = {
+ 'id': self.udp_collection_name,
+ 'conflictResolutionPolicy': {
+ 'mode': 'Custom',
+ 'conflictResolutionProcedure': 'dbs/' + self.database_name + "/colls/" + self.udp_collection_name + '/sprocs/resolver'
+ }
+}
+udp_collection = self.try_create_document_collection(
+ create_client, database, udp_collection)
+```
+
+After your container is created, you must create the `resolver` stored procedure.
+
+## Create a custom conflict resolution policy
+
+These samples show how to set up a container with a custom conflict resolution policy. These conflicts show up in the conflict feed.
+
+### <a id="create-custom-conflict-resolution-policy-dotnet"></a>.NET SDK
+
+# [.NET SDK V2](#tab/dotnetv2)
+
+```csharp
+DocumentCollection manualCollection = await createClient.CreateDocumentCollectionIfNotExistsAsync(
+ UriFactory.CreateDatabaseUri(this.databaseName), new DocumentCollection
+ {
+ Id = this.manualCollectionName,
+ ConflictResolutionPolicy = new ConflictResolutionPolicy
+ {
+ Mode = ConflictResolutionMode.Custom,
+ },
+ });
+```
+
+# [.NET SDK V3](#tab/dotnetv3)
+
+```csharp
+Container container = await createClient.GetDatabase(this.databaseName)
+ .CreateContainerIfNotExistsAsync(new ContainerProperties(this.manualCollectionName, "/partitionKey")
+ {
+ ConflictResolutionPolicy = new ConflictResolutionPolicy()
+ {
+ Mode = ConflictResolutionMode.Custom
+ }
+ });
+```
++
+### <a id="create-custom-conflict-resolution-policy-javav4"></a> Java V4 SDK
+
+# [Async](#tab/api-async)
+
+ Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=ManageConflictResolutionCustomAsync)]
+
+# [Sync](#tab/api-sync)
+
+ Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=ManageConflictResolutionCustomSync)]
+
+
+
+### <a id="create-custom-conflict-resolution-policy-javav2"></a>Java V2 SDKs
+
+# [Async Java V2 SDK](#tab/async)
+
+[Async Java V2 SDK](sdk-java-async-v2.md) (Maven [com.microsoft.azure::azure-cosmosdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb))
+
+```java
+DocumentCollection collection = new DocumentCollection();
+collection.setId(id);
+ConflictResolutionPolicy policy = ConflictResolutionPolicy.createCustomPolicy();
+collection.setConflictResolutionPolicy(policy);
+DocumentCollection createdCollection = client.createCollection(databaseUri, collection, null).toBlocking().value();
+```
+
+# [Sync Java V2 SDK](#tab/sync)
+
+[Sync Java V2 SDK](sdk-java-v2.md) (Maven [com.microsoft.azure::azure-documentdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-documentdb))
+
+```java
+DocumentCollection manualCollection = new DocumentCollection();
+manualCollection.setId(this.manualCollectionName);
+ConflictResolutionPolicy customPolicy = ConflictResolutionPolicy.createCustomPolicy(null);
+manualCollection.setConflictResolutionPolicy(customPolicy);
+DocumentCollection createdCollection = client.createCollection(database.getSelfLink(), manualCollection, null).getResource();
+```
++
+### <a id="create-custom-conflict-resolution-policy-javascript"></a>Node.js/JavaScript/TypeScript SDK
+
+```javascript
+const database = client.database(this.databaseName);
+const {
+ container: manualContainer
+} = await database.containers.createIfNotExists({
+ id: this.manualContainerName,
+ conflictResolutionPolicy: {
+ mode: "Custom"
+ }
+});
+```
+
+### <a id="create-custom-conflict-resolution-policy-python"></a>Python SDK
+
+```python
+database = client.ReadDatabase("dbs/" + self.database_name)
+manual_collection = {
+ 'id': self.manual_collection_name,
+ 'conflictResolutionPolicy': {
+ 'mode': 'Custom'
+ }
+}
+manual_collection = client.CreateContainer(database['_self'], manual_collection)
+```
+
+## Read from conflict feed
+
+These samples show how to read from a container's conflict feed. Conflicts show up in the conflict feed only if they weren't resolved automatically or if you use a custom conflict resolution policy.
+
+### <a id="read-from-conflict-feed-dotnet"></a>.NET SDK
+
+# [.NET SDK V2](#tab/dotnetv2)
+
+```csharp
+FeedResponse<Conflict> conflicts = await delClient.ReadConflictFeedAsync(this.collectionUri);
+```
+
+# [.NET SDK V3](#tab/dotnetv3)
+
+```csharp
+FeedIterator<ConflictProperties> conflictFeed = container.Conflicts.GetConflictQueryIterator();
+while (conflictFeed.HasMoreResults)
+{
+ FeedResponse<ConflictProperties> conflicts = await conflictFeed.ReadNextAsync();
+ foreach (ConflictProperties conflict in conflicts)
+ {
+ // Read the conflicted content
+ MyClass intendedChanges = container.Conflicts.ReadConflictContent<MyClass>(conflict);
+ MyClass currentState = await container.Conflicts.ReadCurrentAsync<MyClass>(conflict, new PartitionKey(intendedChanges.MyPartitionKey));
+
+ // Do manual merge among documents
+ await container.ReplaceItemAsync<MyClass>(intendedChanges, intendedChanges.Id, new PartitionKey(intendedChanges.MyPartitionKey));
+
+ // Delete the conflict
+ await container.Conflicts.DeleteAsync(conflict, new PartitionKey(intendedChanges.MyPartitionKey));
+ }
+}
+```
++
+### <a id="read-from-conflict-feed-javav2"></a>Java V2 SDKs
+
+# [Async Java V2 SDK](#tab/async)
+
+[Async Java V2 SDK](sdk-java-async-v2.md) (Maven [com.microsoft.azure::azure-cosmosdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb))
+
+```java
+FeedResponse<Conflict> response = client.readConflicts(this.manualCollectionUri, null)
+ .first().toBlocking().single();
+for (Conflict conflict : response.getResults()) {
+ /* Do something with conflict */
+}
+```
+# [Sync Java V2 SDK](#tab/sync)
+
+[Sync Java V2 SDK](sdk-java-v2.md) (Maven [com.microsoft.azure::azure-documentdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-documentdb))
+
+```java
+Iterator<Conflict> conflictsIterator = client.readConflicts(this.collectionLink, null).getQueryIterator();
+while (conflictsIterator.hasNext()) {
+ Conflict conflict = conflictsIterator.next();
+ /* Do something with conflict */
+}
+```
++
+### <a id="read-from-conflict-feed-javascript"></a>Node.js/JavaScript/TypeScript SDK
+
+```javascript
+const container = client
+ .database(this.databaseName)
+ .container(this.lwwContainerName);
+
+const { result: conflicts } = await container.conflicts.readAll().toArray();
+```
+
+### <a id="read-from-conflict-feed-python"></a>Python
+
+```python
+conflicts_iterator = iter(client.ReadConflicts(self.manual_collection_link))
+conflict = next(conflicts_iterator, None)
+while conflict:
+ # Do something with conflict
+ conflict = next(conflicts_iterator, None)
+```
+
+## Next steps
+
+Learn about the following Azure Cosmos DB concepts:
+
+- [Global distribution - under the hood](../global-dist-under-the-hood.md)
+- [How to configure multi-region writes in your applications](how-to-multi-master.md)
+- [Configure clients for multihoming](../how-to-manage-database-account.md#configure-multiple-write-regions)
+- [Add or remove regions from your Azure Cosmos DB account](../how-to-manage-database-account.md#addremove-regions-from-your-database-account)
+- [Partitioning and data distribution](../partitioning-overview.md)
+- [Indexing in Azure Cosmos DB](../index-policy.md)
cosmos-db How To Manage Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-consistency.md
+
+ Title: Manage consistency in Azure Cosmos DB
+description: Learn how to configure and manage consistency levels in Azure Cosmos DB using Azure portal, .NET SDK, Java SDK and various other SDKs
++++ Last updated : 02/16/2022++
+ms.devlang: csharp, java, javascript
+++
+# Manage consistency levels in Azure Cosmos DB
+
+This article explains how to manage consistency levels in Azure Cosmos DB. You learn how to configure the default consistency level, override the default consistency, manually manage session tokens, and understand the Probabilistically Bounded Staleness (PBS) metric.
++
+## Configure the default consistency level
+
+The [default consistency level](../consistency-levels.md) is the consistency level that clients use by default.
+
+# [Azure portal](#tab/portal)
+
+To view or modify the default consistency level, sign in to the Azure portal. Find your Azure Cosmos DB account, and open the **Default consistency** pane. Select the level of consistency you want as the new default, and then select **Save**. The Azure portal also provides a visualization of different consistency levels with music notes.
++
+# [CLI](#tab/cli)
+
+Create an Azure Cosmos DB account with Session consistency, then update the default consistency.
+
+```azurecli
+# Create a new account with Session consistency
+az cosmosdb create --name $accountName --resource-group $resourceGroupName --default-consistency-level Session
+
+# update an existing account's default consistency
+az cosmosdb update --name $accountName --resource-group $resourceGroupName --default-consistency-level Strong
+```
+
+# [PowerShell](#tab/powershell)
+
+Create an Azure Cosmos DB account with Session consistency, then update the default consistency.
+
+```azurepowershell-interactive
+# Create a new account with Session consistency
+New-AzCosmosDBAccount -ResourceGroupName $resourceGroupName `
+ -Location $locations -Name $accountName -DefaultConsistencyLevel "Session"
+
+# Update an existing account's default consistency
+Update-AzCosmosDBAccount -ResourceGroupName $resourceGroupName `
+ -Name $accountName -DefaultConsistencyLevel "Strong"
+```
+++
+## Override the default consistency level
+
+Clients can override the default consistency level that is set by the service. The consistency level can be set on a per-request basis, which overrides the default consistency level set at the account level.
+
+> [!TIP]
+> Consistency can only be **relaxed** at the SDK instance or request level. To move from weaker to stronger consistency, update the default consistency for the Azure Cosmos DB account.
+
+> [!TIP]
+> Overriding the default consistency level only applies to reads within the SDK client. An account configured for strong consistency by default will still write and replicate data synchronously to every region in the account. When the SDK client instance or request overrides this with Session or weaker consistency, reads will be performed using a single replica. See [Consistency levels and throughput](../consistency-levels.md#consistency-levels-and-throughput) for more details.
+
+### <a id="override-default-consistency-dotnet"></a>.NET SDK
+
+# [.NET SDK V2](#tab/dotnetv2)
+
+```csharp
+// Override consistency at the client level
+documentClient = new DocumentClient(new Uri(endpoint), authKey, connectionPolicy, ConsistencyLevel.Eventual);
+
+// Override consistency at the request level via request options
+RequestOptions requestOptions = new RequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual };
+
+var response = await client.CreateDocumentAsync(collectionUri, document, requestOptions);
+```
+
+# [.NET SDK V3](#tab/dotnetv3)
+
+```csharp
+// Override consistency at the request level via request options
+ItemRequestOptions requestOptions = new ItemRequestOptions { ConsistencyLevel = ConsistencyLevel.Strong };
+
+var response = await client.GetContainer(databaseName, containerName)
+ .CreateItemAsync(
+ item,
+ new PartitionKey(itemPartitionKey),
+ requestOptions);
+```
++
+### <a id="override-default-consistency-javav4"></a> Java V4 SDK
+
+# [Async](#tab/api-async)
+
+ Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=ManageConsistencyAsync)]
+
+# [Sync](#tab/api-sync)
+
+ Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=ManageConsistencySync)]
+
+
+
+### <a id="override-default-consistency-javav2"></a> Java V2 SDKs
+
+# [Async](#tab/api-async)
+
+Async Java V2 SDK (Maven com.microsoft.azure::azure-cosmosdb)
+
+```java
+// Override consistency at the client level
+ConnectionPolicy policy = new ConnectionPolicy();
+
+AsyncDocumentClient client =
+ new AsyncDocumentClient.Builder()
+ .withMasterKey(this.accountKey)
+ .withServiceEndpoint(this.accountEndpoint)
+ .withConsistencyLevel(ConsistencyLevel.Eventual)
+ .withConnectionPolicy(policy).build();
+```
+
+# [Sync](#tab/api-sync)
+
+Sync Java V2 SDK (Maven com.microsoft.azure::azure-documentdb)
+
+```java
+// Override consistency at the client level
+ConnectionPolicy connectionPolicy = new ConnectionPolicy();
+DocumentClient client = new DocumentClient(accountEndpoint, accountKey, connectionPolicy, ConsistencyLevel.Eventual);
+```
++
+### <a id="override-default-consistency-javascript"></a>Node.js/JavaScript/TypeScript SDK
+
+```javascript
+// Override consistency at the client level
+const client = new CosmosClient({
+ /* other config... */
+ consistencyLevel: ConsistencyLevel.Eventual
+});
+
+// Override consistency at the request level via request options
+const { body } = await item.read({ consistencyLevel: ConsistencyLevel.Eventual });
+```
+
+### <a id="override-default-consistency-python"></a>Python SDK
+
+```python
+# Override consistency at the client level
+connection_policy = documents.ConnectionPolicy()
+client = cosmos_client.CosmosClient(self.account_endpoint, {
+ 'masterKey': self.account_key}, connection_policy, documents.ConsistencyLevel.Eventual)
+```
+
+## Utilize session tokens
+
+One of the consistency levels in Azure Cosmos DB is *Session* consistency, which is the default level applied to Azure Cosmos DB accounts. When you work with Session consistency, each new write request to Azure Cosmos DB is assigned a new SessionToken. The CosmosClient uses this token internally with each read/query request to ensure that the configured consistency level is maintained.
+
+In some scenarios, you need to manage this session yourself. Consider a web application with multiple nodes, where each node has its own instance of CosmosClient. If you want these nodes to participate in the same session (to be able to read your own writes consistently across web tiers), you have to send the SessionToken from the `FeedResponse<T>` of the write action to the end user by using a cookie or some other mechanism, and have that token flow back to the web tier and ultimately to the CosmosClient for subsequent reads. If you use a round-robin load balancer that doesn't maintain session affinity between requests, such as Azure Load Balancer, the read could land on a different node from the one that handled the write request, where the session was created.
+
+If you don't flow the Azure Cosmos DB SessionToken across as described here, you could end up with inconsistent read results for a period of time.
+
+To manage session tokens manually, get the session token from the response and set it on each request. If you don't need to manage session tokens manually, you don't need to use these samples. The SDK keeps track of session tokens automatically. If you don't set the session token manually, the SDK uses the most recent session token by default.
+
+### <a id="utilize-session-tokens-dotnet"></a>.NET SDK
+
+# [.NET SDK V2](#tab/dotnetv2)
+
+```csharp
+var response = await client.ReadDocumentAsync(
+ UriFactory.CreateDocumentUri(databaseName, collectionName, "SalesOrder1"));
+string sessionToken = response.SessionToken;
+
+RequestOptions options = new RequestOptions();
+options.SessionToken = sessionToken;
+var response = await client.ReadDocumentAsync(
+ UriFactory.CreateDocumentUri(databaseName, collectionName, "SalesOrder1"), options);
+```
+
+# [.NET SDK V3](#tab/dotnetv3)
+
+```csharp
+Container container = client.GetContainer(databaseName, collectionName);
+ItemResponse<SalesOrder> response = await container.CreateItemAsync<SalesOrder>(salesOrder);
+string sessionToken = response.Headers.Session;
+
+ItemRequestOptions options = new ItemRequestOptions();
+options.SessionToken = sessionToken;
+ItemResponse<SalesOrder> response = await container.ReadItemAsync<SalesOrder>(salesOrder.Id, new PartitionKey(salesOrder.PartitionKey), options);
+```
++
+### <a id="override-default-consistency-javav4"></a> Java V4 SDK
+
+# [Async](#tab/api-async)
+
+ Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=ManageConsistencySessionAsync)]
+
+# [Sync](#tab/api-sync)
+
+ Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=ManageConsistencySessionSync)]
+
+
+
+### <a id="utilize-session-tokens-javav2"></a>Java V2 SDKs
+
+# [Async](#tab/api-async)
+
+Async Java V2 SDK (Maven com.microsoft.azure::azure-cosmosdb)
+
+```java
+// Get session token from response
+RequestOptions options = new RequestOptions();
+options.setPartitionKey(new PartitionKey(document.get("mypk")));
+Observable<ResourceResponse<Document>> readObservable = client.readDocument(document.getSelfLink(), options);
+readObservable.single() // we know there will be one response
+ .subscribe(
+ documentResourceResponse -> {
+ System.out.println(documentResourceResponse.getSessionToken());
+ },
+ error -> {
+ System.err.println("an error happened: " + error.getMessage());
+ });
+
+// Resume the session by setting the session token on RequestOptions
+// (sessionToken is the value captured from an earlier response)
+RequestOptions resumeOptions = new RequestOptions();
+resumeOptions.setSessionToken(sessionToken);
+Observable<ResourceResponse<Document>> readWithSessionObservable = client.readDocument(document.getSelfLink(), resumeOptions);
+```
+
+# [Sync](#tab/api-sync)
+
+Sync Java V2 SDK (Maven com.microsoft.azure::azure-documentdb)
+
+```java
+// Get session token from response
+ResourceResponse<Document> response = client.readDocument(documentLink, null);
+String sessionToken = response.getSessionToken();
+
+// Resume the session by setting the session token on the RequestOptions
+RequestOptions options = new RequestOptions();
+options.setSessionToken(sessionToken);
+ResourceResponse<Document> response = client.readDocument(documentLink, options);
+```
++
+### <a id="utilize-session-tokens-javascript"></a>Node.js/JavaScript/TypeScript SDK
+
+```javascript
+// Get session token from response
+const { headers, item } = await container.items.create({ id: "meaningful-id" });
+const sessionToken = headers["x-ms-session-token"];
+
+// Immediately or later, you can use that sessionToken from the header to resume that session.
+const { body } = await item.read({ sessionToken });
+```
+
+### <a id="utilize-session-tokens-python"></a>Python SDK
+
+```python
+# Get the session token from the last response headers
+item = client.ReadItem(item_link)
+session_token = client.last_response_headers["x-ms-session-token"]
+
+# Resume the session by setting the session token on the options for the request
+options = {
+ "sessionToken": session_token
+}
+item = client.ReadItem(doc_link, options)
+```
+
+## Monitor Probabilistically Bounded Staleness (PBS) metric
+
+How eventual is eventual consistency? For the average case, we can offer staleness bounds with respect to version history and time. The [**Probabilistically Bounded Staleness (PBS)**](http://pbs.cs.berkeley.edu/) metric tries to quantify the probability of staleness and shows it as a metric.
+
+To view the PBS metric, go to your Azure Cosmos DB account in the Azure portal. Open the **Metrics (Classic)** pane, and select the **Consistency** tab. Look at the graph named **Probability of strongly consistent reads based on your workload (see PBS)**.
++
+## Next steps
+
+Learn more about how to manage data conflicts, or move on to the next key concept in Azure Cosmos DB. See the following articles:
+
+* [Consistency Levels in Azure Cosmos DB](../consistency-levels.md)
+* [Partitioning and data distribution](../partitioning-overview.md)
+* [Manage conflicts between regions](how-to-manage-conflicts.md)
+* [Consistency tradeoffs in modern distributed database systems design](https://www.computer.org/csdl/magazine/co/2012/02/mco2012020037/13rRUxjyX7k)
+* [High availability](../high-availability.md)
+* [Azure Cosmos DB SLA](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_2/)
cosmos-db How To Manage Indexing Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-indexing-policy.md
+
+ Title: Manage indexing policies in Azure Cosmos DB
+description: Learn how to manage indexing policies, include or exclude a property from indexing, how to define indexing using different Azure Cosmos DB SDKs
++++ Last updated : 05/25/2021+++++
+# Manage indexing policies in Azure Cosmos DB
+
+In Azure Cosmos DB, data is indexed following [indexing policies](../index-policy.md) that are defined for each container. The default indexing policy for newly created containers enforces range indexes for any string or number. This policy can be overridden with your own custom indexing policy.
+
+> [!NOTE]
+> The method of updating indexing policies described in this article only applies to Azure Cosmos DB's API for NoSQL. Learn about indexing in [Azure Cosmos DB's API for MongoDB](../mongodb/indexing.md) and [Secondary indexing in Azure Cosmos DB for Apache Cassandra.](../cassandr)
+
+## Indexing policy examples
+
+Here are some examples of indexing policies shown in [their JSON format](../index-policy.md#include-exclude-paths), which is how they are exposed on the Azure portal. The same parameters can be set through the Azure CLI or any SDK.
+
+### <a id="range-index"></a>Opt-out policy to selectively exclude some property paths
+
+```json
+ {
+ "indexingMode": "consistent",
+ "includedPaths": [
+ {
+ "path": "/*"
+ }
+ ],
+ "excludedPaths": [
+ {
+ "path": "/path/to/single/excluded/property/?"
+ },
+ {
+ "path": "/path/to/root/of/multiple/excluded/properties/*"
+ }
+ ]
+ }
+```
+
+This indexing policy is equivalent to the following one, which manually sets ```kind```, ```dataType```, and ```precision``` to their default values. You no longer need to set these properties explicitly, and you should omit them from your indexing policy entirely (as shown in the preceding example). If you try to set these properties, they're automatically removed from your indexing policy.
++
+```json
+ {
+ "indexingMode": "consistent",
+ "includedPaths": [
+ {
+ "path": "/*",
+ "indexes": [
+ {
+ "kind": "Range",
+ "dataType": "Number",
+ "precision": -1
+ },
+ {
+ "kind": "Range",
+ "dataType": "String",
+ "precision": -1
+ }
+ ]
+ }
+ ],
+ "excludedPaths": [
+ {
+ "path": "/path/to/single/excluded/property/?"
+ },
+ {
+ "path": "/path/to/root/of/multiple/excluded/properties/*"
+ }
+ ]
+ }
+```
+
+### Opt-in policy to selectively include some property paths
+
+```json
+ {
+ "indexingMode": "consistent",
+ "includedPaths": [
+ {
+ "path": "/path/to/included/property/?"
+ },
+ {
+ "path": "/path/to/root/of/multiple/included/properties/*"
+ }
+ ],
+ "excludedPaths": [
+ {
+ "path": "/*"
+ }
+ ]
+ }
+```
+
+This indexing policy is equivalent to the following one, which manually sets ```kind```, ```dataType```, and ```precision``` to their default values. You no longer need to set these properties explicitly, and you should omit them from your indexing policy entirely (as shown in the preceding example). If you try to set these properties, they're automatically removed from your indexing policy.
++
+```json
+ {
+ "indexingMode": "consistent",
+ "includedPaths": [
+ {
+ "path": "/path/to/included/property/?",
+ "indexes": [
+ {
+ "kind": "Range",
+ "dataType": "Number"
+ },
+ {
+ "kind": "Range",
+ "dataType": "String"
+ }
+ ]
+ },
+ {
+ "path": "/path/to/root/of/multiple/included/properties/*",
+ "indexes": [
+ {
+ "kind": "Range",
+ "dataType": "Number"
+ },
+ {
+ "kind": "Range",
+ "dataType": "String"
+ }
+ ]
+ }
+ ],
+ "excludedPaths": [
+ {
+ "path": "/*"
+ }
+ ]
+ }
+```
+
+> [!NOTE]
+> It is generally recommended to use an **opt-out** indexing policy to let Azure Cosmos DB proactively index any new property that may be added to your data model.
+
+### <a id="spatial-index"></a>Using a spatial index on a specific property path only
+
+```json
+{
+ "indexingMode": "consistent",
+ "automatic": true,
+ "includedPaths": [
+ {
+ "path": "/*"
+ }
+ ],
+ "excludedPaths": [
+ {
+ "path": "/_etag/?"
+ }
+ ],
+ "spatialIndexes": [
+ {
+ "path": "/path/to/geojson/property/?",
+ "types": [
+ "Point",
+ "Polygon",
+ "MultiPolygon",
+ "LineString"
+ ]
+ }
+ ]
+}
+```
+
+## <a id="composite-index"></a>Composite indexing policy examples
+
+In addition to including or excluding paths for individual properties, you can also specify a composite index. If you want to perform a query that has an `ORDER BY` clause for multiple properties, a [composite index](../index-policy.md#composite-indexes) on those properties is required. Additionally, composite indexes have a performance benefit for queries that have multiple filters or both a filter and an `ORDER BY` clause.
+
+> [!NOTE]
+> Composite paths have an implicit `/?` since only the scalar value at that path is indexed. The `/*` wildcard is not supported in composite paths. You shouldn't specify `/?` or `/*` in a composite path.
+
+### Composite index defined for (name ASC, age DESC):
+
+```json
+ {
+ "automatic":true,
+ "indexingMode":"Consistent",
+ "includedPaths":[
+ {
+ "path":"/*"
+ }
+ ],
+ "excludedPaths":[],
+ "compositeIndexes":[
+ [
+ {
+ "path":"/name",
+ "order":"ascending"
+ },
+ {
+ "path":"/age",
+ "order":"descending"
+ }
+ ]
+ ]
+ }
+```
+
+The above composite index on name and age is required for Query #1 and Query #2:
+
+Query #1:
+
+```sql
+ SELECT *
+ FROM c
+ ORDER BY c.name ASC, c.age DESC
+```
+
+Query #2:
+
+```sql
+ SELECT *
+ FROM c
+ ORDER BY c.name DESC, c.age ASC
+```
+
+This composite index will benefit Query #3 and Query #4 and optimize the filters:
+
+Query #3:
+
+```sql
+SELECT *
+FROM c
+WHERE c.name = "Tim"
+ORDER BY c.name DESC, c.age ASC
+```
+
+Query #4:
+
+```sql
+SELECT *
+FROM c
+WHERE c.name = "Tim" AND c.age > 18
+```
+
+### Composite index defined for (name ASC, age ASC) and (name ASC, age DESC):
+
+You can define multiple different composite indexes within the same indexing policy.
+
+```json
+ {
+ "automatic":true,
+ "indexingMode":"Consistent",
+ "includedPaths":[
+ {
+ "path":"/*"
+ }
+ ],
+ "excludedPaths":[],
+ "compositeIndexes":[
+ [
+ {
+ "path":"/name",
+ "order":"ascending"
+ },
+ {
+ "path":"/age",
+ "order":"ascending"
+ }
+ ],
+ [
+ {
+ "path":"/name",
+ "order":"ascending"
+ },
+ {
+ "path":"/age",
+ "order":"descending"
+ }
+ ]
+ ]
+ }
+```
+
+### Composite index defined for (name ASC, age ASC):
+
+It is optional to specify the order. If not specified, the order is ascending.
+
+```json
+{
+ "automatic":true,
+ "indexingMode":"Consistent",
+ "includedPaths":[
+ {
+ "path":"/*"
+ }
+ ],
+ "excludedPaths":[],
+ "compositeIndexes":[
+ [
+         {
+            "path":"/name"
+         },
+         {
+            "path":"/age"
+         }
+ ]
+ ]
+}
+```
+
+### Excluding all property paths but keeping indexing active
+
+This policy can be used in situations where the [Time-to-Live (TTL) feature](time-to-live.md) is active but no additional indexes are necessary (to use Azure Cosmos DB as a pure key-value store).
+
+```json
+ {
+ "indexingMode": "consistent",
+ "includedPaths": [],
+ "excludedPaths": [{
+ "path": "/*"
+ }]
+ }
+```
+
+### No indexing
+
+This policy will turn off indexing. If `indexingMode` is set to `none`, you cannot set a TTL on the container.
+
+```json
+ {
+ "indexingMode": "none"
+ }
+```
+
+## Updating indexing policy
+
+In Azure Cosmos DB, the indexing policy can be updated using any of the below methods:
+
+- from the Azure portal
+- using the Azure CLI
+- using PowerShell
+- using one of the SDKs
+
+An [indexing policy update](../index-policy.md#modifying-the-indexing-policy) triggers an index transformation. The progress of this transformation can also be tracked from the SDKs.
+
+> [!NOTE]
+> When updating indexing policy, writes to Azure Cosmos DB will be uninterrupted. Learn more about [indexing transformations](../index-policy.md#modifying-the-indexing-policy)
+
+## Use the Azure portal
+
+Azure Cosmos DB containers store their indexing policy as a JSON document that the Azure portal lets you directly edit.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Create a new Azure Cosmos DB account or select an existing account.
+
+1. Open the **Data Explorer** pane and select the container that you want to work on.
+
+1. Click on **Scale & Settings**.
+
+1. Modify the indexing policy JSON document (see the examples [below](#indexing-policy-examples)).
+
+1. Click **Save** when you are done.
++
+## Use the Azure CLI
+
+To create a container with a custom indexing policy, see [Create a container with a custom index policy using CLI](manage-with-cli.md#create-a-container-with-a-custom-index-policy).
+
+## Use PowerShell
+
+To create a container with a custom indexing policy, see [Create a container with a custom index policy using PowerShell](manage-with-powershell.md#create-container-custom-index).
+
+## <a id="dotnet-sdk"></a> Use the .NET SDK
+
+# [.NET SDK V2](#tab/dotnetv2)
+
+The `DocumentCollection` object from the [.NET SDK v2](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/) exposes an `IndexingPolicy` property that lets you change the `IndexingMode` and add or remove `IncludedPaths` and `ExcludedPaths`.
+
+```csharp
+// Retrieve the container's details
+ResourceResponse<DocumentCollection> containerResponse = await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri("database", "container"));
+// Set the indexing mode to consistent
+containerResponse.Resource.IndexingPolicy.IndexingMode = IndexingMode.Consistent;
+// Add an included path
+containerResponse.Resource.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
+// Add an excluded path
+containerResponse.Resource.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/name/*" });
+// Add a spatial index
+containerResponse.Resource.IndexingPolicy.SpatialIndexes.Add(new SpatialSpec() { Path = "/locations/*", SpatialTypes = new Collection<SpatialType>() { SpatialType.Point } } );
+// Add a composite index
+containerResponse.Resource.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath> {new CompositePath() { Path = "/name", Order = CompositePathSortOrder.Ascending }, new CompositePath() { Path = "/age", Order = CompositePathSortOrder.Descending }});
+// Update container with changes
+await client.ReplaceDocumentCollectionAsync(containerResponse.Resource);
+```
+
+To track the index transformation progress, pass a `RequestOptions` object that sets the `PopulateQuotaInfo` property to `true`.
+
+```csharp
+// retrieve the container's details
+ResourceResponse<DocumentCollection> container = await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri("database", "container"), new RequestOptions { PopulateQuotaInfo = true });
+// retrieve the index transformation progress from the result
+long indexTransformationProgress = container.IndexTransformationProgress;
+```
+
+# [.NET SDK V3](#tab/dotnetv3)
+
+The `ContainerProperties` object from the [.NET SDK v3](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/) (see [this Quickstart](quickstart-dotnet.md) regarding its usage) exposes an `IndexingPolicy` property that lets you change the `IndexingMode` and add or remove `IncludedPaths` and `ExcludedPaths`.
+
+```csharp
+// Retrieve the container's details
+ContainerResponse containerResponse = await client.GetContainer("database", "container").ReadContainerAsync();
+// Set the indexing mode to consistent
+containerResponse.Resource.IndexingPolicy.IndexingMode = IndexingMode.Consistent;
+// Add an included path
+containerResponse.Resource.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
+// Add an excluded path
+containerResponse.Resource.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/name/*" });
+// Add a spatial index
+SpatialPath spatialPath = new SpatialPath
+{
+ Path = "/locations/*"
+};
+spatialPath.SpatialTypes.Add(SpatialType.Point);
+containerResponse.Resource.IndexingPolicy.SpatialIndexes.Add(spatialPath);
+// Add a composite index
+containerResponse.Resource.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath> { new CompositePath() { Path = "/name", Order = CompositePathSortOrder.Ascending }, new CompositePath() { Path = "/age", Order = CompositePathSortOrder.Descending } });
+// Update container with changes
+await client.GetContainer("database", "container").ReplaceContainerAsync(containerResponse.Resource);
+```
+
+To track the index transformation progress, pass a `RequestOptions` object that sets the `PopulateQuotaInfo` property to `true`, then retrieve the value from the `x-ms-documentdb-collection-index-transformation-progress` response header.
+
+```csharp
+// retrieve the container's details
+ContainerResponse containerResponse = await client.GetContainer("database", "container").ReadContainerAsync(new ContainerRequestOptions { PopulateQuotaInfo = true });
+// retrieve the index transformation progress from the result
+long indexTransformationProgress = long.Parse(containerResponse.Headers["x-ms-documentdb-collection-index-transformation-progress"]);
+```
+
+When defining a custom indexing policy while creating a new container, the SDK V3's fluent API lets you write this definition in a concise and efficient way:
+
+```csharp
+await client.GetDatabase("database").DefineContainer(name: "container", partitionKeyPath: "/myPartitionKey")
+ .WithIndexingPolicy()
+ .WithIncludedPaths()
+ .Path("/*")
+ .Attach()
+ .WithExcludedPaths()
+ .Path("/name/*")
+ .Attach()
+ .WithSpatialIndex()
+ .Path("/locations/*", SpatialType.Point)
+ .Attach()
+ .WithCompositeIndex()
+ .Path("/name", CompositePathSortOrder.Ascending)
+ .Path("/age", CompositePathSortOrder.Descending)
+ .Attach()
+ .Attach()
+ .CreateIfNotExistsAsync();
+```
++
+## Use the Java SDK
+
+The `DocumentCollection` object from the [Java SDK](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb) (see [this Quickstart](quickstart-java.md) regarding its usage) exposes `getIndexingPolicy()` and `setIndexingPolicy()` methods. The `IndexingPolicy` object they manipulate lets you change the indexing mode and add or remove included and excluded paths.
+
+```java
+// Retrieve the container's details
+Observable<ResourceResponse<DocumentCollection>> containerResponse = client.readCollection(String.format("/dbs/%s/colls/%s", "database", "container"), null);
+containerResponse.subscribe(result -> {
+DocumentCollection container = result.getResource();
+IndexingPolicy indexingPolicy = container.getIndexingPolicy();
+
+// Set the indexing mode to consistent
+indexingPolicy.setIndexingMode(IndexingMode.Consistent);
+
+// Add an included path
+
+Collection<IncludedPath> includedPaths = new ArrayList<>();
+IncludedPath includedPath = new IncludedPath();
+includedPath.setPath("/*");
+includedPaths.add(includedPath);
+indexingPolicy.setIncludedPaths(includedPaths);
+
+// Add an excluded path
+
+Collection<ExcludedPath> excludedPaths = new ArrayList<>();
+ExcludedPath excludedPath = new ExcludedPath();
+excludedPath.setPath("/name/*");
+excludedPaths.add(excludedPath);
+indexingPolicy.setExcludedPaths(excludedPaths);
+
+// Add a spatial index
+
+Collection<SpatialSpec> spatialIndexes = new ArrayList<SpatialSpec>();
+Collection<SpatialType> collectionOfSpatialTypes = new ArrayList<SpatialType>();
+
+SpatialSpec spec = new SpatialSpec();
+spec.setPath("/locations/*");
+collectionOfSpatialTypes.add(SpatialType.Point);
+spec.setSpatialTypes(collectionOfSpatialTypes);
+spatialIndexes.add(spec);
+
+indexingPolicy.setSpatialIndexes(spatialIndexes);
+
+// Add a composite index
+
+Collection<ArrayList<CompositePath>> compositeIndexes = new ArrayList<>();
+ArrayList<CompositePath> compositePaths = new ArrayList<>();
+
+CompositePath nameCompositePath = new CompositePath();
+nameCompositePath.setPath("/name");
+nameCompositePath.setOrder(CompositePathSortOrder.Ascending);
+
+CompositePath ageCompositePath = new CompositePath();
+ageCompositePath.setPath("/age");
+ageCompositePath.setOrder(CompositePathSortOrder.Descending);
+
+compositePaths.add(ageCompositePath);
+compositePaths.add(nameCompositePath);
+
+compositeIndexes.add(compositePaths);
+indexingPolicy.setCompositeIndexes(compositeIndexes);
+
+// Update the container with changes
+
+ client.replaceCollection(container, null);
+});
+```
+
+To track the index transformation progress on a container, pass a `RequestOptions` object that requests the quota info to be populated, then retrieve the value from the `x-ms-documentdb-collection-index-transformation-progress` response header.
+
+```java
+// set the RequestOptions object
+RequestOptions requestOptions = new RequestOptions();
+requestOptions.setPopulateQuotaInfo(true);
+// retrieve the container's details
+Observable<ResourceResponse<DocumentCollection>> containerResponse = client.readCollection(String.format("/dbs/%s/colls/%s", "database", "container"), requestOptions);
+containerResponse.subscribe(result -> {
+ // retrieve the index transformation progress from the response headers
+ String indexTransformationProgress = result.getResponseHeaders().get("x-ms-documentdb-collection-index-transformation-progress");
+});
+```
+
+## Use the Node.js SDK
+
+The `ContainerDefinition` interface from [Node.js SDK](https://www.npmjs.com/package/@azure/cosmos) (see [this Quickstart](quickstart-nodejs.md) regarding its usage) exposes an `indexingPolicy` property that lets you change the `indexingMode` and add or remove `includedPaths` and `excludedPaths`.
+
+Retrieve the container's details
+
+```javascript
+const containerResponse = await client.database('database').container('container').read();
+```
+
+Set the indexing mode to consistent
+
+```javascript
+containerResponse.body.indexingPolicy.indexingMode = "consistent";
+```
+
+Add included path including a spatial index
+
+```javascript
+containerResponse.body.indexingPolicy.includedPaths.push(
+  {
+    path: "/age/*",
+    indexes: [
+      {
+        kind: cosmos.DocumentBase.IndexKind.Range,
+        dataType: cosmos.DocumentBase.DataType.String
+      },
+      {
+        kind: cosmos.DocumentBase.IndexKind.Range,
+        dataType: cosmos.DocumentBase.DataType.Number
+      }
+    ]
+  },
+  {
+    path: "/locations/*",
+    indexes: [
+      {
+        kind: cosmos.DocumentBase.IndexKind.Spatial,
+        dataType: cosmos.DocumentBase.DataType.Point
+      }
+    ]
+  }
+);
+```
+
+Add excluded path
+
+```javascript
+containerResponse.body.indexingPolicy.excludedPaths.push({ path: '/name/*' });
+```
+
+Update the container with changes
+
+```javascript
+const replaceResponse = await client.database('database').container('container').replace(containerResponse.body);
+```
+
+To track the index transformation progress on a container, pass a `RequestOptions` object that sets the `populateQuotaInfo` property to `true`, then retrieve the value from the `x-ms-documentdb-collection-index-transformation-progress` response header.
+
+```javascript
+// retrieve the container's details
+const containerResponse = await client.database('database').container('container').read({
+ populateQuotaInfo: true
+});
+// retrieve the index transformation progress from the response headers
+const indexTransformationProgress = containerResponse.headers['x-ms-documentdb-collection-index-transformation-progress'];
+```
+
+## Use the Python SDK
+
+# [Python SDK V3](#tab/pythonv3)
+
+When using the [Python SDK V3](https://pypi.org/project/azure-cosmos/) (see [this Quickstart](quickstart-python.md) regarding its usage), the container configuration is managed as a dictionary. From this dictionary, it is possible to access the indexing policy and all its attributes.
+
+Retrieve the container's details
+
+```python
+containerPath = 'dbs/database/colls/collection'
+container = client.ReadContainer(containerPath)
+```
+
+Set the indexing mode to consistent
+
+```python
+container['indexingPolicy']['indexingMode'] = 'consistent'
+```
+
+Define an indexing policy with an included path and a spatial index
+
+```python
+container["indexingPolicy"] = {
+
+ "indexingMode":"consistent",
+ "spatialIndexes":[
+ {"path":"/location/*","types":["Point"]}
+ ],
+ "includedPaths":[{"path":"/age/*","indexes":[]}],
+ "excludedPaths":[{"path":"/*"}]
+}
+```
+
+Define an indexing policy with an excluded path
+
+```python
+container["indexingPolicy"] = {
+ "indexingMode":"consistent",
+ "includedPaths":[{"path":"/*","indexes":[]}],
+ "excludedPaths":[{"path":"/name/*"}]
+}
+```
+
+Add a composite index
+
+```python
+container['indexingPolicy']['compositeIndexes'] = [
+ [
+ {
+ "path": "/name",
+ "order": "ascending"
+ },
+ {
+ "path": "/age",
+ "order": "descending"
+ }
+ ]
+ ]
+```
+
+Update the container with changes
+
+```python
+response = client.ReplaceContainer(containerPath, container)
+```
+
+# [Python SDK V4](#tab/pythonv4)
+
+When using the [Python SDK V4](https://pypi.org/project/azure-cosmos/), the container configuration is managed as a dictionary. From this dictionary, it is possible to access the indexing policy and all its attributes.
+
+Retrieve the container's details
+
+```python
+database_client = cosmos_client.get_database_client('database')
+container_client = database_client.get_container_client('container')
+container = container_client.read()
+```
+
+Set the indexing mode to consistent
+
+```python
+indexingPolicy = {
+ 'indexingMode': 'consistent'
+}
+```
+
+Define an indexing policy with an included path and a spatial index
+
+```python
+indexingPolicy = {
+ "indexingMode":"consistent",
+ "spatialIndexes":[
+ {"path":"/location/*","types":["Point"]}
+ ],
+ "includedPaths":[{"path":"/age/*","indexes":[]}],
+ "excludedPaths":[{"path":"/*"}]
+}
+```
+
+Define an indexing policy with an excluded path
+
+```python
+indexingPolicy = {
+ "indexingMode":"consistent",
+ "includedPaths":[{"path":"/*","indexes":[]}],
+ "excludedPaths":[{"path":"/name/*"}]
+}
+```
+
+Add a composite index
+
+```python
+indexingPolicy['compositeIndexes'] = [
+ [
+ {
+ "path": "/name",
+ "order": "ascending"
+ },
+ {
+ "path": "/age",
+ "order": "descending"
+ }
+ ]
+]
+```
+
+Update the container with changes
+
+```python
+response = database_client.replace_container(container_client, container['partitionKey'], indexingPolicy)
+```
+
+Retrieve the index transformation progress from the response headers
+```python
+container_client.read(populate_quota_info = True,
+ response_hook = lambda h,p: print(h['x-ms-documentdb-collection-index-transformation-progress']))
+```
+++
+## Next steps
+
+Read more about the indexing in the following articles:
+
+- [Indexing overview](../index-overview.md)
+- [Indexing policy](../index-policy.md)
cosmos-db How To Migrate From Bulk Executor Library Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-migrate-from-bulk-executor-library-java.md
+
+ Title: Migrate from the bulk executor library to the bulk support in Azure Cosmos DB Java V4 SDK
+description: Learn how to migrate your application from using the bulk executor library to the bulk support in Azure Cosmos DB Java V4 SDK
++++ Last updated : 05/13/2022+
+ms.devlang: java
++++
+# Migrate from the bulk executor library to the bulk support in Azure Cosmos DB Java V4 SDK
+
+This article describes the required steps to migrate an existing application's code that uses the [Java bulk executor library](sdk-java-bulk-executor-v2.md) to the [bulk support](bulk-executor-java.md) feature in the latest version of the Java SDK.
+
+## Enable bulk support
+
+To use bulk support in the Java SDK, include the import below:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=CosmosBulkOperationsImport)]
+
+## Add documents to a reactive stream
+
+Bulk support in the Java V4 SDK works by adding documents to a reactive stream object. For example, you can add each document individually:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=AddDocsToStream)]
+
+Or you can add the documents to the stream from a list, using `fromIterable`:
+
+```java
+class SampleDoc {
+ public SampleDoc() {
+ }
+ public String getId() {
+ return id;
+ }
+ public void setId(String id) {
+ this.id = id;
+ }
+ private String id="";
+}
+List<SampleDoc> docList = new ArrayList<>();
+for (int i = 1; i <= 5; i++){
+ SampleDoc doc = new SampleDoc();
+ String id = "id-"+i;
+ doc.setId(id);
+ docList.add(doc);
+}
+
+Flux<SampleDoc> docs = Flux.fromIterable(docList);
+```
+
+If you want to bulk create or upsert items (similar to using [DocumentBulkExecutor.importAll](/java/api/com.microsoft.azure.documentdb.bulkexecutor.documentbulkexecutor.importall)), pass the reactive stream to a method like the following:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkUpsertItems)]
+
+You can also use a method like the following, which is used only for creating items:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkCreateItems)]
++
+The [DocumentBulkExecutor.importAll](/java/api/com.microsoft.azure.documentdb.bulkexecutor.documentbulkexecutor.importall) method in the old bulk executor library was also used to bulk *patch* items. The old [DocumentBulkExecutor.mergeAll](/java/api/com.microsoft.azure.documentdb.bulkexecutor.documentbulkexecutor.mergeall) method was used for patch as well, but only for the `set` patch operation type. To do bulk patch operations in the V4 SDK, first create the patch operations:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=PatchOperations)]
+
+Then you can pass the operations, along with the reactive stream of documents, to a method like the following. In this example, we apply both `add` and `set` patch operation types. The full set of supported patch operation types is listed in [Supported operations](../partial-document-update.md#supported-operations) in our overview of [partial document update in Azure Cosmos DB](../partial-document-update.md).
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkPatchItems)]
+
+> [!NOTE]
+> In the above example, we apply `add` and `set` to patch elements whose root parent exists. However, you can't do this where the root parent does **not** exist, because Azure Cosmos DB partial document update is [inspired by JSON Patch RFC 6902](../partial-document-update-faq.yml#is-this-an-implementation-of-json-patch-rfc-6902-). If you need to patch where the root parent doesn't exist, first read back the full documents, and then use a method like the following to replace the documents:
+> [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkReplaceItems)]
++
+And if you want to do bulk *delete* (similar to using [DocumentBulkExecutor.deleteAll](/java/api/com.microsoft.azure.documentdb.bulkexecutor.documentbulkexecutor.deleteall)), you can use a method like the following:
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkDeleteItems)]
++
+## Retries, timeouts, and throughput control
+
+Bulk support in the Java V4 SDK doesn't handle retries and timeouts natively. Refer to the guidance in [Bulk Executor - Java Library](bulk-executor-java.md), which includes a [sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav#should-my-application-retry-on-errors) that covers the different kinds of errors that can occur and best practices for handling retries.
++
+## Next steps
+
+* [Bulk samples on GitHub](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/tree/main/src/main/java/com/azure/cosmos/examples/bulk/async)
+* Trying to do capacity planning for a migration to Azure Cosmos DB?
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db How To Migrate From Bulk Executor Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-migrate-from-bulk-executor-library.md
+
+ Title: Migrate from the bulk executor library to the bulk support in Azure Cosmos DB .NET V3 SDK
+description: Learn how to migrate your application from using the bulk executor library to the bulk support in Azure Cosmos DB SDK V3
++++ Last updated : 08/26/2021+
+ms.devlang: csharp
++++
+# Migrate from the bulk executor library to the bulk support in Azure Cosmos DB .NET V3 SDK
+
+This article describes the required steps to migrate an existing application's code that uses the [.NET bulk executor library](bulk-executor-dotnet.md) to the [bulk support](tutorial-dotnet-bulk-import.md) feature in the latest version of the .NET SDK.
+
+## Enable bulk support
+
+Enable bulk support on the `CosmosClient` instance through the [AllowBulkExecution](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.allowbulkexecution) configuration:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="Initialization":::
+
+## Create Tasks for each operation
+
+Bulk support in the .NET SDK works by leveraging the [Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl) and grouping operations that occur concurrently.
+
+There's no single method in the SDK that takes your list of documents or operations as an input parameter. Instead, you create a Task for each operation that you want to execute in bulk, and then wait for them to complete.
+
+For example, if your initial input is a list of items where each item has the following schema:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="Model":::
+
+If you want to do bulk import (similar to using BulkExecutor.BulkImportAsync), you need to have concurrent calls to `CreateItemAsync`. For example:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="BulkImport":::
+
+If you want to do bulk *update* (similar to using [BulkExecutor.BulkUpdateAsync](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkexecutor.bulkupdateasync)), you need to have concurrent calls to the `ReplaceItemAsync` method after updating the item value. For example:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="BulkUpdate":::
+
+And if you want to do bulk *delete* (similar to using [BulkExecutor.BulkDeleteAsync](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkexecutor.bulkdeleteasync)), you need to have concurrent calls to `DeleteItemAsync`, with the `id` and partition key of each item. For example:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="BulkDelete":::
+
+## Capture task result state
+
+In the previous code examples, we created a concurrent list of tasks and called the `CaptureOperationResponse` method on each of them. This method is an extension that lets us maintain a *response schema similar* to that of BulkExecutor by capturing any errors and tracking the [request units usage](../request-units.md).
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="CaptureOperationResult":::
+
+Where the `OperationResponse` is declared as:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="OperationResult":::
+
+## Execute operations concurrently
+
+To track the scope of the entire list of Tasks, we use this helper class:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="BulkOperationsHelper":::
+
+The `ExecuteAsync` method waits until all operations are completed, and you can use it like this:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="WhenAll":::
+
+## Capture statistics
+
+The previous code waits until all operations are completed and calculates the required statistics. These statistics are similar to those of the bulk executor library's [BulkImportResponse](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkimport.bulkimportresponse).
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="ResponseType":::
+
+The `BulkOperationResponse` contains:
+
+1. The total time taken to process the list of operations through bulk support.
+1. The number of successful operations.
+1. The total of request units consumed.
+1. If there are failures, it displays a list of tuples that contain the exception and the associated item for logging and identification purposes.
+
+## Retry configuration
+
+The bulk executor library had [guidance](bulk-executor-dotnet.md#bulk-import-data-to-an-azure-cosmos-account) that recommended setting the `MaxRetryWaitTimeInSeconds` and `MaxRetryAttemptsOnThrottledRequests` properties of [RetryOptions](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.retryoptions) to `0` to delegate control to the library.
+
+For bulk support in the .NET SDK, there is no hidden behavior. You can configure the retry options directly through the [CosmosClientOptions.MaxRetryAttemptsOnRateLimitedRequests](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.maxretryattemptsonratelimitedrequests) and [CosmosClientOptions.MaxRetryWaitTimeOnRateLimitedRequests](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.maxretrywaittimeonratelimitedrequests).
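+
+For example, a minimal sketch of configuring these options when creating the client (the endpoint, key, and retry values are placeholders):
+
+```csharp
+// Minimal sketch: configure bulk mode and rate-limit retries directly on the client.
+CosmosClient client = new CosmosClient(
+    "https://<your-account>.documents.azure.com:443/",
+    "<your-account-key>",
+    new CosmosClientOptions
+    {
+        AllowBulkExecution = true,
+        // Number of retries on rate-limited (429) responses.
+        MaxRetryAttemptsOnRateLimitedRequests = 30,
+        // Maximum cumulative wait time across those retries.
+        MaxRetryWaitTimeOnRateLimitedRequests = TimeSpan.FromSeconds(60)
+    });
+```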
+
+> [!NOTE]
+> In cases where the provisioned request units are much lower than expected based on the amount of data, consider setting these options to high values. The bulk operation takes longer, but it has a higher chance of fully succeeding because of the additional retries.
+
+## Performance improvements
+
+As with other operations in the .NET SDK, using the stream APIs results in better performance and avoids any unnecessary serialization.
+
+Using stream APIs is only possible if the nature of the data you use matches that of a stream of bytes (for example, file streams). In such cases, using the `CreateItemStreamAsync`, `ReplaceItemStreamAsync`, or `DeleteItemStreamAsync` methods and working with `ResponseMessage` (instead of `ItemResponse`) increases the throughput that can be achieved.
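+
+A minimal sketch of the stream-based pattern (assuming `container` is a `Container`, `itemStream` is a `Stream` that contains the serialized item, and `partitionKeyValue` is its partition key value):
+
+```csharp
+// Minimal sketch: create an item from an existing stream of bytes.
+using (ResponseMessage response = await container.CreateItemStreamAsync(
+    itemStream,
+    new PartitionKey(partitionKeyValue)))
+{
+    if (!response.IsSuccessStatusCode)
+    {
+        // Inspect response.StatusCode and response.ErrorMessage to decide how to handle the failure.
+    }
+}
+```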
+
+## Next steps
+
+* To learn more about the .NET SDK releases, see the [Azure Cosmos DB SDK](sdk-dotnet-v2.md) article.
+* Get the complete [migration source code](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration) from GitHub.
+* [Additional bulk samples on GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/BulkSupport)
+* Trying to do capacity planning for a migration to Azure Cosmos DB?
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db How To Migrate From Change Feed Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-migrate-from-change-feed-library.md
+
+ Title: Migrate from the change feed processor library to the Azure Cosmos DB .NET V3 SDK
+description: Learn how to migrate your application from using the change feed processor library to the Azure Cosmos DB SDK V3
++++ Last updated : 09/13/2021+
+ms.devlang: csharp
+++
+# Migrate from the change feed processor library to the Azure Cosmos DB .NET V3 SDK
+
+This article describes the required steps to migrate an existing application's code that uses the [change feed processor library](https://github.com/Azure/azure-documentdb-changefeedprocessor-dotnet) to the [change feed](../change-feed.md) feature in the latest version of the .NET SDK (also referred as .NET V3 SDK).
+
+## Required code changes
+
+The .NET V3 SDK has several breaking changes. The following are the key steps to migrate your application:
+
+1. Convert the `DocumentCollectionInfo` instances into `Container` references for the monitored and leases containers.
+1. Customizations that use `WithProcessorOptions` should be updated to use `WithLeaseConfiguration` and `WithPollInterval` for intervals, `WithStartTime` [for start time](./change-feed-processor.md#starting-time), and `WithMaxItems` to define the maximum item count.
+1. Set the `processorName` on `GetChangeFeedProcessorBuilder` to match the value configured on `ChangeFeedProcessorOptions.LeasePrefix`, or use `string.Empty` otherwise.
+1. The changes are no longer delivered as an `IReadOnlyList<Document>`. Instead, they're delivered as an `IReadOnlyCollection<T>`, where `T` is a type you need to define; there is no base item class anymore.
+1. To handle the changes, you no longer need an implementation of `IChangeFeedObserver`; instead, you need to [define a delegate](change-feed-processor.md#implementing-the-change-feed-processor). The delegate can be a static method or, if you need to maintain state across executions, an instance method on a class you create.
+
+For example, if the original code to build the change feed processor looks as follows:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=ChangeFeedProcessorLibrary)]
+
+The migrated code will look like:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=ChangeFeedProcessorMigrated)]
+
+For the delegate, you can have a static method to receive the events. If you were consuming information from the `IChangeFeedObserverContext`, you can migrate to use the `ChangeFeedProcessorContext`:
+
+* `ChangeFeedProcessorContext.LeaseToken` can be used instead of `IChangeFeedObserverContext.PartitionKeyRangeId`
+* `ChangeFeedProcessorContext.Headers` can be used instead of `IChangeFeedObserverContext.FeedResponse`
+* `ChangeFeedProcessorContext.Diagnostics` contains detailed information about request latency for troubleshooting
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=Delegate)]
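+
+As an illustrative sketch (not the referenced sample), a migrated processor that uses the context-aware delegate could look like the following; the `ToDoItem` type, the container variables, and the processor name are assumptions:
+
+```csharp
+// Assumes 'monitoredContainer' and 'leaseContainer' are Container references, and that
+// ToDoItem is a hypothetical type matching the items in the monitored container.
+ChangeFeedProcessor processor = monitoredContainer
+    .GetChangeFeedProcessorBuilder<ToDoItem>("changeFeedSample", HandleChangesAsync)
+    .WithInstanceName("consoleHost")
+    .WithLeaseContainer(leaseContainer)
+    .Build();
+
+await processor.StartAsync();
+
+// The delegate can be a static method, as shown here, or an instance method if you need state.
+static async Task HandleChangesAsync(
+    ChangeFeedProcessorContext context,
+    IReadOnlyCollection<ToDoItem> changes,
+    CancellationToken cancellationToken)
+{
+    Console.WriteLine($"Received {changes.Count} changes on lease {context.LeaseToken}");
+    foreach (ToDoItem item in changes)
+    {
+        // Process each changed item here.
+    }
+    await Task.CompletedTask;
+}
+```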
+
+## State and lease container
+
+Similar to the change feed processor library, the change feed feature in .NET V3 SDK uses a [lease container](change-feed-processor.md#components-of-the-change-feed-processor) to store the state. However, the schemas are different.
+
+The SDK V3 change feed processor will detect any old library state and migrate it to the new schema automatically upon the first execution of the migrated application code.
+
+You can safely stop the application that uses the old code, migrate the code to the new version, and start the migrated application. Any changes that happened while the application was stopped will be picked up and processed by the new version.
+
+## Additional resources
+
+* [Azure Cosmos DB SDK](sdk-dotnet-v2.md)
+* [Usage samples on GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed)
+* [Additional samples on GitHub](https://github.com/Azure-Samples/cosmos-dotnet-change-feed-processor)
+
+## Next steps
+
+You can now proceed to learn more about change feed processor in the following articles:
+
+* [Overview of change feed processor](change-feed-processor.md)
+* [Using the change feed estimator](how-to-use-change-feed-estimator.md)
+* [Change feed processor start time](./change-feed-processor.md#starting-time)
+* Trying to do capacity planning for a migration to Azure Cosmos DB?
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db How To Model Partition Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-model-partition-example.md
+
+ Title: Model and partition data on Azure Cosmos DB with a real-world example
+description: Learn how to model and partition a real-world example using the Azure Cosmos DB Core API
++++ Last updated : 08/26/2021+
+ms.devlang: javascript
+++
+# How to model and partition data on Azure Cosmos DB using a real-world example
+
+This article builds on several Azure Cosmos DB concepts like [data modeling](../modeling-data.md), [partitioning](../partitioning-overview.md), and [provisioned throughput](../request-units.md) to demonstrate how to tackle a real-world data design exercise.
+
+If you usually work with relational databases, you have probably built habits and intuitions on how to design a data model. Because of the specific constraints, but also the unique strengths, of Azure Cosmos DB, most of those practices don't translate well and may drag you into suboptimal solutions. The goal of this article is to guide you through the complete process of modeling a real-world use case on Azure Cosmos DB, from item modeling to entity colocation and container partitioning.
+
+[Download or view community-generated source code](https://github.com/jwidmer/AzureCosmosDbBlogExample) that illustrates the concepts from this article. This code sample was contributed by a community member, and the Azure Cosmos DB team doesn't support its maintenance.
+
+## The scenario
+
+For this exercise, we are going to consider the domain of a blogging platform where *users* can create *posts*. Users can also *like* and add *comments* to those posts.
+
+> [!TIP]
+> We have highlighted some words in *italic*; these words identify the kind of "things" our model will have to manipulate.
+
+Adding more requirements to our specification:
+
+- A front page displays a feed of recently created posts,
+- We can fetch all posts for a user, all comments for a post and all likes for a post,
+- Posts are returned with the username of their authors and a count of how many comments and likes they have,
+- Comments and likes are also returned with the username of the users who have created them,
+- When displayed as lists, posts only have to present a truncated summary of their content.
+
+## Identify the main access patterns
+
+To start, we give some structure to our initial specification by identifying our solution's access patterns. When designing a data model for Azure Cosmos DB, it's important to understand which requests our model will have to serve to make sure that the model will serve those requests efficiently.
+
+To make the overall process easier to follow, we categorize those different requests as either commands or queries, borrowing some vocabulary from [CQRS](https://en.wikipedia.org/wiki/Command%E2%80%93query_separation#Command_query_responsibility_segregation) where commands are write requests (that is, intents to update the system) and queries are read-only requests.
+
+Here is the list of requests that our platform will have to expose:
+
+- **[C1]** Create/edit a user
+- **[Q1]** Retrieve a user
+- **[C2]** Create/edit a post
+- **[Q2]** Retrieve a post
+- **[Q3]** List a user's posts in short form
+- **[C3]** Create a comment
+- **[Q4]** List a post's comments
+- **[C4]** Like a post
+- **[Q5]** List a post's likes
+- **[Q6]** List the *x* most recent posts created in short form (feed)
+
+At this stage, we haven't thought about the details of what each entity (user, post etc.) will contain. This step is usually among the first ones to be tackled when designing against a relational store, because we have to figure out how those entities will translate in terms of tables, columns, foreign keys etc. It is much less of a concern with a document database that doesn't enforce any schema at write.
+
+The main reason why it is important to identify our access patterns from the beginning is that this list of requests is going to be our test suite. Every time we iterate over our data model, we will go through each of the requests and check its performance and scalability. We calculate the request units consumed in each model and optimize them. All these models use the default indexing policy; you can override it by indexing specific properties, which can further improve RU consumption and latency.
+
+## V1: A first version
+
+We start with two containers: `users` and `posts`.
+
+### Users container
+
+This container only stores user items:
+
+```json
+{
+ "id": "<user-id>",
+ "username": "<username>"
+}
+```
+
+We partition this container by `id`, which means that each logical partition within that container will only contain one item.
+
+### Posts container
+
+This container hosts posts, comments, and likes:
+
+```json
+{
+ "id": "<post-id>",
+ "type": "post",
+ "postId": "<post-id>",
+ "userId": "<post-author-id>",
+ "title": "<post-title>",
+ "content": "<post-content>",
+ "creationDate": "<post-creation-date>"
+}
+
+{
+ "id": "<comment-id>",
+ "type": "comment",
+ "postId": "<post-id>",
+ "userId": "<comment-author-id>",
+ "content": "<comment-content>",
+ "creationDate": "<comment-creation-date>"
+}
+
+{
+ "id": "<like-id>",
+ "type": "like",
+ "postId": "<post-id>",
+ "userId": "<liker-id>",
+ "creationDate": "<like-creation-date>"
+}
+```
+
+We partition this container by `postId`, which means that each logical partition within that container will contain one post, all the comments for that post and all the likes for that post.
+
+Note that we have introduced a `type` property in the items stored in this container to distinguish between the three types of entities that this container hosts.
+
+Also, we have chosen to reference related data instead of embedding it (check [this section](modeling-data.md) for details about these concepts) because:
+
+- there's no upper limit to how many posts a user can create,
+- posts can be arbitrarily long,
+- there's no upper limit to how many comments and likes a post can have,
+- we want to be able to add a comment or a like to a post without having to update the post itself.
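+
+To make this first design concrete, here's a hedged sketch of how the two containers could be created with the .NET SDK; the `database` variable, container names, and throughput value are illustrative assumptions:
+
+```csharp
+// Assumes 'database' is an existing Database reference.
+Container usersContainer = await database.CreateContainerIfNotExistsAsync(
+    "users",   // container ID
+    "/id",     // partition key path
+    400);      // throughput in RU/s (illustrative)
+
+Container postsContainer = await database.CreateContainerIfNotExistsAsync(
+    "posts",   // container ID
+    "/postId", // partition key path
+    400);      // throughput in RU/s (illustrative)
+```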
+
+## How well does our model perform?
+
+It's now time to assess the performance and scalability of our first version. For each of the requests previously identified, we measure its latency and how many request units it consumes. This measurement is done against a dummy data set containing 100,000 users with 5 to 50 posts per user, and up to 25 comments and 100 likes per post.
+
+### [C1] Create/edit a user
+
+This request is straightforward to implement as we just create or update an item in the `users` container. The requests will nicely spread across all partitions thanks to the `id` partition key.
++
+| **Latency** | **RU charge** | **Performance** |
+| | | |
+| 7 ms | 5.71 RU | ✅ |
+
+### [Q1] Retrieve a user
+
+Retrieving a user is done by reading the corresponding item from the `users` container.
++
+| **Latency** | **RU charge** | **Performance** |
+| | | |
+| 2 ms | 1 RU | ✅ |
+
+### [C2] Create/edit a post
+
+Similarly to **[C1]**, we just have to write to the `posts` container.
++
+| **Latency** | **RU charge** | **Performance** |
+| | | |
+| 9 ms | 8.76 RU | ✅ |
+
+### [Q2] Retrieve a post
+
+We start by retrieving the corresponding document from the `posts` container. But that's not enough: as per our specification, we also have to aggregate the username of the post's author and the counts of how many comments and how many likes this post has, which requires three additional SQL queries to be issued.
++
+Each of the additional queries filters on the partition key of its respective container, which is exactly what we want to maximize performance and scalability. But we eventually have to perform four operations to return a single post, so we'll improve that in a next iteration.
+
+| **Latency** | **RU charge** | **Performance** |
+| | | |
+| 9 ms | 19.54 RU | ⚠ |
+
+### [Q3] List a user's posts in short form
+
+First, we have to retrieve the desired posts with a SQL query that fetches the posts corresponding to that particular user. But we also have to issue additional queries to aggregate the author's username and the counts of comments and likes.
++
+This implementation presents many drawbacks:
+
+- the queries aggregating the counts of comments and likes have to be issued for each post returned by the first query,
+- the main query does not filter on the partition key of the `posts` container, leading to a fan-out and a partition scan across the container.
+
+| **Latency** | **RU charge** | **Performance** |
+| | | |
+| 130 ms | 619.41 RU | ⚠ |
+
+### [C3] Create a comment
+
+A comment is created by writing the corresponding item in the `posts` container.
++
+| **Latency** | **RU charge** | **Performance** |
+| | | |
+| 7 ms | 8.57 RU | ✅ |
+
+### [Q4] List a post's comments
+
+We start with a query that fetches all the comments for that post and once again, we also need to aggregate usernames separately for each comment.
++
+Although the main query does filter on the container's partition key, aggregating the usernames separately penalizes the overall performance. We'll improve that later on.
+
+| **Latency** | **RU charge** | **Performance** |
+| | | |
+| 23 ms | 27.72 RU | ⚠ |
+
+### [C4] Like a post
+
+Just like **[C3]**, we create the corresponding item in the `posts` container.
++
+| **Latency** | **RU charge** | **Performance** |
+| | | |
+| 6 ms | 7.05 RU | ✅ |
+
+### [Q5] List a post's likes
+
+Just like **[Q4]**, we query the likes for that post, then aggregate their usernames.
++
+| **Latency** | **RU charge** | **Performance** |
+| | | |
+| 59 ms | 58.92 RU | ⚠ |
+
+### [Q6] List the x most recent posts created in short form (feed)
+
+We fetch the most recent posts by querying the `posts` container sorted by descending creation date, then aggregate usernames and counts of comments and likes for each of the posts.
++
+Once again, our initial query doesn't filter on the partition key of the `posts` container, which triggers a costly fan-out. This one is even worse as we target a much larger result set and sort the results with an `ORDER BY` clause, which makes it more expensive in terms of request units.
+
+| **Latency** | **RU charge** | **Performance** |
+| | | |
+| 306 ms | 2063.54 RU | ⚠ |
+
+## Reflecting on the performance of V1
+
+Looking at the performance issues we faced in the previous section, we can identify two main classes of problems:
+
+- some requests require multiple queries to be issued in order to gather all the data we need to return,
+- some queries don't filter on the partition key of the containers they target, leading to a fan-out that impedes our scalability.
+
+Let's resolve each of those problems, starting with the first one.
+
+## V2: Introducing denormalization to optimize read queries
+
+The reason why we have to issue additional requests in some cases is that the results of the initial request don't contain all the data we need to return. When working with a non-relational data store like Azure Cosmos DB, this kind of issue is commonly solved by denormalizing data across our data set.
+
+In our example, we modify post items to add the username of the post's author, the count of comments and the count of likes:
+
+```json
+{
+ "id": "<post-id>",
+ "type": "post",
+ "postId": "<post-id>",
+ "userId": "<post-author-id>",
+ "userUsername": "<post-author-username>",
+ "title": "<post-title>",
+ "content": "<post-content>",
+ "commentCount": <count-of-comments>,
+ "likeCount": <count-of-likes>,
+ "creationDate": "<post-creation-date>"
+}
+```
+
+We also modify comment and like items to add the username of the user who has created them:
+
+```json
+{
+ "id": "<comment-id>",
+ "type": "comment",
+ "postId": "<post-id>",
+ "userId": "<comment-author-id>",
+ "userUsername": "<comment-author-username>",
+ "content": "<comment-content>",
+ "creationDate": "<comment-creation-date>"
+}
+
+{
+ "id": "<like-id>",
+ "type": "like",
+ "postId": "<post-id>",
+ "userId": "<liker-id>",
+ "userUsername": "<liker-username>",
+ "creationDate": "<like-creation-date>"
+}
+```
+
+### Denormalizing comment and like counts
+
+What we want to achieve is that every time we add a comment or a like, we also increment the `commentCount` or the `likeCount` in the corresponding post. As our `posts` container is partitioned by `postId`, the new item (comment or like) and its corresponding post sit in the same logical partition. As a result, we can use a [stored procedure](stored-procedures-triggers-udfs.md) to perform that operation.
+
+Now when creating a comment (**[C3]**), instead of just adding a new item in the `posts` container, we call the following stored procedure on that container:
+
+```javascript
+function createComment(postId, comment) {
+ var collection = getContext().getCollection();
+
+ collection.readDocument(
+ `${collection.getAltLink()}/docs/${postId}`,
+ function (err, post) {
+ if (err) throw err;
+
+ post.commentCount++;
+ collection.replaceDocument(
+ post._self,
+ post,
+ function (err) {
+ if (err) throw err;
+
+ comment.postId = postId;
+ collection.createDocument(
+ collection.getSelfLink(),
+ comment
+ );
+ }
+ );
+ })
+}
+```
+
+This stored procedure takes the ID of the post and the body of the new comment as parameters, then:
+
+- retrieves the post
+- increments the `commentCount`
+- replaces the post
+- adds the new comment
+
+As stored procedures are executed as atomic transactions, the value of `commentCount` and the actual number of comments will always stay in sync.
+
+We obviously call a similar stored procedure when adding new likes to increment the `likeCount`.
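+
+As an illustration, and assuming the stored procedure has been registered on the `posts` container under the ID `createComment`, a .NET client could invoke it like this (the variable names are assumptions):
+
+```csharp
+// Assumes 'postsContainer' is a Container reference, 'postId' is the target post's ID,
+// and 'comment' is the new comment object.
+StoredProcedureExecuteResponse<object> response =
+    await postsContainer.Scripts.ExecuteStoredProcedureAsync<object>(
+        "createComment",              // ID of the registered stored procedure
+        new PartitionKey(postId),     // the post's logical partition
+        new dynamic[] { postId, comment });
+```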
+
+### Denormalizing usernames
+
+Usernames require a different approach as users not only sit in different partitions, but in a different container. When we have to denormalize data across partitions and containers, we can use the source container's [change feed](../change-feed.md).
+
+In our example, we use the change feed of the `users` container to react whenever users update their usernames. When that happens, we propagate the change by calling another stored procedure on the `posts` container:
++
+```javascript
+function updateUsernames(userId, username) {
+ var collection = getContext().getCollection();
+
+ collection.queryDocuments(
+ collection.getSelfLink(),
+ `SELECT * FROM p WHERE p.userId = '${userId}'`,
+ function (err, results) {
+ if (err) throw err;
+
+ for (var i in results) {
+ var doc = results[i];
+ doc.userUsername = username;
+
+ collection.upsertDocument(
+ collection.getSelfLink(),
+ doc);
+ }
+ });
+}
+```
+
+This stored procedure takes the ID of the user and the user's new username as parameters, then:
+
+- fetches all items matching the `userId` (which can be posts, comments, or likes)
+- for each of those items
+ - replaces the `userUsername`
+ - replaces the item
+
+> [!IMPORTANT]
+> This operation is costly because it requires this stored procedure to be executed on every partition of the `posts` container. We assume that most users choose a suitable username during sign-up and won't ever change it, so this update will run very rarely.
+
+## What are the performance gains of V2?
+
+### [Q2] Retrieve a post
+
+Now that our denormalization is in place, we only have to fetch a single item to handle that request.
++
+| **Latency** | **RU charge** | **Performance** |
+| | | |
+| 2 ms | 1 RU | ✅ |
+
+### [Q4] List a post's comments
+
+Here again, we can spare the extra requests that fetched the usernames and end up with a single query that filters on the partition key.
++
+| **Latency** | **RU charge** | **Performance** |
+| | | |
+| 4 ms | 7.72 RU | ✅ |
+
+### [Q5] List a post's likes
+
+Exact same situation when listing the likes.
++
+| **Latency** | **RU charge** | **Performance** |
+| | | |
+| 4 ms | 8.92 RU | ✅ |
+
+## V3: Making sure all requests are scalable
+
+Looking at our overall performance improvements, there are still two requests that we haven't fully optimized: **[Q3]** and **[Q6]**. They are the requests involving queries that don't filter on the partition key of the containers they target.
+
+### [Q3] List a user's posts in short form
+
+This request already benefits from the improvements introduced in V2, which spares additional queries.
++
+But the remaining query is still not filtering on the partition key of the `posts` container.
+
+The way to think about this situation is actually simple:
+
+1. This request *has* to filter on the `userId` because we want to fetch all posts for a particular user
+1. It doesn't perform well because it is executed against the `posts` container, which is not partitioned by `userId`
+1. Stating the obvious, we would solve our performance problem by executing this request against a container that *is* partitioned by `userId`
+1. It turns out that we already have such a container: the `users` container!
+
+So we introduce a second level of denormalization by duplicating entire posts to the `users` container. By doing that, we effectively get a copy of our posts, only partitioned along a different dimension, making them much more efficient to retrieve by their `userId`.
+
+The `users` container now contains 2 kinds of items:
+
+```json
+{
+ "id": "<user-id>",
+ "type": "user",
+ "userId": "<user-id>",
+ "username": "<username>"
+}
+
+{
+ "id": "<post-id>",
+ "type": "post",
+ "postId": "<post-id>",
+ "userId": "<post-author-id>",
+ "userUsername": "<post-author-username>",
+ "title": "<post-title>",
+ "content": "<post-content>",
+ "commentCount": <count-of-comments>,
+ "likeCount": <count-of-likes>,
+ "creationDate": "<post-creation-date>"
+}
+```
+
+Note that:
+
+- we have introduced a `type` field in the user item to distinguish users from posts,
+- we have also added a `userId` field in the user item, which is redundant with the `id` field but is required as the `users` container is now partitioned by `userId` (and not `id` as previously)
+
+To achieve that denormalization, we once again use the change feed. This time, we react to the change feed of the `posts` container to dispatch any new or updated post to the `users` container. And because listing posts doesn't require returning their full content, we can truncate them in the process.
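+
+A hedged sketch of that change feed handler with the .NET SDK could look like the following; the `BlogPost` type, the truncation length, and the container variables are assumptions:
+
+```csharp
+// Assumes 'postsContainer', 'usersContainer', and 'leaseContainer' are Container references,
+// and that BlogPost is a hypothetical type exposing Type, UserId, and Content properties.
+ChangeFeedProcessor denormalizePosts = postsContainer
+    .GetChangeFeedProcessorBuilder<BlogPost>("denormalize-posts", async (changes, cancellationToken) =>
+    {
+        foreach (BlogPost post in changes)
+        {
+            // The posts container also holds comments and likes; only dispatch posts.
+            if (post.Type != "post")
+            {
+                continue;
+            }
+
+            // Truncate the content, since lists only display a summary.
+            post.Content = post.Content?.Substring(0, Math.Min(post.Content.Length, 100));
+
+            // Copy the post into the users container, which is partitioned by userId.
+            await usersContainer.UpsertItemAsync(post, new PartitionKey(post.UserId));
+        }
+    })
+    .WithInstanceName("denormalization-host")
+    .WithLeaseContainer(leaseContainer)
+    .Build();
+
+await denormalizePosts.StartAsync();
+```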
++
+We can now route our query to the `users` container, filtering on the container's partition key.
++
+| **Latency** | **RU charge** | **Performance** |
+| | | |
+| 4 ms | 6.46 RU | ✅ |
+
+### [Q6] List the x most recent posts created in short form (feed)
+
+We have to deal with a similar situation here: even after sparing the additional queries made unnecessary by the denormalization introduced in V2, the remaining query does not filter on the container's partition key:
++
+Following the same approach, maximizing this request's performance and scalability requires that it only hits one partition. This is feasible because we only have to return a limited number of items; in order to populate our blogging platform's home page, we just need to get the 100 most recent posts, without the need to paginate through the entire data set.
+
+So to optimize this last request, we introduce a third container to our design, entirely dedicated to serving this request. We denormalize our posts to that new `feed` container:
+
+```json
+{
+ "id": "<post-id>",
+ "type": "post",
+ "postId": "<post-id>",
+ "userId": "<post-author-id>",
+ "userUsername": "<post-author-username>",
+ "title": "<post-title>",
+ "content": "<post-content>",
+ "commentCount": <count-of-comments>,
+ "likeCount": <count-of-likes>,
+ "creationDate": "<post-creation-date>"
+}
+```
+
+This container is partitioned by `type`, which will always be `post` in our items. Doing that ensures that all the items in this container will sit in the same partition.
+
+To achieve the denormalization, we just have to hook into the change feed pipeline we have previously introduced to dispatch the posts to that new container. One important thing to bear in mind is that we need to make sure that we only store the 100 most recent posts; otherwise, the content of the container may grow beyond the maximum size of a partition. This is done by calling a [post-trigger](stored-procedures-triggers-udfs.md#triggers) every time a document is added to the container:
++
+Here's the body of the post-trigger that truncates the collection:
+
+```javascript
+function truncateFeed() {
+ const maxDocs = 100;
+ var context = getContext();
+ var collection = context.getCollection();
+
+ collection.queryDocuments(
+ collection.getSelfLink(),
+ "SELECT VALUE COUNT(1) FROM f",
+ function (err, results) {
+ if (err) throw err;
+
+ processCountResults(results);
+ });
+
+ function processCountResults(results) {
+ // + 1 because the query didn't count the newly inserted doc
+ if ((results[0] + 1) > maxDocs) {
+ var docsToRemove = results[0] + 1 - maxDocs;
+ collection.queryDocuments(
+ collection.getSelfLink(),
+ `SELECT TOP ${docsToRemove} * FROM f ORDER BY f.creationDate`,
+ function (err, results) {
+ if (err) throw err;
+
+ processDocsToRemove(results, 0);
+ });
+ }
+ }
+
+ function processDocsToRemove(results, index) {
+ var doc = results[index];
+ if (doc) {
+ collection.deleteDocument(
+ doc._self,
+ function (err) {
+ if (err) throw err;
+
+ processDocsToRemove(results, index + 1);
+ });
+ }
+ }
+}
+```
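+
+As a hedged sketch, the code that writes a post into the `feed` container could request this post-trigger explicitly; the trigger is assumed to be registered on the container under the ID `truncateFeed`, and the variable names are illustrative:
+
+```csharp
+// Assumes 'feedContainer' is a Container reference and 'post' is the denormalized post item.
+await feedContainer.CreateItemAsync(
+    post,
+    new PartitionKey("post"), // the feed container is partitioned by 'type', which is always "post"
+    new ItemRequestOptions
+    {
+        // Run the truncateFeed post-trigger after the insert to keep only the 100 most recent posts.
+        PostTriggers = new List<string> { "truncateFeed" }
+    });
+```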
+
+The final step is to reroute our query to our new `feed` container:
++
+| **Latency** | **RU charge** | **Performance** |
+| | | |
+| 9 ms | 16.97 RU | ✅ |
+
+## Conclusion
+
+Let's have a look at the overall performance and scalability improvements we have introduced over the different versions of our design.
+
+| | V1 | V2 | V3 |
+| | | | |
+| **[C1]** | 7 ms / 5.71 RU | 7 ms / 5.71 RU | 7 ms / 5.71 RU |
+| **[Q1]** | 2 ms / 1 RU | 2 ms / 1 RU | 2 ms / 1 RU |
+| **[C2]** | 9 ms / 8.76 RU | 9 ms / 8.76 RU | 9 ms / 8.76 RU |
+| **[Q2]** | 9 ms / 19.54 RU | 2 ms / 1 RU | 2 ms / 1 RU |
+| **[Q3]** | 130 ms / 619.41 RU | 28 ms / 201.54 RU | 4 ms / 6.46 RU |
+| **[C3]** | 7 ms / 8.57 RU | 7 ms / 15.27 RU | 7 ms / 15.27 RU |
+| **[Q4]** | 23 ms / 27.72 RU | 4 ms / 7.72 RU | 4 ms / 7.72 RU |
+| **[C4]** | 6 ms / 7.05 RU | 7 ms / 14.67 RU | 7 ms / 14.67 RU |
+| **[Q5]** | 59 ms / 58.92 RU | 4 ms / 8.92 RU | 4 ms / 8.92 RU |
+| **[Q6]** | 306 ms / 2063.54 RU | 83 ms / 532.33 RU | 9 ms / 16.97 RU |
+
+### We have optimized a read-heavy scenario
+
+You may have noticed that we have concentrated our efforts towards improving the performance of read requests (queries) at the expense of write requests (commands). In many cases, write operations now trigger subsequent denormalization through change feeds, which makes them more computationally expensive and longer to materialize.
+
+This is justified by the fact that a blogging platform (like most social apps) is read-heavy, which means that the amount of read requests it has to serve is usually orders of magnitude higher than the amount of write requests. So it makes sense to make write requests more expensive to execute in order to let read requests be cheaper and better performing.
+
+If we look at the most extreme optimization we have done, **[Q6]** went from 2000+ RUs to just 17 RUs; we have achieved that by denormalizing posts at a cost of around 10 RUs per item. As we would serve a lot more feed requests than creation or updates of posts, the cost of this denormalization is negligible considering the overall savings.
+
+### Denormalization can be applied incrementally
+
+The scalability improvements we've explored in this article involve denormalization and duplication of data across the data set. It should be noted that these optimizations don't have to be put in place at day 1. Queries that filter on partition keys perform better at scale, but cross-partition queries can be totally acceptable if they are called rarely or against a limited data set. If you're just building a prototype, or launching a product with a small and controlled user base, you can probably spare those improvements for later; what's important then is to [monitor](../use-metrics.md) your model's performance so you can decide if and when it's time to bring them in.
+
+The change feed that we use to distribute updates to other containers stores all those updates persistently. This makes it possible to request all updates since the creation of the container and bootstrap denormalized views as a one-time catch-up operation, even if your system already has a lot of data.
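+
+As a hedged sketch, one way to run such a catch-up is to configure the change feed processor to start from the beginning of the container's history; the `BlogPost` type, the delegate, and the variable names are assumptions:
+
+```csharp
+// Assumes 'postsContainer' and 'leaseContainer' are Container references and that
+// HandleChangesAsync is the delegate that builds the denormalized views.
+ChangeFeedProcessor bootstrapProcessor = postsContainer
+    .GetChangeFeedProcessorBuilder<BlogPost>("bootstrap-denormalized-views", HandleChangesAsync)
+    .WithInstanceName("bootstrap-host")
+    .WithLeaseContainer(leaseContainer)
+    .WithStartTime(DateTime.MinValue.ToUniversalTime()) // read the change feed from the beginning
+    .Build();
+
+await bootstrapProcessor.StartAsync();
+```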
+
+## Next steps
+
+After this introduction to practical data modeling and partitioning, you may want to check the following articles to review the concepts we have covered:
+
+- [Work with databases, containers, and items](../account-databases-containers-items.md)
+- [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
+- [Change feed in Azure Cosmos DB](../change-feed.md)
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db How To Multi Master https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-multi-master.md
+
+ Title: How to configure multi-region writes in Azure Cosmos DB
+description: Learn how to configure multi-region writes for your applications by using different SDKs in Azure Cosmos DB.
++++ Last updated : 01/06/2021+++++
+# Configure multi-region writes in your applications that use Azure Cosmos DB
+
+Once an account has been created with multiple write regions enabled, you must make two changes to the ConnectionPolicy for the Azure Cosmos DB client in your application to enable multi-region writes in Azure Cosmos DB. Within the ConnectionPolicy, set UseMultipleWriteLocations to true and pass the name of the region where the application is deployed to ApplicationRegion. This will populate the PreferredLocations property based on the geo-proximity of the location passed in. If a new region is later added to the account, the application does not have to be updated or redeployed; it will automatically detect the closer region and will auto-home onto it should a regional event occur.
+
+> [!Note]
+> Azure Cosmos DB accounts initially configured with a single write region can be configured for multiple write regions with zero downtime. To learn more, see [Configure multiple-write regions](../how-to-manage-database-account.md#configure-multiple-write-regions).
+
+## <a id="portal"></a> Azure portal
+
+To enable multi-region writes from Azure portal, use the following steps:
+
+1. Sign-in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to your Azure Cosmos DB account and from the menu, open the **Replicate data globally** pane.
+
+1. Under the **Multi-region writes** option, choose **enable**. It automatically adds the existing regions to read and write regions.
+
+1. You can add additional regions by selecting the icons on the map or by selecting the **Add region** button. All the regions you add will have both reads and writes enabled.
+
+1. After you update the region list, select **save** to apply the changes.
+
+ :::image type="content" source="./media/how-to-multi-master/enable-multi-region-writes.png" alt-text="Screenshot to enable multi-region writes using Azure portal" lightbox="./media/how-to-multi-master/enable-multi-region-writes.png":::
+
+## <a id="netv2"></a>.NET SDK v2
+
+To enable multi-region writes in your application, set `UseMultipleWriteLocations` to `true`. Also, set `SetCurrentLocation` to the region in which the application is being deployed and where Azure Cosmos DB is replicated:
+
+```csharp
+ConnectionPolicy policy = new ConnectionPolicy
+ {
+ ConnectionMode = ConnectionMode.Direct,
+ ConnectionProtocol = Protocol.Tcp,
+ UseMultipleWriteLocations = true
+ };
+policy.SetCurrentLocation("West US 2");
+```
+
+## <a id="netv3"></a>.NET SDK v3
+
+To enable multi-region writes in your application, set `ApplicationRegion` to the region in which the application is being deployed and where Azure Cosmos DB is replicated:
+
+```csharp
+CosmosClient cosmosClient = new CosmosClient(
+ "<connection-string-from-portal>",
+ new CosmosClientOptions()
+ {
+ ApplicationRegion = Regions.WestUS2,
+ });
+```
+
+Optionally, you can use the `CosmosClientBuilder` and `WithApplicationRegion` to achieve the same result:
+
+```csharp
+CosmosClientBuilder cosmosClientBuilder = new CosmosClientBuilder("<connection-string-from-portal>")
+ .WithApplicationRegion(Regions.WestUS2);
+CosmosClient client = cosmosClientBuilder.Build();
+```
+
+## <a id="java4-multi-region-writes"></a> Java V4 SDK
+
+To enable multi-region writes in your application, call `.multipleWriteRegionsEnabled(true)` and `.preferredRegions(preferredRegions)` in the client builder, where `preferredRegions` is a `List` containing one element - that is the region in which the application is being deployed and where Azure Cosmos DB is replicated:
+
+# [Async](#tab/api-async)
+
+ [Java SDK V4](sdk-java-v4.md) (Maven [com.azure::azure-cosmos](https://mvnrepository.com/artifact/com.azure/azure-cosmos)) Async API
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=ConfigureMultimasterAsync)]
+
+# [Sync](#tab/api-sync)
+
+ [Java SDK V4](sdk-java-v4.md) (Maven [com.azure::azure-cosmos](https://mvnrepository.com/artifact/com.azure/azure-cosmos)) Sync API
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=ConfigureMultimasterSync)]
+
+
+
+## <a id="java2-multi-region-writes"></a> Async Java V2 SDK
+
+The Async Java V2 SDK uses the Maven package [com.microsoft.azure::azure-cosmosdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb). To enable multi-region writes in your application, set `policy.setUsingMultipleWriteLocations(true)` and set `policy.setPreferredLocations` to the region in which the application is being deployed and where Azure Cosmos DB is replicated:
+
+```java
+ConnectionPolicy policy = new ConnectionPolicy();
+policy.setUsingMultipleWriteLocations(true);
+policy.setPreferredLocations(Collections.singletonList(region));
+
+AsyncDocumentClient client =
+ new AsyncDocumentClient.Builder()
+ .withMasterKeyOrResourceToken(this.accountKey)
+ .withServiceEndpoint(this.accountEndpoint)
+ .withConsistencyLevel(ConsistencyLevel.Eventual)
+ .withConnectionPolicy(policy).build();
+```
+
+## <a id="javascript"></a>Node.js, JavaScript, and TypeScript SDKs
+
+To enable multi-region writes in your application, set `connectionPolicy.UseMultipleWriteLocations` to `true`. Also, set `connectionPolicy.PreferredLocations` to the region in which the application is being deployed and where Azure Cosmos DB is replicated:
+
+```javascript
+const connectionPolicy: ConnectionPolicy = new ConnectionPolicy();
+connectionPolicy.UseMultipleWriteLocations = true;
+connectionPolicy.PreferredLocations = [region];
+
+const client = new CosmosClient({
+ endpoint: config.endpoint,
+ auth: { masterKey: config.key },
+ connectionPolicy,
+ consistencyLevel: ConsistencyLevel.Eventual
+});
+```
+
+## <a id="python"></a>Python SDK
+
+To enable multi-region writes in your application, set `connection_policy.UseMultipleWriteLocations` to `true`. Also, set `connection_policy.PreferredLocations` to the region in which the application is being deployed and where Azure Cosmos DB is replicated.
+
+```python
+connection_policy = documents.ConnectionPolicy()
+connection_policy.UseMultipleWriteLocations = True
+connection_policy.PreferredLocations = [region]
+
+client = cosmos_client.CosmosClient(self.account_endpoint, {
+ 'masterKey': self.account_key}, connection_policy, documents.ConsistencyLevel.Session)
+```
+
+## Next steps
+
+Read the following articles:
+
+* [Use session tokens to manage consistency in Azure Cosmos DB](how-to-manage-consistency.md#utilize-session-tokens)
+* [Conflict types and resolution policies in Azure Cosmos DB](../conflict-resolution-policies.md)
+* [High availability in Azure Cosmos DB](../high-availability.md)
+* [Consistency levels in Azure Cosmos DB](../consistency-levels.md)
+* [Choose the right consistency level in Azure Cosmos DB](../consistency-levels.md)
+* [Consistency, availability, and performance tradeoffs in Azure Cosmos DB](../consistency-levels.md)
+* [Availability and performance tradeoffs for various consistency levels](../consistency-levels.md)
+* [Globally scaling provisioned throughput](../request-units.md)
+* [Global distribution: Under the hood](../global-dist-under-the-hood.md)
cosmos-db How To Provision Autoscale Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-provision-autoscale-throughput.md
+
+ Title: Provision autoscale throughput in Azure Cosmos DB for NoSQL
+description: Learn how to provision autoscale throughput at the container and database level in Azure Cosmos DB for NoSQL using Azure portal, CLI, PowerShell, and various other SDKs.
+++++ Last updated : 04/01/2022+++
+# Provision autoscale throughput on database or container in Azure Cosmos DB - API for NoSQL
+
+This article explains how to provision autoscale throughput on a database or container (collection, graph, or table) in Azure Cosmos DB for NoSQL. You can enable autoscale on a single container, or provision autoscale throughput on a database and share it among all the containers in the database.
+
+If you are using a different API, see the [API for MongoDB](../mongodb/how-to-provision-throughput.md) and [API for Cassandra](../cassandr) articles to provision the throughput.
+
+## Azure portal
+
+### Create new database or container with autoscale
+
+1. Sign in to the [Azure portal](https://portal.azure.com) or the [Azure Cosmos DB explorer.](https://cosmos.azure.com/)
+
+1. Navigate to your Azure Cosmos DB account and open the **Data Explorer** tab.
+
+1. Select **New Container.** Enter a name for your database, container, and a partition key. Under database or container throughput, select the **Autoscale** option, and set the [maximum throughput (RU/s)](../provision-throughput-autoscale.md#how-autoscale-provisioned-throughput-works) that you want the database or container to scale to.
+
+ :::image type="content" source="./media/how-to-provision-autoscale-throughput/create-new-autoscale-container.png" alt-text="Creating a container and configuring autoscale provisioned throughput":::
+
+1. Select **OK**.
+
+To provision autoscale on a shared throughput database, select the **Provision database throughput** option when creating a new database.
+
+### Enable autoscale on existing database or container
+
+1. Sign in to the [Azure portal](https://portal.azure.com) or the [Azure Cosmos DB explorer.](https://cosmos.azure.com/)
+
+1. Navigate to your Azure Cosmos DB account and open the **Data Explorer** tab.
+
+1. Select **Scale and Settings** for your container, or **Scale** for your database.
+
+1. Under **Scale**, select the **Autoscale** option and **Save**.
+
+ :::image type="content" source="./media/how-to-provision-autoscale-throughput/autoscale-scale-and-settings.png" alt-text="Enabling autoscale on an existing container":::
+
+> [!NOTE]
+> When you enable autoscale on an existing database or container, the starting value for max RU/s is determined by the system, based on your current manual provisioned throughput settings and storage. After the operation completes, you can change the max RU/s if needed. [Learn more.](../autoscale-faq.yml#how-does-the-migration-between-autoscale-and-standard--manual--provisioned-throughput-work-)
+
+## Azure Cosmos DB .NET V3 SDK
+
+Use [version 3.9 or higher](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) of the Azure Cosmos DB .NET SDK for API for NoSQL to manage autoscale resources.
+
+> [!IMPORTANT]
+> You can use the .NET SDK to create new autoscale resources. The SDK does not support migrating between autoscale and standard (manual) throughput. The migration scenario is currently supported in only the [Azure portal](#enable-autoscale-on-existing-database-or-container), [CLI](#azure-cli), and [PowerShell](#azure-powershell).
+
+### Create database with shared throughput
+
+```csharp
+// Create instance of CosmosClient
+CosmosClient cosmosClient = new CosmosClient(Endpoint, PrimaryKey);
+
+// Autoscale throughput settings
+ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.CreateAutoscaleThroughput(1000); //Set autoscale max RU/s
+
+//Create the database with autoscale enabled
+database = await cosmosClient.CreateDatabaseAsync(DatabaseName, throughputProperties: autoscaleThroughputProperties);
+```
+
+### Create container with dedicated throughput
+
+```csharp
+// Get reference to database that container will be created in
+Database database = await cosmosClient.GetDatabase("DatabaseName");
+
+// Container and autoscale throughput settings
+ContainerProperties autoscaleContainerProperties = new ContainerProperties("ContainerName", "/partitionKey");
+ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.CreateAutoscaleThroughput(1000); //Set autoscale max RU/s
+
+// Create the container with autoscale enabled
+container = await database.CreateContainerAsync(autoscaleContainerProperties, autoscaleThroughputProperties);
+```
+
+### Read the current throughput (RU/s)
+
+```csharp
+// Get a reference to the resource
+Container container = cosmosClient.GetDatabase("DatabaseName").GetContainer("ContainerName");
+
+// Read the throughput on a resource
+ThroughputProperties autoscaleContainerThroughput = await container.ReadThroughputAsync(requestOptions: null);
+
+// The autoscale max throughput (RU/s) of the resource
+int? autoscaleMaxThroughput = autoscaleContainerThroughput.AutoscaleMaxThroughput;
+
+// The throughput (RU/s) the resource is currently scaled to
+int? currentThroughput = autoscaleContainerThroughput.Throughput;
+```
+
+### Change the autoscale max throughput (RU/s)
+
+```csharp
+// Change the autoscale max throughput (RU/s)
+await container.ReplaceThroughputAsync(ThroughputProperties.CreateAutoscaleThroughput(newAutoscaleMaxThroughput));
+```
+
+## Azure Cosmos DB Java V4 SDK
+
+You can use [version 4.0 or higher](https://mvnrepository.com/artifact/com.azure/azure-cosmos) of the Azure Cosmos DB Java SDK for API for NoSQL to manage autoscale resources.
+
+> [!IMPORTANT]
+> You can use the Java SDK to create new autoscale resources. The SDK does not support migrating between autoscale and standard (manual) throughput. The migration scenario is currently supported in only the [Azure portal](#enable-autoscale-on-existing-database-or-container), [CLI](#azure-cli), and [PowerShell](#azure-powershell).
+### Create database with shared throughput
+
+# [Async](#tab/api-async)
+
+```java
+// Create instance of CosmosClient
+CosmosAsyncClient client = new CosmosClientBuilder()
+ .setEndpoint(HOST)
+ .setKey(PRIMARYKEY)
+ .setConnectionPolicy(CONNECTIONPOLICY)
+ .buildAsyncClient();
+
+// Autoscale throughput settings
+ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(1000); //Set autoscale max RU/s
+
+//Create the database with autoscale enabled
+CosmosAsyncDatabase database = client.createDatabase(databaseName, autoscaleThroughputProperties).block().getDatabase();
+```
+
+# [Sync](#tab/api-sync)
+
+```java
+// Create instance of CosmosClient
+CosmosClient client = new CosmosClientBuilder()
+ .setEndpoint(HOST)
+ .setKey(PRIMARYKEY)
+ .setConnectionPolicy(CONNECTIONPOLICY)
+ .buildClient();
+
+// Autoscale throughput settings
+ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(1000); //Set autoscale max RU/s
+
+//Create the database with autoscale enabled
+CosmosDatabase database = client.createDatabase(databaseName, autoscaleThroughputProperties).getDatabase();
+```
+
+
+
+### Create container with dedicated throughput
+
+# [Async](#tab/api-async)
+
+```java
+// Get reference to database that container will be created in
+CosmosAsyncDatabase database = client.createDatabase("DatabaseName").block().getDatabase();
+
+// Container and autoscale throughput settings
+CosmosContainerProperties autoscaleContainerProperties = new CosmosContainerProperties("ContainerName", "/partitionKey");
+ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(1000); //Set autoscale max RU/s
+
+// Create the container with autoscale enabled
+CosmosAsyncContainer container = database.createContainer(autoscaleContainerProperties, autoscaleThroughputProperties, new CosmosContainerRequestOptions())
+ .block()
+ .getContainer();
+```
+
+# [Sync](#tab/api-sync)
+
+```java
+// Get reference to database that container will be created in
+CosmosDatabase database = client.createDatabase("DatabaseName").getDatabase();
+
+// Container and autoscale throughput settings
+CosmosContainerProperties autoscaleContainerProperties = new CosmosContainerProperties("ContainerName", "/partitionKey");
+ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(1000); //Set autoscale max RU/s
+
+// Create the container with autoscale enabled
+CosmosContainer container = database.createContainer(autoscaleContainerProperties, autoscaleThroughputProperties, new CosmosContainerRequestOptions())
+ .getContainer();
+```
+
+
+
+### Read the current throughput (RU/s)
+
+# [Async](#tab/api-async)
+
+```java
+// Get a reference to the resource
+CosmosAsyncContainer container = client.getDatabase("DatabaseName").getContainer("ContainerName");
+
+// Read the throughput on a resource
+ThroughputProperties autoscaleContainerThroughput = container.readThroughput().block().getProperties();
+
+// The autoscale max throughput (RU/s) of the resource
+int autoscaleMaxThroughput = autoscaleContainerThroughput.getAutoscaleMaxThroughput();
+
+// The throughput (RU/s) the resource is currently scaled to
+int currentThroughput = autoscaleContainerThroughput.Throughput;
+```
+
+# [Sync](#tab/api-sync)
+
+```java
+// Get a reference to the resource
+CosmosContainer container = client.getDatabase("DatabaseName").getContainer("ContainerName");
+
+// Read the throughput on a resource
+ThroughputProperties autoscaleContainerThroughput = container.readThroughput().getProperties();
+
+// The autoscale max throughput (RU/s) of the resource
+int autoscaleMaxThroughput = autoscaleContainerThroughput.getAutoscaleMaxThroughput();
+
+// The throughput (RU/s) the resource is currently scaled to
+int currentThroughput = autoscaleContainerThroughput.Throughput;
+```
+
+
+
+### Change the autoscale max throughput (RU/s)
+
+# [Async](#tab/api-async)
+
+```java
+// Change the autoscale max throughput (RU/s)
+container.replaceThroughput(ThroughputProperties.createAutoscaledThroughput(newAutoscaleMaxThroughput)).block();
+```
+
+# [Sync](#tab/api-sync)
+
+```java
+// Change the autoscale max throughput (RU/s)
+container.replaceThroughput(ThroughputProperties.createAutoscaledThroughput(newAutoscaleMaxThroughput));
+```
+++
+## Azure Resource Manager
+
+Azure Resource Manager templates can be used to provision autoscale throughput on a new database or container-level resource for all Azure Cosmos DB APIs. See [Azure Resource Manager templates for Azure Cosmos DB](./samples-resource-manager-templates.md) for samples. By design, Azure Resource Manager templates cannot be used to migrate between provisioned and autoscale throughput on an existing resource.
+
+## Azure CLI
+
+Azure CLI can be used to provision autoscale throughput on a new database or container-level resource for all Azure Cosmos DB APIs, or enable autoscale on an existing resource. For samples see [Azure CLI Samples for Azure Cosmos DB](cli-samples.md).
+
+## Azure PowerShell
+
+Azure PowerShell can be used to provision autoscale throughput on a new database or container-level resource for all Azure Cosmos DB APIs, or enable autoscale on an existing resource. For samples see [Azure PowerShell samples for Azure Cosmos DB](powershell-samples.md).
+
+## Next steps
+
+* Learn about the [benefits of provisioned throughput with autoscale](../provision-throughput-autoscale.md#benefits-of-autoscale).
+* Learn how to [choose between manual and autoscale throughput](../how-to-choose-offer.md).
+* Review the [autoscale FAQ](../autoscale-faq.yml).
cosmos-db How To Provision Container Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-provision-container-throughput.md
+
+ Title: Provision container throughput in Azure Cosmos DB for NoSQL
+description: Learn how to provision throughput at the container level in Azure Cosmos DB for NoSQL using Azure portal, CLI, PowerShell and various other SDKs.
++++ Last updated : 10/14/2020+++++
+# Provision standard (manual) throughput on an Azure Cosmos DB container - API for NoSQL
+
+This article explains how to provision standard (manual) throughput on a container in Azure Cosmos DB for NoSQL. You can provision throughput on a single container, or [provision throughput on a database](how-to-provision-database-throughput.md) and share it among the containers within the database. You can provision throughput on a container using Azure portal, Azure CLI, or Azure Cosmos DB SDKs.
+
+If you are using a different API, see the [API for MongoDB](../mongodb/how-to-provision-throughput.md) and [API for Cassandra](../cassandr) articles to provision the throughput.
+
+## Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. [Create a new Azure Cosmos DB account](quickstart-dotnet.md#create-account), or select an existing Azure Cosmos DB account.
+
+1. Open the **Data Explorer** pane, and select **New Container**. Next, provide the following details:
+
+ * Indicate whether you are creating a new database or using an existing one.
+ * Enter a **Container Id**.
+ * Enter a **Partition key** value (for example, `/ItemID`).
+ * Select **Autoscale** or **Manual** throughput and enter the required **Container throughput** that you want to provision (for example, 1000 RU/s).
+ * Select **OK**.
+
+ :::image type="content" source="../media/how-to-provision-container-throughput/provision-container-throughput-portal-sql-api.png" alt-text="Screenshot of Data Explorer, with New Collection highlighted":::
+
+## Azure CLI or PowerShell
+
+To create a container with dedicated throughput see,
+
+* [Create a container using Azure CLI](manage-with-cli.md#create-a-container)
+* [Create a container using PowerShell](manage-with-powershell.md#create-container)
+
+## .NET SDK
+
+> [!Note]
+> Use the Azure Cosmos DB SDKs for API for NoSQL to provision throughput for all Azure Cosmos DB APIs, except Cassandra and API for MongoDB.
+
+# [.NET SDK V2](#tab/dotnetv2)
+
+```csharp
+// Create a container with a partition key and provision throughput of 400 RU/s
+DocumentCollection myCollection = new DocumentCollection();
+myCollection.Id = "myContainerName";
+myCollection.PartitionKey.Paths.Add("/myPartitionKey");
+
+await client.CreateDocumentCollectionAsync(
+ UriFactory.CreateDatabaseUri("myDatabaseName"),
+ myCollection,
+ new RequestOptions { OfferThroughput = 400 });
+```
+
+# [.NET SDK V3](#tab/dotnetv3)
+
+[!code-csharp[](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos/tests/Microsoft.Azure.Cosmos.Tests/SampleCodeForDocs/ContainerDocsSampleCode.cs?name=ContainerCreateWithThroughput)]
+++
+## JavaScript SDK
+
+```javascript
+// Create a new Client
+const client = new CosmosClient({ endpoint, key });
+
+// Create a database
+const { database } = await client.databases.createIfNotExists({ id: "databaseId" });
+
+// Create a container with the specified throughput
+const { resource } = await database.containers.createIfNotExists({
+  id: "containerId",
+  throughput: 1000
+});
+
+// To update an existing container or database's throughput, you need to use the offers API
+// Get all the offers
+const { resources: offers } = await client.offers.readAll().fetchAll();
+
+// Find the offer associated with your container or the database
+const offer = offers.find((_offer) => _offer.offerResourceId === resource._rid);
+
+// Change the throughput value
+offer.content.offerThroughput = 2000;
+
+// Replace the offer.
+await client.offer(offer.id).replace(offer);
+```
+
+## Next steps
+
+See the following articles to learn about throughput provisioning in Azure Cosmos DB:
+
+* [How to provision standard (manual) throughput on a database](how-to-provision-database-throughput.md)
+* [How to provision autoscale throughput on a database](how-to-provision-autoscale-throughput.md)
+* [Request units and throughput in Azure Cosmos DB](../request-units.md)
cosmos-db How To Provision Database Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-provision-database-throughput.md
+
+ Title: Provision database throughput in Azure Cosmos DB for NoSQL
+description: Learn how to provision throughput at the database level in Azure Cosmos DB for NoSQL using Azure portal, CLI, PowerShell and various other SDKs.
++++ Last updated : 10/15/2020+++++
+# Provision standard (manual) throughput on a database in Azure Cosmos DB - API for NoSQL
+
+This article explains how to provision standard (manual) throughput on a database in Azure Cosmos DB for NoSQL. You can provision throughput for a single [container](how-to-provision-container-throughput.md), or for a database and share the throughput among the containers within it. To learn when to use container level and database level throughput, see the [Use cases for provisioning throughput on containers and databases](../set-throughput.md) article. You can provision database level throughput by using the Azure portal or Azure Cosmos DB SDKs.
+
+If you are using a different API, see the [API for MongoDB](../mongodb/how-to-provision-throughput.md) and [API for Cassandra](../cassandr) articles to provision the throughput.
+
+## Provision throughput using Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. [Create a new Azure Cosmos DB account](quickstart-dotnet.md#create-account), or select an existing Azure Cosmos DB account.
+
+1. Open the **Data Explorer** pane, and select **New Database**. Provide the following details:
+
+ * Enter a database ID.
+ * Select the **Share throughput across containers** option.
+ * Select **Autoscale** or **Manual** throughput and enter the required **Database throughput** (for example, 1000 RU/s).
+ * Enter a name for your container under **Container ID**
+ * Enter a **Partition key**
+ * Select **OK**.
+
+ :::image type="content" source="../media/how-to-provision-database-throughput/provision-database-throughput-portal-sql-api.png" alt-text="Screenshot of New Database dialog box":::
+
+## Provision throughput using Azure CLI or PowerShell
+
+To create a database with shared throughput see,
+
+* [Create a database using Azure CLI](manage-with-cli.md#create-a-database-with-shared-throughput)
+* [Create a database using PowerShell](manage-with-powershell.md#create-db-ru)
+
+## Provision throughput using .NET SDK
+
+> [!Note]
+> You can use Azure Cosmos DB SDKs for API for NoSQL to provision throughput for all APIs. You can optionally use the following example for API for Cassandra as well.
+
+# [.NET SDK V2](#tab/dotnetv2)
+
+```csharp
+//set the throughput for the database
+RequestOptions options = new RequestOptions
+{
+ OfferThroughput = 500
+};
+
+//create the database
+await client.CreateDatabaseIfNotExistsAsync(
+ new Database {Id = databaseName},
+ options);
+```
+
+# [.NET SDK V3](#tab/dotnetv3)
+
+[!code-csharp[](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos/tests/Microsoft.Azure.Cosmos.Tests/SampleCodeForDocs/DatabaseDocsSampleCode.cs?name=DatabaseCreateWithThroughput)]
+++
+## Next steps
+
+See the following articles to learn about provisioned throughput in Azure Cosmos DB:
+
+* [Globally scale provisioned throughput](../request-units.md)
+* [Provision throughput on containers and databases](../set-throughput.md)
+* [How to provision standard (manual) throughput for a container](how-to-provision-container-throughput.md)
+* [How to provision autoscale throughput for a container](how-to-provision-autoscale-throughput.md)
+* [Request units and throughput in Azure Cosmos DB](../request-units.md)
cosmos-db How To Query Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-query-container.md
+
+ Title: Query containers in Azure Cosmos DB
+description: Learn how to query containers in Azure Cosmos DB using in-partition and cross-partition queries
++++ Last updated : 3/18/2019++++
+# Query an Azure Cosmos DB container
+
+This article explains how to query a container (collection, graph, or table) in Azure Cosmos DB. In particular, it covers how in-partition and cross-partition queries work in Azure Cosmos DB.
+
+## In-partition query
+
+When you query data from containers, if the query has a partition key filter specified, Azure Cosmos DB automatically optimizes the query. It routes the query to the [physical partitions](../partitioning-overview.md#physical-partitions) corresponding to the partition key values specified in the filter.
+
+For example, consider the following query with an equality filter on `DeviceId`. If we run this query on a container partitioned on `DeviceId`, it will filter to a single physical partition.
+
+```sql
+SELECT * FROM c WHERE c.DeviceId = 'XMS-0001'
+```
+
+As with the earlier example, this query will also filter to a single partition. Adding the additional filter on `Location` does not change this:
+
+```sql
+SELECT * FROM c WHERE c.DeviceId = 'XMS-0001' AND c.Location = 'Seattle'
+```
+
+Here's a query that has a range filter on the partition key and won't be scoped to a single physical partition. In order to be an in-partition query, the query must have an equality filter that includes the partition key:
+
+```sql
+SELECT * FROM c WHERE c.DeviceId > 'XMS-0001'
+```
+
+## Cross-partition query
+
+The following query doesn't have a filter on the partition key (`DeviceId`). Therefore, it must fan out to all physical partitions, and it is run against each partition's index:
+
+```sql
+SELECT * FROM c WHERE c.Location = 'Seattle'
+```
+
+Each physical partition has its own index. Therefore, when you run a cross-partition query on a container, you are effectively running one query *per* physical partition. Azure Cosmos DB will automatically aggregate results across different physical partitions.
+
+The indexes in different physical partitions are independent from one another. There is no global index in Azure Cosmos DB.
+
+## Parallel cross-partition query
+
+The Azure Cosmos DB SDKs 1.9.0 and later support parallel query execution options. Parallel cross-partition queries allow you to perform low latency, cross-partition queries.
+
+You can manage parallel query execution by tuning the following parameters:
+
+- **MaxConcurrency**: Sets the maximum number of simultaneous network connections to the container's partitions. If you set this property to `-1`, the SDK manages the degree of parallelism. If `MaxConcurrency` is set to `0`, there is a single network connection to the container's partitions.
+
+- **MaxBufferedItemCount**: Trades query latency versus client-side memory utilization. If this option is omitted or set to `-1`, the SDK manages the number of items buffered during parallel query execution. Both options are shown in the sketch that follows this list.
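+
+The following is a minimal sketch of how you might set both options with the .NET SDK v3; the `container` variable and the query text are illustrative assumptions, not part of the original article.
+
+```csharp
+// using Microsoft.Azure.Cosmos;
+QueryRequestOptions options = new QueryRequestOptions
+{
+    // -1 lets the SDK manage the degree of parallelism
+    MaxConcurrency = -1,
+    // -1 lets the SDK manage client-side buffering
+    MaxBufferedItemCount = -1
+};
+
+FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
+    new QueryDefinition("SELECT * FROM c WHERE c.Location = 'Seattle'"),
+    requestOptions: options);
+
+while (iterator.HasMoreResults)
+{
+    foreach (dynamic item in await iterator.ReadNextAsync())
+    {
+        // Process each result
+    }
+}
+```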
+
+Because of Azure Cosmos DB's ability to parallelize cross-partition queries, query latency will generally scale well as the system adds [physical partitions](../partitioning-overview.md#physical-partitions). However, the RU charge will increase significantly as the total number of physical partitions increases.
+
+When you run a cross-partition query, you are essentially doing a separate query per individual physical partition. While cross-partition queries will use the index, if available, they are still not nearly as efficient as in-partition queries.
+
+## Useful example
+
+Here's an analogy to better understand cross-partition queries:
+
+Let's imagine you are a delivery driver who has to deliver packages to different apartment complexes. Each apartment complex has a list on the premises with all of the residents' unit numbers. We can compare each apartment complex to a physical partition and each list to the physical partition's index.
+
+We can compare in-partition and cross-partition queries using this example:
+
+### In-partition query
+
+If the delivery driver knows the correct apartment complex (physical partition), then they can immediately drive to the correct building. The driver can check the apartment complex's list of the residents' unit numbers (the index) and quickly deliver the appropriate packages. In this case, the driver does not waste any time or effort driving to an apartment complex to check whether any package recipients live there.
+
+### Cross-partition query (fan-out)
+
+If the delivery driver does not know the correct apartment complex (physical partition), they'll need to drive to every single apartment building and check the list with all of the residents' unit numbers (the index). Once they arrive at each apartment complex, they'll still be able to use the list of each resident's address. However, they will need to check every apartment complex's list, whether any package recipients live there or not. This is how cross-partition queries work. While they can use the index (they don't need to knock on every single door), they must separately check the index for every physical partition.
+
+### Cross-partition query (scoped to only a few physical partitions)
+
+If the delivery driver knows that all package recipients live within a certain few apartment complexes, they won't need to drive to every single one. While driving to a few apartment complexes will still require more work than visiting just a single building, the delivery driver still saves significant time and effort. If a query has the partition key in its filter with the `IN` keyword, it will only check the relevant physical partitions' indexes for data.
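+
+As a rough illustration (a sketch, not part of the original article; the `container` object and device IDs are assumptions), a query that uses `IN` on the partition key with the .NET SDK v3 could look like this:
+
+```csharp
+// Container is partitioned on /DeviceId; the IN filter scopes the query to a few physical partitions.
+QueryDefinition query = new QueryDefinition(
+    "SELECT * FROM c WHERE c.DeviceId IN ('XMS-0001', 'XMS-0002', 'XMS-0003')");
+
+FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(query);
+while (iterator.HasMoreResults)
+{
+    foreach (dynamic item in await iterator.ReadNextAsync())
+    {
+        // Process each matching item
+    }
+}
+```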
+
+## Avoiding cross-partition queries
+
+For most containers, it's inevitable that you will have some cross-partition queries. Having some cross-partition queries is ok! Nearly all query operations are supported across partitions (both logical partition keys and physical partitions). Azure Cosmos DB also has many optimizations in the query engine and client SDKs to parallelize query execution across physical partitions.
+
+For most read-heavy scenarios, we recommend simply selecting the most common property in your query filters. You should also make sure your partition key adheres to other [partition key selection best practices](../partitioning-overview.md#choose-partitionkey).
+
+Avoiding cross-partition queries typically only matters with large containers. You are charged a minimum of about 2.5 RUs each time you check a physical partition's index for results, even if no items in the physical partition match the query's filter. As such, if you have only one (or just a few) physical partitions, cross-partition queries will not consume significantly more RUs than in-partition queries.
+
+The number of physical partitions is tied to the amount of provisioned RUs. Each physical partition allows for up to 10,000 provisioned RUs and can store up to 50 GB of data. Azure Cosmos DB will automatically manage physical partitions for you. The number of physical partitions in your container is dependent on your provisioned throughput and consumed storage.
+
+You should try to avoid cross-partition queries if your workload meets the criteria below:
+- You plan to have over 30,000 RUs provisioned
+- You plan to store over 100 GB of data
+
+## Next steps
+
+See the following articles to learn about partitioning in Azure Cosmos DB:
+
+- [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
+- [Synthetic partition keys in Azure Cosmos DB](synthetic-partition-keys.md)
cosmos-db How To Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-time-to-live.md
+
+ Title: Configure and manage Time to Live in Azure Cosmos DB
+description: Learn how to configure and manage time to live on a container and an item in Azure Cosmos DB
++++ Last updated : 05/12/2022+++++
+# Configure time to live in Azure Cosmos DB
+
+In Azure Cosmos DB, you can choose to configure Time to Live (TTL) at the container level, or you can override it at an item level after setting it for the container. You can configure TTL for a container by using the Azure portal or the language-specific SDKs. Item-level TTL overrides can be configured by using the SDKs.
+
+> This article's content is related to the Azure Cosmos DB transactional store TTL. If you are looking for analytical store TTL, which enables no-ETL HTAP scenarios through [Azure Synapse Link](../synapse-link.md), see [analytical TTL](../analytical-store-introduction.md#analytical-ttl).
+
+## Enable time to live on a container using the Azure portal
+
+Use the following steps to enable time to live on a container with no expiration. Enable TTL at the container level to allow the same value to be overridden at an individual item level. You can also set the TTL by entering a non-zero value for seconds.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+2. Create a new Azure Cosmos DB account or select an existing account.
+
+3. Open the **Data Explorer** pane.
+
+4. Select an existing container, expand the **Settings** tab and modify the following values:
+
+   * Under **Setting**, find **Time to Live**.
+ * Based on your requirement, you can:
+ * Turn **off** this setting
+ * Set it to **On (no default)** or
+ * Turn **On** with a TTL value specified in seconds.
+
+ * Select **Save** to save the changes.
+
+ :::image type="content" source="./media/how-to-time-to-live/how-to-time-to-live-portal.png" alt-text="Configure Time to live in Azure portal":::
+
+* When DefaultTimeToLive is null, your Time to Live setting is Off.
+* When DefaultTimeToLive is -1, your Time to Live setting is On (No default).
+* When DefaultTimeToLive has any other integer value (except 0), your Time to Live setting is On. The server automatically deletes items based on the configured value.
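+
+As a rough illustration with the .NET SDK v3 (a sketch; the `container` object is an assumption), these three states correspond to the container's `DefaultTimeToLive` property:
+
+```csharp
+ContainerProperties properties = await container.ReadContainerAsync();
+
+if (properties.DefaultTimeToLive == null)
+{
+    // Off: items never expire, and any "ttl" value on items is ignored
+}
+else if (properties.DefaultTimeToLive == -1)
+{
+    // On (no default): items never expire unless they set their own "ttl"
+}
+else
+{
+    // On: items expire DefaultTimeToLive seconds after their last update,
+    // unless they override it with their own "ttl"
+}
+```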
+
+## Enable time to live on a container using Azure CLI or Azure PowerShell
+
+To create or enable TTL on a container, see:
+
+* [Create a container with TTL using Azure CLI](manage-with-cli.md#create-a-container-with-ttl)
+* [Create a container with TTL using PowerShell](manage-with-powershell.md#create-container-unique-key-ttl)
+
+## Enable time to live on a container using an SDK
+
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
+
+```csharp
+Database database = client.GetDatabase("database");
+
+ContainerProperties properties = new ()
+{
+ Id = "container",
+ PartitionKeyPath = "/customerId",
+ // Never expire by default
+ DefaultTimeToLive = -1
+};
+
+// Create a new container with TTL enabled and without any expiration value
+Container container = await database
+ .CreateContainerAsync(properties);
+```
+
+### [Java SDK v4](#tab/javav4)
+
+```java
+CosmosDatabase database = client.getDatabase("database");
+
+CosmosContainerProperties properties = new CosmosContainerProperties(
+ "container",
+ "/customerId"
+);
+// Never expire by default
+properties.setDefaultTimeToLiveInSeconds(-1);
+
+// Create a new container with TTL enabled and without any expiration value
+CosmosContainerResponse response = database
+ .createContainerIfNotExists(properties);
+```
+
+### [Node SDK](#tab/node-sdk)
+
+```javascript
+const database = await client.database("database");
+
+const properties = {
+ id: "container",
+ partitionKey: "/customerId",
+ // Never expire by default
+ defaultTtl: -1
+};
+
+const { container } = await database.containers
+ .createIfNotExists(properties);
+
+```
+
+### [Python SDK](#tab/python-sdk)
+
+```python
+database = client.get_database_client('database')
+
+database.create_container(
+ id='container',
+ partition_key=PartitionKey(path='/customerId'),
+ # Never expire by default
+ default_ttl=-1
+)
+```
+++
+## Set time to live on a container using an SDK
+
+To set the time to live on a container, you need to provide a non-zero positive number that indicates the time period in seconds. Based on the configured TTL value, all items in the container are deleted after the configured number of seconds since each item's last modified timestamp (`_ts`).
+
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
+
+```csharp
+Database database = client.GetDatabase("database");
+
+ContainerProperties properties = new ()
+{
+ Id = "container",
+ PartitionKeyPath = "/customerId",
+ // Expire all documents after 90 days
+ DefaultTimeToLive = 90 * 60 * 60 * 24
+};
+
+// Create a new container with a default TTL of 90 days
+Container container = await database
+ .CreateContainerAsync(properties);
+```
+
+### [Java SDK v4](#tab/javav4)
+
+```java
+CosmosDatabase database = client.getDatabase("database");
+
+CosmosContainerProperties properties = new CosmosContainerProperties(
+ "container",
+ "/customerId"
+);
+// Expire all documents after 90 days
+properties.setDefaultTimeToLiveInSeconds(90 * 60 * 60 * 24);
+
+CosmosContainerResponse response = database
+ .createContainerIfNotExists(properties);
+```
+
+### [Node SDK](#tab/node-sdk)
+
+```javascript
+const database = await client.database("database");
+
+const properties = {
+ id: "container",
+ partitionKey: "/customerId",
+ // Expire all documents after 90 days
+ defaultTtl: 90 * 60 * 60 * 24
+};
+
+const { container } = await database.containers
+ .createIfNotExists(properties);
+```
+
+### [Python SDK](#tab/python-sdk)
+
+```python
+database = client.get_database_client('database')
+
+database.create_container(
+ id='container',
+ partition_key=PartitionKey(path='/customerId'),
+ # Expire all documents after 90 days
+ default_ttl=90 * 60 * 60 * 24
+)
+```
+++
+## Set time to live on an item using the Portal
+
+In addition to setting a default time to live on a container, you can set a time to live for an item. Setting time to live at the item level will override the default TTL of the item in that container.
+
+* To set the TTL on an item, you need to provide a non-zero positive number, which indicates the period, in seconds, after which the item expires, measured from its last modified timestamp (`_ts`). You can also provide `-1` when the item shouldn't expire.
+
+* If the item doesn't have a TTL field, then by default, the TTL set on the container applies to the item.
+
+* If TTL is disabled at the container level, the TTL field on the item will be ignored until TTL is re-enabled on the container.
+
+Use the following steps to enable time to live on an item:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+2. Create a new Azure Cosmos DB account or select an existing account.
+
+3. Open the **Data Explorer** pane.
+
+4. Select an existing container, expand it and modify the following values:
+
+ * Open the **Scale & Settings** window.
+ * Under **Setting** find, **Time to Live**.
+ * Select **On (no default)** or select **On** and set a TTL value.
+ * Select **Save** to save the changes.
+
+5. Next, navigate to the item for which you want to set time to live, add the `ttl` property, and select **Update**.
+
+ ```json
+ {
+ "id": "1",
+ "_rid": "Jic9ANWdO-EFAAAAAAAAAA==",
+ "_self": "dbs/Jic9AA==/colls/Jic9ANWdO-E=/docs/Jic9ANWdO-EFAAAAAAAAAA==/",
+ "_etag": "\"0d00b23f-0000-0000-0000-5c7712e80000\"",
+ "_attachments": "attachments/",
+ "ttl": 10,
+ "_ts": 1551307496
+ }
+ ```
+
+## Set time to live on an item using an SDK
+
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
+
+```csharp
+public record SalesOrder(string id, string customerId, int? ttl);
+```
+
+```csharp
+Container container = database.GetContainer("container");
+
+SalesOrder item = new (
+ "SO05",
+    "CO18009186470",
+ // Expire sales order in 30 days using "ttl" property
+ ttl: 60 * 60 * 24 * 30
+);
+
+await container.CreateItemAsync<SalesOrder>(item);
+```
+
+### [Java SDK v4](#tab/javav4)
+
+```java
+public class SalesOrder {
+
+ public String id;
+
+ public String customerId;
+
+ // Include a property that serializes to "ttl" in JSON
+ public Integer ttl;
+
+}
+```
+
+```java
+CosmosContainer container = database.getContainer("container");
+
+SalesOrder item = new SalesOrder();
+item.id = "SO05";
+item.customerId = "CO18009186470";
+// Expire sales order in 30 days using "ttl" property
+item.ttl = 60 * 60 * 24 * 30;
+
+container.createItem(item);
+```
+
+### [Node SDK](#tab/node-sdk)
+
+```javascript
+const container = await database.container("container");
+
+const item = {
+ id: 'SO05',
+ customerId: 'CO18009186470',
+ // Expire sales order in 30 days using "ttl" property
+ ttl: 60 * 60 * 24 * 30
+};
+
+await container.items.create(item);
+```
+
+### [Python SDK](#tab/python-sdk)
+
+```python
+container = database.get_container_client('container')
+
+item = {
+ 'id': 'SO05',
+ 'customerId': 'CO18009186470',
+ # Expire sales order in 30 days using "ttl" property
+ 'ttl': 60 * 60 * 24 * 30
+}
+
+container.create_item(body=item)
+```
+++
+## Reset time to live using an SDK
+
+You can reset the time to live on an item by performing a write or update operation on the item. The write or update operation sets the `_ts` to the current time, and the TTL countdown for the item begins again. If you wish to change the TTL of an item, you can update the field just as you update any other field.
+
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
+
+```csharp
+SalesOrder item = await container.ReadItemAsync<SalesOrder>(
+ "SO05",
+ new PartitionKey("CO18009186470")
+);
+
+// Update ttl to 2 hours
+SalesOrder modifiedItem = item with {
+ ttl = 60 * 60 * 2
+};
+
+await container.ReplaceItemAsync<SalesOrder>(
+ modifiedItem,
+ "SO05",
+ new PartitionKey("CO18009186470")
+);
+```
+
+### [Java SDK v4](#tab/javav4)
+
+```java
+CosmosItemResponse<SalesOrder> response = container.readItem(
+ "SO05",
+ new PartitionKey("CO18009186470"),
+ SalesOrder.class
+);
+
+SalesOrder item = response.getItem();
+
+// Update ttl to 2 hours
+item.ttl = 60 * 60 * 2;
+
+CosmosItemRequestOptions options = new CosmosItemRequestOptions();
+container.replaceItem(
+ item,
+ "SO05",
+ new PartitionKey("CO18009186470"),
+ options
+);
+```
+
+### [Node SDK](#tab/node-sdk)
+
+```javascript
+const { resource: item } = await container.item(
+ 'SO05',
+ 'CO18009186470'
+).read();
+
+// Update ttl to 2 hours
+item.ttl = 60 * 60 * 2;
+
+await container.item(
+ 'SO05',
+ 'CO18009186470'
+).replace(item);
+```
+
+### [Python SDK](#tab/python-sdk)
+
+```python
+item = container.read_item(
+ item='SO05',
+ partition_key='CO18009186470'
+)
+
+# Update ttl to 2 hours
+item['ttl'] = 60 * 60 * 2
+
+container.replace_item(
+ item='SO05',
+ body=item
+)
+```
+++
+## Disable time to live using an SDK
+
+To disable time to live on a container and stop the background process from checking for expired items, delete the `DefaultTimeToLive` property on the container. Deleting this property is different from setting it to -1. When you set it to -1, new items added to the container live forever, but you can override this value on specific items in the container. When you remove the TTL property from the container, the items never expire, even if they have explicitly overridden the previous default TTL value.
+
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
+
+```csharp
+ContainerProperties properties = await container.ReadContainerAsync();
+
+// Disable ttl at container-level
+properties.DefaultTimeToLive = null;
+
+await container.ReplaceContainerAsync(properties);
+```
+
+### [Java SDK v4](#tab/javav4)
+
+```java
+CosmosContainerResponse response = container.read();
+CosmosContainerProperties properties = response.getProperties();
+
+// Disable ttl at container-level
+properties.setDefaultTimeToLiveInSeconds(null);
+
+container.replace(properties);
+```
+
+### [Node SDK](#tab/node-sdk)
+
+```javascript
+const { resource: definition } = await container.read();
+
+// Disable ttl at container-level
+definition.defaultTtl = null;
+
+await container.replace(definition);
+```
+
+### [Python SDK](#tab/python-sdk)
+
+```python
+database.replace_container(
+ container,
+ partition_key=PartitionKey(path='/id'),
+ # Disable ttl at container-level
+ default_ttl=None
+)
+```
+++
+## Next steps
+
+Learn more about time to live in the following article:
+
+* [Time to live](time-to-live.md)
cosmos-db How To Use Change Feed Estimator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-use-change-feed-estimator.md
+
+ Title: Use the change feed estimator - Azure Cosmos DB
+description: Learn how to use the change feed estimator to analyze the progress of your change feed processor
++++ Last updated : 04/01/2021+
+ms.devlang: csharp
+++
+# Use the change feed estimator
+
+This article describes how you can monitor the progress of your [change feed processor](./change-feed-processor.md) instances as they read the change feed.
+
+## Why is monitoring progress important?
+
+The change feed processor acts as a pointer that moves forward across your [change feed](../change-feed.md) and delivers the changes to a delegate implementation.
+
+Your change feed processor deployment can process changes at a particular rate based on its available resources like CPU, memory, network, and so on.
+
+If this rate is slower than the rate at which your changes happen in your Azure Cosmos DB container, your processor will start to lag behind.
+
+Identifying this scenario helps you understand whether you need to scale your change feed processor deployment.
+
+## Implement the change feed estimator
+
+### As a push model for automatic notifications
+
+Like the [change feed processor](./change-feed-processor.md), the change feed estimator can work as a push model. The estimator will measure the difference between the last processed item (defined by the state of the leases container) and the latest change in the container, and push this value to a delegate. The interval at which the measurement is taken can also be customized with a default value of 5 seconds.
+
+As an example, if your change feed processor is defined like this:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartProcessorEstimator)]
+
+The correct way to initialize an estimator to measure that processor would be using `GetChangeFeedEstimatorBuilder` like so:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartEstimator)]
+
+Where both the processor and the estimator share the same `leaseContainer` and the same name.
+
+The other two parameters are the delegate, which will receive a number that represents **how many changes are pending to be read** by the processor, and the time interval at which you want this measurement to be taken.
+
+An example of a delegate that receives the estimation is:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=EstimationDelegate)]
+
+You can send this estimation to your monitoring solution and use it to understand how your progress is behaving over time.
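+
+Put together, a minimal sketch of the push model might look like the following (a sketch only; `monitoredContainer`, `leaseContainer`, and the processor name are assumptions that must match your own processor's configuration):
+
+```csharp
+ChangeFeedProcessor estimator = monitoredContainer
+    .GetChangeFeedEstimatorBuilder(
+        "changeFeedSample",
+        HandleEstimationAsync,
+        TimeSpan.FromSeconds(5))
+    .WithLeaseContainer(leaseContainer)
+    .Build();
+
+await estimator.StartAsync();
+
+// Invoked every five seconds with the number of changes pending to be read by the processor.
+static Task HandleEstimationAsync(long estimatedPendingChanges, CancellationToken cancellationToken)
+{
+    Console.WriteLine($"Estimated pending changes: {estimatedPendingChanges}");
+    return Task.CompletedTask;
+}
+```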
+
+### As an on-demand detailed estimation
+
+In contrast with the push model, there's an alternative that lets you obtain the estimation on demand. This model also provides more detailed information:
+
+* The estimated lag per lease.
+* The instance owning and processing each lease, so you can identify if there's an issue on an instance.
+
+If your change feed processor is defined like this:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartProcessorEstimatorDetailed)]
+
+You can create the estimator with the same lease configuration:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartEstimatorDetailed)]
+
+And whenever you want it, with the frequency you require, you can obtain the detailed estimation:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=GetIteratorEstimatorDetailed)]
+
+Each `ChangeFeedProcessorState` will contain the lease and lag information, and also who is the current instance owning it.
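+
+For orientation, a minimal sketch of the on-demand model (with the same assumed `monitoredContainer`, `leaseContainer`, and processor name) might look like this:
+
+```csharp
+ChangeFeedEstimator estimator = monitoredContainer
+    .GetChangeFeedEstimator("changeFeedSample", leaseContainer);
+
+// One entry per lease, including the estimated lag and the owning instance.
+FeedIterator<ChangeFeedProcessorState> iterator = estimator.GetCurrentStateIterator();
+while (iterator.HasMoreResults)
+{
+    FeedResponse<ChangeFeedProcessorState> states = await iterator.ReadNextAsync();
+    foreach (ChangeFeedProcessorState state in states)
+    {
+        Console.WriteLine($"Lease {state.LeaseToken} owned by {state.InstanceName} has an estimated lag of {state.EstimatedLag}");
+    }
+}
+```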
+
+> [!NOTE]
+> The change feed estimator does not need to be deployed as part of your change feed processor, nor be part of the same project. It can be independent and run in a completely different instance, which is recommended. It just needs to use the same name and lease configuration.
+
+## Additional resources
+
+* [Azure Cosmos DB SDK](sdk-dotnet-v2.md)
+* [Usage samples on GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed)
+* [Additional samples on GitHub](https://github.com/Azure-Samples/cosmos-dotnet-change-feed-processor)
+
+## Next steps
+
+You can now proceed to learn more about change feed processor in the following articles:
+
+* [Overview of change feed processor](change-feed-processor.md)
+* [Change feed processor start time](./change-feed-processor.md#starting-time)
cosmos-db How To Use Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs.md
+
+ Title: Register and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB SDKs
+description: Learn how to register and call stored procedures, triggers, and user-defined functions using the Azure Cosmos DB SDKs
++++ Last updated : 11/03/2021++
+ms.devlang: csharp, java, javascript, python
+++
+# How to register and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB
+
+The API for NoSQL in Azure Cosmos DB supports registering and invoking stored procedures, triggers, and user-defined functions (UDFs) written in JavaScript. Once you've defined one or more stored procedures, triggers, and user-defined functions, you can load and view them in the [Azure portal](https://portal.azure.com/) by using Data Explorer.
+
+You can use the API for NoSQL SDK across multiple platforms including [.NET v2 (legacy)](sdk-dotnet-v2.md), [.NET v3](sdk-dotnet-v3.md), [Java](sdk-java-v2.md), [JavaScript](sdk-nodejs.md), or [Python](sdk-python.md) SDKs to perform these tasks. If you haven't worked with one of these SDKs before, see the *"Quickstart"* article for the appropriate SDK:
+
+| SDK | Getting started |
+| :--- | :--- |
+| .NET v3 | [Quickstart: Build a .NET console app to manage Azure Cosmos DB for NoSQL resources](quickstart-dotnet.md) |
+| Java | [Quickstart: Build a Java app to manage Azure Cosmos DB for NoSQL data](quickstart-java.md) |
+| JavaScript | [Quickstart: Use Node.js to connect and query data from Azure Cosmos DB for NoSQL account](quickstart-nodejs.md) |
+| Python | [Quickstart: Build a Python application using an Azure Cosmos DB for NoSQL account](quickstart-python.md) |
+
+## How to run stored procedures
+
+Stored procedures are written using JavaScript. They can create, update, read, query, and delete items within an Azure Cosmos DB container. For more information on how to write stored procedures in Azure Cosmos DB, see [How to write stored procedures in Azure Cosmos DB](how-to-write-stored-procedures-triggers-udfs.md#stored-procedures) article.
+
+The following examples show how to register and call a stored procedure by using the Azure Cosmos DB SDKs. Refer to [Create an item](how-to-write-stored-procedures-triggers-udfs.md#create-an-item) for the source of this stored procedure, which is saved as `spCreateToDoItem.js`.
+
+> [!NOTE]
+> For partitioned containers, when executing a stored procedure, a partition key value must be provided in the request options. Stored procedures are always scoped to a partition key. Items that have a different partition key value will not be visible to the stored procedure. This also applies to triggers.
+
+### [.NET SDK v2](#tab/dotnet-sdk-v2)
+
+The following example shows how to register a stored procedure by using the .NET SDK v2:
+
+```csharp
+string storedProcedureId = "spCreateToDoItems";
+StoredProcedure newStoredProcedure = new StoredProcedure
+ {
+ Id = storedProcedureId,
+ Body = File.ReadAllText($@"..\js\{storedProcedureId}.js")
+ };
+Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
+var response = await client.CreateStoredProcedureAsync(containerUri, newStoredProcedure);
+StoredProcedure createdStoredProcedure = response.Resource;
+```
+
+The following code shows how to call a stored procedure by using the .NET SDK v2:
+
+```csharp
+dynamic[] newItems = new dynamic[]
+{
+ new {
+ category = "Personal",
+ name = "Groceries",
+ description = "Pick up strawberries",
+ isComplete = false
+ },
+ new {
+ category = "Personal",
+ name = "Doctor",
+ description = "Make appointment for check up",
+ isComplete = false
+ }
+};
+
+Uri uri = UriFactory.CreateStoredProcedureUri("myDatabase", "myContainer", "spCreateToDoItems");
+RequestOptions options = new RequestOptions { PartitionKey = new PartitionKey("Personal") };
+var result = await client.ExecuteStoredProcedureAsync<string>(uri, options, new[] { newItems });
+```
+
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
+
+The following example shows how to register a stored procedure by using the .NET SDK v3:
+
+```csharp
+string storedProcedureId = "spCreateToDoItems";
+StoredProcedureResponse storedProcedureResponse = await client.GetContainer("myDatabase", "myContainer").Scripts.CreateStoredProcedureAsync(new StoredProcedureProperties
+{
+ Id = storedProcedureId,
+ Body = File.ReadAllText($@"..\js\{storedProcedureId}.js")
+});
+```
+
+The following code shows how to call a stored procedure by using the .NET SDK v3:
+
+```csharp
+dynamic[] newItems = new dynamic[]
+{
+ new {
+ category = "Personal",
+ name = "Groceries",
+ description = "Pick up strawberries",
+ isComplete = false
+ },
+ new {
+ category = "Personal",
+ name = "Doctor",
+ description = "Make appointment for check up",
+ isComplete = false
+ }
+};
+
+var result = await client.GetContainer("database", "container").Scripts.ExecuteStoredProcedureAsync<string>("spCreateToDoItems", new PartitionKey("Personal"), new[] { newItems });
+```
+
+### [Java SDK](#tab/java-sdk)
+
+The following example shows how to register a stored procedure by using the Java SDK:
+
+```java
+CosmosStoredProcedureProperties definition = new CosmosStoredProcedureProperties(
+ "spCreateToDoItems",
+ Files.readString(Paths.get("createToDoItems.js"))
+);
+
+CosmosStoredProcedureResponse response = container
+ .getScripts()
+ .createStoredProcedure(definition);
+```
+
+The following code shows how to call a stored procedure by using the Java SDK:
+
+```java
+CosmosStoredProcedure sproc = container
+ .getScripts()
+ .getStoredProcedure("spCreateToDoItems");
+
+List<Object> items = new ArrayList<Object>();
+
+ToDoItem firstItem = new ToDoItem();
+firstItem.category = "Personal";
+firstItem.name = "Groceries";
+firstItem.description = "Pick up strawberries";
+firstItem.isComplete = false;
+items.add(firstItem);
+
+ToDoItem secondItem = new ToDoItem();
+secondItem.category = "Personal";
+secondItem.name = "Doctor";
+secondItem.description = "Make appointment for check up";
+secondItem.isComplete = true;
+items.add(secondItem);
+
+CosmosStoredProcedureRequestOptions options = new CosmosStoredProcedureRequestOptions();
+options.setPartitionKey(
+ new PartitionKey("Personal")
+);
+
+CosmosStoredProcedureResponse response = sproc.execute(
+ items,
+ options
+);
+```
+
+### [JavaScript SDK](#tab/javascript-sdk)
+
+The following example shows how to register a stored procedure by using the JavaScript SDK
+
+```javascript
+const container = client.database("myDatabase").container("myContainer");
+const sprocId = "spCreateToDoItems";
+await container.scripts.storedProcedures.create({
+ id: sprocId,
+ body: require(`../js/${sprocId}`)
+});
+```
+
+The following code shows how to call a stored procedure by using the JavaScript SDK:
+
+```javascript
+const newItem = [{
+ category: "Personal",
+ name: "Groceries",
+ description: "Pick up strawberries",
+ isComplete: false
+}];
+const container = client.database("myDatabase").container("myContainer");
+const sprocId = "spCreateToDoItems";
+const {resource: result} = await container.scripts.storedProcedure(sprocId).execute(newItem, {partitionKey: newItem[0].category});
+```
+
+### [Python SDK](#tab/python-sdk)
+
+The following example shows how to register a stored procedure by using the Python SDK:
+
+```python
+import azure.cosmos.cosmos_client as cosmos_client
+
+url = "your_cosmos_db_account_URI"
+key = "your_cosmos_db_account_key"
+database_name = 'your_cosmos_db_database_name'
+container_name = 'your_cosmos_db_container_name'
+
+with open('../js/spCreateToDoItems.js') as file:
+ file_contents = file.read()
+
+sproc = {
+ 'id': 'spCreateToDoItem',
+ 'serverScript': file_contents,
+}
+client = cosmos_client.CosmosClient(url, key)
+database = client.get_database_client(database_name)
+container = database.get_container_client(container_name)
+created_sproc = container.scripts.create_stored_procedure(body=sproc)
+```
+
+The following code shows how to call a stored procedure by using the Python SDK:
+
+```python
+import uuid
+
+new_id= str(uuid.uuid4())
+
+# Creating a document for a container with "id" as a partition key.
+
+new_item = {
+ "id": new_id,
+ "category":"Personal",
+ "name":"Groceries",
+ "description":"Pick up strawberries",
+ "isComplete":False
+ }
+result = container.scripts.execute_stored_procedure(sproc=created_sproc,params=[[new_item]], partition_key=new_id)
+```
+++
+## How to run pre-triggers
+
+The following examples show how to register and call a pre-trigger by using the Azure Cosmos DB SDKs. Refer to the [Pre-trigger example](how-to-write-stored-procedures-triggers-udfs.md#pre-triggers) for the source of this pre-trigger, which is saved as `trgPreValidateToDoItemTimestamp.js`.
+
+Pre-triggers are passed in the RequestOptions object, when executing an operation, by specifying `PreTriggerInclude` and then passing the name of the trigger in a List object.
+
+> [!NOTE]
+> Even though the name of the trigger is passed as a List, you can still execute only one trigger per operation.
+
+### [.NET SDK v2](#tab/dotnet-sdk-v2)
+
+The following code shows how to register a pre-trigger using the .NET SDK v2:
+
+```csharp
+string triggerId = "trgPreValidateToDoItemTimestamp";
+Trigger trigger = new Trigger
+{
+ Id = triggerId,
+ Body = File.ReadAllText($@"..\js\{triggerId}.js"),
+ TriggerOperation = TriggerOperation.Create,
+ TriggerType = TriggerType.Pre
+};
+Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
+await client.CreateTriggerAsync(containerUri, trigger);
+```
+
+The following code shows how to call a pre-trigger using the .NET SDK v2:
+
+```csharp
+dynamic newItem = new
+{
+ category = "Personal",
+ name = "Groceries",
+ description = "Pick up strawberries",
+ isComplete = false
+};
+
+Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
+RequestOptions requestOptions = new RequestOptions { PreTriggerInclude = new List<string> { "trgPreValidateToDoItemTimestamp" } };
+await client.CreateDocumentAsync(containerUri, newItem, requestOptions);
+```
+
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
+
+The following code shows how to register a pre-trigger using the .NET SDK v3:
+
+```csharp
+await client.GetContainer("database", "container").Scripts.CreateTriggerAsync(new TriggerProperties
+{
+ Id = "trgPreValidateToDoItemTimestamp",
+    Body = File.ReadAllText(@"..\js\trgPreValidateToDoItemTimestamp.js"),
+ TriggerOperation = TriggerOperation.Create,
+ TriggerType = TriggerType.Pre
+});
+```
+
+The following code shows how to call a pre-trigger using the .NET SDK v3:
+
+```csharp
+dynamic newItem = new
+{
+ category = "Personal",
+ name = "Groceries",
+ description = "Pick up strawberries",
+ isComplete = false
+};
+
+await client.GetContainer("database", "container").CreateItemAsync(newItem, null, new ItemRequestOptions { PreTriggers = new List<string> { "trgPreValidateToDoItemTimestamp" } });
+```
+
+### [Java SDK](#tab/java-sdk)
+
+The following code shows how to register a pre-trigger using the Java SDK:
+
+```java
+CosmosTriggerProperties definition = new CosmosTriggerProperties(
+ "preValidateToDoItemTimestamp",
+ Files.readString(Paths.get("validateToDoItemTimestamp.js"))
+);
+definition.setTriggerOperation(TriggerOperation.CREATE);
+definition.setTriggerType(TriggerType.PRE);
+
+CosmosTriggerResponse response = container
+ .getScripts()
+ .createTrigger(definition);
+```
+
+The following code shows how to call a pre-trigger using the Java SDK:
+
+```java
+ToDoItem item = new ToDoItem();
+item.category = "Personal";
+item.name = "Groceries";
+item.description = "Pick up strawberries";
+item.isComplete = false;
+
+CosmosItemRequestOptions options = new CosmosItemRequestOptions();
+options.setPreTriggerInclude(
+ Arrays.asList("preValidateToDoItemTimestamp")
+);
+
+CosmosItemResponse<ToDoItem> response = container.createItem(item, options);
+```
+
+### [JavaScript SDK](#tab/javascript-sdk)
+
+The following code shows how to register a pre-trigger using the JavaScript SDK:
+
+```javascript
+const container = client.database("myDatabase").container("myContainer");
+const triggerId = "trgPreValidateToDoItemTimestamp";
+await container.triggers.create({
+ id: triggerId,
+ body: require(`../js/${triggerId}`),
+ triggerOperation: "create",
+ triggerType: "pre"
+});
+```
+
+The following code shows how to call a pre-trigger using the JavaScript SDK:
+
+```javascript
+const container = client.database("myDatabase").container("myContainer");
+const triggerId = "trgPreValidateToDoItemTimestamp";
+await container.items.create({
+ category: "Personal",
+ name: "Groceries",
+ description: "Pick up strawberries",
+ isComplete: false
+}, {preTriggerInclude: [triggerId]});
+```
+
+### [Python SDK](#tab/python-sdk)
+
+The following code shows how to register a pre-trigger using the Python SDK:
+
+```python
+import azure.cosmos.cosmos_client as cosmos_client
+from azure.cosmos import documents
+
+url = "your_cosmos_db_account_URI"
+key = "your_cosmos_db_account_key"
+database_name = 'your_cosmos_db_database_name'
+container_name = 'your_cosmos_db_container_name'
+
+with open('../js/trgPreValidateToDoItemTimestamp.js') as file:
+ file_contents = file.read()
+
+trigger_definition = {
+ 'id': 'trgPreValidateToDoItemTimestamp',
+ 'serverScript': file_contents,
+ 'triggerType': documents.TriggerType.Pre,
+ 'triggerOperation': documents.TriggerOperation.All
+}
+client = cosmos_client.CosmosClient(url, key)
+database = client.get_database_client(database_name)
+container = database.get_container_client(container_name)
+trigger = container.scripts.create_trigger(trigger_definition)
+```
+
+The following code shows how to call a pre-trigger using the Python SDK:
+
+```python
+item = {'category': 'Personal', 'name': 'Groceries',
+ 'description': 'Pick up strawberries', 'isComplete': False}
+container.create_item(item, pre_trigger_include='trgPreValidateToDoItemTimestamp')
+```
+++
+## How to run post-triggers
+
+The following examples show how to register a post-trigger by using the Azure Cosmos DB SDKs. Refer to the [Post-trigger example](how-to-write-stored-procedures-triggers-udfs.md#post-triggers) for the source of this post-trigger, which is saved as `trgPostUpdateMetadata.js`.
+
+### [.NET SDK v2](#tab/dotnet-sdk-v2)
+
+The following code shows how to register a post-trigger using the .NET SDK v2:
+
+```csharp
+string triggerId = "trgPostUpdateMetadata";
+Trigger trigger = new Trigger
+{
+ Id = triggerId,
+ Body = File.ReadAllText($@"..\js\{triggerId}.js"),
+ TriggerOperation = TriggerOperation.Create,
+ TriggerType = TriggerType.Post
+};
+Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
+await client.CreateTriggerAsync(containerUri, trigger);
+```
+
+The following code shows how to call a post-trigger using the .NET SDK v2:
+
+```csharp
+var newItem = new
+{
+    name = "artist_profile_1023",
+    artist = "The Band",
+    albums = new[] { "Hellujah", "Rotators", "Spinning Top" }
+};
+
+RequestOptions options = new RequestOptions { PostTriggerInclude = new List<string> { "trgPostUpdateMetadata" } };
+Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
+await client.CreateDocumentAsync(containerUri, newItem, options);
+```
+
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
+
+The following code shows how to register a post-trigger using the .NET SDK v3:
+
+```csharp
+await client.GetContainer("database", "container").Scripts.CreateTriggerAsync(new TriggerProperties
+{
+ Id = "trgPostUpdateMetadata",
+ Body = File.ReadAllText(@"..\js\trgPostUpdateMetadata.js"),
+ TriggerOperation = TriggerOperation.Create,
+ TriggerType = TriggerType.Post
+});
+```
+
+The following code shows how to call a post-trigger using the .NET SDK v3:
+
+```csharp
+var newItem = new
+{
+    name = "artist_profile_1023",
+    artist = "The Band",
+    albums = new[] { "Hellujah", "Rotators", "Spinning Top" }
+};
+
+await client.GetContainer("database", "container").CreateItemAsync(newItem, null, new ItemRequestOptions { PostTriggers = new List<string> { "trgPostUpdateMetadata" } });
+```
+
+### [Java SDK](#tab/java-sdk)
+
+The following code shows how to register a post-trigger using the Java SDK:
+
+```java
+CosmosTriggerProperties definition = new CosmosTriggerProperties(
+ "postUpdateMetadata",
+ Files.readString(Paths.get("updateMetadata.js"))
+);
+definition.setTriggerOperation(TriggerOperation.CREATE);
+definition.setTriggerType(TriggerType.POST);
+
+CosmosTriggerResponse response = container
+ .getScripts()
+ .createTrigger(definition);
+```
+
+The following code shows how to call a post-trigger using the Java SDK:
+
+```java
+ToDoItem item = new ToDoItem();
+item.category = "Personal";
+item.name = "Doctor";
+item.description = "Make appointment for check up";
+item.isComplete = true;
+
+CosmosItemRequestOptions options = new CosmosItemRequestOptions();
+options.setPostTriggerInclude(
+ Arrays.asList("postUpdateMetadata")
+);
+
+CosmosItemResponse<ToDoItem> response = container.createItem(item, options);
+```
+
+### [JavaScript SDK](#tab/javascript-sdk)
+
+The following code shows how to register a post-trigger using the JavaScript SDK:
+
+```javascript
+const container = client.database("myDatabase").container("myContainer");
+const triggerId = "trgPostUpdateMetadata";
+await container.triggers.create({
+ id: triggerId,
+ body: require(`../js/${triggerId}`),
+ triggerOperation: "create",
+ triggerType: "post"
+});
+```
+
+The following code shows how to call a post-trigger using the JavaScript SDK:
+
+```javascript
+const item = {
+ name: "artist_profile_1023",
+ artist: "The Band",
+ albums: ["Hellujah", "Rotators", "Spinning Top"]
+};
+const container = client.database("myDatabase").container("myContainer");
+const triggerId = "trgPostUpdateMetadata";
+await container.items.create(item, {postTriggerInclude: [triggerId]});
+```
+
+### [Python SDK](#tab/python-sdk)
+
+The following code shows how to register a post-trigger using the Python SDK:
+
+```python
+import azure.cosmos.cosmos_client as cosmos_client
+from azure.cosmos import documents
+
+url = "your_cosmos_db_account_URI"
+key = "your_cosmos_db_account_key"
+database_name = 'your_cosmos_db_database_name'
+container_name = 'your_cosmos_db_container_name'
+
+with open('../js/trgPostValidateToDoItemTimestamp.js') as file:
+ file_contents = file.read()
+
+trigger_definition = {
+ 'id': 'trgPostValidateToDoItemTimestamp',
+ 'serverScript': file_contents,
+ 'triggerType': documents.TriggerType.Post,
+ 'triggerOperation': documents.TriggerOperation.All
+}
+client = cosmos_client.CosmosClient(url, key)
+database = client.get_database_client(database_name)
+container = database.get_container_client(container_name)
+trigger = container.scripts.create_trigger(trigger_definition)
+```
+
+The following code shows how to call a post-trigger using the Python SDK:
+
+```python
+item = {'category': 'Personal', 'name': 'Groceries',
+ 'description': 'Pick up strawberries', 'isComplete': False}
+container.create_item(item, post_trigger_include='trgPostValidateToDoItemTimestamp')
+```
+++
+## How to work with user-defined functions
+
+The following examples show how to register a user-defined function by using the Azure Cosmos DB SDKs. Refer to this [User-defined function example](how-to-write-stored-procedures-triggers-udfs.md#udfs) for the source of this user-defined function, which is saved as `udfTax.js`.
+
+### [.NET SDK v2](#tab/dotnet-sdk-v2)
+
+The following code shows how to register a user-defined function using the .NET SDK v2:
+
+```csharp
+string udfId = "Tax";
+var udfTax = new UserDefinedFunction
+{
+ Id = udfId,
+ Body = File.ReadAllText($@"..\js\{udfId}.js")
+};
+
+Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
+await client.CreateUserDefinedFunctionAsync(containerUri, udfTax);
+
+```
+
+The following code shows how to call a user-defined function using the .NET SDK v2:
+
+```csharp
+Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
+var results = client.CreateDocumentQuery<dynamic>(containerUri, "SELECT * FROM Incomes t WHERE udf.Tax(t.income) > 20000");
+
+foreach (var result in results)
+{
+ //iterate over results
+}
+```
+
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
+
+The following code shows how to register a user-defined function using the .NET SDK v3:
+
+```csharp
+await client.GetContainer("database", "container").Scripts.CreateUserDefinedFunctionAsync(new UserDefinedFunctionProperties
+{
+ Id = "Tax",
+ Body = File.ReadAllText(@"..\js\Tax.js")
+});
+```
+
+The following code shows how to call a user-defined function using the .NET SDK v3:
+
+```csharp
+var iterator = client.GetContainer("database", "container").GetItemQueryIterator<dynamic>("SELECT * FROM Incomes t WHERE udf.Tax(t.income) > 20000");
+while (iterator.HasMoreResults)
+{
+ var results = await iterator.ReadNextAsync();
+ foreach (var result in results)
+ {
+ //iterate over results
+ }
+}
+```
+
+### [Java SDK](#tab/java-sdk)
+
+The following code shows how to register a user-defined function using the Java SDK:
+
+```java
+CosmosUserDefinedFunctionProperties definition = new CosmosUserDefinedFunctionProperties(
+ "udfTax",
+ Files.readString(Paths.get("tax.js"))
+);
+
+CosmosUserDefinedFunctionResponse response = container
+ .getScripts()
+ .createUserDefinedFunction(definition);
+```
+
+The following code shows how to call a user-defined function using the Java SDK:
+
+```java
+CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
+
+CosmosPagedIterable<ToDoItem> iterable = container.queryItems(
+ "SELECT t.cost, udf.udfTax(t.cost) AS costWithTax FROM t",
+ options,
+ ToDoItem.class);
+```
+
+### [JavaScript SDK](#tab/javascript-sdk)
+
+The following code shows how to register a user-defined function using the JavaScript SDK:
+
+```javascript
+const container = client.database("myDatabase").container("myContainer");
+const udfId = "Tax";
+await container.userDefinedFunctions.create({
+ id: udfId,
+    body: require(`../js/${udfId}`)
+});
+```
+
+The following code shows how to call a user-defined function using the JavaScript SDK:
+
+```javascript
+const container = client.database("myDatabase").container("myContainer");
+const sql = "SELECT * FROM Incomes t WHERE udf.Tax(t.income) > 20000";
+const { resources: result } = await container.items.query(sql).fetchAll();
+```
+
+### [Python SDK](#tab/python-sdk)
+
+The following code shows how to register a user-defined function using the Python SDK:
+
+```python
+import azure.cosmos.cosmos_client as cosmos_client
+
+url = "your_cosmos_db_account_URI"
+key = "your_cosmos_db_account_key"
+database_name = 'your_cosmos_db_database_name'
+container_name = 'your_cosmos_db_container_name'
+
+with open('../js/udfTax.js') as file:
+ file_contents = file.read()
+udf_definition = {
+ 'id': 'Tax',
+ 'serverScript': file_contents,
+}
+client = cosmos_client.CosmosClient(url, key)
+database = client.get_database_client(database_name)
+container = database.get_container_client(container_name)
+udf = container.scripts.create_user_defined_function(udf_definition)
+```
+
+The following code shows how to call a user-defined function using the Python SDK:
+
+```python
+results = list(container.query_items(
+    query='SELECT * FROM Incomes t WHERE udf.Tax(t.income) > 20000',
+    enable_cross_partition_query=True))
+```
+++
+## Next steps
+
+Learn more concepts and how-to write or use stored procedures, triggers, and user-defined functions in Azure Cosmos DB:
+
+- [Working with Azure Cosmos DB stored procedures, triggers, and user-defined functions in Azure Cosmos DB](stored-procedures-triggers-udfs.md)
+- [Working with JavaScript language integrated query API in Azure Cosmos DB](javascript-query-api.md)
+- [How to write stored procedures, triggers, and user-defined functions in Azure Cosmos DB](how-to-write-stored-procedures-triggers-udfs.md)
+- [How to write stored procedures and triggers using JavaScript Query API in Azure Cosmos DB](how-to-write-javascript-query-api.md)
cosmos-db How To Write Javascript Query Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-write-javascript-query-api.md
+
+ Title: Write stored procedures and triggers using the JavaScript query API in Azure Cosmos DB
+description: Learn how to write stored procedures and triggers using the JavaScript Query API in Azure Cosmos DB
++++ Last updated : 05/07/2020++
+ms.devlang: javascript
+++
+# How to write stored procedures and triggers in Azure Cosmos DB by using the JavaScript query API
+
+Azure Cosmos DB lets you perform optimized queries by using a fluent JavaScript interface, without any knowledge of the SQL language, and you can use it to write stored procedures or triggers. To learn more about JavaScript Query API support in Azure Cosmos DB, see the [Working with JavaScript language integrated query API in Azure Cosmos DB](javascript-query-api.md) article.
+
+## <a id="stored-procedures"></a>Stored procedure using the JavaScript query API
+
+The following code sample is an example of how the JavaScript query API is used in the context of a stored procedure. The stored procedure inserts an Azure Cosmos DB item that is specified by an input parameter, and updates a metadata document by using the `__.filter()` method, with minSize, maxSize, and totalSize based upon the input item's size property.
+
+> [!NOTE]
+> `__` (double-underscore) is an alias to `getContext().getCollection()` when using the JavaScript query API.
+
+```javascript
+/**
+ * Insert an item and update metadata doc: minSize, maxSize, totalSize based on item.size.
+ */
+function insertDocumentAndUpdateMetadata(item) {
+ // HTTP error codes sent to our callback function by CosmosDB server.
+ var ErrorCode = {
+ RETRY_WITH: 449,
+ }
+
+ var isAccepted = __.createDocument(__.getSelfLink(), item, {}, function(err, item, options) {
+ if (err) throw err;
+
+ // Check the item (ignore items with invalid/zero size and metadata itself) and call updateMetadata.
+ if (!item.isMetadata && item.size > 0) {
+ // Get the metadata. We keep it in the same container. it's the only item that has .isMetadata = true.
+ var result = __.filter(function(x) {
+ return x.isMetadata === true
+ }, function(err, feed, options) {
+ if (err) throw err;
+
+ // We assume that metadata item was pre-created and must exist when this script is called.
+ if (!feed || !feed.length) throw new Error("Failed to find the metadata item.");
+
+ // The metadata item.
+ var metaItem = feed[0];
+
+ // Update metaDoc.minSize:
+ // for 1st document use doc.Size, for all the rest see if it's less than last min.
+ if (metaItem.minSize == 0) metaItem.minSize = item.size;
+ else metaItem.minSize = Math.min(metaItem.minSize, item.size);
+
+ // Update metaItem.maxSize.
+ metaItem.maxSize = Math.max(metaItem.maxSize, item.size);
+
+ // Update metaItem.totalSize.
+ metaItem.totalSize += item.size;
+
+ // Update/replace the metadata item in the store.
+ var isAccepted = __.replaceDocument(metaItem._self, metaItem, function(err) {
+ if (err) throw err;
+ // Note: in case concurrent updates causes conflict with ErrorCode.RETRY_WITH, we can't read the meta again
+ // and update again because due to Snapshot isolation we will read same exact version (we are in same transaction).
+ // We have to take care of that on the client side.
+ });
+ if (!isAccepted) throw new Error("replaceDocument(metaItem) returned false.");
+ });
+ if (!result.isAccepted) throw new Error("filter for metaItem returned false.");
+ }
+ });
+ if (!isAccepted) throw new Error("createDocument(actual item) returned false.");
+}
+```
+
+## Next steps
+
+See the following articles to learn about stored procedures, triggers, and user-defined functions in Azure Cosmos DB:
+
+* [How to work with stored procedures, triggers, user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md)
+
+* [How to register and use stored procedures in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-stored-procedures)
+
+* How to register and use [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-post-triggers) in Azure Cosmos DB
+
+* [How to register and use user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-work-with-user-defined-functions)
+
+* [Synthetic partition keys in Azure Cosmos DB](synthetic-partition-keys.md)
cosmos-db How To Write Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-write-stored-procedures-triggers-udfs.md
+
+ Title: Write stored procedures, triggers, and UDFs in Azure Cosmos DB
+description: Learn how to define stored procedures, triggers, and user-defined functions in Azure Cosmos DB
++++ Last updated : 10/05/2021++
+ms.devlang: javascript
+++
+# How to write stored procedures, triggers, and user-defined functions in Azure Cosmos DB
+
+Azure Cosmos DB provides language-integrated, transactional execution of JavaScript that lets you write **stored procedures**, **triggers**, and **user-defined functions (UDFs)**. When using the API for NoSQL in Azure Cosmos DB, you can define the stored procedures, triggers, and UDFs in JavaScript language. You can write your logic in JavaScript and execute it inside the database engine. You can create and execute triggers, stored procedures, and UDFs by using [Azure portal](https://portal.azure.com/), the [JavaScript language integrated query API in Azure Cosmos DB](javascript-query-api.md) and the [Azure Cosmos DB for NoSQL client SDKs](samples-dotnet.md).
+
+To call a stored procedure, trigger, or user-defined function, you need to register it. For more information, see [How to work with stored procedures, triggers, user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md).
+
+> [!NOTE]
+> For partitioned containers, when executing a stored procedure, a partition key value must be provided in the request options. Stored procedures are always scoped to a partition key. Items that have a different partition key value will not be visible to the stored procedure. This also applies to triggers.
+
+> [!TIP]
+> Azure Cosmos DB supports deploying containers with stored procedures, triggers and user-defined functions. For more information see [Create an Azure Cosmos DB container with server-side functionality.](./manage-with-templates.md#create-sproc)
+
+## <a id="stored-procedures"></a>How to write stored procedures
+
+Stored procedures are written using JavaScript. They can create, update, read, query, and delete items inside an Azure Cosmos DB container. Stored procedures are registered per collection, and can operate on any document or attachment present in that collection.
+
+Here is a simple stored procedure that returns a "Hello World" response.
+
+```javascript
+var helloWorldStoredProc = {
+ id: "helloWorld",
+ serverScript: function () {
+ var context = getContext();
+ var response = context.getResponse();
+
+ response.setBody("Hello, World");
+ }
+}
+```
+
+The context object provides access to all operations that can be performed in Azure Cosmos DB, as well as access to the request and response objects. In this case, you use the response object to set the body of the response to be sent back to the client.
+
+Once written, the stored procedure must be registered with a collection. To learn more, see [How to use stored procedures in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-stored-procedures) article.
+
+### <a id="create-an-item"></a>Create an item using stored procedure
+
+When you create an item by using a stored procedure, the item is inserted into the Azure Cosmos DB container and an ID for the newly created item is returned. Creating an item is an asynchronous operation and depends on the JavaScript callback functions. The callback function has two parameters: one for the error object in case the operation fails, and another for a return value; in this case, the created object. Inside the callback, you can either handle the exception or throw an error. If a callback is not provided and there is an error, the Azure Cosmos DB runtime throws an error.
+
+The stored procedure can also include a boolean parameter that controls whether a description is required. When the parameter is set to true and the description is missing, the stored procedure throws an exception. Otherwise, the rest of the stored procedure continues to run.
+
+The following example stored procedure takes an array of new Azure Cosmos DB items as input, inserts them into the Azure Cosmos DB container, and returns the count of the items inserted. This example uses the ToDoList sample from the [Quickstart .NET API for NoSQL](quickstart-dotnet.md).
+
+```javascript
+function createToDoItems(items) {
+ var collection = getContext().getCollection();
+ var collectionLink = collection.getSelfLink();
+ var count = 0;
+
+ if (!items) throw new Error("The array is undefined or null.");
+
+ var numItems = items.length;
+
+ if (numItems == 0) {
+ getContext().getResponse().setBody(0);
+ return;
+ }
+
+ tryCreate(items[count], callback);
+
+ function tryCreate(item, callback) {
+ var options = { disableAutomaticIdGeneration: false };
+
+ var isAccepted = collection.createDocument(collectionLink, item, options, callback);
+
+ if (!isAccepted) getContext().getResponse().setBody(count);
+ }
+
+ function callback(err, item, options) {
+ if (err) throw err;
+ count++;
+ if (count >= numItems) {
+ getContext().getResponse().setBody(count);
+ } else {
+ tryCreate(items[count], callback);
+ }
+ }
+}
+```
+
+### Arrays as input parameters for stored procedures
+
+When you define a stored procedure in the Azure portal, input parameters are always sent as strings to the stored procedure. Even if you pass an array of strings as an input, the array is converted to a string and sent to the stored procedure. To work around this behavior, you can define a function within your stored procedure to parse the string as an array. The following code shows how to parse a string input parameter as an array:
+
+```javascript
+function sample(arr) {
+ if (typeof arr === "string") arr = JSON.parse(arr);
+
+ arr.forEach(function(a) {
+ // do something here
+ console.log(a);
+ });
+}
+```
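+
+When you invoke the stored procedure from client code rather than the portal, you can pass the array either directly or as a JSON string; the parsing shown above handles both cases. The following hedged sketch uses the @azure/cosmos JavaScript SDK; the stored procedure name `sample`, the partition key value, and the array contents are placeholders.
+
+```javascript
+// Hedged sketch: execute the stored procedure above, passing the array as a JSON string.
+// Assumes `container` is an @azure/cosmos Container and the procedure is registered as "sample".
+async function runSample(container) {
+  const arrayParam = JSON.stringify(["value1", "value2", "value3"]);
+
+  const { resource } = await container.scripts
+    .storedProcedure("sample")
+    .execute("<partition-key-value>", [arrayParam]);
+
+  console.log(resource);
+}
+```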
+
+### <a id="transactions"></a>Transactions within stored procedures
+
+You can implement transactions on items within a container by using a stored procedure. The following example uses transactions within a fantasy football gaming app to trade players between two teams in a single operation. The stored procedure attempts to read two Azure Cosmos DB items, each corresponding to one of the player IDs passed in as arguments. If both players are found, the stored procedure updates the items by swapping their teams. If any errors are encountered along the way, the stored procedure throws a JavaScript exception that implicitly aborts the transaction.
+
+```javascript
+// JavaScript source code
+function tradePlayers(playerId1, playerId2) {
+ var context = getContext();
+ var container = context.getCollection();
+ var response = context.getResponse();
+
+ var player1Item, player2Item;
+
+ // query for players
+ var filterQuery =
+ {
+ 'query' : 'SELECT * FROM Players p where p.id = @playerId1',
+ 'parameters' : [{'name':'@playerId1', 'value':playerId1}]
+ };
+
+ var accept = container.queryDocuments(container.getSelfLink(), filterQuery, {},
+ function (err, items, responseOptions) {
+ if (err) throw new Error("Error" + err.message);
+
+ if (items.length != 1) throw "Unable to find both names";
+ player1Item = items[0];
+
+ var filterQuery2 =
+ {
+ 'query' : 'SELECT * FROM Players p where p.id = @playerId2',
+ 'parameters' : [{'name':'@playerId2', 'value':playerId2}]
+ };
+ var accept2 = container.queryDocuments(container.getSelfLink(), filterQuery2, {},
+ function (err2, items2, responseOptions2) {
+ if (err2) throw new Error("Error" + err2.message);
+ if (items2.length != 1) throw "Unable to find both names";
+ player2Item = items2[0];
+ swapTeams(player1Item, player2Item);
+ return;
+ });
+ if (!accept2) throw "Unable to read player details, abort ";
+ });
+
+ if (!accept) throw "Unable to read player details, abort ";
+
+ // swap the two players' teams
+ function swapTeams(player1, player2) {
+ var player2NewTeam = player1.team;
+ player1.team = player2.team;
+ player2.team = player2NewTeam;
+
+ var accept = container.replaceDocument(player1._self, player1,
+ function (err, itemReplaced) {
+ if (err) throw "Unable to update player 1, abort ";
+
+ var accept2 = container.replaceDocument(player2._self, player2,
+ function (err2, itemReplaced2) {
+ if (err) throw "Unable to update player 2, abort"
+ });
+
+ if (!accept2) throw "Unable to update player 2, abort";
+ });
+
+ if (!accept) throw "Unable to update player 1, abort";
+ }
+}
+```
+
+### <a id="bounded-execution"></a>Bounded execution within stored procedures
+
+The following is an example of a stored procedure that bulk-imports items into an Azure Cosmos DB container. The stored procedure handles bounded execution by checking the boolean return value from `createDocument`, and then uses the count of items inserted in each invocation of the stored procedure to track and resume progress across batches.
+
+```javascript
+function bulkImport(items) {
+ var container = getContext().getCollection();
+ var containerLink = container.getSelfLink();
+
+ // The count of imported items, also used as current item index.
+ var count = 0;
+
+ // Validate input.
+ if (!items) throw new Error("The array is undefined or null.");
+
+ var itemsLength = items.length;
+ if (itemsLength == 0) {
+ getContext().getResponse().setBody(0);
+ return;
+ }
+
+ // Call the create API to create an item.
+ tryCreate(items[count], callback);
+
+ // Note that there are 2 exit conditions:
+ // 1) The createDocument request was not accepted.
+ // In this case the callback will not be called, we just call setBody and we are done.
+ // 2) The callback was called items.length times.
+ // In this case all items were created and we don't need to call tryCreate anymore. Just call setBody and we are done.
+ function tryCreate(item, callback) {
+ var isAccepted = container.createDocument(containerLink, item, callback);
+
+ // If the request was accepted, callback will be called.
+ // Otherwise report current count back to the client,
+ // which will call the script again with remaining set of items.
+ if (!isAccepted) getContext().getResponse().setBody(count);
+ }
+
+ // This is called when container.createDocument is done in order to process the result.
+ function callback(err, item, options) {
+ if (err) throw err;
+
+ // One more item has been inserted, increment the count.
+ count++;
+
+ if (count >= itemsLength) {
+ // If we created all items, we are done. Just set the response.
+ getContext().getResponse().setBody(count);
+ } else {
+ // Create next document.
+ tryCreate(items[count], callback);
+ }
+ }
+}
+```
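+
+Because a single invocation can stop early when a request isn't accepted, the client typically calls the stored procedure in a loop and passes the remaining items each time. A minimal, hedged sketch with the @azure/cosmos JavaScript SDK follows; it assumes `bulkImport` is already registered on the container and that all items share the given partition key value.
+
+```javascript
+// Call bulkImport repeatedly until every item has been created.
+async function importAll(container, items, partitionKeyValue) {
+  let imported = 0;
+
+  while (imported < items.length) {
+    // Pass only the items that haven't been created yet.
+    const { resource: count } = await container.scripts
+      .storedProcedure("bulkImport")
+      .execute(partitionKeyValue, [items.slice(imported)]);
+
+    if (!count) break; // nothing was created this round; stop instead of looping forever
+    imported += count; // the script returns how many items it created in this invocation
+  }
+
+  return imported;
+}
+```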
+
+### <a id="async-promises"></a>Async await with stored procedures
+
+The following is an example of a stored procedure that uses async/await with Promises through a helper function. The stored procedure queries for items and replaces them.
+
+```javascript
+function async_sample() {
+ const ERROR_CODE = {
+ NotAccepted: 429
+ };
+
+ const asyncHelper = {
+ queryDocuments(sqlQuery, options) {
+ return new Promise((resolve, reject) => {
+ const isAccepted = __.queryDocuments(__.getSelfLink(), sqlQuery, options, (err, feed, options) => {
+ if (err) reject(err);
+ resolve({ feed, options });
+ });
+ if (!isAccepted) reject(new Error(ERROR_CODE.NotAccepted, "queryDocuments was not accepted."));
+ });
+ },
+
+ replaceDocument(doc) {
+ return new Promise((resolve, reject) => {
+ const isAccepted = __.replaceDocument(doc._self, doc, (err, result, options) => {
+ if (err) reject(err);
+ resolve({ result, options });
+ });
+ if (!isAccepted) reject(new Error(ERROR_CODE.NotAccepted, "replaceDocument was not accepted."));
+ });
+ }
+ };
+
+ async function main() {
+ let continuation;
+ do {
+ let { feed, options } = await asyncHelper.queryDocuments("SELECT * from c", { continuation });
+
+ for (let doc of feed) {
+ doc.newProp = 1;
+ await asyncHelper.replaceDocument(doc);
+ }
+
+ continuation = options.continuation;
+ } while (continuation);
+ }
+
+ main().catch(err => getContext().abort(err));
+}
+```
+
+## <a id="triggers"></a>How to write triggers
+
+Azure Cosmos DB supports pre-triggers and post-triggers. Pre-triggers run before a database item is modified, and post-triggers run after a database item is modified. Triggers aren't executed automatically; they must be specified for each database operation where you want them to run. After you define a trigger, you should [register and call a pre-trigger](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) by using the Azure Cosmos DB SDKs.
+
+### <a id="pre-triggers"></a>Pre-triggers
+
+The following example shows how a pre-trigger is used to validate the properties of an Azure Cosmos DB item that's being created. This example uses the ToDoList sample from the [Quickstart .NET API for NoSQL](quickstart-dotnet.md) to add a timestamp property to a newly created item if it doesn't contain one.
+
+```javascript
+function validateToDoItemTimestamp() {
+ var context = getContext();
+ var request = context.getRequest();
+
+ // item to be created in the current operation
+ var itemToCreate = request.getBody();
+
+ // validate properties
+ if (!("timestamp" in itemToCreate)) {
+ var ts = new Date();
+ itemToCreate["timestamp"] = ts.getTime();
+ }
+
+ // update the item that will be created
+ request.setBody(itemToCreate);
+}
+```
+
+Pre-triggers can't have any input parameters. The request object in the trigger is used to manipulate the request message associated with the operation. In the previous example, the pre-trigger runs when an Azure Cosmos DB item is created, and the request message body contains the item to be created in JSON format.
+
+When you register a trigger, you can specify the operations that it can run with. This trigger should be created with a `TriggerOperation` value of `TriggerOperation.Create`, which means the trigger isn't permitted in a replace operation.
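+
+As a hedged sketch with the @azure/cosmos JavaScript SDK, registration and invocation might look like the following. It assumes `container` is a Container instance, that the `validateToDoItemTimestamp` function shown earlier is defined in your client code, and that the item values are placeholders.
+
+```javascript
+async function registerAndUsePreTrigger(container) {
+  // Register the pre-trigger for create operations.
+  await container.scripts.triggers.create({
+    id: "validateToDoItemTimestamp",
+    body: validateToDoItemTimestamp,
+    triggerType: "Pre",
+    triggerOperation: "Create"
+  });
+
+  // The trigger runs only when it's explicitly included in the request options.
+  await container.items.create(
+    { id: "1", category: "personal", description: "groceries" },
+    { preTriggerInclude: ["validateToDoItemTimestamp"] }
+  );
+}
+```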
+
+For examples of how to register and call a pre-trigger, see the [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-post-triggers) articles.
+
+### <a id="post-triggers"></a>Post-triggers
+
+The following example shows a post-trigger. This trigger queries for the metadata item and updates it with details about the newly created item.
++
+```javascript
+function updateMetadata() {
+ var context = getContext();
+ var container = context.getCollection();
+ var response = context.getResponse();
+
+ // item that was created
+ var createdItem = response.getBody();
+
+ // query for metadata document
+ var filterQuery = 'SELECT * FROM root r WHERE r.id = "_metadata"';
+ var accept = container.queryDocuments(container.getSelfLink(), filterQuery,
+ updateMetadataCallback);
+ if(!accept) throw "Unable to update metadata, abort";
+
+ function updateMetadataCallback(err, items, responseOptions) {
+ if(err) throw new Error("Error" + err.message);
+
+ if(items.length != 1) throw 'Unable to find metadata document';
+
+ var metadataItem = items[0];
+
+ // update metadata
+ metadataItem.createdItems += 1;
+ metadataItem.createdNames += " " + createdItem.id;
+ var accept = container.replaceDocument(metadataItem._self,
+ metadataItem, function(err, itemReplaced) {
+ if(err) throw "Unable to update metadata, abort";
+ });
+
+ if(!accept) throw "Unable to update metadata, abort";
+ return;
+ }
+}
+```
+
+It's important to note that triggers run transactionally in Azure Cosmos DB. A post-trigger runs as part of the same transaction as the operation on the underlying item. An exception during post-trigger execution fails the whole transaction: anything already committed is rolled back, and an exception is returned.
+
+For examples of how to register and call a post-trigger, see the [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-post-triggers) articles.
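+
+As a quick, hedged illustration with the @azure/cosmos JavaScript SDK, registering the post-trigger and including it on a create operation might look like the following. It assumes `container` is a Container instance, that the `updateMetadata` function shown earlier is defined in your client code, that a metadata item with `id: "_metadata"` already exists in the same partition, and that the item values are placeholders.
+
+```javascript
+async function registerAndUsePostTrigger(container) {
+  // Register the post-trigger for create operations.
+  await container.scripts.triggers.create({
+    id: "updateMetadata",
+    body: updateMetadata,
+    triggerType: "Post",
+    triggerOperation: "Create"
+  });
+
+  // Include the post-trigger explicitly on the write that should update the metadata item.
+  await container.items.create(
+    { id: "2", category: "personal", description: "laundry" },
+    { postTriggerInclude: ["updateMetadata"] }
+  );
+}
+```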
+
+## <a id="udfs"></a>How to write user-defined functions
+
+The following sample creates a UDF to calculate income tax for various income brackets. This user-defined function would then be used inside a query. For the purposes of this example, assume there's a container called "Incomes" with properties as follows:
+
+```json
+{
+ "name": "Satya Nadella",
+ "country": "USA",
+ "income": 70000
+}
+```
+
+The following is a function definition to calculate income tax for various income brackets:
+
+```javascript
+function tax(income) {
+ if (income == undefined)
+ throw 'no input';
+
+ if (income < 1000)
+ return income * 0.1;
+ else if (income < 10000)
+ return income * 0.2;
+ else
+ return income * 0.4;
+}
+```
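+
+To reference a UDF from a query, prefix its name with `udf.`. A minimal, hedged sketch with the @azure/cosmos JavaScript SDK follows; it assumes `container` holds the "Incomes" data shown above and that the `tax` function is defined in your client code.
+
+```javascript
+async function registerAndUseTaxUdf(container) {
+  // Register the user-defined function shown above.
+  await container.scripts.userDefinedFunctions.create({
+    id: "tax",
+    body: tax
+  });
+
+  // Reference the UDF in a query with the udf. prefix.
+  const { resources } = await container.items
+    .query("SELECT * FROM Incomes t WHERE udf.tax(t.income) > 20000")
+    .fetchAll();
+
+  console.log(resources);
+}
+```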
+
+For examples of how to register and use a user-defined function, see the [How to use user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-work-with-user-defined-functions) article.
+
+## Logging
+
+When you use stored procedures, triggers, or user-defined functions, you can log steps by enabling script logging. A debugging string is generated when `EnableScriptLogging` is set to true, as shown in the following examples:
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+let requestOptions = { enableScriptLogging: true };
+const { resource: result, headers: responseHeaders } = await container.scripts
+ .storedProcedure(Sproc.id)
+ .execute(undefined, [], requestOptions);
+console.log(responseHeaders[Constants.HttpHeaders.ScriptLogResults]);
+```
+
+# [C#](#tab/csharp)
+
+```csharp
+var response = await client.ExecuteStoredProcedureAsync(
+document.SelfLink,
+new RequestOptions { EnableScriptLogging = true } );
+Console.WriteLine(response.ScriptLog);
+```
++
+## Next steps
+
+Learn more concepts and how-to write or use stored procedures, triggers, and user-defined functions in Azure Cosmos DB:
+
+* [How to register and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md)
+
+* [How to write stored procedures and triggers using JavaScript Query API in Azure Cosmos DB](how-to-write-javascript-query-api.md)
+
+* [Working with Azure Cosmos DB stored procedures, triggers, and user-defined functions in Azure Cosmos DB](stored-procedures-triggers-udfs.md)
+
+* [Working with JavaScript language integrated query API in Azure Cosmos DB](javascript-query-api.md)
cosmos-db Index Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/index-metrics.md
+
+ Title: Azure Cosmos DB indexing metrics
+description: Learn how to obtain and interpret the indexing metrics in Azure Cosmos DB
++++ Last updated : 10/25/2021+++
+# Indexing metrics in Azure Cosmos DB
+
+Azure Cosmos DB provides indexing metrics that show both utilized indexed paths and recommended indexed paths. You can use the indexing metrics to optimize query performance, especially in cases where you aren't sure how to modify the [indexing policy](../index-policy.md).
+
+> [!NOTE]
+> The indexing metrics are supported only in the .NET SDK (version 3.21.0 or later) and the Java SDK (version 4.19.0 or later).
+
+## Enable indexing metrics
+
+You can enable indexing metrics for a query by setting the `PopulateIndexMetrics` property to `true`. When not specified, `PopulateIndexMetrics` defaults to `false`. We only recommend enabling the index metrics for troubleshooting query performance. As long as your queries and indexing policy stay the same, the index metrics are unlikely to change. Instead, we recommend identifying expensive queries by monitoring query RU charge and latency using diagnostic logs.
+
+### .NET SDK example
+
+```csharp
+ string sqlQueryText = "SELECT TOP 10 c.id FROM c WHERE c.Item = 'value1234' AND c.Price > 2";
+
+ QueryDefinition query = new QueryDefinition(sqlQueryText);
+
+ FeedIterator<Item> resultSetIterator = container.GetItemQueryIterator<Item>(
+ query, requestOptions: new QueryRequestOptions
+ {
+ PopulateIndexMetrics = true
+ });
+
+ FeedResponse<Item> response = null;
+
+ while (resultSetIterator.HasMoreResults)
+ {
+ response = await resultSetIterator.ReadNextAsync();
+ Console.WriteLine(response.IndexMetrics);
+ }
+```
+
+### Example output
+
+In this example query, we observe the utilized paths `/Item/?` and `/Price/?` and the potential composite index `(/Item ASC, /Price ASC)`.
+
+```
+Index Utilization Information
+ Utilized Single Indexes
+ Index Spec: /Item/?
+ Index Impact Score: High
+
+ Index Spec: /Price/?
+ Index Impact Score: High
+
+ Potential Single Indexes
+ Utilized Composite Indexes
+ Potential Composite Indexes
+ Index Spec: /Item ASC, /Price ASC
+ Index Impact Score: High
+
+```
+
+## Utilized indexed paths
+
+The utilized single indexes and utilized composite indexes respectively show the included paths and composite indexes that the query used. Queries can use multiple indexed paths, as well as a mix of included paths and composite indexes. If an indexed path isn't listed as utilized, removing the indexed path won't have any impact on the query's performance.
+
+Consider the list of utilized indexed paths as evidence that a query used those paths. If you aren't sure if a new indexed path will improve query performance, you should try adding the new indexed paths and check if the query uses them.
+
+## Potential indexed paths
+
+The potential single indexes and potential composite indexes respectively show the included paths and composite indexes that, if added, the query might utilize. If you see potential indexed paths, you should consider adding them to your indexing policy and observe whether they improve query performance.
+
+Consider the list of potential indexed paths as recommendations rather than conclusive evidence that a query will use a specific indexed path. The potential indexed paths are not an exhaustive list of indexed paths that a query could use. Additionally, it's possible that some potential indexed paths won't have any impact on query performance. [Add the recommended indexed paths](how-to-manage-indexing-policy.md) and confirm that they improve query performance.
+
+> [!NOTE]
+> Do you have any feedback about the indexing metrics? We want to hear it! Feel free to share feedback directly with the Azure Cosmos DB engineering team: cosmosdbindexing@microsoft.com
+
+## Index impact score
+
+The index impact score is the likelihood that an indexed path, based on the query shape, has a significant impact on query performance. In other words, the index impact score is the probability that, without that specific indexed path, the query RU charge would have been substantially higher.
+
+There are two possible index impact scores: **high** and **low**. If you have multiple potential indexed paths, we recommend focusing on indexed paths with a **high** impact score.
+
+The only criterion used in the index impact score is the query shape. For example, in the following query, the indexed path `/name/?` would be assigned a **high** index impact score:
+
+```sql
+SELECT *
+FROM c
+WHERE c.name = "Samer"
+```
+
+The actual impact depends on the nature of the data. If only a few items match the `/name` filter, the indexed path will substantially improve the query RU charge. However, if most items end up matching the `/name` filter anyway, the indexed path may not improve query performance. In each of these cases, the indexed path `/name/?` would be assigned a **high** index impact score because, based on the query shape, the indexed path has a high likelihood of improving query performance.
+
+## Additional examples
+
+### Example query
+
+```sql
+SELECT c.id
+FROM c
+WHERE c.name = 'Tim' AND c.age > 15 AND c.town = 'Redmond' AND c.timestamp > 2349230183
+```
+
+### Index metrics
+
+```
+Index Utilization Information
+ Utilized Single Indexes
+ Index Spec: /name/?
+ Index Impact Score: High
+
+ Index Spec: /age/?
+ Index Impact Score: High
+
+ Index Spec: /town/?
+ Index Impact Score: High
+
+ Index Spec: /timestamp/?
+ Index Impact Score: High
+
+ Potential Single Indexes
+ Utilized Composite Indexes
+ Potential Composite Indexes
+ Index Spec: /name ASC, /town ASC, /age ASC
+ Index Impact Score: High
+
+ Index Spec: /name ASC, /town ASC, /timestamp ASC
+ Index Impact Score: High
+
+```
+These index metrics show that the query used the indexed paths `/name/?`, `/age/?`, `/town/?`, and `/timestamp/?`. The index metrics also indicate that there's a high likelihood that adding the composite indexes `(/name ASC, /town ASC, /age ASC)` and `(/name ASC, /town ASC, /timestamp ASC)` will further improve performance.
+
+## Next steps
+
+Read more about indexing in the following articles:
+
+- [Indexing overview](../index-overview.md)
+- [How to manage indexing policy](how-to-manage-indexing-policy.md)
cosmos-db Javascript Query Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/javascript-query-api.md
+
+ Title: Work with JavaScript integrated query API in Azure Cosmos DB Stored Procedures and Triggers
+description: This article introduces the concepts for JavaScript language-integrated query API to create stored procedures and triggers in Azure Cosmos DB.
++++ Last updated : 05/07/2020++
+ms.devlang: javascript
+++
+# JavaScript query API in Azure Cosmos DB
+
+In addition to issuing queries by using the API for NoSQL in Azure Cosmos DB, the [Azure Cosmos DB server-side SDK](https://github.com/Azure/azure-cosmosdb-js-server/) provides a JavaScript interface for performing optimized queries in Azure Cosmos DB stored procedures and triggers. You don't need to know the SQL language to use this JavaScript interface. The JavaScript query API lets you programmatically build queries by passing predicate functions into a sequence of function calls, with a syntax similar to ECMAScript5's array built-ins and popular JavaScript libraries like Lodash. Queries are parsed by the JavaScript runtime and executed efficiently by using Azure Cosmos DB indices.
+
+## Supported JavaScript functions
+
+| **Function** | **Description** |
+|||
+|`chain() ... .value([callback] [, options])`|Starts a chained call that must be terminated with value().|
+|`filter(predicateFunction [, options] [, callback])`|Filters the input using a predicate function that returns true/false in order to filter in/out input documents into the resulting set. This function behaves similar to a WHERE clause in SQL.|
+|`flatten([isShallow] [, options] [, callback])`|Combines and flattens arrays from each input item into a single array. This function behaves similar to SelectMany in LINQ.|
+|`map(transformationFunction [, options] [, callback])`|Applies a projection given a transformation function that maps each input item to a JavaScript object or value. This function behaves similar to a SELECT clause in SQL.|
+|`pluck([propertyName] [, options] [, callback])`|This function is a shortcut for a map that extracts the value of a single property from each input item.|
+|`sortBy([predicate] [, options] [, callback])`|Produces a new set of documents by sorting the documents in the input document stream in ascending order by using the given predicate. This function behaves similar to an ORDER BY clause in SQL.|
+|`sortByDescending([predicate] [, options] [, callback])`|Produces a new set of documents by sorting the documents in the input document stream in descending order using the given predicate. This function behaves similar to an ORDER BY x DESC clause in SQL.|
+|`unwind(collectionSelector, [resultSelector], [options], [callback])`|Performs a self-join with inner array and adds results from both sides as tuples to the result projection. For instance, joining a person document with person.pets would produce [person, pet] tuples. This is similar to SelectMany in .NET LINQ.|
+
+When included inside predicate and/or selector functions, the following JavaScript constructs get automatically optimized to run directly on Azure Cosmos DB indices:
+
+- Simple operators: `=` `+` `-` `*` `/` `%` `|` `^` `&` `==` `!=` `===` `!==` `<` `>` `<=` `>=` `||` `&&` `<<` `>>` `>>>` `!` `~`
+- Literals, including the object literal: {}
+- var, return
+
+The following JavaScript constructs do not get optimized for Azure Cosmos DB indices:
+
+- Control flow (for example, if, for, while)
+- Function calls
+
+For more information, see the [Azure Cosmos DB Server Side JavaScript Documentation](https://github.com/Azure/azure-cosmosdb-js-server/).
+
+## SQL to JavaScript cheat sheet
+
+The following table presents various SQL queries and the corresponding JavaScript queries. As with SQL queries, properties (for example, item.id) are case-sensitive.
+
+> [!NOTE]
+> `__` (double-underscore) is an alias to `getContext().getCollection()` when using the JavaScript query API.
+
+|**SQL**|**JavaScript Query API**|**Description**|
+||||
+|SELECT *<br>FROM docs| __.map(function(doc) { <br>&nbsp;&nbsp;&nbsp;&nbsp;return doc;<br>});|Results in all documents (paginated with continuation token) as is.|
+|SELECT <br>&nbsp;&nbsp;&nbsp;docs.id,<br>&nbsp;&nbsp;&nbsp;docs.message AS msg,<br>&nbsp;&nbsp;&nbsp;docs.actions <br>FROM docs|__.map(function(doc) {<br>&nbsp;&nbsp;&nbsp;&nbsp;return {<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;id: doc.id,<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;msg: doc.message,<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;actions:doc.actions<br>&nbsp;&nbsp;&nbsp;&nbsp;};<br>});|Projects the id, message (aliased to msg), and action from all documents.|
+|SELECT *<br>FROM docs<br>WHERE<br>&nbsp;&nbsp;&nbsp;docs.id="X998_Y998"|__.filter(function(doc) {<br>&nbsp;&nbsp;&nbsp;&nbsp;return doc.id ==="X998_Y998";<br>});|Queries for documents with the predicate: id = "X998_Y998".|
+|SELECT *<br>FROM docs<br>WHERE<br>&nbsp;&nbsp;&nbsp;ARRAY_CONTAINS(docs.Tags, 123)|__.filter(function(x) {<br>&nbsp;&nbsp;&nbsp;&nbsp;return x.Tags && x.Tags.indexOf(123) > -1;<br>});|Queries for documents that have a Tags property and Tags is an array containing the value 123.|
+|SELECT<br>&nbsp;&nbsp;&nbsp;docs.id,<br>&nbsp;&nbsp;&nbsp;docs.message AS msg<br>FROM docs<br>WHERE<br>&nbsp;&nbsp;&nbsp;docs.id="X998_Y998"|__.chain()<br>&nbsp;&nbsp;&nbsp;&nbsp;.filter(function(doc) {<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return doc.id ==="X998_Y998";<br>&nbsp;&nbsp;&nbsp;&nbsp;})<br>&nbsp;&nbsp;&nbsp;&nbsp;.map(function(doc) {<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return {<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;id: doc.id,<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;msg: doc.message<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;};<br>&nbsp;&nbsp;&nbsp;&nbsp;})<br>.value();|Queries for documents with a predicate, id = "X998_Y998", and then projects the id and message (aliased to msg).|
+|SELECT VALUE tag<br>FROM docs<br>JOIN tag IN docs.Tags<br>ORDER BY docs._ts|__.chain()<br>&nbsp;&nbsp;&nbsp;&nbsp;.filter(function(doc) {<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return doc.Tags && Array.isArray(doc.Tags);<br>&nbsp;&nbsp;&nbsp;&nbsp;})<br>&nbsp;&nbsp;&nbsp;&nbsp;.sortBy(function(doc) {<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return doc._ts;<br>&nbsp;&nbsp;&nbsp;&nbsp;})<br>&nbsp;&nbsp;&nbsp;&nbsp;.pluck("Tags")<br>&nbsp;&nbsp;&nbsp;&nbsp;.flatten()<br>&nbsp;&nbsp;&nbsp;&nbsp;.value()|Filters for documents that have an array property, Tags, and sorts the resulting documents by the _ts timestamp system property, and then projects + flattens the Tags array.|
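+
+The chained calls shown in the table run inside a stored procedure or trigger, where the results are typically returned through the response object. The following hedged sketch illustrates the pattern; the `total` property and the function name are illustrative, not part of a real schema.
+
+```javascript
+function getHighValueOrders() {
+    var response = getContext().getResponse();
+
+    // Filter, project, and return matching documents from the current container.
+    var isAccepted = __.chain()
+        .filter(function (doc) { return doc.total > 100; })
+        .map(function (doc) { return { id: doc.id, total: doc.total }; })
+        .value({}, function (err, feed, options) {
+            if (err) throw err;
+            response.setBody(feed);
+        });
+
+    if (!isAccepted) throw new Error("The call was not accepted by the server.");
+}
+```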
+
+## Next steps
+
+Learn more concepts and how-to write and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB:
+
+- [How to write stored procedures and triggers using JavaScript Query API](how-to-write-javascript-query-api.md)
+- [Working with Azure Cosmos DB stored procedures, triggers and user-defined functions](stored-procedures-triggers-udfs.md)
+- [How to use stored procedures, triggers, user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md)
+- [Azure Cosmos DB JavaScript server-side API reference](https://azure.github.io/azure-cosmosdb-js-server)
+- [JavaScript ES6 (ECMA 2015)](https://www.ecma-international.org/ecma-262/6.0/)
cosmos-db Kafka Connector Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/kafka-connector-sink.md
+
+ Title: Kafka Connect for Azure Cosmos DB - Sink connector
+description: The Azure Cosmos DB sink connector allows you to export data from Apache Kafka topics to an Azure Cosmos DB database. The connector polls data from Kafka to write to containers in the database based on the topics subscription.
++++ Last updated : 05/13/2022+++
+# Kafka Connect for Azure Cosmos DB - sink connector
+
+Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB. The Azure Cosmos DB sink connector allows you to export data from Apache Kafka topics to an Azure Cosmos DB database. The connector polls data from Kafka to write to containers in the database based on the topics subscription.
+
+## Prerequisites
+
+* Start with the [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md) because it gives you a complete environment to work with. If you don't wish to use Confluent Platform, you need to install and configure ZooKeeper, Apache Kafka, and Kafka Connect yourself. You'll also need to install and configure the Azure Cosmos DB connectors manually.
+* Create an Azure Cosmos DB account and container by following the [setup guide](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/CosmosDB_Setup.md).
+* Bash shell, which is tested on GitHub Codespaces, Mac, Ubuntu, and Windows with WSL2. This shell doesn't work in Cloud Shell or WSL1.
+* Download [Java 11+](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html)
+* Download [Maven](https://maven.apache.org/download.cgi)
+
+## Install sink connector
+
+If you're using the recommended [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md), the Azure Cosmos DB sink connector is included in the installation, and you can skip this step.
+
+Otherwise, you can download the JAR file from the latest [Release](https://github.com/microsoft/kafka-connect-cosmosdb/releases) or package this repo to create a new JAR file. To install the connector manually using the JAR file, refer to these [instructions](https://docs.confluent.io/current/connect/managing/install.html#install-connector-manually). You can also package a new JAR file from the source code.
+
+```bash
+# clone the kafka-connect-cosmosdb repo if you haven't done so already
+git clone https://github.com/microsoft/kafka-connect-cosmosdb.git
+cd kafka-connect-cosmosdb
+
+# package the source code into a JAR file
+mvn clean package
+
+# include the following JAR file in Kafka Connect installation
+ls target/*dependencies.jar
+```
+
+## Create a Kafka topic and write data
+
+If you're using the Confluent Platform, the easiest way to create a Kafka topic is by using the supplied Control Center UX. Otherwise, you can create a Kafka topic manually using the following syntax:
+
+```bash
+./kafka-topics.sh --create --zookeeper <ZOOKEEPER_URL:PORT> --replication-factor <NO_OF_REPLICATIONS> --partitions <NO_OF_PARTITIONS> --topic <TOPIC_NAME>
+```
+
+For this scenario, we'll create a Kafka topic named "hotels" and will write non-schema embedded JSON data to the topic. To create a topic inside Control Center, see the [Confluent guide](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-2-create-ak-topics).
+
+Next, start the Kafka console producer to write a few records to the "hotels" topic.
+
+```powershell
+# Option 1: If using Codespaces, use the built-in CLI utility
+kafka-console-producer --broker-list localhost:9092 --topic hotels
+
+# Option 2: Using this repo's Confluent Platform setup, first exec into the broker container
+docker exec -it broker /bin/bash
+kafka-console-producer --broker-list localhost:9092 --topic hotels
+
+# Option 3: Using your Confluent Platform setup and CLI install
+<path-to-confluent>/bin/kafka-console-producer --broker-list <kafka broker hostname> --topic hotels
+```
+
+In the console producer, enter:
+
+```json
+{"id": "h1", "HotelName": "Marriott", "Description": "Marriott description"}
+{"id": "h2", "HotelName": "HolidayInn", "Description": "HolidayInn description"}
+{"id": "h3", "HotelName": "Motel8", "Description": "Motel8 description"}
+```
+
+The three records entered are published to the "hotels" Kafka topic in JSON format.
+
+## Create the sink connector
+
+Create an Azure Cosmos DB sink connector in Kafka Connect. The following JSON body defines config for the sink connector. Make sure to replace the values for `connect.cosmos.connection.endpoint` and `connect.cosmos.master.key`, properties that you should have saved from the Azure Cosmos DB setup guide in the prerequisites.
+
+For more information on each of these configuration properties, see [sink properties](#sink-configuration-properties).
+
+```json
+{
+ "name": "cosmosdb-sink-connector",
+ "config": {
+ "connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",
+ "tasks.max": "1",
+ "topics": [
+ "hotels"
+ ],
+ "value.converter": "org.apache.kafka.connect.json.JsonConverter",
+ "value.converter.schemas.enable": "false",
+ "key.converter": "org.apache.kafka.connect.json.JsonConverter",
+ "key.converter.schemas.enable": "false",
+ "connect.cosmos.connection.endpoint": "https://<cosmosinstance-name>.documents.azure.com:443/",
+ "connect.cosmos.master.key": "<cosmosdbprimarykey>",
+ "connect.cosmos.databasename": "kafkaconnect",
+ "connect.cosmos.containers.topicmap": "hotels#kafka"
+ }
+}
+```
+
+Once you have all the values filled out, save the JSON file somewhere locally. You can use this file to create the connector using the REST API.
+
+### Create connector using Control Center
+
+An easy option for creating the connector is through the Control Center webpage. Follow this [installation guide](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-3-install-a-ak-connector-and-generate-sample-data) to create a connector from Control Center. Instead of using the `DatagenConnector` option, use the `CosmosDBSinkConnector` tile. When configuring the sink connector, fill out the values as you filled them in the JSON file.
+
+Alternatively, in the connectors page, you can upload the JSON file created earlier by using the **Upload connector config file** option.
++
+### Create connector using REST API
+
+Create the sink connector using the Connect REST API:
+
+```bash
+# Curl to Kafka connect service
+curl -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file> http://localhost:8083/connectors
+
+```
+
+## Confirm data written to Azure Cosmos DB
+
+Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos DB account. Check that the three records from the "hotels" topic are created in your account.
+
+## Cleanup
+
+To delete the connector from the Control Center, navigate to the sink connector you created and select the **Delete** icon.
++
+Alternatively, use the Connect REST API to delete:
+
+```bash
+# Curl to Kafka connect service
+curl -X DELETE http://localhost:8083/connectors/cosmosdb-sink-connector
+```
+
+To delete the created Azure Cosmos DB service and its resource group using Azure CLI, refer to these [steps](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/CosmosDB_Setup.md#cleanup).
+
+## <a id="sink-configuration-properties"></a>Sink configuration properties
+
+The following settings are used to configure the Azure Cosmos DB Kafka sink connector. These configuration values determine which Kafka topics are consumed, which Azure Cosmos DB container the data is written to, and the formats used to serialize the data. For an example configuration file with the default values, refer to [this config](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/src/docker/resources/sink.example.json).
+
+| Name | Type | Description | Required/Optional |
+| : | : | : | : |
+| Topics | list | A list of Kafka topics to watch. | Required |
+| connector.class | string | Class name of the Azure Cosmos DB sink. It should be set to `com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector`. | Required |
+| connect.cosmos.connection.endpoint | uri | Azure Cosmos DB endpoint URI string. | Required |
+| connect.cosmos.master.key | string | The Azure Cosmos DB primary key that the sink connects with. | Required |
+| connect.cosmos.databasename | string | The name of the Azure Cosmos DB database the sink writes to. | Required |
+| connect.cosmos.containers.topicmap | string | Mapping between Kafka topics and Azure Cosmos DB containers, formatted using CSV as shown: `topic#container,topic2#container2`. | Required |
+| key.converter | string | Serialization format for the key data written into Kafka topic. | Required |
+| value.converter | string | Serialization format for the value data written into the Kafka topic. | Required |
+| key.converter.schemas.enable | string | Set to "true" if the key data has embedded schema. | Optional |
+| value.converter.schemas.enable | string | Set to "true" if the value data has embedded schema. | Optional |
+| tasks.max | int | Maximum number of connector sink tasks. Default is `1`. | Optional |
+
+Data is always written to Azure Cosmos DB as JSON without any schema.
+
+## Supported data types
+
+The Azure Cosmos DB sink connector converts a sink record into a JSON document and supports the following schema types:
+
+| Schema type | JSON data type |
+| : | : |
+| Array | Array |
+| Boolean | Boolean |
+| Float32 | Number |
+| Float64 | Number |
+| Int8 | Number |
+| Int16 | Number |
+| Int32 | Number |
+| Int64 | Number|
+| Map | Object (JSON)|
+| String | String<br> Null |
+| Struct | Object (JSON) |
+
+The sink Connector also supports the following AVRO logical types:
+
+| Schema Type | JSON Data Type |
+| : | : |
+| Date | Number |
+| Time | Number |
+| Timestamp | Number |
+
+> [!NOTE]
+> Byte deserialization is currently not supported by the Azure Cosmos DB sink connector.
+
+## Single Message Transforms (SMTs)
+
+Along with the sink connector settings, you can specify the use of Single Message Transformations (SMTs) to modify messages flowing through the Kafka Connect platform. For more information, see [Confluent SMT Documentation](https://docs.confluent.io/platform/current/connect/transforms/overview.html).
+
+### Using the InsertUUID SMT
+
+You can use InsertUUID SMT to automatically add item IDs. With the custom `InsertUUID` SMT, you can insert the `id` field with a random UUID value for each message, before it's written to Azure Cosmos DB.
+
+> [!WARNING]
+> Use this SMT only if the messages don't contain the `id` field. Otherwise, the `id` values will be overwritten and you may end up with duplicate items in your database. Using UUIDs as the message ID can be quick and easy, but they're [not an ideal partition key](https://stackoverflow.com/questions/49031461/would-using-a-substring-of-a-guid-in-cosmosdb-as-partitionkey-be-a-bad-idea) to use in Azure Cosmos DB.
+
+### Install the SMT
+
+Before you can use the `InsertUUID` SMT, you'll need to install this transform in your Confluent Platform setup. If you're using the Confluent Platform setup from this repo, the transform is already included in the installation, and you can skip this step.
+
+Alternatively, you can package the [InsertUUID source](https://github.com/confluentinc/kafka-connect-insert-uuid) to create a new JAR file. To install the connector manually using the JAR file, refer to these [instructions](https://docs.confluent.io/current/connect/managing/install.html#install-connector-manually).
+
+```bash
+# clone the kafka-connect-insert-uuid repo
+git clone https://github.com/confluentinc/kafka-connect-insert-uuid.git
+cd kafka-connect-insert-uuid
+
+# package the source code into a JAR file
+mvn clean package
+
+# include the following JAR file in Confluent Platform installation
+ls target/*.jar
+```
+
+### Configure the SMT
+
+Inside your sink connector config, add the following properties to set the `id`.
+
+```json
+"transforms": "insertID",
+"transforms.insertID.type": "com.github.cjmatta.kafka.connect.smt.InsertUuid$Value",
+"transforms.insertID.uuid.field.name": "id"
+```
+
+For more information on using this SMT, see the [InsertUUID repository](https://github.com/confluentinc/kafka-connect-insert-uuid).
+
+### Using SMTs to configure Time to live (TTL)
+
+Using both the `InsertField` and `Cast` SMTs, you can configure TTL on each item created in Azure Cosmos DB. Enable TTL on the container before enabling TTL at an item level. For more information, see the [time-to-live](how-to-time-to-live.md#enable-time-to-live-on-a-container-using-the-azure-portal) doc.
+
+Inside your sink connector config, add the following properties to set the TTL in seconds. In the following example, the TTL is set to 100 seconds. If the message already contains the `TTL` field, the `TTL` value is overwritten by these SMTs.
+
+```json
+"transforms": "insertTTL,castTTLInt",
+"transforms.insertTTL.type": "org.apache.kafka.connect.transforms.InsertField$Value",
+"transforms.insertTTL.static.field": "ttl",
+"transforms.insertTTL.static.value": "100",
+"transforms.castTTLInt.type": "org.apache.kafka.connect.transforms.Cast$Value",
+"transforms.castTTLInt.spec": "ttl:int32"
+```
+
+For more information on using these SMTs, see the [InsertField](https://docs.confluent.io/platform/current/connect/transforms/insertfield.html) and [Cast](https://docs.confluent.io/platform/current/connect/transforms/cast.html) documentation.
+
+## Troubleshooting common issues
+
+Here are solutions to some common problems that you may encounter when working with the Kafka sink connector.
+
+### Read non-JSON data with JsonConverter
+
+If you have non-JSON data on your source topic in Kafka and attempt to read it using the `JsonConverter`, you'll see the following exception:
+
+```console
+org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error:
+…
+org.apache.kafka.common.errors.SerializationException: java.io.CharConversionException: Invalid UTF-32 character 0x1cfa7e2 (above 0x0010ffff) at char #1, byte #7)
+
+```
+
+This error is likely caused by data in the source topic being serialized in another format, such as Avro or a CSV string.
+
+**Solution**: If the topic data is in AVRO format, then change your Kafka Connect sink connector to use the `AvroConverter` as shown below.
+
+```json
+"value.converter": "io.confluent.connect.avro.AvroConverter",
+"value.converter.schema.registry.url": "http://schema-registry:8081",
+```
+
+### Read non-Avro data with AvroConverter
+
+This scenario applies when you try to use the Avro converter to read data from a topic that isn't in Avro format. It includes data written by an Avro serializer other than the Confluent Schema Registry's Avro serializer, which has its own wire format.
+
+```console
+org.apache.kafka.connect.errors.DataException: my-topic-name
+at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:97)
+…
+org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
+org.apache.kafka.common.errors.SerializationException: Unknown magic byte!
+
+```
+
+**Solution**: Check the source topic's serialization format. Then, either switch Kafka Connect's sink connector to use the right converter or switch the upstream format to Avro.
+
+### Read a JSON message without the expected schema/payload structure
+
+Kafka Connect supports a special structure of JSON messages containing both payload and schema as follows.
+
+```json
+{
+ "schema": {
+ "type": "struct",
+ "fields": [
+ {
+ "type": "int32",
+ "optional": false,
+ "field": "userid"
+ },
+ {
+ "type": "string",
+ "optional": false,
+ "field": "name"
+ }
+ ]
+ },
+ "payload": {
+ "userid": 123,
+ "name": "Sam"
+ }
+}
+```
+
+If you try to read JSON data that doesn't contain the data in this structure, you'll get the following error:
+
+```none
+org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
+```
+
+To be clear, the only JSON structure that is valid for `schemas.enable=true` has schema and payload fields as the top-level elements as shown above. As the error message states, if you just have plain JSON data, you should change your connector's configuration to:
+
+```json
+"value.converter": "org.apache.kafka.connect.json.JsonConverter",
+"value.converter.schemas.enable": "false",
+```
+
+## Limitations
+
+* Autocreation of databases and containers in Azure Cosmos DB isn't supported. The database and containers must already exist, and they must be configured correctly.
+
+## Next steps
+
+You can learn more about the change feed in Azure Cosmos DB with the following docs:
+
+* [Introduction to the change feed](https://azurecosmosdb.github.io/labs/dotnet/labs/08-change_feed_with_azure_functions.html)
+* [Reading from change feed](read-change-feed.md)
cosmos-db Kafka Connector Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/kafka-connector-source.md
+
+ Title: Kafka Connect for Azure Cosmos DB - Source connector
+description: Azure Cosmos DB source connector provides the capability to read data from the Azure Cosmos DB change feed and publish this data to a Kafka topic. Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB.
++++ Last updated : 05/13/2022+++
+# Kafka Connect for Azure Cosmos DB - source connector
+
+Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB. The Azure Cosmos DB source connector provides the capability to read data from the Azure Cosmos DB change feed and publish this data to a Kafka topic.
+
+## Prerequisites
+
+* Start with the [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md) because it gives you a complete environment to work with. If you don't wish to use Confluent Platform, you need to install and configure ZooKeeper, Apache Kafka, and Kafka Connect yourself. You'll also need to install and configure the Azure Cosmos DB connectors manually.
+* Create an Azure Cosmos DB account and container by following the [setup guide](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/CosmosDB_Setup.md).
+* Bash shell, which is tested on GitHub Codespaces, Mac, Ubuntu, and Windows with WSL2. This shell doesn't work in Cloud Shell or WSL1.
+* Download [Java 11+](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html)
+* Download [Maven](https://maven.apache.org/download.cgi)
+
+## Install the source connector
+
+If you're using the recommended [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md), the Azure Cosmos DB source connector is included in the installation, and you can skip this step.
+
+Otherwise, you can use the JAR file from the latest [release](https://github.com/microsoft/kafka-connect-cosmosdb/releases) and install the connector manually. To learn more, see these [instructions](https://docs.confluent.io/current/connect/managing/install.html#install-connector-manually). You can also package a new JAR file from the source code:
+
+```bash
+# clone the kafka-connect-cosmosdb repo if you haven't done so already
+git clone https://github.com/microsoft/kafka-connect-cosmosdb.git
+cd kafka-connect-cosmosdb
+
+# package the source code into a JAR file
+mvn clean package
+
+# include the following JAR file in Confluent Platform installation
+ls target/*dependencies.jar
+```
+
+## Create a Kafka topic
+
+Create a Kafka topic using Confluent Control Center. For this scenario, we'll create a Kafka topic named "apparels" and write non-schema embedded JSON data to the topic. To create a topic inside the Control Center, see [create Kafka topic doc](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-2-create-ak-topics).
+
+## Create the source connector
+
+### Create the source connector in Kafka Connect
+
+To create the Azure Cosmos DB source connector in Kafka Connect, use the following JSON config. Make sure to replace the placeholder values for the `connect.cosmos.connection.endpoint` and `connect.cosmos.master.key` properties, which you should have saved from the Azure Cosmos DB setup guide in the prerequisites.
+
+```json
+{
+ "name": "cosmosdb-source-connector",
+ "config": {
+ "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector",
+ "tasks.max": "1",
+ "key.converter": "org.apache.kafka.connect.json.JsonConverter",
+ "value.converter": "org.apache.kafka.connect.json.JsonConverter",
+ "connect.cosmos.task.poll.interval": "100",
+ "connect.cosmos.connection.endpoint": "https://<cosmosinstance-name>.documents.azure.com:443/",
+ "connect.cosmos.master.key": "<cosmosdbprimarykey>",
+ "connect.cosmos.databasename": "kafkaconnect",
+ "connect.cosmos.containers.topicmap": "apparels#kafka",
+ "connect.cosmos.offset.useLatest": false,
+ "value.converter.schemas.enable": "false",
+ "key.converter.schemas.enable": "false"
+ }
+}
+```
+
+For more information on each of the above configuration properties, see the [source properties](#source-configuration-properties) section. Once you have all the values filled out, save the JSON file somewhere locally. You can use this file to create the connector using the REST API.
+
+#### Create connector using Control Center
+
+An easy option for creating the connector is from the Confluent Control Center portal. Follow the [Confluent setup guide](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-3-install-a-ak-connector-and-generate-sample-data) to create a connector from Control Center. When setting up, instead of using the `DatagenConnector` option, use the `CosmosDBSourceConnector` tile. When configuring the source connector, fill out the values as you filled them in the JSON file.
+
+Alternatively, in the connectors page, you can upload the JSON file built from the previous section by using the **Upload connector config file** option.
++
+#### Create connector using REST API
+
+Create the source connector using the Connect REST API
+
+```bash
+# Curl to Kafka connect service
+curl -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file> http://localhost:8083/connectors
+```
+
+## Insert document into Azure Cosmos DB
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos DB account.
+1. Open the **Data Explorer** tab and select **Databases**.
+1. Open the "kafkaconnect" database and "kafka" container you created earlier.
+1. To create a new JSON document, in the API for NoSQL pane, expand "kafka" container, select **Items**, then select **New Item** in the toolbar.
+1. Now, add a document to the container with the following structure. Paste the following sample JSON block into the Items tab, overwriting the current content:
+
+ ``` json
+
+ {
+ "id": "2",
+ "productId": "33218897",
+ "category": "Women's Outerwear",
+ "manufacturer": "Contoso",
+ "description": "Black wool pea-coat",
+ "price": "49.99",
+ "shipping": {
+ "weight": 2,
+ "dimensions": {
+ "width": 8,
+ "height": 11,
+ "depth": 3
+ }
+ }
+ }
+
+ ```
+
+1. Select **Save**.
+1. Confirm the document has been saved by viewing the Items on the left-hand menu.
+
+### Confirm data written to Kafka topic
+
+1. Open Kafka Topic UI on `http://localhost:9000`.
+1. Select the Kafka "apparels" topic you created.
+1. Verify that the document you inserted into Azure Cosmos DB earlier appears in the Kafka topic.
+
+### Cleanup
+
+To delete the connector from the Confluent Control Center, navigate to the source connector you created and select the **Delete** icon.
++
+Alternatively, use the connector's REST API:
+
+```bash
+# Curl to Kafka connect service
+curl -X DELETE http://localhost:8083/connectors/cosmosdb-source-connector
+```
+
+To delete the created Azure Cosmos DB service and its resource group using Azure CLI, refer to these [steps](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/CosmosDB_Setup.md#cleanup).
+
+## Source configuration properties
+
+The following settings are used to configure the Kafka source connector. These configuration values determine which Azure Cosmos DB container is consumed, which Kafka topics the data is written to, and the formats used to serialize the data. For an example with default values, see this [configuration file](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/src/docker/resources/source.example.json).
+
+| Name | Type | Description | Required/optional |
+| : | : | : | : |
+| connector.class | String | Class name of the Azure Cosmos DB source. It should be set to `com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector` | Required |
+| connect.cosmos.databasename | String | Name of the database to read from. | Required |
+| connect.cosmos.master.key | String | The Azure Cosmos DB primary key. | Required |
+| connect.cosmos.connection.endpoint | URI | The account endpoint. | Required |
+| connect.cosmos.containers.topicmap | String | Comma-separated topic to container mapping. For example, topic1#coll1, topic2#coll2 | Required |
+| connect.cosmos.messagekey.enabled | Boolean | This value represents whether the Kafka message key should be set. Default value is `true`. | Required |
+| connect.cosmos.messagekey.field | String | Use the field's value from the document as the message key. Default is `id`. | Required |
+| connect.cosmos.offset.useLatest | Boolean | Set to `true` to use the most recent source offset. Set to `false` to use the earliest recorded offset. Default value is `false`. | Required |
+| connect.cosmos.task.poll.interval | Int | Interval to poll the change feed container for changes. | Required |
+| key.converter | String | Serialization format for the key data written into Kafka topic. | Required |
+| value.converter | String | Serialization format for the value data written into the Kafka topic. | Required |
+| key.converter.schemas.enable | String | Set to `true` if the key data has embedded schema. | Optional |
+| value.converter.schemas.enable | String | Set to `true` if the value data has embedded schema. | Optional |
+| tasks.max | Int | Maximum number of connector source tasks. Default value is `1`. | Optional |
+
+## Supported data types
+
+The Azure Cosmos DB source connector converts JSON documents to a schema and supports the following JSON data types:
+
+| JSON data type | Schema type |
+| : | : |
+| Array | Array |
+| Boolean | Boolean |
+| Number | Float32<br>Float64<br>Int8<br>Int16<br>Int32<br>Int64|
+| Null | String |
+| Object (JSON)| Struct|
+| String | String |
+
+## Next steps
+
+* Kafka Connect for Azure Cosmos DB [sink connector](kafka-connector-sink.md)
cosmos-db Kafka Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/kafka-connector.md
+
+ Title: Use Kafka Connect for Azure Cosmos DB to read and write data
+description: Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB. Kafka Connect is a tool for scalable and reliably streaming data between Apache Kafka and other systems
++++ Last updated : 06/28/2021+++
+# Kafka Connect for Azure Cosmos DB
+
+[Kafka Connect](http://kafka.apache.org/documentation.html#connect) is a tool for scalably and reliably streaming data between Apache Kafka and other systems. Using Kafka Connect, you can define connectors that move large data sets into and out of Kafka. Kafka Connect for Azure Cosmos DB is a connector to read data from and write data to Azure Cosmos DB.
+
+## Source & sink connectors semantics
+
+* **Source connector** - Currently, this connector supports at-least-once delivery with multiple tasks and exactly-once delivery for single tasks.
+
+* **Sink connector** - This connector fully supports exactly-once semantics.
+
+## Supported data formats
+
+The source and sink connectors can be configured to support the following data formats:
+
+| Format | Description |
+| :-- | :- |
+| Plain JSON | JSON record structure without any attached schema. |
+| JSON with schema | JSON record structure with explicit schema information to ensure the data matches the expected format. |
+| AVRO | A row-oriented remote procedure call and data serialization framework developed within Apache's Hadoop project. It uses JSON to define data types and protocols, and serializes data in a compact binary format. |
+
+The key and value settings, including the format and serialization, can be configured independently in Kafka, so it's possible to work with different data formats for keys and values. To cater for different data formats, there is converter configuration for both `key.converter` and `value.converter`.
+
+## Converter configuration examples
+
+### <a id="json-plain"></a>Plain JSON
+
+If you need to use JSON without a schema registry for Connect data, use the `JsonConverter` supported with Kafka. The following example shows the `JsonConverter` key and value properties that are added to the configuration:
+
+ ```java
+ key.converter=org.apache.kafka.connect.json.JsonConverter
+ key.converter.schemas.enable=false
+ value.converter=org.apache.kafka.connect.json.JsonConverter
+ value.converter.schemas.enable=false
+ ```
+
+### <a id="json-with-schema"></a>JSON with schema
+
+Set the properties `key.converter.schemas.enable` and `value.converter.schemas.enable` to true so that the key or value is treated as a composite JSON object that contains both an internal schema and the data. Without these properties, the key or value is treated as plain JSON.
+
+ ```java
+ key.converter=org.apache.kafka.connect.json.JsonConverter
+ key.converter.schemas.enable=true
+ value.converter=org.apache.kafka.connect.json.JsonConverter
+ value.converter.schemas.enable=true
+ ```
+
+The resulting message to Kafka would look like the example below, with schema and payload as top-level elements in the JSON:
+
+ ```json
+ {
+ "schema": {
+ "type": "struct",
+ "fields": [
+ {
+ "type": "int32",
+ "optional": false,
+ "field": "userid"
+ },
+ {
+ "type": "string",
+ "optional": false,
+ "field": "name"
+ }
+ ],
+ "optional": false,
+ "name": "ksql.users"
+ },
+ "payload": {
+ "userid": 123,
+ "name": "user's name"
+ }
+ }
+ ```
+
+> [!NOTE]
+> The message written to Azure Cosmos DB is made up of the schema and payload. Notice the size of the message, as well as the proportion of it that is made up of the payload vs. the schema. The schema is repeated in every message you write to Kafka. In scenarios like this, you may want to use a serialization format like JSON Schema or AVRO, where the schema is stored separately, and the message holds just the payload.
+
+### <a id="avro"></a>AVRO
+
+The Kafka connector supports the AVRO data format. To use AVRO format, configure an `AvroConverter` so that Kafka Connect knows how to work with AVRO data. Azure Cosmos DB Kafka Connect has been tested with the [AvroConverter](https://www.confluent.io/hub/confluentinc/kafka-connect-avro-converter) supplied by Confluent, under the Apache 2.0 license. You can also use a different custom converter if you prefer.
+
+Kafka deals with keys and values independently. Specify the `key.converter` and `value.converter` properties as required in the worker configuration. When using `AvroConverter`, add an extra converter property that provides the URL for the schema registry. The following example shows the AvroConverter key and value properties that are added to the configuration:
+
+ ```java
+ key.converter=io.confluent.connect.avro.AvroConverter
+ key.converter.schema.registry.url=http://schema-registry:8081
+ value.converter=io.confluent.connect.avro.AvroConverter
+ value.converter.schema.registry.url=http://schema-registry:8081
+ ```
+
+## Choose a conversion format
+
+The following are some considerations on how to choose a conversion format:
+
+* When configuring a **Source connector**:
+
+ * If you want Kafka Connect to include plain JSON in the message it writes to Kafka, use the [Plain JSON](#json-plain) configuration.
+
+ * If you want Kafka Connect to include the schema in the message it writes to Kafka, use the [JSON with schema](#json-with-schema) configuration.
+
+ * If you want Kafka Connect to write the message to Kafka in AVRO format, use the [AVRO](#avro) configuration.
+
+* If you're consuming JSON data from a Kafka topic into a **Sink connector**, understand how the JSON was serialized when it was written to the Kafka topic:
+
+ * If it was written with the JSON serializer, set Kafka Connect to use the JSON converter (`org.apache.kafka.connect.json.JsonConverter`).
+
+ * If the JSON data was written as a plain string, determine whether the data includes a nested schema or payload. If it does, use the [JSON with schema](#json-with-schema) configuration.
+ * However, if you're consuming JSON data and it doesn't have the schema or payload construct, then you must tell Kafka Connect **not** to look for a schema by setting `schemas.enable=false`, as in the [Plain JSON](#json-plain) configuration.
+
+ * If it was written with the AVRO serializer, set Kafka Connect to use the AVRO converter (`io.confluent.connect.avro.AvroConverter`), as in the [AVRO](#avro) configuration.
+
+## Configuration
+
+### Common configuration properties
+
+The source and sink connectors share the following common configuration properties:
+
+| Name | Type | Description | Required/Optional |
+| :--- | :--- | :--- | :--- |
+| connect.cosmos.connection.endpoint | uri | Azure Cosmos DB endpoint URI string | Required |
+| connect.cosmos.master.key | string | The Azure Cosmos DB primary key that the sink connects with. | Required |
+| connect.cosmos.databasename | string | The name of the Azure Cosmos DB database the sink writes to. | Required |
+| connect.cosmos.containers.topicmap | string | Mapping between Kafka topics and Azure Cosmos DB containers. It is formatted using CSV as `topic#container,topic2#container2` | Required |
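+
+As a hedged sketch, these shared properties might appear together in a connector configuration as follows; the account name, key, database, and topic-to-container mappings are placeholders:
+
+```json
+{
+  "connect.cosmos.connection.endpoint": "https://<cosmos-account>.documents.azure.com:443/",
+  "connect.cosmos.master.key": "<primary-key>",
+  "connect.cosmos.databasename": "kafkaconnect",
+  "connect.cosmos.containers.topicmap": "hotels#kafka,reviews#kafka-reviews"
+}
+```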
+
+For sink connector-specific configuration, see the [Sink Connector Documentation](kafka-connector-sink.md).
+
+For source connector-specific configuration, see the [Source Connector Documentation](kafka-connector-source.md).
+
+## Common configuration errors
+
+If you misconfigure the converters in Kafka Connect, it can result in errors. These errors show up at the sink connector, because that's where the messages already stored in Kafka are deserialized. Converter problems don't usually occur at the source, because serialization is set at the source.
+
+For more information, see the [common configuration errors](https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/#common-errors) documentation.
+
+## Project setup
+
+Refer to the [Developer walkthrough and project setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Developer_Walkthrough.md) for initial setup instructions.
+
+## Performance testing
+
+For more information on the performance tests run for the sink and source connectors, see the [Performance testing document](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Performance_Testing.md).
+
+Refer to the [Performance environment setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/src/perf/README.md) for exact steps on deploying the performance test environment for the connectors.
+
+## Resources
+
+* [Kafka Connect](http://kafka.apache.org/documentation.html#connect)
+* [Kafka Connect Deep Dive - Converters and Serialization Explained](https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/)
+
+## Next steps
+
+* Kafka Connect for Azure Cosmos DB [source connector](kafka-connector-source.md)
+* Kafka Connect for Azure Cosmos DB [sink connector](kafka-connector-sink.md)
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-bicep.md
+
+ Title: Create and manage Azure Cosmos DB with Bicep
+description: Use Bicep to create and configure Azure Cosmos DB for API for NoSQL
+++++ Last updated : 02/18/2022++++
+# Manage Azure Cosmos DB for NoSQL resources with Bicep
++
+In this article, you learn how to use Bicep to deploy and manage your Azure Cosmos DB accounts, databases, and containers.
+
+This article shows Bicep samples for API for NoSQL accounts. You can also find Bicep samples for the [Cassandra](../cassandr) API.
+
+> [!IMPORTANT]
+>
+> * Account names are limited to 44 characters, all lowercase.
+> * To change the throughput (RU/s) values, redeploy the Bicep file with updated RU/s.
+> * When you add or remove locations for an Azure Cosmos DB account, you can't simultaneously modify other properties. These operations must be done separately.
+> * To provision throughput at the database level and share across all containers, apply the throughput values to the database options property.
+
+To create any of the Azure Cosmos DB resources below, copy the following example into a new Bicep file. You can optionally create a parameters file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Azure Bicep files, including the [Azure CLI](../../azure-resource-manager/bicep/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/bicep/deploy-powershell.md), and [Cloud Shell](../../azure-resource-manager/bicep/deploy-cloud-shell.md).
+
+<a id="create-autoscale"></a>
+
+## Azure Cosmos DB account with autoscale throughput
+
+Create an Azure Cosmos DB account in two regions with options for consistency and failover, and a database and container configured for autoscale throughput with most index policy options enabled.
++
+<a id="create-analytical-store"></a>
+
+## Azure Cosmos DB account with analytical store
+
+Create an Azure Cosmos DB account in one region with a container that has analytical TTL enabled, and options for manual or autoscale throughput.
++
+<a id="create-manual"></a>
+
+## Azure Cosmos DB account with standard provisioned throughput
+
+Create an Azure Cosmos DB account in two regions with options for consistency and failover, and a database and container configured for standard throughput with most index policy options enabled.
++
+<a id="create-sproc"></a>
+
+## Azure Cosmos DB container with server-side functionality
+
+Create an Azure Cosmos DB account, database, and container with a stored procedure, trigger, and user-defined function.
++
+<a id="create-rbac"></a>
+
+## Azure Cosmos DB account with Azure AD and RBAC
+
+Create an Azure Cosmos DB account, a natively maintained Role Definition, and a natively maintained Role Assignment for an Azure AD identity.
++
+<a id="free-tier"></a>
+
+## Free tier Azure Cosmos DB account
+
+Create a free-tier Azure Cosmos DB account and a database with shared throughput that can be shared with up to 25 containers.
++
+## Next steps
+
+Here are some additional resources:
+
+* [Bicep documentation](../../azure-resource-manager/bicep/index.yml)
+* [Install Bicep tools](../../azure-resource-manager/bicep/install.md)
cosmos-db Manage With Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-cli.md
+
+ Title: Manage Azure Cosmos DB for NoSQL resources using Azure CLI
+description: Manage Azure Cosmos DB for NoSQL resources using Azure CLI.
++++ Last updated : 02/18/2022++++
+# Manage Azure Cosmos DB for NoSQL resources using Azure CLI
++
+The following guide describes common commands to automate management of your Azure Cosmos DB accounts, databases and containers using Azure CLI. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). You can also find more examples in [Azure CLI samples for Azure Cosmos DB](cli-samples.md), including how to create and manage Azure Cosmos DB accounts, databases and containers for MongoDB, Gremlin, Cassandra and API for Table.
++
+- This article requires version 2.22.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+For Azure CLI samples for other APIs, see [CLI Samples for Cassandra](../cassandr).
+
+> [!IMPORTANT]
+> Azure Cosmos DB resources cannot be renamed as this violates how Azure Resource Manager works with resource URIs.
+
+## Azure Cosmos DB accounts
+
+The following sections demonstrate how to manage the Azure Cosmos DB account, including:
+
+- [Create an Azure Cosmos DB account](#create-an-azure-cosmos-db-account)
+- [Add or remove regions](#add-or-remove-regions)
+- [Enable multi-region writes](#enable-multiple-write-regions)
+- [Set regional failover priority](#set-failover-priority)
+- [Enable service-managed failover](#enable-service-managed-failover)
+- [Trigger manual failover](#trigger-manual-failover)
+- [List account keys](#list-account-keys)
+- [List read-only account keys](#list-read-only-account-keys)
+- [List connection strings](#list-connection-strings)
+- [Regenerate account key](#regenerate-account-key)
+
+### Create an Azure Cosmos DB account
+
+Create an Azure Cosmos DB account with the API for NoSQL and Session consistency in the West US and East US regions:
+
+> [!IMPORTANT]
+> The Azure Cosmos DB account name must be lowercase and less than 44 characters.
+
+```azurecli-interactive
+resourceGroupName='MyResourceGroup'
+accountName='mycosmosaccount' #needs to be lower case and less than 44 characters
+
+az cosmosdb create \
+ -n $accountName \
+ -g $resourceGroupName \
+ --default-consistency-level Session \
+ --locations regionName='West US' failoverPriority=0 isZoneRedundant=False \
+ --locations regionName='East US' failoverPriority=1 isZoneRedundant=False
+```
+
+### Add or remove regions
+
+Create an Azure Cosmos DB account with two regions, add a region, and remove a region.
+
+> [!NOTE]
+> You cannot simultaneously add or remove regions (`locations`) and change other properties for an Azure Cosmos DB account. Modifying regions must be performed as a separate operation from any other change to the account resource.
+> [!NOTE]
+> This command allows you to add and remove regions but does not allow you to modify failover priorities or trigger a manual failover. See [Set failover priority](#set-failover-priority) and [Trigger manual failover](#trigger-manual-failover).
+> [!TIP]
+> When a new region is added, all data must be fully replicated and committed into the new region before the region is marked as available. The amount of time this operation takes will depend upon how much data is stored within the account. If an [asynchronous throughput scaling operation](../scaling-provisioned-throughput-best-practices.md#background-on-scaling-rus) is in progress, the throughput scale-up operation will be paused and will resume automatically when the add/remove region operation is complete.
+
+```azurecli-interactive
+resourceGroupName='myResourceGroup'
+accountName='mycosmosaccount'
+
+# Create an account with 2 regions
+az cosmosdb create --name $accountName --resource-group $resourceGroupName \
+ --locations regionName="West US" failoverPriority=0 isZoneRedundant=False \
+ --locations regionName="East US" failoverPriority=1 isZoneRedundant=False
+
+# Add a region
+az cosmosdb update --name $accountName --resource-group $resourceGroupName \
+ --locations regionName="West US" failoverPriority=0 isZoneRedundant=False \
+ --locations regionName="East US" failoverPriority=1 isZoneRedundant=False \
+ --locations regionName="South Central US" failoverPriority=2 isZoneRedundant=False
+
+# Remove a region
+az cosmosdb update --name $accountName --resource-group $resourceGroupName \
+ --locations regionName="West US" failoverPriority=0 isZoneRedundant=False \
+ --locations regionName="East US" failoverPriority=1 isZoneRedundant=False
+```
+
+### Enable multiple write regions
+
+Enable multi-region writes for an Azure Cosmos DB account
+
+```azurecli-interactive
+# Update an Azure Cosmos DB account from single write region to multiple write regions
+resourceGroupName='myResourceGroup'
+accountName='mycosmosaccount'
+
+# Get the account resource id for an existing account
+accountId=$(az cosmosdb show -g $resourceGroupName -n $accountName --query id -o tsv)
+
+az cosmosdb update --ids $accountId --enable-multiple-write-locations true
+```
+
+### Set failover priority
+
+Set the failover priority for an Azure Cosmos DB account configured for service-managed failover
+
+```azurecli-interactive
+# Assume region order is initially 'West US'=0 'East US'=1 'South Central US'=2 for account
+resourceGroupName='myResourceGroup'
+accountName='mycosmosaccount'
+
+# Get the account resource id for an existing account
+accountId=$(az cosmosdb show -g $resourceGroupName -n $accountName --query id -o tsv)
+
+# Make South Central US the next region to fail over to instead of East US
+az cosmosdb failover-priority-change --ids $accountId \
+ --failover-policies 'West US=0' 'South Central US=1' 'East US=2'
+```
+
+### Enable service-managed failover
+
+```azurecli-interactive
+# Enable service-managed failover on an existing account
+resourceGroupName='myResourceGroup'
+accountName='mycosmosaccount'
+
+# Get the account resource id for an existing account
+accountId=$(az cosmosdb show -g $resourceGroupName -n $accountName --query id -o tsv)
+
+az cosmosdb update --ids $accountId --enable-automatic-failover true
+```
+
+### Trigger manual failover
+
+> [!CAUTION]
+> Changing the region with priority = 0 will trigger a manual failover for an Azure Cosmos DB account. Any other priority change will not trigger a failover.
+
+> [!NOTE]
+> If you perform a manual failover operation while an [asynchronous throughput scaling operation](../scaling-provisioned-throughput-best-practices.md#background-on-scaling-rus) is in progress, the throughput scale-up operation will be paused. It will resume automatically when the failover operation is complete.
+
+```azurecli-interactive
+# Assume region order is initially 'West US=0' 'East US=1' 'South Central US=2' for account
+resourceGroupName='myResourceGroup'
+accountName='mycosmosaccount'
+
+# Get the account resource id for an existing account
+accountId=$(az cosmosdb show -g $resourceGroupName -n $accountName --query id -o tsv)
+
+# Trigger a manual failover to promote East US as the new write region
+az cosmosdb failover-priority-change --ids $accountId \
+ --failover-policies 'East US=0' 'South Central US=1' 'West US=2'
+```
+
+### <a id="list-account-keys"></a> List all account keys
+
+Get all keys for an Azure Cosmos DB account.
+
+```azurecli-interactive
+# List all account keys
+resourceGroupName='MyResourceGroup'
+accountName='mycosmosaccount'
+
+az cosmosdb keys list \
+ -n $accountName \
+ -g $resourceGroupName
+```
+
+### List read-only account keys
+
+Get read-only keys for an Azure Cosmos DB account.
+
+```azurecli-interactive
+# List read-only account keys
+resourceGroupName='MyResourceGroup'
+accountName='mycosmosaccount'
+
+az cosmosdb keys list \
+ -n $accountName \
+ -g $resourceGroupName \
+ --type read-only-keys
+```
+
+### List connection strings
+
+Get the connection strings for an Azure Cosmos DB account.
+
+```azurecli-interactive
+# List connection strings
+resourceGroupName='MyResourceGroup'
+accountName='mycosmosaccount'
+
+az cosmosdb keys list \
+ -n $accountName \
+ -g $resourceGroupName \
+ --type connection-strings
+```
+
+### Regenerate account key
+
+Regenerate a key for an Azure Cosmos DB account.
+
+```azurecli-interactive
+# Regenerate secondary account keys
+# key-kind values: primary, primaryReadonly, secondary, secondaryReadonly
+az cosmosdb keys regenerate \
+ -n $accountName \
+ -g $resourceGroupName \
+ --key-kind secondary
+```
+
+## Azure Cosmos DB database
+
+The following sections demonstrate how to manage the Azure Cosmos DB database, including:
+
+- [Create a database](#create-a-database)
+- [Create a database with shared throughput](#create-a-database-with-shared-throughput)
+- [Migrate a database to autoscale throughput](#migrate-a-database-to-autoscale-throughput)
+- [Change database throughput](#change-database-throughput)
+- [Prevent a database from being deleted](#prevent-a-database-from-being-deleted)
+
+### Create a database
+
+Create an Azure Cosmos DB database.
+
+```azurecli-interactive
+resourceGroupName='MyResourceGroup'
+accountName='mycosmosaccount'
+databaseName='database1'
+
+az cosmosdb sql database create \
+ -a $accountName \
+ -g $resourceGroupName \
+ -n $databaseName
+```
+
+### Create a database with shared throughput
+
+Create an Azure Cosmos DB database with shared throughput.
+
+```azurecli-interactive
+resourceGroupName='MyResourceGroup'
+accountName='mycosmosaccount'
+databaseName='database1'
+throughput=400
+
+az cosmosdb sql database create \
+ -a $accountName \
+ -g $resourceGroupName \
+ -n $databaseName \
+ --throughput $throughput
+```
+
+### Migrate a database to autoscale throughput
+
+```azurecli-interactive
+resourceGroupName='MyResourceGroup'
+accountName='mycosmosaccount'
+databaseName='database1'
+
+# Migrate to autoscale throughput
+az cosmosdb sql database throughput migrate \
+ -a $accountName \
+ -g $resourceGroupName \
+ -n $databaseName \
+ -t 'autoscale'
+
+# Read the new autoscale max throughput
+az cosmosdb sql database throughput show \
+ -g $resourceGroupName \
+ -a $accountName \
+ -n $databaseName \
+ --query resource.autoscaleSettings.maxThroughput \
+ -o tsv
+```
+
+### Change database throughput
+
+Increase the throughput of an Azure Cosmos DB database to 1000 RU/s.
+
+```azurecli-interactive
+resourceGroupName='MyResourceGroup'
+accountName='mycosmosaccount'
+databaseName='database1'
+newRU=1000
+
+# Get minimum throughput to make sure newRU is not lower than minRU
+minRU=$(az cosmosdb sql database throughput show \
+ -g $resourceGroupName -a $accountName -n $databaseName \
+ --query resource.minimumThroughput -o tsv)
+
+if [ $minRU -gt $newRU ]; then
+ newRU=$minRU
+fi
+
+az cosmosdb sql database throughput update \
+ -a $accountName \
+ -g $resourceGroupName \
+ -n $databaseName \
+ --throughput $newRU
+```
+
+### Prevent a database from being deleted
+
+Put an Azure resource delete lock on a database to prevent it from being deleted. This feature requires locking the Azure Cosmos DB account so that it can't be changed by data plane SDKs. To learn more, see [preventing changes from SDKs](../role-based-access-control.md#prevent-sdk-changes). Azure resource locks can also prevent a resource from being changed by specifying a `ReadOnly` lock type. For an Azure Cosmos DB database, a `ReadOnly` lock can be used to prevent throughput from being changed.
+
+```azurecli-interactive
+resourceGroupName='myResourceGroup'
+accountName='mycosmosaccount'
+databaseName='database1'
+
+lockType='CanNotDelete' # CanNotDelete or ReadOnly
+databaseParent="databaseAccounts/$accountName"
+databaseLockName="$databaseName-Lock"
+
+# Create a delete lock on database
+az lock create --name $databaseLockName \
+ --resource-group $resourceGroupName \
+ --resource-type Microsoft.DocumentDB/sqlDatabases \
+ --lock-type $lockType \
+ --parent $databaseParent \
+ --resource $databaseName
+
+# Delete lock on database
+lockid=$(az lock show --name $databaseLockName \
+ --resource-group $resourceGroupName \
+ --resource-type Microsoft.DocumentDB/sqlDatabases \
+ --resource $databaseName \
+ --parent $databaseParent \
+ --output tsv --query id)
+az lock delete --ids $lockid
+```
+
+## Azure Cosmos DB container
+
+The following sections demonstrate how to manage the Azure Cosmos DB container, including:
+
+- [Create a container](#create-a-container)
+- [Create a container with autoscale](#create-a-container-with-autoscale)
+- [Create a container with TTL enabled](#create-a-container-with-ttl)
+- [Create a container with custom index policy](#create-a-container-with-a-custom-index-policy)
+- [Change container throughput](#change-container-throughput)
+- [Migrate a container to autoscale throughput](#migrate-a-container-to-autoscale-throughput)
+- [Prevent a container from being deleted](#prevent-a-container-from-being-deleted)
+
+### Create a container
+
+Create an Azure Cosmos DB container with default index policy, partition key and RU/s of 400.
+
+```azurecli-interactive
+# Create an API for NoSQL container
+resourceGroupName='MyResourceGroup'
+accountName='mycosmosaccount'
+databaseName='database1'
+containerName='container1'
+partitionKey='/myPartitionKey'
+throughput=400
+
+az cosmosdb sql container create \
+ -a $accountName -g $resourceGroupName \
+ -d $databaseName -n $containerName \
+ -p $partitionKey --throughput $throughput
+```
+
+### Create a container with autoscale
+
+Create an Azure Cosmos DB container with default index policy, partition key and autoscale RU/s of 4000.
+
+```azurecli-interactive
+# Create an API for NoSQL container
+resourceGroupName='MyResourceGroup'
+accountName='mycosmosaccount'
+databaseName='database1'
+containerName='container1'
+partitionKey='/myPartitionKey'
+maxThroughput=4000
+
+az cosmosdb sql container create \
+ -a $accountName -g $resourceGroupName \
+ -d $databaseName -n $containerName \
+ -p $partitionKey --max-throughput $maxThroughput
+```
+
+### Create a container with TTL
+
+Create an Azure Cosmos DB container with TTL enabled.
+
+```azurecli-interactive
+# Create an Azure Cosmos DB container with TTL of one day
+resourceGroupName='myResourceGroup'
+accountName='mycosmosaccount'
+databaseName='database1'
+containerName='container1'
+
+az cosmosdb sql container update \
+ -g $resourceGroupName \
+ -a $accountName \
+ -d $databaseName \
+ -n $containerName \
+ --ttl=86400
+```
+
+### Create a container with a custom index policy
+
+Create an Azure Cosmos DB container with a custom index policy, a spatial index, composite index, a partition key and RU/s of 400.
+
+```azurecli-interactive
+# Create an API for NoSQL container
+resourceGroupName='MyResourceGroup'
+accountName='mycosmosaccount'
+databaseName='database1'
+containerName='container1'
+partitionKey='/myPartitionKey'
+throughput=400
+
+# Generate a unique 10 character alphanumeric string to ensure unique resource names
+uniqueId=$(env LC_CTYPE=C tr -dc 'a-z0-9' < /dev/urandom | fold -w 10 | head -n 1)
+
+# Define the index policy for the container, include spatial and composite indexes
+idxpolicy=$(cat << EOF
+{
+ "indexingMode": "consistent",
+ "includedPaths": [
+ {"path": "/*"}
+ ],
+ "excludedPaths": [
+ { "path": "/headquarters/employees/?"}
+ ],
+ "spatialIndexes": [
+ {"path": "/*", "types": ["Point"]}
+ ],
+ "compositeIndexes":[
+ [
+ { "path":"/name", "order":"ascending" },
+ { "path":"/age", "order":"descending" }
+ ]
+ ]
+}
+EOF
+)
+# Persist index policy to json file
+echo "$idxpolicy" > "idxpolicy-$uniqueId.json"
++
+az cosmosdb sql container create \
+ -a $accountName -g $resourceGroupName \
+ -d $databaseName -n $containerName \
+ -p $partitionKey --throughput $throughput \
+ --idx @idxpolicy-$uniqueId.json
+
+# Clean up temporary index policy file
+rm -f "idxpolicy-$uniqueId.json"
+```
+
+### Change container throughput
+
+Increase the throughput of an Azure Cosmos DB container to 1000 RU/s.
+
+```azurecli-interactive
+resourceGroupName='MyResourceGroup'
+accountName='mycosmosaccount'
+databaseName='database1'
+containerName='container1'
+newRU=1000
+
+# Get minimum throughput to make sure newRU is not lower than minRU
+minRU=$(az cosmosdb sql container throughput show \
+ -g $resourceGroupName -a $accountName -d $databaseName \
+ -n $containerName --query resource.minimumThroughput -o tsv)
+
+if [ $minRU -gt $newRU ]; then
+ newRU=$minRU
+fi
+
+az cosmosdb sql container throughput update \
+ -a $accountName \
+ -g $resourceGroupName \
+ -d $databaseName \
+ -n $containerName \
+ --throughput $newRU
+```
+
+### Migrate a container to autoscale throughput
+
+```azurecli-interactive
+resourceGroupName='MyResourceGroup'
+accountName='mycosmosaccount'
+databaseName='database1'
+containerName='container1'
+
+# Migrate to autoscale throughput
+az cosmosdb sql container throughput migrate \
+ -a $accountName \
+ -g $resourceGroupName \
+ -d $databaseName \
+ -n $containerName \
+ -t 'autoscale'
+
+# Read the new autoscale max throughput
+az cosmosdb sql container throughput show \
+ -g $resourceGroupName \
+ -a $accountName \
+ -d $databaseName \
+ -n $containerName \
+ --query resource.autoscaleSettings.maxThroughput \
+ -o tsv
+```
+
+### Prevent a container from being deleted
+
+Put an Azure resource delete lock on a container to prevent it from being deleted. This feature requires locking the Azure Cosmos DB account so that it can't be changed by data plane SDKs. To learn more, see [preventing changes from SDKs](../role-based-access-control.md#prevent-sdk-changes). Azure resource locks can also prevent a resource from being changed by specifying a `ReadOnly` lock type. For an Azure Cosmos DB container, locks can be used to prevent throughput or any other property from being changed.
+
+```azurecli-interactive
+resourceGroupName='myResourceGroup'
+accountName='mycosmosaccount'
+databaseName='database1'
+containerName='container1'
+
+lockType='CanNotDelete' # CanNotDelete or ReadOnly
+databaseParent="databaseAccounts/$accountName"
+containerParent="databaseAccounts/$accountName/sqlDatabases/$databaseName"
+containerLockName="$containerName-Lock"
+
+# Create a delete lock on container
+az lock create --name $containerLockName \
+ --resource-group $resourceGroupName \
+ --resource-type Microsoft.DocumentDB/containers \
+ --lock-type $lockType \
+ --parent $containerParent \
+ --resource $containerName
+
+# Delete lock on container
+lockid=$(az lock show --name $containerLockName \
+ --resource-group $resourceGroupName \
+ --resource-type Microsoft.DocumentDB/containers \
+ --resource-name $containerName \
+ --parent $containerParent \
+ --output tsv --query id)
+az lock delete --ids $lockid
+```
+
+## Next steps
+
+For more information on the Azure CLI, see:
+
+- [Install Azure CLI](/cli/azure/install-azure-cli)
+- [Azure CLI Reference](/cli/azure/cosmosdb)
+- [More Azure CLI samples for Azure Cosmos DB](cli-samples.md)
cosmos-db Manage With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-powershell.md
+
+ Title: Manage Azure Cosmos DB for NoSQL resources using PowerShell
+description: Manage Azure Cosmos DB for NoSQL resources using PowerShell.
++++ Last updated : 02/18/2022+++++
+# Manage Azure Cosmos DB for NoSQL resources using PowerShell
+
+The following guide describes how to use PowerShell to script and automate management of Azure Cosmos DB for NoSQL resources, including the Azure Cosmos DB account, database, container, and throughput. For PowerShell cmdlets for other APIs, see [PowerShell Samples for Cassandra](../cassandr).
+
+> [!NOTE]
+> Samples in this article use [Az.CosmosDB](/powershell/module/az.cosmosdb) management cmdlets. See the [Az.CosmosDB](/powershell/module/az.cosmosdb) API reference page for the latest changes.
+
+For cross-platform management of Azure Cosmos DB, you can use the `Az` and `Az.CosmosDB` cmdlets with [cross-platform PowerShell](/powershell/scripting/install/installing-powershell), as well as the [Azure CLI](manage-with-cli.md), the [REST API][rp-rest-api], or the [Azure portal](quickstart-dotnet.md#create-account).
++
+## Getting Started
+
+Follow the instructions in [How to install and configure Azure PowerShell][powershell-install-configure] to install and sign in to your Azure account in PowerShell.
+
+> [!IMPORTANT]
+> Azure Cosmos DB resources cannot be renamed as this violates how Azure Resource Manager works with resource URIs.
+
+## Azure Cosmos DB accounts
+
+The following sections demonstrate how to manage the Azure Cosmos DB account, including:
+
+* [Create an Azure Cosmos DB account](#create-account)
+* [Update an Azure Cosmos DB account](#update-account)
+* [List all Azure Cosmos DB accounts in a subscription](#list-accounts)
+* [Get an Azure Cosmos DB account](#get-account)
+* [Delete an Azure Cosmos DB account](#delete-account)
+* [Update tags for an Azure Cosmos DB account](#update-tags)
+* [List keys for an Azure Cosmos DB account](#list-keys)
+* [Regenerate keys for an Azure Cosmos DB account](#regenerate-keys)
+* [List connection strings for an Azure Cosmos DB account](#list-connection-strings)
+* [Modify failover priority for an Azure Cosmos DB account](#modify-failover-priority)
+* [Trigger a manual failover for an Azure Cosmos DB account](#trigger-manual-failover)
+* [List resource locks on an Azure Cosmos DB account](#list-account-locks)
+
+### <a id="create-account"></a> Create an Azure Cosmos DB account
+
+This command creates an Azure Cosmos DB database account with [multiple regions][distribute-data-globally], [service-managed failover](../how-to-manage-database-account.md#automatic-failover) and bounded-staleness [consistency policy](../consistency-levels.md).
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$apiKind = "Sql"
+$consistencyLevel = "BoundedStaleness"
+$maxStalenessInterval = 300
+$maxStalenessPrefix = 100000
+$locations = @()
+$locations += New-AzCosmosDBLocationObject -LocationName "East US" -FailoverPriority 0 -IsZoneRedundant 0
+$locations += New-AzCosmosDBLocationObject -LocationName "West US" -FailoverPriority 1 -IsZoneRedundant 0
+
+New-AzCosmosDBAccount `
+ -ResourceGroupName $resourceGroupName `
+ -LocationObject $locations `
+ -Name $accountName `
+ -ApiKind $apiKind `
+ -EnableAutomaticFailover:$true `
+ -DefaultConsistencyLevel $consistencyLevel `
+ -MaxStalenessIntervalInSeconds $maxStalenessInterval `
+ -MaxStalenessPrefix $maxStalenessPrefix
+```
+
+* `$resourceGroupName` The Azure resource group into which to deploy the Azure Cosmos DB account. It must already exist.
+* `$locations` The regions for the database account, the region with `FailoverPriority 0` is the write region.
+* `$accountName` The name for the Azure Cosmos DB account. Must be unique, lowercase, include only alphanumeric and '-' characters, and between 3 and 31 characters in length.
+* `$apiKind` The type of Azure Cosmos DB account to create. For more information, see [APIs in Azure Cosmos DB](../introduction.md#simplified-application-development).
+* `$consistencyLevel`, `$maxStalenessInterval`, and `$maxStalenessPrefix` The default consistency level and settings of the Azure Cosmos DB account. For more information, see [Consistency Levels in Azure Cosmos DB](../consistency-levels.md).
+
+Azure Cosmos DB accounts can be configured with IP Firewall, Virtual Network service endpoints, and private endpoints. For information on how to configure the IP Firewall for Azure Cosmos DB, see [Configure IP Firewall](../how-to-configure-firewall.md). For information on how to enable service endpoints for Azure Cosmos DB, see [Configure access from virtual Networks](../how-to-configure-vnet-service-endpoint.md). For information on how to enable private endpoints for Azure Cosmos DB, see [Configure access from private endpoints](../how-to-configure-private-endpoints.md).
+
+### <a id="list-accounts"></a> List all Azure Cosmos DB accounts in a Resource Group
+
+This command lists all Azure Cosmos DB accounts in a Resource Group.
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+
+Get-AzCosmosDBAccount -ResourceGroupName $resourceGroupName
+```
+
+### <a id="get-account"></a> Get the properties of an Azure Cosmos DB account
+
+This command allows you to get the properties of an existing Azure Cosmos DB account.
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+
+Get-AzCosmosDBAccount -ResourceGroupName $resourceGroupName -Name $accountName
+```
+
+### <a id="update-account"></a> Update an Azure Cosmos DB account
+
+This command allows you to update your Azure Cosmos DB database account properties. Properties that can be updated include the following:
+
+* Adding or removing regions
+* Changing default consistency policy
+* Changing IP Range Filter
+* Changing Virtual Network configurations
+* Enabling multi-region writes
+
+> [!NOTE]
+> You cannot simultaneously add or remove regions (`locations`) and change other properties for an Azure Cosmos DB account. Modifying regions must be performed as a separate operation from any other change to the account.
+> [!NOTE]
+> This command allows you to add and remove regions but does not allow you to modify failover priorities or trigger a manual failover. See [Modify failover priority](#modify-failover-priority) and [Trigger manual failover](#trigger-manual-failover).
+> [!TIP]
+> When a new region is added, all data must be fully replicated and committed into the new region before the region is marked as available. The amount of time this operation takes will depend upon how much data is stored within the account. If an [asynchronous throughput scaling operation](../scaling-provisioned-throughput-best-practices.md#background-on-scaling-rus) is in progress, the throughput scale-up operation will be paused and will resume automatically when the add/remove region operation is complete.
+
+```azurepowershell-interactive
+# Create account with two regions
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$apiKind = "Sql"
+$consistencyLevel = "Session"
+$enableAutomaticFailover = $true
+$locations = @()
+$locations += New-AzCosmosDBLocationObject -LocationName "East US" -FailoverPriority 0 -IsZoneRedundant 0
+$locations += New-AzCosmosDBLocationObject -LocationName "West US" -FailoverPriority 1 -IsZoneRedundant 0
+
+# Create the Azure Cosmos DB account
+New-AzCosmosDBAccount `
+ -ResourceGroupName $resourceGroupName `
+ -LocationObject $locations `
+ -Name $accountName `
+ -ApiKind $apiKind `
+ -EnableAutomaticFailover:$enableAutomaticFailover `
+ -DefaultConsistencyLevel $consistencyLevel
+
+# Add a region to the account
+$locationObject2 = @()
+$locationObject2 += New-AzCosmosDBLocationObject -LocationName "East US" -FailoverPriority 0 -IsZoneRedundant 0
+$locationObject2 += New-AzCosmosDBLocationObject -LocationName "West US" -FailoverPriority 1 -IsZoneRedundant 0
+$locationObject2 += New-AzCosmosDBLocationObject -LocationName "South Central US" -FailoverPriority 2 -IsZoneRedundant 0
+
+Update-AzCosmosDBAccountRegion `
+ -ResourceGroupName $resourceGroupName `
+ -Name $accountName `
+ -LocationObject $locationObject2
+
+Write-Host "Update-AzCosmosDBAccountRegion returns before the region update is complete."
+Write-Host "Check account in Azure portal or using Get-AzCosmosDBAccount for region status."
+Write-Host "When region was added, press any key to continue."
+$HOST.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown") | OUT-NULL
+$HOST.UI.RawUI.Flushinputbuffer()
+
+# Remove West US region from the account
+$locationObject3 = @()
+$locationObject3 += New-AzCosmosDBLocationObject -LocationName "East US" -FailoverPriority 0 -IsZoneRedundant 0
+$locationObject3 += New-AzCosmosDBLocationObject -LocationName "South Central US" -FailoverPriority 1 -IsZoneRedundant 0
+
+Update-AzCosmosDBAccountRegion `
+ -ResourceGroupName $resourceGroupName `
+ -Name $accountName `
+ -LocationObject $locationObject3
+
+Write-Host "Update-AzCosmosDBAccountRegion returns before the region update is complete."
+Write-Host "Check account in Azure portal or using Get-AzCosmosDBAccount for region status."
+```
+
+### <a id="multi-region-writes"></a> Enable multiple write regions for an Azure Cosmos DB account
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$enableAutomaticFailover = $false
+$enableMultiMaster = $true
+
+# First disable service-managed failover - cannot have both service-managed
+# failover and multi-region writes on an account
+Update-AzCosmosDBAccount `
+ -ResourceGroupName $resourceGroupName `
+ -Name $accountName `
+ -EnableAutomaticFailover:$enableAutomaticFailover
+
+# Now enable multi-region writes
+Update-AzCosmosDBAccount `
+ -ResourceGroupName $resourceGroupName `
+ -Name $accountName `
+ -EnableMultipleWriteLocations:$enableMultiMaster
+```
+
+### <a id="delete-account"></a> Delete an Azure Cosmos DB account
+
+This command deletes an existing Azure Cosmos DB account.
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+
+Remove-AzCosmosDBAccount `
+ -ResourceGroupName $resourceGroupName `
+ -Name $accountName `
+ -PassThru:$true
+```
+
+### <a id="update-tags"></a> Update Tags of an Azure Cosmos DB account
+
+This command sets the [Azure resource tags][azure-resource-tags] for an Azure Cosmos DB account. Tags can be set both at account creation using `New-AzCosmosDBAccount` and on account update using `Update-AzCosmosDBAccount`.
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$tags = @{dept = "Finance"; environment = "Production";}
+
+Update-AzCosmosDBAccount `
+ -ResourceGroupName $resourceGroupName `
+ -Name $accountName `
+ -Tag $tags
+```
+
+### <a id="list-keys"></a> List Account Keys
+
+When you create an Azure Cosmos DB account, the service generates two primary access keys that can be used for authentication when the Azure Cosmos DB account is accessed. Read-only keys for authenticating read-only operations are also generated.
+By providing two access keys, Azure Cosmos DB enables you to regenerate and rotate one key at a time with no interruption to your Azure Cosmos DB account.
+Azure Cosmos DB accounts have two read-write keys (primary and secondary) and two read-only keys (primary and secondary).
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+
+Get-AzCosmosDBAccountKey `
+ -ResourceGroupName $resourceGroupName `
+ -Name $accountName `
+ -Type "Keys"
+```
+
+### <a id="list-connection-strings"></a> List Connection Strings
+
+The following command retrieves connection strings to connect apps to the Azure Cosmos DB account.
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+
+Get-AzCosmosDBAccountKey `
+ -ResourceGroupName $resourceGroupName `
+ -Name $accountName `
+ -Type "ConnectionStrings"
+```
+
+### <a id="regenerate-keys"></a> Regenerate Account Keys
+
+Access keys for an Azure Cosmos DB account should be periodically regenerated to help keep connections secure. Primary and secondary access keys are assigned to the account, which allows clients to maintain access while one key at a time is regenerated.
+There are four types of keys for an Azure Cosmos DB account (Primary, Secondary, PrimaryReadonly, and SecondaryReadonly).
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup" # Resource Group must already exist
+$accountName = "mycosmosaccount" # Must be all lower case
+$keyKind = "primary" # Other key kinds: secondary, primaryReadonly, secondaryReadonly
+
+New-AzCosmosDBAccountKey `
+ -ResourceGroupName $resourceGroupName `
+ -Name $accountName `
+ -KeyKind $keyKind
+```
+
+### <a id="enable-automatic-failover"></a> Enable service-managed failover
+
+The following command sets an Azure Cosmos DB account to fail over automatically to its secondary region should the primary region become unavailable.
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$enableAutomaticFailover = $true
+$enableMultiMaster = $false
+
+# First disable multi-region writes - cannot have both automatic
+# failover and multi-region writes on an account
+Update-AzCosmosDBAccount `
+ -ResourceGroupName $resourceGroupName `
+ -Name $accountName `
+ -EnableMultipleWriteLocations:$enableMultiMaster
+
+# Now enable service-managed failover
+Update-AzCosmosDBAccount `
+ -ResourceGroupName $resourceGroupName `
+ -Name $accountName `
+ -EnableAutomaticFailover:$enableAutomaticFailover
+```
+
+### <a id="modify-failover-priority"></a> Modify Failover Priority
+
+For accounts configured with Service-Managed Failover, you can change the order in which Azure Cosmos DB will promote secondary replicas to primary should the primary become unavailable.
+
+For the example below, assume the current failover priority is `West US = 0`, `East US = 1`, `South Central US = 2`. The command will change this to `West US = 0`, `South Central US = 1`, `East US = 2`.
+
+> [!CAUTION]
+> Changing the location for `failoverPriority=0` will trigger a manual failover for an Azure Cosmos DB account. Any other priority changes will not trigger a failover.
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$locations = @("West US", "South Central US", "East US") # Regions ordered by UPDATED failover priority
+
+Update-AzCosmosDBAccountFailoverPriority `
+ -ResourceGroupName $resourceGroupName `
+ -Name $accountName `
+ -FailoverPolicy $locations
+```
+
+### <a id="trigger-manual-failover"></a> Trigger Manual Failover
+
+For accounts configured with manual failover, you can fail over and promote any secondary replica to primary by changing its failover priority to `failoverPriority=0`. This operation can be used to initiate a disaster recovery drill to test disaster recovery planning.
+
+For the example below, assume the account has a current failover priority of `West US = 0` and `East US = 1`, and that you want to flip the regions.
+
+> [!CAUTION]
+> Changing `locationName` for `failoverPriority=0` will trigger a manual failover for an Azure Cosmos DB account. Any other priority change will not trigger a failover.
+
+> [!NOTE]
+> If you perform a manual failover operation while an [asynchronous throughput scaling operation](../scaling-provisioned-throughput-best-practices.md#background-on-scaling-rus) is in progress, the throughput scale-up operation will be paused. It will resume automatically when the failover operation is complete.
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$locations = @("East US", "West US") # Regions ordered by UPDATED failover priority
+
+Update-AzCosmosDBAccountFailoverPriority `
+ -ResourceGroupName $resourceGroupName `
+ -Name $accountName `
+ -FailoverPolicy $locations
+```
+
+### <a id="list-account-locks"></a> List resource locks on an Azure Cosmos DB account
+
+Resource locks can be placed on Azure Cosmos DB resources including databases and collections. The example below shows how to list all Azure resource locks on an Azure Cosmos DB account.
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$resourceTypeAccount = "Microsoft.DocumentDB/databaseAccounts"
+$accountName = "mycosmosaccount"
+
+Get-AzResourceLock `
+ -ResourceGroupName $resourceGroupName `
+ -ResourceType $resourceTypeAccount `
+ -ResourceName $accountName
+```
+
+## Azure Cosmos DB Database
+
+The following sections demonstrate how to manage the Azure Cosmos DB database, including:
+
+* [Create an Azure Cosmos DB database](#create-db)
+* [Create an Azure Cosmos DB database with shared throughput](#create-db-ru)
+* [Get the throughput of an Azure Cosmos DB database](#get-db-ru)
+* [Migrate database throughput to autoscale](#migrate-db-ru)
+* [List all Azure Cosmos DB databases in an account](#list-db)
+* [Get a single Azure Cosmos DB database](#get-db)
+* [Delete an Azure Cosmos DB database](#delete-db)
+* [Create a resource lock on an Azure Cosmos DB database to prevent delete](#create-db-lock)
+* [Remove a resource lock on an Azure Cosmos DB database](#remove-db-lock)
+
+### <a id="create-db"></a>Create an Azure Cosmos DB database
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+
+New-AzCosmosDBSqlDatabase `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -Name $databaseName
+```
+
+### <a id="create-db-ru"></a>Create an Azure Cosmos DB database with shared throughput
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$databaseRUs = 400
+
+New-AzCosmosDBSqlDatabase `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -Name $databaseName `
+ -Throughput $databaseRUs
+```
+
+### <a id="get-db-ru"></a>Get the throughput of an Azure Cosmos DB database
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+
+Get-AzCosmosDBSqlDatabaseThroughput `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -Name $databaseName
+```
+
+### <a id="migrate-db-ru"></a>Migrate database throughput to autoscale
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+
+Invoke-AzCosmosDBSqlDatabaseThroughputMigration `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -Name $databaseName `
+ -ThroughputType Autoscale
+```
+
+### <a id="list-db"></a>Get all Azure Cosmos DB databases in an account
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+
+Get-AzCosmosDBSqlDatabase `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName
+```
+
+### <a id="get-db"></a>Get a single Azure Cosmos DB database
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+
+Get-AzCosmosDBSqlDatabase `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -Name $databaseName
+```
+
+### <a id="delete-db"></a>Delete an Azure Cosmos DB database
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+
+Remove-AzCosmosDBSqlDatabase `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -Name $databaseName
+```
+
+### <a id="create-db-lock"></a>Create a resource lock on an Azure Cosmos DB database to prevent delete
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$resourceType = "Microsoft.DocumentDB/databaseAccounts/sqlDatabases"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$resourceName = "$accountName/$databaseName"
+$lockName = "myResourceLock"
+$lockLevel = "CanNotDelete"
+
+New-AzResourceLock `
+ -ResourceGroupName $resourceGroupName `
+ -ResourceType $resourceType `
+ -ResourceName $resourceName `
+ -LockName $lockName `
+ -LockLevel $lockLevel
+```
+
+### <a id="remove-db-lock"></a>Remove a resource lock on an Azure Cosmos DB database
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$resourceType = "Microsoft.DocumentDB/databaseAccounts/sqlDatabases"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$resourceName = "$accountName/$databaseName"
+$lockName = "myResourceLock"
+
+Remove-AzResourceLock `
+ -ResourceGroupName $resourceGroupName `
+ -ResourceType $resourceType `
+ -ResourceName $resourceName `
+ -LockName $lockName
+```
+
+## Azure Cosmos DB Container
+
+The following sections demonstrate how to manage the Azure Cosmos DB container, including:
+
+* [Create an Azure Cosmos DB container](#create-container)
+* [Create an Azure Cosmos DB container with autoscale](#create-container-autoscale)
+* [Create an Azure Cosmos DB container with a large partition key](#create-container-big-pk)
+* [Get the throughput of an Azure Cosmos DB container](#get-container-ru)
+* [Migrate container throughput to autoscale](#migrate-container-ru)
+* [Create an Azure Cosmos DB container with custom indexing](#create-container-custom-index)
+* [Create an Azure Cosmos DB container with indexing turned off](#create-container-no-index)
+* [Create an Azure Cosmos DB container with unique key and TTL](#create-container-unique-key-ttl)
+* [Create an Azure Cosmos DB container with conflict resolution](#create-container-lww)
+* [List all Azure Cosmos DB containers in a database](#list-containers)
+* [Get a single Azure Cosmos DB container in a database](#get-container)
+* [Delete an Azure Cosmos DB container](#delete-container)
+* [Create a resource lock on an Azure Cosmos DB container to prevent delete](#create-container-lock)
+* [Remove a resource lock on an Azure Cosmos DB container](#remove-container-lock)
+
+### <a id="create-container"></a>Create an Azure Cosmos DB container
+
+```azurepowershell-interactive
+# Create an Azure Cosmos DB container with default indexes and throughput at 400 RU
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$containerName = "myContainer"
+$partitionKeyPath = "/myPartitionKey"
+$throughput = 400 #minimum = 400
+
+New-AzCosmosDBSqlContainer `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -DatabaseName $databaseName `
+ -Name $containerName `
+ -PartitionKeyKind Hash `
+ -PartitionKeyPath $partitionKeyPath `
+ -Throughput $throughput
+```
+
+### <a id="create-container-autoscale"></a>Create an Azure Cosmos DB container with autoscale
+
+```azurepowershell-interactive
+# Create an Azure Cosmos DB container with default indexes and autoscale throughput at 4000 RU
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$containerName = "myContainer"
+$partitionKeyPath = "/myPartitionKey"
+$autoscaleMaxThroughput = 4000 #minimum = 4000
+
+New-AzCosmosDBSqlContainer `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -DatabaseName $databaseName `
+ -Name $containerName `
+ -PartitionKeyKind Hash `
+ -PartitionKeyPath $partitionKeyPath `
+ -AutoscaleMaxThroughput $autoscaleMaxThroughput
+```
+
+### <a id="create-container-big-pk"></a>Create an Azure Cosmos DB container with a large partition key size
+
+```azurepowershell-interactive
+# Create an Azure Cosmos DB container with a large partition key value (version = 2)
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$containerName = "myContainer"
+$partitionKeyPath = "/myPartitionKey"
+
+New-AzCosmosDBSqlContainer `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -DatabaseName $databaseName `
+ -Name $containerName `
+ -PartitionKeyKind Hash `
+ -PartitionKeyPath $partitionKeyPath `
+ -PartitionKeyVersion 2
+```
+
+### <a id="get-container-ru"></a>Get the throughput of an Azure Cosmos DB container
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$containerName = "myContainer"
+
+Get-AzCosmosDBSqlContainerThroughput `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -DatabaseName $databaseName `
+ -Name $containerName
+```
+
+### <a id="migrate-container-ru"></a>Migrate container throughput to autoscale
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$containerName = "myContainer"
+
+Invoke-AzCosmosDBSqlContainerThroughputMigration `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -DatabaseName $databaseName `
+ -Name $containerName `
+ -ThroughputType Autoscale
+```
+
+### <a id="create-container-custom-index"></a>Create an Azure Cosmos DB container with custom index policy
+
+```azurepowershell-interactive
+# Create a container with a custom indexing policy
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$containerName = "myContainer"
+$partitionKeyPath = "/myPartitionKey"
+$indexPathIncluded = "/*"
+$indexPathExcluded = "/myExcludedPath/*"
+
+$includedPathIndex = New-AzCosmosDBSqlIncludedPathIndex -DataType String -Kind Range
+$includedPath = New-AzCosmosDBSqlIncludedPath -Path $indexPathIncluded -Index $includedPathIndex
+
+$indexingPolicy = New-AzCosmosDBSqlIndexingPolicy `
+ -IncludedPath $includedPath `
+ -ExcludedPath $indexPathExcluded `
+ -IndexingMode Consistent `
+ -Automatic $true
+
+New-AzCosmosDBSqlContainer `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -DatabaseName $databaseName `
+ -Name $containerName `
+ -PartitionKeyKind Hash `
+ -PartitionKeyPath $partitionKeyPath `
+ -IndexingPolicy $indexingPolicy
+```
+
+### <a id="create-container-no-index"></a>Create an Azure Cosmos DB container with indexing turned off
+
+```azurepowershell-interactive
+# Create an Azure Cosmos DB container with no indexing
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$containerName = "myContainer"
+$partitionKeyPath = "/myPartitionKey"
+
+$indexingPolicy = New-AzCosmosDBSqlIndexingPolicy `
+ -IndexingMode None
+
+New-AzCosmosDBSqlContainer `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -DatabaseName $databaseName `
+ -Name $containerName `
+ -PartitionKeyKind Hash `
+ -PartitionKeyPath $partitionKeyPath `
+ -IndexingPolicy $indexingPolicy
+```
+
+### <a id="create-container-unique-key-ttl"></a>Create an Azure Cosmos DB container with unique key policy and TTL
+
+```azurepowershell-interactive
+# Create a container with a unique key policy and TTL of one day
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$containerName = "myContainer"
+$partitionKeyPath = "/myPartitionKey"
+$uniqueKeyPath = "/myUniqueKeyPath"
+$ttlInSeconds = 86400 # Set this to -1 (or don't use it at all) to never expire
+
+$uniqueKey = New-AzCosmosDBSqlUniqueKey `
+ -Path $uniqueKeyPath
+
+$uniqueKeyPolicy = New-AzCosmosDBSqlUniqueKeyPolicy `
+ -UniqueKey $uniqueKey
+
+New-AzCosmosDBSqlContainer `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -DatabaseName $databaseName `
+ -Name $containerName `
+ -PartitionKeyKind Hash `
+ -PartitionKeyPath $partitionKeyPath `
+ -UniqueKeyPolicy $uniqueKeyPolicy `
+ -TtlInSeconds $ttlInSeconds
+```
+
+### <a id="create-container-lww"></a>Create an Azure Cosmos DB container with conflict resolution
+
+To write all conflicts to the ConflictsFeed and handle them separately, pass `-Type "Custom" -Path ""`.
+
+```azurepowershell-interactive
+# Create container with last-writer-wins conflict resolution policy
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$containerName = "myContainer"
+$partitionKeyPath = "/myPartitionKey"
+$conflictResolutionPath = "/myResolutionPath"
+
+$conflictResolutionPolicy = New-AzCosmosDBSqlConflictResolutionPolicy `
+ -Type LastWriterWins `
+ -Path $conflictResolutionPath
+
+New-AzCosmosDBSqlContainer `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -DatabaseName $databaseName `
+ -Name $containerName `
+ -PartitionKeyKind Hash `
+ -PartitionKeyPath $partitionKeyPath `
+ -ConflictResolutionPolicy $conflictResolutionPolicy
+```
+
+To create a conflict resolution policy to use a stored procedure, call `New-AzCosmosDBSqlConflictResolutionPolicy` and pass parameters `-Type` and `-ConflictResolutionProcedure`.
+
+```azurepowershell-interactive
+# Create container with custom conflict resolution policy using a stored procedure
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$containerName = "myContainer"
+$partitionKeyPath = "/myPartitionKey"
+$conflictResolutionSprocName = "mysproc"
+
+$conflictResolutionSproc = "/dbs/$databaseName/colls/$containerName/sprocs/$conflictResolutionSprocName"
+
+$conflictResolutionPolicy = New-AzCosmosDBSqlConflictResolutionPolicy `
+ -Type Custom `
+ -ConflictResolutionProcedure $conflictResolutionSproc
+
+New-AzCosmosDBSqlContainer `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -DatabaseName $databaseName `
+ -Name $containerName `
+ -PartitionKeyKind Hash `
+ -PartitionKeyPath $partitionKeyPath `
+ -ConflictResolutionPolicy $conflictResolutionPolicy
+```
++
+### <a id="list-containers"></a>List all Azure Cosmos DB containers in a database
+
+```azurepowershell-interactive
+# List all Azure Cosmos DB containers in a database
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+
+Get-AzCosmosDBSqlContainer `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -DatabaseName $databaseName
+```
+
+### <a id="get-container"></a>Get a single Azure Cosmos DB container in a database
+
+```azurepowershell-interactive
+# Get a single Azure Cosmos DB container in a database
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$containerName = "myContainer"
+
+Get-AzCosmosDBSqlContainer `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -DatabaseName $databaseName `
+ -Name $containerName
+```
+
+### <a id="delete-container"></a>Delete an Azure Cosmos DB container
+
+```azurepowershell-interactive
+# Delete an Azure Cosmos DB container
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$containerName = "myContainer"
+
+Remove-AzCosmosDBSqlContainer `
+ -ResourceGroupName $resourceGroupName `
+ -AccountName $accountName `
+ -DatabaseName $databaseName `
+ -Name $containerName
+```
+
+### <a id="create-container-lock"></a>Create a resource lock on an Azure Cosmos DB container to prevent delete
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$resourceType = "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$containerName = "myContainer"
+$resourceName = "$accountName/$databaseName/$containerName"
+$lockName = "myResourceLock"
+$lockLevel = "CanNotDelete"
+
+New-AzResourceLock `
+ -ResourceGroupName $resourceGroupName `
+ -ResourceType $resourceType `
+ -ResourceName $resourceName `
+ -LockName $lockName `
+ -LockLevel $lockLevel
+```
+
+### <a id="remove-container-lock"></a>Remove a resource lock on an Azure Cosmos DB container
+
+```azurepowershell-interactive
+$resourceGroupName = "myResourceGroup"
+$resourceType = "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers"
+$accountName = "mycosmosaccount"
+$databaseName = "myDatabase"
+$containerName = "myContainer"
+$resourceName = "$accountName/$databaseName/$containerName"
+$lockName = "myResourceLock"
+
+Remove-AzResourceLock `
+ -ResourceGroupName $resourceGroupName `
+ -ResourceType $resourceType `
+ -ResourceName $resourceName `
+ -LockName $lockName
+```
+
+## Next steps
+
+* [All PowerShell Samples](powershell-samples.md)
+* [How to manage Azure Cosmos DB account](../how-to-manage-database-account.md)
+* [Create an Azure Cosmos DB container](how-to-create-container.md)
+* [Configure time-to-live in Azure Cosmos DB](how-to-time-to-live.md)
+
+<!--Reference style links - using these makes the source content way more readable than using inline links-->
+
+[powershell-install-configure]: /powershell/azure/
+[scaling-globally]: ../distribute-data-globally.md#EnableGlobalDistribution
+[distribute-data-globally]: ../distribute-data-globally.md
+[azure-resource-groups]: ../../azure-resource-manager/management/overview.md#resource-groups
+[azure-resource-tags]: ../../azure-resource-manager/management/tag-resources.md
+[rp-rest-api]: /rest/api/cosmos-db-resource-provider/
cosmos-db Manage With Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-templates.md
+
+ Title: Create and manage Azure Cosmos DB with Resource Manager templates
+description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB for API for NoSQL
+++++ Last updated : 02/18/2022++++
+# Manage Azure Cosmos DB for NoSQL resources with Azure Resource Manager templates
++
+In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts, databases, and containers.
+
+This article shows only Azure Resource Manager template examples for API for NoSQL accounts. You can also find template examples for other APIs, such as [Cassandra](../cassandr).
+
+> [!IMPORTANT]
+>
+> * Account names are limited to 44 characters, all lowercase.
+> * To change the throughput values, redeploy the template with updated RU/s.
+> * When you add or remove locations to an Azure Cosmos DB account, you can't simultaneously modify other properties. These operations must be done separately.
+> * To provision throughput at the database level and share across all containers, apply the throughput values to the database options property.
+
+To create any of the Azure Cosmos DB resources below, copy the example template into a new JSON file. You can optionally create a parameters JSON file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Azure Resource Manager templates, including the [Azure portal](../../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../../azure-resource-manager/templates/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md), and [GitHub](../../azure-resource-manager/templates/deploy-to-azure-button.md).
+
+<a id="create-autoscale"></a>
+
+## Azure Cosmos DB account with autoscale throughput
+
+This template creates an Azure Cosmos DB account in two regions with options for consistency and failover, with database and container configured for autoscale throughput that has most policy options enabled. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+
+> [!NOTE]
+> You can use Azure Resource Manager templates to update the autoscale max RU/s setting on database and container resources that are already configured with autoscale. Migrating between manual and autoscale throughput is a POST operation and isn't supported with Azure Resource Manager templates. To migrate throughput, use the [Azure CLI](how-to-provision-autoscale-throughput.md#azure-cli) or [PowerShell](how-to-provision-autoscale-throughput.md#azure-powershell).
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql-autoscale%2Fazuredeploy.json)
++
+<a id="create-analytical-store"></a>
+
+## Azure Cosmos DB account with analytical store
+
+This template creates an Azure Cosmos DB account in one region with a container with Analytical TTL enabled and options for manual or autoscale throughput. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql-analytical-store%2Fazuredeploy.json)
++
+<a id="create-manual"></a>
+
+## Azure Cosmos DB account with standard provisioned throughput
+
+This template creates an Azure Cosmos DB account in two regions with options for consistency and failover, with database and container configured for standard throughput with many indexing policy options configured. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql%2Fazuredeploy.json)
++
+<a id="create-sproc"></a>
+
+## Azure Cosmos DB container with server-side functionality
+
+This template creates an Azure Cosmos DB account, database, and container with a stored procedure, trigger, and user-defined function. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql-container-sprocs%2Fazuredeploy.json)
++
+<a id="create-rbac"></a>
+
+## Azure Cosmos DB account with Azure AD and RBAC
+
+This template creates an Azure Cosmos DB for NoSQL account, a natively maintained Role Definition, and a natively maintained Role Assignment for an Azure AD identity. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql-rbac%2Fazuredeploy.json)
++
+<a id="free-tier"></a>
+
+## Free tier Azure Cosmos DB account
+
+This template creates a free-tier Azure Cosmos DB account and a database with shared throughput that can be shared with up to 25 containers. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-free%2Fazuredeploy.json)
++
+## Next steps
+
+Here are some additional resources:
+
+* [Azure Resource Manager documentation](../../azure-resource-manager/index.yml)
+* [Azure Cosmos DB resource provider schema](/azure/templates/microsoft.documentdb/allversions)
+* [Azure Cosmos DB Quickstart templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Documentdb&pageNumber=1&sort=Popular)
+* [Troubleshoot common Azure Resource Manager deployment errors](../../azure-resource-manager/templates/common-deployment-errors.md)
cosmos-db Manage With Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-terraform.md
+
+ Title: Create and manage Azure Cosmos DB with Terraform
+description: Use Terraform to create and configure Azure Cosmos DB for NoSQL
++++ Last updated : 09/16/2022++++
+# Manage Azure Cosmos DB for NoSQL resources with Terraform
++
+In this article, you learn how to use Terraform to deploy and manage your Azure Cosmos DB accounts, databases, and containers.
+
+This article shows Terraform samples for API for NoSQL accounts.
+
+> [!IMPORTANT]
+>
+> * Account names are limited to 44 characters, all lowercase.
+> * To change the throughput (RU/s) values, redeploy the Terraform file with updated RU/s.
+> * When you add or remove locations to an Azure Cosmos account, you can't simultaneously modify other properties. These operations must be done separately.
+> * To provision throughput at the database level and share across all containers, apply the throughput values to the database options property.
+
+To create any of the Azure Cosmos DB resources below, copy the example into a new Terraform file (main.tf) or, alternatively, use two separate files for resources (main.tf) and variables (variables.tf). Be sure to include the azurerm provider either in the main Terraform file or split out into a separate providers file. All examples can be found in the [Terraform samples repository](https://github.com/Azure/terraform).
++
+## <a id="create-autoscale"></a>Azure Cosmos account with autoscale throughput
+
+Create an Azure Cosmos account in two regions with options for consistency and failover, with database and container configured for autoscale throughput that has most index policy options enabled.
+
+### `main.tf`
++
+### `variables.tf`
++
+## <a id="create-analytical-store"></a>Azure Cosmos account with analytical store
+
+Create an Azure Cosmos account in one region with a container with Analytical TTL enabled and options for manual or autoscale throughput.
+
+### `main.tf`
++
+### `variables.tf`
++
+## <a id="create-manual"></a>Azure Cosmos account with standard provisioned throughput
+
+Create an Azure Cosmos account in two regions with options for consistency and failover, with database and container configured for standard throughput that has most policy options enabled.
+
+### `main.tf`
++
+### `variables.tf`
++
+## <a id="create-sproc"></a>Azure Cosmos DB container with server-side functionality
+
+Create an Azure Cosmos account, database and container with a stored procedure, trigger, and user-defined function.
+
+### `main.tf`
++
+### `variables.tf`
++
+## <a id="create-rbac"></a>Azure Cosmos DB account with Azure AD and role-based access control
+
+Create an Azure Cosmos account, a natively maintained Role Definition, and a natively maintained Role Assignment for an Azure Active Directory identity.
+
+### `main.tf`
++
+### `variables.tf`
++
+## <a id="free-tier"></a>Free tier Azure Cosmos DB account
+
+Create a free-tier Azure Cosmos account and a database with shared throughput that can be shared with up to 25 containers.
+
+### `main.tf`
++
+### `variables.tf`
++
+## Next steps
+
+Here are some more resources:
+
+* [Install Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
+* [Terraform Azure Tutorial](https://learn.hashicorp.com/collections/terraform/azure-get-started)
+* [Terraform tools](https://www.terraform.io/docs/terraform-tools)
+* [Azure Provider Terraform documentation](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs)
+* [Terraform documentation](https://www.terraform.io/docs)
cosmos-db Migrate Containers Partitioned To Nonpartitioned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-containers-partitioned-to-nonpartitioned.md
+
+ Title: Migrate non-partitioned Azure Cosmos DB containers to partitioned containers
+description: Learn how to migrate all the existing non-partitioned containers into partitioned containers.
++++ Last updated : 08/26/2021+++++
+# Migrate non-partitioned containers to partitioned containers
+
+Azure Cosmos DB supports creating containers without a partition key. Currently, you can create non-partitioned containers by using the Azure CLI and Azure Cosmos DB SDKs (.NET, Java, Node.js) that have a version less than or equal to 2.x. You can't create non-partitioned containers using the Azure portal. However, such non-partitioned containers aren't elastic and have a fixed storage capacity of 20 GB and a throughput limit of 10K RU/s.
+
+Non-partitioned containers are legacy, and you should migrate your existing non-partitioned containers to partitioned containers to scale storage and throughput. Azure Cosmos DB provides a system-defined mechanism to migrate your non-partitioned containers to partitioned containers. This document explains how all the existing non-partitioned containers are auto-migrated into partitioned containers. You can take advantage of the auto-migration feature only if you're using the V3 version of the SDKs in all the languages.
+
+> [!NOTE]
+> Currently, you cannot migrate Azure Cosmos DB MongoDB and API for Gremlin accounts by using the steps described in this document.
+
+## Migrate container using the system defined partition key
+
+To support the migration, Azure Cosmos DB provides a system-defined partition key named `/_partitionKey` on all the containers that don't have a partition key. You can't change the partition key definition after the containers are migrated. For example, the definition of a container that is migrated to a partitioned container will be as follows:
+
+```json
+{
+  "Id": "CollId",
+  "partitionKey": {
+    "paths": [
+      "/_partitionKey"
+    ],
+    "kind": "Hash"
+  }
+}
+```
+
+After the container is migrated, you can create documents by populating the `_partitionKey` property along with the other properties of the document. The `_partitionKey` property represents the partition key of your documents.
+
+Choosing the right partition key is important to utilize the provisioned throughput optimally. For more information, see the [how to choose a partition key](../partitioning-overview.md) article.
+
+> [!NOTE]
+> You can take advantage of the system-defined partition key only if you are using the latest/V3 version of the SDKs in all the languages.
+
+The following example shows sample code to create a document with the system-defined partition key and read that document:
+
+**JSON representation of the document**
+
+### [.NET SDK V3](#tab/dotnetv3)
+
+```csharp
+// JSON representation of the document
+// {
+//   "id": "elevator/PugetSound/Building44/Floor1/1",
+//   "deviceId": "3cf4c52d-cc67-4bb8-b02f-f6185007a808",
+//   "_partitionKey": "3cf4c52d-cc67-4bb8-b02f-f6185007a808"
+// }
+
+public class DeviceInformationItem
+{
+    [JsonProperty(PropertyName = "id")]
+    public string Id { get; set; }
+
+    [JsonProperty(PropertyName = "deviceId")]
+    public string DeviceId { get; set; }
+
+    // The system-defined partition key property is populated from the device ID.
+    [JsonProperty(PropertyName = "_partitionKey", NullValueHandling = NullValueHandling.Ignore)]
+    public string PartitionKey { get { return this.DeviceId; } }
+}
+
+Container migratedContainer = database.GetContainer("testContainer");
+
+DeviceInformationItem deviceItem = new DeviceInformationItem
+{
+    Id = "1234",
+    DeviceId = "3cf4c52d-cc67-4bb8-b02f-f6185007a808"
+};
+
+ItemResponse<DeviceInformationItem> response =
+    await migratedContainer.CreateItemAsync<DeviceInformationItem>(
+        deviceItem,
+        new PartitionKey(deviceItem.PartitionKey));
+
+// Read back the document by providing the same partition key
+ItemResponse<DeviceInformationItem> readResponse =
+    await migratedContainer.ReadItemAsync<DeviceInformationItem>(
+        id: deviceItem.Id,
+        partitionKey: new PartitionKey(deviceItem.PartitionKey));
+
+```
+
+For the complete sample, see the [.NET samples][1] GitHub repository.
+
+## Migrate the documents
+
+While the container definition is enhanced with a partition key property, the documents within the container aren't automatically migrated. That is, the system partition key property path `/_partitionKey` isn't automatically added to the existing documents. You need to repartition the existing documents by reading the documents that were created without a partition key and rewriting them with the `_partitionKey` property populated, as shown in the sketch that follows.
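+
+A minimal sketch of one way to do this repartitioning is shown below. It reuses the `migratedContainer` and `DeviceInformationItem` names from the previous snippet and isn't the complete sample; it reads each document that has no partition key value, rewrites it with the system-defined partition key populated, and then deletes the original copy.
+
+```csharp
+// Query only the documents that were created without a partition key value.
+FeedIterator<DeviceInformationItem> iterator = migratedContainer.GetItemQueryIterator<DeviceInformationItem>(
+    "SELECT * FROM c",
+    requestOptions: new QueryRequestOptions { PartitionKey = PartitionKey.None });
+
+while (iterator.HasMoreResults)
+{
+    foreach (DeviceInformationItem item in await iterator.ReadNextAsync())
+    {
+        // Rewrite the document under its partition key value, then remove the old copy.
+        await migratedContainer.UpsertItemAsync(item, new PartitionKey(item.PartitionKey));
+        await migratedContainer.DeleteItemAsync<DeviceInformationItem>(item.Id, PartitionKey.None);
+    }
+}
+```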
+
+## Access documents that don't have a partition key
+
+Applications can access the existing documents that don't have a partition key by using the special system value `PartitionKey.None`; this is the partition key value of the non-migrated documents. You can use this value in all CRUD and query operations. The following example shows how to read a single document with the `None` partition key.
+
+```csharp
+ItemResponse<DeviceInformationItem> readResponse =
+    await migratedContainer.ReadItemAsync<DeviceInformationItem>(
+        id: deviceItem.Id,
+        partitionKey: PartitionKey.None);
+
+```
+
+### [Java SDK V4](#tab/javav4)
+
+```java
+static class Family {
+ public String id;
+ public String firstName;
+ public String lastName;
+ public String _partitionKey;
+
+ public Family(String id, String firstName, String lastName, String _partitionKey) {
+ this.id = id;
+ this.firstName = firstName;
+ this.lastName = lastName;
+ this._partitionKey = _partitionKey;
+ }
+}
+
+...
+
+CosmosDatabase cosmosDatabase = cosmosClient.getDatabase("testdb");
+CosmosContainer cosmosContainer = cosmosDatabase.getContainer("testcontainer");
+
+// Create single item
+Family family = new Family("id-1", "John", "Doe", "Doe");
+cosmosContainer.createItem(family, new PartitionKey(family._partitionKey), new CosmosItemRequestOptions());
+
+// Create items through bulk operations
+family = new Family("id-2", "Jane", "Doe", "Doe");
+CosmosItemOperation createItemOperation = CosmosBulkOperations.getCreateItemOperation(family,
+ new PartitionKey(family._partitionKey));
+cosmosContainer.executeBulkOperations(Collections.singletonList(createItemOperation));
+```
+
+For the complete sample, see the [Java samples][2] GitHub repository.
+
+## Migrate the documents
+
+While the container definition is enhanced with a partition key property, the documents within the container aren't automatically migrated. That is, the system partition key property path `/_partitionKey` isn't automatically added to the existing documents. You need to repartition the existing documents by reading the documents that were created without a partition key and rewriting them with the `_partitionKey` property populated.
+
+## Access documents that don't have a partition key
+
+Applications can access the existing documents that don't have a partition key by using the special system value `PartitionKey.NONE`; this is the partition key value of the non-migrated documents. You can use this value in all CRUD and query operations. The following example shows how to read a single document with the `NONE` partition key.
+
+```java
+CosmosItemResponse<JsonNode> cosmosItemResponse =
+ cosmosContainer.readItem("itemId", PartitionKey.NONE, JsonNode.class);
+```
+
+For the complete sample on how to repartition the documents, see the [Java samples][2] GitHub repository.
+++
+## Compatibility with SDKs
+
+Older versions of the Azure Cosmos DB SDKs, such as V2.x.x and V1.x.x, don't support the system-defined partition key property. So, when you read the container definition from an older SDK, it doesn't contain any partition key definition, and these containers behave exactly as before. Applications that are built with the older versions of the SDKs continue to work with non-partitioned containers as-is, without any changes.
+
+If a migrated container is consumed by the latest/V3 version of the SDK and you start populating the system-defined partition key within the new documents, you can't access (read, update, delete, query) such documents from the older SDKs anymore.
+
+## Known issues
+
+**Querying for the count of items that were inserted without a partition key by using the V3 SDK may involve higher throughput consumption**
+
+If you query from the V3 SDK for the items that are inserted by using V2 SDK, or the items inserted by using the V3 SDK with `PartitionKey.None` parameter, the count query may consume more RU/s if the `PartitionKey.None` parameter is supplied in the FeedOptions. We recommend that you don't supply the `PartitionKey.None` parameter if no other items are inserted with a partition key.
+
+If new items are inserted with different values for the partition key, querying for such item counts by passing the appropriate key in `FeedOptions` will not have any issues. After inserting new documents with partition key, if you need to query just the document count without the partition key value, that query may again incur higher RU/s similar to the regular partitioned collections.
+
+## Next steps
+
+* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
+* [Request Units in Azure Cosmos DB](../request-units.md)
+* [Provision throughput on containers and databases](../set-throughput.md)
+* [Work with Azure Cosmos DB account](../account-databases-containers-items.md)
+* Trying to do capacity planning for a migration to Azure Cosmos DB?
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+[1]: https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/NonPartitionContainerMigration
+[2]: https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/tree/main/src/main/java/com/azure/cosmos/examples/nonpartitioncontainercrud
cosmos-db Migrate Data Striim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-data-striim.md
+
+ Title: Migrate data to Azure Cosmos DB for NoSQL account using Striim
+description: Learn how to use Striim to migrate data from an Oracle database to an Azure Cosmos DB for NoSQL account.
++++++ Last updated : 12/09/2021+++
+# Migrate data to Azure Cosmos DB for NoSQL account using Striim
+
+The Striim image in the Azure marketplace offers continuous real-time data movement from data warehouses and databases to Azure. While moving the data, you can perform in-line denormalization and data transformation, and enable real-time analytics and data reporting scenarios. It's easy to get started with Striim to continuously move enterprise data to Azure Cosmos DB for NoSQL. Azure provides a marketplace offering that makes it easy to deploy Striim and migrate data to Azure Cosmos DB.
+
+This article shows how to use Striim to migrate data from an **Oracle database** to an **Azure Cosmos DB for NoSQL account**.
+
+## Prerequisites
+
+* If you don't have an [Azure subscription](../../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+
+* An Oracle database running on-premises with some data in it.
+
+## Deploy the Striim marketplace solution
+
+1. Sign into the [Azure portal](https://portal.azure.com/).
+
+1. Select **Create a resource** and search for **Striim** in the Azure marketplace. Select the first option and **Create**.
+
+ :::image type="content" source="media/migrate-data-striim/striim-azure-marketplace.png" alt-text="Find Striim marketplace item":::
+
+1. Next, enter the configuration properties of the Striim instance. The Striim environment is deployed in a virtual machine. From the **Basics** pane, enter the **VM user name** and **VM password** (this password is used to SSH into the VM). Select your **Subscription**, **Resource Group**, and **Location details** where you'd like to deploy Striim. Once complete, select **OK**.
+
+ :::image type="content" source="media/migrate-data-striim/striim-configure-basic-settings.png" alt-text="Configure basic settings for Striim":::
+
+1. In the **Striim Cluster settings** pane, choose the type of Striim deployment and the virtual machine size.
+
+ |Setting | Value | Description |
+ | | | |
+ |Striim deployment type |Standalone | Striim can run in **Standalone** or **Cluster** deployment types. Standalone mode deploys the Striim server on a single virtual machine, and you can select the size of the VM depending on your data volume. Cluster mode deploys the Striim server on two or more VMs with the selected size. Cluster environments with more than two nodes offer automatic high availability and failover.</br></br> In this tutorial, you can select the Standalone option. Use the default "Standard_F4s" size VM. |
+ | Name of the Striim cluster| <Striim_cluster_Name>| Name of the Striim cluster.|
+ | Striim cluster password| <Striim_cluster_password>| Password for the cluster.|
+
+ After you fill the form, select **OK** to continue.
+
+1. In the **Striim access settings** pane, configure the **Public IP address** (choose the default values), the **Domain name for Striim**, and the **Admin password** that you'd like to use to sign in to the Striim UI. Configure a VNET and subnet (choose the default values). After filling in the details, select **OK** to continue.
+
+ :::image type="content" source="media/migrate-data-striim/striim-access-settings.png" alt-text="Striim access settings":::
+
+1. Azure will validate the deployment and make sure everything looks good; validation takes a few minutes to complete. After the validation is completed, select **OK**.
+
+1. Finally, review the terms of use and select **Create** to create your Striim instance.
+
+## Configure the source database
+
+In this section, you configure the Oracle database as the source for data movement. The Striim server comes with the Oracle JDBC driver that's used to connect to Oracle. To read changes from your source Oracle database, you can use either the [LogMiner](https://www.oracle.com/technetwork/database/features/availability/logmineroverview-088844.html) or the [XStream APIs](https://docs.oracle.com/cd/E11882_01/server.112/e16545/xstrm_intro.htm#XSTRM72647). The Oracle JDBC driver is present in Striim's Java classpath to read, write, or persist data from the Oracle database.
+
+## Configure the target database
+
+In this section, you will configure the Azure Cosmos DB for NoSQL account as the target for data movement.
+
+1. Create an [Azure Cosmos DB for NoSQL account](quickstart-portal.md) using the Azure portal.
+
+1. Navigate to the **Data Explorer** pane in your Azure Cosmos DB account. Select **New Container** to create a new container. Assume that you are migrating *products* and *orders* data from Oracle database to Azure Cosmos DB. Create a new database named **StriimDemo** with a container named **Orders**. Provision the container with **1000 RUs** (this example uses 1000 RUs, but you should use the throughput estimated for your workload), and **/ORDER_ID** as the partition key. These values will differ depending on your source data.
+
+ :::image type="content" source="media/migrate-data-striim/create-sql-api-account.png" alt-text="Create an API for NoSQL account":::
+
+## Configure Oracle to Azure Cosmos DB data flow
+
+1. Navigate to the Striim instance that you deployed in the Azure portal. Select the **Connect** button in the upper menu bar and from the **SSH** tab, copy the URL in **Login using VM local account** field.
+
+ :::image type="content" source="media/migrate-data-striim/get-ssh-url.png" alt-text="Get the SSH URL":::
+
+1. Open a new terminal window and run the SSH command you copied from the Azure portal. This article uses the terminal on macOS; you can follow similar instructions by using PuTTY or a different SSH client on a Windows machine. When prompted, type **yes** to continue and enter the **password** you set for the virtual machine in the previous step.
+
+ :::image type="content" source="media/migrate-data-striim/striim-vm-connect.png" alt-text="Connect to Striim VM":::
+
+1. From the same terminal window, restart the Striim server by executing the following commands:
+
+ ```bash
+ systemctl stop striim-node
+ systemctl stop striim-dbms
+ systemctl start striim-dbms
+ systemctl start striim-node
+ ```
+
+1. Striim will take a minute to start up. If you'd like to see the status, run the following command:
+
+ ```bash
+ tail -f /opt/striim/logs/striim-node.log
+ ```
+
+1. Now, navigate back to Azure and copy the Public IP address of your Striim VM.
+
+ :::image type="content" source="media/migrate-data-striim/copy-public-ip-address.png" alt-text="Copy Striim VM IP address":::
+
+1. To open Striim's web UI, open a new browser tab and enter the public IP address followed by `:9080`. Sign in by using the **admin** username, along with the admin password you specified in the Azure portal.
+
+ :::image type="content" source="media/migrate-data-striim/striim-login-ui.png" alt-text="Sign in to Striim":::
+
+1. You'll now arrive at Striim's home page. There are three different panes: **Dashboards**, **Apps**, and **SourcePreview**. The Dashboards pane allows you to move data in real time and visualize it. The Apps pane contains your streaming data pipelines, or data flows. On the right-hand side of the page is SourcePreview, where you can preview your data before moving it.
+
+1. Select the **Apps** pane; we'll focus on this pane for now. There are a variety of sample apps that you can use to learn about Striim; however, in this article you'll create your own. Select the **Add App** button in the top right-hand corner.
+
+ :::image type="content" source="media/migrate-data-striim/add-striim-app.png" alt-text="Add the Striim app":::
+
+1. There are a few different ways to create Striim applications. Select **Start with Template** to start with an existing template.
+
+ :::image type="content" source="media/migrate-data-striim/start-with-template.png" alt-text="Start the app with the template":::
+
+1. In the **Search templates** field, type "Cosmos" and select **Target: Azure Cosmos DB**, and then select **Oracle CDC to Azure Cosmos DB**.
+
+ :::image type="content" source="media/migrate-data-striim/oracle-cdc-cosmosdb.png" alt-text="Select Oracle CDC to Azure Cosmos DB":::
+
+1. On the next page, name your application. You can provide a name such as **oraToCosmosDB**, and then select **Save**.
+
+1. Next, enter the source configuration of your source Oracle instance. Enter a value for the **Source Name**. The source name is just a naming convention for the Striim application; you can use something like **src_onPremOracle**. Enter values for the rest of the source parameters: **URL**, **Username**, and **Password**. Choose **LogMiner** as the reader to read data from Oracle. Select **Next** to continue.
+
+ :::image type="content" source="media/migrate-data-striim/configure-source-parameters.png" alt-text="Configure source parameters":::
+
+1. Striim will check your environment and make sure that it can connect to your source Oracle instance, that it has the right privileges, and that CDC is configured properly. Once all the values are validated, select **Next**.
+
+ :::image type="content" source="media/migrate-data-striim/validate-source-parameters.png" alt-text="Validate source parameters":::
+
+1. Select the tables from the Oracle database that you'd like to migrate. For example, choose the Orders table and select **Next**.
+
+ :::image type="content" source="media/migrate-data-striim/select-source-tables.png" alt-text="Select source tables":::
+
+1. After selecting the source table, you can do more complicated operations such as mapping and filtering. In this case, you'll just create a replica of your source table in Azure Cosmos DB. So, select **Next** to configure the target.
+
+1. Now, configure the target:
+
+ * **Target Name** - Provide a friendly name for the target.
+ * **Input From** - From the dropdown list, select the input stream from the one you created in the source Oracle configuration.
+ * **Collections** - Enter the target Azure Cosmos DB configuration properties. The collections syntax is **SourceSchema.SourceTable, TargetDatabase.TargetContainer**. In this example, the value would be "SYSTEM.ORDERS, StriimDemo.Orders".
+ * **AccessKey** - The primary key of your Azure Cosmos DB account.
+ * **ServiceEndpoint** - The URI of your Azure Cosmos DB account. Both values can be found under the **Keys** section of the Azure portal.
+
+ Select **Save** and **Next**.
+
+ :::image type="content" source="media/migrate-data-striim/configure-target-parameters.png" alt-text="Configure target parameters":::
++
+1. Next, you'll arrive at the flow designer, where you can drag and drop out-of-the-box connectors to create your streaming applications. You won't make any modifications to the flow at this point, so go ahead and deploy the application by selecting the **Deploy App** button.
+
+ :::image type="content" source="media/migrate-data-striim/deploy-app.png" alt-text="Deploy the app":::
+
+1. In the deployment window, you can specify if you want to run certain parts of your application on specific parts of your deployment topology. Since we're running in a simple deployment topology through Azure, we'll use the default option.
+
+ :::image type="content" source="media/migrate-data-striim/deploy-using-default-option.png" alt-text="Use the default option":::
+
+1. After deploying, you can preview the stream to see data flowing through. Select the **wave** icon and the eyeball next to it. Select the **Deployed** button in the top menu bar, and select **Start App**.
+
+ :::image type="content" source="media/migrate-data-striim/start-app.png" alt-text="Start the app":::
+
+1. By using a **CDC (Change Data Capture)** reader, Striim picks up only new changes on the database. If you have data flowing through your source tables, you'll see it. However, because this is a demo table, the source isn't connected to any application. If you use a sample data generator, you can insert a chain of events into your Oracle database.
+
+1. You'll see data flowing through the Striim platform. Striim also picks up all the metadata associated with your table, which is helpful for monitoring the data and making sure that it lands on the right target.
+
+ :::image type="content" source="media/migrate-data-striim/configure-cdc-pipeline.png" alt-text="Configure CDC pipeline":::
+
+1. Finally, sign in to Azure and navigate to your Azure Cosmos DB account. Refresh the Data Explorer to confirm that the data has arrived.
+
+ :::image type="content" source="media/migrate-data-striim/portal-validate-results.png" alt-text="Validate migrated data in Azure":::
+
+By using the Striim solution in Azure, you can continuously migrate data to Azure Cosmos DB from various sources such as Oracle, Cassandra, and MongoDB. To learn more, visit the [Striim website](https://www.striim.com/) and [download a free 30-day trial of Striim](https://go2.striim.com/download-free-trial). For any issues when setting up the migration path with Striim, file a [support request](https://go2.striim.com/request-support-striim).
+
+## Next steps
+
+* Trying to do capacity planning for a migration to Azure Cosmos DB?
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+* If you're migrating data to an Azure Cosmos DB for Apache Cassandra account, see [how to migrate data to an API for Cassandra account using Striim](../cassandr)
+
+* [Monitor and debug your data with Azure Cosmos DB metrics](../use-metrics.md)
cosmos-db Migrate Dotnet V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-dotnet-v2.md
+
+ Title: Migrate your application to use the Azure Cosmos DB .NET SDK 2.0 (Microsoft.Azure.DocumentDB)
+description: Learn how to upgrade your existing .NET application from the v1 SDK to .NET SDK v2 for API for NoSQL.
++++ Last updated : 08/26/2021
+ms.devlang: csharp
+++
+# Migrate your application to use the Azure Cosmos DB .NET SDK v2
+
+> [!IMPORTANT]
+> Note that v3 of the .NET SDK is currently available, and a migration plan from v2 to v3 is available [here](migrate-dotnet-v3.md). To learn about the Azure Cosmos DB .NET SDK v2, see the [Release notes](sdk-dotnet-v2.md), the [.NET GitHub repository](https://github.com/Azure/azure-cosmos-dotnet-v2), .NET SDK v2 [Performance Tips](performance-tips.md), and the [Troubleshooting guide](troubleshoot-dotnet-sdk.md).
+>
+
+This article highlights some of the considerations for upgrading your existing v1 .NET application to the Azure Cosmos DB .NET SDK v2 for API for NoSQL. Azure Cosmos DB .NET SDK v2 corresponds to the `Microsoft.Azure.DocumentDB` namespace. You can use the information provided in this document if you're migrating your application from any of the following Azure Cosmos DB .NET platforms to the v2 SDK (`Microsoft.Azure.DocumentDB`):
+
+* Azure Cosmos DB .NET Framework v1 SDK for API for NoSQL
+* Azure Cosmos DB .NET Core SDK v1 for API for NoSQL
+
+## What's available in the .NET v2 SDK
+
+The v2 SDK contains many usability and performance improvements, including:
+
+* Support for TCP direct mode for non-Windows clients
+* Multi-region write support
+* Improvements on query performance
+* Support for geospatial/geometry collections and indexing
+* Improved diagnostics for direct/TCP transport
+* Updates on direct TCP transport stack to reduce the number of connections established
+* Improvements in latency reduction in the RequestTimeout
+
+Most of the retry logic and lower levels of the SDK remain largely unchanged.
+
+## Why migrate to the .NET v2 SDK
+
+In addition to the numerous performance improvements, new feature investments made in the latest SDK will not be back ported to older versions.
+
+Additionally, the older SDKs will be replaced by newer versions and the v1 SDK will go into [maintenance mode](sdk-dotnet-v2.md). For the best development experience, we recommend migrating your application to a later version of the SDK.
+
+## Major changes from v1 SDK to v2 SDK
+
+### Direct mode + TCP
+
+The .NET v2 SDK now supports both direct and gateway mode. Direct mode supports connectivity through TCP protocol and offers better performance as it connects directly to the backend replicas with fewer network hops.
+
+For more details, read through the [Azure Cosmos DB SQL SDK connectivity modes guide](sdk-connection-modes.md).
+
+### Session token formatting
+
+The v2 SDK no longer uses the *simple* session token format that was used in v1; instead, the SDK uses the *vector* format. The format should be converted when a session token is passed between client applications that use different SDK versions, because the formats aren't interchangeable.
+
+For more information, see [converting session token formats in the .NET SDK](how-to-convert-session-token.md).
+
+### Using the .NET change feed processor SDK
+
+The .NET change feed processor library 2.1.x requires `Microsoft.Azure.DocumentDB` 2.0 or later.
+
+The 2.1.x library has the following key changes:
+
+* Stability and diagnosability improvements
+* Improved handling of errors and exceptions
+* Additional support for partitioned lease collections
+* Advanced extensions to implement the `ChangeFeedDocument` interface and class for additional error handling and tracing
+* Added support for using a custom store to persist continuation tokens per partition
+
+For more information, see the change feed processor library [release notes](sdk-dotnet-change-feed-v2.md).
+
+### Using the bulk executor library
+
+The v2 bulk executor library currently has a dependency on the Azure Cosmos DB .NET SDK 2.5.1 or later.
+
+For more information, see the [Azure Cosmos DB bulk executor library overview](../bulk-executor-overview.md) and the .NET bulk executor library [release notes](sdk-dotnet-bulk-executor-v2.md).
+
+## Next steps
+
+* Read about [additional performance tips](quickstart-dotnet.md) for using Azure Cosmos DB for API for NoSQL v2 to optimize your application and achieve maximum performance
+* Learn more about [what you can do with the v2 SDK](samples-dotnet.md)
+* Trying to do capacity planning for a migration to Azure Cosmos DB?
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-dotnet-v3.md
+
+ Title: Migrate your application to use the Azure Cosmos DB .NET SDK 3.0 (Microsoft.Azure.Cosmos)
+description: Learn how to upgrade your existing .NET application from the v2 SDK to the newer .NET SDK v3 (Microsoft.Azure.Cosmos package) for API for NoSQL.
+++++ Last updated : 06/01/2022
+ms.devlang: csharp
++
+# Migrate your application to use the Azure Cosmos DB .NET SDK v3
+
+> [!IMPORTANT]
+> To learn about the Azure Cosmos DB .NET SDK v3, see the [Release notes](sdk-dotnet-v3.md), the [.NET GitHub repository](https://github.com/Azure/azure-cosmos-dotnet-v3), .NET SDK v3 [Performance Tips](performance-tips-dotnet-sdk-v3.md), and the [Troubleshooting guide](troubleshoot-dotnet-sdk.md).
+>
+
+This article highlights some of the considerations when upgrading your existing .NET application to the newer Azure Cosmos DB .NET SDK v3 for API for NoSQL. Azure Cosmos DB .NET SDK v3 corresponds to the `Microsoft.Azure.Cosmos` namespace. You can use the information provided in this doc if you're migrating your application from any of the following Azure Cosmos DB .NET SDKs:
+
+* Azure Cosmos DB .NET Framework SDK v2 for API for NoSQL
+* Azure Cosmos DB .NET Core SDK v2 for API for NoSQL
+
+The instructions in this article also help you to migrate the following external libraries that are now part of the Azure Cosmos DB .NET SDK v3 for API for NoSQL:
+
+* .NET change feed processor library 2.0
+* .NET bulk executor library 1.1 or greater
+
+## What's new in the .NET V3 SDK
+
+The v3 SDK contains many usability and performance improvements, including:
+
+* Intuitive programming model naming
+* .NET Standard 2.0 **
+* Increased performance through stream API support
+* Fluent hierarchy that replaces the need for URI factory
+* Built-in support for change feed processor library
+* Built-in support for bulk operations
+* Mockable APIs for easier unit testing
+* Transactional batch and Blazor support
+* Pluggable serializers
+* Scale non-partitioned and autoscale containers
+
+** The SDK targets .NET Standard 2.0 that unifies the existing Azure Cosmos DB .NET Framework and .NET Core SDKs into a single .NET SDK. You can use the .NET SDK in any platform that implements .NET Standard 2.0, including your .NET Framework 4.6.1+ and .NET Core 2.0+ applications.
+
+Most of the networking, retry logic, and lower levels of the SDK remain largely unchanged.
+
+**The Azure Cosmos DB .NET SDK v3 is now open source.** We welcome any pull requests and will be logging issues and tracking feedback on [GitHub.](https://github.com/Azure/azure-cosmos-dotnet-v3/) We'll work on taking on any features that will improve customer experience.
+
+## Why migrate to the .NET v3 SDK
+
+In addition to the numerous usability and performance improvements, new feature investments made in the latest SDK won't be backported to older versions.
+The v2 SDK is currently in maintenance mode. For the best development experience, we recommend always starting with the latest supported version of the SDK.
+
+## Major name changes from v2 SDK to v3 SDK
+
+The following name changes have been applied throughout the .NET 3.0 SDK to align with the API naming conventions for the API for NoSQL:
+
+* `DocumentClient` is renamed to `CosmosClient`
+* `Collection` is renamed to `Container`
+* `Document` is renamed to `Item`
+
+All the resource objects are renamed with additional properties that include the resource name for clarity.
+
+The following are some of the main class name changes:
+
+| .NET v2 SDK | .NET v3 SDK |
+|-|-|
+|`Microsoft.Azure.Documents.Client.DocumentClient`|`Microsoft.Azure.Cosmos.CosmosClient`|
+|`Microsoft.Azure.Documents.Client.ConnectionPolicy`|`Microsoft.Azure.Cosmos.CosmosClientOptions`|
+|`Microsoft.Azure.Documents.Client.DocumentClientException` |`Microsoft.Azure.Cosmos.CosmosException`|
+|`Microsoft.Azure.Documents.Client.Database`|`Microsoft.Azure.Cosmos.DatabaseProperties`|
+|`Microsoft.Azure.Documents.Client.DocumentCollection`|`Microsoft.Azure.Cosmos.ContainerProperties`|
+|`Microsoft.Azure.Documents.Client.RequestOptions`|`Microsoft.Azure.Cosmos.ItemRequestOptions`|
+|`Microsoft.Azure.Documents.Client.FeedOptions`|`Microsoft.Azure.Cosmos.QueryRequestOptions`|
+|`Microsoft.Azure.Documents.Client.StoredProcedure`|`Microsoft.Azure.Cosmos.StoredProcedureProperties`|
+|`Microsoft.Azure.Documents.Client.Trigger`|`Microsoft.Azure.Cosmos.TriggerProperties`|
+|`Microsoft.Azure.Documents.SqlQuerySpec`|`Microsoft.Azure.Cosmos.QueryDefinition`|
+
+### Classes replaced on .NET v3 SDK
+
+The following classes have been replaced on the 3.0 SDK:
+
+* `Microsoft.Azure.Documents.UriFactory`
+
+* `Microsoft.Azure.Documents.Document`
+
+* `Microsoft.Azure.Documents.Resource`
+
+The Microsoft.Azure.Documents.UriFactory class has been replaced by the fluent design.
+
+# [.NET SDK v3](#tab/dotnet-v3)
+
+```csharp
+Container container = client.GetContainer(databaseName, containerName);
+ItemResponse<SalesOrder> response = await container.CreateItemAsync(
+ salesOrder,
+ new PartitionKey(salesOrder.AccountNumber));
+
+```
+
+# [.NET SDK v2](#tab/dotnet-v2)
+
+```csharp
+Uri collectionUri = UriFactory.CreateDocumentCollectionUri(databaseName, containerName);
+await client.CreateDocumentAsync(
+ collectionUri,
+ salesOrder,
+ new RequestOptions { PartitionKey = new PartitionKey(salesOrder.AccountNumber) });
+```
+++
+Because the .NET v3 SDK allows users to configure a custom serialization engine, there's no direct replacement for the `Document` type. When using Newtonsoft.Json (the default serialization engine), `JObject` can be used to achieve the same functionality. When using a different serialization engine, you can use its base JSON document type (for example, `JsonDocument` for System.Text.Json). The recommendation is to use a C# type that reflects the schema of your items instead of relying on generic types.
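+
+As a small illustration of the untyped approach with the default Newtonsoft.Json engine, an item can be read as a `JObject` (from `Newtonsoft.Json.Linq`). The `container`, item ID, and partition key value below are placeholders, not part of the original sample:
+
+```csharp
+// Read an item without a typed model; JObject plays the role the Document type used to.
+ItemResponse<JObject> itemResponse = await container.ReadItemAsync<JObject>(
+    id: "MyId",
+    partitionKey: new PartitionKey("MyPartitionKey"));
+
+JObject untypedItem = itemResponse.Resource;
+string someProperty = untypedItem.Value<string>("someProperty");
+```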
+
+### Changes to item ID generation
+
+Item ID is no longer auto populated in the .NET v3 SDK. Therefore, the Item ID must specifically include a generated ID. View the following example:
+
+```csharp
+[JsonProperty(PropertyName = "id")]
+public Guid Id { get; set; }
+```
+
+### Changed default behavior for connection mode
+
+The SDK v3 now defaults to Direct + TCP connection modes, compared to the previous v2 SDK, which defaulted to Gateway + HTTPS connection modes. This change provides enhanced performance and scalability.
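+
+If you need to keep the previous default, you can opt back into Gateway mode explicitly through `CosmosClientOptions`. The endpoint and key below are the same placeholder values used later in this article:
+
+```csharp
+// Explicitly select Gateway mode to match the v2 SDK's former default behavior.
+CosmosClient gatewayClient = new CosmosClient(
+    "https://testcosmos.documents.azure.com:443/",
+    "SuperSecretKey",
+    new CosmosClientOptions
+    {
+        ConnectionMode = ConnectionMode.Gateway
+    });
+```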
+
+### Changes to FeedOptions (QueryRequestOptions in v3.0 SDK)
+
+The `FeedOptions` class in SDK v2 has now been renamed to `QueryRequestOptions` in the SDK v3 and within the class, several properties have had changes in name and/or default value or been removed completely.
+
+`FeedOptions.MaxDegreeOfParallelism` has been renamed to `QueryRequestOptions.MaxConcurrency`. The default value and associated behavior remain the same: operations run client side during parallel query execution are executed serially with no parallelism.
+
+`FeedOptions.EnableCrossPartitionQuery` has been removed and the default behavior in SDK 3.0 is that cross-partition queries will be executed without the need to enable the property specifically.
+
+`FeedOptions.PopulateQueryMetrics` is enabled by default with the results being present in the `FeedResponse.Diagnostics` property of the response.
+
+`FeedOptions.RequestContinuation` has now been promoted to the query methods themselves.
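+
+As a rough sketch of how these renamed and promoted options fit together in v3 (the `container` variable and `MyItem` type are placeholders), a cross-partition query might look like the following:
+
+```csharp
+QueryRequestOptions queryOptions = new QueryRequestOptions
+{
+    MaxConcurrency = 4 // replaces FeedOptions.MaxDegreeOfParallelism
+};
+
+// The continuation token is now a parameter of the query method itself.
+FeedIterator<MyItem> iterator = container.GetItemQueryIterator<MyItem>(
+    new QueryDefinition("SELECT * FROM c"),
+    continuationToken: null,
+    requestOptions: queryOptions);
+
+while (iterator.HasMoreResults)
+{
+    FeedResponse<MyItem> page = await iterator.ReadNextAsync();
+    // page.ContinuationToken can be persisted to resume the query later,
+    // and page.Diagnostics contains the query metrics.
+}
+```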
+
+The following properties have been removed:
+
+* `FeedOptions.DisableRUPerMinuteUsage`
+
+* `FeedOptions.EnableCrossPartitionQuery`
+
+* `FeedOptions.JsonSerializerSettings`
+
+* `FeedOptions.PartitionKeyRangeId`
+
+* `FeedOptions.PopulateQueryMetrics`
+
+### Constructing a client
+
+The .NET SDK v3 provides a fluent `CosmosClientBuilder` class that replaces the need for the SDK v2 URI Factory.
+
+The fluent design builds URLs internally and allows a single `Container` object to be passed around instead of a `DocumentClient`, `DatabaseName`, and `DocumentCollection`.
+
+The following example creates a new `CosmosClientBuilder` with a strong ConsistencyLevel and a list of preferred locations:
+
+```csharp
+CosmosClientBuilder cosmosClientBuilder = new CosmosClientBuilder(
+ accountEndpoint: "https://testcosmos.documents.azure.com:443/",
+ authKeyOrResourceToken: "SuperSecretKey")
+.WithConsistencyLevel(ConsistencyLevel.Strong)
+.WithApplicationRegion(Regions.EastUS);
+CosmosClient client = cosmosClientBuilder.Build();
+```
+
+### Exceptions
+
+Where the v2 SDK used `DocumentClientException` to signal errors during operations, the v3 SDK uses `CosmosException`, which exposes the `StatusCode`, `Diagnostics`, and other response-related information. The complete information is serialized when `ToString()` is used:
+
+```csharp
+catch (CosmosException ex)
+{
+ HttpStatusCode statusCode = ex.StatusCode;
+ CosmosDiagnostics diagnostics = ex.Diagnostics;
+ // store diagnostics optionally with diagnostics.ToString();
+ // or log the entire error details with ex.ToString();
+}
+```
+
+### Diagnostics
+
+Where the v2 SDK had Direct-only diagnostics available through the `RequestDiagnosticsString` property, the v3 SDK uses `Diagnostics` available in all responses and exceptions, which are richer and not restricted to Direct mode. They include not only the time spent on the SDK for the operation, but also the regions the operation contacted:
+
+```csharp
+try
+{
+ ItemResponse<MyItem> response = await container.ReadItemAsync<MyItem>(
+ partitionKey: new PartitionKey("MyPartitionKey"),
+ id: "MyId");
+
+ TimeSpan elapsedTime = response.Diagnostics.GetElapsedTime();
+ if (elapsedTime > somePreDefinedThreshold)
+ {
+ // log response.Diagnostics.ToString();
+ IReadOnlyList<(string region, Uri uri)> regions = response.Diagnostics.GetContactedRegions();
+ }
+}
+catch (CosmosException cosmosException) {
+ string diagnostics = cosmosException.Diagnostics.ToString();
+
+ TimeSpan elapsedTime = cosmosException.Diagnostics.GetElapsedTime();
+
+ IReadOnlyList<(string region, Uri uri)> regions = cosmosException.Diagnostics.GetContactedRegions();
+
+ // log cosmosException.ToString()
+}
+```
+
+### ConnectionPolicy
+
+Some settings in `ConnectionPolicy` have been renamed or replaced. The following table shows the mapping; a configuration sketch follows the table:
+
+| .NET v2 SDK | .NET v3 SDK |
+|-|-|
+|`EnableEndpointRediscovery`|`LimitToEndpoint` - The value is now inverted: if `EnableEndpointRediscovery` was set to `true`, set `LimitToEndpoint` to `false`. Before using this setting, you need to understand [how it affects the client](troubleshoot-sdk-availability.md).|
+|`ConnectionProtocol`|Removed. The protocol is tied to the mode: Gateway uses HTTPS and Direct uses TCP. Direct mode with the HTTPS protocol is no longer supported in the v3 SDK; the recommendation is to use the TCP protocol. |
+|`MediaRequestTimeout`|Removed. Attachments are no longer supported.|
+|`SetCurrentLocation`|`CosmosClientOptions.ApplicationRegion` can be used to achieve the same effect.|
+|`PreferredLocations`|`CosmosClientOptions.ApplicationPreferredRegions` can be used to achieve the same effect.|
+
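+For illustration, the following sketch shows how common v2 `ConnectionPolicy` settings might be expressed with `CosmosClientOptions` in v3. The region values are illustrative only:
+
+```csharp
+// Sketch only: approximate v3 equivalents for common v2 ConnectionPolicy settings.
+// Region values are illustrative.
+CosmosClientOptions clientOptions = new CosmosClientOptions
+{
+    // v2: EnableEndpointRediscovery = true  ->  v3: LimitToEndpoint = false (value is inverted)
+    LimitToEndpoint = false,
+
+    // v2: ConnectionPolicy.PreferredLocations  ->  v3: ApplicationPreferredRegions
+    // (alternatively, set a single region with ApplicationRegion)
+    ApplicationPreferredRegions = new List<string> { Regions.EastUS, Regions.WestUS }
+};
+```
+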
+### Indexing policy
+
+In the indexing policy, it isn't possible to configure these properties. When not specified, these properties now always have the following values:
+
+| **Property Name** | **New Value (not configurable)** |
+| -- | -- |
+| `Kind` | `range` |
+| `dataType` | `String` and `Number` |
+
+See [this section](how-to-manage-indexing-policy.md#indexing-policy-examples) for indexing policy examples for including and excluding paths. Due to improvements in the query engine, configuring these properties, even if using an older SDK version, has no impact on performance.
+
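+Included, excluded, and composite paths are still configurable. The following is a minimal sketch of setting them through `ContainerProperties` in the v3 SDK; the container name, partition key path, and indexed paths are illustrative:
+
+```csharp
+// Sketch only: configure index paths on ContainerProperties in the v3 SDK.
+// The names and paths are illustrative.
+ContainerProperties containerProperties = new ContainerProperties("MyContainer", "/myPartitionKey");
+
+containerProperties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
+containerProperties.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/largePayload/?" });
+```
+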
+### Session token
+
+Where the v2 SDK exposed the session token of a response as `ResourceResponse.SessionToken`, the v3 SDK exposes that value in the `Headers.Session` property of any response, because the session token is a response header.
+
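+A minimal sketch follows, assuming an existing `container` and a placeholder `MyItem` type:
+
+```csharp
+// Sketch only: capture the session token from a v3 response header and reuse it.
+// 'container', 'MyItem', 'myItem', and the id/partition key values are placeholders.
+ItemResponse<MyItem> createResponse = await container.CreateItemAsync(
+    myItem, new PartitionKey("MyPartitionKey"));
+
+string sessionToken = createResponse.Headers.Session;
+
+ItemResponse<MyItem> readResponse = await container.ReadItemAsync<MyItem>(
+    id: "MyId",
+    partitionKey: new PartitionKey("MyPartitionKey"),
+    requestOptions: new ItemRequestOptions { SessionToken = sessionToken });
+```
+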
+### Timestamp
+
+Where the v2 SDK exposed the timestamp of a document through the `Timestamp` property, the `Document` class is no longer available in v3. Instead, you can map the `_ts` [system property](../account-databases-containers-items.md#properties-of-an-item) to a property in your model.
+
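+A minimal sketch of such a model, assuming the SDK's default JSON.NET serializer, follows. The class and property names are illustrative:
+
+```csharp
+using Newtonsoft.Json;
+
+// Sketch only: surface the _ts system property on your own model.
+// _ts is the last-modified time in seconds since the Unix epoch.
+public class MyItem
+{
+    [JsonProperty("id")]
+    public string Id { get; set; }
+
+    [JsonProperty("_ts")]
+    public long Timestamp { get; set; }
+}
+```
+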
+### OpenAsync
+
+For use cases where `OpenAsync()` was being used to warm up the v2 SDK client, `CreateAndInitializeAsync` can be used to both [create and warm-up](https://devblogs.microsoft.com/cosmosdb/improve-net-sdk-initialization/) a v3 SDK client.
+
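+A minimal sketch follows; the endpoint, key, and database/container names are placeholders:
+
+```csharp
+// Sketch only: create and warm up a v3 client for known containers.
+// The endpoint, key, and names are placeholders.
+IReadOnlyList<(string databaseId, string containerId)> containersToInitialize =
+    new List<(string, string)> { ("MyDatabase", "MyContainer") };
+
+CosmosClient client = await CosmosClient.CreateAndInitializeAsync(
+    accountEndpoint: "https://<your-account>.documents.azure.com:443/",
+    authKeyOrResourceToken: "<your-key>",
+    containers: containersToInitialize);
+```
+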
+### Using the change feed processor APIs directly from the v3 SDK
+
+The v3 SDK has built-in support for the Change Feed Processor APIs, allowing you to use the same SDK for building your application and change feed processor implementation. Previously, you had to use a separate change feed processor library.
+
+For more information, see [how to migrate from the change feed processor library to the Azure Cosmos DB .NET v3 SDK](how-to-migrate-from-change-feed-library.md).
+
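+For illustration, a minimal change feed processor built with the v3 SDK might look like the following sketch. The monitored container, lease container, item type, and names are placeholders:
+
+```csharp
+// Sketch only: a change feed processor built directly with the v3 SDK.
+// 'monitoredContainer', 'leaseContainer', 'MyItem', and the names are placeholders.
+ChangeFeedProcessor processor = monitoredContainer
+    .GetChangeFeedProcessorBuilder<MyItem>(
+        processorName: "myProcessor",
+        onChangesDelegate: (changes, cancellationToken) =>
+        {
+            foreach (MyItem item in changes)
+            {
+                // Process each change here.
+            }
+            return Task.CompletedTask;
+        })
+    .WithInstanceName("myInstance")
+    .WithLeaseContainer(leaseContainer)
+    .Build();
+
+await processor.StartAsync();
+```
+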
+### Change feed queries
+
+Executing change feed queries on the v3 SDK is considered to be using the [change feed pull model](change-feed-pull-model.md). Follow this table to migrate configuration; a sketch of the pull model follows the table:
+
+| .NET v2 SDK | .NET v3 SDK |
+|-|-|
+|`ChangeFeedOptions.PartitionKeyRangeId`|`FeedRange` - To achieve parallelism when reading the change feed, [FeedRanges](change-feed-pull-model.md#using-feedrange-for-parallelization) can be used. It's no longer a required parameter; you can now [read the change feed for an entire container](change-feed-pull-model.md#consuming-an-entire-containers-changes) easily.|
+|`ChangeFeedOptions.PartitionKey`|`FeedRange.FromPartitionKey` - A FeedRange representing the desired Partition Key can be used to [read the Change Feed for that Partition Key value](change-feed-pull-model.md#consuming-a-partition-keys-changes).|
+|`ChangeFeedOptions.RequestContinuation`|`ChangeFeedStartFrom.Continuation` - The change feed iterator can be stopped and resumed at any time by [saving the continuation and using it when creating a new iterator](change-feed-pull-model.md#saving-continuation-tokens).|
+|`ChangeFeedOptions.StartTime`|`ChangeFeedStartFrom.Time` |
+|`ChangeFeedOptions.StartFromBeginning` |`ChangeFeedStartFrom.Beginning` |
+|`ChangeFeedOptions.MaxItemCount`|`ChangeFeedRequestOptions.PageSizeHint` - Sets the maximum number of items to be returned per page of change feed results.|
+|`IDocumentQuery.HasMoreResults` |`response.StatusCode == HttpStatusCode.NotModified` - The change feed is conceptually infinite, so there could always be more results. When a response contains the `HttpStatusCode.NotModified` status code, it means there are no new changes to read at this time. You can use that to stop and [save the continuation](change-feed-pull-model.md#saving-continuation-tokens) or to temporarily sleep or wait and then call `ReadNextAsync` again to test for new changes. |
+|Split handling|It's no longer required for users to handle split exceptions when reading the change feed; splits are handled transparently without the need for user interaction.|
+
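+A minimal sketch of the pull model with feed ranges follows; `container` and `MyItem` are placeholders:
+
+```csharp
+// Sketch only: read the change feed with the v3 pull model, one iterator per feed range.
+// 'container' and 'MyItem' are placeholders.
+IReadOnlyList<FeedRange> feedRanges = await container.GetFeedRangesAsync();
+
+foreach (FeedRange feedRange in feedRanges)
+{
+    FeedIterator<MyItem> iterator = container.GetChangeFeedIterator<MyItem>(
+        ChangeFeedStartFrom.Beginning(feedRange),
+        ChangeFeedMode.Incremental);
+
+    while (iterator.HasMoreResults)
+    {
+        FeedResponse<MyItem> response = await iterator.ReadNextAsync();
+        if (response.StatusCode == HttpStatusCode.NotModified)
+        {
+            // No new changes for this range; save response.ContinuationToken and retry later.
+            break;
+        }
+        // Process the documents in response here.
+    }
+}
+```
+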
+### Using the bulk executor library directly from the V3 SDK
+
+The v3 SDK has built-in support for the bulk executor library, allowing you to use the same SDK for building your application and performing bulk operations. Previously, you were required to use a separate bulk executor library.
+
+For more information, see [how to migrate from the bulk executor library to bulk support in Azure Cosmos DB .NET V3 SDK](how-to-migrate-from-bulk-executor-library.md).
+
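+A minimal sketch of bulk mode follows; the endpoint, key, names, the `MyItem` type, and the `items` collection are placeholders:
+
+```csharp
+// Sketch only: enable bulk mode and issue many concurrent point operations.
+// The endpoint, key, names, 'MyItem', and 'items' are placeholders.
+CosmosClient bulkClient = new CosmosClient(
+    accountEndpoint: "https://<your-account>.documents.azure.com:443/",
+    authKeyOrResourceToken: "<your-key>",
+    clientOptions: new CosmosClientOptions { AllowBulkExecution = true });
+
+Container bulkContainer = bulkClient.GetContainer("MyDatabase", "MyContainer");
+
+List<Task> operations = new List<Task>();
+foreach (MyItem item in items)
+{
+    operations.Add(bulkContainer.CreateItemAsync(item, new PartitionKey(item.PartitionKeyValue)));
+}
+
+await Task.WhenAll(operations);
+```
+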
+## Code snippet comparisons
+
+The following code snippets show the differences in how resources are created between the .NET v2 and v3 SDKs:
+
+## Database operations
+
+### Create a database
+
+# [.NET SDK v3](#tab/dotnet-v3)
+
+```csharp
+// Create database with no shared provisioned throughput
+DatabaseResponse databaseResponse = await client.CreateDatabaseIfNotExistsAsync(DatabaseName);
+Database database = databaseResponse;
+DatabaseProperties databaseProperties = databaseResponse;
+
+// Create a database with a shared manual provisioned throughput
+string databaseIdManual = new string(DatabaseName + "_SharedManualThroughput");
+database = await client.CreateDatabaseIfNotExistsAsync(databaseIdManual, ThroughputProperties.CreateManualThroughput(400));
+
+// Create a database with shared autoscale provisioned throughput
+string databaseIdAutoscale = new string(DatabaseName + "_SharedAutoscaleThroughput");
+database = await client.CreateDatabaseIfNotExistsAsync(databaseIdAutoscale, ThroughputProperties.CreateAutoscaleThroughput(4000));
+```
+
+# [.NET SDK v2](#tab/dotnet-v2)
+
+```csharp
+// Create database
+ResourceResponse<Database> databaseResponse = await client.CreateDatabaseIfNotExistsAsync(new Database { Id = DatabaseName });
+Database database = databaseResponse;
+
+// Create a database with shared standard provisioned throughput
+database = await client.CreateDatabaseIfNotExistsAsync(new Database{ Id = databaseIdStandard }, new RequestOptions { OfferThroughput = 400 });
+
+// Creating a database with shared autoscale provisioned throughput isn't supported in the v2 SDK; use the v3 SDK instead
+```
++
+### Read a database by ID
+
+# [.NET SDK v3](#tab/dotnet-v3)
+
+```csharp
+// Read a database
+Console.WriteLine($"{Environment.NewLine} Read database resource: {DatabaseName}");
+database = client.GetDatabase(DatabaseName);
+Console.WriteLine($"{Environment.NewLine} database { database.Id.ToString()}");
+
+// Read all databases
+string findQueryText = "SELECT * FROM c";
+using (FeedIterator<DatabaseProperties> feedIterator = client.GetDatabaseQueryIterator<DatabaseProperties>(findQueryText))
+{
+ while (feedIterator.HasMoreResults)
+ {
+ FeedResponse<DatabaseProperties> databaseResponses = await feedIterator.ReadNextAsync();
+ foreach (DatabaseProperties _database in databaseResponses)
+ {
+ Console.WriteLine($"{ Environment.NewLine} database {_database.Id.ToString()}");
+ }
+ }
+}
+```
+
+# [.NET SDK v2](#tab/dotnet-v2)
+
+```csharp
+// Read a database
+database = await client.ReadDatabaseAsync(UriFactory.CreateDatabaseUri(DatabaseName));
+Console.WriteLine("\n database {0}", database.Id.ToString());
+
+// Read all databases
+Console.WriteLine("\n1.1 Reading all databases resources");
+foreach (Database _database in await client.ReadDatabaseFeedAsync())
+{
+ Console.WriteLine("\n database {0} \n {1}", _database.Id.ToString(), _database.ToString());
+}
+```
++
+### Delete a database
+
+# [.NET SDK v3](#tab/dotnet-v3)
+
+```csharp
+// Delete a database
+await client.GetDatabase(DatabaseName).DeleteAsync();
+Console.WriteLine($"{ Environment.NewLine} database {DatabaseName} deleted.");
+
+// Delete all databases in an account
+string deleteQueryText = "SELECT * FROM c";
+using (FeedIterator<DatabaseProperties> feedIterator = client.GetDatabaseQueryIterator<DatabaseProperties>(deleteQueryText))
+{
+ while (feedIterator.HasMoreResults)
+ {
+ FeedResponse<DatabaseProperties> databaseResponses = await feedIterator.ReadNextAsync();
+ foreach (DatabaseProperties _database in databaseResponses)
+ {
+ await client.GetDatabase(_database.Id).DeleteAsync();
+ Console.WriteLine($"{ Environment.NewLine} database {_database.Id} deleted");
+ }
+ }
+}
+```
+
+# [.NET SDK v2](#tab/dotnet-v2)
+
+```csharp
+// Delete a database
+database = await client.DeleteDatabaseAsync(UriFactory.CreateDatabaseUri(DatabaseName));
+Console.WriteLine(" database {0} deleted.", DatabaseName);
+
+// Delete all databases in an account
+foreach (Database _database in await client.ReadDatabaseFeedAsync())
+{
+ await client.DeleteDatabaseAsync(UriFactory.CreateDatabaseUri(_database.Id));
+ Console.WriteLine("\n database {0} deleted", _database.Id);
+}
+```
++
+## Container operations
+
+### Create a container (Autoscale + Time to live with expiration)
+
+# [.NET SDK v3](#tab/dotnet-v3)
+
+```csharp
+private static async Task CreateManualThroughputContainer(Database database)
+{
+ // Set throughput to the minimum value of 400 RU/s manually configured throughput
+ string containerIdManual = ContainerName + "_Manual";
+ ContainerResponse container = await database.CreateContainerIfNotExistsAsync(
+ id: containerIdManual,
+ partitionKeyPath: partitionKeyPath,
+ throughput: 400);
+}
+
+// Create container with autoscale
+private static async Task CreateAutoscaleThroughputContainer(Database database)
+{
+ string autoscaleContainerId = ContainerName + "_Autoscale";
+ ContainerProperties containerProperties = new ContainerProperties(autoscaleContainerId, partitionKeyPath);
+
+ Container container = await database.CreateContainerIfNotExistsAsync(
+        containerProperties: containerProperties,
+        throughputProperties: ThroughputProperties.CreateAutoscaleThroughput(autoscaleMaxThroughput: 4000));
+}
+
+// Create a container with TTL Expiration
+private static async Task CreateContainerWithTtlExpiration(Database database)
+{
+ string containerIdManualwithTTL = ContainerName + "_ManualTTL";
+
+ ContainerProperties properties = new ContainerProperties
+ (id: containerIdManualwithTTL,
+ partitionKeyPath: partitionKeyPath);
+
+ properties.DefaultTimeToLive = (int)TimeSpan.FromDays(1).TotalSeconds; //expire in 1 day
+
+ ContainerResponse containerResponse = await database.CreateContainerIfNotExistsAsync(containerProperties: properties);
+ ContainerProperties returnedProperties = containerResponse;
+}
+```
+
+# [.NET SDK v2](#tab/dotnet-v2)
+
+```csharp
+// Create a collection
+private static async Task CreateManualThroughputContainer(DocumentClient client)
+{
+ string containerIdManual = ContainerName + "_Manual";
+
+ // Set throughput to the minimum value of 400 RU/s manually configured throughput
+
+ DocumentCollection collectionDefinition = new DocumentCollection();
+ collectionDefinition.Id = containerIdManual;
+ collectionDefinition.PartitionKey.Paths.Add(partitionKeyPath);
+
+ DocumentCollection partitionedCollection = await client.CreateDocumentCollectionIfNotExistsAsync(
+ UriFactory.CreateDatabaseUri(DatabaseName),
+ collectionDefinition,
+ new RequestOptions { OfferThroughput = 400 });
+}
+
+private static async Task CreateAutoscaleThroughputContainer(DocumentClient client)
+{
+ // .NET v2 SDK does not support the creation of provisioned autoscale throughput containers
+}
+
+ private static async Task CreateContainerWithTtlExpiration(DocumentClient client)
+{
+ string containerIdManualwithTTL = ContainerName + "_ManualTTL";
+
+ DocumentCollection collectionDefinition = new DocumentCollection();
+ collectionDefinition.Id = containerIdManualwithTTL;
+ collectionDefinition.DefaultTimeToLive = (int)TimeSpan.FromDays(1).TotalSeconds; //expire in 1 day
+ collectionDefinition.PartitionKey.Paths.Add(partitionKeyPath);
+
+ DocumentCollection partitionedCollection = await client.CreateDocumentCollectionIfNotExistsAsync(
+ UriFactory.CreateDatabaseUri(DatabaseName),
+ collectionDefinition,
+ new RequestOptions { OfferThroughput = 400 });
+
+}
+```
++
+### Read container properties
+
+# [.NET SDK v3](#tab/dotnet-v3)
+
+```csharp
+private static async Task ReadContainerProperties(Database database)
+{
+ string containerIdManual = ContainerName + "_Manual";
+ Container container = database.GetContainer(containerIdManual);
+ ContainerProperties containerProperties = await container.ReadContainerAsync();
+}
+```
+
+# [.NET SDK v2](#tab/dotnet-v2)
+
+```csharp
+private static async Task ReadContainerProperties(DocumentClient client)
+{
+ string containerIdManual = ContainerName + "_Manual";
+ DocumentCollection collection = await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri(DatabaseName, containerIdManual));
+}
+```
++
+### Delete a container
+
+# [.NET SDK v3](#tab/dotnet-v3)
+
+```csharp
+private static async Task DeleteContainers(Database database)
+{
+ string containerIdManual = ContainerName + "_Manual";
+
+ // Delete a container
+ await database.GetContainer(containerIdManual).DeleteContainerAsync();
+
+ // Delete all CosmosContainer resources for a database
+ using (FeedIterator<ContainerProperties> feedIterator = database.GetContainerQueryIterator<ContainerProperties>())
+ {
+ while (feedIterator.HasMoreResults)
+ {
+ foreach (ContainerProperties _container in await feedIterator.ReadNextAsync())
+ {
+ await database.GetContainer(_container.Id).DeleteContainerAsync();
+ Console.WriteLine($"{Environment.NewLine} deleted container {_container.Id}");
+ }
+ }
+ }
+}
+```
+
+# [.NET SDK v2](#tab/dotnet-v2)
+
+```csharp
+private static async Task DeleteContainers(DocumentClient client)
+{
+ // Delete a collection
+ string containerIdManual = ContainerName + "_Manual";
+ await client.DeleteDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri(DatabaseName, containerIdManual));
+
+ // Delete all containers for a database
+ foreach (var collection in await client.ReadDocumentCollectionFeedAsync(UriFactory.CreateDatabaseUri(DatabaseName)))
+ {
+ await client.DeleteDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri(DatabaseName, collection.Id));
+ }
+}
+```
++
+## Item and query operations
+
+### Create an item
+
+# [.NET SDK v3](#tab/dotnet-v3)
+
+```csharp
+private static async Task CreateItemAsync(Container container)
+{
+ // Create a SalesOrder POCO object
+ SalesOrder salesOrder1 = GetSalesOrderSample("Account1", "SalesOrder1");
+ ItemResponse<SalesOrder> response = await container.CreateItemAsync(salesOrder1,
+ new PartitionKey(salesOrder1.AccountNumber));
+}
+
+private static async Task RunBasicOperationsOnDynamicObjects(Container container)
+{
+ // Dynamic Object
+ dynamic salesOrder = new
+ {
+ id = "SalesOrder5",
+ AccountNumber = "Account1",
+ PurchaseOrderNumber = "PO18009186470",
+ OrderDate = DateTime.UtcNow,
+ Total = 5.95,
+ };
+ Console.WriteLine("\nCreating item");
+ ItemResponse<dynamic> response = await container.CreateItemAsync<dynamic>(
+ salesOrder, new PartitionKey(salesOrder.AccountNumber));
+ dynamic createdSalesOrder = response.Resource;
+}
+```
+
+# [.NET SDK v2](#tab/dotnet-v2)
+
+```csharp
+private static async Task CreateItemAsync(DocumentClient client)
+{
+ // Create a SalesOrder POCO object
+ SalesOrder salesOrder1 = GetSalesOrderSample("Account1", "SalesOrder1");
+ await client.CreateDocumentAsync(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, ContainerName),
+ salesOrder1,
+ new RequestOptions { PartitionKey = new PartitionKey("Account1")});
+}
+
+private static async Task RunBasicOperationsOnDynamicObjects(DocumentClient client)
+{
+ // Create a dynamic object
+ dynamic salesOrder = new
+ {
+ id= "SalesOrder5",
+ AccountNumber = "Account1",
+ PurchaseOrderNumber = "PO18009186470",
+ OrderDate = DateTime.UtcNow,
+ Total = 5.95,
+ };
+ ResourceResponse<Document> response = await client.CreateDocumentAsync(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, ContainerName),
+ salesOrder,
+ new RequestOptions { PartitionKey = new PartitionKey(salesOrder.AccountNumber)});
+
+ dynamic createdSalesOrder = response.Resource;
+ }
+```
++
+### Read all the items in a container
+
+# [.NET SDK v3](#tab/dotnet-v3)
+
+```csharp
+private static async Task ReadAllItems(Container container)
+{
+ // Read all items in a container
+ List<SalesOrder> allSalesForAccount1 = new List<SalesOrder>();
+
+ using (FeedIterator<SalesOrder> resultSet = container.GetItemQueryIterator<SalesOrder>(
+ queryDefinition: null,
+ requestOptions: new QueryRequestOptions()
+ {
+ PartitionKey = new PartitionKey("Account1"),
+ MaxItemCount = 5
+ }))
+ {
+ while (resultSet.HasMoreResults)
+ {
+ FeedResponse<SalesOrder> response = await resultSet.ReadNextAsync();
+ SalesOrder salesOrder = response.First();
+ Console.WriteLine($"\n1.3.1 Account Number: {salesOrder.AccountNumber}; Id: {salesOrder.Id}");
+ allSalesForAccount1.AddRange(response);
+ }
+ }
+}
+```
+
+# [.NET SDK v2](#tab/dotnet-v2)
+
+```csharp
+private static async Task ReadAllItems(DocumentClient client)
+{
+ // Read all items in a collection
+ List<SalesOrder> allSalesForAccount1 = new List<SalesOrder>();
+
+ string continuationToken = null;
+ do
+ {
+ var feed = await client.ReadDocumentFeedAsync(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, ContainerName),
+ new FeedOptions { MaxItemCount = 5, RequestContinuation = continuationToken });
+ continuationToken = feed.ResponseContinuation;
+ foreach (Document document in feed)
+ {
+ SalesOrder salesOrder = (SalesOrder)(dynamic)document;
+ Console.WriteLine($"\n1.3.1 Account Number: {salesOrder.AccountNumber}; Id: {salesOrder.Id}");
+ allSalesForAccount1.Add(salesOrder);
+
+ }
+ } while (continuationToken != null);
+}
+```
++
+### Query items
+#### Changes to SqlQuerySpec (QueryDefinition in v3.0 SDK)
+
+The `SqlQuerySpec` class in SDK v2 has now been renamed to `QueryDefinition` in the SDK v3.
+
+`SqlParameterCollection` and `SqlParameter` have been removed. Parameters are now added to the `QueryDefinition` with a builder model by using `QueryDefinition.WithParameter`. You can access the parameters with `QueryDefinition.GetQueryParameters`.
+
+# [.NET SDK v3](#tab/dotnet-v3)
+
+```csharp
+private static async Task QueryItems(Container container)
+{
+ // Query for items by a property other than Id
+ QueryDefinition queryDefinition = new QueryDefinition(
+ "select * from sales s where s.AccountNumber = @AccountInput")
+ .WithParameter("@AccountInput", "Account1");
+
+ List<SalesOrder> allSalesForAccount1 = new List<SalesOrder>();
+ using (FeedIterator<SalesOrder> resultSet = container.GetItemQueryIterator<SalesOrder>(
+ queryDefinition,
+ requestOptions: new QueryRequestOptions()
+ {
+ PartitionKey = new PartitionKey("Account1"),
+ MaxItemCount = 1
+ }))
+ {
+ while (resultSet.HasMoreResults)
+ {
+ FeedResponse<SalesOrder> response = await resultSet.ReadNextAsync();
+ SalesOrder sale = response.First();
+ Console.WriteLine($"\n Account Number: {sale.AccountNumber}; Id: {sale.Id};");
+ allSalesForAccount1.AddRange(response);
+ }
+ }
+}
+```
+
+# [.NET SDK v2](#tab/dotnet-v2)
+
+```csharp
+private static async Task QueryItems(DocumentClient client)
+{
+ // Query for items by a property other than Id
+ SqlQuerySpec querySpec = new SqlQuerySpec()
+ {
+ QueryText = "select * from sales s where s.AccountNumber = @AccountInput",
+ Parameters = new SqlParameterCollection()
+ {
+ new SqlParameter("@AccountInput", "Account1")
+ }
+ };
+ var query = client.CreateDocumentQuery<SalesOrder>(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, ContainerName),
+ querySpec,
+ new FeedOptions {EnableCrossPartitionQuery = true});
+
+ var allSalesForAccount1 = query.ToList();
+
+ Console.WriteLine($"\n1.4.2 Query found {allSalesForAccount1.Count} items.");
+}
+```
++
+### Delete an item
+
+# [.NET SDK v3](#tab/dotnet-v3)
+
+```csharp
+private static async Task DeleteItemAsync(Container container)
+{
+ ItemResponse<SalesOrder> response = await container.DeleteItemAsync<SalesOrder>(
+ partitionKey: new PartitionKey("Account1"), id: "SalesOrder3");
+}
+```
+
+# [.NET SDK v2](#tab/dotnet-v2)
+
+```csharp
+private static async Task DeleteItemAsync(DocumentClient client)
+{
+ ResourceResponse<Document> response = await client.DeleteDocumentAsync(
+ UriFactory.CreateDocumentUri(DatabaseName, ContainerName, "SalesOrder3"),
+ new RequestOptions { PartitionKey = new PartitionKey("Account1") });
+}
+```
++
+### Change feed query
+
+# [.NET SDK v3](#tab/dotnet-v3)
+
+```csharp
+private static async Task QueryChangeFeedAsync(Container container)
+{
+ FeedIterator<SalesOrder> iterator = container.GetChangeFeedIterator<SalesOrder>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
+
+ string continuation = null;
+ while (iterator.HasMoreResults)
+ {
+        FeedResponse<SalesOrder> response = await iterator.ReadNextAsync();
+
+ if (response.StatusCode == HttpStatusCode.NotModified)
+ {
+ // No new changes
+ continuation = response.ContinuationToken;
+ break;
+ }
+ else
+ {
+ // Process the documents in response
+ }
+ }
+}
+```
+
+# [.NET SDK v2](#tab/dotnet-v2)
+
+```csharp
+private static async Task QueryChangeFeedAsync(DocumentClient client, string partitionKeyRangeId)
+{
+ ChangeFeedOptions options = new ChangeFeedOptions
+ {
+ PartitionKeyRangeId = partitionKeyRangeId,
+ StartFromBeginning = true,
+ };
+
+ using(var query = client.CreateDocumentChangeFeedQuery(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, ContainerName), options))
+ {
+ do
+ {
+ var response = await query.ExecuteNextAsync<Document>();
+ if (response.Count > 0)
+ {
+ var docs = new List<Document>();
+ docs.AddRange(response);
+ // Process the documents.
+ // Save response.ResponseContinuation if needed
+ }
+ }
+ while (query.HasMoreResults);
+ }
+}
+```
++
+## Next steps
+
+* [Build a Console app](quickstart-dotnet.md) to manage Azure Cosmos DB for NoSQL data using the v3 SDK
+* Learn more about [what you can do with the v3 SDK](samples-dotnet.md)
+* Trying to do capacity planning for a migration to Azure Cosmos DB?
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Migrate Hbase To Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-hbase-to-cosmos-db.md
+
+ Title: Migrate data from Apache HBase to Azure Cosmos DB for NoSQL account
+description: Learn how to migrate your data from HBase to Azure Cosmos DB for NoSQL account.
++++ Last updated : 12/07/2021++++
+# Migrate data from Apache HBase to Azure Cosmos DB for NoSQL account
+
+Azure Cosmos DB is a scalable, globally distributed, fully managed database. It provides guaranteed low-latency access to your data. To learn more about Azure Cosmos DB, see the [overview](../introduction.md) article. This article describes how to migrate your data from HBase to an Azure Cosmos DB for NoSQL account.
+
+## Differences between Azure Cosmos DB and HBase
+
+Before migrating, you must understand the differences between Azure Cosmos DB and HBase.
+
+### Resource model
+
+Azure Cosmos DB organizes resources into an account, databases, containers, and items.
+
+HBase organizes resources into a cluster, namespaces, tables, column families, and rows.
+
+### Resource mapping
+
+The following table shows a conceptual mapping between Apache HBase, Apache Phoenix, and Azure Cosmos DB.
+
+| **HBase** | **Phoenix** | **Azure Cosmos DB** |
+| - | -- | |
+| Cluster | Cluster | Account |
+| Namespace | Schema (if enabled) | Database |
+| Table | Table | Container/Collection |
+| Column family | Column family | N/A |
+| Row | Row | Item/Document |
+| Version (Timestamp) | Version (Timestamp) | N/A |
+| N/A | Primary Key | Partition Key |
+| N/A | Index | Index |
+| N/A | Secondary Index | Secondary Index |
+| N/A | View | N/A |
+| N/A | Sequence | N/A |
+
+### Data structure comparison and differences
+
+The key differences between the data structure of Azure Cosmos DB and HBase are as follows:
+
+**RowKey**
+
+* In HBase, data is stored by [RowKey](https://hbase.apache.org/book.html#rowkey.design) and horizontally partitioned into regions by the range of RowKey specified during the table creation.
+
+* Azure Cosmos DB on the other side distributes data into partitions based on the hash value of a specified [Partition key](../partitioning-overview.md).
+
+**Column family**
+
+* In HBase, columns are grouped within a Column Family (CF).
+
+* Azure Cosmos DB (API for NoSQL) stores data as [JSON](https://www.json.org/json-en.html) documents. Hence, all properties associated with a JSON data structure apply.
+
+**Timestamp**
+
+* HBase uses timestamp to version multiple instances of a given cell. You can query different versions of a cell using timestamp.
+
+* Azure Cosmos DB ships with the [change feed feature](../change-feed.md), which keeps a persistent record of changes to a container in the order they occur. It then outputs the sorted list of documents that were changed in the order in which they were modified.
+
+**Data format**
+
+* The HBase data format consists of RowKey, Column Family: Column Name, Timestamp, and Value. The following is an example of an HBase table row:
+
+ ```console
+ ROW COLUMN+CELL
+ 1000 column=Office:Address, timestamp=1611408732448, value=1111 San Gabriel Dr.
+ 1000 column=Office:Phone, timestamp=1611408732418, value=1-425-000-0002
+ 1000 column=Personal:Name, timestamp=1611408732340, value=John Dole
+ 1000 column=Personal:Phone, timestamp=1611408732385, value=1-425-000-0001
+ ```
+
+* In Azure Cosmos DB for NoSQL, the JSON object represents the data format. The partition key resides in a field in the document, and you set which field is the partition key for the collection. Azure Cosmos DB doesn't have the concept of a timestamp used for column family or version. As highlighted previously, it has change feed support, through which you can track and record changes performed on a container. The following is an example of a document.
+
+ ```json
+    {
+      "RowId": "1000",
+      "OfficeAddress": "1111 San Gabriel Dr.",
+      "OfficePhone": "1-425-000-0002",
+      "PersonalName": "John Dole",
+      "PersonalPhone": "1-425-000-0001"
+    }
+ ```
+
+> [!TIP]
+> HBase stores data in byte array, so if you want to migrate data that contains double-byte characters to Azure Cosmos DB, the data must be UTF-8 encoded.
+
+### Consistency model
+
+HBase offers strictly consistent reads and writes.
+
+Azure Cosmos DB offers [five well-defined consistency levels](../consistency-levels.md). Each level provides availability and performance trade-offs. From strongest to weakest, the consistency levels supported are:
+
+* Strong
+* Bounded staleness
+* Session
+* Consistent prefix
+* Eventual
+
+### Sizing
+
+**HBase**
+
+For an enterprise-scale deployment of HBase, the Master, the Region Servers, and ZooKeeper drive the bulk of the sizing. Like any distributed application, HBase is designed to scale out. HBase performance is primarily driven by the size of the HBase RegionServers. Sizing is primarily driven by two key requirements: the throughput and the size of the dataset that must be stored on HBase.
+
+**Azure Cosmos DB**
+
+Azure Cosmos DB is a PaaS offering from Microsoft, and the underlying infrastructure deployment details are abstracted from the end users. When an Azure Cosmos DB container is provisioned, the Azure platform automatically provisions the underlying infrastructure (compute, storage, memory, networking stack) to support the performance requirements of a given workload. The cost of all database operations is normalized by Azure Cosmos DB and is expressed in [Request Units (or RUs, for short)](../request-units.md).
+
+To estimate the RUs consumed by your workload, consider the [factors that affect request unit charges](../request-units.md#request-unit-considerations).
+
+A [capacity calculator](estimate-ru-with-capacity-planner.md) is available to assist with the sizing exercise for RUs.
+
+You can also use [autoscale provisioned throughput](../provision-throughput-autoscale.md) in Azure Cosmos DB to automatically and instantly scale your database or container throughput (RU/s). Throughput is scaled based on usage without impacting workload availability, latency, throughput, or performance.
+
+### Data distribution
+
+**HBase**
+HBase sorts data according to RowKey. The data is then partitioned into regions and stored in RegionServers. The automatic partitioning divides regions horizontally according to the partitioning policy. This is controlled by the value assigned to HBase parameter `hbase.hregion.max.filesize` (default value is 10 GB). A row in HBase with a given RowKey always belongs to one region. In addition, the data is separated on disk for each column family. This enables filtering at the time of reading and isolation of I/O on HFile.
+
+**Azure Cosmos DB**
+Azure Cosmos DB uses [partitioning](../partitioning-overview.md) to scale individual containers in the database. Partitioning divides the items in a container into specific subsets called "logical partitions". Logical partitions are formed based on the value of the "partition key" associated with each item in the container. All items in a logical partition have the same partition key value. Each logical partition can hold up to 20 GB of data.
+
+Physical partitions each contain a replica of your data and an instance of the Azure Cosmos DB database engine. This structure makes your data durable and highly available, and throughput is divided equally amongst the physical partitions. Physical partitions are created and configured automatically, and it's not possible to control their size, location, or which logical partitions they contain. Logical partitions aren't split between physical partitions.
+
+As with the HBase RowKey, partition key design is important for Azure Cosmos DB. HBase's RowKey works by sorting data and storing contiguous data, whereas Azure Cosmos DB's partition key is a different mechanism because it hash-distributes data. Assuming your application using HBase is optimized for HBase data access patterns, using the same RowKey as the partition key won't give good performance results. Because data is sorted on HBase, the [Azure Cosmos DB composite index](../index-policy.md#composite-indexes) may be useful. It's required if you want to use the ORDER BY clause on more than one field. You can also improve the performance of many equality and range queries by defining a composite index.
+
+### Availability
+
+**HBase**
+HBase consists of the Master, the Region Servers, and ZooKeeper. High availability in a single cluster can be achieved by making each component redundant. When configuring geo-redundancy, you can deploy HBase clusters across different physical data centers and use replication to keep multiple clusters in sync.
+
+**Azure Cosmos DB**
+Azure Cosmos DB does not require any configuration such as cluster component redundancy. It provides a comprehensive SLA for high availability, consistency, and latency. Please see [SLA for Azure Cosmos DB](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_3/) for more detail.
+
+### Data reliability
+
+**HBase**
+HBase is built on Hadoop Distributed File System (HDFS) and data stored on HDFS is replicated three times.
+
+**Azure Cosmos DB**
+Azure Cosmos DB primarily provides high availability in two ways. First, Azure Cosmos DB replicates data between regions configured within your Azure Cosmos DB account. Second, Azure Cosmos DB keeps four replicas of the data in the region.
+
+## Considerations before migrating
+
+### System dependencies
+
+This aspect of planning focuses on understanding upstream and downstream dependencies for HBase instance, which is being migrated to Azure Cosmos DB.
+
+Examples of downstream dependencies are applications that read data from HBase. These must be refactored to read from Azure Cosmos DB. The following points must be considered as part of the migration:
+
+* Questions for assessing dependencies: Is the current HBase system an independent component? Does it call a process on another system, is it called by a process on another system, or is it accessed by using a directory service? Are other important processes running in your HBase cluster? These system dependencies need to be clarified to determine the impact of migration.
+
+* The RPO and RTO for HBase deployment on-premises.
+
+### Offline vs. online migration
+
+For a successful data migration, it's important to understand the characteristics of the business that uses the database and decide how to perform the migration. Select offline migration if you can completely shut down the system, perform the data migration, and restart the system at the destination. Also, if your database is always busy and you can't afford a long outage, consider migrating online.
+
+> [!NOTE]
+> This document covers only offline migration.
+
+The approach for offline data migration depends on the version of HBase you're currently running and the tools that are available. See the [Data migration](#migrate-your-data) section for more details.
+
+### Performance considerations
+
+This aspect of planning is to understand performance targets for HBase and then translate them to Azure Cosmos DB semantics; for example, to hit *"X"* IOPS on HBase, how many Request Units (RU/s) would be required in Azure Cosmos DB? There are differences between HBase and Azure Cosmos DB, so this exercise focuses on building a view of how performance targets from HBase will translate to Azure Cosmos DB. That view will drive the scaling exercise.
+
+Questions to ask:
+
+* Is the HBase deployment read-heavy or write-heavy?
+* What is the split between reads and writes?
+* What is the target IOPS, expressed as a percentile?
+* How/what applications are used to load data into HBase?
+* How/what applications are used to read data from HBase?
+
+When executing queries that request sorted data, HBase will return the result quickly because the data is sorted by RowKey. However, Azure Cosmos DB doesn't have such a concept. In order to optimize the performance, you can use [composite indexes](../index-policy.md#composite-indexes) as needed.
+
+### Deployment considerations
+
+You can use [the Azure portal or Azure CLI to deploy the Azure Cosmos DB for NoSQL](quickstart-portal.md). Since the migration destination is Azure Cosmos DB for NoSQL, select "NoSQL" for the API as a parameter when deploying. In addition, set Geo-Redundancy, Multi-region Writes, and Availability Zones according to your availability requirements.
+
+### Network considerations
+
+Azure Cosmos DB has three main network options. The first is a configuration that uses a Public IP address and controls access with an IP firewall (default). The second is a configuration that uses a Public IP address and allows access only from a specific subnet of a specific virtual network (service endpoint). The third is a configuration (private endpoint) that joins a private network using a Private IP address.
+
+See the following documents for more information on the three network options:
+
+* [Public IP with Firewall](../how-to-configure-firewall.md)
+* [Public IP with Service Endpoint](../how-to-configure-vnet-service-endpoint.md)
+* [Private Endpoint](../how-to-configure-private-endpoints.md)
+
+## Assess your existing data
+
+### Data discovery
+
+Gather information in advance from your existing HBase cluster to identify the data you want to migrate. This information can help you identify how to migrate, decide which tables to migrate, understand the structure within those tables, and decide how to build your data model. For example, gather details such as the following:
+
+* HBase version
+* Migration target tables
+* Column family information
+* Table status
+
+The following commands show how to collect the above details by using an HBase shell script and store them in the local file system of the operating machine.
+
+#### Get the HBase version
+
+```console
+hbase version -n > hbase-version.txt
+```
+
+**Output:**
+
+```console
+cat hbase-version.txt
+HBase 2.1.8.4.1.2.5
+```
+
+#### Get the list of tables
+
+You can get a list of tables stored in HBase. If you have created a namespace other than default, it will be output in the "Namespace: Table" format.
+
+```console
+echo "list" | hbase shell -n > table-list.txt
+```
+
+**Output:**
+
+```console
+echo "list" | hbase shell -n > table-list.txt
+cat table-list.txt
+TABLE
+COMPANY
+Contacts
+ns1:t1
+3 row(s)
+Took 0.4261 seconds
+COMPANY
+Contacts
+ns1:t1
+```
+
+#### Identify the tables to be migrated
+
+Get the details of the column families in the table by specifying the table name to be migrated.
+
+```console
+echo "describe '({Namespace}:){Table name}'" | hbase shell -n > {Table name}-schema.txt
+```
+
+**Output:**
+
+```console
+cat {Table name}-schema.txt
+Table {Table name} is ENABLED
+{Table name}
+COLUMN FAMILIES DESCRIPTION
+{NAME => 'cf1', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
+{NAME => 'cf2', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
+2 row(s)
+Took 0.5775 seconds
+```
+
+#### Get the cluster and table status
+
+```console
+echo "status 'detailed'" | hbase shell -n > hbase-status.txt
+```
+
+**Output:**
+
+```console
+{HBase version}
+0 regionsInTransition
+active master: {Server:Port number}
+2 backup masters
+ {Server:Port number}
+ {Server:Port number}
+master coprocessors: []
+# live servers
+ {Server:Port number}
+ requestsPerSecond=0.0, numberOfOnlineRegions=44, usedHeapMB=1420, maxHeapMB=15680, numberOfStores=49, numberOfStorefiles=14, storefileUncompressedSizeMB=7, storefileSizeMB=7, compressionRatio=1.0000, memstoreSizeMB=0, storefileIndexSizeKB=15, readRequestsCount=36210, filteredReadRequestsCount=415729, writeRequestsCount=439, rootIndexSizeKB=15, totalStaticIndexSizeKB=5, totalStaticBloomSizeKB=16, totalCompactingKVs=464, currentCompactedKVs=464, compactionProgressPct=1.0, coprocessors=[GroupedAggregateRegionObserver, Indexer, MetaDataEndpointImpl, MetaDataRegionObserver, MultiRowMutationEndpoint, ScanRegionObserver, SecureBulkLoadEndpoint, SequenceRegionObserver, ServerCachingEndpointImpl, UngroupedAggregateRegionObserver]
+
+ [...]
+
+ "Contacts,,1611126188216.14a597a0964383a3d923b2613524e0bd."
+ numberOfStores=2, numberOfStorefiles=2, storefileUncompressedSizeMB=7168, lastMajorCompactionTimestamp=0, storefileSizeMB=7, compressionRatio=0.0010, memstoreSizeMB=0, readRequestsCount=4393, writeRequestsCount=0, rootIndexSizeKB=14, totalStaticIndexSizeKB=5, totalStaticBloomSizeKB=16, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, dataLocality=0.0
+
+[...]
+
+```
+
+You can get useful sizing information from this output, such as the size of heap memory, the number of regions, and the number of requests from the cluster status, and the size of the data in compressed and uncompressed form from the table status.
+
+If you are using Apache Phoenix on HBase cluster, you need to collect data from Phoenix as well.
+
+* Migration target table
+* Table schemas
+* Indexes
+* Primary key
+
+#### Connect to Apache Phoenix on your cluster
+
+```console
+sqlline.py ZOOKEEPER/hbase-unsecure
+```
+
+#### Get the table list
+
+```console
+!tables
+```
+
+#### Get the table details
+
+```console
+!describe <Table Name>
+```
+
+#### Get the index details
+
+ ```console
+!indexes <Table Name>
+```
+
+#### Get the primary key details
+
+ ```console
+!primarykeys <Table Name>
+```
+
+## Migrate your data
+
+### Migration options
+
+There are various methods to migrate data offline. Here, we introduce how to use Azure Data Factory, Apache Spark, or a custom tool built on the Azure Cosmos DB bulk executor library.
+
+| Solution | Source version | Considerations |
+| | -- | - |
+| Azure Data Factory | HBase < 2 | Easy to set up. Suitable for large datasets. Doesn't support HBase 2 or later. |
+| Apache Spark | All versions | Supports all versions of HBase. Suitable for large datasets. Spark setup required. |
+| Custom tool with Azure Cosmos DB bulk executor library | All versions | Most flexible to create custom data migration tools using libraries. Requires more effort to set up. |
+
+The following flowchart shows the conditions that determine which data migration method to use.
+
+### Migrate using Data Factory
+
+This option is suitable for large datasets. The Azure Cosmos DB Bulk Executor library is used. There are no checkpoints, so if you encounter any issues during the migration you will have to restart the migration process from the beginning. You can also use Data Factory's self-hosted integration runtime to connect to your on-premises HBase, or deploy Data Factory to a Managed VNET and connect to your on-premises network via VPN or ExpressRoute.
+
+Data Factory's Copy activity supports HBase as a data source. See the [Copy data from HBase using Azure Data Factory](../../data-factory/connector-hbase.md) article for more details.
+
+You can specify Azure Cosmos DB (API for NoSQL) as the destination for your data. See the [Copy and transform data in Azure Cosmos DB (API for NoSQL) by using Azure Data Factory](../../data-factory/connector-azure-cosmos-db.md) article for more details.
++
+### Migrate using Apache Spark - Apache HBase Connector & Azure Cosmos DB Spark connector
+
+Here is an example to migrate your data to Azure Cosmos DB. It assumes that HBase 2.1.0 and Spark 2.4.0 are running in the same cluster.
+
+The Apache Spark - Apache HBase Connector repository can be found at [Apache Spark - Apache HBase Connector](https://github.com/hortonworks-spark/shc).
+
+For Azure Cosmos DB Spark connector, refer to the [Quick Start Guide](quickstart-spark.md) and download the appropriate library for your Spark version.
+
+1. Copy hbase-site.xml to your Spark configuration directory.
+
+ ```console
+ cp /etc/hbase/conf/hbase-site.xml /etc/spark2/conf/
+ ```
+
+1. Run `spark-shell` with the Spark HBase connector and the Azure Cosmos DB Spark connector.
+
+ ```console
+ spark-shell --packages com.hortonworks.shc:shc-core:1.1.0.3.1.2.2-1 --repositories http://repo.hortonworcontent/groups/public/ --jars azure-cosmosdb-spark_2.4.0_2.11-3.6.8-uber.jar
+ ```
+
+1. After the Spark shell starts, execute the Scala code as follows. Import the libraries needed to load data from HBase.
+
+ ```scala
+ // Import libraries
+ import org.apache.spark.sql.{SQLContext, _}
+ import org.apache.spark.sql.execution.datasources.hbase._
+ import org.apache.spark.{SparkConf, SparkContext}
+ import spark.sqlContext.implicits._
+ ```
+
+1. Define the Spark catalog schema for your HBase tables. Here, the namespace is "default" and the table name is "Contacts". The row key is specified as the key, and the column families and columns are mapped to the Spark catalog.
+
+ ```scala
+ // define a catalog for the Contacts table you created in HBase
+ def catalog = s"""{
+ |"table":{"namespace":"default", "name":"Contacts"},
+ |"rowkey":"key",
+ |"columns":{
+ |"rowkey":{"cf":"rowkey", "col":"key", "type":"string"},
+ |"officeAddress":{"cf":"Office", "col":"Address", "type":"string"},
+ |"officePhone":{"cf":"Office", "col":"Phone", "type":"string"},
+ |"personalName":{"cf":"Personal", "col":"Name", "type":"string"},
+ |"personalPhone":{"cf":"Personal", "col":"Phone", "type":"string"}
+ |}
+ |}""".stripMargin
+
+ ```
+
+1. Next, define a method to get the data from the HBase Contacts table as a DataFrame.
+
+ ```scala
+ def withCatalog(cat: String): DataFrame = {
+ spark.sqlContext
+ .read
+ .options(Map(HBaseTableCatalog.tableCatalog->cat))
+ .format("org.apache.spark.sql.execution.datasources.hbase")
+ .load()
+ }
+
+ ```
+
+1. Create a DataFrame using the defined method.
+
+ ```scala
+ val df = withCatalog(catalog)
+ ```
+
+1. Then import the libraries needed to use the Azure Cosmos DB Spark connector.
+
+ ```scala
+ import com.microsoft.azure.cosmosdb.spark.schema._
+ import com.microsoft.azure.cosmosdb.spark._
+ import com.microsoft.azure.cosmosdb.spark.config.Config
+ ```
+
+1. Configure the settings for writing data to Azure Cosmos DB.
+
+ ```scala
+    val writeConfig = Config(Map( "Endpoint" -> "https://<cosmos-db-account-name>.documents.azure.com:443/", "Masterkey" -> "<cosmos-db-master-key>", "Database" -> "<database-name>", "Collection" -> "<collection-name>", "Upsert" -> "true" ))
+ ```
+
+1. Write DataFrame data to Azure Cosmos DB.
+
+ ```scala
+    import org.apache.spark.sql.SaveMode
+
+    df.write.mode(SaveMode.Overwrite).cosmosDB(writeConfig)
+ ```
+
+Because it writes in parallel at high speed, its performance is high. On the other hand, note that it can consume a large amount of RU/s on the Azure Cosmos DB side.
+
+### Phoenix
+
+Phoenix is supported as a Data Factory data source. Refer to the following documents for detailed steps.
+
+* [Copy data from Phoenix using Azure Data Factory](../../data-factory/connector-phoenix.md)
+* [Tutorial: Use Data migration tool to migrate your data to Azure Cosmos DB](../import-data.md)
+* [Copy data from HBase using Azure Data Factory](../../data-factory/connector-hbase.md)
+
+## Migrate your code
+
+This section describes the differences between creating applications for Azure Cosmos DB for NoSQL and HBase. The examples here use Apache HBase 2.x APIs and the [Azure Cosmos DB Java SDK v4](sdk-java-v4.md).
+
+The HBase sample code is based on the code described in [HBase's official documentation](https://hbase.apache.org/book.html).
+
+The code for Azure Cosmos DB presented here is based on the [Azure Cosmos DB for NoSQL: Java SDK v4 examples](samples-java.md) documentation. You can access the full code example from the documentation.
+
+The mappings for code migration are shown here, but the HBase RowKeys and Azure Cosmos DB Partition Keys used in these examples are not always well designed. Design according to the actual data model of the migration source.
+
+### Establish connection
+
+**HBase**
+
+```java
+Configuration config = HBaseConfiguration.create();
+config.set("hbase.zookeeper.quorum","zookeepernode0,zookeepernode1,zookeepernode2");
+config.set("hbase.zookeeper.property.clientPort", "2181");
+config.set("hbase.cluster.distributed", "true");
+Connection connection = ConnectionFactory.createConnection(config);
+```
+
+**Phoenix**
+
+```java
+//Use JDBC to get a connection to an HBase cluster
+Connection conn = DriverManager.getConnection("jdbc:phoenix:server1,server2:3333",props);
+```
+
+**Azure Cosmos DB**
+
+```java
+// Create sync client
+client = new CosmosClientBuilder()
+ .endpoint(AccountSettings.HOST)
+ .key(AccountSettings.MASTER_KEY)
+ .consistencyLevel(ConsistencyLevel.{ConsistencyLevel})
+ .contentResponseOnWriteEnabled(true)
+ .buildClient();
+```
+
+### Create database/table/collection
+
+**HBase**
+
+```java
+// create an admin object using the config
+HBaseAdmin admin = new HBaseAdmin(config);
+// create the table...
+HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf("FamilyTable"));
+// ... with single column families
+tableDescriptor.addFamily(new HColumnDescriptor("ColFam"));
+admin.createTable(tableDescriptor);
+```
+
+**Phoenix**
+
+```sql
+CREATE TABLE IF NOT EXISTS FamilyTable ("id" BIGINT not null primary key, "ColFam"."lastName" VARCHAR(50));
+```
+
+**Azure Cosmos DB**
+
+```java
+// Create database if not exists
+CosmosDatabaseResponse databaseResponse = client.createDatabaseIfNotExists(databaseName);
+database = client.getDatabase(databaseResponse.getProperties().getId());
+
+// Create container if not exists
+CosmosContainerProperties containerProperties = new CosmosContainerProperties("FamilyContainer", "/lastName");
+
+// Provision throughput
+ThroughputProperties throughputProperties = ThroughputProperties.createManualThroughput(400);
+
+// Create container with 400 RU/s
+CosmosContainerResponse databaseResponse = database.createContainerIfNotExists(containerProperties, throughputProperties);
+container = database.getContainer(databaseResponse.getProperties().getId());
+```
+
+### Create row/document
+
+**HBase**
+
+```java
+HTable table = new HTable(config, "FamilyTable");
+Put put = new Put(Bytes.toBytes(RowKey));
+
+put.add(Bytes.toBytes("ColFam"), Bytes.toBytes("id"), Bytes.toBytes("1"));
+put.add(Bytes.toBytes("ColFam"), Bytes.toBytes("lastName"), Bytes.toBytes("Witherspoon"));
+table.put(put);
+```
+
+**Phoenix**
+
+```sql
+UPSERT INTO FamilyTable (id, lastName) VALUES (1, 'Witherspoon');
+```
+
+**Azure Cosmos DB**
+
+Azure Cosmos DB provides type safety via a data model. This example uses a data model named 'Family'.
+
+```java
+public class Family {
+ public Family() {
+ }
+
+ public void setId(String id) {
+ this.id = id;
+ }
+
+ public void setLastName(String lastName) {
+ this.lastName = lastName;
+ }
+
+ private String id="";
+ private String lastName="";
+}
+```
+
+The above is part of the code. See [full code example](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/df1840b0b5e3715b8555c29f422e0e7d2bc1d49a/src/main/java/com/azure/cosmos/examples/common/Family.java).
+
+Use the Family class to define document and insert item.
+
+```java
+Family family = new Family();
+family.setLastName("Witherspoon");
+family.setId("1");
+
+// Insert this item as a document
+// Explicitly specifying the /pk value improves performance.
+container.createItem(family,new PartitionKey(family.getLastName()),new CosmosItemRequestOptions());
+```
+
+### Read row/document
+
+**HBase**
+
+```java
+HTable table = new HTable(config, "FamilyTable");
+
+Get get = new Get(Bytes.toBytes(RowKey));
+get.addColumn(Bytes.toBytes("ColFam"), Bytes.toBytes("lastName"));
+
+Result result = table.get(get);
+
+byte[] col = result.getValue(Bytes.toBytes("ColFam"), Bytes.toBytes("lastName"));
+```
+
+**Phoenix**
+
+```sql
+SELECT lastName FROM FamilyTable;
+```
+
+**Azure Cosmos DB**
+
+```java
+// Read document by ID
+Family family = container.readItem(documentId,new PartitionKey(documentLastName),Family.class).getItem();
+
+String sql = "SELECT lastName FROM c";
+
+CosmosPagedIterable<Family> filteredFamilies = container.queryItems(sql, new CosmosQueryRequestOptions(), Family.class);
+```
+
+### Update data
+
+**HBase**
+
+For HBase, use the `append` and `checkAndPut` methods to update values. `append` atomically appends a value to the end of the current value, and `checkAndPut` atomically compares the current value with an expected value and updates it only if they match.
+
+```java
+// append
+HTable table = new HTable(config, "FamilyTable");
+Append append = new Append(Bytes.toBytes(RowKey));
+append.add(Bytes.toBytes("ColFam"), Bytes.toBytes("id"), Bytes.toBytes(2));
+append.add(Bytes.toBytes("ColFam"), Bytes.toBytes("lastName"), Bytes.toBytes("Harris"));
+Result result = table.append(append);
+
+// checkAndPut
+byte[] row = Bytes.toBytes(RowKey);
+byte[] colfam = Bytes.toBytes("ColFam");
+byte[] col = Bytes.toBytes("lastName");
+Put put = new Put(row);
+put.add(colfam, col, Bytes.toBytes("Patrick"));
+boolean succeeded = table.checkAndPut(row, colfam, col, Bytes.toBytes("Witherspoon"), put);
+```
+
+**Phoenix**
+
+```sql
+UPSERT INTO FamilyTable (id, lastName) VALUES (1, 'Brown')
+ON DUPLICATE KEY UPDATE lastName = 'Brown';
+```
+
+**Azure Cosmos DB**
+
+In Azure Cosmos DB, updates are treated as Upsert operations. That is, if the document does not exist, it will be inserted.
+
+```java
+// Replace existing document with new modified document (contingent on modification).
+
+Family family = new Family();
+family.setLastName("Brown");
+family.setId("1");
+
+CosmosItemResponse<Family> famResp = container.upsertItem(family, new CosmosItemRequestOptions());
+```
+
+### Delete row/document
+
+**HBase**
+
+In HBase, there's no direct way to delete a row by selecting it by value; you might have implemented a delete process in combination with `ValueFilter`, and so on. In this example, the row to be deleted is specified by RowKey.
+
+```java
+HTable table = new HTable(config, "FamilyTable");
+
+Delete delete = new Delete(Bytes.toBytes(RowKey));
+delete.deleteColumn(Bytes.toBytes("ColFam"), Bytes.toBytes("id"));
+delete.deleteColumn(Bytes.toBytes("ColFam"), Bytes.toBytes("lastName"));
+
+table.delete(delete);
+```
+
+**Phoenix**
+
+```sql
+DELETE FROM TableName WHERE id = "xxx";
+```
+
+**Azure Cosmos DB**
+
+The following shows how to delete a document by its document ID.
+
+ ```java
+container.deleteItem(documentId, new PartitionKey(documentLastName), new CosmosItemRequestOptions());
+```
+
+### Query rows/documents
+
+**HBase**
+HBase allows you to retrieve multiple rows by using Scan. You can use Filter to specify detailed scan conditions. See [Client Request Filters](https://hbase.apache.org/book.html#client.filter) for HBase built-in filter types.
+
+```java
+HTable table = new HTable(config, "FamilyTable");
+
+Scan scan = new Scan();
+SingleColumnValueFilter filter = new SingleColumnValueFilter(Bytes.toBytes("ColFam"),
+    Bytes.toBytes("lastName"), CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("Witherspoon")));
+filter.setFilterIfMissing(true);
+filter.setLatestVersionOnly(true);
+scan.setFilter(filter);
+
+ResultScanner scanner = table.getScanner(scan);
+```
+
+**Phoenix**
+
+```sql
+SELECT * FROM FamilyTable WHERE lastName = 'Witherspoon';
+```
+
+**Azure Cosmos DB**
+
+Filter operation
+
+ ```java
+String sql = "SELECT * FROM c WHERE c.lastName = 'Witherspoon'";
+CosmosPagedIterable<Family> filteredFamilies = container.queryItems(sql, new CosmosQueryRequestOptions(), Family.class);
+```
+
+### Delete table/collection
+
+**HBase**
+
+```java
+HBaseAdmin admin = new HBaseAdmin(config);
+// A table must be disabled before it can be deleted
+admin.disableTable("FamilyTable");
+admin.deleteTable("FamilyTable");
+```
+
+**Phoenix**
+
+```sql
+DROP TABLE IF EXISTS FamilyTable;
+```
+
+**Azure Cosmos DB**
+
+```java
+CosmosContainerResponse containerResp = database.getContainer("FamilyContainer").delete(new CosmosContainerRequestOptions());
+```
+
+### Other considerations
+
+HBase clusters are often used together with other workloads such as MapReduce, Hive, and Spark. If you have other workloads running with your current HBase cluster, they also need to be migrated. For details, refer to each migration guide.
+
+* MapReduce
+* HBase
+* Spark
+
+### Server-side programming
+
+HBase offers several server-side programming features. If you are using these features, you will also need to migrate their processing.
+
+**HBase**
+
+* [Custom filters](https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/Filter.html)
+
+ Various filters are available as default in HBase, but you can also implement your own custom filters. Custom filters may be implemented if the filters available as default on HBase do not meet your requirements.
+
+* [Coprocessor](https://hbase.apache.org/book.html#_types_of_coprocessors)
+
+  The Coprocessor framework allows you to run your own code on the Region Server. By using coprocessors, processing that would otherwise run on the client side can be performed on the server side, which, depending on the processing, can make it more efficient. There are two types of coprocessors: Observer and Endpoint.
+
+ * Observer
+    * Observer hooks specific operations and events, letting you add arbitrary processing. It's a feature similar to RDBMS triggers.
+
+ * Endpoint
+ * Endpoint is a feature for extending HBase RPC. It is a function similar to an RDBMS stored procedure.
+
+**Azure Cosmos DB**
+
+* [Stored Procedure](how-to-write-stored-procedures-triggers-udfs.md#stored-procedures)
+
+ * Azure Cosmos DB stored procedures are written in JavaScript and can perform operations such as creating, updating, reading, querying, and deleting items in Azure Cosmos DB containers.
+
+* [Trigger](how-to-write-stored-procedures-triggers-udfs.md#triggers)
+
+ * Triggers can be specified for operations on the database. There are two methods provided: a pre-trigger that runs before the database item changes and a post-trigger that runs after the database item changes.
+
+* [UDF](how-to-write-stored-procedures-triggers-udfs.md#udfs)
+
+ * Azure Cosmos DB allows you to define User-Defined Functions (UDFs). UDFs can also be written in JavaScript.
+
+Stored procedures and triggers consume RUs based on the complexity of the operations performed. When developing server-side processing, check the required usage to get a better understanding of the amount of RU consumed by each operation. See [Request Units in Azure Cosmos DB](../request-units.md) and [Optimize request cost in Azure Cosmos DB](../optimize-cost-reads-writes.md) for details.
+
+Server-side programming mappings
+
+| HBase | Azure Cosmos DB | Description |
+| -- | - | -- |
+| Custom filters | WHERE Clause | If the processing implemented by the custom filter cannot be achieved by the WHERE clause in Azure Cosmos DB, use UDF in combination. See [UDF examples](query/udfs.md#examples) for an example of using UDF to further filter the results of the WHERE clause. |
+| Coprocessor (Observer) | Trigger | Observer is a trigger that executes before and after a particular event. Just as Observer supports pre- and post-calls, Azure Cosmos DB's Trigger also supports pre- and post-triggers. |
+| Coprocessor (Endpoint) | Stored Procedure | Endpoint is a server-side data processing mechanism that is executed for each region. This is similar to an RDBMS stored procedure. Azure Cosmos DB stored procedures are written using JavaScript. It provides access to all the operations you can perform on Azure Cosmos DB through stored procedures. |
+
+> [!NOTE]
+> Different mappings and implementations may be required in Azure Cosmos DB depending on the processing implemented on HBase.
+
+## Security
+
+Data security is a shared responsibility of the customer and the database provider. For on-premises solutions, customers have to provide everything from endpoint protection to physical hardware security, which is not an easy task. If you choose a PaaS cloud database provider such as Azure Cosmos DB, the customer's share of that responsibility is reduced. For Microsoft's shared responsibility model for security, see [Shared Responsibilities for Cloud Computing](https://gallery.technet.microsoft.com/Shared-Responsibilities-81d0ff91). Because Azure Cosmos DB runs on the Azure platform, its security can be hardened in different ways than HBase. Azure Cosmos DB doesn't require any extra components to be installed for security. We recommend that you review your database system security implementation against the following checklist when you migrate:
+
+| **Security control** | **HBase** | **Azure Cosmos DB** |
+| -- | -- | - |
+| Network Security and firewall setting | Control traffic using security functions such as network devices. | Supports policy-based IP-based access control on the inbound firewall. |
+| User authentication and fine-grained user controls | Fine-grained access control by combining LDAP with security components such as Apache Ranger. | You can use the account primary key to create user and permission resources for each database. Resource tokens are associated with permissions in the database to determine how users can access application resources in the database (read/write, read-only, or no access). You can also use your Azure Active Directory (AAD) ID to authenticate your data requests. This allows you to authorize data requests using a fine-grained RBAC model.|
+| Ability to replicate data globally for regional failures | Make a database replica in a remote data center using HBase's replication. | Azure Cosmos DB performs configuration-free global distribution and allows you to replicate data to data centers around the world in Azure by selecting a button. In terms of security, global replication ensures that your data is protected from local failures. |
+| Ability to fail over from one data center to another | You need to implement failover yourself. | If you're replicating data across multiple data centers and a region's data center goes offline, Azure Cosmos DB automatically fails over the operation. |
+| Local data replication within a data center | The HDFS mechanism allows you to have multiple replicas across nodes within a single file system. | Azure Cosmos DB automatically replicates data to maintain high availability, even within a single data center. You can choose the consistency level yourself. |
+| Automatic data backups | There is no automatic backup function. You need to implement data backup yourself. | Azure Cosmos DB is backed up regularly, and backups are stored in geo-redundant storage. |
+| Protect and isolate sensitive data | For example, if you're using Apache Ranger, you can apply a Ranger policy to the table. | You can separate personal and other sensitive data into specific containers and limit specific users to read/write or read-only access. |
+| Monitoring for attacks | It needs to be implemented using third-party products. | By using [audit logging and activity logs](../monitor.md), you can monitor your account for normal and abnormal activity. |
+| Responding to attacks | It needs to be implemented using third-party products. | When you contact Azure support and report a potential attack, a five-step incident response process begins. |
+| Ability to geo-fence data to adhere to data governance restrictions | You need to check the restrictions of each country and implement it yourself. | Guarantees data governance for sovereign regions (Germany, China, US Gov, etc.). |
+| Physical protection of servers in protected data centers | It depends on the data center where the system is located. | For a list of the latest certifications, see the global [Azure compliance site](/compliance/regulatory/offering-home?view=o365-worldwide&preserve-view=true). |
+| Certifications | Depends on the Hadoop distribution. | See [Azure compliance documentation](../compliance.md) |
+
+For more information on security, see [Security in Azure Cosmos DB - overview](../database-security.md).
+
+## Monitoring
+
+HBase typically monitors the cluster using the cluster metric web UI or with Ambari, Cloudera Manager, or other monitoring tools. Azure Cosmos DB allows you to use the monitoring mechanism built into the Azure platform. For more information on Azure Cosmos DB monitoring, see [Monitor Azure Cosmos DB](../monitor.md).
+
+If your environment implements HBase system monitoring to send alerts, such as by email, you may be able to replace it with Azure Monitor alerts. You can receive alerts based on metrics or activity log events for your Azure Cosmos DB account.
+
+For more information on alerts in Azure Monitor, see [Create alerts for Azure Cosmos DB using Azure Monitor](../create-alerts.md).
+
+Also, see [Azure Cosmos DB metrics and log types](../monitor-reference.md) that can be collected by Azure Monitor.
+
+## Backup & disaster recovery
+
+### Backup
+
+There are several ways to back up HBase: for example, Snapshot, Export, CopyTable, offline backup of HDFS data, and other custom backups.
+
+Azure Cosmos DB automatically backs up data at periodic intervals, which does not affect the performance or availability of database operations. Backups are stored in Azure storage and can be used to recover data if needed. There are two types of Azure Cosmos DB backups:
+
+* [Periodic backup](../configure-periodic-backup-restore.md)
+
+* [Continuous backup](../continuous-backup-restore-introduction.md)
+
+### Disaster recovery
+
+HBase is a fault-tolerant distributed system, but if failover to a backup location is required in the case of a data center-level failure, you must implement disaster recovery yourself by using snapshots, replication, and so on. HBase replication can be set up with three replication models: Leader-Follower, Leader-Leader, and Cyclic. If the source HBase implements disaster recovery, you need to understand how to configure disaster recovery in Azure Cosmos DB to meet your system requirements.
+
+Azure Cosmos DB is a globally distributed database with built-in disaster recovery capabilities. You can replicate your DB data to any Azure region. Azure Cosmos DB keeps your database highly available in the unlikely event of a failure in some regions.
+
+An Azure Cosmos DB account that uses only a single region may lose availability in the event of a region failure. We recommend that you configure at least two regions to ensure high availability at all times. You can also ensure high availability for both writes and reads by configuring your Azure Cosmos DB account to span at least two regions with multiple write regions. For multi-region accounts that consist of multiple write regions, failover between regions is detected and handled by the Azure Cosmos DB client. These failovers are momentary and don't require any changes from the application. In this way, you can achieve an availability configuration that includes disaster recovery for Azure Cosmos DB. As mentioned earlier, HBase replication can be set up with three models, but Azure Cosmos DB can be set up with SLA-based availability by configuring single-write or multi-write regions.
+
+For more information on high availability, see [How does Azure Cosmos DB provide high availability](../high-availability.md).
+
+## Frequently asked questions
+
+#### Why migrate to API for NoSQL instead of other APIs in Azure Cosmos DB?
+
+API for NoSQL provides the best end-to-end experience in terms of the interface, the service, and the SDK client libraries. New features rolled out to Azure Cosmos DB are available first in API for NoSQL accounts. In addition, the API for NoSQL supports analytics and provides performance separation between production and analytics workloads. If you want to use modernized technologies to build your apps, API for NoSQL is the recommended option.
+
+#### Can I assign the HBase RowKey to the Azure Cosmos DB partition key?
+
+Reusing the RowKey as-is may not give optimal results. In HBase, data is sorted by the specified RowKey, stored in regions, and divided into fixed sizes. This behaves differently than partitioning in Azure Cosmos DB. Therefore, the keys need to be redesigned to better distribute the data according to the characteristics of the workload. See the [Distribution](#data-distribution) section for more details.
+
+#### Data is sorted by RowKey in HBase, but Azure Cosmos DB partitions data by key. How can Azure Cosmos DB achieve sorting and collocation?
+
+In Azure Cosmos DB, you can add a composite index to sort your data in ascending or descending order to improve the performance of equality and range queries. See the [Distribution](#data-distribution) section and [Composite indexes](../index-policy.md#composite-indexes) in the product documentation.
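+
+As a sketch, the following snippet shows one way to define such a composite index with the Java SDK used elsewhere in this article. The container name, paths, and sort orders are illustrative assumptions, not a prescribed design:
+
+```java
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import com.azure.cosmos.models.CompositePath;
+import com.azure.cosmos.models.CompositePathSortOrder;
+import com.azure.cosmos.models.CosmosContainerProperties;
+import com.azure.cosmos.models.IndexingPolicy;
+
+// Composite index on lastName (ascending) and the _ts system property (descending).
+CompositePath byLastName = new CompositePath();
+byLastName.setPath("/lastName");
+byLastName.setOrder(CompositePathSortOrder.ASCENDING);
+
+CompositePath byTimestamp = new CompositePath();
+byTimestamp.setPath("/_ts");
+byTimestamp.setOrder(CompositePathSortOrder.DESCENDING);
+
+List<List<CompositePath>> compositeIndexes = new ArrayList<>();
+compositeIndexes.add(Arrays.asList(byLastName, byTimestamp));
+
+IndexingPolicy indexingPolicy = new IndexingPolicy();
+indexingPolicy.setCompositeIndexes(compositeIndexes);
+
+// Pass these properties when creating the container (or update an existing container's policy).
+CosmosContainerProperties containerProperties =
+    new CosmosContainerProperties("FamilyContainer", "/lastName");
+containerProperties.setIndexingPolicy(indexingPolicy);
+```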
+
+#### Analytical processing is executed on HBase data with Hive or Spark. How can I modernize them in Azure Cosmos DB?
+
+You can use the Azure Cosmos DB analytical store to automatically synchronize operational data to another column store. The column store format is suitable for large analytical queries that are executed in an optimized way, which improves latency for such queries. Azure Synapse Link allows you to build an ETL-free HTAP solution by linking directly from Azure Synapse Analytics to the Azure Cosmos DB analytical store. This allows you to perform large-scale, near-real-time analysis of operational data. Synapse Analytics supports Apache Spark and serverless SQL pools against the Azure Cosmos DB analytical store. You can take advantage of this feature to migrate your analytical processing. See [Analytical store](../analytical-store-introduction.md) for more information.
+
+#### How can users migrate queries that use HBase timestamps to Azure Cosmos DB?
+
+Azure Cosmos DB doesn't have exactly the same timestamp versioning feature as HBase. But Azure Cosmos DB provides the ability to access the [change feed](../change-feed.md), which you can use for versioning:
+
+* Store every version/change as a separate item.
+
+* Read the change feed to merge/consolidate changes and trigger appropriate actions downstream by filtering on the `_ts` field (see the example query after this list).
+
+Additionally, you can expire old versions of data by using [TTL](time-to-live.md).
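+
+For example, filtering on `_ts` in a query can look like the following minimal sketch. It reuses the `container` and `Family` type from the earlier query examples; the checkpoint value is a hypothetical placeholder:
+
+```java
+// Hypothetical checkpoint: only return items written after this epoch timestamp (in seconds).
+long lastProcessedTs = 1665000000L;
+
+String sql = "SELECT * FROM c WHERE c._ts > " + lastProcessedTs;
+CosmosPagedIterable<Family> changedFamilies =
+    container.queryItems(sql, new CosmosQueryRequestOptions(), Family.class);
+```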
+
+## Next steps
+
+* To do performance testing, see the [Performance and scale testing with Azure Cosmos DB](./performance-testing.md) article.
+
+* To optimize the code, see the [Performance tips for Azure Cosmos DB](./performance-tips-async-java.md) article.
+
+* Explore the Java Async V3 SDK in the [SDK reference](https://github.com/Azure/azure-cosmosdb-java/tree/v3) GitHub repo.
cosmos-db Migrate Java V4 Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-java-v4-sdk.md
+
+ Title: Migrate your application to use the Azure Cosmos DB Java SDK v4 (com.azure.cosmos)
+description: Learn how to upgrade your existing Java application from using the older Azure Cosmos DB Java SDKs to the newer Java SDK 4.0 (com.azure.cosmos package) for API for NoSQL.
+
+ms.devlang: java
++++++ Last updated : 08/26/2021++
+# Migrate your application to use the Azure Cosmos DB Java SDK v4
+
+> [!IMPORTANT]
+> For more information about this SDK, please view the Azure Cosmos DB Java SDK v4 [Release notes](sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4.md), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4.md).
+>
+
+> [!IMPORTANT]
+> Because Azure Cosmos DB Java SDK v4 has up to 20% enhanced throughput, TCP-based direct mode, and support for the latest backend service features, we recommend you upgrade to v4 at the next opportunity. Continue reading below to learn more.
+>
+
+Update to the latest Azure Cosmos DB Java SDK to get the best of what Azure Cosmos DB has to offer - a managed non-relational database service with competitive performance, five-nines availability, one-of-a-kind resource governance, and more. This article explains how to upgrade your existing Java application that is using an older Azure Cosmos DB Java SDK to the newer Azure Cosmos DB Java SDK 4.0 for API for NoSQL. Azure Cosmos DB Java SDK v4 corresponds to the `com.azure.cosmos` package. You can use the instructions in this doc if you are migrating your application from any of the following Azure Cosmos DB Java SDKs:
+
+* Sync Java SDK 2.x.x
+* Async Java SDK 2.x.x
+* Java SDK 3.x.x
+
+## Azure Cosmos DB Java SDKs and package mappings
+
+The following table lists different Azure Cosmos DB Java SDKs, the package name and the release information:
+
+| Java SDK| Release Date | Bundled APIs | Maven Jar | Java package name |API Reference | Release Notes | Retire date |
+|---|---|---|---|---|---|---|---|
+| Async 2.x.x | June 2018 | Async(RxJava) | `com.microsoft.azure::azure-cosmosdb` | `com.microsoft.azure.cosmosdb.rx` | [API](https://azure.github.io/azure-cosmosdb-jav) | - | August 31, 2024 |
+| Sync 2.x.x | Sept 2018 | Sync | `com.microsoft.azure::azure-documentdb` | `com.microsoft.azure.cosmosdb` | [API](https://azure.github.io/azure-cosmosdb-jav) | - | February 29, 2024 |
+| 3.x.x | July 2019 | Async(Reactor)/Sync | `com.microsoft.azure::azure-cosmos` | `com.azure.data.cosmos` | [API](https://azure.github.io/azure-cosmosdb-java/3.0.0/) | - | August 31, 2024 |
+| 4.0 | June 2020 | Async(Reactor)/Sync | `com.azure::azure-cosmos` | `com.azure.cosmos` | [API](/java/api/overview/azure/cosmosdb) | - | - |
+
+## SDK level implementation changes
+
+The following are the key implementation differences between different SDKs:
+
+### RxJava is replaced with reactor in Azure Cosmos DB Java SDK versions 3.x.x and 4.0
+
+If you are unfamiliar with asynchronous programming or Reactive Programming, see the [Reactor pattern guide](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-pattern-guide.md) for an introduction to async programming and Project Reactor. This guide may be useful if you have been using Azure Cosmos DB Sync Java SDK 2.x.x or Azure Cosmos DB Java SDK 3.x.x Sync API in the past.
+
+If you have been using Azure Cosmos DB Async Java SDK 2.x.x, and you plan on migrating to the 4.0 SDK, see the [Reactor vs RxJava Guide](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) for guidance on converting RxJava code to use Reactor.
+
+### Azure Cosmos DB Java SDK v4 has direct connectivity mode in both Async and Sync APIs
+
+If you have been using Azure Cosmos DB Sync Java SDK 2.x.x, note that the direct connection mode based on TCP (as opposed to HTTP) is implemented in Azure Cosmos DB Java SDK 4.0 for both the Async and Sync APIs.
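+
+As a minimal sketch of creating a v4 client with TCP-based direct connectivity (the endpoint, key, and consistency level shown here are placeholder assumptions):
+
+```java
+import com.azure.cosmos.ConsistencyLevel;
+import com.azure.cosmos.CosmosAsyncClient;
+import com.azure.cosmos.CosmosClientBuilder;
+
+// Direct (TCP) connectivity is available for both the Async and Sync clients in v4.
+CosmosAsyncClient client = new CosmosClientBuilder()
+    .endpoint("https://your-account.documents.azure.com:443/")
+    .key("your-account-key")
+    .consistencyLevel(ConsistencyLevel.EVENTUAL)
+    .directMode()               // or .gatewayMode() for HTTPS-based connectivity
+    .buildAsyncClient();        // or .buildClient() for the Sync API
+```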
+
+## API level changes
+
+The following are the API level changes in Azure Cosmos DB Java SDK 4.x.x compared to previous SDKs (Java SDK 3.x.x, Async Java SDK 2.x.x, and Sync Java SDK 2.x.x):
++
+* The Azure Cosmos DB Java SDK 3.x.x and 4.0 refer to the client resources as `Cosmos<resourceName>`, for example, `CosmosClient`, `CosmosDatabase`, and `CosmosContainer`. In version 2.x.x, the Azure Cosmos DB Java SDKs don't have a uniform naming scheme.
+
+* Azure Cosmos DB Java SDK 3.x.x and 4.0 offer both Sync and Async APIs.
+
+ * **Java SDK 4.0** : All the classes belong to the Sync API unless the class name is appended with `Async` after `Cosmos`.
+
+ * **Java SDK 3.x.x**: All the classes belong to the Async API unless the class name is appended with `Async` after `Cosmos`.
+
+ * **Async Java SDK 2.x.x**: The class names are similar to Sync Java SDK 2.x.x, however the name starts with *Async*.
+
+### Hierarchical API structure
+
+Azure Cosmos DB Java SDK 4.0 and 3.x.x introduce a hierarchical API structure that organizes the clients, databases, and containers in a nested fashion as shown in the following 4.0 SDK code snippet:
+
+```java
+CosmosContainer container = client.getDatabase("MyDatabaseName").getContainer("MyContainerName");
+
+```
+
+In version 2.x.x of the Azure Cosmos DB Java SDK, all operations on resources and documents are performed through the client instance.
+
+### Representing documents
+
+In Azure Cosmos DB Java SDK 4.0, custom POJOs and `JsonNode` objects are the two options for reading and writing documents in Azure Cosmos DB.
+
+In the Azure Cosmos DB Java SDK 3.x.x, the `CosmosItemProperties` object is exposed by the public API and served as a document representation. This class is no longer exposed publicly in version 4.0.
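+
+As a brief sketch of both v4 options (the item ID, partition key value, `container` instance, and `Family` POJO are assumptions, not part of the SDK):
+
+```java
+import com.azure.cosmos.models.CosmosItemResponse;
+import com.azure.cosmos.models.PartitionKey;
+import com.fasterxml.jackson.databind.JsonNode;
+
+// Option 1: read an item into a custom POJO.
+CosmosItemResponse<Family> pojoResponse =
+    container.readItem("itemId", new PartitionKey("partitionKeyValue"), Family.class);
+
+// Option 2: read the same item as a raw JsonNode.
+CosmosItemResponse<JsonNode> jsonResponse =
+    container.readItem("itemId", new PartitionKey("partitionKeyValue"), JsonNode.class);
+```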
+
+### Imports
+
+* The Azure Cosmos DB Java SDK 4.0 packages begin with `com.azure.cosmos`
+* Azure Cosmos DB Java SDK 3.x.x packages begin with `com.azure.data.cosmos`
+* Azure Cosmos DB Java SDK 2.x.x Sync API packages begin with `com.microsoft.azure.documentdb`
+
+* Azure Cosmos DB Java SDK 4.0 places several classes in the nested package `com.azure.cosmos.models`. Some of these classes include (see the example import block after this list):
+
+ * `CosmosContainerResponse`
+ * `CosmosDatabaseResponse`
+ * `CosmosItemResponse`
+  * The Async API analogs for all of the above classes
+ * `CosmosContainerProperties`
+ * `FeedOptions`
+ * `PartitionKey`
+ * `IndexingPolicy`
+ * `IndexingMode` ...etc.
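+
+As a brief sketch, a typical v4 import block might look like the following; the exact classes you need depend on your code:
+
+```java
+// Client types live in the root package.
+import com.azure.cosmos.CosmosClient;
+import com.azure.cosmos.CosmosClientBuilder;
+import com.azure.cosmos.CosmosContainer;
+import com.azure.cosmos.CosmosDatabase;
+
+// Options, properties, and response types live in the nested models package.
+import com.azure.cosmos.models.CosmosContainerProperties;
+import com.azure.cosmos.models.CosmosItemRequestOptions;
+import com.azure.cosmos.models.CosmosItemResponse;
+import com.azure.cosmos.models.PartitionKey;
+```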
+
+### Accessors
+
+Azure Cosmos DB Java SDK 4.0 exposes `get` and `set` methods to access the instance members. For example, the `CosmosContainer` instance has `container.getId()` and `container.setId()` methods.
+
+This is different from Azure Cosmos DB Java SDK 3.x.x which exposes a fluent interface. For example, a `CosmosSyncContainer` instance has `container.id()` which is overloaded to get or set the `id` value.
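+
+To illustrate the difference, here's a brief sketch; `container` is assumed to be an existing v4 `CosmosContainer`, and the 3.x.x line is shown as a comment for comparison only:
+
+```java
+// Azure Cosmos DB Java SDK 4.0: explicit get/set accessors.
+String containerId = container.getId();
+
+// Azure Cosmos DB Java SDK 3.x.x: fluent accessors on the analogous type.
+// String containerId = container.id();
+```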
+
+## Code snippet comparisons
+
+### Create resources
+
+The following code snippet shows the differences in how resources are created between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
+
+# [Java SDK 4.0 Async API](#tab/java-v4-async)
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateJavaSDKv4ResourceAsync)]
+
+# [Java SDK 3.x.x Async API](#tab/java-v3-async)
+
+```java
+ConnectionPolicy defaultPolicy = ConnectionPolicy.defaultPolicy();
+// Setting the preferred location to Azure Cosmos DB Account region
+defaultPolicy.preferredLocations(Lists.newArrayList("Your Account Location"));
+
+// Create async client
+// <CreateAsyncClient>
+client = new CosmosClientBuilder()
+ .endpoint("your.hostname")
+ .key("yourmasterkey")
+ .connectionPolicy(defaultPolicy)
+ .consistencyLevel(ConsistencyLevel.EVENTUAL)
+ .build();
+
+// Create database with specified name
+client.createDatabaseIfNotExists("YourDatabaseName")
+ .flatMap(databaseResponse -> {
+ database = databaseResponse.database();
+ // Container properties - name and partition key
+ CosmosContainerProperties containerProperties =
+ new CosmosContainerProperties("YourContainerName", "/id");
+ // Create container with specified properties & provisioned throughput
+ return database.createContainerIfNotExists(containerProperties, 400);
+ }).flatMap(containerResponse -> {
+ container = containerResponse.container();
+ return Mono.empty();
+}).subscribe();
+```
+
+# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
+
+```java
+ConnectionPolicy defaultPolicy = ConnectionPolicy.GetDefault();
+// Setting the preferred location to Azure Cosmos DB Account region
+defaultPolicy.setPreferredLocations(Lists.newArrayList("Your Account Location"));
+
+// Create document client
+// <CreateDocumentClient>
+client = new DocumentClient("your.hostname", "your.masterkey", defaultPolicy, ConsistencyLevel.Eventual)
+
+// Create database with specified name
+Database databaseDefinition = new Database();
+databaseDefinition.setId("YourDatabaseName");
+ResourceResponse<Database> databaseResourceResponse = client.createDatabase(databaseDefinition, new RequestOptions());
+
+// Read database with specified name
+String databaseLink = "dbs/YourDatabaseName";
+databaseResourceResponse = client.readDatabase(databaseLink, new RequestOptions());
+Database database = databaseResourceResponse.getResource();
+
+// Create container with specified name
+DocumentCollection documentCollection = new DocumentCollection();
+documentCollection.setId("YourContainerName");
+documentCollection = client.createCollection(database.getSelfLink(), documentCollection, new RequestOptions()).getResource();
+```
+
+# [Java SDK 2.x.x Async API](#tab/java-v2-async)
+
+```java
+// Create Async client.
+// Building an async client is still a sync operation.
+AsyncDocumentClient client = new Builder()
+ .withServiceEndpoint("your.hostname")
+ .withMasterKeyOrResourceToken("yourmasterkey")
+ .withConsistencyLevel(ConsistencyLevel.Eventual)
+ .build();
+// Create database with specified name
+Database database = new Database();
+database.setId("YourDatabaseName");
+client.createDatabase(database, new RequestOptions())
+ .flatMap(databaseResponse -> {
+ // Collection properties - name and partition key
+ DocumentCollection documentCollection = new DocumentCollection();
+ documentCollection.setId("YourContainerName");
+ documentCollection.setPartitionKey(new PartitionKeyDefinition("/id"));
+ // Create collection
+ return client.createCollection(databaseResponse.getResource().getSelfLink(), documentCollection, new RequestOptions());
+}).subscribe();
+
+```
+++
+### Item operations
+
+The following code snippet shows the differences in how item operations are performed between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
+
+# [Java SDK 4.0 Async API](#tab/java-v4-async)
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateItemOpsAsync)]
+
+# [Java SDK 3.x.x Async API](#tab/java-v3-async)
+
+```java
+// Container is created. Generate many docs to insert.
+int number_of_docs = 50000;
+ArrayList<JsonNode> docs = generateManyDocs(number_of_docs);
+
+// Insert many docs into container...
+Flux.fromIterable(docs)
+ .flatMap(doc -> container.createItem(doc))
+ .subscribe(); // ...Subscribing triggers stream execution.
+```
+
+# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
+
+```java
+// Container is created. Generate documents to insert.
+Document document = new Document();
+document.setId("YourDocumentId");
+ResourceResponse<Document> documentResourceResponse = client.createDocument(documentCollection.getSelfLink(), document,
+ new RequestOptions(), true);
+Document responseDocument = documentResourceResponse.getResource();
+```
+
+# [Java SDK 2.x.x Async API](#tab/java-v2-async)
+
+```java
+// Collection is created. Generate many docs to insert.
+int number_of_docs = 50000;
+ArrayList<Document> docs = generateManyDocs(number_of_docs);
+// Insert many docs into collection...
+Observable.from(docs)
+ .flatMap(doc -> client.createDocument(createdCollection.getSelfLink(), doc, new RequestOptions(), false))
+ .subscribe(); // ...Subscribing triggers stream execution.
+```
+++
+### Indexing
+
+The following code snippet shows the differences in how indexing is created between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
+
+# [Java SDK 4.0 Async API](#tab/java-v4-async)
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateIndexingAsync)]
+
+# [Java SDK 3.x.x Async API](#tab/java-v3-async)
+
+```java
+CosmosContainerProperties containerProperties = new CosmosContainerProperties(containerName, "/lastName");
+
+// Custom indexing policy
+IndexingPolicy indexingPolicy = new IndexingPolicy();
+indexingPolicy.setIndexingMode(IndexingMode.CONSISTENT); //To turn indexing off set IndexingMode.NONE
+
+// Included paths
+List<IncludedPath> includedPaths = new ArrayList<>();
+IncludedPath includedPath = new IncludedPath();
+includedPath.path("/*");
+includedPaths.add(includedPath);
+indexingPolicy.includedPaths(includedPaths);
+
+// Excluded paths
+List<ExcludedPath> excludedPaths = new ArrayList<>();
+ExcludedPath excludedPath = new ExcludedPath();
+excludedPath.path("/name/*");
+excludedPaths.add(excludedPath);
+indexingPolicy.excludedPaths(excludedPaths);
+
+containerProperties.indexingPolicy(indexingPolicy);
+
+CosmosContainer containerIfNotExists = database.createContainerIfNotExists(containerProperties, 400)
+ .block()
+ .container();
+```
+
+# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
+
+```java
+// Custom indexing policy
+IndexingPolicy indexingPolicy = new IndexingPolicy();
+indexingPolicy.setIndexingMode(IndexingMode.Consistent); //To turn indexing off set IndexingMode.NONE
+
+// Included paths
+List<IncludedPath> includedPaths = new ArrayList<>();
+IncludedPath includedPath = new IncludedPath();
+includedPath.setPath("/*");
+includedPaths.add(includedPath);
+indexingPolicy.setIncludedPaths(includedPaths);
+
+// Excluded paths
+List<ExcludedPath> excludedPaths = new ArrayList<>();
+ExcludedPath excludedPath = new ExcludedPath();
+excludedPath.setPath("/name/*");
+excludedPaths.add(excludedPath);
+indexingPolicy.setExcludedPaths(excludedPaths);
+
+// Create container with specified name and indexing policy
+DocumentCollection documentCollection = new DocumentCollection();
+documentCollection.setId("YourContainerName");
+documentCollection.setIndexingPolicy(indexingPolicy);
+documentCollection = client.createCollection(database.getSelfLink(), documentCollection, new RequestOptions()).getResource();
+```
+
+# [Java SDK 2.x.x Async API](#tab/java-v2-async)
+
+```java
+// Custom indexing policy
+IndexingPolicy indexingPolicy = new IndexingPolicy();
+indexingPolicy.setIndexingMode(IndexingMode.Consistent); //To turn indexing off set IndexingMode.None
+// Included paths
+List<IncludedPath> includedPaths = new ArrayList<>();
+IncludedPath includedPath = new IncludedPath();
+includedPath.setPath("/*");
+includedPaths.add(includedPath);
+indexingPolicy.setIncludedPaths(includedPaths);
+// Excluded paths
+List<ExcludedPath> excludedPaths = new ArrayList<>();
+ExcludedPath excludedPath = new ExcludedPath();
+excludedPath.setPath("/name/*");
+excludedPaths.add(excludedPath);
+indexingPolicy.setExcludedPaths(excludedPaths);
+// Create container with specified name and indexing policy
+DocumentCollection documentCollection = new DocumentCollection();
+documentCollection.setId("YourContainerName");
+documentCollection.setIndexingPolicy(indexingPolicy);
+client.createCollection(database.getSelfLink(), documentCollection, new RequestOptions()).subscribe();
+```
+++
+### Stored procedures
+
+The following code snippet shows the differences in how stored procedures are created between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
+
+# [Java SDK 4.0 Async API](#tab/java-v4-async)
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateSprocAsync)]
+
+# [Java SDK 3.x.x Async API](#tab/java-v3-async)
+
+```java
+logger.info("Creating stored procedure...\n");
+
+sprocId = "createMyDocument";
+String sprocBody = "function createMyDocument() {\n" +
+ "var documentToCreate = {\"id\":\"test_doc\"}\n" +
+ "var context = getContext();\n" +
+ "var collection = context.getCollection();\n" +
+ "var accepted = collection.createDocument(collection.getSelfLink(), documentToCreate,\n" +
+ " function (err, documentCreated) {\n" +
+ "if (err) throw new Error('Error' + err.message);\n" +
+ "context.getResponse().setBody(documentCreated.id)\n" +
+ "});\n" +
+ "if (!accepted) return;\n" +
+ "}";
+CosmosStoredProcedureProperties storedProcedureDef = new CosmosStoredProcedureProperties(sprocId, sprocBody);
+container.getScripts()
+ .createStoredProcedure(storedProcedureDef,
+ new CosmosStoredProcedureRequestOptions()).block();
+
+// ...
+
+logger.info(String.format("Executing stored procedure %s...\n\n", sprocId));
+
+CosmosStoredProcedureRequestOptions options = new CosmosStoredProcedureRequestOptions();
+options.partitionKey(new PartitionKey("test_doc"));
+
+container.getScripts()
+ .getStoredProcedure(sprocId)
+ .execute(null, options)
+ .flatMap(executeResponse -> {
+ logger.info(String.format("Stored procedure %s returned %s (HTTP %d), at cost %.3f RU.\n",
+ sprocId,
+ executeResponse.responseAsString(),
+ executeResponse.statusCode(),
+ executeResponse.requestCharge()));
+ return Mono.empty();
+ }).block();
+```
+
+# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
+
+```java
+logger.info("Creating stored procedure...\n");
+
+String sprocId = "createMyDocument";
+String sprocBody = "function createMyDocument() {\n" +
+ "var documentToCreate = {\"id\":\"test_doc\"}\n" +
+ "var context = getContext();\n" +
+ "var collection = context.getCollection();\n" +
+ "var accepted = collection.createDocument(collection.getSelfLink(), documentToCreate,\n" +
+ " function (err, documentCreated) {\n" +
+ "if (err) throw new Error('Error' + err.message);\n" +
+ "context.getResponse().setBody(documentCreated.id)\n" +
+ "});\n" +
+ "if (!accepted) return;\n" +
+ "}";
+StoredProcedure storedProcedureDef = new StoredProcedure();
+storedProcedureDef.setId(sprocId);
+storedProcedureDef.setBody(sprocBody);
+StoredProcedure storedProcedure = client.createStoredProcedure(documentCollection.getSelfLink(), storedProcedureDef, new RequestOptions())
+ .getResource();
+
+// ...
+
+logger.info(String.format("Executing stored procedure %s...\n\n", sprocId));
+
+RequestOptions options = new RequestOptions();
+options.setPartitionKey(new PartitionKey("test_doc"));
+
+StoredProcedureResponse storedProcedureResponse =
+ client.executeStoredProcedure(storedProcedure.getSelfLink(), options, null);
+logger.info(String.format("Stored procedure %s returned %s (HTTP %d), at cost %.3f RU.\n",
+ sprocId,
+ storedProcedureResponse.getResponseAsString(),
+ storedProcedureResponse.getStatusCode(),
+ storedProcedureResponse.getRequestCharge()));
+```
+
+# [Java SDK 2.x.x Async API](#tab/java-v2-async)
+
+```java
+logger.info("Creating stored procedure...\n");
+String sprocId = "createMyDocument";
+String sprocBody = "function createMyDocument() {\n" +
+ "var documentToCreate = {\"id\":\"test_doc\"}\n" +
+ "var context = getContext();\n" +
+ "var collection = context.getCollection();\n" +
+ "var accepted = collection.createDocument(collection.getSelfLink(), documentToCreate,\n" +
+ " function (err, documentCreated) {\n" +
+ "if (err) throw new Error('Error' + err.message);\n" +
+ "context.getResponse().setBody(documentCreated.id)\n" +
+ "});\n" +
+ "if (!accepted) return;\n" +
+ "}";
+StoredProcedure storedProcedureDef = new StoredProcedure();
+storedProcedureDef.setId(sprocId);
+storedProcedureDef.setBody(sprocBody);
+StoredProcedure storedProcedure = client
+ .createStoredProcedure(documentCollection.getSelfLink(), storedProcedureDef, new RequestOptions())
+ .toBlocking()
+ .single()
+ .getResource();
+// ...
+logger.info(String.format("Executing stored procedure %s...\n\n", sprocId));
+RequestOptions options = new RequestOptions();
+options.setPartitionKey(new PartitionKey("test_doc"));
+StoredProcedureResponse storedProcedureResponse =
+ client.executeStoredProcedure(storedProcedure.getSelfLink(), options, null)
+ .toBlocking().single();
+logger.info(String.format("Stored procedure %s returned %s (HTTP %d), at cost %.3f RU.\n",
+ sprocId,
+ storedProcedureResponse.getResponseAsString(),
+ storedProcedureResponse.getStatusCode(),
+ storedProcedureResponse.getRequestCharge()));
+```
+++
+### Change feed
+
+The following code snippet shows the differences in how change feed operations are executed between the 4.0 and 3.x.x Async APIs:
+
+# [Java SDK 4.0 Async API](#tab/java-v4-async)
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateCFAsync)]
+
+# [Java SDK 3.x.x Async API](#tab/java-v3-async)
+
+```java
+ChangeFeedProcessor changeFeedProcessorInstance =
+ChangeFeedProcessor.Builder()
+ .hostName(hostName)
+ .feedContainer(feedContainer)
+ .leaseContainer(leaseContainer)
+ .handleChanges((List<CosmosItemProperties> docs) -> {
+ logger.info(">setHandleChanges() START");
+
+ for (CosmosItemProperties document : docs) {
+ try {
+
+ // You are given the document as a CosmosItemProperties instance which you may
+ // cast to the desired type.
+ CustomPOJO pojo_doc = document.getObject(CustomPOJO.class);
+ logger.info("-=>id: " + pojo_doc.id());
+
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ }
+ logger.info(">handleChanges() END");
+
+ })
+ .build();
+
+// ...
+
+ changeFeedProcessorInstance.start()
+ .subscribeOn(Schedulers.elastic())
+ .subscribe();
+```
+
+# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
+
+* This feature isn't supported in the Java SDK v2 sync API.
+
+# [Java SDK 2.x.x Async API](#tab/java-v2-async)
+
+* This feature isn't supported in the Java SDK v2 async API.
+++
+### Container level Time-To-Live (TTL)
+
+The following code snippet shows the differences in how to create time to live for data in the container between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
+
+# [Java SDK 4.0 Async API](#tab/java-v4-async)
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateContainerTTLAsync)]
+
+# [Java SDK 3.x.x Async API](#tab/java-v3-async)
+
+```java
+CosmosContainer container;
+
+// Create a new container with TTL enabled with default expiration value
+CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/myPartitionKey");
+containerProperties.defaultTimeToLive(90 * 60 * 60 * 24);
+container = database.createContainerIfNotExists(containerProperties, 400).block().container();
+```
+
+# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
+
+```java
+DocumentCollection documentCollection = new DocumentCollection();
+documentCollection.setId("YourContainerName");
+
+// Create a new container with TTL enabled with default expiration value
+documentCollection.setDefaultTimeToLive(90 * 60 * 60 * 24);
+documentCollection = client.createCollection(database.getSelfLink(), documentCollection, new RequestOptions()).getResource();
+```
+
+# [Java SDK 2.x.x Async API](#tab/java-v2-async)
+
+```java
+DocumentCollection collection = new DocumentCollection();
+collection.setId("YourContainerName");
+
+// Create a new container with TTL enabled with default expiration value
+collection.setDefaultTimeToLive(90 * 60 * 60 * 24);
+collection = client
+    .createCollection(database.getSelfLink(), collection, new RequestOptions())
+    .toBlocking()
+    .single()
+    .getResource();
+```
+++
+### Item level Time-To-Live (TTL)
+
+The following code snippet shows the differences in how to create time to live for an item between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
+
+# [Java SDK 4.0 Async API](#tab/java-v4-async)
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateItemTTLClassAsync)]
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateItemTTLAsync)]
+
+# [Java SDK 3.x.x Async API](#tab/java-v3-async)
+
+```java
+// Include a property that serializes to "ttl" in JSON
+public class SalesOrder
+{
+ private String id;
+ private String customerId;
+ private Integer ttl;
+
+ public SalesOrder(String id, String customerId, Integer ttl) {
+ this.id = id;
+ this.customerId = customerId;
+ this.ttl = ttl;
+ }
+
+ public String id() {return this.id;}
+ public SalesOrder id(String new_id) {this.id = new_id; return this;}
+    public String customerId() {return this.customerId;}
+    public SalesOrder customerId(String new_cid) {this.customerId = new_cid; return this;}
+ public Integer ttl() {return this.ttl;}
+ public SalesOrder ttl(Integer new_ttl) {this.ttl = new_ttl; return this;}
+
+ //...
+}
+
+// Set the value to the expiration in seconds
+SalesOrder salesOrder = new SalesOrder(
+ "SO05",
+ "CO18009186470",
+ 60 * 60 * 24 * 30 // Expire sales orders in 30 days
+);
+```
+
+# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
+
+```java
+Document document = new Document();
+document.setId("YourDocumentId");
+document.setTimeToLive(60 * 60 * 24 * 30 ); // Expire document in 30 days
+ResourceResponse<Document> documentResourceResponse = client.createDocument(documentCollection.getSelfLink(), document,
+ new RequestOptions(), true);
+Document responseDocument = documentResourceResponse.getResource();
+```
+
+# [Java SDK 2.x.x Async API](#tab/java-v2-async)
+
+```java
+Document document = new Document();
+document.setId("YourDocumentId");
+document.setTimeToLive(60 * 60 * 24 * 30 ); // Expire document in 30 days
+ResourceResponse<Document> documentResourceResponse = client.createDocument(documentCollection.getSelfLink(), document,
+ new RequestOptions(), true).toBlocking().single();
+Document responseDocument = documentResourceResponse.getResource();
+```
+++
+## Next steps
+
+* [Build a Java app](quickstart-java.md) to manage Azure Cosmos DB for NoSQL data using the V4 SDK
+* Learn about the [Reactor-based Java SDKs](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-pattern-guide.md)
+* Learn about converting RxJava async code to Reactor async code with the [Reactor vs RxJava Guide](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md)
+* Trying to do capacity planning for a migration to Azure Cosmos DB?
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Migrate Relational Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-relational-data.md
+
+ Title: Migrate one-to-few relational data into Azure Cosmos DB for NoSQL
+description: Learn how to handle complex data migration for one-to-few relationships into API for NoSQL
++++ Last updated : 12/12/2019+
+ms.devlang: python, scala
+++
+# Migrate one-to-few relational data into Azure Cosmos DB for NoSQL account
+
+In order to migrate from a relational database to Azure Cosmos DB for NoSQL, it can be necessary to make changes to the data model for optimization.
+
+One common transformation is denormalizing data by embedding related subitems within one JSON document. Here we look at a few options for this using Azure Data Factory or Azure Databricks. For general guidance on data modeling for Azure Cosmos DB, please review [Data modeling in Azure Cosmos DB](modeling-data.md).
+
+## Example Scenario
+
+Assume we have the following two tables in our SQL database, Orders and OrderDetails.
+++
+We want to combine this one-to-few relationship into one JSON document during migration. To do this, we can create a T-SQL query using "FOR JSON" as below:
+
+```sql
+SELECT
+ o.OrderID,
+ o.OrderDate,
+ o.FirstName,
+ o.LastName,
+ o.Address,
+ o.City,
+ o.State,
+ o.PostalCode,
+ o.Country,
+ o.Phone,
+ o.Total,
+ (select OrderDetailId, ProductId, UnitPrice, Quantity from OrderDetails od where od.OrderId = o.OrderId for json auto) as OrderDetails
+FROM Orders o;
+```
+
+The results of this query would look as below:
++
+Ideally, you want to use a single Azure Data Factory (ADF) copy activity to query SQL data as the source and write the output directly to Azure Cosmos DB sink as proper JSON objects. Currently, it is not possible to perform the needed JSON transformation in one copy activity. If we try to copy the results of the above query into an Azure Cosmos DB for NoSQL container, we will see the OrderDetails field as a string property of our document, instead of the expected JSON array.
+
+We can work around this current limitation in one of the following ways:
+
+* **Use Azure Data Factory with two copy activities**:
+ 1. Get JSON-formatted data from SQL to a text file in an intermediary blob storage location, and
+ 2. Load data from the JSON text file to a container in Azure Cosmos DB.
+
+* **Use Azure Databricks to read from SQL and write to Azure Cosmos DB** - we will present two options here.
++
+Let's look at these approaches in more detail:
+
+## Azure Data Factory
+
+Although we cannot embed OrderDetails as a JSON-array in the destination Azure Cosmos DB document, we can work around the issue by using two separate Copy Activities.
+
+### Copy Activity #1: SqlJsonToBlobText
+
+For the source data, we use a SQL query to get the result set as a single column with one JSON object (representing the Order) per row using the SQL Server OPENJSON and FOR JSON PATH capabilities:
+
+```sql
+SELECT [value] FROM OPENJSON(
+ (SELECT
+ id = o.OrderID,
+ o.OrderDate,
+ o.FirstName,
+ o.LastName,
+ o.Address,
+ o.City,
+ o.State,
+ o.PostalCode,
+ o.Country,
+ o.Phone,
+ o.Total,
+ (select OrderDetailId, ProductId, UnitPrice, Quantity from OrderDetails od where od.OrderId = o.OrderId for json auto) as OrderDetails
+ FROM Orders o FOR JSON PATH)
+)
+```
+++
+For the sink of the SqlJsonToBlobText copy activity, we choose "Delimited Text" and point it to a specific folder in Azure Blob Storage with a dynamically generated unique file name (for example, `@concat(pipeline().RunId,'.json')`).
+Because our text file is not really "delimited", we don't want it to be parsed into separate columns on commas, and we want to preserve the double quotes ("). So we set "Column delimiter" to a Tab ("\t") - or another character not occurring in the data - and "Quote character" to "No quote character".
++
+### Copy Activity #2: BlobJsonToCosmos
+
+Next, we modify our ADF pipeline by adding the second Copy Activity that looks in Azure Blob Storage for the text file that was created by the first activity. It processes that file as a "JSON" source and inserts one document into the Azure Cosmos DB sink per JSON row found in the text file.
++
+Optionally, we also add a "Delete" activity to the pipeline so that it deletes all of the previous files remaining in the /Orders/ folder prior to each run. Our ADF pipeline now looks something like this:
++
+After we trigger the pipeline above, we see a file created in our intermediary Azure Blob Storage location containing one JSON-object per row:
++
+We also see Orders documents with properly embedded OrderDetails inserted into our Azure Cosmos DB collection:
+++
+## Azure Databricks
+
+We can also use Spark in [Azure Databricks](https://azure.microsoft.com/services/databricks/) to copy the data from our SQL Database source to the Azure Cosmos DB destination without creating the intermediary text/JSON files in Azure Blob Storage.
+
+> [!NOTE]
+> For clarity and simplicity, the code snippets below include dummy database passwords explicitly inline, but you should always use Azure Databricks secrets.
+>
+
+First, we create and attach the required [SQL connector](/connectors/sql/) and [Azure Cosmos DB connector](https://docs.databricks.com/data/data-sources/azure/cosmosdb-connector.html) libraries to our Azure Databricks cluster. Restart the cluster to make sure libraries are loaded.
++
+Next, we present two samples, for Scala and Python.
+
+### Scala
+Here, we get the results of the SQL query with "FOR JSON" output into a DataFrame:
+
+```scala
+// Connect to Azure SQL /connectors/sql/
+import com.microsoft.azure.sqldb.spark.config.Config
+import com.microsoft.azure.sqldb.spark.connect._
+val configSql = Config(Map(
+ "url" -> "xxxx.database.windows.net",
+ "databaseName" -> "xxxx",
+ "queryCustom" -> "SELECT o.OrderID, o.OrderDate, o.FirstName, o.LastName, o.Address, o.City, o.State, o.PostalCode, o.Country, o.Phone, o.Total, (SELECT OrderDetailId, ProductId, UnitPrice, Quantity FROM OrderDetails od WHERE od.OrderId = o.OrderId FOR JSON AUTO) as OrderDetails FROM Orders o",
+ "user" -> "xxxx",
+ "password" -> "xxxx" // NOTE: For clarity and simplicity, this example includes secrets explicitely as a string, but you should always use Databricks secrets
+))
+// Create DataFrame from Azure SQL query
+val orders = sqlContext.read.sqlDB(configSql)
+display(orders)
+```
++
+Next, we connect to our Azure Cosmos DB database and collection:
+
+```scala
+// Connect to Azure Cosmos DB https://docs.databricks.com/data/data-sources/azure/cosmosdb-connector.html
+import org.joda.time._
+import org.joda.time.format._
+import com.microsoft.azure.cosmosdb.spark.schema._
+import com.microsoft.azure.cosmosdb.spark.CosmosDBSpark
+import com.microsoft.azure.cosmosdb.spark.config.Config
+import org.apache.spark.sql.functions._
+val configMap = Map(
+ "Endpoint" -> "https://xxxx.documents.azure.com:443/",
+  // NOTE: For clarity and simplicity, this example includes secrets explicitly as a string, but you should always use Databricks secrets
+ "Masterkey" -> "xxxx",
+ "Database" -> "StoreDatabase",
+ "Collection" -> "Orders")
+val configCosmos = Config(configMap)
+```
+
+Finally, we define our schema and use `from_json` to convert the OrderDetails string column into a nested structure before saving the DataFrame to the Azure Cosmos DB collection.
+
+```scala
+// Convert DataFrame to proper nested schema
+import org.apache.spark.sql.types._
+val orderDetailsSchema = ArrayType(StructType(Array(
+ StructField("OrderDetailId",IntegerType,true),
+ StructField("ProductId",IntegerType,true),
+ StructField("UnitPrice",DoubleType,true),
+ StructField("Quantity",IntegerType,true)
+ )))
+val ordersWithSchema = orders.select(
+ col("OrderId").cast("string").as("id"),
+ col("OrderDate").cast("string"),
+ col("FirstName").cast("string"),
+ col("LastName").cast("string"),
+ col("Address").cast("string"),
+ col("City").cast("string"),
+ col("State").cast("string"),
+ col("PostalCode").cast("string"),
+ col("Country").cast("string"),
+ col("Phone").cast("string"),
+ col("Total").cast("double"),
+ from_json(col("OrderDetails"), orderDetailsSchema).as("OrderDetails")
+)
+display(ordersWithSchema)
+// Save nested data to Azure Cosmos DB
+CosmosDBSpark.save(ordersWithSchema, configCosmos)
+```
+++
+### Python
+
+As an alternative approach, you may need to execute JSON transformations in Spark (if the source database does not support "FOR JSON" or a similar operation), or you may wish to use parallel operations for a very large data set. Here we present a PySpark sample. Start by configuring the source and target database connections in the first cell:
+
+```python
+import uuid
+import pyspark.sql.functions as F
+from pyspark.sql.functions import col
+from pyspark.sql.types import StringType,DateType,LongType,IntegerType,TimestampType
+
+#JDBC connect details for SQL Server database
+jdbcHostname = "jdbcHostname"
+jdbcDatabase = "OrdersDB"
+jdbcUsername = "jdbcUsername"
+jdbcPassword = "jdbcPassword"
+jdbcPort = "1433"
+
+connectionProperties = {
+ "user" : jdbcUsername,
+ "password" : jdbcPassword,
+ "driver" : "com.microsoft.sqlserver.jdbc.SQLServerDriver"
+}
+jdbcUrl = "jdbc:sqlserver://{0}:{1};database={2};user={3};password={4}".format(jdbcHostname, jdbcPort, jdbcDatabase, jdbcUsername, jdbcPassword)
+
+#Connect details for Target Azure Cosmos DB for NoSQL account
+writeConfig = {
+ "Endpoint": "Endpoint",
+ "Masterkey": "Masterkey",
+ "Database": "OrdersDB",
+ "Collection": "Orders",
+ "Upsert": "true"
+}
+```
+
+Then, we will query the source database (in this case, SQL Server) for both the order and order detail records, putting the results into Spark DataFrames. We will also create a list containing all the order IDs, and a thread pool for parallel operations:
+
+```python
+import json
+import ast
+import pyspark.sql.functions as F
+import uuid
+import numpy as np
+import pandas as pd
+from functools import reduce
+from pyspark.sql import SQLContext
+from pyspark.sql.types import *
+from pyspark.sql import *
+from pyspark.sql.functions import exp
+from pyspark.sql.functions import col
+from pyspark.sql.functions import lit
+from pyspark.sql.functions import array
+from pyspark.sql.types import *
+from multiprocessing.pool import ThreadPool
+
+#get all orders
+orders = sqlContext.read.jdbc(url=jdbcUrl, table="orders", properties=connectionProperties)
+
+#get all order details
+orderdetails = sqlContext.read.jdbc(url=jdbcUrl, table="orderdetails", properties=connectionProperties)
+
+#get all OrderId values to pass to map function
+orderids = orders.select('OrderId').collect()
+
+#create thread pool big enough to process merge of details to orders in parallel
+pool = ThreadPool(10)
+```
+
+Then, create a function for writing Orders into the target API for NoSQL collection. This function will filter all order details for the given order ID, convert them into a JSON array, and insert the array into a JSON document that we will write into the target API for NoSQL Collection for that order:
+
+```python
+def writeOrder(orderid):
+ #filter the order on current value passed from map function
+ order = orders.filter(orders['OrderId'] == orderid[0])
+
+ #set id to be a uuid
+ order = order.withColumn("id", lit(str(uuid.uuid1())))
+
+ #add details field to order dataframe
+ order = order.withColumn("details", lit(''))
+
+ #filter order details dataframe to get details we want to merge into the order document
+ orderdetailsgroup = orderdetails.filter(orderdetails['OrderId'] == orderid[0])
+
+ #convert dataframe to pandas
+ orderpandas = order.toPandas()
+
+ #convert the order dataframe to json and remove enclosing brackets
+ orderjson = orderpandas.to_json(orient='records', force_ascii=False)
+ orderjson = orderjson[1:-1]
+
+    #convert orderjson to a dictionary so we can set the details element with order details later
+ orderjsondata = json.loads(orderjson)
+
+
+ #convert orderdetailsgroup dataframe to json, but only if details were returned from the earlier filter
+ if (orderdetailsgroup.count() !=0):
+ #convert orderdetailsgroup to pandas dataframe to work better with json
+ orderdetailsgroup = orderdetailsgroup.toPandas()
+
+ #convert orderdetailsgroup to json string
+ jsonstring = orderdetailsgroup.to_json(orient='records', force_ascii=False)
+
+ #convert jsonstring to dictionary to ensure correct encoding and no corrupt records
+ jsonstring = json.loads(jsonstring)
+
+ #set details json element in orderjsondata to jsonstring which contains orderdetailsgroup - this merges order details into the order
+ orderjsondata['details'] = jsonstring
+
+ #convert dictionary to json
+ orderjsondata = json.dumps(orderjsondata)
+
+ #read the json into spark dataframe
+ df = spark.read.json(sc.parallelize([orderjsondata]))
+
+    #write the dataframe (this will be a single order record with merged many-to-one order details) to Azure Cosmos DB using the Spark connector
+ #https://learn.microsoft.com/azure/cosmos-db/spark-connector
+ df.write.format("com.microsoft.azure.cosmosdb.spark").mode("append").options(**writeConfig).save()
+```
+
+Finally, we will call the above using a map function on the thread pool, to execute in parallel, passing in the list of order IDs we created earlier:
+
+```python
+#map order details to orders in parallel using the above function
+pool.map(writeOrder, orderids)
+```
+In either approach, at the end, we should see properly saved embedded OrderDetails within each Order document in the Azure Cosmos DB collection:
++
+## Next steps
+* Learn about [data modeling in Azure Cosmos DB](./modeling-data.md)
+* Learn [how to model and partition data on Azure Cosmos DB](./how-to-model-partition-example.md)
cosmos-db Modeling Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/modeling-data.md
+
+ Title: Modeling data in Azure Cosmos DB
+
+description: Learn about data modeling in NoSQL databases, differences between modeling data in a relational database and a document database.
++++++ Last updated : 03/24/2022++
+# Data modeling in Azure Cosmos DB
+
+While schema-free databases, like Azure Cosmos DB, make it super easy to store and query unstructured and semi-structured data, you should spend some time thinking about your data model to get the most out of the service in terms of performance, scalability, and cost.
+
+How is data going to be stored? How is your application going to retrieve and query data? Is your application read-heavy, or write-heavy?
+
+>
+> [!VIDEO https://aka.ms/docs.modeling-data]
+
+After reading this article, you'll be able to answer the following questions:
+
+* What is data modeling and why should I care?
+* How is modeling data in Azure Cosmos DB different to a relational database?
+* How do I express data relationships in a non-relational database?
+* When do I embed data and when do I link to data?
+
+## Numbers in JSON
+Azure Cosmos DB saves documents in JSON, which means you need to carefully determine whether to convert numbers into strings before storing them in JSON. All numbers should ideally be converted into a `String` if there's any chance that they fall outside the boundaries of double-precision numbers according to [IEEE 754 binary64](https://www.rfc-editor.org/rfc/rfc8259#ref-IEEE754). The [JSON specification](https://www.rfc-editor.org/rfc/rfc8259#section-6) calls out the reasons why using numbers outside this boundary is in general a bad practice in JSON due to likely interoperability problems. These concerns are especially relevant for the partition key column, because it's immutable and changing it later requires a data migration.
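+
+For example, here's a hypothetical sketch of how a Java application might model a document whose large numeric identifier is kept as a string so it round-trips through JSON without losing precision (the class and property names are illustrative):
+
+```java
+public class MeterReading {
+    // Small counters fit comfortably within the IEEE 754 binary64 integer range.
+    private int sequenceNumber;
+
+    // A 64-bit counter can exceed 2^53 and would silently lose precision as a JSON number,
+    // so it's stored (and used as the partition key, if needed) as a String.
+    private String meterReadingId;
+
+    public int getSequenceNumber() { return sequenceNumber; }
+    public void setSequenceNumber(int sequenceNumber) { this.sequenceNumber = sequenceNumber; }
+
+    public String getMeterReadingId() { return meterReadingId; }
+    public void setMeterReadingId(String meterReadingId) { this.meterReadingId = meterReadingId; }
+}
+```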
+
+## <a id="embedding-data"></a>Embed data
+
+When you start modeling data in Azure Cosmos DB try to treat your entities as **self-contained items** represented as JSON documents.
+
+For comparison, let's first see how we might model data in a relational database. The following example shows how a person might be stored in a relational database.
++
+The strategy, when working with relational databases, is to normalize all your data. Normalizing your data typically involves taking an entity, such as a person, and breaking it down into discrete components. In the example above, a person may have multiple contact detail records and multiple address records. Contact details can be broken down further by extracting common fields, like a type. The same applies to addresses: each record can be of type *Home* or *Business*.
+
+The guiding premise when normalizing data is to **avoid storing redundant data** on each record and rather refer to data. In this example, to read a person, with all their contact details and addresses, you need to use joins to effectively compose back (or denormalize) your data at run time.
+
+```sql
+SELECT p.FirstName, p.LastName, a.City, cd.Detail
+FROM Person p
+JOIN ContactDetail cd ON cd.PersonId = p.Id
+JOIN ContactDetailType cdt ON cdt.Id = cd.TypeId
+JOIN Address a ON a.PersonId = p.Id
+```
+
+Write operations across many individual tables are required to update a single person's contact details and addresses.
+
+Now let's take a look at how we would model the same data as a self-contained entity in Azure Cosmos DB.
+
+```json
+{
+ "id": "1",
+ "firstName": "Thomas",
+ "lastName": "Andersen",
+ "addresses": [
+ {
+ "line1": "100 Some Street",
+ "line2": "Unit 1",
+ "city": "Seattle",
+ "state": "WA",
+ "zip": 98012
+ }
+ ],
+ "contactDetails": [
+ {"email": "thomas@andersen.com"},
+ {"phone": "+1 555 555-5555", "extension": 5555}
+ ]
+}
+```
+
+Using the approach above, we've **denormalized** the person record by **embedding** all the information related to this person, such as their contact details and addresses, into a *single JSON* document.
+In addition, because we're not confined to a fixed schema, we have the flexibility to do things like having contact details of different shapes entirely.
+
+Retrieving a complete person record from the database is now a **single read operation** against a single container and for a single item. Updating the contact details and addresses of a person record is also a **single write operation** against a single item.
+
+By denormalizing data, your application may need to issue fewer queries and updates to complete common operations.
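+
+As an illustration, here's a hedged sketch of a query that returns the whole person, including contact details and addresses, in one operation (the SDKs can also do this as a point read by `id` and partition key):
+
+```sql
+SELECT * FROM c WHERE c.id = "1"
+```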
+
+### When to embed
+
+In general, use embedded data models when:
+
+* There are **contained** relationships between entities.
+* There are **one-to-few** relationships between entities.
+* There's embedded data that **changes infrequently**.
+* There's embedded data that won't grow **without bound**.
+* There's embedded data that is **queried frequently together**.
+
+> [!NOTE]
+> Typically denormalized data models provide better **read** performance.
+
+### When not to embed
+
+While the rule of thumb in Azure Cosmos DB is to denormalize everything and embed all data into a single item, this can lead to some situations that should be avoided.
+
+Take this JSON snippet.
+
+```json
+{
+ "id": "1",
+ "name": "What's new in the coolest Cloud",
+ "summary": "A blog post by someone real famous",
+ "comments": [
+ {"id": 1, "author": "anon", "comment": "something useful, I'm sure"},
+ {"id": 2, "author": "bob", "comment": "wisdom from the interwebs"},
+ …
+ {"id": 100001, "author": "jane", "comment": "and on we go ..."},
+ …
+ {"id": 1000000001, "author": "angry", "comment": "blah angry blah angry"},
+ …
+ {"id": ∞ + 1, "author": "bored", "comment": "oh man, will this ever end?"},
+ ]
+}
+```
+
+This might be what a post entity with embedded comments would look like if we were modeling a typical blog, or CMS, system. The problem with this example is that the comments array is **unbounded**, meaning that there's no (practical) limit to the number of comments any single post can have. This might become a problem because the size of the item could grow infinitely large, so this is a design you should avoid.
+
+As the size of the item grows, the ability to transmit the data over the wire, and to read and update the item at scale, is impacted.
+
+In this case, it would be better to consider the following data model.
+
+```json
+Post item:
+{
+ "id": "1",
+ "name": "What's new in the coolest Cloud",
+ "summary": "A blog post by someone real famous",
+ "recentComments": [
+ {"id": 1, "author": "anon", "comment": "something useful, I'm sure"},
+ {"id": 2, "author": "bob", "comment": "wisdom from the interwebs"},
+ {"id": 3, "author": "jane", "comment": "....."}
+ ]
+}
+
+Comment items:
+[
+ {"id": 4, "postId": "1", "author": "anon", "comment": "more goodness"},
+ {"id": 5, "postId": "1", "author": "bob", "comment": "tails from the field"},
+ ...
+ {"id": 99, "postId": "1", "author": "angry", "comment": "blah angry blah angry"},
+ {"id": 100, "postId": "2", "author": "anon", "comment": "yet more"},
+ ...
+ {"id": 199, "postId": "2", "author": "bored", "comment": "will this ever end?"}
+]
+```
+
+This model has a document for each comment, with a property that contains the post identifier. It allows posts to contain any number of comments and to grow efficiently. Users who want to see more than the most recent comments query the comments container, passing in the postId, which should be the partition key for that container.
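+
+For example, here's a hedged sketch of the query that loads all comments for post `1`; because `postId` is the partition key of the comments container, this is a single-partition query:
+
+```sql
+SELECT * FROM c WHERE c.postId = "1"
+```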
+
+Another case where embedding data isn't a good idea is when the embedded data is used often across items and will change frequently.
+
+Take this JSON snippet.
+
+```json
+{
+ "id": "1",
+ "firstName": "Thomas",
+ "lastName": "Andersen",
+ "holdings": [
+ {
+ "numberHeld": 100,
+ "stock": { "symbol": "zbzb", "open": 1, "high": 2, "low": 0.5 }
+ },
+ {
+ "numberHeld": 50,
+ "stock": { "symbol": "xcxc", "open": 89, "high": 93.24, "low": 88.87 }
+ }
+ ]
+}
+```
+
+This could represent a person's stock portfolio. We've chosen to embed the stock information in each portfolio document. In an environment where related data changes frequently, like a stock trading application, embedding that data means you're constantly updating each portfolio document every time a stock is traded.
+
+Stock *zbzb* may be traded many hundreds of times in a single day, and thousands of users could have *zbzb* in their portfolios. With a data model like the one above, we would have to update many thousands of portfolio documents many times every day, leading to a system that won't scale well.
+
+## <a id="referencing-data"></a>Reference data
+
+Embedding data works nicely for many cases, but there are scenarios where denormalizing your data causes more problems than it's worth. So what do we do now?
+
+Relational databases aren't the only place where you can create relationships between entities. In a document database, you may have information in one document that relates to data in other documents. We don't recommend building systems that would be better suited to a relational database in Azure Cosmos DB, or any other document database, but simple relationships are fine and may be useful.
+
+In the JSON below, we use the stock portfolio example from earlier, but this time we refer to the stock item from the portfolio instead of embedding it. This way, when the stock item changes frequently throughout the day, the only document that needs to be updated is the single stock document.
+
+```json
+Person document:
+{
+ "id": "1",
+ "firstName": "Thomas",
+ "lastName": "Andersen",
+ "holdings": [
+ { "numberHeld": 100, "stockId": 1},
+ { "numberHeld": 50, "stockId": 2}
+ ]
+}
+
+Stock documents:
+{
+ "id": "1",
+ "symbol": "zbzb",
+ "open": 1,
+ "high": 2,
+ "low": 0.5,
+ "vol": 11970000,
+ "mkt-cap": 42000000,
+ "pe": 5.89
+},
+{
+ "id": "2",
+ "symbol": "xcxc",
+ "open": 89,
+ "high": 93.24,
+ "low": 88.87,
+ "vol": 2970200,
+ "mkt-cap": 1005000,
+ "pe": 75.82
+}
+```
+
+An immediate downside to this approach, though, is that if your application needs to show information about each stock that is held when displaying a person's portfolio, it has to make multiple trips to the database to load the information for each stock document. Here we've decided to improve the efficiency of write operations, which happen frequently throughout the day, at the cost of the read operations, which potentially have less impact on the performance of this particular system.
+
+> [!NOTE]
+> Normalized data models **can require more round trips** to the server.
+
+### What about foreign keys?
+
+Because there's currently no concept of a constraint, foreign-key or otherwise, any inter-document relationships that you have in documents are effectively "weak links" and aren't verified by the database itself. If you want to ensure that the data a document refers to actually exists, you need to verify that in your application, or by using server-side triggers or stored procedures on Azure Cosmos DB.
+
+### When to reference
+
+In general, use normalized data models when:
+
+* Representing **one-to-many** relationships.
+* Representing **many-to-many** relationships.
+* Related data **changes frequently**.
+* Referenced data could be **unbounded**.
+
+> [!NOTE]
+> Typically normalizing provides better **write** performance.
+
+### Where do I put the relationship?
+
+How the relationship is expected to grow helps determine which document should store the reference.
+
+Look at the JSON below, which models publishers and books.
+
+```json
+Publisher document:
+{
+ "id": "mspress",
+ "name": "Microsoft Press",
+ "books": [ 1, 2, 3, ..., 100, ..., 1000]
+}
+
+Book documents:
+{"id": "1", "name": "Azure Cosmos DB 101" }
+{"id": "2", "name": "Azure Cosmos DB for RDBMS Users" }
+{"id": "3", "name": "Taking over the world one JSON doc at a time" }
+...
+{"id": "100", "name": "Learn about Azure Cosmos DB" }
+...
+{"id": "1000", "name": "Deep Dive into Azure Cosmos DB" }
+```
+
+If the number of books per publisher is small, with limited growth, then storing the book references inside the publisher document may be useful. However, if the number of books per publisher is unbounded, then this data model leads to mutable, growing arrays, as in the example publisher document above.
+
+Switching things around a bit would result in a model that still represents the same data but now avoids these large mutable collections.
+
+```json
+Publisher document:
+{
+ "id": "mspress",
+ "name": "Microsoft Press"
+}
+
+Book documents:
+{"id": "1","name": "Azure Cosmos DB 101", "pub-id": "mspress"}
+{"id": "2","name": "Azure Cosmos DB for RDBMS Users", "pub-id": "mspress"}
+{"id": "3","name": "Taking over the world one JSON doc at a time", "pub-id": "mspress"}
+...
+{"id": "100","name": "Learn about Azure Cosmos DB", "pub-id": "mspress"}
+...
+{"id": "1000","name": "Deep Dive into Azure Cosmos DB", "pub-id": "mspress"}
+```
+
+In the above example, we've dropped the unbounded collection on the publisher document. Instead, we just have a reference to the publisher on each book document.
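+
+With this layout, here's a hedged sketch of the query that finds all books for a publisher, assuming a container named `books`; bracket syntax is used because `pub-id` contains a hyphen:
+
+```sql
+SELECT * FROM books b WHERE b["pub-id"] = "mspress"
+```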
+
+### How do I model many-to-many relationships?
+
+In a relational database, *many-to-many* relationships are often modeled with join tables, which just join records from other tables together.
+++
+You might be tempted to replicate the same thing using documents and produce a data model that looks similar to the following.
+
+```json
+Author documents:
+{"id": "a1", "name": "Thomas Andersen" }
+{"id": "a2", "name": "William Wakefield" }
+
+Book documents:
+{"id": "b1", "name": "Azure Cosmos DB 101" }
+{"id": "b2", "name": "Azure Cosmos DB for RDBMS Users" }
+{"id": "b3", "name": "Taking over the world one JSON doc at a time" }
+{"id": "b4", "name": "Learn about Azure Cosmos DB" }
+{"id": "b5", "name": "Deep Dive into Azure Cosmos DB" }
+
+Joining documents:
+{"authorId": "a1", "bookId": "b1" }
+{"authorId": "a2", "bookId": "b1" }
+{"authorId": "a1", "bookId": "b2" }
+{"authorId": "a1", "bookId": "b3" }
+```
+
+This would work. However, loading either an author with their books, or loading a book with its author, would always require at least two extra queries against the database: one query to the joining document, and then another query to fetch the actual document being joined.
+
+If this join is only gluing together two pieces of data, then why not drop it completely?
+Consider the following example.
+
+```json
+Author documents:
+{"id": "a1", "name": "Thomas Andersen", "books": ["b1", "b2", "b3"]}
+{"id": "a2", "name": "William Wakefield", "books": ["b1", "b4"]}
+
+Book documents:
+{"id": "b1", "name": "Azure Cosmos DB 101", "authors": ["a1", "a2"]}
+{"id": "b2", "name": "Azure Cosmos DB for RDBMS Users", "authors": ["a1"]}
+{"id": "b3", "name": "Learn about Azure Cosmos DB", "authors": ["a1"]}
+{"id": "b4", "name": "Deep Dive into Azure Cosmos DB", "authors": ["a2"]}
+```
+
+Now, if I have an author, I immediately know which books they've written, and conversely, if I have a book document loaded, I know the IDs of the authors. This saves the intermediary query against the join table, reducing the number of server round trips your application has to make.
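+
+A hedged sketch of the two lookups this model enables, assuming containers named `authors` and `books`:
+
+```sql
+-- books written by author a1: authors is an array of IDs on each book document
+SELECT * FROM books b WHERE ARRAY_CONTAINS(b.authors, "a1")
+
+-- book IDs for author a1: a single document read
+SELECT a.books FROM authors a WHERE a.id = "a1"
+```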
+
+## Hybrid data models
+
+We've now looked at embedding (or denormalizing) and referencing (or normalizing) data. Each approach has upsides and compromises.
+
+It doesn't always have to be either-or; don't be scared to mix things up a little.
+
+Based on your application's specific usage patterns and workloads, there may be cases where mixing embedded and referenced data makes sense, leading to simpler application logic with fewer server round trips while still maintaining a good level of performance.
+
+Consider the following JSON.
+
+```json
+Author documents:
+{
+ "id": "a1",
+ "firstName": "Thomas",
+ "lastName": "Andersen",
+ "countOfBooks": 3,
+ "books": ["b1", "b2", "b3"],
+ "images": [
+ {"thumbnail": "https://....png"},
+ {"profile": "https://....png"},
+ {"large": "https://....png"}
+ ]
+},
+{
+ "id": "a2",
+ "firstName": "William",
+ "lastName": "Wakefield",
+ "countOfBooks": 1,
+ "books": ["b1"],
+ "images": [
+ {"thumbnail": "https://....png"}
+ ]
+}
+
+Book documents:
+{
+ "id": "b1",
+ "name": "Azure Cosmos DB 101",
+ "authors": [
+ {"id": "a1", "name": "Thomas Andersen", "thumbnailUrl": "https://....png"},
+ {"id": "a2", "name": "William Wakefield", "thumbnailUrl": "https://....png"}
+ ]
+},
+{
+ "id": "b2",
+ "name": "Azure Cosmos DB for RDBMS Users",
+ "authors": [
+ {"id": "a1", "name": "Thomas Andersen", "thumbnailUrl": "https://....png"}
+ ]
+}
+```
+
+Here we've (mostly) followed the embedded model, where data from other entities is embedded in the top-level document, but some other data is referenced.
+
+If you look at the book document, you can see a few interesting fields in the array of authors. There's an `id` field, which is the field we use to refer back to an author document, standard practice in a normalized model, but we also have `name` and `thumbnailUrl`. We could have stuck with just `id` and left the application to get any additional information it needs from the respective author document by using the "link", but because our application displays the author's name and a thumbnail picture with every book displayed, we can save a round trip to the server per book in a list by denormalizing **some** data from the author.
+
+Sure, if the author's name changed or they wanted to update their photo, we'd have to update every book they ever published. But for our application, based on the assumption that authors don't change their names often, this is an acceptable design decision.
+
+The example also contains **pre-calculated aggregate** values to save expensive processing on read operations. Some of the data embedded in the author document is calculated at run time. Every time a new book is published, a book document is created **and** the countOfBooks field is set to a calculated value based on the number of book documents that exist for that author. This optimization works well in read-heavy systems, where we can afford to do computations on writes in order to optimize reads.
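+
+As a rough sketch, assuming containers named `authors` and `books`, the pre-calculated field turns an aggregate query into a simple property read:
+
+```sql
+-- without the pre-calculated field: count the author's books on every read
+SELECT VALUE COUNT(1) FROM books b WHERE ARRAY_CONTAINS(b.authors, {"id": "a1"}, true)
+
+-- with the pre-calculated field: read the value straight off the author document
+SELECT a.countOfBooks FROM authors a WHERE a.id = "a1"
+```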
+
+The ability to have a model with pre-calculated fields is made possible because Azure Cosmos DB supports **multi-document transactions**. Many NoSQL stores can't do transactions across documents and therefore advocate design decisions, such as "always embed everything", due to this limitation. With Azure Cosmos DB, you can use server-side triggers, or stored procedures that insert books and update authors all within an ACID transaction. Now you don't **have** to embed everything into one document just to be sure that your data remains consistent.
+
+## Distinguish between different document types
+
+In some scenarios, you may want to mix different document types in the same collection; this is usually the case when you want multiple, related documents to sit in the same [partition](../partitioning-overview.md). For example, you could put both books and book reviews in the same collection and partition it by `bookId`. In such situations, you usually want to add a field to your documents that identifies their type, in order to differentiate them.
+
+```json
+Book documents:
+{
+ "id": "b1",
+ "name": "Azure Cosmos DB 101",
+ "bookId": "b1",
+ "type": "book"
+}
+
+Review documents:
+{
+ "id": "r1",
+ "content": "This book is awesome",
+ "bookId": "b1",
+ "type": "review"
+},
+{
+ "id": "r2",
+ "content": "Best book ever!",
+ "bookId": "b1",
+ "type": "review"
+}
+```
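+
+With the discriminator in place, a hedged sketch of a query that returns only the reviews for book `b1` from the shared container (a single-partition query, because `bookId` is the partition key):
+
+```sql
+SELECT * FROM c WHERE c.bookId = "b1" AND c.type = "review"
+```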
+
+## Data modeling for Azure Synapse Link and Azure Cosmos DB analytical store
+
+[Azure Synapse Link for Azure Cosmos DB](../synapse-link.md) is a cloud-native hybrid transactional and analytical processing (HTAP) capability that enables you to run near real-time analytics over operational data in Azure Cosmos DB. Azure Synapse Link creates a tight, seamless integration between Azure Cosmos DB and Azure Synapse Analytics.
+
+This integration happens through [Azure Cosmos DB analytical store](../analytical-store-introduction.md), a columnar representation of your transactional data that enables large-scale analytics without any impact on your transactional workloads. This analytical store is suitable for fast, cost-effective queries on large operational data sets, without copying data or impacting the performance of your transactional workloads. When you create a container with analytical store enabled, or when you enable analytical store on an existing container, all transactional inserts, updates, and deletes are synchronized with the analytical store in near real time; no Change Feed or ETL jobs are required.
+
+With Azure Synapse Link, you can now directly connect to your Azure Cosmos DB containers from Azure Synapse Analytics and access the analytical store at no request unit (RU) cost. Azure Synapse Analytics currently supports Azure Synapse Link with Synapse Apache Spark and serverless SQL pools. If you have a globally distributed Azure Cosmos DB account, after you enable analytical store for a container, it will be available in all regions for that account.
+
+### Analytical store automatic schema inference
+
+While the Azure Cosmos DB transactional store is considered row-oriented semi-structured data, the analytical store has a columnar, structured format. This conversion is made automatically for customers, using [the schema inference rules for the analytical store](../analytical-store-introduction.md). There are limits in the conversion process: maximum number of nested levels, maximum number of properties, unsupported data types, and more.
+
+> [!NOTE]
+> In the context of analytical store, we consider the following structures as properties:
+> * JSON "elements" or "string-value pairs separated by a `:` ".
+> * JSON objects, delimited by `{` and `}`.
+> * JSON arrays, delimited by `[` and `]`.
+
+You can minimize the impact of the schema inference conversions, and maximize your analytical capabilities, by using the following techniques.
+
+### Normalization
+
+Normalization becomes meaningless, since with Azure Synapse Link you can join across your containers by using T-SQL or Spark SQL. The expected benefits of normalization are:
+ * Smaller data footprint in both transactional and analytical store.
+ * Smaller transactions.
+ * Fewer properties per document.
+ * Data structures with fewer nested levels.
+
+Note that these last two factors, fewer properties and fewer levels, help in the performance of your analytical queries but also decrease the chances of parts of your data not being represented in the analytical store. As described in the article on automatic schema inference rules, there are limits to the number of levels and properties that are represented in analytical store.
+
+Another important factor for normalization is that SQL serverless pools in Azure Synapse support result sets with up to 1000 columns, and exposing nested columns also counts towards that limit. In other words, both analytical store and Synapse SQL serverless pools have a limit of 1000 properties.
+
+But what should you do, given that denormalization is an important data modeling technique for Azure Cosmos DB? The answer is to find the right balance for your transactional and analytical workloads.
+
+### Partition Key
+
+Your Azure Cosmos DB partition key (PK) isn't used in the analytical store. And now you can use [analytical store custom partitioning](https://devblogs.microsoft.com/cosmosdb/custom-partitioning-azure-synapse-link/) to create copies of the analytical store by using any PK that you want. Because of this isolation, you can choose a PK for your transactional data with a focus on data ingestion and point reads, while cross-partition queries can be done with Azure Synapse Link. Let's see an example:
+
+In a hypothetical global IoT scenario, `device id` is a good PK, since all devices have a similar data volume, so you won't have a hot partition problem. But if you want to analyze the data of more than one device, like "all data from yesterday" or "totals per city", you may have problems, since those are cross-partition queries. Those queries can hurt your transactional performance, since they use part of your throughput in request units to run. But with Azure Synapse Link, you can run these analytical queries at no request unit cost. The analytical store's columnar format is optimized for analytical queries, and Azure Synapse Link applies this characteristic to allow great performance with Azure Synapse Analytics runtimes.
+
+### Data types and properties names
+
+The automatic schema inference rules article lists the supported data types. While an unsupported data type blocks representation in the analytical store, supported data types may be processed differently by the Azure Synapse runtimes. One example: when using DateTime strings that follow the ISO 8601 UTC standard, Spark pools in Azure Synapse represent these columns as string, and SQL serverless pools in Azure Synapse represent them as varchar(8000).
+
+Another challenge is that not all characters are accepted by Azure Synapse Spark. While white spaces are accepted, characters like colon, grave accent, and comma aren't. Let's say that your document has a property named **"First Name, Last Name"**. This property is represented in the analytical store, and the Synapse SQL serverless pool can read it without a problem. But because the property is in the analytical store, Azure Synapse Spark can't read any data from the analytical store, including all the other properties. At the end of the day, you can't use Azure Synapse Spark when you have even one property that uses unsupported characters in its name.
+
+### Data flattening
+
+All properties at the root level of your Azure Cosmos DB data are represented in the analytical store as columns, and everything at deeper levels of your document data model is represented as JSON, including nested structures. Nested structures demand extra processing from the Azure Synapse runtimes to flatten the data into a structured format, which may be a challenge in big data scenarios.
++
+The document below has only two columns in the analytical store, `id` and `contactDetails`. All other data, such as `email` and `phone`, requires extra processing through SQL functions to be read individually.
+
+```json
+
+{
+ "id": "1",
+ "contactDetails": [
+ {"email": "thomas@andersen.com"},
+ {"phone": "+1 555 555-5555"}
+ ]
+}
+```
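+
+For reference, here's a hedged sketch, with placeholder account, database, key, and container names, of how a Synapse serverless SQL pool could flatten the nested `contactDetails` column by using `OPENJSON`:
+
+```sql
+SELECT docs.id, flat.email, flat.phone
+FROM OPENROWSET(
+        'CosmosDB',
+        'Account=myaccount;Database=mydatabase;Key=myaccountkey',
+        myContainer
+    ) WITH (id varchar(36), contactDetails varchar(8000)) AS docs
+CROSS APPLY OPENJSON(docs.contactDetails)
+    WITH (email varchar(200) '$.email', phone varchar(50) '$.phone') AS flat
+```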
+
+The document below has three columns in the analytical store: `id`, `email`, and `phone`. All data is directly accessible as columns.
+
+```json
+
+{
+ "id": "1",
+ "email": "thomas@andersen.com",
+ "phone": "+1 555 555-5555"
+}
+```
+
+### Data tiering
+
+Azure Synapse Link allows you to reduce costs from the following perspectives:
+
+ * Fewer queries running in your transactional database.
+ * A PK optimized for data ingestion and point reads, reducing data footprint, hot partition scenarios, and partition splits.
+ * Data tiering, since [analytical time-to-live (attl)](../analytical-store-introduction.md#analytical-ttl) is independent from transactional time-to-live (tttl). You can keep your transactional data in the transactional store for a few days, weeks, or months, and keep the data in the analytical store for years or forever. The analytical store's columnar format brings natural data compression, from 50% up to 90%. And its cost per GB is ~10% of the transactional store's actual price. For more information about the current backup limitations, see [analytical store overview](../analytical-store-introduction.md).
+ * No ETL jobs running in your environment, meaning that you don't need to provision request units for them.
+
+### Controlled redundancy
+
+Controlled redundancy is a great alternative for situations where a data model already exists and can't be changed, but the existing data model doesn't fit well into the analytical store due to automatic schema inference rules, like the limit on nested levels or the maximum number of properties. If this is your case, you can use [Azure Cosmos DB Change Feed](../change-feed.md) to replicate your data into another container, applying the required transformations for an Azure Synapse Link friendly data model. Let's see an example:
+
+#### Scenario
+
+Container `CustomersOrdersAndItems` is used to store online orders, including customer and item details: billing address, delivery address, delivery method, delivery status, item price, etc. Only the first 1000 properties are represented, and key information isn't included in the analytical store, blocking Azure Synapse Link usage. The container has PBs of records, and it's not possible to change the application and remodel the data.
+
+Another aspect of the problem is the large data volume. Billions of rows are constantly used by the Analytics Department, which prevents them from using tttl to delete old data. Maintaining the entire data history in the transactional database because of analytical needs forces them to constantly increase provisioned request units, impacting costs. Transactional and analytical workloads compete for the same resources at the same time.
+
+What to do?
+
+#### Solution with Change Feed
+
+* The engineering team decided to use Change Feed to populate three new containers: `Customers`, `Orders`, and `Items`. With Change Feed they're normalizing and flattening the data. Unnecessary information is removed from the data model and each container has close to 100 properties, avoiding data loss due to automatic schema inference limits.
+* These new containers have analytical store enabled, and now the Analytics Department is using Synapse Analytics to read the data, reducing request unit usage, since the analytical queries are happening in Synapse Apache Spark and serverless SQL pools.
+* Container `CustomersOrdersAndItems` now has tttl set to keep data for six months only, which allows for a further reduction in request unit usage, since there's a minimum of 10 request units per GB in Azure Cosmos DB. Less data, fewer request units.
++
+## Takeaways
+
+The biggest takeaway from this article is that data modeling in a schema-free world is as important as ever.
+
+Just as there's no single way to represent a piece of data on a screen, there's no single way to model your data. You need to understand your application and how it will produce, consume, and process the data. Then, by applying some of the guidelines presented here you can set about creating a model that addresses the immediate needs of your application. When your applications need to change, you can use the flexibility of a schema-free database to embrace that change and evolve your data model easily.
+
+## Next steps
+
+* To learn more about Azure Cosmos DB, refer to the service's [documentation](/azure/cosmos-db/) page.
+
+* To understand how to shard your data across multiple partitions, refer to [Partitioning Data in Azure Cosmos DB](../partitioning-overview.md).
+
+* To learn how to model and partition data on Azure Cosmos DB using a real-world example, refer to [Data Modeling and Partitioning - a Real-World Example](how-to-model-partition-example.md).
+
+* See the training module on how to [Model and partition your data in Azure Cosmos DB.](/training/modules/model-partition-data-azure-cosmos-db/)
+
+* Configure and use [Azure Synapse Link for Azure Cosmos DB](../configure-synapse-link.md).
+
+* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Odbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/odbc-driver.md
+
+ Title: Use Azure Cosmos DB ODBC driver to connect to BI and analytics tools
+description: Use the Azure Cosmos DB ODBC driver to create normalized data tables and views for SQL queries, analytics, BI, and visualizations.
++++++ Last updated : 06/21/2022+++
+# Use the Azure Cosmos DB ODBC driver to connect to BI and data analytics tools
++
+This article walks you through installing and using the Azure Cosmos DB ODBC driver to create normalized tables and views for your Azure Cosmos DB data. You can query the normalized data with SQL queries, or import the data into Power BI or other BI and analytics software to create reports and visualizations.
+
+Azure Cosmos DB is a schemaless database, which enables rapid application development and lets you iterate on data models without being confined to a strict schema. A single Azure Cosmos DB database can contain JSON documents of various structures. To analyze or report on this data, you might need to flatten the data to fit into a schema.
+
+The ODBC driver normalizes Azure Cosmos DB data into tables and views that fit your data analytics and reporting needs. The normalized schemas let you use ODBC-compliant tools to access the data. The schemas have no impact on the underlying data, and don't require developers to adhere to them. The ODBC driver helps make Azure Cosmos DB databases useful for data analysts as well as development teams.
+
+You can do SQL operations against the normalized tables and views, including group by queries, inserts, updates, and deletes. The driver is ODBC 3.8 compliant and supports ANSI SQL-92 syntax.
+
+You can also connect the normalized Azure Cosmos DB data to other software solutions, such as SQL Server Integration Services (SSIS), Alteryx, QlikSense, Tableau and other analytics software, BI, and data integration tools. You can use those solutions to analyze, move, transform, and create visualizations with your Azure Cosmos DB data.
+
+> [!IMPORTANT]
+> - Connecting to Azure Cosmos DB with the ODBC driver is currently supported for Azure Cosmos DB for NoSQL only.
+> - The current ODBC driver doesn't support aggregate pushdowns, and has known issues with some analytics tools. Until a new version is released, you can use one of the following alternatives:
+> - [Azure Synapse Link](../synapse-link.md) is the preferred analytics solution for Azure Cosmos DB. With Azure Synapse Link and Azure Synapse SQL serverless pools, you can use any BI tool to extract near real-time insights from Azure Cosmos DB SQL or API for MongoDB data.
+> - For Power BI, you can use the [Azure Cosmos DB connector for Power BI](powerbi-visualize.md).
+> - For Qlik Sense, see [Connect Qlik Sense to Azure Cosmos DB](../visualize-qlik-sense.md).
+
+<a id="install"></a>
+## Install the ODBC driver and connect to your database
+
+1. Download the drivers for your environment:
+
+ | Installer | Supported operating systems|
+ |||
+ |[Microsoft Azure Cosmos DB ODBC 64-bit.msi](https://aka.ms/cosmos-odbc-64x64) for 64-bit Windows| 64-bit versions of Windows 8.1 or later, Windows 8, Windows 7, Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2.|
+ |[Microsoft Azure Cosmos DB ODBC 32x64-bit.msi](https://aka.ms/cosmos-odbc-32x64) for 32-bit on 64-bit Windows| 64-bit versions of Windows 8.1 or later, Windows 8, Windows 7, Windows XP, Windows Vista, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and Windows Server 2003.|
+ |[Microsoft Azure Cosmos DB ODBC 32-bit.msi](https://aka.ms/cosmos-odbc-32x32) for 32-bit Windows|32-bit versions of Windows 8.1 or later, Windows 8, Windows 7, Windows XP, and Windows Vista.|
+
+1. Run the *.msi* file locally, which starts the **Microsoft Azure Cosmos DB ODBC Driver Installation Wizard**.
+
+1. Complete the installation wizard using the default input.
+
+1. After the driver installs, type *ODBC Data sources* in the Windows search box, and open the **ODBC Data Source Administrator**.
+
+1. Make sure that the **Microsoft Azure DocumentDB ODBC Driver** is listed on the **Drivers** tab.
+
+ :::image type="content" source="./media/odbc-driver/odbc-driver.png" alt-text="Screenshot of the ODBC Data Source Administrator window.":::
+
+ <a id="connect"></a>
+1. Select the **User DSN** tab, and then select **Add** to create a new data source name (DSN). You can also create a System DSN.
+
+1. In the **Create New Data Source** window, select **Microsoft Azure DocumentDB ODBC Driver**, and then select **Finish**.
+
+1. In the **DocumentDB ODBC Driver DSN Setup** window, fill in the following information:
+
+ :::image type="content" source="./media/odbc-driver/odbc-driver-dsn-setup.png" alt-text="Screenshot of the D S N Setup window.":::
+
+ - **Data Source Name**: A friendly name for the ODBC DSN. This name is unique to this Azure Cosmos DB account.
+ - **Description**: A brief description of the data source.
+ - **Host**: The URI for your Azure Cosmos DB account. You can get this information from the **Keys** page in your Azure Cosmos DB account in the Azure portal.
+ - **Access Key**: The primary or secondary, read-write or read-only key from the Azure Cosmos DB **Keys** page in the Azure portal. It's best to use the read-only keys, if you use the DSN for read-only data processing and reporting.
+
+ To avoid an authentication error, use the copy buttons to copy the URI and key from the Azure portal.
+
+ :::image type="content" source="./media/odbc-driver/odbc-cosmos-account-keys.png" alt-text="Screenshot of the Azure Cosmos DB Keys page.":::
+
+ - **Encrypt Access Key for**: Select the best choice, based on who uses the machine.
+
+1. Select **Test** to make sure you can connect to your Azure Cosmos DB account.
+
+1. Select **Advanced Options** and set the following values:
+
+ - **REST API Version**: Select the [REST API version](/rest/api/cosmos-db) for your operations. The default is **2015-12-16**.
+
+ If you have containers with [large partition keys](../large-partition-keys.md) that need REST API version 2018-12-31, type *2018-12-31*, and then [follow the steps at the end of this procedure](#edit-the-windows-registry-to-support-rest-api-version-2018-12-31).
+
+ - **Query Consistency**: Select the [consistency level](../consistency-levels.md) for your operations. The default is **Session**.
+ - **Number of Retries**: Enter the number of times to retry an operation if the initial request doesn't complete due to service rate limiting.
+ - **Schema File**: If you don't select a schema file, the driver scans the first page of data for each container to determine its schema, called container mapping, for each session. This process can cause a long startup time for applications that use the DSN. It's best to associate a schema file with the DSN.
+
+ - If you already have a schema file, select **Browse**, navigate to the file, select **Save**, and then select **OK**.
+ - If you don't have a schema file yet, select **OK**, and then follow the steps in the next section to [create a schema definition](#create-a-schema-definition). After you create the schema, come back to this **Advanced Options** window to add the schema file.
+
+After you select **OK** to complete and close the **DocumentDB ODBC Driver DSN Setup** window, the new User DSN appears on the **User DSN** tab of the **ODBC Data Source Administrator** window.
+
+ :::image type="content" source="./media/odbc-driver/odbc-driver-user-dsn.png" alt-text="Screenshot that shows the new User D S N on the User D S N tab.":::
+
+### Edit the Windows registry to support REST API version 2018-12-31
+
+If you have containers with [large partition keys](../large-partition-keys.md) that need REST API version 2018-12-31, follow these steps to update the Windows registry to support this version.
+
+1. In the Windows **Start** menu, type *regedit* to find and open the **Registry Editor**.
+1. In the Registry Editor, navigate to the path **Computer\HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI**.
+1. Create a new subkey with the same name as your DSN, such as *Contoso Account ODBC DSN*.
+1. Navigate to the new **Contoso Account ODBC DSN** subkey, and right-click to add a new **String** value:
+ - Value Name: **IgnoreSessionToken**
+ - Value data: **1**
+ :::image type="content" source="./media/odbc-driver/cosmos-odbc-edit-registry.png" alt-text="Screenshot that shows the Windows Registry Editor settings.":::
+
+<a id="container-mapping"></a><a id="table-mapping"></a>
+## Create a schema definition
+
+There are two types of sampling methods you can use to create a schema: *container mapping* or *table-delimiter mapping*. A sampling session can use both sampling methods, but each container can use only one of the sampling methods. Which method to use depends on your data's characteristics.
+
+- **Container mapping** retrieves the data on a container page to determine the data structure, and transposes the container to a table on the ODBC side. This sampling method is efficient and fast when the data in a container is homogenous.
+
+- **Table-delimiter mapping** provides more robust sampling for heterogeneous data. This method scopes the sampling to a set of attributes and corresponding values.
+
+ For example, if a document contains a **Type** property, you can scope the sampling to the values of this property. The end result of the sampling is a set of tables for each of the **Type** values you specified. **Type = Car** produces a **Car** table, while **Type = Plane** produces a **Plane** table.
+
+To define a schema, follow these steps. For the table-delimiter mapping method, you take extra steps to define attributes and values for the schema.
+
+1. On the **User DSN** tab of the **ODBC Data Source Administrator** window, select your Azure Cosmos DB User DSN Name, and then select **Configure**.
+
+1. In the **DocumentDB ODBC Driver DSN Setup** window, select **Schema Editor**.
+
+ :::image type="content" source="./media/odbc-driver/odbc-driver-schema-editor.png" alt-text="Screenshot that shows the Schema Editor button in the D S N Setup window.":::
+
+1. In the **Schema Editor** window, select **Create New**.
+
+1. The **Generate Schema** window displays all the collections in the Azure Cosmos DB account. Select the checkboxes next to the containers you want to sample.
+
+1. To use the *container mapping* method, select **Sample**.
+
+ Or, to use *table-delimiter* mapping, take the following steps to define attributes and values for scoping the sample.
+
+ 1. Select **Edit** in the **Mapping Definition** column for your DSN.
+
+ 1. In the **Mapping Definition** window, under **Mapping Method**, select **Table Delimiters**.
+
+ 1. In the **Attributes** box, type the name of a delimiter property in your document that you want to scope the sampling to, for instance, *City*. Press Enter.
+
+ 1. If you want to scope the sampling to certain values for the attribute you entered, select the attribute, and then enter a value in the **Value** box, such as *Seattle*, and press Enter. You can add multiple values for attributes. Just make sure that the correct attribute is selected when you enter values.
+
+ 1. When you're done entering attributes and values, select **OK**.
+
+ 1. In the **Generate Schema** window, select **Sample**.
+
+1. In the **Design View** tab, refine your schema. The **Design View** represents the database, schema, and table. The table view displays the set of properties associated with the column names, such as **SQL Name** and **Source Name**.
+
+ For each column, you can modify the **SQL name**, the **SQL type**, **SQL length**, **Scale**, **Precision**, and **Nullable** as applicable.
+
+ You can set **Hide Column** to **true** if you want to exclude that column from query results. Columns marked **Hide Column = true** aren't returned for selection and projection, although they're still part of the schema. For example, you can hide all of the Azure Cosmos DB system required properties that start with **_**. The **id** column is the only field you can't hide, because it's the primary key in the normalized schema.
+
+1. Once you finish defining the schema, select **File** > **Save**, navigate to the directory to save in, and select **Save**.
+
+1. To use this schema with a DSN, in the **DocumentDB ODBC Driver DSN Setup** window, select **Advanced Options**. Select the **Schema File** box, navigate to the saved schema, select **OK** and then select **OK** again. Saving the schema file modifies the DSN connection to scope to the schema-defined data and structure.
+
+### Create views
+
+Optionally, you can define and create views in the **Schema Editor** as part of the sampling process. These views are equivalent to SQL views. The views are read-only, and scope to the selections and projections of the defined Azure Cosmos DB SQL query.
+
+Follow these steps to create a view for your data:
+
+1. On the **Sample View** tab of the **Schema Editor** window, select the containers you want to sample, and then select **Add** in the **View Definition** column.
+
+ :::image type="content" source="./media/odbc-driver/odbc-driver-create-view.png" alt-text="Screenshot that shows creating a view.":::
+
+1. In the **View Definitions** window, select **New**. Enter a name for the view, for example *EmployeesfromSeattleView*, and then select **OK**.
+
+1. In the **Edit view** window, enter an [Azure Cosmos DB query](query/getting-started.md), for example:
+
+ `SELECT c.City, c.EmployeeName, c.Level, c.Age, c.Manager FROM c WHERE c.City = "Seattle"`
+
+1. Select **OK**.
+
+ :::image type="content" source="./media/odbc-driver/odbc-driver-create-view-2.png" alt-text="Screenshot of adding a query when creating a view.":::
+
+You can create as many views as you like. Once you're done defining the views, select **Sample** to sample the data.
+
+> [!IMPORTANT]
+> The query text in the view definition shouldn't contain line breaks. Otherwise, you'll get a generic error when previewing the view.
++
+## Query with SQL Server Management Studio
+
+Once you set up an Azure Cosmos DB ODBC Driver User DSN, you can query Azure Cosmos DB from SQL Server Management Studio (SSMS) by setting up a linked server connection.
+
+1. [Install SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and connect to the server.
+
+1. In the SSMS query editor, create a linked server object for the data source by running the following commands. Replace `DEMOCOSMOS` with the name for your linked server, and `SDS Name` with your data source name.
+
+ ```sql
+ USE [master]
+ GO
+
+ EXEC master.dbo.sp_addlinkedserver @server = N'DEMOCOSMOS', @srvproduct=N'', @provider=N'MSDASQL', @datasrc=N'SDS Name'
+
+ EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'DEMOCOSMOS', @useself=N'False', @locallogin=NULL, @rmtuser=NULL, @rmtpassword=NULL
+
+ GO
+ ```
+
+To see the new linked server name, refresh the linked servers list.
++
+To query the linked database, enter an SSMS query. In this example, the query selects from the table in the container named `customers`:
+
+```sql
+SELECT * FROM OPENQUERY(DEMOCOSMOS, 'SELECT * FROM [customers].[customers]')
+```
+
+Execute the query. The results should look similar to the following output:
+
+```output
+attachments/ 1507476156 521 Bassett Avenue, Wikieup, Missouri, 5422 "2602bc56-0000-0000-0000-59da42bc0000" 2015-02-06T05:32:32 +05:00 f1ca3044f17149f3bc61f7b9c78a26df
+attachments/ 1507476156 167 Nassau Street, Tuskahoma, Illinois, 5998 "2602bd56-0000-0000-0000-59da42bc0000" 2015-06-16T08:54:17 +04:00 f75f949ea8de466a9ef2bdb7ce065ac8
+attachments/ 1507476156 885 Strong Place, Cassel, Montana, 2069 "2602be56-0000-0000-0000-59da42bc0000" 2015-03-20T07:21:47 +04:00 ef0365fb40c04bb6a3ffc4bc77c905fd
+attachments/ 1507476156 515 Barwell Terrace, Defiance, Tennessee, 6439 "2602c056-0000-0000-0000-59da42bc0000" 2014-10-16T06:49:04 +04:00 e913fe543490432f871bc42019663518
+attachments/ 1507476156 570 Ruby Street, Spokane, Idaho, 9025 "2602c156-0000-0000-0000-59da42bc0000" 2014-10-30T05:49:33 +04:00 e53072057d314bc9b36c89a8350048f3
+```
+
+## View your data in Power BI Desktop
+
+You can use your DSN to connect to Azure Cosmos DB with any ODBC-compliant tools. This procedure shows you how to connect to Power BI Desktop to create a Power BI visualization.
+
+1. In Power BI Desktop, select **Get Data**.
+
+ :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data.png" alt-text="Screenshot showing Get Data in Power B I Desktop.":::
+
+1. In the **Get Data** window, select **Other** > **ODBC**, and then select **Connect**.
+
+ :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-2.png" alt-text="Screenshot that shows choosing ODBC data source in Power B I Get Data.":::
+
+1. In the **From ODBC** window, select the DSN you created, and then select **OK**.
+
+ :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-3.png" alt-text="Screenshot that shows choosing the D S N in Power B I Get Data.":::
+
+1. In the **Access a data source using an ODBC driver** window, select **Default or Custom** and then select **Connect**.
+
+1. In the **Navigator** window, in the left pane, expand the database and schema, and select the table. The results pane includes the data that uses the schema you created.
+
+ :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-4.png" alt-text="Screenshot of selecting the table in Power B I Get Data.":::
+
+1. To visualize the data in Power BI desktop, select the checkbox next to the table name, and then select **Load**.
+
+1. In Power BI Desktop, select the **Data** tab on the left of the screen to confirm your data was imported.
+
+1. Select the **Report** tab on the left of the screen, select **New visual** from the ribbon, and then customize the visual.
+
+## Troubleshooting
+
+- **Problem**: You get the following error when trying to connect:
+
+ ```output
+ [HY000]: [Microsoft][Azure Cosmos DB] (401) HTTP 401 Authentication Error: {"code":"Unauthorized","message":"The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'get\ndbs\n\nfri, 20 jan 2017 03:43:55 gmt\n\n'\r\nActivityId: 9acb3c0d-cb31-4b78-ac0a-413c8d33e373"}
+ ```
+
+ **Solution:** Make sure the **Host** and **Access Key** values you copied from the Azure portal are correct, and retry.
+
+- **Problem**: You get the following error in SSMS when trying to create a linked Azure Cosmos DB server:
+
+ ```output
+ Msg 7312, Level 16, State 1, Line 44
+
+ Invalid use of schema or catalog for OLE DB provider "MSDASQL" for linked server "DEMOCOSMOS". A four-part name was supplied, but the provider does not expose the necessary interfaces to use a catalog or schema.
+ ```
+
+ **Solution**: A linked Azure Cosmos DB server doesn't support four-part naming.
+
+## Next steps
+
+- To learn more about Azure Cosmos DB, see [Welcome to Azure Cosmos DB](../introduction.md).
+- For more information about creating visualizations in Power BI Desktop, see [Visualization types in Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-visualization-types-for-reports-and-q-and-a/).
cosmos-db Performance Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-testing.md
+
+ Title: Performance and scale testing with Azure Cosmos DB
+description: Learn how to do scale and performance testing with Azure Cosmos DB. You can then evaluate the functionality of Azure Cosmos DB for high-performance application scenarios.
++++++ Last updated : 08/26/2021+++
+# Performance and scale testing with Azure Cosmos DB
+
+Performance and scale testing is a key step in application development. For many applications, the database tier has a significant impact on overall performance and scalability. Therefore, it's a critical component of performance testing. [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) is purpose-built for elastic scale and predictable performance. These capabilities make it a great fit for applications that need a high-performance database tier.
+
+This article is a reference for developers implementing performance test suites for their Azure Cosmos DB workloads. It also can be used to evaluate Azure Cosmos DB for high-performance application scenarios. It focuses primarily on isolated performance testing of the database, but also includes best practices for production applications.
+
+After reading this article, you'll be able to answer the following questions:
+
+* Where can I find a sample .NET client application for performance testing of Azure Cosmos DB?
+* How do I achieve high throughput levels with Azure Cosmos DB from my client application?
+
+To get started with code, download the project from [Azure Cosmos DB performance testing sample](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Tools/Benchmark).
+
+> [!NOTE]
+> The goal of this application is to demonstrate how to get the best performance from Azure Cosmos DB with a small number of client machines. The goal of the sample is not to achieve the peak throughput capacity of Azure Cosmos DB (which can scale without any limits).
+
+If you're looking for client-side configuration options to improve Azure Cosmos DB performance, see [Azure Cosmos DB performance tips](performance-tips.md).
+
+## Run the performance testing application
+The quickest way to get started is to compile and run the .NET sample, as described in the following steps. You can also review the source code and implement similar configurations on your own client applications.
+
+**Step 1:** Download the project from [Azure Cosmos DB performance testing sample](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Tools/Benchmark), or fork the GitHub repository.
+
+**Step 2:** Modify the settings for EndpointUrl, AuthorizationKey, CollectionThroughput, and DocumentTemplate (optional) in App.config.
+
+> [!NOTE]
+> Before you provision collections with high throughput, refer to the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to estimate the costs per collection. Azure Cosmos DB bills storage and throughput independently on an hourly basis. You can save costs by deleting or lowering the throughput of your Azure Cosmos DB containers after testing.
+>
+>
+
+**Step 3:** Compile and run the console app from the command line. You should see output like the following:
+
+```bash
+C:\Users\cosmosdb\Desktop\Benchmark>DocumentDBBenchmark.exe
+Summary:
+
+Endpoint: https://arramacquerymetrics.documents.azure.com:443/
+Collection : db.data at 100000 request units per second
+Document Template*: Player.json
+Degree of parallelism*: -1
+
+DocumentDBBenchmark starting...
+Found collection data with 100000 RU/s
+Starting Inserts with 100 tasks
+Inserted 4503 docs @ 4491 writes/s, 47070 RU/s (122B max monthly 1KB reads)
+Inserted 17910 docs @ 8862 writes/s, 92878 RU/s (241B max monthly 1KB reads)
+Inserted 32339 docs @ 10531 writes/s, 110366 RU/s (286B max monthly 1KB reads)
+Inserted 47848 docs @ 11675 writes/s, 122357 RU/s (317B max monthly 1KB reads)
+Inserted 58857 docs @ 11545 writes/s, 120992 RU/s (314B max monthly 1KB reads)
+Inserted 69547 docs @ 11378 writes/s, 119237 RU/s (309B max monthly 1KB reads)
+Inserted 80687 docs @ 11345 writes/s, 118896 RU/s (308B max monthly 1KB reads)
+Inserted 91455 docs @ 11272 writes/s, 118131 RU/s (306B max monthly 1KB reads)
+Inserted 102129 docs @ 11208 writes/s, 117461 RU/s (304B max monthly 1KB reads)
+Inserted 112444 docs @ 11120 writes/s, 116538 RU/s (302B max monthly 1KB reads)
+Inserted 122927 docs @ 11063 writes/s, 115936 RU/s (301B max monthly 1KB reads)
+Inserted 133157 docs @ 10993 writes/s, 115208 RU/s (299B max monthly 1KB reads)
+Inserted 144078 docs @ 10988 writes/s, 115159 RU/s (298B max monthly 1KB reads)
+Inserted 155415 docs @ 11013 writes/s, 115415 RU/s (299B max monthly 1KB reads)
+Inserted 166126 docs @ 10992 writes/s, 115198 RU/s (299B max monthly 1KB reads)
+Inserted 173051 docs @ 10739 writes/s, 112544 RU/s (292B max monthly 1KB reads)
+Inserted 180169 docs @ 10527 writes/s, 110324 RU/s (286B max monthly 1KB reads)
+Inserted 192469 docs @ 10616 writes/s, 111256 RU/s (288B max monthly 1KB reads)
+Inserted 199107 docs @ 10406 writes/s, 109054 RU/s (283B max monthly 1KB reads)
+Inserted 200000 docs @ 9930 writes/s, 104065 RU/s (270B max monthly 1KB reads)
+
+Summary:
+
+Inserted 200000 docs @ 9928 writes/s, 104063 RU/s (270B max monthly 1KB reads)
+
+DocumentDBBenchmark completed successfully.
+Press any key to exit...
+```
+
+**Step 4 (if necessary):** The throughput reported (RU/s) by the tool should be the same as or higher than the provisioned throughput of the collection or set of collections. If it's not, increasing the DegreeOfParallelism in small increments might help you reach the limit. If the throughput from your client app plateaus, start multiple instances of the app on additional client machines. If you need help with this step, file a support ticket from the [Azure portal](https://portal.azure.com).
+
+After you have the app running, you can try different [indexing policies](../index-policy.md) and [consistency levels](../consistency-levels.md) to understand their impact on throughput and latency. You can also review the source code and implement similar configurations to your own test suites or production applications.
+
+## Next steps
+
+In this article, we looked at how you can perform performance and scale testing with Azure Cosmos DB by using a .NET console app. For more information, see the following articles:
+
+* [Azure Cosmos DB performance testing sample](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Tools/Benchmark)
+* [Client configuration options to improve Azure Cosmos DB performance](performance-tips.md)
+* [Server-side partitioning in Azure Cosmos DB](../partitioning-overview.md)
+* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
cosmos-db Performance Tips Async Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-async-java.md
+
+ Title: Performance tips for Azure Cosmos DB Async Java SDK v2
+description: Learn client configuration options to improve Azure Cosmos DB database performance for Async Java SDK v2
+++
+ms.devlang: java
+ Last updated : 05/11/2020+++++
+# Performance tips for Azure Cosmos DB Async Java SDK v2
+
+> [!div class="op_single_selector"]
+> * [Java SDK v4](performance-tips-java-sdk-v4.md)
+> * [Async Java SDK v2](performance-tips-async-java.md)
+> * [Sync Java SDK v2](performance-tips-java.md)
+> * [.NET SDK v3](performance-tips-dotnet-sdk-v3.md)
+> * [.NET SDK v2](performance-tips.md)
++
+> [!IMPORTANT]
+> This is *not* the latest Java SDK for Azure Cosmos DB! You should upgrade your project to [Azure Cosmos DB Java SDK v4](sdk-java-v4.md) and then read the Azure Cosmos DB Java SDK v4 [performance tips guide](performance-tips-java-sdk-v4.md). Follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide to upgrade.
+>
+> The performance tips in this article are for Azure Cosmos DB Async Java SDK v2 only. See the Azure Cosmos DB Async Java SDK v2 [Release notes](sdk-java-async-v2.md), [Maven repository](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb), and Azure Cosmos DB Async Java SDK v2 [troubleshooting guide](troubleshoot-java-async-sdk.md) for more information.
+>
+
+> [!IMPORTANT]
+> On August 31, 2024 the Azure Cosmos DB Async Java SDK v2.x
+> will be retired; the SDK and all applications using the SDK
+> **will continue to function**; Azure Cosmos DB will simply cease
+> to provide further maintenance and support for this SDK.
+> We recommend following the instructions above to migrate to
+> Azure Cosmos DB Java SDK v4.
+>
+
+Azure Cosmos DB is a fast and flexible distributed database that scales seamlessly with guaranteed latency and throughput. You do not have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call or SDK method call. However, because Azure Cosmos DB is accessed via network calls there are client-side optimizations you can make to achieve peak performance when using the [Azure Cosmos DB Async Java SDK v2](sdk-java-async-v2.md).
+
+So if you're asking "How can I improve my database performance?" consider the following options:
+
+## Networking
+
+* **Connection mode: Use Direct mode**
+
+ How a client connects to Azure Cosmos DB has important implications on performance, especially in terms of client-side latency. The *ConnectionMode* is a key configuration setting available for configuring the client *ConnectionPolicy*. For Azure Cosmos DB Async Java SDK v2, the two available ConnectionModes are:
+
+ * [Gateway (default)](/java/api/com.microsoft.azure.cosmosdb.connectionmode)
+ * [Direct](/java/api/com.microsoft.azure.cosmosdb.connectionmode)
+
+    Gateway mode is supported on all SDK platforms and it is the configured option by default. If your applications run within a corporate network with strict firewall restrictions, Gateway mode is the best choice since it uses the standard HTTPS port and a single endpoint. The performance tradeoff, however, is that Gateway mode involves an additional network hop every time data is read from or written to Azure Cosmos DB. Because of this, Direct mode offers better performance due to fewer network hops.
+
+ The *ConnectionMode* is configured during the construction of the *DocumentClient* instance with the *ConnectionPolicy* parameter.
+
+### <a id="asyncjava2-connectionpolicy"></a>Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)
+
+```java
+    // Configure Direct mode and a larger connection pool on the ConnectionPolicy.
+    public ConnectionPolicy getConnectionPolicy() {
+        ConnectionPolicy policy = new ConnectionPolicy();
+        policy.setConnectionMode(ConnectionMode.Direct);
+        policy.setMaxPoolSize(1000);
+        return policy;
+    }
+
+    // Pass the customized policy when the client is constructed.
+    ConnectionPolicy connectionPolicy = getConnectionPolicy();
+    DocumentClient client = new DocumentClient(HOST, MASTER_KEY, connectionPolicy, null);
+```
+
+* **Collocate clients in same Azure region for performance**
+
+    When possible, place any applications calling Azure Cosmos DB in the same region as the Azure Cosmos DB database. For an approximate comparison, calls to Azure Cosmos DB within the same region complete within 1-2 ms, but the latency between the West and East coast of the US is >50 ms. This latency can vary from request to request depending on the route taken by the request as it passes from the client to the Azure datacenter boundary. The lowest possible latency is achieved by ensuring the calling application is located within the same Azure region as the provisioned Azure Cosmos DB endpoint. For a list of available regions, see [Azure Regions](https://azure.microsoft.com/regions/#services).
+
+ :::image type="content" source="./media/performance-tips/same-region.png" alt-text="Illustration of the Azure Cosmos DB connection policy" border="false":::
+
+## SDK Usage
+
+* **Install the most recent SDK**
+
+ The Azure Cosmos DB SDKs are constantly being improved to provide the best performance. See the Azure Cosmos DB Async Java SDK v2 [Release Notes](sdk-java-async-v2.md) pages to determine the most recent SDK and review improvements.
+
+* **Use a singleton Azure Cosmos DB client for the lifetime of your application**
+
+ Each AsyncDocumentClient instance is thread-safe and performs efficient connection management and address caching. To allow efficient connection management and better performance by AsyncDocumentClient, it is recommended to use a single instance of AsyncDocumentClient per AppDomain for the lifetime of the application.
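+
+    A minimal sketch of building such a singleton with the Async Java SDK v2 builder follows. It assumes the `HOST` and `MASTER_KEY` placeholder constants used elsewhere in this article and reuses the `getConnectionPolicy()` helper shown earlier; adapt it to your own configuration.
+
+    ```java
+    // Build the client once at startup and share it across the application.
+    AsyncDocumentClient client = new AsyncDocumentClient.Builder()
+        .withServiceEndpoint(HOST)                      // placeholder endpoint constant
+        .withMasterKeyOrResourceToken(MASTER_KEY)       // placeholder key constant
+        .withConnectionPolicy(getConnectionPolicy())    // helper shown earlier in this article
+        .withConsistencyLevel(ConsistencyLevel.Session)
+        .build();
+
+    // Reuse 'client' everywhere; close it only on application shutdown.
+    ```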
+
+* **Tuning ConnectionPolicy**
+
+ By default, Direct mode Azure Cosmos DB requests are made over TCP when using the Azure Cosmos DB Async Java SDK v2. Internally the SDK uses a special Direct mode architecture to dynamically manage network resources and get the best performance.
+
+ In the Azure Cosmos DB Async Java SDK v2, Direct mode is the best choice to improve database performance with most workloads.
+
+ * ***Overview of Direct mode***
+
+ :::image type="content" source="./media/performance-tips-async-java/rntbdtransportclient.png" alt-text="Illustration of the Direct mode architecture" border="false":::
+
+ The client-side architecture employed in Direct mode enables predictable network utilization and multiplexed access to Azure Cosmos DB replicas. The diagram above shows how Direct mode routes client requests to replicas in the Azure Cosmos DB backend. The Direct mode architecture allocates up to 10 **Channels** on the client side per DB replica. A Channel is a TCP connection preceded by a request buffer, which is 30 requests deep. The Channels belonging to a replica are dynamically allocated as needed by the replica's **Service Endpoint**. When the user issues a request in Direct mode, the **TransportClient** routes the request to the proper service endpoint based on the partition key. The **Request Queue** buffers requests before the Service Endpoint.
+
+ * ***ConnectionPolicy Configuration options for Direct mode***
+
+    As a first step, use the recommended configuration settings below. Please contact the [Azure Cosmos DB team](mailto:CosmosDBPerformanceSupport@service.microsoft.com) if you run into issues on this particular topic.
+
+ If you are using Azure Cosmos DB as a reference database (that is, the database is used for many point read operations and few write operations), it may be acceptable to set *idleEndpointTimeout* to 0 (that is, no timeout).
++
+ | Configuration option | Default |
+    | :--: | :--: |
+ | bufferPageSize | 8192 |
+ | connectionTimeout | "PT1M" |
+ | idleChannelTimeout | "PT0S" |
+ | idleEndpointTimeout | "PT1M10S" |
+ | maxBufferCapacity | 8388608 |
+ | maxChannelsPerEndpoint | 10 |
+ | maxRequestsPerChannel | 30 |
+ | receiveHangDetectionTime | "PT1M5S" |
+ | requestExpiryInterval | "PT5S" |
+ | requestTimeout | "PT1M" |
+ | requestTimerResolution | "PT0.5S" |
+ | sendHangDetectionTime | "PT10S" |
+ | shutdownTimeout | "PT15S" |
+
+* ***Programming tips for Direct mode***
+
+ Review the Azure Cosmos DB Async Java SDK v2 [Troubleshooting](troubleshoot-java-async-sdk.md) article as a baseline for resolving any SDK issues.
+
+ Some important programming tips when using Direct mode:
+
+ * **Use multithreading in your application for efficient TCP data transfer** - After making a request, your application should subscribe to receive data on another thread. Not doing so forces unintended "half-duplex" operation and the subsequent requests are blocked waiting for the previous request's reply.
+
+ * **Carry out compute-intensive workloads on a dedicated thread** - For similar reasons to the previous tip, operations such as complex data processing are best placed in a separate thread. A request that pulls in data from another data store (for example if the thread utilizes Azure Cosmos DB and Spark data stores simultaneously) may experience increased latency and it is recommended to spawn an additional thread that awaits a response from the other data store.
+
+ * The underlying network IO in the Azure Cosmos DB Async Java SDK v2 is managed by Netty, see these [tips for avoiding coding patterns that block Netty IO threads](troubleshoot-java-async-sdk.md#invalid-coding-pattern-blocking-netty-io-thread).
+
+ * **Data modeling** - The Azure Cosmos DB SLA assumes document size to be less than 1KB. Optimizing your data model and programming to favor smaller document size will generally lead to decreased latency. If you are going to need storage and retrieval of docs larger than 1KB, the recommended approach is for documents to link to data in Azure Blob Storage.
+
+* **Tuning parallel queries for partitioned collections**
+
+ Azure Cosmos DB Async Java SDK v2 supports parallel queries, which enable you to query a partitioned collection in parallel. For more information, see [code samples](https://github.com/Azure/azure-cosmosdb-java/tree/master/examples/src/test/java/com/microsoft/azure/cosmosdb/rx/examples) related to working with the SDKs. Parallel queries are designed to improve query latency and throughput over their serial counterpart.
+
+ * ***Tuning setMaxDegreeOfParallelism\:***
+
+    Parallel queries work by querying multiple partitions in parallel. However, data from an individual partition is fetched serially with respect to the query. Setting setMaxDegreeOfParallelism to the number of partitions gives you the best chance of achieving the most performant query, provided all other system conditions remain the same. If you don't know the number of partitions, you can set setMaxDegreeOfParallelism to a high number, and the system chooses the minimum of the number of partitions and the user-provided input as the maximum degree of parallelism (both settings are shown in the sketch below).
+
+    Parallel queries produce the best benefits when the data is evenly distributed across all partitions with respect to the query. If the collection is partitioned in such a way that all or most of the data returned by a query is concentrated in a few partitions (one partition in the worst case), those partitions bottleneck the performance of the query.
+
+ * ***Tuning setMaxBufferedItemCount\:***
+
+ Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. The pre-fetching helps in overall latency improvement of a query. setMaxBufferedItemCount limits the number of pre-fetched results. Setting setMaxBufferedItemCount to the expected number of results returned (or a higher number) enables the query to receive maximum benefit from pre-fetching.
+
+ Pre-fetching works the same way irrespective of the MaxDegreeOfParallelism, and there is a single buffer for the data from all partitions.
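+
+    The following sketch shows both settings applied through *FeedOptions* before issuing a cross-partition query. The `asyncDocumentClient` and `collectionLink` variables, and the specific values, are placeholders; tune the numbers for your own workload.
+
+    ```java
+    FeedOptions options = new FeedOptions();
+    options.setEnableCrossPartitionQuery(true);
+    // The service clamps this to the actual number of partitions.
+    options.setMaxDegreeOfParallelism(100);
+    // Buffer roughly as many results as you expect the query to return.
+    options.setMaxBufferedItemCount(1000);
+
+    Observable<FeedResponse<Document>> queryObs = asyncDocumentClient.queryDocuments(
+        collectionLink, "SELECT * FROM c", options);
+    ```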
+
+* **Implement backoff at getRetryAfterInMilliseconds intervals**
+
+    During performance testing, you should increase load until a small rate of requests gets throttled. If throttled, the client application should back off for the server-specified retry interval. Respecting the backoff ensures that you spend a minimal amount of time waiting between retries.
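+
+    One way to honor the server-specified interval with the Async Java SDK v2 is an RxJava `retryWhen` that reads `getRetryAfterInMilliseconds()` from the thrown *DocumentClientException*, as sketched below. The `asyncDocumentClient`, `collectionLink`, and `document` variables are placeholders, and the retry policy shown is intentionally minimal.
+
+    ```java
+    import java.util.concurrent.TimeUnit;
+    import com.microsoft.azure.cosmosdb.DocumentClientException;
+    import rx.Observable;
+
+    Observable<ResourceResponse<Document>> createDocObs = asyncDocumentClient.createDocument(
+        collectionLink, document, null, true);
+
+    createDocObs
+        .retryWhen(errors -> errors.flatMap(error -> {
+            if (error instanceof DocumentClientException
+                    && ((DocumentClientException) error).getStatusCode() == 429) {
+                long retryAfterMs = ((DocumentClientException) error).getRetryAfterInMilliseconds();
+                // Back off for the server-specified interval, then resubscribe (retry).
+                return Observable.timer(retryAfterMs, TimeUnit.MILLISECONDS);
+            }
+            // Any other error is not retried.
+            return Observable.<Long>error(error);
+        }))
+        .subscribe(resourceResponse -> {
+            // The write succeeded, possibly after one or more backoff intervals.
+        });
+    ```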
+
+* **Scale out your client-workload**
+
+ If you are testing at high throughput levels (>50,000 RU/s), the client application may become the bottleneck due to the machine capping out on CPU or network utilization. If you reach this point, you can continue to push the Azure Cosmos DB account further by scaling out your client applications across multiple servers.
+
+* **Use name based addressing**
+
+    Use name-based addressing, where links have the format `dbs/MyDatabaseId/colls/MyCollectionId/docs/MyDocumentId`, instead of SelfLinks (\_self), which have the format `dbs/<database_rid>/colls/<collection_rid>/docs/<document_rid>`. Name-based addressing avoids retrieving the ResourceIds of all the resources used to construct the link. Also, because these resources can be recreated (possibly with the same name), caching their ResourceIds may not help.
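+
+    For example, a name-based document link can be composed directly from the database, collection, and document IDs, as in the following sketch. The IDs and the `asyncDocumentClient` variable are placeholders.
+
+    ```java
+    // Name-based link composed from IDs; no ResourceId (_rid) lookup is needed.
+    String documentLink = String.format("dbs/%s/colls/%s/docs/%s",
+        "MyDatabaseId", "MyCollectionId", "MyDocumentId");
+
+    RequestOptions options = new RequestOptions();
+    // For a partitioned collection, also set the partition key on the options.
+    Observable<ResourceResponse<Document>> readObs =
+        asyncDocumentClient.readDocument(documentLink, options);
+    ```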
+
+* **Tune the page size for queries/read feeds for better performance**
+
+ When performing a bulk read of documents by using read feed functionality (for example, readDocuments) or when issuing a SQL query, the results are returned in a segmented fashion if the result set is too large. By default, results are returned in chunks of 100 items or 1 MB, whichever limit is hit first.
+
+    To reduce the number of network round trips required to retrieve all applicable results, you can increase the page size by using the [x-ms-max-item-count](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) request header, up to 1,000. In cases where you need to display only a few results, for example, if your user interface or application API returns only 10 results at a time, you can also decrease the page size to 10 to reduce the throughput consumed for reads and queries.
+
+ You may also set the page size using the setMaxItemCount method.
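+
+    For example, a larger page size can be requested through *FeedOptions*, as in the following sketch. The `asyncDocumentClient` and `collectionLink` variables are placeholders.
+
+    ```java
+    FeedOptions options = new FeedOptions();
+    // Return up to 1,000 documents per page to reduce round trips for large reads.
+    options.setMaxItemCount(1000);
+
+    Observable<FeedResponse<Document>> readFeedObs =
+        asyncDocumentClient.readDocuments(collectionLink, options);
+    ```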
+
+* **Use Appropriate Scheduler (Avoid stealing Event loop IO Netty threads)**
+
+    The Azure Cosmos DB Async Java SDK v2 uses [netty](https://netty.io/) for non-blocking IO. The SDK uses a fixed number of IO netty event loop threads (as many as your machine has CPU cores) for executing IO operations. The Observable returned by the API emits the result on one of the shared IO event loop netty threads, so it is important not to block the shared IO event loop netty threads. Doing CPU-intensive work or a blocking operation on an IO event loop netty thread may cause a deadlock or significantly reduce SDK throughput.
+
+    For example, the following code executes CPU-intensive work on the event loop IO netty thread:
+
+ **Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)**
+
+ ```java
+ Observable<ResourceResponse<Document>> createDocObs = asyncDocumentClient.createDocument(
+ collectionLink, document, null, true);
+
+ createDocObs.subscribe(
+ resourceResponse -> {
+ //this is executed on eventloop IO netty thread.
+ //the eventloop thread is shared and is meant to return back quickly.
+ //
+ // DON'T do this on eventloop IO netty thread.
+ veryCpuIntensiveWork();
+ });
+ ```
+
+    After the result is received, if you want to do CPU-intensive work on the result, avoid doing so on the event loop IO netty thread. Instead, provide your own Scheduler to supply the thread that runs your work.
+
+ **Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)**
+
+ ```java
+    import rx.schedulers.Schedulers;
+
+ Observable<ResourceResponse<Document>> createDocObs = asyncDocumentClient.createDocument(
+ collectionLink, document, null, true);
+
+    createDocObs.observeOn(Schedulers.computation())
+        .subscribe(
+ resourceResponse -> {
+ // this is executed on threads provided by Scheduler.computation()
+ // Schedulers.computation() should be used only when:
+ // 1. The work is cpu intensive
+ // 2. You are not doing blocking IO, thread sleep, etc. in this thread against other resources.
+ veryCpuIntensiveWork();
+ });
+ ```
+
+    Based on the type of your work, use the appropriate existing RxJava Scheduler. For more information, see the RxJava
+    [``Schedulers``](http://reactivex.io/RxJava/1.x/javadoc/rx/schedulers/Schedulers.html) documentation.
+
+    For more information, see the [GitHub page](https://github.com/Azure/azure-cosmosdb-java) for Azure Cosmos DB Async Java SDK v2.
+
+* **Disable netty's logging**
+
+    Netty library logging is chatty and needs to be turned off (suppressing logging in the configuration may not be enough) to avoid additional CPU costs. If you are not in debugging mode, disable netty's logging altogether. If you are using log4j, add the following line to your codebase to remove the additional CPU costs incurred by ``org.apache.log4j.Category.callAppenders()`` from netty:
+
+ ```java
+ org.apache.log4j.Logger.getLogger("io.netty").setLevel(org.apache.log4j.Level.OFF);
+ ```
+
+ * **OS Open files Resource Limit**
+
+    Some Linux systems (like Red Hat) have an upper limit on the number of open files, and therefore on the total number of connections. Run the following command to view the current limits:
+
+ ```bash
+ ulimit -a
+ ```
+
+ The number of open files (nofile) needs to be large enough to have enough room for your configured connection pool size and other open files by the OS. It can be modified to allow for a larger connection pool size.
+
+ Open the limits.conf file:
+
+ ```bash
+ vim /etc/security/limits.conf
+ ```
+
+ Add/modify the following lines:
+
+ ```
+ * - nofile 100000
+ ```
+
+## Indexing Policy
+
+* **Exclude unused paths from indexing for faster writes**
+
+    Azure Cosmos DB's indexing policy allows you to specify which document paths to include or exclude from indexing by leveraging Indexing Paths (setIncludedPaths and setExcludedPaths). The use of indexing paths can offer improved write performance and lower index storage for scenarios in which the query patterns are known beforehand, as indexing costs are directly correlated to the number of unique paths indexed. For example, the following code shows how to exclude an entire section of the documents (also known as a subtree) from indexing using the "*" wildcard.
+
+ ### <a id="asyncjava2-indexing"></a>Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)
+
+ ```Java
+    // The indexes, includedPath, includedPaths, excludedPath, excludedPaths,
+    // indexingPolicy, and collectionDefinition objects are assumed to be created earlier.
+    Index numberIndex = Index.Range(DataType.Number);
+    numberIndex.set("precision", -1);
+    indexes.add(numberIndex);
+    includedPath.setIndexes(indexes);
+    includedPaths.add(includedPath);
+    indexingPolicy.setIncludedPaths(includedPaths);
+    // Exclude an entire subtree from indexing by using the "*" wildcard.
+    excludedPath.setPath("/nonIndexedContent/*");
+    excludedPaths.add(excludedPath);
+    indexingPolicy.setExcludedPaths(excludedPaths);
+    collectionDefinition.setIndexingPolicy(indexingPolicy);
+ ```
+
+ For more information, see [Azure Cosmos DB indexing policies](../index-policy.md).
+
+## <a id="measure-rus"></a>Throughput
+
+* **Measure and tune for lower request units/second usage**
+
+    Azure Cosmos DB offers a rich set of database operations including relational and hierarchical queries with UDFs, stored procedures, and triggers, all operating on the documents within a database collection. The cost associated with each of these operations varies based on the CPU, IO, and memory required to complete the operation. Instead of thinking about and managing hardware resources, you can think of a request unit (RU) as a single measure for the resources required to perform various database operations and service an application request.
+
+ Throughput is provisioned based on the number of [request units](../request-units.md) set for each container. Request unit consumption is evaluated as a rate per second. Applications that exceed the provisioned request unit rate for their container are limited until the rate drops below the provisioned level for the container. If your application requires a higher level of throughput, you can increase your throughput by provisioning additional request units.
+
+ The complexity of a query impacts how many request units are consumed for an operation. The number of predicates, nature of the predicates, number of UDFs, and the size of the source data set all influence the cost of query operations.
+
+ To measure the overhead of any operation (create, update, or delete), inspect the [x-ms-request-charge](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) header to measure the number of request units consumed by these operations. You can also look at the equivalent RequestCharge property in ResourceResponse\<T> or FeedResponse\<T>.
+
+ ### <a id="asyncjava2-requestcharge"></a>Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)
+
+ ```Java
+    ResourceResponse<Document> response = asyncClient.createDocument(collectionLink, documentDefinition, null,
+        false).toBlocking().single();
+    double requestCharge = response.getRequestCharge();
+ ```
+
+ The request charge returned in this header is a fraction of your provisioned throughput. For example, if you have 2000 RU/s provisioned, and if the preceding query returns 1000 1KB-documents, the cost of the operation is 1000. As such, within one second, the server honors only two such requests before rate limiting subsequent requests. For more information, see [Request units](../request-units.md) and the [request unit calculator](https://www.documentdb.com/capacityplanner).
+
+* **Handle rate limiting/request rate too large**
+
+ When a client attempts to exceed the reserved throughput for an account, there is no performance degradation at the server and no use of throughput capacity beyond the reserved level. The server will preemptively end the request with RequestRateTooLarge (HTTP status code 429) and return the [x-ms-retry-after-ms](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) header indicating the amount of time, in milliseconds, that the user must wait before reattempting the request.
+
+ ```xml
+ HTTP Status 429,
+ Status Line: RequestRateTooLarge
+ x-ms-retry-after-ms :100
+ ```
+
+ The SDKs all implicitly catch this response, respect the server-specified retry-after header, and retry the request. Unless your account is being accessed concurrently by multiple clients, the next retry will succeed.
+
+ If you have more than one client cumulatively operating consistently above the request rate, the default retry count currently set to 9 internally by the client may not suffice; in this case, the client throws a DocumentClientException with status code 429 to the application. The default retry count can be changed by using setRetryOptions on the ConnectionPolicy instance. By default, the DocumentClientException with status code 429 is returned after a cumulative wait time of 30 seconds if the request continues to operate above the request rate. This occurs even when the current retry count is less than the max retry count, be it the default of 9 or a user-defined value.
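+
+    The following sketch shows one way to raise those limits on the *ConnectionPolicy* before the client is built. The values are illustrative only; pick numbers that match your workload.
+
+    ```java
+    ConnectionPolicy policy = new ConnectionPolicy();
+    RetryOptions retryOptions = new RetryOptions();
+    // Allow more than the default 9 retries and wait up to 60 seconds in total.
+    retryOptions.setMaxRetryAttemptsOnThrottledRequests(15);
+    retryOptions.setMaxRetryWaitTimeInSeconds(60);
+    policy.setRetryOptions(retryOptions);
+    // Pass 'policy' to the client builder via withConnectionPolicy(policy).
+    ```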
+
+    While the automated retry behavior helps to improve resiliency and usability for most applications, it might be at odds with your goals when doing performance benchmarks, especially when measuring latency. The client-observed latency will spike if the experiment hits the server throttle and causes the client SDK to silently retry. To avoid latency spikes during performance experiments, measure the charge returned by each operation and ensure that requests are operating below the reserved request rate. For more information, see [Request units](../request-units.md).
+
+* **Design for smaller documents for higher throughput**
+
+ The request charge (the request processing cost) of a given operation is directly correlated to the size of the document. Operations on large documents cost more than operations for small documents.
+
+## Next steps
+
+To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
cosmos-db Performance Tips Dotnet Sdk V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-dotnet-sdk-v3.md
+
+ Title: Azure Cosmos DB performance tips for .NET SDK v3
+description: Learn client configuration options to help improve Azure Cosmos DB .NET v3 SDK performance.
++++ Last updated : 03/31/2022+
+ms.devlang: csharp
+++
+# Performance tips for Azure Cosmos DB and .NET
+
+> [!div class="op_single_selector"]
+> * [.NET SDK v3](performance-tips-dotnet-sdk-v3.md)
+> * [.NET SDK v2](performance-tips.md)
+> * [Java SDK v4](performance-tips-java-sdk-v4.md)
+> * [Async Java SDK v2](performance-tips-async-java.md)
+> * [Sync Java SDK v2](performance-tips-java.md)
+
+Azure Cosmos DB is a fast, flexible distributed database that scales seamlessly with guaranteed latency and throughput levels. You don't have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call. To learn more, see [provision container throughput](how-to-provision-container-throughput.md) or [provision database throughput](how-to-provision-database-throughput.md).
+
+Because Azure Cosmos DB is accessed via network calls, you can make client-side optimizations to achieve peak performance when you use the [SQL .NET SDK](sdk-dotnet-v3.md).
+
+If you're trying to improve your database performance, consider the options presented in the following sections.
+
+## Hosting recommendations
+
+**Turn on server-side garbage collection**
+
+Reducing the frequency of garbage collection can help in some cases. In .NET, set [gcServer](/dotnet/core/run-time-config/garbage-collector#flavors-of-garbage-collection) to `true`.
+
+**Scale out your client workload**
+
+If you're testing at high throughput levels, or at rates that are greater than 50,000 Request Units per second (RU/s), the client application could become a workload bottleneck. This is because the machine might cap out on CPU or network utilization. If you reach this point, you can continue to push the Azure Cosmos DB account further by scaling out your client applications across multiple servers.
+
+> [!NOTE]
+> High CPU usage can cause increased latency and request timeout exceptions.
+
+## <a id="metadata-operations"></a> Metadata operations
+
+Do not verify that a database or container exists by calling `Create...IfNotExistsAsync` or `Read...Async` in the hot path or before doing an item operation. Do this validation only at application startup, and only if it's necessary because you expect the database or container to be deleted (otherwise it's not needed). These metadata operations generate extra end-to-end latency, have no SLA, and have their own separate [limitations](./troubleshoot-request-rate-too-large.md#rate-limiting-on-metadata-requests) that do not scale like data operations.
+
+## <a id="logging-and-tracing"></a> Logging and tracing
+
+Some environments have the [.NET DefaultTraceListener](/dotnet/api/system.diagnostics.defaulttracelistener) enabled. The DefaultTraceListener causes performance issues in production environments, resulting in high CPU and I/O bottlenecks. Check and make sure that the DefaultTraceListener is disabled for your application by removing it from the [TraceListeners](/dotnet/framework/debug-trace-profile/how-to-create-and-initialize-trace-listeners) in production environments.
+
+The latest SDK versions (greater than 3.23.0) automatically remove it when they detect it. With older versions, you can remove it as follows:
+
+# [.NET 6 / .NET Core](#tab/trace-net-core)
+
+```csharp
+if (!Debugger.IsAttached)
+{
+ Type defaultTrace = Type.GetType("Microsoft.Azure.Cosmos.Core.Trace.DefaultTrace,Microsoft.Azure.Cosmos.Direct");
+ TraceSource traceSource = (TraceSource)defaultTrace.GetProperty("TraceSource").GetValue(null);
+ traceSource.Listeners.Remove("Default");
+ // Add your own trace listeners
+}
+```
+
+# [.NET Framework](#tab/trace-net-fx)
+
+Edit your `app.config` or `web.config` files:
+
+```xml
+<configuration>
+ <system.diagnostics>
+ <sources>
+ <source name="DocDBTrace" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >
+ <listeners>
+ <remove name="Default" />
+ <!--Add your own trace listeners-->
+ <add name="myListener" ... />
+ </listeners>
+ </source>
+ </sources>
+ </system.diagnostics>
+</configuration>
+```
+++
+## Networking
+<a id="direct-connection"></a>
+
+**Connection policy: Use direct connection mode**
+
+The .NET V3 SDK's default connection mode is Direct with the TCP protocol. You configure the connection mode when you create the `CosmosClient` instance in `CosmosClientOptions`. To learn more about different connectivity options, see the [connectivity modes](sdk-connection-modes.md) article.
+
+```csharp
+string connectionString = "<your-account-connection-string>";
+CosmosClient client = new CosmosClient(connectionString,
+new CosmosClientOptions
+{
+ ConnectionMode = ConnectionMode.Gateway // ConnectionMode.Direct is the default
+});
+```
+
+**Ephemeral port exhaustion**
+
+If you see a high connection volume or high port usage on your instances, first verify that your client instances are singletons. In other words, the client instances should be unique for the lifetime of the application.
+
+When it's running on the TCP protocol, the client optimizes for latency by using long-lived connections. This is in contrast with the HTTPS protocol, which terminates the connections after two minutes of inactivity.
+
+In scenarios where you have sparse access, and if you notice a higher connection count when compared to Gateway mode access, you can:
+
+* Configure the [CosmosClientOptions.PortReuseMode](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode) property to `PrivatePortPool` (effective with framework versions 4.6.1 and later and .NET Core versions 2.0 and later). This property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints.
+* Configure the [CosmosClientOptions.IdleConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout) property as greater than or equal to 10 minutes. The recommended values are from 20 minutes to 24 hours.
+
+<a id="same-region"></a>
+
+**For performance, collocate clients in the same Azure region**
+
+When possible, place any applications that call Azure Cosmos DB in the same region as the Azure Cosmos DB database. Here's an approximate comparison: calls to Azure Cosmos DB within the same region finish within 1 millisecond (ms) to 2 ms, but the latency between the West and East coast of the US is more than 50 ms. This latency can vary from request to request, depending on the route taken by the request as it passes from the client to the Azure datacenter boundary.
+
+You can get the lowest possible latency by ensuring that the calling application is located within the same Azure region as the provisioned Azure Cosmos DB endpoint. For a list of available regions, see [Azure regions](https://azure.microsoft.com/regions/#services).
++
+ <a id="increase-threads"></a>
+
+**Increase the number of threads/tasks**
+
+Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of concurrency of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the .NET [Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB.
+
+**Enable accelerated networking**
+
+To reduce latency and CPU jitter, we recommend that you enable accelerated networking on your client virtual machines. For more information, see [Create a Windows virtual machine with accelerated networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) or [Create a Linux virtual machine with accelerated networking](../../virtual-network/create-vm-accelerated-networking-cli.md).
+
+## <a id="sdk-usage"></a> SDK usage
+
+**Install the most recent SDK**
+
+The Azure Cosmos DB SDKs are constantly being improved to provide the best performance. To determine the most recent SDK and review improvements, see [Azure Cosmos DB SDK](sdk-dotnet-v3.md).
+
+**Use stream APIs**
+
+[.NET SDK V3](https://github.com/Azure/azure-cosmos-dotnet-v3) contains stream APIs that can receive and return data without serializing.
+
+Middle-tier applications that don't consume responses directly from the SDK but relay them to other application tiers can benefit from the stream APIs. For examples of stream handling, see the [item management](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement) samples.
+
+**Use a singleton Azure Cosmos DB client for the lifetime of your application**
+
+Each `CosmosClient` instance is thread-safe and performs efficient connection management and address caching when it operates in Direct mode. To allow efficient connection management and better SDK client performance, we recommend that you use a single instance per `AppDomain` for the lifetime of the application.
+
+When you're working on Azure Functions, instances should also follow the existing [guidelines](../../azure-functions/manage-connections.md#static-clients) and maintain a single instance.
+
+**Avoid blocking calls**
+
+Apps that use the Azure Cosmos DB SDK should be designed to process many requests simultaneously. Asynchronous APIs allow a small pool of threads to handle thousands of concurrent requests by not waiting on blocking calls. Rather than waiting on a long-running synchronous task to complete, the thread can work on another request.
+
+A common performance problem in apps using the Azure Cosmos DB SDK is blocking calls that could be asynchronous. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times.
+
+**Do not**:
+
+* Block asynchronous execution by calling [Task.Wait](/dotnet/api/system.threading.tasks.task.wait) or [Task.Result](/dotnet/api/system.threading.tasks.task-1.result).
+* Use [Task.Run](/dotnet/api/system.threading.tasks.task.run) to make a synchronous API asynchronous.
+* Acquire locks in common code paths. Azure Cosmos DB .NET SDK is most performant when architected to run code in parallel.
+* Call [Task.Run](/dotnet/api/system.threading.tasks.task.run) and immediately await it. ASP.NET Core already runs app code on normal Thread Pool threads, so calling Task.Run only results in extra unnecessary Thread Pool scheduling. Even if the scheduled code would block a thread, Task.Run does not prevent that.
+* Use `ToList()` on `Container.GetItemLinqQueryable<T>()`, which uses blocking calls to synchronously drain the query. Use [ToFeedIterator()](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/e2029f2f4854c0e4decd399c35e69ef799db9f35/Microsoft.Azure.Cosmos/src/Resource/Container/Container.cs#L1143) to drain the query asynchronously instead.
+
+**Do**:
+
+* Call the Azure Cosmos DB .NET APIs asynchronously.
+* Make the entire call stack asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns.
+
+A profiler, such as [PerfView](https://github.com/Microsoft/perfview), can be used to find threads frequently added to the [Thread Pool](/windows/desktop/procthread/thread-pools). The `Microsoft-Windows-DotNETRuntime/ThreadPoolWorkerThread/Start` event indicates a thread added to the thread pool.
++
+**Disable content response on write operations**
+
+For workloads that have heavy create payloads, set the `EnableContentResponseOnWrite` request option to `false`. The service will no longer return the created or updated resource to the SDK. Normally, because the application has the object that's being created, it doesn't need the service to return it. The header values are still accessible, like a request charge. Disabling the content response can help improve performance, because the SDK no longer needs to allocate memory or serialize the body of the response. It also reduces the network bandwidth usage to further help performance.
+
+```csharp
+ItemRequestOptions requestOptions = new ItemRequestOptions() { EnableContentResponseOnWrite = false };
+ItemResponse<Book> itemResponse = await this.container.CreateItemAsync<Book>(book, new PartitionKey(book.pk), requestOptions);
+// itemResponse.Resource is null because EnableContentResponseOnWrite is false
+```
+
+**Enable Bulk to optimize for throughput instead of latency**
+
+Enable *Bulk* for scenarios where the workload requires a large amount of throughput, and latency is not as important. For more information about how to enable the Bulk feature, and to learn which scenarios it should be used for, see [Introduction to Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk).
+
+<a id="max-connection"></a>**Increase System.Net MaxConnections per host when you use Gateway mode**
+
+Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for [ServicePointManager.DefaultConnectionLimit](/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit) is 50. To change the value, you can set [`Documents.Client.ConnectionPolicy.MaxConnectionLimit`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.gatewaymodemaxconnectionlimit) to a higher value.
+
+**Increase the number of threads/tasks**
+
+See [Increase the number of threads/tasks](#increase-threads) in the Networking section of this article.
+
+## Query operations
+
+For query operations, see the [performance tips for queries](performance-tips-query-sdk.md?tabs=v3&pivots=programming-language-csharp).
+
+## <a id="indexing-policy"></a> Indexing policy
+
+**Exclude unused paths from indexing for faster writes**
+
+The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths (IndexingPolicy.IncludedPaths and IndexingPolicy.ExcludedPaths).
+
+Indexing only the paths you need can improve write performance, reduce RU charges on write operations, and reduce index storage for scenarios in which the query patterns are known beforehand. This is because indexing costs correlate directly to the number of unique paths indexed. For example, the following code shows how to exclude an entire section of the documents (a subtree) from indexing by using the "*" wildcard:
+
+```csharp
+var containerProperties = new ContainerProperties(id: "excludedPathCollection", partitionKeyPath: "/pk" );
+containerProperties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
+containerProperties.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/nonIndexedContent/*" });
+Container container = await this.cosmosDatabase.CreateContainerAsync(containerProperties);
+```
+
+For more information, see [Azure Cosmos DB indexing policies](../index-policy.md).
+
+## Throughput
+<a id="measure-rus"></a>
+
+**Measure and tune for lower RU/s usage**
+
+Azure Cosmos DB offers a rich set of database operations. These operations include relational and hierarchical queries with user-defined functions (UDFs), stored procedures, and triggers, all operating on the documents within a database collection.
+
+The costs associated with each of these operations vary depending on the CPU, IO, and memory that are required to complete the operation. Instead of thinking about and managing hardware resources, you can think of a Request Unit as a single measure for the resources that are required to perform various database operations and service an application request.
+
+Throughput is provisioned based on the number of [Request Units](../request-units.md) set for each container. Request Unit consumption is evaluated as a units-per-second rate. Applications that exceed the provisioned Request Unit rate for their container are limited until the rate drops below the provisioned level for the container. If your application requires a higher level of throughput, you can increase your throughput by provisioning additional Request Units.
+
+The complexity of a query affects how many Request Units are consumed for an operation. The number of predicates, the nature of the predicates, the number of UDFs, and the size of the source dataset all influence the cost of query operations.
+
+To measure the overhead of any operation (create, update, or delete), inspect the [x-ms-request-charge](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers) header (or the equivalent `RequestCharge` property in `ResourceResponse<T>` or `FeedResponse<T>` in the .NET SDK) to measure the number of Request Units consumed by the operations:
+
+```csharp
+// Measure the performance (Request Units) of writes
+ItemResponse<Book> response = await container.CreateItemAsync<Book>(myBook, new PartitionKey(myBook.PkValue));
+Console.WriteLine("Insert of item consumed {0} request units", response.RequestCharge);
+// Measure the performance (Request Units) of queries
+FeedIterator<Book> queryable = container.GetItemQueryIterator<Book>(queryString);
+while (queryable.HasMoreResults)
+{
+    FeedResponse<Book> queryResponse = await queryable.ReadNextAsync();
+    Console.WriteLine("Query batch consumed {0} request units", queryResponse.RequestCharge);
+}
+```
+
+The request charge that's returned in this header is a fraction of your provisioned throughput. For example, if you have 2,000 RU/s provisioned and the preceding query returns 1,000 1-KB documents, the cost of the operation is 1,000. So, within one second, the server honors only two such requests before it rate-limits later requests. For more information, see [Request Units](../request-units.md) and the [Request Unit calculator](https://www.documentdb.com/capacityplanner).
+<a id="429"></a>
+
+**Handle rate limiting/request rate too large**
+
+When a client attempts to exceed the reserved throughput for an account, there's no performance degradation at the server and no use of throughput capacity beyond the reserved level. The server preemptively ends the request with RequestRateTooLarge (HTTP status code 429). It returns an [x-ms-retry-after-ms](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers) header that indicates the amount of time, in milliseconds, that the user must wait before attempting the request again.
+
+```xml
+ HTTP Status 429,
+ Status Line: RequestRateTooLarge
+ x-ms-retry-after-ms :100
+```
+
+The SDKs all implicitly catch this response, respect the server-specified retry-after header, and retry the request. Unless your account is being accessed concurrently by multiple clients, the next retry will succeed.
+
+If you have more than one client cumulatively operating consistently above the request rate, the default retry count that's currently set to 9 internally by the client might not suffice. In this case, the client throws a CosmosException with status code 429 to the application.
+
+You can change the default retry count by setting the `RetryOptions` on the `CosmosClientOptions` instance. By default, the CosmosException with status code 429 is returned after a cumulative wait time of 30 seconds if the request continues to operate above the request rate. This error is returned even when the current retry count is less than the maximum retry count, whether the current value is the default of 9 or a user-defined value.
+
+The automated retry behavior helps improve resiliency and usability for most applications. But it might not be the best behavior when you're doing performance benchmarks, especially when you're measuring latency. The client-observed latency will spike if the experiment hits the server throttle and causes the client SDK to silently retry. To avoid latency spikes during performance experiments, measure the charge that's returned by each operation, and ensure that requests are operating below the reserved request rate.
+
+For more information, see [Request Units](../request-units.md).
+
+**For higher throughput, design for smaller documents**
+
+The request charge (that is, the request-processing cost) of a specified operation correlates directly to the size of the document. Operations on large documents cost more than operations on small documents.
+
+## Next steps
+For a sample application that's used to evaluate Azure Cosmos DB for high-performance scenarios on a few client machines, see [Performance and scale testing with Azure Cosmos DB](performance-testing.md).
+
+To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
cosmos-db Performance Tips Java Sdk V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-java-sdk-v4.md
+
+ Title: Performance tips for Azure Cosmos DB Java SDK v4
+description: Learn client configuration options to improve Azure Cosmos DB database performance for Java SDK v4
+++
+ms.devlang: java
+ Last updated : 04/22/2022+++++
+# Performance tips for Azure Cosmos DB Java SDK v4
+
+> [!div class="op_single_selector"]
+> * [Java SDK v4](performance-tips-java-sdk-v4.md)
+> * [Async Java SDK v2](performance-tips-async-java.md)
+> * [Sync Java SDK v2](performance-tips-java.md)
+> * [.NET SDK v3](performance-tips-dotnet-sdk-v3.md)
+> * [.NET SDK v2](performance-tips.md)
+>
+
+> [!IMPORTANT]
+> The performance tips in this article are for Azure Cosmos DB Java SDK v4 only. Please view the Azure Cosmos DB Java SDK v4 [Release notes](sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4.md) for more information. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
+
+Azure Cosmos DB is a fast and flexible distributed database that scales seamlessly with guaranteed latency and throughput. You do not have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call or SDK method call. However, because Azure Cosmos DB is accessed via network calls there are client-side optimizations you can make to achieve peak performance when using Azure Cosmos DB Java SDK v4.
+
+So if you're asking "How can I improve my database performance?" consider the following options:
+
+## Networking
+
+* **Connection mode: Use Direct mode**
+
+The Java SDK's default connection mode is Direct. You can configure the connection mode in the client builder using the *directMode()* or *gatewayMode()* methods, as shown below. To configure either mode with default settings, call either method without arguments. Otherwise, pass a configuration settings class instance as the argument (*DirectConnectionConfig* for *directMode()*, *GatewayConnectionConfig* for *gatewayMode()*). To learn more about different connectivity options, see the [connectivity modes](sdk-connection-modes.md) article.
+
+# [Async](#tab/api-async)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceClientConnectionModeAsync)]
+
+# [Sync](#tab/api-sync)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceClientConnectionModeSync)]
+
+
+
+The *directMode()* method has an additional override, for the following reason. Control plane operations such as database and container CRUD *always* utilize Gateway mode; when the user has configured Direct mode for data plane operations, control plane operations use default Gateway mode settings. This suits most users. However, users who want Direct mode for data plane operations as well as tunability of control plane Gateway mode parameters can use the following *directMode()* override:
+
+# [Async](#tab/api-async)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceClientDirectOverrideAsync)]
+
+# [Sync](#tab/api-sync)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceClientDirectOverrideSync)]
+
+
+
+<a name="collocate-clients"></a>
+* **Collocate clients in same Azure region for performance**
+<a id="same-region"></a>
+
+When possible, place any applications calling Azure Cosmos DB in the same region as the Azure Cosmos DB database. For an approximate comparison, calls to Azure Cosmos DB within the same region complete within 1-2 ms, but the latency between the West and East coast of the US is >50 ms. This latency can vary from request to request depending on the route taken by the request as it passes from the client to the Azure datacenter boundary. The lowest possible latency is achieved by ensuring the calling application is located within the same Azure region as the provisioned Azure Cosmos DB endpoint. For a list of available regions, see [Azure Regions](https://azure.microsoft.com/regions/#services).
++
+An app that interacts with a multi-region Azure Cosmos DB account needs to configure
+[preferred locations](tutorial-global-distribution.md#preferred-locations) to ensure that requests are going to a collocated region.
+
+* **Enable Accelerated Networking on your Azure VM for lower latency.**
+
+To maximize performance, it is recommended that you follow the instructions to enable Accelerated Networking on your [Windows](../../virtual-network/create-vm-accelerated-networking-powershell.md) or [Linux](../../virtual-network/create-vm-accelerated-networking-cli.md) Azure VM.
+
+Without accelerated networking, IO that transits between your Azure VM and other Azure resources may be unnecessarily routed through a host and virtual switch situated between the VM and its network card. Having the host and virtual switch inline in the datapath not only increases latency and jitter in the communication channel, it also steals CPU cycles from the VM. With accelerated networking, the VM interfaces directly with the NIC without intermediaries; any network policy details which were being handled by the host and virtual switch are now handled in hardware at the NIC; the host and virtual switch are bypassed. Generally you can expect lower latency and higher throughput, as well as more *consistent* latency and decreased CPU utilization when you enable accelerated networking.
+
+Limitations: accelerated networking must be supported on the VM OS, and can only be enabled when the VM is stopped and deallocated. The VM must be deployed through Azure Resource Manager.
+
+Please see the [Windows](../../virtual-network/create-vm-accelerated-networking-powershell.md) and [Linux](../../virtual-network/create-vm-accelerated-networking-cli.md) instructions for more details.
+
+## SDK usage
+* **Install the most recent SDK**
+
+The Azure Cosmos DB SDKs are constantly being improved to provide the best performance. See the [Azure Cosmos DB SDK](sdk-java-async-v2.md) pages to determine the most recent SDK and review improvements.
+
+* <a id="max-connection"></a> **Use a singleton Azure Cosmos DB client for the lifetime of your application**
+
+Each Azure Cosmos DB client instance is thread-safe and performs efficient connection management and address caching. To allow efficient connection management and better performance by the Azure Cosmos DB client, it is recommended to use a single instance of the Azure Cosmos DB client per AppDomain for the lifetime of the application.
+
+* <a id="override-default-consistency-javav4"></a> **Use the lowest consistency level required for your application**
+
+When you create a *CosmosClient*, the default consistency used if not explicitly set is *Session*. If *Session* consistency is not required by your application logic, set the *Consistency* to *Eventual*. Note: it is recommended to use at least *Session* consistency in applications employing the Azure Cosmos DB Change Feed processor.
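+
+For example, the consistency level can be set on the client builder as sketched below. The `HOST` and `MASTER_KEY` constants are placeholders for your account endpoint and key.
+
+```java
+CosmosAsyncClient client = new CosmosClientBuilder()
+    .endpoint(HOST)        // placeholder for your account endpoint
+    .key(MASTER_KEY)       // placeholder for your account key
+    // Session is the default; use Eventual only if your application logic allows it.
+    .consistencyLevel(ConsistencyLevel.EVENTUAL)
+    .buildAsyncClient();
+```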
+
+* **Use Async API to max out provisioned throughput**
+
+Azure Cosmos DB Java SDK v4 bundles two APIs, Sync and Async. Roughly speaking, the Async API implements SDK functionality, whereas the Sync API is a thin wrapper that makes blocking calls to the Async API. This stands in contrast to the older Azure Cosmos DB Async Java SDK v2, which was Async-only, and to the older Azure Cosmos DB Sync Java SDK v2, which was Sync-only and had a completely separate implementation.
+
+The choice of API is determined during client initialization; a *CosmosAsyncClient* supports Async API while a *CosmosClient* supports Sync API.
+
+The Async API implements non-blocking IO and is the optimal choice if your goal is to max out throughput when issuing requests to Azure Cosmos DB.
+
+Using Sync API can be the right choice if you want or need an API which blocks on the response to each request, or if synchronous operation is the dominant paradigm in your application. For example, you might want the Sync API when you are persisting data to Azure Cosmos DB in a microservices application, provided throughput is not critical.
+
+Just be aware that Sync API throughput degrades with increasing request response-time, whereas the Async API can saturate the full bandwidth capabilities of your hardware.
+
+Geographic collocation can give you higher and more consistent throughput when using Sync API (see [Collocate clients in same Azure region for performance](#collocate-clients)) but still is not expected to exceed Async API attainable throughput.
+
+Some users may also be unfamiliar with [Project Reactor](https://projectreactor.io/), the Reactive Streams framework used to implement Azure Cosmos DB Java SDK v4 Async API. If this is a concern, we recommend you read our introductory [Reactor Pattern Guide](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-pattern-guide.md) and then take a look at this [Introduction to Reactive Programming](https://tech.io/playgrounds/929/reactive-programming-with-reactor-3/Intro) in order to familiarize yourself. If you have already used Azure Cosmos DB with an Async interface, and the SDK you used was Azure Cosmos DB Async Java SDK v2, then you may be familiar with [ReactiveX](http://reactivex.io/)/[RxJava](https://github.com/ReactiveX/RxJava) but be unsure what has changed in Project Reactor. In that case, please take a look at our [Reactor vs. RxJava Guide](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) to become familiarized.
+
+The following code snippets show how to initialize your Azure Cosmos DB client for Async API or Sync API operation, respectively:
+
+# [Async](#tab/api-async)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceClientAsync)]
+
+# [Sync](#tab/api-sync)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceClientSync)]
+
+
+
+* **Tuning ConnectionPolicy**
+
+By default, Direct mode Azure Cosmos DB requests are made over TCP when using Azure Cosmos DB Java SDK v4. Internally Direct mode uses a special architecture to dynamically manage network resources and get the best performance.
+
+In Azure Cosmos DB Java SDK v4, Direct mode is the best choice to improve database performance with most workloads.
+
+* ***Overview of Direct mode***
+<a id="direct-connection"></a>
++
+The client-side architecture employed in Direct mode enables predictable network utilization and multiplexed access to Azure Cosmos DB replicas. The diagram above shows how Direct mode routes client requests to replicas in the Azure Cosmos DB backend. The Direct mode architecture allocates up to 130 **Channels** on the client side per DB replica. A Channel is a TCP connection preceded by a request buffer, which is 30 requests deep. The Channels belonging to a replica are dynamically allocated as needed by the replica's **Service Endpoint**. When the user issues a request in Direct mode, the **TransportClient** routes the request to the proper service endpoint based on the partition key. The **Request Queue** buffers requests before the Service Endpoint.
+
+* ***Configuration options for Direct mode***
+
+If non-default Direct mode behavior is desired, create a *DirectConnectionConfig* instance, customize its properties, and then pass the customized instance to the *directMode()* method in the Azure Cosmos DB client builder.
+
+These configuration settings control the behavior of the underlying Direct mode architecture discussed above.
+
+As a first step, use the recommended configuration settings below. These *DirectConnectionConfig* options are advanced settings that can affect SDK performance in unexpected ways; avoid modifying them unless you fully understand the tradeoffs and the change is absolutely necessary. Please contact the [Azure Cosmos DB team](mailto:CosmosDBPerformanceSupport@service.microsoft.com) if you run into issues on this particular topic.
+
+| Configuration option | Default |
+| :--- | :---: |
+| idleConnectionTimeout | "PT0" |
+| maxConnectionsPerEndpoint | "130" |
+| connectTimeout | "PT5S" |
+| idleEndpointTimeout | "PT1H" |
+| maxRequestsPerConnection | "30" |
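+
+As an illustration only, the following sketch shows how a customized *DirectConnectionConfig* might be passed to the client builder. It assumes the `DirectConnectionConfig.getDefaultConfig()` factory and the fluent setters that correspond to the options above; the endpoint and key values are placeholders.
+
+```java
+import com.azure.cosmos.CosmosAsyncClient;
+import com.azure.cosmos.CosmosClientBuilder;
+import com.azure.cosmos.DirectConnectionConfig;
+import java.time.Duration;
+
+// Start from the defaults and override only what you must (the values shown restate the documented defaults).
+DirectConnectionConfig directConnectionConfig = DirectConnectionConfig.getDefaultConfig();
+directConnectionConfig.setConnectTimeout(Duration.ofSeconds(5));    // connectTimeout
+directConnectionConfig.setIdleEndpointTimeout(Duration.ofHours(1)); // idleEndpointTimeout
+
+CosmosAsyncClient client = new CosmosClientBuilder()
+    .endpoint("<your-account-endpoint>")   // placeholder
+    .key("<your-account-key>")             // placeholder
+    .directMode(directConnectionConfig)
+    .buildAsyncClient();
+```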
+
+* **Scale out your client-workload**
+
+If you are testing at high throughput levels, the client application may become the bottleneck due to the machine capping out on CPU or network utilization. If you reach this point, you can continue to push the Azure Cosmos DB account further by scaling out your client applications across multiple servers.
+
+A good rule of thumb is to keep CPU utilization below 50% on any given server, to keep latency low.
+
+<a id="tune-page-size"></a>
+
+* **Use Appropriate Scheduler (Avoid stealing Event loop IO Netty threads)**
+
+The asynchronous functionality of Azure Cosmos DB Java SDK is based on [netty](https://netty.io/) non-blocking IO. The SDK uses a fixed number of IO netty event loop threads (as many as your machine has CPU cores) for executing IO operations. The Flux returned by the API emits the result on one of the shared IO event loop netty threads, so it is important not to block the shared IO event loop netty threads. Doing CPU-intensive work or a blocking operation on an IO event loop netty thread may cause deadlock or significantly reduce SDK throughput.
+
+For example, the following code executes CPU-intensive work on the event loop IO netty thread:
+<a id="java4-noscheduler"></a>
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceNeedsSchedulerAsync)]
+
+After the result is received, if you want to do CPU-intensive work on it, you should avoid doing so on the event loop IO netty thread. Instead, provide your own Scheduler to supply a thread for running your work, as shown below (requires `import reactor.core.scheduler.Schedulers`).
+
+<a id="java4-scheduler"></a>
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceAddSchedulerAsync)]
+
+Based on the type of your work, use the appropriate existing Reactor Scheduler. For more information, see [``Schedulers``](https://projectreactor.io/docs/core/release/api/reactor/core/scheduler/Schedulers.html).
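+
+As a generic illustration (not tied to the sample above), a Reactor `publishOn` call can move downstream work off the netty IO thread; the container, item type, and `processItem` helper below are hypothetical:
+
+```java
+import com.azure.cosmos.models.PartitionKey;
+import reactor.core.scheduler.Schedulers;
+
+// Read an item, then hop to a Reactor worker thread before doing CPU-intensive work on the result.
+container.readItem("item-id", new PartitionKey("pk-value"), MyItem.class)
+    .publishOn(Schedulers.boundedElastic()) // downstream operators run off the IO event loop thread
+    .map(response -> processItem(response.getItem()))
+    .subscribe();
+```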
+
+For more information on Azure Cosmos DB Java SDK v4, please look at the [Azure Cosmos DB directory of the Azure SDK for Java monorepo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-cosmos).
+
+* **Optimize logging settings in your application**
+
+For a variety of reasons, you may want or need to add logging in a thread which is generating high request throughput. If your goal is to fully saturate a container's provisioned throughput with requests generated by this thread, logging optimizations can greatly improve performance.
+
+* ***Configure an async logger***
+
+The latency of a synchronous logger necessarily factors into the overall latency calculation of your request-generating thread. An async logger, such as the asynchronous loggers in [log4j2](https://logging.apache.org/log4j/log4j-2.3/manual/async.html), is recommended to decouple logging overhead from your high-performance application threads.
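+
+As one possible approach (an assumption about your logging setup, not a requirement), log4j2 loggers can be made asynchronous by setting the context selector before any logger is created, typically as a JVM flag or very early in `main()`:
+
+```java
+public static void main(String[] args) {
+    // Equivalent to passing -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
+    // on the JVM command line; must run before the first LogManager.getLogger() call and
+    // requires the LMAX Disruptor dependency on the classpath.
+    System.setProperty("Log4jContextSelector",
+        "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");
+
+    // ... initialize the Azure Cosmos DB client and the rest of the application
+}
+```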
+
+* ***Disable netty's logging***
+
+Netty library logging is chatty and needs to be turned off (suppressing it in the configuration may not be enough) to avoid additional CPU costs. If you are not in debugging mode, disable netty's logging altogether. If you are using log4j, add the following line to your codebase to remove the additional CPU costs incurred by ``org.apache.log4j.Category.callAppenders()`` from netty:
+
+```java
+org.apache.log4j.Logger.getLogger("io.netty").setLevel(org.apache.log4j.Level.OFF);
+```
+
+* **OS Open files Resource Limit**
+
+Some Linux systems (like Red Hat) have an upper limit on the number of open files, and therefore on the total number of connections. Run the following command to view the current limits:
+
+```bash
+ulimit -a
+```
+
+The number of open files (nofile) needs to be large enough to have room for your configured connection pool size and other files opened by the OS. It can be modified to allow for a larger connection pool size.
+
+Open the limits.conf file:
+
+```bash
+vim /etc/security/limits.conf
+```
+
+Add/modify the following lines:
+
+```
+* - nofile 100000
+```
+
+* **Specify partition key in point writes**
+
+To improve the performance of point writes, specify the item's partition key in the point write API call, as shown below:
+
+# [Async](#tab/api-async)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceNoPKAsync)]
+
+# [Sync](#tab/api-sync)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceNoPKSync)]
+
+
+
+rather than providing only the item instance, as shown below:
+
+# [Async](#tab/api-async)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceAddPKAsync)]
+
+# [Sync](#tab/api-sync)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceAddPKSync)]
+
+
+
+The latter is supported but will add latency to your application; the SDK must parse the item and extract the partition key.
+
+## Query operations
+
+For query operations see the [performance tips for queries](performance-tips-query-sdk.md?pivots=programming-language-java).
+
+## <a id="java4-indexing"></a><a id="indexing-policy"></a> Indexing policy
+
+* **Exclude unused paths from indexing for faster writes**
+
+Azure Cosmos DB's indexing policy allows you to specify which document paths to include or exclude from indexing by leveraging Indexing Paths (setIncludedPaths and setExcludedPaths). The use of indexing paths can offer improved write performance and lower index storage for scenarios in which the query patterns are known beforehand, as indexing costs are directly correlated to the number of unique paths indexed. For example, the following code shows how to include and exclude entire sections of the documents (also known as a subtree) from indexing using the "*" wildcard.
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateIndexingAsync)]
+
+For more information, see [Azure Cosmos DB indexing policies](../index-policy.md).
+
+## Throughput
+<a id="measure-rus"></a>
+
+* **Measure and tune for lower request units/second usage**
+
+Azure Cosmos DB offers a rich set of database operations including relational and hierarchical queries with UDFs, stored procedures, and triggers – all operating on the documents within a database collection. The cost associated with each of these operations varies based on the CPU, IO, and memory required to complete the operation. Instead of thinking about and managing hardware resources, you can think of a request unit (RU) as a single measure for the resources required to perform various database operations and service an application request.
+
+Throughput is provisioned based on the number of [request units](../request-units.md) set for each container. Request unit consumption is evaluated as a rate per second. Applications that exceed the provisioned request unit rate for their container are limited until the rate drops below the provisioned level for the container. If your application requires a higher level of throughput, you can increase your throughput by provisioning additional request units.
+
+The complexity of a query impacts how many request units are consumed for an operation. The number of predicates, nature of the predicates, number of UDFs, and the size of the source data set all influence the cost of query operations.
+
+To measure the overhead of any operation (create, update, or delete), inspect the [x-ms-request-charge](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) header to measure the number of request units consumed by these operations. You can also look at the equivalent RequestCharge property in ResourceResponse\<T> or FeedResponse\<T>.
+
+# [Async](#tab/api-async)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceRequestChargeAsync)]
+
+# [Sync](#tab/api-sync)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceRequestChargeSync)]
+
+
+
+The request charge returned in this header is a fraction of your provisioned throughput. For example, if you have 2000 RU/s provisioned, and if the preceding query returns 1000 1KB-documents, the cost of the operation is 1000. As such, within one second, the server honors only two such requests before rate limiting subsequent requests. For more information, see [Request units](../request-units.md) and the [request unit calculator](https://www.documentdb.com/capacityplanner).
+
+<a id="429"></a>
+* **Handle rate limiting/request rate too large**
+
+When a client attempts to exceed the reserved throughput for an account, there is no performance degradation at the server and no use of throughput capacity beyond the reserved level. The server will preemptively end the request with RequestRateTooLarge (HTTP status code 429) and return the [x-ms-retry-after-ms](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) header indicating the amount of time, in milliseconds, that the user must wait before reattempting the request.
+
+```xml
+HTTP Status 429,
+Status Line: RequestRateTooLarge
+x-ms-retry-after-ms :100
+```
+
+The SDKs all implicitly catch this response, respect the server-specified retry-after header, and retry the request. Unless your account is being accessed concurrently by multiple clients, the next retry will succeed.
+
+If you have more than one client cumulatively operating consistently above the request rate, the default retry count (currently set to 9 internally by the client) may not suffice; in this case, the client throws a *CosmosClientException* with status code 429 to the application. The default retry count can be changed by using setRetryOptions on the ConnectionPolicy instance. By default, the *CosmosClientException* with status code 429 is returned after a cumulative wait time of 30 seconds if the request continues to operate above the request rate. This occurs even when the current retry count is less than the max retry count, be it the default of 9 or a user-defined value.
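+
+In Java SDK v4, the throttling retry behavior can typically be tuned through the client builder. The following is a sketch only, assuming the `ThrottlingRetryOptions` type and the `throttlingRetryOptions()` builder method; the values shown simply restate the defaults described above, and the endpoint and key are placeholders.
+
+```java
+import com.azure.cosmos.CosmosAsyncClient;
+import com.azure.cosmos.CosmosClientBuilder;
+import com.azure.cosmos.ThrottlingRetryOptions;
+import java.time.Duration;
+
+// Raise the retry count and/or cumulative wait time if multiple clients routinely exceed the request rate.
+ThrottlingRetryOptions retryOptions = new ThrottlingRetryOptions()
+    .setMaxRetryAttemptsOnThrottledRequests(9)    // default retry count
+    .setMaxRetryWaitTime(Duration.ofSeconds(30)); // default cumulative wait before surfacing 429
+
+CosmosAsyncClient client = new CosmosClientBuilder()
+    .endpoint("<your-account-endpoint>")  // placeholder
+    .key("<your-account-key>")            // placeholder
+    .throttlingRetryOptions(retryOptions)
+    .buildAsyncClient();
+```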
+
+While the automated retry behavior helps to improve resiliency and usability for most applications, it might be at odds with your goals when doing performance benchmarks, especially when measuring latency. The client-observed latency will spike if the experiment hits the server throttle and causes the client SDK to silently retry. To avoid latency spikes during performance experiments, measure the charge returned by each operation and ensure that requests are operating below the reserved request rate. For more information, see [Request units](../request-units.md).
+
+* **Design for smaller documents for higher throughput**
+
+The request charge (the request processing cost) of a given operation is directly correlated to the size of the document. Operations on large documents cost more than operations on small documents. Ideally, architect your application and workflows so that your item size is ~1 KB, or a similar order of magnitude. For latency-sensitive applications, large items should be avoided; multi-MB documents will slow down your application.
+
+## Next steps
+
+To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Performance Tips Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-java.md
+
+ Title: Performance tips for Azure Cosmos DB Sync Java SDK v2
+description: Learn client configuration options to improve Azure Cosmos DB database performance for Sync Java SDK v2
+++
+ms.devlang: java
+ Last updated : 05/11/2020+++++
+# Performance tips for Azure Cosmos DB Sync Java SDK v2
+
+> [!div class="op_single_selector"]
+> * [Java SDK v4](performance-tips-java-sdk-v4.md)
+> * [Async Java SDK v2](performance-tips-async-java.md)
+> * [Sync Java SDK v2](performance-tips-java.md)
+> * [.NET SDK v3](performance-tips-dotnet-sdk-v3.md)
+> * [.NET SDK v2](performance-tips.md)
+>
+
+> [!IMPORTANT]
+> This is *not* the latest Java SDK for Azure Cosmos DB! You should upgrade your project to [Azure Cosmos DB Java SDK v4](sdk-java-v4.md) and then read the Azure Cosmos DB Java SDK v4 [performance tips guide](performance-tips-java-sdk-v4.md). Follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide to upgrade.
+>
+> These performance tips are for Azure Cosmos DB Sync Java SDK v2 only. Please view the Azure Cosmos DB Sync Java SDK v2 [Release notes](sdk-java-v2.md) and [Maven repository](https://mvnrepository.com/artifact/com.microsoft.azure/azure-documentdb) for more information.
+>
+
+> [!IMPORTANT]
+> On February 29, 2024 the Azure Cosmos DB Sync Java SDK v2.x
+> will be retired; the SDK and all applications using the SDK
+> **will continue to function**; Azure Cosmos DB will simply cease
+> to provide further maintenance and support for this SDK.
+> We recommend following the instructions above to migrate to
+> Azure Cosmos DB Java SDK v4.
+>
+
+Azure Cosmos DB is a fast and flexible distributed database that scales seamlessly with guaranteed latency and throughput. You do not have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call. To learn more, see [how to provision container throughput](how-to-provision-container-throughput.md) or [how to provision database throughput](how-to-provision-database-throughput.md). However, because Azure Cosmos DB is accessed via network calls, there are client-side optimizations you can make to achieve peak performance when using [Azure Cosmos DB Sync Java SDK v2](./sdk-java-v2.md).
+
+So if you're asking "How can I improve my database performance?" consider the following options:
+
+## Networking
+<a id="direct-connection"></a>
+
+1. **Connection mode: Use DirectHttps**
+
+   How a client connects to Azure Cosmos DB has important implications on performance, especially in terms of observed client-side latency. There is one key configuration setting available for configuring the client [ConnectionPolicy](/java/api/com.microsoft.azure.documentdb.connectionpolicy) – the [ConnectionMode](/java/api/com.microsoft.azure.documentdb.connectionmode). The two available ConnectionModes are:
+
+ 1. [Gateway (default)](/java/api/com.microsoft.azure.documentdb.connectionmode)
+ 2. [DirectHttps](/java/api/com.microsoft.azure.documentdb.connectionmode)
+
+ Gateway mode is supported on all SDK platforms and is the configured default. If your application runs within a corporate network with strict firewall restrictions, Gateway is the best choice since it uses the standard HTTPS port and a single endpoint. The performance tradeoff, however, is that Gateway mode involves an additional network hop every time data is read or written to Azure Cosmos DB. Because of this, DirectHttps mode offers better performance due to fewer network hops.
+
+ The Azure Cosmos DB Sync Java SDK v2 uses HTTPS as a transport protocol. HTTPS uses TLS for initial authentication and encrypting traffic. When using the Azure Cosmos DB Sync Java SDK v2, only HTTPS port 443 needs to be open.
+
+ The ConnectionMode is configured during the construction of the DocumentClient instance with the ConnectionPolicy parameter.
+
+ ### <a id="syncjava2-connectionpolicy"></a>Sync Java SDK V2 (Maven com.microsoft.azure::azure-documentdb)
+
+ ```Java
+ public ConnectionPolicy getConnectionPolicy() {
+ ConnectionPolicy policy = new ConnectionPolicy();
+ policy.setConnectionMode(ConnectionMode.DirectHttps);
+ policy.setMaxPoolSize(1000);
+ return policy;
+ }
+
+ ConnectionPolicy connectionPolicy = new ConnectionPolicy();
+ DocumentClient client = new DocumentClient(HOST, MASTER_KEY, connectionPolicy, null);
+ ```
+
+ :::image type="content" source="./media/performance-tips-java/connection-policy.png" alt-text="Diagram shows the Azure Cosmos DB connection policy." border="false":::
+
+ <a id="same-region"></a>
+2. **Collocate clients in same Azure region for performance**
+
+   When possible, place any applications calling Azure Cosmos DB in the same region as the Azure Cosmos DB database. For an approximate comparison, calls to Azure Cosmos DB within the same region complete within 1-2 ms, but the latency between the West and East coast of the US is >50 ms. This latency can vary from request to request depending on the route taken by the request as it passes from the client to the Azure datacenter boundary. The lowest possible latency is achieved by ensuring the calling application is located within the same Azure region as the provisioned Azure Cosmos DB endpoint. For a list of available regions, see [Azure Regions](https://azure.microsoft.com/regions/#services).
+
+ :::image type="content" source="./media/performance-tips/same-region.png" alt-text="Diagram shows requests and responses in two regions, where computers connect to an Azure Cosmos DB DB Account through mid-tier services." border="false":::
+
+## SDK Usage
+1. **Install the most recent SDK**
+
+ The Azure Cosmos DB SDKs are constantly being improved to provide the best performance. See the [Azure Cosmos DB SDK](./sdk-java-v2.md) pages to determine the most recent SDK and review improvements.
+2. **Use a singleton Azure Cosmos DB client for the lifetime of your application**
+
+   Each [DocumentClient](/java/api/com.microsoft.azure.documentdb.documentclient) instance is thread-safe and performs efficient connection management and address caching when operating in Direct Mode. To allow efficient connection management and better performance by DocumentClient, it is recommended to use a single instance of DocumentClient for the lifetime of the application, as in the sketch below.
+
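+   A minimal sketch of such a singleton holder; the holder class and the endpoint/key placeholder values are hypothetical:
+
+    ```Java
+    public final class CosmosClientHolder {
+        // One DocumentClient instance, shared for the lifetime of the application.
+        private static final DocumentClient CLIENT = new DocumentClient(
+            "<your-account-endpoint>", "<your-account-key>",
+            new ConnectionPolicy(), ConsistencyLevel.Session);
+
+        private CosmosClientHolder() { }
+
+        public static DocumentClient get() {
+            return CLIENT;
+        }
+    }
+    ```
+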
+ <a id="max-connection"></a>
+3. **Increase MaxPoolSize per host when using Gateway mode**
+
+ Azure Cosmos DB requests are made over HTTPS/REST when using Gateway mode, and are subjected to the default connection limit per hostname or IP address. You may need to set the MaxPoolSize to a higher value (200-1000) so that the client library can utilize multiple simultaneous connections to Azure Cosmos DB. In the Azure Cosmos DB Sync Java SDK v2, the default value for [ConnectionPolicy.getMaxPoolSize](/java/api/com.microsoft.azure.documentdb.connectionpolicy.getmaxpoolsize) is 100. Use [setMaxPoolSize](/java/api/com.microsoft.azure.documentdb.connectionpolicy.setmaxpoolsize) to change the value.
+
+4. **Tuning parallel queries for partitioned collections**
+
+ Azure Cosmos DB Sync Java SDK version 1.9.0 and above support parallel queries, which enable you to query a partitioned collection in parallel. For more information, see [code samples](https://github.com/Azure/azure-documentdb-java/tree/master/documentdb-examples/src/test/java/com/microsoft/azure/documentdb/examples) related to working with the SDKs. Parallel queries are designed to improve query latency and throughput over their serial counterpart.
+
+ (a) ***Tuning setMaxDegreeOfParallelism\:***
+   Parallel queries work by querying multiple partitions in parallel. However, data from an individual partitioned collection is fetched serially with respect to the query. So, set [setMaxDegreeOfParallelism](/java/api/com.microsoft.azure.documentdb.feedoptions.setmaxdegreeofparallelism) to the number of partitions, which has the maximum chance of achieving the most performant query, provided all other system conditions remain the same. If you don't know the number of partitions, you can use setMaxDegreeOfParallelism to set a high number, and the system chooses the minimum (number of partitions, user provided input) as the maximum degree of parallelism.
+
+   It is important to note that parallel queries produce the best benefits if the data is evenly distributed across all partitions with respect to the query. If the partitioned collection is partitioned in such a way that all or a majority of the data returned by a query is concentrated in a few partitions (one partition in the worst case), then the performance of the query would be bottlenecked by those partitions.
+
+ (b) ***Tuning setMaxBufferedItemCount\:***
+   Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. The pre-fetching helps in the overall latency improvement of a query. setMaxBufferedItemCount limits the number of pre-fetched results. Setting [setMaxBufferedItemCount](/java/api/com.microsoft.azure.documentdb.feedoptions.setmaxbuffereditemcount) to the expected number of results returned (or a higher number) enables the query to receive the maximum benefit from pre-fetching.
+
+ Pre-fetching works the same way irrespective of the MaxDegreeOfParallelism, and there is a single buffer for the data from all partitions.
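+
+   A minimal sketch combining both options, assuming an existing `client` and `collectionLink`; the query and values shown are placeholders only:
+
+    ```Java
+    FeedOptions options = new FeedOptions();
+    options.setEnableCrossPartitionQuery(true);
+    options.setMaxDegreeOfParallelism(-1);  // let the SDK choose based on the number of partitions
+    options.setMaxBufferedItemCount(100);   // pre-fetch up to ~100 results
+
+    for (Document doc : client.queryDocuments(collectionLink, "SELECT * FROM c", options)
+            .getQueryIterable()) {
+        // process each returned document
+    }
+    ```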
+
+5. **Implement backoff at getRetryAfterInMilliseconds intervals**
+
+   During performance testing, you should increase load until a small rate of requests get throttled. If throttled, the client application should back off for the server-specified retry interval. Respecting the backoff ensures that you spend a minimal amount of time waiting between retries. Retry policy support is included in version 1.8.0 and above of the [Azure Cosmos DB Sync Java SDK](./sdk-java-v2.md). For more information, see [getRetryAfterInMilliseconds](/java/api/com.microsoft.azure.documentdb.documentclientexception.getretryafterinmilliseconds) and the sketch below.
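+
+   A sketch of manual backoff on top of the built-in retry policy, assuming an existing `DocumentClient`; real code would loop with a retry cap rather than retry once:
+
+    ```Java
+    void createWithBackoff(DocumentClient client, String collectionLink, Document doc)
+            throws DocumentClientException, InterruptedException {
+        try {
+            client.createDocument(collectionLink, doc, null, false);
+        } catch (DocumentClientException e) {
+            if (e.getStatusCode() == 429) {
+                // Wait for the server-specified interval before retrying.
+                Thread.sleep(e.getRetryAfterInMilliseconds());
+                client.createDocument(collectionLink, doc, null, false);
+            } else {
+                throw e;
+            }
+        }
+    }
+    ```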
+
+6. **Scale out your client-workload**
+
+ If you are testing at high throughput levels (>50,000 RU/s), the client application may become the bottleneck due to the machine capping out on CPU or network utilization. If you reach this point, you can continue to push the Azure Cosmos DB account further by scaling out your client applications across multiple servers.
+
+7. **Use name based addressing**
+
+   Use name-based addressing, where links have the format `dbs/MyDatabaseId/colls/MyCollectionId/docs/MyDocumentId`, instead of SelfLinks (\_self), which have the format `dbs/<database_rid>/colls/<collection_rid>/docs/<document_rid>`, to avoid retrieving the ResourceIds of all the resources used to construct the link. Also, because these resources get recreated (possibly with the same name), caching their ResourceIds may not help.
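+
+   For example, a read by name-based link looks like the following sketch; the database, collection, and document IDs are placeholders:
+
+    ```Java
+    // No ResourceId lookup is needed to build a name-based link.
+    ResourceResponse<Document> response =
+        client.readDocument("dbs/MyDatabaseId/colls/MyCollectionId/docs/MyDocumentId", null);
+    ```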
+
+ <a id="tune-page-size"></a>
+8. **Tune the page size for queries/read feeds for better performance**
+
+ When performing a bulk read of documents by using read feed functionality (for example, [readDocuments](/java/api/com.microsoft.azure.documentdb.documentclient.readdocuments)) or when issuing a SQL query, the results are returned in a segmented fashion if the result set is too large. By default, results are returned in chunks of 100 items or 1 MB, whichever limit is hit first.
+
+   To reduce the number of network round trips required to retrieve all applicable results, you can increase the page size using the [x-ms-max-item-count](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) request header to up to 1000. In cases where you need to display only a few results, for example, if your user interface or application API returns only 10 results at a time, you can also decrease the page size to 10 to reduce the throughput consumed for reads and queries.
+
+ You may also set the page size using the [setPageSize method](/java/api/com.microsoft.azure.documentdb.feedoptionsbase.setpagesize).
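+
+   For example, a larger page size can be requested as in the following sketch, assuming an existing `client` and `collectionLink`:
+
+    ```Java
+    FeedOptions options = new FeedOptions();
+    options.setPageSize(1000);  // up to 1000 items per page instead of the default 100
+
+    FeedResponse<Document> page = client.queryDocuments(collectionLink, "SELECT * FROM c", options);
+    ```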
+
+## Indexing Policy
+
+1. **Exclude unused paths from indexing for faster writes**
+
+   Azure Cosmos DB's indexing policy allows you to specify which document paths to include or exclude from indexing by leveraging Indexing Paths ([setIncludedPaths](/java/api/com.microsoft.azure.documentdb.indexingpolicy.setincludedpaths) and [setExcludedPaths](/java/api/com.microsoft.azure.documentdb.indexingpolicy.setexcludedpaths)). The use of indexing paths can offer improved write performance and lower index storage for scenarios in which the query patterns are known beforehand, as indexing costs are directly correlated to the number of unique paths indexed. For example, the following code shows how to exclude an entire section (subtree) of the documents from indexing using the "*" wildcard.
++
+ ### <a id="syncjava2-indexing"></a>Sync Java SDK V2 (Maven com.microsoft.azure::azure-documentdb)
+
+ ```Java
+    // 'collectionDefinition' is the DocumentCollection being created; declarations added for completeness.
+    IndexingPolicy indexingPolicy = new IndexingPolicy();
+    Collection<IncludedPath> includedPaths = new ArrayList<IncludedPath>();
+    IncludedPath includedPath = new IncludedPath();
+    includedPath.setPath("/*");
+    Collection<Index> indexes = new ArrayList<Index>();
+    Index numberIndex = Index.Range(DataType.Number);
+    numberIndex.set("precision", -1);
+    indexes.add(numberIndex);
+    includedPath.setIndexes(indexes);
+    includedPaths.add(includedPath);
+    indexingPolicy.setIncludedPaths(includedPaths);
+    collectionDefinition.setIndexingPolicy(indexingPolicy);
+ ```
+
+ For more information, see [Azure Cosmos DB indexing policies](../index-policy.md).
+
+## Throughput
+<a id="measure-rus"></a>
+
+1. **Measure and tune for lower request units/second usage**
+
+   Azure Cosmos DB offers a rich set of database operations including relational and hierarchical queries with UDFs, stored procedures, and triggers – all operating on the documents within a database collection. The cost associated with each of these operations varies based on the CPU, IO, and memory required to complete the operation. Instead of thinking about and managing hardware resources, you can think of a request unit (RU) as a single measure for the resources required to perform various database operations and service an application request.
+
+ Throughput is provisioned based on the number of [request units](../request-units.md) set for each container. Request unit consumption is evaluated as a rate per second. Applications that exceed the provisioned request unit rate for their container are limited until the rate drops below the provisioned level for the container. If your application requires a higher level of throughput, you can increase your throughput by provisioning additional request units.
+
+ The complexity of a query impacts how many request units are consumed for an operation. The number of predicates, nature of the predicates, number of UDFs, and the size of the source data set all influence the cost of query operations.
+
+   To measure the overhead of any operation (create, update, or delete), inspect the [x-ms-request-charge](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers) header (or the equivalent RequestCharge property in [ResourceResponse\<T>](/java/api/com.microsoft.azure.documentdb.resourceresponse) or [FeedResponse\<T>](/java/api/com.microsoft.azure.documentdb.feedresponse)) to measure the number of request units consumed by these operations.
++
+ ### <a id="syncjava2-requestcharge"></a>Sync Java SDK V2 (Maven com.microsoft.azure::azure-documentdb)
+
+ ```Java
+ ResourceResponse<Document> response = client.createDocument(collectionLink, documentDefinition, null, false);
+
+ response.getRequestCharge();
+ ```
+
+ The request charge returned in this header is a fraction of your provisioned throughput. For example, if you have 2000 RU/s provisioned, and if the preceding query returns 1000 1KB-documents, the cost of the operation is 1000. As such, within one second, the server honors only two such requests before rate limiting subsequent requests. For more information, see [Request units](../request-units.md) and the [request unit calculator](https://www.documentdb.com/capacityplanner).
+ <a id="429"></a>
+1. **Handle rate limiting/request rate too large**
+
+ When a client attempts to exceed the reserved throughput for an account, there is no performance degradation at the server and no use of throughput capacity beyond the reserved level. The server will preemptively end the request with RequestRateTooLarge (HTTP status code 429) and return the [x-ms-retry-after-ms](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers) header indicating the amount of time, in milliseconds, that the user must wait before reattempting the request.
+
+ ```xml
+ HTTP Status 429,
+ Status Line: RequestRateTooLarge
+ x-ms-retry-after-ms :100
+ ```
+ The SDKs all implicitly catch this response, respect the server-specified retry-after header, and retry the request. Unless your account is being accessed concurrently by multiple clients, the next retry will succeed.
+
+   If you have more than one client cumulatively operating consistently above the request rate, the default retry count (currently set to 9 internally by the client) may not suffice; in this case, the client throws a [DocumentClientException](/java/api/com.microsoft.azure.documentdb.documentclientexception) with status code 429 to the application. The default retry count can be changed by using [setRetryOptions](/java/api/com.microsoft.azure.documentdb.connectionpolicy.setretryoptions) on the [ConnectionPolicy](/java/api/com.microsoft.azure.documentdb.connectionpolicy) instance. By default, the DocumentClientException with status code 429 is returned after a cumulative wait time of 30 seconds if the request continues to operate above the request rate. This occurs even when the current retry count is less than the max retry count, be it the default of 9 or a user-defined value.
+
+   While the automated retry behavior helps to improve resiliency and usability for most applications, it might be at odds with your goals when doing performance benchmarks, especially when measuring latency. The client-observed latency will spike if the experiment hits the server throttle and causes the client SDK to silently retry. To avoid latency spikes during performance experiments, measure the charge returned by each operation and ensure that requests are operating below the reserved request rate. For more information, see [Request units](../request-units.md).
+3. **Design for smaller documents for higher throughput**
+
+ The request charge (the request processing cost) of a given operation is directly correlated to the size of the document. Operations on large documents cost more than operations for small documents.
+
+## Next steps
+To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
cosmos-db Performance Tips Query Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-query-sdk.md
+
+ Title: Azure Cosmos DB performance tips for queries using the Azure Cosmos DB SDK
+description: Learn query configuration options to help improve performance using the Azure Cosmos DB SDK.
++++ Last updated : 04/11/2022+
+ms.devlang: csharp, java
+
+zone_pivot_groups: programming-languages-set-cosmos
++
+# Query performance tips for Azure Cosmos DB SDKs
+
+Azure Cosmos DB is a fast, flexible distributed database that scales seamlessly with guaranteed latency and throughput levels. You don't have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call. To learn more, see [provision container throughput](how-to-provision-container-throughput.md) or [provision database throughput](how-to-provision-database-throughput.md).
++
+## Reduce Query Plan calls
+
+To execute a query, a query plan needs to be built. This in general represents a network request to the Azure Cosmos DB Gateway, which adds to the latency of the query operation. There are two ways to remove this request and reduce the latency of the query operation:
+
+### Use local Query Plan generation
+
+The SQL SDK includes a native ServiceInterop.dll to parse and optimize queries locally. ServiceInterop.dll is supported only on the **Windows x64** platform. The following types of applications use 32-bit host processing by default. To change host processing to 64-bit processing, follow these steps, based on the type of your application:
+
+- For executable applications, you can change host processing by setting the [platform target](/visualstudio/ide/how-to-configure-projects-to-target-platforms?preserve-view=true) to **x64** in the **Project Properties** window, on the **Build** tab.
+
+- For VSTest-based test projects, you can change host processing by selecting **Test** > **Test Settings** > **Default Processor Architecture as X64** on the Visual Studio **Test** menu.
+
+- For locally deployed ASP.NET web applications, you can change host processing by selecting **Use the 64-bit version of IIS Express for web sites and projects** under **Tools** > **Options** > **Projects and Solutions** > **Web Projects**.
+
+- For ASP.NET web applications deployed on Azure, you can change host processing by selecting the **64-bit** platform in **Application settings** in the Azure portal.
+
+> [!NOTE]
+> By default, new Visual Studio projects are set to **Any CPU**. We recommend that you set your project to **x64** so it doesn't switch to **x86**. A project set to **Any CPU** can easily switch to **x86** if an x86-only dependency is added.<br/>
+> ServiceInterop.dll needs to be in the folder that the SDK DLL is being executed from. This should be a concern only if you manually copy DLLs or have custom build/deployment systems.
+
+### Use single partition queries
+
+# [V3 .NET SDK](#tab/v3)
+
+For queries that target a Partition Key by setting the [PartitionKey](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.partitionkey) property in `QueryRequestOptions` and contain no aggregations (including Distinct, DCount, Group By):
+
+```cs
+using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ requestOptions: new QueryRequestOptions() { PartitionKey = new PartitionKey("Washington")}))
+{
+ // ...
+}
+```
+
+# [V2 .NET SDK](#tab/v2)
+
+For queries that target a Partition Key by setting the [PartitionKey](/dotnet/api/microsoft.azure.documents.client.feedoptions.partitionkey) property in `FeedOptions` and contain no aggregations (including Distinct, DCount, Group By):
+
+```cs
+IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ new FeedOptions
+ {
+ PartitionKey = new PartitionKey("Washington")
+ }).AsDocumentQuery();
+```
+++
+> [!NOTE]
+> Cross-partition queries require the SDK to visit all existing partitions to check for results. The more [physical partitions](../partitioning-overview.md#physical-partitions) the container has, the slower they can potentially be.
+
+### Avoid recreating the iterator unnecessarily
+
+When all the query results are consumed by the current component, you don't need to re-create the iterator with the continuation for every page. Always prefer to drain the query fully unless the pagination is controlled by another calling component:
+
+# [V3 .NET SDK](#tab/v3)
+
+```cs
+using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ requestOptions: new QueryRequestOptions() { PartitionKey = new PartitionKey("Washington")}))
+{
+ while (feedIterator.HasMoreResults)
+ {
+ foreach(MyItem document in await feedIterator.ReadNextAsync())
+ {
+ // Iterate through documents
+ }
+ }
+}
+```
+
+# [V2 .NET SDK](#tab/v2)
+
+```cs
+IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ new FeedOptions
+ {
+ PartitionKey = new PartitionKey("Washington")
+ }).AsDocumentQuery();
+while (query.HasMoreResults)
+{
+    foreach(Document document in await query.ExecuteNextAsync())
+ {
+ // Iterate through documents
+ }
+}
+```
+++
+## Tune the degree of parallelism
+
+# [V3 .NET SDK](#tab/v3)
+
+For queries, tune the [MaxConcurrency](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.maxconcurrency) property in `QueryRequestOptions` to identify the best configurations for your application, especially if you perform cross-partition queries (without a filter on the partition-key value). `MaxConcurrency` controls the maximum number of parallel tasks, that is, the maximum of partitions to be visited in parallel. Setting the value to -1 will let the SDK decide the optimal concurrency.
+
+```cs
+using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ requestOptions: new QueryRequestOptions() {
+ PartitionKey = new PartitionKey("Washington"),
+ MaxConcurrency = -1 }))
+{
+ // ...
+}
+```
+
+# [V2 .NET SDK](#tab/v2)
+
+For queries, tune the [MaxDegreeOfParallelism](/dotnet/api/microsoft.azure.documents.client.feedoptions.maxdegreeofparallelism) property in `FeedOptions` to identify the best configurations for your application, especially if you perform cross-partition queries (without a filter on the partition-key value). `MaxDegreeOfParallelism` controls the maximum number of parallel tasks, that is, the maximum of partitions to be visited in parallel. Setting the value to -1 will let the SDK decide the optimal concurrency.
+
+```cs
+IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ new FeedOptions
+ {
+ MaxDegreeOfParallelism = -1,
+ EnableCrossPartitionQuery = true
+ }).AsDocumentQuery();
+```
+++
+Let's assume that
+* D = Default Maximum number of parallel tasks (= total number of processors in the client machine)
+* P = User-specified maximum number of parallel tasks
+* N = Number of partitions that need to be visited for answering a query
+
+The following are the implications of how parallel queries behave for different values of P.
+* (P == 0) => Serial Mode
+* (P == 1) => Maximum of one task
+* (P > 1) => Min (P, N) parallel tasks
+* (P < 1) => Min (N, D) parallel tasks
+
+## Tune the page size
+
+When you issue a SQL query, the results are returned in a segmented fashion if the result set is too large. By default, results are returned in chunks of 100 items or 1 MB, whichever limit is hit first.
+
+> [!NOTE]
+> The `MaxItemCount` property shouldn't be used just for pagination. Its main use is to improve the performance of queries by reducing the maximum number of items returned in a single page.
+
+# [V3 .NET SDK](#tab/v3)
+
+You can also set the page size by using the available Azure Cosmos DB SDKs. The [MaxItemCount](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.maxitemcount) property in `QueryRequestOptions` allows you to set the maximum number of items to be returned in the enumeration operation. When `MaxItemCount` is set to -1, the SDK automatically finds the optimal value, depending on the document size. For example:
+
+```cs
+using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ requestOptions: new QueryRequestOptions() {
+ PartitionKey = new PartitionKey("Washington"),
+ MaxItemCount = 1000}))
+{
+ // ...
+}
+```
+
+# [V2 .NET SDK](#tab/v2)
+
+You can also set the page size by using the available Azure Cosmos DB SDKs. The [MaxItemCount](/dotnet/api/microsoft.azure.documents.client.feedoptions.maxitemcount) property in `FeedOptions` allows you to set the maximum number of items to be returned in the enumeration operation. When `MaxItemCount` is set to -1, the SDK automatically finds the optimal value, depending on the document size. For example:
+
+```csharp
+IQueryable<dynamic> authorResults = client.CreateDocumentQuery(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
+ "SELECT p.Author FROM Pages p WHERE p.Title = 'About Seattle'",
+ new FeedOptions { MaxItemCount = 1000 });
+```
+++
+When a query is executed, the resulting data is sent within a TCP packet. If you specify too low a value for `MaxItemCount`, the number of trips required to send the data within the TCP packet is high, which affects performance. So if you're not sure what value to set for the `MaxItemCount` property, it's best to set it to -1 and let the SDK choose the default value.
+
+## Tune the buffer size
+
+# [V3 .NET SDK](#tab/v3)
+
+Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. This pre-fetching helps improve the overall latency of a query. The [MaxBufferedItemCount](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.maxbuffereditemcount) property in `QueryRequestOptions` limits the number of pre-fetched results. Set `MaxBufferedItemCount` to the expected number of results returned (or a higher number) to allow the query to receive the maximum benefit from pre-fetching. If you set this value to -1, the system will automatically determine the number of items to buffer.
+
+```cs
+using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ requestOptions: new QueryRequestOptions() {
+ PartitionKey = new PartitionKey("Washington"),
+ MaxBufferedItemCount = -1}))
+{
+ // ...
+}
+```
+
+# [V2 .NET SDK](#tab/v2)
+
+Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. This pre-fetching helps improve the overall latency of a query. The [MaxBufferedItemCount](/dotnet/api/microsoft.azure.documents.client.feedoptions.maxbuffereditemcount) property in `FeedOptions` limits the number of pre-fetched results. Set `MaxBufferedItemCount` to the expected number of results returned (or a higher number) to allow the query to receive the maximum benefit from pre-fetching. If you set this value to -1, the system will automatically determine the number of items to buffer.
+
+```csharp
+IQueryable<dynamic> authorResults = client.CreateDocumentQuery(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
+ "SELECT p.Author FROM Pages p WHERE p.Title = 'About Seattle'",
+ new FeedOptions { MaxBufferedItemCount = -1 });
+```
+++
+Pre-fetching works the same way regardless of the degree of parallelism, and there's a single buffer for the data from all partitions.
+++
+## Next steps
+
+To learn more about performance using the .NET SDK:
+
+* [Best practices for Azure Cosmos DB .NET SDK](best-practice-dotnet.md)
+* [Performance tips for Azure Cosmos DB .NET V3 SDK](performance-tips-dotnet-sdk-v3.md)
+* [Performance tips for Azure Cosmos DB .NET V2 SDK](performance-tips.md)
++
+## Reduce Query Plan calls
+
+To execute a query, a query plan needs to be built. This in general represents a network request to the Azure Cosmos DB Gateway, which adds to the latency of the query operation.
+
+### Use Query Plan caching
+
+The query plan, for a query scoped to a single partition, is cached on the client. This eliminates the need to make a call to the gateway to retrieve the query plan after the first call. The key for the cached query plan is the SQL query string. You need to **make sure the query is [parametrized](query/parameterized-queries.md)**. If not, the query plan cache lookup will often be a cache miss as the query string is unlikely to be identical across calls. Query plan caching is **enabled by default for Java SDK version 4.20.0 and above** and **for Spring Data Azure Cosmos DB SDK version 3.13.0 and above**.
+
+### Use parametrized single partition queries
+
+For parametrized queries that are scoped to a partition key with [setPartitionKey](/java/api/com.azure.cosmos.models.cosmosqueryrequestoptions.setpartitionkey) in `CosmosQueryRequestOptions` and contain no aggregations (including Distinct, DCount, Group By), the query plan can be avoided:
+
+```java
+CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
+options.setPartitionKey(new PartitionKey("Washington"));
+
+ArrayList<SqlParameter> paramList = new ArrayList<SqlParameter>();
+paramList.add(new SqlParameter("@city", "Seattle"));
+SqlQuerySpec querySpec = new SqlQuerySpec(
+ "SELECT * FROM c WHERE c.city = @city",
+ paramList);
+
+// Sync API
+CosmosPagedIterable<MyItem> filteredItems =
+ container.queryItems(querySpec, options, MyItem.class);
+
+// Async API
+CosmosPagedFlux<MyItem> filteredItems =
+ asyncContainer.queryItems(querySpec, options, MyItem.class);
+```
+
+> [!NOTE]
+> Cross-partition queries require the SDK to visit all existing partitions to check for results. The more [physical partitions](../partitioning-overview.md#physical-partitions) the container has, the slower they can potentially be.
+
+## Tune the degree of parallelism
+
+Parallel queries work by querying multiple partitions in parallel. However, data from an individual partitioned container is fetched serially with respect to the query. So, use [setMaxDegreeOfParallelism](/java/api/com.azure.cosmos.models.cosmosqueryrequestoptions.setmaxdegreeofparallelism) on `CosmosQueryRequestOptions` to set the value to the number of partitions you have. If you don't know the number of partitions, you can use `setMaxDegreeOfParallelism` to set a high number, and the system chooses the minimum (number of partitions, user provided input) as the maximum degree of parallelism. Setting the value to -1 will let the SDK decide the optimal concurrency.
+
+It is important to note that parallel queries produce the best benefits if the data is evenly distributed across all partitions with respect to the query. If the partitioned container is partitioned in such a way that all or a majority of the data returned by a query is concentrated in a few partitions (one partition in the worst case), then the performance of the query would be degraded.
+
+```java
+CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
+options.setPartitionKey(new PartitionKey("Washington"));
+options.setMaxDegreeOfParallelism(-1);
+
+// Define the query
+
+// Sync API
+CosmosPagedIterable<MyItem> filteredItems =
+ container.queryItems(querySpec, options, MyItem.class);
+
+// Async API
+CosmosPagedFlux<MyItem> filteredItems =
+ asyncContainer.queryItems(querySpec, options, MyItem.class);
+```
+
+Let's assume that
+* D = Default Maximum number of parallel tasks (= total number of processors in the client machine)
+* P = User-specified maximum number of parallel tasks
+* N = Number of partitions that need to be visited for answering a query
+
+The following are the implications of how parallel queries behave for different values of P.
+* (P == 0) => Serial Mode
+* (P == 1) => Maximum of one task
+* (P > 1) => Min (P, N) parallel tasks
+* (P == -1) => Min (N, D) parallel tasks
+
+## Tune the page size
+
+When you issue a SQL query, the results are returned in a segmented fashion if the result set is too large. By default, results are returned in chunks of 100 items or 4 MB, whichever limit is hit first.
+
+You can use the `pageSize` parameter in `iterableByPage()` for sync API and `byPage()` for async API, to define a page size:
+
+```java
+// Sync API
+Iterable<FeedResponse<MyItem>> filteredItemsAsPages =
+ container.queryItems(querySpec, options, MyItem.class).iterableByPage(continuationToken,pageSize);
+
+for (FeedResponse<MyItem> page : filteredItemsAsPages) {
+ for (MyItem item : page.getResults()) {
+ //...
+ }
+}
+
+// Async API
+Flux<FeedResponse<MyItem>> filteredItemsAsPages =
+ asyncContainer.queryItems(querySpec, options, MyItem.class).byPage(continuationToken,pageSize);
+
+filteredItemsAsPages.map(page -> {
+ for (MyItem item : page.getResults()) {
+ //...
+ }
+}).subscribe();
+```
+
+## Tune the buffer size
+
+Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. The pre-fetching helps in overall latency improvement of a query. [setMaxBufferedItemCount](/java/api/com.azure.cosmos.models.cosmosqueryrequestoptions.setmaxbuffereditemcount) in `CosmosQueryRequestOptions` limits the number of pre-fetched results. Setting setMaxBufferedItemCount to the expected number of results returned (or a higher number) enables the query to receive maximum benefit from pre-fetching (NOTE: This can also result in high memory consumption). If you set this value to 0, the system will automatically determine the number of items to buffer.
+
+```java
+CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
+options.setPartitionKey(new PartitionKey("Washington"));
+options.setMaxBufferedItemCount(-1);
+
+// Define the query
+
+// Sync API
+CosmosPagedIterable<MyItem> filteredItems =
+ container.queryItems(querySpec, options, MyItem.class);
+
+// Async API
+CosmosPagedFlux<MyItem> filteredItems =
+ asyncContainer.queryItems(querySpec, options, MyItem.class);
+```
+
+Pre-fetching works the same way regardless of the degree of parallelism, and there's a single buffer for the data from all partitions.
+
+## Next steps
+
+To learn more about performance using the Java SDK:
+
+* [Best practices for Azure Cosmos DB Java V4 SDK](best-practice-java.md)
+* [Performance tips for Azure Cosmos DB Java V4 SDK](performance-tips-java-sdk-v4.md)
+
cosmos-db Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips.md
+
+ Title: Azure Cosmos DB performance tips for .NET SDK v2
+description: Learn client configuration options to improve Azure Cosmos DB .NET v2 SDK performance.
+++++ Last updated : 02/18/2022
+ms.devlang: csharp
++++
+# Performance tips for Azure Cosmos DB and .NET SDK v2
+
+> [!div class="op_single_selector"]
+> * [.NET SDK v3](performance-tips-dotnet-sdk-v3.md)
+> * [.NET SDK v2](performance-tips.md)
+> * [Java SDK v4](performance-tips-java-sdk-v4.md)
+> * [Async Java SDK v2](performance-tips-async-java.md)
+> * [Sync Java SDK v2](performance-tips-java.md)
+
+Azure Cosmos DB is a fast and flexible distributed database that scales seamlessly with guaranteed latency and throughput. You don't have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call. To learn more, see [how to provision container throughput](how-to-provision-container-throughput.md) or [how to provision database throughput](how-to-provision-database-throughput.md). But because Azure Cosmos DB is accessed via network calls, there are client-side optimizations you can make to achieve peak performance when you use the [SQL .NET SDK](sdk-dotnet-v3.md).
+
+So, if you're trying to improve your database performance, consider these options:
+
+## Upgrade to the .NET V3 SDK
+
+The [.NET v3 SDK](https://github.com/Azure/azure-cosmos-dotnet-v3) is released. If you use the .NET v3 SDK, see the [.NET v3 performance guide](performance-tips-dotnet-sdk-v3.md) for the following information:
+
+- Defaults to Direct TCP mode
+- Stream API support
+- Support custom serializer to allow System.Text.JSON usage
+- Integrated batch and bulk support
+
+## <a id="hosting"></a> Hosting recommendations
+
+**Turn on server-side garbage collection (GC)**
+
+Reducing the frequency of garbage collection can help in some cases. In .NET, set [gcServer](/dotnet/framework/configure-apps/file-schema/runtime/gcserver-element) to `true`.
+
+**Scale out your client workload**
+
+If you're testing at high throughput levels (more than 50,000 RU/s), the client application could become the bottleneck due to the machine capping out on CPU or network utilization. If you reach this point, you can continue to push the Azure Cosmos DB account further by scaling out your client applications across multiple servers.
+
+> [!NOTE]
+> High CPU usage can cause increased latency and request timeout exceptions.
+
+## <a id="metadata-operations"></a> Metadata operations
+
+Do not verify that a database and/or collection exists by calling `Create...IfNotExistsAsync` and/or `Read...Async` in the hot path and/or before doing an item operation. Do this validation only on application startup, and only when necessary, for example if you expect the resources to be deleted (otherwise it's not needed). These metadata operations generate extra end-to-end latency, have no SLA, and have their own separate [limitations](./troubleshoot-request-rate-too-large.md#rate-limiting-on-metadata-requests) that do not scale like data operations.
+
+## <a id="logging-and-tracing"></a> Logging and tracing
+
+Some environments have the [.NET DefaultTraceListener](/dotnet/api/system.diagnostics.defaulttracelistener) enabled. The DefaultTraceListener causes performance issues in production environments, leading to high CPU and I/O bottlenecks. Check and make sure that the DefaultTraceListener is disabled for your application by removing it from the [TraceListeners](/dotnet/framework/debug-trace-profile/how-to-create-and-initialize-trace-listeners) in production environments.
+
+The latest SDK versions (greater than 2.16.2) automatically remove it when they detect it. With older versions, you can remove it as follows:
+
+# [.NET 6 / .NET Core](#tab/trace-net-core)
+
+```csharp
+if (!Debugger.IsAttached)
+{
+ Type defaultTrace = Type.GetType("Microsoft.Azure.Documents.DefaultTrace,Microsoft.Azure.DocumentDB.Core");
+ TraceSource traceSource = (TraceSource)defaultTrace.GetProperty("TraceSource").GetValue(null);
+ traceSource.Listeners.Remove("Default");
+ // Add your own trace listeners
+}
+```
+
+# [.NET Framework](#tab/trace-net-fx)
+
+Edit your `app.config` or `web.config` files:
+
+```xml
+<configuration>
+ <system.diagnostics>
+ <sources>
+ <source name="DocDBTrace" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >
+ <listeners>
+ <remove name="Default" />
+ <!--Add your own trace listeners-->
+ <add name="myListener" ... />
+ </listeners>
+ </source>
+ </sources>
+ </system.diagnostics>
+</configuration>
+```
+++
+## <a id="networking"></a> Networking
+
+**Connection policy: Use direct connection mode**
+
+.NET V2 SDK default connection mode is gateway. You configure the connection mode during the construction of the `DocumentClient` instance by using the `ConnectionPolicy` parameter. If you use direct mode, you need to also set the `Protocol` by using the `ConnectionPolicy` parameter. To learn more about different connectivity options, see the [connectivity modes](sdk-connection-modes.md) article.
+
+```csharp
+Uri serviceEndpoint = new Uri("https://contoso.documents.net");
+string authKey = "your authKey from the Azure portal";
+DocumentClient client = new DocumentClient(serviceEndpoint, authKey,
+new ConnectionPolicy
+{
+ ConnectionMode = ConnectionMode.Direct, // ConnectionMode.Gateway is the default
+ ConnectionProtocol = Protocol.Tcp
+});
+```
+
+**Ephemeral port exhaustion**
+
+If you see a high connection volume or high port usage on your instances, first verify that your client instances are singletons. In other words, the client instances should be unique for the lifetime of the application.
+
+When running on the TCP protocol, the client optimizes for latency by using the long-lived connections as opposed to the HTTPS protocol, which terminates the connections after 2 minutes of inactivity.
+
+In scenarios where you have sparse access and if you notice a higher connection count when compared to the gateway mode access, you can:
+
+* Configure the [ConnectionPolicy.PortReuseMode](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.portreusemode) property to `PrivatePortPool` (effective with .NET Framework version >= 4.6.1 and .NET Core version >= 2.0): This property allows the SDK to use a small pool of ephemeral ports for different Azure Cosmos DB destination endpoints.
+* Configure the [ConnectionPolicy.IdleConnectionTimeout](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.idletcpconnectiontimeout) property to a value greater than or equal to 10 minutes. The recommended values are between 20 minutes and 24 hours. A sketch of both settings follows this list.
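+
+A minimal sketch of both settings, assuming an SDK version that exposes `PortReuseMode` and `IdleTcpConnectionTimeout` on `ConnectionPolicy` (the endpoint and key are placeholders):
+
+```csharp
+using System;
+using Microsoft.Azure.Documents;
+using Microsoft.Azure.Documents.Client;
+
+ConnectionPolicy connectionPolicy = new ConnectionPolicy
+{
+    ConnectionMode = ConnectionMode.Direct,
+    ConnectionProtocol = Protocol.Tcp,
+    // Use a small private pool of ephemeral ports per destination endpoint.
+    PortReuseMode = PortReuseMode.PrivatePortPool,
+    // Keep idle TCP connections alive longer (20 minutes to 24 hours is recommended).
+    IdleTcpConnectionTimeout = TimeSpan.FromMinutes(20)
+};
+
+DocumentClient client = new DocumentClient(
+    new Uri("https://contoso.documents.azure.com:443/"),
+    "<your-auth-key>",
+    connectionPolicy);
+```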
+
+**Call OpenAsync to avoid startup latency on first request**
+
+By default, the first request has higher latency because it needs to fetch the address routing table. When you use [SDK V2](sdk-dotnet-v2.md), call `OpenAsync()` once during initialization to avoid this startup latency on the first request. The call looks like: `await client.OpenAsync();`
+
+> [!NOTE]
+> `OpenAsync` generates requests to obtain the address routing table for all the containers in the account. For accounts that have many containers but whose application accesses only a subset of them, `OpenAsync` generates unnecessary traffic and slows down application startup, so it might not be useful in this scenario.
+
+**For performance, colocate clients in the same Azure region**
+
+When possible, place any applications that call Azure Cosmos DB in the same region as the Azure Cosmos DB database. Here's an approximate comparison: calls to Azure Cosmos DB within the same region complete within 1 ms to 2 ms, but the latency between the West and East coast of the US is more than 50 ms. This latency can vary from request to request, depending on the route taken by the request as it passes from the client to the Azure datacenter boundary. You can get the lowest possible latency by ensuring the calling application is located within the same Azure region as the provisioned Azure Cosmos DB endpoint. For a list of available regions, see [Azure regions](https://azure.microsoft.com/regions/#services).
++
+**Increase the number of threads/tasks**
+<a id="increase-threads"></a>
+
+Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of parallelism of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the .NET [Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB.
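+
+For example, a minimal sketch that issues many concurrent writes with the TPL instead of awaiting each write sequentially (the collection URI and items are placeholders):
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Threading.Tasks;
+using Microsoft.Azure.Documents.Client;
+
+public static class ParallelWriter
+{
+    public static Task WriteManyAsync(DocumentClient client, Uri collectionUri, IEnumerable<object> items)
+    {
+        // Start one task per document so the requests run concurrently.
+        IEnumerable<Task> writeTasks = items.Select(item => client.CreateDocumentAsync(collectionUri, item));
+        return Task.WhenAll(writeTasks);
+    }
+}
+```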
+
+**Enable accelerated networking**
+
+To reduce latency and CPU jitter, we recommend that you enable accelerated networking on client virtual machines. See [Create a Windows virtual machine with accelerated networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) or [Create a Linux virtual machine with accelerated networking](../../virtual-network/create-vm-accelerated-networking-cli.md).
+
+## SDK usage
+
+**Install the most recent SDK**
+
+The Azure Cosmos DB SDKs are constantly being improved to provide the best performance. See the [Azure Cosmos DB SDK](sdk-dotnet-v3.md) pages to determine the most recent SDK and review improvements.
+
+**Use a singleton Azure Cosmos DB client for the lifetime of your application**
+
+Each `DocumentClient` instance is thread-safe and performs efficient connection management and address caching when operating in direct mode. To allow efficient connection management and better SDK client performance, we recommend that you use a single instance per `AppDomain` for the lifetime of the application.
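+
+A minimal sketch of the singleton pattern (the endpoint and key are placeholders):
+
+```csharp
+using System;
+using Microsoft.Azure.Documents.Client;
+
+public static class CosmosClientHolder
+{
+    // One DocumentClient per AppDomain, created lazily and reused for the application lifetime.
+    private static readonly Lazy<DocumentClient> lazyClient = new Lazy<DocumentClient>(() =>
+        new DocumentClient(
+            new Uri("https://contoso.documents.azure.com:443/"),
+            "<your-auth-key>",
+            new ConnectionPolicy
+            {
+                ConnectionMode = ConnectionMode.Direct,
+                ConnectionProtocol = Protocol.Tcp
+            }));
+
+    public static DocumentClient Instance => lazyClient.Value;
+}
+```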
+
+**Avoid blocking calls**
+
+Applications built on the Azure Cosmos DB SDK should be designed to process many requests simultaneously. Asynchronous APIs allow a small pool of threads to handle thousands of concurrent requests by not waiting on blocking calls. Rather than waiting on a long-running synchronous task to complete, the thread can work on another request.
+
+A common performance problem in apps using the Azure Cosmos DB SDK is blocking calls that could be asynchronous. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times.
+
+**Do not**:
+
+* Block asynchronous execution by calling [Task.Wait](/dotnet/api/system.threading.tasks.task.wait) or [Task.Result](/dotnet/api/system.threading.tasks.task-1.result).
+* Use [Task.Run](/dotnet/api/system.threading.tasks.task.run) to make a synchronous API asynchronous.
+* Acquire locks in common code paths. Azure Cosmos DB .NET SDK is most performant when architected to run code in parallel.
+* Call [Task.Run](/dotnet/api/system.threading.tasks.task.run) and immediately await it. ASP.NET Core already runs app code on normal Thread Pool threads, so calling Task.Run only results in extra unnecessary Thread Pool scheduling. Even if the scheduled code would block a thread, Task.Run does not prevent that.
+* Use `ToList()` on `DocumentClient.CreateDocumentQuery(...)`, which synchronously drains the query with blocking calls. Use [AsDocumentQuery()](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/a4348f8cc0750434376b02ae64ca24237da28cd7/samples/code-samples/Queries/Program.cs#L690) to drain the query asynchronously instead.
+
+**Do**:
+
+* Call the Azure Cosmos DB .NET APIs asynchronously.
+* Make the entire call stack asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns. (A sketch contrasting blocking and asynchronous calls follows this list.)
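+
+As a sketch of the difference (the repository type and document URI are illustrative):
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.Azure.Documents;
+using Microsoft.Azure.Documents.Client;
+
+public class OrderRepository
+{
+    private readonly DocumentClient client;
+
+    public OrderRepository(DocumentClient client) => this.client = client;
+
+    // Avoid: blocks a thread pool thread while the request is in flight.
+    public Document GetOrderBlocking(Uri documentUri) =>
+        client.ReadDocumentAsync(documentUri).Result.Resource;
+
+    // Prefer: asynchronous all the way up the call stack.
+    public async Task<Document> GetOrderAsync(Uri documentUri)
+    {
+        ResourceResponse<Document> response = await client.ReadDocumentAsync(documentUri);
+        return response.Resource;
+    }
+}
+```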
+
+A profiler, such as [PerfView](https://github.com/Microsoft/perfview), can be used to find threads frequently added to the [Thread Pool](/windows/desktop/procthread/thread-pools). The `Microsoft-Windows-DotNETRuntime/ThreadPoolWorkerThread/Start` event indicates a thread added to the thread pool.
+
+**Increase System.Net MaxConnections per host when using gateway mode**
+
+Azure Cosmos DB requests are made over HTTPS/REST when you use gateway mode. They're subjected to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (100 to 1,000) so the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for [ServicePointManager.DefaultConnectionLimit](/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit) is 50. To change the value, you can set [Documents.Client.ConnectionPolicy.MaxConnectionLimit](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.maxconnectionlimit) to a higher value.
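+
+For example, a sketch of raising the limit when creating a gateway mode client (the endpoint and key are placeholders):
+
+```csharp
+using System;
+using Microsoft.Azure.Documents.Client;
+
+ConnectionPolicy gatewayPolicy = new ConnectionPolicy
+{
+    ConnectionMode = ConnectionMode.Gateway,
+    // Allow more simultaneous HTTPS connections to the gateway endpoint.
+    MaxConnectionLimit = 1000
+};
+
+DocumentClient client = new DocumentClient(
+    new Uri("https://contoso.documents.azure.com:443/"),
+    "<your-auth-key>",
+    gatewayPolicy);
+```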
+
+**Implement backoff at RetryAfter intervals**
+
+During performance testing, you should increase load until a small rate of requests is throttled. If requests are throttled, the client application should back off for the server-specified retry interval. Respecting the backoff ensures that you spend a minimal amount of time waiting between retries.
+
+Retry policy support is included in these SDKs:
+- Version 1.8.0 and later of the [.NET SDK for SQL](sdk-dotnet-v2.md) and the [Java SDK for SQL](sdk-java-v2.md)
+- Version 1.9.0 and later of the [Node.js SDK for SQL](sdk-nodejs.md) and the [Python SDK for SQL](sdk-python.md)
+- All supported versions of the [.NET Core](sdk-dotnet-core-v2.md) SDKs
+
+For more information, see [RetryAfter](/dotnet/api/microsoft.azure.documents.documentclientexception.retryafter).
+
+In version 1.19 and later of the .NET SDK, there's a mechanism for logging additional diagnostic information and troubleshooting latency issues, as shown in the following sample. You can log the diagnostic string for requests that have a higher read latency. The captured diagnostic string will help you understand how many times you received 429 errors for a given request.
+
+```csharp
+ResourceResponse<Document> readDocument = await this.readClient.ReadDocumentAsync(oldDocuments[i].SelfLink);
+string diagnosticsString = readDocument.RequestDiagnosticsString; // Capture the diagnostics string for logging.
+```
+
+**Cache document URIs for lower read latency**
+
+Cache document URIs whenever possible for the best read performance. You need to define logic to cache the resource ID when you create a resource. Lookups based on resource IDs are faster than name-based lookups, so caching these values improves performance.
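+
+As a sketch (the cache type and id scheme are illustrative), store the self-link returned when a document is created and reuse it for later reads:
+
+```csharp
+using System;
+using System.Collections.Concurrent;
+using System.Threading.Tasks;
+using Microsoft.Azure.Documents;
+using Microsoft.Azure.Documents.Client;
+
+public class DocumentLinkCache
+{
+    // Maps a logical document id to the self-link returned by the service.
+    private readonly ConcurrentDictionary<string, string> selfLinks = new ConcurrentDictionary<string, string>();
+
+    public async Task<Document> CreateAndCacheAsync(DocumentClient client, Uri collectionUri, object item, string id)
+    {
+        ResourceResponse<Document> response = await client.CreateDocumentAsync(collectionUri, item);
+        selfLinks[id] = response.Resource.SelfLink; // Cache the resource link at creation time.
+        return response.Resource;
+    }
+
+    public async Task<Document> ReadByCachedLinkAsync(DocumentClient client, string id)
+    {
+        // Reading by the cached self-link avoids the name-based lookup.
+        ResourceResponse<Document> response = await client.ReadDocumentAsync(selfLinks[id]);
+        return response.Resource;
+    }
+}
+```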
+
+**Increase the number of threads/tasks**
+
+See [Increase the number of threads/tasks](#increase-threads) in the networking section of this article.
+
+## Query operations
+
+For query operations see the [performance tips for queries](performance-tips-query-sdk.md?tabs=v2&pivots=programming-language-csharp).
+
+## Indexing policy
+
+**Exclude unused paths from indexing for faster writes**
+
+The Azure Cosmos DB indexing policy also allows you to specify which document paths to include in or exclude from indexing by using indexing paths (IndexingPolicy.IncludedPaths and IndexingPolicy.ExcludedPaths). Indexing paths can improve write performance and reduce index storage for scenarios in which the query patterns are known beforehand. This is because indexing costs correlate directly to the number of unique paths indexed. For example, this code shows how to exclude an entire section of the documents (a subtree) from indexing by using the "*" wildcard:
+
+```csharp
+var collection = new DocumentCollection { Id = "excludedPathCollection" };
+collection.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
+collection.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/nonIndexedContent/*" });
+collection = await client.CreateDocumentCollectionAsync(UriFactory.CreateDatabaseUri("db"), collection);
+```
+
+For more information, see [Azure Cosmos DB indexing policies](../index-policy.md).
+
+## <a id="measure-rus"></a> Throughput
+
+**Measure and tune for lower Request Units/second usage**
+
+Azure Cosmos DB offers a rich set of database operations. These operations include relational and hierarchical queries with UDFs, stored procedures, and triggers, all operating on the documents within a database collection. The cost associated with each of these operations varies depending on the CPU, IO, and memory required to complete the operation. Instead of thinking about and managing hardware resources, you can think of a Request Unit (RU) as a single measure for the resources required to perform various database operations and service an application request.
+
+Throughput is provisioned based on the number of [Request Units](../request-units.md) set for each container. Request Unit consumption is evaluated as a rate per second. Applications that exceed the provisioned Request Unit rate for their container are limited until the rate drops below the provisioned level for the container. If your application requires a higher level of throughput, you can increase your throughput by provisioning additional Request Units.
+
+The complexity of a query affects how many Request Units are consumed for an operation. The number of predicates, the nature of the predicates, the number of UDFs, and the size of the source dataset all influence the cost of query operations.
+
+To measure the overhead of any operation (create, update, or delete), inspect the [x-ms-request-charge](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers) header (or the equivalent `RequestCharge` property in `ResourceResponse<T>` or `FeedResponse<T>` in the .NET SDK) to measure the number of Request Units consumed by the operations:
+
+```csharp
+// Measure the performance (Request Units) of writes
+ResourceResponse<Document> response = await client.CreateDocumentAsync(collectionSelfLink, myDocument);
+Console.WriteLine("Insert of document consumed {0} request units", response.RequestCharge);
+// Measure the performance (Request Units) of queries
+IDocumentQuery<dynamic> queryable = client.CreateDocumentQuery(collectionSelfLink, queryString).AsDocumentQuery();
+while (queryable.HasMoreResults)
+ {
+ FeedResponse<dynamic> queryResponse = await queryable.ExecuteNextAsync<dynamic>();
+ Console.WriteLine("Query batch consumed {0} request units", queryResponse.RequestCharge);
+ }
+```
+
+The request charge returned in this header is a fraction of your provisioned throughput (for example, 2,000 RU/s). For example, if the preceding query returns 1,000 1-KB documents, the cost of the operation is 1,000 RUs. So, within one second, the server honors only two such requests before rate limiting later requests. For more information, see [Request Units](../request-units.md) and the [Request Unit calculator](https://www.documentdb.com/capacityplanner).
+<a id="429"></a>
+
+**Handle rate limiting/request rate too large**
+
+When a client attempts to exceed the reserved throughput for an account, there's no performance degradation at the server and no use of throughput capacity beyond the reserved level. The server will preemptively end the request with RequestRateTooLarge (HTTP status code 429). It will return an [x-ms-retry-after-ms](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers) header that indicates the amount of time, in milliseconds, that the user must wait before attempting the request again.
+
+```http
+HTTP Status 429,
+Status Line: RequestRateTooLarge
+x-ms-retry-after-ms :100
+```
+
+The SDKs all implicitly catch this response, respect the server-specified retry-after header, and retry the request. Unless your account is being accessed concurrently by multiple clients, the next retry will succeed.
+
+If you have more than one client cumulatively operating consistently above the request rate, the default retry count currently set to 9 internally by the client might not suffice. In this case, the client throws a DocumentClientException with status code 429 to the application.
+
+You can change the default retry count by setting the `RetryOptions` on the `ConnectionPolicy` instance. By default, the DocumentClientException with status code 429 is returned after a cumulative wait time of 30 seconds if the request continues to operate above the request rate. This error returns even when the current retry count is less than the maximum retry count, whether the current value is the default of 9 or a user-defined value.
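+
+For example, a sketch that raises both limits (the values shown are illustrative, and the endpoint and key are placeholders):
+
+```csharp
+using System;
+using Microsoft.Azure.Documents.Client;
+
+ConnectionPolicy retryPolicy = new ConnectionPolicy
+{
+    RetryOptions = new RetryOptions
+    {
+        // Default is 9 retries on throttled (429) responses.
+        MaxRetryAttemptsOnThrottledRequests = 15,
+        // Default cumulative wait time is 30 seconds.
+        MaxRetryWaitTimeInSeconds = 60
+    }
+};
+
+DocumentClient client = new DocumentClient(
+    new Uri("https://contoso.documents.azure.com:443/"),
+    "<your-auth-key>",
+    retryPolicy);
+```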
+
+The automated retry behavior helps improve resiliency and usability for most applications. But it might not be the best behavior when you're doing performance benchmarks, especially when you're measuring latency. The client-observed latency will spike if the experiment hits the server throttle and causes the client SDK to silently retry. To avoid latency spikes during performance experiments, measure the charge returned by each operation and ensure that requests are operating below the reserved request rate. For more information, see [Request Units](../request-units.md).
+
+**For higher throughput, design for smaller documents**
+
+The request charge (that is, the request-processing cost) of a given operation correlates directly to the size of the document. Operations on large documents cost more than operations on small documents.
+
+## Next steps
+
+For a sample application that's used to evaluate Azure Cosmos DB for high-performance scenarios on a few client machines, see [Performance and scale testing with Azure Cosmos DB](performance-testing.md).
+
+To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
cosmos-db Powerbi Visualize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/powerbi-visualize.md
+
+ Title: Power BI tutorial for Azure Cosmos DB
+description: Use this Power BI tutorial to import JSON, create insightful reports, and visualize data using Azure Cosmos DB.
++++ Last updated : 03/28/2022+++
+# Visualize Azure Cosmos DB data using Power BI
+
+This article describes the steps required to connect Azure Cosmos DB data to [Power BI](https://powerbi.microsoft.com/) Desktop.
+
+You can connect to Azure Cosmos DB from Power BI desktop by using one of these options:
+
+* Use [Azure Synapse Link](../synapse-link.md) to build Power BI reports with no performance or cost impact to your transactional workloads, and no ETL pipelines.
+
+ You can either use [DirectQuery](/power-bi/connect-data/service-dataset-modes-understand#directquery-mode) or [import](/power-bi/connect-data/service-dataset-modes-understand#import-mode) mode. With [DirectQuery](/power-bi/connect-data/service-dataset-modes-understand#directquery-mode), you can build dashboards/reports using live data from your Azure Cosmos DB accounts, without importing or copying the data into Power BI.
+
+* Connect Power BI Desktop to your Azure Cosmos DB account with the Azure Cosmos DB connector for Power BI. This option is only available in import mode and will consume RUs allocated for your transactional workloads.
+
+> [!NOTE]
+> Reports created in Power BI Desktop can be published to PowerBI.com. Direct extraction of Azure Cosmos DB data cannot be performed from PowerBI.com.
+
+## Prerequisites
+Before following the instructions in this Power BI tutorial, ensure that you have access to the following resources:
+
+* [Download the latest version of Power BI Desktop](https://powerbi.microsoft.com/desktop).
+
+* [Create an Azure Cosmos DB database account](quickstart-portal.md#create-container-database) and add data to your Azure Cosmos DB containers.
+
+To share your reports in PowerBI.com, you must have an account in PowerBI.com. To learn more about Power BI and Power BI Pro, see [https://powerbi.microsoft.com/pricing](https://powerbi.microsoft.com/pricing).
+
+## Let's get started
+### Building BI reports using Azure Synapse Link
+
+You can enable Azure Synapse Link on your existing Azure Cosmos DB containers and build BI reports on this data, in just a few clicks, using the Azure Cosmos DB portal. Power BI connects to Azure Cosmos DB using DirectQuery mode, allowing you to query your live Azure Cosmos DB data without impacting your transactional workloads.
+
+To build a Power BI report/dashboard:
+
+1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to your Azure Cosmos DB account.
+
+1. From the **Integrations** section, open the **Power BI** pane and select **Get started**.
+
+ > [!NOTE]
+ > Currently, this option is only available for API for NoSQL accounts. You can create T-SQL views directly in Synapse serverless SQL pools and build BI dashboards for Azure Cosmos DB for MongoDB. See ["Use Power BI and serverless Synapse SQL pool to analyze Azure Cosmos DB data with Synapse"](../synapse-link-power-bi.md) for more information.
+
+1. From the **Enable Azure Synapse Link** tab, you can enable Synapse Link on your account from the **Enable Azure Synapse link for this account** section. If Synapse Link is already enabled for your account, you won't see this tab. This step is a prerequisite to start enabling Synapse Link on your containers.
+
+ > [!NOTE]
+ > Enabling Azure Synapse Link has cost implications. See [Azure Synapse Link pricing](../synapse-link.md#pricing) section for more details.
+
+1. Next, from the **Enable Azure Synapse Link for your containers** section, choose the required containers to enable Synapse Link.
+
+    * If you already enabled Synapse Link on some containers, the checkbox next to the container name is already selected. You can optionally clear it, based on the data you'd like to visualize in Power BI.
+
+ * If Synapse Link isn't enabled, you can enable this on your existing containers.
+
+ If enabling Synapse Link is in progress on any of the containers, the data from those containers will not be included. You should come back to this tab later and import data when the containers are enabled.
+
+ :::image type="content" source="../media/integrated-power-bi-synapse-link/synapse-link-progress-existing-containers.png" alt-text="Progress of Synapse Link enabled on existing containers." border="true" lightbox="../media/integrated-power-bi-synapse-link/synapse-link-progress-existing-containers.png":::
+
+1. Depending on the amount of data in your containers, it may take a while to enable Synapse Link. To learn more, see [enable Synapse Link on existing containers](../configure-synapse-link.md#update-analytical-ttl) article.
+
+ You can check the progress in the portal as shown in the following screen. **Containers are enabled with Synapse Link when the progress reaches 100%.**
+
+ :::image type="content" source="../media/integrated-power-bi-synapse-link/synapse-link-existing-containers-registration-complete.png" alt-text="Synapse Link successfully enabled on the selected containers." border="true" lightbox="../media/integrated-power-bi-synapse-link/synapse-link-existing-containers-registration-complete.png":::
+
+1. From the **Select workspace** tab, choose the Azure Synapse Analytics workspace and select **Next**. This step will automatically create T-SQL views in Synapse Analytics, for the containers selected earlier. For more information on T-SQL views required to connect your Azure Cosmos DB to Power BI, see [Prepare views](../../synapse-analytics/sql/tutorial-connect-power-bi-desktop.md#3prepare-view) article.
+ > [!NOTE]
+   > Your Azure Cosmos DB container properties will be represented as columns in T-SQL views, including deep nested JSON data. This is a quick start for your BI dashboards. These views will be available in your Synapse workspace/database; you can also use these exact same views in Synapse Workspace for data exploration, data science, data engineering, and so on. Note that advanced scenarios may demand more complex views or fine-tuning of these views for better performance. For more information, see the [best practices for Synapse Link when using Synapse serverless SQL pools](../../synapse-analytics/sql/resources-self-help-sql-on-demand.md#azure-cosmos-db-performance-issues) article.
+
+1. You can either choose an existing workspace or create a new one. To select an existing workspace, provide the **Subscription**, **Workspace**, and the **Database** details. Azure portal will use your Azure AD credentials to automatically connect to your Synapse workspace and create T-SQL views. Make sure you have "Synapse administrator" permissions to this workspace.
+
+ :::image type="content" source="../media/integrated-power-bi-synapse-link/synapse-create-views.png" alt-text="Connect to Synapse Link workspace and create views." border="true" lightbox="../media/integrated-power-bi-synapse-link/synapse-create-views.png":::
+
+1. Next, select **Download .pbids** to download the Power BI data source file. Open the downloaded file. It contains the required connection information and opens Power BI desktop.
+
+ :::image type="content" source="../media/integrated-power-bi-synapse-link/download-powerbi-desktop-files.png" alt-text="Download the Power BI desktop files in .pbids format." border="true" lightbox="../media/integrated-power-bi-synapse-link/download-powerbi-desktop-files.png":::
+
+1. You can now connect to Azure Cosmos DB data from Power BI Desktop. A list of T-SQL views corresponding to the data in each container is displayed.
+
+ For example, the following screen shows vehicle fleet data. You can load this data for further analysis or transform it before loading.
+
+ :::image type="content" source="../media/integrated-power-bi-synapse-link/powerbi-desktop-select-view.png" alt-text="T-SQL views corresponding to the data in each container." border="true" lightbox="../media/integrated-power-bi-synapse-link/powerbi-desktop-select-view.png":::
+
+1. You can now start building the report using Azure Cosmos DB's analytical data. Any changes to your data will be reflected in the report as soon as the data is replicated to the analytical store, which typically happens within a couple of minutes.
++
+### Building BI reports using Power BI connector
+> [!NOTE]
+> Connecting to Azure Cosmos DB with the Power BI connector is currently supported for Azure Cosmos DB for NoSQL and API for Gremlin accounts only.
+
+1. Run Power BI Desktop.
+
+2. You can **Get Data**, see **Recent Sources**, or **Open Other Reports** directly from the welcome screen. Select the "X" at the top right corner to close the screen. The **Report** view of Power BI Desktop is displayed.
+
+ :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbireportview.png" alt-text="Power BI Desktop Report View - Power BI connector":::
+
+3. Select the **Home** ribbon, then click on **Get Data**. The **Get Data** window should appear.
+
+4. Click on **Azure**, select **Azure Cosmos DB (Beta)**, and then click **Connect**.
+
+ :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbigetdata.png" alt-text="Power BI Desktop Get Data - Power BI connector":::
+
+5. On the **Preview Connector** page, click **Continue**. The **Azure Cosmos DB** window appears.
+
+6. Specify the Azure Cosmos DB account endpoint URL you would like to retrieve the data from as shown below, and then click **OK**. To use your own account, you can retrieve the URL from the URI box in the **Keys** blade of the Azure portal. Optionally you can provide the database name, collection name or use the navigator to select the database and collection to identify where the data comes from.
+
+7. If you are connecting to this endpoint for the first time, you are prompted for the account key. For your own account, retrieve the key from the **Primary Key** box in the **Read-only Keys** blade of the Azure portal. Enter the appropriate key and then click **Connect**.
+
+ We recommend that you use the read-only key when building reports. This prevents unnecessary exposure of the primary key to potential security risks. The read-only key is available from the **Keys** blade of the Azure portal.
+
+8. When the account is successfully connected, the **Navigator** pane appears. The **Navigator** shows a list of databases under the account.
+
+9. Click to expand the database where the data for the report comes from. Now, select a collection that contains the data to retrieve.
+
+    The Preview pane shows a list of **Record** items. A Document is represented as a **Record** type in Power BI. Similarly, a nested JSON block inside a document is also a **Record**. To view the properties of the documents as columns, click the gray button with two arrows pointing in opposite directions that symbolizes expanding the record. It's located to the right of the container's name, in the same preview pane.
+
+10. Power BI Desktop Report view is where you can start creating reports to visualize data. You can create reports by dragging and dropping fields into the **Report** canvas.
+
+11. There are two ways to refresh data: ad hoc and scheduled. To refresh on demand, simply click **Refresh Now**. For a scheduled refresh, go to **Settings** and open the **Datasets** tab. Click **Scheduled Refresh** and set your schedule.
+
+## Next steps
+* To learn more about Power BI, see [Get started with Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/).
+* To learn more about Azure Cosmos DB, see the [Azure Cosmos DB documentation landing page](/azure/cosmos-db/).
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/powershell-samples.md
+
+ Title: Azure PowerShell samples for Azure Cosmos DB for NoSQL
+description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB for NoSQL
+++++ Last updated : 01/20/2021++++
+# Azure PowerShell samples for Azure Cosmos DB for NoSQL
+
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Azure Cosmos DB from our GitHub repository, [Azure Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+
+For PowerShell cmdlets for other APIs, see [PowerShell Samples for Cassandra](../cassandr)
+
+## Common Samples
+
+|Task | Description |
+|||
+|[Update an account](../scripts/powershell/common/account-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update an Azure Cosmos DB account's default consistency level. |
+|[Update an account's regions](../scripts/powershell/common/update-region.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update an Azure Cosmos DB account's regions. |
+|[Change failover priority or trigger failover](../scripts/powershell/common/failover-priority-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Change the regional failover priority of an Azure Cosmos DB account or trigger a manual failover. |
+|[Account keys or connection strings](../scripts/powershell/common/keys-connection-strings.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Get primary and secondary keys, connection strings or regenerate an account key of an Azure Cosmos DB account. |
+|[Create an Azure Cosmos DB Account with IP Firewall](../scripts/powershell/common/firewall-create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account with IP Firewall enabled. |
+|||
+
+## API for NoSQL Samples
+
+|Task | Description |
+|||
+|[Create an account, database and container](../scripts/powershell/sql/create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account, database and container. |
+|[Create an account, database and container with autoscale](../scripts/powershell/sql/autoscale.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account, database and container with autoscale. |
+|[Create a container with a large partition key](../scripts/powershell/sql/create-large-partition-key.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create a container with a large partition key. |
+|[Create a container with no index policy](../scripts/powershell/sql/create-index-none.md?toc=%2fpowershell%2fmodule%2ftoc.json) | Create an Azure Cosmos DB container with index policy turned off.|
+|[List or get databases or containers](../scripts/powershell/sql/list-get.md?toc=%2fpowershell%2fmodule%2ftoc.json)| List or get database or containers. |
+|[Perform throughput operations](../scripts/powershell/sql/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Perform throughput operations for a database or container including get, update and migrate between autoscale and standard throughput. |
+|[Lock resources from deletion](../scripts/powershell/sql/lock.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Prevent resources from being deleted with resource locks. |
+|||
cosmos-db Query Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query-cheat-sheet.md
+
+ Title: Azure Cosmos DB PDF query cheat sheets
+description: Printable PDF cheat sheets that help you use Azure Cosmos DB's SQL, MongoDB, Graph, and Table APIs to query your data
+++++++ Last updated : 05/28/2019+
+# Azure Cosmos DB query cheat sheets
+
+The **Azure Cosmos DB query cheat sheets** help you quickly write queries for your data by displaying common database queries, operations, functions, and operators in easy-to-print PDF reference sheets. The cheat sheets include reference information for the SQL, MongoDB, Table, and Gremlin APIs.
+
+Choose from a letter-sized or A3-sized download.
+
+## Letter-sized cheat sheets
+
+Download the [Azure Cosmos DB letter-sized query cheat sheets](https://go.microsoft.com/fwlink/?LinkId=623215) if you're going to print to letter-sized paper (8.5" x 11").
++
+## Oversized cheat sheets
+Download the [Azure Cosmos DB A3-sized query cheat sheets](https://go.microsoft.com/fwlink/?linkid=870413) if you're going to print using a plotter or large-scale printer on A3-sized paper (11.7" x 16.5").
++
+## Next steps
+For more help writing queries, see the following articles:
+* For API for NoSQL queries, see [Query using the API for NoSQL](tutorial-query.md), [SQL queries for Azure Cosmos DB](query/getting-started.md), and [SQL syntax reference](query/getting-started.md)
+* For MongoDB queries, see [Query using Azure Cosmos DB's API for MongoDB](../mongodb/tutorial-query.md) and [Azure Cosmos DB's API for MongoDB feature support and syntax](../mongodb/feature-support-32.md)
+* For API for Gremlin queries, see [Query using the API for Gremlin](../gremlin/tutorial-query.md) and [Azure Cosmos DB for Gremlin graph support](../gremlin/support.md)
+* For API for Table queries, see [Query using the API for Table](../table/tutorial-query.md)
cosmos-db Query Metrics Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query-metrics-performance.md
+
+ Title: Get SQL query performance & execution metrics
+description: Learn how to retrieve SQL query execution metrics and profile SQL query performance of Azure Cosmos DB requests.
++++ Last updated : 05/17/2019+++
+# Get SQL query execution metrics and analyze query performance using .NET SDK
+
+This article presents how to profile SQL query performance on Azure Cosmos DB. This profiling can be done using `QueryMetrics` retrieved from the .NET SDK and is detailed here. [QueryMetrics](/dotnet/api/microsoft.azure.documents.querymetrics) is a strongly typed object with information about the backend query execution. These metrics are documented in more detail in the [Tune Query Performance](./query-metrics.md) article.
+
+## Set the FeedOptions parameter
+
+All the overloads for [DocumentClient.CreateDocumentQuery](/dotnet/api/microsoft.azure.documents.client.documentclient.createdocumentquery) take in an optional [FeedOptions](/dotnet/api/microsoft.azure.documents.client.feedoptions) parameter. This option is what allows query execution to be tuned and parameterized.
+
+To collect the NoSQL query execution metrics, set the [PopulateQueryMetrics](/dotnet/api/microsoft.azure.documents.client.feedoptions.populatequerymetrics#P:Microsoft.Azure.Documents.Client.FeedOptions.PopulateQueryMetrics) parameter in the [FeedOptions](/dotnet/api/microsoft.azure.documents.client.feedoptions) to `true`. When `PopulateQueryMetrics` is set to `true`, the `FeedResponse` contains the relevant `QueryMetrics`.
+
+## Get query metrics with AsDocumentQuery()
+The following code sample shows how to retrieve metrics when using the [AsDocumentQuery()](/dotnet/api/microsoft.azure.documents.linq.documentqueryable.asdocumentquery) method:
+
+```csharp
+// Initialize this DocumentClient and Collection
+DocumentClient documentClient = null;
+DocumentCollection collection = null;
+
+// Setting PopulateQueryMetrics to true in the FeedOptions
+FeedOptions feedOptions = new FeedOptions
+{
+ PopulateQueryMetrics = true
+};
+
+string query = "SELECT TOP 5 * FROM c";
+IDocumentQuery<dynamic> documentQuery = documentClient.CreateDocumentQuery(collection.SelfLink, query, feedOptions).AsDocumentQuery();
+
+while (documentQuery.HasMoreResults)
+{
+ // Execute one continuation of the query
+ FeedResponse<dynamic> feedResponse = await documentQuery.ExecuteNextAsync();
+
+ // This dictionary maps the partitionId to the QueryMetrics of that query
+ IReadOnlyDictionary<string, QueryMetrics> partitionIdToQueryMetrics = feedResponse.QueryMetrics;
+
+ // At this point you have QueryMetrics which you can serialize using .ToString()
+ foreach (KeyValuePair<string, QueryMetrics> kvp in partitionIdToQueryMetrics)
+ {
+ string partitionId = kvp.Key;
+ QueryMetrics queryMetrics = kvp.Value;
+
+ // Do whatever logging you need
+ DoSomeLoggingOfQueryMetrics(query, partitionId, queryMetrics);
+ }
+}
+```
+## Aggregating QueryMetrics
+
+In the previous section, notice that there were multiple calls to the [ExecuteNextAsync](/dotnet/api/microsoft.azure.documents.linq.idocumentquery-1.executenextasync) method. Each call returned a `FeedResponse` object that has a dictionary of `QueryMetrics`, one for every continuation of the query. The following example shows how to aggregate these `QueryMetrics` using LINQ:
+
+```csharp
+List<QueryMetrics> queryMetricsList = new List<QueryMetrics>();
+
+while (documentQuery.HasMoreResults)
+{
+ // Execute one continuation of the query
+ FeedResponse<dynamic> feedResponse = await documentQuery.ExecuteNextAsync();
+
+ // This dictionary maps the partitionId to the QueryMetrics of that query
+ IReadOnlyDictionary<string, QueryMetrics> partitionIdToQueryMetrics = feedResponse.QueryMetrics;
+ queryMetricsList.AddRange(partitionIdToQueryMetrics.Values);
+}
+
+// Aggregate the QueryMetrics using the + operator overload of the QueryMetrics class.
+QueryMetrics aggregatedQueryMetrics = queryMetricsList.Aggregate((curr, acc) => curr + acc);
+Console.WriteLine(aggregatedQueryMetrics);
+```
+
+## Grouping query metrics by Partition ID
+
+You can group the `QueryMetrics` by the Partition ID. Grouping by Partition ID allows you to see if a specific Partition is causing performance issues when compared to others. The following example shows how to group `QueryMetrics` with LINQ:
+
+```csharp
+List<KeyValuePair<string, QueryMetrics>> partitionedQueryMetrics = new List<KeyValuePair<string, QueryMetrics>>();
+while (documentQuery.HasMoreResults)
+{
+ // Execute one continuation of the query
+ FeedResponse<dynamic> feedResponse = await documentQuery.ExecuteNextAsync();
+
+    // This dictionary maps the partitionId to the QueryMetrics of that query
+ IReadOnlyDictionary<string, QueryMetrics> partitionIdToQueryMetrics = feedResponse.QueryMetrics;
+ partitionedQueryMetrics.AddRange(partitionIdToQueryMetrics.ToList());
+}
+
+// Now we are able to group the query metrics by partitionId
+IEnumerable<IGrouping<string, KeyValuePair<string, QueryMetrics>>> groupedByQueryMetrics = partitionedQueryMetrics
+ .GroupBy(kvp => kvp.Key);
+
+// If we wanted to we could even aggregate the groupedby QueryMetrics
+foreach(IGrouping<string, KeyValuePair<string, QueryMetrics>> grouping in groupedByQueryMetrics)
+{
+ string partitionId = grouping.Key;
+ QueryMetrics aggregatedQueryMetricsForPartition = grouping
+ .Select(kvp => kvp.Value)
+ .Aggregate((curr, acc) => curr + acc);
+ DoSomeLoggingOfQueryMetrics(query, partitionId, aggregatedQueryMetricsForPartition);
+}
+```
+
+## LINQ on DocumentQuery
+
+You can also get the `FeedResponse` from a LINQ Query using the `AsDocumentQuery()` method:
+
+```csharp
+IDocumentQuery<Document> linqQuery = client.CreateDocumentQuery(collection.SelfLink, feedOptions)
+ .Take(1)
+ .Where(document => document.Id == "42")
+ .OrderBy(document => document.Timestamp)
+ .AsDocumentQuery();
+FeedResponse<Document> feedResponse = await linqQuery.ExecuteNextAsync<Document>();
+IReadOnlyDictionary<string, QueryMetrics> queryMetrics = feedResponse.QueryMetrics;
+```
+
+## Expensive Queries
+
+You can capture the request units consumed by each query to investigate expensive queries or queries that consume high throughput. You can get the request charge by using the [RequestCharge](/dotnet/api/microsoft.azure.documents.client.feedresponse-1.requestcharge) property in `FeedResponse`. To learn more about how to get the request charge using the Azure portal and different SDKs, see [find the request unit charge](find-request-unit-charge.md) article.
+
+```csharp
+string query = "SELECT * FROM c";
+IDocumentQuery<dynamic> documentQuery = documentClient.CreateDocumentQuery(collection.SelfLink, query, feedOptions).AsDocumentQuery();
+
+while (documentQuery.HasMoreResults)
+{
+ // Execute one continuation of the query
+ FeedResponse<dynamic> feedResponse = await documentQuery.ExecuteNextAsync();
+    double requestCharge = feedResponse.RequestCharge;
+
+    // Log the RequestCharge however you want.
+ DoSomeLogging(requestCharge);
+}
+```
+
+## Get the query execution time
+
+When calculating the time required to execute a client-side query, make sure that you include only the time spent calling the `ExecuteNextAsync` method and not other parts of your code base. Measuring just these calls tells you how long the query execution took, as shown in the following example:
+
+```csharp
+string query = "SELECT * FROM c";
+IDocumentQuery<dynamic> documentQuery = documentClient.CreateDocumentQuery(collection.SelfLink, query, feedOptions).AsDocumentQuery();
+Stopwatch queryExecutionTimeEndToEndTotal = new Stopwatch();
+while (documentQuery.HasMoreResults)
+{
+ // Execute one continuation of the query
+ queryExecutionTimeEndToEndTotal.Start();
+ FeedResponse<dynamic> feedResponse = await documentQuery.ExecuteNextAsync();
+ queryExecutionTimeEndToEndTotal.Stop();
+}
+
+// Log the elapsed time
+DoSomeLogging(queryExecutionTimeEndToEndTotal.Elapsed);
+```
+
+## Scan queries (commonly slow and expensive)
+
+A scan query refers to a query that wasn't served by the index, so many documents are loaded before returning the result set.
+
+Below is an example of a scan query:
+
+```sql
+SELECT VALUE c.description
+FROM c
+WHERE UPPER(c.description) = "BABYFOOD, DESSERT, FRUIT DESSERT, WITHOUT ASCORBIC ACID, JUNIOR"
+```
+
+This query's filter uses the system function UPPER, which isn't served from the index. Executing this query against a large collection produced the following query metrics for the first continuation:
+
+```
+QueryMetrics
+
+Retrieved Document Count : 60,951
+Retrieved Document Size : 399,998,938 bytes
+Output Document Count : 7
+Output Document Size : 510 bytes
+Index Utilization : 0.00 %
+Total Query Execution Time : 4,500.34 milliseconds
+ Query Preparation Times
+ Query Compilation Time : 0.09 milliseconds
+ Logical Plan Build Time : 0.05 milliseconds
+ Physical Plan Build Time : 0.04 milliseconds
+ Query Optimization Time : 0.01 milliseconds
+ Index Lookup Time : 0.01 milliseconds
+ Document Load Time : 4,177.66 milliseconds
+ Runtime Execution Times
+ Query Engine Times : 322.16 milliseconds
+ System Function Execution Time : 85.74 milliseconds
+ User-defined Function Execution Time : 0.00 milliseconds
+ Document Write Time : 0.01 milliseconds
+Client Side Metrics
+ Retry Count : 0
+ Request Charge : 4,059.95 RUs
+```
+
+Note the following values from the query metrics output:
+
+```
+Retrieved Document Count : 60,951
+Retrieved Document Size : 399,998,938 bytes
+```
+
+This query loaded 60,951 documents, which totaled 399,998,938 bytes. Loading this many bytes results in a high cost, or request unit charge. It also takes a long time to execute the query, as shown by the total query execution time:
+
+```
+Total Query Execution Time : 4,500.34 milliseconds
+```
+
+This means that the query took 4.5 seconds to execute (and this was only one continuation).
+
+To optimize this example query, avoid the use of UPPER in the filter. Instead, when documents are created or updated, the `c.description` values must be inserted in all uppercase characters. The query then becomes:
+
+```sql
+SELECT VALUE c.description
+FROM c
+WHERE c.description = "BABYFOOD, DESSERT, FRUIT DESSERT, WITHOUT ASCORBIC ACID, JUNIOR"
+```
+
+This query can now be served from the index.
+
+To learn more about tuning query performance, see the [Tune Query Performance](./query-metrics.md) article.
+
+## <a id="References"></a>References
+
+- [Azure Cosmos DB SQL specification](query/getting-started.md)
+- [ANSI SQL 2011](https://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=53681)
+- [JSON](https://json.org/)
+- [LINQ](/previous-versions/dotnet/articles/bb308959(v=msdn.10))
+
+## Next steps
+
+- [Tune query performance](query-metrics.md)
+- [Indexing overview](../index-overview.md)
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
cosmos-db Query Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query-metrics.md
+
+ Title: SQL query metrics for Azure Cosmos DB for NoSQL
+description: Learn about how to instrument and debug the SQL query performance of Azure Cosmos DB requests.
++++++ Last updated : 04/04/2022
+ms.devlang: csharp
++
+# Tuning query performance with Azure Cosmos DB
+
+Azure Cosmos DB provides an [API for NoSQL for querying data](query/getting-started.md) without requiring a schema or secondary indexes. This article provides the following information for developers:
+
+* High-level details on how Azure Cosmos DB's SQL query execution works
+* Details on query request and response headers, and client SDK options
+* Tips and best practices for query performance
+* Examples of how to utilize SQL execution statistics to debug query performance
+
+## About SQL query execution
+
+In Azure Cosmos DB, you store data in containers, which can grow to any [storage size or request throughput](../partitioning-overview.md). Azure Cosmos DB seamlessly scales data across physical partitions under the covers to handle data growth or increase in provisioned throughput. You can issue SQL queries to any container using the REST API or one of the supported [SQL SDKs](sdk-dotnet-v2.md).
+
+A brief overview of partitioning: you define a partition key like "city", which determines how data is split across physical partitions. Data belonging to a single partition key (for example, "city" == "Seattle") is stored within a physical partition, but typically a single physical partition has multiple partition keys. When a partition reaches its storage limit, the service seamlessly splits the partition into two new partitions and divides the partition key range evenly across these partitions. Since partitions are transient, the APIs use an abstraction of a "partition key range", which denotes the ranges of partition key hashes.
+
+When you issue a query to Azure Cosmos DB, the SDK performs these logical steps:
+
+* Parse the SQL query to determine the query execution plan.
+* If the query includes a filter against the partition key, like `SELECT * FROM c WHERE c.city = "Seattle"`, it is routed to a single partition. If the query does not have a filter on partition key, then it is executed in all partitions, and results are merged client side.
+* The query is executed within each partition in series or parallel, based on client configuration. Within each partition, the query might make one or more round trips depending on the query complexity, configured page size, and provisioned throughput of the collection. Each execution returns the number of [request units](../request-units.md) consumed by query execution, and optionally, query execution statistics.
+* The SDK performs a summarization of the query results across partitions. For example, if the query involves an ORDER BY across partitions, then results from individual partitions are merge-sorted to return results in globally sorted order. If the query is an aggregation like `COUNT`, the counts from individual partitions are summed to produce the overall count.
+
+The SDKs provide various options for query execution. For example, in .NET these options are available in the `FeedOptions` class. The following table describes these options and how they impact query execution time.
+
+| Option | Description |
+| | -- |
+| `EnableCrossPartitionQuery` | Must be set to true for any query that needs to be executed across more than one partition. This is an explicit flag to enable you to make conscious performance tradeoffs during development time. |
+| `EnableScanInQuery` | Must be set to true if you have opted out of indexing, but want to run the query via a scan anyway. Only applicable if indexing for the requested filter path is disabled. |
+| `MaxItemCount` | The maximum number of items to return per round trip to the server. By setting to -1, you can let the server manage the number of items. Or, you can lower this value to retrieve only a small number of items per round trip. |
+| `MaxBufferedItemCount` | This is a client-side option, and used to limit the memory consumption when performing cross-partition ORDER BY. A higher value helps reduce the latency of cross-partition sorting. |
+| `MaxDegreeOfParallelism` | Gets or sets the number of concurrent operations run client side during parallel query execution in the Azure Cosmos DB database service. A positive property value limits the number of concurrent operations to the set value. If it is set to less than 0, the system automatically decides the number of concurrent operations to run. |
+| `PopulateQueryMetrics` | Enables detailed logging of statistics of time spent in various phases of query execution like compilation time, index loop time, and document load time. You can share output from query statistics with Azure Support to diagnose query performance issues. |
+| `RequestContinuation` | You can resume query execution by passing in the opaque continuation token returned by any query. The continuation token encapsulates all state required for query execution. |
+| `ResponseContinuationTokenLimitInKb` | You can limit the maximum size of the continuation token returned by the server. You might need to set this if your application host has limits on response header size. Setting this may increase the overall duration and RUs consumed for the query. |
+
+For example, consider a query filtered on the partition key, issued against a collection with `/city` as the partition key and provisioned with 100,000 RU/s of throughput. You request this query using `CreateDocumentQuery<T>` in .NET like the following:
+
+```cs
+IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ new FeedOptions
+ {
+ PopulateQueryMetrics = true,
+ MaxItemCount = -1,
+ MaxDegreeOfParallelism = -1,
+ EnableCrossPartitionQuery = true
+ }).AsDocumentQuery();
+
+FeedResponse<dynamic> result = await query.ExecuteNextAsync();
+```
+
+The SDK snippet shown above corresponds to the following REST API request:
+
+```
+POST https://arramacquerymetrics-westus.documents.azure.com/dbs/db/colls/sample/docs HTTP/1.1
+x-ms-continuation:
+x-ms-documentdb-isquery: True
+x-ms-max-item-count: -1
+x-ms-documentdb-query-enablecrosspartition: True
+x-ms-documentdb-query-parallelizecrosspartitionquery: True
+x-ms-documentdb-query-iscontinuationexpected: True
+x-ms-documentdb-populatequerymetrics: True
+x-ms-date: Tue, 27 Jun 2017 21:52:18 GMT
+authorization: type%3dmaster%26ver%3d1.0%26sig%3drp1Hi83Y8aVV5V6LzZ6xhtQVXRAMz0WNMnUuvriUv%2b4%3d
+x-ms-session-token: 7:8,6:2008,5:8,4:2008,3:8,2:2008,1:8,0:8,9:8,8:4008
+Cache-Control: no-cache
+x-ms-consistency-level: Session
+User-Agent: documentdb-dotnet-sdk/1.14.1 Host/32-bit MicrosoftWindowsNT/6.2.9200.0
+x-ms-version: 2017-02-22
+Accept: application/json
+Content-Type: application/query+json
+Host: arramacquerymetrics-westus.documents.azure.com
+Content-Length: 52
+Expect: 100-continue
+
+{"query":"SELECT * FROM c WHERE c.city = 'Seattle'"}
+```
+
+Each query execution page corresponds to a REST API `POST` with the `Accept: application/query+json` header, and the SQL query in the body. Each query makes one or more round trips to the server with the `x-ms-continuation` token echoed between the client and server to resume execution. The configuration options in FeedOptions are passed to the server in the form of request headers. For example, `MaxItemCount` corresponds to `x-ms-max-item-count`.
+
+The request returns the following (truncated for readability) response:
+
+```
+HTTP/1.1 200 Ok
+Cache-Control: no-store, no-cache
+Pragma: no-cache
+Transfer-Encoding: chunked
+Content-Type: application/json
+Server: Microsoft-HTTPAPI/2.0
+Strict-Transport-Security: max-age=31536000
+x-ms-last-state-change-utc: Tue, 27 Jun 2017 21:01:57.561 GMT
+x-ms-resource-quota: documentSize=10240;documentsSize=10485760;documentsCount=-1;collectionSize=10485760;
+x-ms-resource-usage: documentSize=1;documentsSize=884;documentsCount=2000;collectionSize=1408;
+x-ms-item-count: 2000
+x-ms-schemaversion: 1.3
+x-ms-alt-content-path: dbs/db/colls/sample
+x-ms-content-path: +9kEANVq0wA=
+x-ms-xp-role: 1
+x-ms-documentdb-query-metrics: totalExecutionTimeInMs=33.67;queryCompileTimeInMs=0.06;queryLogicalPlanBuildTimeInMs=0.02;queryPhysicalPlanBuildTimeInMs=0.10;queryOptimizationTimeInMs=0.00;VMExecutionTimeInMs=32.56;indexLookupTimeInMs=0.36;documentLoadTimeInMs=9.58;systemFunctionExecuteTimeInMs=0.00;userFunctionExecuteTimeInMs=0.00;retrievedDocumentCount=2000;retrievedDocumentSize=1125600;outputDocumentCount=2000;writeOutputTimeInMs=18.10;indexUtilizationRatio=1.00
+x-ms-request-charge: 604.42
+x-ms-serviceversion: version=1.14.34.4
+x-ms-activity-id: 0df8b5f6-83b9-4493-abda-cce6d0f91486
+x-ms-session-token: 2:2008
+x-ms-gatewayversion: version=1.14.33.2
+Date: Tue, 27 Jun 2017 21:59:49 GMT
+```
+
+The key response headers returned from the query include the following:
+
+| Option | Description |
+| | -- |
+| `x-ms-item-count` | The number of items returned in the response. This is dependent on the supplied `x-ms-max-item-count`, the number of items that can fit within the maximum response payload size, the provisioned throughput, and query execution time. |
+| `x-ms-continuation` | The continuation token to resume execution of the query, if additional results are available. |
+| `x-ms-documentdb-query-metrics` | The query statistics for the execution. This is a delimited string containing statistics of time spent in the various phases of query execution. Returned if `x-ms-documentdb-populatequerymetrics` is set to `True`. |
+| `x-ms-request-charge` | The number of [request units](../request-units.md) consumed by the query. |
+
+For details on the REST API request headers and options, see [Querying resources using the REST API](/rest/api/cosmos-db/querying-cosmosdb-resources-using-the-rest-api).
+
+## Best practices for query performance
+The following are the most common factors that impact Azure Cosmos DB query performance. We dig deeper into each of these topics in this article.
+
+| Factor | Tip |
+| | --|
+| Provisioned throughput | Measure RU per query, and ensure that you have the required provisioned throughput for your queries. |
+| Partitioning and partition keys | Favor queries with the partition key value in the filter clause for low latency. |
+| SDK and query options | Follow SDK best practices like direct connectivity, and tune client-side query execution options. |
+| Indexing Policy | Ensure that you have the required indexing paths/policy for the query. |
+| Query execution metrics | Analyze the query execution metrics to identify potential rewrites of query and data shapes. |
+
+### Provisioned throughput
+In Azure Cosmos DB, you create containers of data, each with reserved throughput expressed in request units (RU) per second. A read of a 1-KB document is 1 RU, and every operation (including queries) is normalized to a fixed number of RUs based on its complexity. For example, if you have 1,000 RU/s provisioned for your container, and you have a query like `SELECT * FROM c WHERE c.city = 'Seattle'` that consumes 5 RUs, then you can perform (1,000 RU/s) / (5 RU/query) = 200 such queries per second.
+
+If you submit more than 200 queries per second, the service starts rate-limiting incoming requests above that rate. The SDKs automatically handle this case by performing a backoff/retry, so you might notice higher latency for these queries. Increasing the provisioned throughput to the required value improves your query latency and throughput.
+
+To learn more about request units, see [Request units](../request-units.md).
+
+### Partitioning and partition keys
+With Azure Cosmos DB, typically queries perform in the following order from fastest/most efficient to slower/less efficient.
+
+* GET on a single partition key and item key
+* Query with a filter clause on a single partition key
+* Query without an equality or range filter clause on any property
+* Query without filters
+
+Queries that need to consult all partitions have higher latency and can consume more RUs. Because each partition has automatic indexing against all properties, the query can still be served efficiently from the index in this case. You can make queries that span partitions faster by using the parallelism options.
+
+To learn more about partitioning and partition keys, see [Partitioning in Azure Cosmos DB](../partitioning-overview.md).
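+
+For example, with the .NET SDK v2 you can pin a query to a single logical partition, or explicitly opt in to a parallel cross-partition query. A sketch, assuming a container partitioned on a hypothetical `/city` property and the same `client` as the other samples:
+
+```cs
+// Single-partition query: the filter and FeedOptions.PartitionKey target one
+// logical partition, which is the lowest-latency, lowest-RU option.
+var singlePartition = client.CreateDocumentQuery(
+    UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
+    "SELECT * FROM c WHERE c.city = 'Seattle'",
+    new FeedOptions { PartitionKey = new PartitionKey("Seattle") }).AsDocumentQuery();
+
+// Cross-partition query: fans out to all partitions; the parallelism options
+// reduce latency at the cost of more client-side resources.
+var crossPartition = client.CreateDocumentQuery(
+    UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
+    "SELECT * FROM c WHERE c.lastName = 'Andersen'",
+    new FeedOptions
+    {
+        EnableCrossPartitionQuery = true,
+        MaxDegreeOfParallelism = -1,
+        MaxBufferedItemCount = 100
+    }).AsDocumentQuery();
+```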
+
+### SDK and query options
+See [Query performance tips](performance-tips-query-sdk.md) and [Performance testing](performance-testing.md) for how to get the best client-side performance from Azure Cosmos DB using our SDKs.
+
+### Network latency
+See [Azure Cosmos DB global distribution](tutorial-global-distribution.md) for how to set up global distribution, and connect to the closest region. Network latency has a significant impact on query performance when you need to make multiple round-trips or retrieve a large result set from the query.
+
+The section on query execution metrics explains how to retrieve the server execution time of queries (`totalExecutionTimeInMs`), so that you can differentiate between time spent in query execution and time spent in network transit.
+
+### Indexing policy
+See [Configuring indexing policy](../index-policy.md) for indexing paths, kinds, and modes, and how they impact query execution. By default, the indexing policy uses range indexing for strings, which is effective for equality queries. If you need range queries for strings, we recommend specifying the Range index type for all strings.
+
+By default, Azure Cosmos DB applies automatic indexing to all data. For high-performance insert scenarios, consider excluding paths, because this reduces the RU cost of each insert operation.
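+
+For example, an indexing policy that indexes everything except one large, never-queried property might look like the following sketch; the `/largePayload/*` path is a hypothetical example.
+
+```json
+{
+  "indexingMode": "consistent",
+  "automatic": true,
+  "includedPaths": [
+    { "path": "/*" }
+  ],
+  "excludedPaths": [
+    { "path": "/largePayload/*" }
+  ]
+}
+```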
+
+## Query execution metrics
+You can obtain detailed metrics on query execution by passing in the optional `x-ms-documentdb-populatequerymetrics` header (`FeedOptions.PopulateQueryMetrics` in the .NET SDK). The value returned in `x-ms-documentdb-query-metrics` has the following key-value pairs meant for advanced troubleshooting of query execution.
+
+```cs
+IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ new FeedOptions
+ {
+ PopulateQueryMetrics = true,
+ }).AsDocumentQuery();
+
+FeedResponse<dynamic> result = await query.ExecuteNextAsync();
+
+// Returns metrics by partition key range Id
+IReadOnlyDictionary<string, QueryMetrics> metrics = result.QueryMetrics;
+
+```
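+
+Each entry in the returned dictionary is keyed by partition key range ID. A minimal way to inspect the values is sketched below; it assumes the `metrics` variable from the sample above and that `QueryMetrics.ToString()` renders a readable summary of the statistics described in the following table.
+
+```cs
+// Sketch: inspect the per-partition metrics returned above. Each dictionary
+// entry corresponds to one partition key range of the container.
+foreach (KeyValuePair<string, QueryMetrics> entry in metrics)
+{
+    Console.WriteLine($"Partition key range {entry.Key}: {entry.Value}");
+}
+```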
+
+| Metric | Unit | Description |
+| | --| -- |
+| `totalExecutionTimeInMs` | milliseconds | Query execution time |
+| `queryCompileTimeInMs` | milliseconds | Query compile time |
+| `queryLogicalPlanBuildTimeInMs` | milliseconds | Time to build logical query plan |
+| `queryPhysicalPlanBuildTimeInMs` | milliseconds | Time to build physical query plan |
+| `queryOptimizationTimeInMs` | milliseconds | Time spent in optimizing query |
+| `VMExecutionTimeInMs` | milliseconds | Time spent in query runtime |
+| `indexLookupTimeInMs` | milliseconds | Time spent in physical index layer |
+| `documentLoadTimeInMs` | milliseconds | Time spent in loading documents |
+| `systemFunctionExecuteTimeInMs` | milliseconds | Total time spent executing system (built-in) functions |
+| `userFunctionExecuteTimeInMs` | milliseconds | Total time spent executing user-defined functions |
+| `retrievedDocumentCount` | count | Total number of retrieved documents |
+| `retrievedDocumentSize` | bytes | Total size of retrieved documents |
+| `outputDocumentCount` | count | Number of output documents |
+| `writeOutputTimeInMs` | milliseconds | Time spent writing the output |
+| `indexUtilizationRatio` | ratio (<=1) | Ratio of the number of documents matched by the filter to the number of documents loaded |
+
+The client SDKs may internally make multiple query operations to serve the query within each partition. The client makes more than one call per partition if the total results exceed `x-ms-max-item-count`, if the query exceeds the provisioned throughput for the partition, if the query payload reaches the maximum size per page, or if the query reaches the system-allocated timeout limit. Each partial query execution returns an `x-ms-documentdb-query-metrics` header for that page.
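+
+To make sure you account for every page, drain the query until `HasMoreResults` is false and aggregate the charge across pages. A sketch with the .NET SDK v2, assuming the `query` object created in the earlier sample:
+
+```cs
+// Sketch: drain all pages of the query and total the request charge.
+// Each page carries its own x-ms-request-charge and query metrics.
+double totalRequestCharge = 0;
+int totalDocuments = 0;
+
+while (query.HasMoreResults)
+{
+    FeedResponse<dynamic> page = await query.ExecuteNextAsync();
+    totalRequestCharge += page.RequestCharge;
+    totalDocuments += page.Count;
+}
+
+Console.WriteLine($"Returned {totalDocuments} documents for {totalRequestCharge} RUs");
+```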
+
+Here are some sample queries, and how to interpret some of the metrics returned from query execution:
+
+| Query | Sample Metric | Description |
+| | --| -- |
+| `SELECT TOP 100 * FROM c` | `"RetrievedDocumentCount": 101` | The number of documents retrieved is 100+1 to match the TOP clause. Query time is mostly spent in `WriteOutputTime` and `DocumentLoadTime` since it is a scan. |
+| `SELECT TOP 500 * FROM c` | `"RetrievedDocumentCount": 501` | RetrievedDocumentCount is now higher (500+1 to match the TOP clause). |
+| `SELECT * FROM c WHERE c.N = 55` | `"IndexLookupTime": "00:00:00.0009500"` | About 0.9 ms is spent in IndexLookupTime for a key lookup, because it's an index lookup on `/N/?`. |
+| `SELECT * FROM c WHERE c.N > 55` | `"IndexLookupTime": "00:00:00.0017700"` | Slightly more time (1.7 ms) spent in IndexLookupTime over a range scan, because it's an index lookup on `/N/?`. |
+| `SELECT TOP 500 c.N FROM c` | `"IndexLookupTime": "00:00:00.0017700"` | Same time spent on `DocumentLoadTime` as previous queries, but lower `WriteOutputTime` because we're projecting only one property. |
+| `SELECT TOP 500 udf.toPercent(c.N) FROM c` | `"UserDefinedFunctionExecutionTime": "00:00:00.2136500"` | About 213 ms is spent in `UserDefinedFunctionExecutionTime` executing the UDF on each value of `c.N`. |
+| `SELECT TOP 500 c.Name FROM c WHERE STARTSWITH(c.Name, 'Den')` | `"IndexLookupTime": "00:00:00.0006400", "SystemFunctionExecutionTime": "00:00:00.0074100"` | About 0.6 ms is spent in `IndexLookupTime` on `/Name/?`. Most of the query execution time (~7 ms) is spent in `SystemFunctionExecutionTime`. |
+| `SELECT TOP 500 c.Name FROM c WHERE STARTSWITH(LOWER(c.Name), 'den')` | `"IndexLookupTime": "00:00:00", "RetrievedDocumentCount": 2491, "OutputDocumentCount": 500` | Query is performed as a scan because it uses `LOWER`, and 500 out of 2491 retrieved documents are returned. |
++
+## Next steps
+* To learn about the supported SQL query operators and keywords, see [SQL query](query/getting-started.md).
+* To learn about request units, see [request units](../request-units.md).
+* To learn about indexing policy, see [indexing policy](../index-policy.md)
cosmos-db Abs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/abs.md
+
+ Title: ABS in Azure Cosmos DB query language
+description: Learn about how the Absolute(ABS) SQL system function in Azure Cosmos DB returns the positive value of the specified numeric expression
++++ Last updated : 03/04/2020+++
+# ABS (Azure Cosmos DB)
+
+ Returns the absolute (positive) value of the specified numeric expression.
+
+## Syntax
+
+```sql
+ABS (<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example shows the results of using the `ABS` function on three different numbers.
+
+```sql
+SELECT ABS(-1) AS abs1, ABS(0) AS abs2, ABS(1) AS abs3
+```
+
+ Here is the result set.
+
+```json
+[{"abs1": 1, "abs2": 0, "abs3": 1}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Acos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/acos.md
+
+ Title: ACOS in Azure Cosmos DB query language
+description: Learn about how the ACOS (arccosine) SQL system function in Azure Cosmos DB returns the angle, in radians, whose cosine is the specified numeric expression
++++ Last updated : 03/03/2020+++
+# ACOS (Azure Cosmos DB)
+
+ Returns the angle, in radians, whose cosine is the specified numeric expression; also called arccosine.
+
+## Syntax
+
+```sql
+ACOS(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the `ACOS` of -1.
+
+```sql
+SELECT ACOS(-1) AS acos
+```
+
+ Here's the result set.
+
+```json
+[{"acos": 3.1415926535897931}]
+```
+
+## Remarks
+
+This system function won't utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Aggregate Avg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/aggregate-avg.md
+
+ Title: AVG in Azure Cosmos DB query language
+description: Learn about the Average (AVG) SQL system function in Azure Cosmos DB.
++++ Last updated : 12/02/2020++++
+# AVG (Azure Cosmos DB)
+
+This aggregate function returns the average of the values in the expression.
+
+## Syntax
+
+```sql
+AVG(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+Returns a numeric expression.
+
+## Examples
+
+The following example returns the average value of `propertyA`:
+
+```sql
+SELECT AVG(c.propertyA)
+FROM c
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy). If any arguments in `AVG` are string, boolean, or null, the entire aggregate system function will return `undefined`. If any argument has an `undefined` value, it will not impact the `AVG` calculation.
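+
+If the property can hold non-numeric values, you can restrict the aggregate to numbers so that a single string or Boolean value doesn't make the whole result `undefined`. A sketch, using the hypothetical `propertyA` from the example above and the built-in `IS_NUMBER` type-checking function:
+
+```sql
+SELECT VALUE AVG(c.propertyA)
+FROM c
+WHERE IS_NUMBER(c.propertyA)
+```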
+
+## Next steps
+
+- [Mathematical functions in Azure Cosmos DB](mathematical-functions.md)
+- [System functions in Azure Cosmos DB](system-functions.md)
+- [Aggregate functions in Azure Cosmos DB](aggregate-functions.md)
cosmos-db Aggregate Count https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/aggregate-count.md
+
+ Title: COUNT in Azure Cosmos DB query language
+description: Learn about the Count (COUNT) SQL system function in Azure Cosmos DB.
++++ Last updated : 12/02/2020++++
+# COUNT (Azure Cosmos DB)
+
+This system function returns the count of the values in the expression.
+
+## Syntax
+
+```sql
+COUNT(<scalar_expr>)
+```
+
+## Arguments
+
+*scalar_expr*
+ Is any scalar expression
+
+## Return types
+
+Returns a numeric expression.
+
+## Examples
+
+The following example returns the total count of items in a container:
+
+```sql
+SELECT COUNT(1)
+FROM c
+```
+COUNT can take any scalar expression as input. The following query produces an equivalent result:
+
+```sql
+SELECT COUNT(2)
+FROM c
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy) for any properties in the query's filter.
+
+## Next steps
+
+- [Mathematical functions in Azure Cosmos DB](mathematical-functions.md)
+- [System functions in Azure Cosmos DB](system-functions.md)
+- [Aggregate functions in Azure Cosmos DB](aggregate-functions.md)
cosmos-db Aggregate Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/aggregate-functions.md
+
+ Title: Aggregate functions in Azure Cosmos DB
+description: Learn about SQL aggregate function syntax, types of aggregate functions supported by Azure Cosmos DB.
+++++ Last updated : 12/02/2020+++
+# Aggregate functions in Azure Cosmos DB
+
+Aggregate functions perform a calculation on a set of values in the `SELECT` clause and return a single value. For example, the following query returns the count of items within a container:
+
+```sql
+ SELECT COUNT(1)
+ FROM c
+```
+
+## Types of aggregate functions
+
+The API for NoSQL supports the following aggregate functions. `SUM` and `AVG` operate on numeric values, and `COUNT`, `MIN`, and `MAX` work on numbers, strings, Booleans, and nulls.
+
+| Function | Description |
+|-|-|
+| [AVG](aggregate-avg.md) | Returns the average of the values in the expression. |
+| [COUNT](aggregate-count.md) | Returns the number of items in the expression. |
+| [MAX](aggregate-max.md) | Returns the maximum value in the expression. |
+| [MIN](aggregate-min.md) | Returns the minimum value in the expression. |
+| [SUM](aggregate-sum.md) | Returns the sum of all the values in the expression. |
++
+You can also return only the scalar value of the aggregate by using the VALUE keyword. For example, the following query returns the count of values as a single number:
+
+```sql
+ SELECT VALUE COUNT(1)
+ FROM Families f
+```
+
+The results are:
+
+```json
+ [ 2 ]
+```
+
+You can also combine aggregations with filters. For example, the following query returns the count of items with the address state of `WA`.
+
+```sql
+ SELECT VALUE COUNT(1)
+ FROM Families f
+ WHERE f.address.state = "WA"
+```
+
+The results are:
+
+```json
+ [ 1 ]
+```
+
+## Remarks
+
+These aggregate system functions will benefit from a [range index](../../index-policy.md#includeexclude-strategy). If you expect to do an `AVG`, `COUNT`, `MAX`, `MIN`, or `SUM` on a property, you should [include the relevant path in the indexing policy](../../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [System functions](system-functions.md)
+- [User defined functions](udfs.md)
cosmos-db Aggregate Max https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/aggregate-max.md
+
+ Title: MAX in Azure Cosmos DB query language
+description: Learn about the Max (MAX) SQL system function in Azure Cosmos DB.
++++ Last updated : 12/02/2020++++
+# MAX (Azure Cosmos DB)
+
+This aggregate function returns the maximum of the values in the expression.
+
+## Syntax
+
+```sql
+MAX(<scalar_expr>)
+```
+
+## Arguments
+
+*scalar_expr*
+ Is a scalar expression.
+
+## Return types
+
+Returns a scalar expression.
+
+## Examples
+
+The following example returns the maximum value of `propertyA`:
+
+```sql
+SELECT MAX(c.propertyA)
+FROM c
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy). The arguments in `MAX` can be number, string, boolean, or null. Any undefined values will be ignored.
+
+When comparing data of different types, the following priority order is used, in descending order (see the example after this list):
+
+- string
+- number
+- boolean
+- null
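+
+Because strings sort above numbers in this ordering, a container that mixes types for the same property can return a string from `MAX`. The following sketch keeps the aggregate numeric by filtering with the built-in `IS_NUMBER` function, using the hypothetical `propertyA` from the example above:
+
+```sql
+SELECT VALUE MAX(c.propertyA)
+FROM c
+WHERE IS_NUMBER(c.propertyA)
+```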
+
+## Next steps
+
+- [Mathematical functions in Azure Cosmos DB](mathematical-functions.md)
+- [System functions in Azure Cosmos DB](system-functions.md)
+- [Aggregate functions in Azure Cosmos DB](aggregate-functions.md)
cosmos-db Aggregate Min https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/aggregate-min.md
+
+ Title: MIN in Azure Cosmos DB query language
+description: Learn about the Min (MIN) SQL system function in Azure Cosmos DB.
++++ Last updated : 12/02/2020++++
+# MIN (Azure Cosmos DB)
+
+This aggregate function returns the minimum of the values in the expression.
+
+## Syntax
+
+```sql
+MIN(<scalar_expr>)
+```
+
+## Arguments
+
+*scalar_expr*
+ Is a scalar expression.
+
+## Return types
+
+Returns a scalar expression.
+
+## Examples
+
+The following example returns the minimum value of `propertyA`:
+
+```sql
+SELECT MIN(c.propertyA)
+FROM c
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy). The arguments in `MIN` can be number, string, boolean, or null. Any undefined values will be ignored.
+
+When comparing data of different types, the following priority order is used (in ascending order):
+
+- null
+- boolean
+- number
+- string
+
+## Next steps
+
+- [Mathematical functions in Azure Cosmos DB](mathematical-functions.md)
+- [System functions in Azure Cosmos DB](system-functions.md)
+- [Aggregate functions in Azure Cosmos DB](aggregate-functions.md)
cosmos-db Aggregate Sum https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/aggregate-sum.md
+
+ Title: SUM in Azure Cosmos DB query language
+description: Learn about the Sum (SUM) SQL system function in Azure Cosmos DB.
++++ Last updated : 12/02/2020++++
+# SUM (Azure Cosmos DB)
+
+This aggregate function returns the sum of the values in the expression.
+
+## Syntax
+
+```sql
+SUM(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+Returns a numeric expression.
+
+## Examples
+
+The following example returns the sum of `propertyA`:
+
+```sql
+SELECT SUM(c.propertyA)
+FROM c
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy). If any arguments in `SUM` are string, boolean, or null, the entire aggregate system function will return `undefined`. If any argument has an `undefined` value, it will not impact the `SUM` calculation.
+
+## Next steps
+
+- [Mathematical functions in Azure Cosmos DB](mathematical-functions.md)
+- [System functions in Azure Cosmos DB](system-functions.md)
+- [Aggregate functions in Azure Cosmos DB](aggregate-functions.md)
cosmos-db Array Concat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/array-concat.md
+
+ Title: ARRAY_CONCAT in Azure Cosmos DB query language
+description: Learn about how the Array Concat SQL system function in Azure Cosmos DB returns an array that is the result of concatenating two or more array values
++++ Last updated : 03/03/2020+++
+# ARRAY_CONCAT (Azure Cosmos DB)
+
+ Returns an array that is the result of concatenating two or more array values.
+
+## Syntax
+
+```sql
+ARRAY_CONCAT (<arr_expr1>, <arr_expr2> [, <arr_exprN>])
+```
+
+## Arguments
+
+*arr_expr*
+ Is an array expression to concatenate to the other values. The `ARRAY_CONCAT` function requires at least two *arr_expr* arguments.
+
+## Return types
+
+ Returns an array expression.
+
+## Examples
+
+ The following example shows how to concatenate two arrays.
+
+```sql
+SELECT ARRAY_CONCAT(["apples", "strawberries"], ["bananas"]) AS arrayConcat
+```
+
+ Here is the result set.
+
+```json
+[{"arrayConcat": ["apples", "strawberries", "bananas"]}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Array functions Azure Cosmos DB](array-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Array Contains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/array-contains.md
+
+ Title: ARRAY_CONTAINS in Azure Cosmos DB query language
+description: Learn about how the Array Contains SQL system function in Azure Cosmos DB returns a Boolean indicating whether the array contains the specified value
++++ Last updated : 09/13/2019+++
+# ARRAY_CONTAINS (Azure Cosmos DB)
+
+Returns a Boolean indicating whether the array contains the specified value. You can check for a partial or full match of an object by using a boolean expression within the command.
+
+## Syntax
+
+```sql
+ARRAY_CONTAINS (<arr_expr>, <expr> [, bool_expr])
+```
+
+## Arguments
+
+*arr_expr*
+ Is the array expression to be searched.
+
+*expr*
+ Is the expression to be found.
+
+*bool_expr*
+ Is a boolean expression. If it evaluates to 'true' and if the specified search value is an object, the command checks for a partial match (the search object is a subset of one of the objects). If it evaluates to 'false', the command checks for a full match of all objects within the array. The default value if not specified is false.
+
+## Return types
+
+ Returns a Boolean value.
+
+## Examples
+
+ The following example shows how to check for membership in an array using `ARRAY_CONTAINS`.
+
+```sql
+SELECT
+ ARRAY_CONTAINS(["apples", "strawberries", "bananas"], "apples") AS b1,
+ ARRAY_CONTAINS(["apples", "strawberries", "bananas"], "mangoes") AS b2
+```
+
+ Here is the result set.
+
+```json
+[{"b1": true, "b2": false}]
+```
+
+The following example shows how to check for a partial match of a JSON object in an array using `ARRAY_CONTAINS`.
+
+```sql
+SELECT
+ ARRAY_CONTAINS([{"name": "apples", "fresh": true}, {"name": "strawberries", "fresh": true}], {"name": "apples"}, true) AS b1,
+ ARRAY_CONTAINS([{"name": "apples", "fresh": true}, {"name": "strawberries", "fresh": true}], {"name": "apples"}) AS b2,
+ ARRAY_CONTAINS([{"name": "apples", "fresh": true}, {"name": "strawberries", "fresh": true}], {"name": "mangoes"}, true) AS b3
+```
+
+ Here is the result set.
+
+```json
+[{
+ "b1": true,
+ "b2": false,
+ "b3": false
+}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Array functions Azure Cosmos DB](array-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Array Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/array-functions.md
+
+ Title: Array functions in Azure Cosmos DB query language
+description: Learn about how the array functions let you perform operations on arrays in Azure Cosmos DB
++++ Last updated : 09/13/2019+++
+# Array functions (Azure Cosmos DB)
+
+The array functions let you perform operations on arrays in Azure Cosmos DB.
+
+## Functions
+
+The following scalar functions perform an operation on an array input value and return a numeric, Boolean, or array value:
+
+* [ARRAY_CONCAT](array-concat.md)
+* [ARRAY_CONTAINS](array-contains.md)
+* [ARRAY_LENGTH](array-length.md)
+* [ARRAY_SLICE](array-slice.md)
++
+
+
+
+## Next steps
+
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [User Defined Functions](udfs.md)
+- [Aggregates](aggregate-functions.md)
cosmos-db Array Length https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/array-length.md
+
+ Title: ARRAY_LENGTH in Azure Cosmos DB query language
+description: Learn about how the Array length SQL system function in Azure Cosmos DB returns the number of elements of the specified array expression
++++ Last updated : 03/03/2020+++
+# ARRAY_LENGTH (Azure Cosmos DB)
+
+ Returns the number of elements of the specified array expression.
+
+## Syntax
+
+```sql
+ARRAY_LENGTH(<arr_expr>)
+```
+
+## Arguments
+
+*arr_expr*
+ Is an array expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example shows how to get the length of an array using `ARRAY_LENGTH`.
+
+```sql
+SELECT ARRAY_LENGTH(["apples", "strawberries", "bananas"]) AS len
+```
+
+ Here is the result set.
+
+```json
+[{"len": 3}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Array functions Azure Cosmos DB](array-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Array Slice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/array-slice.md
+
+ Title: ARRAY_SLICE in Azure Cosmos DB query language
+description: Learn about how the Array slice SQL system function in Azure Cosmos DB returns part of an array expression
++++ Last updated : 03/03/2020+++
+# ARRAY_SLICE (Azure Cosmos DB)
+
+ Returns part of an array expression.
+
+## Syntax
+
+```sql
+ARRAY_SLICE (<arr_expr>, <num_expr> [, <num_expr>])
+```
+
+## Arguments
+
+*arr_expr*
+ Is any array expression.
+
+*num_expr*
+ Zero-based numeric index at which to begin the array. Negative values may be used to specify the starting index relative to the last element of the array; for example, -1 references the last element in the array.
+
+*num_expr*
+ Optional numeric expression that sets the maximum number of elements in the resulting array.
+
+## Return types
+
+ Returns an array expression.
+
+## Examples
+
+ The following example shows how to get different slices of an array using `ARRAY_SLICE`.
+
+```sql
+SELECT
+ ARRAY_SLICE(["apples", "strawberries", "bananas"], 1) AS s1,
+ ARRAY_SLICE(["apples", "strawberries", "bananas"], 1, 1) AS s2,
+ ARRAY_SLICE(["apples", "strawberries", "bananas"], -2, 1) AS s3,
+ ARRAY_SLICE(["apples", "strawberries", "bananas"], -2, 2) AS s4,
+ ARRAY_SLICE(["apples", "strawberries", "bananas"], 1, 0) AS s5,
+ ARRAY_SLICE(["apples", "strawberries", "bananas"], 1, 1000) AS s6,
+ ARRAY_SLICE(["apples", "strawberries", "bananas"], 1, -100) AS s7
+
+```
+
+ Here is the result set.
+
+```json
+[{
+ "s1": ["strawberries", "bananas"],
+ "s2": ["strawberries"],
+ "s3": ["strawberries"],
+ "s4": ["strawberries", "bananas"],
+ "s5": [],
+ "s6": ["strawberries", "bananas"],
+ "s7": []
+}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Array functions Azure Cosmos DB](array-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Asin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/asin.md
+
+ Title: ASIN in Azure Cosmos DB query language
+description: Learn about how the Arcsine (ASIN) SQL system function in Azure Cosmos DB returns the angle, in radians, whose sine is the specified numeric expression
++++ Last updated : 03/04/2020+++
+# ASIN (Azure Cosmos DB)
+
+ Returns the angle, in radians, whose sine is the specified numeric expression. This is also called arcsine.
+
+## Syntax
+
+```sql
+ASIN(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the `ASIN` of -1.
+
+```sql
+SELECT ASIN(-1) AS asin
+```
+
+ Here is the result set.
+
+```json
+[{"asin": -1.5707963267948966}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Atan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/atan.md
+
+ Title: ATAN in Azure Cosmos DB query language
+description: Learn about how the ATAN (arctangent) SQL system function in Azure Cosmos DB returns the angle, in radians, whose tangent is the specified numeric expression
++++ Last updated : 03/04/2020+++
+# ATAN (Azure Cosmos DB)
+
+ Returns the angle, in radians, whose tangent is the specified numeric expression. This value is also called arctangent.
+
+## Syntax
+
+```sql
+ATAN(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the `ATAN` of the specified value.
+
+```sql
+SELECT ATAN(-45.01) AS atan
+```
+
+ Here's the result set.
+
+```json
+[{"atan": -1.5485826962062663}]
+```
+
+## Remarks
+
+This system function won't utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Atn2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/atn2.md
+
+ Title: ATN2 in Azure Cosmos DB query language
+description: Learn about how the ATN2 SQL system function in Azure Cosmos DB returns the principal value of the arc tangent of y/x, expressed in radians
++++ Last updated : 03/03/2020+++
+# ATN2 (Azure Cosmos DB)
+
+ Returns the principal value of the arc tangent of y/x, expressed in radians.
+
+## Syntax
+
+```sql
+ATN2(<numeric_expr>, <numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example calculates the ATN2 for the specified x and y components.
+
+```sql
+SELECT ATN2(35.175643, 129.44) AS atn2
+```
+
+ Here is the result set.
+
+```json
+[{"atn2": 1.3054517947300646}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Bitwise Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/bitwise-operators.md
+
+ Title: Bitwise operators in Azure Cosmos DB
+description: Learn about SQL bitwise operators supported by Azure Cosmos DB.
+++++++ Last updated : 06/02/2022++
+# Bitwise operators in Azure Cosmos DB
++
+This article details the bitwise operators supported by Azure Cosmos DB. Bitwise operators are useful for constructing JSON result-sets on the fly. The bitwise operators work similarly to higher-level programming languages like C# and JavaScript. For examples of C# bitwise operators, see [Bitwise and shift operators](/dotnet/csharp/language-reference/operators/bitwise-and-shift-operators).
+
+## Understanding bitwise operations
+
+The following table shows the explanations and examples of bitwise operations in the API for NoSQL between two values.
+
+| Operation | Operator | Description |
+| | | |
+| **Left shift** | ``<<`` | Shift left-hand value *left* by the specified number of bits. |
+| **Right shift** | ``>>`` | Shift left-hand value *right* by the specified number of bits. |
+| **Zero-fill (unsigned) right shift** | ``>>>`` | Shift left-hand value *right* by the specified number of bits without filling left-most bits. |
+| **AND** | ``&`` | Computes bitwise logical AND. |
+| **OR** | ``|`` | Computes bitwise logical OR. |
+| **XOR** | ``^`` | Computes bitwise logical exclusive OR. |
++
+For example, the following query uses each of the bitwise operators and renders a result.
+
+```sql
+SELECT
+ (100 >> 2) AS rightShift,
+ (100 << 2) AS leftShift,
+ (100 >>> 0) AS zeroFillRightShift,
+ (100 & 1000) AS logicalAnd,
+ (100 | 1000) AS logicalOr,
+ (100 ^ 1000) AS logicalExclusiveOr
+```
+
+The example query's results as a JSON object.
+
+```json
+[
+ {
+ "rightShift": 25,
+ "leftShift": 400,
+ "zeroFillRightShift": 100,
+ "logicalAnd": 96,
+ "logicalOr": 1004,
+ "logicalExclusiveOr": 908
+ }
+]
+```
+
+> [!IMPORTANT]
+> The bitwise operators in Azure Cosmos DB for NoSQL follow the same behavior as bitwise operators in JavaScript. JavaScript stores numbers as 64 bits floating point numbers, but all bitwise operations are performed on 32 bits binary numbers. Before a bitwise operation is performed, JavaScript converts numbers to 32 bits signed integers. After the bitwise operation is performed, the result is converted back to 64 bits JavaScript numbers. For more information about the bitwise operators in JavaScript, see [JavaScript binary bitwise operators at MDN Web Docs](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Operators#binary_bitwise_operators).
+
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Keywords](keywords.md)
+- [SELECT clause](select.md)
cosmos-db Ceiling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/ceiling.md
+
+ Title: CEILING in Azure Cosmos DB query language
+description: Learn about how the CEILING SQL system function in Azure Cosmos DB returns the smallest integer value greater than, or equal to, the specified numeric expression.
++++ Last updated : 09/13/2019+++
+# CEILING (Azure Cosmos DB)
+
+ Returns the smallest integer value greater than, or equal to, the specified numeric expression.
+
+## Syntax
+
+```sql
+CEILING (<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example shows positive numeric, negative, and zero values with the `CEILING` function.
+
+```sql
+SELECT CEILING(123.45) AS c1, CEILING(-123.45) AS c2, CEILING(0.0) AS c3
+```
+
+ Here is the result set.
+
+```json
+[{"c1": 124, "c2": -123, "c3": 0}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Concat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/concat.md
+
+ Title: CONCAT in Azure Cosmos DB query language
+description: Learn about how the CONCAT SQL system function in Azure Cosmos DB returns a string that is the result of concatenating two or more string values
++++ Last updated : 03/03/2020+++
+# CONCAT (Azure Cosmos DB)
+
+ Returns a string that is the result of concatenating two or more string values.
+
+## Syntax
+
+```sql
+CONCAT(<str_expr1>, <str_expr2> [, <str_exprN>])
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression to concatenate to the other values. The `CONCAT` function requires at least two *str_expr* arguments.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example returns the concatenated string of the specified values.
+
+```sql
+SELECT CONCAT("abc", "def") AS concat
+```
+
+ Here is the result set.
+
+```json
+[{"concat": "abcdef"}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Constants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/constants.md
+
+ Title: SQL constants in Azure Cosmos DB
+description: Learn about how the SQL query constants in Azure Cosmos DB are used to represent a specific data value
+++++ Last updated : 05/31/2019++++
+# Azure Cosmos DB SQL query constants
+
+ A constant, also known as a literal or a scalar value, is a symbol that represents a specific data value. The format of a constant depends on the data type of the value it represents.
+
+ **Supported scalar data types:**
+
+|**Type**|**Values order**|
+|-|-|
+|**Undefined**|Single value: **undefined**|
+|**Null**|Single value: **null**|
+|**Boolean**|Values: **false**, **true**.|
+|**Number**|A double-precision floating-point number, IEEE 754 standard.|
+|**String**|A sequence of zero or more Unicode characters. Strings must be enclosed in single or double quotes.|
+|**Array**|A sequence of zero or more elements. Each element can be a value of any scalar data type, except **Undefined**.|
+|**Object**|An unordered set of zero or more name/value pairs. Name is a Unicode string, value can be of any scalar data type, except **Undefined**.|
+
+## <a name="bk_syntax"></a>Syntax
+
+```sql
+<constant> ::=
+ <undefined_constant>
+ | <null_constant>
+ | <boolean_constant>
+ | <number_constant>
+ | <string_constant>
+ | <array_constant>
+ | <object_constant>
+
+<undefined_constant> ::= undefined
+
+<null_constant> ::= null
+
+<boolean_constant> ::= false | true
+
+<number_constant> ::= decimal_literal | hexadecimal_literal
+
+<string_constant> ::= string_literal
+
+<array_constant> ::=
+ '[' [<constant>][,...n] ']'
+
+<object_constant> ::=
+ '{' [{property_name | "property_name"} : <constant>][,...n] '}'
+
+```
+
+## <a name="bk_arguments"></a> Arguments
+
+* `<undefined_constant>; Undefined`
+
+ Represents undefined value of type Undefined.
+
+* `<null_constant>; null`
+
+ Represents **null** value of type **Null**.
+
+* `<boolean_constant>`
+
+ Represents constant of type Boolean.
+
+* `false`
+
+ Represents **false** value of type Boolean.
+
+* `true`
+
+ Represents **true** value of type Boolean.
+
+* `<number_constant>`
+
+ Represents a constant.
+
+* `decimal_literal`
+
+ Decimal literals are numbers represented using either decimal notation, or scientific notation.
+
+* `hexadecimal_literal`
+
+ Hexadecimal literals are numbers represented using prefix '0x' followed by one or more hexadecimal digits.
+
+* `<string_constant>`
+
+ Represents a constant of type String.
+
+* `string_literal`
+
+ String literals are Unicode strings represented by a sequence of zero or more Unicode characters or escape sequences. String literals are enclosed in single quotes (apostrophe: ' ) or double quotes (quotation mark: ").
+
+ Following escape sequences are allowed:
+
+|**Escape sequence**|**Description**|**Unicode character**|
+|-|-|-|
+|\\'|apostrophe (')|U+0027|
+|\\"|quotation mark (")|U+0022|
+|\\\ |reverse solidus (\\)|U+005C|
+|\\/|solidus (/)|U+002F|
+|\b|backspace|U+0008|
+|\f|form feed|U+000C|
+|\n|line feed|U+000A|
+|\r|carriage return|U+000D|
+|\t|tab|U+0009|
+|\uXXXX|A Unicode character defined by 4 hexadecimal digits.|U+XXXX|
+
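+For example, the following query builds a single result document from several of the constant types listed above:
+
+```sql
+SELECT
+    null AS nullConstant,
+    true AS booleanConstant,
+    123.45 AS numberConstant,
+    "hello" AS stringConstant,
+    [1, 2, 3] AS arrayConstant,
+    { "key": "value" } AS objectConstant
+```
+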
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Model document data](../../modeling-data.md)
cosmos-db Contains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/contains.md
+
+ Title: Contains in Azure Cosmos DB query language
+description: Learn about how the CONTAINS SQL system function in Azure Cosmos DB returns a Boolean indicating whether the first string expression contains the second
++++ Last updated : 04/01/2021+++
+# CONTAINS (Azure Cosmos DB)
+
+Returns a Boolean indicating whether the first string expression contains the second.
+
+## Syntax
+
+```sql
+CONTAINS(<str_expr1>, <str_expr2> [, <bool_expr>])
+```
+
+## Arguments
+
+*str_expr1*
+ Is the string expression to be searched.
+
+*str_expr2*
+ Is the string expression to find.
+
+*bool_expr*
+ Optional value for ignoring case. When set to true, CONTAINS will do a case-insensitive search. When unspecified, this value is false.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks if "abc" contains "ab" and if "abc" contains "A".
+
+```sql
+SELECT CONTAINS("abc", "ab", false) AS c1, CONTAINS("abc", "A", false) AS c2, CONTAINS("abc", "A", true) AS c3
+```
+
+ Here is the result set.
+
+```json
+[
+ {
+ "c1": true,
+ "c2": false,
+ "c3": true
+ }
+]
+```
+
+## Remarks
+
+Learn about [how this string system function uses the index](string-functions.md).
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Cos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/cos.md
+
+ Title: COS in Azure Cosmos DB query language
+description: Learn about how the Cosine (COS) SQL system function in Azure Cosmos DB returns the trigonometric cosine of the specified angle, in radians, in the specified expression
++++ Last updated : 03/03/2020+++
+# COS (Azure Cosmos DB)
+
+ Returns the trigonometric cosine of the specified angle, in radians, in the specified expression.
+
+## Syntax
+
+```sql
+COS(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example calculates the `COS` of the specified angle.
+
+```sql
+SELECT COS(14.78) AS cos
+```
+
+ Here is the result set.
+
+```json
+[{"cos": -0.59946542619465426}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Cot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/cot.md
+
+ Title: COT in Azure Cosmos DB query language
+description: Learn about how the Cotangent(COT) SQL system function in Azure Cosmos DB returns the trigonometric cotangent of the specified angle, in radians, in the specified numeric expression
++++ Last updated : 03/03/2020+++
+# COT (Azure Cosmos DB)
+
+ Returns the trigonometric cotangent of the specified angle, in radians, in the specified numeric expression.
+
+## Syntax
+
+```sql
+COT(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example calculates the `COT` of the specified angle.
+
+```sql
+SELECT COT(124.1332) AS cot
+```
+
+ Here is the result set.
+
+```json
+[{"cot": -0.040311998371148884}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Date Time Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/date-time-functions.md
+
+ Title: Date and time functions in Azure Cosmos DB query language
+description: Learn about date and time SQL system functions in Azure Cosmos DB to perform DateTime and timestamp operations.
++++ Last updated : 08/18/2020+++
+# Date and time functions (Azure Cosmos DB)
+
+The date and time functions let you perform DateTime and timestamp operations in Azure Cosmos DB.
+
+## Functions to obtain the date and time
+
+The following scalar functions allow you to get the current UTC date and time in three forms: a string that conforms to the ISO 8601 format, a numeric timestamp whose value is the number of milliseconds that have elapsed since the Unix epoch, or numeric ticks whose value is the number of 100-nanosecond ticks that have elapsed since the Unix epoch (the example after this list shows all three forms):
+
+* [GetCurrentDateTime](getcurrentdatetime.md)
+* [GetCurrentTimestamp](getcurrenttimestamp.md)
+* [GetCurrentTicks](getcurrentticks.md)
+
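+A quick way to see the three forms side by side (a minimal sketch; the exact values depend on when the query runs):
+
+```sql
+SELECT
+    GetCurrentDateTime() AS currentUtcDateTime,
+    GetCurrentTimestamp() AS currentUtcTimestamp,
+    GetCurrentTicks() AS currentUtcTicks
+```
+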
+## Functions to work with DateTime values
+
+The following functions allow you to easily manipulate DateTime, timestamp, and tick values:
+
+* [DateTimeAdd](datetimeadd.md)
+* [DateTimeBin](datetimebin.md)
+* [DateTimeDiff](datetimediff.md)
+* [DateTimeFromParts](datetimefromparts.md)
+* [DateTimePart](datetimepart.md)
+* [DateTimeToTicks](datetimetoticks.md)
+* [DateTimeToTimestamp](datetimetotimestamp.md)
+* [TicksToDateTime](tickstodatetime.md)
+* [TimestampToDateTime](timestamptodatetime.md)
+
+## Next steps
+
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [User Defined Functions](udfs.md)
+- [Aggregates](aggregate-functions.md)
cosmos-db Datetimeadd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimeadd.md
+
+ Title: DateTimeAdd in Azure Cosmos DB query language
+description: Learn about SQL system function DateTimeAdd in Azure Cosmos DB.
++++ Last updated : 07/09/2020++++
+# DateTimeAdd (Azure Cosmos DB)
+
+Returns DateTime string value resulting from adding a specified number value (as a signed integer) to a specified DateTime string
+
+## Syntax
+
+```sql
+DateTimeAdd (<DateTimePart> , <numeric_expr> ,<DateTime>)
+```
+
+## Arguments
+
+*DateTimePart*
+ The part of the date to which DateTimeAdd adds the specified integer. This table lists all valid DateTimePart arguments:
+
+| DateTimePart | abbreviations |
+| | -- |
+| Year | "year", "yyyy", "yy" |
+| Month | "month", "mm", "m" |
+| Day | "day", "dd", "d" |
+| Hour | "hour", "hh" |
+| Minute | "minute", "mi", "n" |
+| Second | "second", "ss", "s" |
+| Millisecond | "millisecond", "ms" |
+| Microsecond | "microsecond", "mcs" |
+| Nanosecond | "nanosecond", "ns" |
+
+*numeric_expr*
+ Is a signed integer value that will be added to the DateTimePart of the specified DateTime
+
+*DateTime*
+ UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+ For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+
+## Return types
+
+Returns a UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+## Remarks
+
+DateTimeAdd will return `undefined` for the following reasons:
+
+- The DateTimePart value specified is invalid
+- The numeric_expr specified is not a valid integer
+- The DateTime in the argument or result is not a valid ISO 8601 DateTime.
+
+## Examples
+
+The following example adds 1 month to the DateTime: `2020-07-09T23:20:13.4575530Z`
+
+```sql
+SELECT DateTimeAdd("mm", 1, "2020-07-09T23:20:13.4575530Z") AS OneMonthLater
+```
+
+```json
+[
+ {
+ "OneMonthLater": "2020-08-09T23:20:13.4575530Z"
+ }
+]
+```
+
+The following example subtracts 2 hours from the DateTime: `2020-07-09T23:20:13.4575530Z`
+
+```sql
+SELECT DateTimeAdd("hh", -2, "2020-07-09T23:20:13.4575530Z") AS TwoHoursEarlier
+```
+
+```json
+[
+ {
+ "TwoHoursEarlier": "2020-07-09T21:20:13.4575530Z"
+ }
+]
+```
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](date-time-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Datetimebin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimebin.md
+
+ Title: DateTimeBin in Azure Cosmos DB query language
+description: Learn about SQL system function DateTimeBin in Azure Cosmos DB.
++++ Last updated : 05/27/2022 ++
+
+
+# DateTimeBin (Azure Cosmos DB)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
+
+Returns the nearest multiple of *BinSize* below the specified DateTime given the unit of measurement *DateTimePart* and start value of *BinAtDateTime*.
++
+## Syntax
+
+```sql
+DateTimeBin (<DateTime> , <DateTimePart> [,BinSize] [,BinAtDateTime])
+```
++
+## Arguments
+
+*DateTime*
+ The string value date and time to be binned. A UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+
+*DateTimePart*
+ The date time part specifies the units for BinSize. DateTimeBin is Undefined for DayOfWeek, Year, and Month. The finest granularity for binning by Nanosecond is 100 nanosecond ticks; if Nanosecond is specified with a BinSize less than 100, the result is Undefined. This table lists all valid DateTimePart arguments for DateTimeBin:
+
+| DateTimePart | abbreviations |
+| | -- |
+| Day | "day", "dd", "d" |
+| Hour | "hour", "hh" |
+| Minute | "minute", "mi", "n" |
+| Second | "second", "ss", "s" |
+| Millisecond | "millisecond", "ms" |
+| Microsecond | "microsecond", "mcs" |
+| Nanosecond | "nanosecond", "ns" |
+
+*BinSize* (optional)
+ Numeric value that specifies the size of bins. If not specified, the default value is one.
++
+*BinAtDateTime* (optional)
+ A UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` that specifies the start date to bin from. Default value is the Unix epoch, '1970-01-01T00:00:00.000000Z'.
++
+## Return types
+
+Returns the result of binning the *DateTime* value.
++
+## Remarks
+
+DateTimeBin will return `Undefined` for the following reasons:
+- The DateTimePart value specified is invalid
+- The BinSize value is zero or negative
+- The DateTime or BinAtDateTime isn't a valid ISO 8601 DateTime or precedes the year 1601 (the Windows epoch)
++
+## Examples
+
+The following example bins '2021-06-28T17:24:29.2991234Z' by one hour:
+
+```sql
+SELECT DateTimeBin('2021-06-28T17:24:29.2991234Z', 'hh') AS BinByHour
+```
+
+```json
+[
+    {
+        "BinByHour": "2021-06-28T17:00:00.0000000Z"
+    }
+]
+```
+
+The following example bins '2021-06-28T17:24:29.2991234Z' given different *BinAtDateTime* values:
+
+```sql
+SELECT
+DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5) AS One_BinByFiveDaysUnixEpochImplicit,
+DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '1970-01-01T00:00:00.0000000Z') AS Two_BinByFiveDaysUnixEpochExplicit,
+DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '1601-01-01T00:00:00.0000000Z') AS Three_BinByFiveDaysFromWindowsEpoch,
+DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '2021-01-01T00:00:00.0000000Z') AS Four_BinByFiveDaysFromYearStart,
+DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '0001-01-01T00:00:00.0000000Z') AS Five_BinByFiveDaysFromUndefinedYear
+```
+
+```json
+[
+    {
+        "One_BinByFiveDaysUnixEpochImplicit": "2021-06-27T00:00:00.0000000Z",
+        "Two_BinByFiveDaysUnixEpochExplicit": "2021-06-27T00:00:00.0000000Z",
+        "Three_BinByFiveDaysFromWindowsEpoch": "2021-06-28T00:00:00.0000000Z",
+        "Four_BinByFiveDaysFromYearStart": "2021-06-25T00:00:00.0000000Z"
+    }
+]
+```
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](date-time-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Datetimediff https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimediff.md
+
+ Title: DateTimeDiff in Azure Cosmos DB query language
+description: Learn about SQL system function DateTimeDiff in Azure Cosmos DB.
++++ Last updated : 07/09/2020++++
+# DateTimeDiff (Azure Cosmos DB)
+Returns the count (as a signed integer value) of the specified DateTimePart boundaries crossed between the specified *StartDate* and *EndDate*.
+
+## Syntax
+
+```sql
+DateTimeDiff (<DateTimePart> , <StartDate> , <EndDate>)
+```
+
+## Arguments
+
+*DateTimePart*
+ The part of the date for which DateTimeDiff counts the number of boundaries crossed. This table lists all valid DateTimePart arguments:
+
+| DateTimePart | abbreviations |
+| | -- |
+| Year | "year", "yyyy", "yy" |
+| Month | "month", "mm", "m" |
+| Day | "day", "dd", "d" |
+| Hour | "hour", "hh" |
+| Minute | "minute", "mi", "n" |
+| Second | "second", "ss", "s" |
+| Millisecond | "millisecond", "ms" |
+| Microsecond | "microsecond", "mcs" |
+| Nanosecond | "nanosecond", "ns" |
+
+*StartDate*
+ UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+ For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+
+*EndDate*
+ UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ`
+
+## Return types
+
+Returns a signed integer value.
+
+## Remarks
+
+DateTimeDiff will return `undefined` for the following reasons:
+
+- The DateTimePart value specified is invalid
+- The StartDate or EndDate is not a valid ISO 8601 DateTime
+
+DateTimeDiff will always return a signed integer value and is a measurement of the number of DateTimePart boundaries crossed, not a measurement of the time interval.
+
+## Examples
+
+The following example computes the number of day boundaries crossed between `2020-01-01T01:02:03.1234527Z` and `2020-01-03T01:02:03.1234567Z`.
+
+```sql
+SELECT DateTimeDiff("day", "2020-01-01T01:02:03.1234527Z", "2020-01-03T01:02:03.1234567Z") AS DifferenceInDays
+```
+
+```json
+[
+ {
+ "DifferenceInDays": 2
+ }
+]
+```
+
+The following example computes the number of year boundaries crossed between `2028-01-01T01:02:03.1234527Z` and `2020-01-03T01:02:03.1234567Z`.
+
+```sql
+SELECT DateTimeDiff("yyyy", "2028-01-01T01:02:03.1234527Z", "2020-01-03T01:02:03.1234567Z") AS DifferenceInYears
+```
+
+```json
+[
+ {
+ "DifferenceInYears": -8
+ }
+]
+```
+
+The following example computes the number of hour boundaries crossed between `2020-01-01T01:00:00.1234527Z` and `2020-01-01T01:59:59.1234567Z`. Even though these DateTime values are over 0.99 hours apart, `DateTimeDiff` returns 0 because no hour boundaries were crossed.
+
+```sql
+SELECT DateTimeDiff("hh", "2020-01-01T01:00:00.1234527Z", "2020-01-01T01:59:59.1234567Z") AS DifferenceInHours
+```
+
+```json
+[
+ {
+ "DifferenceInHours": 0
+ }
+]
+```
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](date-time-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Datetimefromparts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimefromparts.md
+
+ Title: DateTimeFromParts in Azure Cosmos DB query language
+description: Learn about SQL system function DateTimeFromParts in Azure Cosmos DB.
++++ Last updated : 07/09/2020++++
+# DateTimeFromParts (Azure Cosmos DB)
+
+Returns a string DateTime value constructed from input values.
+
+## Syntax
+
+```sql
+DateTimeFromParts(<numberYear>, <numberMonth>, <numberDay> [, numberHour] [, numberMinute] [, numberSecond] [, numberOfFractionsOfSecond])
+```
+
+## Arguments
+
+*numberYear*
+ Integer value for the year in the format `YYYY`
+
+*numberMonth*
+ Integer value for the month in the format `MM`
+
+*numberDay*
+ Integer value for the day in the format `DD`
+
+*numberHour* (optional)
+ Integer value for the hour in the format `hh`
+
+*numberMinute* (optional)
+ Integer value for the minute in the format `mm`
+
+*numberSecond* (optional)
+ Integer value for the second in the format `ss`
+
+*numberOfFractionsOfSecond* (optional)
+ Integer value for the fractional of a second in the format `.fffffff`
+
+## Return types
+
+Returns a UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+ For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+
+## Remarks
+
+If the specified integers would create an invalid DateTime, DateTimeFromParts will return `undefined`.
+
+If an optional argument isn't specified, its value will be 0.
+
+## Examples
+
+Here's an example that only includes required arguments to construct a DateTime:
+
+```sql
+SELECT DateTimeFromParts(2020, 9, 4) AS DateTime
+```
+
+```json
+[
+ {
+ "DateTime": "2020-09-04T00:00:00.0000000Z"
+ }
+]
+```
+
+Here's another example that also uses some optional arguments to construct a DateTime:
+
+```sql
+SELECT DateTimeFromParts(2020, 9, 4, 10, 52) AS DateTime
+```
+
+```json
+[
+ {
+ "DateTime": "2020-09-04T10:52:00.0000000Z"
+ }
+]
+```
+
+Here's another example that also uses all optional arguments to construct a DateTime:
+
+```sql
+SELECT DateTimeFromParts(2020, 9, 4, 10, 52, 12, 3456789) AS DateTime
+```
+
+```json
+[
+ {
+ "DateTime": "2020-09-04T10:52:12.3456789Z"
+ }
+]
+```
+
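+As noted in the remarks, parts that don't form a real calendar date yield `undefined`. In the following hedged sketch, February 31 is invalid, so the projected property is omitted and the expected result is `[{}]`.
+
+```sql
+SELECT DateTimeFromParts(2020, 2, 31) AS DateTime
+```
+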
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](date-time-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Datetimepart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimepart.md
+
+ Title: DateTimePart in Azure Cosmos DB query language
+description: Learn about SQL system function DateTimePart in Azure Cosmos DB.
++++ Last updated : 08/14/2020++++
+# DateTimePart (Azure Cosmos DB)
+
+Returns the value of the specified DateTimePart for the specified DateTime.
+
+## Syntax
+
+```sql
+DateTimePart (<DateTimePart> , <DateTime>)
+```
+
+## Arguments
+
+*DateTimePart*
+ The part of the date for which DateTimePart will return the value. This table lists all valid DateTimePart arguments:
+
+| DateTimePart | Abbreviations |
+| --- | --- |
+| Year | "year", "yyyy", "yy" |
+| Month | "month", "mm", "m" |
+| Day | "day", "dd", "d" |
+| Hour | "hour", "hh" |
+| Minute | "minute", "mi", "n" |
+| Second | "second", "ss", "s" |
+| Millisecond | "millisecond", "ms" |
+| Microsecond | "microsecond", "mcs" |
+| Nanosecond | "nanosecond", "ns" |
+
+*DateTime*
+ UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ`
+
+## Return types
+
+Returns a positive integer value.
+
+## Remarks
+
+DateTimePart will return `undefined` for the following reasons:
+
+- The DateTimePart value specified is invalid
+- The DateTime is not a valid ISO 8601 DateTime
+
+This system function will not utilize the index.
+
+## Examples
+
+Here's an example that returns the integer value of the month:
+
+```sql
+SELECT DateTimePart("m", "2020-01-02T03:04:05.6789123Z") AS MonthValue
+```
+
+```json
+[
+ {
+ "MonthValue": 1
+ }
+]
+```
+
+Here's an example that returns the number of microseconds:
+
+```sql
+SELECT DateTimePart("mcs", "2020-01-02T03:04:05.6789123Z") AS MicrosecondsValue
+```
+
+```json
+[
+ {
+ "MicrosecondsValue": 678912
+ }
+]
+```
+
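+As noted in the remarks, an unrecognized DateTimePart argument yields `undefined`. Because undefined values are dropped from query output, the expected result of the following hedged sketch is `[{}]`.
+
+```sql
+SELECT DateTimePart("invalid", "2020-01-02T03:04:05.6789123Z") AS InvalidPart
+```
+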
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](date-time-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Datetimetoticks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimetoticks.md
+
+ Title: DateTimeToTicks in Azure Cosmos DB query language
+description: Learn about SQL system function DateTimeToTicks in Azure Cosmos DB.
++++ Last updated : 08/18/2020++++
+# DateTimeToTicks (Azure Cosmos DB)
+
+Converts the specified DateTime to ticks. A single tick represents one hundred nanoseconds or one ten-millionth of a second.
+
+## Syntax
+
+```sql
+DateTimeToTicks (<DateTime>)
+```
+
+## Arguments
+
+*DateTime*
+ UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ`
+
+## Return types
+
+Returns a signed numeric value: the number of 100-nanosecond ticks that have elapsed between the Unix epoch (00:00:00 Thursday, 1 January 1970) and the specified DateTime.
+
+## Remarks
+
+DateTimeToTicks will return `undefined` if the DateTime isn't a valid ISO 8601 DateTime.
+
+This system function will not utilize the index.
+
+## Examples
+
+Here's an example that returns the number of ticks:
+
+```sql
+SELECT DateTimeToTicks("2020-01-02T03:04:05.6789123Z") AS Ticks
+```
+
+```json
+[
+ {
+ "Ticks": 15779342456789124
+ }
+]
+```
+
+Here's an example that returns the number of ticks without specifying the number of fractional seconds:
+
+```sql
+SELECT DateTimeToTicks("2020-01-02T03:04:05Z") AS Ticks
+```
+
+```json
+[
+ {
+ "Ticks": 15779342450000000
+ }
+]
+```
+
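+Assuming the companion `TicksToDateTime` system function, the following sketch round-trips a value through ticks and back; the expected output is the original DateTime string.
+
+```sql
+SELECT TicksToDateTime(DateTimeToTicks("2020-01-02T03:04:05.6789123Z")) AS RoundTrip
+```
+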
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](date-time-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Datetimetotimestamp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/datetimetotimestamp.md
+
+ Title: DateTimeToTimestamp in Azure Cosmos DB query language
+description: Learn about SQL system function DateTimeToTimestamp in Azure Cosmos DB.
++++ Last updated : 08/18/2020++++
+# DateTimeToTimestamp (Azure Cosmos DB)
+
+Converts the specified DateTime to a timestamp.
+
+## Syntax
+
+```sql
+DateTimeToTimestamp (<DateTime>)
+```
+
+## Arguments
+
+*DateTime*
+ UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+ For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+
+## Return types
+
+Returns a signed numeric value: the number of milliseconds that have elapsed between the Unix epoch (00:00:00 Thursday, 1 January 1970) and the specified DateTime.
+
+## Remarks
+
+DateTimeToTimestamp will return `undefined` if the specified DateTime value is invalid.
+
+## Examples
+
+The following example converts the DateTime to a timestamp:
+
+```sql
+SELECT DateTimeToTimestamp("2020-07-09T23:20:13.4575530Z") AS Timestamp
+```
+
+```json
+[
+ {
+ "Timestamp": 1594336813457
+ }
+]
+```
+
+Here's another example:
+
+```sql
+SELECT DateTimeToTimestamp("2020-07-09") AS Timestamp
+```
+
+```json
+[
+ {
+ "Timestamp": 1594252800000
+ }
+]
+```
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](date-time-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Degrees https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/degrees.md
+
+ Title: DEGREES in Azure Cosmos DB query language
+description: Learn about the DEGREES SQL system function in Azure Cosmos DB to return the corresponding angle in degrees for an angle specified in radians
++++ Last updated : 03/03/2020+++
+# DEGREES (Azure Cosmos DB)
+
+ Returns the corresponding angle in degrees for an angle specified in radians.
+
+## Syntax
+
+```sql
+DEGREES (<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the number of degrees in an angle of PI/2 radians.
+
+```sql
+SELECT DEGREES(PI()/2) AS degrees
+```
+
+ Here is the result set.
+
+```json
+[{"degrees": 90}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Endswith https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/endswith.md
+
+ Title: EndsWith in Azure Cosmos DB query language
+description: Learn about the ENDSWITH SQL system function in Azure Cosmos DB to return a Boolean indicating whether the first string expression ends with the second
++++ Last updated : 06/02/2020+++
+# ENDSWITH (Azure Cosmos DB)
+
+Returns a Boolean indicating whether the first string expression ends with the second.
+
+## Syntax
+
+```sql
+ENDSWITH(<str_expr1>, <str_expr2> [, <bool_expr>])
+```
+
+## Arguments
+
+*str_expr1*
+ Is a string expression.
+
+*str_expr2*
+ Is a string expression to be compared to the end of *str_expr1*.
+
+*bool_expr*
+ Optional value for ignoring case. When set to true, ENDSWITH will do a case-insensitive search. When unspecified, this value is false.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+The following example checks if the string "abc" ends with "b" and "bC".
+
+```sql
+SELECT ENDSWITH("abc", "b", false) AS e1, ENDSWITH("abc", "bC", false) AS e2, ENDSWITH("abc", "bC", true) AS e3
+```
+
+ Here is the result set.
+
+```json
+[
+ {
+ "e1": false,
+ "e2": false,
+ "e3": true
+ }
+]
+```
+
+## Remarks
+
+Learn about [how this string system function uses the index](string-functions.md).
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Equality Comparison Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/equality-comparison-operators.md
+
+ Title: Equality and comparison operators in Azure Cosmos DB
+description: Learn about SQL equality and comparison operators supported by Azure Cosmos DB.
+++++ Last updated : 01/07/2022+++
+# Equality and comparison operators in Azure Cosmos DB
+
+This article details the equality and comparison operators supported by Azure Cosmos DB.
+
+## Understanding equality comparisons
+
+The following table shows the result of equality comparisons in the API for NoSQL between any two JSON types.
+
+| **Op** | **Undefined** | **Null** | **Boolean** | **Number** | **String** | **Object** | **Array** |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| **Undefined** | Undefined | Undefined | Undefined | Undefined | Undefined | Undefined | Undefined |
+| **Null** | Undefined | **Ok** | Undefined | Undefined | Undefined | Undefined | Undefined |
+| **Boolean** | Undefined | Undefined | **Ok** | Undefined | Undefined | Undefined | Undefined |
+| **Number** | Undefined | Undefined | Undefined | **Ok** | Undefined | Undefined | Undefined |
+| **String** | Undefined | Undefined | Undefined | Undefined | **Ok** | Undefined | Undefined |
+| **Object** | Undefined | Undefined | Undefined | Undefined | Undefined | **Ok** | Undefined |
+| **Array** | Undefined | Undefined | Undefined | Undefined | Undefined | Undefined | **Ok** |
+
+For comparison operators such as `>`, `>=`, `!=`, `<`, and `<=`, comparison across types or between two objects or arrays produces `Undefined`.
+
+If the result of the scalar expression is `Undefined`, the item isn't included in the result, because `Undefined` doesn't equal `true`.
+
+For example, the following query's comparison between a number and string value produces `Undefined`. Therefore, the filter does not include any results.
+
+```sql
+SELECT *
+FROM c
+WHERE 7 = 'a'
+```
+
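+As a hedged sketch of these rules, the following query compares two numbers (same type, so the comparison evaluates normally) and a number with a string (different types, so the comparison produces `Undefined` and the property is omitted from the projection). The expected result is a single item containing only `"sameTypeComparison": true`.
+
+```sql
+SELECT (7 = 7) AS sameTypeComparison, (7 = "7") AS crossTypeComparison
+```
+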
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Keywords](keywords.md)
+- [SELECT clause](select.md)
cosmos-db Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/exp.md
+
+ Title: EXP in Azure Cosmos DB query language
+description: Learn about the Exponent (EXP) SQL system function in Azure Cosmos DB to return the exponential value of the specified numeric expression
++++ Last updated : 09/13/2019+++
+# EXP (Azure Cosmos DB)
+
+ Returns the exponential value of the specified numeric expression.
+
+## Syntax
+
+```sql
+EXP (<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Remarks
+
+ The constant **e** (2.718281…) is the base of natural logarithms.
+
+ The exponent of a number is the constant **e** raised to the power of the number. For example, EXP(1.0) = e^1.0 = 2.71828182845905 and EXP(10) = e^10 = 22026.4657948067.
+
+ The exponential of the natural logarithm of a number is the number itself: EXP (LOG (n)) = n. And the natural logarithm of the exponential of a number is the number itself: LOG (EXP (n)) = n.
+
+## Examples
+
+ The following example declares a variable and returns the exponential value of the specified variable (10).
+
+```sql
+SELECT EXP(10) AS exp
+```
+
+ Here is the result set.
+
+```json
+[{exp: 22026.465794806718}]
+```
+
+ The following example returns the exponential value of the natural logarithm of 20 and the natural logarithm of the exponential of 20. Because these functions are inverse functions of one another, the return value with rounding for floating point math in both cases is 20.
+
+```sql
+SELECT EXP(LOG(20)) AS exp1, LOG(EXP(20)) AS exp2
+```
+
+ Here is the result set.
+
+```json
+[{exp1: 19.999999999999996, exp2: 20}]
+```
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Floor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/floor.md
+
+ Title: FLOOR in Azure Cosmos DB query language
+description: Learn about the FLOOR SQL system function in Azure Cosmos DB to return the largest integer less than or equal to the specified numeric expression
++++ Last updated : 09/13/2019+++
+# FLOOR (Azure Cosmos DB)
+
+ Returns the largest integer less than or equal to the specified numeric expression.
+
+## Syntax
+
+```sql
+FLOOR (<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example shows positive numeric, negative, and zero values with the `FLOOR` function.
+
+```sql
+SELECT FLOOR(123.45) AS fl1, FLOOR(-123.45) AS fl2, FLOOR(0.0) AS fl3
+```
+
+ Here is the result set.
+
+```json
+[{fl1: 123, fl2: -124, fl3: 0}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db From https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/from.md
+
+ Title: FROM clause in Azure Cosmos DB
+description: Learn about the SQL syntax, and example for FROM clause for Azure Cosmos DB. This article also shows examples to scope results, and get sub items by using the FROM clause.
+++++ Last updated : 05/08/2020+++
+# FROM clause in Azure Cosmos DB
+
+The FROM (`FROM <from_specification>`) clause is optional, unless the source is filtered or projected later in the query. A query like `SELECT * FROM Families` enumerates over the entire `Families` container. You can also use the special identifier ROOT for the container instead of using the container name.
+
+The `FROM` clause enforces the following rules per query:
+
+* The container can be aliased, such as `SELECT f.id FROM Families AS f` or simply `SELECT f.id FROM Families f`. Here `f` is the alias for `Families`. AS is an optional keyword to [alias](working-with-json.md#aliasing) the identifier.
+
+* Once aliased, the original source name cannot be bound. For example, `SELECT Families.id FROM Families f` is syntactically invalid because the identifier `Families` has been aliased and can't be resolved anymore.
+
+* All referenced properties must be fully qualified, to avoid any ambiguous bindings in the absence of strict schema adherence. For example, `SELECT id FROM Families f` is syntactically invalid because the property `id` isn't bound.
+
+## Syntax
+
+```sql
+FROM <from_specification>
+
+<from_specification> ::=
+ <from_source> {[ JOIN <from_source>][,...n]}
+
+<from_source> ::=
+ <container_expression> [[AS] input_alias]
+ | input_alias IN <container_expression>
+
+<container_expression> ::=
+ ROOT
+ | container_name
+ | input_alias
+ | <container_expression> '.' property_name
+ | <container_expression> '[' "property_name" | array_index ']'
+```
+
+## Arguments
+
+- `<from_source>`
+
+  Specifies a data source, with or without an alias. If the alias is not specified, it will be inferred from the `<container_expression>` using the following rules:
+
+ - If the expression is a container_name, then container_name will be used as an alias.
+
+  - If the expression is `<container_expression> '.' property_name`, then property_name will be used as an alias.
+
+- AS `input_alias`
+
+ Specifies that the `input_alias` is a set of values returned by the underlying container expression.
+
+- `input_alias` IN
+
+  Specifies that the `input_alias` should represent the set of values obtained by iterating over all array elements of each array returned by the underlying container expression. Any value returned by the underlying container expression that is not an array is ignored.
+
+- `<container_expression>`
+
+ Specifies the container expression to be used to retrieve the documents.
+
+- `ROOT`
+
+ Specifies that document should be retrieved from the default, currently connected container.
+
+- `container_name`
+
+ Specifies that document should be retrieved from the provided container. The name of the container must match the name of the container currently connected to.
+
+- `input_alias`
+
+ Specifies that document should be retrieved from the other source defined by the provided alias.
+
+- `<container_expression> '.' property_name`
+
+ Specifies that document should be retrieved by accessing the `property_name` property.
+
+- `<container_expression> '[' "property_name" | array_index ']'`
+
+ Specifies that document should be retrieved by accessing the `property_name` property or array_index array element for all documents retrieved by specified container expression.
+
+## Remarks
+
+All aliases provided or inferred in the `<from_source>`(s) must be unique. The syntax `<container_expression> '.' property_name` is the same as `<container_expression> '[' "property_name" ']'`. However, the latter syntax can be used if a property name contains a non-identifier character.
+
+### Handling missing properties, missing array elements, and undefined values
+
+If a container expression accesses properties or array elements and that value does not exist, that value will be ignored and not processed further.
+
+### Container expression context scoping
+
+A container expression may be container-scoped or document-scoped:
+
+- An expression is container-scoped, if the underlying source of the container expression is either ROOT or `container_name`. Such an expression represents a set of documents retrieved from the container directly, and is not dependent on the processing of other container expressions.
+
+- An expression is document-scoped, if the underlying source of the container expression is `input_alias` introduced earlier in the query. Such an expression represents a set of documents obtained by evaluating the container expression in the scope of each document belonging to the set associated with the aliased container. The resulting set will be a union of sets obtained by evaluating the container expression for each of the documents in the underlying set.
+
+## Examples
+
+### Get subitems by using the FROM clause
+
+The FROM clause can reduce the source to a smaller subset. To enumerate only a subtree in each item, the subroot can become the source, as shown in the following example:
+
+```sql
+ SELECT *
+ FROM Families.children
+```
+
+The results are:
+
+```json
+ [
+ [
+ {
+ "firstName": "Henriette Thaulow",
+ "gender": "female",
+ "grade": 5,
+ "pets": [
+ {
+ "givenName": "Fluffy"
+ }
+ ]
+ }
+ ],
+ [
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }
+ ]
+ ]
+```
+
+The preceding query used an array as the source, but you can also use an object as the source. The query considers any valid, defined JSON value in the source for inclusion in the result. The following example would exclude `Families` that don't have an `address.state` value.
+
+```sql
+ SELECT *
+ FROM Families.address.state
+```
+
+The results are:
+
+```json
+ [
+ "WA",
+ "NY"
+ ]
+```
+
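+### Flatten arrays by using the IN keyword
+
+The `input_alias IN <container_expression>` form described earlier iterates over array elements. As a hedged sketch against the same `Families` container, the following query returns each child as its own result rather than one array of children per family, so the sample items above would produce three separate child objects.
+
+```sql
+    SELECT *
+    FROM c IN Families.children
+```
+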
+## Next steps
+
+- [Getting started](getting-started.md)
+- [SELECT clause](select.md)
+- [WHERE clause](where.md)
cosmos-db Geospatial Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/geospatial-index.md
+
+ Title: Index geospatial data with Azure Cosmos DB
+description: Index spatial data with Azure Cosmos DB
+++++ Last updated : 11/03/2020+++
+# Index geospatial data with Azure Cosmos DB
+
+We designed Azure Cosmos DB's database engine to be truly schema agnostic and provide first class support for JSON. The write optimized database engine of Azure Cosmos DB natively understands spatial data represented in the GeoJSON standard.
+
+In a nutshell, the geometry is projected from geodetic coordinates onto a 2D plane and then divided progressively into cells using a **quadtree**. These cells are mapped to 1D based on the location of the cell within a **Hilbert space filling curve**, which preserves locality of points. Additionally, when location data is indexed, it goes through a process known as **tessellation**: all the cells that intersect a location are identified and stored as keys in the Azure Cosmos DB index. At query time, arguments like points and polygons are also tessellated to extract the relevant cell ID ranges, which are then used to retrieve data from the index.
+
+If you specify an indexing policy that includes a spatial index for `/*` (all paths), then all data found within the container is indexed for efficient spatial queries.
+
+> [!NOTE]
+> Azure Cosmos DB supports indexing of Points, LineStrings, Polygons, and MultiPolygons. If you index any one of these types, we will automatically index all other types. In other words, if you index Polygons, we'll also index Points, LineStrings, and MultiPolygons. Indexing a new spatial type does not affect the write RU charge or index size unless you have valid GeoJSON data of that type.
+
+## Modifying geospatial configuration
+
+In your container, the **Geospatial Configuration** specifies how the spatial data will be indexed. Specify one **Geospatial Configuration** per container: geography or geometry.
+
+You can toggle between the **geography** and **geometry** spatial type in the Azure portal. It's important that you create a [valid spatial geometry indexing policy with a bounding box](#geometry-data-indexing-examples) before switching to the geometry spatial type.
+
+Here's how to set the **Geospatial Configuration** in **Data Explorer** within the Azure portal:
+
+You can also modify the `geospatialConfig` in the .NET SDK to adjust the **Geospatial Configuration**:
+
+If not specified, the `geospatialConfig` will default to the geography data type. When you modify the `geospatialConfig`, all existing geospatial data in the container will be reindexed.
+
+Here is an example for modifying the geospatial data type to `geometry` by setting the `geospatialConfig` property and adding a **boundingBox**:
+
+```csharp
+ //Retrieve the container's details
+ ContainerResponse containerResponse = await client.GetContainer("db", "spatial").ReadContainerAsync();
+ //Set GeospatialConfig to Geometry
+ GeospatialConfig geospatialConfig = new GeospatialConfig(GeospatialType.Geometry);
+ containerResponse.Resource.GeospatialConfig = geospatialConfig;
+ // Add a spatial index including the required boundingBox
+ SpatialPath spatialPath = new SpatialPath
+ {
+ Path = "/locations/*",
+ BoundingBox = new BoundingBoxProperties(){
+ Xmin = 0,
+ Ymin = 0,
+ Xmax = 10,
+ Ymax = 10
+ }
+ };
+ spatialPath.SpatialTypes.Add(SpatialType.Point);
+ spatialPath.SpatialTypes.Add(SpatialType.LineString);
+ spatialPath.SpatialTypes.Add(SpatialType.Polygon);
+ spatialPath.SpatialTypes.Add(SpatialType.MultiPolygon);
+
+ containerResponse.Resource.IndexingPolicy.SpatialIndexes.Add(spatialPath);
+
+ // Update container with changes
+ await client.GetContainer("db", "spatial").ReplaceContainerAsync(containerResponse.Resource);
+```
+
+## Geography data indexing examples
+
+The following JSON snippet shows an indexing policy with spatial indexing enabled for the **geography** data type. It is valid for spatial data with the geography data type and will index any GeoJSON Point, Polygon, MultiPolygon, or LineString found within documents for spatial querying. If you are modifying the indexing policy using the Azure portal, you can specify the following JSON for indexing policy to enable spatial indexing on your container:
+
+**Container indexing policy JSON with geography spatial indexing**
+
+```json
+{
+ "automatic": true,
+ "indexingMode": "Consistent",
+ "includedPaths": [
+ {
+ "path": "/*"
+ }
+ ],
+ "spatialIndexes": [
+ {
+ "path": "/*",
+ "types": [
+ "Point",
+ "Polygon",
+ "MultiPolygon",
+ "LineString"
+ ]
+ }
+ ],
+ "excludedPaths": []
+}
+```
+
+> [!NOTE]
+> If the location GeoJSON value within the document is malformed or invalid, then it will not get indexed for spatial querying. You can validate location values using ST_ISVALID and ST_ISVALIDDETAILED.
+
+You can also [modify indexing policy](../../how-to-manage-indexing-policy.md) using the Azure CLI, PowerShell, or any SDK.
+
+## Geometry data indexing examples
+
+With the **geometry** data type, similar to the geography data type, you must specify relevant paths and types to index. In addition, you must also specify a `boundingBox` within the indexing policy to indicate the desired area to be indexed for that specific path. Each geospatial path requires its own `boundingBox`.
+
+The bounding box consists of the following properties:
+
+- **xmin**: the minimum indexed x coordinate
+- **ymin**: the minimum indexed y coordinate
+- **xmax**: the maximum indexed x coordinate
+- **ymax**: the maximum indexed y coordinate
+
+A bounding box is required because geometric data occupies a plane that can be infinite. Spatial indexes, however, require a finite space. For the **geography** data type, the Earth is the boundary and you do not need to set a bounding box.
+
+Create a bounding box that contains all (or most) of your data. Only operations computed on the objects that are entirely inside the bounding box will be able to utilize the spatial index. Making the bounding box larger than necessary will negatively impact query performance.
+
+Here is an example indexing policy that indexes **geometry** data with **geospatialConfig** set to `geometry`:
+
+```json
+{
+ "indexingMode": "consistent",
+ "automatic": true,
+ "includedPaths": [
+ {
+ "path": "/*"
+ }
+ ],
+ "excludedPaths": [
+ {
+ "path": "/\"_etag\"/?"
+ }
+ ],
+ "spatialIndexes": [
+ {
+ "path": "/locations/*",
+ "types": [
+ "Point",
+ "LineString",
+ "Polygon",
+ "MultiPolygon"
+ ],
+ "boundingBox": {
+ "xmin": -10,
+ "ymin": -20,
+ "xmax": 10,
+ "ymax": 20
+ }
+ }
+ ]
+}
+```
+
+The above indexing policy has a **boundingBox** of (-10, 10) for x coordinates and (-20, 20) for y coordinates. The container with the above indexing policy will index all Points, Polygons, MultiPolygons, and LineStrings that are entirely within this region.
+
+> [!NOTE]
+> If you try to add an indexing policy with a **boundingBox** to a container with `geography` data type, it will fail. You should modify the container's **geospatialConfig** to be `geometry` before adding a **boundingBox**. You can add data and modify the remainder of
+> your indexing policy (such as the paths and types) either before or after selecting the geospatial data type for the container.
+
+## Next steps
+
+Now that you have learned how to get started with geospatial support in Azure Cosmos DB, next you can:
+
+* Learn more about [Azure Cosmos DB Query](getting-started.md)
+* Learn more about [Querying spatial data with Azure Cosmos DB](geospatial-query.md)
+* Learn more about [Geospatial and GeoJSON location data in Azure Cosmos DB](geospatial-intro.md)
cosmos-db Geospatial Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/geospatial-intro.md
+
+ Title: Geospatial and GeoJSON location data in Azure Cosmos DB
+description: Understand how to create spatial objects with Azure Cosmos DB and the API for NoSQL.
++++ Last updated : 02/17/2022++++
+# Geospatial and GeoJSON location data in Azure Cosmos DB
+
+This article is an introduction to the geospatial functionality in Azure Cosmos DB. After reading our documentation on geospatial indexing you will be able to answer the following questions:
+
+* How do I store spatial data in Azure Cosmos DB?
+* How can I query spatial data in Azure Cosmos DB in SQL and LINQ?
+* How do I enable or disable spatial indexing in Azure Cosmos DB?
+
+## Spatial Data Use Cases
+
+Geospatial data often involves proximity queries, for example, "find all coffee shops near my current location". Common use cases are:
+
+* Geolocation Analytics, driving specific located marketing initiatives.
+* Location based personalization, for multiple industries like Retail and Healthcare.
+* Logistics enhancement, for transport optimization.
+* Risk Analysis, especially for insurance and finance companies.
+* Situational awareness, for alerts and notifications.
+
+## Introduction to spatial data
+
+Spatial data describes the position and shape of objects in space. In most applications, these correspond to objects on the earth and geospatial data. Spatial data can be used to represent the location of a person, a place of interest, or the boundary of a city or a lake.
+
+Azure Cosmos DB's API for NoSQL supports two spatial data types: the **geometry** data type and the **geography** data type.
+
+- The **geometry** type represents data in a Euclidean (flat) coordinate system
+- The **geography** type represents data in a round-earth coordinate system.
+
+## Supported data types
+
+Azure Cosmos DB supports indexing and querying of geospatial point data that's represented using the [GeoJSON specification](https://tools.ietf.org/html/rfc7946). GeoJSON data structures are always valid JSON objects, so they can be stored and queried using Azure Cosmos DB without any specialized tools or libraries.
+
+Azure Cosmos DB supports the following spatial data types:
+
+- Point
+- LineString
+- Polygon
+- MultiPolygon
+
+> [!TIP]
+> Currently spatial data in Azure Cosmos DB is not supported by Entity Framework. Please use one of the Azure Cosmos DB SDKs instead.
+
+### Points
+
+A **Point** denotes a single position in space. In geospatial data, a Point represents the exact location, which could be a street address of a grocery store, a kiosk, an automobile, or a city. A point is represented in GeoJSON (and Azure Cosmos DB) using its coordinate pair of longitude and latitude.
+
+Here's an example JSON for a point:
+
+**Points in Azure Cosmos DB**
+
+```json
+{
+ "type":"Point",
+ "coordinates":[ 31.9, -4.8 ]
+}
+```
+
+Spatial data types can be embedded in an Azure Cosmos DB document as shown in this example of a user profile containing location data:
+
+**User Profile with Location stored in Azure Cosmos DB**
+
+```json
+{
+ "id":"cosmosdb-profile",
+ "screen_name":"@CosmosDB",
+ "city":"Redmond",
+ "topics":[ "global", "distributed" ],
+ "location":{
+ "type":"Point",
+ "coordinates":[ 31.9, -4.8 ]
+ }
+}
+```
+
+### Points in a geometry coordinate system
+
+For the **geometry** data type, GeoJSON specification specifies the horizontal axis first and the vertical axis second.
+
+### Points in a geography coordinate system
+
+For the **geography** data type, GeoJSON specification specifies longitude first and latitude second. Like in other mapping applications, longitude and latitude are angles and represented in terms of degrees. Longitude values are measured from the Prime Meridian and are between -180 degrees and 180.0 degrees, and latitude values are measured from the equator and are between -90.0 degrees and 90.0 degrees.
+
+Azure Cosmos DB interprets coordinates as represented per the WGS-84 reference system. See below for more details about coordinate reference systems.
+
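+Because longitude comes first, accidentally reversing the order typically produces an out-of-range latitude. As a hedged sketch, the `ST_ISVALID` function (covered in the querying article) can catch this; the expected result is `false` because -122.12 is outside the valid latitude range of -90.0 to 90.0 degrees.
+
+```sql
+SELECT ST_ISVALID({ "type": "Point", "coordinates": [47.66, -122.12] }) AS reversedCoordinates
+```
+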
+### LineStrings
+
+**LineStrings** represent a series of two or more points in space and the line segments that connect them. In geospatial data, LineStrings are commonly used to represent highways or rivers.
+
+**LineStrings in GeoJSON**
+
+```json
+{
+   "type":"LineString",
+   "coordinates":[
+      [ 31.8, -5 ],
+      [ 31.8, -4.7 ]
+   ]
+}
+```
+
+### Polygons
+
+A **Polygon** is a boundary of connected points that forms a closed LineString. Polygons are commonly used to represent natural formations like lakes or political jurisdictions like cities and states. Here's an example of a Polygon in Azure Cosmos DB:
+
+**Polygons in GeoJSON**
+
+```json
+{
+ "type":"Polygon",
+ "coordinates":[ [
+ [ 31.8, -5 ],
+ [ 32, -5 ],
+ [ 32, -4.7 ],
+ [ 31.8, -4.7 ],
+ [ 31.8, -5 ]
+ ] ]
+}
+```
+
+> [!NOTE]
+> The GeoJSON specification requires that for valid Polygons, the last coordinate pair provided should be the same as the first, to create a closed shape.
+>
+> Points within a Polygon must be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
+
+### MultiPolygons
+
+A **MultiPolygon** is an array of zero or more Polygons. **MultiPolygons** cannot overlap sides or have any common area. They may touch at one or more points.
+
+**MultiPolygons in GeoJSON**
+
+```json
+{
+ "type":"MultiPolygon",
+ "coordinates":[[[
+ [52.0, 12.0],
+ [53.0, 12.0],
+ [53.0, 13.0],
+ [52.0, 13.0],
+ [52.0, 12.0]
+ ]],
+ [[
+ [50.0, 0.0],
+ [51.0, 0.0],
+ [51.0, 5.0],
+ [50.0, 5.0],
+ [50.0, 0.0]
+ ]]]
+}
+```
+
+## Coordinate reference systems
+
+Since the shape of the earth is irregular, coordinates of geography geospatial data are represented in many coordinate reference systems (CRS), each with its own frame of reference and units of measurement. For example, the "National Grid of Britain" is a reference system that is accurate for the United Kingdom, but not outside it.
+
+The most popular CRS in use today is the World Geodetic System [WGS-84](https://earth-info.nga.mil/GandG/update/index.php). GPS devices, and many mapping services including Google Maps and Bing Maps APIs use WGS-84. Azure Cosmos DB supports indexing and querying of geography geospatial data using the WGS-84 CRS only.
+
+## Creating documents with spatial data
+
+When you create documents that contain GeoJSON values, they are automatically indexed with a spatial index in accordance with the indexing policy of the container. If you're working with an Azure Cosmos DB SDK in a dynamically typed language like Python or Node.js, you must create valid GeoJSON.
+
+**Create Document with Geospatial data in Node.js**
+
+```javascript
+var userProfileDocument = {
+ "id":"cosmosdb",
+ "location":{
+ "type":"Point",
+ "coordinates":[ -122.12, 47.66 ]
+ }
+};
+
+client.createDocument(`dbs/${databaseName}/colls/${collectionName}`, userProfileDocument, (err, created) => {
+ // additional code within the callback
+});
+```
+
+If you're working with the API for NoSQL, you can use the `Point`, `LineString`, `Polygon`, and `MultiPolygon` classes within the `Microsoft.Azure.Cosmos.Spatial` namespace to embed location information within your application objects. These classes help simplify the serialization and deserialization of spatial data into GeoJSON.
+
+**Create Document with Geospatial data in .NET**
+
+```csharp
+using Microsoft.Azure.Cosmos.Spatial;
+
+public class UserProfile
+{
+ [JsonProperty("id")]
+ public string id { get; set; }
+
+ [JsonProperty("location")]
+ public Point Location { get; set; }
+
+ // More properties
+}
+
+await container.CreateItemAsync( new UserProfile
+ {
+ id = "cosmosdb",
+ Location = new Point (-122.12, 47.66)
+ });
+```
+
+If you don't have the latitude and longitude information, but have the physical addresses or location name like city or country/region, you can look up the actual coordinates by using a geocoding service like Bing Maps REST Services. Learn more about Bing Maps geocoding [here](/bingmaps/rest-services/).
+
+## Next steps
+
+Now that you have learned how to get started with geospatial support in Azure Cosmos DB, next you can:
+
+* Learn more about [Azure Cosmos DB Query](getting-started.md)
+* Learn more about [Querying spatial data with Azure Cosmos DB](geospatial-query.md)
+* Learn more about [Index spatial data with Azure Cosmos DB](geospatial-index.md)
cosmos-db Geospatial Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/geospatial-query.md
+
+ Title: Querying geospatial data with Azure Cosmos DB
+description: Querying spatial data with Azure Cosmos DB
+++++ Last updated : 02/20/2020+++
+# Querying geospatial data with Azure Cosmos DB
+
+This article will cover how to query geospatial data in Azure Cosmos DB using SQL and LINQ. Currently storing and accessing geospatial data is supported by Azure Cosmos DB for NoSQL accounts only. Azure Cosmos DB supports the following Open Geospatial Consortium (OGC) built-in functions for geospatial querying. For more information on the complete set of built-in functions in the SQL language, see [Query System Functions in Azure Cosmos DB](system-functions.md).
+
+## Spatial SQL built-in functions
+
+Here is a list of geospatial system functions useful for querying in Azure Cosmos DB:
+
+|**Usage**|**Description**|
+| --- | --- |
+| ST_DISTANCE (spatial_expr, spatial_expr) | Returns the distance between the two GeoJSON Point, Polygon, or LineString expressions.|
+|ST_WITHIN (spatial_expr, spatial_expr) | Returns a Boolean expression indicating whether the first GeoJSON object (Point, Polygon, or LineString) is within the second GeoJSON object (Point, Polygon, or LineString).|
+|ST_INTERSECTS (spatial_expr, spatial_expr)| Returns a Boolean expression indicating whether the two specified GeoJSON objects (Point, Polygon, or LineString) intersect.|
+|ST_ISVALID| Returns a Boolean value indicating whether the specified GeoJSON Point, Polygon, or LineString expression is valid.|
+| ST_ISVALIDDETAILED| Returns a JSON value containing a Boolean value if the specified GeoJSON Point, Polygon, or LineString expression is valid. If invalid, it returns the reason as a string value.|
+
+Spatial functions can be used to perform proximity queries against spatial data. For example, here's a query that returns all family documents that are within 30 km of the specified location using the `ST_DISTANCE` built-in function.
+
+**Query**
+
+```sql
+ SELECT f.id
+ FROM Families f
+ WHERE ST_DISTANCE(f.location, {"type": "Point", "coordinates":[31.9, -4.8]}) < 30000
+```
+
+**Results**
+
+```json
+ [{
+ "id": "WakefieldFamily"
+ }]
+```
+
+If you include spatial indexing in your indexing policy, then "distance queries" will be served efficiently through the index. For more information on spatial indexing, see [geospatial indexing](geospatial-index.md). If you don't have a spatial index for the specified paths, the query will do a scan of the container.
+
+`ST_WITHIN` can be used to check if a point lies within a Polygon. Commonly Polygons are used to represent boundaries like zip codes, state boundaries, or natural formations. Again if you include spatial indexing in your indexing policy, then "within" queries will be served efficiently through the index.
+
+Polygon arguments in `ST_WITHIN` can contain only a single ring, that is, the Polygons must not contain holes in them.
+
+**Query**
+
+```sql
+ SELECT *
+ FROM Families f
+ WHERE ST_WITHIN(f.location, {
+ "type":"Polygon",
+ "coordinates": [[[31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]]]
+ })
+```
+
+**Results**
+
+```json
+ [{
+ "id": "WakefieldFamily",
+ }]
+```
+
+> [!NOTE]
+> Similar to how mismatched types work in Azure Cosmos DB query, if the location value specified in either argument is malformed or invalid, then it evaluates to **undefined** and the document is skipped from the query results. If your query returns no results, run `ST_ISVALIDDETAILED` to debug why the spatial type is invalid.
+
+Azure Cosmos DB also supports performing inverse queries, that is, you can index polygons or lines in Azure Cosmos DB, then query for the areas that contain a specified point. This pattern is commonly used in logistics to identify, for example, when a truck enters or leaves a designated area.
+
+**Query**
+
+```sql
+ SELECT *
+ FROM Areas a
+ WHERE ST_WITHIN({"type": "Point", "coordinates":[31.9, -4.8]}, a.location)
+```
+
+**Results**
+
+```json
+ [{
+ "id": "MyDesignatedLocation",
+ "location": {
+ "type":"Polygon",
+ "coordinates": [[[31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]]]
+ }
+ }]
+```
+
+`ST_ISVALID` and `ST_ISVALIDDETAILED` can be used to check if a spatial object is valid. For example, the following query checks the validity of a point with an out of range latitude value (-132.8). `ST_ISVALID` returns just a Boolean value, and `ST_ISVALIDDETAILED` returns the Boolean and a string containing the reason why it is considered invalid.
+
+**Query**
+
+```sql
+ SELECT ST_ISVALID({ "type": "Point", "coordinates": [31.9, -132.8] })
+```
+
+**Results**
+
+```json
+ [{
+ "$1": false
+ }]
+```
+
+These functions can also be used to validate Polygons. For example, here we use `ST_ISVALIDDETAILED` to validate a Polygon that is not closed.
+
+**Query**
+
+```sql
+ SELECT ST_ISVALIDDETAILED({ "type": "Polygon", "coordinates": [[
+ [ 31.8, -5 ], [ 31.8, -4.7 ], [ 32, -4.7 ], [ 32, -5 ]
+ ]]})
+```
+
+**Results**
+
+```json
+ [{
+ "$1": {
+ "valid": false,
+ "reason": "The Polygon input is not valid because the start and end points of the ring number 1 are not the same. Each ring of a Polygon must have the same start and end points."
+ }
+ }]
+```
+
+## LINQ querying in the .NET SDK
+
+The SQL .NET SDK also provides stub methods `Distance()` and `Within()` for use within LINQ expressions. The SQL LINQ provider translates these method calls to the equivalent SQL built-in function calls (ST_DISTANCE and ST_WITHIN respectively).
+
+Here's an example of a LINQ query that finds all documents in the Azure Cosmos DB container whose `location` value is within a radius of 30 km of the specified point using LINQ.
+
+**LINQ query for Distance**
+
+```csharp
+ foreach (UserProfile user in container.GetItemLinqQueryable<UserProfile>(allowSynchronousQueryExecution: true)
+ .Where(u => u.ProfileType == "Public" && u.Location.Distance(new Point(32.33, -4.66)) < 30000))
+ {
+ Console.WriteLine("\t" + user);
+ }
+```
+
+Similarly, here's a query for finding all the documents whose `location` is within the specified box/Polygon.
+
+**LINQ query for Within**
+
+```csharp
+ Polygon rectangularArea = new Polygon(
+ new[]
+ {
+ new LinearRing(new [] {
+ new Position(31.8, -5),
+ new Position(32, -5),
+ new Position(32, -4.7),
+ new Position(31.8, -4.7),
+ new Position(31.8, -5)
+ })
+ });
+
+ foreach (UserProfile user in container.GetItemLinqQueryable<UserProfile>(allowSynchronousQueryExecution: true)
+ .Where(a => a.Location.Within(rectangularArea)))
+ {
+ Console.WriteLine("\t" + user);
+ }
+```
+
+## Next steps
+
+Now that you have learned how to get started with geospatial support in Azure Cosmos DB, next you can:
+
+* Learn more about [Azure Cosmos DB Query](getting-started.md)
+* Learn more about [Geospatial and GeoJSON location data in Azure Cosmos DB](geospatial-intro.md)
+* Learn more about [Index spatial data with Azure Cosmos DB](geospatial-index.md)
cosmos-db Getcurrentdatetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getcurrentdatetime.md
+
+ Title: GetCurrentDateTime in Azure Cosmos DB query language
+description: Learn about SQL system function GetCurrentDateTime in Azure Cosmos DB.
++++ Last updated : 02/03/2021++++
+# GetCurrentDateTime (Azure Cosmos DB)
+
+Returns the current UTC (Coordinated Universal Time) date and time as an ISO 8601 string.
+
+## Syntax
+
+```sql
+GetCurrentDateTime ()
+```
+
+## Return types
+
+Returns the current UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+ For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+
+## Remarks
+
+GetCurrentDateTime() is a nondeterministic function. The result returned is UTC. Precision is 7 digits, with an accuracy of 100 nanoseconds.
+
+> [!NOTE]
+> This system function will not utilize the index. If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
+
+## Examples
+
+The following example shows how to get the current UTC Date Time using the GetCurrentDateTime() built-in function.
+
+```sql
+SELECT GetCurrentDateTime() AS currentUtcDateTime
+```
+
+ Here is an example result set.
+
+```json
+[{
+ "currentUtcDateTime": "2019-05-03T20:36:17.1234567Z"
+}]
+```
+
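+Following the guidance in the note above, the next sketch filters on a constant ISO 8601 string obtained by the client before query execution instead of calling GetCurrentDateTime() in the `WHERE` clause. Here, `c.lastUpdated` is a hypothetical property that stores ISO 8601 strings; because such strings sort chronologically when compared lexicographically, this range filter can use the index.
+
+```sql
+SELECT c.id
+FROM c
+WHERE c.lastUpdated >= "2019-05-03T20:36:17.1234567Z"
+```
+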
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](date-time-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Getcurrentticks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getcurrentticks.md
+
+ Title: GetCurrentTicks in Azure Cosmos DB query language
+description: Learn about SQL system function GetCurrentTicks in Azure Cosmos DB.
++++ Last updated : 02/03/2021++++
+# GetCurrentTicks (Azure Cosmos DB)
+
+Returns the number of 100-nanosecond ticks that have elapsed since 00:00:00 Thursday, 1 January 1970.
+
+## Syntax
+
+```sql
+GetCurrentTicks ()
+```
+
+## Return types
+
+Returns a signed numeric value, the current number of 100-nanosecond ticks that have elapsed since the Unix epoch. In other words, GetCurrentTicks returns the number of 100 nanosecond ticks that have elapsed since 00:00:00 Thursday, 1 January 1970.
+
+## Remarks
+
+GetCurrentTicks() is a nondeterministic function. The result returned is UTC (Coordinated Universal Time).
+
+> [!NOTE]
+> This system function will not utilize the index. If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
+
+## Examples
+
+Here's an example that returns the current time, measured in ticks:
+
+```sql
+SELECT GetCurrentTicks() AS CurrentTimeInTicks
+```
+
+```json
+[
+ {
+ "CurrentTimeInTicks": 15973607943002652
+ }
+]
+```
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](date-time-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Getcurrenttimestamp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getcurrenttimestamp.md
+
+ Title: GetCurrentTimestamp in Azure Cosmos DB query language
+description: Learn about SQL system function GetCurrentTimestamp in Azure Cosmos DB.
++++ Last updated : 02/03/2021++++
+# GetCurrentTimestamp (Azure Cosmos DB)
+
+ Returns the number of milliseconds that have elapsed since 00:00:00 Thursday, 1 January 1970.
+
+## Syntax
+
+```sql
+GetCurrentTimestamp ()
+```
+
+## Return types
+
+Returns a signed numeric value, the current number of milliseconds that have elapsed since the Unix epoch, that is, the number of milliseconds that have elapsed since 00:00:00 Thursday, 1 January 1970.
+
+## Remarks
+
+GetCurrentTimestamp() is a nondeterministic function. The result returned is UTC (Coordinated Universal Time).
+
+> [!NOTE]
+> This system function will not utilize the index. If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
+
+## Examples
+
+ The following example shows how to get the current timestamp using the GetCurrentTimestamp() built-in function.
+
+```sql
+SELECT GetCurrentTimestamp() AS currentUtcTimestamp
+```
+
+ Here is an example result set.
+
+```json
+[{
+ "currentUtcTimestamp": 1556916469065
+}]
+```
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](date-time-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getting-started.md
+
+ Title: Getting started with SQL queries in Azure Cosmos DB
+description: Learn how to use SQL queries to query data from Azure Cosmos DB. You can upload sample data to a container in Azure Cosmos DB and query it.
+++++ Last updated : 08/26/2021+++
+# Getting started with SQL queries
+
+In Azure Cosmos DB for NoSQL accounts, there are two ways to read data:
+
+**Point reads** - You can do a key/value lookup on a single *item ID* and partition key. The *item ID* and partition key combination is the key and the item itself is the value. For a 1 KB document, point reads typically cost 1 [request unit](../../request-units.md) with a latency under 10 ms. Point reads return a single whole item, not a partial item or a specific field.
+
+Here are some examples of how to do **Point reads** with each SDK:
+
+- [.NET SDK](/dotnet/api/microsoft.azure.cosmos.container.readitemasync)
+- [Java SDK](/java/api/com.azure.cosmos.cosmoscontainer.readitem#com-azure-cosmos-cosmoscontainer-(t)readitem(java-lang-string-com-azure-cosmos-models-partitionkey-com-azure-cosmos-models-cosmositemrequestoptions-java-lang-class(t)))
+- [Node.js SDK](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-read)
+- [Python SDK](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-read-item)
+
+**SQL queries** - You can query data by writing queries using the Structured Query Language (SQL) as a JSON query language. Queries always cost at least 2.3 request units and, in general, will have a higher and more variable latency than point reads. Queries can return many items.
+
+Most read-heavy workloads on Azure Cosmos DB use a combination of both point reads and SQL queries. If you just need to read a single item, point reads are cheaper and faster than queries. Point reads don't need to use the query engine to access data and can read the data directly. Of course, it's not possible for all workloads to exclusively read data using point reads, so support of SQL as a query language and [schema-agnostic indexing](../../index-overview.md) provide a more flexible way to access your data.
+
+Here are some examples of how to do **SQL queries** with each SDK:
+
+- [.NET SDK](../samples-dotnet.md#items)
+- [Java SDK](../samples-java.md#query-examples)
+- [Node.js SDK](../samples-nodejs.md#item-examples)
+- [Python SDK](../samples-python.md#item-examples)
+
+The remainder of this doc shows how to get started writing SQL queries in Azure Cosmos DB. SQL queries can be run through either the SDK or Azure portal.
+
+## Upload sample data
+
+In your API for NoSQL Azure Cosmos DB account, open the [Data Explorer](../../data-explorer.md) to create a container called `Families`. After the container is created, use the data structures browser to find and open it. In your `Families` container, you will see the `Items` option right below the name of the container. Open this option and you'll see a button in the menu bar in the center of the screen to create a 'New Item'. You will use this feature to create the JSON items below.
+
+### Create JSON items
+
+The following two JSON items are documents about the Andersen and Wakefield families. They include parents, children and their pets, address, and registration information.
+
+The first item has strings, numbers, Booleans, arrays, and nested properties:
+
+```json
+{
+ "id": "AndersenFamily",
+ "lastName": "Andersen",
+ "parents": [
+ { "firstName": "Thomas" },
+ { "firstName": "Mary Kay"}
+ ],
+ "children": [
+ {
+ "firstName": "Henriette Thaulow",
+ "gender": "female",
+ "grade": 5,
+ "pets": [{ "givenName": "Fluffy" }]
+ }
+ ],
+ "address": { "state": "WA", "county": "King", "city": "Seattle" },
+ "creationDate": 1431620472,
+ "isRegistered": true
+}
+```
+
+The second item uses `givenName` and `familyName` instead of `firstName` and `lastName`:
+
+```json
+{
+ "id": "WakefieldFamily",
+ "parents": [
+ { "familyName": "Wakefield", "givenName": "Robin" },
+ { "familyName": "Miller", "givenName": "Ben" }
+ ],
+ "children": [
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1,
+ "pets": [
+ { "givenName": "Goofy" },
+ { "givenName": "Shadow" }
+ ]
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8 }
+ ],
+ "address": { "state": "NY", "county": "Manhattan", "city": "NY" },
+ "creationDate": 1431620462,
+ "isRegistered": false
+}
+```
+
+### Query the JSON items
+
+Try a few queries against the JSON data to understand some of the key aspects of Azure Cosmos DB's SQL query language.
+
+The following query returns the items where the `id` field matches `AndersenFamily`. Since it's a `SELECT *` query, the output of the query is the complete JSON item. For more information about SELECT syntax, see [SELECT statement](select.md).
+
+```sql
+ SELECT *
+ FROM Families f
+ WHERE f.id = "AndersenFamily"
+```
+
+The query results are:
+
+```json
+ [{
+ "id": "AndersenFamily",
+ "lastName": "Andersen",
+ "parents": [
+ { "firstName": "Thomas" },
+ { "firstName": "Mary Kay"}
+ ],
+ "children": [
+ {
+ "firstName": "Henriette Thaulow", "gender": "female", "grade": 5,
+ "pets": [{ "givenName": "Fluffy" }]
+ }
+ ],
+ "address": { "state": "WA", "county": "King", "city": "Seattle" },
+ "creationDate": 1431620472,
+ "isRegistered": true
+ }]
+```
+
+The following query reformats the JSON output into a different shape. The query projects a new JSON `Family` object with two selected fields, `Name` and `City`, when the address city is the same as the state. "NY, NY" matches this case.
+
+```sql
+ SELECT {"Name":f.id, "City":f.address.city} AS Family
+ FROM Families f
+ WHERE f.address.city = f.address.state
+```
+
+The query results are:
+
+```json
+ [{
+ "Family": {
+ "Name": "WakefieldFamily",
+ "City": "NY"
+ }
+ }]
+```
+
+The following query returns all the given names of children in the family whose `id` matches `WakefieldFamily`, ordered by city.
+
+```sql
+ SELECT c.givenName
+ FROM Families f
+ JOIN c IN f.children
+ WHERE f.id = 'WakefieldFamily'
+ ORDER BY f.address.city ASC
+```
+
+The results are:
+
+```json
+ [
+ { "givenName": "Jesse" },
+ { "givenName": "Lisa"}
+ ]
+```
+
+## Remarks
+
+The preceding examples show several aspects of the Azure Cosmos DB query language:
+
+* Since API for NoSQL works on JSON values, it deals with tree-shaped entities instead of rows and columns. You can refer to the tree nodes at any arbitrary depth, like `Node1.Node2.Node3…..Nodem`, similar to the two-part reference of `<table>.<column>` in ANSI SQL, as shown in the sketch after this list.
+
+* Because the query language works with schemaless data, the type system must be bound dynamically. The same expression could yield different types on different items. The result of a query is a valid JSON value, but isn't guaranteed to be of a fixed schema.
+
+* Azure Cosmos DB supports strict JSON items only. The type system and expressions are restricted to deal only with JSON types. For more information, see the [JSON specification](https://www.json.org/).
+
+* An Azure Cosmos DB container is a schema-free collection of JSON items. The relations within and across container items are implicitly captured by containment, not by primary key and foreign key relations. This feature is important for the intra-item joins that are described in [Joins in Azure Cosmos DB](join.md).
+
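+As a small illustration of the first point, the following hedged sketch against the sample items reaches two levels (and one array index) deep in a single expression; for the Andersen item it would project `{"city": "Seattle", "firstChildGrade": 5}`.
+
+```sql
+    SELECT f.address.city AS city, f.children[0].grade AS firstChildGrade
+    FROM Families f
+    WHERE f.id = "AndersenFamily"
+```
+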
+## Next steps
+
+- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [SELECT clause](select.md)
+- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../../convert-vcore-to-request-unit.md)
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](../../estimate-ru-with-capacity-planner.md)
cosmos-db Group By https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/group-by.md
+
+ Title: GROUP BY clause in Azure Cosmos DB
+description: Learn about the GROUP BY clause for Azure Cosmos DB.
+++++ Last updated : 05/12/2022+++
+# GROUP BY clause in Azure Cosmos DB
+
+The GROUP BY clause divides the query's results according to the values of one or more specified properties.
+
+> [!NOTE]
+> The GROUP BY clause is not supported in the Azure Cosmos DB Python SDK.
+
+## Syntax
+
+```sql
+<group_by_clause> ::= GROUP BY <scalar_expression_list>
+
+<scalar_expression_list> ::=
+ <scalar_expression>
+ | <scalar_expression_list>, <scalar_expression>
+```
+
+## Arguments
+
+- `<scalar_expression_list>`
+
+ Specifies the expressions that will be used to divide query results.
+
+- `<scalar_expression>`
+
+ Any scalar expression is allowed except for scalar subqueries and scalar aggregates. Each scalar expression must contain at least one property reference. There is no limit to the number of individual expressions or the cardinality of each expression.
+
+## Remarks
+
+ When a query uses a GROUP BY clause, the SELECT clause can only contain the subset of properties and system functions included in the GROUP BY clause. One exception is [aggregate functions](aggregate-functions.md), which can appear in the SELECT clause without being included in the GROUP BY clause. You can also always include literal values in the SELECT clause.
+
+ The GROUP BY clause must come after the SELECT, FROM, and WHERE clauses and before the OFFSET LIMIT clause. You can't use GROUP BY with an ORDER BY clause.
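+
+For reference, here's a minimal sketch of that clause order, using the nutrition sample data referenced in the examples below (the `IS_DEFINED` filter and the OFFSET LIMIT values are illustrative):
+
+```sql
+SELECT COUNT(1) AS foodGroupCount, f.foodGroup
+FROM Food f
+WHERE IS_DEFINED(f.foodGroup)
+GROUP BY f.foodGroup
+OFFSET 0 LIMIT 2
+```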
+
+ The GROUP BY clause does not allow any of the following:
+
+- Aliasing properties or aliasing system functions (aliasing is still allowed within the SELECT clause)
+- Subqueries
+- Aggregate system functions (these are only allowed in the SELECT clause)
+
+Queries with an aggregate system function and a subquery with `GROUP BY` are not supported. For example, the following query is not supported:
+
+```sql
+SELECT COUNT(UniqueLastNames)
+FROM (
+SELECT AVG(f.age)
+FROM f
+GROUP BY f.lastName
+) AS UniqueLastNames
+```
+
+Additionally, cross-partition `GROUP BY` queries can have a maximum of 21 [aggregate system functions](aggregate-functions.md).
+
+## Examples
+
+These examples use a sample [nutrition data set](https://github.com/AzureCosmosDB/labs/blob/master/dotnet/setup/NutritionData.json).
+
+Here's a query that returns the total count of items in each `foodGroup`:
+
+```sql
+SELECT TOP 4 COUNT(1) AS foodGroupCount, f.foodGroup
+FROM Food f
+GROUP BY f.foodGroup
+```
+
+Some results are (the TOP keyword is used to limit the results):
+
+```json
+[
+ {
+ "foodGroupCount": 183,
+ "foodGroup": "Cereal Grains and Pasta"
+ },
+ {
+ "foodGroupCount": 133,
+ "foodGroup": "Nut and Seed Products"
+ },
+ {
+ "foodGroupCount": 113,
+ "foodGroup": "Meals, Entrees, and Side Dishes"
+ },
+ {
+ "foodGroupCount": 64,
+ "foodGroup": "Spices and Herbs"
+ }
+]
+```
+
+This query has two expressions used to divide results:
+
+```sql
+SELECT TOP 4 COUNT(1) AS foodGroupCount, f.foodGroup, f.version
+FROM Food f
+GROUP BY f.foodGroup, f.version
+```
+
+Some results are:
+
+```json
+[
+ {
+ "foodGroupCount": 183,
+ "foodGroup": "Cereal Grains and Pasta",
+ "version": 1
+ },
+ {
+ "foodGroupCount": 133,
+ "foodGroup": "Nut and Seed Products",
+ "version": 1
+ },
+ {
+ "foodGroupCount": 113,
+ "foodGroup": "Meals, Entrees, and Side Dishes",
+ "version": 1
+ },
+ {
+ "foodGroupCount": 64,
+ "foodGroup": "Spices and Herbs",
+ "version": 1
+ }
+]
+```
+
+This query has a system function in the GROUP BY clause:
+
+```sql
+SELECT TOP 4 COUNT(1) AS foodGroupCount, UPPER(f.foodGroup) AS upperFoodGroup
+FROM Food f
+GROUP BY UPPER(f.foodGroup)
+```
+
+Some results are:
+
+```json
+[
+ {
+ "foodGroupCount": 183,
+ "upperFoodGroup": "CEREAL GRAINS AND PASTA"
+ },
+ {
+ "foodGroupCount": 133,
+ "upperFoodGroup": "NUT AND SEED PRODUCTS"
+ },
+ {
+ "foodGroupCount": 113,
+ "upperFoodGroup": "MEALS, ENTREES, AND SIDE DISHES"
+ },
+ {
+ "foodGroupCount": 64,
+ "upperFoodGroup": "SPICES AND HERBS"
+ }
+]
+```
+
+This query uses both keywords and system functions in the item property expression:
+
+```sql
+SELECT COUNT(1) AS foodGroupCount, ARRAY_CONTAINS(f.tags, {name: 'orange'}) AS containsOrangeTag, f.version BETWEEN 0 AND 2 AS correctVersion
+FROM Food f
+GROUP BY ARRAY_CONTAINS(f.tags, {name: 'orange'}), f.version BETWEEN 0 AND 2
+```
+
+The results are:
+
+```json
+[
+ {
+ "foodGroupCount": 10,
+ "containsOrangeTag": true,
+ "correctVersion": true
+ },
+ {
+ "foodGroupCount": 8608,
+ "containsOrangeTag": false,
+ "correctVersion": true
+ }
+]
+```
+
+## Next steps
+
+- [Getting started](getting-started.md)
+- [SELECT clause](select.md)
+- [Aggregate functions](aggregate-functions.md)
cosmos-db Index Of https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/index-of.md
+
+ Title: INDEX_OF in Azure Cosmos DB query language
+description: Learn about SQL system function INDEX_OF in Azure Cosmos DB.
++++ Last updated : 08/30/2022++++
+# INDEX_OF (Azure Cosmos DB)
++
+Returns the starting position of the first occurrence of the second string expression within the first specified string expression, or `-1` if the string isn't found.
+
+## Syntax
+
+```sql
+INDEX_OF(<str_expr1>, <str_expr2> [, <numeric_expr>])
+```
+
+## Arguments
+
+*str_expr1*
+ Is the string expression to be searched.
+
+*str_expr2*
+ Is the string expression to search for.
+
+*numeric_expr*
+ Optional numeric expression that sets the position the search will start. The first position in *str_expr1* is 0.
+
+## Return types
+
+Returns a numeric expression.
+
+## Examples
+
+The following example returns the index of various substrings inside "abc".
+
+```sql
+SELECT
+ INDEX_OF("abc", "ab") AS index_of_prefix,
+ INDEX_OF("abc", "b") AS index_of_middle,
+ INDEX_OF("abc", "c") AS index_of_last,
+ INDEX_OF("abc", "d") AS index_of_missing
+```
+
+Here's the result set.
+
+```json
+[
+ {
+ "index_of_prefix": 0,
+ "index_of_middle": 1,
+ "index_of_last": 2,
+ "index_of_missing": -1
+ }
+]
+```
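+
+The optional third argument sets the position where the search starts. As a minimal sketch, the following query should return `1` for the first expression and `4` for the second, because the second search starts past the first occurrence:
+
+```sql
+SELECT
+    INDEX_OF("abcabc", "bc") AS first_occurrence,
+    INDEX_OF("abcabc", "bc", 2) AS second_occurrence
+```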
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Is Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-array.md
+
+ Title: IS_ARRAY in Azure Cosmos DB query language
+description: Learn about SQL system function IS_ARRAY in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# IS_ARRAY (Azure Cosmos DB)
+
+ Returns a Boolean value indicating if the type of the specified expression is an array.
+
+## Syntax
+
+```sql
+IS_ARRAY(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_ARRAY` function.
+
+```sql
+SELECT
+ IS_ARRAY(true) AS isArray1,
+ IS_ARRAY(1) AS isArray2,
+ IS_ARRAY("value") AS isArray3,
+ IS_ARRAY(null) AS isArray4,
+ IS_ARRAY({prop: "value"}) AS isArray5,
+ IS_ARRAY([1, 2, 3]) AS isArray6,
+ IS_ARRAY({prop: "value"}.prop2) AS isArray7
+```
+
+ Here is the result set.
+
+```json
+[{"isArray1":false,"isArray2":false,"isArray3":false,"isArray4":false,"isArray5":false,"isArray6":true,"isArray7":false}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](type-checking-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Is Bool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-bool.md
+
+ Title: IS_BOOL in Azure Cosmos DB query language
+description: Learn about SQL system function IS_BOOL in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# IS_BOOL (Azure Cosmos DB)
+
+ Returns a Boolean value indicating if the type of the specified expression is a Boolean.
+
+## Syntax
+
+```sql
+IS_BOOL(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_BOOL` function.
+
+```sql
+SELECT
+ IS_BOOL(true) AS isBool1,
+ IS_BOOL(1) AS isBool2,
+ IS_BOOL("value") AS isBool3,
+ IS_BOOL(null) AS isBool4,
+ IS_BOOL({prop: "value"}) AS isBool5,
+ IS_BOOL([1, 2, 3]) AS isBool6,
+ IS_BOOL({prop: "value"}.prop2) AS isBool7
+```
+
+ Here is the result set.
+
+```json
+[{"isBool1":true,"isBool2":false,"isBool3":false,"isBool4":false,"isBool5":false,"isBool6":false,"isBool7":false}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](type-checking-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Is Defined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-defined.md
+
+ Title: IS_DEFINED in Azure Cosmos DB query language
+description: Learn about SQL system function IS_DEFINED in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# IS_DEFINED (Azure Cosmos DB)
+
+ Returns a Boolean indicating if the property has been assigned a value.
+
+## Syntax
+
+```sql
+IS_DEFINED(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks for the presence of a property within the specified JSON document. The first returns true since "a" is present, but the second returns false since "b" is absent.
+
+```sql
+SELECT IS_DEFINED({ "a" : 5 }.a) AS isDefined1, IS_DEFINED({ "a" : 5 }.b) AS isDefined2
+```
+
+ Here is the result set.
+
+```json
+[{"isDefined1":true,"isDefined2":false}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](type-checking-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Is Null https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-null.md
+
+ Title: IS_NULL in Azure Cosmos DB query language
+description: Learn about SQL system function IS_NULL in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# IS_NULL (Azure Cosmos DB)
+
+ Returns a Boolean value indicating if the type of the specified expression is null.
+
+## Syntax
+
+```sql
+IS_NULL(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_NULL` function.
+
+```sql
+SELECT
+ IS_NULL(true) AS isNull1,
+ IS_NULL(1) AS isNull2,
+ IS_NULL("value") AS isNull3,
+ IS_NULL(null) AS isNull4,
+ IS_NULL({prop: "value"}) AS isNull5,
+ IS_NULL([1, 2, 3]) AS isNull6,
+ IS_NULL({prop: "value"}.prop2) AS isNull7
+```
+
+ Here is the result set.
+
+```json
+[{"isNull1":false,"isNull2":false,"isNull3":false,"isNull4":true,"isNull5":false,"isNull6":false,"isNull7":false}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](type-checking-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Is Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-number.md
+
+ Title: IS_NUMBER in Azure Cosmos DB query language
+description: Learn about SQL system function IS_NUMBER in Azure Cosmos DB.
++++ Last updated : 09/13/2019++++
+# IS_NUMBER (Azure Cosmos DB)
+
+Returns a Boolean value indicating if the type of the specified expression is a number.
+
+## Syntax
+
+```sql
+IS_NUMBER(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+Returns a Boolean expression.
+
+## Examples
+
+The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_NUMBER` function.
+
+```sql
+SELECT
+ IS_NUMBER(true) AS isBooleanANumber,
+ IS_NUMBER(1) AS isNumberANumber,
+ IS_NUMBER("value") AS isTextStringANumber,
+ IS_NUMBER("1") AS isNumberStringANumber,
+ IS_NUMBER(null) AS isNullANumber,
+ IS_NUMBER({prop: "value"}) AS isObjectANumber,
+ IS_NUMBER([1, 2, 3]) AS isArrayANumber,
+ IS_NUMBER({stringProp: "value"}.stringProp) AS isObjectStringPropertyANumber,
+ IS_NUMBER({numberProp: 1}.numberProp) AS isObjectNumberPropertyANumber
+```
+
+Here's the result set.
+
+```json
+[
+ {
+ "isBooleanANumber": false,
+ "isNumberANumber": true,
+ "isTextStringANumber": false,
+ "isNumberStringANumber": false,
+ "isNullANumber": false,
+ "isObjectANumber": false,
+ "isArrayANumber": false,
+ "isObjectStringPropertyANumber": false,
+ "isObjectNumberPropertyANumber": true
+ }
+]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](type-checking-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Is Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-object.md
+
+ Title: IS_OBJECT in Azure Cosmos DB query language
+description: Learn about SQL system function IS_OBJECT in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# IS_OBJECT (Azure Cosmos DB)
+
+ Returns a Boolean value indicating if the type of the specified expression is a JSON object.
+
+## Syntax
+
+```sql
+IS_OBJECT(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_OBJECT` function.
+
+```sql
+SELECT
+ IS_OBJECT(true) AS isObj1,
+ IS_OBJECT(1) AS isObj2,
+ IS_OBJECT("value") AS isObj3,
+ IS_OBJECT(null) AS isObj4,
+ IS_OBJECT({prop: "value"}) AS isObj5,
+ IS_OBJECT([1, 2, 3]) AS isObj6,
+ IS_OBJECT({prop: "value"}.prop2) AS isObj7
+```
+
+ Here is the result set.
+
+```json
+[{"isObj1":false,"isObj2":false,"isObj3":false,"isObj4":false,"isObj5":true,"isObj6":false,"isObj7":false}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](type-checking-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Is Primitive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-primitive.md
+
+ Title: IS_PRIMITIVE in Azure Cosmos DB query language
+description: Learn about SQL system function IS_PRIMITIVE in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# IS_PRIMITIVE (Azure Cosmos DB)
+
+ Returns a Boolean value indicating if the type of the specified expression is a primitive (string, Boolean, numeric, or null).
+
+## Syntax
+
+```sql
+IS_PRIMITIVE(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks objects of JSON Boolean, number, string, null, object, array and undefined types using the `IS_PRIMITIVE` function.
+
+```sql
+SELECT
+ IS_PRIMITIVE(true) AS isPrim1,
+ IS_PRIMITIVE(1) AS isPrim2,
+ IS_PRIMITIVE("value") AS isPrim3,
+ IS_PRIMITIVE(null) AS isPrim4,
+ IS_PRIMITIVE({prop: "value"}) AS isPrim5,
+ IS_PRIMITIVE([1, 2, 3]) AS isPrim6,
+ IS_PRIMITIVE({prop: "value"}.prop2) AS isPrim7
+```
+
+ Here is the result set.
+
+```json
+[{"isPrim1": true, "isPrim2": true, "isPrim3": true, "isPrim4": true, "isPrim5": false, "isPrim6": false, "isPrim7": false}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](type-checking-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Is String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/is-string.md
+
+ Title: IS_STRING in Azure Cosmos DB query language
+description: Learn about SQL system function IS_STRING in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# IS_STRING (Azure Cosmos DB)
+
+ Returns a Boolean value indicating if the type of the specified expression is a string.
+
+## Syntax
+
+```sql
+IS_STRING(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_STRING` function.
+
+```sql
+SELECT
+ IS_STRING(true) AS isStr1,
+ IS_STRING(1) AS isStr2,
+ IS_STRING("value") AS isStr3,
+ IS_STRING(null) AS isStr4,
+ IS_STRING({prop: "value"}) AS isStr5,
+ IS_STRING([1, 2, 3]) AS isStr6,
+ IS_STRING({prop: "value"}.prop2) AS isStr7
+```
+
+ Here is the result set.
+
+```json
+[{"isStr1":false,"isStr2":false,"isStr3":true,"isStr4":false,"isStr5":false,"isStr6":false,"isStr7":false}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Type checking functions Azure Cosmos DB](type-checking-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/join.md
+
+ Title: SQL JOIN queries for Azure Cosmos DB
+description: Learn how to JOIN multiple tables in Azure Cosmos DB to query the data
+++++ Last updated : 08/27/2021+++
+# Joins in Azure Cosmos DB
+
+In a relational database, joins across tables are the logical corollary to designing normalized schemas. In contrast, the API for NoSQL uses the denormalized data model of schema-free items, which is the logical equivalent of a *self-join*.
+
+> [!NOTE]
+> In Azure Cosmos DB, joins are scoped to a single item. Cross-item and cross-container joins are not supported. In NoSQL databases like Azure Cosmos DB, good [data modeling](../../modeling-data.md) can help avoid the need for cross-item and cross-container joins.
+
+Joins result in a complete cross product of the sets participating in the join. The result of an N-way join is a set of N-element tuples, where each value in the tuple is associated with the aliased set participating in the join and can be accessed by referencing that alias in other clauses.
+
+## Syntax
+
+The language supports the syntax `<from_source1> JOIN <from_source2> JOIN ... JOIN <from_sourceN>`, where each source defines `input_alias1, input_alias2, …, input_aliasN`. This FROM clause returns a set of N-tuples (tuples with N values). Each tuple has values produced by iterating all container aliases over their respective sets.
+
+**Example 1** - 2 sources
+
+- Let `<from_source1>` be container-scoped and represent set {A, B, C}.
+
+- Let `<from_source2>` be document-scoped referencing input_alias1 and represent sets:
+
+ {1, 2} for `input_alias1 = A,`
+
+ {3} for `input_alias1 = B,`
+
+ {4, 5} for `input_alias1 = C,`
+
+- The FROM clause `<from_source1> JOIN <from_source2>` will result in the following tuples:
+
+ (`input_alias1, input_alias2`):
+
+ `(A, 1), (A, 2), (B, 3), (C, 4), (C, 5)`
+
+**Example 2** - 3 sources
+
+- Let `<from_source1>` be container-scoped and represent set {A, B, C}.
+
+- Let `<from_source2>` be document-scoped referencing `input_alias1` and represent sets:
+
+ {1, 2} for `input_alias1 = A,`
+
+ {3} for `input_alias1 = B,`
+
+ {4, 5} for `input_alias1 = C,`
+
+- Let `<from_source3>` be document-scoped referencing `input_alias2` and represent sets:
+
+ {100, 200} for `input_alias2 = 1,`
+
+ {300} for `input_alias2 = 3,`
+
+- The FROM clause `<from_source1> JOIN <from_source2> JOIN <from_source3>` will result in the following tuples:
+
+ (input_alias1, input_alias2, input_alias3):
+
+ (A, 1, 100), (A, 1, 200), (B, 3, 300)
+
+ > [!NOTE]
+ > There are no tuples for other values of `input_alias1` and `input_alias2` for which `<from_source3>` didn't return any values.
+
+**Example 3** - 3 sources
+
+- Let `<from_source1>` be container-scoped and represent set {A, B, C}.
+
+- Let `<from_source2>` be document-scoped referencing `input_alias1` and represent sets:
+
+ {1, 2} for `input_alias1 = A,`
+
+ {3} for `input_alias1 = B,`
+
+ {4, 5} for `input_alias1 = C,`
+
+- Let `<from_source3>` be document-scoped referencing `input_alias1` and represent sets:
+
+ {100, 200} for `input_alias1 = A,`
+
+ {300} for `input_alias1 = C,`
+
+- The FROM clause `<from_source1> JOIN <from_source2> JOIN <from_source3>` will result in the following tuples:
+
+ (`input_alias1, input_alias2, input_alias3`):
+
+ (A, 1, 100), (A, 1, 200), (A, 2, 100), (A, 2, 200), (C, 4, 300), (C, 5, 300)
+
+ > [!NOTE]
+ > This results in a cross product between `<from_source2>` and `<from_source3>` because both are scoped to the same `<from_source1>`. It produces 4 (2x2) tuples with value A, 0 (1x0) tuples with value B, and 2 (2x1) tuples with value C.
+
+## Examples
+
+The following examples show how the JOIN clause works. Before you run these examples, upload the sample [family data](getting-started.md#upload-sample-data).
+
+In the first example, the result is empty, because the cross product of each item from the source and an empty set is empty:
+
+```sql
+ SELECT f.id
+ FROM Families f
+ JOIN f.NonExistent
+```
+
+The result is:
+
+```json
+ [{
+ }]
+```
+
+In the following example, the join is a cross product between two JSON objects: the item root `id` and the `children` subroot. The fact that `children` is an array doesn't affect the join, because the join deals with a single root, which is the `children` array. The result contains only two items, because the cross product of each item with the array yields exactly one item.
+
+```sql
+ SELECT f.id
+ FROM Families f
+ JOIN f.children
+```
+
+The results are:
+
+```json
+ [
+ {
+ "id": "AndersenFamily"
+ },
+ {
+ "id": "WakefieldFamily"
+ }
+ ]
+```
+
+The following example shows a more conventional join:
+
+```sql
+ SELECT f.id
+ FROM Families f
+ JOIN c IN f.children
+```
+
+The results are:
+
+```json
+ [
+ {
+ "id": "AndersenFamily"
+ },
+ {
+ "id": "WakefieldFamily"
+ },
+ {
+ "id": "WakefieldFamily"
+ }
+ ]
+```
+
+The FROM source of the JOIN clause is an iterator. So, the flow in the preceding example is:
+
+1. Expand each child element `c` in the array.
+2. Apply a cross product of the root of the item `f` with each child element `c` that the first step flattened.
+3. Finally, project the root object `f` `id` property alone.
+
+The first item, `AndersenFamily`, contains only one `children` element, so the result set contains only a single object. The second item, `WakefieldFamily`, contains two `children`, so the cross product produces two objects, one for each `children` element. The root fields in both these items are the same, just as you would expect in a cross product.
+
+The preceding example returns just the `id` property for each result. If you want to return the entire document (all of its fields) for each child, you can alter the SELECT portion of the query:
+
+```sql
+ SELECT VALUE f
+ FROM Families f
+ JOIN c IN f.children
+ WHERE f.id = 'WakefieldFamily'
+ ORDER BY f.address.city ASC
+```
+
+The real utility of the JOIN clause is to form tuples from the cross product in a shape that's otherwise difficult to project. The following example filters on a combination of tuple elements, which lets you specify a condition that the tuple as a whole must satisfy.
+
+```sql
+ SELECT
+ f.id AS familyName,
+ c.givenName AS childGivenName,
+ c.firstName AS childFirstName,
+ p.givenName AS petName
+ FROM Families f
+ JOIN c IN f.children
+ JOIN p IN c.pets
+```
+
+The results are:
+
+```json
+ [
+ {
+ "familyName": "AndersenFamily",
+ "childFirstName": "Henriette Thaulow",
+ "petName": "Fluffy"
+ },
+ {
+ "familyName": "WakefieldFamily",
+ "childGivenName": "Jesse",
+ "petName": "Goofy"
+ },
+ {
+ "familyName": "WakefieldFamily",
+ "childGivenName": "Jesse",
+ "petName": "Shadow"
+ }
+ ]
+```
+
+> [!IMPORTANT]
+> This example uses multiple JOIN expressions in a single query. There's a maximum number of JOINs that can be used in a single query. For more information, see [SQL query limits](../../concepts-limits.md#sql-query-limits).
+
+The preceding query performs a double join. You could view the cross product as the following pseudo-code:
+
+```
+ for-each(Family f in Families)
+ {
+ for-each(Child c in f.children)
+ {
+ for-each(Pet p in c.pets)
+ {
+ return (Tuple(f.id AS familyName,
+ c.givenName AS childGivenName,
+ c.firstName AS childFirstName,
+ p.givenName AS petName));
+ }
+ }
+ }
+```
+
+`AndersenFamily` has one child who has one pet, so the cross product yields one row (1\*1\*1) from this family. `WakefieldFamily` has two children, only one of whom has pets, but that child has two pets. The cross product for this family yields 1\*1\*2 = 2 rows.
+
+In the next example, there is an additional filter on `pet`, which excludes all the tuples where the pet name is not `Shadow`. You can build tuples from arrays, filter on any of the elements of the tuple, and project any combination of the elements.
+
+```sql
+ SELECT
+ f.id AS familyName,
+ c.givenName AS childGivenName,
+ c.firstName AS childFirstName,
+ p.givenName AS petName
+ FROM Families f
+ JOIN c IN f.children
+ JOIN p IN c.pets
+ WHERE p.givenName = "Shadow"
+```
+
+The results are:
+
+```json
+ [
+ {
+ "familyName": "WakefieldFamily",
+ "childGivenName": "Jesse",
+ "petName": "Shadow"
+ }
+ ]
+```
+
+## Subqueries instead of JOINs
+
+If your query has a JOIN and filters, you can rewrite part of the query as a [subquery](subquery.md#optimize-join-expressions) to improve performance. In some cases, you may be able to use a subquery or [ARRAY_CONTAINS](array-contains.md) to avoid the need for JOIN altogether and improve query performance.
+
+For example, consider the earlier query that projected the familyName, child's givenName, child's firstName, and pet's givenName. If this query just needed to filter on the pet's name and didn't need to return it, you could use `ARRAY_CONTAINS` or a [subquery](subquery.md) to check for pets where `givenName = "Shadow"`.
+
+### Query rewritten with ARRAY_CONTAINS
+
+```sql
+ SELECT
+ f.id AS familyName,
+ c.givenName AS childGivenName,
+ c.firstName AS childFirstName
+ FROM Families f
+ JOIN c IN f.children
+ WHERE ARRAY_CONTAINS(c.pets, {givenName: 'Shadow'})
+```
+
+### Query rewritten with subquery
+
+```sql
+ SELECT
+ f.id AS familyName,
+ c.givenName AS childGivenName,
+ c.firstName AS childFirstName
+ FROM Families f
+ JOIN c IN f.children
+ WHERE EXISTS (
+ SELECT VALUE n
+ FROM n IN c.pets
+ WHERE n.givenName = "Shadow"
+ )
+```
++
+## Next steps
+
+- [Getting started](getting-started.md)
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmosdb-dotnet)
+- [Subqueries](subquery.md)
cosmos-db Keywords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/keywords.md
+
+ Title: SQL keywords for Azure Cosmos DB
+description: Learn about SQL keywords for Azure Cosmos DB.
+++++ Last updated : 10/05/2021+++
+# Keywords in Azure Cosmos DB
+
+This article details the keywords that can be used in Azure Cosmos DB SQL queries.
+
+## BETWEEN
+
+You can use the `BETWEEN` keyword to express queries against ranges of string or numerical values. For example, the following query returns all items in which the first child's grade is 1-5, inclusive.
+
+```sql
+ SELECT *
+ FROM Families.children[0] c
+ WHERE c.grade BETWEEN 1 AND 5
+```
+
+You can also use the `BETWEEN` keyword in the `SELECT` clause, as in the following example.
+
+```sql
+ SELECT (c.grade BETWEEN 0 AND 10)
+ FROM Families.children[0] c
+```
+
+In API for NoSQL, unlike ANSI SQL, you can express range queries against properties of mixed types. For example, `grade` might be a number like `5` in some items and a string like `grade4` in others. In these cases, as in JavaScript, the comparison between the two different types results in `Undefined`, so the item is skipped.
+
+## DISTINCT
+
+The `DISTINCT` keyword eliminates duplicates in the query's projection.
+
+In this example, the query projects values for each last name:
+
+```sql
+SELECT DISTINCT VALUE f.lastName
+FROM Families f
+```
+
+The results are:
+
+```json
+[
+ "Andersen"
+]
+```
+
+You can also project unique objects. In this case, the lastName field does not exist in one of the two documents, so the query returns an empty object.
+
+```sql
+SELECT DISTINCT f.lastName
+FROM Families f
+```
+
+The results are:
+
+```json
+[
+ {
+ "lastName": "Andersen"
+ },
+ {}
+]
+```
+
+`DISTINCT` can also be used in the projection within a subquery:
+
+```sql
+SELECT f.id, ARRAY(SELECT DISTINCT VALUE c.givenName FROM c IN f.children) as ChildNames
+FROM f
+```
+
+This query projects an array that contains each child's `givenName` with duplicates removed. The array is aliased as `ChildNames` and projected in the outer query.
+
+The results are:
+
+```json
+[
+ {
+ "id": "AndersenFamily",
+ "ChildNames": []
+ },
+ {
+ "id": "WakefieldFamily",
+ "ChildNames": [
+ "Jesse",
+ "Lisa"
+ ]
+ }
+]
+```
+
+Queries with an aggregate system function and a subquery with `DISTINCT` are supported only in specific SDK versions. For example, queries with the following shape are supported only in the SDK versions listed below:
+
+```sql
+SELECT COUNT(1) FROM (SELECT DISTINCT f.lastName FROM f)
+```
+
+**Supported SDK versions:**
+
+|**SDK**|**Supported versions**|
+|-|-|
+|.NET SDK|3.18.0 or later|
+|Java SDK|4.19.0 or later|
+|Node.js SDK|Unsupported|
+|Python SDK|Unsupported|
+
+There are some additional restrictions on queries with an aggregate system function and a subquery with `DISTINCT`. The following queries aren't supported:
+
+|**Restriction**|**Example**|
+|-|-|
+|WHERE clause in the outer query|SELECT COUNT(1) FROM (SELECT DISTINCT VALUE c.lastName FROM c) AS lastName WHERE lastName = "Smith"|
+|ORDER BY clause in the outer query|SELECT VALUE COUNT(1) FROM (SELECT DISTINCT VALUE c.lastName FROM c) AS lastName ORDER BY lastName|
+|GROUP BY clause in the outer query|SELECT COUNT(1) as annualCount, d.year FROM (SELECT DISTINCT c.year, c.id FROM c) AS d GROUP BY d.year|
+|Nested subquery|SELECT COUNT(1) FROM (SELECT y FROM (SELECT VALUE StringToNumber(SUBSTRING(d.date, 0, 4)) FROM (SELECT DISTINCT c.date FROM c) d) AS y WHERE y > 2012)|
+|Multiple aggregations|SELECT COUNT(1) as AnnualCount, SUM(d.sales) as TotalSales FROM (SELECT DISTINCT c.year, c.sales, c.id FROM c) AS d|
+|COUNT() must have 1 as a parameter|SELECT COUNT(lastName) FROM (SELECT DISTINCT VALUE c.lastName FROM c) AS lastName|
+
+## LIKE
+
+Returns a Boolean value depending on whether a specific character string matches a specified pattern. A pattern can include regular characters and wildcard characters. You can write logically equivalent queries using either the `LIKE` keyword or the [RegexMatch](regexmatch.md) system function. You'll observe the same index utilization regardless of which one you choose, so use `LIKE` if you prefer its syntax to regular expressions.
+
+> [!NOTE]
+> Because `LIKE` can utilize an index, you should [create a range index](../../index-policy.md) for properties you are comparing using `LIKE`.
+
+You can use the following wildcard characters with LIKE:
+
+| Wildcard character | Description | Example |
+| --- | --- | --- |
+| % | Any string of zero or more characters | WHERE c.description LIKE "%SO%PS%" |
+| _ (underscore) | Any single character | WHERE c.description LIKE "%SO_PS%" |
+| [ ] | Any single character within the specified range ([a-f]) or set ([abcdef]). | WHERE c.description LIKE "%SO[t-z]PS%" |
+| [^] | Any single character not within the specified range ([^a-f]) or set ([^abcdef]). | WHERE c.description LIKE "%SO[^abc]PS%" |
++
+### Using LIKE with the % wildcard character
+
+The `%` character matches any string of zero or more characters. For example, by placing a `%` at the beginning and end of the pattern, the following query returns all items with a description that contains `fruit`:
+
+```sql
+SELECT *
+FROM c
+WHERE c.description LIKE "%fruit%"
+```
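+
+A logically equivalent filter written with the `RegexMatch` system function (a minimal sketch of the alternative form mentioned above) is:
+
+```sql
+SELECT *
+FROM c
+WHERE RegexMatch(c.description, "fruit")
+```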
+
+If you used a `%` character only at the end of the pattern, you'd return only items with a description that starts with `fruit`:
+
+```sql
+SELECT *
+FROM c
+WHERE c.description LIKE "fruit%"
+```
++
+### Using NOT LIKE
+
+The following example returns all items with a description that doesn't contain `fruit`:
+
+```sql
+SELECT *
+FROM c
+WHERE c.description NOT LIKE "%fruit%"
+```
+
+### Using the escape clause
+
+You can search for patterns that include one or more wildcard characters using the ESCAPE clause. For example, if you wanted to search for descriptions that contained the string `20-30%`, you wouldn't want to interpret the `%` as a wildcard character.
+
+```sql
+SELECT *
+FROM c
+WHERE c.description LIKE '%20-30!%%' ESCAPE '!'
+```
+
+### Using wildcard characters as literals
+
+You can enclose wildcard characters in brackets to treat them as literal characters. When you enclose a wildcard character in brackets, you remove any special attributes. Here are some examples:
+
+| Pattern | Meaning |
+| -- | - |
+| LIKE "20-30[%]" | 20-30% |
+| LIKE "[_]n" | _n |
+| LIKE "[ [ ]" | [ |
+| LIKE "]" | ] |
+
+## IN
+
+Use the IN keyword to check whether a specified value matches any value in a list. For example, the following query returns all family items where the `id` is `WakefieldFamily` or `AndersenFamily`.
+
+```sql
+ SELECT *
+ FROM Families
+ WHERE Families.id IN ('AndersenFamily', 'WakefieldFamily')
+```
+
+The following example returns all items where the state is any of the specified values:
+
+```sql
+ SELECT *
+ FROM Families
+ WHERE Families.address.state IN ("NY", "WA", "CA", "PA", "OH", "OR", "MI", "WI", "MN", "FL")
+```
+
+The API for NoSQL provides support for [iterating over JSON arrays](object-array.md#Iteration), with a construct added via the `IN` keyword in the FROM source.
+
+If you include your partition key in the `IN` filter, your query will automatically filter to only the relevant partitions.
+
+## TOP
+
+The TOP keyword returns the first `N` query results in an undefined order. As a best practice, use TOP with the `ORDER BY` clause to limit results to the first `N` ordered values. Combining these two clauses is the only way to predictably indicate which rows TOP affects.
+
+You can use TOP with a constant value, as in the following example, or with a variable value using parameterized queries.
+
+```sql
+ SELECT TOP 1 *
+ FROM Families f
+```
+
+The results are:
+
+```json
+ [{
+ "id": "AndersenFamily",
+ "lastName": "Andersen",
+ "parents": [
+ { "firstName": "Thomas" },
+ { "firstName": "Mary Kay"}
+ ],
+ "children": [
+ {
+ "firstName": "Henriette Thaulow", "gender": "female", "grade": 5,
+ "pets": [{ "givenName": "Fluffy" }]
+ }
+ ],
+ "address": { "state": "WA", "county": "King", "city": "Seattle" },
+ "creationDate": 1431620472,
+ "isRegistered": true
+ }]
+```
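+
+To use TOP with a variable value, supply the value as a query parameter through your SDK's parameterized query support. A minimal sketch follows; the parameter name `@topCount` is arbitrary and is defined when you submit the query:
+
+```sql
+ SELECT TOP @topCount *
+ FROM Families f
+```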
+
+## Next steps
+
+- [Getting started](getting-started.md)
+- [Joins](join.md)
+- [Subqueries](subquery.md)
cosmos-db Left https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/left.md
+
+ Title: LEFT in Azure Cosmos DB query language
+description: Learn about SQL system function LEFT in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# LEFT (Azure Cosmos DB)
+
+ Returns the left part of a string with the specified number of characters.
+
+## Syntax
+
+```sql
+LEFT(<str_expr>, <num_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is the string expression to extract characters from.
+
+*num_expr*
+ Is a numeric expression which specifies the number of characters.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example returns the left part of "abc" for various length values.
+
+```sql
+SELECT LEFT("abc", 1) AS l1, LEFT("abc", 2) AS l2
+```
+
+ Here is the result set.
+
+```json
+[{"l1": "a", "l2": "ab"}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Length https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/length.md
+
+ Title: LENGTH in Azure Cosmos DB query language
+description: Learn about SQL system function LENGTH in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# LENGTH (Azure Cosmos DB)
+
+ Returns the number of characters of the specified string expression.
+
+## Syntax
+
+```sql
+LENGTH(<str_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is the string expression to be evaluated.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the length of a string.
+
+```sql
+SELECT LENGTH("abc") AS len
+```
+
+ Here is the result set.
+
+```json
+[{"len": 3}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Linq To Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/linq-to-sql.md
+
+ Title: LINQ to SQL translation in Azure Cosmos DB
+description: Learn the LINQ operators supported and how the LINQ queries are mapped to SQL queries in Azure Cosmos DB.
+++++ Last updated : 08/06/2021+++
+# LINQ to SQL translation
+
+The Azure Cosmos DB query provider performs a best effort mapping from a LINQ query into an Azure Cosmos DB SQL query. If you want to get the SQL query that is translated from LINQ, use the `ToString()` method on the generated `IQueryable` object. The following description assumes a basic familiarity with [LINQ](/dotnet/csharp/programming-guide/concepts/linq/introduction-to-linq-queries). In addition to LINQ, Azure Cosmos DB also supports [Entity Framework Core](/ef/core/providers/cosmos/?tabs=dotnet-core-cli), which works with the API for NoSQL.
+
+> [!NOTE]
+> We recommend using the latest [.NET SDK version](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/)
+
+The query provider type system supports only the JSON primitive types: numeric, Boolean, string, and null.
+
+The query provider supports the following scalar expressions:
+
+- Constant values, including constant values of the primitive data types at query evaluation time.
+
+- Property/array index expressions that refer to the property of an object or an array element. For example:
+
+ ```
+ family.Id;
+ family.children[0].familyName;
+ family.children[0].grade;
+ family.children[n].grade; //n is an int variable
+ ```
+
+- Arithmetic expressions, including common arithmetic expressions on numerical and Boolean values. For the complete list, see the [Azure Cosmos DB SQL specification](aggregate-functions.md).
+
+ ```
+ 2 * family.children[0].grade;
+ x + y;
+ ```
+
+- String comparison expressions, which include comparing a string value to some constant string value.
+
+ ```
+ mother.familyName == "Wakefield";
+ child.givenName == s; //s is a string variable
+ ```
+
+- Object/array creation expressions, which return an object of compound value type or anonymous type, or an array of such objects. You can nest these values.
+
+ ```
+ new Parent { familyName = "Wakefield", givenName = "Robin" };
+ new { first = 1, second = 2 }; //an anonymous type with two fields
+ new int[] { 3, child.grade, 5 };
+ ```
+
+## Using LINQ
+
+You can create a LINQ query with `GetItemLinqQueryable`. This example shows LINQ query generation and asynchronous execution with a `FeedIterator`:
+
+```csharp
+using (FeedIterator<Book> setIterator = container.GetItemLinqQueryable<Book>()
+ .Where(b => b.Title == "War and Peace")
+ .ToFeedIterator<Book>())
+ {
+ //Asynchronous query execution
+ while (setIterator.HasMoreResults)
+ {
+        foreach (var item in await setIterator.ReadNextAsync())
+        {
+            Console.WriteLine(item.cost);
+        }
+ }
+ }
+```
+
+## <a id="SupportedLinqOperators"></a>Supported LINQ operators
+
+The LINQ provider included with the SQL .NET SDK supports the following operators:
+
+- **Select**: Projections translate to [SELECT](select.md), including object construction.
+- **Where**: Filters translate to [WHERE](where.md), and support translation between `&&`, `||`, and `!` to the SQL operators
+- **SelectMany**: Allows unwinding of arrays to the [JOIN](join.md) clause. Use to chain or nest expressions to filter on array elements.
+- **OrderBy** and **OrderByDescending**: Translate to [ORDER BY](order-by.md) with ASC or DESC.
+- **Count**, **Sum**, **Min**, **Max**, and **Average** operators for [aggregation](aggregate-functions.md), and their async equivalents **CountAsync**, **SumAsync**, **MinAsync**, **MaxAsync**, and **AverageAsync**.
+- **CompareTo**: Translates to range comparisons. Commonly used for strings, since they can't be compared with the `<`, `>`, `<=`, and `>=` operators in .NET.
+- **Skip** and **Take**: Translates to [OFFSET and LIMIT](offset-limit.md) for limiting results from a query and doing pagination.
+- **Math functions**: Supports translation from .NET `Abs`, `Acos`, `Asin`, `Atan`, `Ceiling`, `Cos`, `Exp`, `Floor`, `Log`, `Log10`, `Pow`, `Round`, `Sign`, `Sin`, `Sqrt`, `Tan`, and `Truncate` to the equivalent [built-in mathematical functions](mathematical-functions.md).
+- **String functions**: Supports translation from .NET `Concat`, `Contains`, `Count`, `EndsWith`,`IndexOf`, `Replace`, `Reverse`, `StartsWith`, `SubString`, `ToLower`, `ToUpper`, `TrimEnd`, and `TrimStart` to the equivalent [built-in string functions](string-functions.md).
+- **Array functions**: Supports translation from .NET `Concat`, `Contains`, and `Count` to the equivalent [built-in array functions](array-functions.md).
+- **Geospatial Extension functions**: Supports translation from stub methods `Distance`, `IsValid`, `IsValidDetailed`, and `Within` to the equivalent [built-in geospatial functions](geospatial-query.md).
+- **User-Defined Function Extension function**: Supports translation from the stub method [CosmosLinq.InvokeUserDefinedFunction](/dotnet/api/microsoft.azure.cosmos.linq.cosmoslinq.invokeuserdefinedfunction?view=azure-dotnet&preserve-view=true) to the corresponding [user-defined function](udfs.md).
+- **Miscellaneous**: Supports translation of `Coalesce` and [conditional operators](logical-operators.md). Can translate `Contains` to String CONTAINS, ARRAY_CONTAINS, or IN, depending on context.
+
+## Examples
+
+The following examples illustrate how some of the standard LINQ query operators translate to queries in Azure Cosmos DB.
+
+### Select operator
+
+The syntax is `input.Select(x => f(x))`, where `f` is a scalar expression. The `input`, in this case, would be an `IQueryable` object.
+
+**Select operator, example 1:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Select(family => family.parents[0].familyName);
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT VALUE f.parents[0].familyName
+ FROM Families f
+ ```
+
+**Select operator, example 2:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Select(family => family.children[0].grade + c); // c is an int variable
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT VALUE f.children[0].grade + c
+ FROM Families f
+ ```
+
+**Select operator, example 3:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Select(family => new
+ {
+ name = family.children[0].familyName,
+ grade = family.children[0].grade + 3
+ });
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT VALUE {"name":f.children[0].familyName,
+ "grade": f.children[0].grade + 3 }
+ FROM Families f
+ ```
+
+### SelectMany operator
+
+The syntax is `input.SelectMany(x => f(x))`, where `f` is a scalar expression that returns a container type.
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.SelectMany(family => family.children);
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT VALUE child
+ FROM child IN Families.children
+ ```
+
+### Where operator
+
+The syntax is `input.Where(x => f(x))`, where `f` is a scalar expression, which returns a Boolean value.
+
+**Where operator, example 1:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Where(family=> family.parents[0].familyName == "Wakefield");
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT *
+ FROM Families f
+ WHERE f.parents[0].familyName = "Wakefield"
+ ```
+
+**Where operator, example 2:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Where(
+ family => family.parents[0].familyName == "Wakefield" &&
+ family.children[0].grade < 3);
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT *
+ FROM Families f
+ WHERE f.parents[0].familyName = "Wakefield"
+ AND f.children[0].grade < 3
+ ```
+
+## Composite SQL queries
+
+You can compose the preceding operators to form more powerful queries. Since Azure Cosmos DB supports nested containers, you can concatenate or nest the composition.
+
+### Concatenation
+
+The syntax is `input(.|.SelectMany())(.Select()|.Where())*`. A concatenated query can start with an optional `SelectMany` query, followed by multiple `Select` or `Where` operators.
+
+**Concatenation, example 1:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Select(family => family.parents[0])
+ .Where(parent => parent.familyName == "Wakefield");
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT *
+ FROM Families f
+ WHERE f.parents[0].familyName = "Wakefield"
+ ```
+
+**Concatenation, example 2:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Where(family => family.children[0].grade > 3)
+ .Select(family => family.parents[0].familyName);
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT VALUE f.parents[0].familyName
+ FROM Families f
+ WHERE f.children[0].grade > 3
+ ```
+
+**Concatenation, example 3:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.Select(family => new { grade=family.children[0].grade}).
+ Where(anon=> anon.grade < 3);
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT *
+ FROM Families f
+ WHERE ({grade: f.children[0].grade}.grade < 3)
+ ```
+
+**Concatenation, example 4:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.SelectMany(family => family.parents)
+ .Where(parent => parent.familyName == "Wakefield");
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT *
+ FROM p IN Families.parents
+ WHERE p.familyName = "Wakefield"
+ ```
+
+### Nesting
+
+The syntax is `input.SelectMany(x=>x.Q())` where `Q` is a `Select`, `SelectMany`, or `Where` operator.
+
+A nested query applies the inner query to each element of the outer container. One important feature is that the inner query can refer to the fields of the elements in the outer container, like a self-join.
+
+**Nesting, example 1:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.SelectMany(family=>
+ family.parents.Select(p => p.familyName));
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT VALUE p.familyName
+ FROM Families f
+ JOIN p IN f.parents
+ ```
+
+**Nesting, example 2:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.SelectMany(family =>
+ family.children.Where(child => child.familyName == "Jeff"));
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT *
+ FROM Families f
+ JOIN c IN f.children
+ WHERE c.familyName = "Jeff"
+ ```
+
+**Nesting, example 3:**
+
+- **LINQ lambda expression**
+
+ ```csharp
+ input.SelectMany(family => family.children.Where(
+ child => child.familyName == family.parents[0].familyName));
+ ```
+
+- **SQL**
+
+ ```sql
+ SELECT *
+ FROM Families f
+ JOIN c IN f.children
+ WHERE c.familyName = f.parents[0].familyName
+ ```
+
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Model document data](../../modeling-data.md)
cosmos-db Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/log.md
+
+ Title: LOG in Azure Cosmos DB query language
+description: Learn about the LOG SQL system function in Azure Cosmos DB to return the natural logarithm of the specified numeric expression
++++ Last updated : 09/13/2019+++
+# LOG (Azure Cosmos DB)
+
+ Returns the natural logarithm of the specified numeric expression.
+
+## Syntax
+
+```sql
+LOG (<numeric_expr> [, <base>])
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+*base*
+ Optional numeric argument that sets the base for the logarithm.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Remarks
+
+ By default, `LOG()` returns the natural logarithm. You can change the base of the logarithm to another value by using the optional base parameter.
+
+ The natural logarithm is the logarithm to the base **e**, where **e** is an irrational constant approximately equal to *2.718281828*.
+
+ The natural logarithm of the exponential of a number is the number itself: `LOG( EXP( n ) ) = n`. And the exponential of the natural logarithm of a number is the number itself: `EXP( LOG( n ) ) = n`.
+
+ This system function won't utilize the index.
+
+## Examples
+
+ The following example declares a variable and returns the logarithm value of the specified variable (10).
+
+```sql
+SELECT LOG(10) AS log
+```
+
+ Here's the result set.
+
+```json
+[{log: 2.3025850929940459}]
+```
+
+ The following example calculates the `LOG` for the exponent of a number.
+
+```sql
+SELECT EXP(LOG(10)) AS expLog
+```
+
+ Here's the result set.
+
+```json
+[{expLog: 10.000000000000002}]
+```
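+
+The optional *base* argument isn't shown in the preceding examples. As a minimal sketch, assuming the second argument sets the logarithm base as described in the remarks above, the following query should return `3`:
+
+```sql
+SELECT LOG(8, 2) AS logBase2
+```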
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Log10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/log10.md
+
+ Title: LOG10 in Azure Cosmos DB query language
+description: Learn about the LOG10 SQL system function in Azure Cosmos DB to return the base-10 logarithm of the specified numeric expression
++++ Last updated : 09/13/2019+++
+# LOG10 (Azure Cosmos DB)
+
+ Returns the base-10 logarithm of the specified numeric expression.
+
+## Syntax
+
+```sql
+LOG10 (<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expression*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Remarks
+
+ The LOG10 and POWER functions are inversely related to one another. For example, 10 ^ LOG10(n) = n. This system function will not utilize the index.
+
+## Examples
+
+ The following example declares a variable and returns the LOG10 value of the specified variable (100).
+
+```sql
+SELECT LOG10(100) AS log10
+```
+
+ Here is the result set.
+
+```json
+[{log10: 2}]
+```
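+
+You can check the inverse relationship with `POWER` (a minimal sketch; the result should be `100`, subject to floating-point rounding):
+
+```sql
+SELECT POWER(10, LOG10(100)) AS inverseCheck
+```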
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Logical Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/logical-operators.md
+
+ Title: Logical operators in Azure Cosmos DB
+description: Learn about SQL logical operators supported by Azure Cosmos DB.
+++++ Last updated : 01/07/2022+++
+# Logical operators in Azure Cosmos DB
+
+This article details the logical operators supported by Azure Cosmos DB.
+
+## Understanding logical (AND, OR and NOT) operators
+
+Logical operators operate on Boolean values. The following tables show the logical truth tables for these operators:
+
+**OR operator**
+
+Returns `true` when either of the conditions is `true`.
+
+| | **True** | **False** | **Undefined** |
+| --- | --- | --- | --- |
+| **True** |True |True |True |
+| **False** |True |False |Undefined |
+| **Undefined** |True |Undefined |Undefined |
+
+**AND operator**
+
+Returns `true` when both expressions are `true`.
+
+| | **True** | **False** | **Undefined** |
+| --- | --- | --- | --- |
+| **True** |True |False |Undefined |
+| **False** |False |False |False |
+| **Undefined** |Undefined |False |Undefined |
+
+**NOT operator**
+
+Reverses the value of any Boolean expression.
+
+| | **NOT** |
+| --- | --- |
+| **True** |False |
+| **False** |True |
+| **Undefined** |Undefined |
+
+**Operator Precedence**
+
+The logical operators `OR`, `AND`, and `NOT` have the precedence level shown below:
+
+| **Operator** | **Priority** |
+| --- | --- |
+| **NOT** |1 |
+| **AND** |2 |
+| **OR** |3 |
+
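+For example, because `AND` binds more tightly than `OR`, the following expression (a minimal sketch with Boolean literals standing in for real property references) evaluates as `true OR (false AND false)` and returns `true`:
+
+```sql
+SELECT VALUE (true OR false AND false)
+```
+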
+## * operator
+
+The special operator * projects the entire item as is. When used, it must be the only projected field. A query like `SELECT * FROM Families f` is valid, but `SELECT VALUE * FROM Families f` and `SELECT *, f.id FROM Families f` are not valid.
+
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Keywords](keywords.md)
+- [SELECT clause](select.md)
cosmos-db Lower https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/lower.md
+
+ Title: LOWER in Azure Cosmos DB query language
+description: Learn about the LOWER SQL system function in Azure Cosmos DB to return a string expression after converting uppercase character data to lowercase
++++ Last updated : 04/07/2021+++
+# LOWER (Azure Cosmos DB)
+
+Returns a string expression after converting uppercase character data to lowercase.
+
+> [!NOTE]
+> This function uses culture-independent (invariant) casing rules when returning the converted string expression.
+
+The LOWER system function doesn't utilize the index. If you plan to do frequent case-insensitive comparisons, the LOWER system function may consume a significant number of RUs. If so, instead of using the LOWER system function to normalize data each time for comparisons, you can normalize the casing upon insertion. Then a query such as `SELECT * FROM c WHERE LOWER(c.name) = 'username'` simply becomes `SELECT * FROM c WHERE c.name = 'username'`.
+
+## Syntax
+
+```sql
+LOWER(<str_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression.
+
+## Return types
+
+Returns a string expression.
+
+## Examples
+
+The following example shows how to use `LOWER` in a query.
+
+```sql
+SELECT LOWER("Abc") AS lower
+```
+
+ Here's the result set.
+
+```json
+[{"lower": "abc"}]
+```
+
+## Remarks
+
+This system function won't [use indexes](../../index-overview.md#index-usage).
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Ltrim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/ltrim.md
+
+ Title: LTRIM in Azure Cosmos DB query language
+description: Learn about the LTRIM SQL system function in Azure Cosmos DB to return a string expression after it removes leading blanks
++++ Last updated : 09/14/2021+++
+# LTRIM (Azure Cosmos DB)
+
+ Returns a string expression after it removes leading whitespace or specified characters.
+
+## Syntax
+
+```sql
+LTRIM(<str_expr1>[, <str_expr2>])
+```
+
+## Arguments
+
+*str_expr1*
+ Is a string expression.
+
+*str_expr2*
+ Is an optional string expression to be trimmed from str_expr1. If not set, the default is whitespace.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example shows how to use `LTRIM` inside a query.
+
+```sql
+SELECT LTRIM(" abc") AS t1,
+LTRIM(" abc ") AS t2,
+LTRIM("abc ") AS t3,
+LTRIM("abc") AS t4,
+LTRIM("abc", "ab") AS t5,
+LTRIM("abc", "abc") AS t6
+```
+
+ Here is the result set.
+
+```json
+[
+ {
+ "t1": "abc",
+ "t2": "abc ",
+ "t3": "abc ",
+ "t4": "abc",
+ "t5": "c",
+ "t6": ""
+ }
+]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Mathematical Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/mathematical-functions.md
+
+ Title: Mathematical functions in Azure Cosmos DB query language
+description: Learn about the mathematical functions in Azure Cosmos DB to perform a calculation, based on input values that are provided as arguments, and return a numeric value.
++++ Last updated : 06/22/2021+++
+# Mathematical functions (Azure Cosmos DB)
+
+The mathematical functions each perform a calculation, based on input values that are provided as arguments, and return a numeric value.
+
+You can run queries like the following example:
+
+```sql
+ SELECT VALUE ABS(-4)
+```
+
+The result is:
+
+```json
+ [4]
+```
+
+## Functions
+
+The following supported built-in mathematical functions perform a calculation, usually based on input arguments, and return a numeric expression. The **index usage** column assumes, where applicable, that you're comparing the mathematical system function to another value with an equality filter.
+
+| System function | Index usage | [Index usage in queries with scalar aggregate functions](../../index-overview.md#index-utilization-for-scalar-aggregate-functions) | Remarks |
+| --- | --- | --- | --- |
+| [ABS](abs.md) | Index seek | Index seek | |
+| [ACOS](acos.md) | Full scan | Full scan | |
+| [ASIN](asin.md) | Full scan | Full scan | |
+| [ATAN](atan.md) | Full scan | Full scan | |
+| [ATN2](atn2.md) | Full scan | Full scan | |
+| [CEILING](ceiling.md) | Index seek | Index seek | |
+| [COS](cos.md) | Full scan | Full scan | |
+| [COT](cot.md) | Full scan | Full scan | |
+| [DEGREES](degrees.md) | Index seek | Index seek | |
+| [EXP](exp.md) | Full scan | Full scan | |
+| [FLOOR](floor.md) | Index seek | Index seek | |
+| [LOG](log.md) | Full scan | Full scan | |
+| [LOG10](log10.md) | Full scan | Full scan | |
+| [PI](pi.md) | N/A | N/A | PI () returns a constant value. Because the result is deterministic, comparisons with PI() can use the index. |
+| [POWER](power.md) | Full scan | Full scan | |
+| [RADIANS](radians.md) | Index seek | Index seek | |
+| [RAND](rand.md) | N/A | N/A | Rand() returns a random number. Because the result is non-deterministic, comparisons that involve Rand() cannot use the index. |
+| [ROUND](round.md) | Index seek | Index seek | |
+| [SIGN](sign.md) | Index seek | Index seek | |
+| [SIN](sin.md) | Full scan | Full scan | |
+| [SQRT](sqrt.md) | Full scan | Full scan | |
+| [SQUARE](square.md) | Full scan | Full scan | |
+| [TAN](tan.md) | Full scan | Full scan | |
+| [TRUNC](trunc.md) | Index seek | Index seek | |
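+
+For example, based on the table above, an equality filter on `ROUND` can be served with an index seek, while the same shape of filter on `LOG` requires a full scan. This is a sketch that assumes a container `c` with a numeric `price` property:
+
+```sql
+-- ROUND supports an index seek for equality filters.
+SELECT * FROM c WHERE ROUND(c.price) = 10
+
+-- LOG requires a full scan even with an equality filter.
+SELECT * FROM c WHERE LOG(c.price) = 2
+```
+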
+## Next steps
+
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [User Defined Functions](udfs.md)
+- [Aggregates](aggregate-functions.md)
cosmos-db Object Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/object-array.md
+
+ Title: Working with arrays and objects in Azure Cosmos DB
+description: Learn the SQL syntax to create arrays and objects in Azure Cosmos DB. This article also provides some examples to perform operations on array objects
+++++ Last updated : 02/02/2021+++
+# Working with arrays and objects in Azure Cosmos DB
+
+A key feature of Azure Cosmos DB for NoSQL is array and object creation. This document uses examples that can be recreated using the [Family dataset](getting-started.md#upload-sample-data).
+
+Here's an example item in this dataset:
+
+```json
+{
+ "id": "AndersenFamily",
+ "lastName": "Andersen",
+ "parents": [
+ { "firstName": "Thomas" },
+ { "firstName": "Mary Kay"}
+ ],
+ "children": [
+ {
+ "firstName": "Henriette Thaulow",
+ "gender": "female",
+ "grade": 5,
+ "pets": [{ "givenName": "Fluffy" }]
+ }
+ ],
+ "address": { "state": "WA", "county": "King", "city": "Seattle" },
+ "creationDate": 1431620472,
+ "isRegistered": true
+}
+```
+
+## Arrays
+
+You can construct arrays, as shown in the following example:
+
+```sql
+SELECT [f.address.city, f.address.state] AS CityState
+FROM Families f
+```
+
+The results are:
+
+```json
+[
+ {
+ "CityState": [
+ "Seattle",
+ "WA"
+ ]
+ },
+ {
+ "CityState": [
+ "NY",
+ "NY"
+ ]
+ }
+]
+```
+
+You can also use the [ARRAY expression](subquery.md#array-expression) to construct an array from a [subquery's](subquery.md) results. This query gets all the distinct given names of children in an array.
+
+```sql
+SELECT f.id, ARRAY(SELECT DISTINCT VALUE c.givenName FROM c IN f.children) as ChildNames
+FROM f
+```
+
+The results are:
+
+```json
+[
+ {
+ "id": "AndersenFamily",
+ "ChildNames": []
+ },
+ {
+ "id": "WakefieldFamily",
+ "ChildNames": [
+ "Jesse",
+ "Lisa"
+ ]
+ }
+]
+```
+
+## <a id="Iteration"></a>Iteration
+
+The API for NoSQL provides support for iterating over JSON arrays, with the [IN keyword](keywords.md#in) in the FROM source, as shown in the following example:
+
+```sql
+SELECT *
+FROM Families.children
+```
+
+The results are:
+
+```json
+[
+ [
+ {
+ "firstName": "Henriette Thaulow",
+ "gender": "female",
+ "grade": 5,
+ "pets": [{ "givenName": "Fluffy"}]
+ }
+ ],
+ [
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }
+ ]
+]
+```
+
+The next query performs iteration over `children` in the `Families` container. The output array is different from the preceding query. This example splits `children`, and flattens the results into a single array:
+
+```sql
+SELECT *
+FROM c IN Families.children
+```
+
+The results are:
+
+```json
+[
+ {
+ "firstName": "Henriette Thaulow",
+ "gender": "female",
+ "grade": 5,
+ "pets": [{ "givenName": "Fluffy" }]
+ },
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }
+]
+```
+
+You can filter further on each individual entry of the array, as shown in the following example:
+
+```sql
+SELECT c.givenName
+FROM c IN Families.children
+WHERE c.grade = 8
+```
+
+The results are:
+
+```json
+[{
+ "givenName": "Lisa"
+}]
+```
+
+You can also aggregate over the result of an array iteration. For example, the following query counts the number of children among all families:
+
+```sql
+SELECT COUNT(1) AS Count
+FROM child IN Families.children
+```
+
+The results are:
+
+```json
+[
+ {
+ "Count": 3
+ }
+]
+```
+
+> [!NOTE]
+> When using the IN keyword for iteration, you cannot filter or project any properties outside of the array. Instead, you should use [JOINs](join.md).
+
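+For instance, the following sketch uses a [JOIN](join.md) to project the family `id` alongside properties of each child, which isn't possible with the `IN` keyword alone:
+
+```sql
+-- JOIN makes both the parent item and the array element available for projection.
+SELECT f.id, c.givenName
+FROM Families f
+JOIN c IN f.children
+WHERE c.grade = 8
+```
+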
+For additional examples, read our [blog post on working with arrays in Azure Cosmos DB](https://devblogs.microsoft.com/cosmosdb/understanding-how-to-query-arrays-in-azure-cosmos-db/).
+
+## Next steps
+
+- [Getting started](getting-started.md)
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Joins](join.md)
cosmos-db Offset Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/offset-limit.md
+
+ Title: OFFSET LIMIT clause in Azure Cosmos DB
+description: Learn how to use the OFFSET LIMIT clause to skip and take some certain values when querying in Azure Cosmos DB
+++++ Last updated : 07/29/2020+++
+# OFFSET LIMIT clause in Azure Cosmos DB
+
+The OFFSET LIMIT clause is an optional clause to skip then take some number of values from the query. The OFFSET count and the LIMIT count are required in the OFFSET LIMIT clause.
+
+When OFFSET LIMIT is used in conjunction with an ORDER BY clause, the result set is produced by doing skip and take on the ordered values. If no ORDER BY clause is used, it will result in a deterministic order of values.
+
+## Syntax
+
+```sql
+OFFSET <offset_amount> LIMIT <limit_amount>
+```
+
+## Arguments
+
+- `<offset_amount>`
+
+ Specifies the integer number of items that the query results should skip.
+
+- `<limit_amount>`
+
+ Specifies the integer number of items that the query results should include.
+
+## Remarks
+
+ Both the `OFFSET` count and the `LIMIT` count are required in the `OFFSET LIMIT` clause. If an optional `ORDER BY` clause is used, the result set is produced by doing the skip over the ordered values. Otherwise, the query will return a fixed order of values.
+
+ The RU charge of a query with `OFFSET LIMIT` will increase as the number of terms being offset increases. For queries that have [multiple pages of results](pagination.md), we typically recommend using [continuation tokens](pagination.md#continuation-tokens). Continuation tokens are a "bookmark" for the place where the query can later resume. If you use `OFFSET LIMIT`, there is no "bookmark". If you wanted to return the query's next page, you would have to start from the beginning.
+
+ You should use `OFFSET LIMIT` for cases when you would like to skip items entirely and save client resources. For example, you should use `OFFSET LIMIT` if you want to skip to the 1000th query result and have no need to view results 1 through 999. On the backend, `OFFSET LIMIT` still loads each item, including those that are skipped. The performance advantage is a savings in client resources by avoiding processing items that are not needed.
+
+## Examples
+
+For example, here's a query that skips the first value and returns the second value (in order of the resident city's name):
+
+```sql
+ SELECT f.id, f.address.city
+ FROM Families f
+ ORDER BY f.address.city
+ OFFSET 1 LIMIT 1
+```
+
+The results are:
+
+```json
+ [
+ {
+ "id": "AndersenFamily",
+ "city": "Seattle"
+ }
+ ]
+```
+
+Here's a query that skips the first value and returns the second value (without ordering):
+
+```sql
+ SELECT f.id, f.address.city
+ FROM Families f
+ OFFSET 1 LIMIT 1
+```
+
+The results are:
+
+```json
+ [
+ {
+ "id": "WakefieldFamily",
+ "city": "Seattle"
+ }
+ ]
+```
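+
+The following sketch illustrates the scenario described in the Remarks section: it skips directly to the 1,000th result, assuming a container `c` that holds at least that many matching items:
+
+```sql
+-- Skips results 1 through 999 and returns only the 1,000th result.
+SELECT c.id
+FROM c
+ORDER BY c.id
+OFFSET 999 LIMIT 1
+```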
+
+## Next steps
+
+- [Getting started](getting-started.md)
+- [SELECT clause](select.md)
+- [ORDER BY clause](order-by.md)
cosmos-db Order By https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/order-by.md
+
+ Title: ORDER BY clause in Azure Cosmos DB
+description: Learn about SQL ORDER BY clause for Azure Cosmos DB. Use SQL as an Azure Cosmos DB JSON query language.
+++++ Last updated : 04/27/2022+++
+# ORDER BY clause in Azure Cosmos DB
+
+The optional `ORDER BY` clause specifies the sorting order for results returned by the query.
+
+## Syntax
+
+```sql
+ORDER BY <sort_specification>
+<sort_specification> ::= <sort_expression> [, <sort_expression>]
+<sort_expression> ::= {<scalar_expression> [ASC | DESC]} [ ,...n ]
+```
+
+## Arguments
+
+- `<sort_specification>`
+
+ Specifies a property or expression on which to sort the query result set. A sort column can be specified as a name or property alias.
+
+ Multiple properties can be specified. Property names must be unique. The sequence of the sort properties in the `ORDER BY` clause defines the organization of the sorted result set. That is, the result set is sorted by the first property and then that ordered list is sorted by the second property, and so on.
+
+ The property names referenced in the `ORDER BY` clause must correspond to either a property in the select list or to a property defined in the collection specified in the `FROM` clause without any ambiguities.
+
+- `<sort_expression>`
+
+ Specifies one or more properties or expressions on which to sort the query result set.
+
+- `<scalar_expression>`
+
+ See the [Scalar expressions](scalar-expressions.md) section for details.
+
+- `ASC | DESC`
+
+ Specifies that the values in the specified column should be sorted in ascending or descending order. `ASC` sorts from the lowest value to highest value. `DESC` sorts from highest value to lowest value. `ASC` is the default sort order. Null values are treated as the lowest possible values.
+
+## Remarks
+
+ The `ORDER BY` clause requires that the indexing policy include an index for the fields being sorted. The Azure Cosmos DB query runtime supports sorting against a property name and not against computed properties. Azure Cosmos DB supports multiple `ORDER BY` properties. In order to run a query with multiple ORDER BY properties, you should define a [composite index](../../index-policy.md#composite-indexes) on the fields being sorted.
+
+> [!Note]
+> If the properties being sorted might be undefined for some documents and you want to retrieve them in an ORDER BY query, you must explicitly include this path in the index. The default indexing policy won't allow for the retrieval of the documents where the sort property is undefined. [Review example queries on documents with some missing fields](#documents-with-missing-fields).
+
+## Examples
+
+For example, here's a query that retrieves families in ascending order of the resident city's name:
+
+```sql
+ SELECT f.id, f.address.city
+ FROM Families f
+ ORDER BY f.address.city
+```
+
+The results are:
+
+```json
+ [
+ {
+ "id": "WakefieldFamily",
+ "city": "NY"
+ },
+ {
+ "id": "AndersenFamily",
+ "city": "Seattle"
+ }
+ ]
+```
+
+The following query retrieves family `id`s in order of their item creation date. Item `creationDate` is a number representing the *epoch time*, or elapsed time since Jan. 1, 1970 in seconds.
+
+```sql
+ SELECT f.id, f.creationDate
+ FROM Families f
+ ORDER BY f.creationDate DESC
+```
+
+The results are:
+
+```json
+ [
+ {
+ "id": "AndersenFamily",
+ "creationDate": 1431620472
+ },
+ {
+ "id": "WakefieldFamily",
+ "creationDate": 1431620462
+ }
+ ]
+```
+
+Additionally, you can order by multiple properties. A query that orders by multiple properties requires a [composite index](../../index-policy.md#composite-indexes). Consider the following query:
+
+```sql
+ SELECT f.id, f.creationDate
+ FROM Families f
+ ORDER BY f.address.city ASC, f.creationDate DESC
+```
+
+This query retrieves the family `id` in ascending order of the city name. If multiple items have the same city name, the query will order by the `creationDate` in descending order.
+
+## Documents with missing fields
+
+Queries with `ORDER BY` will return all items, including items where the property in the ORDER BY clause isn't defined.
+
+For example, if you run the below query that includes `lastName` in the `Order By` clause, the results will include all items, even those that don't have a `lastName` property defined.
+
+```sql
+ SELECT f.id, f.lastName
+ FROM Families f
+ ORDER BY f.lastName
+```
+
+> [!Note]
+> Only the .NET SDK 3.4.0 or later and Java SDK 4.13.0 or later support ORDER BY with mixed types. Therefore, if you want to sort by a combination of undefined and defined values, you should use this version (or later).
+
+You can't control the order that different types appear in the results. In the above example, we showed how undefined values were sorted before string values. If instead, for example, you wanted more control over the sort order of undefined values, you could assign any undefined properties a string value of "aaaaaaaaa" or "zzzzzzzz" to ensure they were either first or last.
+
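+If you instead want to return only the items where the sort property is defined, one possible sketch is to add an `IS_DEFINED` filter to the same query:
+
+```sql
+-- Returns only items that define lastName, sorted by that property.
+SELECT f.id, f.lastName
+FROM Families f
+WHERE IS_DEFINED(f.lastName)
+ORDER BY f.lastName
+```
+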
+## Next steps
+
+- [Getting started](getting-started.md)
+- [Indexing policies in Azure Cosmos DB](../../index-policy.md)
+- [OFFSET LIMIT clause](offset-limit.md)
cosmos-db Pagination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/pagination.md
+
+ Title: Pagination in Azure Cosmos DB
+description: Learn about paging concepts and continuation tokens
+++++++ Last updated : 09/15/2021++
+# Pagination in Azure Cosmos DB
+
+In Azure Cosmos DB, queries may have multiple pages of results. This document explains criteria that Azure Cosmos DB's query engine uses to decide whether to split query results into multiple pages. You can optionally use continuation tokens to manage query results that span multiple pages.
+
+## Understanding query executions
+
+Sometimes query results will be split over multiple pages. Each page's results are generated by a separate query execution. When query results cannot be returned in one single execution, Azure Cosmos DB will automatically split results into multiple pages.
+
+You can specify the maximum number of items returned by a query by setting the `MaxItemCount`. The `MaxItemCount` is specified per request and tells the query engine to return that number of items or fewer. You can set `MaxItemCount` to `-1` if you don't want to place a limit on the number of results per query execution.
+
+In addition, there are other reasons that the query engine might need to split query results into multiple pages. These include:
+
+- The container was throttled and there weren't available RUs to return more query results
+- The query execution's response was too large
+- The query execution's time was too long
+- It was more efficient for the query engine to return results in additional executions
+
+The number of items returned per query execution will always be less than or equal to `MaxItemCount`. However, it is possible that other criteria might have limited the number of results the query could return. If you execute the same query multiple times, the number of pages might not be constant. For example, if a query is throttled there may be fewer available results per page, which means the query will have additional pages. In some cases, it is also possible that your query may return an empty page of results.
+
+## Handling multiple pages of results
+
+To ensure accurate query results, you should progress through all pages. You should continue to execute queries until there are no additional pages.
+
+Here are some examples for processing results from queries with multiple pages:
+
+- [.NET SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L294)
+- [Java SDK](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L162-L176)
+- [Node.js SDK](https://github.com/Azure/azure-sdk-for-js/blob/83fcc44a23ad771128d6e0f49043656b3d1df990/sdk/cosmosdb/cosmos/samples/IndexManagement.ts#L128-L140)
+- [Python SDK](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/examples.py#L89)
+
+## Continuation tokens
+
+In the .NET SDK and Java SDK you can optionally use continuation tokens as a bookmark for your query's progress. Azure Cosmos DB query executions are stateless at the server side and can be resumed at any time using the continuation token. For the Python SDK and Node.js SDK, continuation tokens are supported for single-partition queries, and the partition key must be specified in the options object because it's not sufficient to have it in the query itself.
+
+Here are some examples for using continuation tokens:
+
+- [.NET SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L230)
+- [Java SDK](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L216)
+- [Node.js SDK](https://github.com/Azure/azure-sdk-for-js/blob/2186357a6e6a64b59915d0cf3cba845be4d115c4/sdk/cosmosdb/cosmos/samples/src/BulkUpdateWithSproc.ts#L16-L31)
+- [Python SDK](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/test/test_query.py#L533)
+
+If the query returns a continuation token, then there are additional query results.
+
+In Azure Cosmos DB's REST API, you can manage continuation tokens with the `x-ms-continuation` header. As with querying with the .NET or Java SDK, if the `x-ms-continuation` response header is not empty, it means the query has additional results.
+
+As long as you are using the same SDK version, continuation tokens never expire. You can optionally [restrict the size of a continuation token](/dotnet/api/microsoft.azure.documents.client.feedoptions.responsecontinuationtokenlimitinkb). Regardless of the amount of data or number of physical partitions in your container, queries return a single continuation token.
+
+You cannot use continuation tokens for queries with [GROUP BY](group-by.md) or [DISTINCT](keywords.md#distinct) because these queries would require storing a significant amount of state. For queries with `DISTINCT`, you can use continuation tokens if you add `ORDER BY` to the query.
+
+Here's an example of a query with `DISTINCT` that could use a continuation token:
+
+```sql
+SELECT DISTINCT VALUE c.name
+FROM c
+ORDER BY c.name
+```
+
+## Next steps
+
+- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [ORDER BY clause](order-by.md)
cosmos-db Parameterized Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/parameterized-queries.md
+
+ Title: Parameterized queries in Azure Cosmos DB
+description: Learn how SQL parameterized queries provide robust handling and escaping of user input, and prevent accidental exposure of data through SQL injection.
+++++ Last updated : 07/29/2020+++
+# Parameterized queries in Azure Cosmos DB
+
+Azure Cosmos DB supports queries with parameters expressed by the familiar @ notation. Parameterized SQL provides robust handling and escaping of user input, and prevents accidental exposure of data through SQL injection.
+
+## Examples
+
+For example, you can write a query that takes `lastName` and `address.state` as parameters, and execute it for various values of `lastName` and `address.state` based on user input.
+
+```sql
+ SELECT *
+ FROM Families f
+ WHERE f.lastName = @lastName AND f.address.state = @addressState
+```
+
+You can then send this request to Azure Cosmos DB as a parameterized JSON query like the following:
+
+```json
+{
+  "query": "SELECT * FROM Families f WHERE f.lastName = @lastName AND f.address.state = @addressState",
+  "parameters": [
+    {"name": "@lastName", "value": "Wakefield"},
+    {"name": "@addressState", "value": "NY"}
+  ]
+}
+```
+
+The following example sets the TOP argument with a parameterized query:
+
+```json
+{
+  "query": "SELECT TOP @n * FROM Families",
+  "parameters": [
+    {"name": "@n", "value": 10}
+  ]
+}
+```
+
+Parameter values can be any valid JSON: strings, numbers, Booleans, null, even arrays or nested JSON. Since Azure Cosmos DB is schemaless, parameters aren't validated against any type.
+
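+Because parameter values can be arrays, you can pass a list of values through a single parameter. The following sketch assumes a hypothetical parameter named `@stateList` that is supplied as a JSON array such as `["NY", "WA"]`:
+
+```sql
+-- ARRAY_CONTAINS checks whether the parameter array includes the item's state.
+SELECT *
+FROM Families f
+WHERE ARRAY_CONTAINS(@stateList, f.address.state)
+```
+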
+Here are examples for parameterized queries in each Azure Cosmos DB SDK:
+
+- [.NET SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L195)
+- [Java](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L392-L421)
+- [Node.js](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L58-L79)
+- [Python](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L66-L78)
+
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Model document data](../../modeling-data.md)
cosmos-db Pi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/pi.md
+
+ Title: PI in Azure Cosmos DB query language
+description: Learn about SQL system function PI in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# PI (Azure Cosmos DB)
+
+ Returns the constant value of PI.
+
+## Syntax
+
+```sql
+PI ()
+```
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the value of `PI`.
+
+```sql
+SELECT PI() AS pi
+```
+
+ Here is the result set.
+
+```json
+[{"pi": 3.1415926535897931}]
+```
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Power https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/power.md
+
+ Title: POWER in Azure Cosmos DB query language
+description: Learn about SQL system function POWER in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# POWER (Azure Cosmos DB)
+
+ Returns the value of the specified expression to the specified power.
+
+## Syntax
+
+```sql
+POWER (<numeric_expr1>, <numeric_expr2>)
+```
+
+## Arguments
+
+*numeric_expr1*
+ Is a numeric expression.
+
+*numeric_expr2*
+ Is the power to which to raise *numeric_expr1*.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example demonstrates raising a number to the power of 3 (the cube of the number).
+
+```sql
+SELECT POWER(2, 3) AS pow1, POWER(2.5, 3) AS pow2
+```
+
+ Here is the result set.
+
+```json
+[{"pow1": 8, "pow2": 15.625}]
+```
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Radians https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/radians.md
+
+ Title: RADIANS in Azure Cosmos DB query language
+description: Learn about SQL system function RADIANS in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# RADIANS (Azure Cosmos DB)
+
+ Returns radians when a numeric expression, in degrees, is entered.
+
+## Syntax
+
+```sql
+RADIANS (<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example takes a few angles as input and returns their corresponding radian values.
+
+```sql
+SELECT RADIANS(-45.01) AS r1, RADIANS(-181.01) AS r2, RADIANS(0) AS r3, RADIANS(0.1472738) AS r4, RADIANS(197.1099392) AS r5
+```
+
+ Here is the result set.
+
+```json
+[{
+ "r1": -0.7855726963226477,
+ "r2": -3.1592204790349356,
+ "r3": 0,
+ "r4": 0.0025704127119236249,
+ "r5": 3.4402174274458375
+ }]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Rand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/rand.md
+
+ Title: RAND in Azure Cosmos DB query language
+description: Learn about SQL system function RAND in Azure Cosmos DB.
++++ Last updated : 09/16/2019+++
+# RAND (Azure Cosmos DB)
+
+ Returns a randomly generated numeric value from [0,1).
+
+## Syntax
+
+```sql
+RAND ()
+```
+
+## Return types
+
+ Returns a numeric expression.
+
+## Remarks
+
+ `RAND` is a nondeterministic function. Repetitive calls of `RAND` do not return the same results. This system function will not utilize the index.
++
+## Examples
+
+ The following example returns a randomly generated numeric value.
+
+```sql
+SELECT RAND() AS rand
+```
+
+ Here is the result set.
+
+```json
+[{"rand": 0.87860053195618093}]
+```
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Regexmatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/regexmatch.md
+
+ Title: RegexMatch in Azure Cosmos DB query language
+description: Learn about the RegexMatch SQL system function in Azure Cosmos DB
++++ Last updated : 08/12/2021++++
+# REGEXMATCH (Azure Cosmos DB)
+
+Provides regular expression capabilities. Regular expressions are a concise and flexible notation for finding patterns of text. Azure Cosmos DB uses [PERL compatible regular expressions (PCRE)](http://www.pcre.org/).
+
+## Syntax
+
+```sql
+RegexMatch(<str_expr1>, <str_expr2> [, <str_expr3>])
+```
+
+## Arguments
+
+*str_expr1*
+ Is the string expression to be searched.
+
+*str_expr2*
+ Is the regular expression.
+
+*str_expr3*
+ Is the string of selected modifiers to use with the regular expression. This string value is optional. If you'd like to run RegexMatch with no modifiers, you can either add an empty string or omit this argument entirely.
+
+You can learn about [syntax for creating regular expressions in Perl](https://perldoc.perl.org/perlre).
+
+Azure Cosmos DB supports the following four modifiers:
+
+| Modifier | Description |
+| --- | --- |
+| `m` | Treat the string expression to be searched as multiple lines. Without this option, "^" and "$" will match at the beginning or end of the string and not each individual line. |
+| `s` | Allow "." to match any character, including a newline character. |
+| `i` | Ignore case when pattern matching. |
+| `x` | Ignore all whitespace characters. |
+
+## Return types
+
+ Returns a Boolean expression. Returns undefined if the string expression to be searched, the regular expression, or the selected modifiers are invalid.
+
+## Examples
+
+The following simple RegexMatch example checks the string "abcd" for regular expression match using a few different modifiers.
+
+```sql
+SELECT RegexMatch ("abcd", "ABC", "") AS NoModifiers,
+RegexMatch ("abcd", "ABC", "i") AS CaseInsensitive,
+RegexMatch ("abcd", "ab.", "") AS WildcardCharacter,
+RegexMatch ("abcd", "ab c", "x") AS IgnoreWhiteSpace,
+RegexMatch ("abcd", "aB c", "ix") AS CaseInsensitiveAndIgnoreWhiteSpace
+```
+
+ Here is the result set.
+
+```json
+[
+ {
+ "NoModifiers": false,
+ "CaseInsensitive": true,
+ "WildcardCharacter": true,
+ "IgnoreWhiteSpace": true,
+ "CaseInsensitiveAndIgnoreWhiteSpace": true
+ }
+]
+```
+
+With RegexMatch, you can use metacharacters to do more complex string searches that wouldn't otherwise be possible with the StartsWith, EndsWith, Contains, or StringEquals system functions. Here are some additional examples:
+
+> [!NOTE]
+> If you need to use a metacharacter in a regular expression and don't want it to have special meaning, you should escape the metacharacter using `\`.
+
+**Check items that have a description that contains the word "salt" exactly once:**
+
+```sql
+SELECT *
+FROM c
+WHERE RegexMatch (c.description, "salt{1}","")
+```
+
+**Check items that have a description that contains a number between 0 and 99:**
+
+```sql
+SELECT *
+FROM c
+WHERE RegexMatch (c.description, "[0-99]","")
+```
+
+**Check items that have a description that contains four-letter words starting with "S" or "s":**
+
+```sql
+SELECT *
+FROM c
+WHERE RegexMatch (c.description, " s... ","i")
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy) if the regular expression can be broken down into either StartsWith, EndsWith, Contains, or StringEquals system functions.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Replace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/replace.md
+
+ Title: REPLACE in Azure Cosmos DB query language
+description: Learn about SQL system function REPLACE in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# REPLACE (Azure Cosmos DB)
+
+ Replaces all occurrences of a specified string value with another string value.
+
+## Syntax
+
+```sql
+REPLACE(<str_expr1>, <str_expr2>, <str_expr3>)
+```
+
+## Arguments
+
+*str_expr1*
+ Is the string expression to be searched.
+
+*str_expr2*
+ Is the string expression to be found.
+
+*str_expr3*
+ Is the string expression to replace occurrences of *str_expr2* in *str_expr1*.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example shows how to use `REPLACE` in a query.
+
+```sql
+SELECT REPLACE("This is a Test", "Test", "desk") AS replace
+```
+
+ Here is the result set.
+
+```json
+[{"replace": "This is a desk"}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Replicate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/replicate.md
+
+ Title: REPLICATE in Azure Cosmos DB query language
+description: Learn about SQL system function REPLICATE in Azure Cosmos DB.
++++ Last updated : 03/03/2020+++
+# REPLICATE (Azure Cosmos DB)
+
+ Repeats a string value a specified number of times.
+
+## Syntax
+
+```sql
+REPLICATE(<str_expr>, <num_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression.
+
+*num_expr*
+ Is a numeric expression. If *num_expr* is negative or non-finite, the result is undefined.
+
+## Return types
+
+ Returns a string expression.
+
+## Remarks
+
+ The maximum length of the result is 10,000 characters i.e. (length(*str_expr*) * *num_expr*) <= 10,000. This system function will not utilize the index.
+
+## Examples
+
+ The following example shows how to use `REPLICATE` in a query.
+
+```sql
+SELECT REPLICATE("a", 3) AS replicate
+```
+
+ Here is the result set.
+
+```json
+[{"replicate": "aaa"}]
+```
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Reverse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/reverse.md
+
+ Title: REVERSE in Azure Cosmos DB query language
+description: Learn about SQL system function REVERSE in Azure Cosmos DB.
++++ Last updated : 03/03/2020+++
+# REVERSE (Azure Cosmos DB)
+
+ Returns the reverse order of a string value.
+
+## Syntax
+
+```sql
+REVERSE(<str_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example shows how to use `REVERSE` in a query.
+
+```sql
+SELECT REVERSE("Abc") AS reverse
+```
+
+ Here is the result set.
+
+```json
+[{"reverse": "cbA"}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Right https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/right.md
+
+ Title: RIGHT in Azure Cosmos DB query language
+description: Learn about SQL system function RIGHT in Azure Cosmos DB.
++++ Last updated : 03/03/2020+++
+# RIGHT (Azure Cosmos DB)
+
+ Returns the right part of a string with the specified number of characters.
+
+## Syntax
+
+```sql
+RIGHT(<str_expr>, <num_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is the string expression to extract characters from.
+
+*num_expr*
+ Is a numeric expression which specifies the number of characters.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example returns the right part of "abc" for various length values.
+
+```sql
+SELECT RIGHT("abc", 1) AS r1, RIGHT("abc", 2) AS r2
+```
+
+ Here is the result set.
+
+```json
+[{"r1": "c", "r2": "bc"}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Round https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/round.md
+
+ Title: ROUND in Azure Cosmos DB query language
+description: Learn about SQL system function ROUND in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# ROUND (Azure Cosmos DB)
+
+ Returns a numeric value, rounded to the closest integer value.
+
+## Syntax
+
+```sql
+ROUND(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Remarks
+
+The rounding operation performed follows midpoint rounding away from zero. If the input is a numeric expression which falls exactly between two integers then the result will be the closest integer value away from zero. This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+
+|<numeric_expr>|Rounded|
+|-|-|
+|-6.5000|-7|
+|-0.5|-1|
+|0.5|1|
+|6.5000|7|
+
+## Examples
+
+The following example rounds the following positive and negative numbers to the nearest integer.
+
+```sql
+SELECT ROUND(2.4) AS r1, ROUND(2.6) AS r2, ROUND(2.5) AS r3, ROUND(-2.4) AS r4, ROUND(-2.6) AS r5
+```
+
+Here is the result set.
+
+```json
+[{"r1": 2, "r2": 3, "r3": 3, "r4": -2, "r5": -3}]
+```
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Rtrim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/rtrim.md
+
+ Title: RTRIM in Azure Cosmos DB query language
+description: Learn about SQL system function RTRIM in Azure Cosmos DB.
++++ Last updated : 09/14/2021+++
+# RTRIM (Azure Cosmos DB)
+
+ Returns a string expression after it removes trailing whitespace or specified characters.
+
+## Syntax
+
+```sql
+RTRIM(<str_expr1>[, <str_expr2>])
+```
+
+## Arguments
+
+*str_expr1*
+ Is a string expression.
+
+*str_expr2*
+ Is an optional string expression to be trimmed from str_expr1. If not set, the default is whitespace.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example shows how to use `RTRIM` inside a query.
+
+```sql
+SELECT RTRIM(" abc") AS t1,
+RTRIM(" abc ") AS t2,
+RTRIM("abc ") AS t3,
+RTRIM("abc") AS t4,
+RTRIM("abc", "bc") AS t5,
+RTRIM("abc", "abc") AS t6
+```
+
+ Here is the result set.
+
+```json
+[
+ {
+ "t1": " abc",
+ "t2": " abc",
+ "t3": "abc",
+ "t4": "abc",
+ "t5": "a",
+ "t6": ""
+ }
+]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Scalar Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/scalar-expressions.md
+
+ Title: Scalar expressions in Azure Cosmos DB SQL queries
+description: Learn about the scalar expression SQL syntax for Azure Cosmos DB. This article also describes how to combine scalar expressions into complex expressions by using operators.
+++++ Last updated : 05/17/2019+++
+# Scalar expressions in Azure Cosmos DB SQL queries
+
+The [SELECT clause](select.md) supports scalar expressions. A scalar expression is a combination of symbols and operators that can be evaluated to obtain a single value. Examples of scalar expressions include: constants, property references, array element references, alias references, or function calls. Scalar expressions can be combined into complex expressions using operators.
+
+## Syntax
+
+```sql
+<scalar_expression> ::=
+ <constant>
+ | input_alias
+ | parameter_name
+ | <scalar_expression>.property_name
+ | <scalar_expression>'['"property_name"|array_index']'
+ | unary_operator <scalar_expression>
+ | <scalar_expression> binary_operator <scalar_expression>
+ | <scalar_expression> ? <scalar_expression> : <scalar_expression>
+ | <scalar_function_expression>
+ | <create_object_expression>
+ | <create_array_expression>
+ | (<scalar_expression>)
+
+<scalar_function_expression> ::=
+ 'udf.' Udf_scalar_function([<scalar_expression>][,…n])
+ | builtin_scalar_function([<scalar_expression>][,…n])
+
+<create_object_expression> ::=
+ '{' [{property_name | "property_name"} : <scalar_expression>][,…n] '}'
+
+<create_array_expression> ::=
+ '[' [<scalar_expression>][,…n] ']'
+
+```
+
+## Arguments
+
+- `<constant>`
+
+ Represents a constant value. See [Constants](constants.md) section for details.
+
+- `input_alias`
+
+ Represents a value defined by the `input_alias` introduced in the `FROM` clause.
+ This value is guaranteed not to be **undefined**; **undefined** values in the input are skipped.
+
+- `<scalar_expression>.property_name`
+
+ Represents a value of the property of an object. If the property doesn't exist, or the property is referenced on a value that isn't an object, then the expression evaluates to the **undefined** value.
+
+- `<scalar_expression>'['"property_name"|array_index']'`
+
+ Represents a value of the property with name `property_name` or array element with index `array_index` of an array. If the property/array index doesn't exist, or the property/array index is referenced on a value that isn't an object/array, then the expression evaluates to the undefined value.
+
+- `unary_operator <scalar_expression>`
+
+ Represents an operator that is applied to a single value.
+
+- `<scalar_expression> binary_operator <scalar_expression>`
+
+ Represents an operator that is applied to two values.
+
+- `<scalar_function_expression>`
+
+ Represents a value defined by a result of a function call.
+
+- `udf_scalar_function`
+
+ Name of the user-defined scalar function.
+
+- `builtin_scalar_function`
+
+ Name of the built-in scalar function.
+
+- `<create_object_expression>`
+
+ Represents a value obtained by creating a new object with specified properties and their values.
+
+- `<create_array_expression>`
+
+ Represents a value obtained by creating a new array with specified values as elements.
+
+- `parameter_name`
+
+ Represents a value of the specified parameter name. Parameter names must have a single \@ as the first character.
+
+## Remarks
+
+ When calling a built-in or user-defined scalar function, all arguments must be defined. If any of the arguments is undefined, the function won't be called, and the result will be undefined.
+
+ When creating an object, any property that is assigned undefined value will be skipped and not included in the created object.
+
+ When creating an array, any element value that is assigned **undefined** value will be skipped and not included in the created array. This will cause the next defined element to take its place in such a way that the created array won't have skipped indexes.
+
+## Examples
+
+```sql
+ SELECT ((2 + 11 % 7)-2)/3
+```
+
+The results are:
+
+```json
+ [{
+ "$1": 1.33333
+ }]
+```
+
+In the following query, the result of the scalar expression is a Boolean:
+
+```sql
+ SELECT f.address.city = f.address.state AS AreFromSameCityState
+ FROM Families f
+```
+
+The results are:
+
+```json
+ [
+ {
+ "AreFromSameCityState": false
+ },
+ {
+ "AreFromSameCityState": true
+ }
+ ]
+```
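+
+The following sketch combines the object and array creation expressions with the behavior described in the Remarks section. It assumes `f.missingProperty` doesn't exist on the items, so that value is undefined and is skipped:
+
+```sql
+-- "missing" is omitted from the created object, and the created array contains only the defined values.
+SELECT {"id": f.id, "missing": f.missingProperty} AS obj,
+ [f.id, f.missingProperty] AS arr
+FROM Families f
+```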
+
+## Next steps
+
+- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Subqueries](subquery.md)
cosmos-db Select https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/select.md
+
+ Title: SELECT clause in Azure Cosmos DB
+description: Learn about SQL SELECT clause for Azure Cosmos DB. Use SQL as an Azure Cosmos DB JSON query language.
+++++ Last updated : 05/08/2020++++
+# SELECT clause in Azure Cosmos DB
+
+Every query consists of a `SELECT` clause and optional [FROM](from.md) and [WHERE](where.md) clauses, per ANSI SQL standards. Typically, the source in the `FROM` clause is enumerated, and the `WHERE` clause applies a filter on the source to retrieve a subset of JSON items. The `SELECT` clause then projects the requested JSON values in the select list.
+
+## Syntax
+
+```sql
+SELECT <select_specification>
+
+<select_specification> ::=
+ '*'
+ | [DISTINCT] <object_property_list>
+ | [DISTINCT] VALUE <scalar_expression> [[ AS ] value_alias]
+
+<object_property_list> ::=
+{ <scalar_expression> [ [ AS ] property_alias ] } [ ,...n ]
+```
+
+## Arguments
+
+- `<select_specification>`
+
+ Properties or value to be selected for the result set.
+
+- `'*'`
+
+ Specifies that the value should be retrieved without making any changes. Specifically if the processed value is an object, all properties will be retrieved.
+
+- `<object_property_list>`
+
+ Specifies the list of properties to be retrieved. Each returned value will be an object with the properties specified.
+
+- `VALUE`
+
+ Specifies that the JSON value should be retrieved instead of the complete JSON object. Unlike `<object_property_list>`, this doesn't wrap the projected value in an object.
+
+- `DISTINCT`
+
+ Specifies that duplicates of projected properties should be removed.
+
+- `<scalar_expression>`
+
+ Expression representing the value to be computed. See [Scalar expressions](scalar-expressions.md) section for details.
+
+## Remarks
+
+The `SELECT *` syntax is only valid if the FROM clause has declared exactly one alias. `SELECT *` provides an identity projection, which can be useful if no projection is needed. `SELECT *` is only valid if the FROM clause is specified and introduces only a single input source.
+
+Both `SELECT <select_list>` and `SELECT *` are "syntactic sugar" and can be alternatively expressed by using simple SELECT statements as shown below.
+
+1. `SELECT * FROM ... AS from_alias ...`
+
+ is equivalent to:
+
+ `SELECT from_alias FROM ... AS from_alias ...`
+
+2. `SELECT <expr1> AS p1, <expr2> AS p2,..., <exprN> AS pN [other clauses...]`
+
+ is equivalent to:
+
+ `SELECT VALUE { p1: <expr1>, p2: <expr2>, ..., pN: <exprN> }[other clauses...]`
+
+## Examples
+
+The following SELECT query example returns `address` from `Families` whose `id` matches `AndersenFamily`:
+
+```sql
+ SELECT f.address
+ FROM Families f
+ WHERE f.id = "AndersenFamily"
+```
+
+The results are:
+
+```json
+ [{
+ "address": {
+ "state": "WA",
+ "county": "King",
+ "city": "Seattle"
+ }
+ }]
+```
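+
+As a concrete sketch of the second equivalence above, the following two queries produce the same result against the `Families` container:
+
+```sql
+SELECT f.id AS familyId, f.address.city AS city
+FROM Families f
+
+-- Equivalent projection expressed with SELECT VALUE and an object literal.
+SELECT VALUE { familyId: f.id, city: f.address.city }
+FROM Families f
+```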
+
+## Next steps
+
+- [Getting started](getting-started.md)
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [WHERE clause](where.md)
cosmos-db Sign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/sign.md
+
+ Title: SIGN in Azure Cosmos DB query language
+description: Learn about SQL system function SIGN in Azure Cosmos DB.
++++ Last updated : 03/03/2020+++
+# SIGN (Azure Cosmos DB)
+
+ Returns the positive (+1), zero (0), or negative (-1) sign of the specified numeric expression.
+
+## Syntax
+
+```sql
+SIGN(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the `SIGN` values of numbers from -2 to 2.
+
+```sql
+SELECT SIGN(-2) AS s1, SIGN(-1) AS s2, SIGN(0) AS s3, SIGN(1) AS s4, SIGN(2) AS s5
+```
+
+ Here is the result set.
+
+```json
+[{"s1": -1, "s2": -1, "s3": 0, "s4": 1, "s5": 1}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Sin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/sin.md
+
+ Title: SIN in Azure Cosmos DB query language
+description: Learn about SQL system function SIN in Azure Cosmos DB.
++++ Last updated : 03/03/2020+++
+# SIN (Azure Cosmos DB)
+
+ Returns the trigonometric sine of the specified angle, in radians, in the specified expression.
+
+## Syntax
+
+```sql
+SIN(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example calculates the `SIN` of the specified angle.
+
+```sql
+SELECT SIN(45.175643) AS sin
+```
+
+ Here is the result set.
+
+```json
+[{"sin": 0.929607286611012}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Spatial Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/spatial-functions.md
+
+ Title: Spatial functions in Azure Cosmos DB query language
+description: Learn about spatial SQL system functions in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# Spatial functions (Azure Cosmos DB)
+
+Azure Cosmos DB supports the following Open Geospatial Consortium (OGC) built-in functions for geospatial querying.
+
+## Functions
+
+The following scalar functions perform an operation on a spatial object input value and return a numeric or Boolean value.
+
+* [ST_DISTANCE](st-distance.md)
+* [ST_INTERSECTS](st-intersects.md)
+* [ST_ISVALID](st-isvalid.md)
+* [ST_ISVALIDDETAILED](st-isvaliddetailed.md)
+* [ST_WITHIN](st-within.md)
++++
+
+
+## Next steps
+
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [User Defined Functions](udfs.md)
+- [Aggregates](aggregate-functions.md)
cosmos-db Sqrt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/sqrt.md
+
+ Title: SQRT in Azure Cosmos DB query language
+description: Learn about SQL system function SQRT in Azure Cosmos DB.
++++ Last updated : 03/03/2020+++
+# SQRT (Azure Cosmos DB)
+
+ Returns the square root of the specified numeric value.
+
+## Syntax
+
+```sql
+SQRT(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the square roots of numbers 1-3.
+
+```sql
+SELECT SQRT(1) AS s1, SQRT(2.0) AS s2, SQRT(3) AS s3
+```
+
+ Here is the result set.
+
+```json
+[{"s1": 1, "s2": 1.4142135623730952, "s3": 1.7320508075688772}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Square https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/square.md
+
+ Title: SQUARE in Azure Cosmos DB query language
+description: Learn about SQL system function SQUARE in Azure Cosmos DB.
++++ Last updated : 03/04/2020+++
+# SQUARE (Azure Cosmos DB)
+
+ Returns the square of the specified numeric value.
+
+## Syntax
+
+```sql
+SQUARE(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example returns the squares of numbers 1-3.
+
+```sql
+SELECT SQUARE(1) AS s1, SQUARE(2.0) AS s2, SQUARE(3) AS s3
+```
+
+ Here is the result set.
+
+```json
+[{"s1": 1, "s2": 4, "s3": 9}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db St Distance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-distance.md
+
+ Title: ST_DISTANCE in Azure Cosmos DB query language
+description: Learn about SQL system function ST_DISTANCE in Azure Cosmos DB.
++++ Last updated : 02/17/2021+++
+# ST_DISTANCE (Azure Cosmos DB)
+
+ Returns the distance between the two GeoJSON Point, Polygon, MultiPolygon or LineString expressions. To learn more, see the [Geospatial and GeoJSON location data](geospatial-intro.md) article.
+
+## Syntax
+
+```sql
+ST_DISTANCE (<spatial_expr>, <spatial_expr>)
+```
+
+## Arguments
+
+*spatial_expr*
+ Is any valid GeoJSON Point, Polygon, or LineString object expression.
+
+## Return types
+
+ Returns a numeric expression containing the distance. This is expressed in meters for the default reference system.
+
+## Examples
+
+ The following example shows how to return all family documents that are within 30 km of the specified location using the `ST_DISTANCE` built-in function.
+
+```sql
+SELECT f.id
+FROM Families f
+WHERE ST_DISTANCE(f.location, {'type': 'Point', 'coordinates':[31.9, -4.8]}) < 30000
+```
+
+ Here is the result set.
+
+```json
+[{
+ "id": "WakefieldFamily"
+}]
+```
+
+## Remarks
+
+This system function will benefit from a [geospatial index](../../index-policy.md#spatial-indexes) except in queries with aggregates.
+
+> [!NOTE]
+> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
+
+## Next steps
+
+- [Spatial functions Azure Cosmos DB](spatial-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db St Intersects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-intersects.md
+
+ Title: ST_INTERSECTS in Azure Cosmos DB query language
+description: Learn about SQL system function ST_INTERSECTS in Azure Cosmos DB.
++++ Last updated : 09/21/2021+++
+# ST_INTERSECTS (Azure Cosmos DB)
+
+ Returns a Boolean expression indicating whether the GeoJSON object (Point, Polygon, MultiPolygon, or LineString) specified in the first argument intersects the GeoJSON (Point, Polygon, MultiPolygon, or LineString) in the second argument.
+
+## Syntax
+
+```sql
+ST_INTERSECTS (<spatial_expr>, <spatial_expr>)
+```
+
+## Arguments
+
+*spatial_expr*
+ Is a GeoJSON Point, Polygon, or LineString object expression.
+
+## Return types
+
+ Returns a Boolean value.
+
+## Examples
+
+ The following example shows how to find all areas that intersect with the given polygon.
+
+```sql
+SELECT a.id
+FROM Areas a
+WHERE ST_INTERSECTS(a.location, {
+ 'type':'Polygon',
+ 'coordinates': [[[31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]]]
+})
+```
+
+ Here is the result set.
+
+```json
+[{ "id": "IntersectingPolygon" }]
+```
+
+## Remarks
+
+This system function will benefit from a [geospatial index](../../index-policy.md#spatial-indexes) except in queries with aggregates.
+
+> [!NOTE]
+> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
+
+## Next steps
+
+- [Spatial functions Azure Cosmos DB](spatial-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db St Isvalid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-isvalid.md
+
+ Title: ST_ISVALID in Azure Cosmos DB query language
+description: Learn about SQL system function ST_ISVALID in Azure Cosmos DB.
++++ Last updated : 09/21/2021+++
+# ST_ISVALID (Azure Cosmos DB)
+
+ Returns a Boolean value indicating whether the specified GeoJSON Point, Polygon, MultiPolygon, or LineString expression is valid.
+
+## Syntax
+
+```sql
+ST_ISVALID(<spatial_expr>)
+```
+
+## Arguments
+
+*spatial_expr*
+ Is a GeoJSON Point, Polygon, or LineString expression.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example shows how to check if a point is valid using `ST_ISVALID`.
+
+ For example, this point has a latitude value that's not in the valid range of values [-90, 90], so the query returns false.
+
+ For polygons, the GeoJSON specification requires that the last coordinate pair provided should be the same as the first, to create a closed shape. Points within a polygon must be specified in counter-clockwise order. A polygon specified in clockwise order represents the inverse of the region within it.
+
+```sql
+SELECT ST_ISVALID({ "type": "Point", "coordinates": [31.9, -132.8] }) AS b
+```
+
+ Here is the result set.
+
+```json
+[{ "b": false }]
+```
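+
+As a sketch of the polygon case described above, the following ring isn't closed because the last coordinate pair doesn't repeat the first, so `ST_ISVALID` should return `false` here:
+
+```sql
+SELECT ST_ISVALID({
+  "type": "Polygon",
+  "coordinates": [[ [ 31.8, -5 ], [ 31.8, -4.7 ], [ 32, -4.7 ], [ 32, -5 ] ]]
+}) AS b
+```
+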
+> [!NOTE]
+> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
+
+## Next steps
+
+- [Spatial functions Azure Cosmos DB](spatial-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db St Isvaliddetailed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-isvaliddetailed.md
+
+ Title: ST_ISVALIDDETAILED in Azure Cosmos DB query language
+description: Learn about SQL system function ST_ISVALIDDETAILED in Azure Cosmos DB.
++++ Last updated : 09/21/2021+++
+# ST_ISVALIDDETAILED (Azure Cosmos DB)
+
+ Returns a JSON value containing a Boolean value if the specified GeoJSON Point, Polygon, or LineString expression is valid, and if invalid, additionally the reason as a string value.
+
+## Syntax
+
+```sql
+ST_ISVALIDDETAILED(<spatial_expr>)
+```
+
+## Arguments
+
+*spatial_expr*
+ Is a GeoJSON Point, Polygon, or LineString expression.
+
+## Return types
+
+ Returns a JSON value containing a Boolean value if the specified GeoJSON Point, Polygon, or LineString expression is valid, and if invalid, additionally the reason as a string value.
+
+## Examples
+
+ The following example shows how to check validity (with details) using `ST_ISVALIDDETAILED`.
+
+```sql
+SELECT ST_ISVALIDDETAILED({
+ "type": "Polygon",
+ "coordinates": [[ [ 31.8, -5 ], [ 31.8, -4.7 ], [ 32, -4.7 ], [ 32, -5 ] ]]
+}) AS b
+```
+
+ Here is the result set.
+
+```json
+[{
+ "b": {
+ "valid": false,
+ "reason": "The Polygon input is not valid because the start and end points of the ring number 1 are not the same. Each ring of a polygon must have the same start and end points."
+ }
+}]
+```
+
+> [!NOTE]
+> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
+
+## Next steps
+
+- [Spatial functions Azure Cosmos DB](spatial-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db St Within https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-within.md
+
+ Title: ST_WITHIN in Azure Cosmos DB query language
+description: Learn about SQL system function ST_WITHIN in Azure Cosmos DB.
++++ Last updated : 09/21/2021+++
+# ST_WITHIN (Azure Cosmos DB)
+
+ Returns a Boolean expression indicating whether the GeoJSON object (Point, Polygon, MultiPolygon, or LineString) specified in the first argument is within the GeoJSON (Point, Polygon, MultiPolygon, or LineString) in the second argument.
+
+## Syntax
+
+```sql
+ST_WITHIN (<spatial_expr>, <spatial_expr>)
+```
+
+## Arguments
+
+*spatial_expr*
+ Is a GeoJSON Point, Polygon, MultiPolygon, or LineString object expression.
+
+## Return types
+
+ Returns a Boolean value.
+
+## Examples
+
+ The following example shows how to find all family documents within a polygon using `ST_WITHIN`.
+
+```sql
+SELECT f.id
+FROM Families f
+WHERE ST_WITHIN(f.location, {
+ 'type':'Polygon',
+ 'coordinates': [[[31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]]]
+})
+```
+
+ Here is the result set.
+
+```json
+[{ "id": "WakefieldFamily" }]
+```
+
+## Remarks
+
+This system function will benefit from a [geospatial index](../../index-policy.md#spatial-indexes) except in queries with aggregates.
+
+> [!NOTE]
+> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
++
+## Next steps
+
+- [Spatial functions Azure Cosmos DB](spatial-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Startswith https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/startswith.md
+
+ Title: StartsWith in Azure Cosmos DB query language
+description: Learn about SQL system function STARTSWITH in Azure Cosmos DB.
++++ Last updated : 04/01/2021+++
+# STARTSWITH (Azure Cosmos DB)
+
+ Returns a Boolean indicating whether the first string expression starts with the second.
+
+## Syntax
+
+```sql
+STARTSWITH(<str_expr1>, <str_expr2> [, <bool_expr>])
+```
+
+## Arguments
+
+*str_expr1*
+ Is a string expression.
+
+*str_expr2*
+ Is a string expression to be compared to the beginning of *str_expr1*.
+
+*bool_expr*
+ Optional value for ignoring case. When set to true, STARTSWITH will do a case-insensitive search. When unspecified, this value is false.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+The following example checks whether the string "abc" begins with "b" or "A", both with and without the case-insensitive option.
+
+```sql
+SELECT STARTSWITH("abc", "b", false) AS s1, STARTSWITH("abc", "A", false) AS s2, STARTSWITH("abc", "A", true) AS s3
+```
+
+ Here is the result set.
+
+```json
+[
+ {
+ "s1": false,
+ "s2": false,
+ "s3": true
+ }
+]
+```
+
+## Remarks
+
+Learn about [how this string system function uses the index](string-functions.md).
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db String Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/string-functions.md
+
+ Title: String functions in Azure Cosmos DB query language
+description: Learn about string SQL system functions in Azure Cosmos DB.
++++ Last updated : 05/26/2021++++
+# String functions (Azure Cosmos DB)
+
+The string functions let you perform operations on strings in Azure Cosmos DB.
+
+## Functions
+
+The below scalar functions perform an operation on a string input value and return a string, numeric, or Boolean value. The **index usage** column assumes, where applicable, that you're comparing the string system function to another value with an equality filter.
+
+| System function | Index usage | [Index usage in queries with scalar aggregate functions](../../index-overview.md#index-utilization-for-scalar-aggregate-functions) | Remarks |
+| --- | --- | --- | --- |
+| [CONCAT](concat.md) | Full scan | Full scan | |
+| [CONTAINS](contains.md) | Full index scan | Full scan | |
+| [ENDSWITH](endswith.md) | Full index scan | Full scan | |
+| [INDEX_OF](index-of.md) | Full scan | Full scan | |
+| [LEFT](left.md) | Precise index scan | Precise index scan | |
+| [LENGTH](length.md) | Full scan | Full scan | |
+| [LOWER](lower.md) | Full scan | Full scan | |
+| [LTRIM](ltrim.md) | Full scan | Full scan | |
+| [REGEXMATCH](regexmatch.md) | Full index scan | Full scan | |
+| [REPLACE](replace.md) | Full scan | Full scan | |
+| [REPLICATE](replicate.md) | Full scan | Full scan | |
+| [REVERSE](reverse.md) | Full scan | Full scan | |
+| [RIGHT](right.md) | Full scan | Full scan | |
+| [RTRIM](rtrim.md) | Full scan | Full scan | |
+| [STARTSWITH](startswith.md) | Precise index scan | Precise index scan | Will be Expanded index scan if case-insensitive option is true. |
+| [STRINGEQUALS](stringequals.md) | Index seek | Index seek | Will be Expanded index scan if case-insensitive option is true. |
+| [StringToArray](stringtoarray.md) | Full scan | Full scan | |
+| [StringToBoolean](stringtoboolean.md) | Full scan | Full scan | |
+| [StringToNull](stringtonull.md) | Full scan | Full scan | |
+| [StringToNumber](stringtonumber.md) | Full scan | Full scan | |
+
+Learn about [index usage](../../index-overview.md#index-usage) in Azure Cosmos DB.
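+
+For example, the index usage entries in the table above assume a pattern like the following sketch, where the function's result is compared to a value with an equality filter (the `c.title` property is illustrative):
+
+```sql
+SELECT c.id
+FROM c
+WHERE LEFT(c.title, 3) = "abc"
+```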
+
+## Next steps
+
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [User Defined Functions](udfs.md)
+- [Aggregates](aggregate-functions.md)
cosmos-db Stringequals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringequals.md
+
+ Title: StringEquals in Azure Cosmos DB query language
+description: Learn about how the StringEquals SQL system function in Azure Cosmos DB returns a Boolean indicating whether the first string expression matches the second
++++ Last updated : 05/20/2020++++
+# STRINGEQUALS (Azure Cosmos DB)
+
+ Returns a Boolean indicating whether the first string expression matches the second.
+
+## Syntax
+
+```sql
+STRINGEQUALS(<str_expr1>, <str_expr2> [, <bool_expr>])
+```
+
+## Arguments
+
+*str_expr1*
+ Is the first string expression to compare.
+
+*str_expr2*
+ Is the second string expression to compare.
+
+*bool_expr*
+ Optional value for ignoring case. When set to true, StringEquals will do a case-insensitive search. When unspecified, this value is false.
+
+## Return types
+
+ Returns a Boolean expression.
+
+## Examples
+
+ The following example checks if "abc" matches "abc" and if "abc" matches "ABC".
+
+```sql
+SELECT STRINGEQUALS("abc", "abc", false) AS c1, STRINGEQUALS("abc", "ABC", false) AS c2, STRINGEQUALS("abc", "ABC", true) AS c3
+```
+
+ Here is the result set.
+
+```json
+[
+ {
+ "c1": true,
+ "c2": false,
+ "c3": true
+ }
+]
+```
+
+## Remarks
+
+Learn about [how this string system function uses the index](string-functions.md).
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Stringtoarray https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringtoarray.md
+
+ Title: StringToArray in Azure Cosmos DB query language
+description: Learn about SQL system function StringToArray in Azure Cosmos DB.
++++ Last updated : 03/03/2020+++
+# StringToArray (Azure Cosmos DB)
+
+ Returns expression translated to an Array. If expression can't be translated, returns undefined.
+
+## Syntax
+
+```sql
+StringToArray(<str_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression to be parsed as a JSON Array expression.
+
+## Return types
+
+ Returns an array expression or undefined.
+
+## Remarks
+ Nested string values must be written with double quotes to be valid JSON. For details on the JSON format, see [json.org](https://json.org/). This system function won't utilize the index.
+
+## Examples
+
+ The following example shows how `StringToArray` behaves across different types.
+
+ The following are examples with valid input.
+
+```sql
+SELECT
+ StringToArray('[]') AS a1,
+ StringToArray("[1,2,3]") AS a2,
+ StringToArray("[\"str\",2,3]") AS a3,
+ StringToArray('[["5","6","7"],["8"],["9"]]') AS a4,
+ StringToArray('[1,2,3, "[4,5,6]",[7,8]]') AS a5
+```
+
+Here's the result set.
+
+```json
+[{"a1": [], "a2": [1,2,3], "a3": ["str",2,3], "a4": [["5","6","7"],["8"],["9"]], "a5": [1,2,3,"[4,5,6]",[7,8]]}]
+```
+
+The following example illustrates invalid input:
+
+- Single quotes within the array aren't valid JSON. Even though they're valid within a query, they won't parse to a valid array.
+- Strings within the array string must either be escaped, `"[\"\"]"`, or the surrounding quotes must be single, `'[""]'`.
+
+```sql
+SELECT
+ StringToArray("['5','6','7']")
+```
+
+Here's the result set.
+
+```json
+[{}]
+```
+
+The following are examples of invalid input.
+
+ The expression passed will be parsed as a JSON array; the following don't evaluate to type array and thus return undefined.
+
+```sql
+SELECT
+ StringToArray("["),
+ StringToArray("1"),
+ StringToArray(NaN),
+ StringToArray(false),
+ StringToArray(undefined)
+```
+
+Here's the result set.
+
+```json
+[{}]
+```
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Stringtoboolean https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringtoboolean.md
+
+ Title: StringToBoolean in Azure Cosmos DB query language
+description: Learn about SQL system function StringToBoolean in Azure Cosmos DB.
++++ Last updated : 03/03/2020+++
+# StringToBoolean (Azure Cosmos DB)
+
+ Returns expression translated to a Boolean. If expression can't be translated, returns undefined.
+
+## Syntax
+
+```sql
+StringToBoolean(<str_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression to be parsed as a Boolean expression.
+
+## Return types
+
+ Returns a Boolean expression or undefined.
+
+## Examples
+
+ The following example shows how `StringToBoolean` behaves across different types.
+
+ The following are examples with valid input.
+
+Whitespace is allowed only before or after `true`/`false`.
+
+```sql
+SELECT
+ StringToBoolean("true") AS b1,
+ StringToBoolean(" false") AS b2,
+ StringToBoolean("false ") AS b3
+```
+
+ Here's the result set.
+
+```json
+[{"b1": true, "b2": false, "b3": false}]
+```
+
+The following are examples with invalid input.
+
+ Booleans are case sensitive and must be written with all lowercase characters such as `true` and `false`.
+
+```sql
+SELECT
+ StringToBoolean("TRUE"),
+ StringToBoolean("False")
+```
+
+Here's the result set.
+
+```json
+[{}]
+```
+
+The expression passed will be parsed as a Boolean expression; these inputs don't evaluate to type Boolean and thus return undefined.
+
+```sql
+SELECT
+ StringToBoolean("null"),
+ StringToBoolean(undefined),
+ StringToBoolean(NaN),
+ StringToBoolean(false),
+ StringToBoolean(true)
+```
+
+Here's the result set.
+
+```json
+[{}]
+```
+
+## Remarks
+
+This system function won't utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Stringtonull https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringtonull.md
+
+ Title: StringToNull in Azure Cosmos DB query language
+description: Learn about SQL system function StringToNull in Azure Cosmos DB.
++++ Last updated : 03/03/2020+++
+# StringToNull (Azure Cosmos DB)
+
+ Returns expression translated to null. If expression can't be translated, returns undefined.
+
+## Syntax
+
+```sql
+StringToNull(<str_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression to be parsed as a null expression.
+
+## Return types
+
+ Returns a null expression or undefined.
+
+## Examples
+
+ The following example shows how `StringToNull` behaves across different types.
+
+The following are examples with valid input.
+
+ Whitespace is allowed only before or after "null".
+
+```sql
+SELECT
+ StringToNull("null") AS n1,
+ StringToNull(" null ") AS n2,
+ IS_NULL(StringToNull("null ")) AS n3
+```
+
+ Here's the result set.
+
+```json
+[{"n1": null, "n2": null, "n3": true}]
+```
+
+The following are examples with invalid input.
+
+Null is case sensitive and must be written with all lowercase characters such as `null`.
+
+```sql
+SELECT
+ StringToNull("NULL"),
+ StringToNull("Null")
+```
+
+ Here's the result set.
+
+```json
+[{}]
+```
+
+The expression passed will be parsed as a null expression; these inputs don't evaluate to type null and thus return undefined.
+
+```sql
+SELECT
+ StringToNull("true"),
+ StringToNull(false),
+ StringToNull(undefined),
+ StringToNull(NaN)
+```
+
+ Here's the result set.
+
+```json
+[{}]
+```
+
+## Remarks
+
+This system function won't utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Stringtonumber https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringtonumber.md
+
+ Title: StringToNumber in Azure Cosmos DB query language
+description: Learn about SQL system function StringToNumber in Azure Cosmos DB.
++++ Last updated : 03/03/2020+++
+# StringToNumber (Azure Cosmos DB)
+
+ Returns expression translated to a Number. If expression cannot be translated, returns undefined.
+
+## Syntax
+
+```sql
+StringToNumber(<str_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression to be parsed as a JSON Number expression. Numbers in JSON must be an integer or a floating point. For details on the JSON format, see [json.org](https://json.org/)
+
+## Return types
+
+ Returns a Number expression or undefined.
+
+## Examples
+
+ The following example shows how `StringToNumber` behaves across different types.
+
+Whitespace is allowed only before or after the Number.
+
+```sql
+SELECT
+ StringToNumber("1.000000") AS num1,
+ StringToNumber("3.14") AS num2,
+ StringToNumber(" 60 ") AS num3,
+ StringToNumber("-1.79769e+308") AS num4
+```
+
+ Here is the result set.
+
+```json
+[{"num1": 1, "num2": 3.14, "num3": 60, "num4": -1.79769e+308}]
+```
+
+In JSON, a valid number must be either an integer or a floating point number.
+
+```sql
+SELECT
+ StringToNumber("0xF")
+```
+
+ Here is the result set.
+
+```json
+[{}]
+```
+
+The expression passed will be parsed as a Number expression; these inputs do not evaluate to type Number and thus return undefined.
+
+```sql
+SELECT
+ StringToNumber("99 54"),
+ StringToNumber(undefined),
+ StringToNumber("false"),
+ StringToNumber(false),
+ StringToNumber(" "),
+ StringToNumber(NaN)
+```
+
+ Here is the result set.
+
+```json
+[{}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Stringtoobject https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringtoobject.md
+
+ Title: StringToObject in Azure Cosmos DB query language
+description: Learn about SQL system function StringToObject in Azure Cosmos DB.
++++ Last updated : 03/03/2020+++
+# StringToObject (Azure Cosmos DB)
+
+ Returns expression translated to an Object. If expression can't be translated, returns undefined.
+
+## Syntax
+
+```sql
+StringToObject(<str_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression to be parsed as a JSON object expression. Nested string values must be written with double quotes to be valid. For details on the JSON format, see [json.org](https://json.org/)
+
+## Return types
+
+ Returns an object expression or undefined.
+
+## Examples
+
+ The following example shows how `StringToObject` behaves across different types.
+
+ The following are examples with valid input.
+
+```sql
+SELECT
+ StringToObject("{}") AS obj1,
+ StringToObject('{"A":[1,2,3]}') AS obj2,
+ StringToObject('{"B":[{"b1":[5,6,7]},{"b2":8},{"b3":9}]}') AS obj3,
+ StringToObject("{\"C\":[{\"c1\":[5,6,7]},{\"c2\":8},{\"c3\":9}]}") AS obj4
+```
+
+Here's the result set.
+
+```json
+[{"obj1": {},
+ "obj2": {"A": [1,2,3]},
+ "obj3": {"B":[{"b1":[5,6,7]},{"b2":8},{"b3":9}]},
+ "obj4": {"C":[{"c1":[5,6,7]},{"c2":8},{"c3":9}]}}]
+```
+
+ The following are examples with invalid input. Even though they're valid within a query, they won't parse to valid objects. Strings within the object string must either be escaped, `"{\"a\":\"str\"}"`, or the surrounding quotes must be single, `'{"a": "str"}'`.
+
+Single quotes surrounding property names aren't valid JSON.
+
+```sql
+SELECT
+ StringToObject("{'a':[1,2,3]}")
+```
+
+Here's the result set.
+
+```json
+[{}]
+```
+
+Property names without surrounding quotes aren't valid JSON.
+
+```sql
+SELECT
+ StringToObject("{a:[1,2,3]}")
+```
+
+Here's the result set.
+
+```json
+[{}]
+```
+
+The following are examples with invalid input.
+
+ The expression passed will be parsed as a JSON object; these inputs don't evaluate to type object and thus return undefined.
+
+```sql
+SELECT
+ StringToObject("}"),
+ StringToObject("{"),
+ StringToObject("1"),
+ StringToObject(NaN),
+ StringToObject(false),
+ StringToObject(undefined)
+```
+
+ Here's the result set.
+
+```json
+[{}]
+```
+
+## Remarks
+
+This system function won't utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Subquery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/subquery.md
+
+ Title: SQL subqueries for Azure Cosmos DB
+description: Learn about SQL subqueries and their common use cases and different types of subqueries in Azure Cosmos DB
+++++ Last updated : 07/30/2021+++
+# SQL subquery examples for Azure Cosmos DB
+
+A subquery is a query nested within another query. A subquery is also called an inner query or inner select. The statement that contains a subquery is typically called an outer query.
+
+This article describes SQL subqueries and their common use cases in Azure Cosmos DB. All sample queries in this doc can be run against [a sample nutrition dataset](https://github.com/AzureCosmosDB/labs/blob/master/dotnet/setup/NutritionData.json).
+
+## Types of subqueries
+
+There are two main types of subqueries:
+
+* **Correlated**: A subquery that references values from the outer query. The subquery is evaluated once for each row that the outer query processes.
+* **Non-correlated**: A subquery that's independent of the outer query. It can be run on its own without relying on the outer query.
+
+> [!NOTE]
+> Azure Cosmos DB supports only correlated subqueries.
+
+Subqueries can be further classified based on the number of rows and columns that they return. There are three types:
+* **Table**: Returns multiple rows and multiple columns.
+* **Multi-value**: Returns multiple rows and a single column.
+* **Scalar**: Returns a single row and a single column.
+
+SQL queries in Azure Cosmos DB always return a single column (either a simple value or a complex document). Therefore, only multi-value and scalar subqueries are applicable in Azure Cosmos DB. You can use a multi-value subquery only in the FROM clause as a relational expression. You can use a scalar subquery as a scalar expression in the SELECT or WHERE clause, or as a relational expression in the FROM clause.
+
+## Multi-value subqueries
+
+Multi-value subqueries return a set of documents and are always used within the FROM clause. They're used for:
+
+* Optimizing JOIN expressions.
+* Evaluating expensive expressions once and referencing multiple times.
+
+## Optimize JOIN expressions
+
+Multi-value subqueries can optimize JOIN expressions by pushing predicates after each select-many expression rather than after all cross-joins in the WHERE clause.
+
+Consider the following query:
+
+```sql
+SELECT Count(1) AS Count
+FROM c
+JOIN t IN c.tags
+JOIN n IN c.nutrients
+JOIN s IN c.servings
+WHERE t.name = 'infant formula' AND (n.nutritionValue > 0
+AND n.nutritionValue < 10) AND s.amount > 1
+```
+
+For this query, the index will match any document that has a tag with the name "infant formula," a nutrient item with a value between 0 and 10, and a serving item with an amount greater than 1. The JOIN expression here will perform the cross-product of all items of the tags, nutrients, and servings arrays for each matching document before any filter is applied.
+
+The WHERE clause will then apply the filter predicate on each <c, t, n, s> tuple. For instance, if a matching document had 10 items in each of the three arrays, it will expand to 1 x 10 x 10 x 10 (that is, 1,000) tuples. Using subqueries here can help in filtering out joined array items before joining with the next expression.
+
+This query is equivalent to the preceding one but uses subqueries:
+
+```sql
+SELECT Count(1) AS Count
+FROM c
+JOIN (SELECT VALUE t FROM t IN c.tags WHERE t.name = 'infant formula')
+JOIN (SELECT VALUE n FROM n IN c.nutrients WHERE n.nutritionValue > 0 AND n.nutritionValue < 10)
+JOIN (SELECT VALUE s FROM s IN c.servings WHERE s.amount > 1)
+```
+
+Assume that only one item in the tags array matches the filter, and there are five items for both nutrients and servings arrays. The JOIN expressions will then expand to 1 x 1 x 5 x 5 = 25 items, as opposed to 1,000 items in the first query.
+
+## Evaluate once and reference many times
+
+Subqueries can help optimize queries with expensive expressions such as user-defined functions (UDFs), complex strings, or arithmetic expressions. You can use a subquery along with a JOIN expression to evaluate the expression once but reference it many times.
+
+The following query runs the UDF `GetMaxNutritionValue` twice:
+
+```sql
+SELECT c.id, udf.GetMaxNutritionValue(c.nutrients) AS MaxNutritionValue
+FROM c
+WHERE udf.GetMaxNutritionValue(c.nutrients) > 100
+```
+
+Here's an equivalent query that runs the UDF only once:
+
+```sql
+SELECT TOP 1000 c.id, MaxNutritionValue
+FROM c
+JOIN (SELECT VALUE udf.GetMaxNutritionValue(c.nutrients)) MaxNutritionValue
+WHERE MaxNutritionValue > 100
+```
+
+> [!NOTE]
+> Keep in mind the cross-product behavior of JOIN expressions. If the UDF expression can evaluate to undefined, you should ensure that the JOIN expression always produces a single row by returning an object from the subquery rather than the value directly.
+>
+
+Here's a similar example that returns an object rather than a value:
+
+```sql
+SELECT TOP 1000 c.id, m.MaxNutritionValue
+FROM c
+JOIN (SELECT udf.GetMaxNutritionValue(c.nutrients) AS MaxNutritionValue) m
+WHERE m.MaxNutritionValue > 100
+```
+
+The approach is not limited to UDFs. It applies to any potentially expensive expression. For example, you can take the same approach with the mathematical function `avg`:
+
+```sql
+SELECT TOP 1000 c.id, AvgNutritionValue
+FROM c
+JOIN (SELECT VALUE avg(n.nutritionValue) FROM n IN c.nutrients) AvgNutritionValue
+WHERE AvgNutritionValue > 80
+```
+
+## Mimic join with external reference data
+
+You might often need to reference static data that rarely changes, such as units of measurement or country codes. It's better not to duplicate such data for each document. Avoiding this duplication will save on storage and improve write performance by keeping the document size smaller. You can use a subquery to mimic inner-join semantics with a collection of reference data.
+
+For instance, consider this set of reference data:
+
+| **Unit** | **Name** | **Multiplier** | **Base unit** |
+| -- | - | -- | - |
+| ng | Nanogram | 1.00E-09 | Gram |
+| µg | Microgram | 1.00E-06 | Gram |
+| mg | Milligram | 1.00E-03 | Gram |
+| g | Gram | 1.00E+00 | Gram |
+| kg | Kilogram | 1.00E+03 | Gram |
+| Mg | Megagram | 1.00E+06 | Gram |
+| Gg | Gigagram | 1.00E+09 | Gram |
+| nJ | Nanojoule | 1.00E-09 | Joule |
+| µJ | Microjoule | 1.00E-06 | Joule |
+| mJ | Millijoule | 1.00E-03 | Joule |
+| J | Joule | 1.00E+00 | Joule |
+| kJ | Kilojoule | 1.00E+03 | Joule |
+| MJ | Megajoule | 1.00E+06 | Joule |
+| GJ | Gigajoule | 1.00E+09 | Joule |
+| cal | Calorie | 1.00E+00 | calorie |
+| kcal | Calorie | 1.00E+03 | calorie |
+| IU | International units | | |
++
+The following query mimics joining with this data so that you add the name of the unit to the output:
+
+```sql
+SELECT TOP 10 n.id, n.description, n.nutritionValue, n.units, r.name
+FROM food
+JOIN n IN food.nutrients
+JOIN r IN (
+ SELECT VALUE [
+ {unit: 'ng', name: 'nanogram', multiplier: 0.000000001, baseUnit: 'gram'},
+        {unit: 'µg', name: 'microgram', multiplier: 0.000001, baseUnit: 'gram'},
+ {unit: 'mg', name: 'milligram', multiplier: 0.001, baseUnit: 'gram'},
+ {unit: 'g', name: 'gram', multiplier: 1, baseUnit: 'gram'},
+ {unit: 'kg', name: 'kilogram', multiplier: 1000, baseUnit: 'gram'},
+ {unit: 'Mg', name: 'megagram', multiplier: 1000000, baseUnit: 'gram'},
+ {unit: 'Gg', name: 'gigagram', multiplier: 1000000000, baseUnit: 'gram'},
+ {unit: 'nJ', name: 'nanojoule', multiplier: 0.000000001, baseUnit: 'joule'},
+        {unit: 'µJ', name: 'microjoule', multiplier: 0.000001, baseUnit: 'joule'},
+ {unit: 'mJ', name: 'millijoule', multiplier: 0.001, baseUnit: 'joule'},
+ {unit: 'J', name: 'joule', multiplier: 1, baseUnit: 'joule'},
+ {unit: 'kJ', name: 'kilojoule', multiplier: 1000, baseUnit: 'joule'},
+ {unit: 'MJ', name: 'megajoule', multiplier: 1000000, baseUnit: 'joule'},
+ {unit: 'GJ', name: 'gigajoule', multiplier: 1000000000, baseUnit: 'joule'},
+ {unit: 'cal', name: 'calorie', multiplier: 1, baseUnit: 'calorie'},
+ {unit: 'kcal', name: 'Calorie', multiplier: 1000, baseUnit: 'calorie'},
+ {unit: 'IU', name: 'International units'}
+ ]
+)
+WHERE n.units = r.unit
+```
+
+## Scalar subqueries
+
+A scalar subquery expression is a subquery that evaluates to a single value. The value of the scalar subquery expression is the value of the projection (SELECT clause) of the subquery. You can use a scalar subquery expression in many places where a scalar expression is valid. For instance, you can use a scalar subquery in any expression in both the SELECT and WHERE clauses.
+
+Using a scalar subquery doesn't always help optimize, though. For example, passing a scalar subquery as an argument to either a system or user-defined function provides no benefit in request unit (RU) consumption or latency.
+
+Scalar subqueries can be further classified as:
+* Simple-expression scalar subqueries
+* Aggregate scalar subqueries
+
+## Simple-expression scalar subqueries
+
+A simple-expression scalar subquery is a correlated subquery that has a SELECT clause that doesn't contain any aggregate expressions. These subqueries provide no optimization benefits because the compiler converts them into one larger simple expression. There's no correlated context between the inner and outer queries.
+
+Here are few examples:
+
+**Example 1**
+
+```sql
+SELECT 1 AS a, 2 AS b
+```
+
+You can rewrite this query, by using a simple-expression scalar subquery, to:
+
+```sql
+SELECT (SELECT VALUE 1) AS a, (SELECT VALUE 2) AS b
+```
+
+Both queries produce this output:
+
+```json
+[
+ { "a": 1, "b": 2 }
+]
+```
+
+**Example 2**
+
+```sql
+SELECT TOP 5 Concat('id_', f.id) AS id
+FROM food f
+```
+
+You can rewrite this query, by using a simple-expression scalar subquery, to:
+
+```sql
+SELECT TOP 5 (SELECT VALUE Concat('id_', f.id)) AS id
+FROM food f
+```
+
+Query output:
+
+```json
+[
+ { "id": "id_03226" },
+ { "id": "id_03227" },
+ { "id": "id_03228" },
+ { "id": "id_03229" },
+ { "id": "id_03230" }
+]
+```
+
+**Example 3**
+
+```sql
+SELECT TOP 5 f.id, Contains(f.description, 'fruit') = true ? f.description : undefined
+FROM food f
+```
+
+You can rewrite this query, by using a simple-expression scalar subquery, to:
+
+```sql
+SELECT TOP 5 f.id, (SELECT f.description WHERE Contains(f.description, 'fruit')).description
+FROM food f
+```
+
+Query output:
+
+```json
+[
+ { "id": "03230" },
+ { "id": "03238", "description":"Babyfood, dessert, tropical fruit, junior" },
+ { "id": "03229" },
+ { "id": "03226", "description":"Babyfood, dessert, fruit pudding, orange, strained" },
+ { "id": "03227" }
+]
+```
+
+### Aggregate scalar subqueries
+
+An aggregate scalar subquery is a subquery that has an aggregate function in its projection or filter that evaluates to a single value.
+
+**Example 1:**
+
+Here's a subquery with a single aggregate function expression in its projection:
+
+```sql
+SELECT TOP 5
+ f.id,
+ (SELECT VALUE Count(1) FROM n IN f.nutrients WHERE n.units = 'mg'
+) AS count_mg
+FROM food f
+```
+
+Query output:
+
+```json
+[
+ { "id": "03230", "count_mg": 13 },
+ { "id": "03238", "count_mg": 14 },
+ { "id": "03229", "count_mg": 13 },
+ { "id": "03226", "count_mg": 15 },
+ { "id": "03227", "count_mg": 19 }
+]
+```
+
+**Example 2**
+
+Here's a subquery with multiple aggregate function expressions:
+
+```sql
+SELECT TOP 5 f.id, (
+ SELECT Count(1) AS count, Sum(n.nutritionValue) AS sum
+ FROM n IN f.nutrients
+ WHERE n.units = 'mg'
+) AS unit_mg
+FROM food f
+```
+
+Query output:
+
+```json
+[
+ { "id": "03230","unit_mg": { "count": 13,"sum": 147.072 } },
+ { "id": "03238","unit_mg": { "count": 14,"sum": 107.385 } },
+ { "id": "03229","unit_mg": { "count": 13,"sum": 141.579 } },
+ { "id": "03226","unit_mg": { "count": 15,"sum": 183.91399999999996 } },
+ { "id": "03227","unit_mg": { "count": 19,"sum": 94.788999999999987 } }
+]
+```
+
+**Example 3**
+
+Here's a query with an aggregate subquery in both the projection and the filter:
+
+```sql
+SELECT TOP 5
+ f.id,
+ (SELECT VALUE Count(1) FROM n IN f.nutrients WHERE n.units = 'mg') AS count_mg
+FROM food f
+WHERE (SELECT VALUE Count(1) FROM n IN f.nutrients WHERE n.units = 'mg') > 20
+```
+
+Query output:
+
+```json
+[
+ { "id": "03235", "count_mg": 27 },
+ { "id": "03246", "count_mg": 21 },
+ { "id": "03267", "count_mg": 21 },
+ { "id": "03269", "count_mg": 21 },
+ { "id": "03274", "count_mg": 21 }
+]
+```
+
+A more optimal way to write this query is to join on the subquery and reference the subquery alias in both the SELECT and WHERE clauses. This query is more efficient because you need to execute the subquery only within the join statement, and not in both the projection and filter.
+
+```sql
+SELECT TOP 5 f.id, count_mg
+FROM food f
+JOIN (SELECT VALUE Count(1) FROM n IN f.nutrients WHERE n.units = 'mg') AS count_mg
+WHERE count_mg > 20
+```
+
+## EXISTS expression
+
+Azure Cosmos DB supports EXISTS expressions. This is an aggregate scalar subquery built into the Azure Cosmos DB for NoSQL. EXISTS is a Boolean expression that takes a subquery expression and returns true if the subquery returns any rows. Otherwise, it returns false.
+
+Because the Azure Cosmos DB for NoSQL doesn't differentiate between Boolean expressions and any other scalar expressions, you can use EXISTS in both SELECT and WHERE clauses. This is unlike T-SQL, where a Boolean expression (for example, EXISTS, BETWEEN, and IN) is restricted to the filter.
+
+If the EXISTS subquery returns a single value that's undefined, EXISTS will evaluate to false. For instance, consider the following query that evaluates to false:
+```sql
+SELECT EXISTS (SELECT VALUE undefined)
+```
++
+If the VALUE keyword in the preceding subquery is omitted, the query will evaluate to true:
+```sql
+SELECT EXISTS (SELECT undefined)
+```
+
+The subquery encloses the list of projected values in an object. If the selected list has no values, the subquery returns the single value `{}`. This value is defined, so EXISTS evaluates to true.
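+
+For illustration, and assuming the implicit `$1` alias that Azure Cosmos DB assigns to an unaliased projection, the first query above would return `[{"$1": false}]` and the second would return `[{"$1": true}]`.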
+
+### Example: Rewriting ARRAY_CONTAINS and JOIN as EXISTS
+
+A common use case of ARRAY_CONTAINS is to filter a document by the existence of an item in an array. In this case, we're checking to see if the tags array contains an item named "orange."
+
+```sql
+SELECT TOP 5 f.id, f.tags
+FROM food f
+WHERE ARRAY_CONTAINS(f.tags, {name: 'orange'})
+```
+
+You can rewrite the same query to use EXISTS:
+
+```sql
+SELECT TOP 5 f.id, f.tags
+FROM food f
+WHERE EXISTS(SELECT VALUE t FROM t IN f.tags WHERE t.name = 'orange')
+```
+
+Additionally, ARRAY_CONTAINS can only check if a value is equal to any element within an array. If you need more complex filters on array properties, use JOIN.
+
+Consider the following query that filters based on the units and `nutritionValue` properties in the array:
+
+```sql
+SELECT VALUE c.description
+FROM c
+JOIN n IN c.nutrients
+WHERE n.units= "mg" AND n.nutritionValue > 0
+```
+
+For each of the documents in the collection, a cross-product is performed with its array elements. This JOIN operation makes it possible to filter on properties within the array. However, this query's RU consumption will be significant. For instance, if 1,000 documents had 100 items in each array, it will expand to 1,000 x 100 (that is, 100,000) tuples.
+
+Using EXISTS can help to avoid this expensive cross-product:
+
+```sql
+SELECT VALUE c.description
+FROM c
+WHERE EXISTS(
+ SELECT VALUE n
+ FROM n IN c.nutrients
+ WHERE n.units = "mg" AND n.nutritionValue > 0
+)
+```
+
+In this case, you filter on array elements within the EXISTS subquery. If an array element matches the filter, then you project it and EXISTS evaluates to true.
+
+You can also alias EXISTS and reference it in the projection:
+
+```sql
+SELECT TOP 1 c.description, EXISTS(
+ SELECT VALUE n
+ FROM n IN c.nutrients
+ WHERE n.units = "mg" AND n.nutritionValue > 0) as a
+FROM c
+```
+
+Query output:
+
+```json
+[
+ {
+ "description": "Babyfood, dessert, fruit pudding, orange, strained",
+ "a": true
+ }
+]
+```
+
+## ARRAY expression
+
+You can use the ARRAY expression to project the results of a query as an array. You can use this expression only within the SELECT clause of the query.
+
+```sql
+SELECT TOP 1 f.id, ARRAY(SELECT VALUE t.name FROM t in f.tags) AS tagNames
+FROM food f
+```
+
+Query output:
+
+```json
+[
+ {
+ "id": "03238",
+ "tagNames": [
+ "babyfood",
+ "dessert",
+ "tropical fruit",
+ "junior"
+ ]
+ }
+]
+```
+
+As with other subqueries, filters with the ARRAY expression are possible.
+
+```sql
+SELECT TOP 1 c.id, ARRAY(SELECT VALUE t FROM t in c.tags WHERE t.name != 'infant formula') AS tagNames
+FROM c
+```
+
+Query output:
+
+```json
+[
+ {
+ "id": "03226",
+ "tagNames": [
+ {
+ "name": "babyfood"
+ },
+ {
+ "name": "dessert"
+ },
+ {
+ "name": "fruit pudding"
+ },
+ {
+ "name": "orange"
+ },
+ {
+ "name": "strained"
+ }
+ ]
+ }
+]
+```
+
+Array expressions can also come after the FROM clause in subqueries.
+
+```sql
+SELECT TOP 1 c.id, ARRAY(SELECT VALUE t.name FROM t in c.tags) as tagNames
+FROM c
+JOIN n IN (SELECT VALUE ARRAY(SELECT t FROM t in c.tags WHERE t.name != 'infant formula'))
+```
+
+Query output:
+
+```json
+[
+ {
+ "id": "03238",
+ "tagNames": [
+ "babyfood",
+ "dessert",
+ "tropical fruit",
+ "junior"
+ ]
+ }
+]
+```
+
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Model document data](../../modeling-data.md)
cosmos-db Substring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/substring.md
+
+ Title: SUBSTRING in Azure Cosmos DB query language
+description: Learn about SQL system function SUBSTRING in Azure Cosmos DB.
++++ Last updated : 09/13/2019+++
+# SUBSTRING (Azure Cosmos DB)
+
+ Returns part of a string expression, starting at the specified zero-based character position and continuing up to the specified length, or to the end of the string.
+
+## Syntax
+
+```sql
+SUBSTRING(<str_expr>, <num_expr1>, <num_expr2>)
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression.
+
+*num_expr1*
+ Is a numeric expression to denote the start character. A value of 0 is the first character of *str_expr*.
+
+*num_expr2*
+ Is a numeric expression to denote the maximum number of characters of *str_expr* to be returned. A value of 0 or less results in an empty string.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example returns the substring of "abc" starting at 1 and for a length of 1 character.
+
+```sql
+SELECT SUBSTRING("abc", 1, 1) AS substring
+```
+
+ Here is the result set.
+
+```json
+[{"substring": "b"}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy) if the starting position is `0`.
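+
+For example, a filter like the following sketch starts at position `0` and can therefore take advantage of the range index (the `c.sku` property is illustrative):
+
+```sql
+SELECT c.id
+FROM c
+WHERE SUBSTRING(c.sku, 0, 3) = "ABC"
+```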
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db System Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/system-functions.md
+
+ Title: System functions in Azure Cosmos DB query language
+description: Learn about built-in and user defined SQL system functions in Azure Cosmos DB.
++++ Last updated : 02/03/2021+++
+# System functions (Azure Cosmos DB)
+
+ Azure Cosmos DB provides many built-in SQL functions. The categories of built-in functions are listed below.
+
+|Function group|Description|Operations|
+|--|--|--|
+|[Array functions](array-functions.md)|The array functions perform an operation on an array input value and return numeric, Boolean, or array value. | [ARRAY_CONCAT](array-concat.md), [ARRAY_CONTAINS](array-contains.md), [ARRAY_LENGTH](array-length.md), [ARRAY_SLICE](array-slice.md) |
+|[Date and Time functions](date-time-functions.md)|The date and time functions allow you to get the current UTC date and time in two forms: a numeric timestamp whose value is the number of milliseconds since the Unix epoch, or a string that conforms to the ISO 8601 format. | [GetCurrentDateTime](getcurrentdatetime.md), [GetCurrentTimestamp](getcurrenttimestamp.md), [GetCurrentTicks](getcurrentticks.md) |
+|[Mathematical functions](mathematical-functions.md)|The mathematical functions each perform a calculation, usually based on input values that are provided as arguments, and return a numeric value. | [ABS](abs.md), [ACOS](acos.md), [ASIN](asin.md), [ATAN](atan.md), [ATN2](atn2.md), [CEILING](ceiling.md), [COS](cos.md), [COT](cot.md), [DEGREES](degrees.md), [EXP](exp.md), [FLOOR](floor.md), [LOG](log.md), [LOG10](log10.md), [PI](pi.md), [POWER](power.md), [RADIANS](radians.md), [RAND](rand.md), [ROUND](round.md), [SIGN](sign.md), [SIN](sin.md), [SQRT](sqrt.md), [SQUARE](square.md), [TAN](tan.md), [TRUNC](trunc.md) |
+|[Spatial functions](spatial-functions.md)|The spatial functions perform an operation on a spatial object input value and return a numeric or Boolean value. | [ST_DISTANCE](st-distance.md), [ST_INTERSECTS](st-intersects.md), [ST_ISVALID](st-isvalid.md), [ST_ISVALIDDETAILED](st-isvaliddetailed.md), [ST_WITHIN](st-within.md) |
+|[String functions](string-functions.md)|The string functions perform an operation on a string input value and return a string, numeric or Boolean value. | [CONCAT](concat.md), [CONTAINS](contains.md), [ENDSWITH](endswith.md), [INDEX_OF](index-of.md), [LEFT](left.md), [LENGTH](length.md), [LOWER](lower.md), [LTRIM](ltrim.md), [REGEXMATCH](regexmatch.md), [REPLACE](replace.md), [REPLICATE](replicate.md), [REVERSE](reverse.md), [RIGHT](right.md), [RTRIM](rtrim.md), [STARTSWITH](startswith.md), [StringToArray](stringtoarray.md), [StringToBoolean](stringtoboolean.md), [StringToNull](stringtonull.md), [StringToNumber](stringtonumber.md), [StringToObject](stringtoobject.md), [SUBSTRING](substring.md), [ToString](tostring.md), [TRIM](trim.md), [UPPER](upper.md) |
+|[Type checking functions](type-checking-functions.md)|The type checking functions allow you to check the type of an expression within SQL queries. | [IS_ARRAY](is-array.md), [IS_BOOL](is-bool.md), [IS_DEFINED](is-defined.md), [IS_NULL](is-null.md), [IS_NUMBER](is-number.md), [IS_OBJECT](is-object.md), [IS_PRIMITIVE](is-primitive.md), [IS_STRING](is-string.md) |
+
+## Built-in versus User Defined Functions (UDFs)
+
+If you're currently using a user-defined function (UDF) for which a built-in function is now available, the corresponding built-in function will be quicker to run and more efficient.
+
+## Built-in versus ANSI SQL functions
+
+The main difference between Azure Cosmos DB functions and ANSI SQL functions is that Azure Cosmos DB functions are designed to work well with schemaless and mixed-schema data. For example, if a property is missing or has a non-numeric value like `undefined`, the item is skipped instead of returning an error.
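+
+For example, in the following sketch (the `c.middleName` property is illustrative), items where the property is missing or isn't a string evaluate to `undefined` in the filter and are simply skipped rather than causing an error:
+
+```sql
+SELECT c.id
+FROM c
+WHERE UPPER(c.middleName) = "JAY"
+```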
+
+## Next steps
+
+- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [Array functions](array-functions.md)
+- [Date and time functions](date-time-functions.md)
+- [Mathematical functions](mathematical-functions.md)
+- [Spatial functions](spatial-functions.md)
+- [String functions](string-functions.md)
+- [Type checking functions](type-checking-functions.md)
+- [User Defined Functions](udfs.md)
+- [Aggregates](aggregate-functions.md)
cosmos-db Tan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/tan.md
+
+ Title: TAN in Azure Cosmos DB query language
+description: Learn about SQL system function TAN in Azure Cosmos DB.
++++ Last updated : 03/04/2020+++
+# TAN (Azure Cosmos DB)
+
+ Returns the tangent of the specified angle, in radians, in the specified expression.
+
+## Syntax
+
+```sql
+TAN (<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example calculates the tangent of PI()/2.
+
+```sql
+SELECT TAN(PI()/2) AS tan
+```
+
+ Here is the result set.
+
+```json
+[{"tan": 16331239353195370 }]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Ternary Coalesce Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/ternary-coalesce-operators.md
+
+ Title: Ternary and coalesce operators in Azure Cosmos DB
+description: Learn about SQL ternary and coalesce operators supported by Azure Cosmos DB.
+++++ Last updated : 01/07/2022+++
+# Ternary and coalesce operators in Azure Cosmos DB
+
+This article details the ternary and coalesce operators supported by Azure Cosmos DB.
+
+## Understanding ternary and coalesce operators
+
+You can use the Ternary (?) and Coalesce (??) operators to build conditional expressions, as in programming languages like C# and JavaScript.
+
+You can use the ? operator to construct new JSON properties on the fly. For example, the following query classifies grade levels into `elementary` or `other`:
+
+```sql
+ SELECT (c.grade < 5)? "elementary": "other" AS gradeLevel
+ FROM Families.children[0] c
+```
+
+You can also nest calls to the ? operator, as in the following query:
+
+```sql
+ SELECT (c.grade < 5)? "elementary": ((c.grade < 9)? "junior": "high") AS gradeLevel
+ FROM Families.children[0] c
+```
+
+As with other query operators, the ? operator excludes items if the referenced properties are missing or the types being compared are different.
+
+Use the ?? operator to efficiently check for a property in an item when querying against semi-structured or mixed-type data. For example, the following query returns `lastName` if present, or `surname` if `lastName` isn't present.
+
+```sql
+ SELECT f.lastName ?? f.surname AS familyName
+ FROM Families f
+```
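+
+As a sketch, you can also combine the two operators, for example to substitute a default value before a comparison when a numeric property might be missing:
+
+```sql
+     SELECT ((c.grade ?? 0) < 5)? "elementary": "other" AS gradeLevel
+     FROM Families.children[0] c
+```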
+
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Keywords](keywords.md)
+- [SELECT clause](select.md)
cosmos-db Tickstodatetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/tickstodatetime.md
+
+ Title: TicksToDateTime in Azure Cosmos DB query language
+description: Learn about SQL system function TicksToDateTime in Azure Cosmos DB.
++++ Last updated : 08/18/2020++++
+# TicksToDateTime (Azure Cosmos DB)
+
+Converts the specified ticks value to a DateTime.
+
+## Syntax
+
+```sql
+TicksToDateTime (<Ticks>)
+```
+
+## Arguments
+
+*Ticks*
+
+A signed numeric value, the current number of 100 nanosecond ticks that have elapsed since the Unix epoch. In other words, it is the number of 100 nanosecond ticks that have elapsed since 00:00:00 Thursday, 1 January 1970.
+
+## Return types
+
+Returns the UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+ For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+
+## Remarks
+
+TicksToDateTime will return `undefined` if the ticks value specified is invalid.
+
+## Examples
+
+The following example converts the ticks to a DateTime:
+
+```sql
+SELECT TicksToDateTime(15943368134575530) AS DateTime
+```
+
+```json
+[
+ {
+ "DateTime": "2020-07-09T23:20:13.4575530Z"
+ }
+]
+```
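+
+As a rough sanity check on the conversion, 15943368134575530 ticks / 10,000,000 is about 1,594,336,813.46 seconds after the Unix epoch, which lines up with the `2020-07-09T23:20:13.4575530Z` result above.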
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](date-time-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Timestamptodatetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/timestamptodatetime.md
+
+ Title: TimestampToDateTime in Azure Cosmos DB query language
+description: Learn about SQL system function TimestampToDateTime in Azure Cosmos DB.
++++ Last updated : 08/18/2020++++
+# TimestampToDateTime (Azure Cosmos DB)
+
+Converts the specified timestamp value to a DateTime.
+
+## Syntax
+
+```sql
+TimestampToDateTime (<Timestamp>)
+```
+
+## Arguments
+
+*Timestamp*
+
+A signed numeric value, the current number of milliseconds that have elapsed since the Unix epoch. In other words, the number of milliseconds that have elapsed since 00:00:00 Thursday, 1 January 1970.
+
+## Return types
+
+Returns the UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+ For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+
+## Remarks
+
+TimestampToDateTime will return `undefined` if the timestamp value specified is invalid.
+
+## Examples
+
+The following example converts the timestamp to a DateTime:
+
+```sql
+SELECT TimestampToDateTime(1594227912345) AS DateTime
+```
+
+```json
+[
+ {
+ "DateTime": "2020-07-08T17:05:12.3450000Z"
+ }
+]
+```
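+
+As a rough sanity check on the conversion, 1594227912345 milliseconds / 1,000 is 1,594,227,912.345 seconds after the Unix epoch, which lines up with the `2020-07-08T17:05:12.3450000Z` result above.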
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](date-time-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Tostring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/tostring.md
+
+ Title: ToString in Azure Cosmos DB query language
+description: Learn about SQL system function ToString in Azure Cosmos DB.
++++ Last updated : 03/04/2020+++
+# ToString (Azure Cosmos DB)
+
+ Returns a string representation of a scalar expression.
+
+## Syntax
+
+```sql
+ToString(<expr>)
+```
+
+## Arguments
+
+*expr*
+ Is any scalar expression.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example shows how `ToString` behaves across different types.
+
+```sql
+SELECT
+ ToString(1.0000) AS str1,
+ ToString("Hello World") AS str2,
+ ToString(NaN) AS str3,
+ ToString(Infinity) AS str4,
+ ToString(IS_STRING(ToString(undefined))) AS str5,
+ ToString(0.1234) AS str6,
+ ToString(false) AS str7,
+ ToString(undefined) AS str8
+```
+
+ Here is the result set.
+
+```json
+[{"str1": "1", "str2": "Hello World", "str3": "NaN", "str4": "Infinity", "str5": "false", "str6": "0.1234", "str7": "false"}]
+```
+ Given the following input:
+```json
+{"Products":[{"ProductID":1,"Weight":4,"WeightUnits":"lb"},{"ProductID":2,"Weight":32,"WeightUnits":"kg"},{"ProductID":3,"Weight":400,"WeightUnits":"g"},{"ProductID":4,"Weight":8999,"WeightUnits":"mg"}]}
+```
+ The following example shows how `ToString` can be used with other string functions like `CONCAT`.
+
+```sql
+SELECT
+CONCAT(ToString(p.Weight), p.WeightUnits)
+FROM p in c.Products
+```
+
+Here is the result set.
+
+```json
+[{"$1": "4lb"},
+{"$1": "32kg"},
+{"$1": "400g"},
+{"$1": "8999mg"}]
+```
+Given the following input.
+```json
+{"id":"08259","description":"Cereals ready-to-eat, KELLOGG, KELLOGG'S CRISPIX","nutrients":[{"id":"305","description":"Caffeine","units":"mg"},{"id":"306","description":"Cholesterol, HDL","nutritionValue":30,"units":"mg"},{"id":"307","description":"Sodium, NA","nutritionValue":612,"units":"mg"},{"id":"308","description":"Protein, ABP","nutritionValue":60,"units":"mg"},{"id":"309","description":"Zinc, ZN","nutritionValue":null,"units":"mg"}]}
+```
+The following example shows how `ToString` can be used with other string functions like `REPLACE`.
+```sql
+SELECT
+ n.id AS nutrientID,
+ REPLACE(ToString(n.nutritionValue), "6", "9") AS nutritionVal
+FROM food
+JOIN n IN food.nutrients
+```
+Here is the result set:
+
+```json
+[{"nutrientID":"305"},
+{"nutrientID":"306","nutritionVal":"30"},
+{"nutrientID":"307","nutritionVal":"912"},
+{"nutrientID":"308","nutritionVal":"90"},
+{"nutrientID":"309","nutritionVal":"null"}]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Trim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/trim.md
+
+ Title: TRIM in Azure Cosmos DB query language
+description: Learn about SQL system function TRIM in Azure Cosmos DB.
++++ Last updated : 09/14/2021+++
+# TRIM (Azure Cosmos DB)
+
+Returns a string expression after it removes leading and trailing whitespace or specified characters.
+
+## Syntax
+
+```sql
+TRIM(<str_expr1>[, <str_expr2>])
+```
+
+## Arguments
+
+*str_expr1*
+ Is a string expression
+
+*str_expr2*
+ Is an optional string expression to be trimmed from str_expr1. If not set, the default is whitespace.
+
+## Return types
+
+ Returns a string expression.
+
+## Examples
+
+ The following example shows how to use `TRIM` inside a query.
+
+```sql
+SELECT TRIM(" abc") AS t1,
+TRIM(" abc ") AS t2,
+TRIM("abc ") AS t3,
+TRIM("abc") AS t4,
+TRIM("abc", "ab") AS t5,
+TRIM("abc", "abc") AS t6
+```
+
+ Here is the result set.
+
+```json
+[
+ {
+ "t1": "abc",
+ "t2": "abc",
+ "t3": "abc",
+ "t4": "abc",
+ "t5": "c",
+ "t6": ""
+ }
+]
+```
+
+## Remarks
+
+This system function will not utilize the index.
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Trunc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/trunc.md
+
+ Title: TRUNC in Azure Cosmos DB query language
+description: Learn about SQL system function TRUNC in Azure Cosmos DB.
++++ Last updated : 06/22/2021+++
+# TRUNC (Azure Cosmos DB)
+
+ Returns a numeric value, truncated toward zero; the fractional part is removed rather than rounded to the closest integer.
+
+## Syntax
+
+```sql
+TRUNC(<numeric_expr>)
+```
+
+## Arguments
+
+*numeric_expr*
+ Is a numeric expression.
+
+## Return types
+
+ Returns a numeric expression.
+
+## Examples
+
+ The following example truncates the following positive and negative numbers by dropping their fractional parts.
+
+```sql
+SELECT TRUNC(2.4) AS t1, TRUNC(2.6) AS t2, TRUNC(2.5) AS t3, TRUNC(-2.4) AS t4, TRUNC(-2.6) AS t5
+```
+
+ Here is the result set.
+
+```json
+[{t1: 2, t2: 2, t3: 2, t4: -2, t5: -2}]
+```
+
+## Remarks
+
+This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+
+## Next steps
+
+- [Mathematical functions Azure Cosmos DB](mathematical-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Type Checking Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/type-checking-functions.md
+
+ Title: Type checking functions in Azure Cosmos DB query language
+description: Learn about type checking SQL system functions in Azure Cosmos DB.
++++ Last updated : 05/26/2021++++
+# Type checking functions (Azure Cosmos DB)
+
+The type-checking functions let you check the type of an expression within a SQL query. You can use type-checking functions to determine the types of properties within items on the fly, when they're variable or unknown.
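+
+For example, here's a sketch (the `price` property is illustrative) that normalizes a property which might be stored as either a number or a string:
+
+```sql
+SELECT c.id,
+    (IS_NUMBER(c.price) ? c.price : StringToNumber(c.price)) AS normalizedPrice
+FROM c
+```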
+
+## Functions
+
+The following functions support type checking against input values, and each return a Boolean value. The **index usage** column assumes, where applicable, that you're comparing the type checking functions to another value with an equality filter.
+
+| System function | Index usage | [Index usage in queries with scalar aggregate functions](../../index-overview.md#index-utilization-for-scalar-aggregate-functions) | Remarks |
+| --- | --- | --- | --- |
+| [IS_ARRAY](is-array.md) | Full scan | Full scan | |
+| [IS_BOOL](is-bool.md) | Index seek | Index seek | |
+| [IS_DEFINED](is-defined.md) | Index seek | Index seek | |
+| [IS_NULL](is-null.md) | Index seek | Index seek | |
+| [IS_NUMBER](is-number.md) | Index seek | Index seek | |
+| [IS_OBJECT](is-object.md) | Full scan | Full scan | |
+| [IS_PRIMITIVE](is-primitive.md) | Index seek | Index seek | |
+| [IS_STRING](is-string.md) | Index seek | Index seek | |
+
+## Next steps
+
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [User Defined Functions](udfs.md)
+- [Aggregates](aggregate-functions.md)
cosmos-db Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/udfs.md
+
+ Title: User-defined functions (UDFs) in Azure Cosmos DB
+description: Learn about User-defined functions in Azure Cosmos DB.
++++ Last updated : 04/09/2020+++++
+# User-defined functions (UDFs) in Azure Cosmos DB
+
+The API for NoSQL provides support for user-defined functions (UDFs). With scalar UDFs, you can pass in zero or many arguments and return a single result. The API checks that each argument is a legal JSON value.
+
+## UDF use cases
+
+The API extends the SQL syntax to support custom application logic using UDFs. You can register UDFs with the API for NoSQL, and reference them in SQL queries. Unlike stored procedures and triggers, UDFs are read-only.
+
+Using UDFs, you can extend Azure Cosmos DB's query language. UDFs are a great way to express complex business logic in a query's projection.
+
+However, we recommend avoiding UDFs when:
+
+- An equivalent [system function](system-functions.md) already exists in Azure Cosmos DB. System functions always use fewer RUs than the equivalent UDF.
+- The UDF is the only filter in the `WHERE` clause of your query. UDFs don't use the index, so evaluating the UDF requires loading documents. Adding filter predicates that do use the index alongside the UDF in the `WHERE` clause reduces the number of documents the UDF has to process.
+
+If you must use the same UDF multiple times in a query, you should reference the UDF in a [subquery](subquery.md#evaluate-once-and-reference-many-times), allowing you to use a JOIN expression to evaluate the UDF once but reference it many times.
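+
+For illustration, here's a rough sketch of that pattern with the .NET SDK; the document properties and the `GET_TAX` UDF name are hypothetical, not part of this article:
+
+```csharp
+using Microsoft.Azure.Cosmos;
+
+// The JOIN subquery evaluates udf.GET_TAX once per document; the alias "taxed" can then
+// be referenced in the projection (or in filters) without re-running the UDF.
+QueryDefinition query = new QueryDefinition(@"
+    SELECT c.id, c.price, taxed
+    FROM c
+    JOIN (SELECT VALUE udf.GET_TAX(c.price)) taxed");
+```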
+
+## Examples
+
+The following example registers a UDF under an item container in the Azure Cosmos DB database. The example creates a UDF whose name is `REGEX_MATCH`. It accepts two JSON string values, `input` and `pattern`, and checks if the first matches the pattern specified in the second using JavaScript's `string.match()` function.
+
+```csharp
+ UserDefinedFunction regexMatchUdf = new UserDefinedFunction
+ {
+ Id = "REGEX_MATCH",
+ Body = @"function (input, pattern) {
+ return input.match(pattern) !== null;
+ };",
+ };
+
+ UserDefinedFunction createdUdf = client.CreateUserDefinedFunctionAsync(
+ UriFactory.CreateDocumentCollectionUri("myDatabase", "families"),
+ regexMatchUdf).Result;
+```
+
+Now, use this UDF in a query projection. You must qualify UDFs with the case-sensitive prefix `udf.` when calling them from within queries.
+
+```sql
+ SELECT udf.REGEX_MATCH(Families.address.city, ".*eattle")
+ FROM Families
+```
+
+The results are:
+
+```json
+ [
+ {
+ "$1": true
+ },
+ {
+ "$1": false
+ }
+ ]
+```
+
+You can use the UDF qualified with the `udf.` prefix inside a filter, as in the following example:
+
+```sql
+ SELECT Families.id, Families.address.city
+ FROM Families
+ WHERE udf.REGEX_MATCH(Families.address.city, ".*eattle")
+```
+
+The results are:
+
+```json
+ [{
+ "id": "AndersenFamily",
+ "city": "Seattle"
+ }]
+```
+
+In essence, UDFs are valid scalar expressions that you can use in both projections and filters.
+
+To expand on the power of UDFs, look at another example with conditional logic:
+
+```csharp
+ UserDefinedFunction seaLevelUdf = new UserDefinedFunction()
+ {
+ Id = "SEALEVEL",
+ Body = @"function(city) {
+ switch (city) {
+ case 'Seattle':
+ return 520;
+ case 'NY':
+ return 410;
+ case 'Chicago':
+ return 673;
+ default:
+ return -1;
+            }
+        };"
+ };
+
+ UserDefinedFunction createdUdf = await client.CreateUserDefinedFunctionAsync(
+ UriFactory.CreateDocumentCollectionUri("myDatabase", "families"),
+ seaLevelUdf);
+```
+
+The following example exercises the UDF:
+
+```sql
+ SELECT f.address.city, udf.SEALEVEL(f.address.city) AS seaLevel
+ FROM Families f
+```
+
+The results are:
+
+```json
+ [
+ {
+ "city": "Seattle",
+ "seaLevel": 520
+ },
+ {
+ "city": "NY",
+ "seaLevel": 410
+ }
+ ]
+```
+
+If the properties referred to by the UDF parameters aren't available in the JSON value, the parameter is considered undefined and the UDF invocation is skipped. Similarly, if the result of the UDF is undefined, it's not included in the result.
+
+As the preceding examples show, UDFs integrate the power of JavaScript language with the API for NoSQL. UDFs provide a rich programmable interface to do complex procedural, conditional logic with the help of built-in JavaScript runtime capabilities. The API for NoSQL provides the arguments to the UDFs for each source item at the current WHERE or SELECT clause stage of processing. The result is seamlessly incorporated in the overall execution pipeline. In summary, UDFs are great tools to do complex business logic as part of queries.
+
+## Next steps
+
+- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [System functions](system-functions.md)
+- [Aggregates](aggregate-functions.md)
cosmos-db Upper https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/upper.md
+
+ Title: UPPER in Azure Cosmos DB query language
+description: Learn about SQL system function UPPER in Azure Cosmos DB.
++++ Last updated : 04/08/2021+++
+# UPPER (Azure Cosmos DB)
+
+Returns a string expression after converting lowercase character data to uppercase.
+
+> [!NOTE]
+> This function uses culture-independent (invariant) casing rules when returning the converted string expression.
+
+The UPPER system function doesn't use the index. If you plan to do frequent case-insensitive comparisons, the UPPER system function may consume a significant number of RUs. Instead of using the UPPER system function to normalize data each time for comparisons, you can normalize the casing upon insertion. Then a query such as `SELECT * FROM c WHERE UPPER(c.name) = 'USERNAME'` simply becomes `SELECT * FROM c WHERE c.name = 'USERNAME'`.
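+
+For example, here's a minimal sketch of normalizing at write time with the .NET SDK; the `UserItem` type and the `nameNormalized` property are illustrative assumptions:
+
+```csharp
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+
+public record UserItem(string id, string name, string nameNormalized);
+
+public static class UserWriter
+{
+    // Persist an upper-cased copy once, at write time, so later lookups can use an
+    // equality filter (c.nameNormalized = 'USERNAME') that is served by the index.
+    public static Task SaveAsync(Container container, string id, string name) =>
+        container.UpsertItemAsync(
+            new UserItem(id, name, name.ToUpperInvariant()),
+            new PartitionKey(id));
+}
+```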
+
+## Syntax
+
+```sql
+UPPER(<str_expr>)
+```
+
+## Arguments
+
+*str_expr*
+ Is a string expression.
+
+## Return types
+
+Returns a string expression.
+
+## Examples
+
+The following example shows how to use `UPPER` in a query.
+
+```sql
+SELECT UPPER("Abc") AS upper
+```
+
+Here's the result set.
+
+```json
+[{"upper": "ABC"}]
+```
+
+## Remarks
+
+This system function won't [use indexes](../../index-overview.md#index-usage).
+
+## Next steps
+
+- [String functions Azure Cosmos DB](string-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db Where https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/where.md
+
+ Title: WHERE clause in Azure Cosmos DB
+description: Learn about SQL WHERE clause for Azure Cosmos DB
+++++ Last updated : 03/06/2020+++
+# WHERE clause in Azure Cosmos DB
+
+The optional WHERE clause (`WHERE <filter_condition>`) specifies condition(s) that the source JSON items must satisfy for the query to include them in results. A JSON item must evaluate the specified conditions to `true` to be considered for the result. The index layer uses the WHERE clause to determine the smallest subset of source items that can be part of the result.
+
+## Syntax
+
+```sql
+WHERE <filter_condition>
+<filter_condition> ::= <scalar_expression>
+
+```
+
+## Arguments
+
+- `<filter_condition>`
+
+ Specifies the condition to be met for the documents to be returned.
+
+- `<scalar_expression>`
+
+ Expression representing the value to be computed. See [Scalar expressions](scalar-expressions.md) for details.
+
+## Remarks
+
+ For a document to be returned, the expression specified as the filter condition must evaluate to `true`. Only the Boolean value `true` satisfies the condition; any other value (undefined, null, false, a Number, an Array, or an Object) doesn't.
+
+ If you include your partition key in the `WHERE` clause as part of an equality filter, your query automatically filters to only the relevant partitions.
+
+## Examples
+
+The following query requests items that contain an `id` property whose value is `AndersenFamily`. It excludes any item that does not have an `id` property or whose value doesn't match `AndersenFamily`.
+
+```sql
+ SELECT f.address
+ FROM Families f
+ WHERE f.id = "AndersenFamily"
+```
+
+The results are:
+
+```json
+ [{
+ "address": {
+ "state": "WA",
+ "county": "King",
+ "city": "Seattle"
+ }
+ }]
+```
+
+### Scalar expressions in the WHERE clause
+
+The previous example showed a simple equality query. The API for NoSQL also supports various [scalar expressions](scalar-expressions.md). The most commonly used are binary and unary expressions. Property references from the source JSON object are also valid expressions.
+
+You can use the following supported binary operators:
+
+|**Operator type** | **Values** |
+|||
+|Arithmetic | +,-,*,/,% |
+|Bitwise | \|, &, ^, <<, >>, >>> (zero-fill right shift) |
+|Logical | AND, OR, NOT |
+|Comparison | =, !=, <, >, <=, >=, <> |
+|String | \|\| (concatenate) |
+
+The following queries use binary operators:
+
+```sql
+ SELECT *
+ FROM Families.children[0] c
+ WHERE c.grade % 2 = 1 -- matching grades == 5, 1
+
+ SELECT *
+ FROM Families.children[0] c
+ WHERE c.grade ^ 4 = 1 -- matching grades == 5
+
+ SELECT *
+ FROM Families.children[0] c
+ WHERE c.grade >= 5 -- matching grades == 5
+```
+
+You can also use the unary operators +, -, ~, and NOT in queries, as shown in the following examples:
+
+```sql
+ SELECT *
+ FROM Families.children[0] c
+ WHERE NOT(c.grade = 5) -- matching grades == 1
+
+ SELECT *
+ FROM Families.children[0] c
+ WHERE (-c.grade = -5) -- matching grades == 5
+```
+
+You can also use property references in queries. For example, `SELECT * FROM Families f WHERE f.isRegistered` returns the JSON item containing the property `isRegistered` with value equal to `true`. Any other value, such as `false`, `null`, `Undefined`, `<number>`, `<string>`, `<object>`, or `<array>`, excludes the item from the result. Additionally, you can use the `IS_DEFINED` type checking function to query based on the presence or absence of a given JSON property. For instance, `SELECT * FROM Families f WHERE NOT IS_DEFINED(f.isRegistered)` returns any JSON item that does not have a value for `isRegistered`.
+
+## Next steps
+
+- [Getting started](getting-started.md)
+- [IN keyword](keywords.md#in)
+- [FROM clause](from.md)
cosmos-db Working With Dates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/working-with-dates.md
+
+ Title: Working with dates in Azure Cosmos DB
+description: Learn how to store, index, and query DateTime objects in Azure Cosmos DB
++++++ Last updated : 04/03/2020
+ms.devlang: csharp
++
+# Working with Dates in Azure Cosmos DB
+
+Azure Cosmos DB delivers schema flexibility and rich indexing via a native [JSON](https://www.json.org) data model. All Azure Cosmos DB resources, including databases, containers, documents, and stored procedures, are modeled and stored as JSON documents. As a requirement for being portable, JSON (and Azure Cosmos DB) supports only a small set of basic types: String, Number, Boolean, Array, Object, and Null. However, JSON is flexible and allows developers and frameworks to represent more complex types by composing these primitives as objects or arrays.
+
+In addition to the basic types, many applications need the DateTime type to represent dates and timestamps. This article describes how developers can store, retrieve, and query dates in Azure Cosmos DB using the .NET SDK.
+
+## Storing DateTimes
+
+Azure Cosmos DB supports JSON types such as string, number, boolean, null, array, and object; it doesn't directly support a DateTime type. Azure Cosmos DB also doesn't currently support localization of dates, so you need to store DateTimes as strings. The recommended format for DateTime strings in Azure Cosmos DB is `yyyy-MM-ddTHH:mm:ss.fffffffZ`, which follows the ISO 8601 UTC standard. We recommend storing all dates in Azure Cosmos DB as UTC; converting date strings to this format allows sorting dates lexicographically. If you store non-UTC dates, you must handle the logic on the client side: store the offset as a property in the JSON so the client can use it to compute the UTC DateTime value.
+
+Range queries with DateTime strings as filters are only supported if the DateTime strings are all in UTC and the same length. In Azure Cosmos DB, the [GetCurrentDateTime](getcurrentdatetime.md) system function will return the current UTC date and time ISO 8601 string value in the format: `yyyy-MM-ddTHH:mm:ss.fffffffZ`.
+
+Most applications can use the default string representation for DateTime for the following reasons:
+
+* Strings can be compared, and the relative ordering of the DateTime values is preserved when they are transformed to strings.
+* This approach doesn't require any custom code or attributes for JSON conversion.
+* The dates as stored in JSON are human readable.
+* This approach can take advantage of Azure Cosmos DB's index for fast query performance.
+
+For example, the following snippet stores an `Order` object containing two DateTime properties, `ShipDate` and `OrderDate`, as a document using the .NET SDK:
+
+```csharp
+ public class Order
+ {
+ [JsonProperty(PropertyName="id")]
+ public string Id { get; set; }
+ public DateTime OrderDate { get; set; }
+ public DateTime ShipDate { get; set; }
+ public double Total { get; set; }
+ }
+
+ await container.CreateItemAsync(
+ new Order
+ {
+ Id = "09152014101",
+ OrderDate = DateTime.UtcNow.AddDays(-30),
+ ShipDate = DateTime.UtcNow.AddDays(-14),
+ Total = 113.39
+ });
+```
+
+This document is stored in Azure Cosmos DB as follows:
+
+```json
+ {
+ "id": "09152014101",
+ "OrderDate": "2014-09-15T23:14:25.7251173Z",
+ "ShipDate": "2014-09-30T23:14:25.7251173Z",
+ "Total": 113.39
+ }
+```
+
+Alternatively, you can store DateTimes as Unix timestamps, that is, as a number representing the number of elapsed seconds since January 1, 1970. Azure Cosmos DB's internal Timestamp (`_ts`) property follows this approach. You can use the [UnixDateTimeConverter](/dotnet/api/microsoft.azure.documents.unixdatetimeconverter) class to serialize DateTimes as numbers.
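+
+For example, here's a minimal sketch of opting a property into Unix-timestamp serialization, assuming the `UnixDateTimeConverter` type linked above is available:
+
+```csharp
+using System;
+using Microsoft.Azure.Documents;   // UnixDateTimeConverter
+using Newtonsoft.Json;
+
+public class OrderWithUnixDate
+{
+    [JsonProperty(PropertyName = "id")]
+    public string Id { get; set; }
+
+    // Serialized as seconds since the Unix epoch (January 1, 1970) instead of an ISO 8601 string.
+    [JsonConverter(typeof(UnixDateTimeConverter))]
+    public DateTime OrderDate { get; set; }
+}
+```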
+
+## Querying DateTimes in LINQ
+
+The SQL .NET SDK automatically supports querying data stored in Azure Cosmos DB via LINQ. For example, the following snippet shows a LINQ query that filters orders that were shipped in the last three days:
+
+```csharp
+ IQueryable<Order> orders = container.GetItemLinqQueryable<Order>(allowSynchronousQueryExecution: true).Where(o => o.ShipDate >= DateTime.UtcNow.AddDays(-3));
+```
+
+This LINQ query is translated to the following SQL statement and executed on Azure Cosmos DB:
+
+```sql
+ SELECT * FROM root WHERE (root["ShipDate"] >= "2014-09-30T23:14:25.7251173Z")
+```
+
+You can learn more about Azure Cosmos DB's SQL query language and the LINQ provider at [Querying Azure Cosmos DB in LINQ](linq-to-sql.md).
+
+## Indexing DateTimes for range queries
+
+Queries that filter on DateTime values are common. To execute these queries efficiently, you must have an index defined on any properties in the query's filter.
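+
+As a rough illustration only (the paths below are assumptions based on the earlier `Order` example, and the default indexing policy already indexes all paths), an explicit policy that keeps range indexes on the DateTime properties could be declared with the v3 .NET SDK like this:
+
+```csharp
+using Microsoft.Azure.Cosmos;
+
+// Assumes an existing Database instance named "database".
+ContainerProperties properties = new ContainerProperties(id: "orders", partitionKeyPath: "/id");
+
+// Range indexes on these paths let filters such as ShipDate >= "..." use the index.
+properties.IndexingPolicy.IncludedPaths.Clear();
+properties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/OrderDate/?" });
+properties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/ShipDate/?" });
+properties.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/*" });
+
+Container container = await database.CreateContainerIfNotExistsAsync(properties);
+```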
+
+You can learn more about how to configure indexing policies at [Azure Cosmos DB Indexing Policies](../../index-policy.md).
+
+## Next Steps
+
+* Download and run the [Code samples on GitHub](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples)
+* Learn more about [SQL queries](getting-started.md)
+* Learn more about [Azure Cosmos DB Indexing Policies](../../index-policy.md)
cosmos-db Working With Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/working-with-json.md
+
+ Title: Working with JSON in Azure Cosmos DB
+description: Learn how to query and access nested JSON properties and use special characters in Azure Cosmos DB
+++++ Last updated : 09/19/2020++++
+# Working with JSON in Azure Cosmos DB
+
+In Azure Cosmos DB's API for NoSQL, items are stored as JSON. The type system and expressions are restricted to deal only with JSON types. For more information, see the [JSON specification](https://www.json.org/).
+
+We'll summarize some important aspects of working with JSON:
+
+- JSON objects always begin with a `{` left brace and end with a `}` right brace
+- You can have JSON properties [nested](#nested-properties) within one another
+- JSON property values can be arrays
+- JSON property names are case sensitive
+- JSON property names can be any string value (including spaces or characters that aren't letters)
+
+## Nested properties
+
+You can access nested JSON using a dot accessor. You can use nested JSON properties in your queries the same way that you can use any other properties.
+
+Here's a document with nested JSON:
+
+```JSON
+{
+ "id": "AndersenFamily",
+ "lastName": "Andersen",
+ "address": {
+ "state": "WA",
+ "county": "King",
+ "city": "Seattle"
+ },
+ "creationDate": 1431620472,
+ "isRegistered": true
+}
+```
+
+In this case, the `state`, `county`, and `city` properties are all nested within the `address` property.
+
+The following example projects two nested properties: `f.address.state` and `f.address.city`.
+
+```sql
+ SELECT f.address.state, f.address.city
+ FROM Families f
+ WHERE f.id = "AndersenFamily"
+```
+
+The results are:
+
+```json
+ [{
+ "state": "WA",
+ "city": "Seattle"
+ }]
+```
+
+## Working with arrays
+
+In addition to nested properties, JSON also supports arrays.
+
+Here's an example document with an array:
+
+```json
+{
+ "id": "WakefieldFamily",
+ "children": [
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+      "grade": 1
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }
+  ]
+}
+```
+
+When working with arrays, you can access a specific element within the array by referencing its position:
+
+```sql
+SELECT *
+FROM Families f
+WHERE f.children[0].givenName = "Jesse"
+```
+
+In most cases, however, you'll use a [subquery](subquery.md) or [self-join](join.md) when working with arrays.
+
+For example, here's a document that shows the daily balance of a customer's bank account.
+
+```json
+{
+ "id": "Contoso-Checking-Account-2020",
+ "balance": [
+ {
+ "checkingAccount": 1000,
+ "savingsAccount": 5000
+ },
+ {
+ "checkingAccount": 100,
+ "savingsAccount": 5000
+ },
+ {
+ "checkingAccount": -10,
+ "savingsAccount": 5000
+ },
+ {
+ "checkingAccount": 5000,
+ "savingsAccount": 5000
+ }
+
+ ]
+}
+```
+
+If you wanted to run a query that showed all the customers that had a negative balance at some point, you could use [EXISTS](subquery.md#exists-expression) with a subquery:
+
+```sql
+SELECT c.id
+FROM c
+WHERE EXISTS(
+ SELECT VALUE n
+ FROM n IN c.balance
+ WHERE n.checkingAccount < 0
+)
+```
+
+## Difference between null and undefined
+
+If a property is not defined in an item, then its value is `undefined`. A property with the value `null` must be explicitly defined and assigned a `null` value.
+
+For example, consider this sample item:
+
+```json
+{
+ "id": "AndersenFamily",
+ "lastName": "Andersen",
+ "address": {
+ "state": "WA",
+ "county": "King",
+ "city": "Seattle"
+ },
+ "creationDate": null
+}
+```
+
+In this example, the property `isRegistered` has a value of `undefined` because it is omitted from the item. The property `creationDate` has a `null` value.
+
+Azure Cosmos DB supports two helpful type checking system functions for `null` and `undefined` properties:
+
+* [IS_NULL](is-null.md) - checks if a property value is `null`
+* [IS_DEFINED](is-defined.md) - checks if a property value is defined
+
+You can learn about [supported operators](equality-comparison-operators.md) and their behavior for `null` and `undefined` values.
+
+## Reserved keywords and special characters in JSON
+
+You can access properties using the quoted property operator `[]`. For example, `SELECT c.grade` and `SELECT c["grade"]` are equivalent. This syntax is useful to escape a property that contains spaces or special characters, or that has the same name as a SQL keyword or reserved word.
+
+For example, here's a document with a property named `order` and a property `price($)` that contains special characters:
+
+```json
+{
+ "id": "AndersenFamily",
+ "order": {
+ "orderId": "12345",
+ "productId": "A17849",
+ "price($)": 59.33
+ },
+ "creationDate": 1431620472,
+ "isRegistered": true
+}
+```
+
+If you run a query that includes the `order` property or the `price($)` property, you'll receive a syntax error.
+
+```sql
+SELECT * FROM c where c.order.orderid = "12345"
+```
+
+```sql
+SELECT * FROM c where c.order.price($) > 50
+```
+
+The result is:
+
+```output
+Syntax error, incorrect syntax near 'order'
+```
+
+Instead, rewrite the queries as follows:
+
+```sql
+SELECT * FROM c WHERE c["order"].orderId = "12345"
+```
+
+```sql
+SELECT * FROM c WHERE c["order"]["price($)"] > 50
+```
+
+## JSON expressions
+
+Projection also supports JSON expressions, as shown in the following example:
+
+```sql
+ SELECT { "state": f.address.state, "city": f.address.city, "name": f.id }
+ FROM Families f
+ WHERE f.id = "AndersenFamily"
+```
+
+The results are:
+
+```json
+ [{
+ "$1": {
+ "state": "WA",
+ "city": "Seattle",
+ "name": "AndersenFamily"
+ }
+ }]
+```
+
+In the preceding example, the `SELECT` clause needs to create a JSON object, and since the sample provides no key, the clause uses the implicit argument variable name `$1`. The following query returns two implicit argument variables: `$1` and `$2`.
+
+```sql
+ SELECT { "state": f.address.state, "city": f.address.city },
+ { "name": f.id }
+ FROM Families f
+ WHERE f.id = "AndersenFamily"
+```
+
+The results are:
+
+```json
+ [{
+ "$1": {
+ "state": "WA",
+ "city": "Seattle"
+ },
+ "$2": {
+ "name": "AndersenFamily"
+ }
+ }]
+```
+
+## Aliasing
+
+You can explicitly alias values in queries. If a query has two properties with the same name, use aliasing to rename one or both of the properties so they're disambiguated in the projected result.
+
+### Examples
+
+The `AS` keyword used for aliasing is optional, as shown in the following example when projecting the second value as `NameInfo`:
+
+```sql
+ SELECT
+ { "state": f.address.state, "city": f.address.city } AS AddressInfo,
+ { "name": f.id } NameInfo
+ FROM Families f
+ WHERE f.id = "AndersenFamily"
+```
+
+The results are:
+
+```json
+ [{
+ "AddressInfo": {
+ "state": "WA",
+ "city": "Seattle"
+ },
+ "NameInfo": {
+ "name": "AndersenFamily"
+ }
+ }]
+```
+
+### Aliasing with reserved keywords or special characters
+
+You can't use aliasing to project a value as a property name with a space, special character, or reserved word. If you wanted to change a value's projection to, for example, have a property name with a space, you could use a [JSON expression](#json-expressions).
+
+Here's an example:
+
+```sql
+ SELECT
+ {"JSON expression with a space": { "state": f.address.state, "city": f.address.city }},
+ {"JSON expression with a special character!": { "name": f.id }}
+ FROM Families f
+ WHERE f.id = "AndersenFamily"
+```
+
+## Next steps
+
+- [Getting started](getting-started.md)
+- [SELECT clause](select.md)
+- [WHERE clause](where.md)
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
+
+ Title: Quickstart - Azure Cosmos DB for NoSQL client library for .NET
+description: Learn how to build a .NET app to manage Azure Cosmos DB for NoSQL account resources in this quickstart.
++++
+ms.devlang: csharp
+ Last updated : 07/26/2022+++
+# Quickstart: Azure Cosmos DB for NoSQL client library for .NET
++
+> [!div class="op_single_selector"]
+>
+> * [.NET](quickstart-dotnet.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Java](quickstart-java.md)
+> * [Spring Data](quickstart-java-spring-data.md)
+> * [Python](quickstart-python.md)
+> * [Spark v3](quickstart-spark.md)
+> * [Go](quickstart-go.md)
+>
+
+Get started with the Azure Cosmos DB client library for .NET to create databases, containers, and items within your account. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). Follow these steps to install the package and try out example code for basic tasks.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-dotnet-quickstart) are available on GitHub as a .NET project.
+
+[API reference documentation](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | [Samples](samples-dotnet.md)
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://aka.ms/trycosmosdb).
+* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+
+### Prerequisite check
+
+* In a terminal or command window, run ``dotnet --version`` to check that the .NET SDK is version 6.0 or later.
+* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable Az`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
+
+## Setting up
+
+This section walks you through creating an Azure Cosmos DB account and setting up a project that uses the Azure Cosmos DB for NoSQL client library for .NET to manage resources.
+
+### <a id="create-account"></a>Create an Azure Cosmos DB account
++
+### Create a new .NET app
+
+Create a new .NET application in an empty folder using your preferred terminal. Use the [``dotnet new``](/dotnet/core/tools/dotnet-new) command specifying the **console** template.
+
+```dotnetcli
+dotnet new console
+```
+
+### Install the package
+
+Add the [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) NuGet package to the .NET project. Use the [``dotnet add package``](/dotnet/core/tools/dotnet-add-package) command specifying the name of the NuGet package.
+
+```dotnetcli
+dotnet add package Microsoft.Azure.Cosmos
+```
+
+Build the project with the [``dotnet build``](/dotnet/core/tools/dotnet-build) command.
+
+```dotnetcli
+dotnet build
+```
+
+Make sure that the build was successful with no errors. The expected output from the build should look something like this:
+
+```output
+ Determining projects to restore...
+ All projects are up-to-date for restore.
+ dslkajfjlksd -> C:\Users\sidandrews\Demos\dslkajfjlksd\bin\Debug\net6.0\dslkajfjlksd.dll
+
+Build succeeded.
+ 0 Warning(s)
+ 0 Error(s)
+```
+
+### Configure environment variables
++
+## Object model
++
+You'll use the following .NET classes to interact with these resources:
+
+* [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
+* [``Database``](/dotnet/api/microsoft.azure.cosmos.database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
+* [``Container``](/dotnet/api/microsoft.azure.cosmos.container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
+* [``QueryDefinition``](/dotnet/api/microsoft.azure.cosmos.querydefinition) - This class represents a SQL query and any query parameters.
+* [``FeedIterator<>``](/dotnet/api/microsoft.azure.cosmos.feediterator-1) - This class represents an iterator that can track the current page of results and get a new page of results.
+* [``FeedResponse<>``](/dotnet/api/microsoft.azure.cosmos.feedresponse-1) - This class represents a single page of responses from the iterator. This type can be iterated over using a ``foreach`` loop.
+
+## Code examples
+
+* [Authenticate the client](#authenticate-the-client)
+* [Create a database](#create-a-database)
+* [Create a container](#create-a-container)
+* [Create an item](#create-an-item)
+* [Get an item](#get-an-item)
+* [Query items](#query-items)
+
+The sample code described in this article creates a database named ``adventureworks`` with a container named ``products``. The ``products`` container is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
+
+For this sample code, the container will use the category as a logical partition key.
+
+### Authenticate the client
+
+From the project directory, open the *Program.cs* file. In your editor, add a using directive for ``Microsoft.Azure.Cosmos``.
++
+Define a new instance of the ``CosmosClient`` class using the constructor, and [``Environment.GetEnvironmentVariable``](/dotnet/api/system.environment.getenvironmentvariable) to read the two environment variables you created earlier.
++
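+
+The include snippet for this step isn't reproduced here; as a rough sketch, assuming the environment variables from the earlier configuration step are named ``COSMOS_ENDPOINT`` and ``COSMOS_KEY``, the client creation might look like this:
+
+```csharp
+using Microsoft.Azure.Cosmos;
+
+// Read the endpoint and key from environment variables (names assumed here).
+CosmosClient client = new CosmosClient(
+    accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
+    authKeyOrResourceToken: Environment.GetEnvironmentVariable("COSMOS_KEY"));
+```
+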
+For more information on different ways to create a ``CosmosClient`` instance, see [Get started with Azure Cosmos DB for NoSQL and .NET](how-to-dotnet-get-started.md#connect-to-azure-cosmos-db-sql-api).
+
+### Create a database
+
+Use the [``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) method to create a new database if it doesn't already exist. This method will return a reference to the existing or newly created database.
++
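+
+As a rough sketch, continuing from the client created earlier and using the database name described in this article:
+
+```csharp
+// Create the database if it doesn't exist yet; a reference is returned either way.
+Database database = await client.CreateDatabaseIfNotExistsAsync(id: "adventureworks");
+Console.WriteLine($"New database:\t{database.Id}");
+```
+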
+For more information on creating a database, see [Create a database in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-create-database.md).
+
+### Create a container
+
+The [``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) will create a new container if it doesn't already exist. This method will also return a reference to the container.
++
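+
+A hedged sketch of this step follows; the partition key path ``/category`` is an assumption based on the description earlier in this article:
+
+```csharp
+// Create the container if it doesn't exist, partitioned by product category.
+Container container = await database.CreateContainerIfNotExistsAsync(
+    id: "products",
+    partitionKeyPath: "/category",
+    throughput: 400);
+Console.WriteLine($"New container:\t{container.Id}");
+```
+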
+For more information on creating a container, see [Create a container in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-create-container.md).
+
+### Create an item
+
+The easiest way to create a new item in a container is to first build a C# [class](/dotnet/csharp/language-reference/keywords/class) or [record](/dotnet/csharp/language-reference/builtin-types/record) type with all of the members you want to serialize into JSON. In this example, the C# record has a unique identifier, a *category* field for the partition key, and extra *name*, *quantity*, and *sale* fields.
++
+Create an item in the container by calling [``Container.UpsertItemAsync``](/dotnet/api/microsoft.azure.cosmos.container.upsertitemasync). In this example, we chose to *upsert* instead of *create* a new item in case you run this sample code more than once.
++
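+
+The following sketch illustrates both parts of this step. The ``Product`` record, its member names, and the sample values are illustrative assumptions rather than the article's exact snippet:
+
+```csharp
+// Upsert so that re-running the sample doesn't fail with a conflict on the same id.
+Product newItem = new(
+    id: "68719518391",
+    category: "gear-surf-surfboards",
+    name: "Yamba Surfboard",
+    quantity: 12,
+    sale: false);
+
+ItemResponse<Product> upsertResponse = await container.UpsertItemAsync(
+    item: newItem,
+    partitionKey: new PartitionKey("gear-surf-surfboards"));
+
+// In a top-level Program.cs, type declarations go after the statements.
+public record Product(string id, string category, string name, int quantity, bool sale);
+```
+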
+For more information on creating, upserting, or replacing items, see [Create an item in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-create-item.md).
+
+### Get an item
+
+In Azure Cosmos DB, you can perform a point read operation by using both the unique identifier (``id``) and partition key fields. In the SDK, call [``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) passing in both values to return a deserialized instance of your C# type.
++
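+
+A minimal sketch, reusing the assumed ``Product`` record and identifiers from the previous step:
+
+```csharp
+// Point read: the most efficient way to fetch a single item by id and partition key.
+ItemResponse<Product> readResponse = await container.ReadItemAsync<Product>(
+    id: "68719518391",
+    partitionKey: new PartitionKey("gear-surf-surfboards"));
+Product readItem = readResponse.Resource;
+```
+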
+For more information about reading items and parsing the response, see [Read an item in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-read-item.md).
+
+### Query items
+
+After you insert an item, you can run a query to get all items that match a specific filter. This example runs the SQL query: ``SELECT * FROM todo t WHERE t.partitionKey = 'gear-surf-surfboards'``. This example uses the **QueryDefinition** type and a parameterized query expression for the partition key filter. Once the query is defined, call [``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) to get a result iterator that will manage the pages of results. Then, use a combination of ``while`` and ``foreach`` loops to retrieve pages of results and then iterate over the individual items.
++
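+
+A sketch of the pattern follows; the property and container names align with the assumed ``Product`` record above rather than the article's exact snippet:
+
+```csharp
+// Parameterized query over the partition key value.
+QueryDefinition query = new QueryDefinition(
+        query: "SELECT * FROM products p WHERE p.category = @category")
+    .WithParameter("@category", "gear-surf-surfboards");
+
+FeedIterator<Product> feed = container.GetItemQueryIterator<Product>(queryDefinition: query);
+while (feed.HasMoreResults)
+{
+    FeedResponse<Product> page = await feed.ReadNextAsync();
+    foreach (Product product in page)
+    {
+        Console.WriteLine($"Found item:\t{product.name}");
+    }
+}
+```
+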
+## Run the code
+
+This app creates an Azure Cosmos DB for NoSQL database and container. The example then creates an item and reads the exact same item back. Finally, the example issues a query that should return only that single item. With each step, the example outputs metadata to the console about the steps it has performed.
+
+To run the app, use a terminal to navigate to the application directory and run the application.
+
+```dotnetcli
+dotnet run
+```
+
+The output of the app should be similar to this example:
+
+```output
+New database: adventureworks
+New container: products
+Created item: 68719518391 [gear-surf-surfboards]
+```
+
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB for NoSQL account, create a database, and create a container using the .NET SDK. You can now dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB for NoSQL resources.
+
+> [!div class="nextstepaction"]
+> [Get started with Azure Cosmos DB for NoSQL and .NET](how-to-dotnet-get-started.md)
cosmos-db Quickstart Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-go.md
+
+ Title: 'Quickstart: Build a Go app using Azure Cosmos DB for NoSQL account'
+description: Gives a Go code sample you can use to connect to and query the Azure Cosmos DB for NoSQL
+++
+ms.devlang: golang
++ Last updated : 3/4/2021+++
+# Quickstart: Build a Go application using an Azure Cosmos DB for NoSQL account
+
+> [!div class="op_single_selector"]
+>
+> * [.NET](quickstart-dotnet.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Java](quickstart-java.md)
+> * [Spring Data](quickstart-java-spring-data.md)
+> * [Python](quickstart-python.md)
+> * [Spark v3](quickstart-spark.md)
+> * [Go](quickstart-go.md)
+>
++
+In this quickstart, you'll build a sample Go application that uses the Azure SDK for Go to manage an Azure Cosmos DB for NoSQL account. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
+
+Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+To learn more about Azure Cosmos DB, go to [Azure Cosmos DB](../introduction.md).
+
+## Prerequisites
+
+- An Azure Cosmos DB Account. Your options are:
+  * Within an active Azure subscription:
+ * [Create an Azure free Account](https://azure.microsoft.com/free) or use your existing subscription
+ * [Visual Studio Monthly Credits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers)
+ * [Azure Cosmos DB Free Tier](../optimize-dev-test.md#azure-cosmos-db-free-tier)
+  * Without an active Azure subscription:
+    * [Try Azure Cosmos DB for free](https://aka.ms/trycosmosdb), a test environment that lasts for 30 days.
+ * [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator)
+- [Go 1.16 or higher](https://golang.org/dl/)
+- [Azure CLI](/cli/azure/install-azure-cli)
++
+## Getting started
+
+For this quickstart, you'll need to create an Azure resource group and an Azure Cosmos DB account.
+
+Run the following commands to create an Azure resource group:
+
+```azurecli
+az group create --name myResourceGroup --location eastus
+```
+
+Next, create an Azure Cosmos DB account by running the following command:
+
+```azurecli
+az cosmosdb create --name my-cosmosdb-account --resource-group myResourceGroup
+```
+
+### Install the package
+
+Use the `go get` command to install the [azcosmos](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos) package.
+
+```bash
+go get github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos
+```
+
+## Key concepts
+
+* A `Client` is a connection to an Azure Cosmos DB account.
+* Azure Cosmos DB accounts can have multiple `databases`. A `DatabaseClient` allows you to create, read, and delete databases.
+* A database within an Azure Cosmos DB account can have multiple `containers`. A `ContainerClient` allows you to create, read, update, and delete containers, and to modify provisioned throughput.
+* Information is stored as items inside containers, and the client allows you to create, read, update, and delete items in containers.
+
+## Code examples
+
+**Authenticate the client**
+
+```go
+var endpoint = "<azure_cosmos_uri>"
+var key = "<azure_cosmos_primary_key>"
+
+cred, err := azcosmos.NewKeyCredential(key)
+if err != nil {
+ log.Fatal("Failed to create a credential: ", err)
+}
+
+// Create a CosmosDB client
+client, err := azcosmos.NewClientWithKey(endpoint, cred, nil)
+if err != nil {
+ log.Fatal("Failed to create Azure Cosmos DB client: ", err)
+}
+
+// Create database client
+databaseClient, err := client.NewDatabase("<databaseName>")
+if err != nil {
+ log.Fatal("Failed to create database client:", err)
+}
+
+// Create container client
+containerClient, err := client.NewContainer("<databaseName>", "<containerName>")
+if err != nil {
+ log.Fatal("Failed to create a container client:", err)
+}
+```
+
+**Create an Azure Cosmos DB database**
+
+```go
+import (
+ "context"
+ "log"
+ "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func createDatabase (client *azcosmos.Client, databaseName string) error {
+// databaseName := "adventureworks"
+
+ // sets the name of the database
+ databaseProperties := azcosmos.DatabaseProperties{ID: databaseName}
+
+ // creating the database
+ ctx := context.TODO()
+    databaseResp, err := client.CreateDatabase(ctx, databaseProperties, nil)
+    if err != nil {
+        log.Fatal(err)
+    }
+    log.Printf("Database [%v] created. ActivityId %s\n", databaseName, databaseResp.ActivityID)
+    return nil
+}
+```
+
+**Create a container**
+
+```go
+import (
+ "context"
+ "log"
+ "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func createContainer (client *azcosmos.Client, databaseName, containerName, partitionKey string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "/customerId"
+
+ databaseClient, err := client.NewDatabase(databaseName) // returns a struct that represents a database
+ if err != nil {
+ log.Fatal("Failed to create a database client:", err)
+ }
+
+ // Setting container properties
+ containerProperties := azcosmos.ContainerProperties{
+ ID: containerName,
+ PartitionKeyDefinition: azcosmos.PartitionKeyDefinition{
+ Paths: []string{partitionKey},
+ },
+ }
+
+ // Setting container options
+ throughputProperties := azcosmos.NewManualThroughputProperties(400) //defaults to 400 if not set
+ options := &azcosmos.CreateContainerOptions{
+ ThroughputProperties: &throughputProperties,
+ }
+
+ ctx := context.TODO()
+ containerResponse, err := databaseClient.CreateContainer(ctx, containerProperties, options)
+ if err != nil {
+ log.Fatal(err)
+
+ }
+ log.Printf("Container [%v] created. ActivityId %s\n", containerName, containerResponse.ActivityID)
+
+ return nil
+}
+```
+
+**Create an item**
+
+```go
+import (
+    "context"
+    "encoding/json"
+    "fmt"
+    "log"
+
+    "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func createItem(client *azcosmos.Client, databaseName, containerName, partitionKey string, item any) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+/*
+ item = struct {
+ ID string `json:"id"`
+ CustomerId string `json:"customerId"`
+ Title string
+ FirstName string
+ LastName string
+ EmailAddress string
+ PhoneNumber string
+ CreationDate string
+ }{
+ ID: "1",
+ CustomerId: "1",
+ Title: "Mr",
+ FirstName: "Luke",
+ LastName: "Hayes",
+ EmailAddress: "luke12@adventure-works.com",
+ PhoneNumber: "879-555-0197",
+ }
+*/
+ // Create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+ return fmt.Errorf("failed to create a container client: %s", err)
+ }
+
+    // Specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ b, err := json.Marshal(item)
+ if err != nil {
+ return err
+ }
+ // setting item options upon creating ie. consistency level
+ itemOptions := azcosmos.ItemOptions{
+ ConsistencyLevel: azcosmos.ConsistencyLevelSession.ToPtr(),
+ }
+ ctx := context.TODO()
+ itemResponse, err := containerClient.CreateItem(ctx, pk, b, &itemOptions)
+
+ if err != nil {
+ return err
+ }
+ log.Printf("Status %d. Item %v created. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
+
+ return nil
+}
+```
+
+**Read an item**
+
+```go
+import (
+    "context"
+    "encoding/json"
+    "fmt"
+    "log"
+
+    "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func readItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+// itemId = "1"
+
+ // Create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+ return fmt.Errorf("Failed to create a container client: %s", err)
+ }
+
+    // Specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ // Read an item
+ ctx := context.TODO()
+ itemResponse, err := containerClient.ReadItem(ctx, pk, itemId, nil)
+ if err != nil {
+ return err
+ }
+
+ itemResponseBody := struct {
+ ID string `json:"id"`
+ CustomerId string `json:"customerId"`
+ Title string
+ FirstName string
+ LastName string
+ EmailAddress string
+ PhoneNumber string
+ CreationDate string
+ }{}
+
+ err = json.Unmarshal(itemResponse.Value, &itemResponseBody)
+ if err != nil {
+ return err
+ }
+
+ b, err := json.MarshalIndent(itemResponseBody, "", " ")
+ if err != nil {
+ return err
+ }
+ fmt.Printf("Read item with customerId %s\n", itemResponseBody.CustomerId)
+ fmt.Printf("%s\n", b)
+
+ log.Printf("Status %d. Item %v read. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
+
+ return nil
+}
+```
+
+**Delete an item**
+
+```go
+import (
+    "context"
+    "fmt"
+    "log"
+
+    "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func deleteItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+// itemId = "1"
+
+ // Create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+ return fmt.Errorf("Failed to create a container client: %s", err)
+ }
+    // Specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ // Delete an item
+ ctx := context.TODO()
+ res, err := containerClient.DeleteItem(ctx, pk, itemId, nil)
+ if err != nil {
+ return err
+ }
+
+ log.Printf("Status %d. Item %v deleted. ActivityId %s. Consuming %v Request Units.\n", res.RawResponse.StatusCode, pk, res.ActivityID, res.RequestCharge)
+
+ return nil
+}
+```
+
+## Run the code
+
+To authenticate, you need to pass the Azure Cosmos DB account credentials to the application.
+
+Get your Azure Cosmos DB account credentials by following these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to your Azure Cosmos DB account.
+
+1. Open the **Keys** pane and copy the **URI** and **PRIMARY KEY** of your account. You'll add the URI and key values to environment variables in the next step.
+
+After you've copied the **URI** and **PRIMARY KEY** of your account, save them to new environment variables on the local machine running the application.
+
+Use the values copied from the Azure portal to set the following environment variables:
+
+# [Bash](#tab/bash)
+
+```bash
+export AZURE_COSMOS_ENDPOINT=<Your_AZURE_COSMOS_URI>
+export AZURE_COSMOS_KEY=<Your_COSMOS_PRIMARY_KEY>
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$env:AZURE_COSMOS_ENDPOINT=<Your_AZURE_COSMOS_URI>
+$env:AZURE_COSMOS_KEY=<Your_COSMOS_PRIMARY_KEY>
+```
+++
+Create a new Go module by running the following command:
+
+```bash
+go mod init azcosmos
+```
+
+```go
+
+package main
+
+import (
+ "context"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "log"
+ "os"
+
+ "github.com/Azure/azure-sdk-for-go/sdk/azcore"
+ "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
+)
+
+func main() {
+ endpoint := os.Getenv("AZURE_COSMOS_ENDPOINT")
+ if endpoint == "" {
+ log.Fatal("AZURE_COSMOS_ENDPOINT could not be found")
+ }
+
+ key := os.Getenv("AZURE_COSMOS_KEY")
+ if key == "" {
+ log.Fatal("AZURE_COSMOS_KEY could not be found")
+ }
+
+ var databaseName = "adventureworks"
+ var containerName = "customer"
+ var partitionKey = "/customerId"
+
+ item := struct {
+ ID string `json:"id"`
+ CustomerId string `json:"customerId"`
+ Title string
+ FirstName string
+ LastName string
+ EmailAddress string
+ PhoneNumber string
+ CreationDate string
+ }{
+ ID: "1",
+ CustomerId: "1",
+ Title: "Mr",
+ FirstName: "Luke",
+ LastName: "Hayes",
+ EmailAddress: "luke12@adventure-works.com",
+ PhoneNumber: "879-555-0197",
+ }
+
+ cred, err := azcosmos.NewKeyCredential(key)
+ if err != nil {
+ log.Fatal("Failed to create a credential: ", err)
+ }
+
+ // Create a CosmosDB client
+ client, err := azcosmos.NewClientWithKey(endpoint, cred, nil)
+ if err != nil {
+        log.Fatal("Failed to create Azure Cosmos DB client: ", err)
+ }
+
+ err = createDatabase(client, databaseName)
+ if err != nil {
+ log.Printf("createDatabase failed: %s\n", err)
+ }
+
+ err = createContainer(client, databaseName, containerName, partitionKey)
+ if err != nil {
+ log.Printf("createContainer failed: %s\n", err)
+ }
+
+ err = createItem(client, databaseName, containerName, item.CustomerId, item)
+ if err != nil {
+ log.Printf("createItem failed: %s\n", err)
+ }
+
+ err = readItem(client, databaseName, containerName, item.CustomerId, item.ID)
+ if err != nil {
+ log.Printf("readItem failed: %s\n", err)
+ }
+
+ err = deleteItem(client, databaseName, containerName, item.CustomerId, item.ID)
+ if err != nil {
+ log.Printf("deleteItem failed: %s\n", err)
+ }
+}
+
+func createDatabase(client *azcosmos.Client, databaseName string) error {
+// databaseName := "adventureworks"
+
+ databaseProperties := azcosmos.DatabaseProperties{ID: databaseName}
+
+ // This is a helper function that swallows 409 errors
+ errorIs409 := func(err error) bool {
+ var responseErr *azcore.ResponseError
+ return err != nil && errors.As(err, &responseErr) && responseErr.StatusCode == 409
+ }
+ ctx := context.TODO()
+ databaseResp, err := client.CreateDatabase(ctx, databaseProperties, nil)
+
+ switch {
+ case errorIs409(err):
+ log.Printf("Database [%s] already exists\n", databaseName)
+ case err != nil:
+ return err
+ default:
+ log.Printf("Database [%v] created. ActivityId %s\n", databaseName, databaseResp.ActivityID)
+ }
+ return nil
+}
+
+func createContainer(client *azcosmos.Client, databaseName, containerName, partitionKey string) error {
+// databaseName = adventureworks
+// containerName = customer
+// partitionKey = "/customerId"
+
+ databaseClient, err := client.NewDatabase(databaseName)
+ if err != nil {
+ return err
+ }
+
+ // creating a container
+ containerProperties := azcosmos.ContainerProperties{
+ ID: containerName,
+ PartitionKeyDefinition: azcosmos.PartitionKeyDefinition{
+ Paths: []string{partitionKey},
+ },
+ }
+
+ // this is a helper function that swallows 409 errors
+ errorIs409 := func(err error) bool {
+ var responseErr *azcore.ResponseError
+ return err != nil && errors.As(err, &responseErr) && responseErr.StatusCode == 409
+ }
+
+ // setting options upon container creation
+ throughputProperties := azcosmos.NewManualThroughputProperties(400) //defaults to 400 if not set
+ options := &azcosmos.CreateContainerOptions{
+ ThroughputProperties: &throughputProperties,
+ }
+ ctx := context.TODO()
+ containerResponse, err := databaseClient.CreateContainer(ctx, containerProperties, options)
+
+ switch {
+ case errorIs409(err):
+ log.Printf("Container [%s] already exists\n", containerName)
+ case err != nil:
+ return err
+ default:
+ log.Printf("Container [%s] created. ActivityId %s\n", containerName, containerResponse.ActivityID)
+ }
+ return nil
+}
+
+func createItem(client *azcosmos.Client, databaseName, containerName, partitionKey string, item any) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+
+/* item = struct {
+ ID string `json:"id"`
+ CustomerId string `json:"customerId"`
+ Title string
+ FirstName string
+ LastName string
+ EmailAddress string
+ PhoneNumber string
+ CreationDate string
+ }{
+ ID: "1",
+ CustomerId: "1",
+ Title: "Mr",
+ FirstName: "Luke",
+ LastName: "Hayes",
+ EmailAddress: "luke12@adventure-works.com",
+ PhoneNumber: "879-555-0197",
+ CreationDate: "2014-02-25T00:00:00",
+ }
+*/
+ // create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+ return fmt.Errorf("failed to create a container client: %s", err)
+ }
+
+    // specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ b, err := json.Marshal(item)
+ if err != nil {
+ return err
+ }
+ // setting the item options upon creating ie. consistency level
+ itemOptions := azcosmos.ItemOptions{
+ ConsistencyLevel: azcosmos.ConsistencyLevelSession.ToPtr(),
+ }
+
+ // this is a helper function that swallows 409 errors
+ errorIs409 := func(err error) bool {
+ var responseErr *azcore.ResponseError
+ return err != nil && errors.As(err, &responseErr) && responseErr.StatusCode == 409
+ }
+
+ ctx := context.TODO()
+ itemResponse, err := containerClient.CreateItem(ctx, pk, b, &itemOptions)
+
+ switch {
+ case errorIs409(err):
+ log.Printf("Item with partitionkey value %s already exists\n", pk)
+ case err != nil:
+ return err
+ default:
+ log.Printf("Status %d. Item %v created. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
+ }
+
+ return nil
+}
+
+func readItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+// itemId = "1"
+
+ // Create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+ return fmt.Errorf("failed to create a container client: %s", err)
+ }
+
+    // Specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ // Read an item
+ ctx := context.TODO()
+ itemResponse, err := containerClient.ReadItem(ctx, pk, itemId, nil)
+ if err != nil {
+ return err
+ }
+
+ itemResponseBody := struct {
+ ID string `json:"id"`
+ CustomerId string `json:"customerId"`
+ Title string
+ FirstName string
+ LastName string
+ EmailAddress string
+ PhoneNumber string
+ CreationDate string
+ }{}
+
+ err = json.Unmarshal(itemResponse.Value, &itemResponseBody)
+ if err != nil {
+ return err
+ }
+
+ b, err := json.MarshalIndent(itemResponseBody, "", " ")
+ if err != nil {
+ return err
+ }
+ fmt.Printf("Read item with customerId %s\n", itemResponseBody.CustomerId)
+ fmt.Printf("%s\n", b)
+
+ log.Printf("Status %d. Item %v read. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
+
+ return nil
+}
+
+func deleteItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
+// databaseName = "adventureworks"
+// containerName = "customer"
+// partitionKey = "1"
+// itemId = "1"
+
+ // Create container client
+ containerClient, err := client.NewContainer(databaseName, containerName)
+ if err != nil {
+        return fmt.Errorf("failed to create a container client: %s", err)
+ }
+    // Specifies the value of the partition key
+ pk := azcosmos.NewPartitionKeyString(partitionKey)
+
+ // Delete an item
+ ctx := context.TODO()
+
+ res, err := containerClient.DeleteItem(ctx, pk, itemId, nil)
+ if err != nil {
+ return err
+ }
+
+ log.Printf("Status %d. Item %v deleted. ActivityId %s. Consuming %v Request Units.\n", res.RawResponse.StatusCode, pk, res.ActivityID, res.RequestCharge)
+
+ return nil
+}
+
+```
+
+Create a new file named `main.go` and copy the preceding sample code into it.
+
+Run the following command to execute the app:
+
+```bash
+go run main.go
+```
+
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a database, container, and an item entry. Now import more data to your Azure Cosmos DB account.
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB for the API for NoSQL](../import-data.md)
cosmos-db Quickstart Java Spring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java-spring-data.md
+
+ Title: Quickstart - Use Spring Data Azure Cosmos DB v3 to create a document database using Azure Cosmos DB
+description: This quickstart presents a Spring Data Azure Cosmos DB v3 code sample you can use to connect to and query the Azure Cosmos DB for NoSQL
+++
+ms.devlang: java
+ Last updated : 08/26/2021+++++
+# Quickstart: Build a Spring Data Azure Cosmos DB v3 app to manage Azure Cosmos DB for NoSQL data
+
+> [!div class="op_single_selector"]
+>
+> * [.NET](quickstart-dotnet.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Java](quickstart-java.md)
+> * [Spring Data](quickstart-java-spring-data.md)
+> * [Python](quickstart-python.md)
+> * [Spark v3](quickstart-spark.md)
+> * [Go](quickstart-go.md)
+>
+
+In this quickstart, you create and manage an Azure Cosmos DB for NoSQL account from the Azure portal, and by using a Spring Data Azure Cosmos DB v3 app cloned from GitHub. First, you create an Azure Cosmos DB for NoSQL account using the Azure portal (or, without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb)). You then create a Spring Boot app using the Spring Data Azure Cosmos DB v3 connector and add resources to your Azure Cosmos DB account by using the Spring Boot application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+> [!IMPORTANT]
+> These release notes are for version 3 of Spring Data Azure Cosmos DB. You can find [release notes for version 2 here](sdk-java-spring-data-v2.md).
+>
+> Spring Data Azure Cosmos DB supports only the API for NoSQL.
+>
+> See these articles for information about Spring Data on other Azure Cosmos DB APIs:
+> * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db)
+> * [Spring Data MongoDB with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-mongodb-with-cosmos-db)
+>
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://aka.ms/trycosmosdb) without an Azure subscription or credit card. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
+- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.
+- A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven.
+- [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
+
+## Introductory notes
+
+*The structure of an Azure Cosmos DB account.* Irrespective of API or programming language, an Azure Cosmos DB *account* contains zero or more *databases*, a *database* (DB) contains zero or more *containers*, and a *container* contains zero or more items, as shown in the diagram below:
++
+You may read more about databases, containers and items [here.](../account-databases-containers-items.md) A few important properties are defined at the level of the container, among them *provisioned throughput* and *partition key*.
+
+The provisioned throughput is measured in Request Units (*RUs*), which have a monetary price and are a substantial determining factor in the operating cost of the account. Provisioned throughput can be selected at per-container or per-database granularity; however, container-level throughput specification is typically preferred. You may read more about throughput provisioning [here.](../set-throughput.md)
+
+As items are inserted into an Azure Cosmos DB container, the database grows horizontally by adding more storage and compute to handle requests. Storage and compute capacity are added in discrete units known as *partitions*, and you must choose one field in your documents to be the partition key, which maps each document to a partition. Each partition is assigned a roughly equal slice of the partition key value range, so you're advised to choose a partition key that's relatively random or evenly distributed. Otherwise, some partitions see substantially more requests (a *hot partition*) while other partitions see substantially fewer (a *cold partition*); this situation is to be avoided. You may learn more about partitioning [here](../partitioning-overview.md).
+
+## Create a database account
+
+Before you can create a document database, you need to create an API for NoSQL account with Azure Cosmos DB.
++
+## Add a container
++
+<a id="add-sample-data"></a>
+## Add sample data
++
+## Query your data
++
+## Clone the sample application
+
+Now let's switch to working with code. Let's clone an API for NoSQL app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+```bash
+git clone https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-getting-started.git
+```
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Run the app](#run-the-app).
+
+### Application configuration file
+
+Here we showcase how Spring Boot and Spring Data simplify the developer experience: the process of establishing an Azure Cosmos DB client and connecting to Azure Cosmos DB resources is now configuration rather than code. At application startup, Spring Boot handles all of this boilerplate by using the settings in **application.properties**:
+
+```properties
+cosmos.uri=${ACCOUNT_HOST}
+cosmos.key=${ACCOUNT_KEY}
+cosmos.secondaryKey=${SECONDARY_ACCOUNT_KEY}
+
+dynamic.collection.name=spel-property-collection
+# Populate query metrics
+cosmos.queryMetricsEnabled=true
+```
+
+Once you create an Azure Cosmos DB account, database, and container, fill in the values in the config file, and Spring Boot/Spring Data automatically does the following: (1) creates an underlying Java SDK `CosmosClient` instance with the URI and key, and (2) connects to the database and container. You're all set; **no more resource management code is required.**
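+
+For orientation only, the application code that consumes this configuration can then be as small as the following hedged sketch. The class name is illustrative, and it autowires a Spring Data repository like the `UserRepository` sketched in the next section; the wiring that turns these properties into a Cosmos client and repositories isn't shown here.
+
+```java
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.boot.CommandLineRunner;
+import org.springframework.boot.SpringApplication;
+import org.springframework.boot.autoconfigure.SpringBootApplication;
+
+@SpringBootApplication
+public class QuickstartApplication implements CommandLineRunner {
+
+    // Spring injects a ready-to-use repository; no CosmosClient is created by hand here.
+    @Autowired
+    private UserRepository repository;
+
+    public static void main(String[] args) {
+        SpringApplication.run(QuickstartApplication.class, args);
+    }
+
+    @Override
+    public void run(String... args) {
+        // Illustrative usage: persist one item and query it back.
+        repository.save(new User("1", "Test", "User"));
+        repository.findByFirstName("Test").forEach(System.out::println);
+    }
+}
+```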
+
+### Java source
+
+The Spring Data value-add also comes from its simple, clean, standardized, and platform-independent interface for operating on datastores. Building on the Spring Data GitHub sample linked above, below are CRUD and query samples for manipulating Azure Cosmos DB documents with Spring Data Azure Cosmos DB.
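+
+The entity and repository themselves live in the sample repo. As a rough, hedged sketch (the class name, fields, and container name here are illustrative assumptions rather than the sample's exact code), a Spring Data Azure Cosmos DB domain class and repository look like this:
+
+```java
+import java.util.Optional;
+
+import com.azure.spring.data.cosmos.core.mapping.Container;
+import com.azure.spring.data.cosmos.core.mapping.PartitionKey;
+import com.azure.spring.data.cosmos.repository.CosmosRepository;
+import org.springframework.data.annotation.Id;
+
+// Each instance is stored as a JSON item in the "users" container.
+@Container(containerName = "users")
+class User {
+    @Id
+    private String id;          // item id
+    private String firstName;
+    @PartitionKey
+    private String lastName;    // partition key of the container
+
+    public User() { }
+
+    public User(String id, String firstName, String lastName) {
+        this.id = id;
+        this.firstName = firstName;
+        this.lastName = lastName;
+    }
+
+    public String getId() { return id; }
+    public String getFirstName() { return firstName; }
+    public String getLastName() { return lastName; }
+
+    @Override
+    public String toString() {
+        return id + " " + firstName + " " + lastName;
+    }
+}
+
+// Spring Data derives queries from the repository method names.
+interface UserRepository extends CosmosRepository<User, String> {
+    Iterable<User> findByFirstName(String firstName);                 // SQL query on firstName
+    Optional<User> findByIdAndLastName(String id, String lastName);   // point read on id + partition key
+}
+```
+
+With such a repository in place, application code only calls methods like `repository.save(user)` or `repository.findByFirstName("Test")`, and the connector translates them into Azure Cosmos DB operations, as the following samples from the repo show.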
+
+* Item creation and updates by using the `save` method.
+
+ [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Create)]
+
+* Point reads using a derived query method defined in the repository. The `findByIdAndLastName` method on `UserRepository` performs a point read. The fields mentioned in the method name cause Spring Data to execute a point read defined by the `id` and `lastName` fields:
+
+ [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Read)]
+
+* Item deletes using `deleteAll`:
+
+ [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Delete)]
+
+* Derived query based on repository method name. Spring Data implements the `UserRepository` `findByFirstName` method as a Java SDK SQL query on the `firstName` field (this query could not be implemented as a point-read):
+
+ [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Query)]
+
+## Run the app
+
+Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database.
+
+1. In the git terminal window, `cd` to the sample code folder.
+
+ ```bash
+ cd azure-spring-data-cosmos-java-sql-api-getting-started/azure-spring-data-cosmos-java-getting-started/
+ ```
+
+2. In the git terminal window, use the following command to install the required Spring Data Azure Cosmos DB packages.
+
+ ```bash
+ mvn clean package
+ ```
+
+3. In the git terminal window, use the following command to start the Spring Data Azure Cosmos DB application:
+
+ ```bash
+ mvn spring-boot:run
+ ```
+
+4. The app loads **application.properties** and connects to the resources in your Azure Cosmos DB account.
+5. The app performs the point CRUD operations described above.
+6. The app performs a derived query.
+7. The app doesn't delete your resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account if you want to avoid incurring charges.
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB for NoSQL account, create a document database and container using the Data Explorer, and run a Spring Data app to do the same thing programmatically. You can now import additional data into your Azure Cosmos DB account.
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB](../import-data.md)
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java.md
+
+ Title: Quickstart - Use Java to create a document database using Azure Cosmos DB
+description: This quickstart presents a Java code sample you can use to connect to and query the Azure Cosmos DB for NoSQL
+++
+ms.devlang: java
+ Last updated : 08/26/2021+++++
+# Quickstart: Build a Java app to manage Azure Cosmos DB for NoSQL data
+
+> [!div class="op_single_selector"]
+>
+> * [.NET](quickstart-dotnet.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Java](quickstart-java.md)
+> * [Spring Data](quickstart-java-spring-data.md)
+> * [Python](quickstart-python.md)
+> * [Spark v3](quickstart-spark.md)
+> * [Go](quickstart-go.md)
+>
++
+In this quickstart, you create and manage an Azure Cosmos DB for NoSQL account from the Azure portal, and by using a Java app cloned from GitHub. First, you create an Azure Cosmos DB for NoSQL account using the Azure portal (or, without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb)). You then create a Java app using the SQL Java SDK, and add resources to your Azure Cosmos DB account by using the Java application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+> [!IMPORTANT]
+> This quickstart is for Azure Cosmos DB Java SDK v4 only. For more information, view the Azure Cosmos DB Java SDK v4 [release notes](sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), [performance tips](performance-tips-java-sdk-v4.md), and [troubleshooting guide](troubleshoot-java-sdk-v4.md). If you're currently using a version older than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading.
+>
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://aka.ms/trycosmosdb) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
+- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.
+- A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven.
+- [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
+
+## Introductory notes
+
+*The structure of an Azure Cosmos DB account.* Irrespective of API or programming language, an Azure Cosmos DB *account* contains zero or more *databases*, a *database* (DB) contains zero or more *containers*, and a *container* contains zero or more items, as shown in the diagram below:
++
+You may read more about databases, containers and items [here.](../account-databases-containers-items.md) A few important properties are defined at the level of the container, among them *provisioned throughput* and *partition key*.
+
+The provisioned throughput is measured in Request Units (*RUs*), which have a monetary price and are a substantial determining factor in the operating cost of the account. Provisioned throughput can be selected at per-container or per-database granularity; however, container-level throughput specification is typically preferred. You may read more about throughput provisioning [here.](../set-throughput.md)
+
+As items are inserted into an Azure Cosmos DB container, the database grows horizontally by adding more storage and compute to handle requests. Storage and compute capacity are added in discrete units known as *partitions*, and you must choose one field in your documents to be the partition key, which maps each document to a partition. Each partition is assigned a roughly equal slice of the partition key value range, so you're advised to choose a partition key that's relatively random or evenly distributed. Otherwise, some partitions see substantially more requests (a *hot partition*) while other partitions see substantially fewer (a *cold partition*); this situation is to be avoided. You may learn more about partitioning [here](../partitioning-overview.md).
+
+## Create a database account
+
+Before you can create a document database, you need to create an API for NoSQL account with Azure Cosmos DB.
++
+## Add a container
++
+<a id="add-sample-data"></a>
+## Add sample data
++
+## Query your data
++
+## Clone the sample application
+
+Now let's switch to working with code. Let's clone an API for NoSQL app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+```bash
+git clone https://github.com/Azure-Samples/azure-cosmos-java-getting-started.git
+```
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Run the app](#run-the-app).
++
+# [Sync API](#tab/sync)
+
+### Managing database resources using the synchronous (sync) API
+
+* `CosmosClient` initialization. The `CosmosClient` provides a client-side logical representation for the Azure Cosmos DB database service. This client is used to configure and execute requests against the service. (A consolidated sketch of the sync flow appears after this list.)
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateSyncClient)]
+
+* `CosmosDatabase` creation.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateDatabaseIfNotExists)]
+
+* `CosmosContainer` creation.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateContainerIfNotExists)]
+
+* Item creation by using the `createItem` method.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateItem)]
+
+* Point reads are performed using the `readItem` method.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=ReadItem)]
+
+* SQL queries over JSON are performed using the `queryItems` method.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=QueryItems)]
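+
+The snippets above are pulled in from the sample project by reference. Purely for orientation, the end-to-end sync flow looks roughly like the following hedged sketch; the `Family` class, database and container names, and throughput value are illustrative assumptions rather than the sample's exact code:
+
+```java
+import com.azure.cosmos.ConsistencyLevel;
+import com.azure.cosmos.CosmosClient;
+import com.azure.cosmos.CosmosClientBuilder;
+import com.azure.cosmos.CosmosContainer;
+import com.azure.cosmos.CosmosDatabase;
+import com.azure.cosmos.models.CosmosContainerProperties;
+import com.azure.cosmos.models.CosmosItemRequestOptions;
+import com.azure.cosmos.models.CosmosQueryRequestOptions;
+import com.azure.cosmos.models.PartitionKey;
+import com.azure.cosmos.models.ThroughputProperties;
+import com.azure.cosmos.util.CosmosPagedIterable;
+
+public class SyncQuickstartSketch {
+    // Minimal illustrative POJO; the sample uses a richer Family model.
+    public static class Family {
+        public String id;
+        public String lastName;
+    }
+
+    public static void main(String[] args) {
+        // Build the sync client from the endpoint and key passed as system properties.
+        CosmosClient client = new CosmosClientBuilder()
+            .endpoint(System.getProperty("ACCOUNT_HOST"))
+            .key(System.getProperty("ACCOUNT_KEY"))
+            .consistencyLevel(ConsistencyLevel.EVENTUAL)
+            .buildClient();
+
+        // Create (or get) the database and container.
+        client.createDatabaseIfNotExists("AzureSampleFamilyDB");
+        CosmosDatabase database = client.getDatabase("AzureSampleFamilyDB");
+        database.createContainerIfNotExists(
+            new CosmosContainerProperties("FamilyContainer", "/lastName"),
+            ThroughputProperties.createManualThroughput(400));
+        CosmosContainer container = database.getContainer("FamilyContainer");
+
+        // Create an item.
+        Family family = new Family();
+        family.id = "Andersen-1";
+        family.lastName = "Andersen";
+        container.createItem(family, new PartitionKey(family.lastName), new CosmosItemRequestOptions());
+
+        // Point read: id plus partition key.
+        Family read = container.readItem(family.id, new PartitionKey(family.lastName), Family.class).getItem();
+        System.out.println("Read item with id " + read.id);
+
+        // SQL query over JSON.
+        CosmosPagedIterable<Family> results = container.queryItems(
+            "SELECT * FROM c WHERE c.lastName = 'Andersen'",
+            new CosmosQueryRequestOptions(),
+            Family.class);
+        results.forEach(f -> System.out.println("Query returned id " + f.id));
+
+        client.close();
+    }
+}
+```
+
+The `ACCOUNT_HOST` and `ACCOUNT_KEY` system properties mirror the values passed on the `mvn exec:java` command line shown later in this article.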
+
+# [Async API](#tab/async)
+
+### Managing database resources using the asynchronous (async) API
+
+* Async API calls return immediately, without waiting for a response from the server. In light of this, the following code snippets show proper design patterns for accomplishing all of the preceding management tasks using async API.
+
+* `CosmosAsyncClient` initialization. The `CosmosAsyncClient` provides client-side logical representation for the Azure Cosmos DB database service. This client is used to configure and execute asynchronous requests against the service.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=CreateAsyncClient)]
+
+* `CosmosAsyncDatabase` creation.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateDatabaseIfNotExists)]
+
+* `CosmosAsyncContainer` creation.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateContainerIfNotExists)]
+
+* As with the sync API, item creation is accomplished using the `createItem` method. This example shows how to efficiently issue numerous async `createItem` requests by subscribing to a Reactive Stream that issues the requests and prints notifications. Because this simple example runs to completion and terminates, `CountDownLatch` instances are used to ensure the program doesn't terminate during item creation. **Proper asynchronous programming practice is not to block on async calls; in realistic use cases, requests are generated from a main() loop that executes indefinitely, eliminating the need to latch on async calls.** A condensed sketch of this pattern follows this list.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=CreateItem)]
+
+* As with the sync API, point reads are performed using the `readItem` method.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=ReadItem)]
+
+* As with the sync API, SQL queries over JSON are performed using the `queryItems` method.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=QueryItems)]
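+
+As a rough, hedged illustration of that pattern only (not the sample's exact code), issuing several async `createItem` requests and latching until they complete could look like this; the `Family` POJO, names, and values are assumptions:
+
+```java
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.CountDownLatch;
+
+import com.azure.cosmos.CosmosAsyncClient;
+import com.azure.cosmos.CosmosAsyncContainer;
+import com.azure.cosmos.CosmosClientBuilder;
+import reactor.core.publisher.Flux;
+
+public class AsyncCreateSketch {
+    public static class Family {
+        public String id;
+        public String lastName;
+        public Family() { }
+        public Family(String id, String lastName) { this.id = id; this.lastName = lastName; }
+    }
+
+    public static void main(String[] args) throws InterruptedException {
+        CosmosAsyncClient client = new CosmosClientBuilder()
+            .endpoint(System.getProperty("ACCOUNT_HOST"))
+            .key(System.getProperty("ACCOUNT_KEY"))
+            .buildAsyncClient();
+
+        // Assumes the database and container already exist.
+        CosmosAsyncContainer container =
+            client.getDatabase("AzureSampleFamilyDB").getContainer("FamilyContainer");
+
+        List<Family> families = Arrays.asList(
+            new Family("Andersen-1", "Andersen"),
+            new Family("Wakefield-1", "Wakefield"));
+
+        // The latch exists only because this sketch terminates; a long-running service wouldn't block.
+        CountDownLatch latch = new CountDownLatch(1);
+
+        Flux.fromIterable(families)
+            .flatMap(container::createItem)            // issue the async createItem requests
+            .subscribe(
+                response -> System.out.println("Created item, RU charge: " + response.getRequestCharge()),
+                error -> { error.printStackTrace(); latch.countDown(); },
+                latch::countDown);                     // signal completion of the stream
+
+        latch.await();                                 // wait before letting main() exit
+        client.close();
+    }
+}
+```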
+++
+## Run the app
+
+Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database.
+
+1. In the git terminal window, `cd` to the sample code folder.
+
+ ```bash
+ cd azure-cosmos-java-getting-started
+ ```
+
+2. In the git terminal window, use the following command to install the required Java packages.
+
+ ```bash
+ mvn package
+ ```
+
+3. In the git terminal window, use the following command to start the Java application. Replace SYNCASYNCMODE with `sync` or `async` depending on which sample code you would like to run, replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from the portal.
+
+ ```bash
+ mvn exec:java@SYNCASYNCMODE -DACCOUNT_HOST=YOUR_COSMOS_DB_HOSTNAME -DACCOUNT_KEY=YOUR_COSMOS_DB_MASTER_KEY
+
+ ```
+
+    The terminal window displays a notification that the AzureSampleFamilyDB database was created.
+
+4. The app creates a database named `AzureSampleFamilyDB`.
+5. The app creates a container named `FamilyContainer`.
+6. The app performs point reads by using item IDs and the partition key value (which is `lastName` in this sample).
+7. The app queries items to retrieve all families with a last name of 'Andersen', 'Wakefield', or 'Johnson'.
+8. The app doesn't delete the created resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account so that you don't incur charges.
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB for NoSQL account, create a document database and container using the Data Explorer, and run a Java app to do the same thing programmatically. You can now import additional data into your Azure Cosmos DB account.
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB](../import-data.md)
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-nodejs.md
+
+ Title: Quickstart - Use Node.js to query data from an Azure Cosmos DB for NoSQL account
+description: How to use Node.js to create an app that connects to an Azure Cosmos DB for NoSQL account and queries data.
+++
+ms.devlang: javascript
+ Last updated : 09/22/2022+++++
+# Quickstart: Use Node.js to connect and query data from an Azure Cosmos DB for NoSQL account
++
+> [!div class="op_single_selector"]
+>
+> * [.NET](quickstart-dotnet.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Java](quickstart-java.md)
+> * [Spring Data](quickstart-java-spring-data.md)
+> * [Python](quickstart-python.md)
+> * [Spark v3](quickstart-spark.md)
+> * [Go](quickstart-go.md)
+>
+
+Get started with the Azure Cosmos DB client library for JavaScript to create databases, containers, and items within your account. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). Follow these steps to install the package and try out example code for basic tasks.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-sql-api-javascript-samples) are available on GitHub as a Node.js project.
+
+## Prerequisites
+
+* In a terminal or command window, run ``node --version`` to check that the Node.js version is one of the current long term support (LTS) versions.
+* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
+
+## Setting up
+
+This section walks you through creating an Azure Cosmos DB account and setting up a project that uses the Azure Cosmos DB SQL API client library for JavaScript to manage resources.
+
+### Create an Azure Cosmos DB account
++
+### Configure environment variables
++
+### Create a new JavaScript project
+
+1. Create a new Node.js application in an empty folder using your preferred terminal.
+
+ ```bash
+ npm init -y
+ ```
+
+2. Edit the `package.json` file to use ES6 modules by adding the `"type": "module",` entry. This setting allows your code to use modern async/await syntax.
+
+ :::code language="javascript" source="~/cosmos-db-sql-api-javascript-samples/001-quickstart/package.json" highlight="6":::
+
+### Install the package
+
+1. Add the [@azure/cosmos](https://www.npmjs.com/package/@azure/cosmos) npm package to the Node.js project.
+
+ ```bash
+ npm install @azure/cosmos
+ ```
+
+1. Add the [dotenv](https://www.npmjs.com/package/dotenv) npm package to read environment variables from a `.env` file.
+
+ ```bash
+ npm install dotenv
+ ```
+
+### Create local development environment files
+
+1. Create a `.gitignore` file and add the following values to ignore your environment file and your node_modules folder. This configuration ensures that only secure and relevant files are checked into source control.
+
+ ```text
+ .env
+ node_modules
+ ```
+
+1. Create a `.env` file with the following variables:
+
+ ```text
+ COSMOS_ENDPOINT=
+ COSMOS_KEY=
+ ```
+
+### Create a code file
+
+Create an `index.js` and add the following boilerplate code to the file to read environment variables:
++
+### Add dependency to client library
+
+Add the following code at the end of the `index.js` file to include the required dependency to programmatically access Cosmos DB.
++
+### Add environment variables to code file
+
+Add the following code at the end of the `index.js` file to include the required environment variables. The endpoint and key were found at the end of the [account creation steps](#create-an-azure-cosmos-db-account).
++
+### Add variables for names
+
+Add the following variables to manage unique database and container names and the [partition key (pk)](/azure/cosmos-db/partitioning-overview).
++
+In this example, we append a timestamp to the database and container names in case you run this sample code more than once.
+
+## Object model
++
+You'll use the following JavaScript classes to interact with these resources:
+
+* [``CosmosClient``](/javascript/api/@azure/cosmos/cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
+* [``Database``](/javascript/api/@azure/cosmos/database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
+* [``Container``](/javascript/api/@azure/cosmos/container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
+* [``SqlQuerySpec``](/javascript/api/@azure/cosmos/sqlqueryspec) - This interface represents a SQL query and any query parameters.
+* [``QueryIterator<>``](/javascript/api/@azure/cosmos/queryiterator) - This class represents an iterator that can track the current page of results and get a new page of results.
+* [``FeedResponse<>``](/javascript/api/@azure/cosmos/feedresponse) - This class represents a single page of responses from the iterator.
+
+## Code examples
+
+* [Authenticate the client](#authenticate-the-client)
+* [Create a database](#create-a-database)
+* [Create a container](#create-a-container)
+* [Create an item](#create-an-item)
+* [Get an item](#get-an-item)
+* [Query items](#query-items)
+
+The sample code described in this article creates a database named ``adventureworks`` with a container named ``products``. The ``products`` container is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
+
+For this sample code, the container will use the category as a logical partition key.
+
+### Authenticate the client
+
+In the `index.js`, add the following code to use the resource **endpoint** and **key** to authenticate to Cosmos DB. Define a new instance of the [``CosmosClient``](/javascript/api/@azure/cosmos/cosmosclient) class.
++
+### Create a database
+
+Add the following code to use the [``CosmosClient.Databases.createDatabaseIfNotExists``](/javascript/api/@azure/cosmos/databases#@azure-cosmos-databases-createifnotexists) method to create a new database if it doesn't already exist. This method will return a reference to the existing or newly created database.
++
+### Create a container
+
+Add the following code to create a container with the [``Database.Containers.createContainerIfNotExistsAsync``](/javascript/api/@azure/cosmos/containers#@azure-cosmos-containers-createifnotexists) method. The method returns a reference to the container.
++
+### Create an item
+
+Add the following code to provide your data set. Each _product_ has a unique ID, name, category name (used as partition key) and other fields.
++
+Create a few items in the container by calling [``Container.Items.create``](/javascript/api/@azure/cosmos/items#@azure-cosmos-items-create) in a loop.
++
+### Get an item
+
+In Azure Cosmos DB, you can perform a point read operation by using both the unique identifier (``id``) and partition key fields. In the SDK, call [``Container.item().read``](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-read) passing in both values to return an item.
+
+The partition key is specific to a container. In this Contoso Products container, the category name, `categoryName`, is used as the partition key.
++
+### Query items
+
+Add the following code to query for all items that match a specific filter. Create a [parameterized query expression](/javascript/api/@azure/cosmos/sqlqueryspec), then call the [``Container.Items.query``](/javascript/api/@azure/cosmos/items#@azure-cosmos-items-query) method. This method returns a [``QueryIterator``](/javascript/api/@azure/cosmos/queryiterator) that manages the pages of results. Then, use a combination of ``while`` and ``for`` loops to fetch each page of results as a [``FeedResponse``](/javascript/api/@azure/cosmos/feedresponse) by calling [``fetchNext``](/javascript/api/@azure/cosmos/queryiterator#@azure-cosmos-queryiterator-fetchnext), and iterate over the individual data objects.
+
+The query is programmatically composed to `SELECT * FROM todo t WHERE t.partitionKey = 'Bikes, Touring Bikes'`.
++
+If you want to work with the data returned from the FeedResponse as an _item_, you need to create an [``Item``](/javascript/api/@azure/cosmos/item) by using the [``Container.Items.read``](#get-an-item) method.
+
+### Delete an item
+
+To delete an item, you need its ID and partition key to get the item, and then delete it. Add the following code, which uses the [``Container.Item.delete``](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-delete) method to delete the item.
++
+## Run the code
+
+This app creates an Azure Cosmos DB SQL API database and container. The example then creates items and then reads one item back. Finally, the example issues a query that should only return items matching a specific category. With each step, the example outputs metadata to the console about the steps it has performed.
+
+To run the app, use a terminal to navigate to the application directory and run the application.
+
+```bash
+node index.js
+```
+
+The output of the app should be similar to this example:
+
+```output
+contoso_1663276732626 database ready
+products_1663276732626 container ready
+'Touring-1000 Blue, 50' inserted
+'Touring-1000 Blue, 46' inserted
+'Mountain-200 Black, 42' inserted
+Touring-1000 Blue, 50 read
+08225A9E-F2B3-4FA3-AB08-8C70ADD6C3C2: Touring-1000 Blue, 50, BK-T79U-50
+2C981511-AC73-4A65-9DA3-A0577E386394: Touring-1000 Blue, 46, BK-T79U-46
+0F124781-C991-48A9-ACF2-249771D44029 Item deleted
+```
+
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB SQL API account, create a database, and create a container using the JavaScript SDK. You can now dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB SQL API resources.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Build a Node.js console app](sql-api-nodejs-get-started.md)
cosmos-db Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-portal.md
+
+ Title: Quickstart - Create Azure Cosmos DB resources from the Azure portal
+description: This quickstart shows how to create an Azure Cosmos DB database, container, and items by using the Azure portal.
+++++ Last updated : 08/26/2021++
+# Quickstart: Create an Azure Cosmos DB account, database, container, and items from the Azure portal
+
+> [!div class="op_single_selector"]
+> * [Azure portal](quickstart-portal.md)
+> * [.NET](quickstart-dotnet.md)
+> * [Java](quickstart-java.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Python](quickstart-python.md)
+>
+
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
+
+This quickstart demonstrates how to use the Azure portal to create an Azure Cosmos DB [API for NoSQL](../introduction.md) account, create a document database and container, and add data to the container. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
+
+## Prerequisites
+
+An Azure subscription or free Azure Cosmos DB trial account
+- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+
+- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
+
+## <a id="create-account"></a>Create an Azure Cosmos DB account
++
+## <a id="create-container-database"></a>Add a database and a container
+
+You can use the Data Explorer in the Azure portal to create a database and container.
+
+1. Select **Data Explorer** from the left navigation on your Azure Cosmos DB account page, and then select **New Container**.
+
+ You may need to scroll right to see the **Add Container** window.
+
+ :::image type="content" source="./media/create-cosmosdb-resources-portal/add-database-container.png" alt-text="The Azure portal Data Explorer, Add Container pane":::
+
+1. In the **Add container** pane, enter the settings for the new container.
+
+ |Setting|Suggested value|Description
+ ||||
+ |**Database ID**|ToDoList|Enter *ToDoList* as the name for the new database. Database names must contain from 1 through 255 characters, and they cannot contain `/, \\, #, ?`, or a trailing space. Check the **Share throughput across containers** option, it allows you to share the throughput provisioned on the database across all the containers within the database. This option also helps with cost savings. |
+ | **Database throughput**| You can provision **Autoscale** or **Manual** throughput. Manual throughput allows you to scale RU/s yourself whereas autoscale throughput allows the system to scale RU/s based on usage. Select **Manual** for this example. <br><br> Leave the throughput at 400 request units per second (RU/s). If you want to reduce latency, you can scale up the throughput later by estimating the required RU/s with the [capacity calculator](estimate-ru-with-capacity-planner.md).<br><br>**Note**: This setting is not available when creating a new container in a serverless account. |
+ |**Container ID**|Items|Enter *Items* as the name for your new container. Container IDs have the same character requirements as database names.|
+ |**Partition key**| /category| The sample described in this article uses */category* as the partition key.|
+
+ Don't add **Unique keys** or turn on **Analytical store** for this example. Unique keys let you add a layer of data integrity to the database by ensuring the uniqueness of one or more values per partition key. For more information, see [Unique keys in Azure Cosmos DB.](../unique-keys.md) [Analytical store](../analytical-store-introduction.md) is used to enable large-scale analytics against operational data without any impact to your transactional workloads.
+
+1. Select **OK**. The Data Explorer displays the new database and the container that you created.
+
+## Add data to your database
+
+Add data to your new database using Data Explorer.
+
+1. In **Data Explorer**, expand the **ToDoList** database, and expand the **Items** container. Next, select **Items**, and then select **New Item**.
+
+ :::image type="content" source="./media/quickstart-portal/azure-cosmosdb-new-document.png" alt-text="Create new documents in Data Explorer in the Azure portal":::
+
+1. Add the following structure to the document on the right side of the **Documents** pane:
+
+ ```json
+ {
+ "id": "1",
+ "category": "personal",
+ "name": "groceries",
+ "description": "Pick up apples and strawberries.",
+ "isComplete": false
+ }
+ ```
+
+1. Select **Save**.
+
+ :::image type="content" source="./media/quickstart-portal/azure-cosmosdb-save-document.png" alt-text="Copy in json data and select Save in Data Explorer in the Azure portal":::
+
+1. Select **New Item** again, and create and save another document with a unique `id`, and any other properties and values you want. Your documents can have any structure, because Azure Cosmos DB doesn't impose any schema on your data.
+
+## Query your data
++
+## Clean up resources
++
+If you wish to delete just the database and use the Azure Cosmos DB account in the future, you can delete the database with the following steps:
+
+* Go to your Azure Cosmos DB account.
+* Open **Data Explorer**, right-click the database that you want to delete, and then select **Delete Database**.
+* Enter the Database ID/database name to confirm the delete operation.
+
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB account, create a database and container using the Data Explorer. You can now import additional data to your Azure Cosmos DB account.
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB](../import-data.md)
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md
+
+ Title: 'Quickstart: Build a Python app using Azure Cosmos DB for NoSQL account'
+description: Presents a Python code sample you can use to connect to and query the Azure Cosmos DB for NoSQL
+++
+ms.devlang: python
+ Last updated : 08/25/2022++++
+# Quickstart: Build a Python application using an Azure Cosmos DB for NoSQL account
+
+> [!div class="op_single_selector"]
+>
+> * [.NET](quickstart-dotnet.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Java](quickstart-java.md)
+> * [Spring Data](quickstart-java-spring-data.md)
+> * [Python](quickstart-python.md)
+> * [Spark v3](quickstart-spark.md)
+> * [Go](quickstart-go.md)
+>
+
+In this quickstart, you create and manage an Azure Cosmos DB for NoSQL account from the Azure portal, and from Visual Studio Code with a Python app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+## Prerequisites
+
+- An Azure Cosmos DB account. Your options are:
+  * Within an Azure active subscription:
+    * [Create an Azure free Account](https://azure.microsoft.com/free) or use your existing subscription
+    * [Visual Studio Monthly Credits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers)
+    * [Azure Cosmos DB Free Tier](../optimize-dev-test.md#azure-cosmos-db-free-tier)
+  * Without an Azure active subscription:
+    * [Try Azure Cosmos DB for free](../try-free.md), a test environment that lasts for 30 days.
+    * [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator)
+- [Python 3.7+](https://www.python.org/downloads/), with the `python` executable in your `PATH`.
+- [Visual Studio Code](https://code.visualstudio.com/).
+- The [Python extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python#overview).
+- [Git](https://www.git-scm.com/downloads).
+- [Azure Cosmos DB for NoSQL SDK for Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos)
+
+## Important update on Python 2.x Support
+
+New releases of this SDK won't support Python 2.x starting January 1st, 2022. Please check the [CHANGELOG](./sdk-python.md) for more information.
+
+## Create a database account
++
+## Add a container
++
+## Add sample data
++
+## Query your data
++
+## Clone the sample application
+
+Now let's clone a API for NoSQL app from GitHub, set the connection string, and run it. This quickstart uses version 4 of the [Python SDK](https://pypi.org/project/azure-cosmos/#history).
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```cmd
+ md git-samples
+ ```
+
+ If you are using a bash prompt, you should instead use the following command:
+
+ ```bash
+ mkdir "git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+
+ ```bash
+ cd "git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-python-getting-started.git
+ ```
+
+## Update your connection string
+
+Now go back to the Azure portal to get your connection string information and copy it into the app.
+
+1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Keys** from the left navigation. Use the copy buttons on the right side of the screen to copy the **URI** and **Primary Key** into the *cosmos_get_started.py* file in the next step.
+
+ :::image type="content" source="./media/quickstart-python/access-key-and-uri-in-keys-settings-in-the-azure-portal.png" alt-text="Get an access key and URI in the Keys settings in the Azure portal":::
+
+2. In Visual Studio Code, open the *cosmos_get_started.py* file in *\git-samples\azure-cosmos-db-python-getting-started*.
+
+3. Copy your **URI** value from the portal (using the copy button) and make it the value of the **endpoint** variable in *cosmos_get_started.py*.
+
+    `endpoint = 'https://FILLME.documents.azure.com'`
+
+4. Then copy your **PRIMARY KEY** value from the portal and make it the value of the **key** in *cosmos_get_started.py*. You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
+
+ `key = 'FILLME'`
+
+5. Save the *cosmos_get_started.py* file.
+
+## Review the code
+
+This step is optional. Learn about the database resources created in code, or skip ahead to [Run the app](#run-the-app).
+
+The following snippets are all taken from the [cosmos_get_started.py](https://github.com/Azure-Samples/azure-cosmos-db-python-getting-started/blob/main/cosmos_get_started.py) file.
+
+* The CosmosClient is initialized. Make sure to update the "endpoint" and "key" values as described in the [Update your connection string](#update-your-connection-string) section.
+
+ [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=create_cosmos_client)]
+
+* A new database is created.
+
+ [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=create_database_if_not_exists)]
+
+* A new container is created, with 400 RU/s of [provisioned throughput](../request-units.md). We choose `lastName` as the [partition key](../partitioning-overview.md#choose-partitionkey), which allows us to do efficient queries that filter on this property.
+
+ [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=create_container_if_not_exists)]
+
+* Some items are added to the container. Containers are a collection of items (JSON documents) that can have varied schema. The helper methods ```get_[name]_family_item``` return representations of a family that are stored in Azure Cosmos DB as JSON documents.
+
+ [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=create_item)]
+
+* Point reads (key value lookups) are performed using the `read_item` method. We print out the [RU charge](../request-units.md) of each operation.
+
+ [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=read_item)]
+
+* A query is performed using SQL query syntax. Because we're using partition key values of ```lastName``` in the WHERE clause, Azure Cosmos DB will efficiently route this query to the relevant partitions, improving performance.
+
+ [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=query_items)]
+
+## Run the app
+
+1. In Visual Studio Code, select **View** > **Command Palette**.
+
+2. At the prompt, enter **Python: Select Interpreter** and then select the version of Python to use.
+
+ The Footer in Visual Studio Code is updated to indicate the interpreter selected.
+
+3. Select **View** > **Integrated Terminal** to open the Visual Studio Code integrated terminal.
+
+4. In the integrated terminal window, ensure you are in the *azure-cosmos-db-python-getting-started* folder. If not, run the following command to switch to the sample folder.
+
+ ```cmd
+    cd "\git-samples\azure-cosmos-db-python-getting-started"
+ ```
+
+5. Run the following command to install the azure-cosmos package.
+
+    ```bash
+ pip install azure-cosmos aiohttp
+ ```
+
+ If you get an error about access being denied when attempting to install azure-cosmos, you'll need to [run VS Code as an administrator](https://stackoverflow.com/questions/37700536/visual-studio-code-terminal-how-to-run-a-command-with-administrator-rights).
+
+6. Run the following command to run the sample and create and store new documents in Azure Cosmos DB.
+
+    ```bash
+ python cosmos_get_started.py
+ ```
+
+7. To confirm the new items were created and saved, in the Azure portal, select **Data Explorer** > **AzureSampleFamilyDatabase** > **Items**. View the items that were created. For example, here is a sample JSON document for the Andersen family:
+
+ ```json
+ {
+ "id": "Andersen-1569479288379",
+ "lastName": "Andersen",
+ "district": "WA5",
+ "parents": [
+ {
+ "familyName": null,
+ "firstName": "Thomas"
+ },
+ {
+ "familyName": null,
+ "firstName": "Mary Kay"
+ }
+ ],
+ "children": null,
+ "address": {
+ "state": "WA",
+ "county": "King",
+ "city": "Seattle"
+ },
+ "registered": true,
+ "_rid": "8K5qAIYtZXeBhB4AAAAAAA==",
+ "_self": "dbs/8K5qAA==/colls/8K5qAIYtZXc=/docs/8K5qAIYtZXeBhB4AAAAAAA==/",
+ "_etag": "\"a3004d78-0000-0800-0000-5d8c5a780000\"",
+ "_attachments": "attachments/",
+ "_ts": 1569479288
+ }
+ ```
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a Python app in Visual Studio Code. You can now import additional data to your Azure Cosmos DB account.
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB for the API for NoSQL](../import-data.md)
cosmos-db Quickstart Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-spark.md
+
+ Title: Quickstart - Manage data with Azure Cosmos DB Spark 3 OLTP Connector for API for NoSQL
+description: This quickstart presents a code sample for the Azure Cosmos DB Spark 3 OLTP Connector for API for NoSQL that you can use to connect to and query data in your Azure Cosmos DB account
+++
+ms.devlang: java
+ Last updated : 03/01/2022+++++
+# Quickstart: Manage data with Azure Cosmos DB Spark 3 OLTP Connector for API for NoSQL
+
+> [!div class="op_single_selector"]
+>
+> * [.NET](quickstart-dotnet.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Java](quickstart-java.md)
+> * [Spring Data](quickstart-java-spring-data.md)
+> * [Python](quickstart-python.md)
+> * [Spark v3](quickstart-spark.md)
+> * [Go](quickstart-go.md)
+>
+
+This quickstart shows how to use the Azure Cosmos DB Spark Connector to read from and write to Azure Cosmos DB. The Azure Cosmos DB Spark Connector supports Spark 3.1.x and 3.2.x.
+
+Throughout this quickstart, we rely on [Azure Databricks Runtime 10.4 with Spark 3.2.1](/azure/databricks/release-notes/runtime/10.4) and a Jupyter Notebook to show how to use the Azure Cosmos DB Spark Connector.
+
+You can use any other Spark offering as well (for example, Spark 3.1.1). You should also be able to use any language supported by Spark (PySpark, Scala, Java, and so on), or any Spark interface you're familiar with (Jupyter Notebook, Livy, and so on).
+
+## Prerequisites
+
+* An active Azure account. If you don't have one, you can sign up for a [free account](https://azure.microsoft.com/try/cosmosdb/). Alternatively, you can [use the Azure Cosmos DB Emulator](../local-emulator.md) for development and testing.
+
+* [Azure Databricks](/azure/databricks/release-notes/runtime/10.4) runtime 10.4 with Spark 3.2.1
+
+* (Optional) An [SLF4J binding](https://www.slf4j.org/manual.html) associates a specific logging framework with SLF4J.
+
+SLF4J is needed only if you plan to use logging. In that case, also download an SLF4J binding, which links the SLF4J API with the logging implementation of your choice. For more information, see the [SLF4J user manual](https://www.slf4j.org/manual.html).
+
+Install the Azure Cosmos DB Spark Connector in your Spark cluster by [using the latest version for Spark 3.2.x](https://aka.ms/azure-cosmos-spark-3-2-download).
+
+This getting-started guide is based on PySpark and Scala. You can run the following code snippets in an Azure Databricks PySpark or Scala notebook.
+
+## Create databases and containers
+
+First, set Azure Cosmos DB account credentials, and the Azure Cosmos DB Database name and container name.
+
+#### [Python](#tab/python)
+
+```python
+cosmosEndpoint = "https://REPLACEME.documents.azure.com:443/"
+cosmosMasterKey = "REPLACEME"
+cosmosDatabaseName = "sampleDB"
+cosmosContainerName = "sampleContainer"
+
+cfg = {
+ "spark.cosmos.accountEndpoint" : cosmosEndpoint,
+ "spark.cosmos.accountKey" : cosmosMasterKey,
+ "spark.cosmos.database" : cosmosDatabaseName,
+ "spark.cosmos.container" : cosmosContainerName,
+}
+```
+
+#### [Scala](#tab/scala)
+
+```scala
+val cosmosEndpoint = "https://REPLACEME.documents.azure.com:443/"
+val cosmosMasterKey = "REPLACEME"
+val cosmosDatabaseName = "sampleDB"
+val cosmosContainerName = "sampleContainer"
+
+val cfg = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint,
+ "spark.cosmos.accountKey" -> cosmosMasterKey,
+ "spark.cosmos.database" -> cosmosDatabaseName,
+ "spark.cosmos.container" -> cosmosContainerName
+)
+```
++
+Next, you can use the new Catalog API to create an Azure Cosmos DB Database and Container through Spark.
+
+#### [Python](#tab/python)
+
+```python
+# Configure Catalog Api to be used
+spark.conf.set("spark.sql.catalog.cosmosCatalog", "com.azure.cosmos.spark.CosmosCatalog")
+spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.accountEndpoint", cosmosEndpoint)
+spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.accountKey", cosmosMasterKey)
+
+# create an Azure Cosmos DB database using catalog api
+spark.sql("CREATE DATABASE IF NOT EXISTS cosmosCatalog.{};".format(cosmosDatabaseName))
+
+# create an Azure Cosmos DB container using catalog api
+spark.sql("CREATE TABLE IF NOT EXISTS cosmosCatalog.{}.{} using cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/id', manualThroughput = '1100')".format(cosmosDatabaseName, cosmosContainerName))
+```
+
+#### [Scala](#tab/scala)
+
+```scala
+// Configure Catalog Api to be used
+spark.conf.set(s"spark.sql.catalog.cosmosCatalog", "com.azure.cosmos.spark.CosmosCatalog")
+spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.accountEndpoint", cosmosEndpoint)
+spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.accountKey", cosmosMasterKey)
+
+// create an Azure Cosmos DB database using catalog api
+spark.sql(s"CREATE DATABASE IF NOT EXISTS cosmosCatalog.${cosmosDatabaseName};")
+
+// create an Azure Cosmos DB container using catalog api
+spark.sql(s"CREATE TABLE IF NOT EXISTS cosmosCatalog.${cosmosDatabaseName}.${cosmosContainerName} using cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/id', manualThroughput = '1100')")
+```
++
+When creating containers with the Catalog API, you can set the throughput and [partition key path](../partitioning-overview.md#choose-partitionkey) for the container to be created.
+
+For more information, see the full [Catalog API](https://github.com/Azure/azure-sdk-for-jav) documentation.
+
+## Ingest data
+
+The name of the data source is `cosmos.oltp`, and the following example shows how you can write an in-memory dataframe consisting of two items to Azure Cosmos DB:
+
+#### [Python](#tab/python)
+
+```python
+spark.createDataFrame((("cat-alive", "Schrodinger cat", 2, True), ("cat-dead", "Schrodinger cat", 2, False)))\
+ .toDF("id","name","age","isAlive") \
+ .write\
+ .format("cosmos.oltp")\
+ .options(**cfg)\
+ .mode("APPEND")\
+ .save()
+```
+
+#### [Scala](#tab/scala)
+
+```scala
+spark.createDataFrame(Seq(("cat-alive", "Schrodinger cat", 2, true), ("cat-dead", "Schrodinger cat", 2, false)))
+ .toDF("id","name","age","isAlive")
+ .write
+ .format("cosmos.oltp")
+ .options(cfg)
+ .mode("APPEND")
+ .save()
+```
++
+Note that `id` is a mandatory field for Azure Cosmos DB.
+
+For more information related to ingesting data, see the full [write configuration](https://github.com/Azure/azure-sdk-for-jav#write-config) documentation.
+
+## Query data
+
+Using the same `cosmos.oltp` data source, we can query data and use `filter` to push down filters:
+
+#### [Python](#tab/python)
+
+```python
+from pyspark.sql.functions import col
+
+df = spark.read.format("cosmos.oltp").options(**cfg)\
+ .option("spark.cosmos.read.inferSchema.enabled", "true")\
+ .load()
+
+df.filter(col("isAlive") == True)\
+ .show()
+```
+
+#### [Scala](#tab/scala)
+
+```scala
+import org.apache.spark.sql.functions.col
+
+val df = spark.read.format("cosmos.oltp").options(cfg).load()
+
+df.filter(col("isAlive") === true)
+ .withColumn("age", col("age") + 1)
+ .show()
+```
++
+For more information related to querying data, see the full [query configuration](https://github.com/Azure/azure-sdk-for-jav#query-config) documentation.
+
+## Partial document update using Patch
+
+Using the same `cosmos.oltp` data source, we can do partial update in Azure Cosmos DB using Patch API:
+
+#### [Python](#tab/python)
+
+```python
+cfgPatch = {"spark.cosmos.accountEndpoint": cosmosEndpoint,
+ "spark.cosmos.accountKey": cosmosMasterKey,
+ "spark.cosmos.database": cosmosDatabaseName,
+ "spark.cosmos.container": cosmosContainerName,
+ "spark.cosmos.write.strategy": "ItemPatch",
+ "spark.cosmos.write.bulk.enabled": "false",
+ "spark.cosmos.write.patch.defaultOperationType": "Set",
+ "spark.cosmos.write.patch.columnConfigs": "[col(name).op(set)]"
+ }
+
+id = "<document-id>"
+query = "select * from cosmosCatalog.{}.{} where id = '{}';".format(
+ cosmosDatabaseName, cosmosContainerName, id)
+
+dfBeforePatch = spark.sql(query)
+print("document before patch operation")
+dfBeforePatch.show()
+
+data = [{"id": id, "name": "Joel Brakus"}]
+patchDf = spark.createDataFrame(data)
+
+patchDf.write.format("cosmos.oltp").mode("Append").options(**cfgPatch).save()
+
+dfAfterPatch = spark.sql(query)
+print("document after patch operation")
+dfAfterPatch.show()
+```
+
+For more samples related to partial document update, see the GitHub code sample [Patch Sample](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/Python/patch-sample.py).
++
+#### [Scala](#tab/scala)
+
+```scala
+val cfgPatch = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint,
+ "spark.cosmos.accountKey" -> cosmosMasterKey,
+ "spark.cosmos.database" -> cosmosDatabaseName,
+ "spark.cosmos.container" -> cosmosContainerName,
+ "spark.cosmos.write.strategy" -> "ItemPatch",
+ "spark.cosmos.write.bulk.enabled" -> "false",
+
+ "spark.cosmos.write.patch.columnConfigs" -> "[col(name).op(set)]"
+ )
+
+val id = "<document-id>"
+val query = s"select * from cosmosCatalog.${cosmosDatabaseName}.${cosmosContainerName} where id = '$id';"
+
+val dfBeforePatch = spark.sql(query)
+println("document before patch operation")
+dfBeforePatch.show()
+val patchDf = Seq(
+ (id, "Joel Brakus")
+ ).toDF("id", "name")
+
+patchDf.write.format("cosmos.oltp").mode("Append").options(cfgPatch).save()
+val dfAfterPatch = spark.sql(query)
+println("document after patch operation")
+dfAfterPatch.show()
+```
+
+For more samples related to partial document update, see the GitHub code sample [Patch Sample](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/Scala/PatchSample.scala).
+++
+## Schema inference
+
+When querying data, the Spark Connector can infer the schema based on sampling existing items by setting `spark.cosmos.read.inferSchema.enabled` to `true`.
+
+#### [Python](#tab/python)
+
+```python
+df = spark.read.format("cosmos.oltp").options(**cfg)\
+ .option("spark.cosmos.read.inferSchema.enabled", "true")\
+ .load()
+
+df.printSchema()
+
+# Alternatively, you can pass the custom schema you want to be used to read the data:
+
+customSchema = StructType([
+ StructField("id", StringType()),
+ StructField("name", StringType()),
+ StructField("type", StringType()),
+ StructField("age", IntegerType()),
+ StructField("isAlive", BooleanType())
+ ])
+
+df = spark.read.schema(customSchema).format("cosmos.oltp").options(**cfg)\
+ .load()
+
+df.printSchema()
+
+# If no custom schema is specified and schema inference is disabled, then the resulting data will contain the raw JSON content of the items:
+
+df = spark.read.format("cosmos.oltp").options(**cfg)\
+ .load()
+
+df.printSchema()
+```
+
+#### [Scala](#tab/scala)
+
+```scala
+val cfgWithAutoSchemaInference = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint,
+ "spark.cosmos.accountKey" -> cosmosMasterKey,
+ "spark.cosmos.database" -> cosmosDatabaseName,
+ "spark.cosmos.container" -> cosmosContainerName,
+ "spark.cosmos.read.inferSchema.enabled" -> "true"
+)
+
+val df = spark.read.format("cosmos.oltp").options(cfgWithAutoSchemaInference).load()
+df.printSchema()
+
+df.show()
+```
++
+For more information related to schema inference, see the full [schema inference configuration](https://github.com/Azure/azure-sdk-for-jav#schema-inference-config) documentation.
+
+## Configuration reference
+
+The Azure Cosmos DB Spark 3 OLTP Connector for API for NoSQL has a complete configuration reference that provides additional and advanced settings for writing and querying data, serialization, streaming by using change feed, partitioning, throughput management, and more. For a complete listing with details, see the [Spark Connector Configuration Reference](https://aka.ms/azure-cosmos-spark-3-config) on GitHub.
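+
+As an illustration of how these advanced options are applied, the following sketch layers a few throughput-control settings on top of the basic configuration used earlier in this article. The specific keys and values shown here are illustrative assumptions for demonstration purposes only; confirm the exact option names, required resources (such as the global throughput control container), and supported values in the configuration reference before relying on them.
+
+```python
+# A minimal sketch, not an exhaustive list: advanced settings are passed the same way
+# as the basic ones, as additional key/value pairs in the options dictionary.
+cfgAdvanced = dict(cfg)  # start from the basic endpoint/key/database/container config
+cfgAdvanced.update({
+    # Illustrative throughput-control keys; verify names and values in the reference.
+    "spark.cosmos.throughputControl.enabled": "true",
+    "spark.cosmos.throughputControl.name": "SourceContainerThroughputControl",
+    "spark.cosmos.throughputControl.targetThroughputThreshold": "0.30",
+    "spark.cosmos.throughputControl.globalControl.database": cosmosDatabaseName,
+    "spark.cosmos.throughputControl.globalControl.container": "ThroughputControl"
+})
+
+dfThrottled = spark.read.format("cosmos.oltp").options(**cfgAdvanced).load()
+dfThrottled.show()
+```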
+
+## Migrate to Spark 3 Connector
+
+If you're using the older Spark 2.4 Connector, see the [migration guide](https://github.com/Azure/azure-sdk-for-jav) to learn how to move to the Spark 3 Connector.
+
+## Next steps
+
+* Azure Cosmos DB Apache Spark 3 OLTP Connector for API for NoSQL: [Release notes and resources](sdk-java-spark-v3.md)
+* Learn more about [Apache Spark](https://spark.apache.org/).
+* Learn how to configure [throughput control](throughput-control-spark.md).
+* Check out more [samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples).
cosmos-db Quickstart Template Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-template-bicep.md
+
+ Title: Quickstart - Create an Azure Cosmos DB database and a container using Bicep
+description: Quickstart showing how to create an Azure Cosmos DB database and a container using Bicep
++
+tags: azure-resource-manager, bicep
+++ Last updated : 04/18/2022+
+#Customer intent: As a database admin who is new to Azure, I want to use Azure Cosmos DB to store and manage my data.
++
+# Quickstart: Create an Azure Cosmos DB database and a container using Bicep
++
+Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). This quickstart focuses on the process of deploying a Bicep file to create an Azure Cosmos DB database and a container within that database. You can later store data in this container.
++
+## Prerequisites
+
+An Azure subscription or free Azure Cosmos DB trial account.
+
+- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/cosmosdb-sql/).
++
+Three Azure resources are defined in the Bicep file:
+
+- [Microsoft.DocumentDB/databaseAccounts](/azure/templates/microsoft.documentdb/databaseaccounts): Create an Azure Cosmos DB account.
+
+- [Microsoft.DocumentDB/databaseAccounts/sqlDatabases](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases): Create an Azure Cosmos DB database.
+
+- [Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases/containers): Create an Azure Cosmos DB container.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters primaryRegion=<primary-region> secondaryRegion=<secondary-region>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -primaryRegion "<primary-region>" -secondaryRegion "<secondary-region>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<primary-region\>** with the primary replica region for the Azure Cosmos DB account, such as **WestUS**. Replace **\<secondary-region\>** with the secondary replica region for the Azure Cosmos DB account, such as **EastUS**.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place.
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created an Azure Cosmos DB account, a database and a container by using a Bicep file and validated the deployment. To learn more about Azure Cosmos DB and Bicep, continue on to the articles below.
+
+- Read an [Overview of Azure Cosmos DB](../introduction.md).
+- Learn more about [Bicep](../../azure-resource-manager/bicep/overview.md).
+- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
cosmos-db Quickstart Template Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-template-json.md
+
+ Title: Quickstart - Create an Azure Cosmos DB database and a container by using an Azure Resource Manager template
+description: Quickstart showing how to create an Azure Cosmos DB database and a container by using an Azure Resource Manager template
+++
+tags: azure-resource-manager
+++ Last updated : 08/26/2021+
+#Customer intent: As a database admin who is new to Azure, I want to use Azure Cosmos DB to store and manage my data.
++
+# Quickstart: Create an Azure Cosmos DB database and a container by using an ARM template
+
+Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create an Azure Cosmos DB database and a container within that database. You can later store data in this container.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql%2Fazuredeploy.json)
+
+## Prerequisites
+
+An Azure subscription or free Azure Cosmos DB trial account
+
+- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+
+- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
+
+## Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/cosmosdb-sql/).
++
+Three Azure resources are defined in the template:
+
+* [Microsoft.DocumentDB/databaseAccounts](/azure/templates/microsoft.documentdb/databaseaccounts): Create an Azure Cosmos DB account.
+
+* [Microsoft.DocumentDB/databaseAccounts/sqlDatabases](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases): Create an Azure Cosmos DB database.
+
+* [Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases/containers): Create an Azure Cosmos DB container.
+
+More Azure Cosmos DB template samples can be found in the [quickstart template gallery](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Documentdb).
+
+## Deploy the template
+
+1. Select the following image to sign in to Azure and open a template. The template creates an Azure Cosmos DB account, a database, and a container.
+
+ [:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql%2Fazuredeploy.json)
+
+2. Select or enter the following values.
+
+ :::image type="content" source="../media/quick-create-template/create-cosmosdb-using-template-portal.png" alt-text="ARM template, Azure Cosmos DB integration, deploy portal":::
+
+ Unless otherwise specified, use the default values to create the Azure Cosmos DB resources.
+
+ * **Subscription**: select an Azure subscription.
+ * **Resource group**: select **Create new**, enter a unique name for the resource group, and then click **OK**.
+ * **Location**: select a location. For example, **Central US**.
+ * **Account Name**: enter a name for the Azure Cosmos DB account. It must be globally unique.
+ * **Location**: enter a location where you want to create your Azure Cosmos DB account. The Azure Cosmos DB account can be in the same location as the resource group.
+ * **Primary Region**: The primary replica region for the Azure Cosmos DB account.
+ * **Secondary region**: The secondary replica region for the Azure Cosmos DB account.
+ * **Default Consistency Level**: The default consistency level for the Azure Cosmos DB account.
+ * **Max Staleness Prefix**: Max stale requests. Required for BoundedStaleness.
+ * **Max Interval in Seconds**: Max lag time. Required for BoundedStaleness.
+ * **Database Name**: The name of the Azure Cosmos DB database.
+ * **Container Name**: The name of the Azure Cosmos DB container.
+ * **Throughput**: The throughput for the container, minimum throughput value is 400 RU/s.
+ * **I agree to the terms and conditions stated above**: Select.
+
+3. Select **Purchase**. After the Azure Cosmos DB account has been deployed successfully, you get a notification:
+
+ :::image type="content" source="../media/quick-create-template/resource-manager-template-portal-deployment-notification.png" alt-text="ARM template, Azure Cosmos DB integration, deploy portal notification":::
+
+The Azure portal is used to deploy the template. In addition to the Azure portal, you can also use Azure PowerShell, the Azure CLI, and the REST API. To learn other deployment methods, see [Deploy templates](../../azure-resource-manager/templates/deploy-powershell.md).
+
+## Validate the deployment
+
+You can either use the Azure portal to check the Azure Cosmos DB account, the database, and the container, or use the following Azure CLI or Azure PowerShell script to list the Azure Cosmos DB account that was created.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+echo "Enter your Azure Cosmos DB account name:" &&
+read cosmosAccountName &&
+echo "Enter the resource group where the Azure Cosmos DB account exists:" &&
+read resourcegroupName &&
+az cosmosdb show -g $resourcegroupName -n $cosmosAccountName
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+$resourceGroupName = Read-Host -Prompt "Enter the resource group name where your Azure Cosmos DB account exists"
+(Get-AzResource -ResourceType "Microsoft.DocumentDB/databaseAccounts" -ResourceGroupName $resourceGroupName).Name
+ Write-Host "Press [ENTER] to continue..."
+```
+++
+## Clean up resources
+
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place.
+When no longer needed, delete the resource group, which deletes the Azure Cosmos DB account and the related resources. To delete the resource group by using Azure CLI or Azure PowerShell:
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
+Remove-AzResourceGroup -Name $resourceGroupName
+Write-Host "Press [ENTER] to continue..."
+```
+++
+## Next steps
+
+In this quickstart, you created an Azure Cosmos DB account, a database and a container by using an ARM template and validated the deployment. To learn more about Azure Cosmos DB and Azure Resource Manager, continue on to the articles below.
+
+- Read an [Overview of Azure Cosmos DB](../introduction.md)
+- Learn more about [Azure Resource Manager](../../azure-resource-manager/management/overview.md)
+- Get other [Azure Cosmos DB Resource Manager templates](./samples-resource-manager-templates.md)
+- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-terraform.md
+
+ Title: Quickstart - Create an Azure Cosmos DB database and container using Terraform
+description: Quickstart showing how to create an Azure Cosmos DB database and a container using Terraform
++
+tags: azure-resource-manager, terraform
++++ Last updated : 09/22/2022++
+# Quickstart - Create an Azure Cosmos DB database and container using Terraform
++
+Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). This quickstart focuses on the process of deploying Terraform files to create an Azure Cosmos DB database and a container within that database. You can later store data in this container.
+
+## Prerequisites
+
+An Azure subscription or free Azure Cosmos DB trial account
+
+- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+
+Terraform should be installed on your local computer. For installation instructions, see [Install Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli).
+
+## Review the Terraform files
+
+The Terraform files used in this quickstart can be found in the [Terraform samples repository](https://github.com/Azure/terraform). Create the following three files: providers.tf, main.tf, and variables.tf. Variables can be set on the command line or in a terraform.tfvars file.
+
+### Provider Terraform File
++
+### Main Terraform File
++
+### Variables Terraform File
++
+Three Cosmos DB resources are defined in the main Terraform file.
+
+- [Microsoft.DocumentDB/databaseAccounts](/azure/templates/microsoft.documentdb/databaseaccounts): Create an Azure Cosmos account.
+
+- [Microsoft.DocumentDB/databaseAccounts/sqlDatabases](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases): Create an Azure Cosmos database.
+
+- [Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases/containers): Create an Azure Cosmos container.
+
+## Deploy via Terraform
+
+1. Save the Terraform files as main.tf, variables.tf, and providers.tf to your local computer.
+2. Sign in to Azure in your terminal by using the Azure CLI or Azure PowerShell.
+3. Deploy by running the following Terraform commands:
+   - `terraform init`
+   - `terraform plan`
+   - `terraform apply`
+
+## Validate the deployment
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+### [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az resource list --resource-group "your resource group name"
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName "your resource group name"
+```
+++
+## Clean up resources
+
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place.
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+### [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az group delete --name "your resource group name"
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name "your resource group name"
+```
+++
+## Next steps
+
+In this quickstart, you created an Azure Cosmos DB account, a database, and a container by using Terraform, and you validated the deployment. To learn more about Azure Cosmos DB and Terraform, continue on to the articles below.
+
+- Read an [Overview of Azure Cosmos DB](../introduction.md).
+- Learn more about [Terraform](https://www.terraform.io/intro).
+- Learn more about [Azure Terraform Provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs).
+- [Manage Cosmos DB with Terraform](manage-with-terraform.md)
+- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
cosmos-db Read Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/read-change-feed.md
+
+ Title: Reading Azure Cosmos DB change feed
+description: This article describes different options available to read and access change feed in Azure Cosmos DB.
++++++ Last updated : 06/30/2021++
+# Reading Azure Cosmos DB change feed
+
+You can work with the Azure Cosmos DB change feed using either a push model or a pull model. With a push model, the change feed processor pushes work to a client that has business logic for processing this work. However, the complexity in checking for work and storing state for the last processed work is handled within the change feed processor.
+
+With a pull model, the client has to pull the work from the server. In this case, the client not only has business logic for processing work, but also stores state for the last processed work, handles load balancing across multiple clients processing work in parallel, and handles errors.
+
+When reading from the Azure Cosmos DB change feed, we usually recommend using a push model because you won't need to worry about:
+
+- Polling the change feed for future changes.
+- Storing state for the last processed change. When you read from the change feed, this state is automatically stored in a [lease container](change-feed-processor.md#components-of-the-change-feed-processor).
+- Load balancing across multiple clients consuming changes. For example, if one client can't keep up with processing changes and another has available capacity.
+- [Handling errors](change-feed-processor.md#error-handling). For example, automatically retrying failed changes that weren't correctly processed after an unhandled exception in code or a transient network issue.
+
+The majority of scenarios that use the Azure Cosmos DB change feed will use one of the push model options. However, there are some scenarios where you might want the additional low-level control of the pull model. These include:
+
+- Reading changes from a particular partition key
+- Controlling the pace at which your client receives changes for processing
+- Doing a one-time read of the existing data in the change feed (for example, to do a data migration)
+
+## Reading change feed with a push model
+
+Using a push model is the easiest way to read from the change feed. There are two ways you can read from the change feed with a push model: [Azure Functions Azure Cosmos DB triggers](change-feed-functions.md) and the [change feed processor library](change-feed-processor.md). Azure Functions uses the change feed processor behind the scenes, so these are both very similar ways to read the change feed. Think of Azure Functions as simply a hosting platform for the change feed processor, not an entirely different way of reading the change feed.
+
+### Azure Functions
+
+Azure Functions is the simplest option if you are just getting started using the change feed. Due to its simplicity, it is also the recommended option for most change feed use cases. When you create an Azure Functions trigger for Azure Cosmos DB, you select the container to connect to, and the Azure Function gets triggered whenever there is a change in the container. Because Azure Functions uses the change feed processor behind the scenes, it automatically parallelizes change processing across your container's [partitions](../partitioning-overview.md).
+
+Developing with Azure Functions is an easy experience and can be faster than deploying the change feed processor on your own. Triggers can be created using the Azure Functions portal or programmatically using SDKs. Visual Studio and VS Code provide support to write Azure Functions, and you can even use the Azure Functions CLI for cross-platform development. You can write and debug the code on your desktop, and then deploy the function with one click. See [Serverless database computing using Azure Functions](serverless-computing-database.md) and [Using change feed with Azure Functions](change-feed-functions.md) articles to learn more.
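+
+As a rough illustration of the programming model, the following sketch shows a change feed triggered function using the Azure Functions Python v1 programming model. It assumes a `cosmosDBTrigger` binding named `documents` is declared in *function.json*, pointing at the monitored container and a lease container; treat the names here as placeholders rather than a complete, production-ready setup.
+
+```python
+import logging
+
+import azure.functions as func
+
+
+def main(documents: func.DocumentList) -> None:
+    # The trigger delivers a batch of changed items from the container's change feed.
+    if documents:
+        logging.info("Processing %d changed documents", len(documents))
+        for doc in documents:
+            # Each document exposes the item's properties with dictionary-style access.
+            logging.info("Changed item id: %s", doc["id"])
+```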
+
+### Change feed processor library
+
+The change feed processor gives you more control of the change feed and still hides most complexity. The change feed processor library follows the observer pattern, where your processing function is called by the library. The change feed processor library will automatically check for changes and, if changes are found, "push" these to the client. If you have a high throughput change feed, you can instantiate multiple clients to read the change feed. The change feed processor library will automatically divide the load among the different clients. You won't have to implement any logic for load balancing across multiple clients or any logic to maintain the lease state.
+
+The change feed processor library guarantees an "at-least-once" delivery of all of the changes. In other words, if you use the change feed processor library, your processing function will be called successfully for every item in the change feed. If there is an unhandled exception in the business logic in your processing function, the failed changes will be retried until they are processed successfully. To prevent your change feed processor from getting "stuck" continuously retrying the same changes, add logic in your processing function to write documents, upon exception, to a dead-letter queue. Learn more about [error handling](change-feed-processor.md#error-handling).
+
+In Azure Functions, the recommendation for handling errors is the same. You should still add logic in your delegate code to write documents, upon exception, to a dead-letter queue. However, if there is an unhandled exception in your Azure Function, the change that generated the exception won't be automatically retried. If there is an unhandled exception in the business logic, the Azure Function will move on to processing the next change. The Azure Function won't retry the same failed change.
+
+Like Azure Functions, developing with the change feed processor library is easy. However, you are responsible for deploying one or more hosts for the change feed processor. A host is an application instance that uses the change feed processor to listen for changes. While Azure Functions has capabilities for automatic scaling, you are responsible for scaling your hosts. To learn more, see [using the change feed processor](change-feed-processor.md#dynamic-scaling). The change feed processor library is part of the [Azure Cosmos DB SDK V3](https://github.com/Azure/azure-cosmos-dotnet-v3).
+
+## Reading change feed with a pull model
+
+The [change feed pull model](change-feed-pull-model.md) allows you to consume the change feed at your own pace. Changes must be requested by the client and there is no automatic polling for changes. If you want to permanently "bookmark" the last processed change (similar to the push model's lease container), you'll need to [save a continuation token](change-feed-pull-model.md#saving-continuation-tokens).
+
+Using the change feed pull model, you get more low-level control of the change feed. When reading the change feed with the pull model, you have three options:
+
+- Read changes for an entire container
+- Read changes for a specific [FeedRange](change-feed-pull-model.md#using-feedrange-for-parallelization)
+- Read changes for a specific partition key value
+
+You can parallelize the processing of changes across multiple clients, just as you can with the change feed processor. However, the pull model does not automatically handle load-balancing across clients. When you use the pull model to parallelize processing of the change feed, you'll first obtain a list of FeedRanges. A FeedRange spans a range of partition key values. You'll need to have an orchestrator process that obtains FeedRanges and distributes them among your machines. You can then use these FeedRanges to have multiple machines read the change feed in parallel.
+
+There is no built-in "at-least-once" delivery guarantee with the pull model. The pull model gives you low-level control to decide how you would like to handle errors.
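+
+To make the pull pattern concrete, the following minimal sketch reads changes by using the Azure Cosmos DB Python SDK's change feed query. It's an illustration of the pull approach rather than the dedicated pull model API described in the linked article (which is available in the .NET and Java SDKs); the endpoint, key, and resource names are placeholders, so verify the method and parameters against your SDK version before relying on them.
+
+```python
+from azure.cosmos import CosmosClient
+
+# Placeholder endpoint, key, and resource names.
+client = CosmosClient("https://<account>.documents.azure.com:443/", "<account-key>")
+container = client.get_database_client("<database>").get_container_client("<container>")
+
+# The client asks for changes instead of having them pushed to it, and it can process
+# the results at its own pace. Here the change feed is read from the beginning.
+for item in container.query_items_change_feed(is_start_from_beginning=True):
+    print(item["id"])
+```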
+
+## Change feed in APIs for Cassandra and MongoDB
+
+Change feed functionality is surfaced as change streams in the API for MongoDB and as queries with a predicate in the API for Cassandra. To learn more about the implementation details for the API for MongoDB, see [Change streams in Azure Cosmos DB for MongoDB](../mongodb/change-streams.md).
+
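+From the driver's point of view, a change stream against the API for MongoDB looks like a standard MongoDB change stream. The following PyMongo-style sketch is illustrative only: the connection string and names are placeholders, and the Azure Cosmos DB for MongoDB implementation has specific requirements (for example, a limited set of supported operation types and projection fields) that are described in the linked change streams article.
+
+```python
+from pymongo import MongoClient
+
+# Placeholder connection string and resource names.
+client = MongoClient("<mongodb-connection-string>")
+collection = client["<database>"]["<collection>"]
+
+# Watch inserts, updates, and replaces, and ask for the full document on updates.
+change_stream = collection.watch(
+    [
+        {"$match": {"operationType": {"$in": ["insert", "update", "replace"]}}},
+        {"$project": {"_id": 1, "fullDocument": 1, "ns": 1, "documentKey": 1}},
+    ],
+    full_document="updateLookup",
+)
+
+for change in change_stream:
+    print(change["fullDocument"])
+```
+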
+Native Apache Cassandra provides change data capture (CDC), a mechanism to flag specific tables for archival as well as rejecting writes to those tables once a configurable size-on-disk for the CDC log is reached. The change feed feature in Azure Cosmos DB for Apache Cassandra enhances the ability to query the changes with predicate via CQL. To learn more about the implementation details, see [Change feed in the Azure Cosmos DB for Apache Cassandra](../cassandr).
+
+## Next steps
+
+You can now continue to learn more about change feed in the following articles:
+
+* [Overview of change feed](../change-feed.md)
+* [Using change feed with Azure Functions](change-feed-functions.md)
+* [Using change feed processor library](change-feed-processor.md)
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-dotnet.md
+
+ Title: Examples for Azure Cosmos DB for NoSQL SDK for .NET
+description: Find .NET SDK examples on GitHub for common tasks using the Azure Cosmos DB for NoSQL.
++++
+ms.devlang: csharp
+ Last updated : 07/06/2022+++
+# Examples for Azure Cosmos DB for NoSQL SDK for .NET
++
+> [!div class="op_single_selector"]
+>
+> * [.NET](samples-dotnet.md)
+>
+
+The [cosmos-db-sql-api-dotnet-samples](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples) GitHub repository includes multiple sample projects. These projects illustrate how to perform common operations on Azure Cosmos DB for NoSQL resources.
+
+## Prerequisites
+
+* An Azure account with an active subscription. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
+* Azure Cosmos DB for NoSQL account. [Create an API for NoSQL account](how-to-create-account.md).
+* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+
+## Samples
+
+The sample projects are all self-contained and are designed to be run individually without any dependencies between projects.
+
+### Client
+
+| Task | API reference |
+| :--- | :--- |
+| [Create a client with endpoint and key](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/101-client-endpoint-key/Program.cs#L11-L14) |[``CosmosClient(string, string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with connection string](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/102-client-connection-string/Program.cs#L11-L13) |[``CosmosClient(string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with ``DefaultAzureCredential``](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/103-client-default-credential/Program.cs#L20-L23) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with custom ``TokenCredential``](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/104-client-secret-credential/Program.cs#L25-L28) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
+
+### Databases
+
+| Task | API reference |
+| :--- | :--- |
+| [Create a database](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/200-create-database/Program.cs#L19-L21) |[``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) |
+
+### Containers
+
+| Task | API reference |
+| :--- | :--- |
+| [Create a container](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/225-create-container/Program.cs#L26-L30) |[``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) |
+
+### Items
+
+| Task | API reference |
+| :--- | :--- |
+| [Create an item](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/250-create-item/Program.cs#L35-L46) |[``Container.CreateItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync) |
+| [Point read an item](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/275-read-item/Program.cs#L51-L54) |[``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) |
+| [Query multiple items](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/300-query-items/Program.cs#L64-L80) |[``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) |
+
+## Next steps
+
+Dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB for NoSQL resources.
+
+> [!div class="nextstepaction"]
+> [Get started with Azure Cosmos DB for NoSQL and .NET](how-to-dotnet-get-started.md)
cosmos-db Samples Java Spring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-java-spring-data.md
+
+ Title: 'Azure Cosmos DB for NoSQL: Spring Data v3 examples'
+description: Find Spring Data v3 examples on GitHub for common tasks using the Azure Cosmos DB for NoSQL, including CRUD operations.
++++ Last updated : 08/26/2021++++
+# Azure Cosmos DB for NoSQL: Spring Data Azure Cosmos DB v3 examples
+
+> [!div class="op_single_selector"]
+> * [.NET SDK Examples](samples-dotnet.md)
+> * [Java V4 SDK Examples](samples-java.md)
+> * [Spring Data V3 SDK Examples](samples-java-spring-data.md)
+> * [Node.js Examples](samples-nodejs.md)
+> * [Python Examples](samples-python.md)
+> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db)
+>
+>
+
+> [!IMPORTANT]
+> These release notes are for version 3 of Spring Data Azure Cosmos DB. You can find [release notes for version 2 here](sdk-java-spring-data-v2.md).
+>
+> Spring Data Azure Cosmos DB supports only the API for NoSQL.
+>
+> See these articles for information about Spring Data on other Azure Cosmos DB APIs:
+> * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db)
+> * [Spring Data MongoDB with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-mongodb-with-cosmos-db)
+>
+
+> [!IMPORTANT]
+>[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+>
+>- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month that you can use for paid Azure services.
+>
+>[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
+>
+
+The latest sample applications that perform CRUD operations and other common operations on Azure Cosmos DB resources are included in the [azure-spring-data-cosmos-java-sql-api-samples](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples) GitHub repository. This article provides:
+
+* Links to the tasks in each of the example Spring Data Azure Cosmos DB project files.
+* Links to the related API reference content.
+
+**Prerequisites**
+
+You need the following to run this sample application:
+
+* Java Development Kit 8
+* Spring Data Azure Cosmos DB v3
+
+You can optionally use Maven to get the latest Spring Data Azure Cosmos DB v3 binaries for use in your project. Maven automatically adds any necessary dependencies. Otherwise, you can directly download the dependencies listed in the **pom.xml** file and add them to your build path.
+
+```xml
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-spring-data-cosmos</artifactId>
+ <version>LATEST</version>
+</dependency>
+```
+
+**Running the sample applications**
+
+Clone the sample repo:
+```bash
+$ git clone https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples
+
+$ cd azure-spring-data-cosmos-java-sql-api-samples
+```
+
+You can run the samples either from an IDE (Eclipse, IntelliJ, or VS Code) or from the command line by using Maven.
+
+In **application.properties** these environment variables must be set
+
+```properties
+cosmos.uri=${ACCOUNT_HOST}
+cosmos.key=${ACCOUNT_KEY}
+cosmos.secondaryKey=${SECONDARY_ACCOUNT_KEY}
+
+dynamic.collection.name=spel-property-collection
+# Populate query metrics
+cosmos.queryMetricsEnabled=true
+```
+
+in order to give the samples read/write access to your account, databases and containers.
+
+Your IDE may provide the ability to execute the Spring Data sample code. Otherwise you may use the following terminal command to execute the sample:
+
+```bash
+mvn spring-boot:run
+```
+
+## Document CRUD examples
+The [samples](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/jav) file shows how to perform the following document CRUD tasks. For background, see the related conceptual article.
+
+| Task | API reference |
+| --- | --- |
+| [Create a document](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/SampleApplication.java#L46-L47) | CosmosRepository.save |
+| [Read a document by ID](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/SampleApplication.java#L56-L58) | CosmosRepository.derivedQueryMethod |
+| [Delete all documents](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/SampleApplication.java#L39-L41) | CosmosRepository.deleteAll |
+
+## Derived query method examples
+The [samples](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/SampleApplication.java) file shows how to perform the following tasks. To learn about Azure Cosmos DB queries before running the following samples, you may find it helpful to read [Baeldung's Derived Query Methods in Spring](https://www.baeldung.com/spring-data-derived-queries) article.
+
+| Task | API reference |
+| --- | --- |
+| [Query for documents](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/SampleApplication.java#L73-L77) | CosmosRepository.derivedQueryMethod |
+
+## Custom query examples
+The [samples](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/jav) file shows how to perform the following custom query tasks.
++
+| Task | API reference |
+| --- | --- |
+| [Query for all documents](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L20-L22) | @Query annotation |
+| [Query for equality using ==](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L24-L26) | @Query annotation |
+| [Query for inequality using != and NOT](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L28-L38) | @Query annotation |
+| [Query using range operators like >, <, >=, <=](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L40-L42) | @Query annotation |
+| [Query using range operators against strings](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L44-L46) | @Query annotation |
+| [Query with ORDER BY](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L48-L50) | @Query annotation |
+| [Query with DISTINCT](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L52-L54) | @Query annotation |
+| [Query with aggregate functions](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L56-L62) | @Query annotation |
+| [Work with subdocuments](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L64-L66) | @Query annotation |
+| [Query with intra-document Joins](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L68-L85) | @Query annotation |
+| [Query with string, math, and array operators](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L87-L97) | @Query annotation |
+
+## Next steps
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Samples Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-java.md
++
+ Title: 'Azure Cosmos DB for NoSQL: Java SDK v4 examples'
+description: Find Java examples on GitHub for common tasks using the Azure Cosmos DB for NoSQL, including CRUD operations.
++++ Last updated : 08/26/2021
+ms.devlang: java
++++
+# Azure Cosmos DB for NoSQL: Java SDK v4 examples
+
+> [!div class="op_single_selector"]
+> * [.NET SDK Examples](samples-dotnet.md)
+> * [Java V4 SDK Examples](samples-java.md)
+> * [Spring Data V3 SDK Examples](samples-java-spring-data.md)
+> * [Node.js Examples](samples-nodejs.md)
+> * [Python Examples](samples-python.md)
+> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db)
+>
+>
+
+> [!IMPORTANT]
+> To learn more about Java SDK v4, see the Azure Cosmos DB Java SDK v4 [release notes](sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), [performance tips](performance-tips-java-sdk-v4.md), and [troubleshooting guide](troubleshoot-java-sdk-v4.md). If you're currently using a version older than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading.
+>
+
+> [!IMPORTANT]
+>[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+>
+>- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month that you can use for paid Azure services.
+>
+>[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
+>
+
+The latest sample applications that perform CRUD operations and other common operations on Azure Cosmos DB resources are included in the [azure-cosmos-java-sql-api-samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples) GitHub repository. This article provides:
+
+* Links to the tasks in each of the example Java project files.
+* Links to the related API reference content.
+
+**Prerequisites**
+
+You need the following to run this sample application:
+
+* Java Development Kit 8
+* Azure Cosmos DB Java SDK v4
+
+You can optionally use Maven to get the latest Azure Cosmos DB Java SDK v4 binaries for use in your project. Maven automatically adds any necessary dependencies. Otherwise, you can directly download the dependencies listed in the pom.xml file and add them to your build path.
+
+```xml
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-cosmos</artifactId>
+ <version>LATEST</version>
+</dependency>
+```
+
+**Running the sample applications**
+
+Clone the sample repo:
+```bash
+$ git clone https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples.git
+
+$ cd azure-cosmos-java-sql-api-samples
+```
+
+You can run the samples either from an IDE (Eclipse, IntelliJ, or VS Code) or from the command line by using Maven.
+
+These environment variables must be set
+
+```
+ACCOUNT_HOST=your account hostname;ACCOUNT_KEY=your account primary key
+```
+
+in order to give the samples read/write access to your account.
+
+To run a sample, specify its Main Class
+
+```
+com.azure.cosmos.examples.sample.synchronicity.MainClass
+```
+
+where *sample.synchronicity.MainClass* can be
+* crudquickstart.sync.SampleCRUDQuickstart
+* crudquickstart.async.SampleCRUDQuickstartAsync
+* indexmanagement.sync.SampleIndexManagement
+* indexmanagement.async.SampleIndexManagementAsync
+* storedprocedure.sync.SampleStoredProcedure
+* storedprocedure.async.SampleStoredProcedureAsync
+* changefeed.SampleChangeFeedProcessor *(Changefeed has only an async sample, no sync sample.)*
+...etc...
+
+> [!NOTE]
+> Each sample is self-contained; it sets itself up and cleans up after itself. The samples issue multiple calls to create a `CosmosContainer` or `CosmosAsyncContainer`. Each time this is done, your subscription is billed for 1 hour of usage for the performance tier of the collection created.
+>
+>
+
+## Database examples
+The database CRUD sample files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) and async modes show how to perform the following tasks. For background, see the related conceptual article.
+
+| Task | API reference |
+| --- | --- |
+| Create a database | [CosmosClient.createDatabaseIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L76-L84) <br> [CosmosAsyncClient.createDatabaseIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/databasecrud/async/DatabaseCRUDQuickstartAsync.java#L80-L89) |
+| Read a database by ID | [CosmosClient.getDatabase](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L87-L94) <br> [CosmosAsyncClient.getDatabase](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/databasecrud/async/DatabaseCRUDQuickstartAsync.java#L92-L99) |
+| Read all the databases | [CosmosClient.readAllDatabases](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L97-L111) <br> [CosmosAsyncClient.readAllDatabases](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/databasecrud/async/DatabaseCRUDQuickstartAsync.java#L102-L124) |
+| Delete a database | [CosmosDatabase.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L114-L122) <br> [CosmosAsyncDatabase.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/databasecrud/async/DatabaseCRUDQuickstartAsync.java#L127-L135) |
+
+## Collection examples
+The collection CRUD sample files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) and async modes show how to perform the following tasks. For background, see the related conceptual article.
+
+| Task | API reference |
+| --- | --- |
+| Create a collection | [CosmosDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L92-L107) <br> [CosmosAsyncDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/containercrud/async/ContainerCRUDQuickstartAsync.java#L96-L111) |
+| Change configured performance of a collection | [CosmosContainer.replaceThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L110-L118) <br> [CosmosAsyncContainer.replaceProvisionedThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/containercrud/async/ContainerCRUDQuickstartAsync.java#L114-L122) |
+| Get a collection by ID | [CosmosDatabase.getContainer](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L121-L128) <br> [CosmosAsyncDatabase.getContainer](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/containercrud/async/ContainerCRUDQuickstartAsync.java#L125-L132) |
+| Read all the collections in a database | [CosmosDatabase.readAllContainers](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L131-L145) <br> [CosmosAsyncDatabase.readAllContainers](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/containercrud/async/ContainerCRUDQuickstartAsync.java#L135-L158) |
+| Delete a collection | [CosmosContainer.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L148-L156) <br> [CosmosAsyncContainer.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/containercrud/async/ContainerCRUDQuickstartAsync.java#L161-L169) |
+
+## Autoscale collection examples
+
+To learn more about autoscale before running these samples, take a look at these instructions for enabling autoscale in your [account](https://azure.microsoft.com/resources/templates/cosmosdb-sql-autoscale/) and in your [databases and containers](../provision-throughput-autoscale.md).
+
+The autoscale database sample files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscaledatabasecrud/sync/AutoscaleDatabaseCRUDQuickstart.java) and [async](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscaledatabasecrud/async/AutoscaleDatabaseCRUDQuickstartAsync.java) show how to perform the following task.
+
+| Task | API reference |
+| --- | --- |
+| Create a database with specified autoscale max throughput | [CosmosClient.createDatabase](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscaledatabasecrud/sync/AutoscaleDatabaseCRUDQuickstart.java#L77-L88) <br> [CosmosAsyncClient.createDatabase](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/autoscaledatabasecrud/async/AutoscaleDatabaseCRUDQuickstartAsync.java#L81-L94) |
++
+The autoscale collection samples files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java) and [async](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/async/AutoscaleContainerCRUDQuickstartAsync.java) show how to perform the following tasks.
+
+| Task | API reference |
+| --- | --- |
+| Create a collection with specified autoscale max throughput | [CosmosDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java#L97-L110) <br> [CosmosAsyncDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/async/AutoscaleContainerCRUDQuickstartAsync.java#L101-L114) |
+| Change configured autoscale max throughput of a collection | [CosmosContainer.replaceThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java#L113-L120) <br> [CosmosAsyncContainer.replaceThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/async/AutoscaleContainerCRUDQuickstartAsync.java#L117-L124) |
+| Read autoscale throughput configuration of a collection | [CosmosContainer.readThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java#L122-L133) <br> [CosmosAsyncContainer.readThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/async/AutoscaleContainerCRUDQuickstartAsync.java#L126-L137) |
+
+## Analytical storage collection examples
+
+The analytical storage collection CRUD sample files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/sync/AnalyticalContainerCRUDQuickstart.java) and [async](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/async/AnalyticalContainerCRUDQuickstartAsync.java) show how to perform the following tasks. To learn about Azure Cosmos DB collections before you run the following samples, read about Azure Synapse Link for Azure Cosmos DB and the analytical store.
+
+| Task | API reference |
+| --- | --- |
+| Create a collection | [CosmosDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/sync/AnalyticalContainerCRUDQuickstart.java#L91-L106) <br> [CosmosAsyncDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/async/AnalyticalContainerCRUDQuickstartAsync.java#L91-L106) |
+
+## Document examples
+The document CRUD sample files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) and async modes show how to perform the following tasks. For background, see the related conceptual article.
+
+| Task | API reference |
+| --- | --- |
+| Create a document | [CosmosContainer.createItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L132-L146) <br> [CosmosAsyncContainer.createItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L188-L212) |
+| Read a document by ID | [CosmosContainer.readItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L177-L192) <br> [CosmosAsyncContainer.readItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L318-L340) |
+| Query for documents | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L161-L175) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L270-L287) |
+| Replace a document | [CosmosContainer.replaceItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L177-L192) <br> [CosmosAsyncContainer.replaceItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L318-L340) |
+| Upsert a document | [CosmosContainer.upsertItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L194-L207) <br> [CosmosAsyncContainer.upsertItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L342-L364) |
+| Delete a document | [CosmosContainer.deleteItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L285-L292) <br> [CosmosAsyncContainer.deleteItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L494-L510) |
+| Replace a document with conditional ETag check | [CosmosItemRequestOptions.setIfMatchETag](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L209-L246) (sync) <br>[CosmosItemRequestOptions.setIfMatchETag](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L366-L418) (async) |
+| Read document only if document has changed | [CosmosItemRequestOptions.setIfNoneMatchETag](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L248-L282) (sync) <br>[CosmosItemRequestOptions.setIfNoneMatchETag](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L420-L491) (async)|
+| Partial document update | [CosmosContainer.patchItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/patch/sync/SamplePatchQuickstart.java) |
+| Bulk document update | [Bulk samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java) |
+| Transactional batch | [batch samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/batch/async/SampleBatchQuickStartAsync.java) |
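+
+The following hedged sketch outlines the shape of these item operations with the sync Java V4 SDK. The `Family` POJO, IDs, and partition key values are illustrative placeholders rather than the exact code in the sample files.
+
+```java
+import com.azure.cosmos.CosmosContainer;
+import com.azure.cosmos.models.CosmosItemRequestOptions;
+import com.azure.cosmos.models.CosmosItemResponse;
+import com.azure.cosmos.models.PartitionKey;
+
+public class DocumentCrudSketch {
+
+    // Minimal placeholder POJO; the real samples use a richer Family class.
+    public static class Family {
+        public String id;
+        public String lastName;
+    }
+
+    public static void run(CosmosContainer container, Family family) {
+        PartitionKey pk = new PartitionKey(family.lastName);
+
+        // Create, then read the item back by ID and partition key.
+        container.createItem(family, pk, new CosmosItemRequestOptions());
+        CosmosItemResponse<Family> read = container.readItem(family.id, pk, Family.class);
+
+        // Replace only if the document hasn't changed since it was read (ETag check).
+        CosmosItemRequestOptions etagCheck = new CosmosItemRequestOptions()
+            .setIfMatchETag(read.getETag());
+        container.replaceItem(read.getItem(), family.id, pk, etagCheck);
+
+        // Upsert, then delete, to round out the basic operations.
+        container.upsertItem(family, new CosmosItemRequestOptions());
+        container.deleteItem(family.id, pk, new CosmosItemRequestOptions());
+    }
+}
+```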
+
+## Indexing examples
+
+The Index Management Sample file for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java) shows how to perform the following tasks. To learn about indexing in Azure Cosmos DB before running the following samples, see the [indexing policies](../index-policy.md), [indexing types](../index-overview.md#index-types), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
+
+| Task | API reference |
+| | |
+| Include specified documents paths in the index | [IndexingPolicy.IncludedPaths](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L143-L146) |
+| Exclude specified documents paths from the index | [IndexingPolicy.ExcludedPaths](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L148-L151) |
+| Create a composite index | [IndexingPolicy.setCompositeIndexes](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L167-L184) <br> CompositePath |
+| Create a geospatial index | [IndexingPolicy.setSpatialIndexes](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L153-L165) <br> SpatialSpec <br> SpatialType |
+<!-- | Exclude a document from the index | ExcludedIndex<br>IndexingPolicy | -->
+<!-- | Use Lazy Indexing | IndexingPolicy.IndexingMode | -->
+<!-- | Force a range scan operation on a hash indexed path | FeedOptions.EnableScanInQuery | -->
+<!-- | Use range indexes on Strings | IndexingPolicy.IncludedPaths<br>RangeIndex | -->
+<!-- | Perform an index transform | - | -->
++
+For more information about indexing, see [Azure Cosmos DB indexing policies](../index-policy.md).
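+
+As a hedged sketch, with paths and names that are illustrative rather than the sample's exact values, an indexing policy with included, excluded, and composite paths can be set on the container properties before creation:
+
+```java
+import java.util.Arrays;
+import java.util.Collections;
+
+import com.azure.cosmos.CosmosDatabase;
+import com.azure.cosmos.models.CompositePath;
+import com.azure.cosmos.models.CompositePathSortOrder;
+import com.azure.cosmos.models.CosmosContainerProperties;
+import com.azure.cosmos.models.ExcludedPath;
+import com.azure.cosmos.models.IncludedPath;
+import com.azure.cosmos.models.IndexingPolicy;
+
+public class IndexingSketch {
+    public static void createIndexedContainer(CosmosDatabase database) {
+        IndexingPolicy policy = new IndexingPolicy();
+        policy.setIncludedPaths(Collections.singletonList(new IncludedPath("/*")));
+        policy.setExcludedPaths(Collections.singletonList(new ExcludedPath("/\"_etag\"/?")));
+
+        // Composite index on lastName ascending, registrationDate descending.
+        CompositePath byLastName = new CompositePath()
+            .setPath("/lastName").setOrder(CompositePathSortOrder.ASCENDING);
+        CompositePath byDate = new CompositePath()
+            .setPath("/registrationDate").setOrder(CompositePathSortOrder.DESCENDING);
+        policy.setCompositeIndexes(
+            Collections.singletonList(Arrays.asList(byLastName, byDate)));
+
+        CosmosContainerProperties properties =
+            new CosmosContainerProperties("indexedContainer", "/pk");
+        properties.setIndexingPolicy(policy);
+        database.createContainerIfNotExists(properties);
+    }
+}
+```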
+
+## Query examples
+
+The Query Samples files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java) and [async](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java) show how to perform the following tasks.
+
+| Task | API reference |
+| | |
+| Query for all documents | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L204-L208) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L244-L247)|
+| Query for equality using == | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L286-L290) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L325-L329)|
+| Query for inequality using != and NOT | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L292-L300) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L331-L339)|
+| Query using range operators like >, <, >=, <= | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L302-L307) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L341-L346)|
+| Query using range operators against strings | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L309-L314) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L348-L353)|
+| Query with ORDER BY | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L316-L321) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L355-L360)|
+| Query with DISTINCT | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L323-L328) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L362-L367)|
+| Query with aggregate functions | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L330-L338) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L369-L377)|
+| Work with subdocuments | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L340-L348) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L379-L387)|
+| Query with intra-document Joins | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L350-L372) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L389-L411)|
+| Query with string, math, and array operators | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L374-L385) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L413-L424)|
+| Query with parameterized SQL using SqlQuerySpec | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L387-L416) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L426-L455)|
+| Query with explicit paging | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L211-L261) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L250-L300)|
+| Query partitioned collections in parallel | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L263-L284) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L302-L323)|
+<!-- | Query with ORDER BY for partitioned collections | CosmosContainer.queryItems <br> CosmosAsyncContainer.queryItems | -->
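+
+As a brief sketch, where the query text, property names, and the `Family` type are illustrative assumptions, a parameterized query with `SqlQuerySpec` looks roughly like this:
+
+```java
+import java.util.Collections;
+
+import com.azure.cosmos.CosmosContainer;
+import com.azure.cosmos.models.CosmosQueryRequestOptions;
+import com.azure.cosmos.models.SqlParameter;
+import com.azure.cosmos.models.SqlQuerySpec;
+import com.azure.cosmos.util.CosmosPagedIterable;
+
+public class QuerySketch {
+
+    // Minimal placeholder POJO for deserializing query results.
+    public static class Family {
+        public String id;
+        public String lastName;
+    }
+
+    public static void queryByLastName(CosmosContainer container, String lastName) {
+        SqlQuerySpec querySpec = new SqlQuerySpec(
+            "SELECT * FROM c WHERE c.lastName = @lastName",
+            Collections.singletonList(new SqlParameter("@lastName", lastName)));
+
+        // Cross-partition fan-out is handled by the SDK; options can tune page size, etc.
+        CosmosPagedIterable<Family> results = container.queryItems(
+            querySpec, new CosmosQueryRequestOptions(), Family.class);
+
+        results.forEach(family -> System.out.println("Found family with id: " + family.id));
+    }
+}
+```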
+
+## Change feed examples
+
+The [Change Feed Processor Sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java) file shows how to perform the following tasks. To learn about change feed before running the following samples, see [Change feed in Azure Cosmos DB](../change-feed.md) and [Change feed processor](/azure/cosmos-db/sql/change-feed-processor?tabs=java).
+
+| Task | API reference |
+| | |
+| Basic change feed functionality | [ChangeFeedProcessor.changeFeedProcessorBuilder](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java#L141-L172) |
+| Read change feed from the beginning | [ChangeFeedProcessorOptions.setStartFromBeginning()](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java#L65) |
+<!-- | Read change feed from a specific time | ChangeFeedProcessor.changeFeedProcessorBuilder | -->
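+
+At a high level, the change feed processor is built and started as shown in the following sketch with the async Java V4 SDK. The container objects and host name are placeholders, not the sample's exact wiring.
+
+```java
+import com.azure.cosmos.ChangeFeedProcessor;
+import com.azure.cosmos.ChangeFeedProcessorBuilder;
+import com.azure.cosmos.CosmosAsyncContainer;
+import com.azure.cosmos.models.ChangeFeedProcessorOptions;
+import com.fasterxml.jackson.databind.JsonNode;
+
+public class ChangeFeedSketch {
+    public static ChangeFeedProcessor start(
+            CosmosAsyncContainer feedContainer, CosmosAsyncContainer leaseContainer) {
+        ChangeFeedProcessor processor = new ChangeFeedProcessorBuilder()
+            .hostName("sample-host")
+            .feedContainer(feedContainer)
+            .leaseContainer(leaseContainer)
+            // Read the feed from the beginning instead of from "now".
+            .options(new ChangeFeedProcessorOptions().setStartFromBeginning(true))
+            .handleChanges(docs -> {
+                for (JsonNode doc : docs) {
+                    System.out.println("Changed document: " + doc);
+                }
+            })
+            .buildChangeFeedProcessor();
+
+        processor.start().subscribe();
+        return processor;
+    }
+}
+```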
+
+## Server-side programming examples
+
+The [Stored Procedure Sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java) file shows how to perform the following tasks. To learn about server-side programming in Azure Cosmos DB before running the following samples, see the [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md) conceptual article.
+
+| Task | API reference |
+| | |
+| Create a stored procedure | [CosmosScripts.createStoredProcedure](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java#L134-L153) |
+| Execute a stored procedure | [CosmosStoredProcedure.execute](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java#L213-L227) |
+| Delete a stored procedure | [CosmosStoredProcedure.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java#L254-L264) |
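+
+The following is a minimal sketch of creating, executing, and deleting a stored procedure through `CosmosScripts`. The stored procedure body, ID, and partition key value are illustrative.
+
+```java
+import java.util.Arrays;
+
+import com.azure.cosmos.CosmosContainer;
+import com.azure.cosmos.models.CosmosStoredProcedureProperties;
+import com.azure.cosmos.models.CosmosStoredProcedureRequestOptions;
+import com.azure.cosmos.models.CosmosStoredProcedureResponse;
+import com.azure.cosmos.models.PartitionKey;
+
+public class StoredProcedureSketch {
+    public static void run(CosmosContainer container) {
+        // Create a simple stored procedure that echoes its input.
+        String body = "function echo(name) { getContext().getResponse().setBody('Hello ' + name); }";
+        container.getScripts().createStoredProcedure(
+            new CosmosStoredProcedureProperties("echoSproc", body));
+
+        // Stored procedures execute within the scope of a single partition key.
+        CosmosStoredProcedureRequestOptions options = new CosmosStoredProcedureRequestOptions();
+        options.setPartitionKey(new PartitionKey("Andersen"));
+        CosmosStoredProcedureResponse response = container.getScripts()
+            .getStoredProcedure("echoSproc")
+            .execute(Arrays.<Object>asList("world"), options);
+        System.out.println(response.getResponseAsString());
+
+        container.getScripts().getStoredProcedure("echoSproc").delete();
+    }
+}
+```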
+
+<!-- ## User management examples
+The User Management Sample file shows how to do the following tasks:
+
+| Task | API reference |
+| | |
+| Create a user | - |
+| Set permissions on a collection or document | - |
+| Get a list of a user's permissions |- | -->
+
+## Next steps
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Samples Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-nodejs.md
+
+ Title: Node.js examples to manage data in Azure Cosmos DB database
+description: Find Node.js examples on GitHub for common tasks in Azure Cosmos DB, including CRUD operations.
++++ Last updated : 08/26/2021+
+ms.devlang: javascript
++
+# Node.js examples to manage data in Azure Cosmos DB
+
+> [!div class="op_single_selector"]
+> * [.NET SDK Examples](samples-dotnet.md)
+> * [Java V4 SDK Examples](samples-java.md)
+> * [Spring Data V3 SDK Examples](samples-java-spring-data.md)
+> * [Node.js Examples](samples-nodejs.md)
+> * [Python Examples](samples-python.md)
+> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db)
+>
+>
+
+Sample solutions that perform CRUD operations and other common operations on Azure Cosmos DB resources are included in the [azure-cosmos-js](https://github.com/Azure/azure-cosmos-js/tree/master/samples) GitHub repository. This article provides:
+
+* Links to the tasks in each of the Node.js example project files.
+* Links to the related API reference content.
+
+**Prerequisites**
++
+- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month that you can use for paid Azure services.
++
+You also need the [JavaScript SDK](sdk-nodejs.md).
+
+ > [!NOTE]
+ > Each sample is self-contained; it sets itself up and cleans up after itself. As such, the samples issue multiple calls to [Containers.create](/javascript/api/%40azure/cosmos/containers). Each time a container is created, your subscription is billed for one hour of usage at the performance tier of that container.
+ >
+ >
+
+## Database examples
+
+The [DatabaseManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts) file shows how to perform CRUD operations on the database. To learn about Azure Cosmos DB databases before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
+
+| Task | API reference |
+| | |
+| [Create a database if it does not exist](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L12-L14) |[Databases.createIfNotExists](/javascript/api/@azure/cosmos/databases#createifnotexists-databaserequest--requestoptions-) |
+| [List databases for an account](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L16-L18) |[Databases.readAll](/javascript/api/@azure/cosmos/databases#readall-feedoptions-) |
+| [Read a database by ID](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L20-L29) |[Database.read](/javascript/api/@azure/cosmos/database#read-requestoptions-) |
+| [Delete a database](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L31-L32) |[Database.delete](/javascript/api/@azure/cosmos/database#delete-requestoptions-) |
+
+## Container examples
+
+The [ContainerManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts) file shows how to perform CRUD operations on the container. To learn about Azure Cosmos DB collections before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
+
+| Task | API reference |
+| | |
+| [Create a container if it does not exist](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L14-L15) |[Containers.createIfNotExists](/javascript/api/@azure/cosmos/containers#createifnotexists-containerrequest--requestoptions-) |
+| [List containers for an account](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L17-L21) |[Containers.readAll](/javascript/api/@azure/cosmos/containers#readall-feedoptions-) |
+| [Read a container definition](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L23-L26) |[Container.read](/javascript/api/@azure/cosmos/container#read-requestoptions-) |
+| [Delete a container](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L28-L30) |[Container.delete](/javascript/api/@azure/cosmos/container#delete-requestoptions-) |
+
+## Item examples
+
+The [ItemManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts) file shows how to perform CRUD operations on the item. To learn about Azure Cosmos DB documents before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
+
+| Task | API reference |
+| | |
+| [Create items](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L18-L21) |[Items.create](/javascript/api/@azure/cosmos/items#create-t--requestoptions-) |
+| [Read all items in a container](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L23-L28) |[Items.readAll](/javascript/api/@azure/cosmos/items#readall-feedoptions-) |
+| [Read an item by ID](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L30-L33) |[Item.read](/javascript/api/@azure/cosmos/item#read-requestoptions-) |
+| [Read item only if item has changed](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L45-L56) |[Item.read](/javascript/api/%40azure/cosmos/item)<br/>[RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
+| [Query for documents](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L58-L79) |[Items.query](/javascript/api/%40azure/cosmos/items) |
+| [Replace an item](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L81-L96) |[Item.replace](/javascript/api/%40azure/cosmos/item) |
+| [Replace item with conditional ETag check](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L98-L135) |[Item.replace](/javascript/api/%40azure/cosmos/item)<br/>[RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
+| [Delete an item](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L137-L140) |[Item.delete](/javascript/api/%40azure/cosmos/item) |
+
+## Indexing examples
+
+The [IndexManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts) file shows how to manage indexing. To learn about indexing in Azure Cosmos DB before running the following samples, see the [indexing policies](../index-policy.md), [indexing types](../index-overview.md#index-types), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
+
+| Task | API reference |
+| | |
+| [Manually index a specific item](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L52-L75) |[RequestOptions.indexingDirective: 'include'](/javascript/api/%40azure/cosmos/requestoptions#indexingdirective) |
+| [Manually exclude a specific item from the index](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L17-L29) |[RequestOptions.indexingDirective: 'exclude'](/javascript/api/%40azure/cosmos/requestoptions#indexingdirective) |
+| [Exclude a path from the index](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L142-L167) |[IndexingPolicy.ExcludedPath](/javascript/api/%40azure/cosmos/indexingpolicy#excludedpaths) |
+| [Create a range index on a string path](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L87-L112) |[IndexKind.Range](/javascript/api/%40azure/cosmos/indexkind), [IndexingPolicy](/javascript/api/%40azure/cosmos/indexingpolicy), [Items.query](/javascript/api/%40azure/cosmos/items) |
+| [Create a container with default indexPolicy, then update this online](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L13-L15) |[Containers.create](/javascript/api/%40azure/cosmos/containers)
+
+## Server-side programming examples
+
+The [index.ts](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ServerSideScripts/index.ts) file of the [ServerSideScripts](https://github.com/Azure/azure-cosmos-js/tree/master/samples/ServerSideScripts) project shows how to perform the following tasks. To learn about server-side programming in Azure Cosmos DB before running the following samples, see the [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md) conceptual article.
+
+| Task | API reference |
+| | |
+| [Create a stored procedure](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ServerSideScripts/upsert.js) |[StoredProcedures.create](/javascript/api/%40azure/cosmos/storedprocedures) |
+| [Execute a stored procedure](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ServerSideScripts/index.ts) |[StoredProcedure.execute](/javascript/api/%40azure/cosmos/storedprocedure) |
+
+For more information about server-side programming, see [Azure Cosmos DB server-side programming: Stored procedures, database triggers, and UDFs](stored-procedures-triggers-udfs.md).
+
+## Next steps
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-python.md
+
+ Title: API for NoSQL Python examples for Azure Cosmos DB
+description: Find Python examples on GitHub for common tasks in Azure Cosmos DB, including CRUD operations.
+++
+ms.devlang: python
+ Last updated : 08/26/2021+++
+# Azure Cosmos DB Python examples
+
+> [!div class="op_single_selector"]
+> * [.NET SDK Examples](samples-dotnet.md)
+> * [Java V4 SDK Examples](samples-java.md)
+> * [Spring Data V3 SDK Examples](samples-java-spring-data.md)
+> * [Node.js Examples](samples-nodejs.md)
+> * [Python Examples](samples-python.md)
+> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db)
+
+Sample solutions that do CRUD operations and other common operations on Azure Cosmos DB resources are included in the [azure-documentdb-python](https://github.com/Azure/azure-documentdb-python) GitHub repository. This article provides:
+
+* Links to the tasks in each of the Python example project files.
+* Links to the related API reference content.
+
+## Prerequisites
+
+- An Azure Cosmos DB account. Your options are:
+ * Within an Azure active subscription:
+ * [Create an Azure free Account](https://azure.microsoft.com/free) or use your existing subscription
+ * [Visual Studio Monthly Credits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers)
+ * [Azure Cosmos DB Free Tier](../free-tier.md)
+ * Without an Azure active subscription:
+    * [Try Azure Cosmos DB for free](../try-free.md), a test environment that lasts for 30 days.
+ * [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator)
+- [Python 2.7 or 3.6+](https://www.python.org/downloads/), with the `python` executable in your `PATH`.
+- [Visual Studio Code](https://code.visualstudio.com/).
+- The [Python extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python#overview).
+- [Git](https://www.git-scm.com/downloads).
+- [Azure Cosmos DB for NoSQL SDK for Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos)
+
+## Database examples
+
+The [database_management.py](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py) Python sample shows how to do the following tasks. To learn about Azure Cosmos DB databases before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
+
+| Task | API reference |
+| | |
+| [Create a database](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L53-L62) |CosmosClient.create_database|
+| [Read a database by ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L64-L73) |CosmosClient.get_database_client|
+| [Query the databases](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L37-L50) |CosmosClient.query_databases|
+| [List databases for an account](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L76-L87) |CosmosClient.list_databases|
+| [Delete a database](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L90-L99) |CosmosClient.delete_database|
+
+## Container examples
+
+The [container_management.py](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/container_management.py) Python sample shows how to do the following tasks. To learn about Azure Cosmos DB collections before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
+
+| Task | API reference |
+| | |
+| [Query for a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/container_management.py#L51-L66) |database.query_containers |
+| [Create a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/container_management.py#L69-L163) |database.create_container |
+| [List all the containers in a database](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/container_management.py#L206-L217) |database.list_containers |
+| [Get a container by its ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/container_management.py#L195-L203) |database.get_container_client |
+| [Manage container's provisioned throughput](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/container_management.py#L166-L192) |container.read_offer, container.replace_throughput|
+| [Delete a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/container_management.py#L220-L229) |database.delete_container |
+
+## Item examples
+
+The [item_management.py](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py) Python sample shows how to do the following tasks. To learn about Azure Cosmos DB documents before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
+
+| Task | API reference |
+| | |
+| [Create items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L31-L43) |container.create_item |
+| [Read an item by its ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L46-L54) |container.read_item |
+| [Read all the items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L57-L68) |container.read_all_items |
+| [Query an item by its ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L71-L83) |container.query_items |
+| [Replace an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L86-L93) |container.replace_items |
+| [Upsert an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L95-L103) |container.upsert_item |
+| [Delete an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L106-L111) |container.delete_item |
+| [Get the change feed of items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/change_feed_management.py) |container.query_items_change_feed |
+
+## Indexing examples
+
+The [index_management.py](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py) Python sample shows how to do the following tasks. To learn about indexing in Azure Cosmos DB before running the following samples, see the [indexing policies](../index-policy.md), [indexing types](../index-overview.md#index-types), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
+
+| Task | API reference |
+| | |
+| [Exclude a specific item from indexing](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L149-L205) | documents.IndexingDirective.Exclude|
+| [Use manual indexing with specific items indexed](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L208-L267) | documents.IndexingDirective.Include |
+| [Exclude paths from indexing](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L270-L340) |Define paths to exclude in `IndexingPolicy` property |
+| [Use range indexes on strings](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L405-L490) | Define indexing policy with range indexes on string data type. `'kind': documents.IndexKind.Range`, `'dataType': documents.DataType.String`|
+| [Perform an index transformation](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L492-L548) |database.replace_container (use the updated indexing policy)|
+| [Use scans when only hash index exists on the path](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L343-L402) | set the `enable_scan_in_query=True` and `enable_cross_partition_query=True` when querying the items |
+
+## Next steps
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Samples Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-resource-manager-templates.md
+
+ Title: Azure Resource Manager templates for Azure Cosmos DB for NoSQL
+description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB.
+++++ Last updated : 08/26/2021++++
+# Azure Resource Manager templates for Azure Cosmos DB
+
+This article shows Azure Resource Manager template examples for API for NoSQL accounts only. You can also find template examples for other APIs, such as [Cassandra](../cassandr).
+
+## API for NoSQL
+
+|**Template**|**Description**|
+|||
+|[Create an Azure Cosmos DB account, database, container with autoscale throughput](manage-with-templates.md#create-autoscale) | This template creates an API for NoSQL account in two regions, a database and container with autoscale throughput. |
+|[Create an Azure Cosmos DB account, database, container with analytical store](manage-with-templates.md#create-analytical-store) | This template creates an API for NoSQL account in one region with a container configured with analytical TTL enabled and the option to use manual or autoscale throughput. |
+|[Create an Azure Cosmos DB account, database, container with standard (manual) throughput](manage-with-templates.md#create-manual) | This template creates an API for NoSQL account in two regions, a database and container with standard throughput. |
+|[Create an Azure Cosmos DB account, database and container with a stored procedure, trigger and UDF](manage-with-templates.md#create-sproc) | This template creates an API for NoSQL account in two regions with a stored procedure, trigger and UDF for a container. |
+|[Create an Azure Cosmos DB account with Azure AD identity, Role Definitions and Role Assignment](manage-with-templates.md#create-rbac) | This template creates an API for NoSQL account with Azure AD identity, Role Definitions and Role Assignment on a Service Principal. |
+|[Create a private endpoint for an existing Azure Cosmos DB account](../how-to-configure-private-endpoints.md#create-a-private-endpoint-by-using-a-resource-manager-template) | This template creates a private endpoint for an existing Azure Cosmos DB for NoSQL account in an existing virtual network. |
+|[Create a free-tier Azure Cosmos DB account](manage-with-templates.md#free-tier) | This template creates an Azure Cosmos DB for NoSQL account on free-tier. |
+
+See [Azure Resource Manager reference for Azure Cosmos DB](/azure/templates/microsoft.documentdb/allversions) page for the reference documentation.
+
+## Next steps
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Samples Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-terraform.md
+
+ Title: Terraform samples for Azure Cosmos DB for NoSQL
+description: Use Terraform to create and configure Azure Cosmos DB for NoSQL.
+++++ Last updated : 09/16/2022++++
+# Terraform samples for Azure Cosmos DB for NoSQL
++
+This article shows Terraform samples for NoSQL accounts.
+
+## API for NoSQL
+
+| **Sample** | **Description** |
+| | |
+| [Create an Azure Cosmos account, database, container with autoscale throughput](manage-with-terraform.md#create-autoscale) | Create an API for NoSQL account in two regions, a database and container with autoscale throughput. |
+| [Create an Azure Cosmos account, database, container with analytical store](manage-with-terraform.md#create-analytical-store) | Create an API for NoSQL account in one region with a container configured with Analytical TTL enabled and option to use manual or autoscale throughput. |
+| [Create an Azure Cosmos account, database, container with standard (manual) throughput](manage-with-terraform.md#create-manual) | Create an API for NoSQL account in two regions, a database and container with standard throughput. |
+| [Create an Azure Cosmos account, database and container with a stored procedure, trigger and UDF](manage-with-terraform.md#create-sproc) | Create an API for NoSQL account in two regions with a stored procedure, trigger and UDF for a container. |
+| [Create an Azure Cosmos account with Azure AD identity, Role Definitions and Role Assignment](manage-with-terraform.md#create-rbac) | Create an API for NoSQL account with Azure AD identity, Role Definitions and Role Assignment on a Service Principal. |
+| [Create a free-tier Azure Cosmos account](manage-with-terraform.md#free-tier) | Create an Azure Cosmos DB API for NoSQL account on free-tier. |
+
+## Next steps
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Scale On Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/scale-on-schedule.md
+
+ Title: Scale Azure Cosmos DB on a schedule by using Azure Functions timer
+description: Learn how to scale changes in throughput in Azure Cosmos DB using PowerShell and Azure Functions.
+++++ Last updated : 01/13/2020++++
+# Scale Azure Cosmos DB throughput by using Azure Functions Timer trigger
+
+The performance of an Azure Cosmos DB account is based on the amount of provisioned throughput expressed in Request Units per second (RU/s). The provisioning is at a second granularity and is billed based upon the highest RU/s per hour. This provisioned capacity model enables the service to provide a predictable and consistent throughput, guaranteed low latency, and high availability. Most production workloads require these features. However, in development and testing environments where Azure Cosmos DB is only used during working hours, you can scale up the throughput in the morning and scale back down in the evening after working hours.
+
+You can set the throughput via [Azure Resource Manager Templates](./samples-resource-manager-templates.md), [Azure CLI](cli-samples.md), and [PowerShell](powershell-samples.md), for API for NoSQL accounts, or by using the language-specific Azure Cosmos DB SDKs. The benefit of using Resource Manager Templates, Azure CLI or PowerShell is that they support all Azure Cosmos DB model APIs.
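+
+For example, the following sketch shows the same scale-up and scale-down operations with the Java SDK; the container object and RU/s values are placeholders, and the sample project described next performs this step with PowerShell instead.
+
+```java
+import com.azure.cosmos.CosmosContainer;
+import com.azure.cosmos.models.ThroughputProperties;
+
+public class ScheduledScaleSketch {
+    // Called by a scheduled job in the morning.
+    public static void scaleUp(CosmosContainer container) {
+        container.replaceThroughput(ThroughputProperties.createManualThroughput(4000));
+    }
+
+    // Called by a scheduled job after working hours.
+    public static void scaleDown(CosmosContainer container) {
+        container.replaceThroughput(ThroughputProperties.createManualThroughput(400));
+    }
+}
+```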
+
+## Throughput scheduler sample project
+
+To simplify the process of scaling Azure Cosmos DB on a schedule, we've created a sample project called [Azure Cosmos DB throughput scheduler](https://github.com/Azure-Samples/azure-cosmos-throughput-scheduler). This project is an Azure Functions app with two timer triggers: "ScaleUpTrigger" and "ScaleDownTrigger". The triggers run a PowerShell script that sets the throughput on each resource as defined in the `resources.json` file in each trigger. The ScaleUpTrigger is configured to run at 8 AM UTC and the ScaleDownTrigger at 6 PM UTC; these times can be updated in the `function.json` file for each trigger.
+
+You can clone this project locally and modify it to specify the Azure Cosmos DB resources to scale up and down and the schedule to run on. Then you can deploy it in an Azure subscription and secure it by using a managed service identity with [Azure role-based access control (Azure RBAC)](../role-based-access-control.md) permissions, granting the "Azure Cosmos DB operator" role so it can set throughput on your Azure Cosmos DB accounts.
+
+## Next Steps
+
+- Learn more and download the sample from [Azure Cosmos DB throughput scheduler](https://github.com/Azure-Samples/azure-cosmos-throughput-scheduler).
cosmos-db Sdk Connection Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-connection-modes.md
+
+ Title: Azure Cosmos DB SQL SDK connectivity modes
+description: Learn about the different connectivity modes available on the Azure Cosmos DB SQL SDKs.
++++ Last updated : 04/28/2022++++
+# Azure Cosmos DB SQL SDK connectivity modes
+
+How a client connects to Azure Cosmos DB has important performance implications, especially for observed client-side latency. Azure Cosmos DB offers a simple, open RESTful programming model over HTTPS called gateway mode. Additionally, it offers an efficient TCP protocol, which is also RESTful in its communication model and uses TLS for initial authentication and encrypting traffic, called direct mode.
+
+## Available connectivity modes
+
+The two available connectivity modes are:
+
+ * Gateway mode
+
+ Gateway mode is supported on all SDK platforms. If your application runs within a corporate network with strict firewall restrictions, gateway mode is the best choice because it uses the standard HTTPS port and a single DNS endpoint. The performance tradeoff, however, is that gateway mode involves an additional network hop every time data is read from or written to Azure Cosmos DB. We also recommend gateway connection mode when you run applications in environments that have a limited number of socket connections.
+
+ When you use the SDK in Azure Functions, particularly in the [Consumption plan](../../azure-functions/consumption-plan.md), be aware of the current [limits on connections](../../azure-functions/manage-connections.md).
+
+ * Direct mode
+
+ Direct mode supports connectivity through TCP protocol, using TLS for initial authentication and encrypting traffic, and offers better performance because there are fewer network hops. The application connects directly to the backend replicas. Direct mode is currently only supported on .NET and Java SDK platforms.
+
+
+These connectivity modes essentially condition the route that data plane requests - document reads and writes - take from your client machine to partitions in the Azure Cosmos DB back-end. Direct mode is the preferred option for best performance - it allows your client to open TCP connections directly to partitions in the Azure Cosmos DB back-end and send requests *direct*ly with no intermediary. By contrast, in Gateway mode, requests made by your client are routed to a so-called "Gateway" server in the Azure Cosmos DB front end, which in turn fans out your requests to the appropriate partition(s) in the Azure Cosmos DB back-end.
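+
+As an illustration, with the endpoint and key as placeholders, choosing between the two modes in the Java V4 SDK is a single call on the client builder:
+
+```java
+import com.azure.cosmos.CosmosClient;
+import com.azure.cosmos.CosmosClientBuilder;
+
+public class ConnectionModeSketch {
+    public static CosmosClient build(String endpoint, String key, boolean useDirect) {
+        CosmosClientBuilder builder = new CosmosClientBuilder()
+            .endpoint(endpoint)
+            .key(key);
+
+        // Direct mode opens TCP connections to backend replicas;
+        // gateway mode sends all requests over HTTPS (port 443) to the gateway.
+        builder = useDirect ? builder.directMode() : builder.gatewayMode();
+
+        return builder.buildClient();
+    }
+}
+```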
+
+## Service port ranges
+
+When you use direct mode, in addition to the gateway ports, you need to ensure the port range between 10000 and 20000 is open because Azure Cosmos DB uses dynamic TCP ports. When using direct mode on [private endpoints](../how-to-configure-private-endpoints.md), the full range of TCP ports (0 to 65535) should be open. If these ports aren't open and you try to use the TCP protocol, you might receive a 503 Service Unavailable error.
+
+The following table shows a summary of the connectivity modes available for various APIs and the service ports used for each API:
+
+|Connection mode |Supported protocol |Supported SDKs |API/Service port |
+|||||
+|Gateway | HTTPS | All SDKs | SQL (443), MongoDB (10250, 10255, 10256), Table (443), Cassandra (10350), Graph (443) <br> The port 10250 maps to a default Azure Cosmos DB for MongoDB instance without geo-replication, whereas the ports 10255 and 10256 map to the instance that has geo-replication. |
+|Direct | TCP (Encrypted via TLS) | .NET SDK, Java SDK | When using public/service endpoints: ports in the 10000 through 20000 range<br>When using private endpoints: ports in the 0 through 65535 range |
+
+## <a id="direct-mode"></a> Direct mode connection architecture
+
+As detailed in the [introduction](#available-connectivity-modes), Direct mode clients will directly connect to the backend nodes through TCP protocol. Each backend node represents a replica in a [replica set](../partitioning-overview.md#replica-sets) belonging to a [physical partition](../partitioning-overview.md#physical-partitions).
+
+### Routing
+
+When an Azure Cosmos DB SDK in Direct mode performs an operation, it needs to resolve which backend replica to connect to. The first step is knowing which physical partition the operation should go to. For that, the SDK obtains the container information, which includes the [partition key definition](../partitioning-overview.md#choose-partitionkey), from a Gateway node; this information is considered [metadata](../concepts-limits.md#metadata-request-limits). The SDK also needs the routing information that contains the replicas' TCP addresses, which is also available from Gateway nodes. Once the SDK obtains the routing information, it can open TCP connections to the replicas belonging to the target physical partition and execute the operations.
+
+Each replica set contains one primary replica and three secondaries. Write operations are always routed to primary replica nodes while read operations can be served from primary or secondary nodes.
++
+Because the container and routing information don't change often, they're cached locally by the SDK so that subsequent operations can benefit from this information. The TCP connections that are already established are also reused across operations. Unless otherwise configured through the SDK options, connections are permanently maintained during the lifetime of the SDK instance.
+
+As with any distributed architecture, the machines holding replicas might undergo upgrades or maintenance. The service will ensure the replica set maintains consistency but any replica movement would cause existing TCP addresses to change. In these cases, the SDKs need to refresh the routing information and re-connect to the new addresses through new Gateway requests. These events should not affect the overall P99 SLA.
+
+### Volume of connections
+
+Each physical partition has a replica set of four replicas. To provide the best possible performance, SDKs end up opening connections to all replicas for workloads that mix write and read operations. Concurrent operations are load balanced across existing connections to take advantage of the throughput each replica provides.
+
+There are two factors that dictate the number of TCP connections the SDK will open:
+
+* Number of physical partitions
+
+ In a steady state, the SDK will have one connection per replica per physical partition. The larger the number of physical partitions in a container, the larger the number of open connections will be. As operations are routed across different partitions, connections are established on demand. The average number of connections would then be the number of physical partitions times four.
+
+* Volume of concurrent requests
+
+ Each established connection can serve a configurable number of concurrent operations. If the volume of concurrent operations exceeds this threshold, new connections will be open to serve them, and it's possible that for a physical partition, the number of open connections exceeds the steady state number. This behavior is expected for workloads that might have spikes in their operational volume. For the .NET SDK this configuration is set by [CosmosClientOptions.MaxRequestsPerTcpConnection](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.maxrequestspertcpconnection), and for the Java SDK you can customize using [DirectConnectionConfig.setMaxRequestsPerConnection](/java/api/com.azure.cosmos.directconnectionconfig.setmaxrequestsperconnection).
+
+By default, connections are permanently maintained to benefit the performance of future operations (opening a connection has computational overhead). In some scenarios, you might want to close connections that have been unused for some time, understanding that this might slightly affect future operations. For the .NET SDK this configuration is set by [CosmosClientOptions.IdleTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout), and for the Java SDK you can customize it using [DirectConnectionConfig.setIdleConnectionTimeout](/java/api/com.azure.cosmos.directconnectionconfig.setidleconnectiontimeout). Setting these configurations to low values isn't recommended, because it can cause connections to be closed frequently and affect overall performance.
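+
+As a sketch of these two settings in the Java V4 SDK, with arbitrary values that aren't recommendations:
+
+```java
+import java.time.Duration;
+
+import com.azure.cosmos.CosmosClient;
+import com.azure.cosmos.CosmosClientBuilder;
+import com.azure.cosmos.DirectConnectionConfig;
+
+public class DirectModeTuningSketch {
+    public static CosmosClient build(String endpoint, String key) {
+        DirectConnectionConfig directConfig = DirectConnectionConfig.getDefaultConfig()
+            // Concurrent requests allowed per TCP connection before a new one is opened.
+            .setMaxRequestsPerConnection(30)
+            // Close connections that have been idle for this long.
+            .setIdleConnectionTimeout(Duration.ofMinutes(10));
+
+        return new CosmosClientBuilder()
+            .endpoint(endpoint)
+            .key(key)
+            .directMode(directConfig)
+            .buildClient();
+    }
+}
+```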
+
+### Language specific implementation details
+
+For further implementation details regarding a language see:
+
+* [.NET SDK implementation information](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/docs/SdkDesign.md)
+* [Java SDK direct mode information](performance-tips-java-sdk-v4.md#direct-connection)
+
+## Next steps
+
+For specific SDK platform performance optimizations:
+
+* [.NET V2 SDK performance tips](performance-tips.md)
+
+* [.NET V3 SDK performance tips](performance-tips-dotnet-sdk-v3.md)
+
+* [Java V4 SDK performance tips](performance-tips-java-sdk-v4.md)
+
+* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Sdk Dotnet Bulk Executor V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-dotnet-bulk-executor-v2.md
+
+ Title: 'Azure Cosmos DB: Bulk executor .NET API, SDK & resources'
+description: Learn all about the bulk executor .NET API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB bulk executor .NET SDK.
+++
+ms.devlang: csharp
++ Last updated : 04/06/2021++++
+# .NET bulk executor library: Download information (Legacy)
++
+| | Link/notes |
+|||
+| **Description**| The .NET bulk executor library allows client applications to perform bulk operations on Azure Cosmos DB accounts. This library provides BulkImport, BulkUpdate, and BulkDelete namespaces. The BulkImport module can bulk ingest documents in an optimized way such that the throughput provisioned for a collection is consumed to its maximum extent. The BulkUpdate module can bulk update existing data in Azure Cosmos DB containers as patches. The BulkDelete module can bulk delete documents in an optimized way such that the throughput provisioned for a collection is consumed to its maximum extent.|
+|**SDK download**| [NuGet](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.BulkExecutor/) |
+| **Bulk executor library in GitHub**| [GitHub](https://github.com/Azure/azure-cosmosdb-bulkexecutor-dotnet-getting-started)|
+|**API documentation**|[.NET API reference documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor)|
+|**Get started**|[Get started with the bulk executor library .NET SDK](bulk-executor-dotnet.md)|
+| **Current supported framework**| Microsoft .NET Framework 4.5.2, 4.6.1 and .NET Standard 2.0 |
+
+> [!NOTE]
+> If you are using bulk executor, please see the latest version 3.x of the [.NET SDK](tutorial-dotnet-bulk-import.md), which has bulk executor built into the SDK.
+
+## Release notes
+
+### <a name="2.4.1-preview"></a>2.4.1-preview
+
+* Fixed TotalElapsedTime in the response of BulkDelete to correctly measure the total time including any retries.
+
+### <a name="2.4.0-preview"></a>2.4.0-preview
+
+* Changed SDK dependency to >= 2.5.1
+
+### <a name="2.3.0-preview2"></a>2.3.0-preview2
+
+* Added support for graph bulk executor to accept ttl on vertices and edges
+
+### <a name="2.2.0-preview2"></a>2.2.0-preview2
+
+* Fixed an issue, which caused exceptions during elastic scaling of Azure Cosmos DB when running in Gateway mode. This fix makes it functionally equivalent to 1.4.1 release.
+
+### <a name="2.1.0-preview2"></a>2.1.0-preview2
+
+* Added BulkDelete support for API for NoSQL accounts to accept partition key, document ID tuples to delete. This change makes it functionally equivalent to 1.4.0 release.
+
+### <a name="2.0.0-preview2"></a>2.0.0-preview2
+
+* Including MongoBulkExecutor to support .NET Standard 2.0. This feature makes it functionally equivalent to 1.3.0 release, with the addition of supporting .NET Standard 2.0 as the target framework.
+
+### <a name="2.0.0-preview"></a>2.0.0-preview
+
+* Added .NET Standard 2.0 as one of the supported target frameworks to make the bulk executor library work with .NET Core applications.
+
+### <a name="1.8.9"></a>1.8.9
+
+* Fixed an issue with BulkDeleteAsync when values with escaped quotes were passed as input parameters.
+
+### <a name="1.8.8"></a>1.8.8
+
+* Fixed an issue on MongoBulkExecutor that was increasing the document size unexpectedly by adding padding and in some cases, going over the maximum document size limit.
+
+### <a name="1.8.7"></a>1.8.7
+
+* Fixed an issue with BulkDeleteAsync when the Collection has nested partition key paths.
+
+### <a name="1.8.6"></a>1.8.6
+
+* MongoBulkExecutor now implements IDisposable and is expected to be disposed of after use.
+
+### <a name="1.8.5"></a>1.8.5
+
+* Removed lock on SDK version. Package is now dependent on SDK >= 2.5.1.
+
+### <a name="1.8.4"></a>1.8.4
+
+* Fixed handling of identifiers when calling BulkImport with a list of POCO objects with numeric values.
+
+### <a name="1.8.3"></a>1.8.3
+
+* Fixed TotalElapsedTime in the response of BulkDelete to correctly measure the total time including any retries.
+
+### <a name="1.8.2"></a>1.8.2
+
+* Fixed high CPU consumption on certain scenarios.
+* Tracing now uses TraceSource. Users can define listeners for the `BulkExecutorTrace` source.
+* Fixed a rare scenario that could cause a lock when sending documents close to 2 MB in size.
+
+### <a name="1.6.0"></a>1.6.0
+
+* Updated the bulk executor to use the latest version of the Azure Cosmos DB .NET SDK (2.4.0).
+
+### <a name="1.5.0"></a>1.5.0
+
+* Added support for the graph bulk executor to accept TTL on vertices and edges.
+
+### <a name="1.4.1"></a>1.4.1
+
+* Fixed an issue that caused exceptions during elastic scaling of Azure Cosmos DB when running in Gateway mode.
+
+### <a name="1.4.0"></a>1.4.0
+
+* Added BulkDelete support for API for NoSQL accounts to accept partition key and document ID tuples to delete. An illustrative sketch follows.
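+
+The following C# sketch illustrates how this might be called. It assumes a `BulkExecutor` that has already been created and initialized for the target collection (see the import sketch earlier in this article), placeholder partition key and ID values, and the commonly documented `BulkDeleteAsync` overload that accepts a list of (partition key, ID) tuples; verify the exact signature and the `BulkDeleteResponse` property names against the API reference.
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Threading.Tasks;
+using Microsoft.Azure.CosmosDB.BulkExecutor;
+using Microsoft.Azure.CosmosDB.BulkExecutor.BulkDelete;
+
+static class BulkDeleteSample
+{
+    // 'bulkExecutor' is assumed to be a BulkExecutor that has already been created
+    // for the target collection and initialized with InitializeAsync().
+    public static async Task DeleteBatchAsync(BulkExecutor bulkExecutor)
+    {
+        // Each tuple is (partition key value, document id); the values are placeholders.
+        var pkIdPairs = new List<Tuple<string, string>>
+        {
+            Tuple.Create("partition-1", "1"),
+            Tuple.Create("partition-1", "2")
+        };
+
+        BulkDeleteResponse response = await bulkExecutor.BulkDeleteAsync(pkIdPairs);
+
+        Console.WriteLine($"Deleted {response.NumberOfDocumentsDeleted} documents, " +
+                          $"consumed {response.TotalRequestUnitsConsumed} RUs.");
+    }
+}
+```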
+
+### <a name="1.3.0"></a>1.3.0
+
+* Fixed an issue that caused a formatting problem in the user agent used by the bulk executor.
+
+### <a name="1.2.0"></a>1.2.0
+
+* Improved the bulk executor import and update APIs to transparently adapt to elastic scaling of an Azure Cosmos DB container when storage exceeds current capacity, without throwing exceptions.
+
+### <a name="1.1.2"></a>1.1.2
+
+* Bumped up the DocumentDB .NET SDK dependency to version 2.1.3.
+
+### <a name="1.1.1"></a>1.1.1
+
+* Fixed an issue that caused the bulk executor to throw a JSRT error while importing to fixed collections.
+
+### <a name="1.1.0"></a>1.1.0
+
+* Added support for BulkDelete operation for Azure Cosmos DB for NoSQL accounts.
+* Added support for BulkImport operation for accounts with Azure Cosmos DB's API for MongoDB.
+* Bumped up the DocumentDB .NET SDK dependency to version 2.0.0.
+
+### <a name="1.0.2"></a>1.0.2
+
+* Added support for BulkImport operation for Azure Cosmos DB for Gremlin accounts.
+
+### <a name="1.0.1"></a>1.0.1
+
+* Minor bug fix to the BulkImport operation for Azure Cosmos DB for NoSQL accounts.
+
+### <a name="1.0.0"></a>1.0.0
+
+* Added support for BulkImport and BulkUpdate operations for Azure Cosmos DB for NoSQL accounts.
+
+## Next steps
+
+To learn about the bulk executor Java library, see the following article:
+
+[Java bulk executor library SDK and release information](sdk-java-bulk-executor-v2.md)
cosmos-db Sdk Dotnet Change Feed V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-dotnet-change-feed-v2.md
+
+ Title: Azure Cosmos DB .NET change feed Processor API, SDK release notes
+description: Learn all about the Change Feed Processor API and SDK including release dates, retirement dates, and changes made between each version of the .NET Change Feed Processor SDK.
+++
+ms.devlang: csharp
++ Last updated : 04/06/2021++++
+# .NET Change Feed Processor SDK: Download and release notes (Legacy)
++
+| | Links |
+|||
+|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.ChangeFeedProcessor/)|
+|**API documentation**|[Change Feed Processor library API reference documentation](/dotnet/api/microsoft.azure.documents.changefeedprocessor)|
+|**Get started**|[Get started with the Change Feed Processor .NET SDK](../change-feed.md)|
+|**Current supported framework**| [Microsoft .NET Framework 4.5](https://www.microsoft.com/download/details.aspx?id=30653)</br> [Microsoft .NET Core](https://dotnet.microsoft.com/download) |
+
+> [!NOTE]
+> If you are using the change feed processor, see the latest version 3.x of the [.NET SDK](change-feed-processor.md), which has change feed support built into the SDK.
+
+## Release notes
+
+### v2 builds
+
+### <a id="2.4.0"></a>2.4.0
+* Added support for lease collections that are partitioned with the partition key defined as `/partitionKey`. Before this change, the lease collection's partition key had to be defined as `/id`.
+* This release allows using lease collections with the API for Gremlin, because Gremlin collections can't have a partition key defined as `/id`.
+
+### <a id="2.3.2"></a>2.3.2
+* Added lease store compatibility with the V3 SDK, which enables hot migration paths. An application can migrate to the V3 SDK and migrate back to the Change Feed processor library without losing any state.
+
+### <a id="2.3.1"></a>2.3.1
+* Corrected a case where the `FeedProcessing.ChangeFeedObserverCloseReason.Unknown` close reason was sent to `FeedProcessing.IChangeFeedObserver.CloseAsync` if the partition couldn't be found or if the target replica wasn't up to date with the read session. In these cases, the `FeedProcessing.ChangeFeedObserverCloseReason.ResourceGone` and `FeedProcessing.ChangeFeedObserverCloseReason.ReadSessionNotAvailable` close reasons are now used.
+* Added a new close reason, `FeedProcessing.ChangeFeedObserverCloseReason.ReadSessionNotAvailable`, which is sent to close the change feed observer when the target replica isn't up to date with the read session.
+
+### <a id="2.3.0"></a>2.3.0
+* Added a new method `ChangeFeedProcessorBuilder.WithCheckpointPartitionProcessorFactory` and a corresponding public interface `ICheckpointPartitionProcessorFactory`. This allows an implementation of the `IPartitionProcessor` interface to use the built-in checkpointing mechanism. The new factory is similar to the existing `IPartitionProcessorFactory`, except that its `Create` method also takes an `ILeaseCheckpointer` parameter.
+* Only one of the two methods, either `ChangeFeedProcessorBuilder.WithPartitionProcessorFactory` or `ChangeFeedProcessorBuilder.WithCheckpointPartitionProcessorFactory`, can be used for the same `ChangeFeedProcessorBuilder` instance.
+
+### <a id="2.2.8"></a>2.2.8
+* Stability and diagnosability improvements:
+  * Added support for detecting when reading the change feed takes a long time. When it takes longer than the value specified by the `ChangeFeedProcessorOptions.ChangeFeedTimeout` property, the following steps are taken:
+    * The operation to read the change feed on the problematic partition is aborted.
+    * The change feed processor instance drops ownership of the problematic lease. The dropped lease is picked up during the next lease acquire step by the same or a different change feed processor instance. This way, reading the change feed starts over.
+    * An issue is reported to the health monitor. The default health monitor sends all reported issues to the trace log.
+  * Added a new public property: `ChangeFeedProcessorOptions.ChangeFeedTimeout`. The default value of this property is 10 minutes. An illustrative configuration sketch follows this list.
+  * Added a new public enum value: `Monitoring.MonitoredOperation.ReadChangeFeed`. When the value of `HealthMonitoringRecord.Operation` is set to `Monitoring.MonitoredOperation.ReadChangeFeed`, it indicates that the health issue is related to reading the change feed.
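+
+The following C# sketch illustrates how these options might be configured. It's a minimal sketch with illustrative values: `ChangeFeedTimeout` is the property this release adds, and the other option and builder member names are the commonly documented ones, so verify them against the API reference.
+
+```csharp
+using System;
+using Microsoft.Azure.Documents.ChangeFeedProcessor;
+
+static class ChangeFeedTimeoutSample
+{
+    public static ChangeFeedProcessorOptions CreateOptions()
+    {
+        var options = new ChangeFeedProcessorOptions
+        {
+            // Abort and restart a change feed read on a partition when it runs longer
+            // than this value (the default is 10 minutes).
+            ChangeFeedTimeout = TimeSpan.FromMinutes(5),
+
+            // Other commonly documented options, shown with illustrative values.
+            FeedPollDelay = TimeSpan.FromSeconds(5),
+            MaxItemCount = 100
+        };
+
+        // The options are then passed to the builder, for example with
+        // ChangeFeedProcessorBuilder.WithProcessorOptions(options).
+        return options;
+    }
+}
+```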
+
+### <a id="2.2.7"></a>2.2.7
+* Improved the load-balancing strategy for the scenario where getting all leases takes longer than the lease expiration interval, for example, because of network issues:
+  * In this scenario, the load-balancing algorithm used to falsely consider leases expired, causing leases to be stolen from active owners. This could trigger unnecessary rebalancing of many leases.
+  * This release fixes the issue by not retrying on conflict while acquiring an expired lease whose owner hasn't changed, and by postponing acquiring the expired lease to the next load-balancing iteration.
+
+### <a id="2.2.6"></a>2.2.6
+* Improved handling of Observer exceptions.
+* Richer information on Observer errors:
+  * When an observer is closed because of an exception thrown by the observer's ProcessChangesAsync, CloseAsync now receives the reason parameter set to ChangeFeedObserverCloseReason.ObserverError.
+ * Added traces to identify errors within user code in an Observer.
+
+### <a id="2.2.5"></a>2.2.5
+* Added support for handling split in collections that use shared database throughput.
+  * This release fixes an issue that could occur during a split in collections that use shared database throughput, when the split resulted in partition rebalancing with only one child partition key range created rather than two. When this happened, the Change Feed Processor could get stuck deleting the lease for the old partition key range and not create new leases.
+
+### <a id="2.2.4"></a>2.2.4
+* Added a new property, ChangeFeedProcessorOptions.StartContinuation, to support starting the change feed from a request continuation token. It's used only when the lease collection is empty or a lease doesn't have ContinuationToken set. For leases in the lease collection that have ContinuationToken set, the ContinuationToken is used and ChangeFeedProcessorOptions.StartContinuation is ignored.
+
+### <a id="2.2.3"></a>2.2.3
+* Added support for using a custom store to persist continuation tokens per partition.
+  * For example, a custom lease store can be an Azure Cosmos DB lease collection partitioned in any custom way.
+  * Custom lease stores can use the new extensibility point ChangeFeedProcessorBuilder.WithLeaseStoreManager(ILeaseStoreManager) and the ILeaseStoreManager public interface.
+  * Refactored the ILeaseManager interface into multiple role interfaces.
+* Minor breaking change: removed the extensibility point ChangeFeedProcessorBuilder.WithLeaseManager(ILeaseManager); use ChangeFeedProcessorBuilder.WithLeaseStoreManager(ILeaseStoreManager) instead.
+
+### <a id="2.2.2"></a>2.2.2
+* This release fixes an issue that occurred when processing a split in the monitored collection while using a partitioned lease collection. When processing a lease for the split partition, the lease corresponding to that partition might not have been deleted.
+
+### <a id="2.2.1"></a>2.2.1
+* Fixed Estimator calculation for accounts with multiple write regions and new Session Token format.
+
+### <a id="2.2.0"></a>2.2.0
+* Added support for partitioned lease collections. The partition key must be defined as /id.
+* Minor breaking change: the methods of the IChangeFeedDocumentClient interface and the ChangeFeedDocumentClient class were changed to include RequestOptions and CancellationToken parameters. IChangeFeedDocumentClient is an advanced extensibility point that allows you to provide a custom implementation of the document client to use with the Change Feed Processor; for example, you can decorate DocumentClient and intercept all calls to it to do extra tracing, error handling, and so on. With this update, code that implements IChangeFeedDocumentClient needs to be changed to include the new parameters in the implementation.
+* Minor diagnostics improvements.
+
+### <a id="2.1.0"></a>2.1.0
+* Added a new API, `Task<IReadOnlyList<RemainingPartitionWork>> IRemainingWorkEstimator.GetEstimatedRemainingWorkPerPartitionAsync()`, which can be used to get the estimated work for each partition. An illustrative sketch follows this list.
+* Supports Microsoft.Azure.DocumentDB SDK 2.0. Requires Microsoft.Azure.DocumentDB 2.0 or later.
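+
+The following C# sketch illustrates how the estimator might be used. It's a minimal sketch under assumptions: the `BuildEstimatorAsync` builder method and the `RemainingPartitionWork` property names in the comments are taken from the commonly documented surface of this library, so treat them as assumptions and verify them against the API reference.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.Azure.Documents.ChangeFeedProcessor;
+
+static class EstimatorSample
+{
+    // 'builder' is assumed to be a ChangeFeedProcessorBuilder that is already configured
+    // with a host name, the monitored (feed) collection, and the lease collection.
+    public static async Task PrintRemainingWorkAsync(ChangeFeedProcessorBuilder builder)
+    {
+        IRemainingWorkEstimator estimator = await builder.BuildEstimatorAsync();
+
+        // Per-partition estimates. RemainingPartitionWork is assumed to expose the
+        // partition key range ID and the number of remaining changes.
+        var perPartition = await estimator.GetEstimatedRemainingWorkPerPartitionAsync();
+        foreach (var work in perPartition)
+        {
+            Console.WriteLine($"Partition {work.PartitionKeyRangeId}: {work.RemainingWork} changes pending");
+        }
+    }
+}
+```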
+
+### <a id="2.0.6"></a>2.0.6
+* Added ChangeFeedEventHost.HostName public property for compatibility with v1.
+
+### <a id="2.0.5"></a>2.0.5
+* Fixed a race condition that occurred during partition split. The race condition could lead to acquiring a lease and immediately losing it during the split, causing contention.
+
+### <a id="2.0.4"></a>2.0.4
+* GA SDK
+
+### <a id="2.0.3-prerelease"></a>2.0.3-prerelease
+* Fixed the following issues:
+  * When a partition split happens, there could be duplicate processing of documents modified before the split.
+ * The GetEstimatedRemainingWork API returned 0 when no leases were present in the lease collection.
+
+* The following exceptions are made public. Extensions that implement IPartitionProcessor can throw these exceptions.
+ * Microsoft.Azure.Documents.ChangeFeedProcessor.Exceptions.LeaseLostException.
+ * Microsoft.Azure.Documents.ChangeFeedProcessor.Exceptions.PartitionException.
+ * Microsoft.Azure.Documents.ChangeFeedProcessor.Exceptions.PartitionNotFoundException.
+ * Microsoft.Azure.Documents.ChangeFeedProcessor.Exceptions.PartitionSplitException.
+
+### <a id="2.0.2-prerelease"></a>2.0.2-prerelease
+* Minor API changes:
+ * Removed ChangeFeedProcessorOptions.IsAutoCheckpointEnabled that was marked as obsolete.
+
+### <a id="2.0.1-prerelease"></a>2.0.1-prerelease
+* Stability improvements:
+  * Better handling of lease store initialization. When the lease store is empty, only one processor instance can initialize it; the others wait.
+  * More stable and efficient lease renewal and release. Renewing and releasing a lease for one partition is independent of renewing others. In v1, that was done sequentially for all partitions.
+* New v2 API:
+  * Builder pattern for flexible construction of the processor: the ChangeFeedProcessorBuilder class (an illustrative sketch follows this list).
+ * Can take any combination of parameters.
+ * Can take DocumentClient instance for monitoring and/or lease collection (not available in v1).
+ * IChangeFeedObserver.ProcessChangesAsync now takes CancellationToken.
+ * IRemainingWorkEstimator - the remaining work estimator can be used separately from the processor.
+ * New extensibility points:
+ * IPartitionLoadBalancingStrategy - for custom load-balancing of partitions between instances of the processor.
+ * ILease, ILeaseManager - for custom lease management.
+ * IPartitionProcessor - for custom processing changes on a partition.
+* Logging - uses [LibLog](https://github.com/damianh/LibLog) library.
+* 100% backward compatible with v1 API.
+* New code base.
+* Compatible with [SQL .NET SDK](sdk-dotnet-v2.md) versions 1.21.1 and above.
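+
+The following C# sketch illustrates what construction with the builder might look like. It's a minimal sketch, not a definitive sample: the connection details are placeholders, and the observer members and builder methods (`WithHostName`, `WithFeedCollection`, `WithLeaseCollection`, `WithObserver`, `BuildAsync`) are the commonly documented ones, so verify the exact names and signatures against the API reference.
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Threading;
+using System.Threading.Tasks;
+using Microsoft.Azure.Documents;
+using Microsoft.Azure.Documents.ChangeFeedProcessor;
+using Microsoft.Azure.Documents.ChangeFeedProcessor.FeedProcessing;
+
+class SampleObserver : IChangeFeedObserver
+{
+    public Task OpenAsync(IChangeFeedObserverContext context) => Task.CompletedTask;
+
+    public Task CloseAsync(IChangeFeedObserverContext context, ChangeFeedObserverCloseReason reason)
+        => Task.CompletedTask;
+
+    public Task ProcessChangesAsync(IChangeFeedObserverContext context,
+        IReadOnlyList<Document> docs, CancellationToken cancellationToken)
+    {
+        // Handle the batch of changed documents for this lease here.
+        Console.WriteLine($"Received {docs.Count} changes on lease {context.PartitionKeyRangeId}");
+        return Task.CompletedTask;
+    }
+}
+
+class Program
+{
+    static async Task Main()
+    {
+        // Placeholder monitored (feed) and lease collection details.
+        var feedCollection = new DocumentCollectionInfo
+        {
+            Uri = new Uri("https://<account>.documents.azure.com:443/"),
+            MasterKey = "<key>",
+            DatabaseName = "<database>",
+            CollectionName = "<monitored-collection>"
+        };
+        var leaseCollection = new DocumentCollectionInfo
+        {
+            Uri = new Uri("https://<account>.documents.azure.com:443/"),
+            MasterKey = "<key>",
+            DatabaseName = "<database>",
+            CollectionName = "leases"
+        };
+
+        IChangeFeedProcessor processor = await new ChangeFeedProcessorBuilder()
+            .WithHostName("sample-host")
+            .WithFeedCollection(feedCollection)
+            .WithLeaseCollection(leaseCollection)
+            .WithObserver<SampleObserver>()
+            .BuildAsync();
+
+        await processor.StartAsync();
+        Console.ReadKey();          // Process changes until a key is pressed.
+        await processor.StopAsync();
+    }
+}
+```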
+
+### v1 builds
+
+### <a id="1.3.3"></a>1.3.3
+* Added more logging.
+* Fixed a DocumentClient leak when calling the pending work estimation multiple times.
+
+### <a id="1.3.2"></a>1.3.2
+* Fixes in the pending work estimation.
+
+### <a id="1.3.1"></a>1.3.1
+* Stability improvements.
+  * Fix for an issue in handling canceled tasks that might lead to stopped observers on some partitions.
+* Support for manual checkpointing.
+* Compatible with [SQL .NET SDK](sdk-dotnet-v2.md) versions 1.21 and above.
+
+### <a id="1.2.0"></a>1.2.0
+* Adds support for .NET Standard 2.0. The package now supports `netstandard2.0` and `net451` framework monikers.
+* Compatible with [SQL .NET SDK](sdk-dotnet-v2.md) versions 1.17.0 and above.
+* Compatible with [SQL .NET Core SDK](sdk-dotnet-core-v2.md) versions 1.5.1 and above.
+
+### <a id="1.1.1"></a>1.1.1
+* Fixes an issue with the calculation of the estimate of remaining work when the Change Feed was empty or no work was pending.
+* Compatible with [SQL .NET SDK](sdk-dotnet-v2.md) versions 1.13.2 and above.
+
+### <a id="1.1.0"></a>1.1.0
+* Added a method to obtain an estimate of remaining work to be processed in the Change Feed.
+* Compatible with [SQL .NET SDK](sdk-dotnet-v2.md) versions 1.13.2 and above.
+
+### <a id="1.0.0"></a>1.0.0
+* GA SDK
+* Compatible with [SQL .NET SDK](sdk-dotnet-v2.md) versions 1.14.1 and below.
+
+## Release & Retirement dates
+
+Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer, supported version. New features, functionality, and optimizations are only added to the current SDK, so we recommend that you always upgrade to the latest SDK version as early as possible.
+
+> [!WARNING]
+> After 31 August 2022, Azure Cosmos DB will no longer make bug fixes, add new features, or provide support for version 1.x of the Azure Cosmos DB .NET or .NET Core SDK for API for NoSQL. If you prefer not to upgrade, requests sent from version 1.x of the SDK will continue to be served by the Azure Cosmos DB service.
+
+<br/>
+
+| Version | Release Date | Retirement Date |
+| | | |
+| [2.4.0](#2.4.0) |May 6, 2021 | |
+| [2.3.2](#2.3.2) |August 11, 2020 | |
+| [2.3.1](#2.3.1) |July 30, 2020 | |
+| [2.3.0](#2.3.0) |April 2, 2020 | |
+| [2.2.8](#2.2.8) |October 28, 2019 | |
+| [2.2.7](#2.2.7) |May 14, 2019 | |
+| [2.2.6](#2.2.6) |January 29, 2019 | |
+| [2.2.5](#2.2.5) |December 13, 2018 | |
+| [2.2.4](#2.2.4) |November 29, 2018 | |
+| [2.2.3](#2.2.3) |November 19, 2018 | |
+| [2.2.2](#2.2.2) |October 31, 2018 | |
+| [2.2.1](#2.2.1) |October 24, 2018 | |
+| [1.3.3](#1.3.3) |May 08, 2018 | |
+| [1.3.2](#1.3.2) |April 18, 2018 | |
+| [1.3.1](#1.3.1) |March 13, 2018 | |
+| [1.2.0](#1.2.0) |October 31, 2017 | |
+| [1.1.1](#1.1.1) |August 29, 2017 | |
+| [1.1.0](#1.1.0) |August 13, 2017 | |
+| [1.0.0](#1.0.0) |July 07, 2017 | |
+
+## FAQ
++
+## See also
+
+To learn more about Azure Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sdk Dotnet Core V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-dotnet-core-v2.md
+
+ Title: 'Azure Cosmos DB: SQL .NET Core API, SDK & resources'
+description: Learn all about the SQL .NET Core API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB .NET Core SDK.
+++
+ms.devlang: csharp
+ Last updated : 04/18/2022++++
+# Azure Cosmos DB .NET Core SDK v2 for API for NoSQL: Release notes and resources (Legacy)
++
+| | Links |
+|||
+|**Release notes**| [Release notes](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md)|
+|**SDK download**| [NuGet](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.Core/)|
+|**API documentation**|[.NET API reference documentation](/dotnet/api/overview/azure/cosmosdb)|
+|**Samples**|[.NET code samples](samples-dotnet.md)|
+|**Get started**|[Get started with the Azure Cosmos DB .NET SDK](sdk-dotnet-v2.md)|
+|**Web app tutorial**|[Web application development with Azure Cosmos DB](tutorial-dotnet-web-app.md)|
+|**Current supported framework**|[.NET Standard 1.6 and .NET Standard 1.5](https://www.nuget.org/packages/NETStandard.Library)|
+
+> [!WARNING]
+> On August 31, 2024, the Azure Cosmos DB .NET SDK v2.x will be retired. The SDK and all applications using it will continue to function;
+> Azure Cosmos DB will simply cease to provide further maintenance and support for this SDK.
+> We recommend [migrating to the latest version](migrate-dotnet-v3.md) of the .NET SDK v3.
+>
+
+> [!NOTE]
+> If you are using .NET Core, see the latest version 3.x of the [.NET SDK](sdk-dotnet-v3.md), which targets .NET Standard.
+
+## <a name="release-history"></a> Release history
+
+Release history is maintained in the Azure Cosmos DB .NET SDK source repo. For a detailed list of feature releases and bugs fixed in each release, see the [SDK changelog documentation](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md).
+
+Because version 3 of the Azure Cosmos DB .NET SDK includes updated features and improved performance, version 2.x of this SDK will be retired on 31 August 2024. You must update your SDK to version 3 by that date. We recommend following the [instructions](migrate-dotnet-v3.md) to migrate to Azure Cosmos DB .NET SDK version 3.
+
+## <a name="recommended-version"></a> Recommended version
+
+Different sub-versions of the .NET SDK are available under the 2.x.x version. **The minimum recommended version is 2.18.0**.
+
+## <a name="known-issues"></a> Known issues
+
+Below is a list of any known issues affecting the [recommended minimum version](#recommended-version):
+
+| Issue | Impact | Mitigation | Tracking link |
+| | | | |
+
+## See Also
+
+To learn more about Azure Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sdk Dotnet V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-dotnet-v2.md
+
+ Title: 'Azure Cosmos DB: SQL .NET API, SDK & resources'
+description: Learn all about the SQL .NET API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB .NET SDK.
+++
+ms.devlang: csharp
+ Last updated : 04/18/2022++++
+# Azure Cosmos DB .NET SDK v2 for API for NoSQL: Download and release notes (Legacy)
++
+| | Links |
+|||
+|**Release notes**|[Release notes](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md)|
+|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/)|
+|**API documentation**|[.NET API reference documentation](/dotnet/api/overview/azure/cosmosdb)|
+|**Samples**|[.NET code samples](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples)|
+|**Get started**|[Get started with the Azure Cosmos DB .NET SDK](quickstart-dotnet.md)|
+|**Web app tutorial**|[Web application development with Azure Cosmos DB](tutorial-dotnet-web-app.md)|
+|**Current supported framework**|[Microsoft .NET Framework 4.5](https://www.microsoft.com/download/details.aspx?id=30653)|
+
+> [!WARNING]
+> On August 31, 2024, the Azure Cosmos DB .NET SDK v2.x will be retired. The SDK and all applications using it will continue to function;
+> Azure Cosmos DB will simply cease to provide further maintenance and support for this SDK.
+> We recommend [migrating to the latest version](migrate-dotnet-v3.md) of the .NET SDK v3.
+>
+
+> [!NOTE]
+> If you are using .NET Framework, see the latest version 3.x of the [.NET SDK](sdk-dotnet-v3.md), which targets .NET Standard.
+
+## <a name="release-history"></a> Release history
+
+Release history is maintained in the Azure Cosmos DB .NET SDK source repo. For a detailed list of feature releases and bugs fixed in each release, see the [SDK changelog documentation](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md).
+
+Because version 3 of the Azure Cosmos DB .NET SDK includes updated features and improved performance, version 2.x of this SDK will be retired on 31 August 2024. You must update your SDK to version 3 by that date. We recommend following the [instructions](migrate-dotnet-v3.md) to migrate to Azure Cosmos DB .NET SDK version 3.
+
+## <a name="recommended-version"></a> Recommended version
+
+Different sub-versions of the .NET SDK are available under the 2.x.x version. **The minimum recommended version is 2.18.0**.
+
+## <a name="known-issues"></a> Known issues
+
+Below is a list of any known issues affecting the [recommended minimum version](#recommended-version):
+
+| Issue | Impact | Mitigation | Tracking link |
+| | | | |
+
+## FAQ
++
+## See also
+
+To learn more about Azure Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sdk Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-dotnet-v3.md
+
+ Title: 'Azure Cosmos DB: SQL .NET Standard API, SDK & resources'
+description: Learn all about the API for NoSQL and .NET SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB .NET SDK.
+++
+ms.devlang: csharp
+ Last updated : 03/22/2022++++
+# Azure Cosmos DB .NET SDK v3 for API for NoSQL: Download and release notes
++
+| | Links |
+|||
+|**Release notes**|[Release notes](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/changelog.md)|
+|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/)|
+|**API documentation**|[.NET API reference documentation](/dotnet/api/overview/azure/cosmosdb)|
+|**Samples**|[.NET code samples](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage)|
+|**Get started**|[Get started with the Azure Cosmos DB .NET SDK](quickstart-dotnet.md)|
+|**Best Practices**|[Best Practices for Azure Cosmos DB .NET SDK](best-practice-dotnet.md)|
+|**Web app tutorial**|[Web application development with Azure Cosmos DB](tutorial-dotnet-web-app.md)|
+|**Entity Framework Core tutorial**|[Entity Framework Core with Azure Cosmos DB Provider](/ef/core/providers/cosmos/#get-started)|
+|**Current supported framework**|[Microsoft .NET Standard 2.0](/dotnet/standard/net-standard)|
+
+## Release history
+
+Release history is maintained in the Azure Cosmos DB .NET SDK source repo. For a detailed list of feature releases and bugs fixed in each release, see the [SDK changelog documentation](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/changelog.md)
+
+## <a name="recommended-version"></a> Recommended version
+
+Different sub-versions of the .NET SDK are available under the 3.x.x version. **The minimum recommended version is 3.25.0**.
+
+## <a name="known-issues"></a> Known issues
+
+For a list of known issues with the recommended minimum version of the SDK, see [known issues section](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/changelog.md#-known-issues).
+
+## FAQ
+
+## See also
+To learn more about Azure Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sdk Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-go.md
+
+ Title: 'Azure Cosmos DB: SQL Go, SDK & resources'
+description: Learn all about the API for NoSQL and Go SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB Go SDK.
+++
+ms.devlang: golang
+ Last updated : 03/22/2022+++
+# Azure Cosmos DB Go SDK for API for NoSQL: Download and release notes
+++
+| | Links |
+|||
+|**Release notes**|[Release notes](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/dat)|
+|**SDK download**|[Go pkg](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos)|
+|**API documentation**|[API reference documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos#pkg-types)|
+|**Samples**|[Code samples](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos#pkg-overview)|
+|**Get started**|[Get started with the Azure Cosmos DB Go SDK](quickstart-go.md)|
+
+> [!IMPORTANT]
+> The Go SDK for Azure Cosmos DB is currently in beta. This beta is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
+>
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Release history
+
+Release history is maintained in the Azure Cosmos DB Go SDK source repo. For a detailed list of feature releases and bugs fixed in each release, see the [SDK changelog documentation](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/dat).
+
+## FAQ
++
+## See also
+
+To learn more about Azure Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sdk Java Async V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-async-v2.md
+
+ Title: 'Azure Cosmos DB: SQL Async Java API, SDK & resources'
+description: Learn all about the SQL Async Java API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.
+++
+ms.devlang: java
+ Last updated : 11/11/2021+++++
+# Azure Cosmos DB Async Java SDK for API for NoSQL (legacy): Release notes and resources
++
+The API for NoSQL Async Java SDK differs from the API for NoSQL Java SDK by providing asynchronous operations with support of the [Netty library](https://netty.io/). The pre-existing [API for NoSQL Java SDK](sdk-java-v2.md) does not support asynchronous operations.
+
+> [!IMPORTANT]
+> This is *not* the latest Java SDK for Azure Cosmos DB! Consider using [Azure Cosmos DB Java SDK v4](sdk-java-v4.md) for your project. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and the [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide.
+>
+
+> [!IMPORTANT]
+> On August 31, 2024 the Azure Cosmos DB Async Java SDK v2.x
+> will be retired; the SDK and all applications using the SDK
+> **will continue to function**; Azure Cosmos DB will simply cease
+> to provide further maintenance and support for this SDK.
+> We recommend following the instructions above to migrate to
+> Azure Cosmos DB Java SDK v4.
+>
+
+| | Links |
+|||
+| **Release Notes** | [Release notes for Async Java SDK](https://github.com/Azure/azure-cosmosdb-jav) |
+| **SDK Download** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb) |
+| **API documentation** |[Java API reference documentation](/java/api/com.microsoft.azure.cosmosdb.rx.asyncdocumentclient) |
+| **Contribute to SDK** | [GitHub](https://github.com/Azure/azure-cosmosdb-java) |
+| **Get started** | [Get started with the Async Java SDK](https://github.com/Azure-Samples/azure-cosmos-db-sql-api-async-java-getting-started) |
+| **Code sample** | [GitHub](https://github.com/Azure/azure-cosmosdb-java#usage-code-sample)|
+| **Performance tips**| [GitHub readme](https://github.com/Azure/azure-cosmosdb-java#guide-for-prod)|
+| **Minimum supported runtime**|[JDK 8](/java/azure/jdk/) |
+
+## Release history
+
+Release history is maintained in the Azure Cosmos DB Java SDK source repo. For a detailed list of feature releases and bugs fixed in each release, see the [SDK changelog documentation](https://github.com/Azure/azure-cosmosdb-jav)
+
+## FAQ
+
+## See also
+To learn more about Azure Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sdk Java Bulk Executor V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-bulk-executor-v2.md
+
+ Title: 'Azure Cosmos DB: Bulk executor Java API, SDK & resources'
+description: Learn all about the bulk executor Java API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB bulk executor Java SDK.
+++
+ms.devlang: java
+ Last updated : 04/06/2021+++++
+# Java bulk executor library: Download information
++
+> [!IMPORTANT]
+> This is *not* the latest Java Bulk Executor for Azure Cosmos DB! Consider using [Azure Cosmos DB Java SDK v4](bulk-executor-java.md) for performing bulk operations. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and the [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide.
+>
+
+> [!IMPORTANT]
+> On February 29, 2024 the Azure Cosmos DB Sync Java SDK v2.x
+> will be retired; the SDK and all applications using the SDK including Bulk Executor
+> **will continue to function**; Azure Cosmos DB will simply cease
+> to provide further maintenance and support for this SDK.
+> We recommend following the instructions above to migrate to
+> Azure Cosmos DB Java SDK v4.
+>
+
+| | Link/notes |
+|||
+|**Description**|The bulk executor library allows client applications to perform bulk operations in Azure Cosmos DB accounts. The bulk executor library provides the BulkImport and BulkUpdate namespaces. The BulkImport module can bulk ingest documents in an optimized way so that the throughput provisioned for a collection is consumed to its maximum extent. The BulkUpdate module can bulk update existing data in Azure Cosmos DB containers as patches.|
+|**SDK download**|[Maven](https://search.maven.org/#search%7Cga%7C1%7Cdocumentdb-bulkexecutor)|
+|**Bulk executor library in GitHub**|[GitHub](https://github.com/Azure/azure-cosmosdb-bulkexecutor-java-getting-started)|
+| **API documentation**| [Java API reference documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor)|
+|**Get started**|[Get started with the bulk executor library Java SDK](bulk-executor-java.md)|
+|**Minimum supported runtime**|[Java Development Kit (JDK) 7+](/java/azure/jdk/)|
+
+## Release notes
+### <a name="2.12.3"></a>2.12.3
+
+* Fix the retry policy when `GoneException` is wrapped in `IllegalStateException`. This change is necessary to make sure the Gateway cache is refreshed on 410 responses, so the Spark connector (for Spark 2.4) can use a custom retry policy to allow queries to succeed during partition splits.
+
+### <a name="2.12.2"></a>2.12.2
+
+* Fix an issue resulting in documents not always being imported on transient errors.
+
+### <a name="2.12.1"></a>2.12.1
+
+* Upgrade to use latest Azure Cosmos DB Core SDK version.
+
+### <a name="2.12.0"></a>2.12.0
+
+* Improve handling of the RU budget provided through the Spark connector for bulk operations. An initial one-time bulk import is performed from the Spark connector with a baseBatchSize, and the RU consumption for that batch import is collected.
+ A miniBatchSizeAdjustmentFactor is calculated based on that RU consumption, and the mini-batch size is adjusted accordingly. Based on the elapsed time and the consumed RUs for each batch import, a sleep duration is calculated to limit the RU consumption per second, and it is used to pause the thread before the next batch import.
+
+### <a name="2.11.0"></a>2.11.0
+
+* Fix a bug preventing bulk updates when using a nested partition key
+
+### <a name="2.10.0"></a>2.10.0
+
+* Fix for DocumentAnalyzer.java to correctly extract nested partition key values from json.
+
+### <a name="2.9.4"></a>2.9.4
+
+* Add functionality in BulkDelete operations to retry on specific failures, and also return to the user a list of failures that could be retried.
+
+### <a name="2.9.3"></a>2.9.3
+
+* Update for Azure Cosmos DB SDK version 2.4.7.
+
+### <a name="2.9.2"></a>2.9.2
+
+* Fix for 'mergeAll' to continue past the 'id' and partition key value so that any patched document properties placed after the 'id' and partition key value get added to the updated item list.
+
+### <a name="2.9.1"></a>2.9.1
+
+* Update the starting degree of concurrency to 1 and add debug logs for mini-batches.
cosmos-db Sdk Java Spark V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-spark-v2.md
+
+ Title: 'Azure Cosmos DB Apache Spark 2 OLTP Connector for API for NoSQL release notes and resources'
+description: Learn about the Azure Cosmos DB Apache Spark 2 OLTP Connector for API for NoSQL, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.
+++
+ms.devlang: java
+ Last updated : 04/06/2021+++++
+# Azure Cosmos DB Apache Spark 2 OLTP Connector for API for NoSQL: Release notes and resources
++
+You can accelerate big data analytics by using the Azure Cosmos DB Apache Spark 2 OLTP Connector for NoSQL. The Spark Connector allows you to run [Spark](https://spark.apache.org/) jobs on data stored in Azure Cosmos DB. Batch and stream processing are supported.
+
+You can use the connector with [Azure Databricks](https://azure.microsoft.com/services/databricks) or [Azure HDInsight](https://azure.microsoft.com/services/hdinsight/), which provide managed Spark clusters on Azure. The following table shows supported versions:
+
+| Component | Version |
+||-|
+| Apache Spark | 2.4.*x*, 2.3.*x*, 2.2.*x*, and 2.1.*x* |
+| Scala | 2.11 |
+| Azure Databricks (runtime version) | Later than 3.4 |
+
+> [!WARNING]
+> This connector supports the API for NoSQL of Azure Cosmos DB.
+> For the Azure Cosmos DB for MongoDB, use the [MongoDB Connector for Spark](https://docs.mongodb.com/spark-connector/master/).
+> For the Azure Cosmos DB for Apache Cassandra, use the [Cassandra Spark connector](https://github.com/datastax/spark-cassandra-connector).
+>
+
+## Resources
+
+| Resource | Link |
+|||
+| **SDK download** | [Download latest .jar](https://aka.ms/CosmosDB_OLTP_Spark_2.4_LKG), [Maven](https://search.maven.org/search?q=a:azure-cosmosdb-spark_2.4.0_2.11) |
+|**API documentation** | [Spark Connector reference]() |
+|**Contribute to the SDK** | [Azure Cosmos DB Connector for Apache Spark on GitHub](https://github.com/Azure/azure-cosmosdb-spark) |
+|**Get started** | [Accelerate big data analytics by using the Apache Spark to Azure Cosmos DB connector](./quickstart-spark.md) <br> [Use Apache Spark Structured Streaming with Apache Kafka and Azure Cosmos DB](../../hdinsight/apache-kafka-spark-structured-streaming-cosmosdb.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json) |
+
+## Release history
+* [Release notes](https://github.com/Azure/azure-cosmosdb-spark/blob/2.4/CHANGELOG.md)
+
+## FAQ
+
+## Next steps
+
+Learn more about [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/).
+
+Learn more about [Apache Spark](https://spark.apache.org/).
cosmos-db Sdk Java Spark V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-spark-v3.md
+
+ Title: 'Azure Cosmos DB Apache Spark 3 OLTP Connector for API for NoSQL (Preview) release notes and resources'
+description: Learn about the Azure Cosmos DB Apache Spark 3 OLTP Connector for API for NoSQL, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Java SDK.
+++
+ms.devlang: java
+ Last updated : 11/12/2021+++++
+# Azure Cosmos DB Apache Spark 3 OLTP Connector for API for NoSQL: Release notes and resources
++
+**Azure Cosmos DB OLTP Spark connector** provides Apache Spark support for Azure Cosmos DB using the API for NoSQL. Azure Cosmos DB is a globally distributed database service that allows developers to work with data using a variety of standard APIs, such as SQL, MongoDB, Cassandra, Graph, and Table.
+
+If you have any feedback or ideas on how to improve your experience, create an issue in our [SDK GitHub repository](https://github.com/Azure/azure-sdk-for-java/issues/new).
+
+## Documentation links
+
+* [Getting started](https://aka.ms/azure-cosmos-spark-3-quickstart)
+* [Catalog API](https://aka.ms/azure-cosmos-spark-3-catalog-api)
+* [Configuration Parameter Reference](https://aka.ms/azure-cosmos-spark-3-config)
+* [End-to-end sample notebook "New York City Taxi data"](https://aka.ms/azure-cosmos-spark-3-sample-nyc-taxi-data)
+* [Migration from Spark 2.4 to Spark 3.*](https://aka.ms/azure-cosmos-spark-3-migration)
+
+## Version compatibility
+* [Version compatibility for Spark 3.1](https://aka.ms/azure-cosmos-spark-3-1-version-compatibility)
+* [Version compatibility for Spark 3.2](https://aka.ms/azure-cosmos-spark-3-2-version-compatibility)
+
+## Release notes
+* [Release notes for Spark 3.1](https://aka.ms/azure-cosmos-spark-3-1-changelog)
+* [Release notes for Spark 3.2](https://aka.ms/azure-cosmos-spark-3-2-changelog)
+
+## Download
+* [Download of Azure Cosmos DB Spark connector for Spark 3.1](https://aka.ms/azure-cosmos-spark-3-1-download)
+* [Download of Azure Cosmos DB Spark connector for Spark 3.2](https://aka.ms/azure-cosmos-spark-3-2-download)
+
+Azure Cosmos DB Spark connector is available on [Maven Central Repo](https://search.maven.org/search?q=g:com.azure.cosmos.spark).
+
+If you encounter any bug or want to suggest a feature change, [file an issue](https://github.com/Azure/azure-sdk-for-java/issues/new).
+
+## Next steps
+
+Learn more about [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/).
+
+Learn more about [Apache Spark](https://spark.apache.org/).
cosmos-db Sdk Java Spring Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-spring-data-v2.md
+
 Title: 'Spring Data Azure Cosmos DB v2 for API for NoSQL release notes and resources'
+description: Learn about the Spring Data Azure Cosmos DB v2 for API for NoSQL, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.
+++
+ms.devlang: java
+ Last updated : 04/06/2021+++++
+# Spring Data Azure Cosmos DB v2 for API for NoSQL (legacy): Release notes and resources
++
+ Spring Data Azure Cosmos DB version 2 for NoSQL allows developers to use Azure Cosmos DB in Spring applications. Spring Data Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact.
+
+> [!WARNING]
+> This version of the Spring Data Azure Cosmos DB SDK depends on a retired version of the Azure Cosmos DB Java SDK. This Spring Data Azure Cosmos DB SDK will be announced as retiring in the near future! This is *not* the latest Azure Spring Data Azure Cosmos DB SDK for Azure Cosmos DB and is outdated. Because of performance issues and instability in Azure Spring Data Azure Cosmos DB SDK V2, we highly recommend using [Azure Spring Data Azure Cosmos DB v3](sdk-java-spring-data-v3.md) for your project. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide to understand the difference in the underlying Java SDK V4.
+>
+
+The [Spring Framework](https://spring.io/projects/spring-framework) is a programming and configuration model that streamlines Java application development. Spring streamlines the "plumbing" of applications by using dependency injection. Many developers like Spring because it makes building and testing applications more straightforward. [Spring Boot](https://spring.io/projects/spring-boot) extends this handling of the plumbing with an eye toward web application and microservices development. [Spring Data](https://spring.io/projects/spring-data) is a programming model for accessing datastores like Azure Cosmos DB from the context of a Spring or Spring Boot application.
+
+You can use Spring Data Azure Cosmos DB in your applications hosted in [Azure Spring Apps](https://azure.microsoft.com/services/spring-apps/).
+
+> [!IMPORTANT]
+> These release notes are for version 2 of Spring Data Azure Cosmos DB. You can find [release notes for version 3 here](sdk-java-spring-data-v3.md).
+>
+> Spring Data Azure Cosmos DB supports only the API for NoSQL.
+>
+> See the following articles for information about Spring Data on other Azure Cosmos DB APIs:
+> * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db)
+> * [Spring Data MongoDB with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-mongodb-with-cosmos-db)
+>
+> Want to get going fast?
+> 1. Install the [minimum supported Java runtime, JDK 8](/java/azure/jdk/), so you can use the SDK.
+> 2. Create a Spring Data Azure Cosmos DB app by using the [starter](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db). It's easy!
+> 3. Work through the [Spring Data Azure Cosmos DB developer's guide](/azure/developer/java/spring-framework/how-to-guides-spring-data-cosmosdb), which walks through basic Azure Cosmos DB requests.
+>
+> You can spin up Spring Boot Starter apps fast by using [Spring Initializr](https://start.spring.io/)!
+>
+
+## Resources
+
+| Resource | Link |
+|||
+| **SDK download** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure/spring-data-cosmosdb) |
+|**API documentation** | [Spring Data Azure Cosmos DB reference documentation]() |
+|**Contribute to the SDK** | [Spring Data Azure Cosmos DB repo on GitHub](https://github.com/microsoft/spring-data-cosmosdb) |
+|**Spring Boot Starter**| [Azure Cosmos DB Spring Boot Starter client library for Java](https://github.com/MicrosoftDocs/azure-dev-docs/blob/master/articles/jav) |
+|**Spring TODO app sample with Azure Cosmos DB**| [End-to-end Java Experience in App Service Linux (Part 2)](https://github.com/Azure-Samples/e2e-java-experience-in-app-service-linux-part-2) |
+|**Developer's guide** | [Spring Data Azure Cosmos DB developer's guide](/azure/developer/java/spring-framework/how-to-guides-spring-data-cosmosdb) |
+|**Using Starter** | [How to use Spring Boot Starter with the Azure Cosmos DB for NoSQL](/azure/developer/jav) |
+|**Sample with Azure App Service** | [How to use Spring and Azure Cosmos DB with App Service on Linux](/azure/developer/java/spring-framework/configure-spring-app-with-cosmos-db-on-app-service-linux) <br> [TODO app sample](https://github.com/Azure-Samples/e2e-java-experience-in-app-service-linux-part-2.git) |
+
+## Release history
+
+### 2.3.0 (May 21, 2020)
+#### New features
+* Updates Spring Boot version to 2.3.0.
++
+### 2.2.5 (May 19, 2020)
+#### New features
+* Updates Azure Cosmos DB version to 3.7.3.
+#### Key bug fixes
+* Contains memory leak fixes and Netty version upgrades from Azure Cosmos DB SDK 3.7.3.
+
+### 2.2.4 (April 6, 2020)
+#### Key bug fixes
+* Fixes the `allowTelemetry` flag so that its value from `CosmosDbConfig` is taken into account.
+* Fixes `TTL` property on container.
+
+### 2.2.3 (February 25, 2020)
+#### New features
+* Adds new `findAll` by partition key API.
+* Updates Azure Cosmos DB version to 3.7.0.
+#### Key bug fixes
+* Fixes `collectionName` -> `containerName`.
+* Fixes `entityClass` and `domainClass` -> `domainType`.
+* Fixes "Return entity collection saved by repository instead of input entity."
+
+### 2.1.10 (February 25, 2020)
+#### Key bug fixes
+* Backports fix for "Return entity collection saved by repository instead of input entity."
+
+### 2.2.2 (January 15, 2020)
+#### New features
+* Updates Azure Cosmos DB version to 3.6.0.
+#### Key bug fixes
+
+### 2.2.1 (December 31, 2019)
+#### New features
+* Updates Azure Cosmos DB SDK version to 3.5.0.
+* Adds annotation field to enable or disable automatic collection creation.
+* Improves exception handling. Exposes `CosmosClientException` through `CosmosDBAccessException`.
+* Exposes `requestCharge` and `activityId` through `ResponseDiagnostics`.
+#### Key bug fixes
+* SDK 3.5.0 update fixes "Exception when Azure Cosmos DB HTTP response header is larger than 8192 bytes," "ConsistencyPolicy.defaultConsistencyLevel() fails on Bounded Staleness and Consistent Prefix."
+* Fixes `findById` method's behavior. Previously, this method returned empty if the entity wasn't found instead of throwing an exception.
+* Fixes a bug in which sorting wasn't applied on the next page when `CosmosPageRequest` was used.
+
+### 2.1.9 (December 26, 2019)
+#### New features
+* Adds annotation field to enable or disable automatic collection creation.
+#### Key bug fixes
+* Fixes `findById` method's behavior. Previously, this method returned empty if the entity wasn't found instead of throwing an exception.
+
+### 2.2.0 (October 21, 2019)
+#### New features
+* Complete Reactive Azure Cosmos DB Repository support.
+* Azure Cosmos DB Request Diagnostics String and Query Metrics support.
+* Azure Cosmos DB SDK version update to 3.3.1.
+* Spring Framework version upgrade to 5.2.0.RELEASE.
+* Spring Data Commons version upgrade to 2.2.0.RELEASE.
+* Adds `findByIdAndPartitionKey` and `deleteByIdAndPartitionKey` APIs.
+* Removes dependency from azure-documentdb.
+* Rebrands DocumentDB to Azure Cosmos DB.
+#### Key bug fixes
+* Fixes "Sorting throws exception when pageSize is less than total items in repository."
+
+### 2.1.8 (October 18, 2019)
+#### New features
+* Deprecates DocumentDB APIs.
+* Adds `findByIdAndPartitionKey` and `deleteByIdAndPartitionKey` APIs.
+* Adds optimistic locking based on `_etag`.
+* Enables SpEL expression for document collection name.
+* Adds `ObjectMapper` improvements.
+
+### 2.1.7 (October 18, 2019)
+#### New features
+* Adds Azure Cosmos DB SDK version 3 dependency.
+* Adds Reactive Azure Cosmos DB Repository.
+* Updates implementation of `DocumentDbTemplate` to use Azure Cosmos DB SDK version 3.
+* Adds other configuration changes for Reactive Azure Cosmos DB Repository support.
+
+### 2.1.2 (March 19, 2019)
+#### Key bug fixes
+* Removes `applicationInsights` dependency for:
+ * Potential risk of dependencies polluting.
+ * Java 11 incompatibility.
+ * Avoiding potential performance impact to CPU and/or memory.
+
+### 2.0.7 (March 20, 2019)
+#### Key bug fixes
+* Backport removes `applicationInsights` dependency for:
+ * Potential risk of dependencies polluting.
+ * Java 11 incompatibility.
+ * Avoiding potential performance impact to CPU and/or memory.
+
+### 2.1.1 (March 7, 2019)
+#### New features
+* Updates main version to 2.1.1.
+
+### 2.0.6 (March 7, 2019)
+#### New features
+* Ignores all exceptions from telemetry.
+
+### 2.1.0 (December 17, 2018)
+#### New features
+* Updates version to 2.1.0 to address a problem.
+
+### 2.0.5 (September 13, 2018)
+#### New features
+* Adds keywords `exists` and `startsWith`.
+* Updates Readme.
+#### Key bug fixes
+* Fixes "Can't call self href directly for Entity."
+* Fixes "findAll will fail if collection is not created."
+
+### 2.0.4 (Prerelease) (August 23, 2018)
+#### New features
+* Renames package from documentdb to cosmosdb.
+* Adds new feature of query method keyword. 16 keywords from API for NoSQL are now supported.
+* Adds new feature of query with paging and sorting.
+* Simplifies the configuration of spring-data-cosmosdb.
+* Adds `deleteCollection` and `deleteAll` APIs.
+
+#### Key bug fixes
+* Bug fix and defect mitigation.
+
+## FAQ
+
+## Next steps
+Learn more about [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/).
+
+Learn more about the [Spring Framework](https://spring.io/projects/spring-framework).
+
+Learn more about [Spring Boot](https://spring.io/projects/spring-boot).
+
+Learn more about [Spring Data](https://spring.io/projects/spring-data).
cosmos-db Sdk Java Spring Data V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-spring-data-v3.md
+
 Title: 'Spring Data Azure Cosmos DB v3 for API for NoSQL release notes and resources'
+description: Learn about the Spring Data Azure Cosmos DB v3 for API for NoSQL, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.
+++
+ms.devlang: java
+ Last updated : 04/06/2021+++++
+# Spring Data Azure Cosmos DB v3 for API for NoSQL: Release notes and resources
++
+The Spring Data Azure Cosmos DB version 3 for NoSQL allows developers to use Azure Cosmos DB in Spring applications. Spring Data Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact.
+
+The [Spring Framework](https://spring.io/projects/spring-framework) is a programming and configuration model that streamlines Java application development. Spring streamlines the "plumbing" of applications by using dependency injection. Many developers like Spring because it makes building and testing applications more straightforward. [Spring Boot](https://spring.io/projects/spring-boot) extends this handling of the plumbing with an eye toward web application and microservices development. [Spring Data](https://spring.io/projects/spring-data) is a programming model and framework for accessing datastores like Azure Cosmos DB from the context of a Spring or Spring Boot application.
+
+You can use Spring Data Azure Cosmos DB in your applications hosted in [Azure Spring Apps](https://azure.microsoft.com/services/spring-apps/).
+
+## Version Support Policy
+
+### Spring Boot Version Support
+
+This project supports multiple Spring Boot Versions. Visit [spring boot support policy](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-spring-data-cosmos#spring-boot-support-policy) for more information. Maven users can inherit from the `spring-boot-starter-parent` project to obtain a dependency management section to let Spring manage the versions for dependencies. Visit [spring boot version support](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-spring-data-cosmos#spring-boot-version-support) for more information.
+
+### Spring Data Version Support
+
+This project supports different spring-data-commons versions. Visit [spring data version support](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-spring-data-cosmos#spring-data-version-support) for more information.
+
+### Which Version of Azure Spring Data Azure Cosmos DB Should I Use
+
+The Azure Spring Data Azure Cosmos DB library supports multiple versions of Spring Boot / Spring Cloud. Refer to the [Azure Spring Data Azure Cosmos DB version mapping](https://github.com/Azure/azure-sdk-for-jav#which-version-of-azure-spring-data-cosmos-should-i-use) for detailed information on which version of Azure Spring Data Azure Cosmos DB to use with your Spring Boot / Spring Cloud version.
+
+> [!IMPORTANT]
+> These release notes are for version 3 of Spring Data Azure Cosmos DB.
+>
+> The Azure Spring Data Azure Cosmos DB SDK depends on the Spring Data framework and supports only the API for NoSQL.
+>
+> See these articles for information about Spring Data on other Azure Cosmos DB APIs:
+> * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db)
+> * [Spring Data MongoDB with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-mongodb-with-cosmos-db)
+>
+
+## Get started fast
+
+ Get up and running with Spring Data Azure Cosmos DB by following our [Spring Boot Starter guide](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db). The Spring Boot Starter approach is the recommended way to get started using the Spring Data Azure Cosmos DB connector.
+
+ Alternatively, you can add the Spring Data Azure Cosmos DB dependency to your `pom.xml` file as shown below:
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-spring-data-cosmos</artifactId>
+ <version>latest-version</version>
+ </dependency>
+ ```
+
+## Helpful content
+
+| Content | Link |
+|||
+| **Release notes** | [Release notes for Spring Data Azure Cosmos DB SDK v3](https://github.com/Azure/azure-sdk-for-jav) |
+| **SDK Documentation** | [Azure Spring Data Azure Cosmos DB SDK v3 documentation](https://github.com/Azure/azure-sdk-for-jav) |
+| **SDK download** | [Maven](https://mvnrepository.com/artifact/com.azure/azure-spring-data-cosmos) |
+| **API documentation** | [Java API reference documentation](/java/api/overview/azure/spring-data-cosmos-readme?view=azure-java-stable&preserve-view=true) |
+| **Contribute to SDK** | [Azure SDK for Java Central Repo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-spring-data-cosmos) |
+| **Get started** | [Quickstart: Build a Spring Data Azure Cosmos DB app to manage Azure Cosmos DB for NoSQL data](./quickstart-java-spring-data.md) <br> [GitHub repo with quickstart code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-getting-started) |
+| **Basic code samples** | [Azure Cosmos DB: Spring Data Azure Cosmos DB examples for the API for NoSQL](samples-java-spring-data.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples)|
+| **Performance tips**| [Performance tips for Java SDK v4 (applicable to Spring Data)](performance-tips-java-sdk-v4.md)|
+| **Troubleshooting** | [Troubleshoot Java SDK v4 (applicable to Spring Data)](troubleshoot-java-sdk-v4.md) |
+| **Azure Cosmos DB workshops and labs** |[Azure Cosmos DB workshops home page](https://aka.ms/cosmosworkshop)
+
+## Release history
+Release history is maintained in the azure-sdk-for-java repo. For a detailed list of releases, see the [changelog file](https://github.com/Azure/azure-sdk-for-jav).
+
+## Recommended version
+
+We strongly recommend using version 3.22.0 or above.
+
+## Additional notes
+
+* Spring Data Azure Cosmos DB supports Java JDK 8 and Java JDK 11.
+
+## FAQ
++
+## Next steps
+
+Learn more about [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/).
+
+Learn more about the [Spring Framework](https://spring.io/projects/spring-framework).
+
+Learn more about [Spring Boot](https://spring.io/projects/spring-boot).
+
+Learn more about [Spring Data](https://spring.io/projects/spring-data).
cosmos-db Sdk Java V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-v2.md
+
+ Title: 'Azure Cosmos DB: SQL Java API, SDK & resources'
+description: Learn all about the SQL Java API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Java SDK.
+++
+ms.devlang: java
+ Last updated : 04/06/2021+++++
+# Azure Cosmos DB Java SDK for API for NoSQL (legacy): Release notes and resources
++
+This is the original Azure Cosmos DB Sync Java SDK v2 for API for NoSQL, which supports synchronous operations.
+
+> [!IMPORTANT]
+> This is *not* the latest Java SDK for Azure Cosmos DB! Consider using [Azure Cosmos DB Java SDK v4](sdk-java-v4.md) for your project. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and the [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide.
+>
+
+> [!IMPORTANT]
+> On February 29, 2024 the Azure Cosmos DB Sync Java SDK v2.x
+> will be retired; the SDK and all applications using the SDK
+> **will continue to function**; Azure Cosmos DB will simply cease
+> to provide further maintenance and support for this SDK.
+> We recommend following the instructions above to migrate to
+> Azure Cosmos DB Java SDK v4.
+>
++
+| | Links |
+|||
+|**SDK Download**|[Maven](https://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22com.microsoft.azure%22%20AND%20a%3A%22azure-documentdb%22)|
+|**API documentation**|[Java API reference documentation](/java/api/com.microsoft.azure.documentdb)|
+|**Contribute to SDK**|[GitHub](https://github.com/Azure/azure-documentdb-java/)|
+|**Get started**|[Get started with the Java SDK](./quickstart-java.md)|
+|**Web app tutorial**|[Web application development with Azure Cosmos DB](tutorial-java-web-app.md)|
+|**Minimum supported runtime**|[Java Development Kit (JDK) 7+](/java/azure/jdk/)|
+
+## Release notes
+### <a name="2.6.3"></a>2.6.3
+* Fixed a retry policy when `GoneException` is wrapped in `IllegalStateException`
+
+### <a name="2.6.2"></a>2.6.2
+* Added a new retry policy to retry on Read Timeouts
+* Upgraded dependency `com.fasterxml.jackson.core/jackson-databind` to 2.9.10.8
+* Upgraded dependency `org.apache.httpcomponents/httpclient` to 4.5.13
+
+### <a name="2.6.1"></a>2.6.1
+* Fixed a bug in handling a query through service interop.
+
+### <a name="2.6.0"></a>2.6.0
+* Added support for querying change feed from point in time.
+
+### <a name="2.5.1"></a>2.5.1
+* Fixes primary partition cache issue on documentCollection query.
+
+### <a name="2.5.0"></a>2.5.0
+* Added support for 449 retry custom configuration.
+
+### <a name="2.4.7"></a>2.4.7
+* Fixes connection pool timeout issue.
+* Fixes auth token refresh on internal retries.
+
+### <a name="2.4.6"></a>2.4.6
+* Updated correct client side replica policy tag on databaseAccount and made databaseAccount configuration reads from cache.
+
+### <a name="2.4.5"></a>2.4.5
+* Avoiding retry on invalid partition key range error, if user provides pkRangeId.
+
+### <a name="2.4.4"></a>2.4.4
+* Optimized partition key range cache refreshes.
+* Fixes the scenario where the SDK doesn't entertain partition split hint from server and results in incorrect client side routing caches refresh.
+
+### <a name="2.4.2"></a>2.4.2
+* Optimized collection cache refreshes.
+
+### <a name="2.4.1"></a>2.4.1
+* Added support to retrieve inner exception message from request diagnostic string.
+
+### <a name="2.4.0"></a>2.4.0
+* Introduced version api on PartitionKeyDefinition.
+
+### <a name="2.3.0"></a>2.3.0
+* Added separate timeout support for direct mode.
+
+### <a name="2.2.3"></a>2.2.3
+* Consuming null error message from service and producing document client exception.
+
+### <a name="2.2.2"></a>2.2.2
+* Socket connection improvement, adding SoKeepAlive default true.
+
+### <a name="2.2.0"></a>2.2.0
+* Added request diagnostics string support.
+
+### <a name="2.1.3"></a>2.1.3
+* Fixed bug in PartitionKey for Hash V2.
+
+### <a name="2.1.2"></a>2.1.2
+* Added support for composite indexes.
+* Fixed bug in global endpoint manager to force refresh.
+* Fixed bug for upserts with pre-conditions in direct mode.
+
+### <a name="2.1.1"></a>2.1.1
+* Fixed bug in gateway address cache.
+
+### <a name="2.1.0"></a>2.1.0
+* Multi-region write support added for direct mode.
+* Added support for handling IOExceptions thrown as ServiceUnavailable exceptions, from a proxy.
+* Fixed a bug in endpoint discovery retry policy.
+* Fixed a bug to ensure null pointer exceptions are not thrown in BaseDatabaseAccountConfigurationProvider.
+* Fixed a bug to ensure QueryIterator does not return nulls.
+* Fixed a bug to ensure large PartitionKey is allowed
+
+### <a name="2.0.0"></a>2.0.0
+* Multi-region write support added for gateway mode.
+
+### <a name="1.16.4"></a>1.16.4
+* Fixed a bug in Read partition Key ranges for a query.
+
+### <a name="1.16.3"></a>1.16.3
+* Fixed a bug in setting continuation token header size in DirectHttps mode.
+
+### <a name="1.16.2"></a>1.16.2
+* Added streaming fail over support.
+* Added support for custom metadata.
+* Improved session handling logic.
+* Fixed a bug in partition key range cache.
+* Fixed a NPE bug in direct mode.
+
+### <a name="1.16.1"></a>1.16.1
+* Added support for Unique Index.
+* Added support for limiting continuation token size in feed-options.
+* Fixed a bug in Json Serialization (timestamp).
+* Fixed a bug in Json Serialization (enum).
+* Dependency on com.fasterxml.jackson.core:jackson-databind upgraded to 2.9.5.
+
+### <a name="1.16.0"></a>1.16.0
+* Improved Connection Pooling for Direct Mode.
+* Improved prefetch for non-orderby cross-partition queries.
+* Improved UUID generation.
+* Improved Session consistency logic.
+* Added support for multipolygon.
+* Added support for Partition Key Range Statistics for Collection.
+* Fixed a bug in Multi-region support.
+
+### <a name="1.15.0"></a>1.15.0
+* Improved Json Serialization performance.
+* This SDK version requires the latest version of [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator).
+
+### <a name="1.14.0"></a>1.14.0
+* Internal changes for Microsoft friends libraries.
+
+### <a name="1.13.0"></a>1.13.0
+* Fixed an issue in reading single partition key ranges.
+* Fixed an issue in ResourceID parsing that affects databases with short names.
+* Fixed an issue caused by partition key encoding.
+
+### <a name="1.12.0"></a>1.12.0
+* Critical bug fixes to request processing during partition splits.
+* Fixed an issue with the Strong and BoundedStaleness consistency levels.
+
+### <a name="1.11.0"></a>1.11.0
+* Added support for a new consistency level called ConsistentPrefix.
+* Fixed a bug in reading collection in session mode.
+
+### <a name="1.10.0"></a>1.10.0
+* Enabled support for partitioned collection with as low as 2,500 RU/sec and scale in increments of 100 RU/sec.
+* Fixed a bug in the native assembly which can cause NullRef exception in some queries.
+
+### <a name="1.9.6"></a>1.9.6
+* Fixed a bug in the query engine configuration that may cause exceptions for queries in Gateway mode.
+* Fixed a few bugs in the session container that may cause an "Owner resource not found" exception for requests immediately after collection creation.
+
+### <a name="1.9.5"></a>1.9.5
+* Added support for aggregation queries (COUNT, MIN, MAX, SUM, and AVG). See [Aggregation support](query/aggregate-functions.md).
+* Added support for change feed.
+* Added support for collection quota information through RequestOptions.setPopulateQuotaInfo.
+* Added support for stored procedure script logging through RequestOptions.setScriptLoggingEnabled.
+* Fixed a bug where query in DirectHttps mode may stop responding when encountering throttle failures.
+* Fixed a bug in session consistency mode.
+* Fixed a bug which may cause NullReferenceException in HttpContext when request rate is high.
+* Improved performance of DirectHttps mode.
+
+### <a name="1.9.4"></a>1.9.4
+* Added simple client instance-based proxy support with ConnectionPolicy.setProxy() API.
+* Added DocumentClient.close() API to properly shutdown DocumentClient instance.
+* Improved query performance in direct connectivity mode by deriving the query plan from the native assembly instead of the Gateway.
+* Set FAIL_ON_UNKNOWN_PROPERTIES = false so users don't need to define JsonIgnoreProperties in their POJO.
+* Refactored logging to use SLF4J.
+* Fixed a few other bugs in consistency reader.
+
+### <a name="1.9.3"></a>1.9.3
+* Fixed a bug in the connection management to prevent connection leaks in direct connectivity mode.
+* Fixed a bug in the TOP query where it may throw NullReference exception.
+* Improved performance by reducing the number of network call for the internal caches.
+* Added status code, ActivityID and Request URI in DocumentClientException for better troubleshooting.
+
+### <a name="1.9.2"></a>1.9.2
+* Fixed an issue in the connection management for stability.
+
+### <a name="1.9.1"></a>1.9.1
+* Added support for BoundedStaleness consistency level.
+* Added support for direct connectivity for CRUD operations for partitioned collections.
+* Fixed a bug in querying a database with SQL.
+* Fixed a bug in the session cache where session token may be set incorrectly.
+
+### <a name="1.9.0"></a>1.9.0
+* Added support for cross partition parallel queries.
+* Added support for TOP/ORDER BY queries for partitioned collections.
+* Added support for strong consistency.
+* Added support for name based requests when using direct connectivity.
+* Fixed to make ActivityId stay consistent across all request retries.
+* Fixed a bug related to the session cache when recreating a collection with the same name.
+* Added Polygon and LineString DataTypes while specifying collection indexing policy for geo-fencing spatial queries.
+* Fixed issues with Java Doc for Java 1.8.
+
+### <a name="1.8.1"></a>1.8.1
+* Fixed a bug in PartitionKeyDefinitionMap to cache single partition collections and not make extra fetch partition key requests.
+* Fixed a bug to not retry when an incorrect partition key value is provided.
+
+### <a name="1.8.0"></a>1.8.0
+* Added the support for multi-region database accounts.
+* Added support for automatic retry on throttled requests with options to customize the max retry attempts and max retry wait time. See RetryOptions and ConnectionPolicy.getRetryOptions().
+* Deprecated IPartitionResolver based custom partitioning code. Please use partitioned collections for higher storage and throughput.
+
+### <a name="1.7.1"></a>1.7.1
+* Added retry policy support for rate limiting.
+
+### <a name="1.7.0"></a>1.7.0
+* Added time to live (TTL) support for documents.
+
+### <a name="1.6.0"></a>1.6.0
+* Implemented [partitioned collections](../partitioning-overview.md) and [user-defined performance levels](../performance-levels.md).
+
+### <a name="1.5.1"></a>1.5.1
+* Fixed a bug in HashPartitionResolver to generate hash values in little-endian to be consistent with other SDKs.
+
+### <a name="1.5.0"></a>1.5.0
+* Add Hash & Range partition resolvers to assist with sharding applications across multiple partitions.
+
+### <a name="1.4.0"></a>1.4.0
+* Implement Upsert. New upsertXXX methods added to support Upsert feature.
+* Implement ID Based Routing. No public API changes, all changes internal.
+
+### <a name="1.3.0"></a>1.3.0
+* Release skipped to bring version number in alignment with other SDKs
+
+### <a name="1.2.0"></a>1.2.0
+* Supports GeoSpatial Index
+* Validates the ID property for all resources. IDs for resources cannot contain `?`, `/`, `#`, or `\` characters, or end with a space.
+* Adds new header "index transformation progress" to ResourceResponse.
+
+### <a name="1.1.0"></a>1.1.0
+* Implements V2 indexing policy
+
+### <a name="1.0.0"></a>1.0.0
+* GA SDK
+
+## Release and retirement dates
+
+Microsoft will provide notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version. New features, functionality, and optimizations are only added to the current SDK. As such, it's recommended that you always upgrade to the latest SDK version as early as possible.
+
+> [!WARNING]
+> After 30 May 2020, Azure Cosmos DB will no longer make bug fixes, add new features, and provide support to versions 1.x of the Azure Cosmos DB Java SDK for API for NoSQL. If you prefer not to upgrade, requests sent from version 1.x of the SDK will continue to be served by the Azure Cosmos DB service.
+>
+> After 29 February 2016, Azure Cosmos DB will no longer make bug fixes, add new features, and provide support to versions 0.x of the Azure Cosmos DB Java SDK for API for NoSQL. If you prefer not to upgrade, requests sent from version 0.x of the SDK will continue to be served by the Azure Cosmos DB service.
++
+| Version | Release Date | Retirement Date |
+| | | |
+| [2.6.1](#2.6.1) |Dec 17, 2020 |Feb 29, 2024|
+| [2.6.0](#2.6.0) |July 16, 2020 |Feb 29, 2024|
+| [2.5.1](#2.5.1) |June 03, 2020 |Feb 29, 2024|
+| [2.5.0](#2.5.0) |May 12, 2020 |Feb 29, 2024|
+| [2.4.7](#2.4.7) |Feb 20, 2020 |Feb 29, 2024|
+| [2.4.6](#2.4.6) |Jan 24, 2020 |Feb 29, 2024|
+| [2.4.5](#2.4.5) |Nov 10, 2019 |Feb 29, 2024|
+| [2.4.4](#2.4.4) |Oct 24, 2019 |Feb 29, 2024|
+| [2.4.2](#2.4.2) |Sep 26, 2019 |Feb 29, 2024|
+| [2.4.1](#2.4.1) |Jul 18, 2019 |Feb 29, 2024|
+| [2.4.0](#2.4.0) |May 04, 2019 |Feb 29, 2024|
+| [2.3.0](#2.3.0) |Apr 24, 2019 |Feb 29, 2024|
+| [2.2.3](#2.2.3) |Apr 16, 2019 |Feb 29, 2024|
+| [2.2.2](#2.2.2) |Apr 05, 2019 |Feb 29, 2024|
+| [2.2.0](#2.2.0) |Mar 27, 2019 |Feb 29, 2024|
+| [2.1.3](#2.1.3) |Mar 13, 2019 |Feb 29, 2024|
+| [2.1.2](#2.1.2) |Mar 09, 2019 |Feb 29, 2024|
+| [2.1.1](#2.1.1) |Dec 13, 2018 |Feb 29, 2024|
+| [2.1.0](#2.1.0) |Nov 20, 2018 |Feb 29, 2024|
+| [2.0.0](#2.0.0) |Sept 21, 2018 |Feb 29, 2024|
+| [1.16.4](#1.16.4) |Sept 10, 2018 |May 30, 2020 |
+| [1.16.3](#1.16.3) |Sept 09, 2018 |May 30, 2020 |
+| [1.16.2](#1.16.2) |June 29, 2018 |May 30, 2020 |
+| [1.16.1](#1.16.1) |May 16, 2018 |May 30, 2020 |
+| [1.16.0](#1.16.0) |March 15, 2018 |May 30, 2020 |
+| [1.15.0](#1.15.0) |Nov 14, 2017 |May 30, 2020 |
+| [1.14.0](#1.14.0) |Oct 28, 2017 |May 30, 2020 |
+| [1.13.0](#1.13.0) |August 25, 2017 |May 30, 2020 |
+| [1.12.0](#1.12.0) |July 11, 2017 |May 30, 2020 |
+| [1.11.0](#1.11.0) |May 10, 2017 |May 30, 2020 |
+| [1.10.0](#1.10.0) |March 11, 2017 |May 30, 2020 |
+| [1.9.6](#1.9.6) |February 21, 2017 |May 30, 2020 |
+| [1.9.5](#1.9.5) |January 31, 2017 |May 30, 2020 |
+| [1.9.4](#1.9.4) |November 24, 2016 |May 30, 2020 |
+| [1.9.3](#1.9.3) |October 30, 2016 |May 30, 2020 |
+| [1.9.2](#1.9.2) |October 28, 2016 |May 30, 2020 |
+| [1.9.1](#1.9.1) |October 26, 2016 |May 30, 2020 |
+| [1.9.0](#1.9.0) |October 03, 2016 |May 30, 2020 |
+| [1.8.1](#1.8.1) |June 30, 2016 |May 30, 2020 |
+| [1.8.0](#1.8.0) |June 14, 2016 |May 30, 2020 |
+| [1.7.1](#1.7.1) |April 30, 2016 |May 30, 2020 |
+| [1.7.0](#1.7.0) |April 27, 2016 |May 30, 2020 |
+| [1.6.0](#1.6.0) |March 29, 2016 |May 30, 2020 |
+| [1.5.1](#1.5.1) |December 31, 2015 |May 30, 2020 |
+| [1.5.0](#1.5.0) |December 04, 2015 |May 30, 2020 |
+| [1.4.0](#1.4.0) |October 05, 2015 |May 30, 2020 |
+| [1.3.0](#1.3.0) |October 05, 2015 |May 30, 2020 |
+| [1.2.0](#1.2.0) |August 05, 2015 |May 30, 2020 |
+| [1.1.0](#1.1.0) |July 09, 2015 |May 30, 2020 |
+| 1.0.1 |May 12, 2015 |May 30, 2020 |
+| [1.0.0](#1.0.0) |April 07, 2015 |May 30, 2020 |
+| 0.9.5-prelease |Mar 09, 2015 |February 29, 2016 |
+| 0.9.4-prelease |February 17, 2015 |February 29, 2016 |
+| 0.9.3-prelease |January 13, 2015 |February 29, 2016 |
+| 0.9.2-prelease |December 19, 2014 |February 29, 2016 |
+| 0.9.1-prelease |December 19, 2014 |February 29, 2016 |
+| 0.9.0-prelease |December 10, 2014 |February 29, 2016 |
+
+## FAQ
+
+## See also
+To learn more about Azure Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sdk Java V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-v4.md
+
+ Title: 'Azure Cosmos DB Java SDK v4 for API for NoSQL release notes and resources'
+description: Learn all about the Azure Cosmos DB Java SDK v4 for API for NoSQL and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.
+++
+ms.devlang: java
+ Last updated : 04/06/2021+++++
+# Azure Cosmos DB Java SDK v4 for API for NoSQL: release notes and resources
++
+The Azure Cosmos DB Java SDK v4 for NoSQL combines an Async API and a Sync API into one Maven artifact. The v4 SDK brings enhanced performance, new API features, and Async support based on Project Reactor and the [Netty library](https://netty.io/). Users can expect improved performance with Azure Cosmos DB Java SDK v4 versus the [Azure Cosmos DB Async Java SDK v2](sdk-java-async-v2.md) and the [Azure Cosmos DB Sync Java SDK v2](sdk-java-v2.md).
+
+> [!IMPORTANT]
+> These Release Notes are for Azure Cosmos DB Java SDK v4 only. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
+>
+> Here are three steps to get going fast!
+> 1. Install the [minimum supported Java runtime, JDK 8](/java/azure/jdk/) so you can use the SDK.
+> 2. Work through the [Quickstart Guide for Azure Cosmos DB Java SDK v4](./quickstart-java.md) which gets you access to the Maven artifact and walks through basic Azure Cosmos DB requests.
+> 3. Read the Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4.md) and [troubleshooting](troubleshoot-java-sdk-v4.md) guides to optimize the SDK for your application.
+>
+> The [Azure Cosmos DB workshops and labs](https://aka.ms/cosmosworkshop) are another great resource for learning how to use Azure Cosmos DB Java SDK v4!
+>
+
+## Helpful content
+
+| Content | Link |
+|||
+| **Release Notes** | [Release notes for Java SDK v4](https://github.com/Azure/azure-sdk-for-jav) |
+| **SDK download** | [Maven](https://mvnrepository.com/artifact/com.azure/azure-cosmos) |
+| **API documentation** | [Java API reference documentation](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-cosmos/latest/https://docsupdatetracker.net/index.html) |
+| **Contribute to SDK** | [Azure SDK for Java Central Repo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-cosmos) |
+| **Get started** | [Quickstart: Build a Java app to manage Azure Cosmos DB for NoSQL data](./quickstart-java.md) <br> [GitHub repo with quickstart code](https://github.com/Azure-Samples/azure-cosmos-java-getting-started) |
+| **Best Practices** | [Best Practices for Java SDK v4](best-practice-java.md) |
+| **Basic code samples** | [Azure Cosmos DB: Java examples for the API for NoSQL](samples-java.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples)|
+| **Console app with Change Feed**| [Change feed - Java SDK v4 sample](how-to-java-change-feed.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-cosmos-java-sql-app-example)|
+| **Web app sample**| [Build a web app with Java SDK v4](tutorial-java-web-app.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app)|
+| **Performance tips**| [Performance tips for Java SDK v4](performance-tips-java-sdk-v4.md)|
+| **Troubleshooting** | [Troubleshoot Java SDK v4](troubleshoot-java-sdk-v4.md) |
+| **Migrate to v4 from an older SDK** | [Migrate to Java V4 SDK](migrate-java-v4-sdk.md) |
+| **Minimum supported runtime**|[JDK 8](/java/azure/jdk/) |
+| **Azure Cosmos DB workshops and labs** |[Azure Cosmos DB workshops home page](https://aka.ms/cosmosworkshop)
+
+> [!IMPORTANT]
+> * The 4.13.0 release updates `reactor-core` and `reactor-netty` major versions to `2020.0.4 (Europium)` release train.
+
+## Release history
+Release history is maintained in the azure-sdk-for-java repo. For a detailed list of releases, see the [changelog file](https://github.com/Azure/azure-sdk-for-jav).
+
+## Recommended version
+
+It's strongly recommended to use version 4.31.0 or above.
+
+## FAQ
+
+## Next steps
+To learn more about Azure Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sdk Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-nodejs.md
+
+ Title: 'Azure Cosmos DB: SQL Node.js API, SDK & resources'
+description: Learn all about the SQL Node.js API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB Node.js SDK.
+++
+ms.devlang: javascript
+ Last updated : 12/09/2021++++
+# Azure Cosmos DB Node.js SDK for API for NoSQL: Release notes and resources
++
+|Resource |Link |
+|||
+|Download SDK | [@azure/cosmos](https://www.npmjs.com/package/@azure/cosmos)
+|API Documentation | [JavaScript SDK reference documentation](/javascript/api/%40azure/cosmos/)
+|SDK installation instructions | `npm install @azure/cosmos`
+|Contribute to SDK | [Contributing guide for azure-sdk-for-js repo](https://github.com/Azure/azure-sdk-for-js/blob/main/CONTRIBUTING.md)
+| Samples | [Node.js code samples](samples-nodejs.md)
+| Getting started tutorial | [Get started with the JavaScript SDK](sql-api-nodejs-get-started.md)
+| Web app tutorial | [Build a Node.js web application using Azure Cosmos DB](tutorial-nodejs-web-app.md)
+| Current supported Node.js platforms | [LTS versions of Node.js](https://nodejs.org/about/releases/)
+
+## Release notes
+
+Release history is maintained in the azure-sdk-for-js repo. For a detailed list of releases, see the [changelog file](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/CHANGELOG.md#release-history).
+
+## Migration guide for breaking changes
+
+If you're on an older version of the SDK, we recommend migrating to version 3.0. This section details the improvements and bug fixes included in that version.
+### Improved client constructor options
+
+Constructor options have been simplified:
+
+* `masterKey` was renamed to `key` and moved to the top level
+* Properties previously under `options.auth` have moved to the top level
+
+```javascript
+// v2
+const client = new CosmosClient({
+ endpoint: "https://your-database.cosmos.azure.com",
+ auth: {
+ masterKey: "your-primary-key"
+ }
+})
+
+// v3
+const client = new CosmosClient({
+ endpoint: "https://your-database.cosmos.azure.com",
+ key: "your-primary-key"
+})
+```
+
+### Simplified query iterator API
+
+In v2, there were many different ways to iterate or retrieve results from a query. We have attempted to simplify the v3 API and remove similar or duplicate APIs:
+
+* `iterator.next()` and `iterator.current()` are removed. Use `fetchNext()` to get pages of results.
+* `iterator.forEach()` is removed. Use async iterators instead.
+* `iterator.executeNext()` was renamed to `iterator.fetchNext()`.
+* `iterator.toArray()` was renamed to `iterator.fetchAll()`.
+* Pages are now proper `Response` objects instead of plain JS objects.
+* The following examples assume `const container = client.database(dbId).container(containerId)`.
+
+```javascript
+// v2
+container.items.query('SELECT * from c').toArray()
+container.items.query('SELECT * from c').executeNext()
+container.items.query('SELECT * from c').forEach(({ body: item }) => { console.log(item.id) })
+
+// v3
+container.items.query('SELECT * from c').fetchAll()
+container.items.query('SELECT * from c').fetchNext()
+for await (const { resources: databases } of client.databases.readAll().getAsyncIterator()) {
+  databases.forEach(db => console.log(db.id))
+}
+```
+
+### Fixed containers are now partitioned
+
+The Azure Cosmos DB service now supports partition keys on all containers, including those that were previously created as fixed containers. The v3 SDK updates to the latest API version that implements this change, but it is not breaking. If you do not supply a partition key for operations, we will default to a system key that works with all your existing containers and documents.
+
+### Upsert removed for stored procedures
+
+Previously upsert was allowed for non-partitioned collections, but with the API version update, all collections are partitioned so we removed it entirely.
+
+### Item reads will not throw on 404
+
+The following example assumes `const container = client.database(dbId).container(containerId)`:
+
+```javascript
+// v2
+try {
+  await container.item(id, undefined).read()
+} catch (e) {
+  if (e.code === 404) { console.log('item not found') }
+}
+
+// v3
+const { resource: item } = await container.item(id, undefined).read()
+if (item === undefined) { console.log('item not found') }
+```
+
+### Default multi-region writes
+
+The SDK now writes to multiple regions by default if your Azure Cosmos DB configuration supports it. This was previously opt-in behavior.
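+
+If you need to opt out, the client options expose a connection policy setting. The following is a minimal sketch, assuming the v3 `connectionPolicy.useMultipleWriteLocations` option; the endpoint and key values are placeholders:
+
+```javascript
+// A minimal sketch, assuming the v3 connectionPolicy.useMultipleWriteLocations option;
+// set it to false to keep the previous single-region write behavior.
+const { CosmosClient } = require("@azure/cosmos")
+
+const client = new CosmosClient({
+  endpoint: "https://your-database.cosmos.azure.com",
+  key: "your-primary-key",
+  connectionPolicy: {
+    useMultipleWriteLocations: false
+  }
+})
+```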
+
+### Proper error objects
+
+Failed requests now throw proper Error or subclasses of Error. Previously they threw plain JS objects.
+
+### New features
+
+#### User-cancelable requests
+
+The move to fetch internally allows us to use the browser AbortController API to support user-cancelable operations. In the case of operations where multiple requests are potentially in progress (like cross-partition queries), all requests for the operation will be canceled. Modern browser users will already have AbortController. Node.js users will need to use a polyfill library.
+
+```javascript
+ const controller = new AbortController()
+ const { resources } = await items.query('SELECT * from c', { abortSignal: controller.signal }).fetchAll();
+ // calling controller.abort() (for example, from a timer) cancels any requests still in flight
+ controller.abort()
+```
+
+#### Set throughput as part of db/container create operation
+
+```javascript
+const { database } = await client.databases.create({ id: 'my-database', throughput: 10000 })
+await database.containers.create({ id: 'my-container', throughput: 10000 })
+```
+
+#### @azure/cosmos-sign
+
+Header token generation was split out into a new library, @azure/cosmos-sign. Anyone calling the Azure Cosmos DB REST API directly can use this to sign headers using the same code we call inside @azure/cosmos.
+
+#### UUID for generated IDs
+
+v2 had custom code to generate item IDs. We have switched to the well-known and maintained community library `uuid`.
+
+#### Connection strings
+
+It is now possible to pass a connection string copied from the Azure portal:
+
+```javascript
+const client = new CosmosClient("AccountEndpoint=https://test-account.documents.azure.com:443/;AccountKey=c213asdasdefgdfgrtweaYPpgoeCsHbpRTHhxuMsTaw==;")
+```
+
+#### Add DISTINCT and LIMIT/OFFSET queries (#306)
+
+```javascript
+const { resources: names } = await items.query('SELECT DISTINCT VALUE r.name FROM ROOT r').fetchAll()
+const { resources: page } = await items.query('SELECT * FROM root r OFFSET 1 LIMIT 2').fetchAll()
+```
+
+### Improved browser experience
+
+While it was possible to use the v2 SDK in the browser, it was not an ideal experience. You needed to polyfill several Node.js built-in libraries and use a bundler like webpack or Parcel. The v3 SDK makes the out-of-the-box experience much better for browser users.
+
+* Replace request internals with fetch (#245)
+* Remove usage of Buffer (#330)
+* Remove node builtin usage in favor of universal packages/APIs (#328)
+* Switch to node-abort-controller (#294)
+
+### Bug fixes
+* Fix offer read and bring back offer tests (#224)
+* Fix EnableEndpointDiscovery (#207)
+* Fix missing RUs on paginated results (#360)
+* Expand SQL query parameter type (#346)
+* Add ttl to ItemDefinition (#341)
+* Fix CP query metrics (#311)
+* Add activityId to FeedResponse (#293)
+* Switch _ts type from string to number (#252)(#295)
+* Fix Request Charge Aggregation (#289)
+* Allow blank string partition keys (#277)
+* Add string to conflict query type (#237)
+* Add uniqueKeyPolicy to container (#234)
+
+### Engineering systems
+Not always the most visible changes, but they help our team ship better code, faster.
+
+* Use rollup for production builds (#104)
+* Update to TypeScript 3.5 (#327)
+* Convert to TS project references. Extract test folder (#270)
+* Enable noUnusedLocals and noUnusedParameters (#275)
+* Azure Pipelines YAML for CI builds (#298)
+
+## Release & Retirement Dates
+
+Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version. New features, functionality, and optimizations are only added to the current SDK. As such, it's recommended that you always upgrade to the latest SDK version as early as possible. Read the [Microsoft Support Policy for SDKs](https://github.com/Azure/azure-sdk-for-js/blob/main/SUPPORT.md#microsoft-support-policy) for more details.
+
+| Version | Release Date | Retirement Date |
+| | | |
+| v3 | June 28, 2019 | |
+| v2 | September 24, 2018 | September 24, 2021 |
+| v1 | April 08, 2015 | August 30, 2020 |
+
+## FAQ
+
+## See also
+To learn more about Azure Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-python.md
+
+ Title: Azure Cosmos DB SQL Python API, SDK & resources
+description: Learn all about the SQL Python API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB Python SDK.
+++
+ms.devlang: python
+ Last updated : 01/25/2022+++
+# Azure Cosmos DB Python SDK for API for NoSQL: Release notes and resources
++
+| Page| Link |
+|||
+|**Download SDK**|[PyPI](https://pypi.org/project/azure-cosmos)|
+|**API documentation**|[Python API reference documentation](/python/api/azure-cosmos/azure.cosmos?preserve-view=true&view=azure-python)|
+|**SDK installation instructions**|[Python SDK installation instructions](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos)|
+|**Get started**|[Get started with the Python SDK](quickstart-python.md)|
+|**Samples**|[Python SDK samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/samples)|
+|**Current supported platform**|[Python 3.6+](https://www.python.org/downloads/)|
+
+> [!IMPORTANT]
+> * Versions 4.3.0b2 and higher support Async IO operations and only support Python 3.6+. Python 2 is not supported.
+
+## Release history
+Release history is maintained in the azure-sdk-for-python repo, for detailed list of releases, see the [changelog file](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cosmos/azure-cosmos/CHANGELOG.md).
+
+## Release & retirement dates
+
+Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version. New features, functionality, and optimizations are only added to the current SDK. As such, it's recommended that you always upgrade to the latest SDK version as early as possible.
+
+> [!WARNING]
+> After 31 August 2022, Azure Cosmos DB will no longer make bug fixes or provide support to versions 1.x and 2.x of the Azure Cosmos DB Python SDK for API for NoSQL. If you prefer not to upgrade, requests sent from version 1.x and 2.x of the SDK will continue to be served by the Azure Cosmos DB service.
+
+| Version | Release Date | Retirement Date |
+| | | |
+| 4.3.0 |May 23, 2022 | |
+| 4.2.0 |Oct 09, 2020 | |
+| 4.1.0 |Aug 10, 2020 | |
+| 4.0.0 |May 20, 2020 | |
+| 3.0.2 |Nov 15, 2018 | |
+| 3.0.1 |Oct 04, 2018 | |
+| 2.3.3 |Sept 08, 2018 |August 31, 2022 |
+| 2.3.2 |May 08, 2018 |August 31, 2022 |
+| 2.3.1 |December 21, 2017 |August 31, 2022 |
+| 2.3.0 |November 10, 2017 |August 31, 2022 |
+| 2.2.1 |Sep 29, 2017 |August 31, 2022 |
+| 2.2.0 |May 10, 2017 |August 31, 2022 |
+| 2.1.0 |May 01, 2017 |August 31, 2022 |
+| 2.0.1 |October 30, 2016 |August 31, 2022 |
+| 2.0.0 |September 29, 2016 |August 31, 2022 |
+| 1.9.0 |July 07, 2016 |August 31, 2022 |
+| 1.8.0 |June 14, 2016 |August 31, 2022 |
+| 1.7.0 |April 26, 2016 |August 31, 2022 |
+| 1.6.1 |April 08, 2016 |August 31, 2022 |
+| 1.6.0 |March 29, 2016 |August 31, 2022 |
+| 1.5.0 |January 03, 2016 |August 31, 2022 |
+| 1.4.2 |October 06, 2015 |August 31, 2022 |
+| 1.4.1 |October 06, 2015 |August 31, 2022 |
+| 1.2.0 |August 06, 2015 |August 31, 2022 |
+| 1.1.0 |July 09, 2015 |August 31, 2022 |
+| 1.0.1 |May 25, 2015 |August 31, 2022 |
+| 1.0.0 |April 07, 2015 |August 31, 2022 |
+| 0.9.4-prelease |January 14, 2015 |February 29, 2016 |
+| 0.9.3-prelease |December 09, 2014 |February 29, 2016 |
+| 0.9.2-prelease |November 25, 2014 |February 29, 2016 |
+| 0.9.1-prelease |September 23, 2014 |February 29, 2016 |
+| 0.9.0-prelease |August 21, 2014 |February 29, 2016 |
+
+## FAQ
++
+## Next steps
+
+To learn more about Azure Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Serverless Computing Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/serverless-computing-database.md
+
+ Title: Serverless database computing with Azure Cosmos DB and Azure Functions
+description: Learn how Azure Cosmos DB and Azure Functions can be used together to create event-driven serverless computing apps.
++++++ Last updated : 05/02/2020+++
+# Serverless database computing using Azure Cosmos DB and Azure Functions
+
+Serverless computing is all about the ability to focus on individual pieces of logic that are repeatable and stateless. These pieces require no infrastructure management and they consume resources only for the seconds, or milliseconds, they run for. At the core of the serverless computing movement are functions, which are made available in the Azure ecosystem by [Azure Functions](https://azure.microsoft.com/services/functions). To learn about other serverless execution environments in Azure, see the [serverless in Azure](https://azure.microsoft.com/solutions/serverless/) page.
+
+With the native integration between [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db) and Azure Functions, you can create database triggers, input bindings, and output bindings directly from your Azure Cosmos DB account. Using Azure Functions and Azure Cosmos DB, you can create and deploy event-driven serverless apps with low-latency access to rich data for a global user base.
+
+## Overview
+
+Azure Cosmos DB and Azure Functions enable you to integrate your databases and serverless apps in the following ways:
+
+* Create an event-driven **Azure Functions trigger for Azure Cosmos DB**. This trigger relies on [change feed](../change-feed.md) streams to monitor your Azure Cosmos DB container for changes. When any changes are made to a container, the change feed stream is sent to the trigger, which invokes the Azure Function.
+* Alternatively, bind an Azure Function to an Azure Cosmos DB container using an **input binding**. Input bindings read data from a container when a function executes.
+* Bind a function to an Azure Cosmos DB container using an **output binding**. Output bindings write data to a container when a function completes.
+
+> [!NOTE]
+> Currently, Azure Functions trigger, input bindings, and output bindings for Azure Cosmos DB are supported for use with the API for NoSQL only. For all other Azure Cosmos DB APIs, you should access the database from your function by using the static client for your API.
++
+The following diagram illustrates each of these three integrations:
++
+The Azure Functions trigger, input binding, and output binding for Azure Cosmos DB can be used in the following combinations:
+
+* An Azure Functions trigger for Azure Cosmos DB can be used with an output binding to a different Azure Cosmos DB container. After a function performs an action on an item in the change feed, you can write it to another container (writing it to the same container it came from would effectively create a recursive loop). Or, you can use an Azure Functions trigger for Azure Cosmos DB to effectively migrate all changed items from one container to a different container, with the use of an output binding. A sketch of this combination is shown after this list.
+* Input bindings and output bindings for Azure Cosmos DB can be used in the same Azure Function. This works well in cases when you want to find certain data with the input binding, modify it in the Azure Function, and then save it to the same container or a different container, after the modification.
+* An input binding to an Azure Cosmos DB container can be used in the same function as an Azure Functions trigger for Azure Cosmos DB, and can be used with or without an output binding as well. You could use this combination to apply up-to-date currency exchange information (pulled in with an input binding to an exchange container) to the change feed of new orders in your shopping cart service. The updated shopping cart total, with the current currency conversion applied, can be written to a third container using an output binding.
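+
+As a concrete illustration of the first combination, the following is a minimal sketch using the Azure Functions Node.js programming model. The binding names (`documents`, `outputDocument`) and the document shape are assumptions, not part of an existing sample:
+
+```javascript
+// index.js - assumes a function.json that declares a "cosmosDBTrigger" binding named
+// "documents" (direction "in") and a "cosmosDB" output binding named "outputDocument"
+// (direction "out") that points to a different container.
+module.exports = async function (context, documents) {
+  if (documents && documents.length > 0) {
+    // Write a processed copy of each changed item to the output container.
+    context.bindings.outputDocument = documents.map(doc => ({
+      id: doc.id,
+      processedAt: new Date().toISOString()
+    }));
+  }
+};
+```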
+
+## Use cases
+
+The following use cases demonstrate a few ways you can make the most of your Azure Cosmos DB data - by connecting your data to event-driven Azure Functions.
+
+### IoT use case - Azure Functions trigger and output binding for Azure Cosmos DB
+
+In IoT implementations, you can invoke a function when the check engine light is displayed in a connected car.
+
+**Implementation:** Use an Azure Functions trigger and output binding for Azure Cosmos DB
+
+1. An **Azure Functions trigger for Azure Cosmos DB** is used to trigger events related to car alerts, such as the check engine light coming on in a connected car.
+2. When the check engine light comes on, the sensor data is sent to Azure Cosmos DB.
+3. Azure Cosmos DB creates or updates new sensor data documents, then those changes are streamed to the Azure Functions trigger for Azure Cosmos DB.
+4. The trigger is invoked on every data-change to the sensor data collection, as all changes are streamed via the change feed.
+5. A threshold condition is used in the function to send the sensor data to the warranty department.
+6. If the temperature is also over a certain value, an alert is also sent to the owner.
+7. The **output binding** on the function updates the car record in another Azure Cosmos DB container to store information about the check engine event.
+
+The following image shows the code written in the Azure portal for this trigger.
++
+### Financial use case - Timer trigger and input binding
+
+In financial implementations, you can invoke a function when a bank account balance falls under a certain amount.
+
+**Implementation:** A timer trigger with an Azure Cosmos DB input binding
+
+1. Using a [timer trigger](../../azure-functions/functions-bindings-timer.md), you can retrieve the bank account balance information stored in an Azure Cosmos DB container at timed intervals using an **input binding**.
+2. If the balance is below the low balance threshold set by the user, then follow up with an action from the Azure Function.
+3. The output binding can be a [SendGrid integration](../../azure-functions/functions-bindings-sendgrid.md) that sends an email from a service account to the email addresses identified for each of the low-balance accounts. A sketch of this pattern is shown after these steps.
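+
+The following is a minimal sketch of steps 1 and 2 using the Azure Functions Node.js programming model. The binding names (`checkTimer`, `lowBalanceAccounts`) and the account document shape are assumptions:
+
+```javascript
+// index.js - assumes a function.json with a "timerTrigger" binding named "checkTimer"
+// and a "cosmosDB" input binding named "lowBalanceAccounts" whose sqlQuery returns the
+// accounts below the threshold.
+module.exports = async function (context, checkTimer) {
+  const accounts = context.bindings.lowBalanceAccounts || [];
+  for (const account of accounts) {
+    // Follow up on each low-balance account, for example by queuing a notification email.
+    context.log(`Account ${account.id} has a low balance of ${account.balance}`);
+  }
+};
+```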
+
+The following images show the code in the Azure portal for this scenario.
+++
+### Gaming use case - Azure Functions trigger and output binding for Azure Cosmos DB
+
+In gaming, when a new user is created you can search for other users who might know them by using the [Azure Cosmos DB for Gremlin](../introduction.md). You can then write the results to an Azure Cosmos DB or SQL database for easy retrieval.
+
+**Implementation:** Use an Azure Functions trigger and output binding for Azure Cosmos DB
+
+1. Using an Azure Cosmos DB [graph database](../introduction.md) to store all users, you can create a new function with an Azure Functions trigger for Azure Cosmos DB.
+2. Whenever a new user is inserted, the function is invoked, and then the result is stored using an **output binding**.
+3. The function queries the graph database to search for all the users that are directly related to the new user and returns that dataset to the function.
+4. This data is then stored in Azure Cosmos DB, which can then be easily retrieved by any front-end application that shows the new user their connected friends.
+
+### Retail use case - Multiple functions
+
+In retail implementations, when a user adds an item to their basket you now have the flexibility to create and invoke functions for optional business pipeline components.
+
+**Implementation:** Multiple Azure Functions triggers for Azure Cosmos DB listening to one container
+
+1. You can create multiple Azure Functions by adding Azure Functions triggers for Azure Cosmos DB to each - all of which listen to the same change feed of shopping cart data. When multiple functions listen to the same change feed, a new lease collection is required for each function. For more information about lease collections, see [Understanding the Change Feed Processor library](change-feed-processor.md).
+2. Whenever a new item is added to a user's shopping cart, each function is independently invoked by the change feed from the shopping cart container.
+ * One function may use the contents of the current basket to change the display of other items the user might be interested in.
+ * Another function may update inventory totals.
+ * Another function may send customer information for certain products to the marketing department, who sends them a promotional mailer.
+
+ Any department can create an Azure Functions trigger for Azure Cosmos DB that listens to the change feed, and be sure it won't delay critical order processing events in the process.
+
+In all of these use cases, because the function decouples this logic from the app itself, you don't need to spin up new app instances all the time. Instead, Azure Functions spins up individual functions to complete discrete processes as needed.
+
+## Tooling
+
+Native integration between Azure Cosmos DB and Azure Functions is available in the Azure portal and in Visual Studio.
+
+* In the Azure Functions portal, you can create a trigger. For quickstart instructions, see [Create an Azure Functions trigger for Azure Cosmos DB in the Azure portal](../../azure-functions/functions-create-cosmos-db-triggered-function.md).
+* In the Azure Cosmos DB portal, you can add an Azure Functions trigger for Azure Cosmos DB to an existing Azure Function app in the same resource group.
+* In Visual Studio, you can create the trigger using the [Azure Functions Tools](../../azure-functions/functions-develop-vs.md):
+
+ >
+ >[!VIDEO https://aka.ms/docs.change-feed-azure-functions]
+
+## Why choose Azure Functions integration for serverless computing?
+
+Azure Functions provides the ability to create scalable units of work, or concise pieces of logic that can be run on demand, without provisioning or managing infrastructure. By using Azure Functions, you don't have to create a full-blown app to respond to changes in your Azure Cosmos DB database; you can create small reusable functions for specific tasks. In addition, you can also use Azure Cosmos DB data as the input or output to an Azure Function in response to events such as an HTTP request or a timed trigger.
+
+Azure Cosmos DB is the recommended database for your serverless computing architecture for the following reasons:
+
+* **Instant access to all your data**: You have granular access to every value stored because Azure Cosmos DB [automatically indexes](../index-policy.md) all data by default, and makes those indexes immediately available. This means you're able to constantly query, update, and add new items to your database and have instant access via Azure Functions.
+
+* **Schemaless**. Azure Cosmos DB is schemaless - so it's uniquely able to handle any data output from an Azure Function. This "handle anything" approach makes it straightforward to create various Functions that all output to Azure Cosmos DB.
+
+* **Scalable throughput**. Throughput can be scaled up and down instantly in Azure Cosmos DB. If you have hundreds or thousands of Functions querying and writing to the same container, you can scale up your [RU/s](../request-units.md) to handle the load. All functions can work in parallel using your allocated RU/s and your data is guaranteed to be [consistent](../consistency-levels.md).
+
+* **Global replication**. You can replicate Azure Cosmos DB data [around the globe](../distribute-data-globally.md) to reduce latency, geo-locating your data closest to where your users are. As with all Azure Cosmos DB queries, data from event-driven triggers is read from the Azure Cosmos DB region closest to the user.
+
+If you're looking to integrate with Azure Functions to store data and don't need deep indexing or if you need to store attachments and media files, the [Azure Blob Storage trigger](../../azure-functions/functions-bindings-storage-blob.md) may be a better option.
+
+Benefits of Azure Functions:
+
+* **Event-driven**. Azure Functions is event-driven and can listen to a change feed from Azure Cosmos DB. This means you don't need to create listening logic; you just keep an eye out for the changes you're listening for.
+
+* **No limits**. Functions execute in parallel and the service spins up as many as you need. You set the parameters.
+
+* **Good for quick tasks**. The service spins up new instances of functions whenever an event fires and closes them as soon as the function completes. You only pay for the time your functions are running.
+
+If you're not sure whether Flow, Logic Apps, Azure Functions, or WebJobs are best for your implementation, see [Choose between Flow, Logic Apps, Functions, and WebJobs](../../azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md).
+
+## Next steps
+
+Now let's connect Azure Cosmos DB and Azure Functions for real:
+
+* [Create an Azure Functions trigger for Azure Cosmos DB in the Azure portal](../../azure-functions/functions-create-cosmos-db-triggered-function.md)
+* [Create an Azure Functions HTTP trigger with an Azure Cosmos DB input binding](../../azure-functions/functions-bindings-cosmosdb.md?tabs=csharp)
+* [Azure Cosmos DB bindings and triggers](../../azure-functions/functions-bindings-cosmosdb-v2.md)
cosmos-db Session State And Caching Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/session-state-and-caching-provider.md
+
+ Title: Use Azure Cosmos DB as an ASP.NET session state and caching provider
+description: Learn how to use Azure Cosmos DB as an ASP.NET session state and caching provider
+++++ Last updated : 07/06/2022++
+# Use Azure Cosmos DB as an ASP.NET session state and caching provider
+
+The Azure Cosmos DB session and cache provider allows you to use Azure Cosmos DB and leverage its low latency and global scale capabilities for storing session state data and as a distributed cache within your application.
+
+## What is session state?
+
+[Session state](/aspnet/core/fundamentals/app-state?view=aspnetcore-5.0#configure-session-state&preserve-view=true) is user data that tracks a user browsing through a web application during a period of time, within the same browser. The session state expires, and it's limited to the interactions of a particular browser; it doesn't extend across browsers. It's considered ephemeral data: if it's not present, it won't break the application. However, when it exists, it makes the experience faster for the user because the web application doesn't need to fetch it on every browser request for the same user.
+
+Session state is often backed by a storage mechanism that can, in some cases, be external to the current web server, enabling requests from the same browser to be load balanced across multiple web servers to achieve higher scalability.
+
+The simplest session state provider is the in-memory provider that only stores data in the local web server memory and requires the application to use [Application Request Routing](/iis/extensions/planning-for-arr/using-the-application-request-routing-module). This makes the browser session sticky to a particular web server (all requests for that browser need to always land on the same web server). The provider works well in simple scenarios, but the stickiness requirement can bring load-balancing problems when web applications scale.
+
+There are many external storage providers available that can store the session data in a way that can be read and accessed by multiple web servers without requiring session stickiness, enabling a higher scale.
+
+## Session state scenarios
+
+Azure Cosmos DB can be used as a session state provider through the extension package [Microsoft.Extensions.Caching.Cosmos](https://www.nuget.org/packages/Microsoft.Extensions.Caching.Cosmos), which uses the [Azure Cosmos DB .NET SDK](sdk-dotnet-v3.md) and a container as an effective session storage, based on a key/value approach where the key is the session identifier.
+
+Once the package is added, you can use `AddCosmosCache` as part of your Startup process (`services.AddSession` and `app.UseSession` are [common initialization](/aspnet/core/fundamentals/app-state?view=aspnetcore-5.0#configure-session-state&preserve-view=true) steps required for any session state provider):
+
+```csharp
+public void ConfigureServices(IServiceCollection services)
+{
+ /* Other service configurations */
+ services.AddCosmosCache((CosmosCacheOptions cacheOptions) =>
+ {
+ CosmosClientBuilder clientBuilder = new CosmosClientBuilder("myConnectionString")
+ .WithApplicationRegion("West US");
+ cacheOptions.ContainerName = "myContainer";
+ cacheOptions.DatabaseName = "myDatabase";
+ cacheOptions.ClientBuilder = clientBuilder;
+ /* Creates the container if it does not exist */
+ cacheOptions.CreateIfNotExists = true;
+ });
+
+ services.AddSession(options =>
+ {
+ options.IdleTimeout = TimeSpan.FromSeconds(3600);
+ options.Cookie.IsEssential = true;
+ });
+ /* Other service configurations */
+}
+
+public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
+{
+ /* Other configurations */
+
+ app.UseSession();
+
+ /* app.UseEndpoints and other configurations */
+}
+```
+
+Here you specify the database and container where you want the session state to be stored and, optionally, create them if they don't exist by using the `CreateIfNotExists` attribute.
+
+> [!IMPORTANT]
+> If you provide an existing container instead of using `CreateIfNotExists`, make sure it has [time to live enabled](how-to-time-to-live.md).
+
+You can customize your SDK client configuration by using the `CosmosClientBuilder` or if your application is already using a `CosmosClient` for other operations with Azure Cosmos DB, you can also inject it into the provider:
+
+```csharp
+services.AddCosmosCache((CosmosCacheOptions cacheOptions) =>
+{
+ cacheOptions.ContainerName = "myContainer";
+ cacheOptions.DatabaseName = "myDatabase";
+ cacheOptions.CosmosClient = preExistingClient;
+ /* Creates the container if it does not exist */
+ cacheOptions.CreateIfNotExists = true;
+});
+```
+
+After this, you can use ASP.NET Core sessions like with any other provider, through the `HttpContext.Session` object. Remember to always load your session information asynchronously, as per the [ASP.NET recommendations](/aspnet/core/fundamentals/app-state?view=aspnetcore-5.0#load-session-state-asynchronously&preserve-view=true).
+
+## Distributed cache scenarios
+
+Given that the Azure Cosmos DB provider implements the [IDistributedCache interface to act as a distributed cache provider](/aspnet/core/performance/caching/distributed?view=aspnetcore-5.0&preserve-view=true), it can also be used for any application that requires distributed cache, not just for web applications that require a performant and distributed session state provider.
+
+Distributed caches require data consistency so that independent instances can share the cached data. When using the Azure Cosmos DB provider, you can:
+
+- Use your Azure Cosmos DB account in **Session consistency** if you can enable [Application Request Routing](/iis/extensions/planning-for-arr/using-the-application-request-routing-module) and make requests sticky to a particular instance.
+- Use your Azure Cosmos DB account in **Bounded Staleness or Strong consistency** without requiring request stickiness. This provides the greatest scale in terms of load distribution across your instances.
+
+To use the Azure Cosmos DB provider as a distributed cache, it needs to be registered in `ConfigureServices` with the `services.AddCosmosCache` call. Once that's done, any constructor in the application can ask for the cache by referencing `IDistributedCache`, and it will receive the instance injected by [dependency injection](/dotnet/core/extensions/dependency-injection) to be used:
+
+```csharp
+public class MyBusinessClass
+{
+ private readonly IDistributedCache cache;
+
+ public MyBusinessClass(IDistributedCache cache)
+ {
+ this.cache = cache;
+ }
+
+ public async Task SomeOperationAsync()
+ {
+ string someCachedValue = await this.cache.GetStringAsync("someKey");
+ /* Use the cache */
+ }
+}
+```
+
+## Troubleshooting and diagnosing
+
+Since the Azure Cosmos DB provider uses the .NET SDK underneath, all the existing [performance guidelines](performance-tips-dotnet-sdk-v3.md) and [troubleshooting guides](troubleshoot-dotnet-sdk.md) apply when investigating any potential issue. Note that there's a distinct way to access the diagnostics from the underlying Azure Cosmos DB operations, because they can't be exposed through the `IDistributedCache` APIs.
+
+Registering the optional diagnostics delegate allows you to capture and conditionally log diagnostics to troubleshoot cases like high latency:
+
+```csharp
+void captureDiagnostics(CosmosDiagnostics diagnostics)
+{
+ if (diagnostics.GetClientElapsedTime() > SomePredefinedThresholdTime)
+ {
+ Console.WriteLine(diagnostics.ToString());
+ }
+}
+
+services.AddCosmosCache((CosmosCacheOptions cacheOptions) =>
+{
+ cacheOptions.DiagnosticsHandler = captureDiagnostics;
+ /* other options */
+});
+```
+
+## Next steps
+- To find more details on the Azure Cosmos DB session and cache provider, see the [source code on GitHub](https://github.com/Azure/Microsoft.Extensions.Caching.Cosmos/).
+- [Try out](https://github.com/Azure/Microsoft.Extensions.Caching.Cosmos/tree/master/sample) the Azure Cosmos DB session and cache provider by exploring a sample ASP.NET Core web application.
+- Read more about [distributed caches](/aspnet/core/performance/caching/distributed?view=aspnetcore-5.0&preserve-view=true) in .NET.
cosmos-db Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/stored-procedures-triggers-udfs.md
+
+ Title: Work with stored procedures, triggers, and UDFs in Azure Cosmos DB
+description: This article introduces the concepts such as stored procedures, triggers, and user-defined functions in Azure Cosmos DB.
+++++ Last updated : 08/26/2021++++
+# Stored procedures, triggers, and user-defined functions
+
+Azure Cosmos DB provides language-integrated, transactional execution of JavaScript. When using the API for NoSQL in Azure Cosmos DB, you can write **stored procedures**, **triggers**, and **user-defined functions (UDFs)** in the JavaScript language. You can write your logic in JavaScript that executes inside the database engine. You can create and execute triggers, stored procedures, and UDFs by using the [Azure portal](https://portal.azure.com/), the [JavaScript language integrated query API in Azure Cosmos DB](javascript-query-api.md), or the [Azure Cosmos DB for NoSQL client SDKs](how-to-use-stored-procedures-triggers-udfs.md).
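+
+As a small illustration of what server-side logic looks like, the following sketch registers and queries a simple UDF with the JavaScript SDK. It assumes an existing `@azure/cosmos` container object, and the UDF name and `temperatureInF` property are hypothetical:
+
+```javascript
+// A minimal sketch, assuming an existing @azure/cosmos container object;
+// the UDF name and the temperatureInF property are hypothetical.
+await container.scripts.userDefinedFunctions.create({
+  id: "toCelsius",
+  body: "function toCelsius(f) { return ((f - 32) * 5) / 9; }"
+});
+
+// Reference the registered UDF from a query with the udf. prefix.
+const { resources } = await container.items
+  .query("SELECT udf.toCelsius(c.temperatureInF) AS tempC FROM c")
+  .fetchAll();
+```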
+
+## Benefits of using server-side programming
+
+Writing stored procedures, triggers, and user-defined functions (UDFs) in JavaScript allows you to build rich applications, with the following advantages:
+
+* **Procedural logic:** JavaScript is a high-level programming language that provides a rich and familiar interface to express business logic. You can perform a sequence of complex operations on the data.
+
+* **Atomic transactions:** Azure Cosmos DB database operations that are performed within a single stored procedure or a trigger are atomic. This atomic functionality lets an application combine related operations into a single batch, so that either all of the operations succeed or none of them succeed.
+
+* **Performance:** The JSON data is intrinsically mapped to the JavaScript language type system. This mapping allows for a number of optimizations like lazy materialization of JSON documents in the buffer pool and making them available on-demand to the executing code. There are other performance benefits associated with shipping business logic to the database, which include:
+
+ * *Batching:* You can group operations like inserts and submit them in bulk. The network traffic latency costs and the store overhead to create separate transactions are reduced significantly.
+
+ * *Pre-compilation:* Stored procedures, triggers, and UDFs are implicitly pre-compiled to the byte code format in order to avoid compilation cost at the time of each script invocation. Due to pre-compilation, the invocation of stored procedures is fast and has a low footprint.
+
+ * *Sequencing:* Sometimes operations need a triggering mechanism that may perform one or more additional updates to the data. In addition to atomicity, there are also performance benefits when executing on the server side.
+
+* **Encapsulation:** Stored procedures can be used to group logic in one place. Encapsulation adds an abstraction layer on top of the data, which enables you to evolve your applications independently from the data. This layer of abstraction is helpful when the data is schema-less and you don't have to manage adding additional logic directly into your application. The abstraction lets you keep the data secure by streamlining the access from the scripts.
+
+> [!TIP]
+> Stored procedures are best suited for operations that are write-heavy and require a transaction across a partition key value. When deciding whether to use stored procedures, optimize around encapsulating the maximum amount of writes possible. Generally speaking, stored procedures are not the most efficient means for doing large numbers of read or query operations, so using stored procedures to batch large numbers of reads to return to the client will not yield the desired benefit. For best performance, these read-heavy operations should be done on the client-side, using the Azure Cosmos DB SDK.
+
+## Transactions
+
+A transaction in a typical database can be defined as a sequence of operations performed as a single logical unit of work. Each transaction provides **ACID property guarantees**. ACID is a well-known acronym that stands for: **A**tomicity, **C**onsistency, **I**solation, and **D**urability.
+
+* Atomicity guarantees that all the operations done inside a transaction are treated as a single unit, and either all of them are committed or none of them are.
+
+* Consistency makes sure that the data is always in a valid state across transactions.
+
+* Isolation guarantees that no two transactions interfere with each other. Many commercial systems provide multiple isolation levels that can be used based on the application needs.
+
+* Durability ensures that any change that is committed in a database will always be present.
+
+In Azure Cosmos DB, the JavaScript runtime is hosted inside the database engine. Hence, requests made within stored procedures and triggers execute in the same scope as the database session. This feature enables Azure Cosmos DB to guarantee ACID properties for all operations that are part of a stored procedure or a trigger. For examples, see the [how to implement transactions](how-to-write-stored-procedures-triggers-udfs.md#transactions) article.
+
+### Scope of a transaction
+
+Stored procedures are associated with an Azure Cosmos DB container, and stored procedure execution is scoped to a logical partition key. Stored procedures must include a logical partition key value during execution that defines the logical partition for the scope of the transaction. For more information, see the [Azure Cosmos DB partitioning](../partitioning-overview.md) article.
+
+### Commit and rollback
+
+Transactions are natively integrated into the Azure Cosmos DB JavaScript programming model. Within a JavaScript function, all the operations are automatically wrapped under a single transaction. If the JavaScript logic in a stored procedure completes without any exceptions, all the operations within the transaction are committed to the database. Statements like `BEGIN TRANSACTION` and `COMMIT TRANSACTION` (familiar to relational databases) are implicit in Azure Cosmos DB. If there are any exceptions from the script, the Azure Cosmos DB JavaScript runtime will roll back the entire transaction. As such, throwing an exception is effectively equivalent to a `ROLLBACK TRANSACTION` in Azure Cosmos DB.
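+
+The following sketch (the function and parameter names are illustrative) shows this behavior: two documents are created in one stored procedure, and throwing on any error rolls back both writes:
+
+```javascript
+// Sketch: both writes commit together, or neither does.
+function createTwoDocuments(docA, docB) {
+    var collection = getContext().getCollection();
+    var collectionLink = collection.getSelfLink();
+
+    var acceptedA = collection.createDocument(collectionLink, docA, function (errA) {
+        if (errA) throw errA; // throwing rolls back the whole transaction
+
+        var acceptedB = collection.createDocument(collectionLink, docB, function (errB) {
+            if (errB) throw errB; // rolls back the first write as well
+            getContext().getResponse().setBody("both documents created");
+        });
+        if (!acceptedB) throw new Error("The second write was not accepted.");
+    });
+    if (!acceptedA) throw new Error("The first write was not accepted.");
+}
+```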
+
+### Data consistency
+
+Stored procedures and triggers are always executed on the primary replica of an Azure Cosmos DB container. This feature ensures that reads from stored procedures offer [strong consistency](../consistency-levels.md). Queries using user-defined functions can be executed on the primary or any secondary replica. Stored procedures and triggers are intended to support transactional writes. Read-only logic is best implemented as application-side logic and queries that use the [Azure Cosmos DB for NoSQL SDKs](samples-dotnet.md), which helps you saturate the database throughput.
+
+> [!TIP]
+> The queries executed within a stored procedure or trigger may not see changes to items made by the same script transaction. This statement applies both to SQL queries, such as `getContext().getCollection().queryDocuments()`, and to integrated language queries, such as `getContext().getCollection().filter()`.
+
+## Bounded execution
+
+All Azure Cosmos DB operations must complete within the specified timeout duration. Stored procedures have a timeout limit of 5 seconds. This constraint applies to JavaScript functions - stored procedures, triggers, and user-defined functions. If an operation does not complete within that time limit, the transaction is rolled back.
+
+You can either ensure that your JavaScript functions finish within the time limit or implement a continuation-based model to batch/resume execution. In order to simplify development of stored procedures and triggers to handle time limits, all functions under the Azure Cosmos DB container (for example, create, read, update, and delete of items) return a boolean value that represents whether that operation will complete. If this value is false, it is an indication that the procedure must wrap up execution because the script is consuming more time or provisioned throughput than the configured value. Operations queued prior to the first unaccepted store operation are guaranteed to complete if the stored procedure completes in time and does not queue any more requests. Thus, operations should be queued one at a time by using JavaScript's callback convention to manage the script's control flow. Because scripts are executed in a server-side environment, they are strictly governed. Scripts that repeatedly violate execution boundaries may be marked inactive and can't be executed, and they should be recreated to honor the execution boundaries.
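+
+The following sketch (the function name and document shape are illustrative) shows this pattern: it queues one write at a time, checks the boolean returned by `createDocument`, and reports how many items were written so the client can resubmit the remainder as a continuation:
+
+```javascript
+// Sketch of a continuation-friendly bulk import. If the returned count is less
+// than docs.length, the client calls the stored procedure again with the rest.
+function bulkImport(docs) {
+    var collection = getContext().getCollection();
+    var collectionLink = collection.getSelfLink();
+    var count = 0;
+
+    tryCreate(docs[count]);
+
+    function tryCreate(doc) {
+        // createDocument returns false when the script is close to its time or
+        // throughput limit; stop queuing and report progress to the client.
+        var accepted = collection.createDocument(collectionLink, doc, onCreated);
+        if (!accepted) getContext().getResponse().setBody(count);
+    }
+
+    function onCreated(err) {
+        if (err) throw err;
+        count++;
+        if (count >= docs.length) {
+            getContext().getResponse().setBody(count); // all items written
+        } else {
+            tryCreate(docs[count]);
+        }
+    }
+}
+```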
+
+JavaScript functions are also subject to [provisioned throughput capacity](../request-units.md). JavaScript functions could potentially end up using a large number of request units within a short time, and they may be rate-limited if the provisioned throughput capacity limit is reached. Scripts also consume throughput beyond the throughput spent executing database operations, although these database operations are slightly less expensive than executing the same operations from the client.
+
+## Triggers
+
+Azure Cosmos DB supports two types of triggers:
+
+### Pre-triggers
+
+Azure Cosmos DB provides triggers that can be invoked by performing an operation on an Azure Cosmos DB item. For example, you can specify a pre-trigger when you're creating an item. In this case, the pre-trigger runs before the item is created. Pre-triggers can't have any input parameters. If necessary, the request object can be used to update the document body from the original request. When you register a trigger, you can specify the operations that it can run with. For example, if a trigger was created with `TriggerOperation.Create`, using the trigger in a replace operation isn't permitted. For examples, see the [How to write triggers](how-to-write-stored-procedures-triggers-udfs.md#triggers) article.
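+
+As a sketch (the trigger name and the `createdAt` property are illustrative), a pre-trigger can read the incoming item from the request object and enrich it before it's stored:
+
+```javascript
+// Pre-trigger sketch: stamp a creation time onto the incoming item.
+// Register it as a pre-trigger for the create operation.
+function addTimestamp() {
+    var context = getContext();
+    var request = context.getRequest();
+
+    var itemToCreate = request.getBody();
+    if (!("createdAt" in itemToCreate)) {
+        itemToCreate.createdAt = new Date().toISOString();
+    }
+
+    // Replace the request body with the updated item before it's written.
+    request.setBody(itemToCreate);
+}
+```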
+
+### Post-triggers
+
+Similar to pre-triggers, post-triggers are also associated with an operation on an Azure Cosmos DB item, and they don't require any input parameters. They run *after* the operation has completed and have access to the response message that is sent to the client. For examples, see the [How to write triggers](how-to-write-stored-procedures-triggers-udfs.md#triggers) article.
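+
+For example, a post-trigger sketch (the trigger name and audit document shape are illustrative) can read the item that was just created from the response object and record an audit entry. Because it runs in the same transaction, throwing an error rolls back the original create as well:
+
+```javascript
+// Post-trigger sketch: after an item is created, write an audit entry.
+// In a partitioned container, the audit entry must carry the same partition
+// key value as the created item, because the transaction is partition-scoped.
+function auditCreate() {
+    var context = getContext();
+    var container = context.getCollection();
+    var createdItem = context.getResponse().getBody();
+
+    var auditEntry = {
+        id: "audit-" + createdItem.id,
+        sourceId: createdItem.id,
+        auditedAt: new Date().toISOString()
+    };
+
+    var accepted = container.createDocument(container.getSelfLink(), auditEntry,
+        function (err) { if (err) throw err; });
+    if (!accepted) throw new Error("The audit entry was not accepted.");
+}
+```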
+
+> [!NOTE]
+> Registered triggers don't run automatically when their corresponding operations (create / delete / replace / update) happen. They have to be explicitly called when executing these operations. To learn more, see the [how to run triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) article.
+
+## <a id="udfs"></a>User-defined functions
+
+[User-defined functions](query/udfs.md) (UDFs) are used to extend the API for NoSQL query language syntax and implement custom business logic easily. They can be called only within queries. UDFs don't have access to the context object and are meant to be used as compute-only JavaScript. Therefore, UDFs can be run on secondary replicas. For examples, see the [How to write user-defined functions](how-to-write-stored-procedures-triggers-udfs.md#udfs) article.
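+
+A UDF is a plain JavaScript function that receives values from the query and returns a computed result. Here's a sketch (the name and tax brackets are illustrative):
+
+```javascript
+// UDF sketch: compute a tax amount from an income value.
+function tax(income) {
+    if (income === undefined) throw new Error("The income value is required.");
+    if (income < 1000) return income * 0.1;
+    if (income < 10000) return income * 0.2;
+    return income * 0.4;
+}
+```
+
+Once registered, the UDF is referenced from a query through the `udf` namespace, for example: `SELECT c.id, udf.tax(c.income) AS tax FROM c`.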
+
+## <a id="jsqueryapi"></a>JavaScript language-integrated query API
+
+In addition to issuing queries using the API for NoSQL query syntax, the [server-side SDK](https://azure.github.io/azure-cosmosdb-js-server) allows you to perform queries by using a JavaScript interface without any knowledge of SQL. The JavaScript query API allows you to programmatically build queries by passing predicate functions into a sequence of function calls. Queries are parsed by the JavaScript runtime and are executed efficiently within Azure Cosmos DB. To learn about JavaScript query API support, see the [Working with JavaScript language integrated query API](javascript-query-api.md) article. For examples, see the [How to write stored procedures and triggers using JavaScript Query API](how-to-write-javascript-query-api.md) article.
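+
+For example, inside a stored procedure the built-in `__` object (a shortcut for `getContext().getCollection()`) exposes the query API. The following sketch (the `status` property and result shape are illustrative) filters and projects documents without writing SQL:
+
+```javascript
+// Sketch: language-integrated query inside a stored procedure.
+function listOpenOrderIds() {
+    var response = getContext().getResponse();
+
+    // Roughly equivalent to: SELECT VALUE r.id FROM root r WHERE r.status = "open"
+    var accepted = __.chain()
+        .filter(function (doc) { return doc.status === "open"; })
+        .map(function (doc) { return doc.id; })
+        .value({}, function (err, results) {
+            if (err) throw err;
+            response.setBody({ openOrderIds: results, count: results.length });
+        });
+
+    if (!accepted) throw new Error("The query was not accepted.");
+}
+```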
+
+## Next steps
+
+Learn how to write and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB with the following articles:
+
+* [How to write stored procedures, triggers, and user-defined functions](how-to-write-stored-procedures-triggers-udfs.md)
+
+* [How to use stored procedures, triggers, and user-defined functions](how-to-use-stored-procedures-triggers-udfs.md)
+
+* [Working with JavaScript language integrated query API](javascript-query-api.md)
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Synthetic Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/synthetic-partition-keys.md
+
+ Title: Create a synthetic partition key in Azure Cosmos DB
+description: Learn how to use synthetic partition keys in your Azure Cosmos DB containers to distribute the data and workload evenly across the partition keys
+++ Last updated : 08/26/2021++++++
+# Create a synthetic partition key
+
+It's a best practice to have a partition key with many distinct values, such as hundreds or thousands. The goal is to distribute your data and workload evenly across the items associated with these partition key values. If such a property doesn't exist in your data, you can construct a *synthetic partition key*. This document describes several basic techniques for generating a synthetic partition key for your Azure Cosmos DB container.
+
+## Concatenate multiple properties of an item
+
+You can form a partition key by concatenating multiple property values into a single artificial `partitionKey` property. These keys are referred to as synthetic keys. For example, consider the following example document:
+
+```JavaScript
+{
+  "deviceId": "abc-123",
+  "date": 2018
+}
+```
+
+For the previous document, one option is to set `/deviceId` or `/date` as the partition key. Use this option if you want to partition your container based on either device ID or date. Another option is to concatenate these two values into a synthetic `partitionKey` property that's used as the partition key.
+
+```JavaScript
+{
+  "deviceId": "abc-123",
+  "date": 2018,
+  "partitionKey": "abc-123-2018"
+}
+```
+
+In real-world scenarios, you can have thousands of items in a database. Instead of adding the synthetic key manually, define client-side logic to concatenate the values and insert the synthetic key into the items in your Azure Cosmos DB containers.
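+
+A minimal sketch of that client-side logic (the property names match the example above; the function name is illustrative):
+
+```javascript
+// Sketch: build the synthetic partition key before writing the item.
+function addSyntheticKey(item) {
+    item.partitionKey = item.deviceId + "-" + item.date;
+    return item;
+}
+
+const item = addSyntheticKey({ deviceId: "abc-123", date: 2018 });
+// item.partitionKey === "abc-123-2018"
+```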
+
+## Use a partition key with a random suffix
+
+Another possible strategy to distribute the workload more evenly is to append a random number at the end of the partition key value. When you distribute items in this way, you can perform parallel write operations across partitions.
+
+For example, suppose a partition key represents a date. You might choose a random number between 1 and 400 and concatenate it as a suffix to the date. This method results in partition key values like `2018-08-09.1`, `2018-08-09.2`, and so on, through `2018-08-09.400`. Because you randomize the partition key, the write operations on the container on each day are spread evenly across multiple partitions. This method results in better parallelism and overall higher throughput.
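+
+A sketch of that suffix logic (the range of 1 to 400 matches the example above; the function name is illustrative):
+
+```javascript
+// Sketch: append a random suffix between 1 and 400 to the date-based key.
+function randomSuffixKey(dateString) {
+    const suffix = Math.floor(Math.random() * 400) + 1;
+    return dateString + "." + suffix;
+}
+
+// For example "2018-08-09.237"; writes for one day spread across many partitions.
+const partitionKey = randomSuffixKey("2018-08-09");
+```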
+
+## Use a partition key with pre-calculated suffixes
+
+The random suffix strategy can greatly improve write throughput, but it's difficult to read a specific item. You don't know the suffix value that was used when you wrote the item. To make it easier to read individual items, use the pre-calculated suffixes strategy. Instead of using a random number to distribute the items among the partitions, use a number that is calculated based on something that you want to query.
+
+Consider the previous example, where a container uses a date as the partition key. Now suppose that each item has a `Vehicle-Identification-Number` (`VIN`) attribute that you want to access. Further, suppose that you often run queries to find items by the `VIN`, in addition to date. Before your application writes the item to the container, it can calculate a hash suffix based on the VIN and append it to the partition key date. The calculation might generate a number between 1 and 400 that is evenly distributed. This result is similar to the results produced by the random suffix strategy method. The partition key value is then the date concatenated with the calculated result.
+
+With this strategy, the writes are evenly spread across the partition key values, and across the partitions. You can easily read a particular item and date, because you can calculate the partition key value for a specific `Vehicle-Identification-Number`. The benefit of this method is that you avoid creating a single hot partition key, that is, a partition key that takes all of the workload.
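+
+A sketch of the pre-calculated suffix (the hash function here is a simple illustrative one, not a recommendation): because the suffix is derived from the `VIN`, both the writer and the reader compute the same partition key value.
+
+```javascript
+// Sketch: derive a deterministic suffix between 1 and 400 from the VIN.
+function vinSuffix(vin) {
+    let hash = 0;
+    for (let i = 0; i < vin.length; i++) {
+        hash = (hash * 31 + vin.charCodeAt(i)) >>> 0; // simple string hash
+    }
+    return (hash % 400) + 1;
+}
+
+function partitionKeyFor(dateString, vin) {
+    return dateString + "." + vinSuffix(vin);
+}
+
+// Writer and reader both arrive at the same value, for example "2018-08-09.83".
+const pk = partitionKeyFor("2018-08-09", "1HGCM82633A004352");
+```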
+
+## Next steps
+
+You can learn more about the partitioning concept in the following articles:
+
+* Learn more about [logical partitions](../partitioning-overview.md).
+* Learn more about how to [provision throughput on Azure Cosmos DB containers and databases](../set-throughput.md).
+* Learn how to [provision throughput on an Azure Cosmos DB container](how-to-provision-container-throughput.md).
+* Learn how to [provision throughput on an Azure Cosmos DB database](how-to-provision-database-throughput.md).
+* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Throughput Control Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/throughput-control-spark.md
+
+ Title: Azure Cosmos DB Spark Connector - Throughput Control
+description: Learn about controlling throughput for bulk data movements in the Azure Cosmos DB Spark Connector
++++ Last updated : 06/22/2022++++
+# Azure Cosmos DB Spark Connector - throughput control
+
+The [Spark Connector](quickstart-spark.md) allows you to communicate with Azure Cosmos DB using [Apache Spark](https://spark.apache.org/). This article describes how the throughput control feature works. Check out our [Spark samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples) to get started using throughput control.
+
+## Why is throughput control important?
+
+Throughput control helps isolate the performance needs of applications running against a container by limiting the number of [request units](../request-units.md) that a given Spark client can consume.
+
+There are several advanced scenarios that benefit from client-side throughput control:
+
+- **Different operations and tasks have different priorities** - there can be a need to prevent normal transactions from being throttled due to data ingestion or copy activities. Some operations and/or tasks aren't sensitive to latency, and are more tolerant to being throttled than others.
+
+- **Provide fairness/isolation to different end users/tenants** - An application will usually have many end users. Some users may send too many requests, which consume all available throughput, causing others to get throttled.
+
+- **Load balancing of throughput between different Azure Cosmos DB clients** - In some use cases, it's important to make sure all the clients get a fair (equal) share of the throughput.
++
+Throughput control enables more granular RU rate limiting as needed.
+
+## How does throughput control work?
+
+Throughput control for the Spark Connector is configured by first creating a container that will define throughput control metadata, with a partition key of `groupId`, and `ttl` enabled. Here we create this container using Spark SQL, and call it `ThroughputControl`:
++
+```sql
+ %sql
+ CREATE TABLE IF NOT EXISTS cosmosCatalog.`database-v4`.ThroughputControl
+ USING cosmos.oltp
+ OPTIONS(spark.cosmos.database = 'database-v4')
+ TBLPROPERTIES(partitionKeyPath = '/groupId', autoScaleMaxThroughput = '4000', indexingPolicy = 'AllProperties', defaultTtlInSeconds = '-1');
+```
+
+> [!NOTE]
+> The above example creates a container with [autoscale](../provision-throughput-autoscale.md). If you prefer standard provisioning, you can replace `autoScaleMaxThroughput` with `manualThroughput` instead.
+
+> [!IMPORTANT]
+> The partition key must be defined as `/groupId`, and `ttl` must be enabled, for the throughput control feature to work.
+
+Within the Spark config of a given application, we can then specify parameters for our workload. The following example enables throughput control and defines a throughput control group `name` and a `targetThroughputThreshold`. We also define the `database` and `container` in which the throughput control group is maintained:
+
+```scala
+ "spark.cosmos.throughputControl.enabled" -> "true",
+ "spark.cosmos.throughputControl.name" -> "SourceContainerThroughputControl",
+ "spark.cosmos.throughputControl.targetThroughputThreshold" -> "0.95",
+ "spark.cosmos.throughputControl.globalControl.database" -> "database-v4",
+ "spark.cosmos.throughputControl.globalControl.container" -> "ThroughputControl"
+```
+
+In the above example, the `targetThroughputThreshold` is defined as **0.95**, so rate limiting occurs (and requests are retried) when clients consume more than 95% (+/- 5-10 percent) of the throughput that is allocated to the container. This configuration is stored as a document in the throughput control container that looks like the following example:
+
+```json
+ {
+ "id": "ZGF0YWJhc2UtdjQvY3VzdG9tZXIvU291cmNlQ29udGFpbmVyVGhyb3VnaHB1dENvbnRyb2w.info",
+ "groupId": "database-v4/customer/SourceContainerThroughputControl.config",
+ "targetThroughput": "",
+ "targetThroughputThreshold": "0.95",
+ "isDefault": true,
+ "_rid": "EHcYAPolTiABAAAAAAAAAA==",
+ "_self": "dbs/EHcYAA==/colls/EHcYAPolTiA=/docs/EHcYAPolTiABAAAAAAAAAA==/",
+ "_etag": "\"2101ea83-0000-1100-0000-627503dd0000\"",
+ "_attachments": "attachments/",
+ "_ts": 1651835869
+ }
+```
+> [!NOTE]
+> Throughput control doesn't do RU pre-calculation of each operation. Instead, it tracks the RU usage after the operation, based on the response header. As such, throughput control is based on an approximation and doesn't guarantee that this amount of throughput will be available for the group at any given time.
+
+> [!WARNING]
+> The `targetThroughputThreshold` is **immutable**. If you change the target throughput threshold value, a new throughput control group is created (as long as you use version 4.10.0 or later, it can have the same name). You need to restart all Spark jobs that are using the group if you want to ensure they all consume the new threshold immediately; otherwise, they pick up the new threshold after the next restart.
+
+For each Spark client that uses the throughput control group, a record is created in the `ThroughputControl` container with a `ttl` of a few seconds, so the documents vanish quickly when a Spark client is no longer actively running. Each client record looks like the following example:
+
+```json
+ {
+ "id": "Zhjdieidjojdook3osk3okso3ksp3ospojsp92939j3299p3oj93pjp93jsps939pkp9ks39kp9339skp",
+ "groupId": "database-v4/customer/SourceContainerThroughputControl.config",
+ "_etag": "\"1782728-w98999w-ww9998w9-99990000\"",
+ "ttl": 10,
+ "initializeTime": "2022-06-26T02:24:40.054Z",
+ "loadFactor": 0.97636377638898,
+ "allocatedThroughput": 484.89444487847,
+ "_rid": "EHcYAPolTiABAAAAAAAAAA==",
+ "_self": "dbs/EHcYAA==/colls/EHcYAPolTiA=/docs/EHcYAPolTiABAAAAAAAAAA==/",
+ "_etag": "\"2101ea83-0000-1100-0000-627503dd0000\"",
+ "_attachments": "attachments/",
+ "_ts": 1651835869
+ }
+```
+
+In each client record, the `loadFactor` attribute represents the load on the given client, relative to other clients in the throughput control group. The `allocatedThroughput` attribute shows how many RUs are currently allocated to this client. The Spark Connector will adjust allocated throughput for each client based on its load. This will ensure that each client gets a share of the throughput available that is proportional to its load, and all clients together don't consume more than the total allocated for the throughput control group to which they belong.
++
+## Next steps
+
+* [Spark samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples).
+* [Manage data with Azure Cosmos DB Spark 3 OLTP Connector for API for NoSQL](quickstart-spark.md).
+* Learn more about [Apache Spark](https://spark.apache.org/).
cosmos-db Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/time-to-live.md
+
+ Title: Expire data in Azure Cosmos DB with Time to Live
+description: With TTL, Microsoft Azure Cosmos DB provides the ability to have documents automatically purged from the system after a period of time.
+++++++ Last updated : 09/16/2021+
+# Time to Live (TTL) in Azure Cosmos DB
+
+With **Time to Live**, or TTL, Azure Cosmos DB provides the ability to delete items automatically from a container after a certain time period. By default, you can set time to live at the container level and override the value on a per-item basis. After you set the TTL at a container or at an item level, Azure Cosmos DB automatically removes these items after the time period has elapsed since the time they were last modified. The time to live value is configured in seconds. When you configure TTL, the system automatically deletes the expired items based on the TTL value, without needing a delete operation that is explicitly issued by the client application. The maximum value for TTL is 2147483647 seconds.
+
+Deletion of expired items is a background task that consumes left-over [Request Units](../request-units.md), that is, Request Units that haven't been consumed by user requests. Even after the TTL has expired, if the container is overloaded with requests and if there aren't enough RUs available, the data deletion is delayed. Data is deleted once there are enough RUs available to perform the delete operation. Though the data deletion is delayed, data is not returned by any queries (by any API) after the TTL has expired.
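+
+As a sketch using the JavaScript SDK (`@azure/cosmos`), with illustrative database, container, and item names: `defaultTtl` enables TTL on the container, and an item-level `ttl` property overrides it.
+
+```javascript
+// Sketch: enable TTL on a container and override it per item.
+const { CosmosClient } = require("@azure/cosmos");
+
+async function configureTtl() {
+    const client = new CosmosClient({
+        endpoint: process.env.COSMOS_ENDPOINT,
+        key: process.env.COSMOS_KEY
+    });
+    const { database } = await client.databases.createIfNotExists({ id: "ttl-demo" });
+
+    // defaultTtl: -1 enables TTL on the container; items don't expire
+    // unless they set their own ttl value.
+    const { container } = await database.containers.createIfNotExists({
+        id: "events",
+        partitionKey: { paths: ["/deviceId"] },
+        defaultTtl: -1
+    });
+
+    // This item overrides the container default and expires 2000 seconds
+    // after it was last modified.
+    await container.items.create({ id: "event-1", deviceId: "abc-123", ttl: 2000 });
+}
+
+configureTtl().catch(console.error);
+```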
+
+> [!NOTE]
+> This content is related to Azure Cosmos DB transactional store TTL. If you're looking for analytical store TTL, which enables no-ETL HTAP scenarios through [Azure Synapse Link](../synapse-link.md), see [analytical TTL](../analytical-store-introduction.md#analytical-ttl).
+
+## Time to live for containers and items
+
+The time to live value is set in seconds, and it is interpreted as a delta from the time that an item was last modified. You can set time to live on a container or an item within the container:
+
+1. **Time to Live on a container** (set using `DefaultTimeToLive`):
+
+ - If missing (or set to null), items are not expired automatically.
+
+   - If present and the value is set to "-1", it is equal to infinity, and items don't expire by default.
+
+   - If present and the value is set to some *non-zero* number *"n"*, items will expire *"n"* seconds after their last modified time.
+
+2. **Time to Live on an item** (set using `ttl`):
+
+   - This property is applicable only if `DefaultTimeToLive` is present and it is not set to null for the parent container.
+
+ - If present, it overrides the `DefaultTimeToLive` value of the parent container.
+
+## Time to Live configurations
+
+- If TTL is set to *"n"* on a container, then the items in that container will expire after *n* seconds. If there are items in the same container that have their own time to live, set to -1 (indicating they do not expire) or if some items have overridden the time to live setting with a different number, these items expire based on their own configured TTL value.
+
+- If TTL is not set on a container, then the time to live on an item in this container has no effect.
+
+- If TTL on a container is set to -1, an item in this container that has the time to live set to *n* will expire after *n* seconds, and the remaining items will not expire.
+
+## Examples
+
+This section shows some examples with different time to live values assigned to container and items:
+
+### Example 1
+
+TTL on container is set to null (DefaultTimeToLive = null)
+
+|TTL on item| Result|
+|---|---|
+|ttl = null|TTL is disabled. The item will never expire (default).|
+|ttl = -1|TTL is disabled. The item will never expire.|
+|ttl = 2000|TTL is disabled. The item will never expire.|
+
+### Example 2
+
+TTL on container is set to -1 (DefaultTimeToLive = -1)
+
+|TTL on item| Result|
+|---|---|
+|ttl = null|TTL is enabled. The item will never expire (default).|
+|ttl = -1|TTL is enabled. The item will never expire.|
+|ttl = 2000|TTL is enabled. The item will expire after 2000 seconds.|
+
+### Example 3
+
+TTL on container is set to 1000 (DefaultTimeToLive = 1000)
+
+|TTL on item| Result|
+|---|---|
+|ttl = null|TTL is enabled. The item will expire after 1000 seconds (default).|
+|ttl = -1|TTL is enabled. The item will never expire.|
+|ttl = 2000|TTL is enabled. The item will expire after 2000 seconds.|
+
+## Next steps
+
+Learn how to configure Time to Live in the following articles:
+
+- [How to configure Time to Live](how-to-time-to-live.md)
cosmos-db Transactional Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/transactional-batch.md
+
+ Title: Transactional batch operations in Azure Cosmos DB using the .NET or Java SDK
+description: Learn how to use TransactionalBatch in the Azure Cosmos DB .NET or Java SDK to perform a group of point operations that either succeed or fail.
+++++ Last updated : 10/27/2020++
+# Transactional batch operations in Azure Cosmos DB using the .NET or Java SDK
+
+Transactional batch describes a group of point operations that need to either succeed or fail together with the same partition key in a container. In the .NET and Java SDKs, the `TransactionalBatch` class is used to define this batch of operations. If all operations succeed in the order they're described within the transactional batch operation, the transaction will be committed. However, if any operation fails, the entire transaction is rolled back.
+
+## What's a transaction in Azure Cosmos DB
+
+A transaction in a typical database can be defined as a sequence of operations performed as a single logical unit of work. Each transaction provides ACID (Atomicity, Consistency, Isolation, Durability) property guarantees.
+
+* **Atomicity** guarantees that all the operations done inside a transaction are treated as a single unit, and either all of them are committed or none of them are.
+* **Consistency** makes sure that the data is always in a valid state across transactions.
+* **Isolation** guarantees that no two transactions interfere with each other. Many commercial systems provide multiple isolation levels that can be used based on the application needs.
+* **Durability** ensures that any change that is committed in a database will always be present.
+
+Azure Cosmos DB supports [full ACID compliant transactions with snapshot isolation](database-transactions-optimistic-concurrency.md) for operations within the same [logical partition key](../partitioning-overview.md).
+
+## Transactional batch operations and stored procedures
+
+Azure Cosmos DB currently supports stored procedures, which also provide the transactional scope on operations. However, transactional batch operations offer the following benefits:
+
+* **Language option** - Transactional batch is supported on the SDK and language you work with already, while stored procedures need to be written in JavaScript.
+* **Code versioning** - Versioning application code and onboarding it onto your CI/CD pipeline is much more natural than orchestrating the update of a stored procedure and making sure the rollover happens at the right time. It also makes rolling back changes easier.
+* **Performance** - Reduced latency on equivalent operations by up to 30% when compared to the stored procedure execution.
+* **Content serialization** - Each operation within a transactional batch can use custom serialization options for its payload.
+
+## How to create a transactional batch operation
+
+### [.NET](#tab/dotnet)
+
+When creating a transactional batch operation, start with a container instance and call [CreateTransactionalBatch](/dotnet/api/microsoft.azure.cosmos.container.createtransactionalbatch):
+
+```csharp
+PartitionKey partitionKey = new PartitionKey("road-bikes");
+
+TransactionalBatch batch = container.CreateTransactionalBatch(partitionKey);
+```
+
+Next, add multiple operations to the batch:
+
+```csharp
+Product bike = new (
+ id: "68719520766",
+ category: "road-bikes",
+ name: "Chropen Road Bike"
+);
+
+batch.CreateItem<Product>(bike);
+
+Part part = new (
+ id: "68719519885",
+ category: "road-bikes",
+ name: "Tronosuros Tire",
+ productId: bike.id
+);
+
+batch.CreateItem<Part>(part);
+```
+
+Finally, call [ExecuteAsync](/dotnet/api/microsoft.azure.cosmos.transactionalbatch.executeasync) on the batch:
+
+```csharp
+using TransactionalBatchResponse response = await batch.ExecuteAsync();
+```
+
+Once the response is received, examine if the response is successful. If the response indicates a success, extract the results:
+
+```csharp
+if (response.IsSuccessStatusCode)
+{
+ TransactionalBatchOperationResult<Product> productResponse;
+ productResponse = response.GetOperationResultAtIndex<Product>(0);
+ Product productResult = productResponse.Resource;
+
+ TransactionalBatchOperationResult<Part> partResponse;
+ partResponse = response.GetOperationResultAtIndex<Part>(1);
+ Part partResult = partResponse.Resource;
+}
+```
+
+> [!IMPORTANT]
+> If there's a failure, the failed operation will have the status code of its corresponding error. All the other operations will have a 424 status code (failed dependency). For example, if an operation fails because it tries to create an item that already exists, it returns a 409 status code (`HttpStatusCode.Conflict`). The status code enables you to identify the cause of the transaction failure.
+
+### [Java](#tab/java)
+
+When creating a transactional batch operation, call [CosmosBatch.createCosmosBatch](/java/api/com.azure.cosmos.models.cosmosbatch.createcosmosbatch):
+
+```java
+PartitionKey partitionKey = new PartitionKey("road-bikes");
+
+CosmosBatch batch = CosmosBatch.createCosmosBatch(partitionKey);
+```
+
+Next, add multiple operations to the batch:
+
+```java
+Product bike = new Product();
+bike.setId("68719520766");
+bike.setCategory("road-bikes");
+bike.setName("Chropen Road Bike");
+
+batch.createItemOperation(bike);
+
+Part part = new Part();
+part.setId("68719519885");
+part.setCategory("road-bikes");
+part.setName("Tronosuros Tire");
+part.setProductId(bike.getId());
+
+batch.createItemOperation(part);
+```
+
+Finally, use a container instance to call [executeCosmosBatch](/java/api/com.azure.cosmos.cosmoscontainer.executecosmosbatch) with the batch:
+
+```java
+CosmosBatchResponse response = container.executeCosmosBatch(batch);
+```
+
+Once the response is received, examine if the response is successful. If the response indicates a success, extract the results:
+
+```java
+if (response.isSuccessStatusCode())
+{
+ List<CosmosBatchOperationResult> results = response.getResults();
+}
+```
+
+> [!IMPORTANT]
+> If there's a failure, the failed operation will have the status code of its corresponding error. All the other operations will have a 424 status code (failed dependency). For example, if an operation fails because it tries to create an item that already exists, it returns a 409 status code (`HttpStatusCode.Conflict`). The status code enables you to identify the cause of the transaction failure.
+++
+## How are transactional batch operations executed
+
+When the `ExecuteAsync` method is called, all operations in the `TransactionalBatch` object are grouped, serialized into a single payload, and sent as a single request to the Azure Cosmos DB service.
+
+The service receives the request and executes all operations within a transactional scope, and returns a response using the same serialization protocol. This response is either a success, or a failure, and supplies individual operation responses per operation.
+
+The SDK exposes the response for you to verify the result and, optionally, extract each of the internal operation results.
+
+## Limitations
+
+Currently, there are two known limits:
+
+* The Azure Cosmos DB request size limit constrains the size of the `TransactionalBatch` payload to not exceed 2 MB, and the maximum execution time is 5 seconds.
+* There's a current limit of 100 operations per `TransactionalBatch` to ensure the performance is as expected and within SLAs.
+
+## Next steps
+
+* Learn more about [what you can do with TransactionalBatch](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/TransactionalBatch)
+
+* Visit our [samples](samples-dotnet.md) for more ways to use our Azure Cosmos DB .NET SDK
cosmos-db Troubleshoot Bad Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-bad-request.md
+
+ Title: Troubleshoot Azure Cosmos DB bad request exceptions
+description: Learn how to diagnose and fix bad request exceptions such as input content or partition key is invalid, partition key doesn't match in Azure Cosmos DB.
+++ Last updated : 03/07/2022+++++
+# Diagnose and troubleshoot bad request exceptions in Azure Cosmos DB
+
+The HTTP status code 400 indicates that the request contains invalid data or is missing required parameters.
+
+## <a name="missing-id-property"></a>Missing the ID property
+In this scenario, it's common to see the error:
+
+*The input content is invalid because the required properties - 'id; ' - are missing*
+
+A response with this error means the JSON document that is being sent to the service is lacking the required ID property.
+
+### Solution
+Specify an `id` property with a string value, as per the [REST specification](/rest/api/cosmos-db/documents), as part of your document; the SDKs don't autogenerate values for this property.
+
+## <a name="invalid-partition-key-type"></a>Invalid partition key type
+In this scenario, it's common to see errors like:
+
+*Partition key ... is invalid.*
+
+A response with this error means the partition key value is of an invalid type.
+
+### Solution
+The value of the partition key should be a string or a number. Make sure the value is of the expected type.
+
+## <a name="wrong-partition-key-value"></a>Wrong partition key value
+In this scenario, it's common to see these errors:
+
+*Response status code does not indicate success: BadRequest (400); Substatus: 1001*
+
+*PartitionKey extracted from document doesn't match the one specified in the header*
+
+A response with this error means you're executing an operation and passing a partition key value that doesn't match the document's body value for the expected property. If the collection's partition key path is `/myPartitionKey`, the document has a property called `myPartitionKey` with a value that doesn't match what was provided as the partition key value when calling the SDK method.
+
+### Solution
+Send the partition key value parameter that matches the document property value.
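+
+For example, here's a sketch with the JavaScript SDK (`@azure/cosmos`); the database, container, and property names are illustrative, and the container's partition key path is assumed to be `/myPartitionKey`. The partition key value passed to the SDK must match the value stored in the document's partition key property.
+
+```javascript
+// Sketch: the partition key value passed to the SDK must match the value
+// stored in the document's partition key property.
+const { CosmosClient } = require("@azure/cosmos");
+
+async function readWithMatchingPartitionKey() {
+    const client = new CosmosClient({
+        endpoint: process.env.COSMOS_ENDPOINT,
+        key: process.env.COSMOS_KEY
+    });
+    const container = client.database("mydb").container("mycontainer");
+
+    const item = { id: "item-1", myPartitionKey: "tenant-42" };
+    await container.items.create(item);
+
+    // Correct: "tenant-42" matches the document's /myPartitionKey value.
+    const { resource } = await container.item("item-1", "tenant-42").read();
+    return resource;
+
+    // Incorrect: passing "tenant-7" would not match the document body, and
+    // operations such as replace would fail with a 400 (BadRequest).
+}
+
+readWithMatchingPartitionKey().catch(console.error);
+```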
+
+## Next steps
+* [Diagnose and troubleshoot](troubleshoot-dotnet-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
+* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3.md) and [.NET v2](performance-tips.md).
+* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4.md) issues when you use the Azure Cosmos DB Java v4 SDK.
+* Learn about performance guidelines for [Java v4 SDK](performance-tips-java-sdk-v4.md).
cosmos-db Troubleshoot Changefeed Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-changefeed-functions.md
+
+ Title: Troubleshoot issues when using Azure Functions trigger for Azure Cosmos DB
+description: Common issues, workarounds, and diagnostic steps, when using the Azure Functions trigger for Azure Cosmos DB
++++ Last updated : 04/14/2022+++++
+# Diagnose and troubleshoot issues when using Azure Functions trigger for Azure Cosmos DB
+
+This article covers common issues, workarounds, and diagnostic steps, when you use the [Azure Functions trigger for Azure Cosmos DB](change-feed-functions.md).
+
+## Dependencies
+
+The Azure Functions trigger and bindings for Azure Cosmos DB depend on the extension packages over the base Azure Functions runtime. Always keep these packages updated, because they might include fixes and new features that address potential issues you may encounter:
+
+* For Azure Functions V2, see [Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB).
+* For Azure Functions V1, see [Microsoft.Azure.WebJobs.Extensions.DocumentDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DocumentDB).
+
+This article will always refer to Azure Functions V2 whenever the runtime is mentioned, unless explicitly specified.
+
+## Consume the Azure Cosmos DB SDK independently
+
+The key functionality of the extension package is to provide support for the Azure Functions trigger and bindings for Azure Cosmos DB. It also includes the [Azure Cosmos DB .NET SDK](sdk-dotnet-core-v2.md), which is helpful if you want to interact with Azure Cosmos DB programmatically without using the trigger and bindings.
+
+If you want to use the Azure Cosmos DB SDK, make sure that you don't add another NuGet package reference to your project. Instead, **let the SDK reference resolve through the Azure Functions extension package**, and consume the Azure Cosmos DB SDK separately from the trigger and bindings.
+
+Additionally, if you are manually creating your own instance of the [Azure Cosmos DB SDK client](./sdk-dotnet-core-v2.md), you should follow the pattern of having only one instance of the client [using a Singleton pattern approach](../../azure-functions/manage-connections.md?tabs=csharp#azure-cosmos-db-clients). This process avoids the potential socket issues in your operations.
+
+## Common scenarios and workarounds
+
+### Azure Function fails with error message collection doesn't exist
+
+Azure Function fails with error message "Either the source collection 'collection-name' (in database 'database-name') or the lease collection 'collection2-name' (in database 'database2-name') does not exist. Both collections must exist before the listener starts. To automatically create the lease collection, set 'CreateLeaseCollectionIfNotExists' to 'true'"
+
+This means that either one or both of the Azure Cosmos DB containers required for the trigger to work don't exist or aren't reachable by the Azure Function. **The error itself tells you which Azure Cosmos DB database and container the trigger is looking for**, based on your configuration.
+
+1. Verify the `ConnectionStringSetting` attribute and that it **references a setting that exists in your Azure Function App**. The value on this attribute shouldn't be the Connection String itself, but the name of the Configuration Setting.
+2. Verify that the `databaseName` and `collectionName` exist in your Azure Cosmos DB account. If you are using automatic value replacement (using `%settingName%` patterns), make sure the name of the setting exists in your Azure Function App.
+3. If you don't specify a `LeaseCollectionName/leaseCollectionName`, the default is "leases". Verify that such a container exists. Optionally you can set the `CreateLeaseCollectionIfNotExists` attribute in your Trigger to `true` to automatically create it.
+4. Verify your [Azure Cosmos DB account's Firewall configuration](../how-to-configure-firewall.md) to see that it's not blocking the Azure Function.
+
+### Azure Function fails to start with "Shared throughput collection should have a partition key"
+
+The previous versions of the Azure Cosmos DB Extension did not support using a leases container that was created within a [shared throughput database](../set-throughput.md#set-throughput-on-a-database). To resolve this issue, update the [Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB) extension to get the latest version.
+
+### Azure Function fails to start with "PartitionKey must be supplied for this operation."
+
+This error means that you are currently using a partitioned lease collection with an old [extension dependency](#dependencies). Upgrade to the latest available version. If you are currently running on Azure Functions V1, you will need to upgrade to Azure Functions V2.
+
+### Azure Function fails to start with "Forbidden (403); Substatus: 5300... The given request [POST ...] cannot be authorized by AAD token in data plane"
+
+This error means your Function is attempting to [perform a non-data operation using Azure AD identities](troubleshoot-forbidden.md#non-data-operations-are-not-allowed). You cannot use `CreateLeaseContainerIfNotExists = true` when using Azure AD identities.
+
+### Azure Function fails to start with "The lease collection, if partitioned, must have partition key equal to id."
+
+This error means that your current leases container is partitioned, but the partition key path is not `/id`. To resolve this issue, you need to recreate the leases container with `/id` as the partition key.
+
+### You see a "Value cannot be null. Parameter name: o" in your Azure Functions logs when you try to Run the Trigger
+
+This issue appears if you're using the Azure portal and you try to select the **Run** button on the screen when inspecting an Azure Function that uses the trigger. The trigger doesn't require you to select **Run** to start; it starts automatically when the Azure Function is deployed. If you want to check the Azure Function's log stream on the Azure portal, go to your monitored container and insert some new items; you'll automatically see the trigger executing.
+
+### My changes take too long to be received
+
+This scenario can have multiple causes and all of them should be checked:
+
+1. Is your Azure Function deployed in the same region as your Azure Cosmos DB account? For optimal network latency, both the Azure Function and your Azure Cosmos DB account should be colocated in the same Azure region.
+2. Are the changes happening in your Azure Cosmos DB container continuous or sporadic?
+If it's the latter, there could be some delay between the changes being stored and the Azure Function picking them up. This is because internally, when the trigger checks for changes in your Azure Cosmos DB container and finds none pending to be read, it will sleep for a configurable amount of time (5 seconds, by default) before checking for new changes (to avoid high RU consumption). You can configure this sleep time through the `FeedPollDelay/feedPollDelay` setting in the [configuration](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) of your trigger (the value is expected to be in milliseconds).
+3. Your Azure Cosmos DB container might be [rate-limited](../request-units.md).
+4. You can use the `PreferredLocations` attribute in your trigger to specify a comma-separated list of Azure regions to define a custom preferred connection order.
+5. The speed at which your Trigger receives new changes is dictated by the speed at which you are processing them. Verify the Function's [Execution Time / Duration](../../azure-functions/analyze-telemetry-data.md). If your Function is slow, that will increase the time it takes for your Trigger to get new changes. If you see a recent increase in Duration, there could be a recent code change that might affect it. If the speed at which you are receiving operations on your Azure Cosmos DB container is faster than the speed of your Trigger, you will keep lagging behind. You might want to investigate which operation in the Function's code is the most time consuming and how to optimize it.
+
+### Some changes are repeated in my Trigger
+
+The concept of a "change" is an operation on a document. The most common scenarios where events for the same document are received are:
+* The account is using Eventual consistency. While consuming the change feed in an Eventual consistency level, there could be duplicate events in-between subsequent change feed read operations (the last event of one read operation appears as the first of the next).
+* The document is being updated. The Change Feed can contain multiple operations for the same documents, if that document is receiving updates, it can pick up multiple events (one for each update). One easy way to distinguish among different operations for the same document is to track the `_lsn` [property for each change](../change-feed.md#change-feed-and-_etag-_lsn-or-_ts). If they don't match, these are different changes over the same document.
+* If you are identifying documents just by `id`, remember that the unique identifier for a document is the `id` and its partition key (there can be two documents with the same `id` but different partition key).
+
+### Some changes are missing in my Trigger
+
+If you find that some of the changes that happened in your Azure Cosmos DB container are not being picked up by the Azure Function, or some changes are missing in the destination when you are copying them, follow the steps below.
+
+When your Azure Function receives the changes, it often processes them, and could optionally, send the result to another destination. When you are investigating missing changes, make sure you **measure which changes are being received at the ingestion point** (when the Azure Function starts), not on the destination.
+
+If some changes are missing on the destination, this could mean that some error is happening during the Azure Function execution after the changes were received.
+
+In this scenario, the best course of action is to add `try/catch` blocks in your code and inside the loops that might be processing the changes, to detect any failure for a particular subset of items and handle them accordingly (send them to another storage for further analysis or retry).
+
+> [!NOTE]
+> The Azure Functions trigger for Azure Cosmos DB, by default, won't retry a batch of changes if there was an unhandled exception during your code execution. This means that if the changes did not arrive at the destination, it's because you are failing to process them.
+
+If the destination is another Azure Cosmos DB container and you are performing Upsert operations to copy the items, **verify that the Partition Key Definition on both the monitored and destination container are the same**. Upsert operations could be saving multiple source items as one in the destination because of this configuration difference.
+
+If you find that some changes were not received at all by your trigger, the most common scenario is that there is **another Azure Function running**. It could be another Azure Function deployed in Azure or an Azure Function running locally on a developer's machine that has **exactly the same configuration** (same monitored and lease containers), and this Azure Function is stealing a subset of the changes you would expect your Azure Function to process.
+
+Additionally, you can validate this scenario if you know how many Azure Function App instances you have running. If you inspect your leases container and count the number of lease items within, the number of distinct values of the `Owner` property in them should be equal to the number of instances of your Function App. If there are more owners than the known Azure Function App instances, it means that these extra owners are the ones "stealing" the changes.
+
+One easy way to work around this situation is to apply a `LeaseCollectionPrefix/leaseCollectionPrefix` to your Function with a new/different value or, alternatively, test with a new leases container.
+
+### Need to restart and reprocess all the items in my container from the beginning
+To reprocess all the items in a container from the beginning:
+1. Stop your Azure function if it is currently running.
+1. Delete the documents in the lease collection (or delete and re-create the lease collection so it is empty)
+1. Set the [StartFromBeginning](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) CosmosDBTrigger attribute in your function to true.
+1. Restart the Azure function. It will now read and process all changes from the beginning.
+
+Setting [StartFromBeginning](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) to true will tell the Azure function to start reading changes from the beginning of the history of the collection instead of the current time. This only works when there are no already created leases (that is, documents in the leases collection). Setting this property to true when there are leases already created has no effect; in this scenario, when a function is stopped and restarted, it will begin reading from the last checkpoint, as defined in the leases collection. To reprocess from the beginning, follow the above steps 1-4.
+
+### Binding can only be done with IReadOnlyList\<Document> or JArray
+
+This error happens if your Azure Functions project (or any referenced project) contains a manual NuGet reference to the Azure Cosmos DB SDK with a different version than the one provided by the [Azure Functions Azure Cosmos DB Extension](./troubleshoot-changefeed-functions.md#dependencies).
+
+To work around this situation, remove the manual NuGet reference that was added and let the Azure Cosmos DB SDK reference resolve through the Azure Functions Azure Cosmos DB Extension package.
+
+### Changing Azure Function's polling interval for the detecting changes
+
+As explained earlier for [My changes take too long to be received](#my-changes-take-too-long-to-be-received), Azure function will sleep for a configurable amount of time (5 seconds, by default) before checking for new changes (to avoid high RU consumption). You can configure this sleep time through the `FeedPollDelay/feedPollDelay` setting in the [configuration](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) of your trigger (the value is expected to be in milliseconds).
+
+## Next steps
+
+* [Enable monitoring for your Azure Functions](../../azure-functions/functions-monitoring.md)
+* [Azure Cosmos DB .NET SDK Troubleshooting](./troubleshoot-dotnet-sdk.md)
cosmos-db Troubleshoot Dotnet Sdk Request Header Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-dotnet-sdk-request-header-too-large.md
+
+ Title: Troubleshoot a "Request header too large" message or 400 bad request in Azure Cosmos DB
+description: Learn how to diagnose and fix the request header too large exception.
+++ Last updated : 09/29/2021++++++
+# Diagnose and troubleshoot Azure Cosmos DB "Request header too large" message
+
+The "Request header too large" message is thrown with an HTTP error code 400. This error occurs if the size of the request header has grown so large that it exceeds the maximum-allowed size. We recommend that you use the latest version of the SDK. Use at least version 3.x or 2.x, because these versions add header size tracing to the exception message.
+
+## Troubleshooting steps
+The "Request header too large" message occurs if the session or the continuation token is too large. The following sections describe the cause of the issue and its solution in each category.
+
+### Session token is too large
+
+#### Cause:
+A 400 bad request most likely occurs because the session token is too large. If the following statements are true, the session token is too large:
+
+* The error occurs on point operations like create, read, and update where there isn't a continuation token.
+* The exception started without making any changes to the application. The session token grows as the number of partitions increases in the container. The number of partitions increases as the amount of data increases or if the throughput is increased.
+
+#### Temporary mitigation:
+Restart your client application to reset all the session tokens. Eventually, the session token will grow back to the previous size that caused the issue. To avoid this issue completely, use the solution in the next section.
+
+#### Solution:
+> [!IMPORTANT]
+> Upgrade to at least .NET [v3.20.1](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/changelog.md) or [v2.16.1](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md). These minor versions contain optimizations to reduce the session token size to prevent the header from growing and hitting the size limit.
+1. Follow the guidance in the [.NET v3](performance-tips-dotnet-sdk-v3.md) or [.NET v2](performance-tips.md) performance tips articles. Convert the application to use the direct connection mode with the Transmission Control Protocol (TCP). The direct connection mode with the TCP protocol doesn't have the header size restriction like the HTTP protocol, so it avoids this issue. Make sure to use the latest version of the SDK, which has a fix for query operations when the service interop isn't available.
+1. If the direct connection mode with the TCP protocol isn't an option for your workload, mitigate it by changing the [client consistency level](how-to-manage-consistency.md). The session token is only used for session consistency, which is the default consistency level for Azure Cosmos DB. Other consistency levels don't use the session token.
+
+### Continuation token is too large
+
+#### Cause:
+The 400 bad request occurs on query operations where the continuation token is used if the continuation token has grown too large or if different queries have different continuation token sizes.
+
+#### Solution:
+1. Follow the guidance in the [.NET v3](performance-tips-dotnet-sdk-v3.md) or [.NET v2](performance-tips.md) performance tips articles. Convert the application to use the direct connection mode with the TCP protocol. The direct connection mode with the TCP protocol doesn't have the header size restriction like the HTTP protocol, so it avoids this issue.
+1. If the direct connection mode with the TCP protocol isn't an option for your workload, set the `ResponseContinuationTokenLimitInKb` option. You can find this option in `FeedOptions` in v2 or `QueryRequestOptions` in v3.
+
+## Next steps
+* [Diagnose and troubleshoot](troubleshoot-dotnet-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
+* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3.md) and [.NET v2](performance-tips.md).
cosmos-db Troubleshoot Dotnet Sdk Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-dotnet-sdk-request-timeout.md
+
+ Title: Troubleshoot Azure Cosmos DB HTTP 408 or request timeout issues with the .NET SDK
+description: Learn how to diagnose and fix .NET SDK request timeout exceptions.
+++ Last updated : 09/16/2022++++++
+# Diagnose and troubleshoot Azure Cosmos DB .NET SDK request timeout exceptions
+
+The HTTP 408 error occurs if the SDK couldn't complete the request before the timeout elapsed.
+
+Make sure your application design follows our [guide for designing resilient applications with Azure Cosmos DB SDKs](conceptual-resilient-sdk-applications.md) so that it correctly reacts to different network conditions. Your application should have retries in place for timeout errors, because they're normally expected in a distributed system.
+
+When evaluating the case for timeout errors:
+
+* What is the impact, measured as the volume of affected operations compared to the operations that succeed? Is it within the service SLAs?
+* Is the P99 latency / availability affected?
+* Are the failures affecting all your application instances or only a subset? When the issue is reduced to a subset of instances, it's commonly a problem related to those instances.
+
+## Customize the timeout on the Azure Cosmos DB .NET SDK
+
+The SDK has two distinct alternatives to control timeouts, each with a different scope.
+
+### RequestTimeout
+
+The `CosmosClientOptions.RequestTimeout` (or `ConnectionPolicy.RequestTimeout` for SDK v2) configuration allows you to set a timeout that affects each individual network request. An operation started by a user can span multiple network requests (for example, there could be throttling). This configuration would apply for each network request on the retry. This timeout isn't an end-to-end operation request timeout.
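+
+For example, the following minimal v3 sketch sets a 10-second per-request timeout; the endpoint, key, and timeout value are placeholders:
+
+```csharp
+using System;
+using Microsoft.Azure.Cosmos;
+
+// RequestTimeout bounds each network request, not the end-to-end operation.
+CosmosClient client = new CosmosClient(
+    "<account-endpoint>",   // placeholder
+    "<account-key>",        // placeholder
+    new CosmosClientOptions
+    {
+        RequestTimeout = TimeSpan.FromSeconds(10)
+    });
+```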
+
+### CancellationToken
+
+All the async operations in the SDK have an optional CancellationToken parameter. This [CancellationToken](/dotnet/standard/threading/how-to-listen-for-cancellation-requests-by-polling) parameter is used throughout the entire operation, across all network requests. In between network requests, the cancellation token might be checked and an operation canceled if the related token is expired. The cancellation token should be used to define an approximate expected timeout on the operation scope.
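+
+The following is a minimal v3 sketch of passing an operation-scoped `CancellationToken`; the 30-second budget, item ID, and partition key value are illustrative assumptions:
+
+```csharp
+using System;
+using System.Threading;
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+
+public static class OperationTimeoutExample
+{
+    public static async Task ReadWithOperationScopedTimeoutAsync(Container container)
+    {
+        // Approximate end-to-end budget for the whole operation, across retries.
+        using CancellationTokenSource cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
+
+        try
+        {
+            ItemResponse<dynamic> response = await container.ReadItemAsync<dynamic>(
+                id: "item-id",                                   // placeholder
+                partitionKey: new PartitionKey("partition-key"), // placeholder
+                cancellationToken: cts.Token);
+        }
+        catch (CosmosOperationCanceledException ex)
+        {
+            // The token expired before the SDK could complete the operation.
+            Console.WriteLine(ex.Diagnostics.ToString());
+        }
+    }
+}
+```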
+
+> [!NOTE]
+> The `CancellationToken` parameter is a mechanism where the library will check the cancellation when it [won't cause an invalid state](https://devblogs.microsoft.com/premier-developer/recommended-patterns-for-cancellationtoken/). The operation might not cancel exactly when the time defined in the cancellation is up. Instead, after the time is up, it cancels when it's safe to do so.
+
+## Troubleshooting steps
+
+The following list contains known causes and solutions for request timeout exceptions.
+
+### CosmosOperationCanceledException
+
+This type of exception is common when your application is passing [CancellationTokens](#cancellationtoken) to the SDK operations. The SDK checks the state of the `CancellationToken` in-between [retries](conceptual-resilient-sdk-applications.md#should-my-application-retry-on-errors) and if the `CancellationToken` is canceled, it will abort the current operation with this exception.
+
+The exception's `Message` / `ToString()` will also indicate the state of your `CancellationToken` through `Cancellation Token has expired: true` and it will also contain [Diagnostics](troubleshoot-dotnet-sdk.md#capture-diagnostics) that contain the context of the cancellation for the involved requests.
+
+These exceptions are safe to retry on and can be treated as [timeouts](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503) from the retrying perspective.
+
+#### Solution
+
+Verify the configured time in your `CancellationToken`, make sure that it's greater than your [RequestTimeout](#requesttimeout) and the [CosmosClientOptions.OpenTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.opentcpconnectiontimeout) (if you're using [Direct mode](sdk-connection-modes.md)).
+If the available time in the `CancellationToken` is less than the configured timeouts, and the SDK is facing [transient connectivity issues](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503), the SDK won't be able to retry and will throw `CosmosOperationCanceledException`.
+
+### High CPU utilization
+
+High CPU utilization is the most common case. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries, where a single query might open multiple connections.
+
+# [3.21 and 2.16 or greater SDK](#tab/cpu-new)
+
+The timeouts will contain *Diagnostics*, which contain:
+
+```json
+"systemHistory": [
+{
+"dateUtc": "2021-11-17T23:38:28.3115496Z",
+"cpu": 16.731,
+"memory": 9024120.000,
+"threadInfo": {
+"isThreadStarving": "False",
+....
+}
+
+},
+{
+"dateUtc": "2021-11-17T23:38:28.3115496Z",
+"cpu": 16.731,
+"memory": 9024120.000,
+"threadInfo": {
+"isThreadStarving": "False",
+....
+}
+
+},
+...
+]
+```
+
+* If the `cpu` values are over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
+* If the `threadInfo/isThreadStarving` nodes have `True` values, the cause is thread starvation. In this case, the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.
+* If the `dateUtc` time between measurements isn't approximately 10 seconds, it also indicates contention on the thread pool. CPU is measured as an independent task that is enqueued in the thread pool every 10 seconds. If the time between measurements is longer, the async tasks aren't able to be processed in a timely fashion. The most common scenario is when the application code is [blocking calls over async code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait).
+
+# [Older SDK](#tab/cpu-old)
+
+If the error contains `TransportException` information, it might contain also `CPU History`:
+
+```
+CPU history:
+(2020-08-28T00:40:09.1769900Z 0.114),
+(2020-08-28T00:40:19.1763818Z 1.732),
+(2020-08-28T00:40:29.1759235Z 0.000),
+(2020-08-28T00:40:39.1763208Z 0.063),
+(2020-08-28T00:40:49.1767057Z 0.648),
+(2020-08-28T00:40:59.1689401Z 0.137),
+CPU count: 8)
+```
+
+* If the CPU measurements are over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
+* If the CPU measurements aren't happening every 10 seconds (for example, there are gaps, or the measurements indicate longer intervals between them), the cause is thread starvation. In this case, the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.
+++
+#### Solution
+
+The client application that uses the SDK should be scaled up or out.
+
+### Socket or port availability might be low
+
+When running in Azure, clients using the .NET SDK can hit Azure SNAT (PAT) port exhaustion.
+
+#### Solution 1
+
+If you're running on Azure VMs, follow the [SNAT port exhaustion guide](troubleshoot-dotnet-sdk.md#snat).
+
+#### Solution 2
+
+If you're running on Azure App Service, follow the [connection errors troubleshooting guide](../../app-service/troubleshoot-intermittent-outbound-connection-errors.md#cause) and [use App Service diagnostics](https://azure.github.io/AppService/2018/03/01/Deep-Dive-into-TCP-Connections-in-App-Service-Diagnostics.html).
+
+#### Solution 3
+
+If you're running on Azure Functions, verify you're following the [Azure Functions recommendation](../../azure-functions/manage-connections.md#static-clients) of maintaining singleton or static clients for all of the involved services (including Azure Cosmos DB). Check the [service limits](../../azure-functions/functions-scale.md#service-limits) based on the type and size of your Function App hosting.
+
+#### Solution 4
+
+If you use an HTTP proxy, make sure it can support the number of connections configured in the SDK `ConnectionPolicy`. Otherwise, you'll face connection issues.
+
+### Create multiple client instances
+
+Creating multiple client instances might lead to connection contention and timeout issues.
+
+#### Solution
+
+Follow the [performance tips](performance-tips-dotnet-sdk-v3.md#sdk-usage), and use a single CosmosClient instance across an entire process.
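+
+For example, a minimal sketch of a process-wide singleton; the endpoint and key are placeholders, and the holder class name is hypothetical:
+
+```csharp
+using Microsoft.Azure.Cosmos;
+
+public static class CosmosClientHolder
+{
+    // One CosmosClient per process. The client is thread-safe and manages
+    // connections and retries internally, so reuse it for all operations.
+    public static readonly CosmosClient Client = new CosmosClient(
+        "<account-endpoint>",   // placeholder
+        "<account-key>");       // placeholder
+}
+```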
+
+### Hot partition key
+
+Azure Cosmos DB distributes the overall provisioned throughput evenly across physical partitions. When there's a hot partition, one or more logical partition keys on a physical partition are consuming all the physical partition's Request Units per second (RU/s). At the same time, the RU/s on other physical partitions are going unused. As a symptom, the total RU/s consumed will be less than the overall provisioned RU/s at the database or container, but you'll still see throttling (429s) on the requests against the hot logical partition key. Use the [Normalized RU Consumption metric](../monitor-normalized-request-units.md) to see if the workload is encountering a hot partition.
+
+#### Solution
+
+Choose a good partition key that evenly distributes request volume and storage. Learn how to [change your partition key](https://devblogs.microsoft.com/cosmosdb/how-to-change-your-partition-key/).
+
+### High degree of concurrency
+
+The application is doing a high level of concurrency, which can lead to contention on the channel.
+
+#### Solution
+
+The client application that uses the SDK should be scaled up or out.
+
+### Large requests or responses
+
+Large requests or responses can lead to head-of-line blocking on the channel and exacerbate contention, even with a relatively low degree of concurrency.
+
+#### Solution
+The client application that uses the SDK should be scaled up or out.
+
+### Failure rate is within the Azure Cosmos DB SLA
+
+The application should be able to handle transient failures and retry when necessary. 408 exceptions aren't retried automatically, because on create paths it's impossible to know whether the service created the item. Sending the same item again for create causes a conflict exception, and a user application's business logic for handling conflicts can't distinguish an item that already existed from a conflict caused by a create retry.
+
+### Failure rate violates the Azure Cosmos DB SLA
+
+Contact [Azure Support](https://aka.ms/azure-support).
+
+## Next steps
+
+* [Diagnose and troubleshoot](troubleshoot-dotnet-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
+* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3.md) and [.NET v2](performance-tips.md).
cosmos-db Troubleshoot Dotnet Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-dotnet-sdk-slow-request.md
+
+ Title: Troubleshoot slow requests in Azure Cosmos DB .NET SDK
+description: Learn how to diagnose and fix slow requests when you use Azure Cosmos DB .NET SDK.
++++ Last updated : 08/30/2022+++++
+# Diagnose and troubleshoot slow requests in Azure Cosmos DB .NET SDK
++
+In Azure Cosmos DB, you might notice slow requests. Delays can happen for multiple reasons, such as request throttling or the way your application is designed. This article explains the different root causes for this problem.
+
+## Request rate too large
+
+Request throttling is the most common reason for slow requests. Azure Cosmos DB throttles requests if they exceed the allocated request units for the database or container. The SDK has built-in logic to retry these requests. The [request rate too large](troubleshoot-request-rate-too-large.md#how-to-investigate) troubleshooting article explains how to check if the requests are being throttled. The article also discusses how to scale your account to avoid these problems in the future.
+
+## Application design
+
+When you design your application, [follow the .NET SDK best practices](performance-tips-dotnet-sdk-v3.md) for the best performance. If your application doesn't follow the SDK best practices, you might get slow or failed requests.
+
+Consider the following when developing your application:
+
+* The application should be in the same region as your Azure Cosmos DB account.
+* Your [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion) or [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions) should reflect your regional preference and point to the region your application is deployed in (see the sketch after this list).
+* There might be a bottleneck on the Network interface because of high traffic. If the application is running on Azure Virtual Machines, there are possible workarounds:
+ * Consider using a [Virtual Machine with Accelerated Networking enabled](../../virtual-network/create-vm-accelerated-networking-powershell.md).
+ * Enable [Accelerated Networking on an existing Virtual Machine](../../virtual-network/create-vm-accelerated-networking-powershell.md#enable-accelerated-networking-on-existing-vms).
+ * Consider using a [higher end Virtual Machine](../../virtual-machines/sizes.md).
+* Prefer [direct connectivity mode](sdk-connection-modes.md).
+* Avoid high CPU. Make sure to look at the maximum CPU and not the average, which is the default for most logging systems. Anything above roughly 40 percent can increase the latency.
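+
+For example, a minimal v3 sketch that combines the preferred-regions and direct-mode recommendations from the list above; the endpoint, key, and region list are placeholders:
+
+```csharp
+using System.Collections.Generic;
+using Microsoft.Azure.Cosmos;
+
+CosmosClient client = new CosmosClient(
+    "<account-endpoint>",   // placeholder
+    "<account-key>",        // placeholder
+    new CosmosClientOptions
+    {
+        // List the regions your application is deployed in, in order of preference.
+        ApplicationPreferredRegions = new List<string> { Regions.WestUS2, Regions.EastUS },
+        // Direct connectivity mode is preferred for data operations.
+        ConnectionMode = ConnectionMode.Direct
+    });
+```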
+
+## Metadata operations
+
+If you need to verify that a database or container exists, don't do so by calling `Create...IfNotExistsAsync` or `Read...Async` before doing an item operation. Do this validation only on application startup, and only when it's necessary because you expect the database or container to be deleted. These metadata operations generate extra latency, have no service-level agreement (SLA), and have their own separate [limitations](./troubleshoot-request-rate-too-large.md#rate-limiting-on-metadata-requests). They don't scale like data operations.
+
+## Slow requests on bulk mode
+
+[Bulk mode](tutorial-dotnet-bulk-import.md) is a throughput-optimized mode meant for high data volume operations, not a latency-optimized mode; it's meant to saturate the available throughput. If you're experiencing slow requests when using bulk mode, make sure that (a minimal configuration sketch follows this list):
+
+* Your application is compiled in Release configuration.
+* You aren't measuring latency while debugging the application (no debuggers attached).
+* The volume of operations is high. Don't use bulk mode for fewer than 1,000 operations. Your provisioned throughput dictates how many operations per second you can process, and your goal with bulk mode is to utilize as much of it as possible.
+* You monitor the container for [throttling scenarios](troubleshoot-request-rate-too-large.md). If the container is heavily throttled, the volume of data is larger than your provisioned throughput, and you need to either scale up the container or reduce the volume of data (for example, create smaller batches of data at a time).
+* You're correctly using the `async/await` pattern to [process all concurrent Tasks](tutorial-dotnet-bulk-import.md#step-6-populate-a-list-of-concurrent-tasks) and not [blocking any async operation](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait).
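+
+A minimal sketch of the v3 bulk pattern follows. The `DeviceReading` item type, its `DeviceId` partition key, and the endpoint, key, database, and container names are illustrative assumptions:
+
+```csharp
+using System.Collections.Generic;
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+
+public class DeviceReading
+{
+    public string id { get; set; }
+    public string DeviceId { get; set; } // partition key in this sketch
+}
+
+public static class BulkImportExample
+{
+    public static async Task BulkInsertAsync(IReadOnlyList<DeviceReading> items)
+    {
+        // AllowBulkExecution enables the throughput-optimized bulk mode.
+        CosmosClient client = new CosmosClient(
+            "<account-endpoint>",   // placeholder
+            "<account-key>",        // placeholder
+            new CosmosClientOptions { AllowBulkExecution = true });
+
+        Container container = client.GetContainer("<database>", "<container>");
+
+        // Queue all operations as concurrent tasks and await them together,
+        // without blocking on any individual task.
+        List<Task> tasks = new List<Task>(items.Count);
+        foreach (DeviceReading item in items)
+        {
+            tasks.Add(container.CreateItemAsync(item, new PartitionKey(item.DeviceId)));
+        }
+
+        await Task.WhenAll(tasks);
+    }
+}
+```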
+
+## Capture diagnostics
++
+## Diagnostics in version 3.19 and later
+
+The JSON structure has breaking changes with each version of the SDK, which makes it unsafe to parse programmatically. The JSON represents a tree structure of the request as it goes through the SDK. The following sections cover a few key things to look at.
+
+### CPU history
+
+High CPU utilization is the most common cause for slow requests. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries, where a single query might open multiple connections.
+
+# [3.21 or later SDK](#tab/cpu-new)
+
+The timeouts include diagnostics, which contain the following, for example:
+
+```json
+"systemHistory": [
+{
+"dateUtc": "2021-11-17T23:38:28.3115496Z",
+"cpu": 16.731,
+"memory": 9024120.000,
+"threadInfo": {
+"isThreadStarving": "False",
+....
+}
+
+},
+{
+"dateUtc": "2021-11-17T23:38:38.3115496Z",
+"cpu": 16.731,
+"memory": 9024120.000,
+"threadInfo": {
+"isThreadStarving": "False",
+....
+}
+
+},
+...
+]
+```
+
+* If the `cpu` values are over 70 percent, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
+* If the `threadInfo/isThreadStarving` nodes have `True` values, the cause is thread starvation. In this case, the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.
+* If the `dateUtc` time between measurements isn't approximately 10 seconds, it also indicates contention on the thread pool. CPU is measured as an independent task that is enqueued in the thread pool every 10 seconds. If the time between measurements is longer, it indicates that the async tasks aren't able to be processed in a timely fashion. The most common scenario is when your application code is [blocking calls over async code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait).
+
+# [Older SDK](#tab/cpu-old)
+
+If the error contains `TransportException` information, it might also contain `CPU history`:
+
+```
+CPU history:
+(2020-08-28T00:40:09.1769900Z 0.114),
+(2020-08-28T00:40:19.1763818Z 1.732),
+(2020-08-28T00:40:29.1759235Z 0.000),
+(2020-08-28T00:40:39.1763208Z 0.063),
+(2020-08-28T00:40:49.1767057Z 0.648),
+(2020-08-28T00:40:59.1689401Z 0.137),
+CPU count: 8)
+```
+
+* If the CPU measurements are over 70 percent, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
+* If the CPU measurements aren't happening every 10 seconds (for example, there are gaps, or measurement times indicate longer times in between measurements), the cause is thread starvation. In this case the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.
+++
+#### Solution
+
+The client application that uses the SDK should be scaled up or out.
+
+### HttpResponseStats
+
+`HttpResponseStats` are requests that go to the [gateway](sdk-connection-modes.md). Even in direct mode, the SDK gets all the metadata information from the gateway.
+
+If the request is slow, first verify that the previous suggestions don't resolve the problem. If it's still slow, different patterns point to different problems. The following table provides more details.
+
+| Number of requests | Scenario | Description |
+|-|-|-|
+| Single to all | Request timeout or `HttpRequestExceptions` | Points to [SNAT port exhaustion](troubleshoot-dotnet-sdk.md#snat), or a lack of resources on the machine to process the request in time. |
+| Single or small percentage (SLA isn't violated) | All | A single or small percentage of slow requests can be caused by several different transient problems, and should be expected. |
+| All | All | Points to a problem with the infrastructure or networking. |
+| SLA violated | No changes to application, and SLA dropped. | Points to a problem with the Azure Cosmos DB service. |
+
+```json
+"HttpResponseStats": [
+ {
+ "StartTimeUTC": "2021-06-15T13:53:09.7961124Z",
+ "EndTimeUTC": "2021-06-15T13:53:09.7961127Z",
+ "RequestUri": "https://127.0.0.1:8081/dbs/347a8e44-a550-493e-88ee-29a19c070ecc/colls/4f72e752-fa91-455a-82c1-bf253a5a3c4e",
+ "ResourceType": "Collection",
+ "HttpMethod": "GET",
+ "ActivityId": "e16e98ec-f2e3-430c-b9e9-7d99e58a4f72",
+ "StatusCode": "OK"
+ }
+]
+```
+
+### StoreResult
+
+`StoreResult` represents a single request to Azure Cosmos DB, by using direct mode with the TCP protocol.
+
+If it's still slow, different patterns point to different problems. The following table provides more details.
+
+| Number of requests | Scenario | Description |
+|-|-|-|
+| Single to all | `StoreResult` contains `TransportException` | Points to [SNAT port exhaustion](troubleshoot-dotnet-sdk.md#snat), or a lack of resources on the machine to process the request in time. |
+| Single or small percentage (SLA isn't violated) | All | A single or small percentage of slow requests can be caused by several different transient problems, and should be expected. |
+| All | All | A problem with the infrastructure or networking. |
+| SLA violated | Requests contain multiple failure error codes, like `410` and `IsValid is true`. | Points to a problem with the Azure Cosmos DB service. |
+| SLA violated | Requests contain multiple failure error codes, like `410` and `IsValid is false`. | Points to a problem with the machine. |
+| SLA violated | `StorePhysicalAddress` are the same, with no failure status code. | Likely a problem with Azure Cosmos DB. |
+| SLA violated | `StorePhysicalAddress` have the same partition ID, but different replica IDs, with no failure status code. | Likely a problem with Azure Cosmos DB. |
+| SLA violated | `StorePhysicalAddress` is random, with no failure status code. | Points to a problem with the machine. |
+
+For multiple store results for a single request, be aware of the following:
+
+* Strong consistency and bounded staleness consistency always have at least two store results.
+* Check the status code of each `StoreResult`. The SDK retries automatically on multiple different [transient failures](troubleshoot-dotnet-sdk-request-timeout.md). The SDK is constantly improved to cover more scenarios.
+
+### RntbdRequestStats
+
+`RntbdRequestStats` shows the time for the different stages of sending and receiving a request in the transport layer.
+
+* `ChannelAcquisitionStarted`: The time to get or create a new connection. New connections can be created for numerous reasons: for example, an existing connection was unexpectedly closed, or too many requests were being sent through the existing connections.
+* `Pipelined`: A large time might be caused by a large request.
+* `Transit Time`: A large time points to a networking problem. Compare this number to `BELatencyInMs`. If `BELatencyInMs` is small, the time was spent on the network and not on the Azure Cosmos DB service.
+* `Received`: A large time might be caused by a thread starvation problem. This is the time between having the response and returning the result.
+
+### ServiceEndpointStatistics
+
+Information about a particular backend server. The SDK can open multiple connections to a single backend server, depending on the number of pending requests and `MaxConcurrentRequestsPerConnection`.
+
+* `inflightRequests`: The number of pending requests to a backend server (possibly from different partitions). A high number can lead to more traffic and higher latencies.
+* `openConnections`: The total number of connections open to a single backend server. A very high number can indicate SNAT port exhaustion.
+
+### ConnectionStatistics
+
+Information about the particular connection (new or old) that the request gets assigned to.
+
+* `waitforConnectionInit`: Indicates whether the current request waited for a new connection initialization to complete. Waiting leads to higher latencies.
+* `callsPendingReceive`: The number of calls that were pending receive before this call was sent. A high number shows that many calls were ahead of this one, which can lead to higher latencies. It points to a head-of-line blocking issue, possibly caused by another request (such as a query or feed operation) that's taking a long time to process. Try lowering `CosmosClientOptions.MaxRequestsPerTcpConnection` to increase the number of channels.
+* `LastSentTime`: The time of the last request that was sent to this server. Together with `LastReceivedTime`, it can be used to spot connectivity or endpoint issues. For example, if there are many receive timeouts, the sent time will be much larger than the receive time.
+* `lastReceive`: The time of the last request that was received from this server.
+* `lastSendAttempt`: The time of the last send attempt.
+
+### Request and response sizes
+
+* `requestSizeInBytes`: The total size of the request sent to Azure Cosmos DB.
+* `responseMetadataSizeInBytes`: The size of headers returned from Azure Cosmos DB.
+* `responseBodySizeInBytes`: The size of content returned from Azure Cosmos DB.
+
+```json
+"StoreResult": {
+ "ActivityId": "bab6ade1-b8de-407f-b89d-fa2138a91284",
+ "StatusCode": "Ok",
+ "SubStatusCode": "Unknown",
+ "LSN": 453362,
+ "PartitionKeyRangeId": "1",
+ "GlobalCommittedLSN": 0,
+ "ItemLSN": 453358,
+ "UsingLocalLSN": true,
+ "QuorumAckedLSN": -1,
+ "SessionToken": "-1#453362",
+ "CurrentWriteQuorum": -1,
+ "CurrentReplicaSetSize": -1,
+ "NumberOfReadRegions": 0,
+ "IsValid": true,
+ "StorePhysicalAddress": "rntbd://127.0.0.1:10253/apps/DocDbApp/services/DocDbServer92/partitions/a4cb49a8-38c8-11e6-8106-8cdcd42c33be/replicas/1s/",
+ "RequestCharge": 1,
+ "RetryAfterInMs": null,
+ "BELatencyInMs": "0.304",
+ "transportRequestTimeline": {
+ "requestTimeline": [
+ {
+ "event": "Created",
+ "startTimeUtc": "2022-05-25T12:03:36.3081190Z",
+ "durationInMs": 0.0024
+ },
+ {
+ "event": "ChannelAcquisitionStarted",
+ "startTimeUtc": "2022-05-25T12:03:36.3081214Z",
+ "durationInMs": 0.0132
+ },
+ {
+ "event": "Pipelined",
+ "startTimeUtc": "2022-05-25T12:03:36.3081346Z",
+ "durationInMs": 0.0865
+ },
+ {
+ "event": "Transit Time",
+ "startTimeUtc": "2022-05-25T12:03:36.3082211Z",
+ "durationInMs": 1.3324
+ },
+ {
+ "event": "Received",
+ "startTimeUtc": "2022-05-25T12:03:36.3095535Z",
+ "durationInMs": 12.6128
+ },
+ {
+ "event": "Completed",
+ "startTimeUtc": "2022-05-25T12:03:36.8621663Z",
+ "durationInMs": 0
+ }
+ ],
+ "serviceEndpointStats": {
+ "inflightRequests": 1,
+ "openConnections": 1
+ },
+ "connectionStats": {
+ "waitforConnectionInit": "False",
+ "callsPendingReceive": 0,
+ "lastSendAttempt": "2022-05-25T12:03:34.0222760Z",
+ "lastSend": "2022-05-25T12:03:34.0223280Z",
+ "lastReceive": "2022-05-25T12:03:34.0257728Z"
+ },
+ "requestSizeInBytes": 447,
+ "responseMetadataSizeInBytes": 438,
+ "responseBodySizeInBytes": 604
+ },
+ "TransportException": null
+}
+```
+
+### Failure rate violates the Azure Cosmos DB SLA
+
+Contact [Azure support](https://aka.ms/azure-support).
+
+## Next steps
+
+* [Diagnose and troubleshoot](troubleshoot-dotnet-sdk.md) problems when you use the Azure Cosmos DB .NET SDK.
+* Learn about performance guidelines for the [.NET SDK](performance-tips-dotnet-sdk-v3.md).
+* Learn about the best practices for the [.NET SDK](best-practice-dotnet.md)
cosmos-db Troubleshoot Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-dotnet-sdk.md
+
+ Title: Diagnose and troubleshoot issues when using Azure Cosmos DB .NET SDK
+description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues when using .NET SDK.
++ Last updated : 09/01/2022++++++
+# Diagnose and troubleshoot issues when using Azure Cosmos DB .NET SDK
+
+> [!div class="op_single_selector"]
+> * [Java SDK v4](troubleshoot-java-sdk-v4.md)
+> * [Async Java SDK v2](troubleshoot-java-async-sdk.md)
+> * [.NET](troubleshoot-dotnet-sdk.md)
+>
+
+This article covers common issues, workarounds, diagnostic steps, and tools when you use the [.NET SDK](sdk-dotnet-v2.md) with Azure Cosmos DB for NoSQL accounts.
+The .NET SDK provides a client-side logical representation to access Azure Cosmos DB for NoSQL. This article describes tools and approaches to help you if you run into any issues.
+
+## Checklist for troubleshooting issues
+
+Consider the following checklist before you move your application to production. Using the checklist will prevent several common issues you might see. You can also quickly diagnose when an issue occurs:
+
+* Use the latest [SDK](sdk-dotnet-v3.md). Preview SDKs shouldn't be used for production. This will prevent hitting known issues that are already fixed.
+* Review the [performance tips](performance-tips-dotnet-sdk-v3.md), and follow the suggested practices. This will help prevent scaling, latency, and other performance issues.
+* Enable the SDK logging to help you troubleshoot an issue. Enabling the logging may affect performance so it's best to enable it only when troubleshooting issues. You can enable the following logs:
+ * [Log metrics](../monitor.md) by using the Azure portal. Portal metrics show the Azure Cosmos DB telemetry, which is helpful to determine if the issue corresponds to Azure Cosmos DB or if it's from the client side.
+ * Log the [diagnostics string](#capture-diagnostics) from the operations and/or exceptions, as in the sketch after this list.
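+
+For example, a minimal sketch that logs diagnostics only for slow point reads, using the diagnostics elapsed-time helper available in recent v3 releases; the `container` parameter, item ID, partition key, and the 500-millisecond threshold are illustrative assumptions:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+
+public static class DiagnosticsLoggingExample
+{
+    public static async Task ReadAndLogIfSlowAsync(Container container)
+    {
+        ItemResponse<dynamic> response = await container.ReadItemAsync<dynamic>(
+            "item-id", new PartitionKey("partition-key")); // placeholder values
+
+        // Log the diagnostics string only when the client-side elapsed time is high.
+        if (response.Diagnostics.GetClientElapsedTime() > TimeSpan.FromMilliseconds(500))
+        {
+            Console.WriteLine(response.Diagnostics.ToString());
+        }
+    }
+}
+```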
+
+Take a look at the [Common issues and workarounds](#common-issues-and-workarounds) section in this article.
+
+Check the [GitHub issues section](https://github.com/Azure/azure-cosmos-dotnet-v3/issues), which is actively monitored, to see whether a similar issue with a workaround is already filed. If you didn't find a solution, file a GitHub issue. You can open a support ticket for urgent issues.
+
+## Capture diagnostics
++
+## Common issues and workarounds
+
+### General suggestions
+
+* Follow any `aka.ms` link included in the exception details.
+* Run your app in the same Azure region as your Azure Cosmos DB account, whenever possible.
+* You may run into connectivity/availability issues due to lack of resources on your client machine. We recommend monitoring your CPU utilization on nodes running the Azure Cosmos DB client, and scaling up/out if they're running at high load.
+
+### Check the portal metrics
+
+Checking the [portal metrics](../monitor.md) will help determine if it's a client-side issue or if there's an issue with the service. For example, if the metrics show a high rate of rate-limited requests (HTTP status code 429), which means the requests are getting throttled, check the [Request rate too large](troubleshoot-request-rate-too-large.md) section.
+
+### Retry design
+
+See our guide to [designing resilient applications with Azure Cosmos DB SDKs](conceptual-resilient-sdk-applications.md) to learn how to design resilient applications and understand the retry semantics of the SDK.
+
+### SNAT
+
+If your app is deployed on [Azure Virtual Machines without a public IP address](../../load-balancer/load-balancer-outbound-connections.md), by default [Azure SNAT ports](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports) establish connections to any endpoint outside of your VM. The number of connections allowed from the VM to the Azure Cosmos DB endpoint is limited by the [Azure SNAT configuration](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports). This situation can lead to connection throttling, connection closure, or the previously mentioned [request timeouts](troubleshoot-dotnet-sdk-request-timeout.md).
+
+Azure SNAT ports are used only when your VM has a private IP address and is connecting to a public IP address. There are two workarounds to avoid the Azure SNAT limitation (provided you're already using a single client instance across the entire application):
+
+* Add your Azure Cosmos DB service endpoint to the subnet of your Azure Virtual Machines virtual network. For more information, see [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).
+
+ When the service endpoint is enabled, the requests are no longer sent from a public IP to Azure Cosmos DB. Instead, the virtual network and subnet identity are sent. This change might result in firewall drops if only public IPs are allowed. If you use a firewall, when you enable the service endpoint, add a subnet to the firewall by using [Virtual Network ACLs](/previous-versions/azure/virtual-network/virtual-networks-acl).
+* Assign a [public IP to your Azure VM](../../load-balancer/troubleshoot-outbound-connection.md#configure-an-individual-public-ip-on-vm).
+
+### High network latency
+
+See our [latency troubleshooting guide](troubleshoot-dotnet-sdk-slow-request.md) for details on latency troubleshooting.
+
+### Proxy authentication failures
+
+If you see errors that show as HTTP 407:
+
+```
+Response status code does not indicate success: ProxyAuthenticationRequired (407);
+```
+
+This error isn't generated by the SDK, nor does it come from the Azure Cosmos DB service. It's related to networking configuration: a proxy in your network is most likely missing the required proxy authentication. If you're not expecting to be using a proxy, reach out to your network team. If you *are* using a proxy, make sure you're setting the right [WebProxy](/dotnet/api/system.net.webproxy) configuration on [CosmosClientOptions.WebProxy](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.webproxy) when creating the client instance, for example as in the following sketch.
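+
+A minimal sketch of configuring a proxy with credentials; the endpoint, key, proxy address, and credentials are placeholders, and gateway mode is assumed because the proxy applies to the SDK's HTTP traffic:
+
+```csharp
+using System.Net;
+using Microsoft.Azure.Cosmos;
+
+CosmosClient client = new CosmosClient(
+    "<account-endpoint>",   // placeholder
+    "<account-key>",        // placeholder
+    new CosmosClientOptions
+    {
+        ConnectionMode = ConnectionMode.Gateway,
+        // Route the SDK's HTTP traffic through the proxy with explicit credentials.
+        WebProxy = new WebProxy("http://<proxy-address>:8080")
+        {
+            Credentials = new NetworkCredential("<proxy-user>", "<proxy-password>")
+        }
+    });
+```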
+
+### Common query issues
+
+The [query metrics](query-metrics.md) will help determine where the query is spending most of the time. From the query metrics, you can see how much of the time is being spent on the back end versus the client. Learn more in the [query performance guide](performance-tips-query-sdk.md?pivots=programming-language-csharp).
+
+If you encounter the following error: `Unable to load DLL 'Microsoft.Azure.Cosmos.ServiceInterop.dll' or one of its dependencies:` and are using Windows, you should upgrade to the latest Windows version.
+
+## Next steps
+
+* Learn about Performance guidelines for the [.NET SDK](performance-tips-dotnet-sdk-v3.md)
+* Learn about the best practices for the [.NET SDK](best-practice-dotnet.md)
+
+ <!--Anchors-->
+[Common issues and workarounds]: #common-issues-workarounds
+[Azure SNAT (PAT) port exhaustion]: #snat
+[Production check list]: #production-check-list
cosmos-db Troubleshoot Forbidden https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-forbidden.md
+
+ Title: Troubleshoot Azure Cosmos DB forbidden exceptions
+description: Learn how to diagnose and fix forbidden exceptions.
++++ Last updated : 04/14/2022+++++
+# Diagnose and troubleshoot Azure Cosmos DB forbidden exceptions
+
+The HTTP status code 403 indicates that the request is forbidden to complete.
+
+## Firewall blocking requests
+
+Data plane requests can come to Azure Cosmos DB via the following three paths.
+
+- Public internet (IPv4)
+- Service endpoint
+- Private endpoint
+
+When a data plane request is blocked with 403 Forbidden, the error message specifies which of these three paths the request used to reach Azure Cosmos DB.
+
+- `Request originated from client IP {...} through public internet.`
+- `Request originated from client VNET through service endpoint.`
+- `Request originated from client VNET through private endpoint.`
+
+### Solution
+
+Determine which path the request is **expected** to use to reach Azure Cosmos DB.
+  - If the error message shows that the request didn't reach Azure Cosmos DB via the expected path, the issue is likely with the client-side setup. Double check your client-side setup against the following documentation:
+    - Public internet: [Configure IP firewall in Azure Cosmos DB](../how-to-configure-firewall.md).
+    - Service endpoint: [Configure access to Azure Cosmos DB from virtual networks (VNet)](../how-to-configure-vnet-service-endpoint.md). For example, if you expect to use a service endpoint but the request reached Azure Cosmos DB via the public internet, the subnet that the client was running in might not have the service endpoint to Azure Cosmos DB enabled.
+    - Private endpoint: [Configure Azure Private Link for an Azure Cosmos DB account](../how-to-configure-private-endpoints.md). For example, if you expect to use a private endpoint but the request reached Azure Cosmos DB via the public internet, the DNS on the VM might not be configured to resolve the account endpoint to the private IP, so the request went through the account's public IP instead.
+  - If the request reached Azure Cosmos DB via the expected path, it was blocked because the source network identity isn't allowed for the account. Check the account's settings depending on the path the request used:
+    - Public internet: Check the account's [public network access](../how-to-configure-private-endpoints.md#blocking-public-network-access-during-account-creation) and IP range filter configurations.
+    - Service endpoint: Check the account's [public network access](../how-to-configure-private-endpoints.md#blocking-public-network-access-during-account-creation) and VNET filter configurations.
+    - Private endpoint: Check the account's private endpoint configuration and the client's private DNS configuration. This problem can occur when an account is accessed from a private endpoint that's set up for a different account.
+
+If you recently updated the account's firewall configuration, keep in mind that changes can take **up to 15 minutes to apply**.
+
+## Partition key exceeding storage
+In this scenario, it's common to see errors like the following:
+
+```
+Response status code does not indicate success: Forbidden (403); Substatus: 1014
+```
+
+```
+Partition key reached maximum size of {...} GB
+```
+
+### Solution
+This error means that your current [partitioning design](../partitioning-overview.md#logical-partitions) and workload are trying to store more than the allowed amount of data for a given partition key value. There's no limit to the number of logical partitions in your container, but the size of data that each logical partition can store is limited. You can reach out to support for clarification.
+
+## Non-data operations are not allowed
+This scenario happens when [attempting to perform non-data operations](../how-to-setup-rbac.md#permission-model) by using Azure Active Directory (Azure AD) identities. In this scenario, it's common to see errors like the following:
+
+```
+Operation 'POST' on resource 'calls' is not allowed through Azure Cosmos DB endpoint
+```
+```
+Forbidden (403); Substatus: 5300; The given request [PUT ...] cannot be authorized by AAD token in data plane.
+```
+
+### Solution
+Perform the operation through Azure Resource Manager, the Azure portal, the Azure CLI, or Azure PowerShell.
+If you're using the [Azure Functions Azure Cosmos DB Trigger](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md), make sure the `CreateLeaseContainerIfNotExists` property of the trigger isn't set to `true`. Using Azure AD identities blocks any non-data operation, such as creating the lease container.
+
+## Next steps
+* Configure [IP Firewall](../how-to-configure-firewall.md).
+* Configure access from [virtual networks](../how-to-configure-vnet-service-endpoint.md).
+* Configure access from [private endpoints](../how-to-configure-private-endpoints.md).
cosmos-db Troubleshoot Java Async Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-java-async-sdk.md
+
+ Title: Diagnose and troubleshoot Azure Cosmos DB Async Java SDK v2
+description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues in Async Java SDK v2.
++ Last updated : 05/11/2020++
+ms.devlang: java
+++++
+# Troubleshoot issues when you use the Azure Cosmos DB Async Java SDK v2 with API for NoSQL accounts
+
+> [!div class="op_single_selector"]
+> * [Java SDK v4](troubleshoot-java-sdk-v4.md)
+> * [Async Java SDK v2](troubleshoot-java-async-sdk.md)
+> * [.NET](troubleshoot-dotnet-sdk.md)
+>
+
+> [!IMPORTANT]
+> This is *not* the latest Java SDK for Azure Cosmos DB! You should upgrade your project to [Azure Cosmos DB Java SDK v4](sdk-java-v4.md) and then read the Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4.md). Follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide to upgrade.
+>
+> This article covers troubleshooting for Azure Cosmos DB Async Java SDK v2 only. See the Azure Cosmos DB Async Java SDK v2 [Release Notes](sdk-java-async-v2.md), [Maven repository](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb) and [performance tips](performance-tips-async-java.md) for more information.
+>
+
+> [!IMPORTANT]
+> On August 31, 2024 the Azure Cosmos DB Async Java SDK v2.x
+> will be retired; the SDK and all applications using the SDK
+> **will continue to function**; Azure Cosmos DB will simply cease
+> to provide further maintenance and support for this SDK.
+> We recommend following the instructions above to migrate to
+> Azure Cosmos DB Java SDK v4.
+>
+
+This article covers common issues, workarounds, diagnostic steps, and tools when you use the [Java Async SDK](sdk-java-async-v2.md) with Azure Cosmos DB for NoSQL accounts.
+The Java Async SDK provides a client-side logical representation to access Azure Cosmos DB for NoSQL. This article describes tools and approaches to help you if you run into any issues.
+
+Start with this list:
+
+* Take a look at the [Common issues and workarounds] section in this article.
+* Look at the SDK, which is available [open source on GitHub](https://github.com/Azure/azure-cosmosdb-java). It has an [issues section](https://github.com/Azure/azure-cosmosdb-java/issues) that's actively monitored. Check to see if any similar issue with a workaround is already filed.
+* Review the [performance tips](performance-tips-async-java.md), and follow the suggested practices.
+* Read the rest of this article, if you didn't find a solution. Then file a [GitHub issue](https://github.com/Azure/azure-cosmosdb-java/issues).
+
+## <a name="common-issues-workarounds"></a>Common issues and workarounds
+
+### Network issues, Netty read timeout failure, low throughput, high latency
+
+#### General suggestions
+* Make sure the app is running on the same region as your Azure Cosmos DB account.
+* Check the CPU usage on the host where the app is running. If CPU usage is 90 percent or more, run your app on a host with a higher configuration. Or you can distribute the load on more machines.
+
+#### Connection throttling
+Connection throttling can happen because of either a [connection limit on a host machine] or [Azure SNAT (PAT) port exhaustion].
+
+##### <a name="connection-limit-on-host"></a>Connection limit on a host machine
+Some Linux systems, such as Red Hat, have an upper limit on the total number of open files. Sockets in Linux are implemented as files, so this number limits the total number of connections, too.
+Run the following command.
+
+```bash
+ulimit -a
+```
+The number of max allowed open files, which are identified as "nofile," needs to be at least double your connection pool size. For more information, see [Performance tips](performance-tips-async-java.md).
+
+##### <a name="snat"></a>Azure SNAT (PAT) port exhaustion
+
+If your app is deployed on Azure Virtual Machines without a public IP address, by default [Azure SNAT ports](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports) establish connections to any endpoint outside of your VM. The number of connections allowed from the VM to the Azure Cosmos DB endpoint is limited by the [Azure SNAT configuration](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports).
+
+ Azure SNAT ports are used only when your VM has a private IP address and a process from the VM tries to connect to a public IP address. There are two workarounds to avoid Azure SNAT limitation:
+
+* Add your Azure Cosmos DB service endpoint to the subnet of your Azure Virtual Machines virtual network. For more information, see [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).
+
+ When the service endpoint is enabled, the requests are no longer sent from a public IP to Azure Cosmos DB. Instead, the virtual network and subnet identity are sent. This change might result in firewall drops if only public IPs are allowed. If you use a firewall, when you enable the service endpoint, add a subnet to the firewall by using [Virtual Network ACLs](/previous-versions/azure/virtual-network/virtual-networks-acl).
+* Assign a public IP to your Azure VM.
+
+##### <a name="cant-connect"></a>Can't reach the Service - firewall
+``ConnectTimeoutException`` indicates that the SDK cannot reach the service.
+You may get a failure similar to the following when using the direct mode:
+```
+GoneException{error=null, resourceAddress='https://cdb-ms-prod-westus-fd4.documents.azure.com:14940/apps/e41242a5-2d71-5acb-2e00-5e5f744b12de/services/d8aa21a5-340b-21d4-b1a2-4a5333e7ed8a/partitions/ed028254-b613-4c2a-bf3c-14bd5eb64500/replicas/131298754052060051p//', statusCode=410, message=Message: The requested resource is no longer available at the server., getCauseInfo=[class: class io.netty.channel.ConnectTimeoutException, message: connection timed out: cdb-ms-prod-westus-fd4.documents.azure.com/101.13.12.5:14940]
+```
+
+If you have a firewall running on your app machine, open the port range 10,000 to 20,000, which is used by the direct mode.
+Also see the [Connection limit on a host machine](#connection-limit-on-host) section.
+
+#### HTTP proxy
+
+If you use an HTTP proxy, make sure it can support the number of connections configured in the SDK `ConnectionPolicy`.
+Otherwise, you face connection issues.
+
+#### Invalid coding pattern: Blocking Netty IO thread
+
+The SDK uses the [Netty](https://netty.io/) IO library to communicate with Azure Cosmos DB. The SDK has Async APIs and uses non-blocking IO APIs of Netty. The SDK's IO work is performed on IO Netty threads. The number of IO Netty threads is configured to be the same as the number of CPU cores of the app machine.
+
+The Netty IO threads are meant to be used only for non-blocking Netty IO work. The SDK returns the API invocation result on one of the Netty IO threads to the app's code. If the app performs a long-lasting operation after it receives results on the Netty thread, the SDK might not have enough IO threads to perform its internal IO work. Such app coding might result in low throughput, high latency, and `io.netty.handler.timeout.ReadTimeoutException` failures. The workaround is to switch the thread when you know the operation takes time.
+
+For example, take a look at the following code snippet. You might perform long-lasting work that takes more than a few milliseconds on the Netty thread. If so, you eventually can get into a state where no Netty IO thread is present to process IO work. As a result, you get a ReadTimeoutException failure.
+
+### <a id="asyncjava2-readtimeout"></a>Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)
+
+```java
+@Test
+public void badCodeWithReadTimeoutException() throws Exception {
+ int requestTimeoutInSeconds = 10;
+
+ ConnectionPolicy policy = new ConnectionPolicy();
+ policy.setRequestTimeoutInMillis(requestTimeoutInSeconds * 1000);
+
+ AsyncDocumentClient testClient = new AsyncDocumentClient.Builder()
+ .withServiceEndpoint(TestConfigurations.HOST)
+ .withMasterKeyOrResourceToken(TestConfigurations.MASTER_KEY)
+ .withConnectionPolicy(policy)
+ .build();
+
+ int numberOfCpuCores = Runtime.getRuntime().availableProcessors();
+ int numberOfConcurrentWork = numberOfCpuCores + 1;
+ CountDownLatch latch = new CountDownLatch(numberOfConcurrentWork);
+ AtomicInteger failureCount = new AtomicInteger();
+
+ for (int i = 0; i < numberOfConcurrentWork; i++) {
+ Document docDefinition = getDocumentDefinition();
+ Observable<ResourceResponse<Document>> createObservable = testClient
+ .createDocument(getCollectionLink(), docDefinition, null, false);
+ createObservable.subscribe(r -> {
+ try {
+ // Time-consuming work is, for example,
+ // writing to a file, computationally heavy work, or just sleep.
+ // Basically, it's anything that takes more than a few milliseconds.
+ // Doing such operations on the IO Netty thread
+ // without a proper scheduler will cause problems.
+ // The subscriber will get a ReadTimeoutException failure.
+ TimeUnit.SECONDS.sleep(2 * requestTimeoutInSeconds);
+ } catch (Exception e) {
+ }
+ },
+
+ exception -> {
+ //It will be io.netty.handler.timeout.ReadTimeoutException.
+ exception.printStackTrace();
+ failureCount.incrementAndGet();
+ latch.countDown();
+ },
+ () -> {
+ latch.countDown();
+ });
+ }
+
+ latch.await();
+ assertThat(failureCount.get()).isGreaterThan(0);
+}
+```
+The workaround is to change the thread on which you perform work that takes time. Define a singleton instance of the scheduler for your app.
+
+### <a id="asyncjava2-scheduler"></a>Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)
+
+```java
+// Have a singleton instance of an executor and a scheduler.
+ExecutorService ex = Executors.newFixedThreadPool(30);
+Scheduler customScheduler = rx.schedulers.Schedulers.from(ex);
+```
+You might need to do work that takes time, for example, computationally heavy work or blocking IO. In this case, switch the thread to a worker provided by your `customScheduler` by using the `.observeOn(customScheduler)` API.
+
+### <a id="asyncjava2-applycustomscheduler"></a>Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)
+
+```java
+Observable<ResourceResponse<Document>> createObservable = client
+ .createDocument(getCollectionLink(), docDefinition, null, false);
+
+createObservable
+ .observeOn(customScheduler) // Switches the thread.
+ .subscribe(
+ // ...
+ );
+```
+By using `observeOn(customScheduler)`, you release the Netty IO thread and switch to your own custom thread provided by the custom scheduler.
+This modification solves the problem. You won't get a `io.netty.handler.timeout.ReadTimeoutException` failure anymore.
+
+### Connection pool exhausted issue
+
+`PoolExhaustedException` is a client-side failure. This failure indicates that your app workload is higher than what the SDK connection pool can serve. Increase the connection pool size or distribute the load on multiple apps.
+
+### Request rate too large
+This failure is a server-side failure. It indicates that you consumed your provisioned throughput. Retry later. If you get this failure often, consider an increase in the collection throughput.
+
+### Failure connecting to Azure Cosmos DB Emulator
+
+The Azure Cosmos DB Emulator HTTPS certificate is self-signed. For the SDK to work with the emulator, import the emulator certificate to a Java TrustStore. For more information, see [Export Azure Cosmos DB Emulator certificates](../local-emulator-export-ssl-certificates.md).
+
+### Dependency Conflict Issues
+
+```console
+Exception in thread "main" java.lang.NoSuchMethodError: rx.Observable.toSingle()Lrx/Single;
+```
+
+The above exception suggests you have a dependency on an older version of the RxJava library (for example, 1.2.2). Our SDK relies on RxJava 1.3.8, which has APIs that aren't available in earlier versions of RxJava.
+
+The workaround for such issues is to identify which other dependency brings in RxJava 1.2.2, exclude the transitive dependency on RxJava 1.2.2, and allow the Azure Cosmos DB SDK to bring in the newer version.
+
+To identify which library brings in RxJava 1.2.2, run the following command next to your project's pom.xml file:
+```bash
+mvn dependency:tree
+```
+For more information, see the [maven dependency tree guide](https://maven.apache.org/plugins-archives/maven-dependency-plugin-2.10/examples/resolving-conflicts-using-the-dependency-tree.html).
+
+Once you identify which other dependency of your project brings in RxJava 1.2.2 as a transitive dependency, modify the dependency on that library in your pom file and exclude the RxJava transitive dependency:
+
+```xml
+<dependency>
+ <groupId>${groupid-of-lib-which-brings-in-rxjava1.2.2}</groupId>
+ <artifactId>${artifactId-of-lib-which-brings-in-rxjava1.2.2}</artifactId>
+ <version>${version-of-lib-which-brings-in-rxjava1.2.2}</version>
+ <exclusions>
+ <exclusion>
+ <groupId>io.reactivex</groupId>
+ <artifactId>rxjava</artifactId>
+ </exclusion>
+ </exclusions>
+</dependency>
+```
+
+For more information, see the [exclude transitive dependency guide](https://maven.apache.org/guides/introduction/introduction-to-optional-and-excludes-dependencies.html).
++
+## <a name="enable-client-sice-logging"></a>Enable client SDK logging
+
+The Java Async SDK uses SLF4J as the logging facade, which supports logging to popular logging frameworks such as log4j and logback.
+
+For example, if you want to use log4j as the logging framework, add the following libraries to your Java classpath.
+
+```xml
+<dependency>
+ <groupId>org.slf4j</groupId>
+ <artifactId>slf4j-log4j12</artifactId>
+ <version>${slf4j.version}</version>
+</dependency>
+<dependency>
+ <groupId>log4j</groupId>
+ <artifactId>log4j</artifactId>
+ <version>${log4j.version}</version>
+</dependency>
+```
+
+Also add a log4j config.
+```
+# this is a sample log4j configuration
+
+# Set root logger level to DEBUG and its only appender to A1.
+log4j.rootLogger=INFO, A1
+
+log4j.category.com.microsoft.azure.cosmosdb=DEBUG
+#log4j.category.io.netty=INFO
+#log4j.category.io.reactivex=INFO
+# A1 is set to be a ConsoleAppender.
+log4j.appender.A1=org.apache.log4j.ConsoleAppender
+
+# A1 uses PatternLayout.
+log4j.appender.A1.layout=org.apache.log4j.PatternLayout
+log4j.appender.A1.layout.ConversionPattern=%d %5X{pid} [%t] %-5p %c - %m%n
+```
+
+For more information, see the [SLF4J logging manual](https://www.slf4j.org/manual.html).
+
+## <a name="netstats"></a>OS network statistics
+Run the netstat command to get a sense of how many connections are in states such as `ESTABLISHED` and `CLOSE_WAIT`.
+
+On Linux, you can run the following command.
+```bash
+netstat -nap
+```
+Filter the result to only connections to the Azure Cosmos DB endpoint.
+
+The number of connections to the Azure Cosmos DB endpoint in the `ESTABLISHED` state can't be greater than your configured connection pool size.
+
+Many connections to the Azure Cosmos DB endpoint might be in the `CLOSE_WAIT` state. There might be more than 1,000. A number that high indicates that connections are established and torn down quickly. This situation potentially causes problems. For more information, see the [Common issues and workarounds] section.
+
+ <!--Anchors-->
+[Common issues and workarounds]: #common-issues-workarounds
+[Enable client SDK logging]: #enable-client-sice-logging
+[Connection limit on a host machine]: #connection-limit-on-host
+[Azure SNAT (PAT) port exhaustion]: #snat
cosmos-db Troubleshoot Java Sdk Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-java-sdk-request-timeout.md
+
+ Title: Troubleshoot Azure Cosmos DB HTTP 408 or request timeout issues with the Java v4 SDK
+description: Learn how to diagnose and fix Java SDK request timeout exceptions with the Java v4 SDK.
++++ Last updated : 10/28/2020+++++
+# Diagnose and troubleshoot Azure Cosmos DB Java v4 SDK request timeout exceptions
+
+The HTTP 408 error occurs if the SDK couldn't complete the request before the timeout elapsed.
+
+## Troubleshooting steps
+The following list contains known causes and solutions for request timeout exceptions.
+
+### Existing issues
+If you're seeing requests getting stuck for a longer duration or timing out more frequently, upgrade the Java v4 SDK to the latest version.
+NOTE: We strongly recommend using version 4.18.0 and above. Check out the [Java v4 SDK release notes](sdk-java-v4.md) for more details.
+
+### High CPU utilization
+High CPU utilization is the most common case. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries, where a single query might open multiple connections.
+
+#### Solution:
+The client application that uses the SDK should be scaled up or out.
+
+### Connection throttling
+Connection throttling can happen because of either a connection limit on a host machine or Azure SNAT (PAT) port exhaustion.
+
+### Connection limit on a host machine
+Some Linux systems, such as Red Hat, have an upper limit on the total number of open files. Sockets in Linux are implemented as files, so this number limits the total number of connections, too. Run the following command.
+
+```bash
+ulimit -a
+```
+
+#### Solution:
+The maximum number of allowed open files, identified as "nofile", needs to be at least 10,000. For more information, see the Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4.md).
+
+### Socket or port availability might be low
+When running in Azure, clients using the Java SDK can hit Azure SNAT (PAT) port exhaustion.
+
+#### Solution 1:
+If you're running on Azure VMs, follow the [SNAT port exhaustion guide](troubleshoot-java-sdk-v4.md#snat).
+
+#### Solution 2:
+If you're running on Azure App Service, follow the [connection errors troubleshooting guide](../../app-service/troubleshoot-intermittent-outbound-connection-errors.md#cause) and [use App Service diagnostics](https://azure.github.io/AppService/2018/03/01/Deep-Dive-into-TCP-Connections-in-App-Service-Diagnostics.html).
+
+#### Solution 3:
+If you're running on Azure Functions, verify you're following the [Azure Functions recommendation](../../azure-functions/manage-connections.md#static-clients) of maintaining singleton or static clients for all of the involved services (including Azure Cosmos DB). Check the [service limits](../../azure-functions/functions-scale.md#service-limits) based on the type and size of your Function App hosting.
+
+#### Solution 4:
+If you use an HTTP proxy, make sure it can support the number of connections configured in the SDK `GatewayConnectionConfig`. Otherwise, you'll face connection issues.
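+
+As a rough sketch (not this article's own sample; the endpoint, key, and pool size are placeholders), you might size the gateway connection pool when you build the client like this:
+
+```java
+import com.azure.cosmos.CosmosClient;
+import com.azure.cosmos.CosmosClientBuilder;
+import com.azure.cosmos.GatewayConnectionConfig;
+
+public final class GatewayClientFactory {
+    public static CosmosClient buildGatewayClient() {
+        // Keep the pool size within what the HTTP proxy can handle; 100 is only an example value.
+        GatewayConnectionConfig gatewayConfig = GatewayConnectionConfig.getDefaultConfig();
+        gatewayConfig.setMaxConnectionPoolSize(100);
+
+        return new CosmosClientBuilder()
+                .endpoint("https://<your-account>.documents.azure.com:443/") // placeholder
+                .key("<your-account-key>")                                   // placeholder
+                .gatewayMode(gatewayConfig)
+                .buildClient();
+    }
+}
+```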
+
+### Create multiple client instances
+Creating multiple client instances might lead to connection contention and timeout issues.
+
+#### Solution 1:
+Follow the [performance tips](performance-tips-java-sdk-v4.md#sdk-usage), and use a single CosmosClient instance across an entire application.
+
+#### Solution 2:
+If your application can't use a single `CosmosClient` instance, we recommend enabling connection sharing across multiple Azure Cosmos DB clients through the `connectionSharingAcrossClientsEnabled(true)` API when you build each client.
+When you have multiple instances of the Azure Cosmos DB client in the same JVM interacting with multiple Azure Cosmos DB accounts, enabling this option allows connection sharing in Direct mode between the client instances where possible. Note that when you set this option, the connection configuration (for example, socket timeout and idle timeout) of the first instantiated client is used for all other client instances.
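+
+As a minimal sketch (the endpoints and keys are placeholders, not values from this article), enabling connection sharing on each client might look like the following:
+
+```java
+import com.azure.cosmos.CosmosAsyncClient;
+import com.azure.cosmos.CosmosClientBuilder;
+
+public final class SharedConnectionClients {
+    // Builds a client that can share Direct mode connections with other clients in this JVM.
+    public static CosmosAsyncClient buildClient(String endpoint, String key) {
+        return new CosmosClientBuilder()
+                .endpoint(endpoint) // for example, "https://<account-a>.documents.azure.com:443/"
+                .key(key)
+                .connectionSharingAcrossClientsEnabled(true)
+                .buildAsyncClient();
+    }
+}
+```
+
+The first client built this way determines the shared connection configuration, so create the client with your preferred socket and idle timeout settings first.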
+
+### Hot partition key
+Azure Cosmos DB distributes the overall provisioned throughput evenly across physical partitions. When there's a hot partition, one or more logical partition keys on a physical partition are consuming all the physical partition's Request Units per second (RU/s). At the same time, the RU/s on other physical partitions are going unused. As a symptom, the total RU/s consumed will be less than the overall provisioned RU/s at the database or container, but you'll still see throttling (429s) on the requests against the hot logical partition key. Use the [Normalized RU Consumption metric](../monitor-normalized-request-units.md) to see if the workload is encountering a hot partition.
+
+#### Solution:
+Choose a good partition key that evenly distributes request volume and storage. Learn how to [change your partition key](https://devblogs.microsoft.com/cosmosdb/how-to-change-your-partition-key/).
+
+### High degree of concurrency
+The application is doing a high level of concurrency, which can lead to contention on the channel.
+
+#### Solution:
+The client application that uses the SDK should be scaled up or out.
+
+### Large requests or responses
+Large requests or responses can lead to head-of-line blocking on the channel and exacerbate contention, even with a relatively low degree of concurrency.
+
+#### Solution:
+The client application that uses the SDK should be scaled up or out.
+
+### Failure rate is within the Azure Cosmos DB SLA
+The application should be able to handle transient failures and retry when necessary. 408 exceptions aren't retried, because on create paths it's impossible to know whether the service created the item. Sending the same item again for create causes a conflict exception. A user application's business logic might include custom conflict handling, which would break because of the ambiguity between an existing item and a conflict from a create retry.
+
+### Failure rate violates the Azure Cosmos DB SLA
+Contact [Azure Support](https://aka.ms/azure-support).
+
+## Next steps
+* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4.md) issues when you use the Azure Cosmos DB Java v4 SDK.
+* Learn about performance guidelines for [Java v4](performance-tips-java-sdk-v4.md).
cosmos-db Troubleshoot Java Sdk Service Unavailable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-java-sdk-service-unavailable.md
+
+ Title: Troubleshoot Azure Cosmos DB service unavailable exceptions with the Java v4 SDK
+description: Learn how to diagnose and fix Azure Cosmos DB service unavailable exceptions with the Java v4 SDK.
++++ Last updated : 02/03/2022+++++
+# Diagnose and troubleshoot Azure Cosmos DB Java v4 SDK service unavailable exceptions
+The Java v4 SDK wasn't able to connect to Azure Cosmos DB.
+
+## Troubleshooting steps
+The following list contains known causes and solutions for service unavailable exceptions.
+
+### The required ports are being blocked
+Verify that all the [required ports](sdk-connection-modes.md#service-port-ranges) are enabled. If the account is configured with a private endpoint, more ports need to be opened.
+
+```
+failed to establish connection to {account name}.documents.azure.com/<unresolved>:3044 due to io.netty.channel.ConnectTimeoutException:
+```
+
+### Client initialization failure
+The following exception occurs if the SDK can't talk to the Azure Cosmos DB instance. This exception normally indicates that a security configuration, such as a firewall rule, is blocking the requests.
+
+```java
+ java.lang.RuntimeException: Client initialization failed. Check if the endpoint is reachable and if your auth token is valid
+```
+
+To validate that the SDK can communicate with the Azure Cosmos DB account, execute the following command from where the application is hosted. If it fails, this points to a firewall rule or other security feature blocking the request. If it succeeds, the SDK should be able to communicate with the Azure Cosmos DB account.
+```
+telnet myCosmosDbAccountName.documents.azure.com 443
+```
+
+### Client-side transient connectivity issues
+Service unavailable exceptions can surface when there are transient connectivity problems that are causing timeouts. Typically, the stack trace related to this scenario will contain a `ServiceUnavailableException` error with diagnostic details. For example:
+
+```java
+Exception in thread "main" ServiceUnavailableException{userAgent=azsdk-java-cosmos/4.6.0 Linux/4.15.0-1096-azure JRE/11.0.8, error=null, resourceAddress='null', requestUri='null', statusCode=503, message=Service is currently unavailable, please retry after a while. If this problem persists please contact support.: Message: "" {"diagnostics"}
+```
+
+Follow the [request timeout troubleshooting steps](troubleshoot-java-sdk-request-timeout.md#troubleshooting-steps) to resolve it.
+
+#### UnknownHostException
+`UnknownHostException` means that the Java framework can't resolve the DNS entry for the Azure Cosmos DB endpoint on the affected machine. Verify that the machine can resolve the DNS entry, or if you have any custom DNS resolution software (such as a VPN, a proxy, or a custom solution), make sure it contains the right configuration for the DNS endpoint that the error claims can't be resolved. If the error is constant, you can verify the machine's DNS resolution through a `curl` command to the endpoint described in the error.
+
+### Service outage
+Check the [Azure status](https://azure.status.microsoft/status) to see if there's an ongoing issue.
++
+## Next steps
+* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4.md) issues when you use the Azure Cosmos DB Java v4 SDK.
+* Learn about performance guidelines for [Java v4 SDK](performance-tips-java-sdk-v4.md).
cosmos-db Troubleshoot Java Sdk V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-java-sdk-v4.md
+
+ Title: Diagnose and troubleshoot Azure Cosmos DB Java SDK v4
+description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues in Java SDK v4.
++ Last updated : 04/01/2022+
+ms.devlang: java
+++++
+# Troubleshoot issues when you use Azure Cosmos DB Java SDK v4 with API for NoSQL accounts
+
+> [!div class="op_single_selector"]
+> * [Java SDK v4](troubleshoot-java-sdk-v4.md)
+> * [Async Java SDK v2](troubleshoot-java-async-sdk.md)
+> * [.NET](troubleshoot-dotnet-sdk.md)
+>
+
+> [!IMPORTANT]
+> This article covers troubleshooting for Azure Cosmos DB Java SDK v4 only. Please see the Azure Cosmos DB Java SDK v4 [Release notes](sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), and [performance tips](performance-tips-java-sdk-v4.md) for more information. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
+>
+
+This article covers common issues, workarounds, diagnostic steps, and tools when you use Azure Cosmos DB Java SDK v4 with Azure Cosmos DB for NoSQL accounts.
+Azure Cosmos DB Java SDK v4 provides a client-side logical representation to access Azure Cosmos DB for NoSQL. This article describes tools and approaches to help you if you run into any issues.
+
+Start with this list:
+
+* Take a look at the [Common issues and workarounds] section in this article.
+* Look at the Java SDK in the Azure Cosmos DB central repo, which is available [open source on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-cosmos). It has an [issues section](https://github.com/Azure/azure-sdk-for-java/issues) that's actively monitored. Check to see if any similar issue with a workaround is already filed. One helpful tip is to filter issues by the *cosmos:v4-item* tag.
+* Review the [performance tips](performance-tips-java-sdk-v4.md) for Azure Cosmos DB Java SDK v4, and follow the suggested practices.
+* Read the rest of this article, if you didn't find a solution. Then file a [GitHub issue](https://github.com/Azure/azure-sdk-for-java/issues). If there is an option to add tags to your GitHub issue, add a *cosmos:v4-item* tag.
+
+## Capture the diagnostics
+
+Database, container, item, and query responses in the Java V4 SDK have a Diagnostics property. This property records all the information related to the single request, including if there were retries or any transient failures.
+
+The Diagnostics are returned as a string. The string changes with each version as it's improved to better troubleshoot different scenarios. With each version of the SDK, the string might have breaking changes to its format. To avoid breaking changes, don't parse the string.
+
+The following code sample shows how to read diagnostic logs using the Java V4 SDK:
+
+> [!IMPORTANT]
+> We recommend checking the minimum recommended version of the Java V4 SDK and ensuring that you're using this version or higher. You can check the recommended version [here](./sdk-java-v4.md#recommended-version).
+
+# [Sync](#tab/sync)
+
+#### Database Operations
+
+```Java
+CosmosDatabaseResponse databaseResponse = client.createDatabaseIfNotExists(databaseName);
+CosmosDiagnostics diagnostics = databaseResponse.getDiagnostics();
+logger.info("Create database diagnostics : {}", diagnostics);
+```
+
+#### Container Operations
+
+```Java
+CosmosContainerResponse containerResponse = database.createContainerIfNotExists(containerProperties,
+ throughputProperties);
+CosmosDiagnostics diagnostics = containerResponse.getDiagnostics();
+logger.info("Create container diagnostics : {}", diagnostics);
+```
+
+#### Item Operations
+
+```Java
+// Write Item
+CosmosItemResponse<Family> item = container.createItem(family, new PartitionKey(family.getLastName()),
+ new CosmosItemRequestOptions());
+
+CosmosDiagnostics diagnostics = item.getDiagnostics();
+logger.info("Create item diagnostics : {}", diagnostics);
+
+// Read Item
+CosmosItemResponse<Family> familyCosmosItemResponse = container.readItem(documentId,
+ new PartitionKey(documentLastName), Family.class);
+
+CosmosDiagnostics readDiagnostics = familyCosmosItemResponse.getDiagnostics();
+logger.info("Read item diagnostics : {}", readDiagnostics);
+```
+
+#### Query Operations
+
+```Java
+String sql = "SELECT * FROM c WHERE c.lastName = 'Witherspoon'";
+
+CosmosPagedIterable<Family> filteredFamilies = container.queryItems(sql, new CosmosQueryRequestOptions(),
+ Family.class);
+
+// Add handler to capture diagnostics
+filteredFamilies = filteredFamilies.handle(familyFeedResponse -> {
+ logger.info("Query Item diagnostics through handle : {}",
+ familyFeedResponse.getCosmosDiagnostics());
+});
+
+// Or capture diagnostics through iterableByPage() APIs.
+filteredFamilies.iterableByPage().forEach(familyFeedResponse -> {
+ logger.info("Query item diagnostics through iterableByPage : {}",
+ familyFeedResponse.getCosmosDiagnostics());
+});
+```
+
+#### Azure Cosmos DB Exceptions
+
+```Java
+try {
+ CosmosItemResponse<Family> familyCosmosItemResponse = container.readItem(documentId,
+ new PartitionKey(documentLastName), Family.class);
+} catch (CosmosException ex) {
+ CosmosDiagnostics diagnostics = ex.getDiagnostics();
+ logger.error("Read item failure diagnostics : {}", diagnostics);
+}
+```
+
+# [Async](#tab/async)
+
+#### Database Operations
+
+```Java
+Mono<CosmosDatabaseResponse> databaseResponseMono = client.createDatabaseIfNotExists(databaseName);
+databaseResponseMono.map(databaseResponse -> {
+    CosmosDiagnostics diagnostics = databaseResponse.getDiagnostics();
+    logger.info("Create database diagnostics : {}", diagnostics);
+    return databaseResponse;
+}).subscribe();
+```
+
+#### Container Operations
+
+```Java
+Mono<CosmosContainerResponse> containerResponseMono = database.createContainerIfNotExists(containerProperties,
+ throughputProperties);
+containerResponseMono.map(containerResponse -> {
+    CosmosDiagnostics diagnostics = containerResponse.getDiagnostics();
+    logger.info("Create container diagnostics : {}", diagnostics);
+    return containerResponse;
+}).subscribe();
+```
+
+#### Item Operations
+
+```Java
+// Write Item
+Mono<CosmosItemResponse<Family>> itemResponseMono = container.createItem(family,
+ new PartitionKey(family.getLastName()),
+ new CosmosItemRequestOptions());
+
+itemResponseMono.map(itemResponse -> {
+    CosmosDiagnostics diagnostics = itemResponse.getDiagnostics();
+    logger.info("Create item diagnostics : {}", diagnostics);
+    return itemResponse;
+}).subscribe();
+
+// Read Item
+Mono<CosmosItemResponse<Family>> readItemResponseMono = container.readItem(documentId,
+    new PartitionKey(documentLastName), Family.class);
+
+readItemResponseMono.map(itemResponse -> {
+    CosmosDiagnostics diagnostics = itemResponse.getDiagnostics();
+    logger.info("Read item diagnostics : {}", diagnostics);
+    return itemResponse;
+}).subscribe();
+```
+
+#### Query Operations
+
+```Java
+String sql = "SELECT * FROM c WHERE c.lastName = 'Witherspoon'";
+CosmosPagedFlux<Family> filteredFamilies = container.queryItems(sql, new CosmosQueryRequestOptions(),
+ Family.class);
+// Add handler to capture diagnostics
+filteredFamilies = filteredFamilies.handle(familyFeedResponse -> {
+ logger.info("Query Item diagnostics through handle : {}",
+ familyFeedResponse.getCosmosDiagnostics());
+});
+
+// Or capture diagnostics through byPage() APIs.
+filteredFamilies.byPage().map(familyFeedResponse -> {
+    logger.info("Query item diagnostics through byPage : {}",
+        familyFeedResponse.getCosmosDiagnostics());
+    return familyFeedResponse;
+}).subscribe();
+```
+
+#### Azure Cosmos DB Exceptions
+
+```Java
+Mono<CosmosItemResponse<Family>> itemResponseMono = container.readItem(documentId,
+ new PartitionKey(documentLastName), Family.class);
+
+itemResponseMono.onErrorResume(throwable -> {
+ if (throwable instanceof CosmosException) {
+ CosmosException cosmosException = (CosmosException) throwable;
+ CosmosDiagnostics diagnostics = cosmosException.getDiagnostics();
+ logger.error("Read item failure diagnostics : {}", diagnostics);
+ }
+ return Mono.error(throwable);
+}).subscribe();
+```
++
+## Retry design <a id="retry-logics"></a><a id="retry-design"></a><a id="error-codes"></a>
+See our guide to [designing resilient applications with Azure Cosmos DB SDKs](conceptual-resilient-sdk-applications.md) for guidance on how to design resilient applications and to learn about the retry semantics of the SDK.
+
+## <a name="common-issues-workarounds"></a>Common issues and workarounds
+
+### Network issues, Netty read timeout failure, low throughput, high latency
+
+#### General suggestions
+For best performance:
+* Make sure the app is running on the same region as your Azure Cosmos DB account.
+* Check the CPU usage on the host where the app is running. If CPU usage is 50 percent or more, run your app on a host with a higher configuration, or distribute the load across more machines.
+ * If you are running your application on Azure Kubernetes Service, you can [use Azure Monitor to monitor CPU utilization](../../azure-monitor/containers/container-insights-analyze.md).
+
+#### Connection throttling
+Connection throttling can happen because of either a [connection limit on a host machine] or [Azure SNAT (PAT) port exhaustion].
+
+##### <a name="connection-limit-on-host"></a>Connection limit on a host machine
+Some Linux systems, such as Red Hat, have an upper limit on the total number of open files. Sockets in Linux are implemented as files, so this number limits the total number of connections, too.
+Run the following command.
+
+```bash
+ulimit -a
+```
+The maximum number of allowed open files, identified as "nofile", needs to be at least double your connection pool size. For more information, see the Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4.md).
+
+##### <a name="snat"></a>Azure SNAT (PAT) port exhaustion
+
+If your app is deployed on Azure Virtual Machines without a public IP address, by default [Azure SNAT ports](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports) establish connections to any endpoint outside of your VM. The number of connections allowed from the VM to the Azure Cosmos DB endpoint is limited by the [Azure SNAT configuration](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports).
+
+ Azure SNAT ports are used only when your VM has a private IP address and a process from the VM tries to connect to a public IP address. There are two workarounds to avoid Azure SNAT limitation:
+
+* Add your Azure Cosmos DB service endpoint to the subnet of your Azure Virtual Machines virtual network. For more information, see [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).
+
+ When the service endpoint is enabled, the requests are no longer sent from a public IP to Azure Cosmos DB. Instead, the virtual network and subnet identity are sent. This change might result in firewall drops if only public IPs are allowed. If you use a firewall, when you enable the service endpoint, add a subnet to the firewall by using [Virtual Network ACLs](/previous-versions/azure/virtual-network/virtual-networks-acl).
+* Assign a public IP to your Azure VM.
+
+##### <a name="cant-connect"></a>Can't reach the Service - firewall
+``ConnectTimeoutException`` indicates that the SDK cannot reach the service.
+You may get a failure similar to the following when using the direct mode:
+```
+GoneException{error=null, resourceAddress='https://cdb-ms-prod-westus-fd4.documents.azure.com:14940/apps/e41242a5-2d71-5acb-2e00-5e5f744b12de/services/d8aa21a5-340b-21d4-b1a2-4a5333e7ed8a/partitions/ed028254-b613-4c2a-bf3c-14bd5eb64500/replicas/131298754052060051p//', statusCode=410, message=Message: The requested resource is no longer available at the server., getCauseInfo=[class: class io.netty.channel.ConnectTimeoutException, message: connection timed out: cdb-ms-prod-westus-fd4.documents.azure.com/101.13.12.5:14940]
+```
+
+If you have a firewall running on your app machine, open port range 10,000 to 20,000, which is used by direct mode.
+Also see the [Connection limit on a host machine](#connection-limit-on-host) section.
+
+#### UnknownHostException
+
+`UnknownHostException` means that the Java framework can't resolve the DNS entry for the Azure Cosmos DB endpoint on the affected machine. Verify that the machine can resolve the DNS entry, or if you have any custom DNS resolution software (such as a VPN, a proxy, or a custom solution), make sure it contains the right configuration for the DNS endpoint that the error claims can't be resolved. If the error is constant, you can verify the machine's DNS resolution through a `curl` command to the endpoint described in the error.
+
+#### HTTP proxy
+
+If you use an HTTP proxy, make sure it can support the number of connections configured in the SDK `ConnectionPolicy`.
+Otherwise, you face connection issues.
+
+#### Invalid coding pattern: Blocking Netty IO thread
+
+The SDK uses the [Netty](https://netty.io/) IO library to communicate with Azure Cosmos DB. The SDK has an Async API and uses non-blocking IO APIs of Netty. The SDK's IO work is performed on IO Netty threads. The number of IO Netty threads is configured to be the same as the number of CPU cores of the app machine.
+
+The Netty IO threads are meant to be used only for non-blocking Netty IO work. The SDK returns the API invocation result on one of the Netty IO threads to the app's code. If the app performs a long-lasting operation after it receives results on the Netty thread, the SDK might not have enough IO threads to perform its internal IO work. Such app coding might result in low throughput, high latency, and `io.netty.handler.timeout.ReadTimeoutException` failures. The workaround is to switch the thread when you know the operation takes time.
+
+For example, take a look at the following code snippet, which adds items to a container (see [here](quickstart-java.md) for guidance on setting up the database and container). You might perform long-lasting work that takes more than a few milliseconds on the Netty thread. If so, you can eventually get into a state where no Netty IO thread is present to process IO work. As a result, you get a ReadTimeoutException failure.
+
+### <a id="java4-readtimeout"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=TroubleshootNeedsSchedulerAsync)]
+
+The workaround is to change the thread on which you perform work that takes time. Define a singleton instance of the scheduler for your app.
+
+### <a id="java4-scheduler"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=TroubleshootCustomSchedulerAsync)]
+
+You might need to do work that takes time, for example, computationally heavy work or blocking IO. In this case, switch the thread to a worker provided by your `customScheduler` by using the `.publishOn(customScheduler)` API.
+
+### <a id="java4-apply-custom-scheduler"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=TroubleshootPublishOnSchedulerAsync)]
+
+By using `publishOn(customScheduler)`, you release the Netty IO thread and switch to your own custom thread provided by the custom scheduler. This modification solves the problem: you won't get an `io.netty.handler.timeout.ReadTimeoutException` failure anymore.
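+
+The following is a hedged, self-contained sketch of the same idea (it is not the article's own sample; `Family` comes from the quickstart samples, and `process` is a hypothetical placeholder for your heavy work):
+
+```java
+import com.azure.cosmos.CosmosAsyncContainer;
+import com.azure.cosmos.models.CosmosItemRequestOptions;
+import com.azure.cosmos.models.CosmosItemResponse;
+import com.azure.cosmos.models.PartitionKey;
+import reactor.core.scheduler.Scheduler;
+import reactor.core.scheduler.Schedulers;
+
+public final class ItemWriter {
+    // Singleton scheduler shared by the whole app for long-lasting post-processing.
+    private static final Scheduler CUSTOM_SCHEDULER = Schedulers.newParallel("custom-worker", 4);
+
+    public static void createAndProcess(CosmosAsyncContainer container, Family family) {
+        container.createItem(family, new PartitionKey(family.getLastName()), new CosmosItemRequestOptions())
+                .publishOn(CUSTOM_SCHEDULER)   // release the Netty IO thread before the heavy work
+                .map(ItemWriter::process)      // long-lasting work now runs on custom-worker threads
+                .subscribe();
+    }
+
+    private static String process(CosmosItemResponse<Family> response) {
+        // Placeholder for computationally heavy or blocking work.
+        return response.getItem().getLastName();
+    }
+}
+```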
+
+### Request rate too large
+This failure is a server-side failure. It indicates that you consumed your provisioned throughput. Retry later. If you get this failure often, consider increasing the collection throughput.
+
+* **Implement backoff at getRetryAfterInMilliseconds intervals**
+
+    During performance testing, you should increase load until a small rate of requests gets throttled. If requests are throttled, the client application should back off for the server-specified retry interval. Respecting the backoff ensures that you spend a minimal amount of time waiting between retries. A hedged sketch of this pattern follows.
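+
+The following is a minimal sketch of honoring the server-specified retry interval (not the article's own sample; `Family` comes from the quickstart samples, and the synchronous loop is only for illustration, since the SDK already retries throttled requests for you):
+
+```java
+import com.azure.cosmos.CosmosContainer;
+import com.azure.cosmos.CosmosException;
+import com.azure.cosmos.models.CosmosItemRequestOptions;
+import com.azure.cosmos.models.PartitionKey;
+
+public final class BackoffExample {
+    public static void createWithBackoff(CosmosContainer container, Family family) throws InterruptedException {
+        while (true) {
+            try {
+                container.createItem(family, new PartitionKey(family.getLastName()), new CosmosItemRequestOptions());
+                return;
+            } catch (CosmosException e) {
+                if (e.getStatusCode() != 429) {
+                    throw e; // not a throttling error; surface it
+                }
+                // Wait for the server-specified interval before retrying.
+                Thread.sleep(e.getRetryAfterDuration().toMillis());
+            }
+        }
+    }
+}
+```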
+
+### Error handling from Java SDK Reactive Chain
+
+Error handling from the Azure Cosmos DB Java SDK is important for the client application's logic. The [reactor-core framework](https://projectreactor.io/docs/core/release/reference/#error.handling) provides different error handling mechanisms that can be used in different scenarios. We recommend that customers understand these error handling operators in detail and use the ones that best fit their retry logic scenarios.
+
+> [!IMPORTANT]
+> We do not recommend using the [`onErrorContinue()`](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#onErrorContinue-java.util.function.BiConsumer-) operator, because it is not supported in all scenarios.
+> Note that `onErrorContinue()` is a specialist operator that can make the behavior of your reactive chain unclear. It operates on upstream, not downstream, operators; it requires specific operator support to work; and its scope can easily propagate upstream into library code that didn't anticipate it (resulting in unintended behavior). Refer to the [documentation](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#onErrorContinue-java.util.function.BiConsumer-) of `onErrorContinue()` for more details on this special operator.
+
+### Failure connecting to Azure Cosmos DB Emulator
+
+The Azure Cosmos DB Emulator HTTPS certificate is self-signed. For the SDK to work with the emulator, import the emulator certificate to a Java TrustStore. For more information, see [Export Azure Cosmos DB Emulator certificates](../local-emulator-export-ssl-certificates.md).
+
+### Dependency Conflict Issues
+
+The Azure Cosmos DB Java SDK pulls in a number of dependencies. Generally speaking, if your project's dependency tree includes an older version of an artifact that the Azure Cosmos DB Java SDK depends on, unexpected errors might occur when you run your application. If you're debugging why your application unexpectedly throws an exception, double-check that your dependency tree isn't accidentally pulling in an older version of one or more of the Azure Cosmos DB Java SDK dependencies.
+
+The workaround for such an issue is to identify which of your project dependencies brings in the old version, exclude the transitive dependency on that older version, and allow the Azure Cosmos DB Java SDK to bring in the newer version.
+
+To identify which of your project dependencies brings in an older version of something that Azure Cosmos DB Java SDK depends on, run the following command against your project pom.xml file:
+```bash
+mvn dependency:tree
+```
+For more information, see the [maven dependency tree guide](https://maven.apache.org/plugins-archives/maven-dependency-plugin-2.10/examples/resolving-conflicts-using-the-dependency-tree.html).
+
+Once you know which dependency of your project depends on an older version, you can modify the dependency on that lib in your pom file and exclude the transitive dependency, following the example below (which assumes that *reactor-core* is the outdated dependency):
+
+```xml
+<dependency>
+ <groupId>${groupid-of-lib-which-brings-in-reactor}</groupId>
+ <artifactId>${artifactId-of-lib-which-brings-in-reactor}</artifactId>
+ <version>${version-of-lib-which-brings-in-reactor}</version>
+ <exclusions>
+ <exclusion>
+ <groupId>io.projectreactor</groupId>
+ <artifactId>reactor-core</artifactId>
+ </exclusion>
+ </exclusions>
+</dependency>
+```
+
+For more information, see the [exclude transitive dependency guide](https://maven.apache.org/guides/introduction/introduction-to-optional-and-excludes-dependencies.html).
++
+## <a name="enable-client-sice-logging"></a>Enable client SDK logging
+
+Azure Cosmos DB Java SDK v4 uses SLF4j as the logging facade that supports logging into popular logging frameworks such as log4j and logback.
+
+For example, if you want to use log4j as the logging framework, add the following libs in your Java classpath.
+
+```xml
+<dependency>
+ <groupId>org.slf4j</groupId>
+ <artifactId>slf4j-log4j12</artifactId>
+ <version>${slf4j.version}</version>
+</dependency>
+<dependency>
+ <groupId>log4j</groupId>
+ <artifactId>log4j</artifactId>
+ <version>${log4j.version}</version>
+</dependency>
+```
+
+Also add a log4j config.
+```
+# this is a sample log4j configuration
+
+# Set root logger level to INFO and its only appender to A1.
+log4j.rootLogger=INFO, A1
+
+log4j.category.com.azure.cosmos=INFO
+#log4j.category.io.netty=OFF
+#log4j.category.io.projectreactor=OFF
+# A1 is set to be a ConsoleAppender.
+log4j.appender.A1=org.apache.log4j.ConsoleAppender
+
+# A1 uses PatternLayout.
+log4j.appender.A1.layout=org.apache.log4j.PatternLayout
+log4j.appender.A1.layout.ConversionPattern=%d %5X{pid} [%t] %-5p %c - %m%n
+```
+
+For more information, see the [slf4j logging manual](https://www.slf4j.org/manual.html).
+
+## <a name="netstats"></a>OS network statistics
+Run the netstat command to get a sense of how many connections are in states such as `ESTABLISHED` and `CLOSE_WAIT`.
+
+On Linux, you can run the following command.
+```bash
+netstat -nap
+```
+
+On Windows, you can run the same command with different argument flags:
+```cmd
+netstat -abn
+```
+
+Filter the result to only connections to the Azure Cosmos DB endpoint.
+
+The number of connections to the Azure Cosmos DB endpoint in the `ESTABLISHED` state can't be greater than your configured connection pool size.
+
+Many connections to the Azure Cosmos DB endpoint might be in the `CLOSE_WAIT` state. There might be more than 1,000. A number that high indicates that connections are established and torn down quickly. This situation potentially causes problems. For more information, see the [Common issues and workarounds] section.
+
+ <!--Anchors-->
+[Common issues and workarounds]: #common-issues-workarounds
+[Enable client SDK logging]: #enable-client-sice-logging
+[Connection limit on a host machine]: #connection-limit-on-host
+[Azure SNAT (PAT) port exhaustion]: #snat
cosmos-db Troubleshoot Not Found https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-not-found.md
+
+ Title: Troubleshoot Azure Cosmos DB not found exceptions
+description: Learn how to diagnose and fix not found exceptions.
+++ Last updated : 05/26/2021++++++
+# Diagnose and troubleshoot Azure Cosmos DB not found exceptions
+
+The HTTP status code 404 indicates that the resource no longer exists.
+
+## Expected behavior
+There are many valid scenarios where an application expects a code 404 and correctly handles the scenario.
+
+## A not found exception was returned for an item that should exist or does exist
+Here are the possible reasons for a status code 404 to be returned if the item should exist or does exist.
+
+### The read session is not available for the input session token
+
+#### Solution:
+1. Update your current SDK to the latest version available. The most common causes for this particular error have been fixed in the newest SDK versions.
+
+### Race condition
+There are multiple SDK client instances and the read happened before the write.
+
+#### Solution:
+1. The default account consistency for Azure Cosmos DB is session consistency. When an item is created or updated, the response returns a session token that can be passed between SDK instances to guarantee that the read request reads from a replica with that change (see the sketch after this list).
+1. Change the [consistency level](../consistency-levels.md) to a [stronger level](../consistency-levels.md).
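+
+The following is a hedged sketch of passing a session token between operations with the Java v4 SDK (this article's samples use the .NET SDK; `Family`, the container, and the values involved are placeholders):
+
+```java
+import com.azure.cosmos.CosmosContainer;
+import com.azure.cosmos.models.CosmosItemRequestOptions;
+import com.azure.cosmos.models.CosmosItemResponse;
+import com.azure.cosmos.models.PartitionKey;
+
+public final class SessionTokenExample {
+    public static Family writeThenRead(CosmosContainer container, Family family) {
+        // Capture the session token returned by the write.
+        CosmosItemResponse<Family> writeResponse =
+                container.createItem(family, new PartitionKey(family.getLastName()), new CosmosItemRequestOptions());
+        String sessionToken = writeResponse.getSessionToken();
+
+        // Pass the session token so the read sees the write, even from another client instance.
+        CosmosItemRequestOptions readOptions = new CosmosItemRequestOptions().setSessionToken(sessionToken);
+        return container.readItem(family.getId(), new PartitionKey(family.getLastName()), readOptions, Family.class)
+                .getItem();
+    }
+}
+```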
+
+### Reading throughput for a container or database resource
+You use PowerShell or the Azure CLI to read the throughput and receive a *not found* error message.
+
+#### Solution:
+Throughput can be provisioned at the database level, the container level, or both. If you get a *not found* error, try reading the throughput of the parent database resource or the child container resource.
+
+### Invalid partition key and ID combination
+The partition key and ID combination isn't valid.
+
+#### Solution:
+Fix the application logic that's causing the incorrect combination.
+
+### Invalid character in an item ID
+An item is inserted into Azure Cosmos DB with an [invalid character](/dotnet/api/microsoft.azure.documents.resource.id#remarks) in the item ID.
+
+#### Solution:
+Change the ID to a different value that doesn't contain the special characters. If changing the ID isn't an option, you can Base64 encode the ID to escape the special characters. Base64 can still produce a name with the invalid character '/', which needs to be replaced.
+
+Items already inserted in the container for the ID can be replaced by using RID values instead of name-based references.
+```c#
+// Get a container reference that uses RID values.
+ContainerProperties containerProperties = await this.Container.ReadContainerAsync();
+string[] selfLinkSegments = containerProperties.SelfLink.Split('/');
+string databaseRid = selfLinkSegments[1];
+string containerRid = selfLinkSegments[3];
+Container containerByRid = this.cosmosClient.GetContainer(databaseRid, containerRid);
+
+// Invalid characters are listed here.
+// https://learn.microsoft.com/dotnet/api/microsoft.azure.documents.resource.id#remarks
+FeedIterator<JObject> invalidItemsIterator = this.Container.GetItemQueryIterator<JObject>(
+ @"select * from t where CONTAINS(t.id, ""/"") or CONTAINS(t.id, ""#"") or CONTAINS(t.id, ""?"") or CONTAINS(t.id, ""\\"") ");
+while (invalidItemsIterator.HasMoreResults)
+{
+ foreach (JObject itemWithInvalidId in await invalidItemsIterator.ReadNextAsync())
+ {
+ // Choose a new ID that doesn't contain special characters.
+ // If that isn't possible, then Base64 encode the ID to escape the special characters.
+ byte[] plainTextBytes = Encoding.UTF8.GetBytes(itemWithInvalidId["id"].ToString());
+ itemWithInvalidId["id"] = Convert.ToBase64String(plainTextBytes).Replace('/', '!');
+
+ // Update the item with the new ID value by using the RID-based container reference.
+ JObject item = await containerByRid.ReplaceItemAsync<JObject>(
+ item: itemWithInvalidId,
+                id: itemWithInvalidId["_rid"].ToString(),
+ partitionKey: new Cosmos.PartitionKey(itemWithInvalidId["status"].ToString()));
+
+ // Validating the new ID can be read by using the original name-based container reference.
+ await this.Container.ReadItemAsync<ToDoActivity>(
+ item["id"].ToString(),
+                new Cosmos.PartitionKey(item["status"].ToString()));
+ }
+}
+```
+
+### Time to Live purge
+The item had the [Time to Live (TTL)](./time-to-live.md) property set. The item was purged because the TTL property expired.
+
+#### Solution:
+Change the TTL property to prevent the item from being purged.
+
+### Lazy indexing
+The [lazy indexing](../index-policy.md#indexing-mode) hasn't caught up.
+
+#### Solution:
+Wait for the indexing to catch up or change the indexing policy.
+
+### Parent resource deleted
+The database or container that the item exists in was deleted.
+
+#### Solution:
+1. [Restore](../configure-periodic-backup-restore.md#request-restore) the parent resource, or re-create the resources.
+1. Create a new resource to replace the deleted resource.
+
+### Container/Collection names are case-sensitive
+Container/Collection names are case-sensitive in Azure Cosmos DB.
+
+#### Solution:
+Make sure to use the exact name while connecting to Azure Cosmos DB.
+
+## Next steps
+* [Diagnose and troubleshoot](troubleshoot-dotnet-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
+* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3.md) and [.NET v2](performance-tips.md).
+* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4.md) issues when you use the Azure Cosmos DB Java v4 SDK.
+* Learn about performance guidelines for [Java v4 SDK](performance-tips-java-sdk-v4.md).
cosmos-db Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-query-performance.md
+
+ Title: Troubleshoot query issues when using Azure Cosmos DB
+description: Learn how to identify, diagnose, and troubleshoot Azure Cosmos DB SQL query issues.
+++ Last updated : 04/04/2022++++
+# Troubleshoot query issues when using Azure Cosmos DB
+
+This article walks through a general recommended approach for troubleshooting queries in Azure Cosmos DB. Although you shouldn't consider the steps outlined in this article a complete defense against potential query issues, we've included the most common performance tips here. You should use this article as a starting place for troubleshooting slow or expensive queries in the Azure Cosmos DB for NoSQL. You can also use [diagnostics logs](../monitor-resource-logs.md) to identify queries that are slow or that consume significant amounts of throughput. If you use Azure Cosmos DB's API for MongoDB, use the [Azure Cosmos DB API for MongoDB query troubleshooting guide](../mongodb/troubleshoot-query-performance.md).
+
+Query optimizations in Azure Cosmos DB are broadly categorized as follows:
+
+- Optimizations that reduce the Request Unit (RU) charge of the query
+- Optimizations that just reduce latency
+
+If you reduce the RU charge of a query, you'll typically decrease latency as well.
+
+This article provides examples that you can re-create by using the [nutrition dataset](https://github.com/CosmosDB/labs/blob/master/dotnet/setup/NutritionData.json).
+
+## Common SDK issues
+
+Before reading this guide, it is helpful to consider common SDK issues that aren't related to the query engine.
+
+- Follow these [SDK Performance tips for query](performance-tips-query-sdk.md).
+- Sometimes queries may have empty pages even when there are results on a future page. Reasons for this could be:
+ - The SDK could be doing multiple network calls.
+ - The query might be taking a long time to retrieve the documents.
+- All queries have a continuation token that allows the query to continue. Be sure to drain the query completely. Learn more about [handling multiple pages of results](query/pagination.md#handling-multiple-pages-of-results). A sketch of draining a query page by page follows this list.
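+
+As a hedged illustration with the Java v4 SDK (the container, query text, and page size are placeholders, not values from this article), you can drain every page of a query like this:
+
+```java
+import com.azure.cosmos.CosmosContainer;
+import com.azure.cosmos.models.CosmosQueryRequestOptions;
+import com.azure.cosmos.models.FeedResponse;
+import com.azure.cosmos.util.CosmosPagedIterable;
+
+public final class QueryDrainExample {
+    public static int countAllResults(CosmosContainer container) {
+        CosmosPagedIterable<Object> results = container.queryItems(
+                "SELECT * FROM c", new CosmosQueryRequestOptions(), Object.class);
+
+        int total = 0;
+        // iterableByPage keeps requesting pages (using continuation tokens internally) until the query is drained.
+        for (FeedResponse<Object> page : results.iterableByPage(100)) {
+            total += page.getResults().size();
+        }
+        return total;
+    }
+}
+```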
+
+## Get query metrics
+
+When you optimize a query in Azure Cosmos DB, the first step is always to [get the query metrics](query-metrics-performance.md) for your query. These metrics are also available through the Azure portal. Once you run your query in the Data Explorer, the query metrics are visible next to the **Results** tab:
++
+After you get the query metrics, compare the **Retrieved Document Count** with the **Output Document Count** for your query. Use this comparison to identify the relevant sections to review in this article.
+
+The **Retrieved Document Count** is the number of documents that the query engine needed to load. The **Output Document Count** is the number of documents that were needed for the results of the query. If the **Retrieved Document Count** is significantly higher than the **Output Document Count**, there was at least one part of your query that was unable to use an index and needed to do a scan.
+
+Refer to the following sections to understand the relevant query optimizations for your scenario.
+
+### Query's RU charge is too high
+
+#### Retrieved Document Count is significantly higher than Output Document Count
+
+- [Include necessary paths in the indexing policy.](#include-necessary-paths-in-the-indexing-policy)
+
+- [Understand which system functions use the index.](#understand-which-system-functions-use-the-index)
+
+- [Improve string system function execution.](#improve-string-system-function-execution)
+
+- [Understand which aggregate queries use the index.](#understand-which-aggregate-queries-use-the-index)
+
+- [Optimize queries that have both a filter and an ORDER BY clause.](#optimize-queries-that-have-both-a-filter-and-an-order-by-clause)
+
+- [Optimize JOIN expressions by using a subquery.](#optimize-join-expressions-by-using-a-subquery)
+
+<br>
+
+#### Retrieved Document Count is approximately equal to Output Document Count
+
+- [Minimize cross partition queries.](#minimize-cross-partition-queries)
+
+- [Optimize queries that have filters on multiple properties.](#optimize-queries-that-have-filters-on-multiple-properties)
+
+- [Optimize queries that have both a filter and an ORDER BY clause.](#optimize-queries-that-have-both-a-filter-and-an-order-by-clause)
+
+<br>
+
+### Query's RU charge is acceptable but latency is still too high
+
+- [Improve proximity.](#improve-proximity)
+
+- [Increase provisioned throughput.](#increase-provisioned-throughput)
+
+- [Increase MaxConcurrency.](#increase-maxconcurrency)
+
+- [Increase MaxBufferedItemCount.](#increase-maxbuffereditemcount)
+
+## Queries where Retrieved Document Count exceeds Output Document Count
+
+ The **Retrieved Document Count** is the number of documents that the query engine needed to load. The **Output Document Count** is the number of documents returned by the query. If the **Retrieved Document Count** is significantly higher than the **Output Document Count**, there was at least one part of your query that was unable to use an index and needed to do a scan.
+
+Here's an example of a scan query that wasn't entirely served by the index:
+
+Query:
+
+```sql
+SELECT VALUE c.description
+FROM c
+WHERE UPPER(c.description) = "BABYFOOD, DESSERT, FRUIT DESSERT, WITHOUT ASCORBIC ACID, JUNIOR"
+```
+
+Query metrics:
+
+```
+Retrieved Document Count : 60,951
+Retrieved Document Size : 399,998,938 bytes
+Output Document Count : 7
+Output Document Size : 510 bytes
+Index Utilization : 0.00 %
+Total Query Execution Time : 4,500.34 milliseconds
+ Query Preparation Times
+ Query Compilation Time : 0.09 milliseconds
+ Logical Plan Build Time : 0.05 milliseconds
+ Physical Plan Build Time : 0.04 milliseconds
+ Query Optimization Time : 0.01 milliseconds
+ Index Lookup Time : 0.01 milliseconds
+ Document Load Time : 4,177.66 milliseconds
+ Runtime Execution Times
+ Query Engine Times : 322.16 milliseconds
+ System Function Execution Time : 85.74 milliseconds
+ User-defined Function Execution Time : 0.00 milliseconds
+ Document Write Time : 0.01 milliseconds
+Client Side Metrics
+ Retry Count : 0
+ Request Charge : 4,059.95 RUs
+```
+
+The **Retrieved Document Count** (60,951) is significantly higher than the **Output Document Count** (7), implying that this query resulted in a document scan. In this case, the system function [UPPER()](query/upper.md) doesn't use an index.
+
+### Include necessary paths in the indexing policy
+
+Your indexing policy should cover any properties included in `WHERE` clauses, `ORDER BY` clauses, `JOIN`, and most system functions. The desired paths specified in the index policy should match the properties in the JSON documents.
+
+> [!NOTE]
+> Properties in Azure Cosmos DB indexing policy are case-sensitive
+
+If you run the following simple query on the [nutrition](https://github.com/CosmosDB/labs/blob/master/dotnet/setup/NutritionData.json) dataset, you will observe a much lower RU charge when the property in the `WHERE` clause is indexed:
+
+#### Original
+
+Query:
+
+```sql
+SELECT *
+FROM c
+WHERE c.description = "Malabar spinach, cooked"
+```
+
+Indexing policy:
+
+```json
+{
+ "indexingMode": "consistent",
+ "automatic": true,
+ "includedPaths": [
+ {
+ "path": "/*"
+ }
+ ],
+ "excludedPaths": [
+ {
+ "path": "/description/*"
+ }
+ ]
+}
+```
+
+**RU charge:** 409.51 RUs
+
+#### Optimized
+
+Updated indexing policy:
+
+```json
+{
+ "indexingMode": "consistent",
+ "automatic": true,
+ "includedPaths": [
+ {
+ "path": "/*"
+ }
+ ],
+ "excludedPaths": []
+}
+```
+
+**RU charge:** 2.98 RUs
+
+You can add properties to the indexing policy at any time, with no effect on write or read availability. You can [track index transformation progress](./how-to-manage-indexing-policy.md#dotnet-sdk).
+
+### Understand which system functions use the index
+
+Most system functions use indexes. Here's a list of some common string functions that use indexes:
+
+- StartsWith
+- Contains
+- RegexMatch
+- Left
+- Substring - but only if the first num_expr is 0
+
+Following are some common system functions that don't use the index and must load each document when used in a `WHERE` clause:
+
+| **System function** | **Ideas for optimization** |
+| | |
+| Upper/Lower | Instead of using the system function to normalize data for comparisons, normalize the casing upon insertion. A query like ```SELECT * FROM c WHERE UPPER(c.name) = 'BOB'``` becomes ```SELECT * FROM c WHERE c.name = 'BOB'```. |
+| GetCurrentDateTime/GetCurrentTimestamp/GetCurrentTicks | Calculate the current time before query execution and use that string value in the `WHERE` clause. |
+| Mathematical functions (non-aggregates) | If you need to compute a value frequently in your query, consider storing the value as a property in your JSON document. |
+
+These system functions can use indexes, except when used in queries with aggregates:
+
+| **System function** | **Ideas for optimization** |
+| | |
+| Spatial system functions | Store the query result in a real-time materialized view |
+
+When used in the `SELECT` clause, inefficient system functions will not affect how queries can use indexes.
+
+### Improve string system function execution
+
+For some system functions that use indexes, you can improve query execution by adding an `ORDER BY` clause to the query.
+
+More specifically, any system function whose RU charge increases as the cardinality of the property increases may benefit from having `ORDER BY` in the query. These queries do an index scan, so having the query results sorted can make the query more efficient.
+
+This optimization can improve execution for the following system functions:
+
+- StartsWith (where case-insensitive = true)
+- StringEquals (where case-insensitive = true)
+- Contains
+- RegexMatch
+- EndsWith
+
+For example, consider the following query with `CONTAINS`. `CONTAINS` will use indexes, but sometimes, even after adding the relevant index, you might still observe a very high RU charge when running the query.
+
+Original query:
+
+```sql
+SELECT *
+FROM c
+WHERE CONTAINS(c.town, "Sea")
+```
+
+You can improve query execution by adding `ORDER BY`:
+
+```sql
+SELECT *
+FROM c
+WHERE CONTAINS(c.town, "Sea")
+ORDER BY c.town
+```
+
+The same optimization can help in queries with additional filters. In this case, it's best to also add properties with equality filters to the `ORDER BY` clause.
+
+Original query:
+
+```sql
+SELECT *
+FROM c
+WHERE c.name = "Samer" AND CONTAINS(c.town, "Sea")
+```
+
+You can improve query execution by adding `ORDER BY` and [a composite index](../index-policy.md#composite-indexes) for (c.name, c.town):
+
+```sql
+SELECT *
+FROM c
+WHERE c.name = "Samer" AND CONTAINS(c.town, "Sea")
+ORDER BY c.name, c.town
+```
+
+### Understand which aggregate queries use the index
+
+In most cases, aggregate system functions in Azure Cosmos DB will use the index. However, depending on the filters or additional clauses in an aggregate query, the query engine may be required to load a high number of documents. Typically, the query engine will apply equality and range filters first. After applying these filters,
+the query engine can evaluate additional filters and resort to loading remaining documents to compute the aggregate, if needed.
+
+For example, given these two sample queries, the query with both an equality and `CONTAINS` system function filter will generally be more efficient than a query with just a `CONTAINS` system function filter. This is because the equality filter is applied first and uses the index before documents need to be loaded for the more expensive `CONTAINS` filter.
+
+Query with only `CONTAINS` filter - higher RU charge:
+
+```sql
+SELECT COUNT(1)
+FROM c
+WHERE CONTAINS(c.description, "spinach")
+```
+
+Query with both equality filter and `CONTAINS` filter - lower RU charge:
+
+```sql
+SELECT AVG(c._ts)
+FROM c
+WHERE c.foodGroup = "Sausages and Luncheon Meats" AND CONTAINS(c.description, "spinach")
+```
+
+Here are additional examples of aggregate queries that will not fully use the index:
+
+#### Queries with system functions that don't use the index
+
+You should refer to the relevant [system function's page](query/system-functions.md) to see if it uses the index.
+
+```sql
+SELECT MAX(c._ts)
+FROM c
+WHERE CONTAINS(c.description, "spinach")
+```
+
+#### Aggregate queries with user-defined functions (UDFs)
+
+```sql
+SELECT AVG(c._ts)
+FROM c
+WHERE udf.MyUDF("Sausages and Luncheon Meats")
+```
+
+#### Queries with GROUP BY
+
+The RU charge of queries with `GROUP BY` will increase as the cardinality of the properties in the `GROUP BY` clause increases. In the query below, for example, the RU charge will increase as the number of unique descriptions increases.
+
+The RU charge of an aggregate function with a `GROUP BY` clause will be higher than the RU charge of an aggregate function alone. In this example, the query engine must load every document that matches the `c.foodGroup = "Sausages and Luncheon Meats"` filter so the RU charge is expected to be high.
+
+```sql
+SELECT COUNT(1)
+FROM c
+WHERE c.foodGroup = "Sausages and Luncheon Meats"
+GROUP BY c.description
+```
+
+If you plan to frequently run the same aggregate queries, it may be more efficient to build a real-time materialized view with the [Azure Cosmos DB change feed](../change-feed.md) than running individual queries.
+
+### Optimize queries that have both a filter and an ORDER BY clause
+
+Although queries that have a filter and an `ORDER BY` clause will normally use a range index, they'll be more efficient if they can be served from a composite index. In addition to modifying the indexing policy, you should add all properties in the composite index to the `ORDER BY` clause. This change to the query will ensure that it uses the composite index. You can observe the impact by running a query on the [nutrition](https://github.com/CosmosDB/labs/blob/master/dotnet/setup/NutritionData.json) dataset:
+
+#### Original
+
+Query:
+
+```sql
+SELECT *
+FROM c
+WHERE c.foodGroup = "Soups, Sauces, and Gravies"
+ORDER BY c._ts ASC
+```
+
+Indexing policy:
+
+```json
+{
+
+ "automatic":true,
+ "indexingMode":"Consistent",
+ "includedPaths":[
+ {
+ "path":"/*"
+ }
+ ],
+ "excludedPaths":[]
+}
+```
+
+**RU charge:** 44.28 RUs
+
+#### Optimized
+
+Updated query (includes both properties in the `ORDER BY` clause):
+
+```sql
+SELECT *
+FROM c
+WHERE c.foodGroup = "Soups, Sauces, and Gravies"
+ORDER BY c.foodGroup, c._ts ASC
+```
+
+Updated indexing policy:
+
+```json
+{
+ "automatic":true,
+ "indexingMode":"Consistent",
+ "includedPaths":[
+ {
+ "path":"/*"
+ }
+ ],
+ "excludedPaths":[],
+ "compositeIndexes":[
+ [
+ {
+ "path":"/foodGroup",
+ "order":"ascending"
+ },
+ {
+ "path":"/_ts",
+ "order":"ascending"
+ }
+ ]
+ ]
+ }
+
+```
+
+**RU charge:** 8.86 RUs
+
+### Optimize JOIN expressions by using a subquery
+
+Multi-value subqueries can optimize `JOIN` expressions by pushing predicates after each select-many expression rather than after all cross joins in the `WHERE` clause.
+
+Consider this query:
+
+```sql
+SELECT Count(1) AS Count
+FROM c
+JOIN t IN c.tags
+JOIN n IN c.nutrients
+JOIN s IN c.servings
+WHERE t.name = 'infant formula' AND (n.nutritionValue > 0
+AND n.nutritionValue < 10) AND s.amount > 1
+```
+
+**RU charge:** 167.62 RUs
+
+For this query, the index will match any document that has a tag with the name `infant formula`, `nutritionValue` greater than 0, and `amount` greater than 1. The `JOIN` expression here will perform the cross-product of all items of tags, nutrients, and servings arrays for each matching document before any filter is applied. The `WHERE` clause will then apply the filter predicate on each `<c, t, n, s>` tuple.
+
+For example, if a matching document has 10 items in each of the three arrays, it will expand to 1 x 10 x 10 x 10 (that is, 1,000) tuples. The use of subqueries here can help to filter out joined array items before joining with the next expression.
+
+This query is equivalent to the preceding one but uses subqueries:
+
+```sql
+SELECT Count(1) AS Count
+FROM c
+JOIN (SELECT VALUE t FROM t IN c.tags WHERE t.name = 'infant formula')
+JOIN (SELECT VALUE n FROM n IN c.nutrients WHERE n.nutritionValue > 0 AND n.nutritionValue < 10)
+JOIN (SELECT VALUE s FROM s IN c.servings WHERE s.amount > 1)
+```
+
+**RU charge:** 22.17 RUs
+
+Assume that only one item in the tags array matches the filter and that there are five items for both the nutrients and servings arrays. The `JOIN` expressions will expand to 1 x 1 x 5 x 5 = 25 items, as opposed to 1,000 items in the first query.
+
+## Queries where Retrieved Document Count is equal to Output Document Count
+
+If the **Retrieved Document Count** is approximately equal to the **Output Document Count**, the query engine didn't have to scan many unnecessary documents. For many queries, like those that use the `TOP` keyword, **Retrieved Document Count** might exceed **Output Document Count** by 1. You don't need to be concerned about this.
+
+### Minimize cross partition queries
+
+Azure Cosmos DB uses [partitioning](../partitioning-overview.md) to scale individual containers as Request Unit and data storage needs increase. Each physical partition has a separate and independent index. If your query has an equality filter that matches your container's partition key, you'll need to check only the relevant partition's index. This optimization reduces the total number of RUs that the query requires.
+
+If you have a large number of provisioned RUs (more than 30,000) or a large amount of data stored (more than approximately 100 GB), you probably have a large enough container to see a significant reduction in query RU charges.
+
+For example, if you create a container with the partition key foodGroup, the following queries will need to check only a single physical partition:
+
+```sql
+SELECT *
+FROM c
+WHERE c.foodGroup = "Soups, Sauces, and Gravies" and c.description = "Mushroom, oyster, raw"
+```
+
+Queries that have an `IN` filter with the partition key will only check the relevant physical partition(s) and will not "fan-out":
+
+```sql
+SELECT *
+FROM c
+WHERE c.foodGroup IN("Soups, Sauces, and Gravies", "Vegetables and Vegetable Products") and c.description = "Mushroom, oyster, raw"
+```
+
+Queries that have range filters on the partition key, or that don't have any filters on the partition key, will need to "fan-out" and check every physical partition's index for results:
+
+```sql
+SELECT *
+FROM c
+WHERE c.description = "Mushroom, oyster, raw"
+```
+
+```sql
+SELECT *
+FROM c
+WHERE c.foodGroup > "Soups, Sauces, and Gravies" and c.description = "Mushroom, oyster, raw"
+```
+
+### Optimize queries that have filters on multiple properties
+
+Although queries that have filters on multiple properties will normally use a range index, they'll be more efficient if they can be served from a composite index. For small amounts of data, this optimization won't have a significant impact. It could be useful, however, for large amounts of data. You can only optimize, at most, one non-equality filter per composite index. If your query has multiple non-equality filters, pick one of them that will use the composite index. The rest will continue to use range indexes. The non-equality filter must be defined last in the composite index. [Learn more about composite indexes](../index-policy.md#composite-indexes).
+
+Here are some examples of queries that could be optimized with a composite index:
+
+```sql
+SELECT *
+FROM c
+WHERE c.foodGroup = "Vegetables and Vegetable Products" AND c._ts = 1575503264
+```
+
+```sql
+SELECT *
+FROM c
+WHERE c.foodGroup = "Vegetables and Vegetable Products" AND c._ts > 1575503264
+```
+
+Here's the relevant composite index:
+
+```json
+{
+ "automatic":true,
+ "indexingMode":"Consistent",
+ "includedPaths":[
+ {
+ "path":"/*"
+ }
+ ],
+ "excludedPaths":[],
+ "compositeIndexes":[
+ [
+ {
+ "path":"/foodGroup",
+ "order":"ascending"
+ },
+ {
+ "path":"/_ts",
+ "order":"ascending"
+ }
+ ]
+ ]
+}
+```
+
+## Optimizations that reduce query latency
+
+In many cases, the RU charge might be acceptable when query latency is still too high. The following sections give an overview of tips for reducing query latency. If you run the same query multiple times on the same dataset, it will typically have the same RU charge each time. But query latency might vary between query executions.
+
+### Improve proximity
+
+Queries that are run from a different region than the Azure Cosmos DB account will have higher latency than if they were run inside the same region. For example, if you're running code on your desktop computer, you should expect latency to be tens or hundreds of milliseconds higher (or more) than if the query came from a virtual machine within the same Azure region as Azure Cosmos DB. It's simple to [globally distribute data in Azure Cosmos DB](../distribute-data-globally.md) to ensure you can bring your data closer to your app.
+
+### Increase provisioned throughput
+
+In Azure Cosmos DB, your provisioned throughput is measured in Request Units (RUs). Imagine you have a query that consumes 5 RUs of throughput. If you provision 1,000 RUs, you can run that query 200 times per second. If you tried to run the query when there wasn't enough throughput available, Azure Cosmos DB would return an HTTP 429 error. Any of the current API for NoSQL SDKs will automatically retry this query after waiting for a short time. Throttled requests take longer, so increasing provisioned throughput can improve query latency. You can observe the [total number of throttled requests](../use-metrics.md#understand-how-many-requests-are-succeeding-or-causing-errors) on the **Metrics** blade of the Azure portal.
+
+### Increase MaxConcurrency
+
+Parallel queries work by querying multiple partitions in parallel. But data from an individual partitioned collection is fetched serially with respect to the query. So, if you set MaxConcurrency to the number of partitions, you have the best chance of achieving the most performant query, provided all other system conditions remain the same. If you don't know the number of partitions, you can set MaxConcurrency (or MaxDegreeOfParallelism in older SDK versions) to a high number. The system will choose the minimum of the number of partitions and the user-provided value as the maximum degree of parallelism.
+
+### Increase MaxBufferedItemCount
+
+Queries are designed to pre-fetch results while the current batch of results is being processed by the client. Pre-fetching helps to improve the overall latency of a query. Setting MaxBufferedItemCount limits the number of pre-fetched results. If you set this value to the expected number of results returned (or a higher number), the query can get the most benefit from pre-fetching. If you set this value to -1, the system will automatically determine the number of items to buffer.
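+
+As a rough illustration, here's a minimal sketch of applying both of the settings described above (MaxConcurrency and MaxBufferedItemCount) with the .NET v3 SDK. The `container` variable and the query text are assumptions for illustration only:
+
+```csharp
+using Microsoft.Azure.Cosmos;
+
+// Sketch only: 'container' is assumed to be an existing Container instance.
+QueryRequestOptions options = new QueryRequestOptions
+{
+    // -1 lets the SDK choose the degree of parallelism (up to the number of partitions).
+    MaxConcurrency = -1,
+    // -1 lets the SDK decide how many results to buffer ahead of the client.
+    MaxBufferedItemCount = -1
+};
+
+FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
+    "SELECT * FROM c WHERE c.foodGroup = 'Soups, Sauces, and Gravies'",
+    requestOptions: options);
+
+while (iterator.HasMoreResults)
+{
+    FeedResponse<dynamic> page = await iterator.ReadNextAsync();
+    // Process the current page while the SDK pre-fetches the next one.
+}
+```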
+
+## Next steps
+See the following articles for information on how to measure RUs per query, get execution statistics to tune your queries, and more:
+
+* [Get SQL query execution metrics by using .NET SDK](query-metrics-performance.md)
+* [Tuning query performance with Azure Cosmos DB](./query-metrics.md)
+* [Performance tips for .NET SDK](performance-tips.md)
+* [Performance tips for Java v4 SDK](performance-tips-java-sdk-v4.md)
cosmos-db Troubleshoot Request Rate Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-request-rate-too-large.md
+
+ Title: Troubleshoot Azure Cosmos DB request rate too large exceptions
+description: Learn how to diagnose and fix request rate too large exceptions.
++++ Last updated : 03/03/2022+++++
+# Diagnose and troubleshoot Azure Cosmos DB request rate too large (429) exceptions
+
+This article contains known causes and solutions for various 429 status code errors for the API for NoSQL. If you're using the API for MongoDB, see the [Troubleshoot common issues in API for MongoDB](../mongodb/error-codes-solutions.md) article for how to debug status code 16500.
+
+A "Request rate too large" exception, also known as error code 429, indicates that your requests against Azure Cosmos DB are being rate limited.
+
+When you use provisioned throughput, you set the throughput measured in request units per second (RU/s) required for your workload. Database operations against the service such as reads, writes, and queries consume some number of request units (RUs). Learn more about [request units](../request-units.md).
+
+In a given second, if the operations consume more than the provisioned request units, Azure Cosmos DB will return a 429 exception. Each second, the number of request units available to use is reset.
+
+Before taking an action to change the RU/s, it's important to understand the root cause of rate limiting and address the underlying issue.
+> [!TIP]
+> The guidance in this article applies to databases and containers using provisioned throughput - both autoscale and manual throughput.
+
+There are different error messages that correspond to different types of 429 exceptions:
+- [Request rate is large. More Request Units may be needed, so no changes were made.](#request-rate-is-large)
+- [The request didn't complete due to a high rate of metadata requests.](#rate-limiting-on-metadata-requests)
+- [The request didn't complete due to a transient service error.](#rate-limiting-due-to-transient-service-error)
++
+## Request rate is large
+This is the most common scenario. It occurs when the request units consumed by operations on data exceed the provisioned number of RU/s. If you're using manual throughput, this occurs when you've consumed more RU/s than the manual throughput provisioned. If you're using autoscale, this occurs when you've consumed more than the maximum RU/s provisioned. For example, if you have a resource provisioned with manual throughput of 400 RU/s, you'll see 429 responses when you consume more than 400 request units in a single second. If you have a resource provisioned with autoscale max RU/s of 4000 RU/s (scales between 400 RU/s - 4000 RU/s), you'll see 429 responses when you consume more than 4000 request units in a single second.
+
+### Step 1: Check the metrics to determine the percentage of requests with 429 error
+Seeing 429 error messages doesn't necessarily mean there's a problem with your database or container. A small percentage of 429 responses is normal whether you're using manual or autoscale throughput, and is a sign that you're maximizing the RU/s you've provisioned.
+
+#### How to investigate
+
+Determine what percent of your requests to your database or container resulted in 429 responses, compared to the overall count of successful requests. From your Azure Cosmos DB account, navigate to **Insights** > **Requests** > **Total Requests by Status Code**. Filter to a specific database and container.
+
+By default, the Azure Cosmos DB client SDKs and data import tools such as Azure Data Factory and the bulk executor library automatically retry requests on 429 responses, typically up to nine times. As a result, while you may see 429 responses in the metrics, these errors may not even have been returned to your application.
++
+#### Recommended solution
+In general, for a production workload, **if you see between 1-5% of requests with 429 responses, and your end-to-end latency is acceptable, this is a healthy sign that the RU/s are being fully utilized**. No action is required. Otherwise, move on to the next troubleshooting steps.
+
+If you're using autoscale, it's possible to see 429 responses on your database or container, even if the RU/s wasn't scaled to the maximum RU/s. See the section [Request rate is large with autoscale](#request-rate-is-large-with-autoscale) for an explanation.
+
+One common question that arises is, **"Why am I seeing 429 responses in the Azure Monitor metrics, but none in my own application monitoring?"** By default, the Azure Cosmos DB client SDKs [automatically retry requests that receive a 429 response](xref:Microsoft.Azure.Cosmos.CosmosClientOptions.MaxRetryAttemptsOnRateLimitedRequests). If a retry succeeds, the 429 status code is never returned to the application, so you can see 429 responses in Azure Monitor that your own monitoring never observed. In these cases, the overall rate of 429 responses is typically minimal and can be safely ignored, assuming the overall rate is between 1-5% and end-to-end latency is acceptable to your application.
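+
+If you want to tune or surface this behavior, the retry settings can be adjusted on the client. Here's a minimal sketch for the .NET v3 SDK; the endpoint and key values are placeholders:
+
+```csharp
+using System;
+using Microsoft.Azure.Cosmos;
+
+// Sketch only: endpoint and key are placeholders.
+CosmosClient client = new CosmosClient(
+    "https://<your-account>.documents.azure.com:443/",
+    "<your-account-key>",
+    new CosmosClientOptions
+    {
+        // How many times the SDK retries a request that received a 429 response.
+        MaxRetryAttemptsOnRateLimitedRequests = 9,
+        // Maximum cumulative time the SDK waits across those retries before surfacing the 429.
+        MaxRetryWaitTimeOnRateLimitedRequests = TimeSpan.FromSeconds(30)
+    });
+```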
+
+### Step 2: Determine if there's a hot partition
+A hot partition arises when one or a few logical partition keys consume a disproportionate amount of the total RU/s due to higher request volume. This can be caused by a partition key design that doesn't evenly distribute requests. It results in many requests being directed to a small subset of logical (which implies physical) partitions that become "hot." Because all data for a logical partition resides on one physical partition and total RU/s is evenly distributed among the physical partitions, a hot partition can lead to 429 responses and inefficient use of throughput.
+
+Here are some examples of partitioning strategies that lead to hot partitions:
+- You have a container storing IoT device data for a write-heavy workload that is partitioned by `date`. All data for a single date will reside on the same logical and physical partition. Because all the data written each day has the same date, this would result in a hot partition every day.
+ - Instead, for this scenario, a partition key like `id` (either a GUID or device ID), or a [synthetic partition key](./synthetic-partition-keys.md) combining `id` and `date`, would yield a higher cardinality of values and better distribution of request volume (a minimal sketch follows this list).
+- You have a multi-tenant scenario with a container partitioned by `tenantId`. If one tenant is much more active than the others, it results in a hot partition. For example, if the largest tenant has 100,000 users, but most tenants have fewer than 10 users, you'll have a hot partition when partitioned by `tenantId`.
+ - For this previous scenario, consider having a dedicated container for the largest tenant, partitioned by a more granular property such as `UserId`.
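+
+For the IoT scenario in the first bullet, here's a minimal sketch of composing a synthetic partition key with the .NET v3 SDK. The property names, container, and values are hypothetical and shown only to illustrate the idea:
+
+```csharp
+using System;
+using Microsoft.Azure.Cosmos;
+
+// Sketch only: 'container' is assumed to use /partitionKey as its partition key path.
+string deviceId = Guid.NewGuid().ToString();
+string date = DateTime.UtcNow.ToString("yyyy-MM-dd");
+
+var telemetryItem = new
+{
+    id = Guid.NewGuid().ToString(),
+    deviceId = deviceId,
+    date = date,
+    // Synthetic key: deviceId + date spreads writes across many logical partitions.
+    partitionKey = $"{deviceId}-{date}",
+    temperature = 21.5
+};
+
+await container.CreateItemAsync(telemetryItem, new PartitionKey(telemetryItem.partitionKey));
+```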
+
+#### How to identify the hot partition
+
+To verify if there's a hot partition, navigate to **Insights** > **Throughput** > **Normalized RU Consumption (%) By PartitionKeyRangeID**. Filter to a specific database and container.
+
+Each PartitionKeyRangeId maps to one physical partition. If there's one PartitionKeyRangeId that has much higher Normalized RU consumption than others (for example, one is consistently at 100%, but others are at 30% or less), this can be a sign of a hot partition. Learn more about the [Normalized RU Consumption metric](../monitor-normalized-request-units.md).
++
+To see which logical partition keys are consuming the most RU/s,
+use [Azure Diagnostic Logs](../monitor-resource-logs.md). This sample query sums up the total request units consumed per second on each logical partition key.
+
+> [!IMPORTANT]
+> Enabling diagnostic logs incurs a separate charge for the Log Analytics service, which is billed based on the volume of data ingested. It's recommended you turn on diagnostic logs for a limited amount of time for debugging, and turn off when no longer required. See [pricing page](https://azure.microsoft.com/pricing/details/monitor/) for details.
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= ago(24hour)
+ | where CollectionName == "CollectionName"
+ | where isnotempty(PartitionKey)
+ // Sum total request units consumed by logical partition key for each second
+ | summarize sum(RequestCharge) by PartitionKey, OperationName, bin(TimeGenerated, 1s)
+ | order by sum_RequestCharge desc
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= ago(24hour)
+ | where Category == "PartitionKeyRUConsumption"
+ | where collectionName_s == "CollectionName"
+ | where isnotempty(partitionKey_s)
+ // Sum total request units consumed by logical partition key for each second
+ | summarize sum(todouble(requestCharge_s)) by partitionKey_s, operationType_s, bin(TimeGenerated, 1s)
+ | order by sum_requestCharge_s desc
+ ```
++
+This sample output shows that in a particular minute, the logical partition key with value "Contoso" consumed around 12,000 RU/s, while the logical partition key with value "Fabrikam" consumed less than 600 RU/s. If this pattern was consistent during the time period where rate limiting occurred, this would indicate a hot partition.
++
+> [!TIP]
+> In any workload, there will be natural variation in request volume across logical partitions. You should determine if the hot partition is caused by fundamental skew due to the choice of partition key (which may require changing the key) or by a temporary spike due to natural variation in workload patterns.
+
+#### Recommended solution
+Review the guidance on [how to choose a good partition key](../partitioning-overview.md#choose-partitionkey).
+
+If there's a high percentage of rate-limited requests and no hot partition:
+- You can [increase the RU/s](../set-throughput.md) on the database or container using the client SDKs, Azure portal, PowerShell, CLI or ARM template. Follow [best practices for scaling provisioned throughput (RU/s)](../scaling-provisioned-throughput-best-practices.md) to determine the right RU/s to set.
+
+If there's a high percentage of rate-limited requests and an underlying hot partition:
+- Long-term, for best cost and performance, consider **changing the partition key**. The partition key can't be updated in place, so this requires migrating the data to a new container with a different partition key. Azure Cosmos DB supports a [live data migration tool](https://devblogs.microsoft.com/cosmosdb/how-to-change-your-partition-key/) for this purpose.
+- Short-term, you can temporarily increase the RU/s to allow more throughput to the hot partition. This isn't recommended as a long-term strategy, as it leads to overprovisioning RU/s and higher cost.
+
+> [!TIP]
+> When you increase the throughput, the scale-up operation will either complete instantaneously or require up to 5-6 hours to complete, depending on the number of RU/s you want to scale up to. If you want to know the highest number of RU/s you can set without triggering the asynchronous scale-up operation (which requires Azure Cosmos DB to provision more physical partitions), multiply the number of distinct PartitionKeyRangeIds by 10,000 RU/s. For example, if you have 30,000 RU/s provisioned and 5 physical partitions (6000 RU/s allocated per physical partition), you can increase to 50,000 RU/s (10,000 RU/s per physical partition) in an instantaneous scale-up operation. Increasing to >50,000 RU/s would require an asynchronous scale-up operation. Learn more about [best practices for scaling provisioned throughput (RU/s)](../scaling-provisioned-throughput-best-practices.md).
+
+### Step 3: Determine what requests are returning 429 responses
+
+#### How to investigate requests with 429 responses
+Use [Azure Diagnostic Logs](../monitor-resource-logs.md) to identify which requests are returning 429 responses and how many RUs they consumed. This sample query aggregates at the minute level.
+
+> [!IMPORTANT]
+> Enabling diagnostic logs incurs a separate charge for the Log Analytics service, which is billed based on volume of data ingested. It is recommended you turn on diagnostic logs for a limited amount of time for debugging, and turn off when no longer required. See [pricing page](https://azure.microsoft.com/pricing/details/monitor/) for details.
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ CDBDataPlaneRequests
+ | where TimeGenerated >= ago(24h)
+ | summarize throttledOperations = dcountif(ActivityId, StatusCode == 429), totalOperations = dcount(ActivityId), totalConsumedRUPerMinute = sum(RequestCharge) by DatabaseName, CollectionName, OperationName, RequestResourceType, bin(TimeGenerated, 1min)
+ | extend averageRUPerOperation = 1.0 * totalConsumedRUPerMinute / totalOperations
+ | extend fractionOf429s = 1.0 * throttledOperations / totalOperations
+ | order by fractionOf429s desc
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= ago(24h)
+ | where Category == "DataPlaneRequests"
+ | summarize throttledOperations = dcountif(activityId_g, statusCode_s == 429), totalOperations = dcount(activityId_g), totalConsumedRUPerMinute = sum(todouble(requestCharge_s)) by databaseName_s, collectionName_s, OperationName, requestResourceType_s, bin(TimeGenerated, 1min)
+ | extend averageRUPerOperation = 1.0 * totalConsumedRUPerMinute / totalOperations
+ | extend fractionOf429s = 1.0 * throttledOperations / totalOperations
+ | order by fractionOf429s desc
+ ```
++
+For example, this sample output shows that each minute, 30% of Create Document requests were being rate limited, with each request consuming an average of 17 RUs.
+
+#### Recommended solution
+##### Use the Azure Cosmos DB capacity planner
+You can use the [Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) to estimate the provisioned throughput that best fits your workload (the volume and type of operations and the size of documents). You can further customize the calculations by providing sample data to get a more accurate estimation.
+
+##### 429 responses on create, replace, or upsert document requests
+- In the API for NoSQL, all properties are indexed by default. Tune the [indexing policy](../index-policy.md) to index only the properties that are needed.
+This will lower the Request Units required per create document operation, which will reduce the likelihood of seeing 429 responses or allow you to achieve higher operations per second for the same amount of provisioned RU/s.
+
+##### 429 responses on query document requests
+- Follow the guidance to [troubleshoot queries with high RU charge](troubleshoot-query-performance.md#querys-ru-charge-is-too-high)
+
+##### 429 responses on execute stored procedures
+- [Stored procedures](stored-procedures-triggers-udfs.md) are intended for operations that require write transactions across a partition key value. It isn't recommended to use stored procedures for a large number of read or query operations. For best performance, these read or query operations should be done on the client-side, using the Azure Cosmos DB SDKs.
+
+## Request rate is large with autoscale
+All the guidance in this article applies to both manual and autoscale throughput.
+
+When using autoscale, a common question that arises is, **"Is it still possible to see 429 responses with autoscale?"**
+
+Yes. There are two main scenarios where this can occur.
+
+**Scenario 1**: When the overall consumed RU/s exceeds the max RU/s of the database or container, the service will throttle requests accordingly. This is analogous to exceeding the overall manual provisioned throughput of a database or container.
+
+**Scenario 2**: If there's a hot partition, that is, a logical partition key value that has a disproportionately higher amount of requests compared to other partition key values, it is possible for the underlying physical partition to exceed its RU/s budget. As a best practice, to avoid hot partitions, choose a good partition key that results in an even distribution of both storage and throughput. This is similar to when there's a hot partition when using manual throughput.
+
+For example, if you select the 20,000 RU/s max throughput option and have 200 GB of storage with four physical partitions, each physical partition can be autoscaled up to 5000 RU/s. If there was a hot partition on a particular logical partition key, you'll see 429 responses when the underlying physical partition it resides in exceeds 5000 RU/s, that is, exceeds 100% normalized utilization.
+
+Follow the guidance in [Step 1](#step-1-check-the-metrics-to-determine-the-percentage-of-requests-with-429-error), [Step 2](#step-2-determine-if-theres-a-hot-partition), and [Step 3](#step-3-determine-what-requests-are-returning-429-responses) to debug these scenarios.
+
+Another common question that arises is, **"Why is normalized RU consumption 100%, but autoscale didn't scale to the max RU/s?"**
+
+This typically occurs for workloads that have temporary or intermittent spikes of usage. When you use autoscale, Azure Cosmos DB scales the RU/s to the maximum throughput only when the normalized RU consumption is 100% for a sustained, continuous period of time in a 5-second interval. This is done to keep the scaling logic cost friendly, because it ensures that single, momentary spikes don't lead to unnecessary scaling and higher cost. When there are momentary spikes, the system typically scales up to a value higher than the previously scaled-to RU/s, but lower than the max RU/s. Learn more about how to [interpret the normalized RU consumption metric with autoscale](../monitor-normalized-request-units.md#normalized-ru-consumption-and-autoscale).
+
+## Rate limiting on metadata requests
+
+Metadata rate limiting can occur when you're performing a high volume of metadata operations on databases and/or containers. Metadata operations include:
+- Create, read, update, or delete a container or database
+- List databases or containers in an Azure Cosmos DB account
+- Query for offers to see the current provisioned throughput
+
+There's a system-reserved RU limit for these operations, so increasing the provisioned RU/s of the database or container will have no impact and isn't recommended. See [limits on metadata operations](../concepts-limits.md#metadata-request-limits).
+
+#### How to investigate
+Navigate to **Insights** > **System** > **Metadata Requests By Status Code**. Filter to a specific database and container if desired.
++
+#### Recommended solution
+- If your application needs to perform metadata operations, consider implementing a backoff policy to send these requests at a lower rate.
+
+- Use static Azure Cosmos DB client instances. When the DocumentClient or CosmosClient is initialized, the Azure Cosmos DB SDK fetches metadata about the account, including information about the consistency level, databases, containers, partitions, and offers. This initialization may consume a high number of RUs, and should be performed infrequently. Use a single DocumentClient instance and use it for the lifetime of your application.
+
+- Cache the names of databases and containers. Retrieve the names of your databases and containers from configuration or cache them on start. Calls like ReadDatabaseAsync/ReadDocumentCollectionAsync or CreateDatabaseQuery/CreateDocumentCollectionQuery will result in metadata calls to the service, which consume from the system-reserved RU limit. These operations should be performed infrequently.
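+
+Here's a minimal sketch of both recommendations for the .NET v3 SDK: a single shared client instance and container references obtained without metadata calls. The endpoint, key, and database and container names are placeholders:
+
+```csharp
+using Microsoft.Azure.Cosmos;
+
+public static class CosmosClientHolder
+{
+    // One client instance for the lifetime of the application (placeholder credentials).
+    private static readonly CosmosClient client = new CosmosClient(
+        "https://<your-account>.documents.azure.com:443/",
+        "<your-account-key>");
+
+    // GetContainer builds an in-memory reference only; it doesn't issue a metadata request.
+    public static Container GetSalesContainer() =>
+        client.GetContainer("SalesDatabase", "Orders");
+}
+```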
+
+## Rate limiting due to transient service error
+
+This 429 error is returned when the request encounters a transient service error. Increasing the RU/s on the database or container will have no impact and isn't recommended.
+
+#### Recommended solution
+Retry the request. If the error persists for several minutes, file a support ticket from the [Azure portal](https://portal.azure.com/).
+
+## Next steps
+* [Monitor normalized RU/s consumption](../monitor-normalized-request-units.md) of your database or container.
+* [Diagnose and troubleshoot](troubleshoot-dotnet-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
+* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3.md) and [.NET v2](performance-tips.md).
+* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4.md) issues when you use the Azure Cosmos DB Java v4 SDK.
+* Learn about performance guidelines for [Java v4 SDK](performance-tips-java-sdk-v4.md).
cosmos-db Troubleshoot Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-request-timeout.md
+
+ Title: Troubleshoot Azure Cosmos DB service request timeout exceptions
+description: Learn how to diagnose and fix Azure Cosmos DB service request timeout exceptions.
+++ Last updated : 07/13/2020+++++
+# Diagnose and troubleshoot Azure Cosmos DB request timeout exceptions
+
+Azure Cosmos DB returned an HTTP 408 request timeout.
+
+## Troubleshooting steps
+The following list contains known causes and solutions for request timeout exceptions.
+
+### Check the SLA
+Check [Azure Cosmos DB monitoring](../monitor.md) to see if the number of 408 exceptions violates the Azure Cosmos DB SLA.
+
+#### Solution 1: It didn't violate the Azure Cosmos DB SLA
+The application should handle this scenario and retry on these transient failures.
+
+#### Solution 2: It did violate the Azure Cosmos DB SLA
+Contact [Azure Support](https://aka.ms/azure-support).
+
+### Hot partition key
+Azure Cosmos DB distributes the overall provisioned throughput evenly across physical partitions. When there's a hot partition, one or more logical partition keys on a physical partition are consuming all the physical partition's Request Units per second (RU/s). At the same time, the RU/s on other physical partitions are going unused. As a symptom, the total RU/s consumed will be less than the overall provisioned RU/s at the database or container. You'll still see throttling (429s) on the requests against the hot logical partition key. Use the [Normalized RU Consumption metric](../monitor-normalized-request-units.md) to see if the workload is encountering a hot partition.
+
+#### Solution:
+Choose a good partition key that evenly distributes request volume and storage. Learn how to [change your partition key](https://devblogs.microsoft.com/cosmosdb/how-to-change-your-partition-key/).
+
+## Next steps
+* [Diagnose and troubleshoot](troubleshoot-dotnet-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
+* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3.md) and [.NET v2](performance-tips.md).
+* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4.md) issues when you use the Azure Cosmos DB Java v4 SDK.
+* Learn about performance guidelines for [Java v4 SDK](performance-tips-java-sdk-v4.md).
cosmos-db Troubleshoot Sdk Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-sdk-availability.md
+
+ Title: Diagnose and troubleshoot the availability of Azure Cosmos DB SDKs in multiregional environments
+description: Learn all about the Azure Cosmos DB SDK availability behavior when operating in multi regional environments.
++ Last updated : 09/27/2022++++++
+# Diagnose and troubleshoot the availability of Azure Cosmos DB SDKs in multiregional environments
+
+This article describes the behavior of the latest version of Azure Cosmos DB SDKs when you see a connectivity issue to a particular region or when a region failover occurs.
+
+All the Azure Cosmos DB SDKs give you an option to customize the regional preference. The following properties are used in different SDKs:
+
+* The [ConnectionPolicy.PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations) property in .NET V2 SDK.
+* The [CosmosClientOptions.ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion) or [CosmosClientOptions.ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions) properties in .NET V3 SDK.
+* The [CosmosClientBuilder.preferredRegions](/java/api/com.azure.cosmos.cosmosclientbuilder.preferredregions) method in Java V4 SDK.
+* The [CosmosClient.preferred_locations](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) parameter in Python SDK.
+* The [CosmosClientOptions.ConnectionPolicy.preferredLocations](/javascript/api/@azure/cosmos/connectionpolicy#preferredlocations) parameter in JS SDK.
+
+When the SDK initializes with a configuration that specifies a regional preference, it first obtains the account information, including the available regions, from the global endpoint. It then intersects the configured regional preference with the account's available regions and uses the order in the regional preference to prioritize the result.
+
+If the regional preference configuration contains regions that aren't available in the account, those values are ignored. If these invalid regions are [added later to the account](#adding-a-region-to-an-account), the SDK will use them if they're higher in the preference configuration.
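+
+For example, here's a minimal sketch of configuring a regional preference with the .NET V3 SDK; the endpoint, key, and region choices are placeholders:
+
+```csharp
+using System.Collections.Generic;
+using Microsoft.Azure.Cosmos;
+
+// Sketch only: endpoint, key, and regions are placeholders.
+CosmosClient client = new CosmosClient(
+    "https://<your-account>.documents.azure.com:443/",
+    "<your-account-key>",
+    new CosmosClientOptions
+    {
+        // Regions are tried in this order; entries not available in the account are ignored.
+        ApplicationPreferredRegions = new List<string> { Regions.WestUS2, Regions.EastUS }
+    });
+```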
+
+|Account type |Reads |Writes |
+||--|--|
+| Single write region | Preferred region with highest order | Primary region |
+| Multiple write regions | Preferred region with highest order | Preferred region with highest order |
+
+If you **don't set a preferred region**, the SDK client defaults to the primary region:
+
+|Account type |Reads |Writes |
+||--|--|
+| Single write region | Primary region | Primary region |
+| Multiple write regions | Primary region | Primary region |
+
+> [!NOTE]
+> Primary region refers to the first region in the [Azure Cosmos DB account region list](../distribute-data-globally.md).
+> If the values specified as regional preference do not match with any existing Azure regions, they will be ignored. If they match an existing region but the account is not replicated to it, then the client will connect to the next preferred region that matches or to the primary region.
+
+> [!WARNING]
+> The failover and availability logic described in this document can be disabled on the client configuration, which is not advised unless the user application is going to handle availability errors itself. This can be achieved by:
+>
+> * Setting the [ConnectionPolicy.EnableEndpointDiscovery](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.enableendpointdiscovery) property in .NET V2 SDK to false.
+> * Setting the [CosmosClientOptions.LimitToEndpoint](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.limittoendpoint) property in .NET V3 SDK to true.
+> * Setting the [CosmosClientBuilder.endpointDiscoveryEnabled](/java/api/com.azure.cosmos.cosmosclientbuilder.endpointdiscoveryenabled) method in Java V4 SDK to false.
+> * Setting the [CosmosClient.enable_endpoint_discovery](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) parameter in Python SDK to false.
+> * Setting the [CosmosClientOptions.ConnectionPolicy.enableEndpointDiscovery](/javascript/api/@azure/cosmos/connectionpolicy#enableEndpointDiscovery) parameter in JS SDK to false.
+
+Under normal circumstances, the SDK client will connect to the preferred region (if a regional preference is set) or to the primary region (if no preference is set), and the operations will be limited to that region, unless any of the below scenarios occur.
+
+In these cases, the client using the Azure Cosmos DB SDK exposes logs and includes the retry information as part of the **operation diagnostic information**:
+
+* The *RequestDiagnosticsString* property on responses in .NET V2 SDK.
+* The *Diagnostics* property on responses and exceptions in .NET V3 SDK.
+* The *getDiagnostics()* method on responses and exceptions in Java V4 SDK.
+
+When determining the next region in order of preference, the SDK client will use the account region list, prioritizing the preferred regions (if any).
+
+For a comprehensive detail on SLA guarantees during these events, see the [SLAs for availability](../high-availability.md#slas).
+
+## <a id="remove-region"></a>Removing a region from the account
+
+When you remove a region from an Azure Cosmos DB account, any SDK client that actively uses the account will detect the region removal through a backend response code. The client then marks the regional endpoint as unavailable. The client retries the current operation and all the future operations are permanently routed to the next region in order of preference. In case the preference list only had one entry (or was empty) but the account has other regions available, it will route to the next region in the account list.
+
+## Adding a region to an account
+
+Every 5 minutes, the Azure Cosmos DB SDK client reads the account configuration and refreshes the regions that it's aware of.
+
+If you remove a region and later add it back to the account, and the added region has a higher regional preference order in the SDK configuration than the current connected region, the SDK will switch back to using this region permanently. After the added region is detected, all future requests are directed to it.
+
+If you configure the client to preferably connect to a region that the Azure Cosmos DB account doesn't have, the preferred region is ignored. If you add that region later, the client detects it, and will switch permanently to that region.
+
+## <a id="manual-failover-single-region"></a>Fail over the write region in a single write region account
+
+If you initiate a failover of the current write region, the next write request will fail with a known backend response. When this response is detected, the client will query the account to learn the new write region, and proceed to retry the current operation and permanently route all future write operations to the new region.
+
+## Regional outage
+
+If the account has a single write region and a regional outage occurs during a write operation, the behavior is similar to a [manual failover](#manual-failover-single-region). For read requests or multiple write region accounts, the behavior is similar to [removing a region](#remove-region).
+
+## Session consistency guarantees
+
+When using [session consistency](../consistency-levels.md#guarantees-associated-with-consistency-levels), the client needs to guarantee that it can read its own writes. In single write region accounts where the read region preference is different from the write region, there could be cases where the user issues a write and when doing a read from a local region, the local region hasn't yet received the data replication (speed of light constraint). In such cases, the SDK detects the specific failure on the read operation and retries the read on the primary region to ensure session consistency.
+
+## Transient connectivity issues on TCP protocol
+
+In scenarios where the Azure Cosmos DB SDK client is configured to use the TCP protocol, for a given request, there might be situations where the network conditions are temporarily affecting the communication with a particular endpoint. These temporary network conditions can surface as TCP timeouts and Service Unavailable (HTTP 503) errors. The client will, if possible, [retry the request locally](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503) on the same endpoint for some seconds.
+
+If the user has configured a preferred region list with more than one region and the client exhausted all local retries, it can attempt to retry that single operation in the next region from the preference list. Write operations can be retried in another region only if the Azure Cosmos DB account has multiple write regions enabled, while read operations can be retried in any available region.
+
+## Next steps
+
+* Review the [Availability SLAs](../high-availability.md#slas).
+* Use the latest [.NET SDK](sdk-dotnet-v3.md)
+* Use the latest [Java SDK](sdk-java-v4.md)
+* Use the latest [Python SDK](sdk-python.md)
+* Use the latest [Node SDK](sdk-nodejs.md)
cosmos-db Troubleshoot Service Unavailable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-service-unavailable.md
+
+ Title: Troubleshoot Azure Cosmos DB service unavailable exceptions
+description: Learn how to diagnose and fix Azure Cosmos DB service unavailable exceptions.
+++ Last updated : 08/31/2022+++++
+# Diagnose and troubleshoot Azure Cosmos DB service unavailable exceptions
+
+The SDK wasn't able to connect to Azure Cosmos DB. This scenario can be transient or permanent depending on the network conditions.
+
+It's important to make sure the application design follows our [guide for designing resilient applications with Azure Cosmos DB SDKs](conceptual-resilient-sdk-applications.md) so that it correctly reacts to different network conditions. Your application should have retries in place for service unavailable errors.
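+
+As a rough illustration only (see the linked guide for the full recommendations), a minimal retry wrapper for the .NET v3 SDK might look like the following sketch; the delegate, attempt count, and delay are assumptions:
+
+```csharp
+using System;
+using System.Net;
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+
+// Sketch only: retries an operation a few times when the SDK surfaces a 503.
+static async Task<T> ExecuteWithRetriesAsync<T>(Func<Task<T>> operation, int maxAttempts = 3)
+{
+    for (int attempt = 1; ; attempt++)
+    {
+        try
+        {
+            return await operation();
+        }
+        catch (CosmosException ex) when (
+            ex.StatusCode == HttpStatusCode.ServiceUnavailable && attempt < maxAttempts)
+        {
+            // Back off briefly before the next attempt.
+            await Task.Delay(TimeSpan.FromSeconds(attempt));
+        }
+    }
+}
+```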
+
+When evaluating the case for service unavailable errors:
+
+* What is the impact measured in volume of operations affected compared to the operations succeeding? Is it within the service SLAs?
+* Is the P99 latency / availability affected?
+* Are the failures affecting all your application instances or only a subset? When the issue is reduced to a subset of instances, it's commonly a problem related to those instances.
+
+## Troubleshooting steps
+
+The following list contains known causes and solutions for service unavailable exceptions.
+
+### Verify the substatus code
+
+In certain conditions, the HTTP 503 Service Unavailable error will include a substatus code that helps to identify the cause.
+
+| SubStatus Code | Description |
+|-|-|
+| 20001 | The service unavailable error happened because there are client side [connectivity issues](#client-side-transient-connectivity-issues) (failures attempting to connect). The client attempted to recover by [retrying](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503) but all retries failed. |
+| 20002 | The service unavailable error happened because there are client side [timeouts](troubleshoot-dotnet-sdk-request-timeout.md#troubleshooting-steps). The client attempted to recover by [retrying](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503) but all retries failed. |
+| 20003 | The service unavailable error happened because there are underlying I/O errors related to the operating system. See the exception details for the related I/O error. |
+| 20004 | The service unavailable error happened because [client machine's CPU is overloaded](troubleshoot-dotnet-sdk-request-timeout.md#high-cpu-utilization). |
+| 20005 | The service unavailable error happened because client machine's threadpool is starved. Verify any potential [blocking async calls in your code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait). |
+| >= 21001 | This service unavailable error happened due to a transient service condition. Verify the conditions in the above section, confirm if you have retry policies in place. If the volume of these errors is high compared with successes, reach out to Azure Support. |
+
+### The required ports are being blocked
+
+Verify that all the [required ports](sdk-connection-modes.md#service-port-ranges) are enabled.
+
+### Client-side transient connectivity issues
+
+Service unavailable exceptions can surface when there are transient connectivity problems that are causing timeouts and can be safely retried following the [design recommendations](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503).
+
+Follow the [request timeout troubleshooting steps](troubleshoot-dotnet-sdk-request-timeout.md#troubleshooting-steps) to resolve it.
+
+### Service outage
+
+Check the [Azure status](https://azure.status.microsoft/status) to see if there's an ongoing issue.
+
+## Next steps
+
+* [Diagnose and troubleshoot](troubleshoot-dotnet-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
+* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4.md) issues when you use the Azure Cosmos DB Java SDK.
+* Learn about performance guidelines for [.NET](performance-tips-dotnet-sdk-v3.md).
+* Learn about performance guidelines for [Java](performance-tips-java-sdk-v4.md).
cosmos-db Troubleshoot Unauthorized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-unauthorized.md
+
+ Title: Troubleshoot Azure Cosmos DB unauthorized exceptions
+description: Learn how to diagnose and fix unauthorized exceptions.
+++ Last updated : 07/13/2020+++++
+# Diagnose and troubleshoot Azure Cosmos DB unauthorized exceptions
+
+HTTP 401: The MAC signature found in the HTTP request isn't the same as the computed signature.
+If you received the 401 error message "The MAC signature found in the HTTP request is not the same as the computed signature", it can be caused by the following scenarios.
+
+For older SDKs, the exception can appear as an invalid JSON exception instead of the correct 401 unauthorized exception. Newer SDKs properly handle this scenario and give a valid error message.
+
+## Troubleshooting steps
+The following list contains known causes and solutions for unauthorized exceptions.
+
+### The key wasn't properly rotated (most common scenario)
+The 401 MAC signature error is seen shortly after a key rotation and eventually stops without any changes.
+
+#### Solution:
+The key was rotated and didn't follow the [best practices](../secure-access-to-data.md#key-rotation). The Azure Cosmos DB account key rotation can take anywhere from a few seconds to possibly days depending on the Azure Cosmos DB account size.
+
+### The key is misconfigured
+The 401 MAC signature issue will be consistent and happens for all calls using that key.
+
+#### Solution:
+The key is misconfigured on the application and is using the wrong key for the account, or the entire key wasn't copied.
+
+### The application is using the read-only keys for write operations
+The 401 MAC signature issue only occurs for write operations like create or replace, but read requests succeed.
+
+#### Solution:
+Switch the application to use a read/write key to allow the operations to complete successfully.
+
+### Race condition with create container
+The 401 MAC signature issue is seen shortly after a container creation. This issue occurs only until the container creation is completed.
+
+#### Solution:
+There's a race condition with container creation. An application instance is trying to access the container before the container creation is complete. The most common scenario for this race condition is if the application is running and the container is deleted and re-created with the same name. The SDK attempts to use the new container, but the container creation is still in progress so it doesn't have the keys.
+
+### Bulk mode enabled
+When using [Bulk mode enabled](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/), read and write operations are optimized for best network performance and sent to the backend through a dedicated Bulk API. 401 errors while performing read operations with Bulk mode enabled often mean that the application is using the [read-only keys](../secure-access-to-data.md#primary-keys).
+
+#### Solution
+Use the read/write keys or authorization mechanism with write access when performing operations with Bulk mode enabled.
+
+## Next steps
+* [Diagnose and troubleshoot](troubleshoot-dotnet-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
+* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3.md) and [.NET v2](performance-tips.md).
+* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4.md) issues when you use the Azure Cosmos DB Java v4 SDK.
+* Learn about performance guidelines for [Java v4 SDK](performance-tips-java-sdk-v4.md).
cosmos-db Tutorial Create Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-create-notebook.md
+
+ Title: |
+ Tutorial: Create a Jupyter Notebook in Azure Cosmos DB for NoSQL to analyze and visualize data (preview)
+description: |
+ Learn how to use built-in Jupyter notebooks to import data to Azure Cosmos DB for NoSQL, analyze the data, and visualize the output.
++ Last updated : 09/29/2022+++++
+# Tutorial: Create a Jupyter Notebook in Azure Cosmos DB for NoSQL to analyze and visualize data (preview)
++
+> [!IMPORTANT]
+> The Jupyter Notebooks feature of Azure Cosmos DB is currently in a preview state and is progressively rolling out to all customers over time.
+
+This article describes how to use the Jupyter Notebooks feature of Azure Cosmos DB to import sample retail data to an Azure Cosmos DB for NoSQL account. You'll see how to use the Azure Cosmos DB magic commands to run queries, analyze the data, and visualize the results.
+
+## Prerequisites
+
+- [Azure Cosmos DB for NoSQL account](create-cosmosdb-resources-portal.md#create-an-azure-cosmos-db-account) (configured with serverless throughput)
+
+## Create a new notebook
+
+In this section, you'll create the Azure Cosmos DB database and container, and import the retail data to the container.
+
+1. Navigate to your Azure Cosmos DB account and open the **Data Explorer.**
+
+1. Select **New Notebook**.
+
+ :::image type="content" source="media/tutorial-create-notebook/new-notebook-option.png" lightbox="media/tutorial-create-notebook/new-notebook-option.png" alt-text="Screenshot of the Data Explorer with the 'New Notebook' option highlighted.":::
+
+1. In the confirmation dialog that appears, select **Create**.
+
+ > [!NOTE]
+ > A temporary workspace will be created to enable you to work with Jupyter Notebooks. When the session expires, any notebooks in the workspace will be removed.
+
+1. Select the kernel you wish to use for the notebook.
+
+### [Python](#tab/python)
++
+### [C#](#tab/csharp)
++++
+> [!TIP]
+> Now that the new notebook has been created, you can rename it to something like **VisualizeRetailData.ipynb**.
+
+## Create a database and container using the SDK
+
+### [Python](#tab/python)
+
+1. Start in the default code cell.
+
+1. Import any packages you require for this tutorial.
+
+ ```python
+ import azure.cosmos
+ from azure.cosmos.partition_key import PartitionKey
+ ```
+
+1. Create a database named **RetailIngest** using the built-in SDK.
+
+ ```python
+ database = cosmos_client.create_database_if_not_exists('RetailIngest')
+ ```
+
+1. Create a container named **WebsiteMetrics** with a partition key of `/CartID`.
+
+ ```python
+ container = database.create_container_if_not_exists(id='WebsiteMetrics', partition_key=PartitionKey(path='/CartID'))
+ ```
+
+1. Select **Run** to create the database and container resource.
+
+ :::image type="content" source="media/tutorial-create-notebook/run-cell.png" alt-text="Screenshot of the 'Run' option in the menu.":::
+
+### [C#](#tab/csharp)
+
+1. Start in the default code cell.
+
+1. Import any packages you require for this tutorial.
+
+ ```csharp
+ using Microsoft.Azure.Cosmos;
+ ```
+
+1. Create a new instance of the client type using the built-in SDK.
+
+ ```csharp
+ var cosmosClient = new CosmosClient(Cosmos.Endpoint, Cosmos.Key);
+ ```
+
+1. Create a database named **RetailIngest**.
+
+ ```csharp
+ Database database = await cosmosClient.CreateDatabaseIfNotExistsAsync("RetailIngest");
+ ```
+
+1. Create a container named **WebsiteMetrics** with a partition key of `/CartID`.
+
+ ```csharp
+ Container container = await database.CreateContainerIfNotExistsAsync("WebsiteMetrics", "/CartID");
+ ```
+
+1. Select **Run** to create the database and container resource.
+
+ :::image type="content" source="media/tutorial-create-notebook/run-cell.png" alt-text="Screenshot of the 'Run' option in the menu.":::
+++
+## Import data using magic commands
+
+1. Add a new code cell.
+
+1. Within the code cell, add the following magic command to upload the JSON data from <https://cosmosnotebooksdata.blob.core.windows.net/notebookdata/websiteData.json> to your existing container:
+
+ ```python
+ %%upload --databaseName RetailIngest --containerName WebsiteMetrics --url https://cosmosnotebooksdata.blob.core.windows.net/notebookdata/websiteData.json
+ ```
+
+1. Select **Run Active Cell** to only run the command in this specific cell.
+
+ :::image type="content" source="media/tutorial-create-notebook/run-active-cell.png" alt-text="Screenshot of the 'Run Active Cell' option in the menu.":::
+
+ > [!NOTE]
+ > The import command should take 5-10 seconds to complete.
+
+1. Observe the output from the run command. Ensure that **2,654** documents were imported.
+
+ ```output
+ Documents successfully uploaded to WebsiteMetrics
+ Total number of documents imported:
+ Success: 2654
+ Failure: 0
+ Total time taken : 00:00:04 hours
+ Total RUs consumed : 27309.660000001593
+ ```
+
+## Visualize your data
+
+### [Python](#tab/python)
+
+1. Create another new code cell.
+
+1. In the code cell, use a SQL query to populate a [Pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html#pandas.DataFrame).
+
+ ```python
+ %%sql --database RetailIngest --container WebsiteMetrics --output df_cosmos
+ SELECT c.Action, c.Price as ItemRevenue, c.Country, c.Item FROM c
+ ```
+
+1. Select **Run Active Cell** to only run the command in this specific cell.
+
+1. Create another new code cell.
+
+1. In the code cell, output the top **10** items from the dataframe.
+
+ ```python
+ df_cosmos.head(10)
+ ```
+
+1. Select **Run Active Cell** to only run the command in this specific cell.
+
+1. Observe the output of running the command.
+
+ | | Action | ItemRevenue | Country | Item |
+ | --- | --- | --- | --- | --- |
+ | **0** | Purchased | 19.99 | Macedonia | Button-Up Shirt |
+ | **1** | Viewed | 12.00 | Papua New Guinea | Necklace |
+ | **2** | Viewed | 25.00 | Slovakia (Slovak Republic) | Cardigan Sweater |
+ | **3** | Purchased | 14.00 | Senegal | Flip Flop Shoes |
+ | **4** | Viewed | 50.00 | Panama | Denim Shorts |
+ | **5** | Viewed | 14.00 | Senegal | Flip Flop Shoes |
+ | **6** | Added | 14.00 | Senegal | Flip Flop Shoes |
+ | **7** | Added | 50.00 | Panama | Denim Shorts |
+ | **8** | Purchased | 33.00 | Palestinian Territory | Red Top |
+ | **9** | Viewed | 30.00 | Malta | Green Sweater |
+
+1. Create another new code cell.
+
+1. In the code cell, import the **pandas** package to customize the output of the dataframe.
+
+ ```python
+ import pandas as pd
+ pd.options.display.html.table_schema = True
+ pd.options.display.max_rows = None
+
+ df_cosmos.groupby("Item").size()
+ ```
+
+1. Select **Run Active Cell** to only run the command in this specific cell.
+
+1. In the output, select the **Line Chart** option to view a different visualization of the data.
+
+ :::image type="content" source="media/tutorial-create-notebook/pandas-python-line-chart.png" alt-text="Screenshot of the Pandas dataframe visualization for the data as a line chart.":::
+
+### [C#](#tab/csharp)
+
+1. Create a new code cell.
+
+1. In the code cell, create a new C# class to represent an item in the container.
+
+ ```csharp
+ public class Record
+ {
+ public string Action { get; set; }
+ public decimal Price { get; set; }
+ public string Country { get; set; }
+ public string Item { get; set; }
+ }
+ ```
+
+1. Create another new code cell.
+
+1. In the code cell, add code to [execute a SQL query using the SDK](quickstart-dotnet.md#query-items) storing the output of the query in a variable of type <xref:System.Collections.Generic.List%601> named **results**.
+
+ ```csharp
+ using System.Collections.Generic;
+
+ var query = new QueryDefinition(
+ query: "SELECT c.Action, c.Price, c.Country, c.Item FROM c"
+ );
+
+ FeedIterator<Record> feed = container.GetItemQueryIterator<Record>(
+ queryDefinition: query
+ );
+
+ var results = new List<Record>();
+ while (feed.HasMoreResults)
+ {
+ FeedResponse<Record> response = await feed.ReadNextAsync();
+ foreach (Record result in response)
+ {
+ results.Add(result);
+ }
+ }
+ ```
+
+1. Create another new code cell.
+
+1. In the code cell, create a dictionary by adding unique permutations of the **Item** field as the key and the data in the **Price** field as the value.
+
+ ```csharp
+ var dictionary = new Dictionary<string, decimal>();
+
+ foreach(var result in results)
+ {
+ dictionary.TryAdd (result.Item, result.Price);
+ }
+
+ dictionary
+ ```
+
+1. Select **Run Active Cell** to only run the command in this specific cell.
+
+1. Observe the output with unique combinations of the **Item** and **Price** fields.
+
+ ```output
+ ...
+ Denim Jacket:31.99
+ Fleece Jacket:65
+ Sandals:12
+ Socks:3.75
+ Sandal:35.5
+ Light Jeans:80
+ ...
+ ```
+
+1. Create another new code cell.
+
+1. In the code cell, output the **results** variable.
+
+ ```csharp
+ results
+ ```
+
+1. Select **Run Active Cell** to only run the command in this specific cell.
+
+1. In the output, select the **Box Plot** option to view a different visualization of the data.
+
+ :::image type="content" source="media/tutorial-create-notebook/pandas-csharp-box-plot.png" alt-text="Screenshot of the Pandas dataframe visualization for the data as a box plot.":::
+++
+## Persist your notebook
+
+1. In the **Notebooks** section, open the context menu for the notebook you created for this tutorial and select **Download**.
+
+ :::image type="content" source="media/tutorial-create-notebook/download-notebook.png" alt-text="Screenshot of the notebook context menu with the 'Download' option.":::
+
+ > [!TIP]
+ > To save your work permanently, save your notebooks to a GitHub repository or download the notebooks to your local machine before the session ends.
+
+## Next steps
+
+- [Learn about the Jupyter Notebooks feature in Azure Cosmos DB](../notebooks-overview.md)
+- [Review the FAQ on Jupyter Notebook support](../notebooks-faq.yml)
cosmos-db Tutorial Dotnet Bulk Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-dotnet-bulk-import.md
+
+ Title: Bulk import data to Azure Cosmos DB for NoSQL account by using the .NET SDK
+description: Learn how to import or ingest data to Azure Cosmos DB by building a .NET console application that optimizes provisioned throughput (RU/s) required for importing data
+++++ Last updated : 03/25/2022+
+ms.devlang: csharp
++
+# Bulk import data to Azure Cosmos DB for NoSQL account by using the .NET SDK
+
+This tutorial shows how to build a .NET console application that optimizes provisioned throughput (RU/s) required to import data to Azure Cosmos DB.
+
+>
+> [!VIDEO https://aka.ms/docs.learn-live-dotnet-bulk]
+
+In this article, you'll read data from a sample data source and import it into an Azure Cosmos DB container.
+This tutorial uses [Version 3.0+](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) of the Azure Cosmos DB .NET SDK, which can be targeted to .NET Framework or .NET Core.
+
+This tutorial covers:
+
+> [!div class="checklist"]
+> * Creating an Azure Cosmos DB account
+> * Configuring your project
+> * Connecting to an Azure Cosmos DB account with bulk support enabled
+> * Performing a data import through concurrent create operations
+
+## Prerequisites
+
+Before following the instructions in this article, make sure that you have the following resources:
+
+* An active Azure account. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+ [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
+
+* [.NET Core 3 SDK](https://dotnet.microsoft.com/download/dotnet-core). You can verify which version is available in your environment by running `dotnet --version`.
+
+## Step 1: Create an Azure Cosmos DB account
+
+[Create an Azure Cosmos DB for NoSQL account](quickstart-portal.md) from the Azure portal or you can create the account by using the [Azure Cosmos DB Emulator](../local-emulator.md).
+
+## Step 2: Set up your .NET project
+
+Open the Windows command prompt or a Terminal window from your local computer. You'll run all the commands in the next sections from the command prompt or terminal. Run the following `dotnet new` command to create a new app with the name *bulk-import-demo*.
+
+ ```bash
+ dotnet new console -n bulk-import-demo
+ ```
+
+Change your directory to the newly created app folder. You can build the application with:
+
+ ```bash
+ cd bulk-import-demo
+ dotnet build
+ ```
+
+The expected output from the build should look something like this:
+
+ ```bash
+ Restore completed in 100.37 ms for C:\Users\user1\Downloads\CosmosDB_Samples\bulk-import-demo\bulk-import-demo.csproj.
+ bulk -> C:\Users\user1\Downloads\CosmosDB_Samples\bulk-import-demo \bin\Debug\netcoreapp2.2\bulk-import-demo.dll
+
+ Build succeeded.
+ 0 Warning(s)
+ 0 Error(s)
+
+ Time Elapsed 00:00:34.17
+ ```
+
+## Step 3: Add the Azure Cosmos DB package
+
+While still in the application directory, install the Azure Cosmos DB client library for .NET Core by using the `dotnet add package` command.
+
+ ```bash
+ dotnet add package Microsoft.Azure.Cosmos
+ ```
+
+## Step 4: Get your Azure Cosmos DB account credentials
+
+The sample application needs to authenticate to your Azure Cosmos DB account. To authenticate, you should pass the Azure Cosmos DB account credentials to the application. Get your Azure Cosmos DB account credentials by following these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Navigate to your Azure Cosmos DB account.
+1. Open the **Keys** pane and copy the **URI** and **PRIMARY KEY** of your account.
+
+If you're using the Azure Cosmos DB Emulator, obtain the [emulator credentials from this article](../local-emulator.md#authenticate-requests).
+
+## Step 5: Initialize the CosmosClient object with bulk execution support
+
+Open the generated `Program.cs` file in a code editor. You'll create a new instance of CosmosClient with bulk execution enabled and use it to do operations against Azure Cosmos DB.
+
+Let's start by overwriting the default `Main` method and defining the global variables. These global variables include the endpoint and authorization key, the name of the database and container that you'll create, and the number of items that you'll insert in bulk. Make sure to replace the endpoint URL and authorization key values according to your environment.
++
+ ```csharp
+ using System;
+ using System.Collections.Generic;
+ using System.Diagnostics;
+ using System.IO;
+ using System.Text.Json;
+ using System.Threading.Tasks;
+ using Microsoft.Azure.Cosmos;
+
+ public class Program
+ {
+ private const string EndpointUrl = "https://<your-account>.documents.azure.com:443/";
+ private const string AuthorizationKey = "<your-account-key>";
+ private const string DatabaseName = "bulk-tutorial";
+ private const string ContainerName = "items";
+ private const int AmountToInsert = 300000;
+
+ static async Task Main(string[] args)
+ {
+
+ }
+ }
+ ```
+
+Inside the `Main` method, add the following code to initialize the CosmosClient object:
+
+[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=CreateClient)]
+
+> [!NOTE]
+> After bulk execution is specified in the [CosmosClientOptions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions), the options are effectively immutable for the lifetime of the CosmosClient. Changing the values has no effect.
+
+After bulk execution is enabled, the CosmosClient internally groups concurrent operations into single service calls. This way, it optimizes throughput utilization by distributing service calls across partitions and then assigning the individual results to the original callers.
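+
+The referenced snippet isn't reproduced inline here. As a rough, hedged sketch (not necessarily the sample's exact code), enabling bulk support when constructing the client looks like the following, reusing the `EndpointUrl` and `AuthorizationKey` constants defined earlier:
+
+```csharp
+// Sketch: create a CosmosClient with bulk execution enabled.
+// EndpointUrl and AuthorizationKey are the constants declared in the Program class above.
+CosmosClientOptions options = new CosmosClientOptions { AllowBulkExecution = true };
+CosmosClient cosmosClient = new CosmosClient(EndpointUrl, AuthorizationKey, options);
+```
+
+Because `AllowBulkExecution` is read once at construction time, decide up front whether a given client instance is used for bulk workloads.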
+
+You can then create a container to store all the items. Define `/pk` as the partition key, 50,000 RU/s as the provisioned throughput, and a custom indexing policy that excludes all fields to optimize the write throughput. Add the following code after the CosmosClient initialization statement:
+
+[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=Initialize)]
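+
+A minimal sketch of that container setup, assuming the `cosmosClient` instance and the `DatabaseName`/`ContainerName` constants from the previous steps, might look like the following (the exact indexing-policy shape in the sample may differ):
+
+```csharp
+// Sketch: create the database, then a container partitioned on /pk with 50,000 RU/s
+// and an indexing policy that excludes all paths to favor write throughput.
+Database database = await cosmosClient.CreateDatabaseIfNotExistsAsync(DatabaseName);
+
+ContainerProperties containerProperties = new ContainerProperties(ContainerName, partitionKeyPath: "/pk");
+containerProperties.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/*" });
+
+Container container = await database.CreateContainerIfNotExistsAsync(containerProperties, throughput: 50000);
+```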
+
+## Step 6: Populate a list of concurrent tasks
+
+To take advantage of the bulk execution support, create a list of asynchronous tasks based on the source of data and the operations you want to perform, and use `Task.WhenAll` to execute them concurrently.
+Let's start by using the Bogus package to generate a list of items from the data model. In a real-world application, the items would come from your desired data source.
+
+First, add the Bogus package to the solution by using the `dotnet add package` command.
+
+ ```bash
+ dotnet add package Bogus
+ ```
+
+Define the `Item` class that represents the items you want to save. Add the class within the `Program.cs` file:
+
+[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=Model)]
+
+Next, create a helper function inside the `Program` class. This helper function takes the number of items that you defined to insert and generates random data for them:
+
+[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=Bogus)]
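+
+The generator itself isn't shown inline. A hedged sketch of the idea, in which the helper name and the `Item` properties (`id`, `username`, `pk`) are illustrative placeholders rather than the sample's guaranteed shape, could look like this:
+
+```csharp
+// Sketch: build AmountToInsert fake items with Bogus.
+// The Item property names used in the rules are placeholders.
+private static IReadOnlyCollection<Item> GetItemsToInsert()
+{
+    return new Bogus.Faker<Item>()
+        .StrictMode(true)
+        .RuleFor(i => i.id, f => Guid.NewGuid().ToString())
+        .RuleFor(i => i.username, f => f.Internet.UserName())
+        .RuleFor(i => i.pk, (f, i) => i.id)
+        .Generate(AmountToInsert);
+}
+```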
+
+Use the helper function to initialize a list of documents to work with:
+
+[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=Operations)]
+
+Next, use the list of documents to create concurrent tasks and populate the task list to insert the items into the container. To perform this operation, add the following code to the `Program` class:
+
+[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=ConcurrentTasks)]
+
+All these concurrent point operations are executed together (that is, in bulk), as described in the introduction section.
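+
+A condensed sketch of that pattern, assuming the `container` object and an `itemsToInsert` collection like the ones created in the previous steps (the variable names are placeholders), is:
+
+```csharp
+// Sketch: queue one CreateItemAsync task per item, then await them all at once.
+List<Task> tasks = new List<Task>(AmountToInsert);
+foreach (Item item in itemsToInsert)
+{
+    tasks.Add(container.CreateItemAsync(item, new PartitionKey(item.pk))
+        .ContinueWith(itemResponse =>
+        {
+            if (!itemResponse.IsCompletedSuccessfully)
+            {
+                Console.WriteLine($"Failed to create item {item.id}: {itemResponse.Exception?.Flatten().InnerException?.Message}");
+            }
+        }));
+}
+
+// With bulk execution enabled, the SDK batches these concurrent operations into fewer service calls.
+await Task.WhenAll(tasks);
+```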
+
+## Step 7: Run the sample
+
+To run the sample, use the `dotnet run` command:
+
+ ```bash
+ dotnet run
+ ```
+
+## Get the complete sample
+
+If you didn't have time to complete the steps in this tutorial, or just want to download the code samples, you can get the complete sample from [GitHub](https://github.com/Azure-Samples/cosmos-dotnet-bulk-import-throughput-optimizer).
+
+After cloning the project, make sure to update the desired credentials inside [Program.cs](https://github.com/Azure-Samples/cosmos-dotnet-bulk-import-throughput-optimizer/blob/main/src/Program.cs#L25).
+
+The sample can be run by changing to the repository directory and using `dotnet`:
+
+ ```bash
+ cd cosmos-dotnet-bulk-import-throughput-optimizer
+ dotnet run
+ ```
+
+## Next steps
+
+In this tutorial, you completed the following steps:
+
+> [!div class="checklist"]
+> * Creating an Azure Cosmos DB account
+> * Configuring your project
+> * Connecting to an Azure Cosmos DB account with bulk support enabled
+> * Performing a data import through concurrent create operations
+
+You can now proceed to the next tutorial:
+
+> [!div class="nextstepaction"]
+>[Query Azure Cosmos DB by using the API for NoSQL](tutorial-query.md)
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Tutorial Dotnet Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-dotnet-web-app.md
+
+ Title: ASP.NET Core MVC web app tutorial using Azure Cosmos DB
+description: ASP.NET Core MVC tutorial to create an MVC web application using Azure Cosmos DB. You'll store JSON and access data from a todo app hosted on Azure App Service - ASP.NET Core MVC tutorial step by step.
++++
+ms.devlang: csharp
+ Last updated : 05/02/2020+++
+# Tutorial: Develop an ASP.NET Core MVC web application with Azure Cosmos DB by using .NET SDK
+
+> [!div class="op_single_selector"]
+>
+> * [.NET](tutorial-dotnet-web-app.md)
+> * [Java](tutorial-java-web-app.md)
+> * [Node.js](tutorial-nodejs-web-app.md)
+>
+
+This tutorial shows you how to use Azure Cosmos DB to store and access data from an ASP.NET MVC application that is hosted on Azure. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). In this tutorial, you use the .NET SDK V3. The following image shows the web page that you'll build by using the sample in this article:
++
+If you don't have time to complete the tutorial, you can download the complete sample project from [GitHub][GitHub].
+
+This tutorial covers:
+
+> [!div class="checklist"]
+>
+> * Creating an Azure Cosmos DB account
+> * Creating an ASP.NET Core MVC app
+> * Connecting the app to Azure Cosmos DB
+> * Performing create, read, update, and delete (CRUD) operations on the data
+
+> [!TIP]
+> This tutorial assumes that you have prior experience using ASP.NET Core MVC and Azure App Service. If you are new to ASP.NET Core or the [prerequisite tools](#prerequisites), we recommend that you download the complete sample project from [GitHub][GitHub], add the required NuGet packages, and run it. Once you build the project, you can review this article to gain insight on the code in the context of the project.
+
+## Prerequisites
+
+Before following the instructions in this article, make sure that you have the following resources:
+
+* An active Azure account. If you don't have an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card.
+
+ [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
+
+* Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
+
+All the screenshots in this article are from Microsoft Visual Studio Community 2019. If you use a different version, your screens and options may not match entirely. The solution should work if you meet the prerequisites.
+
+## Step 1: Create an Azure Cosmos DB account
+
+Let's start by creating an Azure Cosmos DB account. If you already have an Azure Cosmos DB for NoSQL account or if you're using the Azure Cosmos DB Emulator, skip to [Step 2: Create a new ASP.NET MVC application](#step-2-create-a-new-aspnet-core-mvc-application).
+++
+In the next section, you create a new ASP.NET Core MVC application.
+
+## Step 2: Create a new ASP.NET Core MVC application
+
+1. Open Visual Studio and select **Create a new project**.
+
+1. In **Create a new project**, find and select **ASP.NET Core Web Application** for C#. Select **Next** to continue.
+
+ :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-new-project-dialog.png" alt-text="Create new ASP.NET Core web application project":::
+
+1. In **Configure your new project**, name the project *todo* and select **Create**.
+
+1. In **Create a new ASP.NET Core Web Application**, choose **Web Application (Model-View-Controller)**. Select **Create** to continue.
+
+ Visual Studio creates an empty MVC application.
+
+1. Select **Debug** > **Start Debugging** or press F5 to run your ASP.NET application locally.
+
+## Step 3: Add Azure Cosmos DB NuGet package to the project
+
+Now that we have most of the ASP.NET Core MVC framework code that we need for this solution, let's add the NuGet packages required to connect to Azure Cosmos DB.
+
+1. In **Solution Explorer**, right-click your project and select **Manage NuGet Packages**.
+
+1. In the **NuGet Package Manager**, search for and select **Microsoft.Azure.Cosmos**. Select **Install**.
+
+ :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-nuget.png" alt-text="Install NuGet package":::
+
+ Visual Studio downloads and installs the Azure Cosmos DB package and its dependencies.
+
+ You can also use **Package Manager Console** to install the NuGet package. To do so, select **Tools** > **NuGet Package Manager** > **Package Manager Console**. At the prompt, type the following command:
+
+ ```ps
+ Install-Package Microsoft.Azure.Cosmos
+ ```
+
+## Step 4: Set up the ASP.NET Core MVC application
+
+Now let's add the models, the views, and the controllers to this MVC application.
+
+### Add a model
+
+1. In **Solution Explorer**, right-click the **Models** folder, select **Add** > **Class**.
+
+1. In **Add New Item**, name your new class *Item.cs* and select **Add**.
+
+1. Replace the contents of *Item.cs* class with the following code:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Models/Item.cs":::
+
+Azure Cosmos DB uses JSON to move and store data. You can use the `JsonProperty` attribute to control how JSON serializes and deserializes objects. The `Item` class demonstrates the `JsonProperty` attribute: this code controls the format of the property names that go into JSON, and it also renames the .NET `Completed` property.
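+
+The full *Item.cs* lives in the linked sample. As a brief, hedged illustration (the property names here are placeholders, not necessarily the sample's), the attribute usage looks like this:
+
+```csharp
+using Newtonsoft.Json;
+
+namespace todo.Models
+{
+    public class Item
+    {
+        // Serialized as "id" in the JSON document stored in Azure Cosmos DB.
+        [JsonProperty(PropertyName = "id")]
+        public string Id { get; set; }
+
+        [JsonProperty(PropertyName = "name")]
+        public string Name { get; set; }
+
+        // Renames the .NET Completed property so the JSON field is "isComplete".
+        [JsonProperty(PropertyName = "isComplete")]
+        public bool Completed { get; set; }
+    }
+}
+```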
+
+### Add views
+
+Next, let's add the following views.
+
+* A create item view
+* A delete item view
+* A view to get an item detail
+* An edit item view
+* A view to list all the items
+
+#### Create item view
+
+1. In **Solution Explorer**, right-click the **Views** folder and select **Add** > **New Folder**. Name the folder *Item*.
+
+1. Right-click the empty **Item** folder, then select **Add** > **View**.
+
+1. In **Add MVC View**, make the following changes:
+
+ * In **View name**, enter *Create*.
+ * In **Template**, select **Create**.
+ * In **Model class**, select **Item (todo.Models)**.
+ * Select **Use a layout page** and enter *~/Views/Shared/_Layout.cshtml*.
+ * Select **Add**.
+
+ :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-add-mvc-view.png" alt-text="Screenshot showing the Add MVC View dialog box":::
+
+1. Next select **Add** and let Visual Studio create a new template view. Replace the code in the generated file with the following contents:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Create.cshtml":::
+
+#### Delete item view
+
+1. From the **Solution Explorer**, right-click the **Item** folder again, select **Add** > **View**.
+
+1. In **Add MVC View**, make the following changes:
+
+ * In the **View name** box, type *Delete*.
+ * In the **Template** box, select **Delete**.
+ * In the **Model class** box, select **Item (todo.Models)**.
+ * Select **Use a layout page** and enter *~/Views/Shared/_Layout.cshtml*.
+ * Select **Add**.
+
+1. Next select **Add** and let Visual Studio create a new template view. Replace the code in the generated file with the following contents:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Delete.cshtml":::
+
+#### Add a view to get item details
+
+1. In **Solution Explorer**, right-click the **Item** folder again, select **Add** > **View**.
+
+1. In **Add MVC View**, provide the following values:
+
+ * In **View name**, enter *Details*.
+ * In **Template**, select **Details**.
+ * In **Model class**, select **Item (todo.Models)**.
+ * Select **Use a layout page** and enter *~/Views/Shared/_Layout.cshtml*.
+
+1. Next select **Add** and let Visual Studio create a new template view. Replace the code in the generated file with the following contents:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Details.cshtml":::
+
+#### Add an edit item view
+
+1. From the **Solution Explorer**, right-click the **Item** folder again, select **Add** > **View**.
+
+1. In **Add MVC View**, make the following changes:
+
+ * In the **View name** box, type *Edit*.
+ * In the **Template** box, select **Edit**.
+ * In the **Model class** box, select **Item (todo.Models)**.
+ * Select **Use a layout page** and enter *~/Views/Shared/_Layout.cshtml*.
+ * Select **Add**.
+
+1. Next select **Add** and let Visual Studio create a new template view. Replace the code in the generated file with the following contents:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Edit.cshtml":::
+
+#### Add a view to list all the items
+
+And finally, add a view to get all the items with the following steps:
+
+1. From the **Solution Explorer**, right-click the **Item** folder again, select **Add** > **View**.
+
+1. In **Add MVC View**, make the following changes:
+
+ * In the **View name** box, type *Index*.
+ * In the **Template** box, select **List**.
+ * In the **Model class** box, select **Item (todo.Models)**.
+ * Select **Use a layout page** and enter *~/Views/Shared/_Layout.cshtml*.
+ * Select **Add**.
+
+1. Next select **Add** and let Visual Studio create a new template view. Replace the code in the generated file with the following contents:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Index.cshtml":::
+
+Once you complete these steps, close all the *cshtml* documents in Visual Studio.
+
+### Declare and initialize services
+
+First, we'll add a class that contains the logic to connect to and use Azure Cosmos DB. For this tutorial, we'll encapsulate this logic into a class called `CosmosDbService` and an interface called `ICosmosDbService`. This service performs the CRUD operations, such as creating, editing, and deleting items, and read feed operations, such as listing incomplete items.
+
+1. In **Solution Explorer**, right-click your project and select **Add** > **New Folder**. Name the folder *Services*.
+
+1. Right-click the **Services** folder, select **Add** > **Class**. Name the new class *CosmosDbService* and select **Add**.
+
+1. Replace the contents of *CosmosDbService.cs* with the following code:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Services/CosmosDbService.cs":::
+
+1. Right-click the **Services** folder, select **Add** > **Class**. Name the new class *ICosmosDbService* and select **Add**.
+
+1. Add the following code to *ICosmosDbService* class:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Services/ICosmosDbService.cs":::
+
+1. Open the *Startup.cs* file in your solution and add the following method **InitializeCosmosClientInstanceAsync**, which reads the configuration and initializes the client.
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Startup.cs" id="InitializeCosmosClientInstanceAsync" :::
+
+1. On that same file, replace the `ConfigureServices` method with:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Startup.cs" id="ConfigureServices":::
+
+   The code in this step initializes the client based on the configuration as a singleton instance to be injected through [Dependency injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection). A hedged sketch of this registration pattern appears after this procedure.
+
+ And make sure to change the default MVC Controller to `Item` by editing the routes in the `Configure` method of the same file:
+
+ ```csharp
+ app.UseEndpoints(endpoints =>
+ {
+ endpoints.MapControllerRoute(
+ name: "default",
+ pattern: "{controller=Item}/{action=Index}/{id?}");
+ });
+ ```
++
+1. Define the configuration in the project's *appsettings.json* file as shown in the following snippet:
+
+ :::code language="json" source="~/samples-cosmosdb-dotnet-core-web-app/src/appsettings.json":::
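+
+As referenced above, here's a rough sketch of what reading the configuration and registering the singleton can look like. The configuration key names (`CosmosDb`, `Account`, `Key`, `DatabaseName`, `ContainerName`), the `/id` partition key, and the `CosmosDbService` constructor signature are assumptions that must match your own *appsettings.json* and service class:
+
+```csharp
+// Sketch: members of the Startup class. Read settings, create the client once,
+// and register the service as a singleton.
+private static async Task<CosmosDbService> InitializeCosmosClientInstanceAsync(IConfigurationSection configurationSection)
+{
+    string databaseName = configurationSection["DatabaseName"];   // assumed key names
+    string containerName = configurationSection["ContainerName"];
+    string account = configurationSection["Account"];
+    string key = configurationSection["Key"];
+
+    CosmosClient client = new CosmosClient(account, key);
+    CosmosDbService cosmosDbService = new CosmosDbService(client, databaseName, containerName);
+
+    Database database = await client.CreateDatabaseIfNotExistsAsync(databaseName);
+    await database.CreateContainerIfNotExistsAsync(containerName, "/id");   // assumed partition key path
+
+    return cosmosDbService;
+}
+
+public void ConfigureServices(IServiceCollection services)
+{
+    services.AddControllersWithViews();
+
+    // Blocking here is acceptable because it runs only once at startup.
+    // Configuration is the standard IConfiguration property on Startup.
+    services.AddSingleton<ICosmosDbService>(
+        InitializeCosmosClientInstanceAsync(Configuration.GetSection("CosmosDb")).GetAwaiter().GetResult());
+}
+```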
+
+### Add a controller
+
+1. In **Solution Explorer**, right-click the **Controllers** folder, select **Add** > **Controller**.
+
+1. In **Add Scaffold**, select **MVC Controller - Empty** and select **Add**.
+
+ :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-controller-add-scaffold.png" alt-text="Select MVC Controller - Empty in Add Scaffold":::
+
+1. Name your new controller *ItemController*.
+
+1. Replace the contents of *ItemController.cs* with the following code:
+
+ :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Controllers/ItemController.cs":::
+
+The **ValidateAntiForgeryToken** attribute is used here to help protect this application against cross-site request forgery attacks. Your views should work with this anti-forgery token as well. For more information and examples, see [Preventing Cross-Site Request Forgery (CSRF) Attacks in ASP.NET MVC Application][Preventing Cross-Site Request Forgery]. The source code provided on [GitHub][GitHub] has the full implementation in place.
+
+We also use the **Bind** attribute on the method parameter to help protect against over-posting attacks. For more information, see [Tutorial: Implement CRUD Functionality with the Entity Framework in ASP.NET MVC][Basic CRUD Operations in ASP.NET MVC].
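+
+The controller include above carries the full implementation. As a brief, hedged sketch (the action name, bound property list, and `_cosmosDbService.AddItemAsync` call are illustrative assumptions), the two attributes are applied like this:
+
+```csharp
+// Sketch: a POST action protected against CSRF (ValidateAntiForgeryToken)
+// and over-posting (Bind limits which properties model binding accepts).
+[HttpPost]
+[ValidateAntiForgeryToken]
+public async Task<ActionResult> CreateAsync([Bind("Id,Name,Description,Completed")] Item item)
+{
+    if (ModelState.IsValid)
+    {
+        await _cosmosDbService.AddItemAsync(item);   // _cosmosDbService: the injected ICosmosDbService (assumed member name)
+        return RedirectToAction("Index");
+    }
+
+    return View(item);
+}
+```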
+
+## Step 5: Run the application locally
+
+To test the application on your local computer, use the following steps:
+
+1. Press F5 in Visual Studio to build the application in debug mode. It should build the application and launch a browser with the empty grid page we saw before:
+
+ :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-create-an-item-a.png" alt-text="Screenshot of the todo list web application created by this tutorial":::
+
+ If the application instead opens to the home page, append `/Item` to the url.
+
+1. Select the **Create New** link and add values to the **Name** and **Description** fields. Leave the **Completed** check box unselected. If you select it, the app adds the new item in a completed state. The item no longer appears on the initial list.
+
+1. Select **Create**. The app sends you back to the **Index** view, and your item appears in the list. You can add a few more items to your **To-Do** list.
+
+ :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-create-an-item.png" alt-text="Screenshot of the Index view":::
+
+1. Select **Edit** next to an **Item** on the list. The app opens the **Edit** view where you can update any property of your object, including the **Completed** flag. If you select **Completed** and select **Save**, the app displays the **Item** as completed in the list.
+
+ :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-completed-item.png" alt-text="Screenshot of the Index view with the Completed box checked":::
+
+1. Verify the state of the data in the Azure Cosmos DB service using [Azure Cosmos DB Explorer](https://cosmos.azure.com) or the Azure Cosmos DB Emulator's Data Explorer.
+
+1. Once you've tested the app, press Shift+F5 (or select **Debug** > **Stop Debugging**) to stop debugging the app. You're ready to deploy!
+
+## Step 6: Deploy the application
+
+Now that you have the complete application working correctly with Azure Cosmos DB, you can deploy this web app to Azure App Service.
+
+1. To publish this application, right-click the project in **Solution Explorer** and select **Publish**.
+
+1. In **Pick a publish target**, select **App Service**.
+
+1. To use an existing App Service profile, choose **Select Existing**, then select **Publish**.
+
+1. In **App Service**, select a **Subscription**. Use the **View** filter to sort by resource group or resource type.
+
+1. Find your profile, and then select **OK**. Next, search for the required Azure App Service and select **OK**.
+
+ :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-app-service-2019.png" alt-text="App Service dialog box in Visual Studio":::
+
+Another option is to create a new profile:
+
+1. As in the previous procedure, right-click the project in **Solution Explorer** and select **Publish**.
+
+1. In **Pick a publish target**, select **App Service**.
+
+1. In **Pick a publish target**, select **Create New** and select **Publish**.
+
+1. In **App Service**, enter your Web App name and the appropriate subscription, resource group, and hosting plan, then select **Create**.
+
+ :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-create-app-service-2019.png" alt-text="Create App Service dialog box in Visual Studio":::
+
+In a few seconds, Visual Studio publishes your web application and launches a browser where you can see your project running in Azure!
+
+## Next steps
+
+In this tutorial, you've learned how to build an ASP.NET Core MVC web application. Your application can access data stored in Azure Cosmos DB. You can now continue with these resources:
+
+* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
+* [Getting started with SQL queries](query/getting-started.md)
+* [How to model and partition data on Azure Cosmos DB using a real-world example](./how-to-model-partition-example.md)
+* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+[Visual Studio Express]: https://www.visualstudio.com/products/visual-studio-express-vs.aspx
+[Microsoft Web Platform Installer]: https://www.microsoft.com/web/downloads/platform.aspx
+[Preventing Cross-Site Request Forgery]: /aspnet/web-api/overview/security/preventing-cross-site-request-forgery-csrf-attacks
+[Basic CRUD Operations in ASP.NET MVC]: /aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/implementing-basic-crud-functionality-with-the-entity-framework-in-asp-net-mvc-application
+[GitHub]: https://github.com/Azure-Samples/cosmos-dotnet-core-todo-app
cosmos-db Tutorial Global Distribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-global-distribution.md
+
+ Title: 'Tutorial: Azure Cosmos DB global distribution tutorial for the API for NoSQL'
+description: 'Tutorial: Learn how to set up Azure Cosmos DB global distribution using the API for NoSQL with .NET, Java, Python and various other SDKs'
++++++ Last updated : 04/03/2022++
+# Tutorial: Set up Azure Cosmos DB global distribution using the API for NoSQL
+
+In this article, we show how to use the Azure portal to set up Azure Cosmos DB global distribution and then connect using the API for NoSQL.
+
+This article covers the following tasks:
+
+> [!div class="checklist"]
+> * Configure global distribution using the Azure portal
+> * Configure global distribution using the [API for NoSQL](../introduction.md)
+
+<a id="portal"></a>
+
+## <a id="preferred-locations"></a> Connecting to a preferred region using the API for NoSQL
+
+To take advantage of [global distribution](../distribute-data-globally.md), client applications can specify an ordered preference list of regions to use for document operations. Based on the Azure Cosmos DB account configuration, current regional availability, and the specified preference list, the SQL SDK chooses the optimal endpoint for write and read operations.
+
+This preference list is specified when initializing a connection using the SQL SDKs. The SDKs accept an optional parameter `PreferredLocations` that is an ordered list of Azure regions.
+
+The SDK automatically sends all writes to the current write region. All reads are sent to the first available region in the preferred locations list. If the request fails, the client fails over to the next region in the list.
+
+The SDK only attempts to read from the regions specified in the preferred locations. For example, if the Azure Cosmos DB account is available in four regions, but the client specifies only two read (non-write) regions in `PreferredLocations`, then no reads are served out of the read region that isn't specified in `PreferredLocations`. If the read regions specified in the `PreferredLocations` list aren't available, reads are served out of the write region.
+
+The application can verify the current write endpoint and read endpoint chosen by the SDK by checking two properties, `WriteEndpoint` and `ReadEndpoint`, available in SDK version 1.8 and above. If the `PreferredLocations` property is not set, all requests will be served from the current write region.
+
+If you don't specify the preferred locations but use the `setCurrentLocation` method, the SDK automatically populates the preferred locations based on the current region that the client is running in. The SDK orders the regions based on their proximity to the current region.
+
+## .NET SDK
+
+The SDK can be used without any code changes. In this case, the SDK automatically directs both reads and writes to the current write region.
+
+In version 1.8 and later of the .NET SDK, the `ConnectionPolicy` parameter for the `DocumentClient` constructor has a property called `Microsoft.Azure.Documents.ConnectionPolicy.PreferredLocations`. This property is of type `Collection<string>` and should contain a list of region names. The string values are formatted per the region name column on the [Azure Regions][regions] page, with no leading or trailing spaces.
+
+The current write and read endpoints are available in `DocumentClient.WriteEndpoint` and `DocumentClient.ReadEndpoint`, respectively.
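+
+For example, after opening the connection you could log which endpoints were selected. A small sketch, assuming the `docClient` DocumentClient object created in the .NET SDK V2 sample below:
+
+```csharp
+// Sketch: verify which regional endpoints the SDK picked.
+await docClient.OpenAsync();
+Console.WriteLine($"Write endpoint: {docClient.WriteEndpoint}");
+Console.WriteLine($"Read endpoint: {docClient.ReadEndpoint}");
+```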
+
+> [!NOTE]
+> The URLs for the endpoints should not be considered as long-lived constants. The service may update these at any point. The SDK handles this change automatically.
+>
+
+# [.NET SDK V2](#tab/dotnetv2)
+
+If you are using the .NET V2 SDK, use the `PreferredLocations` property to set the preferred region.
+
+```csharp
+// Getting endpoints from application settings or other configuration location
+Uri accountEndPoint = new Uri(Properties.Settings.Default.GlobalDatabaseUri);
+string accountKey = Properties.Settings.Default.GlobalDatabaseKey;
+
+ConnectionPolicy connectionPolicy = new ConnectionPolicy();
+
+//Setting read region selection preference
+connectionPolicy.PreferredLocations.Add(LocationNames.WestUS); // first preference
+connectionPolicy.PreferredLocations.Add(LocationNames.EastUS); // second preference
+connectionPolicy.PreferredLocations.Add(LocationNames.NorthEurope); // third preference
+
+// initialize connection
+DocumentClient docClient = new DocumentClient(
+ accountEndPoint,
+ accountKey,
+ connectionPolicy);
+
+// connect to DocDB
+await docClient.OpenAsync().ConfigureAwait(false);
+```
+
+Alternatively, you can use the `SetCurrentLocation` method and let the SDK choose the preferred location based on proximity.
+
+```csharp
+// Getting endpoints from application settings or other configuration location
+Uri accountEndPoint = new Uri(Properties.Settings.Default.GlobalDatabaseUri);
+string accountKey = Properties.Settings.Default.GlobalDatabaseKey;
+
+ConnectionPolicy connectionPolicy = new ConnectionPolicy();
+
+connectionPolicy.SetCurrentLocation("West US 2");
+
+// initialize connection
+DocumentClient docClient = new DocumentClient(
+ accountEndPoint,
+ accountKey,
+ connectionPolicy);
+
+// connect to DocDB
+await docClient.OpenAsync().ConfigureAwait(false);
+```
+
+# [.NET SDK V3](#tab/dotnetv3)
+
+If you are using the .NET V3 SDK, use the `ApplicationPreferredRegions` property to set the preferred region.
+
+```csharp
+
+CosmosClientOptions options = new CosmosClientOptions();
+options.ApplicationName = "MyApp";
+options.ApplicationPreferredRegions = new List<string> {Regions.WestUS, Regions.WestUS2};
+
+CosmosClient client = new CosmosClient(connectionString, options);
+
+```
+
+Alternatively, you can use the `ApplicationRegion` property and let the SDK choose the preferred location based on proximity.
+
+```csharp
+CosmosClientOptions options = new CosmosClientOptions();
+options.ApplicationName = "MyApp";
+// If the application is running in West US
+options.ApplicationRegion = Regions.WestUS;
+
+CosmosClient client = new CosmosClient(connectionString, options);
+```
+++
+## Node.js/JavaScript
+
+> [!NOTE]
+> The URLs for the endpoints should not be considered as long-lived constants. The service may update these at any point. The SDK will handle this change automatically.
+>
+>
+
+Below is a code example for Node.js/JavaScript.
+
+```JavaScript
+// Setting read region selection preference, in the following order -
+// 1 - West US
+// 2 - East US
+// 3 - North Europe
+const preferredLocations = ['West US', 'East US', 'North Europe'];
+
+// initialize the connection
+const client = new CosmosClient({ endpoint, key, connectionPolicy: { preferredLocations } });
+```
+
+## Python SDK
+
+The following code shows how to set preferred locations by using the Python SDK:
+
+```python
+connectionPolicy = documents.ConnectionPolicy()
+connectionPolicy.PreferredLocations = ['West US', 'East US', 'North Europe']
+client = cosmos_client.CosmosClient(ENDPOINT, {'masterKey': MASTER_KEY}, connectionPolicy)
+
+```
+
+## <a id="java4-preferred-locations"></a> Java V4 SDK
+
+The following code shows how to set preferred locations by using the Java SDK:
+
+# [Async](#tab/api-async)
+
+ [Java SDK V4](sdk-java-v4.md) (Maven [com.azure::azure-cosmos](https://mvnrepository.com/artifact/com.azure/azure-cosmos)) Async API
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=TutorialGlobalDistributionPreferredLocationAsync)]
+
+# [Sync](#tab/api-sync)
+
+ [Java SDK V4](sdk-java-v4.md) (Maven [com.azure::azure-cosmos](https://mvnrepository.com/artifact/com.azure/azure-cosmos)) Sync API
+
+ [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=TutorialGlobalDistributionPreferredLocationSync)]
+
+
+
+## Spark 3 Connector
+
+You can define the preferred regional list using the `spark.cosmos.preferredRegionsList` [configuration](https://github.com/Azure/azure-sdk-for-jav), for example:
+
+```scala
+val sparkConnectorConfig = Map(
+ "spark.cosmos.accountEndpoint" -> cosmosEndpoint,
+ "spark.cosmos.accountKey" -> cosmosMasterKey,
+ "spark.cosmos.preferredRegionsList" -> "[West US, East US, North Europe]"
+ // other settings
+)
+```
+
+## REST
+
+Once a database account has been made available in multiple regions, clients can query its availability by performing a GET request on the URI `https://{databaseaccount}.documents.azure.com/`.
+
+The service will return a list of regions and their corresponding Azure Cosmos DB endpoint URIs for the replicas. The current write region will be indicated in the response. The client can then select the appropriate endpoint for all further REST API requests as follows.
+
+Example response
+
+```json
+{
+ "_dbs": "//dbs/",
+ "media": "//media/",
+ "writableLocations": [
+ {
+ "Name": "West US",
+ "DatabaseAccountEndpoint": "https://globaldbexample-westus.documents.azure.com:443/"
+ }
+ ],
+ "readableLocations": [
+ {
+ "Name": "East US",
+ "DatabaseAccountEndpoint": "https://globaldbexample-eastus.documents.azure.com:443/"
+ }
+ ],
+ "MaxMediaStorageUsageInMB": 2048,
+ "MediaStorageUsageInMB": 0,
+ "ConsistencyPolicy": {
+ "defaultConsistencyLevel": "Session",
+ "maxStalenessPrefix": 100,
+ "maxIntervalInSeconds": 5
+ },
+ "addresses": "//addresses/",
+ "id": "globaldbexample",
+ "_rid": "globaldbexample.documents.azure.com",
+ "_self": "",
+ "_ts": 0,
+ "_etag": null
+}
+```
+
+* All PUT, POST and DELETE requests must go to the indicated write URI
+* All GETs and other read-only requests (for example, queries) may go to any endpoint of the client's choice
+
+Write requests to read-only regions will fail with HTTP error code 403 ("Forbidden").
+
+If the write region changes after the client's initial discovery phase, subsequent writes to the previous write region will fail with HTTP error code 403 ("Forbidden"). The client should then GET the list of regions again to get the updated write region.
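+
+As a hedged sketch (the type and property names below simply mirror the example response and aren't an official client), a caller could deserialize the discovery response and pick its endpoints like this:
+
+```csharp
+using System.Linq;
+using System.Text.Json;
+using System.Text.Json.Serialization;
+
+// Illustrative shapes that mirror the example response above.
+public record AccountRegion(
+    [property: JsonPropertyName("Name")] string Name,
+    [property: JsonPropertyName("DatabaseAccountEndpoint")] string DatabaseAccountEndpoint);
+
+public record DatabaseAccount(
+    [property: JsonPropertyName("writableLocations")] AccountRegion[] WritableLocations,
+    [property: JsonPropertyName("readableLocations")] AccountRegion[] ReadableLocations);
+
+public static class EndpointSelector
+{
+    // Pick the write endpoint and a preferred read endpoint from the discovery JSON.
+    public static (string writeEndpoint, string readEndpoint) Select(string discoveryJson, string preferredReadRegion)
+    {
+        DatabaseAccount account = JsonSerializer.Deserialize<DatabaseAccount>(discoveryJson);
+
+        string writeEndpoint = account.WritableLocations.First().DatabaseAccountEndpoint;
+        string readEndpoint = account.ReadableLocations
+            .FirstOrDefault(r => r.Name == preferredReadRegion)?.DatabaseAccountEndpoint
+            ?? writeEndpoint;   // fall back to the write region if the preferred read region isn't listed
+
+        return (writeEndpoint, readEndpoint);
+    }
+}
+```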
+
+That completes this tutorial. You can learn how to manage the consistency of your globally replicated account by reading [Consistency levels in Azure Cosmos DB](../consistency-levels.md). For more information about how global database replication works in Azure Cosmos DB, see [Distribute data globally with Azure Cosmos DB](../distribute-data-globally.md).
+
+## Next steps
+
+In this tutorial, you've done the following:
+
+> [!div class="checklist"]
+> * Configure global distribution using the Azure portal
+> * Configure global distribution using the API for NoSQL
+
+You can now proceed to the next tutorial to learn how to develop locally using the Azure Cosmos DB local emulator.
+
+> [!div class="nextstepaction"]
+> [Develop locally with the emulator](../local-emulator.md)
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+[regions]: https://azure.microsoft.com/regions/
cosmos-db Tutorial Java Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-java-web-app.md
+
+ Title: 'Tutorial: Build a Java web app using Azure Cosmos DB and the API for NoSQL'
+description: 'Tutorial: This Java web application tutorial shows you how to use the Azure Cosmos DB and the API for NoSQL to store and access data from a Java application hosted on Azure Websites.'
+++
+ms.devlang: java
+ Last updated : 03/29/2022+++++
+# Tutorial: Build a Java web application using Azure Cosmos DB and the API for NoSQL
+
+> [!div class="op_single_selector"]
+>
+> * [.NET](tutorial-dotnet-web-app.md)
+> * [Java](tutorial-java-web-app.md)
+> * [Node.js](tutorial-nodejs-web-app.md)
+>
+
+This Java web application tutorial shows you how to use the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service to store and access data from a Java application hosted on Azure App Service Web Apps. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). In this article, you will learn:
+
+* How to build a basic JavaServer Pages (JSP) application in Eclipse.
+* How to work with the Azure Cosmos DB service using the [Azure Cosmos DB Java SDK](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos).
+
+This Java application tutorial shows you how to create a web-based task-management application that enables you to create, retrieve, and mark tasks as complete, as shown in the following image. Each of the tasks in the ToDo list is stored as JSON documents in Azure Cosmos DB.
++
+> [!TIP]
+> This application development tutorial assumes that you have prior experience using Java. If you are new to Java or the [prerequisite tools](#Prerequisites), we recommend downloading the complete [todo](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app) project from GitHub and building it using [the instructions at the end of this article](#GetProject). Once you have it built, you can review the article to gain insight on the code in the context of the project.
+
+## <a id="Prerequisites"></a>Prerequisites for this Java web application tutorial
+
+Before you begin this application development tutorial, you must have the following:
+
+* If you don't have an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card.
+
+ [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
+
+* [Java Development Kit (JDK) 7+](/java/azure/jdk/).
+* [Eclipse IDE for Java EE Developers.](https://www.eclipse.org/downloads/packages/release/luna/sr1/eclipse-ide-java-ee-developers)
+* [An Azure Web Site with a Java runtime environment (for example, Tomcat or Jetty) enabled.](../../app-service/quickstart-java.md)
+
+If you're installing these tools for the first time, coreservlets.com provides a walk-through of the installation process in the quickstart section of their [Tutorial: Installing TomCat7 and Using it with Eclipse](https://www.youtube.com/watch?v=jOdCfW7-ybI&t=2s) article.
+
+## <a id="CreateDB"></a>Create an Azure Cosmos DB account
+
+Let's start by creating an Azure Cosmos DB account. If you already have an account or if you are using the Azure Cosmos DB Emulator for this tutorial, you can skip to [Step 2: Create the Java JSP application](#CreateJSP).
+++
+## <a id="CreateJSP"></a>Create the Java JSP application
+
+To create the JSP application:
+
+1. First, we'll start off by creating a Java project. Start Eclipse, then select **File**, select **New**, and then select **Dynamic Web Project**. If you don't see **Dynamic Web Project** listed as an available project, do the following: Select **File**, select **New**, select **Project**…, expand **Web**, select **Dynamic Web Project**, and select **Next**.
+
+ :::image type="content" source="./media/tutorial-java-web-app/image10.png" alt-text="JSP Java Application Development":::
+
+1. Enter a project name in the **Project name** box, and in the **Target Runtime** drop-down menu, optionally select a value (e.g. Apache Tomcat v7.0), and then select **Finish**. Selecting a target runtime enables you to run your project locally through Eclipse.
+
+1. In Eclipse, in the Project Explorer view, expand your project. Right-click **WebContent**, select **New**, and then select **JSP File**.
+
+1. In the **New JSP File** dialog box, name the file **index.jsp**. Keep the parent folder as **WebContent**, as shown in the following illustration, and then select **Next**.
+
+ :::image type="content" source="./media/tutorial-java-web-app/image11.png" alt-text="Make a New JSP File - Java Web Application Tutorial":::
+
+1. In the **Select JSP Template** dialog box, for the purpose of this tutorial select **New JSP File (html)**, and then select **Finish**.
+
+1. When the *index.jsp* file opens in Eclipse, add text to display **Hello World!** within the existing `<body>` element. The updated `<body>` content should look like the following code:
+
+ ```html
+ <body>
+ <% out.println("Hello World!"); %>
+ </body>
+ ```
+
+1. Save the *index.jsp* file.
+
+1. If you set a target runtime in step 2, you can select **Project** and then **Run** to run your JSP application locally:
+
+   :::image type="content" source="./media/tutorial-java-web-app/image12.png" alt-text="Hello World - Java Application Tutorial":::
+
+## <a id="InstallSDK"></a>Install the SQL Java SDK
+
+The easiest way to pull in the SQL Java SDK and its dependencies is through [Apache Maven](https://maven.apache.org/). To do this, you need to convert your project to a Maven project by using the following steps:
+
+1. Right-click your project in the Project Explorer, select **Configure**, select **Convert to Maven Project**.
+
+1. In the **Create new POM** window, accept the defaults, and select **Finish**.
+
+1. In **Project Explorer**, open the pom.xml file.
+
+1. On the **Dependencies** tab, in the **Dependencies** pane, select **Add**.
+
+1. In the **Select Dependency** window, do the following:
+
+ * In the **Group Id** box, enter `com.azure`.
+ * In the **Artifact Id** box, enter `azure-cosmos`.
+ * In the **Version** box, enter `4.11.0`.
+
+ Or, you can add the dependency XML for Group ID and Artifact ID directly to the *pom.xml* file:
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-cosmos</artifactId>
+ <version>4.11.0</version>
+ </dependency>
+ ```
+
+1. Select **OK**, and Maven installs the SQL Java SDK. Alternatively, if you added the dependency XML directly, save the *pom.xml* file.
+
+## <a id="UseService"></a>Use the Azure Cosmos DB service in your Java application
+
+Now let's add the models, the views, and the controllers to your web application.
+
+### Add a model
+
+First, let's define a model within a new file *TodoItem.java*. The `TodoItem` class defines the schema of an item along with the getter and setter methods:
++
+### Add the Data Access Object (DAO) classes
+
+Create a Data Access Object (DAO) to abstract persisting the ToDo items to Azure Cosmos DB. In order to save ToDo items to a collection, the client needs to know which database and collection to persist to (as referenced by self-links). In general, it is best to cache the database and collection when possible to avoid additional round-trips to the database.
+
+1. To invoke the Azure Cosmos DB service, you must instantiate a new `CosmosClient` object. In general, it is best to reuse the `CosmosClient` object rather than constructing a new client for each subsequent request. You can reuse the client by defining it within the `CosmosClientFactory` class. Update the HOST and MASTER_KEY values that you saved in [step 1](#CreateDB): replace the HOST variable with your URI and replace the MASTER_KEY with your PRIMARY KEY. Use the following code to create the `CosmosClientFactory` class within the *CosmosClientFactory.java* file:
+
+ :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/CosmosClientFactory.java":::
+
+1. Create a new *TodoDao.java* file and add the `TodoDao` class to create, update, read, and delete the todo items:
+
+ :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/TodoDao.java":::
+
+1. Create a new *MockDao.java* file and add the `MockDao` class. This class implements the `TodoDao` class to perform CRUD operations on the items:
+
+ :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/MockDao.java":::
+
+1. Create a new *DocDbDao.java* file and add the `DocDbDao` class. This class defines code to persist the TodoItems into the container, retrieves your database and collection if they exist, or creates new ones if they don't. This example uses [Gson](https://code.google.com/p/google-gson/) to serialize and deserialize the TodoItem Plain Old Java Objects (POJOs) to JSON documents. In order to save ToDo items to a collection, the client needs to know which database and collection to persist to (as referenced by self-links). This class also defines a helper function to retrieve the documents by another attribute (for example, "ID") rather than self-link. You can use the helper method to retrieve a TodoItem JSON document by ID and then deserialize it to a POJO.
+
+ You can also use the `cosmosClient` client object to get a collection or list of TodoItems using a SQL query. Finally, you define the delete method to delete a TodoItem from your list. The following code shows the contents of the `DocDbDao` class:
+
+ :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/DocDbDao.java":::
+
+1. Next, create a new *TodoDaoFactory.java* file and add the `TodoDaoFactory` class that creates a new DocDbDao object:
+
+ :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/TodoDaoFactory.java":::
+
+### Add a controller
+
+Add the *TodoItemController* controller to your application. In this project, you are using [Project Lombok](https://projectlombok.org/) to generate the constructor, getters, setters, and a builder. Alternatively, you can write this code manually or have the IDE generate it:
++
+### Create a servlet
+
+Next, create a servlet to route HTTP requests to the controller. Create the *ApiServlet.java* file and define the following code in it:
++
+## <a id="Wire"></a>Wire the rest of the Java app together
+
+Now that we've finished the fun bits, all that's left is to build a quick user interface and wire it up to your DAO.
+
+1. You need a web user interface to display to the user. Let's re-write the *index.jsp* we created earlier with the following code:
+
+ :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/WebContent/index.jsp":::
+
+1. Finally, write some client-side JavaScript to tie the web user interface and the servlet together:
+
+ :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/WebContent/assets/todo.js":::
+
+1. Now all that's left is to test the application. Run the application locally, and add some Todo items by filling in the item name and category and clicking **Add Task**. After the item appears, you can update whether it's complete by toggling the checkbox and clicking **Update Tasks**.
+
+## <a id="Deploy"></a>Deploy your Java application to Azure Web Sites
+
+Azure Web Sites makes deploying Java applications as simple as exporting your application as a WAR file and either uploading it via source control (e.g. Git) or FTP.
+
+1. To export your application as a WAR file, right-click on your project in **Project Explorer**, select **Export**, and then select **WAR File**.
+
+1. In the **WAR Export** window, do the following:
+
+ * In the Web project box, enter azure-cosmos-java-sample.
+ * In the Destination box, choose a destination to save the WAR file.
+ * Select **Finish**.
+
+1. Now that you have a WAR file in hand, you can simply upload it to your Azure Web Site's **webapps** directory. For instructions on uploading the file, see [Add a Java application to Azure App Service Web Apps](../../app-service/quickstart-java.md). After the WAR file is uploaded to the webapps directory, the runtime environment will detect that you've added it and will automatically load it.
+
+1. To view your finished product, navigate to `http://YOUR_SITE_NAME.azurewebsites.net/azure-cosmos-java-sample/` and start adding your tasks!
+
+## <a id="GetProject"></a>Get the project from GitHub
+
+All the samples in this tutorial are included in the [todo](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app) project on GitHub. To import the todo project into Eclipse, ensure you have the software and resources listed in the [Prerequisites](#Prerequisites) section, then do the following:
+
+1. Install [Project Lombok](https://projectlombok.org/). Lombok is used to generate constructors, getters, setters in the project. Once you have downloaded the lombok.jar file, double-click it to install it or install it from the command line.
+
+1. If Eclipse is open, close it and restart it to load Lombok.
+
+1. In Eclipse, on the **File** menu, select **Import**.
+
+1. In the **Import** window, select **Git**, select **Projects from Git**, and then select **Next**.
+
+1. On the **Select Repository Source** screen, select **Clone URI**.
+
+1. On the **Source Git Repository** screen, in the **URI** box, enter https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app, and then select **Next**.
+
+1. On the **Branch Selection** screen, ensure that **main** is selected, and then select **Next**.
+
+1. On the **Local Destination** screen, select **Browse** to select a folder where the repository can be copied, and then select **Next**.
+
+1. On the **Select a wizard to use for importing projects** screen, ensure that **Import existing projects** is selected, and then select **Next**.
+
+1. On the **Import Projects** screen, unselect the **DocumentDB** project, and then select **Finish**. The DocumentDB project contains the Azure Cosmos DB Java SDK, which we will add as a dependency instead.
+
+1. In **Project Explorer**, navigate to azure-cosmos-java-sample\src\com.microsoft.azure.cosmos.sample.dao\DocumentClientFactory.java and replace the HOST and MASTER_KEY values with the URI and PRIMARY KEY for your Azure Cosmos DB account, and then save the file. For more information, see [Step 1. Create an Azure Cosmos DB database account](#CreateDB).
+
+1. In **Project Explorer**, right-click the **azure-cosmos-java-sample**, select **Build Path**, and then select **Configure Build Path**.
+
+1. On the **Java Build Path** screen, in the right pane, select the **Libraries** tab, and then select **Add External JARs**. Navigate to the location of the lombok.jar file, and select **Open**, and then select **OK**.
+
+1. Use step 12 to open the **Properties** window again, and then in the left pane select **Targeted Runtimes**.
+
+1. On the **Targeted Runtimes** screen, select **New**, select **Apache Tomcat v7.0**, and then select **OK**.
+
+1. Use step 12 to open the **Properties** window again, and then in the left pane select **Project Facets**.
+
+1. On the **Project Facets** screen, select **Dynamic Web Module** and **Java**, and then select **OK**.
+
+1. On the **Servers** tab at the bottom of the screen, right-click **Tomcat v7.0 Server at localhost** and then select **Add and Remove**.
+
+1. On the **Add and Remove** window, move **azure-cosmos-java-sample** to the **Configured** box, and then select **Finish**.
+
+1. In the **Servers** tab, right-click **Tomcat v7.0 Server at localhost**, and then select **Restart**.
+
+1. In a browser, navigate to `http://localhost:8080/azure-cosmos-java-sample/` and start adding to your task list. Note that if you changed your default port values, change 8080 to the value you selected.
+
+1. To deploy your project to an Azure web site, see [Step 6. Deploy your application to Azure Web Sites](#Deploy).
+
+## Next steps
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Build a Node.js application with Azure Cosmos DB](tutorial-nodejs-web-app.md)
cosmos-db Tutorial Nodejs Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-nodejs-web-app.md
+
+ Title: 'Tutorial: Build a Node.js web app with Azure Cosmos DB JavaScript SDK to manage API for NoSQL data'
+description: This Node.js tutorial explores how to use Microsoft Azure Cosmos DB to store and access data from a Node.js Express web application hosted on Web Apps feature of Microsoft Azure App Service.
++++
+ms.devlang: javascript
+ Last updated : 10/18/2021+
+#Customer intent: As a developer, I want to build a Node.js web application to access and manage API for NoSQL account resources in Azure Cosmos DB, so that customers can better use the service.
++
+# Tutorial: Build a Node.js web app using the JavaScript SDK to manage an API for NoSQL account in Azure Cosmos DB
+
+> [!div class="op_single_selector"]
+>
+> * [.NET](tutorial-dotnet-web-app.md)
+> * [Java](tutorial-java-web-app.md)
+> * [Node.js](tutorial-nodejs-web-app.md)
+>
+
+As a developer, you might have applications that use NoSQL document data. You can use an API for NoSQL account in Azure Cosmos DB to store and access this document data. This Node.js tutorial shows you how to store and access data from an API for NoSQL account in Azure Cosmos DB by using a Node.js Express application that is hosted on the Web Apps feature of Microsoft Azure App Service. In this tutorial, you will build a web-based application (Todo app) that allows you to create, retrieve, and complete tasks. The tasks are stored as JSON documents in Azure Cosmos DB.
+
+This tutorial demonstrates how to create an API for NoSQL account in Azure Cosmos DB by using the Azure portal. You can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card or an Azure subscription. You then build and run a web app that is built on the Node.js SDK to create a database and container, and add items to the container. This tutorial uses JavaScript SDK version 3.0.
+
+This tutorial covers the following tasks:
+
+> [!div class="checklist"]
+> * Create an Azure Cosmos DB account
+> * Create a new Node.js application
+> * Connect the application to Azure Cosmos DB
+> * Run and deploy the application to Azure
+
+## <a name="prerequisites"></a>Prerequisites
+
+Before following the instructions in this article, ensure
+that you have the following resources:
+
+* If you don't have an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card.
+
+ [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
+
+* [Node.js][Node.js] version 6.10 or higher.
+* [Express generator](https://www.expressjs.com/starter/generator.html) (you can install the Express generator via `npm install express-generator -g`)
+* Install [Git][Git] on your local workstation.
+
+## <a name="create-account"></a>Create an Azure Cosmos DB account
+Let's start by creating an Azure Cosmos DB account. If you already have an account or if you are using the Azure Cosmos DB Emulator for this tutorial, you can skip to [Step 2: Create a new Node.js application](#create-new-app).
+++
+## <a name="create-new-app"></a>Create a new Node.js application
+Now let's learn to create a basic Hello World Node.js project using the Express framework.
+
+1. Open your favorite terminal, such as the Node.js command prompt.
+
+1. Navigate to the directory in which you'd like to store the new application.
+
+1. Use the express generator to generate a new application called **todo**.
+
+ ```bash
+ express todo
+ ```
+
+1. Open the **todo** directory and install dependencies.
+
+ ```bash
+ cd todo
+ npm install
+ ```
+
+1. Run the new application.
+
+ ```bash
+ npm start
+ ```
+
+1. You can view your new application by navigating your browser to `http://localhost:3000`.
+
+ :::image type="content" source="./media/tutorial-nodejs-web-app/cosmos-db-node-js-express.png" alt-text="Learn Node.js - Screenshot of the Hello World application in a browser window":::
+
+ Stop the application by using CTRL+C in the terminal window, and select **y** to terminate the batch job.
+
+## <a name="install-modules"></a>Install the required modules
+
+The **package.json** file is one of the files created in the root of the project. This file contains a list of other modules that are required for your Node.js application. When you deploy this application to Azure, this file is used to determine which modules should be installed on Azure to support your application. Install one more package for this tutorial.
+
+1. Install the **\@azure/cosmos** module via npm.
+
+ ```bash
+ npm install @azure/cosmos
+ ```
+
+## <a name="connect-to-database"></a>Connect the Node.js application to Azure Cosmos DB
+Now that you have completed the initial setup and configuration, you will write the code that the todo application requires to communicate with Azure Cosmos DB.
+
+### Create the model
+1. At the root of your project directory, create a new directory named **models**.
+
+2. In the **models** directory, create a new file named **taskDao.js**. This file contains code required to create the database and container. It also defines methods to read, update, create, and find tasks in Azure Cosmos DB.
+
+3. Copy the following code into the **taskDao.js** file:
+
+ ```javascript
+ // @ts-check
+ const CosmosClient = require('@azure/cosmos').CosmosClient
+ const debug = require('debug')('todo:taskDao')
+
+ // For simplicity we'll set a constant partition key
+ const partitionKey = undefined
+ class TaskDao {
+ /**
+ * Manages reading, adding, and updating Tasks in Azure Cosmos DB
+ * @param {CosmosClient} cosmosClient
+ * @param {string} databaseId
+ * @param {string} containerId
+ */
+ constructor(cosmosClient, databaseId, containerId) {
+ this.client = cosmosClient
+ this.databaseId = databaseId
+ this.collectionId = containerId
+
+ this.database = null
+ this.container = null
+ }
+
+ async init() {
+ debug('Setting up the database...')
+ const dbResponse = await this.client.databases.createIfNotExists({
+ id: this.databaseId
+ })
+ this.database = dbResponse.database
+ debug('Setting up the database...done!')
+ debug('Setting up the container...')
+ const coResponse = await this.database.containers.createIfNotExists({
+ id: this.collectionId
+ })
+ this.container = coResponse.container
+ debug('Setting up the container...done!')
+ }
+
+ async find(querySpec) {
+ debug('Querying for items from the database')
+ if (!this.container) {
+ throw new Error('Collection is not initialized.')
+ }
+ const { resources } = await this.container.items.query(querySpec).fetchAll()
+ return resources
+ }
+
+ async addItem(item) {
+ debug('Adding an item to the database')
+ item.date = Date.now()
+ item.completed = false
+ const { resource: doc } = await this.container.items.create(item)
+ return doc
+ }
+
+ async updateItem(itemId) {
+ debug('Update an item in the database')
+ const doc = await this.getItem(itemId)
+ doc.completed = true
+
+ const { resource: replaced } = await this.container
+ .item(itemId, partitionKey)
+ .replace(doc)
+ return replaced
+ }
+
+ async getItem(itemId) {
+ debug('Getting an item from the database')
+ const { resource } = await this.container.item(itemId, partitionKey).read()
+ return resource
+ }
+ }
+
+ module.exports = TaskDao
+ ```
+4. Save and close the **taskDao.js** file.
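+
+If you want to exercise **TaskDao** on its own before wiring it into Express, a minimal sketch along the following lines could work. The endpoint, key, database, and container names are placeholders, and the script assumes it's run from the project root:
+
+```javascript
+// Standalone sketch: exercise TaskDao directly (run from the project root).
+// Replace the placeholder endpoint and key with your own account values.
+const CosmosClient = require('@azure/cosmos').CosmosClient
+const TaskDao = require('./models/taskDao')
+
+async function main() {
+  const client = new CosmosClient({
+    endpoint: 'https://<your-account>.documents.azure.com:443/',
+    key: '<your-primary-key>'
+  })
+
+  // Create the database and container if needed, then add and query a task
+  const dao = new TaskDao(client, 'ToDoList', 'Items')
+  await dao.init()
+
+  const created = await dao.addItem({ name: 'Buy milk', category: 'Errands' })
+  console.log('Created task with id:', created.id)
+
+  const open = await dao.find({
+    query: 'SELECT * FROM root r WHERE r.completed = @completed',
+    parameters: [{ name: '@completed', value: false }]
+  })
+  console.log(`Found ${open.length} open task(s)`)
+}
+
+main().catch(console.error)
+```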
+
+### Create the controller
+
+1. In the **routes** directory of your project, create a new file named **tasklist.js**.
+
+2. Add the following code to **tasklist.js**. This code loads the **TaskDao** module defined earlier and defines the **TaskList** class, which is constructed with a **TaskDao** instance and handles displaying and managing tasks:
+
+ ```javascript
+    const TaskDao = require("../models/taskDao");
+
+ class TaskList {
+ /**
+ * Handles the various APIs for displaying and managing tasks
+ * @param {TaskDao} taskDao
+ */
+ constructor(taskDao) {
+ this.taskDao = taskDao;
+ }
+ async showTasks(req, res) {
+ const querySpec = {
+ query: "SELECT * FROM root r WHERE r.completed=@completed",
+ parameters: [
+ {
+ name: "@completed",
+ value: false
+ }
+ ]
+ };
+
+ const items = await this.taskDao.find(querySpec);
+ res.render("index", {
+        title: "My ToDo List",
+ tasks: items
+ });
+ }
+
+ async addTask(req, res) {
+ const item = req.body;
+
+ await this.taskDao.addItem(item);
+ res.redirect("/");
+ }
+
+ async completeTask(req, res) {
+ const completedTasks = Object.keys(req.body);
+ const tasks = [];
+
+ completedTasks.forEach(task => {
+ tasks.push(this.taskDao.updateItem(task));
+ });
+
+ await Promise.all(tasks);
+
+ res.redirect("/");
+ }
+ }
+
+ module.exports = TaskList;
+ ```
+
+3. Save and close the **tasklist.js** file.
+
+### Add config.js
+
+1. At the root of your project directory, create a new file named **config.js**.
+
+2. Add the following code to **config.js** file. This code defines configuration settings and values needed for our application.
+
+ ```javascript
+ const config = {};
+
+ config.host = process.env.HOST || "[the endpoint URI of your Azure Cosmos DB account]";
+ config.authKey =
+      process.env.AUTH_KEY || "[the PRIMARY KEY value of your Azure Cosmos DB account]";
+ config.databaseId = "ToDoList";
+ config.containerId = "Items";
+
+ if (config.host.includes("https://localhost:")) {
+ console.log("Local environment detected");
+ console.log("WARNING: Disabled checking of self-signed certs. Do not have this code in production.");
+ process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
+ console.log(`Go to http://localhost:${process.env.PORT || '3000'} to try the sample.`);
+ }
+
+ module.exports = config;
+ ```
+
+3. In the **config.js** file, update the values of HOST and AUTH_KEY using the values found in the Keys page of your Azure Cosmos DB account on the [Azure portal](https://portal.azure.com).
+
+4. Save and close the **config.js** file.
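+
+Optionally, you can sanity-check the values you placed in **config.js** with a short standalone script before building the rest of the app. The file name below is only a suggestion, and `getDatabaseAccount` is used here purely as a lightweight connectivity probe:
+
+```javascript
+// check-connection.js (optional): verify that the endpoint and key in
+// config.js can reach your Azure Cosmos DB account.
+const { CosmosClient } = require('@azure/cosmos')
+const config = require('./config')
+
+const client = new CosmosClient({ endpoint: config.host, key: config.authKey })
+
+client
+  .getDatabaseAccount()
+  .then(() => console.log('Successfully connected to Azure Cosmos DB'))
+  .catch(err => {
+    console.error('Connection failed:', err.message)
+    process.exit(1)
+  })
+```
+
+You could run it with `node check-connection.js` from the *todo* directory and delete it afterwards.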
+
+### Modify app.js
+
+1. In the project directory, open the **app.js** file. This file was created earlier when the Express web application was created.
+
+2. Add the following code to the **app.js** file. This code loads the config file, sets up the Express application, creates the **CosmosClient**, and wires the **TaskDao** and **TaskList** objects to the application routes.
+
+ ```javascript
+ const CosmosClient = require('@azure/cosmos').CosmosClient
+ const config = require('./config')
+ const TaskList = require('./routes/tasklist')
+ const TaskDao = require('./models/taskDao')
+
+ const express = require('express')
+ const path = require('path')
+ const logger = require('morgan')
+ const cookieParser = require('cookie-parser')
+ const bodyParser = require('body-parser')
+
+ const app = express()
+
+ // view engine setup
+ app.set('views', path.join(__dirname, 'views'))
+ app.set('view engine', 'jade')
+
+ // uncomment after placing your favicon in /public
+ //app.use(favicon(path.join(__dirname, 'public', 'favicon.ico')));
+ app.use(logger('dev'))
+ app.use(bodyParser.json())
+ app.use(bodyParser.urlencoded({ extended: false }))
+ app.use(cookieParser())
+ app.use(express.static(path.join(__dirname, 'public')))
+
+ //Todo App:
+ const cosmosClient = new CosmosClient({
+ endpoint: config.host,
+ key: config.authKey
+ })
+ const taskDao = new TaskDao(cosmosClient, config.databaseId, config.containerId)
+ const taskList = new TaskList(taskDao)
+ taskDao
+      .init()
+ .catch(err => {
+ console.error(err)
+ console.error(
+          'Shutting down because there was an error setting up the database.'
+ )
+ process.exit(1)
+ })
+
+ app.get('/', (req, res, next) => taskList.showTasks(req, res).catch(next))
+ app.post('/addtask', (req, res, next) => taskList.addTask(req, res).catch(next))
+ app.post('/completetask', (req, res, next) =>
+ taskList.completeTask(req, res).catch(next)
+ )
+
+ // catch 404 and forward to error handler
+ app.use(function(req, res, next) {
+ const err = new Error('Not Found')
+ err.status = 404
+ next(err)
+ })
+
+ // error handler
+ app.use(function(err, req, res, next) {
+ // set locals, only providing error in development
+ res.locals.message = err.message
+ res.locals.error = req.app.get('env') === 'development' ? err : {}
+
+ // render the error page
+ res.status(err.status || 500)
+ res.render('error')
+ })
+
+ module.exports = app
+ ```
+
+3. Finally, save and close the **app.js** file.
+
+## <a name="build-ui"></a>Build a user interface
+
+Now let's build the user interface so that a user can interact with the application. The Express application we created in the previous sections uses **Jade** as the view engine.
+
+1. The **layout.jade** file in the **views** directory is used as a global template for other **.jade** files. In this step you will modify it to use Twitter Bootstrap, which is a toolkit used to design a website.
+
+2. Open the **layout.jade** file found in the **views** folder and replace the contents with the following code:
+
+ ```html
+ doctype html
+ html
+ head
+ title= title
+ link(rel='stylesheet', href='//ajax.aspnetcdn.com/ajax/bootstrap/3.3.2/css/bootstrap.min.css')
+ link(rel='stylesheet', href='/stylesheets/style.css')
+ body
+ nav.navbar.navbar-inverse.navbar-fixed-top
+ div.navbar-header
+ a.navbar-brand(href='#') My Tasks
+ block content
+ script(src='//ajax.aspnetcdn.com/ajax/jQuery/jquery-1.11.2.min.js')
+ script(src='//ajax.aspnetcdn.com/ajax/bootstrap/3.3.2/bootstrap.min.js')
+ ```
+
+ This code tells the **Jade** engine to render some HTML for our application, and creates a **block** called **content** where we can supply the layout for our content pages. Save and close the **layout.jade** file.
+
+3. Now open the **index.jade** file, the view that will be used by our application, and replace the content of the file with the following code:
+
+ ```html
+ extends layout
+ block content
+ h1 #{title}
+ br
+
+ form(action="/completetask", method="post")
+ table.table.table-striped.table-bordered
+ tr
+ td Name
+ td Category
+ td Date
+ td Complete
+ if (typeof tasks === "undefined")
+ tr
+ td
+ else
+ each task in tasks
+ tr
+ td #{task.name}
+ td #{task.category}
+ - var date = new Date(task.date);
+ - var day = date.getDate();
+ - var month = date.getMonth() + 1;
+ - var year = date.getFullYear();
+ td #{month + "/" + day + "/" + year}
+ td
+ if(task.completed)
+ input(type="checkbox", name="#{task.id}", value="#{!task.completed}", checked=task.completed)
+ else
+ input(type="checkbox", name="#{task.id}", value="#{!task.completed}", checked=task.completed)
+ button.btn.btn-primary(type="submit") Update tasks
+ hr
+ form.well(action="/addtask", method="post")
+ label Item Name:
+ input(name="name", type="textbox")
+ label Item Category:
+ input(name="category", type="textbox")
+ br
+ button.btn(type="submit") Add item
+ ```
+
+This code extends layout, and provides content for the **content** placeholder we saw in the **layout.jade** file earlier. In this layout, we created two HTML forms.
+
+The first form contains a table for your data and a button that allows you to update items by posting to the **/completetask** route of the controller.
+
+The second form contains two input fields and a button that allows you to create a new item by posting to the **/addtask** route of the controller. That's all we need for the application to work.
+
+## <a name="run-app-locally"></a>Run your application locally
+
+Now that you have built the application, you can run it locally by using the following steps:
+
+1. To test the application on your local machine, run `npm start` in the terminal to start your application, and then refresh the `http://localhost:3000` browser page. The page should now look as shown in the following screenshot:
+
+ :::image type="content" source="./media/tutorial-nodejs-web-app/cosmos-db-node-js-localhost.png" alt-text="Screenshot of the MyTodo List application in a browser window":::
+
+ > [!TIP]
+ > If you receive an error about the indent in the layout.jade file or the index.jade file, ensure that the first two lines in both files are left-justified, with no spaces. If there are spaces before the first two lines, remove them, save both files, and then refresh your browser window.
+
+2. Use the **Item Name** and **Item Category** fields to enter a new task, and then select **Add item**. This creates a document in Azure Cosmos DB with those properties.
+
+3. The page should update to display the newly created item in the ToDo
+ list.
+
+ :::image type="content" source="./media/tutorial-nodejs-web-app/cosmos-db-node-js-added-task.png" alt-text="Screenshot of the application with a new item in the ToDo list":::
+
+4. To complete a task, select the check box in the Complete column,
+ and then select **Update tasks**. It updates the document you already created and removes it from the view.
+
+5. To stop the application, press CTRL+C in the terminal window and then select **Y** to terminate the batch job.
+
+## <a name="deploy-app"></a>Deploy your application to App Service
+
+After your application runs successfully locally, you can deploy it to Azure App Service. In the terminal, make sure you're in the *todo* app directory. Deploy the code in your local folder (todo) using the following [az webapp up](/cli/azure/webapp#az-webapp-up) command:
+
+```azurecli
+az webapp up --sku F1 --name <app-name>
+```
+
+Replace `<app-name>` with a name that's unique across all of Azure (valid characters are a-z, 0-9, and -). A good pattern is to use a combination of your company name and an app identifier. To learn more about the app deployment, see the [Node.js app deployment in Azure](../../app-service/quickstart-nodejs.md?tabs=linux&pivots=development-environment-cli#deploy-to-azure) article.
+
+The command may take a few minutes to complete. While running, it provides messages about creating the resource group, the App Service plan, and the app resource, configuring logging, and doing ZIP deployment. It then gives you a URL to launch the app at `http://<app-name>.azurewebsites.net`, which is the app's URL on Azure.
+
+## Clean up resources
+
+When these resources are no longer needed, you can delete the resource group, Azure Cosmos DB account, and all the related resources. To do so, select the resource group that you used for the Azure Cosmos DB account, select **Delete**, and then confirm the name of the resource group to delete.
+
+## Next steps
+
+* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Build mobile applications with Xamarin and Azure Cosmos DB](/azure/architecture/solution-ideas/articles/gaming-using-cosmos-db)
+
+[Node.js]: https://nodejs.org/
+[Git]: https://git-scm.com/
+[GitHub]: https://github.com/Azure-Samples/azure-cosmos-db-sql-api-nodejs-todo-app
cosmos-db Tutorial Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-query.md
+
+ Title: 'Tutorial: How to query with SQL in Azure Cosmos DB?'
+description: 'Tutorial: Learn how to query with SQL queries in Azure Cosmos DB using the query playground'
+++++++ Last updated : 08/26/2021++
+# Tutorial: Query Azure Cosmos DB by using the API for NoSQL
+
+The Azure Cosmos DB [API for NoSQL](../introduction.md) supports querying documents using SQL. This article provides a sample document and two sample SQL queries and results.
+
+This article covers the following tasks:
+
+> [!div class="checklist"]
+> * Querying data with SQL
+
+## Sample document
+
+The SQL queries in this article use the following sample document.
+
+```json
+{
+ "id": "WakefieldFamily",
+ "parents": [
+ { "familyName": "Wakefield", "givenName": "Robin" },
+ { "familyName": "Miller", "givenName": "Ben" }
+ ],
+ "children": [
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female", "grade": 1,
+ "pets": [
+ { "givenName": "Goofy" },
+ { "givenName": "Shadow" }
+ ]
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8 }
+ ],
+ "address": { "state": "NY", "county": "Manhattan", "city": "NY" },
+ "creationDate": 1431620462,
+ "isRegistered": false
+}
+```
+
+## Where can I run SQL queries?
+
+You can run queries using the Data Explorer in the Azure portal and via the [REST API and SDKs](sdk-dotnet-v2.md).
+
+For more information about SQL queries, see:
+* [SQL query and SQL syntax](query/getting-started.md)
+
+## Prerequisites
+
+This tutorial assumes you have an Azure Cosmos DB account and collection. Don't have any of those resources? Complete the [5-minute quickstart](quickstart-portal.md).
+
+## Example query 1
+
+Given the sample family document above, the following SQL query returns the documents where the ID field matches `WakefieldFamily`. Since it's a `SELECT *` statement, the output of the query is the complete JSON document:
+
+**Query**
+
+```sql
+ SELECT *
+ FROM Families f
+ WHERE f.id = "WakefieldFamily"
+```
+
+**Results**
+
+```json
+{
+ "id": "WakefieldFamily",
+ "parents": [
+ { "familyName": "Wakefield", "givenName": "Robin" },
+ { "familyName": "Miller", "givenName": "Ben" }
+ ],
+ "children": [
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female", "grade": 1,
+ "pets": [
+ { "givenName": "Goofy" },
+ { "givenName": "Shadow" }
+ ]
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8 }
+ ],
+ "address": { "state": "NY", "county": "Manhattan", "city": "NY" },
+ "creationDate": 1431620462,
+ "isRegistered": false
+}
+```
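+
+The Data Explorer is the quickest way to try this query, but you can also run it from code. The following minimal sketch uses the `@azure/cosmos` JavaScript SDK and assumes a database and container that already hold the sample document; the account, database, and container names are placeholders:
+
+```javascript
+// Minimal sketch: run the query above with the @azure/cosmos JavaScript SDK.
+// The endpoint, key, database, and container names below are placeholders.
+const { CosmosClient } = require('@azure/cosmos')
+
+async function queryFamily() {
+  const client = new CosmosClient({
+    endpoint: 'https://<your-account>.documents.azure.com:443/',
+    key: '<your-primary-key>'
+  })
+  const container = client.database('FamilyDatabase').container('Families')
+
+  // Parameterized form of the query shown above
+  const { resources } = await container.items
+    .query({
+      query: 'SELECT * FROM Families f WHERE f.id = @id',
+      parameters: [{ name: '@id', value: 'WakefieldFamily' }]
+    })
+    .fetchAll()
+
+  console.log(JSON.stringify(resources, null, 2))
+}
+
+queryFamily().catch(console.error)
+```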
+
+## Example query 2
+
+The next query returns all the given names of children in the family whose ID matches `WakefieldFamily`.
+
+**Query**
+
+```sql
+ SELECT c.givenName
+ FROM Families f
+ JOIN c IN f.children
+ WHERE f.id = 'WakefieldFamily'
+```
+
+**Results**
+
+```json
+[
+ {
+ "givenName": "Jesse"
+ },
+ {
+ "givenName": "Lisa"
+ }
+]
+```
++
+## Next steps
+
+In this tutorial, you've done the following tasks:
+
+> [!div class="checklist"]
+> * Learned how to query using SQL
+
+You can now proceed to the next tutorial to learn how to distribute your data globally.
+
+> [!div class="nextstepaction"]
+> [Distribute your data globally](tutorial-global-distribution.md)
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Tutorial Springboot Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-springboot-azure-kubernetes-service.md
+
+ Title: Tutorial - Spring Boot application with Azure Cosmos DB for NoSQL and Azure Kubernetes Service
+description: This tutorial demonstrates how to deploy a Spring Boot application to Azure Kubernetes Service and use it to perform operations on data in an Azure Cosmos DB for NoSQL account.
+++
+ms.devlang: java
+ Last updated : 10/01/2021+++++
+# Tutorial - Spring Boot Application with Azure Cosmos DB for NoSQL and Azure Kubernetes Service
+
+In this tutorial, you will set up and deploy a Spring Boot application that exposes REST APIs to perform CRUD operations on data in Azure Cosmos DB (API for NoSQL account). You will package the application as a Docker image, push it to Azure Container Registry, deploy it to Azure Kubernetes Service, and test the application.
+
+## Prerequisites
+
+- An Azure account with an active subscription. Create a [free account](https://azure.microsoft.com/free/) or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.
+- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the path where the JDK is installed.
+- [Azure CLI](/cli/azure/install-azure-cli) to provision Azure services.
+- [Docker](https://docs.docker.com/engine/install/) to manage images and containers.
+- A recent version of [Maven](https://maven.apache.org/download.cgi) and [Git](https://www.git-scm.com/downloads).
+- [curl](https://curl.se/download.html) to invoke REST APIs exposed by the application.
+
+## Provision Azure services
+
+In this section, you will create Azure services required for this tutorial.
+
+- Azure Cosmos DB
+- Azure Container Registry
+- Azure Kubernetes Service
+
+### Create a resource group for the Azure resources used in this tutorial
+
+1. Sign in to your Azure account using Azure CLI:
+
+ ```azurecli
+ az login
+ ```
+
+1. Choose your Azure Subscription:
+
+ ```azurecli
+ az account set -s <enter subscription ID>
+ ```
+
+1. Create a resource group.
+
+ ```azurecli
+ az group create --name=cosmosdb-springboot-aks-rg --location=eastus
+ ```
+
+ > [!NOTE]
+ > Replace `cosmosdb-springboot-aks-rg` with a unique name for your resource group.
+
+### Create an Azure Cosmos DB for NoSQL database account
+
+Use this command to create an [Azure Cosmos DB for NoSQL database account](manage-with-cli.md#create-an-azure-cosmos-db-account) using the Azure CLI.
+
+```azurecli
+az cosmosdb create --name <enter account name> --resource-group <enter resource group name>
+```
+
+### Create a private Azure Container Registry using the Azure CLI
+
+> [!NOTE]
+> Replace `cosmosdbspringbootregistry` with a unique name for your registry.
+
+```azurecli
+az acr create --resource-group cosmosdb-springboot-aks-rg --location eastus \
+ --name cosmosdbspringbootregistry --sku Basic
+```
+
+### Create a Kubernetes cluster on Azure using the Azure CLI
+
+1. The following command creates a Kubernetes cluster in the *cosmosdb-springboot-aks-rg* resource group, with *cosmosdb-springboot-aks* as the cluster name, with Azure Container Registry (ACR) `cosmosdbspringbootregistry` attached:
+
+ ```azurecli
+ az aks create \
+ --resource-group cosmosdb-springboot-aks-rg \
+ --name cosmosdb-springboot-aks \
+ --node-count 1 \
+ --generate-ssh-keys \
+ --attach-acr cosmosdbspringbootregistry \
+ --dns-name-prefix=cosmosdb-springboot-aks-app
+ ```
+
+ This command may take a while to complete.
+
+1. If you don't have `kubectl` installed, you can do so using the Azure CLI.
+
+ ```azurecli
+ az aks install-cli
+ ```
+
+1. Get access credentials for the Azure Kubernetes Service cluster.
+
+ ```azurecli
+ az aks get-credentials --resource-group=cosmosdb-springboot-aks-rg --name=cosmosdb-springboot-aks
+
+ kubectl get nodes
+ ```
+
+## Build the application
+
+1. Clone the application and change into the right directory.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/cosmosdb-springboot-aks.git
+
+ cd cosmosdb-springboot-aks
+ ```
+
+1. Use `Maven` to build the application. At the end of this step, you should have the application JAR file created in the `target` folder.
+
+ ```bash
+ ./mvnw install
+ ```
+
+## Run the application locally
+
+If you intend to run the application on Azure Kubernetes Service, skip this section and move on to [Push Docker image to Azure Container Registry](#push-docker-image-to-azure-container-registry).
+
+1. Before you run the application, update the `application.properties` file with the details of your Azure Cosmos DB account.
+
+ ```properties
+    azure.cosmos.uri=https://<enter Azure Cosmos DB account name>.documents.azure.com:443/
+    azure.cosmos.key=<enter Azure Cosmos DB account primary key>
+    azure.cosmos.database=<enter Azure Cosmos DB database name>
+ azure.cosmos.populateQueryMetrics=false
+ ```
+
+ > [!NOTE]
+ > The database and container (called `users`) will get created automatically once you start the application.
+
+1. Run the application locally.
+
+ ```bash
+ java -jar target/*.jar
+ ```
+
+2. Proceed to [Access the application](#access-the-application) or refer to the next section to deploy the application to Kubernetes.
+
+## Push Docker image to Azure Container Registry
+
+1. Build the Docker image
+
+ ```bash
+ docker build -t cosmosdbspringbootregistry.azurecr.io/spring-cosmos-app:v1 .
+ ```
+
+ > [!NOTE]
+ > Replace `cosmosdbspringbootregistry` with the name of your Azure Container Registry
+
+1. Log into Azure Container Registry.
+
+ ```azurecli
+ az acr login -n cosmosdbspringbootregistry
+ ```
+
+1. Push image to Azure Container Registry and list it.
+
+ ```azurecli
+ docker push cosmosdbspringbootregistry.azurecr.io/spring-cosmos-app:v1
+
+ az acr repository list --name cosmosdbspringbootregistry --output table
+ ```
+
+## Deploy application to Azure Kubernetes Service
+
+1. Edit the `Secret` in `app.yaml` with the details of your Azure Cosmos DB setup.
+
+ ```yml
+ ...
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: app-config
+ type: Opaque
+ stringData:
+ application.properties: |
+      azure.cosmos.uri=https://<enter Azure Cosmos DB account name>.documents.azure.com:443/
+      azure.cosmos.key=<enter Azure Cosmos DB account primary key>
+      azure.cosmos.database=<enter Azure Cosmos DB database name>
+ azure.cosmos.populateQueryMetrics=false
+ ...
+ ```
+
+ > [!NOTE]
+ > The database and a container (`users`) will get created automatically once you start the application.
++
+2. Deploy to Kubernetes and wait for the `Pod` to transition to `Running` state:
+
+ ```bash
+ kubectl apply -f deploy/app.yaml
+
+ kubectl get pods -l=app=spring-cosmos-app -w
+ ```
+
+ > [!NOTE]
+ > You can check application logs using: `kubectl logs -f $(kubectl get pods -l=app=spring-cosmos-app -o=jsonpath='{.items[0].metadata.name}') -c spring-cosmos-app`
++
+## Access the application
+
+If the application is running in Kubernetes and you want to access it locally over port `8080`, run the below command:
+
+```bash
+kubectl port-forward svc/spring-cosmos-app-internal 8080:8080
+```
+
+Invoke the REST endpoints to test the application. You can also navigate to the `Data Explorer` menu of the Azure Cosmos DB account in the Azure portal and access the `users` container to confirm the result of the operations.
+
+1. Create new users
+
+ ```bash
+ curl -i -X POST -H "Content-Type: application/json" -d '{"email":"john.doe@foobar.com", "firstName": "John", "lastName": "Doe", "city": "NYC"}' http://localhost:8080/users
+
+ curl -i -X POST -H "Content-Type: application/json" -d '{"email":"mr.jim@foobar.com", "firstName": "mr", "lastName": "jim", "city": "Seattle"}' http://localhost:8080/users
+ ```
+
+ If successful, you should get an HTTP `201` response.
+
+1. Update a user
+
+ ```bash
+ curl -i -X POST -H "Content-Type: application/json" -d '{"email":"john.doe@foobar.com", "firstName": "John", "lastName": "Doe", "city": "Dallas"}' http://localhost:8080/users
+ ```
+
+1. List all users
+
+ ```bash
+ curl -i http://localhost:8080/users
+ ```
+
+1. Get an existing user
+
+ ```bash
+ curl -i http://localhost:8080/users/john.doe@foobar.com
+ ```
+
+ You should get back a JSON payload with the user details. For example:
+
+ ```json
+ {
+ "email": "john.doe@foobar.com",
+ "firstName": "John",
+ "lastName": "Doe",
+ "city": "Dallas"
+ }
+ ```
+
+1. Try to get a user that does not exist
+
+ ```bash
+ curl -i http://localhost:8080/users/not.there@foobar.com
+ ```
+
+ You should receive an HTTP `404` response.
+
+1. Replace a user
+
+ ```bash
+ curl -i -X PUT -H "Content-Type: application/json" -d '{"email":"john.doe@foobar.com", "firstName": "john", "lastName": "doe","city": "New Jersey"}' http://localhost:8080/users/
+ ```
+
+1. Try to replace a user that does not exist
+
+ ```bash
+ curl -i -X PUT -H "Content-Type: application/json" -d '{"email":"not.there@foobar.com", "firstName": "john", "lastName": "doe","city": "New Jersey"}' http://localhost:8080/users/
+ ```
+
+ You should receive an HTTP `404` response.
+
+1. Delete a user
+
+ ```bash
+ curl -i -X DELETE http://localhost:8080/users/mr.jim@foobar.com
+ ```
+
+1. Delete a user that does not exist
+
+ ```bash
+ curl -X DELETE http://localhost:8080/users/go.nuts@foobar.com
+ ```
+
+ You should receive an HTTP `404` response.
+
+### Access the application using a public IP address (optional)
+
+Creating a Service of type `LoadBalancer` in Azure Kubernetes Service will result in an Azure Load Balancer getting provisioned. You can then access the application using its public IP address.
+
+1. Create a Kubernetes Service of type `LoadBalancer`
+
+ > [!NOTE]
+ > This will create an Azure Load Balancer with a public IP address.
+
+ ```bash
+ kubectl apply -f deploy/load-balancer-service.yaml
+ ```
+
+1. Wait for the Azure Load Balancer to get created. Until then, the `EXTERNAL-IP` for the Kubernetes Service will remain in `<pending>` state.
+
+ ```bash
+ kubectl get service spring-cosmos-app -w
+
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ spring-cosmos-app LoadBalancer 10.0.68.83 <pending> 8080:31523/TCP 6s
+ ```
+
+ > [!NOTE]
+ > `CLUSTER-IP` value may differ in your case
+
+1. Once Azure Load Balancer creation completes, the `EXTERNAL-IP` will be available.
+
+ ```bash
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ spring-cosmos-app LoadBalancer 10.0.68.83 20.81.108.180 8080:31523/TCP 18s
+ ```
+
+ > [!NOTE]
+ > `EXTERNAL-IP` value may differ in your case
+
+1. Use the public IP address
+
+ Terminate the `kubectl watch` process and repeat the above `curl` commands with the public IP address along with port `8080`. For example, to list all users:
+
+ ```bash
+ curl -i http://20.81.108.180:8080/users
+ ```
+
+ > [!NOTE]
+    > Replace `20.81.108.180` with the public IP address for your environment.
+
+## Kubernetes resources for the application
+
+Here are some of the key points related to the Kubernetes resources for this application:
+
+- The Spring Boot application is a Kubernetes `Deployment` based on the [Docker image in Azure Container Registry](https://github.com/Azure-Samples/cosmosdb-springboot-aks/blob/main/deploy/app.yaml#L21)
+- Azure Cosmos DB configuration is mounted in `application.properties` at path `/config` [inside the container](https://github.com/Azure-Samples/cosmosdb-springboot-aks/blob/main/deploy/app.yaml#L26).
+- This is made possible using a [Kubernetes `Volume`](https://github.com/Azure-Samples/cosmosdb-springboot-aks/blob/main/deploy/app.yaml#L15) that in turn refers to a [Kubernetes Secret](https://github.com/Azure-Samples/cosmosdb-springboot-aks/blob/main/deploy/app.yaml#L49), which was created along with the application. You can run the command below to confirm that this file is present within the application container:
+
+ ```bash
+ kubectl exec -it $(kubectl get pods -l=app=spring-cosmos-app -o=jsonpath='{.items[0].metadata.name}') -c spring-cosmos-app -- cat /config/application.properties
+ ```
+
+- [Liveness and Readiness probes](https://github.com/Azure-Samples/cosmosdb-springboot-aks/blob/main/deploy/app.yaml#L34) configuration for this application refer to the HTTP endpoints that are made available by [Spring Boot Actuator](https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html) when a Spring Boot application is [deployed to a Kubernetes environment](https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.kubernetes-probes) - `/actuator/health/liveness` and `/actuator/health/readiness`.
+- A [ClusterIP Service](https://github.com/Azure-Samples/cosmosdb-springboot-aks/blob/main/deploy/app.yaml#L61) can be created to access the REST endpoints of the Spring Boot application *internally* within the Kubernetes cluster.
+- A [LoadBalancer Service](https://github.com/Azure-Samples/cosmosdb-springboot-aks/blob/main/deploy/load-balancer-service.yaml) can be created to access the application via a public IP address.
+
+## Clean up resources
++
+## Next steps
+
+In this tutorial, you've learned how to deploy a Spring Boot application to Azure Kubernetes Service and use it to perform operations on data in an Azure Cosmos DB for NoSQL account.
+
+> [!div class="nextstepaction"]
+> [Spring Data Azure Cosmos DB v3 for API for NoSQL](sdk-java-spring-data-v3.md)
cosmos-db Notebooks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/notebooks-overview.md
Title: |
- Jupyter Notebooks in Azure Cosmos DB (preview)
-description: |
- Create and use built-in Jupyter Notebooks in Azure Cosmos DB to interactively run queries.
+ Title: Jupyter Notebooks in Azure Cosmos DB (preview)
+description: Create and use built-in Jupyter Notebooks in Azure Cosmos DB to interactively run queries.
Last updated 09/29/2022 + # Jupyter Notebooks in Azure Cosmos DB (preview) > [!IMPORTANT] > The Jupyter Notebooks feature of Azure Cosmos DB is currently in a preview state and is progressively rolling out to all customers over time.
Azure Cosmos DB built-in Jupyter Notebooks are directly integrated into the Azur
:::image type="content" source="./media/notebooks/cosmos-notebooks-overview.png" alt-text="Screenshot of various Jupyter Notebooks visualizations in Azure Cosmos DB.":::
-Azure Cosmos DB supports both C# and Python notebooks for the APIs for SQL, Cassandra, Gremlin, Table, and MongoDB. Inside the notebook, you can take advantage of built-in commands and features that make it easy to create Azure Cosmos DB resources. You can also use the built-in commands to upload, query, and visualize your data in Azure Cosmos DB.
+Azure Cosmos DB supports both C# and Python notebooks for the APIs for NoSQL, Apache Cassandra, Apache Gremlin, Table, and MongoDB. Inside the notebook, you can take advantage of built-in commands and features that make it easy to create Azure Cosmos DB resources. You can also use the built-in commands to upload, query, and visualize your data in Azure Cosmos DB.
:::image type="content" source="./media/notebooks/jupyter-notebooks-portal.png" alt-text="Screenshot of Jupyter Notebooks integrated developer environment (IDE) in Azure Cosmos DB.":::
You can import the data from Azure Cosmos containers or the results of queries i
To get started with built-in Jupyter Notebooks in Azure Cosmos DB, see the following articles: -- [Create your first notebook in an Azure Cosmos DB SQL API account](sql/tutorial-create-notebook.md)
+- [Create your first notebook in an Azure Cosmos DB for NoSQL account](nosql/tutorial-create-notebook.md)
- [Review the FAQ on Jupyter Notebook support](notebooks-faq.yml)
cosmos-db Online Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/online-backup-and-restore.md
Title: Online backup and on-demand data restore in Azure Cosmos DB.
description: This article describes how automatic backup, on-demand data restore works. It also explains the difference between continuous and periodic backup modes. + Last updated 06/28/2022
# Online backup and on-demand data restore in Azure Cosmos DB
-Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service. The automatic backups are helpful in scenarios when you accidentally delete or update your Azure Cosmos account, database, or container and later require the data recovery. Azure Cosmos DB backups are encrypted with Microsoft managed service keys. These backups are transferred over a secure non-public network. Which means, backup data remains encrypted while transferred over the wire and at rest. Backups of an account in a given region are uploaded to storage accounts in the same region.
+Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service. The automatic backups are helpful in scenarios when you accidentally delete or update your Azure Cosmos DB account, database, or container and later require data recovery. Azure Cosmos DB backups are encrypted with Microsoft managed service keys and are transferred over a secure non-public network, which means backup data remains encrypted in transit and at rest. Backups of an account in a given region are uploaded to storage accounts in the same region.
## Backup modes
cosmos-db Optimize Cost Reads Writes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-reads-writes.md
+ Last updated 08/26/2021 # Optimize request cost in Azure Cosmos DB This article describes how read and write requests translate into [Request Units](request-units.md) and how to optimize the cost of these requests. Read operations include point reads and queries. Write operations include insert, replace, delete, and upsert of items.
Because point reads (key/value lookups on the item ID) are the most efficient ki
### Queries
-Request units for queries are dependent on a number of factors. For example, the number of Azure Cosmos items loaded/returned, the number of lookups against the index, the query compilation time etc. details. Azure Cosmos DB guarantees that the same query when executed on the same data will always consume the same number of request units even with repeat executions. The query profile using query execution metrics gives you a good idea of how the request units are spent.
+Request units for queries depend on a number of factors: for example, the number of Azure Cosmos DB items loaded/returned, the number of lookups against the index, and the query compilation time. Azure Cosmos DB guarantees that the same query, when executed on the same data, always consumes the same number of request units, even with repeat executions. The query profile using query execution metrics gives you a good idea of how the request units are spent.
In some cases you may see a sequence of 200 and 429 responses and variable request units in a paged execution of queries; that is because queries run as fast as possible based on the available RUs. You may see a query execution break into multiple pages/round trips between the server and client. For example, 10,000 items may be returned as multiple pages, each charged based on the computation performed for that page. When you sum across these pages, you should get the same number of RUs as you would get for the entire query.
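+
+To see this in practice, a minimal sketch with the `@azure/cosmos` JavaScript SDK can report the request charge of a query; the endpoint, key, database, and container names are placeholders:
+
+```javascript
+// Minimal sketch: inspect the request charge (RUs) returned for a query.
+// The endpoint, key, database, and container names are placeholders.
+const { CosmosClient } = require('@azure/cosmos')
+
+async function measureQueryCharge() {
+  const client = new CosmosClient({
+    endpoint: 'https://<your-account>.documents.azure.com:443/',
+    key: '<your-primary-key>'
+  })
+  const container = client.database('ToDoList').container('Items')
+
+  const response = await container.items
+    .query('SELECT * FROM c WHERE c.completed = false')
+    .fetchAll()
+
+  console.log(`Returned ${response.resources.length} item(s)`)
+  console.log(`Request charge: ${response.requestCharge} RUs`)
+}
+
+measureQueryCharge().catch(console.error)
+```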
Next you can proceed to learn more about cost optimization in Azure Cosmos DB wi
* Learn more about [Understanding your Azure Cosmos DB bill](understand-your-bill.md) * Learn more about [Optimizing throughput cost](optimize-cost-throughput.md) * Learn more about [Optimizing storage cost](optimize-cost-storage.md)
-* Learn more about [Optimizing the cost of multi-region Azure Cosmos accounts](optimize-cost-regions.md)
-* Learn more about [Azure Cosmos DB reserved capacity](cosmos-db-reserved-capacity.md)
+* Learn more about [Optimizing the cost of multi-region Azure Cosmos DB accounts](optimize-cost-regions.md)
+* Learn more about [Azure Cosmos DB reserved capacity](reserved-capacity.md)
* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Optimize Cost Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-regions.md
+ Last updated 08/26/2021 # Optimize multi-region cost in Azure Cosmos DB
-You can add and remove regions to your Azure Cosmos account at any time. The throughput that you configure for various Azure Cosmos databases and containers is reserved in each region associated with your account. If the throughput provisioned per hour, that is the sum of RU/s configured across all the databases and containers for your Azure Cosmos account is `T` and the number of Azure regions associated with your database account is `N`, then the total provisioned throughput for your Cosmos account, for a given hour is equal to `T x N RU/s`.
+You can add and remove regions to your Azure Cosmos DB account at any time. The throughput that you configure for various Azure Cosmos DB databases and containers is reserved in each region associated with your account. If the throughput provisioned per hour (that is, the sum of RU/s configured across all the databases and containers for your Azure Cosmos DB account) is `T`, and the number of Azure regions associated with your database account is `N`, then the total provisioned throughput for your Azure Cosmos DB account for a given hour is equal to `T x N` RU/s.
+Provisioned throughput with a single write region costs $0.008/hour per 100 RU/s, and provisioned throughput with multiple writable regions costs $0.016/hour per 100 RU/s. To learn more, see the Azure Cosmos DB [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/).
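+
+As a rough illustration with example numbers only (check the pricing page for current rates), the `T x N` relationship works out as follows:
+
+```javascript
+// Illustrative sketch: estimate multi-region provisioned throughput and its
+// hourly cost using example numbers; rates and values here are placeholders.
+const ruPerSecond = 10000             // T: sum of RU/s configured on the account
+const regions = 3                     // N: number of Azure regions on the account
+const singleWriteRatePer100Ru = 0.008 // $/hour per 100 RU/s (single write region)
+
+const totalRu = ruPerSecond * regions // T x N = 30,000 RU/s
+const hourlyCost = (totalRu / 100) * singleWriteRatePer100Ru
+console.log(`Total provisioned: ${totalRu} RU/s (~$${hourlyCost.toFixed(2)}/hour)`)
+```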
cosmos-db Optimize Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-storage.md
Last updated 08/26/2021--+ # Optimize storage cost in Azure Cosmos DB
-Azure Cosmos DB offers unlimited storage and throughput. Unlike throughput, which you have to provision/configure on your Azure Cosmos containers or databases, the storage is billed based on a consumption basis. You are billed only for the logical storage you consume and you donΓÇÖt have to reserve any storage in advance. Storage automatically scales up and down based on the data that you add or remove to an Azure Cosmos container.
+Azure Cosmos DB offers unlimited storage and throughput. Unlike throughput, which you have to provision/configure on your Azure Cosmos DB containers or databases, storage is billed on a consumption basis. You are billed only for the logical storage you consume, and you don't have to reserve any storage in advance. Storage automatically scales up and down based on the data that you add to or remove from an Azure Cosmos DB container.
## Storage cost
-Storage is billed with the unit of GBs. Local SSD-backed storage is used by your data and indexing. The total storage used is equal to the storage required by the data and indexes used across all the regions where you are using Azure Cosmos DB. If you globally replicate an Azure Cosmos account across three regions, you will pay for the total storage cost in each of those three regions. To estimate your storage requirement, see [capacity planner](https://www.documentdb.com/capacityplanner) tool. The cost for storage in Azure Cosmos DB is $0.25 GB/month, see [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for latest updates. You can set up alerts to determine storage used by your Azure Cosmos container, to monitor your storage, see [Monitor Azure Cosmos DB](./monitor-cosmos-db.md)) article.
+Storage is billed in units of GBs. Local SSD-backed storage is used by your data and indexing. The total storage used is equal to the storage required by the data and indexes used across all the regions where you are using Azure Cosmos DB. If you globally replicate an Azure Cosmos DB account across three regions, you pay for the total storage cost in each of those three regions. To estimate your storage requirement, see the [capacity planner](https://www.documentdb.com/capacityplanner) tool. The cost for storage in Azure Cosmos DB is $0.25 GB/month; see the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the latest updates. You can set up alerts to track the storage used by your Azure Cosmos DB container. To monitor your storage, see the [Monitor Azure Cosmos DB](./monitor.md) article.
## Optimize cost with item size
By default, the data is automatically indexed, which can increase the total stor
## Optimize cost with time to live and change feed
-Once you no longer need the data you can gracefully delete it from your Azure Cosmos account by using [time to live](time-to-live.md), [change feed](change-feed.md) or you can migrate the old data to another data store such as Azure blob storage or Azure data warehouse. With time to live or TTL, Azure Cosmos DB provides the ability to delete items automatically from a container after a certain time period. By default, you can set time to live at the container level and override the value on a per-item basis. After you set the TTL at a container or at an item level, Azure Cosmos DB will automatically remove these items after the time period since the time they were last modified. By using change feed, you can migrate data to either another container in Azure Cosmos DB or to an external data store. The migration takes zero down time and when you are done migrating, you can either delete or configure time to live to delete the source Azure Cosmos container.
+Once you no longer need the data, you can gracefully delete it from your Azure Cosmos DB account by using [time to live](time-to-live.md) or [change feed](change-feed.md), or you can migrate the old data to another data store such as Azure Blob storage or Azure data warehouse. With time to live (TTL), Azure Cosmos DB provides the ability to delete items automatically from a container after a certain time period. By default, you can set time to live at the container level and override the value on a per-item basis. After you set the TTL at a container or at an item level, Azure Cosmos DB automatically removes these items after the time period has elapsed since they were last modified. By using change feed, you can migrate data to either another container in Azure Cosmos DB or to an external data store. The migration takes zero downtime, and when you are done migrating, you can either delete the source Azure Cosmos DB container or configure time to live to delete it.
## Optimize cost with rich media data types
-If you want to store rich media types, for example, videos, images, etc., you have a number of options in Azure Cosmos DB. One option is to store these rich media types as Azure Cosmos items. There is a 2-MB limit per item, and you can avoid this limit by chaining the data item into multiple subitems. Or you can store them in Azure Blob storage and use the metadata to reference them from your Azure Cosmos items. There are a number of pros and cons with this approach. The first approach gets you the best performance in terms of latency, throughput SLAs plus turnkey global distribution capabilities for the rich media data types in addition to your regular Azure Cosmos items. However the support is available at a higher price. By storing media in Azure Blob storage, you could lower your overall costs. If latency is critical, you could use premium storage for rich media files that are referenced from Azure Cosmos items. This integrates natively with CDN to serve images from the edge server at lower cost to circumvent geo-restriction. The down side with this scenario is that you have to deal with two services - Azure Cosmos DB and Azure Blob storage, which may increase operational costs.
+If you want to store rich media types, for example, videos, images, etc., you have a number of options in Azure Cosmos DB. One option is to store these rich media types as Azure Cosmos DB items. There is a 2-MB limit per item, and you can avoid this limit by chaining the data item into multiple subitems. Or you can store them in Azure Blob storage and use the metadata to reference them from your Azure Cosmos DB items. There are a number of pros and cons with this approach. The first approach gets you the best performance in terms of latency, throughput SLAs plus turnkey global distribution capabilities for the rich media data types in addition to your regular Azure Cosmos DB items. However the support is available at a higher price. By storing media in Azure Blob storage, you could lower your overall costs. If latency is critical, you could use premium storage for rich media files that are referenced from Azure Cosmos DB items. This integrates natively with CDN to serve images from the edge server at lower cost to circumvent geo-restriction. The down side with this scenario is that you have to deal with two services - Azure Cosmos DB and Azure Blob storage, which may increase operational costs.
## Check storage consumed
-To check the storage consumption of an Azure Cosmos container, you can run a HEAD or GET request on the container, and inspect the `x-ms-request-quota` and the `x-ms-request-usage` headers. Alternatively, when working with the .NET SDK, you can use the [DocumentSizeQuota](/previous-versions/azure/dn850325(v%3Dazure.100)), and [DocumentSizeUsage](/previous-versions/azure/dn850324(v=azure.100)) properties to get the storage consumed.
+To check the storage consumption of an Azure Cosmos DB container, you can run a HEAD or GET request on the container, and inspect the `x-ms-request-quota` and the `x-ms-request-usage` headers. Alternatively, when working with the .NET SDK, you can use the [DocumentSizeQuota](/previous-versions/azure/dn850325(v%3Dazure.100)), and [DocumentSizeUsage](/previous-versions/azure/dn850324(v=azure.100)) properties to get the storage consumed.
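+
+With the JavaScript SDK, a minimal sketch along these lines surfaces the same response headers so you can look for the quota and usage values described above; the account, database, and container names are placeholders:
+
+```javascript
+// Minimal sketch: print the response headers returned by a container read,
+// which include the storage quota and usage headers described above.
+const { CosmosClient } = require('@azure/cosmos')
+
+async function checkStorage() {
+  const client = new CosmosClient({
+    endpoint: 'https://<your-account>.documents.azure.com:443/',
+    key: '<your-primary-key>'
+  })
+
+  const { headers } = await client.database('ToDoList').container('Items').read()
+  console.log(headers) // inspect the quota and usage headers in this output
+}
+
+checkStorage().catch(console.error)
+```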
## Using SDK
Next you can proceed to learn more about cost optimization in Azure Cosmos DB wi
* Learn more about [Optimizing throughput cost](optimize-cost-throughput.md) * Learn more about [Optimizing the cost of reads and writes](optimize-cost-reads-writes.md) * Learn more about [Optimizing the cost of queries](./optimize-cost-reads-writes.md)
-* Learn more about [Optimizing the cost of multi-region Azure Cosmos accounts](optimize-cost-regions.md)
+* Learn more about [Optimizing the cost of multi-region Azure Cosmos DB accounts](optimize-cost-regions.md)
* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Optimize Cost Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-throughput.md
Last updated 08/26/2021 ms.devlang: csharp--+ # Optimize provisioned throughput cost in Azure Cosmos DB By offering a provisioned throughput model, Azure Cosmos DB offers predictable performance at any scale. Reserving or provisioning throughput ahead of time eliminates the "noisy neighbor effect" on your performance. You specify the exact amount of throughput you need and Azure Cosmos DB guarantees the configured throughput, backed by SLA.
-You can start with a minimum throughput of 400 RU/sec and scale up to tens of millions of requests per second or even more. Each request you issue against your Azure Cosmos containerΓÇèor database, such as a read request, write request, query request, stored procedures have a corresponding cost that is deducted from your provisioned throughput. If you provision 400 RU/s and issue a query that costs 40 RUs, you will be able to issue 10 such queries per second. Any request beyond that will get rate-limited and you should retry the request. If you are using client drivers, they support the automatic retry logic.
+You can start with a minimum throughput of 400 RU/sec and scale up to tens of millions of requests per second or even more. Each request you issue against your Azure Cosmos DB container or database, such as a read, write, query, or stored procedure request, has a corresponding cost that is deducted from your provisioned throughput. If you provision 400 RU/s and issue a query that costs 40 RUs, you will be able to issue 10 such queries per second. Any request beyond that gets rate-limited, and you should retry the request. If you are using client drivers, they support the automatic retry logic.
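+
+With the JavaScript SDK, for example, the automatic retry behavior can be tuned when you construct the client. The following is a minimal sketch; the option names reflect version 3 of the `@azure/cosmos` package, and the values shown are arbitrary examples rather than recommendations:
+
+```javascript
+// Minimal sketch: tune the SDK's automatic retries for rate-limited (429)
+// requests. The endpoint, key, and option values are placeholders/examples.
+const { CosmosClient } = require('@azure/cosmos')
+
+const client = new CosmosClient({
+  endpoint: 'https://<your-account>.documents.azure.com:443/',
+  key: '<your-primary-key>',
+  connectionPolicy: {
+    retryOptions: {
+      maxRetryAttemptCount: 9,             // retries on throttled requests
+      fixedRetryIntervalInMilliseconds: 0, // 0 = honor the server's retry-after
+      maxWaitTimeInSeconds: 30             // give up after this cumulative wait
+    }
+  }
+})
+```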
You can provision throughput on databases or containers and each strategy can help you save on costs depending on the scenario.
You can provision throughput on databases or containers and each strategy can he
The following are some guidelines to decide on a provisioned throughput strategy:
-**Consider provisioning throughput on an Azure Cosmos database (containing a set of containers) if**:
+**Consider provisioning throughput on an Azure Cosmos DB database (containing a set of containers) if**:
-1. You have a few dozen Azure Cosmos containers and want to share throughput across some or all of them.
+1. You have a few dozen Azure Cosmos DB containers and want to share throughput across some or all of them.
2. You are migrating from a single-tenant database designed to run on IaaS-hosted VMs or on-premises (for example, a NoSQL or relational database) to Azure Cosmos DB, and you have many collections/tables/graphs and do not want to make any changes to your data model. Note that you might have to compromise some of the benefits offered by Azure Cosmos DB if you are not updating your data model when migrating from an on-premises database. It's recommended that you always reassess your data model to get the most in terms of performance and also to optimize for costs.
The following are some guidelines to decide on a provisioned throughput strategy
**Consider provisioning throughput on an individual container if:**
-1. You have a few Azure Cosmos containers. Because Azure Cosmos DB is schema-agnostic, a container can contain items that have heterogeneous schemas and does not require customers to create multiple container types, one for each entity. It is always an option to consider if grouping separate say 10-20 containers into a single container makes sense. With a 400 RUs minimum for containers, pooling all 10-20 containers into one could be more cost effective.
+1. You have a few Azure Cosmos DB containers. Because Azure Cosmos DB is schema-agnostic, a container can contain items that have heterogeneous schemas and does not require customers to create multiple container types, one for each entity. It is always worth considering whether grouping, say, 10-20 separate containers into a single container makes sense. With a 400 RU/s minimum for containers, pooling all 10-20 containers into one could be more cost effective.
2. You want to control the throughput on a specific container and get the guaranteed throughput on a given container backed by SLA. **Consider a hybrid of the above two strategies:**
-1. As mentioned earlier, Azure Cosmos DB allows you to mix and match the above two strategies, so you can now have some containers within Azure Cosmos database, which may share the throughput provisioned on the database as well as, some containers within the same database, which may have dedicated amounts of provisioned throughput.
+1. As mentioned earlier, Azure Cosmos DB allows you to mix and match the above two strategies, so you can have some containers within an Azure Cosmos DB database that share the throughput provisioned on the database, as well as some containers within the same database that have dedicated amounts of provisioned throughput.
2. You can apply the above strategies to come up with a hybrid configuration, where you have both database level provisioned throughput with some containers having dedicated throughput.
As shown in the following table, depending on the choice of API, you can provisi
|API|For **shared** throughput, configure |For **dedicated** throughput, configure |
|-|-|-|
-|SQL API|Database|Container|
+|API for NoSQL|Database|Container|
|Azure Cosmos DB's API for MongoDB|Database|Collection|
-|Cassandra API|Keyspace|Table|
-|Gremlin API|Database account|Graph|
-|Table API|Database account|Table|
+|API for Cassandra|Keyspace|Table|
+|API for Gremlin|Database account|Graph|
+|API for Table|Database account|Table|
By provisioning throughput at different levels, you can optimize your costs based on the characteristics of your workload. As mentioned earlier, you can programmatically increase or decrease your provisioned throughput at any time, for either individual containers or collectively across a set of containers. By elastically scaling throughput as your workload changes, you only pay for the throughput that you have configured. If your container or a set of containers is distributed across multiple regions, then the throughput you configure on the container or set of containers is guaranteed to be made available across all regions.
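To illustrate the shared and dedicated options from the table above, here's a minimal sketch using the `azure-cosmos` Python SDK (an assumption, since this article isn't tied to a specific SDK); the endpoint, key, and names are placeholders, and the RU values are the 400 RU/s minimums discussed earlier.

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint and key; substitute your own account values.
client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")

# Shared throughput: 400 RU/s provisioned on the database and shared by
# every container created in it without its own throughput setting.
database = client.create_database_if_not_exists(id="shared-db", offer_throughput=400)

# Dedicated throughput: this container gets its own 400 RU/s budget.
container = database.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),
    offer_throughput=400,
)

# Provisioned throughput can be scaled up or down programmatically at any time.
container.replace_throughput(1000)
```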
To determine the provisioned throughput for a new workload, you can use the foll
2. It's recommended to create the containers with higher throughput than expected and then scale down as needed.
-3. It's recommended to use one of the native Azure Cosmos DB SDKs to benefit from automatic retries when requests get rate-limited. If you're working on a platform that is not supported and use Cosmos DB's REST API, implement your own retry policy using the `x-ms-retry-after-ms` header.
+3. It's recommended to use one of the native Azure Cosmos DB SDKs to benefit from automatic retries when requests get rate-limited. If you're working on a platform that isn't supported and you use Azure Cosmos DB's REST API, implement your own retry policy using the `x-ms-retry-after-ms` header (see the sketch after this list).
4. Make sure that your application code gracefully supports the case when all retries fail.
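For platforms without a native SDK, the retry policy mentioned in step 3 boils down to honoring HTTP 429 responses and the `x-ms-retry-after-ms` header. A minimal Python sketch follows; `send_request` is a hypothetical callable standing in for however you issue the signed REST call.

```python
import time

def call_with_retries(send_request, max_retries=5):
    """Retry a rate-limited Azure Cosmos DB REST call.

    `send_request` is a hypothetical zero-argument callable that issues the
    signed HTTP request and returns an object with `status_code` and
    `headers` attributes (a `requests.Response`, for example).
    """
    for _ in range(max_retries + 1):
        response = send_request()
        if response.status_code != 429:
            return response
        # The service reports how long to back off, in milliseconds.
        delay_ms = float(response.headers.get("x-ms-retry-after-ms", "1000"))
        time.sleep(delay_ms / 1000.0)
    # All retries exhausted; surface the failure to the caller (see step 4).
    raise RuntimeError("Request was still rate-limited after all retries")
```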
The following steps help you to make your solutions highly scalable and cost-eff
1. If you have significantly over-provisioned throughput across containers and databases, you should review provisioned RUs versus consumed RUs and fine-tune the workloads.
-2. One method for estimating the amount of reserved throughput required by your application is to record the request unit RU charge associated with running typical operations against a representative Azure Cosmos container or database used by your application and then estimate the number of operations you anticipate to perform each second. Be sure to measure and include typical queries and their usage as well. To learn how to estimate RU costs of queries programmatically or using portal see [Optimizing the cost of queries](./optimize-cost-reads-writes.md).
+2. One method for estimating the amount of reserved throughput required by your application is to record the request unit (RU) charge associated with running typical operations against a representative Azure Cosmos DB container or database used by your application, and then estimate the number of operations you anticipate performing each second. Be sure to measure and include typical queries and their usage as well. To learn how to estimate RU costs of queries programmatically or by using the portal, see [Optimizing the cost of queries](./optimize-cost-reads-writes.md).
3. Another way to get operations and their costs in RUs is by enabling Azure Monitor logs, which gives you the breakdown of operation/duration and the request charge. Azure Cosmos DB returns the request charge for every operation, so you can capture each operation's charge from the response and use it for analysis (see the sketch after this list). 4. You can elastically scale provisioned throughput up and down as needed to accommodate your workload.
-5. You can add and remove regions associated with your Azure Cosmos account as you need and control costs.
+5. You can add and remove regions associated with your Azure Cosmos DB account as you need and control costs.
6. Make sure you have an even distribution of data and workloads across the logical partitions of your containers. An uneven partition distribution may cause you to provision more throughput than is actually needed. If you identify a skewed distribution, we recommend redistributing the workload evenly across the partitions or repartitioning the data.
-7. If you have many containers and these containers do not require SLAs, you can use the database-based offer for the cases where the per container throughput SLAs do not apply. You should identify which of the Azure Cosmos containers you want to migrate to the database level throughput offer and then migrate them by using a change feed-based solution.
+7. If you have many containers and these containers do not require SLAs, you can use the database-based offer for the cases where the per container throughput SLAs do not apply. You should identify which of the Azure Cosmos DB containers you want to migrate to the database level throughput offer and then migrate them by using a change feed-based solution.
-8. Consider using the "Cosmos DB Free Tier" (free for one year), Try Cosmos DB (up to three regions) or downloadable Cosmos DB emulator for dev/test scenarios. By using these options for test-dev, you can substantially lower your costs.
+8. Consider using the "Azure Cosmos DB Free Tier" (free for one year), Try Azure Cosmos DB (up to three regions), or the downloadable Azure Cosmos DB emulator for dev/test scenarios. By using these options for dev/test, you can substantially lower your costs.
9. You can further perform workload-specific cost optimizations, for example, increasing batch size, load-balancing reads across multiple regions, and de-duplicating data, if applicable.
-10. With Azure Cosmos DB reserved capacity, you can get significant discounts for up to 65% for three years. Azure Cosmos DB reserved capacity model is an upfront commitment on requests units needed over time. The discounts are tiered such that the more request units you use over a longer period, the more your discount will be. These discounts are applied immediately. Any RUs used above your provisioned values are charged based on the non-reserved capacity cost. See [Cosmos DB reserved capacity](cosmos-db-reserved-capacity.md)) for more details. Consider purchasing reserved capacity to further lower your provisioned throughput costs.
+10. With Azure Cosmos DB reserved capacity, you can get significant discounts of up to 65% for three years. The Azure Cosmos DB reserved capacity model is an upfront commitment on the request units needed over time. The discounts are tiered such that the more request units you use over a longer period, the larger your discount will be. These discounts are applied immediately. Any RUs used above your provisioned values are charged based on the non-reserved capacity cost. See [Azure Cosmos DB reserved capacity](reserved-capacity.md) for more details. Consider purchasing reserved capacity to further lower your provisioned throughput costs.
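As referenced in items 2 and 3 of this list, every response includes the operation's RU charge, which you can record for analysis. Here's a minimal sketch with the `azure-cosmos` Python SDK; reading `last_response_headers` follows the pattern in the SDK samples, and the account, database, container, and query values are placeholders.

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
container = client.get_database_client("appdb").get_container_client("orders")

# Run a representative query, then read the RU charge reported by the service
# in the x-ms-request-charge response header.
items = list(container.query_items(
    query="SELECT * FROM c WHERE c.customerId = @id",
    parameters=[{"name": "@id", "value": "42"}],
    enable_cross_partition_query=False,
))
charge = container.client_connection.last_response_headers["x-ms-request-charge"]
print(f"Query returned {len(items)} items and consumed {charge} RUs")
```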
## Next steps
Next you can proceed to learn more about cost optimization in Azure Cosmos DB wi
* Learn more about [Optimizing storage cost](optimize-cost-storage.md) * Learn more about [Optimizing the cost of reads and writes](optimize-cost-reads-writes.md) * Learn more about [Optimizing the cost of queries](./optimize-cost-reads-writes.md)
-* Learn more about [Optimizing the cost of multi-region Azure Cosmos accounts](optimize-cost-regions.md)
+* Learn more about [Optimizing the cost of multi-region Azure Cosmos DB accounts](optimize-cost-regions.md)
cosmos-db Optimize Dev Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-dev-test.md
+ Last updated 08/26/2021 # Optimize development and testing cost in Azure Cosmos DB This article describes the different options to use Azure Cosmos DB for development and testing free of cost, as well as techniques to optimize costs in development or test accounts.
Azure Cosmos DB is included in the [Azure free account](https://azure.microsoft.
## Azure Cosmos DB serverless
-[Azure Cosmos DB serverless](serverless.md) lets you use your Azure Cosmos account in a consumption-based fashion where you are only charged for the Request Units consumed by your database operations and the storage consumed by your data. There is no minimum charge involved when using Azure Cosmos DB in serverless mode. Because it eliminates the concept of provisioned capacity, it is best suited for development or testing activities specifically when your database is idle most of the time.
+[Azure Cosmos DB serverless](serverless.md) lets you use your Azure Cosmos DB account in a consumption-based fashion where you are only charged for the Request Units consumed by your database operations and the storage consumed by your data. There is no minimum charge involved when using Azure Cosmos DB in serverless mode. Because it eliminates the concept of provisioned capacity, it is best suited for development or testing activities specifically when your database is idle most of the time.
## Use shared throughput databases
You can get started with using the emulator or the free Azure Cosmos DB accounts
* Learn more about [Optimizing storage cost](optimize-cost-storage.md) * Learn more about [Optimizing the cost of reads and writes](optimize-cost-reads-writes.md) * Learn more about [Optimizing the cost of queries](./optimize-cost-reads-writes.md)
-* Learn more about [Optimizing the cost of multi-region Azure Cosmos accounts](optimize-cost-regions.md)
+* Learn more about [Optimizing the cost of multi-region Azure Cosmos DB accounts](optimize-cost-regions.md)
* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update-getting-started.md
Title: Getting started with Azure Cosmos DB Partial Document Update
description: This article provides examples for how to use Partial Document Update with the .NET, Java, and Node SDKs -+ Last updated 12/09/2021 -+ # Azure Cosmos DB Partial Document Update: Getting Started This article provides examples illustrating how to use Partial Document Update with the .NET, Java, and Node SDKs. This article also details common errors that you may encounter. Code samples for the following scenarios have been provided:
This article provides examples illustrating for how to use Partial Document Upda
## [.NET](#tab/dotnet)
-Support for Partial document update (Patch API) in the [Azure Cosmos DB .NET v3 SDK](sql/sql-api-sdk-dotnet-standard.md) is available from version *3.23.0* onwards. You can download it from the [NuGet Gallery](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.23.0)
+Support for Partial document update (Patch API) in the [Azure Cosmos DB .NET v3 SDK](nosql/sdk-dotnet-v3.md) is available from version *3.23.0* onwards. You can download it from the [NuGet Gallery](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.23.0)
> [!NOTE] > A complete partial document update sample can be found in the [.NET v3 samples repository](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs) on GitHub.
Support for Partial document update (Patch API) in the [Azure Cosmos DB .NET v3
## [Java](#tab/java)
-Support for Partial document update (Patch API) in the [Azure Cosmos DB Java v4 SDK](sql/sql-api-sdk-java-v4.md) is available from version *4.21.0* onwards. You can either add it to the list of dependencies in your `pom.xml` or download it directly from [Maven](https://mvnrepository.com/artifact/com.azure/azure-cosmos).
+Support for Partial document update (Patch API) in the [Azure Cosmos DB Java v4 SDK](nosql/sdk-java-v4.md) is available from version *4.21.0* onwards. You can either add it to the list of dependencies in your `pom.xml` or download it directly from [Maven](https://mvnrepository.com/artifact/com.azure/azure-cosmos).
```xml <dependency>
Support for Partial document update (Patch API) in the [Azure Cosmos DB Java v4
## [Node.js](#tab/nodejs)
-Support for Partial document update (Patch API) in the [Azure Cosmos DB JavaScript SDK](sql/sql-api-sdk-node.md) is available from version *3.15.0* onwards. You can download it from the [npm Registry](https://www.npmjs.com/package/@azure/cosmos/v/3.15.0)
+Support for Partial document update (Patch API) in the [Azure Cosmos DB JavaScript SDK](nosql/sdk-nodejs.md) is available from version *3.15.0* onwards. You can download it from the [npm Registry](https://www.npmjs.com/package/@azure/cosmos/v/3.15.0)
> [!NOTE] > A complete partial document update sample can be found in the [.js v3 samples repository](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples/v3/typescript/src/ItemManagement.ts#L167) on GitHub. In the sample, as the container is created without a partition key specified, the JavaScript SDK
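The tabs above cover the .NET, Java, and Node SDKs. To show the shape of a patch request in one more place, here's a hedged sketch using the `azure-cosmos` Python SDK; the availability of `patch_item` (and its minimum SDK version), plus the item ID, partition key, and paths, are assumptions not covered by this article.

```python
from azure.cosmos import CosmosClient

# Assumes a recent azure-cosmos Python SDK that exposes patch_item;
# check your SDK version before relying on this.
client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
container = client.get_database_client("appdb").get_container_client("products")

# Each operation targets a path inside the document, so the client doesn't
# have to read and replace the whole document over the wire.
operations = [
    {"op": "set", "path": "/onSale", "value": True},
    {"op": "replace", "path": "/price", "value": 19.99},
]
container.patch_item(item="item-id-1", partition_key="category-1", patch_operations=operations)
```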
cosmos-db Partial Document Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update.md
Title: Partial document update in Azure Cosmos DB
description: Learn about partial document update in Azure Cosmos DB. -+ Last updated 04/29/2022 -+ # Partial document update in Azure Cosmos DB The Azure Cosmos DB Partial Document Update feature (also known as Patch API) provides a convenient way to modify a document in a container. Currently, to update a document, the client needs to read it, execute Optimistic Concurrency Control checks (if necessary), update the document locally, and then send it over the wire as a whole-document Replace API call.
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
Last updated 03/24/2022-+ # Partitioning and horizontal scaling in Azure Cosmos DB Azure Cosmos DB uses partitioning to scale individual containers in a database to meet the performance needs of your application. In partitioning, the items in a container are divided into distinct subsets called *logical partitions*. Logical partitions are formed based on the value of a *partition key* that is associated with each item in a container. All the items in a logical partition have the same partition key value.
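To make the partition key concept concrete, here's a minimal sketch with the `azure-cosmos` Python SDK that creates a container partitioned on a hypothetical `/tenantId` property; the account values and names are placeholders.

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
database = client.create_database_if_not_exists(id="appdb")

# Every item that shares the same /tenantId value lands in the same logical
# partition, which the service maps onto physical partitions for scale-out.
container = database.create_container_if_not_exists(
    id="events",
    partition_key=PartitionKey(path="/tenantId"),
)

container.upsert_item({"id": "evt-1", "tenantId": "tenant-6", "type": "page_view"})
```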
Some things to consider when selecting the *item ID* as the partition key includ
* Learn about [provisioned throughput in Azure Cosmos DB](request-units.md). * Learn about [global distribution in Azure Cosmos DB](distribute-data-globally.md).
-* Learn how to [provision throughput on an Azure Cosmos container](how-to-provision-container-throughput.md).
-* Learn how to [provision throughput on an Azure Cosmos database](how-to-provision-database-throughput.md).
+* Learn how to [provision throughput on an Azure Cosmos DB container](how-to-provision-container-throughput.md).
+* Learn how to [provision throughput on an Azure Cosmos DB database](how-to-provision-database-throughput.md).
* See the training module on how to [Model and partition your data in Azure Cosmos DB.](/training/modules/model-partition-data-azure-cosmos-db/) * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
cosmos-db Partners Migration Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partners-migration-cosmosdb.md
- Title: Migration and application development partners for Azure Cosmos DB
-description: Lists Microsoft partners with migration solutions that support Azure Cosmos DB.
----- Previously updated : 08/26/2021--
-# Azure Cosmos DB NoSQL migration and application development partners
-
-From NoSQL migration to application development, you can choose from a variety of experienced systems integrator partners and tools to support your Azure Cosmos DB solutions. This article lists the partners who have solutions or services that use Azure Cosmos DB. This list changes over time, Microsoft is not responsible to any changes or updates made to the solutions of these partners.
-
-## Systems Integrator and tooling partners
-
-|**Partner** |**Capabilities & experience** |**Supported countries/regions** |
-||||
-|[Striim](https://www.striim.com/) | Continuous, real-time data movement, Data migration| USA |
-| [10thMagnitude](https://www.10thmagnitude.com/) | IoT, Retail (inventory), Operational Analytics (Spark), Serverless architecture, App development | USA |
-|[Altoros Development LLC](https://www.altoros.com/) | IoT, Personalization Retail (inventory), Serverless architectures NoSQL migration, App development| USA |
-|[Avanade](https://www.avanade.com/) | IoT, Retail (inventory), Serverless Architecture, App development | Austria, Germany, Switzerland, Italy, Norway, Spain, UK, Canada |
-|[Accenture](https://www.accenture.com/) | IoT, Retail (inventory), Serverless Architecture, App development |Global|
-|Capax Global LLC | IoT, Personalization, Retail (inventory), Operational Analytics (Spark), Serverless architecture, App development| USA |
-| [Capgemini](https://www.capgemini.com/) | Retail (inventory), IoT, Operational Analytics (Spark), App development | USA, France, UK, Netherlands, Finland |
-| [Cognizant](https://www.cognizant.com/) | IoT, Personalization, Retail (inventory), Operational Analytics (Spark), App development |USA, Canada, UK, Denmark, Netherlands, Switzerland, Australia, Japan |
-|[Infosys](https://www.infosys.com/) | App development | USA |
-| [Lambda3 Informatics](https://www.lambda3.com.br/) | Real-time personalization, Retail inventory, App development | Brazil|
-|[Neal Analytics](https://www.nealanalytics.com/) | Personalization, Retail (inventory), Operational Analytics (Spark), App development | USA |
-|[Pragmatic Works Software Inc](https://www.pragmaticworks.com/) | NoSQL migration | USA |
-| [Ricoh Digital Experience](https://www.ricoh-europe.com/contact-us) | IoT, Real-time personalization, Retail inventory, NoSQL migration | UK, Europe |
-|[SNP Technologies](https://www.snp.com/) | NoSQL migration| USA |
-| [Solidsoft Reply](https://www.reply.com/solidsoft-reply/) | NoSQL migration | Croatia, Sweden, Denmark, Ireland, Bulgaria, Slovenia, Cyprus, Malta, Lithuania, the Czech Republic, Iceland, and Switzerland and Liechtenstein|
-| [Spanish Point Technologies](https://www.spanishpoint.ie/) | NoSQL migration| Ireland|
-| [Syone](https://www.syone.com/) | NoSQL migration| Portugal|
-|[Tallan](https://www.tallan.com/) | App development | USA |
-| [TCS](https://www.tcs.com/) | App development | USA, UK, France, Malaysia, Denmark, Norway, Sweden|
-|[VTeamLabs](https://www.vteamlabs.com/) | Personalization, Retail (inventory), IoT, Gaming, Operational Analytics (Spark), Serverless architecture, NoSQL Migration, App development | USA |
-| [White Duck GmbH](https://whiteduck.de/en/) |New app development, App Backend, Storage for document-based data| Germany |
-| [Xpand IT](https://www.xpand-it.com/) | New app development | Portugal, UK|
-| [Hanu](https://hanu.com/) | IoT, App development | USA|
-| [Incycle Software](https://www.incyclesoftware.com/) | NoSQL migration, Serverless architecture, App development| USA|
-| [Orion](https://www.orioninc.com/) | Personalization, Retail (inventory), Operational Analytics (Spark), IoT, App development| USA, Canada|
-
-## Next steps
-
-To learn more about some of Microsoft's other partners, see the [Microsoft Partner site](https://partner.microsoft.com/).
-
-Trying to do capacity planning for a migration to Azure Cosmos DB?
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-<!--Image references-->
-[2]: ./media/partners-migration-cosmosdb/striim_logo.png
-[3]: ./media/partners-migration-cosmosdb/altoros_logo.png
-[4]: ./media/partners-migration-cosmosdb/attunix_logo.png
-[6]: ./media/partners-migration-cosmosdb/capaxglobal_logo.png
-[7]: ./media/partners-migration-cosmosdb/coeo_logo.png
-[8]: ./media/partners-migration-cosmosdb/infosys_logo.png
-[9]: ./media/partners-migration-cosmosdb/nealanalytics_logo.png
-[10]: ./media/partners-migration-cosmosdb/pragmaticworks_logo.png
-[11]: ./media/partners-migration-cosmosdb/tallan_logo.png
-[12]: ./media/partners-migration-cosmosdb/vteamlabs_logo.png
-[13]: ./media/partners-migration-cosmosdb/10thmagnitude_logo.png
-[14]: ./media/partners-migration-cosmosdb/capgemini_logo.png
-[15]: ./media/partners-migration-cosmosdb/cognizant_logo.png
-[16]: ./media/partners-migration-cosmosdb/laglash_logo.png
-[17]: ./media/partners-migration-cosmosdb/lambda3_logo.png
-[18]: ./media/partners-migration-cosmosdb/ricoh_logo.png
-[19]: ./media/partners-migration-cosmosdb/snp_technologies_logo.png
-[20]: ./media/partners-migration-cosmosdb/solidsoft_reply_logo.png
-[21]: ./media/partners-migration-cosmosdb/spanish_point_logo.png
-[22]: ./media/partners-migration-cosmosdb/syone_logo.png
-[23]: ./media/partners-migration-cosmosdb/tcs_logo.png
-[24]: ./media/partners-migration-cosmosdb/whiteduck_logo.png
-[25]: ./media/partners-migration-cosmosdb/xpandit_logo.png
-[26]: ./media/partners-migration-cosmosdb/avanade_logo.png
cosmos-db Partners Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partners-migration.md
+
+ Title: Migration and application development partners for Azure Cosmos DB
+description: Lists Microsoft partners with migration solutions that support Azure Cosmos DB.
++++++ Last updated : 08/26/2021++
+# Azure Cosmos DB NoSQL migration and application development partners
+
+From NoSQL migration to application development, you can choose from a variety of experienced systems integrator partners and tools to support your Azure Cosmos DB solutions. This article lists the partners who have solutions or services that use Azure Cosmos DB. This list changes over time; Microsoft is not responsible for any changes or updates made to the solutions of these partners.
+
+## Systems Integrator and tooling partners
+
+|**Partner** |**Capabilities & experience** |**Supported countries/regions** |
+||||
+|[Striim](https://www.striim.com/) | Continuous, real-time data movement, Data migration| USA |
+| [10thMagnitude](https://www.10thmagnitude.com/) | IoT, Retail (inventory), Operational Analytics (Spark), Serverless architecture, App development | USA |
+|[Altoros Development LLC](https://www.altoros.com/) | IoT, Personalization Retail (inventory), Serverless architectures NoSQL migration, App development| USA |
+|[Avanade](https://www.avanade.com/) | IoT, Retail (inventory), Serverless Architecture, App development | Austria, Germany, Switzerland, Italy, Norway, Spain, UK, Canada |
+|[Accenture](https://www.accenture.com/) | IoT, Retail (inventory), Serverless Architecture, App development |Global|
+|Capax Global LLC | IoT, Personalization, Retail (inventory), Operational Analytics (Spark), Serverless architecture, App development| USA |
+| [Capgemini](https://www.capgemini.com/) | Retail (inventory), IoT, Operational Analytics (Spark), App development | USA, France, UK, Netherlands, Finland |
+| [Cognizant](https://www.cognizant.com/) | IoT, Personalization, Retail (inventory), Operational Analytics (Spark), App development |USA, Canada, UK, Denmark, Netherlands, Switzerland, Australia, Japan |
+|[Infosys](https://www.infosys.com/) | App development | USA |
+| [Lambda3 Informatics](https://www.lambda3.com.br/) | Real-time personalization, Retail inventory, App development | Brazil|
+|[Neal Analytics](https://www.nealanalytics.com/) | Personalization, Retail (inventory), Operational Analytics (Spark), App development | USA |
+|[Pragmatic Works Software Inc](https://www.pragmaticworks.com/) | NoSQL migration | USA |
+| [Ricoh Digital Experience](https://www.ricoh-europe.com/contact-us) | IoT, Real-time personalization, Retail inventory, NoSQL migration | UK, Europe |
+|[SNP Technologies](https://www.snp.com/) | NoSQL migration| USA |
+| [Solidsoft Reply](https://www.reply.com/solidsoft-reply/) | NoSQL migration | Croatia, Sweden, Denmark, Ireland, Bulgaria, Slovenia, Cyprus, Malta, Lithuania, the Czech Republic, Iceland, and Switzerland and Liechtenstein|
+| [Spanish Point Technologies](https://www.spanishpoint.ie/) | NoSQL migration| Ireland|
+| [Syone](https://www.syone.com/) | NoSQL migration| Portugal|
+|[Tallan](https://www.tallan.com/) | App development | USA |
+| [TCS](https://www.tcs.com/) | App development | USA, UK, France, Malaysia, Denmark, Norway, Sweden|
+|[VTeamLabs](https://www.vteamlabs.com/) | Personalization, Retail (inventory), IoT, Gaming, Operational Analytics (Spark), Serverless architecture, NoSQL Migration, App development | USA |
+| [White Duck GmbH](https://whiteduck.de/en/) |New app development, App Backend, Storage for document-based data| Germany |
+| [Xpand IT](https://www.xpand-it.com/) | New app development | Portugal, UK|
+| [Hanu](https://hanu.com/) | IoT, App development | USA|
+| [Incycle Software](https://www.incyclesoftware.com/) | NoSQL migration, Serverless architecture, App development| USA|
+| [Orion](https://www.orioninc.com/) | Personalization, Retail (inventory), Operational Analytics (Spark), IoT, App development| USA, Canada|
+
+## Next steps
+
+To learn more about some of Microsoft's other partners, see the [Microsoft Partner site](https://partner.microsoft.com/).
+
+Trying to do capacity planning for a migration to Azure Cosmos DB?
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Performance Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/performance-levels.md
Last updated 06/04/2018--+ # Retiring the S1, S2, and S3 performance levels > [!IMPORTANT] > The S1, S2, and S3 performance levels discussed in this article are being retired and are no longer available for new Azure Cosmos DB accounts.
You can migrate from the S1, S2, and S3 performance levels to single partition c
### Migrate to single partition collections by using the .NET SDK
-This section only covers changing a collection's performance level using the [SQL .NET API](sql-api-sdk-dotnet.md), but the process is similar for our other SDKs.
+This section only covers changing a collection's performance level using the [SQL .NET API](nosql/sdk-dotnet-v3.md), but the process is similar for our other SDKs.
Here is a code snippet for changing the collection throughput to 5,000 request units per second:
EA customers will be price protected until the end of their current contract.
## Next steps To learn more about pricing and managing data with Azure Cosmos DB, explore these resources:
-1. [Partitioning data in Cosmos DB](partitioning-overview.md). Understand the difference between single partition container and partitioned containers, as well as tips on implementing a partitioning strategy to scale seamlessly.
-2. [Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/). Learn about the cost of provisioning throughput and consuming storage.
-3. [Request units](request-units.md). Understand the consumption of throughput for different operation types, for example Read, Write, Query.
+1. [Partitioning data in Azure Cosmos DB](partitioning-overview.md). Understand the difference between single partition container and partitioned containers, as well as tips on implementing a partitioning strategy to scale seamlessly.
+2. [Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/). Learn about the cost of provisioning throughput and consuming storage.
+3. [Request units](request-units.md). Understand the consumption of throughput for different operation types, for example Read, Write, Query.
cosmos-db Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/plan-manage-costs.md
description: Learn how to plan for and manage costs for Azure Cosmos DB by using
-+ Last updated 10/08/2021 # Plan and manage costs for Azure Cosmos DB This article describes how you can plan and manage costs for Azure Cosmos DB. First, you use the Azure Cosmos DB capacity calculator to estimate your workload cost before you create any resources. Later you can review the estimated cost and start creating your resources.
As an aid for estimating costs, it can be helpful to do capacity planning for a
### Estimate provisioned throughput costs
-If you plan to use Azure Cosmos DB in provisioned throughput mode, use the [Azure Cosmos DB capacity calculator](https://cosmos.azure.com/capacitycalculator/) to estimate costs before you create the resources in an Azure Cosmos account. The capacity calculator is used to get an estimate of the required throughput and cost of your workload. The capacity calculator is currently available for SQL API, Cassandra API, and API for MongoDB only.
+If you plan to use Azure Cosmos DB in provisioned throughput mode, use the [Azure Cosmos DB capacity calculator](https://cosmos.azure.com/capacitycalculator/) to estimate costs before you create the resources in an Azure Cosmos DB account. The capacity calculator is used to get an estimate of the required throughput and cost of your workload. The capacity calculator is currently available for API for NoSQL, Cassandra, and MongoDB only.
-Configuring your Azure Cosmos databases and containers with the right amount of provisioned throughput, or [Request Units (RU/s)](request-units.md), for your workload is essential to optimize the cost and performance. You have to input details such as API type, number of regions, item size, read/write requests per second, total data stored to get a cost estimate. To learn more about the capacity calculator, see the [estimate](estimate-ru-with-capacity-planner.md) article.
+Configuring your Azure Cosmos DB databases and containers with the right amount of provisioned throughput, or [Request Units (RU/s)](request-units.md), for your workload is essential to optimize cost and performance. You have to input details such as API type, number of regions, item size, read/write requests per second, and total data stored to get a cost estimate. To learn more about the capacity calculator, see the [estimate](estimate-ru-with-capacity-planner.md) article.
> [!TIP] > To make sure you never exceed the provisioned throughput you've budgeted, [limit your account's total provisioned throughput](./limit-total-account-throughput.md)
You can pay for Azure Cosmos DB charges with your Azure Prepayment credit. Howev
As you start using Azure Cosmos DB resources from Azure portal, you can see the estimated costs. Use the following steps to review the cost estimate:
-1. Sign into the Azure portal and navigate to your Azure Cosmos account.
+1. Sign in to the Azure portal and navigate to your Azure Cosmos DB account.
1. Go to the **Overview** section. 1. Check the **Cost** chart at the bottom. This chart shows an estimate of your current cost over a configurable time period: 1. Create a new container such as a graph container.
The following are some best practices you can use to reduce the costs:
* [Optimize development/testing cost](optimize-dev-test.md) - Learn how to optimize your development cost by using the local emulator, the Azure Cosmos DB free tier, Azure free account and few other options.
-* [Optimize cost with reserved capacity](cosmos-db-reserved-capacity.md) - Learn how to use reserved capacity to save money by committing to a reservation for Azure Cosmos DB resources for either one year or three years.
+* [Optimize cost with reserved capacity](reserved-capacity.md) - Learn how to use reserved capacity to save money by committing to a reservation for Azure Cosmos DB resources for either one year or three years.
## Next steps
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
# Azure Policy built-in definitions for Azure Cosmos DB This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy definitions for Azure Cosmos DB. For additional Azure Policy built-ins for other services, see
cosmos-db Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy.md
Last updated 09/23/2020 --+ # Use Azure Policy to implement governance and controls for Azure Cosmos DB resources [Azure Policy](../governance/policy/overview.md) helps to enforce organizational governance standards, assess resource compliance, and implement automatic remediation. Common use cases include security, cost management, and configuration consistency. Azure Policy provides built-in policy definitions. You can create custom policy definitions for scenarios that are not addressed by the built-in policy definitions. See the [Azure Policy documentation](../governance/policy/overview.md) for more details. > [!IMPORTANT]
-> Azure Policy is enforced at the resource provider level for Azure services. Cosmos DB SDKs can perform most management operations on database, container and throughput resources that bypass Cosmos DB's resource provider, thus ignoring any policies created using Azure Policy. To ensure enforcement of policies see, [Preventing changes from the Azure Cosmos DB SDKs](role-based-access-control.md#prevent-sdk-changes)
+> Azure Policy is enforced at the resource provider level for Azure services. Azure Cosmos DB SDKs can perform most management operations on database, container, and throughput resources that bypass Azure Cosmos DB's resource provider, thus ignoring any policies created using Azure Policy. To ensure enforcement of policies, see [Preventing changes from the Azure Cosmos DB SDKs](role-based-access-control.md#prevent-sdk-changes).
## Assign a built-in policy definition
You can create policy assignments with the [Azure portal](../governance/policy/a
To create a policy assignment from a built-in policy definition for Azure Cosmos DB, use the steps in [create a policy assignment with the Azure portal](../governance/policy/assign-policy-portal.md) article.
-At the step to select a policy definition, enter `Cosmos DB` in the Search field to filter the list of available built-in policy definitions. Select one of the available built-in policy definitions, and then choose **Select** to continue creating the policy assignment.
+At the step to select a policy definition, enter `Azure Cosmos DB` in the Search field to filter the list of available built-in policy definitions. Select one of the available built-in policy definitions, and then choose **Select** to continue creating the policy assignment.
> [!TIP] > You can also use the built-in policy definition names shown in the **Available Definitions** pane with Azure PowerShell, Azure CLI, or ARM templates to create policy assignments.
The following screenshot shows two example policy assignments.
One assignment is based on a built-in policy definition, which checks that the Azure Cosmos DB resources are deployed only to the allowed Azure regions. Resource compliance shows policy evaluation outcome (compliant or non-compliant) for in-scope resources.
-The other assignment is based on a custom policy definition. This assignment checks that Cosmos DB accounts are configured for multiple write locations.
+The other assignment is based on a custom policy definition. This assignment checks that Azure Cosmos DB accounts are configured for multiple write locations.
After the policy assignments are deployed, the compliance dashboard shows evaluation results. Note that this can take up to 30 minutes after deploying a policy assignment. Additionally, [policy evaluation scans can be started on-demand](../governance/policy/how-to/get-compliance-data.md#on-demand-evaluation-scan) immediately after creating policy assignments.
cosmos-db Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-audit.md
+
+ Title: Audit logging - Azure Cosmos DB for PostgreSQL
+description: Concepts for pgAudit audit logging in Azure Cosmos DB for PostgreSQL.
++++++ Last updated : 08/03/2021++
+# Audit logging in Azure Cosmos DB for PostgreSQL
++
+> [!IMPORTANT]
+> The pgAudit extension in Azure Cosmos DB for PostgreSQL is currently in preview. This
+> preview version is provided without a service level agreement, and it's not
+> recommended for production workloads. Certain features might not be supported
+> or might have constrained capabilities.
+>
+> You can see a complete list of other new features in [preview features](product-updates.md).
+
+Audit logging of database activities in Azure Cosmos DB for PostgreSQL is available through the PostgreSQL Audit extension: [pgAudit](https://www.pgaudit.org/). pgAudit provides detailed session or object audit logging.
+
+If you want Azure resource-level logs for operations like compute and storage scaling, see the [Azure Activity Log](../../azure-monitor/essentials/platform-logs-overview.md).
+
+## Usage considerations
+By default, pgAudit log statements are emitted along with your regular log statements by using Postgres's standard logging facility. In Azure Cosmos DB for PostgreSQL, you can configure all logs to be sent to Azure Monitor Log store for later analytics in Log Analytics. If you enable Azure Monitor resource logging, your logs will be automatically sent (in JSON format) to Azure Storage, Event Hubs, or Azure Monitor logs, depending on your choice.
+
+## Enabling pgAudit
+
+The pgAudit extension is pre-installed and enabled on a limited number of
+clusters at this time. It may or may not be available
+for preview yet on your cluster.
+
+## pgAudit settings
+
+pgAudit allows you to configure session or object audit logging. [Session audit logging](https://github.com/pgaudit/pgaudit/blob/master/README.md#session-audit-logging) emits detailed logs of executed statements. [Object audit logging](https://github.com/pgaudit/pgaudit/blob/master/README.md#object-audit-logging) is audit scoped to specific relations. You can choose to set up one or both types of logging.
+
+> [!NOTE]
+> pgAudit settings are specified globally and cannot be specified at a database or role level.
+>
+> Also, pgAudit settings are specified per-node in a cluster. To make a change on all nodes, you must apply it to each node individually.
+
+You must configure pgAudit parameters to start logging. The [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#settings) provides the definition of each parameter. Test the parameters first and confirm that you're getting the expected behavior.
+
+> [!NOTE]
+> Setting `pgaudit.log_client` to ON will redirect logs to a client process (like psql) instead of being written to file. This setting should generally be left disabled. <br> <br>
+> `pgaudit.log_level` is only enabled when `pgaudit.log_client` is on.
+
+> [!NOTE]
+> In Azure Cosmos DB for PostgreSQL, `pgaudit.log` cannot be set using a `-` (minus) sign shortcut as described in the pgAudit documentation. All required statement classes (READ, WRITE, etc.) should be individually specified.
+
+## Audit log format
+Each audit entry is indicated by `AUDIT:` near the beginning of the log line. The format of the rest of the entry is detailed in the [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#format).
+
+## Getting started
+To quickly get started, set `pgaudit.log` to `WRITE`, and open your server logs to review the output.
+
+## Viewing audit logs
+The way you access the logs depends on which endpoint you choose. For Azure Storage, see the [logs storage account](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) article. For Event Hubs, see the [stream Azure logs](../../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs) article.
+
+For Azure Monitor Logs, logs are sent to the workspace you selected. The Postgres logs use the **AzureDiagnostics** collection mode, so they can be queried from the AzureDiagnostics table. The fields in the table are described below. Learn more about querying and alerting in the [Azure Monitor Logs query](../../azure-monitor/logs/log-query-overview.md) overview.
+
+You can use this query to get started. You can configure alerts based on queries.
+
+Search for all pgAudit entries in Postgres logs for a particular server in the last day
+```kusto
+AzureDiagnostics
+| where LogicalServerName_s == "myservername"
+| where TimeGenerated > ago(1d)
+| where Message contains "AUDIT:"
+```
+
+## Next steps
+
+- [Learn how to setup logging in Azure Cosmos DB for PostgreSQL and how to access logs](howto-logging.md)
cosmos-db Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-backup.md
+
+ Title: Backup and restore - Azure Cosmos DB for PostgreSQL
+description: Protecting data from accidental corruption or deletion
++++++ Last updated : 04/14/2021++
+# Backup and restore in Azure Cosmos DB for PostgreSQL
++
+Azure Cosmos DB for PostgreSQL automatically creates
+backups of each node and stores them in locally redundant storage. Backups can
+be used to restore your cluster to a specified time.
+Backup and restore are an essential part of any business continuity strategy
+because they protect your data from accidental corruption or deletion.
+
+## Backups
+
+At least once a day, Azure Cosmos DB for PostgreSQL takes snapshot backups of
+data files and the database transaction log. The backups allow you to restore a
+server to any point in time within the retention period. (The retention period
+is currently 35 days for all clusters.) All backups are encrypted using
+AES 256-bit encryption.
+
+In Azure regions that support availability zones, backup snapshots are stored
+in three availability zones. As long as at least one availability zone is
+online, the cluster is restorable.
+
+Backup files can't be exported. They may only be used for restore operations
+in Azure Cosmos DB for PostgreSQL.
+
+### Backup storage cost
+
+For current backup storage pricing, see the Azure Cosmos DB for PostgreSQL
+[pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/).
+
+## Restore
+
+You can restore a cluster to any point in time within
+the last 35 days. Point-in-time restore is useful in multiple scenarios. For
+example, when a user accidentally deletes data, drops an important table or
+database, or if an application accidentally overwrites good data with bad data.
+
+> [!IMPORTANT]
+> Deleted clusters can't be restored. If you delete the
+> cluster, all nodes that belong to the cluster are deleted and can't
+> be recovered. To protect cluster resources, post deployment, from
+> accidental deletion or unexpected changes, administrators can leverage
+> [management locks](../../azure-resource-manager/management/lock-resources.md).
+
+The restore process creates a new cluster in the same Azure region,
+subscription, and resource group as the original. The cluster has the
+original's configuration: the same number of nodes, number of vCores, storage
+size, user roles, PostgreSQL version, and version of the Citus extension.
+
+Firewall settings and PostgreSQL server parameters are not preserved from the
+original cluster; they are reset to default values. The firewall will
+prevent all connections. You will need to manually adjust these settings after
+restore. In general, see our list of suggested [post-restore
+tasks](howto-restore-portal.md#post-restore-tasks).
+
+## Next steps
+
+* See the steps to [restore a cluster](howto-restore-portal.md)
+ in the Azure portal.
+* Learn about [Azure availability zones](../../availability-zones/az-overview.md).
cosmos-db Concepts Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-cluster.md
+
+ Title: Cluster - Azure Cosmos DB for PostgreSQL
+description: What is a cluster in Azure Cosmos DB for PostgreSQL
++++++ Last updated : 07/15/2022++
+# Clusters
++
+## Nodes
+
+Azure Cosmos DB for PostgreSQL allows
+PostgreSQL servers (called nodes) to coordinate with one another in a "cluster."
+The cluster's nodes collectively hold more data and use more CPU
+cores than would be possible on a single server. The architecture also allows
+the database to scale by adding more nodes to the cluster.
+
+To learn more about the types of nodes, see [nodes and
+tables](concepts-nodes.md).
+
+### Node status
+
+Azure Cosmos DB for PostgreSQL displays the status of nodes in a cluster on the
+Overview page in the Azure portal. Each node can have one of these status
+values:
+
+* **Provisioning**: Initial node provisioning, either as a part of its cluster
+ provisioning, or when a worker node is added.
+* **Available**: Node is in a healthy state.
+* **Need attention**: An issue is detected on the node. The node is attempting
+ to self-heal. If self-healing fails, an issue gets put in the queue for our
+ engineers to investigate.
+* **Dropping**: Cluster deletion started.
+* **Disabled**: The cluster's Azure subscription turned into the Disabled
+  state. For more information about subscription states, see [this
+ page](../../cost-management-billing/manage/subscription-states.md).
+
+### Node availability zone
+
+Azure Cosmos DB for PostgreSQL displays the [availability
+zone](../../availability-zones/az-overview.md#availability-zones) of each node
+in a cluster on the Overview page in the Azure portal. The **Availability
+zone** column contains either the name of the zone, or `--` if the node isn't
+assigned to a zone. (Only [certain
+regions](https://azure.microsoft.com/global-infrastructure/geographies/#geographies)
+support availability zones.)
+
+If high availability is enabled for the cluster, and a node [fails
+over](concepts-high-availability.md) to a standby, you may see that its
+availability zone differs from that of the other nodes. In this case, the nodes will be moved back
+into the same availability zone together during the next [maintenance
+window](concepts-maintenance.md).
+
+## Next steps
+
+* Learn to [provision a cluster](quickstart-create-portal.md)
cosmos-db Concepts Colocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-colocation.md
+
+ Title: Table colocation - Azure Cosmos DB for PostgreSQL
+description: How to store related information together for faster queries
++++++ Last updated : 05/06/2019++
+# Table colocation in Azure Cosmos DB for PostgreSQL
++
+Colocation means storing related information together on the same nodes. Queries can go fast when all the necessary data is available without any network traffic. Colocating related data on different nodes allows queries to run efficiently in parallel on each node.
+
+## Data colocation for hash-distributed tables
+
+In Azure Cosmos DB for PostgreSQL, a row is stored in a shard if the hash of the value in the distribution column falls within the shard's hash range. Shards with the same hash range are always placed on the same node. Rows with equal distribution column values are always on the same node across tables.
++
+## A practical example of colocation
+
+Consider the following tables that might be part of a multi-tenant web
+analytics SaaS:
+
+```sql
+CREATE TABLE event (
+ tenant_id int,
+ event_id bigint,
+ page_id int,
+ payload jsonb,
+ primary key (tenant_id, event_id)
+);
+
+CREATE TABLE page (
+ tenant_id int,
+ page_id int,
+ path text,
+ primary key (tenant_id, page_id)
+);
+```
+
+Now we want to answer queries that might be issued by a customer-facing
+dashboard. An example query is "Return the number of visits in the past week for
+all pages starting with '/blog' in tenant six."
+
+If our data was in a single PostgreSQL server, we could easily express
+our query by using the rich set of relational operations offered by SQL:
+
+```sql
+SELECT page_id, count(event_id)
+FROM
+ page
+LEFT JOIN (
+ SELECT * FROM event
+ WHERE (payload->>'time')::timestamptz >= now() - interval '1 week'
+) recent
+USING (tenant_id, page_id)
+WHERE tenant_id = 6 AND path LIKE '/blog%'
+GROUP BY page_id;
+```
+
+As long as the [working set](https://en.wikipedia.org/wiki/Working_set) for this query fits in memory, a single-server table is an appropriate solution. Let's consider the opportunities of scaling the data model with Azure Cosmos DB for PostgreSQL.
+
+### Distribute tables by ID
+
+Single-server queries start slowing down as the number of tenants and the data stored for each tenant grows. The working set stops fitting in memory and CPU becomes a bottleneck.
+
+In this case, we can shard the data across many nodes by using Azure Cosmos DB for PostgreSQL. The
+first and most important choice we need to make when we decide to shard is the
+distribution column. Let's start with a naive choice of using `event_id` for
+the event table and `page_id` for the `page` table:
+
+```sql
+-- naively use event_id and page_id as distribution columns
+
+SELECT create_distributed_table('event', 'event_id');
+SELECT create_distributed_table('page', 'page_id');
+```
+
+When data is dispersed across different workers, we can't perform a join like we would on a single PostgreSQL node. Instead, we need to issue two queries:
+
+```sql
+-- (Q1) get the relevant page_ids
+SELECT page_id FROM page WHERE path LIKE '/blog%' AND tenant_id = 6;
+
+-- (Q2) get the counts
+SELECT page_id, count(*) AS count
+FROM event
+WHERE page_id IN (/*…page IDs from first query…*/)
+ AND tenant_id = 6
+ AND (payload->>'time')::date >= now() - interval '1 week'
+GROUP BY page_id ORDER BY count DESC LIMIT 10;
+```
+
+Afterwards, the results from the two steps need to be combined by the
+application.
+
+Running these queries requires consulting data in shards scattered across the nodes.
++
+In this case, the data distribution creates substantial drawbacks:
+
+- Overhead from querying each shard and running multiple queries.
+- Overhead of Q1 returning many rows to the client.
+- Q2 becomes large.
+- The need to write queries in multiple steps requires changes in the application.
+
+The data is dispersed, so the queries can be parallelized. It's
+only beneficial if the amount of work that the query does is substantially
+greater than the overhead of querying many shards.
+
+### Distribute tables by tenant
+
+In Azure Cosmos DB for PostgreSQL, rows with the same distribution column value are guaranteed to
+be on the same node. Starting over, we can create our tables with `tenant_id`
+as the distribution column.
+
+```sql
+-- co-locate tables by using a common distribution column
+SELECT create_distributed_table('event', 'tenant_id');
+SELECT create_distributed_table('page', 'tenant_id', colocate_with => 'event');
+```
+
+Now Azure Cosmos DB for PostgreSQL can answer the original single-server query without modification (Q1):
+
+```sql
+SELECT page_id, count(event_id)
+FROM
+ page
+LEFT JOIN (
+ SELECT * FROM event
+ WHERE (payload->>'time')::timestamptz >= now() - interval '1 week'
+) recent
+USING (tenant_id, page_id)
+WHERE tenant_id = 6 AND path LIKE '/blog%'
+GROUP BY page_id;
+```
+
+Because of the filter and join on tenant_id, Azure Cosmos DB for PostgreSQL knows that the entire
+query can be answered by using the set of colocated shards that contain the data
+for that particular tenant. A single PostgreSQL node can answer the query in
+a single step.
++
+In some cases, queries and table schemas must be changed to include the tenant ID in unique constraints and join conditions. This change is usually straightforward.
+
+## Next steps
+
+- See how tenant data is colocated in the [multi-tenant tutorial](tutorial-design-database-multi-tenant.md).
cosmos-db Concepts Columnar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-columnar.md
+
+ Title: Columnar table storage - Azure Cosmos DB for PostgreSQL
+description: Learn how to compress data using columnar storage.
+++++ Last updated : 05/23/2022+++
+# Compress data with columnar tables
++
+Azure Cosmos DB for PostgreSQL supports append-only
+columnar table storage for analytic and data warehousing workloads. When
+columns (rather than rows) are stored contiguously on disk, data becomes more
+compressible, and queries can request a subset of columns more quickly.
+
+## Create a table
+
+To use columnar storage, specify `USING columnar` when creating a table:
+
+```postgresql
+CREATE TABLE contestant (
+ handle TEXT,
+ birthdate DATE,
+ rating INT,
+ percentile FLOAT,
+ country CHAR(3),
+ achievements TEXT[]
+) USING columnar;
+```
+
+Azure Cosmos DB for PostgreSQL converts rows to columnar storage in "stripes" during
+insertion. Each stripe holds one transaction's worth of data, or 150000 rows,
+whichever is less. (The stripe size and other parameters of a columnar table
+can be changed with the
+[alter_columnar_table_set](reference-functions.md#alter_columnar_table_set)
+function.)
+
+For example, the following statement puts all five rows into the same stripe,
+because all values are inserted in a single transaction:
+
+```postgresql
+-- insert these values into a single columnar stripe
+
+INSERT INTO contestant VALUES
+ ('a','1990-01-10',2090,97.1,'XA','{a}'),
+ ('b','1990-11-01',2203,98.1,'XA','{a,b}'),
+ ('c','1988-11-01',2907,99.4,'XB','{w,y}'),
+ ('d','1985-05-05',2314,98.3,'XB','{}'),
+ ('e','1995-05-05',2236,98.2,'XC','{a}');
+```
+
+It's best to make large stripes when possible, because Azure Cosmos DB for PostgreSQL
+compresses columnar data separately per stripe. We can see facts about our
+columnar table like compression rate, number of stripes, and average rows per
+stripe by using `VACUUM VERBOSE`:
+
+```postgresql
+VACUUM VERBOSE contestant;
+```
+```
+INFO: statistics for "contestant":
+storage id: 10000000000
+total file size: 24576, total data size: 248
+compression rate: 1.31x
+total row count: 5, stripe count: 1, average rows per stripe: 5
+chunk count: 6, containing data for dropped columns: 0, zstd compressed: 6
+```
+
+The output shows that Azure Cosmos DB for PostgreSQL used the zstd compression algorithm to
+obtain 1.31x data compression. The compression rate compares a) the size of
+inserted data as it was staged in memory against b) the size of that data
+compressed in its eventual stripe.
+
+Because of how it's measured, the compression rate may or may not match the
+size difference between row and columnar storage for a table. The only way
+to truly find that difference is to construct a row and columnar table that
+contain the same data, and compare.
+
+## Measuring compression
+
+Let's create a new example with more data to benchmark the compression savings.
+
+```postgresql
+-- first a wide table using row storage
+CREATE TABLE perf_row(
+ c00 int8, c01 int8, c02 int8, c03 int8, c04 int8, c05 int8, c06 int8, c07 int8, c08 int8, c09 int8,
+ c10 int8, c11 int8, c12 int8, c13 int8, c14 int8, c15 int8, c16 int8, c17 int8, c18 int8, c19 int8,
+ c20 int8, c21 int8, c22 int8, c23 int8, c24 int8, c25 int8, c26 int8, c27 int8, c28 int8, c29 int8,
+ c30 int8, c31 int8, c32 int8, c33 int8, c34 int8, c35 int8, c36 int8, c37 int8, c38 int8, c39 int8,
+ c40 int8, c41 int8, c42 int8, c43 int8, c44 int8, c45 int8, c46 int8, c47 int8, c48 int8, c49 int8,
+ c50 int8, c51 int8, c52 int8, c53 int8, c54 int8, c55 int8, c56 int8, c57 int8, c58 int8, c59 int8,
+ c60 int8, c61 int8, c62 int8, c63 int8, c64 int8, c65 int8, c66 int8, c67 int8, c68 int8, c69 int8,
+ c70 int8, c71 int8, c72 int8, c73 int8, c74 int8, c75 int8, c76 int8, c77 int8, c78 int8, c79 int8,
+ c80 int8, c81 int8, c82 int8, c83 int8, c84 int8, c85 int8, c86 int8, c87 int8, c88 int8, c89 int8,
+ c90 int8, c91 int8, c92 int8, c93 int8, c94 int8, c95 int8, c96 int8, c97 int8, c98 int8, c99 int8
+);
+
+-- next a table with identical columns using columnar storage
+CREATE TABLE perf_columnar(LIKE perf_row) USING COLUMNAR;
+```
+
+Fill both tables with the same large dataset:
+
+```postgresql
+INSERT INTO perf_row
+ SELECT
+ g % 00500, g % 01000, g % 01500, g % 02000, g % 02500, g % 03000, g % 03500, g % 04000, g % 04500, g % 05000,
+ g % 05500, g % 06000, g % 06500, g % 07000, g % 07500, g % 08000, g % 08500, g % 09000, g % 09500, g % 10000,
+ g % 10500, g % 11000, g % 11500, g % 12000, g % 12500, g % 13000, g % 13500, g % 14000, g % 14500, g % 15000,
+ g % 15500, g % 16000, g % 16500, g % 17000, g % 17500, g % 18000, g % 18500, g % 19000, g % 19500, g % 20000,
+ g % 20500, g % 21000, g % 21500, g % 22000, g % 22500, g % 23000, g % 23500, g % 24000, g % 24500, g % 25000,
+ g % 25500, g % 26000, g % 26500, g % 27000, g % 27500, g % 28000, g % 28500, g % 29000, g % 29500, g % 30000,
+ g % 30500, g % 31000, g % 31500, g % 32000, g % 32500, g % 33000, g % 33500, g % 34000, g % 34500, g % 35000,
+ g % 35500, g % 36000, g % 36500, g % 37000, g % 37500, g % 38000, g % 38500, g % 39000, g % 39500, g % 40000,
+ g % 40500, g % 41000, g % 41500, g % 42000, g % 42500, g % 43000, g % 43500, g % 44000, g % 44500, g % 45000,
+ g % 45500, g % 46000, g % 46500, g % 47000, g % 47500, g % 48000, g % 48500, g % 49000, g % 49500, g % 50000
+ FROM generate_series(1,50000000) g;
+
+INSERT INTO perf_columnar
+ SELECT
+ g % 00500, g % 01000, g % 01500, g % 02000, g % 02500, g % 03000, g % 03500, g % 04000, g % 04500, g % 05000,
+ g % 05500, g % 06000, g % 06500, g % 07000, g % 07500, g % 08000, g % 08500, g % 09000, g % 09500, g % 10000,
+ g % 10500, g % 11000, g % 11500, g % 12000, g % 12500, g % 13000, g % 13500, g % 14000, g % 14500, g % 15000,
+ g % 15500, g % 16000, g % 16500, g % 17000, g % 17500, g % 18000, g % 18500, g % 19000, g % 19500, g % 20000,
+ g % 20500, g % 21000, g % 21500, g % 22000, g % 22500, g % 23000, g % 23500, g % 24000, g % 24500, g % 25000,
+ g % 25500, g % 26000, g % 26500, g % 27000, g % 27500, g % 28000, g % 28500, g % 29000, g % 29500, g % 30000,
+ g % 30500, g % 31000, g % 31500, g % 32000, g % 32500, g % 33000, g % 33500, g % 34000, g % 34500, g % 35000,
+ g % 35500, g % 36000, g % 36500, g % 37000, g % 37500, g % 38000, g % 38500, g % 39000, g % 39500, g % 40000,
+ g % 40500, g % 41000, g % 41500, g % 42000, g % 42500, g % 43000, g % 43500, g % 44000, g % 44500, g % 45000,
+ g % 45500, g % 46000, g % 46500, g % 47000, g % 47500, g % 48000, g % 48500, g % 49000, g % 49500, g % 50000
+ FROM generate_series(1,50000000) g;
+
+VACUUM (FREEZE, ANALYZE) perf_row;
+VACUUM (FREEZE, ANALYZE) perf_columnar;
+```
+
+For this data, you can see a compression ratio of better than 8X in the
+columnar table.
+
+```postgresql
+SELECT pg_total_relation_size('perf_row')::numeric/
+ pg_total_relation_size('perf_columnar') AS compression_ratio;
+ compression_ratio
+-------------------
+ 8.0196135873627944
+(1 row)
+```
+
+## Example
+
+Columnar storage works well with table partitioning. For an example, see the
+Citus Engine community documentation, [archiving with columnar
+storage](https://docs.citusdata.com/en/stable/use_cases/timeseries.html#archiving-with-columnar-storage).
+
+## Gotchas
+
+* Columnar storage compresses per stripe. Stripes are created per transaction,
+ so inserting one row per transaction will put single rows into their own
+ stripes. Compression and performance of single row stripes will be worse than
+ a row table. Always insert in bulk to a columnar table.
+* If you accidentally end up with many tiny stripes, there's no way to compact
+  them in place. The only fix is to create a new columnar table and copy
+  data from the original in one transaction:
+ ```postgresql
+ BEGIN;
+ CREATE TABLE foo_compacted (LIKE foo) USING columnar;
+ INSERT INTO foo_compacted SELECT * FROM foo;
+ DROP TABLE foo;
+ ALTER TABLE foo_compacted RENAME TO foo;
+ COMMIT;
+ ```
+* Fundamentally non-compressible data can be a problem, although columnar
+  storage is still useful when queries select only specific columns, because the
+  other columns never need to be read into memory (see the example query after this list).
+* On a partitioned table with a mix of row and column partitions, updates must
+ be carefully targeted. Filter them to hit only the row partitions.
+ * If the operation is targeted at a specific row partition (for example,
+ `UPDATE p2 SET i = i + 1`), it will succeed; if targeted at a specified columnar
+ partition (for example, `UPDATE p1 SET i = i + 1`), it will fail.
+ * If the operation is targeted at the partitioned table and has a WHERE
+ clause that excludes all columnar partitions (for example
+ `UPDATE parent SET i = i + 1 WHERE timestamp = '2020-03-15'`),
+ it will succeed.
+ * If the operation is targeted at the partitioned table, but does not
+ filter on the partition key columns, it will fail. Even if there are
+ WHERE clauses that match rows in only columnar partitions, it's not
+ enough--the partition key must also be filtered.
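+
+As a sketch of the column-selection point above, a query that references only
+two columns of the `contestant` table reads only those columns from disk:
+
+```postgresql
+-- only the country and rating columns are read from storage;
+-- handle, birthdate, percentile, and achievements are skipped
+SELECT country, avg(rating) AS avg_rating
+  FROM contestant
+ GROUP BY country;
+```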
+
+## Limitations
+
+This feature still has significant limitations. See [limits and
+limitations](reference-limits.md#columnar-storage).
+
+## Next steps
+
+* See an example of columnar storage in a Citus [time series
+ tutorial](https://docs.citusdata.com/en/stable/use_cases/timeseries.html)
+ (external link).
cosmos-db Concepts Connection Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-connection-pool.md
+
+ Title: Connection pooling – Azure Cosmos DB for PostgreSQL
+description: Use PgBouncer to scale client database connections.
++++++ Last updated : 09/20/2022++
+# Azure Cosmos DB for PostgreSQL connection pooling
++
+Establishing new connections takes time, and that overhead works against most
+applications, which open many short-lived connections. We recommend using a connection
+pooler, both to reduce idle transactions and to reuse existing connections. To
+learn more, visit our [blog
+post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717).
+
+You can run your own connection pooler, or use PgBouncer managed by Azure.
+
+## Managed PgBouncer
+
+Connection poolers such as PgBouncer allow more clients to connect to the
+coordinator node at once. Applications connect to the pooler, and the pooler
+relays commands to the destination database.
+
+When clients connect through PgBouncer, the number of connections that can
+actively run in the database doesn't change. Instead, PgBouncer queues excess
+connections and runs them when the database is ready.
+
+Azure Cosmos DB for PostgreSQL offers a managed instance of PgBouncer for clusters.
+It supports up to 2,000 simultaneous client connections. Additionally,
+if a cluster has [high availability](concepts-high-availability.md) (HA)
+enabled, then so does its managed PgBouncer.
+
+To connect through PgBouncer, follow these steps:
+
+1. Go to the **Connection strings** page for your cluster in the Azure
+ portal.
+2. Select the checkbox next to **PgBouncer connection strings**. The listed connection
+ strings change.
+3. Update client applications to connect with the new string.
+
+## Next steps
+
+Discover more about the [limits and limitations](reference-limits.md)
+of Azure Cosmos DB for PostgreSQL.
cosmos-db Concepts Distributed Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-distributed-data.md
+
+ Title: Distributed data – Azure Cosmos DB for PostgreSQL
+description: Learn about distributed tables, reference tables, local tables, and shards.
+++++ Last updated : 05/06/2019++
+# Distributed data in Azure Cosmos DB for PostgreSQL
++
+This article outlines the three table types in Azure Cosmos DB for PostgreSQL.
+It shows how distributed tables are stored as shards, and the way that shards are placed on nodes.
+
+## Table types
+
+There are three types of tables in a cluster, each
+used for different purposes.
+
+### Type 1: Distributed tables
+
+The first and most common type is the distributed table. Distributed tables
+appear to be normal tables to SQL statements, but they're horizontally
+partitioned across worker nodes: the rows of the table are stored on different
+nodes, in fragment tables called shards.
+
+Azure Cosmos DB for PostgreSQL runs not only SQL but DDL statements throughout a cluster.
+Changing the schema of a distributed table cascades to update
+all the table's shards across workers.
+
+#### Distribution column
+
+Azure Cosmos DB for PostgreSQL uses algorithmic sharding to assign rows to shards. The assignment is made deterministically based on the value
+of a table column called the distribution column. The cluster
+administrator must designate this column when distributing a table.
+Making the right choice is important for performance and functionality.
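+
+As a minimal sketch, distributing a table means creating it as usual and then
+calling `create_distributed_table` with the chosen distribution column. The
+`github_events` definition below is simplified and hypothetical:
+
+```sql
+-- hypothetical, simplified table definition
+CREATE TABLE github_events (
+    repo_id    bigint,
+    event_type text,
+    payload    jsonb,
+    created_at timestamptz
+);
+
+-- shard the table across worker nodes by repo_id
+SELECT create_distributed_table('github_events', 'repo_id');
+```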
+
+### Type 2: Reference tables
+
+A reference table is a type of distributed table whose entire contents are
+concentrated into a single shard. The shard is replicated on every worker and
+the coordinator. Queries on any worker can access the reference information
+locally, without the network overhead of requesting rows from another node.
+Reference tables have no distribution column because there's no need to
+distinguish separate shards per row.
+
+Reference tables are typically small and are used to store data that's
+relevant to queries running on any worker node. An example is enumerated
+values like order statuses or product categories.
+
+### Type 3: Local tables
+
+When you use Azure Cosmos DB for PostgreSQL, the coordinator node you connect to is a regular PostgreSQL database. You can create ordinary tables on the coordinator and choose not to shard them.
+
+A good candidate for local tables would be small administrative tables that don't participate in join queries. An example is a users table for application sign-in and authentication.
+
+## Shards
+
+The previous section described how distributed tables are stored as shards on
+worker nodes. This section discusses more technical details.
+
+The `pg_dist_shard` metadata table on the coordinator contains a
+row for each shard of each distributed table in the system. The row
+matches a shard ID with a range of integers in a hash space
+(shardminvalue, shardmaxvalue).
+
+```sql
+SELECT * from pg_dist_shard;
+ logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue
+---------------+---------+--------------+---------------+---------------
+ github_events | 102026 | t | 268435456 | 402653183
+ github_events | 102027 | t | 402653184 | 536870911
+ github_events | 102028 | t | 536870912 | 671088639
+ github_events | 102029 | t | 671088640 | 805306367
+ (4 rows)
+```
+
+If the coordinator node wants to determine which shard holds a row of
+`github_events`, it hashes the value of the distribution column in the
+row. Then the node checks which shard's range contains the hashed value. The
+ranges are defined so that the image of the hash function is their
+disjoint union.
+
+### Shard placements
+
+Suppose that shard 102027 is associated with the row in question. The row
+is read or written in a table called `github_events_102027` in one of
+the workers. Which worker? That's determined entirely by the metadata
+tables. The mapping of shard to worker is known as the shard placement.
+
+The coordinator node
+rewrites queries into fragments that refer to the specific tables
+like `github_events_102027` and runs those fragments on the
+appropriate workers. Here's an example of a query run behind the scenes to find the node holding shard ID 102027.
+
+```sql
+SELECT
+ shardid,
+ node.nodename,
+ node.nodeport
+FROM pg_dist_placement placement
+JOIN pg_dist_node node
+ ON placement.groupid = node.groupid
+ AND node.noderole = 'primary'::noderole
+WHERE shardid = 102027;
+```
+
+```output
+┌─────────┬───────────┬──────────┐
+│ shardid │ nodename  │ nodeport │
+├─────────┼───────────┼──────────┤
+│  102027 │ localhost │     5433 │
+└─────────┴───────────┴──────────┘
+```
+
+## Next steps
+
+- Learn how to [choose a distribution column](howto-choose-distribution-column.md) for distributed tables.
cosmos-db Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-firewall-rules.md
+
+ Title: Public access - Azure Cosmos DB for PostgreSQL
+description: This article describes public access for Azure Cosmos DB for PostgreSQL.
++++++ Last updated : 09/22/2022++
+# Public access in Azure Cosmos DB for PostgreSQL
+++
+This page describes the public access option. For private access, see
+[Private access in Azure Cosmos DB for PostgreSQL](concepts-private-access.md).
+
+## Firewall overview
+
+The Azure Cosmos DB for PostgreSQL server firewall prevents all access to your coordinator node until you specify which computers have permission. The firewall grants access to the server based on the originating IP address of each request.
+To configure your firewall, you create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level.
+
+**Firewall rules:** These rules enable clients to access your coordinator node, that is, all the databases within the same logical server. Server-level firewall rules can be configured by using the Azure portal. To create server-level firewall rules, you must be the subscription owner or a subscription contributor.
+
+All database access to your coordinator node is blocked by the firewall by default. To begin using your server from another computer, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify which IP address ranges from the Internet to allow. Access to the Azure portal website itself isn't affected by the firewall rules.
+Connection attempts from the internet and Azure must first pass through the firewall before they can reach your PostgreSQL database, as shown in the following diagram:
++
+## Connect from the internet and from Azure
+
+A cluster firewall controls who can connect to the group's coordinator node. The firewall determines access by consulting a configurable list of rules. Each rule is an IP address, or range of addresses, that are allowed in.
+
+When the firewall blocks connections, it can cause application errors. The PostgreSQL JDBC driver, for instance, raises an error like this:
+
+`java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "123.45.67.890", user "citus", database "citus", SSL`
+
+See [Create and manage firewall rules](howto-manage-firewall-using-portal.md) to learn how the rules are defined.
+
+## Troubleshoot the database server firewall
+When access to the Microsoft Azure Cosmos DB for PostgreSQL service doesn't behave as you expect, consider these points:
+
+* **Changes to the allow list haven't taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Cosmos DB for PostgreSQL firewall configuration to take effect.
+
+* **The user isn't authorized or an incorrect password was used:** If a user doesn't have permissions on the server or the password used is incorrect, the connection to the server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server. Each client must still provide the necessary security credentials.
+
+ For example, using a JDBC client, the following error may appear.
+
+ `java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "yourusername"`
+
+* **Dynamic IP address:** If you have an Internet connection with dynamic IP addressing and you're having trouble getting through the firewall, you could try one of the following solutions:
+
+ * Ask your Internet Service Provider (ISP) for the IP address range assigned to your client computers that access the coordinator node, and then add the IP address range as a firewall rule.
+
+ * Get static IP addressing instead for your client computers, and then add the static IP address as a firewall rule.
+
+## Next steps
+For articles on creating server-level and database-level firewall rules, see:
+* [Create and manage Azure Cosmos DB for PostgreSQL firewall rules using the Azure portal](howto-manage-firewall-using-portal.md)
cosmos-db Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-high-availability.md
+
+ Title: High availability – Azure Cosmos DB for PostgreSQL
+description: High availability and disaster recovery concepts
++++++ Last updated : 07/15/2022++
+# High availability in Azure Cosmos DB for PostgreSQL
++
+High availability (HA) avoids database downtime by maintaining standby replicas
+of every node in a cluster. If a node goes down, Azure Cosmos DB for PostgreSQL
+switches incoming connections from the failed node to its standby. Failover
+happens within a few minutes, and promoted nodes always have fresh data through
+PostgreSQL synchronous streaming replication.
+
+All primary nodes in a cluster are provisioned into one availability zone
+for better latency between the nodes. The standby nodes are provisioned into
+another zone. The Azure portal
+[displays](concepts-cluster.md#node-availability-zone) the availability
+zone of each node in a cluster.
+
+Even without HA enabled, each node has its own locally
+redundant storage (LRS) with three synchronous replicas maintained by Azure
+Storage service. If there's a single replica failure, it's detected by Azure
+Storage service and is transparently re-created. For LRS storage durability,
+see metrics [on this
+page](../../storage/common/storage-redundancy.md#summary-of-redundancy-options).
+
+When HA *is* enabled, Azure Cosmos DB for PostgreSQL runs one standby node for each primary
+node in the cluster. The primary and its standby use synchronous
+PostgreSQL replication. This replication allows customers to have predictable
+downtime if a primary node fails. In a nutshell, our service detects a failure
+on primary nodes, and fails over to standby nodes with zero data loss.
+
+To take advantage of HA on the coordinator node, database applications need to
+detect and retry dropped connections and failed transactions. The newly
+promoted coordinator will be accessible with the same connection string.
+
+## High availability states
+
+Recovery can be broken into three stages: detection, failover, and full
+recovery. Azure Cosmos DB for PostgreSQL runs periodic health checks on every node, and
+after four failed checks it determines that a node is down. Azure Cosmos DB for PostgreSQL
+then promotes a standby to primary node status (failover), and creates a new
+standby-to-be. Streaming replication begins, bringing the new node up to date.
+When all data has been replicated, the node has reached full recovery.
+
+Azure Cosmos DB for PostgreSQL displays its failover progress state on the Overview page
+for clusters in the Azure portal.
+
+* **Healthy**: HA is enabled and the node is fully replicated to its standby.
+* **Failover in progress**: A failure was detected on the primary node and
+ a failover to standby was initiated. This state will transition into
+ **Creating standby** once failover to the standby node is completed, and the
+ standby becomes the new primary.
+* **Creating standby**: The previous standby was promoted to primary, and a
+  new standby is being created for it. When the new standby is ready, this
+  state will transition into **Replication in progress**.
+* **Replication in progress**: The new standby node is provisioned and data
+ synchronization is in progress. Once all data is replicated to the new
+ standby, synchronous replication will be enabled between the primary and
+ standby nodes, and the nodes' state will transition back to **Healthy**.
+* **No**: HA isn't enabled on this node.
+
+## Next steps
+
+- Learn how to [enable high
+ availability](howto-high-availability.md) in a cluster
cosmos-db Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-maintenance.md
+
+ Title: Scheduled maintenance - Azure Cosmos DB for PostgreSQL
+description: This article describes the scheduled maintenance feature in Azure Cosmos DB for PostgreSQL.
++++++ Last updated : 02/14/2022++
+# Scheduled maintenance in Azure Cosmos DB for PostgreSQL
++
+Azure Cosmos DB for PostgreSQL does periodic maintenance to
+keep your managed database secure, stable, and up-to-date. During maintenance,
+all nodes in the cluster get new features, updates, and patches.
+
+The key features of scheduled maintenance for Azure Cosmos DB for PostgreSQL are:
+
+* Updates are applied at the same time on all nodes in the cluster
+* Notifications about upcoming maintenance are posted to Azure Service Health
+ five days in advance
+* Usually there are at least 30 days between successful maintenance events for
+ a cluster
+* Preferred day of the week and time window within that day for maintenance
+ start can be defined for each cluster individually
+
+## Selecting a maintenance window and notification about upcoming maintenance
+
+You can schedule maintenance during a specific day of the week and a time
+window within that day. Or you can let the system pick a day and a time window
+for you automatically. Either way, the system will alert you five days before
+running any maintenance. The system will also let you know when maintenance is
+started, and when it's successfully completed.
+
+Notifications about upcoming scheduled maintenance are posted to Azure Service
+Health and can be:
+
+* Emailed to a specific address
+* Emailed to an Azure Resource Manager Role
+* Sent in a text message (SMS) to mobile devices
+* Pushed as a notification to an Azure app
+* Delivered as a voice message
+
+When specifying preferences for the maintenance schedule, you can pick a day of
+the week and a time window. If you don't specify, the system will pick times
+between 11 PM and 7 AM local time in your cluster's region. You can define
+different schedules for each cluster in your Azure
+subscription.
+
+> [!IMPORTANT]
+> Normally there are at least 30 days between successful scheduled maintenance
+> events for a cluster.
+>
+> However, in case of a critical emergency update such as a severe
+> vulnerability, the notification window could be shorter than five days. The
+> critical update may be applied to your server even if a successful scheduled
+> maintenance was performed in the last 30 days.
+
+You can update scheduling settings at any time. If there's maintenance
+scheduled for your cluster and you update the schedule,
+the pre-existing events will be rescheduled.
+
+If maintenance fails or gets canceled, the system will create a notification.
+It will try maintenance again according to current scheduling settings, and
+notify you five days before the next maintenance event.
+
+## Next steps
+
+* Learn how to [change the maintenance schedule](howto-maintenance.md)
+* Learn how to [get notifications about upcoming maintenance](../../service-health/service-notifications.md) using Azure Service Health
+* Learn how to [set up alerts about upcoming scheduled maintenance events](../../service-health/resource-health-alert-monitor-guide.md)
cosmos-db Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-monitoring.md
+
+ Title: Monitor and tune - Azure Cosmos DB for PostgreSQL
+description: This article describes monitoring and tuning features in Azure Cosmos DB for PostgreSQL
++++++ Last updated : 09/27/2022++
+# Monitor and tune Azure Cosmos DB for PostgreSQL
++
+Monitoring data about your servers helps you troubleshoot and optimize for your
+workload. Azure Cosmos DB for PostgreSQL provides various monitoring options to provide
+insight into the behavior of nodes in a cluster.
+
+## Metrics
+
+Azure Cosmos DB for PostgreSQL provides metrics for nodes in a cluster, and aggregate
+metrics for the group as a whole. The metrics give insight into the behavior of
+supporting resources. Each metric is emitted at a one-minute frequency, and has
+up to 30 days of history.
+
+In addition to viewing graphs of the metrics, you can configure alerts. For
+step-by-step guidance, see [How to set up
+alerts](howto-alert-on-metric.md). Other tasks include setting up
+automated actions, running advanced analytics, and archiving history. For more
+information, see the [Azure Metrics
+Overview](../../azure-monitor/data-platform.md).
+
+### Per node vs aggregate
+
+By default, the Azure portal aggregates metrics across nodes
+in a cluster. However, some metrics, such as disk usage percentage, are
+more informative on a per-node basis. To see metrics for nodes displayed
+individually, use Azure Monitor [metric
+splitting](howto-monitoring.md#view-metrics-per-node) by server
+name.
+
+> [!NOTE]
+>
+> Some clusters do not support metric splitting. On
+> these clusters, you can view metrics for individual nodes by clicking
+> the node name in the cluster **Overview** page. Then open the
+> **Metrics** page for the node.
+
+### List of metrics
+
+These metrics are available for nodes:
+
+|Metric|Metric Display Name|Unit|Description|
+|||||
+|active_connections|Active Connections|Count|The number of active connections to the server.|
+|apps_reserved_memory_percent|Reserved Memory Percent|Percent|Calculated from the ratio of Committed_AS/CommitLimit as shown in /proc/meminfo.|
+|cpu_percent|CPU percent|Percent|The percentage of CPU in use.|
+|iops|IOPS|Count|See the [IOPS definition](../../virtual-machines/premium-storage-performance.md#iops) and [Azure Cosmos DB for PostgreSQL throughput](resources-compute.md)|
+|memory_percent|Memory percent|Percent|The percentage of memory in use.|
+|network_bytes_ingress|Network In|Bytes|Network In across active connections.|
+|network_bytes_egress|Network Out|Bytes|Network Out across active connections.|
+|replication_lag|Replication Lag|Seconds|How far read replica nodes are behind their counterparts in the primary cluster.|
+|storage_percent|Storage percentage|Percent|The percentage of storage used out of the server's maximum.|
+|storage_used|Storage used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.|
+
+Azure supplies no aggregate metrics for the cluster as a whole, but metrics for
+multiple nodes can be placed on the same graph.
+
+## Next steps
+
+- Learn how to [view metrics](howto-monitoring.md) for a
+ cluster.
+- See [how to set up alerts](howto-alert-on-metric.md) for guidance
+ on creating an alert on a metric.
+- Learn how to do [metric
+ splitting](../../azure-monitor/essentials/metrics-charts.md#metric-splitting) to
+ inspect metrics per node in a cluster.
+- See other measures of database health with [useful diagnostic queries](howto-useful-diagnostic-queries.md).
cosmos-db Concepts Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-nodes.md
+
+ Title: Nodes – Azure Cosmos DB for PostgreSQL
+description: Learn about the types of nodes and tables in a cluster.
+++++ Last updated : 07/28/2019++
+# Nodes and tables in Azure Cosmos DB for PostgreSQL
++
+## Nodes
+
+Azure Cosmos DB for PostgreSQL allows PostgreSQL
+servers (called nodes) to coordinate with one another in a "shared nothing"
+architecture. The nodes in a cluster collectively hold more data and use
+more CPU cores than would be possible on a single server. The architecture also
+allows the database to scale by adding more nodes to the cluster.
+
+### Coordinator and workers
+
+Every cluster has a coordinator node and multiple workers. Applications
+send their queries to the coordinator node, which relays them to the relevant
+workers and accumulates the results. Applications can't connect
+directly to workers.
+
+Azure Cosmos DB for PostgreSQL allows the database administrator to *distribute* tables,
+storing different rows on different worker nodes. Distributed tables are the
+key to Azure Cosmos DB for PostgreSQL performance. Tables that aren't distributed stay entirely
+on the coordinator node and can't take advantage of cross-machine parallelism.
+
+For each query on distributed tables, the coordinator either routes it to a
+single worker node or parallelizes it across several, depending on whether the
+required data lives on a single node or on multiple nodes. The coordinator decides what
+to do by consulting metadata tables. These tables track the DNS names and
+health of worker nodes, and the distribution of data across nodes.
+
+## Table types
+
+There are three types of tables in a cluster, each
+stored differently on nodes and used for different purposes.
+
+### Type 1: Distributed tables
+
+The first and most common type is the distributed table. Distributed tables
+appear to be normal tables to SQL statements, but they're horizontally
+partitioned across worker nodes: the rows of the table are stored on different
+nodes, in fragment tables called shards.
+
+Azure Cosmos DB for PostgreSQL runs not only SQL but DDL statements throughout a cluster.
+Changing the schema of a distributed table cascades to update
+all the table's shards across workers.
+
+#### Distribution column
+
+Azure Cosmos DB for PostgreSQL uses algorithmic sharding to assign rows to shards. The assignment is made deterministically based on the value
+of a table column called the distribution column. The cluster
+administrator must designate this column when distributing a table.
+Making the right choice is important for performance and functionality.
+
+### Type 2: Reference tables
+
+A reference table is a type of distributed table whose entire
+contents are concentrated into a single shard. The shard is replicated on every worker. Queries on any worker can access the reference information locally, without the network overhead of requesting rows from another node. Reference tables have no distribution column
+because there's no need to distinguish separate shards per row.
+
+Reference tables are typically small and are used to store data that's
+relevant to queries running on any worker node. An example is enumerated
+values like order statuses or product categories.
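+
+As a sketch, a hypothetical `order_statuses` lookup table could be created as a
+reference table with `create_reference_table`:
+
+```sql
+-- hypothetical lookup table, replicated in full to every node
+CREATE TABLE order_statuses (
+    status_id int PRIMARY KEY,
+    label     text NOT NULL
+);
+
+SELECT create_reference_table('order_statuses');
+```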
+
+### Type 3: Local tables
+
+When you use Azure Cosmos DB for PostgreSQL, the coordinator node you connect to is a regular PostgreSQL database. You can create ordinary tables on the coordinator and choose not to shard them.
+
+A good candidate for local tables would be small administrative tables that don't participate in join queries. An example is a users table for application sign-in and authentication.
+
+## Shards
+
+The previous section described how distributed tables are stored as shards on
+worker nodes. This section discusses more technical details.
+
+The `pg_dist_shard` metadata table on the coordinator contains a
+row for each shard of each distributed table in the system. The row
+matches a shard ID with a range of integers in a hash space
+(shardminvalue, shardmaxvalue).
+
+```sql
+SELECT * from pg_dist_shard;
+ logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue
+---------------+---------+--------------+---------------+---------------
+ github_events | 102026 | t | 268435456 | 402653183
+ github_events | 102027 | t | 402653184 | 536870911
+ github_events | 102028 | t | 536870912 | 671088639
+ github_events | 102029 | t | 671088640 | 805306367
+ (4 rows)
+```
+
+If the coordinator node wants to determine which shard holds a row of
+`github_events`, it hashes the value of the distribution column in the
+row. Then the node checks which shard's range contains the hashed value. The
+ranges are defined so that the image of the hash function is their
+disjoint union.
+
+### Shard placements
+
+Suppose that shard 102027 is associated with the row in question. The row
+is read or written in a table called `github_events_102027` in one of
+the workers. Which worker? That's determined entirely by the metadata
+tables. The mapping of shard to worker is known as the shard placement.
+
+The coordinator node
+rewrites queries into fragments that refer to the specific tables
+like `github_events_102027` and runs those fragments on the
+appropriate workers. Here's an example of a query run behind the scenes to find the node holding shard ID 102027.
+
+```sql
+SELECT
+ shardid,
+ node.nodename,
+ node.nodeport
+FROM pg_dist_placement placement
+JOIN pg_dist_node node
+ ON placement.groupid = node.groupid
+ AND node.noderole = 'primary'::noderole
+WHERE shardid = 102027;
+```
+
+```output
+┌─────────┬───────────┬──────────┐
+│ shardid │ nodename  │ nodeport │
+├─────────┼───────────┼──────────┤
+│  102027 │ localhost │     5433 │
+└─────────┴───────────┴──────────┘
+```
+
+## Next steps
+
+- [Determine your application's type](howto-app-type.md) to prepare for data modeling
cosmos-db Concepts Performance Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-performance-tuning.md
+
+ Title: Performance tuning - Azure Cosmos DB for PostgreSQL
+description: Improving query performance in the distributed database
+++++ Last updated : 08/30/2022++
+# Performance tuning
++
+Running a distributed database at its full potential offers high performance.
+However, reaching that potential can require some adjustments in application
+code and data modeling. This article covers some of the most common--and
+effective--techniques to improve performance.
+
+## Client-side connection pooling
+
+A connection pool holds open database connections for reuse. An application
+requests a connection from the pool when needed, and the pool returns one that
+is already established if possible, or establishes a new one. When done, the
+application releases the connection back to the pool rather than closing it.
+
+Adding a client-side connection pool is an easy way to boost application
+performance with minimal code changes. In our measurements, running single-row
+insert statements goes about **24x faster** on a cluster with pooling enabled.
+
+For language-specific examples of adding pooling in application code, see the
+[app stacks guide](quickstart-app-stacks-overview.yml).
+
+> [!NOTE]
+>
+> Azure Cosmos DB for PostgreSQL also provides [server-side connection
+> pooling](concepts-connection-pool.md) using pgbouncer, but it mainly serves
+> to increase the client connection limit. An individual application's
+> performance benefits more from client- rather than server-side pooling.
+> (Although both forms of pooling can be used at once without harm.)
+
+## Scoping distributed queries
+
+### Updates
+
+When updating a distributed table, try to filter queries on the distribution
+column--at least when it makes sense, that is, when the added filter doesn't change the
+meaning of the query.
+
+In some workloads, it's easy. Transactional/operational workloads like
+multi-tenant SaaS apps or the Internet of Things distribute tables by tenant or
+device. Queries are scoped to a tenant- or device-ID.
+
+For instance, in our [multi-tenant
+tutorial](tutorial-design-database-multi-tenant.md#use-psql-utility-to-create-a-schema)
+we have an `ads` table distributed by `company_id`. The naive way to update an
+ad is to single it out like this:
+
+```sql
+-- slow
+
+UPDATE ads
+ SET impressions_count = impressions_count+1
+ WHERE id = 42; -- missing filter on distribution column
+```
+
+Although the query uniquely identifies a row and updates it, Azure Cosmos DB for PostgreSQL
+doesn't know, at planning time, which shard the query will update. The Citus extension takes a
+ShareUpdateExclusiveLock on all shards to be safe, which blocks other queries
+trying to update the table.
+
+Even though the `id` was sufficient to identify a row, we can include an
+extra filter to make the query faster:
+
+```sql
+-- fast
+
+UPDATE ads
+ SET impressions_count = impressions_count+1
+ WHERE id = 42
+ AND company_id = 1; -- the distribution column
+```
+
+The Azure Cosmos DB for PostgreSQL query planner sees a direct filter on the distribution
+column and knows exactly which single shard to lock. In our tests, adding
+filters for the distribution column increased parallel update performance by
+**100x**.
+
+### Joins and CTEs
+
+We've seen how UPDATE statements should scope by the distribution column to
+avoid unnecessary shard locks. Other queries benefit from scoping too, usually
+to avoid the network overhead of unnecessarily shuffling data between worker
+nodes.
+
+For instance, the following query selects a single ad in a CTE and then joins
+it to the campaigns table, without filtering on the distribution column:
+
+```sql
+-- logically correct, but slow
+
+WITH single_ad AS (
+ SELECT *
+ FROM ads
+ WHERE id=1
+)
+SELECT *
+ FROM single_ad s
+ JOIN campaigns c ON (s.campaign_id=c.id);
+```
+
+We can speed up the query by filtering on the distribution column,
+`company_id`, in both the CTE and the main SELECT statement.
+
+```sql
+-- faster, joining on distribution column
+
+WITH single_ad AS (
+ SELECT *
+ FROM ads
+ WHERE id=1 and company_id=1
+)
+SELECT *
+ FROM single_ad s
+ JOIN campaigns c ON (s.campaign_id=c.id)
+ WHERE s.company_id=1 AND c.company_id = 1;
+```
+
+In general, when joining distributed tables, try to include the distribution
+column in the join conditions. However, when joining a distributed table with a
+reference table, the filter isn't required, because reference table contents are
+replicated to all worker nodes.
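+
+For example, assuming a hypothetical `campaign_types` reference table and a
+`type_id` column on `campaigns`, the join below scopes only the distributed
+table by its distribution column; the reference table needs no such filter:
+
+```sql
+-- campaigns is distributed by company_id; campaign_types is a reference table
+SELECT c.id, t.label
+  FROM campaigns c
+  JOIN campaign_types t ON c.type_id = t.type_id  -- no company_id filter needed for t
+ WHERE c.company_id = 1;                          -- distributed table stays scoped
+```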
+
+If it seems inconvenient to add the extra filters to all your queries, keep in
+mind there are helper libraries for several popular application frameworks that
+make it easier. Here are instructions:
+
+* [Ruby on Rails](https://docs.citusdata.com/en/stable/develop/migration_mt_ror.html),
+* [Django](https://docs.citusdata.com/en/stable/develop/migration_mt_django.html),
+* [ASP.NET](https://docs.citusdata.com/en/stable/develop/migration_mt_asp.html),
+* [Java Hibernate](https://www.citusdata.com/blog/2018/02/13/using-hibernate-and-spring-to-build-multitenant-java-apps/).
+
+## Efficient database logging
+
+Logging all SQL statements all the time adds overhead. In our measurements,
+using a more judicious log level improved transactions per second by
+**10x** versus full logging.
+
+For efficient everyday operation, you can disable logging except for errors and
+abnormally long-running queries:
+
+| setting | value | reason |
+||-|--|
+| log_statement_stats | OFF | Avoid profiling overhead |
+| log_duration | OFF | Don't need to know the duration of normal queries |
+| log_statement | NONE | Don't log queries without a more specific reason |
+| log_min_duration_statement | A value longer than what you think normal queries should take | Shows the abnormally long queries |
+
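+To check how these settings are currently configured on your coordinator, you
+can run a read-only query against `pg_settings`:
+
+```sql
+-- inspect the current values of the logging settings listed above
+SELECT name, setting
+  FROM pg_settings
+ WHERE name IN ('log_statement_stats', 'log_duration',
+                'log_statement', 'log_min_duration_statement');
+```
+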
+> [!NOTE]
+>
+> The log-related settings in our managed service take the above
+> recommendations into account. You can leave them as they are. However, we've
+> sometimes seen customers change the settings to make logging aggressive,
+> which has led to performance issues.
+
+## Lock contention
+
+The database uses locks to keep data consistent under concurrent access.
+However, some query patterns require an excessive amount of locking, and faster
+alternatives exist.
+
+### System health and locks
+
+Before diving into common locking inefficiencies, let's see how to view locks
+and activity throughout the database cluster. The
+[citus_stat_activity](reference-metadata.md#distributed-query-activity) view
+gives a detailed view.
+
+The view shows, among other things, how queries are blocked by "wait events,"
+including locks. Grouping by
+[wait_event_type](https://www.postgresql.org/docs/14/monitoring-stats.html#WAIT-EVENT-TABLE)
+paints a picture of system health:
+
+```sql
+-- general system health
+
+SELECT wait_event_type, count(*)
+ FROM citus_stat_activity
+ WHERE state != 'idle'
+ GROUP BY 1
+ ORDER BY 2 DESC;
+```
+
+A NULL `wait_event_type` means the query isn't waiting on anything.
+
+If you do see locks in the stat activity output, you can view the specific
+blocked queries using `citus_lock_waits`:
+
+```sql
+SELECT * FROM citus_lock_waits;
+```
+
+For example, if one query is blocked on another trying to update the same row,
+you'll see the blocked and blocking statements appear:
+
+```
+-[ RECORD 1 ]-------------------------+--------------------------------------
+waiting_gpid                          | 10000011981
+blocking_gpid                         | 10000011979
+blocked_statement                     | UPDATE numbers SET j = 3 WHERE i = 1;
+current_statement_in_blocking_process | UPDATE numbers SET j = 2 WHERE i = 1;
+waiting_nodeid                        | 1
+blocking_nodeid                       | 1
+```
+
+To see not only the locks happening at the moment, but historical patterns, you
+can capture locks in the PostgreSQL logs. To learn more, see the
+[log_lock_waits](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-LOCK-WAITS)
+server setting in the PostgreSQL documentation. Another great resource is
+[seven tips for dealing with
+locks](https://www.citusdata.com/blog/2018/02/22/seven-tips-for-dealing-with-postgres-locks/)
+on the Citus Data Blog.
+
+### Common problems and solutions
+
+#### DDL commands
+
+DDL Commands like `truncate`, `drop`, and `create index` all take write locks,
+and block writes on the entire table. Minimizing such operations reduces
+locking issues.
+
+Tips:
+
+* Try to consolidate DDL into maintenance windows, or use them less often.
+
+* PostgreSQL supports [building indices
+ concurrently](https://www.postgresql.org/docs/current/sql-createhttps://docsupdatetracker.net/index.html#SQL-CREATEINDEX-CONCURRENTLY),
+ to avoid taking a write lock on the table.
+
+* Consider setting
+ [lock_timeout](https://www.postgresql.org/docs/14/runtime-config-client.html#GUC-LOCK-TIMEOUT)
+ in a SQL session prior to running a heavy DDL command. With `lock_timeout`,
+ PostgreSQL will abort the DDL command if the command waits too long for a write
+ lock. A DDL command waiting for a lock can cause later queries to queue behind
+ itself.
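+
+Here's a minimal sketch of the `lock_timeout` tip. The `ALTER TABLE` statement
+and column name are hypothetical stand-ins for any heavy DDL command:
+
+```sql
+-- give up after two seconds instead of queuing behind long-running queries
+SET lock_timeout = '2s';
+
+-- hypothetical heavy DDL command; if the lock isn't acquired within
+-- lock_timeout, the command aborts and can be retried later
+ALTER TABLE ads ADD COLUMN placement text;
+```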
+
+#### Idle in transaction connections
+
+Idle (uncommitted) transactions sometimes block other queries unnecessarily.
+For example:
+
+```sql
+BEGIN;
+
+UPDATE ... ;
+
+-- Suppose the client waits now and doesn't COMMIT right away.
+--
+-- Other queries that want to update the same rows will be blocked.
+
+COMMIT; -- finally!
+```
+
+To manually clean up any long-idle queries on the coordinator node, you can run
+a command like this:
+
+```sql
+SELECT pg_terminate_backend(pid)
+FROM pg_stat_activity
+WHERE datname = 'citus'
+ AND pid <> pg_backend_pid()
+ AND state in ('idle in transaction')
+ AND state_change < current_timestamp - INTERVAL '15' MINUTE;
+```
+
+PostgreSQL also offers an
+[idle_in_transaction_session_timeout](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-IDLE-IN-TRANSACTION-SESSION-TIMEOUT)
+setting to automate idle session termination.
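+
+As a hedged sketch, the timeout can be set for the current session, or more
+broadly if your role is permitted to change database-level settings:
+
+```sql
+-- applies only to the current session
+SET idle_in_transaction_session_timeout = '15min';
+
+-- applies to future connections, if your role may alter database settings
+ALTER DATABASE citus SET idle_in_transaction_session_timeout = '15min';
+```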
+
+#### Deadlocks
+
+Azure Cosmos DB for PostgreSQL detects distributed deadlocks and cancels their queries, but the
+situation is less performant than avoiding deadlocks in the first place. A
+common source of deadlocks comes from updating the same set of rows in a
+different order from multiple transactions at once.
+
+For instance, running these transactions in parallel:
+
+Session A:
+
+```sql
+BEGIN;
+UPDATE ads SET updated_at = now() WHERE id = 1 AND company_id = 1;
+UPDATE ads SET updated_at = now() WHERE id = 2 AND company_id = 1;
+```
+
+Session B:
+
+```sql
+BEGIN;
+UPDATE ads SET updated_at = now() WHERE id = 2 AND company_id = 1;
+UPDATE ads SET updated_at = now() WHERE id = 1 AND company_id = 1;
+
+-- ERROR: canceling the transaction since it was involved in a distributed deadlock
+```
+
+Session A updated ID 1 then 2, whereas session B updated 2 then 1. Write
+SQL code for transactions carefully to update rows in the same order. (The
+update order is sometimes called a "locking hierarchy.")
+
+In our measurements, bulk updating a set of rows with many transactions went
+**3x faster** when deadlocks were avoided.
+
+## I/O during ingestion
+
+I/O bottlenecking is typically less of a problem for Azure Cosmos DB for PostgreSQL than
+for single-node PostgreSQL because of sharding. The shards are individually
+smaller tables, with better index and cache hit rates, yielding better
+performance.
+
+However, even with Azure Cosmos DB for PostgreSQL, as tables and indices grow larger, disk
+I/O can become a problem for data ingestion. Things to look out for are an
+increasing number of 'IO' `wait_event_type` entries appearing in
+`citus_stat_activity`:
+
+```sql
+SELECT wait_event_type, wait_event, count(*)
+ FROM citus_stat_activity
+ WHERE state='active'
+ GROUP BY 1,2;
+```
+
+Run the query repeatedly to capture wait-event information, and note how the
+counts of different wait event types change.
+
+Also look at [metrics in the Azure portal](concepts-monitoring.md), in
+particular whether the IOPS metric is maxing out.
+
+Tips:
+
+- If your data is naturally ordered, such as in a time series, use PostgreSQL
+  table partitioning (see the sketch after these tips). See [this
+  guide](https://docs.citusdata.com/en/stable/use_cases/timeseries.html) to learn
+  how to partition distributed tables.
+
+- Remove unused indices. Index maintenance causes I/O amplification during
+ ingestion. To find which indices are unused, use [this
+ query](howto-useful-diagnostic-queries.md#identifying-unused-indices).
+
+- If possible, avoid indexing randomized data. For instance, some UUID
+  generation algorithms follow no order. Indexing such a value causes a lot of
+  overhead. Try a bigint sequence instead, or monotonically increasing UUIDs.
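+
+As a sketch of the partitioning tip, a hypothetical time-series table can be
+range-partitioned by its timestamp column and still distributed by a tenant
+column:
+
+```sql
+-- hypothetical time-series table partitioned by event_time
+CREATE TABLE events (
+    tenant_id  bigint,
+    event_time timestamptz NOT NULL,
+    payload    jsonb
+) PARTITION BY RANGE (event_time);
+
+-- one partition per month; create more as time advances, or use a helper
+-- such as create_time_partitions if your Citus version provides it
+CREATE TABLE events_2020_03 PARTITION OF events
+    FOR VALUES FROM ('2020-03-01') TO ('2020-04-01');
+
+-- distribute the partitioned table (and its partitions) by tenant_id
+SELECT create_distributed_table('events', 'tenant_id');
+```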
+
+## Summary of results
+
+In benchmarks of simple ingestion with INSERTs, UPDATEs, transaction blocks, we
+observed the following query speedups for the techniques in this article.
+
+| Technique | Query speedup |
+|--||
+| Scoping queries | 100x |
+| Connection pooling | 24x |
+| Efficient logging | 10x |
+| Avoiding deadlock | 3x |
+
+## Next steps
+
+* [Advanced query performance tuning](https://docs.citusdata.com/en/stable/performance/performance_tuning.html)
+* [Useful diagnostic queries](howto-useful-diagnostic-queries.md)
+* Build fast [app stacks](quickstart-app-stacks-overview.yml)
cosmos-db Concepts Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-private-access.md
+
+ Title: Private access - Azure Cosmos DB for PostgreSQL
+description: This article describes private access for Azure Cosmos DB for PostgreSQL.
++++++ Last updated : 10/15/2021++
+# Private access in Azure Cosmos DB for PostgreSQL
+++
+This page describes the private access option. For public access, see
+[here](concepts-firewall-rules.md).
+
+## Definitions
+
+**Virtual network**. An Azure Virtual Network (VNet) is the fundamental
+building block for private networking in Azure. Virtual networks enable many
+types of Azure resources, such as database servers and Azure Virtual Machines
+(VM), to securely communicate with each other. Virtual networks support on-premises
+connections, allow hosts in multiple virtual networks to interact with each
+other through peering, and provide added benefits of scale, security options,
+and isolation. Each private endpoint for a cluster
+requires an associated virtual network.
+
+**Subnet**. Subnets segment a virtual network into one or more subnetworks.
+Each subnetwork gets a portion of the address space, improving address
+allocation efficiency. You can secure resources within subnets using Network
+Security Groups. For more information, see Network security groups.
+
+When you select a subnet for a cluster's private endpoint, make sure
+enough private IP addresses are available in that subnet for your current and
+future needs.
+
+**Private endpoint**. A private endpoint is a network interface that uses a
+private IP address from a virtual network. This network interface connects
+privately and securely to a service powered by Azure Private Link. Private
+endpoints bring the services into your virtual network.
+
+Enabling private access for Azure Cosmos DB for PostgreSQL creates a private endpoint for
+the cluster's coordinator node. The endpoint allows hosts in the selected
+virtual network to access the coordinator. You can optionally create private
+endpoints for worker nodes too.
+
+**Private DNS zone**. An Azure private DNS zone resolves hostnames within a
+linked virtual network, and within any peered virtual network. Domain records
+for nodes are created in a private DNS zone selected for
+their cluster. Be sure to use fully qualified domain names (FQDN) for
+nodes' PostgreSQL connection strings.
+
+## Private link
+
+You can use [private endpoints](../../private-link/private-endpoint-overview.md)
+for your clusters to allow hosts on a virtual network
+(VNet) to securely access data over a [Private
+Link](../../private-link/private-link-overview.md).
+
+The cluster's private endpoint uses an IP address from the virtual
+network's address space. Traffic between hosts on the virtual network and
+nodes goes over a private link on the Microsoft backbone
+network, eliminating exposure to the public Internet.
+
+Applications in the virtual network can connect to the nodes
+over the private endpoint seamlessly, using the same connection strings and
+authorization mechanisms that they would use otherwise.
+
+You can select private access during cluster creation,
+and you can switch from public access to private access at any point.
+
+### Using a private DNS zone
+
+A new private DNS zone is automatically provisioned for each private endpoint,
+unless you select one of the private DNS zones previously created by Azure
+Cosmos DB for PostgreSQL. For more information, see the [private DNS zones
+overview](../../dns/private-dns-overview.md).
+
+The Azure Cosmos DB for PostgreSQL service creates DNS records such as
+`c.privatelink.mygroup01.postgres.database.azure.com` in the selected private
+DNS zone for each node with a private endpoint. When you connect to a
+node from an Azure VM via private endpoint, Azure DNS
+resolves the node's FQDN into a private IP address.
+
+Private DNS zone settings and virtual network peering are independent of each
+other. If you want to connect to a node in the cluster from a client
+that's provisioned in another virtual network (from the same region or a
+different region), you have to link the private DNS zone with the virtual
+network. For more information, see [Link the virtual
+network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network).
+
+> [!NOTE]
+>
+> The service also always creates public CNAME records such as
+> `c.mygroup01.postgres.database.azure.com` for every node. However, selected
+> computers on the public internet can connect to the public hostname only if
+> the database administrator enables [public
+> access](concepts-firewall-rules.md) to the cluster.
+
+If you're using a custom DNS server, you must use a DNS forwarder to resolve
+the FQDN of nodes. The forwarder IP address should be
+168.63.129.16. The custom DNS server should be inside the virtual network or
+reachable via the virtual network's DNS server setting. To learn more, see
+[Name resolution that uses your own DNS
+server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
+
+### Recommendations
+
+When you enable private access for your cluster,
+consider:
+
+* **Subnet size**: When selecting a subnet size for a cluster,
+  consider current needs, such as IP addresses for the coordinator or for all
+  nodes in that cluster, and future needs, such as growth of that cluster.
+  Make sure you have enough private IP addresses for current and
+  future needs. Keep in mind that Azure reserves five IP addresses in each subnet.
+  See more details [in this
+  FAQ](../../virtual-network/virtual-networks-faq.md#configuration).
+* **Private DNS zone**: DNS records with private IP addresses are
+  maintained by the Azure Cosmos DB for PostgreSQL service. Make sure you don't delete the private
+  DNS zone used for clusters.
+
+## Limits and limitations
+
+See Azure Cosmos DB for PostgreSQL [limits and limitations](reference-limits.md)
+page.
+
+## Next steps
+
+* Learn how to [enable and manage private access](howto-private-access.md)
+* Follow a [tutorial](tutorial-private-access.md) to see private access in
+ action.
+* Learn about [private
+ endpoints](../../private-link/private-endpoint-overview.md)
+* Learn about [virtual
+ networks](../../virtual-network/concepts-and-best-practices.md)
+* Learn about [private DNS zones](../../dns/private-dns-overview.md)
cosmos-db Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-read-replicas.md
+
+ Title: Read replicas - Azure Cosmos DB for PostgreSQL
+description: This article describes the read replica feature in Azure Cosmos DB for PostgreSQL.
++++++ Last updated : 09/27/2022++
+# Read replicas in Azure Cosmos DB for PostgreSQL
++
+The read replica feature allows you to replicate data from a
+cluster to a read-only cluster. Replicas are updated
+**asynchronously** with PostgreSQL physical replication technology. You can
+run up to five replicas from the primary cluster.
+
+Replicas are new clusters that you manage similarly to regular clusters. For each
+read replica, you're billed for the provisioned compute in vCores and storage
+in GiB/month. Compute and storage costs for replica clusters are the same as
+for regular clusters.
+
+Learn how to [create and manage replicas](howto-read-replicas-portal.md).
+
+## When to use a read replica
+
+The read replica feature helps to improve the performance and scale of
+read-intensive workloads. Read workloads can be isolated to the replicas, while
+write workloads can be directed to the primary.
+
+A common scenario is to have BI and analytical workloads use the read replica
+as the data source for reporting.
+
+Because replicas are read-only, they don't directly reduce write-capacity
+burdens on the primary.
+
+### Considerations
+
+The feature is meant for scenarios where replication lag is acceptable, and is
+meant for offloading queries. It isn't meant for synchronous replication
+scenarios where replica data is expected to be up to date. There will be a
+measurable delay between the primary and the replica. The delay can be minutes
+or even hours, depending on the workload and the latency between primary and
+replica. The data on the replica eventually becomes consistent with the
+data on the primary. Use this feature for workloads that can accommodate this
+delay.
+
+## Create a replica
+
+When you start the create replica workflow, a blank cluster
+is created. The new cluster is filled with the data that was on the primary
+cluster. The creation time depends on the amount of data on the primary
+and the time since the last weekly full backup. The time can range from a few
+minutes to several hours.
+
+The read replica feature uses PostgreSQL physical replication, not logical
+replication. The default mode is streaming replication using replication slots.
+When necessary, log shipping is used to catch up.
+
+Learn how to [create a read replica in the Azure
+portal](howto-read-replicas-portal.md).
+
+## Connect to a replica
+
+When you create a replica, it doesn't inherit firewall rules from the primary
+cluster. These rules must be set up independently for the replica.
+
+The replica inherits the admin (`citus`) account from the primary cluster.
+All user accounts are replicated to the read replicas. You can only connect to
+a read replica by using the user accounts that are available on the primary
+server.
+
+You can connect to the replica's coordinator node by using its hostname and a
+valid user account, as you would on a regular cluster.
+For instance, given a replica cluster named **myreplica** with the admin username
+**citus**, you can connect to the coordinator node of the replica by using
+psql:
+
+```bash
+psql -h c.myreplica.postgres.database.azure.com -U citus@myreplica -d postgres
+```
+
+At the prompt, enter the password for the user account.
+
+## Replica promotion to independent cluster
+
+You can promote a replica to an independent cluster that is readable and
+writable. A promoted replica no longer receives updates from its original, and
+promotion can't be undone. Promoted replicas can have replicas of their own.
+
+There are two common scenarios for promoting a replica:
+
+1. **Disaster recovery.** If something goes wrong with the primary, or with an
+ entire region, you can open another cluster for writes as an emergency
+ procedure.
+2. **Migrating to another region.** If you want to move to another region,
+ create a replica in the new region, wait for data to catch up, then promote
+ the replica. To avoid potentially losing data during promotion, you may want
+ to disable writes to the original cluster after the replica catches up.
+
+ You can see how far a replica has caught up using the `replication_lag`
+ metric. See [metrics](concepts-monitoring.md#metrics) for more information.
+
+## Considerations
+
+This section summarizes considerations about the read replica feature.
+
+### New replicas
+
+A read replica is created as a new cluster. An existing
+cluster can't be made into a replica. You can't create a replica of
+another read replica.
+
+### Replica configuration
+
+Replicas inherit compute, storage, and worker node settings from their
+primaries. You can change some--but not all--settings on a replica. For
+instance, you can change compute, firewall rules for public access, and private
+endpoints for private access. You can't change the storage size or number of
+worker nodes.
+
+Remember to keep replicas powerful enough to keep up with the changes arriving
+from the primary. For instance, be sure to scale up compute on replicas if you
+scale it up on the primary.
+
+Firewall rules and parameter settings aren't inherited from the primary cluster
+to the replica, either when the replica is created or afterward.
+
+### Cross-region replication
+
+Read replicas can be created in the region of the primary cluster, or in
+any other [region supported by Azure Cosmos DB for PostgreSQL](resources-regions.md). The
+limit of five replicas per cluster counts across all regions, meaning five
+total, not five per region.
+
+## Next steps
+
+* Learn how to [create and manage read replicas in the Azure
+ portal](howto-read-replicas-portal.md).
cosmos-db Concepts Row Level Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-row-level-security.md
+
+ Title: Row level security - Azure Cosmos DB for PostgreSQL
+description: Multi-tenant security through database roles
+++++ Last updated : 06/30/2022++
+# Row-level security
++
+PostgreSQL [row-level security
+policies](https://www.postgresql.org/docs/current/ddl-rowsecurity.html)
+restrict which users can modify or access which table rows. Row-level security
+can be especially useful in a multi-tenant cluster. It
+allows individual tenants to have full SQL access to the database while hiding
+each tenant's information from other tenants.
+
+## Implementing for multi-tenant apps
+
+We can implement the separation of tenant data by using a naming convention for
+database roles that ties into table row-level security policies. We'll assign
+each tenant a database role in a numbered sequence: `tenant1`, `tenant2`,
+etc. Tenants will connect to Azure Cosmos DB for PostgreSQL using these separate roles. Row-level
+security policies can compare the role name to values in the `tenant_id`
+distribution column to decide whether to allow access.
+
+Here's how to apply the approach on a simplified events table distributed by
+`tenant_id`. First [create the roles](howto-create-users.md) `tenant1` and
+`tenant2`. Then run the following SQL commands as the `citus` administrator
+user:
+
+```postgresql
+CREATE TABLE events(
+ tenant_id int,
+ id int,
+ type text
+);
+
+SELECT create_distributed_table('events','tenant_id');
+
+INSERT INTO events VALUES (1,1,'foo'), (2,2,'bar');
+
+-- assumes that roles tenant1 and tenant2 exist
+GRANT select, update, insert, delete
+ ON events TO tenant1, tenant2;
+```
+
+As it stands, anyone with select permissions for this table can see both rows.
+Users from either tenant can see and update the row of the other tenant. We can
+solve the data leak with row-level table security policies.
+
+Each policy consists of two clauses: USING and WITH CHECK. When a user tries to
+read or write rows, the database evaluates each row against these clauses.
+PostgreSQL checks existing table rows against the expression specified in the
+USING clause, and rows that would be created via INSERT or UPDATE against the
+WITH CHECK clause.
+
+```postgresql
+-- first a policy for the system admin "citus" user
+CREATE POLICY admin_all ON events
+ TO citus -- apply to this role
+ USING (true) -- read any existing row
+ WITH CHECK (true); -- insert or update any row
+
+-- next a policy which allows role "tenant<n>" to
+-- access rows where tenant_id = <n>
+CREATE POLICY user_mod ON events
+ USING (current_user = 'tenant' || tenant_id::text);
+ -- lack of CHECK means same condition as USING
+
+-- enforce the policies
+ALTER TABLE events ENABLE ROW LEVEL SECURITY;
+```
+
+Now roles `tenant1` and `tenant2` get different results for their queries:
+
+**Connected as tenant1:**
+
+```sql
+SELECT * FROM events;
+```
+```
+┌───────────┬────┬──────┐
+│ tenant_id │ id │ type │
+├───────────┼────┼──────┤
+│         1 │  1 │ foo  │
+└───────────┴────┴──────┘
+```
+
+**Connected as tenant2:**
+
+```sql
+SELECT * FROM events;
+```
+```
+┌───────────┬────┬──────┐
+│ tenant_id │ id │ type │
+├───────────┼────┼──────┤
+│         2 │  2 │ bar  │
+└───────────┴────┴──────┘
+```
+```sql
+INSERT INTO events VALUES (3,3,'surprise');
+/*
+ERROR: new row violates row-level security policy for table "events_102055"
+*/
+```
+
+## Next steps
+
+Learn how to [create roles](howto-create-users.md) in a
+cluster.
cosmos-db Concepts Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-security-overview.md
+
+ Title: Security overview - Azure Cosmos DB for PostgreSQL
+description: Information protection and network security for Azure Cosmos DB for PostgreSQL.
++++++ Last updated : 01/14/2022++
+# Security in Azure Cosmos DB for PostgreSQL
++
+This page outlines the multiple layers of security available to protect the data in your
+cluster.
+
+## Information protection and encryption
+
+### In transit
+
+Whenever data is ingested into a node, Azure Cosmos DB for PostgreSQL secures your data by
+encrypting it in-transit with Transport Layer Security 1.2. Encryption
+(SSL/TLS) is always enforced, and can't be disabled.
+
+### At rest
+
+The Azure Cosmos DB for PostgreSQL service uses the FIPS 140-2 validated cryptographic
+module for storage encryption of data at rest. Data, including backups, is
+encrypted on disk, including the temporary files created while running queries.
+The service uses the AES 256-bit cipher included in Azure storage encryption,
+and the keys are system-managed. Storage encryption is always on, and can't be
+disabled.
+
+## Network security
++
+## Limits and limitations
+
+See Azure Cosmos DB for PostgreSQL [limits and limitations](reference-limits.md)
+page.
+
+## Next steps
+
+* Learn how to [enable and manage private access](howto-private-access.md)
+* Learn about [private
+ endpoints](../../private-link/private-endpoint-overview.md)
+* Learn about [virtual
+ networks](../../virtual-network/concepts-and-best-practices.md)
+* Learn about [private DNS zones](../../dns/private-dns-overview.md)
cosmos-db Concepts Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-upgrade.md
+
+ Title: Cluster upgrades - Azure Cosmos DB for PostgreSQL
+description: Types of upgrades, and their precautions
+++++ Last updated : 08/29/2022++
+# Cluster upgrades
++
+The Azure Cosmos DB for PostgreSQL managed service can handle upgrades of both the
+PostgreSQL server, and the Citus extension. You can choose these versions
+mostly independently of one another, except Citus 11 requires PostgreSQL 13 or
+higher.
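+
+Before planning an upgrade, you can check the versions a cluster currently runs from any SQL client, for example:
+
+```sql
+-- PostgreSQL server version
+SELECT version();
+
+-- Citus extension version (citus_version() is provided by the Citus extension)
+SELECT citus_version();
+```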
+
+## Upgrade precautions
+
+Upgrades require some downtime in the database cluster. The exact time depends
+on the source and destination versions of the upgrade. To prepare for the
+production cluster upgrade, we recommend [testing the
+upgrade](howto-upgrade.md#test-the-upgrade-first) and measuring downtime during
+the test.
+
+Also, upgrading a major version of Citus can introduce changes in behavior.
+It's best to familiarize yourself with new product features and changes to
+avoid surprises.
+
+Noteworthy Citus 11 changes:
+
+* Table shards may disappear in your SQL client. Their visibility
+ is now controlled by
+ [citus.show_shards_for_app_name_prefixes](reference-parameters.md#citusshow_shards_for_app_name_prefixes-text).
+* There are several [deprecated
+ features](https://www.citusdata.com/updates/v11-0/#deprecated-features).
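+
+For example, if you rely on seeing shard tables in an interactive psql session (the first item above), the following sketch makes them visible again. It assumes the parameter accepts a comma-separated list of `application_name` prefixes and that your client reports `psql` as its application name.
+
+```sql
+-- Show shard tables to clients whose application_name starts with 'psql'.
+SET citus.show_shards_for_app_name_prefixes TO 'psql';
+```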
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to perform upgrades](howto-upgrade.md)
cosmos-db Howto Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-alert-on-metric.md
+
+ Title: Configure alerts - Azure Cosmos DB for PostgreSQL
+description: See how to configure and access metric alerts for Azure Cosmos DB for PostgreSQL.
++++++ Last updated : 3/16/2020++
+# Use the Azure portal to set up alerts on metrics for Azure Cosmos DB for PostgreSQL
++
+This article shows you how to set up Azure Cosmos DB for PostgreSQL alerts by using the Azure portal. You can receive an alert based on [monitoring metrics](concepts-monitoring.md) for your Azure services.
+
+You'll set up an alert to trigger when the value of a specified metric crosses a threshold. The alert triggers when the condition is first met, and continues to trigger afterwards.
+
+You can configure an alert to do the following actions when it triggers:
+* Send email notifications to the service administrator and coadministrators.
+* Send email to additional emails that you specify.
+* Call a webhook.
+
+You can configure and get information about alert rules using:
+* [Azure portal](../../azure-monitor/alerts/alerts-metric.md#create-with-azure-portal)
+* [Azure CLI](../../azure-monitor/alerts/alerts-metric.md#with-azure-cli)
+* [Azure Monitor REST API](/rest/api/monitor/metricalerts)
+
+## Create an alert rule on a metric from the Azure portal
+1. In the [Azure portal](https://portal.azure.com/), select the Azure Cosmos DB for PostgreSQL server you want to monitor.
+
+1. Under the **Monitoring** section of the sidebar, select **Alerts**, and then select **Create** or **Create alert rule**.
+
+ :::image type="content" source="media/howto-alert-on-metric/2-alert-rules.png" alt-text="Screenshot that shows selecting Create alert rule.":::
+
+1. The **Select a signal** screen opens. Select a metric from the list of signals to be alerted on. For this example, select **Storage percent**.
+
+ :::image type="content" source="media/howto-alert-on-metric/6-configure-signal-logic.png" alt-text="Screenshot shows the Configure signal logic page where you can view several signals.":::
+
+1. On the **Condition** tab of the **Create an alert rule** page, under **Alert logic**, complete the following items:
+
+ - For **Threshold**, select **Static**.
+ - For **Aggregation type**, select **Average**.
+ - For **Operator**, select **Greater than**.
+ - For **Threshold value**, enter *85*.
+
+ :::image type="content" source="media/howto-alert-on-metric/7-set-threshold-time.png" alt-text="Screenshot that shows configuring the Alert logic.":::
+
+1. Select the **Actions** tab, and then select **Create action group** to create a new group to receive notifications on the alert.
+
+1. On the **Create an action group** form, select the **Subscription**, **Resource group**, and **Region**, and enter a name and display name for the group.
+
+ :::image type="content" source="media/howto-alert-on-metric/9-add-action-group.png" alt-text="Screenshot that shows the Create an action group form.":::
+
+1. Select **Next: Notifications** at the bottom of the page.
+
+1. On the **Notifications** tab, under **Notification type**, select **Email/SMS message/Push/Voice**.
+
+1. On the **Email/SMS message/Push/Voice** form, fill out email addresses and phone numbers for the notification types and recipients you want, and then select **OK**.
+
+ :::image type="content" source="media/howto-alert-on-metric/10-action-group-type.png" alt-text="Screenshot that shows the Create an alert rule page.":::
+
+1. On the **Create an action group** form, enter a name for the new notification.
+
+1. Select **Review + create**, and then select **Create** to create the action group. The new action group is created and appears under **Action group name** on the **Actions** tab of the **Create an alert rule** page.
+
+1. Select **Next: Details** at the bottom of the page.
+
+1. On the **Details** tab, select a severity for the rule. Give the rule an easily identifiable name, and add an optional description.
+
+ :::image type="content" source="media/howto-alert-on-metric/11-name-description-severity.png" alt-text="Screenshot that shows the alert Details tab.":::
+
+1. Select **Review + create**, and then select **Create** to create the alert. Within a few minutes, the alert is active and triggers as previously described.
+
+## Manage alerts
+
+Once you've created an alert, you can select it and do the following actions:
+
+* View a graph showing the metric threshold and the actual values from the previous day relevant to this alert.
+* **Edit** or **Delete** the alert rule.
+* **Disable** or **Enable** the alert, if you want to temporarily stop or resume receiving notifications.
+
+## Suggested alerts
+
+Here are some examples of suggested alerts to set up.
+
+### Disk space
+
+Monitoring and alerting is important for every production cluster. The underlying PostgreSQL database requires free disk space to operate correctly. If the disk becomes full, the database server node will go offline and refuse to start until space is available. At that point, it requires a Microsoft support request to fix the situation.
+
+We recommend setting disk space alerts on every node in every cluster, even for non-production usage. Disk space usage alerts provide the advance warning needed to intervene and keep nodes healthy. For best results, try a series of alerts at 75%, 85%, and 95% usage. The percentages to choose depend on data ingestion speed, since fast data ingestion fills up the disk faster.
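+
+You can also spot-check usage from a SQL client. The following sketch uses standard PostgreSQL size functions plus the Citus `citus_total_relation_size()` function; the table name `events` is hypothetical.
+
+```sql
+-- Total size of the current database on the node you're connected to.
+SELECT pg_size_pretty(pg_database_size(current_database())) AS database_size;
+
+-- Size of a distributed table including its shards on the worker nodes
+-- ('events' is a placeholder table name).
+SELECT pg_size_pretty(citus_total_relation_size('events')) AS events_size;
+```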
+
+As the disk approaches its space limit, try these techniques to get more free space:
+
+* Review data retention policy. Move older data to cold storage if feasible.
+* Consider [adding nodes](howto-scale-grow.md#add-worker-nodes) to the cluster and rebalancing shards. Rebalancing distributes the data across more computers.
+* Consider [growing the capacity](howto-scale-grow.md#increase-or-decrease-vcores-on-nodes) of worker nodes. Each worker can have up to 2 TiB of storage. However, try adding nodes before resizing them, because adding nodes completes faster.
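+
+After adding worker nodes (the second technique above), the new nodes hold no data until you rebalance shards. A minimal sketch, run on the coordinator as the `citus` admin role:
+
+```sql
+-- Move shards onto the newly added worker nodes to spread data and load.
+SELECT rebalance_table_shards();
+```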
+
+### CPU usage
+
+Monitoring CPU usage is useful to establish a baseline for performance. For example, you may notice that CPU usage is usually around 40-60%. If CPU usage suddenly begins hovering around 95%, you can recognize an anomaly. The CPU usage may reflect organic growth, but it may also reveal a stray query. When creating a CPU alert, set a long aggregation granularity to catch prolonged increases and ignore momentary spikes.
+
+## Next steps
+* Learn more about [configuring webhooks in alerts](../../azure-monitor/alerts/alerts-webhooks.md).
+* Get an [overview of metrics collection](../../azure-monitor/data-platform.md) to make sure your service is available and responsive.
cosmos-db Howto App Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-app-type.md
+
+ Title: Determine application type - Azure Cosmos DB for PostgreSQL
+description: Identify your application for effective distributed data modeling
+++++ Last updated : 07/17/2020++
+# Determine application type
++
+Running efficient queries on a cluster requires that
+tables be properly distributed across servers. The recommended distribution
+varies by the type of application and its query patterns.
+
+There are broadly two kinds of applications that work well on Azure Cosmos DB
+for PostgreSQL. The first step in data modeling is to identify which of them
+more closely resembles your application.
+
+## At a glance
+
+| Multi-Tenant Applications | Real-Time Applications |
+|---|---|
+| Sometimes dozens or hundreds of tables in schema | Small number of tables |
+| Queries relating to one tenant (company/store) at a time | Relatively simple analytics queries with aggregations |
+| OLTP workloads for serving web clients | High ingest volume of mostly immutable data |
+| OLAP workloads that serve per-tenant analytical queries | Often centering around a large table of events |
+
+## Examples and characteristics
+
+**Multi-Tenant Application**
+
+> These are typically SaaS applications that serve other companies,
+> accounts, or organizations. Most SaaS applications are inherently
+> relational. They have a natural dimension on which to distribute data
+> across nodes: just shard by tenant\_id.
+>
+> Azure Cosmos DB for PostgreSQL enables you to scale out your database to millions of
+> tenants without having to re-architect your application. You can keep the
+> relational semantics you need, like joins, foreign key constraints,
+> transactions, ACID, and consistency.
+>
+> - **Examples**: Websites that host storefronts for other
+> businesses, such as a digital marketing solution, or a sales
+> automation tool.
+> - **Characteristics**: Queries relating to a single tenant rather
+> than joining information across tenants. This includes OLTP
+> workloads for serving web clients, and OLAP workloads that serve
+> per-tenant analytical queries. Having dozens or hundreds of tables
+> in your database schema is also an indicator for the multi-tenant
+> data model.
+>
+> Scaling a multi-tenant app with Azure Cosmos DB for PostgreSQL also requires minimal
+> changes to application code. We have support for popular frameworks like Ruby
+> on Rails and Django.
+
+**Real-Time Analytics**
+
+> Applications needing massive parallelism, coordinating hundreds of cores for
+> fast results to numerical, statistical, or counting queries. By sharding and
+> parallelizing SQL queries across multiple nodes, Azure Cosmos DB for PostgreSQL makes it
+> possible to perform real-time queries across billions of records in under a
+> second.
+>
+> Tables in real-time analytics data models are typically distributed by
+> columns like user\_id, host\_id, or device\_id.
+>
+> - **Examples**: Customer-facing analytics dashboards requiring
+> sub-second response times.
+> - **Characteristics**: Few tables, often centering around a big
+> table of device-, site- or user-events and requiring high ingest
+> volume of mostly immutable data. Relatively simple (but
+> computationally intensive) analytics queries involving several
+> aggregations and GROUP BYs.
+
+If your situation resembles either case above, then the next step is to decide
+how to shard your data in the cluster. The database administrator's
+choice of distribution columns needs to match the access patterns of typical
+queries to ensure performance.
+
+## Next steps
+
+* [Choose a distribution
+ column](howto-choose-distribution-column.md) for tables in your
+ application to distribute data efficiently
cosmos-db Howto Choose Distribution Column https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-choose-distribution-column.md
+
+ Title: Choose distribution columns - Azure Cosmos DB for PostgreSQL
+description: Learn how to choose distribution columns in common scenarios in Azure Cosmos DB for PostgreSQL.
+++++ Last updated : 02/28/2022++
+# Choose distribution columns in Azure Cosmos DB for PostgreSQL
++
+Choosing each table's distribution column is one of the most important modeling
+decisions you'll make. Azure Cosmos DB for PostgreSQL
+stores rows in shards based on the value of the rows' distribution column.
+
+The correct choice groups related data together on the same physical nodes,
+which makes queries fast and adds support for all SQL features. An incorrect
+choice makes the system run slowly.
+
+## General tips
+
+Here are four criteria for choosing the ideal distribution column for your
+distributed tables.
+
+1. **Pick a column that is a central piece in the application workload.**
+
+ You might think of this column as the "heart," "central piece," or "natural dimension"
+ for partitioning data.
+
+ Examples:
+
+ * `device_id` in an IoT workload
+ * `security_id` for a financial app that tracks securities
+ * `user_id` in user analytics
+ * `tenant_id` for a multi-tenant SaaS application
+
+2. **Pick a column with decent cardinality, and an even statistical
+ distribution.**
+
+ The column should have many values, and distribute thoroughly and evenly
+ between all shards.
+
+ Examples:
+
+ * Cardinality over 1000
+ * Don't pick a column that has the same value on a large percentage of rows
+ (data skew)
+ * In a SaaS workload, having one tenant much bigger than the rest can cause
+ data skew. For this situation, you can use [tenant
+ isolation](reference-functions.md#isolate_tenant_to_new_shard) to create a
+ dedicated shard to handle the tenant.
+
+3. **Pick a column that benefits your existing queries.**
+
+ For a transactional or operational workload (where most queries take only a
+ few milliseconds), pick a column that appears as a filter in `WHERE` clauses
+ for at least 80% of queries. For instance, the `device_id` column in `SELECT *
+ FROM events WHERE device_id=1`.
+
+ For an analytical workload (where most queries take 1-2 seconds), pick a
+ column that enables queries to be parallelized across worker nodes. For
+ instance, a column frequently occurring in GROUP BY clauses, or queried over
+ multiple values at once.
+
+4. **Pick a column that is present in the majority of large tables.**
+
+ Tables over 50 GB should be distributed. Picking the same distribution column
+ for all of them enables you to co-locate data for that column on worker nodes.
+ Co-location makes it efficient to run JOINs and rollups, and enforce foreign
+ keys.
+
+ The other (smaller) tables can be local or reference tables. If the smaller
+ table needs to JOIN with distributed tables, make it a reference table.
+
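+
+The following sketch illustrates these criteria with hypothetical tables: two large tables share the distribution column `device_id` so they're co-located, and a small lookup table becomes a reference table.
+
+```sql
+CREATE TABLE events  (device_id bigint, event_time timestamptz, payload jsonb);
+CREATE TABLE devices (device_id bigint, device_name text);
+CREATE TABLE device_types (type_id int, type_name text);
+
+-- Large tables distributed on the same column are co-located, so joins
+-- on device_id stay local to each worker node.
+SELECT create_distributed_table('events', 'device_id');
+SELECT create_distributed_table('devices', 'device_id');
+
+-- Small table that many queries join against: replicate it to every node.
+SELECT create_reference_table('device_types');
+```
+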
+## Use-case examples
+
+We've seen general criteria for picking the distribution column. Now let's see
+how they apply to common use cases.
+
+### Multi-tenant apps
+
+The multi-tenant architecture uses a form of hierarchical database modeling to
+distribute queries across nodes in the cluster. The top of the data
+hierarchy is known as the *tenant ID* and needs to be stored in a column on
+each table.
+
+Azure Cosmos DB for PostgreSQL inspects queries to see which tenant ID they involve and
+finds the matching table shard. It routes the query to a single worker node
+that contains the shard. Running a query with all relevant data placed on the
+same node is called colocation.
+
+The following diagram illustrates colocation in the multi-tenant data model. It
+contains two tables, Accounts and Campaigns, each distributed by `account_id`.
+The shaded boxes represent shards. Green shards are stored together on one
+worker node, and blue shards are stored on another worker node. Notice how a
+join query between Accounts and Campaigns has all the necessary data together
+on one node when both tables are restricted to the same account\_id.
+
+![Multi-tenant colocation](media/concepts-choosing-distribution-column/multi-tenant-colocation.png)
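+
+For example, with both tables distributed by `account_id` (the column and table names here are illustrative), a query that filters on a single account is routed to the one worker node that holds that account's shards:
+
+```sql
+-- Routed to a single worker node because both tables are distributed by
+-- account_id and the query filters on one account_id value.
+SELECT a.name, count(c.id) AS campaign_count
+FROM accounts a
+JOIN campaigns c ON c.account_id = a.account_id
+WHERE a.account_id = 42
+GROUP BY a.name;
+```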
+
+To apply this design in your own schema, identify what constitutes a tenant in
+your application. Common instances include company, account, organization, or
+customer. The column name will be something like `company_id` or `customer_id`.
+Examine each of your queries and ask yourself, would it work if it had
+more WHERE clauses to restrict all tables involved to rows with the same
+tenant ID? Queries in the multi-tenant model are scoped to a tenant. For
+instance, queries on sales or inventory are scoped within a certain store.
+
+#### Best practices
+
+- **Distribute tables by a common tenant\_id column.** For
+ instance, in a SaaS application where tenants are companies, the
+ tenant\_id is likely to be the company\_id.
+- **Convert small cross-tenant tables to reference tables.** When
+ multiple tenants share a small table of information, distribute it
+ as a reference table.
+- **Filter all application queries by tenant\_id.** Each
+ query should request information for one tenant at a time.
+
+Read the [multi-tenant
+tutorial](./tutorial-design-database-multi-tenant.md) for an example of how to
+build this kind of application.
+
+### Real-time apps
+
+The multi-tenant architecture introduces a hierarchical structure and uses data
+colocation to route queries per tenant. By contrast, real-time architectures
+depend on specific distribution properties of their data to achieve highly
+parallel processing.
+
+We use "entity ID" as a term for distribution columns in the real-time model.
+Typical entities are users, hosts, or devices.
+
+Real-time queries typically ask for numeric aggregates grouped by date or
+category. Azure Cosmos DB for PostgreSQL sends these queries to each shard for partial
+results and assembles the final answer on the coordinator node. Queries run
+fastest when as many nodes contribute as possible, and when no single node must
+do a disproportionate amount of work.
+
+#### Best practices
+
+- **Choose a column with high cardinality as the distribution
+ column.** For comparison, a Status field on an order table with
+ values New, Paid, and Shipped is a poor choice of distribution column.
+ It assumes only those few values, which limits the number of shards
+ that can hold the data, and the number of nodes that can process it.
+ Among columns with high cardinality, it's also good to choose those
+ columns that are frequently used in group-by clauses or as join keys.
+- **Choose a column with even distribution.** If you distribute a
+ table on a column skewed to certain common values, data in the
+ table tends to accumulate in certain shards. The nodes that hold
+ those shards end up doing more work than other nodes.
+- **Distribute fact and dimension tables on their common columns.**
+ Your fact table can have only one distribution key. Tables that join
+ on another key won't be colocated with the fact table. Choose
+ one dimension to colocate based on how frequently it's joined and
+ the size of the joining rows.
+- **Change some dimension tables into reference tables.** If a
+ dimension table can't be colocated with the fact table, you can
+ improve query performance by distributing copies of the dimension
+ table to all of the nodes in the form of a reference table.
+
+Read the [real-time dashboard tutorial](./tutorial-design-database-realtime.md)
+for an example of how to build this kind of application.
+
+### Time-series data
+
+In a time-series workload, applications query recent information while they
+archive old information.
+
+The most common mistake in modeling time-series information in Azure Cosmos DB
+for PostgreSQL is to use the timestamp itself as a distribution column. A hash
+distribution based on time distributes times seemingly at random into different
+shards rather than keeping ranges of time together in shards. Queries that
+involve time generally reference ranges of time, for example, the most recent
+data. This type of hash distribution leads to network overhead.
+
+#### Best practices
+
+- **Don't choose a timestamp as the distribution column.** Choose a
+ different distribution column. In a multi-tenant app, use the tenant
+ ID, or in a real-time app use the entity ID.
+- **Use PostgreSQL table partitioning for time instead.** Use table
+ partitioning to break a large table of time-ordered data into
+ multiple inherited tables with each table containing different time
+ ranges. Distributing a Postgres-partitioned table
+ creates shards for the inherited tables.
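+
+A minimal sketch of the second recommendation above, with hypothetical table and column names: partition the table by a time column, then distribute it by an entity ID.
+
+```sql
+-- Parent table partitioned by time and distributed by device_id.
+CREATE TABLE events (
+    device_id  bigint,
+    event_time timestamptz NOT NULL,
+    payload    jsonb
+) PARTITION BY RANGE (event_time);
+
+-- One partition per month; each partition gets its own shards.
+CREATE TABLE events_2022_10 PARTITION OF events
+    FOR VALUES FROM ('2022-10-01') TO ('2022-11-01');
+
+SELECT create_distributed_table('events', 'device_id');
+```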
+
+## Next steps
+
+- Learn how [colocation](concepts-colocation.md) between distributed data helps queries run fast.
+- Discover the distribution column of a distributed table, and other [useful diagnostic queries](howto-useful-diagnostic-queries.md).
cosmos-db Howto Compute Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-compute-quota.md
+
+ Title: Change compute quotas - Azure portal - Azure Cosmos DB for PostgreSQL
+description: Learn how to increase vCore quotas per region in Azure Cosmos DB for PostgreSQL from the Azure portal.
++++++ Last updated : 09/20/2022++
+# Change compute quotas from the Azure portal
++
+Azure enforces a vCore quota per subscription per region. There are two
+independently adjustable limits: vCores for coordinator nodes, and vCores for
+worker nodes.
+
+## Request quota increase
+
+1. Select **New Support Request** in the Azure portal menu for your
+ cluster.
+2. Fill out **Summary** with the quota increase request for your region, for
+ example *Quota increase in West Europe region.*
+3. These fields should be autoselected, but verify:
+ - **Issue Type** should be **Technical**.
+ - **Service type** should be **Azure Cosmos DB for PostgreSQL**.
+4. For **Problem type**, select **Create, Update, and Drop Resources**.
+5. For **Problem subtype**, select **Scaling Compute**.
+6. Select **Next** to view recommended solutions, and then select **Return to support request**.
+7. Select **Next** again. Under **Problem details**, provide the following information:
+ - For **When did the problem start**, the date, time, and timezone when the problem started, or select **Not sure, use current time**.
+ - For **Description**, quota increase details, for example *Need to increase worker node quota in West Europe to 512 vCores*.
++
+## Next steps
+
+* Learn about other [quotas and limits](reference-limits.md).
cosmos-db Howto Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-connect.md
+
+ Title: Connect to server - Azure Cosmos DB for PostgreSQL
+description: See how to connect to and query an Azure Cosmos DB for PostgreSQL cluster.
++++++ Last updated : 09/21/2022++
+# Connect to a cluster
++
+Choose one of the following database clients to see how to configure it to connect to
+an Azure Cosmos DB for PostgreSQL cluster.
+
+# [pgAdmin](#tab/pgadmin)
+
+[pgAdmin](https://www.pgadmin.org/) is a popular and feature-rich open source
+administration and development platform for PostgreSQL.
+
+1. [Download](https://www.pgadmin.org/download/) and install pgAdmin.
+
+1. Open the pgAdmin application on your client computer. From the Dashboard,
+ select **Add New Server**.
+
+ :::image type="content" source="media/howto-connect/pgadmin-dashboard.png" alt-text="Screenshot that shows the pgAdmin dashboard.":::
+
+1. Choose a **Name** in the General tab. Any name will work.
+
+ :::image type="content" source="media/howto-connect/pgadmin-general.png" alt-text="Screenshot that shows the pgAdmin general connection settings.":::
+
+1. Enter connection details in the Connection tab.
+
+ :::image type="content" source="media/howto-connect/pgadmin-connection.png" alt-text="Screenshot that shows the pgAdmin connection settings.":::
+
+ Customize the following fields:
+
+ * **Host name/address**: Obtain this value from the **Overview** page for your
+ cluster in the Azure portal. It's listed there as **Coordinator name**.
+ It will be of the form, `c.<clustername>.postgres.database.azure.com`.
+ * **Maintenance database**: use the value `citus`.
+ * **Username**: use the value `citus`.
+ * **Password**: the connection password.
+ * **Save password**: enable if desired.
+
+1. In the SSL tab, set **SSL mode** to **Require**.
+
+ :::image type="content" source="media/howto-connect/pgadmin-ssl.png" alt-text="Screenshot that shows the pgAdmin SSL settings.":::
+
+1. Select **Save** to save and connect to the database.
+
+# [psql](#tab/psql)
+
+The [psql utility](https://www.postgresql.org/docs/current/app-psql.html) is a
+terminal-based front-end to PostgreSQL. It enables you to type in queries
+interactively, issue them to PostgreSQL, and see the query results.
+
+1. Install psql. It's included with a [PostgreSQL
+ installation](https://www.postgresql.org/docs/current/tutorial-install.html),
+ or available separately in package managers for several operating systems.
+
+1. In the Azure portal, on the cluster page, select the **Connection strings** menu item, and then copy the **psql** connection string.
+
+ :::image type="content" source="media/quickstart-connect-psql/get-connection-string.png" alt-text="Screenshot that shows copying the psql connection string.":::
+
+ The **psql** string is of the form `psql "host=c.<clustername>.postgres.database.azure.com port=5432 dbname=citus user=citus password=<your_password> sslmode=require"`. Notice that the host name starts with a `c.`, for example `c.demo.postgres.database.azure.com`. This prefix indicates the coordinator node of the cluster. The default `dbname` and `username` are `citus` and can't be changed.
+
+1. In the connection string you copied, replace `<your_password>` with your administrative password.
+
+1. In a local terminal prompt, paste the psql connection string, and then press Enter.
+++
+## Next steps
+
+* Troubleshoot [connection issues](howto-troubleshoot-common-connection-issues.md).
+* [Verify TLS](howto-ssl-connection-security.md) certificates in your
+ connections.
+* Now that you can connect to the database, learn how to [build scalable
+ apps](quickstart-build-scalable-apps-overview.md).
cosmos-db Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-create-users.md
+
+ Title: Create users - Azure Cosmos DB for PostgreSQL
+description: See how you can create new user accounts to interact with an Azure Cosmos DB for PostgreSQL cluster.
++++++ Last updated : 09/21/2022++
+# Create users in Azure Cosmos DB for PostgreSQL
++
+The PostgreSQL engine uses
+[roles](https://www.postgresql.org/docs/current/sql-createrole.html) to control
+access to database objects, and a newly created cluster
+comes with several roles pre-defined:
+
+* The [default PostgreSQL roles](https://www.postgresql.org/docs/current/default-roles.html)
+* `azure_pg_admin`
+* `postgres`
+* `citus`
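+
+You can list the roles defined in your cluster with a standard catalog query, for example:
+
+```sql
+-- Roles on the coordinator node, including the predefined ones above.
+SELECT rolname, rolcanlogin FROM pg_roles ORDER BY rolname;
+```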
+
+Since Azure Cosmos DB for PostgreSQL is a managed PaaS service, only Microsoft can sign in with the
+`postgres` superuser role. For limited administrative access, Azure Cosmos DB for PostgreSQL
+provides the `citus` role.
+
+## The Citus role
+
+Permissions for the `citus` role:
+
+* Read all configuration variables, even variables normally visible only to
+ superusers.
+* Read all pg\_stat\_\* views and use various statistics-related
+ extensions--even views or extensions normally visible only to superusers.
+* Execute monitoring functions that may take ACCESS SHARE locks on tables,
+ potentially for a long time.
+* [Create PostgreSQL extensions](reference-extensions.md), because
+ the role is a member of `azure_pg_admin`.
+
+Notably, the `citus` role has some restrictions:
+
+* Can't create roles
+* Can't create databases
+
+## How to create user roles
+
+As mentioned, the `citus` admin account lacks permission to create user roles. To add a user role, use the Azure portal interface.
+
+1. On your cluster page, select the **Roles** menu item, and on the **Roles** page, select **Add**.
+
+ :::image type="content" source="media/howto-create-users/1-role-page.png" alt-text="Screenshot that shows the Roles page.":::
+
+2. Enter the role name and password. Select **Save**.
+
+ :::image type="content" source="media/howto-create-users/2-add-user-fields.png" alt-text="Screenshot that shows the Add role page.":::
+
+The user will be created on the coordinator node of the cluster,
+and propagated to all the worker nodes. Roles created through the Azure
+portal have the `LOGIN` attribute, which means they're true users who
+can sign in to the database.
+
+## How to modify privileges for user roles
+
+New user roles are commonly used to provide database access with restricted
+privileges. To modify user privileges, use standard PostgreSQL commands, using
+a tool such as PgAdmin or psql. For more information, see [Connect to a cluster](quickstart-connect-psql.md).
+
+For example, to allow `db_user` to read `mytable`, grant the permission:
+
+```sql
+GRANT SELECT ON mytable TO db_user;
+```
+
+Azure Cosmos DB for PostgreSQL propagates single-table GRANT statements through the entire
+cluster, applying them on all worker nodes. It also propagates GRANTs that are
+system-wide (for example, for all tables in a schema):
+
+```sql
+-- applies to the coordinator node and propagates to workers
+GRANT SELECT ON ALL TABLES IN SCHEMA public TO db_user;
+```
+
+## How to delete a user role or change their password
+
+To update a user, visit the **Roles** page for your cluster,
+and select the ellipsis **...** next to the user. The ellipsis opens a menu
+where you can delete the user or reset their password.
+
+ :::image type="content" source="media/howto-create-users/edit-role.png" alt-text="Edit a role":::
+
+The `citus` role is privileged and can't be deleted.
+
+## Next steps
+
+Open the firewall for the IP addresses of the new users' machines to enable
+them to connect: [Create and manage firewall rules using
+the Azure portal](howto-manage-firewall-using-portal.md).
+
+For more information about database user management, see PostgreSQL
+product documentation:
+
+* [Database Roles and Privileges](https://www.postgresql.org/docs/current/static/user-manag.html)
+* [GRANT Syntax](https://www.postgresql.org/docs/current/static/sql-grant.html)
+* [Privileges](https://www.postgresql.org/docs/current/static/ddl-priv.html)
cosmos-db Howto High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-high-availability.md
+
+ Title: Configure high availability - Azure Cosmos DB for PostgreSQL
+description: How to enable or disable high availability
++++++ Last updated : 07/27/2020++
+# Configure high availability
++
+Azure Cosmos DB for PostgreSQL provides high availability
+(HA) to avoid database downtime. With HA enabled, every node in a cluster
+will get a standby. If the original node becomes unhealthy, its standby will be
+promoted to replace it.
+
+> [!IMPORTANT]
+> Because HA doubles the number of servers in the cluster, it will also double
+> the cost.
+
+Enabling HA is possible during cluster creation, or afterward in the
+**Compute + storage** tab for your cluster in the Azure portal. The user
+interface looks similar in either case. Drag the slider for **High
+availability** from NO to YES:
++
+Select **Save** to apply your selection. Enabling HA can take some
+time as the cluster provisions standbys and streams data to them.
+
+The **Overview** tab for the cluster will list all nodes and their
+standbys, along with a **High availability** column indicating whether HA is
+successfully enabled for each node.
++
+## Next steps
+
+Learn more about [high availability](concepts-high-availability.md).
cosmos-db Howto Ingest Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-ingest-azure-data-factory.md
+
+ Title: Azure Data Factory
+description: See a step-by-step guide for using Azure Data Factory for ingestion on Azure Cosmos DB for PostgreSQL.
++++++ Last updated : 09/21/2022++
+# How to ingest data by using Azure Data Factory
++
+[Azure Data Factory](../../data-factory/introduction.md) is a cloud-based
+ETL and data integration service. It allows you to create data-driven workflows
+to move and transform data at scale.
+
+Using Data Factory, you can create and schedule data-driven workflows
+(called pipelines) that ingest data from disparate data stores. Pipelines can
+run on-premises, in Azure, or on other cloud providers for analytics and
+reporting.
+
+Data Factory has a data sink for Azure Cosmos DB for PostgreSQL. The data sink allows you to bring
+your data (relational, NoSQL, data lake files) into Azure Cosmos DB for PostgreSQL tables
+for storage, processing, and reporting.
++
+## Data Factory for real-time ingestion
+
+Here are key reasons to choose Azure Data Factory for ingesting data into
+Azure Cosmos DB for PostgreSQL:
+
+* **Easy-to-use** - Offers a code-free visual environment for orchestrating and automating data movement.
+* **Powerful** - Uses the full capacity of underlying network bandwidth, up to 5 GiB/s throughput.
+* **Built-in connectors** - Integrates all your data sources, with more than 90 built-in connectors.
+* **Cost effective** - Supports a pay-as-you-go, fully managed serverless cloud service that scales on demand.
+
+## Steps to use Data Factory
+
+In this article, you create a data pipeline by using the Data Factory
+user interface (UI). The pipeline in this data factory copies data from Azure
+Blob storage to a database. For a list of data stores
+supported as sources and sinks, see the [supported data
+stores](../../data-factory/copy-activity-overview.md#supported-data-stores-and-formats)
+table.
+
+In Data Factory, you can use the **Copy** activity to copy data among
+data stores located on-premises and in the cloud to Azure Cosmos DB for PostgreSQL. If you're
+new to Data Factory, here's a quick guide on how to get started:
+
+1. Once Data Factory is provisioned, go to your data factory. You see the Data
+ Factory home page as shown in the following image:
+
+ :::image type="content" source="media/howto-ingestion/azure-data-factory-home.png" alt-text="Screenshot showing the landing page of Azure Data Factory.":::
+
+2. On the home page, select **Orchestrate**.
+
+ :::image type="content" source="media/howto-ingestion/azure-data-factory-orchestrate.png" alt-text="Screenshot showing the Orchestrate page of Azure Data Factory.":::
+
+3. Under **Properties**, enter a name for the pipeline.
+
+4. In the **Activities** toolbox, expand the **Move & transform** category,
+ and drag and drop the **Copy data** activity to the pipeline designer
+ surface. At the bottom of the designer pane, on the **General** tab, enter a name for the copy activity.
+
+ :::image type="content" source="media/howto-ingestion/azure-data-factory-pipeline-copy.png" alt-text="Screenshot showing a pipeline in Azure Data Factory.":::
+
+5. Configure **Source**.
+
+ 1. On the **Activities** page, select the **Source** tab. Select **New** to create a source dataset.
+ 2. In the **New Dataset** dialog box, select **Azure Blob Storage**, and then select **Continue**.
+ 3. Choose the format type of your data, and then select **Continue**.
+ 4. On the **Set properties** page, under **Linked service**, select **New**.
+ 5. On the **New linked service** page, enter a name for the linked service, and select your storage account from the **Storage account name** list.
+
+ :::image type="content" source="media/howto-ingestion/azure-data-factory-configure-source.png" alt-text="Screenshot that shows configuring Source in Azure Data Factory.":::
+
+ 6. Under **Test connection**, select **To file path**, enter the container and directory to connect to, and then select **Test connection**.
+ 7. Select **Create** to save the configuration.
+ 8. On the **Set properties** screen, select **OK**.
+
+6. Configure **Sink**.
+
+ 1. On the **Activities** page, select the **Sink** tab. Select **New** to create a sink dataset.
+ 2. In the **New Dataset** dialog box, select **Azure Cosmos DB for PostgreSQL**, and then select **Continue**.
+ 3. On the **Set properties** page, under **Linked service**, select **New**.
+ 4. On the **New linked service** page, enter a name for the linked service, and select your cluster from the **Server name** list. Add connection details and test the connection.
+
+ > [!NOTE]
+ >
+ > If your cluster isn't present in the drop down, use the **Enter manually** option to add server details.
+
+ :::image type="content" source="media/howto-ingestion/azure-data-factory-configure-sink.png" alt-text="Screenshot that shows configuring Sink in Azure Data Factory.":::
+
+ 5. Select **Create** to save the configuration.
+ 6. On the **Set properties** screen, select **OK**.
+ 5. In the **Sink** tab on the **Activities** page, select the table name where you want to ingest the data.
+ 6. Under **Write method**, select **Copy command**.
+
+ :::image type="content" source="media/howto-ingestion/azure-data-factory-copy-command.png" alt-text="Screenshot that shows selecting the table and Copy command.":::
+
+7. From the toolbar above the canvas, select **Validate** to validate pipeline
+ settings. Fix any errors, revalidate, and ensure that the pipeline has
+ been successfully validated.
+
+8. Select **Debug** from the toolbar to execute the pipeline.
+
+ :::image type="content" source="media/howto-ingestion/azure-data-factory-execute.png" alt-text="Screenshot that shows Debug and Execute in Azure Data Factory.":::
+
+9. Once the pipeline can run successfully, in the top toolbar, select **Publish all**. This action publishes entities (datasets and pipelines) you created
+ to Data Factory.
+
+## Call a stored procedure in Data Factory
+
+In some specific scenarios, you might want to call a stored procedure/function
+to push aggregated data from the staging table to the summary table. Data Factory doesn't offer a stored procedure activity for Azure Cosmos DB for PostgreSQL, but as
+a workaround you can use the Lookup activity with a query to call a stored procedure
+as shown below:
++
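+
+For example, the Lookup activity's query can be a single function call. The function name below is hypothetical; it stands in for a function you've created in your database.
+
+```sql
+-- Hypothetical function that aggregates staging rows into a summary table and
+-- returns the number of rows it inserted, so the Lookup activity gets a result row.
+SELECT public.refresh_daily_summary() AS rows_inserted;
+```
+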
+## Next steps
+
+Learn how to create a [real-time
+dashboard](tutorial-design-database-realtime.md) with Azure Cosmos DB for PostgreSQL.
cosmos-db Howto Ingest Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-ingest-azure-stream-analytics.md
+
+ Title: Real-time data ingestion with Azure Stream Analytics - Azure Cosmos DB for PostgreSQL
+description: See how to transform and ingest streaming data from Azure Cosmos DB for PostgreSQL by using Azure Stream Analytics.
++++++ Last updated : 09/21/2022++
+# How to ingest data by using Azure Stream Analytics
++
+[Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/#features)
+is a real-time analytics and event-processing engine that is designed to
+process high volumes of fast streaming data from devices, sensors, and web
+sites. It's also available on the Azure IoT Edge runtime, enabling data
+processing on IoT devices.
++
+Azure Cosmos DB for PostgreSQL shines at real-time workloads such as
+[IoT](quickstart-build-scalable-apps-model-high-throughput.md). For these workloads,
+Stream Analytics can act as a no-code, performant and scalable
+alternative to pre-process and stream data from Azure Event Hubs, Azure IoT Hub, and Azure
+Blob Storage into Azure Cosmos DB for PostgreSQL.
+
+## Steps to set up Stream Analytics
+
+> [!NOTE]
+> This article uses [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md)
+> as an example datasource, but the technique is applicable to any other source
+> supported by Stream Analytics. Also, the following demonstration data comes from the
+> [Azure IoT Device Telemetry Simulator](https://github.com/Azure-Samples/Iot-Telemetry-Simulator). This
+> article doesn't cover setting up the simulator.
+
+1. In the Azure portal, expand the portal menu at upper left and select **Create a resource**.
+1. Select **Analytics** > **Stream Analytics job** from the results list.
+1. Fill out the **New Stream Analytics job** page with the following information:
+ * **Subscription** - Select the Azure subscription that you want to use for this job.
+ * **Resource group** - Select the same resource group as your IoT hub.
+ * **Name** - Enter a name to identify your Stream Analytics job.
+ * **Region** - Select the Azure region to host your Stream Analytics job. Use the geographic location that's closest to your users for better performance and to reduce the data transfer cost.
+ * **Hosting environment** - Select **Cloud** to deploy to the Azure cloud, or **Edge** to deploy to an IoT Edge device.
+ * **Streaming units** - Select the number of streaming units for the computing resources you need to execute the job.
+1. Select **Review + create**, and then select **Create**. You should see a **Deployment in progress** notification at upper right.
+
+ :::image type="content" source="media/howto-ingestion/azure-stream-analytics-02-create.png" alt-text="Screenshot that shows the create Stream Analytics job form." border="true":::
+
+1. Configure job input.
+
+ :::image type="content" source="media/howto-ingestion/azure-stream-analytics-03-input.png" alt-text="Screenshot that shows configuring job input in Stream Analytics." border="true":::
+
+ 1. Once the resource deployment is complete, navigate to your Stream Analytics
+ job. Select **Inputs** > **Add stream input** > **IoT Hub**.
+
+ 1. Fill out the **IoT Hub** page with the following values:
+ * **Input alias** - Enter a name to identify the job input.
+ * **Subscription** - Select the Azure subscription that has your IoT Hub account.
+    * **IoT Hub** - Select the name of your IoT hub.
+ 1. Select **Save**.
+ 1. Once the input stream is added, you can also verify or download the dataset flowing in.
+ The following code shows the data for an example event:
+
+ ```json
+ {
+ "deviceId": "sim000001",
+ "time": "2022-04-25T13:49:11.6892185Z",
+ "counter": 1,
+ "EventProcessedUtcTime": "2022-04-25T13:49:41.4791613Z",
+ "PartitionId": 3,
+ "EventEnqueuedUtcTime": "2022-04-25T13:49:12.1820000Z",
+ "IoTHub": {
+ "MessageId": null,
+ "CorrelationId": "990407b8-4332-4cb6-a8f4-d47f304397d8",
+ "ConnectionDeviceId": "sim000001",
+ "ConnectionDeviceGenerationId": "637842405470327268",
+ "EnqueuedTime": "2022-04-25T13:49:11.7060000Z"
+ }
+ }
+ ```
+
+1. Configure job output.
+
+ 1. On the Stream Analytics job page, select **Outputs** > **Add** > **PostgreSQL database (preview)**.
+
+ :::image type="content" source="media/howto-ingestion/azure-stream-analytics-output.png" alt-text="Screenshot that shows selecting PostgreSQL database output.":::
+
+ 1. Fill out the **Azure PostgreSQL** page with the following values:
+ * **Output alias** - Enter a name to identify the job's output.
+ * Select **Provide PostgreSQL database settings manually** and enter the **Server fully qualified domain name**, **Database**, **Table**, **Username**, and **Password**. From the example dataset, use the table *device_data*.
+ 1. Select **Save**.
+
+ :::image type="content" source="media/howto-ingestion/azure-stream-analytics-04-output.png" alt-text="Configure job output in Azure Stream Analytics." border="true":::
+
+1. Define the transformation query.
+
+ :::image type="content" source="media/howto-ingestion/azure-stream-analytics-05-transformation-query.png" alt-text="Transformation query in Azure Stream Analytics." border="true":::
+
+ 1. On the Stream Analytics job page, select **Query** from the left menu.
+ 1. For this tutorial, you ingest only the alternate events from IoT Hub into Azure Cosmos DB for PostgreSQL, to reduce the overall data size. Copy and paste the following query into the query pane:
+
+ ```sql
+ select
+ counter,
+ iothub.connectiondeviceid,
+ iothub.correlationid,
+ iothub.connectiondevicegenerationid,
+ iothub.enqueuedtime
+ from
+ [src-iot-hub]
+ where counter%2 = 0;
+ ```
+
+ 1. Select **Save query**.
+
+ > [!NOTE]
+ > You use the query to not only sample the data, but also to extract the
+ > desired attributes from the data stream. The custom query option with
+ > Stream Analytics is helpful in pre-processing/transforming the data
+ > before it gets ingested into the database.
+
+1. Start the Stream Analytics job and verify output.
+
+ 1. Return to the job overview page and select **Start**.
+ 1. On the **Start job** page, select **Now** for the **Job output start time**, and then select **Start**.
+ 1. The job takes some time to start the first time, but once triggered it continues to run as the data arrives. After a few minutes, you can query the cluster to verify that the data loaded.
+
+ ```output
+ citus=> SELECT * FROM public.device_data LIMIT 10;
+
+ counter | connectiondeviceid | correlationid | connectiondevicegenerationid | enqueuedtime
+    ---------+--------------------+--------------------------------------+--------------------+------------------------------
+ 2 | sim000001 | 7745c600-5663-44bc-a70b-3e249f6fc302 | 637842405470327268 | 2022-05-25T18:24:03.4600000Z
+ 4 | sim000001 | 389abfde-5bec-445c-a387-18c0ed7af227 | 637842405470327268 | 2022-05-25T18:24:05.4600000Z
+ 6 | sim000001 | 3932ce3a-4616-470d-967f-903c45f71d0f | 637842405470327268 | 2022-05-25T18:24:07.4600000Z
+ 8 | sim000001 | 4bd8ecb0-7ee1-4238-b034-4e03cb50f11a | 637842405470327268 | 2022-05-25T18:24:09.4600000Z
+ 10 | sim000001 | 26cebc68-934e-4e26-80db-e07ade3775c0 | 637842405470327268 | 2022-05-25T18:24:11.4600000Z
+ 12 | sim000001 | 067af85c-a01c-4da0-b208-e4d31a24a9db | 637842405470327268 | 2022-05-25T18:24:13.4600000Z
+ 14 | sim000001 | 740e5002-4bb9-4547-8796-9d130f73532d | 637842405470327268 | 2022-05-25T18:24:15.4600000Z
+ 16 | sim000001 | 343ed04f-0cc0-4189-b04a-68e300637f0e | 637842405470327268 | 2022-05-25T18:24:17.4610000Z
+ 18 | sim000001 | 54157941-2405-407d-9da6-f142fc8825bb | 637842405470327268 | 2022-05-25T18:24:19.4610000Z
+ 20 | sim000001 | 219488e5-c48a-4f04-93f6-12c11ed00a30 | 637842405470327268 | 2022-05-25T18:24:21.4610000Z
+ (10 rows)
+ ```
+
+> [!NOTE]
+> The **Test Connection** feature currently isn't supported for Azure Cosmos DB for PostgreSQL and might throw an error, even when the connection works fine.
+
+## Next steps
+
+Learn how to create a [real-time dashboard](tutorial-design-database-realtime.md) with Azure Cosmos DB for PostgreSQL.
cosmos-db Howto Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-logging.md
+
+ Title: Logs - Azure Cosmos DB for PostgreSQL
+description: How to access database logs for Azure Cosmos DB for PostgreSQL
++++++ Last updated : 9/21/2022++
+# Logs in Azure Cosmos DB for PostgreSQL
++
+PostgreSQL database server logs are available for every node of a
+cluster. You can ship logs to a storage server, or to an analytics
+service. The logs can be used to identify, troubleshoot, and repair
+configuration errors and suboptimal performance.
+
+## Capture logs
+
+To access PostgreSQL logs for a coordinator or worker node,
+you have to enable the PostgreSQL Server Logs diagnostic setting. On your cluster's page in the Azure portal, select **Diagnostic settings** from the left menu, and then select **Add diagnostic setting**.
++
+Enter a name for the new diagnostic setting, select the **PostgreSQL Server Logs** box,
+and check the **Send to Log Analytics workspace** box. Then select **Save**.
++
+## View logs
+
+To view and filter the logs, you use Kusto queries. On your cluster's page in the Azure portal, select **Logs** from the left menu. Close the opening splash screen and the query selection screen.
++
+Paste the following query into the query input box, and then select **Run**.
+
+```kusto
+AzureDiagnostics
+| project TimeGenerated, Message, errorLevel_s, LogicalServerName_s
+```
++
+The preceding query lists log messages from all nodes, along with their severity
+and timestamp. You can add `where` clauses to filter the results. For instance,
+to see errors from the coordinator node only, filter the error level and server
+name like in the following query. Replace the server name with the name of your server.
+
+```kusto
+AzureDiagnostics
+| project TimeGenerated, Message, errorLevel_s, LogicalServerName_s
+| where LogicalServerName_s == 'example-cluster-c'
+| where errorLevel_s == 'ERROR'
+```
+
+The coordinator node name has the suffix `-c` and worker nodes are named
+with a suffix of `-w0`, `-w1`, and so on.
+
+The Azure logs can be filtered in different ways. Here's how to find logs
+within the past day whose messages match a regular expression.
+
+```kusto
+AzureDiagnostics
+| where TimeGenerated > ago(24h)
+| order by TimeGenerated desc
+| where Message matches regex ".*error.*"
+```
+
+## Next steps
+
+- [Get started with log analytics queries](../../azure-monitor/logs/log-analytics-tutorial.md)
+- Learn about [Azure Event Hubs](../../event-hubs/event-hubs-about.md)
cosmos-db Howto Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-maintenance.md
+
+ Title: Azure Cosmos DB for PostgreSQL - Scheduled maintenance - Azure portal
+description: Learn how to configure scheduled maintenance settings for an Azure Cosmos DB for PostgreSQL from the Azure portal.
++++++ Last updated : 04/07/2021++
+# Manage scheduled maintenance settings for Azure Cosmos DB for PostgreSQL
++
+You can specify maintenance options for each cluster in
+your Azure subscription. Options include the maintenance schedule and
+notification settings for upcoming and finished maintenance events.
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+- An [Azure Cosmos DB for PostgreSQL cluster](quickstart-create-portal.md)
+
+## Specify maintenance schedule options
+
+1. On the cluster page, under the **Settings** heading,
+ choose **Maintenance** to open scheduled maintenance options.
+2. The default (system-managed) schedule is a random day of the week, with a
+   30-minute window for the maintenance start between 11 PM and 7 AM in the
+   cluster's [Azure region time](https://go.microsoft.com/fwlink/?linkid=2143646). If you
+ want to customize this schedule, choose **Custom schedule**. You can then
+ select a preferred day of the week, and a 30-minute window for maintenance
+ start time.
+
+## Notifications about scheduled maintenance events
+
+You can use Azure Service Health to [view
+notifications](../../service-health/service-notifications.md) about upcoming
+and past scheduled maintenance on your cluster. You can
+also [set up](../../service-health/resource-health-alert-monitor-guide.md)
+alerts in Azure Service Health to get notifications about maintenance events.
+
+## Next steps
+
+* Learn about [scheduled maintenance in Azure Cosmos DB for PostgreSQL](concepts-maintenance.md)
+* Learn about [Azure Service Health](../../service-health/overview.md)
cosmos-db Howto Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-manage-firewall-using-portal.md
+
+ Title: Manage firewall rules - Azure Cosmos DB for PostgreSQL
+description: Create and manage firewall rules for Azure Cosmos DB for PostgreSQL using the Azure portal.
++++++ Last updated : 09/24/2022+
+# Manage public access for Azure Cosmos DB for PostgreSQL
++
+Server-level firewall rules can be used to manage [public
+access](concepts-firewall-rules.md) to a
+coordinator node from a specified IP address (or range of IP addresses) in the
+public internet.
+
+## Prerequisites
+To step through this how-to guide, you need:
+- An [Azure Cosmos DB for PostgreSQL cluster](quickstart-create-portal.md).
+
+## Create a server-level firewall rule in the Azure portal
+
+1. On the PostgreSQL cluster page, under **Settings**, select **Networking**.
+
+ :::image type="content" source="media/howto-manage-firewall-using-portal/1-connection-security.png" alt-text="Screenshot of selecting Networking.":::
+
+1. On the **Networking** page, select **Allow public access from Azure services and resources within Azure to this cluster**.
+
+1. If desired, select **Enable access to the worker nodes**. With this option, the firewall rules allow access to all worker nodes as well as the coordinator node.
+
+1. Select **Add current client IP address** to create a firewall rule with the public IP address of your computer, as perceived by the Azure system.
+
+ Verify your IP address before saving this configuration. In some situations, the IP address observed by Azure portal differs from the IP address used when accessing the internet and Azure servers. Thus, you may need to change the start IP and end IP to make the rule function as expected. Use a search engine or other online tool to check your own IP address. For example, search for *what is my IP*.
+
+ :::image type="content" source="media/howto-manage-firewall-using-portal/3-what-is-my-ip.png" alt-text="Screenshot of Bing search for What is my IP.":::
+
+ You can also select **Add 0.0.0.0 - 255.255.255.255** to allow not just your IP, but the whole internet to access the coordinator node's port 5432 (and 6432 for connection pooling). In this situation, clients still must log in with the correct username and password to use the cluster. Nevertheless, it's best to allow worldwide access for only short periods of time and for only non-production databases.
+
+1. To add firewall rules, type in the **Firewall rule name**, **Start IP address**, and **End IP address**. Opening the firewall enables administrators, users, and applications to access the coordinator node on ports 5432 and 6432. You can specify a single IP address or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the **Start IP address** and **End IP address** fields.
+
+1. Select **Save** on the toolbar to save the settings and server-level firewall rules. Wait for the confirmation that the update was successful.
+
+> [!NOTE]
+> These settings are also accessible during the creation of an Azure Cosmos DB for PostgreSQL cluster. On the **Networking** tab, for **Connectivity method**, select **Public access (allowed IP address)**.
+>
+> :::image type="content" source="media/howto-manage-firewall-using-portal/0-create-public-access.png" alt-text="Screenshot of selecting Public access on the Networking tab.":::
+
+## Connect from Azure
+
+There's an easy way to grant cluster access to applications hosted on Azure, such as an Azure Web Apps application or an application running in an Azure VM. On the portal page for your cluster, under **Networking**, select the checkbox **Allow Azure services and resources to access this cluster**, and then select **Save**.
+
+> [!IMPORTANT]
+> This option configures the firewall to allow all connections from Azure, including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
+
+## Manage existing server-level firewall rules through the Azure portal
+Repeat the steps to manage the firewall rules.
+* To add the current computer, select **Add current client IP address**. Select **Save** to save the changes.
+* To add more IP addresses, type in the **Firewall rule name**, **Start IP address**, and **End IP address**. Select **Save** to save the changes.
+* To modify an existing rule, select any of the fields in the rule and modify. Select **Save** to save the changes.
+* To delete an existing rule, select the ellipsis **...** and then select **Delete** to remove the rule. Select **Save** to save the changes.
+
+## Next steps
+
+For more about firewall rules, including how to troubleshoot connection problems, see [Public access in Azure Cosmos DB for PostgreSQL](concepts-firewall-rules.md).
cosmos-db Howto Modify Distributed Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-modify-distributed-tables.md
+
+ Title: Modify distributed tables - Azure Cosmos DB for PostgreSQL
+description: SQL commands to create and modify distributed tables
+++++ Last updated : 08/02/2022++
+# Distribute and modify tables
++
+## Distributing tables
+
+To create a distributed table, first define the table schema. You can define a
+table by using the [CREATE
+TABLE](http://www.postgresql.org/docs/current/static/sql-createtable.html)
+statement, the same way you would for a regular PostgreSQL table.
+
+```sql
+CREATE TABLE github_events
+(
+ event_id bigint,
+ event_type text,
+ event_public boolean,
+ repo_id bigint,
+ payload jsonb,
+ repo jsonb,
+ actor jsonb,
+ org jsonb,
+ created_at timestamp
+);
+```
+
+Next, you can use the create\_distributed\_table() function to specify
+the table distribution column and create the worker shards.
+
+```sql
+SELECT create_distributed_table('github_events', 'repo_id');
+```
+
+The function call informs Azure Cosmos DB for PostgreSQL that the github\_events table
+should be distributed on the repo\_id column (by hashing the column value).
+
+It creates a total of 32 shards by default, where each shard owns a portion of
+a hash space and gets replicated based on the default
+citus.shard\_replication\_factor configuration value. The shard replicas
+created on the worker have the same table schema, index, and constraint
+definitions as the table on the coordinator. Once the replicas are created, the
+function saves all distributed metadata on the coordinator.
+
+Each created shard is assigned a unique shard ID and all its replicas have the
+same shard ID. Shards are represented on the worker node as regular PostgreSQL
+tables named 'tablename_shardid' where tablename is the name of the
+distributed table, and shard ID is the unique ID assigned. You can connect to
+the worker postgres instances to view or run commands on individual shards.
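+
+For example, one way to see which shards back a distributed table is to query the `pg_dist_shard` metadata table from the coordinator node. This is a minimal sketch that uses the `github_events` table created earlier:
+
+```sql
+-- list the shards (and their hash ranges) that back github_events
+SELECT shardid, shardminvalue, shardmaxvalue
+FROM pg_dist_shard
+WHERE logicalrelid = 'github_events'::regclass;
+```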
+
+You're now ready to insert data into the distributed table and run queries on
+it. You can also learn more about the UDF used in this section in the [table
+and shard DDL](reference-functions.md#table-and-shard-ddl)
+reference.
+
+### Reference Tables
+
+The above method distributes tables into multiple horizontal shards. Another
+possibility is distributing tables into a single shard and replicating the
+shard to every worker node. Tables distributed this way are called *reference
+tables.* They are used to store data that needs to be frequently accessed by
+multiple nodes in a cluster.
+
+Common candidates for reference tables include:
+
+- Smaller tables that need to join with larger distributed tables.
+- Tables in multi-tenant apps that lack a tenant ID column or which aren't
+ associated with a tenant. (Or, during migration, even for some tables
+ associated with a tenant.)
+- Tables that need unique constraints across multiple columns and are
+ small enough.
+
+For instance, suppose a multi-tenant eCommerce site needs to calculate sales
+tax for transactions in any of its stores. Tax information isn't specific to
+any tenant. It makes sense to put it in a shared table. A US-centric reference
+table might look like this:
+
+```postgresql
+-- a reference table
+
+CREATE TABLE states (
+ code char(2) PRIMARY KEY,
+ full_name text NOT NULL,
+ general_sales_tax numeric(4,3)
+);
+
+-- distribute it to all workers
+
+SELECT create_reference_table('states');
+```
+
+Now queries such as one calculating tax for a shopping cart can join on the
+`states` table with no network overhead, and can add a foreign key to the state
+code for better validation.
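+
+For example, here's a minimal sketch of such a join, assuming a hypothetical distributed `orders` table that stores a `state_code` and a pre-tax `price` for each order:
+
+```sql
+-- join a distributed table with the states reference table;
+-- the join runs locally on each worker, with no network overhead
+SELECT o.order_id,
+       o.price * (1 + s.general_sales_tax) AS price_with_tax
+FROM orders o
+JOIN states s ON s.code = o.state_code;
+```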
+
+In addition to distributing a table as a single replicated shard, the
+`create_reference_table` UDF marks it as a reference table in the Azure Cosmos
+DB for PostgreSQL metadata tables. Azure Cosmos DB for PostgreSQL automatically performs
+two-phase commits
+([2PC](https://en.wikipedia.org/wiki/Two-phase_commit_protocol)) for
+modifications to tables marked this way, which provides strong consistency
+guarantees.
+
+For another example of using reference tables, see the [multi-tenant database
+tutorial](tutorial-design-database-multi-tenant.md).
+
+### Distributing Coordinator Data
+
+If an existing PostgreSQL database is converted into the coordinator node for a
+cluster, the data in its tables can be distributed
+efficiently and with minimal interruption to an application.
+
+The `create_distributed_table` function described earlier works on both empty
+and non-empty tables, and for the latter it automatically distributes table
+rows throughout the cluster. You'll know whether it copies data by the presence of
+the message "NOTICE: Copying data from local table..." For example:
+
+```postgresql
+CREATE TABLE series AS SELECT i FROM generate_series(1,1000000) i;
+SELECT create_distributed_table('series', 'i');
+NOTICE: Copying data from local table...
+ create_distributed_table
+--------------------------
+
+(1 row)
+```
+
+Writes on the table are blocked while the data is migrated, and pending writes
+are handled as distributed queries once the function commits. (If the function
+fails then the queries become local again.) Reads can continue as normal and
+will become distributed queries once the function commits.
+
+When distributing tables A and B, where A has a foreign key to B, distribute
+the key destination table B first. Doing it in the wrong order will cause an
+error:
+
+```
+ERROR: cannot create foreign key constraint
+DETAIL: Referenced table must be a distributed table or a reference table.
+```
+
+If it's not possible to distribute in the correct order, then drop the foreign
+keys, distribute the tables, and recreate the foreign keys.
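+
+Here's a minimal sketch of that workaround, assuming a hypothetical `ads` table whose `ads_account_fk` constraint references an `accounts` table:
+
+```sql
+-- drop the foreign key so the tables can be distributed in any order
+ALTER TABLE ads DROP CONSTRAINT ads_account_fk;
+
+-- distribute both tables
+SELECT create_distributed_table('accounts', 'id');
+SELECT create_distributed_table('ads', 'account_id');
+
+-- recreate the foreign key now that both tables are distributed
+ALTER TABLE ads ADD CONSTRAINT ads_account_fk
+  FOREIGN KEY (account_id) REFERENCES accounts (id);
+```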
+
+When migrating data from an external database, such as from Amazon RDS to
+Azure Cosmos DB for PostgreSQL, first create the Azure Cosmos DB for PostgreSQL distributed
+tables via `create_distributed_table`, then copy the data into the table.
+Copying into distributed tables avoids running out of space on the coordinator
+node.
+
+## Colocating tables
+
+Colocation means keeping related information on the same machines. It
+enables efficient queries, while taking advantage of the horizontal scalability
+for the whole dataset. For more information, see
+[colocation](concepts-colocation.md).
+
+Tables are colocated in groups. To manually control a table's colocation group
+assignment, use the optional `colocate_with` parameter of
+`create_distributed_table`. If you don't care about a table's colocation, then
+omit this parameter. It defaults to the value `'default'`, which groups the
+table with any other default colocation table having the same distribution
+column type, shard count, and replication factor. If you want to break or
+update this implicit colocation, you can use
+`update_distributed_table_colocation()`.
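+
+For example, here's a minimal sketch of breaking and then re-establishing colocation for a hypothetical table `B`, assuming `A` and `B` share the same distribution column type:
+
+```sql
+-- move B out of its current colocation group
+SELECT update_distributed_table_colocation('B', colocate_with => 'none');
+
+-- later, colocate B with A again
+SELECT update_distributed_table_colocation('B', colocate_with => 'A');
+```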
+
+```postgresql
+-- these tables are implicitly co-located by using the same
+-- distribution column type and shard count with the default
+-- co-location group
+
+SELECT create_distributed_table('A', 'some_int_col');
+SELECT create_distributed_table('B', 'other_int_col');
+```
+
+When a new table isn't related to others in its would-be implicit
+colocation group, specify `colocate_with => 'none'`.
+
+```postgresql
+-- not co-located with other tables
+
+SELECT create_distributed_table('A', 'foo', colocate_with => 'none');
+```
+
+Splitting unrelated tables into their own colocation groups will improve [shard
+rebalancing](howto-scale-rebalance.md) performance, because
+shards in the same group have to be moved together.
+
+When tables are indeed related (for instance when they will be joined), it can
+make sense to explicitly colocate them. The gains of appropriate colocation are
+more important than any rebalancing overhead.
+
+To explicitly colocate multiple tables, distribute one and then put the others
+into its colocation group. For example:
+
+```postgresql
+-- distribute stores
+SELECT create_distributed_table('stores', 'store_id');
+
+-- add to the same group as stores
+SELECT create_distributed_table('orders', 'store_id', colocate_with => 'stores');
+SELECT create_distributed_table('products', 'store_id', colocate_with => 'stores');
+```
+
+Information about colocation groups is stored in the
+[pg_dist_colocation](reference-metadata.md#colocation-group-table)
+table, while
+[pg_dist_partition](reference-metadata.md#partition-table) reveals
+which tables are assigned to which groups.
+
+## Dropping tables
+
+You can use the standard PostgreSQL DROP TABLE command to remove your
+distributed tables. As with regular tables, DROP TABLE removes any indexes,
+rules, triggers, and constraints that exist for the target table. In addition,
+it also drops the shards on the worker nodes and cleans up their metadata.
+
+```sql
+DROP TABLE github_events;
+```
+
+## Modifying tables
+
+Azure Cosmos DB for PostgreSQL automatically propagates many kinds of DDL statements.
+Modifying a distributed table on the coordinator node will update shards on the
+workers too. Other DDL statements require manual propagation, and certain
+others are prohibited, such as any that would modify a distribution column.
+Attempting to run DDL that is ineligible for automatic propagation will raise
+an error and leave tables on the coordinator node unchanged.
+
+Here is a reference of the categories of DDL statements that propagate.
+
+### Adding/Modifying Columns
+
+Azure Cosmos DB for PostgreSQL propagates most [ALTER
+TABLE](https://www.postgresql.org/docs/current/static/ddl-alter.html) commands
+automatically. Adding columns or changing their default values work as they
+would in a single-machine PostgreSQL database:
+
+```postgresql
+-- Adding a column
+
+ALTER TABLE products ADD COLUMN description text;
+
+-- Changing default value
+
+ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77;
+```
+
+Significant changes to an existing column like renaming it or changing its data
+type are fine too. However, the data type of the [distribution
+column](concepts-nodes.md#distribution-column) can't be altered.
+This column determines how table data distributes through the cluster,
+and modifying its data type would require moving the data.
+
+Attempting to do so causes an error:
+
+```postgres
+-- assuming store_id is the distribution column
+-- for products, and that it has type integer
+
+ALTER TABLE products
+ALTER COLUMN store_id TYPE text;
+
+/*
+ERROR: XX000: cannot execute ALTER TABLE command involving partition column
+LOCATION: ErrorIfUnsupportedAlterTableStmt, multi_utility.c:2150
+*/
+```
+
+### Adding/Removing Constraints
+
+Azure Cosmos DB for PostgreSQL lets you keep the safety of a
+relational database, including database constraints (see the PostgreSQL
+[docs](https://www.postgresql.org/docs/current/static/ddl-constraints.html)).
+Due to the nature of distributed systems, Azure Cosmos DB for PostgreSQL will not
+cross-reference uniqueness constraints or referential integrity between worker
+nodes.
+
+To set up a foreign key between colocated distributed tables, always include
+the distribution column in the key. Including the distribution column may
+involve making the key compound.
+
+Foreign keys may be created in these situations:
+
+- between two local (non-distributed) tables,
+- between two reference tables,
+- between two [colocated](concepts-colocation.md) distributed
+ tables when the key includes the distribution column, or
+- as a distributed table referencing a [reference
+ table](concepts-nodes.md#type-2-reference-tables)
+
+Foreign keys from reference tables to distributed tables are not
+supported.
+
+> [!NOTE]
+>
+> Primary keys and uniqueness constraints must include the distribution
+> column. Adding them to a non-distribution column generates an error.
+
+This example shows how to create primary and foreign keys on distributed
+tables:
+
+```postgresql
+--
+-- Adding a primary key
+-- --
+
+-- We'll distribute these tables on the account_id. The ads and clicks
+-- tables must use compound keys that include account_id.
+
+ALTER TABLE accounts ADD PRIMARY KEY (id);
+ALTER TABLE ads ADD PRIMARY KEY (account_id, id);
+ALTER TABLE clicks ADD PRIMARY KEY (account_id, id);
+
+-- Next distribute the tables
+
+SELECT create_distributed_table('accounts', 'id');
+SELECT create_distributed_table('ads', 'account_id');
+SELECT create_distributed_table('clicks', 'account_id');
+
+--
+-- Adding foreign keys
+-- -
+
+-- Note that this can happen before or after distribution, as long as
+-- there exists a uniqueness constraint on the target column(s) which
+-- can only be enforced before distribution.
+
+ALTER TABLE ads ADD CONSTRAINT ads_account_fk
+ FOREIGN KEY (account_id) REFERENCES accounts (id);
+ALTER TABLE clicks ADD CONSTRAINT clicks_ad_fk
+ FOREIGN KEY (account_id, ad_id) REFERENCES ads (account_id, id);
+```
+
+Similarly, include the distribution column in uniqueness constraints:
+
+```postgresql
+-- Suppose we want every ad to use a unique image. Notice we can
+-- enforce it only per account when we distribute by account id.
+
+ALTER TABLE ads ADD CONSTRAINT ads_unique_image
+ UNIQUE (account_id, image_url);
+```
+
+Not-null constraints can be applied to any column (distribution or not)
+because they require no lookups between workers.
+
+```postgresql
+ALTER TABLE ads ALTER COLUMN image_url SET NOT NULL;
+```
+
+### Using NOT VALID Constraints
+
+In some situations it can be useful to enforce constraints for new rows, while
+allowing existing non-conforming rows to remain unchanged. Azure Cosmos DB for PostgreSQL
+supports this feature for CHECK constraints and foreign keys, using
+PostgreSQL\'s \"NOT VALID\" constraint designation.
+
+For example, consider an application that stores user profiles in a
+[reference table](concepts-nodes.md#type-2-reference-tables).
+
+```postgres
+-- we're using the "text" column type here, but a real application
+-- might use "citext" which is available in a postgres contrib module
+
+CREATE TABLE users ( email text PRIMARY KEY );
+SELECT create_reference_table('users');
+```
+
+Over time, imagine that a few non-addresses get into the table.
+
+```postgres
+INSERT INTO users VALUES
+ ('foo@example.com'), ('hacker12@aol.com'), ('lol');
+```
+
+We'd like to validate the addresses, but PostgreSQL doesn't
+ordinarily allow adding a CHECK constraint that fails for existing
+rows. However, it *does* allow a constraint marked NOT VALID:
+
+```postgres
+ALTER TABLE users
+ADD CONSTRAINT syntactic_email
+CHECK (email ~
+ '^[a-zA-Z0-9.!#$%&''*+/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$'
+) NOT VALID;
+```
+
+New rows are now protected.
+
+```postgres
+INSERT INTO users VALUES ('fake');
+
+/*
+ERROR: new row for relation "users_102010" violates
+ check constraint "syntactic_email_102010"
+DETAIL: Failing row contains (fake).
+*/
+```
+
+Later, during non-peak hours, a database administrator can attempt to
+fix the bad rows and revalidate the constraint.
+
+```postgres
+-- later, attempt to validate all rows
+ALTER TABLE users
+VALIDATE CONSTRAINT syntactic_email;
+```
+
+The PostgreSQL documentation has more information about NOT VALID and
+VALIDATE CONSTRAINT in the [ALTER
+TABLE](https://www.postgresql.org/docs/current/sql-altertable.html)
+section.
+
+### Adding/Removing Indices
+
+Azure Cosmos DB for PostgreSQL supports adding and removing
+[indices](https://www.postgresql.org/docs/current/static/sql-createindex.html):
+
+```postgresql
+-- Adding an index
+
+CREATE INDEX clicked_at_idx ON clicks USING BRIN (clicked_at);
+
+-- Removing an index
+
+DROP INDEX clicked_at_idx;
+```
+
+Adding an index takes a write lock, which can be undesirable in a
+multi-tenant \"system-of-record.\" To minimize application downtime,
+create the index
+[concurrently](https://www.postgresql.org/docs/current/static/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY)
+instead. This method requires more total work than a standard index
+build and takes longer to complete. However, since it
+allows normal operations to continue while the index is built, this
+method is useful for adding new indexes in a production environment.
+
+```postgresql
+-- Adding an index without locking table writes
+
+CREATE INDEX CONCURRENTLY clicked_at_idx ON clicks USING BRIN (clicked_at);
+```
+
+### Types and Functions
+
+Creating custom SQL types and user-defined functions propagates to worker
+nodes. However, creating such database objects in a transaction with
+distributed operations involves tradeoffs.
+
+Azure Cosmos DB for PostgreSQL parallelizes operations such as `create_distributed_table()`
+across shards using multiple connections per worker. In contrast, when creating a
+database object, Azure Cosmos DB for PostgreSQL propagates it to worker nodes using a single connection
+per worker. Combining the two operations in a single transaction may cause
+issues, because the parallel connections will not be able to see the object
+that was created over a single connection but not yet committed.
+
+Consider a transaction block that creates a type, a table, loads data, and
+distributes the table:
+
+```postgresql
+BEGIN;
+
+-- type creation over a single connection:
+CREATE TYPE coordinates AS (x int, y int);
+CREATE TABLE positions (object_id text primary key, position coordinates);
+
+-- data loading thus goes over a single connection:
+SELECT create_distributed_table('positions', 'object_id');
+\COPY positions FROM 'positions.csv'
+
+COMMIT;
+```
+
+Prior to Citus 11.0, Citus would defer creating the type on the worker nodes,
+and commit it separately when creating the distributed table. This enabled the
+data copying in `create_distributed_table()` to happen in parallel. However, it
+also meant that the type was not always present on the Citus worker nodes, or,
+if the transaction rolled back, the type would remain on the worker nodes.
+
+With Citus 11.0, the default behavior changed to prioritize schema consistency
+between coordinator and worker nodes. The new behavior has a downside: if
+object propagation happens after a parallel command in the same transaction,
+then the transaction can no longer be completed, as highlighted by the ERROR in
+the code block below:
+
+```postgresql
+BEGIN;
+CREATE TABLE items (key text, value text);
+-- parallel data loading:
+SELECT create_distributed_table('items', 'key');
+\COPY items FROM 'items.csv'
+CREATE TYPE coordinates AS (x int, y int);
+
+ERROR: cannot run type command because there was a parallel operation on a distributed table in the transaction
+```
+
+If you run into this issue, there are two simple workarounds:
+
+1. Set `citus.create_object_propagation` to `automatic` to defer creation
+   of the type in this situation, in which case there may be some inconsistency
+   between which database objects exist on different nodes.
+1. Set `citus.multi_shard_modify_mode` to `sequential` to disable per-node
+   parallelism, as shown in the sketch after this list. Data loading in the same transaction might be slower.
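+
+The following minimal sketch applies either workaround at the session level before running the transaction; adjust the setting to your scenario:
+
+```sql
+-- option 1: defer type propagation so parallel data loading can continue
+SET citus.create_object_propagation TO 'automatic';
+
+-- option 2: force a single connection per worker so object creation and
+-- data loading can share a transaction (data loading may be slower)
+SET citus.multi_shard_modify_mode TO 'sequential';
+```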
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Useful diagnostic queries](howto-useful-diagnostic-queries.md)
cosmos-db Howto Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-monitoring.md
+
+ Title: How to view metrics - Azure Cosmos DB for PostgreSQL
+description: See how to access database metrics for Azure Cosmos DB for PostgreSQL.
++++++ Last updated : 09/24/2022++
+# How to view metrics in Azure Cosmos DB for PostgreSQL
++
+Resource metrics are available for every node of a cluster, and in aggregate across the nodes.
+
+## View metrics
+
+To access metrics for a cluster, open **Metrics**
+under **Monitoring** in the Azure portal.
++
+Choose a dimension and an aggregation, for instance **CPU percent** and
+**Max**, to view the metric aggregated across all nodes. For an explanation of
+each metric, see the [list of metrics](concepts-monitoring.md#list-of-metrics).
++
+### View metrics per node
+
+Viewing each node's metrics separately on the same graph is called *splitting*.
+To enable splitting, select **Apply splitting**, and then select the value by which to split. For nodes, choose **Server name**.
++
+The metrics will now be plotted in one color-coded line per node.
++
+## Next steps
+
+* Review Azure Cosmos DB for PostgreSQL [monitoring concepts](concepts-monitoring.md).
cosmos-db Howto Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-private-access.md
+
+ Title: Enable private access - Azure Cosmos DB for PostgreSQL
+description: See how to set up private link in a cluster for Azure Cosmos DB for PostgreSQL.
++++++ Last updated : 09/24/2022++
+# Enable private access in Azure Cosmos DB for PostgreSQL
++
+[Private access](concepts-private-access.md) allows resources in an Azure
+virtual network to connect securely and privately to nodes in a
+cluster. This how-to assumes you've already created a virtual
+network and subnet. For an example of setting up prerequisites, see the
+[private access tutorial](tutorial-private-access.md).
+
+## Create a cluster with a private endpoint
+
+1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
+1. On the **Create a resource** page, select **Databases**, and then select **Azure Cosmos DB**.
+1. On the **Select API option** page, on the **PostgreSQL** tile, select **Create**.
+1. On the **Create an Azure Cosmos DB for PostgreSQL cluster** page, select or create a **Resource group**, enter a **Cluster name** and **Location**, and enter and confirm the administrator **Password**.
+1. Select **Next: Networking**.
+1. On the **Networking** tab, for **Connectivity method**, select **Private access**.
+1. On the **Create private endpoint** screen, enter or select appropriate values for:
+ - **Resource group**
+ - **Location**
+ - **Name**
+ - **Target sub-resource**
+ - **Virtual network**
+ - **Subnet**
+ - **Integrate with private DNS zone**
+1. Select **OK**.
+1. After you create the private endpoint, select **Review + create** and then select **Create** to create your cluster.
+
+## Enable private access on an existing cluster
+
+To create a private endpoint to a node in an existing cluster, open the
+**Networking** page for the cluster.
+
+1. Select **Add private endpoint**.
+
+ :::image type="content" source="media/howto-private-access/networking.png" alt-text="Screenshot of selecting Add private endpoint on the Networking screen.":::
+
+2. On the **Basics** tab of the **Create a private endpoint** screen, confirm the **Subscription**, **Resource group**, and
+ **Region**. Enter a **Name** for the endpoint, such as *my-cluster-1*, and a **Network interface name**, such as *my-cluster-1-nic*.
+
+ > [!NOTE]
+ >
+ > Unless you have a good reason to choose otherwise, we recommend picking a
+ > subscription and region that match those of your cluster. The
+ > default values for the form fields might not be correct. Check them and
+ > update if necessary.
+
+3. Select **Next: Resource**. For **Target sub-resource**, choose the target
+ node of the cluster. Usually **coordinator** is the desired node.
+
+4. Select **Next: Virtual Network**. Choose the desired **Virtual network** and
+ **Subnet**. Under **Private IP configuration**, select **Statically allocate IP address** or keep the default, **Dynamically allocate IP address**.
+
+1. Select **Next: DNS**.
+1. Under **Private DNS integration**, for **Integrate with private DNS zone**, keep the default **Yes** or select **No**.
+
+5. Select **Next: Tags**, and add any desired tags.
+
+6. Select **Review + create**. Review the settings, and select
+ **Create** when satisfied.
+
+## Next steps
+
+* Learn more about [private access](concepts-private-access.md).
+* Follow a [tutorial](tutorial-private-access.md) to see private access in
+ action.
cosmos-db Howto Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-read-replicas-portal.md
+
+ Title: Manage read replicas - Azure portal - Azure Cosmos DB for PostgreSQL
+description: See how to manage read replicas in Azure Cosmos DB for PostgreSQL from the Azure portal.
++++++ Last updated : 09/24/2022++
+# Create and manage read replicas
++
+In this article, you learn how to create and manage read replicas in Azure
+Cosmos DB for PostgreSQL from the Azure portal. To learn more about read
+replicas, see the [overview](concepts-read-replicas.md).
+
+## Prerequisites
+
+A [cluster](quickstart-create-portal.md) to
+be the primary.
+
+## Create a read replica
+
+To create a read replica, follow these steps:
+
+1. Select an existing Azure Cosmos DB for PostgreSQL cluster to use as the
+ primary.
+
+2. On the cluster sidebar, under **Cluster management**, select
+ **Replicate data globally**.
+
+3. On the **Replicate data globally** screen, select **Add replica**.
+
+4. Under **Cluster name**, enter a name for the read replica.
+
+5. Select a value from the **Location (preview)** drop-down.
+
+6. Select **OK**.
+
+After the read replica is created, you can see it listed on the **Replicate data globally** screen.
+
+> [!IMPORTANT]
+>
+> Review the [considerations section of the Read Replica
+> overview](concepts-read-replicas.md#considerations).
+>
+> Before a primary cluster setting is updated to a new value, update the
+> replica setting to an equal or greater value. This action helps the replica
+> keep up with any changes made to the primary cluster.
+
+## Delete a primary cluster
+
+To delete a primary cluster, you use the same steps as to delete a
+standalone cluster. From the Azure portal, follow these
+steps:
+
+1. In the Azure portal, select your primary Azure Cosmos DB for PostgreSQL
+ cluster.
+
+1. On the **Overview** page for the cluster, select **Delete**.
+
+1. On the **Delete \<cluster name>** screen, select the checkbox next to **I understand that this cluster and all nodes that belong to this cluster will be deleted and cannot be recovered.**
+
+1. Select **Delete** to confirm deletion of the primary cluster.
+
+## Delete a replica
+
+You can delete a read replica similarly to how you delete a primary cluster.
+
+You can select the read replica to delete directly from the portal, or from the **Replicate data globally** screen of the primary cluster.
+
+1. In the Azure portal, on the **Overview** page for the read replica, select **Delete**.
+
+1. On the **Delete \<replica name>** screen, select the checkbox next to **I understand that this replica and all nodes that belong to it will be deleted. Deletion of this replica will not impact the primary cluster or other read replicas.**
+
+1. Select **Delete** to confirm deletion of the replica.
+
+## Next steps
+
+* Learn more about [read replicas](concepts-read-replicas.md).
cosmos-db Howto Restart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-restart.md
+
+ Title: Restart server - Azure Cosmos DB for PostgreSQL
+description: Learn how to restart all nodes in a cluster from the Azure portal.
++++++ Last updated : 05/06/2022++
+# Restart Azure Cosmos DB for PostgreSQL
++
+You can restart your cluster from the Azure portal. Restarting the cluster applies to all nodes; you can't selectively restart
+individual nodes. The restart applies to all PostgreSQL server processes in the nodes. Any applications attempting to use the database will experience
+connectivity downtime while the restart happens.
+
+1. In the Azure portal, navigate to the cluster's **Overview** page.
+
+1. Select **Restart** on the top bar.
+ > [!NOTE]
+ > If the Restart button is not yet present for your cluster, please open
+ > an Azure support request to restart the cluster.
+
+1. In the confirmation dialog, select **Restart all** to continue.
+
+## Next steps
+
+- Changing some server parameters requires a restart. See the list of [all
+ server parameters](reference-parameters.md) configurable on
+ Azure Cosmos DB for PostgreSQL.
cosmos-db Howto Restore Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-restore-portal.md
+
+ Title: Restore - Azure Cosmos DB for PostgreSQL - Azure portal
+description: See how to perform restore operations in Azure Cosmos DB for PostgreSQL through the Azure portal.
++++++ Last updated : 09/24/2022++
+# Point-in-time restore of a cluster
++
+This article provides step-by-step procedures to perform [point-in-time
+recoveries](concepts-backup.md#restore) for a
+cluster using backups. You can restore either to the earliest backup or to
+a custom restore point within your retention period.
+
+> [!IMPORTANT]
+> If the **Restore** option isn't present for your cluster, open an Azure support request to restore your cluster.
+
+## Restore to the earliest restore point
+
+Follow these steps to restore your cluster to its
+earliest existing backup.
+
+1. In the [Azure portal](https://portal.azure.com/), from the **Overview** page of the cluster you want to restore, select **Restore**.
+
+1. On the **Restore** page, select the **Earliest** restore point, which is shown.
+
+1. Provide a new cluster name in the **Restore to new cluster** field. The subscription, resource group, and location fields aren't editable.
+
+1. Select **OK**. A notification shows that the restore operation is initiated.
+
+1. When the restore completes, follow the [post-restore tasks](#post-restore-tasks).
+
+## Restore to a custom restore point
+
+Follow these steps to restore your cluster to a date
+and time of your choosing.
+
+1. In the [Azure portal](https://portal.azure.com/), from the **Overview** page of the cluster you want to restore, select **Restore**.
+
+1. On the **Restore** page, choose **Custom restore point**.
+
+1. Select a date and provide a time in the date and time fields, and enter a cluster name in the **Restore to new cluster** field. The other fields aren't editable.
+
+1. Select **OK**. A notification shows that the restore operation is initiated.
+
+1. When the restore completes, follow the [post-restore tasks](#post-restore-tasks).
+
+## Post-restore tasks
+
+After a restore, you should do the following to get your users and applications
+back up and running:
+
+* If the new server is meant to replace the original server, redirect clients
+ and client applications to the new server
+* Ensure an appropriate server-level firewall is in place for
+ users to connect. These rules aren't copied from the original cluster.
+* Adjust PostgreSQL server parameters as needed. The parameters aren't copied
+ from the original cluster.
+* Ensure appropriate logins and database level permissions are in place.
+* Configure alerts, as appropriate.
+
+## Next steps
+
+* Learn more about [backup and restore](concepts-backup.md) in
+ Azure Cosmos DB for PostgreSQL.
+* Set [suggested alerts](./howto-alert-on-metric.md#suggested-alerts) on clusters.
cosmos-db Howto Scale Grow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-scale-grow.md
+
+ Title: Scale cluster - Azure Cosmos DB for PostgreSQL
+description: Adjust cluster memory, disk, and CPU resources to deal with increased load.
++++++ Last updated : 09/24/2022++
+# Scale a cluster
++
+Azure Cosmos DB for PostgreSQL provides self-service
+scaling to deal with increased load. The Azure portal makes it easy to add new
+worker nodes, and to increase the vCores and storage for existing nodes.
+
+Adding nodes causes no downtime, and even moving shards to the new nodes (called [shard
+rebalancing](howto-scale-rebalance.md)) happens without interrupting
+queries.
+
+## Add worker nodes
+
+1. On the portal page for your cluster, select **Scale** from the left menu.
+
+1. On the **Scale** page, under **Nodes**, select a new value for **Node count**.
+
+ :::image type="content" source="media/howto-scaling/01-sliders-workers.png" alt-text="Resource sliders":::
+
+1. Select **Save** to apply the changed values.
+
+> [!NOTE]
+> Once you increase nodes and save, you can't decrease the number of worker nodes by using this form.
+
+> [!NOTE]
+> To take advantage of newly added nodes you must [rebalance distributed table
+> shards](howto-scale-rebalance.md), which means moving some
+> [shards](concepts-distributed-data.md#shards) from existing nodes
+> to the new ones. Rebalancing can work in the background, and requires no
+> downtime.
+
+## Increase or decrease vCores on nodes
+
+You can increase the capabilities of existing nodes. Adjusting compute capacity up and down can be useful for performance
+experiments, and short- or long-term changes to traffic demands.
+
+To change the vCores for all worker nodes, on the **Scale** screen, select a new value under **Compute per node**. To adjust the coordinator node's vCores, expand **Coordinator** and select a new value under **Coordinator compute**.
+
+> [!NOTE]
+> Once you increase vCores and save, you can't decrease the number of vCores by using this form.
+
+> [!NOTE]
+> There is a vCore quota per Azure subscription per region. The default quota
+> should be more than enough to experiment with Azure Cosmos DB for PostgreSQL. If you
+> need more vCores for a region in your subscription, see how to [adjust
+> compute quotas](howto-compute-quota.md).
+
+## Increase storage on nodes
+
+You can increase the disk space of existing
+nodes. Increasing disk space can allow you to do more with existing worker
+nodes before needing to add more worker nodes.
+
+To change the storage amount for all worker nodes, on the **Scale** screen, select a new value under **Storage per node**. To adjust the coordinator node's storage, expand **Coordinator** and select a new value under **Coordinator storage**.
+
+> [!NOTE]
+> Once you increase storage and save, you can't decrease the amount of storage by using this form.
+
+## Next steps
+
+- Learn more about cluster [performance options](resources-compute.md).
+- [Rebalance distributed table shards](howto-scale-rebalance.md)
+ so that all worker nodes can participate in parallel queries
+- See the sizes of distributed tables, and other [useful diagnostic
+ queries](howto-useful-diagnostic-queries.md).
cosmos-db Howto Scale Initial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-scale-initial.md
+
+ Title: Initial cluster size - Azure Cosmos DB for PostgreSQL
+description: Pick the right initial size for your use case
++++++ Last updated : 08/03/2021++
+# Pick initial size for cluster
++
+The size of a cluster, both number of nodes and their hardware capacity,
+is [easy to change](howto-scale-grow.md). However, you still need to
+choose an initial size for a new cluster. Here are some tips for a
+reasonable choice.
+
+## Use-cases
+
+Azure Cosmos DB for PostgreSQL is frequently used in the following ways.
+
+### Multi-tenant SaaS
+
+When migrating to Azure Cosmos DB for PostgreSQL from an existing single-node PostgreSQL
+database instance, choose a cluster where the number of worker vCores and RAM
+in total equals that of the original instance. In such scenarios, we have seen
+2-3x performance improvements, because sharding improves resource utilization
+and allows smaller indices.
+
+The vCore count is actually the only decision. RAM allocation is currently
+determined based on vCore count, as described in the [compute and
+storage](resources-compute.md) page. The coordinator node doesn't require as
+much RAM as workers, but there's no way to choose RAM and vCores independently.
+
+### Real-time analytics
+
+Total vCores: when working data fits in RAM, you can expect a linear
+performance improvement on Azure Cosmos DB for PostgreSQL proportional to the number of
+worker cores. To determine the right number of vCores for your needs, consider
+the current latency for queries in your single-node database and the required
+latency in Azure Cosmos DB for PostgreSQL. Divide current latency by desired latency, and
+round the result.
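+
+For example, if a typical query takes about 500 ms on the current single-node database and the target latency is roughly 100 ms, then 500 / 100 = 5 suggests starting with about five times the current vCores. Treat this as a rough sizing estimate rather than a guarantee.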
+
+Worker RAM: the best case would be providing enough memory that most of the
+working set fits in memory. The type of queries your application uses affects
+memory requirements. You can run EXPLAIN ANALYZE on a query to determine how
+much memory it requires. Remember that vCores and RAM are scaled together as
+described in the [compute and storage](resources-compute.md) article.
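+
+Here's a minimal sketch of checking a query's memory use with EXPLAIN ANALYZE, using a hypothetical `events` table; the plan output reports how much memory operations such as sorts and hash aggregates used:
+
+```sql
+-- inspect per-operation memory and buffer usage for a sample query
+EXPLAIN (ANALYZE, BUFFERS)
+SELECT customer_id, count(*)
+FROM events
+GROUP BY customer_id;
+```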
+
+## Next steps
+
+- [Scale a cluster](howto-scale-grow.md)
+- Learn more about cluster [performance
+ options](resources-compute.md).
cosmos-db Howto Scale Rebalance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-scale-rebalance.md
+
+ Title: Rebalance shards - Azure Cosmos DB for PostgreSQL
+description: Learn how to use the Azure portal to rebalance data in a cluster using the Shard rebalancer.
++++++ Last updated : 07/20/2021++
+# Rebalance shards in cluster
++
+To take advantage of newly added nodes, rebalance distributed table
+[shards](concepts-distributed-data.md#shards). Rebalancing moves shards from existing nodes to the new ones. Azure Cosmos DB for PostgreSQL offers
+zero-downtime rebalancing, meaning queries continue without interruption during
+shard rebalancing.
+
+## Determine if the cluster is balanced
+
+The Azure portal shows whether data is distributed equally between
+worker nodes in a cluster or not. From the **Cluster management** menu, select **Shard rebalancer**.
+
+- If data is skewed between workers: You'll see the message, **Rebalancing is recommended** and a list of the size of each node.
+
+- If data is balanced: You'll see the message, **Rebalancing is not recommended at this time**.
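+
+If you prefer to check from SQL, a rough sketch is to sum shard sizes per node from the coordinator, assuming your cluster's Citus version provides the `citus_shards` view:
+
+```sql
+-- approximate distributed data size per worker node
+SELECT nodename, pg_size_pretty(sum(shard_size)) AS total_shard_size
+FROM citus_shards
+GROUP BY nodename
+ORDER BY sum(shard_size) DESC;
+```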
+
+## Run the Shard rebalancer
+
+To start the Shard rebalancer, connect to the coordinator node of the cluster and then run the [rebalance_table_shards](reference-functions.md#rebalance_table_shards) SQL function on distributed tables.
+
+The function rebalances all tables in the
+[colocation](concepts-colocation.md) group of the table named in its
+argument. You don't have to call the function for every distributed
+table. Instead, call it on a representative table from each colocation group.
+
+```sql
+SELECT rebalance_table_shards('distributed_table_name');
+```
+
+## Monitor rebalance progress
+
+You can view the rebalance progress from the Azure portal. From the **Cluster management** menu, select **Shard rebalancer**. The
+message **Rebalancing is underway** displays with two tables:
+
+- The first table shows the number of shards moving into or out of a node. For
+example, "6 of 24 moved in."
+- The second table shows progress per database table: name, shard count affected, data size affected, and rebalancing status.
+
+Select **Refresh** to update the page. When rebalancing is complete, you'll see the message **Rebalancing is not recommended at this time**.
+
+## Next steps
+
+- Learn more about cluster [performance options](resources-compute.md).
+- [Scale a cluster](howto-scale-grow.md) up or out
+- See the
+ [rebalance_table_shards](reference-functions.md#rebalance_table_shards)
+ reference material
cosmos-db Howto Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-ssl-connection-security.md
+
+ Title: Transport Layer Security (TLS) - Azure Cosmos DB for PostgreSQL
+description: Instructions and information to configure Azure Cosmos DB for PostgreSQL and associated applications to properly use TLS connections.
++++++ Last updated : 07/16/2020+
+# Configure TLS in Azure Cosmos DB for PostgreSQL
+
+The coordinator node requires client applications to connect with Transport Layer Security (TLS). Enforcing TLS between the database server and client applications helps keep data confidential in transit. Extra verification settings described below also protect against "man-in-the-middle" attacks.
+
+## Enforcing TLS connections
+Applications use a "connection string" to identify the destination database and settings for a connection. Different clients require different settings. To see a list of connection strings used by common clients, consult the **Connection Strings** section for your cluster in the Azure portal.
+
+The TLS parameters `ssl` and `sslmode` vary based on the capabilities of the connector, for example `ssl=true` or `sslmode=require` or `sslmode=required`.
+
+## Ensure your application or framework supports TLS connections
+Some application frameworks don't enable TLS by default for PostgreSQL connections. However, without a secure connection, an application can't connect to the coordinator node. Consult your application's documentation to learn how to enable TLS connections.
+
+## Applications that require certificate verification for TLS connectivity
+In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file (.cer) to connect securely. The certificate to connect to an Azure Cosmos DB for PostgreSQL cluster is located at https://cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem. Download the certificate file and save it to your preferred location.
+
+> [!NOTE]
+>
+> To check the certificate's authenticity, you can verify its SHA-256
+> fingerprint using the OpenSSL command line tool:
+>
+> ```sh
+> openssl x509 -in DigiCertGlobalRootCA.crt.pem -noout -sha256 -fingerprint
+>
+> # should output:
+> # 43:48:A0:E9:44:4C:78:CB:26:5E:05:8D:5E:89:44:B4:D8:4F:96:62:BD:26:DB:25:7F:89:34:A4:43:C7:01:61
+> ```
+
+### Connect using psql
+The following example shows how to connect to your coordinator node using the psql command-line utility. Use the `sslmode=verify-full` connection string setting to enforce TLS certificate verification. Pass the local certificate file path to the `sslrootcert` parameter.
+
+Below is an example of the psql connection string:
+```
+psql "sslmode=verify-full sslrootcert=DigiCertGlobalRootCA.crt.pem host=mydemoserver.postgres.database.azure.com dbname=citus user=citus password=your_pass"
+```
+> [!TIP]
+> Confirm that the value passed to `sslrootcert` matches the file path for the certificate you saved.
+
+## Next steps
+Increase security further with [Firewall rules in Azure Cosmos DB for PostgreSQL](concepts-firewall-rules.md).
cosmos-db Howto Table Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-table-size.md
+
+ Title: Determine table size - Azure Cosmos DB for PostgreSQL
+description: How to find the true size of distributed tables in a cluster
+++++ Last updated : 12/06/2021++
+# Determine table and relation size
++
+The usual way to find table sizes in PostgreSQL, `pg_total_relation_size`,
+drastically under-reports the size of distributed tables on Azure Cosmos DB for PostgreSQL.
+All this function does on a cluster is to reveal the size
+of tables on the coordinator node. In reality, the data in distributed tables
+lives on the worker nodes (in shards), not on the coordinator. A true measure
+of distributed table size is obtained as a sum of shard sizes. Azure Cosmos DB for PostgreSQL
+provides helper functions to query this information.
+
+| Function | Returns |
+|----------|---------|
+| citus_relation_size(relation_name) | Size of actual data in the table (the "[main fork](https://www.postgresql.org/docs/current/static/storage-file-layout.html)"). A relation can be the name of a table or an index. |
+| citus_table_size(relation_name) | citus_relation_size plus the size of the [free space map](https://www.postgresql.org/docs/current/static/storage-fsm.html) and the [visibility map](https://www.postgresql.org/docs/current/static/storage-vm.html). |
+| citus_total_relation_size(relation_name) | citus_table_size plus the size of indices. |
+
+These functions are analogous to three of the standard PostgreSQL [object size
+functions](https://www.postgresql.org/docs/current/static/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE),
+except that if they can't connect to a node, they error out.
+
+## Example
+
+Here's how to list the sizes of all distributed tables:
+
+``` postgresql
+SELECT logicalrelid AS name,
+ pg_size_pretty(citus_table_size(logicalrelid)) AS size
+ FROM pg_dist_partition;
+```
+
+Output:
+
+```
+┌───────────────┬───────┐
+│ name          │ size  │
+├───────────────┼───────┤
+│ github_users  │ 39 MB │
+│ github_events │ 37 MB │
+└───────────────┴───────┘
+```
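+
+For a single table, you can also compare the three helper functions directly. Here's a minimal sketch, using the `github_events` table from the output above:
+
+```sql
+SELECT pg_size_pretty(citus_relation_size('github_events'))       AS data_size,
+       pg_size_pretty(citus_table_size('github_events'))          AS table_size,
+       pg_size_pretty(citus_total_relation_size('github_events')) AS total_size;
+```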
+
+## Next steps
+
+* Learn to [scale a cluster](howto-scale-grow.md) to hold more data.
+* Distinguish [table types](concepts-nodes.md) in a cluster.
+* See other [useful diagnostic queries](howto-useful-diagnostic-queries.md).
cosmos-db Howto Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-troubleshoot-common-connection-issues.md
+
+ Title: Troubleshoot connections - Azure Cosmos DB for PostgreSQL
+description: Learn how to troubleshoot connection issues to Azure Cosmos DB for PostgreSQL
+keywords: postgresql connection,connection string,connectivity issues,transient error,connection error
++++++ Last updated : 12/17/2021++
+# Troubleshoot connection issues to Azure Cosmos DB for PostgreSQL
++
+Connection problems may be caused by several things, such as:
+
+* Firewall settings
+* Connection time-out
+* Incorrect sign in information
+* Connection limit reached for cluster
+* Issues with the infrastructure of the service
+* Service maintenance
+* The coordinator node failing over to new hardware
+
+Generally, connection issues to Azure Cosmos DB for PostgreSQL can be classified as follows:
+
+* Transient errors (short-lived or intermittent)
+* Persistent or non-transient errors (errors that regularly recur)
+
+## Troubleshoot transient errors
+
+Transient errors occur for a number of reasons. The most common include system
+maintenance, errors with hardware or software, and coordinator node vCore
+upgrades.
+
+Enabling high availability for cluster nodes can mitigate these
+types of problems automatically. However, your application should still be
+prepared to lose its connection briefly. Also, other events can take longer to
+mitigate, such as when a large transaction causes a long-running recovery.
+
+### Steps to resolve transient connectivity issues
+
+1. Check the [Microsoft Azure Service
+ Dashboard](https://azure.microsoft.com/status) for any known outages that
+ occurred during the time in which the application was reporting errors.
+2. Applications that connect to a cloud service such as Azure Cosmos DB for PostgreSQL
+ should expect transient errors and react gracefully. For instance,
+ applications should implement retry logic to handle these errors instead of
+ surfacing them as application errors to users.
+3. As the cluster approaches its resource limits, errors can seem like
+ transient connectivity issues. Increasing node RAM, or adding worker nodes
+ and rebalancing data may help.
+4. If connectivity problems continue, or last longer than 60 seconds, or happen
+ more than once per day, file an Azure support request by
+ selecting **Get Support** on the [Azure
+ Support](https://azure.microsoft.com/support/options) site.
+
+## Troubleshoot persistent errors
+
+If the application persistently fails to connect to Azure Cosmos DB for PostgreSQL, the
+most common causes are firewall misconfiguration or user error.
+
+* Coordinator node firewall configuration: Make sure that the server
+ firewall is configured to allow connections from your client, including proxy
+ servers and gateways.
+* Client firewall configuration: The firewall on your client must allow
+ connections to your database server. Some firewalls require allowing not only
+ the application by name, but also the IP addresses and ports of the server.
+* User error: Double-check the connection string. You might have mistyped
+ parameters like the server name. You can find connection strings for various
+ language frameworks and psql in the Azure portal. Go to the **Connection
+ strings** page in your cluster. Also keep in mind that
+ clusters have only one database and its predefined name is
+ **citus**.
+
+### Steps to resolve persistent connectivity issues
+
+1. Set up [firewall rules](howto-manage-firewall-using-portal.md) to
+ allow the client IP address. For temporary testing purposes only, set up a
+ firewall rule using 0.0.0.0 as the starting IP address and using
+ 255.255.255.255 as the ending IP address. That rule opens the server to all IP
+ addresses. If the rule resolves your connectivity issue, remove it and
+ create a firewall rule for an appropriately limited IP address or address
+ range.
+2. On all firewalls between the client and the internet, make sure that port
+ 5432 is open for outbound connections (and 6432 if using [connection
+ pooling](concepts-connection-pool.md)).
+3. Verify your connection string and other connection settings.
+4. Check the service health in the dashboard.
+
+## Next steps
+
+* Learn the concepts of [Firewall rules in Azure Cosmos DB for PostgreSQL](concepts-firewall-rules.md)
+* See how to [Manage firewall rules for Azure Cosmos DB for PostgreSQL](howto-manage-firewall-using-portal.md)
cosmos-db Howto Troubleshoot Read Only https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-troubleshoot-read-only.md
+
+ Title: Troubleshoot read-only access - Azure Cosmos DB for PostgreSQL
+description: Learn why a cluster can become read-only, and what to do
+keywords: postgresql connection,read only
+++++ Last updated : 08/03/2021++
+# Troubleshoot read-only access to Azure Cosmos DB for PostgreSQL
++
+PostgreSQL can't run on a machine without some free disk space. To maintain
+access to PostgreSQL servers, it's necessary to prevent the disk space from
+running out.
+
+In Azure Cosmos DB for PostgreSQL, nodes are set to a read-only (RO) state when the disk is
+almost full. Preventing writes stops the disk from continuing to fill, and
+keeps the node available for reads. During the read-only state, you can take
+measures to free more disk space.
+
+Specifically, a node becomes read-only when it has less than
+5 GiB of free storage left. When the server becomes read-only, all existing
+sessions are disconnected, and uncommitted transactions are rolled back. Any
+write operations and transaction commits will fail, while read queries will
+continue to work.
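+
+For example, in the read-only state reads keep working while any write fails
+immediately. A minimal illustration, assuming a hypothetical `events` table:
+
+```sql
+-- Reads continue to work in the read-only state.
+SELECT count(*) FROM events;
+
+-- Writes fail with an error similar to:
+INSERT INTO events (id) VALUES (1);
+-- ERROR:  cannot execute INSERT in a read-only transaction
+```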
+
+## Ways to recover write-access
+
+### On the coordinator node
+
+* [Increase storage
+ size](howto-scale-grow.md#increase-storage-on-nodes)
+ on the coordinator node, and/or
+* Distribute local tables to worker nodes, or drop data. You'll need to run
+  `SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE` after you've
+  connected to the database and before you execute other commands, as shown
+  in the sketch after this list.
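+
+For example, here's a minimal sketch of freeing space on the coordinator. The
+table and column names are hypothetical:
+
+```sql
+-- Re-enable writes for the current session only.
+SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE;
+
+-- Option 1: move a large local table to the worker nodes.
+SELECT create_distributed_table('big_local_table', 'tenant_id');
+
+-- Option 2: drop data that's no longer needed.
+DELETE FROM big_local_table WHERE created_at < now() - interval '90 days';
+```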
+
+### On a worker node
+
+* [Increase storage
+ size](howto-scale-grow.md#increase-storage-on-nodes)
+ on the worker nodes, and/or
+* [Rebalance data](howto-scale-rebalance.md) to other nodes, or drop
+ some data.
+ * You'll need to set the worker node as read-write temporarily. You can
+ connect directly to worker nodes and use `SET SESSION CHARACTERISTICS` as
+ described above for the coordinator node.
+
+## Prevention
+
+We recommend that you set up an alert to notify you when server storage is
+approaching the threshold. That way you can act early to avoid getting into the
+read-only state. For more information, see the documentation about [recommended
+alerts](howto-alert-on-metric.md#suggested-alerts).
+
+## Next steps
+
+* [Set up Azure
+ alerts](howto-alert-on-metric.md#suggested-alerts)
+ for advance notice so you can take action before reaching the read-only state.
+* Learn about [disk
+ usage](https://www.postgresql.org/docs/current/diskusage.html) in PostgreSQL
+ documentation.
+* Learn about [session
+ characteristics](https://www.postgresql.org/docs/13/sql-set-transaction.html)
+ in PostgreSQL documentation.
cosmos-db Howto Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-upgrade.md
+
+ Title: Upgrade cluster - Azure Cosmos DB for PostgreSQL
+description: See how you can upgrade PostgreSQL and Citus in Azure Cosmos DB for PostgreSQL.
++++++ Last updated : 09/25/2022++
+# Upgrade cluster
++
+These instructions describe how to upgrade to a new major version of PostgreSQL
+on all cluster nodes.
+
+## Test the upgrade first
+
+Upgrading PostgreSQL causes more changes than you might imagine, because
+Azure Cosmos DB for PostgreSQL will also upgrade the [database
+extensions](reference-extensions.md), including the Citus extension. Upgrades
+also require downtime in the database cluster.
+
+We strongly recommend that you test your application with the new PostgreSQL and
+Citus version before you upgrade your production environment. Also, see
+our list of [upgrade precautions](concepts-upgrade.md).
+
+A convenient way to test is to make a copy of your cluster using
+[point-in-time restore](concepts-backup.md#restore). Upgrade the
+copy and test your application against it. Once you've verified everything
+works properly, upgrade the original cluster.
+
+## Upgrade a cluster in the Azure portal
+
+1. In the **Overview** section of a cluster, select the
+ **Upgrade** button.
+1. A dialog appears, showing the current version of PostgreSQL and Citus.
+ Choose a new PostgreSQL version in the **PostgreSQL version to upgrade** list.
+1. Verify that the value in **Citus version to upgrade** is what you expect.
+ This value changes based on the PostgreSQL version you selected.
+1. Select the **Upgrade** button to continue.
+
+> [!NOTE]
+> If you're already running the latest PostgreSQL version, the selection and button are grayed out.
+
+## Next steps
+
+* Learn about [supported PostgreSQL versions](reference-versions.md).
+* See [which extensions](reference-extensions.md) are packaged with
+ each PostgreSQL version in a cluster.
+* Learn more about [upgrades](concepts-upgrade.md)
cosmos-db Howto Useful Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-useful-diagnostic-queries.md
+
+ Title: Useful diagnostic queries - Azure Cosmos DB for PostgreSQL
+description: Queries to learn about distributed data and more
+++++ Last updated : 8/23/2021++
+# Useful diagnostic queries
++
+## Finding which node contains data for a specific tenant
+
+In the multi-tenant use case, we can determine which worker node contains the
+rows for a specific tenant. Azure Cosmos DB for PostgreSQL groups the rows of distributed
+tables into shards, and places each shard on a worker node in the cluster.
+
+Suppose our application's tenants are stores, and we want to find which worker
+node holds the data for store ID=4. In other words, we want to find the
+placement for the shard containing rows whose distribution column has value 4:
+
+``` postgresql
+SELECT shardid, shardstate, shardlength, nodename, nodeport, placementid
+ FROM pg_dist_placement AS placement,
+ pg_dist_node AS node
+ WHERE placement.groupid = node.groupid
+ AND node.noderole = 'primary'
+ AND shardid = (
+ SELECT get_shard_id_for_distribution_column('stores', 4)
+ );
+```
+
+The output contains the host and port of the worker database.
+
+```
+┌─────────┬────────────┬─────────────┬───────────┬──────────┬─────────────┐
+│ shardid │ shardstate │ shardlength │ nodename  │ nodeport │ placementid │
+├─────────┼────────────┼─────────────┼───────────┼──────────┼─────────────┤
+│  102009 │          1 │           0 │ 10.0.0.16 │     5432 │           2 │
+└─────────┴────────────┴─────────────┴───────────┴──────────┴─────────────┘
+```
+
+## Finding the distribution column for a table
+
+Each distributed table has a "distribution column." (For
+more information, see [Distributed Data
+Modeling](howto-choose-distribution-column.md).) It can be
+important to know which column it is. For instance, when joining or filtering
+tables, you may see error messages with hints like, "add a filter to the
+distribution column."
+
+The `pg_dist_*` tables on the coordinator node contain diverse metadata about
+the distributed database. In particular `pg_dist_partition` holds information
+about the distribution column for each table. You can use a convenient utility
+function to look up the distribution column name from the low-level details in
+the metadata. Here's an example and its output:
+
+``` postgresql
+-- create example table
+
+CREATE TABLE products (
+ store_id bigint,
+ product_id bigint,
+ name text,
+ price money,
+
+ CONSTRAINT products_pkey PRIMARY KEY (store_id, product_id)
+);
+
+-- pick store_id as distribution column
+
+SELECT create_distributed_table('products', 'store_id');
+
+-- get distribution column name for products table
+
+SELECT column_to_column_name(logicalrelid, partkey) AS dist_col_name
+ FROM pg_dist_partition
+ WHERE logicalrelid='products'::regclass;
+```
+
+Example output:
+
+```
+┌───────────────┐
+│ dist_col_name │
+├───────────────┤
+│ store_id      │
+└───────────────┘
+```
+
+## Detecting locks
+
+This query will run across all worker nodes and identify locks, how long
+they've been open, and the offending queries:
+
+``` postgresql
+SELECT run_command_on_workers($cmd$
+ SELECT array_agg(
+ blocked_statement || ' $ ' || cur_stmt_blocking_proc
+ || ' $ ' || cnt::text || ' $ ' || age
+ )
+ FROM (
+ SELECT blocked_activity.query AS blocked_statement,
+ blocking_activity.query AS cur_stmt_blocking_proc,
+ count(*) AS cnt,
+ age(now(), min(blocked_activity.query_start)) AS "age"
+ FROM pg_catalog.pg_locks blocked_locks
+ JOIN pg_catalog.pg_stat_activity blocked_activity
+ ON blocked_activity.pid = blocked_locks.pid
+ JOIN pg_catalog.pg_locks blocking_locks
+ ON blocking_locks.locktype = blocked_locks.locktype
+ AND blocking_locks.DATABASE IS NOT DISTINCT FROM blocked_locks.DATABASE
+ AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
+ AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
+ AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
+ AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
+ AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
+ AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
+ AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
+ AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
+ AND blocking_locks.pid != blocked_locks.pid
+ JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
+ WHERE NOT blocked_locks.GRANTED
+ AND blocking_locks.GRANTED
+ GROUP BY blocked_activity.query,
+ blocking_activity.query
+ ORDER BY 4
+ ) a
+$cmd$);
+```
+
+Example output:
+
+```
+┌──────────────────────────────────────────────────────────────────────────────────────┐
+│                                run_command_on_workers                                 │
+├──────────────────────────────────────────────────────────────────────────────────────┤
+│ (10.0.0.16,5432,t,"")                                                                  │
+│ (10.0.0.20,5432,t,"{""update ads_102277 set name = 'new name' where id = 1; $ sel…    │
+│ …ect * from ads_102277 where id = 1 for update; $ 1 $ 00:00:03.729519""}")             │
+└──────────────────────────────────────────────────────────────────────────────────────┘
+```
+
+## Querying the size of your shards
+
+This query will provide you with the size of every shard of a given
+distributed table, called `my_distributed_table`:
+
+``` postgresql
+SELECT *
+FROM run_command_on_shards('my_distributed_table', $cmd$
+ SELECT json_build_object(
+ 'shard_name', '%1$s',
+ 'size', pg_size_pretty(pg_table_size('%1$s'))
+ );
+$cmd$);
+```
+
+Example output:
+
+```
+┌─────────┬─────────┬─────────────────────────────────────────────────────────────────────┐
+│ shardid │ success │ result                                                              │
+├─────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
+│  102008 │ t       │ {"shard_name" : "my_distributed_table_102008", "size" : "2416 kB"} │
+│  102009 │ t       │ {"shard_name" : "my_distributed_table_102009", "size" : "3960 kB"} │
+│  102010 │ t       │ {"shard_name" : "my_distributed_table_102010", "size" : "1624 kB"} │
+│  102011 │ t       │ {"shard_name" : "my_distributed_table_102011", "size" : "4792 kB"} │
+└─────────┴─────────┴─────────────────────────────────────────────────────────────────────┘
+```
+
+## Querying the size of all distributed tables
+
+This query gets a list of the sizes for each distributed table plus the
+size of their indices.
+
+``` postgresql
+SELECT
+ tablename,
+ pg_size_pretty(
+ citus_total_relation_size(tablename::text)
+ ) AS total_size
+FROM pg_tables pt
+JOIN pg_dist_partition pp
+ ON pt.tablename = pp.logicalrelid::text
+WHERE schemaname = 'public';
+```
+
+Example output:
+
+```
+┌────────────────┬────────────┐
+│ tablename      │ total_size │
+├────────────────┼────────────┤
+│ github_users   │ 39 MB      │
+│ github_events  │ 98 MB      │
+└────────────────┴────────────┘
+```
+
+Note that there are other Azure Cosmos DB for PostgreSQL functions for querying distributed
+table size; see [determining table size](howto-table-size.md).
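+
+For example (a minimal sketch, assuming the `github_users` table from the
+example above), the related size functions measure progressively larger
+subsets of the table's storage:
+
+``` postgresql
+-- Main data fork only.
+SELECT pg_size_pretty(citus_relation_size('github_users'));
+
+-- Table size excluding indexes (analogous to pg_table_size).
+SELECT pg_size_pretty(citus_table_size('github_users'));
+
+-- Total size including indexes (analogous to pg_total_relation_size).
+SELECT pg_size_pretty(citus_total_relation_size('github_users'));
+```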
+
+## Identifying unused indices
+
+The following query identifies unused indexes on worker nodes for a given
+distributed table (`my_distributed_table`):
+
+``` postgresql
+SELECT *
+FROM run_command_on_shards('my_distributed_table', $cmd$
+ SELECT array_agg(a) as infos
+ FROM (
+ SELECT (
+ schemaname || '.' || relname || '##' || indexrelname || '##'
+ || pg_size_pretty(pg_relation_size(i.indexrelid))::text
+ || '##' || idx_scan::text
+ ) AS a
+ FROM pg_stat_user_indexes ui
+ JOIN pg_index i
+ ON ui.indexrelid = i.indexrelid
+ WHERE NOT indisunique
+ AND idx_scan < 50
+ AND pg_relation_size(relid) > 5 * 8192
+ AND (schemaname || '.' || relname)::regclass = '%s'::regclass
+ ORDER BY
+ pg_relation_size(i.indexrelid) / NULLIF(idx_scan, 0) DESC nulls first,
+ pg_relation_size(i.indexrelid) DESC
+ ) sub
+$cmd$);
+```
+
+Example output:
+
+```
+┌─────────┬─────────┬──────────────────────────────────────────────────────────────────────┐
+│ shardid │ success │ result                                                               │
+├─────────┼─────────┼──────────────────────────────────────────────────────────────────────┤
+│  102008 │ t       │                                                                      │
+│  102009 │ t       │ {"public.my_distributed_table_102009##some_index_102009##28 MB##0"}  │
+│  102010 │ t       │                                                                      │
+│  102011 │ t       │                                                                      │
+└─────────┴─────────┴──────────────────────────────────────────────────────────────────────┘
+```
+
+## Monitoring client connection count
+
+The following query counts the connections open on the coordinator, and groups
+them by type.
+
+``` sql
+SELECT state, count(*)
+FROM pg_stat_activity
+GROUP BY state;
+```
+
+Example output:
+
+```
+┌────────┬───────┐
+│ state  │ count │
+├────────┼───────┤
+│ active │     3 │
+│ idle   │     3 │
+│ ∅      │     6 │
+└────────┴───────┘
+```
+
+## Viewing system queries
+
+### Active queries
+
+The `pg_stat_activity` view shows which queries are currently executing. You
+can filter to find the actively executing ones, along with the process ID of
+their backend:
+
+```sql
+SELECT pid, query, state
+ FROM pg_stat_activity
+ WHERE state != 'idle';
+```
+
+### Why are queries waiting
+
+We can also query to see the most common reasons that non-idle queries are
+waiting. For an explanation of the reasons, check the [PostgreSQL
+documentation](https://www.postgresql.org/docs/current/monitoring-stats.html#WAIT-EVENT-TABLE).
+
+```sql
+SELECT wait_event || ':' || wait_event_type AS type, count(*) AS number_of_occurences
+ FROM pg_stat_activity
+ WHERE state != 'idle'
+GROUP BY wait_event, wait_event_type
+ORDER BY number_of_occurences DESC;
+```
+
+Example output when running `pg_sleep` in a separate query concurrently:
+
+```
+┌─────────────────┬──────────────────────┐
+│ type            │ number_of_occurences │
+├─────────────────┼──────────────────────┤
+│ ∅               │                    1 │
+│ PgSleep:Timeout │                    1 │
+└─────────────────┴──────────────────────┘
+```
+
+## Index hit rate
+
+This query will provide you with your index hit rate across all nodes. Index
+hit rate is useful in determining how often indices are used when querying.
+A value of 95% or higher is ideal.
+
+``` postgresql
+-- on coordinator
+SELECT 100 * (sum(idx_blks_hit) - sum(idx_blks_read)) / sum(idx_blks_hit) AS index_hit_rate
+ FROM pg_statio_user_indexes;
+
+-- on workers
+SELECT nodename, result as index_hit_rate
+FROM run_command_on_workers($cmd$
+ SELECT 100 * (sum(idx_blks_hit) - sum(idx_blks_read)) / sum(idx_blks_hit) AS index_hit_rate
+ FROM pg_statio_user_indexes;
+$cmd$);
+```
+
+Example output:
+
+```
+┌───────────┬────────────────┐
+│ nodename  │ index_hit_rate │
+├───────────┼────────────────┤
+│ 10.0.0.16 │ 96.0           │
+│ 10.0.0.20 │ 98.0           │
+└───────────┴────────────────┘
+```
+
+## Cache hit rate
+
+Most applications typically access a small fraction of their total data at
+once. PostgreSQL keeps frequently accessed data in memory to avoid slow reads
+from disk. You can see statistics about it in the
+[pg_statio_user_tables](https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STATIO-ALL-TABLES-VIEW)
+view.
+
+An important measurement is what percentage of data comes from the memory cache
+vs the disk in your workload:
+
+``` postgresql
+-- on coordinator
+SELECT
+ sum(heap_blks_read) AS heap_read,
+ sum(heap_blks_hit) AS heap_hit,
+ 100 * sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) AS cache_hit_rate
+FROM
+ pg_statio_user_tables;
+
+-- on workers
+SELECT nodename, result as cache_hit_rate
+FROM run_command_on_workers($cmd$
+ SELECT
+ 100 * sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) AS cache_hit_rate
+ FROM
+ pg_statio_user_tables;
+$cmd$);
+```
+
+Example output:
+
+```
+┌───────────┬──────────┬─────────────────────┐
+│ heap_read │ heap_hit │ cache_hit_rate      │
+├───────────┼──────────┼─────────────────────┤
+│         1 │      132 │ 99.2481203007518796 │
+└───────────┴──────────┴─────────────────────┘
+```
+
+If the ratio is significantly lower than 99%, consider increasing the cache
+available to your database.
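+
+As a minimal check, you can see how much memory PostgreSQL dedicates to its
+buffer cache on the node you're connected to:
+
+``` postgresql
+-- Current buffer cache size on this node.
+SHOW shared_buffers;
+```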
+
+## Next steps
+
+* Learn about other [system tables](reference-metadata.md)
+ that are useful for diagnostics
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/introduction.md
+
+ Title: Overview of Azure Cosmos DB for PostgreSQL
+description: Read an overview guide for running Azure Cosmos DB for PostgreSQL.
++++++
+recommendations: false
Last updated : 09/29/2022+
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD026 -->
+
+# Azure Cosmos DB for PostgreSQL
++
+Azure Cosmos DB for PostgreSQL is a managed service for PostgreSQL extended
+with the [Citus open source](https://github.com/citusdata/citus) superpower of
+*distributed tables*. This superpower enables you to build highly scalable
+relational apps. You can start building apps on a single node cluster, the
+same way you would with PostgreSQL. As your app's scalability and performance
+requirements grow, you can seamlessly scale to multiple nodes by transparently
+distributing your tables.
+
+Real-world customer applications built on Azure Cosmos DB for PostgreSQL include software-as-a-service (SaaS) apps, real-time
+operational analytics apps, and high throughput transactional apps. These apps
+span various verticals such as sales and marketing automation, healthcare,
+Internet of Things (IoT) telemetry, finance, logistics, and search.
++
+## Implementation checklist
+
+As you're looking to create applications with Azure Cosmos DB for PostgreSQL, ensure you've
+reviewed the following articles:
+
+<!-- markdownlint-disable MD032 -->
+
+> [!div class="checklist"]
+> - Learn how to [build scalable apps](quickstart-build-scalable-apps-overview.md).
+> - Connect and query with your [app stack](quickstart-app-stacks-overview.yml).
+> - See how the [Azure Cosmos DB for PostgreSQL API](reference-overview.md) extends
+> PostgreSQL, and try [useful diagnostic
+> queries](howto-useful-diagnostic-queries.md).
+> - Pick the best [cluster size](howto-scale-initial.md) for your workload.
+> - [Monitor](howto-monitoring.md) cluster performance.
+> - Ingest data efficiently with [Azure Stream Analytics](howto-ingest-azure-stream-analytics.md)
+> and [Azure Data Factory](howto-ingest-azure-data-factory.md).
+
+<!-- markdownlint-enable MD032 -->
+
+## Fully managed, resilient database
+
+As Azure Cosmos DB for PostgreSQL is a fully managed service, it has all the features for
+worry-free operation in production. Features include:
+
+* automatic high availability
+* backups
+* built-in pgBouncer
+* read-replicas
+* easy monitoring
+* private endpoints
+* encryption
+* and more
+
+> [!div class="nextstepaction"]
+> [Try the quickstart >](quickstart-create-portal.md)
+
+## Always the latest PostgreSQL features
+
+Azure Cosmos DB for PostgreSQL is powered by the
+[Citus](https://github.com/citusdata/citus) open source extension to
+PostgreSQL. Because Citus isn't a fork of Postgres, the Citus extension always
+supports the latest PostgreSQL major version within a week of release--with
+support added to our managed service on Azure at most a few weeks later.
+
+Your apps can use the newest PostgreSQL features and extensions, such as
+native partitioning for performance, JSONB support to store and query
+unstructured data, and geospatial functionality via the PostGIS extension.
+It's the speed you need, on the database you love.
+
+## Start simply, scale seamlessly
+
+A database cluster can begin as a single node, while
+having the superpower of distributing tables. At a few dollars a day, it's the
+most cost-effective way to experience Azure Cosmos DB for PostgreSQL. Later, if your
+application requires greater scale, you can add nodes and rebalance your data.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Try the quickstart >](quickstart-create-portal.md)
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
+
+ Title: Product updates for Azure Cosmos DB for PostgreSQL
+description: New features and features in preview
++++++ Last updated : 09/27/2022++
+# Product updates for Azure Cosmos DB for PostgreSQL
++
+## Updates feed
+
+The Microsoft Azure website lists newly available features per product, plus
+features in preview and development. Check the [Azure Cosmos DB for PostgreSQL
+updates](https://azure.microsoft.com/updates/?category=databases&query=%22Cosmos%20DB%20for%20PostgreSQL%22)
+section for the latest. An RSS feed is also available on that page.
+
+## Features in preview
+
+Azure Cosmos DB for PostgreSQL offers
+previews for unreleased features. Preview versions are provided
+without a service level agreement, and aren't recommended for
+production workloads. Certain features might not be supported or
+might have constrained capabilities. For more information, see
+[Supplemental Terms of Use for Microsoft Azure
+Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
+
+Here are the features currently available for preview:
+
+* **[pgAudit](concepts-audit.md)**. Provides detailed
+ session and object audit logging via the standard PostgreSQL
+ logging facility. It produces audit logs required to pass
+ certain government, financial, or ISO certification audits.
+
+## Contact us
+
+Let us know about your experience using preview features by emailing [Ask
+Azure Cosmos DB for PostgreSQL](mailto:AskCosmosDB4Postgres@microsoft.com).
+(This email address isn't a technical support channel. For technical problems,
+open a [support
+request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).)
cosmos-db Quickstart App Stacks Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-app-stacks-csharp.md
+
 Title: Use C# to connect and run SQL commands on Azure Cosmos DB for PostgreSQL
+description: See how to use C# to connect and run SQL statements on Azure Cosmos DB for PostgreSQL.
++++++
+recommendations: false
Last updated : 09/27/2022++
+# Use C# to connect and run SQL commands on Azure Cosmos DB for PostgreSQL
+++
+This quickstart shows you how to use C# code to connect to a cluster, and then use SQL statements to create a table and insert, query, update, and delete data in the database. The steps in this article assume that you're familiar with C# development, and are new to working with Azure Cosmos DB for PostgreSQL.
+
+> [!TIP]
+> The process of creating a C# app with Azure Cosmos DB for PostgreSQL is the same as working with ordinary PostgreSQL.
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free).
+- [Visual Studio](https://www.visualstudio.com/downloads) with the .NET desktop development workload installed. Or [install the .NET SDK](https://dotnet.microsoft.com/download) for your Windows, Ubuntu Linux, or macOS platform.
+- In Visual Studio, a C# console project with the [Npgsql](https://www.nuget.org/packages/Npgsql) NuGet package installed.
+- An Azure Cosmos DB for PostgreSQL cluster. To create a cluster, see [Create a cluster in the Azure portal](quickstart-create-portal.md).
+
+The code samples in this article use your cluster name and password. You can see your cluster name at the top of your cluster page in the Azure portal.
++
+## Connect, create a table, and insert data
+
+In Visual Studio, use the following code to connect to your cluster and load data using CREATE TABLE and INSERT INTO SQL statements. The code uses these `NpgsqlCommand` class methods:
+
+* [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to Azure Cosmos DB for PostgreSQL
+* [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) to set the CommandText property
+* [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run database commands
++
+In the following code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```csharp
+using System;
+using Npgsql;
+namespace Driver
+{
+ public class AzurePostgresCreate
+ {
+
+ static void Main(string[] args)
+ {
+ // Replace <cluster> with your cluster name and <password> with your password:
+ var connStr = new NpgsqlConnectionStringBuilder("Server = c.<cluster>.postgres.database.azure.com; Database = citus; Port = 5432; User Id = citus; Password = <password>; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50 ");
+
+ connStr.TrustServerCertificate = true;
+
+ using (var conn = new NpgsqlConnection(connStr.ToString()))
+ {
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+ using (var command = new NpgsqlCommand("DROP TABLE IF EXISTS pharmacy;", conn))
+ {
+ command.ExecuteNonQuery();
+ Console.Out.WriteLine("Finished dropping table (if existed)");
+ }
+ using (var command = new NpgsqlCommand("CREATE TABLE pharmacy (pharmacy_id integer ,pharmacy_name text,city text,state text,zip_code integer);", conn))
+ {
+ command.ExecuteNonQuery();
+ Console.Out.WriteLine("Finished creating table");
+ }
+ using (var command = new NpgsqlCommand("CREATE INDEX idx_pharmacy_id ON pharmacy(pharmacy_id);", conn))
+ {
+ command.ExecuteNonQuery();
+ Console.Out.WriteLine("Finished creating index");
+ }
+ using (var command = new NpgsqlCommand("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (@n1, @q1, @a, @b, @c)", conn))
+ {
+ command.Parameters.AddWithValue("n1", 0);
+ command.Parameters.AddWithValue("q1", "Target");
+ command.Parameters.AddWithValue("a", "Sunnyvale");
+ command.Parameters.AddWithValue("b", "California");
+ command.Parameters.AddWithValue("c", 94001);
+ int nRows = command.ExecuteNonQuery();
+ Console.Out.WriteLine(String.Format("Number of rows inserted={0}", nRows));
+ }
+
+ }
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+## Distribute tables
+
+Azure Cosmos DB for PostgreSQL gives you [the superpower of distributing tables](introduction.md) across multiple nodes for scalability. Use the following code to distribute a table. You can learn more about `create_distributed_table` and the distribution column at [Distribution column (also known as shard key)](quickstart-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key).
+
+> [!NOTE]
+> Distributing tables lets them grow across any worker nodes added to the cluster.
+
+In the following code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```csharp
+using System;
+using Npgsql;
+namespace Driver
+{
+ public class AzurePostgresCreate
+ {
+
+ static void Main(string[] args)
+ {
+ // Replace <cluster> with your cluster name and <password> with your password:
+            var connStr = new NpgsqlConnectionStringBuilder("Server = c.<cluster>.postgres.database.azure.com; Database = citus; Port = 5432; User Id = citus; Password = <password>; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50");
+
+ connStr.TrustServerCertificate = true;
+
+ using (var conn = new NpgsqlConnection(connStr.ToString()))
+ {
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+ using (var command = new NpgsqlCommand("select create_distributed_table('pharmacy','pharmacy_id');", conn))
+ {
+ command.ExecuteNonQuery();
+ Console.Out.WriteLine("Finished distributing the table");
+ }
+
+ }
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+## Read data
+
+Use the following code to connect and read the data by using a SELECT SQL statement. The code uses these `NpgsqlCommand` class methods:
+
+* [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to Azure Cosmos DB for PostgreSQL.
+* [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) and [ExecuteReader()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteReader) to run the database commands.
+* [Read()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_Read) to advance to the record in the results.
+* [GetInt32()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_GetInt32_System_Int32_) and [GetString()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_GetString_System_Int32_) to parse the values in the record.
+
+In the following code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```csharp
+using System;
+using Npgsql;
+namespace Driver
+{
+ public class read
+ {
+
+ static void Main(string[] args)
+ {
+ // Replace <cluster> with your cluster name and <password> with your password:
+ var connStr = new NpgsqlConnectionStringBuilder("Server = c.<cluster>.postgres.database.azure.com; Database = citus; Port = 5432; User Id = citus; Password = <password>; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50 ");
+
+ connStr.TrustServerCertificate = true;
+
+ using (var conn = new NpgsqlConnection(connStr.ToString()))
+ {
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+ using (var command = new NpgsqlCommand("SELECT * FROM pharmacy", conn))
+ {
+ var reader = command.ExecuteReader();
+ while (reader.Read())
+ {
+ Console.WriteLine(
+ string.Format(
+ "Reading from table=({0}, {1}, {2}, {3}, {4})",
+ reader.GetInt32(0).ToString(),
+ reader.GetString(1),
+ reader.GetString(2),
+ reader.GetString(3),
+ reader.GetInt32(4).ToString()
+ )
+ );
+ }
+ reader.Close();
+ }
+ }
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+## Update data
+
+Use the following code to connect and update data by using an UPDATE SQL statement. In the code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```csharp
+using System;
+using Npgsql;
+namespace Driver
+{
+ public class AzurePostgresUpdate
+ {
+ static void Main(string[] args)
+ {
+ // Replace <cluster> with your cluster name and <password> with your password:
+ var connStr = new NpgsqlConnectionStringBuilder("Server = c.<cluster>.postgres.database.azure.com; Database = citus; Port = 5432; User Id = citus; Password = <password>; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50 ");
+
+ connStr.TrustServerCertificate = true;
+
+ using (var conn = new NpgsqlConnection(connStr.ToString()))
+ {
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+ using (var command = new NpgsqlCommand("UPDATE pharmacy SET city = @q WHERE pharmacy_id = @n", conn))
+ {
+ command.Parameters.AddWithValue("n", 0);
+ command.Parameters.AddWithValue("q", "guntur");
+ int nRows = command.ExecuteNonQuery();
+ Console.Out.WriteLine(String.Format("Number of rows updated={0}", nRows));
+ }
+ }
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+## Delete data
+
+Use the following code to connect and delete data by using a DELETE SQL statement. In the code, replace \<cluster> with your cluster name and \<password> with your administrator password.
++
+```csharp
+using System;
+using Npgsql;
+namespace Driver
+{
+ public class AzurePostgresDelete
+ {
+
+ static void Main(string[] args)
+ {
+ // Replace <cluster> with your cluster name and <password> with your password:
+            var connStr = new NpgsqlConnectionStringBuilder("Server = c.<cluster>.postgres.database.azure.com; Database = citus; Port = 5432; User Id = citus; Password = <password>; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50 ");
+
+ connStr.TrustServerCertificate = true;
+
+ using (var conn = new NpgsqlConnection(connStr.ToString()))
+ {
+
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+ using (var command = new NpgsqlCommand("DELETE FROM pharmacy WHERE pharmacy_id = @n", conn))
+ {
+ command.Parameters.AddWithValue("n", 0);
+ int nRows = command.ExecuteNonQuery();
+ Console.Out.WriteLine(String.Format("Number of rows deleted={0}", nRows));
+ }
+ }
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+## COPY command for fast ingestion
+
+The COPY command can yield [tremendous throughput](https://www.citusdata.com/blog/2016/06/15/copy-postgresql-distributed-tables) while ingesting data into Azure Cosmos DB for PostgreSQL. The COPY command can ingest data in files, or from micro-batches of data in memory for real-time ingestion.
+
+### COPY command to load data from a file
+
+The following example code copies data from a CSV file to a database table.
+
+The code sample requires the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv) to be in your *Documents* folder. In the code, replace \<cluster> with your cluster name and \<password> with your administrator password.
++
+```csharp
+using System;
+using System.IO;
+using Npgsql;
+public class csvtotable
+{
+
+ static void Main(string[] args)
+ {
+ String sDestinationSchemaAndTableName = "pharmacy";
+ String sFromFilePath = "C:\\Users\\Documents\\pharmacies.csv";
+
+ // Replace <cluster> with your cluster name and <password> with your password:
+ var connStr = new NpgsqlConnectionStringBuilder("Server = c.<cluster>.postgres.database.azure.com; Database = citus; Port = 5432; User Id = citus; Password = <password>; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50 ");
+
+ connStr.TrustServerCertificate = true;
+
+ NpgsqlConnection conn = new NpgsqlConnection(connStr.ToString());
+ NpgsqlCommand cmd = new NpgsqlCommand();
+
+ conn.Open();
+
+ if (File.Exists(sFromFilePath))
+ {
+ using (var writer = conn.BeginTextImport("COPY " + sDestinationSchemaAndTableName + " FROM STDIN WITH(FORMAT CSV, HEADER true,NULL ''); "))
+ {
+ foreach (String sLine in File.ReadAllLines(sFromFilePath))
+ {
+ writer.WriteLine(sLine);
+ }
+ }
+            Console.WriteLine("CSV file data copied successfully");
+ }
+ }
+}
+```
+
+### COPY command to load in-memory data
+
+The following example code copies in-memory data to a table. In the code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Npgsql;
+using NpgsqlTypes;
+namespace Driver
+{
+ public class InMemory
+ {
+
+ static async Task Main(string[] args)
+ {
+
+ // Replace <cluster> with your cluster name and <password> with your password:
+ var connStr = new NpgsqlConnectionStringBuilder("Server = c.<cluster>.postgres.database.azure.com; Database = citus; Port = 5432; User Id = citus; Password = <password>; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50 ");
+
+ connStr.TrustServerCertificate = true;
+
+ using (var conn = new NpgsqlConnection(connStr.ToString()))
+ {
+ conn.Open();
+ var text = new dynamic[] { 0, "Target", "Sunnyvale", "California", 94001 };
+ using (var writer = conn.BeginBinaryImport("COPY pharmacy FROM STDIN (FORMAT BINARY)"))
+ {
+ writer.StartRow();
+ foreach (var item in text)
+ {
+ writer.Write(item);
+ }
+ writer.Complete();
+ }
+                Console.WriteLine("in-memory data copied successfully");
+ }
+ }
+ }
+}
+```
+## App retry for database request failures
++
+The following sample shows basic retry logic for database requests: if a query fails, the code waits 60 seconds and then retries, up to the specified number of attempts. In this code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```csharp
+using System;
+using System.Data;
+using System.Runtime.InteropServices;
+using System.Text;
+using System.Threading;
+using Npgsql;
+
+namespace Driver
+{
+ public class Reconnect
+ {
+
+ // Replace <cluster> with your cluster name and <password> with your password:
+ static string connStr = new NpgsqlConnectionStringBuilder("Server = c.<cluster>.postgres.database.azure.com; Database = citus; Port = 5432; User Id = citus; Password = <password>; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50;TrustServerCertificate = true").ToString();
+ static string executeRetry(string sql, int retryCount)
+ {
+ for (int i = 0; i < retryCount; i++)
+ {
+ try
+ {
+ using (var conn = new NpgsqlConnection(connStr))
+ {
+ conn.Open();
+ DataTable dt = new DataTable();
+ using (var _cmd = new NpgsqlCommand(sql, conn))
+ {
+ NpgsqlDataAdapter _dap = new NpgsqlDataAdapter(_cmd);
+ _dap.Fill(dt);
+ conn.Close();
+ if (dt != null)
+ {
+ if (dt.Rows.Count > 0)
+ {
+ int J = dt.Rows.Count;
+ StringBuilder sb = new StringBuilder();
+
+ for (int k = 0; k < dt.Rows.Count; k++)
+ {
+ for (int j = 0; j < dt.Columns.Count; j++)
+ {
+ sb.Append(dt.Rows[k][j] + ",");
+ }
+ sb.Remove(sb.Length - 1, 1);
+ sb.Append("\n");
+ }
+ return sb.ToString();
+ }
+ }
+ }
+ }
+ return null;
+ }
+ catch (Exception e)
+ {
+ Thread.Sleep(60000);
+ Console.WriteLine(e.Message);
+ }
+ }
+ return null;
+ }
+ static void Main(string[] args)
+ {
+ string result = executeRetry("select 1",5);
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+## Next steps
+
cosmos-db Quickstart App Stacks Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-app-stacks-java.md
+
+ Title: Java app to connect and run SQL commands on Azure Cosmos DB for PostgreSQL
+description: See how to create a Java app that connects and runs SQL statements on Azure Cosmos DB for PostgreSQL.
++++++
+recommendations: false
Last updated : 09/28/2022++
+# Java app to connect and run SQL commands on Azure Cosmos DB for PostgreSQL
+++
+This quickstart shows you how to build a Java app that connects to a cluster, and then uses SQL statements to create a table and insert, query, update, and delete data in the database. The steps in this article assume that you're familiar with Java development and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity), and are new to working with Azure Cosmos DB for PostgreSQL.
+
+> [!TIP]
+> The process of creating a Java app with Azure Cosmos DB for PostgreSQL is the same as working with ordinary PostgreSQL.
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free).
+- A supported [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8, which is included in [Azure Cloud Shell](/azure/cloud-shell/overview).
+- The [Apache Maven](https://maven.apache.org) build tool.
+- An Azure Cosmos DB for PostgreSQL cluster. To create a cluster, see [Create a cluster in the Azure portal](quickstart-create-portal.md).
+
+The code samples in this article use your cluster name and password. In the Azure portal, your cluster name appears at the top of your cluster page.
++
+## Set up the Java project and connection
+
+Create a new Java project and a configuration file to connect to Azure Cosmos DB for PostgreSQL.
+
+### Create a new Java project
+
+Using your favorite integrated development environment (IDE), create a new Java project with groupId `test` and artifactId `crud`. In the project's root directory, add a *pom.xml* file with the following contents. This file configures [Apache Maven](https://maven.apache.org) to use Java 8 and a recent PostgreSQL driver for Java.
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+
+ <groupId>test</groupId>
+ <artifactId>crud</artifactId>
+ <version>0.0.1-SNAPSHOT</version>
+ <packaging>jar</packaging>
+
+ <name>crud</name>
+ <url>http://www.example.com</url>
+
+ <properties>
+ <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+ <maven.compiler.source>1.8</maven.compiler.source>
+ <maven.compiler.target>1.8</maven.compiler.target>
+ </properties>
+
+ <dependencies>
+ <dependency>
+ <groupId>org.junit.jupiter</groupId>
+ <artifactId>junit-jupiter-engine</artifactId>
+ <version>5.7.1</version>
+ <scope>test</scope>
+ </dependency>
+ <dependency>
+ <groupId>org.postgresql</groupId>
+ <artifactId>postgresql</artifactId>
+ <version>42.2.12</version>
+ </dependency>
+ <!-- https://mvnrepository.com/artifact/com.zaxxer/HikariCP -->
+ <dependency>
+ <groupId>com.zaxxer</groupId>
+ <artifactId>HikariCP</artifactId>
+ <version>5.0.0</version>
+ </dependency>
+ <dependency>
+ <groupId>org.junit.jupiter</groupId>
+ <artifactId>junit-jupiter-params</artifactId>
+ <version>5.7.1</version>
+ <scope>test</scope>
+ </dependency>
+ </dependencies>
+
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-surefire-plugin</artifactId>
+ <version>3.0.0-M5</version>
+ </plugin>
+ </plugins>
+ </build>
+</project>
+```
+
+### Configure the database connection
+
+In *src/main/resources/*, create an *application.properties* file with the following contents. Replace \<cluster> with your cluster name, and replace \<password> with your administrative password.
+
+```properties
+driver.class.name=org.postgresql.Driver
+db.url=jdbc:postgresql://c.<cluster>.postgres.database.azure.com:5432/citus?ssl=true&sslmode=require
+db.username=citus
+db.password=<password>
+```
+
+The `?ssl=true&sslmode=require` string in the `db.url` property tells the JDBC driver to use Transport Layer Security (TLS) when connecting to the database. It's mandatory to use TLS with Azure Cosmos DB for PostgreSQL, and is a good security practice.
+
+## Create tables
+
+Configure a database schema that has distributed tables. Connect to the database to create the schema and tables.
+
+### Generate the database schema
+
+In *src/main/resources/*, create a *schema.sql* file with the following contents:
+
+```sql
+DROP TABLE IF EXISTS public.pharmacy;
+CREATE TABLE public.pharmacy(pharmacy_id integer,pharmacy_name text ,city text ,state text ,zip_code integer);
+CREATE INDEX idx_pharmacy_id ON public.pharmacy(pharmacy_id);
+```
+
+### Distribute tables
+
+Azure Cosmos DB for PostgreSQL gives you [the superpower of distributing tables](introduction.md) across multiple nodes for scalability. The following command distributes a table. You can learn more about `create_distributed_table` and the distribution column at [Distribution column (also known as shard key)](quickstart-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key).
+
+> [!NOTE]
+> Distributing tables lets them grow across any worker nodes added to the cluster.
+
+To distribute tables, append the following line to the *schema.sql* file you created in the previous section.
+
+```SQL
+select create_distributed_table('public.pharmacy','pharmacy_id');
+```
+
+### Connect to the database and create the schema
+
+Next, add the Java code that uses JDBC to store and retrieve data from your cluster. The code uses the *application.properties* and *schema.sql* files to connect to the cluster and create the schema.
++
+1. Create a *DButil.java* file with the following code, which contains the `DButil` class. The `DButil` class sets up a connection pool to PostgreSQL using [HikariCP](https://github.com/brettwooldridge/HikariCP). You use this class to connect to PostgreSQL and start querying.
+
+ [!INCLUDE[why-connection-pooling](includes/why-connection-pooling.md)]
+
+ ```java
+ //DButil.java
+ package test.crud;
+
+ import java.io.FileInputStream;
+ import java.io.IOException;
+ import java.sql.SQLException;
+ import java.util.Properties;
+
+ import javax.sql.DataSource;
+
+ import com.zaxxer.hikari.HikariDataSource;
+
+ public class DButil {
+ private static final String DB_USERNAME = "db.username";
+ private static final String DB_PASSWORD = "db.password";
+ private static final String DB_URL = "db.url";
+ private static final String DB_DRIVER_CLASS = "driver.class.name";
+ private static Properties properties = null;
+ private static HikariDataSource datasource;
+
+ static {
+ try {
+ properties = new Properties();
+                properties.load(new FileInputStream("src/main/resources/application.properties"));
+
+ datasource = new HikariDataSource();
+ datasource.setDriverClassName(properties.getProperty(DB_DRIVER_CLASS ));
+ datasource.setJdbcUrl(properties.getProperty(DB_URL));
+ datasource.setUsername(properties.getProperty(DB_USERNAME));
+ datasource.setPassword(properties.getProperty(DB_PASSWORD));
+ datasource.setMinimumIdle(100);
+ datasource.setMaximumPoolSize(1000000000);
+ datasource.setAutoCommit(true);
+ datasource.setLoginTimeout(3);
+ } catch (IOException | SQLException e) {
+ e.printStackTrace();
+ }
+ }
+ public static DataSource getDataSource() {
+ return datasource;
+ }
+ }
+ ```
+
+1. In *src/main/java/*, create a *DemoApplication.java* file that contains the following code:
+
+ ``` java
+ package test.crud;
+ import java.io.IOException;
+ import java.sql.*;
+ import java.util.*;
+ import java.util.logging.Logger;
+ import java.io.FileInputStream;
+ import java.io.FileOutputStream;
+ import org.postgresql.copy.CopyManager;
+ import org.postgresql.core.BaseConnection;
+ import java.io.IOException;
+ import java.io.Reader;
+ import java.io.StringReader;
+
+ public class DemoApplication {
+
+ private static final Logger log;
+
+ static {
+ System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
+ log =Logger.getLogger(DemoApplication.class.getName());
+ }
+ public static void main(String[] args)throws Exception
+ {
+ log.info("Connecting to the database");
+ Connection connection = DButil.getDataSource().getConnection();
+ System.out.println("The Connection Object is of Class: " + connection.getClass());
+ log.info("Database connection test: " + connection.getCatalog());
+ log.info("Creating table");
+ log.info("Creating index");
+ log.info("distributing table");
+ Scanner scanner = new Scanner(DemoApplication.class.getClassLoader().getResourceAsStream("schema.sql"));
+ Statement statement = connection.createStatement();
+ while (scanner.hasNextLine()) {
+ statement.execute(scanner.nextLine());
+ }
+ log.info("Closing database connection");
+ connection.close();
+ }
+
+ }
+ ```
+
+ > [!NOTE]
+    > The database `user` and `password` credentials are read from the *application.properties* file when the `DButil` class initializes the HikariCP connection pool.
+
+1. You can now execute this main class with your favorite tool:
+
+ - Using your IDE, you should be able to right-click on the `DemoApplication` class and execute it.
+    - Using Maven, you can run the application by executing:<br>`mvn exec:java -Dexec.mainClass="test.crud.DemoApplication"`.
+
+The application should connect to Azure Cosmos DB for PostgreSQL, create a database schema, and then close the connection, as you can see in the console logs:
+
+```output
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: citus
+[INFO ] Create database schema
+[INFO ] Closing database connection
+```
+
+## Create a domain class
+
+Create a new `Pharmacy` Java class, next to the `DemoApplication` class, and add the following code:
+
+``` java
+public class Pharmacy {
+ private Integer pharmacy_id;
+ private String pharmacy_name;
+ private String city;
+ private String state;
+ private Integer zip_code;
+ public Pharmacy() { }
+ public Pharmacy(Integer pharmacy_id, String pharmacy_name, String city,String state,Integer zip_code)
+ {
+ this.pharmacy_id = pharmacy_id;
+ this.pharmacy_name = pharmacy_name;
+ this.city = city;
+ this.state = state;
+ this.zip_code = zip_code;
+ }
+
+ public Integer getpharmacy_id() {
+ return pharmacy_id;
+ }
+
+ public void setpharmacy_id(Integer pharmacy_id) {
+ this.pharmacy_id = pharmacy_id;
+ }
+
+ public String getpharmacy_name() {
+ return pharmacy_name;
+ }
+
+ public void setpharmacy_name(String pharmacy_name) {
+ this.pharmacy_name = pharmacy_name;
+ }
+
+ public String getcity() {
+ return city;
+ }
+
+ public void setcity(String city) {
+ this.city = city;
+ }
+
+ public String getstate() {
+ return state;
+ }
+
+ public void setstate(String state) {
+ this.state = state;
+ }
+
+ public Integer getzip_code() {
+ return zip_code;
+ }
+
+ public void setzip_code(Integer zip_code) {
+ this.zip_code = zip_code;
+ }
+ @Override
+ public String toString() {
+        return "Pharmacy{" +
+ "pharmacy_id=" + pharmacy_id +
+ ", pharmacy_name='" + pharmacy_name + '\'' +
+ ", city='" + city + '\'' +
+ ", state='" + state + '\'' +
+ ", zip_code='" + zip_code + '\'' +
+ '}';
+ }
+}
+```
+
+This class is a domain model mapped to the `pharmacy` table that you created when executing the *schema.sql* script.
+
+## Insert data
+
+In the *DemoApplication.java* file, after the `main` method, add the following method that uses the INSERT INTO SQL statement to insert data into the database:
+
+``` Java
+private static void insertData(Pharmacy todo, Connection connection) throws SQLException {
+ log.info("Insert data");
+ PreparedStatement insertStatement = connection
+ .prepareStatement("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (?, ?, ?, ?, ?);");
+
+ insertStatement.setInt(1, todo.getpharmacy_id());
+ insertStatement.setString(2, todo.getpharmacy_name());
+ insertStatement.setString(3, todo.getcity());
+ insertStatement.setString(4, todo.getstate());
+ insertStatement.setInt(5, todo.getzip_code());
+
+ insertStatement.executeUpdate();
+}
+```
+
+Add the two following lines in the main method:
+
+```java
+Pharmacy todo = new Pharmacy(0,"Target","Sunnyvale","California",94001);
+insertData(todo, connection);
+```
+
+Executing the main class should now produce the following output:
+
+```output
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: citus
+[INFO ] Creating table
+[INFO ] Creating index
+[INFO ] distributing table
+[INFO ] Insert data
+[INFO ] Closing database connection
+```
+
+## Read data
+
+Read the data you previously inserted to validate that your code works correctly.
+
+In the *DemoApplication.java* file, after the `insertData` method, add the following method that uses the SELECT SQL statement to read data from the database:
+
+``` java
+private static Pharmacy readData(Connection connection) throws SQLException {
+ log.info("Read data");
+ PreparedStatement readStatement = connection.prepareStatement("SELECT * FROM Pharmacy;");
+ ResultSet resultSet = readStatement.executeQuery();
+ if (!resultSet.next()) {
+ log.info("There is no data in the database!");
+ return null;
+ }
+ Pharmacy todo = new Pharmacy();
+ todo.setpharmacy_id(resultSet.getInt("pharmacy_id"));
+ todo.setpharmacy_name(resultSet.getString("pharmacy_name"));
+ todo.setcity(resultSet.getString("city"));
+ todo.setstate(resultSet.getString("state"));
+ todo.setzip_code(resultSet.getInt("zip_code"));
+ log.info("Data read from the database: " + todo.toString());
+ return todo;
+}
+```
+
+Add the following line in the main method:
+
+``` java
+todo = readData(connection);
+```
+
+Executing the main class should now produce the following output:
+
+```output
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: citus
+[INFO ] Creating table
+[INFO ] Creating index
+[INFO ] distributing table
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Sunnyvale', state='California', zip_code='94001'}
+[INFO ] Closing database connection
+```
+
+## Update data
+
+Update the data you previously inserted.
+
+Still in the *DemoApplication.java* file, after the `readData` method, add the following method to update data inside the database by using the UPDATE SQL statement:
+
+``` java
+private static void updateData(Pharmacy todo, Connection connection) throws SQLException {
+ log.info("Update data");
+ PreparedStatement updateStatement = connection
+ .prepareStatement("UPDATE pharmacy SET city = ? WHERE pharmacy_id = ?;");
+
+ updateStatement.setString(1, todo.getcity());
+
+ updateStatement.setInt(2, todo.getpharmacy_id());
+ updateStatement.executeUpdate();
+ readData(connection);
+}
+
+```
+
+Add the two following lines in the main method:
+
+``` java
+todo.setcity("Guntur");
+updateData(todo, connection);
+```
+
+Executing the main class should now produce the following output:
+
+```output
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: citus
+[INFO ] Creating table
+[INFO ] Creating index
+[INFO ] distributing table
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Sunnyvale', state='California', zip_code='94001'}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Guntur', state='California', zip_code='94001'}
+[INFO ] Closing database connection
+```
+
+## Delete data
+
+Finally, delete the data you previously inserted. Still in the *DemoApplication.java* file, after the `updateData` method, add the following method to delete data inside the database by using the DELETE SQL statement:
+
+``` java
+private static void deleteData(Pharmacy todo, Connection connection) throws SQLException {
+ log.info("Delete data");
+ PreparedStatement deleteStatement = connection.prepareStatement("DELETE FROM pharmacy WHERE pharmacy_id = ?;");
+ deleteStatement.setLong(1, todo.getpharmacy_id());
+ deleteStatement.executeUpdate();
+ readData(connection);
+}
+```
+
+You can now add the following line in the main method:
+
+``` java
+deleteData(todo, connection);
+```
+
+Executing the main class should now produce the following output:
+
+```output
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: citus
+[INFO ] Creating table
+[INFO ] Creating index
+[INFO ] distributing table
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Sunnyvale', state='California', zip_code='94001'}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Guntur', state='California', zip_code='94001'}
+[INFO ] Delete data
+[INFO ] Read data
+[INFO ] There is no data in the database!
+[INFO ] Closing database connection
+```
+
+## COPY command for fast ingestion
+
+The COPY command can yield [tremendous throughput](https://www.citusdata.com/blog/2016/06/15/copy-postgresql-distributed-tables) while ingesting data into Azure Cosmos DB for PostgreSQL. The COPY command can ingest data in files, or from micro-batches of data in memory for real-time ingestion.
+
+### COPY command to load data from a file
+
+The following code copies data from a CSV file to a database table. The code sample requires the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv).
+
+```java
+public static long copyFromFile(Connection connection, String filePath, String tableName)
+        throws SQLException, IOException {
+ long count = 0;
+ FileInputStream fileInputStream = null;
+
+ try {
+        Connection unwrap = connection.unwrap(Connection.class);
+        BaseConnection connSec = (BaseConnection) unwrap;
+
+        CopyManager copyManager = new CopyManager(connSec);
+ fileInputStream = new FileInputStream(filePath);
+ count = copyManager.copyIn("COPY " + tableName + " FROM STDIN delimiter ',' csv", fileInputStream);
+ } finally {
+ if (fileInputStream != null) {
+ try {
+ fileInputStream.close();
+ } catch (IOException e) {
+ e.printStackTrace();
+ }
+ }
+ }
+ return count;
+}
+```
+
+You can now add the following lines in the main method:
+
+``` java
+int c = (int) copyFromFile(connection,"C:\\Users\\pharmacies.csv", "pharmacy");
+log.info("Copied "+ c +" rows using COPY command");
+```
+
+Executing the `main` class should now produce the following output:
+
+```output
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: citus
+[INFO ] Creating table
+[INFO ] Creating index
+[INFO ] distributing table
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Sunnyvale', state='California', zip_code='94001'}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Guntur', state='California', zip_code='94001'}
+[INFO ] Delete data
+[INFO ] Read data
+[INFO ] There is no data in the database!
+[INFO ] Copied 5000 rows using COPY command
+[INFO ] Closing database connection
+```
+
+### COPY command to load in-memory data
+
+The following code copies in-memory data to a table.
+
+```java
+private static void inMemory(Connection connection) throws SQLException, IOException {
+    log.info("Copying in-memory data into table");
+
+ final List<String> rows = new ArrayList<>();
+ rows.add("0,Target,Sunnyvale,California,94001");
+ rows.add("1,Apollo,Guntur,Andhra,94003");
+
+ final BaseConnection baseConnection = (BaseConnection) connection.unwrap(Connection.class);
+ final CopyManager copyManager = new CopyManager(baseConnection);
+
+    // The COPY command can change based on the format of the rows. This command matches the rows above.
+ final String copyCommand = "COPY pharmacy FROM STDIN with csv";
+
+ try (final Reader reader = new StringReader(String.join("\n", rows))) {
+ copyManager.copyIn(copyCommand, reader);
+ }
+}
+```
+
+You can now add the following line in the main method:
+
+``` java
+inMemory(connection);
+```
+
+Executing the main class should now produce the following output:
+
+```output
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: citus
+[INFO ] Creating table
+[INFO ] Creating index
+[INFO ] distributing table
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Sunnyvale', state='California', zip_code='94001'}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Guntur', state='California', zip_code='94001'}
+[INFO ] Delete data
+[INFO ] Read data
+[INFO ] There is no data in the database!
+[INFO ] Copied 5000 rows using COPY command
+[INFO ] Copying in-memory data into table
+[INFO ] Closing database connection
+```
+
+## App retry for database request failures
++
+In this code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```java
+package test.crud;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.util.logging.Logger;
+import com.zaxxer.hikari.HikariDataSource;
+
+public class DemoApplication
+{
+ private static final Logger log;
+
+ static
+ {
+ System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
+ log = Logger.getLogger(DemoApplication.class.getName());
+ }
+ private static final String DB_USERNAME = "citus";
+ private static final String DB_PASSWORD = "<password>";
+ private static final String DB_URL = "jdbc:postgresql://c.<cluster>.postgres.database.azure.com:5432/citus?sslmode=require";
+ private static final String DB_DRIVER_CLASS = "org.postgresql.Driver";
+ private static HikariDataSource datasource;
+
+ private static String executeRetry(String sql, int retryCount) throws InterruptedException
+ {
+ Connection con = null;
+ PreparedStatement pst = null;
+ ResultSet rs = null;
+ for (int i = 1; i <= retryCount; i++)
+ {
+ try
+ {
+ datasource = new HikariDataSource();
+ datasource.setDriverClassName(DB_DRIVER_CLASS);
+ datasource.setJdbcUrl(DB_URL);
+ datasource.setUsername(DB_USERNAME);
+ datasource.setPassword(DB_PASSWORD);
+ datasource.setMinimumIdle(10);
+ datasource.setMaximumPoolSize(1000);
+ datasource.setAutoCommit(true);
+ datasource.setLoginTimeout(3);
+ log.info("Connecting to the database");
+ con = datasource.getConnection();
+ log.info("Connection established");
+ log.info("Read data");
+ pst = con.prepareStatement(sql);
+ rs = pst.executeQuery();
+ StringBuilder builder = new StringBuilder();
+ int columnCount = rs.getMetaData().getColumnCount();
+ while (rs.next())
+ {
+ for (int j = 0; j < columnCount;)
+ {
+ builder.append(rs.getString(j + 1));
+ if (++j < columnCount)
+ builder.append(",");
+ }
+ builder.append("\r\n");
+ }
+ return builder.toString();
+ }
+ catch (Exception e)
+ {
+ Thread.sleep(60000);
+ System.out.println(e.getMessage());
+ }
+ }
+ return null;
+ }
+
+ public static void main(String[] args) throws Exception
+ {
+ String result = executeRetry("select 1", 5);
+ System.out.print(result);
+ }
+}
+```
+
+## Next steps
+
cosmos-db Quickstart App Stacks Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-app-stacks-nodejs.md
+
+ Title: Use Node.js to connect and query Azure Cosmos DB for PostgreSQL
+description: See how to use Node.js to connect and run SQL statements on Azure Cosmos DB for PostgreSQL.
++++++
+recommendations: false
Last updated : 09/28/2022++
+# Use Node.js to connect and run SQL commands on Azure Cosmos DB for PostgreSQL
+++
+This quickstart shows you how to use Node.js code to connect to a cluster, and then use SQL statements to create a table and insert, query, update, and delete data in the database. The steps in this article assume that you're familiar with Node.js development, and are new to working with Azure Cosmos DB for PostgreSQL.
+
+> [!TIP]
+> The process of creating a Node.js app with Azure Cosmos DB for PostgreSQL is the same as working with ordinary PostgreSQL.
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free).
+- An Azure Cosmos DB for PostgreSQL cluster. To create a cluster, see [Create a cluster in the Azure portal](quickstart-create-portal.md).
+- [Node.js](https://nodejs.org) installed.
+- For various samples, the following packages installed:
+
+ - [pg](https://www.npmjs.com/package/pg) PostgreSQL client for Node.js.
+ - [pg-copy-streams](https://www.npmjs.com/package/pg-copy-streams).
+ - [through2](https://www.npmjs.com/package/through2) to allow pipe chaining.
+
+ Install these packages from your command line by using the JavaScript `npm` node package manager.
+
+ ```bash
+ npm install <package name>
+ ```
+
+ Verify the installation by listing the packages installed.
+
+ ```bash
+ npm list
+ ```
+
+You can launch Node.js from the Bash shell, terminal, or Windows command prompt by typing `node`. Then run the example JavaScript code interactively by copying and pasting the code into the prompt. Or, you can save the JavaScript code into a *\<filename>.js* file, and then run `node <filename>.js` with the file name as a parameter.
+
+> [!NOTE]
+> Because each code sample finishes by ending the connection pool, you need to start a new Node.js session to build a new pool for each of the samples.
+
+The code samples in this article use your cluster name and password. You can see your cluster name at the top of your cluster page in the Azure portal.
++
+## Connect, create a table, and insert data
+
+All examples in this article need to connect to the database. You can put the connection logic into its own module for reuse. Use the [pg](https://node-postgres.com) client object to interface with the PostgreSQL server.
++
+### Create the common connection module
+
+Create a folder called *db*, and inside this folder create a *citus.js* file that contains the following common connection code. In this code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```javascript
+/**
+* file: db/citus.js
+*/
+
+const { Pool } = require('pg');
+
+const pool = new Pool({
+ max: 300,
+ connectionTimeoutMillis: 5000,
+
+ host: 'c.<cluster>.postgres.database.azure.com',
+ port: 5432,
+ user: 'citus',
+ password: '<password>',
+ database: 'citus',
+ ssl: true,
+});
+
+module.exports = {
+ pool,
+};
+```
+
+### Create a table
+
+Use the following code to connect and load the data by using CREATE TABLE
+and INSERT INTO SQL statements. The code creates a new `pharmacy` table and inserts some sample data.
+
+```javascript
+/**
+* file: create.js
+*/
+
+const { pool } = require('./db/citus');
+
+async function queryDatabase() {
+ const queryString = `
+ DROP TABLE IF EXISTS pharmacy;
+ CREATE TABLE pharmacy (pharmacy_id integer,pharmacy_name text,city text,state text,zip_code integer);
+ INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (0,'Target','Sunnyvale','California',94001);
+ INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (1,'CVS','San Francisco','California',94002);
+ INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (2,'Walgreens','San Diego','California',94003);
+ CREATE INDEX idx_pharmacy_id ON pharmacy(pharmacy_id);
+ `;
+
+ try {
+ /* Real application code would probably request a dedicated client with
+ pool.connect() and run multiple queries with the client. In this
+ example, you're running only one query, so you use the pool.query()
+ helper method to run it on the first available idle client.
+ */
+
+ await pool.query(queryString);
+ console.log('Created the Pharmacy table and inserted rows.');
+ } catch (err) {
+ console.log(err.stack);
+ } finally {
+ pool.end();
+ }
+}
+
+queryDatabase();
+```
+
+## Distribute tables
+
+Azure Cosmos DB for PostgreSQL gives you [the super power of distributing tables](introduction.md) across multiple nodes for scalability. The command below enables you to distribute a table. You can learn more about `create_distributed_table` and the distribution column [here](quickstart-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key).
+
+> [!NOTE]
+> Distributing tables lets them grow across any worker nodes added to the cluster.
+
+Use the following code to connect to the database and distribute the table.
+
+```javascript
+/**
+* file: distribute-table.js
+*/
+
+const { pool } = require('./db/citus');
+
+async function queryDatabase() {
+ const queryString = `
+ SELECT create_distributed_table('pharmacy', 'pharmacy_id');
+ `;
+
+ try {
+ await pool.query(queryString);
+ console.log('Distributed pharmacy table.');
+ } catch (err) {
+ console.log(err.stack);
+ } finally {
+ pool.end();
+ }
+}
+
+queryDatabase();
+```
+
+## Read data
+
+Use the following code to connect and read the data by using a SELECT SQL statement.
+
+```javascript
+/**
+* file: read.js
+*/
+
+const { pool } = require('./db/citus');
+
+async function queryDatabase() {
+ const queryString = `
+ SELECT * FROM pharmacy;
+ `;
+
+ try {
+ const res = await pool.query(queryString);
+ console.log(res.rows);
+ } catch (err) {
+ console.log(err.stack);
+ } finally {
+ pool.end();
+ }
+}
+
+queryDatabase();
+```
+
+## Update data
+
+Use the following code to connect and update the data by using an UPDATE SQL statement.
+
+```javascript
+/**
+* file: update.js
+*/
+
+const { pool } = require('./db/citus');
+
+async function queryDatabase() {
+ const queryString = `
+ UPDATE pharmacy SET city = 'Long Beach'
+ WHERE pharmacy_id = 1;
+ `;
+
+ try {
+ const result = await pool.query(queryString);
+ console.log('Update completed.');
+ console.log(`Rows affected: ${result.rowCount}`);
+ } catch (err) {
+ console.log(err.stack);
+ } finally {
+ pool.end();
+ }
+}
+
+queryDatabase();
+```
+
+## Delete data
+
+Use the following code to connect and delete the data by using a DELETE SQL statement.
+
+```javascript
+/**
+* file: delete.js
+*/
+
+const { pool } = require('./db/citus');
+
+async function queryDatabase() {
+ const queryString = `
+ DELETE FROM pharmacy
+ WHERE pharmacy_name = 'Target';
+ `;
+
+ try {
+ const result = await pool.query(queryString);
+ console.log('Delete completed.');
+ console.log(`Rows affected: ${result.rowCount}`);
+ } catch (err) {
+ console.log(err.stack);
+ } finally {
+ pool.end();
+ }
+}
+
+queryDatabase();
+```
+
+## COPY command for fast ingestion
+
+The COPY command can yield [tremendous throughput](https://www.citusdata.com/blog/2016/06/15/copy-postgresql-distributed-tables) while ingesting data into Azure Cosmos DB for PostgreSQL. The COPY command can ingest data in files, or from micro-batches of data in memory for real-time ingestion.
+
+### COPY command to load data from a file
+
+The following code copies data from a CSV file to a database table. The code requires the [pg-copy-streams](https://www.npmjs.com/package/pg-copy-streams) package and the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv).
+
+```javascript
+/**
+* file: copycsv.js
+*/
+
+const inputFile = require('path').join(__dirname, '/pharmacies.csv');
+const fileStream = require('fs').createReadStream(inputFile);
+const copyFrom = require('pg-copy-streams').from;
+const { pool } = require('./db/citus');
+
+async function importCsvDatabase() {
+ return new Promise((resolve, reject) => {
+ const queryString = `
+ COPY pharmacy FROM STDIN WITH (FORMAT CSV, HEADER true, NULL '');
+ `;
+
+ fileStream.on('error', reject);
+
+ pool
+ .connect()
+ .then(client => {
+ const stream = client
+ .query(copyFrom(queryString))
+ .on('error', reject)
+ .on('end', () => {
+ reject(new Error('Connection closed!'));
+ })
+ .on('finish', () => {
+ client.release();
+ resolve();
+ });
+
+ fileStream.pipe(stream);
+ })
+ .catch(err => {
+ reject(new Error(err));
+ });
+ });
+}
+
+(async () => {
+ console.log('Copying from CSV...');
+ await importCsvDatabase();
+ await pool.end();
+ console.log('Inserted csv successfully');
+})();
+```
+
+### COPY command to load in-memory data
+
+The following code copies in-memory data to a table. The code requires the [through2](https://www.npmjs.com/package/through2) package, which allows pipe chaining.
+
+```javascript
+/**
+ * file: copyinmemory.js
+ */
+
+const through2 = require('through2');
+const copyFrom = require('pg-copy-streams').from;
+const { pool } = require('./db/citus');
+
+async function importInMemoryDatabase() {
+ return new Promise((resolve, reject) => {
+ pool
+ .connect()
+ .then(client => {
+ const stream = client
+ .query(copyFrom('COPY pharmacy FROM STDIN'))
+ .on('error', reject)
+ .on('end', () => {
+ reject(new Error('Connection closed!'));
+ })
+ .on('finish', () => {
+ client.release();
+ resolve();
+ });
+
+ const internDataset = [
+ ['100', 'Target', 'Sunnyvale', 'California', '94001'],
+ ['101', 'CVS', 'San Francisco', 'California', '94002'],
+ ];
+
+ let started = false;
+ const internStream = through2.obj((arr, _enc, cb) => {
+ const rowText = (started ? '\n' : '') + arr.join('\t');
+ started = true;
+ cb(null, rowText);
+ });
+
+ internStream.on('error', reject).pipe(stream);
+
+ internDataset.forEach((record) => {
+ internStream.write(record);
+ });
+
+ internStream.end();
+ })
+ .catch(err => {
+ reject(new Error(err));
+ });
+ });
+}
+(async () => {
+ await importInMemoryDatabase();
+ await pool.end();
+ console.log('Inserted inmemory data successfully.');
+})();
+```
+
+## App retry for database request failures
++
+In this code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```javascript
+const { Pool } = require('pg');
+const { sleep } = require('sleep');
+
+const pool = new Pool({
+ host: 'c.<cluster>.postgres.database.azure.com',
+ port: 5432,
+ user: 'citus',
+ password: '<password>',
+ database: 'citus',
+ ssl: true,
+ connectionTimeoutMillis: 0,
+ idleTimeoutMillis: 0,
+ min: 10,
+ max: 20,
+});
+
+(async function() {
+ res = await executeRetry('select nonexistent_thing;',5);
+ console.log(res);
+ process.exit(res ? 0 : 1);
+})();
+
+async function executeRetry(sql,retryCount)
+{
+ for (let i = 0; i < retryCount; i++) {
+ try {
+ result = await pool.query(sql)
+ return result;
+ } catch (err) {
+ console.log(err.message);
+ sleep(60);
+ }
+ }
+
+ // didn't succeed after all the tries
+ return null;
+}
+```
+
+## Next steps
+
cosmos-db Quickstart App Stacks Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-app-stacks-python.md
+
+ Title: Use Python to connect and run SQL on Azure Cosmos DB for PostgreSQL
+description: See how to use Python to connect and run SQL statements on Azure Cosmos DB for PostgreSQL.
++++++
+recommendations: false
Last updated : 09/28/2022++
+# Use Python to connect and run SQL commands on Azure Cosmos DB for PostgreSQL
+++
+This quickstart shows you how to use Python code on macOS, Ubuntu Linux, or Windows to connect to a cluster, and use SQL statements to create a table and insert, query, update, and delete data. The steps in this article assume that you're familiar with Python development, and are new to working with Azure Cosmos DB for PostgreSQL.
+
+> [!TIP]
+> The process of creating a Python app with Azure Cosmos DB for PostgreSQL is the same as working with ordinary PostgreSQL.
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free).
+- [Python](https://www.python.org/downloads) 2.7 or 3.6+.
+- The latest [pip](https://pip.pypa.io/en/stable/installing) package installer. Most versions of Python already install `pip`.
+- [psycopg2](https://pypi.python.org/pypi/psycopg2-binary) installed by using `pip` in a terminal or command prompt window. For more information, see [How to install psycopg2](https://www.psycopg.org/docs/install.html).
+- An Azure Cosmos DB for PostgreSQL cluster. To create a cluster, see [Create a cluster in the Azure portal](quickstart-create-portal.md).
+
+The code samples in this article use your cluster name and password. In the Azure portal, your cluster name appears at the top of your cluster page.
++
+## Connect, create a table, and insert data
+
+The following code example creates a connection pool to your Postgres database by using the [psycopg2.pool](https://www.psycopg.org/docs/pool.html) library, and uses `pool.getconn()` to get a connection from the pool. The code then uses [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) functions with SQL CREATE TABLE and INSERT INTO statements to create a table and insert data.
++
+In the following code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+> [!NOTE]
+> This example closes the connection at the end, so if you want to run the other samples in the article in the same session, don't include the `# Clean up` section when you run this sample.
+
+```python
+import psycopg2
+from psycopg2 import pool
+
+# NOTE: fill in these variables for your own cluster
+host = "c.<cluster>.postgres.database.azure.com"
+dbname = "citus"
+user = "citus"
+password = "<password>"
+sslmode = "require"
+
+# Build a connection string from the variables
+conn_string = "host={0} user={1} dbname={2} password={3} sslmode={4}".format(host, user, dbname, password, sslmode)
+
+postgreSQL_pool = psycopg2.pool.SimpleConnectionPool(1, 20,conn_string)
+if (postgreSQL_pool):
+ print("Connection pool created successfully")
+
+# Use getconn() to get a connection from the connection pool
+conn = postgreSQL_pool.getconn()
+
+cursor = conn.cursor()
+
+# Drop previous table of same name if one exists
+cursor.execute("DROP TABLE IF EXISTS pharmacy;")
+print("Finished dropping table (if existed)")
+
+# Create a table
+cursor.execute("CREATE TABLE pharmacy (pharmacy_id integer, pharmacy_name text, city text, state text, zip_code integer);")
+print("Finished creating table")
+
+# Create an index
+cursor.execute("CREATE INDEX idx_pharmacy_id ON pharmacy(pharmacy_id);")
+print("Finished creating index")
+
+# Insert some data into the table
+cursor.execute("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (%s, %s, %s, %s,%s);", (1,"Target","Sunnyvale","California",94001))
+cursor.execute("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (%s, %s, %s, %s,%s);", (2,"CVS","San Francisco","California",94002))
+print("Inserted 2 rows of data")
+
+# Clean up
+conn.commit()
+cursor.close()
+conn.close()
+```
+
+When the code runs successfully, it produces the following output:
+
+```output
+Connection pool created successfully
+Finished dropping table (if existed)
+Finished creating table
+Finished creating index
+Inserted 2 rows of data
+```
+
+## Distribute tables
+
+Azure Cosmos DB for PostgreSQL gives you [the super power of distributing tables](introduction.md) across multiple nodes for scalability. The command below enables you to distribute a table. You can learn more about `create_distributed_table` and the distribution column [here](quickstart-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key).
+
+> [!NOTE]
+> Distributing tables lets them grow across any worker nodes added to the cluster.
+
+```python
+# Create distributed table
+cursor.execute("select create_distributed_table('pharmacy','pharmacy_id');")
+print("Finished distributing the table")
+```
+
+## Read data
+
+The following code example uses the following APIs to read data from the database:
+
+- [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL SELECT statement to read data.
+- [cursor.fetchall()](https://www.psycopg.org/docs/cursor.html#cursor.fetchall) to accept a query and return a result set to iterate.
+
+```python
+# Fetch all rows from table
+cursor.execute("SELECT * FROM pharmacy;")
+rows = cursor.fetchall()
+
+# Print all rows
+for row in rows:
+ print("Data row = (%s, %s)" %(str(row[0]), str(row[1])))
+```
+
+## Update data
+
+The following code example uses `cursor.execute` with the SQL UPDATE statement to update data.
+
+```python
+# Update a data row in the table
+cursor.execute("UPDATE pharmacy SET city = %s WHERE pharmacy_id = %s;", ("guntur",1))
+print("Updated 1 row of data")
+```
+
+## Delete data
+
+The following code example runs `cursor.execute` with the SQL DELETE statement to delete the data.
+
+```python
+# Delete data row from table
+cursor.execute("DELETE FROM pharmacy WHERE pharmacy_name = %s;", ("Target",))
+print("Deleted 1 row of data")
+```
+
+## COPY command for fast ingestion
+
+The COPY command can yield [tremendous throughput](https://www.citusdata.com/blog/2016/06/15/copy-postgresql-distributed-tables) while ingesting data into Azure Cosmos DB for PostgreSQL. The COPY command can ingest data in files, or from micro-batches of data in memory for real-time ingestion.
+
+### COPY command to load data from a file
+
+The following code copies data from a CSV file to a database table. The code requires the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv).
+
+```python
+with open('pharmacies.csv', 'r') as f:
+ # Notice that we don't need the `csv` module.
+ next(f) # Skip the header row.
+ cursor.copy_from(f, 'pharmacy', sep=',')
+ print("copying data completed")
+```
+
+### COPY command to load in-memory data
+
+The following code copies in-memory data to a table.
+
+```python
+import csv
+import io
+
+data = [[3,"Walgreens","Sunnyvale","California",94006], [4,"Target","Sunnyvale","California",94016]]
+buf = io.StringIO()
+writer = csv.writer(buf)
+writer.writerows(data)
+
+buf.seek(0)
+with conn.cursor() as cur:
+ cur.copy_from(buf, "pharmacy", sep=",")
+
+conn.commit()
+conn.close()
+```
+
+## App retry for database request failures
++
+In this code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```python
+import psycopg2
+import time
+from psycopg2 import pool
+
+host = "c.<cluster>.postgres.database.azure.com"
+dbname = "citus"
+user = "citus"
+password = "<password>"
+sslmode = "require"
+
+conn_string = "host={0} user={1} dbname={2} password={3} sslmode={4}".format(
+ host, user, dbname, password, sslmode)
+postgreSQL_pool = psycopg2.pool.SimpleConnectionPool(1, 20, conn_string)
+
+def executeRetry(query, retryCount):
+ for x in range(retryCount):
+ try:
+ if (postgreSQL_pool):
+ # Use getconn() to Get Connection from connection pool
+ conn = postgreSQL_pool.getconn()
+ cursor = conn.cursor()
+ cursor.execute(query)
+                return cursor.fetchall()
+ except Exception as err:
+ print(err)
+ postgreSQL_pool.putconn(conn)
+ time.sleep(60)
+ return None
+
+print(executeRetry("select 1", 5))
+```
+
+## Next steps
+
cosmos-db Quickstart App Stacks Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-app-stacks-ruby.md
+
+ Title: Use Ruby to connect and query Azure Cosmos DB for PostgreSQL
+description: See how to use Ruby to connect and run SQL statements on Azure Cosmos DB for PostgreSQL.
++++++
+recommendations: false
Last updated : 09/28/2022++
+# Use Ruby to connect and run SQL commands on Azure Cosmos DB for PostgreSQL
+++
+This quickstart shows you how to use Ruby code to connect to a cluster, and then use SQL statements to create a table and insert, query, update, and delete data in the database. The steps in this article assume that you're familiar with Ruby development, and are new to working with Azure Cosmos DB for PostgreSQL.
+
+> [!TIP]
+> The process of creating a Ruby app with Azure Cosmos DB for PostgreSQL is the same as working with ordinary PostgreSQL.
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free).
+- [Ruby](https://www.ruby-lang.org/en/downloads) installed.
+- [Ruby pg](https://rubygems.org/gems/pg), the PostgreSQL module for Ruby.
+- An Azure Cosmos DB for PostgreSQL cluster. To create a cluster, see [Create a cluster in the Azure portal](quickstart-create-portal.md).
+
+The code samples in this article use your cluster name and password. In the Azure portal, your cluster name appears at the top of your cluster page.
++
+## Connect, create a table, and insert data
+
+Use the following code to connect and create a table by using the CREATE TABLE SQL statement, then add rows to the table by using the INSERT INTO SQL statement.
+
+The code uses a `PG::Connection` object with its constructor to connect to Azure Cosmos DB for PostgreSQL. Then it calls the `exec()` method to run the DROP, CREATE TABLE, and INSERT INTO commands. The code checks for errors by using the `PG::Error` class. Then it calls the `close()` method to close the connection before terminating. For more information about these classes and methods, see the [Ruby pg reference documentation](https://rubygems.org/gems/pg).
+
+In the code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```ruby
+require 'pg'
+begin
+ # NOTE: Replace <cluster> and <password> in the connection string.
+ connection = PG::Connection.new("host=c.<cluster>.postgres.database.azure.com port=5432 dbname=citus user=citus password=<password> sslmode=require")
+ puts 'Successfully created connection to database'
+
+ # Drop previous table of same name if one exists
+ connection.exec('DROP TABLE IF EXISTS pharmacy;')
+ puts 'Finished dropping table (if existed).'
+
+  # Create a table.
+  connection.exec('CREATE TABLE pharmacy (pharmacy_id integer,pharmacy_name text,city text,state text,zip_code integer);')
+ puts 'Finished creating table.'
+
+ # Insert some data into table.
+ connection.exec("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (0,'Target','Sunnyvale','California',94001);")
+ connection.exec("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (1,'CVS','San Francisco','California',94002);")
+ puts 'Inserted 2 rows of data.'
+
+ # Create index
+ connection.exec("CREATE INDEX idx_pharmacy_id ON pharmacy(pharmacy_id);")
+rescue PG::Error => e
+ puts e.message
+ensure
+ connection.close if connection
+end
+```
+
+## Distribute tables
+
+Azure Cosmos DB for PostgreSQL gives you [the super power of distributing tables](introduction.md) across multiple nodes for scalability. The command below enables you to distribute a table. You can learn more about `create_distributed_table` and the distribution column [here](quickstart-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key).
+
+> [!NOTE]
+> Distributing tables lets them grow across any worker nodes added to the cluster.
+
+Use the following code to connect to the database and distribute the table. In the code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```ruby
+require 'pg'
+begin
+ # NOTE: Replace <cluster> and <password> in the connection string.
+ connection = PG::Connection.new("host=c.<cluster>.postgres.database.azure.com port=5432 dbname=citus user=citus password=<password> sslmode=require")
+ puts 'Successfully created connection to database.'
+
+ # Super power of distributed tables.
+ connection.exec("select create_distributed_table('pharmacy','pharmacy_id');")
+rescue PG::Error => e
+ puts e.message
+ensure
+ connection.close if connection
+end
+```
+
+## Read data
+
+Use the following code to connect and read the data using a SELECT SQL statement.
+
+The code calls the `exec()` method to run the SELECT command, keeping the results in a result set. The result set collection is iterated by using the `resultSet.each do` loop, keeping the current row values in the `row` variable. In the code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```ruby
+require 'pg'
+begin
+ # NOTE: Replace <cluster> and <password> in the connection string.
+ connection = PG::Connection.new("host=c.<cluster>.postgres.database.azure.com port=5432 dbname=citus user=citus password=<password> sslmode=require")
+ puts 'Successfully created connection to database.'
+
+ resultSet = connection.exec('SELECT * from pharmacy')
+ resultSet.each do |row|
+    puts 'Data row = (%s, %s, %s, %s, %s)' % [row['pharmacy_id'], row['pharmacy_name'], row['city'], row['state'], row['zip_code']]
+ end
+rescue PG::Error => e
+ puts e.message
+ensure
+ connection.close if connection
+end
+```
+
+## Update data
+
+Use the following code to connect and update the data by using an UPDATE SQL statement. In the code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```ruby
+require 'pg'
+begin
+ # NOTE: Replace <cluster> and <password> in the connection string.
+ connection = PG::Connection.new("host=c.<cluster>.postgres.database.azure.com port=5432 dbname=citus user=citus password=<password> sslmode=require")
+ puts 'Successfully created connection to database.'
+
+ # Modify some data in table.
+ connection.exec('UPDATE pharmacy SET city = %s WHERE pharmacy_id = %d;' % ['\'guntur\'',100])
+ puts 'Updated 1 row of data.'
+rescue PG::Error => e
+ puts e.message
+ensure
+ connection.close if connection
+end
+```
+
+## Delete data
+
+Use the following code to connect and delete data using a DELETE SQL statement. In the code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```ruby
+require 'pg'
+begin
+ # NOTE: Replace <cluster> and <password> in the connection string.
+ connection = PG::Connection.new("host=c.<cluster>.postgres.database.azure.com port=5432 dbname=citus user=citus password=<password> sslmode=require")
+ puts 'Successfully created connection to database.'
+
+ # Delete some data in table.
+ connection.exec('DELETE FROM pharmacy WHERE city = %s;' % ['\'guntur\''])
+ puts 'Deleted 1 row of data.'
+rescue PG::Error => e
+ puts e.message
+ensure
+ connection.close if connection
+end
+```
+
+## COPY command for super fast ingestion
+
+The COPY command can yield [tremendous throughput](https://www.citusdata.com/blog/2016/06/15/copy-postgresql-distributed-tables) while ingesting data into Azure Cosmos DB for PostgreSQL. The COPY command can ingest data in files, or from micro-batches of data in memory for real-time ingestion.
+
+### COPY command to load data from a file
+
+The following code copies data from a CSV file to a database table. It requires the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv). In the code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```ruby
+require 'pg'
+begin
+ filename = String('pharmacies.csv')
+
+ # NOTE: Replace <cluster> and <password> in the connection string.
+ connection = PG::Connection.new("host=c.<cluster>.postgres.database.azure.com port=5432 dbname=citus user=citus password=<password> sslmode=require")
+ puts 'Successfully created connection to database.'
+
+ # Copy the data from Csv to table.
+ result = connection.copy_data "COPY pharmacy FROM STDIN with csv" do
+ File.open(filename , 'r').each do |line|
+ connection.put_copy_data line
+ end
+ puts 'Copied csv data successfully.'
+ end
+rescue PG::Error => e
+ puts e.message
+ensure
+ connection.close if connection
+end
+```
+
+### COPY command to load in-memory data
+
+The following code copies in-memory data to a table. In the code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```ruby
+require 'pg'
+begin
+ # NOTE: Replace <cluster> and <password> in the connection string.
+ connection = PG::Connection.new("host=c.<cluster>.postgres.database.azure.com port=5432 dbname=citus user=citus password=<password> sslmode=require")
+ puts 'Successfully created connection to database.'
+
+ enco = PG::TextEncoder::CopyRow.new
+ connection.copy_data "COPY pharmacy FROM STDIN", enco do
+ connection.put_copy_data [5000,'Target','Sunnyvale','California','94001']
+ connection.put_copy_data [5001, 'CVS','San Francisco','California','94002']
+ puts 'Copied in-memory data successfully.'
+ end
+rescue PG::Error => e
+ puts e.message
+ensure
+ connection.close if connection
+end
+```
+
+## App retry for database request failures
++
+In the code, replace \<cluster> with your cluster name and \<password> with your administrator password.
+
+```ruby
+require 'pg'
+
+def executeretry(sql,retryCount)
+ begin
+ for a in 1..retryCount do
+ begin
+ # NOTE: Replace <cluster> and <password> in the connection string.
+ connection = PG::Connection.new("host=c.<cluster>.postgres.database.azure.com port=5432 dbname=citus user=citus password=<password> sslmode=require")
+ resultSet = connection.exec(sql)
+ return resultSet.each
+ rescue PG::Error => e
+ puts e.message
+ sleep 60
+ ensure
+ connection.close if connection
+ end
+ end
+ end
+ return nil
+end
+
+var = executeretry('select 1',5)
+
+if var !=nil then
+ var.each do |row|
+ puts 'Data row = (%s)' % [row]
+ end
+end
+```
+
+## Next steps
+
cosmos-db Quickstart Build Scalable Apps Classify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-classify.md
+
+ Title: Classify application workload - Azure Cosmos DB for PostgreSQL
+description: Classify workload for scalable application
+++++
+recommendations: false
Last updated : 08/11/2022++
+# Classify application workload
++
+Here are common characteristics of the workloads that are the best fit for
+Azure Cosmos DB for PostgreSQL.
+
+## Prerequisites
+
+This article assumes you know the [fundamental concepts for
+scaling](quickstart-build-scalable-apps-concepts.md). If you haven't read about
+them, take a moment to do so.
+
+## Characteristics of multi-tenant SaaS
+
+* Tenants see their own data; they can't see other tenants' data.
+* Most B2B SaaS apps are multi-tenant. Examples include Salesforce or Shopify.
+* In most B2B SaaS apps, there are hundreds to tens of thousands of tenants, and
+ more tenants keep joining.
+* Multi-tenant SaaS apps are primarily operational/transactional, with single
+ digit millisecond latency requirements for their database queries.
+* These apps have a classic relational data model, and are built using ORMs
+  such as RoR, Hibernate, and Django.
+ <br><br>
+ > [!VIDEO https://www.youtube.com/embed/7gAW08du6kk]
+
+## Characteristics of real-time operational analytics
+
+* These apps have a customer/user facing interactive analytics dashboard, with
+ a subsecond query latency requirement.
+* High concurrency required - at least 20 users.
+* Analyzes data that's fresh, from within the last second to a few minutes.
+* Most have time series data such as events, logs, etc.
+* Common data models in these apps include:
+ * Star Schema - few large/fact tables, the rest being small/dimension tables
+ * Mostly fewer than 20 major tables
+ <br><br>
+ > [!VIDEO https://www.youtube.com/embed/xGWVVTva434]
+
+## Characteristics of high-throughput transactional
+
+* Run NoSQL/document style workloads, but require PostgreSQL features such as
+ transactions, foreign/primary keys, triggers, extension like PostGIS, etc.
+* The workload is based on a single key. It has CRUD and lookups based on that
+ key.
+* These apps have high throughput requirements: thousands to hundreds of thousands of
+ TPS.
+* Query latency in single-digit milliseconds, with a high concurrency
+ requirement.
+* Time series data, such as internet of things.
+ <br><br>
+ > [!VIDEO https://www.youtube.com/embed/A9q7w96yO_E]
+
+## Next steps
+
+Choose whichever fits your application the best:
+
+> [!div class="nextstepaction"]
+> [Model multi-tenant SaaS app >](quickstart-build-scalable-apps-model-multi-tenant.md)
+
+> [!div class="nextstepaction"]
+> [Model real-time analytics app](quickstart-build-scalable-apps-model-real-time.md)
+
+> [!div class="nextstepaction"]
+> [Model high-throughput app](quickstart-build-scalable-apps-model-high-throughput.md)
cosmos-db Quickstart Build Scalable Apps Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-concepts.md
+
+ Title: Fundamental concepts for scaling - Azure Cosmos DB for PostgreSQL
+description: Ideas you need to know to build relational apps that scale
+++++
+recommendations: false
Last updated : 08/11/2022++
+# Fundamental concepts for scaling
++
+Before we investigate the steps of building a new app, it's helpful to see a
+quick overview of the terms and concepts involved.
+
+## Architectural overview
+
+Azure Cosmos DB for PostgreSQL gives you the power to distribute tables across multiple
+machines in a cluster and transparently query them the same you query
+plain PostgreSQL:
+
+![Diagram of the coordinator node sharding a table onto worker nodes.](media/howto-build-scalable-apps/architecture.png)
+
+In the Azure Cosmos DB for PostgreSQL architecture, there are multiple kinds of nodes:
+
+* The **coordinator** node stores distributed table metadata and is responsible
+ for distributed planning.
+* By contrast, the **worker** nodes store the actual data and do the computation.
+* Both the coordinator and workers are plain PostgreSQL databases, with the
+ `citus` extension loaded.
+
+To distribute a normal PostgreSQL table, like `campaigns` in the diagram above,
+run a command called `create_distributed_table()`. Once you run this
+command, Azure Cosmos DB for PostgreSQL transparently creates shards for the table across
+worker nodes. In the diagram, shards are represented as blue boxes.
+
+> [!NOTE]
+>
+> On a cluster with no worker nodes, shards of distributed tables are on the coordinator node.
+
+Shards are plain (but specially named) PostgreSQL tables that hold slices of
+your data. In our example, because we distributed `campaigns` by `company_id`,
+each shard holds a subset of the campaigns, with the campaigns of different
+companies assigned to different shards.
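+
+For instance, the `campaigns` table from the diagram could be distributed on
+`company_id`. (This is an illustrative sketch; `campaigns` isn't part of the
+quickstart schema used elsewhere in these articles.)
+
+```sql
+-- Distribute the campaigns table from the diagram on company_id
+SELECT create_distributed_table('campaigns', 'company_id');
+```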
+
+## Distribution column (also known as shard key)
+
+`create_distributed_table()` is the magic function that Azure Cosmos DB for PostgreSQL
+provides to distribute tables and use resources across multiple machines.
+
+```postgresql
+SELECT create_distributed_table(
+ 'table_name',
+ 'distribution_column');
+```
+
+The second argument above picks a column from the table as a **distribution
+column**. It can be any column with a native PostgreSQL type (with integer and
+text being most common). The value of the distribution column determines which
+rows go into which shards, which is why the distribution column is also called
+the **shard key**.
+
+Azure Cosmos DB for PostgreSQL decides how to run queries based on their use of the shard
+key:
+
+| Query involves | Where it runs |
+|-||
+| just one shard key | on the worker node that holds its shard |
+| multiple shard keys | parallelized across multiple nodes |
+
+The choice of shard key dictates the performance and scalability of your
+applications.
+
+* Uneven data distribution per shard keys (also known as *data skew*) isn't optimal
+  for performance. For example, don't choose a column for which a single value
+ represents 50% of data.
+* Shard keys with low cardinality can affect scalability. You can use only as
+ many shards as there are distinct key values. Choose a key with cardinality
+ in the hundreds to thousands.
+* Joining two large tables with different shard keys can be slow. Choose a
+ common shard key across large tables. Learn more in
+ [colocation](#colocation).
+
+## Colocation
+
+Another concept closely related to the shard key is *colocation*. Tables sharded
+by the same distribution column values are colocated: the shards of colocated
+tables are stored together on the same workers.
+
+Below are two tables sharded by the same key, `site_id`. They're colocated.
+
+![Diagram of tables http_request and http_request_1min colocated by site_id.](media/howto-build-scalable-apps/colocation.png)
+
+Azure Cosmos DB for PostgreSQL ensures that rows with a matching `site_id` value in both
+tables are stored on the same worker node. You can see that, for both tables,
+rows with `site_id=1` are stored on worker 1. Similarly for other site IDs.
+
+Colocation helps optimize JOINs across these tables. If you join the two tables
+on `site_id`, Azure Cosmos DB for PostgreSQL can perform the join locally on worker nodes
+without shuffling data between nodes.
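+
+The following sketch shows what such a colocated join could look like. It's
+illustrative only; aside from `site_id`, the columns of `http_request` and
+`http_request_1min` aren't described in this article.
+
+```sql
+-- Both tables are distributed on site_id, so this join runs locally on each
+-- worker against colocated shards, with no data shuffling between nodes.
+SELECT r.site_id, count(*)
+FROM http_request r
+JOIN http_request_1min m ON m.site_id = r.site_id
+GROUP BY r.site_id;
+```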
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Classify application workload >](quickstart-build-scalable-apps-classify.md)
cosmos-db Quickstart Build Scalable Apps Model High Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-model-high-throughput.md
+
+ Title: Model high throughput apps - Azure Cosmos DB for PostgreSQL
+description: Techniques for scalable high-throughput transactional apps
+++++
+recommendations: false
Last updated : 08/11/2022++
+# Model high-throughput transactional apps
++
+## Common filter as shard key
+
+To pick the shard key for a high-throughput transactional application, follow
+these guidelines:
+
+* Choose a column that is used for point lookups and is present in most
+ create, read, update, and delete operations.
+* Choose a column that is a natural dimension in the data, or a central piece
+ of the application. For example:
+  * In an IoT workload, `device_id` is a good distribution column.
+
+The choice of a good shard key helps optimize network hops, while taking
+advantage of memory and compute to achieve millisecond latency.
+
+## Optimal data model for high-throughput apps
+
+Below is an example of a sample data-model for an IoT app that captures
+telemetry (time series data) from devices. There are two tables for capturing
+telemetry: `devices` and `events`. There could be other tables, but they're not
+covered in this example.
+
+![Diagram of events and devices tables, and partitions of events.](media/howto-build-scalable-apps/high-throughput-data-model.png)
+
+When building a high-throughput app, keep a few optimizations in mind. A SQL sketch that applies them follows the list below.
+
+* Distribute large tables on a common column that is a central piece of the app,
+  and the column that your app mostly queries. In the above example of an IoT
+  app, `device_id` is that column, and it colocates the events and devices
+ tables.
+* The rest of the small tables can be reference tables.
+* As IoT apps have a time dimension, partition your distributed tables based on
+ time. You can use native Azure Cosmos DB for PostgreSQL time series capabilities to
+ create and maintain partitions.
+ * Partitioning helps efficiently filter data for queries with time filters.
+  * Expiring old data is also fast: drop an old partition instead of running DELETE.
+ * The events table in our example is partitioned by month.
+* Use the JSONB datatype to store semi-structured data. Device telemetry
+  data is typically not structured; every device has its own metrics.
+ * In our example, the events table has a `detail` column, which is JSONB.
+* If your IoT app requires geospatial features, you can use the PostGIS
+ extension, which Azure Cosmos DB for PostgreSQL supports natively.
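+
+The following SQL sketch pulls the distribution and partitioning points
+together. The column definitions and the partitioning window are assumptions
+made for illustration; only `device_id`, the JSONB `detail` column, and monthly
+partitioning come from the example above.
+
+```sql
+-- Events are partitioned by time and colocated with devices on device_id
+CREATE TABLE devices (
+    device_id   bigint PRIMARY KEY,
+    device_type text
+);
+
+CREATE TABLE events (
+    device_id  bigint,
+    event_time timestamptz NOT NULL,
+    detail     jsonb
+) PARTITION BY RANGE (event_time);
+
+SELECT create_distributed_table('devices', 'device_id');
+SELECT create_distributed_table('events', 'device_id', colocate_with => 'devices');
+
+-- Create monthly partitions; expiring a month later is a fast DROP of a partition
+SELECT create_time_partitions('events',
+    partition_interval => '1 month',
+    end_at             => now() + interval '3 months');
+```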
+
+## Next steps
+
+Now we've finished exploring data modeling for scalable apps. The next step is
+connecting and querying the database with your programming language of choice.
+
+> [!div class="nextstepaction"]
+> [App stacks >](quickstart-app-stacks-overview.yml)
cosmos-db Quickstart Build Scalable Apps Model Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-model-multi-tenant.md
+
+ Title: Model multi-tenant apps - Azure Cosmos DB for PostgreSQL
+description: Techniques for scalable multi-tenant SaaS apps
+++++
+recommendations: false
Last updated : 08/11/2022++
+# Model multi-tenant SaaS apps
++
+## Tenant ID as the shard key
+
+The tenant ID is the column at the root of the workload, or the top of the
+hierarchy in your data-model. For example, in this SaaS e-commerce schema,
+it would be the store ID:
+
+![Diagram of tables, with the store_id column highlighted.](media/howto-build-scalable-apps/multi-tenant-id.png)
+
+This data model would be typical for a business such as Shopify. It hosts sites
+for multiple online stores, where each store interacts with its own data.
+
+* This data-model has a bunch of tables: stores, products, orders, line items
+ and countries.
+* The stores table is at the top of the hierarchy. Products, orders and
+ line items are all associated with stores, thus lower in the hierarchy.
+* The countries table isn't related to individual stores; it's shared across
+  stores.
+
+In this example, `store_id`, which is at the top of the hierarchy, is the
+identifier for a tenant. It's the right shard key. Picking `store_id` as the
+shard key enables colocating data across all tables for a single store on a
+single worker.
+
+Colocating tables by store has advantages:
+
+* Provides SQL coverage such as foreign keys and JOINs. Transactions for a single
+  tenant are localized on the single worker node where that tenant's data lives.
+* Achieves single digit millisecond performance. Queries for a single tenant are
+ routed to a single node instead of getting parallelized, which helps optimize
+ network hops and still scale compute/memory.
+* It scales. As the number of tenants grows, you can add nodes and rebalance
+ the tenants to new nodes, or even isolate large tenants to their own nodes.
+ Tenant isolation allows you to provide dedicated resources.
+
+![Diagram of tables colocated to the same nodes.](media/howto-build-scalable-apps/multi-tenant-colocation.png)
+
+## Optimal data model for multi-tenant apps
+
+In this example, we should distribute the store-specific tables by store ID,
+and make `countries` a reference table.
+
+![Diagram of tables with store_id more universally highlighted.](media/howto-build-scalable-apps/multi-tenant-data-model.png)
+
+Notice that tenant-specific tables have the tenant ID and are distributed. In
+our example, stores, products and line\_items are distributed. The rest of the
+tables are reference tables. In our example, the countries table is a reference table.
+
+```sql
+-- Distribute large tables by the tenant ID
+
+SELECT create_distributed_table('stores', 'store_id');
+SELECT create_distributed_table('products', 'store_id', colocate_with => 'stores');
+-- etc for the rest of the tenant tables...
+
+-- Then, make "countries" a reference table, with a synchronized copy of the
+-- table maintained on every worker node
+
+SELECT create_reference_table('countries');
+```
+
+Large tables should all have the tenant ID.
+
+* If you're **migrating an existing** multi-tenant app to Azure Cosmos DB for PostgreSQL,
+ you may need to denormalize a little and add the tenant ID column to large
+ tables if it's missing, then backfill the missing values of the column.
+* For **new apps** on Azure Cosmos DB for PostgreSQL, make sure the tenant ID is present
+ on all tenant-specific tables.
+
+Be sure to include the tenant ID in primary, unique, and foreign key constraints
+on distributed tables in the form of a composite key. For example, if a table
+has a primary key of `id`, turn it into the composite key `(tenant_id,id)`.
+There's no need to change keys for reference tables.
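+
+The following sketch shows both steps against the e-commerce example: backfilling
+a missing tenant ID before the table is distributed, and making keys composite.
+It's illustrative only; the `order_id` column and the constraint name are
+assumptions, and it assumes `orders` doesn't already have a primary key.
+
+```sql
+-- Add and backfill the tenant ID on a large table that's missing it
+-- (run before distributing the tables)
+ALTER TABLE line_items ADD COLUMN store_id bigint;
+
+UPDATE line_items li
+SET    store_id = o.store_id
+FROM   orders o
+WHERE  o.order_id = li.order_id;
+
+-- Turn single-column keys into composite keys that include the tenant ID
+ALTER TABLE orders
+    ADD PRIMARY KEY (store_id, order_id);
+
+ALTER TABLE line_items
+    ADD CONSTRAINT line_items_order_fk
+    FOREIGN KEY (store_id, order_id)
+    REFERENCES orders (store_id, order_id);
+```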
+
+## Query considerations for best performance
+
+Distributed queries that filter on the tenant ID run most efficiently in
+multi-tenant apps. Ensure that your queries are always scoped to a single
+tenant.
+
+```sql
+SELECT *
+ FROM orders
+ WHERE order_id = 123
+ AND store_id = 42; -- ← tenant ID filter
+```
+
+It's necessary to add the tenant ID filter even if the original filter
+conditions unambiguously identify the rows you want. The tenant ID filter,
+while seemingly redundant, tells Azure Cosmos DB for PostgreSQL how to route the query to a
+single worker node.
+
+Similarly, when you're joining two distributed tables, ensure that both the
+tables are scoped to a single tenant. Scoping can be done by ensuring that join
+conditions include the tenant ID.
+
+```sql
+SELECT sum(l.quantity)
+ FROM line_items l
+ INNER JOIN products p
+ ON l.product_id = p.product_id
+ AND l.store_id = p.store_id -- ← tenant ID in join
+ WHERE p.name='Awesome Wool Pants'
+ AND l.store_id='8c69aa0d-3f13-4440-86ca-443566c1fc75';
+      -- ↑ tenant ID filter
+```
+
+There are helper libraries for several popular application frameworks that make
+it easy to include a tenant ID in queries. Here are instructions:
+
+* [Ruby on Rails instructions](https://docs.citusdata.com/en/stable/develop/migration_mt_ror.html)
+* [Django instructions](https://docs.citusdata.com/en/stable/develop/migration_mt_django.html)
+* [ASP.NET](https://docs.citusdata.com/en/stable/develop/migration_mt_asp.html)
+* [Java Hibernate](https://www.citusdata.com/blog/2018/02/13/using-hibernate-and-spring-to-build-multitenant-java-apps/)
+
+## Next steps
+
+Now we've finished exploring data modeling for scalable apps. The next step is
+connecting and querying the database with your programming language of choice.
+
+> [!div class="nextstepaction"]
+> [App stacks >](quickstart-app-stacks-overview.yml)
cosmos-db Quickstart Build Scalable Apps Model Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-model-real-time.md
+
+ Title: Model real-time apps - Azure Cosmos DB for PostgreSQL
+description: Techniques for scalable real-time analytical apps
+++++
+recommendations: false
Last updated : 08/11/2022++
+# Model real-time analytics apps
++
+## Colocate large tables with shard key
+
+To pick the shard key for a real-time operational analytics application, follow
+these guidelines:
+
+* Choose a column that is common on large tables
+* Choose a column that is a natural dimension in the data, or a central piece
+ of the application. Some examples:
+ * In the financial world, an application that analyzes security trends would
+ probably use `security_id`.
+ * In a user analytics workload where you want to analyze website usage
+ metrics, `user_id` would be a good distribution column
+
+By colocating large tables, you can push SQL queries down to worker nodes in
+parallel. Pushing down queries avoids shuffling data between nodes over the
+network. Operations such as JOINs, aggregates, rollups, filters, and LIMITs can
+be executed efficiently.
+
+To visualize parallel distributed queries on colocated tables, consider this
+diagram:
+
+![Diagram of joins happening within worker nodes.](media/howto-build-scalable-apps/real-time-join.png)
+
+The `users` and `events` tables are both sharded by `user_id`, so related
+rows for the same user ID are placed together on the same worker node. The
+SQL JOINs can happen without pulling information between workers.
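+
+For example, a per-user rollup like the following is pushed down to the workers
+and runs in parallel. The query is illustrative; only `user_id` is taken from
+the diagram.
+
+```sql
+-- A join and aggregate on the shard key run on the workers in parallel,
+-- without pulling rows back to the coordinator
+SELECT u.user_id, count(*) AS event_count
+FROM users u
+JOIN events e ON e.user_id = u.user_id
+GROUP BY u.user_id;
+```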
+
+## Optimal data model for real-time apps
+
+Let's continue with the example of an application that analyzes user website
+visits and metrics. There are two "fact" tables--users and events--and other
+smaller "dimension" tables.
+
+![Diagram of users, events, and miscellaneous tables.](media/howto-build-scalable-apps/real-time-data-model.png)
+
+To apply the superpower of distributed tables in Azure Cosmos DB for PostgreSQL, follow
+these steps:
+
+* Distribute large fact tables on a common column. In our case, users and
+ events are distributed on `user_id`.
+* Mark the small/dimension tables (`device_types`, `countries`, and
+ `event_types`) as reference tables.
+* Be sure to include the distribution column in primary, unique, and foreign
+ key constraints on distributed tables. Including the column may require
+ making the keys composite; see the sketch after the following code block.
+ There's no need to update keys for reference tables.
+* When you're joining large distributed tables, be sure to join using the
+ shard key.
+
+```sql
+-- Distribute the fact tables
+
+SELECT create_distributed_table('users', 'user_id');
+SELECT create_distributed_table('events', 'user_id', colocate_with => 'users');
+
+-- Turn dimension tables into reference tables, with synchronized copies
+-- maintained on every worker node
+
+SELECT create_reference_table('countries');
+-- similarly for device_types and event_types...
+```
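+
+As a hedged illustration of the composite-key guidance above, the following
+sketch uses a hypothetical `page_views` table; the distribution column
+`user_id` is included in the primary key, making it composite:
+
+```sql
+CREATE TABLE page_views (
+    user_id    bigint NOT NULL,
+    view_id    bigint NOT NULL,
+    url        text,
+    viewed_at  timestamptz,
+    PRIMARY KEY (user_id, view_id)  -- includes the distribution column
+);
+
+-- colocate with the users table, which is also distributed on user_id
+SELECT create_distributed_table('page_views', 'user_id', colocate_with => 'users');
+```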
+
+## Next steps
+
+Now we've finished exploring data modeling for scalable apps. The next step is
+connecting and querying the database with your programming language of choice.
+
+> [!div class="nextstepaction"]
+> [App stacks >](quickstart-app-stacks-overview.yml)
cosmos-db Quickstart Build Scalable Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-overview.md
+
+ Title: Build scalable apps - Azure Cosmos DB for PostgreSQL
+description: How to build relational apps that scale
++++++
+recommendations: false
Last updated : 08/11/2022++
+# Build scalable apps
++
+There are three steps involved in building scalable apps with Azure Cosmos DB for PostgreSQL:
+
+1. Classify your application workload. There are use cases where Azure Cosmos DB for PostgreSQL
+ shines: multi-tenant SaaS, real-time operational analytics, and high
+ throughput OLTP. Determine whether your app falls into one of these categories.
+2. Based on the workload, identify the optimal shard key for the distributed
+ tables. Classify your tables as reference, distributed, or local.
+3. Update the database schema and application queries so they run efficiently
+ across nodes.
+
+**Next steps**
+
+Before you start building a new app, you must first review a little more about
+the architecture of Azure Cosmos DB for PostgreSQL.
+
+> [!div class="nextstepaction"]
+> [Fundamental concepts for scaling >](quickstart-build-scalable-apps-concepts.md)
cosmos-db Quickstart Connect Psql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-connect-psql.md
+
+ Title: 'Quickstart: Connect to Azure Cosmos DB for PostgreSQL with psql'
+description: See how to use Azure Cloud Shell to connect to Azure Cosmos DB for PostgreSQL by using psql.
++
+recommendations: false
++++ Last updated : 09/28/2022++
+# Connect to a cluster with psql
++
+This quickstart shows you how to use the [psql](https://www.postgresql.org/docs/current/app-psql.html) connection string in [Azure Cloud Shell](/azure/cloud-shell/overview) to connect to an Azure Cosmos DB for PostgreSQL cluster.
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free).
+- An Azure Cosmos DB for PostgreSQL cluster. To create a cluster, see [Create a cluster in the Azure portal](quickstart-create-portal.md).
+
+## Connect
+
+Your cluster has a default database named `citus`. To connect to the database, you use a connection string and the admin password.
+
+1. In the Azure portal, on your cluster page, select the **Connection strings** menu item, and then copy the **psql** connection string.
+
+ :::image type="content" source="media/quickstart-connect-psql/get-connection-string.png" alt-text="Screenshot that shows copying the psql connection string.":::
+
+ The **psql** string is of the form `psql "host=c.<cluster>.postgres.database.azure.com port=5432 dbname=citus user=citus password={your_password} sslmode=require"`. Notice that the host name starts with a `c.`, for example `c.mycluster.postgres.database.azure.com`. This prefix indicates the coordinator node of the cluster. The default `dbname` and `username` are `citus` and can't be changed.
+
+1. Open Azure Cloud Shell by selecting the **Cloud Shell** icon on the top menu bar.
+
+ :::image type="content" source="media/quickstart-connect-psql/open-cloud-shell.png" alt-text="Screenshot that shows the Cloud Shell icon.":::
+
+ If prompted, choose an Azure subscription in which to store Cloud Shell data.
+
+1. Paste your psql connection string into the shell.
+
+1. In the connection string, replace `{your_password}` with your cluster password, and then press Enter.
+
+ :::image type="content" source="media/quickstart-connect-psql/cloud-shell-run-psql.png" alt-text="Screenshot that shows running psql in the Cloud Shell.":::
+
+ When psql successfully connects to the database, you see a new `citus=>` prompt:
+
+ ```bash
+ psql (14.2, server 14.5)
+ SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
+ Type "help" for help.
+
+ citus=>
+ ```
+
+1. Run a test query. Paste the following command into the psql
+ prompt, and then press Enter.
+
+ ```sql
+ SHOW server_version;
+ ```
+
+ You should see a result matching the PostgreSQL version you selected
+ during cluster creation. For instance:
+
+ ```bash
+ server_version
+ ----------------
+ 14.5
+ (1 row)
+ ```
+
+## Next steps
+
+Now that you've connected to the cluster, the next step is to create
+tables and shard them for horizontal scaling.
+
+> [!div class="nextstepaction"]
+> [Create and distribute tables >](quickstart-distribute-tables.md)
cosmos-db Quickstart Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-create-portal.md
+
+ Title: 'Quickstart: create a cluster - Azure Cosmos DB for PostgreSQL'
+description: Quickstart to create and query distributed tables on Azure Cosmos DB for PostgreSQL.
++
+recommendations: false
++++ Last updated : 09/29/2022
+#Customer intent: As a developer, I want to provision a Azure Cosmos DB for PostgreSQL cluster so that I can run queries quickly on large datasets.
++
+# Create an Azure Cosmos DB for PostgreSQL cluster in the Azure portal
++
+Azure Cosmos DB for PostgreSQL is a managed service that
+allows you to run horizontally scalable PostgreSQL databases in the cloud.
+
+## Prerequisites
++
+## Next step
+
+With your cluster created, it's time to connect with a SQL client.
+
+> [!div class="nextstepaction"]
+> [Connect to your cluster >](quickstart-connect-psql.md)
cosmos-db Quickstart Distribute Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-distribute-tables.md
+
+ Title: 'Quickstart: distribute tables - Azure Cosmos DB for PostgreSQL'
+description: Quickstart to distribute table data across nodes in Azure Cosmos DB for PostgreSQL.
++
+recommendations: false
++++ Last updated : 08/11/2022++
+# Create and distribute tables
++
+In this example, we'll use Azure Cosmos DB for PostgreSQL distributed tables to store and
+query events recorded from GitHub open source contributors.
+
+## Prerequisites
+
+To follow this quickstart, you'll first need to:
+
+1. [Create a cluster](quickstart-create-portal.md) in the Azure portal.
+2. [Connect to the cluster](quickstart-connect-psql.md) with psql to
+ run SQL commands.
+
+## Create tables
+
+Once you've connected via psql, let's create our table. Copy and paste the
+following commands into the psql terminal window, and hit enter to run:
+
+```sql
+CREATE TABLE github_users
+(
+ user_id bigint,
+ url text,
+ login text,
+ avatar_url text,
+ gravatar_id text,
+ display_login text
+);
+
+CREATE TABLE github_events
+(
+ event_id bigint,
+ event_type text,
+ event_public boolean,
+ repo_id bigint,
+ payload jsonb,
+ repo jsonb,
+ user_id bigint,
+ org jsonb,
+ created_at timestamp
+);
+
+CREATE INDEX event_type_index ON github_events (event_type);
+CREATE INDEX payload_index ON github_events USING GIN (payload jsonb_path_ops);
+```
+
+Notice the GIN index on `payload` in `github_events`. The index allows fast
+querying in the JSONB column. Since Citus is a PostgreSQL extension, Azure
+Cosmos DB for PostgreSQL supports advanced PostgreSQL features like the JSONB
+datatype for storing semi-structured data.
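+
+As a small illustration (the `"action"` key is hypothetical and only for
+demonstration), a containment query like the following can use that GIN index:
+
+```sql
+-- jsonb_path_ops GIN indexes accelerate @> containment searches
+SELECT count(*)
+  FROM github_events
+ WHERE payload @> '{"action": "opened"}';
+```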
+
+## Distribute tables
+
+`create_distributed_table()` is the magic function that Azure Cosmos DB for PostgreSQL
+provides to distribute tables and use resources across multiple machines. The
+function decomposes tables into shards, which can be spread across nodes for
+increased storage and compute performance.
+
+Let's distribute the tables:
+
+```sql
+SELECT create_distributed_table('github_users', 'user_id');
+SELECT create_distributed_table('github_events', 'user_id');
+```
++
+## Load data into distributed tables
+
+We're ready to fill the tables with sample data. For this quickstart, we'll use
+a dataset previously captured from the GitHub API.
+
+Run the following commands to download example CSV files and load them into the
+database tables. (The `curl` command downloads the files, and comes
+pre-installed in the Azure Cloud Shell.)
+
+```
+-- download users and store in table
+
+\COPY github_users FROM PROGRAM 'curl https://examples.citusdata.com/users.csv' WITH (FORMAT CSV)
+
+-- download events and store in table
+
+\COPY github_events FROM PROGRAM 'curl https://examples.citusdata.com/events.csv' WITH (FORMAT CSV)
+```
+
+We can review details of our distributed tables, including their sizes, with
+the `citus_tables` view:
+
+```sql
+SELECT * FROM citus_tables;
+```
+
+```
+ table_name | citus_table_type | distribution_column | colocation_id | table_size | shard_count | table_owner | access_method
+---------------+------------------+---------------------+---------------+------------+-------------+-------------+---------------
+ github_events | distributed | user_id | 1 | 388 MB | 32 | citus | heap
+ github_users | distributed | user_id | 1 | 39 MB | 32 | citus | heap
+(2 rows)
+```
+
+## Next steps
+
+Now we have distributed tables and loaded them with data. Next, let's try
+running queries across the distributed tables.
+
+> [!div class="nextstepaction"]
+> [Run distributed queries >](quickstart-run-queries.md)
cosmos-db Quickstart Run Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-run-queries.md
+
+ Title: 'Quickstart: Run queries - Azure Cosmos DB for PostgreSQL'
+description: Quickstart to run queries on table data in Azure Cosmos DB for PostgreSQL.
++
+recommendations: false
++++ Last updated : 08/11/2022++
+# Run queries
++
+## Prerequisites
+
+To follow this quickstart, you'll first need to:
+
+1. [Create a cluster](quickstart-create-portal.md) in the Azure portal.
+2. [Connect to the cluster](quickstart-connect-psql.md) with psql to
+ run SQL commands.
+3. [Create and distribute tables](quickstart-distribute-tables.md) with our
+ example dataset.
+
+## Distributed queries
+
+Now it's time for the fun part of our quickstart series: running queries.
+Let's start with a simple `count(*)` to verify how much data we loaded in
+the previous section.
+
+```sql
+-- count all rows (across shards)
+
+SELECT count(*) FROM github_users;
+```
+
+```
+ count
+--------
+ 264308
+(1 row)
+```
+
+Recall that `github_users` is a distributed table, meaning its data is divided
+between multiple shards. Azure Cosmos DB for PostgreSQL automatically runs the count on all
+shards in parallel, and combines the results.
+
+Let's continue looking at a few more query examples:
+
+```sql
+-- Find all events for a single user.
+-- (A common transactional/operational query)
+
+SELECT created_at, event_type, repo->>'name' AS repo_name
+ FROM github_events
+ WHERE user_id = 3861633;
+```
+
+```
+ created_at | event_type | repo_name
+---------------------+--------------+--------------------------------------
+ 2016-12-01 06:28:44 | PushEvent | sczhengyabin/Google-Image-Downloader
+ 2016-12-01 06:29:27 | CreateEvent | sczhengyabin/Google-Image-Downloader
+ 2016-12-01 06:36:47 | ReleaseEvent | sczhengyabin/Google-Image-Downloader
+ 2016-12-01 06:42:35 | WatchEvent | sczhengyabin/Google-Image-Downloader
+ 2016-12-01 07:45:58 | IssuesEvent | sczhengyabin/Google-Image-Downloader
+(5 rows)
+```
+
+## More complicated queries
+
+Here's an example of a more complicated query, which retrieves hourly
+statistics for push events on GitHub. It uses PostgreSQL's JSONB feature to
+handle semi-structured data.
+
+```sql
+-- Querying JSONB type. Query is parallelized across nodes.
+-- Find the number of commits on the default branch per hour
+
+SELECT date_trunc('hour', created_at) AS hour,
+ sum((payload->>'distinct_size')::int) AS num_commits
+FROM github_events
+WHERE event_type = 'PushEvent' AND
+ payload @> '{"ref":"refs/heads/main"}'
+GROUP BY hour
+ORDER BY hour;
+```
+
+```
+ hour | num_commits
+---------------------+-------------
+ 2016-12-01 05:00:00 | 13051
+ 2016-12-01 06:00:00 | 43480
+ 2016-12-01 07:00:00 | 34254
+ 2016-12-01 08:00:00 | 29307
+(4 rows)
+```
+
+Azure Cosmos DB for PostgreSQL combines the power of SQL and NoSQL datastores
+with structured and semi-structured data.
+
+In addition to running queries, Azure Cosmos DB for PostgreSQL also applies data definition
+changes across the shards of a distributed table:
+
+```sql
+-- DDL commands that are also parallelized
+
+ALTER TABLE github_users ADD COLUMN dummy_column integer;
+```
+
+## Next steps
+
+You've successfully created a scalable cluster, created
+tables, distributed them, loaded data, and run distributed queries.
+
+Now you're ready to learn to build applications with Azure Cosmos DB for PostgreSQL.
+
+> [!div class="nextstepaction"]
+> [Build scalable applications >](quickstart-build-scalable-apps-overview.md)
cosmos-db Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-extensions.md
+
+ Title: Extensions – Azure Cosmos DB for PostgreSQL
+description: Describes the ability to extend the functionality of your database by using extensions in Azure Cosmos DB for PostgreSQL
+++++ Last updated : 08/02/2022+
+# PostgreSQL extensions in Azure Cosmos DB for PostgreSQL
++
+PostgreSQL provides the ability to extend the functionality of your database by using extensions. Extensions allow for bundling multiple related SQL objects together in a single package that can be loaded or removed from your database with a single command. After being loaded in the database, extensions can function like built-in features. For more information on PostgreSQL extensions, see [Package related objects into an extension](https://www.postgresql.org/docs/current/static/extend-extensions.html).
+
+## Use PostgreSQL extensions
+
+PostgreSQL extensions must be installed in your database before you can use them. To install a particular extension, run the [CREATE EXTENSION](https://www.postgresql.org/docs/current/static/sql-createextension.html) command from the psql tool to load the packaged objects into your database.
+
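+For example, to load the `hstore` extension (listed among the supported
+extensions below) into the current database:
+
+```sql
+CREATE EXTENSION IF NOT EXISTS hstore;
+```
+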
+> [!NOTE]
+> If `CREATE EXTENSION` fails with a permission denied error, try the
+> `create_extension()` function instead. For instance:
+>
+> ```sql
+> SELECT create_extension('postgis');
+> ```
+
+Azure Cosmos DB for PostgreSQL currently supports a subset of key extensions as listed here. Extensions other than the ones listed aren't supported. You can't create your own extension with Azure Cosmos DB for PostgreSQL.
+
+## Extensions supported by Azure Cosmos DB for PostgreSQL
+
+The following tables list the standard PostgreSQL extensions that are currently supported by Azure Cosmos DB for PostgreSQL. This information is also available by running `SELECT * FROM pg_available_extensions;`.
+
+The versions of each extension installed in a cluster sometimes differ based on the version of PostgreSQL (11, 12, 13, or 14). The tables list extension versions per database version.
+
+### Citus extension
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.11 | 10.0.7 | 10.2.6 | 11.0.4 |
+
+### Data types extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [citext](https://www.postgresql.org/docs/current/static/citext.html) | Provides a case-insensitive character string type. | 1.5 | 1.6 | 1.6 | 1.6 |
+> | [cube](https://www.postgresql.org/docs/current/static/cube.html) | Provides a data type for multidimensional cubes. | 1.4 | 1.4 | 1.4 | 1.5 |
+> | [hll](https://github.com/citusdata/postgresql-hll) | Provides a HyperLogLog data structure. | 2.16 | 2.16 | 2.16 | 2.16 |
+> | [hstore](https://www.postgresql.org/docs/current/static/hstore.html) | Provides a data type for storing sets of key-value pairs. | 1.5 | 1.6 | 1.7 | 1.8 |
+> | [isn](https://www.postgresql.org/docs/current/static/isn.html) | Provides data types for international product numbering standards. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 |
+> | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 |
+> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.2.0 | 1.2.0 | 1.2.0 | 1.4.0 |
+> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.4.0 | 2.4.0 | 2.4.0 | 2.4.0 |
+
+### Full-text search extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [dict\_int](https://www.postgresql.org/docs/current/static/dict-int.html) | Provides a text search dictionary template for integers. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [dict\_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) | Text search dictionary template for extended synonym processing. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [unaccent](https://www.postgresql.org/docs/current/static/unaccent.html) | A text search dictionary that removes accents (diacritic signs) from lexemes. | 1.1 | 1.1 | 1.1 | 1.1 |
+
+### Functions extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.7) | Functions for autoincrementing fields. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [earthdistance](https://www.postgresql.org/docs/current/static/earthdistance.html) | Provides a means to calculate great-circle distances on the surface of the Earth. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [fuzzystrmatch](https://www.postgresql.org/docs/current/static/fuzzystrmatch.html) | Provides several functions to determine similarities and distance between strings. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [insert\_username](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.8) | Functions for tracking who changed a table. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 |
+> | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.6.0 | 4.6.0 | 4.6.0 | 4.6.2 |
+> | [pg\_surgery](https://www.postgresql.org/docs/current/pgsurgery.html) | Functions to perform surgery on a damaged relation. | | | | 1.0 |
+> | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 |
+> | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [refint](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.5) | Functions for implementing referential integrity (obsolete). | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tablefunc](https://www.postgresql.org/docs/current/static/tablefunc.html) | Provides functions that manipulate whole tables, including crosstab. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tcn](https://www.postgresql.org/docs/current/tcn.html) | Triggered change notifications. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [timetravel](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.6) | Functions for implementing time travel. | 1.0 | | | |
+> | [uuid-ossp](https://www.postgresql.org/docs/current/static/uuid-ossp.html) | Generates universally unique identifiers (UUIDs). | 1.1 | 1.1 | 1.1 | 1.1 |
+
+### Index types extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [bloom](https://www.postgresql.org/docs/current/bloom.html) | Bloom access method - signature file-based index. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [btree\_gin](https://www.postgresql.org/docs/current/static/btree-gin.html) | Provides sample GIN operator classes that implement B-tree-like behavior for certain data types. | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [btree\_gist](https://www.postgresql.org/docs/current/static/btree-gist.html) | Provides GiST index operator classes that implement B-tree. | 1.5 | 1.5 | 1.5 | 1.6 |
+
+### Language extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [plpgsql](https://www.postgresql.org/docs/current/static/plpgsql.html) | PL/pgSQL loadable procedural language. | 1.0 | 1.0 | 1.0 | 1.0 |
+
+### Miscellaneous extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. | 1.1 | 1.2 | 1.2 | 1.3 |
+> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [old\_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html) | Allows inspection of the server state that is used to implement old_snapshot_threshold. | | | | 1.0 |
+> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 |
+> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.4 | 1.4 | 1.4 | 1.4 |
+> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 |
+> | [pg\_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility information. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgrowlocks](https://www.postgresql.org/docs/current/static/pgrowlocks.html) | Provides a means for showing row-level locking information. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgstattuple](https://www.postgresql.org/docs/current/static/pgstattuple.html) | Provides a means for showing tuple-level statistics. | 1.5 | 1.5 | 1.5 | 1.5 |
+> | [postgres\_fdw](https://www.postgresql.org/docs/current/static/postgres-fdw.html) | Foreign-data wrapper used to access data stored in external PostgreSQL servers. See the "dblink and postgres_fdw" section for information about this extension.| 1.0 | 1.0 | 1.0 | 1.1 |
+> | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about TLS/SSL certificates. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [tsm\_system\_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) | TABLESAMPLE method, which accepts number of rows as a limit. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tsm\_system\_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method, which accepts time in milliseconds as a limit. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [xml2](https://www.postgresql.org/docs/current/xml2.html) | XPath querying and XSLT. | 1.1 | 1.1 | 1.1 | 1.1 |
++
+### PostGIS extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
+> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
+> | postgis\_sfcgal | PostGIS SFCGAL functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
+> | postgis\_topology | PostGIS topology spatial types and functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
++
+## pg_stat_statements
+The [pg\_stat\_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded on every Azure Cosmos DB for PostgreSQL server to provide you with a means of tracking execution statistics of SQL statements.
+
+The setting `pg_stat_statements.track` controls what statements are counted by the extension. It defaults to `top`, which means that all statements issued directly by clients are tracked. The two other tracking levels are `none` and `all`.
+
+There's a tradeoff between the query execution information pg_stat_statements provides and the effect on server performance as it logs each SQL statement. If you aren't actively using the pg_stat_statements extension, we recommend that you set `pg_stat_statements.track` to `none`. Some third-party monitoring services might rely on pg_stat_statements to deliver query performance insights, so confirm whether that's the case for you before you disable tracking.
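+
+As a quick sketch of working with the extension, you can check the current
+tracking level and look at the most frequently executed statements recorded
+so far (only columns common to all supported PostgreSQL versions are used
+here):
+
+```sql
+SHOW pg_stat_statements.track;
+
+-- five most frequently executed statements
+SELECT query, calls
+  FROM pg_stat_statements
+ ORDER BY calls DESC
+ LIMIT 5;
+```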
+
+## dblink and postgres_fdw
+
+You can use dblink and postgres\_fdw to connect from one PostgreSQL server to
+another, or to another database in the same server. The receiving server needs
+to allow connections from the sending server through its firewall. To use
+these extensions to connect between Azure Cosmos DB for PostgreSQL servers or
+clusters, set **Allow Azure services and resources to access this cluster (or
+server)** to ON. You also need to turn this setting ON if you want to use the
+extensions to loop back to the same server. The **Allow Azure services and
+resources to access this cluster** setting can be found in the Azure portal
+page for the cluster under **Networking**. Currently, outbound connections
+from Azure Cosmos DB for PostgreSQL aren't supported.
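+
+As a hedged sketch (the host name and credentials are placeholders), a dblink
+query from one cluster to another looks like the following once the firewall
+setting above is enabled:
+
+```sql
+-- run a remote query and define the shape of the returned row
+SELECT remote_count
+  FROM dblink(
+         'host=c.othercluster.postgres.database.azure.com port=5432 dbname=citus user=citus password=<password> sslmode=require',
+         'SELECT count(*) FROM github_events')
+       AS t(remote_count bigint);
+```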
cosmos-db Reference Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-functions.md
+
+ Title: SQL functions – Azure Cosmos DB for PostgreSQL
+description: Functions in the Azure Cosmos DB for PostgreSQL SQL API
+++++ Last updated : 02/24/2022++
+# Azure Cosmos DB for PostgreSQL functions
++
+This section contains reference information for the user-defined functions
+provided by Azure Cosmos DB for PostgreSQL. These functions help in providing
+distributed functionality to Azure Cosmos DB for PostgreSQL.
+
+> [!NOTE]
+>
+> Clusters running older versions of the Citus engine may not
+> offer all the functions listed below.
+
+## Table and Shard DDL
+
+### create\_distributed\_table
+
+The create\_distributed\_table() function is used to define a distributed table
+and create its shards if it's a hash-distributed table. This function takes in
+a table name, the distribution column, and an optional distribution method and
+inserts appropriate metadata to mark the table as distributed. The function
+defaults to 'hash' distribution if no distribution method is specified. If the
+table is hash-distributed, the function also creates worker shards based on the
+shard count and shard replication factor configuration values. If the table
+contains any rows, they're automatically distributed to worker nodes.
+
+This function replaces usage of master\_create\_distributed\_table() followed
+by master\_create\_worker\_shards().
+
+#### Arguments
+
+**table\_name:** Name of the table that needs to be distributed.
+
+**distribution\_column:** The column on which the table is to be
+distributed.
+
+**distribution\_type:** (Optional) The method according to which the
+table is to be distributed. Permissible values are append or hash, with
+a default value of 'hash'.
+
+**colocate\_with:** (Optional) include current table in the colocation group
+of another table. By default tables are colocated when they're distributed by
+columns of the same type, have the same shard count, and have the same
+replication factor. Possible values for `colocate_with` are `default`, `none`
+to start a new colocation group, or the name of another table to colocate
+with that table. (See [table colocation](concepts-colocation.md).)
+
+Keep in mind that the default value of `colocate_with` does implicit
+colocation. [Colocation](concepts-colocation.md)
+can be a great thing when tables are related or will be joined. However, when
+two tables are unrelated but happen to use the same datatype for their
+distribution columns, accidentally colocating them can decrease performance
+during [shard rebalancing](howto-scale-rebalance.md). The
+table shards will be moved together unnecessarily in a "cascade."
+
+If a new distributed table isn't related to other tables, it's best to
+specify `colocate_with => 'none'`.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+This example informs the database that the github\_events table should
+be distributed by hash on the repo\_id column.
+
+```postgresql
+SELECT create_distributed_table('github_events', 'repo_id');
+
+-- alternatively, to be more explicit:
+SELECT create_distributed_table('github_events', 'repo_id',
+ colocate_with => 'github_repo');
+```
+
+### truncate\_local\_data\_after\_distributing\_table
+
+Truncate all local rows after distributing a table, and prevent constraints
+from failing due to outdated local records. The truncation cascades to tables
+having a foreign key to the designated table. If the referring tables aren't themselves distributed, then truncation is forbidden until they are, to protect
+referential integrity:
+
+```
+ERROR: cannot truncate a table referenced in a foreign key constraint by a local table
+```
+
+Truncating local coordinator node table data is safe for distributed tables
+because their rows, if any, are copied to worker nodes during
+distribution.
+
+#### Arguments
+
+**table_name:** Name of the distributed table whose local counterpart on the
+coordinator node should be truncated.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+```postgresql
+-- requires that argument is a distributed table
+SELECT truncate_local_data_after_distributing_table('public.github_events');
+```
+
+### create\_reference\_table
+
+The create\_reference\_table() function is used to define a small
+reference or dimension table. This function takes in a table name, and
+creates a distributed table with just one shard, replicated to every
+worker node.
+
+#### Arguments
+
+**table\_name:** Name of the small dimension or reference table that
+needs to be distributed.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+This example informs the database that the nation table should be
+defined as a reference table.
+
+```postgresql
+SELECT create_reference_table('nation');
+```
+
+### alter_distributed_table
+
+The alter_distributed_table() function can be used to change the distribution
+column, shard count or colocation properties of a distributed table.
+
+#### Arguments
+
+**table\_name:** Name of the table that will be altered.
+
+**distribution\_column:** (Optional) Name of the new distribution column.
+
+**shard\_count:** (Optional) The new shard count.
+
+**colocate\_with:** (Optional) The table that the current distributed table
+will be colocated with. Possible values are `default`, `none` to start a new
+colocation group, or the name of another table with which to colocate. (See
+[table colocation](concepts-colocation.md).)
+
+**cascade_to_colocated:** (Optional) When this argument is set to "true",
+`shard_count` and `colocate_with` changes will also be applied to all of the
+tables that were previously colocated with the table, and the colocation will
+be preserved. If it is "false", the current colocation of this table will be
+broken.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+```postgresql
+-- change distribution column
+SELECT alter_distributed_table('github_events', distribution_column:='event_id');
+
+-- change shard count of all tables in colocation group
+SELECT alter_distributed_table('github_events', shard_count:=6, cascade_to_colocated:=true);
+
+-- change colocation
+SELECT alter_distributed_table('github_events', colocate_with:='another_table');
+```
+
+### update_distributed_table_colocation
+
+The update_distributed_table_colocation() function is used to update colocation
+of a distributed table. This function can also be used to break colocation of a
+distributed table. Azure Cosmos DB for PostgreSQL implicitly colocates two tables if their
+distribution columns have the same type, which can be useful if the tables are
+related and will be joined. If tables A and B are colocated, and table A
+gets rebalanced, table B will also be rebalanced. If table B doesn't have a
+replica identity, the rebalance will fail. Therefore, this function can be
+useful for breaking the implicit colocation in that case.
+
+This function doesn't move any data around physically.
+
+#### Arguments
+
+**table_name:** Name of the table whose colocation will be updated.
+
+**colocate_with:** The table with which it should be colocated.
+
+If you want to break the colocation of a table, you should specify
+`colocate_with => 'none'`.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+This example updates the colocation of table A to match the colocation of
+table B.
+
+```postgresql
+SELECT update_distributed_table_colocation('A', colocate_with => 'B');
+```
+
+Assume that table A and table B are colocated (possibly implicitly). To break
+the colocation:
+
+```postgresql
+SELECT update_distributed_table_colocation('A', colocate_with => 'none');
+```
+
+Now, assume that table A, table B, table C and table D are colocated and you
+want to colocate table A and table B together, and table C and table D
+together:
+
+```postgresql
+SELECT update_distributed_table_colocation('C', colocate_with => 'none');
+SELECT update_distributed_table_colocation('D', colocate_with => 'C');
+```
+
+If you have a hash distributed table named none and you want to update its
+colocation, you can do:
+
+```postgresql
+SELECT update_distributed_table_colocation('"none"', colocate_with => 'some_other_hash_distributed_table');
+```
+
+### undistribute\_table
+
+The undistribute_table() function undoes the action of create_distributed_table
+or create_reference_table. Undistributing moves all data from shards back into
+a local table on the coordinator node (assuming the data can fit), then deletes
+the shards.
+
+Azure Cosmos DB for PostgreSQL won't undistribute tables that have--or are referenced by--foreign
+keys, unless the cascade_via_foreign_keys argument is set to true. If this
+argument is false (or omitted), then you must manually drop the offending
+foreign key constraints before undistributing.
+
+#### Arguments
+
+**table_name:** Name of the distributed or reference table to undistribute.
+
+**cascade_via_foreign_keys:** (Optional) When this argument is set to "true,"
+undistribute_table also undistributes all tables that are related to table_name
+through foreign keys. Use caution with this parameter, because it can
+potentially affect many tables.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+This example distributes a `github_events` table and then undistributes it.
+
+```postgresql
+-- first distribute the table
+SELECT create_distributed_table('github_events', 'repo_id');
+
+-- undo that and make it local again
+SELECT undistribute_table('github_events');
+```
+
+### create\_distributed\_function
+
+Propagates a function from the coordinator node to workers, and marks it for
+distributed execution. When a distributed function is called on the
+coordinator, Azure Cosmos DB for PostgreSQL uses the value of the "distribution argument"
+to pick a worker node to run the function. Executing the function on workers
+increases parallelism, and can bring the code closer to data in shards for
+lower latency.
+
+The Postgres search path isn't propagated from the coordinator to workers
+during distributed function execution, so distributed function code should
+fully qualify the names of database objects. Also, notices emitted by the
+functions won't be displayed to the user.
+
+#### Arguments
+
+**function\_name:** Name of the function to be distributed. The name
+must include the function's parameter types in parentheses, because
+multiple functions can have the same name in PostgreSQL. For instance,
+`'foo(int)'` is different from `'foo(int, text)'`.
+
+**distribution\_arg\_name:** (Optional) The argument name by which to
+distribute. For convenience (or if the function arguments don't have
+names), a positional placeholder is allowed, such as `'$1'`. If this
+parameter isn't specified, then the function named by `function_name`
+is merely created on the workers. If worker nodes are added in the
+future, the function will automatically be created there too.
+
+**colocate\_with:** (Optional) When the distributed function reads or writes to
+a distributed table (or, more generally, colocation group), be sure to name
+that table using the `colocate_with` parameter. Then each invocation of the
+function will run on the worker node containing relevant shards.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+```postgresql
+-- an example function which updates a hypothetical
+-- event_responses table which itself is distributed by event_id
+CREATE OR REPLACE FUNCTION
+ register_for_event(p_event_id int, p_user_id int)
+RETURNS void LANGUAGE plpgsql AS $fn$
+BEGIN
+ INSERT INTO event_responses VALUES ($1, $2, 'yes')
+ ON CONFLICT (event_id, user_id)
+ DO UPDATE SET response = EXCLUDED.response;
+END;
+$fn$;
+
+-- distribute the function to workers, using the p_event_id argument
+-- to determine which shard each invocation affects, and explicitly
+-- colocating with event_responses which the function updates
+SELECT create_distributed_function(
+ 'register_for_event(int, int)', 'p_event_id',
+ colocate_with := 'event_responses'
+);
+```
+
+### alter_columnar_table_set
+
+The alter_columnar_table_set() function changes settings on a [columnar
+table](concepts-columnar.md). Calling this function on a
+non-columnar table gives an error. All arguments except the table name are
+optional.
+
+To view current options for all columnar tables, consult this table:
+
+```postgresql
+SELECT * FROM columnar.options;
+```
+
+The default values for columnar settings for newly created tables can be
+overridden with these GUCs:
+
+* columnar.compression
+* columnar.compression_level
+* columnar.stripe_row_count
+* columnar.chunk_row_count
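+
+For example, a minimal sketch that overrides two of these defaults for tables
+created later in the current session:
+
+```postgresql
+SET columnar.compression TO 'lz4';
+SET columnar.stripe_row_count TO 100000;
+```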
+
+#### Arguments
+
+**table_name:** Name of the columnar table.
+
+**chunk_row_count:** (Optional) The maximum number of rows per chunk for
+newly inserted data. Existing chunks of data won't be changed and may have
+more rows than this maximum value. The default value is 10000.
+
+**stripe_row_count:** (Optional) The maximum number of rows per stripe for
+newly inserted data. Existing stripes of data won't be changed and may have
+more rows than this maximum value. The default value is 150000.
+
+**compression:** (Optional) `[none|pglz|zstd|lz4|lz4hc]` The compression type
+for newly inserted data. Existing data won't be recompressed or
+decompressed. The default and suggested value is zstd (if support has
+been compiled in).
+
+**compression_level:** (Optional) Valid settings are from 1 through 19. If the
+compression method doesn't support the level chosen, the closest level will be
+selected instead.
+
+#### Return value
+
+N/A
+
+#### Example
+
+```postgresql
+SELECT alter_columnar_table_set(
+ 'my_columnar_table',
+ compression => 'none',
+ stripe_row_count => 10000);
+```
+
+### alter_table_set_access_method
+
+The alter_table_set_access_method() function changes access method of a table
+(for example, heap or columnar).
+
+#### Arguments
+
+**table_name:** Name of the table whose access method will change.
+
+**access_method:** Name of the new access method.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+```postgresql
+SELECT alter_table_set_access_method('github_events', 'columnar');
+```
+
+### create_time_partitions
+
+The create_time_partitions() function creates partitions of a given interval to
+cover a given range of time.
+
+#### Arguments
+
+**table_name:** (regclass) table for which to create new partitions. The table
+must be partitioned on one column, of type date, timestamp, or timestamptz.
+
+**partition_interval:** an interval of time, such as `'2 hours'`, or `'1
+month'`, to use when setting ranges on new partitions.
+
+**end_at:** (timestamptz) create partitions up to this time. The last partition
+will contain the point end_at, and no later partitions will be created.
+
+**start_from:** (timestamptz, optional) pick the first partition so that it
+contains the point start_from. The default value is `now()`.
+
+#### Return Value
+
+True if it needed to create new partitions, false if they all existed already.
+
+#### Example
+
+```postgresql
+-- create a year's worth of monthly partitions
+-- in table foo, starting from the current time
+
+SELECT create_time_partitions(
+ table_name := 'foo',
+ partition_interval := '1 month',
+ end_at := now() + '12 months'
+);
+```
+
+### drop_old_time_partitions
+
+The drop_old_time_partitions() function removes all partitions whose intervals
+fall before a given timestamp. In addition to using this function, you might
+consider
+[alter_old_partitions_set_access_method](#alter_old_partitions_set_access_method)
+to compress the old partitions with columnar storage.
+
+#### Arguments
+
+**table_name:** (regclass) table for which to remove partitions. The table must
+be partitioned on one column, of type date, timestamp, or timestamptz.
+
+**older_than:** (timestamptz) drop partitions whose upper range is less than or
+equal to older_than.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+```postgresql
+-- drop partitions that are over a year old
+
+CALL drop_old_time_partitions('foo', now() - interval '12 months');
+```
+
+### alter_old_partitions_set_access_method
+
+In a timeseries use case, tables are often partitioned by time, and old
+partitions are compressed into read-only columnar storage. The
+alter_old_partitions_set_access_method() function changes the access method
+(for example, to columnar) of every partition whose range falls before the
+given timestamp.
+
+#### Arguments
+
+**parent_table_name:** (regclass) table for which to change partitions. The
+table must be partitioned on one column, of type date, timestamp, or
+timestamptz.
+
+**older_than:** (timestamptz) change partitions whose upper range is less than
+or equal to older_than.
+
+**new_access_method:** (name) either 'heap' for row-based storage, or
+'columnar' for columnar storage.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+```postgresql
+CALL alter_old_partitions_set_access_method(
+ 'foo', now() - interval '6 months',
+ 'columnar'
+);
+```
+
+## Metadata / Configuration Information
+
+### get\_shard\_id\_for\_distribution\_column
+
+Azure Cosmos DB for PostgreSQL assigns every row of a distributed table to a shard based on
+the value of the row's distribution column and the table's method of
+distribution. In most cases, the precise mapping is a low-level detail that the
+database administrator can ignore. However, it can be useful to determine a
+row's shard, either for manual database maintenance tasks or just to satisfy
+curiosity. The `get_shard_id_for_distribution_column` function provides this
+info for hash-distributed, range-distributed, and reference tables. It
+doesn't work for the append distribution.
+
+#### Arguments
+
+**table\_name:** The distributed table.
+
+**distribution\_value:** The value of the distribution column.
+
+#### Return Value
+
+The shard ID Azure Cosmos DB for PostgreSQL associates with the distribution column value
+for the given table.
+
+#### Example
+
+```postgresql
+SELECT get_shard_id_for_distribution_column('my_table', 4);
+
+ get_shard_id_for_distribution_column
+--------------------------------------
+ 540007
+(1 row)
+```
+
+### column\_to\_column\_name
+
+Translates the `partkey` column of `pg_dist_partition` into a textual column
+name. The translation is useful to determine the distribution column of a
+distributed table.
+
+For a more detailed discussion, see [choosing a distribution
+column](howto-choose-distribution-column.md).
+
+#### Arguments
+
+**table\_name:** The distributed table.
+
+**column\_var\_text:** The value of `partkey` in the `pg_dist_partition`
+table.
+
+#### Return Value
+
+The name of `table_name`'s distribution column.
+
+#### Example
+
+```postgresql
+-- get distribution column name for products table
+
+SELECT column_to_column_name(logicalrelid, partkey) AS dist_col_name
+ FROM pg_dist_partition
+ WHERE logicalrelid='products'::regclass;
+```
+
+Output:
+
+```
+┌───────────────┐
+│ dist_col_name │
+├───────────────┤
+│ company_id    │
+└───────────────┘
+```
+
+### citus\_relation\_size
+
+Get the disk space used by all the shards of the specified distributed table.
+The disk space includes the size of the "main fork," but excludes the
+visibility map and free space map for the shards.
+
+#### Arguments
+
+**logicalrelid:** the name of a distributed table.
+
+#### Return Value
+
+Size in bytes as a bigint.
+
+#### Example
+
+```postgresql
+SELECT pg_size_pretty(citus_relation_size('github_events'));
+```
+
+```
+pg_size_pretty
+----------------
+23 MB
+```
+
+### citus\_table\_size
+
+Get the disk space used by all the shards of the specified distributed table,
+excluding indexes (but including TOAST, free space map, and visibility map).
+
+#### Arguments
+
+**logicalrelid:** the name of a distributed table.
+
+#### Return Value
+
+Size in bytes as a bigint.
+
+#### Example
+
+```postgresql
+SELECT pg_size_pretty(citus_table_size('github_events'));
+```
+
+```
+pg_size_pretty
+----------------
+37 MB
+```
+
+### citus\_total\_relation\_size
+
+Get the total disk space used by all the shards of the specified
+distributed table, including all indexes and TOAST data.
+
+#### Arguments
+
+**logicalrelid:** the name of a distributed table.
+
+#### Return Value
+
+Size in bytes as a bigint.
+
+#### Example
+
+```postgresql
+SELECT pg_size_pretty(citus_total_relation_size('github_events'));
+```
+
+```
+pg_size_pretty
+----------------
+73 MB
+```
+
+### citus\_stat\_statements\_reset
+
+Removes all rows from
+[citus_stat_statements](reference-metadata.md#query-statistics-table).
+This function works independently from `pg_stat_statements_reset()`. To reset
+all stats, call both functions.
+
+#### Arguments
+
+N/A
+
+#### Return Value
+
+None
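+
+#### Example
+
+A minimal usage sketch that resets both the Citus-level and the
+PostgreSQL-level statement statistics:
+
+```postgresql
+SELECT citus_stat_statements_reset();
+SELECT pg_stat_statements_reset();
+```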
+
+### citus_get_active_worker_nodes
+
+The citus_get_active_worker_nodes() function returns a list of active worker
+host names and port numbers.
+
+#### Arguments
+
+N/A
+
+#### Return Value
+
+List of tuples where each tuple contains the following information:
+
+**node_name:** DNS name of the worker node
+
+**node_port:** Port on the worker node on which the database server is
+listening
+
+#### Example
+
+```postgresql
+SELECT * from citus_get_active_worker_nodes();
+ node_name | node_port
+-----------+-----------
+ localhost | 9700
+ localhost | 9702
+ localhost | 9701
+(3 rows)
+```
+
+## Cluster management and repair
+
+### master\_copy\_shard\_placement
+
+If a shard placement fails to be updated during a modification command or a DDL
+operation, then it gets marked as inactive. The master\_copy\_shard\_placement
+function can then be called to repair an inactive shard placement using data
+from a healthy placement.
+
+To repair a shard, the function first drops the unhealthy shard placement and
+recreates it using the schema on the coordinator. Once the shard placement is
+created, the function copies data from the healthy placement and updates the
+metadata to mark the new shard placement as healthy. This function ensures that
+the shard will be protected from any concurrent modifications during the
+repair.
+
+#### Arguments
+
+**shard\_id:** ID of the shard to be repaired.
+
+**source\_node\_name:** DNS name of the node on which the healthy shard
+placement is present (\"source\" node).
+
+**source\_node\_port:** The port on the source worker node on which the
+database server is listening.
+
+**target\_node\_name:** DNS name of the node on which the invalid shard
+placement is present (\"target\" node).
+
+**target\_node\_port:** The port on the target worker node on which the
+database server is listening.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+The example below will repair an inactive shard placement of shard
+12345, which is present on the database server running on 'bad\_host'
+on port 5432. To repair it, it will use data from a healthy shard
+placement present on the server running on 'good\_host' on port
+5432.
+
+```postgresql
+SELECT master_copy_shard_placement(12345, 'good_host', 5432, 'bad_host', 5432);
+```
+
+### master\_move\_shard\_placement
+
+This function moves a given shard (and shards colocated with it) from one node
+to another. It's typically used indirectly during shard rebalancing rather
+than being called directly by a database administrator.
+
+There are two ways to move the data: blocking or nonblocking. The blocking
+approach means that during the move all modifications to the shard are paused.
+The second way, which avoids blocking shard writes, relies on Postgres 10
+logical replication.
+
+After a successful move operation, shards in the source node get deleted. If
+the move fails at any point, this function throws an error and leaves the
+source and target nodes unchanged.
+
+#### Arguments
+
+**shard\_id:** ID of the shard to be moved.
+
+**source\_node\_name:** DNS name of the node on which the healthy shard
+placement is present (\"source\" node).
+
+**source\_node\_port:** The port on the source worker node on which the
+database server is listening.
+
+**target\_node\_name:** DNS name of the node on which the invalid shard
+placement is present (\"target\" node).
+
+**target\_node\_port:** The port on the target worker node on which the
+database server is listening.
+
+**shard\_transfer\_mode:** (Optional) Specify the method of replication,
+whether to use PostgreSQL logical replication or a cross-worker COPY
+command. The possible values are:
+
+> - `auto`: Require replica identity if logical replication is
+> possible, otherwise use legacy behaviour (e.g. for shard repair,
+> PostgreSQL 9.6). This is the default value.
+> - `force_logical`: Use logical replication even if the table
+> doesn't have a replica identity. Any concurrent update/delete
+> statements to the table will fail during replication.
+> - `block_writes`: Use COPY (blocking writes) for tables lacking
+> primary key or replica identity.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+```postgresql
+SELECT master_move_shard_placement(12345, 'from_host', 5432, 'to_host', 5432);
+```
+
+### rebalance\_table\_shards
+
+The rebalance\_table\_shards() function moves shards of the given table to
+distribute them evenly among the workers. The function first calculates the
+list of moves it needs to make in order to ensure that the cluster is
+balanced within the given threshold. Then, it moves shard placements one by one
+from the source node to the destination node and updates the corresponding
+shard metadata to reflect the move.
+
+Every shard is assigned a cost when determining whether shards are "evenly
+distributed." By default each shard has the same cost (a value of 1), so
+distributing to equalize the cost across workers is the same as equalizing the
+number of shards on each. The constant cost strategy is called
+"by_shard_count" and is the default rebalancing strategy.
+
+The default strategy is appropriate under these circumstances:
+
+* The shards are roughly the same size
+* The shards get roughly the same amount of traffic
+* Worker nodes are all the same size/type
+* Shards haven't been pinned to particular workers
+
+If any of these assumptions don't hold, then the default rebalancing
+can result in a bad plan. In this case you may customize the strategy,
+using the `rebalance_strategy` parameter.
+
+It's advisable to call
+[get_rebalance_table_shards_plan](#get_rebalance_table_shards_plan) before
+running rebalance\_table\_shards, to see and verify the actions to be
+performed.
+
+#### Arguments
+
+**table\_name:** (Optional) The name of the table whose shards need to
+be rebalanced. If NULL, then rebalance all existing colocation groups.
+
+**threshold:** (Optional) A float number between 0.0 and 1.0 that
+indicates the maximum difference ratio of node utilization from average
+utilization. For example, specifying 0.1 will cause the shard rebalancer
+to attempt to balance all nodes to hold the same number of shards ±10%.
+Specifically, the shard rebalancer will try to converge utilization of
+all worker nodes to the range (1 - threshold) * average_utilization to
+(1 + threshold) * average_utilization.
+
+**max\_shard\_moves:** (Optional) The maximum number of shards to move.
+
+**excluded\_shard\_list:** (Optional) Identifiers of shards that
+shouldn't be moved during the rebalance operation.
+
+**shard\_transfer\_mode:** (Optional) Specify the method of replication,
+whether to use PostgreSQL logical replication or a cross-worker COPY
+command. The possible values are:
+
+> - `auto`: Require replica identity if logical replication is
+> possible, otherwise use legacy behaviour (e.g. for shard repair,
+> PostgreSQL 9.6). This is the default value.
+> - `force_logical`: Use logical replication even if the table
+> doesn't have a replica identity. Any concurrent update/delete
+> statements to the table will fail during replication.
+> - `block_writes`: Use COPY (blocking writes) for tables lacking
+> primary key or replica identity.
+
+**drain\_only:** (Optional) When true, move shards off worker nodes that have
+`shouldhaveshards` set to false in
+[pg_dist_node](reference-metadata.md#worker-node-table); move no
+other shards.
+
+**rebalance\_strategy:** (Optional) the name of a strategy in
+[pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table).
+If this argument is omitted, the function chooses the default strategy, as
+indicated in the table.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+The example below will attempt to rebalance the shards of the
+github\_events table within the default threshold.
+
+```postgresql
+SELECT rebalance_table_shards('github_events');
+```
+
+This example usage will attempt to rebalance the github\_events table
+without moving shards with ID 1 and 2.
+
+```postgresql
+SELECT rebalance_table_shards('github_events', excluded_shard_list:='{1,2}');
+```
+
+### get\_rebalance\_table\_shards\_plan
+
+Output the planned shard movements of
+[rebalance_table_shards](#rebalance_table_shards) without performing them.
+While it's unlikely, get\_rebalance\_table\_shards\_plan can output a slightly
+different plan than what a rebalance\_table\_shards call with the same
+arguments will do. They aren't executed at the same time, so facts about the
+cluster, such as disk space, might differ between the calls.
+
+#### Arguments
+
+The same arguments as rebalance\_table\_shards: relation, threshold,
+max\_shard\_moves, excluded\_shard\_list, and drain\_only. See
+documentation of that function for the arguments' meaning.
+
+#### Return Value
+
+Tuples containing these columns:
+
+- **table\_name**: The table whose shards would move
+- **shardid**: The shard in question
+- **shard\_size**: Size in bytes
+- **sourcename**: Hostname of the source node
+- **sourceport**: Port of the source node
+- **targetname**: Hostname of the destination node
+- **targetport**: Port of the destination node
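+
+#### Example
+
+As a sketch, previewing the moves for the `github_events` table used in the
+examples above:
+
+```postgresql
+SELECT * FROM get_rebalance_table_shards_plan('github_events');
+```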
+
+### get\_rebalance\_progress
+
+Once a shard rebalance begins, the `get_rebalance_progress()` function lists
+the progress of every shard involved. It monitors the moves planned and
+executed by `rebalance_table_shards()`.
+
+#### Arguments
+
+N/A
+
+#### Return Value
+
+Tuples containing these columns:
+
+- **sessionid**: Postgres PID of the rebalance monitor
+- **table\_name**: The table whose shards are moving
+- **shardid**: The shard in question
+- **shard\_size**: Size in bytes
+- **sourcename**: Hostname of the source node
+- **sourceport**: Port of the source node
+- **targetname**: Hostname of the destination node
+- **targetport**: Port of the destination node
+- **progress**: 0 = waiting to be moved; 1 = moving; 2 = complete
+
+#### Example
+
+```sql
+SELECT * FROM get_rebalance_progress();
+```
+
+```
+ sessionid | table_name | shardid | shard_size | sourcename    | sourceport | targetname    | targetport | progress
+-----------+------------+---------+------------+---------------+------------+---------------+------------+----------
+      7083 | foo        |  102008 |    1204224 | n1.foobar.com |       5432 | n4.foobar.com |       5432 |        0
+      7083 | foo        |  102009 |    1802240 | n1.foobar.com |       5432 | n4.foobar.com |       5432 |        0
+      7083 | foo        |  102018 |     614400 | n2.foobar.com |       5432 | n4.foobar.com |       5432 |        1
+      7083 | foo        |  102019 |       8192 | n3.foobar.com |       5432 | n4.foobar.com |       5432 |        2
+```
+
+### citus\_add\_rebalance\_strategy
+
+Append a row to
+[pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table).
+
+#### Arguments
+
+For more about these arguments, see the corresponding column values in
+`pg_dist_rebalance_strategy`.
+
+**name:** identifier for the new strategy
+
+**shard\_cost\_function:** identifies the function used to determine the
+"cost" of each shard
+
+**node\_capacity\_function:** identifies the function to measure node
+capacity
+
+**shard\_allowed\_on\_node\_function:** identifies the function that
+determines which shards can be placed on which nodes
+
+**default\_threshold:** a floating point threshold that tunes how
+precisely the cumulative shard cost should be balanced between nodes
+
+**minimum\_threshold:** (Optional) a safeguard column that holds the
+minimum value allowed for the threshold argument of
+rebalance\_table\_shards(). Its default value is 0
+
+#### Return Value
+
+N/A
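+
+#### Example
+
+A sketch of registering a custom strategy, assuming a user-defined cost
+function such as the `cost_of_shard_by_number_of_queries` example in the
+[rebalancer strategy table](reference-metadata.md#rebalancer-strategy-table)
+article, combined with the built-in `citus_node_capacity_1` and
+`citus_shard_allowed_on_node_true` functions:
+
+```postgresql
+SELECT citus_add_rebalance_strategy(
+    'by_measured_load',
+    'cost_of_shard_by_number_of_queries',
+    'citus_node_capacity_1',
+    'citus_shard_allowed_on_node_true',
+    0.1,   -- default_threshold
+    0.01   -- minimum_threshold
+);
+```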
+
+### citus\_set\_default\_rebalance\_strategy
+
+Update the
+[pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table)
+table, changing the strategy named by its argument to be the default chosen
+when rebalancing shards.
+
+#### Arguments
+
+**name:** the name of the strategy in pg\_dist\_rebalance\_strategy
+
+#### Return Value
+
+N/A
+
+#### Example
+
+```postgresql
+SELECT citus_set_default_rebalance_strategy('by_disk_size');
+```
+
+### citus\_remote\_connection\_stats
+
+The citus\_remote\_connection\_stats() function shows the number of
+active connections to each remote node.
+
+#### Arguments
+
+N/A
+
+#### Example
+
+```postgresql
+SELECT * from citus_remote_connection_stats();
+```
+
+```
+    hostname    | port | database_name | connection_count_to_node
+----------------+------+---------------+--------------------------
+ citus_worker_1 | 5432 | postgres      |                        3
+(1 row)
+```
+
+### isolate\_tenant\_to\_new\_shard
+
+This function creates a new shard to hold rows with a specific single value in
+the distribution column. It's especially handy for the multi-tenant
+use case, where a large tenant can be placed alone on its own shard and
+ultimately its own physical node.
+
+#### Arguments
+
+**table\_name:** The name of the table to get a new shard.
+
+**tenant\_id:** The value of the distribution column that will be
+assigned to the new shard.
+
+**cascade\_option:** (Optional) When set to "CASCADE", also isolates a shard
+from all tables in the current table's [colocation
+group](concepts-colocation.md).
+
+#### Return Value
+
+**shard\_id:** The function returns the unique ID assigned to the newly
+created shard.
+
+#### Examples
+
+Create a new shard to hold the lineitems for tenant 135:
+
+```postgresql
+SELECT isolate_tenant_to_new_shard('lineitem', 135);
+```
+
+```
+ isolate_tenant_to_new_shard
+-----------------------------
+                      102240
+```
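+
+To also isolate the tenant's rows in tables colocated with `lineitem`, a
+sketch using the cascade option described above:
+
+```postgresql
+SELECT isolate_tenant_to_new_shard('lineitem', 135, 'CASCADE');
+```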
+
+## Next steps
+
+* Many of the functions in this article modify system [metadata tables](reference-metadata.md)
+* [Server parameters](reference-parameters.md) customize the behavior of some functions
cosmos-db Reference Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-limits.md
+
+ Title: Limits and limitations – Azure Cosmos DB for PostgreSQL
+description: Current limits for clusters
+++++ Last updated : 02/25/2022++
+# Azure Cosmos DB for PostgreSQL limits and limitations
++
+The following sections describe capacity and functional limits in the
+Azure Cosmos DB for PostgreSQL service.
+
+## Naming
+
+#### Cluster name
+
+A cluster must have a name that is 40 characters or
+shorter.
+
+## Networking
+
+### Maximum connections
+
+Every PostgreSQL connection (even idle ones) uses at least 10 MB of memory, so
+it's important to limit simultaneous connections. Here are the limits we chose
+to keep nodes healthy:
+
+* Maximum connections per node
+ * 300 for 0-3 vCores
+ * 500 for 4-15 vCores
+ * 1000 for 16+ vCores
+
+The connection limits above are for *user* connections (`max_connections` minus
+`superuser_reserved_connections`). We reserve extra connections for
+administration and recovery.
+
+The limits apply to both worker nodes and the coordinator node. Attempts to
+connect beyond these limits will fail with an error.
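+
+You can check the effective settings on any node; a quick sketch (the user
+connection limit is `max_connections` minus `superuser_reserved_connections`):
+
+```postgresql
+SHOW max_connections;
+SHOW superuser_reserved_connections;
+```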
+
+#### Connection pooling
+
+You can scale connections further using [connection
+pooling](concepts-connection-pool.md). Azure Cosmos DB for PostgreSQL offers a
+managed pgBouncer connection pooler configured for up to 2,000 simultaneous
+client connections.
+
+## Storage
+
+### Storage scaling
+
+Storage on coordinator and worker nodes can be scaled up (increased) but can't
+be scaled down (decreased).
+
+### Storage size
+
+Up to 2 TiB of storage is supported on coordinator and worker nodes. See
+[compute and storage options](resources-compute.md) for the available storage
+options and IOPS calculation for node and cluster sizes.
+
+## Compute
+
+### Subscription vCore limits
+
+Azure enforces a vCore quota per subscription per region. There are two
+independently adjustable quotas: vCores for coordinator nodes, and vCores for
+worker nodes. The default quota should be more than enough to experiment with
+Azure Cosmos DB for PostgreSQL. If you do need more vCores for a region in your
+subscription, see how to [adjust compute
+quotas](howto-compute-quota.md).
+
+## PostgreSQL
+
+### Database creation
+
+The Azure portal provides credentials to connect to exactly one database per
+cluster, the `citus` database. Creating another
+database is currently not allowed, and the CREATE DATABASE command will fail
+with an error.
+
+### Columnar storage
+
+Azure Cosmos DB for PostgreSQL currently has these limitations with [columnar
+tables](concepts-columnar.md):
+
+* Compression is on disk, not in memory
+* Append-only (no UPDATE/DELETE support)
+* No space reclamation (for example, rolled-back transactions may still consume
+ disk space)
+* No index support, index scans, or bitmap index scans
+* No tidscans
+* No sample scans
+* No TOAST support (large values supported inline)
+* No support for ON CONFLICT statements (except DO NOTHING actions with no
+ target specified).
+* No support for tuple locks (SELECT ... FOR SHARE, SELECT ... FOR UPDATE)
+* No support for serializable isolation level
+* Support for PostgreSQL server versions 12+ only
+* No support for foreign keys, unique constraints, or exclusion constraints
+* No support for logical decoding
+* No support for intra-node parallel scans
+* No support for AFTER ... FOR EACH ROW triggers
+* No UNLOGGED columnar tables
+* No TEMPORARY columnar tables
+
+## Next steps
+
+* Learn how to [create a cluster in the
+ portal](quickstart-create-portal.md).
+* Learn to enable [connection pooling](concepts-connection-pool.md).
cosmos-db Reference Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-metadata.md
+
+ Title: System tables – Azure Cosmos DB for PostgreSQL
+description: Metadata for distributed query execution
+++++ Last updated : 02/18/2022++
+# Azure Cosmos DB for PostgreSQL system tables and views
++
+Azure Cosmos DB for PostgreSQL creates and maintains special tables that contain
+information about distributed data in the cluster. The coordinator node
+consults these tables when planning how to run queries across the worker nodes.
+
+## Coordinator Metadata
+
+Azure Cosmos DB for PostgreSQL divides each distributed table into multiple logical shards
+based on the distribution column. The coordinator then maintains metadata
+tables to track statistics and information about the health and location of
+these shards.
+
+In this section, we describe each of these metadata tables and their schema.
+You can view and query these tables using SQL after logging into the
+coordinator node.
+
+> [!NOTE]
+>
+> Clusters running older versions of the Citus Engine may not
+> offer all the tables listed below.
+
+### Partition table
+
+The pg\_dist\_partition table stores metadata about which tables in the
+database are distributed. For each distributed table, it also stores
+information about the distribution method and detailed information about
+the distribution column.
+
+| Name | Type | Description |
+|--|-|-|
+| logicalrelid | regclass | Distributed table to which this row corresponds. This value references the relfilenode column in the pg_class system catalog table. |
+| partmethod | char | The method used for partitioning / distribution. The values of this column corresponding to different distribution methods are append: 'a', hash: 'h', reference table: 'n' |
+| partkey | text | Detailed information about the distribution column including column number, type and other relevant information. |
+| colocationid | integer | Colocation group to which this table belongs. Tables in the same group allow colocated joins and distributed rollups among other optimizations. This value references the colocationid column in the pg_dist_colocation table. |
+| repmodel | char | The method used for data replication. The values of this column corresponding to different replication methods are: Citus statement-based replication: 'c', postgresql streaming replication: 's', two-phase commit (for reference tables): 't' |
+
+```
+SELECT * from pg_dist_partition;
+ logicalrelid  | partmethod | partkey                                                                                                                 | colocationid | repmodel
+---------------+------------+-------------------------------------------------------------------------------------------------------------------------+--------------+----------
+ github_events | h          | {VAR :varno 1 :varattno 4 :vartype 20 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 4 :location -1}   | 2            | c
+(1 row)
+```
+
+### Shard table
+
+The pg\_dist\_shard table stores metadata about individual shards of a
+table. It records which distributed table each shard belongs to, and
+statistics about the distribution column for that shard.
+For append distributed tables, these statistics correspond to min / max
+values of the distribution column. For hash distributed tables,
+they're hash token ranges assigned to that shard. These statistics are
+used for pruning away unrelated shards during SELECT queries.
+
+| Name | Type | Description |
+||-|-|
+| logicalrelid | regclass | Distributed table to which this row corresponds. This value references the relfilenode column in the pg_class system catalog table. |
+| shardid | bigint | Globally unique identifier assigned to this shard. |
+| shardstorage | char | Type of storage used for this shard. Different storage types are discussed in the table below. |
+| shardminvalue | text | For append distributed tables, minimum value of the distribution column in this shard (inclusive). For hash distributed tables, minimum hash token value assigned to that shard (inclusive). |
+| shardmaxvalue | text | For append distributed tables, maximum value of the distribution column in this shard (inclusive). For hash distributed tables, maximum hash token value assigned to that shard (inclusive). |
+
+```
+SELECT * from pg_dist_shard;
+ logicalrelid  | shardid | shardstorage | shardminvalue | shardmaxvalue
+---------------+---------+--------------+---------------+---------------
+ github_events |  102026 | t            | 268435456     | 402653183
+ github_events |  102027 | t            | 402653184     | 536870911
+ github_events |  102028 | t            | 536870912     | 671088639
+ github_events |  102029 | t            | 671088640     | 805306367
+(4 rows)
+```
+
+#### Shard Storage Types
+
+The shardstorage column in pg\_dist\_shard indicates the type of storage
+used for the shard. A brief overview of different shard storage types
+and their representation is below.
+
+| Storage Type | Shardstorage value | Description |
+|--|--||
+| TABLE | 't' | Indicates that shard stores data belonging to a regular distributed table. |
+| COLUMNAR | 'c' | Indicates that shard stores columnar data. (Used by distributed cstore_fdw tables) |
+| FOREIGN | 'f' | Indicates that shard stores foreign data. (Used by distributed file_fdw tables) |
+
+### Shard information view
+
+In addition to the low-level shard metadata table described above, Azure Cosmos
+DB for PostgreSQL provides a `citus_shards` view to easily check:
+
+* Where each shard is (node, and port),
+* What kind of table it belongs to, and
+* Its size
+
+This view helps you inspect shards to find, among other things, any size
+imbalances across nodes.
+
+```postgresql
+SELECT * FROM citus_shards;
+ table_name | shardid | shard_name   | citus_table_type | colocation_id | nodename  | nodeport | shard_size
+------------+---------+--------------+------------------+---------------+-----------+----------+------------
+ dist       |  102170 | dist_102170  | distributed      |            34 | localhost |     9701 |   90677248
+ dist       |  102171 | dist_102171  | distributed      |            34 | localhost |     9702 |   90619904
+ dist       |  102172 | dist_102172  | distributed      |            34 | localhost |     9701 |   90701824
+ dist       |  102173 | dist_102173  | distributed      |            34 | localhost |     9702 |   90693632
+ ref        |  102174 | ref_102174   | reference        |             2 | localhost |     9701 |       8192
+ ref        |  102174 | ref_102174   | reference        |             2 | localhost |     9702 |       8192
+ dist2      |  102175 | dist2_102175 | distributed      |            34 | localhost |     9701 |     933888
+ dist2      |  102176 | dist2_102176 | distributed      |            34 | localhost |     9702 |     950272
+ dist2      |  102177 | dist2_102177 | distributed      |            34 | localhost |     9701 |     942080
+ dist2      |  102178 | dist2_102178 | distributed      |            34 | localhost |     9702 |     933888
+```
+
+The colocation_id refers to the colocation group.
+
+### Shard placement table
+
+The pg\_dist\_placement table tracks the location of shard replicas on
+worker nodes. Each replica of a shard assigned to a specific node is
+called a shard placement. This table stores information about the health
+and location of each shard placement.
+
+| Name | Type | Description |
+|-|--|-|
+| shardid | bigint | Shard identifier associated with this placement. This value references the shardid column in the pg_dist_shard catalog table. |
+| shardstate | int | Describes the state of this placement. Different shard states are discussed in the section below. |
+| shardlength | bigint | For append distributed tables, the size of the shard placement on the worker node in bytes. For hash distributed tables, zero. |
+| placementid | bigint | Unique autogenerated identifier for each individual placement. |
+| groupid | int | Denotes a group of one primary server and zero or more secondary servers when the streaming replication model is used. |
+
+```
+SELECT * from pg_dist_placement;
+ shardid | shardstate | shardlength | placementid | groupid
+---------+------------+-------------+-------------+---------
+  102008 |          1 |           0 |           1 |       1
+  102008 |          1 |           0 |           2 |       2
+  102009 |          1 |           0 |           3 |       2
+  102009 |          1 |           0 |           4 |       3
+  102010 |          1 |           0 |           5 |       3
+  102010 |          1 |           0 |           6 |       4
+  102011 |          1 |           0 |           7 |       4
+```
+
+#### Shard Placement States
+
+Azure Cosmos DB for PostgreSQL manages shard health on a per-placement basis. If a placement
+puts the system in an inconsistent state, Azure Cosmos DB for PostgreSQL automatically marks it as unavailable. Placement state is recorded in the pg_dist_shard_placement table,
+within the shardstate column. Here's a brief overview of different shard placement states:
+
+| State name | Shardstate value | Description |
+|||-|
+| FINALIZED | 1 | The state new shards are created in. Shard placements in this state are considered up to date and are used in query planning and execution. |
+| INACTIVE | 3 | Shard placements in this state are considered inactive due to being out-of-sync with other replicas of the same shard. The state can occur when an append, modification (INSERT, UPDATE, DELETE), or a DDL operation fails for this placement. The query planner will ignore placements in this state during planning and execution. Users can synchronize the data in these shards with a finalized replica as a background activity. |
+| TO_DELETE | 4 | If Azure Cosmos DB for PostgreSQL attempts to drop a shard placement in response to a master_apply_delete_command call and fails, the placement is moved to this state. Users can then delete these shards as a subsequent background activity. |
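+
+For example, a minimal check for placements that aren't in the FINALIZED
+state could look like this sketch:
+
+```postgresql
+SELECT shardid, shardstate, placementid, groupid
+FROM pg_dist_placement
+WHERE shardstate <> 1;
+```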
+
+### Worker node table
+
+The pg\_dist\_node table contains information about the worker nodes in
+the cluster.
+
+| Name | Type | Description |
+|||--|
+| nodeid | int | Autogenerated identifier for an individual node. |
+| groupid | int | Identifier used to denote a group of one primary server and zero or more secondary servers, when the streaming replication model is used. By default it's the same as the nodeid. |
+| nodename | text | Host Name or IP Address of the PostgreSQL worker node. |
+| nodeport | int | Port number on which the PostgreSQL worker node is listening. |
+| noderack | text | (Optional) Rack placement information for the worker node. |
+| hasmetadata | boolean | Reserved for internal use. |
+| isactive | boolean | Whether the node is active and accepting shard placements. |
+| noderole | text | Whether the node is a primary or secondary |
+| nodecluster | text | The name of the cluster containing this node |
+| shouldhaveshards | boolean | If false, shards will be moved off the node (drained) when rebalancing, and shards from new distributed tables won't be placed on the node, unless they're colocated with shards already there |
+
+```
+SELECT * from pg_dist_node;
+ nodeid | groupid | nodename  | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster | shouldhaveshards
+--------+---------+-----------+----------+----------+-------------+----------+----------+-------------+------------------
+      1 |       1 | localhost |    12345 | default  | f           | t        | primary  | default     | t
+      2 |       2 | localhost |    12346 | default  | f           | t        | primary  | default     | t
+      3 |       3 | localhost |    12347 | default  | f           | t        | primary  | default     | t
+(3 rows)
+```
+
+### Distributed object table
+
+The citus.pg\_dist\_object table contains a list of objects such as
+types and functions that have been created on the coordinator node and
+propagated to worker nodes. When an administrator adds new worker nodes
+to the cluster, Azure Cosmos DB for PostgreSQL automatically creates copies of the distributed
+objects on the new nodes (in the correct order to satisfy object
+dependencies).
+
+| Name | Type | Description |
+|--|||
+| classid | oid | Class of the distributed object |
+| objid | oid | Object ID of the distributed object |
+| objsubid | integer | Object sub ID of the distributed object, for example, attnum |
+| type | text | Part of the stable address used during pg upgrades |
+| object_names | text[] | Part of the stable address used during pg upgrades |
+| object_args | text[] | Part of the stable address used during pg upgrades |
+| distribution_argument_index | integer | Only valid for distributed functions/procedures |
+| colocationid | integer | Only valid for distributed functions/procedures |
+
+\"Stable addresses\" uniquely identify objects independently of a
+specific server. Azure Cosmos DB for PostgreSQL tracks objects during a PostgreSQL upgrade using
+stable addresses created with the
+[pg\_identify\_object\_as\_address()](https://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-INFO-OBJECT-TABLE)
+function.
+
+Here's an example of how `create_distributed_function()` adds entries
+to the `citus.pg_dist_object` table:
+
+```psql
+CREATE TYPE stoplight AS enum ('green', 'yellow', 'red');
+
+CREATE OR REPLACE FUNCTION intersection()
+RETURNS stoplight AS $$
+DECLARE
+ color stoplight;
+BEGIN
+ SELECT *
+ FROM unnest(enum_range(NULL::stoplight)) INTO color
+ ORDER BY random() LIMIT 1;
+ RETURN color;
+END;
+$$ LANGUAGE plpgsql VOLATILE;
+
+SELECT create_distributed_function('intersection()');
+
+-- will have two rows, one for the TYPE and one for the FUNCTION
+TABLE citus.pg_dist_object;
+```
+
+```
+-[ RECORD 1 ]---------------+------
+classid                     | 1247
+objid                       | 16780
+objsubid                    | 0
+type                        |
+object_names                |
+object_args                 |
+distribution_argument_index |
+colocationid                |
+-[ RECORD 2 ]---------------+------
+classid                     | 1255
+objid                       | 16788
+objsubid                    | 0
+type                        |
+object_names                |
+object_args                 |
+distribution_argument_index |
+colocationid                |
+```
+
+### Distributed tables view
+
+The `citus_tables` view shows a summary of all tables managed by Azure Cosmos
+DB for PostgreSQL (distributed and reference tables). The view combines
+information from Azure Cosmos DB for PostgreSQL metadata tables for an easy,
+human-readable overview of these table properties:
+
+* Table type
+* Distribution column
+* Colocation group ID
+* Human-readable size
+* Shard count
+* Owner (database user)
+* Access method (heap or columnar)
+
+Here's an example:
+
+```postgresql
+SELECT * FROM citus_tables;
+ table_name | citus_table_type | distribution_column | colocation_id | table_size | shard_count | table_owner | access_method
+------------+------------------+---------------------+---------------+------------+-------------+-------------+---------------
+ foo.test   | distributed      | test_column         |             1 | 0 bytes    |          32 | citus       | heap
+ ref        | reference        | <none>              |             2 | 24 GB      |           1 | citus       | heap
+ test       | distributed      | id                  |             1 | 248 TB     |          32 | citus       | heap
+```
+
+### Time partitions view
+
+Azure Cosmos DB for PostgreSQL provides UDFs to manage partitions for the Timeseries Data
+use case. It also maintains a `time_partitions` view to inspect the partitions
+it manages.
+
+Columns:
+
+* **parent_table**: the table that is partitioned
+* **partition_column**: the column on which the parent table is partitioned
+* **partition**: the name of a partition table
+* **from_value**: lower bound in time for rows in this partition
+* **to_value**: upper bound in time for rows in this partition
+* **access_method**: heap for row-based storage, and columnar for columnar
+  storage
+
+```postgresql
+SELECT * FROM time_partitions;
+ parent_table           | partition_column | partition                                | from_value          | to_value            | access_method
+------------------------+------------------+------------------------------------------+---------------------+---------------------+---------------
+ github_columnar_events | created_at       | github_columnar_events_p2015_01_01_0000 | 2015-01-01 00:00:00 | 2015-01-01 02:00:00 | columnar
+ github_columnar_events | created_at       | github_columnar_events_p2015_01_01_0200 | 2015-01-01 02:00:00 | 2015-01-01 04:00:00 | columnar
+ github_columnar_events | created_at       | github_columnar_events_p2015_01_01_0400 | 2015-01-01 04:00:00 | 2015-01-01 06:00:00 | columnar
+ github_columnar_events | created_at       | github_columnar_events_p2015_01_01_0600 | 2015-01-01 06:00:00 | 2015-01-01 08:00:00 | heap
+```
+
+### Colocation group table
+
+The pg\_dist\_colocation table contains information about which tables' shards
+should be placed together, or [colocated](concepts-colocation.md). When two
+tables are in the same colocation group, Azure Cosmos DB for PostgreSQL ensures shards with
+the same distribution column values will be placed on the same worker nodes.
+Colocation enables join optimizations, certain distributed rollups, and foreign
+key support. Shard colocation is inferred when the shard counts, replication
+factors, and partition column types all match between two tables; however, a
+custom colocation group may be specified when creating a distributed table, if
+so desired.
+
+| Name | Type | Description |
+|||-|
+| colocationid | int | Unique identifier for the colocation group this row corresponds to. |
+| shardcount | int | Shard count for all tables in this colocation group |
+| replicationfactor | int | Replication factor for all tables in this colocation group. |
+| distributioncolumntype | oid | The type of the distribution column for all tables in this colocation group. |
+
+```
+SELECT * from pg_dist_colocation;
+ colocationid | shardcount | replicationfactor | distributioncolumntype
+--------------+------------+-------------------+------------------------
+            2 |         32 |                 2 |                     20
+(1 row)
+```
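+
+To see which colocation group each distributed table belongs to, the
+`colocationid` column of `pg_dist_partition` (described above) can be
+inspected directly; a small sketch:
+
+```postgresql
+SELECT logicalrelid AS table_name, colocationid
+FROM pg_dist_partition
+ORDER BY colocationid;
+```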
+
+### Rebalancer strategy table
+
+This table defines strategies that
+[rebalance_table_shards](reference-functions.md#rebalance_table_shards)
+can use to determine where to move shards.
+
+| Name | Type | Description |
+|--|||
+| default_strategy | boolean | Whether rebalance_table_shards should choose this strategy by default. Use citus_set_default_rebalance_strategy to update this column |
+| shard_cost_function | regproc | Identifier for a cost function, which must take a shardid as bigint, and return its notion of a cost, as type real |
+| node_capacity_function | regproc | Identifier for a capacity function, which must take a nodeid as int, and return its notion of node capacity as type real |
+| shard_allowed_on_node_function | regproc | Identifier for a function that given shardid bigint, and nodeidarg int, returns boolean for whether Azure Cosmos DB for PostgreSQL may store the shard on the node |
+| default_threshold | float4 | Threshold for deeming a node too full or too empty, which determines when the rebalance_table_shards should try to move shards |
+| minimum_threshold | float4 | A safeguard to prevent the threshold argument of rebalance_table_shards() from being set too low |
+
+By default, Azure Cosmos DB for PostgreSQL ships with these strategies in the table:
+
+```postgresql
+SELECT * FROM pg_dist_rebalance_strategy;
+```
+
+```
+-[ RECORD 1 ]------------------+----------------------------------
+name                           | by_shard_count
+default_strategy               | true
+shard_cost_function            | citus_shard_cost_1
+node_capacity_function         | citus_node_capacity_1
+shard_allowed_on_node_function | citus_shard_allowed_on_node_true
+default_threshold              | 0
+minimum_threshold              | 0
+-[ RECORD 2 ]------------------+----------------------------------
+name                           | by_disk_size
+default_strategy               | false
+shard_cost_function            | citus_shard_cost_by_disk_size
+node_capacity_function         | citus_node_capacity_1
+shard_allowed_on_node_function | citus_shard_allowed_on_node_true
+default_threshold              | 0.1
+minimum_threshold              | 0.01
+```
+
+The default strategy, `by_shard_count`, assigns every shard the same
+cost. Its effect is to equalize the shard count across nodes. The other
+predefined strategy, `by_disk_size`, assigns a cost to each shard
+matching its disk size in bytes plus that of the shards that are
+colocated with it. The disk size is calculated using
+`pg_total_relation_size`, so it includes indices. This strategy attempts
+to achieve the same disk space on every node. Note the threshold of 0.1: it
+prevents unnecessary shard movement caused by insignificant differences in
+disk space.
+
+#### Creating custom rebalancer strategies
+
+Here are examples of functions that can be used within new shard rebalancer
+strategies, and registered in the
+[pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table)
+with the
+[citus_add_rebalance_strategy](reference-functions.md#citus_add_rebalance_strategy)
+function.
+
+- Setting a node capacity exception by hostname pattern:
+
+ ```postgresql
+ CREATE FUNCTION v2_node_double_capacity(nodeidarg int)
+ RETURNS real AS $$
+ SELECT
+ (CASE WHEN nodename LIKE '%.v2.worker.citusdata.com' THEN 2 ELSE 1 END)::real
+ FROM pg_dist_node where nodeid = nodeidarg
+ $$ LANGUAGE sql;
+ ```
+
+- Rebalancing by number of queries that go to a shard, as measured by the
+ [citus_stat_statements](reference-metadata.md#query-statistics-table):
+
+ ```postgresql
+ -- example of shard_cost_function
+
+ CREATE FUNCTION cost_of_shard_by_number_of_queries(shardid bigint)
+ RETURNS real AS $$
+ SELECT coalesce(sum(calls)::real, 0.001) as shard_total_queries
+ FROM citus_stat_statements
+ WHERE partition_key is not null
+ AND get_shard_id_for_distribution_column('tab', partition_key) = shardid;
+ $$ LANGUAGE sql;
+ ```
+
+- Isolating a specific shard (10000) on a node (address '10.0.0.1'):
+
+ ```postgresql
+ -- example of shard_allowed_on_node_function
+
+ CREATE FUNCTION isolate_shard_10000_on_10_0_0_1(shardid bigint, nodeidarg int)
+ RETURNS boolean AS $$
+ SELECT
+ (CASE WHEN nodename = '10.0.0.1' THEN shardid = 10000 ELSE shardid != 10000 END)
+ FROM pg_dist_node where nodeid = nodeidarg
+ $$ LANGUAGE sql;
+
+ -- The next two definitions are recommended in combination with the above function.
+ -- This way the average utilization of nodes is not impacted by the isolated shard.
+ CREATE FUNCTION no_capacity_for_10_0_0_1(nodeidarg int)
+ RETURNS real AS $$
+ SELECT
+ (CASE WHEN nodename = '10.0.0.1' THEN 0 ELSE 1 END)::real
+ FROM pg_dist_node where nodeid = nodeidarg
+ $$ LANGUAGE sql;
+ CREATE FUNCTION no_cost_for_10000(shardid bigint)
+ RETURNS real AS $$
+ SELECT
+ (CASE WHEN shardid = 10000 THEN 0 ELSE 1 END)::real
+ $$ LANGUAGE sql;
+ ```
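+
+Once functions like these exist, they can be registered together as a
+strategy. A sketch combining the three functions from the last example with
+the
+[citus_add_rebalance_strategy](reference-functions.md#citus_add_rebalance_strategy)
+function:
+
+```postgresql
+SELECT citus_add_rebalance_strategy(
+    'isolate_shard_10000',
+    'no_cost_for_10000',                 -- shard_cost_function
+    'no_capacity_for_10_0_0_1',          -- node_capacity_function
+    'isolate_shard_10000_on_10_0_0_1',   -- shard_allowed_on_node_function
+    0,                                   -- default_threshold
+    0                                    -- minimum_threshold
+);
+```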
+
+### Query statistics table
+
+Azure Cosmos DB for PostgreSQL provides `citus_stat_statements` for stats about how queries are
+being executed, and for whom. It's analogous to (and can be joined
+with) the
+[pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html)
+view in PostgreSQL, which tracks statistics about query speed.
+
+This view can trace queries to originating tenants in a multi-tenant
+application, which helps for deciding when to do tenant isolation.
+
+| Name | Type | Description |
+||--|-|
+| queryid | bigint | identifier (good for pg_stat_statements joins) |
+| userid | oid | user who ran the query |
+| dbid | oid | database instance of coordinator |
+| query | text | anonymized query string |
+| executor | text | Citus executor used: adaptive, real-time, task-tracker, router, or insert-select |
+| partition_key | text | value of distribution column in router-executed queries, else NULL |
+| calls | bigint | number of times the query was run |
+
+```sql
+-- create and populate distributed table
+create table foo ( id int );
+select create_distributed_table('foo', 'id');
+insert into foo select generate_series(1,100);
+
+-- enable stats
+-- pg_stat_statements must be in shared_preload libraries
+create extension pg_stat_statements;
+
+select count(*) from foo;
+select * from foo where id = 42;
+
+select * from citus_stat_statements;
+```
+
+Results:
+
+```
+-[ RECORD 1 ]-+-
+queryid | -909556869173432820
+userid | 10
+dbid | 13340
+query | insert into foo select generate_series($1,$2)
+executor | insert-select
+partition_key |
+calls | 1
+-[ RECORD 2 ]-+-
+queryid | 3919808845681956665
+userid | 10
+dbid | 13340
+query | select count(*) from foo;
+executor | adaptive
+partition_key |
+calls | 1
+-[ RECORD 3 ]-+-
+queryid | 5351346905785208738
+userid | 10
+dbid | 13340
+query | select * from foo where id = $1
+executor | adaptive
+partition_key | 42
+calls | 1
+```
+
+Caveats:
+
+- The stats data isn't replicated, and won't survive database
+ crashes or failover
+- Tracks a limited number of queries, set by the
+ `pg_stat_statements.max` GUC (default 5000)
+- To truncate the table, use the `citus_stat_statements_reset()`
+ function
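+
+Because `citus_stat_statements` shares the `queryid` column with
+`pg_stat_statements`, the two views can be joined to combine tenant
+information with timing data. A sketch, assuming PostgreSQL 13 or later where
+the timing column is named `total_exec_time`:
+
+```postgresql
+SELECT cs.partition_key AS tenant,
+       cs.calls,
+       pss.total_exec_time
+FROM citus_stat_statements cs
+JOIN pg_stat_statements pss USING (queryid)
+WHERE cs.partition_key IS NOT NULL
+ORDER BY pss.total_exec_time DESC;
+```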
+
+### Distributed Query Activity
+
+Azure Cosmos DB for PostgreSQL provides special views to watch queries and locks throughout the
+cluster, including shard-specific queries used internally to build
+results for distributed queries.
+
+- **citus\_dist\_stat\_activity**: shows the distributed queries that
+ are executing on all nodes. A superset of `pg_stat_activity`, usable
+ wherever the latter is.
+- **citus\_worker\_stat\_activity**: shows queries on workers,
+ including fragment queries against individual shards.
+- **citus\_lock\_waits**: Blocked queries throughout the cluster.
+
+The first two views include all columns of
+[pg\_stat\_activity](https://www.postgresql.org/docs/current/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW)
+plus the host/port of the worker that initiated the query and the
+host/port of the coordinator node of the cluster.
+
+For example, consider counting the rows in a distributed table:
+
+```postgresql
+-- run from worker on localhost:9701
+
+SELECT count(*) FROM users_table;
+```
+
+We can see that the query appears in `citus_dist_stat_activity`:
+
+```postgresql
+SELECT * FROM citus_dist_stat_activity;
+
+-[ RECORD 1 ]-+-
+query_hostname | localhost
+query_hostport | 9701
+master_query_host_name | localhost
+master_query_host_port | 9701
+transaction_number | 1
+transaction_stamp | 2018-10-05 13:27:20.691907+03
+datid | 12630
+datname | postgres
+pid | 23723
+usesysid | 10
+usename | citus
+application_name | psql
+client_addr |
+client_hostname |
+client_port | -1
+backend_start | 2018-10-05 13:27:14.419905+03
+xact_start | 2018-10-05 13:27:16.362887+03
+query_start | 2018-10-05 13:27:20.682452+03
+state_change | 2018-10-05 13:27:20.896546+03
+wait_event_type | Client
+wait_event | ClientRead
+state | idle in transaction
+backend_xid |
+backend_xmin |
+query | SELECT count(*) FROM users_table;
+backend_type | client backend
+```
+
+This query requires information from all shards. Some of the information is in
+shard `users_table_102038`, which happens to be stored in `localhost:9700`. We can
+see a query accessing the shard by looking at the `citus_worker_stat_activity`
+view:
+
+```postgresql
+SELECT * FROM citus_worker_stat_activity;
+
+-[ RECORD 1 ]-+--
+query_hostname | localhost
+query_hostport | 9700
+master_query_host_name | localhost
+master_query_host_port | 9701
+transaction_number | 1
+transaction_stamp | 2018-10-05 13:27:20.691907+03
+datid | 12630
+datname | postgres
+pid | 23781
+usesysid | 10
+usename | citus
+application_name | citus
+client_addr | ::1
+client_hostname |
+client_port | 51773
+backend_start | 2018-10-05 13:27:20.75839+03
+xact_start | 2018-10-05 13:27:20.84112+03
+query_start | 2018-10-05 13:27:20.867446+03
+state_change | 2018-10-05 13:27:20.869889+03
+wait_event_type | Client
+wait_event | ClientRead
+state | idle in transaction
+backend_xid |
+backend_xmin |
+query | COPY (SELECT count(*) AS count FROM users_table_102038 users_table WHERE true) TO STDOUT
+backend_type | client backend
+```
+
+The `query` field shows data being copied out of the shard to be
+counted.
+
+> [!NOTE]
+> If a router query (e.g. single-tenant in a multi-tenant application, `SELECT
+> * FROM table WHERE tenant_id = X`) is executed without a transaction block,
+> then master\_query\_host\_name and master\_query\_host\_port columns will be
+> NULL in citus\_worker\_stat\_activity.
+
+Here are examples of useful queries you can build using
+`citus_worker_stat_activity`:
+
+```postgresql
+-- active queries' wait events on a certain node
+
+SELECT query, wait_event_type, wait_event
+ FROM citus_worker_stat_activity
+ WHERE query_hostname = 'xxxx' and state='active';
+
+-- active queries' top wait events
+
+SELECT wait_event, wait_event_type, count(*)
+ FROM citus_worker_stat_activity
+ WHERE state='active'
+ GROUP BY wait_event, wait_event_type
+ ORDER BY count(*) desc;
+
+-- total internal connections generated per node by Azure Cosmos DB for PostgreSQL
+
+SELECT query_hostname, count(*)
+ FROM citus_worker_stat_activity
+ GROUP BY query_hostname;
+
+-- total internal active connections generated per node by Azure Cosmos DB for PostgreSQL
+
+SELECT query_hostname, count(*)
+ FROM citus_worker_stat_activity
+ WHERE state='active'
+ GROUP BY query_hostname;
+```
+
+The next view is `citus_lock_waits`. To see how it works, we can generate a
+locking situation manually. First we'll set up a test table from the
+coordinator:
+
+```postgresql
+CREATE TABLE numbers AS
+ SELECT i, 0 AS j FROM generate_series(1,10) AS i;
+SELECT create_distributed_table('numbers', 'i');
+```
+
+Then, using two sessions on the coordinator, we can run this sequence of
+statements:
+
+```postgresql
+-- session 1                              -- session 2
+-------------                             -------------
+BEGIN;
+UPDATE numbers SET j = 2 WHERE i = 1;
+                                          BEGIN;
+                                          UPDATE numbers SET j = 3 WHERE i = 1;
+                                          -- (this blocks)
+```
+
+The `citus_lock_waits` view shows the situation.
+
+```postgresql
+SELECT * FROM citus_lock_waits;
+
+-[ RECORD 1 ]-+-
+waiting_pid | 88624
+blocking_pid | 88615
+blocked_statement | UPDATE numbers SET j = 3 WHERE i = 1;
+current_statement_in_blocking_process | UPDATE numbers SET j = 2 WHERE i = 1;
+waiting_node_id | 0
+blocking_node_id | 0
+waiting_node_name | coordinator_host
+blocking_node_name | coordinator_host
+waiting_node_port | 5432
+blocking_node_port | 5432
+```
+
+In this example the queries originated on the coordinator, but the view
+can also list locks between queries originating on workers (executed
+with Azure Cosmos DB for PostgreSQL MX for instance).
+
+## Next steps
+
+* Learn how some [Azure Cosmos DB for PostgreSQL functions](reference-functions.md) alter system tables
+* Review the concepts of [nodes and tables](concepts-nodes.md)
cosmos-db Reference Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-overview.md
+
+ Title: Reference – Azure Cosmos DB for PostgreSQL
+description: Overview of the Azure Cosmos DB for PostgreSQL distributed SQL API
+++++ Last updated : 08/02/2022++
+# Azure Cosmos DB for PostgreSQL distributed SQL API
++
+Azure Cosmos DB for PostgreSQL includes features beyond
+standard PostgreSQL. Below is a categorized reference of functions and
+configuration options for:
+
+* Parallelizing query execution across shards
+* Managing sharded data between multiple servers
+* Compressing data with columnar storage
+* Automating timeseries partitioning
+
+## SQL functions
+
+### Sharding
+
+| Name | Description |
+||-|
+| [alter_distributed_table](reference-functions.md#alter_distributed_table) | Change the distribution column, shard count or colocation properties of a distributed table |
+| [citus_copy_shard_placement](reference-functions.md#master_copy_shard_placement) | Repair an inactive shard placement using data from a healthy placement |
+| [create_distributed_table](reference-functions.md#create_distributed_table) | Turn a PostgreSQL table into a distributed (sharded) table |
+| [create_reference_table](reference-functions.md#create_reference_table) | Maintain full copies of a table in sync across all nodes |
+| [isolate_tenant_to_new_shard](reference-functions.md#isolate_tenant_to_new_shard) | Create a new shard to hold rows with a specific single value in the distribution column |
+| [truncate_local_data_after_distributing_table](reference-functions.md#truncate_local_data_after_distributing_table) | Truncate all local rows after distributing a table |
+| [undistribute_table](reference-functions.md#undistribute_table) | Undo the action of create_distributed_table or create_reference_table |
+
+### Shard rebalancing
+
+| Name | Description |
+||-|
+| [citus_add_rebalance_strategy](reference-functions.md#citus_add_rebalance_strategy) | Append a row to `pg_dist_rebalance_strategy` |
+| [citus_move_shard_placement](reference-functions.md#master_move_shard_placement) | Typically used indirectly during shard rebalancing rather than being called directly by a database administrator |
+| [citus_set_default_rebalance_strategy](reference-functions.md#citus_set_default_rebalance_strategy) | Change the strategy named by its argument to be the default chosen when rebalancing shards |
+| [get_rebalance_progress](reference-functions.md#get_rebalance_progress) | Monitor the moves planned and executed by `rebalance_table_shards` |
+| [get_rebalance_table_shards_plan](reference-functions.md#get_rebalance_table_shards_plan) | Output the planned shard movements of rebalance_table_shards without performing them |
+| [rebalance_table_shards](reference-functions.md#rebalance_table_shards) | Move shards of the given table to distribute them evenly among the workers |
+
+### Colocation
+
+| Name | Description |
+||-|
+| [create_distributed_function](reference-functions.md#create_distributed_function) | Make function run on workers near colocated shards |
+| [update_distributed_table_colocation](reference-functions.md#update_distributed_table_colocation) | Update or break colocation of a distributed table |
+
+### Columnar storage
+
+| Name | Description |
+||-|
+| [alter_columnar_table_set](reference-functions.md#alter_columnar_table_set) | Change settings on a columnar table |
+| [alter_table_set_access_method](reference-functions.md#alter_table_set_access_method) | Convert a table between heap or columnar storage |
+
+### Timeseries partitioning
+
+| Name | Description |
+||-|
+| [alter_old_partitions_set_access_method](reference-functions.md#alter_old_partitions_set_access_method) | Change storage method of partitions |
+| [create_time_partitions](reference-functions.md#create_time_partitions) | Create partitions of a given interval to cover a given range of time |
+| [drop_old_time_partitions](reference-functions.md#drop_old_time_partitions) | Remove all partitions whose intervals fall before a given timestamp |
+
+### Informational
+
+| Name | Description |
+||-|
+| [citus_get_active_worker_nodes](reference-functions.md#citus_get_active_worker_nodes) | Get active worker host names and port numbers |
+| [citus_relation_size](reference-functions.md#citus_relation_size) | Get disk space used by all the shards of the specified distributed table |
+| [citus_remote_connection_stats](reference-functions.md#citus_remote_connection_stats) | Show the number of active connections to each remote node |
+| [citus_stat_statements_reset](reference-functions.md#citus_stat_statements_reset) | Remove all rows from `citus_stat_statements` |
+| [citus_table_size](reference-functions.md#citus_table_size) | Get disk space used by all the shards of the specified distributed table, excluding indexes |
+| [citus_total_relation_size](reference-functions.md#citus_total_relation_size) | Get total disk space used by all the shards of the specified distributed table, including all indexes and TOAST data |
+| [column_to_column_name](reference-functions.md#column_to_column_name) | Translate the `partkey` column of `pg_dist_partition` into a textual column name |
+| [get_shard_id_for_distribution_column](reference-functions.md#get_shard_id_for_distribution_column) | Find the shard ID associated with a value of the distribution column |
+
+## Server parameters
+
+### Query execution
+
+| Name | Description |
+||-|
+| [citus.all_modifications_commutative](reference-parameters.md#citusall_modifications_commutative) | Allow all commands to claim a shared lock |
+| [citus.count_distinct_error_rate](reference-parameters.md#cituscount_distinct_error_rate-floating-point) | Tune error rate of postgresql-hll approximate counting |
+| [citus.enable_repartition_joins](reference-parameters.md#citusenable_repartition_joins-boolean) | Allow JOINs made on non-distribution columns |
+| [citus.enable_repartitioned_insert_select](reference-parameters.md#citusenable_repartitioned_insert_select-boolean) | Allow repartitioning rows from the SELECT statement and transferring them between workers for insertion |
+| [citus.limit_clause_row_fetch_count](reference-parameters.md#cituslimit_clause_row_fetch_count-integer) | The number of rows to fetch per task for limit clause optimization |
+| [citus.local_table_join_policy](reference-parameters.md#cituslocal_table_join_policy-enum) | Where data moves when doing a join between local and distributed tables |
+| [citus.multi_shard_commit_protocol](reference-parameters.md#citusmulti_shard_commit_protocol-enum) | The commit protocol to use when performing COPY on a hash distributed table |
+| [citus.propagate_set_commands](reference-parameters.md#cituspropagate_set_commands-enum) | Which SET commands are propagated from the coordinator to workers |
+| [citus.create_object_propagation](reference-parameters.md#cituscreate_object_propagation-enum) | Behavior of CREATE statements in transactions for supported objects |
+| [citus.use_citus_managed_tables](reference-parameters.md#citususe_citus_managed_tables-boolean) | Allow local tables to be accessed in worker node queries |
+
+### Informational
+
+| Name | Description |
+||-|
+| [citus.explain_all_tasks](reference-parameters.md#citusexplain_all_tasks-boolean) | Make EXPLAIN output show all tasks |
+| [citus.explain_analyze_sort_method](reference-parameters.md#citusexplain_analyze_sort_method-enum) | Sort method of the tasks in the output of EXPLAIN ANALYZE |
+| [citus.log_remote_commands](reference-parameters.md#cituslog_remote_commands-boolean) | Log queries the coordinator sends to worker nodes |
+| [citus.multi_task_query_log_level](reference-parameters.md#citusmulti_task_query_log_level-enum-multi_task_logging) | Log-level for any query that generates more than one task |
+| [citus.stat_statements_max](reference-parameters.md#citusstat_statements_max-integer) | Max number of rows to store in `citus_stat_statements` |
+| [citus.stat_statements_purge_interval](reference-parameters.md#citusstat_statements_purge_interval-integer) | Frequency at which the maintenance daemon removes records from `citus_stat_statements` that are unmatched in `pg_stat_statements` |
+| [citus.stat_statements_track](reference-parameters.md#citusstat_statements_track-enum) | Enable/disable statement tracking |
+| [citus.show_shards_for_app_name_prefixes](reference-parameters.md#citusshow_shards_for_app_name_prefixes-text) | Allows shards to be displayed for selected clients that want to see them |
+| [citus.override_table_visibility](reference-parameters.md#citusoverride_table_visibility-boolean) | Enable/disable shard hiding |
+
+### Inter-node connection management
+
+| Name | Description |
+||-|
+| [citus.executor_slow_start_interval](reference-parameters.md#citusexecutor_slow_start_interval-integer) | Time to wait in milliseconds between opening connections to the same worker node |
+| [citus.force_max_query_parallelization](reference-parameters.md#citusforce_max_query_parallelization-boolean) | Open as many connections as possible |
+| [citus.max_adaptive_executor_pool_size](reference-parameters.md#citusmax_adaptive_executor_pool_size-integer) | Max worker connections per session |
+| [citus.max_cached_conns_per_worker](reference-parameters.md#citusmax_cached_conns_per_worker-integer) | Number of connections kept open to speed up subsequent commands |
+| [citus.node_connection_timeout](reference-parameters.md#citusnode_connection_timeout-integer) | Max duration (in milliseconds) to wait for connection establishment |
+
+### Data transfer
+
+| Name | Description |
+||-|
+| [citus.enable_binary_protocol](reference-parameters.md#citusenable_binary_protocol-boolean) | Use PostgreSQL's binary serialization format (when applicable) to transfer data with workers |
+| [citus.max_intermediate_result_size](reference-parameters.md#citusmax_intermediate_result_size-integer) | Size in KB of intermediate results for CTEs and subqueries that are unable to be pushed down |
+
+### Deadlock
+
+| Name | Description |
+||-|
+| [citus.distributed_deadlock_detection_factor](reference-parameters.md#citusdistributed_deadlock_detection_factor-floating-point) | Time to wait before checking for distributed deadlocks |
+| [citus.log_distributed_deadlock_detection](reference-parameters.md#cituslog_distributed_deadlock_detection-boolean) | Whether to log distributed deadlock detection-related processing in the server log |
+
+## System tables
+
+The coordinator node contains metadata tables and views to
+help you see data properties and query activity across the cluster.
+
+| Name | Description |
+||-|
+| [citus_dist_stat_activity](reference-metadata.md#distributed-query-activity) | Distributed queries that are executing on all nodes |
+| [citus_lock_waits](reference-metadata.md#distributed-query-activity) | Queries blocked throughout the cluster |
+| [citus_shards](reference-metadata.md#shard-information-view) | The location of each shard, the type of table it belongs to, and its size |
+| [citus_stat_statements](reference-metadata.md#query-statistics-table) | Stats about how queries are being executed, and for whom |
+| [citus_tables](reference-metadata.md#distributed-tables-view) | A summary of all distributed and reference tables |
+| [citus_worker_stat_activity](reference-metadata.md#distributed-query-activity) | Queries on workers, including tasks on individual shards |
+| [pg_dist_colocation](reference-metadata.md#colocation-group-table) | Which tables' shards should be placed together |
+| [pg_dist_node](reference-metadata.md#worker-node-table) | Information about worker nodes in the cluster |
+| [pg_dist_object](reference-metadata.md#distributed-object-table) | Objects such as types and functions that have been created on the coordinator node and propagated to worker nodes |
+| [pg_dist_placement](reference-metadata.md#shard-placement-table) | The location of shard replicas on worker nodes |
+| [pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table) | Strategies that `rebalance_table_shards` can use to determine where to move shards |
+| [pg_dist_shard](reference-metadata.md#shard-table) | The table, distribution column, and value ranges for every shard |
+| [time_partitions](reference-metadata.md#time-partitions-view) | Information about each partition managed by such functions as `create_time_partitions` and `drop_old_time_partitions` |
+
+## Next steps
+
+* Learn some [useful diagnostic queries](howto-useful-diagnostic-queries.md)
+* Review the list of [configuration
+ parameters](reference-parameters.md#postgresql-parameters) in the underlying
+ PostgreSQL database.
cosmos-db Reference Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-parameters.md
+
+ Title: Server parameters – Azure Cosmos DB for PostgreSQL
+description: Parameters in the Azure Cosmos DB for PostgreSQL SQL API
+++++ Last updated : 08/02/2022++
+# Azure Cosmos DB for PostgreSQL server parameters
++
+Various server parameters affect the behavior of Azure Cosmos DB for PostgreSQL,
+some from standard PostgreSQL and some specific to Azure Cosmos DB for PostgreSQL.
+These parameters can be set in the Azure portal for a cluster.
+Under the **Settings** category, choose **Worker node parameters** or
+**Coordinator node parameters**. These pages allow you to set parameters for
+all worker nodes, or just for the coordinator node.
+
+## Azure Cosmos DB for PostgreSQL parameters
+
+> [!NOTE]
+>
+> Clusters running older versions of the Citus Engine may not
+> offer all the parameters listed below.
+
+### General configuration
+
+#### citus.use\_secondary\_nodes (enum)
+
+Sets the policy to use when choosing nodes for SELECT queries. If it's set to
+'always', then the planner will query only nodes whose noderole is marked as
+'secondary' in [pg_dist_node](reference-metadata.md#worker-node-table).
+
+The supported values for this enum are:
+
+- **never:** (default) All reads happen on primary nodes.
+- **always:** Reads run against secondary nodes instead, and
+ insert/update statements are disabled.
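+
+For example, to route a reporting session's reads to secondary nodes, a
+sketch (assuming secondary nodes exist in the cluster):
+
+```postgresql
+SET citus.use_secondary_nodes TO 'always';
+SELECT count(*) FROM github_events;
+```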
+
+#### citus.cluster\_name (text)
+
+Informs the coordinator node planner which cluster it coordinates. Once
+cluster\_name is set, the planner will query worker nodes in that
+cluster alone.
+
+#### citus.enable\_version\_checks (boolean)
+
+Upgrading Azure Cosmos DB for PostgreSQL version requires a server restart (to pick up the
+new shared-library), followed by the ALTER EXTENSION UPDATE command. The
+failure to execute both steps could potentially cause errors or crashes.
+Azure Cosmos DB for PostgreSQL thus validates that the version of the code and that of the
+extension match, and errors out if they don't.
+
+This value defaults to true, and is effective on the coordinator. In
+rare cases, complex upgrade processes may require setting this parameter
+to false, thus disabling the check.
+
+#### citus.log\_distributed\_deadlock\_detection (boolean)
+
+Whether to log distributed deadlock detection-related processing in the
+server log. It defaults to false.
+
+#### citus.distributed\_deadlock\_detection\_factor (floating point)
+
+Sets the time to wait before checking for distributed deadlocks. Specifically,
+the time to wait is this value multiplied by PostgreSQL's
+[deadlock\_timeout](https://www.postgresql.org/docs/current/static/runtime-config-locks.html)
+setting. The default value is `2`. A value of `-1` disables distributed
+deadlock detection.
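+
+As an illustrative sketch that follows the `ALTER DATABASE` pattern shown later for other parameters (the database name `foo` is a placeholder), the following waits three times `deadlock_timeout` before checking:
+
+```postgresql
+-- wait 3 x deadlock_timeout before checking for distributed deadlocks
+ALTER DATABASE foo
+SET citus.distributed_deadlock_detection_factor = 3;
+```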
+
+#### citus.node\_connection\_timeout (integer)
+
+The `citus.node_connection_timeout` GUC sets the maximum duration (in
+milliseconds) to wait for connection establishment. Azure Cosmos DB for PostgreSQL raises
+an error if the timeout elapses before at least one worker connection is
+established. This GUC affects connections from the coordinator to workers, and
+workers to each other.
+
+- Default: five seconds
+- Minimum: 10 milliseconds
+- Maximum: one hour
+
+```postgresql
+-- set to 30 seconds
+ALTER DATABASE foo
+SET citus.node_connection_timeout = 30000;
+```
+
+#### citus.log_remote_commands (boolean)
+
+Log all commands that the coordinator sends to worker nodes. For instance:
+
+```postgresql
+-- reveal the per-shard queries behind the scenes
+SET citus.log_remote_commands TO on;
+
+-- run a query on distributed table "github_users"
+SELECT count(*) FROM github_users;
+```
+
+The output reveals several queries running on workers because of the single
+`count(*)` query on the coordinator.
+
+```
+NOTICE: issuing SELECT count(*) AS count FROM public.github_events_102040 github_events WHERE true
+DETAIL: on server citus@private-c.demo.postgres.database.azure.com:5432 connectionId: 1
+NOTICE: issuing SELECT count(*) AS count FROM public.github_events_102041 github_events WHERE true
+DETAIL: on server citus@private-c.demo.postgres.database.azure.com:5432 connectionId: 1
+NOTICE: issuing SELECT count(*) AS count FROM public.github_events_102042 github_events WHERE true
+DETAIL: on server citus@private-c.demo.postgres.database.azure.com:5432 connectionId: 1
+... etc, one for each of the 32 shards
+```
+
+#### citus.show\_shards\_for\_app\_name\_prefixes (text)
+
+By default, Azure Cosmos DB for PostgreSQL hides shards from the list of tables PostgreSQL gives to SQL
+clients. It does this because there are multiple shards per distributed table,
+and the shards can be distracting to the SQL client.
+
+The `citus.show_shards_for_app_name_prefixes` GUC allows shards to be displayed
+for selected clients that want to see them. Its default value is ''.
+
+```postgresql
+-- show shards to psql only (hide in other clients, like pgAdmin)
+
+SET citus.show_shards_for_app_name_prefixes TO 'psql';
+
+-- also accepts a comma separated list
+
+SET citus.show_shards_for_app_name_prefixes TO 'psql,pg_dump';
+```
+
+Shard hiding can be disabled entirely using
+[citus.override_table_visibility](#citusoverride_table_visibility-boolean).
+
+#### citus.override\_table\_visibility (boolean)
+
+Determines whether
+[citus.show_shards_for_app_name_prefixes](#citusshow_shards_for_app_name_prefixes-text)
+is active. The default value is 'true'. When set to 'false', shards are visible
+to all client applications.
+
+#### citus.use\_citus\_managed\_tables (boolean)
+
+Allow new [local tables](concepts-nodes.md#type-3-local-tables) to be accessed
+by queries on worker nodes. Adds all newly created tables to Citus metadata
+when enabled. The default value is 'false'.
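+
+A minimal sketch of enabling the setting before creating a local table that should be queryable from worker nodes (`app_settings` is a hypothetical table):
+
+```postgresql
+SET citus.use_citus_managed_tables TO on;
+
+-- added to Citus metadata at creation time
+CREATE TABLE app_settings (key text PRIMARY KEY, value text);
+```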
+
+### Query Statistics
+
+#### citus.stat\_statements\_purge\_interval (integer)
+
+Sets the frequency at which the maintenance daemon removes records from
+[citus_stat_statements](reference-metadata.md#query-statistics-table)
+that are unmatched in `pg_stat_statements`. This configuration value sets the
+time interval between purges in seconds, with a default value of 10. A value of
+0 disables the purges.
+
+```postgresql
+SET citus.stat_statements_purge_interval TO 5;
+```
+
+This parameter is effective on the coordinator and can be changed at
+runtime.
+
+#### citus.stat_statements_max (integer)
+
+The maximum number of rows to store in `citus_stat_statements`. Defaults to
+50000, and may be changed to any value in the range 1000 - 10000000. Each row requires 140 bytes of storage, so setting `stat_statements_max` to its
+maximum value of 10M would consume 1.4 GB of memory.
+
+Changing this GUC won't take effect until PostgreSQL is restarted.
+
+#### citus.stat_statements_track (enum)
+
+Recording statistics for `citus_stat_statements` requires extra CPU resources.
+When the database is experiencing load, the administrator may wish to disable
+statement tracking. The `citus.stat_statements_track` GUC can turn tracking on
+and off.
+
+* **all:** (default) Track all statements.
+* **none:** Disable tracking.
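+
+For example, tracking might be paused during a load spike and resumed later:
+
+```postgresql
+-- pause statement tracking while the database is under heavy load
+SET citus.stat_statements_track TO 'none';
+
+-- resume tracking later
+SET citus.stat_statements_track TO 'all';
+```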
+
+### Data Loading
+
+#### citus.multi\_shard\_commit\_protocol (enum)
+
+Sets the commit protocol to use when performing COPY on a hash distributed
+table. On each individual shard placement, the COPY is performed in a
+transaction block to ensure that no data is ingested if an error occurs during
+the COPY. However, there's a particular failure case in which the COPY
+succeeds on all placements, but a (hardware) failure occurs before all
+transactions commit. This parameter can be used to prevent data loss in that
+case by choosing between the following commit protocols:
+
+- **2pc:** (default) The transactions in which COPY is performed on
+  the shard placements are first prepared using PostgreSQL's
+ [two-phase
+ commit](http://www.postgresql.org/docs/current/static/sql-prepare-transaction.html)
+ and then committed. Failed commits can be manually recovered or
+ aborted using COMMIT PREPARED or ROLLBACK PREPARED, respectively.
+ When using 2pc,
+ [max\_prepared\_transactions](http://www.postgresql.org/docs/current/static/runtime-config-resource.html)
+ should be increased on all the workers, typically to the same value
+ as max\_connections.
+- **1pc:** The transactions in which COPY is performed on the shard
+  placements are committed in a single round. Data may be lost if a
+ commit fails after COPY succeeds on all placements (rare).
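+
+As a sketch, a bulk load that favors speed over the extra safety of two-phase commit could switch protocols for the current session (keeping the default `2pc` is generally safer):
+
+```postgresql
+-- trade 2PC safety for slightly faster COPY in this session
+SET citus.multi_shard_commit_protocol TO '1pc';
+```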
+
+#### citus.shard\_replication\_factor (integer)
+
+Sets the replication factor for shards, that is, the number of nodes on which
+shards are placed, and defaults to 1. This parameter can be set at run-time
+and is effective on the coordinator. The ideal value for this parameter depends
+on the size of the cluster and rate of node failure. For example, you may want
+to increase this replication factor if you run large clusters and observe node
+failures on a more frequent basis.
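+
+The setting applies to tables distributed after it's changed. A sketch, where `events` and `tenant_id` are hypothetical names:
+
+```postgresql
+-- place each shard of subsequently distributed tables on two nodes
+SET citus.shard_replication_factor TO 2;
+
+SELECT create_distributed_table('events', 'tenant_id');
+```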
+
+### Planner Configuration
+
+#### citus.local_table_join_policy (enum)
+
+This GUC determines how Azure Cosmos DB for PostgreSQL moves data when doing a join between
+local and distributed tables. Customizing the join policy can help reduce the
+amount of data sent between worker nodes.
+
+Azure Cosmos DB for PostgreSQL sends either the local or the distributed tables to nodes as
+necessary to support the join. Copying table data is referred to as a
+"conversion." If a local table is converted, it's sent to any
+workers that need its data to perform the join. If a distributed table is
+converted, it's collected on the coordinator to support the join.
+The Azure Cosmos DB for PostgreSQL planner sends only the necessary rows when doing a conversion.
+
+There are four modes available to express conversion preference:
+
+* **auto:** (Default) Azure Cosmos DB for PostgreSQL will convert either all local or all distributed
+ tables to support local and distributed table joins. Azure Cosmos DB for PostgreSQL decides which to
+ convert using a heuristic. It will convert distributed tables if they're
+ joined using a constant filter on a unique index (such as a primary key). The
+ conversion ensures less data gets moved between workers.
+* **never:** Azure Cosmos DB for PostgreSQL won't allow joins between local and distributed tables.
+* **prefer-local:** Azure Cosmos DB for PostgreSQL will prefer converting local tables to support local
+ and distributed table joins.
+* **prefer-distributed:** Azure Cosmos DB for PostgreSQL will prefer converting distributed tables to
+ support local and distributed table joins. If the distributed tables are
+ huge, using this option might result in moving lots of data between workers.
+
+For example, assume `citus_table` is a distributed table distributed by the
+column `x`, and that `postgres_table` is a local table:
+
+```postgresql
+CREATE TABLE citus_table(x int primary key, y int);
+SELECT create_distributed_table('citus_table', 'x');
+
+CREATE TABLE postgres_table(x int, y int);
+
+-- even though the join is on primary key, there isn't a constant filter
+-- hence postgres_table will be sent to worker nodes to support the join
+SELECT * FROM citus_table JOIN postgres_table USING (x);
+
+-- there is a constant filter on a primary key, hence the filtered row
+-- from the distributed table will be pulled to coordinator to support the join
+SELECT * FROM citus_table JOIN postgres_table USING (x) WHERE citus_table.x = 10;
+
+SET citus.local_table_join_policy to 'prefer-distributed';
+-- since we prefer distributed tables, citus_table will be pulled to coordinator
+-- to support the join. Note that citus_table can be huge.
+SELECT * FROM citus_table JOIN postgres_table USING (x);
+
+SET citus.local_table_join_policy to 'prefer-local';
+-- even though there is a constant filter on primary key for citus_table
+-- postgres_table will be sent to necessary workers because we are using 'prefer-local'.
+SELECT * FROM citus_table JOIN postgres_table USING (x) WHERE citus_table.x = 10;
+```
+
+#### citus.limit\_clause\_row\_fetch\_count (integer)
+
+Sets the number of rows to fetch per task for limit clause optimization.
+In some cases, select queries with limit clauses may need to fetch all
+rows from each task to generate results. In those cases, and where an
+approximation would produce meaningful results, this configuration value
+sets the number of rows to fetch from each shard. Limit approximations
+are disabled by default and this parameter is set to -1. This value can
+be set at run-time and is effective on the coordinator.
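+
+A sketch with an illustrative per-shard row count:
+
+```postgresql
+-- fetch at most 10000 rows from each shard when approximating LIMIT results
+SET citus.limit_clause_row_fetch_count TO 10000;
+```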
+
+#### citus.count\_distinct\_error\_rate (floating point)
+
+Azure Cosmos DB for PostgreSQL can calculate count(distinct) approximates using the
+postgresql-hll extension. This configuration entry sets the desired
+error rate when calculating count(distinct). 0.0, which is the default,
+disables approximations for count(distinct); and 1.0 provides no
+guarantees about the accuracy of results. We recommend setting this
+parameter to 0.005 for best results. This value can be set at run-time
+and is effective on the coordinator.
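+
+A sketch of enabling approximate distinct counts at the recommended error rate (`page_views` and `user_id` are hypothetical names, and the postgresql-hll extension must be available):
+
+```postgresql
+SET citus.count_distinct_error_rate TO 0.005;
+
+-- now computed approximately with HyperLogLog
+SELECT count(DISTINCT user_id) FROM page_views;
+```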
+
+#### citus.task\_assignment\_policy (enum)
+
+> [!NOTE]
+> This GUC is applicable only when
+> [shard_replication_factor](reference-parameters.md#citusshard_replication_factor-integer)
+> is greater than one, or for queries against
+> [reference_tables](concepts-distributed-data.md#type-2-reference-tables).
+
+Sets the policy to use when assigning tasks to workers. The coordinator
+assigns tasks to workers based on shard locations. This configuration
+value specifies the policy to use when making these assignments.
+Currently, there are three possible task assignment policies that can
+be used.
+
+- **greedy:** The greedy policy is the default and aims to evenly
+ distribute tasks across workers.
+- **round-robin:** The round-robin policy assigns tasks to workers in
+ a round-robin fashion alternating between different replicas. This policy
+ enables better cluster utilization when the shard count for a
+ table is low compared to the number of workers.
+- **first-replica:** The first-replica policy assigns tasks based on the insertion order of placements (replicas) for the
+ shards. In other words, the fragment query for a shard is assigned to the worker that has the first replica of that shard.
+ This method allows you to have strong guarantees about which shards
+ will be used on which nodes (that is, stronger memory residency
+ guarantees).
+
+This parameter can be set at run-time and is effective on the
+coordinator.
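+
+For example, switching a session to round-robin assignment is a one-line change:
+
+```postgresql
+SET citus.task_assignment_policy TO 'round-robin';
+```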
+
+### Intermediate Data Transfer
+
+#### citus.max\_intermediate\_result\_size (integer)
+
+The maximum size in KB of intermediate results for CTEs that are unable
+to be pushed down to worker nodes for execution, and for complex
+subqueries. The default is 1 GB, and a value of -1 means no limit.
+Queries exceeding the limit will be canceled and produce an error
+message.
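+
+A sketch that lowers the cap for the current session (the value is in KB, so 262144 is 256 MB, and is illustrative):
+
+```postgresql
+-- cancel queries whose intermediate results exceed 256 MB
+SET citus.max_intermediate_result_size TO 262144;
+```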
+
+### Executor Configuration
+
+#### General
+
+##### citus.all\_modifications\_commutative
+
+Azure Cosmos DB for PostgreSQL enforces commutativity rules and acquires appropriate locks
+for modify operations in order to guarantee correctness of behavior. For
+example, it assumes that an INSERT statement commutes with another INSERT
+statement, but not with an UPDATE or DELETE statement. Similarly, it assumes
+that an UPDATE or DELETE statement doesn't commute with another UPDATE or
+DELETE statement. This precaution means that UPDATEs and DELETEs require
+Azure Cosmos DB for PostgreSQL to acquire stronger locks.
+
+If you have UPDATE statements that are commutative with your INSERTs or
+other UPDATEs, then you can relax these commutativity assumptions by
+setting this parameter to true. When this parameter is set to true, all
+commands are considered commutative and claim a shared lock, which can
+improve overall throughput. This parameter can be set at runtime and is
+effective on the coordinator.
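+
+If you've verified commutativity for your workload, the relaxation is a one-line setting; a sketch:
+
+```postgresql
+-- only if your UPDATE/DELETE statements are known to be commutative
+SET citus.all_modifications_commutative TO true;
+```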
+
+##### citus.remote\_task\_check\_interval (integer)
+
+Sets the frequency at which Azure Cosmos DB for PostgreSQL checks for statuses of jobs
+managed by the task tracker executor. It defaults to 10 ms. The coordinator
+assigns tasks to workers, and then regularly checks with them about each
+task's progress. This configuration value sets the time interval between two
+consecutive checks. This parameter is effective on the coordinator and can be
+set at runtime.
+
+##### citus.task\_executor\_type (enum)
+
+Azure Cosmos DB for PostgreSQL has three executor types for running distributed SELECT
+queries. The desired executor can be selected by setting this configuration
+parameter. The accepted values for this parameter are:
+
+- **adaptive:** The default. It's optimal for fast responses to
+ queries that involve aggregations and colocated joins spanning
+ across multiple shards.
+- **task-tracker:** The task-tracker executor is well suited for long
+ running, complex queries that require shuffling of data across
+ worker nodes and efficient resource management.
+- **real-time:** (deprecated) Serves a similar purpose as the adaptive
+ executor, but is less flexible and can cause more connection
+ pressure on worker nodes.
+
+This parameter can be set at run-time and is effective on the coordinator.
+
+##### citus.multi\_task\_query\_log\_level (enum) {#multi_task_logging}
+
+Sets a log-level for any query that generates more than one task (that is,
+which hits more than one shard). Logging is useful during a multi-tenant
+application migration, as you can choose to error or warn for such queries, to
+find them and add a tenant\_id filter to them. This parameter can be set at
+runtime and is effective on the coordinator. The default value for this
+parameter is 'off'.
+
+The supported values for this enum are:
+
+- **off:** Turn off logging any queries that generate multiple tasks
+ (that is, span multiple shards)
+- **debug:** Logs statement at DEBUG severity level.
+- **log:** Logs statement at LOG severity level. The log line will
+ include the SQL query that was run.
+- **notice:** Logs statement at NOTICE severity level.
+- **warning:** Logs statement at WARNING severity level.
+- **error:** Logs statement at ERROR severity level.
+
+It may be useful to use `error` during development testing,
+and a lower log-level like `log` during actual production deployment.
+Choosing `log` will cause multi-task queries to appear in the database
+logs with the query itself shown after "STATEMENT".
+
+```
+LOG: multi-task query about to be executed
+HINT: Queries are split to multiple tasks if they have to be split into several queries on the workers.
+STATEMENT: select * from foo;
+```
+
+##### citus.propagate_set_commands (enum)
+
+Determines which SET commands are propagated from the coordinator to workers.
+The default value for this parameter is 'none'.
+
+The supported values are:
+
+* **none:** no SET commands are propagated.
+* **local:** only SET LOCAL commands are propagated.
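+
+A sketch of propagating a transaction-local setting to the worker connections used by the transaction (the schema names and table are hypothetical):
+
+```postgresql
+SET citus.propagate_set_commands TO 'local';
+
+BEGIN;
+-- propagated to the workers participating in this transaction
+SET LOCAL search_path TO analytics, public;
+SELECT count(*) FROM page_views;
+COMMIT;
+```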
+
+##### citus.create\_object\_propagation (enum)
+
+Controls the behavior of CREATE statements in transactions for supported
+objects.
+
+When objects are created in a multi-statement transaction block, Azure Cosmos DB for PostgreSQL switches to
+sequential mode to ensure created objects are visible to later statements on
+shards. However, the switch to sequential mode is not always desirable. By
+overriding this behavior, the user can trade off performance for full
+transactional consistency in the creation of new objects.
+
+The default value for this parameter is 'immediate'.
+
+The supported values are:
+
+* **immediate:** raises an error in transactions where parallel operations like
+ create\_distributed\_table happen before an attempted CREATE TYPE.
+* **automatic:** defer creation of types when sharing a transaction with a
+ parallel operation on distributed tables. There may be some inconsistency
+ between which database objects exist on different nodes.
+* **deferred:** return to pre-11.0 behavior, which is like automatic but with
+ other subtle corner cases. We recommend the automatic setting over deferred,
+ unless you require true backward compatibility.
+
+For an example of this GUC in action, see [type
+propagation](howto-modify-distributed-tables.md#types-and-functions).
+
+##### citus.enable\_repartition\_joins (boolean)
+
+Ordinarily, attempting to perform repartition joins with the adaptive executor
+will fail with an error message. However setting
+`citus.enable_repartition_joins` to true allows Azure Cosmos DB for PostgreSQL to
+temporarily switch into the task-tracker executor to perform the join. The
+default value is false.
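+
+A sketch of allowing a join on non-distribution columns by enabling repartitioning (the tables and columns are hypothetical):
+
+```postgresql
+SET citus.enable_repartition_joins TO true;
+
+-- a join on a column other than the distribution column now repartitions data
+SELECT count(*)
+FROM orders o
+JOIN shipments s ON o.order_ref = s.order_ref;
+```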
+
+##### citus.enable_repartitioned_insert_select (boolean)
+
+By default, an INSERT INTO … SELECT statement that can’t be pushed down will
+attempt to repartition rows from the SELECT statement and transfer them between
+workers for insertion. However, if the target table has too many shards then
+repartitioning will probably not perform well. The overhead of processing the
+shard intervals when determining how to partition the results is too great.
+Repartitioning can be disabled manually by setting
+`citus.enable_repartitioned_insert_select` to false.
+
+##### citus.enable_binary_protocol (boolean)
+
+Setting this parameter to true instructs the coordinator node to use
+PostgreSQL's binary serialization format (when applicable) to transfer data
+with workers. Some column types don't support binary serialization.
+
+Enabling this parameter is mostly useful when the workers must return large
+amounts of data. Examples are when many rows are requested, the rows have
+many columns, or they use wide types such as `hll` from the postgresql-hll
+extension.
+
+The default value is true for Postgres versions 14 and higher. For Postgres
+versions 13 and lower the default is false, which means all results are encoded
+and transferred in text format.
+
+##### citus.max_adaptive_executor_pool_size (integer)
+
+The `citus.max_adaptive_executor_pool_size` GUC limits worker connections from the current
+session. This GUC is useful for:
+
+* Preventing a single backend from getting all the worker resources
+* Providing priority management: designate low priority sessions with low
+ max_adaptive_executor_pool_size, and high priority sessions with higher
+ values
+
+The default value is 16.
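+
+For example, a low-priority background session might be capped at a few worker connections (the value is illustrative):
+
+```postgresql
+-- keep this low-priority session from monopolizing worker connections
+SET citus.max_adaptive_executor_pool_size TO 4;
+```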
+
+##### citus.executor_slow_start_interval (integer)
+
+Time to wait in milliseconds between opening connections to the same worker
+node.
+
+When the individual tasks of a multi-shard query take little time, they
+can often be finished over a single (often already cached) connection. To avoid
+redundantly opening more connections, the executor waits between
+connection attempts for the configured number of milliseconds. At the end of
+the interval, it increases the number of connections it's allowed to open next
+time.
+
+For long queries (those taking >500 ms), slow start might add latency, but for
+short queries it's faster. The default value is 10 ms.
+
+##### citus.max_cached_conns_per_worker (integer)
+
+Each backend opens connections to the workers to query the shards. At the end
+of the transaction, the configured number of connections is kept open to speed
+up subsequent commands. Increasing this value will reduce the latency of
+multi-shard queries, but will also increase overhead on the workers.
+
+The default value is 1. A larger value such as 2 might be helpful for clusters
+that use a small number of concurrent sessions, but it's not wise to go much
+further (for example, 16 would be too high).
+
+##### citus.force_max_query_parallelization (boolean)
+
+Simulates the deprecated and now nonexistent real-time executor. This parameter is used
+to open as many connections as possible to maximize query parallelization.
+
+When this GUC is enabled, Azure Cosmos DB for PostgreSQL will force the adaptive executor to use as many
+connections as possible while executing a parallel distributed query. If not
+enabled, the executor might choose to use fewer connections to optimize overall
+query execution throughput. Internally, setting this parameter to true ends up using one
+connection per task.
+
+One place where this is useful is in a transaction whose first query is
+lightweight and requires few connections, while a subsequent query would
+benefit from more connections. Azure Cosmos DB for PostgreSQL decides how many connections to use in a
+transaction based on the first statement, which can throttle other queries
+unless we use the GUC to provide a hint.
+
+```postgresql
+BEGIN;
+-- add this hint
+SET citus.force_max_query_parallelization TO ON;
+
+-- a lightweight query that doesn't require many connections
+SELECT count(*) FROM table WHERE filter = x;
+
+-- a query that benefits from more connections, and can obtain
+-- them since we forced max parallelization above
+SELECT ... very .. complex .. SQL;
+COMMIT;
+```
+
+The default value is false.
+
+#### Task tracker executor configuration
+
+##### citus.task\_tracker\_delay (integer)
+
+This parameter sets the task tracker sleep time between task management rounds
+and defaults to 200 ms. The task tracker process wakes up regularly, walks over
+all tasks assigned to it, and schedules and executes these tasks. Then, the
+task tracker sleeps for a time period before walking over these tasks again.
+This configuration value determines the length of that sleeping period. This
+parameter is effective on the workers and needs to be changed in the
+postgresql.conf file. After editing the config file, users can send a SIGHUP
+signal or restart the server for the change to take effect.
+
+This parameter can be decreased to trim the delay caused by the task
+tracker executor by reducing the time gap between the management rounds.
+Decreasing the delay is useful in cases when the shard queries are short and
+hence update their status regularly.
+
+##### citus.max\_assign\_task\_batch\_size (integer)
+
+The task tracker executor on the coordinator synchronously assigns tasks in
+batches to the daemon on the workers. This parameter sets the maximum number of
+tasks to assign in a single batch. Choosing a larger batch size allows for
+faster task assignment. However, if the number of workers is large, then it may
+take longer for all workers to get tasks. This parameter can be set at runtime
+and is effective on the coordinator.
+
+##### citus.max\_running\_tasks\_per\_node (integer)
+
+The task tracker process schedules and executes the tasks assigned to it as
+appropriate. This configuration value sets the maximum number of tasks to
+execute concurrently on one node at any given time and defaults to 8.
+
+The limit ensures that you don't have many tasks hitting disk at the same
+time, and helps in avoiding disk I/O contention. If your queries are served
+from memory or SSDs, you can increase max\_running\_tasks\_per\_node without
+much concern.
+
+##### citus.partition\_buffer\_size (integer)
+
+Sets the buffer size to use for partition operations and defaults to 8 MB.
+Azure Cosmos DB for PostgreSQL allows for table data to be repartitioned into multiple
+files when two large tables are being joined. After this partition buffer fills
+up, the repartitioned data is flushed into files on disk. This configuration
+entry can be set at run-time and is effective on the workers.
+
+#### Explain output
+
+##### citus.explain\_all\_tasks (boolean)
+
+By default, Azure Cosmos DB for PostgreSQL shows the output of a single, arbitrary task
+when running
+[EXPLAIN](http://www.postgresql.org/docs/current/static/sql-explain.html) on a
+distributed query. In most cases, the explain output will be similar across
+tasks. Occasionally, some of the tasks will be planned differently or have much
+higher execution times. In those cases, it can be useful to enable this
+parameter, after which the EXPLAIN output will include all tasks. Explaining
+all tasks may cause the EXPLAIN to take longer.
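+
+A sketch of enabling per-task plans for one EXPLAIN, reusing the distributed table from the earlier logging example:
+
+```postgresql
+SET citus.explain_all_tasks TO on;
+
+EXPLAIN SELECT count(*) FROM github_users;
+```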
+
+##### citus.explain_analyze_sort_method (enum)
+
+Determines the sort method of the tasks in the output of EXPLAIN ANALYZE. The
+default value of `citus.explain_analyze_sort_method` is `execution-time`.
+
+The supported values are:
+
+* **execution-time:** sort by execution time.
+* **taskId:** sort by task ID.
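+
+For example, sorting by task ID can make successive runs easier to compare (reusing the distributed table from the earlier logging example):
+
+```postgresql
+SET citus.explain_analyze_sort_method TO 'taskId';
+
+EXPLAIN ANALYZE SELECT count(*) FROM github_users;
+```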
+
+## PostgreSQL parameters
+
+* [DateStyle](https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-DATETIME-OUTPUT) - Sets the display format for date and time values
+* [IntervalStyle](https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-OUTPUT) - Sets the display format for interval values
+* [TimeZone](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-TIMEZONE) - Sets the time zone for displaying and interpreting time stamps
+* [application_name](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-APPLICATION-NAME) - Sets the application name to be reported in statistics and logs
+* [array_nulls](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-ARRAY-NULLS) - Enables input of NULL elements in arrays
+* [autovacuum](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM) - Starts the autovacuum subprocess
+* [autovacuum_analyze_scale_factor](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-ANALYZE-SCALE-FACTOR) - Number of tuple inserts, updates, or deletes prior to analyze as a fraction of reltuples
+* [autovacuum_analyze_threshold](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-ANALYZE-THRESHOLD) - Minimum number of tuple inserts, updates, or deletes prior to analyze
+* [autovacuum_naptime](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-NAPTIME) - Time to sleep between autovacuum runs
+* [autovacuum_vacuum_cost_delay](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-COST-DELAY) - Vacuum cost delay in milliseconds, for autovacuum
+* [autovacuum_vacuum_cost_limit](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-COST-LIMIT) - Vacuum cost amount available before napping, for autovacuum
+* [autovacuum_vacuum_scale_factor](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-SCALE-FACTOR) - Number of tuple updates or deletes prior to vacuum as a fraction of reltuples
+* [autovacuum_vacuum_threshold](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-THRESHOLD) - Minimum number of tuple updates or deletes prior to vacuum
+* [autovacuum_work_mem](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-AUTOVACUUM-WORK-MEM) - Sets the maximum memory to be used by each autovacuum worker process
+* [backend_flush_after](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-BACKEND-FLUSH-AFTER) - Number of pages after which previously performed writes are flushed to disk
+* [backslash_quote](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-BACKSLASH-QUOTE) - Sets whether "\'" is allowed in string literals
+* [bgwriter_delay](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-BGWRITER-DELAY) - Background writer sleep time between rounds
+* [bgwriter_flush_after](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-BGWRITER-FLUSH-AFTER) - Number of pages after which previously performed writes are flushed to disk
+* [bgwriter_lru_maxpages](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-BGWRITER-LRU-MAXPAGES) - Background writer maximum number of LRU pages to flush per round
+* [bgwriter_lru_multiplier](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-BGWRITER-LRU-MULTIPLIER) - Multiple of the average buffer usage to free per round
+* [bytea_output](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-BYTEA-OUTPUT) - Sets the output format for bytea
+* [check_function_bodies](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-CHECK-FUNCTION-BODIES) - Checks function bodies during CREATE FUNCTION
+* [checkpoint_completion_target](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-CHECKPOINT-COMPLETION-TARGET) - Time spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval
+* [checkpoint_timeout](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-CHECKPOINT-TIMEOUT) - Sets the maximum time between automatic WAL checkpoints
+* [checkpoint_warning](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-CHECKPOINT-WARNING) - Enables warnings if checkpoint segments are filled more frequently than this
+* [client_encoding](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-CLIENT-ENCODING) - Sets the client's character set encoding
+* [client_min_messages](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-CLIENT-MIN-MESSAGES) - Sets the message levels that are sent to the client
+* [commit_delay](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-COMMIT-DELAY) - Sets the delay in microseconds between transaction commit and flushing WAL to disk
+* [commit_siblings](https://www.postgresql.org/docs/12/runtime-config-wal.html#GUC-COMMIT-SIBLINGS) - Sets the minimum concurrent open transactions before performing commit_delay
+* [constraint_exclusion](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CONSTRAINT-EXCLUSION) - Enables the planner to use constraints to optimize queries
+* [cpu_index_tuple_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CPU-INDEX-TUPLE-COST) - Sets the planner's estimate of the cost of processing each index entry during an index scan
+* [cpu_operator_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CPU-OPERATOR-COST) - Sets the planner's estimate of the cost of processing each operator or function call
+* [cpu_tuple_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CPU-TUPLE-COST) - Sets the planner's estimate of the cost of processing each tuple (row)
+* [cursor_tuple_fraction](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CURSOR-TUPLE-FRACTION) - Sets the planner's estimate of the fraction of a cursor's rows that will be retrieved
+* [deadlock_timeout](https://www.postgresql.org/docs/current/runtime-config-locks.html#GUC-DEADLOCK-TIMEOUT) - Sets the amount of time, in milliseconds, to wait on a lock before checking for deadlock
+* [debug_pretty_print](https://www.postgresql.org/docs/current/runtime-config-logging.html#id-1.6.6.11.5.2.3.1.3) - Indents parse and plan tree displays
+* [debug_print_parse](https://www.postgresql.org/docs/current/runtime-config-logging.html#id-1.6.6.11.5.2.2.1.3) - Logs each query's parse tree
+* [debug_print_plan](https://www.postgresql.org/docs/current/runtime-config-logging.html#id-1.6.6.11.5.2.2.1.3) - Logs each query's execution plan
+* [debug_print_rewritten](https://www.postgresql.org/docs/current/runtime-config-logging.html#id-1.6.6.11.5.2.2.1.3) - Logs each query's rewritten parse tree
+* [default_statistics_target](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET) - Sets the default statistics target
+* [default_tablespace](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TABLESPACE) - Sets the default tablespace to create tables and indexes in
+* [default_text_search_config](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TEXT-SEARCH-CONFIG) - Sets default text search configuration
+* [default_transaction_deferrable](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TRANSACTION-DEFERRABLE) - Sets the default deferrable status of new transactions
+* [default_transaction_isolation](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TRANSACTION-ISOLATION) - Sets the transaction isolation level of each new transaction
+* [default_transaction_read_only](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TRANSACTION-READ-ONLY) - Sets the default read-only status of new transactions
+* default_with_oids - Creates new tables with OIDs by default
+* [effective_cache_size](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-EFFECTIVE-CACHE-SIZE) - Sets the planner's assumption about the size of the disk cache
+* [enable_bitmapscan](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-BITMAPSCAN) - Enables the planner's use of bitmap-scan plans
+* [enable_gathermerge](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-GATHERMERGE) - Enables the planner's use of gather merge plans
+* [enable_hashagg](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-HASHAGG) - Enables the planner's use of hashed aggregation plans
+* [enable_hashjoin](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-HASHJOIN) - Enables the planner's use of hash join plans
+* [enable_indexonlyscan](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-INDEXONLYSCAN) - Enables the planner's use of index-only-scan plans
+* [enable_indexscan](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-INDEXSCAN) - Enables the planner's use of index-scan plans
+* [enable_material](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-MATERIAL) - Enables the planner's use of materialization
+* [enable_mergejoin](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-MERGEJOIN) - Enables the planner's use of merge join plans
+* [enable_nestloop](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-NESTLOOP) - Enables the planner's use of nested loop join plans
+* [enable_seqscan](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-SEQSCAN) - Enables the planner's use of sequential-scan plans
+* [enable_sort](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-SORT) - Enables the planner's use of explicit sort steps
+* [enable_tidscan](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-TIDSCAN) - Enables the planner's use of TID scan plans
+* [escape_string_warning](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-ESCAPE-STRING-WARNING) - Warns about backslash escapes in ordinary string literals
+* [exit_on_error](https://www.postgresql.org/docs/current/runtime-config-error-handling.html#GUC-EXIT-ON-ERROR) - Terminates session on any error
+* [extra_float_digits](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-EXTRA-FLOAT-DIGITS) - Sets the number of digits displayed for floating-point values
+* [force_parallel_mode](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-FORCE-PARALLEL-MODE) - Forces use of parallel query facilities
+* [from_collapse_limit](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-FROM-COLLAPSE-LIMIT) - Sets the FROM-list size beyond which subqueries aren't collapsed
+* [geqo](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO) - Enables genetic query optimization
+* [geqo_effort](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-EFFORT) - GEQO: effort is used to set the default for other GEQO parameters
+* [geqo_generations](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-GENERATIONS) - GEQO: number of iterations of the algorithm
+* [geqo_pool_size](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-POOL-SIZE) - GEQO: number of individuals in the population
+* [geqo_seed](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-SEED) - GEQO: seed for random path selection
+* [geqo_selection_bias](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-SELECTION-BIAS) - GEQO: selective pressure within the population
+* [geqo_threshold](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-THRESHOLD) - Sets the threshold of FROM items beyond which GEQO is used
+* [gin_fuzzy_search_limit](https://www.postgresql.org/docs/current/runtime-config-client.html#id-1.6.6.14.5.2.2.1.3) - Sets the maximum allowed result for exact search by GIN
+* [gin_pending_list_limit](https://www.postgresql.org/docs/current/runtime-config-client.html#id-1.6.6.14.2.2.23.1.3) - Sets the maximum size of the pending list for GIN index
+* [idle_in_transaction_session_timeout](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-IDLE-IN-TRANSACTION-SESSION-TIMEOUT) - Sets the maximum allowed duration of any idling transaction
+* [join_collapse_limit](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-JOIN-COLLAPSE-LIMIT) - Sets the FROM-list size beyond which JOIN constructs aren't flattened
+* [lc_monetary](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-LC-MONETARY) - Sets the locale for formatting monetary amounts
+* [lc_numeric](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-LC-NUMERIC) - Sets the locale for formatting numbers
+* [lo_compat_privileges](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-LO-COMPAT-PRIVILEGES) - Enables backward compatibility mode for privilege checks on large objects
+* [lock_timeout](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-LOCK-TIMEOUT) - Sets the maximum allowed duration (in milliseconds) of any wait for a lock. 0 turns this off
+* [log_autovacuum_min_duration](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#) - Sets the minimum execution time above which autovacuum actions will be logged
+* [log_checkpoints](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-CHECKPOINTS) - Logs each checkpoint
+* [log_connections](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-CONNECTIONS) - Logs each successful connection
+* [log_destination](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-DESTINATION) - Sets the destination for server log output
+* [log_disconnections](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-DISCONNECTIONS) - Logs end of a session, including duration
+* [log_duration](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-DURATION) - Logs the duration of each completed SQL statement
+* [log_error_verbosity](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ERROR-VERBOSITY) - Sets the verbosity of logged messages
+* [log_lock_waits](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-LOCK-WAITS) - Logs long lock waits
+* [log_min_duration_statement](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-MIN-DURATION-STATEMENT) - Sets the minimum execution time (in milliseconds) above which statements will be logged. -1 disables logging statement durations
+* [log_min_error_statement](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-MIN-ERROR-STATEMENT) - Causes all statements generating error at or above this level to be logged
+* [log_min_messages](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-MIN-MESSAGES) - Sets the message levels that are logged
+* [log_replication_commands](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-REPLICATION-COMMANDS) - Logs each replication command
+* [log_statement](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-STATEMENT) - Sets the type of statements logged
+* [log_statement_stats](https://www.postgresql.org/docs/current/runtime-config-statistics.html#id-1.6.6.12.3.2.1.1.3) - For each query, writes cumulative performance statistics to the server log
+* [log_temp_files](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-TEMP-FILES) - Logs the use of temporary files larger than this number of kilobytes
+* [maintenance_work_mem](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM) - Sets the maximum memory to be used for maintenance operations
+* [max_parallel_workers](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS) - Sets the maximum number of parallel workers that can be active at one time
+* [max_parallel_workers_per_gather](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS-PER-GATHER) - Sets the maximum number of parallel processes per executor node
+* [max_pred_locks_per_page](https://www.postgresql.org/docs/current/runtime-config-locks.html#GUC-MAX-PRED-LOCKS-PER-PAGE) - Sets the maximum number of predicate-locked tuples per page
+* [max_pred_locks_per_relation](https://www.postgresql.org/docs/current/runtime-config-locks.html#GUC-MAX-PRED-LOCKS-PER-RELATION) - Sets the maximum number of predicate-locked pages and tuples per relation
+* [max_standby_archive_delay](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-STANDBY-ARCHIVE-DELAY) - Sets the maximum delay before canceling queries when a hot standby server is processing archived WAL data
+* [max_standby_streaming_delay](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-STANDBY-STREAMING-DELAY) - Sets the maximum delay before canceling queries when a hot standby server is processing streamed WAL data
+* [max_sync_workers_per_subscription](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-SYNC-WORKERS-PER-SUBSCRIPTION) - Maximum number of table synchronization workers per subscription
+* [max_wal_size](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-MAX-WAL-SIZE) - Sets the WAL size that triggers a checkpoint
+* [min_parallel_index_scan_size](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-MIN-PARALLEL-INDEX-SCAN-SIZE) - Sets the minimum amount of index data for a parallel scan
+* [min_wal_size](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-MIN-WAL-SIZE) - Sets the minimum size to shrink the WAL to
+* [operator_precedence_warning](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-OPERATOR-PRECEDENCE-WARNING) - Emits a warning for constructs that changed meaning since PostgreSQL 9.4
+* [parallel_setup_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-PARALLEL-SETUP-COST) - Sets the planner's estimate of the cost of starting up worker processes for parallel query
+* [parallel_tuple_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-PARALLEL-TUPLE-COST) - Sets the planner's estimate of the cost of passing each tuple (row) from worker to master backend
+* [pg_stat_statements.save](https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.38.8) - Saves pg_stat_statements statistics across server shutdowns
+* [pg_stat_statements.track](https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.38.8) - Selects which statements are tracked by pg_stat_statements
+* [pg_stat_statements.track_utility](https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.38.8) - Selects whether utility commands are tracked by pg_stat_statements
+* [quote_all_identifiers](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-QUOTE-ALL-IDENTIFIERS) - When generating SQL fragments, quotes all identifiers
+* [random_page_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-RANDOM-PAGE-COST) - Sets the planner's estimate of the cost of a nonsequentially fetched disk page
+* [row_security](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-ROW-SECURITY) - Enables row security
+* [search_path](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SEARCH-PATH) - Sets the schema search order for names that aren't schema-qualified
+* [seq_page_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-SEQ-PAGE-COST) - Sets the planner's estimate of the cost of a sequentially fetched disk page
+* [session_replication_role](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SESSION-REPLICATION-ROLE) - Sets the session's behavior for triggers and rewrite rules
+* [standard_conforming_strings](https://www.postgresql.org/docs/current/runtime-config-compatible.html#id-1.6.6.16.2.2.7.1.3) - Causes '...' strings to treat backslashes literally
+* [statement_timeout](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-STATEMENT-TIMEOUT) - Sets the maximum allowed duration (in milliseconds) of any statement. 0 turns this off
+* [synchronize_seqscans](https://www.postgresql.org/docs/current/runtime-config-compatible.html#id-1.6.6.16.2.2.8.1.3) - Enables synchronized sequential scans
+* [synchronous_commit](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT) - Sets the current transaction's synchronization level
+* [tcp_keepalives_count](https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-TCP-KEEPALIVES-COUNT) - Maximum number of TCP keepalive retransmits
+* [tcp_keepalives_idle](https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-TCP-KEEPALIVES-IDLE) - Time between issuing TCP keepalives
+* [tcp_keepalives_interval](https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-TCP-KEEPALIVES-INTERVAL) - Time between TCP keepalive retransmits
+* [temp_buffers](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-TEMP-BUFFERS) - Sets the maximum number of temporary buffers used by each database session
+* [temp_tablespaces](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-TEMP-TABLESPACES) - Sets the tablespace(s) to use for temporary tables and sort files
+* [track_activities](https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-ACTIVITIES) - Collects information about executing commands
+* [track_counts](https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-COUNTS) - Collects statistics on database activity
+* [track_functions](https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-FUNCTIONS) - Collects function-level statistics on database activity
+* [track_io_timing](https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-IO-TIMING) - Collects timing statistics for database I/O activity
+* [transform_null_equals](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-TRANSFORM-NULL-EQUALS) - Treats "expr=NULL" as "expr IS NULL"
+* [vacuum_cost_delay](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-DELAY) - Vacuum cost delay in milliseconds
+* [vacuum_cost_limit](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-LIMIT) - Vacuum cost amount available before napping
+* [vacuum_cost_page_dirty](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-PAGE-DIRTY) - Vacuum cost for a page dirtied by vacuum
+* [vacuum_cost_page_hit](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-PAGE-HIT) - Vacuum cost for a page found in the buffer cache
+* [vacuum_cost_page_miss](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-PAGE-MISS) - Vacuum cost for a page not found in the buffer cache
+* [vacuum_defer_cleanup_age](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-VACUUM-DEFER-CLEANUP-AGE) - Number of transactions by which VACUUM and HOT cleanup should be deferred, if any
+* [vacuum_freeze_min_age](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-VACUUM-FREEZE-MIN-AGE) - Minimum age at which VACUUM should freeze a table row
+* [vacuum_freeze_table_age](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-VACUUM-FREEZE-TABLE-AGE) - Age at which VACUUM should scan whole table to freeze tuples
+* [vacuum_multixact_freeze_min_age](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-VACUUM-MULTIXACT-FREEZE-MIN-AGE) - Minimum age at which VACUUM should freeze a MultiXactId in a table row
+* [vacuum_multixact_freeze_table_age](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-VACUUM-MULTIXACT-FREEZE-TABLE-AGE) - Multixact age at which VACUUM should scan whole table to freeze tuples
+* [wal_receiver_status_interval](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-WAL-RECEIVER-STATUS-INTERVAL) - Sets the maximum interval between WAL receiver status reports to the primary
+* [wal_writer_delay](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-WAL-WRITER-DELAY) - Time between WAL flushes performed in the WAL writer
+* [wal_writer_flush_after](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-WAL-WRITER-FLUSH-AFTER) - Amount of WAL written out by WAL writer that triggers a flush
+* [work_mem](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-WORK-MEM) - Sets the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files
+* [xmlbinary](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-XMLBINARY) - Sets how binary values are to be encoded in XML
+* [xmloption](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-XMLOPTION) - Sets whether XML data in implicit parsing and serialization operations is to be considered as documents or content fragments
+
+## Next steps
+
+* Another form of configuration, besides server parameters, is the resource [configuration options](resources-compute.md) in a cluster.
+* The underlying PostgreSQL database also has [configuration parameters](http://www.postgresql.org/docs/current/static/runtime-config.html).
cosmos-db Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-versions.md
+
+ Title: Supported versions – Azure Cosmos DB for PostgreSQL
+description: PostgreSQL versions available in Azure Cosmos DB for PostgreSQL
++++++ Last updated : 09/28/2022++
+# Supported database versions in Azure Cosmos DB for PostgreSQL
++
+## PostgreSQL versions
+
+The version of PostgreSQL running in a cluster is
+customizable during creation. Azure Cosmos DB for PostgreSQL currently supports the
+following major [PostgreSQL
+versions](https://www.postgresql.org/docs/release/):
+
+### PostgreSQL version 14
+
+The current minor release is 14.4. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/14/release-14-1.html) to
+learn more about improvements and fixes in this minor release.
+
+### PostgreSQL version 13
+
+The current minor release is 13.7. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/13/release-13-5.html) to
+learn more about improvements and fixes in this minor release.
+
+### PostgreSQL version 12
+
+The current minor release is 12.11. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/12/release-12-9.html) to
+learn more about improvements and fixes in this minor release.
+
+### PostgreSQL version 11
+
+The current minor release is 11.16. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/11/release-11-14.html) to
+learn more about improvements and fixes in this minor release.
+
+### PostgreSQL version 10 and older
+
+We don't support PostgreSQL version 10 and older for Azure Cosmos DB for PostgreSQL.
+
+## PostgreSQL version syntax
+
+Before PostgreSQL version 10, the [PostgreSQL versioning
+policy](https://www.postgresql.org/support/versioning/) considered a _major
+version_ upgrade to be an increase in the first _or_ second number. For
+example, 9.5 to 9.6 was considered a _major_ version upgrade. As of version 10,
+only a change in the first number is considered a major version upgrade. For
+example, 10.0 to 10.1 is a _minor_ release upgrade. Version 10 to 11 is a
+_major_ version upgrade.
+
+## PostgreSQL version support and retirement
+
+Each major version of PostgreSQL will be supported by Azure Cosmos DB for
+PostgreSQL from the date on which Azure begins supporting the version until the
+version is retired by the PostgreSQL community. Refer to [PostgreSQL community
+versioning policy](https://www.postgresql.org/support/versioning/).
+
+Azure Cosmos DB for PostgreSQL automatically performs minor version upgrades to
+the Azure preferred PostgreSQL version as part of periodic maintenance.
+
+### Major version retirement policy
+
+The table below provides the retirement details for PostgreSQL major versions.
+The dates follow the [PostgreSQL community versioning
+policy](https://www.postgresql.org/support/versioning/).
+
+| Version | What's New | Azure support start date | Retirement date (Azure)|
+| - | - | - | - |
+| [PostgreSQL 9.5 (retired)](https://www.postgresql.org/about/news/postgresql-132-126-1111-1016-9621-and-9525-released-2165/)| [Features](https://www.postgresql.org/docs/9.5/release-9-5.html) | April 18, 2018 | February 11, 2021
+| [PostgreSQL 9.6 (retired)](https://www.postgresql.org/about/news/postgresql-96-released-1703/) | [Features](https://wiki.postgresql.org/wiki/NewIn96) | April 18, 2018 | November 11, 2021
+| [PostgreSQL 10](https://www.postgresql.org/about/news/postgresql-10-released-1786/) | [Features](https://wiki.postgresql.org/wiki/New_in_postgres_10) | June 4, 2018 | November 10, 2022
+| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | July 24, 2019 | November 9, 2024 [Single Server, Flexible Server] |
+| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Sept 22, 2020 | November 14, 2024
+| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | May 25, 2021 | November 13, 2025
+| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | June 29, 2022 (Flexible Server)| November 12, 2026
+
+### Retired PostgreSQL engine versions not supported in Azure Cosmos DB for PostgreSQL
+
+You may continue to run the retired version in Azure Cosmos DB for PostgreSQL.
+However, note the following restrictions after the retirement date for each
+PostgreSQL database version:
+
+- As the community will not be releasing any further bug fixes or security fixes, Azure Cosmos DB for PostgreSQL will not patch the retired database engine for any bugs or security issues, or otherwise take security measures with regard to the retired database engine. You may experience security vulnerabilities or other issues as a result. However, Azure will continue to perform periodic maintenance and patching for the host, OS, containers, and any other service-related components.
+- If a support issue relates to the PostgreSQL engine itself, we may not be able to provide support, because the community no longer provides patches. In such cases, you will have to upgrade your database to one of the supported versions.
+- You will not be able to create new database servers for the retired version. However, you will be able to perform point-in-time recoveries and create read replicas for your existing servers.
+- New service capabilities developed by Azure Cosmos DB for PostgreSQL may only be available to supported database server versions.
+- Uptime SLAs will apply solely to Azure Cosmos DB for PostgreSQL service-related issues and not to any downtime caused by database engine-related bugs.
+- In the extreme event of a serious threat to the service caused by a PostgreSQL database engine vulnerability in the retired version, Azure may stop your database server to secure the service. In that case, you will be notified to upgrade the server before bringing it back online.
+
+## Citus and other extension versions
+
+Depending on which version of PostgreSQL is running in a cluster,
+different [versions of PostgreSQL extensions](reference-extensions.md)
+will be installed as well. In particular, PostgreSQL 14 comes with Citus 11, PostgreSQL versions 12 and 13 come with
+Citus 10, and earlier PostgreSQL versions come with Citus 9.5.
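+
+To confirm which Citus version a given cluster is running, you can query it
+from psql. A minimal check, using the standard extension catalog and the
+`citus_version()` function:
+
+```sql
+-- version of the Citus extension installed in the current database
+SELECT extversion FROM pg_extension WHERE extname = 'citus';
+
+-- full Citus build string
+SELECT citus_version();
+```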
+
+## Next steps
+
+* See which [extensions](reference-extensions.md) are installed in
+ which versions.
+* Learn to [create a cluster](quickstart-create-portal.md).
cosmos-db Resources Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/resources-compute.md
+
 Title: Compute and storage – Azure Cosmos DB for PostgreSQL
+description: Options for a cluster, including node compute and storage
++++++ Last updated : 07/08/2022++
+# Azure Cosmos DB for PostgreSQL compute and storage
+
+
+You can select the compute and storage settings independently for
+worker nodes and the coordinator node in a cluster.
+Compute resources are provided as vCores, which represent
+the logical CPU of the underlying hardware. The storage size for
+provisioning refers to the capacity available to the coordinator
+and worker nodes in your cluster. The storage
+includes database files, temporary files, transaction logs, and
+the Postgres server logs.
+
+| Resource | Worker node | Coordinator node |
+|--|--|--|
+| Compute, vCores | 4, 8, 16, 32, 64, 96, 104 | 4, 8, 16, 32, 64, 96 |
+| Memory per vCore, GiB | 8 | 4 |
+| Storage size, TiB | 0.5, 1, 2 | 0.5, 1, 2 |
+| Storage type | General purpose (SSD) | General purpose (SSD) |
+| IOPS | Up to 3 IOPS/GiB | Up to 3 IOPS/GiB |
+
+The total amount of RAM in a single node is based on the
+selected number of vCores.
+
+| vCores | One worker node, GiB RAM | Coordinator node, GiB RAM |
+|--|--|--|
+| 4 | 32 | 16 |
+| 8 | 64 | 32 |
+| 16 | 128 | 64 |
+| 32 | 256 | 128 |
+| 64 | 432 or 512 | 256 |
+| 96 | 672 | 384 |
+| 104 | 672 | n/a |
+
+The total amount of storage you provision also defines the I/O capacity
+available to each worker and coordinator node.
+
+| Storage size, TiB | Maximum IOPS |
+|-|--|
+| 0.5 | 1,536 |
+| 1 | 3,072 |
+| 2 | 6,148 |
+
+For the entire cluster, the aggregated IOPS work out to the
+following values:
+
+| Worker nodes | 0.5 TiB, total IOPS | 1 TiB, total IOPS | 2 TiB, total IOPS |
+|--|--|-|-|
+| 2 | 3,072 | 6,144 | 12,296 |
+| 3 | 4,608 | 9,216 | 18,444 |
+| 4 | 6,144 | 12,288 | 24,592 |
+| 5 | 7,680 | 15,360 | 30,740 |
+| 6 | 9,216 | 18,432 | 36,888 |
+| 7 | 10,752 | 21,504 | 43,036 |
+| 8 | 12,288 | 24,576 | 49,184 |
+| 9 | 13,824 | 27,648 | 55,332 |
+| 10 | 15,360 | 30,720 | 61,480 |
+| 11 | 16,896 | 33,792 | 67,628 |
+| 12 | 18,432 | 36,864 | 73,776 |
+| 13 | 19,968 | 39,936 | 79,924 |
+| 14 | 21,504 | 43,008 | 86,072 |
+| 15 | 23,040 | 46,080 | 92,220 |
+| 16 | 24,576 | 49,152 | 98,368 |
+| 17 | 26,112 | 52,224 | 104,516 |
+| 18 | 27,648 | 55,296 | 110,664 |
+| 19 | 29,184 | 58,368 | 116,812 |
+| 20 | 30,720 | 61,440 | 122,960 |
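+
+Put differently, the aggregated figure is the per-node maximum multiplied by
+the number of worker nodes. For example, two worker nodes with 1 TiB of
+storage each provide 2 x 3,072 = 6,144 total IOPS, matching the first row of
+the table.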
+
+**Next steps**
+
+* Learn how to [create a cluster in the portal](quickstart-create-portal.md)
+* Change [compute quotas](howto-compute-quota.md) for a subscription and region
cosmos-db Resources Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/resources-pricing.md
+
 Title: Pricing – Azure Cosmos DB for PostgreSQL
+description: Pricing and how to save with Azure Cosmos DB for PostgreSQL
++++++ Last updated : 09/26/2022++
+# Pricing for Azure Cosmos DB for PostgreSQL
++
+For the most up-to-date general pricing information, see the service
+[pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/).
+To see the cost for the configuration you want, the
+[Azure portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer)
+shows the monthly cost on the **Configure** tab based on the options you
+select. If you don't have an Azure subscription, you can use the Azure pricing
+calculator to get an estimated price. On the
+[Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/)
+website, select **Add items**, expand the **Databases** category, and choose
+**Azure Cosmos DB for PostgreSQL** to customize the
+options.
+
+## Prepay for compute resources with reserved capacity
+
+Azure Cosmos DB for PostgreSQL helps you save money by letting you prepay for compute resources at a discount compared to pay-as-you-go prices. With Azure Cosmos DB for PostgreSQL reserved capacity, you make an upfront commitment on clusters for a one- or three-year period to get a significant discount on the compute costs. To purchase Azure Cosmos DB for PostgreSQL reserved capacity, you need to specify the Azure region, reservation term, and billing frequency.
+
+You don't need to assign the reservation to specific clusters. An already running cluster or ones that are newly deployed automatically get the benefit of reserved pricing. By purchasing a reservation, you're prepaying for the compute costs for one year or three years. As soon as you buy a reservation, the Azure Cosmos DB for PostgreSQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates.
+
+A reservation doesn't cover software, networking, or storage charges associated with the clusters. At the end of the reservation term, the billing benefit expires, and the clusters are billed at the pay-as-you go price. Reservations don't autorenew. For pricing information, see the [Azure Cosmos DB for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/cosmos-db/).
+
+You can buy Azure Cosmos DB for PostgreSQL reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity:
+
+* You must be in the owner role for at least one Enterprise Agreement (EA) or individual subscription with pay-as-you-go rates.
+* For Enterprise Agreement subscriptions, **Add Reserved Instances** must be enabled in the [EA Portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an Enterprise Agreement admin on the subscription.
+* For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Azure Cosmos DB for PostgreSQL reserved capacity.
+
+For information on how Enterprise Agreement customers and pay-as-you-go customers are charged for reservation purchases, see:
+- [Understand Azure reservation usage for your Enterprise Agreement enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
+- [Understand Azure reservation usage for your pay-as-you-go subscription](../../cost-management-billing/reservations/understand-reserved-instance-usage.md)
+
+### Determine the right cluster size before purchase
+
+The size of the reservation is based on the total amount of compute used by the existing or soon-to-be-deployed coordinator and worker nodes in clusters within a specific region.
+
+For example, let's suppose you're running one cluster with a 16 vCore coordinator and three 8 vCore worker nodes. Further, let's assume you plan to deploy, within the next month, an additional cluster with a 32 vCore coordinator and two 4 vCore worker nodes. Let's also suppose you need these resources for at least one year.
+
+In this case, purchase a one-year reservation for:
+
+* Total 16 vCores + 32 vCores = 48 vCores for coordinator nodes
+* Total 3 nodes x 8 vCores + 2 nodes x 4 vCores = 24 + 8 = 32 vCores for worker nodes
+
+### Buy Azure Cosmos DB for PostgreSQL reserved capacity
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Select **All services** > **Reservations**.
+1. Select **Add**. In the **Purchase reservations** pane, select **Azure Cosmos DB for PostgreSQL** to purchase a new reservation for your PostgreSQL databases.
+1. Select the **Azure Cosmos DB for PostgreSQL Compute** type to purchase, and click **Select**.
+1. Review the quantity for the selected compute type on the **Products** tab.
+1. Continue to the **Buy + Review** tab to finish your purchase.
+
+The following table describes required fields.
+
+| Field | Description |
+|--|--|
+| Subscription | The subscription used to pay for the Azure Cosmos DB for PostgreSQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Cosmos DB for PostgreSQL reserved capacity reservation. The subscription type must be an Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an Enterprise Agreement subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription. |
+| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select **Shared**, the vCore reservation discount is applied to clusters running in any subscriptions within your billing context. For Enterprise Agreement customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For pay-as-you-go customers, the shared scope is all pay-as-you-go subscriptions created by the account administrator. If you select **Management group**, the reservation discount is applied to clusters running in any subscriptions that are a part of both the management group and billing scope. If you select **Single subscription**, the vCore reservation discount is applied to clusters in this subscription. If you select **Single resource group**, the reservation discount is applied to clusters in the selected subscription and the selected resource group within that subscription. |
+| Region | The Azure region that's covered by the Azure Cosmos DB for PostgreSQL reserved capacity reservation. |
+| Term | One year or three years. |
+| Quantity | The amount of compute resources being purchased within the Azure Cosmos DB for PostgreSQL reserved capacity reservation. In particular, the number of coordinator or worker node vCores in the selected Azure region that are being reserved and which will get the billing discount. For example, if you're running (or plan to run) clusters with the total compute capacity of 64 coordinator node vCores and 32 worker node vCores in the East US region, specify the quantity as 64 and 32 for coordinator and worker nodes, respectively, to maximize the benefit for all servers. |
+++
+### Cancel, exchange, or refund reservations
+
+You can cancel, exchange, or refund reservations with certain limitations. For more information, see [Self-service exchanges and refunds for Azure reservations](../../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
+
+### vCore size flexibility
+
+vCore size flexibility helps you scale up or down coordinator and worker nodes within a region, without losing the reserved capacity benefit.
+
+### Need help? Contact us
+
+If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Next steps
+
+The vCore reservation discount is applied automatically to the number of clusters that match the Azure Cosmos DB for PostgreSQL reserved capacity reservation scope and attributes. You can update the scope of the Azure Cosmos DB for PostgreSQL reserved capacity reservation through the Azure portal, PowerShell, the Azure CLI, or the API.
+
+To learn more about Azure reservations, see the following articles:
+
+* [What are Azure reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md)
+* [Manage Azure reservations](../../cost-management-billing/reservations/manage-reserved-vm-instance.md)
+* [Understand reservation usage for your Enterprise Agreement enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
cosmos-db Resources Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/resources-regions.md
+
 Title: Regional availability – Azure Cosmos DB for PostgreSQL
+description: Where you can run a cluster
++++++ Last updated : 06/21/2022++
+# Regional availability for Azure Cosmos DB for PostgreSQL
++
+Clusters are available in the following Azure regions:
+
+* Americas:
+ * Brazil South
+ * Canada Central
+ * Central US
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West Central US
+ * West US
+ * West US 2
+ * West US 3
+* Asia Pacific:
+ * Australia East
+ * Central India
+ * East Asia
+ * Japan East
+ * Japan West
+ * Korea Central
+ * Southeast Asia
+* Europe:
+ * France Central
+ * Germany West Central
+ * North Europe
+ * Switzerland North
+ * UK South
+ * West Europe
+
+Some of these regions may not be initially activated on all Azure
+subscriptions. If you want to use a region from the list above and don't see it
+in your subscription, or if you want to use a region not on this list, open a
+[support
+request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+**Next steps**
+
+Learn how to [create a cluster in the portal](quickstart-create-portal.md).
cosmos-db Tutorial Design Database Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/tutorial-design-database-multi-tenant.md
+
+ Title: Multi-tenant database - Azure Cosmos DB for PostgreSQL
+description: Learn how to design a scalable multi-tenant application with Azure Cosmos DB for PostgreSQL.
+++++
+ms.devlang: azurecli
+ Last updated : 06/29/2022
+#Customer intent: As a developer, I want to design an Azure Cosmos DB for PostgreSQL database so that my multi-tenant application runs efficiently for all tenants.
++
+# Design a multi-tenant database using Azure Cosmos DB for PostgreSQL
++
+In this tutorial, you use Azure Cosmos DB for PostgreSQL to learn how to:
+
+> [!div class="checklist"]
+> * Create a cluster
+> * Use psql utility to create a schema
+> * Shard tables across nodes
+> * Ingest sample data
+> * Query tenant data
+> * Share data between tenants
+> * Customize the schema per-tenant
+
+## Prerequisites
++
+## Use psql utility to create a schema
+
+Once connected to Azure Cosmos DB for PostgreSQL using psql, you can complete some basic tasks. This tutorial walks you through creating a web app that allows advertisers to track their campaigns.
+
+Multiple companies can use the app, so let's create a table to hold companies and another for their campaigns. In the psql console, run these commands:
+
+```sql
+CREATE TABLE companies (
+ id bigserial PRIMARY KEY,
+ name text NOT NULL,
+ image_url text,
+ created_at timestamp without time zone NOT NULL,
+ updated_at timestamp without time zone NOT NULL
+);
+
+CREATE TABLE campaigns (
+ id bigserial,
+ company_id bigint REFERENCES companies (id),
+ name text NOT NULL,
+ cost_model text NOT NULL,
+ state text NOT NULL,
+ monthly_budget bigint,
+ blocked_site_urls text[],
+ created_at timestamp without time zone NOT NULL,
+ updated_at timestamp without time zone NOT NULL,
+
+ PRIMARY KEY (company_id, id)
+);
+```
+
+Each campaign will pay to run ads. Add a table for ads too, by running the following code in psql after the code above:
+
+```sql
+CREATE TABLE ads (
+ id bigserial,
+ company_id bigint,
+ campaign_id bigint,
+ name text NOT NULL,
+ image_url text,
+ target_url text,
+ impressions_count bigint DEFAULT 0,
+ clicks_count bigint DEFAULT 0,
+ created_at timestamp without time zone NOT NULL,
+ updated_at timestamp without time zone NOT NULL,
+
+ PRIMARY KEY (company_id, id),
+ FOREIGN KEY (company_id, campaign_id)
+ REFERENCES campaigns (company_id, id)
+);
+```
+
+Finally, we'll track statistics about clicks and impressions for each ad:
+
+```sql
+CREATE TABLE clicks (
+ id bigserial,
+ company_id bigint,
+ ad_id bigint,
+ clicked_at timestamp without time zone NOT NULL,
+ site_url text NOT NULL,
+ cost_per_click_usd numeric(20,10),
+ user_ip inet NOT NULL,
+ user_data jsonb NOT NULL,
+
+ PRIMARY KEY (company_id, id),
+ FOREIGN KEY (company_id, ad_id)
+ REFERENCES ads (company_id, id)
+);
+
+CREATE TABLE impressions (
+ id bigserial,
+ company_id bigint,
+ ad_id bigint,
+ seen_at timestamp without time zone NOT NULL,
+ site_url text NOT NULL,
+ cost_per_impression_usd numeric(20,10),
+ user_ip inet NOT NULL,
+ user_data jsonb NOT NULL,
+
+ PRIMARY KEY (company_id, id),
+ FOREIGN KEY (company_id, ad_id)
+ REFERENCES ads (company_id, id)
+);
+```
+
+You can now see the newly created tables in psql by running:
+
+```postgres
+\dt
+```
+
+Multi-tenant applications can enforce uniqueness only per tenant,
+which is why all primary and foreign keys include the company ID.
+
+## Shard tables across nodes
+
+An Azure Cosmos DB for PostgreSQL deployment stores table rows on different nodes based on the value of a user-designated column. This "distribution column" marks which tenant owns which rows.
+
+Let's set the distribution column to be company\_id, the tenant
+identifier. In psql, run these functions:
+
+```sql
+SELECT create_distributed_table('companies', 'id');
+SELECT create_distributed_table('campaigns', 'company_id');
+SELECT create_distributed_table('ads', 'company_id');
+SELECT create_distributed_table('clicks', 'company_id');
+SELECT create_distributed_table('impressions', 'company_id');
+```
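+
+If you'd like to confirm how the tables were distributed, a quick check is to
+query the `citus_tables` view (available in Citus 10 and later):
+
+```sql
+SELECT table_name, citus_table_type, distribution_column
+FROM citus_tables;
+```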
++
+## Ingest sample data
+
+Outside of psql now, in the normal command line, download sample data sets:
+
+```bash
+for dataset in companies campaigns ads clicks impressions geo_ips; do
+ curl -O https://examples.citusdata.com/mt_ref_arch/${dataset}.csv
+done
+```
+
+Back inside psql, bulk load the data. Be sure to run psql in the same directory where you downloaded the data files.
+
+```sql
+SET CLIENT_ENCODING TO 'utf8';
+
+\copy companies from 'companies.csv' with csv
+\copy campaigns from 'campaigns.csv' with csv
+\copy ads from 'ads.csv' with csv
+\copy clicks from 'clicks.csv' with csv
+\copy impressions from 'impressions.csv' with csv
+```
+
+This data will now be spread across worker nodes.
+
+## Query tenant data
+
+When the application requests data for a single tenant, the database
+can execute the query on a single worker node. Single-tenant queries
+filter by a single tenant ID. For example, the following query
+filters `company_id = 5` for ads and impressions. Try running it in
+psql to see the results.
+
+```sql
+SELECT a.campaign_id,
+ RANK() OVER (
+ PARTITION BY a.campaign_id
+ ORDER BY a.campaign_id, count(*) desc
+ ), count(*) as n_impressions, a.id
+ FROM ads as a
+ JOIN impressions as i
+ ON i.company_id = a.company_id
+ AND i.ad_id = a.id
+ WHERE a.company_id = 5
+GROUP BY a.campaign_id, a.id
+ORDER BY a.campaign_id, n_impressions desc;
+```
+
+## Share data between tenants
+
+Until now all tables have been distributed by `company_id`. However,
+some data doesn't naturally "belong" to any tenant in particular,
+and can be shared. For instance, all companies in the example ad
+platform might want to get geographical information for their
+audience based on IP addresses.
+
+Create a table to hold shared geographic information. Run the following commands in psql:
+
+```sql
+CREATE TABLE geo_ips (
+ addrs cidr NOT NULL PRIMARY KEY,
+ latlon point NOT NULL
+ CHECK (-90 <= latlon[0] AND latlon[0] <= 90 AND
+ -180 <= latlon[1] AND latlon[1] <= 180)
+);
+CREATE INDEX ON geo_ips USING gist (addrs inet_ops);
+```
+
+Next make `geo_ips` a "reference table" to store a copy of the
+table on every worker node.
+
+```sql
+SELECT create_reference_table('geo_ips');
+```
+
+Load it with example data. Remember to run this command in psql from inside the directory where you downloaded the dataset.
+
+```sql
+\copy geo_ips from 'geo_ips.csv' with csv
+```
+
+Joining the clicks table with geo\_ips is efficient on all nodes.
+Here's a join to find the locations of everyone who clicked on ad
+290. Try running the query in psql.
+
+```sql
+SELECT c.id, clicked_at, latlon
+ FROM geo_ips, clicks c
+ WHERE addrs >> c.user_ip
+ AND c.company_id = 5
+ AND c.ad_id = 290;
+```
+
+## Customize the schema per-tenant
+
+Each tenant may need to store special information not needed by
+others. However, all tenants share a common infrastructure with
+an identical database schema. Where can the extra data go?
+
+One trick is to use an open-ended column type like PostgreSQL's
+JSONB. Our schema has a JSONB field in `clicks` called `user_data`.
+A company (say company five) can use the column to track whether
+the user is on a mobile device.
+
+Here's a query to find who clicks more: mobile or traditional
+visitors.
+
+```sql
+SELECT
+ user_data->>'is_mobile' AS is_mobile,
+ count(*) AS count
+FROM clicks
+WHERE company_id = 5
+GROUP BY user_data->>'is_mobile'
+ORDER BY count DESC;
+```
+
+We can optimize this query for a single company by creating a
+[partial
+index](https://www.postgresql.org/docs/current/static/indexes-partial.html).
+
+```sql
+CREATE INDEX click_user_data_is_mobile
+ON clicks ((user_data->>'is_mobile'))
+WHERE company_id = 5;
+```
+
+More generally, we can create a [GIN
+index](https://www.postgresql.org/docs/current/static/gin-intro.html) on
+every key and value within the column.
+
+```sql
+CREATE INDEX click_user_data
+ON clicks USING gin (user_data);
+
+-- this speeds up queries like, "which clicks have
+-- the is_mobile key present in user_data?"
+
+SELECT id
+ FROM clicks
+ WHERE user_data ? 'is_mobile'
+ AND company_id = 5;
+```
+
+## Clean up resources
+
+In the preceding steps, you created Azure resources in a cluster. If you don't expect to need these resources in the future, delete the cluster. Select the *Delete* button in the *Overview* page for your cluster. When prompted on a pop-up page, confirm the name of the cluster and select the final *Delete* button.
+
+## Next steps
+
+In this tutorial, you learned how to provision a cluster. You connected to it with psql, created a schema, and distributed data. You learned to query data both within and between tenants, and to customize the schema per tenant.
+
+- Learn about cluster [node types](./concepts-nodes.md)
+- Determine the best [initial
+ size](howto-scale-initial.md) for your cluster
cosmos-db Tutorial Design Database Realtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/tutorial-design-database-realtime.md
+
+ Title: 'Tutorial: Design a real-time dashboard - Azure Cosmos DB for PostgreSQL'
+description: This tutorial shows how to parallelize real-time dashboard queries with Azure Cosmos DB for PostgreSQL.
++++++ Last updated : 06/29/2022
+#Customer intent: As a developer, I want to parallelize queries so that I can make a real-time dashboard application.
++
+# Tutorial: Design a real-time analytics dashboard by using Azure Cosmos DB for PostgreSQL
++
+In this tutorial, you use Azure Cosmos DB for PostgreSQL to learn how to:
+
+> [!div class="checklist"]
+> * Create a cluster
+> * Use psql utility to create a schema
+> * Shard tables across nodes
+> * Generate sample data
+> * Perform rollups
+> * Query raw and aggregated data
+> * Expire data
+
+## Prerequisites
++
+## Use psql utility to create a schema
+
+Once connected to Azure Cosmos DB for PostgreSQL using psql, you can complete some basic tasks. This tutorial walks you through ingesting traffic data from web analytics, then rolling up the data to provide real-time dashboards based on that data.
+
+Let's create a table that will consume all of our raw web traffic data. Run the following commands in the psql terminal:
+
+```sql
+CREATE TABLE http_request (
+ site_id INT,
+ ingest_time TIMESTAMPTZ DEFAULT now(),
+
+ url TEXT,
+ request_country TEXT,
+ ip_address TEXT,
+
+ status_code INT,
+ response_time_msec INT
+);
+```
+
+We're also going to create a table that will hold our per-minute aggregates, and a table that maintains the position of our last rollup. Run the following commands in psql as well:
+
+```sql
+CREATE TABLE http_request_1min (
+ site_id INT,
+ ingest_time TIMESTAMPTZ, -- which minute this row represents
+
+ error_count INT,
+ success_count INT,
+ request_count INT,
+ average_response_time_msec INT,
+ CHECK (request_count = error_count + success_count),
+ CHECK (ingest_time = date_trunc('minute', ingest_time))
+);
+
+CREATE INDEX http_request_1min_idx ON http_request_1min (site_id, ingest_time);
+
+CREATE TABLE latest_rollup (
+ minute timestamptz PRIMARY KEY,
+
+ CHECK (minute = date_trunc('minute', minute))
+);
+```
+
+You can now see the newly created tables with this psql command:
+
+```postgres
+\dt
+```
+
+## Shard tables across nodes
+
+An Azure Cosmos DB for PostgreSQL deployment stores table rows on different nodes based on the value of a user-designated column. This "distribution column" marks how data is sharded across nodes.
+
+Let's set the distribution column to be site\_id, the shard
+key. In psql, run these functions:
+
+```sql
+SELECT create_distributed_table('http_request', 'site_id');
+SELECT create_distributed_table('http_request_1min', 'site_id');
+```
++
+## Generate sample data
+
+Now our cluster should be ready to ingest some data. We can run the
+following locally from our `psql` connection to continuously insert data.
+
+```sql
+DO $$
+ BEGIN LOOP
+ INSERT INTO http_request (
+ site_id, ingest_time, url, request_country,
+ ip_address, status_code, response_time_msec
+ ) VALUES (
+ trunc(random()*32), clock_timestamp(),
+ concat('http://example.com/', md5(random()::text)),
+ ('{China,India,USA,Indonesia}'::text[])[ceil(random()*4)],
+ concat(
+ trunc(random()*250 + 2), '.',
+ trunc(random()*250 + 2), '.',
+ trunc(random()*250 + 2), '.',
+ trunc(random()*250 + 2)
+ )::inet,
+ ('{200,404}'::int[])[ceil(random()*2)],
+ 5+trunc(random()*150)
+ );
+ COMMIT;
+ PERFORM pg_sleep(random() * 0.25);
+ END LOOP;
+END $$;
+```
+
+The query inserts approximately eight rows every second. The rows are stored on different worker nodes as directed by the distribution column, `site_id`.
+
+ > [!NOTE]
+ > Leave the data generation query running, and open a second psql
+ > connection for the remaining commands in this tutorial.
+ >
+
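+From the second connection, you can verify that rows are arriving with a quick
+count. The number grows as long as the generator keeps running:
+
+```sql
+SELECT count(*) FROM http_request;
+```
+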
+## Query
+
+Azure Cosmos DB for PostgreSQL allows multiple nodes to process queries in
+parallel for speed. For instance, the database calculates aggregates like SUM
+and COUNT on worker nodes, and combines the results into a final answer.
+
+Here's a query to count web requests per minute along with a few statistics.
+Try running it in psql and observe the results.
+
+```sql
+SELECT
+ site_id,
+ date_trunc('minute', ingest_time) as minute,
+ COUNT(1) AS request_count,
+ SUM(CASE WHEN (status_code between 200 and 299) THEN 1 ELSE 0 END) as success_count,
+ SUM(CASE WHEN (status_code between 200 and 299) THEN 0 ELSE 1 END) as error_count,
+ SUM(response_time_msec) / COUNT(1) AS average_response_time_msec
+FROM http_request
+WHERE date_trunc('minute', ingest_time) > now() - '5 minutes'::interval
+GROUP BY site_id, minute
+ORDER BY minute ASC;
+```
+
+## Rolling up data
+
+The previous query works fine in the early stages, but its performance
+degrades as your data scales. Even with distributed processing, it's faster to pre-compute the data than to recalculate it repeatedly.
+
+We can ensure our dashboard stays fast by regularly rolling up the
+raw data into an aggregate table. You can experiment with the aggregation duration. We used a per-minute aggregation table, but you could break data into 5, 15, or 60 minutes instead.
+
+To run this roll-up more easily, we're going to put it into a plpgsql function. Run these commands in psql to create the `rollup_http_request` function.
+
+```sql
+-- initialize to a time long ago
+INSERT INTO latest_rollup VALUES ('10-10-1901');
+
+-- function to do the rollup
+CREATE OR REPLACE FUNCTION rollup_http_request() RETURNS void AS $$
+DECLARE
+ curr_rollup_time timestamptz := date_trunc('minute', now());
+ last_rollup_time timestamptz := minute from latest_rollup;
+BEGIN
+ INSERT INTO http_request_1min (
+ site_id, ingest_time, request_count,
+ success_count, error_count, average_response_time_msec
+ ) SELECT
+ site_id,
+ date_trunc('minute', ingest_time),
+ COUNT(1) as request_count,
+ SUM(CASE WHEN (status_code between 200 and 299) THEN 1 ELSE 0 END) as success_count,
+ SUM(CASE WHEN (status_code between 200 and 299) THEN 0 ELSE 1 END) as error_count,
+ SUM(response_time_msec) / COUNT(1) AS average_response_time_msec
+ FROM http_request
+ -- roll up only data new since last_rollup_time
+ WHERE date_trunc('minute', ingest_time) <@
+ tstzrange(last_rollup_time, curr_rollup_time, '(]')
+ GROUP BY 1, 2;
+
+ -- update the value in latest_rollup so that next time we run the
+ -- rollup it will operate on data newer than curr_rollup_time
+ UPDATE latest_rollup SET minute = curr_rollup_time;
+END;
+$$ LANGUAGE plpgsql;
+```
+
+With our function in place, execute it to roll up the data:
+
+```sql
+SELECT rollup_http_request();
+```
+
+And with our data in a pre-aggregated form we can query the rollup
+table to get the same report as earlier. Run the following query:
+
+```sql
+SELECT site_id, ingest_time as minute, request_count,
+ success_count, error_count, average_response_time_msec
+ FROM http_request_1min
+ WHERE ingest_time > date_trunc('minute', now()) - '5 minutes'::interval;
+```
+
+## Expiring old data
+
+The rollups make queries faster, but we still need to expire old data to avoid unbounded storage costs. Decide how long you'd like to keep data for each granularity, and use standard queries to delete expired data. In the following example, we decided to keep raw data for one day, and per-minute aggregations for one month:
+
+```sql
+DELETE FROM http_request WHERE ingest_time < now() - interval '1 day';
+DELETE FROM http_request_1min WHERE ingest_time < now() - interval '1 month';
+```
+
+In production, you could wrap these queries in a function and call it every minute in a cron job.
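+
+As a sketch of that approach, the following assumes the `pg_cron` extension is
+available and enabled on your cluster's coordinator (check the extensions
+reference before relying on it):
+
+```sql
+-- roll up new data once a minute
+SELECT cron.schedule('rollup-http-request', '* * * * *',
+    $$SELECT rollup_http_request();$$);
+
+-- expire old raw data and old rollups once an hour
+SELECT cron.schedule('expire-http-request', '0 * * * *',
+    $$DELETE FROM http_request WHERE ingest_time < now() - interval '1 day'$$);
+SELECT cron.schedule('expire-http-request-1min', '0 * * * *',
+    $$DELETE FROM http_request_1min WHERE ingest_time < now() - interval '1 month'$$);
+```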
+
+## Clean up resources
+
+In the preceding steps, you created Azure resources in a cluster. If you don't expect to need these resources in the future, delete the cluster. Select the *Delete* button in the *Overview* page for your cluster. When prompted on a pop-up page, confirm the name of the cluster and select the final *Delete* button.
+
+## Next steps
+
+In this tutorial, you learned how to provision a cluster. You connected to it with psql, created a schema, and distributed data. You learned to query data in the raw form, regularly aggregate that data, query the aggregated tables, and expire old data.
+
+- Learn about cluster [node types](./concepts-nodes.md)
+- Determine the best [initial
+ size](howto-scale-initial.md) for your cluster
cosmos-db Tutorial Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/tutorial-private-access.md
+
+ Title: Create a cluster with private access - Azure Cosmos DB for PostgreSQL
+description: Use Azure CLI to create a virtual network and virtual machine, then connect the VM to a cluster private endpoint.
++++++ Last updated : 09/28/2022++
+# Connect to a cluster with private access in Azure Cosmos DB for PostgreSQL
++
+This tutorial creates a virtual machine (VM) and an Azure Cosmos DB for PostgreSQL cluster,
+and establishes [private access](concepts-private-access.md) between
+them.
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free).
+- The [Azure CLI](/cli/azure/install-azure-cli) installed, if you want to run the code locally. You can also run the code in [Azure Cloud Shell](/azure/cloud-shell/overview).
+
+## Create a virtual network
+
+First, set up a resource group and virtual network to hold your cluster and VM.
+
+```azurecli
+az group create \
+ --name link-demo \
+ --location eastus
+
+az network vnet create \
+ --resource-group link-demo \
+ --name link-demo-net \
+ --address-prefix 10.0.0.0/16
+
+az network nsg create \
+ --resource-group link-demo \
+ --name link-demo-nsg
+
+az network vnet subnet create \
+ --resource-group link-demo \
+ --vnet-name link-demo-net \
+ --name link-demo-subnet \
+ --address-prefixes 10.0.1.0/24 \
+ --network-security-group link-demo-nsg
+```
+
+## Create a virtual machine
+
+For demonstration, create a VM running Debian Linux and the `psql` PostgreSQL client.
+
+```azurecli
+# provision the VM
+
+az vm create \
+ --resource-group link-demo \
+ --name link-demo-vm \
+ --vnet-name link-demo-net \
+ --subnet link-demo-subnet \
+ --nsg link-demo-nsg \
+ --public-ip-address link-demo-net-ip \
+ --image debian \
+ --admin-username azureuser \
+ --generate-ssh-keys
+
+# install psql database client
+
+az vm run-command invoke \
+ --resource-group link-demo \
+ --name link-demo-vm \
+ --command-id RunShellScript \
+ --scripts \
+ "sudo touch /home/azureuser/.hushlogin" \
+ "sudo DEBIAN_FRONTEND=noninteractive apt-get update" \
+ "sudo DEBIAN_FRONTEND=noninteractive apt-get install -q -y postgresql-client"
+```
+
+## Create a cluster with a private link
+
+Create your Azure Cosmos DB for PostgreSQL cluster in the [Azure portal](https://portal.azure.com).
+
+1. In the portal, select **Create a resource** in the upper left-hand corner.
+1. On the **Create a resource** page, select **Databases**, and then select **Azure Cosmos DB**.
+1. On the **Select API option** page, on the **PostgreSQL** tile, select **Create**.
+1. On the **Create an Azure Cosmos DB for PostgreSQL cluster** page, fill out the following information:
+
+ - **Resource group**: Select **New**, then enter *link-demo*.
+ - **Cluster name**: Enter *link-demo-sg*.
+
+ > [!NOTE]
+ > The cluster name must be globally unique across Azure because it
+ > creates a DNS entry. If `link-demo-sg` is unavailable, enter another name and adjust the following steps accordingly.
+
+ - **Location**: Select **East US**.
+ - **Password**: Enter and then confirm a password.
+
+1. Select **Next: Networking**.
+1. On the **Networking** tab, for **Connectivity method**, select **Private access**.
+1. On the **Create private endpoint** screen, enter or select the following values:
+
+ - **Resource group**: `link-demo`
+ - **Location**: `(US) East US`
+ - **Name**: `link-demo-sg-c-pe1`
+ - **Target sub-resource**: `coordinator`
+ - **Virtual network**: `link-demo-net`
+ - **Subnet**: `link-demo-subnet`
+ - **Integrate with private DNS zone**: Yes
+
+1. Select **OK**.
+1. After you create the private endpoint, select **Review + create** and then select **Create** to create your cluster.
+
+## Access the cluster privately from the VM
+
+The private link allows the VM to connect to the cluster, and prevents external hosts from doing so. In this step, you check that the psql database client on your VM can communicate with the coordinator node of the cluster. In the code, replace `{your_password}` with your cluster password.
+
+```azurecli
+# replace {your_password} in the string with your actual password
+
+PG_URI='host=c.link-demo-sg.postgres.database.azure.com port=5432 dbname=citus user=citus password={your_password} sslmode=require'
+
+# Attempt to connect to cluster with psql in the VM
+
+az vm run-command invoke \
+ --resource-group link-demo \
+ --name link-demo-vm \
+ --command-id RunShellScript \
+ --scripts "psql '$PG_URI' -c 'SHOW citus.version;'" \
+ --query 'value[0].message' \
+ | xargs printf
+```
+
+You should see a version number for Citus in the output. If you do, then psql
+was able to execute the command, and the private link worked.
+
+## Clean up resources
+
+You've seen how to create a private link between a VM and a
+cluster. Now you can deprovision the resources.
+
+Delete the resource group, and the resources inside will be deprovisioned:
+
+```azurecli
+az group delete --resource-group link-demo
+
+# press y to confirm
+```
+
+## Next steps
+
+* Learn more about [private access](concepts-private-access.md)
+* Learn about [private
+ endpoints](../../private-link/private-endpoint-overview.md)
+* Learn about [virtual
+ networks](../../virtual-network/concepts-and-best-practices.md)
+* Learn about [private DNS zones](../../dns/private-dns-overview.md)
cosmos-db Tutorial Shard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/tutorial-shard.md
+
+ Title: 'Tutorial: Shard data on worker nodes - Azure Cosmos DB for PostgreSQL'
+description: This tutorial shows how to create distributed tables and visualize their data distribution with Azure Cosmos DB for PostgreSQL.
+++++
+ms.devlang: azurecli
+ Last updated : 12/16/2020++
+# Tutorial: Shard data on worker nodes in Azure Cosmos DB for PostgreSQL
++
+In this tutorial, you use Azure Cosmos DB for PostgreSQL to learn how to:
+
+> [!div class="checklist"]
+> * Create hash-distributed shards
+> * See where table shards are placed
+> * Identify skewed distribution
+> * Create constraints on distributed tables
+> * Run queries on distributed data
+
+## Prerequisites
+
+This tutorial requires a running cluster with two
+worker nodes. If you don't have a running cluster, follow the [create
+cluster](quickstart-create-portal.md) tutorial and then come back
+to this one.
+
+## Hash-distributed data
+
+Distributing table rows across multiple PostgreSQL servers is a key technique
+for scalable queries in Azure Cosmos DB for PostgreSQL. Together, multiple nodes can hold
+more data than a traditional database, and in many cases can use worker CPUs in
+parallel to execute queries.
+
+In the prerequisites section, we created a cluster with
+two worker nodes.
+
+![coordinator and two workers](media/tutorial-shard/nodes.png)
+
+The coordinator node's metadata tables track workers and distributed data. We
+can check the active workers in the
+[pg_dist_node](reference-metadata.md#worker-node-table) table.
+
+```sql
+select nodeid, nodename from pg_dist_node where isactive;
+```
+```
+ nodeid | nodename
+--------+------------
+      1 | 10.0.0.21
+      2 | 10.0.0.23
+```
+
+> [!NOTE]
+> Nodenames on Azure Cosmos DB for PostgreSQL are internal IP addresses in a virtual
+> network, and the actual addresses you see may differ.
+
+### Rows, shards, and placements
+
+To use the CPU and storage resources of worker nodes, we have to distribute
+table data throughout the cluster. Distributing a table assigns each row
+to a logical group called a *shard.* Let's create a table and distribute it:
+
+```sql
+-- create a table on the coordinator
+create table users ( email text primary key, bday date not null );
+
+-- distribute it into shards on workers
+select create_distributed_table('users', 'email');
+```
+
+Azure Cosmos DB for PostgreSQL assigns each row to a shard based on the value of the
+*distribution column*, which, in our case, we specified to be `email`. Every
+row will be in exactly one shard, and every shard can contain multiple rows.
+
+![users table with rows pointing to shards](media/tutorial-shard/table.png)
+
+By default `create_distributed_table()` makes 32 shards, as we can see by
+counting in the metadata table
+[pg_dist_shard](reference-metadata.md#shard-table):
+
+```sql
+select logicalrelid, count(shardid)
+ from pg_dist_shard
+ group by logicalrelid;
+```
+```
+ logicalrelid | count
+--------------+-------
+ users        |    32
+```
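+
+The default comes from the `citus.shard_count` setting. As an optional
+illustration (not needed for this tutorial), changing the setting affects only
+tables distributed afterwards:
+
+```sql
+-- tables distributed after this statement get 64 shards instead of 32
+SET citus.shard_count = 64;
+```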
+
+Azure Cosmos DB for PostgreSQL uses the `pg_dist_shard` table to assign rows to shards,
+based on a hash of the value in the distribution column. The hashing details
+are unimportant for this tutorial. What matters is that we can query to see
+which values map to which shard IDs:
+
+```sql
+-- Where would a row containing hi@test.com be stored?
+-- (The value doesn't have to actually be present in users, the mapping
+-- is a mathematical operation consulting pg_dist_shard.)
+select get_shard_id_for_distribution_column('users', 'hi@test.com');
+```
+```
+ get_shard_id_for_distribution_column
+---------------------------------------
+                                102008
+```
+
+The mapping of rows to shards is purely logical. Shards must be assigned to
+specific worker nodes for storage, in what Azure Cosmos DB for PostgreSQL calls *shard
+placement*.
+
+![shards assigned to workers](media/tutorial-shard/shard-placement.png)
+
+We can look at the shard placements in
+[pg_dist_placement](reference-metadata.md#shard-placement-table).
+Joining it with the other metadata tables we've seen shows where each shard
+lives.
+
+```sql
+-- limit the output to the first five placements
+
+select
+ shard.logicalrelid as table,
+ placement.shardid as shard,
+ node.nodename as host
+from
+ pg_dist_placement placement,
+ pg_dist_node node,
+ pg_dist_shard shard
+where placement.groupid = node.groupid
+ and shard.shardid = placement.shardid
+order by shard
+limit 5;
+```
+```
+ table | shard  | host
+-------+--------+-----------
+ users | 102008 | 10.0.0.21
+ users | 102009 | 10.0.0.23
+ users | 102010 | 10.0.0.21
+ users | 102011 | 10.0.0.23
+ users | 102012 | 10.0.0.21
+```
+
+### Data skew
+
+A cluster runs most efficiently when you place data evenly on worker
+nodes, and when you place related data together on the same workers. In this
+section we'll focus on the first part, the uniformity of placement.
+
+To demonstrate, let's create sample data for our `users` table:
+
+```sql
+-- load sample data
+insert into users
+select
+ md5(random()::text) || '@test.com',
+ date_trunc('day', now() - random()*'100 years'::interval)
+from generate_series(1, 1000);
+```
+
+To see shard sizes, we can run [table size
+functions](https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE)
+on the shards.
+
+```sql
+-- sizes of the first five shards
+select *
+from
+ run_command_on_shards('users', $cmd$
+ select pg_size_pretty(pg_table_size('%1$s'));
+ $cmd$)
+order by shardid
+limit 5;
+```
+```
+ shardid | success | result
+---------+---------+--------
+  102008 | t       | 16 kB
+  102009 | t       | 16 kB
+  102010 | t       | 16 kB
+  102011 | t       | 16 kB
+  102012 | t       | 16 kB
+```
+
+We can see the shards are of equal size. We already saw that placements are
+evenly distributed among workers, so we can infer that the worker nodes hold
+roughly equal numbers of rows.
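+
+As another way to check, you can count rows per shard with the same helper
+function used above:
+
+```sql
+-- row counts for the first five shards
+select *
+from run_command_on_shards('users', 'select count(*) from %1$s')
+order by shardid
+limit 5;
+```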
+
+The rows in our `users` example distributed evenly because of two properties of
+the distribution column, `email`:
+
+1. The number of email addresses was greater than or equal to the number of shards.
+2. The number of rows per email address was similar (in our case, exactly one
+ row per address because we declared email a key).
+
+Any choice of table and distribution column for which either property fails to
+hold ends up with uneven data sizes on workers, that is, *data skew*.
+
+### Add constraints to distributed data
+
+Using Azure Cosmos DB for PostgreSQL allows you to continue to enjoy the safety of a
+relational database, including [database
+constraints](https://www.postgresql.org/docs/current/ddl-constraints.html).
+However, there's a limitation. Because of the nature of distributed systems,
+Azure Cosmos DB for PostgreSQL won't cross-reference uniqueness constraints or referential
+integrity between worker nodes.
+
+Let's consider our `users` table example with a related table.
+
+```sql
+-- books that users own
+create table books (
+ owner_email text references users (email),
+ isbn text not null,
+ title text not null
+);
+
+-- distribute it
+select create_distributed_table('books', 'owner_email');
+```
+
+For efficiency, we distribute `books` the same way as `users`: by the owner's
+email address. Distributing by similar column values is called
+[colocation](concepts-colocation.md).
+
+We had no problem distributing books with a foreign key to users, because the
+key was on a distribution column. However, we would have trouble making `isbn`
+a key:
+
+```sql
+-- will not work
+alter table books add constraint books_isbn unique (isbn);
+```
+```
+ERROR: cannot create constraint on "books"
+DETAIL: Distributed relations cannot have UNIQUE, EXCLUDE, or
+ PRIMARY KEY constraints that do not include the partition column
+ (with an equality operator if EXCLUDE).
+```
+
+In a distributed table the best we can do is make columns unique modulo
+the distribution column:
+
+```sql
+-- a weaker constraint is allowed
+alter table books add constraint books_isbn unique (owner_email, isbn);
+```
+
+The above constraint merely makes isbn unique per user. Another option is to
+make books a [reference
+table](howto-modify-distributed-tables.md#reference-tables) rather
+than a distributed table, and create a separate distributed table associating
+books with users.
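+
+Here's a minimal sketch of that alternative design, using hypothetical table
+names that aren't part of this tutorial's schema (it assumes a Citus version
+that supports foreign keys involving reference tables):
+
+```sql
+-- catalog of books, copied to every node, so isbn can be a true key
+create table book_catalog (
+    isbn text primary key,
+    title text not null
+);
+select create_reference_table('book_catalog');
+
+-- ownership stays distributed and colocated with users
+create table book_owners (
+    owner_email text references users (email),
+    isbn text references book_catalog (isbn),
+    primary key (owner_email, isbn)
+);
+select create_distributed_table('book_owners', 'owner_email');
+```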
+
+## Query distributed tables
+
+In the previous sections, we saw how distributed table rows are placed in shards
+on worker nodes. Most of the time you don't need to know how or where data is
+stored in a cluster. Azure Cosmos DB for PostgreSQL has a distributed query executor
+that automatically splits up regular SQL queries. It runs them in parallel on
+worker nodes close to the data.
+
+For instance, we can run a query to find the average age of users, treating the
+distributed `users` table like it's a normal table on the coordinator.
+
+```sql
+select avg(current_date - bday) as avg_days_old from users;
+```
+```
+    avg_days_old
+--------------------
+ 17926.348000000000
+```
+
+![query going to shards via coordinator](media/tutorial-shard/query-fragments.png)
+
+Behind the scenes, the Azure Cosmos DB for PostgreSQL executor creates a separate query for
+each shard, runs them on the workers, and combines the result. You can see it
+if you use the PostgreSQL EXPLAIN command:
+
+```sql
+explain select avg(current_date - bday) from users;
+```
+```
+                                        QUERY PLAN
+---------------------------------------------------------------------------------------------
+ Aggregate (cost=500.00..500.02 rows=1 width=32)
+ -> Custom Scan (Citus Adaptive) (cost=0.00..0.00 rows=100000 width=16)
+ Task Count: 32
+ Tasks Shown: One of 32
+ -> Task
+ Node: host=10.0.0.21 port=5432 dbname=citus
+ -> Aggregate (cost=41.75..41.76 rows=1 width=16)
+ -> Seq Scan on users_102040 users (cost=0.00..22.70 rows=1270 width=4)
+```
+
+The output shows an example of an execution plan for a *query fragment* running
+on shard 102040 (the table `users_102040` on worker 10.0.0.21). The other
+fragments aren't shown because they're similar. We can see that the worker node
+scans the shard tables and applies the aggregate. The coordinator node combines
+aggregates for the final result.
+
+## Next steps
+
+In this tutorial, we created a distributed table, and learned about its shards
+and placements. We saw a challenge of using uniqueness and foreign key
+constraints, and finally saw how distributed queries work at a high level.
+
+* Read more about Azure Cosmos DB for PostgreSQL [table types](concepts-nodes.md)
+* Get more tips on [choosing a distribution column](howto-choose-distribution-column.md)
+* Learn the benefits of [table colocation](concepts-colocation.md)
cosmos-db Provision Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-account-continuous-backup.md
Last updated 08/24/2022 -+ ms.devlang: azurecli- # Provision an Azure Cosmos DB account with continuous backup and point in time restore Azure Cosmos DB's point-in-time restore feature helps you to recover from an accidental change within a container, restore a deleted resource, or restore into any region where backups existed. The continuous backup mode allows you to restore to any point of time within the last 30 or 7 days. How far back you can go in time depends on the tier of the continuous mode for the account.
This article explains how to provision an account with continuous backup and poi
> [!NOTE] > You can provision continuous backup mode account only if the following conditions are true: >
-> * If the account is of type SQL API or API for MongoDB.
-> * If the account is of type Table API or Gremlin API.
+> * If the account is of type API for NoSQL or MongoDB.
+> * If the account is of type API for Table or Gremlin.
> * If the account has a single write region. ## <a id="provision-portal"></a>Provision using Azure portal
When creating a new Azure Cosmos DB account, in the **Backup policy** tab, choos
:::image type="content" source="./media/provision-account-continuous-backup/provision-account-continuous-mode.png" alt-text="Provision an Azure Cosmos DB account with continuous backup configuration." border="true" lightbox="./media/provision-account-continuous-backup/provision-account-continuous-mode.png":::
-Table API and Gremlin API are in preview and can be provisioned with PowerShell and Azure CLI.
+Support for API for Table and Gremlin is in preview and can be provisioned with PowerShell and Azure CLI.
## <a id="provision-powershell"></a>Provision using Azure PowerShell
For PowerShell and CLI commands, the tier value is optional, if it isn't already
Select-AzSubscription -Subscription <SubscriptionName> ```
-### <a id="provision-powershell-sql-api"></a>SQL API account
+### <a id="provision-powershell-sql-api"></a>API for NoSQL account
To provision an account with continuous backup, add the argument `-BackupPolicyType Continuous` along with the regular provisioning command.
New-AzCosmosDBAccount `
-ServerVersion "3.6" ```
-### <a id="provision-powershell-table-api"></a>Table API account
+### <a id="provision-powershell-table-api"></a>API for Table account
To provision an account with continuous backup, add an argument `-BackupPolicyType Continuous` along with the regular provisioning command.
New-AzCosmosDBAccount `
-ApiKind "Table" ```
-### <a id="provision-powershell-graph-api"></a>Gremlin API account
+### <a id="provision-powershell-graph-api"></a>API for Gremlin account
To provision an account with continuous backup, add an argument `-BackupPolicyType Continuous` along with the regular provisioning command.
Before provisioning the account, install Azure CLI with the following steps:
* Sign into your Azure account with ``az login`` command. * Select the required subscription using ``az account set -s <subscriptionguid>`` command.
-### <a id="provision-cli-sql-api"></a>SQL API account
+### <a id="provision-cli-sql-api"></a>API for NoSQL account
-To provision a SQL API account with continuous backup, an extra argument `--backup-policy-type Continuous` should be passed along with the regular provisioning command. The following command is an example of a single region write account named *Pitracct* with continuous backup policy and ``Continuous7days`` tier:
+To provision an API for NoSQL account with continuous backup, an extra argument `--backup-policy-type Continuous` should be passed along with the regular provisioning command. The following command is an example of a single region write account named *Pitracct* with continuous backup policy and ``Continuous7days`` tier:
```azurecli-interactive
az cosmosdb create \
--locations regionName="West US" ```
-### <a id="provision-cli-table-api"></a>Table API account
+### <a id="provision-cli-table-api"></a>API for Table account
The following command shows an example of a single region write account named *Pitracct* with continuous backup policy and ``Continuous30days`` tier:
az cosmosdb create \
--locations regionName="West US" ```
-### <a id="provision-cli-graph-api"></a>Gremlin API account
+### <a id="provision-cli-graph-api"></a>API for Gremlin account
The following command shows an example of a single region write account named *Pitracct* with continuous backup policy and ``Continuous7days`` tier created in *West US* region under *MyRG* resource group:
cosmos-db Provision Throughput Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-throughput-autoscale.md
Title: Create Azure Cosmos containers and databases in autoscale mode.
-description: Learn about the benefits, use cases, and how to provision Azure Cosmos databases and containers in autoscale mode.
+ Title: Create Azure Cosmos DB containers and databases in autoscale mode.
+description: Learn about the benefits, use cases, and how to provision Azure Cosmos DB databases and containers in autoscale mode.
Last updated 04/01/2022-+
-# Create Azure Cosmos containers and databases with autoscale throughput
+# Create Azure Cosmos DB containers and databases with autoscale throughput
In Azure Cosmos DB, you can configure either standard (manual) or autoscale provisioned throughput on your databases and containers. Autoscale provisioned throughput in Azure Cosmos DB allows you to **scale the throughput (RU/s) of your database or container automatically and instantly**. The throughput is scaled based on the usage, without impacting the availability, latency, throughput, or performance of the workload.
Autoscale provisioned throughput is well suited for mission-critical workloads t
## Benefits of autoscale
-Azure Cosmos databases and containers that are configured with autoscale provisioned throughput have the following benefits:
+Azure Cosmos DB databases and containers that are configured with autoscale provisioned throughput have the following benefits:
* **Simple:** Autoscale removes the complexity of managing RU/s with custom scripting or manually scaling capacity.
The use cases of autoscale include:
+* **Infrequently used applications:** If you have an application that's only used for a few hours several times a day, week, or month, such as a low-volume application/web/blog site, autoscale adjusts the capacity to handle peak usage and scales down when it's over.
-* **Development and test workloads:** If you or your team use Azure Cosmos databases and containers during work hours, but don't need them on nights or weekends, autoscale helps save cost by scaling down to a minimum when not in use.
+* **Development and test workloads:** If you or your team use Azure Cosmos DB databases and containers during work hours, but don't need them on nights or weekends, autoscale helps save cost by scaling down to a minimum when not in use.
* **Scheduled production workloads/queries:** If you have a series of scheduled requests, operations, or queries that you want to run during idle periods, you can do that easily with autoscale. When you need to run the workload, the throughput will automatically scale to what's needed and scale down afterward.
For more detail, see this [documentation](how-to-choose-offer.md) on how to choo
* Review the [autoscale FAQ](autoscale-faq.yml). * Learn how to [choose between manual and autoscale throughput](how-to-choose-offer.md).
-* Learn how to [provision autoscale throughput on an Azure Cosmos database or container](how-to-provision-autoscale-throughput.md).
+* Learn how to [provision autoscale throughput on an Azure Cosmos DB database or container](how-to-provision-autoscale-throughput.md).
* Learn more about [partitioning](partitioning-overview.md) in Azure Cosmos DB. * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)--
cosmos-db Rate Limiting Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/rate-limiting-requests.md
Title: Optimize your Azure Cosmos DB application using rate limiting
description: This article provides developers with a methodology to rate limit requests to Azure Cosmos DB. Implementing this pattern can reduce errors and improve overall performance for workloads that exceed the provisioned throughput of the target database or container. + Last updated 08/26/2021 # Optimize your Azure Cosmos DB application using rate limiting This article provides developers with a methodology to rate limit requests to Azure Cosmos DB. Implementing this pattern can reduce errors and improve overall performance for workloads that exceed the provisioned throughput of the target database or container.
There are some key concepts when measuring cost:
The method to determine the cost of a request is different for each API:
-* [Core API](find-request-unit-charge.md)
-* [Cassandra API](cassandr)
-* [Gremlin API](find-request-unit-charge-gremlin.md)
-* [Mongo DB API](mongodb/find-request-unit-charge-mongodb.md)
-* [Table API](table/find-request-unit-charge.md)
+* [API for NoSQL](find-request-unit-charge.md)
+* [API for Cassandra](cassandr)
+* [API for Gremlin](gremlin/find-request-unit-charge.md)
+* [API for MongoDB](mongodb/find-request-unit-charge.md)
+* [API for Table](table/find-request-unit-charge.md)
## Write requests
-The cost of write operations tends to be easy to predict. You will insert records and document the cost that Azure Cosmos reported.
+The cost of write operations tends to be easy to predict. You will insert records and document the cost that Azure Cosmos DB reported.
If you have documents of different sizes, or documents that will use different indexes, it's important to measure all of them. You may find that your representative documents are close enough in cost that you can assign a single value across all writes.
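As a rough sketch of this measurement step, assuming the azure-cosmos Python SDK and placeholder account and container names, you can read the charge the service reports in the `x-ms-request-charge` response header after each insert:

```python
# Sketch: insert representative documents and record the RU charge reported for each write.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
container = client.get_database_client("SalesDb").get_container_client("Orders")

sample_docs = [
    {"id": "1", "customerId": "c1", "kind": "small"},
    {"id": "2", "customerId": "c2", "kind": "large", "notes": "x" * 5000},
]

for doc in sample_docs:
    container.create_item(body=doc)
    # The service reports the cost of the operation in the response headers.
    charge = container.client_connection.last_response_headers["x-ms-request-charge"]
    print(f"Insert of the {doc['kind']} document cost {charge} RUs")
```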
The cost of query operations can be much harder to predict for the following rea
* The number of results can vary, and unless you have statistics, you can't predict the RU impact of the return payload. It's likely you won't have a single cost for query operations, but rather some function that evaluates the query and calculates a cost.
-If you are using the Core API, you could then evaluate the actual cost of the operation and determine how accurate your estimation was
+If you are using the API for NoSQL, you could then evaluate the actual cost of the operation and determine how accurate your estimation was
(tuning of this estimation could even happen automatically within the code).
## Transient fault handling
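As a minimal sketch of handling rate-limited requests, assuming the azure-cosmos Python SDK and placeholder names; a production application would typically rely on the SDK's built-in retry options and the retry interval returned by the service, so treat this only as an outline of the pattern:

```python
import time
from azure.cosmos import CosmosClient, exceptions

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
container = client.get_database_client("SalesDb").get_container_client("Orders")

def write_with_backoff(doc, max_attempts=5):
    """Retry a write when the service responds with 429 (request rate too large)."""
    delay = 0.5
    for attempt in range(max_attempts):
        try:
            return container.create_item(body=doc)
        except exceptions.CosmosHttpResponseError as err:
            if err.status_code != 429 or attempt == max_attempts - 1:
                raise
            time.sleep(delay)  # back off before retrying
            delay *= 2         # exponential backoff

write_with_backoff({"id": "1", "customerId": "c1"})
```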
Autoscale provisioned throughput in Azure Cosmos DB allows you to scale the thro
Autoscale provisioned throughput is well suited for mission-critical workloads that have variable or unpredictable traffic patterns, and require SLAs on high performance and scale.
-For more information on autoscaling, see [Create Azure Cosmos containers and databases with autoscale throughput
+For more information on autoscaling, see [Create Azure Cosmos DB containers and databases with autoscale throughput
](provision-throughput-autoscale.md).
### Queue-Based Load Leveling pattern
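As an illustrative outline of the pattern (not taken from the article), incoming work is buffered on a queue and drained at a rate the provisioned throughput can absorb; the RU budget below is a made-up number:

```python
# Sketch of queue-based load leveling: bursts land on the queue, not on the database.
import queue
import threading
import time

work_queue: "queue.Queue[dict]" = queue.Queue()

# Hypothetical budget: if each write costs ~10 RUs and 1,000 RU/s is provisioned,
# the consumer should stay at or below roughly 100 writes per second.
WRITES_PER_SECOND = 100

def consumer(write_fn):
    while True:
        doc = work_queue.get()
        write_fn(doc)  # for example, container.create_item(body=doc)
        work_queue.task_done()
        time.sleep(1 / WRITES_PER_SECOND)  # level the load to the RU budget

threading.Thread(target=consumer, args=(print,), daemon=True).start()
for i in range(500):
    work_queue.put({"id": str(i)})  # producer can burst freely
work_queue.join()
```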
cosmos-db Relational Nosql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/relational-nosql.md
Title: Understanding the differences between Azure Cosmos DB NoSQL and relational databases description: This article enumerates the differences between NoSQL and relational databases ----- Previously updated : 12/16/2019++ +++ Last updated : 10/05/2021 adobe-target: true # Understanding the differences between NoSQL and relational databases
-This article will enumerate some of the key benefits of NoSQL databases over relational databases. We will also discuss some of the challenges in working with NoSQL. For an in-depth look at the different data stores that exist, have a look at our article on [choosing the right data store](/azure/architecture/guide/technology-choices/data-store-overview).
+
+This article will enumerate some of the key benefits of NoSQL databases over relational databases. We'll also discuss some of the challenges in working with NoSQL. For an in-depth look at the different data stores that exist, have a look at our article on [choosing the right data store](/azure/architecture/guide/technology-choices/data-store-overview).
## High throughput
One of the most obvious challenges when maintaining a relational database system
In these scenarios, [distributed databases](https://en.wikipedia.org/wiki/Distributed_database) can offer a more scalable solution. However, maintenance can still be a costly and time-consuming exercise. Administrators may have to do extra work to ensure that the distributed nature of the system is transparent. They may also have to account for the "disconnected" nature of the database.
-[Azure Cosmos DB](./introduction.md) simplifies these challenges, by being deployed worldwide across all Azure regions. Partition ranges are capable of being dynamically subdivided to seamlessly grow the database in line with the application, while simultaneously maintaining high availability. Fine-grained multi-tenancy and tightly controlled, cloud-native resource governance facilitates [astonishing latency guarantees](./consistency-levels.md#consistency-levels-and-latency) and predictable performance. Partitioning is fully managed, so administrators need not have to write code or manage partitions.
+[Azure Cosmos DB](./introduction.md) simplifies these challenges by being deployed worldwide across all Azure regions. Partition ranges can be dynamically subdivided to seamlessly grow the database in line with the application, while simultaneously maintaining high availability. Fine-grained multi-tenancy and tightly controlled, cloud-native resource governance facilitate [astonishing latency guarantees](./consistency-levels.md#consistency-levels-and-latency) and predictable performance. Partitioning is fully managed, so administrators won't need to write code or manage partitions.
If your transactional volumes are reaching extreme levels, such as many thousands of transactions per second, you should consider a distributed NoSQL database. Consider Azure Cosmos DB for maximum efficiency, ease of maintenance, and reduced total cost of ownership.
If your transactional volumes are reaching extreme levels, such as many thousand
## Hierarchical data
-There are a significant number of use cases where transactions in the database can contain many parent-child relationships. These relationships can grow significantly over time, and prove difficult to manage. Forms of [hierarchical databases](https://en.wikipedia.org/wiki/Hierarchical_database_model) did emerge during the 1980s, but were not popular due to inefficiency in storage. They also lost traction as [Ted Codd's relational model](https://en.wikipedia.org/wiki/Relational_model) became the de facto standard used by virtually all mainstream database management systems.
+There are a significant number of use cases where transactions in the database can contain many parent-child relationships. These relationships can grow significantly over time, and prove difficult to manage. Forms of [hierarchical databases](https://en.wikipedia.org/wiki/Hierarchical_database_model) did emerge during the 1980s, but weren't popular due to inefficiency in storage. They also lost traction as [the relational model (RM)](https://en.wikipedia.org/wiki/Relational_model) became the de facto standard used by virtually all mainstream database management systems.
However, today the popularity of document-style databases has grown significantly. These databases might be considered a reinvention of the hierarchical database paradigm, now uninhibited by concerns around the cost of storing data on disk. As a result, maintaining many complex parent-child entity relationships in a relational database could now be considered an anti-pattern compared to modern document-oriented approaches.
-The emergence of [object oriented design](https://en.wikipedia.org/wiki/Object-oriented_design), and the [impedance mismatch](https://en.wikipedia.org/wiki/Object-relational_impedance_mismatch) that arises when combining it with relational models, also highlights an anti-pattern in relational databases for certain use cases. Hidden but often significant maintenance costs can arise as a result. Although [ORM approaches](https://en.wikipedia.org/wiki/Object-relational_mapping) have evolved to partly mitigate this, document-oriented databases nonetheless coalesce much better with object-oriented approaches. With this approach, developers are not forced to be committed to ORM drivers, or bespoke language specific [OO Database engines](https://en.wikipedia.org/wiki/Object_database). If your data contains many parent-child relationships and deep levels of hierarchy, you may want to consider using a NoSQL document database such as the [Azure Cosmos DB SQL API](./introduction.md).
+The emergence of [object oriented design](https://en.wikipedia.org/wiki/Object-oriented_design), and the [impedance mismatch](https://en.wikipedia.org/wiki/Object-relational_impedance_mismatch) that arises when combining it with relational models, also highlights an anti-pattern in relational databases for certain use cases. Hidden but often significant maintenance costs can arise as a result. Although [ORM approaches](https://en.wikipedia.org/wiki/Object-relational_mapping) have evolved to partly mitigate this, document-oriented databases nonetheless coalesce much better with object-oriented approaches. With this approach, developers won't need ORM drivers, or bespoke language specific [OO Database engines](https://en.wikipedia.org/wiki/Object_database). If your data contains many parent-child relationships and deep levels of hierarchy, you may want to consider using a NoSQL document database such as the [Azure Cosmos DB API for NoSQL](./introduction.md).
:::image type="content" source="./media/relational-or-nosql/order-orderdetails.jpg" alt-text="OrderDetails"::: ## Complex networks and relationships
-Ironically, given their name, relational databases present a less than optimal solution for modeling deep and complex relationships. The reason for this is that relationships between entities do not actually exist in a relational database. They need to be computed at runtime, with complex relationships requiring cartesian joins in order to allow mapping using queries. As a result, operations become exponentially more expensive in terms of computation as relationships increase. In some cases, a relational database attempting to manage such entities will become unusable.
+Ironically, given their name, relational databases present a less than optimal solution for modeling deep and complex relationships. The reason for this is that relationships between entities don't actually exist in a relational database. They need to be computed at runtime, with complex relationships requiring cartesian joins in order to allow mapping using queries. As a result, operations become exponentially more expensive in terms of computation as relationships increase. In some cases, a relational database attempting to manage such entities will become unusable.
Various forms of "network" databases emerged around the same time as relational databases, but as with hierarchical databases, these systems struggled to gain popularity. Slow adoption was due to a lack of use cases at the time, and storage inefficiencies. Today, graph database engines could be considered a re-emergence of the network database paradigm. The key benefit with these systems is that relationships are stored as "first class citizens" within the database. Thus, traversing relationships can be done in constant time, rather than increasing in time complexity with each new join or cross product.
-If you are maintaining a complex network of relationships in your database, you may want to consider a graph database such as the [Azure Cosmos DB Gremlin API](./graph-introduction.md) for managing this data.
+If you're maintaining a complex network of relationships in your database, you may want to consider a graph database such as the [Azure Cosmos DB for Gremlin](./gremlin/introduction.md).
:::image type="content" source="./media/relational-or-nosql/graph.png" alt-text="Database diagram shows several employees and departments connected to each other.":::
-Azure Cosmos DB is a multi-model database service, which offers an API projection for all the major NoSQL model types; Column-family, Document, Graph, and Key-Value. The [Gremlin (graph)](./gremlin-support.md) and SQL (Core) Document API layers are fully interoperable. This has benefits for switching between different models at the programmability level. Graph stores can be queried in terms of both complex network traversals as well as transactions modeled as document records in the same store.
+Azure Cosmos DB is a multi-model database service, which offers an API projection for all the major NoSQL model types: Column-family, Document, Graph, and Key-Value. The [API for Gremlin (graph)](./gremlin/support.md) and the API for NoSQL are fully interoperable. This interoperability has benefits for switching between different models at the programmability level. Graph stores can be queried in terms of both complex network traversals and transactions modeled as document records in the same store.
## Fluid schema
-Another particular characteristic of relational databases is that schemas are required to be defined at design time. This has benefits in terms of referential integrity and conformity of data. However, it can also be restrictive as the application grows. Responding to changes in the schema across logically separate models sharing the same table or database definition can become complex over time. Such use cases often benefit from the schema being devolved to the application to manage on a per record basis. This requires the database to be "schema agnostic" and allow records to be "self-describing" in terms of the data contained within them.
+Another particular characteristic of relational databases is that schemas are required to be defined at design time. Pre-defined schemas have benefits in terms of referential integrity and conformity of data. However, they can also be restrictive as the application grows. Responding to changes in the schema across logically separate models sharing the same table or database definition can become complex over time. Such use cases often benefit from the schema being devolved to the application to manage on a per record basis. These use cases require the database to be "schema agnostic" and allow records to be "self-describing" in terms of the data contained within them.
-If you are managing data whose structures are constantly changing at a high rate, particularly if transactions can come from external sources where it is difficult to enforce conformity across the database, you may want to consider a more schema-agnostic approach using a managed NoSQL database service like Azure Cosmos DB.
+If you're managing data whose structures are constantly changing at a high rate, consider a more schema-agnostic approach using a managed NoSQL database service like Azure Cosmos DB. This approach is a particularly good fit when transactions can come from external sources where it's difficult to enforce conformity across the database.
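To make this concrete, here's a small illustrative sketch (not from the article) of two self-describing items with different shapes stored in the same container, assuming the azure-cosmos Python SDK and placeholder names:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
container = client.get_database_client("CatalogDb").get_container_client("Items")

# Two records in the same container, each describing its own structure.
container.create_item(body={
    "id": "1",
    "type": "book",
    "title": "Distributed Systems",
    "authors": ["A. Writer"],
})
container.create_item(body={
    "id": "2",
    "type": "sensorReading",
    "deviceId": "thermostat-42",
    "temperatureC": 21.5,
    "recordedAt": "2022-10-07T08:00:00Z",
})
```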
## Microservices
-The [microservices](https://en.wikipedia.org/wiki/Microservices) pattern has grown significantly in recent years. This pattern has its roots in [Service-Oriented Architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture). The de-facto standard for data transmission in these modern microservices architectures is [JSON](https://en.wikipedia.org/wiki/JSON), which also happens to be the storage medium for the vast majority of document-oriented NoSQL Databases. This makes NoSQL document stores a much more seamless fit for both the persistence and synchronization (using [event sourcing patterns](https://en.wikipedia.org/wiki/Event-driven_architecture)) across complex Microservice implementations. More traditional relational databases can be much more complex to maintain in these architectures. This is due to the greater amount of transformation required for both state and synchronization across APIs. Azure Cosmos DB in particular has a number of features that make it an even more seamless fit for JSON-based Microservices Architectures than many NoSQL databases:
+The [microservices](https://en.wikipedia.org/wiki/Microservices) pattern has grown significantly in recent years. This pattern has its roots in [Service-Oriented Architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture). The de-facto standard for data transmission in these modern microservices architectures is [JSON](https://en.wikipedia.org/wiki/JSON), which also happens to be the storage medium for most document-oriented NoSQL Databases. The JSON data type makes NoSQL document stores a much more seamless fit for both the persistence and synchronization (using [event sourcing patterns](https://en.wikipedia.org/wiki/Event-driven_architecture)) across complex Microservice implementations. More traditional relational databases can be much more complex to maintain in these architectures. This complexity is due to the greater amount of transformation required for both state and synchronization across APIs. Azure Cosmos DB in particular has many features that make it an even more seamless fit for JSON-based Microservices Architectures than many NoSQL databases:
* a choice of pure JSON data types
* a JavaScript engine and [query API](sql/javascript-query-api.md) built into the database.
The [microservices](https://en.wikipedia.org/wiki/Microservices) pattern has gro
## Some challenges with NoSQL databases
-Although there are some clear advantages when implementing NoSQL databases, there are also some challenges that you may want to take into consideration. These may not be present to the same degree when working with the relational model:
+Although there are some clear advantages when implementing NoSQL databases, there are also some challenges that you may want to take into consideration. These challenges may not be present to the same degree when working with the relational model:
* transactions with many relations pointing to the same entity.
* transactions requiring strong consistency across the entire dataset.
-Looking at the first challenge, the rule-of-thumb in NoSQL databases is generally denormalization, which as articulated earlier, produces more efficient reads in a distributed system. However, there are some design challenges that come into play with this approach. Let's take an example of a product that's related to one category and multiple tags:
+The first challenge requires a denormalized solution. The rule-of-thumb in NoSQL databases is generally denormalization, which as articulated earlier, produces more efficient reads in a distributed system. However, there are some design challenges that come into play with this approach. Let's take an example of a product that's related to one category and multiple tags:
:::image type="content" source="./media/relational-or-nosql/many-joins.png" alt-text="Joins":::
-A best practice approach in a NoSQL document database would be to denormalize the category name and tag names directly in a "product document". However, in order to keep categories, tags, and products in sync, the design options to facilitate this have added maintenance complexity, because the data is duplicated across multiple records in the product, rather than being a simple update in a "one-to-many" relationship, and a join to retrieve the data.
+A best practice approach in a NoSQL document database would be to denormalize the category name and tag names directly in a "product document". In order to keep categories, tags, and products in sync, the design options to facilitate this add maintenance complexity. This complexity occurs because the data is duplicated across multiple records in the product. In a relational database, the operation would be a straightforward update in a "one-to-many" relationship, and a join to retrieve the data.
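As a minimal illustration with hypothetical field names (not from the article), the denormalized product document carries the category name and tag names directly, so a single read returns everything, but every category or tag rename must be propagated to each affected product:

```python
# Hypothetical denormalized "product document" for a NoSQL document store.
product = {
    "id": "p-1001",
    "name": "Road bike",
    "categoryId": "c-10",
    "categoryName": "Bikes",  # duplicated from the category record
    "tags": [
        {"id": "t-1", "name": "sale"},     # duplicated from the tag records
        {"id": "t-2", "name": "outdoor"},
    ],
}
```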
The trade-off is that reads are more efficient in the denormalized record, and become increasingly more efficient as the number of conceptually joined entities increases. However, just as the read efficiency increases with increasing numbers of joined entities in a denormalized record, so too does the maintenance complexity of keeping entities in sync. One way of mitigating this trade-off is to create a [hybrid data model](./modeling-data.md#hybrid-data-models).
-While there is more flexibility available in NoSQL databases to deal with these trade-offs, increased flexibility can also produce more design decisions. Consult our article [how to model and partition data on Azure Cosmos DB using a real-world example](./how-to-model-partition-example.md), which includes an approach for keeping [denormalized user data in sync](./how-to-model-partition-example.md#denormalizing-usernames) where users not only sit in different partitions, but in different containers.
+While there's more flexibility available in NoSQL databases to deal with these trade-offs, increased flexibility can also produce more design decisions. For more information about keeping [denormalized user data in sync](./how-to-model-partition-example.md#denormalizing-usernames), see [how to model and partition data on Azure Cosmos DB using a real-world example](./how-to-model-partition-example.md). This example includes an approach for keeping denormalized data in sync where users not only sit in different partitions, but in different containers.
-With respect to strong consistency, it is rare that this will be required across the entire data set. However, in cases where this is necessary, it can be a challenge in distributed databases. To ensure strong consistency, data needs to be synchronized across all replicas and regions before allowing clients to read it. This can increase the latency of reads.
+With respect to strong consistency, it's rare that strong consistency will be required across the entire data set. However, cases where this level of consistency is necessary can be a challenge in distributed databases. To ensure strong consistency, data needs to be synchronized across all replicas and regions before allowing clients to read it. This synchronization can increase the latency of reads.
-Again, Azure Cosmos DB offers more flexibility than relational databases for the various trade-offs that are relevant here, but for small scale implementations, this approach may add more design considerations. Consult our article on [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more detail on this topic.
+Again, Azure Cosmos DB offers more flexibility than relational databases for the various trade-offs that are relevant here. For small scale implementations, this approach may add more design considerations. For more information, see [consistency, availability, and performance tradeoffs](./consistency-levels.md).
## Next steps
-Learn how to manage your Azure Cosmos account and other concepts:
+Learn how to manage your Azure Cosmos DB account and other concepts:
-* [How-to manage your Azure Cosmos account](how-to-manage-database-account.md)
+* [How-to manage your Azure Cosmos DB account](how-to-manage-database-account.md)
* [Global distribution](distribute-data-globally.md)
* [Consistency levels](consistency-levels.md)
-* [Working with Azure Cosmos containers and items](account-databases-containers-items.md)
-* [VNET service endpoint for your Azure Cosmos account](how-to-configure-vnet-service-endpoint.md)
-* [IP-firewall for your Azure Cosmos account](how-to-configure-firewall.md)
-* [How-to add and remove Azure regions to your Azure Cosmos account](how-to-manage-database-account.md)
+* [Working with Azure Cosmos DB containers and items](account-databases-containers-items.md)
+* [VNET service endpoint for your Azure Cosmos DB account](how-to-configure-vnet-service-endpoint.md)
+* [IP-firewall for your Azure Cosmos DB account](how-to-configure-firewall.md)
+* [How-to add and remove Azure regions to your Azure Cosmos DB account](how-to-manage-database-account.md)
* [Azure Cosmos DB SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_2/)
cosmos-db Request Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/request-units.md
Last updated 03/24/2022-+ # Request Units in Azure Cosmos DB Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation. The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request unit is a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB.
-The cost to do a point read (fetching a single item by its ID and partition key value) for a 1-KB item is 1 Request Unit (or 1 RU). All other database operations are similarly assigned a cost using RUs. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs.
+The cost to do a point read (fetching a single item by its ID and partition key value) for a 1-KB item is 1 Request Unit (or 1 RU). All other database operations are similarly assigned a cost using RUs. No matter which API you use to interact with your Azure Cosmos DB container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs.
> > [!VIDEO https://aka.ms/docs.essential-request-units]
The following image shows the high-level idea of RUs:
To manage and plan capacity, Azure Cosmos DB ensures that the number of RUs for a given database operation over a given dataset is deterministic. You can examine the response header to track the number of RUs that are consumed by any database operation. When you understand the [factors that affect RU charges](request-units.md#request-unit-considerations) and your application's throughput requirements, you can run your application cost effectively.
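For example, the following sketch issues a point read and prints the request charge reported in the response headers, assuming the azure-cosmos Python SDK and placeholder names; for a 1-KB item, the reported charge is 1 RU:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
container = client.get_database_client("SalesDb").get_container_client("Orders")

# Point read: fetch a single item by its ID and partition key value.
item = container.read_item(item="order-1", partition_key="customer-1")

# The number of RUs consumed is reported back in the response headers.
charge = container.client_connection.last_response_headers["x-ms-request-charge"]
print(f"Point read consumed {charge} RUs")
```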
-The type of Azure Cosmos account you're using determines the way consumed RUs get charged. There are three modes in which you can create an account:
+The type of Azure Cosmos DB account you're using determines the way consumed RUs get charged. There are three modes in which you can create an account:
1. **Provisioned throughput mode**: In this mode, you provision the number of RUs for your application on a per-second basis in increments of 100 RUs per second. To scale the provisioned throughput for your application, you can increase or decrease the number of RUs at any time in increments or decrements of 100 RUs. You can make your changes either programmatically or by using the Azure portal. You are billed on an hourly basis for the number of RUs per second you have provisioned. To learn more, see the [Provisioned throughput](set-throughput.md) article. You can provision throughput at two distinct granularities:
- * **Containers**: For more information, see [Provision throughput on an Azure Cosmos container](how-to-provision-container-throughput.md).
- * **Databases**: For more information, see [Provision throughput on an Azure Cosmos database](how-to-provision-database-throughput.md).
+ * **Containers**: For more information, see [Provision throughput on an Azure Cosmos DB container](how-to-provision-container-throughput.md).
+ * **Databases**: For more information, see [Provision throughput on an Azure Cosmos DB database](how-to-provision-database-throughput.md).
-2. **Serverless mode**: In this mode, you don't have to provision any throughput when creating resources in your Azure Cosmos account. At the end of your billing period, you get billed for the number of Request Units that has been consumed by your database operations. To learn more, see the [Serverless throughput](serverless.md) article.
+2. **Serverless mode**: In this mode, you don't have to provision any throughput when creating resources in your Azure Cosmos DB account. At the end of your billing period, you get billed for the number of Request Units that have been consumed by your database operations. To learn more, see the [Serverless throughput](serverless.md) article.
3. **Autoscale mode**: In this mode, you can automatically and instantly scale the throughput (RU/s) of your database or container based on its usage, without impacting the availability, latency, throughput, or performance of the workload. This mode is well suited for mission-critical workloads that have variable or unpredictable traffic patterns, and require SLAs on high performance and scale. To learn more, see the [autoscale throughput](provision-throughput-autoscale.md) article.
While you estimate the number of RUs consumed by your workload, consider the fol
## Request units and multiple regions
-If you provision *'R'* RUs on a Cosmos container (or database), Cosmos DB ensures that *'R'* RUs are available in *each* region associated with your Cosmos account. You can't selectively assign RUs to a specific region. The RUs provisioned on a Cosmos container (or database) are provisioned in all the regions associated with your Cosmos account.
+If you provision *'R'* RUs on an Azure Cosmos DB container (or database), Azure Cosmos DB ensures that *'R'* RUs are available in *each* region associated with your Azure Cosmos DB account. You can't selectively assign RUs to a specific region. The RUs provisioned on an Azure Cosmos DB container (or database) are provisioned in all the regions associated with your Azure Cosmos DB account.
-Assuming that a Cosmos container is configured with *'R'* RUs and there are *'N'* regions associated with the Cosmos account, the total RUs available globally on the container = *R* x *N*.
+Assuming that an Azure Cosmos DB container is configured with *'R'* RUs and there are *'N'* regions associated with the Azure Cosmos DB account, the total RUs available globally on the container = *R* x *N*.
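For example, if an Azure Cosmos DB container is provisioned with *R* = 10,000 RU/s and the account is associated with *N* = 3 regions, 10,000 RU/s are available in each region, for a total of 10,000 x 3 = 30,000 RU/s provisioned globally.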
Your choice of [consistency model](consistency-levels.md) also affects the throughput. You can get approximately 2x read throughput for the more relaxed consistency levels (*session*, *consistent prefix* and *eventual* consistency) compared to stronger consistency levels (*bounded staleness* or *strong* consistency).
## Next steps
-- Learn more about how to [provision throughput on Azure Cosmos containers and databases](set-throughput.md).
+- Learn more about how to [provision throughput on Azure Cosmos DB containers and databases](set-throughput.md).
- Learn more about [serverless on Azure Cosmos DB](serverless.md).
- Learn more about [logical partitions](./partitioning-overview.md).
-- Learn how to [provision throughput on an Azure Cosmos container](how-to-provision-container-throughput.md).
-- Learn how to [provision throughput on an Azure Cosmos database](how-to-provision-database-throughput.md).
+- Learn how to [provision throughput on an Azure Cosmos DB container](how-to-provision-container-throughput.md).
+- Learn how to [provision throughput on an Azure Cosmos DB database](how-to-provision-database-throughput.md).
- Learn how to [find the request unit charge for an operation](find-request-unit-charge.md).
- Learn how to [optimize provisioned throughput cost in Azure Cosmos DB](optimize-cost-throughput.md).
- Learn how to [optimize reads and writes cost in Azure Cosmos DB](optimize-cost-reads-writes.md).
cosmos-db Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/reserved-capacity.md
+
+ Title: Reserved capacity in Azure Cosmos DB to optimize cost
+description: Learn how to buy Azure Cosmos DB reserved capacity to save on your compute costs.
++++ Last updated : 08/26/2021++++
+# Optimize cost with reserved capacity in Azure Cosmos DB
+
+Azure Cosmos DB reserved capacity helps you save money by committing to a reservation for Azure Cosmos DB resources for either one year or three years. With Azure Cosmos DB reserved capacity, you can get a discount on the throughput provisioned for Azure Cosmos DB resources. Examples of resources are databases and containers (tables, collections, and graphs).
+
+Azure Cosmos DB reserved capacity can significantly reduce your Azure Cosmos DB costs&mdash;up to 65 percent on regular prices with a one-year or three-year upfront commitment. Reserved capacity provides a billing discount and doesn't affect the runtime state of your Azure Cosmos DB resources.
+
+Azure Cosmos DB reserved capacity covers throughput provisioned for your resources. It doesn't cover the storage and networking charges. As soon as you buy a reservation, the throughput charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. For more information on reservations, see the [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) article.
+
+You can buy Azure Cosmos DB reserved capacity from the [Azure portal](https://portal.azure.com). Pay for the reservation [up front or with monthly payments](../cost-management-billing/reservations/prepare-buy-reservation.md). To buy reserved capacity:
+
+* You must be in the Owner role for at least one Enterprise or individual subscription with pay-as-you-go rates.
+* For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com). Or, if that setting is disabled, you must be an EA Admin on the subscription.
+* For the Cloud Solution Provider (CSP) program, only admin agents or sales agents can buy Azure Cosmos DB reserved capacity.
+
+## Determine the required throughput before purchase
+
+The size of the reserved capacity purchase should be based on the total amount of throughput that the existing or soon-to-be-deployed Azure Cosmos DB resources will use on an hourly basis. For example: Purchase 30,000 RU/s reserved capacity if that's your consistent hourly usage pattern. In this example, any provisioned throughput above 30,000 RU/s will be billed using your Pay-as-you-go rate. If provisioned throughput is below 30,000 RU/s in an hour, then the extra reserved capacity for that hour will be wasted.
+
+We calculate purchase recommendations based on your hourly usage pattern. Usage over the last 7, 30, and 60 days is analyzed, and the reserved capacity purchase that maximizes your savings is recommended. You can view recommended reservation sizes in the Azure portal using the following steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Select **All services** > **Reservations** > **Add**.
+
+3. From the **Purchase reservations** pane, choose **Azure Cosmos DB**.
+
+4. Select the **Recommended** tab to view recommended reservations:
+
+You can filter recommendations by the following attributes:
+
+- **Term** (1 year or 3 years)
+- **Billing frequency** (Monthly or Upfront)
+- **Throughput Type** (RU/s vs multi-region write RU/s)
+
+Additionally, you can scope recommendations to be within a single resource group, single subscription, or your entire Azure enrollment.
+
+Here's an example recommendation:
++
+This recommendation to purchase a 30,000 RU/s reservation indicates that, among 3 year reservations, a 30,000 RU/s reservation size will maximize savings. In this case, the recommendation is calculated based on the past 30 days of Azure Cosmos DB usage. If this customer expects that the past 30 days of Azure Cosmos DB usage is representative of future use, they would maximize savings by purchasing a 30,000 RU/s reservation.
+
+## Buy Azure Cosmos DB reserved capacity
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Select **All services** > **Reservations** > **Add**.
+
+3. From the **Purchase reservations** pane, choose **Azure Cosmos DB** to buy a new reservation.
+
+4. Fill in the required fields as described in the following table:
+
+ :::image type="content" source="./media/reserved-capacity/fill-reserved-capacity-form.png" alt-text="Fill the reserved capacity form":::
+
+ |Field |Description |
+ |||
+ |Scope | Option that controls how many subscriptions can use the billing benefit associated with the reservation. It also controls how the reservation is applied to specific subscriptions. <br/><br/> If you select **Shared**, the reservation discount is applied to Azure Cosmos DB instances that run in any subscription within your billing context. The billing context is based on how you signed up for Azure. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For pay-as-you-go customers, the shared scope is all individual subscriptions with pay-as-you-go rates created by the account administrator. </br></br>If you select **Management group**, the reservation discount is applied to Azure Cosmos DB instances that run in any of the subscriptions that are a part of both the management group and billing scope. <br/><br/> If you select **Single subscription**, the reservation discount is applied to Azure Cosmos DB instances in the selected subscription. <br/><br/> If you select **Single resource group**, the reservation discount is applied to Azure Cosmos DB instances in the selected subscription and the selected resource group within that subscription. <br/><br/> You can change the reservation scope after you buy the reserved capacity. |
+ |Subscription | Subscription that's used to pay for the Azure Cosmos DB reserved capacity. The payment method on the selected subscription is used in charging the costs. The subscription must be one of the following types: <br/><br/> Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P): For an Enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. <br/><br/> Individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P): For an individual subscription with pay-as-you-go rates, the charges are billed to the credit card or invoice payment method on the subscription. |
+ | Resource Group | Resource group to which the reserved capacity discount is applied. |
+ |Term | One year or three years. |
+ |Throughput Type | Throughput is provisioned as request units. You can buy a reservation for the provisioned throughput for both setups - single region writes as well as multiple region writes. The throughput type has two values to choose from: 100 RU/s per hour and 100 multi-region writes RU/s per hour.|
+ | Reserved Capacity Units| The amount of throughput that you want to reserve. You can calculate this value by determining the throughput needed for all your Azure Cosmos DB resources (for example, databases or containers) per region. You then multiply it by the number of regions that you'll associate with your Azure Cosmos DB database. For example: If you have five regions with 1 million RU/sec in every region, select 5 million RU/sec for the reservation capacity purchase. |
++
+5. After you fill the form, the price required to purchase the reserved capacity is calculated. The output also shows the percentage of discount you get with the chosen options. Next, click **Select**.
+
+6. In the **Purchase reservations** pane, review the discount and the price of the reservation. This reservation price applies to Azure Cosmos DB resources with throughput provisioned across all regions.
+
+ :::image type="content" source="./media/reserved-capacity/reserved-capacity-summary.png" alt-text="Reserved capacity summary":::
+
+7. Select **Review + buy** and then **buy now**. You see the following page when the purchase is successful:
+
+After you buy a reservation, it's applied immediately to any existing Azure Cosmos DB resources that match the terms of the reservation. If you don't have any existing Azure Cosmos DB resources, the reservation will apply when you deploy a new Azure Cosmos DB instance that matches the terms of the reservation. In both cases, the period of the reservation starts immediately after a successful purchase.
+
+When your reservation expires, your Azure Cosmos DB instances continue to run and are billed at the regular pay-as-you-go rates.
+
+## Cancel, exchange, or refund reservations
+
+You can cancel, exchange, or refund reservations with certain limitations. For more information, see [Self-service exchanges and refunds for Azure Reservations](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
+
+## Next steps
+
+The reservation discount is applied automatically to the Azure Cosmos DB resources that match the reservation scope and attributes. You can update the scope of the reservation through the Azure portal, PowerShell, Azure CLI, or the API.
+
+* To learn how reserved capacity discounts are applied to Azure Cosmos DB, see [Understand the Azure reservation discount](../cost-management-billing/reservations/understand-cosmosdb-reservation-charges.md).
+
+* To learn more about Azure reservations, see the following articles:
+
+ * [What are Azure reservations?](../cost-management-billing/reservations/save-compute-costs-reservations.md)
+ * [Manage Azure reservations](../cost-management-billing/reservations/manage-reserved-vm-instance.md)
+ * [Understand reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
+ * [Understand reservation usage for your Pay-As-You-Go subscription](../cost-management-billing/reservations/understand-reserved-instance-usage.md)
+ * [Azure reservations in the Partner Center CSP program](/partner-center/azure-reservations)
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+## Need help? Contact us.
+
+If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
cosmos-db Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/resource-graph-samples.md
-+ # Azure Resource Graph sample queries for Azure Cosmos DB This page is a collection of [Azure Resource Graph](../governance/resource-graph/overview.md) sample queries for Azure Cosmos DB. For a complete list of Azure Resource Graph samples, see
cosmos-db Resource Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/resource-locks.md
Title: Prevent Azure Cosmos DB resources from being deleted or changed
description: Use Azure Resource Locks to prevent Azure Cosmos DB resources from being deleted or changed. -+ Last updated 08/31/2022 -+ ms.devlang: azurecli # Prevent Azure Cosmos DB resources from being deleted or changed
-As an administrator, you may need to lock an Azure Cosmos account, database or container. Locks prevent other users in your organization from accidentally deleting or modifying critical resources. You can set the lock level to ``CanNotDelete`` or ``ReadOnly``.
+As an administrator, you may need to lock an Azure Cosmos DB account, database or container. Locks prevent other users in your organization from accidentally deleting or modifying critical resources. You can set the lock level to ``CanNotDelete`` or ``ReadOnly``.
| Level | Description | | | |
When you apply a lock at a parent scope, all resources within that scope inherit
Unlike Azure role-based access control, you use management locks to apply a restriction across all users and roles. To learn about role-based access control for Azure Cosmos DB see, [Azure role-based access control in Azure Cosmos DB](role-based-access-control.md).
-Resource Manager locks apply only to operations that happen in the management plane, which consists of operations sent to <https://management.azure.com>. The locks don't restrict how resources perform their own functions. Resource changes are restricted, but resource operations aren't restricted. For example, a ReadOnly lock on an Azure Cosmos container prevents you from deleting or modifying the container. It doesn't prevent you from creating, updating, or deleting data in the container. Data transactions are permitted because those operations aren't sent to <https://management.azure.com>.
+Resource Manager locks apply only to operations that happen in the management plane, which consists of operations sent to <https://management.azure.com>. The locks don't restrict how resources perform their own functions. Resource changes are restricted, but resource operations aren't restricted. For example, a ReadOnly lock on an Azure Cosmos DB container prevents you from deleting or modifying the container. It doesn't prevent you from creating, updating, or deleting data in the container. Data transactions are permitted because those operations aren't sent to <https://management.azure.com>.
## Manage locks
-Resource locks don't work for changes made by users accessing Azure Cosmos DB using account keys unless the Azure Cosmos account is first locked by enabling the ``disableKeyBasedMetadataWriteAccess`` property. Ensure this property doesn't break existing applications that make changes to resources using any SDK, Azure portal, or third party tools. Enabling this property will break applications that connect via account keys and modify resources such as changing throughput, updating index policies, etc. To learn more and to go through a checklist to ensure your applications continue to function, see [preventing changes from the Azure Cosmos DB SDKs](role-based-access-control.md#prevent-sdk-changes)
+Resource locks don't work for changes made by users accessing Azure Cosmos DB using account keys unless the Azure Cosmos DB account is first locked by enabling the ``disableKeyBasedMetadataWriteAccess`` property. Ensure this property doesn't break existing applications that make changes to resources using any SDK, Azure portal, or third party tools. Enabling this property will break applications that connect via account keys and modify resources such as changing throughput, updating index policies, etc. To learn more and to go through a checklist to ensure your applications continue to function, see [preventing changes from the Azure Cosmos DB SDKs](role-based-access-control.md#prevent-sdk-changes)
### [PowerShell](#tab/powershell)
$parameters = @{
Update-AzCosmosDBAccount @parameters ```
-Create a Delete Lock on an Azure Cosmos account resource and all child resources.
+Create a Delete Lock on an Azure Cosmos DB account resource and all child resources.
```powershell-interactive $parameters = @{
az cosmosdb update \
--disable-key-based-metadata-write-access true ```
-Create a Delete Lock on an Azure Cosmos account resource
+Create a Delete Lock on an Azure Cosmos DB account resource
```azurecli-interactive az lock create \
resource lock 'Microsoft.Authorization/locks@2017-04-01' = {
scope: account properties: { level: 'CanNotDelete'
- notes: 'Do not delete Azure Cosmos DB SQL API account.'
+ notes: 'Do not delete Azure Cosmos DB API for NoSQL account.'
} } ```
resource lock 'Microsoft.Authorization/locks@2017-04-01' = {
Manage resource locks for Azure Cosmos DB:
-- Cassandra API keyspace and table [Azure CLI](scripts\cli\cassandra\lock.md) | [Azure PowerShell](scripts\powershell\cassandra\lock.md)
-- Gremlin API database and graph [Azure CLI](scripts\cli\gremlin\lock.md) | [Azure PowerShell](scripts\powershell\gremlin\lock.md)
-- MongoDB API database and collection [Azure CLI](scripts\cli\mongodb\lock.md)| [Azure PowerShell](scripts\powershell\mongodb\lock.md)
-- Core (SQL) API database and container [Azure CLI](scripts\cli\sql\lock.md) | [Azure PowerShell](scripts\powershell\sql\lock.md)
-- Table API table [Azure CLI](scripts\cli\table\lock.md) | [Azure PowerShell](scripts\powershell\table\lock.md)
+- API for Cassandra keyspace and table [Azure CLI](scripts\cli\cassandra\lock.md) | [Azure PowerShell](scripts\powershell\cassandra\lock.md)
+- API for Gremlin database and graph [Azure CLI](scripts\cli\gremlin\lock.md) | [Azure PowerShell](scripts\powershell\gremlin\lock.md)
+- API for MongoDB database and collection [Azure CLI](scripts\cli\mongodb\lock.md)| [Azure PowerShell](scripts\powershell\mongodb\lock.md)
+- API for NoSQL database and container [Azure CLI](scripts\cli\sql\lock.md) | [Azure PowerShell](scripts\powershell\sql\lock.md)
+- API for Table table [Azure CLI](scripts\cli\table\lock.md) | [Azure PowerShell](scripts\powershell\table\lock.md)
## Next steps
cosmos-db Restore Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md
Last updated 04/18/2022 -+ # Restore an Azure Cosmos DB account that uses continuous backup mode Azure Cosmos DB's point-in-time restore feature helps you recover from an accidental change within a container, restore a deleted account, database, or container, or restore into any region where backups existed. The continuous backup mode allows you to restore to any point in time within the last 30 days. This article describes how to identify the restore time and restore a live or deleted Azure Cosmos DB account. It shows how to restore the account using the [Azure portal](#restore-account-portal), [PowerShell](#restore-account-powershell), [CLI](#restore-account-cli), or an [Azure Resource Manager template](#restore-arm-template). > [!NOTE]
-> Currently in preview, the restore action for Table API and Gremlin API is supported via PowerShell and the Azure CLI.
+> Currently in preview, the restore action for API for Table and Gremlin is supported via PowerShell and the Azure CLI.
## <a id="restore-account-portal"></a>Restore an account using Azure portal
Before restoring the account, install the [latest version of Azure PowerShell](/
```azurepowershell Select-AzSubscription -Subscription <SubscriptionName>
-### <a id="trigger-restore-ps"></a>Trigger a restore operation for SQL API account
+### <a id="trigger-restore-ps"></a>Trigger a restore operation for API for NoSQL account
The following cmdlet is an example to trigger a restore operation with the restore command by using the target account, source account, location, resource group, and timestamp:
Restore-AzCosmosDBAccount `
-Location "West US" ```
-**Example 3:** Restoring Gremlin API Account. This example restores the graphs *graph1*, *graph2* from *MyDB1* and the entire database *MyDB2*, which, includes all the containers under it.
+**Example 3:** Restoring an API for Gremlin account. This example restores the graphs *graph1*, *graph2* from *MyDB1* and the entire database *MyDB2*, which includes all the containers under it.
```azurepowershell $datatabaseToRestore1 = New-AzCosmosDBGremlinDatabaseToRestore -DatabaseName "MyDB1" -GraphName "graph1", "graph2"
Restore-AzCosmosDBAccount `
```
-**Example 4:** Restoring Table API Account. This example restores the tables *table1*, *table1* from *MyDB1*
+**Example 4:** Restoring an API for Table account. This example restores the tables *table1*, *table2* from *MyDB1*
```azurepowershell $tablesToRestore = New-AzCosmosDBTableToRestore -TableName "table1", "table2"
Import the `Az.CosmosDB` module and run the following command to get the restore
Get-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabaseAccount ```
-### <a id="enumerate-sql-api"></a>Enumerate restorable resources for SQL API
+### <a id="enumerate-sql-api"></a>Enumerate restorable resources for API for NoSQL
The enumeration cmdlets help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account, database, and container resources.
Get-AzCosmosdbSqlRestorableResource `
### <a id="enumerate-mongodb-api"></a>Enumerate restorable resources in API for MongoDB
-The enumeration commands described below help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account, database, and container resources. These commands only work for live accounts and they are similar to SQL API commands but with `MongoDB` in the command name instead of `sql`.
+The enumeration commands described below help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account, database, and container resources. These commands only work for live accounts and they are similar to API for NoSQL commands but with `MongoDB` in the command name instead of `sql`.
#### List all the versions of MongoDB databases in a live database account
Get-AzCosmosdbMongoDBRestorableResource `
-RestoreLocation "West US" ` -RestoreTimestamp "2020-07-20T16:09:53+0000" ```
-### <a id="enumerate-gremlin-api-ps"></a>Enumerate restorable resources for Gremlin API
+### <a id="enumerate-gremlin-api-ps"></a>Enumerate restorable resources for API for Gremlin
The enumeration cmdlets help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account, database, and graph resources.
Get-AzCosmosdbGremlinRestorableDatabase `
#### List all the versions of Gremlin graphs of a database in a live database account
-Use the following command to list all the versions of Gremlin API graphs. This command only works with live accounts. The `DatabaseRId` parameter is the `ResourceId` of the database you want to restore. It is the value of `ownerResourceid` attribute found in the response of `Get-AzCosmosdbGremlinRestorableDatabase` cmdlet. The response also includes a list of operations performed on all the graphs inside this database.
+Use the following command to list all the versions of API for Gremlin graphs. This command only works with live accounts. The `DatabaseRId` parameter is the `ResourceId` of the database you want to restore. It is the value of `ownerResourceid` attribute found in the response of `Get-AzCosmosdbGremlinRestorableDatabase` cmdlet. The response also includes a list of operations performed on all the graphs inside this database.
```azurepowershell Get-AzCosmosdbGremlinRestorableGraph `
Get-AzCosmosdbGremlinRestorableResource `
-RestoreTimestamp "2020-07-20T16:09:53+0000" ```
-### <a id="enumerate-table-api-ps"></a>Enumerate restorable resources for Table API
+### <a id="enumerate-table-api-ps"></a>Enumerate restorable resources for API for Table
The enumeration cmdlets help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account and table resources.
The simplest way to trigger a restore is by issuing the restore command with nam
--databases-to-restore name=MyDB2 collections=Collection3 Collection4 ```
-#### Create a new Azure Cosmos DB Gremlin API account by restoring only selected databases and graphs from an existing Gremlin API account
+#### Create a new Azure Cosmos DB API for Gremlin account by restoring only selected databases and graphs from an existing API for Gremlin account
```azurecli-interactive
The simplest way to trigger a restore is by issuing the restore command with nam
--gremlin-databases-to-restore name=MyDB2 graphs=graph3 graph4 ```
- #### Create a new Azure Cosmos DB Table API account by restoring only selected tables from an existing Table API account
+ #### Create a new Azure Cosmos DB API for Table account by restoring only selected tables from an existing API for Table account
```azurecli-interactive
Run the following command to get the restore details. The `az cosmosdb show` com
az cosmosdb show --name MyCosmosDBDatabaseAccount --resource-group MyResourceGroup ```
-### <a id="enumerate-sql-api-cli"></a>Enumerate restorable resources for SQL API
+### <a id="enumerate-sql-api-cli"></a>Enumerate restorable resources for API for NoSQL
The enumeration commands described below help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account, database, and container resources.
az cosmosdb sql restorable-resource list \
] ```
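For context, a hedged sketch of the neighboring database- and container-level enumeration commands; the location, instance ID, and database RId are placeholders.

```azurecli
# Hedged sketch: list restorable databases, then restorable containers for one database.
az cosmosdb sql restorable-database list \
    --location "West US" \
    --instance-id "<instance-id-of-source-account>"

az cosmosdb sql restorable-container list \
    --location "West US" \
    --instance-id "<instance-id-of-source-account>" \
    --database-rid "<database-rid>"
```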
-### <a id="enumerate-mongodb-api-cli"></a>Enumerate restorable resources for MongoDB API account
+### <a id="enumerate-mongodb-api-cli"></a>Enumerate restorable resources for API for MongoDB account
The enumeration commands described below help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account, database, and container resources. These commands only work for live accounts.
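A hedged sketch of one such enumeration for an API for MongoDB account; the location and instance ID are placeholders.

```azurecli
# Hedged sketch: list restorable MongoDB databases for a live account.
az cosmosdb mongodb restorable-database list \
    --location "West US" \
    --instance-id "<instance-id-of-source-account>"
```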
az cosmosdb gremlin restorable-resource list \
] ```
-### <a id="enumerate-table-api-cli"></a>Enumerate restorable resources for Table API account
+### <a id="enumerate-table-api-cli"></a>Enumerate restorable resources for API for Table account
-The enumeration commands described below help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account and Table API resources. These commands only work for live accounts.
+The enumeration commands described below help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account and API for Table resources. These commands only work for live accounts.
#### List all the versions of tables in a live database account
az cosmosdb table restorable-table list \
] ```
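A hedged sketch of the full form of this command; the location and instance ID are placeholders.

```azurecli
# Hedged sketch: list restorable tables for a live API for Table account.
az cosmosdb table restorable-table list \
    --location "West US" \
    --instance-id "<instance-id-of-source-account>"
```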
-### List all the resources of a Table API account that are available to restore at a given timestamp and region
+### List all the resources of an API for Table account that are available to restore at a given timestamp and region
```azurecli-interactive az cosmosdb table restorable-resource list \
az cosmosdb table restorable-resource list \
You can also restore an account using Azure Resource Manager (ARM) template. When defining the template, include the following parameters:
-### Restore SQL API or MongoDB API account using ARM template
+### Restore API for NoSQL or MongoDB account using ARM template
1. Set the `createMode` parameter to *Restore*.
1. Define the `restoreParameters`. Note that the `restoreSource` value is extracted from the output of the `az cosmosdb restorable-database-account list` command for your source account. The Instance ID attribute for your account name is used to do the restore.
1. Set the `restoreMode` parameter to *PointInTime* and configure the `restoreTimestampInUtc` value.
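For reference, a hedged sketch of looking up the values mentioned in step 2; the account name is a placeholder.

```azurecli
# Hedged sketch: the "id" field of the matching entry is the restoreSource value,
# and "name" is the instance ID of the source account.
az cosmosdb restorable-database-account list --account-name my-source-account
```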
-Use the following ARM template to restore an account for the Azure Cosmos DB SQL API or MongoDB API. Examples for other APIs are provided next.
+Use the following ARM template to restore an account for the Azure Cosmos DB API for NoSQL or MongoDB. Examples for other APIs are provided next.
```json {
Use the following ARM template to restore an account for the Azure Cosmos DB SQL
} ```
-### Restore Gremlin API account using ARM template
+### Restore API for Gremlin account using ARM template
```json {
Use the following ARM template to restore an account for the Azure Cosmos DB SQL
} ```
-### Restore Table API account using ARM template
+### Restore API for Table account using ARM template
```json {
cosmos-db Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/role-based-access-control.md
Last updated 05/11/2022 -+ # Azure role-based access control in Azure Cosmos DB > [!NOTE] > This article is about role-based access control for management plane operations in Azure Cosmos DB. If you are using data plane operations, data is secured using primary keys, resource tokens, or the Azure Cosmos DB RBAC.
-To learn more about role-based access control applied to data plane operations in the SQL API, see [Secure access to data](secure-access-to-data.md) and [Azure Cosmos DB RBAC](how-to-setup-rbac.md) articles. For the Cosmos DB API for MongoDB, see [Data Plane RBAC in the API for MongoDB](mongodb/how-to-setup-rbac.md).
+To learn more about role-based access control applied to data plane operations in the API for NoSQL, see [Secure access to data](secure-access-to-data.md) and [Azure Cosmos DB RBAC](how-to-setup-rbac.md) articles. For the Azure Cosmos DB API for MongoDB, see [Data Plane RBAC in the API for MongoDB](mongodb/how-to-setup-rbac.md).
-Azure Cosmos DB provides built-in Azure role-based access control (Azure RBAC) for common management scenarios in Azure Cosmos DB. An individual who has a profile in Azure Active Directory can assign these Azure roles to users, groups, service principals, or managed identities to grant or deny access to resources and operations on Azure Cosmos DB resources. Role assignments are scoped to control-plane access only, which includes access to Azure Cosmos accounts, databases, containers, and offers (throughput).
+Azure Cosmos DB provides built-in Azure role-based access control (Azure RBAC) for common management scenarios in Azure Cosmos DB. An individual who has a profile in Azure Active Directory can assign these Azure roles to users, groups, service principals, or managed identities to grant or deny access to resources and operations on Azure Cosmos DB resources. Role assignments are scoped to control-plane access only, which includes access to Azure Cosmos DB accounts, databases, containers, and offers (throughput).
## Built-in roles
The following are the built-in roles supported by Azure Cosmos DB:
## Identity and access management (IAM)
-The **Access control (IAM)** pane in the Azure portal is used to configure Azure role-based access control on Azure Cosmos resources. The roles are applied to users, groups, service principals, and managed identities in Active Directory. You can use built-in roles or custom roles for individuals and groups. The following screenshot shows Active Directory integration (Azure RBAC) using access control (IAM) in the Azure portal:
+The **Access control (IAM)** pane in the Azure portal is used to configure Azure role-based access control on Azure Cosmos DB resources. The roles are applied to users, groups, service principals, and managed identities in Active Directory. You can use built-in roles or custom roles for individuals and groups. The following screenshot shows Active Directory integration (Azure RBAC) using access control (IAM) in the Azure portal:
:::image type="content" source="./media/role-based-access-control/database-security-identity-access-management-rbac.png" alt-text="Access control (IAM) in the Azure portal - demonstrating database security.":::
The **Access control (IAM)** pane in the Azure portal is used to configure Azure
In addition to the built-in roles, users may also create [custom roles](../role-based-access-control/custom-roles.md) in Azure and apply these roles to service principals across all subscriptions within their Active Directory tenant. Custom roles provide users a way to create Azure role definitions with a custom set of resource provider operations. To learn which operations are available for building custom roles for Azure Cosmos DB, see [Azure Cosmos DB resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftdocumentdb). > [!TIP]
-> Custom roles that need to access data stored within Cosmos DB or use Data Explorer in the Azure portal must have `Microsoft.DocumentDB/databaseAccounts/listKeys/*` action.
+> Custom roles that need to access data stored within Azure Cosmos DB or use Data Explorer in the Azure portal must have `Microsoft.DocumentDB/databaseAccounts/listKeys/*` action.
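A hedged sketch of a custom role definition that includes this action; the role name, the `read` action, and the subscription ID are illustrative assumptions, not the article's definition.

```azurecli
# Hedged sketch: create a custom role that can read account metadata and list keys
# so that Data Explorer works for its assignees.
cat > cosmos-data-explorer-role.json <<'EOF'
{
  "Name": "Cosmos DB Data Explorer Reader (example)",
  "Description": "Example role: read Azure Cosmos DB accounts and list keys.",
  "Actions": [
    "Microsoft.DocumentDB/databaseAccounts/read",
    "Microsoft.DocumentDB/databaseAccounts/listKeys/*"
  ],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
EOF

az role definition create --role-definition @cosmos-data-explorer-role.json
```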
## <a id="prevent-sdk-changes"></a>Preventing changes from the Azure Cosmos DB SDKs
-The Azure Cosmos DB resource provider can be locked down to prevent any changes to resources from a client connecting using the account keys (that is applications connecting via the Azure Cosmos SDK). This feature may be desirable for users who want higher degrees of control and governance for production environments. Preventing changes from the SDK also enables features such as resource locks and diagnostic logs for control plane operations. The clients connecting from Azure Cosmos DB SDK will be prevented from changing any property for the Azure Cosmos accounts, databases, containers, and throughput. The operations involving reading and writing data to Cosmos containers themselves are not impacted.
+The Azure Cosmos DB resource provider can be locked down to prevent any changes to resources from a client connecting using the account keys (that is applications connecting via the Azure Cosmos DB SDK). This feature may be desirable for users who want higher degrees of control and governance for production environments. Preventing changes from the SDK also enables features such as resource locks and diagnostic logs for control plane operations. The clients connecting from Azure Cosmos DB SDK will be prevented from changing any property for the Azure Cosmos DB accounts, databases, containers, and throughput. The operations involving reading and writing data to Azure Cosmos DB containers themselves are not impacted.
When this feature is enabled, changes to any resource can only be made from a user with the right Azure role and Azure Active Directory credentials including Managed Service Identities.
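A hedged sketch of turning this lockdown on with the Azure CLI; the account and resource group names are placeholders.

```azurecli
# Hedged sketch: block management-plane changes made with account keys.
# Data operations keep working; control-plane changes then require Azure AD and RBAC.
az cosmosdb update \
    --name my-cosmos-account \
    --resource-group MyResourceGroup \
    --disable-key-based-metadata-write-access true
```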
When this feature is enabled, changes to any resource can only be made from a us
### Check list before enabling
-This setting will prevent any changes to any Cosmos resource from any client connecting using account keys including any Cosmos DB SDK, any tools that connect via account keys. To prevent issues or errors from applications after enabling this feature, check if applications perform any of the following actions before enabling this feature, including:
+This setting will prevent any changes to any Azure Cosmos DB resource from any client connecting using account keys including any Azure Cosmos DB SDK, any tools that connect via account keys. To prevent issues or errors from applications after enabling this feature, check if applications perform any of the following actions before enabling this feature, including:
- Creating, deleting child resources such as databases and containers. This includes resources for other APIs such as Cassandra, MongoDB, Gremlin, and table resources.
Update-AzCosmosDBAccount -ResourceGroupName [ResourceGroupName] -Name [CosmosDBA
- [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) - [Azure custom roles](../role-based-access-control/custom-roles.md) - [Azure Cosmos DB resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftdocumentdb)-- [Configure role-based access control for your Azure Cosmos DB API for MongoDB](mongodb/how-to-setup-rbac.md)
+- [Configure role-based access control for your Azure Cosmos DB for MongoDB](mongodb/how-to-setup-rbac.md)
cosmos-db Scaling Provisioned Throughput Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scaling-provisioned-throughput-best-practices.md
Title: Best practices for scaling provisioned throughput (RU/s)
description: Learn best practices for scaling provisioned throughput for manual and autoscale throughput -++ Last updated 08/20/2021
# Best practices for scaling provisioned throughput (RU/s) This article describes best practices and strategies for scaling the throughput (RU/s) of your database or container (collection, table, or graph). The concepts apply when you're increasing either the provisioned manual RU/s or the autoscale max RU/s of any resource for any of the Azure Cosmos DB APIs.
Follow [best practices](partitioning-overview.md#choose-partitionkey) for choosi
### Step 2: Calculate the number of physical partitions you'll need `Number of physical partitions = Total data size in GB / Target data per physical partition in GB`
-Each physical partition can hold a maximum of 50 GB of storage (30 GB for Cassandra API). The value you should choose for the `Target data per physical partition in GB` depends on how fully packed you want the physical partitions to be and how much you expect storage to grow post-migration.
+Each physical partition can hold a maximum of 50 GB of storage (30 GB for API for Cassandra). The value you should choose for the `Target data per physical partition in GB` depends on how fully packed you want the physical partitions to be and how much you expect storage to grow post-migration.
For example, if you anticipate that storage will continue to grow, you may choose to set the value to 30 GB. Assuming you've chosen a good partition key that evenly distributes storage, each partition will be ~60% full (30 GB out of 50 GB). As future data is written, it can be stored on the existing set of physical partitions, without requiring the service to immediately add more physical partitions.
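As a hedged illustration with hypothetical numbers: for 900 GB of total data and a target of 30 GB per physical partition, `Number of physical partitions = 900 GB / 30 GB = 30`. Each of those 30 partitions would start roughly 60% full (30 GB of the 50-GB limit), leaving headroom for growth before more physical partitions are needed.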
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/autoscale.md
Title: Azure Cosmos DB Cassandra API keyspace and table with autoscale
-description: Use Azure CLI to create an Azure Cosmos DB Cassandra API account, keyspace, and table with autoscale.
+ Title: Azure Cosmos DB for Apache Cassandra keyspace and table with autoscale
+description: Use Azure CLI to create an Azure Cosmos DB for Apache Cassandra account, keyspace, and table with autoscale.
-+ Last updated 05/02/2022-+
-# Use Azure CLI to create a Cassandra API account, keyspace, and table with autoscale
+# Use Azure CLI to create an API for Cassandra account, keyspace, and table with autoscale
-The script in this article creates an Azure Cosmos DB Cassandra API account, keyspace, and table with autoscale.
+The script in this article creates an Azure Cosmos DB for Apache Cassandra account, keyspace, and table with autoscale.
## Prerequisites
The script in this article creates an Azure Cosmos DB Cassandra API account, key
This script uses the following commands: - [az group create](/cli/azure/group#az-group-create) creates a resource group to store all resources.-- [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) with the `--capabilities EnableCassandra` parameter creates a Cassandra API-enabled Azure Cosmos DB account.
+- [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) with the `--capabilities EnableCassandra` parameter creates an Azure Cosmos DB account enabled for the API for Cassandra.
- [az cosmosdb cassandra keyspace create](/cli/azure/cosmosdb/cassandra/keyspace#az-cosmosdb-cassandra-keyspace-create) creates an Azure Cosmos DB Cassandra keyspace. - [az cosmosdb cassandra table create](/cli/azure/cosmosdb/cassandra/table#az-cosmosdb-cassandra-table-create) with the `--max-throughput` parameter set to minimum `4000` creates an Azure Cosmos DB Cassandra table with autoscale.
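A minimal sketch of how those commands might be combined; the resource group, account, keyspace, and table names and the schema file are placeholders rather than the article's actual script.

```azurecli
# Hedged sketch: API for Cassandra account, keyspace, and autoscale table.
az group create --name MyResourceGroup --location westus2

az cosmosdb create \
    --name my-cassandra-account \
    --resource-group MyResourceGroup \
    --capabilities EnableCassandra

az cosmosdb cassandra keyspace create \
    --account-name my-cassandra-account \
    --resource-group MyResourceGroup \
    --name mykeyspace

az cosmosdb cassandra table create \
    --account-name my-cassandra-account \
    --resource-group MyResourceGroup \
    --keyspace-name mykeyspace \
    --name mytable \
    --max-throughput 4000 \
    --schema @schema.json   # schema.json is a placeholder Cassandra table schema
```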
az group delete --name $resourceGroup
## Next steps
-[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
+[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/create.md
-++ Last updated 02/21/2022
-# Create an Azure Cosmos Cassandra API account, keyspace and table using Azure CLI
+# Create an Azure Cosmos DB Cassandra API account, keyspace and table using Azure CLI
-The script in this article demonstrates creating an Azure Cosmos DB account, keyspace, and table for Cassandra API.
+The script in this article demonstrates creating an Azure Cosmos DB account, keyspace, and table for API for Cassandra.
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
This script uses the following commands. Each command in the table links to comm
||| | [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. | | [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb cassandra keyspace create](/cli/azure/cosmosdb/cassandra/keyspace#az-cosmosdb-cassandra-keyspace-create) | Creates an Azure Cosmos Cassandra keyspace. |
-| [az cosmosdb cassandra table create](/cli/azure/cosmosdb/cassandra/table#az-cosmosdb-cassandra-table-create) | Creates an Azure Cosmos Cassandra table. |
+| [az cosmosdb cassandra keyspace create](/cli/azure/cosmosdb/cassandra/keyspace#az-cosmosdb-cassandra-keyspace-create) | Creates an Azure Cosmos DB Cassandra keyspace. |
+| [az cosmosdb cassandra table create](/cli/azure/cosmosdb/cassandra/table#az-cosmosdb-cassandra-table-create) | Creates an Azure Cosmos DB Cassandra table. |
| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. | ## Next steps
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/lock.md
-++ Last updated 02/21/2022
-# Create a resource lock for Azure Cosmos Cassandra API keyspace and table using Azure CLI
+# Create a resource lock for Azure Cosmos DB Cassandra API keyspace and table using Azure CLI
The script in this article demonstrates preventing resources from being deleted with resource locks.
The script in this article demonstrates preventing resources from being deleted
> > To create resource locks, you must have membership in the owner role in the subscription. >
-> Resource locks do not work for changes made by users connecting using any Cassandra SDK, CQL Shell, or the Azure Portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
+> Resource locks do not work for changes made by users connecting using any Cassandra SDK, CQL Shell, or the Azure Portal unless the Azure Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/serverless.md
-++ Last updated 02/21/2022
-# Create an Azure Cosmos Cassandra API serverless account, keyspace and table using Azure CLI
+# Create an Azure Cosmos DB Cassandra API serverless account, keyspace and table using Azure CLI
-The script in this article demonstrates creating a serverless Azure Cosmos DB account, keyspace, and table for Cassandra API.
+The script in this article demonstrates creating a serverless Azure Cosmos DB account, keyspace, and table for API for Cassandra.
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
This script uses the following commands. Each command in the table links to comm
||| | [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. | | [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb cassandra keyspace create](/cli/azure/cosmosdb/cassandra/keyspace#az-cosmosdb-cassandra-keyspace-create) | Creates an Azure Cosmos Cassandra keyspace. |
-| [az cosmosdb cassandra table create](/cli/azure/cosmosdb/cassandra/table#az-cosmosdb-cassandra-table-create) | Creates an Azure Cosmos Cassandra table. |
+| [az cosmosdb cassandra keyspace create](/cli/azure/cosmosdb/cassandra/keyspace#az-cosmosdb-cassandra-keyspace-create) | Creates an Azure Cosmos DB Cassandra keyspace. |
+| [az cosmosdb cassandra table create](/cli/azure/cosmosdb/cassandra/table#az-cosmosdb-cassandra-table-create) | Creates an Azure Cosmos DB Cassandra table. |
| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. | ## Next steps
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/throughput.md
Title: Perform throughput (RU/s) operations for Azure Cosmos DB Cassandra API resources
-description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Cassandra API resources
+ Title: Perform throughput (RU/s) operations for Azure Cosmos DB for Apache Cassandra resources
+description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB for Apache Cassandra resources
-++ Last updated 02/21/2022
-# Throughput (RU/s) operations with Azure CLI for a keyspace or table for Azure Cosmos DB - Cassandra API
+# Throughput (RU/s) operations with Azure CLI for a keyspace or table for Azure Cosmos DB - API for Cassandra
The script in this article creates a Cassandra keyspace with shared throughput and a Cassandra table with dedicated throughput, then updates the throughput for both the keyspace and table. The script then migrates from standard to autoscale throughput and reads the value of the autoscale throughput after it has been migrated.
This script uses the following commands. Each command in the table links to comm
||| | [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. | | [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb cassandra keyspace create](/cli/azure/cosmosdb/cassandra/keyspace#az-cosmosdb-cassandra-keyspace-create) | Creates an Azure Cosmos Cassandra keyspace. |
-| [az cosmosdb cassandra table create](/cli/azure/cosmosdb/cassandra/table#az-cosmosdb-cassandra-table-create) | Creates an Azure Cosmos Cassandra table. |
-| [az cosmosdb cassandra keyspace throughput update](/cli/azure/cosmosdb/cassandra/keyspace/throughput#az-cosmosdb-cassandra-keyspace-throughput-update) | Update RU/s for an Azure Cosmos Cassandra keyspace. |
-| [az cosmosdb cassandra table throughput update](/cli/azure/cosmosdb/cassandra/table/throughput#az-cosmosdb-cassandra-table-throughput-update) | Update RU/s for an Azure Cosmos Cassandra table. |
-| [az cosmosdb cassandra keyspace throughput migrate](/cli/azure/cosmosdb/cassandra/keyspace/throughput#az-cosmosdb-cassandra-keyspace-throughput-migrate) | Migrate throughput for an Azure Cosmos Cassandra keyspace. |
-| [az cosmosdb cassandra table throughput migrate](/cli/azure/cosmosdb/cassandra/table/throughput#az-cosmosdb-cassandra-table-throughput-migrate) | Migrate throughput for an Azure Cosmos Cassandra table. |
+| [az cosmosdb cassandra keyspace create](/cli/azure/cosmosdb/cassandra/keyspace#az-cosmosdb-cassandra-keyspace-create) | Creates an Azure Cosmos DB Cassandra keyspace. |
+| [az cosmosdb cassandra table create](/cli/azure/cosmosdb/cassandra/table#az-cosmosdb-cassandra-table-create) | Creates an Azure Cosmos DB Cassandra table. |
+| [az cosmosdb cassandra keyspace throughput update](/cli/azure/cosmosdb/cassandra/keyspace/throughput#az-cosmosdb-cassandra-keyspace-throughput-update) | Update RU/s for an Azure Cosmos DB Cassandra keyspace. |
+| [az cosmosdb cassandra table throughput update](/cli/azure/cosmosdb/cassandra/table/throughput#az-cosmosdb-cassandra-table-throughput-update) | Update RU/s for an Azure Cosmos DB Cassandra table. |
+| [az cosmosdb cassandra keyspace throughput migrate](/cli/azure/cosmosdb/cassandra/keyspace/throughput#az-cosmosdb-cassandra-keyspace-throughput-migrate) | Migrate throughput for an Azure Cosmos DB Cassandra keyspace. |
+| [az cosmosdb cassandra table throughput migrate](/cli/azure/cosmosdb/cassandra/table/throughput#az-cosmosdb-cassandra-table-throughput-migrate) | Migrate throughput for an Azure Cosmos DB Cassandra table. |
| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. | ## Next steps
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/free-tier.md
+ Last updated 07/08/2022 # Find an existing Azure Cosmos DB free-tier account in a subscription using Azure CLI The script in this article demonstrates how to locate an Azure Cosmos DB free-tier account within a subscription.
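A hedged sketch of one way to do this lookup; the JMESPath query and output columns are illustrative assumptions, not the article's script.

```azurecli
# Hedged sketch: list accounts in the current subscription that report free tier enabled.
az cosmosdb list \
    --query "[?enableFreeTier].{name:name, resourceGroup:resourceGroup}" \
    --output table
```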
For Azure CLI samples for specific APIs, see:
- [CLI Samples for Cassandra](../../../cassandr) - [CLI Samples for Gremlin](../../../graph/cli-samples.md)-- [CLI Samples for MongoDB API](../../../mongodb/cli-samples.md)
+- [CLI Samples for API for MongoDB](../../../mongodb/cli-samples.md)
- [CLI Samples for SQL](../../../sql/cli-samples.md) - [CLI Samples for Table](../../../table/cli-samples.md)
cosmos-db Ipfirewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/ipfirewall.md
Title: Create an Azure Cosmos account with IP firewall
-description: Create an Azure Cosmos account with IP firewall
+ Title: Create an Azure Cosmos DB account with IP firewall
+description: Create an Azure Cosmos DB account with IP firewall
+ Last updated 02/21/2022
-# Create an Azure Cosmos account with IP firewall using Azure CLI
+# Create an Azure Cosmos DB account with IP firewall using Azure CLI
-The script in this article demonstrates creating a Cosmos DB account with default values and IP Firewall enabled. It uses a SQL (Core) API account, but these operations are identical across all database APIs in Cosmos DB. To use this sample for other APIs, apply the `ip-range-filter` parameter in the script to the `az cosmosdb account create` command for your API specific script.
+The script in this article demonstrates creating an Azure Cosmos DB account with default values and IP Firewall enabled. It uses an API for NoSQL account, but these operations are identical across all database APIs in Azure Cosmos DB. To use this sample for other APIs, apply the `ip-range-filter` parameter in the script to the `az cosmosdb account create` command for your API-specific script.
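A hedged sketch of the parameter in use; the account name and IP ranges are placeholders.

```azurecli
# Hedged sketch: create an account that only accepts traffic from these IPs and CIDR ranges.
az cosmosdb create \
    --name my-cosmos-account \
    --resource-group MyResourceGroup \
    --ip-range-filter "40.76.54.131,52.176.6.30,10.0.0.0/24"
```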
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
For Azure CLI samples for specific APIs see:
- [CLI Samples for Cassandra](../../../cassandr) - [CLI Samples for Gremlin](../../../graph/cli-samples.md)-- [CLI Samples for MongoDB API](../../../mongodb/cli-samples.md)
+- [CLI Samples for API for MongoDB](../../../mongodb/cli-samples.md)
- [CLI Samples for SQL](../../../sql/cli-samples.md) - [CLI Samples for Table](../../../table/cli-samples.md)
cosmos-db Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/keys.md
Title: Work with account keys and connection strings for an Azure Cosmos account
-description: Work with account keys and connection strings for an Azure Cosmos account
+ Title: Work with account keys and connection strings for an Azure Cosmos DB account
+description: Work with account keys and connection strings for an Azure Cosmos DB account
+ Last updated 02/21/2022
-# Work with account keys and connection strings for an Azure Cosmos account using Azure CLI
+# Work with account keys and connection strings for an Azure Cosmos DB account using Azure CLI
The script in this article demonstrates four operations.
The script in this article demonstrates four operations.
- List connection strings - Regenerate account keys
- This script uses a SQL (Core) API account, but these operations are identical across all database APIs in Cosmos DB.
+ This script uses an API for NoSQL account, but these operations are identical across all database APIs in Azure Cosmos DB.
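A hedged sketch of the four operations; the account and resource group names are placeholders.

```azurecli
# Hedged sketch: list keys, read-only keys, and connection strings, then rotate a key.
az cosmosdb keys list --name my-cosmos-account --resource-group MyResourceGroup --type keys
az cosmosdb keys list --name my-cosmos-account --resource-group MyResourceGroup --type read-only-keys
az cosmosdb keys list --name my-cosmos-account --resource-group MyResourceGroup --type connection-strings
az cosmosdb keys regenerate --name my-cosmos-account --resource-group MyResourceGroup --key-kind secondary
```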
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
For Azure CLI samples for specific APIs see:
- [CLI Samples for Cassandra](../../../cassandr) - [CLI Samples for Gremlin](../../../graph/cli-samples.md)-- [CLI Samples for MongoDB API](../../../mongodb/cli-samples.md)
+- [CLI Samples for API for MongoDB](../../../mongodb/cli-samples.md)
- [CLI Samples for SQL](../../../sql/cli-samples.md) - [CLI Samples for Table](../../../table/cli-samples.md)
cosmos-db Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/regions.md
Title: Add regions, change failover priority, trigger failover for an Azure Cosmos account
-description: Add regions, change failover priority, trigger failover for an Azure Cosmos account
+ Title: Add regions, change failover priority, trigger failover for an Azure Cosmos DB account
+description: Add regions, change failover priority, trigger failover for an Azure Cosmos DB account
+ Last updated 02/21/2022
-# Add regions, change failover priority, trigger failover for an Azure Cosmos account using Azure CLI
+# Add regions, change failover priority, trigger failover for an Azure Cosmos DB account using Azure CLI
The script in this article demonstrates three operations. -- Add a region to an existing Azure Cosmos account.
+- Add a region to an existing Azure Cosmos DB account.
- Change regional failover priority (applies to accounts using service-managed failover) - Trigger a manual failover from primary to secondary regions (applies to accounts with manual failover)
-This script uses a SQL (Core) API account, but these operations are identical across all database APIs in Cosmos DB.
+This script uses an API for NoSQL account, but these operations are identical across all database APIs in Azure Cosmos DB.
> [!IMPORTANT]
-> Add and remove region operations on a Cosmos account cannot be done while changing other properties.
+> Add and remove region operations on an Azure Cosmos DB account cannot be done while changing other properties.
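A hedged sketch of the first two operations; the account name, resource group, and regions are placeholders.

```azurecli
# Hedged sketch: add a second region, then change which region has failover priority 0.
az cosmosdb update \
    --name my-cosmos-account \
    --resource-group MyResourceGroup \
    --locations regionName=westus2 failoverPriority=0 isZoneRedundant=False \
    --locations regionName=eastus2 failoverPriority=1 isZoneRedundant=False

az cosmosdb failover-priority-change \
    --name my-cosmos-account \
    --resource-group MyResourceGroup \
    --failover-policies eastus2=0 westus2=1
```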
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
For Azure CLI samples for specific APIs see:
- [CLI Samples for Cassandra](../../../cassandr) - [CLI Samples for Gremlin](../../../graph/cli-samples.md)-- [CLI Samples for MongoDB API](../../../mongodb/cli-samples.md)
+- [CLI Samples for API for MongoDB](../../../mongodb/cli-samples.md)
- [CLI Samples for SQL](../../../sql/cli-samples.md) - [CLI Samples for Table](../../../table/cli-samples.md)
cosmos-db Service Endpoints Ignore Missing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints-ignore-missing-vnet.md
Title: Connect an existing Azure Cosmos account with virtual network service endpoints
-description: Connect an existing Azure Cosmos account with virtual network service endpoints
+ Title: Connect an existing Azure Cosmos DB account with virtual network service endpoints
+description: Connect an existing Azure Cosmos DB account with virtual network service endpoints
+ Last updated 02/21/2022
-# Connect an existing Azure Cosmos account with virtual network service endpoints using Azure CLI
+# Connect an existing Azure Cosmos DB account with virtual network service endpoints using Azure CLI
-The script in this article demonstrates connecting an existing Azure Cosmos account to an existing new virtual network where the subnet is not yet configured for service endpoints by using the `ignore-missing-vnet-service-endpoint` parameter. This allows the configuration for the Cosmos account to complete without error before the configuration to the virtual network's subnet is completed. Once the subnet configuration is complete, the Cosmos account is accessible through the configured subnet.
+The script in this article demonstrates connecting an existing Azure Cosmos DB account to an existing virtual network where the subnet is not yet configured for service endpoints by using the `ignore-missing-vnet-service-endpoint` parameter. This allows the configuration for the Azure Cosmos DB account to complete without error before the configuration to the virtual network's subnet is completed. Once the subnet configuration is complete, the Azure Cosmos DB account is accessible through the configured subnet.
-This script uses a SQL (Core) API account. To use this sample for other APIs, apply the `enable-virtual-network` and `virtual-network-rules` parameters in the script below to your API specific script.
+This script uses an API for NoSQL account. To use this sample for other APIs, apply the `enable-virtual-network` and `virtual-network-rules` parameters in the script below to your API-specific script.
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
For Azure CLI samples for specific APIs see:
- [CLI Samples for Cassandra](../../../cassandr) - [CLI Samples for Gremlin](../../../graph/cli-samples.md)-- [CLI Samples for MongoDB API](../../../mongodb/cli-samples.md)
+- [CLI Samples for API for MongoDB](../../../mongodb/cli-samples.md)
- [CLI Samples for SQL](../../../sql/cli-samples.md) - [CLI Samples for Table](../../../table/cli-samples.md)
cosmos-db Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints.md
Title: Create an Azure Cosmos account with virtual network service endpoints
-description: Create an Azure Cosmos account with virtual network service endpoints
+ Title: Create an Azure Cosmos DB account with virtual network service endpoints
+description: Create an Azure Cosmos DB account with virtual network service endpoints
+ Last updated 02/21/2022
-# Create an Azure Cosmos account with virtual network service endpoints using Azure CLI
+# Create an Azure Cosmos DB account with virtual network service endpoints using Azure CLI
-The script in this article creates a new virtual network with a front and back end subnet and enables service endpoints for `Microsoft.AzureCosmosDB`. It then retrieves the resource ID for this subnet and applies it to the Azure Cosmos account and enables service endpoints for the account.
+The script in this article creates a new virtual network with a front and back end subnet and enables service endpoints for `Microsoft.AzureCosmosDB`. It then retrieves the resource ID for this subnet and applies it to the Azure Cosmos DB account and enables service endpoints for the account.
-This script uses a Core (SQL) API account. To use this sample for other APIs, apply the `enable-virtual-network` and `virtual-network-rules` parameters in the script below to your API specific script.
+This script uses an API for NoSQL account. To use this sample for other APIs, apply the `enable-virtual-network` and `virtual-network-rules` parameters in the script below to your API-specific script.
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
For Azure CLI samples for specific APIs see:
- [CLI Samples for Cassandra](../../../cassandr) - [CLI Samples for Gremlin](../../../graph/cli-samples.md)-- [CLI Samples for MongoDB API](../../../mongodb/cli-samples.md)
+- [CLI Samples for API for MongoDB](../../../mongodb/cli-samples.md)
- [CLI Samples for SQL](../../../sql/cli-samples.md) - [CLI Samples for Table](../../../table/cli-samples.md)
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/autoscale.md
Title: Azure Cosmos DB Gremlin database and graph with autoscale
-description: Use this Azure CLI script to create an Azure Cosmos DB Gremlin API account, database, and graph with autoscale.
+ Title: Azure Cosmos DB for Gremlin database and graph with autoscale
+description: Use this Azure CLI script to create an Azure Cosmos DB for Gremlin account, database, and graph with autoscale.
-+ Last updated 05/02/2022-+
-# Use Azure CLI to create a Gremlin API account, database, and graph with autoscale
+# Use Azure CLI to create an API for Gremlin account, database, and graph with autoscale
-The script in this article creates an Azure Cosmos DB Gremlin API account, database, and graph with autoscale.
+The script in this article creates an Azure Cosmos DB for Gremlin account, database, and graph with autoscale.
## Prerequisites
This script uses the following commands:
- [az group create](/cli/azure/group#az-group-create) creates a resource group to store all resources. - [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) with the `--capabilities EnableGremlin` parameter creates a Gremlin-enabled Azure Cosmos DB account.-- [az cosmosdb gremlin database create](/cli/azure/cosmosdb/gremlin/database#az-cosmosdb-gremlin-database-create) creates an Azure Cosmos DB Gremlin database.-- [az cosmosdb gremlin graph create](/cli/azure/cosmosdb/gremlin/graph#az-cosmosdb-gremlin-graph-create) with the `--max-throughput` parameter set to minimum `4000` creates an Azure Cosmos DB Gremlin graph with autoscale.
+- [az cosmosdb gremlin database create](/cli/azure/cosmosdb/gremlin/database#az-cosmosdb-gremlin-database-create) creates an Azure Cosmos DB for Gremlin database.
+- [az cosmosdb gremlin graph create](/cli/azure/cosmosdb/gremlin/graph#az-cosmosdb-gremlin-graph-create) with the `--max-throughput` parameter set to minimum `4000` creates an Azure Cosmos DB for Gremlin graph with autoscale.
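A minimal sketch combining those commands; the account, database, and graph names and the partition key path are placeholders, not the article's script.

```azurecli
# Hedged sketch: API for Gremlin account, database, and autoscale graph.
az cosmosdb create \
    --name my-gremlin-account \
    --resource-group MyResourceGroup \
    --capabilities EnableGremlin

az cosmosdb gremlin database create \
    --account-name my-gremlin-account \
    --resource-group MyResourceGroup \
    --name mydatabase

az cosmosdb gremlin graph create \
    --account-name my-gremlin-account \
    --resource-group MyResourceGroup \
    --database-name mydatabase \
    --name mygraph \
    --partition-key-path "/partitionKey" \
    --max-throughput 4000
```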
:::code language="azurecli" source="~/azure_cli_scripts/cosmosdb/gremlin/autoscale.sh" id="FullScript":::
az group delete --name $resourceGroup
## Next steps
-[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
+[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/create.md
-++ Last updated 02/21/2022
-# Create an Azure Cosmos Gremlin API account, database and graph using Azure CLI
+# Create an Azure Cosmos DB for Gremlin account, database and graph using Azure CLI
The script in this article demonstrates creating a Gremlin database and graph.
This script uses the following commands. Each command in the table links to comm
||| | [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. | | [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb gremlin database create](/cli/azure/cosmosdb/gremlin/database#az-cosmosdb-gremlin-database-create) | Creates an Azure Cosmos Gremlin database. |
-| [az cosmosdb gremlin graph create](/cli/azure/cosmosdb/gremlin/graph#az-cosmosdb-gremlin-graph-create) | Creates an Azure Cosmos Gremlin graph. |
+| [az cosmosdb gremlin database create](/cli/azure/cosmosdb/gremlin/database#az-cosmosdb-gremlin-database-create) | Creates an Azure Cosmos DB for Gremlin database. |
+| [az cosmosdb gremlin graph create](/cli/azure/cosmosdb/gremlin/graph#az-cosmosdb-gremlin-graph-create) | Creates an Azure Cosmos DB for Gremlin graph. |
| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. | ## Next steps
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/lock.md
-++ Last updated 02/21/2022
-# Create a resource lock for Azure Cosmos Gremlin API database and graph using Azure CLI
+# Create a resource lock for Azure Cosmos DB for Gremlin database and graph using Azure CLI
The script in this article demonstrates performing resource lock operations for a Gremlin database and graph.
The script in this article demonstrates performing resource lock operations for
> > To create resource locks, you must have membership in the owner role in the subscription. >
-> Resource locks do not work for changes made by users connecting using any Gremlin SDK or the Azure Portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
+> Resource locks do not work for changes made by users connecting using any Gremlin SDK or the Azure Portal unless the Azure Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/serverless.md
Title: Azure Cosmos DB Gremlin serverless account, database, and graph
-description: Use this Azure CLI script to create an Azure Cosmos DB Gremlin serverless account, database, and graph.
+ Title: Azure Cosmos DB for Gremlin serverless account, database, and graph
+description: Use this Azure CLI script to create an Azure Cosmos DB for Gremlin serverless account, database, and graph.
-+ Last updated 05/02/2022-+ # Use Azure CLI to create a Gremlin serverless account, database, and graph
-The script in this article creates an Azure Cosmos DB Gremlin API serverless account, database, and graph.
+The script in this article creates an Azure Cosmos DB for Gremlin serverless account, database, and graph.
## Prerequisites
This script uses the following commands:
- [az group create](/cli/azure/group#az-group-create) creates a resource group to store all resources. - [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) with the `--capabilities EnableGremlin EnableServerless` parameter creates a Gremlin-enabled, serverless Azure Cosmos DB account.-- [az cosmosdb gremlin database create](/cli/azure/cosmosdb/gremlin/database#az-cosmosdb-gremlin-database-create) creates an Azure Cosmos DB Gremlin database.-- [az cosmosdb gremlin graph create](/cli/azure/cosmosdb/gremlin/graph#az-cosmosdb-gremlin-graph-create) creates an Azure Cosmos DB Gremlin graph.
+- [az cosmosdb gremlin database create](/cli/azure/cosmosdb/gremlin/database#az-cosmosdb-gremlin-database-create) creates an Azure Cosmos DB for Gremlin database.
+- [az cosmosdb gremlin graph create](/cli/azure/cosmosdb/gremlin/graph#az-cosmosdb-gremlin-graph-create) creates an Azure Cosmos DB for Gremlin graph.
:::code language="azurecli" source="~/azure_cli_scripts/cosmosdb/gremlin/serverless.sh" id="FullScript":::
az group delete --name $resourceGroup
## Next steps
-[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
+[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/throughput.md
Title: Perform throughput (RU/s) operations for Azure Cosmos DB Gremlin API resources
-description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Gremlin API resources
+ Title: Perform throughput (RU/s) operations for Azure Cosmos DB for Gremlin resources
+description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB for Gremlin resources
-++ Last updated 02/21/2022
-# Throughput (RU/s) operations with Azure CLI for a database or graph for Azure Cosmos DB - Gremlin API
+# Throughput (RU/s) operations with Azure CLI for a database or graph for Azure Cosmos DB - API for Gremlin
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
This script uses the following commands. Each command in the table links to comm
||| | [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. | | [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb gremlin database create](/cli/azure/cosmosdb/gremlin/database#az-cosmosdb-gremlin-database-create) | Creates an Azure Cosmos Gremlin database. |
-| [az cosmosdb gremlin graph create](/cli/azure/cosmosdb/gremlin/graph#az-cosmosdb-gremlin-graph-create) | Creates an Azure Cosmos Gremlin graph. |
-| [az cosmosdb gremlin database throughput update](/cli/azure/cosmosdb/gremlin/database/throughput#az-cosmosdb-gremlin-database-throughput-update) | Update RU/s for an Azure Cosmos Gremlin database. |
-| [az cosmosdb gremlin graph throughput update](/cli/azure/cosmosdb/gremlin/graph/throughput#az-cosmosdb-gremlin-graph-throughput-update) | Update RU/s for an Azure Cosmos Gremlin graph. |
-| [az cosmosdb gremlin database throughput migrate](/cli/azure/cosmosdb/gremlin/database/throughput#az-cosmosdb-gremlin-database-throughput-migrate) | Migrate throughput for an Azure Cosmos Gremlin database. |
-| [az cosmosdb gremlin graph throughput migrate](/cli/azure/cosmosdb/gremlin/graph/throughput#az-cosmosdb-gremlin-graph-throughput-migrate) | Migrate throughput for an Azure Cosmos Gremlin graph. |
+| [az cosmosdb gremlin database create](/cli/azure/cosmosdb/gremlin/database#az-cosmosdb-gremlin-database-create) | Creates an Azure Cosmos DB for Gremlin database. |
+| [az cosmosdb gremlin graph create](/cli/azure/cosmosdb/gremlin/graph#az-cosmosdb-gremlin-graph-create) | Creates an Azure Cosmos DB for Gremlin graph. |
+| [az cosmosdb gremlin database throughput update](/cli/azure/cosmosdb/gremlin/database/throughput#az-cosmosdb-gremlin-database-throughput-update) | Update RU/s for an Azure Cosmos DB for Gremlin database. |
+| [az cosmosdb gremlin graph throughput update](/cli/azure/cosmosdb/gremlin/graph/throughput#az-cosmosdb-gremlin-graph-throughput-update) | Update RU/s for an Azure Cosmos DB for Gremlin graph. |
+| [az cosmosdb gremlin database throughput migrate](/cli/azure/cosmosdb/gremlin/database/throughput#az-cosmosdb-gremlin-database-throughput-migrate) | Migrate throughput for an Azure Cosmos DB for Gremlin database. |
+| [az cosmosdb gremlin graph throughput migrate](/cli/azure/cosmosdb/gremlin/graph/throughput#az-cosmosdb-gremlin-graph-throughput-migrate) | Migrate throughput for an Azure Cosmos DB for Gremlin graph. |
| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. | ## Next steps
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/autoscale.md
Title: Create a database with autoscale and shared collections for MongoDB API for Azure Cosmos DB
-description: Create a database with autoscale and shared collections for MongoDB API for Azure Cosmos DB
+ Title: Create a database with autoscale and shared collections for API for MongoDB for Azure Cosmos DB
+description: Create a database with autoscale and shared collections for API for MongoDB for Azure Cosmos DB
-++ Last updated 02/21/2022
-# Create a database with autoscale and shared collections for MongoDB API for Azure Cosmos DB using Azure CLI
+# Create a database with autoscale and shared collections for API for MongoDB for Azure Cosmos DB using Azure CLI
-The script in this article demonstrates creating a MongoDB API database with autoscale and 2 collections that share throughput.
+The script in this article demonstrates creating an API for MongoDB database with autoscale and two collections that share throughput.
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
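A hedged sketch of the shape of such a deployment; the account, database, and collection names and the shard key are placeholders, not the article's script.

```azurecli
# Hedged sketch: API for MongoDB account, autoscale shared-throughput database,
# and two collections that share it.
az cosmosdb create \
    --name my-mongodb-account \
    --resource-group MyResourceGroup \
    --kind MongoDB

az cosmosdb mongodb database create \
    --account-name my-mongodb-account \
    --resource-group MyResourceGroup \
    --name mydatabase \
    --max-throughput 4000

az cosmosdb mongodb collection create \
    --account-name my-mongodb-account \
    --resource-group MyResourceGroup \
    --database-name mydatabase \
    --name collection1 \
    --shard "myShardKey"

az cosmosdb mongodb collection create \
    --account-name my-mongodb-account \
    --resource-group MyResourceGroup \
    --database-name mydatabase \
    --name collection2 \
    --shard "myShardKey"
```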
This script uses the following commands. Each command in the table links to comm
||| | [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. | | [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb mongodb database create](/cli/azure/cosmosdb/mongodb/database#az-cosmosdb-mongodb-database-create) | Creates an Azure Cosmos MongoDB API database. |
-| [az cosmosdb mongodb collection create](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-create) | Creates an Azure Cosmos MongoDB API collection. |
+| [az cosmosdb mongodb database create](/cli/azure/cosmosdb/mongodb/database#az-cosmosdb-mongodb-database-create) | Creates an Azure Cosmos DB MongoDB API database. |
+| [az cosmosdb mongodb collection create](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-create) | Creates an Azure Cosmos DB MongoDB API collection. |
| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. | ## Next steps
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/create.md
Title: Create a database and collection for MongoDB API for Azure Cosmos DB
-description: Create a database and collection for MongoDB API for Azure Cosmos DB
+ Title: Create a database and collection for API for MongoDB for Azure Cosmos DB
+description: Create a database and collection for API for MongoDB for Azure Cosmos DB
-++ Last updated 02/21/2022
-# Create a database and collection for MongoDB API for Azure Cosmos DB using Azure CLI
+# Create a database and collection for API for MongoDB for Azure Cosmos DB using Azure CLI
-The script in this article demonstrates creating a MongoDB API database and collection.
+The script in this article demonstrates creating an API for MongoDB database and collection.
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
This script uses the following commands. Each command in the table links to comm
||| | [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. | | [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb mongodb database create](/cli/azure/cosmosdb/mongodb/database#az-cosmosdb-mongodb-database-create) | Creates an Azure Cosmos MongoDB API database. |
-| [az cosmosdb mongodb collection create](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-create) | Creates an Azure Cosmos MongoDB API collection. |
+| [az cosmosdb mongodb database create](/cli/azure/cosmosdb/mongodb/database#az-cosmosdb-mongodb-database-create) | Creates an Azure Cosmos DB MongoDB API database. |
+| [az cosmosdb mongodb collection create](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-create) | Creates an Azure Cosmos DB MongoDB API collection. |
| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. | ## Next steps
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/lock.md
Title: Create resource lock for a database and collection for MongoDB API for Azure Cosmos DB
-description: Create resource lock for a database and collection for MongoDB API for Azure Cosmos DB
+ Title: Create resource lock for a database and collection for API for MongoDB for Azure Cosmos DB
+description: Create resource lock for a database and collection for API for MongoDB for Azure Cosmos DB
-++ Last updated 02/21/2022 # Create a resource lock for Azure Cosmos DB's API for MongoDB using Azure CLI
-The script in this article demonstrates performing resource lock operations for a MongoDB API database and collection.
+The script in this article demonstrates performing resource lock operations for an API for MongoDB database and collection.
> [!IMPORTANT] > > To create resource locks, you must have membership in the owner role in the subscription. >
-> Resource locks do not work for changes made by users connecting using any MongoDB SDK, Mongoshell, any tools or the Azure Portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
+> Resource locks do not work for changes made by users connecting using any MongoDB SDK, Mongoshell, any tools, or the Azure Portal unless the Azure Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/serverless.md
Title: Create a serverless database and collection for MongoDB API for Azure Cosmos DB
-description: Create a serverless database and collection for MongoDB API for Azure Cosmos DB
+ Title: Create a serverless database and collection for API for MongoDB for Azure Cosmos DB
+description: Create a serverless database and collection for API for MongoDB for Azure Cosmos DB
-++ Last updated 02/21/2022
-# Create a serverless database and collection for MongoDB API for Azure Cosmos DB using Azure CLI
+# Create a serverless database and collection for API for MongoDB for Azure Cosmos DB using Azure CLI
-The script in this article demonstrates creating a MongoDB API serverless account database and collection.
+The script in this article demonstrates creating an API for MongoDB serverless account, database, and collection.
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
This script uses the following commands. Each command in the table links to comm
||| | [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. | | [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb mongodb database create](/cli/azure/cosmosdb/mongodb/database#az-cosmosdb-mongodb-database-create) | Creates an Azure Cosmos MongoDB API database. |
-| [az cosmosdb mongodb collection create](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-create) | Creates an Azure Cosmos MongoDB API collection. |
+| [az cosmosdb mongodb database create](/cli/azure/cosmosdb/mongodb/database#az-cosmosdb-mongodb-database-create) | Creates an Azure Cosmos DB MongoDB API database. |
+| [az cosmosdb mongodb collection create](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-create) | Creates an Azure Cosmos DB MongoDB API collection. |
| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. | ## Next steps
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/throughput.md
Title: Perform throughput (RU/s) operations for Azure Cosmos DB API for MongoDB resources
-description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB API for MongoDB resources
+ Title: Perform throughput (RU/s) operations for Azure Cosmos DB for MongoDB resources
+description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB for MongoDB resources
-++ Last updated 02/21/2022
-# Throughput (RU/s) operations with Azure CLI for a database or graph for Azure Cosmos DB API for MongoDB
+# Throughput (RU/s) operations with Azure CLI for a database or graph for Azure Cosmos DB for MongoDB
The script in this article creates a MongoDB database with shared throughput and a collection with dedicated throughput, then updates the throughput for both. The script then migrates from standard to autoscale throughput and reads the value of the autoscale throughput after it has been migrated.
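A hedged sketch of the update and migrate steps; the account, database, and collection names and the RU/s value are placeholders.

```azurecli
# Hedged sketch: update shared database throughput, then migrate a collection to autoscale.
az cosmosdb mongodb database throughput update \
    --account-name my-mongodb-account \
    --resource-group MyResourceGroup \
    --name mydatabase \
    --throughput 1000

az cosmosdb mongodb collection throughput migrate \
    --account-name my-mongodb-account \
    --resource-group MyResourceGroup \
    --database-name mydatabase \
    --name mycollection \
    --throughput-type autoscale
```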
This script uses the following commands. Each command in the table links to comm
||| | [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. | | [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb mongodb database create](/cli/azure/cosmosdb/mongodb/database#az-cosmosdb-mongodb-database-create) | Creates an Azure Cosmos MongoDB API database. |
-| [az cosmosdb mongodb collection create](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-create) | Creates an Azure Cosmos MongoDB API collection. |
-| [az cosmosdb mongodb database throughput update](/cli/azure/cosmosdb/mongodb/database/throughput#az-cosmosdb-mongodb-database-throughput-update) | Update RUs for an Azure Cosmos MongoDB API database. |
-| [az cosmosdb mongodb collection throughput update](/cli/azure/cosmosdb/mongodb/collection/throughput#az-cosmosdb-mongodb-collection-throughput-update) | Update RUs for an Azure Cosmos MongoDB API collection. |
+| [az cosmosdb mongodb database create](/cli/azure/cosmosdb/mongodb/database#az-cosmosdb-mongodb-database-create) | Creates an Azure Cosmos DB MongoDB API database. |
+| [az cosmosdb mongodb collection create](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-create) | Creates an Azure Cosmos DB MongoDB API collection. |
+| [az cosmosdb mongodb database throughput update](/cli/azure/cosmosdb/mongodb/database/throughput#az-cosmosdb-mongodb-database-throughput-update) | Update RUs for an Azure Cosmos DB MongoDB API database. |
+| [az cosmosdb mongodb collection throughput update](/cli/azure/cosmosdb/mongodb/collection/throughput#az-cosmosdb-mongodb-collection-throughput-update) | Update RUs for an Azure Cosmos DB MongoDB API collection. |
| [az cosmosdb mongodb database throughput migrate](/cli/azure/cosmosdb/mongodb/database/throughput#az-cosmosdb-mongodb-database-throughput-migrate) | Migrate throughput for a database. | | [az cosmosdb mongodb collection throughput migrate](/cli/azure/cosmosdb/mongodb/collection/throughput#az-cosmosdb-mongodb-collection-throughput-migrate) | Migrate throughput for a collection. | | [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. |
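As a rough sketch of the throughput operations the table describes, assuming an existing account, database `myDatabase`, and collection `myCollection` (all names hypothetical):

```azurecli
# Update standard (manual) throughput
az cosmosdb mongodb database throughput update \
    --account-name $account --resource-group $resourceGroup \
    --name myDatabase --throughput 1000

az cosmosdb mongodb collection throughput update \
    --account-name $account --resource-group $resourceGroup \
    --database-name myDatabase --name myCollection --throughput 1000

# Migrate the database from standard to autoscale throughput
az cosmosdb mongodb database throughput migrate \
    --account-name $account --resource-group $resourceGroup \
    --name myDatabase --throughput-type autoscale

# Read the throughput settings back (includes the autoscale maximum after migration)
az cosmosdb mongodb database throughput show \
    --account-name $account --resource-group $resourceGroup --name myDatabase
```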
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/autoscale.md
+
+ Title: Azure Cosmos DB for NoSQL account, database, and container with autoscale
+description: Use Azure CLI to create an Azure Cosmos DB for NoSQL account, database, and container with autoscale.
++++++ Last updated : 06/22/2022+++
+# Create an Azure Cosmos DB for NoSQL account, database, and container with autoscale
++
+The script in this article creates an Azure Cosmos DB for NoSQL account, database, and container with autoscale.
+
+## Prerequisites
+
+- [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
+
+- This script requires Azure CLI version 2.0.73 or later.
+
+ - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI.
+
+ [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
+
+ Cloud Shell is automatically authenticated under the account you used to sign in to the Azure portal. You can use [az account set](/cli/azure/account#az-account-set) to sign in with a different subscription, replacing `<subscriptionId>` with your Azure subscription ID.
+
+ ```azurecli
+ subscription="<subscriptionId>" # add subscription here
+
+ az account set -s $subscription # ...or use 'az login'
+ ```
+
+ - If you prefer, you can [install Azure CLI](/cli/azure/install-azure-cli) to run the script locally. Run [az version](/cli/azure/reference-index?#az-version) to find the Azure CLI version and dependent libraries that are installed, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) if you need to upgrade. If prompted, [install Azure CLI extensions](/cli/azure/azure-cli-extensions-overview). If you're running Windows or macOS, consider [running Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker).
+
+ If you're using a local installation, sign in to Azure by running [az login](/cli/azure/reference-index#az-login) and following the prompts. For other sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+
+## Sample script
+
+Run the following script to create an Azure resource group, an Azure Cosmos DB for NoSQL account and database, and a container with autoscale. The resources might take a while to create.
++
+This script uses the following commands:
+
+- [az group create](/cli/azure/group#az-group-create) creates a resource group to store all resources.
+- [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) creates an Azure Cosmos DB account for API for NoSQL.
+- [az cosmosdb sql database create](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) creates an Azure Cosmos DB for NoSQL database.
+- [az cosmosdb sql container create](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) with `--max-throughput 1000` creates an Azure Cosmos DB for NoSQL container with autoscale capability.
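A condensed sketch of those calls follows, with hypothetical resource names and a placeholder partition key; the full script referenced above remains authoritative.

```azurecli
resourceGroup="myResourceGroup"
account="mynosqlaccount"   # hypothetical; must be globally unique

az group create --name $resourceGroup --location westus2
az cosmosdb create --name $account --resource-group $resourceGroup

az cosmosdb sql database create \
    --account-name $account --resource-group $resourceGroup --name myDatabase

# Autoscale container: only the maximum RU/s is set; the minimum is 10% of the maximum
az cosmosdb sql container create \
    --account-name $account --resource-group $resourceGroup \
    --database-name myDatabase --name myContainer \
    --partition-key-path "/myPartitionKey" \
    --max-throughput 1000
```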
+
+## Clean up resources
+
+If you no longer need the resources you created, use the [az group delete](/cli/azure/group#az-group-delete) command to delete the resource group and all resources it contains. These resources include the Azure Cosmos DB account, database, and container. The resources might take a while to delete.
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Next steps
+
+- [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
+- [Throughput (RU/s) operations for Azure Cosmos DB for NoSQL resources](throughput.md)
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/create.md
+
+ Title: Create an API for NoSQL database and container for Azure Cosmos DB
+description: Create an API for NoSQL database and container for Azure Cosmos DB
+++++++ Last updated : 02/21/2022++
+# Create an Azure Cosmos DB for NoSQL account, database and container using Azure CLI
++
+The script in this article demonstrates creating an API for NoSQL database and container.
+++
+- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Sample script
++
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Notes |
+|||
+| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
+| [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
+| [az cosmosdb sql database create](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) | Creates an Azure Cosmos DB for NoSQL database. |
+| [az cosmosdb sql container create](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) | Creates an Azure Cosmos DB for NoSQL container. |
+| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. |
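The same commands in sequence, sketched with hypothetical names and a fixed (standard) throughput of 400 RU/s on the container:

```azurecli
az group create --name myResourceGroup --location westus2
az cosmosdb create --name mynosqlaccount --resource-group myResourceGroup

az cosmosdb sql database create \
    --account-name mynosqlaccount --resource-group myResourceGroup --name myDatabase

# Container with standard (manual) throughput
az cosmosdb sql container create \
    --account-name mynosqlaccount --resource-group myResourceGroup \
    --database-name myDatabase --name myContainer \
    --partition-key-path "/myPartitionKey" \
    --throughput 400
```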
+
+## Next steps
+
+For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/lock.md
+
+ Title: Create resource lock for an Azure Cosmos DB for NoSQL database and container
+description: Create resource lock for an Azure Cosmos DB for NoSQL database and container
+++++++ Last updated : 02/21/2022++
+# Create resource lock for an Azure Cosmos DB for NoSQL database and container using Azure CLI
++
+The script in this article demonstrates performing resource lock operations for a SQL database and container.
+
+> [!IMPORTANT]
+>
+> To create resource locks, you must have membership in the owner role in the subscription.
+>
+> Resource locks do not work for changes made by users connecting using any Azure Cosmos DB SDK, any tools that connect via account keys, or the Azure portal unless the Azure Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
+++
+- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Sample script
++
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Notes |
+|||
+| [az lock create](/cli/azure/lock#az-lock-create) | Creates a lock. |
+| [az lock list](/cli/azure/lock#az-lock-list) | List lock information. |
+| [az lock show](/cli/azure/lock#az-lock-show) | Show properties of a lock. |
+| [az lock delete](/cli/azure/lock#az-lock-delete) | Deletes a lock. |
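A sketch of what these lock operations can look like against an API for NoSQL database, including the account setting called out in the important note above. The resource names are hypothetical, and the exact parameter set for scoping `az lock` to a child resource is an assumption worth verifying against the linked script.

```azurecli
# Block key-based metadata writes so locks can't be bypassed through account keys
az cosmosdb update \
    --name mynosqlaccount --resource-group myResourceGroup \
    --disable-key-based-metadata-write-access true

# Create a delete lock on the database
az lock create \
    --name myDatabaseLock \
    --resource-group myResourceGroup \
    --lock-type CanNotDelete \
    --namespace Microsoft.DocumentDB \
    --parent databaseAccounts/mynosqlaccount \
    --resource-type sqlDatabases \
    --resource-name myDatabase

# Inspect, then remove, the lock
az lock list --resource-group myResourceGroup --output table

az lock delete --name myDatabaseLock --resource-group myResourceGroup \
    --namespace Microsoft.DocumentDB --parent databaseAccounts/mynosqlaccount \
    --resource-type sqlDatabases --resource-name myDatabase
```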
+
+## Next steps
+
+- [Lock resources to prevent unexpected changes](../../../../azure-resource-manager/management/lock-resources.md)
+
+- [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
+
+- [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/serverless.md
+
+ Title: Create an API for NoSQL serverless account, database, and container for Azure Cosmos DB
+description: Create an API for NoSQL serverless account, database, and container for Azure Cosmos DB
++++++ Last updated : 02/21/2022++
+# Create an Azure Cosmos DB for NoSQL serverless account, database and container using Azure CLI
++
+The script in this article demonstrates creating an API for NoSQL serverless account with a database and container.
+++
+- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Sample script
++
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Notes |
+|||
+| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
+| [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
+| [az cosmosdb sql database create](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) | Creates an Azure Cosmos DB for NoSQL database. |
+| [az cosmosdb sql container create](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) | Creates an Azure Cosmos DB for NoSQL container. |
+| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. |
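The distinguishing step, sketched below with hypothetical names, is the `EnableServerless` capability on account creation; the database and container then take no throughput parameters.

```azurecli
az cosmosdb create \
    --name mynosqlserverless --resource-group myResourceGroup \
    --capabilities EnableServerless

az cosmosdb sql database create \
    --account-name mynosqlserverless --resource-group myResourceGroup --name myDatabase

az cosmosdb sql container create \
    --account-name mynosqlserverless --resource-group myResourceGroup \
    --database-name myDatabase --name myContainer \
    --partition-key-path "/myPartitionKey"
```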
+
+## Next steps
+
+For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/throughput.md
+
+ Title: Perform throughput (RU/s) operations for Azure Cosmos DB for NoSQL resources
+description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB for NoSQL resources
+++++++ Last updated : 02/21/2022++
+# Throughput (RU/s) operations with Azure CLI for a database or container for Azure Cosmos DB for NoSQL
++
+The script in this article creates an API for NoSQL database with shared throughput and an API for NoSQL container with dedicated throughput, then updates the throughput for both the database and the container. The script then migrates from standard to autoscale throughput and reads the value of the autoscale throughput after the migration.
+++
+- This article requires version 2.12.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Sample script
++
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Notes |
+|||
+| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
+| [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
+| [az cosmosdb sql database create](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) | Creates an Azure Cosmos DB for NoSQL database. |
+| [az cosmosdb sql container create](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) | Creates an Azure Cosmos DB for NoSQL container. |
+| [az cosmosdb sql database throughput update](/cli/azure/cosmosdb/sql/database/throughput#az-cosmosdb-sql-database-throughput-update) | Update throughput for an Azure Cosmos DB for NoSQL database. |
+| [az cosmosdb sql container throughput update](/cli/azure/cosmosdb/sql/container/throughput#az-cosmosdb-sql-container-throughput-update) | Update throughput for an Azure Cosmos DB for NoSQL container. |
+| [az cosmosdb sql database throughput migrate](/cli/azure/cosmosdb/sql/database/throughput#az-cosmosdb-sql-database-throughput-migrate) | Migrate throughput for an Azure Cosmos DB for NoSQL database. |
+| [az cosmosdb sql container throughput migrate](/cli/azure/cosmosdb/sql/container/throughput#az-cosmosdb-sql-container-throughput-migrate) | Migrate throughput for an Azure Cosmos DB for NoSQL container. |
+| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. |
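Sketched with hypothetical names, the throughput operations in the table look roughly like this:

```azurecli
# Update standard throughput on the shared-throughput database and the dedicated container
az cosmosdb sql database throughput update \
    --account-name $account --resource-group $resourceGroup \
    --name myDatabase --throughput 1000

az cosmosdb sql container throughput update \
    --account-name $account --resource-group $resourceGroup \
    --database-name myDatabase --name myContainer --throughput 1000

# Migrate both to autoscale
az cosmosdb sql database throughput migrate \
    --account-name $account --resource-group $resourceGroup \
    --name myDatabase --throughput-type autoscale

az cosmosdb sql container throughput migrate \
    --account-name $account --resource-group $resourceGroup \
    --database-name myDatabase --name myContainer --throughput-type autoscale

# Read the throughput settings back (includes the autoscale maximum after migration)
az cosmosdb sql container throughput show \
    --account-name $account --resource-group $resourceGroup \
    --database-name myDatabase --name myContainer
```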
+
+## Next steps
+
+For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/autoscale.md
- Title: Azure Cosmos DB SQL API account, database, and container with autoscale
-description: Use Azure CLI to create an Azure Cosmos DB Core (SQL) API account, database, and container with autoscale.
------ Previously updated : 06/22/2022---
-# Create an Azure Cosmos DB SQL API account, database, and container with autoscale
--
-The script in this article creates an Azure Cosmos DB Core (SQL) API account, database, and container with autoscale.
-
-## Prerequisites
--- [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]--- This script requires Azure CLI version 2.0.73 or later.-
- - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI.
-
- [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
-
- Cloud Shell is automatically authenticated under the account you used to sign in to the Azure portal. You can use [az account set](/cli/azure/account#az-account-set) to sign in with a different subscription, replacing `<subscriptionId>` with your Azure subscription ID.
-
- ```azurecli
- subscription="<subscriptionId>" # add subscription here
-
- az account set -s $subscription # ...or use 'az login'
- ```
-
- - If you prefer, you can [install Azure CLI](/cli/azure/install-azure-cli) to run the script locally. Run [az version](/cli/azure/reference-index?#az-version) to find the Azure CLI version and dependent libraries that are installed, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) if you need to upgrade. If prompted, [install Azure CLI extensions](/cli/azure/azure-cli-extensions-overview). If you're running Windows or macOS, consider [running Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker).
-
- If you're using a local installation, sign in to Azure by running [az login](/cli/azure/reference-index#az-login) and following the prompts. For other sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
-
-## Sample script
-
-Run the following script to create an Azure resource group, an Azure Cosmos DB SQL API account and database, and a container with autoscale. The resources might take a while to create.
--
-This script uses the following commands:
--- [az group create](/cli/azure/group#az-group-create) creates a resource group to store all resources.-- [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) creates an Azure Cosmos DB account for SQL API.-- [az cosmosdb sql database create](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) creates an Azure Cosmos SQL (Core) database.-- [az cosmosdb sql container create](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) with `--max-throughput 1000` creates an Azure Cosmos SQL (Core) container with autoscale capability.-
-## Clean up resources
-
-If you no longer need the resources you created, use the [az group delete](/cli/azure/group#az-group-delete) command to delete the resource group and all resources it contains. These resources include the Azure Cosmos DB account, database, and container. The resources might take a while to delete.
-
-```azurecli
-az group delete --name $resourceGroup
-```
-
-## Next steps
--- [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)-- [Throughput (RU/s) operations for Azure Cosmos DB Core (SQL) API resources](throughput.md)
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/create.md
- Title: Create a Core (SQL) API database and container for Azure Cosmos DB
-description: Create a Core (SQL) API database and container for Azure Cosmos DB
------ Previously updated : 02/21/2022--
-# Create an Azure Cosmos Core (SQL) API account, database and container using Azure CLI
--
-The script in this article demonstrates creating a SQL API database and container.
----- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.-
-## Sample script
--
-### Run the script
--
-## Clean up resources
--
-```azurecli
-az group delete --name $resourceGroup
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
-| [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb sql database create](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) | Creates an Azure Cosmos SQL (Core) database. |
-| [az cosmosdb sql container create](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) | Creates an Azure Cosmos SQL (Core) container. |
-| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. |
-
-## Next steps
-
-For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/lock.md
- Title: Create resource lock for a Azure Cosmos DB Core (SQL) API database and container
-description: Create resource lock for a Azure Cosmos DB Core (SQL) API database and container
------ Previously updated : 02/21/2022--
-# Create resource lock for a Azure Cosmos DB Core (SQL) API database and container using Azure CLI
--
-The script in this article demonstrates performing resource lock operations for a SQL database and container.
-
-> [!IMPORTANT]
->
-> To create resource locks, you must have membership in the owner role in the subscription.
->
-> Resource locks do not work for changes made by users connecting using any Cosmos DB SDK, any tools that connect via account keys, or the Azure Portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
----- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.-
-## Sample script
--
-### Run the script
--
-## Clean up resources
--
-```azurecli
-az group delete --name $resourceGroup
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [az lock create](/cli/azure/lock#az-lock-create) | Creates a lock. |
-| [az lock list](/cli/azure/lock#az-lock-list) | List lock information. |
-| [az lock show](/cli/azure/lock#az-lock-show) | Show properties of a lock. |
-| [az lock delete](/cli/azure/lock#az-lock-delete) | Deletes a lock. |
-
-## Next steps
--- [Lock resources to prevent unexpected changes](../../../../azure-resource-manager/management/lock-resources.md)--- [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).--- [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/serverless.md
- Title: Create a Core (SQL) API serverless account, database and container for Azure Cosmos DB
-description: Create a Core (SQL) API serverless account, database and container for Azure Cosmos DB
------ Previously updated : 02/21/2022--
-# Create an Azure Cosmos Core (SQL) API serverless account, database and container using Azure CLI
--
-The script in this article demonstrates creating a SQL API serverless account with database and container.
----- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.-
-## Sample script
--
-### Run the script
--
-## Clean up resources
--
-```azurecli
-az group delete --name $resourceGroup
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
-| [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb sql database create](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) | Creates an Azure Cosmos SQL (Core) database. |
-| [az cosmosdb sql container create](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) | Creates an Azure Cosmos SQL (Core) container. |
-| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. |
-
-## Next steps
-
-For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/throughput.md
- Title: Perform throughput (RU/s) operations for Azure Cosmos DB Core (SQL) API resources
-description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Core (SQL) API resources
------ Previously updated : 02/21/2022--
-# Throughput (RU/s) operations with Azure CLI for a database or container for Azure Cosmos DB Core (SQL) API
--
-The script in this article creates a Core (SQL) API database with shared throughput and a Core (SQL) API container with dedicated throughput, then updates the throughput for both the database and container. The script then migrates from standard to autoscale throughput then reads the value of the autoscale throughput after it has been migrated.
----- This article requires version 2.12.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.-
-## Sample script
--
-### Run the script
--
-## Clean up resources
--
-```azurecli
-az group delete --name $resourceGroup
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
-| [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb sql database create](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) | Creates an Azure Cosmos Core (SQL) database. |
-| [az cosmosdb sql container create](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) | Creates an Azure Cosmos Core (SQL) container. |
-| [az cosmosdb sql database throughput update](/cli/azure/cosmosdb/sql/database/throughput#az-cosmosdb-sql-database-throughput-update) | Update throughput for an Azure Cosmos Core (SQL) database. |
-| [az cosmosdb sql container throughput update](/cli/azure/cosmosdb/sql/container/throughput#az-cosmosdb-sql-container-throughput-update) | Update throughput for an Azure Cosmos Core (SQL) container. |
-| [az cosmosdb sql database throughput migrate](/cli/azure/cosmosdb/sql/database/throughput#az-cosmosdb-sql-database-throughput-migrate) | Migrate throughput for an Azure Cosmos Core (SQL) database. |
-| [az cosmosdb sql container throughput migrate](/cli/azure/cosmosdb/sql/container/throughput#az-cosmosdb-sql-container-throughput-migrate) | Migrate throughput for an Azure Cosmos Core (SQL) container. |
-| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. |
-
-## Next steps
-
-For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/autoscale.md
Title: Create an Azure Cosmos DB Table API account and table with autoscale
-description: Use Azure CLI to create a Table API account and table with autoscale for Azure Cosmos DB.
+ Title: Create an Azure Cosmos DB for Table account and table with autoscale
+description: Use Azure CLI to create an API for Table account and table with autoscale for Azure Cosmos DB.
-+ Last updated 06/22/2022-+
-# Use Azure CLI to create an Azure Cosmos DB Table API account and table with autoscale
+# Use Azure CLI to create an Azure Cosmos DB for Table account and table with autoscale
-The script in this article creates an Azure Cosmos DB Table API account and table with autoscale.
+The script in this article creates an Azure Cosmos DB for Table account and table with autoscale.
## Prerequisites
The script in this article creates an Azure Cosmos DB Table API account and tabl
## Sample script
-Run the following script to create an Azure resource group, an Azure Cosmos DB Table API account, and Table API table with autoscale capability. The resources might take a while to create.
+Run the following script to create an Azure resource group, an Azure Cosmos DB for Table account, and an API for Table table with autoscale capability. The resources might take a while to create.
:::code language="azurecli" source="~/azure_cli_scripts/cosmosdb/table/autoscale.sh" id="FullScript"::: This script uses the following commands: - [az group create](/cli/azure/group#az-group-create) creates a resource group to store all resources.-- [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) with `--capabilities EnableTable` creates an Azure Cosmos DB account for Table API.-- [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) with `--max-throughput 1000` creates an Azure Cosmos DB Table API table with autoscale capabilities.
+- [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) with `--capabilities EnableTable` creates an Azure Cosmos DB account for API for Table.
+- [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) with `--max-throughput 1000` creates an Azure Cosmos DB for Table table with autoscale capabilities.
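Roughly, with hypothetical names:

```azurecli
az cosmosdb create \
    --name mytableaccount --resource-group myResourceGroup \
    --capabilities EnableTable

# Autoscale table: only the maximum RU/s is specified
az cosmosdb table create \
    --account-name mytableaccount --resource-group myResourceGroup \
    --name myTable --max-throughput 1000
```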
## Clean up resources
az group delete --name $resourceGroup
## Next steps - [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)-- [Throughput (RU/s) operations with Azure CLI for a table for Azure Cosmos DB Table API](throughput.md)
+- [Throughput (RU/s) operations with Azure CLI for a table for Azure Cosmos DB for Table](throughput.md)
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/create.md
Title: Create a Table API table for Azure Cosmos DB
-description: Create a Table API table for Azure Cosmos DB
+ Title: Create an API for Table table for Azure Cosmos DB
+description: Create an API for Table table for Azure Cosmos DB
-++ Last updated 02/21/2022
-# Create an Azure Cosmos Table API account and table using Azure CLI
+# Create an Azure Cosmos DB Table API account and table using Azure CLI
-The script in this article demonstrates creating a Table API table.
+The script in this article demonstrates creating an API for Table table.
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
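A minimal sketch of the commands involved, with hypothetical names and a standard throughput of 400 RU/s (the linked script is authoritative):

```azurecli
az group create --name myResourceGroup --location westus2

az cosmosdb create \
    --name mytableaccount --resource-group myResourceGroup \
    --capabilities EnableTable

az cosmosdb table create \
    --account-name mytableaccount --resource-group myResourceGroup \
    --name myTable --throughput 400
```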
This script uses the following commands. Each command in the table links to comm
||| | [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. | | [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) | Creates an Azure Cosmos Table API table. |
+| [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) | Creates an Azure Cosmos DB Table API table. |
| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. | ## Next steps
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/lock.md
Title: Azure Cosmos DB Table API resource lock operations
-description: Use Azure CLI to create, list, show properties for, and delete resource locks for an Azure Cosmos DB Table API table.
+ Title: Azure Cosmos DB for Table resource lock operations
+description: Use Azure CLI to create, list, show properties for, and delete resource locks for an Azure Cosmos DB for Table table.
-+ Last updated 06/16/2022-+
-# Use Azure CLI for resource lock operations on Azure Cosmos DB Table API tables
+# Use Azure CLI for resource lock operations on Azure Cosmos DB for Table tables
-The script in this article demonstrates performing resource lock operations for a Table API table.
+The script in this article demonstrates performing resource lock operations for an API for Table table.
> [!IMPORTANT]
-> To enable resource locking, the Azure Cosmos DB account must have the `disableKeyBasedMetadataWriteAccess` property enabled. This property prevents any changes to resources from clients that connect via account keys, such as the Cosmos DB Table SDK, Azure Storage Table SDK, or Azure portal. For more information, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
+> To enable resource locking, the Azure Cosmos DB account must have the `disableKeyBasedMetadataWriteAccess` property enabled. This property prevents any changes to resources from clients that connect via account keys, such as the Azure Cosmos DB Table SDK, Azure Storage Table SDK, or Azure portal. For more information, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
## Prerequisites -- You need an [Azure Cosmos DB Table API account, database, and table created](create.md). [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
+- You need an [Azure Cosmos DB for Table account, database, and table created](create.md). [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
> [!IMPORTANT] > To create or delete resource locks, you must have the **Owner** role in your Azure subscription.
The script in this article demonstrates performing resource lock operations for
## Sample script
-The following script uses Azure CLI [az lock](/cli/azure/lock) commands to manipulate resource locks on your Azure Cosmos DB Table API table. The script needs the `resourceGroup`, `account` name, and `table` name for the Azure Cosmos DB account and table you created.
+The following script uses Azure CLI [az lock](/cli/azure/lock) commands to manipulate resource locks on your Azure Cosmos DB for Table table. The script needs the `resourceGroup`, `account` name, and `table` name for the Azure Cosmos DB account and table you created.
- [az lock create](/cli/azure/lock#az-lock-create) creates a `CanNotDelete` resource lock on the table. - [az lock list](/cli/azure/lock#az-lock-list) lists all the lock information for your Azure Cosmos DB Table account.
az group delete --name $resourceGroup
- [Lock resources to prevent unexpected changes](../../../../azure-resource-manager/management/lock-resources.md) - [How to audit Azure Cosmos DB control plane operations](../../../audit-control-plane-logs.md) - [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)-- [Azure Cosmos DB CLI GitHub repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb)
+- [Azure Cosmos DB CLI GitHub repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb)
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/serverless.md
Title: Create an Azure Cosmos DB Table API serverless account and table
-description: Use Azure CLI to create a Table API serverless account and table for Azure Cosmos DB.
+ Title: Create an Azure Cosmos DB for Table serverless account and table
+description: Use Azure CLI to create an API for Table serverless account and table for Azure Cosmos DB.
-+ Last updated 06/16/2022-+
-# Use Azure CLI to create an Azure Cosmos DB Table API serverless account and table
+# Use Azure CLI to create an Azure Cosmos DB for Table serverless account and table
-The script in this article creates an Azure Cosmos DB Table API serverless account and table.
+The script in this article creates an Azure Cosmos DB for Table serverless account and table.
## Prerequisites
The script in this article creates an Azure Cosmos DB Table API serverless accou
## Sample script
-Run the following script to create an Azure resource group, an Azure Cosmos DB Table API serverless account, and Table API table. The resources might take a while to create.
+Run the following script to create an Azure resource group, an Azure Cosmos DB for Table serverless account, and an API for Table table. The resources might take a while to create.
:::code language="azurecli" source="~/azure_cli_scripts/cosmosdb/table/serverless.sh" id="FullScript"::: This script uses the following commands: - [az group create](/cli/azure/group#az-group-create) creates a resource group to store all resources.-- [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) with `--capabilities EnableTable EnableServerless` creates an Azure Cosmos DB serverless account for Table API.-- [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) creates an Azure Cosmos DB Table API table.
+- [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) with `--capabilities EnableTable EnableServerless` creates an Azure Cosmos DB serverless account for API for Table.
+- [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) creates an Azure Cosmos DB for Table table.
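Sketched with hypothetical names; the serverless variant simply adds the `EnableServerless` capability alongside `EnableTable`, and the table takes no throughput parameter.

```azurecli
az cosmosdb create \
    --name mytableserverless --resource-group myResourceGroup \
    --capabilities EnableTable EnableServerless

az cosmosdb table create \
    --account-name mytableserverless --resource-group myResourceGroup \
    --name myTable
```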
## Clean up resources
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/throughput.md
Title: Perform throughput (RU/s) operations for Azure Cosmos DB Table API resources
-description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Table API resources
+ Title: Perform throughput (RU/s) operations for Azure Cosmos DB for Table resources
+description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB for Table resources
-++ Last updated 02/21/2022
-# Throughput (RU/s) operations with Azure CLI for a table for Azure Cosmos DB Table API
+# Throughput (RU/s) operations with Azure CLI for a table for Azure Cosmos DB for Table
-The script in this article creates a Table API table then updates the throughput the table. The script then migrates from standard to autoscale throughput then reads the value of the autoscale throughput after it has been migrated.
+The script in this article creates an API for Table table, then updates the throughput of the table. The script then migrates from standard to autoscale throughput and reads the value of the autoscale throughput after the migration.
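As a rough sketch, with a hypothetical account and table name:

```azurecli
# Update standard throughput on the table
az cosmosdb table throughput update \
    --account-name mytableaccount --resource-group myResourceGroup \
    --name myTable --throughput 800

# Migrate to autoscale, then read the throughput settings back
az cosmosdb table throughput migrate \
    --account-name mytableaccount --resource-group myResourceGroup \
    --name myTable --throughput-type autoscale

az cosmosdb table throughput show \
    --account-name mytableaccount --resource-group myResourceGroup --name myTable
```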
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
This script uses the following commands. Each command in the table links to comm
||| | [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. | | [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) | Creates an Azure Cosmos Table API table. |
-| [az cosmosdb table throughput update](/cli/azure/cosmosdb/table/throughput#az-cosmosdb-table-throughput-update) | Update throughput for an Azure Cosmos Table API table. |
-| [az cosmosdb table throughput migrate](/cli/azure/cosmosdb/table/throughput#az-cosmosdb-table-throughput-migrate) | Migrate throughput for an Azure Cosmos Table API table. |
+| [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) | Creates an Azure Cosmos DB Table API table. |
+| [az cosmosdb table throughput update](/cli/azure/cosmosdb/table/throughput#az-cosmosdb-table-throughput-update) | Update throughput for an Azure Cosmos DB Table API table. |
+| [az cosmosdb table throughput migrate](/cli/azure/cosmosdb/table/throughput#az-cosmosdb-table-throughput-migrate) | Migrate throughput for an Azure Cosmos DB Table API table. |
| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. | ## Next steps
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/autoscale.md
Title: PowerShell script to create Azure Cosmos DB Cassandra API keyspace and table with autoscale
-description: Azure PowerShell script - Azure Cosmos DB create Cassandra API keyspace and table with autoscale
+ Title: PowerShell script to create Azure Cosmos DB for Apache Cassandra keyspace and table with autoscale
+description: Azure PowerShell script - Azure Cosmos DB create API for Cassandra keyspace and table with autoscale
-+ Last updated 07/30/2020 -+
-# Create a keyspace and table with autoscale for Azure Cosmos DB - Cassandra API
+# Create a keyspace and table with autoscale for Azure Cosmos DB - API for Cassandra
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Sample script
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/cassandra/ps-cassandra-autoscale.ps1 "Create a keyspace and table with autoscale for Cassandra API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/cassandra/ps-cassandra-autoscale.ps1 "Create a keyspace and table with autoscale for API for Cassandra")]
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates a Cosmos DB Account. |
-| [New-AzCosmosDBCassandraKeyspace](/powershell/module/az.cosmosdb/new-azcosmosdbcassandrakeyspace) | Creates a Cosmos DB Cassandra API Keyspace. |
-| [New-AzCosmosDBCassandraClusterKey](/powershell/module/az.cosmosdb/new-azcosmosdbcassandraclusterkey) | Creates a Cosmos DB Cassandra API Cluster Key. |
-| [New-AzCosmosDBCassandraColumn](/powershell/module/az.cosmosdb/new-azcosmosdbcassandracolumn) | Creates a Cosmos DB Cassandra API Column. |
-| [New-AzCosmosDBCassandraSchema](/powershell/module/az.cosmosdb/new-azcosmosdbcassandraschema) | Creates a Cosmos DB Cassandra API Schema. |
-| [New-AzCosmosDBCassandraTable](/powershell/module/az.cosmosdb/new-azcosmosdbcassandratable) | Creates a Cosmos DB Cassandra API Table. |
+| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates an Azure Cosmos DB Account. |
+| [New-AzCosmosDBCassandraKeyspace](/powershell/module/az.cosmosdb/new-azcosmosdbcassandrakeyspace) | Creates an Azure Cosmos DB for Apache Cassandra Keyspace. |
+| [New-AzCosmosDBCassandraClusterKey](/powershell/module/az.cosmosdb/new-azcosmosdbcassandraclusterkey) | Creates an Azure Cosmos DB for Apache Cassandra Cluster Key. |
+| [New-AzCosmosDBCassandraColumn](/powershell/module/az.cosmosdb/new-azcosmosdbcassandracolumn) | Creates an Azure Cosmos DB for Apache Cassandra Column. |
+| [New-AzCosmosDBCassandraSchema](/powershell/module/az.cosmosdb/new-azcosmosdbcassandraschema) | Creates an Azure Cosmos DB for Apache Cassandra Schema. |
+| [New-AzCosmosDBCassandraTable](/powershell/module/az.cosmosdb/new-azcosmosdbcassandratable) | Creates an Azure Cosmos DB for Apache Cassandra Table. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/create.md
Title: PowerShell script to create Azure Cosmos DB Cassandra API keyspace and table
-description: Azure PowerShell script - Azure Cosmos DB create Cassandra API keyspace and table
+ Title: PowerShell script to create Azure Cosmos DB for Apache Cassandra keyspace and table
+description: Azure PowerShell script - Azure Cosmos DB create API for Cassandra keyspace and table
-+ Last updated 05/13/2020 -+
-# Create a keyspace and table for Azure Cosmos DB - Cassandra API
+# Create a keyspace and table for Azure Cosmos DB - API for Cassandra
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Sample script
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/cassandra/ps-cassandra-create.ps1 "Create a keyspace and table for Cassandra API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/cassandra/ps-cassandra-create.ps1 "Create a keyspace and table for API for Cassandra")]
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates a Cosmos DB Account. |
-| [New-AzCosmosDBCassandraKeyspace](/powershell/module/az.cosmosdb/new-azcosmosdbcassandrakeyspace) | Creates a Cosmos DB Cassandra API Keyspace. |
-| [New-AzCosmosDBCassandraClusterKey](/powershell/module/az.cosmosdb/new-azcosmosdbcassandraclusterkey) | Creates a Cosmos DB Cassandra API Cluster Key. |
-| [New-AzCosmosDBCassandraColumn](/powershell/module/az.cosmosdb/new-azcosmosdbcassandracolumn) | Creates a Cosmos DB Cassandra API Column. |
-| [New-AzCosmosDBCassandraSchema](/powershell/module/az.cosmosdb/new-azcosmosdbcassandraschema) | Creates a Cosmos DB Cassandra API Schema. |
-| [New-AzCosmosDBCassandraTable](/powershell/module/az.cosmosdb/new-azcosmosdbcassandratable) | Creates a Cosmos DB Cassandra API Table. |
+| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates an Azure Cosmos DB Account. |
+| [New-AzCosmosDBCassandraKeyspace](/powershell/module/az.cosmosdb/new-azcosmosdbcassandrakeyspace) | Creates an Azure Cosmos DB for Apache Cassandra Keyspace. |
+| [New-AzCosmosDBCassandraClusterKey](/powershell/module/az.cosmosdb/new-azcosmosdbcassandraclusterkey) | Creates an Azure Cosmos DB for Apache Cassandra Cluster Key. |
+| [New-AzCosmosDBCassandraColumn](/powershell/module/az.cosmosdb/new-azcosmosdbcassandracolumn) | Creates an Azure Cosmos DB for Apache Cassandra Column. |
+| [New-AzCosmosDBCassandraSchema](/powershell/module/az.cosmosdb/new-azcosmosdbcassandraschema) | Creates an Azure Cosmos DB for Apache Cassandra Schema. |
+| [New-AzCosmosDBCassandraTable](/powershell/module/az.cosmosdb/new-azcosmosdbcassandratable) | Creates an Azure Cosmos DB for Apache Cassandra Table. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/list-get.md
Title: PowerShell script to list and get Azure Cosmos DB Cassandra API resources
-description: Azure PowerShell script - Azure Cosmos DB list and get operations for Cassandra API
+ Title: PowerShell script to list and get Azure Cosmos DB for Apache Cassandra resources
+description: Azure PowerShell script - Azure Cosmos DB list and get operations for API for Cassandra
-+ Last updated 03/18/2020 -+
-# List and get keyspaces and tables for Azure Cosmos DB - Cassandra API
+# List and get keyspaces and tables for Azure Cosmos DB - API for Cassandra
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Sample script
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/cassandra/ps-cassandra-list-get.ps1 "List or get keyspace or table for Cassandra API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/cassandra/ps-cassandra-list-get.ps1 "List or get keyspace or table for API for Cassandra")]
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) | Lists Cosmos DB Accounts, or gets a specified Cosmos DB Account. |
-| [Get-AzCosmosDBCassandraKeyspace](/powershell/module/az.cosmosdb/get-azcosmosdbcassandrakeyspace) | Lists Cosmos DB Cassandra API Keyspaces in an Account, or gets a specified Cosmos DB Cassandra API Keyspace in an Account. |
-| [Get-AzCosmosDBCassandraTable](/powershell/module/az.cosmosdb/get-azcosmosdbcassandratable) | Lists Cosmos DB Cassandra API Tables in a Keyspace, or gets a specified Cosmos DB Cassandra API Table in a Keyspace. |
+| [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) | Lists Azure Cosmos DB Accounts, or gets a specified Azure Cosmos DB Account. |
+| [Get-AzCosmosDBCassandraKeyspace](/powershell/module/az.cosmosdb/get-azcosmosdbcassandrakeyspace) | Lists Azure Cosmos DB for Apache Cassandra Keyspaces in an Account, or gets a specified Azure Cosmos DB for Apache Cassandra Keyspace in an Account. |
+| [Get-AzCosmosDBCassandraTable](/powershell/module/az.cosmosdb/get-azcosmosdbcassandratable) | Lists Azure Cosmos DB for Apache Cassandra Tables in a Keyspace, or gets a specified Azure Cosmos DB for Apache Cassandra Table in a Keyspace. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/lock.md
Title: PowerShell script to create resource lock for Azure Cosmos Cassandra API keyspace and table
-description: Create resource lock for Azure Cosmos Cassandra API keyspace and table
+ Title: PowerShell script to create resource lock for Azure Cosmos DB Cassandra API keyspace and table
+description: Create resource lock for Azure Cosmos DB Cassandra API keyspace and table
-+ Last updated 06/12/2020 -+
-# Create a resource lock for Azure Cosmos Cassandra API keyspace and table using Azure PowerShell
+# Create a resource lock for Azure Cosmos DB Cassandra API keyspace and table using Azure PowerShell
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
If you need to install, see [Install Azure PowerShell module](/powershell/azure/
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure. > [!IMPORTANT]
-> Resource locks do not work for changes made by users connecting using any Cassandra SDK, CQL Shell, or the Azure Portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
+> Resource locks do not work for changes made by users connecting using any Cassandra SDK, CQL Shell, or the Azure portal unless the Azure Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
## Sample script
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB Cassandra API resources
-description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB Cassandra API resources
+ Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB for Apache Cassandra resources
+description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB for Apache Cassandra resources
-+ Last updated 10/07/2020 -+
-# Throughput (RU/s) operations with PowerShell for a keyspace or table for Azure Cosmos DB - Cassandra API
+# Throughput (RU/s) operations with PowerShell for a keyspace or table for Azure Cosmos DB - API for Cassandra
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Get throughput
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/cassandra/ps-cassandra-ru-get.ps1 "Get throughput on a keyspace or table for Cassandra API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/cassandra/ps-cassandra-ru-get.ps1 "Get throughput on a keyspace or table for API for Cassandra")]
## Update throughput
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/cassandra/ps-cassandra-ru-update.ps1 "Update throughput on a keyspace or table for Cassandra API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/cassandra/ps-cassandra-ru-update.ps1 "Update throughput on a keyspace or table for API for Cassandra")]
## Migrate throughput
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/cassandra/ps-cassandra-ru-migrate.ps1 "Migrate between standard and autoscale throughput on a keyspace or table for Cassandra API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/cassandra/ps-cassandra-ru-migrate.ps1 "Migrate between standard and autoscale throughput on a keyspace or table for API for Cassandra")]
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [Get-AzCosmosDBCassandraKeyspaceThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbcassandrakeyspacethroughput) | Gets the throughput value of the Cassandra API Keyspace. |
-| [Get-AzCosmosDBCassandraTableThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbcassandratablethroughput) | Gets the throughput value of the Cassandra API Table. |
-| [Update-AzCosmosDBCassandraKeyspaceThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbcassandrakeyspacethroughput) | Updates the throughput value of the Cassandra API Keyspace. |
-| [Update-AzCosmosDBCassandraTableThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbcassandratablethroughput) | Updates the throughput value of the Cassandra API Table. |
-| [Invoke-AzCosmosDBCassandraKeyspaceThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbcassandrakeyspacethroughputmigration) | Migrate throughput for a Cassandra API Keyspace. |
-| [Invoke-AzCosmosDBCassandraTableThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbcassandratablethroughputmigration) | Migrate throughput for a Cassandra API Table. |
+| [Get-AzCosmosDBCassandraKeyspaceThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbcassandrakeyspacethroughput) | Gets the throughput value of the API for Cassandra Keyspace. |
+| [Get-AzCosmosDBCassandraTableThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbcassandratablethroughput) | Gets the throughput value of the API for Cassandra Table. |
+| [Update-AzCosmosDBCassandraKeyspaceThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbcassandrakeyspacethroughput) | Updates the throughput value of the API for Cassandra Keyspace. |
+| [Update-AzCosmosDBCassandraTableThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbcassandratablethroughput) | Updates the throughput value of the API for Cassandra Table. |
+| [Invoke-AzCosmosDBCassandraKeyspaceThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbcassandrakeyspacethroughputmigration) | Migrate throughput for an API for Cassandra Keyspace. |
+| [Invoke-AzCosmosDBCassandraTableThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbcassandratablethroughputmigration) | Migrate throughput for an API for Cassandra Table. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db Account Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/account-update.md
Title: PowerShell script to update the default consistency level on an Azure Cosmos account
+ Title: PowerShell script to update the default consistency level on an Azure Cosmos DB account
description: Azure PowerShell script sample - Update default consistency level on an Azure Cosmos DB account using PowerShell
Last updated 03/21/2020 -+ # Update consistency level for an Azure Cosmos DB account with PowerShell [!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Sample script > [!NOTE]
-> You cannot modify regions and change other Cosmos account properties in the same operation. These must be done as two separate operations.
+> You cannot modify regions and change other Azure Cosmos DB account properties in the same operation. These must be done as two separate operations.
> [!NOTE]
-> This sample demonstrates using a SQL API account. To use this sample for other APIs, copy the related properties and apply to your API-specific script.
+> This sample demonstrates using an API for NoSQL account. To use this sample for other APIs, copy the related properties and apply them to your API-specific script.
[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/common/ps-account-update.ps1 "Update an Azure Cosmos DB account")]
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) | Lists Cosmos DB Accounts, or gets a specified Cosmos DB Account. |
-| [Update-AzCosmosDBAccount](/powershell/module/az.cosmosdb/update-azcosmosdbaccountfailoverpriority) | Update a Cosmos DB Account. |
+| [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) | Lists Azure Cosmos DB Accounts, or gets a specified Azure Cosmos DB Account. |
+| [Update-AzCosmosDBAccount](/powershell/module/az.cosmosdb/update-azcosmosdbaccountfailoverpriority) | Update an Azure Cosmos DB Account. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db Failover Priority Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/failover-priority-update.md
Title: PowerShell script to change failover priority for an Azure Cosmos account with single write region
-description: Azure PowerShell script sample - Change failover priority or trigger failover for an Azure Cosmos account with single write region
+ Title: PowerShell script to change failover priority for an Azure Cosmos DB account with single write region
+description: Azure PowerShell script sample - Change failover priority or trigger failover for an Azure Cosmos DB account with single write region
Last updated 03/18/2020 -+
-# Change failover priority or trigger failover for an Azure Cosmos account with single write region by using PowerShell
+# Change failover priority or trigger failover for an Azure Cosmos DB account with single write region by using PowerShell
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Sample script > [!NOTE]
-> Any change to a region with `failoverPriority=0` triggers a manual failover and can only be done to an account configured for manual failover. Changes to all other regions simply changes the failover priority for a Cosmos account.
+> Any change to a region with `failoverPriority=0` triggers a manual failover and can only be done to an account configured for manual failover. Changes to all other regions simply change the failover priority for an Azure Cosmos DB account.
> [!NOTE]
-> This sample demonstrates using a SQL (Core) API account. To use this sample for other APIs, copy the related properties and apply to your API specific script
+> This sample demonstrates using an API for NoSQL account. To use this sample for other APIs, copy the related properties and apply them to your API-specific script.
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/common/ps-account-failover-priority-update.ps1 "Update failover priority for an Azure Cosmos account or trigger a manual failover")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/common/ps-account-failover-priority-update.ps1 "Update failover priority for an Azure Cosmos DB account or trigger a manual failover")]
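As a minimal sketch of the same operation (not the linked sample), the hedged example below reorders failover priorities with `Update-AzCosmosDBAccountFailoverPriority`; the resource names and region list are placeholder assumptions and must match regions already added to the account.

```powershell
# Minimal sketch: reorder failover priorities; region names are placeholders.
# The first region in the list receives failoverPriority=0 (the write region),
# so changing it triggers a manual failover.
Update-AzCosmosDBAccountFailoverPriority `
    -ResourceGroupName "myResourceGroup" `
    -Name "mycosmosaccount" `
    -FailoverPolicy @("East US", "West US")
```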
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) | Lists Cosmos DB Accounts, or gets a specified Cosmos DB Account. |
-| [Update-AzCosmosDBAccountFailoverPriority](/powershell/module/az.cosmosdb/update-azcosmosdbaccountfailoverpriority) | Update the failover priority order of a Cosmos DB Account's regions. |
+| [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) | Lists Azure Cosmos DB Accounts, or gets a specified Azure Cosmos DB Account. |
+| [Update-AzCosmosDBAccountFailoverPriority](/powershell/module/az.cosmosdb/update-azcosmosdbaccountfailoverpriority) | Update the failover priority order of an Azure Cosmos DB Account's regions. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db Firewall Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/firewall-create.md
Last updated 03/18/2020 -+ # Create an Azure Cosmos DB account with IP Firewall [!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Sample script > [!NOTE]
-> This sample demonstrates using a SQL (Core) API account. To use this sample for other APIs, copy the related properties and apply to your API specific script
+> This sample demonstrates using an API for NoSQL account. To use this sample for other APIs, copy the related properties and apply them to your API-specific script.
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/common/ps-account-firewall-create.ps1 "Create an Azure Cosmos account with IP Firewall")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/common/ps-account-firewall-create.ps1 "Create an Azure Cosmos DB account with IP Firewall")]
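As a minimal, hedged sketch of the idea (not the linked script), the example below creates an account that accepts traffic only from the listed addresses by passing `-IpRule` to `New-AzCosmosDBAccount`; all names, regions, and addresses are placeholder assumptions.

```powershell
# Minimal sketch: create an account restricted to the listed IP addresses/ranges.
# Names, regions, and addresses are placeholders.
New-AzCosmosDBAccount `
    -ResourceGroupName "myResourceGroup" `
    -Name "mycosmosaccount" `
    -Location @("West US 2", "East US 2") `
    -ApiKind "Sql" `
    -DefaultConsistencyLevel "Session" `
    -IpRule @("203.0.113.0/24", "198.51.100.25")
```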
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates a new Cosmos DB Account. |
+| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates a new Azure Cosmos DB Account. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db Keys Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/keys-connection-strings.md
Last updated 03/18/2020 -+ # Connection string and account key operations for an Azure Cosmos DB account using PowerShell [!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Sample script > [!NOTE]
-> This sample demonstrates using a SQL API account. To use this sample for other APIs, copy the related properties and apply to your API-specific script
+> This sample demonstrates using an API for NoSQL account. To use this sample for other APIs, copy the related properties and apply them to your API-specific script.
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/common/ps-account-keys-connection-strings.ps1 "Connection strings and account keys for Azure Cosmos account")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/common/ps-account-keys-connection-strings.ps1 "Connection strings and account keys for Azure Cosmos DB account")]
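As a minimal sketch of the key and connection-string operations (not the linked sample), the hedged example below reads keys and connection strings and rotates the primary key; account and group names are placeholder assumptions.

```powershell
# Minimal sketch: read keys/connection strings and rotate the primary key.
# Account and resource group names are placeholders.
$rg = "myResourceGroup"
$acct = "mycosmosaccount"

# List the read-write and read-only keys.
Get-AzCosmosDBAccountKey -ResourceGroupName $rg -Name $acct -Type "Keys"

# List the connection strings.
Get-AzCosmosDBAccountKey -ResourceGroupName $rg -Name $acct -Type "ConnectionStrings"

# Regenerate (rotate) the primary read-write key.
New-AzCosmosDBAccountKey -ResourceGroupName $rg -Name $acct -KeyKind "primary"
```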
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [Get-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) | Gets the connection string or key (read-write or read-only) for a Cosmos DB Account. |
-| [New-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/new-azcosmosdbaccountkey) | Regenerate the specified key for a Cosmos DB Account. |
+| [Get-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) | Gets the connection string or key (read-write or read-only) for an Azure Cosmos DB Account. |
+| [New-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/new-azcosmosdbaccountkey) | Regenerate the specified key for an Azure Cosmos DB Account. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db Update Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/update-region.md
Last updated 05/02/2022 -+ # Update regions for an Azure Cosmos DB account by using PowerShell This PowerShell script updates the Azure regions that an Azure Cosmos DB account uses. You can use this script to add an Azure region or change region failover order.
In this script, the [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-a
- If you add a region, don't change the first failover region in the same operation. Change failover priority order in a separate operation. - You can't modify regions in the same operation as changing other Azure Cosmos DB account properties. Do these operations separately.
-This sample uses a SQL (Core) API account. To use this sample for other APIs, copy the related properties and apply them to your API-specific script.
+This sample uses an API for NoSQL account. To use this sample for other APIs, copy the related properties and apply them to your API-specific script.
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/common/ps-account-update-region.ps1 "Update Azure Cosmos account regions")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/common/ps-account-update-region.ps1 "Update Azure Cosmos DB account regions")]
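As a minimal, hedged sketch of a region update (not the linked script), the example below passes the full desired region list to `Update-AzCosmosDBAccount`; region and account names are placeholder assumptions, and the existing write region stays at failover priority 0.

```powershell
# Minimal sketch: add a region by passing the complete desired region list.
# Keep the existing write region first (FailoverPriority 0).
$locations = @(
    New-AzCosmosDBLocationObject -LocationName "East US" -FailoverPriority 0 -IsZoneRedundant 0
    New-AzCosmosDBLocationObject -LocationName "West US" -FailoverPriority 1 -IsZoneRedundant 0
)

Update-AzCosmosDBAccount `
    -ResourceGroupName "myResourceGroup" `
    -Name "mycosmosaccount" `
    -LocationObject $locations
```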
Although the script returns a result, the update operation might not be finished. Check the status of the operation in the Azure portal by using the Azure Cosmos DB account **Activity log**.
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/autoscale.md
Title: PowerShell script to create Azure Cosmos DB Gremlin API database and graph with autoscale
-description: Azure PowerShell script - Azure Cosmos DB create Gremlin API database and graph with autoscale
+ Title: PowerShell script to create Azure Cosmos DB for Gremlin database and graph with autoscale
+description: Azure PowerShell script - Azure Cosmos DB create API for Gremlin database and graph with autoscale
-+ Last updated 07/30/2020 -+
-# Create a database and graph with autoscale for Azure Cosmos DB - Gremlin API
+# Create a database and graph with autoscale for Azure Cosmos DB - API for Gremlin
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Sample script
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/gremlin/ps-gremlin-autoscale.ps1 "Create a database and graph with autoscale for Gremlin API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/gremlin/ps-gremlin-autoscale.ps1 "Create a database and graph with autoscale for API for Gremlin")]
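As a minimal sketch of the autoscale portion (not the linked sample), the hedged example below creates a Gremlin database and an autoscale graph; names, the partition key path, and the maximum RU/s value are placeholder assumptions.

```powershell
# Minimal sketch: create a Gremlin database and an autoscale graph.
$rg = "myResourceGroup"
$acct = "mycosmosaccount"

New-AzCosmosDBGremlinDatabase -ResourceGroupName $rg -AccountName $acct -Name "myDatabase"

New-AzCosmosDBGremlinGraph `
    -ResourceGroupName $rg -AccountName $acct -DatabaseName "myDatabase" `
    -Name "myGraph" `
    -PartitionKeyKind "Hash" -PartitionKeyPath "/myPartitionKey" `
    -AutoscaleMaxThroughput 4000
```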
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates a Cosmos DB Account. |
-| [New-AzCosmosDBGremlinDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbgremlindatabase) | Creates a Gremlin API Database. |
-| [New-AzCosmosDBGremlinGraph](/powershell/module/az.cosmosdb/new-azcosmosdbgremlingraph) | Creates a Gremlin API Graph. |
+| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates an Azure Cosmos DB Account. |
+| [New-AzCosmosDBGremlinDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbgremlindatabase) | Creates an API for Gremlin Database. |
+| [New-AzCosmosDBGremlinGraph](/powershell/module/az.cosmosdb/new-azcosmosdbgremlingraph) | Creates an API for Gremlin Graph. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/create.md
Title: PowerShell script to create Azure Cosmos DB Gremlin API database and graph
-description: Azure PowerShell script - Azure Cosmos DB create Gremlin API database and graph
+ Title: PowerShell script to create Azure Cosmos DB for Gremlin database and graph
+description: Azure PowerShell script - Azure Cosmos DB create API for Gremlin database and graph
-+ Last updated 05/13/2020 -+
-# Create a database and graph for Azure Cosmos DB - Gremlin API
+# Create a database and graph for Azure Cosmos DB - API for Gremlin
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Sample script
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/gremlin/ps-gremlin-create.ps1 "Create a database and graph for Gremlin API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/gremlin/ps-gremlin-create.ps1 "Create a database and graph for API for Gremlin")]
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates a Cosmos DB Account. |
-| [New-AzCosmosDBGremlinDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbgremlindatabase) | Creates a Gremlin API Database. |
-| [New-AzCosmosDBGremlinConflictResolutionPolicy](/powershell/module/az.cosmosdb/new-azcosmosdbgremlinconflictresolutionpolicy) | Creates a Gremlin API Write Conflict Resolution Policy. |
-| [New-AzCosmosDBGremlinGraph](/powershell/module/az.cosmosdb/new-azcosmosdbgremlingraph) | Creates a Gremlin API Graph. |
+| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates an Azure Cosmos DB Account. |
+| [New-AzCosmosDBGremlinDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbgremlindatabase) | Creates an API for Gremlin Database. |
+| [New-AzCosmosDBGremlinConflictResolutionPolicy](/powershell/module/az.cosmosdb/new-azcosmosdbgremlinconflictresolutionpolicy) | Creates an API for Gremlin Write Conflict Resolution Policy. |
+| [New-AzCosmosDBGremlinGraph](/powershell/module/az.cosmosdb/new-azcosmosdbgremlingraph) | Creates an API for Gremlin Graph. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/list-get.md
Title: PowerShell script to list or get Azure Cosmos DB Gremlin API databases and graphs
-description: Run this Azure PowerShell script to list all or get specific Azure Cosmos DB Gremlin API databases and graphs.
+ Title: PowerShell script to list or get Azure Cosmos DB for Gremlin databases and graphs
+description: Run this Azure PowerShell script to list all or get specific Azure Cosmos DB for Gremlin databases and graphs.
-+ Last updated 05/02/2022 -+
-# PowerShell script to list or get Azure Cosmos DB Gremlin API databases and graphs
+# PowerShell script to list or get Azure Cosmos DB for Gremlin databases and graphs
-This PowerShell script lists or gets specific Azure Cosmos DB accounts, Gremlin API databases, and Gremlin API graphs.
+This PowerShell script lists or gets specific Azure Cosmos DB accounts, API for Gremlin databases, and API for Gremlin graphs.
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
This PowerShell script lists or gets specific Azure Cosmos DB accounts, Gremlin
In this script: - [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) lists all or gets a specific Azure Cosmos DB account in an Azure resource group.-- [Get-AzCosmosDBGremlinDatabase](/powershell/module/az.cosmosdb/get-azcosmosdbgremlindatabase) lists all or gets a specific Gremlin API database in an Azure Cosmos DB account.-- [Get-AzCosmosDBGremlinGraph](/powershell/module/az.cosmosdb/get-azcosmosdbgremlingraph) lists all or gets a specific Gremlin API graph in a Gremlin API database.
+- [Get-AzCosmosDBGremlinDatabase](/powershell/module/az.cosmosdb/get-azcosmosdbgremlindatabase) lists all or gets a specific API for Gremlin database in an Azure Cosmos DB account.
+- [Get-AzCosmosDBGremlinGraph](/powershell/module/az.cosmosdb/get-azcosmosdbgremlingraph) lists all or gets a specific API for Gremlin graph in an API for Gremlin database.
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/gremlin/ps-gremlin-list-get.ps1 "List or get databases or graphs for Gremlin API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/gremlin/ps-gremlin-list-get.ps1 "List or get databases or graphs for API for Gremlin")]
## Delete Azure resource group
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/lock.md
Title: PowerShell script to create resource lock for Azure Cosmos Gremlin API database and graph
-description: Create resource lock for Azure Cosmos Gremlin API database and graph
+ Title: PowerShell script to create resource lock for Azure Cosmos DB for Gremlin database and graph
+description: Create resource lock for Azure Cosmos DB for Gremlin database and graph
-+ Last updated 06/12/2020 -+
-# Create a resource lock for Azure Cosmos Gremlin API database and graph using Azure PowerShell
+# Create a resource lock for Azure Cosmos DB for Gremlin database and graph using Azure PowerShell
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
If you need to install, see [Install Azure PowerShell module](/powershell/azure/
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure. > [!IMPORTANT]
-> Resource locks do not work for changes made by users connecting using any Gremlin SDK or the Azure Portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
+> Resource locks do not work for changes made by users connecting using any Gremlin SDK or the Azure Portal unless the Azure Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
## Sample script
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB Gremlin API
-description: PowerShell scripts for throughput (RU/s) operations for Gremlin API
+ Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB for Gremlin
+description: PowerShell scripts for throughput (RU/s) operations for API for Gremlin
-+ Last updated 10/07/2020 -+
-# Throughput (RU/s) operations with PowerShell for a database or graph for Azure Cosmos DB - Gremlin API
+# Throughput (RU/s) operations with PowerShell for a database or graph for Azure Cosmos DB - API for Gremlin
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Get throughput
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/gremlin/ps-gremlin-ru-get.ps1 "Get throughput on a database or graph for Gremlin API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/gremlin/ps-gremlin-ru-get.ps1 "Get throughput on a database or graph for API for Gremlin")]
## Update throughput
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/gremlin/ps-gremlin-ru-update.ps1 "Update throughput on a database or graph for Gremlin API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/gremlin/ps-gremlin-ru-update.ps1 "Update throughput on a database or graph for API for Gremlin")]
## Migrate throughput
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/gremlin/ps-gremlin-ru-migrate.ps1 "Migrate between standard and autoscale throughput on a database or graph for Gremlin API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/gremlin/ps-gremlin-ru-migrate.ps1 "Migrate between standard and autoscale throughput on a database or graph for API for Gremlin")]
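As a minimal, hedged sketch of these throughput operations on a Gremlin database (not the linked scripts), the example below gets, updates, and migrates throughput; names and RU/s values are placeholder assumptions.

```powershell
# Minimal sketch: inspect, update, and migrate throughput on a Gremlin database.
$rg = "myResourceGroup"; $acct = "mycosmosaccount"; $db = "myDatabase"

# Read the current throughput settings.
Get-AzCosmosDBGremlinDatabaseThroughput -ResourceGroupName $rg -AccountName $acct -Name $db

# Set a new manual throughput value.
Update-AzCosmosDBGremlinDatabaseThroughput -ResourceGroupName $rg -AccountName $acct -Name $db -Throughput 500

# Migrate the database from standard (manual) to autoscale throughput.
Invoke-AzCosmosDBGremlinDatabaseThroughputMigration -ResourceGroupName $rg -AccountName $acct -Name $db -ThroughputType Autoscale
```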
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [Get-AzCosmosDBGremlinDatabaseThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbgremlindatabasethroughput) | Gets the throughput value of the Gremlin API Database. |
-| [Get-AzCosmosDBGremlinGraphThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbgremlingraphthroughput) | Gets the throughput value of the Gremlin API Graph. |
-| [Update-AzCosmosDBGremlinDatabaseThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbgremlindatabasethroughput) | Updates the throughput value of the Gremlin API Database. |
-| [Update-AzCosmosDBGremlinGraphThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbgremlingraphthroughput) | Updates the throughput value of the Gremlin API Graph. |
-| [Invoke-AzCosmosDBGremlinDatabaseThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbgremlindatabasethroughputmigration) | Migrate throughput of a Gremlin API Database. |
-| [Invoke-AzCosmosDBGremlinGraphThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbgremlingraphthroughputmigration) | Migrate throughput of a Gremlin API Graph. |
+| [Get-AzCosmosDBGremlinDatabaseThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbgremlindatabasethroughput) | Gets the throughput value of the API for Gremlin Database. |
+| [Get-AzCosmosDBGremlinGraphThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbgremlingraphthroughput) | Gets the throughput value of the API for Gremlin Graph. |
+| [Update-AzCosmosDBGremlinDatabaseThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbgremlindatabasethroughput) | Updates the throughput value of the API for Gremlin Database. |
+| [Update-AzCosmosDBGremlinGraphThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbgremlingraphthroughput) | Updates the throughput value of the API for Gremlin Graph. |
+| [Invoke-AzCosmosDBGremlinDatabaseThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbgremlindatabasethroughputmigration) | Migrate throughput of an API for Gremlin Database. |
+| [Invoke-AzCosmosDBGremlinGraphThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbgremlingraphthroughputmigration) | Migrate throughput of an API for Gremlin Graph. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/autoscale.md
Title: PowerShell script to create Azure Cosmos MongoDB API database and collection with autoscale
-description: Azure PowerShell script - create Azure Cosmos MongoDB API database and collection with autoscale
+ Title: PowerShell script to create Azure Cosmos DB MongoDB API database and collection with autoscale
+description: Azure PowerShell script - create Azure Cosmos DB MongoDB API database and collection with autoscale
-+ Last updated 07/30/2020 -+
-# Create a database and collection with autoscale for Azure Cosmos DB - MongoDB API
+# Create a database and collection with autoscale for Azure Cosmos DB - API for MongoDB
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Sample script
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/mongodb/ps-mongodb-autoscale.ps1 "Create a database and collection with autoscale for MongoDB API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/mongodb/ps-mongodb-autoscale.ps1 "Create a database and collection with autoscale for API for MongoDB")]
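As a minimal sketch of the autoscale portion (not the linked sample), the hedged example below creates an API for MongoDB database and an autoscale collection; names, the shard key, and the maximum RU/s value are placeholder assumptions.

```powershell
# Minimal sketch: create an API for MongoDB database and an autoscale collection.
$rg = "myResourceGroup"
$acct = "mycosmosaccount"

New-AzCosmosDBMongoDBDatabase -ResourceGroupName $rg -AccountName $acct -Name "myDatabase"

New-AzCosmosDBMongoDBCollection `
    -ResourceGroupName $rg -AccountName $acct -DatabaseName "myDatabase" `
    -Name "myCollection" `
    -Shard "myShardKey" `
    -AutoscaleMaxThroughput 4000
```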
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates a Cosmos DB Account. |
-| [New-AzCosmosDBMongoDBDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbmongodbdatabase) | Creates a MongoDB API Database. |
-| [New-AzCosmosDBMongoDBIndex](/powershell/module/az.cosmosdb/new-azcosmosdbmongodbindex) | Creates a MongoDB API Index. |
-| [New-AzCosmosDBMongoDBCollection](/powershell/module/az.cosmosdb/new-azcosmosdbmongodbcollection) | Creates a MongoDB API Collection. |
+| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates an Azure Cosmos DB Account. |
+| [New-AzCosmosDBMongoDBDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbmongodbdatabase) | Creates an API for MongoDB Database. |
+| [New-AzCosmosDBMongoDBIndex](/powershell/module/az.cosmosdb/new-azcosmosdbmongodbindex) | Creates an API for MongoDB Index. |
+| [New-AzCosmosDBMongoDBCollection](/powershell/module/az.cosmosdb/new-azcosmosdbmongodbcollection) | Creates an API for MongoDB Collection. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/create.md
Title: PowerShell script to create Azure Cosmos MongoDB API database and collection
-description: Azure PowerShell script - create Azure Cosmos MongoDB API database and collection
+ Title: PowerShell script to create Azure Cosmos DB MongoDB API database and collection
+description: Azure PowerShell script - create Azure Cosmos DB MongoDB API database and collection
-+ Last updated 05/13/2020 -+
-# Create a database and collection for Azure Cosmos DB - MongoDB API
+# Create a database and collection for Azure Cosmos DB - API for MongoDB
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Sample script
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/mongodb/ps-mongodb-create.ps1 "Create a database and collection for MongoDB API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/mongodb/ps-mongodb-create.ps1 "Create a database and collection for API for MongoDB")]
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates a Cosmos DB Account. |
-| [New-AzCosmosDBMongoDBDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbmongodbdatabase) | Creates a MongoDB API Database. |
-| [New-AzCosmosDBMongoDBIndex](/powershell/module/az.cosmosdb/new-azcosmosdbmongodbindex) | Creates a MongoDB API Index. |
-| [New-AzCosmosDBMongoDBCollection](/powershell/module/az.cosmosdb/new-azcosmosdbmongodbcollection) | Creates a MongoDB API Collection. |
+| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates an Azure Cosmos DB Account. |
+| [New-AzCosmosDBMongoDBDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbmongodbdatabase) | Creates an API for MongoDB Database. |
+| [New-AzCosmosDBMongoDBIndex](/powershell/module/az.cosmosdb/new-azcosmosdbmongodbindex) | Creates an API for MongoDB Index. |
+| [New-AzCosmosDBMongoDBCollection](/powershell/module/az.cosmosdb/new-azcosmosdbmongodbcollection) | Creates an API for MongoDB Collection. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/list-get.md
Title: PowerShell script to list and get operations in Azure Cosmos DB's API for MongoDB
-description: Azure PowerShell script - Azure Cosmos DB list and get operations for MongoDB API
+description: Azure PowerShell script - Azure Cosmos DB list and get operations for API for MongoDB
-+ Last updated 05/01/2020 -+
-# List and get databases and graphs for Azure Cosmos DB - MongoDB API
+# List and get databases and graphs for Azure Cosmos DB - API for MongoDB
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Sample script
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/mongodb/ps-mongodb-list-get.ps1 "List or get databases or collections for MongoDB API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/mongodb/ps-mongodb-list-get.ps1 "List or get databases or collections for API for MongoDB")]
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) | Lists Cosmos DB Accounts, or gets a specified Cosmos DB Account. |
-| [Get-AzCosmosDBMongoDBDatabase](/powershell/module/az.cosmosdb/get-azcosmosdbmongodbdatabase) | Lists MongoDB API Databases in an Account, or gets a specified MongoDB API Database in an Account. |
-| [Get-AzCosmosDBMongoDBCollection](/powershell/module/az.cosmosdb/get-azcosmosdbmongodbcollection) | Lists MongoDB API Collections, or gets a specified MongoDB API Collection in a Database. |
+| [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) | Lists Azure Cosmos DB Accounts, or gets a specified Azure Cosmos DB Account. |
+| [Get-AzCosmosDBMongoDBDatabase](/powershell/module/az.cosmosdb/get-azcosmosdbmongodbdatabase) | Lists API for MongoDB Databases in an Account, or gets a specified API for MongoDB Database in an Account. |
+| [Get-AzCosmosDBMongoDBCollection](/powershell/module/az.cosmosdb/get-azcosmosdbmongodbcollection) | Lists API for MongoDB Collections, or gets a specified API for MongoDB Collection in a Database. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/lock.md
Title: PowerShell script to create resource lock for Azure Cosmos MongoDB API database and collection
-description: Create resource lock for Azure Cosmos MongoDB API database and collection
+ Title: PowerShell script to create resource lock for Azure Cosmos DB MongoDB API database and collection
+description: Create resource lock for Azure Cosmos DB MongoDB API database and collection
-+ Last updated 06/12/2020 -+
-# Create a resource lock for Azure Cosmos MongoDB API database and collection using Azure PowerShell
+# Create a resource lock for Azure Cosmos DB MongoDB API database and collection using Azure PowerShell
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
If you need to install, see [Install Azure PowerShell module](/powershell/azure/
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure. > [!IMPORTANT]
-> Resource locks do not work for changes made by users connecting using any MongoDB SDK, Mongoshell, any tools or the Azure Portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
+> Resource locks do not work for changes made by users connecting using any MongoDB SDK, the Mongo shell, any tools, or the Azure Portal unless the Azure Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
## Sample script
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DBs API for MongoDB
-description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DBs API for MongoDB
+ Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB's API for MongoDB
+description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB's API for MongoDB
-+ Last updated 10/07/2020 -+
-# Throughput (RU/s) operations with PowerShell for a database or collection for Azure Cosmos DB API for MongoDB
+# Throughput (RU/s) operations with PowerShell for a database or collection for Azure Cosmos DB for MongoDB
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Get throughput
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/mongodb/ps-mongodb-ru-get.ps1 "Get throughput on a database or collection for Azure Cosmos DB API for MongoDB")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/mongodb/ps-mongodb-ru-get.ps1 "Get throughput on a database or collection for Azure Cosmos DB for MongoDB")]
## Update throughput
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/mongodb/ps-mongodb-ru-update.ps1 "Update throughput on a database or collection for Azure Cosmos DB API for MongoDB")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/mongodb/ps-mongodb-ru-update.ps1 "Update throughput on a database or collection for Azure Cosmos DB for MongoDB")]
## Migrate throughput
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/mongodb/ps-mongodb-ru-migrate.ps1 "Migrate between standard and autoscale throughput on a database or collection for Azure Cosmos DB API for MongoDB")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/mongodb/ps-mongodb-ru-migrate.ps1 "Migrate between standard and autoscale throughput on a database or collection for Azure Cosmos DB for MongoDB")]
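As a minimal, hedged sketch of these throughput operations on an API for MongoDB collection (not the linked scripts), the example below gets, updates, and migrates throughput; names and the RU/s value are placeholder assumptions.

```powershell
# Minimal sketch: get, update, and migrate throughput on an API for MongoDB collection.
$rg = "myResourceGroup"; $acct = "mycosmosaccount"
$db = "myDatabase"; $coll = "myCollection"

# Read the current throughput settings.
Get-AzCosmosDBMongoDBCollectionThroughput -ResourceGroupName $rg -AccountName $acct -DatabaseName $db -Name $coll

# Set a new manual throughput value.
Update-AzCosmosDBMongoDBCollectionThroughput -ResourceGroupName $rg -AccountName $acct -DatabaseName $db -Name $coll -Throughput 500

# Migrate the collection from manual to autoscale throughput.
Invoke-AzCosmosDBMongoDBCollectionThroughputMigration -ResourceGroupName $rg -AccountName $acct -DatabaseName $db -Name $coll -ThroughputType Autoscale
```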
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [Get-AzCosmosDBMongoDBDatabaseThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbmongodbdatabasethroughput) | Gets the throughput value of the specified Azure Cosmos DB API for MongoDB Database. |
-| [Get-AzCosmosDBMongoDBCollectionThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbmongodbcollectionthroughput) | Gets the throughput value of the specified Azure Cosmos DB API for MongoDB Collection. |
-| [Update-AzCosmosDBMongoDBDatabaseThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbmongodbdatabasethroughput) | Updates the throughput value of the Azure Cosmos DB API for MongoDB Database. |
-| [Update-AzCosmosDBMongoDBCollectionThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbmongodbcollectionthroughput) | Updates the throughput value of the Azure Cosmos DB API for MongoDB Collection. |
-| [Invoke-AzCosmosDBMongoDBDatabaseThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbmongodbdatabasethroughputmigration) | Migrate throughput of an Azure Cosmos DB API for MongoDB Collection. |
-| [Invoke-AzCosmosDBMongoDBCollectionThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbmongodbcollectionthroughputmigration) | Migrate throughput of an Azure Cosmos DB API for MongoDB Collection. |
+| [Get-AzCosmosDBMongoDBDatabaseThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbmongodbdatabasethroughput) | Gets the throughput value of the specified Azure Cosmos DB for MongoDB Database. |
+| [Get-AzCosmosDBMongoDBCollectionThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbmongodbcollectionthroughput) | Gets the throughput value of the specified Azure Cosmos DB for MongoDB Collection. |
+| [Update-AzCosmosDBMongoDBDatabaseThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbmongodbdatabasethroughput) | Updates the throughput value of the Azure Cosmos DB for MongoDB Database. |
+| [Update-AzCosmosDBMongoDBCollectionThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbmongodbcollectionthroughput) | Updates the throughput value of the Azure Cosmos DB for MongoDB Collection. |
+| [Invoke-AzCosmosDBMongoDBDatabaseThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbmongodbdatabasethroughputmigration) | Migrate throughput of an Azure Cosmos DB for MongoDB Database. |
+| [Invoke-AzCosmosDBMongoDBCollectionThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbmongodbcollectionthroughputmigration) | Migrate throughput of an Azure Cosmos DB for MongoDB Collection. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/autoscale.md
+
+ Title: PowerShell script to create Azure Cosmos DB for NoSQL database and container with autoscale
+description: Azure PowerShell script - Azure Cosmos DB create API for NoSQL database and container with autoscale
++++ Last updated : 07/30/2020+++++
+# Create a database and container with autoscale for Azure Cosmos DB - API for NoSQL
++
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
+
+## Sample script
+
+This script creates an Azure Cosmos DB account for API for NoSQL in two regions with session level consistency, a database, and a container with autoscale throughput and default index policy.
+
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-sql-autoscale.ps1 "Create an account, database, and container with autoscale for API for NoSQL")]
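As a minimal sketch of the autoscale portion (not the linked sample), the hedged example below creates a database and an autoscale container for API for NoSQL; names, the partition key path, and the maximum RU/s value are placeholder assumptions.

```powershell
# Minimal sketch: create a database and an autoscale container for API for NoSQL.
$rg = "myResourceGroup"
$acct = "mycosmosaccount"

New-AzCosmosDBSqlDatabase -ResourceGroupName $rg -AccountName $acct -Name "myDatabase"

New-AzCosmosDBSqlContainer `
    -ResourceGroupName $rg -AccountName $acct -DatabaseName "myDatabase" `
    -Name "myContainer" `
    -PartitionKeyKind "Hash" -PartitionKeyPath "/myPartitionKey" `
    -AutoscaleMaxThroughput 4000
```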
+
+## Clean up deployment
+
+After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+
+```powershell
+Remove-AzResourceGroup -ResourceGroupName "myResourceGroup"
+```
+
+## Script explanation
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Notes |
+|||
+|**Azure Cosmos DB**| |
+| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates an Azure Cosmos DB Account. |
+| [New-AzCosmosDBSqlDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbsqldatabase) | Creates an Azure Cosmos DB SQL Database. |
+| [New-AzCosmosDBSqlContainer](/powershell/module/az.cosmosdb/new-azcosmosdbsqlcontainer) | Creates a new Azure Cosmos DB SQL Container. |
+|**Azure Resource Groups**| |
+| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
+|||
+
+## Next steps
+
+For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
cosmos-db Create Index None https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/create-index-none.md
+
+ Title: PowerShell script to create a container with indexing turned off in an Azure Cosmos DB account
+description: Azure PowerShell script sample - Create a container with indexing turned off in an Azure Cosmos DB account
++++ Last updated : 05/13/2020+++++
+# Create a container with indexing turned off in an Azure Cosmos DB account using PowerShell
++
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
+
+## Sample script
+
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-container-create-index-none.ps1 "Create a container with indexing turned off in an Azure Cosmos DB account")]
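As a minimal, hedged sketch of the indexing-off portion (not the linked script), the example below builds an indexing policy with indexing mode `None` and passes it to the container; names and the partition key path are placeholder assumptions.

```powershell
# Minimal sketch: turn indexing off by passing an indexing policy with
# IndexingMode "None" and automatic indexing disabled.
$indexingPolicy = New-AzCosmosDBSqlIndexingPolicy -IndexingMode "None" -Automatic $false

New-AzCosmosDBSqlContainer `
    -ResourceGroupName "myResourceGroup" -AccountName "mycosmosaccount" `
    -DatabaseName "myDatabase" -Name "myContainer" `
    -PartitionKeyKind "Hash" -PartitionKeyPath "/myPartitionKey" `
    -IndexingPolicy $indexingPolicy
```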
+
+## Clean up deployment
+
+After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+
+```powershell
+Remove-AzResourceGroup -ResourceGroupName "myResourceGroup"
+```
+
+## Script explanation
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Notes |
+|||
+|**Azure Cosmos DB**| |
+| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates an Azure Cosmos DB Account. |
+| [New-AzCosmosDBSqlDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbsqldatabase) | Creates an Azure Cosmos DB SQL Database. |
+| [New-AzCosmosDBSqlIndexingPolicy](/powershell/module/az.cosmosdb/new-azcosmosdbsqlindexingpolicy) | Creates a PSSqlIndexingPolicy object used as a parameter for New-AzCosmosDBSqlContainer. |
+| [New-AzCosmosDBSqlContainer](/powershell/module/az.cosmosdb/new-azcosmosdbsqlcontainer) | Creates an Azure Cosmos DB SQL Container. |
+|**Azure Resource Groups**| |
+| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
+|||
+
+## Next steps
+
+For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
cosmos-db Create Large Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/create-large-partition-key.md
+
+ Title: PowerShell script to create an Azure Cosmos DB container with a large partition key
+description: Azure PowerShell script sample - Create a container with a large partition key in an Azure Cosmos DB account
++++ Last updated : 05/13/2020+++++
+# Create a container with a large partition key in an Azure Cosmos DB account using PowerShell
++
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
+
+## Sample script
+
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-container-large-partition-key.ps1 "Create a container with a large partition key in an Azure Cosmos DB account")]
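As a minimal, hedged sketch of the large partition key portion (not the linked script), the example below sets `-PartitionKeyVersion 2` on the container, which is what enables the large partition key; names and the partition key path are placeholder assumptions.

```powershell
# Minimal sketch: create a container with a large partition key (-PartitionKeyVersion 2).
New-AzCosmosDBSqlContainer `
    -ResourceGroupName "myResourceGroup" -AccountName "mycosmosaccount" `
    -DatabaseName "myDatabase" -Name "myContainer" `
    -PartitionKeyKind "Hash" -PartitionKeyPath "/myPartitionKey" `
    -PartitionKeyVersion 2
```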
+
+## Clean up deployment
+
+After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+
+```powershell
+Remove-AzResourceGroup -ResourceGroupName "myResourceGroup"
+```
+
+## Script explanation
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Notes |
+|||
+|**Azure Cosmos DB**| |
+| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates an Azure Cosmos DB Account. |
+| [New-AzCosmosDBSqlDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbsqldatabase) | Creates an Azure Cosmos DB SQL Database. |
+| [New-AzCosmosDBSqlContainer](/powershell/module/az.cosmosdb/new-azcosmosdbsqlcontainer) | Creates an Azure Cosmos DB SQL Container. |
+|**Azure Resource Groups**| |
+| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
+|||
+
+## Next steps
+
+For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/create.md
+
+ Title: PowerShell script to create Azure Cosmos DB for NoSQL database and container
+description: Azure PowerShell script - Azure Cosmos DB create API for NoSQL database and container
++++ Last updated : 05/13/2020+++++
+# Create a database and container for Azure Cosmos DB - API for NoSQL
++
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
+
+## Sample script
+
+This script creates an Azure Cosmos DB account for API for NoSQL in two regions with session level consistency, a database, and a container with a partition key, custom indexing policy, unique key policy, TTL, dedicated throughput, and last writer wins conflict resolution policy with a custom conflict resolution path that will be used when `multipleWriteLocations=true`.
+
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-sql-create.ps1 "Create an account, database, and container for API for NoSQL")]
+
+## Clean up deployment
+
+After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+
+```powershell
+Remove-AzResourceGroup -ResourceGroupName "myResourceGroup"
+```
+
+## Script explanation
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Notes |
+|||
+|**Azure Cosmos DB**| |
+| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates an Azure Cosmos DB Account. |
+| [New-AzCosmosDBSqlDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbsqldatabase) | Creates an Azure Cosmos DB SQL Database. |
+| [New-AzCosmosDBSqlUniqueKey](/powershell/module/az.cosmosdb/new-azcosmosdbsqluniquekey) | Creates a PSSqlUniqueKey object used as a parameter for New-AzCosmosDBSqlUniqueKeyPolicy. |
+| [New-AzCosmosDBSqlUniqueKeyPolicy](/powershell/module/az.cosmosdb/new-azcosmosdbsqluniquekeypolicy) | Creates a PSSqlUniqueKeyPolicy object used as a parameter for New-AzCosmosDBSqlContainer. |
+| [New-AzCosmosDBSqlCompositePath](/powershell/module/az.cosmosdb/new-azcosmosdbsqlcompositepath) | Creates a PSCompositePath object used as a parameter for New-AzCosmosDBSqlIndexingPolicy. |
+| [New-AzCosmosDBSqlIncludedPathIndex](/powershell/module/az.cosmosdb/new-azcosmosdbsqlincludedpathindex) | Creates a PSIndexes object used as a parameter for New-AzCosmosDBSqlIncludedPath. |
+| [New-AzCosmosDBSqlIncludedPath](/powershell/module/az.cosmosdb/new-azcosmosdbsqlincludedpath) | Creates a PSIncludedPath object used as a parameter for New-AzCosmosDBSqlIndexingPolicy. |
+| [New-AzCosmosDBSqlIndexingPolicy](/powershell/module/az.cosmosdb/new-azcosmosdbsqlindexingpolicy) | Creates a PSSqlIndexingPolicy object used as a parameter for New-AzCosmosDBSqlContainer. |
+| [New-AzCosmosDBSqlConflictResolutionPolicy](/powershell/module/az.cosmosdb/new-azcosmosdbsqlconflictresolutionpolicy) | Creates a PSSqlConflictResolutionPolicy object used as a parameter for New-AzCosmosDBSqlContainer. |
+| [New-AzCosmosDBSqlContainer](/powershell/module/az.cosmosdb/new-azcosmosdbsqlcontainer) | Creates a new Azure Cosmos DB SQL Container. |
+|**Azure Resource Groups**| |
+| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
+|||
+
+## Next steps
+
+For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/list-get.md
+
+ Title: PowerShell script to list and get Azure Cosmos DB for NoSQL resources
+description: Azure PowerShell script - Azure Cosmos DB list and get operations for API for NoSQL
++++ Last updated : 03/17/2020+++++
+# List and get databases and containers for Azure Cosmos DB - API for NoSQL
++
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
+
+## Sample script
+
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-sql-list-get.ps1 "List and get databases and containers for API for NoSQL")]
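As a minimal, hedged sketch of the list and get operations (not the linked script), the example below lists accounts, databases, and containers and then gets a single container; all names are placeholder assumptions.

```powershell
# Minimal sketch: list accounts, databases, and containers, then get one container.
$rg = "myResourceGroup"; $acct = "mycosmosaccount"

# List all accounts in the resource group.
Get-AzCosmosDBAccount -ResourceGroupName $rg

# List all databases in the account.
Get-AzCosmosDBSqlDatabase -ResourceGroupName $rg -AccountName $acct

# List all containers in a database.
Get-AzCosmosDBSqlContainer -ResourceGroupName $rg -AccountName $acct -DatabaseName "myDatabase"

# Get a single container by name.
Get-AzCosmosDBSqlContainer -ResourceGroupName $rg -AccountName $acct -DatabaseName "myDatabase" -Name "myContainer"
```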
+
+## Clean up deployment
+
+After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+
+```powershell
+Remove-AzResourceGroup -ResourceGroupName "myResourceGroup"
+```
+
+## Script explanation
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Notes |
+|||
+|**Azure Cosmos DB**| |
+| [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) | Lists Azure Cosmos DB Accounts, or gets a specified Azure Cosmos DB Account. |
+| [Get-AzCosmosDBSqlDatabase](/powershell/module/az.cosmosdb/get-azcosmosdbsqldatabase) | Lists Azure Cosmos DB Databases in an Account, or gets a specified Azure Cosmos DB Database in an Account. |
+| [Get-AzCosmosDBSqlContainer](/powershell/module/az.cosmosdb/get-azcosmosdbsqlcontainer) | Lists Azure Cosmos DB Containers in a Database, or gets a specified Azure Cosmos DB Container in a Database. |
+|**Azure Resource Groups**| |
+| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
+|||
+
+## Next steps
+
+For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/lock.md
+
+ Title: PowerShell script to create resource lock for Azure Cosmos DB for NoSQL database and container
+description: Create resource lock for Azure Cosmos DB for NoSQL database and container
++++++ Last updated : 06/12/2020 +++
+# Create a resource lock for Azure Cosmos DB for NoSQL database and container using Azure PowerShell
++
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
+
+> [!IMPORTANT]
+> Resource locks do not work for changes made by users connecting using any Azure Cosmos DB SDK, any tools that connect via account keys, or the Azure Portal unless the Azure Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
+
+## Sample script
+
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-sql-lock.ps1 "Create, list, and remove resource locks")]
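As a minimal, hedged sketch of the lock operations (not the linked script), the example below places a delete lock on a database under an account, lists it, and removes it; the lock name, resource names, and the Resource Manager type string are placeholder assumptions based on the Cosmos DB database resource type.

```powershell
# Minimal sketch: create, list, and remove a delete lock on a database resource.
$lockName = "myDatabaseLock"
$resourceType = "Microsoft.DocumentDB/databaseAccounts/sqlDatabases"
$resourceName = "mycosmosaccount/myDatabase"   # account/database child-resource name

New-AzResourceLock `
    -LockName $lockName -LockLevel CanNotDelete `
    -ResourceGroupName "myResourceGroup" `
    -ResourceType $resourceType -ResourceName $resourceName

# List locks on the resource, then remove the lock by name when no longer needed.
Get-AzResourceLock -ResourceGroupName "myResourceGroup" -ResourceType $resourceType -ResourceName $resourceName
Remove-AzResourceLock -LockName $lockName -ResourceGroupName "myResourceGroup" -ResourceType $resourceType -ResourceName $resourceName
```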
+
+## Clean up deployment
+
+After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+
+```powershell
+Remove-AzResourceGroup -ResourceGroupName "myResourceGroup"
+```
+
+## Script explanation
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Notes |
+|||
+|**Azure Resource**| |
+| [New-AzResourceLock](/powershell/module/az.resources/new-azresourcelock) | Creates a resource lock. |
+| [Get-AzResourceLock](/powershell/module/az.resources/get-azresourcelock) | Gets a resource lock, or lists resource locks. |
+| [Remove-AzResourceLock](/powershell/module/az.resources/remove-azresourcelock) | Removes a resource lock. |
+|||
+
+## Next steps
+
+For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/throughput.md
+
+ Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB for NoSQL database or container
+description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB for NoSQL database or container
++++ Last updated : 10/07/2020+++++
+# Throughput (RU/s) operations with PowerShell for a database or container for Azure Cosmos DB for NoSQL
++
+This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
+
+## Get throughput
+
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-sql-ru-get.ps1 "Get throughput (RU/s) for Azure Cosmos DB for NoSQL database or container")]
+
+## Update throughput
+
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-sql-ru-update.ps1 "Update throughput (RU/s) for an Azure Cosmos DB for NoSQL database or container")]
+
+## Migrate throughput
+
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-sql-ru-migrate.ps1 "Migrate between standard and autoscale throughput on an Azure Cosmos DB for NoSQL database or container")]
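As a minimal, hedged sketch of the migration step (not the linked scripts), the example below checks a container's current throughput and migrates it to autoscale; names are placeholder assumptions.

```powershell
# Minimal sketch: migrate a container between manual and autoscale throughput.
$rg = "myResourceGroup"; $acct = "mycosmosaccount"
$db = "myDatabase"; $container = "myContainer"

# Check the current throughput settings first.
Get-AzCosmosDBSqlContainerThroughput -ResourceGroupName $rg -AccountName $acct -DatabaseName $db -Name $container

# Move to autoscale (use -ThroughputType Manual to go back to standard throughput).
Invoke-AzCosmosDBSqlContainerThroughputMigration -ResourceGroupName $rg -AccountName $acct -DatabaseName $db -Name $container -ThroughputType Autoscale
```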
+
+## Clean up deployment
+
+After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+
+```powershell
+Remove-AzResourceGroup -ResourceGroupName "myResourceGroup"
+```
+
+## Script explanation
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Notes |
+|||
+|**Azure Cosmos DB**| |
+| [Get-AzCosmosDBSqlDatabaseThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbsqldatabasethroughput) | Get the throughput provisioned on an Azure Cosmos DB for NoSQL Database. |
+| [Get-AzCosmosDBSqlContainerThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbsqlcontainerthroughput) | Get the throughput provisioned on an Azure Cosmos DB for NoSQL Container. |
+| [Update-AzCosmosDBSqlDatabaseThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbsqldatabasethroughput) | Updates the throughput value of an Azure Cosmos DB for NoSQL Database. |
+| [Update-AzCosmosDBSqlContainerThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbsqlcontainerthroughput) | Updates the throughput value of an Azure Cosmos DB for NoSQL Container. |
+| [Invoke-AzCosmosDBSqlDatabaseThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbsqldatabasethroughputmigration) | Migrate throughput of an Azure Cosmos DB for NoSQL Database. |
+| [Invoke-AzCosmosDBSqlContainerThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbsqlcontainerthroughputmigration) | Migrate throughput of an Azure Cosmos DB for NoSQL Container. |
+|**Azure Resource Groups**| |
+| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
+|||
+
+## Next steps
+
+For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/autoscale.md
- Title: PowerShell script to create Azure Cosmos DB SQL API database and container with autoscale
-description: Azure PowerShell script - Azure Cosmos DB create SQL API database and container with autoscale
---- Previously updated : 07/30/2020-----
-# Create a database and container with autoscale for Azure Cosmos DB - SQL API
--
-This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
-
-Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
-
-## Sample script
-
-This script creates a Cosmos account for Core (SQL) API in two regions with session level consistency, a database, and a container with autoscale throughput and default index policy.
-
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-sql-autoscale.ps1 "Create an account, database, and container with autoscale for Core (SQL) API")]
-
-## Clean up deployment
-
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-
-```powershell
-Remove-AzResourceGroup -ResourceGroupName "myResourceGroup"
-```
-
-## Script explanation
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-|**Azure Cosmos DB**| |
-| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates a Cosmos DB Account. |
-| [New-AzCosmosDBSqlDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbsqldatabase) | Creates a Cosmos DB SQL Database. |
-| [New-AzCosmosDBSqlContainer](/powershell/module/az.cosmosdb/new-azcosmosdbsqlcontainer) | Creates a new Cosmos DB SQL Container. |
-|**Azure Resource Groups**| |
-| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
-
-## Next steps
-
-For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
cosmos-db Create Index None https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/create-index-none.md
- Title: PowerShell script to create a container with indexing turned off in an Azure Cosmos DB account
-description: Azure PowerShell script sample - Create a container with indexing turned off in an Azure Cosmos DB account
---- Previously updated : 05/13/2020-----
-# Create a container with indexing turned off in an Azure Cosmos DB account using PowerShell
--
-This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
-
-Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
-
-## Sample script
-
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-container-create-index-none.ps1 "Create a container with indexing turned off in an Azure Cosmos DB account")]
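
The referenced script isn't shown inline, so here is a minimal sketch of how a container with indexing turned off can be created, assuming the account and database already exist; all names and the throughput value are placeholders.

```powershell
# Illustrative only: assumes the account and database already exist; names are placeholders.
$rg = "myResourceGroup"; $account = "mycosmosaccount"; $db = "myDatabase"

# An indexing policy that turns indexing off entirely
$indexingPolicy = New-AzCosmosDBSqlIndexingPolicy -Automatic $false -IndexingMode None

New-AzCosmosDBSqlContainer -ResourceGroupName $rg -AccountName $account -DatabaseName $db `
    -Name "myContainer" -PartitionKeyKind Hash -PartitionKeyPath "/myPartitionKey" `
    -IndexingPolicy $indexingPolicy -Throughput 400
```
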
-
-## Clean up deployment
-
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-
-```powershell
-Remove-AzResourceGroup -ResourceGroupName "myResourceGroup"
-```
-
-## Script explanation
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-|**Azure Cosmos DB**| |
-| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates a Cosmos DB Account. |
-| [New-AzCosmosDBSqlDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbsqldatabase) | Creates a Cosmos DB SQL Database. |
-| [New-AzCosmosDBSqlIndexingPolicy](/powershell/module/az.cosmosdb/new-azcosmosdbsqlindexingpolicy) | Creates a PSSqlIndexingPolicy object used as a parameter for New-AzCosmosDBSqlContainer. |
-| [New-AzCosmosDBSqlContainer](/powershell/module/az.cosmosdb/new-azcosmosdbsqlcontainer) | Creates a Cosmos DB SQL Container. |
-|**Azure Resource Groups**| |
-| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
-
-## Next steps
-
-For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
cosmos-db Create Large Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/create-large-partition-key.md
- Title: PowerShell script to create an Azure Cosmos DB container with a large partition key
-description: Azure PowerShell script sample - Create a container with a large partition key in an Azure Cosmos DB account
---- Previously updated : 05/13/2020-----
-# Create a container with a large partition key in an Azure Cosmos DB account using PowerShell
--
-This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
-
-Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
-
-## Sample script
-
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-container-large-partition-key.ps1 "Create a container with a large partition key in an Azure Cosmos account")]
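
As a hedged illustration of what the referenced script does, the sketch below creates a container with a large partition key by setting `-PartitionKeyVersion 2`; the names and throughput value are placeholders.

```powershell
# Illustrative only: names and RU/s are placeholders; -PartitionKeyVersion 2 enables large partition key values.
$rg = "myResourceGroup"; $account = "mycosmosaccount"; $db = "myDatabase"

New-AzCosmosDBSqlContainer -ResourceGroupName $rg -AccountName $account -DatabaseName $db `
    -Name "myContainer" -PartitionKeyKind Hash -PartitionKeyPath "/myPartitionKey" `
    -PartitionKeyVersion 2 -Throughput 400
```
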
-
-## Clean up deployment
-
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-
-```powershell
-Remove-AzResourceGroup -ResourceGroupName "myResourceGroup"
-```
-
-## Script explanation
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-|**Azure Cosmos DB**| |
-| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates a Cosmos DB Account. |
-| [New-AzCosmosDBSqlDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbsqldatabase) | Creates a Cosmos DB SQL Database. |
-| [New-AzCosmosDBSqlContainer](/powershell/module/az.cosmosdb/new-azcosmosdbsqlcontainer) | Creates a Cosmos DB SQL Container. |
-|**Azure Resource Groups**| |
-| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
-
-## Next steps
-
-For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/create.md
- Title: PowerShell script to create Azure Cosmos DB SQL API database and container
-description: Azure PowerShell script - Azure Cosmos DB create SQL API database and container
---- Previously updated : 05/13/2020-----
-# Create a database and container for Azure Cosmos DB - SQL API
--
-This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
-
-Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
-
-## Sample script
-
-This script creates a Cosmos account for SQL (Core) API in two regions with session level consistency, a database, and a container with a partition key, custom indexing policy, unique key policy, TTL, dedicated throughput, and last writer wins conflict resolution policy with a custom conflict resolution path that will be used when `multipleWriteLocations=true`.
-
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-sql-create.ps1 "Create an account, database, and container for SQL API")]
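
Because the referenced script covers several features at once, the following condensed sketch shows how those pieces are typically composed with the cmdlets listed in the table that follows; names, paths, TTL, and RU/s values are placeholders, and the conflict-resolution path only takes effect on accounts with multiple write locations.

```powershell
# Illustrative only: assumes the account and database exist; names, paths, and values are placeholders.
$rg = "myResourceGroup"; $account = "mycosmosaccount"; $db = "myDatabase"

# Unique key policy
$uniqueKey       = New-AzCosmosDBSqlUniqueKey -Path "/myUniqueKey"
$uniqueKeyPolicy = New-AzCosmosDBSqlUniqueKeyPolicy -UniqueKey $uniqueKey

# Custom indexing policy: index one path, exclude everything else
$pathIndex      = New-AzCosmosDBSqlIncludedPathIndex -DataType String -Kind Range
$includedPath   = New-AzCosmosDBSqlIncludedPath -Path "/myIncludedPath/*" -Index $pathIndex
$indexingPolicy = New-AzCosmosDBSqlIndexingPolicy -IncludedPath $includedPath `
    -ExcludedPath "/*" -IndexingMode Consistent -Automatic $true

# Last-writer-wins conflict resolution with a custom resolution path
$conflictPolicy = New-AzCosmosDBSqlConflictResolutionPolicy -Type LastWriterWins -Path "/myResolutionPath"

New-AzCosmosDBSqlContainer -ResourceGroupName $rg -AccountName $account -DatabaseName $db `
    -Name "myContainer" -PartitionKeyKind Hash -PartitionKeyPath "/myPartitionKey" `
    -IndexingPolicy $indexingPolicy -UniqueKeyPolicy $uniqueKeyPolicy `
    -ConflictResolutionPolicy $conflictPolicy -TtlInSeconds 86400 -Throughput 400
```
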
-
-## Clean up deployment
-
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-
-```powershell
-Remove-AzResourceGroup -ResourceGroupName "myResourceGroup"
-```
-
-## Script explanation
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-|**Azure Cosmos DB**| |
-| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates a Cosmos DB Account. |
-| [New-AzCosmosDBSqlDatabase](/powershell/module/az.cosmosdb/new-azcosmosdbsqldatabase) | Creates a Cosmos DB SQL Database. |
-| [New-AzCosmosDBSqlUniqueKey](/powershell/module/az.cosmosdb/new-azcosmosdbsqluniquekey) | Creates a PSSqlUniqueKey object used as a parameter for New-AzCosmosDBSqlUniqueKeyPolicy. |
-| [New-AzCosmosDBSqlUniqueKeyPolicy](/powershell/module/az.cosmosdb/new-azcosmosdbsqluniquekeypolicy) | Creates a PSSqlUniqueKeyPolicy object used as a parameter for New-AzCosmosDBSqlContainer. |
-| [New-AzCosmosDBSqlCompositePath](/powershell/module/az.cosmosdb/new-azcosmosdbsqlcompositepath) | Creates a PSCompositePath object used as a parameter for New-AzCosmosDBSqlIndexingPolicy. |
-| [New-AzCosmosDBSqlIncludedPathIndex](/powershell/module/az.cosmosdb/new-azcosmosdbsqlincludedpathindex) | Creates a PSIndexes object used as a parameter for New-AzCosmosDBSqlIncludedPath. |
-| [New-AzCosmosDBSqlIncludedPath](/powershell/module/az.cosmosdb/new-azcosmosdbsqlincludedpath) | Creates a PSIncludedPath object used as a parameter for New-AzCosmosDBSqlIndexingPolicy. |
-| [New-AzCosmosDBSqlIndexingPolicy](/powershell/module/az.cosmosdb/new-azcosmosdbsqlindexingpolicy) | Creates a PSSqlIndexingPolicy object used as a parameter for New-AzCosmosDBSqlContainer. |
-| [New-AzCosmosDBSqlConflictResolutionPolicy](/powershell/module/az.cosmosdb/new-azcosmosdbsqlconflictresolutionpolicy) | Creates a PSSqlConflictResolutionPolicy object used as a parameter for New-AzCosmosDBSqlContainer. |
-| [New-AzCosmosDBSqlContainer](/powershell/module/az.cosmosdb/new-azcosmosdbsqlcontainer) | Creates a new Cosmos DB SQL Container. |
-|**Azure Resource Groups**| |
-| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
-
-## Next steps
-
-For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/list-get.md
- Title: PowerShell script to list and get Azure Cosmos DB SQL API resources
-description: Azure PowerShell script - Azure Cosmos DB list and get operations for SQL API
---- Previously updated : 03/17/2020-----
-# List and get databases and containers for Azure Cosmos DB - SQL (Core) API
--
-This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
-
-Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
-
-## Sample script
-
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-sql-list-get.ps1 "List and get databases and containers for SQL API")]
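
As a quick orientation to the list and get pattern, a minimal sketch might look like this; omit `-Name` to list, supply it to get a single resource. Names are placeholders.

```powershell
# Illustrative only: names are placeholders.
$rg = "myResourceGroup"; $account = "mycosmosaccount"; $db = "myDatabase"

Get-AzCosmosDBAccount -ResourceGroupName $rg                    # list accounts in the group
Get-AzCosmosDBAccount -ResourceGroupName $rg -Name $account     # get one account

Get-AzCosmosDBSqlDatabase -ResourceGroupName $rg -AccountName $account    # list databases
Get-AzCosmosDBSqlContainer -ResourceGroupName $rg -AccountName $account `
    -DatabaseName $db -Name "myContainer"                                  # get one container
```
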
-
-## Clean up deployment
-
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-
-```powershell
-Remove-AzResourceGroup -ResourceGroupName "myResourceGroup"
-```
-
-## Script explanation
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-|**Azure Cosmos DB**| |
-| [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) | Lists Cosmos DB Accounts, or gets a specified Cosmos DB Account. |
-| [Get-AzCosmosDBSqlDatabase](/powershell/module/az.cosmosdb/get-azcosmosdbsqldatabase) | Lists Cosmos DB Databases in an Account, or gets a specified Cosmos DB Database in an Account. |
-| [Get-AzCosmosDBSqlContainer](/powershell/module/az.cosmosdb/get-azcosmosdbsqlcontainer) | Lists Cosmos DB Containers in a Database, or gets a specified Cosmos DB Container in a Database. |
-|**Azure Resource Groups**| |
-| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
-
-## Next steps
-
-For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/lock.md
- Title: PowerShell script to create resource lock for Azure Cosmos SQL API database and container
-description: Create resource lock for Azure Cosmos SQL API database and container
------ Previously updated : 06/12/2020 ---
-# Create a resource lock for Azure Cosmos SQL API database and container using Azure PowerShell
--
-This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
-
-Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
-
-> [!IMPORTANT]
-> Resource locks do not work for changes made by users connecting using any Cosmos DB SDK, any tools that connect via account keys, or the Azure Portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
-
-## Sample script
-
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-sql-lock.ps1 "Create, list, and remove resource locks")]
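
For orientation, a minimal sketch of creating, listing, and removing a lock on a SQL API database follows; the resource type and name layout are assumptions based on the Azure resource hierarchy, and all names are placeholders.

```powershell
# Illustrative only: resource names are placeholders.
$rg = "myResourceGroup"; $account = "mycosmosaccount"; $db = "myDatabase"
$resourceType = "Microsoft.DocumentDB/databaseAccounts/sqlDatabases"
$resourceName = "$account/$db"

# Prevent the database from being deleted
New-AzResourceLock -LockName "myDatabaseLock" -LockLevel CanNotDelete `
    -ResourceGroupName $rg -ResourceType $resourceType -ResourceName $resourceName

# List locks on the database
Get-AzResourceLock -ResourceGroupName $rg -ResourceType $resourceType -ResourceName $resourceName

# Remove the lock by its id
$lockId = (Get-AzResourceLock -ResourceGroupName $rg -ResourceType $resourceType -ResourceName $resourceName).LockId
Remove-AzResourceLock -LockId $lockId
```
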
-
-## Clean up deployment
-
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-
-```powershell
-Remove-AzResourceGroup -ResourceGroupName "myResourceGroup"
-```
-
-## Script explanation
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-|**Azure Resource**| |
-| [New-AzResourceLock](/powershell/module/az.resources/new-azresourcelock) | Creates a resource lock. |
-| [Get-AzResourceLock](/powershell/module/az.resources/get-azresourcelock) | Gets a resource lock, or lists resource locks. |
-| [Remove-AzResourceLock](/powershell/module/az.resources/remove-azresourcelock) | Removes a resource lock. |
-|||
-
-## Next steps
-
-For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/throughput.md
- Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB SQL API database or container
-description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB SQL API database or container
---- Previously updated : 10/07/2020-----
-# Throughput (RU/s) operations with PowerShell for a database or container for Azure Cosmos DB Core (SQL) API
--
-This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
-
-Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
-
-## Get throughput
-
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-sql-ru-get.ps1 "Get throughput (RU/s) for Azure Cosmos DB Core (SQL) API database or container")]
-
-## Update throughput
-
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-sql-ru-update.ps1 "Update throughput (RU/s) for an Azure Cosmos DB Core (SQL) API database or container")]
-
-## Migrate throughput
-
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/sql/ps-sql-ru-migrate.ps1 "Migrate between standard and autoscale throughput on an Azure Cosmos DB Core (SQL) API database or container")]
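
For orientation, a condensed sketch of the three operations (get, update, migrate) follows; names and RU/s values are placeholders.

```powershell
# Illustrative only: names and RU/s values are placeholders.
$rg = "myResourceGroup"; $account = "mycosmosaccount"; $db = "myDatabase"; $container = "myContainer"

# Get current throughput
Get-AzCosmosDBSqlDatabaseThroughput  -ResourceGroupName $rg -AccountName $account -Name $db
Get-AzCosmosDBSqlContainerThroughput -ResourceGroupName $rg -AccountName $account -DatabaseName $db -Name $container

# Update standard (manual) throughput on the container
Update-AzCosmosDBSqlContainerThroughput -ResourceGroupName $rg -AccountName $account `
    -DatabaseName $db -Name $container -Throughput 500

# Migrate the container between standard and autoscale throughput
Invoke-AzCosmosDBSqlContainerThroughputMigration -ResourceGroupName $rg -AccountName $account `
    -DatabaseName $db -Name $container -ThroughputType Autoscale
```
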
-
-## Clean up deployment
-
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-
-```powershell
-Remove-AzResourceGroup -ResourceGroupName "myResourceGroup"
-```
-
-## Script explanation
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-|**Azure Cosmos DB**| |
-| [Get-AzCosmosDBSqlDatabaseThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbsqldatabasethroughput) | Get the throughput provisioned on an Azure Cosmos DB Core (SQL) API Database. |
-| [Get-AzCosmosDBSqlContainerThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbsqlcontainerthroughput) | Get the throughput provisioned on an Azure Cosmos DB Core (SQL) API Container. |
-| [Update-AzCosmosDBSqlDatabaseThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbsqldatabase) | Updates the throughput value of an Azure Cosmos DB Core (SQL) API Database. |
-| [Update-AzCosmosDBSqlContainerThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbsqlcontainer) | Updates the throughput value of an Azure Cosmos DB Core (SQL) API Container. |
-| [Invoke-AzCosmosDBSqlDatabaseThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbsqldatabasethroughputmigration) | Migrate throughput of an Azure Cosmos DB Core (SQL) API Database. |
-| [Invoke-AzCosmosDBSqlContainerThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbsqlcontainerthroughputmigration) | Migrate throughput of an Azure Cosmos DB Core (SQL) API Container. |
-|**Azure Resource Groups**| |
-| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
-
-## Next steps
-
-For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/autoscale.md
Title: PowerShell script to create a table with autoscale in Azure Cosmos DB Table API
-description: PowerShell script to create a table with autoscale in Azure Cosmos DB Table API
+ Title: PowerShell script to create a table with autoscale in Azure Cosmos DB for Table
+description: PowerShell script to create a table with autoscale in Azure Cosmos DB for Table
-+ Last updated 07/30/2020 -+
-# Create a table with autoscale for Azure Cosmos DB - Table API
+# Create a table with autoscale for Azure Cosmos DB - API for Table
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Sample script
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/table/ps-table-autoscale.ps1 "Create a table with autoscale for Table API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/table/ps-table-autoscale.ps1 "Create a table with autoscale for API for Table")]
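
As a brief, hedged illustration of the autoscale case, a single call such as the following creates the table, assuming the account already exists; the names and autoscale maximum are placeholders.

```powershell
# Illustrative only: assumes an API for Table account already exists; names and the autoscale maximum are placeholders.
New-AzCosmosDBTable -ResourceGroupName "myResourceGroup" -AccountName "mycosmosaccount" `
    -Name "myTable" -AutoscaleMaxThroughput 4000
```
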
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates a Cosmos DB Account. |
-| [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) | Creates a Table API Table. |
+| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates an Azure Cosmos DB Account. |
+| [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) | Creates an API for Table table. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/create.md
Title: PowerShell script to create a table in Azure Cosmos DB Table API
-description: Learn how to use a PowerShell script to update the throughput for a database or a container in Azure Cosmos DB Table API
+ Title: PowerShell script to create a table in Azure Cosmos DB for Table
+description: Learn how to use a PowerShell script to create a table in Azure Cosmos DB for Table
-+ Last updated 05/13/2020 -+
-# Create a table for Azure Cosmos DB - Table API
+# Create a table for Azure Cosmos DB - API for Table
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Sample script
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/table/ps-table-create.ps1 "Create a table for Table API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/table/ps-table-create.ps1 "Create a table for API for Table")]
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates a Cosmos DB Account. |
-| [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) | Creates a Table API Table. |
+| [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) | Creates an Azure Cosmos DB Account. |
+| [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) | Creates an API for Table table. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/list-get.md
Title: PowerShell script to list and get Azure Cosmos DB Table API operations
-description: Azure PowerShell script - Azure Cosmos DB list and get operations for Table API
+ Title: PowerShell script to list and get Azure Cosmos DB for Table operations
+description: Azure PowerShell script - Azure Cosmos DB list and get operations for API for Table
-+ Last updated 07/31/2020 -+
-# List and get tables for Azure Cosmos DB - Table API
+# List and get tables for Azure Cosmos DB - API for Table
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Sample script
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/table/ps-table-list-get.ps1 "List or get tables for Table API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/table/ps-table-list-get.ps1 "List or get tables for API for Table")]
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) | Lists Cosmos DB Accounts, or gets a specified Cosmos DB Account. |
-| [Get-AzCosmosDBTable](/powershell/module/az.cosmosdb/get-azcosmosdbtable) | Lists Table API Tables in an Account, or gets a specified Table API Table in an Account. |
+| [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) | Lists Azure Cosmos DB Accounts, or gets a specified Azure Cosmos DB Account. |
+| [Get-AzCosmosDBTable](/powershell/module/az.cosmosdb/get-azcosmosdbtable) | Lists the API for Table tables in an account, or gets a specified API for Table table in an account. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/lock.md
Title: PowerShell script to create resource lock for Azure Cosmos Table API table
-description: Create resource lock for Azure Cosmos Table API table
+ Title: PowerShell script to create resource lock for Azure Cosmos DB Table API table
+description: Create resource lock for Azure Cosmos DB Table API table
-+ Last updated 06/12/2020 -+
-# Create a resource lock for Azure Cosmos Table API table using Azure PowerShell
+# Create a resource lock for Azure Cosmos DB Table API table using Azure PowerShell
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
If you need to install, see [Install Azure PowerShell module](/powershell/azure/
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure. > [!IMPORTANT]
-> Resource locks do not work for changes made by users connecting using any Cosmos DB SDK, any tools that connect via account keys, or the Azure Portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
+> Resource locks do not work for changes made by users connecting using any Azure Cosmos DB SDK, any tools that connect via account keys, or the Azure portal unless the Azure Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
## Sample script
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for for Azure Cosmos DB Table API
-description: PowerShell scripts for throughput (RU/s) operations for for Azure Cosmos DB Table API
+ Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB for Table
+description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB for Table
-+ Last updated 10/07/2020 -+
-# Throughput (RU/s) operations with PowerShell for a table for Azure Cosmos DB - Table API
+# Throughput (RU/s) operations with PowerShell for a table for Azure Cosmos DB - API for Table
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sig
## Get throughput
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/table/ps-table-ru-get.ps1 "Get throughput on a table for Table API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/table/ps-table-ru-get.ps1 "Get throughput on a table for API for Table")]
## Update throughput
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/table/ps-table-ru-update.ps1 "Update throughput on a table for Table API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/table/ps-table-ru-update.ps1 "Update throughput on a table for API for Table")]
## Migrate throughput
-[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/table/ps-table-ru-migrate.ps1 "Migrate between standard and autoscale throughput on a table for Table API")]
+[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/table/ps-table-ru-migrate.ps1 "Migrate between standard and autoscale throughput on a table for API for Table")]
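
For orientation, a condensed sketch of the three table throughput operations follows; names and RU/s values are placeholders.

```powershell
# Illustrative only: names and RU/s values are placeholders.
$rg = "myResourceGroup"; $account = "mycosmosaccount"; $table = "myTable"

# Get current throughput on the table
Get-AzCosmosDBTableThroughput -ResourceGroupName $rg -AccountName $account -Name $table

# Update standard (manual) throughput
Update-AzCosmosDBTableThroughput -ResourceGroupName $rg -AccountName $account -Name $table -Throughput 500

# Migrate between standard and autoscale throughput
Invoke-AzCosmosDBTableThroughputMigration -ResourceGroupName $rg -AccountName $account `
    -Name $table -ThroughputType Autoscale
```
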
## Clean up deployment
This script uses the following commands. Each command in the table links to comm
| Command | Notes | ||| |**Azure Cosmos DB**| |
-| [Get-AzCosmosDBTableThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbtablethroughput) | Gets the throughput value of the specified Table API Table. |
-| [Update-AzCosmosDBMongoDBCollectionThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbsqldatabasethroughput) | Updates the throughput value of the Table API Table. |
-| [Invoke-AzCosmosDBTableThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbtablethroughputmigration) | Migrate throughput of a Table API Table. |
+| [Get-AzCosmosDBTableThroughput](/powershell/module/az.cosmosdb/get-azcosmosdbtablethroughput) | Gets the throughput value of the specified API for Table table. |
+| [Update-AzCosmosDBTableThroughput](/powershell/module/az.cosmosdb/update-azcosmosdbtablethroughput) | Updates the throughput value of the API for Table table. |
+| [Invoke-AzCosmosDBTableThroughputMigration](/powershell/module/az.cosmosdb/invoke-azcosmosdbtablethroughputmigration) | Migrates the throughput of an API for Table table. |
|**Azure Resource Groups**| | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
cosmos-db Secure Access To Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/secure-access-to-data.md
description: Learn about access control concepts in Azure Cosmos DB, including p
-+ Last updated 05/26/2022-+ # Secure access to data in Azure Cosmos DB This article provides an overview of data access control in Azure Cosmos DB.
Primary/secondary keys provide access to all the administrative resources for th
### <a id="key-rotation"></a> Key rotation and regeneration > [!NOTE]
-> The following section describes the steps to rotate and regenerate keys for the SQL API. If you're using a different API, see the [Azure Cosmos DB API for Mongo DB](database-security.md?tabs=mongo-api#key-rotation), [Cassandra API](database-security.md?tabs=cassandra-api#key-rotation), [Gremlin API](database-security.md?tabs=gremlin-api#key-rotation), or [Table API](database-security.md?tabs=table-api#key-rotation) sections.
+> The following section describes the steps to rotate and regenerate keys for the API for NoSQL. If you're using a different API, see the [API for MongoDB](database-security.md?tabs=mongo-api#key-rotation), [API for Cassandra](database-security.md?tabs=cassandra-api#key-rotation), [API for Gremlin](database-security.md?tabs=gremlin-api#key-rotation), or [API for Table](database-security.md?tabs=table-api#key-rotation) sections.
> > To monitor your account for key updates and key regeneration, see [monitor key updates with metrics and alerts](monitor-account-key-updates.md) article.
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
-1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
1. Replace your primary key with the secondary key in your application.
The process of key rotation and regeneration is simple. First, make sure that **
:::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
-1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
1. Replace your secondary key with the primary key in your application.
The process of key rotation and regeneration is simple. First, make sure that **
### Code sample to use a primary key
-The following code sample illustrates how to use a Cosmos DB account endpoint and primary key to instantiate a CosmosClient:
+The following code sample illustrates how to use an Azure Cosmos DB account endpoint and primary key to instantiate a CosmosClient:
```csharp // Read the Azure Cosmos DB endpointUrl and authorization keys from config.
Azure Cosmos DB RBAC is the ideal access control method in situations where:
See [Configure role-based access control for your Azure Cosmos DB account](how-to-setup-rbac.md) to learn more about Azure Cosmos DB RBAC.
-For information and sample code to configure RBAC for the Azure Cosmos DB API for MongoDB, see [Configure role-based access control for your Azure Cosmos DB API for MongoDB](mongodb/how-to-setup-rbac.md).
+For information and sample code to configure RBAC for the Azure Cosmos DB for MongoDB, see [Configure role-based access control for your Azure Cosmos DB for MongoDB](mongodb/how-to-setup-rbac.md).
## <a id="resource-tokens"></a> Resource tokens
Resource tokens provide access to the application resources within a database. R
- Use a hash resource token specifically constructed for the user, resource, and permission. - Are time bound with a customizable validity period. The default valid time span is one hour. Token lifetime, however, may be explicitly specified, up to a maximum of five hours. - Provide a safe alternative to giving out the primary key.-- Enable clients to read, write, and delete resources in the Cosmos DB account according to the permissions they've been granted.
+- Enable clients to read, write, and delete resources in the Azure Cosmos DB account according to the permissions they've been granted.
-You can use a resource token (by creating Cosmos DB users and permissions) when you want to provide access to resources in your Cosmos DB account to a client that cannot be trusted with the primary key.
+You can use a resource token (by creating Azure Cosmos DB users and permissions) when you want to provide access to resources in your Azure Cosmos DB account to a client that cannot be trusted with the primary key.
-Cosmos DB resource tokens provide a safe alternative that enables clients to read, write, and delete resources in your Cosmos DB account according to the permissions you've granted, and without need for either a primary or read only key.
+Azure Cosmos DB resource tokens provide a safe alternative that enables clients to read, write, and delete resources in your Azure Cosmos DB account according to the permissions you've granted, and without need for either a primary or read only key.
Here is a typical design pattern whereby resource tokens may be requested, generated, and delivered to clients: 1. A mid-tier service is set up to serve a mobile application to share user photos.
-2. The mid-tier service possesses the primary key of the Cosmos DB account.
+2. The mid-tier service possesses the primary key of the Azure Cosmos DB account.
3. The photo app is installed on end-user mobile devices. 4. On login, the photo app establishes the identity of the user with the mid-tier service. This mechanism of identity establishment is purely up to the application. 5. Once the identity is established, the mid-tier service requests permissions based on the identity. 6. The mid-tier service sends a resource token back to the phone app.
-7. The phone app can continue to use the resource token to directly access Cosmos DB resources with the permissions defined by the resource token and for the interval allowed by the resource token.
+7. The phone app can continue to use the resource token to directly access Azure Cosmos DB resources with the permissions defined by the resource token and for the interval allowed by the resource token.
8. When the resource token expires, subsequent requests receive a 401 unauthorized exception. At this point, the phone app re-establishes the identity and requests a new resource token. :::image type="content" source="./media/secure-access-to-data/resourcekeyworkflow.png" alt-text="Azure Cosmos DB resource tokens workflow" border="false":::
-Resource token generation and management are handled by the native Cosmos DB client libraries; however, if you use REST you must construct the request/authentication headers. For more information on creating authentication headers for REST, see [Access Control on Cosmos DB Resources](/rest/api/cosmos-db/access-control-on-cosmosdb-resources) or the source code for our [.NET SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos/src/Authorization/AuthorizationHelper.cs) or [Node.js SDK](https://github.com/Azure/azure-cosmos-js/blob/master/src/auth.ts).
+Resource token generation and management are handled by the native Azure Cosmos DB client libraries; however, if you use REST you must construct the request/authentication headers. For more information on creating authentication headers for REST, see [Access Control on Azure Cosmos DB Resources](/rest/api/cosmos-db/access-control-on-cosmosdb-resources) or the source code for our [.NET SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos/src/Authorization/AuthorizationHelper.cs) or [Node.js SDK](https://github.com/Azure/azure-cosmos-js/blob/master/src/auth.ts).
For an example of a middle tier service used to generate or broker resource tokens, see the [ResourceTokenBroker app](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/xamarin/UserItems/ResourceTokenBroker/ResourceTokenBroker/Controllers). ### Users<a id="users"></a>
-Azure Cosmos DB users are associated with a Cosmos database. Each database can contain zero or more Cosmos DB users. The following code sample shows how to create a Cosmos DB user using the [Azure Cosmos DB .NET SDK v3](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/UserManagement).
+Azure Cosmos DB users are associated with an Azure Cosmos DB database. Each database can contain zero or more Azure Cosmos DB users. The following code sample shows how to create an Azure Cosmos DB user using the [Azure Cosmos DB .NET SDK v3](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/UserManagement).
```csharp // Create a user.
User user = await database.CreateUserAsync("User 1");
``` > [!NOTE]
-> Each Cosmos DB user has a ReadAsync() method that can be used to retrieve the list of [permissions](#permissions) associated with the user.
+> Each Azure Cosmos DB user has a ReadAsync() method that can be used to retrieve the list of [permissions](#permissions) associated with the user.
### Permissions<a id="permissions"></a>
A permission resource is associated with a user and assigned to a specific resou
> [!NOTE] > In order to run stored procedures the user must have the All permission on the container in which the stored procedure will be run.
-If you enable the [diagnostic logs on data-plane requests](cosmosdb-monitor-resource-logs.md), the following two properties corresponding to the permission are logged:
+If you enable the [diagnostic logs on data-plane requests](monitor-resource-logs.md), the following two properties corresponding to the permission are logged:
* **resourceTokenPermissionId** - This property indicates the resource token permission ID that you have specified.
As a database service, Azure Cosmos DB enables you to search, select, modify and
## Next steps -- To learn more about Cosmos database security, see [Cosmos DB Database security](database-security.md).
+- To learn more about Azure Cosmos DB database security, see [Azure Cosmos DB Database security](database-security.md).
- To learn how to construct Azure Cosmos DB authorization tokens, see [Access Control on Azure Cosmos DB Resources](/rest/api/cosmos-db/access-control-on-cosmosdb-resources). - For user management samples with users and permissions, see [.NET SDK v3 user management samples](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/UserManagement/UserManagementProgram.cs)-- For information and sample code to configure RBAC for the Azure Cosmos DB API for MongoDB, see [Configure role-based access control for your Azure Cosmos DB API for MongoDB](mongodb/how-to-setup-rbac.md)
+- For information and sample code to configure RBAC for the Azure Cosmos DB for MongoDB, see [Configure role-based access control for your Azure Cosmos DB for MongoDB](mongodb/how-to-setup-rbac.md)
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
# Azure Policy Regulatory Compliance controls for Azure Cosmos DB [Regulatory Compliance in Azure Policy](../governance/policy/concepts/regulatory-compliance.md) provides Microsoft created and managed initiative definitions, known as _built-ins_, for the
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/serverless.md
description: Learn more about Azure Cosmos DB's consumption-based serverless off
-+ Last updated 05/09/2022 # Azure Cosmos DB serverless
-The Azure Cosmos DB serverless offering lets you use your Azure Cosmos account in a consumption-based fashion. With serverless, you're only charged for the Request Units (RUs) consumed by your database operations and the storage consumed by your data. Serverless containers can serve thousands of requests per second with no minimum charge and no capacity planning required.
+The Azure Cosmos DB serverless offering lets you use your Azure Cosmos DB account in a consumption-based fashion. With serverless, you're only charged for the Request Units (RUs) consumed by your database operations and the storage consumed by your data. Serverless containers can serve thousands of requests per second with no minimum charge and no capacity planning required.
> [!IMPORTANT] > Do you have any feedback about serverless? We want to hear it! Feel free to drop a message to the Azure Cosmos DB serverless team: [azurecosmosdbserverless@service.microsoft.com](mailto:azurecosmosdbserverless@service.microsoft.com).
-Every database operation in Azure Cosmos DB has a cost expressed in [Request Units (RUs)](request-units.md). How you're charged for this cost depends on the type of Azure Cosmos account you're using:
+Every database operation in Azure Cosmos DB has a cost expressed in [Request Units (RUs)](request-units.md). How you're charged for this cost depends on the type of Azure Cosmos DB account you're using:
- In [provisioned throughput](set-throughput.md) mode, you have to commit to a certain amount of throughput (expressed in Request Units per second or RU/s) that is provisioned on your databases and containers. The cost of your database operations is then deducted from the number of Request Units available every second. At the end of your billing period, you get billed for the amount of throughput you've provisioned.-- In serverless mode, you don't have to configure provisioned throughput when creating containers in your Azure Cosmos account. At the end of your billing period, you get billed for the number of Request Units that were consumed by your database operations.
+- In serverless mode, you don't have to configure provisioned throughput when creating containers in your Azure Cosmos DB account. At the end of your billing period, you get billed for the number of Request Units that were consumed by your database operations.
## Use-cases
For more information, see [choosing between provisioned throughput and serverles
## Using serverless resources
-Serverless is a new Azure Cosmos account type, which means that you have to choose between **provisioned throughput** and **serverless** when creating a new account. You must create a new serverless account to get started with serverless. Migrating existing accounts to/from serverless mode isn't currently supported.
+Serverless is a new Azure Cosmos DB account type, which means that you have to choose between **provisioned throughput** and **serverless** when creating a new account. You must create a new serverless account to get started with serverless. Migrating existing accounts to/from serverless mode isn't currently supported.
Any container that is created in a serverless account is a serverless container. Serverless containers expose the same capabilities as containers created in provisioned throughput mode, so you read, write and query your data the exact same way. However serverless accounts and containers also have specific characteristics:
Get started with serverless with the following articles:
- [Request Units in Azure Cosmos DB](request-units.md) - [Choose between provisioned throughput and serverless](throughput-serverless.md)-- [Pricing model in Azure Cosmos DB](how-pricing-works.md)
+- [Pricing model in Azure Cosmos DB](how-pricing-works.md)
cosmos-db Set Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/set-throughput.md
Title: Provision throughput on Azure Cosmos containers and databases
-description: Learn how to set provisioned throughput for your Azure Cosmos containers and databases.
+ Title: Provision throughput on Azure Cosmos DB containers and databases
+description: Learn how to set provisioned throughput for your Azure Cosmos DB containers and databases.
+ Last updated 09/16/2021 # Introduction to provisioned throughput in Azure Cosmos DB Azure Cosmos DB allows you to set provisioned throughput on your databases and containers. There are two types of provisioned throughput, standard (manual) or autoscale. This article gives an overview of how provisioned throughput works.
-An Azure Cosmos database is a unit of management for a set of containers. A database consists of a set of schema-agnostic containers. An Azure Cosmos container is the unit of scalability for both throughput and storage. A container is horizontally partitioned across a set of machines within an Azure region and is distributed across all Azure regions associated with your Azure Cosmos account.
+An Azure Cosmos DB database is a unit of management for a set of containers. A database consists of a set of schema-agnostic containers. An Azure Cosmos DB container is the unit of scalability for both throughput and storage. A container is horizontally partitioned across a set of machines within an Azure region and is distributed across all Azure regions associated with your Azure Cosmos DB account.
With Azure Cosmos DB, you can provision throughput at two granularities: -- Azure Cosmos containers-- Azure Cosmos databases
+- Azure Cosmos DB containers
+- Azure Cosmos DB databases
## Set throughput on a container
-The throughput provisioned on an Azure Cosmos container is exclusively reserved for that container. The container receives the provisioned throughput all the time. The provisioned throughput on a container is financially backed by SLAs. To learn how to configure standard (manual) throughput on a container, see [Provision throughput on an Azure Cosmos container](how-to-provision-container-throughput.md). To learn how to configure autoscale throughput on a container, see [Provision autoscale throughput](how-to-provision-autoscale-throughput.md).
+The throughput provisioned on an Azure Cosmos DB container is exclusively reserved for that container. The container receives the provisioned throughput all the time. The provisioned throughput on a container is financially backed by SLAs. To learn how to configure standard (manual) throughput on a container, see [Provision throughput on an Azure Cosmos DB container](how-to-provision-container-throughput.md). To learn how to configure autoscale throughput on a container, see [Provision autoscale throughput](how-to-provision-autoscale-throughput.md).
Setting provisioned throughput on a container is the most frequently used option. You can elastically scale throughput for a container by provisioning any amount of throughput by using [Request Units (RUs)](request-units.md).
The following image shows how a physical partition hosts one or more logical par
## Set throughput on a database
-When you provision throughput on an Azure Cosmos database, the throughput is shared across all the containers (called shared database containers) in the database. An exception is if you specified a provisioned throughput on specific containers in the database. Sharing the database-level provisioned throughput among its containers is analogous to hosting a database on a cluster of machines. Because all containers within a database share the resources available on a machine, you naturally do not get predictable performance on any specific container. To learn how to configure provisioned throughput on a database, see [Configure provisioned throughput on an Azure Cosmos database](how-to-provision-database-throughput.md). To learn how to configure autoscale throughput on a database, see [Provision autoscale throughput](how-to-provision-autoscale-throughput.md).
+When you provision throughput on an Azure Cosmos DB database, the throughput is shared across all the containers (called shared database containers) in the database. An exception is if you specified a provisioned throughput on specific containers in the database. Sharing the database-level provisioned throughput among its containers is analogous to hosting a database on a cluster of machines. Because all containers within a database share the resources available on a machine, you naturally do not get predictable performance on any specific container. To learn how to configure provisioned throughput on a database, see [Configure provisioned throughput on an Azure Cosmos DB database](how-to-provision-database-throughput.md). To learn how to configure autoscale throughput on a database, see [Provision autoscale throughput](how-to-provision-autoscale-throughput.md).
Because all containers within the database share the provisioned throughput, Azure Cosmos DB doesn't provide any predictable throughput guarantees for a particular container in that database. The portion of the throughput that a specific container can receive is dependent on:
We recommend that you configure throughput on a database when you want to share
The following examples demonstrate where it's preferred to provision throughput at the database level:
-* Sharing a database's provisioned throughput across a set of containers is useful for a multitenant application. Each user can be represented by a distinct Azure Cosmos container.
+* Sharing a database's provisioned throughput across a set of containers is useful for a multitenant application. Each user can be represented by a distinct Azure Cosmos DB container.
-* Sharing a database's provisioned throughput across a set of containers is useful when you migrate a NoSQL database, such as MongoDB or Cassandra, hosted on a cluster of VMs or from on-premises physical servers to Azure Cosmos DB. Think of the provisioned throughput configured on your Azure Cosmos database as a logical equivalent, but more cost-effective and elastic, to that of the compute capacity of your MongoDB or Cassandra cluster.
+* Sharing a database's provisioned throughput across a set of containers is useful when you migrate a NoSQL database, such as MongoDB or Cassandra, hosted on a cluster of VMs or from on-premises physical servers to Azure Cosmos DB. Think of the provisioned throughput configured on your Azure Cosmos DB database as a logical equivalent, but more cost-effective and elastic, to that of the compute capacity of your MongoDB or Cassandra cluster.
All containers created inside a database with provisioned throughput must be created with a [partition key](partitioning-overview.md). At any given point in time, the throughput allocated to a container within a database is distributed across all the logical partitions of that container. When you have containers that share provisioned throughput configured on a database, you can't selectively apply the throughput to a specific container or a logical partition.
If your workloads involve deleting and recreating all the collections in a datab
## Set throughput on a database and a container
-You can combine the two models. Provisioning throughput on both the database and the container is allowed. The following example shows how to provision standard (manual) provisioned throughput on an Azure Cosmos database and a container:
+You can combine the two models. Provisioning throughput on both the database and the container is allowed. The following example shows how to provision standard (manual) provisioned throughput on an Azure Cosmos DB database and a container:
-* You can create an Azure Cosmos database named *Z* with standard (manual) provisioned throughput of *"K"* RUs.
+* You can create an Azure Cosmos DB database named *Z* with standard (manual) provisioned throughput of *"K"* RUs.
* Next, create five containers named *A*, *B*, *C*, *D*, and *E* within the database. When creating container B, make sure to enable **Provision dedicated throughput for this container** option and explicitly configure *"P"* RUs of provisioned throughput on this container. You can configure shared and dedicated throughput only when creating the database and container. :::image type="content" source="./media/set-throughput/coll-level-throughput.png" alt-text="Setting the throughput at the container-level":::
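
The layout above can be provisioned in several ways; as one hedged illustration, the following PowerShell sketch creates a database with shared throughput and a container with dedicated throughput. The names and the 400/500 RU/s values stand in for *Z*, *A*, *B*, *K*, and *P*, and an existing account is assumed.

```powershell
# Illustrative only: names and RU/s values stand in for Z, A, B, K, and P above.
$rg = "myResourceGroup"; $account = "mycosmosaccount"

# Database "Z" with shared standard (manual) throughput of "K" RU/s
New-AzCosmosDBSqlDatabase -ResourceGroupName $rg -AccountName $account -Name "databaseZ" -Throughput 400

# Container "A" shares the database-level throughput (no -Throughput specified)
New-AzCosmosDBSqlContainer -ResourceGroupName $rg -AccountName $account -DatabaseName "databaseZ" `
    -Name "containerA" -PartitionKeyKind Hash -PartitionKeyPath "/myPartitionKey"

# Container "B" gets its own dedicated throughput of "P" RU/s
New-AzCosmosDBSqlContainer -ResourceGroupName $rg -AccountName $account -DatabaseName "databaseZ" `
    -Name "containerB" -PartitionKeyKind Hash -PartitionKeyPath "/myPartitionKey" -Throughput 500
```
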
You can combine the two models. Provisioning throughput on both the database and
## Update throughput on a database or a container
-After you create an Azure Cosmos container or a database, you can update the provisioned throughput. There is no limit on the maximum provisioned throughput that you can configure on the database or the container.
+After you create an Azure Cosmos DB container or a database, you can update the provisioned throughput. There is no limit on the maximum provisioned throughput that you can configure on the database or the container.
### <a id="current-provisioned-throughput"></a> Current provisioned throughput
You can programmatically check the scaling progress by reading the [current prov
* [ThroughputResponse.IsReplacePending](/dotnet/api/microsoft.azure.cosmos.throughputresponse.isreplacepending) on the .NET SDK. * [ThroughputResponse.isReplacePending()](/java/api/com.azure.cosmos.models.throughputresponse.isreplacepending) on the Java SDK.
-You can use [Azure Monitor metrics](monitor-cosmos-db.md#view-operation-level-metrics-for-azure-cosmos-db) to view the history of provisioned throughput (RU/s) and storage on a resource.
+You can use [Azure Monitor metrics](monitor.md#view-operation-level-metrics-for-azure-cosmos-db) to view the history of provisioned throughput (RU/s) and storage on a resource.
## <a id="high-storage-low-throughput-program"></a> High storage / low throughput program
This table shows a comparison between provisioning standard (manual) throughput
## Next steps * Learn more about [logical partitions](partitioning-overview.md).
-* Learn how to [provision standard (manual) on an Azure Cosmos container](how-to-provision-container-throughput.md).
-* Learn how to [provision standard (manual) throughput on an Azure Cosmos database](how-to-provision-database-throughput.md).
-* Learn how to [provision autoscale throughput on an Azure Cosmos database or container](how-to-provision-autoscale-throughput.md).
+* Learn how to [provision standard (manual) throughput on an Azure Cosmos DB container](how-to-provision-container-throughput.md).
+* Learn how to [provision standard (manual) throughput on an Azure Cosmos DB database](how-to-provision-database-throughput.md).
+* Learn how to [provision autoscale throughput on an Azure Cosmos DB database or container](how-to-provision-autoscale-throughput.md).
* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Social Media Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/social-media-apps.md
Title: 'Azure Cosmos DB design pattern: Social media apps'
description: Learn about a design pattern for Social Networks by leveraging the storage flexibility of Azure Cosmos DB and other Azure services. + Last updated 05/28/2019 - # Going social with Azure Cosmos DB Living in a massively interconnected society means that, at some point in life, you become part of a **social network**. You use social networks to keep in touch with friends, colleagues, family, or sometimes to share your passion with people with common interests.
You could use an enormous SQL instance with enough power to solve thousands of q
## The NoSQL road
-This article guides you into modeling your social platform's data with Azure's NoSQL database [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) cost-effectively. It also tells you how to use other Azure Cosmos DB features like the [Gremlin API](../cosmos-db/graph-introduction.md). Using a [NoSQL](https://en.wikipedia.org/wiki/NoSQL) approach, storing data, in JSON format and applying [denormalization](https://en.wikipedia.org/wiki/Denormalization), the previously complicated post can be transformed into a single [Document](https://en.wikipedia.org/wiki/Document-oriented_database):
+This article guides you through modeling your social platform's data cost-effectively with Azure's NoSQL database, [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/). It also tells you how to use other Azure Cosmos DB features like the [API for Gremlin](gremlin/introduction.md). By using a [NoSQL](https://en.wikipedia.org/wiki/NoSQL) approach, storing data in JSON format, and applying [denormalization](https://en.wikipedia.org/wiki/Denormalization), the previously complicated post can be transformed into a single [Document](https://en.wikipedia.org/wiki/Document-oriented_database):
```json {
This article guides you into modeling your social platform's data with Azure's N
And it can be retrieved with a single query, with no joins. This query is much simpler and more straightforward, and, budget-wise, it requires fewer resources to achieve a better result.
-Azure Cosmos DB makes sure that all properties are indexed with its automatic indexing. The automatic indexing can even be [customized](index-policy.md). The schema-free approach lets us store documents with different and dynamic structures. Maybe tomorrow you want posts to have a list of categories or hashtags associated with them? Cosmos DB will handle the new Documents with the added attributes without extra work required by us.
+Azure Cosmos DB makes sure that all properties are indexed with its automatic indexing. The automatic indexing can even be [customized](index-policy.md). The schema-free approach lets you store documents with different and dynamic structures. Maybe tomorrow you'll want posts to have a list of categories or hashtags associated with them. Azure Cosmos DB will handle the new Documents with the added attributes without any extra work on your part.
Comments on a post can be treated as other posts with a parent property. (This practice simplifies your object mapping.)
Creating feeds is just a matter of creating documents that can hold a list of po
] ```
-You could have a "latest" stream with posts ordered by creation date. Or you could have a "hottest" stream with those posts with more likes in the last 24 hours. You could even implement a custom stream for each user based on logic like followers and interests. It would still be a list of posts. ItΓÇÖs a matter of how to build these lists, but the reading performance stays unhindered. Once you acquire one of these lists, you issue a single query to Cosmos DB using the [IN keyword](sql-query-keywords.md#in) to get pages of posts at a time.
+You could have a "latest" stream with posts ordered by creation date. Or you could have a "hottest" stream with those posts with more likes in the last 24 hours. You could even implement a custom stream for each user based on logic like followers and interests. It would still be a list of posts. ItΓÇÖs a matter of how to build these lists, but the reading performance stays unhindered. Once you acquire one of these lists, you issue a single query to Azure Cosmos DB using the [IN keyword](sql-query-keywords.md#in) to get pages of posts at a time.
The feed streams could be built using [Azure App Services'](https://azure.microsoft.com/services/app-service/) background processes: [Webjobs](../app-service/webjobs-create.md). Once a post is created, background processing can be triggered by using [Azure Storage](https://azure.microsoft.com/services/storage/) [Queues](../storage/queues/storage-dotnet-how-to-use-queues.md) and Webjobs triggered using the [Azure Webjobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki), implementing the post propagation inside streams based on your own custom logic. Points and likes over a post can be processed in a deferred manner using this same technique to create an eventually consistent environment.
-Followers are trickier. Cosmos DB has a document size limit, and reading/writing large documents can impact the scalability of your application. So you may think about storing followers as a document with this structure:
+Followers are trickier. Azure Cosmos DB has a document size limit, and reading/writing large documents can impact the scalability of your application. So you may think about storing followers as a document with this structure:
```json {
To solve this problem, you can use a mixed approach. As part of the User Statist
} ```
-You can store the actual graph of followers using Azure Cosmos DB [Gremlin API](../cosmos-db/graph-introduction.md) to create [vertexes](http://mathworld.wolfram.com/GraphVertex.html) for each user and [edges](http://mathworld.wolfram.com/GraphEdge.html) that maintain the "A-follows-B" relationships. With the Gremlin API, you can get the followers of a certain user and create more complex queries to suggest people in common. If you add to the graph the Content Categories that people like or enjoy, you can start weaving experiences that include smart content discovery, suggesting content that those people you follow like, or finding people that you might have much in common with.
+You can store the actual graph of followers using Azure Cosmos DB [API for Gremlin](../cosmos-db/introduction.md) to create [vertexes](http://mathworld.wolfram.com/GraphVertex.html) for each user and [edges](http://mathworld.wolfram.com/GraphEdge.html) that maintain the "A-follows-B" relationships. With the API for Gremlin, you can get the followers of a certain user and create more complex queries to suggest people in common. If you add to the graph the Content Categories that people like or enjoy, you can start weaving experiences that include smart content discovery, suggesting content that those people you follow like, or finding people that you might have much in common with.
The User Statistics document can still be used to create cards in the UI or quick profile previews.
By looking at this information, you can quickly detect which is critical informa
The smallest step is called a UserChunk, the minimal piece of information that identifies a user and it's used for data duplication. By reducing the duplicated data size to only the information you'll "show", you reduce the possibility of massive updates.
-The middle step is called the user. It's the full data that will be used on most performance-dependent queries on Cosmos DB, the most accessed and critical. It includes the information represented by a UserChunk.
+The middle step is called the user. It's the full data that will be used on most performance-dependent queries on Azure Cosmos DB, the most accessed and critical. It includes the information represented by a UserChunk.
-The largest is the Extended User. It includes the critical user information and other data that doesn't need to be read quickly or has eventual usage, like the sign-in process. This data can be stored outside of Cosmos DB, in Azure SQL Database or Azure Storage Tables.
+The largest is the Extended User. It includes the critical user information and other data that doesn't need to be read quickly or has eventual usage, like the sign-in process. This data can be stored outside of Azure Cosmos DB, in Azure SQL Database or Azure Storage Tables.
-Why would you split the user and even store this information in different places? Because from a performance point of view, the bigger the documents, the costlier the queries. Keep documents slim, with the right information to do all your performance-dependent queries for your social network. Store the other extra information for eventual scenarios like full profile edits, logins, and data mining for usage analytics and Big Data initiatives. You really don't care if the data gathering for data mining is slower, because it's running on Azure SQL Database. You do have concern though that your users have a fast and slim experience. A user stored on Cosmos DB would look like this code:
+Why would you split the user and even store this information in different places? Because from a performance point of view, the bigger the documents, the costlier the queries. Keep documents slim, with the right information to do all your performance-dependent queries for your social network. Store the other extra information for eventual scenarios like full profile edits, logins, and data mining for usage analytics and Big Data initiatives. You really don't care if the data gathering for data mining is slower, because it's running on Azure SQL Database. You do care, though, that your users have a fast and slim experience. A user stored on Azure Cosmos DB would look like this code:
```json {
Because you're using Azure Cosmos DB, you can easily implement a search engine u
Why is this process so easy?
-Azure Cognitive Search implements what they call [Indexers](/rest/api/searchservice/Indexer-operations), background processes that hook in your data repositories and automagically add, update or remove your objects in the indexes. They support an [Azure SQL Database indexers](/archive/blogs/kaevans/indexing-azure-sql-database-with-azure-search), [Azure Blobs indexers](../search/search-howto-indexing-azure-blob-storage.md) and thankfully, [Azure Cosmos DB indexers](../search/search-howto-index-cosmosdb.md). The transition of information from Cosmos DB to Azure Cognitive Search is straightforward. Both technologies store information in JSON format, so you just need to [create your Index](../search/search-what-is-an-index.md) and map the attributes from your Documents you want indexed. That's it! Depending on the size of your data, all your content will be available to be searched upon within minutes by the best Search-as-a-Service solution in cloud infrastructure.
+Azure Cognitive Search implements what they call [Indexers](/rest/api/searchservice/Indexer-operations), background processes that hook into your data repositories and automagically add, update or remove your objects in the indexes. They support [Azure SQL Database indexers](/archive/blogs/kaevans/indexing-azure-sql-database-with-azure-search), [Azure Blobs indexers](../search/search-howto-indexing-azure-blob-storage.md) and, thankfully, [Azure Cosmos DB indexers](../search/search-howto-index-cosmosdb.md). The transition of information from Azure Cosmos DB to Azure Cognitive Search is straightforward. Both technologies store information in JSON format, so you just need to [create your Index](../search/search-what-is-an-index.md) and map the attributes from your Documents that you want indexed. That's it! Depending on the size of your data, all your content will be available to be searched upon within minutes by the best Search-as-a-Service solution in cloud infrastructure.
For more information about Azure Cognitive Search, you can visit the [Hitchhiker's Guide to Search](/archive/blogs/mvpawardprogram/a-hitchhikers-guide-to-search).
Another available option is to use [Azure Cognitive Services](https://www.micros
## A planet-scale social experience
-There is a last, but not least, important article I must address: **scalability**. When you design an architecture, each component should scale on its own. You will eventually need to process more data, or you will want to have a bigger geographical coverage. Thankfully, achieving both tasks is a **turnkey experience** with Cosmos DB.
+There is one last, but not least, important subject I must address: **scalability**. When you design an architecture, each component should scale on its own. You will eventually need to process more data, or you will want broader geographical coverage. Thankfully, achieving both tasks is a **turnkey experience** with Azure Cosmos DB.
-Cosmos DB supports dynamic partitioning out-of-the-box. It automatically creates partitions based on a given **partition key**, which is defined as an attribute in your documents. Defining the correct partition key must be done at design time. For more information, see [Partitioning in Azure Cosmos DB](partitioning-overview.md).
+Azure Cosmos DB supports dynamic partitioning out-of-the-box. It automatically creates partitions based on a given **partition key**, which is defined as an attribute in your documents. Defining the correct partition key must be done at design time. For more information, see [Partitioning in Azure Cosmos DB](partitioning-overview.md).
For a social experience, you must align your partitioning strategy with the way you query and write. (For example, keep reads within the same partition, and avoid "hot spots" by spreading writes across multiple partitions.) Some options are: partitions based on a temporal key (day/month/week), by content category, by geographical region, or by user. It all really depends on how you'll query the data and show the data in your social experience.
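As a rough sketch of what that design-time decision looks like, the following example (assuming the .NET SDK v3; the database name, container name, and partition key are illustrative) creates a posts container partitioned by user:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class SocialSetup
{
    // Illustrative: the partition key (/userId here) should match how the social
    // experience reads and writes posts, so most reads stay within one partition.
    public static async Task<Container> CreatePostsContainerAsync(CosmosClient client)
    {
        Database database = await client.CreateDatabaseIfNotExistsAsync("SocialDB");
        return await database.CreateContainerIfNotExistsAsync(
            new ContainerProperties(id: "posts", partitionKeyPath: "/userId"),
            throughput: 400);
    }
}
```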
-Cosmos DB will run your queries (including [aggregates](https://azure.microsoft.com/blog/planet-scale-aggregates-with-azure-documentdb/)) across all your partitions transparently, so you don't need to add any logic as your data grows.
+Azure Cosmos DB will run your queries (including [aggregates](https://azure.microsoft.com/blog/planet-scale-aggregates-with-azure-documentdb/)) across all your partitions transparently, so you don't need to add any logic as your data grows.
With time, you'll eventually grow in traffic and your resource consumption (measured in [RUs](request-units.md), or Request Units) will increase. You will read and write more frequently as your user base grows. The user base will start creating and reading more content. So the ability to **scale your throughput** is vital. Increasing your RUs is easy. You can do it with a few clicks on the Azure portal or by [issuing commands through the API](/rest/api/cosmos-db/replace-an-offer).
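For example, here's a minimal sketch of scaling throughput programmatically, using the .NET SDK v3 rather than the raw REST API (the helper and parameter names are illustrative):

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ThroughputScaler
{
    // Reads the currently provisioned RU/s on a container and raises it if needed.
    public static async Task ScaleUpAsync(Container container, int newThroughput)
    {
        int? current = await container.ReadThroughputAsync();
        if (current.HasValue && newThroughput > current.Value)
        {
            await container.ReplaceThroughputAsync(newThroughput);
        }
    }
}
```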
What happens if things keep getting better? Suppose users from another region, c
But wait! You soon realize their experience with your platform isn't optimal. They're so far away from your operational region that the latency is terrible. You obviously don't want them to quit. If only there was an easy way of **extending your global reach**? There is!
-Cosmos DB lets you [replicate your data globally](../cosmos-db/tutorial-global-distribution-sql-api.md) and transparently with a couple of clicks and automatically select among the available regions from your [client code](../cosmos-db/tutorial-global-distribution-sql-api.md). This process also means that you can have [multiple failover regions](high-availability.md).
+Azure Cosmos DB lets you [replicate your data globally](nosql/tutorial-global-distribution.md) and transparently with a couple of clicks and automatically select among the available regions from your [client code](nosql/tutorial-global-distribution.md). This process also means that you can have [multiple failover regions](high-availability.md).
-When you replicate your data globally, you need to make sure that your clients can take advantage of it. If you're using a web frontend or accessing APIs from mobile clients, you can deploy [Azure Traffic Manager](https://azure.microsoft.com/services/traffic-manager/) and clone your Azure App Service on all the desired regions, using a performance configuration to support your extended global coverage. When your clients access your frontend or APIs, they'll be routed to the closest App Service, which in turn, will connect to the local Cosmos DB replica.
+When you replicate your data globally, you need to make sure that your clients can take advantage of it. If you're using a web frontend or accessing APIs from mobile clients, you can deploy [Azure Traffic Manager](https://azure.microsoft.com/services/traffic-manager/) and clone your Azure App Service on all the desired regions, using a performance configuration to support your extended global coverage. When your clients access your frontend or APIs, they'll be routed to the closest App Service, which in turn, will connect to the local Azure Cosmos DB replica.
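A minimal sketch of that last hop, assuming the .NET SDK v3: each regional deployment points its client at its own region so reads are served by the closest replica (the region choice and factory name are illustrative).

```csharp
using Microsoft.Azure.Cosmos;

public static class RegionalClientFactory
{
    // Illustrative: the App Service instance deployed in West Europe prefers the
    // West Europe replica, so its reads are served locally.
    public static CosmosClient Create(string accountEndpoint, string accountKey)
    {
        return new CosmosClient(accountEndpoint, accountKey, new CosmosClientOptions
        {
            ApplicationRegion = Regions.WestEurope
        });
    }
}
```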
:::image type="content" source="./media/social-media-apps/social-media-apps-global-replicate.png" alt-text="Adding global coverage to your social platform" border="false":::
The truth is that there's no silver bullet for this kind of scenario. It's th
## Next steps
-To learn more about use cases for Cosmos DB, see [Common Cosmos DB use cases](use-cases.md).
+To learn more about use cases for Azure Cosmos DB, see [Common Azure Cosmos DB use cases](use-cases.md).
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/best-practice-dotnet.md
- Title: Azure Cosmos DB best practices for .NET SDK v3
-description: Learn the best practices for using the Azure Cosmos DB .NET SDK v3
---- Previously updated : 08/30/2022-----
-# Best practices for Azure Cosmos DB .NET SDK
-
-This article walks through the best practices for using the Azure Cosmos DB .NET SDK. Following these practices will help improve your latency and availability, and boost overall performance.
-
-Watch the video below to learn more about using the .NET SDK from a Cosmos DB engineer!
-
->
-> [!VIDEO https://aka.ms/docs.dotnet-best-practices]
-
-## Checklist
-|Checked | Subject |Details/Links |
-||||
-|<input type="checkbox"/> | SDK Version | Always using the [latest version](sql-api-sdk-dotnet-standard.md) of the Cosmos DB SDK available for optimal performance. |
-| <input type="checkbox"/> | Singleton Client | Use a [single instance](/dotnet/api/microsoft.azure.cosmos.cosmosclient?view=azure-dotnet&preserve-view=true) of `CosmosClient` for the lifetime of your application for [better performance](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage). |
-| <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account, whenever possible to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [automatic failover](../how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover won't succeed due to lack of region connectivity. To learn how to add multiple regions using the .NET SDK visit [here](tutorial-global-distribution-sql-api.md) |
-| <input type="checkbox"/> | Availability and Failovers | Set the [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions?view=azure-dotnet&preserve-view=true) or [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion?view=azure-dotnet&preserve-view=true) in the v3 SDK, and the [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations?view=azure-dotnet&preserve-view=true) in the v2 SDK using the [preferred regions list](./tutorial-global-distribution-sql-api.md?tabs=dotnetv3%2capi-async#preferred-locations). During failovers, write operations are sent to the current write region and all reads are sent to the first region within your preferred regions list. For more information about regional failover mechanics, see the [availability troubleshooting guide](troubleshoot-sdk-availability.md). |
-| <input type="checkbox"/> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is high. |
-| <input type="checkbox"/> | Hosting | Use [Windows 64-bit host](performance-tips.md#hosting) processing for best performance, whenever possible. |
-| <input type="checkbox"/> | Connectivity Modes | Use [Direct mode](sql-sdk-connection-modes.md) for the best performance. For instructions on how to do this, see the [V3 SDK documentation](performance-tips-dotnet-sdk-v3-sql.md#networking) or the [V2 SDK documentation](performance-tips.md#networking).|
-|<input type="checkbox"/> | Networking | If using a virtual machine to run your application, enable [Accelerated Networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) on your VM to help with bottlenecks due to high traffic and reduce latency or CPU jitter. You might also want to consider using a higher end Virtual Machine where the max CPU usage is under 70%. |
-|<input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, we set the [`IdleConnectionTimeout`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout?view=azure-dotnet&preserve-view=true) and [`PortReuseMode`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode?view=azure-dotnet&preserve-view=true) to `PrivatePortPool`. The `IdleConnectionTimeout` property helps which control the time unused connections are closed. This will reduce the number of unused connections. By default, idle connections are kept open indefinitely. The value set must be greater than or equal to 10 minutes. We recommended values between 20 minutes and 24 hours. The `PortReuseMode` property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints. |
-|<input type="checkbox"/> | Use Async/Await | Avoid blocking calls: `Task.Result`, `Task.Wait`, and `Task.GetAwaiter().GetResult()`. The entire call stack is asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times. |
-|<input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use both `RequestTimeout` and `CancellationToken` parameters. For more details on timeouts with Cosmos DB [visit](troubleshoot-dot-net-sdk-request-timeout.md) |
-|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK won't retry on writes for transient failures as writes aren't idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](conceptual-resilient-sdk-applications.md#should-my-application-retry-on-errors) |
-|<input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `ReadDatabaseAsync` or `ReadDocumentCollectionAsync` and `CreateDatabaseQuery` or `CreateDocumentCollectionQuery` will result in metadata calls to the service, which consume from the system-reserved RU limit. `CreateIfNotExist` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. |
-|<input type="checkbox"/> | Bulk Support | In scenarios where you may not need to optimize for latency, we recommend enabling [Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) for dumping large volumes of data. |
-| <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-query-sdk.md?pivots=programming-language-csharp) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within the `QueryRequestsOptions` to the number of partitions you have. If you aren't aware of the number of partitions, start by using `int.MaxValue`, which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
-| <input type="checkbox"/> | Performance Testing Backoffs | When performing testing on your application, you should implement backoffs at [`RetryAfter`](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage) intervals. Respecting the backoff helps ensure that you'll spend a minimal amount of time waiting between retries. |
-| <input type="checkbox"/> | Indexing | The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths (IndexingPolicy.IncludedPaths and IndexingPolicy.ExcludedPaths). Ensure that you exclude unused paths from indexing for faster writes. For a sample on how to create indexes using the SDK [visit](performance-tips-dotnet-sdk-v3-sql.md#indexing-policy) |
-| <input type="checkbox"/> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. |
-| <input type="checkbox"/> | Increase the number of threads/tasks | Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of concurrency of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the [.NET Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB. |
-| <input type="checkbox"/> | Enabling Query Metrics | For more logging of your backend query executions, you can enable SQL Query Metrics using our .NET SDK. For instructions on how to collect SQL Query Metrics [visit](profile-sql-api-query.md) |
-| <input type="checkbox"/> | SDK Logging | Log [SDK diagnostics](#capture-diagnostics) for outstanding scenarios, such as exceptions or when requests go beyond an expected latency.
-| <input type="checkbox"/> | DefaultTraceListener | The DefaultTraceListener poses performance issues on production environments causing high CPU and I/O bottlenecks. Make sure you're using the latest SDK versions or [remove the DefaultTraceListener from your application](performance-tips-dotnet-sdk-v3-sql.md#logging-and-tracing) |
-
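The following minimal sketch (the endpoint, key, and regions are placeholders) pulls several checklist items together: a single `CosmosClient` instance for the lifetime of the application, Direct mode, and a preferred-regions list.

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Cosmos;

public static class CosmosClientHolder
{
    // One CosmosClient for the lifetime of the application (Singleton Client),
    // Direct mode (Connectivity Modes), and a preferred-regions list
    // (Availability and Failovers). Endpoint, key, and regions are placeholders.
    public static readonly CosmosClient Client = new CosmosClient(
        "<account-endpoint>",
        "<account-key>",
        new CosmosClientOptions
        {
            ConnectionMode = ConnectionMode.Direct,
            ApplicationPreferredRegions = new List<string> { Regions.WestUS2, Regions.EastUS2 }
        });
}
```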
-## Capture diagnostics
--
-## Best practices when using Gateway mode
-
-Increase `System.Net MaxConnections` per host when you use Gateway mode. Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for `ServicePointManager.DefaultConnectionLimit` is 50. To change the value, you can set `Documents.Client.ConnectionPolicy.MaxConnectionLimit` to a higher value.
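A minimal sketch of raising that limit with the .NET SDK v2 `DocumentClient` (the endpoint, key, and factory name are placeholders):

```csharp
using System;
using Microsoft.Azure.Documents.Client;

public static class GatewayClientFactory
{
    // Raise the per-host connection limit so the client can open more simultaneous
    // HTTPS connections to Azure Cosmos DB in Gateway mode.
    public static DocumentClient Create(Uri accountEndpoint, string accountKey)
    {
        var policy = new ConnectionPolicy
        {
            ConnectionMode = ConnectionMode.Gateway,
            MaxConnectionLimit = 1000 // default is 50 in SDK 1.8.0 and later
        };
        return new DocumentClient(accountEndpoint, accountKey, policy);
    }
}
```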
-
-## Best practices for write-heavy workloads
-
-For workloads that have heavy create payloads, set the `EnableContentResponseOnWrite` request option to `false`. The service will no longer return the created or updated resource to the SDK. Normally, because the application has the object that's being created, it doesn't need the service to return it. The header values are still accessible, like a request charge. Disabling the content response can help improve performance, because the SDK no longer needs to allocate memory or serialize the body of the response. It also reduces the network bandwidth usage to further help performance.
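A minimal sketch, assuming the .NET SDK v3 and a caller-supplied item type, of disabling the content response on create operations:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class WriteHeavyHelper
{
    // Skip the response payload on writes to save serialization, memory, and bandwidth.
    public static Task<ItemResponse<T>> CreateWithoutContentResponseAsync<T>(
        Container container, T item, PartitionKey partitionKey)
    {
        var options = new ItemRequestOptions { EnableContentResponseOnWrite = false };
        return container.CreateItemAsync(item, partitionKey, options);
    }
}
```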
-
-## Next steps
-
-For a sample application that's used to evaluate Azure Cosmos DB for high-performance scenarios on a few client machines, see [Performance and scale testing with Azure Cosmos DB](performance-testing.md).
-
-To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Best Practice Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/best-practice-java.md
- Title: Azure Cosmos DB best practices for Java SDK v4
-description: Learn the best practices for using the Azure Cosmos DB Java SDK v4
---- Previously updated : 04/01/2022----
-# Best practices for Azure Cosmos DB Java SDK
-
-This article walks through the best practices for using the Azure Cosmos DB Java SDK. Following these practices will help improve your latency and availability, and boost overall performance.
-
-## Checklist
-|Checked | Topic |Details/Links |
-||||
-|<input type="checkbox"/> | SDK Version | Always using the [latest version](sql-api-sdk-java-v4.md) of the Cosmos DB SDK available for optimal performance. |
-| <input type="checkbox"/> | Singleton Client | Use a [single instance](/jav#sdk-usage). |
-| <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account, whenever possible to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [automatic failover](../how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover won't succeed due to lack of region connectivity. To learn how to add multiple regions using the Java SDK [visit here](tutorial-global-distribution-sql-api.md) |
-| <input type="checkbox"/> | Availability and Failovers | Set the [preferredRegions](/jav). |
-| <input type="checkbox"/> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is very high. |
-| <input type="checkbox"/> | Hosting | For most common cases of production workloads, we highly recommend using at least 4-cores and 8-GB memory VMs whenever possible. |
-| <input type="checkbox"/> | Connectivity Modes | Use [Direct mode](sql-sdk-connection-modes.md) for the best performance. For instructions on how to do this, see the [V4 SDK documentation](performance-tips-java-sdk-v4-sql.md#networking).|
-| <input type="checkbox"/> | Networking | If using a virtual machine to run your application, enable [Accelerated Networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) on your VM to help with bottlenecks due to high traffic and reduce latency or CPU jitter. You might also want to consider using a higher end Virtual Machine where the max CPU usage is under 70%. |
-| <input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, we recommend setting the [`idleEndpointTimeout`](/java/api/com.azure.cosmos.directconnectionconfig.setidleendpointtimeout?view=azure-java-stable#com-azure-cosmos-directconnectionconfig-setidleendpointtimeout(java-time-duration)&preserve-view=true) to a higher value. The `idleEndpointTimeout` property in `DirectConnectionConfig` helps which control the time unused connections are closed. This will reduce the number of unused connections. By default, idle connections to an endpoint are kept open for 1 hour. If there aren't requests to a specific endpoint for idle endpoint timeout duration, direct client closes all connections to that endpoint to save resources and I/O cost. |
-| <input type="checkbox"/> | Use Appropriate Scheduler (Avoid stealing Event loop IO Netty threads) | Avoid blocking calls: `.block()`. The entire call stack is asynchronous in order to benefit from [async API](https://projectreactor.io/docs/core/release/reference/#intro-reactive) patterns and use of appropriate [threading and schedulers](https://projectreactor.io/docs/core/release/reference/#schedulers) |
-| <input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use [project reactor's timeout API](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#timeout-java.time.Duration-). For more details on timeouts with Cosmos DB [visit here](troubleshoot-request-timeout-java-sdk-v4-sql.md) |
-| <input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK won't retry on writes for transient failures as writes aren't idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit here](conceptual-resilient-sdk-applications.md#should-my-application-retry-on-errors) |
-| <input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `CosmosAsyncDatabase#read()` or `CosmosAsyncContainer#read()` will result in metadata calls to the service, which consume from the system-reserved RU limit. `createDatabaseIfNotExists()` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. |
-| <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-query-sdk.md?pivots=programming-language-java) for better latency and throughput on your queries. We recommend setting the `maxDegreeOfParallelism` property within the `CosmosQueryRequestsOptions` to the number of partitions you have. If you aren't aware of the number of partitions, set the value to `-1` that will give you the best latency. Also, set the `maxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
-| <input type="checkbox"/> | Performance Testing Backoffs | When performing testing on your application, you should implement backoffs at [`RetryAfter`](performance-tips-java-sdk-v4-sql.md#sdk-usage) intervals. Respecting the backoff helps ensure that you'll spend a minimal amount of time waiting between retries. |
-| <input type="checkbox"/> | Indexing | The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths `IndexingPolicy#getIncludedPaths()` and `IndexingPolicy#getExcludedPaths()`. Ensure that you exclude unused paths from indexing for faster writes. For a sample on how to create indexes using the SDK [visit here](performance-tips-java-sdk-v4-sql.md#indexing-policy) |
-| <input type="checkbox"/> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. |
-| <input type="checkbox"/> | Enabling Query Metrics | For additional logging of your backend query executions, follow instructions on how to capture SQL Query Metrics using [Java SDK](troubleshoot-java-sdk-v4-sql.md#query-operations) |
-| <input type="checkbox"/> | SDK Logging | Use SDK logging to capture additional diagnostics information and troubleshoot latency issues. Log the [CosmosDiagnostics](/jav#capture-the-diagnostics) |
-
-## Best practices when using Gateway mode
-Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to tweak [maxConnectionPoolSize](/java/api/com.azure.cosmos.gatewayconnectionconfig.setmaxconnectionpoolsize?view=azure-java-stable#com-azure-cosmos-gatewayconnectionconfig-setmaxconnectionpoolsize(int)&preserve-view=true) to a different value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In Java v4 SDK, the default value for `GatewayConnectionConfig#maxConnectionPoolSize` is 1000. To change the value, you can set `GatewayConnectionConfig#maxConnectionPoolSize` to a different value.
-
-## Best practices for write-heavy workloads
-For workloads that have heavy create payloads, set the `CosmosClientBuilder#contentResponseOnWriteEnabled()` request option to `false`. The service will no longer return the created or updated resource to the SDK. Normally, because the application has the object that's being created, it doesn't need the service to return it. The header values are still accessible, like a request charge. Disabling the content response can help improve performance, because the SDK no longer needs to allocate memory or serialize the body of the response. It also reduces the network bandwidth usage to further help performance.
-
-## Next steps
-To learn more about performance tips for Java SDK, see [Performance tips for Azure Cosmos DB Java SDK v4](performance-tips-java-sdk-v4-sql.md).
-
-To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Bicep Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/bicep-samples.md
- Title: Bicep samples for Azure Cosmos DB Core (SQL API)
-description: Use Bicep to create and configure Azure Cosmos DB.
---- Previously updated : 09/13/2021----
-# Bicep for Azure Cosmos DB
--
-This article shows Bicep samples for Core (SQL) API accounts. You can also find Bicep samples for [Cassandra](../cassandr) APIs.
-
-## Core (SQL) API
-
-|**Sample**|**Description**|
-|||
-|[Create an Azure Cosmos account, database, container with autoscale throughput](manage-with-bicep.md#create-autoscale) | Create a Core (SQL) API account in two regions, a database and container with autoscale throughput. |
-|[Create an Azure Cosmos account, database, container with analytical store](manage-with-bicep.md#create-analytical-store) | Create a Core (SQL) API account in one region with a container configured with Analytical TTL enabled and option to use manual or autoscale throughput. |
-|[Create an Azure Cosmos account, database, container with standard (manual) throughput](manage-with-bicep.md#create-manual) | Create a Core (SQL) API account in two regions, a database and container with standard throughput. |
-|[Create an Azure Cosmos account, database and container with a stored procedure, trigger and UDF](manage-with-bicep.md#create-sproc) | Create a Core (SQL) API account in two regions with a stored procedure, trigger and UDF for a container. |
-|[Create an Azure Cosmos account with Azure AD identity, Role Definitions and Role Assignment](manage-with-bicep.md#create-rbac) | Create a Core (SQL) API account with Azure AD identity, Role Definitions and Role Assignment on a Service Principal. |
-|[Create a free-tier Azure Cosmos account](manage-with-bicep.md#free-tier) | Create an Azure Cosmos DB Core (SQL) API account on free-tier. |
-
-## Next steps
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Bulk Executor Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/bulk-executor-dot-net.md
- Title: Use bulk executor .NET library in Azure Cosmos DB for bulk import and update operations
-description: Bulk import and update the Azure Cosmos DB documents using the bulk executor .NET library.
----- Previously updated : 05/02/2020----
-# Use the bulk executor .NET library to perform bulk operations in Azure Cosmos DB
-
-> [!NOTE]
-> This bulk executor library described in this article is maintained for applications using the .NET SDK 2.x version. For new applications, you can use the **bulk support** that is directly available with the [.NET SDK version 3.x](tutorial-sql-api-dotnet-bulk-import.md) and it does not require any external library.
-
-> If you are currently using the bulk executor library and planning to migrate to bulk support on the newer SDK, use the steps in the [Migration guide](how-to-migrate-from-bulk-executor-library.md) to migrate your application.
-
-This tutorial provides instructions on using the bulk executor .NET library to import documents to and update documents in an Azure Cosmos container. To learn about the bulk executor library and how it helps you use massive throughput and storage, see the [bulk executor library overview](../bulk-executor-overview.md) article. In this tutorial, you'll see a sample .NET application that bulk imports randomly generated documents into an Azure Cosmos container. After importing the data, the tutorial shows you how to bulk update the imported data by specifying patches as operations to perform on specific document fields.
-
-Currently, bulk executor library is supported by the Azure Cosmos DB SQL API and Gremlin API accounts only. This article describes how to use the bulk executor .NET library with SQL API accounts. To learn about using the bulk executor .NET library with Gremlin API accounts, see [perform bulk operations in the Azure Cosmos DB Gremlin API](../graph/bulk-executor-graph-dotnet.md).
-
-## Prerequisites
-
-* Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
-
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
-
-* You can [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments. Or, you can use the [Azure Cosmos DB Emulator](../local-emulator.md) with the `https://localhost:8081` endpoint. The Primary Key is provided in [Authenticating requests](../local-emulator.md#authenticate-requests).
-
-* Create an Azure Cosmos DB SQL API account by using the steps described in the [create a database account](create-sql-api-dotnet.md#create-account) section of the .NET quickstart article.
-
-## Clone the sample application
-
-Now let's switch to working with code by downloading a sample .NET application from GitHub. This application performs bulk operations on the data stored in your Azure Cosmos account. To clone the application, open a command prompt, navigate to the directory where you want to copy it and run the following command:
-
-```bash
-git clone https://github.com/Azure/azure-cosmosdb-bulkexecutor-dotnet-getting-started.git
-```
-
-The cloned repository contains two samples "BulkImportSample" and "BulkUpdateSample". You can open either of the sample applications, update the connection strings in App.config file with your Azure Cosmos DB account's connection strings, build the solution, and run it.
-
-The "BulkImportSample" application generates random documents and bulk imports them to your Azure Cosmos account. The "BulkUpdateSample" application bulk updates the imported documents by specifying patches as operations to perform on specific document fields. In the next sections, you'll review the code in each of these sample apps.
-
-## Bulk import data to an Azure Cosmos account
-
-1. Navigate to the "BulkImportSample" folder and open the "BulkImportSample.sln" file.
-
-2. The Azure Cosmos DB connection strings are retrieved from the App.config file as shown in the following code:
-
- ```csharp
- private static readonly string EndpointUrl = ConfigurationManager.AppSettings["EndPointUrl"];
- private static readonly string AuthorizationKey = ConfigurationManager.AppSettings["AuthorizationKey"];
- private static readonly string DatabaseName = ConfigurationManager.AppSettings["DatabaseName"];
- private static readonly string CollectionName = ConfigurationManager.AppSettings["CollectionName"];
- private static readonly int CollectionThroughput = int.Parse(ConfigurationManager.AppSettings["CollectionThroughput"]);
- ```
-
- The bulk importer creates a new database and a container with the database name, container name, and the throughput values specified in the App.config file.
-
-3. Next the DocumentClient object is initialized with Direct TCP connection mode:
-
- ```csharp
- ConnectionPolicy connectionPolicy = new ConnectionPolicy
- {
- ConnectionMode = ConnectionMode.Direct,
- ConnectionProtocol = Protocol.Tcp
- };
- DocumentClient client = new DocumentClient(new Uri(endpointUrl), authorizationKey,
- connectionPolicy);
- ```
-
-4. The BulkExecutor object is initialized with high retry values for wait time and throttled requests. Then, the retry options are set to 0 to pass congestion control to BulkExecutor for its lifetime.
-
- ```csharp
- // Set retry options high during initialization (default values).
- client.ConnectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 30;
- client.ConnectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 9;
-
- IBulkExecutor bulkExecutor = new BulkExecutor(client, dataCollection);
- await bulkExecutor.InitializeAsync();
-
- // Set retries to 0 to pass complete control to bulk executor.
- client.ConnectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 0;
- client.ConnectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 0;
- ```
-
-5. The application invokes the BulkImportAsync API. The .NET library provides two overloads of the bulk import API - one that accepts a list of serialized JSON documents and the other that accepts a list of deserialized POCO documents. To learn more about the definitions of each of these overloaded methods, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkexecutor.bulkimportasync).
-
- ```csharp
- BulkImportResponse bulkImportResponse = await bulkExecutor.BulkImportAsync(
- documents: documentsToImportInBatch,
- enableUpsert: true,
- disableAutomaticIdGeneration: true,
- maxConcurrencyPerPartitionKeyRange: null,
- maxInMemorySortingBatchSize: null,
- cancellationToken: token);
- ```
- **BulkImportAsync method accepts the following parameters:**
-
- |**Parameter** |**Description** |
- |||
- |enableUpsert | A flag to enable upsert operations on the documents. If a document with the given ID already exists, it's updated. By default, it's set to false. |
- |disableAutomaticIdGeneration | A flag to disable automatic generation of ID. By default, it's set to true. |
- |maxConcurrencyPerPartitionKeyRange | The maximum degree of concurrency per partition key range. Setting it to null will cause the library to use a default value of 20. |
- |maxInMemorySortingBatchSize | The maximum number of documents that are pulled from the document enumerator, which is passed to the API call in each stage. For the in-memory sorting phase that happens before bulk importing, setting this parameter to null will cause the library to use the default value of min(documents.count, 1000000). |
- |cancellationToken | The cancellation token to gracefully exit the bulk import operation. |
-
- **Bulk import response object definition**
- The result of the bulk import API call contains the following attributes, which you can inspect as shown in the sketch after this table:
-
- |**Parameter** |**Description** |
- |||
- |NumberOfDocumentsImported (long) | The total number of documents that were successfully imported out of the total documents supplied to the bulk import API call. |
- |TotalRequestUnitsConsumed (double) | The total request units (RU) consumed by the bulk import API call. |
- |TotalTimeTaken (TimeSpan) | The total time taken by the bulk import API call to complete the execution. |
- |BadInputDocuments (List\<object>) | The list of bad-format documents that weren't successfully imported in the bulk import API call. Fix the documents returned and retry import. Bad-formatted documents include documents whose ID value isn't a string (null or any other datatype is considered invalid). |
-
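For example, here's a short sketch, continuing the sample's `bulkImportResponse` variable, of inspecting these attributes after the call returns:

```csharp
// Illustrative: log the outcome of the bulk import and surface documents that need fixing.
Console.WriteLine($"Imported {bulkImportResponse.NumberOfDocumentsImported} documents, " +
    $"consumed {bulkImportResponse.TotalRequestUnitsConsumed} RUs " +
    $"in {bulkImportResponse.TotalTimeTaken.TotalSeconds} seconds.");

if (bulkImportResponse.BadInputDocuments?.Count > 0)
{
    // Bad-format documents (for example, a non-string id) must be fixed and re-imported.
    Console.WriteLine($"{bulkImportResponse.BadInputDocuments.Count} documents failed to import.");
}
```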
-## Bulk update data in your Azure Cosmos account
-
-You can update existing documents by using the BulkUpdateAsync API. In this example, you'll set the `Name` field to a new value and remove the `Description` field from the existing documents. For the full set of supported update operations, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkupdate).
-
-1. Navigate to the "BulkUpdateSample" folder and open the "BulkUpdateSample.sln" file.
-
-2. Define the update items along with the corresponding field update operations. In this example, you'll use `SetUpdateOperation` to update the `Name` field and `UnsetUpdateOperation` to remove the `Description` field from all the documents. You can also perform other operations like increment a document field by a specific value, push specific values into an array field, or remove a specific value from an array field. To learn about different methods provided by the bulk update API, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkupdate).
-
- ```csharp
- SetUpdateOperation<string> nameUpdate = new SetUpdateOperation<string>("Name", "UpdatedDoc");
- UnsetUpdateOperation descriptionUpdate = new UnsetUpdateOperation("description");
-
- List<UpdateOperation> updateOperations = new List<UpdateOperation>();
- updateOperations.Add(nameUpdate);
- updateOperations.Add(descriptionUpdate);
-
- List<UpdateItem> updateItems = new List<UpdateItem>();
- for (int i = 0; i < 10; i++)
- {
- updateItems.Add(new UpdateItem(i.ToString(), i.ToString(), updateOperations));
- }
- ```
-
-3. The application invokes the BulkUpdateAsync API. To learn about the definition of the BulkUpdateAsync method, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.ibulkexecutor.bulkupdateasync).
-
- ```csharp
- BulkUpdateResponse bulkUpdateResponse = await bulkExecutor.BulkUpdateAsync(
- updateItems: updateItems,
- maxConcurrencyPerPartitionKeyRange: null,
- maxInMemorySortingBatchSize: null,
- cancellationToken: token);
- ```
- **BulkUpdateAsync method accepts the following parameters:**
-
- |**Parameter** |**Description** |
- |||
- |maxConcurrencyPerPartitionKeyRange | The maximum degree of concurrency per partition key range. Setting this parameter to null will make the library use the default value (20). |
- |maxInMemorySortingBatchSize | The maximum number of update items pulled from the update items enumerator passed to the API call in each stage. For the in-memory sorting phase that happens before bulk updating, setting this parameter to null will cause the library to use the default value of min(updateItems.count, 1000000). |
- | cancellationToken|The cancellation token to gracefully exit the bulk update operation. |
-
- **Bulk update response object definition**
- The result of the bulk update API call contains the following attributes:
-
- |**Parameter** |**Description** |
- |||
- |NumberOfDocumentsUpdated (long) | The number of documents that were successfully updated out of the total documents supplied to the bulk update API call. |
- |TotalRequestUnitsConsumed (double) | The total request units (RUs) consumed by the bulk update API call. |
- |TotalTimeTaken (TimeSpan) | The total time taken by the bulk update API call to complete the execution. |
-
-## Performance tips
-
-Consider the following points for better performance when using the bulk executor library:
-
-* For best performance, run your application from an Azure virtual machine that is in the same region as your Azure Cosmos account's write region.
-
-* It's recommended that you instantiate a single `BulkExecutor` object for the whole application within a single virtual machine that corresponds to a specific Azure Cosmos container.
-
-* A single bulk operation API execution consumes a large chunk of the client machine's CPU and network IO (This happens by spawning multiple tasks internally). Avoid spawning multiple concurrent tasks within your application process that execute bulk operation API calls. If a single bulk operation API call that is running on a single virtual machine is unable to consume the entire container's throughput (if your container's throughput > 1 million RU/s), it's preferred to create separate virtual machines to concurrently execute the bulk operation API calls.
-
-* Ensure the `InitializeAsync()` method is invoked after instantiating a BulkExecutor object to fetch the target Cosmos container's partition map.
-
-* In your application's App.Config, ensure **gcServer** is enabled for better performance
- ```xml
- <runtime>
- <gcServer enabled="true" />
- </runtime>
- ```
-* The library emits traces that can be collected either into a log file or on the console. To enable both, add the following code to your application's App.Config file.
-
- ```xml
- <system.diagnostics>
- <trace autoflush="false" indentsize="4">
- <listeners>
- <add name="logListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="application.log" />
- <add name="consoleListener" type="System.Diagnostics.ConsoleTraceListener" />
- </listeners>
- </trace>
- </system.diagnostics>
- ```
-
-## Next steps
-
-* To learn about the NuGet package details and the release notes, see the [bulk executor SDK details](sql-api-sdk-bulk-executor-dot-net.md).
cosmos-db Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/bulk-executor-java.md
- Title: Use bulk executor Java library in Azure Cosmos DB to perform bulk import and update operations
-description: Bulk import and update Azure Cosmos DB documents using bulk executor Java library
----- Previously updated : 03/07/2022----
-# Perform bulk operations on Azure Cosmos DB data
-
-This tutorial provides instructions on performing bulk operations in the [Azure Cosmos DB Java V4 SDK](sql-api-sdk-java-v4.md). This version of the SDK comes with the bulk executor library built-in. If you're using an older version of Java SDK, it's recommended to [migrate to the latest version](migrate-java-v4-sdk.md). Azure Cosmos DB Java V4 SDK is the current recommended solution for Java bulk support.
-
-Currently, the bulk executor library is supported only by Azure Cosmos DB SQL API and Gremlin API accounts. To learn about using bulk executor .NET library with Gremlin API, see [perform bulk operations in Azure Cosmos DB Gremlin API](../graph/bulk-executor-graph-dotnet.md).
--
-## Prerequisites
-
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
-
-* You can [try Azure Cosmos DB for free](../try-free.md) without an Azure subscription, free of charge and commitments. Or, you can use the [Azure Cosmos DB Emulator](../local-emulator.md) with the `https://localhost:8081` endpoint. The Primary Key is provided in [Authenticating requests](../local-emulator.md#authenticate-requests).
-
-* [Java Development Kit (JDK) 1.8+](/java/azure/jdk/)
- - On Ubuntu, run `apt-get install default-jdk` to install the JDK.
-
- - Be sure to set the JAVA_HOME environment variable to point to the folder where the JDK is installed.
-
-* [Download](https://maven.apache.org/download.cgi) and [install](https://maven.apache.org/install.html) a [Maven](https://maven.apache.org/) binary archive
-
- - On Ubuntu, you can run `apt-get install maven` to install Maven.
-
-* Create an Azure Cosmos DB SQL API account by using the steps described in the [create database account](create-sql-api-java.md#create-a-database-account) section of the Java quickstart article.
-
-## Clone the sample application
-
-Now let's switch to working with code by downloading a generic samples repository for Java V4 SDK for Azure Cosmos DB from GitHub. These sample applications perform CRUD operations and other common operations on Azure Cosmos DB. To clone the repository, open a command prompt, navigate to the directory where you want to copy the application and run the following command:
-
-```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples
-```
-
-The cloned repository contains a sample `SampleBulkQuickStartAsync.java` in the `/azure-cosmos-java-sql-api-samples/tree/main/src/main/java/com/azure/cosmos/examples/bulk/async` folder. The application generates documents and executes operations to bulk create, upsert, replace and delete items in Azure Cosmos DB. In the next sections, we will review the code in the sample app.
-
-## Bulk execution in Azure Cosmos DB
-
-1. The Azure Cosmos DB connection settings are read as arguments and assigned to variables defined in the `examples/common/AccountSettings.java` file. These environment variables must be set:
-
-```
-ACCOUNT_HOST=your account hostname;ACCOUNT_KEY=your account primary key
-```
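-
-As a rough illustration of that step, reading the values in code could look like the following sketch. It's a hypothetical class for clarity, not the actual `AccountSettings.java` from the repository, and it assumes the `ACCOUNT_HOST` and `ACCOUNT_KEY` environment variables are set.
-
-```java
-public class AccountSettingsSketch {
-    // Read the endpoint and key from the environment variables shown above.
-    public static final String HOST = System.getenv("ACCOUNT_HOST");
-    public static final String KEY = System.getenv("ACCOUNT_KEY");
-
-    public static void main(String[] args) {
-        System.out.println("Using account endpoint: " + HOST);
-    }
-}
-```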
-
-To run the bulk sample, specify its Main Class:
-
-```
-com.azure.cosmos.examples.bulk.async.SampleBulkQuickStartAsync
-```
-
-2. The `CosmosAsyncClient` object is initialized by using the following statements:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=CreateAsyncClient)]
--
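-
-As an illustrative sketch of that initialization (not the exact sample code), a client could be built like this, assuming the endpoint and key values read earlier:
-
-```java
-import com.azure.cosmos.ConsistencyLevel;
-import com.azure.cosmos.CosmosAsyncClient;
-import com.azure.cosmos.CosmosClientBuilder;
-
-public class ClientSetupSketch {
-    public static CosmosAsyncClient createClient(String endpoint, String key) {
-        // contentResponseOnWriteEnabled(false) trims write response payloads,
-        // which helps when writing many documents in bulk.
-        return new CosmosClientBuilder()
-            .endpoint(endpoint)
-            .key(key)
-            .consistencyLevel(ConsistencyLevel.SESSION)
-            .contentResponseOnWriteEnabled(false)
-            .buildAsyncClient();
-    }
-}
-```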
-3. The sample creates an async database and container. It then creates multiple documents on which bulk operations will be executed. It adds these documents to a `Flux<Family>` reactive stream object:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=AddDocsToStream)]
--
-4. The sample contains methods for bulk create, upsert, replace, and delete. In each method, we map the family documents from the `Flux<Family>` reactive stream to multiple method calls in `CosmosBulkOperations`. These operations are added to another reactive stream object, `Flux<CosmosItemOperation>`. The stream is then passed to the `executeBulkOperations` method of the async `container` that we created at the beginning, and the operations are executed in bulk. See the bulk create method below as an example:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkCreateItems)]
--
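-
-A standalone sketch of the same mapping is shown below. It's illustrative only and assumes a `CosmosAsyncContainer` named `container`, a `Flux<Family>` of documents, and the sample's `Family` class with `lastName` as the partition key.
-
-```java
-import com.azure.cosmos.CosmosAsyncContainer;
-import com.azure.cosmos.examples.common.Family;
-import com.azure.cosmos.models.CosmosBulkOperations;
-import com.azure.cosmos.models.CosmosItemOperation;
-import com.azure.cosmos.models.PartitionKey;
-import reactor.core.publisher.Flux;
-
-public class BulkCreateSketch {
-    public static void bulkCreate(CosmosAsyncContainer container, Flux<Family> families) {
-        // Map each document to a create operation keyed by its partition key value.
-        Flux<CosmosItemOperation> operations = families.map(family ->
-            CosmosBulkOperations.getCreateItemOperation(
-                family, new PartitionKey(family.getLastName())));
-
-        // Execute the operations in bulk; blocking here is for demonstration only.
-        container.executeBulkOperations(operations).blockLast();
-    }
-}
-```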
-5. There's also a class, `BulkWriter.java`, in the same directory as the sample application. This class demonstrates how to handle rate limiting (429) and timeout (408) errors that might occur during bulk execution, and how to retry those operations effectively. It's implemented in the methods below, which also show how to implement local and global throughput control.
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkWriterAbstraction)]
--
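-
-As an illustrative sketch (not the `BulkWriter` class itself), throttled (429) and timed-out (408) responses could be detected and collected for a later retry like this, assuming a `CosmosAsyncContainer` and a `Flux<CosmosItemOperation>`:
-
-```java
-import com.azure.cosmos.CosmosAsyncContainer;
-import com.azure.cosmos.models.CosmosBulkOperationResponse;
-import com.azure.cosmos.models.CosmosItemOperation;
-import reactor.core.publisher.Flux;
-
-import java.util.List;
-
-public class BulkRetrySketch {
-    // Execute the operations and return the ones that should be retried later.
-    public static List<CosmosItemOperation> executeAndCollectRetries(
-            CosmosAsyncContainer container, Flux<CosmosItemOperation> operations) {
-        return container.executeBulkOperations(operations)
-            .filter(response -> response.getResponse() != null
-                && (response.getResponse().getStatusCode() == 429
-                    || response.getResponse().getStatusCode() == 408))
-            .map(CosmosBulkOperationResponse::getOperation)
-            .collectList()
-            .block();
-    }
-}
-```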
-6. Additionally, there are bulk create methods in the sample, which illustrate how to add response processing, and set execution options:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkCreateItemsWithResponseProcessingAndExecutionOptions)]
--
- <!-- The importAll method accepts the following parameters:
-
- |**Parameter** |**Description** |
- |||
- |isUpsert | A flag to enable upsert of the documents. If a document with given ID already exists, it's updated. |
- |disableAutomaticIdGeneration | A flag to disable automatic generation of ID. By default, it is set to true. |
- |maxConcurrencyPerPartitionRange | The maximum degree of concurrency per partition key range. The default value is 20. |
-
- **Bulk import response object definition**
- The result of the bulk import API call contains the following get methods:
-
- |**Parameter** |**Description** |
- |||
- |int getNumberOfDocumentsImported() | The total number of documents that were successfully imported out of the documents supplied to the bulk import API call. |
- |double getTotalRequestUnitsConsumed() | The total request units (RU) consumed by the bulk import API call. |
- |Duration getTotalTimeTaken() | The total time taken by the bulk import API call to complete execution. |
- |List\<Exception> getErrors() | Gets the list of errors if some documents out of the batch supplied to the bulk import API call failed to get inserted. |
- |List\<Object> getBadInputDocuments() | The list of bad-format documents that were not successfully imported in the bulk import API call. User should fix the documents returned and retry import. Bad-formatted documents include documents whose ID value is not a string (null or any other datatype is considered invalid). |
-
-<!-- 5. After you have the bulk import application ready, build the command-line tool from source by using the 'mvn clean package' command. This command generates a jar file in the target folder:
-
- ```bash
- mvn clean package
- ```
-
-6. After the target dependencies are generated, you can invoke the bulk importer application by using the following command:
-
- ```bash
- java -Xmx12G -jar bulkexecutor-sample-1.0-SNAPSHOT-jar-with-dependencies.jar -serviceEndpoint *<Fill in your Azure Cosmos DB's endpoint>* -masterKey *<Fill in your Azure Cosmos DB's primary key>* -databaseId bulkImportDb -collectionId bulkImportColl -operation import -shouldCreateCollection -collectionThroughput 1000000 -partitionKey /profileid -maxConnectionPoolSize 6000 -numberOfDocumentsForEachCheckpoint 1000000 -numberOfCheckpoints 10
- ```
-
- The bulk importer creates a new database and a collection with the database name, collection name, and throughput values specified in the App.config file.
-
-## Bulk update data in Azure Cosmos DB
-
-You can update existing documents by using the BulkUpdateAsync API. In this example, you will set the Name field to a new value and remove the Description field from the existing documents. For the full set of supported field update operations, see [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor).
-
-1. Defines the update items along with corresponding field update operations. In this example, you will use SetUpdateOperation to update the Name field and UnsetUpdateOperation to remove the Description field from all the documents. You can also perform other operations like increment a document field by a specific value, push specific values into an array field, or remove a specific value from an array field. To learn about different methods provided by the bulk update API, see the [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor).
-
- ```java
- SetUpdateOperation<String> nameUpdate = new SetUpdateOperation<>("Name","UpdatedDocValue");
- UnsetUpdateOperation descriptionUpdate = new UnsetUpdateOperation("description");
-
- ArrayList<UpdateOperationBase> updateOperations = new ArrayList<>();
- updateOperations.add(nameUpdate);
- updateOperations.add(descriptionUpdate);
-
- List<UpdateItem> updateItems = new ArrayList<>(cfg.getNumberOfDocumentsForEachCheckpoint());
- IntStream.range(0, cfg.getNumberOfDocumentsForEachCheckpoint()).mapToObj(j -> {
- return new UpdateItem(Long.toString(prefix + j), Long.toString(prefix + j), updateOperations);
- }).collect(Collectors.toCollection(() -> updateItems));
- ```
-
-2. Call the updateAll API that generates random documents to be then bulk imported into an Azure Cosmos container. You can configure the command-line configurations to be passed in CmdLineConfiguration.java file.
-
- ```java
- BulkUpdateResponse bulkUpdateResponse = bulkExecutor.updateAll(updateItems, null)
- ```
-
- The bulk update API accepts a collection of items to be updated. Each update item specifies the list of field update operations to be performed on a document identified by an ID and a partition key value. for more information, see the [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor):
-
- ```java
- public BulkUpdateResponse updateAll(
- Collection<UpdateItem> updateItems,
- Integer maxConcurrencyPerPartitionRange) throws DocumentClientException;
- ```
-
- The updateAll method accepts the following parameters:
-
- |**Parameter** |**Description** |
- |||
- |maxConcurrencyPerPartitionRange | The maximum degree of concurrency per partition key range. The default value is 20. |
-
- **Bulk import response object definition**
- The result of the bulk import API call contains the following get methods:
-
- |**Parameter** |**Description** |
- |||
- |int getNumberOfDocumentsUpdated() | The total number of documents that were successfully updated out of the documents supplied to the bulk update API call. |
- |double getTotalRequestUnitsConsumed() | The total request units (RU) consumed by the bulk update API call. |
- |Duration getTotalTimeTaken() | The total time taken by the bulk update API call to complete execution. |
- |List\<Exception> getErrors() | Gets the list of operational or networking issues related to the update operation. |
- |List\<BulkUpdateFailure> getFailedUpdates() | Gets the list of updates, which could not be completed along with the specific exceptions leading to the failures.|
-
-3. After you have the bulk update application ready, build the command-line tool from source by using the 'mvn clean package' command. This command generates a jar file in the target folder:
-
- ```bash
- mvn clean package
- ```
-
-4. After the target dependencies are generated, you can invoke the bulk update application by using the following command:
-
- ```bash
- java -Xmx12G -jar bulkexecutor-sample-1.0-SNAPSHOT-jar-with-dependencies.jar -serviceEndpoint **<Fill in your Azure Cosmos DB's endpoint>* -masterKey **<Fill in your Azure Cosmos DB's primary key>* -databaseId bulkUpdateDb -collectionId bulkUpdateColl -operation update -collectionThroughput 1000000 -partitionKey /profileid -maxConnectionPoolSize 6000 -numberOfDocumentsForEachCheckpoint 1000000 -numberOfCheckpoints 10
- ``` -->
-
-## Performance tips
-
-Consider the following points for better performance when you use the bulk executor library:
-
-* For best performance, run your application from an Azure VM that's in the same region as your Azure Cosmos DB account's write region.
-* To achieve higher throughput:
-
-  * Set the JVM's heap size to a number large enough to avoid memory issues when handling a large number of documents. Suggested heap size: max(3 GB, 3 * sizeof(all documents passed to the bulk import API in one batch)).
-  * Because there's per-call preprocessing overhead, you'll get higher throughput when you perform bulk operations with a large number of documents. For example, to import 10,000,000 documents, it's preferable to run bulk import 10 times on batches of 1,000,000 documents rather than 100 times on batches of 100,000 documents.
-
-* We recommend instantiating a single CosmosAsyncClient object for the entire application, within a single virtual machine, for a given Azure Cosmos container.
-
-* A single bulk operation API execution consumes a large chunk of the client machine's CPU and network IO because it spawns multiple tasks internally. Avoid spawning multiple concurrent tasks within your application process that each execute bulk operation API calls. If a single bulk operation API call running on a single virtual machine can't consume your entire container's throughput (if your container's throughput > 1 million RU/s), it's preferable to create separate virtual machines to concurrently execute the bulk operation API calls.
-
-
-## Next steps
-* For an overview of bulk executor functionality, see [bulk executor overview](../bulk-executor-overview.md).
cosmos-db Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/certificate-based-authentication.md
- Title: Certificate-based authentication with Azure Cosmos DB and Active Directory
-description: Learn how to configure an Azure AD identity for certificate-based authentication to access keys from Azure Cosmos DB.
- Previously updated : 06/11/2019
-# Certificate-based authentication for an Azure AD identity to access keys from an Azure Cosmos DB account
-
-Certificate-based authentication enables your client application to be authenticated by using Azure Active Directory (Azure AD) with a client certificate. You can perform certificate-based authentication on a machine where you need an identity, such as an on-premises machine or a virtual machine in Azure. Your application can then read Azure Cosmos DB keys without having the keys directly in the application. This article describes how to create a sample Azure AD application, configure it for certificate-based authentication, sign in to Azure by using the new application identity, and then retrieve the keys from your Azure Cosmos account. This article uses Azure PowerShell to set up the identities and provides a C# sample app that authenticates and accesses keys from your Azure Cosmos account.
-
-## Prerequisites
-
-* Install the [latest version](/powershell/azure/install-az-ps) of Azure PowerShell.
-
-* If you don't have an [Azure subscription](../../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
-
-## Register an app in Azure AD
-
-In this step, you will register a sample web application in your Azure AD account. This application is later used to read the keys from your Azure Cosmos DB account. Use the following steps to register an application:
-
-1. Sign into the [Azure portal](https://portal.azure.com/).
-
-1. Open the Azure **Active Directory** pane, go to **App registrations** pane, and select **New registration**.
-
- :::image type="content" source="./media/certificate-based-authentication/new-app-registration.png" alt-text="New application registration in Active Directory":::
-
-1. Fill the **Register an application** form with the following details:
-
-   * **Name** – Provide a name for your application. It can be any name, such as "sampleApp".
-   * **Supported account types** – Choose **Accounts in this organizational directory only (Default Directory)** to allow resources in your current directory to access this application.
-   * **Redirect URL** – Choose an application of type **Web**, and provide a URL where your application is hosted. It can be any URL. For this example, you can provide a test URL such as `https://sampleApp.com`; it's okay even if the app doesn't exist.
-
- :::image type="content" source="./media/certificate-based-authentication/register-sample-web-app.png" alt-text="Registering a sample web application":::
-
-1. Select **Register** after you fill the form.
-
-1. After the app is registered, make a note of the **Application (client) ID** and **Object ID**. You'll use these values in the next steps.
-
- :::image type="content" source="./media/certificate-based-authentication/get-app-object-ids.png" alt-text="Get the application and object IDs":::
-
-## Install the AzureAD module
-
-In this step, you will install the Azure AD PowerShell module. This module is required to get the ID of the application you registered in the previous step and associate a self-signed certificate to that application.
-
-1. Open Windows PowerShell ISE with administrator rights. If you haven't already done so, install the Az PowerShell module and connect to your subscription. If you have multiple subscriptions, you can set the context of the current subscription as shown in the following commands:
-
- ```powershell
- Install-Module -Name Az -AllowClobber
- Connect-AzAccount
-
- Get-AzSubscription
- $context = Get-AzSubscription -SubscriptionId <Your_Subscription_ID>
- Set-AzContext $context
- ```
-
-1. Install and import the [AzureAD](/powershell/module/azuread/) module
-
- ```powershell
- Install-Module AzureAD
- Import-Module AzureAD
- # On PowerShell 7.x, use the -UseWindowsPowerShell parameter
- # Import-Module AzureAD -UseWindowsPowerShell
- ```
-
-## Sign into your Azure AD
-
-Sign in to the Azure AD tenant where you registered the application. Use the Connect-AzureAD command to sign in to your account, and enter your Azure account credentials in the pop-up window.
-
-```powershell
-Connect-AzureAD
-```
-
-## Create a self-signed certificate
-
-Open another instance of Windows PowerShell ISE, and run the following commands to create a self-signed certificate and read the key associated with the certificate:
-
-```powershell
-$cert = New-SelfSignedCertificate -CertStoreLocation "Cert:\CurrentUser\My" -Subject "CN=sampleAppCert" -KeySpec KeyExchange
-$keyValue = [System.Convert]::ToBase64String($cert.GetRawCertData())
-```
-
-## Create the certificate-based credential
-
-Next, run the following commands to get the object ID of your application and create the certificate-based credential. In this example, we set the certificate to expire after a year; you can set it to any required end date.
-
-```powershell
-$application = Get-AzureADApplication -ObjectId <Object_ID_of_Your_Application>
-
-New-AzureADApplicationKeyCredential -ObjectId $application.ObjectId -CustomKeyIdentifier "Key1" -Type AsymmetricX509Cert -Usage Verify -Value $keyValue -EndDate "2020-01-01"
-```
-
-The above command results in output similar to the following screenshot:
--
-## Configure your Azure Cosmos account to use the new identity
-
-1. Sign into the [Azure portal](https://portal.azure.com/).
-
-1. Navigate to your Azure Cosmos account.
-
-1. Assign the Contributor role to the sample app you created in the previous section.
-
- For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
-## Register your certificate with Azure AD
-
-You can associate the certificate-based credential with the client application in Azure AD from the Azure portal. To associate the credential, you must upload the certificate file with the following steps:
-
-In the Azure app registration for the client application:
-
-1. Sign into the [Azure portal](https://portal.azure.com/).
-
-1. Open the Azure **Active Directory** pane, go to the **App registrations** pane, and open the sample app you created in the previous step.
-
-1. Select **Certificates & secrets** and then **Upload certificate**. Browse to the certificate file you created in the previous step and upload it.
-
-1. Select **Add**. After the certificate is uploaded, the thumbprint, start date, and expiration values are displayed.
-
-## Access the keys from PowerShell
-
-In this step, you sign in to Azure by using the application and the certificate you created, and then you access your Azure Cosmos account's keys.
-
-1. First, clear the Azure account credentials that you used to sign in to your account. You can clear the credentials by using the following command:
-
- ```powershell
- Disconnect-AzAccount -Username <Your_Azure_account_email_id>
- ```
-
-1. Next, validate that you can sign in to Azure by using the application's credentials and access the Azure Cosmos DB keys:
-
- ```powershell
- Login-AzAccount -ApplicationId <Your_Application_ID> -CertificateThumbprint $cert.Thumbprint -ServicePrincipal -Tenant <Tenant_ID_of_your_application>
-
- Get-AzCosmosDBAccountKey `
- -ResourceGroupName "<Resource_Group_Name_of_your_Azure_Cosmos_account>" `
- -Name "<Your_Azure_Cosmos_Account_Name>" `
- -Type "Keys"
- ```
-
-The previous command displays the primary and secondary keys of your Azure Cosmos account. You can view the Activity log of your Azure Cosmos account to validate that the get keys request succeeded and that the event was initiated by the "sampleApp" application.
--
-## Access the keys from a C# application
-
-You can also validate this scenario by accessing the keys from a C# application. The following C# console application accesses the Azure Cosmos DB keys by using the app registered in Azure Active Directory. Make sure to update the tenantId, clientId, certName, resource group name, subscription ID, and Azure Cosmos account name details before you run the code.
-
-```csharp
-using System;
-using Microsoft.IdentityModel.Clients.ActiveDirectory;
-using System.Linq;
-using System.Net.Http;
-using System.Security.Cryptography.X509Certificates;
-using System.Threading;
-using System.Threading.Tasks;
-
-namespace TodoListDaemonWithCert
-{
- class Program
- {
- private static string aadInstance = "https://login.windows.net/";
- private static string tenantId = "<Your_Tenant_ID>";
- private static string clientId = "<Your_Client_ID>";
- private static string certName = "<Your_Certificate_Name>";
-
- private static int errorCode = 0;
- static int Main(string[] args)
- {
-            MainAsync().Wait();
- Console.ReadKey();
-
- return 0;
- }
-
-        static async Task MainAsync()
- {
- string authContextURL = aadInstance + tenantId;
- AuthenticationContext authContext = new AuthenticationContext(authContextURL);
- X509Certificate2 cert = ReadCertificateFromStore(certName);
-
- ClientAssertionCertificate credential = new ClientAssertionCertificate(clientId, cert);
- AuthenticationResult result = await authContext.AcquireTokenAsync("https://management.azure.com/", credential);
- if (result == null)
- {
- throw new InvalidOperationException("Failed to obtain the JWT token");
- }
-
- string token = result.AccessToken;
- string subscriptionId = "<Your_Subscription_ID>";
- string rgName = "<ResourceGroup_of_your_Cosmos_account>";
- string accountName = "<Your_Cosmos_account_name>";
- string cosmosDBRestCall = $"https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rgName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}/listKeys?api-version=2015-04-08";
-
- Uri restCall = new Uri(cosmosDBRestCall);
- HttpClient httpClient = new HttpClient();
- httpClient.DefaultRequestHeaders.Remove("Authorization");
- httpClient.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
- HttpResponseMessage response = await httpClient.PostAsync(restCall, null);
-
- Console.WriteLine("Got result {0} and keys {1}", response.StatusCode.ToString(), response.Content.ReadAsStringAsync().Result);
- }
-
- /// <summary>
- /// Reads the certificate
- /// </summary>
- private static X509Certificate2 ReadCertificateFromStore(string certName)
- {
- X509Certificate2 cert = null;
- X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
- store.Open(OpenFlags.ReadOnly);
- X509Certificate2Collection certCollection = store.Certificates;
-
- // Find unexpired certificates.
- X509Certificate2Collection currentCerts = certCollection.Find(X509FindType.FindByTimeValid, DateTime.Now, false);
-
- // From the collection of unexpired certificates, find the ones with the correct name.
- X509Certificate2Collection signingCert = currentCerts.Find(X509FindType.FindBySubjectName, certName, false);
-
-            // Return the most recently issued certificate in the collection that has the right name and is currently valid.
- cert = signingCert.OfType<X509Certificate2>().OrderByDescending(c => c.NotBefore).FirstOrDefault();
- store.Close();
- return cert;
- }
- }
-}
-```
-
-This script outputs the primary and secondary keys of your Azure Cosmos account, as shown in the following screenshot:
--
-Similar to the previous section, you can view the Activity log of your Azure Cosmos account to validate that the get keys request event is initiated by the "sampleApp" application.
--
-## Next steps
-
-* [Secure Azure Cosmos keys using Azure Key Vault](../access-secrets-from-keyvault.md)
-
-* [Security baseline for Azure Cosmos DB](../security-baseline.md)
cosmos-db Change Feed Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-design-patterns.md
- Title: Change feed design patterns in Azure Cosmos DB
-description: Overview of common change feed design patterns
- Previously updated : 03/24/2022
-# Change feed design patterns in Azure Cosmos DB
-
-The Azure Cosmos DB change feed enables efficient processing of large datasets with a high volume of writes. Change feed also offers an alternative to querying an entire dataset to identify what has changed. This document focuses on common change feed design patterns, design tradeoffs, and change feed limitations.
-
->
-> [!VIDEO https://aka.ms/docs.change-feed-azure-functions]
--
-Azure Cosmos DB is well-suited for IoT, gaming, retail, and operational logging applications. A common design pattern in these applications is to use changes to the data to trigger additional actions. Examples of additional actions include:
-
-* Triggering a notification or a call to an API, when an item is inserted or updated.
-* Real-time stream processing for IoT or real-time analytics processing on operational data.
-* Data movement such as synchronizing with a cache, a search engine, a data warehouse, or cold storage.
-
-The change feed in Azure Cosmos DB enables you to build efficient and scalable solutions for each of these patterns, as shown in the following image:
--
-## Event computing and notifications
-
-The Azure Cosmos DB change feed can simplify scenarios that need to trigger a notification or send a call to an API based on a certain event. You can use the [Change Feed Process Library](change-feed-processor.md) to automatically poll your container for changes and call an external API each time there is a write or update.
-
-You can also selectively trigger a notification or send a call to an API based on specific criteria. For example, if you're reading from the change feed by using [Azure Functions](change-feed-functions.md), you can put logic into the function to send a notification only if specific criteria have been met. While the Azure Function code would execute during each write and update, the notification would be sent only if those criteria had been met.
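-
-As a minimal sketch of this pattern using the change feed processor library (rather than Azure Functions), the delegate below inspects each changed document and notifies only when a hypothetical `status` property equals `completed`. The container variables and the `sendNotification` helper are assumptions for illustration.
-
-```java
-import com.azure.cosmos.ChangeFeedProcessor;
-import com.azure.cosmos.ChangeFeedProcessorBuilder;
-import com.azure.cosmos.CosmosAsyncContainer;
-import com.fasterxml.jackson.databind.JsonNode;
-
-import java.util.List;
-
-public class SelectiveNotificationSketch {
-    public static ChangeFeedProcessor create(
-            CosmosAsyncContainer feedContainer, CosmosAsyncContainer leaseContainer) {
-        return new ChangeFeedProcessorBuilder()
-            .hostName("notification-host-1")
-            .feedContainer(feedContainer)
-            .leaseContainer(leaseContainer)
-            .handleChanges((List<JsonNode> docs) -> {
-                for (JsonNode doc : docs) {
-                    // Only notify when the changed item meets the criteria.
-                    if ("completed".equals(doc.path("status").asText())) {
-                        sendNotification(doc); // hypothetical call to an external API
-                    }
-                }
-            })
-            .buildChangeFeedProcessor();
-    }
-
-    private static void sendNotification(JsonNode doc) {
-        // Placeholder: call your external API or messaging system here.
-        System.out.println("Notify for item: " + doc.path("id").asText());
-    }
-}
-```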
-
-## Real-time stream processing
-
-The Azure Cosmos DB change feed can be used for real-time stream processing for IoT or real-time analytics processing on operational data.
-For example, you might receive and store event data from devices, sensors, infrastructure and applications, and process these events in real time, using [Spark](../../hdinsight/spark/apache-spark-overview.md). The following image shows how you can implement a lambda architecture using the Azure Cosmos DB via change feed:
--
-In many cases, stream processing implementations first receive a high volume of incoming data into a temporary message queue such as Azure Event Hubs or Apache Kafka. The change feed is a great alternative due to Azure Cosmos DB's ability to support a sustained high rate of data ingestion with guaranteed low read and write latency. The advantages of the Azure Cosmos DB change feed over a message queue include:
-
-### Data persistence
-
-Data written to Azure Cosmos DB will show up in the change feed and be retained until deleted. Message queues typically have a maximum retention period. For example, [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) offers a maximum data retention of 90 days.
-
-### Querying ability
-
-In addition to reading from a Cosmos container's change feed, you can also run SQL queries on the data stored in Azure Cosmos DB. The change feed isn't a duplication of data already in the container but rather just a different mechanism of reading the data. Therefore, if you read data from the change feed, it will always be consistent with queries of the same Azure Cosmos DB container.
-
-### High availability
-
-Azure Cosmos DB offers up to 99.999% read and write availability. Unlike many message queues, Azure Cosmos DB data can be easily globally distributed and configured with an [RTO (Recovery Time Objective)](../consistency-levels.md#rto) of zero.
-
-After processing items in the change feed, you can build a materialized view and persist aggregated values back in Azure Cosmos DB. If you're using Azure Cosmos DB to build a game, you can, for example, use change feed to implement real-time leaderboards based on scores from completed games.
-
-## Data movement
-
-You can also read from the change feed for real-time data movement.
-
-For example, the change feed helps you perform the following tasks efficiently:
-
-* Update a cache, search index, or data warehouse with data stored in Azure Cosmos DB.
-
-* Perform zero down-time migrations to another Azure Cosmos account or another Azure Cosmos container with a different logical partition key.
-
-* Implement application-level data tiering and archival. For example, you can store "hot data" in Azure Cosmos DB and age out "cold data" to other storage systems such as [Azure Blob Storage](../../storage/common/storage-introduction.md).
-
-When you have to [denormalize data across partitions and containers](how-to-model-partition-example.md#v2-introducing-denormalization-to-optimize-read-queries), you can read from your container's change feed as a source for this data replication. Real-time data replication with the change feed can only guarantee eventual consistency. You can [monitor how far the Change Feed Processor lags behind](how-to-use-change-feed-estimator.md) in processing changes in your Cosmos container.
-
-## Event sourcing
-
-The [event sourcing pattern](/azure/architecture/patterns/event-sourcing) involves using an append-only store to record the full series of actions on that data. Azure Cosmos DB's change feed is a great choice as a central data store in event sourcing architectures where all data ingestion is modeled as writes (no updates or deletes). In this case, each write to Azure Cosmos DB is an "event" and you'll have a full record of past events in the change feed. Typical uses of the events published by the central event store are for maintaining materialized views or for integration with external systems. Because there is no time limit for retention in the change feed, you can replay all past events by reading from the beginning of your Cosmos container's change feed.
-
-You can have [multiple change feed consumers subscribe to the same container's change feed](how-to-create-multiple-cosmos-db-triggers.md#optimizing-containers-for-multiple-triggers). Aside from the [lease container's](change-feed-processor.md#components-of-the-change-feed-processor) provisioned throughput, there is no cost to utilize the change feed. The change feed is available in every container regardless of whether it is utilized.
-
-Azure Cosmos DB is a great central append-only persistent data store in the event sourcing pattern because of its strengths in horizontal scalability and high availability. In addition, the change Feed Processor library offers an ["at least once"](change-feed-processor.md#error-handling) guarantee, ensuring that you won't miss processing any events.
-
-## Current limitations
-
-The change feed has important limitations that you should understand. While items in a Cosmos container will always remain in the change feed, the change feed is not a full operation log. There are important areas to consider when designing an application that utilizes the change feed.
-
-### Intermediate updates
-
-Only the most recent change for a given item is included in the change feed. When processing changes, you will read the latest available item version. If there are multiple updates to the same item in a short period of time, it is possible to miss processing intermediate updates. If you would like to track updates and be able to replay past updates to an item, we recommend modeling these updates as a series of writes instead.
-
-### Deletes
-
-The change feed does not capture deletes. If you delete an item from your container, it is also removed from the change feed. The most common method of handling this is adding a soft marker on the items that are being deleted. You can add a property called "deleted" and set it to "true" at the time of deletion. This document update will show up in the change feed. You can set a TTL on this item so that it can be automatically deleted later.
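-
-As a rough sketch of the soft-delete approach (assuming a container with TTL enabled, a document loaded as an `ObjectNode`, and a known partition key value), the marker and TTL could be applied like this:
-
-```java
-import com.azure.cosmos.CosmosAsyncContainer;
-import com.azure.cosmos.models.PartitionKey;
-import com.fasterxml.jackson.databind.node.ObjectNode;
-
-public class SoftDeleteSketch {
-    public static void softDelete(CosmosAsyncContainer container,
-                                  ObjectNode item, String partitionKeyValue) {
-        // Mark the item as deleted; this update shows up in the change feed.
-        item.put("deleted", true);
-        // Let the service remove the item automatically after one hour.
-        item.put("ttl", 3600);
-        container.upsertItem(item, new PartitionKey(partitionKeyValue), null).block();
-    }
-}
-```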
-
-### Guaranteed order
-
-There is guaranteed order in the change feed within a partition key value but not across partition key values. You should select a partition key that gives you a meaningful order guarantee.
-
-For example, consider a retail application using the event sourcing design pattern. In this application, different user actions are each "events" which are modeled as writes to Azure Cosmos DB. Imagine if some example events occurred in the following sequence:
-
-1. Customer adds Item A to their shopping cart
-2. Customer adds Item B to their shopping cart
-3. Customer removes Item A from their shopping cart
-4. Customer checks out and shopping cart contents are shipped
-
-A materialized view of current shopping cart contents is maintained for each customer. This application must ensure that these events are processed in the order in which they occur. If, for example, the cart checkout were to be processed before Item A's removal, it is likely that the customer would have had Item A shipped, as opposed to the desired Item B. In order to guarantee that these four events are processed in order of their occurrence, they should fall within the same partition key value. If you select **username** (each customer has a unique username) as the partition key, you can guarantee that these events show up in the change feed in the same order in which they are written to Azure Cosmos DB.
-
-## Examples
-
-Here are some real-world change feed code examples that extend beyond the scope of the provided samples:
-
-- [Introduction to the change feed](https://azurecosmosdb.github.io/labs/dotnet/labs/08-change_feed_with_azure_functions.html)
-- [IoT use case centered around the change feed](https://github.com/AzureCosmosDB/scenario-based-labs)
-- [Retail use case centered around the change feed](https://github.com/AzureCosmosDB/scenario-based-labs)
-
-## Next steps
-
-* [Change feed overview](../change-feed.md)
-* [Options to read change feed](read-change-feed.md)
-* [Using change feed with Azure Functions](change-feed-functions.md)
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Change Feed Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-functions.md
- Title: How to use Azure Cosmos DB change feed with Azure Functions
-description: Use Azure Functions to connect to Azure Cosmos DB change feed. Later you can create reactive Azure functions that are triggered on every new event.
- Previously updated : 10/14/2021
-# Serverless event-based architectures with Azure Cosmos DB and Azure Functions
-
-Azure Functions provides the simplest way to connect to the [change feed](../change-feed.md). You can create small reactive Azure Functions that will be automatically triggered on each new event in your Azure Cosmos container's change feed.
--
-With the [Azure Functions trigger for Cosmos DB](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md), you can leverage the [Change Feed Processor](change-feed-processor.md)'s scaling and reliable event detection functionality without the need to maintain any [worker infrastructure](change-feed-processor.md). Just focus on your Azure Function's logic without worrying about the rest of the event-sourcing pipeline. You can even mix the Trigger with any other [Azure Functions bindings](../../azure-functions/functions-triggers-bindings.md#supported-bindings).
-
-> [!NOTE]
-> Currently, the Azure Functions trigger for Cosmos DB is supported for use with the Core (SQL) API only.
-
-## Requirements
-
-To implement a serverless event-based flow, you need:
-
-* **The monitored container**: The monitored container is the Azure Cosmos container being monitored, and it stores the data from which the change feed is generated. Any inserts and updates to the monitored container are reflected in the change feed of the container.
-* **The lease container**: The lease container maintains state across multiple and dynamic serverless Azure Function instances and enables dynamic scaling. You can create the lease container automatically with the Azure Functions trigger for Cosmos DB. You can also create the lease container manually. To automatically create the lease container, set the *CreateLeaseCollectionIfNotExists* flag in the [configuration](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration). Partitioned lease containers are required to have a `/id` partition key definition.
-
-## Create your Azure Functions trigger for Cosmos DB
-
-Creating your Azure Function with an Azure Functions trigger for Cosmos DB is now supported across all Azure Functions IDE and CLI integrations:
-
-* [Visual Studio Extension](../../azure-functions/functions-develop-vs.md) for Visual Studio users.
-* [Visual Studio Code Extension](/azure/developer/javascript/tutorial-vscode-serverless-node-01) for Visual Studio Code users.
-* And finally [Core CLI tooling](../../azure-functions/functions-run-local.md#create-func) for a cross-platform IDE agnostic experience.
-
-## Run your trigger locally
-
-You can run your [Azure Function locally](../../azure-functions/functions-develop-local.md) with the [Azure Cosmos DB Emulator](../local-emulator.md) to create and develop your serverless event-based flows without an Azure Subscription or incurring any costs.
-
-If you want to test live scenarios in the cloud, you can [Try Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without any credit card or Azure subscription required.
-
-## Next steps
-
-You can now continue to learn more about change feed in the following articles:
-
-* [Overview of change feed](../change-feed.md)
-* [Ways to read change feed](read-change-feed.md)
-* [Using change feed processor library](change-feed-processor.md)
-* [How to work with change feed processor library](change-feed-processor.md)
-* [Serverless database computing using Azure Cosmos DB and Azure Functions](serverless-computing-database.md)
cosmos-db Change Feed Processor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-processor.md
- Title: Change feed processor in Azure Cosmos DB
-description: Learn how to use the Azure Cosmos DB change feed processor to read the change feed, the components of the change feed processor
- Previously updated : 04/05/2022
-# Change feed processor in Azure Cosmos DB
-
-The change feed processor is part of the Azure Cosmos DB [.NET V3](https://github.com/Azure/azure-cosmos-dotnet-v3) and [Java V4](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos) SDKs. It simplifies the process of reading the change feed and distributes the event processing across multiple consumers effectively.
-
-The main benefit of the change feed processor library is its fault-tolerant behavior, which assures an "at-least-once" delivery of all the events in the change feed.
-
-## Components of the change feed processor
-
-There are four main components of implementing the change feed processor:
-
-1. **The monitored container:** The monitored container has the data from which the change feed is generated. Any inserts and updates to the monitored container are reflected in the change feed of the container.
-
-1. **The lease container:** The lease container acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account.
-
-1. **The compute instance**: A compute instance hosts the change feed processor to listen for changes. Depending on the platform, it could be a VM, a Kubernetes pod, an Azure App Service instance, or an actual physical machine. It has a unique identifier referred to as the *instance name* throughout this article.
-
-1. **The delegate:** The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads.
-
-To further understand how these four elements of the change feed processor work together, let's look at an example in the following diagram. The monitored container stores documents and uses 'City' as the partition key. The partition key values are distributed in ranges (each range representing a [physical partition](../partitioning-overview.md#physical-partitions)) that contain items.
-There are two compute instances, and the change feed processor assigns different ranges to each instance to maximize compute distribution; each instance has a unique and different name.
-Each range is read in parallel, and its progress is maintained separately from other ranges in the lease container through a *lease* document. The combination of the leases represents the current state of the change feed processor.
--
-## Implementing the change feed processor
-
-### [.NET](#tab/dotnet)
-
-The point of entry is always the monitored container, from a `Container` instance you call `GetChangeFeedProcessorBuilder`:
-
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-change-feed-processor/src/Program.cs?name=DefineProcessor)]
-
-The first parameter is a distinct name that describes the goal of this processor, and the second parameter is the delegate implementation that will handle the changes.
-
-An example of a delegate would be:
--
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-change-feed-processor/src/Program.cs?name=Delegate)]
-
-Afterwards, you define the compute instance name or unique identifier with `WithInstanceName`. This name should be unique and different for each compute instance you deploy. Finally, you specify the container that maintains the lease state with `WithLeaseContainer`.
-
-Calling `Build` will give you the processor instance that you can start by calling `StartAsync`.
-
-## Processing life cycle
-
-The normal life cycle of a host instance is:
-
-1. Read the change feed.
-1. If there are no changes, sleep for a predefined amount of time (customizable with `WithPollInterval` in the Builder) and go to #1.
-1. If there are changes, send them to the **delegate**.
-1. When the delegate finishes processing the changes **successfully**, update the lease store with the latest processed point in time and go to #1.
-
-## Error handling
-
-The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread processing that particular batch of changes is stopped, and a new thread is created. The new thread checks the latest point in time that the lease store has saved for that range of partition key values and restarts from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly, and it's the reason the change feed processor has an "at least once" guarantee.
-
-> [!NOTE]
-> There is only one scenario where a batch of changes will not be retried. If the failure happens on the first ever delegate execution, the lease store has no previous saved state to be used on the retry. On those cases, the retry would use the [initial starting configuration](#starting-time), which might or might not include the last batch.
-
-To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to a dead-letter queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The dead-letter queue might be another Cosmos container. The exact data store doesn't matter; what matters is that the unprocessed changes are persisted.
-
-In addition, you can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed or use the [life cycle notifications](#life-cycle-notifications) to detect underlying failures.
-
-## Life-cycle notifications
-
-The change feed processor lets you hook into relevant events in its [life cycle](#processing-life-cycle). You can choose to be notified of one or all of them. The recommendation is to at least register the error notification:
-
-* Register a handler for `WithLeaseAcquireNotification` to be notified when the current host acquires a lease to start processing it.
-* Register a handler for `WithLeaseReleaseNotification` to be notified when the current host releases a lease and stops processing it.
-* Register a handler for `WithErrorNotification` to be notified when the current host encounters an exception during processing, being able to distinguish if the source is the user delegate (unhandled exception) or an error the processor is encountering trying to access the monitored container (for example, networking issues).
-
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartWithNotifications)]
-
-## Deployment unit
-
-A single change feed processor deployment unit consists of one or more compute instances that have the same `processorName` and lease container configuration but a different instance name each. You can have many deployment units, where each one has a different business flow for the changes and each deployment unit consists of one or more instances.
-
-For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified.
-
-## Dynamic scaling
-
-As mentioned before, within a deployment unit you can have one or more compute instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are:
-
-1. All instances should have the same lease container configuration.
-1. All instances should have the same `processorName`.
-1. Each instance needs to have a different instance name (`WithInstanceName`).
-
-If these three conditions apply, then the change feed processor will distribute all the leases in the lease container across all running instances of that deployment unit and parallelize compute using an equal distribution algorithm. One lease can only be owned by one instance at a given time, so the number of instances should not be greater than the number of leases.
-
-The number of instances can grow and shrink, and the change feed processor will dynamically adjust the load by redistributing accordingly.
-
-Moreover, the change feed processor can dynamically adjust to a container's scale due to throughput or storage increases. When your container grows, the change feed processor transparently handles these scenarios by dynamically increasing the leases and distributing the new leases among existing instances.
-
-## Change feed and provisioned throughput
-
-Change feed read operations on the monitored container will consume [request units](../request-units.md). Make sure your monitored container isn't experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise you'll experience delays in receiving change feed events on your processors.
-
-Operations on the lease container (updating and maintaining state) consume [request units](../request-units.md). The higher the number of instances using the same lease container, the higher the potential request unit consumption. Make sure your lease container isn't experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise you'll experience delays in receiving change feed events on your processors. In cases where throttling is high, the processors might stop processing completely.
-
-## Starting time
-
-By default, when a change feed processor starts the first time, it will initialize the leases container, and start its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor was initialized for the first time won't be detected.
-
-### Reading from a previous date and time
-
-It's possible to initialize the change feed processor to read changes starting at a **specific date and time**, by passing an instance of a `DateTime` to the `WithStartTime` builder extension:
-
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=TimeInitialization)]
-
-The change feed processor will be initialized for that specific date and time and start reading the changes that happened after.
-
-> [!NOTE]
-> Starting the change feed processor at a specific date and time is not supported in multi-region write accounts.
-
-### Reading from the beginning
-
-In other scenarios like data migrations or analyzing the entire history of a container, we need to read the change feed from **the beginning of that container's lifetime**. To do that, we can use `WithStartTime` on the builder extension, but passing `DateTime.MinValue.ToUniversalTime()`, which would generate the UTC representation of the minimum `DateTime` value, like so:
-
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartFromBeginningInitialization)]
-
-The change feed processor will be initialized and start reading changes from the beginning of the lifetime of the container.
-
-> [!NOTE]
-> These customization options only work to set up the starting point in time of the change feed processor. Once the lease container is initialized for the first time, changing them has no effect.
-
-### [Java](#tab/java)
-
-An example of a delegate implementation would be:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=Delegate)]
-
->[!NOTE]
-> In the above we pass a variable `options` of type `ChangeFeedProcessorOptions`, which can be used to set various values including `setStartFromBeginning`:
-> [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=ChangeFeedProcessorOptions)]
-
-We assign this to a `changeFeedProcessorInstance`, passing parameters of compute instance name (`hostName`), the monitored container (here called `feedContainer`) and the `leaseContainer`. We then start the change feed processor:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=StartChangeFeedProcessor)]
-
->[!NOTE]
-> The above code snippets are taken from a sample in GitHub, which you can find [here](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java).
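-
-As a condensed, illustrative sketch of the same wiring (assuming the monitored container, the lease container, and a unique host name already exist):
-
-```java
-import com.azure.cosmos.ChangeFeedProcessor;
-import com.azure.cosmos.ChangeFeedProcessorBuilder;
-import com.azure.cosmos.CosmosAsyncContainer;
-import com.azure.cosmos.models.ChangeFeedProcessorOptions;
-import com.fasterxml.jackson.databind.JsonNode;
-
-import java.util.List;
-
-public class ProcessorSetupSketch {
-    public static ChangeFeedProcessor startProcessor(
-            String hostName,                     // unique per compute instance
-            CosmosAsyncContainer feedContainer,  // monitored container
-            CosmosAsyncContainer leaseContainer) {
-
-        ChangeFeedProcessorOptions options = new ChangeFeedProcessorOptions();
-        options.setStartFromBeginning(false);
-
-        ChangeFeedProcessor processor = new ChangeFeedProcessorBuilder()
-            .hostName(hostName)
-            .feedContainer(feedContainer)
-            .leaseContainer(leaseContainer)
-            .options(options)
-            .handleChanges((List<JsonNode> docs) ->
-                docs.forEach(doc -> System.out.println("Change: " + doc)))
-            .buildChangeFeedProcessor();
-
-        processor.start().subscribe();
-        return processor;
-    }
-}
-```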
-
-## Processing life cycle
-
-The normal life cycle of a host instance is:
-
-1. Read the change feed.
-1. If there are no changes, sleep for a predefined amount of time (customizable with `options.setFeedPollDelay` in the Builder) and go to #1.
-1. If there are changes, send them to the **delegate**.
-1. When the delegate finishes processing the changes **successfully**, update the lease store with the latest processed point in time and go to #1.
-
-## Error handling
-
-The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread processing that particular batch of changes is stopped, and a new thread is created. The new thread checks the latest point in time that the lease store has saved for that range of partition key values and restarts from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly, and it's the reason the change feed processor has an "at least once" guarantee.
-
-> [!NOTE]
-> There is only one scenario where a batch of changes will not be retried. If the failure happens on the first ever delegate execution, the lease store has no previous saved state to be used on the retry. On those cases, the retry would use the [initial starting configuration](#starting-time), which might or might not include the last batch.
-
-To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to a dead-letter queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The dead-letter queue might be another Cosmos container. The exact data store doesn't matter; what matters is that the unprocessed changes are persisted.
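-
-A minimal sketch of such a delegate is shown below; it assumes a dead-letter Cosmos container whose partition key matches a property of the documents, and a hypothetical `process` method that contains the real business logic.
-
-```java
-import com.azure.cosmos.CosmosAsyncContainer;
-import com.fasterxml.jackson.databind.JsonNode;
-
-import java.util.List;
-import java.util.function.Consumer;
-
-public class DeadLetterSketch {
-    // Build a delegate that processes each change and, on failure, persists the
-    // document to a dead-letter container so it can be inspected or replayed later.
-    public static Consumer<List<JsonNode>> createDelegate(CosmosAsyncContainer deadLetterContainer) {
-        return docs -> {
-            for (JsonNode doc : docs) {
-                try {
-                    process(doc); // hypothetical business logic
-                } catch (Exception ex) {
-                    deadLetterContainer.upsertItem(doc).block();
-                }
-            }
-        };
-    }
-
-    private static void process(JsonNode doc) {
-        // Placeholder for the real processing logic.
-    }
-}
-```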
-
-In addition, you can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed.
-
-<!-- ## Life-cycle notifications
-
-The change feed processor lets you hook to relevant events in its [life cycle](#processing-life-cycle), you can choose to be notified to one or all of them. The recommendation is to at least register the error notification:
-
-* Register a handler for `WithLeaseAcquireNotification` to be notified when the current host acquires a lease to start processing it.
-* Register a handler for `WithLeaseReleaseNotification` to be notified when the current host releases a lease and stops processing it.
-* Register a handler for `WithErrorNotification` to be notified when the current host encounters an exception during processing, being able to distinguish if the source is the user delegate (unhandled exception) or an error the processor is encountering trying to access the monitored container (for example, networking issues).
-
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartWithNotifications)] -->
-
-## Deployment unit
-
-A single change feed processor deployment unit consists of one or more compute instances that have the same lease container configuration and the same `leasePrefix`, but a different `hostName` each. You can have many deployment units, where each one has a different business flow for the changes and each deployment unit consists of one or more instances.
-
-For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified.
-
-## Dynamic scaling
-
-As mentioned before, within a deployment unit you can have one or more compute instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are:
-
-1. All instances should have the same lease container configuration.
-1. All instances should have the same value set in `options.setLeasePrefix` (or none set at all).
-1. Each instance needs to have a different `hostName`.
-
-If these three conditions apply, then the change feed processor will distribute all the leases in the lease container across all running instances of that deployment unit and parallelize compute using an equal distribution algorithm. One lease can only be owned by one instance at a given time, so the number of instances should not be greater than the number of leases.
-
-The number of instances can grow and shrink, and the change feed processor will dynamically adjust the load by redistributing accordingly. Deployment units can share the same lease container, but they should each have a different `leasePrefix`.
-
-Moreover, the change feed processor can dynamically adjust to a container's scale due to throughput or storage increases. When your container grows, the change feed processor transparently handles these scenarios by dynamically increasing the leases and distributing the new leases among existing instances.
-
-## Change feed and provisioned throughput
-
-Change feed read operations on the monitored container will consume [request units](../request-units.md). Make sure your monitored container isn't experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise you'll experience delays in receiving change feed events on your processors.
-
-Operations on the lease container (updating and maintaining state) consume [request units](../request-units.md). The higher the number of instances using the same lease container, the higher the potential request unit consumption. Make sure your lease container isn't experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise you'll experience delays in receiving change feed events on your processors. In cases where throttling is high, the processors might stop processing completely.
-
-## Starting time
-
-By default, when a change feed processor starts the first time, it will initialize the leases container, and start its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor was initialized for the first time won't be detected.
-
-### Reading from a previous date and time
-
-It's possible to initialize the change feed processor to read changes starting at a **specific date and time**, by setting `setStartTime` in `options`. The change feed processor will be initialized for that specific date and time and start reading the changes that happened after.
-
-> [!NOTE]
-> Starting the change feed processor at a specific date and time is not supported in multi-region write accounts.
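-
-As a rough sketch, with the .NET SDK this is configured through `WithStartTime` on the processor builder; the container, handler, and date values below are illustrative assumptions.
-
-```csharp
-using System;
-using Microsoft.Azure.Cosmos;
-
-// Read only the changes that happened after this UTC date and time.
-DateTime startFrom = new DateTime(2022, 10, 1, 0, 0, 0, DateTimeKind.Utc);
-
-ChangeFeedProcessor processor = monitoredContainer
-    .GetChangeFeedProcessorBuilder<dynamic>("changeFeedSample", HandleChangesAsync)
-    .WithInstanceName("host-1")
-    .WithLeaseContainer(leaseContainer)
-    .WithStartTime(startFrom) // only applies the first time the lease container is initialized
-    .Build();
-```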
-
-### Reading from the beginning
-
-In our above sample, we set `setStartFromBeginning` to `false`, which is the same as the default value. In other scenarios like data migrations or analyzing the entire history of a container, we need to read the change feed from **the beginning of that container's lifetime**. To do that, we can set `setStartFromBeginning` to `true`. The change feed processor will be initialized and start reading changes from the beginning of the lifetime of the container.
-
-> [!NOTE]
-> These customization options only set the starting point in time of the change feed processor. Once the leases container is initialized for the first time, changing them has no effect.
---
-## Sharing the lease container
-
-You can share the lease container across multiple [deployment units](#deployment-unit), where each deployment unit listens to a different monitored container or has a different `processorName`. With this configuration, each deployment unit maintains an independent state on the lease container. Review the [request unit consumption on the lease container](#change-feed-and-provisioned-throughput) to make sure the provisioned throughput is enough for all the deployment units.
-
-## Where to host the change feed processor
-
-The change feed processor can be hosted in any platform that supports long running processes or tasks:
-
-* A continuous running [Azure WebJob](/training/modules/run-web-app-background-task-with-webjobs/).
-* A process in an [Azure Virtual Machine](/azure/architecture/best-practices/background-jobs#azure-virtual-machines).
-* A background job in [Azure Kubernetes Service](/azure/architecture/best-practices/background-jobs#azure-kubernetes-service).
-* A serverless function in [Azure Functions](/azure/architecture/best-practices/background-jobs#azure-functions).
-* An [ASP.NET hosted service](/aspnet/core/fundamentals/host/hosted-services).
-
-While the change feed processor can run in short-lived environments because the lease container maintains the state, the startup cycle of these environments adds delay to receiving the notifications (due to the overhead of starting the processor every time the environment is started).
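-
-As a hedged sketch of the last hosting option, an ASP.NET Core hosted service can start the processor when the host starts and stop it on shutdown; the database, container, and processor names below are placeholders, and the `CosmosClient` is assumed to be registered for dependency injection.
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.Threading;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-using Microsoft.Extensions.Hosting;
-
-public class ChangeFeedHostedService : IHostedService
-{
-    private readonly ChangeFeedProcessor processor;
-
-    public ChangeFeedHostedService(CosmosClient cosmosClient)
-    {
-        Container monitored = cosmosClient.GetContainer("databaseName", "monitoredContainer");
-        Container leases = cosmosClient.GetContainer("databaseName", "leases");
-
-        this.processor = monitored
-            .GetChangeFeedProcessorBuilder<dynamic>("changeFeedSample", HandleChangesAsync)
-            .WithInstanceName(Environment.MachineName)
-            .WithLeaseContainer(leases)
-            .Build();
-    }
-
-    // Register with services.AddHostedService<ChangeFeedHostedService>() so the host manages the lifetime.
-    public Task StartAsync(CancellationToken cancellationToken) => this.processor.StartAsync();
-
-    public Task StopAsync(CancellationToken cancellationToken) => this.processor.StopAsync();
-
-    private static Task HandleChangesAsync(IReadOnlyCollection<dynamic> changes, CancellationToken cancellationToken)
-    {
-        // Process the batch of changed items here.
-        return Task.CompletedTask;
-    }
-}
-```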
-
-## Additional resources
-
-* [Azure Cosmos DB SDK](sql-api-sdk-dotnet.md)
-* [Complete sample application on GitHub](https://github.com/Azure-Samples/cosmos-dotnet-change-feed-processor)
-* [Additional usage samples on GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed)
-* [Cosmos DB workshop labs for change feed processor](https://azurecosmosdb.github.io/labs/dotnet/labs/08-change_feed_with_azure_functions.html#consume-cosmos-db-change-feed-via-the-change-feed-processor)
-
-## Next steps
-
-You can now proceed to learn more about change feed processor in the following articles:
-
-* [Overview of change feed](../change-feed.md)
-* [Change feed pull model](change-feed-pull-model.md)
-* [How to migrate from the change feed processor library](how-to-migrate-from-change-feed-library.md)
-* [Using the change feed estimator](how-to-use-change-feed-estimator.md)
-* [Change feed processor start time](#starting-time)
cosmos-db Change Feed Pull Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-pull-model.md
- Title: Change feed pull model
-description: Learn how to use the Azure Cosmos DB change feed pull model to read the change feed and the differences between the pull model and Change Feed Processor
- Previously updated : 04/07/2022
-# Change feed pull model in Azure Cosmos DB
-
-With the change feed pull model, you can consume the Azure Cosmos DB change feed at your own pace. As you can already do with the [change feed processor](change-feed-processor.md), you can use the change feed pull model to parallelize the processing of changes across multiple change feed consumers.
-
-## Comparing with change feed processor
-
-Many scenarios can process the change feed using either the [change feed processor](change-feed-processor.md) or the pull model. The pull model's continuation tokens and the change feed processor's lease container are both "bookmarks" for the last processed item (or batch of items) in the change feed.
-
-However, you can't convert continuation tokens to a lease container (or vice versa).
-
-> [!NOTE]
-> In most cases when you need to read from the change feed, the simplest option is to use the [change feed processor](change-feed-processor.md).
-
-You should consider using the pull model in these scenarios:
-- Read changes from a particular partition key
-- Control the pace at which your client receives changes for processing
-- Perform a one-time read of the existing data in the change feed (for example, to do a data migration)
-
-Here are some key differences between the change feed processor and the pull model:
-
-|Feature | Change feed processor| Pull model |
-|---|---|---|
-| Keeping track of current point in processing change feed | Lease (stored in an Azure Cosmos DB container) | Continuation token (stored in memory or manually persisted) |
-| Ability to replay past changes | Yes, with push model | Yes, with pull model|
-| Polling for future changes | Automatically checks for changes based on user-specified `WithPollInterval` | Manual |
-| Behavior where there are no new changes | Automatically wait `WithPollInterval` and recheck | Must check status and manually recheck |
-| Process changes from entire container | Yes, and automatically parallelized across multiple threads/machine consuming from the same container| Yes, and manually parallelized using FeedRange |
-| Process changes from just a single partition key | Not supported | Yes|
-
-> [!NOTE]
-> Unlike when reading using the change feed processor, you must explicitly handle cases where there are no new changes.
-
-## Consuming an entire container's changes
-
-### [.NET](#tab/dotnet)
-
-You can create a `FeedIterator` to process the change feed using the pull model. When you initially create a `FeedIterator`, you must specify a required `ChangeFeedStartFrom` value, which consists of both the starting position for reading changes and the desired `FeedRange`. The `FeedRange` is a range of partition key values and specifies the items that will be read from the change feed using that specific `FeedIterator`.
-
-You can optionally specify `ChangeFeedRequestOptions` to set a `PageSizeHint`. When set, this property sets the maximum number of items received per page. If operations in the monitored collection are performed through stored procedures, transaction scope is preserved when reading items from the change feed. As a result, the number of items received could be higher than the specified value, so that the items changed by the same transaction are returned as part of one atomic batch.
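-
-For instance, here's a small sketch of passing `ChangeFeedRequestOptions` with a `PageSizeHint` when creating the iterator; the `container` variable and the value of 100 are assumptions.
-
-```csharp
-ChangeFeedRequestOptions requestOptions = new ChangeFeedRequestOptions
-{
-    // Request at most 100 items per page; transactional batches can still exceed this hint.
-    PageSizeHint = 100
-};
-
-FeedIterator<User> iteratorWithPageSize = container.GetChangeFeedIterator<User>(
-    ChangeFeedStartFrom.Beginning(),
-    ChangeFeedMode.Incremental,
-    requestOptions);
-```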
-
-The `FeedIterator` comes in two flavors. In addition to the examples below that return entity objects, you can also obtain the response with `Stream` support. Streams allow you to read data without having it first deserialized, saving on client resources.
-
-Here's an example for obtaining a `FeedIterator` that returns entity objects, in this case a `User` object:
-
-```csharp
-FeedIterator<User> iteratorWithPOCOs = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
-```
-
-Here's an example for obtaining a `FeedIterator` that returns a `Stream`:
-
-```csharp
-FeedIterator iteratorWithStreams = container.GetChangeFeedStreamIterator(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
-```
-
-If you don't supply a `FeedRange` to a `FeedIterator`, you can process an entire container's change feed at your own pace. Here's an example that starts reading all changes from the current time:
-
-```csharp
-FeedIterator<User> iteratorForTheEntireContainer = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Now(), ChangeFeedMode.Incremental);
-
-while (iteratorForTheEntireContainer.HasMoreResults)
-{
- FeedResponse<User> response = await iteratorForTheEntireContainer.ReadNextAsync();
-
- if (response.StatusCode == HttpStatusCode.NotModified)
- {
- Console.WriteLine($"No new changes");
- await Task.Delay(TimeSpan.FromSeconds(5));
- }
- else
- {
- foreach (User user in response)
- {
- Console.WriteLine($"Detected change for user with id {user.id}");
- }
- }
-}
-```
-
-Because the change feed is effectively an infinite list of items encompassing all future writes and updates, the value of `HasMoreResults` is always true. When you try to read the change feed and there are no new changes available, you'll receive a response with `NotModified` status. In the above example, it is handled by waiting 5 seconds before rechecking for changes.
-
-## Consuming a partition key's changes
-
-In some cases, you may only want to process a specific partition key's changes. You can obtain a `FeedIterator` for a specific partition key and process the changes the same way that you can for an entire container.
-
-```csharp
-FeedIterator<User> iteratorForThePartitionKey = container.GetChangeFeedIterator<User>(
-    ChangeFeedStartFrom.Beginning(FeedRange.FromPartitionKey(new PartitionKey("PartitionKeyValue"))), ChangeFeedMode.Incremental);
-
-while (iteratorForThePartitionKey.HasMoreResults)
-{
- FeedResponse<User> response = await iteratorForThePartitionKey.ReadNextAsync();
-
- if (response.StatusCode == HttpStatusCode.NotModified)
- {
- Console.WriteLine($"No new changes");
- await Task.Delay(TimeSpan.FromSeconds(5));
- }
- else
- {
- foreach (User user in response)
- {
- Console.WriteLine($"Detected change for user with id {user.id}");
- }
- }
-}
-```
-
-## Using FeedRange for parallelization
-
-In the [change feed processor](change-feed-processor.md), work is automatically spread across multiple consumers. In the change feed pull model, you can use the `FeedRange` to parallelize the processing of the change feed. A `FeedRange` represents a range of partition key values.
-
-Here's an example showing how to obtain a list of ranges for your container:
-
-```csharp
-IReadOnlyList<FeedRange> ranges = await container.GetFeedRangesAsync();
-```
-
-When you obtain a list of FeedRanges for your container, you'll get one `FeedRange` per [physical partition](../partitioning-overview.md#physical-partitions).
-
-Using a `FeedRange`, you can then create a `FeedIterator` to parallelize the processing of the change feed across multiple machines or threads. Unlike the previous example that showed how to obtain a `FeedIterator` for the entire container or a single partition key, you can use FeedRanges to obtain multiple FeedIterators, which can process the change feed in parallel.
-
-When you want to use FeedRanges, you need an orchestrator process that obtains the FeedRanges and distributes them to the consuming machines. This distribution could be:
-
-* Using `FeedRange.ToJsonString` and distributing this string value. The consumers can use this value with `FeedRange.FromJsonString` (see the sketch after this list).
-* If the distribution is in-process, passing the `FeedRange` object reference.
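-
-Here's a hedged sketch of that serialization round trip between an orchestrator and a consumer; the transport between them (for example, a queue) is assumed and not shown.
-
-```csharp
-// Orchestrator: serialize a range so it can be handed to a worker.
-IReadOnlyList<FeedRange> feedRanges = await container.GetFeedRangesAsync();
-string serializedRange = feedRanges[0].ToJsonString();
-
-// Consumer: rebuild the FeedRange from the string and read only that range.
-FeedRange assignedRange = FeedRange.FromJsonString(serializedRange);
-FeedIterator<User> iteratorForRange = container.GetChangeFeedIterator<User>(
-    ChangeFeedStartFrom.Beginning(assignedRange),
-    ChangeFeedMode.Incremental);
-```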
-
-Here's a sample that shows how to read from the beginning of the container's change feed using two hypothetical separate machines that are reading in parallel:
-
-Machine 1:
-
-```csharp
-FeedIterator<User> iteratorA = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[0]), ChangeFeedMode.Incremental);
-while (iteratorA.HasMoreResults)
-{
- FeedResponse<User> response = await iteratorA.ReadNextAsync();
-
- if (response.StatusCode == HttpStatusCode.NotModified)
- {
- Console.WriteLine($"No new changes");
- await Task.Delay(TimeSpan.FromSeconds(5));
- }
- else
- {
- foreach (User user in response)
- {
- Console.WriteLine($"Detected change for user with id {user.id}");
- }
- }
-}
-```
-
-Machine 2:
-
-```csharp
-FeedIterator<User> iteratorB = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[1]), ChangeFeedMode.Incremental);
-while (iteratorB.HasMoreResults)
-{
-    FeedResponse<User> response = await iteratorB.ReadNextAsync();
-
- if (response.StatusCode == HttpStatusCode.NotModified)
- {
- Console.WriteLine($"No new changes");
- await Task.Delay(TimeSpan.FromSeconds(5));
- }
- else
- {
- foreach (User user in response)
- {
- Console.WriteLine($"Detected change for user with id {user.id}");
- }
- }
-}
-```
-
-## Saving continuation tokens
-
-You can save the position of your `FeedIterator` by obtaining the continuation token. A continuation token is a string value that keeps track of your FeedIterator's last processed changes and allows the `FeedIterator` to resume at this point later. The following code will read through the change feed since container creation. After no more changes are available, it will persist a continuation token so that change feed consumption can be resumed later.
-
-```csharp
-FeedIterator<User> iterator = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
-
-string continuation = null;
-
-while (iterator.HasMoreResults)
-{
- FeedResponse<User> response = await iterator.ReadNextAsync();
-
- if (response.StatusCode == HttpStatusCode.NotModified)
- {
- Console.WriteLine($"No new changes");
- continuation = response.ContinuationToken;
- // Stop the consumption since there are no new changes
- break;
- }
- else
- {
- foreach (User user in response)
- {
- Console.WriteLine($"Detected change for user with id {user.id}");
- }
- }
-}
-
-// Some time later when I want to check changes again
-FeedIterator<User> iteratorThatResumesFromLastPoint = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.ContinuationToken(continuation), ChangeFeedMode.Incremental);
-```
-
-As long as the Cosmos container still exists, a FeedIterator's continuation token never expires.
-
-### [Java](#tab/java)
-
-You can create an `Iterator<FeedResponse<JsonNode>> responseIterator` to process the change feed using the pull model. When creating `CosmosChangeFeedRequestOptions`, you must specify where to start reading the change feed from, and pass the desired `FeedRange`. The `FeedRange` is a range of partition key values and specifies the items that will be read from the change feed. If you specify `FeedRange.forFullRange()`, you can process an entire container's change feed at your own pace. You can optionally specify a value in `byPage()`. When set, this property sets the maximum number of items received per page. Below is an example for obtaining a `responseIterator`.
-
->[!NOTE]
-> All of the below code snippets are taken from a sample in GitHub, which you can find [here](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java).
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=FeedResponseIterator)]
-
-We can then iterate over the results. Because the change feed is effectively an infinite list of items encompassing all future writes and updates, the value of `responseIterator.hasNext()` is always true. Below is an example, which reads all changes starting from the beginning. Each iteration persists a continuation token after processing all events, and picks up from the last processed point in the change feed. This is handled using `createForProcessingFromContinuation`:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=AllFeedRanges)]
--
-## Consuming a partition key's changes
-
-In some cases, you may only want to process a specific partition key's changes. You can process the changes for a specific partition key in the same way that you can for an entire container. Here's an example:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=PartitionKeyProcessing)]
--
-## Using FeedRange for parallelization
-
-In the [change feed processor](change-feed-processor.md), work is automatically spread across multiple consumers. In the change feed pull model, you can use the `FeedRange` to parallelize the processing of the change feed. A `FeedRange` represents a range of partition key values.
-
-Here's an example showing how to obtain a list of ranges for your container:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=GetFeedRanges)]
-
-When you obtain a list of FeedRanges for your container, you'll get one `FeedRange` per [physical partition](../partitioning-overview.md#physical-partitions).
-
-Using a `FeedRange`, you can then parallelize the processing of the change feed across multiple machines or threads. Unlike the previous example that showed how to process changes for the entire container or a single partition key, you can use FeedRanges to process the change feed in parallel.
-
-In the case where you want to use FeedRanges, you need to have an orchestrator process that obtains FeedRanges and distributes them to those machines. This distribution could be:
-
-* Using `FeedRange.toString()` and distributing this string value.
-* If the distribution is in-process, passing the `FeedRange` object reference.
-
-Here's a sample that shows how to read from the beginning of the container's change feed using two hypothetical separate machines that are reading in parallel:
-
-Machine 1:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=Machine1)]
-
-Machine 2:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeedpull/SampleChangeFeedPullModel.java?name=Machine2)]
---
-## Next steps
-
-* [Overview of change feed](../change-feed.md)
-* [Using the change feed processor](change-feed-processor.md)
-* [Trigger Azure Functions](change-feed-functions.md)
cosmos-db Changefeed Ecommerce Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/changefeed-ecommerce-solution.md
- Title: Use Azure Cosmos DB change feed to visualize real-time data analytics
-description: This article describes how change feed can be used by a retail company to understand user patterns, perform real-time data analysis and visualization
- Previously updated : 03/24/2022
-# Use Azure Cosmos DB change feed to visualize real-time data analytics
-
-The Azure Cosmos DB change feed is a mechanism to get a continuous and incremental feed of records from an Azure Cosmos container as those records are being created or modified. Change feed support works by listening to the container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified. To learn more about change feed, see the [working with change feed](../change-feed.md) article.
-
-This article describes how change feed can be used by an e-commerce company to understand user patterns, perform real-time data analysis and visualization. You will analyze events such as a user viewing an item, adding an item to their cart, or purchasing an item. When one of these events occurs, a new record is created, and the change feed logs that record. Change feed then triggers a series of steps resulting in visualization of metrics that analyze the company performance and activity. Sample metrics that you can visualize include revenue, unique site visitors, most popular items, and average price of the items that are viewed versus added to a cart versus purchased. These sample metrics can help an e-commerce company evaluate its site popularity, develop its advertising and pricing strategies, and make decisions regarding what inventory to invest in.
-
->
-> [!VIDEO https://aka.ms/docs.ecomm-change-feed]
->
-
-## Solution components
-The following diagram represents the data flow and components involved in the solution:
-
-
-1. **Data Generation:** A data simulator is used to generate retail data that represents events such as a user viewing an item, adding an item to their cart, and purchasing an item. You can generate a large set of sample data by using the data generator. The generated sample data contains documents in the following format:
-
- ```json
- {
- "CartID": 2486,
- "Action": "Viewed",
- "Item": "Women's Denim Jacket",
- "Price": 31.99
- }
- ```
-
-2. **Cosmos DB:** The generated data is stored in an Azure Cosmos container.
-
-3. **Change Feed:** The change feed will listen for changes to the Azure Cosmos container. Each time a new document is added into the collection (that is, when an event occurs, such as a user viewing an item, adding an item to their cart, or purchasing an item), the change feed will trigger an [Azure Function](../../azure-functions/functions-overview.md).
-
-4. **Azure Function:** The Azure Function processes the new data and sends it to [Azure Event Hubs](../../event-hubs/event-hubs-about.md).
-
-5. **Azure event hub:** The event hub stores these events and sends them to [Azure Stream Analytics](../../stream-analytics/stream-analytics-introduction.md) to perform further analysis.
-
-6. **Azure Stream Analytics:** Azure Stream Analytics defines queries to process the events and perform real-time data analysis. This data is then sent to [Microsoft Power BI](/power-bi/desktop-what-is-desktop).
-
-7. **Power BI:** Power BI is used to visualize the data sent by Azure Stream Analytics. You can build a dashboard to see how the metrics change in real time.
-
-## Prerequisites
-
-* Microsoft .NET Framework 4.7.1 or higher
-
-* Microsoft .NET Core 2.1 (or higher)
-
-* Visual Studio with Universal Windows Platform development, .NET desktop development, and ASP.NET and web development workloads
-
-* Microsoft Azure Subscription
-
-* Microsoft Power BI Account
-
-* Download the [Azure Cosmos DB change feed lab](https://github.com/Azure-Samples/azure-cosmos-db-change-feed-dotnet-retail-sample) from GitHub.
-
-## Create Azure resources
-
-Create the Azure resources: Azure Cosmos DB, storage account, event hub, and Stream Analytics required by the solution. You will deploy these resources through an Azure Resource Manager template. Use the following steps to deploy these resources:
-
-1. Set the Windows PowerShell execution policy to **Unrestricted**. To do so, open **Windows PowerShell as an Administrator** and run the following commands:
-
- ```powershell
- Get-ExecutionPolicy
- Set-ExecutionPolicy Unrestricted
- ```
-
-2. From the GitHub repository you downloaded in the previous step, navigate to the **Azure Resource Manager** folder, and open the file called **parameters.json** file.
-
-3. Provide values for the `cosmosdbaccount_name`, `eventhubnamespace_name`, and `storageaccount_name` parameters as indicated in the **parameters.json** file. You'll need to use the names that you give to each of your resources later.
-
-4. From **Windows PowerShell**, navigate to the **Azure Resource Manager** folder and run the following command:
-
- ```powershell
- .\deploy.ps1
- ```
-5. When prompted, enter your Azure **Subscription ID**, **changefeedlab** for the resource group name, and **run1** for the deployment name. Once the resources begin to deploy, it may take up to 10 minutes for the deployment to complete.
-
-## Create a database and the collection
-
-You will now create a collection to hold e-commerce site events. When a user views an item, adds an item to their cart, or purchases an item, the collection will receive a record that includes the action ("viewed", "added", or "purchased"), the name of the item involved, the price of the item involved, and the ID number of the user cart involved.
-
-1. Go to [Azure portal](https://portal.azure.com/) and find the **Azure Cosmos DB Account** that's been created by the template deployment.
-
-2. From the **Data Explorer** pane, select **New Collection** and fill the form with the following details:
-
- * For the **Database id** field, select **Create new**, then enter **changefeedlabdatabase**. Leave the **Provision database throughput** box unchecked.
- * For the **Collection** id field, enter **changefeedlabcollection**.
- * For the **Partition key** field, enter **/Item**. This is case-sensitive, so make sure you enter it correctly.
- * For the **Throughput** field, enter **10000**.
- * Select the **OK** button.
-
-3. Next create another collection named **leases** for change feed processing. The leases collection coordinates processing the change feed across multiple workers. A separate collection is used to store the leases with one lease per partition.
-
-4. Return to the **Data Explorer** pane and select **New Collection** and fill the form with the following details:
-
- * For the **Database id** field, select **Use existing**, then enter **changefeedlabdatabase**.
- * For the **Collection id** field, enter **leases**.
- * For **Storage capacity**, select **Fixed**.
- * Leave the **Throughput** field set to its default value.
- * Select the **OK** button.
-
-## Get the connection string and keys
-
-### Get the Azure Cosmos DB connection string
-
-1. Go to [Azure portal](https://portal.azure.com/) and find the **Azure Cosmos DB Account** that's created by the template deployment.
-
-2. Navigate to the **Keys** pane and copy the PRIMARY CONNECTION STRING to a notepad or another document that you will have access to throughout the lab. You should label it **Cosmos DB Connection String**. You'll need to copy the string into your code later, so take a note and remember where you are storing it.
-
-### Get the storage account key and connection string
-
-Azure Storage Accounts allow users to store data. In this lab, you will use a storage account to store data that is used by the Azure Function. The Azure Function is triggered when any modification is made to the collection.
-
-1. Return to your resource group and open the storage account that you created earlier
-
-2. Select **Access keys** from the menu on the left-hand side.
-
-3. Copy the values under **key 1** to a notepad or another document that you will have access to throughout the lab. You should label the **Key** as **Storage Key** and the **Connection string** as **Storage Connection String**. You'll need to copy these strings into your code later, so take a note and remember where you are storing them.
-
-### Get the event hub namespace connection string
-
-An Azure event hub receives the event data, stores, processes, and forwards the data. In this lab, the event hub will receive a document every time a new event occurs (whenever an item is viewed by a user, added to a user's cart, or purchased by a user) and then will forward that document to Azure Stream Analytics.
-
-1. Return to your resource group and open the **Event Hubs Namespace** that you created and named in the prelab.
-
-2. Select **Shared access policies** from the menu on the left-hand side.
-
-3. Select **RootManageSharedAccessKey**. Copy the **Connection string-primary key** to a notepad or another document that you will have access to throughout the lab. You should label it **Event Hub Namespace** connection string. You'll need to copy the string into your code later, so take a note and remember where you are storing it.
-
-## Set up Azure Function to read the change feed
-
-When a new document is created, or a current document is modified in a Cosmos container, the change feed automatically adds that modified document to its history of collection changes. You will now build and run an Azure Function that processes the change feed. When a document is created or modified in the collection you created, the Azure Function will be triggered by the change feed. Then the Azure Function will send the modified document to the event hub.
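-
-The function in the lab follows roughly the shape sketched below. This is a hedged illustration rather than the lab's exact **ChangeFeedProcessor.cs**; the function name, binding names, and connection setting names are placeholders.
-
-```csharp
-using System.Collections.Generic;
-using System.Threading.Tasks;
-using Microsoft.Azure.Documents;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Extensions.Logging;
-
-public static class ChangeFeedProcessorFunction
-{
-    [FunctionName("ChangeFeedProcessorFunction")]
-    public static async Task Run(
-        [CosmosDBTrigger(
-            databaseName: "changefeedlabdatabase",
-            collectionName: "changefeedlabcollection",
-            ConnectionStringSetting = "CosmosDBConnection",
-            LeaseCollectionName = "leases")] IReadOnlyList<Document> documents,
-        [EventHub("event-hub1", Connection = "EventHubConnection")] IAsyncCollector<string> output,
-        ILogger log)
-    {
-        // Forward each changed document to the event hub for downstream analytics.
-        foreach (Document document in documents)
-        {
-            log.LogInformation($"Forwarding document {document.Id}");
-            await output.AddAsync(document.ToString());
-        }
-    }
-}
-```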
-
-1. Return to the repository that you cloned on your device.
-
-2. Right-click the file named **ChangeFeedLabSolution.sln** and select **Open With Visual Studio**.
-
-3. Navigate to **local.settings.json** in Visual Studio. Then use the values you recorded earlier to fill in the blanks.
-
-4. Navigate to **ChangeFeedProcessor.cs**. In the parameters for the **Run** function, perform the following actions:
-
- * Replace the text **YOUR COLLECTION NAME HERE** with the name of your collection. If you followed earlier instructions, the name of your collection is changefeedlabcollection.
- * Replace the text **YOUR LEASES COLLECTION NAME HERE** with the name of your leases collection. If you followed earlier instructions, the name of your leases collection is **leases**.
- * At the top of Visual Studio, make sure that the Startup Project box on the left of the green arrow says **ChangeFeedFunction**.
- * Select **Start** at the top of the page to run the program
- * You can confirm that the function is running when the console app says "Job host started".
-
-## Insert data into Azure Cosmos DB
-
-To see how the change feed processes new actions on an e-commerce site, you have to simulate data that represents users viewing items from the product catalog, adding those items to their carts, and purchasing the items in their carts. This data is arbitrary and is used to replicate what data on an e-commerce site would look like.
-
-1. Navigate back to the repository in File Explorer, and right-click **ChangeFeedFunction.sln** to open it again in a new Visual Studio window.
-
-2. Navigate to the **App.config** file. Within the `<appSettings>` block, add the endpoint and unique **PRIMARY KEY** of your Azure Cosmos DB account that you retrieved earlier.
-
-3. Add in the **collection** and **database** names. (These names should be **changefeedlabcollection** and **changefeedlabdatabase** unless you choose to name yours differently.)
-
- :::image type="content" source="./media/changefeed-ecommerce-solution/update-connection-string.png" alt-text="Update connection strings":::
-
-4. Save the changes on all the files edited.
-
-5. At the top of Visual Studio, make sure that the **Startup Project** box on the left of the green arrow says **DataGenerator**. Then select **Start** at the top of the page to run the program.
-
-6. Wait for the program to run. The stars mean that data is coming in! Keep the program running - it is important that lots of data is collected.
-
-7. If you navigate to the [Azure portal](https://portal.azure.com/), then to the Cosmos DB account within your resource group, and then to **Data Explorer**, you will see the randomized data imported in your **changefeedlabcollection**.
-
- :::image type="content" source="./media/changefeed-ecommerce-solution/data-generated-in-portal.png" alt-text="Data generated in portal":::
-
-## Set up a stream analytics job
-
-Azure Stream Analytics is a fully managed cloud service for real-time processing of streaming data. In this lab, you will use stream analytics to process new events from the event hub (when an item is viewed, added to a cart, or purchased), incorporate those events into real-time data analysis, and send them into Power BI for visualization.
-
-1. From the [Azure portal](https://portal.azure.com/), navigate to your resource group, then to **streamjob1** (the stream analytics job that you created in the prelab).
-
-2. Select **Inputs** as demonstrated below.
-
- :::image type="content" source="./media/changefeed-ecommerce-solution/create-input.png" alt-text="Create input":::
-
-3. Select **+ Add stream input**. Then select **Event Hub** from the drop-down menu.
-
-4. Fill the new input form with the following details:
-
- * In the **Input** alias field, enter **input**.
- * Select the option for **Select Event Hub from your subscriptions**.
- * Set the **Subscription** field to your subscription.
- * In the **Event Hubs namespace** field, enter the name of your event hub namespace that you created during the prelab.
- * In the **Event Hub name** field, select the option for **Use existing** and choose **event-hub1** from the drop-down menu.
- * Leave **Event Hub policy** name field set to its default value.
- * Leave **Event serialization format** as **JSON**.
- * Leave **Encoding field** set to **UTF-8**.
- * Leave **Event compression type** field set to **None**.
- * Select the **Save** button.
-
-5. Navigate back to the stream analytics job page, and select **Outputs**.
-
-6. Select **+ Add**. Then select **Power BI** from the drop-down menu.
-
-7. To create a new Power BI output to visualize average price, perform the following actions:
-
- * In the **Output alias** field, enter **averagePriceOutput**.
- * Leave the **Group workspace** field set to **Authorize connection to load workspaces**.
- * In the **Dataset name** field, enter **averagePrice**.
- * In the **Table name** field, enter **averagePrice**.
- * Select the **Authorize** button, then follow the instructions to authorize the connection to Power BI.
- * Select the **Save** button.
-
-8. Then go back to **streamjob1** and select **Edit query**.
-
- :::image type="content" source="./media/changefeed-ecommerce-solution/edit-query.png" alt-text="Edit query":::
-
-9. Paste the following query into the query window. The **AVERAGE PRICE** query calculates the average price of all items that are viewed by users, the average price of all items that are added to users' carts, and the average price of all items that are purchased by users. This metric can help e-commerce companies decide what prices to sell items at and what inventory to invest in. For example, if the average price of items viewed is much higher than the average price of items purchased, then a company might choose to add less expensive items to its inventory.
-
- ```sql
- /*AVERAGE PRICE*/
- SELECT System.TimeStamp AS Time, Action, AVG(Price)
- INTO averagePriceOutput
- FROM input
- GROUP BY Action, TumblingWindow(second,5)
- ```
-10. Then select **Save** in the upper left-hand corner.
-
-11. Now return to **streamjob1** and select the **Start** button at the top of the page. Azure Stream Analytics can take a few minutes to start up, but eventually you will see it change from "Starting" to "Running".
-
-## Connect to Power BI
-
-Power BI is a suite of business analytics tools to analyze data and share insights. It's a great example of how you can strategically visualize the analyzed data.
-
-1. Sign in to Power BI and navigate to **My Workspace** by opening the menu on the left-hand side of the page.
-
-2. Select **+ Create** in the top right-hand corner and then select **Dashboard** to create a dashboard.
-
-3. Select **+ Add tile** in the top right-hand corner.
-
-4. Select **Custom Streaming Data**, then select the **Next** button.
-
-5. Select **averagePrice** from **YOUR DATASETS**, then select **Next**.
-
-6. In the **Visualization Type** field, choose **Clustered bar chart** from the drop-down menu. Under **Axis**, add action. Skip **Legend** without adding anything. Then, under the next section called **Value**, add **avg**. Select **Next**, then title your chart, and select **Apply**. You should see a new chart on your dashboard!
-
-7. Now, if you want to visualize more metrics, you can go back to **streamjob1** and create three more outputs with the following fields.
-
- a. **Output alias:** incomingRevenueOutput, Dataset name: incomingRevenue, Table name: incomingRevenue
- b. **Output alias:** top5Output, Dataset name: top5, Table name: top5
- c. **Output alias:** uniqueVisitorCountOutput, Dataset name: uniqueVisitorCount, Table name: uniqueVisitorCount
-
- Then select **Edit query** and paste the following queries **above** the one you already wrote.
-
- ```sql
- /*TOP 5*/
- WITH Counter AS
- (
- SELECT Item, Price, Action, COUNT(*) AS countEvents
- FROM input
- WHERE Action = 'Purchased'
- GROUP BY Item, Price, Action, TumblingWindow(second,30)
- ),
- top5 AS
- (
- SELECT DISTINCT
- CollectTop(5) OVER (ORDER BY countEvents) AS topEvent
- FROM Counter
- GROUP BY TumblingWindow(second,30)
- ),
- arrayselect AS
- (
- SELECT arrayElement.ArrayValue
- FROM top5
- CROSS APPLY GetArrayElements(top5.topevent) AS arrayElement
- )
- SELECT arrayvalue.value.item, arrayvalue.value.price, arrayvalue.value.countEvents
- INTO top5Output
- FROM arrayselect
-
- /*REVENUE*/
- SELECT System.TimeStamp AS Time, SUM(Price)
- INTO incomingRevenueOutput
- FROM input
- WHERE Action = 'Purchased'
- GROUP BY TumblingWindow(hour, 1)
-
- /*UNIQUE VISITORS*/
- SELECT System.TimeStamp AS Time, COUNT(DISTINCT CartID) as uniqueVisitors
- INTO uniqueVisitorCountOutput
- FROM input
- GROUP BY TumblingWindow(second, 5)
- ```
-
- The TOP 5 query calculates the top five items, ranked by the number of times that they have been purchased. This metric can help e-commerce companies evaluate which items are most popular and can influence the company's advertising, pricing, and inventory decisions.
-
-    The REVENUE query calculates revenue by summing up the prices of all items purchased in each one-hour window. This metric can help e-commerce companies evaluate their financial performance and also understand what times of day contribute the most revenue. This can impact the overall company strategy, marketing in particular.
-
-    The UNIQUE VISITORS query calculates how many unique visitors are on the site every five seconds by detecting unique cart IDs. This metric can help e-commerce companies evaluate their site activity and strategize how to acquire more customers.
-
-8. You can now add tiles for these datasets as well.
-
- * For Top 5, it would make sense to do a clustered column chart with the items as the axis and the count as the value.
- * For Revenue, it would make sense to do a line chart with time as the axis and the sum of the prices as the value. The time window to display should be the largest possible in order to deliver as much information as possible.
- * For Unique Visitors, it would make sense to do a card visualization with the number of unique visitors as the value.
-
- This is how a sample dashboard looks with these charts:
-
- :::image type="content" source="./media/changefeed-ecommerce-solution/visualizations.png" alt-text="Screenshot shows a sample dashboard with charts named Average Price of Items by Action, Unique Visitors, Revenue, and Top 5 Items Purchased.":::
-
-## Optional: Visualize with an E-commerce site
-
-You will now observe how you can use your new data analysis tool to connect with a real e-commerce site. In order to build the e-commerce site, use an Azure Cosmos database to store the list of product categories, the product catalog, and a list of the most popular items.
-
-1. Navigate back to the [Azure portal](https://portal.azure.com/), then to your **Cosmos DB account**, then to **Data Explorer**.
-
- Add two collections under **changefeedlabdatabase** - **products** and **categories** with Fixed storage capacity.
-
-    Add another collection under **changefeedlabdatabase** named **topItems** with **/Item** as the partition key.
-
-2. Select the **topItems** collection, and under **Scale and Settings** set the **Time to Live** to be **30 seconds** so that topItems updates every 30 seconds.
-
- :::image type="content" source="./media/changefeed-ecommerce-solution/time-to-live.png" alt-text="Time to live":::
-
-3. In order to populate the **topItems** collection with the most frequently purchased items, navigate back to **streamjob1** and add a new **Output**. Select **Cosmos DB**.
-
-4. Fill in the required fields as pictured below.
-
- :::image type="content" source="./media/changefeed-ecommerce-solution/cosmos-output.png" alt-text="Cosmos output":::
-
-5. If you added the optional TOP 5 query in the previous part of the lab, proceed to part 5a. If not, proceed to part 5b.
-
- 5a. In **streamjob1**, select **Edit query** and paste the following query in your Azure Stream Analytics query editor below the TOP 5 query but above the rest of the queries.
-
- ```sql
- SELECT arrayvalue.value.item AS Item, arrayvalue.value.price, arrayvalue.value.countEvents
- INTO topItems
- FROM arrayselect
- ```
- 5b. In **streamjob1**, select **Edit query** and paste the following query in your Azure Stream Analytics query editor above all other queries.
-
- ```sql
- /*TOP 5*/
- WITH Counter AS
- (
- SELECT Item, Price, Action, COUNT(*) AS countEvents
- FROM input
- WHERE Action = 'Purchased'
- GROUP BY Item, Price, Action, TumblingWindow(second,30)
- ),
- top5 AS
- (
- SELECT DISTINCT
- CollectTop(5) OVER (ORDER BY countEvents) AS topEvent
- FROM Counter
- GROUP BY TumblingWindow(second,30)
- ),
- arrayselect AS
- (
- SELECT arrayElement.ArrayValue
- FROM top5
- CROSS APPLY GetArrayElements(top5.topevent) AS arrayElement
- )
- SELECT arrayvalue.value.item AS Item, arrayvalue.value.price, arrayvalue.value.countEvents
- INTO topItems
- FROM arrayselect
- ```
-
-6. Open **EcommerceWebApp.sln** and navigate to the **Web.config** file in the **Solution Explorer**.
-
-7. Within the `<appSettings>` block, add the **URI** and **PRIMARY KEY** that you saved earlier where it says **your URI here** and **your primary key here**. Then add in your **database name** and **collection name** as indicated. (These names should be **changefeedlabdatabase** and **changefeedlabcollection** unless you chose to name yours differently.)
-
- Fill in your **products collection name**, **categories collection name**, and **top items collection name** as indicated. (These names should be **products, categories, and topItems** unless you chose to name yours differently.)
-
-8. Navigate to and open the **Checkout folder** within **EcommerceWebApp.sln.** Then open the **Web.config** file within that folder.
-
-9. Within the `<appSettings>` block, add the **URI** and **PRIMARY KEY** that you saved earlier, where indicated. Then, add in your **database name** and **collection name** as indicated. (These names should be **changefeedlabdatabase** and **changefeedlabcollection** unless you chose to name yours differently.)
-
-10. Press **Start** at the top of the page to run the program.
-
-11. Now you can play around on the e-commerce site. When you view an item, add an item to your cart, change the quantity of an item in your cart, or purchase an item, these events will be passed through the Cosmos DB change feed to event hub, Stream Analytics, and then Power BI. We recommend continuing to run DataGenerator to generate significant web traffic data and provide a realistic set of "Hot Products" on the e-commerce site.
-
-## Delete the resources
-
-To delete the resources that you created during this lab, navigate to the resource group on [Azure portal](https://portal.azure.com/), then select **Delete resource group** from the menu at the top of the page and follow the instructions provided.
-
-## Next steps
-
-* To learn more about change feed, see [working with change feed support in Azure Cosmos DB](../change-feed.md)
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/cli-samples.md
- Title: Azure CLI Samples for Azure Cosmos DB | Microsoft Docs
-description: This article lists several Azure CLI code samples available for interacting with Azure Cosmos DB. View API-specific CLI samples.
- Previously updated : 08/19/2022
-keywords: cosmos db, azure cli samples, azure cli code samples, azure cli script samples
--
-# Azure CLI samples for Azure Cosmos DB Core (SQL) API
--
-The following tables include links to sample Azure CLI scripts for the Azure Cosmos DB SQL API and to sample Azure CLI scripts that apply to all Cosmos DB APIs. Common samples are the same across all APIs.
-
-These samples require Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
-
-## Core (SQL) API Samples
-
-|Task | Description |
-|---|---|
-| [Create an Azure Cosmos account, database, and container](../scripts/cli/sql/create.md)| Creates an Azure Cosmos DB account, database, and container for Core (SQL) API. |
-| [Create a serverless Azure Cosmos account, database, and container](../scripts/cli/sql/serverless.md)| Creates a serverless Azure Cosmos DB account, database, and container for Core (SQL) API. |
-| [Create an Azure Cosmos account, database, and container with autoscale](../scripts/cli/sql/autoscale.md)| Creates an Azure Cosmos DB account, database, and container with autoscale for Core (SQL) API. |
-| [Perform throughput operations](../scripts/cli/sql/throughput.md) | Read, update, and migrate between autoscale and standard throughput on a database and container.|
-| [Lock resources from deletion](../scripts/cli/sql/lock.md)| Prevent resources from being deleted with resource locks.|
-|||
-
-## Common API Samples
-
-These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core) API account, but these operations are identical across all database APIs in Cosmos DB.
-
-|Task | Description |
-|---|---|
-| [Add or fail over regions](../scripts/cli/common/regions.md) | Add a region, change failover priority, trigger a manual failover.|
-| [Perform account key operations](../scripts/cli/common/keys.md) | List account keys, read-only keys, regenerate keys and list connection strings.|
-| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.|
-| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.|
-| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
-| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.|
-|||
-
-## Next steps
-
-Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb).
-
-For Azure CLI samples for other APIs see:
-- [CLI Samples for Cassandra](../cassandr)
-- [CLI Samples for Gremlin](../graph/cli-samples.md)
-- [CLI Samples for MongoDB API](../mongodb/cli-samples.md)
-- [CLI Samples for Table](../table/cli-samples.md)
cosmos-db Conceptual Resilient Sdk Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/conceptual-resilient-sdk-applications.md
- Title: Designing resilient applications with Azure Cosmos DB SDKs
-description: Learn how to build resilient applications using the Azure Cosmos DB SDKs and which error status codes you should retry on.
- Previously updated : 05/05/2022
-# Designing resilient applications with Azure Cosmos DB SDKs
-
-When authoring client applications that interact with Azure Cosmos DB through any of the SDKs, it's important to understand a few key fundamentals. This article is a design guide to help you understand these fundamentals and design resilient applications.
-
-## Overview
-
-For a video overview of the concepts discussed in this article, see:
-
-> [!VIDEO https://www.youtube.com/embed/McZIQhZpvew?start=118]
->
-
-## Connectivity modes
-
-Azure Cosmos DB SDKs can connect to the service in two [connectivity modes](sql-sdk-connection-modes.md). The .NET and Java SDKs can connect to the service in both Gateway and Direct mode, while the others can only connect in Gateway mode. Gateway mode uses the HTTP protocol and Direct mode uses the TCP protocol.
-
-Gateway mode is always used to fetch metadata such as the account, container, and routing information, regardless of which mode the SDK is configured to use. This information is cached in memory and is used to connect to the [service replicas](../partitioning-overview.md#replica-sets).
-
-In summary, for SDKs in Gateway mode, you can expect HTTP traffic, while for SDKs in Direct mode, you can expect a combination of HTTP and TCP traffic under different circumstances (like initialization, or fetching metadata, or routing information).
-
-## Client instances and connections
-
-Regardless of the connectivity mode, it's critical to maintain a Singleton instance of the SDK client per account per application. Connections, both HTTP and TCP, are scoped to the client instance. Most compute environments have limitations in terms of the number of connections that can be open at the same time and when these limits are reached, connectivity will be affected.
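-
-For example, here's a minimal sketch of holding one client instance for the lifetime of the application with the .NET SDK; the endpoint and key values are placeholders.
-
-```csharp
-using Microsoft.Azure.Cosmos;
-
-public static class CosmosClientHolder
-{
-    // One client per Azure Cosmos DB account, reused across the whole application
-    // so that HTTP and TCP connections are shared instead of recreated per request.
-    private static readonly CosmosClient client = new CosmosClient(
-        "https://<your-account>.documents.azure.com:443/",
-        "<your-account-key>");
-
-    public static CosmosClient Client => client;
-}
-```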
-
-## Distributed applications and networks
-
-When you design distributed applications, there are three key components:
-
-* Your application and the environment it runs on.
-* The network, which includes any component between your application and the Azure Cosmos DB service endpoint.
-* The Azure Cosmos DB service endpoint.
-
-When failures occur, they often fall into one of these three areas, and it's important to understand that due to the distributed nature of the system, it's impractical to expect 100% availability for any of these components.
-
-Azure Cosmos DB has a [comprehensive set of availability SLAs](../high-availability.md#slas), but none of them is 100%. The network components that connect your application to the service endpoint can have transient hardware issues and lose packets. Even the compute environment where your application runs could have a CPU spike affecting operations. These failure conditions can affect the operations of the SDKs and normally surface as errors with particular codes.
-
-Your application should be resilient to a [certain degree](#when-to-contact-customer-support) of potential failures across these components by implementing [retry policies](#should-my-application-retry-on-errors) over the responses provided by the SDKs.
-
-## Should my application retry on errors?
-
-The short answer is **yes**. But not all errors make sense to retry on; some of the error or status codes aren't transient. The table below describes them in detail:
-
-| Status Code | Should add retry | SDKs retry | Description |
-|-|-|-|-|
-| 400 | No | No | [Bad request](troubleshoot-bad-request.md) |
-| 401 | No | No | [Not authorized](troubleshoot-unauthorized.md) |
-| 403 | Optional | No | [Forbidden](troubleshoot-forbidden.md) |
-| 404 | No | No | [Resource is not found](troubleshoot-not-found.md) |
-| 408 | Yes | Yes | [Request timed out](troubleshoot-dot-net-sdk-request-timeout.md) |
-| 409 | No | No | Conflict failure is when the identity (ID and partition key) provided for a resource on a write operation has been taken by an existing resource or when a [unique key constraint](../unique-keys.md) has been violated. |
-| 410 | Yes | Yes | Gone exceptions (transient failure that shouldn't violate SLA) |
-| 412 | No | No | Precondition failure is where the operation specified an eTag that is different from the version available at the server. It's an [optimistic concurrency](database-transactions-optimistic-concurrency.md#optimistic-concurrency-control) error. Retry the request after reading the latest version of the resource and updating the eTag on the request. |
-| 413 | No | No | [Request Entity Too Large](../concepts-limits.md#per-item-limits) |
-| 429 | Yes | Yes | It's safe to retry on a 429. Review the [guide to troubleshoot HTTP 429](troubleshoot-request-rate-too-large.md).|
-| 449 | Yes | Yes | Transient error that only occurs on write operations, and is safe to retry. This can point to a design issue where too many concurrent operations are trying to update the same object in Cosmos DB. |
-| 500 | No | No | The operation failed due to an unexpected service error. Contact support by filing an [Azure support issue](https://aka.ms/azure-support). |
-| 503 | Yes | Yes | [Service unavailable](troubleshoot-service-unavailable.md) |
-
-In the table above, all the status codes marked with **Yes** on the second column should have some degree of retry coverage in your application.
-
-### HTTP 403
-
-The Azure Cosmos DB SDKs don't retry on HTTP 403 failures in general, but there are certain errors associated with HTTP 403 that your application might decide to react to. For example, if you receive an error indicating that [a Partition Key is full](troubleshoot-forbidden.md#partition-key-exceeding-storage), you might decide to alter the partition key of the document you're trying to write based on some business rule.
-
-### HTTP 429
-
-The Azure Cosmos DB SDKs will retry on HTTP 429 errors by default following the client configuration and honoring the service's response `x-ms-retry-after-ms` header, waiting for the indicated time before retrying.
-
-When the SDK retries are exceeded, the error is returned to your application. Ideally, your application can inspect the `x-ms-retry-after-ms` header in the response and use it as a hint to decide how long to wait before retrying the request. Another alternative is an exponential back-off algorithm or configuring the client to extend the retries on HTTP 429.
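-
-As a hedged sketch, the .NET SDK exposes this configuration through `CosmosClientOptions`; the values below are illustrative, not recommendations.
-
-```csharp
-using System;
-using Microsoft.Azure.Cosmos;
-
-CosmosClientOptions clientOptions = new CosmosClientOptions
-{
-    // Extend the SDK's automatic retries on HTTP 429 before the error surfaces to the application.
-    MaxRetryAttemptsOnRateLimitedRequests = 9,
-    MaxRetryWaitTimeOnRateLimitedRequests = TimeSpan.FromSeconds(30)
-};
-
-CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>", clientOptions);
-```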
-
-### HTTP 449
-
-The Azure Cosmos DB SDKs will retry on HTTP 449 with an incremental back-off during a fixed period of time to accommodate most scenarios.
-
-When the automatic SDK retries are exceeded, the error is returned to your application. HTTP 449 errors can be safely retried. Because of the highly concurrent nature of write operations, it's better to have a random back-off algorithm to avoid repeating the same degree of concurrency after a fixed interval.
-
-### Timeouts and connectivity related failures (HTTP 408/503)
-
-Network timeouts and connectivity failures are among the most common errors. The Azure Cosmos DB SDKs are themselves resilient and will retry timeouts and connectivity issues across the HTTP and TCP protocols if the retry is feasible:
-
-* For read operations, the SDKs will retry any timeout or connectivity related error.
-* For write operations, the SDKs will **not** retry because these operations are **not idempotent**. When a timeout occurs waiting for the response, it's not possible to know if the request reached the service.
-
-If the account has multiple regions available, the SDKs will also attempt a [cross-region retry](troubleshoot-sdk-availability.md#transient-connectivity-issues-on-tcp-protocol).
-
-Because of the nature of timeouts and connectivity failures, these might not appear in your [account metrics](../monitor-cosmos-db.md), as they only cover failures happening on the service side.
-
-It's recommended for applications to have their own retry policy for these scenarios and take into consideration how to resolve write timeouts. For example, retrying on a Create timeout can yield an HTTP 409 (Conflict) if the previous request did reach the service, but it would succeed if it didn't.
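-
-Here's a hedged sketch of one way an application might handle a write timeout for an item creation; the `Order` type, the assumption that `/id` is the partition key, and the retry count and back-off are all illustrative, not prescriptive.
-
-```csharp
-using System;
-using System.Net;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-// Hypothetical item type; "id" is assumed to be the partition key path.
-public record Order(string id);
-
-public static class OrderWriter
-{
-    public static async Task CreateOrderWithRetryAsync(Container container, Order order)
-    {
-        for (int attempt = 1; attempt <= 3; attempt++)
-        {
-            try
-            {
-                await container.CreateItemAsync(order, new PartitionKey(order.id));
-                return;
-            }
-            catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.Conflict)
-            {
-                // A previously timed-out attempt actually reached the service; treat the item as created.
-                return;
-            }
-            catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.RequestTimeout
-                || ex.StatusCode == HttpStatusCode.ServiceUnavailable)
-            {
-                // The request might not have reached the service; back off and retry.
-                await Task.Delay(TimeSpan.FromMilliseconds(200 * attempt));
-            }
-        }
-    }
-}
-```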
-
-### Language specific implementation details
-
-For language-specific implementation details, see:
-
-* [.NET SDK implementation information](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/docs/)
-* [Java SDK implementation information](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos/docs/)
-
-## Do retries affect my latency?
-
-From the client perspective, any retries will affect the end-to-end latency of an operation. When your application's P99 latency is affected, understanding which retries are happening and how to address them is important.
-
-Azure Cosmos DB SDKs provide detailed information in their logs and diagnostics that can help identify which retries are taking place. For more information, see [how to collect .NET SDK diagnostics](troubleshoot-dot-net-sdk-slow-request.md#capture-diagnostics) and [how to collect Java SDK diagnostics](troubleshoot-java-sdk-v4-sql.md#capture-the-diagnostics).
-
-## What about regional outages?
-
-The Azure Cosmos DB SDKs cover regional availability and can perform retries on other regions of the account. Refer to the [multiregional environments retry scenarios and configurations](troubleshoot-sdk-availability.md) article to understand which scenarios involve other regions.
-
-## When to contact customer support
-
-Before contacting customer support, go through these steps:
-
-* What is the impact measured in volume of operations affected compared to the operations succeeding? Is it within the service SLAs?
-* Is the P99 latency affected?
-* Are the failures related to [error codes](#should-my-application-retry-on-errors) that my application should retry on and does the application cover such retries?
-* Are the failures affecting all your application instances or only a subset? When the issue is limited to a subset of instances, it's commonly a problem related to those instances.
-* Have you gone through the related troubleshooting documents in the table above to rule out a problem in the application environment?
-
-If all the application instances are affected, or the percentage of affected operations is outside the service SLAs or affects your own application SLAs and P99 latency, contact customer support.
-
-## Next steps
-
-* Learn about [multiregional environments retry scenarios and configurations](troubleshoot-sdk-availability.md)
-* Review the [Availability SLAs](../high-availability.md#slas)
-* Use the latest [.NET SDK](sql-api-sdk-dotnet-standard.md)
-* Use the latest [Java SDK](sql-api-sdk-java-v4.md)
-* Use the latest [Python SDK](sql-api-sdk-python.md)
-* Use the latest [Node SDK](sql-api-sdk-node.md)
cosmos-db Couchbase Cosmos Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/couchbase-cosmos-migration.md
- Title: 'Migrate from CouchBase to Azure Cosmos DB SQL API'
-description: Step-by-Step guidance for migrating from CouchBase to Azure Cosmos DB SQL API
--- Previously updated : 02/11/2020-----
-# Migrate from CouchBase to Azure Cosmos DB SQL API
-
-Azure Cosmos DB is a scalable, globally distributed, fully managed database. It provides guaranteed low latency access to your data. To learn more about Azure Cosmos DB, see the [overview](../introduction.md) article. This article provides instructions to migrate Java applications that are connected to Couchbase to a SQL API account in Azure Cosmos DB.
-
-## Differences in nomenclature
-
-The following are the key features that work differently in Azure Cosmos DB when compared to Couchbase:
-
-| Couchbase | Azure Cosmos DB |
-|--|--|
-| Couchbase server | Account |
-| Bucket | Database |
-| Bucket | Container/Collection |
-| JSON Document | Item / Document |
-
-## Key differences
-
-* Azure Cosmos DB has an "ID" field within the document, whereas Couchbase has the ID as part of the bucket. The "ID" field is unique within a logical partition.
-
-* Azure Cosmos DB scales by using partitioning (sharding), which means it splits the data into multiple partitions (shards). These partitions are created based on the partition key property that you provide. You can select the partition key to optimize read operations, write operations, or both. To learn more, see the [partitioning](../partitioning-overview.md) article.
-
-* In Azure Cosmos DB, the top level of the JSON doesn't need to denote the collection, because the container name is already known. This makes the JSON structure much simpler. The following example shows the differences in the data model between Couchbase and Azure Cosmos DB:
-
- **Couchbase**: Document ID = "99FF4444"
-
- ```json
- {
- "TravelDocument":
- {
- "Country":"India",
- "Validity" : "2022-09-01",
- "Person":
- {
- "Name": "Manish",
- "Address": "AB Road, City-z"
- },
- "Visas":
- [
- {
- "Country":"India",
- "Type":"Multi-Entry",
- "Validity":"2022-09-01"
- },
- {
- "Country":"US",
- "Type":"Single-Entry",
- "Validity":"2022-08-01"
- }
- ]
- }
- }
- ```
-
-    **Azure Cosmos DB**: The "ID" is within the document itself, as shown below:
-
- ```json
- {
- "id" : "99FF4444",
-
- "Country":"India",
- "Validity" : "2022-09-01",
- "Person":
- {
- "Name": "Manish",
- "Address": "AB Road, City-z"
- },
- "Visas":
- [
- {
- "Country":"India",
- "Type":"Multi-Entry",
- "Validity":"2022-09-01"
- },
- {
- "Country":"US",
- "Type":"Single-Entry",
- "Validity":"2022-08-01"
- }
- ]
- }
-
- ```
-
-## Java SDK support
-
-Azure Cosmos DB has the following SDKs to support different Java frameworks:
-
-* Async SDK
-* Spring Boot SDK
-
-The following sections describe when to use each of these SDKs. Consider an example where we have three types of workloads:
-
-## Couchbase as document repository & spring data-based custom queries
-
-If the workload that you're migrating is based on the Spring Boot SDK, use the following steps:
-
-1. Add parent to the POM.xml file:
-
-    ```xml
- <parent>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-parent</artifactId>
- <version>2.1.5.RELEASE</version>
- <relativePath/>
- </parent>
- ```
-
-1. Add properties to the POM.xml file:
-
-    ```xml
- <azure.version>2.1.6</azure.version>
- ```
-
-1. Add dependencies to the POM.xml file:
-
-    ```xml
- <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>azure-cosmosdb-spring-boot-starter</artifactId>
- <version>2.1.6</version>
- </dependency>
- ```
-
-1. Add application properties under resources and specify the following. Make sure to replace the URL, key, and database name parameters:
-
-    ```properties
- azure.cosmosdb.uri=<your-cosmosDB-URL>
- azure.cosmosdb.key=<your-cosmosDB-key>
- azure.cosmosdb.database=<your-cosmosDB-dbName>
- ```
-
-1. Define the name of the collection in the model. You can also specify further annotations, for example, to denote the ID and partition key explicitly:
-
- ```java
- @Document(collection = "mycollection")
- public class User {
-        @Id
- private String id;
- private String firstName;
- @PartitionKey
- private String lastName;
- }
- ```
-
-The following are the code snippets for CRUD operations:
-
-### Insert and update operations
-
-Here, *_repo* is the repository object and *doc* is an object of the POJO class. You can use `.save` to insert, or to upsert if a document with the specified ID is found. The following code snippet shows how to insert or update a *doc* object:
-
-`_repo.save(doc);`
-
-### Delete Operation
-
-Consider the following code snippet, where the *doc* object must have the ID and partition key set so that the object can be located and deleted:
-
-`_repo.delete(doc);`
-
-### Read Operation
-
-You can read a document with or without specifying the partition key. If you don't specify the partition key, the read is treated as a cross-partition query. Consider the following code samples: the first performs the operation by using the ID and partition key field, and the second uses a regular field without specifying the partition key field. A sketch of the repository interface behind these calls follows the examples.
-
-* ```_repo.findByIdAndName(objDoc.getId(),objDoc.getName());```
-* ```_repo.findAllByStatus(objDoc.getStatus());```
-
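-The sketch below illustrates what such a repository interface could look like. It assumes the `User` model above also exposes `name` and `status` properties, and it uses the generic Spring Data `CrudRepository`; the Azure Spring Boot starter may provide its own repository base interface, so treat this as illustrative:
-
-```java
-import java.util.List;
-
-import org.springframework.data.repository.CrudRepository;
-import org.springframework.stereotype.Repository;
-
-// Illustrative repository for the User model shown earlier. Spring Data derives the
-// query implementations from the method names.
-@Repository
-public interface UserRepository extends CrudRepository<User, String> {
-
-    // Lookup by id plus another field; including the partition key keeps this a single-partition query.
-    User findByIdAndName(String id, String name);
-
-    // Query on a regular field without the partition key: a cross-partition query.
-    List<User> findAllByStatus(String status);
-}
-```
-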
-That's it, you can now use your application with Azure Cosmos DB. The complete code sample for the example described in this doc is available in the [CouchbaseToCosmosDB-SpringCosmos](https://github.com/Azure-Samples/couchbaseTocosmosdb/tree/main/SpringCosmos) GitHub repo.
-
-## Couchbase as a document repository & using N1QL queries
-
-N1QL is the way to define queries in Couchbase.
-
-|N1QL Query | Azure Cosmos DB Query|
-|-|-|
-|SELECT META(`TravelDocument`).id AS id, `TravelDocument`.* FROM `TravelDocument` WHERE `_type` = "com.xx.xx.xx.xxx.xxx.xxxx" and country = 'India' and ANY m in Visas SATISFIES m.type == 'Multi-Entry' and m.Country IN ['India', 'Bhutan'] ORDER BY `Validity` DESC LIMIT 25 OFFSET 0 | SELECT c.id, c FROM c JOIN m IN c.Visas WHERE c._type = "com.xx.xx.xx.xxx.xxx.xxxx" and c.country = 'India' and m.type = 'Multi-Entry' and m.Country IN ('India', 'Bhutan') ORDER BY c.Validity DESC OFFSET 0 LIMIT 25 |
-
-You can notice the following changes in your N1QL queries:
-
-* You don't need to use the META keyword or refer to the first-level document. Instead, you create your own reference to the container. In this example, it's "c" (it can be anything). This reference is used as a prefix for all the first-level fields, for example, c.id, c.country, and so on.
-
-* Instead of "ANY", you can now do a join on the subdocument and refer to it with a dedicated alias such as "m". After you create an alias for a subdocument, you must use the alias, for example, m.Country.
-
-* The sequence of OFFSET is different in the Azure Cosmos DB query: first specify OFFSET, and then LIMIT.
-
-We recommend not using the Spring Data SDK if your workload relies mostly on custom-defined queries, because it can add unnecessary client-side overhead while passing the query to Azure Cosmos DB. Instead, use the direct Async Java SDK, which is more efficient in this case.
-
-### Read operation
-
-Use the Async Java SDK with the following steps:
-
-1. Configure the following dependency onto the POM.xml file:
-
- ```java
- <!-- https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb -->
- <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>azure-cosmos</artifactId>
- <version>3.0.0</version>
- </dependency>
- ```
-
-1. Create a connection object for Azure Cosmos DB by using the client builder, as shown in the following example. Make sure you put this declaration into a bean so that the following code is executed only once:
-
- ```java
- ConnectionPolicy cp=new ConnectionPolicy();
- cp.connectionMode(ConnectionMode.DIRECT);
-
- if(client==null)
- client= CosmosClient.builder()
- .endpoint(Host)//(Host, PrimaryKey, dbName, collName).Builder()
- .connectionPolicy(cp)
- .key(PrimaryKey)
- .consistencyLevel(ConsistencyLevel.EVENTUAL)
- .build();
-
- container = client.getDatabase(_dbName).getContainer(_collName);
- ```
-
-1. To execute the query, you need to run the following code snippet:
-
- ```java
- Flux<FeedResponse<CosmosItemProperties>> objFlux= container.queryItems(query, fo);
- ```
-
-With the help of the above method, you can pass multiple queries and execute them without any hassle. If you need to execute one large query that can be split into multiple queries, try the following code snippet instead of the previous one:
-
-```java
-for(SqlQuerySpec query:queries)
-{
-    objFlux = container.queryItems(query, fo);
-    objFlux.publishOn(Schedulers.elastic())
-        .subscribe(feedResponse ->
-        {
-            if(feedResponse.results().size() > 0)
-            {
-                _docs.addAll(feedResponse.results());
-            }
-        },
-        Throwable::printStackTrace, latch::countDown);
-    lstFlux.add(objFlux);
-}
-
-Flux.merge(lstFlux);
-latch.await();
-```
-
-With the previous code, you can run queries in parallel and increase the number of distributed executions to optimize performance. You can also run insert and update operations:
-
-### Insert operation
-
-To insert the document, run the following code:
-
-```java
-Mono<CosmosItemResponse> objMono= container.createItem(doc,ro);
-```
-
-Then subscribe to Mono as:
-
-```java
-CountDownLatch latch=new CountDownLatch(1);
-objMono .subscribeOn(Schedulers.elastic())
- .subscribe(resourceResponse->
- {
- if(resourceResponse.statusCode()!=successStatus)
- {
- throw new RuntimeException(resourceResponse.toString());
- }
- },
- Throwable::printStackTrace,latch::countDown);
-latch.await();
-```
-
-### Upsert operation
-
-The upsert operation requires you to specify the document that needs to be updated. To fetch the complete document, you can use the snippet mentioned under the read operation heading, and then modify the required field(s). The following code snippet upserts the document:
-
-```java
-Mono<CosmosItemResponse> obs= container.upsertItem(doc, ro);
-```
-
-Then subscribe to the Mono. Refer to the Mono subscription snippet in the insert operation.
-
-### Delete operation
-
-The following snippet performs the delete operation:
-
-```java
-CosmosItem objItem= container.getItem(doc.Id, doc.Tenant);
-Mono<CosmosItemResponse> objMono = objItem.delete(ro);
-```
-
-
-Then subscribe to the Mono; refer to the Mono subscription snippet in the insert operation. The complete code sample is available in the [CouchbaseToCosmosDB-AsyncInSpring](https://github.com/Azure-Samples/couchbaseTocosmosdb/tree/main/AsyncInSpring) GitHub repo.
-
-## Couchbase as a key/value pair
-
-This is a simple type of workload in which you can perform lookups instead of queries. Use the following steps for key/value pairs:
-
-1. Consider having "/ID" as the primary key, which makes sure that you can perform the lookup operation directly in the specific partition. Create a collection and specify "/ID" as the partition key.
-
-1. Switch off indexing completely. Because you will execute lookup operations, there is no point in carrying the indexing overhead. To turn off indexing, sign in to the Azure portal and go to your Azure Cosmos DB account. Open the **Data Explorer**, select your **Database** and the **Container**. Open the **Scale & Settings** tab and select the **Indexing Policy**. The current indexing policy looks like the following:
-
- ```json
- {
- "indexingMode": "consistent",
- "automatic": true,
- "includedPaths": [
- {
- "path": "/*"
- }
- ],
- "excludedPaths": [
- {
- "path": "/\"_etag\"/?"
- }
- ]
- }
-    ```
-
- Replace the above indexing policy with the following policy:
-
- ```json
- {
- "indexingMode": "none",
- "automatic": false,
- "includedPaths": [],
- "excludedPaths": []
- }
- ```
-
-1. Use the following code snippet to create the connection object. Place it in a `@Bean` or make it static:
-
- ```java
- ConnectionPolicy cp=new ConnectionPolicy();
- cp.connectionMode(ConnectionMode.DIRECT);
-
- if(client==null)
- client= CosmosClient.builder()
- .endpoint(Host)//(Host, PrimaryKey, dbName, collName).Builder()
- .connectionPolicy(cp)
- .key(PrimaryKey)
- .consistencyLevel(ConsistencyLevel.EVENTUAL)
- .build();
-
- container = client.getDatabase(_dbName).getContainer(_collName);
- ```
-
-Now you can execute the CRUD operations as follows:
-
-### Read operation
-
-To read the item, use the following snippet:
-
-```java
-CosmosItemRequestOptions ro=new CosmosItemRequestOptions();
-ro.partitionKey(new PartitionKey(documentId));
-CountDownLatch latch=new CountDownLatch(1);
-
-var objCosmosItem= container.getItem(documentId, documentId);
-Mono<CosmosItemResponse> objMono = objCosmosItem.read(ro);
-objMono .subscribeOn(Schedulers.elastic())
- .subscribe(resourceResponse->
- {
- if(resourceResponse.item()!=null)
- {
- doc= resourceResponse.properties().toObject(UserModel.class);
- }
- },
- Throwable::printStackTrace,latch::countDown);
-latch.await();
-```
-
-### Insert operation
-
-To insert an item, run the following code:
-
-```java
-Mono<CosmosItemResponse> objMono= container.createItem(doc,ro);
-```
-
-Then subscribe to mono as:
-
-```java
-CountDownLatch latch=new CountDownLatch(1);
-objMono.subscribeOn(Schedulers.elastic())
- .subscribe(resourceResponse->
- {
- if(resourceResponse.statusCode()!=successStatus)
- {
- throw new RuntimeException(resourceResponse.toString());
- }
- },
- Throwable::printStackTrace,latch::countDown);
-latch.await();
-```
-
-### Upsert operation
-
-To update the value of an item, refer to the code snippet below:
-
-```java
-Mono<CosmosItemResponse> obs= container.upsertItem(doc, ro);
-```
-
-Then subscribe to the Mono; refer to the Mono subscription snippet in the insert operation.
-
-### Delete operation
-
-Use the following snippet to execute the delete operation:
-
-```java
-CosmosItem objItem= container.getItem(id, id);
-Mono<CosmosItemResponse> objMono = objItem.delete(ro);
-```
-
-
-Then subscribe to the Mono; refer to the Mono subscription snippet in the insert operation. The complete code sample is available in the [CouchbaseToCosmosDB-AsyncKeyValue](https://github.com/Azure-Samples/couchbaseTocosmosdb/tree/main/AsyncKeyValue) GitHub repo.
-
-## Data Migration
-
-There are two ways to migrate data.
-
-* **Use Azure Data Factory:** This is the recommended method for migrating the data. Configure the source as Couchbase and the sink as Azure Cosmos DB SQL API. For detailed steps, see the Azure [Cosmos DB Data Factory connector](../../data-factory/connector-azure-cosmos-db.md) article.
-
-* **Use the Azure Cosmos DB data import tool:** This option is recommended for migrating smaller amounts of data by using VMs. For detailed steps, see the [Data importer](../import-data.md) article.
-
-## Next Steps
-
-* To do performance testing, see the [Performance and scale testing with Azure Cosmos DB](./performance-testing.md) article.
-* To optimize the code, see the [Performance tips for Azure Cosmos DB](./performance-tips-async-java.md) article.
-* Explore the Java Async V3 SDK in the [SDK reference](https://github.com/Azure/azure-cosmosdb-java/tree/v3) GitHub repo.
cosmos-db Create Cosmosdb Resources Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-cosmosdb-resources-portal.md
- Title: Quickstart - Create Azure Cosmos DB resources from the Azure portal
-description: This quickstart shows how to create an Azure Cosmos database, container, and items by using the Azure portal.
----- Previously updated : 08/26/2021--
-# Quickstart: Create an Azure Cosmos account, database, container, and items from the Azure portal
-
-> [!div class="op_single_selector"]
-> * [Azure portal](create-cosmosdb-resources-portal.md)
-> * [.NET](create-sql-api-dotnet.md)
-> * [Java](create-sql-api-java.md)
-> * [Node.js](create-sql-api-nodejs.md)
-> * [Python](create-sql-api-python.md)
->
-
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
-
-This quickstart demonstrates how to use the Azure portal to create an Azure Cosmos DB [SQL API](../introduction.md) account, create a document database and container, and add data to the container. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
-
-## Prerequisites
-
-An Azure subscription or free Azure Cosmos DB trial account
-- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] --- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)] -
-<a id="create-account"></a>
-## Create an Azure Cosmos DB account
--
-<a id="create-container-database"></a>
-## Add a database and a container
-
-You can use the Data Explorer in the Azure portal to create a database and container.
-
-1. Select **Data Explorer** from the left navigation on your Azure Cosmos DB account page, and then select **New Container**.
-
- You may need to scroll right to see the **Add Container** window.
-
- :::image type="content" source="./media/create-cosmosdb-resources-portal/add-database-container.png" alt-text="The Azure portal Data Explorer, Add Container pane":::
-
-1. In the **Add container** pane, enter the settings for the new container.
-
-    |Setting|Suggested value|Description|
-    |---|---|---|
-    |**Database ID**|ToDoList|Enter *ToDoList* as the name for the new database. Database names must contain from 1 through 255 characters, and they cannot contain `/, \\, #, ?`, or a trailing space. Check the **Share throughput across containers** option; it allows you to share the throughput provisioned on the database across all the containers within the database. This option also helps with cost savings.|
-    |**Database throughput**|Manual|You can provision **Autoscale** or **Manual** throughput. Manual throughput allows you to scale RU/s yourself, whereas autoscale throughput allows the system to scale RU/s based on usage. Select **Manual** for this example.<br><br>Leave the throughput at 400 request units per second (RU/s). If you want to reduce latency, you can scale up the throughput later by estimating the required RU/s with the [capacity calculator](estimate-ru-with-capacity-planner.md).<br><br>**Note**: This setting is not available when creating a new container in a serverless account.|
-    |**Container ID**|Items|Enter *Items* as the name for your new container. Container IDs have the same character requirements as database names.|
-    |**Partition key**| /category| The sample described in this article uses */category* as the partition key.|
-
- Don't add **Unique keys** or turn on **Analytical store** for this example. Unique keys let you add a layer of data integrity to the database by ensuring the uniqueness of one or more values per partition key. For more information, see [Unique keys in Azure Cosmos DB.](../unique-keys.md) [Analytical store](../analytical-store-introduction.md) is used to enable large-scale analytics against operational data without any impact to your transactional workloads.
-
-1. Select **OK**. The Data Explorer displays the new database and the container that you created.
-
-## Add data to your database
-
-Add data to your new database using Data Explorer.
-
-1. In **Data Explorer**, expand the **ToDoList** database, and expand the **Items** container. Next, select **Items**, and then select **New Item**.
-
- :::image type="content" source="./media/create-sql-api-dotnet/azure-cosmosdb-new-document.png" alt-text="Create new documents in Data Explorer in the Azure portal":::
-
-1. Add the following structure to the document on the right side of the **Documents** pane:
-
- ```json
- {
- "id": "1",
- "category": "personal",
- "name": "groceries",
- "description": "Pick up apples and strawberries.",
- "isComplete": false
- }
- ```
-
-1. Select **Save**.
-
- :::image type="content" source="./media/create-sql-api-dotnet/azure-cosmosdb-save-document.png" alt-text="Copy in json data and select Save in Data Explorer in the Azure portal":::
-
-1. Select **New Item** again, and create and save another document with a unique `id`, and any other properties and values you want. Your documents can have any structure, because Azure Cosmos DB doesn't impose any schema on your data.
-
-## Query your data
--
-## Clean up resources
--
-If you wish to delete just the database and use the Azure Cosmos account in the future, you can delete the database by using the following steps:
-
-* Go to your Azure Cosmos account.
-* Open **Data Explorer**, right-click the database that you want to delete, and then select **Delete Database**.
-* Enter the Database ID/database name to confirm the delete operation.
-
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB account, create a database and container using the Data Explorer. You can now import additional data to your Azure Cosmos DB account.
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB](../import-data.md)
cosmos-db Create Sql Api Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-go.md
- Title: 'Quickstart: Build a Go app using Azure Cosmos DB SQL API account'
-description: Gives a Go code sample you can use to connect to and query the Azure Cosmos DB SQL API
---- Previously updated : 3/4/2021---
-# Quickstart: Build a Go application using an Azure Cosmos DB SQL API account
-
-> [!div class="op_single_selector"]
->
-> * [.NET](quickstart-dotnet.md)
-> * [Node.js](create-sql-api-nodejs.md)
-> * [Java](create-sql-api-java.md)
-> * [Spring Data](create-sql-api-spring-data.md)
-> * [Python](create-sql-api-python.md)
-> * [Spark v3](create-sql-api-spark.md)
-> * [Go](create-sql-api-go.md)
->
--
-In this quickstart, you'll build a sample Go application that uses the Azure SDK for Go to manage a Cosmos DB SQL API account. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
-
-Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-
-To learn more about Azure Cosmos DB, go to [Azure Cosmos DB](../introduction.md).
-
-## Prerequisites
-
-- A Cosmos DB Account. Your options are:
-  * Within an Azure active subscription:
-    * [Create an Azure free Account](https://azure.microsoft.com/free) or use your existing subscription
-    * [Visual Studio Monthly Credits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers)
-    * [Azure Cosmos DB Free Tier](../optimize-dev-test.md#azure-cosmos-db-free-tier)
-  * Without an Azure active subscription:
-    * [Try Azure Cosmos DB for free](https://aka.ms/trycosmosdb), a test environment that lasts for 30 days.
-    * [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator)
-- [Go 1.16 or higher](https://golang.org/dl/)
-- [Azure CLI](/cli/azure/install-azure-cli)
-
-## Getting started
-
-For this quickstart, you'll need to create an Azure resource group and a Cosmos DB account.
-
-Run the following commands to create an Azure resource group:
-
-```azurecli
-az group create --name myResourceGroup --location eastus
-```
-
-Next create a Cosmos DB account by running the following command:
-
-```azurecli
-az cosmosdb create --name my-cosmosdb-account --resource-group myResourceGroup
-```
-
-### Install the package
-
-Use the `go get` command to install the [azcosmos](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos) package.
-
-```bash
-go get github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos
-```
-
-## Key concepts
-
-* A `Client` is a connection to an Azure Cosmos DB account.
-* Azure Cosmos DB accounts can have multiple `databases`. A `DatabaseClient` allows you to create, read, and delete databases.
-* A database within an Azure Cosmos account can have multiple `containers`. A `ContainerClient` allows you to create, read, update, and delete containers, and to modify provisioned throughput.
-* Information is stored as items inside containers. The client allows you to create, read, update, and delete items in containers.
-
-## Code examples
-
-**Authenticate the client**
-
-```go
-var endpoint = "<azure_cosmos_uri>"
-var key = "<azure_cosmos_primary_key>"
-
-cred, err := azcosmos.NewKeyCredential(key)
-if err != nil {
- log.Fatal("Failed to create a credential: ", err)
-}
-
-// Create a CosmosDB client
-client, err := azcosmos.NewClientWithKey(endpoint, cred, nil)
-if err != nil {
- log.Fatal("Failed to create cosmos client: ", err)
-}
-
-// Create database client
-databaseClient, err := client.NewDatabase("<databaseName>")
-if err != nil {
- log.Fatal("Failed to create database client:", err)
-}
-
-// Create container client
-containerClient, err := client.NewContainer("<databaseName>", "<containerName>")
-if err != nil {
- log.Fatal("Failed to create a container client:", err)
-}
-```
-
-**Create a Cosmos DB database**
-
-```go
-import (
- "context"
- "log"
- "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
-)
-
-func createDatabase (client *azcosmos.Client, databaseName string) error {
-// databaseName := "adventureworks"
-
- // sets the name of the database
- databaseProperties := azcosmos.DatabaseProperties{ID: databaseName}
-
- // creating the database
- ctx := context.TODO()
- databaseResp, err := client.CreateDatabase(ctx, databaseProperties, nil)
- if err != nil {
- log.Fatal(err)
- }
-    log.Printf("Database [%v] created. ActivityId %s\n", databaseName, databaseResp.ActivityID)
-    return nil
-}
-```
-
-**Create a container**
-
-```go
-import (
- "context"
- "log"
- "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
-)
-
-func createContainer (client *azcosmos.Client, databaseName, containerName, partitionKey string) error {
-// databaseName = "adventureworks"
-// containerName = "customer"
-// partitionKey = "/customerId"
-
- databaseClient, err := client.NewDatabase(databaseName) // returns a struct that represents a database
- if err != nil {
- log.Fatal("Failed to create a database client:", err)
- }
-
- // Setting container properties
- containerProperties := azcosmos.ContainerProperties{
- ID: containerName,
- PartitionKeyDefinition: azcosmos.PartitionKeyDefinition{
- Paths: []string{partitionKey},
- },
- }
-
- // Setting container options
- throughputProperties := azcosmos.NewManualThroughputProperties(400) //defaults to 400 if not set
- options := &azcosmos.CreateContainerOptions{
- ThroughputProperties: &throughputProperties,
- }
-
- ctx := context.TODO()
- containerResponse, err := databaseClient.CreateContainer(ctx, containerProperties, options)
- if err != nil {
- log.Fatal(err)
-
- }
- log.Printf("Container [%v] created. ActivityId %s\n", containerName, containerResponse.ActivityID)
-
- return nil
-}
-```
-
-**Create an item**
-
-```go
-import (
-    "context"
-    "encoding/json"
-    "fmt"
-    "log"
-    "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
-)
-
-func createItem(client *azcosmos.Client, databaseName, containerName, partitionKey string, item any) error {
-// databaseName = "adventureworks"
-// containerName = "customer"
-// partitionKey = "1"
-/*
- item = struct {
- ID string `json:"id"`
- CustomerId string `json:"customerId"`
- Title string
- FirstName string
- LastName string
- EmailAddress string
- PhoneNumber string
- CreationDate string
- }{
- ID: "1",
- CustomerId: "1",
- Title: "Mr",
- FirstName: "Luke",
- LastName: "Hayes",
- EmailAddress: "luke12@adventure-works.com",
- PhoneNumber: "879-555-0197",
- }
-*/
- // Create container client
- containerClient, err := client.NewContainer(databaseName, containerName)
- if err != nil {
- return fmt.Errorf("failed to create a container client: %s", err)
- }
-
- // Specifies the value of the partiton key
- pk := azcosmos.NewPartitionKeyString(partitionKey)
-
- b, err := json.Marshal(item)
- if err != nil {
- return err
- }
- // setting item options upon creating ie. consistency level
- itemOptions := azcosmos.ItemOptions{
- ConsistencyLevel: azcosmos.ConsistencyLevelSession.ToPtr(),
- }
- ctx := context.TODO()
- itemResponse, err := containerClient.CreateItem(ctx, pk, b, &itemOptions)
-
- if err != nil {
- return err
- }
- log.Printf("Status %d. Item %v created. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
-
- return nil
-}
-```
-
-**Read an item**
-
-```go
-import (
-    "context"
-    "encoding/json"
-    "fmt"
-    "log"
-    "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
-)
-
-func readItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
-// databaseName = "adventureworks"
-// containerName = "customer"
-// partitionKey = "1"
-// itemId = "1"
-
- // Create container client
- containerClient, err := client.NewContainer(databaseName, containerName)
- if err != nil {
- return fmt.Errorf("Failed to create a container client: %s", err)
- }
-
- // Specifies the value of the partiton key
- pk := azcosmos.NewPartitionKeyString(partitionKey)
-
- // Read an item
- ctx := context.TODO()
- itemResponse, err := containerClient.ReadItem(ctx, pk, itemId, nil)
- if err != nil {
- return err
- }
-
- itemResponseBody := struct {
- ID string `json:"id"`
- CustomerId string `json:"customerId"`
- Title string
- FirstName string
- LastName string
- EmailAddress string
- PhoneNumber string
- CreationDate string
- }{}
-
- err = json.Unmarshal(itemResponse.Value, &itemResponseBody)
- if err != nil {
- return err
- }
-
- b, err := json.MarshalIndent(itemResponseBody, "", " ")
- if err != nil {
- return err
- }
- fmt.Printf("Read item with customerId %s\n", itemResponseBody.CustomerId)
- fmt.Printf("%s\n", b)
-
- log.Printf("Status %d. Item %v read. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
-
- return nil
-}
-```
-
-**Delete an item**
-
-```go
-import (
-    "context"
-    "fmt"
-    "log"
-    "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
-)
-
-func deleteItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
-// databaseName = "adventureworks"
-// containerName = "customer"
-// partitionKey = "1"
-// itemId = "1"
-
- // Create container client
- containerClient, err := client.NewContainer(databaseName, containerName)
- if err != nil {
- return fmt.Errorf("Failed to create a container client: %s", err)
- }
- // Specifies the value of the partiton key
- pk := azcosmos.NewPartitionKeyString(partitionKey)
-
- // Delete an item
- ctx := context.TODO()
- res, err := containerClient.DeleteItem(ctx, pk, itemId, nil)
- if err != nil {
- return err
- }
-
- log.Printf("Status %d. Item %v deleted. ActivityId %s. Consuming %v Request Units.\n", res.RawResponse.StatusCode, pk, res.ActivityID, res.RequestCharge)
-
- return nil
-}
-```
-
-## Run the code
-
-To authenticate, you need to pass the Azure Cosmos account credentials to the application.
-
-Get your Azure Cosmos account credentials by following these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. Navigate to your Azure Cosmos account.
-
-1. Open the **Keys** pane and copy the **URI** and **PRIMARY KEY** of your account. You'll add the URI and key values to environment variables in the next step.
-
-After you've copied the **URI** and **PRIMARY KEY** of your account, save them to a new environment variable on the local machine running the application.
-
-Use the values copied from the Azure portal to set the following environment variables:
-
-# [Bash](#tab/bash)
-
-```bash
-export AZURE_COSMOS_ENDPOINT=<Your_AZURE_COSMOS_URI>
-export AZURE_COSMOS_KEY=<Your_COSMOS_PRIMARY_KEY>
-```
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-$env:AZURE_COSMOS_ENDPOINT=<Your_AZURE_COSMOS_URI>
-$env:AZURE_COSMOS_KEY=<Your_COSMOS_PRIMARY_KEY>
-```
---
-Create a new Go module by running the following command:
-
-```bash
-go mod init azcosmos
-```
-
-```go
-
-package main
-
-import (
- "context"
- "encoding/json"
- "errors"
- "fmt"
- "log"
- "os"
-
- "github.com/Azure/azure-sdk-for-go/sdk/azcore"
- "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
-)
-
-func main() {
- endpoint := os.Getenv("AZURE_COSMOS_ENDPOINT")
- if endpoint == "" {
- log.Fatal("AZURE_COSMOS_ENDPOINT could not be found")
- }
-
- key := os.Getenv("AZURE_COSMOS_KEY")
- if key == "" {
- log.Fatal("AZURE_COSMOS_KEY could not be found")
- }
-
- var databaseName = "adventureworks"
- var containerName = "customer"
- var partitionKey = "/customerId"
-
- item := struct {
- ID string `json:"id"`
- CustomerId string `json:"customerId"`
- Title string
- FirstName string
- LastName string
- EmailAddress string
- PhoneNumber string
- CreationDate string
- }{
- ID: "1",
- CustomerId: "1",
- Title: "Mr",
- FirstName: "Luke",
- LastName: "Hayes",
- EmailAddress: "luke12@adventure-works.com",
- PhoneNumber: "879-555-0197",
- }
-
- cred, err := azcosmos.NewKeyCredential(key)
- if err != nil {
- log.Fatal("Failed to create a credential: ", err)
- }
-
- // Create a CosmosDB client
- client, err := azcosmos.NewClientWithKey(endpoint, cred, nil)
- if err != nil {
- log.Fatal("Failed to create cosmos db client: ", err)
- }
-
- err = createDatabase(client, databaseName)
- if err != nil {
- log.Printf("createDatabase failed: %s\n", err)
- }
-
- err = createContainer(client, databaseName, containerName, partitionKey)
- if err != nil {
- log.Printf("createContainer failed: %s\n", err)
- }
-
- err = createItem(client, databaseName, containerName, item.CustomerId, item)
- if err != nil {
- log.Printf("createItem failed: %s\n", err)
- }
-
- err = readItem(client, databaseName, containerName, item.CustomerId, item.ID)
- if err != nil {
- log.Printf("readItem failed: %s\n", err)
- }
-
- err = deleteItem(client, databaseName, containerName, item.CustomerId, item.ID)
- if err != nil {
- log.Printf("deleteItem failed: %s\n", err)
- }
-}
-
-func createDatabase(client *azcosmos.Client, databaseName string) error {
-// databaseName := "adventureworks"
-
- databaseProperties := azcosmos.DatabaseProperties{ID: databaseName}
-
- // This is a helper function that swallows 409 errors
- errorIs409 := func(err error) bool {
- var responseErr *azcore.ResponseError
- return err != nil && errors.As(err, &responseErr) && responseErr.StatusCode == 409
- }
- ctx := context.TODO()
- databaseResp, err := client.CreateDatabase(ctx, databaseProperties, nil)
-
- switch {
- case errorIs409(err):
- log.Printf("Database [%s] already exists\n", databaseName)
- case err != nil:
- return err
- default:
- log.Printf("Database [%v] created. ActivityId %s\n", databaseName, databaseResp.ActivityID)
- }
- return nil
-}
-
-func createContainer(client *azcosmos.Client, databaseName, containerName, partitionKey string) error {
-// databaseName = adventureworks
-// containerName = customer
-// partitionKey = "/customerId"
-
- databaseClient, err := client.NewDatabase(databaseName)
- if err != nil {
- return err
- }
-
- // creating a container
- containerProperties := azcosmos.ContainerProperties{
- ID: containerName,
- PartitionKeyDefinition: azcosmos.PartitionKeyDefinition{
- Paths: []string{partitionKey},
- },
- }
-
- // this is a helper function that swallows 409 errors
- errorIs409 := func(err error) bool {
- var responseErr *azcore.ResponseError
- return err != nil && errors.As(err, &responseErr) && responseErr.StatusCode == 409
- }
-
- // setting options upon container creation
- throughputProperties := azcosmos.NewManualThroughputProperties(400) //defaults to 400 if not set
- options := &azcosmos.CreateContainerOptions{
- ThroughputProperties: &throughputProperties,
- }
- ctx := context.TODO()
- containerResponse, err := databaseClient.CreateContainer(ctx, containerProperties, options)
-
- switch {
- case errorIs409(err):
- log.Printf("Container [%s] already exists\n", containerName)
- case err != nil:
- return err
- default:
- log.Printf("Container [%s] created. ActivityId %s\n", containerName, containerResponse.ActivityID)
- }
- return nil
-}
-
-func createItem(client *azcosmos.Client, databaseName, containerName, partitionKey string, item any) error {
-// databaseName = "adventureworks"
-// containerName = "customer"
-// partitionKey = "1"
-
-/* item = struct {
- ID string `json:"id"`
- CustomerId string `json:"customerId"`
- Title string
- FirstName string
- LastName string
- EmailAddress string
- PhoneNumber string
- CreationDate string
- }{
- ID: "1",
- CustomerId: "1",
- Title: "Mr",
- FirstName: "Luke",
- LastName: "Hayes",
- EmailAddress: "luke12@adventure-works.com",
- PhoneNumber: "879-555-0197",
- CreationDate: "2014-02-25T00:00:00",
- }
-*/
- // create container client
- containerClient, err := client.NewContainer(databaseName, containerName)
- if err != nil {
- return fmt.Errorf("failed to create a container client: %s", err)
- }
-
- // specifies the value of the partiton key
- pk := azcosmos.NewPartitionKeyString(partitionKey)
-
- b, err := json.Marshal(item)
- if err != nil {
- return err
- }
- // setting the item options upon creating ie. consistency level
- itemOptions := azcosmos.ItemOptions{
- ConsistencyLevel: azcosmos.ConsistencyLevelSession.ToPtr(),
- }
-
- // this is a helper function that swallows 409 errors
- errorIs409 := func(err error) bool {
- var responseErr *azcore.ResponseError
- return err != nil && errors.As(err, &responseErr) && responseErr.StatusCode == 409
- }
-
- ctx := context.TODO()
- itemResponse, err := containerClient.CreateItem(ctx, pk, b, &itemOptions)
-
- switch {
- case errorIs409(err):
- log.Printf("Item with partitionkey value %s already exists\n", pk)
- case err != nil:
- return err
- default:
- log.Printf("Status %d. Item %v created. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
- }
-
- return nil
-}
-
-func readItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
-// databaseName = "adventureworks"
-// containerName = "customer"
-// partitionKey = "1"
-// itemId = "1"
-
- // Create container client
- containerClient, err := client.NewContainer(databaseName, containerName)
- if err != nil {
- return fmt.Errorf("failed to create a container client: %s", err)
- }
-
- // Specifies the value of the partiton key
- pk := azcosmos.NewPartitionKeyString(partitionKey)
-
- // Read an item
- ctx := context.TODO()
- itemResponse, err := containerClient.ReadItem(ctx, pk, itemId, nil)
- if err != nil {
- return err
- }
-
- itemResponseBody := struct {
- ID string `json:"id"`
- CustomerId string `json:"customerId"`
- Title string
- FirstName string
- LastName string
- EmailAddress string
- PhoneNumber string
- CreationDate string
- }{}
-
- err = json.Unmarshal(itemResponse.Value, &itemResponseBody)
- if err != nil {
- return err
- }
-
- b, err := json.MarshalIndent(itemResponseBody, "", " ")
- if err != nil {
- return err
- }
- fmt.Printf("Read item with customerId %s\n", itemResponseBody.CustomerId)
- fmt.Printf("%s\n", b)
-
- log.Printf("Status %d. Item %v read. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
-
- return nil
-}
-
-func deleteItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
-// databaseName = "adventureworks"
-// containerName = "customer"
-// partitionKey = "1"
-// itemId = "1"
-
- // Create container client
- containerClient, err := client.NewContainer(databaseName, containerName)
- if err != nil {
- return fmt.Errorf("failed to create a container client:: %s", err)
- }
- // Specifies the value of the partiton key
- pk := azcosmos.NewPartitionKeyString(partitionKey)
-
- // Delete an item
- ctx := context.TODO()
-
- res, err := containerClient.DeleteItem(ctx, pk, itemId, nil)
- if err != nil {
- return err
- }
-
- log.Printf("Status %d. Item %v deleted. ActivityId %s. Consuming %v Request Units.\n", res.RawResponse.StatusCode, pk, res.ActivityID, res.RequestCharge)
-
- return nil
-}
-
-```
-
-Create a new file named `main.go` and copy the code from the sample section above into it.
-
-Run the following command to execute the app:
-
-```bash
-go run main.go
-```
-
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you've learned how to create an Azure Cosmos DB account, create a database, container, and an item entry. Now import more data to your Azure Cosmos DB account.
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB for the SQL API](../import-data.md)
cosmos-db Create Sql Api Java Changefeed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-java-changefeed.md
- Title: Create an end-to-end Azure Cosmos DB Java SDK v4 application sample by using Change Feed
-description: This guide walks you through a simple Java SQL API application which inserts documents into an Azure Cosmos DB container, while maintaining a materialized view of the container using Change Feed.
---- Previously updated : 06/11/2020-----
-# How to create a Java application that uses Azure Cosmos DB SQL API and change feed processor
-
-This how-to guide walks you through a simple Java application which uses the Azure Cosmos DB SQL API to insert documents into an Azure Cosmos DB container, while maintaining a materialized view of the container using Change Feed and Change Feed Processor. The Java application communicates with the Azure Cosmos DB SQL API using Azure Cosmos DB Java SDK v4.
-
-> [!IMPORTANT]
-> This tutorial is for Azure Cosmos DB Java SDK v4 only. Please view the Azure Cosmos DB Java SDK v4 [Release notes](sql-api-sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4-sql.md), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4-sql.md) for more information. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
->
-
-## Prerequisites
-
-* The URI and key for your Azure Cosmos DB account
-
-* Maven
-
-* Java 8
-
-## Background
-
-The Azure Cosmos DB change feed provides an event-driven interface to trigger actions in response to document insertion, and it has many uses. For example, in applications that are both read and write heavy, a chief use of the change feed is to create a real-time **materialized view** of a container as it ingests documents. The materialized view container holds the same data, but partitioned for efficient reads, making the application both read and write efficient.
-
-The work of managing change feed events is largely taken care of by the change feed processor library built into the SDK. This library is powerful enough to distribute change feed events among multiple workers, if desired. All you have to do is provide the library a callback.
-
-This simple example demonstrates the change feed processor library with a single worker creating and deleting documents in a materialized view.
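-
-A condensed sketch of that pattern with Azure Cosmos DB Java SDK v4 looks like the following. It assumes the feed and lease containers already exist; the full, authoritative version is in the sample you clone in the next section:
-
-```java
-import java.util.List;
-
-import com.azure.cosmos.ChangeFeedProcessor;
-import com.azure.cosmos.ChangeFeedProcessorBuilder;
-import com.azure.cosmos.CosmosAsyncContainer;
-import com.fasterxml.jackson.databind.JsonNode;
-
-public final class ChangeFeedSketch {
-
-    // Builds a change feed processor that watches feedContainer and records its progress
-    // (leases) in leaseContainer. The lambda passed to handleChanges is the callback that
-    // receives batches of changed documents.
-    public static ChangeFeedProcessor build(CosmosAsyncContainer feedContainer,
-                                            CosmosAsyncContainer leaseContainer) {
-        return new ChangeFeedProcessorBuilder()
-                .hostName("SampleHost_1")
-                .feedContainer(feedContainer)
-                .leaseContainer(leaseContainer)
-                .handleChanges((List<JsonNode> docs) -> {
-                    for (JsonNode doc : docs) {
-                        // Mirror each change into the materialized view container here.
-                        System.out.println("Change received: " + doc);
-                    }
-                })
-                .buildChangeFeedProcessor();
-    }
-}
-```
-
-Calling `start()` on the returned processor (and subscribing to the resulting `Mono`) begins listening for changes, which is what the sample does in the walkthrough below.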
-
-## Setup
-
-If you have not already done so, clone the app example repo:
-
-```bash
-git clone https://github.com/Azure-Samples/azure-cosmos-java-sql-app-example.git
-```
-
-Open a terminal in the repo directory. Build the app by running
-
-```bash
-mvn clean package
-```
-
-## Walkthrough
-
-1. As a first check, you should have an Azure Cosmos DB account. Open the **Azure portal** in your browser, go to your Azure Cosmos DB account, and in the left pane navigate to **Data Explorer**.
-
- :::image type="content" source="media/create-sql-api-java-changefeed/cosmos_account_empty.JPG" alt-text="Azure Cosmos DB account":::
-
-1. Run the app in the terminal using the following command:
-
- ```bash
- mvn exec:java -Dexec.mainClass="com.azure.cosmos.workedappexample.SampleGroceryStore" -DACCOUNT_HOST="your-account-uri" -DACCOUNT_KEY="your-account-key" -Dexec.cleanupDaemonThreads=false
- ```
-
-1. Press enter when you see
-
- ```bash
- Press enter to create the grocery store inventory system...
- ```
-
- then return to the Azure portal Data Explorer in your browser. You will see a database **GroceryStoreDatabase** has been added with three empty containers:
-
- * **InventoryContainer** - The inventory record for our example grocery store, partitioned on item ```id``` which is a UUID.
- * **InventoryContainer-pktype** - A materialized view of the inventory record, optimized for queries over item ```type```
- * **InventoryContainer-leases** - A leases container is always needed for change feed; leases track the app's progress in reading the change feed.
-
- :::image type="content" source="media/create-sql-api-java-changefeed/cosmos_account_resources_lease_empty.JPG" alt-text="Empty containers":::
-
-1. In the terminal, you should now see a prompt
-
- ```bash
- Press enter to start creating the materialized view...
- ```
-
- Press enter. Now the following block of code will execute and initialize the change feed processor on another thread:
-
- ### <a id="java4-connection-policy-async"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
- [!code-java[](~/azure-cosmos-java-sql-app-example/src/main/java/com/azure/cosmos/workedappexample/SampleGroceryStore.java?name=InitializeCFP)]
-
- ```"SampleHost_1"``` is the name of the Change Feed processor worker. ```changeFeedProcessorInstance.start()``` is what actually starts the Change Feed processor.
-
- Return to the Azure portal Data Explorer in your browser. Under the **InventoryContainer-leases** container, click **items** to see its contents. You will see that Change Feed Processor has populated the lease container, i.e. the processor has assigned the ```SampleHost_1``` worker a lease on some partitions of the **InventoryContainer**.
-
- :::image type="content" source="media/create-sql-api-java-changefeed/cosmos_leases.JPG" alt-text="Leases":::
-
-1. Press enter again in the terminal. This will trigger 10 documents to be inserted into **InventoryContainer**. Each document insertion appears in the change feed as JSON; the following callback code handles these events by mirroring the JSON documents into a materialized view:
-
- ### <a id="java4-connection-policy-async"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
- [!code-java[](~/azure-cosmos-java-sql-app-example/src/main/java/com/azure/cosmos/workedappexample/SampleGroceryStore.java?name=CFPCallback)]
-
-1. Allow the code to run for 5-10 seconds. Then return to the Azure portal Data Explorer and navigate to **InventoryContainer > items**. You should see that items are being inserted into the inventory container; note the partition key (```id```).
-
- :::image type="content" source="media/create-sql-api-java-changefeed/cosmos_items.JPG" alt-text="Feed container":::
-
-1. Now, in Data Explorer navigate to **InventoryContainer-pktype > items**. This is the materialized view - the items in this container mirror **InventoryContainer** because they were inserted programmatically by change feed. Note the partition key (```type```). So this materialized view is optimized for queries filtering over ```type```, which would be inefficient on **InventoryContainer** because it is partitioned on ```id```.
-
- :::image type="content" source="media/create-sql-api-java-changefeed/cosmos_materializedview2.JPG" alt-text="Screenshot shows the Data Explorer page for an Azure Cosmos D B account with Items selected.":::
-
-1. We're going to delete a document from both **InventoryContainer** and **InventoryContainer-pktype** using just a single ```upsertItem()``` call. First, take a look at Azure portal Data Explorer. We'll delete the document for which ```/type == "plums"```; it is encircled in red below
-
- :::image type="content" source="media/create-sql-api-java-changefeed/cosmos_materializedview-emph-todelete.JPG" alt-text="Screenshot shows the Data Explorer page for an Azure Cosmos D B account with a particular item I D selected.":::
-
-    Hit enter again to call the function ```deleteDocument()``` in the example code. This function, shown below, upserts a new version of the document with ```/ttl == 5```, which sets the document's Time-To-Live (TTL) to 5 seconds.
-
- ### <a id="java4-connection-policy-async"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
- [!code-java[](~/azure-cosmos-java-sql-app-example/src/main/java/com/azure/cosmos/workedappexample/SampleGroceryStore.java?name=DeleteWithTTL)]
-
- The change feed ```feedPollDelay``` is set to 100ms; therefore, change feed responds to this update almost instantly and calls ```updateInventoryTypeMaterializedView()``` shown above. That last function call will upsert the new document with TTL of 5sec into **InventoryContainer-pktype**.
-
- The effect is that after about 5 seconds, the document will expire and be deleted from both containers.
-
- This procedure is necessary because change feed only issues events on item insertion or update, not on item deletion.
-
-1. Press enter one more time to close the program and clean up its resources.
cosmos-db Create Sql Api Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-java.md
- Title: Quickstart - Use Java to create a document database using Azure Cosmos DB
-description: This quickstart presents a Java code sample you can use to connect to and query the Azure Cosmos DB SQL API
---- Previously updated : 08/26/2021-----
-# Quickstart: Build a Java app to manage Azure Cosmos DB SQL API data
-
-> [!div class="op_single_selector"]
->
-> * [.NET](quickstart-dotnet.md)
-> * [Node.js](create-sql-api-nodejs.md)
-> * [Java](create-sql-api-java.md)
-> * [Spring Data](create-sql-api-spring-data.md)
-> * [Python](create-sql-api-python.md)
-> * [Spark v3](create-sql-api-spark.md)
-> * [Go](create-sql-api-go.md)
->
--
-In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Java app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account by using the Azure portal (or, without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb)). You then create a Java app by using the SQL Java SDK, and add resources to your Cosmos DB account by using the Java application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-
-> [!IMPORTANT]
-> This quickstart is for Azure Cosmos DB Java SDK v4 only. Please view the Azure Cosmos DB Java SDK v4 [Release notes](sql-api-sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4-sql.md), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4-sql.md) for more information. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
->
-
-## Prerequisites
-
-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://aka.ms/trycosmosdb) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
-- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.
-- A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven.
-- [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
-
-## Introductory notes
-
-*The structure of a Cosmos DB account.* Irrespective of API or programming language, a Cosmos DB *account* contains zero or more *databases*, a *database* (DB) contains zero or more *containers*, and a *container* contains zero or more items, as shown in the diagram below:
--
-You may read more about databases, containers and items [here.](../account-databases-containers-items.md) A few important properties are defined at the level of the container, among them *provisioned throughput* and *partition key*.
-
-The provisioned throughput is measured in Request Units (*RUs*) which have a monetary price and are a substantial determining factor in the operating cost of the account. Provisioned throughput can be selected at per-container granularity or per-database granularity, however container-level throughput specification is typically preferred. You may read more about throughput provisioning [here.](../set-throughput.md)
-
-As items are inserted into a Cosmos DB container, the database grows horizontally by adding more storage and compute to handle requests. Storage and compute capacity are added in discrete units known as *partitions*, and you must choose one field in your documents to be the partition key which maps each document to a partition. The way partitions are managed is that each partition is assigned a roughly equal slice out of the range of partition key values; therefore you are advised to choose a partition key which is relatively random or evenly-distributed. Otherwise, some partitions will see substantially more requests (*hot partition*) while other partitions see substantially fewer requests (*cold partition*), and this is to be avoided. You may learn more about partitioning [here](../partitioning-overview.md).
-
-## Create a database account
-
-Before you can create a document database, you need to create a SQL API account with Azure Cosmos DB.
--
-## Add a container
--
-<a id="add-sample-data"></a>
-## Add sample data
--
-## Query your data
--
-## Clone the sample application
-
-Now let's switch to working with code. Let's clone a SQL API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
-
-Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
-
-```bash
-git clone https://github.com/Azure-Samples/azure-cosmos-java-getting-started.git
-```
-
-## Review the code
-
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Run the app](#run-the-app).
--
-# [Sync API](#tab/sync)
-
-### Managing database resources using the synchronous (sync) API
-
-* `CosmosClient` initialization. The `CosmosClient` provides client-side logical representation for the Azure Cosmos database service. This client is used to configure and execute requests against the service.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateSyncClient)]
-
-* `CosmosDatabase` creation.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateDatabaseIfNotExists)]
-
-* `CosmosContainer` creation.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateContainerIfNotExists)]
-
-* Item creation by using the `createItem` method.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateItem)]
-
-* Point reads are performed by using the `readItem` method.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=ReadItem)]
-
-* SQL queries over JSON are performed using the `queryItems` method.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=QueryItems)]
-
-# [Async API](#tab/async)
-
-### Managing database resources using the asynchronous (async) API
-
-* Async API calls return immediately, without waiting for a response from the server. In light of this, the following code snippets show proper design patterns for accomplishing all of the preceding management tasks using async API.
-
-* `CosmosAsyncClient` initialization. The `CosmosAsyncClient` provides client-side logical representation for the Azure Cosmos database service. This client is used to configure and execute asynchronous requests against the service.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=CreateAsyncClient)]
-
-* `CosmosAsyncDatabase` creation.
-
-    [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=CreateDatabaseIfNotExists)]
-
-* `CosmosAsyncContainer` creation.
-
-    [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=CreateContainerIfNotExists)]
-
-* As with the sync API, item creation is accomplished using the `createItem` method. This example shows how to efficiently issue numerous async `createItem` requests by subscribing to a Reactive Stream which issues the requests and prints notifications. Since this simple example runs to completion and terminates, `CountDownLatch` instances are used to ensure the program does not terminate during item creation. **The proper asynchronous programming practice is not to block on async calls - in realistic use-cases requests are generated from a main() loop that executes indefinitely, eliminating the need to latch on async calls.**
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=CreateItem)]
-
-* As with the sync API, point reads are performed by using the `readItem` method.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=ReadItem)]
-
-* As with the sync API, SQL queries over JSON are performed using the `queryItems` method.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=QueryItems)]
---
-## Run the app
-
-Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database.
-
-1. In the git terminal window, `cd` to the sample code folder.
-
- ```bash
- cd azure-cosmos-java-getting-started
- ```
-
-2. In the git terminal window, use the following command to install the required Java packages.
-
- ```bash
- mvn package
- ```
-
-3. In the git terminal window, use the following command to start the Java application. Replace SYNCASYNCMODE with `sync` or `async`, depending on which sample code you would like to run. Replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from the portal.
-
- ```bash
- mvn exec:java@SYNCASYNCMODE -DACCOUNT_HOST=YOUR_COSMOS_DB_HOSTNAME -DACCOUNT_KEY=YOUR_COSMOS_DB_MASTER_KEY
-
- ```
-
-    The terminal window displays a notification that the `AzureSampleFamilyDB` database was created.
-
-4. The app creates a database named `AzureSampleFamilyDB`.
-5. The app creates a container named `FamilyContainer`.
-6. The app performs point reads by using object IDs and the partition key value (which is `lastName` in our sample).
-7. The app queries items to retrieve all families whose last name is one of 'Andersen', 'Wakefield', or 'Johnson'.
-
-8. The app doesn't delete the created resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account so that you don't incur charges.
-
-## Review SLAs in the Azure portal
--
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you've learned how to create an Azure Cosmos DB SQL API account, create a document database and container using the Data Explorer, and run a Java app to do the same thing programmatically. You can now import additional data into your Azure Cosmos DB account.
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB](../import-data.md)
cosmos-db Create Sql Api Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-nodejs.md
- Title: Quickstart- Use Node.js to query from Azure Cosmos DB SQL API account
-description: How to use Node.js to create an app that connects to Azure Cosmos DB SQL API account and queries data.
---- Previously updated : 09/21/2022-----
-# Quickstart: Use Node.js to connect and query data from Azure Cosmos DB SQL API account
-
-> [!div class="op_single_selector"]
->
-> * [.NET](quickstart-dotnet.md)
-> * [Node.js](create-sql-api-nodejs.md)
-> * [Java](create-sql-api-java.md)
-> * [Spring Data](create-sql-api-spring-data.md)
-> * [Python](create-sql-api-python.md)
-> * [Spark v3](create-sql-api-spark.md)
-> * [Go](create-sql-api-go.md)
->
-
-Get started with the Azure Cosmos DB client library for JavaScript to create databases, containers, and items within your account. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). Follow these steps to install the package and try out example code for basic tasks.
-
-> [!NOTE]
-> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-sql-api-javascript-samples) are available on GitHub as a Node.js project.
-
-## Prerequisites
-
-* In a terminal or command window, run ``node --version`` to check that the Node.js version is one of the current long term support (LTS) versions.
-* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
-
-## Setting up
-
-This section walks you through creating an Azure Cosmos account and setting up a project that uses Azure Cosmos DB SQL API client library for JavaScript to manage resources.
-
-### Create an Azure Cosmos DB account
--
-### Configure environment variables
--
-### Create a new JavaScript project
-
-1. Create a new Node.js application in an empty folder using your preferred terminal.
-
- ```bash
- npm init -y
- ```
-
-2. Edit the `package.json` file to use ES6 modules by adding the `"type": "module",` entry. This allows your code to use modern async/await syntax.
-
- :::code language="javascript" source="~/cosmos-db-sql-api-javascript-samples/001-quickstart/package.json" highlight="6":::
-
-### Install the package
--
-1. Add the [@azure/cosmos](https://www.npmjs.com/package/@azure/cosmos) npm package to the Node.js project.
-
- ```bash
- npm install @azure/cosmos
- ```
-
-
-1. Add the [dotenv](https://www.npmjs.com/package/dotenv) npm package to read environment variables from a `.env` file.
-
- ```bash
- npm install dotenv
- ```
-
-### Create local development environment files
-
-1. Create a `.gitignore` file and add the following values to ignore your environment file and your *node_modules* folder. This ensures that secrets and locally installed packages aren't checked into source control.
-
- ```text
- .env
- node_modules
- ```
-
-1. Create a `.env` file with the following variables:
-
- ```text
- COSMOS_ENDPOINT=
- COSMOS_KEY=
- ```
-
-### Create a code file
-
-Create an `index.js` file and add the following boilerplate code to read environment variables:
--
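-
-A minimal sketch of that boilerplate (the published sample may differ slightly) loads the `.env` file with the `dotenv` package installed earlier:
-
-```javascript
-// index.js
-import * as dotenv from "dotenv";
-
-// Load COSMOS_ENDPOINT and COSMOS_KEY from the .env file into process.env.
-dotenv.config();
-```
-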
-### Add dependency to client library
-
-Add the following code at the end of the `index.js` file to include the required dependency to programmatically access Cosmos DB.
--
-### Add environment variables to code file
-
-Add the following code at the end of the `index.js` file to include the required environment variables. The endpoint and key were found at the end of the [account creation steps](#create-an-azure-cosmos-db-account).
--
-### Add variables for names
-
-Add the following variables to manage unique database and container names as well as the [partition key (pk)](/azure/cosmos-db/partitioning-overview).
--
-In this example, we append a timestamp to the database and container names in case you run this sample code more than once.
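-
-A sketch of those variables might look like the following. The `adventureworks` and `products` base names match the data set described later, while the exact suffix format used by the published sample may differ:
-
-```javascript
-// Unique-ish names so repeated runs don't collide with earlier resources.
-const timeStamp = Date.now();
-const databaseName = `adventureworks-${timeStamp}`;
-const containerName = `products-${timeStamp}`;
-
-// Path of the property used as the logical partition key.
-const partitionKeyPath = ["/categoryName"];
-```
-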
-
-## Object model
---
-You'll use the following JavaScript classes to interact with these resources:
-
-* [``CosmosClient``](/javascript/api/@azure/cosmos/cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
-* [``Database``](/javascript/api/@azure/cosmos/database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
-* [``Container``](/javascript/api/@azure/cosmos/container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
-* [``SqlQuerySpec``](/javascript/api/@azure/cosmos/sqlqueryspec) - This interface represents a SQL query and any query parameters.
-* [``QueryIterator<>``](/javascript/api/@azure/cosmos/queryiterator) - This class represents an iterator that can track the current page of results and get a new page of results.
-* [``FeedResponse<>``](/javascript/api/@azure/cosmos/feedresponse) - This class represents a single page of responses from the iterator.
-
-## Code examples
-
-* [Authenticate the client](#authenticate-the-client)
-* [Create a database](#create-a-database)
-* [Create a container](#create-a-container)
-* [Create an item](#create-an-item)
-* [Get an item](#get-an-item)
-* [Query items](#query-items)
-
-The sample code described in this article creates a database named ``adventureworks`` with a container named ``products``. The ``products`` container is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
-
-For this sample code, the container will use the category as a logical partition key.
-
-### Authenticate the client
-
-In the `index.js`, add the following code to use the resource **endpoint** and **key** to authenticate to Cosmos DB. Define a new instance of the [``CosmosClient``](/javascript/api/@azure/cosmos/cosmosclient) class.
---
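-
-The sample's exact code isn't reproduced here, but a minimal sketch of this step looks like the following (it assumes the `.env` variables created earlier):
-
-```javascript
-import { CosmosClient } from "@azure/cosmos";
-
-// Values loaded from .env by dotenv in the boilerplate above.
-const endpoint = process.env.COSMOS_ENDPOINT;
-const key = process.env.COSMOS_KEY;
-
-// Client-side logical representation of the Azure Cosmos DB account.
-const client = new CosmosClient({ endpoint, key });
-```
-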
-### Create a database
-
-Add the following code to use the [``CosmosClient.databases.createIfNotExists``](/javascript/api/@azure/cosmos/databases#@azure-cosmos-databases-createifnotexists) method to create a new database if it doesn't already exist. This method returns a reference to the existing or newly created database.
--
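-
-A hedged sketch of this step, using the `databaseName` variable defined earlier:
-
-```javascript
-// Returns a reference to the existing or newly created database.
-const { database } = await client.databases.createIfNotExists({ id: databaseName });
-console.log(`${database.id} database ready`);
-```
-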
-### Create a container
-
-Add the following code to create a container with the [``Database.containers.createIfNotExists``](/javascript/api/@azure/cosmos/containers#@azure-cosmos-containers-createifnotexists) method. The method returns a reference to the container.
--
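-
-A minimal sketch of this step, partitioned on `/categoryName` as described earlier:
-
-```javascript
-// Returns a reference to the existing or newly created container.
-const { container } = await database.containers.createIfNotExists({
-  id: containerName,
-  partitionKey: { paths: ["/categoryName"] },
-});
-console.log(`${container.id} container ready`);
-```
-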
-### Create an item
-
-Add the following code to provide your data set. Each _product_ has a unique ID, name, category name (used as partition key) and other fields.
--
-Create a few items in the container by calling [``Container.Items.create``](/javascript/api/@azure/cosmos/items#@azure-cosmos-items-create) in a loop.
--
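-
-A sketch of the data set and the creation loop follows. The product IDs and field values are illustrative of the shape the article describes, not the sample's exact data:
-
-```javascript
-const products = [
-  {
-    id: "68719518391",
-    name: "Touring-1000 Blue, 50",
-    categoryName: "Bikes, Touring Bikes",
-    quantity: 12,
-    sale: false,
-  },
-  {
-    id: "68719518371",
-    name: "Mountain-200 Black, 42",
-    categoryName: "Bikes, Mountain Bikes",
-    quantity: 5,
-    sale: true,
-  },
-];
-
-for (const product of products) {
-  // create fails if an item with the same id already exists in the partition.
-  const { resource } = await container.items.create(product);
-  console.log(`'${resource.name}' inserted`);
-}
-```
-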
-### Get an item
-
-In Azure Cosmos DB, you can perform a point read operation by using both the unique identifier (``id``) and partition key fields. In the SDK, call [``Container.item().read``](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-read) passing in both values to return an item.
-
-The partition key is specific to a container. In this Contoso Products container, the category name, `categoryName`, is used as the partition key.
--
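-
-A sketch of a point read, assuming one of the illustrative items created above:
-
-```javascript
-// A point read requires both the item id and its partition key value (categoryName).
-const { resource: item } = await container
-  .item("68719518391", "Bikes, Touring Bikes")
-  .read();
-console.log(`${item.name} read`);
-```
-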
-### Query items
-
-Add the following code to query for all items that match a specific filter. Create a [parameterized query expression](/javascript/api/@azure/cosmos/sqlqueryspec), then call the [``Container.Items.query``](/javascript/api/@azure/cosmos/items#@azure-cosmos-items-query) method. This method returns a [``QueryIterator``](/javascript/api/@azure/cosmos/queryiterator) that manages the pages of results. Then, use a combination of ``while`` and ``for`` loops to [``fetchNext``](/javascript/api/@azure/cosmos/queryiterator#@azure-cosmos-queryiterator-fetchnext) each page of results as a [``FeedResponse``](/javascript/api/@azure/cosmos/feedresponse) and iterate over the individual data objects.
-
-The query is programmatically composed to `SELECT * FROM todo t WHERE t.partitionKey = 'Bikes, Touring Bikes'`.
--
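-
-A sketch of the query step follows; the sample's exact query text may differ, but the shape is the same:
-
-```javascript
-const querySpec = {
-  query: "SELECT * FROM products p WHERE p.categoryName = @categoryName",
-  parameters: [{ name: "@categoryName", value: "Bikes, Touring Bikes" }],
-};
-
-// The QueryIterator tracks the current page; fetchNext returns one FeedResponse page.
-const iterator = container.items.query(querySpec);
-while (iterator.hasMoreResults()) {
-  const { resources } = await iterator.fetchNext();
-  for (const product of resources) {
-    console.log(`${product.id}: ${product.name}`);
-  }
-}
-```
-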
-If you want to use the data returned from the FeedResponse as an _item_, you need to create an [``Item``](/javascript/api/@azure/cosmos/item) by using the [``Container.item().read``](#get-an-item) method.
-
-### Delete an item
-
-Add the following code to delete an item. You need the item's ID and partition key to get the item, and then delete it. This example uses the [``Container.Item.delete``](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-delete) method to delete the item.
--
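-
-A sketch of the delete step, again using the id and partition key value of an illustrative item:
-
-```javascript
-// Removes the item from the container; both values must match the stored item.
-await container.item("68719518391", "Bikes, Touring Bikes").delete();
-console.log("Item deleted");
-```
-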
-## Run the code
-
-This app creates an Azure Cosmos DB SQL API database and container. The example then creates items and then reads one item back. Finally, the example issues a query that should only return items matching a specific category. With each step, the example outputs metadata to the console about the steps it has performed.
-
-To run the app, use a terminal to navigate to the application directory and run the application.
-
-```bash
-node index.js
-```
-
-The output of the app should be similar to this example:
-
-```output
-contoso_1663276732626 database ready
-products_1663276732626 container ready
-'Touring-1000 Blue, 50' inserted
-'Touring-1000 Blue, 46' inserted
-'Mountain-200 Black, 42' inserted
-Touring-1000 Blue, 50 read
-08225A9E-F2B3-4FA3-AB08-8C70ADD6C3C2: Touring-1000 Blue, 50, BK-T79U-50
-2C981511-AC73-4A65-9DA3-A0577E386394: Touring-1000 Blue, 46, BK-T79U-46
-0F124781-C991-48A9-ACF2-249771D44029 Item deleted
-```
-
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB SQL API account, create a database, and create a container using the JavaScript SDK. You can now dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB SQL API resources.
-
-> [!div class="nextstepaction"]
-> [Tutorial: Build a Node.js console app](sql-api-nodejs-get-started.md)
-
cosmos-db Create Sql Api Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-python.md
- Title: 'Quickstart: Build a Python app using Azure Cosmos DB SQL API account'
-description: Presents a Python code sample you can use to connect to and query the Azure Cosmos DB SQL API
---- Previously updated : 08/25/2022----
-# Quickstart: Build a Python application using an Azure Cosmos DB SQL API account
-
-> [!div class="op_single_selector"]
->
-> * [.NET](quickstart-dotnet.md)
-> * [Node.js](create-sql-api-nodejs.md)
-> * [Java](create-sql-api-java.md)
-> * [Spring Data](create-sql-api-spring-data.md)
-> * [Python](create-sql-api-python.md)
-> * [Spark v3](create-sql-api-spark.md)
-> * [Go](create-sql-api-go.md)
->
-
-In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and from Visual Studio Code with a Python app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-
-## Prerequisites
-
-- A Cosmos DB account. Your options are:
- * Within an Azure active subscription:
- * [Create an Azure free Account](https://azure.microsoft.com/free) or use your existing subscription
- * [Visual Studio Monthly Credits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers)
- * [Azure Cosmos DB Free Tier](../optimize-dev-test.md#azure-cosmos-db-free-tier)
- * Without an Azure active subscription:
-    * [Try Azure Cosmos DB for free](../try-free.md), a test environment that lasts for 30 days.
- * [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator)
-- [Python 3.7+](https://www.python.org/downloads/), with the `python` executable in your `PATH`.
-- [Visual Studio Code](https://code.visualstudio.com/).
-- The [Python extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python#overview).
-- [Git](https://www.git-scm.com/downloads).
-- [Azure Cosmos DB SQL API SDK for Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos)
-
-## Important update on Python 2.x Support
-
-New releases of this SDK won't support Python 2.x starting January 1st, 2022. Please check the [CHANGELOG](./sql-api-sdk-python.md) for more information.
-
-## Create a database account
--
-## Add a container
--
-## Add sample data
--
-## Query your data
--
-## Clone the sample application
-
-Now let's clone a SQL API app from GitHub, set the connection string, and run it. This quickstart uses version 4 of the [Python SDK](https://pypi.org/project/azure-cosmos/#history).
-
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
-
- ```cmd
- md git-samples
- ```
-
- If you are using a bash prompt, you should instead use the following command:
-
- ```bash
- mkdir "git-samples"
- ```
-
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
-
- ```bash
- cd "git-samples"
- ```
-
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
-
- ```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-python-getting-started.git
- ```
-
-## Update your connection string
-
-Now go back to the Azure portal to get your connection string information and copy it into the app.
-
-1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Keys** from the left navigation. Use the copy buttons on the right side of the screen to copy the **URI** and **Primary Key** into the *cosmos_get_started.py* file in the next step.
-
- :::image type="content" source="./media/create-sql-api-dotnet/access-key-and-uri-in-keys-settings-in-the-azure-portal.png" alt-text="Get an access key and URI in the Keys settings in the Azure portal":::
-
-2. In Visual Studio Code, open the *cosmos_get_started.py* file in *\git-samples\azure-cosmos-db-python-getting-started*.
-
-3. Copy your **URI** value from the portal (using the copy button) and make it the value of the **endpoint** variable in *cosmos_get_started.py*.
-
- `endpoint = 'https://FILLME.documents.azure.com',`
-
-4. Then copy your **PRIMARY KEY** value from the portal and make it the value of the **key** in *cosmos_get_started.py*. You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
-
- `key = 'FILLME'`
-
-5. Save the *cosmos_get_started.py* file.
-
-## Review the code
-
-This step is optional. Learn about the database resources created in code, or skip ahead to [Update your connection string](#update-your-connection-string).
-
-The following snippets are all taken from the [cosmos_get_started.py](https://github.com/Azure-Samples/azure-cosmos-db-python-getting-started/blob/main/cosmos_get_started.py) file. A consolidated sketch of the same calls appears after this list.
-
-* The CosmosClient is initialized. Make sure to update the "endpoint" and "key" values as described in the [Update your connection string](#update-your-connection-string) section.
-
- [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=create_cosmos_client)]
-
-* A new database is created.
-
- [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=create_database_if_not_exists)]
-
-* A new container is created, with 400 RU/s of [provisioned throughput](../request-units.md). We choose `lastName` as the [partition key](../partitioning-overview.md#choose-partitionkey), which allows us to do efficient queries that filter on this property.
-
- [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=create_container_if_not_exists)]
-
-* Some items are added to the container. Containers are a collection of items (JSON documents) that can have varied schema. The helper methods ```get_[name]_family_item``` return representations of a family that are stored in Azure Cosmos DB as JSON documents.
-
- [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=create_item)]
-
-* Point reads (key value lookups) are performed using the `read_item` method. We print out the [RU charge](../request-units.md) of each operation.
-
- [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=read_item)]
-
-* A query is performed using SQL query syntax. Because we're using partition key values of ```lastName``` in the WHERE clause, Azure Cosmos DB will efficiently route this query to the relevant partitions, improving performance.
-
- [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=query_items)]
-
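-For orientation, here is a minimal, consolidated sketch of the same calls. It is not the sample file itself; the endpoint, key, and item values are placeholders, and the database and container names are illustrative.
-
-```python
-from azure.cosmos import CosmosClient, PartitionKey
-
-endpoint = "https://FILLME.documents.azure.com"
-key = "FILLME"
-
-client = CosmosClient(endpoint, key)
-
-# Create (or get) the database and a container partitioned on /lastName with 400 RU/s.
-database = client.create_database_if_not_exists(id="AzureSampleFamilyDatabase")
-container = database.create_container_if_not_exists(
-    id="FamilyContainer",
-    partition_key=PartitionKey(path="/lastName"),
-    offer_throughput=400,
-)
-
-# Insert an item, then read it back with a point read (id + partition key value).
-container.upsert_item({"id": "Andersen-1", "lastName": "Andersen", "district": "WA5"})
-item = container.read_item(item="Andersen-1", partition_key="Andersen")
-
-# Filtering on the partition key lets Cosmos DB route the query to a single partition.
-for doc in container.query_items(
-    query="SELECT * FROM c WHERE c.lastName = @lastName",
-    parameters=[{"name": "@lastName", "value": "Andersen"}],
-    partition_key="Andersen",
-):
-    print(doc["id"])
-```
-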
-## Run the app
-
-1. In Visual Studio Code, select **View** > **Command Palette**.
-
-2. At the prompt, enter **Python: Select Interpreter** and then select the version of Python to use.
-
- The Footer in Visual Studio Code is updated to indicate the interpreter selected.
-
-3. Select **View** > **Integrated Terminal** to open the Visual Studio Code integrated terminal.
-
-4. In the integrated terminal window, ensure you are in the *azure-cosmos-db-python-getting-started* folder. If not, run the following command to switch to the sample folder.
-
- ```cmd
-    cd "\git-samples\azure-cosmos-db-python-getting-started"
- ```
-
-5. Run the following command to install the azure-cosmos package.
-
- ```python
- pip install azure-cosmos aiohttp
- ```
-
- If you get an error about access being denied when attempting to install azure-cosmos, you'll need to [run VS Code as an administrator](https://stackoverflow.com/questions/37700536/visual-studio-code-terminal-how-to-run-a-command-with-administrator-rights).
-
-6. Run the following command to run the sample and create and store new documents in Azure Cosmos DB.
-
- ```python
- python cosmos_get_started.py
- ```
-
-7. To confirm the new items were created and saved, in the Azure portal, select **Data Explorer** > **AzureSampleFamilyDatabase** > **Items**. View the items that were created. For example, here is a sample JSON document for the Andersen family:
-
- ```json
- {
- "id": "Andersen-1569479288379",
- "lastName": "Andersen",
- "district": "WA5",
- "parents": [
- {
- "familyName": null,
- "firstName": "Thomas"
- },
- {
- "familyName": null,
- "firstName": "Mary Kay"
- }
- ],
- "children": null,
- "address": {
- "state": "WA",
- "county": "King",
- "city": "Seattle"
- },
- "registered": true,
- "_rid": "8K5qAIYtZXeBhB4AAAAAAA==",
- "_self": "dbs/8K5qAA==/colls/8K5qAIYtZXc=/docs/8K5qAIYtZXeBhB4AAAAAAA==/",
- "_etag": "\"a3004d78-0000-0800-0000-5d8c5a780000\"",
- "_attachments": "attachments/",
- "_ts": 1569479288
- }
- ```
-
-## Review SLAs in the Azure portal
--
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a Python app in Visual Studio Code. You can now import additional data to your Azure Cosmos DB account.
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB for the SQL API](../import-data.md)
cosmos-db Create Sql Api Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spark.md
- Title: Quickstart - Manage data with Azure Cosmos DB Spark 3 OLTP Connector for SQL API
-description: This quickstart presents a code sample for the Azure Cosmos DB Spark 3 OLTP Connector for SQL API that you can use to connect to and query data in your Azure Cosmos DB account
---- Previously updated : 03/01/2022-----
-# Quickstart: Manage data with Azure Cosmos DB Spark 3 OLTP Connector for SQL API
-
-> [!div class="op_single_selector"]
->
-> * [.NET](quickstart-dotnet.md)
-> * [Node.js](create-sql-api-nodejs.md)
-> * [Java](create-sql-api-java.md)
-> * [Spring Data](create-sql-api-spring-data.md)
-> * [Python](create-sql-api-python.md)
-> * [Spark v3](create-sql-api-spark.md)
-> * [Go](create-sql-api-go.md)
->
-
-This tutorial is a quickstart guide that shows how to use the Cosmos DB Spark Connector to read from or write to Cosmos DB. The Cosmos DB Spark Connector supports Spark 3.1.x and 3.2.x.
-
-Throughout this quick tutorial, we rely on [Azure Databricks Runtime 10.4 with Spark 3.2.1](/azure/databricks/release-notes/runtime/10.4) and a Jupyter Notebook to show how to use the Cosmos DB Spark Connector.
-
-You can use any other Spark offering as well (for example, Spark 3.1.1). You should also be able to use any language supported by Spark (PySpark, Scala, Java, and so on), or any Spark interface you are familiar with (Jupyter Notebook, Livy, and so on).
-
-## Prerequisites
-
-* An active Azure account. If you don't have one, you can sign up for a [free account](https://azure.microsoft.com/try/cosmosdb/). Alternatively, you can use the [Azure Cosmos DB Emulator](../local-emulator.md) for development and testing.
-
-* [Azure Databricks](/azure/databricks/release-notes/runtime/10.4) runtime 10.4 with Spark 3.2.1
-
-* (Optional) [SLF4J binding](https://www.slf4j.org/manual.html) is used to associate a specific logging framework with SLF4J.
-
-SLF4J is needed only if you plan to use logging. In that case, also download an SLF4J binding, which links the SLF4J API with the logging implementation of your choice. See the [SLF4J user manual](https://www.slf4j.org/manual.html) for more information.
-
-Install the Cosmos DB Spark Connector in your Spark cluster by [using the latest version for Spark 3.2.x](https://aka.ms/azure-cosmos-spark-3-2-download).
-
-This getting started guide is based on PySpark and Scala. You can run the following code snippets in an Azure Databricks PySpark or Scala notebook.
-
-## Create databases and containers
-
-First, set Cosmos DB account credentials, and the Cosmos DB Database name and container name.
-
-#### [Python](#tab/python)
-
-```python
-cosmosEndpoint = "https://REPLACEME.documents.azure.com:443/"
-cosmosMasterKey = "REPLACEME"
-cosmosDatabaseName = "sampleDB"
-cosmosContainerName = "sampleContainer"
-
-cfg = {
- "spark.cosmos.accountEndpoint" : cosmosEndpoint,
- "spark.cosmos.accountKey" : cosmosMasterKey,
- "spark.cosmos.database" : cosmosDatabaseName,
- "spark.cosmos.container" : cosmosContainerName,
-}
-```
-
-#### [Scala](#tab/scala)
-
-```scala
-val cosmosEndpoint = "https://REPLACEME.documents.azure.com:443/"
-val cosmosMasterKey = "REPLACEME"
-val cosmosDatabaseName = "sampleDB"
-val cosmosContainerName = "sampleContainer"
-
-val cfg = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint,
- "spark.cosmos.accountKey" -> cosmosMasterKey,
- "spark.cosmos.database" -> cosmosDatabaseName,
- "spark.cosmos.container" -> cosmosContainerName
-)
-```
--
-Next, you can use the new Catalog API to create a Cosmos DB Database and Container through Spark.
-
-#### [Python](#tab/python)
-
-```python
-# Configure Catalog Api to be used
-spark.conf.set("spark.sql.catalog.cosmosCatalog", "com.azure.cosmos.spark.CosmosCatalog")
-spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.accountEndpoint", cosmosEndpoint)
-spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.accountKey", cosmosMasterKey)
-
-# create a cosmos database using catalog api
-spark.sql("CREATE DATABASE IF NOT EXISTS cosmosCatalog.{};".format(cosmosDatabaseName))
-
-# create a cosmos container using catalog api
-spark.sql("CREATE TABLE IF NOT EXISTS cosmosCatalog.{}.{} using cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/id', manualThroughput = '1100')".format(cosmosDatabaseName, cosmosContainerName))
-```
-
-#### [Scala](#tab/scala)
-
-```scala
-// Configure Catalog Api to be used
-spark.conf.set(s"spark.sql.catalog.cosmosCatalog", "com.azure.cosmos.spark.CosmosCatalog")
-spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.accountEndpoint", cosmosEndpoint)
-spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.accountKey", cosmosMasterKey)
-
-// create a cosmos database using catalog api
-spark.sql(s"CREATE DATABASE IF NOT EXISTS cosmosCatalog.${cosmosDatabaseName};")
-
-// create a cosmos container using catalog api
-spark.sql(s"CREATE TABLE IF NOT EXISTS cosmosCatalog.${cosmosDatabaseName}.${cosmosContainerName} using cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/id', manualThroughput = '1100')")
-```
--
-When creating containers with the Catalog API, you can set the throughput and [partition key path](../partitioning-overview.md#choose-partitionkey) for the container to be created.
-
-For more information, see the full [Catalog API](https://github.com/Azure/azure-sdk-for-jav) documentation.
-
-## Ingest data
-
-The name of the data source is `cosmos.oltp`. The following example shows how you can write an in-memory dataframe consisting of two items to Cosmos DB:
-
-#### [Python](#tab/python)
-
-```python
-spark.createDataFrame((("cat-alive", "Schrodinger cat", 2, True), ("cat-dead", "Schrodinger cat", 2, False)))\
- .toDF("id","name","age","isAlive") \
- .write\
- .format("cosmos.oltp")\
- .options(**cfg)\
- .mode("APPEND")\
- .save()
-```
-
-#### [Scala](#tab/scala)
-
-```scala
-spark.createDataFrame(Seq(("cat-alive", "Schrodinger cat", 2, true), ("cat-dead", "Schrodinger cat", 2, false)))
- .toDF("id","name","age","isAlive")
- .write
- .format("cosmos.oltp")
- .options(cfg)
- .mode("APPEND")
- .save()
-```
--
-Note that `id` is a mandatory field for Cosmos DB.
-
-For more information related to ingesting data, see the full [write configuration](https://github.com/Azure/azure-sdk-for-jav#write-config) documentation.
-
-## Query data
-
-Using the same `cosmos.oltp` data source, we can query data and use `filter` to push down filters:
-
-#### [Python](#tab/python)
-
-```python
-from pyspark.sql.functions import col
-
-df = spark.read.format("cosmos.oltp").options(**cfg)\
- .option("spark.cosmos.read.inferSchema.enabled", "true")\
- .load()
-
-df.filter(col("isAlive") == True)\
- .show()
-```
-
-#### [Scala](#tab/scala)
-
-```scala
-import org.apache.spark.sql.functions.col
-
-val df = spark.read.format("cosmos.oltp").options(cfg).load()
-
-df.filter(col("isAlive") === true)
- .withColumn("age", col("age") + 1)
- .show()
-```
--
-For more information related to querying data, see the full [query configuration](https://github.com/Azure/azure-sdk-for-jav#query-config) documentation.
-
-## Partial document update using Patch
-
-Using the same `cosmos.oltp` data source, we can do partial updates in Cosmos DB by using the Patch API:
-
-#### [Python](#tab/python)
-
-```python
-cfgPatch = {"spark.cosmos.accountEndpoint": cosmosEndpoint,
- "spark.cosmos.accountKey": cosmosMasterKey,
- "spark.cosmos.database": cosmosDatabaseName,
- "spark.cosmos.container": cosmosContainerName,
- "spark.cosmos.write.strategy": "ItemPatch",
- "spark.cosmos.write.bulk.enabled": "false",
- "spark.cosmos.write.patch.defaultOperationType": "Set",
- "spark.cosmos.write.patch.columnConfigs": "[col(name).op(set)]"
- }
-
-id = "<document-id>"
-query = "select * from cosmosCatalog.{}.{} where id = '{}';".format(
- cosmosDatabaseName, cosmosContainerName, id)
-
-dfBeforePatch = spark.sql(query)
-print("document before patch operation")
-dfBeforePatch.show()
-
-data = [{"id": id, "name": "Joel Brakus"}]
-patchDf = spark.createDataFrame(data)
-
-patchDf.write.format("cosmos.oltp").mode("Append").options(**cfgPatch).save()
-
-dfAfterPatch = spark.sql(query)
-print("document after patch operation")
-dfAfterPatch.show()
-```
-
-For more samples related to partial document update, see the Github code sample [Patch Sample](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/Python/patch-sample.py).
--
-#### [Scala](#tab/scala)
-
-```scala
-val cfgPatch = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint,
- "spark.cosmos.accountKey" -> cosmosMasterKey,
- "spark.cosmos.database" -> cosmosDatabaseName,
- "spark.cosmos.container" -> cosmosContainerName,
- "spark.cosmos.write.strategy" -> "ItemPatch",
- "spark.cosmos.write.bulk.enabled" -> "false",
-
- "spark.cosmos.write.patch.columnConfigs" -> "[col(name).op(set)]"
- )
-
-val id = "<document-id>"
-val query = s"select * from cosmosCatalog.${cosmosDatabaseName}.${cosmosContainerName} where id = '$id';"
-
-val dfBeforePatch = spark.sql(query)
-println("document before patch operation")
-dfBeforePatch.show()
-val patchDf = Seq(
- (id, "Joel Brakus")
- ).toDF("id", "name")
-
-patchDf.write.format("cosmos.oltp").mode("Append").options(cfgPatch).save()
-val dfAfterPatch = spark.sql(query)
-println("document after patch operation")
-dfAfterPatch.show()
-```
-
-For more samples related to partial document update, see the Github code sample [Patch Sample](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/Scala/PatchSample.scala).
---
-## Schema inference
-
-When querying data, the Spark Connector can infer the schema based on sampling existing items by setting `spark.cosmos.read.inferSchema.enabled` to `true`.
-
-#### [Python](#tab/python)
-
-```python
-df = spark.read.format("cosmos.oltp").options(**cfg)\
- .option("spark.cosmos.read.inferSchema.enabled", "true")\
- .load()
-
-df.printSchema()
--
-# Alternatively, you can pass the custom schema you want to be used to read the data:
-
-customSchema = StructType([
- StructField("id", StringType()),
- StructField("name", StringType()),
- StructField("type", StringType()),
- StructField("age", IntegerType()),
- StructField("isAlive", BooleanType())
- ])
-
-df = spark.read.schema(customSchema).format("cosmos.oltp").options(**cfg)\
- .load()
-
-df.printSchema()
-
-# If no custom schema is specified and schema inference is disabled, the resulting DataFrame contains the raw JSON content of the items:
-
-df = spark.read.format("cosmos.oltp").options(**cfg)\
- .load()
-
-df.printSchema()
-```
-
-#### [Scala](#tab/scala)
-
-```scala
-val cfgWithAutoSchemaInference = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint,
- "spark.cosmos.accountKey" -> cosmosMasterKey,
- "spark.cosmos.database" -> cosmosDatabaseName,
- "spark.cosmos.container" -> cosmosContainerName,
- "spark.cosmos.read.inferSchema.enabled" -> "true"
-)
-
-val df = spark.read.format("cosmos.oltp").options(cfgWithAutoSchemaInference).load()
-df.printSchema()
-
-df.show()
-```
--
-For more information related to schema inference, see the full [schema inference configuration](https://github.com/Azure/azure-sdk-for-jav#schema-inference-config) documentation.
-
-## Configuration reference
-
-The Azure Cosmos DB Spark 3 OLTP Connector for SQL API has a complete configuration reference that provides additional and advanced settings for writing and querying data, serialization, streaming using change feed, partitioning, throughput management, and more. For a complete listing with details, see the [Spark Connector Configuration Reference](https://aka.ms/azure-cosmos-spark-3-config) on GitHub.
-
-## Migrate to Spark 3 Connector
-
-If you are using our older Spark 2.4 Connector, you can find out how to migrate to the Spark 3 Connector [here](https://github.com/Azure/azure-sdk-for-jav).
-
-## Next steps
-
-* Azure Cosmos DB Apache Spark 3 OLTP Connector for Core (SQL) API: [Release notes and resources](sql-api-sdk-java-spark-v3.md)
-* Learn more about [Apache Spark](https://spark.apache.org/).
-* Learn how to configure [throughput control](throughput-control-spark.md).
-* Check out more [samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples).
cosmos-db Create Sql Api Spring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spring-data.md
- Title: Quickstart - Use Spring Data Azure Cosmos DB v3 to create a document database using Azure Cosmos DB
-description: This quickstart presents a Spring Data Azure Cosmos DB v3 code sample you can use to connect to and query the Azure Cosmos DB SQL API
---- Previously updated : 08/26/2021-----
-# Quickstart: Build a Spring Data Azure Cosmos DB v3 app to manage Azure Cosmos DB SQL API data
-
-> [!div class="op_single_selector"]
->
-> * [.NET](quickstart-dotnet.md)
-> * [Node.js](create-sql-api-nodejs.md)
-> * [Java](create-sql-api-java.md)
-> * [Spring Data](create-sql-api-spring-data.md)
-> * [Python](create-sql-api-python.md)
-> * [Spark v3](create-sql-api-spark.md)
-> * [Go](create-sql-api-go.md)
->
-
-In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Spring Data Azure Cosmos DB v3 app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account by using the Azure portal, or you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card or an Azure subscription. You then create a Spring Boot app that uses the Spring Data Azure Cosmos DB v3 connector, and add resources to your Cosmos DB account by using that application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-
-> [!IMPORTANT]
-> These release notes are for version 3 of Spring Data Azure Cosmos DB. You can find [release notes for version 2 here](sql-api-sdk-java-spring-v2.md).
->
-> Spring Data Azure Cosmos DB supports only the SQL API.
->
-> See these articles for information about Spring Data on other Azure Cosmos DB APIs:
-> * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db)
-> * [Spring Data MongoDB with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-mongodb-with-cosmos-db)
->
-
-## Prerequisites
-
-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://aka.ms/trycosmosdb) without an Azure subscription or credit card. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
-- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.
-- A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven.
-- [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
-
-## Introductory notes
-
-*The structure of a Cosmos DB account.* Irrespective of API or programming language, a Cosmos DB *account* contains zero or more *databases*, a *database* (DB) contains zero or more *containers*, and a *container* contains zero or more items, as shown in the diagram below:
--
-You may read more about databases, containers and items [here.](../account-databases-containers-items.md) A few important properties are defined at the level of the container, among them *provisioned throughput* and *partition key*.
-
-The provisioned throughput is measured in Request Units (*RUs*) which have a monetary price and are a substantial determining factor in the operating cost of the account. Provisioned throughput can be selected at per-container granularity or per-database granularity, however container-level throughput specification is typically preferred. You may read more about throughput provisioning [here.](../set-throughput.md)
-
-As items are inserted into a Cosmos DB container, the database grows horizontally by adding more storage and compute to handle requests. Storage and compute capacity are added in discrete units known as *partitions*, and you must choose one field in your documents to be the partition key which maps each document to a partition. The way partitions are managed is that each partition is assigned a roughly equal slice out of the range of partition key values; therefore you are advised to choose a partition key which is relatively random or evenly-distributed. Otherwise, some partitions will see substantially more requests (*hot partition*) while other partitions see substantially fewer requests (*cold partition*), and this is to be avoided. You may learn more about partitioning [here](../partitioning-overview.md).
-
-## Create a database account
-
-Before you can create a document database, you need to create a SQL API account with Azure Cosmos DB.
--
-## Add a container
--
-<a id="add-sample-data"></a>
-## Add sample data
--
-## Query your data
--
-## Clone the sample application
-
-Now let's switch to working with code. Let's clone a SQL API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
-
-Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
-
-```bash
-git clone https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-getting-started.git
-```
-
-## Review the code
-
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Run the app](#run-the-app).
-
-### Application configuration file
-
-Here we showcase how Spring Boot and Spring Data enhance the user experience: the process of establishing a Cosmos client and connecting to Cosmos resources is now configuration rather than code. At application startup, Spring Boot handles all of this boilerplate by using the settings in **application.properties**:
-
-```properties
-cosmos.uri=${ACCOUNT_HOST}
-cosmos.key=${ACCOUNT_KEY}
-cosmos.secondaryKey=${SECONDARY_ACCOUNT_KEY}
-
-dynamic.collection.name=spel-property-collection
-# Populate query metrics
-cosmos.queryMetricsEnabled=true
-```
-
-Once you create an Azure Cosmos DB account, database, and container, fill in the blanks in the config file, and Spring Boot/Spring Data automatically does the following: (1) creates an underlying Java SDK `CosmosClient` instance with the URI and key, and (2) connects to the database and container. You're all set - **no more resource management code!**
-
-### Java source
-
-The Spring Data value-add also comes from its simple, clean, standardized and platform-independent interface for operating on datastores. Building on the Spring Data GitHub sample linked above, below are CRUD and query samples for manipulating Azure Cosmos DB documents with Spring Data Azure Cosmos DB.
-
-* Item creation and updates by using the `save` method.
-
- [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Create)]
-
-* Point reads using a derived query method defined in the repository. The `findByIdAndLastName` method performs point reads for `UserRepository`. The fields mentioned in the method name cause Spring Data to execute a point read defined by the `id` and `lastName` fields:
-
- [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Read)]
-
-* Item deletes using `deleteAll`:
-
- [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Delete)]
-
-* Derived query based on the repository method name. Spring Data implements the `UserRepository` `findByFirstName` method as a Java SDK SQL query on the `firstName` field, because this query can't be implemented as a point read (a minimal sketch of such a repository interface appears after this list):
-
- [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Query)]
-
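-For orientation, a minimal sketch of such a repository interface is shown below. The entity type, method names, and return types are assumptions for illustration and may not match the sample exactly.
-
-```java
-import com.azure.spring.data.cosmos.repository.CosmosRepository;
-import org.springframework.stereotype.Repository;
-
-// User is assumed to be the sample's entity class (id, firstName, lastName).
-@Repository
-public interface UserRepository extends CosmosRepository<User, String> {
-
-    // Derived point read: resolved from the id and partition key (lastName)
-    // fields named in the method, so no SQL query is issued.
-    Iterable<User> findByIdAndLastName(String id, String lastName);
-
-    // Derived query: translated into a SQL query on the firstName field,
-    // because firstName is neither the id nor the partition key.
-    Iterable<User> findByFirstName(String firstName);
-}
-```
-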
-## Run the app
-
-Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database.
-
-1. In the git terminal window, `cd` to the sample code folder.
-
- ```bash
- cd azure-spring-data-cosmos-java-sql-api-getting-started/azure-spring-data-cosmos-java-getting-started/
- ```
-
-2. In the git terminal window, use the following command to install the required Spring Data Azure Cosmos DB packages.
-
- ```bash
- mvn clean package
- ```
-
-3. In the git terminal window, use the following command to start the Spring Data Azure Cosmos DB application:
-
- ```bash
- mvn spring-boot:run
- ```
-
-4. The app loads **application.properties** and connects to the resources in your Azure Cosmos DB account.
-5. The app will perform point CRUD operations described above.
-6. The app will perform a derived query.
-7. The app doesn't delete your resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account if you want to avoid incurring charges.
-
-## Review SLAs in the Azure portal
--
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you've learned how to create an Azure Cosmos DB SQL API account, create a document database and container using the Data Explorer, and run a Spring Data app to do the same thing programmatically. You can now import additional data into your Azure Cosmos DB account.
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB](../import-data.md)
cosmos-db Create Support Request Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-support-request-quota-increase.md
- Title: How to request quota increase for Azure Cosmos DB resources
-description: Learn how to request a quota increase for Azure Cosmos DB resources. You will also learn how to enable a subscription to access a region.
----- Previously updated : 04/27/2022--
-# How to request quota increase for Azure Cosmos DB resources
-
-The resources in Azure Cosmos DB have [default quotas/limits](../concepts-limits.md). However, there may be cases where your workload needs more quota than the default value. In that case, you must reach out to the Azure Cosmos DB team to request a quota increase. This article explains how to request a quota increase for Azure Cosmos DB resources. You will also learn how to enable a subscription to access a region.
-
-## Create a new support request
-
-To request a quota increase, you must create a new support request with your workload details. The Azure Cosmos DB team will then evaluate your request and approve it. Use the following steps to create a new support request from the Azure portal:
-
-1. Sign in to the Azure portal.
-
-1. From the left-hand menu, select **Help + support** and then select **New support request**.
-
-1. On the **Basics** tab, fill in the following details:
-
- * For **Issue type**, select **Service and subscription limits (quotas)**
- * For **Subscription**, select the subscription for which you want to increase the quota.
- * For **Quota type**, select **Cosmos DB**
-
- :::image type="content" source="./media/create-support-request-quota-increase/create-quota-increase-request.png" alt-text="Create a new Cosmos DB support request for quota increase":::
-
-1. On the **Details** tab, enter the details corresponding to your quota request. The information provided on this tab will be used to further assess your issue and help the support engineer troubleshoot the problem.
-
-1. Fill in the following details in this form:
-
-    * **Description**: Provide a short description of your request, such as your workload and why the default values aren't sufficient, along with any error messages you are observing.
-
- * **Quota specific fields** provide the requested information for your specific quota request.
-
- * **File upload**: Upload the diagnostic files or any other files that you think are relevant to the support request. To learn more on the file upload guidance, see the [Azure support](../../azure-portal/supportability/how-to-manage-azure-support-request.md#upload-files) article.
-
- * **Severity**: Choose one of the available severity levels based on the business impact.
-
- * **Preferred contact method**: You can either choose to be contacted over **Email** or by **Phone**.
-
-1. Fill out the remaining details such as your availability, support language, contact information, email, and phone number on the form.
-
-1. Select **Next: Review+Create**. Validate the information provided and select **Create** to create a support request.
-
-Within 24 hours, the Azure Cosmos DB support team will evaluate your request and get back to you.
-
-## Next steps
-
-* See the [Azure Cosmos DB default service quotas](../concepts-limits.md)
cosmos-db Create Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-website.md
- Title: Deploy a web app with a template - Azure Cosmos DB
-description: Learn how to deploy an Azure Cosmos account, Azure App Service Web Apps, and a sample web application using an Azure Resource Manager template.
---- Previously updated : 06/19/2020----
-# Deploy Azure Cosmos DB and Azure App Service with a web app from GitHub using an Azure Resource Manager Template
-
-This tutorial shows you how to do a "no touch" deployment of a web application that connects to Azure Cosmos DB on first run without having to cut and paste any connection information from Azure Cosmos DB to `appsettings.json` or to the Azure App Services application settings in the Azure portal. All these actions are accomplished using an Azure Resource Manager template in a single operation. In the example here we will deploy the [Azure Cosmos DB ToDo sample](https://github.com/Azure-Samples/cosmos-dotnet-core-todo-app) from a [Web app tutorial](sql-api-dotnet-application.md).
-
-Resource Manager templates are quite flexible and allow you to compose complex deployments across any service in Azure. This includes advanced tasks such as deploying applications from GitHub and injecting connection information into Azure App Service's application settings in the Azure portal. This tutorial shows you how to do the following things by using a single Resource Manager template.
-
-* Deploy an Azure Cosmos account.
-* Deploy an Azure App Service Hosting Plan.
-* Deploy an Azure App Service.
-* Inject the endpoint and keys from the Azure Cosmos account into the App Service application settings in the Azure portal.
-* Deploy a web application from a GitHub repository to the App Service.
-
-The resulting deployment has a fully functional web application that can connect to Azure Cosmos DB without having to cut and paste the Azure Cosmos DB's endpoint URL or authentication keys from the Azure portal.
-
-## Prerequisites
-
-> [!TIP]
-> While this tutorial does not assume prior experience with Azure Resource Manager templates or JSON, should you wish to modify the referenced templates or deployment options, then knowledge of each of these areas is required.
-
-## Step 1: Deploy the template
-
-First, select the **Deploy to Azure** button below to open the Azure portal to create a custom deployment. You can also view the Azure Resource Manager template from the [Azure Quickstart Templates Gallery](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.documentdb/cosmosdb-webapp)
-
-[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-webapp%2Fazuredeploy.json)
-
-Once in the Azure portal, select the subscription to deploy into and select or create a new resource group. Then fill in the following values.
--
-* **Region** - This is required by the Resource Manager. Enter the same region used by the location parameter where your resources are located.
-* **Application Name** - This name is used by all the resources for this deployment. Make sure to choose a unique name to avoid conflicts with existing Azure Cosmos DB and App Service accounts.
-* **Location** - The region where your resources are deployed.
-* **App Service Plan Tier** - App Service Plan's pricing tier.
-* **App Service Plan Instances** - The number of workers for the app service plan.
-* **Repository URL** - The URL of the web application repository on GitHub.
-* **Branch** - The branch for the GitHub repository.
-* **Database Name** - The Azure Cosmos database name.
-* **Container Name** - The Azure Cosmos container name.
-
-After filling in the values, select the **Create** button to start the deployment. This step should take between 5 and 10 minutes to complete.
-
-> [!TIP]
-> The template does not validate that the Azure App Service name and Azure Cosmos account name entered in the template are valid and available. It is highly recommended that you verify the availability of the names you plan to supply prior to submitting the deployment.
--
-## Step 2: Explore the resources
-
-### View the deployed resources
-
-After the template has deployed the resources, you can now see each of them in your resource group.
--
-### View Cosmos DB endpoint and keys
-
-Next, open the Azure Cosmos account in the portal. The following screenshot shows the endpoint and keys for an Azure Cosmos account.
--
-### View the Azure Cosmos DB keys in application settings
-
-Next, navigate to the Azure App Service in the resource group. Select the **Configuration** tab to view the application settings for the App Service. The application settings contain the Cosmos DB account and primary key values necessary to connect to Cosmos DB, as well as the database and container names that were passed in from the template deployment.
--
-### View web app in Deployment Center
-
-Next, go to the Deployment Center for the App Service. Here, **Repository** points to the GitHub repository passed in to the template, and **Status** shows **Success (Active)**, meaning the application deployed and started successfully.
--
-### Run the web application
-
-Select **Browse** at the top of the Deployment Center to open the web application. The web application opens to the home screen. Select **Create New**, enter some data into the fields, and then select **Save**. The resulting screen shows the data saved to Cosmos DB.
--
-## Step 3: How it works
-
-There are three elements necessary for this to work.
-
-### Reading app settings at runtime
-
-First, the application needs to read the Cosmos DB endpoint and key in the `Startup` class of the ASP.NET MVC web application. The [Cosmos DB To Do Sample](https://github.com/Azure-Samples/cosmos-dotnet-core-todo-app) can run locally, where you enter the connection information into `appsettings.json`. When the app is deployed from GitHub, however, that connection information isn't included. If the application can't read the settings from `appsettings.json`, it falls back to the Application Settings in Azure App Service.
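-
-The following is a minimal sketch of that fallback pattern, not the sample's own code: the configuration section name `CosmosDb` and the `Account`, `Key`, `DatabaseName`, and `ContainerName` keys are assumed for illustration. Because ASP.NET Core layers environment variables (which is how App Service application settings surface) over `appsettings.json`, the same lookup works locally and after deployment.
-
-```csharp
-using Microsoft.Azure.Cosmos;
-using Microsoft.Extensions.Configuration;
-using Microsoft.Extensions.DependencyInjection;
-
-public class Startup
-{
-    public Startup(IConfiguration configuration) => Configuration = configuration;
-
-    public IConfiguration Configuration { get; }
-
-    public void ConfigureServices(IServiceCollection services)
-    {
-        // Reads from appsettings.json locally, or from App Service application settings
-        // (surfaced as environment variables) when deployed.
-        IConfigurationSection section = Configuration.GetSection("CosmosDb");
-
-        CosmosClient client = new CosmosClient(section["Account"], section["Key"]);
-        services.AddSingleton(client.GetContainer(section["DatabaseName"], section["ContainerName"]));
-    }
-}
-```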
--
-### Using special Azure Resource Management functions
-
-To make these values available to the deployed application, the Azure Resource Manager template requests them from the Cosmos DB account using special Azure Resource Manager functions, including [reference](../../azure-resource-manager/templates/template-functions-resource.md#reference) and [listKeys](../../azure-resource-manager/templates/template-functions-resource.md#listkeys). These functions retrieve the endpoint and keys from the Cosmos DB account and insert them into the application settings, using key names that match what the application expects in a `{section:key}` format, for example, `CosmosDb:Account`.
--
-### Deploying web apps from GitHub
-
-Lastly, we need to deploy the web application from GitHub into the App Service. This is done using the JSON below. Two things to be careful with are the type and name for this resource. Both the `"type": "sourcecontrols"` and `"name": "web"` property values are hard coded and should not be changed.
--
-## Next steps
-
-Congratulations! You've deployed Azure Cosmos DB, Azure App Service, and a sample web application that automatically has the connection info necessary to connect to Cosmos DB, all in a single operation and without having to cut and paste sensitive information. Using this template as a starting point, you can modify it to deploy your own web applications the same way.
-
-* For the Azure Resource Manager template for this sample, go to the [Azure Quickstart Templates Gallery](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.documentdb/cosmosdb-webapp).
-* For the source code for the sample app go to [Cosmos DB To Do App on GitHub](https://github.com/Azure-Samples/cosmos-dotnet-core-todo-app).
cosmos-db Database Transactions Optimistic Concurrency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/database-transactions-optimistic-concurrency.md
- Title: Database transactions and optimistic concurrency control in Azure Cosmos DB
-description: This article describes database transactions and optimistic concurrency control in Azure Cosmos DB
------ Previously updated : 12/04/2019--
-# Transactions and optimistic concurrency control
-
-Database transactions provide a safe and predictable programming model to deal with concurrent changes to the data. Traditional relational databases, like SQL Server, allow you to write business logic using stored procedures and triggers and send it to the server for execution directly within the database engine. With traditional relational databases, you are required to deal with two different programming languages: the (non-transactional) application programming language, such as JavaScript, Python, C#, or Java, and the transactional programming language (such as T-SQL) that is natively executed by the database.
-
-The database engine in Azure Cosmos DB supports full ACID (Atomicity, Consistency, Isolation, Durability) compliant transactions with snapshot isolation. All the database operations within the scope of a container's [logical partition](../partitioning-overview.md) are transactionally executed within the database engine that is hosted by the replica of the partition. These operations include both write (updating one or more items within the logical partition) and read operations. The following table illustrates different operations and transaction types:
-
-| **Operation** | **Operation Type** | **Single or Multi Item Transaction** |
-||||
-| Insert (without a pre/post trigger) | Write | Single item transaction |
-| Insert (with a pre/post trigger) | Write and Read | Multi-item transaction |
-| Replace (without a pre/post trigger) | Write | Single item transaction |
-| Replace (with a pre/post trigger) | Write and Read | Multi-item transaction |
-| Upsert (without a pre/post trigger) | Write | Single item transaction |
-| Upsert (with a pre/post trigger) | Write and Read | Multi-item transaction |
-| Delete (without a pre/post trigger) | Write | Single item transaction |
-| Delete (with a pre/post trigger) | Write and Read | Multi-item transaction |
-| Execute stored procedure | Write and Read | Multi-item transaction |
-| System initiated execution of a merge procedure | Write | Multi-item transaction |
-| System initiated execution of deleting items based on expiration (TTL) of an item | Write | Multi-item transaction |
-| Read | Read | Single-item transaction |
-| Change Feed | Read | Multi-item transaction |
-| Paginated Read | Read | Multi-item transaction |
-| Paginated Query | Read | Multi-item transaction |
-| Execute UDF as part of the paginated query | Read | Multi-item transaction |
-
-## Multi-item transactions
-
-Azure Cosmos DB allows you to write [stored procedures, pre/post triggers, user-defined-functions (UDFs)](stored-procedures-triggers-udfs.md) and merge procedures in JavaScript. Azure Cosmos DB natively supports JavaScript execution inside its database engine. You can register stored procedures, pre/post triggers, user-defined-functions (UDFs) and merge procedures on a container and later execute them transactionally within the Azure Cosmos database engine. Writing application logic in JavaScript allows natural expression of control flow, variable scoping, assignment, and integration of exception handling primitives within the database transactions directly in the JavaScript language.
-
-The JavaScript-based stored procedures, triggers, UDFs, and merge procedures are wrapped within an ambient ACID transaction with snapshot isolation across all items within the logical partition. During the course of its execution, if the JavaScript program throws an exception, the entire transaction is aborted and rolled-back. The resulting programming model is simple yet powerful. JavaScript developers get a durable programming model while still using their familiar language constructs and library primitives.
-
-The ability to execute JavaScript directly within the database engine provides performant, transactional execution of database operations against the items of a container. Furthermore, because the Azure Cosmos database engine natively supports JSON and JavaScript, there is no impedance mismatch between the type systems of an application and the database.
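-
-As a rough illustration of this behavior with the .NET SDK v3 (a minimal sketch, not the article's sample; the partition key path `/pk` and the item shapes are assumed), the following registers a small JavaScript stored procedure and executes it against a single logical partition. If either write inside the stored procedure fails, both are rolled back.
-
-```csharp
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-using Microsoft.Azure.Cosmos.Scripts;
-
-public static class StoredProcedureSketch
-{
-    public static async Task RunAsync(Container container)
-    {
-        var storedProcedure = new StoredProcedureProperties
-        {
-            Id = "createTwoItems",
-            Body = @"function createTwoItems(first, second) {
-                var collection = getContext().getCollection();
-                // Both writes run in one ambient transaction; a thrown error aborts and rolls back both.
-                collection.createDocument(collection.getSelfLink(), first);
-                collection.createDocument(collection.getSelfLink(), second);
-                getContext().getResponse().setBody('created 2 items');
-            }"
-        };
-
-        await container.Scripts.CreateStoredProcedureAsync(storedProcedure);
-
-        // All items touched by the stored procedure must belong to the same logical partition.
-        await container.Scripts.ExecuteStoredProcedureAsync<string>(
-            "createTwoItems",
-            new PartitionKey("store-1"),
-            new dynamic[]
-            {
-                new { id = "item-1", pk = "store-1" },
-                new { id = "item-2", pk = "store-1" }
-            });
-    }
-}
-```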
-
-## Optimistic concurrency control
-
-Optimistic concurrency control allows you to prevent lost updates and deletes. Concurrent, conflicting operations are subjected to the regular pessimistic locking of the database engine hosted by the logical partition that owns the item. When two concurrent operations attempt to update the latest version of an item within a logical partition, one of them will win and the other will fail. However, if one or two operations attempting to concurrently update the same item had previously read an older value of the item, the database doesn't know if the previously read value by either or both the conflicting operations was indeed the latest value of the item. Fortunately, this situation can be detected with the **Optimistic Concurrency Control (OCC)** before letting the two operations enter the transaction boundary inside the database engine. OCC protects your data from accidentally overwriting changes that were made by others. It also prevents others from accidentally overwriting your own changes.
-
-### Implementing optimistic concurrency control using ETag and HTTP headers
-
-Every item stored in an Azure Cosmos container has a system-defined `_etag` property. The value of the `_etag` is automatically generated and updated by the server every time the item is updated. `_etag` can be used with the client-supplied `if-match` request header to allow the server to decide whether an item can be conditionally updated. If the value of the `if-match` header matches the value of the `_etag` at the server, the item is updated. If the value of the `if-match` request header is no longer current, the server rejects the operation with an "HTTP 412 Precondition failure" response message. The client can then re-fetch the item to acquire the current version from the server, or override the version of the item on the server with its own `_etag` value. In addition, `_etag` can be used with the `if-none-match` header to determine whether a refetch of a resource is needed.
-
-The item's `_etag` value changes every time the item is updated. For replace item operations, `if-match` must be explicitly expressed as a part of the request options. For an example, see the sample code in [GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L791-L887). `_etag` values are implicitly checked for all written items touched by the stored procedure. If any conflict is detected, the stored procedure will roll back the transaction and throw an exception. With this method, either all or no writes within the stored procedure are applied atomically. This is a signal to the application to reapply updates and retry the original client request.
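-
-For illustration, a minimal .NET SDK v3 sketch of the `if-match` pattern might look like the following; the `SalesOrder` type and the `/AccountNumber` partition key are assumptions made for the example, not part of the linked sample.
-
-```csharp
-using System.Net;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-public class SalesOrder
-{
-    public string id { get; set; }
-    public string AccountNumber { get; set; }
-    public decimal Total { get; set; }
-}
-
-public static class OptimisticConcurrencySketch
-{
-    public static async Task ReplaceWithEtagAsync(Container container)
-    {
-        // The read response carries the item's current _etag.
-        ItemResponse<SalesOrder> read = await container.ReadItemAsync<SalesOrder>(
-            "order-1", new PartitionKey("account-1"));
-
-        SalesOrder order = read.Resource;
-        order.Total += 10;
-
-        try
-        {
-            // IfMatchEtag sends the if-match header; the replace succeeds only if the _etag is still current.
-            await container.ReplaceItemAsync(
-                order, order.id, new PartitionKey(order.AccountNumber),
-                new ItemRequestOptions { IfMatchEtag = read.ETag });
-        }
-        catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.PreconditionFailed)
-        {
-            // HTTP 412: another writer updated the item first. Re-read the item and retry.
-        }
-    }
-}
-```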
-
-### Optimistic concurrency control and global distribution
-
-The concurrent updates of an item are subjected to the OCC by Azure Cosmos DB's communication protocol layer. For Azure Cosmos accounts configured for **single-region writes**, Azure Cosmos DB ensures that the client-side version of the item that you are updating (or deleting) is the same as the version of the item in the Azure Cosmos container. This ensures that your writes are protected from being overwritten accidentally by the writes of others and vice versa. In a multi-user environment, the optimistic concurrency control protects you from accidentally deleting or updating the wrong version of an item. As such, items are protected against the infamous "lost update" or "lost delete" problems.
-
-In an Azure Cosmos account configured with **multi-region writes**, data can be committed independently into secondary regions if its `_etag` matches that of the data in the local region. Once new data is committed locally in a secondary region, it is then merged in the hub or primary region. If the conflict resolution policy merges the new data into the hub region, this data will then be replicated globally with the new `_etag`. If the conflict resolution policy rejects the new data, the secondary region will be rolled back to the original data and `_etag`.
-
-## Next steps
-
-Learn more about database transactions and optimistic concurrency control in the following articles:
-
-- [Working with Azure Cosmos databases, containers and items](../account-databases-containers-items.md)
-- [Consistency levels](../consistency-levels.md)
-- [Conflict types and resolution policies](../conflict-resolution-policies.md)
-- [Using TransactionalBatch](transactional-batch.md)
-- [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md)
cosmos-db Defender For Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/defender-for-cosmos-db.md
- Title: 'Microsoft Defender for Azure Cosmos DB'
-description: Learn how Microsoft Defender provides advanced threat protection on Azure Cosmos DB.
--- Previously updated : 06/21/2022----
-# Microsoft Defender for Azure Cosmos DB
-
-Microsoft Defender for Azure Cosmos DB provides an extra layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit Azure Cosmos DB accounts. This layer of protection allows you to address threats without being a security expert, and to integrate the alerts with central security monitoring systems.
-
-Security alerts are triggered when anomalies in activity occur. These security alerts show up in [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/). Subscription administrators also get these alerts over email, with details of the suspicious activity and recommendations on how to investigate and remediate the threats.
-
-> [!NOTE]
->
-> * Microsoft Defender for Azure Cosmos DB is currently available only for the Core (SQL) API.
-> * Microsoft Defender for Azure Cosmos DB is not currently available in Azure government and sovereign cloud regions.
-
-For a full investigation experience of the security alerts, we recommend enabling [diagnostic logging in Azure Cosmos DB](../monitor-cosmos-db.md), which logs operations on the database itself, including CRUD operations on all documents, containers, and databases.
-
-## Threat types
-
-Microsoft Defender for Azure Cosmos DB detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. It can currently trigger the following alerts:
-
-- **Potential SQL injection attacks**: Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks can't work in Azure Cosmos DB. However, there are some variations of SQL injections that can succeed and may result in exfiltrating data from your Azure Cosmos DB accounts. Defender for Azure Cosmos DB detects both successful and failed attempts, and helps you harden your environment to prevent these threats.
-
-- **Anomalous database access patterns**: For example, access from a TOR exit node, known suspicious IP addresses, unusual applications, and unusual locations.
-
-- **Suspicious database activity**: For example, suspicious key-listing patterns that resemble known malicious lateral movement techniques and suspicious data extraction patterns.
-
-## Configure Microsoft Defender for Azure Cosmos DB
-
-See [Enable Microsoft Defender for Azure Cosmos DB](../../defender-for-cloud/defender-for-databases-enable-cosmos-protections.md).
-
-## Manage security alerts
-
-When Azure Cosmos DB activity anomalies occur, a security alert is triggered with information about the suspicious security event.
-
- From Microsoft Defender for Cloud, you can review and manage your current [security alerts](../../security-center/security-center-alerts-overview.md). Click on a specific alert in [Defender for Cloud](https://portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/0) to view possible causes and recommended actions to investigate and mitigate the potential threat. An email notification is also sent with the alert details and recommended actions.
-
-## Azure Cosmos DB alerts
-
- To see a list of the alerts generated when monitoring Azure Cosmos DB accounts, see the [Azure Cosmos DB alerts](../../security-center/alerts-reference.md#alerts-azurecosmos) section in the Microsoft Defender for Cloud documentation.
-
-## Next steps
-
-* Learn more about [Microsoft Defender for Azure Cosmos DB](../../defender-for-cloud/concept-defender-for-cosmos.md)
-* Learn more about [Diagnostic logging in Azure Cosmos DB](../cosmosdb-monitor-resource-logs.md)
cosmos-db Distribute Throughput Across Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/distribute-throughput-across-partitions.md
- Title: Redistribute throughput across partitions (preview) in Azure Cosmos DB
-description: Learn how to redistribute throughput across partitions (preview)
------ Previously updated : 05/09/2022--
-# Redistribute throughput across partitions (preview)
-
-By default, Azure Cosmos DB distributes the provisioned throughput of a database or container equally across all physical partitions. However, scenarios may arise where due to a skew in the workload or choice of partition key, certain logical (and thus physical) partitions need more throughput than others. For these scenarios, Azure Cosmos DB gives you the ability to redistribute your provisioned throughput across physical partitions. Redistributing throughput across partitions helps you achieve better performance without having to configure your overall throughput based on the hottest partition.
-
-The throughput redistribution feature applies to databases and containers that use provisioned throughput (manual and autoscale) and doesn't apply to serverless containers. You can change the throughput per physical partition by using the Azure Cosmos DB PowerShell commands.
-
-## When to use this feature
-
-In general, usage of this feature is recommended for scenarios when both of the following are true:
-
-- You're consistently seeing an overall rate of 429 responses greater than 1-5%
-- You have a consistent, predictable hot partition
-
-If you aren't seeing 429 responses and your end-to-end latency is acceptable, then no action to reconfigure RU/s per partition is required. If you have a workload that has consistent traffic with occasional unpredictable spikes across *all your partitions*, it's recommended to use [autoscale](../provision-throughput-autoscale.md) and [burst capacity (preview)](../burst-capacity.md). Autoscale and burst capacity will ensure you can meet your throughput requirements. If you have a small amount of RU/s per partition, you can also use the [partition merge (preview)](../merge.md) to reduce the number of partitions and ensure more RU/s per partition for the same total provisioned throughput.
-
-## Getting started
-
-To get started with throughput redistribution across partitions, enroll in the preview by submitting a request for the **Azure Cosmos DB Throughput Redistribution Across Partitions** feature via the [**Preview Features** page](../../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
-
-Before submitting your request:
-- Ensure that you have at least 1 Azure Cosmos DB account in the subscription. This may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.
-- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
-
-The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
-
-To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Throughput redistribution across partition**. Run the **Check eligibility for throughput redistribution across partitions preview** diagnostic.
---
-## Example scenario
-
-Suppose we have a workload that keeps track of transactions that take place in retail stores. Because most of our queries are by `StoreId`, we partition by `StoreId`. However, over time, we see that some stores have more activity than others and require more throughput to serve their workloads. We're seeing rate limiting (429) for requests against those StoreIds, and our [overall rate of 429 responses is greater than 1-5%](troubleshoot-request-rate-too-large.md#recommended-solution). Meanwhile, other stores are less active and require less throughput. Let's see how we can redistribute our throughput for better performance.
-
-## Step 1: Identify which physical partitions need more throughput
-
-There are two ways to identify if there's a hot partition.
-
-### Option 1: Use Azure Monitor metrics
-
-To verify if there's a hot partition, navigate to **Insights** > **Throughput** > **Normalized RU Consumption (%) By PartitionKeyRangeID**. Filter to a specific database and container.
-
-Each PartitionKeyRangeId maps to one physical partition. Look for one PartitionKeyRangeId that consistently has a higher normalized RU consumption than others. For example, one value is consistently at 100%, but others are at 30% or less. A pattern such as this can indicate a hot partition.
--
-### Option 2: Use Diagnostic Logs
-
-We can use the information from **CDBPartitionKeyRUConsumption** in Diagnostic Logs to get more information about the logical partition keys (and corresponding physical partitions) that are consuming the most RU/s at a second level granularity. Note the sample queries use 24 hours for illustrative purposes only - it's recommended to use at least seven days of history to understand the pattern.
-
-#### Find the physical partition (PartitionKeyRangeId) that is consuming the most RU/s over time
-
-```Kusto
-CDBPartitionKeyRUConsumption
-| where TimeGenerated >= ago(24hr)
-| where DatabaseName == "MyDB" and CollectionName == "MyCollection" // Replace with database and collection name
-| where isnotempty(PartitionKey) and isnotempty(PartitionKeyRangeId)
-| summarize sum(RequestCharge) by bin(TimeGenerated, 1s), PartitionKeyRangeId
-| render timechart
-```
-
-#### For a given physical partition, find the top 10 logical partition keys that are consuming the most RU/s over each hour
-
-```Kusto
-CDBPartitionKeyRUConsumption
-| where TimeGenerated >= ago(24hour)
-| where DatabaseName == "MyDB" and CollectionName == "MyCollection" // Replace with database and collection name
-| where isnotempty(PartitionKey) and isnotempty(PartitionKeyRangeId)
-| where PartitionKeyRangeId == 0 // Replace with PartitionKeyRangeId
-| summarize sum(RequestCharge) by bin(TimeGenerated, 1hour), PartitionKey
-| order by sum_RequestCharge desc | take 10
-```
-
-## Step 2: Determine the target RU/s for each physical partition
-
-### Determine current RU/s for each physical partition
-
-First, let's determine the current RU/s for each physical partition. You can use the new Azure Monitor metric **PhysicalPartitionThroughput** and split by the dimension **PhysicalPartitionId** to see how many RU/s you have per physical partition.
-
-Alternatively, if you haven't changed your throughput per partition before, you can use the formula:
-``Current RU/s per partition = Total RU/s / Number of physical partitions``
-
-Follow the guidance in the article [Best practices for scaling provisioned throughput (RU/s)](../scaling-provisioned-throughput-best-practices.md#step-1-find-the-current-number-of-physical-partitions) to determine the number of physical partitions.
-
-You can also use the PowerShell `Get-AzCosmosDBSqlContainerPerPartitionThroughput` and `Get-AzCosmosDBMongoDBCollectionPerPartitionThroughput` commands to read the current RU/s on each physical partition.
-
-```powershell
-# SQL API
-$somePartitions = Get-AzCosmosDBSqlContainerPerPartitionThroughput `
- -ResourceGroupName "<resource-group-name>" `
- -AccountName "<cosmos-account-name>" `
- -DatabaseName "<cosmos-database-name>" `
- -Name "<cosmos-container-name>" `
- -PhysicalPartitionIds ("<PartitionId>", "<PartitionId>")
-
-$allPartitions = Get-AzCosmosDBSqlContainerPerPartitionThroughput `
- -ResourceGroupName "<resource-group-name>" `
- -AccountName "<cosmos-account-name>" `
- -DatabaseName "<cosmos-database-name>" `
- -Name "<cosmos-container-name>" `
- -AllPartitions
-
-# API for MongoDB
-$somePartitions = Get-AzCosmosDBMongoDBCollectionPerPartitionThroughput `
- -ResourceGroupName "<resource-group-name>" `
- -AccountName "<cosmos-account-name>" `
- -DatabaseName "<cosmos-database-name>" `
- -Name "<cosmos-collection-name>" `
- -PhysicalPartitionIds ("<PartitionId>", "<PartitionId>", ...)
-
-$allPartitions = Get-AzCosmosDBMongoDBCollectionPerPartitionThroughput `
- -ResourceGroupName "<resource-group-name>" `
- -AccountName "<cosmos-account-name>" `
- -DatabaseName "<cosmos-database-name>" `
- -Name "<cosmos-collection-name>" `
- -AllPartitions
-```
-### Determine RU/s for target partition
-
-Next, let's decide how many RU/s we want to give to our hottest physical partition(s). Let's call this set our target partition(s). The most RU/s any physical partition can contain is 10,000 RU/s.
-
-The right approach depends on your workload requirements. General approaches include:
-- Increase the RU/s by a percentage, measure the rate of 429 responses, and repeat until the desired throughput is achieved.
- - If you aren't sure of the right percentage, you can start with 10% to be conservative.
- - If you already know this physical partition requires most of the throughput of the workload, you can start by doubling the RU/s or increasing it to the maximum of 10,000 RU/s, whichever is lower.
-- Increase the RU/s to `Total consumed RU/s of the physical partition + (Number of 429 responses per second * Average RU charge per request to the partition)`
- - This approach tries to estimate what the "real" RU/s consumption would have been if the requests hadn't been rate limited.
-
-### Determine RU/s for source partition
-
-Finally, let's decide how many RU/s we want to keep on our other physical partitions. This selection will determine the partitions that the target physical partition takes throughput from.
-
-In the PowerShell APIs, we must specify at least one source partition to redistribute RU/s from. We can also specify a custom minimum throughput each physical partition should have after the redistribution. If not specified, by default, Azure Cosmos DB will ensure that each physical partition has at least 100 RU/s after the redistribution. It's recommended to explicitly specify the minimum throughput.
-
-The right approach depends on your workload requirements. General approaches include:
-- Taking RU/s equally from all source partitions (works best when there are <= 10 partitions)
- - Calculate the amount we need to offset each source physical partition by: `Offset = (Total desired RU/s of target partition(s) - total current RU/s of target partition(s)) / (Total physical partitions - number of target partitions)` (see the worked sketch after this list)
- - Assign the minimum throughput for each source partition = `Current RU/s of source partition - offset`
-- Taking RU/s from the least active partition(s)
- - Use Azure Monitor metrics and Diagnostic Logs to determine which physical partition(s) have the least traffic/request volume
- - Calculate the amount we need to offset each source physical partition by: `Offset = (Total desired RU/s of target partition(s) - total current RU/s of target partition(s)) / Number of source physical partitions`
- - Assign the minimum throughput for each source partition = `Current RU/s of source partition - offset`
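-
-As a worked sketch of the equal-distribution approach, with assumed numbers only (six physical partitions at 1000 RU/s each, and a single target partition that we want to raise to 2500 RU/s):
-
-```csharp
-using System;
-
-// Assumed numbers for illustration only.
-double desiredTargetRUs = 2500;   // desired RU/s for the single target partition
-double currentTargetRUs = 1000;   // its current RU/s
-int totalPartitions = 6;
-int targetPartitions = 1;
-double currentSourceRUs = 1000;   // current RU/s of each source partition
-
-double offset = (desiredTargetRUs - currentTargetRUs) / (totalPartitions - targetPartitions); // 300 RU/s
-double sourceMinimum = currentSourceRUs - offset;                                             // 700 RU/s
-
-// The total stays at 6000 RU/s: 2500 (target) + 5 x 700 (sources).
-Console.WriteLine($"Offset: {offset} RU/s, minimum per source partition: {sourceMinimum} RU/s");
-```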
-
-## Step 3: Programmatically change the throughput across partitions
-
-You can use the PowerShell command `Update-AzCosmosDBSqlContainerPerPartitionThroughput` to redistribute throughput.
-
-For the following example, suppose we have a container that has 6000 RU/s total (either 6000 manual RU/s or autoscale 6000 RU/s) and 3 physical partitions. Based on our analysis, we want a layout where:
-
-- Physical partition 0: 1000 RU/s
-- Physical partition 1: 4000 RU/s
-- Physical partition 2: 1000 RU/s
-
-We specify partitions 0 and 2 as our source partitions and specify that, after the redistribution, they should each have a minimum of 1000 RU/s. Partition 1 is our target partition, which we specify should have 4000 RU/s.
-
-```powershell
-$SourcePhysicalPartitionObjects = @()
-$SourcePhysicalPartitionObjects += New-AzCosmosDBPhysicalPartitionThroughputObject -Id "0" -Throughput 1000
-$SourcePhysicalPartitionObjects += New-AzCosmosDBPhysicalPartitionThroughputObject -Id "2" -Throughput 1000
-
-$TargetPhysicalPartitionObjects = @()
-$TargetPhysicalPartitionObjects += New-AzCosmosDBPhysicalPartitionThroughputObject -Id "1" -Throughput 4000
-
-# SQL API
-Update-AzCosmosDBSqlContainerPerPartitionThroughput `
- -ResourceGroupName "<resource-group-name>" `
- -AccountName "<cosmos-account-name>" `
- -DatabaseName "<cosmos-database-name>" `
- -Name "<cosmos-container-name>" `
- -SourcePhysicalPartitionThroughputObject $SourcePhysicalPartitionObjects `
- -TargetPhysicalPartitionThroughputObject $TargetPhysicalPartitionObjects
-
-# API for MongoDB
-Update-AzCosmosDBMongoDBCollectionPerPartitionThroughput `
- -ResourceGroupName "<resource-group-name>" `
- -AccountName "<cosmos-account-name>" `
- -DatabaseName "<cosmos-database-name>" `
- -Name "<cosmos-collection-name>" `
- -SourcePhysicalPartitionThroughputObject $SourcePhysicalPartitionObjects `
- -TargetPhysicalPartitionThroughputObject $TargetPhysicalPartitionObjects
-```
-
-After you've completed the redistribution, you can verify the change by viewing the **PhysicalPartitionThroughput** metric in Azure Monitor. Split by the dimension **PhysicalPartitionId** to see how many RU/s you have per physical partition.
-
-If necessary, you can also reset the RU/s per physical partition so that the RU/s of your container are evenly distributed across all physical partitions.
-
-```powershell
-# SQL API
-$resetPartitions = Update-AzCosmosDBSqlContainerPerPartitionThroughput `
- -ResourceGroupName "<resource-group-name>" `
- -AccountName "<cosmos-account-name>" `
- -DatabaseName "<cosmos-database-name>" `
- -Name "<cosmos-container-name>" `
- -EqualDistributionPolicy
-
-# API for MongoDB
-$resetPartitions = Update-AzCosmosDBMongoDBCollectionPerPartitionThroughput `
- -ResourceGroupName "<resource-group-name>" `
- -AccountName "<cosmos-account-name>" `
- -DatabaseName "<cosmos-database-name>" `
- -Name "<cosmos-collection-name>" `
- -EqualDistributionPolicy
-```
-
-## Step 4: Verify and monitor your RU/s consumption
-
-After you've completed the redistribution, you can verify the change by viewing the **PhysicalPartitionThroughput** metric in Azure Monitor. Split by the dimension **PhysicalPartitionId** to see how many RU/s you have per physical partition.
-
-It's recommended to monitor your overall rate of 429 responses and RU/s consumption. For more information, review [Step 1](#step-1-identify-which-physical-partitions-need-more-throughput) to validate you've achieved the performance you expect.
-
-After the changes, assuming your overall workload hasn't changed, you'll likely see that both the target and source physical partitions have higher [Normalized RU consumption](../monitor-normalized-request-units.md) than previously. Higher normalized RU consumption is expected behavior. Essentially, you have allocated RU/s closer to what each partition actually needs to consume, so higher normalized RU consumption means that each partition is fully utilizing its allocated RU/s. You should also expect to see a lower overall rate of 429 exceptions, as the hot partitions now have more RU/s to serve requests.
-
-## Limitations
-
-### Preview eligibility criteria
-To enroll in the preview, your Cosmos account must meet all the following criteria:
- - Your Cosmos account is using SQL API or API for MongoDB.
- - If you're using API for MongoDB, the version must be >= 3.6.
- - Your Cosmos account is using provisioned throughput (manual or autoscale). Distribution of throughput across partitions doesn't apply to serverless accounts.
- - If you're using SQL API, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When the ability to redistribute throughput across partitions is enabled on your account, all requests sent from non .NET SDKs or older .NET SDK versions won't be accepted.
- - Your Cosmos account isn't using any unsupported connectors:
- - Azure Data Factory
- - Azure Stream Analytics
- - Logic Apps
- - Azure Functions
- - Azure Search
- - Azure Cosmos DB Spark connector
- - Azure Cosmos DB data migration tool
- - Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
-
-### SDK requirements (SQL API only)
-
-Throughput redistribution across partitions is supported only with the latest version of the .NET v3 SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. There are no driver or SDK requirements to use this feature for API for MongoDB accounts.
-
-Find the latest preview version of the supported SDK:
-
-| SDK | Supported versions | Package manager link |
-| | | |
-| **.NET SDK v3** | *>= 3.27.0* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/> |
-
-Support for other SDKs is planned for the future.
-
-> [!TIP]
-> You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using the legacy .NET V2 SDK, follow the [.NET SDK v3 migration guide](migrate-dotnet-v3.md).
-
-### Unsupported connectors
-
-If you enroll in the preview, the following connectors will fail.
-
-* Azure Data Factory<sup>1</sup>
-* Azure Stream Analytics<sup>1</sup>
-* Logic Apps<sup>1</sup>
-* Azure Functions<sup>1</sup>
-* Azure Search<sup>1</sup>
-* Azure Cosmos DB Spark connector<sup>1</sup>
-* Azure Cosmos DB data migration tool
-* Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
-
-<sup>1</sup>Support for these connectors is planned for the future.
-
-## Next steps
-
-Learn about how to use provisioned throughput with the following articles:
-
-* Learn more about [provisioned throughput.](../set-throughput.md)
-* Learn more about [request units.](../request-units.md)
-* Need to monitor for hot partitions? See [monitoring request units.](../monitor-normalized-request-units.md#how-to-monitor-for-hot-partitions)
-* Want to learn the best practices? See [best practices for scaling provisioned throughput.](../scaling-provisioned-throughput-best-practices.md)
cosmos-db Dynamo To Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/dynamo-to-cosmos.md
- Title: Migrate your application from Amazon DynamoDB to Azure Cosmos DB
-description: Learn how to migrate your .NET application from Amazon's DynamoDB to Azure Cosmos DB
---- Previously updated : 05/02/2020---
-# Migrate your application from Amazon DynamoDB to Azure Cosmos DB
-
-Azure Cosmos DB is a scalable, globally distributed, fully managed database. It provides guaranteed low latency access to your data. To learn more about Azure Cosmos DB, see the [overview](../introduction.md) article. This article describes how to migrate your .NET application from DynamoDB to Azure Cosmos DB with minimal code changes.
-
-## Conceptual differences
-
-The following are the key conceptual differences between Azure Cosmos DB and DynamoDB:
-
-| DynamoDB | Azure Cosmos DB |
-|||
-|Not applicable| Database |
-|Table | Collection |
-| Item | Document |
-|Attribute|Field|
-|Secondary Index|Secondary Index|
-|Primary Key – Partition key|Partition Key|
-|Primary Key – Sort Key| Not Required |
-|Stream|ChangeFeed|
-|Write Capacity Unit|Request Unit (Flexible, can be used for reads or writes)|
-|Read Capacity Unit |Request Unit (Flexible, can be used for reads or writes)|
-|Global Tables| Not Required. You can directly select the region while provisioning the Azure Cosmos account (you can change the region later)|
-
-## Structural differences
-
-Azure Cosmos DB has a simpler JSON structure when compared to that of DynamoDB. The following example shows the differences
-
-**DynamoDB**:
-
-The following JSON object represents the data format in DynamoDB
-
-```json
-{
-TableName: "Music",
-KeySchema: [
-{
- AttributeName: "Artist",
- KeyType: "HASH", //Partition key
-},
-{
- AttributeName: "SongTitle",
- KeyType: "RANGE" //Sort key
-}
-],
-AttributeDefinitions: [
-{
- AttributeName: "Artist",
- AttributeType: "S"
-},
-{
- AttributeName: "SongTitle",
- AttributeType: "S"
-}
-],
-ProvisionedThroughput: {
- ReadCapacityUnits: 1,
- WriteCapacityUnits: 1
- }
-}
- ```
-
-**Azure Cosmos DB**:
-
-The following JSON object represents the data format in Azure Cosmos DB
-
-```json
-{
-"Artist": "",
-"SongTitle": "",
-"AlbumTitle": "",
-"Year": 9999,
-"Price": 0.0,
-"Genre": "",
-"Tags": ""
-}
-```
-
-## Migrate your data
-
-There are various options available to migrate your data to Azure Cosmos DB. To learn more, see the [Options to migrate your on-premises or cloud data to Azure Cosmos DB](../cosmosdb-migrationchoices.md) article.
-
-## Migrate your code
-
-This article is scoped to migrating an application's code to Azure Cosmos DB, which is the critical aspect of database migration. To help reduce the learning curve, the following sections include a side-by-side code comparison between Amazon DynamoDB and Azure Cosmos DB's equivalent code snippets.
-
-To download the source code, clone the following repo:
-
-```bash
-git clone https://github.com/Azure-Samples/DynamoDB-to-CosmosDB
-```
-
-### Pre-requisites
-
-- .NET Framework 4.7.2
-- Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
-- Access to Azure Cosmos DB SQL API Account
-- Local installation of Amazon DynamoDB
-- Java 8
-- Run the downloadable version of Amazon DynamoDB at port 8000 (you can change and configure the code)
-
-### Set up your code
-
-Add the following "NuGet package" to your project:
-
-```bash
-Install-Package Microsoft.Azure.Cosmos
-```
-
-### Establish connection
-
-**DynamoDB**:
-
-In Amazon DynamoDB, the following code is used to connect:
-
-```csharp
- AmazonDynamoDBConfig addbConfig = new AmazonDynamoDBConfig();
- addbConfig.ServiceURL = "endpoint";
- try { aws_dynamodbclient = new AmazonDynamoDBClient( addbConfig ); }
- catch (Exception ex) { Console.WriteLine("Error creating the DynamoDB client: " + ex.Message); }
-```
-
-**Azure Cosmos DB**:
-
-To connect Azure Cosmos DB, update your code to:
-
-```csharp
-client_documentDB = new CosmosClient("your connectionstring from the Azure portal");
-```
-
-**Optimize the connection in Azure Cosmos DB**
-
-With Azure Cosmos DB, you can use the following options to optimize your connection:
-
-* **ConnectionMode** - Use direct connection mode to connect to the data nodes in the Azure Cosmos DB service. Use gateway mode only to initialize and cache the logical addresses and refresh on updates. For more information, see [connectivity modes](sql-sdk-connection-modes.md).
-
-* **ApplicationRegion** - This option is used to set the preferred geo-replicated region that is used to interact with Azure Cosmos DB. For more information, see [global distribution](../distribute-data-globally.md).
-
-* **ConsistencyLevel** - This option is used to override default consistency level. For more information, see [consistency levels](../consistency-levels.md).
-
-* **BulkExecutionMode** - This option is used to execute bulk operations by setting the *AllowBulkExecution* property to true. For more information, see [bulk import](tutorial-sql-api-dotnet-bulk-import.md).
-
- ```csharp
- client_cosmosDB = new CosmosClient(" Your connection string ",new CosmosClientOptions()
- {
- ConnectionMode=ConnectionMode.Direct,
- ApplicationRegion=Regions.EastUS2,
- ConsistencyLevel=ConsistencyLevel.Session,
- AllowBulkExecution=true
- });
- ```
-
-### Create the container
-
-**DynamoDB**:
-
-To store data in Amazon DynamoDB, you need to create the table first. In the table creation process, you define the schema, key type, and attributes, as shown in the following code:
-
-```csharp
-// movies_key_schema
-public static List<KeySchemaElement> movies_key_schema
- = new List<KeySchemaElement>
-{
- new KeySchemaElement
- {
- AttributeName = partition_key_name,
- KeyType = "HASH"
- },
- new KeySchemaElement
- {
- AttributeName = sort_key_name,
- KeyType = "RANGE"
- }
-};
-
-// key names for the Movies table
-public const string partition_key_name = "year";
-public const string sort_key_name = "title";
- public const int readUnits=1, writeUnits=1;
-
- // movie_items_attributes
- public static List<AttributeDefinition> movie_items_attributes
- = new List<AttributeDefinition>
-{
- new AttributeDefinition
- {
- AttributeName = partition_key_name,
- AttributeType = "N"
- },
- new AttributeDefinition
- {
- AttributeName = sort_key_name,
- AttributeType = "S"
- }
-};
-
-CreateTableRequest request;
-CreateTableResponse response;
-
-// Build the 'CreateTableRequest' structure for the new table
-request = new CreateTableRequest
-{
- TableName = table_name,
- AttributeDefinitions = table_attributes,
- KeySchema = table_key_schema,
- // Provisioned-throughput settings are always required,
- // although the local test version of DynamoDB ignores them.
- ProvisionedThroughput = new ProvisionedThroughput( readUnits, writeUnits )
-};
-```
-
-**Azure Cosmos DB**:
-
-In Amazon DynamoDB, you need to provision read capacity units and write capacity units. In Azure Cosmos DB, you instead specify the throughput as [Request Units (RU/s)](../request-units.md), which can be used dynamically for any operation. The data is organized as database --> container --> item. You can specify the throughput at the database level, at the container level, or both.
-
-To create a database:
-
-```csharp
-await client_cosmosDB.CreateDatabaseIfNotExistsAsync(movies_table_name);
-```
-
-To create the container:
-
-```csharp
-await cosmosDatabase.CreateContainerIfNotExistsAsync(new ContainerProperties() { PartitionKeyPath = "/" + partitionKey, Id = new_collection_name }, provisionedThroughput);
-```
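-
-`cosmosDatabase` above is the `Database` handle returned by the first call. A minimal sketch that wires the two calls together and captures the container used in the later snippets (variable names assumed) might look like:
-
-```csharp
-DatabaseResponse databaseResponse = await client_cosmosDB.CreateDatabaseIfNotExistsAsync(movies_table_name);
-Database cosmosDatabase = databaseResponse.Database;
-
-ContainerResponse containerResponse = await cosmosDatabase.CreateContainerIfNotExistsAsync(
-    new ContainerProperties() { PartitionKeyPath = "/" + partitionKey, Id = new_collection_name },
-    provisionedThroughput); // throughput in RU/s, for example 400
-Container moviesContainer = containerResponse.Container;
-```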
-
-### Load the data
-
-**DynamoDB**:
-
-The following code shows how to load the data into Amazon DynamoDB. The `moviesArray` is a list of JSON documents; you iterate through it and load each JSON document into Amazon DynamoDB:
-
-```csharp
-int n = moviesArray.Count;
-for( int i = 0, j = 99; i < n; i++ )
- {
- try
- {
- string itemJson = moviesArray[i].ToString();
- Document doc = Document.FromJson(itemJson);
- Task putItem = moviesTable.PutItemAsync(doc);
- if( i >= j )
- {
- j++;
- Console.Write( "{0,5:#,##0}, ", j );
- if( j % 1000 == 0 )
- Console.Write( "\n " );
- j += 99;
- }
- await putItem;
- }
- catch( Exception ex )
- {
- Console.WriteLine( "\n ERROR: Could not write the movie record #{0:#,##0}, because:\n {1}", i, ex.Message );
- }
-}
-```
-
-**Azure Cosmos DB**:
-
-In Azure Cosmos DB, you can opt for a stream-based write with `moviesContainer.CreateItemStreamAsync()` (a sketch of that alternative follows the typed example below). In this sample, however, the JSON is deserialized into the *MovieModel* type to demonstrate type casting. The code issues the writes concurrently, which takes advantage of Azure Cosmos DB's distributed architecture and speeds up the loading:
-
-```csharp
-List<Task> concurrentTasks = new List<Task>();
-for (int i = 0, j = 99; i < n; i++)
-{
- try
- {
- MovieModel doc= JsonConvert.DeserializeObject<MovieModel>(moviesArray[i].ToString());
- doc.Id = Guid.NewGuid().ToString();
- concurrentTasks.Add(moviesContainer.CreateItemAsync(doc,new PartitionKey(doc.Year)));
- {
- j++;
- Console.Write("{0,5:#,##0}, ", j);
- if (j % 1000 == 0)
- Console.Write("\n ");
- j += 99;
- }
-
- }
- catch (Exception ex)
- {
- Console.WriteLine("\n ERROR: Could not write the movie record #{0:#,##0}, because:\n {1}",
- i, ex.Message);
- operationFailed = true;
- break;
- }
-}
-await Task.WhenAll(concurrentTasks);
-```
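-
-If you prefer the stream-based write mentioned above, a minimal sketch (the serialization approach and error handling are assumptions) might look like:
-
-```csharp
-using System.IO;
-using System.Net;
-using System.Text;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-using Newtonsoft.Json;
-
-public static class StreamWriteSketch
-{
-    public static async Task CreateWithStreamAsync(Container moviesContainer, MovieModel doc)
-    {
-        byte[] payload = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(doc));
-        using MemoryStream stream = new MemoryStream(payload);
-
-        // The stream API bypasses the SDK's typed serialization and returns the raw response.
-        using ResponseMessage response = await moviesContainer.CreateItemStreamAsync(
-            stream, new PartitionKey(doc.Year));
-
-        if (response.StatusCode != HttpStatusCode.Created)
-        {
-            // Inspect response.StatusCode and response.ErrorMessage to handle the failure.
-        }
-    }
-}
-```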
-
-### Create a document
-
-**DynamoDB**:
-
-Writing a new document in Amazon DynamoDB isn't type safe. The following example uses `newItem` as the document type:
-
-```csharp
-Task<Document> writeNew = moviesTable.PutItemAsync(newItem, token);
-await writeNew;
-```
-
-**Azure Cosmos DB**:
-
-Azure Cosmos DB provides type safety via a data model. This sample uses a data model named `MovieModel`:
-
-```csharp
-public class MovieModel
-{
- [JsonProperty("id")]
- public string Id { get; set; }
- [JsonProperty("title")]
- public string Title{ get; set; }
- [JsonProperty("year")]
- public int Year { get; set; }
- public MovieModel(string title, int year)
- {
- this.Title = title;
- this.Year = year;
- }
- public MovieModel()
- {
-
- }
- [JsonProperty("info")]
- public MovieInfo MovieInfo { get; set; }
-
- internal string PrintInfo()
- {
- if(this.MovieInfo!=null)
- return string.Format("\nMovie with Title: {1}\n Year: {2}, Actors: {3}\n Directors:{4}\n Rating:{5}\n", this.Id, this.Title, this.Year, String.Join(",",this.MovieInfo.Actors), String.Join(",",this.MovieInfo.Directors), this.MovieInfo.Rating);
- else
- return string.Format("\nMovie with Title: {0}\n Year: {1}\n", this.Title, this.Year);
- }
-}
-```
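-
-The `MovieInfo` type referenced above isn't shown in this article. A minimal sketch consistent with how `MovieModel` uses it (plot, rating, actors, and directors) might look like:
-
-```csharp
-using System.Collections.Generic;
-using Newtonsoft.Json;
-
-public class MovieInfo
-{
-    [JsonProperty("plot")]
-    public string Plot { get; set; }
-
-    [JsonProperty("rating")]
-    public double Rating { get; set; }
-
-    [JsonProperty("actors")]
-    public List<string> Actors { get; set; }
-
-    [JsonProperty("directors")]
-    public List<string> Directors { get; set; }
-}
-```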
-
-In Azure Cosmos DB, `newItem` will be a `MovieModel`:
-
-```csharp
- MovieModel movieModel = new MovieModel()
- {
- Id = Guid.NewGuid().ToString(),
- Title = "The Big New Movie",
- Year = 2018,
- MovieInfo = new MovieInfo() { Plot = "Nothing happens at all.", Rating = 0 }
- };
- var writeNew= moviesContainer.CreateItemAsync(movieModel, new Microsoft.Azure.Cosmos.PartitionKey(movieModel.Year));
- await writeNew;
-```
-
-### Read a document
-
-**DynamoDB**:
-
-To read in Amazon DynamoDB, you need to define primitives:
-
-```csharp
-// Create Primitives for the HASH and RANGE portions of the primary key
-Primitive hash = new Primitive(year.ToString(), true);
-Primitive range = new Primitive(title, false);
-
- Task<Document> readMovie = moviesTable.GetItemAsync(hash, range, token);
- movie_record = await readMovie;
-```
-
-**Azure Cosmos DB**:
-
-However, with Azure Cosmos DB the query is natural (LINQ):
-
-```csharp
-IQueryable<MovieModel> movieQuery = moviesContainer.GetItemLinqQueryable<MovieModel>(true)
- .Where(f => f.Year == year && f.Title == title);
-// The query is executed synchronously here, but it can also be executed asynchronously by converting it to a FeedIterator with ToFeedIterator()
- foreach (MovieModel movie in movieQuery)
- {
- movie_record_cosmosdb = movie;
- }
-```
-
-The documents collection in the above example is:
-
-- type safe
-- able to provide a natural query option
-
-### Update an item
-
-**DynamoDB**:
-To update the item in Amazon DynamoDB:
-
-```csharp
-updateResponse = await client.UpdateItemAsync( updateRequest );
-```
-
-**Azure Cosmos DB**:
-
-In Azure Cosmos DB, an update is treated as an upsert operation, meaning the document is inserted if it doesn't exist:
-
-```csharp
-await moviesContainer.UpsertItemAsync<MovieModel>(updatedMovieModel);
-```
-
-### Delete a document
-
-**DynamoDB**:
-
-To delete an item in Amazon DynamoDB, you again need to fall back on primitives:
-
-```csharp
-Primitive hash = new Primitive(year.ToString(), true);
- Primitive range = new Primitive(title, false);
- DeleteItemOperationConfig deleteConfig = new DeleteItemOperationConfig( );
- deleteConfig.ConditionalExpression = condition;
- deleteConfig.ReturnValues = ReturnValues.AllOldAttributes;
-
- Task<Document> delItem = table.DeleteItemAsync( hash, range, deleteConfig );
- deletedItem = await delItem;
-```
-
-**Azure Cosmos DB**:
-
-In Azure Cosmos DB, you can query for the documents and delete them asynchronously:
-
-```csharp
-var result= ReadingMovieItem_async_List_CosmosDB("select * from c where c.info.rating>7 AND c.year=2018 AND c.title='The Big New Movie'");
-while (result.HasMoreResults)
-{
- var resultModel = await result.ReadNextAsync();
- foreach (var movie in resultModel.ToList<MovieModel>())
- {
- await moviesContainer.DeleteItemAsync<MovieModel>(movie.Id, new PartitionKey(movie.Year));
- }
- }
-```
-
-### Query documents
-
-**DynamoDB**:
-
-In Amazon DynamoDB, API functions are required to query the data:
-
-```csharp
-QueryOperationConfig config = new QueryOperationConfig( );
- config.Filter = new QueryFilter( );
- config.Filter.AddCondition( "year", QueryOperator.Equal, new DynamoDBEntry[ ] { 1992 } );
- config.Filter.AddCondition( "title", QueryOperator.Between, new DynamoDBEntry[ ] { "B", "Hzz" } );
- config.AttributesToGet = new List<string> { "year", "title", "info" };
- config.Select = SelectValues.SpecificAttributes;
- search = moviesTable.Query( config );
-```
-
-**Azure Cosmos DB**:
-
-In Azure Cosmos DB, you can do projection and filtering inside a simple SQL query:
-
-```csharp
-var result = moviesContainer.GetItemQueryIterator<MovieModel>(
- "select c.year, c.title, c.info from c where c.year = 1998 AND (CONTAINS(c.title,'B') OR CONTAINS(c.title,'Hzz'))");
-```
-
-For range operations, for example, 'between', you need to do a scan in Amazon DynamoDB:
-
-```csharp
-ScanRequest sRequest = new ScanRequest
-{
- TableName = "Movies",
- ExpressionAttributeNames = new Dictionary<string, string>
- {
- { "#yr", "year" }
- },
- ExpressionAttributeValues = new Dictionary<string, AttributeValue>
- {
- { ":y_a", new AttributeValue { N = "1960" } },
- { ":y_z", new AttributeValue { N = "1969" } },
- },
- FilterExpression = "#yr between :y_a and :y_z",
- ProjectionExpression = "#yr, title, info.actors[0], info.directors, info.running_time_secs"
-};
-
-ClientScanning_async( sRequest ).Wait( );
-```
-
-In Azure Cosmos DB, you can use a SQL query in a single statement:
-
-```csharp
-var result = moviesContainer.GetItemQueryIterator<MovieModel>(
- "select c.title, c.info.actors[0], c.info.directors, c.info.running_time_secs from c where c.year between 1960 and 1969");
-```
-
-### Delete a container
-
-**DynamoDB**:
-
-To delete the table in Amazon DynamoDB, you can specify:
-
-```csharp
-client.DeleteTableAsync( tableName );
-```
-
-**Azure Cosmos DB**:
-
-To delete the collection in Azure Cosmos DB, you can specify:
-
-```csharp
-await moviesContainer.DeleteContainerAsync();
-```
-Then delete the database too if you need:
-
-```csharp
-await cosmosDatabase.DeleteAsync();
-```
-
-As you can see, Azure Cosmos DB supports natural SQL queries, and its operations are asynchronous and simpler to work with. You can easily migrate your complex code to Azure Cosmos DB, and that code becomes simpler after the migration.
-
-### Next Steps
-
-- Learn about [performance optimization](performance-tips.md).
-- Learn how to [optimize reads and writes](../key-value-store-cost.md).
-- Learn about [Monitoring in Cosmos DB](../monitor-cosmos-db.md).
-
cosmos-db Estimate Ru With Capacity Planner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/estimate-ru-with-capacity-planner.md
- Title: Estimate costs using the Azure Cosmos DB capacity planner - SQL API
-description: The Azure Cosmos DB capacity planner allows you to estimate the throughput (RU/s) required and cost for your workload. This article describes how to use the capacity planner to estimate the throughput and cost required when using SQL API.
---- Previously updated : 08/26/2021----
-# Estimate RU/s using the Azure Cosmos DB capacity planner - SQL API
-
-> [!NOTE]
-> If you are planning a data migration to Azure Cosmos DB and all that you know is the number of vcores and servers in your existing sharded and replicated database cluster, please also read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
->
-
-Configuring your Azure Cosmos databases and containers with the right amount of provisioned throughput, or [Request Units (RU/s)](../request-units.md), for your workload is essential to optimizing cost and performance. This article describes how to use the Azure Cosmos DB [capacity planner](https://cosmos.azure.com/capacitycalculator/) to get an estimate of the required RU/s and cost of your workload when using the SQL API. If you are using the API for MongoDB, see the [use capacity calculator with MongoDB](../mongodb/estimate-ru-capacity-planner.md) article.
--
-## <a id="basic-mode"></a>Estimate provisioned throughput and cost using basic mode
-To get a quick estimate for your workload using the basic mode, navigate to the [capacity planner](https://cosmos.azure.com/capacitycalculator/). Enter in the following parameters based on your workload:
-
-|**Input** |**Description** |
-|||
-| API |Choose SQL (Core) API |
-|Number of regions|Azure Cosmos DB is available in all Azure regions. Select the number of regions required for your workload. You can associate any number of regions with your Cosmos account. See [global distribution](../distribute-data-globally.md) in Azure Cosmos DB for more details.|
-|Multi-region writes|If you enable [multi-region writes](../distribute-data-globally.md#key-benefits-of-global-distribution), your application can read and write to any Azure region. If you disable multi-region writes, your application can write data to a single region. <br/><br/> Enable multi-region writes if you expect to have an active-active workload that requires low latency writes in different regions. For example, an IOT workload that writes data to the database at high volumes in different regions. <br/><br/> Multi-region writes guarantees 99.999% read and write availability. Multi-region writes require more throughput when compared to the single write regions. To learn more, see [how RUs are different for single and multiple-write regions](../optimize-cost-regions.md) article.|
-|Total data stored in transactional store |Total estimated data stored(GB) in the transactional store in a single region.|
-|Use analytical store| Choose **On** if you want to use analytical store. Enter the **Total data stored in analytical store**, it represents the estimated data stored (GB) in the analytical store in a single region. |
-|Item size|The estimated size of the data item (for example, document), ranging from 1 KB to 2 MB. |
-|Queries/sec |Number of queries expected per second per region. The average RU charge to run a query is estimated at 10 RUs. |
-|Point reads/sec |Number of point read operations expected per second per region. Point reads are the key/value lookup on a single item ID and a partition key. To learn more about point reads, see the [options to read data](../optimize-cost-reads-writes.md#reading-data-point-reads-and-queries) article. |
-|Creates/sec |Number of create operations expected per second per region. |
-|Updates/sec |Number of update operations expected per second per region. When you choose automatic indexing, the estimated RU/s for the update operation is calculated as one property being changed per an update. |
-|Deletes/sec |Number of delete operations expected per second per region. |
-
-After filling the required details, select **Calculate**. The **Cost Estimate** tab shows the total cost for storage and provisioned throughput. You can expand the **Show Details** link in this tab to get the breakdown of the throughput required for different CRUD and query requests. Each time you change the value of any field, select **Calculate** to recalculate the estimated cost.
--
-## <a id="advanced-mode"></a>Estimate provisioned throughput and cost using advanced mode
-
-Advanced mode allows you to provide more settings that impact the RU/s estimate. To use this option, navigate to the [capacity planner](https://cosmos.azure.com/capacitycalculator/) and **sign in** to the tool with an account you use for Azure. The sign-in option is available at the right-hand corner.
-
-After you sign in, you'll see more fields than in basic mode. Enter the other parameters based on your workload.
-
-|**Input** |**Description** |
-|||
-|API|Azure Cosmos DB is a multi-model and multi-API service. Choose the SQL (Core) API. |
-|Number of regions|Azure Cosmos DB is available in all Azure regions. Select the number of regions required for your workload. You can associate any number of regions with your Cosmos account. See [global distribution](../distribute-data-globally.md) in Azure Cosmos DB for more details.|
-|Multi-region writes|If you enable [multi-region writes](../distribute-data-globally.md#key-benefits-of-global-distribution), your application can read and write to any Azure region. If you disable multi-region writes, your application can write data to a single region. <br/><br/> Enable multi-region writes if you expect to have an active-active workload that requires low-latency writes in different regions. For example, an IoT workload that writes data to the database at high volumes in different regions. <br/><br/> Multi-region writes guarantee 99.999% read and write availability. Multi-region writes require more throughput compared to single write regions. To learn more, see the [how RUs are different for single and multiple-write regions](../optimize-cost-regions.md) article.|
-|Default consistency|Azure Cosmos DB supports five consistency levels that allow developers to balance the tradeoffs between consistency, availability, and latency. To learn more, see the [consistency levels](../consistency-levels.md) article. <br/><br/> By default, Azure Cosmos DB uses session consistency, which guarantees the ability to read your own writes in a session. <br/><br/> Choosing strong or bounded staleness will require double the required RU/s for reads, when compared to session, consistent prefix, and eventual consistency. Strong consistency with multi-region writes is not supported and will automatically default to single-region writes with strong consistency. |
-|Indexing policy|By default, Azure Cosmos DB [indexes all properties](../index-policy.md) in all items for flexible and efficient queries (maps to the **Automatic** indexing policy). <br/><br/> If you choose **off**, none of the properties are indexed. This results in the lowest RU charge for writes. Select **off** policy if you expect to only do [point reads](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) (key value lookups) and/or writes, and no queries. <br/><br/> If you choose **Automatic**, Azure Cosmos DB automatically indexes all the items as they are written. <br/><br/> **Custom** indexing policy allows you to include or exclude specific properties from the index for lower write throughput and storage. To learn more, see [indexing policy](../index-overview.md) and [sample indexing policies](how-to-manage-indexing-policy.md#indexing-policy-examples) articles.|
-|Total data stored in transactional store |Total estimated data stored (GB) in the transactional store in a single region.|
-|Use analytical store| Choose **On** if you want to use analytical store. Enter the **Total data stored in analytical store**, which represents the estimated data stored (GB) in the analytical store in a single region. |
-|Workload mode|Select the **Steady** option if your workload volume is constant. <br/><br/> Select the **Variable** option if your workload volume changes over time. For example, during a specific day or a month. The following setting is available if you choose the variable workload option:<ul><li>Percentage of time at peak: Percentage of time in a month where your workload requires peak (highest) throughput. </li></ul> <br/><br/> For example, if you have a workload that has high activity during 9am–6pm weekday business hours, then the percentage of time at peak is: 45 hours at peak / 730 hours / month = ~6%.<br/><br/>With peak and off-peak intervals, you can optimize your cost by [programmatically scaling your provisioned throughput](../set-throughput.md#update-throughput-on-a-database-or-a-container) up and down accordingly.|
-|Item size|The size of the data item (for example, document), ranging from 1 KB to 2 MB. You can add estimates for multiple sample items. <br/><br/>You can also **Upload sample (JSON)** document for a more accurate estimate.<br/><br/>If your workload has multiple types of items (with different JSON content) in the same container, you can upload multiple JSON documents and get the estimate. Use the **Add new item** button to add multiple sample JSON documents.|
-| Number of properties | The average number of properties per item. |
-|Point reads/sec |Number of point read operations expected per second per region. Point reads are the key/value lookup on a single item ID and a partition key. Point read operations are different from query read operations. To learn more about point reads, see the [options to read data](../optimize-cost-reads-writes.md#reading-data-point-reads-and-queries) article. If your workload mode is **Variable**, you can provide the expected number of point read operations at peak and off peak. |
-|Creates/sec |Number of create operations expected per second per region. |
-|Updates/sec |Number of update operations expected per second per region. |
-|Deletes/sec |Number of delete operations expected per second per region. |
-|Queries/sec |Number of queries expected per second per region. For an accurate estimate, either use the average cost of queries or enter the RU/s your queries use from query stats in Azure portal. |
-| Average RU/s charge per query | By default, the average cost of queries/sec per region is estimated at 10 RU/s. You can increase or decrease it based on your estimated query charge.|
-
-You can also use the **Save Estimate** button to download a CSV file containing the current estimate.
--
-The prices shown in the Azure Cosmos DB capacity planner are estimates based on the public pricing rates for throughput and storage. All prices are shown in US dollars. Refer to the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to see all rates by region.
-
-## Next steps
-
-* If all you know is the number of vcores and servers in your existing sharded and replicated database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* Learn more about [Azure Cosmos DB's pricing model](../how-pricing-works.md).
-* Create a new [Cosmos account, database, and container](create-cosmosdb-resources-portal.md).
-* Learn how to [optimize provisioned throughput cost](../optimize-cost-throughput.md).
-* Learn how to [optimize cost with reserved capacity](../cosmos-db-reserved-capacity.md).
-
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/find-request-unit-charge.md
- Title: Find request unit charge for a SQL query in Azure Cosmos DB
-description: Find the request unit charge for SQL queries against containers created with Azure Cosmos DB, using the Azure portal, .NET, Java, Python, or Node.js.
---- Previously updated : 06/02/2022---- devx-track-js-- kr2b-contr-experiment--
-# Find the request unit charge for operations in Azure Cosmos DB SQL API
-
-Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
-
-The cost of all database operations is normalized by Azure Cosmos DB and is expressed by *request units* (RU). *Request charge* is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your container, and whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see [Request Units in Azure Cosmos DB](../request-units.md).
-
-This article presents the different ways that you can find the request unit consumption for any operation run against a container in Azure Cosmos DB SQL API. If you're using a different API, see the [API for MongoDB](../mongodb/find-request-unit-charge-mongodb.md) and [Cassandra API](../cassandr) articles.
-
-Currently, you can measure consumption only by using the Azure portal or by inspecting the response sent from Azure Cosmos DB through one of the SDKs. If you're using the SQL API, you have multiple options for finding the request charge for an operation.
-
-## Use the Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. [Create a new Azure Cosmos account](create-sql-api-dotnet.md#create-account) and feed it with data, or select an existing Azure Cosmos account that already contains data.
-
-1. Go to the **Data Explorer** pane, and then select the container you want to work on.
-
-1. Select **New SQL Query**.
-
-1. Enter a valid query, and then select **Execute Query**.
-
-1. Select **Query Stats** to display the actual request charge for the request you executed.
-
- :::image type="content" source="../media/find-request-unit-charge/portal-sql-query.png" alt-text="Screenshot of a SQL query request charge in the Azure portal.":::
-
-## Use the .NET SDK
-
-# [.NET SDK V2](#tab/dotnetv2)
-
-Objects that are returned from the [.NET SDK v2](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/) expose a `RequestCharge` property:
-
-```csharp
-ResourceResponse<Document> fetchDocumentResponse = await client.ReadDocumentAsync(
- UriFactory.CreateDocumentUri("database", "container", "itemId"),
- new RequestOptions
- {
- PartitionKey = new PartitionKey("partitionKey")
- });
-var requestCharge = fetchDocumentResponse.RequestCharge;
-
-StoredProcedureResponse<string> storedProcedureCallResponse = await client.ExecuteStoredProcedureAsync<string>(
- UriFactory.CreateStoredProcedureUri("database", "container", "storedProcedureId"),
- new RequestOptions
- {
- PartitionKey = new PartitionKey("partitionKey")
- });
-requestCharge = storedProcedureCallResponse.RequestCharge;
-
-IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
- UriFactory.CreateDocumentCollectionUri("database", "container"),
- "SELECT * FROM c",
- new FeedOptions
- {
- PartitionKey = new PartitionKey("partitionKey")
- }).AsDocumentQuery();
-while (query.HasMoreResults)
-{
- FeedResponse<dynamic> queryResponse = await query.ExecuteNextAsync<dynamic>();
- requestCharge = queryResponse.RequestCharge;
-}
-```
-
-# [.NET SDK V3](#tab/dotnetv3)
-
-Objects that are returned from the [.NET SDK v3](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/) expose a `RequestCharge` property:
-
-[!code-csharp[](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos/tests/Microsoft.Azure.Cosmos.Tests/SampleCodeForDocs/CustomDocsSampleCode.cs?name=GetRequestCharge)]
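-
-A minimal inline sketch of the same idea, assuming an existing `Container` instance named `container` and illustrative item and partition key values:
-
-```csharp
-// Point read: the charge is exposed on the ItemResponse
-ItemResponse<dynamic> readResponse = await container.ReadItemAsync<dynamic>(
-    "itemId",
-    new PartitionKey("partitionKey"));
-double readCharge = readResponse.RequestCharge;
-
-// Query: each page of results carries its own charge
-FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>("SELECT * FROM c");
-while (iterator.HasMoreResults)
-{
-    FeedResponse<dynamic> page = await iterator.ReadNextAsync();
-    double queryCharge = page.RequestCharge;
-}
-```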
-
-For more information, see [Quickstart: Build a .NET web app by using a SQL API account in Azure Cosmos DB](create-sql-api-dotnet.md).
---
-## Use the Java SDK
-
-Objects that are returned from the [Java SDK](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb) expose a `getRequestCharge()` method:
-
-```java
-RequestOptions requestOptions = new RequestOptions();
-requestOptions.setPartitionKey(new PartitionKey("partitionKey"));
-
-Observable<ResourceResponse<Document>> readDocumentResponse = client.readDocument(String.format("/dbs/%s/colls/%s/docs/%s", "database", "container", "itemId"), requestOptions);
-readDocumentResponse.subscribe(result -> {
- double requestCharge = result.getRequestCharge();
-});
-
-Observable<StoredProcedureResponse> storedProcedureResponse = client.executeStoredProcedure(String.format("/dbs/%s/colls/%s/sprocs/%s", "database", "container", "storedProcedureId"), requestOptions, null);
-storedProcedureResponse.subscribe(result -> {
- double requestCharge = result.getRequestCharge();
-});
-
-FeedOptions feedOptions = new FeedOptions();
-feedOptions.setPartitionKey(new PartitionKey("partitionKey"));
-
-Observable<FeedResponse<Document>> feedResponse = client
- .queryDocuments(String.format("/dbs/%s/colls/%s", "database", "container"), "SELECT * FROM c", feedOptions);
-feedResponse.forEach(result -> {
- double requestCharge = result.getRequestCharge();
-});
-```
-
-For more information, see [Quickstart: Build a Java application by using an Azure Cosmos DB SQL API account](create-sql-api-java.md).
-
-## Use the Node.js SDK
-
-Objects that are returned from the [Node.js SDK](https://www.npmjs.com/package/@azure/cosmos) expose a `headers` subobject that maps all the headers returned by the underlying HTTP API. The request charge is available under the `x-ms-request-charge` key:
-
-```javascript
-const item = await client
- .database('database')
- .container('container')
- .item('itemId', 'partitionKey')
- .read();
-var requestCharge = item.headers['x-ms-request-charge'];
-
-const storedProcedureResult = await client
- .database('database')
- .container('container')
- .storedProcedure('storedProcedureId')
- .execute({
- partitionKey: 'partitionKey'
- });
-requestCharge = storedProcedureResult.headers['x-ms-request-charge'];
-
-const query = client.database('database')
- .container('container')
- .items
- .query('SELECT * FROM c', {
- partitionKey: 'partitionKey'
- });
-while (query.hasMoreResults()) {
- var result = await query.executeNext();
- requestCharge = result.headers['x-ms-request-charge'];
-}
-```
-
-For more information, see [Quickstart: Build a Node.js app by using an Azure Cosmos DB SQL API account](create-sql-api-nodejs.md).
-
-## Use the Python SDK
-
-The `CosmosClient` object from the [Python SDK](https://pypi.org/project/azure-cosmos/) exposes a `last_response_headers` dictionary that maps all the headers returned by the underlying HTTP API for the last operation executed. The request charge is available under the `x-ms-request-charge` key:
-
-```python
-response = client.ReadItem(
- 'dbs/database/colls/container/docs/itemId', {'partitionKey': 'partitionKey'})
-request_charge = client.last_response_headers['x-ms-request-charge']
-
-response = client.ExecuteStoredProcedure(
- 'dbs/database/colls/container/sprocs/storedProcedureId', None, {'partitionKey': 'partitionKey'})
-request_charge = client.last_response_headers['x-ms-request-charge']
-```
-
-For more information, see [Quickstart: Build a Python app by using an Azure Cosmos DB SQL API account](create-sql-api-python.md).
-
-## Next steps
-
-To learn about optimizing your RU consumption, see these articles:
-
-* [Request Units in Azure Cosmos DB](../request-units.md)
-* [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)
-* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
-* [Globally scale provisioned throughput](../request-units.md)
-* [Introduction to provisioned throughput in Azure Cosmos DB](../set-throughput.md)
-* [Provision throughput for a container](how-to-provision-container-throughput.md)
-* [Monitor and debug with insights in Azure Cosmos DB](../use-metrics.md)
cosmos-db How To Configure Cosmos Db Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-configure-cosmos-db-trigger.md
- Title: Azure Functions trigger for Cosmos DB advanced configuration
-description: Learn how to configure logging and connection policy used by Azure Functions trigger for Cosmos DB
--- Previously updated : 07/06/2022---
-# How to configure logging and connectivity with the Azure Functions trigger for Cosmos DB
-
-This article describes advanced configuration options you can set when using the Azure Functions trigger for Cosmos DB.
-
-## Enabling trigger-specific logs
-
-The Azure Functions trigger for Cosmos DB uses the [Change Feed Processor Library](change-feed-processor.md) internally, and the library generates a set of health logs that can be used to monitor internal operations for [troubleshooting purposes](./troubleshoot-changefeed-functions.md).
-
-The health logs describe how the Azure Functions trigger for Cosmos DB behaves when attempting operations during load-balancing, initialization, and processing scenarios.
-
-### Enabling logging
-
-To enable logging when using Azure Functions trigger for Cosmos DB, locate the `host.json` file in your Azure Functions project or Azure Functions App and [configure the level of required logging](../../azure-functions/functions-monitoring.md#log-levels-and-categories). Enable the traces for `Host.Triggers.CosmosDB` as shown in the following sample:
-
-```js
-{
- "version": "2.0",
- "logging": {
- "fileLoggingMode": "always",
- "logLevel": {
- "Host.Triggers.CosmosDB": "Warning"
- }
- }
-}
-```
-
-After the Azure Function is deployed with the updated configuration, you'll see the Azure Functions trigger for Cosmos DB logs as part of your traces. You can view the logs in your configured logging provider under the *Category* `Host.Triggers.CosmosDB`.
-
-### Which types of logs are emitted?
-
-Once enabled, there are three levels of log events that will be emitted:
-
-* Error:
- * When there's an unknown or critical error during Change Feed processing that affects correct trigger functionality.
-
-* Warning:
- * When your Function user code has an unhandled exception - either there's a gap in your Function code and the Function isn't [resilient to errors](../../azure-functions/performance-reliability.md#write-defensive-functions), or there's a serialization error (for C# Functions, the raw JSON can't be deserialized into the selected C# type).
- * When there are transient connectivity issues preventing the trigger from interacting with the Cosmos DB account. The trigger will retry these [transient connectivity errors](troubleshoot-dot-net-sdk-request-timeout.md) but if they extend for a long period of time, there could be a network problem. You can enable Debug level traces to obtain the Diagnostics from the underlying Cosmos DB SDK.
-
-* Debug:
- * When a lease is acquired by an instance - The current instance will start processing the Change Feed for the lease.
- * When a lease is released by an instance - The current instance has stopped processing the Change Feed for the lease.
- * When new changes are delivered from the trigger to your Function code - Helps debug situations when your Function code might be having errors and you aren't sure if you're receiving changes or not.
- * For traces that are Warning and Error, the Diagnostics information from the underlying Cosmos DB SDK is added for troubleshooting purposes.
-
-You can also [refer to the source code](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/dev/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerHealthMonitor.cs) to see the full details.
-
-### Query the logs
-
-Run the following query to query the logs generated by the Azure Functions trigger for Cosmos DB in [Azure Application Insights' Analytics](../../azure-monitor/logs/log-query-overview.md):
-
-```sql
-traces
-| where customDimensions.Category == "Host.Triggers.CosmosDB"
-```
-
-## Configuring the connection policy
-
-There are two connection modes - Direct mode and Gateway mode. To learn more about these connection modes, see the [connection modes](sql-sdk-connection-modes.md) article. By default, **Gateway** is used to establish all connections on the Azure Functions trigger for Cosmos DB. However, it might not be the best option for performance-driven scenarios.
-
-### Changing the connection mode and protocol
-
-There are two key configuration settings available to configure the client connection policy: the **connection mode** and the **connection protocol**. You can change the default connection mode and protocol used by the Azure Functions trigger for Cosmos DB and all the [Azure Cosmos DB bindings](../../azure-functions/functions-bindings-cosmosdb-v2-output.md). To change the default settings, locate the `host.json` file in your Azure Functions project or Azure Functions App and add the following [extra setting](../../azure-functions/functions-bindings-cosmosdb-v2.md#hostjson-settings):
-
-```js
-{
- "cosmosDB": {
- "connectionMode": "Direct",
- "protocol": "Tcp"
- }
-}
-```
-
-Where `connectionMode` is the desired connection mode (Direct or Gateway) and `protocol` is the desired connection protocol (Tcp for Direct mode or Https for Gateway mode).
-
-If your Azure Functions project is working with the Azure Functions V1 runtime, the configuration has a slight naming difference: use `documentDB` instead of `cosmosDB`:
-
-```js
-{
- "documentDB": {
- "connectionMode": "Direct",
- "protocol": "Tcp"
- }
-}
-```
-
-## Customizing the user agent
-
-The Azure Functions trigger for Cosmos DB performs requests to the service that will be reflected on your [monitoring](../monitor-cosmos-db.md). You can customize the user agent used for the requests from an Azure Function by changing the `userAgentSuffix` in the `host.json` [extra settings](../../azure-functions/functions-bindings-cosmosdb-v2.md?tabs=extensionv4#hostjson-settings):
-
-```js
-{
- "cosmosDB": {
- "userAgentSuffix": "MyUniqueIdentifier"
- }
-}
-```
-
-> [!NOTE]
-> When hosting your function app in a Consumption plan, each instance has a limit on the number of socket connections that it can maintain. When working with Direct/TCP mode, by design more connections are created and can hit the [Consumption plan limit](../../azure-functions/manage-connections.md#connection-limit), in which case you can either use Gateway mode or host your function app in a [Premium plan](../../azure-functions/functions-premium-plan.md) or a [Dedicated (App Service) plan](../../azure-functions/dedicated-plan.md).
-
-## Next steps
-
-* [Connection limits in Azure Functions](../../azure-functions/manage-connections.md#connection-limit)
-* [Enable monitoring](../../azure-functions/functions-monitoring.md) in your Azure Functions applications.
-* Learn how to [Diagnose and troubleshoot common issues](./troubleshoot-changefeed-functions.md) when using the Azure Functions trigger for Cosmos DB.
cosmos-db How To Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-configure-cross-origin-resource-sharing.md
- Title: Cross-Origin Resource Sharing (CORS) in Azure Cosmos DB
-description: This article describes how to configure Cross-Origin Resource Sharing (CORS) in Azure Cosmos DB by using Azure portal and Azure Resource Manager templates.
---- Previously updated : 10/11/2019---
-# Configure Cross-Origin Resource Sharing (CORS)
-
-Cross-Origin Resource Sharing (CORS) is an HTTP feature that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as same-origin policy that prevents a web page from calling APIs in a different domain. However, CORS provides a secure way to allow the origin domain to call APIs in another domain. The Core (SQL) API in Azure Cosmos DB now supports Cross-Origin Resource Sharing (CORS) by using the "allowedOrigins" header. After you enable the CORS support for your Azure Cosmos account, only authenticated requests are evaluated to determine whether they're allowed according to the rules you've specified.
-
-You can configure the Cross-origin resource sharing (CORS) setting from the Azure portal or from an Azure Resource Manager template. For Cosmos accounts using the Core (SQL) API, Azure Cosmos DB supports a JavaScript library that works in both Node.js and browser-based environments. This library can now take advantage of CORS support when using Gateway mode. There's no client-side configuration needed to use this feature. With CORS support, resources from a browser can directly access Azure Cosmos DB through the [JavaScript library](https://www.npmjs.com/package/@azure/cosmos) or directly from the [REST API](/rest/api/cosmos-db/) for simple operations.
-
-> [!NOTE]
-> CORS support is only applicable and supported for the Azure Cosmos DB Core (SQL) API. It is not applicable to the Azure Cosmos DB APIs for Cassandra, Gremlin, or MongoDB, as these protocols do not use HTTP for client-server communication.
-
-## Enable CORS support from Azure portal
-
-Follow these steps to enable Cross-Origin Resource Sharing by using Azure portal:
-
-1. Navigate to your Azure Cosmos DB account. Open the **CORS** page.
-
-2. Specify a comma-separated list of origins that can make cross-origin calls to your Azure Cosmos DB account. For example, `https://www.mydomain.com`, `https://mydomain.com`, `https://api.mydomain.com`. You can also use a wildcard "\*" to allow all origins. Then select **Submit**.
-
- > [!NOTE]
- > Currently, you cannot use wildcards as part of the domain name. For example `https://*.mydomain.net` format is not yet supported.
-
- :::image type="content" source="./media/how-to-configure-cross-origin-resource-sharing/enable-cross-origin-resource-sharing-using-azure-portal.png" alt-text="Enable cross origin resource sharing using Azure portal":::
-
-## Enable CORS support from Resource Manager template
-
-To enable CORS by using a Resource Manager template, add the "cors" section with the "allowedOrigins" property to any existing template. This JSON is an example of a template that creates a new Azure Cosmos account with CORS enabled.
-
-```json
-{
- "type": "Microsoft.DocumentDB/databaseAccounts",
- "name": "[variables('accountName')]",
- "apiVersion": "2019-08-01",
- "location": "[parameters('location')]",
- "kind": "GlobalDocumentDB",
- "properties": {
- "consistencyPolicy": "[variables('consistencyPolicy')[parameters('defaultConsistencyLevel')]]",
- "locations": "[variables('locations')]",
- "databaseAccountOfferType": "Standard",
- "cors": [
- {
- "allowedOrigins": "https://contoso.com"
- }
- ]
- }
-}
-```
-
-## Using the Azure Cosmos DB JavaScript library from a browser
-
-Today, the Azure Cosmos DB JavaScript library only ships the CommonJS version of the library in its package. To use this library from the browser, you have to use a tool such as Rollup or Webpack to create a browser-compatible library. Certain Node.js libraries need browser mocks. The following is an example of a webpack config file that has the necessary mock settings.
-
-```javascript
-const path = require("path");
-
-module.exports = {
- entry: "./src/index.ts",
- devtool: "inline-source-map",
- node: {
- net: "mock",
- tls: "mock"
- },
- output: {
- filename: "bundle.js",
- path: path.resolve(__dirname, "dist")
- }
-};
-```
-
-Here's a [code sample](https://github.com/christopheranderson/cosmos-browser-sample) that uses TypeScript and Webpack with the Azure Cosmos DB JavaScript SDK library. The sample builds a Todo app that sends real-time updates when new items are created.
-
-As a best practice, don't use the primary key to communicate with Azure Cosmos DB from the browser. Instead, use resource tokens to communicate. For more information about resource tokens, see the [Securing access to Azure Cosmos DB](../secure-access-to-data.md#resource-tokens) article.
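-
-As a rough sketch of how a trusted backend (not the browser) might mint a resource token with the .NET SDK v3 (the database, container, user, and permission names below are illustrative):
-
-```csharp
-// Runs on a trusted server, never in the browser
-CosmosClient client = new CosmosClient("<account-connection-string>");
-Database database = client.GetDatabase("database");
-Container container = database.GetContainer("container");
-
-// Create the user if it doesn't exist yet, then get a reference to it
-await database.UpsertUserAsync("browser-user");
-User user = database.GetUser("browser-user");
-
-// Grant the user read-only access to the container
-PermissionProperties permissionProperties = new PermissionProperties(
-    id: "read-only-container",
-    permissionMode: PermissionMode.Read,
-    container: container);
-PermissionResponse permissionResponse = await user.UpsertPermissionAsync(permissionProperties);
-
-// Hand this token to the browser client instead of the primary key
-string resourceToken = permissionResponse.Resource.Token;
-```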
-
-## Next steps
-
-To learn about other ways to secure your Azure Cosmos account, see the following articles:
-
-* [Configure a firewall for Azure Cosmos DB](../how-to-configure-firewall.md) article.
-
-* [Configure virtual network and subnet-based access for your Azure Cosmos DB account](../how-to-configure-vnet-service-endpoint.md)
cosmos-db How To Convert Session Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-convert-session-token.md
- Title: How to convert session token formats in .NET SDK - Azure Cosmos DB
-description: Learn how to convert session token formats to ensure compatibilities between different .NET SDK versions
---- Previously updated : 04/30/2020----
-# Convert session token formats in .NET SDK
-
-This article explains how to convert between different session token formats to ensure compatibility between SDK versions.
-
-> [!NOTE]
-> By default, the SDK keeps track of the session token automatically and it will use the most recent session token. For more information, please visit [Utilize session tokens](how-to-manage-consistency.md#utilize-session-tokens). The instructions in this article only apply with the following conditions:
-> * Your Azure Cosmos DB account uses Session consistency.
-> * You are managing the session tokens manually.
-> * You are using multiple versions of the SDK at the same time.
-
-## Session token formats
-
-There are two session token formats: **simple** and **vector**. These two formats are not interchangeable, so the format should be converted when passing tokens to a client application that uses a different SDK version.
-- The **simple** session token format is used by the .NET SDK V1 (Microsoft.Azure.DocumentDB -version 1.x)
-- The **vector** session token format is used by the .NET SDK V2 (Microsoft.Azure.DocumentDB -version 2.x)
-
-### Simple session token
-
-A simple session token has this format: `{pkrangeid}:{globalLSN}`
-
-### Vector session token
-
-A vector session token has the following format:
-`{pkrangeid}:{Version}#{GlobalLSN}#{RegionId1}={LocalLsn1}#{RegionId2}={LocalLsn2}....#{RegionIdN}={LocalLsnN}`
-
-## Convert to Simple session token
-
-To pass a session token to a client using the .NET SDK V1, use a **simple** session token format. For example, use the following sample code to convert it.
-
-```csharp
-private static readonly char[] SegmentSeparator = (new[] { '#' });
-private static readonly char[] PkRangeSeparator = (new[] { ':' });
-
-// sessionTokenToConvert = session token from previous response
-string[] items = sessionTokenToConvert.Split(PkRangeSeparator, StringSplitOptions.RemoveEmptyEntries);
-string[] sessionTokenSegments = items[1].Split(SegmentSeparator, StringSplitOptions.RemoveEmptyEntries);
-
-string sessionTokenInSimpleFormat;
-
-if (sessionTokenSegments.Length == 1)
-{
- // returning the same token since it already has the correct format
- sessionTokenInSimpleFormat = sessionTokenToConvert;
-}
-else
-{
- long version = 0;
- long globalLSN = 0;
-
- if (!long.TryParse(sessionTokenSegments[0], out version)
- || !long.TryParse(sessionTokenSegments[1], out globalLSN))
- {
- throw new ArgumentException("Invalid session token format", sessionTokenToConvert);
- }
-
- sessionTokenInSimpleFormat = string.Format("{0}:{1}", items[0], globalLSN);
-}
-```
-
-## Convert to Vector session token
-
-To pass a session token to a client using the .NET SDK V2, use the **vector** session token format. For example, use the following sample code to convert it.
-
-```csharp
-
-private static readonly char[] SegmentSeparator = (new[] { '#' });
-private static readonly char[] PkRangeSeparator = (new[] { ':' });
-
-// sessionTokenToConvert = session token from previous response
-string[] items = sessionTokenToConvert.Split(PkRangeSeparator, StringSplitOptions.RemoveEmptyEntries);
-string[] sessionTokenSegments = items[1].Split(SegmentSeparator, StringSplitOptions.RemoveEmptyEntries);
-
-string sessionTokenInVectorFormat;
-
-if (sessionTokenSegments.Length == 1)
-{
- long globalLSN = 0;
- if (long.TryParse(sessionTokenSegments[0], out globalLSN))
- {
- sessionTokenInVectorFormat = string.Format("{0}:-2#{1}", items[0], globalLSN);
- }
- else
- {
- throw new ArgumentException("Invalid session token format", sessionTokenToConvert);
- }
-}
-else
-{
- // returning the same token since it already has the correct format
- sessionTokenInVectorFormat = sessionTokenToConvert;
-}
-```
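-
-Once converted, the token can be supplied on the next request. As a minimal sketch with the .NET SDK V2, assuming a `DocumentClient` instance named `client` and illustrative database, container, and item names:
-
-```csharp
-// Pass the converted session token explicitly on the request
-ResourceResponse<Document> readResponse = await client.ReadDocumentAsync(
-    UriFactory.CreateDocumentUri("database", "container", "itemId"),
-    new RequestOptions
-    {
-        PartitionKey = new PartitionKey("partitionKey"),
-        SessionToken = sessionTokenInVectorFormat
-    });
-```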
-
-## Next steps
-
-Read the following articles:
-
-* [Use session tokens to manage consistency in Azure Cosmos DB](how-to-manage-consistency.md#utilize-session-tokens)
-* [Choose the right consistency level in Azure Cosmos DB](../consistency-levels.md)
-* [Consistency, availability, and performance tradeoffs in Azure Cosmos DB](../consistency-levels.md)
-* [Availability and performance tradeoffs for various consistency levels](../consistency-levels.md)
cosmos-db How To Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-create-account.md
- Title: Create an Azure Cosmos DB SQL API account
-description: Learn how to create a new Azure Cosmos DB SQL API account to store databases, containers, and items.
----- Previously updated : 06/08/2022--
-# Create an Azure Cosmos DB SQL API account
-
-An Azure Cosmos DB SQL API account contains all of your Azure Cosmos DB resources: databases, containers, and items. The account provides a unique endpoint for various tools and SDKs to connect to Azure Cosmos DB and perform everyday operations. For more information about the resources in Azure Cosmos DB, see [Azure Cosmos DB resource model](../account-databases-containers-items.md).
-
-## Prerequisites
-
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-
-## Create an account
-
-Create a single Azure Cosmos DB account using the SQL API.
-
-#### [Azure CLI](#tab/azure-cli)
-
-1. Create shell variables for *accountName*, *resourceGroupName*, and *location*.
-
- ```azurecli-interactive
- # Variable for resource group name
- resourceGroupName="msdocs-cosmos"
-
- # Variable for location
- location="westus"
-
-    # Variable for account name with a randomly generated suffix
- let suffix=$RANDOM*$RANDOM
- accountName="msdocs-$suffix"
- ```
-
-1. If you haven't already, sign in to the Azure CLI using the [``az login``](/cli/azure/reference-index#az-login) command.
-
-1. Use the [``az group create``](/cli/azure/group#az-group-create) command to create a new resource group in your subscription.
-
- ```azurecli-interactive
- az group create \
- --name $resourceGroupName \
- --location $location
- ```
-
-1. Use the [``az cosmosdb create``](/cli/azure/cosmosdb#az-cosmosdb-create) command to create a new Azure Cosmos DB SQL API account with default settings.
-
- ```azurecli-interactive
- az cosmosdb create \
- --resource-group $resourceGroupName \
- --name $accountName \
- --locations regionName=$location
- ```
-
-#### [PowerShell](#tab/azure-powershell)
-
-1. Create shell variables for *ACCOUNT_NAME*, *RESOURCE_GROUP_NAME*, and *LOCATION*.
-
- ```azurepowershell-interactive
- # Variable for resource group name
- $RESOURCE_GROUP_NAME = "msdocs-cosmos"
-
- # Variable for location
- $LOCATION = "West US"
-
-    # Variable for account name with a randomly generated suffix
- $SUFFIX = Get-Random
- $ACCOUNT_NAME = "msdocs-$SUFFIX"
- ```
-
-1. If you haven't already, sign in to Azure PowerShell using the [``Connect-AzAccount``](/powershell/module/az.accounts/connect-azaccount) cmdlet.
-
-1. Use the [``New-AzResourceGroup``](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a new resource group in your subscription.
-
- ```azurepowershell-interactive
- $parameters = @{
- Name = $RESOURCE_GROUP_NAME
- Location = $LOCATION
- }
- New-AzResourceGroup @parameters
- ```
-
-1. Use the [``New-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new Azure Cosmos DB SQL API account with default settings.
-
- ```azurepowershell-interactive
- $parameters = @{
- ResourceGroupName = $RESOURCE_GROUP_NAME
- Name = $ACCOUNT_NAME
- Location = $LOCATION
- }
- New-AzCosmosDBAccount @parameters
- ```
-
-#### [Portal](#tab/azure-portal)
-
-> [!TIP]
-> For this guide, we recommend using the resource group name ``msdocs-cosmos``.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. From the Azure portal menu or the **Home page**, select **Create a resource**.
-
-1. On the **New** page, search for and select **Azure Cosmos DB**.
-
-1. On the **Select API option** page, select the **Create** option within the **Core (SQL) - Recommended** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the SQL API](../index.yml).
-
- :::image type="content" source="media/create-account-portal/cosmos-api-choices.png" lightbox="media/create-account-portal/cosmos-api-choices.png" alt-text="Screenshot of select A P I option page for Azure Cosmos D B.":::
-
-1. On the **Create Azure Cosmos DB Account** page, enter the following information:
-
- | Setting | Value | Description |
- | | | |
- | Subscription | Subscription name | Select the Azure subscription that you wish to use for this Azure Cosmos account. |
- | Resource Group | Resource group name | Select a resource group, or select **Create new**, then enter a unique name for the new resource group. |
- | Account Name | A unique name | Enter a name to identify your Azure Cosmos account. The name will be used as part of a fully qualified domain name (FQDN) with a suffix of *documents.azure.com*, so the name must be globally unique. The name can only contain lowercase letters, numbers, and the hyphen (-) character. The name must also be between 3-44 characters in length. |
- | Location | The region closest to your users | Select a geographic location to host your Azure Cosmos DB account. Use the location that is closest to your users to give them the fastest access to the data. |
- | Capacity mode |Provisioned throughput or Serverless|Select **Provisioned throughput** to create an account in [provisioned throughput](../set-throughput.md) mode. Select **Serverless** to create an account in [serverless](../serverless.md) mode. |
- | Apply Azure Cosmos DB free tier discount | **Apply** or **Do not apply** |With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of storage for free in an account. Learn more about [free tier](https://azure.microsoft.com/pricing/details/cosmos-db/). |
-
- > [!NOTE]
- > You can have up to one free tier Azure Cosmos DB account per Azure subscription and must opt-in when creating the account. If you do not see the option to apply the free tier discount, this means another account in the subscription has already been enabled with free tier.
-
- :::image type="content" source="media/create-account-portal/new-cosmos-account-page.png" lightbox="media/create-account-portal/new-cosmos-account-page.png" alt-text="Screenshot of new account page for Azure Cosmos D B SQL A P I.":::
-
-1. Select **Review + create**.
-
-1. Review the settings you provide, and then select **Create**. It takes a few minutes to create the account. Wait for the portal page to display **Your deployment is complete** before moving on.
-
-1. Select **Go to resource** to go to the Azure Cosmos DB account page.
-
- :::image type="content" source="media/create-account-portal/cosmos-deployment-complete.png" lightbox="media/create-account-portal/cosmos-deployment-complete.png" alt-text="Screenshot of deployment page for Azure Cosmos D B SQL A P I resource.":::
---
-## Next steps
-
-In this guide, you learned how to create an Azure Cosmos DB SQL API account. You can now import additional data to your Azure Cosmos DB account.
-
-> [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB SQL API](../import-data.md)
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-create-container.md
- Title: Create a container in Azure Cosmos DB SQL API
-description: Learn how to create a container in Azure Cosmos DB SQL API by using Azure portal, .NET, Java, Python, Node.js, and other SDKs.
---- Previously updated : 01/03/2022-----
-# Create a container in Azure Cosmos DB SQL API
-
-This article explains the different ways to create a container in Azure Cosmos DB SQL API. It shows how to create a container using the Azure portal, Azure CLI, PowerShell, or supported SDKs, and demonstrates how to specify the partition key and provision throughput.
-
-If you're using a different API, see the [API for MongoDB](../mongodb/how-to-create-container-mongodb.md) and [Cassandra API](../cassandr) articles to create the container.
-
-> [!NOTE]
-> When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
-
-## <a id="portal-sql"></a>Create a container using Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. [Create a new Azure Cosmos account](create-sql-api-dotnet.md#create-account), or select an existing account.
-
-1. Open the **Data Explorer** pane, and select **New Container**. Next, provide the following details:
-
- * Indicate whether you are creating a new database or using an existing one.
- * Enter a **Container Id**.
- * Enter a **Partition key** value (for example, `/ItemID`).
-    * Select **Autoscale** or **Manual** throughput and enter the required **Container throughput** (for example, 1000 RU/s).
- * Select **OK**.
-
- :::image type="content" source="../media/how-to-provision-container-throughput/provision-container-throughput-portal-sql-api.png" alt-text="Screenshot of Data Explorer, with New Collection highlighted":::
-
-## <a id="cli-sql"></a>Create a container using Azure CLI
-
-[Create a container with Azure CLI](manage-with-cli.md#create-a-container). For a listing of all Azure CLI samples across all Azure Cosmos DB APIs, see [Azure CLI samples for Azure Cosmos DB](cli-samples.md).
-
-## Create a container using PowerShell
-
-[Create a container with PowerShell](manage-with-powershell.md#create-container). For a listing of all PowerShell samples across all Azure Cosmos DB APIs, see [PowerShell Samples](powershell-samples.md).
-
-## <a id="dotnet-sql"></a>Create a container using .NET SDK
-
-If you encounter a timeout exception when creating a collection, do a read operation to validate whether the collection was created successfully; a sketch of this pattern follows the sample below. The read operation throws an exception until the collection create operation is successful. For the list of status codes supported by the create operation, see the [HTTP Status Codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) article.
-
-```csharp
-// Create a container with a partition key and provision 400 RU/s manual throughput.
-CosmosClient client = new CosmosClient(connectionString, clientOptions);
-Database database = await client.CreateDatabaseIfNotExistsAsync(databaseId);
-
-ContainerProperties containerProperties = new ContainerProperties()
-{
- Id = containerId,
- PartitionKeyPath = "/myPartitionKey"
-};
-
-var throughput = ThroughputProperties.CreateManualThroughput(400);
-Container container = await database.CreateContainerIfNotExistsAsync(containerProperties, throughput);
-```
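-
-As a sketch of the read-to-validate pattern described above, reusing the `database` and `containerId` variables from the previous sample:
-
-```csharp
-try
-{
-    // This read throws until the container create operation has completed
-    ContainerResponse readResponse = await database.GetContainer(containerId).ReadContainerAsync();
-    Console.WriteLine($"Container exists. Status code: {readResponse.StatusCode}");
-}
-catch (CosmosException ex) when (ex.StatusCode == System.Net.HttpStatusCode.NotFound)
-{
-    // The container wasn't created; retry the create call
-}
-```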
-
-## Next steps
-
-* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
-* [Request Units in Azure Cosmos DB](../request-units.md)
-* [Provision throughput on containers and databases](../set-throughput.md)
-* [Work with Azure Cosmos account](../account-databases-containers-items.md)
cosmos-db How To Create Multiple Cosmos Db Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-create-multiple-cosmos-db-triggers.md
- Title: Create multiple independent Azure Functions triggers for Cosmos DB
-description: Learn how to configure multiple independent Azure Functions triggers for Cosmos DB to create event-driven architectures.
---- Previously updated : 07/17/2019----
-# Create multiple Azure Functions triggers for Cosmos DB
-
-This article describes how you can configure multiple Azure Functions triggers for Cosmos DB to work in parallel and independently react to changes.
--
-## Event-based architecture requirements
-
-When building serverless architectures with [Azure Functions](../../azure-functions/functions-overview.md), it's [recommended](../../azure-functions/performance-reliability.md#avoid-long-running-functions) to create small function sets that work together instead of large, long-running functions.
-
-As you build event-based serverless flows using the [Azure Functions trigger for Cosmos DB](./change-feed-functions.md), you'll run into the scenario where you want to do multiple things whenever there is a new event in a particular [Azure Cosmos container](../account-databases-containers-items.md#azure-cosmos-db-containers). If the actions you want to trigger are independent from one another, the ideal solution is to **create one Azure Functions trigger for Cosmos DB per action** you want to do, all listening for changes on the same Azure Cosmos container.
-
-## Optimizing containers for multiple Triggers
-
-Given the *requirements* of the Azure Functions trigger for Cosmos DB, we need a second container to store state, also called the *leases container*. Does this mean that you need a separate leases container for each Azure Function?
-
-Here, you have two options:
-
-* Create **one leases container per Function**: This approach can translate into additional costs, unless you're using a [shared throughput database](../set-throughput.md#set-throughput-on-a-database). Remember that the minimum throughput at the container level is 400 [Request Units](../request-units.md), and the leases container is only used to checkpoint the progress and maintain state.
-* Have **one leases container and share it** for all your Functions: This second option makes better use of the provisioned Request Units on the container, as it enables multiple Azure Functions to share and use the same provisioned throughput.
-
-The goal of this article is to guide you to accomplish the second option.
-
-## Configuring a shared leases container
-
-To configure the shared leases container, the only extra configuration you need to make on your triggers is to add the `LeaseCollectionPrefix` [attribute](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#attributes) if you are using C#, or the `leaseCollectionPrefix` [attribute](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md) if you are using JavaScript. The value of the attribute should be a logical descriptor of what that particular trigger does.
-
-For example, if you have three Triggers: one that sends emails, one that does an aggregation to create a materialized view, and one that sends the changes to another storage for later analysis, you could assign the `LeaseCollectionPrefix` of "emails" to the first one, "materialized" to the second one, and "analytics" to the third one.
-
-The important part is that all three Triggers **can use the same leases container configuration** (account, database, and container name).
-
-A simple code sample using the `LeaseCollectionPrefix` attribute in C# would look like this:
-
-```cs
-using Microsoft.Azure.Documents;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Azure.WebJobs.Host;
-using System.Collections.Generic;
-using Microsoft.Extensions.Logging;
-
-[FunctionName("SendEmails")]
-public static void SendEmails([CosmosDBTrigger(
- databaseName: "ToDoItems",
- collectionName: "Items",
- ConnectionStringSetting = "CosmosDBConnection",
- LeaseCollectionName = "leases",
- LeaseCollectionPrefix = "emails")]IReadOnlyList<Document> documents,
- ILogger log)
-{
- ...
-}
-
-[FunctionName("MaterializedViews")]
-public static void MaterializedViews([CosmosDBTrigger(
- databaseName: "ToDoItems",
- collectionName: "Items",
- ConnectionStringSetting = "CosmosDBConnection",
- LeaseCollectionName = "leases",
- LeaseCollectionPrefix = "materialized")]IReadOnlyList<Document> documents,
- ILogger log)
-{
- ...
-}
-```
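-
-The third trigger mentioned earlier (the one that sends changes to another storage for later analysis) would follow the same pattern; for example:
-
-```cs
-[FunctionName("Analytics")]
-public static void Analytics([CosmosDBTrigger(
-    databaseName: "ToDoItems",
-    collectionName: "Items",
-    ConnectionStringSetting = "CosmosDBConnection",
-    LeaseCollectionName = "leases",
-    LeaseCollectionPrefix = "analytics")]IReadOnlyList<Document> documents,
-    ILogger log)
-{
-    // Send the changes to another storage for later analysis
-}
-```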
-
-And for JavaScript, you can apply the configuration on the `function.json` file, with the `leaseCollectionPrefix` attribute:
-
-```json
-{
- "type": "cosmosDBTrigger",
- "name": "documents",
- "direction": "in",
- "leaseCollectionName": "leases",
- "connectionStringSetting": "CosmosDBConnection",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "leaseCollectionPrefix": "emails"
-},
-{
- "type": "cosmosDBTrigger",
- "name": "documents",
- "direction": "in",
- "leaseCollectionName": "leases",
- "connectionStringSetting": "CosmosDBConnection",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "leaseCollectionPrefix": "materialized"
-}
-```
-
-> [!NOTE]
-> Always monitor the Request Units provisioned on your shared leases container. Each Trigger that shares it will increase the average throughput consumption, so you might need to increase the provisioned throughput as you increase the number of Azure Functions that are using it.
-
-## Next steps
-
-* See the full configuration for the [Azure Functions trigger for Cosmos DB](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration)
-* Check the extended [list of samples](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md) for all the languages.
-* Visit the Serverless recipes with Azure Cosmos DB and Azure Functions [GitHub repository](https://github.com/ealsur/serverless-recipes/tree/master/cosmosdbtriggerscenarios) for more samples.
cosmos-db How To Delete By Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-delete-by-partition-key.md
- Title: Delete items by partition key value using the Cosmos SDK (preview)
-description: Learn how to delete items by partition key value using the Cosmos SDKs
----- Previously updated : 08/19/2022---
-# Delete items by partition key value - SQL API (preview)
-
-This article explains how to use the Cosmos SDKs to delete all items by logical partition key value.
-
-## Feature overview
-
-The delete by partition key feature is an asynchronous, background operation that allows you to delete all documents with the same logical partition key value, using the Cosmos SDK.
-
-Because the number of documents to be deleted may be large, the operation runs in the background. Though the physical deletion operation runs in the background, the effects will be available immediately, as the documents to be deleted will not appear in the results of queries or read operations.
-
-To help limit the resources used by this background task, the delete by partition key operation is constrained to consume at most 10% of the total available RU/s on the container each second.
-
-## Getting started
-
-To use the feature, your Cosmos account must be enrolled in the preview. To enroll, submit a request for the **DeleteAllItemsByPartitionKey** feature via the [**Preview Features** page](../../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
--
-#### [.NET](#tab/dotnet-example)
-
-## Sample code
-Use [version 3.25.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) (or a higher preview version) of the Azure Cosmos DB .NET SDK to delete items by partition key.
-
-```csharp
-// Suppose our container is partitioned by tenantId, and we want to delete all the data for a particular tenant Contoso
-
-// Get reference to the container
-var container = cosmosClient.GetContainer("DatabaseName", "ContainerName");
-
-// Delete by logical partition key
-ResponseMessage deleteResponse = await container.DeleteAllItemsByPartitionKeyStreamAsync(new PartitionKey("Contoso"));
-
-if (deleteResponse.IsSuccessStatusCode) {
-    Console.WriteLine($"Delete all documents with partition key operation has successfully started");
-}
-```
-#### [Java](#tab/java-example)
-
-Use [version 4.19.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos) (or a higher version) of the Azure Cosmos DB Java SDK to delete items by partition key. The delete by partition key API will be marked as beta.
--
-```java
-// Suppose our container is partitioned by tenantId, and we want to delete all the data for a particular tenant Contoso
-
-// Delete by logical partition key
-CosmosItemResponse<?> deleteResponse = container.deleteAllItemsByPartitionKey(
- new PartitionKey("Contoso"), new CosmosItemRequestOptions()).block();
-```
-
-
-### Frequently asked questions (FAQ)
-#### Are the results of the delete by partition key operation reflected immediately?
-Yes, once the delete by partition key operation starts, the documents to be deleted will not appear in the results of queries or read operations. This also means that you can write a new document with the same ID and partition key as a document to be deleted without resulting in a conflict.
-
-See [Known issues](#known-issues) for exceptions.
-
-#### What happens if I issue a delete by partition key operation, and then immediately write a new document with the same partition key?
-When the delete by partition key operation is issued, only the documents that exist in the container at that point in time with the partition key value will be deleted. Any new documents that come in will not be in scope for the deletion.
-
-#### How is the delete by partition key operation prioritized among other operations against the container?
-By default, the delete by partition key value operation can consume up to a reserved fraction - 0.1, or 10% - of the overall RU/s on the resource. Any Request Units (RUs) in this bucket that are unused will be available for other non-background operations, such as reads, writes, and queries.
-
-For example, suppose you have provisioned 1000 RU/s on a container. There is an ongoing delete by partition key operation that consumes 100 RUs each second for 5 seconds. During each of these 5 seconds, there are 900 RUs available for non-background database operations. Once the delete operation is complete, all 1000 RU/s are now available again.
-
-### Known issues
-For certain scenarios, the effects of a delete by partition key operation are not guaranteed to be immediately reflected. The effects may be partially seen as the operation progresses.
-- [Aggregate queries](sql-query-aggregate-functions.md) that use the index - for example, COUNT queries - that are issued during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete.
-- Queries issued against the [analytical store](../analytical-store-introduction.md) during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete.
-- [Continuous backup (point in time restore)](../continuous-backup-restore-introduction.md) - a restore that is triggered during an ongoing delete by partition key operation may contain the results of the documents to be deleted in the restored collection. It is not recommended to use this preview feature if you have a scenario that requires continuous backup.
-
-## How to give feedback or report an issue/bug
-* Email cosmosPkDeleteFeedbk@microsoft.com with questions or feedback.
-
-### SDK requirements
-
-Find the latest version of the SDK that supports this feature.
-
-| SDK | Supported versions | Package manager link |
-| | | |
-| **.NET SDK v3** | *>= 3.25.0-preview (must be preview version)* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/> |
-| **Java SDK v4** | *>= 4.19.0 (API is marked as beta)* | <https://mvnrepository.com/artifact/com.azure/azure-cosmos> |
-
-Support for other SDKs is planned for the future.
-
-## Next steps
-
-See the following articles to learn about more SDK operations in Azure Cosmos DB.
-- [Query an Azure Cosmos container](how-to-query-container.md)
-- [Transactional batch operations in Azure Cosmos DB using the .NET SDK](transactional-batch.md)
cosmos-db How To Dotnet Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-create-container.md
- Title: Create a container in Azure Cosmos DB SQL API using .NET
-description: Learn how to create a container in your Azure Cosmos DB SQL API database using the .NET SDK.
----- Previously updated : 07/06/2022---
-# Create a container in Azure Cosmos DB SQL API using .NET
--
-Containers in Azure Cosmos DB store sets of items. Before you can create, query, or manage items, you must first create a container.
-
-## Name a container
-
-In Azure Cosmos DB, a container is analogous to a table in a relational database. When you create a container, the container name forms a segment of the URI used to access the container resource and any child items.
-
-Here are some quick rules when naming a container:
-
-* Keep container names between 3 and 63 characters long
-* Container names can only contain lowercase letters, numbers, or the dash (-) character.
-* Container names must start with a lowercase letter or number.
-
-Once created, the URI for a container is in this format:
-
-``https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>/colls/<container-name>``
-
-## Create a container
-
-To create a container, call one of the following methods:
-
-* [``CreateContainerAsync``](#create-a-container-asynchronously)
-* [``CreateContainerIfNotExistsAsync``](#create-a-container-asynchronously-if-it-doesnt-already-exist)
-
-### Create a container asynchronously
-
-The following example creates a container asynchronously:
--
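As a rough sketch of this call (not the exact sample referenced above), assuming an existing `Database` instance named `database` and illustrative name and partition key values:

```csharp
// Create a container with an id and a partition key path (values are illustrative).
Container container = await database.CreateContainerAsync(
    id: "products",
    partitionKeyPath: "/categoryId",
    throughput: 400);
```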
-The [``Database.CreateContainerAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerasync) method will throw an exception if a container with the same name already exists.
-
-### Create a container asynchronously if it doesn't already exist
-
-The following example creates a container asynchronously only if it doesn't already exist on the account:
--
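A comparable sketch, with the same assumptions as the previous example:

```csharp
// Create the container only if it doesn't already exist (values are illustrative).
Container container = await database.CreateContainerIfNotExistsAsync(
    id: "products",
    partitionKeyPath: "/categoryId",
    throughput: 400);
```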
-The [``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) method will only create a new container if it doesn't already exist. This method is useful for avoiding errors if you run the same code multiple times.
-
-## Parsing the response
-
-In all examples so far, the response from the asynchronous request was cast immediately to the [``Container``](/dotnet/api/microsoft.azure.cosmos.container) type. You may want to parse metadata about the response including headers and the HTTP status code. The true return type for the **Database.CreateContainerAsync** and **Database.CreateContainerIfNotExistsAsync** methods is [``ContainerResponse``](/dotnet/api/microsoft.azure.cosmos.containerresponse).
-
-The following example shows the **Database.CreateContainerIfNotExistsAsync** method returning a **ContainerResponse**. Once returned, you can parse response properties and then eventually get the underlying **Container** object:
--
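A minimal sketch of this pattern, using the same illustrative names:

```csharp
// Keep the full response to inspect metadata before using the container.
ContainerResponse response = await database.CreateContainerIfNotExistsAsync(
    id: "products",
    partitionKeyPath: "/categoryId");

Console.WriteLine($"HTTP status code: {response.StatusCode}");
Console.WriteLine($"Request charge: {response.RequestCharge} RUs");

// Get the underlying container to use for data operations.
Container container = response.Container;
```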
-## Next steps
-
-Now that you've created a container, use the next guide to create items.
-
-> [!div class="nextstepaction"]
-> [Create an item](how-to-dotnet-create-item.md)
cosmos-db How To Dotnet Create Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-create-database.md
- Title: Create a database in Azure Cosmos DB SQL API using .NET
-description: Learn how to create a database in your Azure Cosmos DB SQL API account using the .NET SDK.
----- Previously updated : 07/06/2022---
-# Create a database in Azure Cosmos DB SQL API using .NET
--
-Databases in Azure Cosmos DB are units of management for one or more containers. Before you can create or manage containers, you must first create a database.
-
-## Name a database
-
-In Azure Cosmos DB, a database is analogous to a namespace. When you create a database, the database name forms a segment of the URI used to access the database resource and any child resources.
-
-Here are some quick rules when naming a database:
-
-* Keep database names between 3 and 63 characters long.
-* Database names can only contain lowercase letters, numbers, or the dash (-) character.
-* Database names must start with a lowercase letter or number.
-
-Once created, the URI for a database is in this format:
-
-``https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>``
-
-## Create a database
-
-To create a database, call one of the following methods:
-
-* [``CreateDatabaseAsync``](#create-a-database-asynchronously)
-* [``CreateDatabaseIfNotExistsAsync``](#create-a-database-asynchronously-if-it-doesnt-already-exist)
-
-### Create a database asynchronously
-
-The following example creates a database asynchronously:
--
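As a rough sketch, assuming a `CosmosClient` instance named `client` and an illustrative database name:

```csharp
// Create a database with an illustrative name.
Database database = await client.CreateDatabaseAsync(id: "cosmicworks");
```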
-The [``CosmosClient.CreateDatabaseAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseasync) method will throw an exception if a database with the same name already exists.
-
-### Create a database asynchronously if it doesn't already exist
-
-The following example creates a database asynchronously only if it doesn't already exist on the account:
--
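A comparable sketch, with the same assumptions:

```csharp
// Create the database only if it doesn't already exist.
Database database = await client.CreateDatabaseIfNotExistsAsync(id: "cosmicworks");
```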
-The [``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) method will only create a new database if it doesn't already exist. This method is useful for avoiding errors if you run the same code multiple times.
-
-## Parsing the response
-
-In all examples so far, the response from the asynchronous request was cast immediately to the [``Database``](/dotnet/api/microsoft.azure.cosmos.database) type. You may want to parse metadata about the response including headers and the HTTP status code. The true return type for the **CosmosClient.CreateDatabaseAsync** and **CosmosClient.CreateDatabaseIfNotExistsAsync** methods is [``DatabaseResponse``](/dotnet/api/microsoft.azure.cosmos.databaseresponse).
-
-The following example shows the **CosmosClient.CreateDatabaseIfNotExistsAsync** method returning a **DatabaseResponse**. Once returned, you can parse response properties and then eventually get the underlying **Database** object:
--
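A minimal sketch of this pattern, using the same illustrative names:

```csharp
// Keep the full response to inspect metadata before using the database.
DatabaseResponse response = await client.CreateDatabaseIfNotExistsAsync(id: "cosmicworks");

Console.WriteLine($"HTTP status code: {response.StatusCode}");
Console.WriteLine($"Request charge: {response.RequestCharge} RUs");

// Get the underlying database to use for further operations.
Database database = response.Database;
```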
-## Next steps
-
-Now that you've created a database, use the next guide to create containers.
-
-> [!div class="nextstepaction"]
-> [Create a container](how-to-dotnet-create-container.md)
cosmos-db How To Dotnet Create Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-create-item.md
- Title: Create an item in Azure Cosmos DB SQL API using .NET
-description: Learn how to create, upsert, or replace an item in your Azure Cosmos DB SQL API container using the .NET SDK.
----- Previously updated : 07/06/2022---
-# Create an item in Azure Cosmos DB SQL API using .NET
--
-Items in Azure Cosmos DB represent a specific entity stored within a container. In the SQL API, an item consists of JSON-formatted data with a unique identifier.
-
-## Create a unique identifier for an item
-
-The unique identifier is a distinct string that identifies an item within a container. The ``id`` property is the only required property when creating a new JSON document. For example, this JSON document is a valid item in Azure Cosmos DB:
-
-```json
-{
- "id": "unique-string-2309509"
-}
-```
-
-Within the scope of a container, two items can't share the same unique identifier.
-
-> [!IMPORTANT]
-> The ``id`` property is case-sensitive. Properties named ``ID``, ``Id``, ``iD``, and ``_id`` are treated as arbitrary JSON properties.
-
-Once created, the URI for an item is in this format:
-
-``https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>/docs/<item-resource-identifier>``
-
-When referencing the item using a URI, use the system-generated *resource identifier* instead of the ``id`` field. For more information about system-generated item properties in Azure Cosmos DB SQL API, see [properties of an item](../account-databases-containers-items.md#properties-of-an-item).
-
-## Create an item
-
-> [!NOTE]
-> The examples in this article assume that you have already defined a C# type to represent your data named **Product**:
->
-> :::code language="csharp" source="~/azure-cosmos-dotnet-v3/250-create-item/Product.cs" id="type" :::
->
-> The examples also assume that you have already created a new object of type **Product** named **newItem**:
->
-> :::code language="csharp" source="~/azure-cosmos-dotnet-v3/250-create-item/Program.cs" id="create_object" :::
->
-
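For orientation only, a minimal type compatible with these examples might look like the sketch below; the property names, and the use of `categoryId` as the partition key property, are assumptions for illustration rather than the type used by the linked sample:

```csharp
// Illustrative only: a simple record with a unique id and a partition key property.
public record Product(
    string id,
    string categoryId,
    string name,
    int quantity,
    decimal price);
```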
-To create an item, call one of the following methods:
-
-* [``CreateItemAsync<>``](#create-an-item-asynchronously)
-* [``ReplaceItemAsync<>``](#replace-an-item-asynchronously)
-* [``UpsertItemAsync<>``](#create-or-replace-an-item-asynchronously)
-
-## Create an item asynchronously
-
-The following example creates a new item asynchronously:
--
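A minimal sketch of this call, assuming a `Container` instance named `container` and the illustrative `Product` shape shown earlier:

```csharp
// Create a new item; the partition key value must match the item's partition key property.
Product createdItem = await container.CreateItemAsync<Product>(
    item: newItem,
    partitionKey: new PartitionKey(newItem.categoryId));
```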
-The [``Container.CreateItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync) method will throw an exception if there's a conflict with the unique identifier of an existing item. To learn more about potential exceptions, see [``CreateItemAsync<>`` exceptions](/dotnet/api/microsoft.azure.cosmos.container.createitemasync#exceptions).
-
-## Replace an item asynchronously
-
-The following example replaces an existing item asynchronously:
--
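A comparable sketch, assuming a hypothetical `updatedItem` variable of the same `Product` shape:

```csharp
// Replace an existing item; the id parameter must match the item's unique identifier.
Product replacedItem = await container.ReplaceItemAsync<Product>(
    item: updatedItem,
    id: updatedItem.id,
    partitionKey: new PartitionKey(updatedItem.categoryId));
```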
-The [``Container.ReplaceItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.replaceitemasync) method requires the provided string for the ``id`` parameter to match the unique identifier of the ``item`` parameter.
-
-## Create or replace an item asynchronously
-
-The following example will create a new item or replace an existing item if an item already exists with the same unique identifier:
--
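A minimal sketch, with the same assumptions as the earlier examples:

```csharp
// Create the item if it doesn't exist, or replace it if an item with the same id already exists.
Product upsertedItem = await container.UpsertItemAsync<Product>(
    item: newItem,
    partitionKey: new PartitionKey(newItem.categoryId));
```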
-The [``Container.UpsertItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.upsertitemasync) method will use the unique identifier of the ``item`` parameter to determine if there's a conflict with an existing item and to replace the item appropriately.
-
-## Next steps
-
-Now that you've created various items, use the next guide to read an item.
-
-> [!div class="nextstepaction"]
-> [Read an item](how-to-dotnet-read-item.md)
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-get-started.md
- Title: Get started with Azure Cosmos DB SQL API and .NET
-description: Get started developing a .NET application that works with Azure Cosmos DB SQL API. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB SQL API endpoint.
----- Previously updated : 07/06/2022---
-# Get started with Azure Cosmos DB SQL API and .NET
--
-This article shows you how to connect to Azure Cosmos DB SQL API using the .NET SDK. Once connected, you can perform operations on databases, containers, and items.
-
-[Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | [Samples](samples-dotnet.md) | [API reference](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Give Feedback](https://github.com/Azure/azure-cosmos-dotnet-v3/issues)
-
-## Prerequisites
-
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* Azure Cosmos DB SQL API account. [Create a SQL API account](how-to-create-account.md).
-* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
-
-## Set up your project
-
-### Create the .NET console application
-
-Create a new .NET application by using the [``dotnet new``](/dotnet/core/tools/dotnet-new) command with the **console** template.
-
-```dotnetcli
-dotnet new console
-```
-
-Import the [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) NuGet package using the [``dotnet add package``](/dotnet/core/tools/dotnet-add-package) command.
-
-```dotnetcli
-dotnet add package Microsoft.Azure.Cosmos
-```
-
-Build the project with the [``dotnet build``](/dotnet/core/tools/dotnet-build) command.
-
-```dotnetcli
-dotnet build
-```
-
-## Connect to Azure Cosmos DB SQL API
-
-To connect to the SQL API of Azure Cosmos DB, create an instance of the [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) class. This class is the starting point to perform all operations against databases. There are three core ways to connect to a SQL API account using the **CosmosClient** class:
-
-* [Connect with a SQL API endpoint and read/write key](#connect-with-an-endpoint-and-key)
-* [Connect with a SQL API connection string](#connect-with-a-connection-string)
-* [Connect with Azure Active Directory](#connect-using-the-microsoft-identity-platform)
-
-### Connect with an endpoint and key
-
-The most common constructor for **CosmosClient** has two parameters:
-
-| Parameter | Example value | Description |
-| | | |
-| ``accountEndpoint`` | ``COSMOS_ENDPOINT`` environment variable | SQL API endpoint to use for all requests |
-| ``authKeyOrResourceToken`` | ``COSMOS_KEY`` environment variable | Account key or resource token to use when authenticating |
-
-#### Retrieve your account endpoint and key
-
-##### [Azure CLI](#tab/azure-cli)
-
-1. Create a shell variable for *resourceGroupName*.
-
- ```azurecli-interactive
- # Variable for resource group name
- resourceGroupName="msdocs-cosmos-dotnet-howto-rg"
- ```
-
-1. Use the [``az cosmosdb list``](/cli/azure/cosmosdb#az-cosmosdb-list) command to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *accountName* shell variable.
-
- ```azurecli-interactive
- # Retrieve most recently created account name
- accountName=$(
- az cosmosdb list \
- --resource-group $resourceGroupName \
- --query "[0].name" \
- --output tsv
- )
- ```
-
-1. Get the SQL API endpoint *URI* for the account using the [``az cosmosdb show``](/cli/azure/cosmosdb#az-cosmosdb-show) command.
-
- ```azurecli-interactive
- az cosmosdb show \
- --resource-group $resourceGroupName \
- --name $accountName \
- --query "documentEndpoint"
- ```
-
-1. Find the *PRIMARY KEY* from the list of keys for the account with the [`az cosmosdb keys list`](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
-
- ```azurecli-interactive
- az cosmosdb keys list \
- --resource-group $resourceGroupName \
- --name $accountName \
- --type "keys" \
- --query "primaryMasterKey"
- ```
-
-1. Record the *URI* and *PRIMARY KEY* values. You'll use these credentials later.
-
-##### [PowerShell](#tab/azure-powershell)
-
-1. Create a shell variable for *RESOURCE_GROUP_NAME*.
-
- ```azurepowershell-interactive
- # Variable for resource group name
- $RESOURCE_GROUP_NAME = "msdocs-cosmos-dotnet-howto-rg"
- ```
-
-1. Use the [``Get-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) cmdlet to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *ACCOUNT_NAME* shell variable.
-
- ```azurepowershell-interactive
- # Retrieve most recently created account name
- $parameters = @{
- ResourceGroupName = $RESOURCE_GROUP_NAME
- }
- $ACCOUNT_NAME = (
- Get-AzCosmosDBAccount @parameters |
- Select-Object -Property Name -First 1
- ).Name
- ```
-
-1. Get the SQL API endpoint *URI* for the account using the [``Get-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) cmdlet.
-
- ```azurepowershell-interactive
- $parameters = @{
- ResourceGroupName = $RESOURCE_GROUP_NAME
- Name = $ACCOUNT_NAME
- }
- Get-AzCosmosDBAccount @parameters |
- Select-Object -Property "DocumentEndpoint"
- ```
-
-1. Find the *PRIMARY KEY* from the list of keys for the account with the [``Get-AzCosmosDBAccountKey``](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
-
- ```azurepowershell-interactive
- $parameters = @{
- ResourceGroupName = $RESOURCE_GROUP_NAME
- Name = $ACCOUNT_NAME
- Type = "Keys"
- }
- Get-AzCosmosDBAccountKey @parameters |
- Select-Object -Property "PrimaryMasterKey"
- ```
-
-1. Record the *URI* and *PRIMARY KEY* values. You'll use these credentials later.
-
-##### [Portal](#tab/azure-portal)
-
-> [!TIP]
-> For this guide, we recommend using the resource group name ``msdocs-cosmos-dotnet-howto-rg``.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to the existing Azure Cosmos DB SQL API account page.
-
-1. From the Azure Cosmos DB SQL API account page, select the **Keys** navigation menu option.
-
- :::image type="content" source="media/get-credentials-portal/cosmos-keys-option.png" lightbox="media/get-credentials-portal/cosmos-keys-option.png" alt-text="Screenshot of an Azure Cosmos DB SQL A P I account page. The Keys option is highlighted in the navigation menu.":::
-
-1. Record the values from the **URI** and **PRIMARY KEY** fields. You'll use these values in a later step.
-
- :::image type="content" source="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" lightbox="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos DB SQL A P I account.":::
---
-To use the **URI** and **PRIMARY KEY** values within your .NET code, persist them to new environment variables on the local machine running the application.
-
-#### [Windows](#tab/windows)
-
-```powershell
-$env:COSMOS_ENDPOINT = "<cosmos-account-URI>"
-$env:COSMOS_KEY = "<cosmos-account-PRIMARY-KEY>"
-```
-
-#### [Linux / macOS](#tab/linux+macos)
-
-```bash
-export COSMOS_ENDPOINT="<cosmos-account-URI>"
-export COSMOS_KEY="<cosmos-account-PRIMARY-KEY>"
-```
---
-#### Create CosmosClient with account endpoint and key
-
-Create a new instance of the **CosmosClient** class with the ``COSMOS_ENDPOINT`` and ``COSMOS_KEY`` environment variables as parameters.
--
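As a rough sketch of this constructor call:

```csharp
using Microsoft.Azure.Cosmos;

// Read the endpoint and key from the environment variables set earlier.
CosmosClient client = new(
    accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
    authKeyOrResourceToken: Environment.GetEnvironmentVariable("COSMOS_KEY"));
```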
-### Connect with a connection string
-
-Another constructor for **CosmosClient** accepts a connection string as its only parameter:
-
-| Parameter | Example value | Description |
-| | | |
-| ``connectionString`` | ``COSMOS_CONNECTION_STRING`` environment variable | Connection string to the SQL API account |
-
-#### Retrieve your account connection string
-
-##### [Azure CLI](#tab/azure-cli)
-
-1. Use the [``az cosmosdb list``](/cli/azure/cosmosdb#az-cosmosdb-list) command to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *accountName* shell variable.
-
- ```azurecli-interactive
- # Retrieve most recently created account name
- accountName=$(
- az cosmosdb list \
- --resource-group $resourceGroupName \
- --query "[0].name" \
- --output tsv
- )
- ```
-
-1. Find the *PRIMARY CONNECTION STRING* from the list of connection strings for the account with the [`az cosmosdb keys list`](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
-
- ```azurecli-interactive
- az cosmosdb keys list \
- --resource-group $resourceGroupName \
- --name $accountName \
- --type "connection-strings" \
- --query "connectionStrings[?description == \`Primary SQL Connection String\`] | [0].connectionString"
- ```
-
-##### [PowerShell](#tab/azure-powershell)
-
-1. Use the [``Get-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) cmdlet to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *ACCOUNT_NAME* shell variable.
-
- ```azurepowershell-interactive
- # Retrieve most recently created account name
- $parameters = @{
- ResourceGroupName = $RESOURCE_GROUP_NAME
- }
- $ACCOUNT_NAME = (
- Get-AzCosmosDBAccount @parameters |
- Select-Object -Property Name -First 1
- ).Name
- ```
-
-1. Find the *PRIMARY CONNECTION STRING* from the list of connection strings for the account with the [``Get-AzCosmosDBAccountKey``](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
-
- ```azurepowershell-interactive
- $parameters = @{
- ResourceGroupName = $RESOURCE_GROUP_NAME
- Name = $ACCOUNT_NAME
- Type = "ConnectionStrings"
- }
- Get-AzCosmosDBAccountKey @parameters |
- Select-Object -Property "Primary SQL Connection String" -First 1
- ```
-
-##### [Portal](#tab/azure-portal)
-
-> [!TIP]
-> For this guide, we recommend using the resource group name ``msdocs-cosmos-dotnet-howto-rg``.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to the existing Azure Cosmos DB SQL API account page.
-
-1. From the Azure Cosmos DB SQL API account page, select the **Keys** navigation menu option.
-
-1. Record the value from the **PRIMARY CONNECTION STRING** field.
--
-To use the **PRIMARY CONNECTION STRING** value within your .NET code, persist it to a new environment variable on the local machine running the application.
-
-#### [Windows](#tab/windows)
-
-```powershell
-$env:COSMOS_CONNECTION_STRING = "<cosmos-account-PRIMARY-CONNECTION-STRING>"
-```
-
-#### [Linux / macOS](#tab/linux+macos)
-
-```bash
-export COSMOS_CONNECTION_STRING="<cosmos-account-PRIMARY-CONNECTION-STRING>"
-```
---
-#### Create CosmosClient with connection string
-
-Create a new instance of the **CosmosClient** class with the ``COSMOS_CONNECTION_STRING`` environment variable as the only parameter.
--
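As a rough sketch:

```csharp
using Microsoft.Azure.Cosmos;

// Read the connection string from the environment variable set earlier.
CosmosClient client = new(
    connectionString: Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
```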
-### Connect using the Microsoft Identity Platform
-
-To connect to your SQL API account using the Microsoft Identity Platform and Azure AD, use a security principal. The exact type of principal will depend on where you host your application code. The table below serves as a quick reference guide.
-
-| Where the application runs | Security principal |
-|--|--|
-| Local machine (developing and testing) | User identity or service principal |
-| Azure | Managed identity |
-| Servers or clients outside of Azure | Service principal |
-
-#### Import Azure.Identity
-
-The **Azure.Identity** NuGet package contains core authentication functionality that is shared among all Azure SDK libraries.
-
-Import the [Azure.Identity](https://www.nuget.org/packages/Azure.Identity) NuGet package using the ``dotnet add package`` command.
-
-```dotnetcli
-dotnet add package Azure.Identity
-```
-
-Rebuild the project with the ``dotnet build`` command.
-
-```dotnetcli
-dotnet build
-```
-
-In your code editor, add using directives for ``Azure.Core`` and ``Azure.Identity`` namespaces.
--
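For example:

```csharp
using Azure.Core;
using Azure.Identity;
```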
-#### Create CosmosClient with default credential implementation
-
-If you're testing on a local machine, or your application will run on Azure services with direct support for managed identities, obtain an OAuth token by creating a [``DefaultAzureCredential``](/dotnet/api/azure.identity.defaultazurecredential) instance.
-
-For this example, we saved the instance in a variable of type [``TokenCredential``](/dotnet/api/azure.core.tokencredential) as that's a more generic type that's reusable across SDKs.
--
-Create a new instance of the **CosmosClient** class with the ``COSMOS_ENDPOINT`` environment variable and the **TokenCredential** object as parameters.
--
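A minimal sketch of this pattern, assuming the using directives shown earlier:

```csharp
// Assumes the Azure.Core, Azure.Identity, and Microsoft.Azure.Cosmos using directives shown earlier.
// DefaultAzureCredential walks a chain of credential sources (environment, managed identity,
// Visual Studio, Azure CLI, and so on) until one succeeds.
TokenCredential credential = new DefaultAzureCredential();

CosmosClient client = new(
    accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
    tokenCredential: credential);
```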
-#### Create CosmosClient with a custom credential implementation
-
-If you plan to deploy the application outside of Azure, you can obtain an OAuth token by using other classes in the [Azure.Identity client library for .NET](/dotnet/api/overview/azure/identity-readme). These other classes also derive from the ``TokenCredential`` class.
-
-For this example, we create a [``ClientSecretCredential``](/dotnet/api/azure.identity.clientsecretcredential) instance by using client and tenant identifiers, along with a client secret.
--
-You can obtain the client ID, tenant ID, and client secret when you register an application in Azure Active Directory (AD). For more information about registering Azure AD applications, see [Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
-
-Create a new instance of the **CosmosClient** class with the ``COSMOS_ENDPOINT`` environment variable and the **TokenCredential** object as parameters.
--
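A minimal sketch, with placeholder values for the Azure AD identifiers:

```csharp
// Placeholder values; supply the identifiers from your Azure AD app registration.
TokenCredential credential = new ClientSecretCredential(
    tenantId: "<azure-ad-tenant-id>",
    clientId: "<app-registration-client-id>",
    clientSecret: "<app-registration-client-secret>");

CosmosClient client = new(
    accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
    tokenCredential: credential);
```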
-## Build your application
-
-As you build your application, your code will primarily interact with four types of resources:
-
-* The SQL API account, which is the unique top-level namespace for your Azure Cosmos DB data.
-
-* Databases, which organize the containers in your account.
-
-* Containers, which contain a set of individual items in your database.
-
-* Items, which represent a JSON document in your container.
-
-The following diagram shows the relationship between these resources.
-
-*Diagram: an Azure Cosmos DB account contains one or more databases, each database contains one or more containers, and each container contains items.*
-
-Each type of resource is represented by one or more associated .NET classes. Here's a list of the most common classes:
-
-| Class | Description |
-|||
-| [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) | This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service. |
-| [``Database``](/dotnet/api/microsoft.azure.cosmos.database) | This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it. |
-| [``Container``](/dotnet/api/microsoft.azure.cosmos.container) | This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it. |
-
-The following guides show you how to use each of these classes to build your application.
-
-| Guide | Description |
-|--||
-| [Create a database](how-to-dotnet-create-database.md) | Create databases |
-| [Create a container](how-to-dotnet-create-container.md) | Create containers |
-| [Read an item](how-to-dotnet-read-item.md) | Point read a specific item |
-| [Query items](how-to-dotnet-query-items.md) | Query multiple items |
-
-## See also
-
-* [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos)
-* [Samples](samples-dotnet.md)
-* [API reference](/dotnet/api/microsoft.azure.cosmos)
-* [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3)
-* [Give Feedback](https://github.com/Azure/azure-cosmos-dotnet-v3/issues)
-
-## Next steps
-
-Now that you've connected to a SQL API account, use the next guide to create and manage databases.
-
-> [!div class="nextstepaction"]
-> [Create a database in Azure Cosmos DB SQL API using .NET](how-to-dotnet-create-database.md)
cosmos-db How To Dotnet Query Items https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-query-items.md
- Title: Query items in Azure Cosmos DB SQL API using .NET
-description: Learn how to query items in your Azure Cosmos DB SQL API container using the .NET SDK.
----- Previously updated : 06/15/2022---
-# Query items in Azure Cosmos DB SQL API using .NET
--
-Items in Azure Cosmos DB represent entities stored within a container. In the SQL API, an item consists of JSON-formatted data with a unique identifier. When you issue queries using the SQL API, results are returned as a JSON array of JSON documents.
-
-## Query items using SQL
-
-The Azure Cosmos DB SQL API supports the use of Structured Query Language (SQL) to perform queries on items in containers. A simple SQL query like ``SELECT * FROM products`` will return all items and properties from a container. Queries can be even more complex and include specific field projections, filters, and other common SQL clauses:
-
-```sql
-SELECT
- p.name,
- p.description AS copy
-FROM
- products p
-WHERE
- p.price > 500
-```
-
-To learn more about the SQL syntax for Azure Cosmos DB SQL API, see [Getting started with SQL queries](sql-query-getting-started.md).
-
-## Query an item
-
-> [!NOTE]
-> The examples in this article assume that you have already defined a C# type to represent your data named **Product**:
->
-> :::code language="csharp" source="~/azure-cosmos-dotnet-v3/300-query-items/Product.cs" id="type" :::
->
-
-To query items in a container, call one of the following methods:
-
-* [``GetItemQueryIterator<>``](#query-items-using-a-sql-query-asynchronously)
-* [``GetItemLinqQueryable<>``](#query-items-using-linq-asynchronously)
-
-## Query items using a SQL query asynchronously
-
-This example builds a SQL query using a simple string, retrieves a feed iterator, and then uses nested loops to iterate over results. The outer **while** loop will iterate through result pages, while the inner **foreach** loop iterates over results within a page.
--
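As a rough sketch of this pattern, assuming a `Container` instance named `container` and a `Product` type with `name` and `price` properties:

```csharp
// Build the query from a simple string and iterate over the results page by page.
FeedIterator<Product> feed = container.GetItemQueryIterator<Product>(
    queryText: "SELECT * FROM products p WHERE p.price > 500");

while (feed.HasMoreResults)
{
    FeedResponse<Product> page = await feed.ReadNextAsync();
    foreach (Product item in page)
    {
        Console.WriteLine($"{item.name}: {item.price}");
    }
}
```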
-The [Container.GetItemQueryIterator<>](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) method returns a [``FeedIterator<>``](/dotnet/api/microsoft.azure.cosmos.feediterator-1) that is used to iterate through multi-page results. The ``HasMoreResults`` property indicates if there are more result pages left. The ``ReadNextAsync`` method gets the next page of results as an enumerable that is then used in a loop to iterate over results.
-
-Alternatively, use the [QueryDefinition](/dotnet/api/microsoft.azure.cosmos.querydefinition) to build a SQL query with parameterized input:
--
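A comparable sketch using a parameterized query, with the same assumptions:

```csharp
// Parameterize the query instead of concatenating user input into the query string.
QueryDefinition query = new QueryDefinition(
        "SELECT * FROM products p WHERE p.price > @lowerBound")
    .WithParameter("@lowerBound", 500);

FeedIterator<Product> feed = container.GetItemQueryIterator<Product>(query);
```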
-> [!TIP]
-> Parameterized input values can help prevent many common SQL query injection attacks.
-
-## Query items using LINQ asynchronously
-
-In this example, an [``IQueryable``<>](/dotnet/api/system.linq.iqueryable) object is used to construct a [Language Integrated Query (LINQ)](/dotnet/csharp/programming-guide/concepts/linq/). The results are then iterated over using a feed iterator.
--
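A minimal sketch of the LINQ approach, with the same assumptions:

```csharp
using System.Linq;
using Microsoft.Azure.Cosmos.Linq; // provides the ToFeedIterator() extension method

// Compose the query with LINQ, then convert it into a feed iterator for asynchronous paging.
IQueryable<Product> queryable = container.GetItemLinqQueryable<Product>()
    .Where(p => p.price > 500);

FeedIterator<Product> feed = queryable.ToFeedIterator();
while (feed.HasMoreResults)
{
    foreach (Product item in await feed.ReadNextAsync())
    {
        Console.WriteLine(item.name);
    }
}
```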
-The [Container.GetItemLinqQueryable<>](/dotnet/api/microsoft.azure.cosmos.container.getitemlinqqueryable) method constructs an ``IQueryable`` to build the LINQ query. Then the ``ToFeedIterator<>`` method is used to convert the LINQ query expression into a [``FeedIterator<>``](/dotnet/api/microsoft.azure.cosmos.feediterator-1).
-
-> [!TIP]
-> While you can iterate over the ``IQueryable<>``, this operation is synchronous. Use the ``ToFeedIterator<>`` method to gather results asynchronously.
-
-## Next steps
-
-Now that you've queried multiple items, try one of our end-to-end tutorials with the SQL API.
-
-> [!div class="nextstepaction"]
-> [Build an app that queries and adds data to Azure Cosmos DB SQL API](/training/modules/build-dotnet-app-cosmos-db-sql-api/)
cosmos-db How To Dotnet Read Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-read-item.md
- Title: Read an item in Azure Cosmos DB SQL API using .NET
-description: Learn how to point read a specific item in your Azure Cosmos DB SQL API container using the .NET SDK.
----- Previously updated : 07/06/2022---
-# Read an item in Azure Cosmos DB SQL API using .NET
--
-Items in Azure Cosmos DB represent a specific entity stored within a container. In the SQL API, an item consists of JSON-formatted data with a unique identifier.
-
-## Reading items with unique identifiers
-
-Every item in Azure Cosmos DB SQL API has a unique identifier specified by the ``id`` property. Within the scope of a container, two items can't share the same unique identifier. However, Azure Cosmos DB requires both the unique identifier and the partition key value of an item to perform a quick *point read* of that item. If only the unique identifier is available, you would have to perform a less efficient [query](how-to-dotnet-query-items.md) to look up the item across multiple logical partitions. To learn more about point reads and queries, see [optimize request cost for reading data](../optimize-cost-reads-writes.md#reading-data-point-reads-and-queries).
-
-## Read an item
-
-> [!NOTE]
-> The examples in this article assume that you have already defined a C# type to represent your data named **Product**:
->
-> :::code language="csharp" source="~/azure-cosmos-dotnet-v3/275-read-item/Product.cs" id="type" :::
->
-
-To perform a point read of an item, call one of the following methods:
-
-* [``ReadItemAsync<>``](#read-an-item-asynchronously)
-* [``ReadItemStreamAsync<>``](#read-an-item-as-a-stream-asynchronously)
-* [``ReadManyItemsAsync<>``](#read-multiple-items-asynchronously)
-
-## Read an item asynchronously
-
-The following example point reads a single item asynchronously and returns a deserialized item using the provided generic type:
--
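A minimal sketch of a point read, assuming a `Container` instance named `container` and placeholder id and partition key values:

```csharp
// Point read: both the unique identifier and the partition key value are required.
Product item = await container.ReadItemAsync<Product>(
    id: "68719518388",
    partitionKey: new PartitionKey("gear-surf-surfboards"));
```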
-The [``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) method reads an item and returns an object of type [``ItemResponse<>``](/dotnet/api/microsoft.azure.cosmos.itemresponse-1). The **ItemResponse<>** type inherits from the [``Response<>``](/dotnet/api/microsoft.azure.cosmos.response-1) type, which contains an implicit conversion operator to convert the object into the generic type. To learn more about implicit operators, see [user-defined conversion operators](/dotnet/csharp/language-reference/operators/user-defined-conversion-operators).
-
-Alternatively, you can return the **ItemResponse<>** generic type and explicitly get the resource. The more general **ItemResponse<>** type also contains useful metadata about the underlying API operation. In this example, metadata about the request unit charge for this operation is gathered using the **RequestCharge** property.
--
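A comparable sketch that keeps the full response, with the same placeholder values:

```csharp
// Keep the full response to inspect metadata such as the request charge.
ItemResponse<Product> response = await container.ReadItemAsync<Product>(
    id: "68719518388",
    partitionKey: new PartitionKey("gear-surf-surfboards"));

Console.WriteLine($"Request charge: {response.RequestCharge} RUs");

// Explicitly get the deserialized item from the response.
Product item = response.Resource;
```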
-## Read an item as a stream asynchronously
-
-This example reads an item as a data stream directly:
--
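A minimal sketch of the stream-based read, with the same placeholder values:

```csharp
using System.IO;

// Read the raw item payload without deserializing it.
using ResponseMessage response = await container.ReadItemStreamAsync(
    id: "68719518388",
    partitionKey: new PartitionKey("gear-surf-surfboards"));

if (response.IsSuccessStatusCode)
{
    using StreamReader reader = new(response.Content);
    string json = await reader.ReadToEndAsync();
    Console.WriteLine(json);
}
```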
-The [``Container.ReadItemStreamAsync``](/dotnet/api/microsoft.azure.cosmos.container.readitemstreamasync) method returns the item as a [``Stream``](/dotnet/api/system.io.stream) without deserializing the contents.
-
-If you aren't planning to deserialize the items directly, using the stream APIs can improve performance by handing off the item as a stream directly to the next component of your application. For more tips on how to optimize the SDK for high performance scenarios, see [SDK performance tips](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage).
-
-## Read multiple items asynchronously
-
-In this example, a list of tuples containing unique identifier and partition key pairs are used to look up and retrieve multiple items:
--
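A minimal sketch, with placeholder id and partition key values:

```csharp
// Each tuple pairs an item's unique identifier with its partition key value (placeholders).
IReadOnlyList<(string id, PartitionKey partitionKey)> itemsToRead = new[]
{
    ("68719518388", new PartitionKey("gear-surf-surfboards")),
    ("68719518402", new PartitionKey("gear-surf-surfboards"))
};

FeedResponse<Product> response = await container.ReadManyItemsAsync<Product>(itemsToRead);
foreach (Product item in response)
{
    Console.WriteLine(item.name);
}
```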
-[``Container.ReadManyItemsAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readmanyitemsasync) returns a list of items based on the unique identifiers and partition keys you provide. This operation is typically more performant than a query since you'll effectively perform a point read operation on all items in the list.
-
-## Next steps
-
-Now that you've read various items, use the next guide to query items.
-
-> [!div class="nextstepaction"]
-> [Query items](how-to-dotnet-query-items.md)
cosmos-db How To Manage Conflicts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-manage-conflicts.md
- Title: Manage conflicts between regions in Azure Cosmos DB
-description: Learn how to manage conflicts in Azure Cosmos DB by creating the last-writer-wins or a custom conflict resolution policy
---- Previously updated : 06/11/2020-----
-# Manage conflict resolution policies in Azure Cosmos DB
-
-With multi-region writes, when multiple clients write to the same item, conflicts may occur. When a conflict occurs, you can resolve the conflict by using different conflict resolution policies. This article describes how to manage conflict resolution policies.
-
-> [!TIP]
-> Conflict resolution policy can only be specified at container creation time and cannot be modified after container creation.
-
-## Create a last-writer-wins conflict resolution policy
-
-These samples show how to set up a container with a last-writer-wins conflict resolution policy. The default path for last-writer-wins is the timestamp field or the `_ts` property. For SQL API, this may also be set to a user-defined path with a numeric type. In a conflict, the highest value wins. If the path isn't set or it's invalid, it defaults to `_ts`. Conflicts resolved with this policy do not show up in the conflict feed. This policy can be used by all APIs.
-
-### <a id="create-custom-conflict-resolution-policy-lww-dotnet"></a>.NET SDK
-
-# [.NET SDK V2](#tab/dotnetv2)
-
-```csharp
-DocumentCollection lwwCollection = await createClient.CreateDocumentCollectionIfNotExistsAsync(
- UriFactory.CreateDatabaseUri(this.databaseName), new DocumentCollection
- {
- Id = this.lwwCollectionName,
- ConflictResolutionPolicy = new ConflictResolutionPolicy
- {
- Mode = ConflictResolutionMode.LastWriterWins,
- ConflictResolutionPath = "/myCustomId",
- },
- });
-```
-
-# [.NET SDK V3](#tab/dotnetv3)
-
-```csharp
-Container container = await createClient.GetDatabase(this.databaseName)
- .CreateContainerIfNotExistsAsync(new ContainerProperties(this.lwwCollectionName, "/partitionKey")
- {
- ConflictResolutionPolicy = new ConflictResolutionPolicy()
- {
- Mode = ConflictResolutionMode.LastWriterWins,
- ResolutionPath = "/myCustomId",
- }
- });
-```
--
-### <a id="create-custom-conflict-resolution-policy-lww-javav4"></a> Java V4 SDK
-
-# [Async](#tab/api-async)
-
- Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=ManageConflictResolutionLWWAsync)]
-
-# [Sync](#tab/api-sync)
-
- Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=ManageConflictResolutionLWWSync)]
-
-
-
-### <a id="create-custom-conflict-resolution-policy-lww-javav2"></a>Java V2 SDKs
-
-# [Async Java V2 SDK](#tab/async)
-
-[Async Java V2 SDK](sql-api-sdk-async-java.md) (Maven [com.microsoft.azure::azure-cosmosdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb))
-
-```java
-DocumentCollection collection = new DocumentCollection();
-collection.setId(id);
-ConflictResolutionPolicy policy = ConflictResolutionPolicy.createLastWriterWinsPolicy("/myCustomId");
-collection.setConflictResolutionPolicy(policy);
-DocumentCollection createdCollection = client.createCollection(databaseUri, collection, null).toBlocking().value();
-```
-
-# [Sync Java V2 SDK](#tab/sync)
-
-[Sync Java V2 SDK](sql-api-sdk-java.md) (Maven [com.microsoft.azure::azure-documentdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-documentdb))
-
-```java
-DocumentCollection lwwCollection = new DocumentCollection();
-lwwCollection.setId(this.lwwCollectionName);
-ConflictResolutionPolicy lwwPolicy = ConflictResolutionPolicy.createLastWriterWinsPolicy("/myCustomId");
-lwwCollection.setConflictResolutionPolicy(lwwPolicy);
-DocumentCollection createdCollection = this.tryCreateDocumentCollection(createClient, database, lwwCollection);
-```
--
-### <a id="create-custom-conflict-resolution-policy-lww-javascript"></a>Node.js/JavaScript/TypeScript SDK
-
-```javascript
-const database = client.database(this.databaseName);
-const { container: lwwContainer } = await database.containers.createIfNotExists(
- {
- id: this.lwwContainerName,
- conflictResolutionPolicy: {
- mode: "LastWriterWins",
- conflictResolutionPath: "/myCustomId"
- }
- }
-);
-```
-
-### <a id="create-custom-conflict-resolution-policy-lww-python"></a>Python SDK
-
-```python
-udp_collection = {
- 'id': self.udp_collection_name,
- 'conflictResolutionPolicy': {
- 'mode': 'LastWriterWins',
- 'conflictResolutionPath': '/myCustomId'
- }
-}
-udp_collection = self.try_create_document_collection(
- create_client, database, udp_collection)
-```
-
-## Create a custom conflict resolution policy using a stored procedure
-
-These samples show how to set up a container with a custom conflict resolution policy that uses a stored procedure to resolve the conflict. These conflicts don't show up in the conflict feed unless there's an error in your stored procedure. After the policy is created with the container, you need to create the stored procedure; the .NET SDK sample below shows an example. This policy is supported on the Core (SQL) API only.
-
-### Sample custom conflict resolution stored procedure
-
-Custom conflict resolution stored procedures must be implemented using the function signature shown below. The function name doesn't need to match the name used when registering the stored procedure with the container, but using the same name simplifies naming. The stored procedure must implement the following parameters:
-- **incomingItem**: The item being inserted or updated in the commit that is generating the conflict. This value is null for delete operations.
-- **existingItem**: The currently committed item. This value is non-null for an update and null for an insert or delete.
-- **isTombstone**: Boolean indicating whether the incomingItem is conflicting with a previously deleted item. When true, existingItem is also null.
-- **conflictingItems**: Array of the committed versions of all items in the container that conflict with incomingItem on ID or any other unique index properties.
-
-> [!IMPORTANT]
-> Just as with any stored procedure, a custom conflict resolution procedure can access any data with the same partition key and can perform any insert, update or delete operation to resolve conflicts.
-
-This sample stored procedure resolves conflicts by selecting the lowest value from the `/myCustomId` path.
-
-```javascript
-function resolver(incomingItem, existingItem, isTombstone, conflictingItems) {
- var collection = getContext().getCollection();
-
- if (!incomingItem) {
- if (existingItem) {
-
- collection.deleteDocument(existingItem._self, {}, function (err, responseOptions) {
- if (err) throw err;
- });
- }
- } else if (isTombstone) {
- // delete always wins.
- } else {
- if (existingItem) {
- if (incomingItem.myCustomId > existingItem.myCustomId) {
- return; // existing item wins
- }
- }
-
- var i;
- for (i = 0; i < conflictingItems.length; i++) {
- if (incomingItem.myCustomId > conflictingItems[i].myCustomId) {
- return; // existing conflict item wins
- }
- }
-
- // incoming item wins - clear conflicts and replace existing with incoming.
- tryDelete(conflictingItems, incomingItem, existingItem);
- }
-
- function tryDelete(documents, incoming, existing) {
- if (documents.length > 0) {
- collection.deleteDocument(documents[0]._self, {}, function (err, responseOptions) {
- if (err) throw err;
-
- documents.shift();
- tryDelete(documents, incoming, existing);
- });
- } else if (existing) {
- collection.replaceDocument(existing._self, incoming,
- function (err, documentCreated) {
- if (err) throw err;
- });
- } else {
- collection.createDocument(collection.getSelfLink(), incoming,
- function (err, documentCreated) {
- if (err) throw err;
- });
- }
- }
-}
-```
-
-### <a id="create-custom-conflict-resolution-policy-stored-proc-dotnet"></a>.NET SDK
-
-# [.NET SDK V2](#tab/dotnetv2)
-
-```csharp
-DocumentCollection udpCollection = await createClient.CreateDocumentCollectionIfNotExistsAsync(
- UriFactory.CreateDatabaseUri(this.databaseName), new DocumentCollection
- {
- Id = this.udpCollectionName,
- ConflictResolutionPolicy = new ConflictResolutionPolicy
- {
- Mode = ConflictResolutionMode.Custom,
- ConflictResolutionProcedure = string.Format("dbs/{0}/colls/{1}/sprocs/{2}", this.databaseName, this.udpCollectionName, "resolver"),
- },
- });
-
-//Create the stored procedure
-await clients[0].CreateStoredProcedureAsync(
-UriFactory.CreateStoredProcedureUri(this.databaseName, this.udpCollectionName, "resolver"), new StoredProcedure
-{
- Id = "resolver",
- Body = File.ReadAllText(@"resolver.js")
-});
-```
-
-# [.NET SDK V3](#tab/dotnetv3)
-
-```csharp
-Container container = await createClient.GetDatabase(this.databaseName)
- .CreateContainerIfNotExistsAsync(new ContainerProperties(this.udpCollectionName, "/partitionKey")
- {
- ConflictResolutionPolicy = new ConflictResolutionPolicy()
- {
- Mode = ConflictResolutionMode.Custom,
- ResolutionProcedure = string.Format("dbs/{0}/colls/{1}/sprocs/{2}", this.databaseName, this.udpCollectionName, "resolver")
- }
- });
-
-await container.Scripts.CreateStoredProcedureAsync(
- new StoredProcedureProperties("resolver", File.ReadAllText(@"resolver.js"))
-);
-```
--
-### <a id="create-custom-conflict-resolution-policy-stored-proc-javav4"></a> Java V4 SDK
-
-# [Async](#tab/api-async)
-
- Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=ManageConflictResolutionSprocAsync)]
-
-# [Sync](#tab/api-sync)
-
- Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=ManageConflictResolutionSprocSync)]
-
-
-
-### <a id="create-custom-conflict-resolution-policy-stored-proc-javav2"></a>Java V2 SDKs
-
-# [Async Java V2 SDK](#tab/async)
-
-[Async Java V2 SDK](sql-api-sdk-async-java.md) (Maven [com.microsoft.azure::azure-cosmosdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb))
-
-```java
-DocumentCollection collection = new DocumentCollection();
-collection.setId(id);
-ConflictResolutionPolicy policy = ConflictResolutionPolicy.createCustomPolicy("resolver");
-collection.setConflictResolutionPolicy(policy);
-DocumentCollection createdCollection = client.createCollection(databaseUri, collection, null).toBlocking().value();
-```
-
-# [Sync Java V2 SDK](#tab/sync)
-
-[Sync Java V2 SDK](sql-api-sdk-java.md) (Maven [com.microsoft.azure::azure-documentdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-documentdb))
-
-```java
-DocumentCollection udpCollection = new DocumentCollection();
-udpCollection.setId(this.udpCollectionName);
-ConflictResolutionPolicy udpPolicy = ConflictResolutionPolicy.createCustomPolicy(
- String.format("dbs/%s/colls/%s/sprocs/%s", this.databaseName, this.udpCollectionName, "resolver"));
-udpCollection.setConflictResolutionPolicy(udpPolicy);
-DocumentCollection createdCollection = this.tryCreateDocumentCollection(createClient, database, udpCollection);
-```
--
-After your container is created, you must create the `resolver` stored procedure.
-
-### <a id="create-custom-conflict-resolution-policy-stored-proc-javascript"></a>Node.js/JavaScript/TypeScript SDK
-
-```javascript
-const database = client.database(this.databaseName);
-const { container: udpContainer } = await database.containers.createIfNotExists(
- {
- id: this.udpContainerName,
- conflictResolutionPolicy: {
- mode: "Custom",
- conflictResolutionProcedure: `dbs/${this.databaseName}/colls/${
- this.udpContainerName
- }/sprocs/resolver`
- }
- }
-);
-```
-
-After your container is created, you must create the `resolver` stored procedure.
-
-### <a id="create-custom-conflict-resolution-policy-stored-proc-python"></a>Python SDK
-
-```python
-udp_collection = {
- 'id': self.udp_collection_name,
- 'conflictResolutionPolicy': {
- 'mode': 'Custom',
- 'conflictResolutionProcedure': 'dbs/' + self.database_name + "/colls/" + self.udp_collection_name + '/sprocs/resolver'
- }
-}
-udp_collection = self.try_create_document_collection(
- create_client, database, udp_collection)
-```
-
-After your container is created, you must create the `resolver` stored procedure.
-
-## Create a custom conflict resolution policy
-
-These samples show how to set up a container with a custom conflict resolution policy. These conflicts show up in the conflict feed.
-
-### <a id="create-custom-conflict-resolution-policy-dotnet"></a>.NET SDK
-
-# [.NET SDK V2](#tab/dotnetv2)
-
-```csharp
-DocumentCollection manualCollection = await createClient.CreateDocumentCollectionIfNotExistsAsync(
- UriFactory.CreateDatabaseUri(this.databaseName), new DocumentCollection
- {
- Id = this.manualCollectionName,
- ConflictResolutionPolicy = new ConflictResolutionPolicy
- {
- Mode = ConflictResolutionMode.Custom,
- },
- });
-```
-
-# [.NET SDK V3](#tab/dotnetv3)
-
-```csharp
-Container container = await createClient.GetDatabase(this.databaseName)
- .CreateContainerIfNotExistsAsync(new ContainerProperties(this.manualCollectionName, "/partitionKey")
- {
- ConflictResolutionPolicy = new ConflictResolutionPolicy()
- {
- Mode = ConflictResolutionMode.Custom
- }
- });
-```
--
-### <a id="create-custom-conflict-resolution-policy-javav4"></a> Java V4 SDK
-
-# [Async](#tab/api-async)
-
- Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=ManageConflictResolutionCustomAsync)]
-
-# [Sync](#tab/api-sync)
-
- Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=ManageConflictResolutionCustomSync)]
-
-
-
-### <a id="create-custom-conflict-resolution-policy-javav2"></a>Java V2 SDKs
-
-# [Async Java V2 SDK](#tab/async)
-
-[Async Java V2 SDK](sql-api-sdk-async-java.md) (Maven [com.microsoft.azure::azure-cosmosdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb))
-
-```java
-DocumentCollection collection = new DocumentCollection();
-collection.setId(id);
-ConflictResolutionPolicy policy = ConflictResolutionPolicy.createCustomPolicy();
-collection.setConflictResolutionPolicy(policy);
-DocumentCollection createdCollection = client.createCollection(databaseUri, collection, null).toBlocking().value();
-```
-
-# [Sync Java V2 SDK](#tab/sync)
-
-[Sync Java V2 SDK](sql-api-sdk-java.md) (Maven [com.microsoft.azure::azure-documentdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-documentdb))
-
-```java
-DocumentCollection manualCollection = new DocumentCollection();
-manualCollection.setId(this.manualCollectionName);
-ConflictResolutionPolicy customPolicy = ConflictResolutionPolicy.createCustomPolicy(null);
-manualCollection.setConflictResolutionPolicy(customPolicy);
-DocumentCollection createdCollection = client.createCollection(database.getSelfLink(), manualCollection, null).getResource();
-```
--
-### <a id="create-custom-conflict-resolution-policy-javascript"></a>Node.js/JavaScript/TypeScript SDK
-
-```javascript
-const database = client.database(this.databaseName);
-const {
- container: manualContainer
-} = await database.containers.createIfNotExists({
- id: this.manualContainerName,
- conflictResolutionPolicy: {
- mode: "Custom"
- }
-});
-```
-
-### <a id="create-custom-conflict-resolution-policy-python"></a>Python SDK
-
-```python
-database = client.ReadDatabase("dbs/" + self.database_name)
-manual_collection = {
- 'id': self.manual_collection_name,
- 'conflictResolutionPolicy': {
- 'mode': 'Custom'
- }
-}
-manual_collection = client.CreateContainer(database['_self'], manual_collection)
-```
-
-## Read from conflict feed
-
-These samples show how to read from a container's conflict feed. Conflicts show up in the conflict feed only if they weren't resolved automatically or if using a custom conflict policy.
-
-### <a id="read-from-conflict-feed-dotnet"></a>.NET SDK
-
-# [.NET SDK V2](#tab/dotnetv2)
-
-```csharp
-FeedResponse<Conflict> conflicts = await delClient.ReadConflictFeedAsync(this.collectionUri);
-```
-
-# [.NET SDK V3](#tab/dotnetv3)
-
-```csharp
-FeedIterator<ConflictProperties> conflictFeed = container.Conflicts.GetConflictQueryIterator();
-while (conflictFeed.HasMoreResults)
-{
- FeedResponse<ConflictProperties> conflicts = await conflictFeed.ReadNextAsync();
- foreach (ConflictProperties conflict in conflicts)
- {
- // Read the conflicted content
- MyClass intendedChanges = container.Conflicts.ReadConflictContent<MyClass>(conflict);
- MyClass currentState = await container.Conflicts.ReadCurrentAsync<MyClass>(conflict, new PartitionKey(intendedChanges.MyPartitionKey));
-
- // Do manual merge among documents
- await container.ReplaceItemAsync<MyClass>(intendedChanges, intendedChanges.Id, new PartitionKey(intendedChanges.MyPartitionKey));
-
- // Delete the conflict
- await container.Conflicts.DeleteAsync(conflict, new PartitionKey(intendedChanges.MyPartitionKey));
- }
-}
-```
--
-### <a id="read-from-conflict-feed-javav2"></a>Java V2 SDKs
-
-# [Async Java V2 SDK](#tab/async)
-
-[Async Java V2 SDK](sql-api-sdk-async-java.md) (Maven [com.microsoft.azure::azure-cosmosdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb))
-
-```java
-FeedResponse<Conflict> response = client.readConflicts(this.manualCollectionUri, null)
- .first().toBlocking().single();
-for (Conflict conflict : response.getResults()) {
- /* Do something with conflict */
-}
-```
-# [Sync Java V2 SDK](#tab/sync)
-
-[Sync Java V2 SDK](sql-api-sdk-java.md) (Maven [com.microsoft.azure::azure-documentdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-documentdb))
-
-```java
-Iterator<Conflict> conflictsIterator = client.readConflicts(this.collectionLink, null).getQueryIterator();
-while (conflictsIterator.hasNext()) {
- Conflict conflict = conflictsIterator.next();
- /* Do something with conflict */
-}
-```
--
-### <a id="read-from-conflict-feed-javascript"></a>Node.js/JavaScript/TypeScript SDK
-
-```javascript
-const container = client
- .database(this.databaseName)
- .container(this.lwwContainerName);
-
-const { result: conflicts } = await container.conflicts.readAll().toArray();
-```
-
-### <a id="read-from-conflict-feed-python"></a>Python
-
-```python
-conflicts_iterator = iter(client.ReadConflicts(self.manual_collection_link))
-conflict = next(conflicts_iterator, None)
-while conflict:
- # Do something with conflict
- conflict = next(conflicts_iterator, None)
-```
-
-## Next steps
-
-Learn about the following Azure Cosmos DB concepts:
-- [Global distribution - under the hood](../global-dist-under-the-hood.md)
-- [How to configure multi-region writes in your applications](how-to-multi-master.md)
-- [Configure clients for multihoming](../how-to-manage-database-account.md#configure-multiple-write-regions)
-- [Add or remove regions from your Azure Cosmos DB account](../how-to-manage-database-account.md#addremove-regions-from-your-database-account)
-- [Partitioning and data distribution](../partitioning-overview.md)
-- [Indexing in Azure Cosmos DB](../index-policy.md)
cosmos-db How To Manage Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-manage-consistency.md
- Title: Manage consistency in Azure Cosmos DB
-description: Learn how to configure and manage consistency levels in Azure Cosmos DB using Azure portal, .NET SDK, Java SDK and various other SDKs
---- Previously updated : 02/16/2022-----
-# Manage consistency levels in Azure Cosmos DB
-
-This article explains how to manage consistency levels in Azure Cosmos DB. You learn how to configure the default consistency level, override the default consistency, manually manage session tokens, and understand the Probabilistically Bounded Staleness (PBS) metric.
--
-## Configure the default consistency level
-
-The [default consistency level](../consistency-levels.md) is the consistency level that clients use by default.
-
-# [Azure portal](#tab/portal)
-
-To view or modify the default consistency level, sign in to the Azure portal. Find your Azure Cosmos account, and open the **Default consistency** pane. Select the level of consistency you want as the new default, and then select **Save**. The Azure portal also provides a visualization of different consistency levels with music notes.
--
-# [CLI](#tab/cli)
-
-Create a Cosmos account with Session consistency, then update the default consistency.
-
-```azurecli
-# Create a new account with Session consistency
-az cosmosdb create --name $accountName --resource-group $resourceGroupName --default-consistency-level Session
-
-# update an existing account's default consistency
-az cosmosdb update --name $accountName --resource-group $resourceGroupName --default-consistency-level Strong
-```
-
-# [PowerShell](#tab/powershell)
-
-Create a Cosmos account with Session consistency, then update the default consistency.
-
-```azurepowershell-interactive
-# Create a new account with Session consistency
-New-AzCosmosDBAccount -ResourceGroupName $resourceGroupName `
- -Location $locations -Name $accountName -DefaultConsistencyLevel "Session"
-
-# Update an existing account's default consistency
-Update-AzCosmosDBAccount -ResourceGroupName $resourceGroupName `
- -Name $accountName -DefaultConsistencyLevel "Strong"
-```
---
-## Override the default consistency level
-
-Clients can override the default consistency level that is set by the service. Consistency level can be set on a per-request basis, which overrides the default consistency level set at the account level.
-
-> [!TIP]
-> Consistency can only be **relaxed** at the SDK instance or request level. To move from weaker to stronger consistency, update the default consistency for the Cosmos account.
-
-> [!TIP]
-> Overriding the default consistency level only applies to reads within the SDK client. An account configured for strong consistency by default will still write and replicate data synchronously to every region in the account. When the SDK client instance or request overrides this with Session or weaker consistency, reads will be performed using a single replica. See [Consistency levels and throughput](../consistency-levels.md#consistency-levels-and-throughput) for more details.
-
-### <a id="override-default-consistency-dotnet"></a>.NET SDK
-
-# [.NET SDK V2](#tab/dotnetv2)
-
-```csharp
-// Override consistency at the client level
-documentClient = new DocumentClient(new Uri(endpoint), authKey, connectionPolicy, ConsistencyLevel.Eventual);
-
-// Override consistency at the request level via request options
-RequestOptions requestOptions = new RequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual };
-
-var response = await client.CreateDocumentAsync(collectionUri, document, requestOptions);
-```
-
-# [.NET SDK V3](#tab/dotnetv3)
-
-```csharp
-// Override consistency at the request level via request options
-ItemRequestOptions requestOptions = new ItemRequestOptions { ConsistencyLevel = ConsistencyLevel.Strong };
-
-var response = await client.GetContainer(databaseName, containerName)
- .CreateItemAsync(
- item,
- new PartitionKey(itemPartitionKey),
- requestOptions);
-```
--
-### <a id="override-default-consistency-javav4"></a> Java V4 SDK
-
-# [Async](#tab/api-async)
-
- Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=ManageConsistencyAsync)]
-
-# [Sync](#tab/api-sync)
-
- Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=ManageConsistencySync)]
-
-
-
-### <a id="override-default-consistency-javav2"></a> Java V2 SDKs
-
-# [Async](#tab/api-async)
-
-Async Java V2 SDK (Maven com.microsoft.azure::azure-cosmosdb)
-
-```java
-// Override consistency at the client level
-ConnectionPolicy policy = new ConnectionPolicy();
-
-AsyncDocumentClient client =
- new AsyncDocumentClient.Builder()
- .withMasterKey(this.accountKey)
- .withServiceEndpoint(this.accountEndpoint)
- .withConsistencyLevel(ConsistencyLevel.Eventual)
- .withConnectionPolicy(policy).build();
-```
-
-# [Sync](#tab/api-sync)
-
-Sync Java V2 SDK (Maven com.microsoft.azure::azure-documentdb)
-
-```java
-// Override consistency at the client level
-ConnectionPolicy connectionPolicy = new ConnectionPolicy();
-DocumentClient client = new DocumentClient(accountEndpoint, accountKey, connectionPolicy, ConsistencyLevel.Eventual);
-```
--
-### <a id="override-default-consistency-javascript"></a>Node.js/JavaScript/TypeScript SDK
-
-```javascript
-// Override consistency at the client level
-const client = new CosmosClient({
- /* other config... */
- consistencyLevel: ConsistencyLevel.Eventual
-});
-
-// Override consistency at the request level via request options
-const { body } = await item.read({ consistencyLevel: ConsistencyLevel.Eventual });
-```
-
-### <a id="override-default-consistency-python"></a>Python SDK
-
-```python
-# Override consistency at the client level
-connection_policy = documents.ConnectionPolicy()
-client = cosmos_client.CosmosClient(self.account_endpoint, {
- 'masterKey': self.account_key}, connection_policy, documents.ConsistencyLevel.Eventual)
-```
-
-## Utilize session tokens
-
-One of the consistency levels in Azure Cosmos DB is *Session* consistency, which is the default level applied to Cosmos accounts. When working with Session consistency, each new write request to Azure Cosmos DB is assigned a new SessionToken. The CosmosClient uses this token internally with each read or query request to ensure that the configured consistency level is maintained.
-
-In some scenarios you need to manage this session yourself. Consider a web application with multiple nodes, where each node has its own instance of CosmosClient. If you want these nodes to participate in the same session (to be able to read your own writes consistently across web tiers), you have to send the SessionToken from FeedResponse\<T\> of the write action to the end user by using a cookie or some other mechanism, and have that token flow back to the web tier and ultimately the CosmosClient for subsequent reads. If you use a round-robin load balancer that doesn't maintain session affinity between requests, such as Azure Load Balancer, the read could land on a different node from the one that served the write request, where the session was created.
-
-If you don't flow the Azure Cosmos DB SessionToken across as described above, you could end up with inconsistent read results for a period of time.
-
-To manage session tokens manually, get the session token from the response and set them per request. If you don't need to manage session tokens manually, you don't need to use these samples. The SDK keeps track of session tokens automatically. If you don't set the session token manually, by default, the SDK uses the most recent session token.
-
-### <a id="utilize-session-tokens-dotnet"></a>.NET SDK
-
-# [.NET SDK V2](#tab/dotnetv2)
-
-```csharp
-// Get the session token from a response
-var response = await client.ReadDocumentAsync(
-    UriFactory.CreateDocumentUri(databaseName, collectionName, "SalesOrder1"));
-string sessionToken = response.SessionToken;
-
-// Resume the session by passing the session token in the request options
-RequestOptions options = new RequestOptions();
-options.SessionToken = sessionToken;
-response = await client.ReadDocumentAsync(
-    UriFactory.CreateDocumentUri(databaseName, collectionName, "SalesOrder1"), options);
-```
-
-# [.NET SDK V3](#tab/dotnetv3)
-
-```csharp
-// Create an item and capture the session token from the response headers
-Container container = client.GetContainer(databaseName, collectionName);
-ItemResponse<SalesOrder> response = await container.CreateItemAsync<SalesOrder>(salesOrder);
-string sessionToken = response.Headers.Session;
-
-// Resume the session by passing the session token in the request options
-ItemRequestOptions options = new ItemRequestOptions();
-options.SessionToken = sessionToken;
-ItemResponse<SalesOrder> readResponse = await container.ReadItemAsync<SalesOrder>(salesOrder.Id, new PartitionKey(salesOrder.PartitionKey), options);
-```
--
-### <a id="utilize-session-tokens-javav4"></a> Java V4 SDK
-
-# [Async](#tab/api-async)
-
- Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=ManageConsistencySessionAsync)]
-
-# [Sync](#tab/api-sync)
-
- Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=ManageConsistencySessionSync)]
-
-
-
-### <a id="utilize-session-tokens-javav2"></a>Java V2 SDKs
-
-# [Async](#tab/api-async)
-
-Async Java V2 SDK (Maven com.microsoft.azure::azure-cosmosdb)
-
-```java
-// Get session token from response
-RequestOptions options = new RequestOptions();
-options.setPartitionKey(new PartitionKey(document.get("mypk")));
-Observable<ResourceResponse<Document>> readObservable = client.readDocument(document.getSelfLink(), options);
-readObservable.single() // we know there will be one response
- .subscribe(
- documentResourceResponse -> {
- System.out.println(documentResourceResponse.getSessionToken());
- },
- error -> {
- System.err.println("an error happened: " + error.getMessage());
- });
-
-// Resume the session by setting the session token on the request options
-RequestOptions sessionOptions = new RequestOptions();
-sessionOptions.setSessionToken(sessionToken);
-Observable<ResourceResponse<Document>> sessionReadObservable = client.readDocument(document.getSelfLink(), sessionOptions);
-```
-
-# [Sync](#tab/api-sync)
-
-Sync Java V2 SDK (Maven com.microsoft.azure::azure-documentdb)
-
-```java
-// Get session token from response
-ResourceResponse<Document> response = client.readDocument(documentLink, null);
-String sessionToken = response.getSessionToken();
-
-// Resume the session by setting the session token on the RequestOptions
-RequestOptions options = new RequestOptions();
-options.setSessionToken(sessionToken);
-ResourceResponse<Document> response = client.readDocument(documentLink, options);
-```
--
-### <a id="utilize-session-tokens-javascript"></a>Node.js/JavaScript/TypeScript SDK
-
-```javascript
-// Get session token from response
-const { headers, item } = await container.items.create({ id: "meaningful-id" });
-const sessionToken = headers["x-ms-session-token"];
-
-// Immediately or later, you can use that sessionToken from the header to resume that session.
-const { body } = await item.read({ sessionToken });
-```
-
-### <a id="utilize-session-tokens-python"></a>Python SDK
-
-```python
-# Get the session token from the last response headers
-item = client.ReadItem(item_link)
-session_token = client.last_response_headers["x-ms-session-token"]
-
-# Resume the session by setting the session token on the options for the request
-options = {
- "sessionToken": session_token
-}
-item = client.ReadItem(doc_link, options)
-```
-
-## Monitor Probabilistically Bounded Staleness (PBS) metric
-
-How eventual is eventual consistency? For the average case, we can offer staleness bounds with respect to version history and time. The [**Probabilistically Bounded Staleness (PBS)**](http://pbs.cs.berkeley.edu/) metric tries to quantify the probability of staleness and shows it as a metric.
-
-To view the PBS metric, go to your Azure Cosmos account in the Azure portal. Open the **Metrics (Classic)** pane, and select the **Consistency** tab. Look at the graph named **Probability of strongly consistent reads based on your workload (see PBS)**.
--
-## Next steps
-
-Learn more about how to manage data conflicts, or move on to the next key concept in Azure Cosmos DB. See the following articles:
-
-* [Consistency Levels in Azure Cosmos DB](../consistency-levels.md)
-* [Partitioning and data distribution](../partitioning-overview.md)
-* [Manage conflicts between regions](how-to-manage-conflicts.md)
-* [Consistency tradeoffs in modern distributed database systems design](https://www.computer.org/csdl/magazine/co/2012/02/mco2012020037/13rRUxjyX7k)
-* [High availability](../high-availability.md)
-* [Azure Cosmos DB SLA](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_2/)
cosmos-db How To Manage Indexing Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-manage-indexing-policy.md
- Title: Manage indexing policies in Azure Cosmos DB
-description: Learn how to manage indexing policies, include or exclude a property from indexing, how to define indexing using different Azure Cosmos DB SDKs
---- Previously updated : 05/25/2021-----
-# Manage indexing policies in Azure Cosmos DB
-
-In Azure Cosmos DB, data is indexed following [indexing policies](../index-policy.md) that are defined for each container. The default indexing policy for newly created containers enforces range indexes for any string or number. This policy can be overridden with your own custom indexing policy.
-
-> [!NOTE]
-> The method of updating indexing policies described in this article only applies to Azure Cosmos DB's SQL (Core) API. Learn about indexing in [Azure Cosmos DB's API for MongoDB](../mongodb/mongodb-indexing.md) and [Secondary indexing in Azure Cosmos DB Cassandra API](../cassandr).
-
-## Indexing policy examples
-
-Here are some examples of indexing policies shown in [their JSON format](../index-policy.md#include-exclude-paths), which is how they are exposed on the Azure portal. The same parameters can be set through the Azure CLI or any SDK.
-
-### <a id="range-index"></a>Opt-out policy to selectively exclude some property paths
-
-```json
- {
- "indexingMode": "consistent",
- "includedPaths": [
- {
- "path": "/*"
- }
- ],
- "excludedPaths": [
- {
- "path": "/path/to/single/excluded/property/?"
- },
- {
- "path": "/path/to/root/of/multiple/excluded/properties/*"
- }
- ]
- }
-```
-
-This indexing policy is equivalent to the one below, which manually sets ```kind```, ```dataType```, and ```precision``` to their default values. You no longer need to set these properties explicitly, and you should omit them from your indexing policy entirely (as shown in the example above). If you try to set these properties, they're automatically removed from your indexing policy.
--
-```json
- {
- "indexingMode": "consistent",
- "includedPaths": [
- {
- "path": "/*",
- "indexes": [
- {
- "kind": "Range",
- "dataType": "Number",
- "precision": -1
- },
- {
- "kind": "Range",
- "dataType": "String",
- "precision": -1
- }
- ]
- }
- ],
- "excludedPaths": [
- {
- "path": "/path/to/single/excluded/property/?"
- },
- {
- "path": "/path/to/root/of/multiple/excluded/properties/*"
- }
- ]
- }
-```
-
-### Opt-in policy to selectively include some property paths
-
-```json
- {
- "indexingMode": "consistent",
- "includedPaths": [
- {
- "path": "/path/to/included/property/?"
- },
- {
- "path": "/path/to/root/of/multiple/included/properties/*"
- }
- ],
- "excludedPaths": [
- {
- "path": "/*"
- }
- ]
- }
-```
-
-This indexing policy is equivalent to the one below, which manually sets ```kind```, ```dataType```, and ```precision``` to their default values. You no longer need to set these properties explicitly, and you should omit them from your indexing policy entirely (as shown in the example above). If you try to set these properties, they're automatically removed from your indexing policy.
--
-```json
- {
- "indexingMode": "consistent",
- "includedPaths": [
- {
- "path": "/path/to/included/property/?",
- "indexes": [
- {
- "kind": "Range",
- "dataType": "Number"
- },
- {
- "kind": "Range",
- "dataType": "String"
- }
- ]
- },
- {
- "path": "/path/to/root/of/multiple/included/properties/*",
- "indexes": [
- {
- "kind": "Range",
- "dataType": "Number"
- },
- {
- "kind": "Range",
- "dataType": "String"
- }
- ]
- }
- ],
- "excludedPaths": [
- {
- "path": "/*"
- }
- ]
- }
-```
-
-> [!NOTE]
-> It is generally recommended to use an **opt-out** indexing policy to let Azure Cosmos DB proactively index any new property that may be added to your data model.
-
-### <a id="spatial-index"></a>Using a spatial index on a specific property path only
-
-```json
-{
- "indexingMode": "consistent",
- "automatic": true,
- "includedPaths": [
- {
- "path": "/*"
- }
- ],
- "excludedPaths": [
- {
- "path": "/_etag/?"
- }
- ],
- "spatialIndexes": [
- {
- "path": "/path/to/geojson/property/?",
- "types": [
- "Point",
- "Polygon",
- "MultiPolygon",
- "LineString"
- ]
- }
- ]
-}
-```
-
-## <a id="composite-index"></a>Composite indexing policy examples
-
-In addition to including or excluding paths for individual properties, you can also specify a composite index. If you want to perform a query that has an `ORDER BY` clause for multiple properties, a [composite index](../index-policy.md#composite-indexes) on those properties is required. Additionally, composite indexes provide a performance benefit for queries that have multiple filters or both a filter and an `ORDER BY` clause.
-
-> [!NOTE]
-> Composite paths have an implicit `/?` since only the scalar value at that path is indexed. The `/*` wildcard is not supported in composite paths. You shouldn't specify `/?` or `/*` in a composite path.
-
-### Composite index defined for (name asc, age desc):
-
-```json
- {
- "automatic":true,
- "indexingMode":"Consistent",
- "includedPaths":[
- {
- "path":"/*"
- }
- ],
- "excludedPaths":[],
- "compositeIndexes":[
- [
- {
- "path":"/name",
- "order":"ascending"
- },
- {
- "path":"/age",
- "order":"descending"
- }
- ]
- ]
- }
-```
-
-The above composite index on name and age is required for Query #1 and Query #2:
-
-Query #1:
-
-```sql
- SELECT *
- FROM c
- ORDER BY c.name ASC, c.age DESC
-```
-
-Query #2:
-
-```sql
- SELECT *
- FROM c
- ORDER BY c.name DESC, c.age ASC
-```
-
-This composite index will benefit Query #3 and Query #4 and optimize the filters:
-
-Query #3:
-
-```sql
-SELECT *
-FROM c
-WHERE c.name = "Tim"
-ORDER BY c.name DESC, c.age ASC
-```
-
-Query #4:
-
-```sql
-SELECT *
-FROM c
-WHERE c.name = "Tim" AND c.age > 18
-```
-
-### Composite index defined for (name ASC, age ASC) and (name ASC, age DESC):
-
-You can define multiple different composite indexes within the same indexing policy.
-
-```json
- {
- "automatic":true,
- "indexingMode":"Consistent",
- "includedPaths":[
- {
- "path":"/*"
- }
- ],
- "excludedPaths":[],
- "compositeIndexes":[
- [
- {
- "path":"/name",
- "order":"ascending"
- },
- {
- "path":"/age",
- "order":"ascending"
- }
- ],
- [
- {
- "path":"/name",
- "order":"ascending"
- },
- {
- "path":"/age",
- "order":"descending"
- }
- ]
- ]
- }
-```
-
-### Composite index defined for (name ASC, age ASC):
-
-It is optional to specify the order. If not specified, the order is ascending.
-
-```json
-{
- "automatic":true,
- "indexingMode":"Consistent",
- "includedPaths":[
- {
- "path":"/*"
- }
- ],
- "excludedPaths":[],
- "compositeIndexes":[
- [
- {
- "path":"/name",
- },
- {
- "path":"/age",
- }
- ]
- ]
-}
-```
-
-### Excluding all property paths but keeping indexing active
-
-This policy can be used in situations where the [Time-to-Live (TTL) feature](time-to-live.md) is active but no additional indexes are necessary (to use Azure Cosmos DB as a pure key-value store).
-
-```json
- {
- "indexingMode": "consistent",
- "includedPaths": [],
- "excludedPaths": [{
- "path": "/*"
- }]
- }
-```
-
-### No indexing
-
-This policy will turn off indexing. If `indexingMode` is set to `none`, you cannot set a TTL on the container.
-
-```json
- {
- "indexingMode": "none"
- }
-```
-
-## Updating indexing policy
-
-In Azure Cosmos DB, the indexing policy can be updated using any of the below methods:
-
-- from the Azure portal
-- using the Azure CLI
-- using PowerShell
-- using one of the SDKs
-
-An [indexing policy update](../index-policy.md#modifying-the-indexing-policy) triggers an index transformation. The progress of this transformation can also be tracked from the SDKs.
-
-> [!NOTE]
-> When you update the indexing policy, writes to Azure Cosmos DB are uninterrupted. Learn more about [indexing transformations](../index-policy.md#modifying-the-indexing-policy).
-
-## Use the Azure portal
-
-Azure Cosmos containers store their indexing policy as a JSON document that the Azure portal lets you directly edit.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. Create a new Azure Cosmos account or select an existing account.
-
-1. Open the **Data Explorer** pane and select the container that you want to work on.
-
-1. Click on **Scale & Settings**.
-
-1. Modify the indexing policy JSON document (see the examples [below](#indexing-policy-examples)).
-
-1. Click **Save** when you are done.
--
-## Use the Azure CLI
-
-To create a container with a custom indexing policy, see [Create a container with a custom index policy using CLI](manage-with-cli.md#create-a-container-with-a-custom-index-policy).
-
-## Use PowerShell
-
-To create a container with a custom indexing policy, see [Create a container with a custom index policy using PowerShell](manage-with-powershell.md#create-container-custom-index).
-
-## <a id="dotnet-sdk"></a> Use the .NET SDK
-
-# [.NET SDK V2](#tab/dotnetv2)
-
-The `DocumentCollection` object from the [.NET SDK v2](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/) exposes an `IndexingPolicy` property that lets you change the `IndexingMode` and add or remove `IncludedPaths` and `ExcludedPaths`.
-
-```csharp
-// Retrieve the container's details
-ResourceResponse<DocumentCollection> containerResponse = await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri("database", "container"));
-// Set the indexing mode to consistent
-containerResponse.Resource.IndexingPolicy.IndexingMode = IndexingMode.Consistent;
-// Add an included path
-containerResponse.Resource.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
-// Add an excluded path
-containerResponse.Resource.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/name/*" });
-// Add a spatial index
-containerResponse.Resource.IndexingPolicy.SpatialIndexes.Add(new SpatialSpec() { Path = "/locations/*", SpatialTypes = new Collection<SpatialType>() { SpatialType.Point } } );
-// Add a composite index
-containerResponse.Resource.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath> {new CompositePath() { Path = "/name", Order = CompositePathSortOrder.Ascending }, new CompositePath() { Path = "/age", Order = CompositePathSortOrder.Descending }});
-// Update container with changes
-await client.ReplaceDocumentCollectionAsync(containerResponse.Resource);
-```
-
-To track the index transformation progress, pass a `RequestOptions` object that sets the `PopulateQuotaInfo` property to `true`.
-
-```csharp
-// retrieve the container's details
-ResourceResponse<DocumentCollection> container = await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri("database", "container"), new RequestOptions { PopulateQuotaInfo = true });
-// retrieve the index transformation progress from the result
-long indexTransformationProgress = container.IndexTransformationProgress;
-```
-
-# [.NET SDK V3](#tab/dotnetv3)
-
-The `ContainerProperties` object from the [.NET SDK v3](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/) (see [this Quickstart](create-sql-api-dotnet.md) regarding its usage) exposes an `IndexingPolicy` property that lets you change the `IndexingMode` and add or remove `IncludedPaths` and `ExcludedPaths`.
-
-```csharp
-// Retrieve the container's details
-ContainerResponse containerResponse = await client.GetContainer("database", "container").ReadContainerAsync();
-// Set the indexing mode to consistent
-containerResponse.Resource.IndexingPolicy.IndexingMode = IndexingMode.Consistent;
-// Add an included path
-containerResponse.Resource.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
-// Add an excluded path
-containerResponse.Resource.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/name/*" });
-// Add a spatial index
-SpatialPath spatialPath = new SpatialPath
-{
- Path = "/locations/*"
-};
-spatialPath.SpatialTypes.Add(SpatialType.Point);
-containerResponse.Resource.IndexingPolicy.SpatialIndexes.Add(spatialPath);
-// Add a composite index
-containerResponse.Resource.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath> { new CompositePath() { Path = "/name", Order = CompositePathSortOrder.Ascending }, new CompositePath() { Path = "/age", Order = CompositePathSortOrder.Descending } });
-// Update container with changes
-await client.GetContainer("database", "container").ReplaceContainerAsync(containerResponse.Resource);
-```
-
-To track the index transformation progress, pass a `RequestOptions` object that sets the `PopulateQuotaInfo` property to `true`, then retrieve the value from the `x-ms-documentdb-collection-index-transformation-progress` response header.
-
-```csharp
-// retrieve the container's details
-ContainerResponse containerResponse = await client.GetContainer("database", "container").ReadContainerAsync(new ContainerRequestOptions { PopulateQuotaInfo = true });
-// retrieve the index transformation progress from the result
-long indexTransformationProgress = long.Parse(containerResponse.Headers["x-ms-documentdb-collection-index-transformation-progress"]);
-```
-
-When defining a custom indexing policy while creating a new container, the SDK V3's fluent API lets you write this definition in a concise and efficient way:
-
-```csharp
-await client.GetDatabase("database").DefineContainer(name: "container", partitionKeyPath: "/myPartitionKey")
- .WithIndexingPolicy()
- .WithIncludedPaths()
- .Path("/*")
- .Attach()
- .WithExcludedPaths()
- .Path("/name/*")
- .Attach()
- .WithSpatialIndex()
- .Path("/locations/*", SpatialType.Point)
- .Attach()
- .WithCompositeIndex()
- .Path("/name", CompositePathSortOrder.Ascending)
- .Path("/age", CompositePathSortOrder.Descending)
- .Attach()
- .Attach()
- .CreateIfNotExistsAsync();
-```
--
-## Use the Java SDK
-
-The `DocumentCollection` object from the [Java SDK](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb) (see [this Quickstart](create-sql-api-java.md) regarding its usage) exposes `getIndexingPolicy()` and `setIndexingPolicy()` methods. The `IndexingPolicy` object they manipulate lets you change the indexing mode and add or remove included and excluded paths.
-
-```java
-// Retrieve the container's details
-Observable<ResourceResponse<DocumentCollection>> containerResponse = client.readCollection(String.format("/dbs/%s/colls/%s", "database", "container"), null);
-containerResponse.subscribe(result -> {
-DocumentCollection container = result.getResource();
-IndexingPolicy indexingPolicy = container.getIndexingPolicy();
-
-// Set the indexing mode to consistent
-indexingPolicy.setIndexingMode(IndexingMode.Consistent);
-
-// Add an included path
-
-Collection<IncludedPath> includedPaths = new ArrayList<>();
-IncludedPath includedPath = new IncludedPath();
-includedPath.setPath("/*");
-includedPaths.add(includedPath);
-indexingPolicy.setIncludedPaths(includedPaths);
-
-// Add an excluded path
-
-Collection<ExcludedPath> excludedPaths = new ArrayList<>();
-ExcludedPath excludedPath = new ExcludedPath();
-excludedPath.setPath("/name/*");
-excludedPaths.add(excludedPath);
-indexingPolicy.setExcludedPaths(excludedPaths);
-
-// Add a spatial index
-
-Collection<SpatialSpec> spatialIndexes = new ArrayList<SpatialSpec>();
-Collection<SpatialType> collectionOfSpatialTypes = new ArrayList<SpatialType>();
-
-SpatialSpec spec = new SpatialSpec();
-spec.setPath("/locations/*");
-collectionOfSpatialTypes.add(SpatialType.Point);
-spec.setSpatialTypes(collectionOfSpatialTypes);
-spatialIndexes.add(spec);
-
-indexingPolicy.setSpatialIndexes(spatialIndexes);
-
-// Add a composite index
-
-Collection<ArrayList<CompositePath>> compositeIndexes = new ArrayList<>();
-ArrayList<CompositePath> compositePaths = new ArrayList<>();
-
-CompositePath nameCompositePath = new CompositePath();
-nameCompositePath.setPath("/name");
-nameCompositePath.setOrder(CompositePathSortOrder.Ascending);
-
-CompositePath ageCompositePath = new CompositePath();
-ageCompositePath.setPath("/age");
-ageCompositePath.setOrder(CompositePathSortOrder.Descending);
-
-compositePaths.add(ageCompositePath);
-compositePaths.add(nameCompositePath);
-
-compositeIndexes.add(compositePaths);
-indexingPolicy.setCompositeIndexes(compositeIndexes);
-
-// Update the container with changes
-
- client.replaceCollection(container, null);
-});
-```
-
-To track the index transformation progress on a container, pass a `RequestOptions` object that requests the quota info to be populated, then retrieve the value from the `x-ms-documentdb-collection-index-transformation-progress` response header.
-
-```java
-// set the RequestOptions object
-RequestOptions requestOptions = new RequestOptions();
-requestOptions.setPopulateQuotaInfo(true);
-// retrieve the container's details
-Observable<ResourceResponse<DocumentCollection>> containerResponse = client.readCollection(String.format("/dbs/%s/colls/%s", "database", "container"), requestOptions);
-containerResponse.subscribe(result -> {
- // retrieve the index transformation progress from the response headers
- String indexTransformationProgress = result.getResponseHeaders().get("x-ms-documentdb-collection-index-transformation-progress");
-});
-```
-
-## Use the Node.js SDK
-
-The `ContainerDefinition` interface from [Node.js SDK](https://www.npmjs.com/package/@azure/cosmos) (see [this Quickstart](create-sql-api-nodejs.md) regarding its usage) exposes an `indexingPolicy` property that lets you change the `indexingMode` and add or remove `includedPaths` and `excludedPaths`.
-
-Retrieve the container's details
-
-```javascript
-const containerResponse = await client.database('database').container('container').read();
-```
-
-Set the indexing mode to consistent
-
-```javascript
-containerResponse.body.indexingPolicy.indexingMode = "consistent";
-```
-
-Add included path including a spatial index
-
-```javascript
-containerResponse.body.indexingPolicy.includedPaths.push(
-  {
-    path: "/age/*",
-    indexes: [
-      {
-        kind: cosmos.DocumentBase.IndexKind.Range,
-        dataType: cosmos.DocumentBase.DataType.String
-      },
-      {
-        kind: cosmos.DocumentBase.IndexKind.Range,
-        dataType: cosmos.DocumentBase.DataType.Number
-      }
-    ]
-  },
-  {
-    path: "/locations/*",
-    indexes: [
-      {
-        kind: cosmos.DocumentBase.IndexKind.Spatial,
-        dataType: cosmos.DocumentBase.DataType.Point
-      }
-    ]
-  }
-);
-```
-
-Add excluded path
-
-```javascript
-containerResponse.body.indexingPolicy.excludedPaths.push({ path: '/name/*' });
-```
-
-Update the container with changes
-
-```javascript
-const replaceResponse = await client.database('database').container('container').replace(containerResponse.body);
-```
-
-To track the index transformation progress on a container, pass a `RequestOptions` object that sets the `populateQuotaInfo` property to `true`, then retrieve the value from the `x-ms-documentdb-collection-index-transformation-progress` response header.
-
-```javascript
-// retrieve the container's details
-const containerResponse = await client.database('database').container('container').read({
- populateQuotaInfo: true
-});
-// retrieve the index transformation progress from the response headers
-const indexTransformationProgress = containerResponse.headers['x-ms-documentdb-collection-index-transformation-progress'];
-```
-
-## Use the Python SDK
-
-# [Python SDK V3](#tab/pythonv3)
-
-When using the [Python SDK V3](https://pypi.org/project/azure-cosmos/) (see [this Quickstart](create-sql-api-python.md) regarding its usage), the container configuration is managed as a dictionary. From this dictionary, it is possible to access the indexing policy and all its attributes.
-
-Retrieve the container's details
-
-```python
-containerPath = 'dbs/database/colls/collection'
-container = client.ReadContainer(containerPath)
-```
-
-Set the indexing mode to consistent
-
-```python
-container['indexingPolicy']['indexingMode'] = 'consistent'
-```
-
-Define an indexing policy with an included path and a spatial index
-
-```python
-container["indexingPolicy"] = {
-
- "indexingMode":"consistent",
- "spatialIndexes":[
- {"path":"/location/*","types":["Point"]}
- ],
- "includedPaths":[{"path":"/age/*","indexes":[]}],
- "excludedPaths":[{"path":"/*"}]
-}
-```
-
-Define an indexing policy with an excluded path
-
-```python
-container["indexingPolicy"] = {
- "indexingMode":"consistent",
- "includedPaths":[{"path":"/*","indexes":[]}],
- "excludedPaths":[{"path":"/name/*"}]
-}
-```
-
-Add a composite index
-
-```python
-container['indexingPolicy']['compositeIndexes'] = [
- [
- {
- "path": "/name",
- "order": "ascending"
- },
- {
- "path": "/age",
- "order": "descending"
- }
- ]
- ]
-```
-
-Update the container with changes
-
-```python
-response = client.ReplaceContainer(containerPath, container)
-```
-
-# [Python SDK V4](#tab/pythonv4)
-
-When using the [Python SDK V4](https://pypi.org/project/azure-cosmos/), the container configuration is managed as a dictionary. From this dictionary, it is possible to access the indexing policy and all its attributes.
-
-Retrieve the container's details
-
-```python
-database_client = cosmos_client.get_database_client('database')
-container_client = database_client.get_container_client('container')
-container = container_client.read()
-```
-
-Set the indexing mode to consistent
-
-```python
-indexingPolicy = {
- 'indexingMode': 'consistent'
-}
-```
-
-Define an indexing policy with an included path and a spatial index
-
-```python
-indexingPolicy = {
- "indexingMode":"consistent",
- "spatialIndexes":[
- {"path":"/location/*","types":["Point"]}
- ],
- "includedPaths":[{"path":"/age/*","indexes":[]}],
- "excludedPaths":[{"path":"/*"}]
-}
-```
-
-Define an indexing policy with an excluded path
-
-```python
-indexingPolicy = {
- "indexingMode":"consistent",
- "includedPaths":[{"path":"/*","indexes":[]}],
- "excludedPaths":[{"path":"/name/*"}]
-}
-```
-
-Add a composite index
-
-```python
-indexingPolicy['compositeIndexes'] = [
- [
- {
- "path": "/name",
- "order": "ascending"
- },
- {
- "path": "/age",
- "order": "descending"
- }
- ]
-]
-```
-
-Update the container with changes
-
-```python
-response = database_client.replace_container(container_client, container['partitionKey'], indexingPolicy)
-```
-
-Retrieve the index transformation progress from the response headers
-```python
-container_client.read(populate_quota_info = True,
- response_hook = lambda h,p: print(h['x-ms-documentdb-collection-index-transformation-progress']))
-```
---
-## Next steps
-
-Read more about indexing in the following articles:
-
-- [Indexing overview](../index-overview.md)
-- [Indexing policy](../index-policy.md)
cosmos-db How To Migrate From Bulk Executor Library Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-migrate-from-bulk-executor-library-java.md
- Title: Migrate from the bulk executor library to the bulk support in Azure Cosmos DB Java V4 SDK
-description: Learn how to migrate your application from using the bulk executor library to the bulk support in Azure Cosmos DB Java V4 SDK
---- Previously updated : 05/13/2022-----
-# Migrate from the bulk executor library to the bulk support in Azure Cosmos DB Java V4 SDK
-
-This article describes the required steps to migrate an existing application's code that uses the [Java bulk executor library](sql-api-sdk-bulk-executor-java.md) to the [bulk support](bulk-executor-java.md) feature in the latest version of the Java SDK.
-
-## Enable bulk support
-
-To use bulk support in the Java SDK, include the import below:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=CosmosBulkOperationsImport)]
-
-## Add documents to a reactive stream
-
-Bulk support in the Java V4 SDK works by adding documents to a reactive stream object. For example, you can add each document individually:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=AddDocsToStream)]
-
-Or you can add the documents to the stream from a list, using `fromIterable`:
-
-```java
-class SampleDoc {
- public SampleDoc() {
- }
- public String getId() {
- return id;
- }
- public void setId(String id) {
- this.id = id;
- }
- private String id="";
-}
-List<SampleDoc> docList = new ArrayList<>();
-for (int i = 1; i <= 5; i++){
- SampleDoc doc = new SampleDoc();
- String id = "id-"+i;
- doc.setId(id);
- docList.add(doc);
-}
-
-Flux<SampleDoc> docs = Flux.fromIterable(docList);
-```
-
-If you want to bulk create or upsert items (similar to using [DocumentBulkExecutor.importAll](/java/api/com.microsoft.azure.documentdb.bulkexecutor.documentbulkexecutor.importall)), you need to pass the reactive stream to a method like the following:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkUpsertItems)]
-
-You can also use a method like the one below, but this approach is used only for creating items:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkCreateItems)]
--
-The [DocumentBulkExecutor.importAll](/java/api/com.microsoft.azure.documentdb.bulkexecutor.documentbulkexecutor.importall) method in the old BulkExecutor library was also used to bulk *patch* items. The old [DocumentBulkExecutor.mergeAll](/java/api/com.microsoft.azure.documentdb.bulkexecutor.documentbulkexecutor.mergeall) method was also used for patch, but only for the `set` patch operation type. To do bulk patch operations in the V4 SDK, first you need to create patch operations:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=PatchOperations)]
-
-Then you can pass the operations, along with the reactive stream of documents, to a method like the one below. In this example, we apply both `add` and `set` patch operation types. The full set of supported patch operation types can be found [here](../partial-document-update.md#supported-operations) in our overview of [partial document update in Azure Cosmos DB](../partial-document-update.md).
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkPatchItems)]
-
-> [!NOTE]
-> In the above example, we apply `add` and `set` to patch elements whose root parent exists. However, you cannot do this where the root parent does **not** exist. This is because Azure Cosmos DB partial document update is [inspired by JSON Patch RFC 6902](../partial-document-update-faq.yml#is-this-an-implementation-of-json-patch-rfc-6902-). If patching where root parent does not exist, first read back the full documents, then use a method like the below to replace the documents:
-> [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkReplaceItems)]
--
-And if you want to do bulk *delete* (similar to using [DocumentBulkExecutor.deleteAll](/java/api/com.microsoft.azure.documentdb.bulkexecutor.documentbulkexecutor.deleteall)), you need to use bulk delete:
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkDeleteItems)]
--
-## Retries, timeouts, and throughput control
-
-The bulk support in the Java V4 SDK doesn't handle retries and timeouts natively. Refer to the guidance in [Bulk Executor - Java Library](bulk-executor-java.md), which includes a [sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav#should-my-application-retry-on-errors) that covers the different kinds of errors that can occur and best practices for handling retries.
--
-## Next steps
-
-* [Bulk samples on GitHub](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/tree/main/src/main/java/com/azure/cosmos/examples/bulk/async)
-* Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db How To Migrate From Bulk Executor Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-migrate-from-bulk-executor-library.md
- Title: Migrate from the bulk executor library to the bulk support in Azure Cosmos DB .NET V3 SDK
-description: Learn how to migrate your application from using the bulk executor library to the bulk support in Azure Cosmos DB SDK V3
---- Previously updated : 08/26/2021-----
-# Migrate from the bulk executor library to the bulk support in Azure Cosmos DB .NET V3 SDK
-
-This article describes the required steps to migrate an existing application's code that uses the [.NET bulk executor library](bulk-executor-dot-net.md) to the [bulk support](tutorial-sql-api-dotnet-bulk-import.md) feature in the latest version of the .NET SDK.
-
-## Enable bulk support
-
-Enable bulk support on the `CosmosClient` instance through the [AllowBulkExecution](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.allowbulkexecution) configuration:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="Initialization":::
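-
-In outline, the configuration amounts to something like the following sketch (the endpoint and key values are placeholders):
-
-```csharp
-using Microsoft.Azure.Cosmos;
-
-// Minimal sketch: enable bulk execution when constructing the client.
-// "https://<account>.documents.azure.com:443/" and "<key>" are placeholder values.
-CosmosClient client = new CosmosClient(
-    "https://<account>.documents.azure.com:443/",
-    "<key>",
-    new CosmosClientOptions { AllowBulkExecution = true });
-```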
-
-## Create Tasks for each operation
-
-Bulk support in the .NET SDK works by leveraging the [Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl) and grouping operations that occur concurrently.
-
-There's no single method in the SDK that takes your list of documents or operations as an input parameter. Instead, you need to create a Task for each operation you want to execute in bulk, and then wait for them all to complete.
-
-For example, if your initial input is a list of items where each item has the following schema:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="Model":::
-
-If you want to do bulk import (similar to using BulkExecutor.BulkImportAsync), you need to have concurrent calls to `CreateItemAsync`. For example:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="BulkImport":::
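-
-As a rough inline sketch of that pattern, assuming a hypothetical `Item` type with `id` and `pk` properties:
-
-```csharp
-using System.Collections.Generic;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-// Hypothetical item type used only for this sketch.
-public record Item(string id, string pk);
-
-public static class BulkImportSketch
-{
-    // Creates one task per item and waits for all of them to complete.
-    public static async Task ImportAsync(Container container, IReadOnlyList<Item> itemsToInsert)
-    {
-        List<Task> tasks = new List<Task>();
-        foreach (Item item in itemsToInsert)
-        {
-            // With AllowBulkExecution enabled, these concurrent calls are grouped into batches.
-            tasks.Add(container.CreateItemAsync(item, new PartitionKey(item.pk)));
-        }
-
-        await Task.WhenAll(tasks);
-    }
-}
-```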
-
-If you want to do bulk *update* (similar to using [BulkExecutor.BulkUpdateAsync](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkexecutor.bulkupdateasync)), you need to have concurrent calls to `ReplaceItemAsync` method after updating the item value. For example:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="BulkUpdate":::
-
-And if you want to do bulk *delete* (similar to using [BulkExecutor.BulkDeleteAsync](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkexecutor.bulkdeleteasync)), you need to have concurrent calls to `DeleteItemAsync`, with the `id` and partition key of each item. For example:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="BulkDelete":::
-
-## Capture task result state
-
-In the previous code examples, we have created a concurrent list of tasks, and called the `CaptureOperationResponse` method on each of those tasks. This method is an extension that lets us maintain a *similar response schema* as BulkExecutor, by capturing any errors and tracking the [request units usage](../request-units.md).
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="CaptureOperationResult":::
-
-Where the `OperationResponse` is declared as:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="OperationResult":::
-
-## Execute operations concurrently
-
-To track the scope of the entire list of Tasks, we use this helper class:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="BulkOperationsHelper":::
-
-The `ExecuteAsync` method will wait until all operations are completed and you can use it like so:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="WhenAll":::
-
-## Capture statistics
-
-The previous code waits until all operations are completed and calculates the required statistics. These statistics are similar to that of the bulk executor library's [BulkImportResponse](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkimport.bulkimportresponse).
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration/Program.cs" ID="ResponseType":::
-
-The `BulkOperationResponse` contains:
-
-1. The total time taken to process the list of operations through bulk support.
-1. The number of successful operations.
-1. The total number of request units consumed.
-1. If there are failures, it displays a list of tuples that contain the exception and the associated item for logging and identification purposes.
-
-## Retry configuration
-
-The bulk executor library had [guidance](bulk-executor-dot-net.md#bulk-import-data-to-an-azure-cosmos-account) that recommended setting the `MaxRetryWaitTimeInSeconds` and `MaxRetryAttemptsOnThrottledRequests` properties of [RetryOptions](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.retryoptions) to `0` to delegate control to the library.
-
-For bulk support in the .NET SDK, there is no hidden behavior. You can configure the retry options directly through the [CosmosClientOptions.MaxRetryAttemptsOnRateLimitedRequests](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.maxretryattemptsonratelimitedrequests) and [CosmosClientOptions.MaxRetryWaitTimeOnRateLimitedRequests](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.maxretrywaittimeonratelimitedrequests) properties.
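-
-For example, a hedged sketch of setting these options alongside bulk mode (the values shown are illustrative, not recommendations):
-
-```csharp
-using System;
-using Microsoft.Azure.Cosmos;
-
-// Sketch: configure retry behavior on the client options used for bulk operations.
-// The endpoint, key, and retry values are placeholders for illustration.
-CosmosClientOptions options = new CosmosClientOptions
-{
-    AllowBulkExecution = true,
-    MaxRetryAttemptsOnRateLimitedRequests = 20,
-    MaxRetryWaitTimeOnRateLimitedRequests = TimeSpan.FromSeconds(60)
-};
-
-CosmosClient client = new CosmosClient("https://<account>.documents.azure.com:443/", "<key>", options);
-```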
-
-> [!NOTE]
-> In cases where the provisioned request units are much lower than expected based on the amount of data, consider setting these options to high values. The bulk operation takes longer, but it has a higher chance of completely succeeding because of the additional retries.
-
-## Performance improvements
-
-As with other operations with the .NET SDK, using the stream APIs results in better performance and avoids any unnecessary serialization.
-
-Using stream APIs is only possible if the nature of the data you use matches that of a stream of bytes (for example, file streams). In such cases, using the `CreateItemStreamAsync`, `ReplaceItemStreamAsync`, or `DeleteItemStreamAsync` methods and working with `ResponseMessage` (instead of `ItemResponse`) increases the throughput that can be achieved.
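-
-As a rough sketch of the stream approach, assuming the item is already available as serialized JSON and `pk` holds its partition key value:
-
-```csharp
-using System.IO;
-using System.Text;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-public static class StreamBulkSketch
-{
-    // Sketch: create an item from an already-serialized payload to avoid extra serialization.
-    public static async Task<bool> CreateFromJsonAsync(Container container, string json, string pk)
-    {
-        using MemoryStream stream = new MemoryStream(Encoding.UTF8.GetBytes(json));
-
-        // The stream APIs return a ResponseMessage instead of a typed ItemResponse<T>.
-        using ResponseMessage response = await container.CreateItemStreamAsync(stream, new PartitionKey(pk));
-        return response.IsSuccessStatusCode;
-    }
-}
-```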
-
-## Next steps
-
-* To learn more about the .NET SDK releases, see the [Azure Cosmos DB SDK](sql-api-sdk-dotnet.md) article.
-* Get the complete [migration source code](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/BulkExecutorMigration) from GitHub.
-* [Additional bulk samples on GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/BulkSupport)
-* Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db How To Migrate From Change Feed Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-migrate-from-change-feed-library.md
- Title: Migrate from the change feed processor library to the Azure Cosmos DB .NET V3 SDK
-description: Learn how to migrate your application from using the change feed processor library to the Azure Cosmos DB SDK V3
---- Previously updated : 09/13/2021----
-# Migrate from the change feed processor library to the Azure Cosmos DB .NET V3 SDK
-
-This article describes the required steps to migrate an existing application's code that uses the [change feed processor library](https://github.com/Azure/azure-documentdb-changefeedprocessor-dotnet) to the [change feed](../change-feed.md) feature in the latest version of the .NET SDK (also referred as .NET V3 SDK).
-
-## Required code changes
-
-The .NET V3 SDK has several breaking changes. The following are the key steps to migrate your application:
-
-1. Convert the `DocumentCollectionInfo` instances into `Container` references for the monitored and leases containers.
-1. Customizations that use `WithProcessorOptions` should be updated to use `WithLeaseConfiguration` and `WithPollInterval` for intervals, `WithStartTime` [for start time](./change-feed-processor.md#starting-time), and `WithMaxItems` to define the maximum item count.
-1. Set the `processorName` on `GetChangeFeedProcessorBuilder` to match the value configured on `ChangeFeedProcessorOptions.LeasePrefix`, or use `string.Empty` otherwise.
-1. The changes are no longer delivered as an `IReadOnlyList<Document>`. Instead, they're delivered as an `IReadOnlyCollection<T>`, where `T` is a type that you need to define; there's no base item class anymore.
-1. To handle the changes, you no longer need an implementation of `IChangeFeedObserver`. Instead, you need to [define a delegate](change-feed-processor.md#implementing-the-change-feed-processor). The delegate can be a static method or, if you need to maintain state across executions, you can create your own class and pass an instance method as the delegate.
-
-For example, if the original code to build the change feed processor looks as follows:
-
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=ChangeFeedProcessorLibrary)]
-
-The migrated code will look like:
-
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=ChangeFeedProcessorMigrated)]
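-
-For orientation, here's a minimal hedged sketch of the V3 builder pattern; the processor name, instance name, containers, and `ToDoItem` type are placeholders:
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.Threading;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-public static class ChangeFeedSketch
-{
-    // Hypothetical item type for illustration only.
-    public class ToDoItem { public string id { get; set; } }
-
-    public static ChangeFeedProcessor Build(Container monitoredContainer, Container leaseContainer)
-    {
-        return monitoredContainer
-            .GetChangeFeedProcessorBuilder<ToDoItem>("changeFeedSample", HandleChangesAsync)
-            .WithInstanceName("consoleHost")
-            .WithLeaseContainer(leaseContainer)
-            .Build();
-    }
-
-    // The delegate receives the batch of changes instead of an IChangeFeedObserver implementation.
-    private static Task HandleChangesAsync(IReadOnlyCollection<ToDoItem> changes, CancellationToken cancellationToken)
-    {
-        Console.WriteLine($"Received {changes.Count} changes.");
-        return Task.CompletedTask;
-    }
-}
-```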
-
-For the delegate, you can have a static method to receive the events. If you were consuming information from the `IChangeFeedObserverContext` you can migrate to use the `ChangeFeedProcessorContext`:
-
-* `ChangeFeedProcessorContext.LeaseToken` can be used instead of `IChangeFeedObserverContext.PartitionKeyRangeId`
-* `ChangeFeedProcessorContext.Headers` can be used instead of `IChangeFeedObserverContext.FeedResponse`
-* `ChangeFeedProcessorContext.Diagnostics` contains detailed information about request latency for troubleshooting
-
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=Delegate)]
-
-## State and lease container
-
-Similar to the change feed processor library, the change feed feature in .NET V3 SDK uses a [lease container](change-feed-processor.md#components-of-the-change-feed-processor) to store the state. However, the schemas are different.
-
-The SDK V3 change feed processor will detect any old library state and migrate it to the new schema automatically upon the first execution of the migrated application code.
-
-You can safely stop the application that uses the old code, migrate the code to the new version, and start the migrated application. Any changes that happened while the application was stopped will be picked up and processed by the new version.
-
-## Additional resources
-
-* [Azure Cosmos DB SDK](sql-api-sdk-dotnet.md)
-* [Usage samples on GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed)
-* [Additional samples on GitHub](https://github.com/Azure-Samples/cosmos-dotnet-change-feed-processor)
-
-## Next steps
-
-You can now proceed to learn more about change feed processor in the following articles:
-
-* [Overview of change feed processor](change-feed-processor.md)
-* [Using the change feed estimator](how-to-use-change-feed-estimator.md)
-* [Change feed processor start time](./change-feed-processor.md#starting-time)
-* Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db How To Model Partition Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-model-partition-example.md
- Title: Model and partition data on Azure Cosmos DB with a real-world example
-description: Learn how to model and partition a real-world example using the Azure Cosmos DB Core API
---- Previously updated : 08/26/2021----
-# How to model and partition data on Azure Cosmos DB using a real-world example
-
-This article builds on several Azure Cosmos DB concepts like [data modeling](../modeling-data.md), [partitioning](../partitioning-overview.md), and [provisioned throughput](../request-units.md) to demonstrate how to tackle a real-world data design exercise.
-
-If you usually work with relational databases, you've probably built habits and intuitions on how to design a data model. Because of the specific constraints of Azure Cosmos DB, but also because of its unique strengths, most of those best practices don't translate well and can drag you into suboptimal solutions. The goal of this article is to guide you through the complete process of modeling a real-world use case on Azure Cosmos DB, from item modeling to entity colocation and container partitioning.
-
-[Download or view a community-generated source code](https://github.com/jwidmer/AzureCosmosDbBlogExample) that illustrates the concepts from this article. This code sample was contributed by a community contributor and Azure Cosmos DB team doesn't support its maintenance.
-
-## The scenario
-
-For this exercise, we are going to consider the domain of a blogging platform where *users* can create *posts*. Users can also *like* and add *comments* to those posts.
-
-> [!TIP]
-> We have highlighted some words in *italic*; these words identify the kind of "things" our model will have to manipulate.
-
-Adding more requirements to our specification:
-
-- A front page displays a feed of recently created posts,
-- We can fetch all posts for a user, all comments for a post and all likes for a post,
-- Posts are returned with the username of their authors and a count of how many comments and likes they have,
-- Comments and likes are also returned with the username of the users who have created them,
-- When displayed as lists, posts only have to present a truncated summary of their content.
-
-## Identify the main access patterns
-
-To start, we give some structure to our initial specification by identifying our solution's access patterns. When designing a data model for Azure Cosmos DB, it's important to understand which requests our model will have to serve to make sure that the model will serve those requests efficiently.
-
-To make the overall process easier to follow, we categorize those different requests as either commands or queries, borrowing some vocabulary from [CQRS](https://en.wikipedia.org/wiki/Command%E2%80%93query_separation#Command_query_responsibility_segregation) where commands are write requests (that is, intents to update the system) and queries are read-only requests.
-
-Here is the list of requests that our platform will have to expose:
-
-- **[C1]** Create/edit a user
-- **[Q1]** Retrieve a user
-- **[C2]** Create/edit a post
-- **[Q2]** Retrieve a post
-- **[Q3]** List a user's posts in short form
-- **[C3]** Create a comment
-- **[Q4]** List a post's comments
-- **[C4]** Like a post
-- **[Q5]** List a post's likes
-- **[Q6]** List the *x* most recent posts created in short form (feed)
-
-At this stage, we haven't thought about the details of what each entity (user, post etc.) will contain. This step is usually among the first ones to be tackled when designing against a relational store, because we have to figure out how those entities will translate in terms of tables, columns, foreign keys etc. It is much less of a concern with a document database that doesn't enforce any schema at write.
-
-The main reason why it's important to identify our access patterns from the beginning is that this list of requests is going to be our test suite. Every time we iterate over our data model, we go through each of the requests and check its performance and scalability. We calculate the request units consumed in each model and optimize them. All these models use the default indexing policy; you can override it by indexing specific properties, which can further improve the RU consumption and latency.
-
-## V1: A first version
-
-We start with two containers: `users` and `posts`.
-
-### Users container
-
-This container only stores user items:
-
-```json
-{
- "id": "<user-id>",
- "username": "<username>"
-}
-```
-
-We partition this container by `id`, which means that each logical partition within that container will only contain one item.
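-
-As an illustration of how this choice is expressed, here's a hedged .NET SDK sketch that creates such a container with `/id` as the partition key (the database and container names are placeholders):
-
-```csharp
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-public static class UsersContainerSketch
-{
-    // Sketch: create the "users" container partitioned on /id, so each logical partition holds one user.
-    public static Task<ContainerResponse> CreateAsync(Database database)
-    {
-        return database.CreateContainerIfNotExistsAsync(
-            new ContainerProperties(id: "users", partitionKeyPath: "/id"));
-    }
-}
-```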
-
-### Posts container
-
-This container hosts posts, comments, and likes:
-
-```json
-{
- "id": "<post-id>",
- "type": "post",
- "postId": "<post-id>",
- "userId": "<post-author-id>",
- "title": "<post-title>",
- "content": "<post-content>",
- "creationDate": "<post-creation-date>"
-}
-
-{
- "id": "<comment-id>",
- "type": "comment",
- "postId": "<post-id>",
- "userId": "<comment-author-id>",
- "content": "<comment-content>",
- "creationDate": "<comment-creation-date>"
-}
-
-{
- "id": "<like-id>",
- "type": "like",
- "postId": "<post-id>",
- "userId": "<liker-id>",
- "creationDate": "<like-creation-date>"
-}
-```
-
-We partition this container by `postId`, which means that each logical partition within that container will contain one post, all the comments for that post and all the likes for that post.
-
-Note that we have introduced a `type` property in the items stored in this container to distinguish between the three types of entities that this container hosts.
-
-Also, we have chosen to reference related data instead of embedding it (check [this section](modeling-data.md) for details about these concepts) because:
-
-- there's no upper limit to how many posts a user can create,
-- posts can be arbitrarily long,
-- there's no upper limit to how many comments and likes a post can have,
-- we want to be able to add a comment or a like to a post without having to update the post itself.
-
-## How well does our model perform?
-
-It's now time to assess the performance and scalability of our first version. For each of the requests previously identified, we measure its latency and how many request units it consumes. This measurement is done against a dummy data set containing 100,000 users with 5 to 50 posts per user, and up to 25 comments and 100 likes per post.
-
-### [C1] Create/edit a user
-
-This request is straightforward to implement as we just create or update an item in the `users` container. The requests will nicely spread across all partitions thanks to the `id` partition key.
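-
-As a rough sketch (not part of the original walkthrough), this is what that upsert could look like with the .NET SDK v3; the `User` POCO, the database name placeholder, and the method wrapper are assumptions made for illustration:
-
-```csharp
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-// Hypothetical POCO matching the user item shown above.
-public class User
-{
-    public string id { get; set; }
-    public string username { get; set; }
-}
-
-public static class UserWrites
-{
-    // [C1] Create or edit a user with a single upsert; `client` is an existing CosmosClient.
-    public static async Task<double> UpsertUserAsync(CosmosClient client, User user)
-    {
-        Container users = client.GetDatabase("<database>").GetContainer("users");
-        ItemResponse<User> response =
-            await users.UpsertItemAsync(user, new PartitionKey(user.id));
-        return response.RequestCharge; // RU charge observed for the write
-    }
-}
-```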
--
-| **Latency** | **RU charge** | **Performance** |
-| --- | --- | --- |
-| 7 ms | 5.71 RU | ✅ |
-
-### [Q1] Retrieve a user
-
-Retrieving a user is done by reading the corresponding item from the `users` container.
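-
-For reference, a point read like this maps to `ReadItemAsync` in the .NET SDK v3. A minimal sketch, reusing the `User` type from the sketch above:
-
-```csharp
-// [Q1] Point read of a user by id. Because the users container is partitioned by `id`,
-// the same value serves as both the item id and the partition key (a 1 RU read).
-public static async Task<User> GetUserAsync(Container users, string userId)
-{
-    ItemResponse<User> response =
-        await users.ReadItemAsync<User>(userId, new PartitionKey(userId));
-    return response.Resource;
-}
-```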
--
-| **Latency** | **RU charge** | **Performance** |
-| --- | --- | --- |
-| 2 ms | 1 RU | ✅ |
-
-### [C2] Create/edit a post
-
-Similarly to **[C1]**, we just have to write to the `posts` container.
--
-| **Latency** | **RU charge** | **Performance** |
-| --- | --- | --- |
-| 9 ms | 8.76 RU | ✅ |
-
-### [Q2] Retrieve a post
-
-We start by retrieving the corresponding document from the `posts` container. But that's not enough: per our specification, we also have to aggregate the username of the post's author and the counts of how many comments and how many likes this post has, which requires three additional SQL queries to be issued.
--
-Each of the additional queries filters on the partition key of its respective container, which is exactly what we want to maximize performance and scalability. But we eventually have to perform four operations to return a single post, so we'll improve that in a later iteration.
-
-| **Latency** | **RU charge** | **Performance** |
-| --- | --- | --- |
-| 9 ms | 19.54 RU | ⚠ |
-
-### [Q3] List a user's posts in short form
-
-First, we have to retrieve the desired posts with a SQL query that fetches the posts corresponding to that particular user. But we also have to issue additional queries to aggregate the author's username and the counts of comments and likes.
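-
-As an illustrative sketch (the query text is inferred from the description, and the `Post` POCO is an assumption), the main query in this first version might be issued like this with the .NET SDK v3:
-
-```csharp
-using System.Collections.Generic;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-// Hypothetical POCO matching the post item shown earlier.
-public class Post
-{
-    public string id { get; set; }
-    public string type { get; set; }
-    public string postId { get; set; }
-    public string userId { get; set; }
-    public string title { get; set; }
-    public string content { get; set; }
-    public string creationDate { get; set; }
-}
-
-public static class PostQueries
-{
-    // [Q3] V1: list a user's posts. The query can't filter on the posts container's
-    // partition key (postId), so the SDK fans it out across all physical partitions.
-    public static async Task<List<Post>> ListUserPostsV1Async(Container posts, string userId)
-    {
-        QueryDefinition query = new QueryDefinition(
-                "SELECT * FROM p WHERE p.type = 'post' AND p.userId = @userId")
-            .WithParameter("@userId", userId);
-
-        List<Post> results = new List<Post>();
-        FeedIterator<Post> iterator = posts.GetItemQueryIterator<Post>(query);
-        while (iterator.HasMoreResults)
-        {
-            results.AddRange(await iterator.ReadNextAsync());
-        }
-        return results;
-    }
-}
-```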
--
-This implementation presents many drawbacks:
-
-- the queries aggregating the counts of comments and likes have to be issued for each post returned by the first query,
-- the main query does not filter on the partition key of the `posts` container, leading to a fan-out and a partition scan across the container.
-
-| **Latency** | **RU charge** | **Performance** |
-| --- | --- | --- |
-| 130 ms | 619.41 RU | ⚠ |
-
-### [C3] Create a comment
-
-A comment is created by writing the corresponding item in the `posts` container.
--
-| **Latency** | **RU charge** | **Performance** |
-| --- | --- | --- |
-| 7 ms | 8.57 RU | ✅ |
-
-### [Q4] List a post's comments
-
-We start with a query that fetches all the comments for that post. Once again, we also have to aggregate usernames separately for each comment.
--
-Although the main query does filter on the container's partition key, aggregating the usernames separately penalizes the overall performance. We'll improve that later on.
-
-| **Latency** | **RU charge** | **Performance** |
-| --- | --- | --- |
-| 23 ms | 27.72 RU | ⚠ |
-
-### [C4] Like a post
-
-Just like **[C3]**, we create the corresponding item in the `posts` container.
--
-| **Latency** | **RU charge** | **Performance** |
-| --- | --- | --- |
-| 6 ms | 7.05 RU | ✅ |
-
-### [Q5] List a post's likes
-
-Just like **[Q4]**, we query the likes for that post, then aggregate their usernames.
--
-| **Latency** | **RU charge** | **Performance** |
-| --- | --- | --- |
-| 59 ms | 58.92 RU | ⚠ |
-
-### [Q6] List the x most recent posts created in short form (feed)
-
-We fetch the most recent posts by querying the `posts` container sorted by descending creation date, then aggregate usernames and counts of comments and likes for each of the posts.
--
-Once again, our initial query doesn't filter on the partition key of the `posts` container, which triggers a costly fan-out. This one is even worse as we target a much larger result set and sort the results with an `ORDER BY` clause, which makes it more expensive in terms of request units.
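-
-To make the cost concrete, here's a sketch of what this V1 feed query could look like, reusing the `Post` type from the earlier sketch (the exact query text is an assumption based on the description):
-
-```csharp
-// [Q6] V1: fetch the 100 most recent posts across all users. There is no partition key
-// filter, and the ORDER BY runs over the entire container, hence the high RU charge.
-public static FeedIterator<Post> GetFeedV1(Container posts) =>
-    posts.GetItemQueryIterator<Post>(new QueryDefinition(
-        "SELECT TOP 100 * FROM p WHERE p.type = 'post' ORDER BY p.creationDate DESC"));
-```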
-
-| **Latency** | **RU charge** | **Performance** |
-| --- | --- | --- |
-| 306 ms | 2063.54 RU | ⚠ |
-
-## Reflecting on the performance of V1
-
-Looking at the performance issues we faced in the previous section, we can identify two main classes of problems:
-
-- some requests require multiple queries to be issued in order to gather all the data we need to return,
-- some queries don't filter on the partition key of the containers they target, leading to a fan-out that impedes our scalability.
-
-Let's resolve each of those problems, starting with the first one.
-
-## V2: Introducing denormalization to optimize read queries
-
-The reason we have to issue additional requests in some cases is that the results of the initial request don't contain all the data we need to return. When working with a non-relational data store like Azure Cosmos DB, this kind of issue is commonly solved by denormalizing data across our data set.
-
-In our example, we modify post items to add the username of the post's author, the count of comments and the count of likes:
-
-```json
-{
- "id": "<post-id>",
- "type": "post",
- "postId": "<post-id>",
- "userId": "<post-author-id>",
- "userUsername": "<post-author-username>",
- "title": "<post-title>",
- "content": "<post-content>",
- "commentCount": <count-of-comments>,
- "likeCount": <count-of-likes>,
- "creationDate": "<post-creation-date>"
-}
-```
-
-We also modify comment and like items to add the username of the user who has created them:
-
-```json
-{
- "id": "<comment-id>",
- "type": "comment",
- "postId": "<post-id>",
- "userId": "<comment-author-id>",
- "userUsername": "<comment-author-username>",
- "content": "<comment-content>",
- "creationDate": "<comment-creation-date>"
-}
-
-{
- "id": "<like-id>",
- "type": "like",
- "postId": "<post-id>",
- "userId": "<liker-id>",
- "userUsername": "<liker-username>",
- "creationDate": "<like-creation-date>"
-}
-```
-
-### Denormalizing comment and like counts
-
-What we want to achieve is that every time we add a comment or a like, we also increment the `commentCount` or the `likeCount` in the corresponding post. As our `posts` container is partitioned by `postId`, the new item (comment or like) and its corresponding post sit in the same logical partition. As a result, we can use a [stored procedure](stored-procedures-triggers-udfs.md) to perform that operation.
-
-Now when creating a comment (**[C3]**), instead of just adding a new item in the `posts` container we call the following stored procedure on that container:
-
-```javascript
-function createComment(postId, comment) {
- var collection = getContext().getCollection();
-
- collection.readDocument(
- `${collection.getAltLink()}/docs/${postId}`,
- function (err, post) {
- if (err) throw err;
-
- post.commentCount++;
- collection.replaceDocument(
- post._self,
- post,
- function (err) {
- if (err) throw err;
-
- comment.postId = postId;
- collection.createDocument(
- collection.getSelfLink(),
- comment
- );
- }
- );
- })
-}
-```
-
-This stored procedure takes the ID of the post and the body of the new comment as parameters, then:
-
-- retrieves the post
-- increments the `commentCount`
-- replaces the post
-- adds the new comment
-
-As stored procedures are executed as atomic transactions, the value of `commentCount` and the actual number of comments will always stay in sync.
-
-We obviously call a similar stored procedure when adding new likes to increment the `likeCount`.
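-
-On the client side, creating a comment is now a stored procedure call rather than a plain insert. A minimal sketch with the .NET SDK v3, assuming the procedure above has been registered on the `posts` container under the id `createComment`:
-
-```csharp
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-public static class CommentWrites
-{
-    // [C3] V2: create a comment by invoking the createComment stored procedure, so that
-    // the post's commentCount is incremented in the same atomic transaction as the insert.
-    public static async Task CreateCommentAsync(Container posts, string postId, object comment)
-    {
-        await posts.Scripts.ExecuteStoredProcedureAsync<object>(
-            "createComment",
-            new PartitionKey(postId),          // the post and its comment share this partition
-            new dynamic[] { postId, comment });
-    }
-}
-```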
-
-### Denormalizing usernames
-
-Usernames require a different approach as users not only sit in different partitions, but in a different container. When we have to denormalize data across partitions and containers, we can use the source container's [change feed](../change-feed.md).
-
-In our example, we use the change feed of the `users` container to react whenever users update their usernames. When that happens, we propagate the change by calling another stored procedure on the `posts` container:
--
-```javascript
-function updateUsernames(userId, username) {
- var collection = getContext().getCollection();
-
- collection.queryDocuments(
- collection.getSelfLink(),
- `SELECT * FROM p WHERE p.userId = '${userId}'`,
- function (err, results) {
- if (err) throw err;
-
- for (var i in results) {
- var doc = results[i];
- doc.userUsername = username;
-
- collection.upsertDocument(
- collection.getSelfLink(),
- doc);
- }
- });
-}
-```
-
-This stored procedure takes the ID of the user and the user's new username as parameters, then:
-
-- fetches all items matching the `userId` (which can be posts, comments, or likes)
-- for each of those items
- - replaces the `userUsername`
- - replaces the item
-
-> [!IMPORTANT]
-> This operation is costly because it requires this stored procedure to be executed on every partition of the `posts` container. We assume that most users choose a suitable username during sign-up and won't ever change it, so this update will run very rarely.
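-
-As an illustration of the wiring (not prescribed by the article), here's one way to host that reaction with the .NET SDK v3 change feed processor. The lease container, processor and instance names, and database name are assumptions, and this sketch rewrites the denormalized items directly from the handler with a cross-partition query instead of invoking the stored procedure once per logical partition:
-
-```csharp
-using System.Collections.Generic;
-using System.Threading;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-public static class UsernamePropagation
-{
-    // Watches the users container's change feed and, for every changed user, rewrites
-    // the denormalized userUsername on that user's posts, comments, and likes.
-    public static async Task<ChangeFeedProcessor> StartAsync(CosmosClient client)
-    {
-        Database db = client.GetDatabase("<database>");
-        Container users = db.GetContainer("users");
-        Container posts = db.GetContainer("posts");
-        Container leases = db.GetContainer("leases"); // lease container for the processor
-
-        ChangeFeedProcessor processor = users
-            .GetChangeFeedProcessorBuilder<User>(
-                "propagate-usernames",
-                async (IReadOnlyCollection<User> changes, CancellationToken ct) =>
-                {
-                    foreach (User user in changes)
-                    {
-                        QueryDefinition query = new QueryDefinition(
-                                "SELECT * FROM p WHERE p.userId = @userId")
-                            .WithParameter("@userId", user.id);
-
-                        FeedIterator<dynamic> iterator = posts.GetItemQueryIterator<dynamic>(query);
-                        while (iterator.HasMoreResults)
-                        {
-                            foreach (dynamic item in await iterator.ReadNextAsync(ct))
-                            {
-                                item.userUsername = user.username;
-                                await posts.UpsertItemAsync(item,
-                                    new PartitionKey((string)item.postId), cancellationToken: ct);
-                            }
-                        }
-                    }
-                })
-            .WithInstanceName("propagate-usernames-worker")
-            .WithLeaseContainer(leases)
-            .Build();
-
-        await processor.StartAsync();
-        return processor;
-    }
-}
-```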
-
-## What are the performance gains of V2?
-
-### [Q2] Retrieve a post
-
-Now that our denormalization is in place, we only have to fetch a single item to handle that request.
--
-| **Latency** | **RU charge** | **Performance** |
-| --- | --- | --- |
-| 2 ms | 1 RU | ✅ |
-
-### [Q4] List a post's comments
-
-Here again, we can spare the extra requests that fetched the usernames and end up with a single query that filters on the partition key.
--
-| **Latency** | **RU charge** | **Performance** |
-| --- | --- | --- |
-| 4 ms | 7.72 RU | ✅ |
-
-### [Q5] List a post's likes
-
-Exact same situation when listing the likes.
--
-| **Latency** | **RU charge** | **Performance** |
-| --- | --- | --- |
-| 4 ms | 8.92 RU | ✅ |
-
-## V3: Making sure all requests are scalable
-
-Looking at our overall performance improvements, there are still two requests that we haven't fully optimized: **[Q3]** and **[Q6]**. They are the requests involving queries that don't filter on the partition key of the containers they target.
-
-### [Q3] List a user's posts in short form
-
-This request already benefits from the improvements introduced in V2, which spares additional queries.
--
-But the remaining query is still not filtering on the partition key of the `posts` container.
-
-The way to think about this situation is actually simple:
-
-1. This request *has* to filter on the `userId` because we want to fetch all posts for a particular user
-1. It doesn't perform well because it is executed against the `posts` container, which is not partitioned by `userId`
-1. Stating the obvious, we would solve our performance problem by executing this request against a container that *is* partitioned by `userId`
-1. It turns out that we already have such a container: the `users` container!
-
-So we introduce a second level of denormalization by duplicating entire posts to the `users` container. By doing that, we effectively get a copy of our posts, only partitioned along a different dimension, making them much more efficient to retrieve by their `userId`.
-
-The `users` container now contains 2 kinds of items:
-
-```json
-{
- "id": "<user-id>",
- "type": "user",
- "userId": "<user-id>",
- "username": "<username>"
-}
-
-{
- "id": "<post-id>",
- "type": "post",
- "postId": "<post-id>",
- "userId": "<post-author-id>",
- "userUsername": "<post-author-username>",
- "title": "<post-title>",
- "content": "<post-content>",
- "commentCount": <count-of-comments>,
- "likeCount": <count-of-likes>,
- "creationDate": "<post-creation-date>"
-}
-```
-
-Note that:
-
-- we have introduced a `type` field in the user item to distinguish users from posts,
-- we have also added a `userId` field in the user item, which is redundant with the `id` field but is required as the `users` container is now partitioned by `userId` (and not `id` as previously)
-
-To achieve that denormalization, we once again use the change feed. This time, we react on the change feed of the `posts` container to dispatch any new or updated post to the `users` container. And because listing posts doesn't require returning their full content, we can truncate them in the process.
--
-We can now route our query to the `users` container, filtering on the container's partition key.
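-
-As a short sketch (reusing the `Post` type from the earlier sketch), the rewritten [Q3] query can now be scoped to a single logical partition:
-
-```csharp
-// [Q3] V3: the same query as before, but routed to the users container, whose partition
-// key is userId, so it is served from a single logical partition.
-public static FeedIterator<Post> ListUserPostsV3(Container users, string userId) =>
-    users.GetItemQueryIterator<Post>(
-        new QueryDefinition("SELECT * FROM u WHERE u.type = 'post' AND u.userId = @userId")
-            .WithParameter("@userId", userId),
-        requestOptions: new QueryRequestOptions { PartitionKey = new PartitionKey(userId) });
-```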
--
-| **Latency** | **RU charge** | **Performance** |
-| --- | --- | --- |
-| 4 ms | 6.46 RU | ✅ |
-
-### [Q6] List the x most recent posts created in short form (feed)
-
-We have to deal with a similar situation here: even after dropping the additional queries made unnecessary by the denormalization introduced in V2, the remaining query does not filter on the container's partition key:
--
-Following the same approach, maximizing this request's performance and scalability requires that it only hits one partition. This is conceivable because we only have to return a limited number of items; in order to populate our blogging platform's home page, we just need to get the 100 most recent posts, without the need to paginate through the entire data set.
-
-So to optimize this last request, we introduce a third container to our design, entirely dedicated to serving this request. We denormalize our posts to that new `feed` container:
-
-```json
-{
- "id": "<post-id>",
- "type": "post",
- "postId": "<post-id>",
- "userId": "<post-author-id>",
- "userUsername": "<post-author-username>",
- "title": "<post-title>",
- "content": "<post-content>",
- "commentCount": <count-of-comments>,
- "likeCount": <count-of-likes>,
- "creationDate": "<post-creation-date>"
-}
-```
-
-This container is partitioned by `type`, which will always be `post` in our items. Doing that ensures that all the items in this container will sit in the same partition.
-
-To achieve the denormalization, we just have to hook into the change feed pipeline we previously introduced and dispatch the posts to that new container. One important thing to bear in mind is that we need to make sure that we only store the 100 most recent posts; otherwise, the content of the container may grow beyond the maximum size of a partition. This is done by calling a [post-trigger](stored-procedures-triggers-udfs.md#triggers) every time a document is added in the container:
--
-Here's the body of the post-trigger that truncates the collection:
-
-```javascript
-function truncateFeed() {
- const maxDocs = 100;
- var context = getContext();
- var collection = context.getCollection();
-
- collection.queryDocuments(
- collection.getSelfLink(),
- "SELECT VALUE COUNT(1) FROM f",
- function (err, results) {
- if (err) throw err;
-
- processCountResults(results);
- });
-
- function processCountResults(results) {
- // + 1 because the query didn't count the newly inserted doc
- if ((results[0] + 1) > maxDocs) {
- var docsToRemove = results[0] + 1 - maxDocs;
- collection.queryDocuments(
- collection.getSelfLink(),
- `SELECT TOP ${docsToRemove} * FROM f ORDER BY f.creationDate`,
- function (err, results) {
- if (err) throw err;
-
- processDocsToRemove(results, 0);
- });
- }
- }
-
- function processDocsToRemove(results, index) {
- var doc = results[index];
- if (doc) {
- collection.deleteDocument(
- doc._self,
- function (err) {
- if (err) throw err;
-
- processDocsToRemove(results, index + 1);
- });
- }
- }
-}
-```
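-
-A post-trigger only runs when the write that creates the document explicitly references it. Here's a minimal sketch of the dispatching write, assuming the trigger above has been registered on the `feed` container under the id `truncateFeed` and reusing the `Post` type from earlier:
-
-```csharp
-using System.Collections.Generic;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-public static class FeedWrites
-{
-    // Write a (truncated) post to the feed container and run the truncateFeed post-trigger,
-    // so that the container never holds more than the 100 most recent posts.
-    public static Task UpsertFeedItemAsync(Container feed, Post post) =>
-        feed.UpsertItemAsync(post, new PartitionKey("post"), new ItemRequestOptions
-        {
-            PostTriggers = new List<string> { "truncateFeed" }
-        });
-}
-```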
-
-The final step is to reroute our query to our new `feed` container:
--
-| **Latency** | **RU charge** | **Performance** |
-| --- | --- | --- |
-| 9 ms | 16.97 RU | ✅ |
-
-## Conclusion
-
-Let's have a look at the overall performance and scalability improvements we have introduced over the different versions of our design.
-
-| | V1 | V2 | V3 |
-| --- | --- | --- | --- |
-| **[C1]** | 7 ms / 5.71 RU | 7 ms / 5.71 RU | 7 ms / 5.71 RU |
-| **[Q1]** | 2 ms / 1 RU | 2 ms / 1 RU | 2 ms / 1 RU |
-| **[C2]** | 9 ms / 8.76 RU | 9 ms / 8.76 RU | 9 ms / 8.76 RU |
-| **[Q2]** | 9 ms / 19.54 RU | 2 ms / 1 RU | 2 ms / 1 RU |
-| **[Q3]** | 130 ms / 619.41 RU | 28 ms / 201.54 RU | 4 ms / 6.46 RU |
-| **[C3]** | 7 ms / 8.57 RU | 7 ms / 15.27 RU | 7 ms / 15.27 RU |
-| **[Q4]** | 23 ms / 27.72 RU | 4 ms / 7.72 RU | 4 ms / 7.72 RU |
-| **[C4]** | 6 ms / 7.05 RU | 7 ms / 14.67 RU | 7 ms / 14.67 RU |
-| **[Q5]** | 59 ms / 58.92 RU | 4 ms / 8.92 RU | 4 ms / 8.92 RU |
-| **[Q6]** | 306 ms / 2063.54 RU | 83 ms / 532.33 RU | 9 ms / 16.97 RU |
-
-### We have optimized a read-heavy scenario
-
-You may have noticed that we have concentrated our efforts towards improving the performance of read requests (queries) at the expense of write requests (commands). In many cases, write operations now trigger subsequent denormalization through change feeds, which makes them more computationally expensive and longer to materialize.
-
-This is justified by the fact that a blogging platform (like most social apps) is read-heavy, which means that the amount of read requests it has to serve is usually orders of magnitude higher than the amount of write requests. So it makes sense to make write requests more expensive to execute in order to let read requests be cheaper and better performing.
-
-If we look at the most extreme optimization we have done, **[Q6]** went from 2000+ RUs to just 17 RUs; we have achieved that by denormalizing posts at a cost of around 10 RUs per item. As we would serve a lot more feed requests than creation or updates of posts, the cost of this denormalization is negligible considering the overall savings.
-
-### Denormalization can be applied incrementally
-
-The scalability improvements we've explored in this article involve denormalization and duplication of data across the data set. It should be noted that these optimizations don't have to be put in place at day 1. Queries that filter on partition keys perform better at scale, but cross-partition queries can be totally acceptable if they are called rarely or against a limited data set. If you're just building a prototype, or launching a product with a small and controlled user base, you can probably spare those improvements for later; what's important then is to [monitor](../use-metrics.md) your model's performance so you can decide if and when it's time to bring them in.
-
-The change feed that we use to distribute updates to other containers stores all those updates persistently. This makes it possible to request all updates since the creation of the container and bootstrap denormalized views as a one-time catch-up operation, even if your system already has a lot of data.
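-
-For illustration, with the .NET SDK v3 change feed processor this catch-up can be expressed by starting the processor from the beginning of time; the handler, container variables, and names below are assumptions carried over from the earlier sketches:
-
-```csharp
-// Rebuild a denormalized view from scratch: start reading the posts container's change
-// feed from the beginning instead of from "now".
-ChangeFeedProcessor backfill = posts
-    .GetChangeFeedProcessorBuilder<Post>("backfill-denormalized-views", HandleChangesAsync)
-    .WithStartTime(DateTime.MinValue.ToUniversalTime())  // replay all changes since creation
-    .WithInstanceName("backfill-worker")
-    .WithLeaseContainer(leases)
-    .Build();
-
-await backfill.StartAsync();
-```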
-
-## Next steps
-
-After this introduction to practical data modeling and partitioning, you may want to check the following articles to review the concepts we have covered:
-
-- [Work with databases, containers, and items](../account-databases-containers-items.md)
-- [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
-- [Change feed in Azure Cosmos DB](../change-feed.md)
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db How To Multi Master https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-multi-master.md
- Title: How to configure multi-region writes in Azure Cosmos DB
-description: Learn how to configure multi-region writes for your applications by using different SDKs in Azure Cosmos DB.
---- Previously updated : 01/06/2021-----
-# Configure multi-region writes in your applications that use Azure Cosmos DB
-
-Once an account has been created with multiple write regions enabled, you must make two changes to the ConnectionPolicy for the Cosmos client in your application to enable multi-region writes in Azure Cosmos DB. Within the ConnectionPolicy, set UseMultipleWriteLocations to true and pass the name of the region where the application is deployed to ApplicationRegion. This populates the PreferredLocations property based on geo-proximity to the location passed in. If a new region is later added to the account, the application doesn't have to be updated or redeployed; it automatically detects the closest region and auto-homes onto it should a regional event occur.
-
-> [!Note]
-> Cosmos accounts initially configured with a single write region can be configured to multiple write regions with zero downtime. To learn more, see [Configure multiple-write regions](../how-to-manage-database-account.md#configure-multiple-write-regions).
-
-## <a id="portal"></a> Azure portal
-
-To enable multi-region writes from Azure portal, use the following steps:
-
-1. Sign-in to the [Azure portal](https://portal.azure.com/).
-
-1. Navigate to your Azure Cosmos account and from the menu, open the **Replicate data globally** pane.
-
-1. Under the **Multi-region writes** option, choose **enable**. It automatically adds the existing regions to read and write regions.
-
-1. You can add additional regions by selecting the icons on the map or by selecting the **Add region** button. All the regions you add will have both reads and writes enabled.
-
-1. After you update the region list, select **save** to apply the changes.
-
- :::image type="content" source="./media/how-to-multi-master/enable-multi-region-writes.png" alt-text="Screenshot to enable multi-region writes using Azure portal" lightbox="./media/how-to-multi-master/enable-multi-region-writes.png":::
-
-## <a id="netv2"></a>.NET SDK v2
-
-To enable multi-region writes in your application, set `UseMultipleWriteLocations` to `true`. Also, set `SetCurrentLocation` to the region in which the application is being deployed and where Azure Cosmos DB is replicated:
-
-```csharp
-ConnectionPolicy policy = new ConnectionPolicy
- {
- ConnectionMode = ConnectionMode.Direct,
- ConnectionProtocol = Protocol.Tcp,
- UseMultipleWriteLocations = true
- };
-policy.SetCurrentLocation("West US 2");
-```
-
-## <a id="netv3"></a>.NET SDK v3
-
-To enable multi-region writes in your application, set `ApplicationRegion` to the region in which the application is being deployed and where Cosmos DB is replicated:
-
-```csharp
-CosmosClient cosmosClient = new CosmosClient(
- "<connection-string-from-portal>",
- new CosmosClientOptions()
- {
- ApplicationRegion = Regions.WestUS2,
- });
-```
-
-Optionally, you can use the `CosmosClientBuilder` and `WithApplicationRegion` to achieve the same result:
-
-```csharp
-CosmosClientBuilder cosmosClientBuilder = new CosmosClientBuilder("<connection-string-from-portal>")
- .WithApplicationRegion(Regions.WestUS2);
-CosmosClient client = cosmosClientBuilder.Build();
-```
-
-## <a id="java4-multi-region-writes"></a> Java V4 SDK
-
-To enable multi-region writes in your application, call `.multipleWriteRegionsEnabled(true)` and `.preferredRegions(preferredRegions)` in the client builder, where `preferredRegions` is a `List` containing one element - that is the region in which the application is being deployed and where Cosmos DB is replicated:
-
-# [Async](#tab/api-async)
-
- [Java SDK V4](sql-api-sdk-java-v4.md) (Maven [com.azure::azure-cosmos](https://mvnrepository.com/artifact/com.azure/azure-cosmos)) Async API
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=ConfigureMultimasterAsync)]
-
-# [Sync](#tab/api-sync)
-
- [Java SDK V4](sql-api-sdk-java-v4.md) (Maven [com.azure::azure-cosmos](https://mvnrepository.com/artifact/com.azure/azure-cosmos)) Sync API
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=ConfigureMultimasterSync)]
-
-
-
-## <a id="java2-multi-region-writes"></a> Async Java V2 SDK
-
-The Java V2 SDK used the Maven [com.microsoft.azure::azure-cosmosdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb). To enable multi-region writes in your application, set `policy.setUsingMultipleWriteLocations(true)` and set `policy.setPreferredLocations` to the region in which the application is being deployed and where Cosmos DB is replicated:
-
-```java
-ConnectionPolicy policy = new ConnectionPolicy();
-policy.setUsingMultipleWriteLocations(true);
-policy.setPreferredLocations(Collections.singletonList(region));
-
-AsyncDocumentClient client =
- new AsyncDocumentClient.Builder()
- .withMasterKeyOrResourceToken(this.accountKey)
- .withServiceEndpoint(this.accountEndpoint)
- .withConsistencyLevel(ConsistencyLevel.Eventual)
- .withConnectionPolicy(policy).build();
-```
-
-## <a id="javascript"></a>Node.js, JavaScript, and TypeScript SDKs
-
-To enable multi-region writes in your application, set `connectionPolicy.UseMultipleWriteLocations` to `true`. Also, set `connectionPolicy.PreferredLocations` to the region in which the application is being deployed and where Cosmos DB is replicated:
-
-```javascript
-const connectionPolicy: ConnectionPolicy = new ConnectionPolicy();
-connectionPolicy.UseMultipleWriteLocations = true;
-connectionPolicy.PreferredLocations = [region];
-
-const client = new CosmosClient({
- endpoint: config.endpoint,
- auth: { masterKey: config.key },
- connectionPolicy,
- consistencyLevel: ConsistencyLevel.Eventual
-});
-```
-
-## <a id="python"></a>Python SDK
-
-To enable multi-region writes in your application, set `connection_policy.UseMultipleWriteLocations` to `true`. Also, set `connection_policy.PreferredLocations` to the region in which the application is being deployed and where Cosmos DB is replicated.
-
-```python
-connection_policy = documents.ConnectionPolicy()
-connection_policy.UseMultipleWriteLocations = True
-connection_policy.PreferredLocations = [region]
-
-client = cosmos_client.CosmosClient(self.account_endpoint, {
- 'masterKey': self.account_key}, connection_policy, documents.ConsistencyLevel.Session)
-```
-
-## Next steps
-
-Read the following articles:
-
-* [Use session tokens to manage consistency in Azure Cosmos DB](how-to-manage-consistency.md#utilize-session-tokens)
-* [Conflict types and resolution policies in Azure Cosmos DB](../conflict-resolution-policies.md)
-* [High availability in Azure Cosmos DB](../high-availability.md)
-* [Consistency levels in Azure Cosmos DB](../consistency-levels.md)
-* [Choose the right consistency level in Azure Cosmos DB](../consistency-levels.md)
-* [Consistency, availability, and performance tradeoffs in Azure Cosmos DB](../consistency-levels.md)
-* [Availability and performance tradeoffs for various consistency levels](../consistency-levels.md)
-* [Globally scaling provisioned throughput](../request-units.md)
-* [Global distribution: Under the hood](../global-dist-under-the-hood.md)
cosmos-db How To Provision Autoscale Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-provision-autoscale-throughput.md
- Title: Provision autoscale throughput in Azure Cosmos DB SQL API
-description: Learn how to provision autoscale throughput at the container and database level in Azure Cosmos DB SQL API using Azure portal, CLI, PowerShell, and various other SDKs.
----- Previously updated : 04/01/2022---
-# Provision autoscale throughput on database or container in Azure Cosmos DB - SQL API
-
-This article explains how to provision autoscale throughput on a database or container (collection, graph, or table) in Azure Cosmos DB SQL API. You can enable autoscale on a single container, or provision autoscale throughput on a database and share it among all the containers in the database.
-
-If you are using a different API, see the [API for MongoDB](../mongodb/how-to-provision-throughput-mongodb.md) or [Cassandra API](../cassandr) articles to provision the throughput.
-
-## Azure portal
-
-### Create new database or container with autoscale
-
-1. Sign in to the [Azure portal](https://portal.azure.com) or the [Azure Cosmos DB explorer.](https://cosmos.azure.com/)
-
-1. Navigate to your Azure Cosmos DB account and open the **Data Explorer** tab.
-
-1. Select **New Container.** Enter a name for your database, container, and a partition key. Under database or container throughput, select the **Autoscale** option, and set the [maximum throughput (RU/s)](../provision-throughput-autoscale.md#how-autoscale-provisioned-throughput-works) that you want the database or container to scale to.
-
- :::image type="content" source="./media/how-to-provision-autoscale-throughput/create-new-autoscale-container.png" alt-text="Creating a container and configuring autoscale provisioned throughput":::
-
-1. Select **OK**.
-
-To provision autoscale on a shared throughput database, select the **Provision database throughput** option when creating a new database.
-
-### Enable autoscale on existing database or container
-
-1. Sign in to the [Azure portal](https://portal.azure.com) or the [Azure Cosmos DB explorer.](https://cosmos.azure.com/)
-
-1. Navigate to your Azure Cosmos DB account and open the **Data Explorer** tab.
-
-1. Select **Scale and Settings** for your container, or **Scale** for your database.
-
-1. Under **Scale**, select the **Autoscale** option and **Save**.
-
- :::image type="content" source="./media/how-to-provision-autoscale-throughput/autoscale-scale-and-settings.png" alt-text="Enabling autoscale on an existing container":::
-
-> [!NOTE]
-> When you enable autoscale on an existing database or container, the starting value for max RU/s is determined by the system, based on your current manual provisioned throughput settings and storage. After the operation completes, you can change the max RU/s if needed. [Learn more.](../autoscale-faq.yml#how-does-the-migration-between-autoscale-and-standard--manual--provisioned-throughput-work-)
-
-## Azure Cosmos DB .NET V3 SDK
-
-Use [version 3.9 or higher](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) of the Azure Cosmos DB .NET SDK for SQL API to manage autoscale resources.
-
-> [!IMPORTANT]
-> You can use the .NET SDK to create new autoscale resources. The SDK does not support migrating between autoscale and standard (manual) throughput. The migration scenario is currently supported in only the [Azure portal](#enable-autoscale-on-existing-database-or-container), [CLI](#azure-cli), and [PowerShell](#azure-powershell).
-
-### Create database with shared throughput
-
-```csharp
-// Create instance of CosmosClient
-CosmosClient cosmosClient = new CosmosClient(Endpoint, PrimaryKey);
-
-// Autoscale throughput settings
-ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.CreateAutoscaleThroughput(1000); //Set autoscale max RU/s
-
-//Create the database with autoscale enabled
-database = await cosmosClient.CreateDatabaseAsync(DatabaseName, throughputProperties: autoscaleThroughputProperties);
-```
-
-### Create container with dedicated throughput
-
-```csharp
-// Get reference to database that container will be created in
-Database database = await cosmosClient.GetDatabase("DatabaseName");
-
-// Container and autoscale throughput settings
-ContainerProperties autoscaleContainerProperties = new ContainerProperties("ContainerName", "/partitionKey");
-ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.CreateAutoscaleThroughput(1000); //Set autoscale max RU/s
-
-// Create the container with autoscale enabled
-container = await database.CreateContainerAsync(autoscaleContainerProperties, autoscaleThroughputProperties);
-```
-
-### Read the current throughput (RU/s)
-
-```csharp
-// Get a reference to the resource
-Container container = cosmosClient.GetDatabase("DatabaseName").GetContainer("ContainerName");
-
-// Read the throughput on a resource
-ThroughputProperties autoscaleContainerThroughput = await container.ReadThroughputAsync(requestOptions: null);
-
-// The autoscale max throughput (RU/s) of the resource
-int? autoscaleMaxThroughput = autoscaleContainerThroughput.AutoscaleMaxThroughput;
-
-// The throughput (RU/s) the resource is currently scaled to
-int? currentThroughput = autoscaleContainerThroughput.Throughput;
-```
-
-### Change the autoscale max throughput (RU/s)
-
-```csharp
-// Change the autoscale max throughput (RU/s)
-await container.ReplaceThroughputAsync(ThroughputProperties.CreateAutoscaleThroughput(newAutoscaleMaxThroughput));
-```
-
-## Azure Cosmos DB Java V4 SDK
-
-You can use [version 4.0 or higher](https://mvnrepository.com/artifact/com.azure/azure-cosmos) of the Azure Cosmos DB Java SDK for SQL API to manage autoscale resources.
-
-> [!IMPORTANT]
-> You can use the Java SDK to create new autoscale resources. The SDK does not support migrating between autoscale and standard (manual) throughput. The migration scenario is currently supported in only the [Azure portal](#enable-autoscale-on-existing-database-or-container), [CLI](#azure-cli), and [PowerShell](#azure-powershell).
-### Create database with shared throughput
-
-# [Async](#tab/api-async)
-
-```java
-// Create instance of CosmosClient
-CosmosAsyncClient client = new CosmosClientBuilder()
- .setEndpoint(HOST)
- .setKey(PRIMARYKEY)
- .setConnectionPolicy(CONNECTIONPOLICY)
- .buildAsyncClient();
-
-// Autoscale throughput settings
-ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(1000); //Set autoscale max RU/s
-
-//Create the database with autoscale enabled
-CosmosAsyncDatabase database = client.createDatabase(databaseName, autoscaleThroughputProperties).block().getDatabase();
-```
-
-# [Sync](#tab/api-sync)
-
-```java
-// Create instance of CosmosClient
-CosmosClient client = new CosmosClientBuilder()
- .setEndpoint(HOST)
- .setKey(PRIMARYKEY)
- .setConnectionPolicy(CONNECTIONPOLICY)
- .buildClient();
-
-// Autoscale throughput settings
-ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(1000); //Set autoscale max RU/s
-
-//Create the database with autoscale enabled
-CosmosDatabase database = client.createDatabase(databaseName, autoscaleThroughputProperties).getDatabase();
-```
-
-
-
-### Create container with dedicated throughput
-
-# [Async](#tab/api-async)
-
-```java
-// Get reference to database that container will be created in
-CosmosAsyncDatabase database = client.createDatabase("DatabaseName").block().getDatabase();
-
-// Container and autoscale throughput settings
-CosmosContainerProperties autoscaleContainerProperties = new CosmosContainerProperties("ContainerName", "/partitionKey");
-ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(1000); //Set autoscale max RU/s
-
-// Create the container with autoscale enabled
-CosmosAsyncContainer container = database.createContainer(autoscaleContainerProperties, autoscaleThroughputProperties, new CosmosContainerRequestOptions())
- .block()
- .getContainer();
-```
-
-# [Sync](#tab/api-sync)
-
-```java
-// Get reference to database that container will be created in
-CosmosDatabase database = client.createDatabase("DatabaseName").getDatabase();
-
-// Container and autoscale throughput settings
-CosmosContainerProperties autoscaleContainerProperties = new CosmosContainerProperties("ContainerName", "/partitionKey");
-ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(1000); //Set autoscale max RU/s
-
-// Create the container with autoscale enabled
-CosmosContainer container = database.createContainer(autoscaleContainerProperties, autoscaleThroughputProperties, new CosmosContainerRequestOptions())
- .getContainer();
-```
-
-
-
-### Read the current throughput (RU/s)
-
-# [Async](#tab/api-async)
-
-```java
-// Get a reference to the resource
-CosmosAsyncContainer container = client.getDatabase("DatabaseName").getContainer("ContainerName");
-
-// Read the throughput on a resource
-ThroughputProperties autoscaleContainerThroughput = container.readThroughput().block().getProperties();
-
-// The autoscale max throughput (RU/s) of the resource
-int autoscaleMaxThroughput = autoscaleContainerThroughput.getAutoscaleMaxThroughput();
-
-// The throughput (RU/s) the resource is currently scaled to
-int currentThroughput = autoscaleContainerThroughput.Throughput;
-```
-
-# [Sync](#tab/api-sync)
-
-```java
-// Get a reference to the resource
-CosmosContainer container = client.getDatabase("DatabaseName").getContainer("ContainerName");
-
-// Read the throughput on a resource
-ThroughputProperties autoscaleContainerThroughput = container.readThroughput().getProperties();
-
-// The autoscale max throughput (RU/s) of the resource
-int autoscaleMaxThroughput = autoscaleContainerThroughput.getAutoscaleMaxThroughput();
-
-// The throughput (RU/s) the resource is currently scaled to
-int currentThroughput = autoscaleContainerThroughput.Throughput;
-```
-
-
-
-### Change the autoscale max throughput (RU/s)
-
-# [Async](#tab/api-async)
-
-```java
-// Change the autoscale max throughput (RU/s)
-container.replaceThroughput(ThroughputProperties.createAutoscaledThroughput(newAutoscaleMaxThroughput)).block();
-```
-
-# [Sync](#tab/api-sync)
-
-```java
-// Change the autoscale max throughput (RU/s)
-container.replaceThroughput(ThroughputProperties.createAutoscaledThroughput(newAutoscaleMaxThroughput));
-```
---
-## Azure Resource Manager
-
-Azure Resource Manager templates can be used to provision autoscale throughput on a new database or container-level resource for all Azure Cosmos DB APIs. See [Azure Resource Manager templates for Azure Cosmos DB](./templates-samples-sql.md) for samples. By design, Azure Resource Manager templates cannot be used to migrate between provisioned and autoscale throughput on an existing resource.
-
-## Azure CLI
-
-Azure CLI can be used to provision autoscale throughput on a new database or container-level resource for all Azure Cosmos DB APIs, or enable autoscale on an existing resource. For samples see [Azure CLI Samples for Azure Cosmos DB](cli-samples.md).
-
-## Azure PowerShell
-
-Azure PowerShell can be used to provision autoscale throughput on a new database or container-level resource for all Azure Cosmos DB APIs, or enable autoscale on an existing resource. For samples see [Azure PowerShell samples for Azure Cosmos DB](powershell-samples.md).
-
-## Next steps
-
-* Learn about the [benefits of provisioned throughput with autoscale](../provision-throughput-autoscale.md#benefits-of-autoscale).
-* Learn how to [choose between manual and autoscale throughput](../how-to-choose-offer.md).
-* Review the [autoscale FAQ](../autoscale-faq.yml).
cosmos-db How To Provision Container Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-provision-container-throughput.md
- Title: Provision container throughput in Azure Cosmos DB SQL API
-description: Learn how to provision throughput at the container level in Azure Cosmos DB SQL API using Azure portal, CLI, PowerShell and various other SDKs.
---- Previously updated : 10/14/2020-----
-# Provision standard (manual) throughput on an Azure Cosmos container - SQL API
-
-This article explains how to provision standard (manual) throughput on a container in Azure Cosmos DB SQL API. You can provision throughput on a single container, or [provision throughput on a database](how-to-provision-database-throughput.md) and share it among the containers within the database. You can provision throughput on a container using Azure portal, Azure CLI, or Azure Cosmos DB SDKs.
-
-If you are using a different API, see the [API for MongoDB](../mongodb/how-to-provision-throughput-mongodb.md) or [Cassandra API](../cassandr) articles to provision the throughput.
-
-## Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. [Create a new Azure Cosmos account](create-sql-api-dotnet.md#create-account), or select an existing Azure Cosmos account.
-
-1. Open the **Data Explorer** pane, and select **New Container**. Next, provide the following details:
-
- * Indicate whether you are creating a new database or using an existing one.
- * Enter a **Container Id**.
- * Enter a **Partition key** value (for example, `/ItemID`).
- * Select **Autoscale** or **Manual** throughput and enter the required **Container throughput** (for example, 1000 RU/s).
- * Select **OK**.
-
- :::image type="content" source="../media/how-to-provision-container-throughput/provision-container-throughput-portal-sql-api.png" alt-text="Screenshot of Data Explorer, with New Collection highlighted":::
-
-## Azure CLI or PowerShell
-
-To create a container with dedicated throughput see,
-
-* [Create a container using Azure CLI](manage-with-cli.md#create-a-container)
-* [Create a container using PowerShell](manage-with-powershell.md#create-container)
-
-## .NET SDK
-
-> [!Note]
-> Use the Cosmos SDKs for SQL API to provision throughput for all Cosmos DB APIs, except Cassandra and MongoDB API.
-
-# [.NET SDK V2](#tab/dotnetv2)
-
-```csharp
-// Create a container with a partition key and provision throughput of 400 RU/s
-DocumentCollection myCollection = new DocumentCollection();
-myCollection.Id = "myContainerName";
-myCollection.PartitionKey.Paths.Add("/myPartitionKey");
-
-await client.CreateDocumentCollectionAsync(
- UriFactory.CreateDatabaseUri("myDatabaseName"),
- myCollection,
- new RequestOptions { OfferThroughput = 400 });
-```
-
-# [.NET SDK V3](#tab/dotnetv3)
-
-[!code-csharp[](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos/tests/Microsoft.Azure.Cosmos.Tests/SampleCodeForDocs/ContainerDocsSampleCode.cs?name=ContainerCreateWithThroughput)]
---
-## JavaScript SDK
-
-```javascript
-// Create a new Client
-const client = new CosmosClient({ endpoint, key });
-
-// Create a database
-const { database } = await client.databases.createIfNotExists({ id: "databaseId" });
-
-// Create a container with the specified throughput
-const { resource } = await database.containers.createIfNotExists({
-id: "containerId",
-throughput: 1000
-});
-
-// To update an existing container or database's throughput, you need to use the offers API
-// Get all the offers
-const { resources: offers } = await client.offers.readAll().fetchAll();
-
-// Find the offer associated with your container or the database
-const offer = offers.find((_offer) => _offer.offerResourceId === resource._rid);
-
-// Change the throughput value
-offer.content.offerThroughput = 2000;
-
-// Replace the offer.
-await client.offer(offer.id).replace(offer);
-```
-
-## Next steps
-
-See the following articles to learn about throughput provisioning in Azure Cosmos DB:
-
-* [How to provision standard (manual) throughput on a database](how-to-provision-database-throughput.md)
-* [How to provision autoscale throughput on a database](how-to-provision-autoscale-throughput.md)
-* [Request units and throughput in Azure Cosmos DB](../request-units.md)
cosmos-db How To Provision Database Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-provision-database-throughput.md
- Title: Provision database throughput in Azure Cosmos DB SQL API
-description: Learn how to provision throughput at the database level in Azure Cosmos DB SQL API using Azure portal, CLI, PowerShell and various other SDKs.
---- Previously updated : 10/15/2020-----
-# Provision standard (manual) throughput on a database in Azure Cosmos DB - SQL API
-
-This article explains how to provision standard (manual) throughput on a database in Azure Cosmos DB SQL API. You can provision throughput for a single [container](how-to-provision-container-throughput.md), or for a database and share the throughput among the containers within it. To learn when to use container level and database level throughput, see the [Use cases for provisioning throughput on containers and databases](../set-throughput.md) article. You can provision database level throughput by using the Azure portal or Azure Cosmos DB SDKs.
-
-If you are using a different API, see the [API for MongoDB](../mongodb/how-to-provision-throughput-mongodb.md) or [Cassandra API](../cassandr) articles to provision the throughput.
-
-## Provision throughput using Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. [Create a new Azure Cosmos account](create-sql-api-dotnet.md#create-account), or select an existing Azure Cosmos account.
-
-1. Open the **Data Explorer** pane, and select **New Database**. Provide the following details:
-
- * Enter a database ID.
- * Select the **Share throughput across containers** option.
- * Select **Autoscale** or **Manual** throughput and enter the required **Database throughput** (for example, 1000 RU/s).
- * Enter a name for your container under **Container ID**
- * Enter a **Partition key**
- * Select **OK**.
-
- :::image type="content" source="../media/how-to-provision-database-throughput/provision-database-throughput-portal-sql-api.png" alt-text="Screenshot of New Database dialog box":::
-
-## Provision throughput using Azure CLI or PowerShell
-
-To create a database with shared throughput see,
-
-* [Create a database using Azure CLI](manage-with-cli.md#create-a-database-with-shared-throughput)
-* [Create a database using PowerShell](manage-with-powershell.md#create-db-ru)
-
-## Provision throughput using .NET SDK
-
-> [!Note]
-> You can use Azure Cosmos SDKs for SQL API to provision throughput for all APIs. You can optionally use the following example for Cassandra API as well.
-
-# [.NET SDK V2](#tab/dotnetv2)
-
-```csharp
-//set the throughput for the database
-RequestOptions options = new RequestOptions
-{
- OfferThroughput = 500
-};
-
-//create the database
-await client.CreateDatabaseIfNotExistsAsync(
- new Database {Id = databaseName},
- options);
-```
-
-# [.NET SDK V3](#tab/dotnetv3)
-
-[!code-csharp[](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos/tests/Microsoft.Azure.Cosmos.Tests/SampleCodeForDocs/DatabaseDocsSampleCode.cs?name=DatabaseCreateWithThroughput)]
---
-## Next steps
-
-See the following articles to learn about provisioned throughput in Azure Cosmos DB:
-
-* [Globally scale provisioned throughput](../request-units.md)
-* [Provision throughput on containers and databases](../set-throughput.md)
-* [How to provision standard (manual) throughput for a container](how-to-provision-container-throughput.md)
-* [How to provision autoscale throughput for a container](how-to-provision-autoscale-throughput.md)
-* [Request units and throughput in Azure Cosmos DB](../request-units.md)
cosmos-db How To Query Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-query-container.md
- Title: Query containers in Azure Cosmos DB
-description: Learn how to query containers in Azure Cosmos DB using in-partition and cross-partition queries
---- Previously updated : 3/18/2019----
-# Query an Azure Cosmos container
-
-This article explains how to query a container (collection, graph, or table) in Azure Cosmos DB. In particular, it covers how in-partition and cross-partition queries work in Azure Cosmos DB.
-
-## In-partition query
-
-When you query data from containers, if the query has a partition key filter specified, Azure Cosmos DB automatically optimizes the query. It routes the query to the [physical partitions](../partitioning-overview.md#physical-partitions) corresponding to the partition key values specified in the filter.
-
-For example, consider the below query with an equality filter on `DeviceId`. If we run this query on a container partitioned on `DeviceId`, this query will filter to a single physical partition.
-
-```sql
-SELECT * FROM c WHERE c.DeviceId = 'XMS-0001'
-```
-
-As with the earlier example, this query will also filter to a single partition. Adding the additional filter on `Location` does not change this:
-
-```sql
-SELECT * FROM c WHERE c.DeviceId = 'XMS-0001' AND c.Location = 'Seattle'
-```
-
-Here's a query that has a range filter on the partition key and won't be scoped to a single physical partition. In order to be an in-partition query, the query must have an equality filter that includes the partition key:
-
-```sql
-SELECT * FROM c WHERE c.DeviceId > 'XMS-0001'
-```
-
-## Cross-partition query
-
-The following query doesn't have a filter on the partition key (`DeviceId`). Therefore, it must fan out to all physical partitions, and it runs against each partition's index:
-
-```sql
-SELECT * FROM c WHERE c.Location = 'Seattle'
-```
-
-Each physical partition has its own index. Therefore, when you run a cross-partition query on a container, you are effectively running one query *per* physical partition. Azure Cosmos DB will automatically aggregate results across different physical partitions.
-
-The indexes in different physical partitions are independent from one another. There is no global index in Azure Cosmos DB.
-
-## Parallel cross-partition query
-
-The Azure Cosmos DB SDKs 1.9.0 and later support parallel query execution options. Parallel cross-partition queries allow you to perform low latency, cross-partition queries.
-
-You can manage parallel query execution by tuning the following parameters:
-
-- **MaxConcurrency**: Sets the maximum number of simultaneous network connections to the container's partitions. If you set this property to `-1`, the SDK manages the degree of parallelism. If `MaxConcurrency` is set to `0`, there is a single network connection to the container's partitions.
-
-- **MaxBufferedItemCount**: Trades query latency versus client-side memory utilization. If this option is omitted or set to `-1`, the SDK manages the number of items buffered during parallel query execution. A usage sketch follows this list.
-
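-A minimal usage sketch with the .NET SDK v3, assuming an existing `Container` instance named `container` (the query and option values are illustrative only):
-
-```csharp
-// Let the SDK pick the degree of parallelism and the client-side buffer size
-// for a cross-partition query by setting both knobs to -1.
-QueryRequestOptions options = new QueryRequestOptions
-{
-    MaxConcurrency = -1,        // parallel network connections to physical partitions
-    MaxBufferedItemCount = -1   // items buffered client-side during parallel execution
-};
-
-FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
-    new QueryDefinition("SELECT * FROM c WHERE c.Location = 'Seattle'"),
-    requestOptions: options);
-
-while (iterator.HasMoreResults)
-{
-    foreach (var item in await iterator.ReadNextAsync())
-    {
-        // process each result item
-    }
-}
-```
-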
-Because of Azure Cosmos DB's ability to parallelize cross-partition queries, query latency will generally scale well as the system adds [physical partitions](../partitioning-overview.md#physical-partitions). However, the RU charge will increase significantly as the total number of physical partitions increases.
-
-When you run a cross-partition query, you are essentially doing a separate query per individual physical partition. While cross-partition queries will use the index, if available, they are still not nearly as efficient as in-partition queries.
-
-## Useful example
-
-Here's an analogy to better understand cross-partition queries:
-
-Let's imagine you are a delivery driver who has to deliver packages to different apartment complexes. Each apartment complex has a list on the premises with all of the residents' unit numbers. We can compare each apartment complex to a physical partition and each list to the physical partition's index.
-
-We can compare in-partition and cross-partition queries using this example:
-
-### In-partition query
-
-If the delivery driver knows the correct apartment complex (physical partition), then they can immediately drive to the correct building. The driver can check the apartment complex's list of residents' unit numbers (the index) and quickly deliver the appropriate packages. In this case, the driver does not waste any time or effort driving to an apartment complex to check whether any package recipients live there.
-
-### Cross-partition query (fan-out)
-
-If the delivery driver does not know the correct apartment complex (physical partition), they'll need to drive to every single apartment building and check the list with all of the residents' unit numbers (the index). Once they arrive at each apartment complex, they'll still be able to use the list of the addresses of each resident. However, they will need to check every apartment complex's list, whether any package recipients live there or not. This is how cross-partition queries work. While they can use the index (they don't need to knock on every single door), they must separately check the index for every physical partition.
-
-### Cross-partition query (scoped to only a few physical partitions)
-
-If the delivery driver knows that all package recipients live within a certain few apartment complexes, they won't need to drive to every single one. While driving to a few apartment complexes will still require more work than visiting just a single building, the delivery driver still saves significant time and effort. If a query has the partition key in its filter with the `IN` keyword, it will only check the relevant physical partition's indexes for data.
-
-## Avoiding cross-partition queries
-
-For most containers, it's inevitable that you will have some cross-partition queries. Having some cross-partition queries is ok! Nearly all query operations are supported across partitions (both logical partition keys and physical partitions). Azure Cosmos DB also has many optimizations in the query engine and client SDKs to parallelize query execution across physical partitions.
-
-For most read-heavy scenarios, we recommend simply selecting the most common property in your query filters. You should also make sure your partition key adheres to other [partition key selection best practices](../partitioning-overview.md#choose-partitionkey).
-
-Avoiding cross-partition queries typically only matters with large containers. You are charged a minimum of about 2.5 RU's each time you check a physical partition's index for results, even if no items in the physical partition match the query's filter. As such, if you have only one (or just a few) physical partitions, cross-partition queries will not consume significantly more RU's than in-partition queries.
-
-The number of physical partitions is tied to the amount of provisioned RU's. Each physical partition allows for up to 10,000 provisioned RU's and can store up to 50 GB of data. Azure Cosmos DB will automatically manage physical partitions for you. The number of physical partitions in your container is dependent on your provisioned throughput and consumed storage.
-
-You should try to avoid cross-partition queries if your workload meets the criteria below:
-- You plan to have over 30,000 RU's provisioned
-- You plan to store over 100 GB of data
-
-## Next steps
-
-See the following articles to learn about partitioning in Azure Cosmos DB:
-
-- [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
-- [Synthetic partition keys in Azure Cosmos DB](synthetic-partition-keys.md)
cosmos-db How To Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-time-to-live.md
- Title: Configure and manage Time to Live in Azure Cosmos DB
-description: Learn how to configure and manage time to live on a container and an item in Azure Cosmos DB
---- Previously updated : 05/12/2022-----
-# Configure time to live in Azure Cosmos DB
-
-In Azure Cosmos DB, you can choose to configure Time to Live (TTL) at the container level, or you can override it at an item level after setting it for the container. You can configure TTL for a container by using the Azure portal or the language-specific SDKs. Item-level TTL overrides can be configured by using the SDKs.
-
-> This article's content is related to Azure Cosmos DB transactional store TTL. If you are looking for analytical store TTL, which enables NoETL HTAP scenarios through [Azure Synapse Link](../synapse-link.md), see [analytical TTL](../analytical-store-introduction.md#analytical-ttl).
-
-## Enable time to live on a container using the Azure portal
-
-Use the following steps to enable time to live on a container with no expiration. Enabling TTL at the container level allows the same value to be overridden at an individual item's level. You can also set the TTL by entering a non-zero value for seconds.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-2. Create a new Azure Cosmos account or select an existing account.
-
-3. Open the **Data Explorer** pane.
-
-4. Select an existing container, expand the **Settings** tab and modify the following values:
-
- * Under **Setting** find, **Time to Live**.
- * Based on your requirement, you can:
- * Turn **off** this setting
- * Set it to **On (no default)** or
- * Turn **On** with a TTL value specified in seconds.
-
- * Select **Save** to save the changes.
-
- :::image type="content" source="./media/how-to-time-to-live/how-to-time-to-live-portal.png" alt-text="Configure Time to live in Azure portal":::
-
-* When DefaultTimeToLive is null, your Time to Live setting is Off.
-* When DefaultTimeToLive is -1, your Time to Live setting is On (No default).
-* When DefaultTimeToLive has any other integer value (except 0), your Time to Live setting is On. The server automatically deletes items based on the configured value.
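-
-For example, a minimal .NET SDK v3 sketch (assuming an existing `Container` instance named `container`) that reads the container properties and interprets the current setting:
-
-```csharp
-ContainerProperties properties = await container.ReadContainerAsync();
-
-// null      => Time to Live is Off
-// -1        => Time to Live is On (no default): items never expire unless they set their own "ttl"
-// n (n > 0) => Time to Live is On: items expire n seconds after their last modified timestamp (_ts)
-int? defaultTtl = properties.DefaultTimeToLive;
-Console.WriteLine($"DefaultTimeToLive: {(defaultTtl.HasValue ? defaultTtl.Value.ToString() : "null")}");
-```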
-
-## Enable time to live on a container using Azure CLI or Azure PowerShell
-
-To create or enable TTL on a container, see:
-
-* [Create a container with TTL using Azure CLI](manage-with-cli.md#create-a-container-with-ttl)
-* [Create a container with TTL using PowerShell](manage-with-powershell.md#create-container-unique-key-ttl)
-
-## Enable time to live on a container using an SDK
-
-### [.NET SDK v3](#tab/dotnet-sdk-v3)
-
-```csharp
-Database database = client.GetDatabase("database");
-
-ContainerProperties properties = new ()
-{
- Id = "container",
- PartitionKeyPath = "/customerId",
- // Never expire by default
- DefaultTimeToLive = -1
-};
-
-// Create a new container with TTL enabled and without any expiration value
-Container container = await database
- .CreateContainerAsync(properties);
-```
-
-### [Java SDK v4](#tab/javav4)
-
-```java
-CosmosDatabase database = client.getDatabase("database");
-
-CosmosContainerProperties properties = new CosmosContainerProperties(
- "container",
- "/customerId"
-);
-// Never expire by default
-properties.setDefaultTimeToLiveInSeconds(-1);
-
-// Create a new container with TTL enabled and without any expiration value
-CosmosContainerResponse response = database
- .createContainerIfNotExists(properties);
-```
-
-### [Node SDK](#tab/node-sdk)
-
-```javascript
-const database = await client.database("database");
-
-const properties = {
- id: "container",
- partitionKey: "/customerId",
- // Never expire by default
- defaultTtl: -1
-};
-
-const { container } = await database.containers
- .createIfNotExists(properties);
-
-```
-
-### [Python SDK](#tab/python-sdk)
-
-```python
-database = client.get_database_client('database')
-
-database.create_container(
- id='container',
- partition_key=PartitionKey(path='/customerId'),
- # Never expire by default
- default_ttl=-1
-)
-```
---
-## Set time to live on a container using an SDK
-
-To set the time to live on a container, you need to provide a non-zero positive number that indicates the time period in seconds. Based on the configured TTL value, all items in the container are deleted after that many seconds have elapsed since each item's last modified timestamp, `_ts`.
-
-### [.NET SDK v3](#tab/dotnet-sdk-v3)
-
-```csharp
-Database database = client.GetDatabase("database");
-
-ContainerProperties properties = new ()
-{
- Id = "container",
- PartitionKeyPath = "/customerId",
- // Expire all documents after 90 days
- DefaultTimeToLive = 90 * 60 * 60 * 24
-};
-
-// Create a new container with a default TTL that expires items after 90 days
-Container container = await database
- .CreateContainerAsync(properties);
-```
-
-### [Java SDK v4](#tab/javav4)
-
-```java
-CosmosDatabase database = client.getDatabase("database");
-
-CosmosContainerProperties properties = new CosmosContainerProperties(
- "container",
- "/customerId"
-);
-// Expire all documents after 90 days
-properties.setDefaultTimeToLiveInSeconds(90 * 60 * 60 * 24);
-
-CosmosContainerResponse response = database
- .createContainerIfNotExists(properties);
-```
-
-### [Node SDK](#tab/node-sdk)
-
-```javascript
-const database = await client.database("database");
-
-const properties = {
- id: "container",
- partitionKey: "/customerId",
- // Expire all documents after 90 days
- defaultTtl: 90 * 60 * 60 * 24
-};
-
-const { container } = await database.containers
- .createIfNotExists(properties);
-```
-
-### [Python SDK](#tab/python-sdk)
-
-```python
-database = client.get_database_client('database')
-
-database.create_container(
- id='container',
- partition_key=PartitionKey(path='/customerId'),
- # Expire all documents after 90 days
- default_ttl=90 * 60 * 60 * 24
-)
-```
---
-## Set time to live on an item using the Azure portal
-
-In addition to setting a default time to live on a container, you can set a time to live for an individual item. Setting the time to live at the item level overrides the container's default TTL for that item.
-
-* To set the TTL on an item, you need to provide a non-zero positive number, which indicates the period, in seconds, after the item's last modified timestamp (`_ts`) at which the item expires. You can also provide `-1` if the item shouldn't expire.
-
-* If the item doesn't have a TTL field, then by default the TTL set on the container applies to the item.
-
-* If TTL is disabled at the container level, the TTL field on the item will be ignored until TTL is re-enabled on the container.
-
-Use the following steps to enable time to live on an item:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-2. Create a new Azure Cosmos account or select an existing account.
-
-3. Open the **Data Explorer** pane.
-
-4. Select an existing container, expand it and modify the following values:
-
- * Open the **Scale & Settings** window.
-    * Under **Setting**, find **Time to Live**.
- * Select **On (no default)** or select **On** and set a TTL value.
- * Select **Save** to save the changes.
-
-5. Next, navigate to the item for which you want to set the time to live, add the `ttl` property, and select **Update**.
-
- ```json
- {
- "id": "1",
- "_rid": "Jic9ANWdO-EFAAAAAAAAAA==",
- "_self": "dbs/Jic9AA==/colls/Jic9ANWdO-E=/docs/Jic9ANWdO-EFAAAAAAAAAA==/",
- "_etag": "\"0d00b23f-0000-0000-0000-5c7712e80000\"",
- "_attachments": "attachments/",
- "ttl": 10,
- "_ts": 1551307496
- }
- ```
-
-## Set time to live on an item using an SDK
-
-### [.NET SDK v3](#tab/dotnet-sdk-v3)
-
-```csharp
-public record SalesOrder(string id, string customerId, int? ttl);
-```
-
-```csharp
-Container container = database.GetContainer("container");
-
-SalesOrder item = new (
-    "SO05",
-    "CO18009186470",
-    // Expire sales order in 30 days using "ttl" property
-    ttl: 60 * 60 * 24 * 30
-);
-
-await container.CreateItemAsync<SalesOrder>(item);
-```
-
-### [Java SDK v4](#tab/javav4)
-
-```java
-public class SalesOrder {
-
- public String id;
-
- public String customerId;
-
- // Include a property that serializes to "ttl" in JSON
- public Integer ttl;
-
-}
-```
-
-```java
-CosmosContainer container = database.getContainer("container");
-
-SalesOrder item = new SalesOrder();
-item.id = "SO05";
-item.customerId = "CO18009186470";
-// Expire sales order in 30 days using "ttl" property
-item.ttl = 60 * 60 * 24 * 30;
-
-container.createItem(item);
-```
-
-### [Node SDK](#tab/node-sdk)
-
-```javascript
-const container = await database.container("container");
-
-const item = {
- id: 'SO05',
- customerId: 'CO18009186470',
- // Expire sales order in 30 days using "ttl" property
- ttl: 60 * 60 * 24 * 30
-};
-
-await container.items.create(item);
-```
-
-### [Python SDK](#tab/python-sdk)
-
-```python
-container = database.get_container_client('container')
-
-item = {
- 'id': 'SO05',
- 'customerId': 'CO18009186470',
- # Expire sales order in 30 days using "ttl" property
- 'ttl': 60 * 60 * 24 * 30
-}
-
-container.create_item(body=item)
-```
---
-## Reset time to live using an SDK
-
-You can reset the time to live on an item by performing a write or update operation on the item. The write or update operation sets `_ts` to the current time, and the TTL countdown for the item begins again. If you wish to change the TTL of an item, you can update the `ttl` field just as you update any other field.
-
-### [.NET SDK v3](#tab/dotnet-sdk-v3)
-
-```csharp
-SalesOrder item = await container.ReadItemAsync<SalesOrder>(
- "SO05",
- new PartitionKey("CO18009186470")
-);
-
-// Update ttl to 2 hours
-SalesOrder modifiedItem = item with {
- ttl = 60 * 60 * 2
-};
-
-await container.ReplaceItemAsync<SalesOrder>(
- modifiedItem,
- "SO05",
- new PartitionKey("CO18009186470")
-);
-```
-
-### [Java SDK v4](#tab/javav4)
-
-```java
-CosmosItemResponse<SalesOrder> response = container.readItem(
- "SO05",
- new PartitionKey("CO18009186470"),
- SalesOrder.class
-);
-
-SalesOrder item = response.getItem();
-
-// Update ttl to 2 hours
-item.ttl = 60 * 60 * 2;
-
-CosmosItemRequestOptions options = new CosmosItemRequestOptions();
-container.replaceItem(
- item,
- "SO05",
- new PartitionKey("CO18009186470"),
- options
-);
-```
-
-### [Node SDK](#tab/node-sdk)
-
-```javascript
-const { resource: item } = await container.item(
- 'SO05',
- 'CO18009186470'
-).read();
-
-// Update ttl to 2 hours
-item.ttl = 60 * 60 * 2;
-
-await container.item(
- 'SO05',
- 'CO18009186470'
-).replace(item);
-```
-
-### [Python SDK](#tab/python-sdk)
-
-```python
-item = container.read_item(
- item='SO05',
- partition_key='CO18009186470'
-)
-
-# Update ttl to 2 hours
-item['ttl'] = 60 * 60 * 2
-
-container.replace_item(
- item='SO05',
- body=item
-)
-```
---
-## Disable time to live using an SDK
-
-To disable time to live on a container and stop the background process from checking for expired items, delete the `DefaultTimeToLive` property on the container. Deleting this property is different from setting it to -1. When you set it to -1, new items added to the container live forever, but you can override this value on specific items in the container. When you remove the TTL property from the container, items never expire, even if they explicitly overrode the previous default TTL value.
-
-### [.NET SDK v3](#tab/dotnet-sdk-v3)
-
-```csharp
-ContainerProperties properties = await container.ReadContainerAsync();
-
-// Disable ttl at container-level
-properties.DefaultTimeToLive = null;
-
-await container.ReplaceContainerAsync(properties);
-```
-
-### [Java SDK v4](#tab/javav4)
-
-```java
-CosmosContainerResponse response = container.read();
-CosmosContainerProperties properties = response.getProperties();
-
-// Disable ttl at container-level
-properties.setDefaultTimeToLiveInSeconds(null);
-
-container.replace(properties);
-```
-
-### [Node SDK](#tab/node-sdk)
-
-```javascript
-const { resource: definition } = await container.read();
-
-// Disable ttl at container-level
-definition.defaultTtl = null;
-
-await container.replace(definition);
-```
-
-### [Python SDK](#tab/python-sdk)
-
-```python
-database.replace_container(
- container,
- partition_key=PartitionKey(path='/id'),
- # Disable ttl at container-level
- default_ttl=None
-)
-```
---
-## Next steps
-
-Learn more about time to live in the following article:
-
-* [Time to live](time-to-live.md)
cosmos-db How To Use Change Feed Estimator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-use-change-feed-estimator.md
- Title: Use the change feed estimator - Azure Cosmos DB
-description: Learn how to use the change feed estimator to analyze the progress of your change feed processor
---- Previously updated : 04/01/2021----
-# Use the change feed estimator
-
-This article describes how you can monitor the progress of your [change feed processor](./change-feed-processor.md) instances as they read the change feed.
-
-## Why is monitoring progress important?
-
-The change feed processor acts as a pointer that moves forward across your [change feed](../change-feed.md) and delivers the changes to a delegate implementation.
-
-Your change feed processor deployment can process changes at a particular rate based on its available resources like CPU, memory, network, and so on.
-
-If this rate is slower than the rate at which your changes happen in your Azure Cosmos container, your processor will start to lag behind.
-
-Identifying this scenario helps you understand whether you need to scale your change feed processor deployment.
-
-## Implement the change feed estimator
-
-### As a push model for automatic notifications
-
-Like the [change feed processor](./change-feed-processor.md), the change feed estimator can work as a push model. The estimator will measure the difference between the last processed item (defined by the state of the leases container) and the latest change in the container, and push this value to a delegate. The interval at which the measurement is taken can also be customized with a default value of 5 seconds.
-
-As an example, if your change feed processor is defined like this:
-
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartProcessorEstimator)]
-
-The correct way to initialize an estimator to measure that processor is to use `GetChangeFeedEstimatorBuilder`, like so:
-
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartEstimator)]
-
-Both the processor and the estimator must share the same `leaseContainer` and the same name.
-
-The other two parameters are the delegate, which will receive a number that represents **how many changes are pending to be read** by the processor, and the time interval at which you want this measurement to be taken.
-
-An example of a delegate that receives the estimation is:
-
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=EstimationDelegate)]
-
-You can send this estimation to your monitoring solution and use it to understand how your progress is behaving over time.
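-
-Because the referenced sample lives in an external repository, here's a minimal sketch of that wiring with the .NET SDK v3. The container, lease container, and processor name are placeholders and must match the ones used by your change feed processor:
-
-```csharp
-ChangeFeedProcessor estimator = monitoredContainer
-    .GetChangeFeedEstimatorBuilder(
-        "changeFeedEstimator",
-        (long estimatedPendingChanges, CancellationToken cancellationToken) =>
-        {
-            // Push the estimation to your monitoring solution here.
-            Console.WriteLine($"Estimated pending changes: {estimatedPendingChanges}");
-            return Task.CompletedTask;
-        },
-        TimeSpan.FromSeconds(5))
-    .WithLeaseContainer(leaseContainer)
-    .Build();
-
-await estimator.StartAsync();
-```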
-
-### As an on-demand detailed estimation
-
-In contrast with the push model, there's an alternative that lets you obtain the estimation on demand. This model also provides more detailed information:
-
-* The estimated lag per lease.
-* The instance owning and processing each lease, so you can identify if there's an issue on an instance.
-
-If your change feed processor is defined like this:
-
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartProcessorEstimatorDetailed)]
-
-You can create the estimator with the same lease configuration:
-
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartEstimatorDetailed)]
-
-And whenever you want it, with the frequency you require, you can obtain the detailed estimation:
-
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=GetIteratorEstimatorDetailed)]
-
-Each `ChangeFeedProcessorState` contains the lease and lag information, along with the instance that currently owns it.
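-
-As a hedged sketch (placeholder names, not the linked sample), the following .NET SDK v3 code obtains the detailed per-lease state on demand:
-
-```csharp
-ChangeFeedEstimator estimator = monitoredContainer.GetChangeFeedEstimator(
-    "changeFeedEstimator",
-    leaseContainer);
-
-FeedIterator<ChangeFeedProcessorState> iterator = estimator.GetCurrentStateIterator();
-while (iterator.HasMoreResults)
-{
-    FeedResponse<ChangeFeedProcessorState> states = await iterator.ReadNextAsync();
-    foreach (ChangeFeedProcessorState state in states)
-    {
-        // EstimatedLag is the number of changes pending to be read for this lease.
-        Console.WriteLine($"Lease {state.LeaseToken} owned by {state.InstanceName}: {state.EstimatedLag} changes pending");
-    }
-}
-```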
-
-> [!NOTE]
-> The change feed estimator does not need to be deployed as part of your change feed processor, nor be part of the same project. It can be independent and run in a completely different instance, which is recommended. It just needs to use the same name and lease configuration.
-
-## Additional resources
-
-* [Azure Cosmos DB SDK](sql-api-sdk-dotnet.md)
-* [Usage samples on GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed)
-* [Additional samples on GitHub](https://github.com/Azure-Samples/cosmos-dotnet-change-feed-processor)
-
-## Next steps
-
-You can now proceed to learn more about change feed processor in the following articles:
-
-* [Overview of change feed processor](change-feed-processor.md)
-* [Change feed processor start time](./change-feed-processor.md#starting-time)
cosmos-db How To Use Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-use-stored-procedures-triggers-udfs.md
- Title: Register and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB SDKs
-description: Learn how to register and call stored procedures, triggers, and user-defined functions using the Azure Cosmos DB SDKs
---- Previously updated : 11/03/2021-----
-# How to register and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB
-
-The SQL API in Azure Cosmos DB supports registering and invoking stored procedures, triggers, and user-defined functions (UDFs) written in JavaScript. Once you've defined one or more stored procedures, triggers, and user-defined functions, you can load and view them in the [Azure portal](https://portal.azure.com/) by using Data Explorer.
-
-You can use the SQL API SDK across multiple platforms including [.NET v2 (legacy)](sql-api-sdk-dotnet.md), [.NET v3](sql-api-sdk-dotnet-standard.md), [Java](sql-api-sdk-java.md), [JavaScript](sql-api-sdk-node.md), or [Python](sql-api-sdk-python.md) SDKs to perform these tasks. If you haven't worked with one of these SDKs before, see the *"Quickstart"* article for the appropriate SDK:
-
-| SDK | Getting started |
-| :--- | :--- |
-| .NET v3 | [Quickstart: Build a .NET console app to manage Azure Cosmos DB SQL API resources](create-sql-api-dotnet.md) |
-| Java | [Quickstart: Build a Java app to manage Azure Cosmos DB SQL API data](create-sql-api-java.md) |
-| JavaScript | [Quickstart: Use Node.js to connect and query data from Azure Cosmos DB SQL API account](create-sql-api-nodejs.md) |
-| Python | [Quickstart: Build a Python application using an Azure Cosmos DB SQL API account](create-sql-api-python.md) |
-
-## How to run stored procedures
-
-Stored procedures are written using JavaScript. They can create, update, read, query, and delete items within an Azure Cosmos container. For more information on how to write stored procedures in Azure Cosmos DB, see [How to write stored procedures in Azure Cosmos DB](how-to-write-stored-procedures-triggers-udfs.md#stored-procedures) article.
-
-The following examples show how to register and call a stored procedure by using the Azure Cosmos DB SDKs. Refer to [Create an item using stored procedure](how-to-write-stored-procedures-triggers-udfs.md#create-an-item) for the source of this stored procedure, which is saved as `spCreateToDoItems.js`.
-
-> [!NOTE]
-> For partitioned containers, when executing a stored procedure, a partition key value must be provided in the request options. Stored procedures are always scoped to a partition key. Items that have a different partition key value will not be visible to the stored procedure. This also applies to triggers.
-
-### [.NET SDK v2](#tab/dotnet-sdk-v2)
-
-The following example shows how to register a stored procedure by using the .NET SDK v2:
-
-```csharp
-string storedProcedureId = "spCreateToDoItems";
-StoredProcedure newStoredProcedure = new StoredProcedure
- {
- Id = storedProcedureId,
- Body = File.ReadAllText($@"..\js\{storedProcedureId}.js")
- };
-Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
-var response = await client.CreateStoredProcedureAsync(containerUri, newStoredProcedure);
-StoredProcedure createdStoredProcedure = response.Resource;
-```
-
-The following code shows how to call a stored procedure by using the .NET SDK v2:
-
-```csharp
-dynamic[] newItems = new dynamic[]
-{
- new {
- category = "Personal",
- name = "Groceries",
- description = "Pick up strawberries",
- isComplete = false
- },
- new {
- category = "Personal",
- name = "Doctor",
- description = "Make appointment for check up",
- isComplete = false
- }
-};
-
-Uri uri = UriFactory.CreateStoredProcedureUri("myDatabase", "myContainer", "spCreateToDoItems");
-RequestOptions options = new RequestOptions { PartitionKey = new PartitionKey("Personal") };
-var result = await client.ExecuteStoredProcedureAsync<string>(uri, options, new[] { newItems });
-```
-
-### [.NET SDK v3](#tab/dotnet-sdk-v3)
-
-The following example shows how to register a stored procedure by using the .NET SDK v3:
-
-```csharp
-string storedProcedureId = "spCreateToDoItems";
-StoredProcedureResponse storedProcedureResponse = await client.GetContainer("myDatabase", "myContainer").Scripts.CreateStoredProcedureAsync(new StoredProcedureProperties
-{
- Id = storedProcedureId,
- Body = File.ReadAllText($@"..\js\{storedProcedureId}.js")
-});
-```
-
-The following code shows how to call a stored procedure by using the .NET SDK v3:
-
-```csharp
-dynamic[] newItems = new dynamic[]
-{
- new {
- category = "Personal",
- name = "Groceries",
- description = "Pick up strawberries",
- isComplete = false
- },
- new {
- category = "Personal",
- name = "Doctor",
- description = "Make appointment for check up",
- isComplete = false
- }
-};
-
-var result = await client.GetContainer("myDatabase", "myContainer").Scripts.ExecuteStoredProcedureAsync<string>("spCreateToDoItems", new PartitionKey("Personal"), new[] { newItems });
-```
-
-### [Java SDK](#tab/java-sdk)
-
-The following example shows how to register a stored procedure by using the Java SDK:
-
-```java
-CosmosStoredProcedureProperties definition = new CosmosStoredProcedureProperties(
- "spCreateToDoItems",
- Files.readString(Paths.get("createToDoItems.js"))
-);
-
-CosmosStoredProcedureResponse response = container
- .getScripts()
- .createStoredProcedure(definition);
-```
-
-The following code shows how to call a stored procedure by using the Java SDK:
-
-```java
-CosmosStoredProcedure sproc = container
- .getScripts()
- .getStoredProcedure("spCreateToDoItems");
-
-List<Object> items = new ArrayList<Object>();
-
-ToDoItem firstItem = new ToDoItem();
-firstItem.category = "Personal";
-firstItem.name = "Groceries";
-firstItem.description = "Pick up strawberries";
-firstItem.isComplete = false;
-items.add(firstItem);
-
-ToDoItem secondItem = new ToDoItem();
-secondItem.category = "Personal";
-secondItem.name = "Doctor";
-secondItem.description = "Make appointment for check up";
-secondItem.isComplete = true;
-items.add(secondItem);
-
-CosmosStoredProcedureRequestOptions options = new CosmosStoredProcedureRequestOptions();
-options.setPartitionKey(
- new PartitionKey("Personal")
-);
-
-CosmosStoredProcedureResponse response = sproc.execute(
- items,
- options
-);
-```
-
-### [JavaScript SDK](#tab/javascript-sdk)
-
-The following example shows how to register a stored procedure by using the JavaScript SDK
-
-```javascript
-const container = client.database("myDatabase").container("myContainer");
-const sprocId = "spCreateToDoItems";
-await container.scripts.storedProcedures.create({
- id: sprocId,
- body: require(`../js/${sprocId}`)
-});
-```
-
-The following code shows how to call a stored procedure by using the JavaScript SDK:
-
-```javascript
-const newItem = [{
- category: "Personal",
- name: "Groceries",
- description: "Pick up strawberries",
- isComplete: false
-}];
-const container = client.database("myDatabase").container("myContainer");
-const sprocId = "spCreateToDoItems";
-const {resource: result} = await container.scripts.storedProcedure(sprocId).execute(newItem, {partitionKey: newItem[0].category});
-```
-
-### [Python SDK](#tab/python-sdk)
-
-The following example shows how to register a stored procedure by using the Python SDK:
-
-```python
-import azure.cosmos.cosmos_client as cosmos_client
-
-url = "your_cosmos_db_account_URI"
-key = "your_cosmos_db_account_key"
-database_name = 'your_cosmos_db_database_name'
-container_name = 'your_cosmos_db_container_name'
-
-with open('../js/spCreateToDoItems.js') as file:
- file_contents = file.read()
-
-sproc = {
- 'id': 'spCreateToDoItem',
- 'serverScript': file_contents,
-}
-client = cosmos_client.CosmosClient(url, key)
-database = client.get_database_client(database_name)
-container = database.get_container_client(container_name)
-created_sproc = container.scripts.create_stored_procedure(body=sproc)
-```
-
-The following code shows how to call a stored procedure by using the Python SDK:
-
-```python
-import uuid
-
-new_id= str(uuid.uuid4())
-
-# Creating a document for a container with "id" as a partition key.
-
-new_item = {
- "id": new_id,
- "category":"Personal",
- "name":"Groceries",
- "description":"Pick up strawberries",
- "isComplete":False
- }
-result = container.scripts.execute_stored_procedure(sproc=created_sproc,params=[[new_item]], partition_key=new_id)
-```
---
-## How to run pre-triggers
-
-The following examples show how to register and call a pre-trigger by using the Azure Cosmos DB SDKs. Refer to the [Pre-trigger example](how-to-write-stored-procedures-triggers-udfs.md#pre-triggers) as the source for this pre-trigger is saved as `trgPreValidateToDoItemTimestamp.js`.
-
-Pre-triggers are passed in the RequestOptions object when executing an operation, by specifying `PreTriggerInclude` and then passing the name of the trigger in a List object.
-
-> [!NOTE]
-> Even though the name of the trigger is passed as a List, you can still execute only one trigger per operation.
-
-### [.NET SDK v2](#tab/dotnet-sdk-v2)
-
-The following code shows how to register a pre-trigger using the .NET SDK v2:
-
-```csharp
-string triggerId = "trgPreValidateToDoItemTimestamp";
-Trigger trigger = new Trigger
-{
- Id = triggerId,
- Body = File.ReadAllText($@"..\js\{triggerId}.js"),
- TriggerOperation = TriggerOperation.Create,
- TriggerType = TriggerType.Pre
-};
-Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
-await client.CreateTriggerAsync(containerUri, trigger);
-```
-
-The following code shows how to call a pre-trigger using the .NET SDK v2:
-
-```csharp
-dynamic newItem = new
-{
- category = "Personal",
- name = "Groceries",
- description = "Pick up strawberries",
- isComplete = false
-};
-
-Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
-RequestOptions requestOptions = new RequestOptions { PreTriggerInclude = new List<string> { "trgPreValidateToDoItemTimestamp" } };
-await client.CreateDocumentAsync(containerUri, newItem, requestOptions);
-```
-
-### [.NET SDK v3](#tab/dotnet-sdk-v3)
-
-The following code shows how to register a pre-trigger using the .NET SDK v3:
-
-```csharp
-await client.GetContainer("database", "container").Scripts.CreateTriggerAsync(new TriggerProperties
-{
-    Id = "trgPreValidateToDoItemTimestamp",
-    Body = File.ReadAllText(@"..\js\trgPreValidateToDoItemTimestamp.js"),
-    TriggerOperation = TriggerOperation.Create,
-    TriggerType = TriggerType.Pre
-});
-```
-
-The following code shows how to call a pre-trigger using the .NET SDK v3:
-
-```csharp
-dynamic newItem = new
-{
- category = "Personal",
- name = "Groceries",
- description = "Pick up strawberries",
- isComplete = false
-};
-
-await client.GetContainer("database", "container").CreateItemAsync(newItem, null, new ItemRequestOptions { PreTriggers = new List<string> { "trgPreValidateToDoItemTimestamp" } });
-```
-
-### [Java SDK](#tab/java-sdk)
-
-The following code shows how to register a pre-trigger using the Java SDK:
-
-```java
-CosmosTriggerProperties definition = new CosmosTriggerProperties(
- "preValidateToDoItemTimestamp",
- Files.readString(Paths.get("validateToDoItemTimestamp.js"))
-);
-definition.setTriggerOperation(TriggerOperation.CREATE);
-definition.setTriggerType(TriggerType.PRE);
-
-CosmosTriggerResponse response = container
- .getScripts()
- .createTrigger(definition);
-```
-
-The following code shows how to call a pre-trigger using the Java SDK:
-
-```java
-ToDoItem item = new ToDoItem();
-item.category = "Personal";
-item.name = "Groceries";
-item.description = "Pick up strawberries";
-item.isComplete = false;
-
-CosmosItemRequestOptions options = new CosmosItemRequestOptions();
-options.setPreTriggerInclude(
- Arrays.asList("preValidateToDoItemTimestamp")
-);
-
-CosmosItemResponse<ToDoItem> response = container.createItem(item, options);
-```
-
-### [JavaScript SDK](#tab/javascript-sdk)
-
-The following code shows how to register a pre-trigger using the JavaScript SDK:
-
-```javascript
-const container = client.database("myDatabase").container("myContainer");
-const triggerId = "trgPreValidateToDoItemTimestamp";
-await container.scripts.triggers.create({
- id: triggerId,
- body: require(`../js/${triggerId}`),
- triggerOperation: "create",
- triggerType: "pre"
-});
-```
-
-The following code shows how to call a pre-trigger using the JavaScript SDK:
-
-```javascript
-const container = client.database("myDatabase").container("myContainer");
-const triggerId = "trgPreValidateToDoItemTimestamp";
-await container.items.create({
- category: "Personal",
- name: "Groceries",
- description: "Pick up strawberries",
- isComplete: false
-}, {preTriggerInclude: [triggerId]});
-```
-
-### [Python SDK](#tab/python-sdk)
-
-The following code shows how to register a pre-trigger using the Python SDK:
-
-```python
-import azure.cosmos.cosmos_client as cosmos_client
-from azure.cosmos import documents
-
-url = "your_cosmos_db_account_URI"
-key = "your_cosmos_db_account_key"
-database_name = 'your_cosmos_db_database_name'
-container_name = 'your_cosmos_db_container_name'
-
-with open('../js/trgPreValidateToDoItemTimestamp.js') as file:
- file_contents = file.read()
-
-trigger_definition = {
- 'id': 'trgPreValidateToDoItemTimestamp',
- 'serverScript': file_contents,
- 'triggerType': documents.TriggerType.Pre,
- 'triggerOperation': documents.TriggerOperation.All
-}
-client = cosmos_client.CosmosClient(url, key)
-database = client.get_database_client(database_name)
-container = database.get_container_client(container_name)
-trigger = container.scripts.create_trigger(trigger_definition)
-```
-
-The following code shows how to call a pre-trigger using the Python SDK:
-
-```python
-item = {'category': 'Personal', 'name': 'Groceries',
- 'description': 'Pick up strawberries', 'isComplete': False}
-container.create_item(item, pre_trigger_include='trgPreValidateToDoItemTimestamp')
-```
---
-## How to run post-triggers
-
-The following examples show how to register a post-trigger by using the Azure Cosmos DB SDKs. Refer to the [Post-trigger example](how-to-write-stored-procedures-triggers-udfs.md#post-triggers) for the source of this post-trigger, which is saved as `trgPostUpdateMetadata.js`.
-
-### [.NET SDK v2](#tab/dotnet-sdk-v2)
-
-The following code shows how to register a post-trigger using the .NET SDK v2:
-
-```csharp
-string triggerId = "trgPostUpdateMetadata";
-Trigger trigger = new Trigger
-{
- Id = triggerId,
- Body = File.ReadAllText($@"..\js\{triggerId}.js"),
- TriggerOperation = TriggerOperation.Create,
- TriggerType = TriggerType.Post
-};
-Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
-await client.CreateTriggerAsync(containerUri, trigger);
-```
-
-The following code shows how to call a post-trigger using the .NET SDK v2:
-
-```csharp
-var newItem = new
-{
-    name = "artist_profile_1023",
-    artist = "The Band",
-    albums = new[] { "Hellujah", "Rotators", "Spinning Top" }
-};
-
-RequestOptions options = new RequestOptions { PostTriggerInclude = new List<string> { "trgPostUpdateMetadata" } };
-Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
-await client.CreateDocumentAsync(containerUri, newItem, options);
-```
-
-### [.NET SDK v3](#tab/dotnet-sdk-v3)
-
-The following code shows how to register a post-trigger using the .NET SDK v3:
-
-```csharp
-await client.GetContainer("database", "container").Scripts.CreateTriggerAsync(new TriggerProperties
-{
- Id = "trgPostUpdateMetadata",
- Body = File.ReadAllText(@"..\js\trgPostUpdateMetadata.js"),
- TriggerOperation = TriggerOperation.Create,
- TriggerType = TriggerType.Post
-});
-```
-
-The following code shows how to call a post-trigger using the .NET SDK v3:
-
-```csharp
-var newItem = new
-{
-    name = "artist_profile_1023",
-    artist = "The Band",
-    albums = new[] { "Hellujah", "Rotators", "Spinning Top" }
-};
-
-await client.GetContainer("database", "container").CreateItemAsync(newItem, null, new ItemRequestOptions { PostTriggers = new List<string> { "trgPostUpdateMetadata" } });
-```
-
-### [Java SDK](#tab/java-sdk)
-
-The following code shows how to register a post-trigger using the Java SDK:
-
-```java
-CosmosTriggerProperties definition = new CosmosTriggerProperties(
- "postUpdateMetadata",
- Files.readString(Paths.get("updateMetadata.js"))
-);
-definition.setTriggerOperation(TriggerOperation.CREATE);
-definition.setTriggerType(TriggerType.POST);
-
-CosmosTriggerResponse response = container
- .getScripts()
- .createTrigger(definition);
-```
-
-The following code shows how to call a post-trigger using the Java SDK:
-
-```java
-ToDoItem item = new ToDoItem();
-item.category = "Personal";
-item.name = "Doctor";
-item.description = "Make appointment for check up";
-item.isComplete = true;
-
-CosmosItemRequestOptions options = new CosmosItemRequestOptions();
-options.setPostTriggerInclude(
- Arrays.asList("postUpdateMetadata")
-);
-
-CosmosItemResponse<ToDoItem> response = container.createItem(item, options);
-```
-
-### [JavaScript SDK](#tab/javascript-sdk)
-
-The following code shows how to register a post-trigger using the JavaScript SDK:
-
-```javascript
-const container = client.database("myDatabase").container("myContainer");
-const triggerId = "trgPostUpdateMetadata";
-await container.scripts.triggers.create({
- id: triggerId,
- body: require(`../js/${triggerId}`),
- triggerOperation: "create",
- triggerType: "post"
-});
-```
-
-The following code shows how to call a post-trigger using the JavaScript SDK:
-
-```javascript
-const item = {
- name: "artist_profile_1023",
- artist: "The Band",
- albums: ["Hellujah", "Rotators", "Spinning Top"]
-};
-const container = client.database("myDatabase").container("myContainer");
-const triggerId = "trgPostUpdateMetadata";
-await container.items.create(item, {postTriggerInclude: [triggerId]});
-```
-
-### [Python SDK](#tab/python-sdk)
-
-The following code shows how to register a post-trigger using the Python SDK:
-
-```python
-import azure.cosmos.cosmos_client as cosmos_client
-from azure.cosmos import documents
-
-url = "your_cosmos_db_account_URI"
-key = "your_cosmos_db_account_key"
-database_name = 'your_cosmos_db_database_name'
-container_name = 'your_cosmos_db_container_name'
-
-with open('../js/trgPostValidateToDoItemTimestamp.js') as file:
- file_contents = file.read()
-
-trigger_definition = {
- 'id': 'trgPostValidateToDoItemTimestamp',
- 'serverScript': file_contents,
- 'triggerType': documents.TriggerType.Post,
- 'triggerOperation': documents.TriggerOperation.All
-}
-client = cosmos_client.CosmosClient(url, key)
-database = client.get_database_client(database_name)
-container = database.get_container_client(container_name)
-trigger = container.scripts.create_trigger(trigger_definition)
-```
-
-The following code shows how to call a post-trigger using the Python SDK:
-
-```python
-item = {'category': 'Personal', 'name': 'Groceries',
- 'description': 'Pick up strawberries', 'isComplete': False}
-container.create_item(item, post_trigger_include='trgPostValidateToDoItemTimestamp')
-```
---
-## How to work with user-defined functions
-
-The following examples show how to register a user-defined function by using the Azure Cosmos DB SDKs. Refer to this [User-defined function example](how-to-write-stored-procedures-triggers-udfs.md#udfs) for the source of this user-defined function, which is saved as `udfTax.js`.
-
-### [.NET SDK v2](#tab/dotnet-sdk-v2)
-
-The following code shows how to register a user-defined function using the .NET SDK v2:
-
-```csharp
-string udfId = "Tax";
-var udfTax = new UserDefinedFunction
-{
- Id = udfId,
- Body = File.ReadAllText($@"..\js\{udfId}.js")
-};
-
-Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
-await client.CreateUserDefinedFunctionAsync(containerUri, udfTax);
-
-```
-
-The following code shows how to call a user-defined function using the .NET SDK v2:
-
-```csharp
-Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
-var results = client.CreateDocumentQuery<dynamic>(containerUri, "SELECT * FROM Incomes t WHERE udf.Tax(t.income) > 20000");
-
-foreach (var result in results)
-{
- //iterate over results
-}
-```
-
-### [.NET SDK v3](#tab/dotnet-sdk-v3)
-
-The following code shows how to register a user-defined function using the .NET SDK v3:
-
-```csharp
-await client.GetContainer("database", "container").Scripts.CreateUserDefinedFunctionAsync(new UserDefinedFunctionProperties
-{
- Id = "Tax",
- Body = File.ReadAllText(@"..\js\Tax.js")
-});
-```
-
-The following code shows how to call a user-defined function using the .NET SDK v3:
-
-```csharp
-var iterator = client.GetContainer("database", "container").GetItemQueryIterator<dynamic>("SELECT * FROM Incomes t WHERE udf.Tax(t.income) > 20000");
-while (iterator.HasMoreResults)
-{
- var results = await iterator.ReadNextAsync();
- foreach (var result in results)
- {
- //iterate over results
- }
-}
-```
-
-### [Java SDK](#tab/java-sdk)
-
-The following code shows how to register a user-defined function using the Java SDK:
-
-```java
-CosmosUserDefinedFunctionProperties definition = new CosmosUserDefinedFunctionProperties(
- "udfTax",
- Files.readString(Paths.get("tax.js"))
-);
-
-CosmosUserDefinedFunctionResponse response = container
- .getScripts()
- .createUserDefinedFunction(definition);
-```
-
-The following code shows how to call a user-defined function using the Java SDK:
-
-```java
-CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
-
-CosmosPagedIterable<ToDoItem> iterable = container.queryItems(
- "SELECT t.cost, udf.udfTax(t.cost) AS costWithTax FROM t",
- options,
- ToDoItem.class);
-```
-
-### [JavaScript SDK](#tab/javascript-sdk)
-
-The following code shows how to register a user-defined function using the JavaScript SDK:
-
-```javascript
-const container = client.database("myDatabase").container("myContainer");
-const udfId = "Tax";
-await container.scripts.userDefinedFunctions.create({
-    id: udfId,
-    body: require(`../js/${udfId}`)
-});
-```
-
-The following code shows how to call a user-defined function using the JavaScript SDK:
-
-```javascript
-const container = client.database("myDatabase").container("myContainer");
-const sql = "SELECT * FROM Incomes t WHERE udf.Tax(t.income) > 20000";
-const { resources: results } = await container.items.query(sql).fetchAll();
-```
-
-### [Python SDK](#tab/python-sdk)
-
-The following code shows how to register a user-defined function using the Python SDK:
-
-```python
-import azure.cosmos.cosmos_client as cosmos_client
-
-url = "your_cosmos_db_account_URI"
-key = "your_cosmos_db_account_key"
-database_name = 'your_cosmos_db_database_name'
-container_name = 'your_cosmos_db_container_name'
-
-with open('../js/udfTax.js') as file:
- file_contents = file.read()
-udf_definition = {
- 'id': 'Tax',
- 'serverScript': file_contents,
-}
-client = cosmos_client.CosmosClient(url, key)
-database = client.get_database_client(database_name)
-container = database.get_container_client(container_name)
-udf = container.scripts.create_user_defined_function(udf_definition)
-```
-
-The following code shows how to call a user-defined function using the Python SDK:
-
-```python
-results = list(container.query_items(
-    query='SELECT * FROM Incomes t WHERE udf.Tax(t.income) > 20000',
-    enable_cross_partition_query=True))
-```
---
-## Next steps
-
-Learn more concepts and how-to write or use stored procedures, triggers, and user-defined functions in Azure Cosmos DB:
-
-- [Working with Azure Cosmos DB stored procedures, triggers, and user-defined functions in Azure Cosmos DB](stored-procedures-triggers-udfs.md)
-- [Working with JavaScript language integrated query API in Azure Cosmos DB](javascript-query-api.md)
-- [How to write stored procedures, triggers, and user-defined functions in Azure Cosmos DB](how-to-write-stored-procedures-triggers-udfs.md)
-- [How to write stored procedures and triggers using JavaScript Query API in Azure Cosmos DB](how-to-write-javascript-query-api.md)
cosmos-db How To Write Javascript Query Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-write-javascript-query-api.md
- Title: Write stored procedures and triggers using the JavaScript query API in Azure Cosmos DB
-description: Learn how to write stored procedures and triggers using the JavaScript Query API in Azure Cosmos DB
---- Previously updated : 05/07/2020-----
-# How to write stored procedures and triggers in Azure Cosmos DB by using the JavaScript query API
-
-Azure Cosmos DB allows you to perform optimized queries in stored procedures and triggers by using a fluent JavaScript interface, without any knowledge of the SQL language. To learn more about JavaScript query API support in Azure Cosmos DB, see the [Working with JavaScript language integrated query API in Azure Cosmos DB](javascript-query-api.md) article.
-
-## <a id="stored-procedures"></a>Stored procedure using the JavaScript query API
-
-The following code sample is an example of how the JavaScript query API is used in the context of a stored procedure. The stored procedure inserts an Azure Cosmos item that is specified by an input parameter, and updates a metadata document by using the `__.filter()` method, with minSize, maxSize, and totalSize based upon the input item's size property.
-
-> [!NOTE]
-> `__` (double-underscore) is an alias to `getContext().getCollection()` when using the JavaScript query API.
-
-```javascript
-/**
- * Insert an item and update metadata doc: minSize, maxSize, totalSize based on item.size.
- */
-function insertDocumentAndUpdateMetadata(item) {
- // HTTP error codes sent to our callback function by CosmosDB server.
- var ErrorCode = {
- RETRY_WITH: 449,
- }
-
- var isAccepted = __.createDocument(__.getSelfLink(), item, {}, function(err, item, options) {
- if (err) throw err;
-
- // Check the item (ignore items with invalid/zero size and metadata itself) and call updateMetadata.
- if (!item.isMetadata && item.size > 0) {
- // Get the metadata. We keep it in the same container. it's the only item that has .isMetadata = true.
- var result = __.filter(function(x) {
- return x.isMetadata === true
- }, function(err, feed, options) {
- if (err) throw err;
-
- // We assume that metadata item was pre-created and must exist when this script is called.
- if (!feed || !feed.length) throw new Error("Failed to find the metadata item.");
-
- // The metadata item.
- var metaItem = feed[0];
-
- // Update metaDoc.minSize:
- // for 1st document use doc.Size, for all the rest see if it's less than last min.
- if (metaItem.minSize == 0) metaItem.minSize = item.size;
- else metaItem.minSize = Math.min(metaItem.minSize, item.size);
-
- // Update metaItem.maxSize.
- metaItem.maxSize = Math.max(metaItem.maxSize, item.size);
-
- // Update metaItem.totalSize.
- metaItem.totalSize += item.size;
-
- // Update/replace the metadata item in the store.
- var isAccepted = __.replaceDocument(metaItem._self, metaItem, function(err) {
- if (err) throw err;
- // Note: in case concurrent updates causes conflict with ErrorCode.RETRY_WITH, we can't read the meta again
- // and update again because due to Snapshot isolation we will read same exact version (we are in same transaction).
- // We have to take care of that on the client side.
- });
- if (!isAccepted) throw new Error("replaceDocument(metaItem) returned false.");
- });
- if (!result.isAccepted) throw new Error("filter for metaItem returned false.");
- }
- });
- if (!isAccepted) throw new Error("createDocument(actual item) returned false.");
-}
-```
-
-## Next steps
-
-See the following articles to learn about stored procedures, triggers, and user-defined functions in Azure Cosmos DB:
-
-* [How to work with stored procedures, triggers, user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md)
-
-* [How to register and use stored procedures in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-stored-procedures)
-
-* How to register and use [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-post-triggers) in Azure Cosmos DB
-
-* [How to register and use user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-work-with-user-defined-functions)
-
-* [Synthetic partition keys in Azure Cosmos DB](synthetic-partition-keys.md)
cosmos-db How To Write Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-write-stored-procedures-triggers-udfs.md
- Title: Write stored procedures, triggers, and UDFs in Azure Cosmos DB
-description: Learn how to define stored procedures, triggers, and user-defined functions in Azure Cosmos DB
---- Previously updated : 10/05/2021-----
-# How to write stored procedures, triggers, and user-defined functions in Azure Cosmos DB
-
-Azure Cosmos DB provides language-integrated, transactional execution of JavaScript that lets you write **stored procedures**, **triggers**, and **user-defined functions (UDFs)**. When using the SQL API in Azure Cosmos DB, you can define stored procedures, triggers, and UDFs in the JavaScript language. You can write your logic in JavaScript and execute it inside the database engine. You can create and execute triggers, stored procedures, and UDFs by using the [Azure portal](https://portal.azure.com/), the [JavaScript language integrated query API in Azure Cosmos DB](javascript-query-api.md), and the [Cosmos DB SQL API client SDKs](sql-api-dotnet-samples.md).
-
-To call a stored procedure, trigger, or user-defined function, you need to register it. For more information, see [How to work with stored procedures, triggers, user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md).
-
-> [!NOTE]
-> For partitioned containers, when executing a stored procedure, a partition key value must be provided in the request options. Stored procedures are always scoped to a partition key. Items that have a different partition key value will not be visible to the stored procedure. This also applies to triggers.
-
-> [!TIP]
-> Azure Cosmos DB supports deploying containers with stored procedures, triggers, and user-defined functions. For more information, see [Create an Azure Cosmos DB container with server-side functionality](./manage-with-templates.md#create-sproc).
-
-## <a id="stored-procedures"></a>How to write stored procedures
-
-Stored procedures are written using JavaScript. They can create, update, read, query, and delete items inside an Azure Cosmos container. Stored procedures are registered per collection and can operate on any document or attachment present in that collection.
-
-Here is a simple stored procedure that returns a "Hello World" response.
-
-```javascript
-var helloWorldStoredProc = {
- id: "helloWorld",
- serverScript: function () {
- var context = getContext();
- var response = context.getResponse();
-
- response.setBody("Hello, World");
- }
-}
-```
-
-The context object provides access to all operations that can be performed in Azure Cosmos DB, as well as access to the request and response objects. In this case, you use the response object to set the body of the response to be sent back to the client.
-
-Once written, the stored procedure must be registered with a collection. To learn more, see the [How to use stored procedures in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-stored-procedures) article.
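-
-As a minimal sketch (assuming the .NET SDK v3, that the function body above is saved to a hypothetical *helloWorld.js* file, and an arbitrary partition key value), registration and execution could look like this:
-
-```csharp
-Scripts scripts = container.Scripts;
-
-// Register the stored procedure with the container.
-StoredProcedureResponse createResponse = await scripts.CreateStoredProcedureAsync(
-    new StoredProcedureProperties("helloWorld", File.ReadAllText(@"..\js\helloWorld.js")));
-
-// Execute it; stored procedures always run scoped to a partition key value.
-StoredProcedureExecuteResponse<string> executeResponse = await scripts.ExecuteStoredProcedureAsync<string>(
-    "helloWorld",
-    new PartitionKey("Personal"),
-    null);
-
-Console.WriteLine(executeResponse.Resource); // "Hello, World"
-```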
-
-### <a id="create-an-item"></a>Create an item using stored procedure
-
-When you create an item by using a stored procedure, the item is inserted into the Azure Cosmos container and an ID for the newly created item is returned. Creating an item is an asynchronous operation and depends on the JavaScript callback functions. The callback function has two parameters: one for the error object in case the operation fails, and another for a return value, in this case the created object. Inside the callback, you can either handle the exception or throw an error. If a callback is not provided and there is an error, the Azure Cosmos DB runtime throws an error.
-
-The stored procedure also includes a parameter to set the description; it's a boolean value. When the parameter is set to true and the description is missing, the stored procedure throws an exception. Otherwise, the rest of the stored procedure continues to run.
-
-The following example stored procedure takes an array of new Azure Cosmos items as input, inserts them into the Azure Cosmos container, and returns the count of the items inserted. In this example, we use the ToDoList sample from the [Quickstart .NET SQL API](create-sql-api-dotnet.md).
-
-```javascript
-function createToDoItems(items) {
- var collection = getContext().getCollection();
- var collectionLink = collection.getSelfLink();
- var count = 0;
-
- if (!items) throw new Error("The array is undefined or null.");
-
- var numItems = items.length;
-
- if (numItems == 0) {
- getContext().getResponse().setBody(0);
- return;
- }
-
- tryCreate(items[count], callback);
-
- function tryCreate(item, callback) {
- var options = { disableAutomaticIdGeneration: false };
-
- var isAccepted = collection.createDocument(collectionLink, item, options, callback);
-
- if (!isAccepted) getContext().getResponse().setBody(count);
- }
-
- function callback(err, item, options) {
- if (err) throw err;
- count++;
- if (count >= numItems) {
- getContext().getResponse().setBody(count);
- } else {
- tryCreate(items[count], callback);
- }
- }
-}
-```
-
-### Arrays as input parameters for stored procedures
-
-When you define a stored procedure in the Azure portal, input parameters are always sent as a string to the stored procedure. Even if you pass an array of strings as an input, the array is converted to a string and sent to the stored procedure. To work around this, you can define a function within your stored procedure to parse the string as an array. The following code shows how to parse a string input parameter as an array:
-
-```javascript
-function sample(arr) {
- if (typeof arr === "string") arr = JSON.parse(arr);
-
- arr.forEach(function(a) {
- // do something here
- console.log(a);
- });
-}
-```
-
-### <a id="transactions"></a>Transactions within stored procedures
-
-You can implement transactions on items within a container by using a stored procedure. The following example uses transactions within a fantasy football gaming app to trade players between two teams in a single operation. The stored procedure attempts to read the two Azure Cosmos items each corresponding to the player IDs passed in as an argument. If both players are found, then the stored procedure updates the items by swapping their teams. If any errors are encountered along the way, the stored procedure throws a JavaScript exception that implicitly aborts the transaction.
-
-```javascript
-// JavaScript source code
-function tradePlayers(playerId1, playerId2) {
- var context = getContext();
- var container = context.getCollection();
- var response = context.getResponse();
-
-    var player1Item, player2Item;
-
- // query for players
- var filterQuery =
- {
- 'query' : 'SELECT * FROM Players p where p.id = @playerId1',
- 'parameters' : [{'name':'@playerId1', 'value':playerId1}]
- };
-
- var accept = container.queryDocuments(container.getSelfLink(), filterQuery, {},
- function (err, items, responseOptions) {
- if (err) throw new Error("Error" + err.message);
-
- if (items.length != 1) throw "Unable to find both names";
- player1Item = items[0];
-
- var filterQuery2 =
- {
- 'query' : 'SELECT * FROM Players p where p.id = @playerId2',
- 'parameters' : [{'name':'@playerId2', 'value':playerId2}]
- };
- var accept2 = container.queryDocuments(container.getSelfLink(), filterQuery2, {},
- function (err2, items2, responseOptions2) {
- if (err2) throw new Error("Error" + err2.message);
- if (items2.length != 1) throw "Unable to find both names";
- player2Item = items2[0];
- swapTeams(player1Item, player2Item);
- return;
- });
- if (!accept2) throw "Unable to read player details, abort ";
- });
-
- if (!accept) throw "Unable to read player details, abort ";
-
-    // swap the two players' teams
- function swapTeams(player1, player2) {
- var player2NewTeam = player1.team;
- player1.team = player2.team;
- player2.team = player2NewTeam;
-
- var accept = container.replaceDocument(player1._self, player1,
- function (err, itemReplaced) {
- if (err) throw "Unable to update player 1, abort ";
-
- var accept2 = container.replaceDocument(player2._self, player2,
- function (err2, itemReplaced2) {
-                        if (err2) throw "Unable to update player 2, abort"
- });
-
- if (!accept2) throw "Unable to update player 2, abort";
- });
-
- if (!accept) throw "Unable to update player 1, abort";
- }
-}
-```
-
-### <a id="bounded-execution"></a>Bounded execution within stored procedures
-
-The following is an example of a stored procedure that bulk-imports items into an Azure Cosmos container. The stored procedure handles bounded execution by checking the boolean return value from `createDocument`, and then uses the count of items inserted in each invocation of the stored procedure to track and resume progress across batches.
-
-```javascript
-function bulkImport(items) {
-    var container = getContext().getCollection();
-    var containerLink = container.getSelfLink();
-
-    // The count of imported items, also used as current item index.
-    var count = 0;
-
-    // Validate input.
-    if (!items) throw new Error("The array is undefined or null.");
-
-    var itemsLength = items.length;
-    if (itemsLength == 0) {
-        getContext().getResponse().setBody(0);
-        return;
-    }
-
- // Call the create API to create an item.
- tryCreate(items[count], callback);
-
- // Note that there are 2 exit conditions:
- // 1) The createDocument request was not accepted.
- // In this case the callback will not be called, we just call setBody and we are done.
- // 2) The callback was called items.length times.
-    // In this case all items were created and we don't need to call tryCreate anymore. Just call setBody and we are done.
- function tryCreate(item, callback) {
- var isAccepted = container.createDocument(containerLink, item, callback);
-
- // If the request was accepted, callback will be called.
- // Otherwise report current count back to the client,
- // which will call the script again with remaining set of items.
- if (!isAccepted) getContext().getResponse().setBody(count);
- }
-
- // This is called when container.createDocument is done in order to process the result.
- function callback(err, item, options) {
- if (err) throw err;
-
- // One more item has been inserted, increment the count.
- count++;
-
- if (count >= itemsLength) {
- // If we created all items, we are done. Just set the response.
- getContext().getResponse().setBody(count);
- } else {
- // Create next document.
- tryCreate(items[count], callback);
- }
- }
-}
-```
-
-### <a id="async-promises"></a>Async await with stored procedures
-
-The following is an example of a stored procedure that uses async-await with Promises using a helper function. The stored procedure queries for an item and replaces it.
-
-```javascript
-function async_sample() {
- const ERROR_CODE = {
- NotAccepted: 429
- };
-
- const asyncHelper = {
- queryDocuments(sqlQuery, options) {
- return new Promise((resolve, reject) => {
- const isAccepted = __.queryDocuments(__.getSelfLink(), sqlQuery, options, (err, feed, options) => {
- if (err) reject(err);
- resolve({ feed, options });
- });
- if (!isAccepted) reject(new Error(ERROR_CODE.NotAccepted, "queryDocuments was not accepted."));
- });
- },
-
- replaceDocument(doc) {
- return new Promise((resolve, reject) => {
- const isAccepted = __.replaceDocument(doc._self, doc, (err, result, options) => {
- if (err) reject(err);
- resolve({ result, options });
- });
- if (!isAccepted) reject(new Error(ERROR_CODE.NotAccepted, "replaceDocument was not accepted."));
- });
- }
- };
-
- async function main() {
- let continuation;
- do {
- let { feed, options } = await asyncHelper.queryDocuments("SELECT * from c", { continuation });
-
- for (let doc of feed) {
- doc.newProp = 1;
- await asyncHelper.replaceDocument(doc);
- }
-
- continuation = options.continuation;
- } while (continuation);
- }
-
- main().catch(err => getContext().abort(err));
-}
-```
-
-## <a id="triggers"></a>How to write triggers
-
-Azure Cosmos DB supports pre-triggers and post-triggers. Pre-triggers are executed before modifying a database item, and post-triggers are executed after modifying a database item. Triggers are not automatically executed; they must be specified for each database operation where you want them to execute. After you define a trigger, you should [register and call a pre-trigger](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) by using the Azure Cosmos DB SDKs.
-
-### <a id="pre-triggers"></a>Pre-triggers
-
-The following example shows how a pre-trigger is used to validate the properties of an Azure Cosmos item that is being created. In this example, we use the ToDoList sample from the [Quickstart .NET SQL API](create-sql-api-dotnet.md) to add a timestamp property to a newly added item if it doesn't contain one.
-
-```javascript
-function validateToDoItemTimestamp() {
- var context = getContext();
- var request = context.getRequest();
-
- // item to be created in the current operation
- var itemToCreate = request.getBody();
-
- // validate properties
- if (!("timestamp" in itemToCreate)) {
- var ts = new Date();
- itemToCreate["timestamp"] = ts.getTime();
- }
-
- // update the item that will be created
- request.setBody(itemToCreate);
-}
-```
-
-Pre-triggers cannot have any input parameters. The request object in the trigger is used to manipulate the request message associated with the operation. In the previous example, the pre-trigger is run when creating an Azure Cosmos item, and the request message body contains the item to be created in JSON format.
-
-When you register a trigger, you can specify the operations that it can run with. This trigger should be registered with a `TriggerOperation` value of `TriggerOperation.Create`, which means that using the trigger in a replace operation is not permitted.
-
-For examples of how to register and call a pre-trigger, see the [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-post-triggers) articles.
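-
-As a quick illustration, the following .NET SDK (v3) sketch registers the pre-trigger above and opts in to it on a create operation. This is a minimal sketch: the `cosmosClient`, `newItem`, database and container names, and the path to the trigger body are assumptions for this example.
-
-```csharp
-using System.Collections.Generic;
-using System.IO;
-using Microsoft.Azure.Cosmos;
-using Microsoft.Azure.Cosmos.Scripts;
-
-// Assumed: cosmosClient is an existing CosmosClient and newItem is the item to create.
-Container container = cosmosClient.GetContainer("ToDoList", "Items");
-
-// Register the pre-trigger so it can be referenced by ID at request time.
-await container.Scripts.CreateTriggerAsync(new TriggerProperties
-{
-    Id = "validateToDoItemTimestamp",
-    Body = File.ReadAllText(@"js\validateToDoItemTimestamp.js"),
-    TriggerOperation = TriggerOperation.Create,
-    TriggerType = TriggerType.Pre
-});
-
-// Triggers never run implicitly; opt in to the pre-trigger on the create operation.
-await container.CreateItemAsync(newItem, requestOptions: new ItemRequestOptions
-{
-    PreTriggers = new List<string> { "validateToDoItemTimestamp" }
-});
-```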
-
-### <a id="post-triggers"></a>Post-triggers
-
-The following example shows a post-trigger. This trigger queries for the metadata item and updates it with details about the newly created item.
--
-```javascript
-function updateMetadata() {
- var context = getContext();
- var container = context.getCollection();
- var response = context.getResponse();
-
- // item that was created
- var createdItem = response.getBody();
-
- // query for metadata document
- var filterQuery = 'SELECT * FROM root r WHERE r.id = "_metadata"';
- var accept = container.queryDocuments(container.getSelfLink(), filterQuery,
- updateMetadataCallback);
- if(!accept) throw "Unable to update metadata, abort";
-
- function updateMetadataCallback(err, items, responseOptions) {
- if(err) throw new Error("Error" + err.message);
-
- if(items.length != 1) throw 'Unable to find metadata document';
-
- var metadataItem = items[0];
-
- // update metadata
- metadataItem.createdItems += 1;
- metadataItem.createdNames += " " + createdItem.id;
- var accept = container.replaceDocument(metadataItem._self,
- metadataItem, function(err, itemReplaced) {
- if(err) throw "Unable to update metadata, abort";
- });
-
- if(!accept) throw "Unable to update metadata, abort";
- return;
- }
-}
-```
-
-It's important to note that triggers execute transactionally in Azure Cosmos DB. The post-trigger runs as part of the same transaction as the underlying item operation. An exception during post-trigger execution fails the whole transaction: anything already committed is rolled back and an exception is returned.
-
-For examples of how to register and call pre-triggers and post-triggers, see the [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-post-triggers) articles.
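-
-The following .NET SDK (v3) sketch registers the post-trigger above and runs it on a create operation. It reuses the assumed `container` and `newItem` from the earlier pre-trigger sketch, so treat it as illustrative only.
-
-```csharp
-// Register the post-trigger for create operations.
-await container.Scripts.CreateTriggerAsync(new TriggerProperties
-{
-    Id = "updateMetadata",
-    Body = File.ReadAllText(@"js\updateMetadata.js"),
-    TriggerOperation = TriggerOperation.Create,
-    TriggerType = TriggerType.Post
-});
-
-// The post-trigger runs in the same transaction as the write;
-// if it throws, the item creation is rolled back.
-await container.CreateItemAsync(newItem, requestOptions: new ItemRequestOptions
-{
-    PostTriggers = new List<string> { "updateMetadata" }
-});
-```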
-
-## <a id="udfs"></a>How to write user-defined functions
-
-The following sample creates a UDF to calculate income tax for various income brackets. This user-defined function is then used inside a query. For the purposes of this example, assume there is a container called "Incomes" with items like the following:
-
-```json
-{
- "name": "Satya Nadella",
- "country": "USA",
- "income": 70000
-}
-```
-
-The following is a function definition to calculate income tax for various income brackets:
-
-```javascript
-function tax(income) {
- if (income == undefined)
- throw 'no input';
-
- if (income < 1000)
- return income * 0.1;
- else if (income < 10000)
- return income * 0.2;
- else
- return income * 0.4;
-}
-```
-
-For examples of how to register and use a user-defined function, see the [How to use user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-work-with-user-defined-functions) article.
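-
-As a brief illustration, the following .NET SDK (v3) sketch registers the `tax` UDF and references it in a query with the `udf.` prefix. The `container` reference and the path to the UDF body are assumptions for this example.
-
-```csharp
-// Register the UDF so it can be referenced from queries.
-await container.Scripts.CreateUserDefinedFunctionAsync(new UserDefinedFunctionProperties
-{
-    Id = "tax",
-    Body = File.ReadAllText(@"js\tax.js")
-});
-
-// Reference the UDF in a query by using the udf. prefix.
-FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
-    "SELECT * FROM Incomes t WHERE udf.tax(t.income) > 20000");
-
-while (iterator.HasMoreResults)
-{
-    foreach (var item in await iterator.ReadNextAsync())
-    {
-        Console.WriteLine(item);
-    }
-}
-```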
-
-## Logging
-
-When using stored procedures, triggers, or user-defined functions, you can log the steps by enabling script logging. A string for debugging is generated when `EnableScriptLogging` is set to true, as shown in the following examples:
-
-# [JavaScript](#tab/javascript)
-
-```javascript
-let requestOptions = { enableScriptLogging: true };
-const { resource: result, headers: responseHeaders } = await container.scripts
- .storedProcedure(Sproc.id)
- .execute(undefined, [], requestOptions);
-console.log(responseHeaders[Constants.HttpHeaders.ScriptLogResults]);
-```
-
-# [C#](#tab/csharp)
-
-```csharp
-var response = await client.ExecuteStoredProcedureAsync(
-    document.SelfLink,
-    new RequestOptions { EnableScriptLogging = true });
-Console.WriteLine(response.ScriptLog);
-```
--
-## Next steps
-
-Learn more concepts and how-to write or use stored procedures, triggers, and user-defined functions in Azure Cosmos DB:
-
-* [How to register and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md)
-
-* [How to write stored procedures and triggers using JavaScript Query API in Azure Cosmos DB](how-to-write-javascript-query-api.md)
-
-* [Working with Azure Cosmos DB stored procedures, triggers, and user-defined functions in Azure Cosmos DB](stored-procedures-triggers-udfs.md)
-
-* [Working with JavaScript language integrated query API in Azure Cosmos DB](javascript-query-api.md)
cosmos-db Index Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/index-metrics.md
- Title: Azure Cosmos DB indexing metrics
-description: Learn how to obtain and interpret the indexing metrics in Azure Cosmos DB
---- Previously updated : 10/25/2021---
-# Indexing metrics in Azure Cosmos DB
-
-Azure Cosmos DB provides indexing metrics to show both utilized indexed paths and recommended indexed paths. You can use the indexing metrics to optimize query performance, especially in cases where you aren't sure how to modify the [indexing policy](../index-policy.md).
-
-> [!NOTE]
-> The indexing metrics are only supported in the .NET SDK (version 3.21.0 or later) and Java SDK (version 4.19.0 or later).
-
-## Enable indexing metrics
-
-You can enable indexing metrics for a query by setting the `PopulateIndexMetrics` property to `true`. When not specified, `PopulateIndexMetrics` defaults to `false`. We recommend enabling the index metrics only when troubleshooting query performance; as long as your queries and indexing policy stay the same, the index metrics are unlikely to change. Instead, identify expensive queries by monitoring query RU charge and latency by using diagnostic logs.
-
-### .NET SDK example
-
-```csharp
- string sqlQueryText = "SELECT TOP 10 c.id FROM c WHERE c.Item = 'value1234' AND c.Price > 2";
-
- QueryDefinition query = new QueryDefinition(sqlQueryText);
-
- FeedIterator<Item> resultSetIterator = container.GetItemQueryIterator<Item>(
- query, requestOptions: new QueryRequestOptions
- {
- PopulateIndexMetrics = true
- });
-
- FeedResponse<Item> response = null;
-
- while (resultSetIterator.HasMoreResults)
- {
- response = await resultSetIterator.ReadNextAsync();
- Console.WriteLine(response.IndexMetrics);
- }
-```
-
-### Example output
-
-In this example query, we observe the utilized paths `/Item/?` and `/Price/?` and the potential composite index `(/Item ASC, /Price ASC)`.
-
-```
-Index Utilization Information
- Utilized Single Indexes
- Index Spec: /Item/?
- Index Impact Score: High
-
- Index Spec: /Price/?
- Index Impact Score: High
-
- Potential Single Indexes
- Utilized Composite Indexes
- Potential Composite Indexes
- Index Spec: /Item ASC, /Price ASC
- Index Impact Score: High
-
-```
-
-## Utilized indexed paths
-
-The utilized single indexes and utilized composite indexes respectively show the included paths and composite indexes that the query used. Queries can use multiple indexed paths, as well as a mix of included paths and composite indexes. If an indexed path isn't listed as utilized, removing the indexed path won't have any impact on the query's performance.
-
-Consider the list of utilized indexed paths as evidence that a query used those paths. If you aren't sure if a new indexed path will improve query performance, you should try adding the new indexed paths and check if the query uses them.
-
-## Potential indexed paths
-
-The potential single indexes and potential composite indexes respectively show the included paths and composite indexes that, if added, the query might utilize. If you see potential indexed paths, you should consider adding them to your indexing policy and observe whether they improve query performance.
-
-Consider the list of potential indexed paths as recommendations rather than conclusive evidence that a query will use a specific indexed path. The potential indexed paths are not an exhaustive list of indexed paths that a query could use. Additionally, it's possible that some potential indexed paths won't have any impact on query performance. [Add the recommended indexed paths](how-to-manage-indexing-policy.md) and confirm that they improve query performance.
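-
-For instance, the following .NET SDK (v3) sketch adds the composite index recommended in the earlier example output to the container's indexing policy. It's a minimal sketch: the `container` reference is assumed.
-
-```csharp
-using System.Collections.ObjectModel;
-using Microsoft.Azure.Cosmos;
-
-// Read the current container definition, including its indexing policy.
-ContainerProperties properties = (await container.ReadContainerAsync()).Resource;
-
-// Add the composite index suggested by the index metrics.
-properties.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath>
-{
-    new CompositePath { Path = "/Item", Order = CompositePathSortOrder.Ascending },
-    new CompositePath { Path = "/Price", Order = CompositePathSortOrder.Ascending }
-});
-
-// Apply the updated indexing policy; the index is rebuilt in the background.
-await container.ReplaceContainerAsync(properties);
-```
-
-After the index transformation completes, rerun the query with `PopulateIndexMetrics` enabled to confirm that the composite index shows up under the utilized indexes.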
-
-> [!NOTE]
-> Do you have any feedback about the indexing metrics? We want to hear it! Feel free to share feedback directly with the Azure Cosmos DB engineering team: cosmosdbindexing@microsoft.com
-
-## Index impact score
-
-The index impact score is the likelihood that an indexed path, based on the query shape, has a significant impact on query performance. In other words, the index impact score is the probability that, without that specific indexed path, the query RU charge would have been substantially higher.
-
-There are two possible index impact scores: **high** and **low**. If you have multiple potential indexed paths, we recommend focusing on indexed paths with a **high** impact score.
-
-The only criterion used in the index impact score is the query shape. For example, in the following query, the indexed path `/name/?` would be assigned a **high** index impact score:
-
-```sql
-SELECT *
-FROM c
-WHERE c.name = "Samer"
-```
-
-The actual impact depends on the nature of the data. If only a few items match the `/name` filter, the indexed path substantially improves the query RU charge. However, if most items end up matching the `/name` filter anyway, the indexed path may not improve query performance. In each of these cases, the indexed path `/name/?` is assigned a **high** index impact score because, based on the query shape, the indexed path has a high likelihood of improving query performance.
-
-## Additional examples
-
-### Example query
-
-```sql
-SELECT c.id
-FROM c
-WHERE c.name = 'Tim' AND c.age > 15 AND c.town = 'Redmond' AND c.timestamp > 2349230183
-```
-
-### Index metrics
-
-```
-Index Utilization Information
- Utilized Single Indexes
- Index Spec: /name/?
- Index Impact Score: High
-
- Index Spec: /age/?
- Index Impact Score: High
-
- Index Spec: /town/?
- Index Impact Score: High
-
- Index Spec: /timestamp/?
- Index Impact Score: High
-
- Potential Single Indexes
- Utilized Composite Indexes
- Potential Composite Indexes
- Index Spec: /name ASC, /town ASC, /age ASC
- Index Impact Score: High
-
- Index Spec: /name ASC, /town ASC, /timestamp ASC
- Index Impact Score: High
-
-```
-These index metrics show that the query used the indexed paths `/name/?`, `/age/?`, `/town/?`, and `/timestamp/?`. The index metrics also indicate that there's a high likelihood that adding the composite indexes `(/name ASC, /town ASC, /age ASC)` and `(/name ASC, /town ASC, /timestamp ASC)` will further improve performance.
-
-## Next steps
-
-Read more about indexing in the following articles:
-
-- [Indexing overview](../index-overview.md)
-- [How to manage indexing policy](how-to-manage-indexing-policy.md)
cosmos-db Javascript Query Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/javascript-query-api.md
- Title: Work with JavaScript integrated query API in Azure Cosmos DB Stored Procedures and Triggers
-description: This article introduces the concepts for JavaScript language-integrated query API to create stored procedures and triggers in Azure Cosmos DB.
---- Previously updated : 05/07/2020-----
-# JavaScript query API in Azure Cosmos DB
-
-In addition to issuing queries using the SQL API in Azure Cosmos DB, the [Cosmos DB server-side SDK](https://github.com/Azure/azure-cosmosdb-js-server/) provides a JavaScript interface for performing optimized queries in Cosmos DB stored procedures and triggers. You don't have to know the SQL language to use this JavaScript interface. The JavaScript query API allows you to programmatically build queries by passing predicate functions into a sequence of function calls, with a syntax familiar from ECMAScript5's array built-ins and popular JavaScript libraries like Lodash. Queries are parsed by the JavaScript runtime and efficiently executed using Azure Cosmos DB indexes.
-
-## Supported JavaScript functions
-
-| **Function** | **Description** |
-|||
-|`chain() ... .value([callback] [, options])`|Starts a chained call that must be terminated with value().|
-|`filter(predicateFunction [, options] [, callback])`|Filters the input using a predicate function that returns true/false in order to filter in/out input documents into the resulting set. This function behaves similar to a WHERE clause in SQL.|
-|`flatten([isShallow] [, options] [, callback])`|Combines and flattens arrays from each input item into a single array. This function behaves similar to SelectMany in LINQ.|
-|`map(transformationFunction [, options] [, callback])`|Applies a projection given a transformation function that maps each input item to a JavaScript object or value. This function behaves similar to a SELECT clause in SQL.|
-|`pluck([propertyName] [, options] [, callback])`|This function is a shortcut for a map that extracts the value of a single property from each input item.|
-|`sortBy([predicate] [, options] [, callback])`|Produces a new set of documents by sorting the documents in the input document stream in ascending order by using the given predicate. This function behaves similar to an ORDER BY clause in SQL.|
-|`sortByDescending([predicate] [, options] [, callback])`|Produces a new set of documents by sorting the documents in the input document stream in descending order using the given predicate. This function behaves similar to an ORDER BY x DESC clause in SQL.|
-|`unwind(collectionSelector, [resultSelector], [options], [callback])`|Performs a self-join with inner array and adds results from both sides as tuples to the result projection. For instance, joining a person document with person.pets would produce [person, pet] tuples. This is similar to SelectMany in .NET LINQ.|
-
-When included inside predicate and/or selector functions, the following JavaScript constructs get automatically optimized to run directly on Azure Cosmos DB indices:
-
-- Simple operators: `=` `+` `-` `*` `/` `%` `|` `^` `&` `==` `!=` `===` `!==` `<` `>` `<=` `>=` `||` `&&` `<<` `>>` `>>>` `!` `~`
-- Literals, including the object literal: {}
-- var, return
-
-The following JavaScript constructs do not get optimized for Azure Cosmos DB indices:
-
-- Control flow (for example, if, for, while)
-- Function calls
-
-For more information, see the [Cosmos DB Server Side JavaScript Documentation](https://github.com/Azure/azure-cosmosdb-js-server/).
-
-## SQL to JavaScript cheat sheet
-
-The following table presents various SQL queries and the corresponding JavaScript queries. As with SQL queries, properties (for example, item.id) are case-sensitive.
-
-> [!NOTE]
-> `__` (double-underscore) is an alias to `getContext().getCollection()` when using the JavaScript query API.
-
-|**SQL**|**JavaScript Query API**|**Description**|
-||||
-|SELECT *<br>FROM docs| __.map(function(doc) { <br>&nbsp;&nbsp;&nbsp;&nbsp;return doc;<br>});|Results in all documents (paginated with continuation token) as is.|
-|SELECT <br>&nbsp;&nbsp;&nbsp;docs.id,<br>&nbsp;&nbsp;&nbsp;docs.message AS msg,<br>&nbsp;&nbsp;&nbsp;docs.actions <br>FROM docs|__.map(function(doc) {<br>&nbsp;&nbsp;&nbsp;&nbsp;return {<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;id: doc.id,<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;msg: doc.message,<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;actions:doc.actions<br>&nbsp;&nbsp;&nbsp;&nbsp;};<br>});|Projects the id, message (aliased to msg), and action from all documents.|
-|SELECT *<br>FROM docs<br>WHERE<br>&nbsp;&nbsp;&nbsp;docs.id="X998_Y998"|__.filter(function(doc) {<br>&nbsp;&nbsp;&nbsp;&nbsp;return doc.id ==="X998_Y998";<br>});|Queries for documents with the predicate: id = "X998_Y998".|
-|SELECT *<br>FROM docs<br>WHERE<br>&nbsp;&nbsp;&nbsp;ARRAY_CONTAINS(docs.Tags, 123)|__.filter(function(x) {<br>&nbsp;&nbsp;&nbsp;&nbsp;return x.Tags && x.Tags.indexOf(123) > -1;<br>});|Queries for documents that have a Tags property and Tags is an array containing the value 123.|
-|SELECT<br>&nbsp;&nbsp;&nbsp;docs.id,<br>&nbsp;&nbsp;&nbsp;docs.message AS msg<br>FROM docs<br>WHERE<br>&nbsp;&nbsp;&nbsp;docs.id="X998_Y998"|__.chain()<br>&nbsp;&nbsp;&nbsp;&nbsp;.filter(function(doc) {<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return doc.id ==="X998_Y998";<br>&nbsp;&nbsp;&nbsp;&nbsp;})<br>&nbsp;&nbsp;&nbsp;&nbsp;.map(function(doc) {<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return {<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;id: doc.id,<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;msg: doc.message<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;};<br>&nbsp;&nbsp;&nbsp;&nbsp;})<br>.value();|Queries for documents with a predicate, id = "X998_Y998", and then projects the id and message (aliased to msg).|
-|SELECT VALUE tag<br>FROM docs<br>JOIN tag IN docs.Tags<br>ORDER BY docs._ts|__.chain()<br>&nbsp;&nbsp;&nbsp;&nbsp;.filter(function(doc) {<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return doc.Tags && Array.isArray(doc.Tags);<br>&nbsp;&nbsp;&nbsp;&nbsp;})<br>&nbsp;&nbsp;&nbsp;&nbsp;.sortBy(function(doc) {<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return doc._ts;<br>&nbsp;&nbsp;&nbsp;&nbsp;})<br>&nbsp;&nbsp;&nbsp;&nbsp;.pluck("Tags")<br>&nbsp;&nbsp;&nbsp;&nbsp;.flatten()<br>&nbsp;&nbsp;&nbsp;&nbsp;.value()|Filters for documents that have an array property, Tags, and sorts the resulting documents by the _ts timestamp system property, and then projects + flattens the Tags array.|
-
-## Next steps
-
-Learn more concepts and how-to write and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB:
-
-- [How to write stored procedures and triggers using JavaScript Query API](how-to-write-javascript-query-api.md)
-- [Working with Azure Cosmos DB stored procedures, triggers and user-defined functions](stored-procedures-triggers-udfs.md)
-- [How to use stored procedures, triggers, user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md)
-- [Azure Cosmos DB JavaScript server-side API reference](https://azure.github.io/azure-cosmosdb-js-server)
-- [JavaScript ES6 (ECMA 2015)](https://www.ecma-international.org/ecma-262/6.0/)
cosmos-db Kafka Connector Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/kafka-connector-sink.md
- Title: Kafka Connect for Azure Cosmos DB - Sink connector
-description: The Azure Cosmos DB sink connector allows you to export data from Apache Kafka topics to an Azure Cosmos DB database. The connector polls data from Kafka to write to containers in the database based on the topics subscription.
---- Previously updated : 05/13/2022---
-# Kafka Connect for Azure Cosmos DB - sink connector
-
-Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB. The Azure Cosmos DB sink connector allows you to export data from Apache Kafka topics to an Azure Cosmos DB database. The connector polls data from Kafka to write to containers in the database based on the topics subscription.
-
-## Prerequisites
-
-* Start with the [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md) because it gives you a complete environment to work with. If you don't wish to use Confluent Platform, you need to install and configure Zookeeper, Apache Kafka, and Kafka Connect yourself. You'll also need to install and configure the Azure Cosmos DB connectors manually.
-* Create an Azure Cosmos DB account, container [setup guide](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/CosmosDB_Setup.md)
-* Bash shell, which is tested on GitHub Codespaces, Mac, Ubuntu, and Windows with WSL2. This shell doesn't work in Cloud Shell or WSL1.
-* Download [Java 11+](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html)
-* Download [Maven](https://maven.apache.org/download.cgi)
-
-## Install sink connector
-
-If you're using the recommended [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md), the Azure Cosmos DB sink connector is included in the installation, and you can skip this step.
-
-Otherwise, you can download the JAR file from the latest [Release](https://github.com/microsoft/kafka-connect-cosmosdb/releases) or package this repo to create a new JAR file. To install the connector manually using the JAR file, refer to these [instructions](https://docs.confluent.io/current/connect/managing/install.html#install-connector-manually). You can also package a new JAR file from the source code.
-
-```bash
-# clone the kafka-connect-cosmosdb repo if you haven't done so already
-git clone https://github.com/microsoft/kafka-connect-cosmosdb.git
-cd kafka-connect-cosmosdb
-
-# package the source code into a JAR file
-mvn clean package
-
-# include the following JAR file in Kafka Connect installation
-ls target/*dependencies.jar
-```
-
-## Create a Kafka topic and write data
-
-If you're using the Confluent Platform, the easiest way to create a Kafka topic is by using the supplied Control Center UX. Otherwise, you can create a Kafka topic manually using the following syntax:
-
-```bash
-./kafka-topics.sh --create --zookeeper <ZOOKEEPER_URL:PORT> --replication-factor <NO_OF_REPLICATIONS> --partitions <NO_OF_PARTITIONS> --topic <TOPIC_NAME>
-```
-
-For this scenario, we'll create a Kafka topic named "hotels" and write JSON data without an embedded schema to the topic. To create a topic inside Control Center, see the [Confluent guide](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-2-create-ak-topics).
-
-Next, start the Kafka console producer to write a few records to the "hotels" topic.
-
-```powershell
-# Option 1: If using Codespaces, use the built-in CLI utility
-kafka-console-producer --broker-list localhost:9092 --topic hotels
-
-# Option 2: Using this repo's Confluent Platform setup, first exec into the broker container
-docker exec -it broker /bin/bash
-kafka-console-producer --broker-list localhost:9092 --topic hotels
-
-# Option 3: Using your Confluent Platform setup and CLI install
-<path-to-confluent>/bin/kafka-console-producer --broker-list <kafka broker hostname> --topic hotels
-```
-
-In the console producer, enter:
-
-```json
-{"id": "h1", "HotelName": "Marriott", "Description": "Marriott description"}
-{"id": "h2", "HotelName": "HolidayInn", "Description": "HolidayInn description"}
-{"id": "h3", "HotelName": "Motel8", "Description": "Motel8 description"}
-```
-
-The three records entered are published to the "hotels" Kafka topic in JSON format.
-
-## Create the sink connector
-
-Create an Azure Cosmos DB sink connector in Kafka Connect. The following JSON body defines config for the sink connector. Make sure to replace the values for `connect.cosmos.connection.endpoint` and `connect.cosmos.master.key`, properties that you should have saved from the Azure Cosmos DB setup guide in the prerequisites.
-
-For more information on each of these configuration properties, see [sink properties](#sink-configuration-properties).
-
-```json
-{
- "name": "cosmosdb-sink-connector",
- "config": {
- "connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",
- "tasks.max": "1",
- "topics": [
- "hotels"
- ],
- "value.converter": "org.apache.kafka.connect.json.JsonConverter",
- "value.converter.schemas.enable": "false",
- "key.converter": "org.apache.kafka.connect.json.JsonConverter",
- "key.converter.schemas.enable": "false",
- "connect.cosmos.connection.endpoint": "https://<cosmosinstance-name>.documents.azure.com:443/",
- "connect.cosmos.master.key": "<cosmosdbprimarykey>",
- "connect.cosmos.databasename": "kafkaconnect",
- "connect.cosmos.containers.topicmap": "hotels#kafka"
- }
-}
-```
-
-Once you have all the values filled out, save the JSON file somewhere locally. You can use this file to create the connector using the REST API.
-
-### Create connector using Control Center
-
-An easy option to create the connector is by going through the Control Center webpage. Follow this [installation guide](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-3-install-a-ak-connector-and-generate-sample-data) to create a connector from Control Center. Instead of using the `DatagenConnector` option, use the `CosmosDBSinkConnector` tile. When configuring the sink connector, fill out the values as you've filled in the JSON file.
-
-Alternatively, in the connectors page, you can upload the JSON file created earlier by using the **Upload connector config file** option.
--
-### Create connector using REST API
-
-Create the sink connector using the Connect REST API:
-
-```bash
-# Curl to Kafka connect service
-curl -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file> http://localhost:8083/connectors
-
-```
-
-## Confirm data written to Cosmos DB
-
-Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos DB account. Check that the three records from the "hotels" topic are created in your account.
-
-## Cleanup
-
-To delete the connector from the Control Center, navigate to the sink connector you created and select the **Delete** icon.
--
-Alternatively, use the Connect REST API to delete:
-
-```bash
-# Curl to Kafka connect service
-curl -X DELETE http://localhost:8083/connectors/cosmosdb-sink-connector
-```
-
-To delete the created Azure Cosmos DB service and its resource group using Azure CLI, refer to these [steps](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/CosmosDB_Setup.md#cleanup).
-
-## <a id="sink-configuration-properties"></a>Sink configuration properties
-
-The following settings are used to configure an Azure Cosmos DB Kafka sink connector. These configuration values determine which Kafka topics the data is consumed from, which Azure Cosmos DB container the data is written into, and the formats used to serialize the data. For an example configuration file with the default values, refer to [this config](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/src/docker/resources/sink.example.json).
-
-| Name | Type | Description | Required/Optional |
-| : | : | : | : |
-| Topics | list | A list of Kafka topics to watch. | Required |
-| connector.class | string | Class name of the Azure Cosmos DB sink. It should be set to `com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector`. | Required |
-| connect.cosmos.connection.endpoint | uri | Azure Cosmos endpoint URI string. | Required |
-| connect.cosmos.master.key | string | The Azure Cosmos DB primary key that the sink connects with. | Required |
-| connect.cosmos.databasename | string | The name of the Azure Cosmos database the sink writes to. | Required |
-| connect.cosmos.containers.topicmap | string | Mapping between Kafka topics and Azure Cosmos DB containers, formatted using CSV as shown: `topic#container,topic2#container2`. | Required |
-| key.converter | string | Serialization format for the key data written into Kafka topic. | Required |
-| value.converter | string | Serialization format for the value data written into the Kafka topic. | Required |
-| key.converter.schemas.enable | string | Set to "true" if the key data has embedded schema. | Optional |
-| value.converter.schemas.enable | string | Set to "true" if the value data has embedded schema. | Optional |
-| tasks.max | int | Maximum number of connector sink tasks. Default is `1` | Optional |
-
-Data is always written to Azure Cosmos DB as JSON without any schema.
-
-## Supported data types
-
-The Azure Cosmos DB sink connector converts a sink record into a JSON document that supports the following schema types:
-
-| Schema type | JSON data type |
-| : | : |
-| Array | Array |
-| Boolean | Boolean |
-| Float32 | Number |
-| Float64 | Number |
-| Int8 | Number |
-| Int16 | Number |
-| Int32 | Number |
-| Int64 | Number|
-| Map | Object (JSON)|
-| String | String<br> Null |
-| Struct | Object (JSON) |
-
-The sink connector also supports the following AVRO logical types:
-
-| Schema Type | JSON Data Type |
-| : | : |
-| Date | Number |
-| Time | Number |
-| Timestamp | Number |
-
-> [!NOTE]
-> Byte deserialization is currently not supported by the Azure Cosmos DB sink connector.
-
-## Single Message Transforms (SMTs)
-
-Along with the sink connector settings, you can specify the use of Single Message Transformations (SMTs) to modify messages flowing through the Kafka Connect platform. For more information, see [Confluent SMT Documentation](https://docs.confluent.io/platform/current/connect/transforms/overview.html).
-
-### Using the InsertUUID SMT
-
-You can use the `InsertUUID` SMT to automatically add item IDs. With this custom SMT, you can insert an `id` field with a random UUID value for each message before it's written to Azure Cosmos DB.
-
-> [!WARNING]
-> Use this SMT only if the messages don't contain the `id` field. Otherwise, the `id` values will be overwritten and you may end up with duplicate items in your database. Using UUIDs as the message ID is quick and easy, but they're [not an ideal partition key](https://stackoverflow.com/questions/49031461/would-using-a-substring-of-a-guid-in-cosmosdb-as-partitionkey-be-a-bad-idea) to use in Azure Cosmos DB.
-
-### Install the SMT
-
-Before you can use the `InsertUUID` SMT, you'll need to install this transform in your Confluent Platform setup. If you're using the Confluent Platform setup from this repo, the transform is already included in the installation, and you can skip this step.
-
-Alternatively, you can package the [InsertUUID source](https://github.com/confluentinc/kafka-connect-insert-uuid) to create a new JAR file. To install the connector manually using the JAR file, refer to these [instructions](https://docs.confluent.io/current/connect/managing/install.html#install-connector-manually).
-
-```bash
-# clone the kafka-connect-insert-uuid repo
-git clone https://github.com/confluentinc/kafka-connect-insert-uuid.git
-cd kafka-connect-insert-uuid
-
-# package the source code into a JAR file
-mvn clean package
-
-# include the following JAR file in Confluent Platform installation
-ls target/*.jar
-```
-
-### Configure the SMT
-
-Inside your sink connector config, add the following properties to set the `id`.
-
-```json
-"transforms": "insertID",
-"transforms.insertID.type": "com.github.cjmatta.kafka.connect.smt.InsertUuid$Value",
-"transforms.insertID.uuid.field.name": "id"
-```
-
-For more information on using this SMT, see the [InsertUUID repository](https://github.com/confluentinc/kafka-connect-insert-uuid).
-
-### Using SMTs to configure Time to live (TTL)
-
-Using both the `InsertField` and `Cast` SMTs, you can configure TTL on each item created in Azure Cosmos DB. Enable TTL on the container before enabling TTL at an item level. For more information, see the [time-to-live](how-to-time-to-live.md#enable-time-to-live-on-a-container-using-the-azure-portal) doc.
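-
-One way to turn on TTL at the container level is through the .NET SDK, as in the minimal sketch below; the `container` reference is assumed, and the Azure portal steps linked above work just as well.
-
-```csharp
-// DefaultTimeToLive = -1 enables TTL on the container with no default expiry,
-// so each item's "ttl" field (set by the SMTs below) controls its lifetime.
-ContainerProperties properties = (await container.ReadContainerAsync()).Resource;
-properties.DefaultTimeToLive = -1;
-await container.ReplaceContainerAsync(properties);
-```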
-
-Inside your sink connector config, add the following properties to set the TTL in seconds. In the following example, the TTL is set to 100 seconds. If the message already contains the `TTL` field, the `TTL` value will be overwritten by these SMTs.
-
-```json
-"transforms": "insertTTL,castTTLInt",
-"transforms.insertTTL.type": "org.apache.kafka.connect.transforms.InsertField$Value",
-"transforms.insertTTL.static.field": "ttl",
-"transforms.insertTTL.static.value": "100",
-"transforms.castTTLInt.type": "org.apache.kafka.connect.transforms.Cast$Value",
-"transforms.castTTLInt.spec": "ttl:int32"
-```
-
-For more information on using these SMTs, see the [InsertField](https://docs.confluent.io/platform/current/connect/transforms/insertfield.html) and [Cast](https://docs.confluent.io/platform/current/connect/transforms/cast.html) documentation.
-
-## Troubleshooting common issues
-
-Here are solutions to some common problems that you may encounter when working with the Kafka sink connector.
-
-### Read non-JSON data with JsonConverter
-
-If you have non-JSON data on your source topic in Kafka and attempt to read it using the `JsonConverter`, you'll see the following exception:
-
-```console
-org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error:
-…
-org.apache.kafka.common.errors.SerializationException: java.io.CharConversionException: Invalid UTF-32 character 0x1cfa7e2 (above 0x0010ffff) at char #1, byte #7)
-
-```
-
-This error is likely caused by data in the source topic being serialized in Avro or another format, such as a CSV string.
-
-**Solution**: If the topic data is in AVRO format, then change your Kafka Connect sink connector to use the `AvroConverter` as shown below.
-
-```json
-"value.converter": "io.confluent.connect.avro.AvroConverter",
-"value.converter.schema.registry.url": "http://schema-registry:8081",
-```
-
-### Read non-Avro data with AvroConverter
-
-This scenario applies when you try to use the Avro converter to read data from a topic that isn't in Avro format. It includes data written by an Avro serializer other than the Confluent Schema Registry's Avro serializer, which has its own wire format.
-
-```console
-org.apache.kafka.connect.errors.DataException: my-topic-name
-at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:97)
-…
-org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
-org.apache.kafka.common.errors.SerializationException: Unknown magic byte!
-
-```
-
-**Solution**: Check the source topic's serialization format. Then, either switch Kafka Connect's sink connector to use the right converter or switch the upstream format to Avro.
-
-### Read a JSON message without the expected schema/payload structure
-
-Kafka Connect supports a special structure of JSON messages containing both payload and schema as follows.
-
-```json
-{
- "schema": {
- "type": "struct",
- "fields": [
- {
- "type": "int32",
- "optional": false,
- "field": "userid"
- },
- {
- "type": "string",
- "optional": false,
- "field": "name"
- }
- ]
- },
- "payload": {
- "userid": 123,
- "name": "Sam"
- }
-}
-```
-
-If you try to read JSON data that doesn't contain the data in this structure, you'll get the following error:
-
-```none
-org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
-```
-
-The only JSON structure that is valid for `schemas.enable=true` has `schema` and `payload` fields as the top-level elements, as shown above. As the error message states, if you just have plain JSON data, change your connector's configuration to:
-
-```json
-"value.converter": "org.apache.kafka.connect.json.JsonConverter",
-"value.converter.schemas.enable": "false",
-```
-
-## Limitations
-
-* Autocreation of databases and containers in Azure Cosmos DB isn't supported. The database and containers must already exist, and they must be configured correctly.
-
-## Next steps
-
-You can learn more about the change feed in Azure Cosmos DB with the following docs:
-
-* [Introduction to the change feed](https://azurecosmosdb.github.io/labs/dotnet/labs/08-change_feed_with_azure_functions.html)
-* [Reading from change feed](read-change-feed.md)
cosmos-db Kafka Connector Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/kafka-connector-source.md
- Title: Kafka Connect for Azure Cosmos DB - Source connector
-description: Azure Cosmos DB source connector provides the capability to read data from the Azure Cosmos DB change feed and publish this data to a Kafka topic. Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB.
---- Previously updated : 05/13/2022---
-# Kafka Connect for Azure Cosmos DB - source connector
-
-Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB. The Azure Cosmos DB source connector provides the capability to read data from the Azure Cosmos DB change feed and publish this data to a Kafka topic.
-
-## Prerequisites
-
-* Start with the [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md) because it gives you a complete environment to work with. If you don't wish to use Confluent Platform, you need to install and configure Zookeeper, Apache Kafka, and Kafka Connect yourself. You'll also need to install and configure the Azure Cosmos DB connectors manually.
-* Create an Azure Cosmos DB account, container [setup guide](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/CosmosDB_Setup.md)
-* Bash shell, which is tested on GitHub Codespaces, Mac, Ubuntu, and Windows with WSL2. This shell doesn't work in Cloud Shell or WSL1.
-* Download [Java 11+](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html)
-* Download [Maven](https://maven.apache.org/download.cgi)
-
-## Install the source connector
-
-If you're using the recommended [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md), the Azure Cosmos DB source connector is included in the installation, and you can skip this step.
-
-Otherwise, you can use the JAR file from the latest [Release](https://github.com/microsoft/kafka-connect-cosmosdb/releases) and install the connector manually. To learn more, see these [instructions](https://docs.confluent.io/current/connect/managing/install.html#install-connector-manually). You can also package a new JAR file from the source code:
-
-```bash
-# clone the kafka-connect-cosmosdb repo if you haven't done so already
-git clone https://github.com/microsoft/kafka-connect-cosmosdb.git
-cd kafka-connect-cosmosdb
-
-# package the source code into a JAR file
-mvn clean package
-
-# include the following JAR file in Confluent Platform installation
-ls target/*dependencies.jar
-```
-
-## Create a Kafka topic
-
-Create a Kafka topic using Confluent Control Center. For this scenario, we'll create a Kafka topic named "apparels" and write JSON data without an embedded schema to the topic. To create a topic inside the Control Center, see the [create Kafka topic doc](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-2-create-ak-topics).
-
-## Create the source connector
-
-### Create the source connector in Kafka Connect
-
-To create the Azure Cosmos DB source connector in Kafka Connect, use the following JSON config. Make sure to replace the placeholder values for the `connect.cosmos.connection.endpoint` and `connect.cosmos.master.key` properties, which you should have saved from the Azure Cosmos DB setup guide in the prerequisites.
-
-```json
-{
- "name": "cosmosdb-source-connector",
- "config": {
- "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector",
- "tasks.max": "1",
- "key.converter": "org.apache.kafka.connect.json.JsonConverter",
- "value.converter": "org.apache.kafka.connect.json.JsonConverter",
- "connect.cosmos.task.poll.interval": "100",
- "connect.cosmos.connection.endpoint": "https://<cosmosinstance-name>.documents.azure.com:443/",
- "connect.cosmos.master.key": "<cosmosdbprimarykey>",
- "connect.cosmos.databasename": "kafkaconnect",
- "connect.cosmos.containers.topicmap": "apparels#kafka",
- "connect.cosmos.offset.useLatest": false,
- "value.converter.schemas.enable": "false",
- "key.converter.schemas.enable": "false"
- }
-}
-```
-
-For more information on each of the above configuration properties, see the [source properties](#source-configuration-properties) section. Once you have all the values filled out, save the JSON file somewhere locally. You can use this file to create the connector using the REST API.
-
-#### Create connector using Control Center
-
-An easy option to create the connector is from the Confluent Control Center portal. Follow the [Confluent setup guide](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-3-install-a-ak-connector-and-generate-sample-data) to create a connector from Control Center. When setting up, instead of using the `DatagenConnector` option, use the `CosmosDBSourceConnector` tile. When configuring the source connector, fill out the values as you've filled in the JSON file.
-
-Alternatively, in the connectors page, you can upload the JSON file built from the previous section by using the **Upload connector config file** option.
--
-#### Create connector using REST API
-
-Create the source connector using the Connect REST API
-
-```bash
-# Curl to Kafka connect service
-curl -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file> http://localhost:8083/connectors
-```
-
-## Insert document into Azure Cosmos DB
-
-1. Sign into the [Azure portal](https://portal.azure.com/learn.docs.microsoft.com) and navigate to your Azure Cosmos DB account.
-1. Open the **Data Explorer** tab and select **Databases**.
-1. Open the "kafkaconnect" database and "kafka" container you created earlier.
-1. To create a new JSON document, in the SQL API pane, expand the "kafka" container, select **Items**, then select **New Item** in the toolbar.
-1. Now, add a document to the container with the following structure. Paste the following sample JSON block into the Items tab, overwriting the current content:
-
- ``` json
-
- {
- "id": "2",
- "productId": "33218897",
- "category": "Women's Outerwear",
- "manufacturer": "Contoso",
- "description": "Black wool pea-coat",
- "price": "49.99",
- "shipping": {
- "weight": 2,
- "dimensions": {
- "width": 8,
- "height": 11,
- "depth": 3
- }
- }
- }
-
- ```
-
-1. Select **Save**.
-1. Confirm the document has been saved by viewing the Items on the left-hand menu.
-
-### Confirm data written to Kafka topic
-
-1. Open Kafka Topic UI on `http://localhost:9000`.
-1. Select the Kafka "apparels" topic you created.
-1. Verify that the document you inserted into Azure Cosmos DB earlier appears in the Kafka topic.
-
-### Cleanup
-
-To delete the connector from the Confluent Control Center, navigate to the source connector you created and select the **Delete** icon.
--
-Alternatively, use the connector's REST API:
-
-```bash
-# Curl to Kafka connect service
-curl -X DELETE http://localhost:8083/connectors/cosmosdb-source-connector
-```
-
-To delete the created Azure Cosmos DB service and its resource group using Azure CLI, refer to these [steps](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/CosmosDB_Setup.md#cleanup).
-
-## Source configuration properties
-
-The following settings are used to configure the Kafka source connector. These configuration values determine which Azure Cosmos DB container the data is consumed from, which Kafka topics the data is written to, and the formats used to serialize the data. For an example with default values, see this [configuration file](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/src/docker/resources/source.example.json).
-
-| Name | Type | Description | Required/optional |
-| : | : | : | : |
-| connector.class | String | Class name of the Azure Cosmos DB source. It should be set to `com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector` | Required |
-| connect.cosmos.databasename | String | Name of the database to read from. | Required |
-| connect.cosmos.master.key | String | The Azure Cosmos DB primary key. | Required |
-| connect.cosmos.connection.endpoint | URI | The account endpoint. | Required |
-| connect.cosmos.containers.topicmap | String | Comma-separated topic to container mapping. For example, topic1#coll1, topic2#coll2 | Required |
-| connect.cosmos.messagekey.enabled | Boolean | Indicates whether the Kafka message key should be set. Default value is `true`. | Required |
-| connect.cosmos.messagekey.field | String | Use the field's value from the document as the message key. Default is `id`. | Required |
-| connect.cosmos.offset.useLatest | Boolean | Set to `true` to use the most recent source offset. Set to `false` to use the earliest recorded offset. Default value is `false`. | Required |
-| connect.cosmos.task.poll.interval | Int | Interval to poll the change feed container for changes. | Required |
-| key.converter | String | Serialization format for the key data written into Kafka topic. | Required |
-| value.converter | String | Serialization format for the value data written into the Kafka topic. | Required |
-| key.converter.schemas.enable | String | Set to `true` if the key data has embedded schema. | Optional |
-| value.converter.schemas.enable | String | Set to `true` if the value data has embedded schema. | Optional |
-| tasks.max | Int | Maximum number of connectors source tasks. Default value is `1`. | Optional |
-
-## Supported data types
-
-The Azure Cosmos DB source connector converts a JSON document to a schema and supports the following JSON data types:
-
-| JSON data type | Schema type |
-| : | : |
-| Array | Array |
-| Boolean | Boolean |
-| Number | Float32<br>Float64<br>Int8<br>Int16<br>Int32<br>Int64|
-| Null | String |
-| Object (JSON)| Struct|
-| String | String |
-
-## Next steps
-
-* Kafka Connect for Azure Cosmos DB [sink connector](kafka-connector-sink.md)
cosmos-db Kafka Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/kafka-connector.md
- Title: Use Kafka Connect for Azure Cosmos DB to read and write data
-description: Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB. Kafka Connect is a tool for scalably and reliably streaming data between Apache Kafka and other systems.
---- Previously updated : 06/28/2021---
-# Kafka Connect for Azure Cosmos DB
-
-[Kafka Connect](http://kafka.apache.org/documentation.html#connect) is a tool for scalably and reliably streaming data between Apache Kafka and other systems. Using Kafka Connect, you can define connectors that move large data sets into and out of Kafka. Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB.
-
-## Source & sink connectors semantics
-
-* **Source connector** - Currently, this connector supports at-least-once delivery with multiple tasks and exactly-once delivery for single tasks.
-
-* **Sink connector** - This connector fully supports exactly-once semantics.
-
-## Supported data formats
-
-The source and sink connectors can be configured to support the following data formats:
-
-| Format | Description |
-| :-- | :- |
-| Plain JSON | JSON record structure without any attached schema. |
-| JSON with schema | JSON record structure with explicit schema information to ensure the data matches the expected format. |
-| AVRO | A row-oriented remote procedure call and data serialization framework developed within Apache's Hadoop project. It uses JSON to define data types and protocols, and serializes data in a compact binary format. |
-
-The key and value settings, including the format and serialization, can be configured independently in Kafka, so it's possible to work with different data formats for keys and values. To cater for different data formats, there is converter configuration for both `key.converter` and `value.converter`.
-
-## Converter configuration examples
-
-### <a id="json-plain"></a>Plain JSON
-
-If you need to use JSON without schema registry for connect data, use the `JsonConverter` supported with Kafka. The following example shows the `JsonConverter` key and value properties that are added to the configuration:
-
- ```java
- key.converter=org.apache.kafka.connect.json.JsonConverter
- key.converter.schemas.enable=false
- value.converter=org.apache.kafka.connect.json.JsonConverter
- value.converter.schemas.enable=false
- ```
-
-### <a id="json-with-schema"></a>JSON with schema
-
-Set the properties `key.converter.schemas.enable` and `value.converter.schemas.enable` to true so that the key or value is treated as a composite JSON object that contains both an internal schema and the data. Without these properties, the key or value is treated as plain JSON.
-
- ```java
- key.converter=org.apache.kafka.connect.json.JsonConverter
- key.converter.schemas.enable=true
- value.converter=org.apache.kafka.connect.json.JsonConverter
- value.converter.schemas.enable=true
- ```
-
-The resulting message to Kafka would look like the example below, with schema and payload as top-level elements in the JSON:
-
- ```json
- {
- "schema": {
- "type": "struct",
- "fields": [
- {
- "type": "int32",
- "optional": false,
- "field": "userid"
- },
- {
- "type": "string",
- "optional": false,
- "field": "name"
- }
- ],
- "optional": false,
- "name": "ksql.users"
- },
- "payload": {
- "userid": 123,
- "name": "user's name"
- }
- }
- ```
-
-> [!NOTE]
-> The message written to Azure Cosmos DB is made up of the schema and payload. Notice the size of the message, as well as the proportion of it that is made up of the payload vs. the schema. The schema is repeated in every message you write to Kafka. In scenarios like this, you may want to use a serialization format like JSON Schema or AVRO, where the schema is stored separately, and the message holds just the payload.
-
-### <a id="avro"></a>AVRO
-
-The Kafka connector supports the AVRO data format. To use AVRO format, configure an `AvroConverter` so that Kafka Connect knows how to work with AVRO data. Azure Cosmos DB Kafka Connect has been tested with the [AvroConverter](https://www.confluent.io/hub/confluentinc/kafka-connect-avro-converter) supplied by Confluent, under the Apache 2.0 license. You can also use a different custom converter if you prefer.
-
-Kafka deals with keys and values independently. Specify the `key.converter` and `value.converter` properties as required in the worker configuration. When using `AvroConverter`, add an extra converter property that provides the URL for the schema registry. The following example shows the AvroConverter key and value properties that are added to the configuration:
-
- ```java
- key.converter=io.confluent.connect.avro.AvroConverter
- key.converter.schema.registry.url=http://schema-registry:8081
- value.converter=io.confluent.connect.avro.AvroConverter
- value.converter.schema.registry.url=http://schema-registry:8081
- ```
-
-## Choose a conversion format
-
-The following are some considerations on how to choose a conversion format:
-
-* When configuring a **Source connector**:
-
- * If you want Kafka Connect to include plain JSON in the message it writes to Kafka, set [Plain JSON](#json-plain) configuration.
-
- * If you want Kafka Connect to include the schema in the message it writes to Kafka, set [JSON with Schema](#json-with-schema) configuration.
-
- * If you want Kafka Connect to include AVRO format in the message it writes to Kafka, set [AVRO](#avro) configuration.
-
-* If you're consuming JSON data from a Kafka topic into a **Sink connector**, understand how the JSON was serialized when it was written to the Kafka topic:
-
- * If it was written with JSON serializer, set Kafka Connect to use the JSON converter `(org.apache.kafka.connect.json.JsonConverter)`.
-
- * If the JSON data was written as a plain string, determine if the data includes a nested schema or payload. If it does, set [JSON with schema](#json-with-schema) configuration.
- * However, if you're consuming JSON data and it doesn't have the schema or payload construct, then you must tell Kafka Connect **not** to look for a schema by setting `schemas.enable=false` as per the [Plain JSON](#json-plain) configuration.
-
- * If it was written with AVRO serializer, set Kafka Connect to use the AVRO converter `(io.confluent.connect.avro.AvroConverter)` as per [AVRO](#avro) configuration.
-
-## Configuration
-
-### Common configuration properties
-
-The source and sink connectors share the following common configuration properties:
-
-| Name | Type | Description | Required/Optional |
-| : | : | : | : |
-| connect.cosmos.connection.endpoint | uri | Cosmos endpoint URI string | Required |
-| connect.cosmos.master.key | string | The Azure Cosmos DB primary key that the sink connects with. | Required |
-| connect.cosmos.databasename | string | The name of the Azure Cosmos database the sink writes to. | Required |
-| connect.cosmos.containers.topicmap | string | Mapping between Kafka topics and Azure Cosmos DB containers. It is formatted using CSV as `topic#container,topic2#container2` | Required |
-
-For sink connector-specific configuration, see the [Sink Connector Documentation](kafka-connector-sink.md)
-
-For source connector-specific configuration, see the [Source Connector Documentation](kafka-connector-source.md)
-
-## Common configuration errors
-
-If you misconfigure the converters in Kafka Connect, it can result in errors. These errors show up at the Kafka connector sink because you'll be trying to deserialize messages already stored in Kafka. Converter problems don't usually occur at the source because serialization is set at the source.
-
-For more information, see [common configuration errors](https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/#common-errors) doc.
-
-## Project setup
-
-Refer to the [Developer walkthrough and project setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Developer_Walkthrough.md) for initial setup instructions.
-
-## Performance testing
-
-For more information on the performance tests run for the sink and source connectors, see the [Performance testing document](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Performance_Testing.md).
-
-Refer to the [Performance environment setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/src/perf/README.md) for exact steps on deploying the performance test environment for the connectors.
-
-## Resources
-
-* [Kafka Connect](http://kafka.apache.org/documentation.html#connect)
-* [Kafka Connect Deep Dive – Converters and Serialization Explained](https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/)
-
-## Next steps
-
-* Kafka Connect for Azure Cosmos DB [source connector](kafka-connector-source.md)
-* Kafka Connect for Azure Cosmos DB [sink connector](kafka-connector-sink.md)
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-bicep.md
- Title: Create and manage Azure Cosmos DB with Bicep
-description: Use Bicep to create and configure Azure Cosmos DB for Core (SQL) API
---- Previously updated : 02/18/2022----
-# Manage Azure Cosmos DB Core (SQL) API resources with Bicep
--
-In this article, you learn how to use Bicep to deploy and manage your Azure Cosmos DB accounts, databases, and containers.
-
-This article shows Bicep samples for Core (SQL) API accounts. You can also find Bicep samples for [Cassandra](../cassandr) APIs.
-
-> [!IMPORTANT]
->
-> * Account names are limited to 44 characters, all lowercase.
-> * To change the throughput (RU/s) values, redeploy the Bicep file with updated RU/s.
-> * When you add or remove locations to an Azure Cosmos account, you can't simultaneously modify other properties. These operations must be done separately.
-> * To provision throughput at the database level and share across all containers, apply the throughput values to the database options property.
-
-To create any of the Azure Cosmos DB resources below, copy the following example into a new Bicep file. You can optionally create a parameters file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Azure Bicep files, including [Azure CLI](../../azure-resource-manager/bicep/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/bicep/deploy-powershell.md), and [Cloud Shell](../../azure-resource-manager/bicep/deploy-cloud-shell.md).
-
-<a id="create-autoscale"></a>
-
-## Azure Cosmos account with autoscale throughput
-
-Create an Azure Cosmos account in two regions with options for consistency and failover, and a database and container configured for autoscale throughput with most index policy options enabled.
--
-<a id="create-analytical-store"></a>
-
-## Azure Cosmos account with analytical store
-
-Create an Azure Cosmos account in one region with a container that has analytical TTL enabled and options for manual or autoscale throughput.
--
-<a id="create-manual"></a>
-
-## Azure Cosmos account with standard provisioned throughput
-
-Create an Azure Cosmos account in two regions with options for consistency and failover, and a database and container configured for standard throughput with most policy options enabled.
--
-<a id="create-sproc"></a>
-
-## Azure Cosmos DB container with server-side functionality
-
-Create an Azure Cosmos account, database, and container with a stored procedure, trigger, and user-defined function.
--
-<a id="create-rbac"></a>
-
-## Azure Cosmos DB account with Azure AD and RBAC
-
-Create an Azure Cosmos account, a natively maintained Role Definition, and a natively maintained Role Assignment for an Azure AD identity.
--
-<a id="free-tier"></a>
-
-## Free tier Azure Cosmos DB account
-
-Create a free-tier Azure Cosmos account and a database with shared throughput that can be shared with up to 25 containers.
--
-## Next steps
-
-Here are some additional resources:
-
-* [Bicep documentation](../../azure-resource-manager/bicep/index.yml)
-* [Install Bicep tools](../../azure-resource-manager/bicep/install.md)
cosmos-db Manage With Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-cli.md
- Title: Manage Azure Cosmos DB Core (SQL) API resources using Azure CLI
-description: Manage Azure Cosmos DB Core (SQL) API resources using Azure CLI.
- Previously updated : 02/18/2022
-# Manage Azure Cosmos Core (SQL) API resources using Azure CLI
--
-The following guide describes common commands to automate management of your Azure Cosmos DB accounts, databases, and containers using the Azure CLI. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). You can also find more examples in [Azure CLI samples for Azure Cosmos DB](cli-samples.md), including how to create and manage Cosmos DB accounts, databases, and containers for the MongoDB, Gremlin, Cassandra, and Table APIs.
- This article requires version 2.22.1 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.
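-
-To check which version of the Azure CLI is installed locally, you can run the following (`az version` is a standard Azure CLI command, not specific to Cosmos DB):
-
-```azurecli-interactive
-# Show the installed Azure CLI and extension versions
-az version
-```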
-For Azure CLI samples for other APIs, see [CLI Samples for Cassandra](../cassandr).
-
-> [!IMPORTANT]
-> Azure Cosmos DB resources cannot be renamed as this violates how Azure Resource Manager works with resource URIs.
-
-## Azure Cosmos Accounts
-
-The following sections demonstrate how to manage the Azure Cosmos account, including:
-- [Create an Azure Cosmos account](#create-an-azure-cosmos-db-account)
-- [Add or remove regions](#add-or-remove-regions)
-- [Enable multi-region writes](#enable-multiple-write-regions)
-- [Set regional failover priority](#set-failover-priority)
-- [Enable service-managed failover](#enable-service-managed-failover)
-- [Trigger manual failover](#trigger-manual-failover)
-- [List account keys](#list-account-keys)
-- [List read-only account keys](#list-read-only-account-keys)
-- [List connection strings](#list-connection-strings)
-- [Regenerate account key](#regenerate-account-key)
-### Create an Azure Cosmos DB account
-
-Create an Azure Cosmos DB account with the SQL API and Session consistency in the West US and East US regions:
-
-> [!IMPORTANT]
-> The Azure Cosmos account name must be lowercase and less than 44 characters.
-
-```azurecli-interactive
-resourceGroupName='MyResourceGroup'
-accountName='mycosmosaccount' #needs to be lower case and less than 44 characters
-
-az cosmosdb create \
- -n $accountName \
- -g $resourceGroupName \
- --default-consistency-level Session \
- --locations regionName='West US' failoverPriority=0 isZoneRedundant=False \
- --locations regionName='East US' failoverPriority=1 isZoneRedundant=False
-```
-
-### Add or remove regions
-
-Create an Azure Cosmos account with two regions, add a region, and remove a region.
-
-> [!NOTE]
-> You cannot simultaneously add or remove regions (`locations`) and change other properties for an Azure Cosmos account. Modifying regions must be performed as a separate operation from any other change to the account resource.
-> [!NOTE]
-> This command allows you to add and remove regions but does not allow you to modify failover priorities or trigger a manual failover. See [Set failover priority](#set-failover-priority) and [Trigger manual failover](#trigger-manual-failover).
-> [!TIP]
-> When a new region is added, all data must be fully replicated and committed into the new region before the region is marked as available. The amount of time this operation takes will depend upon how much data is stored within the account. If an [asynchronous throughput scaling operation](../scaling-provisioned-throughput-best-practices.md#background-on-scaling-rus) is in progress, the throughput scale-up operation will be paused and will resume automatically when the add/remove region operation is complete.
-
-```azurecli-interactive
-resourceGroupName='myResourceGroup'
-accountName='mycosmosaccount'
-
-# Create an account with 2 regions
-az cosmosdb create --name $accountName --resource-group $resourceGroupName \
- --locations regionName="West US" failoverPriority=0 isZoneRedundant=False \
- --locations regionName="East US" failoverPriority=1 isZoneRedundant=False
-
-# Add a region
-az cosmosdb update --name $accountName --resource-group $resourceGroupName \
- --locations regionName="West US" failoverPriority=0 isZoneRedundant=False \
- --locations regionName="East US" failoverPriority=1 isZoneRedundant=False \
- --locations regionName="South Central US" failoverPriority=2 isZoneRedundant=False
-
-# Remove a region
-az cosmosdb update --name $accountName --resource-group $resourceGroupName \
- --locations regionName="West US" failoverPriority=0 isZoneRedundant=False \
- --locations regionName="East US" failoverPriority=1 isZoneRedundant=False
-```
-
-### Enable multiple write regions
-
-Enable multi-region writes for a Cosmos account.
-
-```azurecli-interactive
-# Update an Azure Cosmos account from single write region to multiple write regions
-resourceGroupName='myResourceGroup'
-accountName='mycosmosaccount'
-
-# Get the account resource id for an existing account
-accountId=$(az cosmosdb show -g $resourceGroupName -n $accountName --query id -o tsv)
-
-az cosmosdb update --ids $accountId --enable-multiple-write-locations true
-```
-
-### Set failover priority
-
-Set the failover priority for an Azure Cosmos account configured for service-managed failover.
-
-```azurecli-interactive
-# Assume region order is initially 'West US'=0 'East US'=1 'South Central US'=2 for account
-resourceGroupName='myResourceGroup'
-accountName='mycosmosaccount'
-
-# Get the account resource id for an existing account
-accountId=$(az cosmosdb show -g $resourceGroupName -n $accountName --query id -o tsv)
-
-# Make South Central US the next region to fail over to instead of East US
-az cosmosdb failover-priority-change --ids $accountId \
- --failover-policies 'West US=0' 'South Central US=1' 'East US=2'
-```
-
-### Enable service-managed failover
-
-```azurecli-interactive
-# Enable service-managed failover on an existing account
-resourceGroupName='myResourceGroup'
-accountName='mycosmosaccount'
-
-# Get the account resource id for an existing account
-accountId=$(az cosmosdb show -g $resourceGroupName -n $accountName --query id -o tsv)
-
-az cosmosdb update --ids $accountId --enable-automatic-failover true
-```
-
-### Trigger manual failover
-
-> [!CAUTION]
-> Changing the region with priority = 0 will trigger a manual failover for an Azure Cosmos account. Any other priority change will not trigger a failover.
-
-> [!NOTE]
-> If you perform a manual failover operation while an [asynchronous throughput scaling operation](../scaling-provisioned-throughput-best-practices.md#background-on-scaling-rus) is in progress, the throughput scale-up operation will be paused. It will resume automatically when the failover operation is complete.
-
-```azurecli-interactive
-# Assume region order is initially 'West US=0' 'East US=1' 'South Central US=2' for account
-resourceGroupName='myResourceGroup'
-accountName='mycosmosaccount'
-
-# Get the account resource id for an existing account
-accountId=$(az cosmosdb show -g $resourceGroupName -n $accountName --query id -o tsv)
-
-# Trigger a manual failover to promote East US as the new write region
-az cosmosdb failover-priority-change --ids $accountId \
- --failover-policies 'East US=0' 'South Central US=1' 'West US=2'
-```
-
-### <a id="list-account-keys"></a> List all account keys
-
-Get all keys for a Cosmos account.
-
-```azurecli-interactive
-# List all account keys
-resourceGroupName='MyResourceGroup'
-accountName='mycosmosaccount'
-
-az cosmosdb keys list \
- -n $accountName \
- -g $resourceGroupName
-```
-
-### List read-only account keys
-
-Get read-only keys for a Cosmos account.
-
-```azurecli-interactive
-# List read-only account keys
-resourceGroupName='MyResourceGroup'
-accountName='mycosmosaccount'
-
-az cosmosdb keys list \
- -n $accountName \
- -g $resourceGroupName \
- --type read-only-keys
-```
-
-### List connection strings
-
-Get the connection strings for a Cosmos account.
-
-```azurecli-interactive
-# List connection strings
-resourceGroupName='MyResourceGroup'
-accountName='mycosmosaccount'
-
-az cosmosdb keys list \
- -n $accountName \
- -g $resourceGroupName \
- --type connection-strings
-```
-
-### Regenerate account key
-
-Regenerate an account key for a Cosmos account.
-
-```azurecli-interactive
-# Regenerate secondary account keys
-# key-kind values: primary, primaryReadonly, secondary, secondaryReadonly
-az cosmosdb keys regenerate \
- -n $accountName \
- -g $resourceGroupName \
- --key-kind secondary
-```
-
-## Azure Cosmos DB database
-
-The following sections demonstrate how to manage the Azure Cosmos DB database, including:
-- [Create a database](#create-a-database)
-- [Create a database with shared throughput](#create-a-database-with-shared-throughput)
-- [Migrate a database to autoscale throughput](#migrate-a-database-to-autoscale-throughput)
-- [Change database throughput](#change-database-throughput)
-- [Prevent a database from being deleted](#prevent-a-database-from-being-deleted)
-### Create a database
-
-Create a Cosmos database.
-
-```azurecli-interactive
-resourceGroupName='MyResourceGroup'
-accountName='mycosmosaccount'
-databaseName='database1'
-
-az cosmosdb sql database create \
- -a $accountName \
- -g $resourceGroupName \
- -n $databaseName
-```
-
-### Create a database with shared throughput
-
-Create a Cosmos database with shared throughput.
-
-```azurecli-interactive
-resourceGroupName='MyResourceGroup'
-accountName='mycosmosaccount'
-databaseName='database1'
-throughput=400
-
-az cosmosdb sql database create \
- -a $accountName \
- -g $resourceGroupName \
- -n $databaseName \
- --throughput $throughput
-```
-
-### Migrate a database to autoscale throughput
-
-```azurecli-interactive
-resourceGroupName='MyResourceGroup'
-accountName='mycosmosaccount'
-databaseName='database1'
-
-# Migrate to autoscale throughput
-az cosmosdb sql database throughput migrate \
- -a $accountName \
- -g $resourceGroupName \
- -n $databaseName \
- -t 'autoscale'
-
-# Read the new autoscale max throughput
-az cosmosdb sql database throughput show \
- -g $resourceGroupName \
- -a $accountName \
- -n $databaseName \
- --query resource.autoscaleSettings.maxThroughput \
- -o tsv
-```
-
-### Change database throughput
-
-Increase the throughput of a Cosmos database by 1000 RU/s.
-
-```azurecli-interactive
-resourceGroupName='MyResourceGroup'
-accountName='mycosmosaccount'
-databaseName='database1'
-newRU=1000
-
-# Get minimum throughput to make sure newRU is not lower than minRU
-minRU=$(az cosmosdb sql database throughput show \
- -g $resourceGroupName -a $accountName -n $databaseName \
- --query resource.minimumThroughput -o tsv)
-
-if [ $minRU -gt $newRU ]; then
- newRU=$minRU
-fi
-
-az cosmosdb sql database throughput update \
- -a $accountName \
- -g $resourceGroupName \
- -n $databaseName \
- --throughput $newRU
-```
-
-### Prevent a database from being deleted
-
-Put an Azure resource delete lock on a database to prevent it from being deleted. Using this feature requires preventing the Cosmos account from being changed by data plane SDKs. To learn more, see [preventing changes from SDKs](../role-based-access-control.md#prevent-sdk-changes). Azure resource locks can also prevent a resource from being changed by specifying a `ReadOnly` lock type. For a Cosmos database, a `ReadOnly` lock can be used to prevent throughput from being changed.
-
-```azurecli-interactive
-resourceGroupName='myResourceGroup'
-accountName='mycosmosaccount'
-databaseName='database1'
-
-lockType='CanNotDelete' # CanNotDelete or ReadOnly
-databaseParent="databaseAccounts/$accountName"
-databaseLockName="$databaseName-Lock"
-
-# Create a delete lock on database
-az lock create --name $databaseLockName \
- --resource-group $resourceGroupName \
- --resource-type Microsoft.DocumentDB/sqlDatabases \
- --lock-type $lockType \
- --parent $databaseParent \
- --resource $databaseName
-
-# Delete lock on database
-lockid=$(az lock show --name $databaseLockName \
- --resource-group $resourceGroupName \
- --resource-type Microsoft.DocumentDB/sqlDatabases \
- --resource $databaseName \
- --parent $databaseParent \
- --output tsv --query id)
-az lock delete --ids $lockid
-```
-
-## Azure Cosmos DB container
-
-The following sections demonstrate how to manage the Azure Cosmos DB container, including:
-- [Create a container](#create-a-container)
-- [Create a container with autoscale](#create-a-container-with-autoscale)
-- [Create a container with TTL enabled](#create-a-container-with-ttl)
-- [Create a container with custom index policy](#create-a-container-with-a-custom-index-policy)
-- [Change container throughput](#change-container-throughput)
-- [Migrate a container to autoscale throughput](#migrate-a-container-to-autoscale-throughput)
-- [Prevent a container from being deleted](#prevent-a-container-from-being-deleted)
-### Create a container
-
-Create a Cosmos container with a default index policy, a partition key, and 400 RU/s.
-
-```azurecli-interactive
-# Create a SQL API container
-resourceGroupName='MyResourceGroup'
-accountName='mycosmosaccount'
-databaseName='database1'
-containerName='container1'
-partitionKey='/myPartitionKey'
-throughput=400
-
-az cosmosdb sql container create \
- -a $accountName -g $resourceGroupName \
- -d $databaseName -n $containerName \
- -p $partitionKey --throughput $throughput
-```
-
-### Create a container with autoscale
-
-Create a Cosmos container with a default index policy, a partition key, and an autoscale max throughput of 4000 RU/s.
-
-```azurecli-interactive
-# Create a SQL API container
-resourceGroupName='MyResourceGroup'
-accountName='mycosmosaccount'
-databaseName='database1'
-containerName='container1'
-partitionKey='/myPartitionKey'
-maxThroughput=4000
-
-az cosmosdb sql container create \
- -a $accountName -g $resourceGroupName \
- -d $databaseName -n $containerName \
- -p $partitionKey --max-throughput $maxThroughput
-```
-
-### Create a container with TTL
-
-Enable TTL on an existing Cosmos container.
-
-```azurecli-interactive
-# Create an Azure Cosmos container with TTL of one day
-resourceGroupName='myResourceGroup'
-accountName='mycosmosaccount'
-databaseName='database1'
-containerName='container1'
-
-az cosmosdb sql container update \
- -g $resourceGroupName \
- -a $accountName \
- -d $databaseName \
- -n $containerName \
- --ttl=86400
-```
-
-### Create a container with a custom index policy
-
-Create a Cosmos container with a custom index policy that includes a spatial index and a composite index, a partition key, and 400 RU/s.
-
-```azurecli-interactive
-# Create a SQL API container
-resourceGroupName='MyResourceGroup'
-accountName='mycosmosaccount'
-databaseName='database1'
-containerName='container1'
-partitionKey='/myPartitionKey'
-throughput=400
-
-# Generate a unique 10 character alphanumeric string to ensure unique resource names
-uniqueId=$(env LC_CTYPE=C tr -dc 'a-z0-9' < /dev/urandom | fold -w 10 | head -n 1)
-
-# Define the index policy for the container, include spatial and composite indexes
-idxpolicy=$(cat << EOF
-{
- "indexingMode": "consistent",
- "includedPaths": [
- {"path": "/*"}
- ],
- "excludedPaths": [
- { "path": "/headquarters/employees/?"}
- ],
- "spatialIndexes": [
- {"path": "/*", "types": ["Point"]}
- ],
- "compositeIndexes":[
- [
- { "path":"/name", "order":"ascending" },
- { "path":"/age", "order":"descending" }
- ]
- ]
-}
-EOF
-)
-# Persist index policy to json file
-echo "$idxpolicy" > "idxpolicy-$uniqueId.json"
--
-az cosmosdb sql container create \
- -a $accountName -g $resourceGroupName \
- -d $databaseName -n $containerName \
- -p $partitionKey --throughput $throughput \
- --idx @idxpolicy-$uniqueId.json
-
-# Clean up temporary index policy file
-rm -f "idxpolicy-$uniqueId.json"
-```
-
-### Change container throughput
-
-Increase the throughput of a Cosmos container by 1000 RU/s.
-
-```azurecli-interactive
-resourceGroupName='MyResourceGroup'
-accountName='mycosmosaccount'
-databaseName='database1'
-containerName='container1'
-newRU=1000
-
-# Get minimum throughput to make sure newRU is not lower than minRU
-minRU=$(az cosmosdb sql container throughput show \
- -g $resourceGroupName -a $accountName -d $databaseName \
- -n $containerName --query resource.minimumThroughput -o tsv)
-
-if [ $minRU -gt $newRU ]; then
- newRU=$minRU
-fi
-
-az cosmosdb sql container throughput update \
- -a $accountName \
- -g $resourceGroupName \
- -d $databaseName \
- -n $containerName \
- --throughput $newRU
-```
-
-### Migrate a container to autoscale throughput
-
-```azurecli-interactive
-resourceGroupName='MyResourceGroup'
-accountName='mycosmosaccount'
-databaseName='database1'
-containerName='container1'
-
-# Migrate to autoscale throughput
-az cosmosdb sql container throughput migrate \
- -a $accountName \
- -g $resourceGroupName \
- -d $databaseName \
- -n $containerName \
- -t 'autoscale'
-
-# Read the new autoscale max throughput
-az cosmosdb sql container throughput show \
- -g $resourceGroupName \
- -a $accountName \
- -d $databaseName \
- -n $containerName \
- --query resource.autoscaleSettings.maxThroughput \
- -o tsv
-```
-
-### Prevent a container from being deleted
-
-Put an Azure resource delete lock on a container to prevent it from being deleted. Using this feature requires preventing the Cosmos account from being changed by data plane SDKs. To learn more, see [preventing changes from SDKs](../role-based-access-control.md#prevent-sdk-changes). Azure resource locks can also prevent a resource from being changed by specifying a `ReadOnly` lock type. For a Cosmos container, locks can be used to prevent throughput or any other property from being changed.
-
-```azurecli-interactive
-resourceGroupName='myResourceGroup'
-accountName='mycosmosaccount'
-databaseName='database1'
-containerName='container1'
-
-lockType='CanNotDelete' # CanNotDelete or ReadOnly
-databaseParent="databaseAccounts/$accountName"
-containerParent="databaseAccounts/$accountName/sqlDatabases/$databaseName"
-containerLockName="$containerName-Lock"
-
-# Create a delete lock on container
-az lock create --name $containerLockName \
- --resource-group $resourceGroupName \
- --resource-type Microsoft.DocumentDB/containers \
- --lock-type $lockType \
- --parent $containerParent \
- --resource $containerName
-
-# Delete lock on container
-lockid=$(az lock show --name $containerLockName \
- --resource-group $resourceGroupName \
- --resource-type Microsoft.DocumentDB/containers \
- --resource-name $containerName \
- --parent $containerParent \
- --output tsv --query id)
-az lock delete --ids $lockid
-```
-
-## Next steps
-
-For more information on the Azure CLI, see:
-- [Install Azure CLI](/cli/azure/install-azure-cli)
-- [Azure CLI Reference](/cli/azure/cosmosdb)
-- [More Azure CLI samples for Azure Cosmos DB](cli-samples.md)
cosmos-db Manage With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-powershell.md
- Title: Manage Azure Cosmos DB Core (SQL) API resources using PowerShell
-description: Manage Azure Cosmos DB Core (SQL) API resources using PowerShell.
- Previously updated : 02/18/2022
-# Manage Azure Cosmos DB Core (SQL) API resources using PowerShell
-
-The following guide describes how to use PowerShell to script and automate management of Azure Cosmos DB Core (SQL) API resources, including the Cosmos account, database, container, and throughput. For PowerShell cmdlets for other APIs, see [PowerShell Samples for Cassandra](../cassandr).
-
-> [!NOTE]
-> Samples in this article use [Az.CosmosDB](/powershell/module/az.cosmosdb) management cmdlets. See the [Az.CosmosDB](/powershell/module/az.cosmosdb) API reference page for the latest changes.
-
-For cross-platform management of Azure Cosmos DB, you can use the `Az` and `Az.CosmosDB` cmdlets with [cross-platform PowerShell](/powershell/scripting/install/installing-powershell), as well as the [Azure CLI](manage-with-cli.md), the [REST API][rp-rest-api], or the [Azure portal](create-sql-api-dotnet.md#create-account).
--
-## Getting Started
-
-Follow the instructions in [How to install and configure Azure PowerShell][powershell-install-configure] to install and sign in to your Azure account in PowerShell.
-
-> [!IMPORTANT]
-> Azure Cosmos DB resources cannot be renamed as this violates how Azure Resource Manager works with resource URIs.
-
-## Azure Cosmos accounts
-
-The following sections demonstrate how to manage the Azure Cosmos account, including:
-
-* [Create an Azure Cosmos account](#create-account)
-* [Update an Azure Cosmos account](#update-account)
-* [List all Azure Cosmos accounts in a resource group](#list-accounts)
-* [Get an Azure Cosmos account](#get-account)
-* [Delete an Azure Cosmos account](#delete-account)
-* [Update tags for an Azure Cosmos account](#update-tags)
-* [List keys for an Azure Cosmos account](#list-keys)
-* [Regenerate keys for an Azure Cosmos account](#regenerate-keys)
-* [List connection strings for an Azure Cosmos account](#list-connection-strings)
-* [Modify failover priority for an Azure Cosmos account](#modify-failover-priority)
-* [Trigger a manual failover for an Azure Cosmos account](#trigger-manual-failover)
-* [List resource locks on an Azure Cosmos DB account](#list-account-locks)
-
-### <a id="create-account"></a> Create an Azure Cosmos account
-
-This command creates an Azure Cosmos DB database account with [multiple regions][distribute-data-globally], [service-managed failover](../how-to-manage-database-account.md#automatic-failover), and a bounded-staleness [consistency policy](../consistency-levels.md).
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$apiKind = "Sql"
-$consistencyLevel = "BoundedStaleness"
-$maxStalenessInterval = 300
-$maxStalenessPrefix = 100000
-$locations = @()
-$locations += New-AzCosmosDBLocationObject -LocationName "East US" -FailoverPriority 0 -IsZoneRedundant 0
-$locations += New-AzCosmosDBLocationObject -LocationName "West US" -FailoverPriority 1 -IsZoneRedundant 0
-
-New-AzCosmosDBAccount `
- -ResourceGroupName $resourceGroupName `
- -LocationObject $locations `
- -Name $accountName `
- -ApiKind $apiKind `
- -EnableAutomaticFailover:$true `
- -DefaultConsistencyLevel $consistencyLevel `
- -MaxStalenessIntervalInSeconds $maxStalenessInterval `
- -MaxStalenessPrefix $maxStalenessPrefix
-```
-
-* `$resourceGroupName` The Azure resource group into which to deploy the Cosmos account. It must already exist.
-* `$locations` The regions for the database account. The region with `FailoverPriority 0` is the write region.
-* `$accountName` The name for the Azure Cosmos account. Must be unique, lowercase, include only alphanumeric and '-' characters, and between 3 and 31 characters in length.
-* `$apiKind` The type of Cosmos account to create. For more information, see [APIs in Cosmos DB](../introduction.md#simplified-application-development).
-* `$consistencyLevel`, `$maxStalenessInterval`, and `$maxStalenessPrefix` The default consistency level and settings of the Azure Cosmos account. For more information, see [Consistency Levels in Azure Cosmos DB](../consistency-levels.md).
-
-Azure Cosmos accounts can be configured with IP Firewall, Virtual Network service endpoints, and private endpoints. For information on how to configure the IP Firewall for Azure Cosmos DB, see [Configure IP Firewall](../how-to-configure-firewall.md). For information on how to enable service endpoints for Azure Cosmos DB, see [Configure access from virtual Networks](../how-to-configure-vnet-service-endpoint.md). For information on how to enable private endpoints for Azure Cosmos DB, see [Configure access from private endpoints](../how-to-configure-private-endpoints.md).
-
-### <a id="list-accounts"></a> List all Azure Cosmos accounts in a Resource Group
-
-This command lists all Azure Cosmos accounts in a Resource Group.
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-
-Get-AzCosmosDBAccount -ResourceGroupName $resourceGroupName
-```
-
-### <a id="get-account"></a> Get the properties of an Azure Cosmos account
-
-This command allows you to get the properties of an existing Azure Cosmos account.
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-
-Get-AzCosmosDBAccount -ResourceGroupName $resourceGroupName -Name $accountName
-```
-
-### <a id="update-account"></a> Update an Azure Cosmos account
-
-This command allows you to update your Azure Cosmos DB database account properties. Properties that can be updated include the following:
-
-* Adding or removing regions
-* Changing default consistency policy
-* Changing IP Range Filter
-* Changing Virtual Network configurations
-* Enabling multi-region writes
-
-> [!NOTE]
-> You cannot simultaneously add or remove regions (`locations`) and change other properties for an Azure Cosmos account. Modifying regions must be performed as a separate operation from any other change to the account.
-> [!NOTE]
-> This command allows you to add and remove regions but does not allow you to modify failover priorities or trigger a manual failover. See [Modify failover priority](#modify-failover-priority) and [Trigger manual failover](#trigger-manual-failover).
-> [!TIP]
-> When a new region is added, all data must be fully replicated and committed into the new region before the region is marked as available. The amount of time this operation takes will depend upon how much data is stored within the account. If an [asynchronous throughput scaling operation](../scaling-provisioned-throughput-best-practices.md#background-on-scaling-rus) is in progress, the throughput scale-up operation will be paused and will resume automatically when the add/remove region operation is complete.
-
-```azurepowershell-interactive
-# Create account with two regions
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$apiKind = "Sql"
-$consistencyLevel = "Session"
-$enableAutomaticFailover = $true
-$locations = @()
-$locations += New-AzCosmosDBLocationObject -LocationName "East US" -FailoverPriority 0 -IsZoneRedundant 0
-$locations += New-AzCosmosDBLocationObject -LocationName "West US" -FailoverPriority 1 -IsZoneRedundant 0
-
-# Create the Cosmos DB account
-New-AzCosmosDBAccount `
- -ResourceGroupName $resourceGroupName `
- -LocationObject $locations `
- -Name $accountName `
- -ApiKind $apiKind `
- -EnableAutomaticFailover:$enableAutomaticFailover `
- -DefaultConsistencyLevel $consistencyLevel
-
-# Add a region to the account
-$locationObject2 = @()
-$locationObject2 += New-AzCosmosDBLocationObject -LocationName "East US" -FailoverPriority 0 -IsZoneRedundant 0
-$locationObject2 += New-AzCosmosDBLocationObject -LocationName "West US" -FailoverPriority 1 -IsZoneRedundant 0
-$locationObject2 += New-AzCosmosDBLocationObject -LocationName "South Central US" -FailoverPriority 2 -IsZoneRedundant 0
-
-Update-AzCosmosDBAccountRegion `
- -ResourceGroupName $resourceGroupName `
- -Name $accountName `
- -LocationObject $locationObject2
-
-Write-Host "Update-AzCosmosDBAccountRegion returns before the region update is complete."
-Write-Host "Check account in Azure portal or using Get-AzCosmosDBAccount for region status."
-Write-Host "When region was added, press any key to continue."
-$HOST.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown") | OUT-NULL
-$HOST.UI.RawUI.Flushinputbuffer()
-
-# Remove West US region from the account
-$locationObject3 = @()
-$locationObject3 += New-AzCosmosDBLocationObject -LocationName "East US" -FailoverPriority 0 -IsZoneRedundant 0
-$locationObject3 += New-AzCosmosDBLocationObject -LocationName "South Central US" -FailoverPriority 1 -IsZoneRedundant 0
-
-Update-AzCosmosDBAccountRegion `
- -ResourceGroupName $resourceGroupName `
- -Name $accountName `
- -LocationObject $locationObject3
-
-Write-Host "Update-AzCosmosDBAccountRegion returns before the region update is complete."
-Write-Host "Check account in Azure portal or using Get-AzCosmosDBAccount for region status."
-```
-
-### <a id="multi-region-writes"></a> Enable multiple write regions for an Azure Cosmos account
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$enableAutomaticFailover = $false
-$enableMultiMaster = $true
-
-# First disable service-managed failover - cannot have both service-managed
-# failover and multi-region writes on an account
-Update-AzCosmosDBAccount `
- -ResourceGroupName $resourceGroupName `
- -Name $accountName `
- -EnableAutomaticFailover:$enableAutomaticFailover
-
-# Now enable multi-region writes
-Update-AzCosmosDBAccount `
- -ResourceGroupName $resourceGroupName `
- -Name $accountName `
- -EnableMultipleWriteLocations:$enableMultiMaster
-```
-
-### <a id="delete-account"></a> Delete an Azure Cosmos account
-
-This command deletes an existing Azure Cosmos account.
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-
-Remove-AzCosmosDBAccount `
- -ResourceGroupName $resourceGroupName `
- -Name $accountName `
- -PassThru:$true
-```
-
-### <a id="update-tags"></a> Update Tags of an Azure Cosmos account
-
-This command sets the [Azure resource tags][azure-resource-tags] for an Azure Cosmos account. Tags can be set both at account creation using `New-AzCosmosDBAccount` and on account update using `Update-AzCosmosDBAccount`.
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$tags = @{dept = "Finance"; environment = "Production";}
-
-Update-AzCosmosDBAccount `
- -ResourceGroupName $resourceGroupName `
- -Name $accountName `
- -Tag $tags
-```
-
-### <a id="list-keys"></a> List Account Keys
-
-When you create an Azure Cosmos account, the service generates two read-write access keys that can be used for authentication when the Azure Cosmos account is accessed. Read-only keys for authenticating read-only operations are also generated.
-By providing two access keys of each type, Azure Cosmos DB enables you to regenerate and rotate one key at a time with no interruption to your Azure Cosmos account.
-Cosmos DB accounts have two read-write keys (primary and secondary) and two read-only keys (primary and secondary).
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-
-Get-AzCosmosDBAccountKey `
- -ResourceGroupName $resourceGroupName `
- -Name $accountName `
- -Type "Keys"
-```
-
-### <a id="list-connection-strings"></a> List Connection Strings
-
-The following command retrieves connection strings to connect apps to the Cosmos DB account.
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-
-Get-AzCosmosDBAccountKey `
- -ResourceGroupName $resourceGroupName `
- -Name $accountName `
- -Type "ConnectionStrings"
-```
-
-### <a id="regenerate-keys"></a> Regenerate Account Keys
-
-Access keys to an Azure Cosmos account should be periodically regenerated to help keep connections secure. Primary and secondary access keys are assigned to the account, which allows clients to maintain access while one key at a time is regenerated.
-There are four key kinds for an Azure Cosmos account: Primary, Secondary, PrimaryReadonly, and SecondaryReadonly.
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup" # Resource Group must already exist
-$accountName = "mycosmosaccount" # Must be all lower case
-$keyKind = "primary" # Other key kinds: secondary, primaryReadonly, secondaryReadonly
-
-New-AzCosmosDBAccountKey `
- -ResourceGroupName $resourceGroupName `
- -Name $accountName `
- -KeyKind $keyKind
-```
-
-### <a id="enable-automatic-failover"></a> Enable service-managed failover
-
-The following command sets a Cosmos DB account to fail over automatically to its secondary region should the primary region become unavailable.
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$enableAutomaticFailover = $true
-$enableMultiMaster = $false
-
-# First disable multi-region writes - cannot have both automatic
-# failover and multi-region writes on an account
-Update-AzCosmosDBAccount `
- -ResourceGroupName $resourceGroupName `
- -Name $accountName `
- -EnableMultipleWriteLocations:$enableMultiMaster
-
-# Now enable service-managed failover
-Update-AzCosmosDBAccount `
- -ResourceGroupName $resourceGroupName `
- -Name $accountName `
- -EnableAutomaticFailover:$enableAutomaticFailover
-```
-
-### <a id="modify-failover-priority"></a> Modify Failover Priority
-
-For accounts configured with Service-Managed Failover, you can change the order in which Cosmos will promote secondary replicas to primary should the primary become unavailable.
-
-For the example below, assume the current failover priority is `West US = 0`, `East US = 1`, `South Central US = 2`. The command will change this to `West US = 0`, `South Central US = 1`, `East US = 2`.
-
-> [!CAUTION]
-> Changing the location for `failoverPriority=0` will trigger a manual failover for an Azure Cosmos account. Any other priority changes will not trigger a failover.
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$locations = @("West US", "South Central US", "East US") # Regions ordered by UPDATED failover priority
-
-Update-AzCosmosDBAccountFailoverPriority `
- -ResourceGroupName $resourceGroupName `
- -Name $accountName `
- -FailoverPolicy $locations
-```
-
-### <a id="trigger-manual-failover"></a> Trigger Manual Failover
-
-For accounts configured with manual failover, you can fail over and promote any secondary region to the write region by changing its `failoverPriority` to 0. This operation can be used to initiate a disaster recovery drill to test disaster recovery planning.
-
-For the example below, assume the account has a current failover priority of `West US = 0` and `East US = 1`. The command flips the regions.
-
-> [!CAUTION]
-> Changing `locationName` for `failoverPriority=0` will trigger a manual failover for an Azure Cosmos account. Any other priority change will not trigger a failover.
-
-> [!NOTE]
-> If you perform a manual failover operation while an [asynchronous throughput scaling operation](../scaling-provisioned-throughput-best-practices.md#background-on-scaling-rus) is in progress, the throughput scale-up operation will be paused. It will resume automatically when the failover operation is complete.
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$locations = @("East US", "West US") # Regions ordered by UPDATED failover priority
-
-Update-AzCosmosDBAccountFailoverPriority `
- -ResourceGroupName $resourceGroupName `
- -Name $accountName `
- -FailoverPolicy $locations
-```
-
-### <a id="list-account-locks"></a> List resource locks on an Azure Cosmos DB account
-
-Resource locks can be placed on Azure Cosmos DB resources including databases and collections. The example below shows how to list all Azure resource locks on an Azure Cosmos DB account.
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$resourceTypeAccount = "Microsoft.DocumentDB/databaseAccounts"
-$accountName = "mycosmosaccount"
-
-Get-AzResourceLock `
- -ResourceGroupName $resourceGroupName `
- -ResourceType $resourceTypeAccount `
- -ResourceName $accountName
-```
-
-## Azure Cosmos DB Database
-
-The following sections demonstrate how to manage the Azure Cosmos DB database, including:
-
-* [Create an Azure Cosmos DB database](#create-db)
-* [Create an Azure Cosmos DB database with shared throughput](#create-db-ru)
-* [Get the throughput of an Azure Cosmos DB database](#get-db-ru)
-* [Migrate database throughput to autoscale](#migrate-db-ru)
-* [List all Azure Cosmos DB databases in an account](#list-db)
-* [Get a single Azure Cosmos DB database](#get-db)
-* [Delete an Azure Cosmos DB database](#delete-db)
-* [Create a resource lock on an Azure Cosmos DB database to prevent delete](#create-db-lock)
-* [Remove a resource lock on an Azure Cosmos DB database](#remove-db-lock)
-
-### <a id="create-db"></a>Create an Azure Cosmos DB database
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-
-New-AzCosmosDBSqlDatabase `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -Name $databaseName
-```
-
-### <a id="create-db-ru"></a>Create an Azure Cosmos DB database with shared throughput
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$databaseRUs = 400
-
-New-AzCosmosDBSqlDatabase `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -Name $databaseName `
- -Throughput $databaseRUs
-```
-
-### <a id="get-db-ru"></a>Get the throughput of an Azure Cosmos DB database
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-
-Get-AzCosmosDBSqlDatabaseThroughput `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -Name $databaseName
-```
-
-## <a id="migrate-db-ru"></a>Migrate database throughput to autoscale
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-
-Invoke-AzCosmosDBSqlDatabaseThroughputMigration `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -Name $databaseName `
- -ThroughputType Autoscale
-```
-
-### <a id="list-db"></a>Get all Azure Cosmos DB databases in an account
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-
-Get-AzCosmosDBSqlDatabase `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName
-```
-
-### <a id="get-db"></a>Get a single Azure Cosmos DB database
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-
-Get-AzCosmosDBSqlDatabase `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -Name $databaseName
-```
-
-### <a id="delete-db"></a>Delete an Azure Cosmos DB database
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-
-Remove-AzCosmosDBSqlDatabase `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -Name $databaseName
-```
-
-### <a id="create-db-lock"></a>Create a resource lock on an Azure Cosmos DB database to prevent delete
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$resourceType = "Microsoft.DocumentDB/databaseAccounts/sqlDatabases"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$resourceName = "$accountName/$databaseName"
-$lockName = "myResourceLock"
-$lockLevel = "CanNotDelete"
-
-New-AzResourceLock `
- -ResourceGroupName $resourceGroupName `
- -ResourceType $resourceType `
- -ResourceName $resourceName `
- -LockName $lockName `
- -LockLevel $lockLevel
-```
-
-### <a id="remove-db-lock"></a>Remove a resource lock on an Azure Cosmos DB database
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$resourceType = "Microsoft.DocumentDB/databaseAccounts/sqlDatabases"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$resourceName = "$accountName/$databaseName"
-$lockName = "myResourceLock"
-
-Remove-AzResourceLock `
- -ResourceGroupName $resourceGroupName `
- -ResourceType $resourceType `
- -ResourceName $resourceName `
- -LockName $lockName
-```
-
-## Azure Cosmos DB Container
-
-The following sections demonstrate how to manage the Azure Cosmos DB container, including:
-
-* [Create an Azure Cosmos DB container](#create-container)
-* [Create an Azure Cosmos DB container with autoscale](#create-container-autoscale)
-* [Create an Azure Cosmos DB container with a large partition key](#create-container-big-pk)
-* [Get the throughput of an Azure Cosmos DB container](#get-container-ru)
-* [Migrate container throughput to autoscale](#migrate-container-ru)
-* [Create an Azure Cosmos DB container with custom indexing](#create-container-custom-index)
-* [Create an Azure Cosmos DB container with indexing turned off](#create-container-no-index)
-* [Create an Azure Cosmos DB container with unique key and TTL](#create-container-unique-key-ttl)
-* [Create an Azure Cosmos DB container with conflict resolution](#create-container-lww)
-* [List all Azure Cosmos DB containers in a database](#list-containers)
-* [Get a single Azure Cosmos DB container in a database](#get-container)
-* [Delete an Azure Cosmos DB container](#delete-container)
-* [Create a resource lock on an Azure Cosmos DB container to prevent delete](#create-container-lock)
-* [Remove a resource lock on an Azure Cosmos DB container](#remove-container-lock)
-
-### <a id="create-container"></a>Create an Azure Cosmos DB container
-
-```azurepowershell-interactive
-# Create an Azure Cosmos DB container with default indexes and throughput at 400 RU
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$containerName = "myContainer"
-$partitionKeyPath = "/myPartitionKey"
-$throughput = 400 #minimum = 400
-
-New-AzCosmosDBSqlContainer `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -DatabaseName $databaseName `
- -Name $containerName `
- -PartitionKeyKind Hash `
- -PartitionKeyPath $partitionKeyPath `
- -Throughput $throughput
-```
-
-### <a id="create-container-autoscale"></a>Create an Azure Cosmos DB container with autoscale
-
-```azurepowershell-interactive
-# Create an Azure Cosmos DB container with default indexes and autoscale throughput at 4000 RU
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$containerName = "myContainer"
-$partitionKeyPath = "/myPartitionKey"
-$autoscaleMaxThroughput = 4000 #minimum = 4000
-
-New-AzCosmosDBSqlContainer `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -DatabaseName $databaseName `
- -Name $containerName `
- -PartitionKeyKind Hash `
- -PartitionKeyPath $partitionKeyPath `
- -AutoscaleMaxThroughput $autoscaleMaxThroughput
-```
-
-### <a id="create-container-big-pk"></a>Create an Azure Cosmos DB container with a large partition key size
-
-```azurepowershell-interactive
-# Create an Azure Cosmos DB container with a large partition key value (version = 2)
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$containerName = "myContainer"
-$partitionKeyPath = "/myPartitionKey"
-
-New-AzCosmosDBSqlContainer `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -DatabaseName $databaseName `
- -Name $containerName `
- -PartitionKeyKind Hash `
- -PartitionKeyPath $partitionKeyPath `
- -PartitionKeyVersion 2
-```
-
-### <a id="get-container-ru"></a>Get the throughput of an Azure Cosmos DB container
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$containerName = "myContainer"
-
-Get-AzCosmosDBSqlContainerThroughput `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -DatabaseName $databaseName `
- -Name $containerName
-```
-
-### <a id="migrate-container-ru"></a>Migrate container throughput to autoscale
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$containerName = "myContainer"
-
-Invoke-AzCosmosDBSqlContainerThroughputMigration `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -DatabaseName $databaseName `
- -Name $containerName `
- -ThroughputType Autoscale
-```
-
-### <a id="create-container-custom-index"></a>Create an Azure Cosmos DB container with custom index policy
-
-```azurepowershell-interactive
-# Create a container with a custom indexing policy
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$containerName = "myContainer"
-$partitionKeyPath = "/myPartitionKey"
-$indexPathIncluded = "/*"
-$indexPathExcluded = "/myExcludedPath/*"
-
-$includedPathIndex = New-AzCosmosDBSqlIncludedPathIndex -DataType String -Kind Range
-$includedPath = New-AzCosmosDBSqlIncludedPath -Path $indexPathIncluded -Index $includedPathIndex
-
-$indexingPolicy = New-AzCosmosDBSqlIndexingPolicy `
- -IncludedPath $includedPath `
- -ExcludedPath $indexPathExcluded `
- -IndexingMode Consistent `
- -Automatic $true
-
-New-AzCosmosDBSqlContainer `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -DatabaseName $databaseName `
- -Name $containerName `
- -PartitionKeyKind Hash `
- -PartitionKeyPath $partitionKeyPath `
- -IndexingPolicy $indexingPolicy
-```
-
-### <a id="create-container-no-index"></a>Create an Azure Cosmos DB container with indexing turned off
-
-```azurepowershell-interactive
-# Create an Azure Cosmos DB container with no indexing
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$containerName = "myContainer"
-$partitionKeyPath = "/myPartitionKey"
-
-$indexingPolicy = New-AzCosmosDBSqlIndexingPolicy `
- -IndexingMode None
-
-New-AzCosmosDBSqlContainer `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -DatabaseName $databaseName `
- -Name $containerName `
- -PartitionKeyKind Hash `
- -PartitionKeyPath $partitionKeyPath `
- -IndexingPolicy $indexingPolicy
-```
-
-### <a id="create-container-unique-key-ttl"></a>Create an Azure Cosmos DB container with unique key policy and TTL
-
-```azurepowershell-interactive
-# Create a container with a unique key policy and TTL of one day
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$containerName = "myContainer"
-$partitionKeyPath = "/myPartitionKey"
-$uniqueKeyPath = "/myUniqueKeyPath"
-$ttlInSeconds = 86400 # Set this to -1 (or don't use it at all) to never expire
-
-$uniqueKey = New-AzCosmosDBSqlUniqueKey `
- -Path $uniqueKeyPath
-
-$uniqueKeyPolicy = New-AzCosmosDBSqlUniqueKeyPolicy `
- -UniqueKey $uniqueKey
-
-New-AzCosmosDBSqlContainer `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -DatabaseName $databaseName `
- -Name $containerName `
- -PartitionKeyKind Hash `
- -PartitionKeyPath $partitionKeyPath `
- -UniqueKeyPolicy $uniqueKeyPolicy `
- -TtlInSeconds $ttlInSeconds
-```
-
-### <a id="create-container-lww"></a>Create an Azure Cosmos DB container with conflict resolution
-
-To write all conflicts to the ConflictsFeed and handle them separately, pass `-Type "Custom" -Path ""`.
-
-```azurepowershell-interactive
-# Create container with last-writer-wins conflict resolution policy
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$containerName = "myContainer"
-$partitionKeyPath = "/myPartitionKey"
-$conflictResolutionPath = "/myResolutionPath"
-
-$conflictResolutionPolicy = New-AzCosmosDBSqlConflictResolutionPolicy `
- -Type LastWriterWins `
- -Path $conflictResolutionPath
-
-New-AzCosmosDBSqlContainer `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -DatabaseName $databaseName `
- -Name $containerName `
- -PartitionKeyKind Hash `
- -PartitionKeyPath $partitionKeyPath `
- -ConflictResolutionPolicy $conflictResolutionPolicy
-```
-
-To create a conflict resolution policy to use a stored procedure, call `New-AzCosmosDBSqlConflictResolutionPolicy` and pass parameters `-Type` and `-ConflictResolutionProcedure`.
-
-```azurepowershell-interactive
-# Create container with custom conflict resolution policy using a stored procedure
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$containerName = "myContainer"
-$partitionKeyPath = "/myPartitionKey"
-$conflictResolutionSprocName = "mysproc"
-
-$conflictResolutionSproc = "/dbs/$databaseName/colls/$containerName/sprocs/$conflictResolutionSprocName"
-
-$conflictResolutionPolicy = New-AzCosmosDBSqlConflictResolutionPolicy `
- -Type Custom `
- -ConflictResolutionProcedure $conflictResolutionSproc
-
-New-AzCosmosDBSqlContainer `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -DatabaseName $databaseName `
- -Name $containerName `
- -PartitionKeyKind Hash `
- -PartitionKeyPath $partitionKeyPath `
- -ConflictResolutionPolicy $conflictResolutionPolicy
-```
--
-### <a id="list-containers"></a>List all Azure Cosmos DB containers in a database
-
-```azurepowershell-interactive
-# List all Azure Cosmos DB containers in a database
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-
-Get-AzCosmosDBSqlContainer `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -DatabaseName $databaseName
-```
-
-### <a id="get-container"></a>Get a single Azure Cosmos DB container in a database
-
-```azurepowershell-interactive
-# Get a single Azure Cosmos DB container in a database
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$containerName = "myContainer"
-
-Get-AzCosmosDBSqlContainer `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -DatabaseName $databaseName `
- -Name $containerName
-```
-
-### <a id="delete-container"></a>Delete an Azure Cosmos DB container
-
-```azurepowershell-interactive
-# Delete an Azure Cosmos DB container
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$containerName = "myContainer"
-
-Remove-AzCosmosDBSqlContainer `
- -ResourceGroupName $resourceGroupName `
- -AccountName $accountName `
- -DatabaseName $databaseName `
- -Name $containerName
-```
-### <a id="create-container-lock"></a>Create a resource lock on an Azure Cosmos DB container to prevent delete
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$resourceType = "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$containerName = "myContainer"
-$resourceName = "$accountName/$databaseName/$containerName"
-$lockName = "myResourceLock"
-$lockLevel = "CanNotDelete"
-
-New-AzResourceLock `
- -ResourceGroupName $resourceGroupName `
- -ResourceType $resourceType `
- -ResourceName $resourceName `
- -LockName $lockName `
- -LockLevel $lockLevel
-```
-
-### <a id="remove-container-lock"></a>Remove a resource lock on an Azure Cosmos DB container
-
-```azurepowershell-interactive
-$resourceGroupName = "myResourceGroup"
-$resourceType = "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers"
-$accountName = "mycosmosaccount"
-$databaseName = "myDatabase"
-$containerName = "myContainer"
-$resourceName = "$accountName/$databaseName/$containerName"
-$lockName = "myResourceLock"
-
-Remove-AzResourceLock `
- -ResourceGroupName $resourceGroupName `
- -ResourceType $resourceType `
- -ResourceName $resourceName `
- -LockName $lockName
-```
-
-## Next steps
-
-* [All PowerShell Samples](powershell-samples.md)
-* [How to manage Azure Cosmos account](../how-to-manage-database-account.md)
-* [Create an Azure Cosmos DB container](how-to-create-container.md)
-* [Configure time-to-live in Azure Cosmos DB](how-to-time-to-live.md)
-
-<!--Reference style links - using these makes the source content way more readable than using inline links-->
-
-[powershell-install-configure]: /powershell/azure/
-[scaling-globally]: ../distribute-data-globally.md#EnableGlobalDistribution
-[distribute-data-globally]: ../distribute-data-globally.md
-[azure-resource-groups]: ../../azure-resource-manager/management/overview.md#resource-groups
-[azure-resource-tags]: ../../azure-resource-manager/management/tag-resources.md
-[rp-rest-api]: /rest/api/cosmos-db-resource-provider/
cosmos-db Manage With Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-templates.md
- Title: Create and manage Azure Cosmos DB with Resource Manager templates
-description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB for Core (SQL) API
- Previously updated : 02/18/2022
-# Manage Azure Cosmos DB Core (SQL) API resources with Azure Resource Manager templates
--
-In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts, databases, and containers.
-
-This article only shows Azure Resource Manager template examples for Core (SQL) API accounts. You can also find template examples for [Cassandra](../cassandr) APIs.
-
-> [!IMPORTANT]
->
-> * Account names are limited to 44 characters, all lowercase.
-> * To change the throughput values, redeploy the template with updated RU/s.
-> * When you add locations to or remove locations from an Azure Cosmos account, you can't simultaneously modify other properties. These operations must be done separately.
-> * To provision throughput at the database level and share across all containers, apply the throughput values to the database options property.
-
-To create any of the Azure Cosmos DB resources below, copy the following example template into a new JSON file. You can optionally create a parameters JSON file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Azure Resource Manager templates, including the [Azure portal](../../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../../azure-resource-manager/templates/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md), and [GitHub](../../azure-resource-manager/templates/deploy-to-azure-button.md).
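-
-For example, a minimal sketch of deploying a template with the Azure CLI (the resource group and file names below are placeholders):
-
-```azurecli-interactive
-# Deploy an ARM template with an optional parameters file
-az deployment group create \
-  --resource-group myResourceGroup \
-  --template-file azuredeploy.json \
-  --parameters @azuredeploy.parameters.json
-```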
-
-<a id="create-autoscale"></a>
-
-## Azure Cosmos account with autoscale throughput
-
-This template creates an Azure Cosmos account in two regions with options for consistency and failover, and a database and container configured for autoscale throughput with most policy options enabled. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
-
-> [!NOTE]
-> You can use Azure Resource Manager templates to update the autoscale max RU/s setting on database and container resources that are already configured with autoscale. Migrating between manual and autoscale throughput is a POST operation and isn't supported with Azure Resource Manager templates. To migrate throughput, use the [Azure CLI](how-to-provision-autoscale-throughput.md#azure-cli) or [PowerShell](how-to-provision-autoscale-throughput.md#azure-powershell).
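-
-For example, an existing database can be migrated from manual to autoscale throughput with the Azure CLI command shown in the CLI article (the account, database, and resource group names below are placeholders):
-
-```azurecli-interactive
-# Migrate an existing database from manual to autoscale throughput
-az cosmosdb sql database throughput migrate \
-  -a mycosmosaccount \
-  -g myResourceGroup \
-  -n database1 \
-  -t 'autoscale'
-```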
-
-[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql-autoscale%2Fazuredeploy.json)
--
-<a id="create-analytical-store"></a>
-
-## Azure Cosmos account with analytical store
-
-This template creates an Azure Cosmos account in one region with a container with Analytical TTL enabled and options for manual or autoscale throughput. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
-
-[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql-analytical-store%2Fazuredeploy.json)
--
-<a id="create-manual"></a>
-
-## Azure Cosmos account with standard provisioned throughput
-
-This template creates an Azure Cosmos account in two regions with options for consistency and failover, with database and container configured for standard throughput with many indexing policy options configured. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
-
-[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql%2Fazuredeploy.json)
--
-<a id="create-sproc"></a>
-
-## Azure Cosmos DB container with server-side functionality
-
-This template creates an Azure Cosmos account, database, and container with a stored procedure, trigger, and user-defined function. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
-
-[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql-container-sprocs%2Fazuredeploy.json)
--
-<a id="create-rbac"></a>
-
-## Azure Cosmos DB account with Azure AD and RBAC
-
-This template creates a SQL API Azure Cosmos account, a natively maintained role definition, and a natively maintained role assignment for an Azure AD identity. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
-
-[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql-rbac%2Fazuredeploy.json)
--
-<a id="free-tier"></a>
-
-## Free tier Azure Cosmos DB account
-
-This template creates a free-tier Azure Cosmos account and a database with shared throughput that can be shared with up to 25 containers. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
-
-[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-free%2Fazuredeploy.json)
--
-## Next steps
-
-Here are some additional resources:
-
-* [Azure Resource Manager documentation](../../azure-resource-manager/index.yml)
-* [Azure Cosmos DB resource provider schema](/azure/templates/microsoft.documentdb/allversions)
-* [Azure Cosmos DB Quickstart templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Documentdb&pageNumber=1&sort=Popular)
-* [Troubleshoot common Azure Resource Manager deployment errors](../../azure-resource-manager/templates/common-deployment-errors.md)
cosmos-db Manage With Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-terraform.md
- Title: Create and manage Azure Cosmos DB with Terraform
-description: Use Terraform to create and configure Azure Cosmos DB for Core (SQL) API
---- Previously updated : 09/16/2022----
-# Manage Azure Cosmos DB Core (SQL) API resources with Terraform
--
-In this article, you learn how to use Terraform to deploy and manage your Azure Cosmos DB accounts, databases, and containers.
-
-This article shows Terraform samples for Core (SQL) API accounts.
-
-> [!IMPORTANT]
->
-> * Account names are limited to 44 characters, all lowercase.
-> * To change the throughput (RU/s) values, redeploy the Terraform file with updated RU/s.
-> * When you add locations to or remove locations from an Azure Cosmos account, you can't simultaneously modify other properties. These operations must be done separately.
-> * To provision throughput at the database level and share across all containers, apply the throughput values to the database options property.
-
-To create any of the Azure Cosmos DB resources below, copy the example into a new Terraform file (main.tf) or, alternatively, use two separate files for resources (main.tf) and variables (variables.tf). Be sure to include the azurerm provider, either in the main Terraform file or split out into a separate providers file. All examples can be found in the [Terraform samples repository](https://github.com/Azure/terraform).
--
-<a id="create-autoscale"></a>
-
-## Azure Cosmos account with autoscale throughput
-
-Create an Azure Cosmos account in two regions with options for consistency and failover, with database and container configured for autoscale throughput that has most index policy options enabled.
-
-### main.tf
--
-### variables.tf
--
-<a id="create-analytical-store"></a>
-
-## Azure Cosmos account with analytical store
-
-Create an Azure Cosmos account in one region with a container with Analytical TTL enabled and options for manual or autoscale throughput.
-
-### main.tf
--
-### variables.tf
--
-<a id="create-manual"></a>
-
-## Azure Cosmos account with standard provisioned throughput
-
-Create an Azure Cosmos account in two regions with options for consistency and failover, with database and container configured for standard throughput that has most policy options enabled.
-
-### main.tf
--
-### variables.tf
--
-<a id="create-sproc"></a>
-
-## Azure Cosmos DB container with server-side functionality
-
-Create an Azure Cosmos account, database and container with a stored procedure, trigger, and user-defined function.
-
-### main.tf
--
-### variables.tf
--
-<a id="create-rbac"></a>
-
-## Azure Cosmos DB account with Azure AD and RBAC
-
-Create an Azure Cosmos account, a natively maintained Role Definition, and a natively maintained Role Assignment for an Azure Active Directory identity.
-
-### main.tf
--
-### variables.tf
--
-<a id="free-tier"></a>
-
-## Free tier Azure Cosmos DB account
-
-Create a free-tier Azure Cosmos account and a database with shared throughput that can be shared with up to 25 containers.
-
-### main.tf
--
-### variables.tf
--
-## Next steps
-
-Here are some additional resources:
-
-* [Install Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
-* [Terraform Azure Tutorial](https://learn.hashicorp.com/collections/terraform/azure-get-started)
-* [Terraform tools](https://www.terraform.io/docs/terraform-tools)
-* [Azure Provider Terraform documentation](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs)
-* [Terraform documentation](https://www.terraform.io/docs)
cosmos-db Migrate Containers Partitioned To Nonpartitioned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-containers-partitioned-to-nonpartitioned.md
- Title: Migrate non-partitioned Azure Cosmos containers to partitioned containers
-description: Learn how to migrate all the existing non-partitioned containers into partitioned containers.
---- Previously updated : 08/26/2021-----
-# Migrate non-partitioned containers to partitioned containers
-
-Azure Cosmos DB supports creating containers without a partition key. Currently you can create non-partitioned containers by using the Azure CLI and Azure Cosmos DB SDKs (.NET, Java, Node.js) that have a version less than or equal to 2.x. You can't create non-partitioned containers by using the Azure portal. However, such non-partitioned containers aren't elastic and have a fixed storage capacity of 20 GB and a throughput limit of 10K RU/s.
-
-Non-partitioned containers are a legacy feature, and you should migrate your existing non-partitioned containers to partitioned containers to scale storage and throughput. Azure Cosmos DB provides a system-defined mechanism to migrate your non-partitioned containers to partitioned containers. This document explains how all the existing non-partitioned containers are auto-migrated into partitioned containers. You can take advantage of the auto-migration feature only if you're using the V3 version of the SDKs in all the languages.
-
-> [!NOTE]
-> Currently, you cannot migrate Azure Cosmos DB MongoDB and Gremlin API accounts by using the steps described in this document.
-
-## Migrate container using the system defined partition key
-
-To support the migration, Azure Cosmos DB provides a system-defined partition key named `/_partitionKey` on all the containers that don't have a partition key. You can't change the partition key definition after the containers are migrated. For example, the definition of a container that is migrated to a partitioned container will be as follows:
-
-```json
-{
- "Id": "CollId"
- "partitionKey": {
- "paths": [
- "/_partitionKey"
- ],
- "kind": "Hash"
- },
-}
-```
-
-After the container is migrated, you can create documents by populating the `_partitionKey` property along with the other properties of the document. The `_partitionKey` property represents the partition key of your documents.
-
-Choosing the right partition key is important to utilize the provisioned throughput optimally. For more information, see the [how to choose a partition key](../partitioning-overview.md) article.
-
-> [!NOTE]
-> You can take advantage of the system-defined partition key only if you're using the latest/V3 version of the SDKs in all the languages.
-
-The following example shows sample code to create a document with the system-defined partition key and read that document:
-
-**JSON representation of the document**
-
-### [.NET SDK V3](#tab/dotnetv3)
-
-```csharp
-// JSON representation of the document:
-// {
-//   "id": "elevator/PugetSound/Building44/Floor1/1",
-//   "deviceId": "3cf4c52d-cc67-4bb8-b02f-f6185007a808",
-//   "_partitionKey": "3cf4c52d-cc67-4bb8-b02f-f6185007a808"
-// }
-
-public class DeviceInformationItem
-{
-    [JsonProperty(PropertyName = "id")]
-    public string Id { get; set; }
-
-    [JsonProperty(PropertyName = "deviceId")]
-    public string DeviceId { get; set; }
-
-    [JsonProperty(PropertyName = "_partitionKey", NullValueHandling = NullValueHandling.Ignore)]
-    public string PartitionKey { get { return this.DeviceId; } set { } }
-}
-
-Container migratedContainer = database.GetContainer("testContainer");
-
-DeviceInformationItem deviceItem = new DeviceInformationItem()
-{
-    Id = "1234",
-    DeviceId = "3cf4c52d-cc67-4bb8-b02f-f6185007a808"
-};
-
-ItemResponse<DeviceInformationItem> response =
-    await migratedContainer.CreateItemAsync<DeviceInformationItem>(
-        deviceItem,
-        new PartitionKey(deviceItem.PartitionKey)
-    );
-
-// Read back the document by providing the same partition key
-ItemResponse<DeviceInformationItem> readResponse =
-    await migratedContainer.ReadItemAsync<DeviceInformationItem>(
-        id: deviceItem.Id,
-        partitionKey: new PartitionKey(deviceItem.PartitionKey)
-    );
-
-```
-
-For the complete sample, see the [.NET samples][1] GitHub repository.
-
-## Migrate the documents
-
-While the container definition is enhanced with a partition key property, the documents within the container aren't auto-migrated, which means the system partition key property path `/_partitionKey` isn't automatically added to the existing documents. You need to repartition the existing documents by reading the documents that were created without a partition key and rewriting them with the `_partitionKey` property set.
-
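-The following is a minimal sketch of such a repartitioning loop, assuming the `DeviceInformationItem` class and the migrated container shown earlier. It reads the non-migrated documents by using the `PartitionKey.None` value described in the next section; the query text and the optional delete step are illustrative only, not a prescribed procedure.
-
-```csharp
-using (FeedIterator<DeviceInformationItem> iterator = migratedContainer.GetItemQueryIterator<DeviceInformationItem>(
-    new QueryDefinition("SELECT * FROM c"),
-    requestOptions: new QueryRequestOptions { PartitionKey = PartitionKey.None }))
-{
-    while (iterator.HasMoreResults)
-    {
-        foreach (DeviceInformationItem item in await iterator.ReadNextAsync())
-        {
-            // Rewrite the document; the PartitionKey property returns DeviceId, so _partitionKey is populated on write.
-            await migratedContainer.UpsertItemAsync(item, new PartitionKey(item.PartitionKey));
-
-            // Optionally remove the old copy that is stored without a partition key value.
-            await migratedContainer.DeleteItemAsync<DeviceInformationItem>(item.Id, PartitionKey.None);
-        }
-    }
-}
-```
-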
-## Access documents that don't have a partition key
-
-Applications can access the existing documents that don't have a partition key by using the special system value `PartitionKey.None`, which is the partition key value of the non-migrated documents. You can use this value in all the CRUD and query operations. The following example shows how to read a single document with `PartitionKey.None`.
-
-```csharp
-ItemResponse<DeviceInformationItem> readResponse =
-    await migratedContainer.ReadItemAsync<DeviceInformationItem>(
-        id: deviceItem.Id,
-        partitionKey: PartitionKey.None
-    );
-
-```
-
-### [Java SDK V4](#tab/javav4)
-
-```java
-static class Family {
- public String id;
- public String firstName;
- public String lastName;
- public String _partitionKey;
-
- public Family(String id, String firstName, String lastName, String _partitionKey) {
- this.id = id;
- this.firstName = firstName;
- this.lastName = lastName;
- this._partitionKey = _partitionKey;
- }
-}
-
-...
-
-CosmosDatabase cosmosDatabase = cosmosClient.getDatabase("testdb");
-CosmosContainer cosmosContainer = cosmosDatabase.getContainer("testcontainer");
-
-// Create single item
-Family family = new Family("id-1", "John", "Doe", "Doe");
-cosmosContainer.createItem(family, new PartitionKey(family._partitionKey), new CosmosItemRequestOptions());
-
-// Create items through bulk operations
-family = new Family("id-2", "Jane", "Doe", "Doe");
-CosmosItemOperation createItemOperation = CosmosBulkOperations.getCreateItemOperation(family,
- new PartitionKey(family._partitionKey));
-cosmosContainer.executeBulkOperations(Collections.singletonList(createItemOperation));
-```
-
-For the complete sample, see the [Java samples][2] GitHub repository.
-
-## Migrate the documents
-
-While the container definition is enhanced with a partition key property, the documents within the container aren't auto-migrated, which means the system partition key property path `/_partitionKey` isn't automatically added to the existing documents. You need to repartition the existing documents by reading the documents that were created without a partition key and rewriting them with the `_partitionKey` property set.
-
-## Access documents that don't have a partition key
-
-Applications can access the existing documents that don't have a partition key by using the special system value `PartitionKey.NONE`, which is the partition key value of the non-migrated documents. You can use this value in all the CRUD and query operations. The following example shows how to read a single document with `PartitionKey.NONE`.
-
-```java
-CosmosItemResponse<JsonNode> cosmosItemResponse =
- cosmosContainer.readItem("itemId", PartitionKey.NONE, JsonNode.class);
-```
-
-For the complete sample on how to repartition the documents, see the [Java samples][2] GitHub repository.
---
-## Compatibility with SDKs
-
-Older versions of the Azure Cosmos DB SDKs, such as V2.x.x and V1.x.x, don't support the system-defined partition key property. So, when you read the container definition from an older SDK, it doesn't contain any partition key definition, and these containers behave exactly as before. Applications that are built with older versions of the SDKs continue to work with non-partitioned containers as is, without any changes.
-
-If a migrated container is consumed by the latest/V3 version of the SDK and you start populating the system-defined partition key within the new documents, you can no longer access (read, update, delete, or query) such documents from the older SDKs.
-
-## Known issues
-
-**Querying for the count of items that were inserted without a partition key by using the V3 SDK may involve higher throughput consumption**
-
-If you query from the V3 SDK for the items that were inserted by using the V2 SDK, or for the items inserted by using the V3 SDK with the `PartitionKey.None` parameter, the count query may consume more RU/s if the `PartitionKey.None` parameter is supplied in the `FeedOptions`. We recommend that you don't supply the `PartitionKey.None` parameter if no other items are inserted with a partition key.
-
-If new items are inserted with different values for the partition key, querying for such item counts by passing the appropriate key in `FeedOptions` won't have any issues. After inserting new documents with a partition key, if you need to query just the document count without the partition key value, that query may again incur higher RU/s, similar to regular partitioned collections.
-
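-As a rough illustration, a count query scoped to the non-migrated items might look like the following sketch. The container instance, the query text, and the use of `QueryRequestOptions` (the V3 counterpart of `FeedOptions`) are assumptions for this example, not prescribed settings.
-
-```csharp
-// Counts only the items that were stored without a partition key value.
-QueryRequestOptions nonMigratedOnly = new QueryRequestOptions { PartitionKey = PartitionKey.None };
-
-using (FeedIterator<int> countIterator = migratedContainer.GetItemQueryIterator<int>(
-    new QueryDefinition("SELECT VALUE COUNT(1) FROM c"),
-    requestOptions: nonMigratedOnly))
-{
-    while (countIterator.HasMoreResults)
-    {
-        foreach (int count in await countIterator.ReadNextAsync())
-        {
-            Console.WriteLine($"Items without a partition key: {count}");
-        }
-    }
-}
-```
-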
-## Next steps
-
-* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
-* [Request Units in Azure Cosmos DB](../request-units.md)
-* [Provision throughput on containers and databases](../set-throughput.md)
-* [Work with Azure Cosmos account](../account-databases-containers-items.md)
-* Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-[1]: https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/NonPartitionContainerMigration
-[2]: https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/tree/main/src/main/java/com/azure/cosmos/examples/nonpartitioncontainercrud
cosmos-db Migrate Data Striim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-data-striim.md
- Title: Migrate data to Azure Cosmos DB SQL API account using Striim
-description: Learn how to use Striim to migrate data from an Oracle database to an Azure Cosmos DB SQL API account.
----- Previously updated : 12/09/2021---
-# Migrate data to Azure Cosmos DB SQL API account using Striim
-
-The Striim image in the Azure marketplace offers continuous real-time data movement from data warehouses and databases to Azure. While moving the data, you can perform in-line denormalization and data transformation, and enable real-time analytics and data reporting scenarios. It's easy to get started with Striim to continuously move enterprise data to the Azure Cosmos DB SQL API. Azure provides a marketplace offering that makes it easy to deploy Striim and migrate data to Azure Cosmos DB.
-
-This article shows how to use Striim to migrate data from an **Oracle database** to an **Azure Cosmos DB SQL API account**.
-
-## Prerequisites
-
-* If you don't have an [Azure subscription](../../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
-
-* An Oracle database running on-premises with some data in it.
-
-## Deploy the Striim marketplace solution
-
-1. Sign into the [Azure portal](https://portal.azure.com/).
-
-1. Select **Create a resource** and search for **Striim** in the Azure marketplace. Select the first option and **Create**.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/striim-azure-marketplace.png" alt-text="Find Striim marketplace item":::
-
-1. Next, enter the configuration properties of the Striim instance. The Striim environment is deployed in a virtual machine. From the **Basics** pane, enter the **VM user name** and **VM password** (this password is used to SSH into the VM). Select your **Subscription**, **Resource Group**, and **Location details** where you'd like to deploy Striim. Once complete, select **OK**.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/striim-configure-basic-settings.png" alt-text="Configure basic settings for Striim":::
-
-1. In the **Striim Cluster settings** pane, choose the type of Striim deployment and the virtual machine size.
-
- |Setting | Value | Description |
- | | | |
- |Striim deployment type |Standalone | Striim can run in **Standalone** or **Cluster** deployment types. Standalone mode deploys the Striim server on a single virtual machine, and you can select the size of the VM depending on your data volume. Cluster mode deploys the Striim server on two or more VMs with the selected size. Cluster environments with more than two nodes offer automatic high availability and failover.</br></br> In this tutorial, you can select the Standalone option. Use the default "Standard_F4s" VM size. |
- | Name of the Striim cluster| <Striim_cluster_Name>| Name of the Striim cluster.|
- | Striim cluster password| <Striim_cluster_password>| Password for the cluster.|
-
- After you fill the form, select **OK** to continue.
-
-1. In the **Striim access settings** pane, configure the **Public IP address** (choose the default values), the **Domain name for Striim**, and the **Admin password** that you'd like to use to sign in to the Striim UI. Configure a VNET and subnet (choose the default values). After filling in the details, select **OK** to continue.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/striim-access-settings.png" alt-text="Striim access settings":::
-
-1. Azure will validate the deployment and make sure everything looks good; validation takes a few minutes to complete. After the validation is complete, select **OK**.
-
-1. Finally, review the terms of use and select **Create** to create your Striim instance.
-
-## Configure the source database
-
-In this section, you configure the Oracle database as the source for data movement. The Striim server comes with the Oracle JDBC driver that's used to connect to Oracle. To read changes from your source Oracle database, you can use either the [LogMiner](https://www.oracle.com/technetwork/database/features/availability/logmineroverview-088844.html) or the [XStream APIs](https://docs.oracle.com/cd/E11882_01/server.112/e16545/xstrm_intro.htm#XSTRM72647). The Oracle JDBC driver is present in Striim's Java classpath to read, write, or persist data from the Oracle database.
-
-## Configure the target database
-
-In this section, you will configure the Azure Cosmos DB SQL API account as the target for data movement.
-
-1. Create an [Azure Cosmos DB SQL API account](create-cosmosdb-resources-portal.md) using the Azure portal.
-
-1. Navigate to the **Data Explorer** pane in your Azure Cosmos account. Select **New Container** to create a new container. Assume that you're migrating *products* and *orders* data from the Oracle database to Azure Cosmos DB. Create a new database named **StriimDemo** with a container named **Orders**. Provision the container with **1000 RUs** (this example uses 1000 RUs, but you should use the throughput estimated for your workload), and **/ORDER_ID** as the partition key. These values will differ depending on your source data.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/create-sql-api-account.png" alt-text="Create a SQL API account":::
-
-## Configure Oracle to Azure Cosmos DB data flow
-
-1. Navigate to the Striim instance that you deployed in the Azure portal. Select the **Connect** button in the upper menu bar, and from the **SSH** tab, copy the URL in the **Login using VM local account** field.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/get-ssh-url.png" alt-text="Get the SSH URL":::
-
-1. Open a new terminal window and run the SSH command you copied from the Azure portal. This article uses the terminal on macOS; you can follow similar instructions by using PuTTY or a different SSH client on a Windows machine. When prompted, type **yes** to continue, and enter the **password** you set for the virtual machine in the previous step.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/striim-vm-connect.png" alt-text="Connect to Striim VM":::
-
-1. From the same terminal window, restart the Striim server by executing the following commands:
-
- ```bash
- systemctl stop striim-node
- systemctl stop striim-dbms
- systemctl start striim-dbms
- systemctl start striim-node
- ```
-
-1. Striim will take a minute to start up. If you'd like to see the status, run the following command:
-
- ```bash
- tail -f /opt/striim/logs/striim-node.log
- ```
-
-1. Now, navigate back to Azure and copy the Public IP address of your Striim VM.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/copy-public-ip-address.png" alt-text="Copy Striim VM IP address":::
-
-1. To navigate to Striim's web UI, open a new tab in a browser and enter the public IP address followed by `:9080`. Sign in by using the **admin** username, along with the admin password that you specified in the Azure portal.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/striim-login-ui.png" alt-text="Sign in to Striim":::
-
-1. Now you'll arrive at Striim's home page. There are three different panes: **Dashboards**, **Apps**, and **SourcePreview**. The Dashboards pane allows you to move data in real time and visualize it. The Apps pane contains your streaming data pipelines, or data flows. On the right-hand side of the page is SourcePreview, where you can preview your data before moving it.
-
-1. Select the **Apps** pane; we'll focus on this pane for now. There are a variety of sample apps that you can use to learn about Striim; however, in this article you'll create your own. Select the **Add App** button in the top right-hand corner.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/add-striim-app.png" alt-text="Add the Striim app":::
-
-1. There are a few different ways to create Striim applications. Select **Start with Template** to start with an existing template.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/start-with-template.png" alt-text="Start the app with the template":::
-
-1. In the **Search templates** field, type "Cosmos", select **Target: Azure Cosmos DB**, and then select **Oracle CDC to Azure Cosmos DB**.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/oracle-cdc-cosmosdb.png" alt-text="Select Oracle CDC to Cosmos DB":::
-
-1. On the next page, name your application. You can provide a name such as **oraToCosmosDB**, and then select **Save**.
-
-1. Next, enter the source configuration of your source Oracle instance. Enter a value for **Source Name**. The source name is just a naming convention for the Striim application; you can use something like **src_onPremOracle**. Enter values for the rest of the source parameters (**URL**, **Username**, **Password**), and choose **LogMiner** as the reader to read data from Oracle. Select **Next** to continue.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/configure-source-parameters.png" alt-text="Configure source parameters":::
-
-1. Striim will check your environment to make sure that it can connect to your source Oracle instance, that it has the right privileges, and that CDC is configured properly. Once all the values are validated, select **Next**.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/validate-source-parameters.png" alt-text="Validate source parameters":::
-
-1. Select the tables from the Oracle database that you'd like to migrate. For example, let's choose the Orders table and select **Next**.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/select-source-tables.png" alt-text="Select source tables":::
-
-1. After selecting the source table, you can do more complicated operations such as mapping and filtering. In this case, you'll just create a replica of your source table in Azure Cosmos DB. So, select **Next** to configure the target.
-
-1. Now, let's configure the target:
-
- * **Target Name** - Provide a friendly name for the target.
- * **Input From** - From the dropdown list, select the input stream that you created in the source Oracle configuration.
- * **Collections** - Enter the target Azure Cosmos DB configuration properties. The collections syntax is **SourceSchema.SourceTable, TargetDatabase.TargetContainer**. In this example, the value would be "SYSTEM.ORDERS, StriimDemo.Orders".
- * **AccessKey** - The primary key of your Azure Cosmos account.
- * **ServiceEndpoint** - The URI of your Azure Cosmos account. Both values can be found in the **Keys** section of the Azure portal.
-
- Select **Save** and **Next**.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/configure-target-parameters.png" alt-text="Configure target parameters":::
--
-1. Next, you'll arrive at the flow designer, where you can drag and drop out-of-the-box connectors to create your streaming applications. You won't make any modifications to the flow at this point, so go ahead and deploy the application by selecting the **Deploy App** button.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/deploy-app.png" alt-text="Deploy the app":::
-
-1. In the deployment window, you can specify if you want to run certain parts of your application on specific parts of your deployment topology. Since we're running in a simple deployment topology through Azure, we'll use the default option.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/deploy-using-default-option.png" alt-text="Use the default option":::
-
-1. After deploying, you can preview the stream to see data flowing through. Select the **wave** icon and the eyeball next to it. Select the **Deployed** button in the top menu bar, and select **Start App**.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/start-app.png" alt-text="Start the app":::
-
-1. By using a **CDC (change data capture)** reader, Striim will pick up only new changes on the database. If you have data flowing through your source tables, you'll see it. However, since this is a demo table, the source isn't connected to any application. If you use a sample data generator, you can insert a chain of events into your Oracle database.
-
-1. You'll see data flowing through the Striim platform. Striim picks up all the metadata associated with your table as well, which is helpful for monitoring the data and making sure that it lands on the right target.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/configure-cdc-pipeline.png" alt-text="Configure CDC pipeline":::
-
-1. Finally, let's sign in to Azure and navigate to your Azure Cosmos account. Refresh the Data Explorer, and you can see that the data has arrived.
-
- :::image type="content" source="./media/cosmosdb-sql-api-migrate-data-striim/portal-validate-results.png" alt-text="Validate migrated data in Azure":::
-
-By using the Striim solution in Azure, you can continuously migrate data to Azure Cosmos DB from various sources such as Oracle, Cassandra, MongoDB, and others. To learn more, visit the [Striim website](https://www.striim.com/) and [download a free 30-day trial of Striim](https://go2.striim.com/download-free-trial). For any issues when setting up the migration path with Striim, file a [support request](https://go2.striim.com/request-support-striim).
-
-## Next steps
-
-* Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-* If you are migrating data to Azure Cosmos DB SQL API, see [how to migrate data to a Cassandra API account using Striim](../cassandr)
-
-* [Monitor and debug your data with Azure Cosmos DB metrics](../use-metrics.md)
cosmos-db Migrate Dotnet V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-dotnet-v2.md
- Title: Migrate your application to use the Azure Cosmos DB .NET SDK 2.0 (Microsoft.Azure.Cosmos)
-description: Learn how to upgrade your existing .NET application from the v1 SDK to .NET SDK v2 for Core (SQL) API.
---- Previously updated : 08/26/2021--
-# Migrate your application to use the Azure Cosmos DB .NET SDK v2
-
-> [!IMPORTANT]
-> It's important to note that v3 of the .NET SDK is currently available, and a migration plan from v2 to v3 is available [here](migrate-dotnet-v3.md). To learn about the Azure Cosmos DB .NET SDK v2, see the [Release notes](sql-api-sdk-dotnet.md), the [.NET GitHub repository](https://github.com/Azure/azure-cosmos-dotnet-v2), .NET SDK v2 [Performance Tips](performance-tips.md), and the [Troubleshooting guide](troubleshoot-dot-net-sdk.md).
->
-
-This article highlights some of the considerations for upgrading your existing v1 .NET application to the Azure Cosmos DB .NET SDK v2 for Core (SQL) API. Azure Cosmos DB .NET SDK v2 corresponds to the `Microsoft.Azure.DocumentDB` namespace. You can use the information provided in this document if you're migrating your application from any of the following Azure Cosmos DB .NET platforms to the v2 SDK `Microsoft.Azure.DocumentDB`:
-
-* Azure Cosmos DB .NET Framework v1 SDK for SQL API
-* Azure Cosmos DB .NET Core SDK v1 for SQL API
-
-## What's available in the .NET v2 SDK
-
-The v2 SDK contains many usability and performance improvements, including:
-
-* Support for TCP direct mode for non-Windows clients
-* Multi-region write support
-* Improvements on query performance
-* Support for geospatial/geometry collections and indexing
-* Improved diagnostics for the direct/TCP transport
-* Updates to the direct TCP transport stack to reduce the number of connections established
-* Latency reduction improvements related to RequestTimeout
-
-Most of the retry logic and lower levels of the SDK remain largely unchanged.
-
-## Why migrate to the .NET v2 SDK
-
-In addition to the numerous performance improvements, new feature investments made in the latest SDK won't be backported to older versions.
-
-Additionally, the older SDKs will be replaced by newer versions and the v1 SDK will go into [maintenance mode](sql-api-sdk-dotnet.md). For the best development experience, we recommend migrating your application to a later version of the SDK.
-
-## Major changes from v1 SDK to v2 SDK
-
-### Direct mode + TCP
-
-The .NET v2 SDK now supports both direct and gateway mode. Direct mode supports connectivity through the TCP protocol and offers better performance because it connects directly to the backend replicas with fewer network hops.
-
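-As a minimal sketch, opting into direct mode over TCP with the v2 SDK looks like the following; the endpoint and key values are placeholders.
-
-```csharp
-ConnectionPolicy connectionPolicy = new ConnectionPolicy
-{
-    ConnectionMode = ConnectionMode.Direct,
-    ConnectionProtocol = Protocol.Tcp
-};
-
-DocumentClient client = new DocumentClient(
-    new Uri("https://<your-account>.documents.azure.com:443/"),
-    "<your-account-key>",
-    connectionPolicy);
-```
-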
-For more details, read through the [Azure Cosmos DB SQL SDK connectivity modes guide](sql-sdk-connection-modes.md).
-
-### Session token formatting
-
-The v2 SDK no longer uses the *simple* session token format that was used in v1; instead, the SDK uses *vector* formatting. Convert the format when passing session tokens between client applications that use different SDK versions, because the formats aren't interchangeable.
-
-For more information, see [converting session token formats in the .NET SDK](how-to-convert-session-token.md).
-
-### Using the .NET change feed processor SDK
-
-The .NET change feed processor library 2.1.x requires `Microsoft.Azure.DocumentDB` 2.0 or later.
-
-The 2.1.x library has the following key changes:
-
-* Stability and diagnosability improvements
-* Improved handling of errors and exceptions
-* Additional support for partitioned lease collections
-* Advanced extensions to implement the `ChangeFeedDocument` interface and class for additional error handling and tracing
-* Added support for using a custom store to persist continuation tokens per partition
-
-For more information, see the change feed processor library [release notes](sql-api-sdk-dotnet-changefeed.md).
-
-### Using the bulk executor library
-
-The v2 bulk executor library currently has a dependency on the Azure Cosmos DB .NET SDK 2.5.1 or later.
-
-For more information, see the [Azure Cosmos DB bulk executor library overview](../bulk-executor-overview.md) and the .NET bulk executor library [release notes](sql-api-sdk-bulk-executor-dot-net.md).
-
-## Next steps
-
-* Read about [additional performance tips](sql-api-get-started.md) for using Azure Cosmos DB SQL API v2 and optimizing your application to achieve maximum performance
-* Learn more about [what you can do with the v2 SDK](sql-api-dotnet-samples.md)
-* Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-dotnet-v3.md
- Title: Migrate your application to use the Azure Cosmos DB .NET SDK 3.0 (Microsoft.Azure.Cosmos)
-description: Learn how to upgrade your existing .NET application from the v2 SDK to the newer .NET SDK v3 (Microsoft.Azure.Cosmos package) for Core (SQL) API.
----- Previously updated : 06/01/2022--
-# Migrate your application to use the Azure Cosmos DB .NET SDK v3
-
-> [!IMPORTANT]
-> To learn about the Azure Cosmos DB .NET SDK v3, see the [Release notes](sql-api-sdk-dotnet-standard.md), the [.NET GitHub repository](https://github.com/Azure/azure-cosmos-dotnet-v3), .NET SDK v3 [Performance Tips](performance-tips-dotnet-sdk-v3-sql.md), and the [Troubleshooting guide](troubleshoot-dot-net-sdk.md).
->
-
-This article highlights some of the considerations for upgrading your existing .NET application to the newer Azure Cosmos DB .NET SDK v3 for Core (SQL) API. Azure Cosmos DB .NET SDK v3 corresponds to the Microsoft.Azure.Cosmos namespace. You can use the information provided in this article if you're migrating your application from any of the following Azure Cosmos DB .NET SDKs:
-
-* Azure Cosmos DB .NET Framework SDK v2 for SQL API
-* Azure Cosmos DB .NET Core SDK v2 for SQL API
-
-The instructions in this article also help you to migrate the following external libraries that are now part of the Azure Cosmos DB .NET SDK v3 for Core (SQL) API:
-
-* .NET change feed processor library 2.0
-* .NET bulk executor library 1.1 or greater
-
-## What's new in the .NET V3 SDK
-
-The v3 SDK contains many usability and performance improvements, including:
-
-* Intuitive programming model naming
-* .NET Standard 2.0 **
-* Increased performance through stream API support
-* Fluent hierarchy that replaces the need for URI factory
-* Built-in support for change feed processor library
-* Built-in support for bulk operations
-* Mockable APIs for easier unit testing
-* Transactional batch and Blazor support
-* Pluggable serializers
-* Scale non-partitioned and autoscale containers
-
-** The SDK targets .NET Standard 2.0 that unifies the existing Azure Cosmos DB .NET Framework and .NET Core SDKs into a single .NET SDK. You can use the .NET SDK in any platform that implements .NET Standard 2.0, including your .NET Framework 4.6.1+ and .NET Core 2.0+ applications.
-
-Most of the networking, retry logic, and lower levels of the SDK remain largely unchanged.
-
-**The Azure Cosmos DB .NET SDK v3 is now open source.** We welcome any pull requests and will be logging issues and tracking feedback on [GitHub.](https://github.com/Azure/azure-cosmos-dotnet-v3/) We'll work on taking on any features that will improve customer experience.
-
-## Why migrate to the .NET v3 SDK
-
-In addition to the numerous usability and performance improvements, new feature investments made in the latest SDK won't be back ported to older versions.
-The v2 SDK is currently in maintenance mode. For the best development experience, we recommend always starting with the latest supported version of SDK.
-
-## Major name changes from v2 SDK to v3 SDK
-
-The following name changes have been applied throughout the .NET 3.0 SDK to align with the API naming conventions for the Core (SQL) API:
-
-* `DocumentClient` is renamed to `CosmosClient`
-* `Collection` is renamed to `Container`
-* `Document` is renamed to `Item`
-
-All the resource objects are renamed with additional properties that include the resource name for clarity.
-
-The following are some of the main class name changes:
-
-| .NET v2 SDK | .NET v3 SDK |
-|-|-|
-|`Microsoft.Azure.Documents.Client.DocumentClient`|`Microsoft.Azure.Cosmos.CosmosClient`|
-|`Microsoft.Azure.Documents.Client.ConnectionPolicy`|`Microsoft.Azure.Cosmos.CosmosClientOptions`|
-|`Microsoft.Azure.Documents.Client.DocumentClientException` |`Microsoft.Azure.Cosmos.CosmosException`|
-|`Microsoft.Azure.Documents.Client.Database`|`Microsoft.Azure.Cosmos.DatabaseProperties`|
-|`Microsoft.Azure.Documents.Client.DocumentCollection`|`Microsoft.Azure.Cosmos.ContainerProperties`|
-|`Microsoft.Azure.Documents.Client.RequestOptions`|`Microsoft.Azure.Cosmos.ItemRequestOptions`|
-|`Microsoft.Azure.Documents.Client.FeedOptions`|`Microsoft.Azure.Cosmos.QueryRequestOptions`|
-|`Microsoft.Azure.Documents.Client.StoredProcedure`|`Microsoft.Azure.Cosmos.StoredProcedureProperties`|
-|`Microsoft.Azure.Documents.Client.Trigger`|`Microsoft.Azure.Cosmos.TriggerProperties`|
-|`Microsoft.Azure.Documents.SqlQuerySpec`|`Microsoft.Azure.Cosmos.QueryDefinition`|
-
-### Classes replaced on .NET v3 SDK
-
-The following classes have been replaced on the 3.0 SDK:
-
-* `Microsoft.Azure.Documents.UriFactory`
-
-* `Microsoft.Azure.Documents.Document`
-
-* `Microsoft.Azure.Documents.Resource`
-
-The Microsoft.Azure.Documents.UriFactory class has been replaced by the fluent design.
-
-# [.NET SDK v3](#tab/dotnet-v3)
-
-```csharp
-Container container = client.GetContainer(databaseName, containerName);
-ItemResponse<SalesOrder> response = await container.CreateItemAsync(
- salesOrder,
- new PartitionKey(salesOrder.AccountNumber));
-
-```
-
-# [.NET SDK v2](#tab/dotnet-v2)
-
-```csharp
-Uri collectionUri = UriFactory.CreateDocumentCollectionUri(databaseName, containerName);
-await client.CreateDocumentAsync(
- collectionUri,
- salesOrder,
- new RequestOptions { PartitionKey = new PartitionKey(salesOrder.AccountNumber) });
-```
---
-Because the .NET v3 SDK allows users to configure a custom serialization engine, there's no direct replacement for the `Document` type. When using Newtonsoft.Json (the default serialization engine), `JObject` can be used to achieve the same functionality. When using a different serialization engine, you can use its base JSON document type (for example, `JsonDocument` for System.Text.Json). The recommendation is to use a C# type that reflects the schema of your items instead of relying on generic types.
-
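-For example, a minimal sketch of reading an item as a `JObject` with the default Newtonsoft.Json serializer might look like the following; the container instance, item ID, and partition key value are placeholders.
-
-```csharp
-ItemResponse<JObject> response = await container.ReadItemAsync<JObject>(
-    id: "itemId",
-    partitionKey: new PartitionKey("partitionKeyValue"));
-
-JObject document = response.Resource;
-// Access properties dynamically, without a typed model.
-string status = document.Value<string>("status");
-```
-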
-### Changes to item ID generation
-
-The item ID is no longer auto-populated in the .NET v3 SDK. Therefore, the item ID must be set explicitly, for example with a generated ID. View the following example:
-
-```csharp
-[JsonProperty(PropertyName = "id")]
-public Guid Id { get; set; }
-```
-
-### Changed default behavior for connection mode
-
-The SDK v3 now defaults to Direct + TCP connection modes, compared to the previous v2 SDK, which defaulted to Gateway + HTTPS connection modes. This change provides enhanced performance and scalability.
-
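-No configuration is needed to get the new default. If you want to keep the previous Gateway + HTTPS behavior, a minimal sketch might look like the following; the endpoint and key values are placeholders.
-
-```csharp
-CosmosClient gatewayClient = new CosmosClient(
-    "https://<your-account>.documents.azure.com:443/",
-    "<your-account-key>",
-    new CosmosClientOptions { ConnectionMode = ConnectionMode.Gateway });
-```
-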
-### Changes to FeedOptions (QueryRequestOptions in v3.0 SDK)
-
-The `FeedOptions` class in the SDK v2 has been renamed to `QueryRequestOptions` in the SDK v3, and within the class, several properties have had changes in name and/or default value, or have been removed completely.
-
-`FeedOptions.MaxDegreeOfParallelism` has been renamed to `QueryRequestOptions.MaxConcurrency`. The default value and associated behavior remain the same: operations that run client side during parallel query execution are executed serially with no parallelism.
-
-`FeedOptions.EnableCrossPartitionQuery` has been removed and the default behavior in SDK 3.0 is that cross-partition queries will be executed without the need to enable the property specifically.
-
-`FeedOptions.PopulateQueryMetrics` is enabled by default with the results being present in the `FeedResponse.Diagnostics` property of the response.
-
-`FeedOptions.RequestContinuation` has now been promoted to the query methods themselves.
-
-The following properties have been removed:
-
-* `FeedOptions.DisableRUPerMinuteUsage`
-
-* `FeedOptions.EnableCrossPartitionQuery`
-
-* `FeedOptions.JsonSerializerSettings`
-
-* `FeedOptions.PartitionKeyRangeId`
-
-* `FeedOptions.PopulateQueryMetrics`
-
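-Putting these changes together, a minimal v3 query sketch that uses `QueryRequestOptions` might look like the following; the container instance, the `MyItem` type, and the query text are placeholders.
-
-```csharp
-QueryRequestOptions queryOptions = new QueryRequestOptions
-{
-    MaxConcurrency = -1, // replaces FeedOptions.MaxDegreeOfParallelism; -1 lets the SDK decide
-    MaxItemCount = 100   // page size hint
-};
-
-using (FeedIterator<MyItem> iterator = container.GetItemQueryIterator<MyItem>(
-    new QueryDefinition("SELECT * FROM c WHERE c.status = @status").WithParameter("@status", "open"),
-    requestOptions: queryOptions))
-{
-    while (iterator.HasMoreResults)
-    {
-        FeedResponse<MyItem> page = await iterator.ReadNextAsync();
-        // page.Diagnostics replaces the metrics that FeedOptions.PopulateQueryMetrics used to surface.
-    }
-}
-```
-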
-### Constructing a client
-
-The .NET SDK v3 provides a fluent `CosmosClientBuilder` class that replaces the need for the SDK v2 URI Factory.
-
-The fluent design builds URLs internally and allows a single `Container` object to be passed around instead of a `DocumentClient`, `DatabaseName`, and `DocumentCollection`.
-
-The following example creates a new `CosmosClientBuilder` with a strong ConsistencyLevel and a list of preferred locations:
-
-```csharp
-CosmosClientBuilder cosmosClientBuilder = new CosmosClientBuilder(
- accountEndpoint: "https://testcosmos.documents.azure.com:443/",
- authKeyOrResourceToken: "SuperSecretKey")
-.WithConsistencyLevel(ConsistencyLevel.Strong)
-.WithApplicationRegion(Regions.EastUS);
-CosmosClient client = cosmosClientBuilder.Build();
-```
-
-### Exceptions
-
-Where the v2 SDK used `DocumentClientException` to signal errors during operations, the v3 SDK uses `CosmosException`, which exposes the `StatusCode`, `Diagnostics`, and other response-related information. All the complete information is serialized when `ToString()` is used:
-
-```csharp
-catch (CosmosException ex)
-{
- HttpStatusCode statusCode = ex.StatusCode;
- CosmosDiagnostics diagnostics = ex.Diagnostics;
- // store diagnostics optionally with diagnostics.ToString();
- // or log the entire error details with ex.ToString();
-}
-```
-
-### Diagnostics
-
-Where the v2 SDK had Direct-only diagnostics available through the `RequestDiagnosticsString` property, the v3 SDK uses `Diagnostics` available in all responses and exceptions, which are richer and not restricted to Direct mode. They include not only the time spent on the SDK for the operation, but also the regions the operation contacted:
-
-```csharp
-try
-{
- ItemResponse<MyItem> response = await container.ReadItemAsync<MyItem>(
- partitionKey: new PartitionKey("MyPartitionKey"),
- id: "MyId");
-
- TimeSpan elapsedTime = response.Diagnostics.GetElapsedTime();
- if (elapsedTime > somePreDefinedThreshold)
- {
- // log response.Diagnostics.ToString();
- IReadOnlyList<(string region, Uri uri)> regions = response.Diagnostics.GetContactedRegions();
- }
-}
-catch (CosmosException cosmosException) {
- string diagnostics = cosmosException.Diagnostics.ToString();
-
- TimeSpan elapsedTime = cosmosException.Diagnostics.GetElapsedTime();
-
- IReadOnlyList<(string region, Uri uri)> regions = cosmosException.Diagnostics.GetContactedRegions();
-
- // log cosmosException.ToString()
-}
-```
-
-### ConnectionPolicy
-
-Some settings in `ConnectionPolicy` have been renamed or replaced:
-
-| .NET v2 SDK | .NET v3 SDK |
-|-|-|
-|`EnableEndpointRediscovery`|`LimitToEndpoint` - The value is now inverted, if `EnableEndpointRediscovery` was being set to `true`, `LimitToEndpoint` should be set to `false`. Before using this setting, you need to understand [how it affects the client](troubleshoot-sdk-availability.md).|
-|`ConnectionProtocol`|Removed. Protocol is tied to the Mode, either it's Gateway (HTTPS) or Direct (TCP). Direct mode with HTTPS protocol is no longer supported on V3 SDK and the recommendation is to use TCP protocol. |
-|`MediaRequestTimeout`|Removed. Attachments are no longer supported.|
-|`SetCurrentLocation`|`CosmosClientOptions.ApplicationRegion` can be used to achieve the same effect.|
-|`PreferredLocations`|`CosmosClientOptions.ApplicationPreferredRegions` can be used to achieve the same effect.|
-
-### Indexing policy
-
-In the indexing policy, it is not possible to configure these properties. When not specified, these properties will now always have the following values:
-
-| **Property Name** | **New Value (not configurable)** |
-| -- | -- |
-| `Kind` | `range` |
-| `dataType` | `String` and `Number` |
-
-See [this section](how-to-manage-indexing-policy.md#indexing-policy-examples) for indexing policy examples for including and excluding paths. Due to improvements in the query engine, configuring these properties, even if using an older SDK version, has no impact on performance.
-
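-As a brief illustration, a v3 sketch that sets included and excluded paths on a new container might look like the following; an existing `Database` instance is assumed, and the container ID, partition key path, and excluded path are placeholders.
-
-```csharp
-ContainerProperties containerProperties = new ContainerProperties("MyContainer", "/myPartitionKey");
-containerProperties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
-containerProperties.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/notQueried/?" });
-
-Container indexedContainer = await database.CreateContainerIfNotExistsAsync(containerProperties);
-```
-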
-### Session token
-
-Where the v2 SDK exposed the session token of a response as `ResourceResponse.SessionToken` for cases where capturing the session token was required, the v3 SDK exposes that value in the `Headers.Session` property of any response, because the session token is a header.
-
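-For example, a minimal sketch of capturing the session token from a v3 response might look like the following; the container instance, item type, ID, and partition key value are placeholders.
-
-```csharp
-ItemResponse<MyItem> response = await container.ReadItemAsync<MyItem>(
-    id: "MyId",
-    partitionKey: new PartitionKey("MyPartitionKey"));
-
-string sessionToken = response.Headers.Session;
-```
-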
-### Timestamp
-
-The v2 SDK exposed the timestamp of a document through the `Timestamp` property. Because `Document` is no longer available, users can map the `_ts` [system property](../account-databases-containers-items.md#properties-of-an-item) to a property in their model.
-
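-For example, a minimal sketch of mapping `_ts` in a model class with the default Newtonsoft.Json serializer might look like the following; the class and property names are placeholders.
-
-```csharp
-public class MyItem
-{
-    [JsonProperty(PropertyName = "id")]
-    public string Id { get; set; }
-
-    // _ts is the server-side last-modified timestamp, in epoch seconds.
-    [JsonProperty(PropertyName = "_ts")]
-    public long Timestamp { get; set; }
-}
-```
-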
-### OpenAsync
-
-For use cases where `OpenAsync()` was being used to warm up the v2 SDK client, `CreateAndInitializeAsync` can be used to both [create and warm-up](https://devblogs.microsoft.com/cosmosdb/improve-net-sdk-initialization/) a v3 SDK client.
-
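-For example, a minimal sketch of creating and warming up a v3 client might look like the following; the endpoint, key, database, and container names are placeholders.
-
-```csharp
-IReadOnlyList<(string databaseId, string containerId)> containersToWarmUp = new List<(string, string)>
-{
-    ("MyDatabase", "MyContainer")
-};
-
-CosmosClient client = await CosmosClient.CreateAndInitializeAsync(
-    "https://<your-account>.documents.azure.com:443/",
-    "<your-account-key>",
-    containersToWarmUp);
-```
-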
-### Using the change feed processor APIs directly from the v3 SDK
-
-The v3 SDK has built-in support for the Change Feed Processor APIs, allowing you to use the same SDK for building your application and change feed processor implementation. Previously, you had to use a separate change feed processor library.
-
-For more information, see [how to migrate from the change feed processor library to the Azure Cosmos DB .NET v3 SDK](how-to-migrate-from-change-feed-library.md)
-
-### Change feed queries
-
-Executing change feed queries on the v3 SDK is considered to be using the [change feed pull model](change-feed-pull-model.md). Use the following table to migrate configuration:
-
-| .NET v2 SDK | .NET v3 SDK |
-|-|-|
-|`ChangeFeedOptions.PartitionKeyRangeId`|`FeedRange` - In order to achieve parallelism reading the change feed [FeedRanges](change-feed-pull-model.md#using-feedrange-for-parallelization) can be used. It's no longer a required parameter, you can [read the Change Feed for an entire container](change-feed-pull-model.md#consuming-an-entire-containers-changes) easily now.|
-|`ChangeFeedOptions.PartitionKey`|`FeedRange.FromPartitionKey` - A FeedRange representing the desired Partition Key can be used to [read the Change Feed for that Partition Key value](change-feed-pull-model.md#consuming-a-partition-keys-changes).|
-|`ChangeFeedOptions.RequestContinuation`|`ChangeFeedStartFrom.Continuation` - The change feed iterator can be stopped and resumed at any time by [saving the continuation and using it when creating a new iterator](change-feed-pull-model.md#saving-continuation-tokens).|
-|`ChangeFeedOptions.StartTime`|`ChangeFeedStartFrom.Time` |
-|`ChangeFeedOptions.StartFromBeginning` |`ChangeFeedStartFrom.Beginning` |
-|`ChangeFeedOptions.MaxItemCount`|`ChangeFeedRequestOptions.PageSizeHint` - The change feed iterator can be stopped and resumed at any time by [saving the continuation and using it when creating a new iterator](change-feed-pull-model.md#saving-continuation-tokens).|
-|`IDocumentQuery.HasMoreResults` |`response.StatusCode == HttpStatusCode.NotModified` - The change feed is conceptually infinite, so there could always be more results. When a response contains the `HttpStatusCode.NotModified` status code, it means there are no new changes to read at this time. You can use that to stop and [save the continuation](change-feed-pull-model.md#saving-continuation-tokens) or to temporarily sleep or wait and then call `ReadNextAsync` again to test for new changes. |
-|Split handling|It's no longer required for users to handle split exceptions when reading the change feed, splits will be handled transparently without the need of user interaction.|
-
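-A minimal sketch of the pull model in the v3 SDK might look like the following; the container instance and the `MyItem` type are placeholders, and error handling is omitted.
-
-```csharp
-FeedIterator<MyItem> changeFeedIterator = container.GetChangeFeedIterator<MyItem>(
-    ChangeFeedStartFrom.Beginning(),
-    ChangeFeedMode.Incremental);
-
-while (changeFeedIterator.HasMoreResults)
-{
-    FeedResponse<MyItem> response = await changeFeedIterator.ReadNextAsync();
-
-    if (response.StatusCode == HttpStatusCode.NotModified)
-    {
-        // No new changes right now; save response.ContinuationToken and try again later.
-        break;
-    }
-
-    foreach (MyItem item in response)
-    {
-        // Process the changed item.
-    }
-}
-```
-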
-### Using the bulk executor library directly from the V3 SDK
-
-The v3 SDK has built-in support for the bulk executor library, allowing you to use the same SDK for building your application and performing bulk operations. Previously, you were required to use a separate bulk executor library.
-
-For more information, see [how to migrate from the bulk executor library to bulk support in Azure Cosmos DB .NET V3 SDK](how-to-migrate-from-bulk-executor-library.md)
-
-## Code snippet comparisons
-
-The following code snippet shows the differences in how resources are created between the .NET v2 and v3 SDKs:
-
-## Database operations
-
-### Create a database
-
-# [.NET SDK v3](#tab/dotnet-v3)
-
-```csharp
-// Create database with no shared provisioned throughput
-DatabaseResponse databaseResponse = await client.CreateDatabaseIfNotExistsAsync(DatabaseName);
-Database database = databaseResponse;
-DatabaseProperties databaseProperties = databaseResponse;
-
-// Create a database with a shared manual provisioned throughput
-string databaseIdManual = DatabaseName + "_SharedManualThroughput";
-database = await client.CreateDatabaseIfNotExistsAsync(databaseIdManual, ThroughputProperties.CreateManualThroughput(400));
-
-// Create a database with shared autoscale provisioned throughput
-string databaseIdAutoscale = DatabaseName + "_SharedAutoscaleThroughput";
-database = await client.CreateDatabaseIfNotExistsAsync(databaseIdAutoscale, ThroughputProperties.CreateAutoscaleThroughput(4000));
-```
-
-# [.NET SDK v2](#tab/dotnet-v2)
-
-```csharp
-// Create database
-ResourceResponse<Database> databaseResponse = await client.CreateDatabaseIfNotExistsAsync(new Database { Id = DatabaseName });
-Database database = databaseResponse;
-
-// Create a database with shared standard provisioned throughput
-database = await client.CreateDatabaseIfNotExistsAsync(new Database{ Id = databaseIdStandard }, new RequestOptions { OfferThroughput = 400 });
-
-// Creating a database with shared autoscale provisioned throughput isn't supported in the v2 SDK; use the v3 SDK
-```
--
-### Read a database by ID
-
-# [.NET SDK v3](#tab/dotnet-v3)
-
-```csharp
-// Read a database
-Console.WriteLine($"{Environment.NewLine} Read database resource: {DatabaseName}");
-database = client.GetDatabase(DatabaseName);
-Console.WriteLine($"{Environment.NewLine} database { database.Id.ToString()}");
-
-// Read all databases
-string findQueryText = "SELECT * FROM c";
-using (FeedIterator<DatabaseProperties> feedIterator = client.GetDatabaseQueryIterator<DatabaseProperties>(findQueryText))
-{
- while (feedIterator.HasMoreResults)
- {
- FeedResponse<DatabaseProperties> databaseResponses = await feedIterator.ReadNextAsync();
- foreach (DatabaseProperties _database in databaseResponses)
- {
- Console.WriteLine($"{ Environment.NewLine} database {_database.Id.ToString()}");
- }
- }
-}
-```
-
-# [.NET SDK v2](#tab/dotnet-v2)
-
-```csharp
-// Read a database
-database = await client.ReadDatabaseAsync(UriFactory.CreateDatabaseUri(DatabaseName));
-Console.WriteLine("\n database {0}", database.Id.ToString());
-
-// Read all databases
-Console.WriteLine("\n1.1 Reading all databases resources");
-foreach (Database _database in await client.ReadDatabaseFeedAsync())
-{
- Console.WriteLine("\n database {0} \n {1}", _database.Id.ToString(), _database.ToString());
-}
-```
--
-### Delete a database
-
-# [.NET SDK v3](#tab/dotnet-v3)
-
-```csharp
-// Delete a database
-await client.GetDatabase(DatabaseName).DeleteAsync();
-Console.WriteLine($"{ Environment.NewLine} database {DatabaseName} deleted.");
-
-// Delete all databases in an account
-string deleteQueryText = "SELECT * FROM c";
-using (FeedIterator<DatabaseProperties> feedIterator = client.GetDatabaseQueryIterator<DatabaseProperties>(deleteQueryText))
-{
- while (feedIterator.HasMoreResults)
- {
- FeedResponse<DatabaseProperties> databaseResponses = await feedIterator.ReadNextAsync();
- foreach (DatabaseProperties _database in databaseResponses)
- {
- await client.GetDatabase(_database.Id).DeleteAsync();
- Console.WriteLine($"{ Environment.NewLine} database {_database.Id} deleted");
- }
- }
-}
-```
-
-# [.NET SDK v2](#tab/dotnet-v2)
-
-```csharp
-// Delete a database
-database = await client.DeleteDatabaseAsync(UriFactory.CreateDatabaseUri(DatabaseName));
-Console.WriteLine(" database {0} deleted.", DatabaseName);
-
-// Delete all databases in an account
-foreach (Database _database in await client.ReadDatabaseFeedAsync())
-{
- await client.DeleteDatabaseAsync(UriFactory.CreateDatabaseUri(_database.Id));
- Console.WriteLine("\n database {0} deleted", _database.Id);
-}
-```
--
-## Container operations
-
-### Create a container (Autoscale + Time to live with expiration)
-
-# [.NET SDK v3](#tab/dotnet-v3)
-
-```csharp
-private static async Task CreateManualThroughputContainer(Database database)
-{
- // Set throughput to the minimum value of 400 RU/s manually configured throughput
- string containerIdManual = ContainerName + "_Manual";
- ContainerResponse container = await database.CreateContainerIfNotExistsAsync(
- id: containerIdManual,
- partitionKeyPath: partitionKeyPath,
- throughput: 400);
-}
-
-// Create container with autoscale
-private static async Task CreateAutoscaleThroughputContainer(Database database)
-{
- string autoscaleContainerId = ContainerName + "_Autoscale";
- ContainerProperties containerProperties = new ContainerProperties(autoscaleContainerId, partitionKeyPath);
-
- Container container = await database.CreateContainerIfNotExistsAsync(
- containerProperties: containerProperties,
-        throughputProperties: ThroughputProperties.CreateAutoscaleThroughput(autoscaleMaxThroughput: 4000));
-}
-
-// Create a container with TTL Expiration
-private static async Task CreateContainerWithTtlExpiration(Database database)
-{
- string containerIdManualwithTTL = ContainerName + "_ManualTTL";
-
- ContainerProperties properties = new ContainerProperties
- (id: containerIdManualwithTTL,
- partitionKeyPath: partitionKeyPath);
-
- properties.DefaultTimeToLive = (int)TimeSpan.FromDays(1).TotalSeconds; //expire in 1 day
-
- ContainerResponse containerResponse = await database.CreateContainerIfNotExistsAsync(containerProperties: properties);
- ContainerProperties returnedProperties = containerResponse;
-}
-```
-
-# [.NET SDK v2](#tab/dotnet-v2)
-
-```csharp
-// Create a collection
-private static async Task CreateManualThroughputContainer(DocumentClient client)
-{
- string containerIdManual = ContainerName + "_Manual";
-
- // Set throughput to the minimum value of 400 RU/s manually configured throughput
-
- DocumentCollection collectionDefinition = new DocumentCollection();
- collectionDefinition.Id = containerIdManual;
- collectionDefinition.PartitionKey.Paths.Add(partitionKeyPath);
-
- DocumentCollection partitionedCollection = await client.CreateDocumentCollectionIfNotExistsAsync(
- UriFactory.CreateDatabaseUri(DatabaseName),
- collectionDefinition,
- new RequestOptions { OfferThroughput = 400 });
-}
-
-private static async Task CreateAutoscaleThroughputContainer(DocumentClient client)
-{
- // .NET v2 SDK does not support the creation of provisioned autoscale throughput containers
-}
-
- private static async Task CreateContainerWithTtlExpiration(DocumentClient client)
-{
- string containerIdManualwithTTL = ContainerName + "_ManualTTL";
-
- DocumentCollection collectionDefinition = new DocumentCollection();
- collectionDefinition.Id = containerIdManualwithTTL;
- collectionDefinition.DefaultTimeToLive = (int)TimeSpan.FromDays(1).TotalSeconds; //expire in 1 day
- collectionDefinition.PartitionKey.Paths.Add(partitionKeyPath);
-
- DocumentCollection partitionedCollection = await client.CreateDocumentCollectionIfNotExistsAsync(
- UriFactory.CreateDatabaseUri(DatabaseName),
- collectionDefinition,
- new RequestOptions { OfferThroughput = 400 });
-
-}
-```
--
-### Read container properties
-
-# [.NET SDK v3](#tab/dotnet-v3)
-
-```csharp
-private static async Task ReadContainerProperties(Database database)
-{
- string containerIdManual = ContainerName + "_Manual";
- Container container = database.GetContainer(containerIdManual);
- ContainerProperties containerProperties = await container.ReadContainerAsync();
-}
-```
-
-# [.NET SDK v2](#tab/dotnet-v2)
-
-```csharp
-private static async Task ReadContainerProperties(DocumentClient client)
-{
- string containerIdManual = ContainerName + "_Manual";
- DocumentCollection collection = await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri(DatabaseName, containerIdManual));
-}
-```
--
-### Delete a container
-
-# [.NET SDK v3](#tab/dotnet-v3)
-
-```csharp
-private static async Task DeleteContainers(Database database)
-{
- string containerIdManual = ContainerName + "_Manual";
-
- // Delete a container
- await database.GetContainer(containerIdManual).DeleteContainerAsync();
-
- // Delete all CosmosContainer resources for a database
- using (FeedIterator<ContainerProperties> feedIterator = database.GetContainerQueryIterator<ContainerProperties>())
- {
- while (feedIterator.HasMoreResults)
- {
- foreach (ContainerProperties _container in await feedIterator.ReadNextAsync())
- {
- await database.GetContainer(_container.Id).DeleteContainerAsync();
- Console.WriteLine($"{Environment.NewLine} deleted container {_container.Id}");
- }
- }
- }
-}
-```
-
-# [.NET SDK v2](#tab/dotnet-v2)
-
-```csharp
-private static async Task DeleteContainers(DocumentClient client)
-{
- // Delete a collection
- string containerIdManual = ContainerName + "_Manual";
- await client.DeleteDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri(DatabaseName, containerIdManual));
-
- // Delete all containers for a database
- foreach (var collection in await client.ReadDocumentCollectionFeedAsync(UriFactory.CreateDatabaseUri(DatabaseName)))
- {
- await client.DeleteDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri(DatabaseName, collection.Id));
- }
-}
-```
--
-## Item and query operations
-
-### Create an item
-
-# [.NET SDK v3](#tab/dotnet-v3)
-
-```csharp
-private static async Task CreateItemAsync(Container container)
-{
- // Create a SalesOrder POCO object
- SalesOrder salesOrder1 = GetSalesOrderSample("Account1", "SalesOrder1");
- ItemResponse<SalesOrder> response = await container.CreateItemAsync(salesOrder1,
- new PartitionKey(salesOrder1.AccountNumber));
-}
-
-private static async Task RunBasicOperationsOnDynamicObjects(Container container)
-{
- // Dynamic Object
- dynamic salesOrder = new
- {
- id = "SalesOrder5",
- AccountNumber = "Account1",
- PurchaseOrderNumber = "PO18009186470",
- OrderDate = DateTime.UtcNow,
- Total = 5.95,
- };
- Console.WriteLine("\nCreating item");
- ItemResponse<dynamic> response = await container.CreateItemAsync<dynamic>(
- salesOrder, new PartitionKey(salesOrder.AccountNumber));
- dynamic createdSalesOrder = response.Resource;
-}
-```
-
-# [.NET SDK v2](#tab/dotnet-v2)
-
-```csharp
-private static async Task CreateItemAsync(DocumentClient client)
-{
- // Create a SalesOrder POCO object
- SalesOrder salesOrder1 = GetSalesOrderSample("Account1", "SalesOrder1");
- await client.CreateDocumentAsync(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, ContainerName),
- salesOrder1,
- new RequestOptions { PartitionKey = new PartitionKey("Account1")});
-}
-
-private static async Task RunBasicOperationsOnDynamicObjects(DocumentClient client)
-{
- // Create a dynamic object
- dynamic salesOrder = new
- {
- id= "SalesOrder5",
- AccountNumber = "Account1",
- PurchaseOrderNumber = "PO18009186470",
- OrderDate = DateTime.UtcNow,
- Total = 5.95,
- };
- ResourceResponse<Document> response = await client.CreateDocumentAsync(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, ContainerName),
- salesOrder,
- new RequestOptions { PartitionKey = new PartitionKey(salesOrder.AccountNumber)});
-
- dynamic createdSalesOrder = response.Resource;
- }
-```
--
-### Read all the items in a container
-
-# [.NET SDK v3](#tab/dotnet-v3)
-
-```csharp
-private static async Task ReadAllItems(Container container)
-{
- // Read all items in a container
- List<SalesOrder> allSalesForAccount1 = new List<SalesOrder>();
-
- using (FeedIterator<SalesOrder> resultSet = container.GetItemQueryIterator<SalesOrder>(
- queryDefinition: null,
- requestOptions: new QueryRequestOptions()
- {
- PartitionKey = new PartitionKey("Account1"),
- MaxItemCount = 5
- }))
- {
- while (resultSet.HasMoreResults)
- {
- FeedResponse<SalesOrder> response = await resultSet.ReadNextAsync();
- SalesOrder salesOrder = response.First();
- Console.WriteLine($"\n1.3.1 Account Number: {salesOrder.AccountNumber}; Id: {salesOrder.Id}");
- allSalesForAccount1.AddRange(response);
- }
- }
-}
-```
-
-# [.NET SDK v2](#tab/dotnet-v2)
-
-```csharp
-private static async Task ReadAllItems(DocumentClient client)
-{
- // Read all items in a collection
- List<SalesOrder> allSalesForAccount1 = new List<SalesOrder>();
-
- string continuationToken = null;
- do
- {
- var feed = await client.ReadDocumentFeedAsync(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, ContainerName),
- new FeedOptions { MaxItemCount = 5, RequestContinuation = continuationToken });
- continuationToken = feed.ResponseContinuation;
- foreach (Document document in feed)
- {
- SalesOrder salesOrder = (SalesOrder)(dynamic)document;
- Console.WriteLine($"\n1.3.1 Account Number: {salesOrder.AccountNumber}; Id: {salesOrder.Id}");
- allSalesForAccount1.Add(salesOrder);
-
- }
- } while (continuationToken != null);
-}
-```
--
-### Query items
-#### Changes to SqlQuerySpec (QueryDefinition in v3.0 SDK)
-
-The `SqlQuerySpec` class in SDK v2 has been renamed to `QueryDefinition` in SDK v3.
-
-`SqlParameterCollection` and `SqlParameter` have been removed. Parameters are now added to the `QueryDefinition` with a builder model by using `QueryDefinition.WithParameter`. You can access the parameters with `QueryDefinition.GetQueryParameters`.
-
-# [.NET SDK v3](#tab/dotnet-v3)
-
-```csharp
-private static async Task QueryItems(Container container)
-{
- // Query for items by a property other than Id
- QueryDefinition queryDefinition = new QueryDefinition(
- "select * from sales s where s.AccountNumber = @AccountInput")
- .WithParameter("@AccountInput", "Account1");
-
- List<SalesOrder> allSalesForAccount1 = new List<SalesOrder>();
- using (FeedIterator<SalesOrder> resultSet = container.GetItemQueryIterator<SalesOrder>(
- queryDefinition,
- requestOptions: new QueryRequestOptions()
- {
- PartitionKey = new PartitionKey("Account1"),
- MaxItemCount = 1
- }))
- {
- while (resultSet.HasMoreResults)
- {
- FeedResponse<SalesOrder> response = await resultSet.ReadNextAsync();
- SalesOrder sale = response.First();
- Console.WriteLine($"\n Account Number: {sale.AccountNumber}; Id: {sale.Id};");
- allSalesForAccount1.AddRange(response);
- }
- }
-}
-```
-
-# [.NET SDK v2](#tab/dotnet-v2)
-
-```csharp
-private static async Task QueryItems(DocumentClient client)
-{
- // Query for items by a property other than Id
- SqlQuerySpec querySpec = new SqlQuerySpec()
- {
- QueryText = "select * from sales s where s.AccountNumber = @AccountInput",
- Parameters = new SqlParameterCollection()
- {
- new SqlParameter("@AccountInput", "Account1")
- }
- };
- var query = client.CreateDocumentQuery<SalesOrder>(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, ContainerName),
- querySpec,
- new FeedOptions {EnableCrossPartitionQuery = true});
-
- var allSalesForAccount1 = query.ToList();
-
- Console.WriteLine($"\n1.4.2 Query found {allSalesForAccount1.Count} items.");
-}
-```
--
-### Delete an item
-
-# [.NET SDK v3](#tab/dotnet-v3)
-
-```csharp
-private static async Task DeleteItemAsync(Container container)
-{
- ItemResponse<SalesOrder> response = await container.DeleteItemAsync<SalesOrder>(
- partitionKey: new PartitionKey("Account1"), id: "SalesOrder3");
-}
-```
-
-# [.NET SDK v2](#tab/dotnet-v2)
-
-```csharp
-private static async Task DeleteItemAsync(DocumentClient client)
-{
- ResourceResponse<Document> response = await client.DeleteDocumentAsync(
- UriFactory.CreateDocumentUri(DatabaseName, ContainerName, "SalesOrder3"),
- new RequestOptions { PartitionKey = new PartitionKey("Account1") });
-}
-```
--
-### Change feed query
-
-# [.NET SDK v3](#tab/dotnet-v3)
-
-```csharp
-private static async Task QueryChangeFeedAsync(Container container)
-{
- FeedIterator<SalesOrder> iterator = container.GetChangeFeedIterator<SalesOrder>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
-
- string continuation = null;
- while (iterator.HasMoreResults)
- {
-        FeedResponse<SalesOrder> response = await iterator.ReadNextAsync();
-
- if (response.StatusCode == HttpStatusCode.NotModified)
- {
- // No new changes
- continuation = response.ContinuationToken;
- break;
- }
- else
- {
- // Process the documents in response
- }
- }
-}
-```
-
-# [.NET SDK v2](#tab/dotnet-v2)
-
-```csharp
-private static async Task QueryChangeFeedAsync(DocumentClient client, string partitionKeyRangeId)
-{
- ChangeFeedOptions options = new ChangeFeedOptions
- {
- PartitionKeyRangeId = partitionKeyRangeId,
- StartFromBeginning = true,
- };
-
- using(var query = client.CreateDocumentChangeFeedQuery(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, ContainerName), options))
- {
- do
- {
- var response = await query.ExecuteNextAsync<Document>();
- if (response.Count > 0)
- {
- var docs = new List<Document>();
- docs.AddRange(response);
- // Process the documents.
- // Save response.ResponseContinuation if needed
- }
- }
- while (query.HasMoreResults);
- }
-}
-```
--
-## Next steps
-
-* [Build a Console app](sql-api-get-started.md) to manage Azure Cosmos DB SQL API data using the v3 SDK
-* Learn more about [what you can do with the v3 SDK](sql-api-dotnet-v3sdk-samples.md)
-* Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Migrate Hbase To Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-hbase-to-cosmos-db.md
- Title: Migrate data from Apache HBase to Azure Cosmos DB SQL API account
-description: Learn how to migrate your data from HBase to Azure Cosmos DB SQL API account.
-Previously updated : 12/07/2021
-# Migrate data from Apache HBase to Azure Cosmos DB SQL API account
-
-Azure Cosmos DB is a scalable, globally distributed, fully managed database. It provides guaranteed low-latency access to your data. To learn more about Azure Cosmos DB, see the [overview](../introduction.md) article. This article describes how to migrate your data from HBase to an Azure Cosmos DB SQL API account.
-
-## Differences between Cosmos DB and HBase
-
-Before migrating, you must understand the differences between Azure Cosmos DB and HBase.
-
-### Resource model
-
-Azure Cosmos DB organizes resources as account, database, container, and item; HBase organizes resources as cluster, namespace, table, column family, and row. The next section maps the two models to each other.
-
-### Resource mapping
-
-The following table shows a conceptual mapping between Apache HBase, Apache Phoenix, and Azure Cosmos DB.
-
-| **HBase** | **Phoenix** | **Azure Cosmos DB** |
-| --- | --- | --- |
-| Cluster | Cluster | Account |
-| Namespace | Schema (if enabled) | Database |
-| Table | Table | Container/Collection |
-| Column family | Column family | N/A |
-| Row | Row | Item/Document |
-| Version (Timestamp) | Version (Timestamp) | N/A |
-| N/A | Primary Key | Partition Key |
-| N/A | Index | Index |
-| N/A | Secondary Index | Secondary Index |
-| N/A | View | N/A |
-| N/A | Sequence | N/A |
-
-### Data structure comparison and differences
-
-The key differences between the data structure of Azure Cosmos DB and HBase are as follows:
-
-**RowKey**
-
-* In HBase, data is stored by [RowKey](https://hbase.apache.org/book.html#rowkey.design) and horizontally partitioned into regions by the range of RowKey specified during the table creation.
-
-* Azure Cosmos DB, on the other hand, distributes data into partitions based on the hash value of a specified [partition key](../partitioning-overview.md).
-
-**Column family**
-
-* In HBase, columns are grouped within a Column Family (CF).
-
-* Azure Cosmos DB (SQL API) stores data as [JSON](https://www.json.org/json-en.html) documents, so all properties associated with a JSON data structure apply.
-
-**Timestamp**
-
-* HBase uses timestamps to version multiple instances of a given cell. You can query different versions of a cell by timestamp.
-
-* Azure Cosmos DB ships with the [change feed feature](../change-feed.md), which tracks a persistent record of changes to a container in the order they occur. It outputs the sorted list of documents in the order in which they were modified.
-
-**Data format**
-
-* The HBase data format consists of RowKey, Column Family:Column Name, Timestamp, and Value. The following is an example of an HBase table row:
-
- ```console
- ROW COLUMN+CELL
- 1000 column=Office:Address, timestamp=1611408732448, value=1111 San Gabriel Dr.
- 1000 column=Office:Phone, timestamp=1611408732418, value=1-425-000-0002
- 1000 column=Personal:Name, timestamp=1611408732340, value=John Dole
- 1000 column=Personal:Phone, timestamp=1611408732385, value=1-425-000-0001
- ```
-
-* In the Azure Cosmos DB SQL API, the data format is a JSON object. The partition key resides in a field in the document, and that field determines how the collection is partitioned. Azure Cosmos DB doesn't have the concept of a timestamp used for column family versioning. As highlighted previously, it has change feed support, through which you can track changes performed on a container. The following is an example of a document:
-
- ```json
- {
- "RowId": "1000",
- "OfficeAddress": "1111 San Gabriel Dr.",
- "OfficePhone": "1-425-000-0002",
- "PersonalName": "John Dole",
- "PersonalPhone": "1-425-000-0001",
- }
- ```
-
-> [!TIP]
-> HBase stores data as byte arrays, so if you want to migrate data that contains double-byte characters to Azure Cosmos DB, the data must be UTF-8 encoded.
-
-### Consistency model
-
-HBase offers strictly consistent reads and writes.
-
-Azure Cosmos DB offers [five well-defined consistency levels](../consistency-levels.md). Each level provides availability and performance trade-offs. From strongest to weakest, the consistency levels supported are:
-
-* Strong
-* Bounded staleness
-* Session
-* Consistent prefix
-* Eventual
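-
-For illustration, here's a minimal Java SDK v4 sketch of choosing one of these levels when building the client; the endpoint and key values are placeholders, and the rest of the configuration follows the connection example later in this article:
-
-```java
-import com.azure.cosmos.ConsistencyLevel;
-import com.azure.cosmos.CosmosClient;
-import com.azure.cosmos.CosmosClientBuilder;
-
-// Sketch: choose a consistency level at client creation time.
-// The endpoint and key are placeholders for your account values.
-CosmosClient client = new CosmosClientBuilder()
-    .endpoint("https://<account-name>.documents.azure.com:443/")
-    .key("<account-key>")
-    .consistencyLevel(ConsistencyLevel.SESSION) // or STRONG, BOUNDED_STALENESS, CONSISTENT_PREFIX, EVENTUAL
-    .buildClient();
-```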
-
-### Sizing
-
-**HBase**
-
-For an enterprise-scale deployment of HBase, the Master, RegionServers, and ZooKeeper drive the bulk of the sizing. Like any distributed application, HBase is designed to scale out. HBase performance is primarily driven by the size of the HBase RegionServers. Sizing is driven by two key requirements: the throughput and the size of the dataset that must be stored on HBase.
-
-**Azure Cosmos DB**
-
-Azure Cosmos DB is a PaaS offering from Microsoft, and the underlying infrastructure deployment details are abstracted from end users. When an Azure Cosmos DB container is provisioned, the Azure platform automatically provisions the underlying infrastructure (compute, storage, memory, networking stack) to support the performance requirements of a given workload. The cost of all database operations is normalized by Azure Cosmos DB and is expressed in [Request Units (RUs)](../request-units.md).
-
-To estimate the RUs consumed by your workload, consider the [factors described in the Request Unit considerations](../request-units.md#request-unit-considerations).
-
-A [capacity calculator](estimate-ru-with-capacity-planner.md) is available to assist with the RU sizing exercise.
-
-You can also use [autoscale provisioned throughput](../provision-throughput-autoscale.md) in Azure Cosmos DB to automatically and instantly scale your database or container throughput (RU/s). Throughput is scaled based on usage without impacting workload availability, latency, or performance.
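-
-As a hedged illustration, the following Java SDK v4 sketch provisions a container with autoscale throughput; the `database` object is assumed to already exist, and the container name and partition key path are hypothetical:
-
-```java
-import com.azure.cosmos.models.CosmosContainerProperties;
-import com.azure.cosmos.models.ThroughputProperties;
-
-// Sketch: create a container that autoscales between 10% and 100% of the configured maximum RU/s.
-// "MigratedTable" and "/partitionKey" are hypothetical values.
-CosmosContainerProperties containerProperties =
-    new CosmosContainerProperties("MigratedTable", "/partitionKey");
-ThroughputProperties autoscaleThroughput =
-    ThroughputProperties.createAutoscaledThroughput(4000); // scales between 400 and 4,000 RU/s
-database.createContainerIfNotExists(containerProperties, autoscaleThroughput);
-```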
-
-### Data distribution
-
-**HBase**
-HBase sorts data according to RowKey. The data is then partitioned into regions and stored in RegionServers. The automatic partitioning divides regions horizontally according to the partitioning policy. This is controlled by the value assigned to HBase parameter `hbase.hregion.max.filesize` (default value is 10 GB). A row in HBase with a given RowKey always belongs to one region. In addition, the data is separated on disk for each column family. This enables filtering at the time of reading and isolation of I/O on HFile.
-
-**Azure Cosmos DB**
-Azure Cosmos DB uses [partitioning](../partitioning-overview.md) to scale individual containers in the database. Partitioning divides the items in a container into specific subsets called "logical partitions". Logical partitions are formed based on the value of the "partition key" associated with each item in the container. All items in a logical partition have the same partition key value. Each logical partition can hold up to 20 GB of data.
-
-Each physical partition contains a replica of your data and an instance of the Azure Cosmos DB database engine. This structure makes your data durable and highly available, and throughput is divided equally among the physical partitions. Physical partitions are automatically created and configured, and it's not possible to control their size, location, or which logical partitions they contain. Logical partitions are not split between physical partitions.
-
-As with the HBase RowKey, partition key design is important for Azure Cosmos DB. HBase's RowKey sorts data and stores contiguous data together, while Azure Cosmos DB's partition key is a different mechanism because it hash-distributes data. Even if your HBase application is optimized for HBase data access patterns, reusing the same RowKey as the partition key won't necessarily give good performance results. Because the data is sorted in HBase, an [Azure Cosmos DB composite index](../index-policy.md#composite-indexes) may be useful; it's required if you want to use the ORDER BY clause on more than one field. You can also improve the performance of many equality and range queries by defining a composite index.
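-
-For illustration, a hedged Java SDK v4 sketch that defines a composite index at container creation time; the container name, partition key path, and the two indexed fields are hypothetical:
-
-```java
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
-import com.azure.cosmos.models.CompositePath;
-import com.azure.cosmos.models.CompositePathSortOrder;
-import com.azure.cosmos.models.CosmosContainerProperties;
-import com.azure.cosmos.models.IndexingPolicy;
-
-// Sketch: composite index on /lastName and /officeAddress, both ascending.
-// The container name, partition key path, and field names are hypothetical.
-CompositePath lastName = new CompositePath().setPath("/lastName").setOrder(CompositePathSortOrder.ASCENDING);
-CompositePath officeAddress = new CompositePath().setPath("/officeAddress").setOrder(CompositePathSortOrder.ASCENDING);
-
-IndexingPolicy indexingPolicy = new IndexingPolicy();
-List<List<CompositePath>> compositeIndexes = new ArrayList<>();
-compositeIndexes.add(Arrays.asList(lastName, officeAddress));
-indexingPolicy.setCompositeIndexes(compositeIndexes);
-
-CosmosContainerProperties containerProperties = new CosmosContainerProperties("MigratedTable", "/lastName");
-containerProperties.setIndexingPolicy(indexingPolicy);
-```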
-
-### Availability
-
-**HBase**
-HBase consists of the Master, RegionServers, and ZooKeeper. High availability in a single cluster can be achieved by making each component redundant. When configuring geo-redundancy, you can deploy HBase clusters across different physical data centers and use replication to keep multiple clusters in sync.
-
-**Azure Cosmos DB**
-Azure Cosmos DB doesn't require any configuration such as cluster component redundancy. It provides a comprehensive SLA for high availability, consistency, and latency. See [SLA for Azure Cosmos DB](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_3/) for more detail.
-
-### Data reliability
-
-**HBase**
-HBase is built on Hadoop Distributed File System (HDFS) and data stored on HDFS is replicated three times.
-
-**Azure Cosmos DB**
-Azure Cosmos DB primarily provides high availability in two ways. First, Azure Cosmos DB replicates data between regions configured within your Cosmos account. Second, Azure Cosmos DB keeps four replicas of the data in the region.
-
-## Considerations before migrating
-
-### System dependencies
-
-This aspect of planning focuses on understanding the upstream and downstream dependencies of the HBase instance that's being migrated to Azure Cosmos DB.
-
-An example of a downstream dependency is an application that reads data from HBase; it must be refactored to read from Azure Cosmos DB instead. Consider the following points as part of the migration:
-
-* Questions for assessing dependencies: Is the current HBase system an independent component? Does it call a process on another system, is it called by a process on another system, or is it accessed by using a directory service? Are other important processes running in your HBase cluster? These system dependencies need to be clarified to determine the impact of the migration.
-
-* The RPO and RTO for the on-premises HBase deployment.
-
-### Offline vs. online migration
-
-For a successful data migration, it's important to understand the characteristics of the business that uses the database and decide on a migration approach. Select offline migration if you can completely shut down the system, perform the data migration, and restart the system at the destination. If your database is always busy and you can't afford a long outage, consider migrating online.
-
-> [!NOTE]
-> This document covers only offline migration.
-
-The offline migration approach depends on the version of HBase you're currently running and the tools available. See the [Migrate your data](#migrate-your-data) section for more details.
-
-### Performance considerations
-
-This aspect of planning is about understanding performance targets for HBase and translating them to Azure Cosmos DB semantics. For example, to hit *"X"* IOPS on HBase, how many Request Units (RU/s) are required in Azure Cosmos DB? There are differences between HBase and Azure Cosmos DB; this exercise focuses on building a view of how performance targets from HBase translate to Azure Cosmos DB. This view drives the scaling exercise.
-
-Questions to ask:
-
-* Is the HBase deployment read-heavy or write-heavy?
-* What is the split between reads and writes?
-* What is the target IOPS, expressed as a percentile?
-* How/what applications are used to load data into HBase?
-* How/what applications are used to read data from HBase?
-
-When executing queries that request sorted data, HBase returns the result quickly because the data is sorted by RowKey. Azure Cosmos DB doesn't have such a concept. To optimize performance, you can use [composite indexes](../index-policy.md#composite-indexes) as needed.
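-
-As a sketch, an ORDER BY query over two fields with the Java SDK v4 might look like the following; it assumes a matching composite index (as in the earlier example) and an existing `container` reference, and the field names are hypothetical:
-
-```java
-import com.azure.cosmos.models.CosmosQueryRequestOptions;
-import com.azure.cosmos.util.CosmosPagedIterable;
-import com.fasterxml.jackson.databind.JsonNode;
-
-// Sketch: return items sorted by two fields; requires a matching composite index.
-// 'container' is an existing com.azure.cosmos.CosmosContainer.
-String sql = "SELECT * FROM c ORDER BY c.lastName ASC, c.officeAddress ASC";
-CosmosPagedIterable<JsonNode> results =
-    container.queryItems(sql, new CosmosQueryRequestOptions(), JsonNode.class);
-results.forEach(item -> System.out.println(item.toString()));
-```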
-
-### Deployment considerations
-
-You can use [the Azure portal or Azure CLI to deploy the Azure Cosmos DB SQL API](create-cosmosdb-resources-portal.md). Because the migration destination is the Azure Cosmos DB SQL API, select "Core (SQL)" as the API when deploying. In addition, set geo-redundancy, multi-region writes, and availability zones according to your availability requirements.
-
-### Network consideration
-
-Azure Cosmos DB has three main network options. The first uses a public IP address and controls access with an IP firewall (default). The second uses a public IP address and allows access only from a specific subnet of a specific virtual network (service endpoint). The third joins a private network by using a private IP address (private endpoint).
-
-See the following documents for more information on the three network options:
-
-* [Public IP with Firewall](../how-to-configure-firewall.md)
-* [Public IP with Service Endpoint](../how-to-configure-vnet-service-endpoint.md)
-* [Private Endpoint](../how-to-configure-private-endpoints.md)
-
-## Assess your existing data
-
-### Data discovery
-
-Gather information in advance from your existing HBase cluster to identify the data you want to migrate. This information can help you decide how to migrate, decide which tables to migrate, understand the structure within those tables, and decide how to build your data model. For example, gather details such as the following:
-
-* HBase version
-* Migration target tables
-* Column family information
-* Table status
-
-The following commands show how to collect these details by using the HBase shell and store them in the local file system of the machine where you run them.
-
-#### Get the HBase version
-
-```console
-hbase version -n > hbase-version.txt
-```
-
-**Output:**
-
-```console
-cat hbase-version.txt
-HBase 2.1.8.4.1.2.5
-```
-
-#### Get the list of tables
-
-You can get a list of the tables stored in HBase. If you've created a namespace other than the default, tables are output in the "Namespace: Table" format.
-
-```console
-echo "list" | hbase shell -n > table-list.txt
-```
-
-**Output:**
-
-```console
-echo "list" | hbase shell -n > table-list.txt
-cat table-list.txt
-TABLE
-COMPANY
-Contacts
-ns1:t1
-3 row(s)
-Took 0.4261 seconds
-COMPANY
-Contacts
-ns1:t1
-```
-
-#### Identify the tables to be migrated
-
-Get the details of the column families in the table by specifying the table name to be migrated.
-
-```console
-echo "describe '({Namespace}:){Table name}'" | hbase shell -n > {Table name}-schema.txt
-```
-
-**Output:**
-
-```console
-cat {Table name}-schema.txt
-Table {Table name} is ENABLED
-{Table name}
-COLUMN FAMILIES DESCRIPTION
-{NAME => 'cf1', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
-{NAME => 'cf2', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
-2 row(s)
-Took 0.5775 seconds
-```
-
-#### Get the status of the cluster and tables
-
-```console
-echo "status 'detailed'" | hbase shell -n > hbase-status.txt
-```
-
-**Output:**
-
-```console
-{HBase version}
-0 regionsInTransition
-active master: {Server:Port number}
-2 backup masters
- {Server:Port number}
- {Server:Port number}
-master coprocessors: []
-# live servers
- {Server:Port number}
- requestsPerSecond=0.0, numberOfOnlineRegions=44, usedHeapMB=1420, maxHeapMB=15680, numberOfStores=49, numberOfStorefiles=14, storefileUncompressedSizeMB=7, storefileSizeMB=7, compressionRatio=1.0000, memstoreSizeMB=0, storefileIndexSizeKB=15, readRequestsCount=36210, filteredReadRequestsCount=415729, writeRequestsCount=439, rootIndexSizeKB=15, totalStaticIndexSizeKB=5, totalStaticBloomSizeKB=16, totalCompactingKVs=464, currentCompactedKVs=464, compactionProgressPct=1.0, coprocessors=[GroupedAggregateRegionObserver, Indexer, MetaDataEndpointImpl, MetaDataRegionObserver, MultiRowMutationEndpoint, ScanRegionObserver, SecureBulkLoadEndpoint, SequenceRegionObserver, ServerCachingEndpointImpl, UngroupedAggregateRegionObserver]
-
- [...]
-
- "Contacts,,1611126188216.14a597a0964383a3d923b2613524e0bd."
- numberOfStores=2, numberOfStorefiles=2, storefileUncompressedSizeMB=7168, lastMajorCompactionTimestamp=0, storefileSizeMB=7, compressionRatio=0.0010, memstoreSizeMB=0, readRequestsCount=4393, writeRequestsCount=0, rootIndexSizeKB=14, totalStaticIndexSizeKB=5, totalStaticBloomSizeKB=16, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, dataLocality=0.0
-
-[...]
-
-```
-
-You can get useful sizing information such as the size of heap memory, the number of regions, the number of requests as the status of the cluster, and the size of the data in compressed/uncompressed as the status of the table.
-
-If you're using Apache Phoenix on your HBase cluster, you need to collect data from Phoenix as well.
-
-* Migration target table
-* Table schemas
-* Indexes
-* Primary key
-
-#### Connect to Apache Phoenix on your cluster
-
-```console
-sqlline.py ZOOKEEPER/hbase-unsecure
-```
-
-#### Get the table list
-
-```console
-!tables
-```
-
-#### Get the table details
-
-```console
-!describe <Table Name>
-```
-
-#### Get the index details
-
- ```console
-!indexes <Table Name>
-```
-
-#### Get the primary key details
-
- ```console
-!primarykeys <Table Name>
-```
-
-## Migrate your data
-
-### Migration options
-
-There are various methods to migrate data offline. Here we introduce how to use Azure Data Factory and Apache Spark.
-
-| Solution | Source version | Considerations |
-| --- | --- | --- |
-| Azure Data Factory | HBase < 2 | Easy to set up. Suitable for large datasets. Doesn't support HBase 2 or later. |
-| Apache Spark | All versions | Supports all versions of HBase. Suitable for large datasets. Spark setup required. |
-| Custom tool with Azure Cosmos DB bulk executor library | All versions | Most flexible way to create custom data migration tools by using libraries. Requires more effort to set up. |
-
-Consider conditions such as your HBase version, the size of your dataset, and the acceptable setup effort to choose among the data migration methods above.
-
-### Migrate using Data Factory
-
-This option is suitable for large datasets. The Azure Cosmos DB bulk executor library is used. There are no checkpoints, so if you encounter any issues during the migration, you have to restart the migration process from the beginning. You can also use Data Factory's self-hosted integration runtime to connect to your on-premises HBase, or deploy Data Factory to a managed virtual network and connect to your on-premises network via VPN or ExpressRoute.
-
-Data Factory's Copy activity supports HBase as a data source. See the [Copy data from HBase using Azure Data Factory](../../data-factory/connector-hbase.md) article for more details.
-
-You can specify Cosmos DB (SQL API) as the destination for your data. See the [Copy and transform data in Azure Cosmos DB (SQL API) by using Azure Data Factory](../../data-factory/connector-azure-cosmos-db.md) article for more details.
--
-### Migrate using Apache Spark - Apache HBase Connector & Cosmos DB Spark connector
-
-Here's an example of migrating your data to Azure Cosmos DB. It assumes that HBase 2.1.0 and Spark 2.4.0 are running in the same cluster.
-
-The Apache Spark - Apache HBase Connector repository can be found at [Apache Spark - Apache HBase Connector](https://github.com/hortonworks-spark/shc).
-
-For Azure Cosmos DB Spark connector, refer to the [Quick Start Guide](create-sql-api-spark.md) and download the appropriate library for your Spark version.
-
-1. Copy hbase-site.xml to your Spark configuration directory.
-
- ```console
- cp /etc/hbase/conf/hbase-site.xml /etc/spark2/conf/
- ```
-
-1. Run spark-shell with the Spark HBase connector and the Azure Cosmos DB Spark connector.
-
- ```console
-    spark-shell --packages com.hortonworks.shc:shc-core:1.1.0.3.1.2.2-1 --repositories https://repo.hortonworks.com/content/groups/public/ --jars azure-cosmosdb-spark_2.4.0_2.11-3.6.8-uber.jar
- ```
-
-1. After the Spark shell starts, execute the Scala code as follows. Import the libraries needed to load data from HBase.
-
- ```scala
- // Import libraries
- import org.apache.spark.sql.{SQLContext, _}
- import org.apache.spark.sql.execution.datasources.hbase._
- import org.apache.spark.{SparkConf, SparkContext}
- import spark.sqlContext.implicits._
- ```
-
-1. Define the Spark catalog schema for your HBase tables. Here, the namespace is "default" and the table name is "Contacts". The row key is specified as the key, and the column family and column names are mapped to columns in the Spark catalog.
-
- ```scala
- // define a catalog for the Contacts table you created in HBase
- def catalog = s"""{
- |"table":{"namespace":"default", "name":"Contacts"},
- |"rowkey":"key",
- |"columns":{
- |"rowkey":{"cf":"rowkey", "col":"key", "type":"string"},
- |"officeAddress":{"cf":"Office", "col":"Address", "type":"string"},
- |"officePhone":{"cf":"Office", "col":"Phone", "type":"string"},
- |"personalName":{"cf":"Personal", "col":"Name", "type":"string"},
- |"personalPhone":{"cf":"Personal", "col":"Phone", "type":"string"}
- |}
- |}""".stripMargin
-
- ```
-
-1. Next, define a method to get the data from the HBase Contacts table as a DataFrame.
-
- ```scala
- def withCatalog(cat: String): DataFrame = {
- spark.sqlContext
- .read
- .options(Map(HBaseTableCatalog.tableCatalog->cat))
- .format("org.apache.spark.sql.execution.datasources.hbase")
- .load()
- }
-
- ```
-
-1. Create a DataFrame using the defined method.
-
- ```scala
- val df = withCatalog(catalog)
- ```
-
-1. Then import the libraries needed to use the Cosmos DB Spark connector.
-
- ```scala
- import com.microsoft.azure.cosmosdb.spark.schema._
- import com.microsoft.azure.cosmosdb.spark._
- import com.microsoft.azure.cosmosdb.spark.config.Config
- ```
-
-1. Make settings for writing data to Cosmos DB.
-
- ```scala
-    val writeConfig = Config(Map( "Endpoint" -> "https://<cosmos-db-account-name>.documents.azure.com:443/", "Masterkey" -> "<cosmos-db-master-key>", "Database" -> "<database-name>", "Collection" -> "<collection-name>", "Upsert" -> "true" ))
- ```
-
-1. Write DataFrame data to Azure Cosmos DB.
-
- ```scala
-    import org.apache.spark.sql.SaveMode
-    df.write.mode(SaveMode.Overwrite).cosmosDB(writeConfig)
- ```
-
-Because it writes in parallel at high speed, throughput is high. On the other hand, note that it can consume a large number of RU/s on the Azure Cosmos DB side.
-
-### Phoenix
-
-Phoenix is supported as a Data Factory data source. Refer to the following documents for detailed steps.
-
-* [Copy data from Phoenix using Azure Data Factory](../../data-factory/connector-phoenix.md)
-* [Tutorial: Use Data migration tool to migrate your data to Azure Cosmos DB](../import-data.md)
-* [Copy data from HBase using Azure Data Factory](../../data-factory/connector-hbase.md)
-
-## Migrate your code
-
-This section describes the differences between creating applications in Azure Cosmos DB SQL APIs and HBase. The examples here use Apache HBase 2.x APIs and [Azure Cosmos DB Java SDK v4](sql-api-sdk-java-v4.md).
-
-The HBase sample code is based on the examples in the [official HBase documentation](https://hbase.apache.org/book.html).
-
-The code for Azure Cosmos DB presented here is based on the [Azure Cosmos DB SQL API: Java SDK v4 examples](sql-api-java-sdk-samples.md) documentation. You can access the full code example from the documentation.
-
-The mappings for code migration are shown here, but the HBase RowKeys and Azure Cosmos DB partition keys used in these examples aren't always well designed. Design them according to the actual data model of the migration source.
-
-### Establish connection
-
-**HBase**
-
-```java
-Configuration config = HBaseConfiguration.create();
-config.set("hbase.zookeeper.quorum","zookeepernode0,zookeepernode1,zookeepernode2");
-config.set("hbase.zookeeper.property.clientPort", "2181");
-config.set("hbase.cluster.distributed", "true");
-Connection connection = ConnectionFactory.createConnection(config);
-```
-
-**Phoenix**
-
-```java
-//Use JDBC to get a connection to an HBase cluster
-Connection conn = DriverManager.getConnection("jdbc:phoenix:server1,server2:3333",props);
-```
-
-**Azure Cosmos DB**
-
-```java
-// Create sync client
-client = new CosmosClientBuilder()
- .endpoint(AccountSettings.HOST)
- .key(AccountSettings.MASTER_KEY)
- .consistencyLevel(ConsistencyLevel.{ConsistencyLevel})
- .contentResponseOnWriteEnabled(true)
- .buildClient();
-```
-
-### Create database/table/collection
-
-**HBase**
-
-```java
-// create an admin object using the config
-HBaseAdmin admin = new HBaseAdmin(config);
-// create the table...
-HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf("FamilyTable"));
-// ... with single column families
-tableDescriptor.addFamily(new HColumnDescriptor("ColFam"));
-admin.createTable(tableDescriptor);
-```
-
-**Phoenix**
-
-```java
-CREATE TABLE IF NOT EXISTS FamilyTable ("id" BIGINT not null primary key, "ColFam"."lastName" VARCHAR(50));
-```
-
-**Azure Cosmos DB**
-
-```java
-// Create database if not exists
-CosmosDatabaseResponse databaseResponse = client.createDatabaseIfNotExists(databaseName);
-database = client.getDatabase(databaseResponse.getProperties().getId());
-
-// Create container if not exists
-CosmosContainerProperties containerProperties = new CosmosContainerProperties("FamilyContainer", "/lastName");
-
-// Provision throughput
-ThroughputProperties throughputProperties = ThroughputProperties.createManualThroughput(400);
-
-// Create container with 400 RU/s
-CosmosContainerResponse containerResponse = database.createContainerIfNotExists(containerProperties, throughputProperties);
-container = database.getContainer(containerResponse.getProperties().getId());
-```
-
-### Create row/document
-
-**HBase**
-
-```java
-HTable table = new HTable(config, "FamilyTable");
-Put put = new Put(Bytes.toBytes(RowKey));
-
-put.add(Bytes.toBytes("ColFam"), Bytes.toBytes("id"), Bytes.toBytes("1"));
-put.add(Bytes.toBytes("ColFam"), Bytes.toBytes("lastName"), Bytes.toBytes("Witherspoon"));
-table.put(put);
-```
-
-**Phoenix**
-
-```sql
-UPSERT INTO FamilyTable (id, lastName) VALUES (1, 'Witherspoon');
-```
-
-**Azure Cosmos DB**
-
-Azure Cosmos DB provides type safety via a data model. This example uses a data model named 'Family'.
-
-```java
-public class Family {
- public Family() {
- }
-
- public void setId(String id) {
- this.id = id;
- }
-
- public void setLastName(String lastName) {
- this.lastName = lastName;
- }
-
- private String id="";
- private String lastName="";
-}
-```
-
-The above is part of the code. See [full code example](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/df1840b0b5e3715b8555c29f422e0e7d2bc1d49a/src/main/java/com/azure/cosmos/examples/common/Family.java).
-
-Use the Family class to define document and insert item.
-
-```java
-Family family = new Family();
-family.setLastName("Witherspoon");
-family.setId("1");
-
-// Insert this item as a document
-// Explicitly specifying the /pk value improves performance.
-container.createItem(family,new PartitionKey(family.getLastName()),new CosmosItemRequestOptions());
-```
-
-### Read row/document
-
-**HBase**
-
-```java
-HTable table = new HTable(config, "FamilyTable");
-
-Get get = new Get(Bytes.toBytes(RowKey));
-get.addColumn(Bytes.toBytes("ColFam"), Bytes.toBytes("lastName"));
-
-Result result = table.get(get);
-
-byte[] col = result.getValue(Bytes.toBytes("ColFam"), Bytes.toBytes("lastName"));
-```
-
-**Phoenix**
-
-```sql
-SELECT lastName FROM FamilyTable;
-```
-
-**Azure Cosmos DB**
-
-```java
-// Read document by ID
-Family family = container.readItem(documentId,new PartitionKey(documentLastName),Family.class).getItem();
-
-String sql = "SELECT lastName FROM c";
-
-CosmosPagedIterable<Family> filteredFamilies = container.queryItems(sql, new CosmosQueryRequestOptions(), Family.class);
-```
-
-### Update data
-
-**HBase**
-
-For HBase, use the append and checkAndPut methods to update values. append atomically appends a value to the end of the current value, and checkAndPut atomically compares the current value with an expected value and updates it only if they match.
-
-```java
-// append
-HTable table = new HTable(config, "FamilyTable");
-Append append = new Append(Bytes.toBytes(RowKey));
-append.add(Bytes.toBytes("ColFam"), Bytes.toBytes("id"), Bytes.toBytes(2));
-append.add(Bytes.toBytes("ColFam"), Bytes.toBytes("lastName"), Bytes.toBytes("Harris"));
-Result result = table.append(append);
-
-// checkAndPut
-byte[] row = Bytes.toBytes(RowKey);
-byte[] colfam = Bytes.toBytes("ColFam");
-byte[] col = Bytes.toBytes("lastName");
-Put put = new Put(row);
-put.add(colfam, col, Bytes.toBytes("Patrick"));
-boolean result = table.checkAndPut(row, colfam, col, Bytes.toBytes("Witherspoon"), put);
-```
-
-**Phoenix**
-
-```sql
-UPSERT INTO FamilyTable (id, lastName) VALUES (1, 'Brown')
-ON DUPLICATE KEY UPDATE id = '1', lastName = 'Witherspoon';
-```
-
-**Azure Cosmos DB**
-
-In Azure Cosmos DB, updates are treated as upsert operations: if the document doesn't exist, it's inserted.
-
-```java
-// Replace existing document with new modified document (contingent on modification).
-
-Family family = new Family();
-family.setLastName("Brown");
-family.setId("1");
-
-CosmosItemResponse<Family> famResp = container.upsertItem(family, new CosmosItemRequestOptions());
-```
-
-### Delete row/document
-
-**HBase**
-
-In HBase, there's no direct way to delete a row by selecting it by value. You may have implemented the delete process in combination with ValueFilter and similar filters. In this example, the row to be deleted is specified by its RowKey.
-
-```java
-HTable table = new HTable(config, "FamilyTable");
-
-Delete delete = new Delete(Bytes.toBytes(RowKey));
-delete.deleteColumn(Bytes.toBytes("ColFam"), Bytes.toBytes("id"));
-delete.deleteColumn(Bytes.toBytes("ColFam"), Bytes.toBytes("lastName"));
-
-table.delete(delete);
-```
-
-**Phoenix**
-
-```sql
-DELETE FROM FamilyTable WHERE id = 'xxx';
-```
-
-**Azure Cosmos DB**
-
-The deletion method by Document ID is shown below.
-
- ```java
-container.deleteItem(documentId, new PartitionKey(documentLastName), new CosmosItemRequestOptions());
-```
-
-### Query rows/documents
-
-**HBase**
-HBase allows you to retrieve multiple rows by using Scan. You can use a Filter to specify detailed scan conditions. See [Client Request Filters](https://hbase.apache.org/book.html#client.filter) for HBase built-in filter types.
-
-```java
-HTable table = new HTable(config, "FamilyTable");
-
-Scan scan = new Scan();
-SingleColumnValueFilter filter = new SingleColumnValueFilter(Bytes.toBytes("ColFam"),
-Bytes.toBytes("lastName"), CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("Witherspoon")));
-filter.setFilterIfMissing(true);
-filter.setLatestVersionOnly(true);
-scan.setFilter(filter);
-
-ResultScanner scanner = table.getScanner(scan);
-```
-
-**Phoenix**
-
-```sql
-SELECT * FROM FamilyTable WHERE lastName = 'Witherspoon';
-```
-
-**Azure Cosmos DB**
-
-Filter operation
-
- ```java
-String sql = "SELECT * FROM c WHERE c.lastName = 'Witherspoon'";
-CosmosPagedIterable<Family> filteredFamilies = container.queryItems(sql, new CosmosQueryRequestOptions(), Family.class);
-```
-
-### Delete table/collection
-
-**HBase**
-
-```java
-HBaseAdmin admin = new HBaseAdmin(config);
-admin.deleteTable("FamilyTable");
-```
-
-**Phoenix**
-
-```sql
-DROP TABLE IF EXISTS FamilyTable;
-```
-
-**Azure Cosmos DB**
-
-```java
-CosmosContainerResponse containerResp = database.getContainer("FamilyContainer").delete(new CosmosContainerRequestOptions());
-```
-
-### Other considerations
-
-HBase clusters are often used together with other workloads such as MapReduce, Hive, and Spark. If you have other workloads alongside your current HBase deployment, they also need to be migrated. For details, refer to each migration guide.
-
-* MapReduce
-* HBase
-* Spark
-
-### Server-side programming
-
-HBase offers several server-side programming features. If you are using these features, you will also need to migrate their processing.
-
-**HBase**
-
-* [Custom filters](https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/Filter.html)
-
- Various filters are available by default in HBase, but you can also implement your own custom filters when the default filters don't meet your requirements.
-
-* [Coprocessor](https://hbase.apache.org/book.html#_types_of_coprocessors)
-
- The coprocessor is a framework that allows you to run your own code on the RegionServer. By using coprocessors, processing that was executed on the client side can be performed on the server side instead, which can make some operations more efficient. There are two types of coprocessors: Observer and Endpoint.
-
- * Observer
- * Observer hooks specific operations and events so that you can add arbitrary processing. It's a feature similar to RDBMS triggers.
-
- * Endpoint
- * Endpoint is a feature for extending HBase RPC. It's similar to an RDBMS stored procedure.
-
-**Azure Cosmos DB**
-
-* [Stored Procedure](how-to-write-stored-procedures-triggers-udfs.md#stored-procedures)
-
- * Azure Cosmos DB stored procedures are written in JavaScript and can perform operations such as creating, updating, reading, querying, and deleting items in Cosmos DB containers.
-
-* [Trigger](how-to-write-stored-procedures-triggers-udfs.md#triggers)
-
- * Triggers can be specified for operations on the database. There are two methods provided: a pre-trigger that runs before the database item changes and a post-trigger that runs after the database item changes.
-
-* [UDF](how-to-write-stored-procedures-triggers-udfs.md#udfs)
-
- * Azure Cosmos DB allows you to define User-Defined Functions (UDFs). UDFs can also be written in JavaScript.
-
-Stored procedures and triggers consume RUs based on the complexity of the operations performed. When developing server-side processing, check the required usage to better understand the number of RUs consumed by each operation. See [Request Units in Azure Cosmos DB](../request-units.md) and [Optimize request cost in Azure Cosmos DB](../optimize-cost-reads-writes.md) for details.
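-
-To make this concrete, here's a hedged Java SDK v4 sketch that registers and executes a trivial stored procedure; the procedure ID, JavaScript body, and partition key value are hypothetical, and the `container` reference is assumed to exist:
-
-```java
-import java.util.Arrays;
-import com.azure.cosmos.CosmosStoredProcedure;
-import com.azure.cosmos.models.CosmosStoredProcedureProperties;
-import com.azure.cosmos.models.CosmosStoredProcedureRequestOptions;
-import com.azure.cosmos.models.CosmosStoredProcedureResponse;
-import com.azure.cosmos.models.PartitionKey;
-
-// Sketch: register a JavaScript stored procedure, then execute it scoped to one partition key.
-// 'container' is an existing com.azure.cosmos.CosmosContainer; names and values are hypothetical.
-String body = "function greet(name) { getContext().getResponse().setBody('Hello ' + name); }";
-container.getScripts().createStoredProcedure(new CosmosStoredProcedureProperties("greet", body));
-
-CosmosStoredProcedure sproc = container.getScripts().getStoredProcedure("greet");
-CosmosStoredProcedureRequestOptions options = new CosmosStoredProcedureRequestOptions();
-options.setPartitionKey(new PartitionKey("Witherspoon"));
-CosmosStoredProcedureResponse response = sproc.execute(Arrays.<Object>asList("HBase"), options);
-System.out.println(response.getResponseAsString());
-```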
-
-Server-side programming mappings
-
-| HBase | Azure Cosmos DB | Description |
-| --- | --- | --- |
-| Custom filters | WHERE Clause | If the processing implemented by the custom filter cannot be achieved by the WHERE clause in Azure Cosmos DB, use UDF in combination. See [UDF examples](sql-query-udfs.md#examples) for an example of using UDF to further filter the results of the WHERE clause. |
-| Coprocessor (Observer) | Trigger | Observer is a trigger that executes before and after a particular event. Just as Observer supports pre- and post-calls, Cosmos DB's Trigger also supports pre- and post-triggers. |
-| Coprocessor (Endpoint) | Stored Procedure | Endpoint is a server-side data processing mechanism that is executed for each region. This is similar to an RDBMS stored procedure. Cosmos DB stored procedures are written using JavaScript. It provides access to all the operations you can perform on Cosmos DB through stored procedures. |
-
-> [!NOTE]
-> Different mappings and implementations may be required in Cosmos DB depending on the processing implemented on HBase.
-
-## Security
-
-Data security is a shared responsibility of the customer and the database provider. For on-premises solutions, customers have to provide everything from endpoint protection to physical hardware security, which isn't an easy task. If you choose a PaaS cloud database provider such as Azure Cosmos DB, customer involvement is reduced. For Microsoft's security shared responsibility model, see [Shared Responsibilities for Cloud Computing](https://gallery.technet.microsoft.com/Shared-Responsibilities-81d0ff91). Azure Cosmos DB runs on the Azure platform, so its security can be enhanced in ways that differ from HBase. Azure Cosmos DB doesn't require any extra components to be installed for security. We recommend that you review your database system security implementation by using the following checklist when you migrate:
-
-| **Security control** | **HBase** | **Azure Cosmos DB** |
-| --- | --- | --- |
-| Network Security and firewall setting | Control traffic using security functions such as network devices. | Supports policy-based IP-based access control on the inbound firewall. |
-| User authentication and fine-grained user controls | Fine-grained access control by combining LDAP with security components such as Apache Ranger. | You can use the account primary key to create user and permission resources for each database. Resource tokens are associated with permissions in the database to determine how users can access application resources in the database (read/write, read-only, or no access). You can also use your Azure Active Directory (AAD) ID to authenticate your data requests. This allows you to authorize data requests using a fine-grained RBAC model.|
-| Ability to replicate data globally for regional failures | Make a database replica in a remote data center by using HBase replication. | Azure Cosmos DB performs configuration-free global distribution and allows you to replicate data to data centers around the world in Azure by selecting a button. In terms of security, global replication ensures that your data is protected from local failures. |
-| Ability to fail over from one data center to another | You need to implement failover yourself. | If you're replicating data to multiple data centers and the region's data center goes offline, Azure Cosmos DB automatically fails over the operation. |
-| Local data replication within a data center | The HDFS mechanism allows you to have multiple replicas across nodes within a single file system. | Cosmos DB automatically replicates data to maintain high availability, even within a single data center. You can choose the consistency level yourself. |
-| Automatic data backups | There's no automatic backup function. You need to implement data backups yourself. | Azure Cosmos DB backups are taken regularly and stored in geo-redundant storage. |
-| Protect and isolate sensitive data | For example, if you're using Apache Ranger, you can apply a Ranger policy to the table. | You can separate personal and other sensitive data into specific containers and restrict read/write access, or limit specific users to read-only access. |
-| Monitoring for attacks | It needs to be implemented using third party products. | By using [audit logging and activity logs](../monitor-cosmos-db.md), you can monitor your account for normal and abnormal activity. |
-| Responding to attacks | It needs to be implemented using third party products. | When you contact Azure support and report a potential attack, a five-step incident response process begins. |
-| Ability to geo-fence data to adhere to data governance restrictions | You need to check the restrictions of each country and implement it yourself. | Guarantees data governance for sovereign regions (Germany, China, US Gov, etc.). |
-| Physical protection of servers in protected data centers | It depends on the data center where the system is located. | For a list of the latest certifications, see the global [Azure compliance site](/compliance/regulatory/offering-home?view=o365-worldwide&preserve-view=true). |
-| Certifications | Depends on the Hadoop distribution. | See [Azure compliance documentation](../compliance.md) |
-
-For more information on security, see [Security in Azure Cosmos DB - overview](../database-security.md).
-
-## Monitoring
-
-HBase typically monitors the cluster using the cluster metric web UI or with Ambari, Cloudera Manager, or other monitoring tools. Azure Cosmos DB allows you to use the monitoring mechanism built into the Azure platform. For more information on Azure Cosmos DB monitoring, see [Monitor Azure Cosmos DB](../monitor-cosmos-db.md).
-
-If your environment implements HBase system monitoring to send alerts, such as by email, you may be able to replace it with Azure Monitor alerts. You can receive alerts based on metrics or activity log events for your Azure Cosmos DB account.
-
-For more information on alerts in Azure Monitor, see [Create alerts for Azure Cosmos DB using Azure Monitor](../create-alerts.md).
-
-Also, see [Cosmos DB metrics and log types](../monitor-cosmos-db-reference.md) that can be collected by Azure Monitor.
-
-## Backup & disaster recovery
-
-### Backup
-
-There are several ways to back up HBase: for example, Snapshot, Export, CopyTable, offline backup of HDFS data, and other custom backups.
-
-Azure Cosmos DB automatically backs up data at periodic intervals without affecting the performance or availability of database operations. Backups are stored in Azure Storage and can be used to recover data if needed. There are two types of Azure Cosmos DB backups:
-
-* [Periodic backup](../configure-periodic-backup-restore.md)
-
-* [Continuous backup](../continuous-backup-restore-introduction.md)
-
-### Disaster recovery
-
-HBase is a fault-tolerant distributed system, but you must implement disaster recovery by using features such as Snapshot or replication when failover to a backup location is required after a data-center-level failure. HBase replication can be set up with three replication models: Leader-Follower, Leader-Leader, and Cyclic. If your source HBase implements disaster recovery, you need to understand how to configure disaster recovery in Azure Cosmos DB to meet your system requirements.
-
-Azure Cosmos DB is a globally distributed database with built-in disaster recovery capabilities. You can replicate your DB data to any Azure region. Azure Cosmos DB keeps your database highly available in the unlikely event of a failure in some regions.
-
-An Azure Cosmos DB account that uses only a single region can lose availability in the event of a region failure. We recommend that you configure at least two regions to ensure high availability at all times. You can also ensure high availability for both writes and reads by configuring your Azure Cosmos DB account to span at least two regions with multiple write regions. For multi-region accounts that consist of multiple write regions, failover between regions is detected and handled by the Azure Cosmos DB client. These failovers are momentary and don't require any changes from the application. In this way, you can achieve an availability configuration that includes disaster recovery for Azure Cosmos DB. As mentioned earlier, HBase replication can be set up with three models, but Azure Cosmos DB can be set up with SLA-based availability by configuring single-write and multi-write regions.
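-
-As a hedged sketch, the Java SDK v4 client can list preferred regions so that the SDK routes requests to the next region during an outage; the endpoint, key, and region names are placeholders, and the regions (and multi-region writes, if desired) must also be enabled on the account:
-
-```java
-import java.util.Arrays;
-import com.azure.cosmos.ConsistencyLevel;
-import com.azure.cosmos.CosmosClient;
-import com.azure.cosmos.CosmosClientBuilder;
-
-// Sketch: prefer East US, fall back to West US. Assumes both regions are added to the account
-// and, for multi-region writes, that the account has multiple write regions enabled.
-CosmosClient client = new CosmosClientBuilder()
-    .endpoint("https://<account-name>.documents.azure.com:443/")
-    .key("<account-key>")
-    .preferredRegions(Arrays.asList("East US", "West US"))
-    .multipleWriteRegionsEnabled(true)
-    .consistencyLevel(ConsistencyLevel.SESSION)
-    .buildClient();
-```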
-
-For more information on high availability, see [How does Azure Cosmos DB provide high availability](../high-availability.md).
-
-## Frequently asked questions
-
-#### Why migrate to SQL API instead of other APIs in Azure Cosmos DB?
-
-SQL API provides the best end-to-end experience in terms of interface, service, and SDK client libraries. New features rolled out to Azure Cosmos DB are first available in SQL API accounts. In addition, the SQL API supports analytics and provides performance separation between production and analytics workloads. If you want to use modernized technologies to build your apps, SQL API is the recommended option.
-
-#### Can I assign the HBase RowKey to the Azure Cosmos DB partition key?
-
-It may not be optimal to use the RowKey as-is. In HBase, the data is sorted by the specified RowKey, stored in Regions, and divided into fixed sizes. This behaves differently than partitioning in Azure Cosmos DB. Therefore, the keys need to be redesigned to better distribute the data according to the characteristics of the workload. See the [Distribution](#data-distribution) section for more details.
-
-#### Data is sorted by RowKey in HBase, but data is partitioned by key in Azure Cosmos DB. How can Azure Cosmos DB achieve sorting and collocation?
-
-In Azure Cosmos DB, you can add a composite index to sort your data in ascending or descending order, which improves the performance of equality and range queries. See the [Distribution](#data-distribution) section and the [composite indexes](../index-policy.md#composite-indexes) documentation for more details.
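-
-As a rough sketch of that approach, the following Java SDK v4 snippet defines a single composite index over hypothetical `/deviceId` (ascending) and `/timestamp` (descending) paths when creating a container; the container name, partition key, and paths are placeholders, not values from this article.
-
-```java
-import java.util.Arrays;
-
-import com.azure.cosmos.CosmosDatabase;
-import com.azure.cosmos.models.CompositePath;
-import com.azure.cosmos.models.CompositePathSortOrder;
-import com.azure.cosmos.models.CosmosContainerProperties;
-import com.azure.cosmos.models.IndexingPolicy;
-
-// Assumes an existing CosmosDatabase instance named "database"
-CosmosContainerProperties containerProperties =
-        new CosmosContainerProperties("Readings", "/deviceId");
-
-IndexingPolicy indexingPolicy = new IndexingPolicy();
-
-// One composite index: deviceId ascending, timestamp descending
-CompositePath deviceIdPath = new CompositePath();
-deviceIdPath.setPath("/deviceId");
-deviceIdPath.setOrder(CompositePathSortOrder.ASCENDING);
-
-CompositePath timestampPath = new CompositePath();
-timestampPath.setPath("/timestamp");
-timestampPath.setOrder(CompositePathSortOrder.DESCENDING);
-
-indexingPolicy.setCompositeIndexes(Arrays.asList(Arrays.asList(deviceIdPath, timestampPath)));
-containerProperties.setIndexingPolicy(indexingPolicy);
-
-database.createContainerIfNotExists(containerProperties);
-```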
-
-#### Analytical processing is executed on HBase data with Hive or Spark. How can I modernize them in Cosmos DB?
-
-You can use the Azure Cosmos DB analytical store to automatically synchronize operational data to a separate column store. The column store format is suitable for large analytical queries that are executed in an optimized way, which improves latency for such queries. Azure Synapse Link allows you to build an ETL-free HTAP solution by linking directly from Azure Synapse Analytics to the Azure Cosmos DB analytical store. This allows you to perform large-scale, near-real-time analysis of operational data. Synapse Analytics supports Apache Spark and serverless SQL pools against the Azure Cosmos DB analytical store. You can take advantage of this feature to migrate your analytical processing. See [Analytical store](../analytical-store-introduction.md) for more information.
-
-#### How can users migrate timestamp-based queries in HBase to Azure Cosmos DB?
-
-Azure Cosmos DB doesn't have exactly the same timestamp versioning feature as HBase, but it provides access to the [change feed](../change-feed.md), which you can use for versioning:
-
-* Store every version/change as a separate item.
-
-* Read the change feed to merge or consolidate changes and trigger appropriate actions downstream by filtering on the `_ts` field. A query sketch follows this list.
-
-* For older data, you can expire outdated versions by using [TTL](time-to-live.md).
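-
-A minimal sketch of such a filter with the Java SDK v4 follows; the container reference, the item type, and the `@since` epoch-seconds value are assumptions for illustration.
-
-```java
-import java.util.Collections;
-
-import com.azure.cosmos.CosmosContainer;
-import com.azure.cosmos.models.CosmosQueryRequestOptions;
-import com.azure.cosmos.models.SqlParameter;
-import com.azure.cosmos.models.SqlQuerySpec;
-import com.azure.cosmos.util.CosmosPagedIterable;
-import com.fasterxml.jackson.databind.JsonNode;
-
-// Assumes an existing CosmosContainer instance named "container"
-// Return only the versions written after a given epoch-second timestamp
-SqlQuerySpec query = new SqlQuerySpec(
-        "SELECT * FROM c WHERE c._ts > @since",
-        Collections.singletonList(new SqlParameter("@since", 1665100800)));
-
-CosmosPagedIterable<JsonNode> changedItems =
-        container.queryItems(query, new CosmosQueryRequestOptions(), JsonNode.class);
-
-changedItems.forEach(item -> System.out.println(item.toPrettyString()));
-```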
-
-## Next steps
-
-* To do performance testing, see the [Performance and scale testing with Azure Cosmos DB](./performance-testing.md) article.
-
-* To optimize your code, see the [Performance tips for Azure Cosmos DB](./performance-tips-async-java.md) article.
-
-* Explore the Java Async V3 SDK in the [SDK reference](https://github.com/Azure/azure-cosmosdb-java/tree/v3) GitHub repo.
cosmos-db Migrate Java V4 Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-java-v4-sdk.md
- Title: Migrate your application to use the Azure Cosmos DB Java SDK v4 (com.azure.cosmos)
-description: Learn how to upgrade your existing Java application from using the older Azure Cosmos DB Java SDKs to the newer Java SDK 4.0 (com.azure.cosmos package) for Core (SQL) API.
------- Previously updated : 08/26/2021--
-# Migrate your application to use the Azure Cosmos DB Java SDK v4
-
-> [!IMPORTANT]
-> For more information about this SDK, please view the Azure Cosmos DB Java SDK v4 [Release notes](sql-api-sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4-sql.md), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4-sql.md).
->
-
-> [!IMPORTANT]
-> Because Azure Cosmos DB Java SDK v4 has up to 20% enhanced throughput, TCP-based direct mode, and support for the latest backend service features, we recommend you upgrade to v4 at the next opportunity. Continue reading below to learn more.
->
-
-Update to the latest Azure Cosmos DB Java SDK to get the best of what Azure Cosmos DB has to offer - a managed non-relational database service with competitive performance, five-nines availability, one-of-a-kind resource governance, and more. This article explains how to upgrade your existing Java application that is using an older Azure Cosmos DB Java SDK to the newer Azure Cosmos DB Java SDK 4.0 for Core (SQL) API. Azure Cosmos DB Java SDK v4 corresponds to the `com.azure.cosmos` package. You can use the instructions in this doc if you are migrating your application from any of the following Azure Cosmos DB Java SDKs:
-
-* Sync Java SDK 2.x.x
-* Async Java SDK 2.x.x
-* Java SDK 3.x.x
-
-## Azure Cosmos DB Java SDKs and package mappings
-
-The following table lists the different Azure Cosmos DB Java SDKs, their package names, and release information:
-
-| Java SDK | Release Date | Bundled APIs | Maven Jar | Java package name | API Reference | Release Notes | Retire date |
-|---|---|---|---|---|---|---|---|
-| Async 2.x.x | June 2018 | Async (RxJava) | `com.microsoft.azure::azure-cosmosdb` | `com.microsoft.azure.cosmosdb.rx` | [API](https://azure.github.io/azure-cosmosdb-jav) | - | August 31, 2024 |
-| Sync 2.x.x | Sept 2018 | Sync | `com.microsoft.azure::azure-documentdb` | `com.microsoft.azure.cosmosdb` | [API](https://azure.github.io/azure-cosmosdb-jav) | - | February 29, 2024 |
-| 3.x.x | July 2019 | Async (Reactor)/Sync | `com.microsoft.azure::azure-cosmos` | `com.azure.data.cosmos` | [API](https://azure.github.io/azure-cosmosdb-java/3.0.0/) | - | August 31, 2024 |
-| 4.0 | June 2020 | Async (Reactor)/Sync | `com.azure::azure-cosmos` | `com.azure.cosmos` | [API](/java/api/overview/azure/cosmosdb) | - | - |
-
-## SDK level implementation changes
-
-The following are the key implementation differences between different SDKs:
-
-### RxJava is replaced with reactor in Azure Cosmos DB Java SDK versions 3.x.x and 4.0
-
-If you are unfamiliar with asynchronous programming or Reactive Programming, see the [Reactor pattern guide](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-pattern-guide.md) for an introduction to async programming and Project Reactor. This guide may be useful if you have been using Azure Cosmos DB Sync Java SDK 2.x.x or Azure Cosmos DB Java SDK 3.x.x Sync API in the past.
-
-If you have been using Azure Cosmos DB Async Java SDK 2.x.x, and you plan on migrating to the 4.0 SDK, see the [Reactor vs RxJava Guide](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) for guidance on converting RxJava code to use Reactor.
-
-### Azure Cosmos DB Java SDK v4 has direct connectivity mode in both Async and Sync APIs
-
-If you have been using Azure Cosmos DB Sync Java SDK 2.x.x, note that the direct connection mode based on TCP (as opposed to HTTP) is implemented in Azure Cosmos DB Java SDK 4.0 for both the Async and Sync APIs.
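-
-A minimal sketch of selecting direct mode with the v4 builder is shown below; the endpoint and key values are placeholders.
-
-```java
-import com.azure.cosmos.ConsistencyLevel;
-import com.azure.cosmos.CosmosAsyncClient;
-import com.azure.cosmos.CosmosClientBuilder;
-
-// Placeholder endpoint and key values
-CosmosAsyncClient asyncClient = new CosmosClientBuilder()
-        .endpoint("https://<your-account>.documents.azure.com:443/")
-        .key("<your-key>")
-        .consistencyLevel(ConsistencyLevel.EVENTUAL)
-        .directMode()   // TCP-based direct connectivity
-        .buildAsyncClient();
-
-// The same builder can also produce a sync client with buildClient()
-```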
-
-## API level changes
-
-The following are the API level changes in Azure Cosmos DB Java SDK 4.x.x compared to previous SDKs (Java SDK 3.x.x, Async Java SDK 2.x.x, and Sync Java SDK 2.x.x):
--
-* The Azure Cosmos DB Java SDK 3.x.x and 4.0 refer to the client resources as `Cosmos<resourceName>`. For example, `CosmosClient`, `CosmosDatabase`, and `CosmosContainer`. In version 2.x.x, the Azure Cosmos DB Java SDKs don't have a uniform naming scheme.
-
-* Azure Cosmos DB Java SDK 3.x.x and 4.0 offer both Sync and Async APIs.
-
- * **Java SDK 4.0** : All the classes belong to the Sync API unless the class name is appended with `Async` after `Cosmos`.
-
- * **Java SDK 3.x.x**: All the classes belong to the Async API unless the class name is appended with `Async` after `Cosmos`.
-
- * **Async Java SDK 2.x.x**: The class names are similar to those in Sync Java SDK 2.x.x; however, the names start with *Async*.
-
-### Hierarchical API structure
-
-Azure Cosmos DB Java SDK 4.0 and 3.x.x introduce a hierarchical API structure that organizes the clients, databases, and containers in a nested fashion as shown in the following 4.0 SDK code snippet:
-
-```java
-CosmosContainer container = client.getDatabase("MyDatabaseName").getContainer("MyContainerName");
-
-```
-
-In version 2.x.x of the Azure Cosmos DB Java SDK, all operations on resources and documents are performed through the client instance.
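-
-For contrast, a hypothetical 2.x.x Sync API read goes through the client and addresses the resource with a name-based link; the database, container, document, and partition key values below are placeholders.
-
-```java
-import com.microsoft.azure.documentdb.Document;
-import com.microsoft.azure.documentdb.DocumentClient;
-import com.microsoft.azure.documentdb.PartitionKey;
-import com.microsoft.azure.documentdb.RequestOptions;
-
-// Assumes an existing DocumentClient instance named "client"
-RequestOptions options = new RequestOptions();
-options.setPartitionKey(new PartitionKey("MyPartitionKeyValue"));
-
-Document document = client
-        .readDocument("dbs/MyDatabaseName/colls/MyContainerName/docs/MyDocumentId", options)
-        .getResource();
-```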
-
-### Representing documents
-
-In Azure Cosmos DB Java SDK 4.0, custom POJOs and `JsonNode` instances are the two options for reading and writing documents in Azure Cosmos DB.
-
-In the Azure Cosmos DB Java SDK 3.x.x, the `CosmosItemProperties` object is exposed by the public API and serves as the document representation. This class is no longer exposed publicly in version 4.0.
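-
-As a short sketch (the item ID, partition key value, and the `Family` POJO are hypothetical), the same v4 read can deserialize into either representation:
-
-```java
-import com.azure.cosmos.CosmosContainer;
-import com.azure.cosmos.models.CosmosItemResponse;
-import com.azure.cosmos.models.PartitionKey;
-import com.fasterxml.jackson.databind.JsonNode;
-
-// Assumes an existing CosmosContainer instance named "container"
-// Read the item as a raw JsonNode...
-CosmosItemResponse<JsonNode> jsonResponse = container.readItem(
-        "MyItemId", new PartitionKey("MyPartitionKeyValue"), JsonNode.class);
-
-// ...or as a custom POJO whose fields match the document
-CosmosItemResponse<Family> pojoResponse = container.readItem(
-        "MyItemId", new PartitionKey("MyPartitionKeyValue"), Family.class);
-```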
-
-### Imports
-
-* The Azure Cosmos DB Java SDK 4.0 packages begin with `com.azure.cosmos`
-* Azure Cosmos DB Java SDK 3.x.x packages begin with `com.azure.data.cosmos`
-* Azure Cosmos DB Java SDK 2.x.x Sync API packages begin with `com.microsoft.azure.documentdb`
-
-* Azure Cosmos DB Java SDK 4.0 places several classes in a nested package `com.azure.cosmos.models`. Some of these classes include the following; a sketch of typical imports follows this list:
-
- * `CosmosContainerResponse`
- * `CosmosDatabaseResponse`
- * `CosmosItemResponse`
- * The Async API analogs for all of the above classes
- * `CosmosContainerProperties`
- * `FeedOptions`
- * `PartitionKey`
- * `IndexingPolicy`
- * `IndexingMode`, and so on
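-
-A sketch of what typical v4 imports might look like, using class names from the list above:
-
-```java
-// Core client types stay in com.azure.cosmos
-import com.azure.cosmos.CosmosClient;
-import com.azure.cosmos.CosmosClientBuilder;
-import com.azure.cosmos.CosmosContainer;
-import com.azure.cosmos.CosmosDatabase;
-
-// Model types live in the nested com.azure.cosmos.models package
-import com.azure.cosmos.models.CosmosContainerProperties;
-import com.azure.cosmos.models.IndexingPolicy;
-import com.azure.cosmos.models.PartitionKey;
-```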
-
-### Accessors
-
-Azure Cosmos DB Java SDK 4.0 exposes `get` and `set` methods to access the instance members. For example, the `CosmosContainer` instance has `container.getId()` and `container.setId()` methods.
-
-This is different from Azure Cosmos DB Java SDK 3.x.x, which exposes a fluent interface. For example, a `CosmosSyncContainer` instance has `container.id()`, which is overloaded to get or set the `id` value.
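-
-A brief sketch of the difference, using a `CosmosContainerProperties` instance as an example (the names and TTL value are placeholders):
-
-```java
-import com.azure.cosmos.models.CosmosContainerProperties;
-
-// v4: JavaBeans-style get/set accessors
-CosmosContainerProperties containerProperties =
-        new CosmosContainerProperties("MyContainerName", "/myPartitionKey");
-String containerId = containerProperties.getId();        // get
-containerProperties.setDefaultTimeToLiveInSeconds(3600);  // set
-
-// 3.x.x (for comparison): fluent, overloaded accessors, for example
-// containerProperties.id()          // get
-// containerProperties.id("NewId")   // set
-```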
-
-## Code snippet comparisons
-
-### Create resources
-
-The following code snippet shows the differences in how resources are created between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
-
-# [Java SDK 4.0 Async API](#tab/java-v4-async)
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateJavaSDKv4ResourceAsync)]
-
-# [Java SDK 3.x.x Async API](#tab/java-v3-async)
-
-```java
-ConnectionPolicy defaultPolicy = ConnectionPolicy.defaultPolicy();
-// Setting the preferred location to Cosmos DB Account region
-defaultPolicy.preferredLocations(Lists.newArrayList("Your Account Location"));
-
-// Create async client
-// <CreateAsyncClient>
-client = new CosmosClientBuilder()
- .endpoint("your.hostname")
- .key("yourmasterkey")
- .connectionPolicy(defaultPolicy)
- .consistencyLevel(ConsistencyLevel.EVENTUAL)
- .build();
-
-// Create database with specified name
-client.createDatabaseIfNotExists("YourDatabaseName")
- .flatMap(databaseResponse -> {
- database = databaseResponse.database();
- // Container properties - name and partition key
- CosmosContainerProperties containerProperties =
- new CosmosContainerProperties("YourContainerName", "/id");
- // Create container with specified properties & provisioned throughput
- return database.createContainerIfNotExists(containerProperties, 400);
- }).flatMap(containerResponse -> {
- container = containerResponse.container();
- return Mono.empty();
-}).subscribe();
-```
-
-# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
-
-```java
-ConnectionPolicy defaultPolicy = ConnectionPolicy.GetDefault();
-// Setting the preferred location to Cosmos DB Account region
-defaultPolicy.setPreferredLocations(Lists.newArrayList("Your Account Location"));
-
-// Create document client
-// <CreateDocumentClient>
-client = new DocumentClient("your.hostname", "your.masterkey", defaultPolicy, ConsistencyLevel.Eventual);
-
-// Create database with specified name
-Database databaseDefinition = new Database();
-databaseDefinition.setId("YourDatabaseName");
-ResourceResponse<Database> databaseResourceResponse = client.createDatabase(databaseDefinition, new RequestOptions());
-
-// Read database with specified name
-String databaseLink = "dbs/YourDatabaseName";
-databaseResourceResponse = client.readDatabase(databaseLink, new RequestOptions());
-Database database = databaseResourceResponse.getResource();
-
-// Create container with specified name
-DocumentCollection documentCollection = new DocumentCollection();
-documentCollection.setId("YourContainerName");
-documentCollection = client.createCollection(database.getSelfLink(), documentCollection, new RequestOptions()).getResource();
-```
-
-# [Java SDK 2.x.x Async API](#tab/java-v2-async)
-
-```java
-// Create Async client.
-// Building an async client is still a sync operation.
-AsyncDocumentClient client = new Builder()
- .withServiceEndpoint("your.hostname")
- .withMasterKeyOrResourceToken("yourmasterkey")
- .withConsistencyLevel(ConsistencyLevel.Eventual)
- .build();
-// Create database with specified name
-Database database = new Database();
-database.setId("YourDatabaseName");
-client.createDatabase(database, new RequestOptions())
- .flatMap(databaseResponse -> {
- // Collection properties - name and partition key
- DocumentCollection documentCollection = new DocumentCollection();
- documentCollection.setId("YourContainerName");
- documentCollection.setPartitionKey(new PartitionKeyDefinition("/id"));
- // Create collection
- return client.createCollection(databaseResponse.getResource().getSelfLink(), documentCollection, new RequestOptions());
-}).subscribe();
-
-```
---
-### Item operations
-
-The following code snippet shows the differences in how item operations are performed between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
-
-# [Java SDK 4.0 Async API](#tab/java-v4-async)
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateItemOpsAsync)]
-
-# [Java SDK 3.x.x Async API](#tab/java-v3-async)
-
-```java
-// Container is created. Generate many docs to insert.
-int number_of_docs = 50000;
-ArrayList<JsonNode> docs = generateManyDocs(number_of_docs);
-
-// Insert many docs into container...
-Flux.fromIterable(docs)
- .flatMap(doc -> container.createItem(doc))
- .subscribe(); // ...Subscribing triggers stream execution.
-```
-
-# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
-
-```java
-// Container is created. Generate documents to insert.
-Document document = new Document();
-document.setId("YourDocumentId");
-ResourceResponse<Document> documentResourceResponse = client.createDocument(documentCollection.getSelfLink(), document,
- new RequestOptions(), true);
-Document responseDocument = documentResourceResponse.getResource();
-```
-
-# [Java SDK 2.x.x Async API](#tab/java-v2-async)
-
-```java
-// Collection is created. Generate many docs to insert.
-int number_of_docs = 50000;
-ArrayList<Document> docs = generateManyDocs(number_of_docs);
-// Insert many docs into collection...
-Observable.from(docs)
- .flatMap(doc -> client.createDocument(createdCollection.getSelfLink(), doc, new RequestOptions(), false))
- .subscribe(); // ...Subscribing triggers stream execution.
-```
---
-### Indexing
-
-The following code snippet shows the differences in how indexing is created between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
-
-# [Java SDK 4.0 Async API](#tab/java-v4-async)
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateIndexingAsync)]
-
-# [Java SDK 3.x.x Async API](#tab/java-v3-async)
-
-```java
-CosmosContainerProperties containerProperties = new CosmosContainerProperties(containerName, "/lastName");
-
-// Custom indexing policy
-IndexingPolicy indexingPolicy = new IndexingPolicy();
-indexingPolicy.setIndexingMode(IndexingMode.CONSISTENT); //To turn indexing off set IndexingMode.NONE
-
-// Included paths
-List<IncludedPath> includedPaths = new ArrayList<>();
-IncludedPath includedPath = new IncludedPath();
-includedPath.path("/*");
-includedPaths.add(includedPath);
-indexingPolicy.includedPaths(includedPaths);
-
-// Excluded paths
-List<ExcludedPath> excludedPaths = new ArrayList<>();
-ExcludedPath excludedPath = new ExcludedPath();
-excludedPath.path("/name/*");
-excludedPaths.add(excludedPath);
-indexingPolicy.excludedPaths(excludedPaths);
-
-containerProperties.indexingPolicy(indexingPolicy);
-
-CosmosContainer containerIfNotExists = database.createContainerIfNotExists(containerProperties, 400)
- .block()
- .container();
-```
-
-# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
-
-```java
-// Custom indexing policy
-IndexingPolicy indexingPolicy = new IndexingPolicy();
-indexingPolicy.setIndexingMode(IndexingMode.Consistent); //To turn indexing off set IndexingMode.NONE
-
-// Included paths
-List<IncludedPath> includedPaths = new ArrayList<>();
-IncludedPath includedPath = new IncludedPath();
-includedPath.setPath("/*");
-includedPaths.add(includedPath);
-indexingPolicy.setIncludedPaths(includedPaths);
-
-// Excluded paths
-List<ExcludedPath> excludedPaths = new ArrayList<>();
-ExcludedPath excludedPath = new ExcludedPath();
-excludedPath.setPath("/name/*");
-excludedPaths.add(excludedPath);
-indexingPolicy.setExcludedPaths(excludedPaths);
-
-// Create container with specified name and indexing policy
-DocumentCollection documentCollection = new DocumentCollection();
-documentCollection.setId("YourContainerName");
-documentCollection.setIndexingPolicy(indexingPolicy);
-documentCollection = client.createCollection(database.getSelfLink(), documentCollection, new RequestOptions()).getResource();
-```
-
-# [Java SDK 2.x.x Async API](#tab/java-v2-async)
-
-```java
-// Custom indexing policy
-IndexingPolicy indexingPolicy = new IndexingPolicy();
-indexingPolicy.setIndexingMode(IndexingMode.Consistent); //To turn indexing off set IndexingMode.None
-// Included paths
-List<IncludedPath> includedPaths = new ArrayList<>();
-IncludedPath includedPath = new IncludedPath();
-includedPath.setPath("/*");
-includedPaths.add(includedPath);
-indexingPolicy.setIncludedPaths(includedPaths);
-// Excluded paths
-List<ExcludedPath> excludedPaths = new ArrayList<>();
-ExcludedPath excludedPath = new ExcludedPath();
-excludedPath.setPath("/name/*");
-excludedPaths.add(excludedPath);
-indexingPolicy.setExcludedPaths(excludedPaths);
-// Create container with specified name and indexing policy
-DocumentCollection documentCollection = new DocumentCollection();
-documentCollection.setId("YourContainerName");
-documentCollection.setIndexingPolicy(indexingPolicy);
-client.createCollection(database.getSelfLink(), documentCollection, new RequestOptions()).subscribe();
-```
---
-### Stored procedures
-
-The following code snippet shows the differences in how stored procedures are created between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
-
-# [Java SDK 4.0 Async API](#tab/java-v4-async)
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateSprocAsync)]
-
-# [Java SDK 3.x.x Async API](#tab/java-v3-async)
-
-```java
-logger.info("Creating stored procedure...\n");
-
-sprocId = "createMyDocument";
-String sprocBody = "function createMyDocument() {\n" +
- "var documentToCreate = {\"id\":\"test_doc\"}\n" +
- "var context = getContext();\n" +
- "var collection = context.getCollection();\n" +
- "var accepted = collection.createDocument(collection.getSelfLink(), documentToCreate,\n" +
- " function (err, documentCreated) {\n" +
- "if (err) throw new Error('Error' + err.message);\n" +
- "context.getResponse().setBody(documentCreated.id)\n" +
- "});\n" +
- "if (!accepted) return;\n" +
- "}";
-CosmosStoredProcedureProperties storedProcedureDef = new CosmosStoredProcedureProperties(sprocId, sprocBody);
-container.getScripts()
- .createStoredProcedure(storedProcedureDef,
- new CosmosStoredProcedureRequestOptions()).block();
-
-// ...
-
-logger.info(String.format("Executing stored procedure %s...\n\n", sprocId));
-
-CosmosStoredProcedureRequestOptions options = new CosmosStoredProcedureRequestOptions();
-options.partitionKey(new PartitionKey("test_doc"));
-
-container.getScripts()
- .getStoredProcedure(sprocId)
- .execute(null, options)
- .flatMap(executeResponse -> {
- logger.info(String.format("Stored procedure %s returned %s (HTTP %d), at cost %.3f RU.\n",
- sprocId,
- executeResponse.responseAsString(),
- executeResponse.statusCode(),
- executeResponse.requestCharge()));
- return Mono.empty();
- }).block();
-```
-
-# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
-
-```java
-logger.info("Creating stored procedure...\n");
-
-String sprocId = "createMyDocument";
-String sprocBody = "function createMyDocument() {\n" +
- "var documentToCreate = {\"id\":\"test_doc\"}\n" +
- "var context = getContext();\n" +
- "var collection = context.getCollection();\n" +
- "var accepted = collection.createDocument(collection.getSelfLink(), documentToCreate,\n" +
- " function (err, documentCreated) {\n" +
- "if (err) throw new Error('Error' + err.message);\n" +
- "context.getResponse().setBody(documentCreated.id)\n" +
- "});\n" +
- "if (!accepted) return;\n" +
- "}";
-StoredProcedure storedProcedureDef = new StoredProcedure();
-storedProcedureDef.setId(sprocId);
-storedProcedureDef.setBody(sprocBody);
-StoredProcedure storedProcedure = client.createStoredProcedure(documentCollection.getSelfLink(), storedProcedureDef, new RequestOptions())
- .getResource();
-
-// ...
-
-logger.info(String.format("Executing stored procedure %s...\n\n", sprocId));
-
-RequestOptions options = new RequestOptions();
-options.setPartitionKey(new PartitionKey("test_doc"));
-
-StoredProcedureResponse storedProcedureResponse =
- client.executeStoredProcedure(storedProcedure.getSelfLink(), options, null);
-logger.info(String.format("Stored procedure %s returned %s (HTTP %d), at cost %.3f RU.\n",
- sprocId,
- storedProcedureResponse.getResponseAsString(),
- storedProcedureResponse.getStatusCode(),
- storedProcedureResponse.getRequestCharge()));
-```
-
-# [Java SDK 2.x.x Async API](#tab/java-v2-async)
-
-```java
-logger.info("Creating stored procedure...\n");
-String sprocId = "createMyDocument";
-String sprocBody = "function createMyDocument() {\n" +
- "var documentToCreate = {\"id\":\"test_doc\"}\n" +
- "var context = getContext();\n" +
- "var collection = context.getCollection();\n" +
- "var accepted = collection.createDocument(collection.getSelfLink(), documentToCreate,\n" +
- " function (err, documentCreated) {\n" +
- "if (err) throw new Error('Error' + err.message);\n" +
- "context.getResponse().setBody(documentCreated.id)\n" +
- "});\n" +
- "if (!accepted) return;\n" +
- "}";
-StoredProcedure storedProcedureDef = new StoredProcedure();
-storedProcedureDef.setId(sprocId);
-storedProcedureDef.setBody(sprocBody);
-StoredProcedure storedProcedure = client
- .createStoredProcedure(documentCollection.getSelfLink(), storedProcedureDef, new RequestOptions())
- .toBlocking()
- .single()
- .getResource();
-// ...
-logger.info(String.format("Executing stored procedure %s...\n\n", sprocId));
-RequestOptions options = new RequestOptions();
-options.setPartitionKey(new PartitionKey("test_doc"));
-StoredProcedureResponse storedProcedureResponse =
- client.executeStoredProcedure(storedProcedure.getSelfLink(), options, null)
- .toBlocking().single();
-logger.info(String.format("Stored procedure %s returned %s (HTTP %d), at cost %.3f RU.\n",
- sprocId,
- storedProcedureResponse.getResponseAsString(),
- storedProcedureResponse.getStatusCode(),
- storedProcedureResponse.getRequestCharge()));
-```
---
-### Change feed
-
-The following code snippet shows the differences in how change feed operations are executed between the 4.0 and 3.x.x Async APIs:
-
-# [Java SDK 4.0 Async API](#tab/java-v4-async)
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateCFAsync)]
-
-# [Java SDK 3.x.x Async API](#tab/java-v3-async)
-
-```java
-ChangeFeedProcessor changeFeedProcessorInstance =
-ChangeFeedProcessor.Builder()
- .hostName(hostName)
- .feedContainer(feedContainer)
- .leaseContainer(leaseContainer)
- .handleChanges((List<CosmosItemProperties> docs) -> {
- logger.info(">setHandleChanges() START");
-
- for (CosmosItemProperties document : docs) {
- try {
-
- // You are given the document as a CosmosItemProperties instance which you may
- // cast to the desired type.
- CustomPOJO pojo_doc = document.getObject(CustomPOJO.class);
- logger.info("-=>id: " + pojo_doc.id());
-
- } catch (Exception e) {
- e.printStackTrace();
- }
- }
- logger.info(">handleChanges() END");
-
- })
- .build();
-
-// ...
-
- changeFeedProcessorInstance.start()
- .subscribeOn(Schedulers.elastic())
- .subscribe();
-```
-
-# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
-
-* This feature isn't supported in the Java SDK v2 Sync API.
-
-# [Java SDK 2.x.x Async API](#tab/java-v2-async)
-
-* This feature isn't supported in the Java SDK v2 Async API.
---
-### Container-level Time-To-Live (TTL)
-
-The following code snippet shows the differences in how to set Time to Live for data in a container between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
-
-# [Java SDK 4.0 Async API](#tab/java-v4-async)
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateContainerTTLAsync)]
-
-# [Java SDK 3.x.x Async API](#tab/java-v3-async)
-
-```java
-CosmosContainer container;
-
-// Create a new container with TTL enabled with default expiration value
-CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/myPartitionKey");
-containerProperties.defaultTimeToLive(90 * 60 * 60 * 24);
-container = database.createContainerIfNotExists(containerProperties, 400).block().container();
-```
-
-# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
-
-```java
-DocumentCollection documentCollection = new DocumentCollection();
-documentCollection.setId("myContainer");
-
-// Create a new container with TTL enabled with default expiration value
-documentCollection.setDefaultTimeToLive(90 * 60 * 60 * 24);
-documentCollection = client.createCollection(database.getSelfLink(), documentCollection, new RequestOptions()).getResource();
-```
-
-# [Java SDK 2.x.x Async API](#tab/java-v2-async)
-
-```java
-DocumentCollection collection = new DocumentCollection();
-collection.setId("myContainer");
-// Create a new container with TTL enabled with default expiration value
-collection.setDefaultTimeToLive(90 * 60 * 60 * 24);
-collection = client
-    .createCollection(database.getSelfLink(), collection, new RequestOptions())
- .toBlocking()
- .single()
- .getResource();
-```
---
-### Item-level Time-To-Live (TTL)
-
-The following code snippet shows the differences in how to set Time to Live for an item between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
-
-# [Java SDK 4.0 Async API](#tab/java-v4-async)
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateItemTTLClassAsync)]
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateItemTTLAsync)]
-
-# [Java SDK 3.x.x Async API](#tab/java-v3-async)
-
-```java
-// Include a property that serializes to "ttl" in JSON
-public class SalesOrder
-{
- private String id;
- private String customerId;
- private Integer ttl;
-
- public SalesOrder(String id, String customerId, Integer ttl) {
- this.id = id;
- this.customerId = customerId;
- this.ttl = ttl;
- }
-
- public String id() {return this.id;}
- public SalesOrder id(String new_id) {this.id = new_id; return this;}
- public String customerId() {return this.customerId;}
- public SalesOrder customerId(String new_cid) {this.customerId = new_cid; return this;}
- public Integer ttl() {return this.ttl;}
- public SalesOrder ttl(Integer new_ttl) {this.ttl = new_ttl; return this;}
-
- //...
-}
-
-// Set the value to the expiration in seconds
-SalesOrder salesOrder = new SalesOrder(
- "SO05",
- "CO18009186470",
- 60 * 60 * 24 * 30 // Expire sales orders in 30 days
-);
-```
-
-# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
-
-```java
-Document document = new Document();
-document.setId("YourDocumentId");
-document.setTimeToLive(60 * 60 * 24 * 30 ); // Expire document in 30 days
-ResourceResponse<Document> documentResourceResponse = client.createDocument(documentCollection.getSelfLink(), document,
- new RequestOptions(), true);
-Document responseDocument = documentResourceResponse.getResource();
-```
-
-# [Java SDK 2.x.x Async API](#tab/java-v2-async)
-
-```java
-Document document = new Document();
-document.setId("YourDocumentId");
-document.setTimeToLive(60 * 60 * 24 * 30 ); // Expire document in 30 days
-ResourceResponse<Document> documentResourceResponse = client.createDocument(documentCollection.getSelfLink(), document,
- new RequestOptions(), true).toBlocking().single();
-Document responseDocument = documentResourceResponse.getResource();
-```
---
-## Next steps
-
-* [Build a Java app](create-sql-api-java.md) to manage Azure Cosmos DB SQL API data using the V4 SDK
-* Learn about the [Reactor-based Java SDKs](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-pattern-guide.md)
-* Learn about converting RxJava async code to Reactor async code with the [Reactor vs RxJava Guide](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md)
-* Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Migrate Relational To Cosmos Db Sql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-relational-to-cosmos-db-sql-api.md
- Title: Migrate one-to-few relational data into Azure Cosmos DB SQL API
-description: Learn how to handle complex data migration for one-to-few relationships into SQL API
---- Previously updated : 12/12/2019---
-# Migrate one-to-few relational data into Azure Cosmos DB SQL API account
-
-When you migrate from a relational database to Azure Cosmos DB SQL API, it can be necessary to make changes to the data model for optimization.
-
-One common transformation is denormalizing data by embedding related subitems within one JSON document. Here we look at a few options for this using Azure Data Factory or Azure Databricks. For general guidance on data modeling for Cosmos DB, please review [Data modeling in Azure Cosmos DB](modeling-data.md).
-
-## Example Scenario
-
-Assume we have the following two tables in our SQL database, Orders and OrderDetails.
---
-We want to combine this one-to-few relationship into one JSON document during migration. To do this, we can create a T-SQL query using "FOR JSON" as below:
-
-```sql
-SELECT
- o.OrderID,
- o.OrderDate,
- o.FirstName,
- o.LastName,
- o.Address,
- o.City,
- o.State,
- o.PostalCode,
- o.Country,
- o.Phone,
- o.Total,
- (select OrderDetailId, ProductId, UnitPrice, Quantity from OrderDetails od where od.OrderId = o.OrderId for json auto) as OrderDetails
-FROM Orders o;
-```
-
-The results of this query would look as below:
--
-Ideally, you want to use a single Azure Data Factory (ADF) copy activity to query SQL data as the source and write the output directly to Azure Cosmos DB sink as proper JSON objects. Currently, it is not possible to perform the needed JSON transformation in one copy activity. If we try to copy the results of the above query into an Azure Cosmos DB SQL API container, we will see the OrderDetails field as a string property of our document, instead of the expected JSON array.
-
-We can work around this current limitation in one of the following ways:
-
-* **Use Azure Data Factory with two copy activities**:
- 1. Get JSON-formatted data from SQL to a text file in an intermediary blob storage location, and
- 2. Load data from the JSON text file to a container in Azure Cosmos DB.
-
-* **Use Azure Databricks to read from SQL and write to Azure Cosmos DB** - we will present two options here.
--
-Let's look at these approaches in more detail:
-
-## Azure Data Factory
-
-Although we cannot embed OrderDetails as a JSON-array in the destination Cosmos DB document, we can work around the issue by using two separate Copy Activities.
-
-### Copy Activity #1: SqlJsonToBlobText
-
-For the source data, we use a SQL query to get the result set as a single column with one JSON object (representing the Order) per row using the SQL Server OPENJSON and FOR JSON PATH capabilities:
-
-```sql
-SELECT [value] FROM OPENJSON(
- (SELECT
- id = o.OrderID,
- o.OrderDate,
- o.FirstName,
- o.LastName,
- o.Address,
- o.City,
- o.State,
- o.PostalCode,
- o.Country,
- o.Phone,
- o.Total,
- (select OrderDetailId, ProductId, UnitPrice, Quantity from OrderDetails od where od.OrderId = o.OrderId for json auto) as OrderDetails
- FROM Orders o FOR JSON PATH)
-)
-```
---
-For the sink of the SqlJsonToBlobText copy activity, we choose "Delimited Text" and point it to a specific folder in Azure Blob Storage with a dynamically generated unique file name (for example, `@concat(pipeline().RunId,'.json')`).
-Since our text file is not really "delimited", we don't want it to be parsed into separate columns using commas, and we want to preserve the double quotes ("), so we set "Column delimiter" to a Tab ("\t") - or another character not occurring in the data - and "Quote character" to "No quote character".
--
-### Copy Activity #2: BlobJsonToCosmos
-
-Next, we modify our ADF pipeline by adding a second Copy Activity that looks in Azure Blob Storage for the text file that was created by the first activity. It processes that file as a "JSON" source and inserts one document into the Cosmos DB sink for each JSON row found in the text file.
--
-Optionally, we also add a "Delete" activity to the pipeline so that it deletes all of the previous files remaining in the /Orders/ folder prior to each run. Our ADF pipeline now looks something like this:
--
-After we trigger the pipeline above, we see a file created in our intermediary Azure Blob Storage location containing one JSON-object per row:
--
-We also see Orders documents with properly embedded OrderDetails inserted into our Cosmos DB collection:
---
-## Azure Databricks
-
-We can also use Spark in [Azure Databricks](https://azure.microsoft.com/services/databricks/) to copy the data from our SQL Database source to the Azure Cosmos DB destination without creating the intermediary text/JSON files in Azure Blob Storage.
-
-> [!NOTE]
-> For clarity and simplicity, the code snippets below include dummy database passwords explicitly inline, but you should always use Azure Databricks secrets.
->
-
-First, we create and attach the required [SQL connector](/connectors/sql/) and [Azure Cosmos DB connector](https://docs.databricks.com/data/data-sources/azure/cosmosdb-connector.html) libraries to our Azure Databricks cluster. Restart the cluster to make sure libraries are loaded.
--
-Next, we present two samples, for Scala and Python.
-
-### Scala
-Here, we get the results of the SQL query with "FOR JSON" output into a DataFrame:
-
-```scala
-// Connect to Azure SQL /connectors/sql/
-import com.microsoft.azure.sqldb.spark.config.Config
-import com.microsoft.azure.sqldb.spark.connect._
-val configSql = Config(Map(
- "url" -> "xxxx.database.windows.net",
- "databaseName" -> "xxxx",
- "queryCustom" -> "SELECT o.OrderID, o.OrderDate, o.FirstName, o.LastName, o.Address, o.City, o.State, o.PostalCode, o.Country, o.Phone, o.Total, (SELECT OrderDetailId, ProductId, UnitPrice, Quantity FROM OrderDetails od WHERE od.OrderId = o.OrderId FOR JSON AUTO) as OrderDetails FROM Orders o",
- "user" -> "xxxx",
- "password" -> "xxxx" // NOTE: For clarity and simplicity, this example includes secrets explicitely as a string, but you should always use Databricks secrets
-))
-// Create DataFrame from Azure SQL query
-val orders = sqlContext.read.sqlDB(configSql)
-display(orders)
-```
--
-Next, we connect to our Cosmos DB database and collection:
-
-```scala
-// Connect to Cosmos DB https://docs.databricks.com/data/data-sources/azure/cosmosdb-connector.html
-import org.joda.time._
-import org.joda.time.format._
-import com.microsoft.azure.cosmosdb.spark.schema._
-import com.microsoft.azure.cosmosdb.spark.CosmosDBSpark
-import com.microsoft.azure.cosmosdb.spark.config.Config
-import org.apache.spark.sql.functions._
-val configMap = Map(
- "Endpoint" -> "https://xxxx.documents.azure.com:443/",
-  // NOTE: For clarity and simplicity, this example includes secrets explicitly as a string, but you should always use Databricks secrets
- "Masterkey" -> "xxxx",
- "Database" -> "StoreDatabase",
- "Collection" -> "Orders")
-val configCosmos = Config(configMap)
-```
-
-Finally, we define our schema and use from_json to apply it to the DataFrame before saving the result to the Cosmos DB collection.
-
-```scala
-// Convert DataFrame to proper nested schema
-import org.apache.spark.sql.types._
-val orderDetailsSchema = ArrayType(StructType(Array(
- StructField("OrderDetailId",IntegerType,true),
- StructField("ProductId",IntegerType,true),
- StructField("UnitPrice",DoubleType,true),
- StructField("Quantity",IntegerType,true)
- )))
-val ordersWithSchema = orders.select(
- col("OrderId").cast("string").as("id"),
- col("OrderDate").cast("string"),
- col("FirstName").cast("string"),
- col("LastName").cast("string"),
- col("Address").cast("string"),
- col("City").cast("string"),
- col("State").cast("string"),
- col("PostalCode").cast("string"),
- col("Country").cast("string"),
- col("Phone").cast("string"),
- col("Total").cast("double"),
- from_json(col("OrderDetails"), orderDetailsSchema).as("OrderDetails")
-)
-display(ordersWithSchema)
-// Save nested data to Cosmos DB
-CosmosDBSpark.save(ordersWithSchema, configCosmos)
-```
---
-### Python
-
-As an alternative approach, you may need to execute JSON transformations in Spark (if the source database does not support "FOR JSON" or a similar operation), or you may wish to use parallel operations for a very large data set. Here we present a PySpark sample. Start by configuring the source and target database connections in the first cell:
-
-```python
-import uuid
-import pyspark.sql.functions as F
-from pyspark.sql.functions import col
-from pyspark.sql.types import StringType,DateType,LongType,IntegerType,TimestampType
-
-#JDBC connect details for SQL Server database
-jdbcHostname = "jdbcHostname"
-jdbcDatabase = "OrdersDB"
-jdbcUsername = "jdbcUsername"
-jdbcPassword = "jdbcPassword"
-jdbcPort = "1433"
-
-connectionProperties = {
- "user" : jdbcUsername,
- "password" : jdbcPassword,
- "driver" : "com.microsoft.sqlserver.jdbc.SQLServerDriver"
-}
-jdbcUrl = "jdbc:sqlserver://{0}:{1};database={2};user={3};password={4}".format(jdbcHostname, jdbcPort, jdbcDatabase, jdbcUsername, jdbcPassword)
-
-#Connect details for Target Azure Cosmos DB SQL API account
-writeConfig = {
- "Endpoint": "Endpoint",
- "Masterkey": "Masterkey",
- "Database": "OrdersDB",
- "Collection": "Orders",
- "Upsert": "true"
-}
-```
-
-Then, we will query the source Database (in this case SQL Server) for both the order and order detail records, putting the results into Spark Dataframes. We will also create a list containing all the order IDs, and a Thread pool for parallel operations:
-
-```python
-import json
-import ast
-import pyspark.sql.functions as F
-import uuid
-import numpy as np
-import pandas as pd
-from functools import reduce
-from pyspark.sql import SQLContext
-from pyspark.sql.types import *
-from pyspark.sql import *
-from pyspark.sql.functions import exp
-from pyspark.sql.functions import col
-from pyspark.sql.functions import lit
-from pyspark.sql.functions import array
-from pyspark.sql.types import *
-from multiprocessing.pool import ThreadPool
-
-#get all orders
-orders = sqlContext.read.jdbc(url=jdbcUrl, table="orders", properties=connectionProperties)
-
-#get all order details
-orderdetails = sqlContext.read.jdbc(url=jdbcUrl, table="orderdetails", properties=connectionProperties)
-
-#get all OrderId values to pass to map function
-orderids = orders.select('OrderId').collect()
-
-#create thread pool big enough to process merge of details to orders in parallel
-pool = ThreadPool(10)
-```
-
-Then, create a function for writing Orders into the target SQL API collection. This function will filter all order details for the given order ID, convert them into a JSON array, and insert the array into a JSON document that we will write into the target SQL API Collection for that order:
-
-```python
-def writeOrder(orderid):
- #filter the order on current value passed from map function
- order = orders.filter(orders['OrderId'] == orderid[0])
-
- #set id to be a uuid
- order = order.withColumn("id", lit(str(uuid.uuid1())))
-
- #add details field to order dataframe
- order = order.withColumn("details", lit(''))
-
- #filter order details dataframe to get details we want to merge into the order document
- orderdetailsgroup = orderdetails.filter(orderdetails['OrderId'] == orderid[0])
-
- #convert dataframe to pandas
- orderpandas = order.toPandas()
-
- #convert the order dataframe to json and remove enclosing brackets
- orderjson = orderpandas.to_json(orient='records', force_ascii=False)
- orderjson = orderjson[1:-1]
-
-    #convert orderjson to a dictionary so we can set the details element with order details later
- orderjsondata = json.loads(orderjson)
-
-
- #convert orderdetailsgroup dataframe to json, but only if details were returned from the earlier filter
- if (orderdetailsgroup.count() !=0):
- #convert orderdetailsgroup to pandas dataframe to work better with json
- orderdetailsgroup = orderdetailsgroup.toPandas()
-
- #convert orderdetailsgroup to json string
- jsonstring = orderdetailsgroup.to_json(orient='records', force_ascii=False)
-
- #convert jsonstring to dictionary to ensure correct encoding and no corrupt records
- jsonstring = json.loads(jsonstring)
-
- #set details json element in orderjsondata to jsonstring which contains orderdetailsgroup - this merges order details into the order
- orderjsondata['details'] = jsonstring
-
- #convert dictionary to json
- orderjsondata = json.dumps(orderjsondata)
-
- #read the json into spark dataframe
- df = spark.read.json(sc.parallelize([orderjsondata]))
-
-    #write the dataframe (this will be a single order record with merged many-to-one order details) to Cosmos DB using the Spark connector
- #https://learn.microsoft.com/azure/cosmos-db/spark-connector
- df.write.format("com.microsoft.azure.cosmosdb.spark").mode("append").options(**writeConfig).save()
-```
-
-Finally, we will call the above using a map function on the thread pool, to execute in parallel, passing in the list of order IDs we created earlier:
-
-```python
-#map order details to orders in parallel using the above function
-pool.map(writeOrder, orderids)
-```
-In either approach, at the end, we should get properly saved embedded OrderDetails within each Order document in the Cosmos DB collection:
--
-## Next steps
-* Learn about [data modeling in Azure Cosmos DB](./modeling-data.md)
-* Learn [how to model and partition data on Azure Cosmos DB](./how-to-model-partition-example.md)
cosmos-db Modeling Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/modeling-data.md
- Title: Modeling data in Azure Cosmos DB-
-description: Learn about data modeling in NoSQL databases, differences between modeling data in a relational database and a document database.
------ Previously updated : 03/24/2022--
-# Data modeling in Azure Cosmos DB
-
-While schema-free databases, like Azure Cosmos DB, make it super easy to store and query unstructured and semi-structured data, you should spend some time thinking about your data model to get the most out of the service in terms of performance, scalability, and cost.
-
-How is data going to be stored? How is your application going to retrieve and query data? Is your application read-heavy, or write-heavy?
-
->
-> [!VIDEO https://aka.ms/docs.modeling-data]
-
-After reading this article, you'll be able to answer the following questions:
-
-* What is data modeling and why should I care?
-* How is modeling data in Azure Cosmos DB different from modeling data in a relational database?
-* How do I express data relationships in a non-relational database?
-* When do I embed data and when do I link to data?
-
-## Numbers in JSON
-Azure Cosmos DB saves documents in JSON, which means you need to carefully determine whether to convert numbers into strings before storing them. Ideally, all numbers should be converted into a `String` if there is any chance that they fall outside the boundaries of double-precision numbers as defined by [IEEE 754 binary64](https://www.rfc-editor.org/rfc/rfc8259#ref-IEEE754). The [JSON specification](https://www.rfc-editor.org/rfc/rfc8259#section-6) calls out why using numbers outside of this boundary is generally a bad practice in JSON, due to likely interoperability problems. These concerns are especially relevant for the partition key column, because it is immutable and changing it later requires a data migration.
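-
-As a small illustrative sketch (the class and field names are made up), a 64-bit key that could exceed double precision is modeled as a string property:
-
-```java
-public class SensorReading {
-    private String id;
-
-    // Stored as a String: values such as 9223372036854775807 can't be
-    // represented exactly as an IEEE 754 double
-    private String legacyRowKey;
-
-    private double temperature;
-
-    public String getId() { return id; }
-    public void setId(String id) { this.id = id; }
-
-    public String getLegacyRowKey() { return legacyRowKey; }
-    public void setLegacyRowKey(String legacyRowKey) { this.legacyRowKey = legacyRowKey; }
-
-    public double getTemperature() { return temperature; }
-    public void setTemperature(double temperature) { this.temperature = temperature; }
-}
-```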
-
-## <a id="embedding-data"></a>Embed data
-
-When you start modeling data in Azure Cosmos DB try to treat your entities as **self-contained items** represented as JSON documents.
-
-For comparison, let's first see how we might model data in a relational database. The following example shows how a person might be stored in a relational database.
--
-The strategy, when working with relational databases, is to normalize all your data. Normalizing your data typically involves taking an entity, such as a person, and breaking it down into discrete components. In the example above, a person may have multiple contact detail records and multiple address records. Contact details can be broken down further by extracting common fields, like a type. The same applies to addresses: each record can be of type *Home* or *Business*.
-
-The guiding premise when normalizing data is to **avoid storing redundant data** on each record and rather refer to data. In this example, to read a person, with all their contact details and addresses, you need to use JOINS to effectively compose back (or denormalize) your data at run time.
-
-```sql
-SELECT p.FirstName, p.LastName, a.City, cd.Detail
-FROM Person p
-JOIN ContactDetail cd ON cd.PersonId = p.Id
-JOIN ContactDetailType cdt ON cdt.Id = cd.TypeId
-JOIN Address a ON a.PersonId = p.Id
-```
-
-Write operations across many individual tables are required to update a single person's contact details and addresses.
-
-Now let's take a look at how we would model the same data as a self-contained entity in Azure Cosmos DB.
-
-```json
-{
- "id": "1",
- "firstName": "Thomas",
- "lastName": "Andersen",
- "addresses": [
- {
- "line1": "100 Some Street",
- "line2": "Unit 1",
- "city": "Seattle",
- "state": "WA",
- "zip": 98012
- }
- ],
- "contactDetails": [
- {"email": "thomas@andersen.com"},
- {"phone": "+1 555 555-5555", "extension": 5555}
- ]
-}
-```
-
-Using the approach above we've **denormalized** the person record, by **embedding** all the information related to this person, such as their contact details and addresses, into a *single JSON* document.
-In addition, because we're not confined to a fixed schema we have the flexibility to do things like having contact details of different shapes entirely.
-
-Retrieving a complete person record from the database is now a **single read operation** against a single container and for a single item. Updating the contact details and addresses of a person record is also a **single write operation** against a single item.
-
-By denormalizing data, your application may need to issue fewer queries and updates to complete common operations.
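-
-For example, a sketch of retrieving the whole person in one point read with the Java SDK v4, assuming an existing container partitioned on `/id`:
-
-```java
-import com.azure.cosmos.CosmosContainer;
-import com.azure.cosmos.models.CosmosItemResponse;
-import com.azure.cosmos.models.PartitionKey;
-import com.fasterxml.jackson.databind.JsonNode;
-
-// Assumes an existing CosmosContainer instance named "container"
-CosmosItemResponse<JsonNode> response =
-        container.readItem("1", new PartitionKey("1"), JsonNode.class);
-
-// The single item already includes addresses and contactDetails
-JsonNode person = response.getItem();
-```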
-
-### When to embed
-
-In general, use embedded data models when:
-
-* There are **contained** relationships between entities.
-* There are **one-to-few** relationships between entities.
-* There's embedded data that **changes infrequently**.
-* There's embedded data that won't grow **without bound**.
-* There's embedded data that is **queried frequently together**.
-
-> [!NOTE]
-> Typically denormalized data models provide better **read** performance.
-
-### When not to embed
-
-While the rule of thumb in Azure Cosmos DB is to denormalize everything and embed all data into a single item, this can lead to some situations that should be avoided.
-
-Take this JSON snippet.
-
-```json
-{
- "id": "1",
- "name": "What's new in the coolest Cloud",
- "summary": "A blog post by someone real famous",
- "comments": [
- {"id": 1, "author": "anon", "comment": "something useful, I'm sure"},
- {"id": 2, "author": "bob", "comment": "wisdom from the interwebs"},
- …
- {"id": 100001, "author": "jane", "comment": "and on we go ..."},
- …
- {"id": 1000000001, "author": "angry", "comment": "blah angry blah angry"},
- …
- {"id": ∞ + 1, "author": "bored", "comment": "oh man, will this ever end?"},
- ]
-}
-```
-
-This might be what a post entity with embedded comments would look like if we were modeling a typical blog, or CMS, system. The problem with this example is that the comments array is **unbounded**, meaning that there's no (practical) limit to the number of comments any single post can have. This could become a problem because the size of the item could grow infinitely large, so it's a design you should avoid.
-
-As the size of the item grows, the ability to transmit the data over the wire and to read and update the item at scale is affected.
-
-In this case, it would be better to consider the following data model.
-
-```json
-Post item:
-{
- "id": "1",
- "name": "What's new in the coolest Cloud",
- "summary": "A blog post by someone real famous",
- "recentComments": [
- {"id": 1, "author": "anon", "comment": "something useful, I'm sure"},
- {"id": 2, "author": "bob", "comment": "wisdom from the interwebs"},
- {"id": 3, "author": "jane", "comment": "....."}
- ]
-}
-
-Comment items:
-[
- {"id": 4, "postId": "1", "author": "anon", "comment": "more goodness"},
- {"id": 5, "postId": "1", "author": "bob", "comment": "tails from the field"},
- ...
- {"id": 99, "postId": "1", "author": "angry", "comment": "blah angry blah angry"},
- {"id": 100, "postId": "2", "author": "anon", "comment": "yet more"},
- ...
- {"id": 199, "postId": "2", "author": "bored", "comment": "will this ever end?"}
-]
-```
-
-This model has a document for each comment with a property that contains the post identifier. This approach allows posts to contain any number of comments and to grow efficiently. Users who want to see more than the most recent comments would query this container, passing the postId, which should be the partition key for the comments container, as in the query sketch below.
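-
-A minimal sketch of that query with the Java SDK v4; the container reference is an assumption:
-
-```java
-import com.azure.cosmos.CosmosContainer;
-import com.azure.cosmos.models.CosmosQueryRequestOptions;
-import com.azure.cosmos.util.CosmosPagedIterable;
-import com.fasterxml.jackson.databind.JsonNode;
-
-// Assumes a comments container partitioned on /postId
-CosmosPagedIterable<JsonNode> comments = container.queryItems(
-        "SELECT * FROM c WHERE c.postId = '1'",
-        new CosmosQueryRequestOptions(),
-        JsonNode.class);
-
-comments.forEach(comment -> System.out.println(comment.get("comment").asText()));
-```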
-
-Another case where embedding data isn't a good idea is when the embedded data is used often across items and will change frequently.
-
-Take this JSON snippet.
-
-```json
-{
- "id": "1",
- "firstName": "Thomas",
- "lastName": "Andersen",
- "holdings": [
- {
- "numberHeld": 100,
- "stock": { "symbol": "zbzb", "open": 1, "high": 2, "low": 0.5 }
- },
- {
- "numberHeld": 50,
- "stock": { "symbol": "xcxc", "open": 89, "high": 93.24, "low": 88.87 }
- }
- ]
-}
-```
-
-This could represent a person's stock portfolio. We have chosen to embed the stock information into each portfolio document. In an environment where related data is changing frequently, like a stock trading application, embedding data that changes frequently is going to mean that you're constantly updating each portfolio document every time a stock is traded.
-
-Stock *zbzb* may be traded many hundreds of times in a single day, and thousands of users could have *zbzb* in their portfolio. With a data model like the one above, we would have to update many thousands of portfolio documents many times every day, leading to a system that won't scale well.
-
-## <a id="referencing-data"></a>Reference data
-
-Embedding data works nicely for many cases but there are scenarios when denormalizing your data will cause more problems than it's worth. So what do we do now?
-
-Relational databases aren't the only place where you can create relationships between entities. In a document database, you may have information in one document that relates to data in other documents. We don't recommend building systems that would be better suited to a relational database in Azure Cosmos DB, or any other document database, but simple relationships are fine and may be useful.
-
-In the JSON below we chose to use the example of a stock portfolio from earlier but this time we refer to the stock item on the portfolio instead of embedding it. This way, when the stock item changes frequently throughout the day the only document that needs to be updated is the single stock document.
-
-```json
-Person document:
-{
- "id": "1",
- "firstName": "Thomas",
- "lastName": "Andersen",
- "holdings": [
- { "numberHeld": 100, "stockId": 1},
- { "numberHeld": 50, "stockId": 2}
- ]
-}
-
-Stock documents:
-{
- "id": "1",
- "symbol": "zbzb",
- "open": 1,
- "high": 2,
- "low": 0.5,
- "vol": 11970000,
- "mkt-cap": 42000000,
- "pe": 5.89
-},
-{
- "id": "2",
- "symbol": "xcxc",
- "open": 89,
- "high": 93.24,
- "low": 88.87,
- "vol": 2970200,
- "mkt-cap": 1005000,
- "pe": 75.82
-}
-```
-
-An immediate downside to this approach, though, is when your application is required to show information about each stock that's held when displaying a person's portfolio; in this case, you would need to make multiple trips to the database to load the information for each stock document. Here we've made a decision to improve the efficiency of write operations, which happen frequently throughout the day, but in turn compromised on the read operations, which potentially have less impact on the performance of this particular system.
-
-> [!NOTE]
-> Normalized data models **can require more round trips** to the server.
-
-### What about foreign keys?
-
-Because there's currently no concept of a constraint, foreign-key or otherwise, any inter-document relationships that you have in documents are effectively "weak links" and won't be verified by the database itself. If you want to ensure that the data a document is referring to actually exists, then you need to do this in your application, or by using server-side triggers or stored procedures on Azure Cosmos DB.
-
-### When to reference
-
-In general, use normalized data models when:
-
-* Representing **one-to-many** relationships.
-* Representing **many-to-many** relationships.
-* Related data **changes frequently**.
-* Referenced data could be **unbounded**.
-
-> [!NOTE]
-> Typically normalizing provides better **write** performance.
-
-### Where do I put the relationship?
-
-The growth of the relationship will help determine in which document to store the reference.
-
-Let's look at the JSON below, which models publishers and books.
-
-```json
-Publisher document:
-{
- "id": "mspress",
- "name": "Microsoft Press",
- "books": [ 1, 2, 3, ..., 100, ..., 1000]
-}
-
-Book documents:
-{"id": "1", "name": "Azure Cosmos DB 101" }
-{"id": "2", "name": "Azure Cosmos DB for RDBMS Users" }
-{"id": "3", "name": "Taking over the world one JSON doc at a time" }
-...
-{"id": "100", "name": "Learn about Azure Cosmos DB" }
-...
-{"id": "1000", "name": "Deep Dive into Azure Cosmos DB" }
-```
-
-If the number of the books per publisher is small with limited growth, then storing the book reference inside the publisher document may be useful. However, if the number of books per publisher is unbounded, then this data model would lead to mutable, growing arrays, as in the example publisher document above.
-
-Switching things around a bit would result in a model that still represents the same data but now avoids these large mutable collections.
-
-```json
-Publisher document:
-{
- "id": "mspress",
- "name": "Microsoft Press"
-}
-
-Book documents:
-{"id": "1","name": "Azure Cosmos DB 101", "pub-id": "mspress"}
-{"id": "2","name": "Azure Cosmos DB for RDBMS Users", "pub-id": "mspress"}
-{"id": "3","name": "Taking over the world one JSON doc at a time", "pub-id": "mspress"}
-...
-{"id": "100","name": "Learn about Azure Cosmos DB", "pub-id": "mspress"}
-...
-{"id": "1000","name": "Deep Dive into Azure Cosmos DB", "pub-id": "mspress"}
-```
-
-In the above example, we've dropped the unbounded collection on the publisher document. Instead we just have a reference to the publisher on each book document.
-
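-For instance, a hedged sketch of how the reference is used at query time, in the Azure Cosmos DB SQL syntax; because `pub-id` contains a hyphen, bracket syntax is used to address the property:
-
-```sql
--- All books published by Microsoft Press, via the reference on each book document.
-SELECT * FROM b WHERE b["pub-id"] = "mspress"
-```
-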
-### How do I model many-to-many relationships?
-
-In a relational database *many-to-many* relationships are often modeled with join tables, which just join records from other tables together.
-
-You might be tempted to replicate the same thing using documents and produce a data model that looks similar to the following.
-
-```json
-Author documents:
-{"id": "a1", "name": "Thomas Andersen" }
-{"id": "a2", "name": "William Wakefield" }
-
-Book documents:
-{"id": "b1", "name": "Azure Cosmos DB 101" }
-{"id": "b2", "name": "Azure Cosmos DB for RDBMS Users" }
-{"id": "b3", "name": "Taking over the world one JSON doc at a time" }
-{"id": "b4", "name": "Learn about Azure Cosmos DB" }
-{"id": "b5", "name": "Deep Dive into Azure Cosmos DB" }
-
-Joining documents:
-{"authorId": "a1", "bookId": "b1" }
-{"authorId": "a2", "bookId": "b1" }
-{"authorId": "a1", "bookId": "b2" }
-{"authorId": "a1", "bookId": "b3" }
-```
-
-This would work. However, loading either an author with their books, or loading a book with its author, would always require at least two additional queries against the database: one query to the joining document, and then another query to fetch the actual document being joined.
-
-If this join is only gluing together two pieces of data, then why not drop it completely?
-Consider the following example.
-
-```json
-Author documents:
-{"id": "a1", "name": "Thomas Andersen", "books": ["b1", "b2", "b3"]}
-{"id": "a2", "name": "William Wakefield", "books": ["b1", "b4"]}
-
-Book documents:
-{"id": "b1", "name": "Azure Cosmos DB 101", "authors": ["a1", "a2"]}
-{"id": "b2", "name": "Azure Cosmos DB for RDBMS Users", "authors": ["a1"]}
-{"id": "b3", "name": "Learn about Azure Cosmos DB", "authors": ["a1"]}
-{"id": "b4", "name": "Deep Dive into Azure Cosmos DB", "authors": ["a2"]}
-```
-
-Now, if I have an author, I immediately know which books they've written, and conversely, if I have a book document loaded, I know the IDs of its author(s). This saves the intermediary query against the join table, reducing the number of server round trips your application has to make.
-
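-As a rough sketch of what those lookups might look like in the Azure Cosmos DB SQL syntax, with the property names modeled above:
-
-```sql
--- Books written by author a1: each book embeds an array of author IDs.
-SELECT VALUE b FROM b WHERE ARRAY_CONTAINS(b.authors, "a1")
-
--- Starting from the author instead, the book IDs are already on the document.
-SELECT a.books FROM a WHERE a.id = "a1"
-```
-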
-## Hybrid data models
-
-We've now looked at embedding (or denormalizing) and referencing (or normalizing) data. Each approach has upsides and compromises.
-
-It doesn't always have to be either-or; don't be scared to mix things up a little.
-
-Based on your application's specific usage patterns and workloads there may be cases where mixing embedded and referenced data makes sense and could lead to simpler application logic with fewer server round trips while still maintaining a good level of performance.
-
-Consider the following JSON.
-
-```json
-Author documents:
-{
- "id": "a1",
- "firstName": "Thomas",
- "lastName": "Andersen",
- "countOfBooks": 3,
- "books": ["b1", "b2", "b3"],
- "images": [
- {"thumbnail": "https://....png"}
- {"profile": "https://....png"}
- {"large": "https://....png"}
- ]
-},
-{
- "id": "a2",
- "firstName": "William",
- "lastName": "Wakefield",
- "countOfBooks": 1,
- "books": ["b1"],
- "images": [
- {"thumbnail": "https://....png"}
- ]
-}
-
-Book documents:
-{
- "id": "b1",
- "name": "Azure Cosmos DB 101",
- "authors": [
- {"id": "a1", "name": "Thomas Andersen", "thumbnailUrl": "https://....png"},
- {"id": "a2", "name": "William Wakefield", "thumbnailUrl": "https://....png"}
- ]
-},
-{
- "id": "b2",
- "name": "Azure Cosmos DB for RDBMS Users",
- "authors": [
- {"id": "a1", "name": "Thomas Andersen", "thumbnailUrl": "https://....png"},
- ]
-}
-```
-
-Here we've (mostly) followed the embedded model, where data from other entities is embedded in the top-level document, but other data is referenced.
-
-If you look at the book document, the array of authors contains a few interesting fields. There's an `id` field, which is the field we use to refer back to an author document (standard practice in a normalized model), but we also have `name` and `thumbnailUrl`. We could have stuck with just `id` and left the application to get any additional information it needed from the respective author document using the "link", but because our application displays the author's name and a thumbnail picture with every book displayed, we can save a round trip to the server per book in a list by denormalizing **some** data from the author.
-
-Sure, if the author's name changed or they wanted to update their photo, we'd have to update every book they ever published. But for our application, based on the assumption that authors don't change their names often, this is an acceptable design decision.
-
-The example also contains **pre-calculated aggregate** values to save expensive processing on read operations. Some of the data embedded in the author document is calculated at run time: every time a new book is published, a book document is created **and** the `countOfBooks` field is set to a calculated value based on the number of book documents that exist for that author. This optimization works well in read-heavy systems, where we can afford to do computations on writes in order to optimize reads.
-
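-As an illustrative sketch in the Azure Cosmos DB SQL syntax (property names as modeled above), reading the pre-calculated value is much cheaper than recomputing the count on every read:
-
-```sql
--- Read the pre-calculated aggregate stored on the author document.
-SELECT a.countOfBooks FROM a WHERE a.id = "a1"
-
--- The alternative: count matching book documents on every read.
--- ARRAY_CONTAINS uses partial matching (third argument) because authors is an array of objects.
-SELECT VALUE COUNT(1) FROM b WHERE ARRAY_CONTAINS(b.authors, {"id": "a1"}, true)
-```
-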
-The ability to have a model with pre-calculated fields is made possible because Azure Cosmos DB supports **multi-document transactions**. Many NoSQL stores can't do transactions across documents and therefore advocate design decisions, such as "always embed everything", due to this limitation. With Azure Cosmos DB, you can use server-side triggers, or stored procedures that insert books and update authors all within an ACID transaction. Now you don't **have** to embed everything into one document just to be sure that your data remains consistent.
-
-## Distinguish between different document types
-
-In some scenarios, you may want to mix different document types in the same collection; this is usually the case when you want multiple, related documents to sit in the same [partition](../partitioning-overview.md). For example, you could put both books and book reviews in the same collection and partition it by `bookId`. In such a situation, you usually want to add a field to your documents that identifies their type, in order to differentiate them.
-
-```json
-Book documents:
-{
- "id": "b1",
- "name": "Azure Cosmos DB 101",
- "bookId": "b1",
- "type": "book"
-}
-
-Review documents:
-{
- "id": "r1",
- "content": "This book is awesome",
- "bookId": "b1",
- "type": "review"
-},
-{
- "id": "r2",
- "content": "Best book ever!",
- "bookId": "b1",
- "type": "review"
-}
-```
-
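-With the `type` field in place, a single filter targets one document type within the shared partition. A minimal sketch, using the Azure Cosmos DB SQL syntax:
-
-```sql
--- All reviews for book b1; filtering on bookId keeps the query in a single partition.
-SELECT * FROM c WHERE c.bookId = "b1" AND c.type = "review"
-```
-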
-## Data modeling for Azure Synapse Link and Azure Cosmos DB analytical store
-
-[Azure Synapse Link for Azure Cosmos DB](../synapse-link.md) is a cloud-native hybrid transactional and analytical processing (HTAP) capability that enables you to run near real-time analytics over operational data in Azure Cosmos DB. Azure Synapse Link creates a tight seamless integration between Azure Cosmos DB and Azure Synapse Analytics.
-
-This integration happens through [Azure Cosmos DB analytical store](../analytical-store-introduction.md), a columnar representation of your transactional data that enables large-scale analytics without any impact on your transactional workloads. This analytical store is suitable for fast, cost-effective queries on large operational data sets, without copying data or impacting the performance of your transactional workloads. When you create a container with analytical store enabled, or when you enable analytical store on an existing container, all transactional inserts, updates, and deletes are synchronized with analytical store in near real time; no Change Feed or ETL jobs are required.
-
-With Azure Synapse Link, you can now directly connect to your Azure Cosmos DB containers from Azure Synapse Analytics and access the analytical store at no request unit (RU) cost. Azure Synapse Analytics currently supports Azure Synapse Link with Synapse Apache Spark and serverless SQL pools. If you have a globally distributed Azure Cosmos DB account, after you enable analytical store for a container, it will be available in all regions for that account.
-
-### Analytical store automatic schema inference
-
-While the Azure Cosmos DB transactional store is considered row-oriented semi-structured data, the analytical store has a columnar and structured format. This conversion is made automatically for customers, using [the schema inference rules for the analytical store](../analytical-store-introduction.md). There are limits in the conversion process: the maximum number of nested levels, the maximum number of properties, unsupported data types, and more.
-
-> [!NOTE]
-> In the context of analytical store, we consider the following structures as properties:
-> * JSON "elements" or "string-value pairs separated by a `:` ".
-> * JSON objects, delimited by `{` and `}`.
-> * JSON arrays, delimited by `[` and `]`.
-
-You can minimize the impact of the schema inference conversions, and maximize your analytical capabilities, by using the following techniques.
-
-### Normalization
-
-Normalization becomes meaningful since with Azure Synapse Link you can join between your containers, using T-SQL or Spark SQL. The expected benefits of normalization are:
- * Smaller data footprint in both transactional and analytical store.
- * Smaller transactions.
- * Fewer properties per document.
- * Data structures with fewer nested levels.
-
-These last two factors, fewer properties and fewer levels, improve the performance of your analytical queries and also decrease the chances that parts of your data won't be represented in the analytical store. As described in the article on automatic schema inference rules, there are limits to the number of levels and properties that are represented in analytical store.
-
-Another important factor for normalization is that SQL serverless pools in Azure Synapse support result sets with up to 1000 columns, and exposing nested columns also counts towards that limit. In other words, both analytical store and Synapse SQL serverless pools have a limit of 1000 properties.
-
-But what should you do, given that denormalization is an important data modeling technique for Azure Cosmos DB? The answer is that you must find the right balance for your transactional and analytical workloads.
-
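-For example, once two normalized containers have analytical store enabled, a Synapse serverless SQL pool can join them. This is only a hedged sketch: the account, database, key, container names (`Customers`, `Orders`), and columns are placeholders, not part of any scenario above.
-
-```sql
--- Placeholder account, database, key, containers, and columns.
-SELECT c.customerId, o.orderId, o.orderTotal
-FROM OPENROWSET(
-        'CosmosDB',
-        'Account=myaccount;Database=mydb;Key=<account key>',
-        Customers
-    ) AS c
-JOIN OPENROWSET(
-        'CosmosDB',
-        'Account=myaccount;Database=mydb;Key=<account key>',
-        Orders
-    ) AS o
-    ON o.customerId = c.customerId
-```
-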
-### Partition Key
-
-Your Azure Cosmos DB partition key (PK) isn't used in analytical store, and you can now use [analytical store custom partitioning](https://devblogs.microsoft.com/cosmosdb/custom-partitioning-azure-synapse-link/) to create copies of analytical store partitioned by any key that you want. Because of this isolation, you can choose a PK for your transactional data with a focus on data ingestion and point reads, while cross-partition queries can be done with Azure Synapse Link. Let's see an example:
-
-In a hypothetical global IoT scenario, `device id` is a good PK since all devices have a similar data volume, so you won't have a hot partition problem. But if you want to analyze the data of more than one device, like "all data from yesterday" or "totals per city", you may have problems because those are cross-partition queries. Those queries can hurt your transactional performance, since they use part of your throughput, in request units, to run. But with Azure Synapse Link, you can run these analytical queries at no request unit cost. The analytical store columnar format is optimized for analytical queries, and Azure Synapse Link applies this characteristic to allow great performance with Azure Synapse Analytics runtimes.
-
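-For instance, the kind of cross-partition aggregation described above might look like the following sketch in the Azure Cosmos DB SQL syntax; the telemetry property names are hypothetical:
-
-```sql
--- Totals per city: fans out across partitions when the PK is the device ID.
-SELECT c.city, COUNT(1) AS readings
-FROM c
-WHERE c.timestamp >= "2022-10-12T00:00:00Z"
-GROUP BY c.city
-```
-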
-### Data types and properties names
-
-The automatic schema inference rules article lists the supported data types. While an unsupported data type blocks representation in analytical store, supported data types may be processed differently by the Azure Synapse runtimes. One example: when you use DateTime strings that follow the ISO 8601 UTC standard, Spark pools in Azure Synapse represent these columns as string, and SQL serverless pools in Azure Synapse represent them as varchar(8000).
-
-Another challenge is that not all characters are accepted by Azure Synapse Spark. While white spaces are accepted, characters like colon, grave accent, and comma aren't. Let's say that your document has a property named **"First Name, Last Name"**. This property is represented in analytical store, and Synapse SQL serverless pool can read it without a problem. But because the property exists in analytical store, Azure Synapse Spark can't read any data from that analytical store, including all other properties. In short, you can't use Azure Synapse Spark when even one property name uses unsupported characters.
-
-### Data flattening
-
-All properties in the root level of your Azure Cosmos DB data will be represented in analytical store as a column, and everything in deeper levels of your document data model will be represented as JSON, in nested structures. Nested structures demand extra processing from Azure Synapse runtimes to flatten the data into a structured format, which can be a challenge in big data scenarios.
-
-The document below will have only two columns in analytical store, `id` and `contactDetails`. The nested data, `email` and `phone`, will require extra processing through SQL functions to be read individually, as sketched after this example.
-
-```json
-
-{
- "id": "1",
- "contactDetails": [
- {"email": "thomas@andersen.com"},
- {"phone": "+1 555 555-5555"}
- ]
-}
-```
-
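-As a rough sketch of that extra processing, a Synapse serverless SQL pool query might flatten `contactDetails` with OPENJSON; the account, database, key, and container name are placeholders:
-
-```sql
-SELECT p.id, contact.email, contact.phone
-FROM OPENROWSET(
-        'CosmosDB',
-        'Account=myaccount;Database=mydb;Key=<account key>',
-        People
-    ) WITH (
-        id varchar(50),
-        contactDetails varchar(max)
-    ) AS p
--- Each element of the contactDetails array becomes a row with email or phone populated.
-CROSS APPLY OPENJSON(p.contactDetails)
-    WITH (
-        email varchar(100) '$.email',
-        phone varchar(50) '$.phone'
-    ) AS contact
-```
-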
-The document below will have three columns in analytical store, `id`, `email`, and `phone`. All data is directly accessible as columns.
-
-```json
-
-{
- "id": "1",
- "email": "thomas@andersen.com",
- "phone": "+1 555 555-5555"
-}
-```
-
-### Data tiering
-
-Azure Synapse Link allows you to reduce costs from the following perspectives:
-
- * Fewer queries running in your transactional database.
- * A PK optimized for data ingestion and point reads, reducing data footprint, hot partition scenarios, and partitions splits.
- * Data tiering, since [analytical time-to-live (attl)](../analytical-store-introduction.md#analytical-ttl) is independent from transactional time-to-live (tttl). You can keep your transactional data in transactional store for a few days, weeks, or months, and keep the data in analytical store for years or even forever. The analytical store columnar format brings natural data compression, from 50% up to 90%, and its cost per GB is about 10% of the transactional store price. For more information about the current backup limitations, see the [analytical store overview](../analytical-store-introduction.md).
- * No ETL jobs running in your environment, meaning that you don't need to provision request units for them.
-
-### Controlled redundancy
-
-Controlled redundancy is a good alternative for situations where a data model already exists and can't be changed, but doesn't fit well into analytical store due to automatic schema inference rules, like the limit on nested levels or the maximum number of properties. If this is your case, you can use [Azure Cosmos DB Change Feed](../change-feed.md) to replicate your data into another container, applying the required transformations for an Azure Synapse Link friendly data model. Let's see an example:
-
-#### Scenario
-
-Container `CustomersOrdersAndItems` is used to store online orders, including customer and item details: billing address, delivery address, delivery method, delivery status, item price, and so on. Only the first 1000 properties are represented, so key information isn't included in analytical store, blocking Azure Synapse Link usage. The container has PBs of records, and it's not possible to change the application and remodel the data.
-
-Another aspect of the problem is the large data volume. Billions of rows are constantly used by the Analytics Department, which prevents them from using tttl to delete old data. Maintaining the entire data history in the transactional database because of analytical needs forces them to constantly increase provisioned request units, increasing costs. Transactional and analytical workloads compete for the same resources at the same time.
-
-What to do?
-
-#### Solution with Change Feed
-
-* The engineering team decided to use Change Feed to populate three new containers: `Customers`, `Orders`, and `Items`. With Change Feed they're normalizing and flattening the data. Unnecessary information is removed from the data model and each container has close to 100 properties, avoiding data loss due to automatic schema inference limits.
-* These new containers have analytical store enabled and now the Analytics Department is using Synapse Analytics to read the data, reducing the request units usage since the analytical queries are happening in Synapse Apache Spark and serverless SQL pools.
-* Container `CustomersOrdersAndItems` now has tttl set to keep data for six months only, which allows for another reduction in request unit usage, since there's a minimum of 10 request units per GB in Azure Cosmos DB. Less data, fewer request units.
-
-## Takeaways
-
-The biggest takeaway from this article is that data modeling in a schema-free world is as important as ever.
-
-Just as there's no single way to represent a piece of data on a screen, there's no single way to model your data. You need to understand your application and how it will produce, consume, and process the data. Then, by applying some of the guidelines presented here you can set about creating a model that addresses the immediate needs of your application. When your applications need to change, you can use the flexibility of a schema-free database to embrace that change and evolve your data model easily.
-
-## Next steps
-
-* To learn more about Azure Cosmos DB, refer to the service's [documentation](/azure/cosmos-db/) page.
-
-* To understand how to shard your data across multiple partitions, refer to [Partitioning Data in Azure Cosmos DB](../partitioning-overview.md).
-
-* To learn how to model and partition data on Azure Cosmos DB using a real-world example, refer to [Data Modeling and Partitioning - a Real-World Example](how-to-model-partition-example.md).
-
-* See the training module on how to [Model and partition your data in Azure Cosmos DB.](/training/modules/model-partition-data-azure-cosmos-db/)
-
-* Configure and use [Azure Synapse Link for Azure Cosmos DB](../configure-synapse-link.md).
-
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Odbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/odbc-driver.md
- Title: Use Azure Cosmos DB ODBC driver to connect to BI and analytics tools
-description: Use the Azure Cosmos DB ODBC driver to create normalized data tables and views for SQL queries, analytics, BI, and visualizations.
- Previously updated : 06/21/2022
-
-# Use the Azure Cosmos DB ODBC driver to connect to BI and data analytics tools
-
-This article walks you through installing and using the Azure Cosmos DB ODBC driver to create normalized tables and views for your Azure Cosmos DB data. You can query the normalized data with SQL queries, or import the data into Power BI or other BI and analytics software to create reports and visualizations.
-
-Azure Cosmos DB is a schemaless database, which enables rapid application development and lets you iterate on data models without being confined to a strict schema. A single Azure Cosmos database can contain JSON documents of various structures. To analyze or report on this data, you might need to flatten the data to fit into a schema.
-
-The ODBC driver normalizes Azure Cosmos DB data into tables and views that fit your data analytics and reporting needs. The normalized schemas let you use ODBC-compliant tools to access the data. The schemas have no impact on the underlying data, and don't require developers to adhere to them. The ODBC driver helps make Azure Cosmos DB databases useful for data analysts as well as development teams.
-
-You can do SQL operations against the normalized tables and views, including group by queries, inserts, updates, and deletes. The driver is ODBC 3.8 compliant and supports ANSI SQL-92 syntax.
-
-You can also connect the normalized Azure Cosmos DB data to other software solutions, such as SQL Server Integration Services (SSIS), Alteryx, QlikSense, Tableau and other analytics software, BI, and data integration tools. You can use those solutions to analyze, move, transform, and create visualizations with your Azure Cosmos DB data.
-
-> [!IMPORTANT]
-> - Connecting to Azure Cosmos DB with the ODBC driver is currently supported for Azure Cosmos DB Core (SQL) API only.
-> - The current ODBC driver doesn't support aggregate pushdowns, and has known issues with some analytics tools. Until a new version is released, you can use one of the following alternatives:
-> - [Azure Synapse Link](../synapse-link.md) is the preferred analytics solution for Azure Cosmos DB. With Azure Synapse Link and Azure Synapse SQL serverless pools, you can use any BI tool to extract near real-time insights from Azure Cosmos DB SQL or MongoDB API data.
-> - For Power BI, you can use the [Azure Cosmos DB connector for Power BI](powerbi-visualize.md).
-> - For Qlik Sense, see [Connect Qlik Sense to Azure Cosmos DB](../visualize-qlik-sense.md).
-
-<a id="install"></a>
-## Install the ODBC driver and connect to your database
-
-1. Download the drivers for your environment:
-
- | Installer | Supported operating systems|
- |||
- |[Microsoft Azure Cosmos DB ODBC 64-bit.msi](https://aka.ms/cosmos-odbc-64x64) for 64-bit Windows| 64-bit versions of Windows 8.1 or later, Windows 8, Windows 7, Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2.|
- |[Microsoft Azure Cosmos DB ODBC 32x64-bit.msi](https://aka.ms/cosmos-odbc-32x64) for 32-bit on 64-bit Windows| 64-bit versions of Windows 8.1 or later, Windows 8, Windows 7, Windows XP, Windows Vista, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and Windows Server 2003.|
- |[Microsoft Azure Cosmos DB ODBC 32-bit.msi](https://aka.ms/cosmos-odbc-32x32) for 32-bit Windows|32-bit versions of Windows 8.1 or later, Windows 8, Windows 7, Windows XP, and Windows Vista.|
-
-1. Run the *.msi* file locally, which starts the **Microsoft Azure Cosmos DB ODBC Driver Installation Wizard**.
-
-1. Complete the installation wizard using the default input.
-
-1. After the driver installs, type *ODBC Data sources* in the Windows search box, and open the **ODBC Data Source Administrator**.
-
-1. Make sure that the **Microsoft Azure DocumentDB ODBC Driver** is listed on the **Drivers** tab.
-
- :::image type="content" source="./media/odbc-driver/odbc-driver.png" alt-text="Screenshot of the O D B C Data Source Administrator window.":::
-
- <a id="connect"></a>
-1. Select the **User DSN** tab, and then select **Add** to create a new data source name (DSN). You can also create a System DSN.
-
-1. In the **Create New Data Source** window, select **Microsoft Azure DocumentDB ODBC Driver**, and then select **Finish**.
-
-1. In the **DocumentDB ODBC Driver DSN Setup** window, fill in the following information:
-
- :::image type="content" source="./media/odbc-driver/odbc-driver-dsn-setup.png" alt-text="Screenshot of the D S N Setup window.":::
-
- - **Data Source Name**: A friendly name for the ODBC DSN. This name is unique to this Azure Cosmos DB account.
- - **Description**: A brief description of the data source.
- - **Host**: The URI for your Azure Cosmos DB account. You can get this information from the **Keys** page in your Azure Cosmos DB account in the Azure portal.
- - **Access Key**: The primary or secondary, read-write or read-only key from the Azure Cosmos DB **Keys** page in the Azure portal. It's best to use the read-only keys, if you use the DSN for read-only data processing and reporting.
-
- To avoid an authentication error, use the copy buttons to copy the URI and key from the Azure portal.
-
- :::image type="content" source="./media/odbc-driver/odbc-cosmos-account-keys.png" alt-text="Screenshot of the Azure Cosmos D B Keys page.":::
-
- - **Encrypt Access Key for**: Select the best choice, based on who uses the machine.
-
-1. Select **Test** to make sure you can connect to your Azure Cosmos DB account.
-
-1. Select **Advanced Options** and set the following values:
-
- - **REST API Version**: Select the [REST API version](/rest/api/cosmos-db) for your operations. The default is **2015-12-16**.
-
- If you have containers with [large partition keys](../large-partition-keys.md) that need REST API version 2018-12-31, type *2018-12-31*, and then [follow the steps at the end of this procedure](#edit-the-windows-registry-to-support-rest-api-version-2018-12-31).
-
- - **Query Consistency**: Select the [consistency level](../consistency-levels.md) for your operations. The default is **Session**.
- - **Number of Retries**: Enter the number of times to retry an operation if the initial request doesn't complete due to service rate limiting.
- - **Schema File**: If you don't select a schema file, the driver scans the first page of data for each container to determine its schema, called container mapping, for each session. This process can cause long startup time for applications that use the DSN. It's best to associate a schema file to the DSN.
-
- - If you already have a schema file, select **Browse**, navigate to the file, select **Save**, and then select **OK**.
- - If you don't have a schema file yet, select **OK**, and then follow the steps in the next section to [create a schema definition](#create-a-schema-definition). After you create the schema, come back to this **Advanced Options** window to add the schema file.
-
-After you select **OK** to complete and close the **DocumentDB ODBC Driver DSN Setup** window, the new User DSN appears on the **User DSN** tab of the **ODBC Data Source Administrator** window.
-
- :::image type="content" source="./media/odbc-driver/odbc-driver-user-dsn.png" alt-text="Screenshot that shows the new User D S N on the User D S N tab.":::
-
-### Edit the Windows registry to support REST API version 2018-12-31
-
-If you have containers with [large partition keys](../large-partition-keys.md) that need REST API version 2018-12-31, follow these steps to update the Windows registry to support this version.
-
-1. In the Windows **Start** menu, type *regedit* to find and open the **Registry Editor**.
-1. In the Registry Editor, navigate to the path **Computer\HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI**.
-1. Create a new subkey with the same name as your DSN, such as *Contoso Account ODBC DSN*.
-1. Navigate to the new **Contoso Account ODBC DSN** subkey, and right-click to add a new **String** value:
- - Value Name: **IgnoreSessionToken**
- - Value data: **1**
- :::image type="content" source="./media/odbc-driver/cosmos-odbc-edit-registry.png" alt-text="Screenshot that shows the Windows Registry Editor settings.":::
-
-<a id="#container-mapping"></a><a id="table-mapping"></a>
-## Create a schema definition
-
-There are two types of sampling methods you can use to create a schema: *container mapping* or *table-delimiter mapping*. A sampling session can use both sampling methods, but each container can use only one of the sampling methods. Which method to use depends on your data's characteristics.
-- **Container mapping** retrieves the data on a container page to determine the data structure, and transposes the container to a table on the ODBC side. This sampling method is efficient and fast when the data in a container is homogeneous.
-
-- **Table-delimiter mapping** provides more robust sampling for heterogeneous data. This method scopes the sampling to a set of attributes and corresponding values.
-
- For example, if a document contains a **Type** property, you can scope the sampling to the values of this property. The end result of the sampling is a set of tables for each of the **Type** values you specified. **Type = Car** produces a **Car** table, while **Type = Plane** produces a **Plane** table.
-
-To define a schema, follow these steps. For the table-delimiter mapping method, you take extra steps to define attributes and values for the schema.
-
-1. On the **User DSN** tab of the **ODBC Data Source Administrator** window, select your Azure Cosmos DB User DSN Name, and then select **Configure**.
-
-1. In the **DocumentDB ODBC Driver DSN Setup** window, select **Schema Editor**.
-
- :::image type="content" source="./media/odbc-driver/odbc-driver-schema-editor.png" alt-text="Screenshot that shows the Schema Editor button in the D S N Setup window.":::
-
-1. In the **Schema Editor** window, select **Create New**.
-
-1. The **Generate Schema** window displays all the collections in the Azure Cosmos DB account. Select the checkboxes next to the containers you want to sample.
-
-1. To use the *container mapping* method, select **Sample**.
-
- Or, to use *table-delimiter* mapping, take the following steps to define attributes and values for scoping the sample.
-
- 1. Select **Edit** in the **Mapping Definition** column for your DSN.
-
- 1. In the **Mapping Definition** window, under **Mapping Method**, select **Table Delimiters**.
-
- 1. In the **Attributes** box, type the name of a delimiter property in your document that you want to scope the sampling to, for instance, *City*. Press Enter.
-
- 1. If you want to scope the sampling to certain values for the attribute you entered, select the attribute, and then enter a value in the **Value** box, such as *Seattle*, and press Enter. You can add multiple values for attributes. Just make sure that the correct attribute is selected when you enter values.
-
- 1. When you're done entering attributes and values, select **OK**.
-
- 1. In the **Generate Schema** window, select **Sample**.
-
-1. In the **Design View** tab, refine your schema. The **Design View** represents the database, schema, and table. The table view displays the set of properties associated with the column names, such as **SQL Name** and **Source Name**.
-
- For each column, you can modify the **SQL name**, the **SQL type**, **SQL length**, **Scale**, **Precision**, and **Nullable** as applicable.
-
- You can set **Hide Column** to **true** if you want to exclude that column from query results. Columns marked **Hide Column = true** aren't returned for selection and projection, although they're still part of the schema. For example, you can hide all of the Azure Cosmos DB system required properties that start with **_**. The **id** column is the only field you can't hide, because it's the primary key in the normalized schema.
-
-1. Once you finish defining the schema, select **File** > **Save**, navigate to the directory to save in, and select **Save**.
-
-1. To use this schema with a DSN, in the **DocumentDB ODBC Driver DSN Setup** window, select **Advanced Options**. Select the **Schema File** box, navigate to the saved schema, select **OK** and then select **OK** again. Saving the schema file modifies the DSN connection to scope to the schema-defined data and structure.
-
-### Create views
-
-Optionally, you can define and create views in the **Schema Editor** as part of the sampling process. These views are equivalent to SQL views. The views are read-only, and scope to the selections and projections of the defined Azure Cosmos DB SQL query.
-
-Follow these steps to create a view for your data:
-
-1. On the **Sample View** tab of the **Schema Editor** window, select the containers you want to sample, and then select **Add** in the **View Definition** column.
-
- :::image type="content" source="./media/odbc-driver/odbc-driver-create-view.png" alt-text="Screenshot that shows creating a view.":::
-
-1. In the **View Definitions** window, select **New**. Enter a name for the view, for example *EmployeesfromSeattleView*, and then select **OK**.
-
-1. In the **Edit view** window, enter an [Azure Cosmos DB query](./sql-query-getting-started.md), for example:
-
- `SELECT c.City, c.EmployeeName, c.Level, c.Age, c.Manager FROM c WHERE c.City = "Seattle"`
-
-1. Select **OK**.
-
- :::image type="content" source="./media/odbc-driver/odbc-driver-create-view-2.png" alt-text="Screenshot of adding a query when creating a view.":::
-
-You can create as many views as you like. Once you're done defining the views, select **Sample** to sample the data.
-
-> [!IMPORTANT]
-> The query text in the view definition should not contain line breaks. Otherwise, you'll get a generic error when previewing the view.
-
-## Query with SQL Server Management Studio
-
-Once you set up an Azure Cosmos DB ODBC Driver User DSN, you can query Azure Cosmos DB from SQL Server Management Studio (SSMS) by setting up a linked server connection.
-
-1. [Install SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and connect to the server.
-
-1. In the SSMS query editor, create a linked server object for the data source by running the following commands. Replace `DEMOCOSMOS` with the name for your linked server, and `SDS Name` with your data source name.
-
- ```sql
- USE [master]
- GO
-
- EXEC master.dbo.sp_addlinkedserver @server = N'DEMOCOSMOS', @srvproduct=N'', @provider=N'MSDASQL', @datasrc=N'SDS Name'
-
- EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'DEMOCOSMOS', @useself=N'False', @locallogin=NULL, @rmtuser=NULL, @rmtpassword=NULL
-
- GO
- ```
-
-To see the new linked server name, refresh the linked servers list.
-
-To query the linked database, enter an SSMS query. In this example, the query selects from the table in the container named `customers`:
-
-```sql
-SELECT * FROM OPENQUERY(DEMOCOSMOS, 'SELECT * FROM [customers].[customers]')
-```
-
-Execute the query. The results should look similar to the following output:
-
-```output
-attachments/ 1507476156 521 Bassett Avenue, Wikieup, Missouri, 5422 "2602bc56-0000-0000-0000-59da42bc0000" 2015-02-06T05:32:32 +05:00 f1ca3044f17149f3bc61f7b9c78a26df
-attachments/ 1507476156 167 Nassau Street, Tuskahoma, Illinois, 5998 "2602bd56-0000-0000-0000-59da42bc0000" 2015-06-16T08:54:17 +04:00 f75f949ea8de466a9ef2bdb7ce065ac8
-attachments/ 1507476156 885 Strong Place, Cassel, Montana, 2069 "2602be56-0000-0000-0000-59da42bc0000" 2015-03-20T07:21:47 +04:00 ef0365fb40c04bb6a3ffc4bc77c905fd
-attachments/ 1507476156 515 Barwell Terrace, Defiance, Tennessee, 6439 "2602c056-0000-0000-0000-59da42bc0000" 2014-10-16T06:49:04 +04:00 e913fe543490432f871bc42019663518
-attachments/ 1507476156 570 Ruby Street, Spokane, Idaho, 9025 "2602c156-0000-0000-0000-59da42bc0000" 2014-10-30T05:49:33 +04:00 e53072057d314bc9b36c89a8350048f3
-```
-
-## View your data in Power BI Desktop
-
-You can use your DSN to connect to Azure Cosmos DB with any ODBC-compliant tools. This procedure shows you how to connect to Power BI Desktop to create a Power BI visualization.
-
-1. In Power BI Desktop, select **Get Data**.
-
- :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data.png" alt-text="Screenshot showing Get Data in Power B I Desktop.":::
-
-1. In the **Get Data** window, select **Other** > **ODBC**, and then select **Connect**.
-
- :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-2.png" alt-text="Screenshot that shows choosing O D B C data source in Power B I Get Data.":::
-
-1. In the **From ODBC** window, select the DSN you created, and then select **OK**.
-
- :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-3.png" alt-text="Screenshot that shows choosing the D S N in Power B I Get Data.":::
-
-1. In the **Access a data source using an ODBC driver** window, select **Default or Custom** and then select **Connect**.
-
-1. In the **Navigator** window, in the left pane, expand the database and schema, and select the table. The results pane includes the data that uses the schema you created.
-
- :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-4.png" alt-text="Screenshot of selecting the table in Power B I Get Data.":::
-
-1. To visualize the data in Power BI desktop, select the checkbox next to the table name, and then select **Load**.
-
-1. In Power BI Desktop, select the **Data** tab on the left of the screen to confirm your data was imported.
-
-1. Select the **Report** tab on the left of the screen, select **New visual** from the ribbon, and then customize the visual.
-
-## Troubleshooting
-
-- **Problem**: You get the following error when trying to connect:
-
- ```output
- [HY000]: [Microsoft][Azure Cosmos DB] (401) HTTP 401 Authentication Error: {"code":"Unauthorized","message":"The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'get\ndbs\n\nfri, 20 jan 2017 03:43:55 gmt\n\n'\r\nActivityId: 9acb3c0d-cb31-4b78-ac0a-413c8d33e373"}
- ```
-
- **Solution:** Make sure the **Host** and **Access Key** values you copied from the Azure portal are correct, and retry.
-
-- **Problem**: You get the following error in SSMS when trying to create a linked Azure Cosmos DB server:
-
- ```output
- Msg 7312, Level 16, State 1, Line 44
-
- Invalid use of schema or catalog for OLE DB provider "MSDASQL" for linked server "DEMOCOSMOS". A four-part name was supplied, but the provider does not expose the necessary interfaces to use a catalog or schema.
- ```
-
- **Solution**: A linked Azure Cosmos DB server doesn't support four-part naming.
-
-## Next steps
-
-- To learn more about Azure Cosmos DB, see [Welcome to Azure Cosmos DB](../introduction.md).
-- For more information about creating visualizations in Power BI Desktop, see [Visualization types in Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-visualization-types-for-reports-and-q-and-a/).
cosmos-db Performance Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-testing.md
- Title: Performance and scale testing with Azure Cosmos DB
-description: Learn how to do scale and performance testing with Azure Cosmos DB. You can then evaluate the functionality of Azure Cosmos DB for high-performance application scenarios.
- Previously updated : 08/26/2021
-
-# Performance and scale testing with Azure Cosmos DB
-
-Performance and scale testing is a key step in application development. For many applications, the database tier has a significant impact on overall performance and scalability. Therefore, it's a critical component of performance testing. [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) is purpose-built for elastic scale and predictable performance. These capabilities make it a great fit for applications that need a high-performance database tier.
-
-This article is a reference for developers implementing performance test suites for their Azure Cosmos DB workloads. It also can be used to evaluate Azure Cosmos DB for high-performance application scenarios. It focuses primarily on isolated performance testing of the database, but also includes best practices for production applications.
-
-After reading this article, you'll be able to answer the following questions:
-
-* Where can I find a sample .NET client application for performance testing of Azure Cosmos DB?
-* How do I achieve high throughput levels with Azure Cosmos DB from my client application?
-
-To get started with code, download the project from [Azure Cosmos DB performance testing sample](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Tools/Benchmark).
-
-> [!NOTE]
-> The goal of this application is to demonstrate how to get the best performance from Azure Cosmos DB with a small number of client machines. The goal of the sample is not to achieve the peak throughput capacity of Azure Cosmos DB (which can scale without any limits).
-
-If you're looking for client-side configuration options to improve Azure Cosmos DB performance, see [Azure Cosmos DB performance tips](performance-tips.md).
-
-## Run the performance testing application
-The quickest way to get started is to compile and run the .NET sample, as described in the following steps. You can also review the source code and implement similar configurations on your own client applications.
-
-**Step 1:** Download the project from [Azure Cosmos DB performance testing sample](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Tools/Benchmark), or fork the GitHub repository.
-
-**Step 2:** Modify the settings for EndpointUrl, AuthorizationKey, CollectionThroughput, and DocumentTemplate (optional) in App.config.
-
-> [!NOTE]
-> Before you provision collections with high throughput, refer to the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to estimate the costs per collection. Azure Cosmos DB bills storage and throughput independently on an hourly basis. You can save costs by deleting or lowering the throughput of your Azure Cosmos containers after testing.
->
->
-
-**Step 3:** Compile and run the console app from the command line. You should see output like the following:
-
-```bash
-C:\Users\cosmosdb\Desktop\Benchmark>DocumentDBBenchmark.exe
-Summary:
-
-Endpoint: https://arramacquerymetrics.documents.azure.com:443/
-Collection : db.data at 100000 request units per second
-Document Template*: Player.json
-Degree of parallelism*: -1
-
-DocumentDBBenchmark starting...
-Found collection data with 100000 RU/s
-Starting Inserts with 100 tasks
-Inserted 4503 docs @ 4491 writes/s, 47070 RU/s (122B max monthly 1KB reads)
-Inserted 17910 docs @ 8862 writes/s, 92878 RU/s (241B max monthly 1KB reads)
-Inserted 32339 docs @ 10531 writes/s, 110366 RU/s (286B max monthly 1KB reads)
-Inserted 47848 docs @ 11675 writes/s, 122357 RU/s (317B max monthly 1KB reads)
-Inserted 58857 docs @ 11545 writes/s, 120992 RU/s (314B max monthly 1KB reads)
-Inserted 69547 docs @ 11378 writes/s, 119237 RU/s (309B max monthly 1KB reads)
-Inserted 80687 docs @ 11345 writes/s, 118896 RU/s (308B max monthly 1KB reads)
-Inserted 91455 docs @ 11272 writes/s, 118131 RU/s (306B max monthly 1KB reads)
-Inserted 102129 docs @ 11208 writes/s, 117461 RU/s (304B max monthly 1KB reads)
-Inserted 112444 docs @ 11120 writes/s, 116538 RU/s (302B max monthly 1KB reads)
-Inserted 122927 docs @ 11063 writes/s, 115936 RU/s (301B max monthly 1KB reads)
-Inserted 133157 docs @ 10993 writes/s, 115208 RU/s (299B max monthly 1KB reads)
-Inserted 144078 docs @ 10988 writes/s, 115159 RU/s (298B max monthly 1KB reads)
-Inserted 155415 docs @ 11013 writes/s, 115415 RU/s (299B max monthly 1KB reads)
-Inserted 166126 docs @ 10992 writes/s, 115198 RU/s (299B max monthly 1KB reads)
-Inserted 173051 docs @ 10739 writes/s, 112544 RU/s (292B max monthly 1KB reads)
-Inserted 180169 docs @ 10527 writes/s, 110324 RU/s (286B max monthly 1KB reads)
-Inserted 192469 docs @ 10616 writes/s, 111256 RU/s (288B max monthly 1KB reads)
-Inserted 199107 docs @ 10406 writes/s, 109054 RU/s (283B max monthly 1KB reads)
-Inserted 200000 docs @ 9930 writes/s, 104065 RU/s (270B max monthly 1KB reads)
-
-Summary:
-
-Inserted 200000 docs @ 9928 writes/s, 104063 RU/s (270B max monthly 1KB reads)
-
-DocumentDBBenchmark completed successfully.
-Press any key to exit...
-```
-
-**Step 4 (if necessary):** The throughput reported (RU/s) by the tool should be the same as or higher than the provisioned throughput of the collection or set of collections. If it's not, increasing the DegreeOfParallelism in small increments might help you reach the limit. If the throughput from your client app plateaus, start multiple instances of the app on additional client machines. If you need help with this step, file a support ticket from the [Azure portal](https://portal.azure.com).
-
-After you have the app running, you can try different [indexing policies](../index-policy.md) and [consistency levels](../consistency-levels.md) to understand their impact on throughput and latency. You can also review the source code and implement similar configurations to your own test suites or production applications.
-
-## Next steps
-
-In this article, we looked at how you can perform performance and scale testing with Azure Cosmos DB by using a .NET console app. For more information, see the following articles:
-
-* [Azure Cosmos DB performance testing sample](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Tools/Benchmark)
-* [Client configuration options to improve Azure Cosmos DB performance](performance-tips.md)
-* [Server-side partitioning in Azure Cosmos DB](../partitioning-overview.md)
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
cosmos-db Performance Tips Async Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips-async-java.md
- Title: Performance tips for Azure Cosmos DB Async Java SDK v2
-description: Learn client configuration options to improve Azure Cosmos database performance for Async Java SDK v2
- Previously updated : 05/11/2020
-
-# Performance tips for Azure Cosmos DB Async Java SDK v2
-
-> [!div class="op_single_selector"]
-> * [Java SDK v4](performance-tips-java-sdk-v4-sql.md)
-> * [Async Java SDK v2](performance-tips-async-java.md)
-> * [Sync Java SDK v2](performance-tips-java.md)
-> * [.NET SDK v3](performance-tips-dotnet-sdk-v3-sql.md)
-> * [.NET SDK v2](performance-tips.md)
-
-> [!IMPORTANT]
-> This is *not* the latest Java SDK for Azure Cosmos DB! You should upgrade your project to [Azure Cosmos DB Java SDK v4](sql-api-sdk-java-v4.md) and then read the Azure Cosmos DB Java SDK v4 [performance tips guide](performance-tips-java-sdk-v4-sql.md). Follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide to upgrade.
->
-> The performance tips in this article are for Azure Cosmos DB Async Java SDK v2 only. See the Azure Cosmos DB Async Java SDK v2 [Release notes](sql-api-sdk-async-java.md), [Maven repository](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb), and Azure Cosmos DB Async Java SDK v2 [troubleshooting guide](troubleshoot-java-async-sdk.md) for more information.
->
-
-> [!IMPORTANT]
-> On August 31, 2024 the Azure Cosmos DB Async Java SDK v2.x
-> will be retired; the SDK and all applications using the SDK
-> **will continue to function**; Azure Cosmos DB will simply cease
-> to provide further maintenance and support for this SDK.
-> We recommend following the instructions above to migrate to
-> Azure Cosmos DB Java SDK v4.
->
-
-Azure Cosmos DB is a fast and flexible distributed database that scales seamlessly with guaranteed latency and throughput. You do not have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call or SDK method call. However, because Azure Cosmos DB is accessed via network calls there are client-side optimizations you can make to achieve peak performance when using the [Azure Cosmos DB Async Java SDK v2](sql-api-sdk-async-java.md).
-
-So if you're asking "How can I improve my database performance?" consider the following options:
-
-## Networking
-
-* **Connection mode: Use Direct mode**
-
- How a client connects to Azure Cosmos DB has important implications on performance, especially in terms of client-side latency. The *ConnectionMode* is a key configuration setting available for configuring the client *ConnectionPolicy*. For Azure Cosmos DB Async Java SDK v2, the two available ConnectionModes are:
-
- * [Gateway (default)](/java/api/com.microsoft.azure.cosmosdb.connectionmode)
- * [Direct](/java/api/com.microsoft.azure.cosmosdb.connectionmode)
-
- Gateway mode is supported on all SDK platforms and it is the configured option by default. If your applications run within a corporate network with strict firewall restrictions, Gateway mode is the best choice since it uses the standard HTTPS port and a single endpoint. The performance tradeoff, however, is that Gateway mode involves an additional network hop every time data is read or written to Azure Cosmos DB. Because of this, Direct mode offers better performance due to fewer network hops.
-
- The *ConnectionMode* is configured during the construction of the *DocumentClient* instance with the *ConnectionPolicy* parameter.
-
-### <a id="asyncjava2-connectionpolicy"></a>Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)
-
-```java
- public ConnectionPolicy getConnectionPolicy() {
- ConnectionPolicy policy = new ConnectionPolicy();
- policy.setConnectionMode(ConnectionMode.Direct);
- policy.setMaxPoolSize(1000);
- return policy;
- }
-
- // Reuse the Direct mode policy defined above instead of a default policy.
- ConnectionPolicy connectionPolicy = getConnectionPolicy();
- DocumentClient client = new DocumentClient(HOST, MASTER_KEY, connectionPolicy, null);
-```
-
-* **Collocate clients in same Azure region for performance**
-
- When possible, place any applications calling Azure Cosmos DB in the same region as the Azure Cosmos database. For an approximate comparison, calls to Azure Cosmos DB within the same region complete within 1-2 ms, but the latency between the West and East coast of the US is >50 ms. This latency can likely vary from request to request depending on the route taken by the request as it passes from the client to the Azure datacenter boundary. The lowest possible latency is achieved by ensuring the calling application is located within the same Azure region as the provisioned Azure Cosmos DB endpoint. For a list of available regions, see [Azure Regions](https://azure.microsoft.com/regions/#services).
-
- :::image type="content" source="./media/performance-tips/same-region.png" alt-text="Illustration of the Azure Cosmos DB connection policy" border="false":::
-
-## SDK Usage
-
-* **Install the most recent SDK**
-
- The Azure Cosmos DB SDKs are constantly being improved to provide the best performance. See the Azure Cosmos DB Async Java SDK v2 [Release Notes](sql-api-sdk-async-java.md) pages to determine the most recent SDK and review improvements.
-
-* **Use a singleton Azure Cosmos DB client for the lifetime of your application**
-
- Each AsyncDocumentClient instance is thread-safe and performs efficient connection management and address caching. To allow efficient connection management and better performance, it's recommended to use a single AsyncDocumentClient instance for the lifetime of the application.
-
-* **Tuning ConnectionPolicy**
-
- By default, Direct mode Cosmos DB requests are made over TCP when using the Azure Cosmos DB Async Java SDK v2. Internally the SDK uses a special Direct mode architecture to dynamically manage network resources and get the best performance.
-
- In the Azure Cosmos DB Async Java SDK v2, Direct mode is the best choice to improve database performance with most workloads.
-
- * ***Overview of Direct mode***
-
- :::image type="content" source="./media/performance-tips-async-java/rntbdtransportclient.png" alt-text="Illustration of the Direct mode architecture" border="false":::
-
- The client-side architecture employed in Direct mode enables predictable network utilization and multiplexed access to Azure Cosmos DB replicas. The diagram above shows how Direct mode routes client requests to replicas in the Cosmos DB backend. The Direct mode architecture allocates up to 10 **Channels** on the client side per DB replica. A Channel is a TCP connection preceded by a request buffer, which is 30 requests deep. The Channels belonging to a replica are dynamically allocated as needed by the replica's **Service Endpoint**. When the user issues a request in Direct mode, the **TransportClient** routes the request to the proper service endpoint based on the partition key. The **Request Queue** buffers requests before the Service Endpoint.
-
- * ***ConnectionPolicy Configuration options for Direct mode***
-
- As a first step, use the recommended configuration settings below. Contact the [Azure Cosmos DB team](mailto:CosmosDBPerformanceSupport@service.microsoft.com) if you run into issues on this particular topic.
-
- If you are using Azure Cosmos DB as a reference database (that is, the database is used for many point read operations and few write operations), it may be acceptable to set *idleEndpointTimeout* to 0 (that is, no timeout).
-
- | Configuration option | Default |
- | :: | :--: |
- | bufferPageSize | 8192 |
- | connectionTimeout | "PT1M" |
- | idleChannelTimeout | "PT0S" |
- | idleEndpointTimeout | "PT1M10S" |
- | maxBufferCapacity | 8388608 |
- | maxChannelsPerEndpoint | 10 |
- | maxRequestsPerChannel | 30 |
- | receiveHangDetectionTime | "PT1M5S" |
- | requestExpiryInterval | "PT5S" |
- | requestTimeout | "PT1M" |
- | requestTimerResolution | "PT0.5S" |
- | sendHangDetectionTime | "PT10S" |
- | shutdownTimeout | "PT15S" |
-
-* ***Programming tips for Direct mode***
-
- Review the Azure Cosmos DB Async Java SDK v2 [Troubleshooting](troubleshoot-java-async-sdk.md) article as a baseline for resolving any SDK issues.
-
- Some important programming tips when using Direct mode:
-
- * **Use multithreading in your application for efficient TCP data transfer** - After making a request, your application should subscribe to receive data on another thread. Not doing so forces unintended "half-duplex" operation and the subsequent requests are blocked waiting for the previous request's reply.
-
- * **Carry out compute-intensive workloads on a dedicated thread** - For similar reasons to the previous tip, operations such as complex data processing are best placed in a separate thread. A request that pulls in data from another data store (for example if the thread utilizes Azure Cosmos DB and Spark data stores simultaneously) may experience increased latency and it is recommended to spawn an additional thread that awaits a response from the other data store.
-
- * The underlying network IO in the Azure Cosmos DB Async Java SDK v2 is managed by Netty, see these [tips for avoiding coding patterns that block Netty IO threads](troubleshoot-java-async-sdk.md#invalid-coding-pattern-blocking-netty-io-thread).
-
- * **Data modeling** - The Azure Cosmos DB SLA assumes document size to be less than 1KB. Optimizing your data model and programming to favor smaller document size will generally lead to decreased latency. If you are going to need storage and retrieval of docs larger than 1KB, the recommended approach is for documents to link to data in Azure Blob Storage.
-
-* **Tuning parallel queries for partitioned collections**
-
- Azure Cosmos DB Async Java SDK v2 supports parallel queries, which enable you to query a partitioned collection in parallel. For more information, see [code samples](https://github.com/Azure/azure-cosmosdb-java/tree/master/examples/src/test/java/com/microsoft/azure/cosmosdb/rx/examples) related to working with the SDKs. Parallel queries are designed to improve query latency and throughput over their serial counterpart.
-
- * ***Tuning setMaxDegreeOfParallelism\:***
-
- Parallel queries work by querying multiple partitions in parallel. However, data from an individual partitioned collection is fetched serially with respect to the query. So, set setMaxDegreeOfParallelism to the number of partitions, which gives you the best chance of achieving the most performant query, provided all other system conditions remain the same. If you don't know the number of partitions, you can use setMaxDegreeOfParallelism to set a high number, and the system chooses the minimum (number of partitions, user provided input) as the maximum degree of parallelism.
-
- It is important to note that parallel queries produce the best benefits if the data is evenly distributed across all partitions with respect to the query. If the partitioned collection is partitioned in such a way that all or a majority of the data returned by a query is concentrated in a few partitions (one partition in the worst case), then those partitions bottleneck the performance of the query.
-
- * ***Tuning setMaxBufferedItemCount\:***
-
- Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. The pre-fetching helps in overall latency improvement of a query. setMaxBufferedItemCount limits the number of pre-fetched results. Setting setMaxBufferedItemCount to the expected number of results returned (or a higher number) enables the query to receive maximum benefit from pre-fetching.
-
- Pre-fetching works the same way irrespective of the MaxDegreeOfParallelism, and there is a single buffer for the data from all partitions. Both settings are shown in the sketch below.
-
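- The following minimal sketch (Async Java SDK v2) applies both settings through *FeedOptions*. The client, collection link, query text, and values are illustrative assumptions only:
-
- **Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)**
-
- ```java
- import com.microsoft.azure.cosmosdb.Document;
- import com.microsoft.azure.cosmosdb.FeedOptions;
- import com.microsoft.azure.cosmosdb.FeedResponse;
- import rx.Observable;
-
- FeedOptions options = new FeedOptions();
- options.setEnableCrossPartitionQuery(true);
- // Fan out to many partitions in parallel; the SDK caps this at the partition count.
- options.setMaxDegreeOfParallelism(100);
- // Pre-fetch roughly as many results as you expect to consume.
- options.setMaxBufferedItemCount(1000);
-
- Observable<FeedResponse<Document>> queryObs = asyncDocumentClient.queryDocuments(
-     collectionLink, "SELECT * FROM c", options);
- ```
-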
-* **Implement backoff at getRetryAfterInMilliseconds intervals**
-
- During performance testing, you should increase load until a small rate of requests gets throttled. If throttled, the client application should back off for the server-specified retry interval. Respecting the backoff ensures that you spend a minimal amount of time waiting between retries.
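-
- The SDK already honors the server's retry-after interval for you (see the rate limiting section below); the hedged sketch that follows applies only if you add your own retry layer on top. It assumes an existing `createDocObs` Observable and RxJava 1.x types:
-
- **Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)**
-
- ```java
- import java.util.concurrent.TimeUnit;
- import com.microsoft.azure.cosmosdb.DocumentClientException;
- import rx.Observable;
-
- createDocObs
-     .retryWhen(errors -> errors.flatMap(error -> {
-         if (error instanceof DocumentClientException
-                 && ((DocumentClientException) error).getStatusCode() == 429) {
-             long waitMs = ((DocumentClientException) error).getRetryAfterInMilliseconds();
-             // Back off for the server-specified interval before resubscribing.
-             return Observable.timer(waitMs, TimeUnit.MILLISECONDS);
-         }
-         // Don't retry non-throttling errors here.
-         return Observable.<Long>error(error);
-     }))
-     .subscribe(resourceResponse -> { /* handle the result */ });
- ```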
-
-* **Scale out your client-workload**
-
- If you are testing at high throughput levels (>50,000 RU/s), the client application may become the bottleneck due to the machine capping out on CPU or network utilization. If you reach this point, you can continue to push the Azure Cosmos DB account further by scaling out your client applications across multiple servers.
-
-* **Use name based addressing**
-
- Use name-based addressing, where links have the format `dbs/MyDatabaseId/colls/MyCollectionId/docs/MyDocumentId`, instead of SelfLinks (\_self), which have the format `dbs/<database_rid>/colls/<collection_rid>/docs/<document_rid>`, to avoid retrieving the ResourceIds of all the resources used to construct the link. Also, because these resources can be re-created (possibly with the same name), caching their ResourceIds may not help.
-
-* **Tune the page size for queries/read feeds for better performance**
-
- When performing a bulk read of documents by using read feed functionality (for example, readDocuments) or when issuing a SQL query, the results are returned in a segmented fashion if the result set is too large. By default, results are returned in chunks of 100 items or 1 MB, whichever limit is hit first.
-
- To reduce the number of network round trips required to retrieve all applicable results, you can increase the page size using the [x-ms-max-item-count](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) request header to up to 1000. In cases where you need to display only a few results, for example, if your user interface or application API returns only 10 results a time, you can also decrease the page size to 10 to reduce the throughput consumed for reads and queries.
-
- You may also set the page size using the setMaxItemCount method.
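-
- For example, the following minimal sketch (Async Java SDK v2) reads a document feed in pages of 1000; the client and collection link are illustrative assumptions:
-
- **Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)**
-
- ```java
- FeedOptions options = new FeedOptions();
- // Return up to 1000 items per page instead of the default 100.
- options.setMaxItemCount(1000);
-
- Observable<FeedResponse<Document>> readFeedObs = asyncDocumentClient.readDocuments(
-     collectionLink, options);
- ```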
-
-* **Use Appropriate Scheduler (Avoid stealing Event loop IO Netty threads)**
-
- The Azure Cosmos DB Async Java SDK v2 uses [netty](https://netty.io/) for non-blocking IO. The SDK uses a fixed number of IO netty event loop threads (as many as your machine has CPU cores) for executing IO operations. The Observable returned by the API emits the result on one of the shared IO event loop netty threads, so it is important not to block them. Doing CPU-intensive work or a blocking operation on an IO event loop netty thread may cause a deadlock or significantly reduce SDK throughput.
-
- For example, the following code executes CPU-intensive work on the event loop IO netty thread:
-
- **Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)**
-
- ```java
- Observable<ResourceResponse<Document>> createDocObs = asyncDocumentClient.createDocument(
- collectionLink, document, null, true);
-
- createDocObs.subscribe(
- resourceResponse -> {
- //this is executed on eventloop IO netty thread.
- //the eventloop thread is shared and is meant to return back quickly.
- //
- // DON'T do this on eventloop IO netty thread.
- veryCpuIntensiveWork();
- });
- ```
-
- After the result is received, if you want to do CPU-intensive work on it, avoid doing so on the event loop IO netty thread. Instead, provide your own Scheduler with its own threads for running your work.
-
- **Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)**
-
- ```java
- import rx.schedulers.Schedulers;
-
- Observable<ResourceResponse<Document>> createDocObs = asyncDocumentClient.createDocument(
- collectionLink, document, null, true);
-
- createDocObs.observeOn(Schedulers.computation())
-     .subscribe(
- resourceResponse -> {
- // this is executed on threads provided by Scheduler.computation()
- // Schedulers.computation() should be used only when:
- // 1. The work is cpu intensive
- // 2. You are not doing blocking IO, thread sleep, etc. in this thread against other resources.
- veryCpuIntensiveWork();
- });
- ```
-
- Based on the type of your work, use the appropriate existing RxJava Scheduler. For details, see [``Schedulers``](http://reactivex.io/RxJava/1.x/javadoc/rx/schedulers/Schedulers.html).
-
- For more information, see the [GitHub page](https://github.com/Azure/azure-cosmosdb-java) for Azure Cosmos DB Async Java SDK v2.
-
-* **Disable netty's logging**
-
- Netty library logging is chatty and needs to be turned off (suppressing it in the logging configuration may not be enough) to avoid additional CPU costs. If you are not in debugging mode, disable netty's logging altogether. For example, if you are using log4j, add the following line to your codebase to remove the additional CPU costs incurred by ``org.apache.log4j.Category.callAppenders()`` from netty:
-
- ```java
- org.apache.log4j.Logger.getLogger("io.netty").setLevel(org.apache.log4j.Level.OFF);
- ```
-
-* **OS Open files Resource Limit**
-
- Some Linux systems (like Red Hat) have an upper limit on the number of open files and so the total number of connections. Run the following to view the current limits:
-
- ```bash
- ulimit -a
- ```
-
- The number of open files (nofile) needs to be large enough to have enough room for your configured connection pool size and other open files by the OS. It can be modified to allow for a larger connection pool size.
-
- Open the limits.conf file:
-
- ```bash
- vim /etc/security/limits.conf
- ```
-
- Add/modify the following lines:
-
- ```
- * - nofile 100000
- ```
-
-## Indexing policy
-
-* **Exclude unused paths from indexing for faster writes**
-
- Azure Cosmos DB's indexing policy allows you to specify which document paths to include or exclude from indexing by leveraging Indexing Paths (setIncludedPaths and setExcludedPaths). The use of indexing paths can offer improved write performance and lower index storage for scenarios in which the query patterns are known beforehand, as indexing costs are directly correlated to the number of unique paths indexed. For example, the following code shows how to exclude an entire section of the documents (also known as a subtree) from indexing using the "*" wildcard.
-
- ### <a id="asyncjava2-indexing"></a>Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)
-
- ```Java
- // collectionDefinition is an existing DocumentCollection definition.
- IndexingPolicy indexingPolicy = new IndexingPolicy();
- Collection<Index> indexes = new ArrayList<Index>();
- Index numberIndex = Index.Range(DataType.Number);
- numberIndex.set("precision", -1);
- indexes.add(numberIndex);
- IncludedPath includedPath = new IncludedPath();
- includedPath.setPath("/*");
- includedPath.setIndexes(indexes);
- indexingPolicy.setIncludedPaths(Collections.singletonList(includedPath));
- ExcludedPath excludedPath = new ExcludedPath();
- excludedPath.setPath("/nonIndexedContent/*"); // example subtree to exclude
- indexingPolicy.setExcludedPaths(Collections.singletonList(excludedPath));
- collectionDefinition.setIndexingPolicy(indexingPolicy);
- ```
-
- For more information, see [Azure Cosmos DB indexing policies](../index-policy.md).
-
-## <a id="measure-rus"></a>Throughput
-
-* **Measure and tune for lower request units/second usage**
-
- Azure Cosmos DB offers a rich set of database operations including relational and hierarchical queries with UDFs, stored procedures, and triggers, all operating on the documents within a database collection. The cost associated with each of these operations varies based on the CPU, IO, and memory required to complete the operation. Instead of thinking about and managing hardware resources, you can think of a request unit (RU) as a single measure for the resources required to perform various database operations and service an application request.
-
- Throughput is provisioned based on the number of [request units](../request-units.md) set for each container. Request unit consumption is evaluated as a rate per second. Applications that exceed the provisioned request unit rate for their container are limited until the rate drops below the provisioned level for the container. If your application requires a higher level of throughput, you can increase your throughput by provisioning additional request units.
-
- The complexity of a query impacts how many request units are consumed for an operation. The number of predicates, nature of the predicates, number of UDFs, and the size of the source data set all influence the cost of query operations.
-
- To measure the overhead of any operation (create, update, or delete), inspect the [x-ms-request-charge](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) header to measure the number of request units consumed by these operations. You can also look at the equivalent RequestCharge property in ResourceResponse\<T> or FeedResponse\<T>.
-
- ### <a id="asyncjava2-requestcharge"></a>Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)
-
- ```Java
- ResourceResponse<Document> response = asyncClient.createDocument(collectionLink, documentDefinition, null,
-     false).toBlocking().single();
- double requestCharge = response.getRequestCharge();
- ```
-
- The request charge returned in this header is a fraction of your provisioned throughput. For example, if you have 2000 RU/s provisioned and the preceding query returns 1000 1-KB documents, the cost of the operation is 1000. As such, within one second, the server honors only two such requests before rate limiting subsequent requests. For more information, see [Request units](../request-units.md) and the [request unit calculator](https://www.documentdb.com/capacityplanner).
-
-* **Handle rate limiting/request rate too large**
-
- When a client attempts to exceed the reserved throughput for an account, there is no performance degradation at the server and no use of throughput capacity beyond the reserved level. The server will preemptively end the request with RequestRateTooLarge (HTTP status code 429) and return the [x-ms-retry-after-ms](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) header indicating the amount of time, in milliseconds, that the user must wait before reattempting the request.
-
- ```xml
- HTTP Status 429,
- Status Line: RequestRateTooLarge
- x-ms-retry-after-ms :100
- ```
-
- The SDKs all implicitly catch this response, respect the server-specified retry-after header, and retry the request. Unless your account is being accessed concurrently by multiple clients, the next retry will succeed.
-
- If you have more than one client cumulatively operating consistently above the request rate, the default retry count (currently set to 9 internally by the client) may not suffice; in this case, the client throws a DocumentClientException with status code 429 to the application. The default retry count can be changed by using setRetryOptions on the ConnectionPolicy instance, as sketched below. By default, the DocumentClientException with status code 429 is returned after a cumulative wait time of 30 seconds if the request continues to operate above the request rate. This occurs even when the current retry count is less than the max retry count, be it the default of 9 or a user-defined value.
-
- While the automated retry behavior helps to improve resiliency and usability for most applications, it might be at odds with performance benchmarks, especially when measuring latency. The client-observed latency will spike if the experiment hits the server throttle and causes the client SDK to silently retry. To avoid latency spikes during performance experiments, measure the charge returned by each operation and ensure that requests are operating below the reserved request rate. For more information, see [Request units](../request-units.md).
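-
- The following minimal sketch shows how to adjust the retry behavior through *RetryOptions* when the client is built; the endpoint, key, and values are illustrative assumptions:
-
- **Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)**
-
- ```java
- ConnectionPolicy policy = new ConnectionPolicy();
- RetryOptions retryOptions = new RetryOptions();
- // Illustrative values; the defaults are 9 retries and a 30-second cumulative wait.
- retryOptions.setMaxRetryAttemptsOnThrottledRequests(15);
- retryOptions.setMaxRetryWaitTimeInSeconds(60);
- policy.setRetryOptions(retryOptions);
-
- AsyncDocumentClient client = new AsyncDocumentClient.Builder()
-     .withServiceEndpoint("<your-account-endpoint>")
-     .withMasterKeyOrResourceToken("<your-account-key>")
-     .withConnectionPolicy(policy)
-     .withConsistencyLevel(ConsistencyLevel.Session)
-     .build();
- ```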
-
-* **Design for smaller documents for higher throughput**
-
- The request charge (the request processing cost) of a given operation is directly correlated to the size of the document. Operations on large documents cost more than operations on small documents.
-
-## Next steps
-
-To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
cosmos-db Performance Tips Dotnet Sdk V3 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips-dotnet-sdk-v3-sql.md
- Title: Azure Cosmos DB performance tips for .NET SDK v3
-description: Learn client configuration options to help improve Azure Cosmos DB .NET v3 SDK performance.
---- Previously updated : 03/31/2022-----
-# Performance tips for Azure Cosmos DB and .NET
-
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](performance-tips-dotnet-sdk-v3-sql.md)
-> * [.NET SDK v2](performance-tips.md)
-> * [Java SDK v4](performance-tips-java-sdk-v4-sql.md)
-> * [Async Java SDK v2](performance-tips-async-java.md)
-> * [Sync Java SDK v2](performance-tips-java.md)
-
-Azure Cosmos DB is a fast, flexible distributed database that scales seamlessly with guaranteed latency and throughput levels. You don't have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call. To learn more, see [provision container throughput](how-to-provision-container-throughput.md) or [provision database throughput](how-to-provision-database-throughput.md).
-
-Because Azure Cosmos DB is accessed via network calls, you can make client-side optimizations to achieve peak performance when you use the [SQL .NET SDK](sql-api-sdk-dotnet-standard.md).
-
-If you're trying to improve your database performance, consider the options presented in the following sections.
-
-## Hosting recommendations
-
-**Turn on server-side garbage collection**
-
-Reducing the frequency of garbage collection can help in some cases. In .NET, set [gcServer](/dotnet/core/run-time-config/garbage-collector#flavors-of-garbage-collection) to `true`.
-
-**Scale out your client workload**
-
-If you're testing at high throughput levels, or at rates that are greater than 50,000 Request Units per second (RU/s), the client application could become a workload bottleneck. This is because the machine might cap out on CPU or network utilization. If you reach this point, you can continue to push the Azure Cosmos DB account further by scaling out your client applications across multiple servers.
-
-> [!NOTE]
-> High CPU usage can cause increased latency and request timeout exceptions.
-
-## <a id="metadata-operations"></a> Metadata operations
-
-Don't verify that a database or container exists by calling `Create...IfNotExistsAsync` or `Read...Async` in the hot path or before every item operation. Do this validation only on application startup, and only if you expect these resources to be deleted (otherwise it's not needed). These metadata operations generate extra end-to-end latency, have no SLA, and have their own separate [limitations](./troubleshoot-request-rate-too-large.md#rate-limiting-on-metadata-requests) that don't scale like data operations.
-
-## <a id="logging-and-tracing"></a> Logging and tracing
-
-Some environments have the [.NET DefaultTraceListener](/dotnet/api/system.diagnostics.defaulttracelistener) enabled. The DefaultTraceListener poses performance issues on production environments causing high CPU and I/O bottlenecks. Check and make sure that the DefaultTraceListener is disabled for your application by removing it from the [TraceListeners](/dotnet/framework/debug-trace-profile/how-to-create-and-initialize-trace-listeners) on production environments.
-
-The latest SDK versions (greater than 3.23.0) automatically remove it when they detect it. With older versions, you can remove it as follows:
-
-# [.NET 6 / .NET Core](#tab/trace-net-core)
-
-```csharp
-if (!Debugger.IsAttached)
-{
- Type defaultTrace = Type.GetType("Microsoft.Azure.Cosmos.Core.Trace.DefaultTrace,Microsoft.Azure.Cosmos.Direct");
- TraceSource traceSource = (TraceSource)defaultTrace.GetProperty("TraceSource").GetValue(null);
- traceSource.Listeners.Remove("Default");
- // Add your own trace listeners
-}
-```
-
-# [.NET Framework](#tab/trace-net-fx)
-
-Edit your `app.config` or `web.config` files:
-
-```xml
-<configuration>
- <system.diagnostics>
- <sources>
- <source name="DocDBTrace" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >
- <listeners>
- <remove name="Default" />
- <!--Add your own trace listeners-->
- <add name="myListener" ... />
- </listeners>
- </source>
- </sources>
- </system.diagnostics>
-</configuration>
-```
---
-## Networking
-<a id="direct-connection"></a>
-
-**Connection policy: Use direct connection mode**
-
-.NET V3 SDK default connection mode is direct with TCP protocol. You configure the connection mode when you create the `CosmosClient` instance in `CosmosClientOptions`. To learn more about different connectivity options, see the [connectivity modes](sql-sdk-connection-modes.md) article.
-
-```csharp
-string connectionString = "<your-account-connection-string>";
-CosmosClient client = new CosmosClient(connectionString,
-new CosmosClientOptions
-{
- ConnectionMode = ConnectionMode.Gateway // ConnectionMode.Direct is the default
-});
-```
-
-**Ephemeral port exhaustion**
-
-If you see a high connection volume or high port usage on your instances, first verify that your client instances are singletons. In other words, the client instances should be unique for the lifetime of the application.
-
-When it's running on the TCP protocol, the client optimizes for latency by using long-lived connections. This is in contrast with the HTTPS protocol, which terminates connections after two minutes of inactivity.
-
-In scenarios where you have sparse access, and if you notice a higher connection count when compared to Gateway mode access, you can:
-
-* Configure the [CosmosClientOptions.PortReuseMode](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode) property to `PrivatePortPool` (effective with framework versions 4.6.1 and later and .NET Core versions 2.0 and later). This property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints.
-* Configure the [CosmosClientOptions.IdleTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout) property as greater than or equal to 10 minutes. The recommended values are from 20 minutes to 24 hours. Both options are shown in the sketch below.
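-
-A minimal sketch that applies both options; the connection string is a placeholder:
-
-```csharp
-CosmosClient client = new CosmosClient(connectionString,
-    new CosmosClientOptions
-    {
-        // Use a small private pool of ephemeral ports per destination endpoint.
-        PortReuseMode = PortReuseMode.PrivatePortPool,
-        // Keep idle TCP connections alive longer for sparse access patterns.
-        IdleTcpConnectionTimeout = TimeSpan.FromMinutes(20)
-    });
-```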
-
-<a id="same-region"></a>
-
-**For performance, collocate clients in the same Azure region**
-
-When possible, place any applications that call Azure Cosmos DB in the same region as the Azure Cosmos DB database. Here's an approximate comparison: calls to Azure Cosmos DB within the same region finish within 1 millisecond (ms) to 2 ms, but the latency between the West and East coast of the US is more than 50 ms. This latency can vary from request to request, depending on the route taken by the request as it passes from the client to the Azure datacenter boundary.
-
-You can get the lowest possible latency by ensuring that the calling application is located within the same Azure region as the provisioned Azure Cosmos DB endpoint. For a list of available regions, see [Azure regions](https://azure.microsoft.com/regions/#services).
- <a id="increase-threads"></a>
-
-**Increase the number of threads/tasks**
-
-Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of concurrency of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the .NET [Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB.
-
-**Enable accelerated networking**
-
-To reduce latency and CPU jitter, we recommend that you enable accelerated networking on your client virtual machines. For more information, see [Create a Windows virtual machine with accelerated networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) or [Create a Linux virtual machine with accelerated networking](../../virtual-network/create-vm-accelerated-networking-cli.md).
-
-## <a id="sdk-usage"></a> SDK usage
-
-**Install the most recent SDK**
-
-The Azure Cosmos DB SDKs are constantly being improved to provide the best performance. To determine the most recent SDK and review improvements, see [Azure Cosmos DB SDK](sql-api-sdk-dotnet-standard.md).
-
-**Use stream APIs**
-
-[.NET SDK V3](https://github.com/Azure/azure-cosmos-dotnet-v3) contains stream APIs that can receive and return data without serializing.
-
-Middle-tier applications that don't consume responses directly from the SDK but relay them to other application tiers can benefit from the stream APIs. For examples of stream handling, see the [item management](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement) samples.
-
-**Use a singleton Azure Cosmos DB client for the lifetime of your application**
-
-Each `CosmosClient` instance is thread-safe and performs efficient connection management and address caching when it operates in Direct mode. To allow efficient connection management and better SDK client performance, we recommend that you use a single instance per `AppDomain` for the lifetime of the application.
-
-When you're working on Azure Functions, instances should also follow the existing [guidelines](../../azure-functions/manage-connections.md#static-clients) and maintain a single instance.
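-
-For example, in an ASP.NET Core app (assuming .NET 6 minimal hosting and a connection string named `CosmosDb`), a single shared instance can be registered with dependency injection:
-
-```csharp
-// Program.cs: register one CosmosClient for the whole application.
-builder.Services.AddSingleton(serviceProvider =>
-    new CosmosClient(builder.Configuration.GetConnectionString("CosmosDb")));
-```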
-
-**Avoid blocking calls**
-
-An application that uses the Azure Cosmos DB SDK should be designed to process many requests simultaneously. Asynchronous APIs allow a small pool of threads to handle thousands of concurrent requests by not waiting on blocking calls. Rather than waiting on a long-running synchronous task to complete, the thread can work on another request.
-
-A common performance problem in apps using the Cosmos DB SDK is blocking calls that could be asynchronous. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times.
-
-**Do not**:
-
-* Block asynchronous execution by calling [Task.Wait](/dotnet/api/system.threading.tasks.task.wait) or [Task.Result](/dotnet/api/system.threading.tasks.task-1.result).
-* Use [Task.Run](/dotnet/api/system.threading.tasks.task.run) to make a synchronous API asynchronous.
-* Acquire locks in common code paths. Cosmos DB .NET SDK is most performant when architected to run code in parallel.
-* Call [Task.Run](/dotnet/api/system.threading.tasks.task.run) and immediately await it. ASP.NET Core already runs app code on normal Thread Pool threads, so calling Task.Run only results in extra unnecessary Thread Pool scheduling. Even if the scheduled code would block a thread, Task.Run does not prevent that.
-* Use `ToList()` on `Container.GetItemLinqQueryable<T>()`; it uses blocking calls to synchronously drain the query. Use [ToFeedIterator()](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/e2029f2f4854c0e4decd399c35e69ef799db9f35/Microsoft.Azure.Cosmos/src/Resource/Container/Container.cs#L1143) to drain the query asynchronously instead, as shown in the sketch after these lists.
-
-**Do**:
-
-* Call the Cosmos DB .NET APIs asynchronously.
-* Make the entire call stack asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns.
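-
-As a minimal sketch (the `Book` type and its properties are illustrative assumptions), draining a LINQ query asynchronously looks like this:
-
-```csharp
-using FeedIterator<Book> iterator = container.GetItemLinqQueryable<Book>()
-    .Where(b => b.Price > 10)
-    .ToFeedIterator();
-
-while (iterator.HasMoreResults)
-{
-    FeedResponse<Book> page = await iterator.ReadNextAsync();
-    foreach (Book book in page)
-    {
-        // Process each result without blocking a thread.
-    }
-}
-```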
-
-A profiler, such as [PerfView](https://github.com/Microsoft/perfview), can be used to find threads frequently added to the [Thread Pool](/windows/desktop/procthread/thread-pools). The `Microsoft-Windows-DotNETRuntime/ThreadPoolWorkerThread/Start` event indicates a thread added to the thread pool.
-**Disable content response on write operations**
-
-For workloads that have heavy create payloads, set the `EnableContentResponseOnWrite` request option to `false`. The service will no longer return the created or updated resource to the SDK. Normally, because the application has the object that's being created, it doesn't need the service to return it. The header values are still accessible, like a request charge. Disabling the content response can help improve performance, because the SDK no longer needs to allocate memory or serialize the body of the response. It also reduces the network bandwidth usage to further help performance.
-
-```csharp
-ItemRequestOptions requestOptions = new ItemRequestOptions() { EnableContentResponseOnWrite = false };
-ItemResponse<Book> itemResponse = await this.container.CreateItemAsync<Book>(book, new PartitionKey(book.pk), requestOptions);
-// itemResponse.Resource will be null because the content response is disabled
-Book createdBook = itemResponse.Resource;
-```
-
-**Enable Bulk to optimize for throughput instead of latency**
-
-Enable *Bulk* for scenarios where the workload requires a large amount of throughput, and latency is not as important. For more information about how to enable the Bulk feature, and to learn which scenarios it should be used for, see [Introduction to Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk).
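-
-A hedged sketch of the pattern (the database and container IDs, the `Book` type, its `pk` property, and the `books` collection are illustrative assumptions):
-
-```csharp
-CosmosClient client = new CosmosClient(connectionString,
-    new CosmosClientOptions { AllowBulkExecution = true });
-Container container = client.GetContainer("databaseId", "containerId");
-
-// Dispatch many concurrent operations; the SDK groups them into batches internally.
-List<Task> tasks = new List<Task>();
-foreach (Book book in books)
-{
-    tasks.Add(container.CreateItemAsync(book, new PartitionKey(book.pk)));
-}
-await Task.WhenAll(tasks);
-```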
-
-<a id="max-connection"></a>**Increase System.Net MaxConnections per host when you use Gateway mode**
-
-Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for [ServicePointManager.DefaultConnectionLimit](/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit) is 50. To change the value, you can set [`CosmosClientOptions.GatewayModeMaxConnectionLimit`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.gatewaymodemaxconnectionlimit) to a higher value.
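-
-For example, a sketch of both approaches (the values are illustrative, not recommendations):
-
-```csharp
-// Raise the process-wide HTTP connection limit (classic .NET Framework HTTP stack).
-ServicePointManager.DefaultConnectionLimit = 1000;
-
-// Or set the limit per client through the v3 SDK options when using Gateway mode.
-CosmosClient client = new CosmosClient(connectionString,
-    new CosmosClientOptions
-    {
-        ConnectionMode = ConnectionMode.Gateway,
-        GatewayModeMaxConnectionLimit = 1000
-    });
-```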
-
-**Increase the number of threads/tasks**
-
-See [Increase the number of threads/tasks](#increase-threads) in the Networking section of this article.
-
-## Query operations
-
-For query operations see the [performance tips for queries](performance-tips-query-sdk.md?tabs=v3&pivots=programming-language-csharp).
-
-## <a id="indexing-policy"></a> Indexing policy
-
-**Exclude unused paths from indexing for faster writes**
-
-The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths (IndexingPolicy.IncludedPaths and IndexingPolicy.ExcludedPaths).
-
-Indexing only the paths you need can improve write performance, reduce RU charges on write operations, and reduce index storage for scenarios in which the query patterns are known beforehand. This is because indexing costs correlate directly to the number of unique paths indexed. For example, the following code shows how to exclude an entire section of the documents (a subtree) from indexing by using the "*" wildcard:
-
-```csharp
-var containerProperties = new ContainerProperties(id: "excludedPathCollection", partitionKeyPath: "/pk" );
-containerProperties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
-containerProperties.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/nonIndexedContent/*" });
-Container container = await this.cosmosDatabase.CreateContainerAsync(containerProperties);
-```
-
-For more information, see [Azure Cosmos DB indexing policies](../index-policy.md).
-
-## Throughput
-<a id="measure-rus"></a>
-
-**Measure and tune for lower RU/s usage**
-
-Azure Cosmos DB offers a rich set of database operations. These operations include relational and hierarchical queries with user-defined functions (UDFs), stored procedures, and triggers, all operating on the documents within a database collection.
-
-The costs associated with each of these operations vary depending on the CPU, IO, and memory that are required to complete the operation. Instead of thinking about and managing hardware resources, you can think of a Request Unit as a single measure for the resources that are required to perform various database operations and service an application request.
-
-Throughput is provisioned based on the number of [Request Units](../request-units.md) set for each container. Request Unit consumption is evaluated as a units-per-second rate. Applications that exceed the provisioned Request Unit rate for their container are limited until the rate drops below the provisioned level for the container. If your application requires a higher level of throughput, you can increase your throughput by provisioning additional Request Units.
-
-The complexity of a query affects how many Request Units are consumed for an operation. The number of predicates, the nature of the predicates, the number of UDFs, and the size of the source dataset all influence the cost of query operations.
-
-To measure the overhead of any operation (create, update, or delete), inspect the [x-ms-request-charge](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers) header (or the equivalent `RequestCharge` property in `ResourceResponse<T>` or `FeedResponse<T>` in the .NET SDK) to measure the number of Request Units consumed by the operations:
-
-```csharp
-// Measure the performance (Request Units) of writes
-ItemResponse<Book> response = await container.CreateItemAsync<Book>(myBook, new PartitionKey(myBook.PkValue));
-Console.WriteLine("Insert of item consumed {0} request units", response.RequestCharge);
-// Measure the performance (Request Units) of queries
-FeedIterator<Book> queryable = container.GetItemQueryIterator<Book>(queryString);
-while (queryable.HasMoreResults)
-{
-    FeedResponse<Book> queryResponse = await queryable.ReadNextAsync();
-    Console.WriteLine("Query batch consumed {0} request units", queryResponse.RequestCharge);
-}
-```
-
-The request charge that's returned in this header is a fraction of your provisioned throughput. For example, if you have 2,000 RU/s provisioned and the preceding query returns 1,000 1-KB documents, the cost of the operation is 1,000. So, within one second, the server honors only two such requests before it rate-limits later requests. For more information, see [Request Units](../request-units.md) and the [Request Unit calculator](https://www.documentdb.com/capacityplanner).
-<a id="429"></a>
-
-**Handle rate limiting/request rate too large**
-
-When a client attempts to exceed the reserved throughput for an account, there's no performance degradation at the server and no use of throughput capacity beyond the reserved level. The server preemptively ends the request with RequestRateTooLarge (HTTP status code 429). It returns an [x-ms-retry-after-ms](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers) header that indicates the amount of time, in milliseconds, that the user must wait before attempting the request again.
-
-```xml
- HTTP Status 429,
- Status Line: RequestRateTooLarge
- x-ms-retry-after-ms :100
-```
-
-The SDKs all implicitly catch this response, respect the server-specified retry-after header, and retry the request. Unless your account is being accessed concurrently by multiple clients, the next retry will succeed.
-
-If you have more than one client cumulatively operating consistently above the request rate, the default retry count that's currently set to 9 internally by the client might not suffice. In this case, the client throws a CosmosException with status code 429 to the application.
-
-You can change the default retry count by setting `MaxRetryAttemptsOnRateLimitedRequests`, and the maximum wait time by setting `MaxRetryWaitTimeOnRateLimitedRequests`, on the `CosmosClientOptions` instance. By default, the CosmosException with status code 429 is returned after a cumulative wait time of 30 seconds if the request continues to operate above the request rate. This error is returned even when the current retry count is less than the maximum retry count, whether the current value is the default of 9 or a user-defined value.
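-
-For example (the values are illustrative assumptions, not recommendations):
-
-```csharp
-CosmosClient client = new CosmosClient(connectionString,
-    new CosmosClientOptions
-    {
-        // The defaults are 9 retries and a 30-second cumulative wait.
-        MaxRetryAttemptsOnRateLimitedRequests = 15,
-        MaxRetryWaitTimeOnRateLimitedRequests = TimeSpan.FromSeconds(60)
-    });
-```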
-
-The automated retry behavior helps improve resiliency and usability for most applications. But it might not be the best behavior when you're doing performance benchmarks, especially when you're measuring latency. The client-observed latency will spike if the experiment hits the server throttle and causes the client SDK to silently retry. To avoid latency spikes during performance experiments, measure the charge that's returned by each operation, and ensure that requests are operating below the reserved request rate.
-
-For more information, see [Request Units](../request-units.md).
-
-**For higher throughput, design for smaller documents**
-
-The request charge (that is, the request-processing cost) of a specified operation correlates directly to the size of the document. Operations on large documents cost more than operations on small documents.
-
-## Next steps
-For a sample application that's used to evaluate Azure Cosmos DB for high-performance scenarios on a few client machines, see [Performance and scale testing with Azure Cosmos DB](performance-testing.md).
-
-To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
cosmos-db Performance Tips Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips-java-sdk-v4-sql.md
- Title: Performance tips for Azure Cosmos DB Java SDK v4
-description: Learn client configuration options to improve Azure Cosmos database performance for Java SDK v4
---- Previously updated : 04/22/2022-----
-# Performance tips for Azure Cosmos DB Java SDK v4
-
-> [!div class="op_single_selector"]
-> * [Java SDK v4](performance-tips-java-sdk-v4-sql.md)
-> * [Async Java SDK v2](performance-tips-async-java.md)
-> * [Sync Java SDK v2](performance-tips-java.md)
-> * [.NET SDK v3](performance-tips-dotnet-sdk-v3-sql.md)
-> * [.NET SDK v2](performance-tips.md)
->
-
-> [!IMPORTANT]
-> The performance tips in this article are for Azure Cosmos DB Java SDK v4 only. Please view the Azure Cosmos DB Java SDK v4 [Release notes](sql-api-sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4-sql.md) for more information. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
-
-Azure Cosmos DB is a fast and flexible distributed database that scales seamlessly with guaranteed latency and throughput. You do not have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call or SDK method call. However, because Azure Cosmos DB is accessed via network calls there are client-side optimizations you can make to achieve peak performance when using Azure Cosmos DB Java SDK v4.
-
-So if you're asking "How can I improve my database performance?" consider the following options:
-
-## Networking
-
-* **Connection mode: Use Direct mode**
-
-Java SDK default connection mode is direct. You can configure the connection mode in the client builder using the *directMode()* or *gatewayMode()* methods, as shown below. To configure either mode with default settings, call either method without arguments. Otherwise, pass a configuration settings class instance as the argument (*DirectConnectionConfig* for *directMode()*, *GatewayConnectionConfig* for *gatewayMode()*). To learn more about different connectivity options, see the [connectivity modes](sql-sdk-connection-modes.md) article.
-
-# [Async](#tab/api-async)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceClientConnectionModeAsync)]
-
-# [Sync](#tab/api-sync)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceClientConnectionModeSync)]
-
-
-
-The *directMode()* method has an additional override, for the following reason. Control plane operations such as database and container CRUD *always* utilize Gateway mode; when the user has configured Direct mode for data plane operations, control plane operations use default Gateway mode settings. This suits most users. However, users who want Direct mode for data plane operations as well as tunability of control plane Gateway mode parameters can use the following *directMode()* override:
-
-# [Async](#tab/api-async)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceClientDirectOverrideAsync)]
-
-# [Sync](#tab/api-sync)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceClientDirectOverrideSync)]
-
-
-
-<a name="collocate-clients"></a>
-* **Collocate clients in same Azure region for performance**
-<a id="same-region"></a>
-
-When possible, place any applications calling Azure Cosmos DB in the same region as the Azure Cosmos database. For an approximate comparison, calls to Azure Cosmos DB within the same region complete within 1-2 ms, but the latency between the West and East coast of the US is >50 ms. This latency can vary from request to request depending on the route taken by the request as it passes from the client to the Azure datacenter boundary. The lowest possible latency is achieved by ensuring the calling application is located within the same Azure region as the provisioned Azure Cosmos DB endpoint. For a list of available regions, see [Azure Regions](https://azure.microsoft.com/regions/#services).
-An app that interacts with a multi-region Azure Cosmos DB account needs to configure
-[preferred locations](tutorial-global-distribution-sql-api.md#preferred-locations) to ensure that requests are going to a collocated region.
-
-* **Enable Accelerated Networking on your Azure VM for lower latency.**
-
-It is recommended that you follow the instructions to enable Accelerated Networking in your [Windows (click for instructions)](../../virtual-network/create-vm-accelerated-networking-powershell.md) or [Linux (click for instructions)](../../virtual-network/create-vm-accelerated-networking-cli.md) Azure VM, in order to maximize performance.
-
-Without accelerated networking, IO that transits between your Azure VM and other Azure resources may be unnecessarily routed through a host and virtual switch situated between the VM and its network card. Having the host and virtual switch inline in the datapath not only increases latency and jitter in the communication channel, it also steals CPU cycles from the VM. With accelerated networking, the VM interfaces directly with the NIC without intermediaries; any network policy details which were being handled by the host and virtual switch are now handled in hardware at the NIC; the host and virtual switch are bypassed. Generally you can expect lower latency and higher throughput, as well as more *consistent* latency and decreased CPU utilization when you enable accelerated networking.
-
-Limitations: accelerated networking must be supported on the VM OS, and can only be enabled when the VM is stopped and deallocated. The VM must be deployed with Azure Resource Manager.
-
-Please see the [Windows](../../virtual-network/create-vm-accelerated-networking-powershell.md) and [Linux](../../virtual-network/create-vm-accelerated-networking-cli.md) instructions for more details.
-
-## SDK usage
-* **Install the most recent SDK**
-
-The Azure Cosmos DB SDKs are constantly being improved to provide the best performance. See the [Azure Cosmos DB SDK](sql-api-sdk-async-java.md) pages to determine the most recent SDK and review improvements.
-
-* <a id="max-connection"></a> **Use a singleton Azure Cosmos DB client for the lifetime of your application**
-
-Each Azure Cosmos DB client instance is thread-safe and performs efficient connection management and address caching. To allow efficient connection management and better performance by the Azure Cosmos DB client, it is recommended to use a single instance of the Azure Cosmos DB client per AppDomain for the lifetime of the application.
-
-* <a id="override-default-consistency-javav4"></a> **Use the lowest consistency level required for your application**
-
-When you create a *CosmosClient*, the default consistency used if not explicitly set is *Session*. If *Session* consistency is not required by your application logic, set the *Consistency* to *Eventual*. Note: it is recommended to use at least *Session* consistency in applications employing the Azure Cosmos DB Change Feed processor.
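-
-A minimal sketch (the endpoint and key are placeholders):
-
-```java
-CosmosAsyncClient client = new CosmosClientBuilder()
-    .endpoint("<your-account-endpoint>")
-    .key("<your-account-key>")
-    // Relax to EVENTUAL only when your application logic doesn't need session guarantees.
-    .consistencyLevel(ConsistencyLevel.EVENTUAL)
-    .buildAsyncClient();
-```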
-
-* **Use Async API to max out provisioned throughput**
-
-Azure Cosmos DB Java SDK v4 bundles two APIs, Sync and Async. Roughly speaking, the Async API implements SDK functionality, whereas the Sync API is a thin wrapper that makes blocking calls to the Async API. This stands in contrast to the older Azure Cosmos DB Async Java SDK v2, which was Async-only, and to the older Azure Cosmos DB Sync Java SDK v2, which was Sync-only and had a completely separate implementation.
-
-The choice of API is determined during client initialization; a *CosmosAsyncClient* supports Async API while a *CosmosClient* supports Sync API.
-
-The Async API implements non-blocking IO and is the optimal choice if your goal is to max out throughput when issuing requests to Azure Cosmos DB.
-
-Using Sync API can be the right choice if you want or need an API which blocks on the response to each request, or if synchronous operation is the dominant paradigm in your application. For example, you might want the Sync API when you are persisting data to Azure Cosmos DB in a microservices application, provided throughput is not critical.
-
-Just be aware that Sync API throughput degrades with increasing request response-time, whereas the Async API can saturate the full bandwidth capabilities of your hardware.
-
-Geographic collocation can give you higher and more consistent throughput when using Sync API (see [Collocate clients in same Azure region for performance](#collocate-clients)) but still is not expected to exceed Async API attainable throughput.
-
-Some users may also be unfamiliar with [Project Reactor](https://projectreactor.io/), the Reactive Streams framework used to implement Azure Cosmos DB Java SDK v4 Async API. If this is a concern, we recommend you read our introductory [Reactor Pattern Guide](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-pattern-guide.md) and then take a look at this [Introduction to Reactive Programming](https://tech.io/playgrounds/929/reactive-programming-with-reactor-3/Intro) in order to familiarize yourself. If you have already used Azure Cosmos DB with an Async interface, and the SDK you used was Azure Cosmos DB Async Java SDK v2, then you may be familiar with [ReactiveX](http://reactivex.io/)/[RxJava](https://github.com/ReactiveX/RxJava) but be unsure what has changed in Project Reactor. In that case, please take a look at our [Reactor vs. RxJava Guide](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) to become familiarized.
-
-The following code snippets show how to initialize your Azure Cosmos DB client for Async API or Sync API operation, respectively:
-
-# [Async](#tab/api-async)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceClientAsync)]
-
-# [Sync](#tab/api-sync)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceClientSync)]
-
-
-
-* **Tuning ConnectionPolicy**
-
-By default, Direct mode Cosmos DB requests are made over TCP when using Azure Cosmos DB Java SDK v4. Internally Direct mode uses a special architecture to dynamically manage network resources and get the best performance.
-
-In Azure Cosmos DB Java SDK v4, Direct mode is the best choice to improve database performance with most workloads.
-
-* ***Overview of Direct mode***
-<a id="direct-connection"></a>
-The client-side architecture employed in Direct mode enables predictable network utilization and multiplexed access to Azure Cosmos DB replicas. The diagram above shows how Direct mode routes client requests to replicas in the Cosmos DB backend. The Direct mode architecture allocates up to 130 **Channels** on the client side per DB replica. A Channel is a TCP connection preceded by a request buffer, which is 30 requests deep. The Channels belonging to a replica are dynamically allocated as needed by the replica's **Service Endpoint**. When the user issues a request in Direct mode, the **TransportClient** routes the request to the proper service endpoint based on the partition key. The **Request Queue** buffers requests before the Service Endpoint.
-
-* ***Configuration options for Direct mode***
-
-If non-default Direct mode behavior is desired, create a *DirectConnectionConfig* instance and customize its properties, then pass the customized instance to the *directMode()* method in the Azure Cosmos DB client builder, as sketched after the table below.
-
-These configuration settings control the behavior of the underlying Direct mode architecture discussed above.
-
-As a first step, use the recommended configuration settings below. These *DirectConnectionConfig* options are advanced configuration settings which can affect SDK performance in unexpected ways; we recommend that you avoid modifying them unless you fully understand the tradeoffs and it is absolutely necessary. Please contact the [Azure Cosmos DB team](mailto:CosmosDBPerformanceSupport@service.microsoft.com) if you run into issues on this particular topic.
-
-| Configuration option | Default |
-| :--: | :--: |
-| idleConnectionTimeout | "PT0" |
-| maxConnectionsPerEndpoint | "130" |
-| connectTimeout | "PT5S" |
-| idleEndpointTimeout | "PT1H" |
-| maxRequestsPerConnection | "30" |
-
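-The following sketch shows how a customized *DirectConnectionConfig* is passed to *directMode()*; the endpoint and key are placeholders, the changed value is illustrative, and `java.time.Duration` is assumed to be imported:
-
-```java
-DirectConnectionConfig directConfig = DirectConnectionConfig.getDefaultConfig()
-    // Example of a deliberate, well-understood change; the defaults are recommended.
-    .setConnectTimeout(Duration.ofSeconds(5));
-
-CosmosAsyncClient client = new CosmosClientBuilder()
-    .endpoint("<your-account-endpoint>")
-    .key("<your-account-key>")
-    .directMode(directConfig)
-    .buildAsyncClient();
-```
-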
-* **Scale out your client-workload**
-
-If you are testing at high throughput levels, the client application may become the bottleneck due to the machine capping out on CPU or network utilization. If you reach this point, you can continue to push the Azure Cosmos DB account further by scaling out your client applications across multiple servers.
-
-A good rule of thumb is not to exceed >50% CPU utilization on any given server, to keep latency low.
-
-<a id="tune-page-size"></a>
-
-* **Use Appropriate Scheduler (Avoid stealing Event loop IO Netty threads)**
-
-The asynchronous functionality of Azure Cosmos DB Java SDK is based on [netty](https://netty.io/) non-blocking IO. The SDK uses a fixed number of IO netty event loop threads (as many as your machine has CPU cores) for executing IO operations. The Flux returned by the API emits the result on one of the shared IO event loop netty threads, so it is important not to block them. Doing CPU-intensive work or a blocking operation on an IO event loop netty thread may cause a deadlock or significantly reduce SDK throughput.
-
-For example, the following code executes CPU-intensive work on the event loop IO netty thread:
-<a id="java4-noscheduler"></a>
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceNeedsSchedulerAsync)]
-
-After the result is received, if you want to do CPU-intensive work on it, avoid doing so on the event loop IO netty thread. Instead, provide your own Scheduler with its own threads for running your work, as shown below (requires `import reactor.core.scheduler.Schedulers`).
-
-<a id="java4-scheduler"></a>
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceAddSchedulerAsync)]
-
-Based on the type of your work, use the appropriate existing Reactor Scheduler. For details, see [``Schedulers``](https://projectreactor.io/docs/core/release/api/reactor/core/scheduler/Schedulers.html).
-
-For more information on Azure Cosmos DB Java SDK v4, please look at the [Cosmos DB directory of the Azure SDK for Java monorepo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-cosmos).
-
-* **Optimize logging settings in your application**
-
-For a variety of reasons, you may want or need to add logging in a thread which is generating high request throughput. If your goal is to fully saturate a container's provisioned throughput with requests generated by this thread, logging optimizations can greatly improve performance.
-
-* ***Configure an async logger***
-
-The latency of a synchronous logger necessarily factors into the overall latency calculation of your request-generating thread. An async logger such as [log4j2](https://logging.apache.org/log4j/log4j-2.3/manual/async.html) is recommended to decouple logging overhead from your high-performance application threads.
-
-* ***Disable netty's logging***
-
-Netty library logging is chatty and needs to be turned off (suppressing it in the logging configuration may not be enough) to avoid additional CPU costs. If you are not in debugging mode, disable netty's logging altogether. For example, if you are using log4j, add the following line to your codebase to remove the additional CPU costs incurred by ``org.apache.log4j.Category.callAppenders()`` from netty:
-
-```java
-org.apache.log4j.Logger.getLogger("io.netty").setLevel(org.apache.log4j.Level.OFF);
-```
-
-* **OS Open files Resource Limit**
-
-Some Linux systems (like Red Hat) have an upper limit on the number of open files and so the total number of connections. Run the following to view the current limits:
-
-```bash
-ulimit -a
-```
-
-The number of open files (nofile) needs to be large enough to have enough room for your configured connection pool size and other open files by the OS. It can be modified to allow for a larger connection pool size.
-
-Open the limits.conf file:
-
-```bash
-vim /etc/security/limits.conf
-```
-
-Add/modify the following lines:
-
-```
-* - nofile 100000
-```
-
-* **Specify partition key in point writes**
-
-To improve the performance of point writes, specify the item's partition key in the point write API call, as shown below:
-
-# [Async](#tab/api-async)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceNoPKAsync)]
-
-# [Sync](#tab/api-sync)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceNoPKSync)]
-
-
-
-rather than providing only the item instance, as shown below:
-
-# [Async](#tab/api-async)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceAddPKAsync)]
-
-# [Sync](#tab/api-sync)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceAddPKSync)]
-
-
-
-The latter is supported but will add latency to your application; the SDK must parse the item and extract the partition key.
-
-## Query operations
-
-For query operations see the [performance tips for queries](performance-tips-query-sdk.md?pivots=programming-language-java).
-
-## <a id="java4-indexing"></a><a id="indexing-policy"></a> Indexing policy
-
-* **Exclude unused paths from indexing for faster writes**
-
-Azure Cosmos DB's indexing policy allows you to specify which document paths to include or exclude from indexing by leveraging Indexing Paths (setIncludedPaths and setExcludedPaths). The use of indexing paths can offer improved write performance and lower index storage for scenarios in which the query patterns are known beforehand, as indexing costs are directly correlated to the number of unique paths indexed. For example, the following code shows how to include and exclude entire sections of the documents (also known as a subtree) from indexing using the "*" wildcard.
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=MigrateIndexingAsync)]
-
-For more information, see [Azure Cosmos DB indexing policies](../index-policy.md).
-
-## Throughput
-<a id="measure-rus"></a>
-
-* **Measure and tune for lower request units/second usage**
-
-Azure Cosmos DB offers a rich set of database operations including relational and hierarchical queries with UDFs, stored procedures, and triggers, all operating on the documents within a database collection. The cost associated with each of these operations varies based on the CPU, IO, and memory required to complete the operation. Instead of thinking about and managing hardware resources, you can think of a request unit (RU) as a single measure for the resources required to perform various database operations and service an application request.
-
-Throughput is provisioned based on the number of [request units](../request-units.md) set for each container. Request unit consumption is evaluated as a rate per second. Applications that exceed the provisioned request unit rate for their container are limited until the rate drops below the provisioned level for the container. If your application requires a higher level of throughput, you can increase your throughput by provisioning additional request units.
-
-The complexity of a query impacts how many request units are consumed for an operation. The number of predicates, nature of the predicates, number of UDFs, and the size of the source data set all influence the cost of query operations.
-
-To measure the overhead of any operation (create, update, or delete), inspect the [x-ms-request-charge](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) header to measure the number of request units consumed by these operations. You can also look at the equivalent RequestCharge property in ResourceResponse\<T> or FeedResponse\<T>.
-
-# [Async](#tab/api-async)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceRequestChargeAsync)]
-
-# [Sync](#tab/api-sync)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceRequestChargeSync)]
-
-
-
-The request charge returned in this header is a fraction of your provisioned throughput. For example, if you have 2000 RU/s provisioned, and if the preceding query returns 1000 1KB-documents, the cost of the operation is 1000. As such, within one second, the server honors only two such requests before rate limiting subsequent requests. For more information, see [Request units](../request-units.md) and the [request unit calculator](https://www.documentdb.com/capacityplanner).
-
-<a id="429"></a>
-* **Handle rate limiting/request rate too large**
-
-When a client attempts to exceed the reserved throughput for an account, there is no performance degradation at the server and no use of throughput capacity beyond the reserved level. The server will preemptively end the request with RequestRateTooLarge (HTTP status code 429) and return the [x-ms-retry-after-ms](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) header indicating the amount of time, in milliseconds, that the user must wait before reattempting the request.
-
-```xml
-HTTP Status 429,
-Status Line: RequestRateTooLarge
-x-ms-retry-after-ms :100
-```
-
-The SDKs all implicitly catch this response, respect the server-specified retry-after header, and retry the request. Unless your account is being accessed concurrently by multiple clients, the next retry will succeed.
-
-If you have more than one client cumulatively operating consistently above the request rate, the default retry count (currently set to 9 internally by the client) may not suffice. In this case, the client throws a *CosmosClientException* with status code 429 to the application. You can change the default retry count by using setRetryOptions on the ConnectionPolicy instance. By default, the *CosmosClientException* with status code 429 is returned after a cumulative wait time of 30 seconds if the request continues to operate above the request rate. This occurs even when the current retry count is less than the maximum retry count, whether the default of 9 or a user-defined value.
-
-While the automated retry behavior helps to improve resiliency and usability for most applications, it can get in the way when you're running performance benchmarks, especially when measuring latency. The client-observed latency will spike if the experiment hits the server throttle and causes the client SDK to silently retry. To avoid latency spikes during performance experiments, measure the charge returned by each operation and ensure that requests are operating below the reserved request rate. For more information, see [Request units](../request-units.md).
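-
-Depending on your Java SDK v4 version, the throttling retry settings are typically exposed through `ThrottlingRetryOptions` on `CosmosClientBuilder` rather than on a connection policy object. The following is a hedged sketch with illustrative values; verify the exact option names against the API of the SDK version you use.
-
-```java
-import com.azure.cosmos.CosmosClient;
-import com.azure.cosmos.CosmosClientBuilder;
-import com.azure.cosmos.ThrottlingRetryOptions;
-
-import java.time.Duration;
-
-public class ThrottlingRetryTuning {
-    public static CosmosClient buildClient(String endpoint, String key) {
-        // Illustrative values: allow more 429 retries and a longer cumulative
-        // wait before the exception surfaces to the application.
-        ThrottlingRetryOptions retryOptions = new ThrottlingRetryOptions()
-            .setMaxRetryAttemptsOnThrottledRequests(19)
-            .setMaxRetryWaitTime(Duration.ofSeconds(60));
-
-        return new CosmosClientBuilder()
-            .endpoint(endpoint)
-            .key(key)
-            .throttlingRetryOptions(retryOptions)
-            .buildClient();
-    }
-}
-```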
-
-* **Design for smaller documents for higher throughput**
-
-The request charge (the request processing cost) of a given operation is directly correlated to the size of the document. Operations on large documents cost more than operations on small documents. Ideally, architect your application and workflows so that your item size is about 1 KB, or a similar order of magnitude. For latency-sensitive applications, avoid large items; multi-MB documents will slow down your application.
-
-## Next steps
-
-To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Performance Tips Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips-java.md
- Title: Performance tips for Azure Cosmos DB Sync Java SDK v2
-description: Learn client configuration options to improve Azure Cosmos database performance for Sync Java SDK v2
- Previously updated : 05/11/2020
-# Performance tips for Azure Cosmos DB Sync Java SDK v2
-
-> [!div class="op_single_selector"]
-> * [Java SDK v4](performance-tips-java-sdk-v4-sql.md)
-> * [Async Java SDK v2](performance-tips-async-java.md)
-> * [Sync Java SDK v2](performance-tips-java.md)
-> * [.NET SDK v3](performance-tips-dotnet-sdk-v3-sql.md)
-> * [.NET SDK v2](performance-tips.md)
->
-
-> [!IMPORTANT]
-> This is *not* the latest Java SDK for Azure Cosmos DB! You should upgrade your project to [Azure Cosmos DB Java SDK v4](sql-api-sdk-java-v4.md) and then read the Azure Cosmos DB Java SDK v4 [performance tips guide](performance-tips-java-sdk-v4-sql.md). Follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide to upgrade.
->
-> These performance tips are for Azure Cosmos DB Sync Java SDK v2 only. Please view the Azure Cosmos DB Sync Java SDK v2 [Release notes](sql-api-sdk-java.md) and [Maven repository](https://mvnrepository.com/artifact/com.microsoft.azure/azure-documentdb) for more information.
->
-
-> [!IMPORTANT]
-> On February 29, 2024 the Azure Cosmos DB Sync Java SDK v2.x
-> will be retired; the SDK and all applications using the SDK
-> **will continue to function**; Azure Cosmos DB will simply cease
-> to provide further maintenance and support for this SDK.
-> We recommend following the instructions above to migrate to
-> Azure Cosmos DB Java SDK v4.
->
-
-Azure Cosmos DB is a fast and flexible distributed database that scales seamlessly with guaranteed latency and throughput. You do not have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call. To learn more, see [how to provision container throughput](how-to-provision-container-throughput.md) or [how to provision database throughput](how-to-provision-database-throughput.md). However, because Azure Cosmos DB is accessed via network calls there are client-side optimizations you can make to achieve peak performance when using [Azure Cosmos DB Sync Java SDK v2](./sql-api-sdk-java.md).
-
-So if you're asking "How can I improve my database performance?" consider the following options:
-
-## Networking
-<a id="direct-connection"></a>
-
-1. **Connection mode: Use DirectHttps**
-
- How a client connects to Azure Cosmos DB has important implications on performance, especially in terms of observed client-side latency. There is one key configuration setting available for configuring the client [ConnectionPolicy](/java/api/com.microsoft.azure.documentdb.connectionpolicy): the [ConnectionMode](/java/api/com.microsoft.azure.documentdb.connectionmode). The two available ConnectionModes are:
-
- 1. [Gateway (default)](/java/api/com.microsoft.azure.documentdb.connectionmode)
- 2. [DirectHttps](/java/api/com.microsoft.azure.documentdb.connectionmode)
-
- Gateway mode is supported on all SDK platforms and is the configured default. If your application runs within a corporate network with strict firewall restrictions, Gateway is the best choice since it uses the standard HTTPS port and a single endpoint. The performance tradeoff, however, is that Gateway mode involves an additional network hop every time data is read or written to Azure Cosmos DB. Because of this, DirectHttps mode offers better performance due to fewer network hops.
-
- The Azure Cosmos DB Sync Java SDK v2 uses HTTPS as a transport protocol. HTTPS uses TLS for initial authentication and encrypting traffic. When using the Azure Cosmos DB Sync Java SDK v2, only HTTPS port 443 needs to be open.
-
- The ConnectionMode is configured during the construction of the DocumentClient instance with the ConnectionPolicy parameter.
-
- ### <a id="syncjava2-connectionpolicy"></a>Sync Java SDK V2 (Maven com.microsoft.azure::azure-documentdb)
-
- ```Java
- public ConnectionPolicy getConnectionPolicy() {
- ConnectionPolicy policy = new ConnectionPolicy();
- policy.setConnectionMode(ConnectionMode.DirectHttps);
- policy.setMaxPoolSize(1000);
- return policy;
- }
-
- ConnectionPolicy connectionPolicy = new ConnectionPolicy();
- DocumentClient client = new DocumentClient(HOST, MASTER_KEY, connectionPolicy, null);
- ```
-
- :::image type="content" source="./media/performance-tips-java/connection-policy.png" alt-text="Diagram shows the Azure Cosmos DB connection policy." border="false":::
-
- <a id="same-region"></a>
-2. **Collocate clients in same Azure region for performance**
-
- When possible, place any applications calling Azure Cosmos DB in the same region as the Azure Cosmos database. For an approximate comparison, calls to Azure Cosmos DB within the same region complete within 1-2 ms, but the latency between the West and East coast of the US is >50 ms. This latency can likely vary from request to request depending on the route taken by the request as it passes from the client to the Azure datacenter boundary. The lowest possible latency is achieved by ensuring the calling application is located within the same Azure region as the provisioned Azure Cosmos DB endpoint. For a list of available regions, see [Azure Regions](https://azure.microsoft.com/regions/#services).
-
- :::image type="content" source="./media/performance-tips/same-region.png" alt-text="Diagram shows requests and responses in two regions, where computers connect to a Cosmos D B Account through mid-tier services." border="false":::
-
-## SDK Usage
-1. **Install the most recent SDK**
-
- The Azure Cosmos DB SDKs are constantly being improved to provide the best performance. See the [Azure Cosmos DB SDK](./sql-api-sdk-java.md) pages to determine the most recent SDK and review improvements.
-2. **Use a singleton Azure Cosmos DB client for the lifetime of your application**
-
- Each [DocumentClient](/java/api/com.microsoft.azure.documentdb.documentclient) instance is thread-safe and performs efficient connection management and address caching when operating in Direct Mode. To allow efficient connection management and better performance by DocumentClient, it is recommended to use a single instance of DocumentClient per AppDomain for the lifetime of the application.
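-
- For example, a minimal sketch of a shared client; the holder class and the placeholder endpoint and key are illustrative assumptions:
-
- ```Java
- import com.microsoft.azure.documentdb.ConnectionPolicy;
- import com.microsoft.azure.documentdb.ConsistencyLevel;
- import com.microsoft.azure.documentdb.DocumentClient;
-
- public class CosmosClientHolder {
-     private static final String HOST = "https://<your-account>.documents.azure.com:443/";
-     private static final String MASTER_KEY = "<your-account-key>";
-
-     // One DocumentClient, created once and reused for the lifetime of the application.
-     private static final DocumentClient CLIENT =
-         new DocumentClient(HOST, MASTER_KEY, new ConnectionPolicy(), ConsistencyLevel.Session);
-
-     public static DocumentClient getClient() {
-         return CLIENT;
-     }
- }
- ```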
-
- <a id="max-connection"></a>
-3. **Increase MaxPoolSize per host when using Gateway mode**
-
- Azure Cosmos DB requests are made over HTTPS/REST when using Gateway mode, and are subjected to the default connection limit per hostname or IP address. You may need to set the MaxPoolSize to a higher value (200-1000) so that the client library can utilize multiple simultaneous connections to Azure Cosmos DB. In the Azure Cosmos DB Sync Java SDK v2, the default value for [ConnectionPolicy.getMaxPoolSize](/java/api/com.microsoft.azure.documentdb.connectionpolicy.getmaxpoolsize) is 100. Use [setMaxPoolSize](/java/api/com.microsoft.azure.documentdb.connectionpolicy.setmaxpoolsize) to change the value.
-
-4. **Tuning parallel queries for partitioned collections**
-
- Azure Cosmos DB Sync Java SDK version 1.9.0 and above support parallel queries, which enable you to query a partitioned collection in parallel. For more information, see [code samples](https://github.com/Azure/azure-documentdb-java/tree/master/documentdb-examples/src/test/java/com/microsoft/azure/documentdb/examples) related to working with the SDKs. Parallel queries are designed to improve query latency and throughput over their serial counterpart.
-
- (a) ***Tuning setMaxDegreeOfParallelism\:***
- Parallel queries work by querying multiple partitions in parallel. However, data from an individual partitioned collection is fetched serially with respect to the query. So, use [setMaxDegreeOfParallelism](/java/api/com.microsoft.azure.documentdb.feedoptions.setmaxdegreeofparallelism) to set the value to the number of partitions; this gives you the best chance of achieving the most performant query, provided all other system conditions remain the same. If you don't know the number of partitions, you can use setMaxDegreeOfParallelism to set a high number, and the system chooses the minimum of (number of partitions, user-provided input) as the maximum degree of parallelism.
-
- It is important to note that parallel queries produce the best benefits if the data is evenly distributed across all partitions with respect to the query. If the partitioned collection is partitioned in such a way that all or most of the data returned by a query is concentrated in a few partitions (one partition in the worst case), then the performance of the query is bottlenecked by those partitions.
-
- (b) ***Tuning setMaxBufferedItemCount\:***
- Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. The pre-fetching helps improve the overall latency of a query. setMaxBufferedItemCount limits the number of pre-fetched results. Setting [setMaxBufferedItemCount](/java/api/com.microsoft.azure.documentdb.feedoptions.setmaxbuffereditemcount) to the expected number of results returned (or a higher number) enables the query to receive the maximum benefit from pre-fetching.
-
- Pre-fetching works the same way irrespective of the MaxDegreeOfParallelism, and there is a single buffer for the data from all partitions.
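-
- A minimal sketch that combines the two settings above; the query text, the `client` instance, and `collectionLink` are assumptions for illustration:
-
- ```Java
- // 'client' is the shared DocumentClient; 'collectionLink' is the collection's link (assumed).
- FeedOptions options = new FeedOptions();
- options.setEnableCrossPartitionQuery(true);
- options.setMaxDegreeOfParallelism(10);  // or a high number if the partition count is unknown
- options.setMaxBufferedItemCount(100);   // roughly the number of results you expect back
-
- FeedResponse<Document> response =
-     client.queryDocuments(collectionLink, "SELECT * FROM c WHERE c.city = 'Seattle'", options);
- for (Document doc : response.getQueryIterable()) {
-     // Process each document.
- }
- ```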
-
-5. **Implement backoff at getRetryAfterInMilliseconds intervals**
-
- During performance testing, you should increase load until a small rate of requests gets throttled. If requests are throttled, the client application should back off for the server-specified retry interval. Respecting the backoff ensures that you spend a minimal amount of time waiting between retries. Retry policy support is included in version 1.8.0 and above of the [Azure Cosmos DB Sync Java SDK](./sql-api-sdk-java.md). For more information, see [getRetryAfterInMilliseconds](/java/api/com.microsoft.azure.documentdb.documentclientexception.getretryafterinmilliseconds).
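-
- As a sketch of honoring the server-specified interval once the SDK's own retries are exhausted (the helper method and its parameters are illustrative):
-
- ```Java
- // Creates a document and, on a 429, waits for the server-specified interval
- // before retrying once (kept to a single retry for brevity).
- void createWithBackoff(DocumentClient client, String collectionLink, Document doc)
-         throws DocumentClientException, InterruptedException {
-     try {
-         client.createDocument(collectionLink, doc, null, false);
-     } catch (DocumentClientException e) {
-         if (e.getStatusCode() == 429) {
-             Thread.sleep(e.getRetryAfterInMilliseconds());
-             client.createDocument(collectionLink, doc, null, false);
-         } else {
-             throw e;
-         }
-     }
- }
- ```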
-
-6. **Scale out your client-workload**
-
- If you are testing at high throughput levels (>50,000 RU/s), the client application may become the bottleneck due to the machine capping out on CPU or network utilization. If you reach this point, you can continue to push the Azure Cosmos DB account further by scaling out your client applications across multiple servers.
-
-7. **Use name based addressing**
-
- Use name-based addressing, where links have the format `dbs/MyDatabaseId/colls/MyCollectionId/docs/MyDocumentId`, instead of SelfLinks (\_self), which have the format `dbs/<database_rid>/colls/<collection_rid>/docs/<document_rid>`, to avoid retrieving the ResourceIds of all the resources used to construct the link. Also, because these resources can get recreated (possibly with the same name), caching their ResourceIds may not help.
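-
- A small sketch, assuming a shared `client` and illustrative database, collection, document, and partition key values:
-
- ```Java
- // Name-based link: no ResourceIds need to be looked up or cached.
- String documentLink = "dbs/MyDatabaseId/colls/MyCollectionId/docs/MyDocumentId";
-
- RequestOptions options = new RequestOptions();
- options.setPartitionKey(new PartitionKey("Seattle"));  // required for partitioned collections; value is illustrative
-
- Document doc = client.readDocument(documentLink, options).getResource();
- ```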
-
- <a id="tune-page-size"></a>
-8. **Tune the page size for queries/read feeds for better performance**
-
- When performing a bulk read of documents by using read feed functionality (for example, [readDocuments](/java/api/com.microsoft.azure.documentdb.documentclient.readdocuments)) or when issuing a SQL query, the results are returned in a segmented fashion if the result set is too large. By default, results are returned in chunks of 100 items or 1 MB, whichever limit is hit first.
-
- To reduce the number of network round trips required to retrieve all applicable results, you can increase the page size by using the [x-ms-max-item-count](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) request header, up to 1000. In cases where you need to display only a few results, for example, if your user interface or application API returns only 10 results at a time, you can also decrease the page size to 10 to reduce the throughput consumed for reads and queries.
-
- You may also set the page size using the [setPageSize method](/java/api/com.microsoft.azure.documentdb.feedoptionsbase.setpagesize).
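-
- For example, a sketch that lowers the page size for a UI that shows only 10 results at a time; `client` and `collectionLink` are assumed:
-
- ```Java
- FeedOptions options = new FeedOptions();
- options.setPageSize(10);  // illustrative; raise it (up to 1000) to reduce round trips
-
- FeedResponse<Document> response =
-     client.queryDocuments(collectionLink, "SELECT * FROM c", options);
- ```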
-
-## Indexing Policy
-
-1. **Exclude unused paths from indexing for faster writes**
-
- Azure Cosmos DB's indexing policy allows you to specify which document paths to include or exclude from indexing by using indexing paths ([setIncludedPaths](/java/api/com.microsoft.azure.documentdb.indexingpolicy.setincludedpaths) and [setExcludedPaths](/java/api/com.microsoft.azure.documentdb.indexingpolicy.setexcludedpaths)). The use of indexing paths can offer improved write performance and lower index storage for scenarios in which the query patterns are known beforehand, because indexing costs are directly correlated to the number of unique paths indexed. For example, the following code shows how to exclude an entire section (subtree) of the documents from indexing by using the "*" wildcard.
--
- ### <a id="syncjava2-indexing"></a>Sync Java SDK V2 (Maven com.microsoft.azure::azure-documentdb)
-
- ```Java
- // Include a range index on numbers for the root path ("indexes", "includedPath",
- // "includedPaths", "indexingPolicy", and "collectionDefinition" are created earlier).
- Index numberIndex = Index.Range(DataType.Number);
- numberIndex.set("precision", -1);
- indexes.add(numberIndex);
- includedPath.setIndexes(indexes);
- includedPaths.add(includedPath);
- indexingPolicy.setIncludedPaths(includedPaths);
-
- // Exclude an entire subtree (illustrative path) by using the "*" wildcard.
- ExcludedPath excludedPath = new ExcludedPath();
- excludedPath.setPath("/nonIndexedContent/*");
- indexingPolicy.setExcludedPaths(Collections.singletonList(excludedPath));
- collectionDefinition.setIndexingPolicy(indexingPolicy);
- ```
-
- For more information, see [Azure Cosmos DB indexing policies](../index-policy.md).
-
-## Throughput
-<a id="measure-rus"></a>
-
-1. **Measure and tune for lower request units/second usage**
-
- Azure Cosmos DB offers a rich set of database operations, including relational and hierarchical queries with UDFs, stored procedures, and triggers, all operating on the documents within a database collection. The cost associated with each of these operations varies based on the CPU, IO, and memory required to complete the operation. Instead of thinking about and managing hardware resources, you can think of a request unit (RU) as a single measure for the resources required to perform various database operations and service an application request.
-
- Throughput is provisioned based on the number of [request units](../request-units.md) set for each container. Request unit consumption is evaluated as a rate per second. Applications that exceed the provisioned request unit rate for their container are limited until the rate drops below the provisioned level for the container. If your application requires a higher level of throughput, you can increase your throughput by provisioning additional request units.
-
- The complexity of a query impacts how many request units are consumed for an operation. The number of predicates, nature of the predicates, number of UDFs, and the size of the source data set all influence the cost of query operations.
-
- To measure the overhead of any operation (create, update, or delete), inspect the [x-ms-request-charge](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers) header (or the equivalent RequestCharge property in [ResourceResponse\<T>](/java/api/com.microsoft.azure.documentdb.resourceresponse) or [FeedResponse\<T>](/java/api/com.microsoft.azure.documentdb.feedresponse)) to measure the number of request units consumed by these operations.
--
- ### <a id="syncjava2-requestcharge"></a>Sync Java SDK V2 (Maven com.microsoft.azure::azure-documentdb)
-
- ```Java
- ResourceResponse<Document> response = client.createDocument(collectionLink, documentDefinition, null, false);
-
- response.getRequestCharge();
- ```
-
- The request charge returned in this header is a fraction of your provisioned throughput. For example, if you have 2000 RU/s provisioned, and if the preceding query returns 1000 1KB-documents, the cost of the operation is 1000. As such, within one second, the server honors only two such requests before rate limiting subsequent requests. For more information, see [Request units](../request-units.md) and the [request unit calculator](https://www.documentdb.com/capacityplanner).
- <a id="429"></a>
-1. **Handle rate limiting/request rate too large**
-
- When a client attempts to exceed the reserved throughput for an account, there is no performance degradation at the server and no use of throughput capacity beyond the reserved level. The server will preemptively end the request with RequestRateTooLarge (HTTP status code 429) and return the [x-ms-retry-after-ms](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers) header indicating the amount of time, in milliseconds, that the user must wait before reattempting the request.
-
- ```xml
- HTTP Status 429,
- Status Line: RequestRateTooLarge
- x-ms-retry-after-ms :100
- ```
- The SDKs all implicitly catch this response, respect the server-specified retry-after header, and retry the request. Unless your account is being accessed concurrently by multiple clients, the next retry will succeed.
-
- If you have more than one client cumulatively operating consistently above the request rate, the default retry count (currently set to 9 internally by the client) may not suffice. In this case, the client throws a [DocumentClientException](/java/api/com.microsoft.azure.documentdb.documentclientexception) with status code 429 to the application. You can change the default retry count by using [setRetryOptions](/java/api/com.microsoft.azure.documentdb.connectionpolicy.setretryoptions) on the [ConnectionPolicy](/java/api/com.microsoft.azure.documentdb.connectionpolicy) instance, as sketched below. By default, the DocumentClientException with status code 429 is returned after a cumulative wait time of 30 seconds if the request continues to operate above the request rate. This occurs even when the current retry count is less than the maximum retry count, whether the default of 9 or a user-defined value.
-
- While the automated retry behavior helps to improve resiliency and usability for most applications, it can get in the way when you're running performance benchmarks, especially when measuring latency. The client-observed latency will spike if the experiment hits the server throttle and causes the client SDK to silently retry. To avoid latency spikes during performance experiments, measure the charge returned by each operation and ensure that requests are operating below the reserved request rate. For more information, see [Request units](../request-units.md).
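-
- A minimal sketch of adjusting the retry behavior on the connection policy; the values, `HOST`, and `MASTER_KEY` are illustrative assumptions:
-
- ```Java
- RetryOptions retryOptions = new RetryOptions();
- retryOptions.setMaxRetryAttemptsOnThrottledRequests(15);  // default is 9
- retryOptions.setMaxRetryWaitTimeInSeconds(60);            // default is 30 seconds
-
- ConnectionPolicy policy = new ConnectionPolicy();
- policy.setRetryOptions(retryOptions);
-
- DocumentClient client = new DocumentClient(HOST, MASTER_KEY, policy, ConsistencyLevel.Session);
- ```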
-3. **Design for smaller documents for higher throughput**
-
- The request charge (the request processing cost) of a given operation is directly correlated to the size of the document. Operations on large documents cost more than operations for small documents.
-
-## Next steps
-To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
cosmos-db Performance Tips Query Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips-query-sdk.md
- Title: Azure Cosmos DB performance tips for queries using the Azure Cosmos DB SDK
-description: Learn query configuration options to help improve performance using the Azure Cosmos DB SDK.
- Previously updated : 04/11/2022
-zone_pivot_groups: programming-languages-set-cosmos
--
-# Query performance tips for Azure Cosmos DB SDKs
-
-Azure Cosmos DB is a fast, flexible distributed database that scales seamlessly with guaranteed latency and throughput levels. You don't have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call. To learn more, see [provision container throughput](how-to-provision-container-throughput.md) or [provision database throughput](how-to-provision-database-throughput.md).
--
-## Reduce Query Plan calls
-
-To execute a query, a query plan needs to be built. This in general represents a network request to the Azure Cosmos DB Gateway, which adds to the latency of the query operation. There are two ways to remove this request and reduce the latency of the query operation:
-
-### Use local Query Plan generation
-
-The SQL SDK includes a native ServiceInterop.dll to parse and optimize queries locally. ServiceInterop.dll is supported only on the **Windows x64** platform. The following types of applications use 32-bit host processing by default. To change host processing to 64-bit processing, follow these steps, based on the type of your application:
-- For executable applications, you can change host processing by setting the [platform target](/visualstudio/ide/how-to-configure-projects-to-target-platforms?preserve-view=true) to **x64** in the **Project Properties** window, on the **Build** tab.
-
-- For VSTest-based test projects, you can change host processing by selecting **Test** > **Test Settings** > **Default Processor Architecture as X64** on the Visual Studio **Test** menu.
-
-- For locally deployed ASP.NET web applications, you can change host processing by selecting **Use the 64-bit version of IIS Express for web sites and projects** under **Tools** > **Options** > **Projects and Solutions** > **Web Projects**.
-
-- For ASP.NET web applications deployed on Azure, you can change host processing by selecting the **64-bit** platform in **Application settings** in the Azure portal.
-
-> [!NOTE]
-> By default, new Visual Studio projects are set to **Any CPU**. We recommend that you set your project to **x64** so it doesn't switch to **x86**. A project set to **Any CPU** can easily switch to **x86** if an x86-only dependency is added.<br/>
-> ServiceInterop.dll needs to be in the folder that the SDK DLL is being executed from. This should be a concern only if you manually copy DLLs or have custom build/deployment systems.
-
-### Use single partition queries
-
-# [V3 .NET SDK](#tab/v3)
-
-For queries that target a Partition Key by setting the [PartitionKey](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.partitionkey) property in `QueryRequestOptions` and contain no aggregations (including Distinct, DCount, Group By), the query plan can be avoided:
-
-```cs
-using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
- "SELECT * FROM c WHERE c.city = 'Seattle'",
- requestOptions: new QueryRequestOptions() { PartitionKey = new PartitionKey("Washington")}))
-{
- // ...
-}
-```
-
-# [V2 .NET SDK](#tab/v2)
-
-For queries that target a Partition Key by setting the [PartitionKey](/dotnet/api/microsoft.azure.documents.client.feedoptions.partitionkey) property in `FeedOptions` and contain no aggregations (including Distinct, DCount, Group By), the query plan can be avoided:
-
-```cs
-IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
- "SELECT * FROM c WHERE c.city = 'Seattle'",
- new FeedOptions
- {
- PartitionKey = new PartitionKey("Washington")
- }).AsDocumentQuery();
-```
---
-> [!NOTE]
-> Cross-partition queries require the SDK to visit all existing partitions to check for results. The more [physical partitions](../partitioning-overview.md#physical-partitions) the container has, the slower they can potentially be.
-
-### Avoid recreating the iterator unnecessarily
-
-When all the query results are consumed by the current component, you don't need to re-create the iterator with the continuation for every page. Always prefer to drain the query fully unless the pagination is controlled by another calling component:
-
-# [V3 .NET SDK](#tab/v3)
-
-```cs
-using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
- "SELECT * FROM c WHERE c.city = 'Seattle'",
- requestOptions: new QueryRequestOptions() { PartitionKey = new PartitionKey("Washington")}))
-{
- while (feedIterator.HasMoreResults)
- {
- foreach(MyItem document in await feedIterator.ReadNextAsync())
- {
- // Iterate through documents
- }
- }
-}
-```
-
-# [V2 .NET SDK](#tab/v2)
-
-```cs
-IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
- "SELECT * FROM c WHERE c.city = 'Seattle'",
- new FeedOptions
- {
- PartitionKey = new PartitionKey("Washington")
- }).AsDocumentQuery();
-while (query.HasMoreResults)
-{
-    foreach(Document document in await query.ExecuteNextAsync())
- {
- // Iterate through documents
- }
-}
-```
---
-## Tune the degree of parallelism
-
-# [V3 .NET SDK](#tab/v3)
-
-For queries, tune the [MaxConcurrency](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.maxconcurrency) property in `QueryRequestOptions` to identify the best configurations for your application, especially if you perform cross-partition queries (without a filter on the partition-key value). `MaxConcurrency` controls the maximum number of parallel tasks, that is, the maximum number of partitions to be visited in parallel. Setting the value to -1 will let the SDK decide the optimal concurrency.
-
-```cs
-using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
- "SELECT * FROM c WHERE c.city = 'Seattle'",
- requestOptions: new QueryRequestOptions() {
- PartitionKey = new PartitionKey("Washington"),
- MaxConcurrency = -1 }))
-{
- // ...
-}
-```
-
-# [V2 .NET SDK](#tab/v2)
-
-For queries, tune the [MaxDegreeOfParallelism](/dotnet/api/microsoft.azure.documents.client.feedoptions.maxdegreeofparallelism) property in `FeedOptions` to identify the best configurations for your application, especially if you perform cross-partition queries (without a filter on the partition-key value). `MaxDegreeOfParallelism` controls the maximum number of parallel tasks, that is, the maximum number of partitions to be visited in parallel. Setting the value to -1 will let the SDK decide the optimal concurrency.
-
-```cs
-IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
- "SELECT * FROM c WHERE c.city = 'Seattle'",
- new FeedOptions
- {
- MaxDegreeOfParallelism = -1,
- EnableCrossPartitionQuery = true
- }).AsDocumentQuery();
-```
---
-Let's assume that
-* D = Default Maximum number of parallel tasks (= total number of processors in the client machine)
-* P = User-specified maximum number of parallel tasks
-* N = Number of partitions that need to be visited to answer a query
-
-The following are the implications of how the parallel queries behave for different values of P:
-* (P == 0) => Serial Mode
-* (P == 1) => Maximum of one task
-* (P > 1) => Min (P, N) parallel tasks
-* (P < 1) => Min (N, D) parallel tasks
-
-## Tune the page size
-
-When you issue a SQL query, the results are returned in a segmented fashion if the result set is too large. By default, results are returned in chunks of 100 items or 1 MB, whichever limit is hit first.
-
-> [!NOTE]
-> The `MaxItemCount` property shouldn't be used just for pagination. Its main use is to improve the performance of queries by reducing the maximum number of items returned in a single page.
-
-# [V3 .NET SDK](#tab/v3)
-
-You can also set the page size by using the available Azure Cosmos DB SDKs. The [MaxItemCount](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.maxitemcount) property in `QueryRequestOptions` allows you to set the maximum number of items to be returned in the enumeration operation. When `MaxItemCount` is set to -1, the SDK automatically finds the optimal value, depending on the document size. For example:
-
-```cs
-using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
- "SELECT * FROM c WHERE c.city = 'Seattle'",
- requestOptions: new QueryRequestOptions() {
- PartitionKey = new PartitionKey("Washington"),
- MaxItemCount = 1000}))
-{
- // ...
-}
-```
-
-# [V2 .NET SDK](#tab/v2)
-
-You can also set the page size by using the available Azure Cosmos DB SDKs. The [MaxItemCount](/dotnet/api/microsoft.azure.documents.client.feedoptions.maxitemcount) property in `FeedOptions` allows you to set the maximum number of items to be returned in the enumeration operation. When `MaxItemCount` is set to -1, the SDK automatically finds the optimal value, depending on the document size. For example:
-
-```csharp
-IQueryable<dynamic> authorResults = client.CreateDocumentQuery(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
- "SELECT p.Author FROM Pages p WHERE p.Title = 'About Seattle'",
- new FeedOptions { MaxItemCount = 1000 });
-```
---
-When a query is executed, the resulting data is sent within a TCP packet. If you specify too low a value for `MaxItemCount`, the number of trips required to send the data within the TCP packet is high, which affects performance. So if you're not sure what value to set for the `MaxItemCount` property, it's best to set it to -1 and let the SDK choose the default value.
-
-## Tune the buffer size
-
-# [V3 .NET SDK](#tab/v3)
-
-Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. This pre-fetching helps improve the overall latency of a query. The [MaxBufferedItemCount](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.maxbuffereditemcount) property in `QueryRequestOptions` limits the number of pre-fetched results. Set `MaxBufferedItemCount` to the expected number of results returned (or a higher number) to allow the query to receive the maximum benefit from pre-fetching. If you set this value to -1, the system will automatically determine the number of items to buffer.
-
-```cs
-using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
- "SELECT * FROM c WHERE c.city = 'Seattle'",
- requestOptions: new QueryRequestOptions() {
- PartitionKey = new PartitionKey("Washington"),
- MaxBufferedItemCount = -1}))
-{
- // ...
-}
-```
-
-# [V2 .NET SDK](#tab/v2)
-
-Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. This pre-fetching helps improve the overall latency of a query. The [MaxBufferedItemCount](/dotnet/api/microsoft.azure.documents.client.feedoptions.maxbuffereditemcount) property in `FeedOptions` limits the number of pre-fetched results. Set `MaxBufferedItemCount` to the expected number of results returned (or a higher number) to allow the query to receive the maximum benefit from pre-fetching. If you set this value to -1, the system will automatically determine the number of items to buffer.
-
-```csharp
-IQueryable<dynamic> authorResults = client.CreateDocumentQuery(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
- "SELECT p.Author FROM Pages p WHERE p.Title = 'About Seattle'",
- new FeedOptions { MaxBufferedItemCount = -1 });
-```
---
-Pre-fetching works the same way regardless of the degree of parallelism, and there's a single buffer for the data from all partitions.
---
-## Next steps
-
-To learn more about performance using the .NET SDK:
-
-* [Best practices for Azure Cosmos DB .NET SDK](best-practice-dotnet.md)
-* [Performance tips for Azure Cosmos DB .NET V3 SDK](performance-tips-dotnet-sdk-v3-sql.md)
-* [Performance tips for Azure Cosmos DB .NET V2 SDK](performance-tips.md)
--
-## Reduce Query Plan calls
-
-To execute a query, a query plan needs to be built. This in general represents a network request to the Azure Cosmos DB Gateway, which adds to the latency of the query operation.
-
-### Use Query Plan caching
-
-The query plan, for a query scoped to a single partition, is cached on the client. This eliminates the need to make a call to the gateway to retrieve the query plan after the first call. The key for the cached query plan is the SQL query string. You need to **make sure the query is [parametrized](sql-query-parameterized-queries.md)**. If not, the query plan cache lookup will often be a cache miss as the query string is unlikely to be identical across calls. Query plan caching is **enabled by default for Java SDK version 4.20.0 and above** and **for Spring Data Cosmos SDK version 3.13.0 and above**.
-
-### Use parametrized single partition queries
-
-For parametrized queries that are scoped to a partition key with [setPartitionKey](/java/api/com.azure.cosmos.models.cosmosqueryrequestoptions.setpartitionkey) in `CosmosQueryRequestOptions` and contain no aggregations (including Distinct, DCount, Group By), the query plan can be avoided:
-
-```java
-CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
-options.setPartitionKey(new PartitionKey("Washington"));
-
-ArrayList<SqlParameter> paramList = new ArrayList<SqlParameter>();
-paramList.add(new SqlParameter("@city", "Seattle"));
-SqlQuerySpec querySpec = new SqlQuerySpec(
- "SELECT * FROM c WHERE c.city = @city",
- paramList);
-
-// Sync API
-CosmosPagedIterable<MyItem> filteredItems =
- container.queryItems(querySpec, options, MyItem.class);
-
-// Async API
-CosmosPagedFlux<MyItem> filteredItems =
- asyncContainer.queryItems(querySpec, options, MyItem.class);
-```
-
-> [!NOTE]
-> Cross-partition queries require the SDK to visit all existing partitions to check for results. The more [physical partitions](../partitioning-overview.md#physical-partitions) the container has, the slower they can potentially be.
-
-## Tune the degree of parallelism
-
-Parallel queries work by querying multiple partitions in parallel. However, data from an individual partitioned container is fetched serially with respect to the query. So, use [setMaxDegreeOfParallelism](/java/api/com.azure.cosmos.models.cosmosqueryrequestoptions.setmaxdegreeofparallelism) on `CosmosQueryRequestOptions` to set the value to the number of partitions you have. If you don't know the number of partitions, you can use `setMaxDegreeOfParallelism` to set a high number, and the system chooses the minimum (number of partitions, user provided input) as the maximum degree of parallelism. Setting the value to -1 will let the SDK decide the optimal concurrency.
-
-It is important to note that parallel queries produce the best benefits if the data is evenly distributed across all partitions with respect to the query. If the partitioned container is partitioned in such a way that all or most of the data returned by a query is concentrated in a few partitions (one partition in the worst case), then the performance of the query will be degraded.
-
-```java
-CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
-options.setPartitionKey(new PartitionKey("Washington"));
-options.setMaxDegreeOfParallelism(-1);
-
-// Define the query
-
-// Sync API
-CosmosPagedIterable<MyItem> filteredItems =
- container.queryItems(querySpec, options, MyItem.class);
-
-// Async API
-CosmosPagedFlux<MyItem> filteredItems =
- asyncContainer.queryItems(querySpec, options, MyItem.class);
-```
-
-Let's assume that
-* D = Default Maximum number of parallel tasks (= total number of processors in the client machine)
-* P = User-specified maximum number of parallel tasks
-* N = Number of partitions that need to be visited to answer a query
-
-The following are the implications of how the parallel queries behave for different values of P:
-* (P == 0) => Serial Mode
-* (P == 1) => Maximum of one task
-* (P > 1) => Min (P, N) parallel tasks
-* (P == -1) => Min (N, D) parallel tasks
-
-## Tune the page size
-
-When you issue a SQL query, the results are returned in a segmented fashion if the result set is too large. By default, results are returned in chunks of 100 items or 4 MB, whichever limit is hit first.
-
-You can use the `pageSize` parameter in `iterableByPage()` for sync API and `byPage()` for async API, to define a page size:
-
-```java
-// Sync API
-Iterable<FeedResponse<MyItem>> filteredItemsAsPages =
- container.queryItems(querySpec, options, MyItem.class).iterableByPage(continuationToken,pageSize);
-
-for (FeedResponse<MyItem> page : filteredItemsAsPages) {
- for (MyItem item : page.getResults()) {
- //...
- }
-}
-
-// Async API
-Flux<FeedResponse<MyItem>> filteredItemsAsPages =
- asyncContainer.queryItems(querySpec, options, MyItem.class).byPage(continuationToken,pageSize);
-
-filteredItemsAsPages.map(page -> {
- for (MyItem item : page.getResults()) {
- //...
- }
-}).subscribe();
-```
-
-## Tune the buffer size
-
-Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. The pre-fetching helps improve the overall latency of a query. [setMaxBufferedItemCount](/java/api/com.azure.cosmos.models.cosmosqueryrequestoptions.setmaxbuffereditemcount) in `CosmosQueryRequestOptions` limits the number of pre-fetched results. Setting setMaxBufferedItemCount to the expected number of results returned (or a higher number) enables the query to receive the maximum benefit from pre-fetching, but it can also result in high memory consumption. If you set this value to 0, the system will automatically determine the number of items to buffer.
-
-```java
-CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
-options.setPartitionKey(new PartitionKey("Washington"));
-options.setMaxBufferedItemCount(-1);
-
-// Define the query
-
-// Sync API
-CosmosPagedIterable<MyItem> filteredItems =
- container.queryItems(querySpec, options, MyItem.class);
-
-// Async API
-CosmosPagedFlux<MyItem> filteredItems =
- asyncContainer.queryItems(querySpec, options, MyItem.class);
-```
-
-Pre-fetching works the same way regardless of the degree of parallelism, and there's a single buffer for the data from all partitions.
-
-## Next steps
-
-To learn more about performance using the Java SDK:
-
-* [Best practices for Azure Cosmos DB Java V4 SDK](best-practice-java.md)
-* [Performance tips for Azure Cosmos DB Java V4 SDK](performance-tips-java-sdk-v4-sql.md)
-
cosmos-db Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips.md
- Title: Azure Cosmos DB performance tips for .NET SDK v2
-description: Learn client configuration options to improve Azure Cosmos DB .NET v2 SDK performance.
- Previously updated : 02/18/2022
-# Performance tips for Azure Cosmos DB and .NET SDK v2
-
-> [!div class="op_single_selector"]
-> * [.NET SDK v3](performance-tips-dotnet-sdk-v3-sql.md)
-> * [.NET SDK v2](performance-tips.md)
-> * [Java SDK v4](performance-tips-java-sdk-v4-sql.md)
-> * [Async Java SDK v2](performance-tips-async-java.md)
-> * [Sync Java SDK v2](performance-tips-java.md)
-
-Azure Cosmos DB is a fast and flexible distributed database that scales seamlessly with guaranteed latency and throughput. You don't have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call. To learn more, see [how to provision container throughput](how-to-provision-container-throughput.md) or [how to provision database throughput](how-to-provision-database-throughput.md). But because Azure Cosmos DB is accessed via network calls, there are client-side optimizations you can make to achieve peak performance when you use the [SQL .NET SDK](sql-api-sdk-dotnet-standard.md).
-
-So, if you're trying to improve your database performance, consider these options:
-
-## Upgrade to the .NET V3 SDK
-
-The [.NET v3 SDK](https://github.com/Azure/azure-cosmos-dotnet-v3) is released. If you use the .NET v3 SDK, see the [.NET v3 performance guide](performance-tips-dotnet-sdk-v3-sql.md) for the following information:
-- Defaults to Direct TCP mode
-- Stream API support
-- Support custom serializer to allow System.Text.JSON usage
-- Integrated batch and bulk support
-
-## <a id="hosting"></a> Hosting recommendations
-
-**Turn on server-side garbage collection (GC)**
-
-Reducing the frequency of garbage collection can help in some cases. In .NET, set [gcServer](/dotnet/framework/configure-apps/file-schema/runtime/gcserver-element) to `true`.
-
-**Scale out your client workload**
-
-If you're testing at high throughput levels (more than 50,000 RU/s), the client application could become the bottleneck due to the machine capping out on CPU or network utilization. If you reach this point, you can continue to push the Azure Cosmos DB account further by scaling out your client applications across multiple servers.
-
-> [!NOTE]
-> High CPU usage can cause increased latency and request timeout exceptions.
-
-## <a id="metadata-operations"></a> Metadata operations
-
-Don't verify that a database or collection exists by calling `Create...IfNotExistsAsync` or `Read...Async` in the hot path or before doing an item operation. Perform this validation only at application startup, and only when it's necessary, that is, if you expect these resources to be deleted (otherwise it's not needed). These metadata operations generate extra end-to-end latency, have no SLA, and have their own separate [limitations](./troubleshoot-request-rate-too-large.md#rate-limiting-on-metadata-requests) that don't scale like data operations.
-
-## <a id="logging-and-tracing"></a> Logging and tracing
-
-Some environments have the [.NET DefaultTraceListener](/dotnet/api/system.diagnostics.defaulttracelistener) enabled. The DefaultTraceListener poses performance issues in production environments, causing high CPU and I/O bottlenecks. Check and make sure that the DefaultTraceListener is disabled for your application by removing it from the [TraceListeners](/dotnet/framework/debug-trace-profile/how-to-create-and-initialize-trace-listeners) in production environments.
-
-The latest SDK versions (greater than 2.16.2) automatically remove it when they detect it. With older versions, you can remove it as follows:
-
-# [.NET 6 / .NET Core](#tab/trace-net-core)
-
-```csharp
-if (!Debugger.IsAttached)
-{
- Type defaultTrace = Type.GetType("Microsoft.Azure.Documents.DefaultTrace,Microsoft.Azure.DocumentDB.Core");
- TraceSource traceSource = (TraceSource)defaultTrace.GetProperty("TraceSource").GetValue(null);
- traceSource.Listeners.Remove("Default");
- // Add your own trace listeners
-}
-```
-
-# [.NET Framework](#tab/trace-net-fx)
-
-Edit your `app.config` or `web.config` files:
-
-```xml
-<configuration>
- <system.diagnostics>
- <sources>
- <source name="DocDBTrace" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >
- <listeners>
- <remove name="Default" />
- <!--Add your own trace listeners-->
- <add name="myListener" ... />
- </listeners>
- </source>
- </sources>
- </system.diagnostics>
-</configuration>
-```
---
-## <a id="networking"></a> Networking
-
-**Connection policy: Use direct connection mode**
-
-.NET V2 SDK default connection mode is gateway. You configure the connection mode during the construction of the `DocumentClient` instance by using the `ConnectionPolicy` parameter. If you use direct mode, you need to also set the `Protocol` by using the `ConnectionPolicy` parameter. To learn more about different connectivity options, see the [connectivity modes](sql-sdk-connection-modes.md) article.
-
-```csharp
-Uri serviceEndpoint = new Uri("https://contoso.documents.net");
-string authKey = "your authKey from the Azure portal";
-DocumentClient client = new DocumentClient(serviceEndpoint, authKey,
-new ConnectionPolicy
-{
- ConnectionMode = ConnectionMode.Direct, // ConnectionMode.Gateway is the default
- ConnectionProtocol = Protocol.Tcp
-});
-```
-
-**Ephemeral port exhaustion**
-
-If you see a high connection volume or high port usage on your instances, first verify that your client instances are singletons. In other words, the client instances should be unique for the lifetime of the application.
-
-When running on the TCP protocol, the client optimizes for latency by using long-lived connections, as opposed to the HTTPS protocol, which terminates connections after 2 minutes of inactivity.
-
-In scenarios where you have sparse access, and you notice a higher connection count compared to gateway mode access, you can:
-
-* Configure the [ConnectionPolicy.PortReuseMode](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.portreusemode) property to `PrivatePortPool` (effective with framework version >= 4.6.1 and .NET Core version >= 2.0): This property allows the SDK to use a small pool of ephemeral ports for different Azure Cosmos DB destination endpoints.
-* Configure the [ConnectionPolicy.IdleConnectionTimeout](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.idletcpconnectiontimeout) property to be greater than or equal to 10 minutes. The recommended values are between 20 minutes and 24 hours.
-
-**Call OpenAsync to avoid startup latency on first request**
-
-By default, the first request has higher latency because it needs to fetch the address routing table. When you use [SDK V2](sql-api-sdk-dotnet.md), call `OpenAsync()` once during initialization to avoid this startup latency on the first request. The call looks like: `await client.OpenAsync();`
-
-> [!NOTE]
-> `OpenAsync` will generate requests to obtain the address routing table for all the containers in the account. For accounts that have many containers but whose application accesses a subset of them, `OpenAsync` would generate an unnecessary amount of traffic, which would make the initialization slow. So using `OpenAsync` might not be useful in this scenario because it slows down application startup.
-
-**For performance, collocate clients in same Azure region**
-
-When possible, place any applications that call Azure Cosmos DB in the same region as the Azure Cosmos DB database. Here's an approximate comparison: calls to Azure Cosmos DB within the same region complete within 1 ms to 2 ms, but the latency between the West and East coast of the US is more than 50 ms. This latency can vary from request to request, depending on the route taken by the request as it passes from the client to the Azure datacenter boundary. You can get the lowest possible latency by ensuring the calling application is located within the same Azure region as the provisioned Azure Cosmos DB endpoint. For a list of available regions, see [Azure regions](https://azure.microsoft.com/regions/#services).
--
-**Increase the number of threads/tasks**
-<a id="increase-threads"></a>
-
-Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of parallelism of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the .NET [Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB.
-
-**Enable accelerated networking**
-
-To reduce latency and CPU jitter, we recommend that you enable accelerated networking on client virtual machines. See [Create a Windows virtual machine with accelerated networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) or [Create a Linux virtual machine with accelerated networking](../../virtual-network/create-vm-accelerated-networking-cli.md).
-
-## SDK usage
-
-**Install the most recent SDK**
-
-The Azure Cosmos DB SDKs are constantly being improved to provide the best performance. See the [Azure Cosmos DB SDK](sql-api-sdk-dotnet-standard.md) pages to determine the most recent SDK and review improvements.
-
-**Use a singleton Azure Cosmos DB client for the lifetime of your application**
-
-Each `DocumentClient` instance is thread-safe and performs efficient connection management and address caching when operating in direct mode. To allow efficient connection management and better SDK client performance, we recommend that you use a single instance per `AppDomain` for the lifetime of the application.
-
-**Avoid blocking calls**
-
-Applications that use the Cosmos DB SDK should be designed to process many requests simultaneously. Asynchronous APIs allow a small pool of threads to handle thousands of concurrent requests by not waiting on blocking calls. Rather than waiting on a long-running synchronous task to complete, the thread can work on another request.
-
-A common performance problem in apps using the Cosmos DB SDK is blocking calls that could be asynchronous. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times.
-
-**Do not**:
-
-* Block asynchronous execution by calling [Task.Wait](/dotnet/api/system.threading.tasks.task.wait) or [Task.Result](/dotnet/api/system.threading.tasks.task-1.result).
-* Use [Task.Run](/dotnet/api/system.threading.tasks.task.run) to make a synchronous API asynchronous.
-* Acquire locks in common code paths. Cosmos DB .NET SDK is most performant when architected to run code in parallel.
-* Call [Task.Run](/dotnet/api/system.threading.tasks.task.run) and immediately await it. ASP.NET Core already runs app code on normal Thread Pool threads, so calling Task.Run only results in extra unnecessary Thread Pool scheduling. Even if the scheduled code would block a thread, Task.Run does not prevent that.
-* Do not use `ToList()` on `DocumentClient.CreateDocumentQuery(...)`, which uses blocking calls to synchronously drain the query. Use [AsDocumentQuery()](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/a4348f8cc0750434376b02ae64ca24237da28cd7/samples/code-samples/Queries/Program.cs#L690) to drain the query asynchronously instead.
-
-**Do**:
-
-* Call the Cosmos DB .NET APIs asynchronously.
-* Make the entire call stack asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns.
-
-A profiler, such as [PerfView](https://github.com/Microsoft/perfview), can be used to find threads frequently added to the [Thread Pool](/windows/desktop/procthread/thread-pools). The `Microsoft-Windows-DotNETRuntime/ThreadPoolWorkerThread/Start` event indicates a thread added to the thread pool.
-
-**Increase System.Net MaxConnections per host when using gateway mode**
-
-Azure Cosmos DB requests are made over HTTPS/REST when you use gateway mode. They're subjected to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (100 to 1,000) so the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for [ServicePointManager.DefaultConnectionLimit](/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit) is 50. To change the value, you can set [Documents.Client.ConnectionPolicy.MaxConnectionLimit](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.maxconnectionlimit) to a higher value.
-
-**Implement backoff at RetryAfter intervals**
-
-During performance testing, you should increase load until a small rate of requests are throttled. If requests are throttled, the client application should back off for the server-specified retry interval. Respecting the backoff ensures you spend a minimal amount of time waiting between retries.
-
-Retry policy support is included in these SDKs:
-- Version 1.8.0 and later of the [.NET SDK for SQL](sql-api-sdk-dotnet.md) and the [Java SDK for SQL](sql-api-sdk-java.md)
-- Version 1.9.0 and later of the [Node.js SDK for SQL](sql-api-sdk-node.md) and the [Python SDK for SQL](sql-api-sdk-python.md)
-- All supported versions of the [.NET Core](sql-api-sdk-dotnet-core.md) SDKs
-
-For more information, see [RetryAfter](/dotnet/api/microsoft.azure.documents.documentclientexception.retryafter).
-
-In version 1.19 and later of the .NET SDK, there's a mechanism for logging additional diagnostic information and troubleshooting latency issues, as shown in the following sample. You can log the diagnostic string for requests that have a higher read latency. The captured diagnostic string will help you understand how many times you received 429 errors for a given request.
-
-```csharp
-ResourceResponse<Document> readDocument = await this.readClient.ReadDocumentAsync(oldDocuments[i].SelfLink);
-// Log the diagnostic string for requests that show higher-than-expected read latency.
-Console.WriteLine(readDocument.RequestDiagnosticsString);
-```
-
-**Cache document URIs for lower read latency**
-
-Cache document URIs whenever possible for the best read performance. You need to define logic to cache the resource ID when you create a resource. Lookups based on resource IDs are faster than name-based lookups, so caching these values improves performance.
-
-**Increase the number of threads/tasks**
-
-See [Increase the number of threads/tasks](#increase-threads) in the networking section of this article.
-
-## Query operations
-
-For query operations see the [performance tips for queries](performance-tips-query-sdk.md?tabs=v2&pivots=programming-language-csharp).
-
-## Indexing policy
-
-**Exclude unused paths from indexing for faster writes**
-
-The Azure Cosmos DB indexing policy also allows you to specify which document paths to include in or exclude from indexing by using indexing paths (IndexingPolicy.IncludedPaths and IndexingPolicy.ExcludedPaths). Indexing paths can improve write performance and reduce index storage for scenarios in which the query patterns are known beforehand. This is because indexing costs correlate directly to the number of unique paths indexed. For example, this code shows how to exclude an entire section of the documents (a subtree) from indexing by using the "*" wildcard:
-
-```csharp
-var collection = new DocumentCollection { Id = "excludedPathCollection" };
-collection.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
-collection.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/nonIndexedContent/*" });
-collection = await client.CreateDocumentCollectionAsync(UriFactory.CreateDatabaseUri("db"), collection);
-```
-
-For more information, see [Azure Cosmos DB indexing policies](../index-policy.md).
-
-## <a id="measure-rus"></a> Throughput
-
-**Measure and tune for lower Request Units/second usage**
-
-Azure Cosmos DB offers a rich set of database operations. These operations include relational and hierarchical queries with UDFs, stored procedures, and triggers, all operating on the documents within a database collection. The cost associated with each of these operations varies depending on the CPU, IO, and memory required to complete the operation. Instead of thinking about and managing hardware resources, you can think of a Request Unit (RU) as a single measure for the resources required to perform various database operations and service an application request.
-
-Throughput is provisioned based on the number of [Request Units](../request-units.md) set for each container. Request Unit consumption is evaluated as a rate per second. Applications that exceed the provisioned Request Unit rate for their container are limited until the rate drops below the provisioned level for the container. If your application requires a higher level of throughput, you can increase your throughput by provisioning additional Request Units.
-
-The complexity of a query affects how many Request Units are consumed for an operation. The number of predicates, the nature of the predicates, the number of UDFs, and the size of the source dataset all influence the cost of query operations.
-
-To measure the overhead of any operation (create, update, or delete), inspect the [x-ms-request-charge](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers) header (or the equivalent `RequestCharge` property in `ResourceResponse\<T>` or `FeedResponse\<T>` in the .NET SDK) to measure the number of Request Units consumed by the operations:
-
-```csharp
-// Measure the performance (Request Units) of writes
-ResourceResponse<Document> response = await client.CreateDocumentAsync(collectionSelfLink, myDocument);
-Console.WriteLine("Insert of document consumed {0} request units", response.RequestCharge);
-// Measure the performance (Request Units) of queries
-IDocumentQuery<dynamic> queryable = client.CreateDocumentQuery(collectionSelfLink, queryString).AsDocumentQuery();
-while (queryable.HasMoreResults)
- {
- FeedResponse<dynamic> queryResponse = await queryable.ExecuteNextAsync<dynamic>();
- Console.WriteLine("Query batch consumed {0} request units", queryResponse.RequestCharge);
- }
-```
-
-The request charge returned in this header is measured against your provisioned throughput (for example, 2,000 RU/s). If the preceding query returns 1,000 1-KB documents, the cost of the operation is 1,000 RUs. So, within one second, the server honors only two such requests before rate limiting later requests. For more information, see [Request Units](../request-units.md) and the [Request Unit calculator](https://www.documentdb.com/capacityplanner).
-<a id="429"></a>
-
-**Handle rate limiting/request rate too large**
-
-When a client attempts to exceed the reserved throughput for an account, there's no performance degradation at the server and no use of throughput capacity beyond the reserved level. The server will preemptively end the request with RequestRateTooLarge (HTTP status code 429). It will return an [x-ms-retry-after-ms](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers) header that indicates the amount of time, in milliseconds, that the user must wait before attempting the request again.
-
-```http
-HTTP Status 429,
-Status Line: RequestRateTooLarge
-x-ms-retry-after-ms :100
-```
-
-The SDKs all implicitly catch this response, respect the server-specified retry-after header, and retry the request. Unless your account is being accessed concurrently by multiple clients, the next retry will succeed.
-
-If you have more than one client cumulatively operating consistently above the request rate, the default retry count currently set to 9 internally by the client might not suffice. In this case, the client throws a DocumentClientException with status code 429 to the application.
-
-You can change the default retry count by setting the `RetryOptions` on the `ConnectionPolicy` instance. By default, the DocumentClientException with status code 429 is returned after a cumulative wait time of 30 seconds if the request continues to operate above the request rate. This error returns even when the current retry count is less than the maximum retry count, whether the current value is the default of 9 or a user-defined value.
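-
-As a minimal sketch, the retry behavior can be adjusted on the connection policy before the client is created (the values shown are illustrative):
-
-```csharp
-ConnectionPolicy connectionPolicy = new ConnectionPolicy();
-// Raise the retry count and the cumulative wait time before a 429 surfaces to the application.
-connectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 15;
-connectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 60;
-
-DocumentClient client = new DocumentClient(
-    new Uri("<account-endpoint>"),
-    "<account-key>",
-    connectionPolicy);
-```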
-
-The automated retry behavior helps improve resiliency and usability for most applications. But it might not be the best behavior when you're doing performance benchmarks, especially when you're measuring latency. The client-observed latency will spike if the experiment hits the server throttle and causes the client SDK to silently retry. To avoid latency spikes during performance experiments, measure the charge returned by each operation and ensure that requests are operating below the reserved request rate. For more information, see [Request Units](../request-units.md).
-
-**For higher throughput, design for smaller documents**
-
-The request charge (that is, the request-processing cost) of a given operation correlates directly to the size of the document. Operations on large documents cost more than operations on small documents.
-
-## Next steps
-
-For a sample application that's used to evaluate Azure Cosmos DB for high-performance scenarios on a few client machines, see [Performance and scale testing with Azure Cosmos DB](performance-testing.md).
-
-To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
cosmos-db Powerbi Visualize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/powerbi-visualize.md
- Title: Power BI tutorial for Azure Cosmos DB
-description: Use this Power BI tutorial to import JSON, create insightful reports, and visualize data using the Azure Cosmos DB.
-Previously updated : 03/28/2022
-# Visualize Azure Cosmos DB data using Power BI
-
-This article describes the steps required to connect Azure Cosmos DB data to [Power BI](https://powerbi.microsoft.com/) Desktop.
-
-You can connect to Azure Cosmos DB from Power BI desktop by using one of these options:
-
-* Use [Azure Synapse Link](../synapse-link.md) to build Power BI reports with no performance or cost impact to your transactional workloads, and no ETL pipelines.
-
- You can either use [DirectQuery](/power-bi/connect-data/service-dataset-modes-understand#directquery-mode) or [import](/power-bi/connect-data/service-dataset-modes-understand#import-mode) mode. With [DirectQuery](/power-bi/connect-data/service-dataset-modes-understand#directquery-mode), you can build dashboards/reports using live data from your Azure Cosmos DB accounts, without importing or copying the data into Power BI.
-
-* Connect Power BI Desktop to your Azure Cosmos DB account with the Azure Cosmos DB connector for Power BI. This option is only available in import mode and will consume RUs allocated for your transactional workloads.
-
-> [!NOTE]
-> Reports created in Power BI Desktop can be published to PowerBI.com. Direct extraction of Azure Cosmos DB data cannot be performed from PowerBI.com.
-
-## Prerequisites
-Before following the instructions in this Power BI tutorial, ensure that you have access to the following resources:
-
-* [Download the latest version of Power BI Desktop](https://powerbi.microsoft.com/desktop).
-
-* [Create an Azure Cosmos database account](create-cosmosdb-resources-portal.md#create-an-azure-cosmos-db-account) and add data to your Cosmos containers.
-
-To share your reports in PowerBI.com, you must have an account in PowerBI.com. To learn more about Power BI and Power BI Pro, see [https://powerbi.microsoft.com/pricing](https://powerbi.microsoft.com/pricing).
-
-## Let's get started
-### Building BI reports using Azure Synapse Link
-
-You can enable Azure Synapse Link on your existing Cosmos DB containers and build BI reports on this data in just a few clicks by using the Azure Cosmos DB portal. Power BI connects to Cosmos DB by using DirectQuery mode, allowing you to query your live Cosmos DB data without impacting your transactional workloads.
-
-To build a Power BI report/dashboard:
-
-1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to your Azure Cosmos DB account.
-
-1. From the **Integrations** section, open the **Power BI** pane and select **Get started**.
-
- > [!NOTE]
- > Currently, this option is only available for SQL API accounts. You can create T-SQL views directly in Synapse serverless SQL pools and build BI dashboards for Azure Cosmos DB API for MongoDB. See ["Use Power BI and serverless Synapse SQL pool to analyze Azure Cosmos DB data with Synapse"](../synapse-link-power-bi.md) for more information.
-
-1. From the **Enable Azure Synapse Link** tab, you can enable Synapse Link on your account from the **Enable Azure Synapse link for this account** section. If Synapse Link is already enabled for your account, you will not see this tab. This step is a prerequisite to enabling Synapse Link on your containers.
-
- > [!NOTE]
- > Enabling Azure Synapse Link has cost implications. See [Azure Synapse Link pricing](../synapse-link.md#pricing) section for more details.
-
-1. Next from the **Enable Azure Synapse Link for your containers** section, choose the required containers to enable Synapse Link.
-
- * If you already enabled Synapse Link on some containers, you will see the checkbox next to the container name is selected. You may optionally deselect them, based on the data you'd like to visualize in Power BI.
-
- * If Synapse Link isn't enabled, you can enable this on your existing containers.
-
- If enabling Synapse Link is in progress on any of the containers, the data from those containers will not be included. You should come back to this tab later and import data when the containers are enabled.
-
- :::image type="content" source="../media/integrated-power-bi-synapse-link/synapse-link-progress-existing-containers.png" alt-text="Progress of Synapse Link enabled on existing containers." border="true" lightbox="../media/integrated-power-bi-synapse-link/synapse-link-progress-existing-containers.png":::
-
-1. Depending on the amount of data in your containers, it may take a while to enable Synapse Link. To learn more, see [enable Synapse Link on existing containers](../configure-synapse-link.md#update-analytical-ttl) article.
-
- You can check the progress in the portal as shown in the following screen. **Containers are enabled with Synapse Link when the progress reaches 100%.**
-
- :::image type="content" source="../media/integrated-power-bi-synapse-link/synapse-link-existing-containers-registration-complete.png" alt-text="Synapse Link successfully enabled on the selected containers." border="true" lightbox="../media/integrated-power-bi-synapse-link/synapse-link-existing-containers-registration-complete.png":::
-
-1. From the **Select workspace** tab, choose the Azure Synapse Analytics workspace and select **Next**. This step will automatically create T-SQL views in Synapse Analytics, for the containers selected earlier. For more information on T-SQL views required to connect your Cosmos DB to Power BI, see [Prepare views](../../synapse-analytics/sql/tutorial-connect-power-bi-desktop.md#3prepare-view) article.
- > [!NOTE]
- > Your Cosmos DB container properties will be represented as columns in T-SQL views, including deep nested JSON data. This is a quick start for your BI dashboards. These views will be available in your Synapse workspace/database; you can also use these exact same views in Synapse Workspace for data exploration, data science, data engineering, and so on. Note that advanced scenarios may demand more complex views or fine-tuning of these views for better performance. For more information, see the [best practices for Synapse Link when using Synapse serverless SQL pools](../../synapse-analytics/sql/resources-self-help-sql-on-demand.md#azure-cosmos-db-performance-issues) article.
-
-1. You can either choose an existing workspace or create a new one. To select an existing workspace, provide the **Subscription**, **Workspace**, and the **Database** details. Azure portal will use your Azure AD credentials to automatically connect to your Synapse workspace and create T-SQL views. Make sure you have "Synapse administrator" permissions to this workspace.
-
- :::image type="content" source="../media/integrated-power-bi-synapse-link/synapse-create-views.png" alt-text="Connect to Synapse Link workspace and create views." border="true" lightbox="../media/integrated-power-bi-synapse-link/synapse-create-views.png":::
-
-1. Next, select **Download .pbids** to download the Power BI data source file. Open the downloaded file. It contains the required connection information and opens Power BI desktop.
-
- :::image type="content" source="../media/integrated-power-bi-synapse-link/download-powerbi-desktop-files.png" alt-text="Download the Power BI desktop files in .pbids format." border="true" lightbox="../media/integrated-power-bi-synapse-link/download-powerbi-desktop-files.png":::
-
-1. You can now connect to Azure Cosmos DB data from Power BI desktop. A list of T-SQL views corresponding to the data in each container is displayed.
-
- For example, the following screen shows vehicle fleet data. You can load this data for further analysis or transform it before loading.
-
- :::image type="content" source="../media/integrated-power-bi-synapse-link/powerbi-desktop-select-view.png" alt-text="T-SQL views corresponding to the data in each container." border="true" lightbox="../media/integrated-power-bi-synapse-link/powerbi-desktop-select-view.png":::
-
-1. You can now start building the report using Azure Cosmos DB's analytical data. Any changes to your data will be reflected in the report, as soon as the data is replicated to analytical store, which typically happens in a couple of minutes.
--
-### Building BI reports using Power BI connector
-> [!NOTE]
-> Connecting to Azure Cosmos DB with the Power BI connector is currently supported for Azure Cosmos DB SQL API and Gremlin API accounts only.
-
-1. Run Power BI Desktop.
-
-2. You can **Get Data**, see **Recent Sources**, or **Open Other Reports** directly from the welcome screen. Select the "X" at the top right corner to close the screen. The **Report** view of Power BI Desktop is displayed.
-
- :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbireportview.png" alt-text="Power BI Desktop Report View - Power BI connector":::
-
-3. Select the **Home** ribbon, then click on **Get Data**. The **Get Data** window should appear.
-
-4. Click on **Azure**, select **Azure Cosmos DB (Beta)**, and then click **Connect**.
-
- :::image type="content" source="./media/powerbi-visualize/power_bi_connector_pbigetdata.png" alt-text="Power BI Desktop Get Data - Power BI connector":::
-
-5. On the **Preview Connector** page, click **Continue**. The **Azure Cosmos DB** window appears.
-
-6. Specify the Azure Cosmos DB account endpoint URL you would like to retrieve the data from as shown below, and then click **OK**. To use your own account, you can retrieve the URL from the URI box in the **Keys** blade of the Azure portal. Optionally you can provide the database name, collection name or use the navigator to select the database and collection to identify where the data comes from.
-
-7. If you are connecting to this endpoint for the first time, you are prompted for the account key. For your own account, retrieve the key from the **Primary Key** box in the **Read-only Keys** blade of the Azure portal. Enter the appropriate key and then click **Connect**.
-
- We recommend that you use the read-only key when building reports. This prevents unnecessary exposure of the primary key to potential security risks. The read-only key is available from the **Keys** blade of the Azure portal.
-
-8. When the account is successfully connected, the **Navigator** pane appears. The **Navigator** shows a list of databases under the account.
-
-9. Click to expand the database where the data for the report comes from. Then select a collection that contains the data to retrieve.
-
- The Preview pane shows a list of **Record** items. A Document is represented as a **Record** type in Power BI. Similarly, a nested JSON block inside a document is also a **Record**. To view the properties of the documents as columns, click the grey button with two arrows in opposite directions that symbolizes the expansion of the record. It's located to the right of the container's name, in the same preview pane.
-
-10. Power BI Desktop Report view is where you can start creating reports to visualize data. You can create reports by dragging and dropping fields into the **Report** canvas.
-
-11. There are two ways to refresh data: ad hoc and scheduled. Simply click **Refresh Now** to refresh the data. For a scheduled refresh, go to **Settings**, open the **Datasets** tab. Click on **Scheduled Refresh** and set your schedule.
-
-## Next steps
-* To learn more about Power BI, see [Get started with Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/).
-* To learn more about Azure Cosmos DB, see the [Azure Cosmos DB documentation landing page](/azure/cosmos-db/).
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/powershell-samples.md
- Title: Azure PowerShell samples for Azure Cosmos DB Core (SQL) API
-description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB for Core (SQL) API
-Previously updated : 01/20/2021
-# Azure PowerShell samples for Azure Cosmos DB Core (SQL) API
-
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
-
-For PowerShell cmdlets for other APIs, see [PowerShell Samples for Cassandra](../cassandr)
-
-## Common Samples
-
-|Task | Description |
-|||
-|[Update an account](../scripts/powershell/common/account-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's default consistency level. |
-|[Update an account's regions](../scripts/powershell/common/update-region.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's regions. |
-|[Change failover priority or trigger failover](../scripts/powershell/common/failover-priority-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Change the regional failover priority of an Azure Cosmos account or trigger a manual failover. |
-|[Account keys or connection strings](../scripts/powershell/common/keys-connection-strings.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Get primary and secondary keys, connection strings or regenerate an account key of an Azure Cosmos DB account. |
-|[Create a Cosmos Account with IP Firewall](../scripts/powershell/common/firewall-create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account with IP Firewall enabled. |
-|||
-
-## Core (SQL) API Samples
-
-|Task | Description |
-|||
-|[Create an account, database and container](../scripts/powershell/sql/create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account, database and container. |
-|[Create an account, database and container with autoscale](../scripts/powershell/sql/autoscale.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account, database and container with autoscale. |
-|[Create a container with a large partition key](../scripts/powershell/sql/create-large-partition-key.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create a container with a large partition key. |
-|[Create a container with no index policy](../scripts/powershell/sql/create-index-none.md?toc=%2fpowershell%2fmodule%2ftoc.json) | Create an Azure Cosmos container with index policy turned off.|
-|[List or get databases or containers](../scripts/powershell/sql/list-get.md?toc=%2fpowershell%2fmodule%2ftoc.json)| List or get database or containers. |
-|[Perform throughput operations](../scripts/powershell/sql/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Perform throughput operations for a database or container including get, update and migrate between autoscale and standard throughput. |
-|[Lock resources from deletion](../scripts/powershell/sql/lock.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Prevent resources from being deleted with resource locks. |
-|||
cosmos-db Profile Sql Api Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/profile-sql-api-query.md
- Title: Get SQL query performance & execution metrics
-description: Learn how to retrieve SQL query execution metrics and profile SQL query performance of Azure Cosmos DB requests.
-Previously updated : 05/17/2019
-# Get SQL query execution metrics and analyze query performance using .NET SDK
-
-This article presents how to profile SQL query performance on Azure Cosmos DB. This profiling can be done using `QueryMetrics` retrieved from the .NET SDK and is detailed here. [QueryMetrics](/dotnet/api/microsoft.azure.documents.querymetrics) is a strongly typed object with information about the backend query execution. These metrics are documented in more detail in the [Tune Query Performance](./sql-api-query-metrics.md) article.
-
-## Set the FeedOptions parameter
-
-All the overloads for [DocumentClient.CreateDocumentQuery](/dotnet/api/microsoft.azure.documents.client.documentclient.createdocumentquery) take in an optional [FeedOptions](/dotnet/api/microsoft.azure.documents.client.feedoptions) parameter. This option is what allows query execution to be tuned and parameterized.
-
-To collect the SQL query execution metrics, you must set the parameter [PopulateQueryMetrics](/dotnet/api/microsoft.azure.documents.client.feedoptions.populatequerymetrics#P:Microsoft.Azure.Documents.Client.FeedOptions.PopulateQueryMetrics) in the [FeedOptions](/dotnet/api/microsoft.azure.documents.client.feedoptions) to `true`. Setting `PopulateQueryMetrics` to `true` ensures that the `FeedResponse` contains the relevant `QueryMetrics`.
-
-## Get query metrics with AsDocumentQuery()
-The following code sample shows how to retrieve metrics when using the [AsDocumentQuery()](/dotnet/api/microsoft.azure.documents.linq.documentqueryable.asdocumentquery) method:
-
-```csharp
-// Initialize this DocumentClient and Collection
-DocumentClient documentClient = null;
-DocumentCollection collection = null;
-
-// Setting PopulateQueryMetrics to true in the FeedOptions
-FeedOptions feedOptions = new FeedOptions
-{
- PopulateQueryMetrics = true
-};
-
-string query = "SELECT TOP 5 * FROM c";
-IDocumentQuery<dynamic> documentQuery = documentClient.CreateDocumentQuery(collection.SelfLink, query, feedOptions).AsDocumentQuery();
-
-while (documentQuery.HasMoreResults)
-{
- // Execute one continuation of the query
- FeedResponse<dynamic> feedResponse = await documentQuery.ExecuteNextAsync();
-
- // This dictionary maps the partitionId to the QueryMetrics of that query
- IReadOnlyDictionary<string, QueryMetrics> partitionIdToQueryMetrics = feedResponse.QueryMetrics;
-
- // At this point you have QueryMetrics which you can serialize using .ToString()
- foreach (KeyValuePair<string, QueryMetrics> kvp in partitionIdToQueryMetrics)
- {
- string partitionId = kvp.Key;
- QueryMetrics queryMetrics = kvp.Value;
-
- // Do whatever logging you need
- DoSomeLoggingOfQueryMetrics(query, partitionId, queryMetrics);
- }
-}
-```
-## Aggregating QueryMetrics
-
-In the previous section, notice that there were multiple calls to [ExecuteNextAsync](/dotnet/api/microsoft.azure.documents.linq.idocumentquery-1.executenextasync) method. Each call returned a `FeedResponse` object that has a dictionary of `QueryMetrics`; one for every continuation of the query. The following example shows how to aggregate these `QueryMetrics` using LINQ:
-
-```csharp
-List<QueryMetrics> queryMetricsList = new List<QueryMetrics>();
-
-while (documentQuery.HasMoreResults)
-{
- // Execute one continuation of the query
- FeedResponse<dynamic> feedResponse = await documentQuery.ExecuteNextAsync();
-
- // This dictionary maps the partitionId to the QueryMetrics of that query
- IReadOnlyDictionary<string, QueryMetrics> partitionIdToQueryMetrics = feedResponse.QueryMetrics;
- queryMetricsList.AddRange(partitionIdToQueryMetrics.Values);
-}
-
-// Aggregate the QueryMetrics using the + operator overload of the QueryMetrics class.
-QueryMetrics aggregatedQueryMetrics = queryMetricsList.Aggregate((curr, acc) => curr + acc);
-Console.WriteLine(aggregatedQueryMetrics);
-```
-
-## Grouping query metrics by Partition ID
-
-You can group the `QueryMetrics` by the Partition ID. Grouping by Partition ID allows you to see if a specific Partition is causing performance issues when compared to others. The following example shows how to group `QueryMetrics` with LINQ:
-
-```csharp
-List<KeyValuePair<string, QueryMetrics>> partitionedQueryMetrics = new List<KeyValuePair<string, QueryMetrics>>();
-while (documentQuery.HasMoreResults)
-{
- // Execute one continuation of the query
- FeedResponse<dynamic> feedResponse = await documentQuery.ExecuteNextAsync();
-
- // This dictionary maps the partitionId to the QueryMetrics of that query
- IReadOnlyDictionary<string, QueryMetrics> partitionIdToQueryMetrics = feedResponse.QueryMetrics;
- partitionedQueryMetrics.AddRange(partitionIdToQueryMetrics.ToList());
-}
-
-// Now we are able to group the query metrics by partitionId
-IEnumerable<IGrouping<string, KeyValuePair<string, QueryMetrics>>> groupedByQueryMetrics = partitionedQueryMetrics
- .GroupBy(kvp => kvp.Key);
-
-// If we wanted to, we could even aggregate the grouped QueryMetrics
-foreach(IGrouping<string, KeyValuePair<string, QueryMetrics>> grouping in groupedByQueryMetrics)
-{
- string partitionId = grouping.Key;
- QueryMetrics aggregatedQueryMetricsForPartition = grouping
- .Select(kvp => kvp.Value)
- .Aggregate((curr, acc) => curr + acc);
- DoSomeLoggingOfQueryMetrics(query, partitionId, aggregatedQueryMetricsForPartition);
-}
-```
-
-## LINQ on DocumentQuery
-
-You can also get the `FeedResponse` from a LINQ Query using the `AsDocumentQuery()` method:
-
-```csharp
-IDocumentQuery<Document> linqQuery = client.CreateDocumentQuery(collection.SelfLink, feedOptions)
- .Take(1)
- .Where(document => document.Id == "42")
- .OrderBy(document => document.Timestamp)
- .AsDocumentQuery();
-FeedResponse<Document> feedResponse = await linqQuery.ExecuteNextAsync<Document>();
-IReadOnlyDictionary<string, QueryMetrics> queryMetrics = feedResponse.QueryMetrics;
-```
-
-## Expensive Queries
-
-You can capture the request units consumed by each query to investigate expensive queries or queries that consume high throughput. You can get the request charge by using the [RequestCharge](/dotnet/api/microsoft.azure.documents.client.feedresponse-1.requestcharge) property in `FeedResponse`. To learn more about how to get the request charge using the Azure portal and different SDKs, see [find the request unit charge](find-request-unit-charge.md) article.
-
-```csharp
-string query = "SELECT * FROM c";
-IDocumentQuery<dynamic> documentQuery = documentClient.CreateDocumentQuery(collection.SelfLink, query, feedOptions).AsDocumentQuery();
-
-while (documentQuery.HasMoreResults)
-{
- // Execute one continuation of the query
- FeedResponse<dynamic> feedResponse = await documentQuery.ExecuteNextAsync();
- double requestCharge = feedResponse.RequestCharge;
-
- // Log the RequestCharge however you want.
- DoSomeLogging(requestCharge);
-}
-```
-
-## Get the query execution time
-
-When calculating the time required to execute a client-side query, make sure that you only include the time spent calling the `ExecuteNextAsync` method and not other parts of your code base. Measuring only these calls tells you how long the query execution took, as shown in the following example:
-
-```csharp
-string query = "SELECT * FROM c";
-IDocumentQuery<dynamic> documentQuery = documentClient.CreateDocumentQuery(collection.SelfLink, query, feedOptions).AsDocumentQuery();
-Stopwatch queryExecutionTimeEndToEndTotal = new Stopwatch();
-while (documentQuery.HasMoreResults)
-{
- // Execute one continuation of the query
- queryExecutionTimeEndToEndTotal.Start();
- FeedResponse<dynamic> feedResponse = await documentQuery.ExecuteNextAsync();
- queryExecutionTimeEndToEndTotal.Stop();
-}
-
-// Log the elapsed time
-DoSomeLogging(queryExecutionTimeEndToEndTotal.Elapsed);
-```
-
-## Scan queries (commonly slow and expensive)
-
-A scan query is a query that can't be served by the index, so many documents must be loaded before the result set is returned.
-
-Below is an example of a scan query:
-
-```sql
-SELECT VALUE c.description
-FROM c
-WHERE UPPER(c.description) = "BABYFOOD, DESSERT, FRUIT DESSERT, WITHOUT ASCORBIC ACID, JUNIOR"
-```
-
-This query's filter uses the system function UPPER, which isn't served from the index. Executing this query against a large collection produced the following query metrics for the first continuation:
-
-```
-QueryMetrics
-
-Retrieved Document Count : 60,951
-Retrieved Document Size : 399,998,938 bytes
-Output Document Count : 7
-Output Document Size : 510 bytes
-Index Utilization : 0.00 %
-Total Query Execution Time : 4,500.34 milliseconds
- Query Preparation Times
- Query Compilation Time : 0.09 milliseconds
- Logical Plan Build Time : 0.05 milliseconds
- Physical Plan Build Time : 0.04 milliseconds
- Query Optimization Time : 0.01 milliseconds
- Index Lookup Time : 0.01 milliseconds
- Document Load Time : 4,177.66 milliseconds
- Runtime Execution Times
- Query Engine Times : 322.16 milliseconds
- System Function Execution Time : 85.74 milliseconds
- User-defined Function Execution Time : 0.00 milliseconds
- Document Write Time : 0.01 milliseconds
-Client Side Metrics
- Retry Count : 0
- Request Charge : 4,059.95 RUs
-```
-
-Note the following values from the query metrics output:
-
-```
-Retrieved Document Count : 60,951
-Retrieved Document Size : 399,998,938 bytes
-```
-
-This query loaded 60,951 documents, which totaled 399,998,938 bytes. Loading this many bytes results in a high cost (request unit charge). It also takes a long time to execute the query, as shown by the total query execution time:
-
-```
-Total Query Execution Time : 4,500.34 milliseconds
-```
-
-This means the query took 4.5 seconds to execute (and this was only one continuation).
-
-To optimize this example query, avoid the use of UPPER in the filter. Instead, when documents are created or updated, the `c.description` values must be inserted in all uppercase characters. The query then becomes:
-
-```sql
-SELECT VALUE c.description
-FROM c
-WHERE c.description = "BABYFOOD, DESSERT, FRUIT DESSERT, WITHOUT ASCORBIC ACID, JUNIOR"
-```
-
-This query can now be served from the index.
-
-To learn more about tuning query performance, see the [Tune Query Performance](./sql-api-query-metrics.md) article.
-
-## <a id="References"></a>References
-
-- [Azure Cosmos DB SQL specification](./sql-query-getting-started.md)
-- [ANSI SQL 2011](https://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=53681)
-- [JSON](https://json.org/)
-- [LINQ](/previous-versions/dotnet/articles/bb308959(v=msdn.10))
-
-## Next steps
-
-- [Tune query performance](sql-api-query-metrics.md)
-- [Indexing overview](../index-overview.md)
-- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
cosmos-db Query Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/query-cheat-sheet.md
- Title: Azure Cosmos DB PDF query cheat sheets
-description: Printable PDF cheat sheets that help you use Azure Cosmos DB's SQL, MongoDB, Graph, and Table APIs to query your data
-Previously updated : 05/28/2019
-# Azure Cosmos DB query cheat sheets
-
-The **Azure Cosmos DB query cheat sheets** help you quickly write queries for your data by displaying common database queries, operations, functions, and operators in easy-to-print PDF reference sheets. The cheat sheets include reference information for the SQL, MongoDB, Table, and Gremlin APIs.
-
-Choose from a letter-sized or A3-sized download.
-
-## Letter-sized cheat sheets
-
-Download the [Azure Cosmos DB letter-sized query cheat sheets](https://go.microsoft.com/fwlink/?LinkId=623215) if you're going to print to letter-sized paper (8.5" x 11").
--
-## Oversized cheat sheets
-Download the [Azure Cosmos DB A3-sized query cheat sheets](https://go.microsoft.com/fwlink/?linkid=870413) if you're going to print using a plotter or large-scale printer on A3-sized paper (11.7" x 16.5").
--
-## Next steps
-For more help writing queries, see the following articles:
-* For SQL API queries, see [Query using the SQL API](tutorial-query-sql-api.md), [SQL queries for Azure Cosmos DB](./sql-query-getting-started.md), and [SQL syntax reference](./sql-query-getting-started.md)
-* For MongoDB queries, see [Query using Azure Cosmos DB's API for MongoDB](../mongodb/tutorial-query-mongodb.md) and [Azure Cosmos DB's API for MongoDB feature support and syntax](../mongodb/feature-support-32.md)
-* For Gremlin API queries, see [Query using the Gremlin API](../tutorial-query-graph.md) and [Azure Cosmos DB Gremlin graph support](../gremlin-support.md)
-* For Table API queries, see [Query using the Table API](../table/tutorial-query-table.md)
cosmos-db Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quick-create-bicep.md
- Title: Quickstart - Create an Azure Cosmos DB and a container using Bicep
-description: Quickstart showing how to create an Azure Cosmos database and a container using Bicep
--
-tags: azure-resource-manager, bicep
-Previously updated : 04/18/2022
-#Customer intent: As a database admin who is new to Azure, I want to use Azure Cosmos DB to store and manage my data.
--
-# Quickstart: Create an Azure Cosmos DB and a container using Bicep
--
-Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). This quickstart focuses on the process of deploying a Bicep file to create an Azure Cosmos database and a container within that database. You can later store data in this container.
--
-## Prerequisites
-
-An Azure subscription or free Azure Cosmos DB trial account.
-- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-
-## Review the Bicep file
-
-The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/cosmosdb-sql/).
--
-Three Azure resources are defined in the Bicep file:
-
-- [Microsoft.DocumentDB/databaseAccounts](/azure/templates/microsoft.documentdb/databaseaccounts): Create an Azure Cosmos account.
-
-- [Microsoft.DocumentDB/databaseAccounts/sqlDatabases](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases): Create an Azure Cosmos database.
-
-- [Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases/containers): Create an Azure Cosmos container.
-
-## Deploy the Bicep file
-
-1. Save the Bicep file as **main.bicep** to your local computer.
-1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
-
- # [CLI](#tab/CLI)
-
- ```azurecli
- az group create --name exampleRG --location eastus
- az deployment group create --resource-group exampleRG --template-file main.bicep --parameters primaryRegion=<primary-region> secondaryRegion=<secondary-region>
- ```
-
- # [PowerShell](#tab/PowerShell)
-
- ```azurepowershell
- New-AzResourceGroup -Name exampleRG -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -primaryRegion "<primary-region>" -secondaryRegion "<secondary-region>"
- ```
-
-
-
- > [!NOTE]
- > Replace **\<primary-region\>** with the primary replica region for the Cosmos DB account, such as **WestUS**. Replace **\<secondary-region\>** with the secondary replica region for the Cosmos DB account, such as **EastUS**.
-
- When the deployment finishes, you should see a message indicating the deployment succeeded.
-
-## Validate the deployment
-
-Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-az resource list --resource-group exampleRG
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-Get-AzResource -ResourceGroupName exampleRG
-```
---
-## Clean up resources
-
-If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place.
-When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-az group delete --name exampleRG
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name exampleRG
-```
---
-## Next steps
-
-In this quickstart, you created an Azure Cosmos account, a database and a container by using a Bicep file and validated the deployment. To learn more about Azure Cosmos DB and Bicep, continue on to the articles below.
-
-- Read an [Overview of Azure Cosmos DB](../introduction.md).
-- Learn more about [Bicep](../../azure-resource-manager/bicep/overview.md).
-- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
cosmos-db Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quick-create-template.md
- Title: Quickstart - Create an Azure Cosmos DB and a container by using Azure Resource Manager template
-description: Quickstart showing how to create an Azure Cosmos database and a container by using an Azure Resource Manager template
---
-tags: azure-resource-manager
-Previously updated : 08/26/2021
-#Customer intent: As a database admin who is new to Azure, I want to use Azure Cosmos DB to store and manage my data.
--
-# Quickstart: Create an Azure Cosmos DB and a container by using an ARM template
-
-Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create an Azure Cosmos database and a container within that database. You can later store data in this container.
--
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-
-[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql%2Fazuredeploy.json)
-
-## Prerequisites
-
-An Azure subscription or free Azure Cosmos DB trial account
-
-- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-
-- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
-
-## Review the template
-
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/cosmosdb-sql/).
--
-Three Azure resources are defined in the template:
-
-* [Microsoft.DocumentDB/databaseAccounts](/azure/templates/microsoft.documentdb/databaseaccounts): Create an Azure Cosmos account.
-
-* [Microsoft.DocumentDB/databaseAccounts/sqlDatabases](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases): Create an Azure Cosmos database.
-
-* [Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases/containers): Create an Azure Cosmos container.
-
-More Azure Cosmos DB template samples can be found in the [quickstart template gallery](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Documentdb).
-
-## Deploy the template
-
-1. Select the following image to sign in to Azure and open a template. The template creates an Azure Cosmos account, a database, and a container.
-
- [:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql%2Fazuredeploy.json)
-
-2. Select or enter the following values.
-
- :::image type="content" source="../media/quick-create-template/create-cosmosdb-using-template-portal.png" alt-text="ARM template, Azure Cosmos DB integration, deploy portal":::
-
- Unless otherwise specified, use the default values to create the Azure Cosmos resources.
-
- * **Subscription**: select an Azure subscription.
- * **Resource group**: select **Create new**, enter a unique name for the resource group, and then click **OK**.
- * **Location**: select a location. For example, **Central US**.
- * **Account Name**: enter a name for the Azure Cosmos account. It must be globally unique.
- * **Location**: enter a location where you want to create your Azure Cosmos account. The Azure Cosmos account can be in the same location as the resource group.
- * **Primary Region**: The primary replica region for the Azure Cosmos account.
- * **Secondary region**: The secondary replica region for the Azure Cosmos account.
- * **Default Consistency Level**: The default consistency level for the Azure Cosmos account.
- * **Max Staleness Prefix**: Max stale requests. Required for BoundedStaleness.
- * **Max Interval in Seconds**: Max lag time. Required for BoundedStaleness.
- * **Database Name**: The name of the Azure Cosmos database.
- * **Container Name**: The name of the Azure Cosmos container.
- * **Throughput**: The throughput for the container, minimum throughput value is 400 RU/s.
- * **I agree to the terms and conditions state above**: Select.
-
-3. Select **Purchase**. After the Azure Cosmos account has been deployed successfully, you get a notification:
-
- :::image type="content" source="../media/quick-create-template/resource-manager-template-portal-deployment-notification.png" alt-text="ARM template, Cosmos DB integration, deploy portal notification":::
-
-The Azure portal is used to deploy the template. In addition to the Azure portal, you can also use the Azure PowerShell, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates](../../azure-resource-manager/templates/deploy-powershell.md).
-
-## Validate the deployment
-
-You can either use the Azure portal to check the Azure Cosmos account, the database, and the container or use the following Azure CLI or Azure PowerShell script to list the secret created.
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-echo "Enter your Azure Cosmos account name:" &&
-read cosmosAccountName &&
-echo "Enter the resource group where the Azure Cosmos account exists:" &&
-read resourcegroupName &&
-az cosmosdb show -g $resourcegroupName -n $cosmosAccountName
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-$resourceGroupName = Read-Host -Prompt "Enter the resource group name where your Azure Cosmos account exists"
-(Get-AzResource -ResourceType "Microsoft.DocumentDB/databaseAccounts" -ResourceGroupName $resourceGroupName).Name
- Write-Host "Press [ENTER] to continue..."
-```
---
-## Clean up resources
-
-If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place.
-When no longer needed, delete the resource group, which deletes the Azure Cosmos account and the related resources. To delete the resource group by using Azure CLI or Azure PowerShell:
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-echo "Enter the Resource Group name:" &&
-read resourceGroupName &&
-az group delete --name $resourceGroupName &&
-echo "Press [ENTER] to continue ..."
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
-Remove-AzResourceGroup -Name $resourceGroupName
-Write-Host "Press [ENTER] to continue..."
-```
---
-## Next steps
-
-In this quickstart, you created an Azure Cosmos account, a database and a container by using an ARM template and validated the deployment. To learn more about Azure Cosmos DB and Azure Resource Manager, continue on to the articles below.
-
-- Read an [Overview of Azure Cosmos DB](../introduction.md)
-- Learn more about [Azure Resource Manager](../../azure-resource-manager/management/overview.md)
-- Get other [Azure Cosmos DB Resource Manager templates](./templates-samples-sql.md)
-- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Quick Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quick-create-terraform.md
- Title: Quickstart - Create an Azure Cosmos DB and a container using Terraform
-description: Quickstart showing how to create an Azure Cosmos database and a container using Terraform
--
-tags: azure-resource-manager, terraform
-Previously updated : 09/22/2022
-#Customer intent: As a database admin who is new to Azure, I want to use Azure Cosmos DB to store and manage my data.
--
-# Quickstart: Create an Azure Cosmos DB and a container using Terraform
--
-Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). This quickstart focuses on the process of deploying via Terraform to create an Azure Cosmos database and a container within that database. You can later store data in this container.
-
-## Prerequisites
-
-An Azure subscription or free Azure Cosmos DB trial account
-- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-
-Terraform should be installed on your local computer. Installation instructions can be found [here](https://learn.hashicorp.com/tutorials/terraform/install-cli).
-
-## Review the Terraform File
-
-The Terraform files used in this quickstart can be found in the [Terraform samples repository](https://github.com/Azure/terraform). Create the following three files: *providers.tf*, *main.tf*, and *variables.tf*. Variables can be set on the command line or, alternatively, with a *terraform.tfvars* file.
-
-### Provider Terraform File
--
-### Main Terraform File
--
-### Variables Terraform File
---
-Three Cosmos DB resources are defined in the main terraform file.
-
-- [Microsoft.DocumentDB/databaseAccounts](/azure/templates/microsoft.documentdb/databaseaccounts): Create an Azure Cosmos account.
-
-- [Microsoft.DocumentDB/databaseAccounts/sqlDatabases](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases): Create an Azure Cosmos database.
-
-- [Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases/containers): Create an Azure Cosmos container.
-
-## Deploy via terraform
-
-1. Save the Terraform files as *main.tf*, *variables.tf*, and *providers.tf* to your local computer.
-2. Sign in to Azure in your terminal by using the Azure CLI or Azure PowerShell.
-3. Deploy by running the following Terraform commands:
- - `terraform init`
- - `terraform plan`
- - `terraform apply`
-
-## Validate the deployment
-
-Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-az resource list --resource-group "your resource group name"
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-Get-AzResource -ResourceGroupName "your resource group name"
-```
---
-## Clean up resources
-
-If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place.
-When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-az group delete --name "your resource group name"
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name "your resource group name"
-```
---
-## Next steps
-
-In this quickstart, you created an Azure Cosmos account, a database and a container via terraform and validated the deployment. To learn more about Azure Cosmos DB and Terraform, continue on to the articles below.
-
-- Read an [Overview of Azure Cosmos DB](../introduction.md).
-- Learn more about [Terraform](https://www.terraform.io/intro).
-- Learn more about [Azure Terraform Provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs).
-- [Manage Cosmos DB with Terraform](manage-with-terraform.md)
-- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quickstart-dotnet.md
- Title: Quickstart - Azure Cosmos DB SQL API client library for .NET
-description: Learn how to build a .NET app to manage Azure Cosmos DB SQL API account resources in this quickstart.
-Previously updated : 07/26/2022
-# Quickstart: Azure Cosmos DB SQL API client library for .NET
--
-> [!div class="op_single_selector"]
->
-> * [.NET](quickstart-dotnet.md)
-> * [Node.js](create-sql-api-nodejs.md)
-> * [Java](create-sql-api-java.md)
-> * [Spring Data](create-sql-api-spring-data.md)
-> * [Python](create-sql-api-python.md)
-> * [Spark v3](create-sql-api-spark.md)
-> * [Go](create-sql-api-go.md)
->
-
-Get started with the Azure Cosmos DB client library for .NET to create databases, containers, and items within your account. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). Follow these steps to install the package and try out example code for basic tasks.
-
-> [!NOTE]
-> The [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-dotnet-quickstart) are available on GitHub as a .NET project.
-
-[API reference documentation](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | [Samples](samples-dotnet.md)
-
-## Prerequisites
-
-* An Azure account with an active subscription. [Create an account for free](https://aka.ms/trycosmosdb).
-* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
-
-### Prerequisite check
-
-* In a terminal or command window, run ``dotnet --version`` to check that the .NET SDK is version 6.0 or later.
-* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
-
-## Setting up
-
-This section walks you through creating an Azure Cosmos account and setting up a project that uses Azure Cosmos DB SQL API client library for .NET to manage resources.
-
-### Create an Azure Cosmos DB account
--
-### Create a new .NET app
-
-Create a new .NET application in an empty folder using your preferred terminal. Use the [``dotnet new``](/dotnet/core/tools/dotnet-new) command specifying the **console** template.
-
-```dotnetcli
-dotnet new console
-```
-
-### Install the package
-
-Add the [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) NuGet package to the .NET project. Use the [``dotnet add package``](/dotnet/core/tools/dotnet-add-package) command specifying the name of the NuGet package.
-
-```dotnetcli
-dotnet add package Microsoft.Azure.Cosmos
-```
-
-Build the project with the [``dotnet build``](/dotnet/core/tools/dotnet-build) command.
-
-```dotnetcli
-dotnet build
-```
-
-Make sure that the build was successful with no errors. The expected output from the build should look something like this:
-
-```output
- Determining projects to restore...
- All projects are up-to-date for restore.
- dslkajfjlksd -> C:\Users\sidandrews\Demos\dslkajfjlksd\bin\Debug\net6.0\dslkajfjlksd.dll
-
-Build succeeded.
- 0 Warning(s)
- 0 Error(s)
-```
-
-### Configure environment variables
--
-## Object model
--
-You'll use the following .NET classes to interact with these resources:
-
-* [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
-* [``Database``](/dotnet/api/microsoft.azure.cosmos.database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
-* [``Container``](/dotnet/api/microsoft.azure.cosmos.container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
-* [``QueryDefinition``](/dotnet/api/microsoft.azure.cosmos.querydefinition) - This class represents a SQL query and any query parameters.
-* [``FeedIterator<>``](/dotnet/api/microsoft.azure.cosmos.feediterator-1) - This class represents an iterator that can track the current page of results and get a new page of results.
-* [``FeedResponse<>``](/dotnet/api/microsoft.azure.cosmos.feedresponse-1) - This class represents a single page of responses from the iterator. This type can be iterated over using a ``foreach`` loop.
-
-## Code examples
-
-* [Authenticate the client](#authenticate-the-client)
-* [Create a database](#create-a-database)
-* [Create a container](#create-a-container)
-* [Create an item](#create-an-item)
-* [Get an item](#get-an-item)
-* [Query items](#query-items)
-
-The sample code described in this article creates a database named ``adventureworks`` with a container named ``products``. The ``products`` container is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
-
-For this sample code, the container uses the product category as a logical partition key.
-
-### Authenticate the client
-
-From the project directory, open the *Program.cs* file. In your editor, add a using directive for ``Microsoft.Azure.Cosmos``.
--
-Define a new instance of the ``CosmosClient`` class using the constructor, and [``Environment.GetEnvironmentVariable``](/dotnet/api/system.environment.getenvironmentvariable) to read the two environment variables you created earlier.
--
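-For example, a minimal sketch of this step, assuming a .NET 6 console app and environment variables named `COSMOS_ENDPOINT` and `COSMOS_KEY` (the names are illustrative):
-
-```csharp
-using Microsoft.Azure.Cosmos;
-
-// Read the account endpoint and key from environment variables (names are illustrative).
-string endpoint = Environment.GetEnvironmentVariable("COSMOS_ENDPOINT");
-string key = Environment.GetEnvironmentVariable("COSMOS_KEY");
-
-// Create the client used for all requests against the account.
-CosmosClient client = new CosmosClient(endpoint, key);
-```
-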
-For more information on different ways to create a ``CosmosClient`` instance, see [Get started with Azure Cosmos DB SQL API and .NET](how-to-dotnet-get-started.md#connect-to-azure-cosmos-db-sql-api).
-
-### Create a database
-
-Use the [``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) method to create a new database if it doesn't already exist. This method will return a reference to the existing or newly created database.
--
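-For example, continuing the sketch above with the ``adventureworks`` database used by this sample:
-
-```csharp
-// Create the database if it doesn't exist, or get a reference to it if it does.
-Database database = await client.CreateDatabaseIfNotExistsAsync("adventureworks");
-Console.WriteLine($"New database: {database.Id}");
-```
-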
-For more information on creating a database, see [Create a database in Azure Cosmos DB SQL API using .NET](how-to-dotnet-create-database.md).
-
-### Create a container
-
-The [``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) method creates a new container if it doesn't already exist, and returns a reference to the existing or newly created container.
--
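-For example, a sketch that creates the ``products`` container. The partition key path `/category` is an assumption based on the description above:
-
-```csharp
-// Create the container if it doesn't exist, partitioned on the category field.
-Container container = await database.CreateContainerIfNotExistsAsync("products", "/category");
-Console.WriteLine($"New container: {container.Id}");
-```
-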
-For more information on creating a container, see [Create a container in Azure Cosmos DB SQL API using .NET](how-to-dotnet-create-container.md).
-
-### Create an item
-
-The easiest way to create a new item in a container is to first build a C# [class](/dotnet/csharp/language-reference/keywords/class) or [record](/dotnet/csharp/language-reference/builtin-types/record) type with all of the members you want to serialize into JSON. In this example, the C# record has a unique identifier, a *category* field for the partition key, and extra *name*, *quantity*, and *sale* fields.
--
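-For example, a minimal sketch of such a record type (the property names are illustrative):
-
-```csharp
-// Record type serialized to JSON when stored as an item; lowercase names map directly to JSON fields.
-// Declared after the top-level statements (or in a separate file).
-public record Product(
-    string id,
-    string category,
-    string name,
-    int quantity,
-    bool sale
-);
-```
-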
-Create an item in the container by calling [``Container.UpsertItemAsync``](/dotnet/api/microsoft.azure.cosmos.container.upsertitemasync). In this example, we chose to *upsert* instead of *create* a new item in case you run this sample code more than once.
--
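-Continuing the sketch, an upsert might look like this (the item values are illustrative and match the sample output shown later):
-
-```csharp
-// Create a new item, then upsert it so re-running the sample doesn't fail on a conflict.
-Product newItem = new(
-    id: "68719518391",
-    category: "gear-surf-surfboards",
-    name: "Yamba Surfboard",
-    quantity: 12,
-    sale: false
-);
-
-ItemResponse<Product> upsertResponse = await container.UpsertItemAsync<Product>(
-    newItem,
-    new PartitionKey("gear-surf-surfboards"));
-Product createdItem = upsertResponse.Resource;
-Console.WriteLine($"Created item: {createdItem.id} [{createdItem.category}]");
-```
-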
-For more information on creating, upserting, or replacing items, see [Create an item in Azure Cosmos DB SQL API using .NET](how-to-dotnet-create-item.md).
-
-### Get an item
-
-In Azure Cosmos DB, you can perform a point read operation by using both the unique identifier (``id``) and partition key fields. In the SDK, call [``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) passing in both values to return a deserialized instance of your C# type.
--
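-For example, a point read of the item created above:
-
-```csharp
-// Point read: the most efficient way to fetch a single item, using its id and partition key value.
-ItemResponse<Product> readResponse = await container.ReadItemAsync<Product>(
-    "68719518391",
-    new PartitionKey("gear-surf-surfboards"));
-Product readItem = readResponse.Resource;
-```
-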
-For more information about reading items and parsing the response, see [Read an item in Azure Cosmos DB SQL API using .NET](how-to-dotnet-read-item.md).
-
-### Query items
-
-After you insert an item, you can run a query to get all items that match a specific filter. This example runs the SQL query ``SELECT * FROM products p WHERE p.category = 'gear-surf-surfboards'``, using the **QueryDefinition** type and a parameterized query expression for the partition key filter. Once the query is defined, call [``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) to get a result iterator that manages the pages of results. Then, use a combination of ``while`` and ``foreach`` loops to retrieve pages of results and iterate over the individual items.
--
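-For example, a sketch of the parameterized query and iteration described above:
-
-```csharp
-// Parameterized query that filters on the partition key value.
-QueryDefinition query = new QueryDefinition(
-    "SELECT * FROM products p WHERE p.category = @category")
-    .WithParameter("@category", "gear-surf-surfboards");
-
-// Iterate over the pages of results, then over the items within each page.
-using FeedIterator<Product> feed = container.GetItemQueryIterator<Product>(query);
-while (feed.HasMoreResults)
-{
-    FeedResponse<Product> page = await feed.ReadNextAsync();
-    foreach (Product item in page)
-    {
-        Console.WriteLine($"Found item: {item.name}");
-    }
-}
-```
-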
-## Run the code
-
-This app creates an Azure Cosmos DB SQL API database and container. The example then creates an item and reads that same item back. Finally, the example issues a query that should return only that single item. At each step, the example writes metadata to the console about the operations it performed.
-
-To run the app, use a terminal to navigate to the application directory and run the application.
-
-```dotnetcli
-dotnet run
-```
-
-The output of the app should be similar to this example:
-
-```output
-New database: adventureworks
-New container: products
-Created item: 68719518391 [gear-surf-surfboards]
-```
-
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB SQL API account, create a database, and create a container using the .NET SDK. You can now dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB SQL API resources.
-
-> [!div class="nextstepaction"]
-> [Get started with Azure Cosmos DB SQL API and .NET](how-to-dotnet-get-started.md)
cosmos-db Read Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/read-change-feed.md
- Title: Reading Azure Cosmos DB change feed
-description: This article describes different options available to read and access change feed in Azure Cosmos DB.
------ Previously updated : 06/30/2021--
-# Reading Azure Cosmos DB change feed
-
-You can work with the Azure Cosmos DB change feed using either a push model or a pull model. With a push model, the change feed processor pushes work to a client that has business logic for processing this work. However, the complexity in checking for work and storing state for the last processed work is handled within the change feed processor.
-
-With a pull model, the client has to pull the work from the server. The client, in this case, not only has business logic for processing work but also storing state for the last processed work, handling load balancing across multiple clients processing work in parallel, and handling errors.
-
-When reading from the Azure Cosmos DB change feed, we usually recommend using a push model because you won't need to worry about:
-
-- Polling the change feed for future changes.
-- Storing state for the last processed change. When reading from the change feed, this is automatically stored in a [lease container](change-feed-processor.md#components-of-the-change-feed-processor).
-- Load balancing across multiple clients consuming changes. For example, if one client can't keep up with processing changes and another has available capacity.
-- [Handling errors](change-feed-processor.md#error-handling). For example, automatically retrying failed changes that weren't correctly processed after an unhandled exception in code or a transient network issue.
-
-The majority of scenarios that use the Azure Cosmos DB change feed will use one of the push model options. However, there are some scenarios where you might want the additional low-level control of the pull model. These include:
-
-- Reading changes from a particular partition key
-- Controlling the pace at which your client receives changes for processing
-- Doing a one-time read of the existing data in the change feed (for example, to do a data migration)
-
-## Reading change feed with a push model
-
-Using a push model is the easiest way to read from the change feed. There are two ways you can read from the change feed with a push model: [Azure Functions Cosmos DB triggers](change-feed-functions.md) and the [change feed processor library](change-feed-processor.md). Azure Functions uses the change feed processor behind the scenes, so these are both very similar ways to read the change feed. Think of Azure Functions as simply a hosting platform for the change feed processor, not an entirely different way of reading the change feed.
-
-### Azure Functions
-
-Azure Functions is the simplest option if you are just getting started using the change feed. Due to its simplicity, it is also the recommended option for most change feed use cases. When you create an Azure Functions trigger for Azure Cosmos DB, you select the container to connect, and the Azure Function gets triggered whenever there is a change in the container. Because Azure Functions uses the change feed processor behind the scenes, it automatically parallelizes change processing across your container's [partitions](../partitioning-overview.md).
-
-Developing with Azure Functions is an easy experience and can be faster than deploying the change feed processor on your own. Triggers can be created using the Azure Functions portal or programmatically using SDKs. Visual Studio and VS Code provide support to write Azure Functions, and you can even use the Azure Functions CLI for cross-platform development. You can write and debug the code on your desktop, and then deploy the function with one click. See [Serverless database computing using Azure Functions](serverless-computing-database.md) and [Using change feed with Azure Functions](change-feed-functions.md) articles to learn more.
-
-### Change feed processor library
-
-The change feed processor gives you more control of the change feed and still hides most complexity. The change feed processor library follows the observer pattern, where your processing function is called by the library. The change feed processor library will automatically check for changes and, if changes are found, "push" these to the client. If you have a high throughput change feed, you can instantiate multiple clients to read the change feed. The change feed processor library will automatically divide the load among the different clients. You won't have to implement any logic for load balancing across multiple clients or any logic to maintain the lease state.
-
-The change feed processor library guarantees an "at-least-once" delivery of all of the changes. In other words, if you use the change feed processor library, your processing function will be called successfully for every item in the change feed. If there is an unhandled exception in the business logic in your processing function, the failed changes will be retried until they are processed successfully. To prevent your change feed processor from getting "stuck" continuously retrying the same changes, add logic in your processing function to write documents, upon exception, to a dead-letter queue. Learn more about [error handling](change-feed-processor.md#error-handling).
-
-In Azure Functions, the recommendation for handling errors is the same. You should still add logic in your delegate code to write documents, upon exception, to a dead-letter queue. However, if there is an unhandled exception in your Azure Function, the change that generated the exception won't be automatically retried. If there is an unhandled exception in the business logic, the Azure Function will move on to processing the next change. The Azure Function won't retry the same failed change.
-
-Like Azure Functions, developing with the change feed processor library is easy. However, you are responsible for deploying one or more hosts for the change feed processor. A host is an application instance that uses the change feed processor to listen for changes. While Azure Functions has capabilities for automatic scaling, you are responsible for scaling your hosts. To learn more, see [using the change feed processor](change-feed-processor.md#dynamic-scaling). The change feed processor library is part of the [Azure Cosmos DB SDK V3](https://github.com/Azure/azure-cosmos-dotnet-v3).
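-As an illustration only (not a full hosting setup), here's a minimal sketch of wiring up the change feed processor with the .NET SDK v3; the account endpoint, key, database, container, and processor names are placeholders:
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.Threading;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>");
-
-// The monitored container holds your data; the lease container stores the processor's state.
-Container monitoredContainer = client.GetContainer("adventureworks", "products");
-Container leaseContainer = client.GetContainer("adventureworks", "leases");
-
-// Build a processor that calls the delegate for every batch of detected changes.
-ChangeFeedProcessor processor = monitoredContainer
-    .GetChangeFeedProcessorBuilder<dynamic>("productsProcessor",
-        async (IReadOnlyCollection<dynamic> changes, CancellationToken cancellationToken) =>
-        {
-            foreach (var change in changes)
-            {
-                Console.WriteLine($"Detected change: {change}");
-            }
-            await Task.CompletedTask;
-        })
-    .WithInstanceName("consoleHost1")
-    .WithLeaseContainer(leaseContainer)
-    .Build();
-
-await processor.StartAsync();
-Console.WriteLine("Change feed processor started. Press any key to stop.");
-Console.ReadKey();
-await processor.StopAsync();
-```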
-
-## Reading change feed with a pull model
-
-The [change feed pull model](change-feed-pull-model.md) allows you to consume the change feed at your own pace. Changes must be requested by the client and there is no automatic polling for changes. If you want to permanently "bookmark" the last processed change (similar to the push model's lease container), you'll need to [save a continuation token](change-feed-pull-model.md#saving-continuation-tokens).
-
-Using the change feed pull model, you get more low-level control of the change feed. When reading the change feed with the pull model, you have three options:
-
-- Read changes for an entire container
-- Read changes for a specific [FeedRange](change-feed-pull-model.md#using-feedrange-for-parallelization)
-- Read changes for a specific partition key value
-
-You can parallelize the processing of changes across multiple clients, just as you can with the change feed processor. However, the pull model does not automatically handle load-balancing across clients. When you use the pull model to parallelize processing of the change feed, you'll first obtain a list of FeedRanges. A FeedRange spans a range of partition key values. You'll need to have an orchestrator process that obtains FeedRanges and distributes them among your machines. You can then use these FeedRanges to have multiple machines read the change feed in parallel.
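-For illustration, a minimal sketch of reading the whole container's change feed with the pull model, assuming .NET SDK 3.25 or later and an existing `Container` named `container`:
-
-```csharp
-// Start reading the change feed from the beginning of the container.
-FeedIterator<dynamic> iterator = container.GetChangeFeedIterator<dynamic>(
-    ChangeFeedStartFrom.Beginning(),
-    ChangeFeedMode.Incremental);
-
-while (iterator.HasMoreResults)
-{
-    FeedResponse<dynamic> response = await iterator.ReadNextAsync();
-
-    if (response.StatusCode == System.Net.HttpStatusCode.NotModified)
-    {
-        // No changes right now; save response.ContinuationToken and poll again later.
-        break;
-    }
-
-    foreach (var item in response)
-    {
-        // Process each changed item here.
-    }
-}
-```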
-
-There is no built-in "at-least-once" delivery guarantee with the pull model. The pull model gives you low-level control to decide how you would like to handle errors.
-
-## Change feed in APIs for Cassandra and MongoDB
-
-Change feed functionality is surfaced as change streams in the API for MongoDB and as query with predicate in the Cassandra API. To learn more about the implementation details for the API for MongoDB, see [Change streams in the Azure Cosmos DB API for MongoDB](../mongodb/change-streams.md).
-
-Native Apache Cassandra provides change data capture (CDC), a mechanism to flag specific tables for archival as well as rejecting writes to those tables once a configurable size-on-disk for the CDC log is reached. The change feed feature in Azure Cosmos DB API for Cassandra enhances the ability to query the changes with predicate via CQL. To learn more about the implementation details, see [Change feed in the Azure Cosmos DB API for Cassandra](../cassandr).
-
-## Next steps
-
-You can now continue to learn more about change feed in the following articles:
-
-* [Overview of change feed](../change-feed.md)
-* [Using change feed with Azure Functions](change-feed-functions.md)
-* [Using change feed processor library](change-feed-processor.md)
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/samples-dotnet.md
- Title: Examples for Azure Cosmos DB SQL API SDK for .NET
-description: Find .NET SDK examples on GitHub for common tasks using the Azure Cosmos DB SQL API.
----- Previously updated : 07/06/2022---
-# Examples for Azure Cosmos DB SQL API SDK for .NET
--
-> [!div class="op_single_selector"]
->
-> * [.NET](samples-dotnet.md)
->
-
-The [cosmos-db-sql-api-dotnet-samples](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples) GitHub repository includes multiple sample projects. These projects illustrate how to perform common operations on Azure Cosmos DB SQL API resources.
-
-## Prerequisites
-
-* An Azure account with an active subscription. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
-* Azure Cosmos DB SQL API account. [Create a SQL API account](how-to-create-account.md).
-* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
-
-## Samples
-
-The sample projects are all self-contained and are designed to be run individually without any dependencies between projects.
-
-### Client
-
-| Task | API reference |
-| :--- | :--- |
-| [Create a client with endpoint and key](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/101-client-endpoint-key/Program.cs#L11-L14) |[``CosmosClient(string, string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
-| [Create a client with connection string](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/102-client-connection-string/Program.cs#L11-L13) |[``CosmosClient(string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
-| [Create a client with ``DefaultAzureCredential``](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/103-client-default-credential/Program.cs#L20-L23) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
-| [Create a client with custom ``TokenCredential``](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/104-client-secret-credential/Program.cs#L25-L28) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
-
-### Databases
-
-| Task | API reference |
-| :--- | :--- |
-| [Create a database](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/200-create-database/Program.cs#L19-L21) |[``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) |
-
-### Containers
-
-| Task | API reference |
-| :--- | :--- |
-| [Create a container](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/225-create-container/Program.cs#L26-L30) |[``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) |
-
-### Items
-
-| Task | API reference |
-| :--- | :--- |
-| [Create an item](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/250-create-item/Program.cs#L35-L46) |[``Container.CreateItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync) |
-| [Point read an item](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/275-read-item/Program.cs#L51-L54) |[``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) |
-| [Query multiple items](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/300-query-items/Program.cs#L64-L80) |[``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) |
-
-## Next steps
-
-Dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB SQL API resources.
-
-> [!div class="nextstepaction"]
-> [Get started with Azure Cosmos DB SQL API and .NET](how-to-dotnet-get-started.md)
cosmos-db Scale On Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/scale-on-schedule.md
- Title: Scale Azure Cosmos DB on a schedule by using Azure Functions timer
-description: Learn how to scale changes in throughput in Azure Cosmos DB using PowerShell and Azure Functions.
---- Previously updated : 01/13/2020----
-# Scale Azure Cosmos DB throughput by using Azure Functions Timer trigger
-
-The performance of an Azure Cosmos account is based on the amount of provisioned throughput expressed in Request Units per second (RU/s). The provisioning is at a second granularity and is billed based upon the highest RU/s per hour. This provisioned capacity model enables the service to provide predictable and consistent throughput, guaranteed low latency, and high availability. Most production workloads require these features. However, in development and testing environments where Azure Cosmos DB is only used during working hours, you can scale up the throughput in the morning and scale back down in the evening after working hours.
-
-You can set the throughput via [Azure Resource Manager Templates](./templates-samples-sql.md), [Azure CLI](cli-samples.md), and [PowerShell](powershell-samples.md) for Core (SQL) API accounts, or by using the language-specific Azure Cosmos DB SDKs. The benefit of using Resource Manager templates, Azure CLI, or PowerShell is that they support all Azure Cosmos DB APIs.
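-The sample project described below performs this operation with PowerShell inside Azure Functions. Purely as an illustration of the underlying operation, here's a minimal sketch of reading and replacing manual throughput with the .NET SDK (the endpoint, key, and resource names are placeholders):
-
-```csharp
-using System;
-using Microsoft.Azure.Cosmos;
-
-CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>");
-Container container = client.GetContainer("adventureworks", "products");
-
-// Read the current provisioned throughput, then scale it to a new value.
-int? currentThroughput = await container.ReadThroughputAsync();
-Console.WriteLine($"Current throughput: {currentThroughput} RU/s");
-
-await container.ReplaceThroughputAsync(ThroughputProperties.CreateManualThroughput(1000));
-```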
-
-## Throughput scheduler sample project
-
-To simplify the process of scaling Azure Cosmos DB on a schedule, we've created a sample project called [Azure Cosmos throughput scheduler](https://github.com/Azure-Samples/azure-cosmos-throughput-scheduler). This project is an Azure Functions app with two timer triggers: "ScaleUpTrigger" and "ScaleDownTrigger". The triggers run a PowerShell script that sets the throughput on each resource as defined in the `resources.json` file in each trigger. The ScaleUpTrigger is configured to run at 8 AM UTC and the ScaleDownTrigger is configured to run at 6 PM UTC. You can easily update these times in the `function.json` file for each trigger.
-
-You can clone this project locally and modify it to specify the Azure Cosmos DB resources to scale up and down and the schedule to run on. You can then deploy it in an Azure subscription and secure it by using managed service identity with [Azure role-based access control (Azure RBAC)](../role-based-access-control.md) permissions, using the "Azure Cosmos DB operator" role to set throughput on your Azure Cosmos accounts.
-
-## Next Steps
--- Learn more and download the sample from [Azure Cosmos DB throughput scheduler](https://github.com/Azure-Samples/azure-cosmos-throughput-scheduler).
cosmos-db Serverless Computing Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/serverless-computing-database.md
- Title: Serverless database computing with Azure Cosmos DB and Azure Functions
-description: Learn how Azure Cosmos DB and Azure Functions can be used together to create event-driven serverless computing apps.
------ Previously updated : 05/02/2020---
-# Serverless database computing using Azure Cosmos DB and Azure Functions
-
-Serverless computing is all about the ability to focus on individual pieces of logic that are repeatable and stateless. These pieces require no infrastructure management and they consume resources only for the seconds, or milliseconds, they run for. At the core of the serverless computing movement are functions, which are made available in the Azure ecosystem by [Azure Functions](https://azure.microsoft.com/services/functions). To learn about other serverless execution environments in Azure, see the [serverless in Azure](https://azure.microsoft.com/solutions/serverless/) page.
-
-With the native integration between [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db) and Azure Functions, you can create database triggers, input bindings, and output bindings directly from your Azure Cosmos DB account. Using Azure Functions and Azure Cosmos DB, you can create and deploy event-driven serverless apps with low-latency access to rich data for a global user base.
-
-## Overview
-
-Azure Cosmos DB and Azure Functions enable you to integrate your databases and serverless apps in the following ways:
-
-* Create an event-driven **Azure Functions trigger for Cosmos DB**. This trigger relies on [change feed](../change-feed.md) streams to monitor your Azure Cosmos container for changes. When any changes are made to a container, the change feed stream is sent to the trigger, which invokes the Azure Function.
-* Alternatively, bind an Azure Function to an Azure Cosmos container using an **input binding**. Input bindings read data from a container when a function executes.
-* Bind a function to an Azure Cosmos container using an **output binding**. Output bindings write data to a container when a function completes.
-
-> [!NOTE]
-> Currently, Azure Functions trigger, input bindings, and output bindings for Cosmos DB are supported for use with the SQL API only. For all other Azure Cosmos DB APIs, you should access the database from your function by using the static client for your API.
--
-The following diagram illustrates each of these three integrations:
--
-The Azure Functions trigger, input binding, and output binding for Azure Cosmos DB can be used in the following combinations:
-
-* An Azure Functions trigger for Cosmos DB can be used with an output binding to a different Azure Cosmos container. After a function performs an action on an item in the change feed, you can write it to another container (writing it to the same container it came from would effectively create a recursive loop). Or, you can use an Azure Functions trigger for Cosmos DB to effectively migrate all changed items from one container to a different container, with the use of an output binding.
-* Input bindings and output bindings for Azure Cosmos DB can be used in the same Azure Function. This works well in cases when you want to find certain data with the input binding, modify it in the Azure Function, and then save it to the same container or a different container, after the modification.
-* An input binding to an Azure Cosmos container can be used in the same function as an Azure Functions trigger for Cosmos DB, and can be used with or without an output binding as well. You could use this combination to apply up-to-date currency exchange information (pulled in with an input binding to an exchange container) to the change feed of new orders in your shopping cart service. The updated shopping cart total, with the current currency conversion applied, can be written to a third container using an output binding.
-
-## Use cases
-
-The following use cases demonstrate a few ways you can make the most of your Azure Cosmos DB data - by connecting your data to event-driven Azure Functions.
-
-### IoT use case - Azure Functions trigger and output binding for Cosmos DB
-
-In IoT implementations, you can invoke a function when the check engine light is displayed in a connected car.
-
-**Implementation:** Use an Azure Functions trigger and output binding for Cosmos DB
-
-1. An **Azure Functions trigger for Cosmos DB** is used to trigger events related to car alerts, such as the check engine light coming on in a connected car.
-2. When the check engine light comes on, the sensor data is sent to Azure Cosmos DB.
-3. Azure Cosmos DB creates or updates sensor data documents, and those changes are streamed to the Azure Functions trigger for Cosmos DB.
-4. The trigger is invoked on every data-change to the sensor data collection, as all changes are streamed via the change feed.
-5. A threshold condition is used in the function to send the sensor data to the warranty department.
-6. If the temperature is also over a certain value, an alert is also sent to the owner.
-7. The **output binding** on the function updates the car record in another Azure Cosmos container to store information about the check engine event.
-
-The following image shows the code written in the Azure portal for this trigger.
--
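-As a rough C# sketch of what such a trigger with an output binding might look like (using the Azure Functions Cosmos DB extension version 3.x in-process binding attributes; the database, container, and connection setting names are hypothetical):
-
-```csharp
-using System.Collections.Generic;
-using Microsoft.Azure.Documents;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Extensions.Logging;
-
-public static class CheckEngineAlert
-{
-    [FunctionName("CheckEngineAlert")]
-    public static void Run(
-        [CosmosDBTrigger(
-            databaseName: "fleet",
-            collectionName: "sensorData",
-            ConnectionStringSetting = "CosmosDBConnection",
-            LeaseCollectionName = "leases",
-            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changedDocuments,
-        [CosmosDB(
-            databaseName: "fleet",
-            collectionName: "carRecords",
-            ConnectionStringSetting = "CosmosDBConnection")] out dynamic carEvent,
-        ILogger log)
-    {
-        carEvent = null;
-
-        if (changedDocuments != null && changedDocuments.Count > 0)
-        {
-            // Apply the threshold condition, then write a check-engine event through the output binding.
-            Document latest = changedDocuments[0];
-            carEvent = new { id = latest.Id, eventType = "check-engine" };
-            log.LogInformation($"Processed {changedDocuments.Count} sensor document(s).");
-        }
-    }
-}
-```
-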
-### Financial use case - Timer trigger and input binding
-
-In financial implementations, you can invoke a function when a bank account balance falls under a certain amount.
-
-**Implementation:** A timer trigger with an Azure Cosmos DB input binding
-
-1. Using a [timer trigger](../../azure-functions/functions-bindings-timer.md), you can retrieve the bank account balance information stored in an Azure Cosmos container at timed intervals using an **input binding**.
-2. If the balance is below the low balance threshold set by the user, then follow up with an action from the Azure Function.
-3. The output binding can be a [SendGrid integration](../../azure-functions/functions-bindings-sendgrid.md) that sends an email from a service account to the email addresses identified for each of the low balance accounts.
-
-The following images show the code in the Azure portal for this scenario.
---
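-Similarly, a rough sketch of a timer trigger paired with a Cosmos DB input binding (version 3.x attributes; the schedule, query, and names are hypothetical):
-
-```csharp
-using System.Collections.Generic;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Extensions.Logging;
-
-public static class LowBalanceCheck
-{
-    [FunctionName("LowBalanceCheck")]
-    public static void Run(
-        [TimerTrigger("0 0 * * * *")] TimerInfo timer,
-        [CosmosDB(
-            databaseName: "banking",
-            collectionName: "accounts",
-            ConnectionStringSetting = "CosmosDBConnection",
-            SqlQuery = "SELECT * FROM c WHERE c.balance < 100")] IEnumerable<dynamic> lowBalanceAccounts,
-        ILogger log)
-    {
-        foreach (var account in lowBalanceAccounts)
-        {
-            // Follow up on each low-balance account, for example by queuing a notification email.
-            string accountId = account.id;
-            log.LogInformation($"Low balance detected for account {accountId}");
-        }
-    }
-}
-```
-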
-### Gaming use case - Azure Functions trigger and output binding for Cosmos DB
-
-In gaming, when a new user is created you can search for other users who might know them by using the [Azure Cosmos DB Gremlin API](../graph-introduction.md). You can then write the results to an Azure Cosmos DB or SQL database for easy retrieval.
-
-**Implementation:** Use an Azure Functions trigger and output binding for Cosmos DB
-
-1. Using an Azure Cosmos DB [graph database](../graph-introduction.md) to store all users, you can create a new function with an Azure Functions trigger for Cosmos DB.
-2. Whenever a new user is inserted, the function is invoked, and then the result is stored using an **output binding**.
-3. The function queries the graph database to search for all the users that are directly related to the new user and returns that dataset to the function.
-4. This data is then stored in Azure Cosmos DB, which can then be easily retrieved by any front-end application that shows the new user their connected friends.
-
-### Retail use case - Multiple functions
-
-In retail implementations, when a user adds an item to their basket you now have the flexibility to create and invoke functions for optional business pipeline components.
-
-**Implementation:** Multiple Azure Functions triggers for Cosmos DB listening to one container
-
-1. You can create multiple Azure Functions by adding Azure Functions triggers for Cosmos DB to each - all of which listen to the same change feed of shopping cart data. When multiple functions listen to the same change feed, a new lease collection is required for each function. For more information about lease collections, see [Understanding the Change Feed Processor library](change-feed-processor.md).
-2. Whenever a new item is added to a user's shopping cart, each function is independently invoked by the change feed from the shopping cart container.
- * One function may use the contents of the current basket to change the display of other items the user might be interested in.
- * Another function may update inventory totals.
- * Another function may send customer information for certain products to the marketing department, who sends them a promotional mailer.
-
- Any department can create an Azure Functions trigger for Cosmos DB that listens to the change feed, and be sure it won't delay critical order-processing events in the process.
-
-In all of these use cases, because the functions are decoupled from the app itself, you don't need to spin up new app instances all the time. Instead, Azure Functions spins up individual functions to complete discrete processes as needed.
-
-## Tooling
-
-Native integration between Azure Cosmos DB and Azure Functions is available in the Azure portal and in Visual Studio.
-
-* In the Azure Functions portal, you can create a trigger. For quickstart instructions, see [Create an Azure Functions trigger for Cosmos DB in the Azure portal](../../azure-functions/functions-create-cosmos-db-triggered-function.md).
-* In the Azure Cosmos DB portal, you can add an Azure Functions trigger for Cosmos DB to an existing Azure Function app in the same resource group.
-* In Visual Studio, you can create the trigger using the [Azure Functions Tools](../../azure-functions/functions-develop-vs.md):
-
- >
- >[!VIDEO https://aka.ms/docs.change-feed-azure-functions]
-
-## Why choose Azure Functions integration for serverless computing?
-
-Azure Functions provides the ability to create scalable units of work, or concise pieces of logic that can be run on demand, without provisioning or managing infrastructure. By using Azure Functions, you don't have to create a full-blown app to respond to changes in your Azure Cosmos database; you can create small, reusable functions for specific tasks. You can also use Azure Cosmos DB data as the input or output of an Azure Function in response to events such as an HTTP request or a timer trigger.
-
-Azure Cosmos DB is the recommended database for your serverless computing architecture for the following reasons:
-
-* **Instant access to all your data**: You have granular access to every value stored because Azure Cosmos DB [automatically indexes](../index-policy.md) all data by default, and makes those indexes immediately available. This means you're able to constantly query, update, and add new items to your database and have instant access via Azure Functions.
-
-* **Schemaless**. Azure Cosmos DB is schemaless - so it's uniquely able to handle any data output from an Azure Function. This "handle anything" approach makes it straightforward to create various Functions that all output to Azure Cosmos DB.
-
-* **Scalable throughput**. Throughput can be scaled up and down instantly in Azure Cosmos DB. If you have hundreds or thousands of Functions querying and writing to the same container, you can scale up your [RU/s](../request-units.md) to handle the load. All functions can work in parallel using your allocated RU/s and your data is guaranteed to be [consistent](../consistency-levels.md).
-
-* **Global replication**. You can replicate Azure Cosmos DB data [around the globe](../distribute-data-globally.md) to reduce latency, geo-locating your data closest to where your users are. As with all Azure Cosmos DB queries, data from event-driven triggers is read from the Azure Cosmos DB region closest to the user.
-
-If you're looking to integrate with Azure Functions to store data and don't need deep indexing or if you need to store attachments and media files, the [Azure Blob Storage trigger](../../azure-functions/functions-bindings-storage-blob.md) may be a better option.
-
-Benefits of Azure Functions:
-
-* **Event-driven**. Azure Functions is event-driven and can listen to a change feed from Azure Cosmos DB. This means you don't need to create listening logic; you just watch for the changes you care about.
-
-* **No limits**. Functions execute in parallel and the service spins up as many as you need. You set the parameters.
-
-* **Good for quick tasks**. The service spins up new instances of functions whenever an event fires and closes them as soon as the function completes. You only pay for the time your functions are running.
-
-If you're not sure whether Flow, Logic Apps, Azure Functions, or WebJobs are best for your implementation, see [Choose between Flow, Logic Apps, Functions, and WebJobs](../../azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md).
-
-## Next steps
-
-Now let's connect Azure Cosmos DB and Azure Functions for real:
-
-* [Create an Azure Functions trigger for Cosmos DB in the Azure portal](../../azure-functions/functions-create-cosmos-db-triggered-function.md)
-* [Create an Azure Functions HTTP trigger with an Azure Cosmos DB input binding](../../azure-functions/functions-bindings-cosmosdb.md?tabs=csharp)
-* [Azure Cosmos DB bindings and triggers](../../azure-functions/functions-bindings-cosmosdb-v2.md)
cosmos-db Session State And Caching Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/session-state-and-caching-provider.md
- Title: Use Azure Cosmos DB as an ASP.NET session state and caching provider
-description: Learn how to use Azure Cosmos DB as an ASP.NET session state and caching provider
---- Previously updated : 07/06/2022--
-# Use Azure Cosmos DB as an ASP.NET session state and caching provider
-
-The Azure Cosmos DB session and cache provider allows you to use Azure Cosmos DB and leverage its low latency and global scale capabilities for storing session state data and as a distributed cache within your application.
-
-## What is session state?
-
-[Session state](/aspnet/core/fundamentals/app-state?view=aspnetcore-5.0#configure-session-state&preserve-view=true) is user data that tracks a user browsing through a web application during a period of time, within the same browser. The session state expires, and it's limited to the interactions of a particular browser; it doesn't extend across browsers. It's considered ephemeral data: if it's not present, it won't break the application. However, when it exists, it makes the experience faster for the user because the web application doesn't need to fetch it on every browser request for the same user.
-
-Session state is often backed by a storage mechanism that can, in some cases, be external to the current web server, enabling load balancing of requests from the same browser across multiple web servers to achieve higher scalability.
-
-The simplest session state provider is the in-memory provider that only stores data in the local web server's memory and requires the application to use [Application Request Routing](/iis/extensions/planning-for-arr/using-the-application-request-routing-module). This makes the browser session sticky to a particular web server (all requests for that browser need to always land on the same web server). The provider works well in simple scenarios, but the stickiness requirement can cause load-balancing problems when web applications scale.
-
-There are many external storage providers available that can store the session data in a way that can be read and accessed by multiple web servers without requiring session stickiness, enabling higher scale.
-
-## Session state scenarios
-
-Azure Cosmos DB can be used as a session state provider through the extension package [Microsoft.Extensions.Caching.Cosmos](https://www.nuget.org/packages/Microsoft.Extensions.Caching.Cosmos), which uses the [Azure Cosmos DB .NET SDK](sql-api-sdk-dotnet-standard.md) and a container as effective session storage, based on a key/value approach where the key is the session identifier.
-
-Once the package is added, you can use `AddCosmosCache` as part of your Startup process (services.AddSession and app.UseSession are [common initialization](/aspnet/core/fundamentals/app-state?view=aspnetcore-5.0#configure-session-state&preserve-view=true) steps required for any session state provider):
-
-```csharp
-public void ConfigureServices(IServiceCollection services)
-{
- /* Other service configurations */
- services.AddCosmosCache((CosmosCacheOptions cacheOptions) =>
- {
- CosmosClientBuilder clientBuilder = new CosmosClientBuilder("myConnectionString")
- .WithApplicationRegion("West US");
- cacheOptions.ContainerName = "myContainer";
- cacheOptions.DatabaseName = "myDatabase";
- cacheOptions.ClientBuilder = clientBuilder;
- /* Creates the container if it does not exist */
- cacheOptions.CreateIfNotExists = true;
- });
-
- services.AddSession(options =>
- {
- options.IdleTimeout = TimeSpan.FromSeconds(3600);
- options.Cookie.IsEssential = true;
- });
- /* Other service configurations */
-}
-
-public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
-{
- /* Other configurations */
-
- app.UseSession();
-
- /* app.UseEndpoints and other configurations */
-}
-```
-
-Here you specify the database and container where you want the session state to be stored and, optionally, create them if they don't exist by using the `CreateIfNotExists` attribute.
-
-> [!IMPORTANT]
-> If you provide an existing container instead of using `CreateIfNotExists`, make sure it has [time to live enabled](how-to-time-to-live.md).
-
-You can customize your SDK client configuration by using the `CosmosClientBuilder` or if your application is already using a `CosmosClient` for other operations with Cosmos DB, you can also inject it into the provider:
-
-```csharp
-services.AddCosmosCache((CosmosCacheOptions cacheOptions) =>
-{
- cacheOptions.ContainerName = "myContainer";
- cacheOptions.DatabaseName = "myDatabase";
- cacheOptions.CosmosClient = preExistingClient;
- /* Creates the container if it does not exist */
- cacheOptions.CreateIfNotExists = true;
-});
-```
-
-After this, you can use ASP.NET Core sessions as you would with any other provider, through the HttpContext.Session object. Keep in mind to always load your session information asynchronously, per the [ASP.NET recommendations](/aspnet/core/fundamentals/app-state?view=aspnetcore-5.0#load-session-state-asynchronously&preserve-view=true).
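-For illustration, a minimal sketch of reading and writing session values from a controller action (the key name is arbitrary):
-
-```csharp
-using System.Threading.Tasks;
-using Microsoft.AspNetCore.Http;
-using Microsoft.AspNetCore.Mvc;
-
-public class HomeController : Controller
-{
-    public async Task<IActionResult> Index()
-    {
-        // Load the session asynchronously before reading or writing values.
-        await HttpContext.Session.LoadAsync();
-
-        int visits = HttpContext.Session.GetInt32("visitCount") ?? 0;
-        HttpContext.Session.SetInt32("visitCount", visits + 1);
-
-        return View();
-    }
-}
-```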
-
-## Distributed cache scenarios
-
-Given that the Cosmos DB provider implements the [IDistributedCache interface to act as a distributed cache provider](/aspnet/core/performance/caching/distributed?view=aspnetcore-5.0&preserve-view=true), it can also be used for any application that requires distributed cache, not just for web applications that require a performant and distributed session state provider.
-
-Distributed caches require data consistency so that independent instances can share that cached data. When using the Cosmos DB provider, you can:
-
-- Use your Cosmos DB account in **Session consistency** if you can enable [Application Request Routing](/iis/extensions/planning-for-arr/using-the-application-request-routing-module) and make requests sticky to a particular instance.
-- Use your Cosmos DB account in **Bounded Staleness or Strong consistency** without requiring request stickiness. This provides the greatest scale in terms of load distribution across your instances.
-
-To use the Cosmos DB provider as a distributed cache, it needs to be registered in `ConfigureServices` with the `services.AddCosmosCache` call. Once that is done, any constructor in the application can ask for the cache by referencing `IDistributedCache`, and it receives the instance injected by [dependency injection](/dotnet/core/extensions/dependency-injection):
-
-```csharp
-public class MyBusinessClass
-{
- private readonly IDistributedCache cache;
-
- public MyBusinessClass(IDistributedCache cache)
- {
- this.cache = cache;
- }
-
- public async Task SomeOperationAsync()
- {
- string someCachedValue = await this.cache.GetStringAsync("someKey");
- /* Use the cache */
- }
-}
-```
-
-## Troubleshooting and diagnosing
-
-Since the Cosmos DB provider uses the .NET SDK underneath, all the existing [performance guidelines](performance-tips-dotnet-sdk-v3-sql.md) and [troubleshooting guides](troubleshoot-dot-net-sdk.md) apply when investigating any potential issue. Note that there's a distinct way to get access to the diagnostics of the underlying Cosmos DB operations, because they can't be exposed through the IDistributedCache APIs.
-
-Registering the optional diagnostics delegate allows you to capture and conditionally log diagnostics to troubleshoot cases like high latency:
-
-```csharp
-void captureDiagnostics(CosmosDiagnostics diagnostics)
-{
- if (diagnostics.GetClientElapsedTime() > SomePredefinedThresholdTime)
- {
- Console.WriteLine(diagnostics.ToString());
- }
-}
-
-services.AddCosmosCache((CosmosCacheOptions cacheOptions) =>
-{
- cacheOptions.DiagnosticsHandler = captureDiagnostics;
- /* other options */
-});
-```
-
-## Next steps
-- To find more details on the Azure Cosmos DB session and cache provider, see the [source code on GitHub](https://github.com/Azure/Microsoft.Extensions.Caching.Cosmos/).
-- [Try out](https://github.com/Azure/Microsoft.Extensions.Caching.Cosmos/tree/master/sample) the Azure Cosmos DB session and cache provider by exploring a sample ASP.NET Core web application.
-- Read more about [distributed caches](/aspnet/core/performance/caching/distributed?view=aspnetcore-5.0&preserve-view=true) in .NET.
cosmos-db Sql Api Dotnet Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-dotnet-application.md
- Title: ASP.NET Core MVC web app tutorial using Azure Cosmos DB
-description: ASP.NET Core MVC tutorial to create an MVC web application using Azure Cosmos DB. You'll store JSON and access data from a todo app hosted on Azure App Service - ASP NET Core MVC tutorial step by step.
----- Previously updated : 05/02/2020---
-# Tutorial: Develop an ASP.NET Core MVC web application with Azure Cosmos DB by using .NET SDK
-
-> [!div class="op_single_selector"]
-> * [.NET](sql-api-dotnet-application.md)
-> * [Java](sql-api-java-application.md)
-> * [Node.js](sql-api-nodejs-application.md)
-> * [Python](./create-sql-api-python.md)
-
-This tutorial shows you how to use Azure Cosmos DB to store and access data from an ASP.NET MVC application that is hosted on Azure. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). In this tutorial, you use the .NET SDK V3. The following image shows the web page that you'll build by using the sample in this article:
--
-If you don't have time to complete the tutorial, you can download the complete sample project from [GitHub][GitHub].
-
-This tutorial covers:
-
-> [!div class="checklist"]
->
-> * Creating an Azure Cosmos account
-> * Creating an ASP.NET Core MVC app
-> * Connecting the app to Azure Cosmos DB
-> * Performing create, read, update, and delete (CRUD) operations on the data
-
-> [!TIP]
-> This tutorial assumes that you have prior experience using ASP.NET Core MVC and Azure App Service. If you are new to ASP.NET Core or the [prerequisite tools](#prerequisites), we recommend that you download the complete sample project from [GitHub][GitHub], add the required NuGet packages, and run it. Once you build the project, you can review this article to gain insight on the code in the context of the project.
-
-## Prerequisites
-
-Before following the instructions in this article, make sure that you have the following resources:
-
-* An active Azure account. If you don't have an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card.
-
- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
-
-* Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
-
-All the screenshots in this article are from Microsoft Visual Studio Community 2019. If you use a different version, your screens and options may not match entirely. The solution should work if you meet the prerequisites.
-
-## Step 1: Create an Azure Cosmos account
-
-Let's start by creating an Azure Cosmos account. If you already have an Azure Cosmos DB SQL API account or if you're using the Azure Cosmos DB Emulator, skip to [Step 2: Create a new ASP.NET MVC application](#step-2-create-a-new-aspnet-core-mvc-application).
---
-In the next section, you create a new ASP.NET Core MVC application.
-
-## Step 2: Create a new ASP.NET Core MVC application
-
-1. Open Visual Studio and select **Create a new project**.
-
-1. In **Create a new project**, find and select **ASP.NET Core Web Application** for C#. Select **Next** to continue.
-
- :::image type="content" source="./media/sql-api-dotnet-application/asp-net-mvc-tutorial-new-project-dialog.png" alt-text="Create new ASP.NET Core web application project":::
-
-1. In **Configure your new project**, name the project *todo* and select **Create**.
-
-1. In **Create a new ASP.NET Core Web Application**, choose **Web Application (Model-View-Controller)**. Select **Create** to continue.
-
- Visual Studio creates an empty MVC application.
-
-1. Select **Debug** > **Start Debugging** or F5 to run your ASP.NET application locally.
-
-## Step 3: Add Azure Cosmos DB NuGet package to the project
-
-Now that we have most of the ASP.NET Core MVC framework code that we need for this solution, let's add the NuGet packages required to connect to Azure Cosmos DB.
-
-1. In **Solution Explorer**, right-click your project and select **Manage NuGet Packages**.
-
-1. In the **NuGet Package Manager**, search for and select **Microsoft.Azure.Cosmos**. Select **Install**.
-
- :::image type="content" source="./media/sql-api-dotnet-application/asp-net-mvc-tutorial-nuget.png" alt-text="Install NuGet package":::
-
- Visual Studio downloads and installs the Azure Cosmos DB package and its dependencies.
-
- You can also use **Package Manager Console** to install the NuGet package. To do so, select **Tools** > **NuGet Package Manager** > **Package Manager Console**. At the prompt, type the following command:
-
- ```ps
- Install-Package Microsoft.Azure.Cosmos
- ```
-
-## Step 4: Set up the ASP.NET Core MVC application
-
-Now let's add the models, the views, and the controllers to this MVC application.
-
-### Add a model
-
-1. In **Solution Explorer**, right-click the **Models** folder, select **Add** > **Class**.
-
-1. In **Add New Item**, name your new class *Item.cs* and select **Add**.
-
-1. Replace the contents of *Item.cs* class with the following code:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Models/Item.cs":::
-
-Azure Cosmos DB uses JSON to move and store data. You can use the `JsonProperty` attribute to control how JSON serializes and deserializes objects. The `Item` class demonstrates the `JsonProperty` attribute. This code controls the format of the property name that goes into JSON. It also renames the .NET property `Completed`.
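-As a rough sketch of the kind of model the referenced sample provides (the exact property names may differ):
-
-```csharp
-using Newtonsoft.Json;
-
-namespace todo.Models
-{
-    public class Item
-    {
-        [JsonProperty(PropertyName = "id")]
-        public string Id { get; set; }
-
-        [JsonProperty(PropertyName = "name")]
-        public string Name { get; set; }
-
-        [JsonProperty(PropertyName = "description")]
-        public string Description { get; set; }
-
-        // Stores the .NET property Completed as "isComplete" in JSON.
-        [JsonProperty(PropertyName = "isComplete")]
-        public bool Completed { get; set; }
-    }
-}
-```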
-
-### Add views
-
-Next, let's add the following views.
-
-* A create item view
-* A delete item view
-* A view to get an item detail
-* An edit item view
-* A view to list all the items
-
-#### Create item view
-
-1. In **Solution Explorer**, right-click the **Views** folder and select **Add** > **New Folder**. Name the folder *Item*.
-
-1. Right-click the empty **Item** folder, then select **Add** > **View**.
-
-1. In **Add MVC View**, make the following changes:
-
- * In **View name**, enter *Create*.
- * In **Template**, select **Create**.
- * In **Model class**, select **Item (todo.Models)**.
- * Select **Use a layout page** and enter *~/Views/Shared/_Layout.cshtml*.
- * Select **Add**.
-
- :::image type="content" source="./media/sql-api-dotnet-application/asp-net-mvc-tutorial-add-mvc-view.png" alt-text="Screenshot showing the Add MVC View dialog box":::
-
-1. Next select **Add** and let Visual Studio create a new template view. Replace the code in the generated file with the following contents:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Create.cshtml":::
-
-#### Delete item view
-
-1. From the **Solution Explorer**, right-click the **Item** folder again, select **Add** > **View**.
-
-1. In **Add MVC View**, make the following changes:
-
- * In the **View name** box, type *Delete*.
- * In the **Template** box, select **Delete**.
- * In the **Model class** box, select **Item (todo.Models)**.
- * Select **Use a layout page** and enter *~/Views/Shared/_Layout.cshtml*.
- * Select **Add**.
-
-1. Next select **Add** and let Visual Studio create a new template view. Replace the code in the generated file with the following contents:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Delete.cshtml":::
-
-#### Add a view to get item details
-
-1. In **Solution Explorer**, right-click the **Item** folder again, select **Add** > **View**.
-
-1. In **Add MVC View**, provide the following values:
-
- * In **View name**, enter *Details*.
- * In **Template**, select **Details**.
- * In **Model class**, select **Item (todo.Models)**.
- * Select **Use a layout page** and enter *~/Views/Shared/_Layout.cshtml*.
-
-1. Next select **Add** and let Visual Studio create a new template view. Replace the code in the generated file with the following contents:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Details.cshtml":::
-
-#### Add an edit item view
-
-1. From the **Solution Explorer**, right-click the **Item** folder again, select **Add** > **View**.
-
-1. In **Add MVC View**, make the following changes:
-
- * In the **View name** box, type *Edit*.
- * In the **Template** box, select **Edit**.
- * In the **Model class** box, select **Item (todo.Models)**.
- * Select **Use a layout page** and enter *~/Views/Shared/_Layout.cshtml*.
- * Select **Add**.
-
-1. Next select **Add** and let Visual Studio create a new template view. Replace the code in the generated file with the following contents:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Edit.cshtml":::
-
-#### Add a view to list all the items
-
-And finally, add a view to get all the items with the following steps:
-
-1. From the **Solution Explorer**, right-click the **Item** folder again, select **Add** > **View**.
-
-1. In **Add MVC View**, make the following changes:
-
- * In the **View name** box, type *Index*.
- * In the **Template** box, select **List**.
- * In the **Model class** box, select **Item (todo.Models)**.
- * Select **Use a layout page** and enter *~/Views/Shared/_Layout.cshtml*.
- * Select **Add**.
-
-1. Next select **Add** and let Visual Studio create a new template view. Replace the code in the generated file with the following contents:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Index.cshtml":::
-
-Once you complete these steps, close all the *cshtml* documents in Visual Studio.
-
-### Declare and initialize services
-
-First, we'll add a class that contains the logic to connect to and use Azure Cosmos DB. For this tutorial, we'll encapsulate this logic into a class called `CosmosDbService` and an interface called `ICosmosDbService`. This service does the CRUD operations. It also does read feed operations such as listing incomplete items, creating, editing, and deleting the items.
-
-1. In **Solution Explorer**, right-click your project and select **Add** > **New Folder**. Name the folder *Services*.
-
-1. Right-click the **Services** folder, select **Add** > **Class**. Name the new class *CosmosDbService* and select **Add**.
-
-1. Replace the contents of *CosmosDbService.cs* with the following code:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Services/CosmosDbService.cs":::
-
-1. Right-click the **Services** folder, select **Add** > **Class**. Name the new class *ICosmosDbService* and select **Add**.
-
-1. Add the following code to *ICosmosDbService* class:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Services/ICosmosDbService.cs":::
-
-1. Open the *Startup.cs* file in your solution and add the following method **InitializeCosmosClientInstanceAsync**, which reads the configuration and initializes the client.
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Startup.cs" id="InitializeCosmosClientInstanceAsync" :::
-
-1. On that same file, replace the `ConfigureServices` method with:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Startup.cs" id="ConfigureServices":::
-
- The code in this step initializes the client based on the configuration as a singleton instance to be injected through [Dependency injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection).
-
- And make sure to change the default MVC Controller to `Item` by editing the routes in the `Configure` method of the same file:
-
- ```csharp
- app.UseEndpoints(endpoints =>
- {
- endpoints.MapControllerRoute(
- name: "default",
- pattern: "{controller=Item}/{action=Index}/{id?}");
- });
- ```
--
-1. Define the configuration in the project's *appsettings.json* file as shown in the following snippet:
-
- :::code language="json" source="~/samples-cosmosdb-dotnet-core-web-app/src/appsettings.json":::
-
-### Add a controller
-
-1. In **Solution Explorer**, right-click the **Controllers** folder, select **Add** > **Controller**.
-
-1. In **Add Scaffold**, select **MVC Controller - Empty** and select **Add**.
-
- :::image type="content" source="./media/sql-api-dotnet-application/asp-net-mvc-tutorial-controller-add-scaffold.png" alt-text="Select MVC Controller - Empty in Add Scaffold":::
-
-1. Name your new controller *ItemController*.
-
-1. Replace the contents of *ItemController.cs* with the following code:
-
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Controllers/ItemController.cs":::
-
-The **ValidateAntiForgeryToken** attribute is used here to help protect this application against cross-site request forgery attacks. Your views should work with this anti-forgery token as well. For more information and examples, see [Preventing Cross-Site Request Forgery (CSRF) Attacks in ASP.NET MVC Application][Preventing Cross-Site Request Forgery]. The source code provided on [GitHub][GitHub] has the full implementation in place.
-
-We also use the **Bind** attribute on the method parameter to help protect against over-posting attacks. For more information, see [Tutorial: Implement CRUD Functionality with the Entity Framework in ASP.NET MVC][Basic CRUD Operations in ASP.NET MVC].
-
-## Step 5: Run the application locally
-
-To test the application on your local computer, use the following steps:
-
-1. Select F5 in Visual Studio to build and run the application in debug mode. Visual Studio builds the application and launches a browser that shows the empty grid page we saw before:
-
- :::image type="content" source="./media/sql-api-dotnet-application/asp-net-mvc-tutorial-create-an-item-a.png" alt-text="Screenshot of the todo list web application created by this tutorial":::
-
- If the application instead opens to the home page, append `/Item` to the URL.
-
-1. Select the **Create New** link and add values to the **Name** and **Description** fields. Leave the **Completed** check box unselected. If you select it, the app adds the new item in a completed state. The item no longer appears on the initial list.
-
-1. Select **Create**. The app sends you back to the **Index** view, and your item appears in the list. You can add a few more items to your **To-Do** list.
-
- :::image type="content" source="./media/sql-api-dotnet-application/asp-net-mvc-tutorial-create-an-item.png" alt-text="Screenshot of the Index view":::
-
-1. Select **Edit** next to an **Item** on the list. The app opens the **Edit** view where you can update any property of your object, including the **Completed** flag. If you select **Completed** and select **Save**, the app displays the **Item** as completed in the list.
-
- :::image type="content" source="./media/sql-api-dotnet-application/asp-net-mvc-tutorial-completed-item.png" alt-text="Screenshot of the Index view with the Completed box checked":::
-
-1. Verify the state of the data in the Azure Cosmos DB service using [Cosmos Explorer](https://cosmos.azure.com) or the Azure Cosmos DB Emulator's Data Explorer.
-
-1. Once you've tested the app, select Shift+F5 to stop debugging it. You're ready to deploy!
-
-## Step 6: Deploy the application
-
-Now that you have the complete application working correctly with Azure Cosmos DB, you're ready to deploy this web app to Azure App Service.
-
-1. To publish this application, right-click the project in **Solution Explorer** and select **Publish**.
-
-1. In **Pick a publish target**, select **App Service**.
-
-1. To use an existing App Service profile, choose **Select Existing**, then select **Publish**.
-
-1. In **App Service**, select a **Subscription**. Use the **View** filter to sort by resource group or resource type.
-
-1. Find your profile, and then select **OK**. Next, search for the required Azure App Service and select **OK**.
-
- :::image type="content" source="./media/sql-api-dotnet-application/asp-net-mvc-tutorial-app-service-2019.png" alt-text="App Service dialog box in Visual Studio":::
-
-Another option is to create a new profile:
-
-1. As in the previous procedure, right-click the project in **Solution Explorer** and select **Publish**.
-
-1. In **Pick a publish target**, select **App Service**.
-
-1. In **Pick a publish target**, select **Create New** and select **Publish**.
-
-1. In **App Service**, enter your Web App name and the appropriate subscription, resource group, and hosting plan, then select **Create**.
-
- :::image type="content" source="./media/sql-api-dotnet-application/asp-net-mvc-tutorial-create-app-service-2019.png" alt-text="Create App Service dialog box in Visual Studio":::
-
-In a few seconds, Visual Studio publishes your web application and launches a browser where you can see your project running in Azure!
-
-## Next steps
-
-In this tutorial, you've learned how to build an ASP.NET Core MVC web application. Your application can access data stored in Azure Cosmos DB. You can now continue with these resources:
-
-* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
-* [Getting started with SQL queries](./sql-query-getting-started.md)
-* [How to model and partition data on Azure Cosmos DB using a real-world example](./how-to-model-partition-example.md)
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-[Visual Studio Express]: https://www.visualstudio.com/products/visual-studio-express-vs.aspx
-[Microsoft Web Platform Installer]: https://www.microsoft.com/web/downloads/platform.aspx
-[Preventing Cross-Site Request Forgery]: /aspnet/web-api/overview/security/preventing-cross-site-request-forgery-csrf-attacks
-[Basic CRUD Operations in ASP.NET MVC]: /aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/implementing-basic-crud-functionality-with-the-entity-framework-in-asp-net-mvc-application
-[GitHub]: https://github.com/Azure-Samples/cosmos-dotnet-core-todo-app
cosmos-db Sql Api Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-get-started.md
- Title: 'Tutorial: Build a .NET console app to manage data in Azure Cosmos DB SQL API account'
-description: 'Tutorial: Learn how to create Azure Cosmos DB SQL API resources using a C# console application.'
---- Previously updated : 03/23/2022----
-# Tutorial: Build a .NET console app to manage data in Azure Cosmos DB SQL API account
-
-> [!div class="op_single_selector"]
-> * [.NET](sql-api-get-started.md)
-> * [Java](./create-sql-api-java.md)
-> * [Async Java](./create-sql-api-java.md)
-> * [Node.js](sql-api-nodejs-get-started.md)
->
-
-Welcome to the Azure Cosmos DB SQL API get started tutorial. After following this tutorial, you'll have a console application that creates and queries Azure Cosmos DB resources.
-
-This tutorial uses version 3.0 or later of the [Azure Cosmos DB .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) and [.NET 6](https://dotnet.microsoft.com/download).
-
-This tutorial covers:
-
-> [!div class="checklist"]
->
-> * Creating and connecting to an Azure Cosmos account
-> * Configuring your project in Visual Studio
-> * Creating a database and a container
-> * Adding items to the container
-> * Querying the container
-> * Performing create, read, update, and delete (CRUD) operations on the item
-> * Deleting the database
-
-Don't have time? Don't worry! The complete solution is available on [GitHub](https://github.com/Azure-Samples/cosmos-dotnet-getting-started). Jump to the [Get the complete tutorial solution section](#GetSolution) for quick instructions.
-
-Now let's get started!
-
-## Prerequisites
-
-An active Azure account. If you don't have one, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card or an Azure subscription.
--
-Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
-
-## Step 1: Create an Azure Cosmos DB account
-
-Let's create an Azure Cosmos DB account. If you already have an account you want to use, skip this section. To use the Azure Cosmos DB Emulator, follow the steps at [Azure Cosmos DB Emulator](../local-emulator.md) to set up the emulator. Then skip ahead to [Step 2: Set up your Visual Studio project](#SetupVS).
--
-## <a id="SetupVS"></a>Step 2: Set up your Visual Studio project
-
-1. Open Visual Studio and select **Create a new project**.
-1. In **Create a new project**, choose **Console App** for C#, then select **Next**.
-1. Name your project *CosmosGettingStartedTutorial*, and then select **Create**.
-1. In the **Solution Explorer**, right-click your new console application, which is under your Visual Studio solution, and select **Manage NuGet Packages**.
-1. In the **NuGet Package Manager**, select **Browse** and search for *Microsoft.Azure.Cosmos*. Choose **Microsoft.Azure.Cosmos** and select **Install**.
-
- :::image type="content" source="./media/sql-api-get-started/cosmos-getting-started-manage-nuget-2019.png" alt-text="Install NuGet for Azure Cosmos DB Client SDK":::
-
- The package ID for the Azure Cosmos DB SQL API client library is [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/).
-
-Great! Now that we've finished the setup, let's start writing some code. For the completed project of this tutorial, see [Developing a .NET console app using Azure Cosmos DB](https://github.com/Azure-Samples/cosmos-dotnet-getting-started).
-
-## <a id="Connect"></a>Step 3: Connect to an Azure Cosmos DB account
-
-1. Replace the references at the beginning of your C# application in the *Program.cs* file with these references:
-
- ```csharp
- using System;
- using System.Threading.Tasks;
- using System.Configuration;
- using System.Collections.Generic;
- using System.Net;
- using Microsoft.Azure.Cosmos;
- ```
-
-1. Add these constants and variables into your `Program` class.
-
- ```csharp
- public class Program
- {
- // ADD THIS PART TO YOUR CODE
-
- // The Azure Cosmos DB endpoint for running this sample.
- private static readonly string EndpointUri = "<your endpoint here>";
- // The primary key for the Azure Cosmos account.
- private static readonly string PrimaryKey = "<your primary key>";
-
- // The Cosmos client instance
- private CosmosClient cosmosClient;
-
- // The database we will create
- private Database database;
-
- // The container we will create.
- private Container container;
-
- // The name of the database and container we will create
- private string databaseId = "FamilyDatabase";
- private string containerId = "FamilyContainer";
- }
- ```
-
- > [!NOTE]
- > If you're familiar with the previous version of the .NET SDK, you may be familiar with the terms *collection* and *document*. Because Azure Cosmos DB supports multiple API models, version 3.0 of the .NET SDK uses the generic terms *container* and *item*. A *container* can be a collection, graph, or table. An *item* can be a document, edge/vertex, or row, and is the content inside a container. For more information, see [Work with databases, containers, and items in Azure Cosmos DB](../account-databases-containers-items.md).
-
-1. Open the [Azure portal](https://portal.azure.com). Find your Azure Cosmos DB account, and then select **Keys**.
-
- :::image type="content" source="./media/sql-api-get-started/cosmos-getting-started-portal-keys.png" alt-text="Get Azure Cosmos DB keys from Azure portal":::
-
-1. In *Program.cs*, replace `<your endpoint here>` with the value of **URI**. Replace `<your primary key>` with the value of **PRIMARY KEY**.
-
-1. Below the **Main** method, add a new asynchronous task called **GetStartedDemoAsync**, which instantiates our new `CosmosClient`.
-
- ```csharp
- public static async Task Main(string[] args)
- {
- }
-
- // ADD THIS PART TO YOUR CODE
- /*
- Entry point to call methods that operate on Azure Cosmos DB resources in this sample
- */
- public async Task GetStartedDemoAsync()
- {
- // Create a new instance of the Cosmos Client
- this.cosmosClient = new CosmosClient(EndpointUri, PrimaryKey);
- }
- ```
-
- We use **GetStartedDemoAsync** as the entry point that calls methods that operate on Azure Cosmos DB resources.
-
-1. Add the following code to run the **GetStartedDemoAsync** asynchronous task from your **Main** method. The **Main** method catches exceptions and writes them to the console.
-
- [!code-csharp[](~/cosmos-dotnet-getting-started/CosmosGettingStartedTutorial/Program.cs?name=Main)]
-
-1. Select F5 to run your application.
-
- The console displays the message: **End of demo, press any key to exit.** This message confirms that your application made a connection to Azure Cosmos DB. You can then close the console window.
-
-Congratulations! You've successfully connected to an Azure Cosmos DB account.
-
-## Step 4: Create a database
-
-A database is the logical container of items partitioned across containers. Either the `CreateDatabaseIfNotExistsAsync` or `CreateDatabaseAsync` method of the [CosmosClient](/dotnet/api/microsoft.azure.cosmos.cosmosclient) class can create a database.
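-
-At its core, the call is a single line; the following minimal sketch shows the shape of it, while the snippet referenced in the next step contains the full method used by this tutorial.
-
-```csharp
-// Sketch only: create the database if it doesn't already exist.
-DatabaseResponse databaseResponse =
-    await cosmosClient.CreateDatabaseIfNotExistsAsync("FamilyDatabase");
-Console.WriteLine("Created Database: {0}", databaseResponse.Database.Id);
-```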
-
-1. Copy and paste the `CreateDatabaseAsync` method below your `GetStartedDemoAsync` method.
-
- [!code-csharp[](~/cosmos-dotnet-getting-started/CosmosGettingStartedTutorial/Program.cs?name=CreateDatabaseAsync&highlight=7)]
-
- `CreateDatabaseAsync` creates a new database with the ID specified in the `databaseId` field (`FamilyDatabase`) if it doesn't already exist. For the purpose of this demo the database is created as part of the exercise, but in production applications it's [not recommended to create databases as part of the normal request flow](troubleshoot-dot-net-sdk-slow-request.md#metadata-operations).
-
-1. Copy and paste the code below where you instantiate the CosmosClient to call the **CreateDatabaseAsync** method you just added.
-
- ```csharp
- public async Task GetStartedDemoAsync()
- {
- // Create a new instance of the Cosmos Client
- this.cosmosClient = new CosmosClient(EndpointUri, PrimaryKey);
-
- //ADD THIS PART TO YOUR CODE
- await this.CreateDatabaseAsync();
- }
- ```
-
- Your *Program.cs* should now look like this, with your endpoint and primary key filled in.
-
- ```csharp
- using System;
- using System.Threading.Tasks;
- using System.Configuration;
- using System.Collections.Generic;
- using System.Net;
- using Microsoft.Azure.Cosmos;
-
- namespace CosmosGettingStartedTutorial
- {
- class Program
- {
- // The Azure Cosmos DB endpoint for running this sample.
- private static readonly string EndpointUri = "<your endpoint here>";
- // The primary key for the Azure Cosmos account.
- private static readonly string PrimaryKey = "<your primary key>";
-
- // The Cosmos client instance
- private CosmosClient cosmosClient;
-
- // The database we will create
- private Database database;
-
- // The container we will create.
- private Container container;
-
- // The name of the database and container we will create
- private string databaseId = "FamilyDatabase";
- private string containerId = "FamilyContainer";
-
- public static async Task Main(string[] args)
- {
- try
- {
- Console.WriteLine("Beginning operations...");
- Program p = new Program();
- await p.GetStartedDemoAsync();
- }
- catch (CosmosException cosmosException)
- {
- Console.WriteLine("Cosmos Exception with Status {0} : {1}\n", cosmosException.StatusCode, cosmosException);
- }
- catch (Exception e)
- {
- Console.WriteLine("Error: {0}\n", e);
- }
- finally
- {
- Console.WriteLine("End of demo, press any key to exit.");
- Console.ReadKey();
- }
- }
-
- /// <summary>
- /// Entry point to call methods that operate on Azure Cosmos DB resources in this sample
- /// </summary>
- public async Task GetStartedDemoAsync()
- {
- // Create a new instance of the Cosmos Client
- this.cosmosClient = new CosmosClient(EndpointUri, PrimaryKey);
- await this.CreateDatabaseAsync();
- }
-
- /// <summary>
- /// Create the database if it does not exist
- /// </summary>
- private async Task CreateDatabaseAsync()
- {
- // Create a new database
- this.database = await this.cosmosClient.CreateDatabaseIfNotExistsAsync(databaseId);
- Console.WriteLine("Created Database: {0}\n", this.database.Id);
- }
- }
- }
- ```
-
-1. Select F5 to run your application.
-
- > [!NOTE]
- > If you get a "503 service unavailable exception" error, it's possible that the required [ports](sql-sdk-connection-modes.md#service-port-ranges) for direct connectivity mode are blocked by a firewall. To fix this issue, either open the required ports or use the gateway mode connectivity as shown in the following code:
- ```csharp
- // Create a new instance of the Cosmos Client in Gateway mode
- this.cosmosClient = new CosmosClient(EndpointUri, PrimaryKey, new CosmosClientOptions()
- {
- ConnectionMode = ConnectionMode.Gateway
- });
- ```
-
-Congratulations! You've successfully created an Azure Cosmos database.
-
-## <a id="CreateColl"></a>Step 5: Create a container
-
-> [!WARNING]
-> The method `CreateContainerIfNotExistsAsync` creates a new container, which has pricing implications. For more details, please visit our [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/).
->
->
-
-A container can be created by using either the [**CreateContainerIfNotExistsAsync**](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync#Microsoft_Azure_Cosmos_Database_CreateContainerIfNotExistsAsync_Microsoft_Azure_Cosmos_ContainerProperties_System_Nullable_System_Int32__Microsoft_Azure_Cosmos_RequestOptions_System_Threading_CancellationToken_) or [**CreateContainerAsync**](/dotnet/api/microsoft.azure.cosmos.database.createcontainerasync#Microsoft_Azure_Cosmos_Database_CreateContainerAsync_Microsoft_Azure_Cosmos_ContainerProperties_System_Nullable_System_Int32__Microsoft_Azure_Cosmos_RequestOptions_System_Threading_CancellationToken_) method in the `Database` class. A container consists of items (JSON documents in the SQL API) and associated server-side application logic in JavaScript, for example, stored procedures, user-defined functions, and triggers.
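-
-The core call mirrors database creation, with the addition of a partition key path; a minimal sketch is shown below, and the snippet referenced in the next step contains the full method used by this tutorial.
-
-```csharp
-// Sketch only: create the container if it doesn't already exist,
-// partitioned on the LastName property.
-ContainerResponse containerResponse =
-    await database.CreateContainerIfNotExistsAsync("FamilyContainer", "/LastName");
-Console.WriteLine("Created Container: {0}", containerResponse.Container.Id);
-```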
-
-1. Copy and paste the `CreateContainerAsync` method below your `CreateDatabaseAsync` method. `CreateContainerAsync` creates a new container with the ID specified in the `containerId` field (`FamilyContainer`) if it doesn't already exist, partitioned by the `LastName` property. For the purpose of this demo the container is created as part of the exercise, but in production applications it's [not recommended to create containers as part of the normal request flow](troubleshoot-dot-net-sdk-slow-request.md#metadata-operations).
-
- [!code-csharp[](~/cosmos-dotnet-getting-started/CosmosGettingStartedTutorial/Program.cs?name=CreateContainerAsync&highlight=9)]
-
-1. Copy and paste the code below where you instantiated the CosmosClient to call the **CreateContainerAsync** method you just added.
-
- ```csharp
- public async Task GetStartedDemoAsync()
- {
- // Create a new instance of the Cosmos Client
- this.cosmosClient = new CosmosClient(EndpointUri, PrimaryKey);
- await this.CreateDatabaseAsync();
-
- //ADD THIS PART TO YOUR CODE
- await this.CreateContainerAsync();
- }
- ```
-
-1. Select F5 to run your application.
-
-Congratulations! You've successfully created an Azure Cosmos container.
-
-## <a id="CreateDoc"></a>Step 6: Add items to the container
-
-The [**CreateItemAsync**](/dotnet/api/microsoft.azure.cosmos.container.createitemasync#Microsoft_Azure_Cosmos_Container_CreateItemAsync__1___0_System_Nullable_Microsoft_Azure_Cosmos_PartitionKey__Microsoft_Azure_Cosmos_ItemRequestOptions_System_Threading_CancellationToken_) method of the `Container` class creates an item. When you use the SQL API, items are projected as documents, which are arbitrary user-defined JSON content. You can now insert an item into your Azure Cosmos container.
-
-First, let's create a `Family` class that represents objects stored within Azure Cosmos DB in this sample. We'll also create the `Parent`, `Child`, `Pet`, and `Address` classes that are used within `Family`. The item must have an `Id` property serialized as `id` in JSON.
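-
-The `id` requirement is the key detail; a minimal sketch of it, assuming the SDK's default Newtonsoft.Json serializer and showing only a subset of the properties added in the next steps, looks like this:
-
-```csharp
-using Newtonsoft.Json;
-
-public class Family
-{
-    // The SQL API requires a lowercase "id" property on every item,
-    // so the Id property is mapped to "id" during serialization.
-    [JsonProperty(PropertyName = "id")]
-    public string Id { get; set; }
-
-    public string LastName { get; set; }
-
-    public bool IsRegistered { get; set; }
-}
-```
-
-An item built from this class can then be inserted with a call such as `await container.CreateItemAsync(family, new PartitionKey(family.LastName))`, where the partition key value matches the container's `/LastName` partition key path.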
-
-1. Select Ctrl+Shift+A to open **Add New Item**. Add a new class `Family.cs` to your project.
-
- :::image type="content" source="./media/sql-api-get-started/cosmos-getting-started-add-family-class-2019.png" alt-text="Screenshot of adding a new Family.cs class into the project":::
-
-1. Copy and paste the `Family`, `Parent`, `Child`, `Pet`, and `Address` classes into `Family.cs`.
-
- [!code-csharp[](~/cosmos-dotnet-getting-started/CosmosGettingStartedTutorial/Family.cs)]
--
-1. Back in *Program.cs*, add the `AddItemsToContainerAsync` method after your `CreateContainerAsync` method.
-
- [!code-csharp[](~/cosmos-dotnet-getting-started/CosmosGettingStartedTutorial/Program.cs?name=AddItemsToContainerAsync)]
--
- The code checks to make sure an item with the same ID doesn't already exist. We'll insert two items, one each for the *Andersen Family* and the *Wakefield Family*.
-
-1. Add a call to `AddItemsToContainerAsync` in the `GetStartedDemoAsync` method.
-
- ```csharp
- public async Task GetStartedDemoAsync()
- {
- // Create a new instance of the Cosmos Client
- this.cosmosClient = new CosmosClient(EndpointUri, PrimaryKey);
- await this.CreateDatabaseAsync();
- await this.CreateContainerAsync();
-
- //ADD THIS PART TO YOUR CODE
- await this.AddItemsToContainerAsync();
- }
- ```
-
-1. Select F5 to run your application.
-
-Congratulations! You've successfully created two Azure Cosmos items.
-
-## <a id="Query"></a>Step 7: Query Azure Cosmos DB resources
-
-Azure Cosmos DB supports rich queries against JSON documents stored in each container. For more information, see [Getting started with SQL queries](./sql-query-getting-started.md). The following sample code shows how to run a query against the items we inserted in the previous step.
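-
-In outline, the referenced snippet builds a `QueryDefinition` and pages through the results with a `FeedIterator`; a minimal sketch of that pattern, using a parameterized variant of the query shown in the tutorial's output, looks like this:
-
-```csharp
-// Sketch only: run a SQL query against the container and page through the results.
-QueryDefinition query = new QueryDefinition("SELECT * FROM c WHERE c.LastName = @lastName")
-    .WithParameter("@lastName", "Andersen");
-
-FeedIterator<Family> iterator = container.GetItemQueryIterator<Family>(query);
-while (iterator.HasMoreResults)
-{
-    FeedResponse<Family> page = await iterator.ReadNextAsync();
-    foreach (Family family in page)
-    {
-        Console.WriteLine("\tRead {0}", family.Id);
-    }
-}
-```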
-
-1. Copy and paste the `QueryItemsAsync` method after your `AddItemsToContainerAsync` method.
-
- [!code-csharp[](~/cosmos-dotnet-getting-started/CosmosGettingStartedTutorial/Program.cs?name=QueryItemsAsync&highlight=10-11,17-18)]
-
-1. Add a call to ``QueryItemsAsync`` in the ``GetStartedDemoAsync`` method.
-
- ```csharp
- public async Task GetStartedDemoAsync()
- {
- // Create a new instance of the Cosmos Client
- this.cosmosClient = new CosmosClient(EndpointUri, PrimaryKey);
- await this.CreateDatabaseAsync();
- await this.CreateContainerAsync();
- await this.AddItemsToContainerAsync();
-
- //ADD THIS PART TO YOUR CODE
- await this.QueryItemsAsync();
- }
- ```
-
-1. Select F5 to run your application.
-
-Congratulations! You've successfully queried an Azure Cosmos container.
-
-## <a id="ReplaceItem"></a>Step 8: Replace a JSON item
-
-Now, we'll update an item in Azure Cosmos DB. We'll change the `IsRegistered` property of the `Family` and the `Grade` of one of the children.
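-
-In outline, the pattern is read-modify-replace; the following minimal sketch uses the same item ID and partition key values that appear in the tutorial's sample output, and the snippet referenced in the next step contains the full method.
-
-```csharp
-// Sketch only: read an item, change it, then replace it in the container.
-ItemResponse<Family> response =
-    await container.ReadItemAsync<Family>("Wakefield.7", new PartitionKey("Wakefield"));
-Family itemBody = response.Resource;
-
-// Flip the registration flag; the full snippet also updates a child's grade.
-itemBody.IsRegistered = true;
-
-response = await container.ReplaceItemAsync(
-    itemBody, itemBody.Id, new PartitionKey(itemBody.LastName));
-Console.WriteLine("Updated Family [{0},{1}]", itemBody.LastName, itemBody.Id);
-```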
-
-1. Copy and paste the `ReplaceFamilyItemAsync` method after your `QueryItemsAsync` method.
-
- [!code-csharp[](~/cosmos-dotnet-getting-started/CosmosGettingStartedTutorial/Program.cs?name=ReplaceFamilyItemAsync&highlight=15)]
-
-1. Add a call to `ReplaceFamilyItemAsync` in the `GetStartedDemoAsync` method.
-
- ```csharp
- public async Task GetStartedDemoAsync()
- {
- // Create a new instance of the Cosmos Client
- this.cosmosClient = new CosmosClient(EndpointUri, PrimaryKey);
- await this.CreateDatabaseAsync();
- await this.CreateContainerAsync();
- await this.AddItemsToContainerAsync();
- await this.QueryItemsAsync();
-
- //ADD THIS PART TO YOUR CODE
- await this.ReplaceFamilyItemAsync();
- }
- ```
-
-1. Select F5 to run your application.
-
-Congratulations! You've successfully replaced an Azure Cosmos item.
-
-## <a id="DeleteDocument"></a>Step 9: Delete item
-
-Now, we'll delete an item in Azure Cosmos DB.
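-
-The delete call itself is a single line; a minimal sketch, using the same item ID and partition key values as the tutorial's sample output, looks like this:
-
-```csharp
-// Sketch only: delete one item by ID and partition key value.
-await container.DeleteItemAsync<Family>("Wakefield.7", new PartitionKey("Wakefield"));
-Console.WriteLine("Deleted Family [Wakefield,Wakefield.7]");
-```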
-
-1. Copy and paste the `DeleteFamilyItemAsync` method after your `ReplaceFamilyItemAsync` method.
-
- [!code-csharp[](~/cosmos-dotnet-getting-started/CosmosGettingStartedTutorial/Program.cs?name=DeleteFamilyItemAsync&highlight=10)]
-
-1. Add a call to `DeleteFamilyItemAsync` in the `GetStartedDemoAsync` method.
-
- ```csharp
- public async Task GetStartedDemoAsync()
- {
- // Create a new instance of the Cosmos Client
- this.cosmosClient = new CosmosClient(EndpointUri, PrimaryKey);
- await this.CreateDatabaseAsync();
- await this.CreateContainerAsync();
- await this.AddItemsToContainerAsync();
- await this.QueryItemsAsync();
- await this.ReplaceFamilyItemAsync();
-
- //ADD THIS PART TO YOUR CODE
- await this.DeleteFamilyItemAsync();
- }
- ```
-
-1. Select F5 to run your application.
-
-Congratulations! You've successfully deleted an Azure Cosmos item.
-
-## <a id="DeleteDatabase"></a>Step 10: Delete the database
-
-Now we'll delete our database. Deleting the created database removes the database and all of its child resources. The resources include containers, items, and any stored procedures, user-defined functions, and triggers. We also dispose of the `CosmosClient` instance.
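-
-A minimal sketch of that cleanup, assuming the `database`, `databaseId`, and `cosmosClient` fields defined earlier in *Program.cs*, looks like this:
-
-```csharp
-// Sketch only: delete the database, then dispose of the client.
-DatabaseResponse databaseResourceResponse = await this.database.DeleteAsync();
-Console.WriteLine("Deleted Database: {0}\n", this.databaseId);
-
-// Dispose of the CosmosClient once all work with Azure Cosmos DB is done.
-this.cosmosClient.Dispose();
-```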
-
-1. Copy and paste the `DeleteDatabaseAndCleanupAsync` method after your `DeleteFamilyItemAsync` method.
-
- [!code-csharp[](~/cosmos-dotnet-getting-started/CosmosGettingStartedTutorial/Program.cs?name=DeleteDatabaseAndCleanupAsync)]
-
-1. Add a call to ``DeleteDatabaseAndCleanupAsync`` in the ``GetStartedDemoAsync`` method.
-
- [!code-csharp[](~/cosmos-dotnet-getting-started/CosmosGettingStartedTutorial/Program.cs?name=GetStartedDemoAsync&highlight=14)]
-
-1. Select F5 to run your application.
-
-Congratulations! You've successfully deleted an Azure Cosmos database.
-
-## <a id="Run"></a>Step 11: Run your C# console application all together!
-
-Select F5 in Visual Studio to build and run the application in debug mode.
-
-You should see the output of your entire app in a console window. The output shows the results of the queries we added. It should match the example text below.
-
-```cmd
-Beginning operations...
-
-Created Database: FamilyDatabase
-
-Created Container: FamilyContainer
-
-Created item in database with id: Andersen.1 Operation consumed 11.43 RUs.
-
-Created item in database with id: Wakefield.7 Operation consumed 14.29 RUs.
-
-Running query: SELECT * FROM c WHERE c.LastName = 'Andersen'
-
- Read {"id":"Andersen.1","LastName":"Andersen","Parents":[{"FamilyName":null,"FirstName":"Thomas"},{"FamilyName":null,"FirstName":"Mary Kay"}],"Children":[{"FamilyName":null,"FirstName":"Henriette Thaulow","Gender":"female","Grade":5,"Pets":[{"GivenName":"Fluffy"}]}],"Address":{"State":"WA","County":"King","City":"Seattle"},"IsRegistered":false}
-
-Updated Family [Wakefield,Wakefield.7].
- Body is now: {"id":"Wakefield.7","LastName":"Wakefield","Parents":[{"FamilyName":"Wakefield","FirstName":"Robin"},{"FamilyName":"Miller","FirstName":"Ben"}],"Children":[{"FamilyName":"Merriam","FirstName":"Jesse","Gender":"female","Grade":6,"Pets":[{"GivenName":"Goofy"},{"GivenName":"Shadow"}]},{"FamilyName":"Miller","FirstName":"Lisa","Gender":"female","Grade":1,"Pets":null}],"Address":{"State":"NY","County":"Manhattan","City":"NY"},"IsRegistered":true}
-
-Deleted Family [Wakefield,Wakefield.7]
-
-Deleted Database: FamilyDatabase
-
-End of demo, press any key to exit.
-```
-
-Congratulations! You've completed the tutorial and have a working C# console application!
-
-## <a id="GetSolution"></a> Get the complete tutorial solution
-
-If you didn't have time to complete the steps in this tutorial, or you just want the code samples, you can download the complete solution.
-
-To build the `GetStarted` solution, you need the following prerequisites:
-
-* An active Azure account. If you don't have one, you can sign up for a [free account](https://azure.microsoft.com/free/).
-* An [Azure Cosmos DB account][cosmos-db-create-account].
-* The [GetStarted](https://github.com/Azure-Samples/cosmos-dotnet-getting-started) solution available on GitHub.
-
-To restore the references to the Azure Cosmos DB .NET SDK in Visual Studio, right-click the solution in **Solution Explorer**, and then select **Restore NuGet Packages**. Next, in the *App.config* file, update the `EndPointUri` and `PrimaryKey` values as described in [Step 3: Connect to an Azure Cosmos DB account](#Connect).
-
-That's it, build it, and you're on your way!
-
-## Next steps
-
-* Want a more complex ASP.NET MVC tutorial? See [Tutorial: Develop an ASP.NET Core MVC web application with Azure Cosmos DB by using .NET SDK](sql-api-dotnet-application.md).
-* Want to do scale and performance testing with Azure Cosmos DB? See [Performance and scale testing with Azure Cosmos DB](performance-testing.md).
-* To learn how to monitor Azure Cosmos DB requests, usage, and storage, see [Monitor performance and storage metrics in Azure Cosmos DB](../monitor-cosmos-db.md).
-* To learn more about Azure Cosmos DB, see [Welcome to Azure Cosmos DB](../introduction.md).
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-[cosmos-db-create-account]: create-sql-api-java.md#create-a-database-account
cosmos-db Sql Api Java Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-java-application.md
- Title: 'Tutorial: Build a Java web app using Azure Cosmos DB and the SQL API'
-description: 'Tutorial: This Java web application tutorial shows you how to use the Azure Cosmos DB and the SQL API to store and access data from a Java application hosted on Azure Websites.'
---- Previously updated : 03/29/2022-----
-# Tutorial: Build a Java web application using Azure Cosmos DB and the SQL API
-
-> [!div class="op_single_selector"]
-> * [.NET](sql-api-dotnet-application.md)
-> * [Java](sql-api-java-application.md)
-> * [Node.js](sql-api-nodejs-application.md)
-> * [Python](./create-sql-api-python.md)
->
-
-This Java web application tutorial shows you how to use the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service to store and access data from a Java application hosted on Azure App Service Web Apps. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). In this article, you will learn:
-
-* How to build a basic JavaServer Pages (JSP) application in Eclipse.
-* How to work with the Azure Cosmos DB service using the [Azure Cosmos DB Java SDK](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos).
-
-This Java application tutorial shows you how to create a web-based task-management application that enables you to create, retrieve, and mark tasks as complete, as shown in the following image. Each task in the ToDo list is stored as a JSON document in Azure Cosmos DB.
--
-> [!TIP]
-> This application development tutorial assumes that you have prior experience using Java. If you are new to Java or the [prerequisite tools](#Prerequisites), we recommend downloading the complete [todo](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app) project from GitHub and building it using [the instructions at the end of this article](#GetProject). Once you have it built, you can review the article to gain insight on the code in the context of the project.
-
-## <a id="Prerequisites"></a>Prerequisites for this Java web application tutorial
-
-Before you begin this application development tutorial, you must have the following:
-
-* If you don't have an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card.
-
- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
-
-* [Java Development Kit (JDK) 7+](/java/azure/jdk/).
-* [Eclipse IDE for Java EE Developers.](https://www.eclipse.org/downloads/packages/release/luna/sr1/eclipse-ide-java-ee-developers)
-* [An Azure Web Site with a Java runtime environment (for example, Tomcat or Jetty) enabled.](../../app-service/quickstart-java.md)
-
-If you're installing these tools for the first time, coreservlets.com provides a walk-through of the installation process in the quickstart section of their [Tutorial: Installing TomCat7 and Using it with Eclipse](https://www.youtube.com/watch?v=jOdCfW7-ybI&t=2s) article.
-
-## <a id="CreateDB"></a>Create an Azure Cosmos DB account
-
-Let's start by creating an Azure Cosmos DB account. If you already have an account or if you are using the Azure Cosmos DB Emulator for this tutorial, you can skip to [Step 2: Create the Java JSP application](#CreateJSP).
---
-## <a id="CreateJSP"></a>Create the Java JSP application
-
-To create the JSP application:
-
-1. First, we'll start off by creating a Java project. Start Eclipse, then select **File**, select **New**, and then select **Dynamic Web Project**. If you don't see **Dynamic Web Project** listed as an available project, do the following: Select **File**, select **New**, select **Project**…, expand **Web**, select **Dynamic Web Project**, and select **Next**.
-
- :::image type="content" source="./media/sql-api-java-application/image10.png" alt-text="JSP Java Application Development":::
-
-1. Enter a project name in the **Project name** box, and in the **Target Runtime** drop-down menu, optionally select a value (e.g. Apache Tomcat v7.0), and then select **Finish**. Selecting a target runtime enables you to run your project locally through Eclipse.
-
-1. In Eclipse, in the Project Explorer view, expand your project. Right-click **WebContent**, select **New**, and then select **JSP File**.
-
-1. In the **New JSP File** dialog box, name the file **index.jsp**. Keep the parent folder as **WebContent**, as shown in the following illustration, and then select **Next**.
-
- :::image type="content" source="./media/sql-api-java-application/image11.png" alt-text="Make a New JSP File - Java Web Application Tutorial":::
-
-1. In the **Select JSP Template** dialog box, for the purpose of this tutorial select **New JSP File (html)**, and then select **Finish**.
-
-1. When the *index.jsp* file opens in Eclipse, add text to display **Hello World!** within the existing `<body>` element. The updated `<body>` content should look like the following code:
-
- ```html
- <body>
- <% out.println("Hello World!"); %>
- </body>
- ```
-
-1. Save the *index.jsp* file.
-
-1. If you set a target runtime in step 2, you can select **Project** and then **Run** to run your JSP application locally:
-
- :::image type="content" source="./media/sql-api-java-application/image12.png" alt-text="Hello World – Java Application Tutorial":::
-
-## <a id="InstallSDK"></a>Install the SQL Java SDK
-
-The easiest way to pull in the SQL Java SDK and its dependencies is through [Apache Maven](https://maven.apache.org/). To do this, you need to convert your project to a Maven project by using the following steps:
-
-1. Right-click your project in the Project Explorer, select **Configure**, select **Convert to Maven Project**.
-
-1. In the **Create new POM** window, accept the defaults, and select **Finish**.
-
-1. In **Project Explorer**, open the pom.xml file.
-
-1. On the **Dependencies** tab, in the **Dependencies** pane, select **Add**.
-
-1. In the **Select Dependency** window, do the following:
-
- * In the **Group Id** box, enter `com.azure`.
- * In the **Artifact Id** box, enter `azure-cosmos`.
- * In the **Version** box, enter `4.11.0`.
-
- Or, you can add the dependency XML for Group ID and Artifact ID directly to the *pom.xml* file:
-
- ```xml
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-cosmos</artifactId>
- <version>4.11.0</version>
- </dependency>
- ```
-
-1. Select **OK**, and Maven installs the SQL Java SDK. If you added the dependency XML directly, save the *pom.xml* file instead.
-
-## <a id="UseService"></a>Use the Azure Cosmos DB service in your Java application
-
-Now let's add the models, the views, and the controllers to your web application.
-
-### Add a model
-
-First, let's define a model within a new file *TodoItem.java*. The `TodoItem` class defines the schema of an item along with the getter and setter methods:
--
-### Add the Data Access Object (DAO) classes
-
-Create a Data Access Object (DAO) to abstract persisting the ToDo items to Azure Cosmos DB. In order to save ToDo items to a collection, the client needs to know which database and collection to persist to (as referenced by self-links). In general, it is best to cache the database and collection when possible to avoid additional round-trips to the database.
-
-1. To invoke the Azure Cosmos DB service, you must instantiate a new `CosmosClient` object. In general, it's best to reuse the `CosmosClient` object rather than construct a new client for each subsequent request. You can reuse the client by defining it within the `CosmosClientFactory` class. Update the HOST and MASTER_KEY values that you saved in [step 1](#CreateDB): replace the HOST variable with your URI, and replace the MASTER_KEY variable with your PRIMARY KEY. Use the following code to create the `CosmosClientFactory` class within the *CosmosClientFactory.java* file:
-
- :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/CosmosClientFactory.java":::
-
-1. Create a new *TodoDao.java* file and add the `TodoDao` class to create, update, read, and delete the todo items:
-
- :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/TodoDao.java":::
-
-1. Create a new *MockDao.java* file and add the `MockDao` class. This class implements `TodoDao` to perform CRUD operations on the items:
-
- :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/MockDao.java":::
-
-1. Create a new *DocDbDao.java* file and add the `DocDbDao` class. This class defines the code that persists the TodoItems into the container, and it retrieves your database and collection if they exist, or creates them if they don't. This example uses [Gson](https://code.google.com/p/google-gson/) to serialize and deserialize the TodoItem Plain Old Java Objects (POJOs) to and from JSON documents. In order to save ToDo items to a collection, the client needs to know which database and collection to persist to (as referenced by self-links). This class also defines a helper function that retrieves the documents by another attribute (for example, "ID") rather than by self-link. You can use the helper method to retrieve a TodoItem JSON document by ID and then deserialize it to a POJO.
-
- You can also use the `cosmosClient` client object to get a collection or list of TodoItems using a SQL query. Finally, you define the delete method to delete a TodoItem from your list. The following code shows the contents of the `DocDbDao` class:
-
- :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/DocDbDao.java":::
-
-1. Next, create a new *TodoDaoFactory.java* file and add the `TodoDaoFactory` class that creates a new DocDbDao object:
-
- :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/TodoDaoFactory.java":::
-
-### Add a controller
-
-Add the *TodoItemController* controller to your application. In this project, you are using [Project Lombok](https://projectlombok.org/) to generate the constructor, getters, setters, and a builder. Alternatively, you can write this code manually or have the IDE generate it:
--
-### Create a servlet
-
-Next, create a servlet to route HTTP requests to the controller. Create the *ApiServlet.java* file and define the following code under it:
--
-## <a id="Wire"></a>Wire the rest of the Java app together
-
-Now that we've finished the fun bits, all that's left is to build a quick user interface and wire it up to your DAO.
-
-1. You need a web user interface to display to the user. Let's re-write the *index.jsp* we created earlier with the following code:
-
- :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/WebContent/index.jsp":::
-
-1. Finally, write some client-side JavaScript to tie the web user interface and the servlet together:
-
- :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/WebContent/assets/todo.js":::
-
-1. Now all that's left is to test the application. Run the application locally, and add some Todo items by filling in the item name and category and clicking **Add Task**. After the item appears, you can update whether it's complete by toggling the checkbox and clicking **Update Tasks**.
-
-## <a id="Deploy"></a>Deploy your Java application to Azure Web Sites
-
-Azure Web Sites makes deploying Java applications as simple as exporting your application as a WAR file and uploading it via either source control (for example, Git) or FTP.
-
-1. To export your application as a WAR file, right-click on your project in **Project Explorer**, select **Export**, and then select **WAR File**.
-
-1. In the **WAR Export** window, do the following:
-
- * In the **Web project** box, enter *azure-cosmos-java-sample*.
- * In the **Destination** box, choose a destination to save the WAR file.
- * Select **Finish**.
-
-1. Now that you have a WAR file in hand, you can simply upload it to your Azure Web Site's **webapps** directory. For instructions on uploading the file, see [Add a Java application to Azure App Service Web Apps](../../app-service/quickstart-java.md). After the WAR file is uploaded to the webapps directory, the runtime environment will detect that you've added it and will automatically load it.
-
-1. To view your finished product, navigate to `http://YOUR_SITE_NAME.azurewebsites.net/azure-cosmos-java-sample/` and start adding your tasks!
-
-## <a id="GetProject"></a>Get the project from GitHub
-
-All the samples in this tutorial are included in the [todo](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app) project on GitHub. To import the todo project into Eclipse, ensure you have the software and resources listed in the [Prerequisites](#Prerequisites) section, then do the following:
-
-1. Install [Project Lombok](https://projectlombok.org/). Lombok is used to generate constructors, getters, and setters in the project. Once you have downloaded the lombok.jar file, double-click it to install it, or install it from the command line.
-
-1. If Eclipse is open, close it and restart it to load Lombok.
-
-1. In Eclipse, on the **File** menu, select **Import**.
-
-1. In the **Import** window, select **Git**, select **Projects from Git**, and then select **Next**.
-
-1. On the **Select Repository Source** screen, select **Clone URI**.
-
-1. On the **Source Git Repository** screen, in the **URI** box, enter https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app, and then select **Next**.
-
-1. On the **Branch Selection** screen, ensure that **main** is selected, and then select **Next**.
-
-1. On the **Local Destination** screen, select **Browse** to select a folder where the repository can be copied, and then select **Next**.
-
-1. On the **Select a wizard to use for importing projects** screen, ensure that **Import existing projects** is selected, and then select **Next**.
-
-1. On the **Import Projects** screen, unselect the **DocumentDB** project, and then select **Finish**. The DocumentDB project contains the Azure Cosmos DB Java SDK, which we will add as a dependency instead.
-
-1. In **Project Explorer**, navigate to azure-cosmos-java-sample\src\com.microsoft.azure.cosmos.sample.dao\DocumentClientFactory.java and replace the HOST and MASTER_KEY values with the URI and PRIMARY KEY for your Azure Cosmos DB account, and then save the file. For more information, see [Step 1. Create an Azure Cosmos database account](#CreateDB).
-
-1. In **Project Explorer**, right-click the **azure-cosmos-java-sample**, select **Build Path**, and then select **Configure Build Path**.
-
-1. On the **Java Build Path** screen, in the right pane, select the **Libraries** tab, and then select **Add External JARs**. Navigate to the location of the lombok.jar file, and select **Open**, and then select **OK**.
-
-1. Use step 12 to open the **Properties** window again, and then in the left pane select **Targeted Runtimes**.
-
-1. On the **Targeted Runtimes** screen, select **New**, select **Apache Tomcat v7.0**, and then select **OK**.
-
-1. Use step 12 to open the **Properties** window again, and then in the left pane select **Project Facets**.
-
-1. On the **Project Facets** screen, select **Dynamic Web Module** and **Java**, and then select **OK**.
-
-1. On the **Servers** tab at the bottom of the screen, right-click **Tomcat v7.0 Server at localhost** and then select **Add and Remove**.
-
-1. On the **Add and Remove** window, move **azure-cosmos-java-sample** to the **Configured** box, and then select **Finish**.
-
-1. In the **Servers** tab, right-click **Tomcat v7.0 Server at localhost**, and then select **Restart**.
-
-1. In a browser, navigate to `http://localhost:8080/azure-cosmos-java-sample/` and start adding to your task list. Note that if you changed your default port values, change 8080 to the value you selected.
-
-1. To deploy your project to an Azure web site, see [Step 6. Deploy your application to Azure Web Sites](#Deploy).
-
-## Next steps
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Build a Node.js application with Azure Cosmos DB](sql-api-nodejs-application.md)
cosmos-db Sql Api Java Sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-java-sdk-samples.md
-- Title: 'Azure Cosmos DB SQL API: Java SDK v4 examples'
-description: Find Java examples on GitHub for common tasks using the Azure Cosmos DB SQL API, including CRUD operations.
---- Previously updated : 08/26/2021-----
-# Azure Cosmos DB SQL API: Java SDK v4 examples
-
-> [!div class="op_single_selector"]
-> * [.NET V3 SDK Examples](sql-api-dotnet-v3sdk-samples.md)
-> * [Java V4 SDK Examples](sql-api-java-sdk-samples.md)
-> * [Spring Data V3 SDK Examples](sql-api-spring-data-sdk-samples.md)
-> * [Node.js Examples](sql-api-nodejs-samples.md)
-> * [Python Examples](sql-api-python-samples.md)
-> * [.NET V2 SDK Examples (Legacy)](sql-api-dotnet-v2sdk-samples.md)
-> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db)
->
->
-
-> [!IMPORTANT]
-> To learn more about Java SDK v4, see the Azure Cosmos DB Java SDK v4 [Release notes](sql-api-sdk-java-v4.md), the [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), the Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4-sql.md), and the Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4-sql.md). If you're currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
->
-
-> [!IMPORTANT]
->[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
->
->- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month that you can use for paid Azure services.
->
->[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
->
-
-The latest sample applications that perform CRUD operations and other common operations on Azure Cosmos DB resources are included in the [azure-cosmos-java-sql-api-samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples) GitHub repository. This article provides:
-
-* Links to the tasks in each of the example Java project files.
-* Links to the related API reference content.
-
-**Prerequisites**
-
-You need the following to run this sample application:
-
-* Java Development Kit 8
-* Azure Cosmos DB Java SDK v4
-
-You can optionally use Maven to get the latest Azure Cosmos DB Java SDK v4 binaries for use in your project. Maven automatically adds any necessary dependencies. Otherwise, you can directly download the dependencies listed in the pom.xml file and add them to your build path.
-
-```xml
-<dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-cosmos</artifactId>
- <version>LATEST</version>
-</dependency>
-```
-
-**Running the sample applications**
-
-Clone the sample repo:
-```bash
-$ git clone https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples.git
-
-$ cd azure-cosmos-java-sql-api-samples
-```
-
-You can run the samples either from an IDE (Eclipse, IntelliJ, or VS Code) or from the command line by using Maven.
-
-To give the samples read/write access to your account, set these environment variables:
-
-```
-ACCOUNT_HOST=your account hostname;ACCOUNT_KEY=your account primary key
-```
-
-To run a sample, specify its Main Class:
-
-```
-com.azure.cosmos.examples.sample.synchronicity.MainClass
-```
-
-where *sample.synchronicity.MainClass* can be one of the following:
-* crudquickstart.sync.SampleCRUDQuickstart
-* crudquickstart.async.SampleCRUDQuickstartAsync
-* indexmanagement.sync.SampleIndexManagement
-* indexmanagement.async.SampleIndexManagementAsync
-* storedprocedure.sync.SampleStoredProcedure
-* storedprocedure.async.SampleStoredProcedureAsync
-* changefeed.SampleChangeFeedProcessor *(Changefeed has only an async sample, no sync sample.)*
-...etc...
-
-> [!NOTE]
-> Each sample is self-contained; it sets itself up and cleans up after itself. The samples issue multiple calls to create a `CosmosContainer` or `CosmosAsyncContainer`. Each time this is done, your subscription is billed for 1 hour of usage for the performance tier of the collection created.
->
->
-
-## Database examples
-The Database CRUD Sample files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) conceptual article.
-
-| Task | API reference |
-| | |
-| Create a database | [CosmosClient.createDatabaseIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L76-L84) <br> [CosmosAsyncClient.createDatabaseIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/databasecrud/async/DatabaseCRUDQuickstartAsync.java#L80-L89) |
-| Read a database by ID | [CosmosClient.getDatabase](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L87-L94) <br> [CosmosAsyncClient.getDatabase](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/databasecrud/async/DatabaseCRUDQuickstartAsync.java#L92-L99) |
-| Read all the databases | [CosmosClient.readAllDatabases](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L97-L111) <br> [CosmosAsyncClient.readAllDatabases](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/databasecrud/async/DatabaseCRUDQuickstartAsync.java#L102-L124) |
-| Delete a database | [CosmosDatabase.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L114-L122) <br> [CosmosAsyncDatabase.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/databasecrud/async/DatabaseCRUDQuickstartAsync.java#L127-L135) |
-
-## Collection examples
-The Collection CRUD Samples files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) conceptual article.
-
-| Task | API reference |
-| | |
-| Create a collection | [CosmosDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L92-L107) <br> [CosmosAsyncDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/containercrud/async/ContainerCRUDQuickstartAsync.java#L96-L111) |
-| Change configured performance of a collection | [CosmosContainer.replaceThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L110-L118) <br> [CosmosAsyncContainer.replaceProvisionedThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/containercrud/async/ContainerCRUDQuickstartAsync.java#L114-L122) |
-| Get a collection by ID | [CosmosDatabase.getContainer](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L121-L128) <br> [CosmosAsyncDatabase.getContainer](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/containercrud/async/ContainerCRUDQuickstartAsync.java#L125-L132) |
-| Read all the collections in a database | [CosmosDatabase.readAllContainers](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L131-L145) <br> [CosmosAsyncDatabase.readAllContainers](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/containercrud/async/ContainerCRUDQuickstartAsync.java#L135-L158) |
-| Delete a collection | [CosmosContainer.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L148-L156) <br> [CosmosAsyncContainer.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/containercrud/async/ContainerCRUDQuickstartAsync.java#L161-L169) |
-
-## Autoscale collection examples
-
-To learn more about autoscale before running these samples, take a look at these instructions for enabling autoscale in your [account](https://azure.microsoft.com/resources/templates/cosmosdb-sql-autoscale/) and in your [databases and containers](../provision-throughput-autoscale.md).
-
-The autoscale database sample files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscaledatabasecrud/sync/AutoscaleDatabaseCRUDQuickstart.java) and [async](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscaledatabasecrud/async/AutoscaleDatabaseCRUDQuickstartAsync.java) show how to perform the following task.
-
-| Task | API reference |
-| | |
-| Create a database with specified autoscale max throughput | [CosmosClient.createDatabase](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscaledatabasecrud/sync/AutoscaleDatabaseCRUDQuickstart.java#L77-L88) <br> [CosmosAsyncClient.createDatabase](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/autoscaledatabasecrud/async/AutoscaleDatabaseCRUDQuickstartAsync.java#L81-L94) |
--
-The autoscale collection samples files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java) and [async](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/async/AutoscaleContainerCRUDQuickstartAsync.java) show how to perform the following tasks.
-
-| Task | API reference |
-| | |
-| Create a collection with specified autoscale max throughput | [CosmosDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java#L97-L110) <br> [CosmosAsyncDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/async/AutoscaleContainerCRUDQuickstartAsync.java#L101-L114) |
-| Change configured autoscale max throughput of a collection | [CosmosContainer.replaceThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java#L113-L120) <br> [CosmosAsyncContainer.replaceThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/async/AutoscaleContainerCRUDQuickstartAsync.java#L117-L124) |
-| Read autoscale throughput configuration of a collection | [CosmosContainer.readThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java#L122-L133) <br> [CosmosAsyncContainer.readThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/async/AutoscaleContainerCRUDQuickstartAsync.java#L126-L137) |
-
-## Analytical storage collection examples
-
-The analytical storage collection CRUD sample files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/sync/AnalyticalContainerCRUDQuickstart.java) and [async](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/async/AnalyticalContainerCRUDQuickstartAsync.java) show how to perform the following task. To learn about analytical storage in Azure Cosmos DB before running the following sample, read about Azure Synapse Link for Azure Cosmos DB and the analytical store.
-
-| Task | API reference |
-| | |
-| Create a collection | [CosmosDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/sync/AnalyticalContainerCRUDQuickstart.java#L91-L106) <br> [CosmosAsyncDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/async/AnalyticalContainerCRUDQuickstartAsync.java#L91-L106) |
-
-## Document examples
-The Document CRUD sample files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java) and [async](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java) show how to perform the following tasks. To learn about Azure Cosmos DB documents before running the following samples, see the [Databases, containers, and items](../account-databases-containers-items.md) conceptual article.
-
-| Task | API reference |
-| | |
-| Create a document | [CosmosContainer.createItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L132-L146) <br> [CosmosAsyncContainer.createItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L188-L212) |
-| Read a document by ID | [CosmosContainer.readItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L177-L192) <br> [CosmosAsyncContainer.readItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L318-L340) |
-| Query for documents | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L161-L175) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L270-L287) |
-| Replace a document | [CosmosContainer.replaceItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L177-L192) <br> [CosmosAsyncContainer.replaceItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L318-L340) |
-| Upsert a document | [CosmosContainer.upsertItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L194-L207) <br> [CosmosAsyncContainer.upsertItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L342-L364) |
-| Delete a document | [CosmosContainer.deleteItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L285-L292) <br> [CosmosAsyncContainer.deleteItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L494-L510) |
-| Replace a document with conditional ETag check | [CosmosItemRequestOptions.setIfMatchETag](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L209-L246) (sync) <br>[CosmosItemRequestOptions.setIfMatchETag](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L366-L418) (async) |
-| Read document only if document has changed | [CosmosItemRequestOptions.setIfNoneMatchETag](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L248-L282) (sync) <br>[CosmosItemRequestOptions.setIfNoneMatchETag](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L420-L491) (async)|
-| Partial document update | [CosmosContainer.patchItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/patch/sync/SamplePatchQuickstart.java) |
-| Bulk document update | [Bulk samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java) |
-| Transactional batch | [batch samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/batch/async/SampleBatchQuickStartAsync.java) |
-
-## Indexing examples
-The [Index Management Sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java) file shows how to perform the following tasks. To learn about indexing in Azure Cosmos DB before running the following samples, see the [indexing policies](../index-policy.md), [indexing types](../index-overview.md#index-types), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
-
-| Task | API reference |
-| | |
-| Include specified documents paths in the index | [IndexingPolicy.IncludedPaths](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L143-L146) |
-| Exclude specified documents paths from the index | [IndexingPolicy.ExcludedPaths](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L148-L151) |
-| Create a composite index | [IndexingPolicy.setCompositeIndexes](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L167-L184) <br> CompositePath |
-| Create a geospatial index | [IndexingPolicy.setSpatialIndexes](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L153-L165) <br> SpatialSpec <br> SpatialType |
-<!-- | Exclude a document from the index | ExcludedIndex<br>IndexingPolicy | -->
-<!-- | Use Lazy Indexing | IndexingPolicy.IndexingMode | -->
-<!-- | Force a range scan operation on a hash indexed path | FeedOptions.EnableScanInQuery | -->
-<!-- | Use range indexes on Strings | IndexingPolicy.IncludedPaths<br>RangeIndex | -->
-<!-- | Perform an index transform | - | -->
--
-For more information about indexing, see [Azure Cosmos DB indexing policies](../index-policy.md).
-
-## Query examples
-The Query sample files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java) and [async](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java) show how to perform the following query tasks.
-
-| Task | API reference |
-| | |
-| Query for all documents | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L204-L208) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L244-L247)|
-| Query for equality using == | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L286-L290) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L325-L329)|
-| Query for inequality using != and NOT | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L292-L300) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L331-L339)|
-| Query using range operators like >, <, >=, <= | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L302-L307) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L341-L346)|
-| Query using range operators against strings | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L309-L314) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L348-L353)|
-| Query with ORDER BY | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L316-L321) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L355-L360)|
-| Query with DISTINCT | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L323-L328) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L362-L367)|
-| Query with aggregate functions | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L330-L338) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L369-L377)|
-| Work with subdocuments | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L340-L348) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L379-L387)|
-| Query with intra-document Joins | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L350-L372) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L389-L411)|
-| Query with string, math, and array operators | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L374-L385) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L413-L424)|
-| Query with parameterized SQL using SqlQuerySpec | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L387-L416) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L426-L455)|
-| Query with explicit paging | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L211-L261) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L250-L300)|
-| Query partitioned collections in parallel | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L263-L284) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L302-L323)|
-<!-- | Query with ORDER BY for partitioned collections | CosmosContainer.queryItems <br> CosmosAsyncContainer.queryItems | -->
-
-## Change feed examples
-The [Change Feed Processor Sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java) file shows how to perform the following tasks. To learn about change feed before running the following samples, see [Change feed in Azure Cosmos DB](../change-feed.md) and [Change feed processor](/azure/cosmos-db/sql/change-feed-processor?tabs=java).
-
-| Task | API reference |
-| | |
-| Basic change feed functionality | [ChangeFeedProcessor.changeFeedProcessorBuilder](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java#L141-L172) |
-| Read change feed from the beginning | [ChangeFeedProcessorOptions.setStartFromBeginning()](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java#L65) |
-<!-- | Read change feed from a specific time | ChangeFeedProcessor.changeFeedProcessorBuilder | -->
-
-## Server-side programming examples
-
-The [Stored Procedure Sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java) file shows how to perform the following tasks. To learn about server-side programming in Azure Cosmos DB before running the following samples, see [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md).
-
-| Task | API reference |
-| | |
-| Create a stored procedure | [CosmosScripts.createStoredProcedure](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java#L134-L153) |
-| Execute a stored procedure | [CosmosStoredProcedure.execute](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java#L213-L227) |
-| Delete a stored procedure | [CosmosStoredProcedure.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java#L254-L264) |
-
-<!-- ## User management examples
-The User Management Sample file shows how to do the following tasks:
-
-| Task | API reference |
-| | |
-| Create a user | - |
-| Set permissions on a collection or document | - |
-| Get a list of a user's permissions |- | -->
-
-## Next steps
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Sql Api Nodejs Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-nodejs-application.md
- Title: 'Tutorial: Build a Node.js web app with Azure Cosmos DB JavaScript SDK to manage SQL API data'
-description: This Node.js tutorial explores how to use Microsoft Azure Cosmos DB to store and access data from a Node.js Express web application hosted on Web Apps feature of Microsoft Azure App Service.
----- Previously updated : 10/18/2021-
-#Customer intent: As a developer, I want to build a Node.js web application to access and manage SQL API account resources in Azure Cosmos DB, so that customers can better use the service.
--
-# Tutorial: Build a Node.js web app using the JavaScript SDK to manage a SQL API account in Azure Cosmos DB
-
-> [!div class="op_single_selector"]
-> * [.NET](sql-api-dotnet-application.md)
-> * [Java](sql-api-java-application.md)
-> * [Node.js](sql-api-nodejs-application.md)
-> * [Python](./create-sql-api-python.md)
->
-
-As a developer, you might have applications that use NoSQL document data. You can use a SQL API account in Azure Cosmos DB to store and access this document data. This Node.js tutorial shows you how to store and access data from a SQL API account in Azure Cosmos DB by using a Node.js Express application that is hosted on the Web Apps feature of Microsoft Azure App Service. In this tutorial, you will build a web-based application (Todo app) that allows you to create, retrieve, and complete tasks. The tasks are stored as JSON documents in Azure Cosmos DB.
-
-This tutorial demonstrates how to create a SQL API account in Azure Cosmos DB by using the Azure portal. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). You then build and run a web app that is built on the Node.js SDK to create a database and container, and add items to the container. This tutorial uses JavaScript SDK version 3.0.
-
-This tutorial covers the following tasks:
-
-> [!div class="checklist"]
-> * Create an Azure Cosmos DB account
-> * Create a new Node.js application
-> * Connect the application to Azure Cosmos DB
-> * Run and deploy the application to Azure
-
-## <a name="prerequisites"></a>Prerequisites
-
-Before following the instructions in this article, ensure
-that you have the following resources:
-
-* If you don't have an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card.
-
- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
-
-* [Node.js][Node.js] version 6.10 or higher.
-* [Express generator](https://www.expressjs.com/starter/generator.html) (you can install the Express generator via `npm install express-generator -g`)
-* Install [Git][Git] on your local workstation.
-
-## <a name="create-account"></a>Create an Azure Cosmos DB account
-Let's start by creating an Azure Cosmos DB account. If you already have an account or if you are using the Azure Cosmos DB Emulator for this tutorial, you can skip to [Step 2: Create a new Node.js application](#create-new-app).
---
-## <a name="create-new-app"></a>Create a new Node.js application
-Now let's learn to create a basic Hello World Node.js project using the Express framework.
-
-1. Open your favorite terminal, such as the Node.js command prompt.
-
-1. Navigate to the directory in which you'd like to store the new application.
-
-1. Use the express generator to generate a new application called **todo**.
-
- ```bash
- express todo
- ```
-
-1. Open the **todo** directory and install dependencies.
-
- ```bash
- cd todo
- npm install
- ```
-
-1. Run the new application.
-
- ```bash
- npm start
- ```
-
-1. You can view your new application by navigating your browser to `http://localhost:3000`.
-
- :::image type="content" source="./media/sql-api-nodejs-application/cosmos-db-node-js-express.png" alt-text="Learn Node.js - Screenshot of the Hello World application in a browser window":::
-
- Stop the application by using CTRL+C in the terminal window, and select **y** to terminate the batch job.
-
-## <a name="install-modules"></a>Install the required modules
-
-The **package.json** file is one of the files created in the root of the project. This file contains a list of other modules that are required for your Node.js application. When you deploy this application to Azure, this file is used to determine which modules should be installed on Azure to support your application. Install the additional module required for this tutorial.
-
-1. Install the **\@azure/cosmos** module via npm.
-
- ```bash
- npm install @azure/cosmos
- ```
-
-## <a name="connect-to-database"></a>Connect the Node.js application to Azure Cosmos DB
-Now that you have completed the initial setup and configuration, you can write the code that the todo application requires to communicate with Azure Cosmos DB.
-
-### Create the model
-1. At the root of your project directory, create a new directory named **models**.
-
-2. In the **models** directory, create a new file named **taskDao.js**. This file contains code required to create the database and container. It also defines methods to read, update, create, and find tasks in Azure Cosmos DB.
-
-3. Copy the following code into the **taskDao.js** file:
-
- ```javascript
- // @ts-check
- const CosmosClient = require('@azure/cosmos').CosmosClient
- const debug = require('debug')('todo:taskDao')
-
- // For simplicity we'll set a constant partition key
- const partitionKey = undefined
- class TaskDao {
- /**
- * Manages reading, adding, and updating Tasks in Cosmos DB
- * @param {CosmosClient} cosmosClient
- * @param {string} databaseId
- * @param {string} containerId
- */
- constructor(cosmosClient, databaseId, containerId) {
- this.client = cosmosClient
- this.databaseId = databaseId
- this.collectionId = containerId
-
- this.database = null
- this.container = null
- }
-
- async init() {
- debug('Setting up the database...')
- const dbResponse = await this.client.databases.createIfNotExists({
- id: this.databaseId
- })
- this.database = dbResponse.database
- debug('Setting up the database...done!')
- debug('Setting up the container...')
- const coResponse = await this.database.containers.createIfNotExists({
- id: this.collectionId
- })
- this.container = coResponse.container
- debug('Setting up the container...done!')
- }
-
- async find(querySpec) {
- debug('Querying for items from the database')
- if (!this.container) {
- throw new Error('Collection is not initialized.')
- }
- const { resources } = await this.container.items.query(querySpec).fetchAll()
- return resources
- }
-
- async addItem(item) {
- debug('Adding an item to the database')
- item.date = Date.now()
- item.completed = false
- const { resource: doc } = await this.container.items.create(item)
- return doc
- }
-
- async updateItem(itemId) {
- debug('Update an item in the database')
- const doc = await this.getItem(itemId)
- doc.completed = true
-
- const { resource: replaced } = await this.container
- .item(itemId, partitionKey)
- .replace(doc)
- return replaced
- }
-
- async getItem(itemId) {
- debug('Getting an item from the database')
- const { resource } = await this.container.item(itemId, partitionKey).read()
- return resource
- }
- }
-
- module.exports = TaskDao
- ```
-4. Save and close the **taskDao.js** file.
-
-### Create the controller
-
-1. In the **routes** directory of your project, create a new file named **tasklist.js**.
-
-2. Add the following code to **tasklist.js**. This code loads the **TaskDao** module that you defined earlier. It also defines the **TaskList** class, whose constructor takes an instance of **TaskDao** and uses it to handle the task operations:
-
- ```javascript
- const TaskDao = require("../models/taskDao");
-
- class TaskList {
- /**
- * Handles the various APIs for displaying and managing tasks
- * @param {TaskDao} taskDao
- */
- constructor(taskDao) {
- this.taskDao = taskDao;
- }
- async showTasks(req, res) {
- const querySpec = {
- query: "SELECT * FROM root r WHERE r.completed=@completed",
- parameters: [
- {
- name: "@completed",
- value: false
- }
- ]
- };
-
- const items = await this.taskDao.find(querySpec);
- res.render("index", {
- Title: "My ToDo List ",
- tasks: items
- });
- }
-
- async addTask(req, res) {
- const item = req.body;
-
- await this.taskDao.addItem(item);
- res.redirect("/");
- }
-
- async completeTask(req, res) {
- const completedTasks = Object.keys(req.body);
- const tasks = [];
-
- completedTasks.forEach(task => {
- tasks.push(this.taskDao.updateItem(task));
- });
-
- await Promise.all(tasks);
-
- res.redirect("/");
- }
- }
-
- module.exports = TaskList;
- ```
-
-3. Save and close the **tasklist.js** file.
-
-### Add config.js
-
-1. At the root of your project directory, create a new file named **config.js**.
-
-2. Add the following code to the **config.js** file. This code defines the configuration settings and values needed for our application.
-
- ```javascript
- const config = {};
-
- config.host = process.env.HOST || "[the endpoint URI of your Azure Cosmos DB account]";
- config.authKey =
- process.env.AUTH_KEY || "[the PRIMARY KEY value of your Azure Cosmos DB account]";
- config.databaseId = "ToDoList";
- config.containerId = "Items";
-
- if (config.host.includes("https://localhost:")) {
- console.log("Local environment detected");
- console.log("WARNING: Disabled checking of self-signed certs. Do not have this code in production.");
- process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
- console.log(`Go to http://localhost:${process.env.PORT || '3000'} to try the sample.`);
- }
-
- module.exports = config;
- ```
-
-3. In the **config.js** file, replace the endpoint and key placeholder values (or set the HOST and AUTH_KEY environment variables) with the **URI** and **PRIMARY KEY** values found on the **Keys** page of your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com).
-
-4. Save and close the **config.js** file.
-
-### Modify app.js
-
-1. In the project directory, open the **app.js** file. This file was created earlier when the Express web application was created.
-
-2. Replace the contents of the **app.js** file with the following code. This code uses the config file you defined, creates the Cosmos client, and wires the **TaskDao** and **TaskList** classes into the Express routes:
-
- ```javascript
- const CosmosClient = require('@azure/cosmos').CosmosClient
- const config = require('./config')
- const TaskList = require('./routes/tasklist')
- const TaskDao = require('./models/taskDao')
-
- const express = require('express')
- const path = require('path')
- const logger = require('morgan')
- const cookieParser = require('cookie-parser')
- const bodyParser = require('body-parser')
-
- const app = express()
-
- // view engine setup
- app.set('views', path.join(__dirname, 'views'))
- app.set('view engine', 'jade')
-
- // uncomment after placing your favicon in /public
- //app.use(favicon(path.join(__dirname, 'public', 'favicon.ico')));
- app.use(logger('dev'))
- app.use(bodyParser.json())
- app.use(bodyParser.urlencoded({ extended: false }))
- app.use(cookieParser())
- app.use(express.static(path.join(__dirname, 'public')))
-
- //Todo App:
- const cosmosClient = new CosmosClient({
- endpoint: config.host,
- key: config.authKey
- })
- const taskDao = new TaskDao(cosmosClient, config.databaseId, config.containerId)
- const taskList = new TaskList(taskDao)
- taskDao
- .init()
- .catch(err => {
- console.error(err)
- console.error(
- 'Shutting down because there was an error setting up the database.'
- )
- process.exit(1)
- })
-
- app.get('/', (req, res, next) => taskList.showTasks(req, res).catch(next))
- app.post('/addtask', (req, res, next) => taskList.addTask(req, res).catch(next))
- app.post('/completetask', (req, res, next) =>
- taskList.completeTask(req, res).catch(next)
- )
-
- // catch 404 and forward to error handler
- app.use(function(req, res, next) {
- const err = new Error('Not Found')
- err.status = 404
- next(err)
- })
-
- // error handler
- app.use(function(err, req, res, next) {
- // set locals, only providing error in development
- res.locals.message = err.message
- res.locals.error = req.app.get('env') === 'development' ? err : {}
-
- // render the error page
- res.status(err.status || 500)
- res.render('error')
- })
-
- module.exports = app
- ```
-
-3. Finally, save and close the **app.js** file.
-
-## <a name="build-ui"></a>Build a user interface
-
-Now let's build the user interface so that a user can interact with the application. The Express application we created in the previous sections uses **Jade** as the view engine.
-
-1. The **layout.jade** file in the **views** directory is used as a global template for other **.jade** files. In this step you will modify it to use Twitter Bootstrap, which is a toolkit used to design a website.
-
-2. Open the **layout.jade** file found in the **views** folder and replace the contents with the following code:
-
- ```html
- doctype html
- html
- head
- title= title
- link(rel='stylesheet', href='//ajax.aspnetcdn.com/ajax/bootstrap/3.3.2/css/bootstrap.min.css')
- link(rel='stylesheet', href='/stylesheets/style.css')
- body
- nav.navbar.navbar-inverse.navbar-fixed-top
- div.navbar-header
- a.navbar-brand(href='#') My Tasks
- block content
- script(src='//ajax.aspnetcdn.com/ajax/jQuery/jquery-1.11.2.min.js')
- script(src='//ajax.aspnetcdn.com/ajax/bootstrap/3.3.2/bootstrap.min.js')
- ```
-
- This code tells the **Jade** engine to render some HTML for our application, and creates a **block** called **content** where we can supply the layout for our content pages. Save and close the **layout.jade** file.
-
-3. Now open the **index.jade** file, the view that will be used by our application, and replace the content of the file with the following code:
-
- ```html
- extends layout
- block content
- h1 #{title}
- br
-
- form(action="/completetask", method="post")
- table.table.table-striped.table-bordered
- tr
- td Name
- td Category
- td Date
- td Complete
- if (typeof tasks === "undefined")
- tr
- td
- else
- each task in tasks
- tr
- td #{task.name}
- td #{task.category}
- - var date = new Date(task.date);
- - var day = date.getDate();
- - var month = date.getMonth() + 1;
- - var year = date.getFullYear();
- td #{month + "/" + day + "/" + year}
- td
- input(type="checkbox", name="#{task.id}", value="#{!task.completed}", checked=task.completed)
- button.btn.btn-primary(type="submit") Update tasks
- hr
- form.well(action="/addtask", method="post")
- label Item Name:
- input(name="name", type="textbox")
- label Item Category:
- input(name="category", type="textbox")
- br
- button.btn(type="submit") Add item
- ```
-
-This code extends layout and provides content for the **content** placeholder we saw in the **layout.jade** file earlier. In this view, we create two HTML forms.
-
-The first form contains a table for your data and a button that allows you to update items by posting to the controller's **/completetask** route.
-
-The second form contains two input fields and a button that allows you to create a new item by posting to the controller's **/addtask** route. That's all we need for the application to work.
-
-## <a name="run-app-locally"></a>Run your application locally
-
-Now that you have built the application, you can run it locally by using the following steps:
-
-1. To test the application on your local machine, run `npm start` in the terminal to start your application, and then refresh the `http://localhost:3000` browser page. The page should now look as shown in the following screenshot:
-
- :::image type="content" source="./media/sql-api-nodejs-application/cosmos-db-node-js-localhost.png" alt-text="Screenshot of the MyTodo List application in a browser window":::
-
- > [!TIP]
- > If you receive an error about the indent in the layout.jade file or the index.jade file, ensure that the first two lines in both files are left-justified, with no spaces. If there are spaces before the first two lines, remove them, save both files, and then refresh your browser window.
-
-2. Use the **Item Name** and **Item Category** fields to enter a new task, and then select **Add item**. This creates a document in Azure Cosmos DB with those properties.
-
-3. The page should update to display the newly created item in the ToDo
- list.
-
- :::image type="content" source="./media/sql-api-nodejs-application/cosmos-db-node-js-added-task.png" alt-text="Screenshot of the application with a new item in the ToDo list":::
-
-4. To complete a task, select the check box in the Complete column,
- and then select **Update tasks**. It updates the document you already created and removes it from the view.
-
-5. To stop the application, press CTRL+C in the terminal window and then select **Y** to terminate the batch job.
-
-## <a name="deploy-app"></a>Deploy your application to App Service
-
-After your application runs successfully on your local machine, you can deploy it to Azure App Service. In the terminal, make sure you're in the *todo* app directory. Then deploy the code in your local folder (*todo*) by using the following [az webapp up](/cli/azure/webapp#az-webapp-up) command:
-
-```azurecli
-az webapp up --sku F1 --name <app-name>
-```
-
-Replace `<app-name>` with a name that's unique across all of Azure (valid characters are `a-z`, `0-9`, and `-`). A good pattern is to use a combination of your company name and an app identifier. To learn more about app deployment, see the [Node.js app deployment in Azure](../../app-service/quickstart-nodejs.md?tabs=linux&pivots=development-environment-cli#deploy-to-azure) article.
-
-The command may take a few minutes to complete. While running, it provides messages about creating the resource group, the App Service plan, and the app resource, configuring logging, and doing ZIP deployment. It then gives you a URL to launch the app at `http://<app-name>.azurewebsites.net`, which is the app's URL on Azure.
-
-## Clean up resources
-
-When these resources are no longer needed, you can delete the resource group, Azure Cosmos DB account, and all the related resources. To do so, select the resource group that you used for the Azure Cosmos DB account, select **Delete**, and then confirm the name of the resource group to delete.
-
-## Next steps
-
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Build mobile applications with Xamarin and Azure Cosmos DB](mobile-apps-with-xamarin.md)
-
-[Node.js]: https://nodejs.org/
-[Git]: https://git-scm.com/
-[GitHub]: https://github.com/Azure-Samples/azure-cosmos-db-sql-api-nodejs-todo-app
cosmos-db Sql Api Nodejs Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-nodejs-get-started.md
- Title: Node.js tutorial for the SQL API for Azure Cosmos DB
-description: A Node.js tutorial that demonstrates how to connect to and query Azure Cosmos DB using the SQL API
----- Previously updated : 05/02/2022---
-# Tutorial: Build a Node.js console app with the JavaScript SDK to manage Azure Cosmos DB SQL API data
--
-> [!div class="op_single_selector"]
->
-> * [.NET](sql-api-get-started.md)
-> * [Java](./create-sql-api-java.md)
-> * [Async Java](./create-sql-api-java.md)
-> * [Node.js](sql-api-nodejs-get-started.md)
->
-
-As a developer, you might have applications that use NoSQL document data. You can use a SQL API account in Azure Cosmos DB to store and access this document data. This tutorial shows you how to build a Node.js console application to create Azure Cosmos DB resources and query them. If you don't have an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card.
-
-In this tutorial, you will:
-
-> [!div class="checklist"]
->
-> * Create and connect to an Azure Cosmos DB account.
-> * Set up your application.
-> * Create a database.
-> * Create a container.
-> * Add items to the container.
-> * Perform basic operations on the items, container, and database.
->
-
-## Prerequisites
-
-Make sure you have the following resources:
-
-* An active Azure account. If you don't have one, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) with no credit card or Azure subscription required.
-
- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
-
-* [Node.js](https://nodejs.org/) v6.0.0 or higher.
-
-## Create Azure Cosmos DB account
-
-Let's create an Azure Cosmos DB account. If you already have an account you want to use, you can skip ahead to [Set up your Node.js application](#set-up-your-nodejs-application). If you're using the Azure Cosmos DB Emulator, follow the steps at [Azure Cosmos DB Emulator](../local-emulator.md) to set up the emulator and skip ahead to [Set up your Node.js application](#set-up-your-nodejs-application).
--
-## Set up your Node.js application
-
-Before you start writing code for the application, build the scaffolding for your app. Open your favorite terminal and navigate to the folder or directory where you'd like to save your Node.js application. Create placeholder JavaScript files with the following commands:
-
-### [Windows](#tab/windows)
-
-```powershell
-fsutil file createnew app.js 0
-
-fsutil file createnew config.js 0
-
-md data
-
-fsutil file createnew data\databaseContext.js 0
-```
-
-### [Linux / macOS](#tab/linux+macos)
-
-```bash
-touch app.js
-
-touch config.js
-
-mkdir data
-
-touch data/databaseContext.js
-```
---
-1. Create and initialize a `package.json` file. Use the following command:
-
- ```bash
- npm init -y
- ```
-
-1. Install the ``@azure/cosmos`` module via **npm**. Use the following command:
-
- ```bash
- npm install @azure/cosmos --save
- ```
-
-## Set your app's configurations
-
-Now that your app exists, make sure it can talk to Azure Cosmos DB by updating a few configuration settings, as shown in the following steps:
-
-1. Open the *config.js* file in your favorite text editor.
-
-1. Copy and paste the following code snippet into the *config.js* file, and set the `endpoint` and `key` properties to your Azure Cosmos DB endpoint URI and primary key. The database and container names are set to **Tasks** and **Items**. The partition key you'll use for this application is **/category**.
-
- :::code language="javascript" source="~/cosmosdb-nodejs-get-started/config.js":::
-
- You can find the endpoint and key details in the **Keys** pane of the [Azure portal](https://portal.azure.com).
-
- :::image type="content" source="media/sql-api-nodejs-get-started/node-js-tutorial-keys.png" alt-text="Get keys from Azure portal screenshot":::
-
-The JavaScript SDK uses the generic terms *container* and *item*. A container can be a collection, graph, or table. An item can be a document, edge/vertex, or row, and is the content inside a container. In the previous code snippet, the `module.exports = config;` code is used to export the config object, so that you can reference it within the *app.js* file.
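-
-The *config.js* contents are pulled in from the sample repository and aren't rendered inline on this page. For orientation, a minimal *config.js* consistent with the description above might look like the following sketch; the exact property names and the `partitionKey` shape are assumptions, not the sample's verbatim code:
-
-```javascript
-// config.js - a minimal sketch; replace the placeholder values with your own account values
-const config = {
-  endpoint: "<your Azure Cosmos DB account endpoint URI>",
-  key: "<your Azure Cosmos DB account primary key>",
-  databaseId: "Tasks",
-  containerId: "Items",
-  partitionKey: { kind: "Hash", paths: ["/category"] }
-};
-
-module.exports = config;
-```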
-
-## Create a database and a container
-
-1. Open the *databaseContext.js* file in your favorite text editor.
-
-1. Copy and paste the following code into the *databaseContext.js* file. This code defines a function that creates the **Tasks** database and the **Items** container if they don't already exist in your Azure Cosmos DB account:
-
- :::code language="javascript" source="~/cosmosdb-nodejs-get-started/data/databaseContext.js" id="createDatabaseAndContainer":::
-
- A database is the logical container of items partitioned across containers. You create a database by using either the `createIfNotExists` or create function of the **Databases** class. A container consists of items, which, in the SQL API, are actually JSON documents. You create a container by using either the `createIfNotExists` or create function from the **Containers** class. After creating a container, you can store and query the data.
-
- > [!WARNING]
- > Creating a container has pricing implications. Visit our [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) so you know what to expect.
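-
-The *databaseContext.js* code above is referenced from the sample repository and isn't rendered inline here. As a rough guide, a `createIfNotExists`-based helper along the lines just described might look like this sketch (the exported `create` function name and the use of `config.partitionKey` are assumptions):
-
-```javascript
-// databaseContext.js - a sketch of a helper that creates the database and container if needed
-const config = require("../config");
-
-async function create(client, databaseId, containerId) {
-  // Create the database if it doesn't already exist
-  const { database } = await client.databases.createIfNotExists({ id: databaseId });
-  console.log(`Created database:\n${database.id}\n`);
-
-  // Create the container if it doesn't already exist
-  const { container } = await database.containers.createIfNotExists({
-    id: containerId,
-    partitionKey: config.partitionKey
-  });
-  console.log(`Created container:\n${container.id}\n`);
-
-  return { database, container };
-}
-
-module.exports = { create };
-```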
-
-## Import the configuration
-
-1. Open the *app.js* file in your favorite text editor.
-
-1. Copy and paste the code below to import the `@azure/cosmos` module, the configuration, and the databaseContext that you defined in the previous steps.
-
- :::code language="javascript" source="~/cosmosdb-nodejs-get-started/app.js" id="ImportConfiguration":::
-
-## Create an asynchronous function
-
-In the *app.js* file, copy and paste the following code to create an asynchronous function named **main** and immediately execute the function.
--
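-
-The referenced snippet isn't rendered on this page. A minimal sketch of an immediately executed async **main** function might look like this; the body is filled in by the sections that follow:
-
-```javascript
-async function main() {
-  // The code from the following sections goes here.
-}
-
-main().catch(error => {
-  console.error(error);
-});
-```
-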
-## Connect to the Azure Cosmos account
-
-Within the **main** method, copy and paste the following code to use the previously saved endpoint and key to create a new CosmosClient object.
--
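-
-As a rough sketch (not the sample's exact snippet), the client setup inside **main** could look like the following; it assumes the `config.js` properties and the `databaseContext.create` helper sketched earlier:
-
-```javascript
-// Inside main(): pull the settings from config.js and create the client
-const { endpoint, key, databaseId, containerId } = config;
-
-const client = new CosmosClient({ endpoint, key });
-const database = client.database(databaseId);
-const container = database.container(containerId);
-
-// Make sure the database and container exist before running the operations below
-await databaseContext.create(client, databaseId, containerId);
-```
-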
-> [!Note]
-> If connecting to the **Cosmos DB Emulator**, disable TLS verification for your node process:
->
-> ```javascript
-> process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
-> const client = new CosmosClient({ endpoint, key });
-> ```
->
-
-Now that you have the code to initialize the Azure Cosmos DB client, add a try/catch block that will contain the code for your point operations.
--
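-
-For example, a bare try/catch wrapper inside **main** might look like this sketch; the error handling shown is just one reasonable choice:
-
-```javascript
-try {
-  // The point-operation code from the following sections goes here.
-} catch (err) {
-  console.log(err.message);
-}
-```
-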
-Let's take a look at how to work with Azure Cosmos DB resources.
-
-## Query items
-
-Azure Cosmos DB supports rich queries against JSON items stored in each container. The following sample code shows a query that you can run against the items in your container. You can query the items by using the query function of the `Items` class.
-
-Add the following code to the **try** block to query the items from your Azure Cosmos account:
--
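-
-The referenced query snippet isn't shown inline. A hedged sketch of a parameterized query might look like the following; the `category` and `description` property names are assumptions used for illustration:
-
-```javascript
-console.log(`Querying container: ${containerId}`);
-
-const querySpec = {
-  query: "SELECT * FROM c WHERE c.category = @category",
-  parameters: [{ name: "@category", value: "personal" }]
-};
-
-const { resources: items } = await container.items.query(querySpec).fetchAll();
-
-items.forEach(item => {
-  console.log(`${item.id} - ${item.description}`);
-});
-```
-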
-## Create an item
-
-An item can be created by using the create function of the `Items` class. When you're using the SQL API, items are projected as documents, which are user-defined (arbitrary) JSON content. In this tutorial, you create a new item within the tasks database.
-
-1. In the *app.js* file, outside of the **main** method, define the item definition:
-
- :::code language="javascript" source="~/cosmosdb-nodejs-get-started/app.js" id="DefineNewItem":::
-
-1. Back within the **try** block of the **main** method, add the following code to create the previously defined item:
-
- :::code language="javascript" source="~/cosmosdb-nodejs-get-started/app.js" id="CreateItem":::
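-
-Taken together, the two referenced snippets typically amount to something like the following sketch; the item's fields are assumptions chosen to line up with the sample output shown later in this article:
-
-```javascript
-// Defined outside of main()
-const newItem = {
-  id: "3",
-  category: "fun",
-  name: "Cosmos DB",
-  description: "Complete Cosmos DB Node.js Quickstart ⚡",
-  isComplete: false
-};
-
-// Inside the try block of main(): create the item and log its id
-const { resource: createdItem } = await container.items.create(newItem);
-console.log(`\r\nCreated new item: ${createdItem.id} - ${createdItem.description}\r\n`);
-```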
-
-## Update an item
-
-Azure Cosmos DB supports replacing the contents of items. Copy and paste the following code into the **try** block. This code gets an item from the container and updates the *isComplete* field to true.
--
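-
-A sketch of what that update could look like, assuming the `createdItem` from the previous section and its `category` value as the partition key:
-
-```javascript
-const { id, category } = createdItem;
-
-// Read the item, flip the isComplete flag, and replace it in the container
-const { resource: itemToUpdate } = await container.item(id, category).read();
-itemToUpdate.isComplete = true;
-
-const { resource: updatedItem } = await container.item(id, category).replace(itemToUpdate);
-console.log(`Updated item: ${updatedItem.id} - ${updatedItem.description}`);
-console.log(`Updated isComplete to ${updatedItem.isComplete}\r\n`);
-```
-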
-## Delete an item
-
-Azure Cosmos DB supports deleting JSON items. The following code shows how to get an item by its ID and delete it. Copy and paste the following code to the **try** block:
--
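-
-A sketch of the delete call, again assuming the `id` and `category` values of the item created earlier:
-
-```javascript
-// Delete the item by id and partition key value
-await container.item(id, category).delete();
-console.log(`Deleted item with id: ${id}`);
-```
-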
-## Run your Node.js application
-
-Altogether, your code should look like this:
--
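-
-The complete listing isn't rendered on this page. At a high level, *app.js* combines the pieces above roughly like the following skeleton (a sketch, not the full sample):
-
-```javascript
-// A high-level skeleton only - not the full sample code
-const CosmosClient = require("@azure/cosmos").CosmosClient;
-const config = require("./config");
-const databaseContext = require("./data/databaseContext");
-
-async function main() {
-  const { endpoint, key, databaseId, containerId } = config;
-
-  const client = new CosmosClient({ endpoint, key });
-  const database = client.database(databaseId);
-  const container = database.container(containerId);
-
-  await databaseContext.create(client, databaseId, containerId);
-
-  try {
-    // Query, create, update, and delete items as shown in the earlier sections.
-  } catch (err) {
-    console.log(err.message);
-  }
-}
-
-main();
-```
-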
-In your terminal, locate your ```app.js``` file and run the command:
-
-```bash
-node app.js
-```
-
-You should see the output of your get started app. The output should match the example text below.
-
-```bash
-Created database:
-Tasks
-
-Created container:
-Items
-
-Querying container: Items
-1 - Pick up apples and strawberries.
-
-Created new item: 3 - Complete Cosmos DB Node.js Quickstart ⚡
-
-Updated item: 3 - Complete Cosmos DB Node.js Quickstart ⚡
-Updated isComplete to true
-
-Deleted item with id: 3
-```
-
-## Get the complete Node.js tutorial solution
-
-If you didn't have time to complete the steps in this tutorial, or just want to download the code, you can get it from [GitHub](https://github.com/Azure-Samples/azure-cosmos-db-sql-api-nodejs-getting-started).
-
-To run the getting started solution that contains all the code in this article, you'll need:
-
-* An [Azure Cosmos DB account][create-account].
-* The [Getting Started](https://github.com/Azure-Samples/azure-cosmos-db-sql-api-nodejs-getting-started) solution available on GitHub.
-
-Install the project's dependencies via npm. Use the following command:
-
-* ```npm install```
-
-Next, in the `config.js` file, update the `config.endpoint` and `config.key` values as described in the [Set your app's configurations](#set-your-apps-configurations) section.
-
-Then in your terminal, locate your ```app.js``` file and run the command:
-
-```bash
-node app.js
-```
-
-## Clean up resources
-
-When these resources are no longer needed, you can delete the resource group, Azure Cosmos DB account, and all the related resources. To do so, select the resource group that you used for the Azure Cosmos DB account, select **Delete**, and then confirm the name of the resource group to delete.
-
-## Next steps
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Monitor an Azure Cosmos DB account](../monitor-cosmos-db.md)
-
-[create-account]: create-sql-api-dotnet.md#create-account
cosmos-db Sql Api Nodejs Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-nodejs-samples.md
- Title: Node.js examples to manage data in Azure Cosmos database
-description: Find Node.js examples on GitHub for common tasks in Azure Cosmos DB, including CRUD operations.
---- Previously updated : 08/26/2021---
-# Node.js examples to manage data in Azure Cosmos DB
-
-> [!div class="op_single_selector"]
-> * [.NET V3 SDK Examples](sql-api-dotnet-v3sdk-samples.md)
-> * [Java V4 SDK Examples](sql-api-java-sdk-samples.md)
-> * [Spring Data V3 SDK Examples](sql-api-spring-data-sdk-samples.md)
-> * [Node.js Examples](sql-api-nodejs-samples.md)
-> * [Python Examples](sql-api-python-samples.md)
-> * [.NET V2 SDK Examples (Legacy)](sql-api-dotnet-v2sdk-samples.md)
-> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db)
->
->
-
-Sample solutions that perform CRUD operations and other common operations on Azure Cosmos DB resources are included in the [azure-cosmos-js](https://github.com/Azure/azure-cosmos-js/tree/master/samples) GitHub repository. This article provides:
-
-* Links to the tasks in each of the Node.js example project files.
-* Links to the related API reference content.
-
-**Prerequisites**
-- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month that you can use for paid Azure services.
-You also need the [JavaScript SDK](sql-api-sdk-node.md).
-
- > [!NOTE]
- > Each sample is self-contained; it sets itself up and cleans up after itself. As such, the samples issue multiple calls to [Containers.create](/javascript/api/%40azure/cosmos/containers). Each time this happens, your subscription is billed for 1 hour of usage at the performance tier of the container being created.
- >
- >
-
-## Database examples
-
-The [DatabaseManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts) file shows how to perform CRUD operations on the database. To learn about Azure Cosmos DB databases before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
-
-| Task | API reference |
-| | |
-| [Create a database if it does not exist](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L12-L14) |[Databases.createIfNotExists](/javascript/api/@azure/cosmos/databases#createifnotexists-databaserequest--requestoptions-) |
-| [List databases for an account](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L16-L18) |[Databases.readAll](/javascript/api/@azure/cosmos/databases#readall-feedoptions-) |
-| [Read a database by ID](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L20-L29) |[Database.read](/javascript/api/@azure/cosmos/database#read-requestoptions-) |
-| [Delete a database](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L31-L32) |[Database.delete](/javascript/api/@azure/cosmos/database#delete-requestoptions-) |
-
-## Container examples
-
-The [ContainerManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts) file shows how to perform CRUD operations on the container. To learn about Azure Cosmos DB containers before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
-
-| Task | API reference |
-| | |
-| [Create a container if it does not exist](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L14-L15) |[Containers.createIfNotExists](/javascript/api/@azure/cosmos/containers#createifnotexists-containerrequest--requestoptions-) |
-| [List containers for an account](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L17-L21) |[Containers.readAll](/javascript/api/@azure/cosmos/containers#readall-feedoptions-) |
-| [Read a container definition](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L23-L26) |[Container.read](/javascript/api/@azure/cosmos/container#read-requestoptions-) |
-| [Delete a container](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L28-L30) |[Container.delete](/javascript/api/@azure/cosmos/container#delete-requestoptions-) |
-
-## Item examples
-
-The [ItemManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts) file shows how to perform CRUD operations on the item. To learn about Azure Cosmos DB items before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
-
-| Task | API reference |
-| | |
-| [Create items](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L18-L21) |[Items.create](/javascript/api/@azure/cosmos/items#create-t--requestoptions-) |
-| [Read all items in a container](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L23-L28) |[Items.readAll](/javascript/api/@azure/cosmos/items#readall-feedoptions-) |
-| [Read an item by ID](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L30-L33) |[Item.read](/javascript/api/@azure/cosmos/item#read-requestoptions-) |
-| [Read item only if item has changed](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L45-L56) |[Item.read](/javascript/api/%40azure/cosmos/item)<br/>[RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
-| [Query for documents](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L58-L79) |[Items.query](/javascript/api/%40azure/cosmos/items) |
-| [Replace an item](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L81-L96) |[Item.replace](/javascript/api/%40azure/cosmos/item) |
-| [Replace item with conditional ETag check](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L98-L135) |[Item.replace](/javascript/api/%40azure/cosmos/item)<br/>[RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
-| [Delete an item](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L137-L140) |[Item.delete](/javascript/api/%40azure/cosmos/item) |
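-
-For orientation, a minimal create-and-read flow with the JavaScript SDK looks roughly like the following sketch; the endpoint, key, database, container, and partition key values are placeholders, not part of the linked samples:
-
-```javascript
-const { CosmosClient } = require("@azure/cosmos");
-
-async function itemBasics() {
-  // Placeholder endpoint and key - use your own account values
-  const client = new CosmosClient({ endpoint: "<endpoint>", key: "<key>" });
-  // Assumes a container partitioned on /category
-  const container = client.database("SampleDB").container("SampleContainer");
-
-  // Create an item, then read it back by id and partition key value
-  const { resource: created } = await container.items.create({ id: "1", category: "demo", name: "sample" });
-  const { resource: fetched } = await container.item(created.id, created.category).read();
-  console.log(fetched.name);
-}
-
-itemBasics().catch(console.error);
-```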
-
-## Indexing examples
-
-The [IndexManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts) file shows how to manage indexing. To learn about indexing in Azure Cosmos DB before running the following samples, see [indexing policies](../index-policy.md), [indexing types](../index-overview.md#index-types), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
-
-| Task | API reference |
-| | |
-| [Manually index a specific item](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L52-L75) |[RequestOptions.indexingDirective: 'include'](/javascript/api/%40azure/cosmos/requestoptions#indexingdirective) |
-| [Manually exclude a specific item from the index](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L17-L29) |[RequestOptions.indexingDirective: 'exclude'](/javascript/api/%40azure/cosmos/requestoptions#indexingdirective) |
-| [Exclude a path from the index](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L142-L167) |[IndexingPolicy.ExcludedPath](/javascript/api/%40azure/cosmos/indexingpolicy#excludedpaths) |
-| [Create a range index on a string path](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L87-L112) |[IndexKind.Range](/javascript/api/%40azure/cosmos/indexkind), [IndexingPolicy](/javascript/api/%40azure/cosmos/indexingpolicy), [Items.query](/javascript/api/%40azure/cosmos/items) |
-| [Create a container with default indexPolicy, then update this online](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L13-L15) |[Containers.create](/javascript/api/%40azure/cosmos/containers) |
-
-## Server-side programming examples
-
-The [index.ts](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ServerSideScripts/index.ts) file of the [ServerSideScripts](https://github.com/Azure/azure-cosmos-js/tree/master/samples/ServerSideScripts) project shows how to perform the following tasks. To learn about server-side programming in Azure Cosmos DB before running the following samples, see the [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md) conceptual article.
-
-| Task | API reference |
-| | |
-| [Create a stored procedure](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ServerSideScripts/upsert.js) |[StoredProcedures.create](/javascript/api/%40azure/cosmos/storedprocedures) |
-| [Execute a stored procedure](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ServerSideScripts/index.ts) |[StoredProcedure.execute](/javascript/api/%40azure/cosmos/storedprocedure) |
-
-For more information about server-side programming, see [Azure Cosmos DB server-side programming: Stored procedures, database triggers, and UDFs](stored-procedures-triggers-udfs.md).
-
-## Next steps
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Sql Api Python Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-python-samples.md
- Title: SQL API Python examples for Azure Cosmos DB
-description: Find Python examples on GitHub for common tasks in Azure Cosmos DB, including CRUD operations.
---- Previously updated : 08/26/2021----
-# Azure Cosmos DB Python examples
-
-> [!div class="op_single_selector"]
-> * [.NET V3 SDK Examples](sql-api-dotnet-v3sdk-samples.md)
-> * [Java V4 SDK Examples](sql-api-java-sdk-samples.md)
-> * [Spring Data V3 SDK Examples](sql-api-spring-data-sdk-samples.md)
-> * [Node.js Examples](sql-api-nodejs-samples.md)
-> * [Python Examples](sql-api-python-samples.md)
-> * [.NET V2 SDK Examples (Legacy)](sql-api-dotnet-v2sdk-samples.md)
-> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db)
-
-Sample solutions that perform CRUD operations and other common operations on Azure Cosmos DB resources are included in the [azure-documentdb-python](https://github.com/Azure/azure-documentdb-python) GitHub repository. This article provides:
-
-* Links to the tasks in each of the Python example project files.
-* Links to the related API reference content.
-
-## Prerequisites
-
-- An Azure Cosmos DB account. Your options are:
-    * Within an active Azure subscription:
-        * [Create an Azure free account](https://azure.microsoft.com/free) or use your existing subscription
-        * [Visual Studio Monthly Credits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers)
-        * [Azure Cosmos DB Free Tier](../free-tier.md)
-    * Without an active Azure subscription:
-        * [Try Azure Cosmos DB for free](../try-free.md), a test environment that lasts for 30 days.
-        * [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator)
-- [Python 2.7 or 3.6+](https://www.python.org/downloads/), with the `python` executable in your `PATH`.
-- [Visual Studio Code](https://code.visualstudio.com/).
-- The [Python extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python#overview).
-- [Git](https://www.git-scm.com/downloads).
-- [Azure Cosmos DB SQL API SDK for Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos)
-
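With the prerequisites in place, connecting to an account takes only a few lines. The following is a minimal sketch, not part of the sample files; `ACCOUNT_URI` and `ACCOUNT_KEY` are hypothetical environment variable names that you would point at your own account or the emulator.

```python
# Minimal connection sketch for the azure-cosmos Python SDK.
# ACCOUNT_URI and ACCOUNT_KEY are hypothetical environment variables;
# substitute the endpoint and key from your own account or the emulator.
import os

from azure.cosmos import CosmosClient

endpoint = os.environ["ACCOUNT_URI"]   # for example, https://<your-account>.documents.azure.com:443/
key = os.environ["ACCOUNT_KEY"]

client = CosmosClient(endpoint, credential=key)
print([db["id"] for db in client.list_databases()])
```
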
-## Database examples
-
-The [database_management.py](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py) Python sample shows how to do the following tasks. To learn about Azure Cosmos DB databases before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
-
-| Task | API reference |
-| | |
-| [Create a database](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L53-L62) |CosmosClient.create_database|
-| [Read a database by ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L64-L73) |CosmosClient.get_database_client|
-| [Query the databases](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L37-L50) |CosmosClient.query_databases|
-| [List databases for an account](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L76-L87) |CosmosClient.list_databases|
-| [Delete a database](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L90-L99) |CosmosClient.delete_database|
-
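For orientation, the following sketch condenses the same database tasks into a few calls with the `azure-cosmos` package. The `ToDoList` database name and the environment variables are illustrative only, not values used by the sample file.

```python
# Illustrative database operations; "ToDoList" is a hypothetical database name.
import os

from azure.cosmos import CosmosClient

client = CosmosClient(os.environ["ACCOUNT_URI"], credential=os.environ["ACCOUNT_KEY"])

database = client.create_database_if_not_exists(id="ToDoList")   # create (idempotent)
database = client.get_database_client("ToDoList")                # read by ID

# Query databases by ID.
matches = client.query_databases(
    query="SELECT * FROM r WHERE r.id = @id",
    parameters=[{"name": "@id", "value": "ToDoList"}],
)
print([db["id"] for db in matches])

print([db["id"] for db in client.list_databases()])              # list all databases

client.delete_database("ToDoList")                               # delete
```
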
-## Container examples
-
-The [container_management.py](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/container_management.py) Python sample shows how to do the following tasks. To learn about Azure Cosmos DB collections before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
-
-| Task | API reference |
-| | |
-| [Query for a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/container_management.py#L51-L66) |database.query_containers |
-| [Create a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/container_management.py#L69-L163) |database.create_container |
-| [List all the containers in a database](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/container_management.py#L206-L217) |database.list_containers |
-| [Get a container by its ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/container_management.py#L195-L203) |database.get_container_client |
-| [Manage container's provisioned throughput](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/container_management.py#L166-L192) |container.read_offer, container.replace_throughput|
-| [Delete a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/container_management.py#L220-L229) |database.delete_container |
-
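A condensed sketch of the container tasks above, assuming the hypothetical `ToDoList` database and `Items` container names:

```python
# Illustrative container operations; "ToDoList" and "Items" are hypothetical names.
import os

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(os.environ["ACCOUNT_URI"], credential=os.environ["ACCOUNT_KEY"])
database = client.create_database_if_not_exists(id="ToDoList")

# Create a container with a partition key and dedicated throughput.
container = database.create_container_if_not_exists(
    id="Items",
    partition_key=PartitionKey(path="/category"),
    offer_throughput=400,
)

print([c["id"] for c in database.list_containers()])   # list containers
container = database.get_container_client("Items")     # get a container by ID

# Read the current offer and bump the provisioned throughput.
offer = container.read_offer()
container.replace_throughput(offer.offer_throughput + 100)

database.delete_container("Items")                      # delete the container
```
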
-## Item examples
-
-The [document_management.py](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py) Python sample shows how to do the following tasks. To learn about Azure Cosmos DB documents before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
-
-| Task | API reference |
-| | |
-| [Create items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L31-L43) |container.create_item |
-| [Read an item by its ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L46-L54) |container.read_item |
-| [Read all the items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L57-L68) |container.read_all_items |
-| [Query an item by its ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L71-L83) |container.query_items |
-| [Replace an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L86-L93) |container.replace_item |
-| [Upsert an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L95-L103) |container.upsert_item |
-| [Delete an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L106-L111) |container.delete_item |
-| [Get the change feed of items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/change_feed_management.py) |container.query_items_change_feed |
-
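The sketch below strings the item tasks together against a hypothetical `Items` container partitioned on `/category`; the item contents are made up for illustration.

```python
# Illustrative item operations against a hypothetical "Items" container
# partitioned on /category.
import os

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(os.environ["ACCOUNT_URI"], credential=os.environ["ACCOUNT_KEY"])
database = client.create_database_if_not_exists(id="ToDoList")
container = database.create_container_if_not_exists(
    id="Items", partition_key=PartitionKey(path="/category")
)

item = {"id": "item1", "category": "errands", "description": "Pick up groceries"}

container.create_item(body=item)                                        # create
read_back = container.read_item(item="item1", partition_key="errands")  # point read

# Query items; cross-partition queries must be enabled explicitly.
results = container.query_items(
    query="SELECT * FROM c WHERE c.category = @category",
    parameters=[{"name": "@category", "value": "errands"}],
    enable_cross_partition_query=True,
)
print(list(results))

read_back["description"] = "Pick up groceries and coffee"
container.replace_item(item="item1", body=read_back)              # replace
container.upsert_item(body=read_back)                             # upsert
container.delete_item(item="item1", partition_key="errands")      # delete
```
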
-## Indexing examples
-
-The [index_management.py](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py) Python sample shows how to do the following tasks. To learn about indexing in Azure Cosmos DB before running the following samples, see the [indexing policies](../index-policy.md), [indexing types](../index-overview.md#index-types), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
-
-| Task | API reference |
-| | |
-| [Exclude a specific item from indexing](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L149-L205) | documents.IndexingDirective.Exclude|
-| [Use manual indexing with specific items indexed](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L208-L267) | documents.IndexingDirective.Include |
-| [Exclude paths from indexing](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L270-L340) |Define paths to exclude in `IndexingPolicy` property |
-| [Use range indexes on strings](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L405-L490) | Define indexing policy with range indexes on string data type. `'kind': documents.IndexKind.Range`, `'dataType': documents.DataType.String`|
-| [Perform an index transformation](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L492-L548) |database.replace_container (use the updated indexing policy)|
-| [Use scans when only hash index exists on the path](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L343-L402) | Set `enable_scan_in_query=True` and `enable_cross_partition_query=True` when querying the items |
-
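A minimal sketch of the same idea: create a container with a custom indexing policy, then replace the policy to trigger an index transformation. The container name, excluded path, and policy shape are illustrative assumptions, not values taken from the sample file.

```python
# Illustrative indexing-policy operations; names and paths are hypothetical.
import os

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(os.environ["ACCOUNT_URI"], credential=os.environ["ACCOUNT_KEY"])
database = client.create_database_if_not_exists(id="ToDoList")

# Index every path except /largePayload/*, which is excluded to lower write RU cost.
indexing_policy = {
    "indexingMode": "consistent",
    "automatic": True,
    "includedPaths": [{"path": "/*"}],
    "excludedPaths": [{"path": "/largePayload/*"}],
}

container = database.create_container_if_not_exists(
    id="IndexedItems",
    partition_key=PartitionKey(path="/id"),
    indexing_policy=indexing_policy,
)

# Replacing the container definition with a new policy starts an index transformation.
indexing_policy["excludedPaths"] = []
database.replace_container(
    container, partition_key=PartitionKey(path="/id"), indexing_policy=indexing_policy
)
```
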
-## Next steps
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Sql Api Query Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-query-metrics.md
- Title: SQL query metrics for Azure Cosmos DB SQL API
-description: Learn about how to instrument and debug the SQL query performance of Azure Cosmos DB requests.
------ Previously updated : 04/04/2022---
-# Tuning query performance with Azure Cosmos DB
-
-Azure Cosmos DB provides a [SQL API for querying data](./sql-query-getting-started.md), without requiring schema or secondary indexes. This article provides the following information for developers:
-
-* High-level details on how Azure Cosmos DB's SQL query execution works
-* Details on query request and response headers, and client SDK options
-* Tips and best practices for query performance
-* Examples of how to utilize SQL execution statistics to debug query performance
-
-## About SQL query execution
-
-In Azure Cosmos DB, you store data in containers, which can grow to any [storage size or request throughput](../partitioning-overview.md). Azure Cosmos DB seamlessly scales data across physical partitions under the covers to handle data growth or increase in provisioned throughput. You can issue SQL queries to any container using the REST API or one of the supported [SQL SDKs](sql-api-sdk-dotnet.md).
-
-A brief overview of partitioning: you define a partition key like "city", which determines how data is split across physical partitions. Data belonging to a single partition key (for example, "city" == "Seattle") is stored within a physical partition, but typically a single physical partition has multiple partition keys. When a partition reaches its storage limit, the service seamlessly splits it into two new partitions and divides the partition key space evenly across them. Because partitions are transient, the APIs use an abstraction of a "partition key range", which denotes the ranges of partition key hashes.
-
-When you issue a query to Azure Cosmos DB, the SDK performs these logical steps:
-
-* Parse the SQL query to determine the query execution plan.
-* If the query includes a filter against the partition key, like `SELECT * FROM c WHERE c.city = "Seattle"`, it's routed to a single partition. If the query doesn't have a filter on the partition key, it's executed in all partitions, and the results are merged client side.
-* The query is executed within each partition in series or parallel, based on client configuration. Within each partition, the query might make one or more round trips depending on the query complexity, configured page size, and provisioned throughput of the collection. Each execution returns the number of [request units](../request-units.md) consumed by query execution, and optionally, query execution statistics.
-* The SDK summarizes the query results across partitions. For example, if the query involves an ORDER BY across partitions, results from individual partitions are merge-sorted to return results in globally sorted order. If the query is an aggregation like `COUNT`, the counts from individual partitions are summed to produce the overall count.
-
-The SDKs provide various options for query execution. For example, in .NET these options are available in the `FeedOptions` class. The following table describes these options and how they impact query execution time.
-
-| Option | Description |
-| | -- |
-| `EnableCrossPartitionQuery` | Must be set to true for any query that must be executed across more than one partition. This is an explicit flag that lets you make conscious performance tradeoffs during development. |
-| `EnableScanInQuery` | Must be set to true if you have opted out of indexing, but want to run the query via a scan anyway. Only applicable if indexing for the requested filter path is disabled. |
-| `MaxItemCount` | The maximum number of items to return per round trip to the server. By setting it to -1, you can let the server manage the number of items, or you can lower this value to retrieve only a small number of items per round trip. |
-| `MaxBufferedItemCount` | This is a client-side option, and used to limit the memory consumption when performing cross-partition ORDER BY. A higher value helps reduce the latency of cross-partition sorting. |
-| `MaxDegreeOfParallelism` | Gets or sets the number of concurrent operations run client side during parallel query execution in the Azure Cosmos database service. A positive property value limits the number of concurrent operations to the set value. If it is set to less than 0, the system automatically decides the number of concurrent operations to run. |
-| `PopulateQueryMetrics` | Enables detailed logging of statistics of time spent in various phases of query execution like compilation time, index lookup time, and document load time. You can share output from query statistics with Azure Support to diagnose query performance issues. |
-| `RequestContinuation` | You can resume query execution by passing in the opaque continuation token returned by any query. The continuation token encapsulates all state required for query execution. |
-| `ResponseContinuationTokenLimitInKb` | You can limit the maximum size of the continuation token returned by the server. You might need to set this if your application host has limits on response header size. Setting this may increase the overall duration and RUs consumed for the query. |
-
-For example, consider a query filtered on the partition key against a collection that uses `/city` as the partition key and is provisioned with 100,000 RU/s of throughput. You issue this query using `CreateDocumentQuery<T>` in .NET like the following:
-
-```cs
-IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
- "SELECT * FROM c WHERE c.city = 'Seattle'",
- new FeedOptions
- {
- PopulateQueryMetrics = true,
- MaxItemCount = -1,
- MaxDegreeOfParallelism = -1,
- EnableCrossPartitionQuery = true
- }).AsDocumentQuery();
-
-FeedResponse<dynamic> result = await query.ExecuteNextAsync();
-```
-
-The SDK snippet shown above corresponds to the following REST API request:
-
-```
-POST https://arramacquerymetrics-westus.documents.azure.com/dbs/db/colls/sample/docs HTTP/1.1
-x-ms-continuation:
-x-ms-documentdb-isquery: True
-x-ms-max-item-count: -1
-x-ms-documentdb-query-enablecrosspartition: True
-x-ms-documentdb-query-parallelizecrosspartitionquery: True
-x-ms-documentdb-query-iscontinuationexpected: True
-x-ms-documentdb-populatequerymetrics: True
-x-ms-date: Tue, 27 Jun 2017 21:52:18 GMT
-authorization: type%3dmaster%26ver%3d1.0%26sig%3drp1Hi83Y8aVV5V6LzZ6xhtQVXRAMz0WNMnUuvriUv%2b4%3d
-x-ms-session-token: 7:8,6:2008,5:8,4:2008,3:8,2:2008,1:8,0:8,9:8,8:4008
-Cache-Control: no-cache
-x-ms-consistency-level: Session
-User-Agent: documentdb-dotnet-sdk/1.14.1 Host/32-bit MicrosoftWindowsNT/6.2.9200.0
-x-ms-version: 2017-02-22
-Accept: application/json
-Content-Type: application/query+json
-Host: arramacquerymetrics-westus.documents.azure.com
-Content-Length: 52
-Expect: 100-continue
-
-{"query":"SELECT * FROM c WHERE c.city = 'Seattle'"}
-```
-
-Each query execution page corresponds to a REST API `POST` with the `Accept: application/query+json` header, and the SQL query in the body. Each query makes one or more round trips to the server with the `x-ms-continuation` token echoed between the client and server to resume execution. The configuration options in FeedOptions are passed to the server in the form of request headers. For example, `MaxItemCount` corresponds to `x-ms-max-item-count`.
-
-The request returns the following (truncated for readability) response:
-
-```
-HTTP/1.1 200 Ok
-Cache-Control: no-store, no-cache
-Pragma: no-cache
-Transfer-Encoding: chunked
-Content-Type: application/json
-Server: Microsoft-HTTPAPI/2.0
-Strict-Transport-Security: max-age=31536000
-x-ms-last-state-change-utc: Tue, 27 Jun 2017 21:01:57.561 GMT
-x-ms-resource-quota: documentSize=10240;documentsSize=10485760;documentsCount=-1;collectionSize=10485760;
-x-ms-resource-usage: documentSize=1;documentsSize=884;documentsCount=2000;collectionSize=1408;
-x-ms-item-count: 2000
-x-ms-schemaversion: 1.3
-x-ms-alt-content-path: dbs/db/colls/sample
-x-ms-content-path: +9kEANVq0wA=
-x-ms-xp-role: 1
-x-ms-documentdb-query-metrics: totalExecutionTimeInMs=33.67;queryCompileTimeInMs=0.06;queryLogicalPlanBuildTimeInMs=0.02;queryPhysicalPlanBuildTimeInMs=0.10;queryOptimizationTimeInMs=0.00;VMExecutionTimeInMs=32.56;indexLookupTimeInMs=0.36;documentLoadTimeInMs=9.58;systemFunctionExecuteTimeInMs=0.00;userFunctionExecuteTimeInMs=0.00;retrievedDocumentCount=2000;retrievedDocumentSize=1125600;outputDocumentCount=2000;writeOutputTimeInMs=18.10;indexUtilizationRatio=1.00
-x-ms-request-charge: 604.42
-x-ms-serviceversion: version=1.14.34.4
-x-ms-activity-id: 0df8b5f6-83b9-4493-abda-cce6d0f91486
-x-ms-session-token: 2:2008
-x-ms-gatewayversion: version=1.14.33.2
-Date: Tue, 27 Jun 2017 21:59:49 GMT
-```
-
-The key response headers returned from the query include the following:
-
-| Option | Description |
-| | -- |
-| `x-ms-item-count` | The number of items returned in the response. This depends on the supplied `x-ms-max-item-count`, the number of items that can fit within the maximum response payload size, the provisioned throughput, and query execution time. |
-| `x-ms-continuation` | The continuation token to resume execution of the query, if additional results are available. |
-| `x-ms-documentdb-query-metrics` | The query statistics for the execution. This is a delimited string containing statistics of time spent in the various phases of query execution. Returned if `x-ms-documentdb-populatequerymetrics` is set to `True`. |
-| `x-ms-request-charge` | The number of [request units](../request-units.md) consumed by the query. |
-
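Because the `x-ms-documentdb-query-metrics` value is a semicolon-delimited list of `name=value` pairs, it can be split into a dictionary for logging or analysis regardless of which SDK issued the request. A small sketch (the header string below is abbreviated from the sample response above):

```python
# Split the x-ms-documentdb-query-metrics header into name/value pairs.
# The sample string is abbreviated from the response shown above.
header = (
    "totalExecutionTimeInMs=33.67;queryCompileTimeInMs=0.06;"
    "indexLookupTimeInMs=0.36;documentLoadTimeInMs=9.58;"
    "retrievedDocumentCount=2000;outputDocumentCount=2000"
)

metrics = {name: float(value)
           for name, value in (pair.split("=") for pair in header.split(";"))}

print(metrics["totalExecutionTimeInMs"])  # 33.67
```
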
-For details on the REST API request headers and options, see [Querying resources using the REST API](/rest/api/cosmos-db/querying-cosmosdb-resources-using-the-rest-api).
-
-## Best practices for query performance
-The following are the most common factors that impact Azure Cosmos DB query performance. We dig deeper into each of these topics in this article.
-
-| Factor | Tip |
-| | --|
-| Provisioned throughput | Measure RU per query, and ensure that you have the required provisioned throughput for your queries. |
-| Partitioning and partition keys | Favor queries with the partition key value in the filter clause for low latency. |
-| SDK and query options | Follow SDK best practices like direct connectivity, and tune client-side query execution options. |
-| Indexing Policy | Ensure that you have the required indexing paths/policy for the query. |
-| Query execution metrics | Analyze the query execution metrics to identify potential rewrites of query and data shapes. |
-
-### Provisioned throughput
-In Azure Cosmos DB, you create containers of data, each with reserved throughput expressed in request units (RU) per second. A read of a 1-KB document is 1 RU, and every operation (including queries) is normalized to a fixed number of RUs based on its complexity. For example, if you have 1,000 RU/s provisioned for your container and a query like `SELECT * FROM c WHERE c.city = 'Seattle'` consumes 5 RUs, then you can perform (1,000 RU/s) / (5 RU/query) = 200 such queries per second.
-
-If you submit more than 200 queries per second, the service starts rate-limiting incoming requests above that rate. The SDKs automatically handle this case by performing a backoff/retry, so you might notice higher latency for these queries. Increasing the provisioned throughput to the required value improves your query latency and throughput.
-
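As a back-of-the-envelope check, the sustainable query rate is simply the provisioned RU/s divided by the per-query charge. The sketch below reuses the 1,000 RU/s and 5 RU figures from the example above.

```python
# Sustainable query rate = provisioned RU/s divided by the RU charge per query.
provisioned_ru_per_sec = 1000   # container throughput from the example above
ru_per_query = 5                # x-ms-request-charge observed for the query

max_queries_per_sec = provisioned_ru_per_sec / ru_per_query
print(max_queries_per_sec)      # 200.0 queries per second before rate limiting
```
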
-To learn more about request units, see [Request units](../request-units.md).
-
-### Partitioning and partition keys
-With Azure Cosmos DB, queries typically perform in the following order, from fastest/most efficient to slowest/least efficient:
-
-* GET on a single partition key and item key
-* Query with a filter clause on a single partition key
-* Query without an equality or range filter clause on any property
-* Query without filters
-
-Queries that need to consult all partitions have higher latency and can consume more RUs. Because each partition has automatic indexing against all properties, the query can still be served efficiently from the index in this case. You can make queries that span partitions faster by using the parallelism options.
-
-To learn more about partitioning and partition keys, see [Partitioning in Azure Cosmos DB](../partitioning-overview.md).
-
-### SDK and query options
-See [Query performance tips](performance-tips-query-sdk.md) and [Performance testing](performance-testing.md) for how to get the best client-side performance from Azure Cosmos DB using our SDKs.
-
-### Network latency
-See [Azure Cosmos DB global distribution](tutorial-global-distribution-sql-api.md) for how to set up global distribution, and connect to the closest region. Network latency has a significant impact on query performance when you need to make multiple round-trips or retrieve a large result set from the query.
-
-The section on query execution metrics explains how to retrieve the server execution time of queries (`totalExecutionTimeInMs`), so that you can differentiate between time spent in query execution and time spent in network transit.
-
-### Indexing policy
-See [Configuring indexing policy](../index-policy.md) for indexing paths, kinds, and modes, and how they impact query execution. By default, the indexing policy uses range indexing for strings, which is effective for equality queries. If you need range queries for strings, we recommend specifying the Range index type for all strings.
-
-By default, Azure Cosmos DB applies automatic indexing to all data. For high-performance insert scenarios, consider excluding paths, because this reduces the RU cost of each insert operation.
-
-## Query execution metrics
-You can obtain detailed metrics on query execution by passing in the optional `x-ms-documentdb-populatequerymetrics` header (`FeedOptions.PopulateQueryMetrics` in the .NET SDK). The value returned in `x-ms-documentdb-query-metrics` has the following key-value pairs meant for advanced troubleshooting of query execution.
-
-```cs
-IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
- "SELECT * FROM c WHERE c.city = 'Seattle'",
- new FeedOptions
- {
- PopulateQueryMetrics = true,
- }).AsDocumentQuery();
-
-FeedResponse<dynamic> result = await query.ExecuteNextAsync();
-
-// Returns metrics by partition key range Id
-IReadOnlyDictionary<string, QueryMetrics> metrics = result.QueryMetrics;
-
-```
-
-| Metric | Unit | Description |
-| | --| -- |
-| `totalExecutionTimeInMs` | milliseconds | Query execution time |
-| `queryCompileTimeInMs` | milliseconds | Query compile time |
-| `queryLogicalPlanBuildTimeInMs` | milliseconds | Time to build logical query plan |
-| `queryPhysicalPlanBuildTimeInMs` | milliseconds | Time to build physical query plan |
-| `queryOptimizationTimeInMs` | milliseconds | Time spent in optimizing query |
-| `VMExecutionTimeInMs` | milliseconds | Time spent in query runtime |
-| `indexLookupTimeInMs` | milliseconds | Time spent in physical index layer |
-| `documentLoadTimeInMs` | milliseconds | Time spent in loading documents |
-| `systemFunctionExecuteTimeInMs` | milliseconds | Total time spent executing system (built-in) functions |
-| `userFunctionExecuteTimeInMs` | milliseconds | Total time spent executing user-defined functions |
-| `retrievedDocumentCount` | count | Total number of retrieved documents |
-| `retrievedDocumentSize` | bytes | Total size of retrieved documents |
-| `outputDocumentCount` | count | Number of output documents |
-| `writeOutputTimeInMs` | milliseconds | Time spent writing the output |
-| `indexUtilizationRatio` | ratio (<=1) | Ratio of number of documents matched by the filter to the number of documents loaded |
-
-The client SDKs may internally make multiple query operations to serve the query within each partition. The client makes more than one call per partition if the total results exceed `x-ms-max-item-count`, if the query exceeds the provisioned throughput for the partition, if the query payload reaches the maximum size per page, or if the query reaches the system-allocated timeout limit. Each partial query execution returns an `x-ms-documentdb-query-metrics` header for that page.
-
-Here are some sample queries, and how to interpret some of the metrics returned from query execution:
-
-| Query | Sample Metric | Description |
-| | --| -- |
-| `SELECT TOP 100 * FROM c` | `"RetrievedDocumentCount": 101` | The number of documents retrieved is 100+1 to match the TOP clause. Query time is mostly spent in `WriteOutputTime` and `DocumentLoadTime` since it is a scan. |
-| `SELECT TOP 500 * FROM c` | `"RetrievedDocumentCount": 501` | RetrievedDocumentCount is now higher (500+1 to match the TOP clause). |
-| `SELECT * FROM c WHERE c.N = 55` | `"IndexLookupTime": "00:00:00.0009500"` | About 0.9 ms is spent in IndexLookupTime for a key lookup, because it's an index lookup on `/N/?`. |
-| `SELECT * FROM c WHERE c.N > 55` | `"IndexLookupTime": "00:00:00.0017700"` | Slightly more time (1.7 ms) spent in IndexLookupTime over a range scan, because it's an index lookup on `/N/?`. |
-| `SELECT TOP 500 c.N FROM c` | `"IndexLookupTime": "00:00:00.0017700"` | Same time spent on `DocumentLoadTime` as previous queries, but lower `WriteOutputTime` because we're projecting only one property. |
-| `SELECT TOP 500 udf.toPercent(c.N) FROM c` | `"UserDefinedFunctionExecutionTime": "00:00:00.2136500"` | About 213 ms is spent in `UserDefinedFunctionExecutionTime` executing the UDF on each value of `c.N`. |
-| `SELECT TOP 500 c.Name FROM c WHERE STARTSWITH(c.Name, 'Den')` | `"IndexLookupTime": "00:00:00.0006400", "SystemFunctionExecutionTime": "00:00:00.0074100"` | About 0.6 ms is spent in `IndexLookupTime` on `/Name/?`. Most of the query execution time (~7 ms) in `SystemFunctionExecutionTime`. |
-| `SELECT TOP 500 c.Name FROM c WHERE STARTSWITH(LOWER(c.Name), 'den')` | `"IndexLookupTime": "00:00:00", "RetrievedDocumentCount": 2491, "OutputDocumentCount": 500` | Query is performed as a scan because it uses `LOWER`, and 500 out of 2491 retrieved documents are returned. |
--
-## Next steps
-* To learn about the supported SQL query operators and keywords, see [SQL query](sql-query-getting-started.md).
-* To learn about request units, see [request units](../request-units.md).
-* To learn about indexing policy, see [indexing policy](../index-policy.md)
cosmos-db Sql Api Sdk Async Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-async-java.md
- Title: 'Azure Cosmos DB: SQL Async Java API, SDK & resources'
-description: Learn all about the SQL Async Java API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.
---- Previously updated : 11/11/2021-----
-# Azure Cosmos DB Async Java SDK for SQL API (legacy): Release notes and resources
--
-The SQL API Async Java SDK differs from the SQL API Java SDK by providing asynchronous operations with support from the [Netty library](https://netty.io/). The pre-existing [SQL API Java SDK](sql-api-sdk-java.md) doesn't support asynchronous operations.
-
-> [!IMPORTANT]
-> This is *not* the latest Java SDK for Azure Cosmos DB! Consider using [Azure Cosmos DB Java SDK v4](sql-api-sdk-java-v4.md) for your project. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and the [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide.
->
-
-> [!IMPORTANT]
-> On August 31, 2024 the Azure Cosmos DB Async Java SDK v2.x
-> will be retired; the SDK and all applications using the SDK
-> **will continue to function**; Azure Cosmos DB will simply cease
-> to provide further maintenance and support for this SDK.
-> We recommend following the instructions above to migrate to
-> Azure Cosmos DB Java SDK v4.
->
-
-| | Links |
-|||
-| **Release Notes** | [Release notes for Async Java SDK](https://github.com/Azure/azure-cosmosdb-jav) |
-| **SDK Download** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb) |
-| **API documentation** |[Java API reference documentation](/java/api/com.microsoft.azure.cosmosdb.rx.asyncdocumentclient) |
-| **Contribute to SDK** | [GitHub](https://github.com/Azure/azure-cosmosdb-java) |
-| **Get started** | [Get started with the Async Java SDK](https://github.com/Azure-Samples/azure-cosmos-db-sql-api-async-java-getting-started) |
-| **Code sample** | [GitHub](https://github.com/Azure/azure-cosmosdb-java#usage-code-sample)|
-| **Performance tips**| [GitHub readme](https://github.com/Azure/azure-cosmosdb-java#guide-for-prod)|
-| **Minimum supported runtime**|[JDK 8](/java/azure/jdk/) |
-
-## Release history
-
-Release history is maintained in the Azure Cosmos DB Java SDK source repo. For a detailed list of feature releases and bugs fixed in each release, see the [SDK changelog documentation](https://github.com/Azure/azure-cosmosdb-jav)
-
-## FAQ
-
-## See also
-To learn more about Cosmos DB, see the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Sdk Bulk Executor Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-bulk-executor-dot-net.md
- Title: 'Azure Cosmos DB: Bulk executor .NET API, SDK & resources'
-description: Learn all about the bulk executor .NET API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB bulk executor .NET SDK.
---- Previously updated : 04/06/2021----
-# .NET bulk executor library: Download information (Legacy)
--
-| | Link/notes |
-|||
-| **Description**| The .NET bulk executor library allows client applications to perform bulk operations on Azure Cosmos DB accounts. This library provides BulkImport, BulkUpdate, and BulkDelete namespaces. The BulkImport module can bulk ingest documents in an optimized way such that the throughput provisioned for a collection is consumed to its maximum extent. The BulkUpdate module can bulk update existing data in Azure Cosmos containers as patches. The BulkDelete module can bulk delete documents in an optimized way such that the throughput provisioned for a collection is consumed to its maximum extent.|
-|**SDK download**| [NuGet](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.BulkExecutor/) |
-| **Bulk executor library in GitHub**| [GitHub](https://github.com/Azure/azure-cosmosdb-bulkexecutor-dotnet-getting-started)|
-|**API documentation**|[.NET API reference documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor)|
-|**Get started**|[Get started with the bulk executor library .NET SDK](bulk-executor-dot-net.md)|
-| **Current supported framework**| Microsoft .NET Framework 4.5.2, 4.6.1 and .NET Standard 2.0 |
-
-> [!NOTE]
-> If you are using bulk executor, please see the latest version 3.x of the [.NET SDK](tutorial-sql-api-dotnet-bulk-import.md), which has bulk executor built into the SDK.
-
-## Release notes
-
-### <a name="2.4.1-preview"></a>2.4.1-preview
-
-* Fixed TotalElapsedTime in the response of BulkDelete to correctly measure the total time including any retries.
-
-### <a name="2.4.0-preview"></a>2.4.0-preview
-
-* Changed SDK dependency to >= 2.5.1
-
-### <a name="2.3.0-preview2"></a>2.3.0-preview2
-
-* Added support for graph bulk executor to accept ttl on vertices and edges
-
-### <a name="2.2.0-preview2"></a>2.2.0-preview2
-
-* Fixed an issue that caused exceptions during elastic scaling of Azure Cosmos DB when running in Gateway mode. This fix makes it functionally equivalent to the 1.4.1 release.
-
-### <a name="2.1.0-preview2"></a>2.1.0-preview2
-
-* Added BulkDelete support for SQL API accounts to accept partition key, document ID tuples to delete. This change makes it functionally equivalent to 1.4.0 release.
-
-### <a name="2.0.0-preview2"></a>2.0.0-preview2
-
-* Included MongoBulkExecutor to support .NET Standard 2.0. This makes it functionally equivalent to the 1.3.0 release, with the addition of .NET Standard 2.0 as a supported target framework.
-
-### <a name="2.0.0-preview"></a>2.0.0-preview
-
-* Added .NET Standard 2.0 as one of the supported target frameworks to make the bulk executor library work with .NET Core applications.
-
-### <a name="1.8.9"></a>1.8.9
-
-* Fixed an issue with BulkDeleteAsync when values with escaped quotes were passed as input parameters.
-
-### <a name="1.8.8"></a>1.8.8
-
-* Fixed an issue on MongoBulkExecutor that was increasing the document size unexpectedly by adding padding and in some cases, going over the maximum document size limit.
-
-### <a name="1.8.7"></a>1.8.7
-
-* Fixed an issue with BulkDeleteAsync when the Collection has nested partition key paths.
-
-### <a name="1.8.6"></a>1.8.6
-
-* MongoBulkExecutor now implements IDisposable and is expected to be disposed of after use.
-
-### <a name="1.8.5"></a>1.8.5
-
-* Removed lock on SDK version. Package is now dependent on SDK >= 2.5.1.
-
-### <a name="1.8.4"></a>1.8.4
-
-* Fixed handling of identifiers when calling BulkImport with a list of POCO objects with numeric values.
-
-### <a name="1.8.3"></a>1.8.3
-
-* Fixed TotalElapsedTime in the response of BulkDelete to correctly measure the total time including any retries.
-
-### <a name="1.8.2"></a>1.8.2
-
-* Fixed high CPU consumption on certain scenarios.
-* Tracing now uses TraceSource. Users can define listeners for the `BulkExecutorTrace` source.
-* Fixed a rare scenario that could cause a lock when sending documents close to 2 MB in size.
-
-### <a name="1.6.0"></a>1.6.0
-
-* Updated the bulk executor to now use the latest version of the Azure Cosmos DB .NET SDK (2.4.0)
-
-### <a name="1.5.0"></a>1.5.0
-
-* Added support for graph bulk executor to accept ttl on vertices and edges
-
-### <a name="1.4.1"></a>1.4.1
-
-* Fixed an issue that caused exceptions during elastic scaling of Azure Cosmos DB when running in Gateway mode.
-
-### <a name="1.4.0"></a>1.4.0
-
-* Added BulkDelete support for SQL API accounts to accept partition key, document ID tuples to delete.
-
-### <a name="1.3.0"></a>1.3.0
-
-* Fixed a formatting issue in the user agent used by bulk executor.
-
-### <a name="1.2.0"></a>1.2.0
-
-* Improved the bulk executor import and update APIs to transparently adapt to elastic scaling of an Azure Cosmos container when storage exceeds the current capacity, without throwing exceptions.
-
-### <a name="1.1.2"></a>1.1.2
-
-* Bumped up the DocumentDB .NET SDK dependency to version 2.1.3.
-
-### <a name="1.1.1"></a>1.1.1
-
-* Fixed an issue that caused bulk executor to throw a JSRT error while importing to fixed collections.
-
-### <a name="1.1.0"></a>1.1.0
-
-* Added support for BulkDelete operation for Azure Cosmos DB SQL API accounts.
-* Added support for BulkImport operation for accounts with Azure Cosmos DB's API for MongoDB.
-* Bumped up the DocumentDB .NET SDK dependency to version 2.0.0.
-
-### <a name="1.0.2"></a>1.0.2
-
-* Added support for BulkImport operation for Azure Cosmos DB Gremlin API accounts.
-
-### <a name="1.0.1"></a>1.0.1
-
-* Minor bug fix to the BulkImport operation for Azure Cosmos DB SQL API accounts.
-
-### <a name="1.0.0"></a>1.0.0
-
-* Added support for BulkImport and BulkUpdate operations for Azure Cosmos DB SQL API accounts.
-
-## Next steps
-
-To learn about the bulk executor Java library, see the following article:
-
-[Java bulk executor library SDK and release information](sql-api-sdk-bulk-executor-java.md)
cosmos-db Sql Api Sdk Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-bulk-executor-java.md
- Title: 'Azure Cosmos DB: Bulk executor Java API, SDK & resources'
-description: Learn all about the bulk executor Java API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB bulk executor Java SDK.
---- Previously updated : 04/06/2021-----
-# Java bulk executor library: Download information
--
-> [!IMPORTANT]
-> This is *not* the latest Java Bulk Executor for Azure Cosmos DB! Consider using [Azure Cosmos DB Java SDK v4](bulk-executor-java.md) for performing bulk operations. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and the [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide.
->
-
-> [!IMPORTANT]
-> On February 29, 2024 the Azure Cosmos DB Sync Java SDK v2.x
-> will be retired; the SDK and all applications using the SDK including Bulk Executor
-> **will continue to function**; Azure Cosmos DB will simply cease
-> to provide further maintenance and support for this SDK.
-> We recommend following the instructions above to migrate to
-> Azure Cosmos DB Java SDK v4.
->
-
-| | Link/notes |
-|||
-|**Description**|The bulk executor library allows client applications to perform bulk operations in Azure Cosmos DB accounts. The bulk executor library provides the BulkImport and BulkUpdate namespaces. The BulkImport module can bulk ingest documents in an optimized way such that the throughput provisioned for a collection is consumed to its maximum extent. The BulkUpdate module can bulk update existing data in Azure Cosmos containers as patches.|
-|**SDK download**|[Maven](https://search.maven.org/#search%7Cga%7C1%7Cdocumentdb-bulkexecutor)|
-|**Bulk executor library in GitHub**|[GitHub](https://github.com/Azure/azure-cosmosdb-bulkexecutor-java-getting-started)|
-| **API documentation**| [Java API reference documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor)|
-|**Get started**|[Get started with the bulk executor library Java SDK](bulk-executor-java.md)|
-|**Minimum supported runtime**|[Java Development Kit (JDK) 7+](/java/azure/jdk/)|
-
-## Release notes
-### <a name="2.12.3"></a>2.12.3
-
-* Fixed the retry policy when `GoneException` is wrapped in `IllegalStateException`. This change is necessary to make sure the Gateway cache is refreshed on 410 so that the Spark connector (for Spark 2.4) can use a custom retry policy to allow queries to succeed during partition splits.
-
-### <a name="2.12.2"></a>2.12.2
-
-* Fix an issue resulting in documents not always being imported on transient errors.
-
-### <a name="2.12.1"></a>2.12.1
-
-* Upgrade to use latest Cosmos Core SDK version.
-
-### <a name="2.12.0"></a>2.12.0
-
-* Improved handling of the RU budget provided through the Spark connector for bulk operations. An initial one-time bulk import is performed from the Spark connector with a baseBatchSize, and the RU consumption for that batch import is collected.
- A miniBatchSizeAdjustmentFactor is calculated based on that RU consumption, and the mini-batch size is adjusted accordingly. Based on the elapsed time and the consumed RUs for each batch import, a sleep duration is calculated to limit the RU consumption per second and is used to pause the thread before the next batch import.
-
-### <a name="2.11.0"></a>2.11.0
-
-* Fix a bug preventing bulk updates when using a nested partition key
-
-### <a name="2.10.0"></a>2.10.0
-
-* Fix for DocumentAnalyzer.java to correctly extract nested partition key values from json.
-
-### <a name="2.9.4"></a>2.9.4
-
-* Add functionality in BulkDelete operations to retry on specific failures and also return a list of failures to the user that could be retried.
-
-### <a name="2.9.3"></a>2.9.3
-
-* Update for Cosmos SDK version 2.4.7.
-
-### <a name="2.9.2"></a>2.9.2
-
-* Fix for 'mergeAll' to continue on 'id' and partition key value so that any patched document properties which are placed after 'id' and partition key value get added to the updated item list.
-
-### <a name="2.9.1"></a>2.9.1
-
-* Updated the starting degree of concurrency to 1 and added debug logs for mini-batches.
cosmos-db Sql Api Sdk Dotnet Changefeed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet-changefeed.md
- Title: Azure Cosmos DB .NET change feed Processor API, SDK release notes
-description: Learn all about the Change Feed Processor API and SDK including release dates, retirement dates, and changes made between each version of the .NET Change Feed Processor SDK.
---- Previously updated : 04/06/2021----
-# .NET Change Feed Processor SDK: Download and release notes (Legacy)
--
-| | Links |
-|||
-|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.ChangeFeedProcessor/)|
-|**API documentation**|[Change Feed Processor library API reference documentation](/dotnet/api/microsoft.azure.documents.changefeedprocessor)|
-|**Get started**|[Get started with the Change Feed Processor .NET SDK](../change-feed.md)|
-|**Current supported framework**| [Microsoft .NET Framework 4.5](https://www.microsoft.com/download/details.aspx?id=30653)<br/> [Microsoft .NET Core](https://dotnet.microsoft.com/download) |
-
-> [!NOTE]
-> If you are using change feed processor, please see the latest version 3.x of the [.NET SDK](change-feed-processor.md), which has change feed built into the SDK.
-
-## Release notes
-
-### v2 builds
-
-### <a id="2.4.0"></a>2.4.0
-* Added support for lease collections that can be partitioned with the partition key defined as /partitionKey. Prior to this change, the lease collection's partition key had to be defined as /id.
-* This release allows using lease collections with Gremlin API, as Gremlin collections cannot have partition key defined as /id.
-
-### <a id="2.3.2"></a>2.3.2
-* Added lease store compatibility with the V3 SDK, which enables hot migration paths. An application can migrate to the V3 SDK and migrate back to the Change Feed Processor library without losing any state.
-
-### <a id="2.3.1"></a>2.3.1
-* Corrected a case when the `FeedProcessing.ChangeFeedObserverCloseReason.Unknown` close reason was sent to `FeedProcessing.IChangeFeedObserver.CloseAsync` if the partition cannot be found or if the target replica is not up to date with the read session. In these cases, the `FeedProcessing.ChangeFeedObserverCloseReason.ResourceGone` and `FeedProcessing.ChangeFeedObserverCloseReason.ReadSessionNotAvailable` close reasons are now used.
-* Added a new close reason `FeedProcessing.ChangeFeedObserverCloseReason.ReadSessionNotAvailable` that is sent to close the change feed observer when the target replica is not up to date with the read session.
-
-### <a id="2.3.0"></a>2.3.0
-* Added a new method `ChangeFeedProcessorBuilder.WithCheckpointPartitionProcessorFactory` and corresponding public interface `ICheckpointPartitionProcessorFactory`. This allows an implementation of the `IPartitionProcessor` interface to use built-in checkpointing mechanism. The new factory is similar to the existing `IPartitionProcessorFactory`, except that its `Create` method also takes the `ILeaseCheckpointer` parameter.
-* Only one of the two methods, either `ChangeFeedProcessorBuilder.WithPartitionProcessorFactory` or `ChangeFeedProcessorBuilder.WithCheckpointPartitionProcessorFactory`, can be used for the same `ChangeFeedProcessorBuilder` instance.
-
-### <a id="2.2.8"></a>2.2.8
-* Stability and diagnosability improvements:
-    * Added support to detect when reading the change feed takes a long time. When it takes longer than the value specified by the `ChangeFeedProcessorOptions.ChangeFeedTimeout` property, the following steps are taken:
-        * The operation to read the change feed on the problematic partition is aborted.
-        * The change feed processor instance drops ownership of the problematic lease. The dropped lease will be picked up during the next lease acquire step that will be done by the same or a different change feed processor instance. This way, reading the change feed will start over.
-        * An issue is reported to the health monitor. The default health monitor sends all reported issues to the trace log.
-    * Added a new public property: `ChangeFeedProcessorOptions.ChangeFeedTimeout`. The default value of this property is 10 minutes.
- * Added a new public enum value: `Monitoring.MonitoredOperation.ReadChangeFeed`. When the value of `HealthMonitoringRecord.Operation` is set to `Monitoring.MonitoredOperation.ReadChangeFeed`, it indicates the health issue is related to reading change feed.
-
-### <a id="2.2.7"></a>2.2.7
-* Improved the load-balancing strategy for the scenario when getting all leases takes longer than the lease expiration interval, for example, due to network issues:
-    * In this scenario, the load-balancing algorithm used to falsely consider leases as expired, causing leases to be stolen from active owners. This could trigger unnecessary rebalancing of many leases.
-    * This issue is fixed in this release by avoiding retries on conflict while acquiring an expired lease whose owner hasn't changed, and by postponing acquisition of the expired lease to the next load-balancing iteration.
-
-### <a id="2.2.6"></a>2.2.6
-* Improved handling of Observer exceptions.
-* Richer information on Observer errors:
- * When an Observer is closed due to an exception thrown by Observer's ProcessChangesAsync, the CloseAsync will now receive the reason parameter set to ChangeFeedObserverCloseReason.ObserverError.
- * Added traces to identify errors within user code in an Observer.
-
-### <a id="2.2.5"></a>2.2.5
-* Added support for handling splits in collections that use shared database throughput.
-    * This release fixes an issue that may occur during a split in collections using shared database throughput, when the split results in partition rebalancing with only one child partition key range created, rather than two. When this happens, the Change Feed Processor may get stuck deleting the lease for the old partition key range and not creating new leases. The issue is fixed in this release.
-
-### <a id="2.2.4"></a>2.2.4
-* Added a new property, ChangeFeedProcessorOptions.StartContinuation, to support starting the change feed from a request continuation token. It's used only when the lease collection is empty or a lease doesn't have a ContinuationToken set. For leases in the lease collection that have a ContinuationToken set, the ContinuationToken is used and ChangeFeedProcessorOptions.StartContinuation is ignored.
-
-### <a id="2.2.3"></a>2.2.3
-* Added support for using custom store to persist continuation tokens per partition.
- * For example, a custom lease store can be Azure Cosmos DB lease collection partitioned in any custom way.
- * Custom lease stores can use new extensibility point ChangeFeedProcessorBuilder.WithLeaseStoreManager(ILeaseStoreManager) and ILeaseStoreManager public interface.
- * Refactored the ILeaseManager interface into multiple role interfaces.
-* Minor breaking change: removed extensibility point ChangeFeedProcessorBuilder.WithLeaseManager(ILeaseManager), use ChangeFeedProcessorBuilder.WithLeaseStoreManager(ILeaseStoreManager) instead.
-
-### <a id="2.2.2"></a>2.2.2
-* This release fixes an issue that occurs while processing a split in the monitored collection when using a partitioned lease collection. When processing a lease for the split partition, the lease corresponding to that partition may not be deleted. The issue is fixed in this release.
-
-### <a id="2.2.1"></a>2.2.1
-* Fixed Estimator calculation for accounts with multiple write regions and new Session Token format.
-
-### <a id="2.2.0"></a>2.2.0
-* Added support for partitioned lease collections. The partition key must be defined as /id.
-* Minor breaking change: the methods of the IChangeFeedDocumentClient interface and the ChangeFeedDocumentClient class were changed to include RequestOptions and CancellationToken parameters. IChangeFeedDocumentClient is an advanced extensibility point that allows you to provide a custom implementation of the Document Client to use with the Change Feed Processor, for example, to decorate DocumentClient and intercept all calls to it for extra tracing, error handling, and so on. With this update, code that implements IChangeFeedDocumentClient will need to be changed to include the new parameters in the implementation.
-* Minor diagnostics improvements.
-
-### <a id="2.1.0"></a>2.1.0
-* Added new API, Task&lt;IReadOnlyList&lt;RemainingPartitionWork&gt;&gt; IRemainingWorkEstimator.GetEstimatedRemainingWorkPerPartitionAsync(). This can be used to get estimated work for each partition.
-* Supports Microsoft.Azure.DocumentDB SDK 2.0. Requires Microsoft.Azure.DocumentDB 2.0 or later.
-
-### <a id="2.0.6"></a>2.0.6
-* Added ChangeFeedEventHost.HostName public property for compatibility with v1.
-
-### <a id="2.0.5"></a>2.0.5
-* Fixed a race condition that occurs during a partition split. The race condition may lead to acquiring a lease and immediately losing it during the partition split, causing contention. The race condition issue is fixed with this release.
-
-### <a id="2.0.4"></a>2.0.4
-* GA SDK
-
-### <a id="2.0.3-prerelease"></a>2.0.3-prerelease
-* Fixed the following issues:
- * When partition split happens, there could be duplicate processing of documents modified before the split.
- * The GetEstimatedRemainingWork API returned 0 when no leases were present in the lease collection.
-
-* The following exceptions are made public. Extensions that implement IPartitionProcessor can throw these exceptions.
- * Microsoft.Azure.Documents.ChangeFeedProcessor.Exceptions.LeaseLostException.
- * Microsoft.Azure.Documents.ChangeFeedProcessor.Exceptions.PartitionException.
- * Microsoft.Azure.Documents.ChangeFeedProcessor.Exceptions.PartitionNotFoundException.
- * Microsoft.Azure.Documents.ChangeFeedProcessor.Exceptions.PartitionSplitException.
-
-### <a id="2.0.2-prerelease"></a>2.0.2-prerelease
-* Minor API changes:
- * Removed ChangeFeedProcessorOptions.IsAutoCheckpointEnabled that was marked as obsolete.
-
-### <a id="2.0.1-prerelease"></a>2.0.1-prerelease
-* Stability improvements:
-    * Better handling of lease store initialization. When the lease store is empty, only one instance of the processor can initialize it; the others wait.
-    * More stable and efficient lease renewal and release. Renewing and releasing a lease for one partition is independent of renewing others. In v1, that was done sequentially for all partitions.
-* New v2 API:
- * Builder pattern for flexible construction of the processor: the ChangeFeedProcessorBuilder class.
- * Can take any combination of parameters.
- * Can take DocumentClient instance for monitoring and/or lease collection (not available in v1).
- * IChangeFeedObserver.ProcessChangesAsync now takes CancellationToken.
- * IRemainingWorkEstimator - the remaining work estimator can be used separately from the processor.
- * New extensibility points:
- * IPartitionLoadBalancingStrategy - for custom load-balancing of partitions between instances of the processor.
- * ILease, ILeaseManager - for custom lease management.
- * IPartitionProcessor - for custom processing changes on a partition.
-* Logging - uses [LibLog](https://github.com/damianh/LibLog) library.
-* 100% backward compatible with v1 API.
-* New code base.
-* Compatible with [SQL .NET SDK](sql-api-sdk-dotnet.md) versions 1.21.1 and above.
-
-### v1 builds
-
-### <a id="1.3.3"></a>1.3.3
-* Added more logging.
-* Fixed a DocumentClient leak when calling the pending work estimation multiple times.
-
-### <a id="1.3.2"></a>1.3.2
-* Fixes in the pending work estimation.
-
-### <a id="1.3.1"></a>1.3.1
-* Stability improvements.
- * Fix for handling canceled tasks issue that might lead to stopped observers on some partitions.
-* Support for manual checkpointing.
-* Compatible with [SQL .NET SDK](sql-api-sdk-dotnet.md) versions 1.21 and above.
-
-### <a id="1.2.0"></a>1.2.0
-* Adds support for .NET Standard 2.0. The package now supports `netstandard2.0` and `net451` framework monikers.
-* Compatible with [SQL .NET SDK](sql-api-sdk-dotnet.md) versions 1.17.0 and above.
-* Compatible with [SQL .NET Core SDK](sql-api-sdk-dotnet-core.md) versions 1.5.1 and above.
-
-### <a id="1.1.1"></a>1.1.1
-* Fixes an issue with the calculation of the estimate of remaining work when the Change Feed was empty or no work was pending.
-* Compatible with [SQL .NET SDK](sql-api-sdk-dotnet.md) versions 1.13.2 and above.
-
-### <a id="1.1.0"></a>1.1.0
-* Added a method to obtain an estimate of remaining work to be processed in the Change Feed.
-* Compatible with [SQL .NET SDK](sql-api-sdk-dotnet.md) versions 1.13.2 and above.
-
-### <a id="1.0.0"></a>1.0.0
-* GA SDK
-* Compatible with [SQL .NET SDK](sql-api-sdk-dotnet.md) versions 1.14.1 and below.
-
-## Release & Retirement dates
-
-Microsoft provides notification at least **12 months** in advance of retiring an SDK to smooth the transition to a newer, supported version. New features, functionality, and optimizations are added only to the current SDK, so we recommend that you always upgrade to the latest SDK version as early as possible.
-
-> [!WARNING]
-> After 31 August 2022, Azure Cosmos DB will no longer make bug fixes, add new features, and provide support to versions 1.x of the Azure Cosmos DB .NET or .NET Core SDK for SQL API. If you prefer not to upgrade, requests sent from version 1.x of the SDK will continue to be served by the Azure Cosmos DB service.
-
-<br/>
-
-| Version | Release Date | Retirement Date |
-| | | |
-| [2.4.0](#2.4.0) |May 6, 2021 | |
-| [2.3.2](#2.3.2) |August 11, 2020 | |
-| [2.3.1](#2.3.1) |July 30, 2020 | |
-| [2.3.0](#2.3.0) |April 2, 2020 | |
-| [2.2.8](#2.2.8) |October 28, 2019 | |
-| [2.2.7](#2.2.7) |May 14, 2019 | |
-| [2.2.6](#2.2.6) |January 29, 2019 | |
-| [2.2.5](#2.2.5) |December 13, 2018 | |
-| [2.2.4](#2.2.4) |November 29, 2018 | |
-| [2.2.3](#2.2.3) |November 19, 2018 | |
-| [2.2.2](#2.2.2) |October 31, 2018 | |
-| [2.2.1](#2.2.1) |October 24, 2018 | |
-| [1.3.3](#1.3.3) |May 08, 2018 | |
-| [1.3.2](#1.3.2) |April 18, 2018 | |
-| [1.3.1](#1.3.1) |March 13, 2018 | |
-| [1.2.0](#1.2.0) |October 31, 2017 | |
-| [1.1.1](#1.1.1) |August 29, 2017 | |
-| [1.1.0](#1.1.0) |August 13, 2017 | |
-| [1.0.0](#1.0.0) |July 07, 2017 | |
-
-## FAQ
--
-## See also
-
-To learn more about Cosmos DB, see the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Sdk Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet-core.md
- Title: 'Azure Cosmos DB: SQL .NET Core API, SDK & resources'
-description: Learn all about the SQL .NET Core API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB .NET Core SDK.
---- Previously updated : 04/18/2022------
-# Azure Cosmos DB .NET Core SDK v2 for SQL API: Release notes and resources (Legacy)
--
-| | Links |
-|||
-|**Release notes**| [Release notes](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md)|
-|**SDK download**| [NuGet](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.Core/)|
-|**API documentation**|[.NET API reference documentation](/dotnet/api/overview/azure/cosmosdb)|
-|**Samples**|[.NET code samples](sql-api-dotnet-samples.md)|
-|**Get started**|[Get started with the Azure Cosmos DB .NET](sql-api-sdk-dotnet.md)|
-|**Web app tutorial**|[Web application development with Azure Cosmos DB](sql-api-dotnet-application.md)|
-|**Current supported framework**|[.NET Standard 1.6 and .NET Standard 1.5](https://www.nuget.org/packages/NETStandard.Library)|
-
-> [!WARNING]
-> On August 31, 2024, the Azure Cosmos DB .NET SDK v2.x will be retired. The SDK and all applications that use it will continue to function;
-> Azure Cosmos DB will simply no longer provide maintenance or support for this SDK.
-> We recommend [migrating to the latest version](migrate-dotnet-v3.md) of the .NET SDK v3.
->
-
-> [!NOTE]
-> If you are using .NET Core, please see the latest version 3.x of the [.NET SDK](sql-api-sdk-dotnet-standard.md), which targets .NET Standard.
-
-## <a name="release-history"></a> Release history
-
-Release history is maintained in the Azure Cosmos DB .NET SDK source repo. For a detailed list of feature releases and bugs fixed in each release, see the [SDK changelog documentation](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md).
-
-Because version 3 of the Azure Cosmos DB .NET SDK includes updated features and improved performance, version 2.x of this SDK will be retired on 31 August 2024. You must update your SDK to version 3 by that date. We recommend following the instructions to migrate to Azure Cosmos DB .NET SDK version 3.
-
-## <a name="recommended-version"></a> Recommended version
-
-Different sub-versions of the .NET SDK are available under the 2.x.x version. **The minimum recommended version is 2.18.0**.
-
-## <a name="known-issues"></a> Known issues
-
-Below is a list of known issues affecting the [recommended minimum version](#recommended-version):
-
-| Issue | Impact | Mitigation | Tracking link |
-| | | | |
-
-## See Also
-
-To learn more about Cosmos DB, see the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
-
cosmos-db Sql Api Sdk Dotnet Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet-standard.md
- Title: 'Azure Cosmos DB: SQL .NET Standard API, SDK & resources'
-description: Learn all about the SQL API and .NET SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB .NET SDK.
---- Previously updated : 03/22/2022------
-# Azure Cosmos DB .NET SDK v3 for SQL API: Download and release notes
--
-| | Links |
-|||
-|**Release notes**|[Release notes](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/changelog.md)|
-|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/)|
-|**API documentation**|[.NET API reference documentation](/dotnet/api/overview/azure/cosmosdb)|
-|**Samples**|[.NET code samples](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage)|
-|**Get started**|[Get started with the Azure Cosmos DB .NET SDK](sql-api-get-started.md)|
-|**Best Practices**|[Best Practices for Azure Cosmos DB .NET SDK](best-practice-dotnet.md)|
-|**Web app tutorial**|[Web application development with Azure Cosmos DB](sql-api-dotnet-application.md)|
-|**Entity Framework Core tutorial**|[Entity Framework Core with Azure Cosmos DB Provider](/ef/core/providers/cosmos/#get-started)|
-|**Current supported framework**|[Microsoft .NET Standard 2.0](/dotnet/standard/net-standard)|
-
-## Release history
-
-Release history is maintained in the Azure Cosmos DB .NET SDK source repo. For a detailed list of feature releases and bugs fixed in each release, see the [SDK changelog documentation](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/changelog.md).
-
-## <a name="recommended-version"></a> Recommended version
-
-Different sub-versions of the .NET SDK are available under the 3.x.x version. **The minimum recommended version is 3.25.0**.
-
-## <a name="known-issues"></a> Known issues
-
-For a list of known issues with the recommended minimum version of the SDK, see [known issues section](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/changelog.md#-known-issues).
-
-## FAQ
-
-## See also
-To learn more about Cosmos DB, see the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Sdk Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet.md
- Title: 'Azure Cosmos DB: SQL .NET API, SDK & resources'
-description: Learn all about the SQL .NET API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB .NET SDK.
---- Previously updated : 04/18/2022-----
-# Azure Cosmos DB .NET SDK v2 for SQL API: Download and release notes (Legacy)
--
-| | Links |
-|||
-|**Release notes**|[Release notes](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md)|
-|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/)|
-|**API documentation**|[.NET API reference documentation](/dotnet/api/overview/azure/cosmosdb)|
-|**Samples**|[.NET code samples](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples)|
-|**Get started**|[Get started with the Azure Cosmos DB .NET SDK](sql-api-get-started.md)|
-|**Web app tutorial**|[Web application development with Azure Cosmos DB](sql-api-dotnet-application.md)|
-|**Current supported framework**|[Microsoft .NET Framework 4.5](https://www.microsoft.com/download/details.aspx?id=30653)|
-
-> [!WARNING]
-> On August 31, 2024, the Azure Cosmos DB .NET SDK v2.x will be retired. The SDK and all applications that use it will continue to function;
-> Azure Cosmos DB will simply no longer provide maintenance or support for this SDK.
-> We recommend [migrating to the latest version](migrate-dotnet-v3.md) of the .NET SDK v3.
->
-
-> [!NOTE]
-> If you are using .NET Framework, please see the latest version 3.x of the [.NET SDK](sql-api-sdk-dotnet-standard.md), which targets .NET Standard.
-
-## <a name="release-history"></a> Release history
-
-Release history is maintained in the Azure Cosmos DB .NET SDK source repo. For a detailed list of feature releases and bugs fixed in each release, see the [SDK changelog documentation](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md).
-
-Because version 3 of the Azure Cosmos DB .NET SDK includes updated features and improved performance, version 2.x of this SDK will be retired on 31 August 2024. You must update your SDK to version 3 by that date. We recommend following the [instructions](migrate-dotnet-v3.md) to migrate to Azure Cosmos DB .NET SDK version 3.
-
-## <a name="recommended-version"></a> Recommended version
-
-Different sub-versions of the .NET SDK are available under the 2.x.x version. **The minimum recommended version is 2.18.0**.
-
-## <a name="known-issues"></a> Known issues
-
-Below is a list of known issues affecting the [recommended minimum version](#recommended-version):
-
-| Issue | Impact | Mitigation | Tracking link |
-| | | | |
-
-## FAQ
--
-## See also
-
-To learn more about Cosmos DB, see the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Sdk Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-go.md
- Title: 'Azure Cosmos DB: SQL Go, SDK & resources'
-description: Learn all about the SQL API and Go SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB Go SDK.
---- Previously updated : 03/22/2022-----
-# Azure Cosmos DB Go SDK for SQL API: Download and release notes
---
-| | Links |
-|||
-|**Release notes**|[Release notes](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/dat)|
-|**SDK download**|[Go pkg](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos)|
-|**API documentation**|[API reference documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos#pkg-types)|
-|**Samples**|[Code samples](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos#pkg-overview)|
-|**Get started**|[Get started with the Azure Cosmos DB Go SDK](create-sql-api-go.md)|
-
-> [!IMPORTANT]
-> The Go SDK for Azure Cosmos DB is currently in beta. This beta is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
->
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Release history
-
-Release history is maintained in the Azure Cosmos DB Go SDK source repo. For a detailed list of feature releases and bugs fixed in each release, see the [SDK changelog documentation](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/dat).
-
-## FAQ
--
-## See also
-
-To learn more about Cosmos DB, see the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Sdk Java Spark V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-spark-v3.md
- Title: 'Azure Cosmos DB Apache Spark 3 OLTP Connector for SQL API (Preview) release notes and resources'
-description: Learn about the Azure Cosmos DB Apache Spark 3 OLTP Connector for SQL API, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Java SDK.
---- Previously updated : 11/12/2021-----
-# Azure Cosmos DB Apache Spark 3 OLTP Connector for Core (SQL) API: Release notes and resources
--
-The **Azure Cosmos DB OLTP Spark connector** provides Apache Spark support for Azure Cosmos DB using the SQL API. Azure Cosmos DB is a globally distributed database service that allows developers to work with data using a variety of standard APIs, such as SQL, MongoDB, Cassandra, Graph, and Table.
-
-If you have any feedback or ideas on how to improve your experience, create an issue in our [SDK GitHub repository](https://github.com/Azure/azure-sdk-for-java/issues/new).
-
-## Documentation links
-
-* [Getting started](https://aka.ms/azure-cosmos-spark-3-quickstart)
-* [Catalog API](https://aka.ms/azure-cosmos-spark-3-catalog-api)
-* [Configuration Parameter Reference](https://aka.ms/azure-cosmos-spark-3-config)
-* [End-to-end sample notebook "New York City Taxi data"](https://aka.ms/azure-cosmos-spark-3-sample-nyc-taxi-data)
-* [Migration from Spark 2.4 to Spark 3.*](https://aka.ms/azure-cosmos-spark-3-migration)
-
-## Version compatibility
-* [Version compatibility for Spark 3.1](https://aka.ms/azure-cosmos-spark-3-1-version-compatibility)
-* [Version compatibility for Spark 3.2](https://aka.ms/azure-cosmos-spark-3-2-version-compatibility)
-
-## Release notes
-* [Release notes for Spark 3.1](https://aka.ms/azure-cosmos-spark-3-1-changelog)
-* [Release notes for Spark 3.2](https://aka.ms/azure-cosmos-spark-3-2-changelog)
-
-## Download
-* [Download of Cosmos DB Spark connector for Spark 3.1](https://aka.ms/azure-cosmos-spark-3-1-download)
-* [Download of Cosmos DB Spark connector for Spark 3.2](https://aka.ms/azure-cosmos-spark-3-2-download)
-
-Azure Cosmos DB Spark connector is available on [Maven Central Repo](https://search.maven.org/search?q=g:com.azure.cosmos.spark).
-
-If you encounter any bug or want to suggest a feature change, [file an issue](https://github.com/Azure/azure-sdk-for-java/issues/new).
-
-## Next steps
-
-Learn more about [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/).
-
-Learn more about [Apache Spark](https://spark.apache.org/).
cosmos-db Sql Api Sdk Java Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-spark.md
- Title: 'Azure Cosmos DB Apache Spark 2 OLTP Connector for SQL API release notes and resources'
-description: Learn about the Azure Cosmos DB Apache Spark 2 OLTP Connector for SQL API, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.
---- Previously updated : 04/06/2021-----
-# Azure Cosmos DB Apache Spark 2 OLTP Connector for Core (SQL) API: Release notes and resources
--
-You can accelerate big data analytics by using the Azure Cosmos DB Apache Spark 2 OLTP Connector for Core (SQL). The Spark Connector allows you to run [Spark](https://spark.apache.org/) jobs on data stored in Azure Cosmos DB. Batch and stream processing are supported.
-
-You can use the connector with [Azure Databricks](https://azure.microsoft.com/services/databricks) or [Azure HDInsight](https://azure.microsoft.com/services/hdinsight/), which provide managed Spark clusters on Azure. The following table shows supported versions:
-
-| Component | Version |
-||-|
-| Apache Spark | 2.4.*x*, 2.3.*x*, 2.2.*x*, and 2.1.*x* |
-| Scala | 2.11 |
-| Azure Databricks (runtime version) | Later than 3.4 |
-
-> [!WARNING]
-> This connector supports the core (SQL) API of Azure Cosmos DB.
-> For the Cosmos DB API for MongoDB, use the [MongoDB Connector for Spark](https://docs.mongodb.com/spark-connector/master/).
-> For the Cosmos DB Cassandra API, use the [Cassandra Spark connector](https://github.com/datastax/spark-cassandra-connector).
->
-
-## Resources
-
-| Resource | Link |
-|||
-| **SDK download** | [Download latest .jar](https://aka.ms/CosmosDB_OLTP_Spark_2.4_LKG), [Maven](https://search.maven.org/search?q=a:azure-cosmosdb-spark_2.4.0_2.11) |
-|**API documentation** | [Spark Connector reference]() |
-|**Contribute to the SDK** | [Azure Cosmos DB Connector for Apache Spark on GitHub](https://github.com/Azure/azure-cosmosdb-spark) |
-|**Get started** | [Accelerate big data analytics by using the Apache Spark to Azure Cosmos DB connector](./create-sql-api-spark.md) <br> [Use Apache Spark Structured Streaming with Apache Kafka and Azure Cosmos DB](../../hdinsight/apache-kafka-spark-structured-streaming-cosmosdb.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json) |
-
-## Release history
-* [Release notes](https://github.com/Azure/azure-cosmosdb-spark/blob/2.4/CHANGELOG.md)
-
-## FAQ
-
-## Next steps
-
-Learn more about [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/).
-
-Learn more about [Apache Spark](https://spark.apache.org/).
cosmos-db Sql Api Sdk Java Spring V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-spring-v2.md
- Title: 'Spring Data Azure Cosmos DB v2 for SQL API release notes and resources'
-description: Learn about the Spring Data Azure Cosmos DB v2 for SQL API, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.
---- Previously updated : 04/06/2021-----
-# Spring Data Azure Cosmos DB v2 for Core (SQL) API (legacy): Release notes and resources
--
- Spring Data Azure Cosmos DB version 2 for Core (SQL) allows developers to use Azure Cosmos DB in Spring applications. Spring Data Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact.
-
-> [!WARNING]
-> This version of the Spring Data Cosmos SDK depends on a retired version of the Cosmos DB Java SDK, and its own retirement will be announced in the near future. It is *not* the latest Azure Spring Data Cosmos SDK for Azure Cosmos DB and is outdated. Because of performance issues and instability in Azure Spring Data Cosmos SDK v2, we highly recommend using [Azure Spring Data Cosmos v3](sql-api-sdk-java-spring-v3.md) for your project. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide to understand the differences in the underlying Java SDK v4.
->
-
-The [Spring Framework](https://spring.io/projects/spring-framework) is a programming and configuration model that streamlines Java application development. Spring streamlines the "plumbing" of applications by using dependency injection. Many developers like Spring because it makes building and testing applications more straightforward. [Spring Boot](https://spring.io/projects/spring-boot) extends this handling of the plumbing with an eye toward web application and microservices development. [Spring Data](https://spring.io/projects/spring-data) is a programming model for accessing datastores like Azure Cosmos DB from the context of a Spring or Spring Boot application.
-
-You can use Spring Data Azure Cosmos DB in your applications hosted in [Azure Spring Apps](https://azure.microsoft.com/services/spring-apps/).
-
-> [!IMPORTANT]
-> These release notes are for version 2 of Spring Data Azure Cosmos DB. You can find [release notes for version 3 here](sql-api-sdk-java-spring-v3.md).
->
-> Spring Data Azure Cosmos DB supports only the SQL API.
->
-> See the following articles for information about Spring Data on other Azure Cosmos DB APIs:
-> * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db)
-> * [Spring Data MongoDB with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-mongodb-with-cosmos-db)
->
-> Want to get going fast?
-> 1. Install the [minimum supported Java runtime, JDK 8](/java/azure/jdk/), so you can use the SDK.
-> 2. Create a Spring Data Azure Cosmos DB app by using the [starter](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db). It's easy!
-> 3. Work through the [Spring Data Azure Cosmos DB developer's guide](/azure/developer/java/spring-framework/how-to-guides-spring-data-cosmosdb), which walks through basic Azure Cosmos DB requests.
->
-> You can spin up Spring Boot Starter apps fast by using [Spring Initializr](https://start.spring.io/)!
->
-
-## Resources
-
-| Resource | Link |
-|||
-| **SDK download** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure/spring-data-cosmosdb) |
-|**API documentation** | [Spring Data Azure Cosmos DB reference documentation]() |
-|**Contribute to the SDK** | [Spring Data Azure Cosmos DB repo on GitHub](https://github.com/microsoft/spring-data-cosmosdb) |
-|**Spring Boot Starter**| [Azure Cosmos DB Spring Boot Starter client library for Java](https://github.com/MicrosoftDocs/azure-dev-docs/blob/master/articles/jav) |
-|**Spring TODO app sample with Azure Cosmos DB**| [End-to-end Java Experience in App Service Linux (Part 2)](https://github.com/Azure-Samples/e2e-java-experience-in-app-service-linux-part-2) |
-|**Developer's guide** | [Spring Data Azure Cosmos DB developer's guide](/azure/developer/java/spring-framework/how-to-guides-spring-data-cosmosdb) |
-|**Using Starter** | [How to use Spring Boot Starter with the Azure Cosmos DB SQL API](/azure/developer/jav) |
-|**Sample with Azure App Service** | [How to use Spring and Cosmos DB with App Service on Linux](/azure/developer/java/spring-framework/configure-spring-app-with-cosmos-db-on-app-service-linux) <br> [TODO app sample](https://github.com/Azure-Samples/e2e-java-experience-in-app-service-linux-part-2.git) |
-
-## Release history
-
-### 2.3.0 (May 21, 2020)
-#### New features
-* Updates Spring Boot version to 2.3.0.
--
-### 2.2.5 (May 19, 2020)
-#### New features
-* Updates Azure Cosmos DB version to 3.7.3.
-#### Key bug fixes
-* Contains memory leak fixes and Netty version upgrades from Azure Cosmos DB SDK 3.7.3.
-
-### 2.2.4 (April 6, 2020)
-#### Key bug fixes
-* Fixes the `allowTelemetry` flag so that it's taken into account from `CosmosDbConfig`.
-* Fixes `TTL` property on container.
-
-### 2.2.3 (February 25, 2020)
-#### New features
-* Adds new `findAll` by partition key API.
-* Updates Azure Cosmos DB version to 3.7.0.
-#### Key bug fixes
-* Fixes `collectionName` -> `containerName`.
-* Fixes `entityClass` and `domainClass` -> `domainType`.
-* Fixes "Return entity collection saved by repository instead of input entity."
-
-### 2.1.10 (February 25, 2020)
-#### Key bug fixes
-* Backports fix for "Return entity collection saved by repository instead of input entity."
-
-### 2.2.2 (January 15, 2020)
-#### New features
-* Updates Azure Cosmos DB version to 3.6.0.
-#### Key bug fixes
-
-### 2.2.1 (December 31, 2019)
-#### New features
-* Updates Azure Cosmos DB SDK version to 3.5.0.
-* Adds annotation field to enable or disable automatic collection creation.
-* Improves exception handling. Exposes `CosmosClientException` through `CosmosDBAccessException`.
-* Exposes `requestCharge` and `activityId` through `ResponseDiagnostics`.
-#### Key bug fixes
-* SDK 3.5.0 update fixes "Exception when Cosmos DB HTTP response header is larger than 8192 bytes," "ConsistencyPolicy.defaultConsistencyLevel() fails on Bounded Staleness and Consistent Prefix."
-* Fixes `findById` method's behavior. Previously, this method returned empty if the entity wasn't found instead of throwing an exception.
-* Fixes a bug in which sorting wasn't applied on the next page when `CosmosPageRequest` was used.
-
-### 2.1.9 (December 26, 2019)
-#### New features
-* Adds annotation field to enable or disable automatic collection creation.
-#### Key bug fixes
-* Fixes `findById` method's behavior. Previously, this method returned empty if the entity wasn't found instead of throwing an exception.
-
-### 2.2.0 (October 21, 2019)
-#### New features
-* Complete Reactive Cosmos Repository support.
-* Azure Cosmos DB Request Diagnostics String and Query Metrics support.
-* Azure Cosmos DB SDK version update to 3.3.1.
-* Spring Framework version upgrade to 5.2.0.RELEASE.
-* Spring Data Commons version upgrade to 2.2.0.RELEASE.
-* Adds `findByIdAndPartitionKey` and `deleteByIdAndPartitionKey` APIs.
-* Removes dependency from azure-documentdb.
-* Rebrands DocumentDB to Azure Cosmos DB.
-#### Key bug fixes
-* Fixes "Sorting throws exception when pageSize is less than total items in repository."
-
-### 2.1.8 (October 18, 2019)
-#### New features
-* Deprecates DocumentDB APIs.
-* Adds `findByIdAndPartitionKey` and `deleteByIdAndPartitionKey` APIs.
-* Adds optimistic locking based on `_etag`.
-* Enables SpEL expression for document collection name.
-* Adds `ObjectMapper` improvements.
-
-### 2.1.7 (October 18, 2019)
-#### New features
-* Adds Azure Cosmos DB SDK version 3 dependency.
-* Adds Reactive Cosmos Repository.
-* Updates implementation of `DocumentDbTemplate` to use Azure Cosmos DB SDK version 3.
-* Adds other configuration changes for Reactive Cosmos Repository support.
-
-### 2.1.2 (March 19, 2019)
-#### Key bug fixes
-* Removes the `applicationInsights` dependency because of:
- * Potential risk of dependency pollution.
- * Java 11 incompatibility.
- * Potential performance impact to CPU and/or memory.
-
-### 2.0.7 (March 20, 2019)
-#### Key bug fixes
-* Backports the removal of the `applicationInsights` dependency because of:
- * Potential risk of dependency pollution.
- * Java 11 incompatibility.
- * Potential performance impact to CPU and/or memory.
-
-### 2.1.1 (March 7, 2019)
-#### New features
-* Updates main version to 2.1.1.
-
-### 2.0.6 (March 7, 2019)
-#### New features
-* Ignores all exceptions from telemetry.
-
-### 2.1.0 (December 17, 2018)
-#### New features
-* Updates version to 2.1.0 to address a problem.
-
-### 2.0.5 (September 13, 2018)
-#### New features
-* Adds keywords `exists` and `startsWith`.
-* Updates Readme.
-#### Key bug fixes
-* Fixes "Can't call self href directly for Entity."
-* Fixes "findAll will fail if collection is not created."
-
-### 2.0.4 (Prerelease) (August 23, 2018)
-#### New features
-* Renames package from documentdb to cosmosdb.
-* Adds new feature of query method keyword. 16 keywords from SQL API are now supported.
-* Adds new feature of query with paging and sorting.
-* Simplifies the configuration of spring-data-cosmosdb.
-* Adds `deleteCollection` and `deleteAll` APIs.
-
-#### Key bug fixes
-* Bug fix and defect mitigation.
-
-## FAQ
-
-## Next steps
-Learn more about [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/).
-
-Learn more about the [Spring Framework](https://spring.io/projects/spring-framework).
-
-Learn more about [Spring Boot](https://spring.io/projects/spring-boot).
-
-Learn more about [Spring Data](https://spring.io/projects/spring-data).
cosmos-db Sql Api Sdk Java Spring V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-spring-v3.md
- Title: 'Spring Data Azure Cosmos DB v3 for SQL API release notes and resources'
-description: Learn about the Spring Data Azure Cosmos DB v3 for SQL API, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.
---- Previously updated : 04/06/2021-----
-# Spring Data Azure Cosmos DB v3 for Core (SQL) API: Release notes and resources
--
-The Spring Data Azure Cosmos DB version 3 for Core (SQL) allows developers to use Azure Cosmos DB in Spring applications. Spring Data Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact.
-
-The [Spring Framework](https://spring.io/projects/spring-framework) is a programming and configuration model that streamlines Java application development. Spring streamlines the "plumbing" of applications by using dependency injection. Many developers like Spring because it makes building and testing applications more straightforward. [Spring Boot](https://spring.io/projects/spring-boot) extends this handling of the plumbing with an eye toward web application and microservices development. [Spring Data](https://spring.io/projects/spring-data) is a programming model and framework for accessing datastores like Azure Cosmos DB from the context of a Spring or Spring Boot application.
-
-You can use Spring Data Azure Cosmos DB in your applications hosted in [Azure Spring Apps](https://azure.microsoft.com/services/spring-apps/).
-
-## Version Support Policy
-
-### Spring Boot Version Support
-
-This project supports multiple Spring Boot Versions. Visit [spring boot support policy](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-spring-data-cosmos#spring-boot-support-policy) for more information. Maven users can inherit from the `spring-boot-starter-parent` project to obtain a dependency management section to let Spring manage the versions for dependencies. Visit [spring boot version support](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-spring-data-cosmos#spring-boot-version-support) for more information.
-
-### Spring Data Version Support
-
-This project supports different spring-data-commons versions. Visit [spring data version support](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-spring-data-cosmos#spring-data-version-support) for more information.
-
-### Which Version of Azure Spring Data Cosmos Should I Use
-
-Azure Spring Data Cosmos library supports multiple versions of Spring Boot / Spring Cloud. Refer to [azure spring data cosmos version mapping](https://github.com/Azure/azure-sdk-for-jav#which-version-of-azure-spring-data-cosmos-should-i-use) for detailed information on which version of Azure Spring Data Cosmos to use with Spring Boot / Spring Cloud version.
-
-> [!IMPORTANT]
-> These release notes are for version 3 of Spring Data Azure Cosmos DB.
->
-> Azure Spring Data Cosmos SDK has dependency on the Spring Data framework, and supports only the SQL API.
->
-> See these articles for information about Spring Data on other Azure Cosmos DB APIs:
-> * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db)
-> * [Spring Data MongoDB with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-mongodb-with-cosmos-db)
->
-
-## Get started fast
-
- Get up and running with Spring Data Azure Cosmos DB by following our [Spring Boot Starter guide](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db). The Spring Boot Starter approach is the recommended way to get started using the Spring Data Azure Cosmos DB connector.
-
- Alternatively, you can add the Spring Data Azure Cosmos DB dependency to your `pom.xml` file as shown below:
-
- ```xml
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-spring-data-cosmos</artifactId>
- <version>latest-version</version>
- </dependency>
- ```
-
-## Helpful content
-
-| Content | Link |
-|||
-| **Release notes** | [Release notes for Spring Data Cosmos SDK v3](https://github.com/Azure/azure-sdk-for-jav) |
-| **SDK Documentation** | [Azure Spring Data Cosmos SDK v3 documentation](https://github.com/Azure/azure-sdk-for-jav) |
-| **SDK download** | [Maven](https://mvnrepository.com/artifact/com.azure/azure-spring-data-cosmos) |
-| **API documentation** | [Java API reference documentation](/java/api/overview/azure/spring-data-cosmos-readme?view=azure-java-stable&preserve-view=true) |
-| **Contribute to SDK** | [Azure SDK for Java Central Repo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-spring-data-cosmos) |
-| **Get started** | [Quickstart: Build a Spring Data Azure Cosmos DB app to manage Azure Cosmos DB SQL API data](./create-sql-api-spring-data.md) <br> [GitHub repo with quickstart code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-getting-started) |
-| **Basic code samples** | [Azure Cosmos DB: Spring Data Azure Cosmos DB examples for the SQL API](sql-api-spring-data-sdk-samples.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples)|
-| **Performance tips**| [Performance tips for Java SDK v4 (applicable to Spring Data)](performance-tips-java-sdk-v4-sql.md)|
-| **Troubleshooting** | [Troubleshoot Java SDK v4 (applicable to Spring Data)](troubleshoot-java-sdk-v4-sql.md) |
-| **Azure Cosmos DB workshops and labs** |[Cosmos DB workshops home page](https://aka.ms/cosmosworkshop)
-
-## Release history
-Release history is maintained in the azure-sdk-for-java repo. For a detailed list of releases, see the [changelog file](https://github.com/Azure/azure-sdk-for-jav).
-
-## Recommended version
-
-We strongly recommend using version 3.22.0 or above.
-
-## Additional notes
-
-* Spring Data Azure Cosmos DB supports Java JDK 8 and Java JDK 11.
-
-## FAQ
--
-## Next steps
-
-Learn more about [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/).
-
-Learn more about the [Spring Framework](https://spring.io/projects/spring-framework).
-
-Learn more about [Spring Boot](https://spring.io/projects/spring-boot).
-
-Learn more about [Spring Data](https://spring.io/projects/spring-data).
cosmos-db Sql Api Sdk Java V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-v4.md
- Title: 'Azure Cosmos DB Java SDK v4 for SQL API release notes and resources'
-description: Learn all about the Azure Cosmos DB Java SDK v4 for SQL API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.
---- Previously updated : 04/06/2021-----
-# Azure Cosmos DB Java SDK v4 for Core (SQL) API: release notes and resources
--
-The Azure Cosmos DB Java SDK v4 for Core (SQL) combines an Async API and a Sync API into one Maven artifact. The v4 SDK brings enhanced performance, new API features, and Async support based on Project Reactor and the [Netty library](https://netty.io/). Users can expect improved performance with Azure Cosmos DB Java SDK v4 versus the [Azure Cosmos DB Async Java SDK v2](sql-api-sdk-async-java.md) and the [Azure Cosmos DB Sync Java SDK v2](sql-api-sdk-java.md).
-
-> [!IMPORTANT]
-> These Release Notes are for Azure Cosmos DB Java SDK v4 only. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
->
-> Here are three steps to get going fast!
-> 1. Install the [minimum supported Java runtime, JDK 8](/java/azure/jdk/) so you can use the SDK.
-> 2. Work through the [Quickstart Guide for Azure Cosmos DB Java SDK v4](./create-sql-api-java.md) which gets you access to the Maven artifact and walks through basic Azure Cosmos DB requests.
-> 3. Read the Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4-sql.md) and [troubleshooting](troubleshoot-java-sdk-v4-sql.md) guides to optimize the SDK for your application.
->
-> The [Azure Cosmos DB workshops and labs](https://aka.ms/cosmosworkshop) are another great resource for learning how to use Azure Cosmos DB Java SDK v4!
->
-
-## Helpful content
-
-| Content | Link |
-|||
-| **Release Notes** | [Release notes for Java SDK v4](https://github.com/Azure/azure-sdk-for-jav) |
-| **SDK download** | [Maven](https://mvnrepository.com/artifact/com.azure/azure-cosmos) |
-| **API documentation** | [Java API reference documentation](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-cosmos/latest/index.html) |
-| **Contribute to SDK** | [Azure SDK for Java Central Repo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-cosmos) |
-| **Get started** | [Quickstart: Build a Java app to manage Azure Cosmos DB SQL API data](./create-sql-api-java.md) <br> [GitHub repo with quickstart code](https://github.com/Azure-Samples/azure-cosmos-java-getting-started) |
-| **Best Practices** | [Best Practices for Java SDK v4](best-practice-java.md) |
-| **Basic code samples** | [Azure Cosmos DB: Java examples for the SQL API](sql-api-java-sdk-samples.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples)|
-| **Console app with Change Feed**| [Change feed - Java SDK v4 sample](create-sql-api-java-changefeed.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-cosmos-java-sql-app-example)|
-| **Web app sample**| [Build a web app with Java SDK v4](sql-api-java-application.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app)|
-| **Performance tips**| [Performance tips for Java SDK v4](performance-tips-java-sdk-v4-sql.md)|
-| **Troubleshooting** | [Troubleshoot Java SDK v4](troubleshoot-java-sdk-v4-sql.md) |
-| **Migrate to v4 from an older SDK** | [Migrate to Java V4 SDK](migrate-java-v4-sdk.md) |
-| **Minimum supported runtime**|[JDK 8](/java/azure/jdk/) |
-| **Azure Cosmos DB workshops and labs** |[Cosmos DB workshops home page](https://aka.ms/cosmosworkshop)
-
-> [!IMPORTANT]
-> * The 4.13.0 release updates `reactor-core` and `reactor-netty` major versions to `2020.0.4 (Europium)` release train.
-
-## Release history
-Release history is maintained in the azure-sdk-for-java repo. For a detailed list of releases, see the [changelog file](https://github.com/Azure/azure-sdk-for-jav).
-
-## Recommended version
-
-We strongly recommend using version 4.31.0 or above.
-
-## FAQ
-
-## Next steps
-To learn more about Cosmos DB, see the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Sdk Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java.md
- Title: 'Azure Cosmos DB: SQL Java API, SDK & resources'
-description: Learn all about the SQL Java API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Java SDK.
---- Previously updated : 04/06/2021-----
-# Azure Cosmos DB Java SDK for SQL API (legacy): Release notes and resources
--
-This is the original Azure Cosmos DB Sync Java SDK v2 for the SQL API, which supports synchronous operations.
-
-> [!IMPORTANT]
-> This is *not* the latest Java SDK for Azure Cosmos DB! Consider using [Azure Cosmos DB Java SDK v4](sql-api-sdk-java-v4.md) for your project. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and the [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide.
->
-
-> [!IMPORTANT]
-> On February 29, 2024 the Azure Cosmos DB Sync Java SDK v2.x
-> will be retired; the SDK and all applications using the SDK
-> **will continue to function**; Azure Cosmos DB will simply cease
-> to provide further maintenance and support for this SDK.
-> We recommend following the instructions above to migrate to
-> Azure Cosmos DB Java SDK v4.
->
--
-| | Links |
-|||
-|**SDK Download**|[Maven](https://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22com.microsoft.azure%22%20AND%20a%3A%22azure-documentdb%22)|
-|**API documentation**|[Java API reference documentation](/java/api/com.microsoft.azure.documentdb)|
-|**Contribute to SDK**|[GitHub](https://github.com/Azure/azure-documentdb-java/)|
-|**Get started**|[Get started with the Java SDK](./create-sql-api-java.md)|
-|**Web app tutorial**|[Web application development with Azure Cosmos DB](sql-api-java-application.md)|
-|**Minimum supported runtime**|[Java Development Kit (JDK) 7+](/java/azure/jdk/)|
-
-## Release notes
-### <a name="2.6.3"></a>2.6.3
-* Fixed a retry policy when `GoneException` is wrapped in `IllegalStateException`
-
-### <a name="2.6.2"></a>2.6.2
-* Added a new retry policy to retry on Read Timeouts
-* Upgraded dependency `com.fasterxml.jackson.core/jackson-databind` to 2.9.10.8
-* Upgraded dependency `org.apache.httpcomponents/httpclient` to 4.5.13
-
-### <a name="2.6.1"></a>2.6.1
-* Fixed a bug in handling a query through service interop.
-
-### <a name="2.6.0"></a>2.6.0
-* Added support for querying change feed from point in time.
-
-### <a name="2.5.1"></a>2.5.1
-* Fixes primary partition cache issue on documentCollection query.
-
-### <a name="2.5.0"></a>2.5.0
-* Added support for 449 retry custom configuration.
-
-### <a name="2.4.7"></a>2.4.7
-* Fixes connection pool timeout issue.
-* Fixes auth token refresh on internal retries.
-
-### <a name="2.4.6"></a>2.4.6
-* Updated correct client side replica policy tag on databaseAccount and made databaseAccount configuration reads from cache.
-
-### <a name="2.4.5"></a>2.4.5
-* Avoiding retry on invalid partition key range error, if user provides pkRangeId.
-
-### <a name="2.4.4"></a>2.4.4
-* Optimized partition key range cache refreshes.
-* Fixes the scenario where the SDK doesn't honor the partition split hint from the server, which results in incorrect client-side routing cache refreshes.
-
-### <a name="2.4.2"></a>2.4.2
-* Optimized collection cache refreshes.
-
-### <a name="2.4.1"></a>2.4.1
-* Added support to retrieve inner exception message from request diagnostic string.
-
-### <a name="2.4.0"></a>2.4.0
-* Introduced version api on PartitionKeyDefinition.
-
-### <a name="2.3.0"></a>2.3.0
-* Added separate timeout support for direct mode.
-
-### <a name="2.2.3"></a>2.2.3
-* Consuming null error message from service and producing document client exception.
-
-### <a name="2.2.2"></a>2.2.2
-* Socket connection improvement, adding SoKeepAlive default true.
-
-### <a name="2.2.0"></a>2.2.0
-* Added request diagnostics string support.
-
-### <a name="2.1.3"></a>2.1.3
-* Fixed bug in PartitionKey for Hash V2.
-
-### <a name="2.1.2"></a>2.1.2
-* Added support for composite indexes.
-* Fixed bug in global endpoint manager to force refresh.
-* Fixed bug for upserts with pre-conditions in direct mode.
-
-### <a name="2.1.1"></a>2.1.1
-* Fixed bug in gateway address cache.
-
-### <a name="2.1.0"></a>2.1.0
-* Multi-region write support added for direct mode.
-* Added support for handling IOExceptions thrown as ServiceUnavailable exceptions, from a proxy.
-* Fixed a bug in endpoint discovery retry policy.
-* Fixed a bug to ensure null pointer exceptions are not thrown in BaseDatabaseAccountConfigurationProvider.
-* Fixed a bug to ensure QueryIterator does not return nulls.
-* Fixed a bug to ensure large PartitionKey is allowed
-
-### <a name="2.0.0"></a>2.0.0
-* Multi-region write support added for gateway mode.
-
-### <a name="1.16.4"></a>1.16.4
-* Fixed a bug in Read partition Key ranges for a query.
-
-### <a name="1.16.3"></a>1.16.3
-* Fixed a bug in setting continuation token header size in DirectHttps mode.
-
-### <a name="1.16.2"></a>1.16.2
-* Added streaming failover support.
-* Added support for custom metadata.
-* Improved session handling logic.
-* Fixed a bug in partition key range cache.
-* Fixed an NPE bug in direct mode.
-
-### <a name="1.16.1"></a>1.16.1
-* Added support for Unique Index.
-* Added support for limiting continuation token size in feed-options.
-* Fixed a bug in Json Serialization (timestamp).
-* Fixed a bug in Json Serialization (enum).
-* Dependency on com.fasterxml.jackson.core:jackson-databind upgraded to 2.9.5.
-
-### <a name="1.16.0"></a>1.16.0
-* Improved Connection Pooling for Direct Mode.
-* Improved prefetching for non-ORDER BY cross-partition queries.
-* Improved UUID generation.
-* Improved Session consistency logic.
-* Added support for multipolygon.
-* Added support for Partition Key Range Statistics for Collection.
-* Fixed a bug in Multi-region support.
-
-### <a name="1.15.0"></a>1.15.0
-* Improved Json Serialization performance.
-* This SDK version requires the latest version of [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator).
-
-### <a name="1.14.0"></a>1.14.0
-* Internal changes for Microsoft friends libraries.
-
-### <a name="1.13.0"></a>1.13.0
-* Fixed an issue in reading single partition key ranges.
-* Fixed an issue in ResourceID parsing that affects database with short names.
-* Fixed an issue caused by partition key encoding.
-
-### <a name="1.12.0"></a>1.12.0
-* Critical bug fixes to request processing during partition splits.
-* Fixed an issue with the Strong and BoundedStaleness consistency levels.
-
-### <a name="1.11.0"></a>1.11.0
-* Added support for a new consistency level called ConsistentPrefix.
-* Fixed a bug in reading collection in session mode.
-
-### <a name="1.10.0"></a>1.10.0
-* Enabled support for partitioned collections with as little as 2,500 RU/sec and scaling in increments of 100 RU/sec.
-* Fixed a bug in the native assembly which can cause NullRef exception in some queries.
-
-### <a name="1.9.6"></a>1.9.6
-* Fixed a bug in the query engine configuration that may cause exceptions for queries in Gateway mode.
-* Fixed a few bugs in the session container that may cause an "Owner resource not found" exception for requests immediately after collection creation.
-
-### <a name="1.9.5"></a>1.9.5
-* Added support for aggregation queries (COUNT, MIN, MAX, SUM, and AVG). See [Aggregation support](sql-query-aggregate-functions.md).
-* Added support for change feed.
-* Added support for collection quota information through RequestOptions.setPopulateQuotaInfo.
-* Added support for stored procedure script logging through RequestOptions.setScriptLoggingEnabled.
-* Fixed a bug where query in DirectHttps mode may stop responding when encountering throttle failures.
-* Fixed a bug in session consistency mode.
-* Fixed a bug which may cause NullReferenceException in HttpContext when request rate is high.
-* Improved performance of DirectHttps mode.
-
-### <a name="1.9.4"></a>1.9.4
-* Added simple client instance-based proxy support with ConnectionPolicy.setProxy() API.
-* Added DocumentClient.close() API to properly shutdown DocumentClient instance.
-* Improved query performance in direct connectivity mode by deriving the query plan from the native assembly instead of the Gateway.
-* Set FAIL_ON_UNKNOWN_PROPERTIES = false so users don't need to define JsonIgnoreProperties in their POJO.
-* Refactored logging to use SLF4J.
-* Fixed a few other bugs in consistency reader.
-
-### <a name="1.9.3"></a>1.9.3
-* Fixed a bug in the connection management to prevent connection leaks in direct connectivity mode.
-* Fixed a bug in the TOP query where it may throw NullReference exception.
-* Improved performance by reducing the number of network calls for the internal caches.
-* Added status code, ActivityID and Request URI in DocumentClientException for better troubleshooting.
-
-### <a name="1.9.2"></a>1.9.2
-* Fixed an issue in the connection management for stability.
-
-### <a name="1.9.1"></a>1.9.1
-* Added support for BoundedStaleness consistency level.
-* Added support for direct connectivity for CRUD operations for partitioned collections.
-* Fixed a bug in querying a database with SQL.
-* Fixed a bug in the session cache where session token may be set incorrectly.
-
-### <a name="1.9.0"></a>1.9.0
-* Added support for cross partition parallel queries.
-* Added support for TOP/ORDER BY queries for partitioned collections.
-* Added support for strong consistency.
-* Added support for name based requests when using direct connectivity.
-* Fixed to make ActivityId stay consistent across all request retries.
-* Fixed a bug related to the session cache when recreating a collection with the same name.
-* Added Polygon and LineString DataTypes while specifying collection indexing policy for geo-fencing spatial queries.
-* Fixed issues with Java Doc for Java 1.8.
-
-### <a name="1.8.1"></a>1.8.1
-* Fixed a bug in PartitionKeyDefinitionMap to cache single partition collections and not make extra fetch partition key requests.
-* Fixed a bug to not retry when an incorrect partition key value is provided.
-
-### <a name="1.8.0"></a>1.8.0
-* Added the support for multi-region database accounts.
-* Added support for automatic retry on throttled requests with options to customize the max retry attempts and max retry wait time. See RetryOptions and ConnectionPolicy.getRetryOptions().
-* Deprecated IPartitionResolver based custom partitioning code. Please use partitioned collections for higher storage and throughput.
-
-### <a name="1.7.1"></a>1.7.1
-* Added retry policy support for rate limiting.
-
-### <a name="1.7.0"></a>1.7.0
-* Added time to live (TTL) support for documents.
-
-### <a name="1.6.0"></a>1.6.0
-* Implemented [partitioned collections](../partitioning-overview.md) and [user-defined performance levels](../performance-levels.md).
-
-### <a name="1.5.1"></a>1.5.1
-* Fixed a bug in HashPartitionResolver to generate hash values in little-endian to be consistent with other SDKs.
-
-### <a name="1.5.0"></a>1.5.0
-* Add Hash & Range partition resolvers to assist with sharding applications across multiple partitions.
-
-### <a name="1.4.0"></a>1.4.0
-* Implement Upsert. New upsertXXX methods added to support Upsert feature.
-* Implement ID Based Routing. No public API changes, all changes internal.
-
-### <a name="1.3.0"></a>1.3.0
-* Release skipped to bring version number in alignment with other SDKs
-
-### <a name="1.2.0"></a>1.2.0
-* Supports GeoSpatial Index
-* Validates the ID property for all resources. Resource IDs cannot contain the characters ?, /, #, or \, or end with a space.
-* Adds new header "index transformation progress" to ResourceResponse.
-
-### <a name="1.1.0"></a>1.1.0
-* Implements V2 indexing policy
-
-### <a name="1.0.0"></a>1.0.0
-* GA SDK
-
-## Release and retirement dates
-
-Microsoft will provide notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer, supported version. New features, functionality, and optimizations are added only to the current SDK, so we recommend that you always upgrade to the latest SDK version as early as possible.
-
-> [!WARNING]
-> After 30 May 2020, Azure Cosmos DB will no longer make bug fixes, add new features, or provide support for versions 1.x of the Azure Cosmos DB Java SDK for SQL API. If you prefer not to upgrade, requests sent from version 1.x of the SDK will continue to be served by the Azure Cosmos DB service.
->
-> After 29 February 2016, Azure Cosmos DB will no longer make bug fixes, add new features, or provide support for versions 0.x of the Azure Cosmos DB Java SDK for SQL API. If you prefer not to upgrade, requests sent from version 0.x of the SDK will continue to be served by the Azure Cosmos DB service.
--
-| Version | Release Date | Retirement Date |
-| | | |
-| [2.6.1](#2.6.1) |Dec 17, 2020 |Feb 29, 2024|
-| [2.6.0](#2.6.0) |July 16, 2020 |Feb 29, 2024|
-| [2.5.1](#2.5.1) |June 03, 2020 |Feb 29, 2024|
-| [2.5.0](#2.5.0) |May 12, 2020 |Feb 29, 2024|
-| [2.4.7](#2.4.7) |Feb 20, 2020 |Feb 29, 2024|
-| [2.4.6](#2.4.6) |Jan 24, 2020 |Feb 29, 2024|
-| [2.4.5](#2.4.5) |Nov 10, 2019 |Feb 29, 2024|
-| [2.4.4](#2.4.4) |Oct 24, 2019 |Feb 29, 2024|
-| [2.4.2](#2.4.2) |Sep 26, 2019 |Feb 29, 2024|
-| [2.4.1](#2.4.1) |Jul 18, 2019 |Feb 29, 2024|
-| [2.4.0](#2.4.0) |May 04, 2019 |Feb 29, 2024|
-| [2.3.0](#2.3.0) |Apr 24, 2019 |Feb 29, 2024|
-| [2.2.3](#2.2.3) |Apr 16, 2019 |Feb 29, 2024|
-| [2.2.2](#2.2.2) |Apr 05, 2019 |Feb 29, 2024|
-| [2.2.0](#2.2.0) |Mar 27, 2019 |Feb 29, 2024|
-| [2.1.3](#2.1.3) |Mar 13, 2019 |Feb 29, 2024|
-| [2.1.2](#2.1.2) |Mar 09, 2019 |Feb 29, 2024|
-| [2.1.1](#2.1.1) |Dec 13, 2018 |Feb 29, 2024|
-| [2.1.0](#2.1.0) |Nov 20, 2018 |Feb 29, 2024|
-| [2.0.0](#2.0.0) |Sept 21, 2018 |Feb 29, 2024|
-| [1.16.4](#1.16.4) |Sept 10, 2018 |May 30, 2020 |
-| [1.16.3](#1.16.3) |Sept 09, 2018 |May 30, 2020 |
-| [1.16.2](#1.16.2) |June 29, 2018 |May 30, 2020 |
-| [1.16.1](#1.16.1) |May 16, 2018 |May 30, 2020 |
-| [1.16.0](#1.16.0) |March 15, 2018 |May 30, 2020 |
-| [1.15.0](#1.15.0) |Nov 14, 2017 |May 30, 2020 |
-| [1.14.0](#1.14.0) |Oct 28, 2017 |May 30, 2020 |
-| [1.13.0](#1.13.0) |August 25, 2017 |May 30, 2020 |
-| [1.12.0](#1.12.0) |July 11, 2017 |May 30, 2020 |
-| [1.11.0](#1.11.0) |May 10, 2017 |May 30, 2020 |
-| [1.10.0](#1.10.0) |March 11, 2017 |May 30, 2020 |
-| [1.9.6](#1.9.6) |February 21, 2017 |May 30, 2020 |
-| [1.9.5](#1.9.5) |January 31, 2017 |May 30, 2020 |
-| [1.9.4](#1.9.4) |November 24, 2016 |May 30, 2020 |
-| [1.9.3](#1.9.3) |October 30, 2016 |May 30, 2020 |
-| [1.9.2](#1.9.2) |October 28, 2016 |May 30, 2020 |
-| [1.9.1](#1.9.1) |October 26, 2016 |May 30, 2020 |
-| [1.9.0](#1.9.0) |October 03, 2016 |May 30, 2020 |
-| [1.8.1](#1.8.1) |June 30, 2016 |May 30, 2020 |
-| [1.8.0](#1.8.0) |June 14, 2016 |May 30, 2020 |
-| [1.7.1](#1.7.1) |April 30, 2016 |May 30, 2020 |
-| [1.7.0](#1.7.0) |April 27, 2016 |May 30, 2020 |
-| [1.6.0](#1.6.0) |March 29, 2016 |May 30, 2020 |
-| [1.5.1](#1.5.1) |December 31, 2015 |May 30, 2020 |
-| [1.5.0](#1.5.0) |December 04, 2015 |May 30, 2020 |
-| [1.4.0](#1.4.0) |October 05, 2015 |May 30, 2020 |
-| [1.3.0](#1.3.0) |October 05, 2015 |May 30, 2020 |
-| [1.2.0](#1.2.0) |August 05, 2015 |May 30, 2020 |
-| [1.1.0](#1.1.0) |July 09, 2015 |May 30, 2020 |
-| 1.0.1 |May 12, 2015 |May 30, 2020 |
-| [1.0.0](#1.0.0) |April 07, 2015 |May 30, 2020 |
-| 0.9.5-prerelease |Mar 09, 2015 |February 29, 2016 |
-| 0.9.4-prerelease |February 17, 2015 |February 29, 2016 |
-| 0.9.3-prerelease |January 13, 2015 |February 29, 2016 |
-| 0.9.2-prerelease |December 19, 2014 |February 29, 2016 |
-| 0.9.1-prerelease |December 19, 2014 |February 29, 2016 |
-| 0.9.0-prerelease |December 10, 2014 |February 29, 2016 |
-
-## FAQ
-
-## See also
-To learn more about Cosmos DB, see the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Sdk Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-node.md
- Title: 'Azure Cosmos DB: SQL Node.js API, SDK & resources'
-description: Learn all about the SQL Node.js API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB Node.js SDK.
---- Previously updated : 12/09/2021-----
-# Azure Cosmos DB Node.js SDK for SQL API: Release notes and resources
--
-|Resource |Link |
-|||
-|Download SDK | [@azure/cosmos](https://www.npmjs.com/package/@azure/cosmos)
-|API Documentation | [JavaScript SDK reference documentation](/javascript/api/%40azure/cosmos/)
-|SDK installation instructions | `npm install @azure/cosmos`
-|Contribute to SDK | [Contributing guide for azure-sdk-for-js repo](https://github.com/Azure/azure-sdk-for-js/blob/main/CONTRIBUTING.md)
-| Samples | [Node.js code samples](sql-api-nodejs-samples.md)
-| Getting started tutorial | [Get started with the JavaScript SDK](sql-api-nodejs-get-started.md)
-| Web app tutorial | [Build a Node.js web application using Azure Cosmos DB](sql-api-nodejs-application.md)
-| Current supported Node.js platforms | [LTS versions of Node.js](https://nodejs.org/about/releases/)
-
-## Release notes
-
-Release history is maintained in the azure-sdk-for-js repo. For a detailed list of releases, see the [changelog file](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/CHANGELOG.md#release-history).
-
-## Migration guide for breaking changes
-
-If you're on an older version of the SDK, we recommend migrating to version 3.0. This section details the improvements and bug fixes that version 3.0 brings.
-### Improved client constructor options
-
-Constructor options have been simplified:
-
-* `masterKey` was renamed to `key` and moved to the top level.
-* Properties previously under `options.auth` have moved to the top level.
-
-```javascript
-// v2
-const client = new CosmosClient({
- endpoint: "https://your-database.cosmos.azure.com",
- auth: {
- masterKey: "your-primary-key"
- }
-})
-
-// v3
-const client = new CosmosClient({
- endpoint: "https://your-database.cosmos.azure.com",
- key: "your-primary-key"
-})
-```
-
-### Simplified query iterator API
-
-In v2, there were many different ways to iterate or retrieve results from a query. We have attempted to simplify the v3 API and remove similar or duplicate APIs:
-
-* Removed `iterator.next()` and `iterator.current()`. Use `fetchNext()` to get pages of results.
-* Removed `iterator.forEach()`. Use async iterators instead.
-* `iterator.executeNext()` was renamed to `iterator.fetchNext()`.
-* `iterator.toArray()` was renamed to `iterator.fetchAll()`.
-* Pages are now proper Response objects instead of plain JS objects.
-
-```javascript
-const container = client.database(dbId).container(containerId)
-
-// v2
-container.items.query('SELECT * from c').toArray()
-container.items.query('SELECT * from c').executeNext()
-container.items.query('SELECT * from c').forEach(({ body: item }) => { console.log(item.id) })
-
-// v3
-container.items.query('SELECT * from c').fetchAll()
-container.items.query('SELECT * from c').fetchNext()
-for await (const { result: item } of client.databases.readAll().getAsyncIterator()) {
- console.log(item.id)
-}
-```
-
-### Fixed containers are now partitioned
-
-The Cosmos service now supports partition keys on all containers, including those that were previously created as fixed containers. The v3 SDK updates to the latest API version that implements this change, but it is not breaking. If you do not supply a partition key for operations, we will default to a system key that works with all your existing containers and documents.
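-
-The following sketch is illustrative only (the endpoint, key, database, container, and item names are placeholders) and shows a read against a container that was originally created as a fixed container, without supplying any partition key:
-
-```javascript
-const { CosmosClient } = require("@azure/cosmos")
-
-const client = new CosmosClient({
- endpoint: "https://your-database.cosmos.azure.com",
- key: "your-primary-key"
-})
-const container = client.database("your-database-id").container("your-fixed-container-id")
-
-// No partition key is passed; the SDK falls back to the system partition key,
-// so existing documents in the fixed container stay readable
-const response = await container.item("your-item-id").read()
-console.log(response.statusCode)
-```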
-
-### Upsert removed for stored procedures
-
-Previously, upsert was allowed for non-partitioned collections. With the API version update, all collections are partitioned, so upsert for stored procedures was removed entirely.
-
-### Item reads will not throw on 404
-
-The following examples assume `const container = client.database(dbId).container(containerId)`:
-
-```javascript
-// v2
-try {
-  await container.item(id).read()
-} catch (e) {
-  if (e.code === 404) { console.log('item not found') }
-}
-
-// v3
-const { resource: item } = await container.item(id).read()
-if (item === undefined) { console.log('item not found') }
-```
-
-### Default multi-region writes
-
-The SDK now writes to multiple regions by default if your Azure Cosmos DB account configuration supports it. This was previously opt-in behavior.
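-
-If you prefer the previous single-region write behavior, you can opt out through the client constructor options; a minimal sketch, assuming the v3 `connectionPolicy.useMultipleWriteLocations` option:
-
-```javascript
-const { CosmosClient } = require("@azure/cosmos");
-
-// Sketch: opt out of the multi-region write default.
-const client = new CosmosClient({
-  endpoint: "https://your-database.cosmos.azure.com",
-  key: "your-primary-key",
-  connectionPolicy: {
-    useMultipleWriteLocations: false
-  }
-});
-```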
-
-### Proper error objects
-
-Failed requests now throw proper Error or subclasses of Error. Previously they threw plain JS objects.
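-
-A minimal sketch of what this enables (the item body is a placeholder; creating an item whose `id` already exists raises a 409 conflict):
-
-```javascript
-// Sketch: errors are real Error instances, so they carry a message and stack
-// trace, and the HTTP status code is surfaced on error.code.
-try {
-  await container.items.create({ id: "existing-id" });
-} catch (error) {
-  console.log(error instanceof Error); // true in v3
-  if (error.code === 409) {
-    console.log("an item with this id already exists");
-  }
-}
-```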
-
-### New features
-
-#### User-cancelable requests
-
-The move to fetch internally allows us to use the browser AbortController API to support user-cancelable operations. In the case of operations where multiple requests are potentially in progress (like cross-partition queries), all requests for the operation will be canceled. Modern browsers already have AbortController. Node.js users need to use a polyfill library.
-
-```javascript
-const controller = new AbortController()
-// Pass the signal when starting the query
-const resultsPromise = container.items.query('SELECT * from c', { abortSignal: controller.signal }).fetchAll()
-// Aborting cancels all in-flight requests for the operation and rejects the promise
-controller.abort()
-```
-
-#### Set throughput as part of db/container create operation
-
-```javascript
-const { database } = await client.databases.create({ id: 'my-database', throughput: 10000 })
-await database.containers.create({ id: 'my-container', throughput: 10000 })
-```
-
-#### @azure/cosmos-sign
-
-Header token generation was split out into a new library, @azure/cosmos-sign. Anyone calling the Cosmos REST API directly can use this to sign headers using the same code we call inside @azure/cosmos.
-
-#### UUID for generated IDs
-
-v2 had custom code to generate item IDs. We have switched to the well-known and maintained community library `uuid`.
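-
-For illustration, a minimal sketch of generating an item `id` yourself with the same library (the item shape is a placeholder):
-
-```javascript
-const { v4: uuidv4 } = require("uuid");
-
-// Sketch: supply an explicitly generated id instead of relying on the SDK.
-const { resource: created } = await container.items.create({
-  id: uuidv4(),
-  category: "example"
-});
-console.log(created.id);
-```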
-
-#### Connection strings
-
-It is now possible to pass a connection string copied from the Azure portal:
-
-```javascript
-const client = new CosmosClient("AccountEndpoint=https://test-account.documents.azure.com:443/;AccountKey=c213asdasdefgdfgrtweaYPpgoeCsHbpRTHhxuMsTaw==;")
-```
-
-#### Add DISTINCT and LIMIT/OFFSET queries (#306)
-
-```javascript
-const { resources: distinctNames } = await container.items.query('SELECT DISTINCT VALUE r.name FROM ROOT r').fetchAll()
-const { resources: page } = await container.items.query('SELECT * FROM root r OFFSET 1 LIMIT 2').fetchAll()
-```
-
-### Improved browser experience
-
-While it was possible to use the v2 SDK in the browser, it was not an ideal experience. You needed to polyfill several Node.js built-in libraries and use a bundler like webpack or Parcel. The v3 SDK makes the out-of-the-box experience much better for browser users.
-
-* Replace request internals with fetch (#245)
-* Remove usage of Buffer (#330)
-* Remove node builtin usage in favor of universal packages/APIs (#328)
-* Switch to node-abort-controller (#294)
-
-### Bug fixes
-* Fix offer read and bring back offer tests (#224)
-* Fix EnableEndpointDiscovery (#207)
-* Fix missing RUs on paginated results (#360)
-* Expand SQL query parameter type (#346)
-* Add ttl to ItemDefinition (#341)
-* Fix CP query metrics (#311)
-* Add activityId to FeedResponse (#293)
-* Switch _ts type from string to number (#252)(#295)
-* Fix Request Charge Aggregation (#289)
-* Allow blank string partition keys (#277)
-* Add string to conflict query type (#237)
-* Add uniqueKeyPolicy to container (#234)
-
-### Engineering systems
-Not always the most visible changes, but they help our team ship better code, faster.
-
-* Use rollup for production builds (#104)
-* Update to TypeScript 3.5 (#327)
-* Convert to TS project references. Extract test folder (#270)
-* Enable noUnusedLocals and noUnusedParameters (#275)
-* Azure Pipelines YAML for CI builds (#298)
-
-## Release & Retirement Dates
-
-Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer, supported version. New features, functionality, and optimizations are added only to the current SDK, so it is recommended that you always upgrade to the latest SDK version as early as possible. Read the [Microsoft Support Policy for SDKs](https://github.com/Azure/azure-sdk-for-js/blob/main/SUPPORT.md#microsoft-support-policy) for more details.
-
-| Version | Release Date | Retirement Date |
-| | | |
-| v3 | June 28, 2019 | |
-| v2 | September 24, 2018 | September 24, 2021 |
-| v1 | April 08, 2015 | August 30, 2020 |
-
-## FAQ
-
-## See also
-To learn more about Cosmos DB, see the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-python.md
- Title: Azure Cosmos DB SQL Python API, SDK & resources
-description: Learn all about the SQL Python API and SDK including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB Python SDK.
---- Previously updated : 01/25/2022---
-# Azure Cosmos DB Python SDK for SQL API: Release notes and resources
--
-| Page| Link |
-|||
-|**Download SDK**|[PyPI](https://pypi.org/project/azure-cosmos)|
-|**API documentation**|[Python API reference documentation](/python/api/azure-cosmos/azure.cosmos?preserve-view=true&view=azure-python)|
-|**SDK installation instructions**|[Python SDK installation instructions](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos)|
-|**Get started**|[Get started with the Python SDK](create-sql-api-python.md)|
-|**Samples**|[Python SDK samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/samples)|
-|**Current supported platform**|[Python 3.6+](https://www.python.org/downloads/)|
-
-> [!IMPORTANT]
-> * Versions 4.3.0b2 and higher support Async IO operations and only support Python 3.6+. Python 2 is not supported.
-
-## Release history
-Release history is maintained in the azure-sdk-for-python repo. For a detailed list of releases, see the [changelog file](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cosmos/azure-cosmos/CHANGELOG.md).
-
-## Release & retirement dates
-
-Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer, supported version. New features, functionality, and optimizations are added only to the current SDK, so it is recommended that you always upgrade to the latest SDK version as early as possible.
-
-> [!WARNING]
-> After 31 August 2022, Azure Cosmos DB will no longer make bug fixes or provide support to versions 1.x and 2.x of the Azure Cosmos DB Python SDK for SQL API. If you prefer not to upgrade, requests sent from version 1.x and 2.x of the SDK will continue to be served by the Azure Cosmos DB service.
-
-| Version | Release Date | Retirement Date |
-| | | |
-| 4.3.0 |May 23, 2022 | |
-| 4.2.0 |Oct 09, 2020 | |
-| 4.1.0 |Aug 10, 2020 | |
-| 4.0.0 |May 20, 2020 | |
-| 3.0.2 |Nov 15, 2018 | |
-| 3.0.1 |Oct 04, 2018 | |
-| 2.3.3 |Sept 08, 2018 |August 31, 2022 |
-| 2.3.2 |May 08, 2018 |August 31, 2022 |
-| 2.3.1 |December 21, 2017 |August 31, 2022 |
-| 2.3.0 |November 10, 2017 |August 31, 2022 |
-| 2.2.1 |Sep 29, 2017 |August 31, 2022 |
-| 2.2.0 |May 10, 2017 |August 31, 2022 |
-| 2.1.0 |May 01, 2017 |August 31, 2022 |
-| 2.0.1 |October 30, 2016 |August 31, 2022 |
-| 2.0.0 |September 29, 2016 |August 31, 2022 |
-| 1.9.0 |July 07, 2016 |August 31, 2022 |
-| 1.8.0 |June 14, 2016 |August 31, 2022 |
-| 1.7.0 |April 26, 2016 |August 31, 2022 |
-| 1.6.1 |April 08, 2016 |August 31, 2022 |
-| 1.6.0 |March 29, 2016 |August 31, 2022 |
-| 1.5.0 |January 03, 2016 |August 31, 2022 |
-| 1.4.2 |October 06, 2015 |August 31, 2022 |
-| 1.4.1 |October 06, 2015 |August 31, 2022 |
-| 1.2.0 |August 06, 2015 |August 31, 2022 |
-| 1.1.0 |July 09, 2015 |August 31, 2022 |
-| 1.0.1 |May 25, 2015 |August 31, 2022 |
-| 1.0.0 |April 07, 2015 |August 31, 2022 |
-| 0.9.4-prerelease |January 14, 2015 |February 29, 2016 |
-| 0.9.3-prerelease |December 09, 2014 |February 29, 2016 |
-| 0.9.2-prerelease |November 25, 2014 |February 29, 2016 |
-| 0.9.1-prerelease |September 23, 2014 |February 29, 2016 |
-| 0.9.0-prerelease |August 21, 2014 |February 29, 2016 |
-
-## FAQ
--
-## Next steps
-
-To learn more about Cosmos DB, see the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Spring Data Sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-spring-data-sdk-samples.md
- Title: 'Azure Cosmos DB SQL API: Spring Data v3 examples'
-description: Find Spring Data v3 examples on GitHub for common tasks using the Azure Cosmos DB SQL API, including CRUD operations.
---- Previously updated : 08/26/2021-----
-# Azure Cosmos DB SQL API: Spring Data Azure Cosmos DB v3 examples
-
-> [!div class="op_single_selector"]
-> * [.NET V3 SDK Examples](sql-api-dotnet-v3sdk-samples.md)
-> * [Java V4 SDK Examples](sql-api-java-sdk-samples.md)
-> * [Spring Data V3 SDK Examples](sql-api-spring-data-sdk-samples.md)
-> * [Node.js Examples](sql-api-nodejs-samples.md)
-> * [Python Examples](sql-api-python-samples.md)
-> * [.NET V2 SDK Examples (Legacy)](sql-api-dotnet-v2sdk-samples.md)
-> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db)
->
->
-
-> [!IMPORTANT]
-> These release notes are for version 3 of Spring Data Azure Cosmos DB. You can find [release notes for version 2 here](sql-api-sdk-java-spring-v2.md).
->
-> Spring Data Azure Cosmos DB supports only the SQL API.
->
-> See these articles for information about Spring Data on other Azure Cosmos DB APIs:
-> * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db)
-> * [Spring Data MongoDB with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-mongodb-with-cosmos-db)
->
-
-> [!IMPORTANT]
->[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
->
->- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month that you can use for paid Azure services.
->
->[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
->
-
-The latest sample applications that perform CRUD operations and other common operations on Azure Cosmos DB resources are included in the [azure-spring-data-cosmos-java-sql-api-samples](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples) GitHub repository. This article provides:
-
-* Links to the tasks in each of the example Spring Data Azure Cosmos DB project files.
-* Links to the related API reference content.
-
-**Prerequisites**
-
-You need the following to run this sample application:
-
-* Java Development Kit 8
-* Spring Data Azure Cosmos DB v3
-
-You can optionally use Maven to get the latest Spring Data Azure Cosmos DB v3 binaries for use in your project. Maven automatically adds any necessary dependencies. Otherwise, you can directly download the dependencies listed in the **pom.xml** file and add them to your build path.
-
-```xml
-<dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-spring-data-cosmos</artifactId>
- <version>LATEST</version>
-</dependency>
-```
-
-**Running the sample applications**
-
-Clone the sample repo:
-```bash
-$ git clone https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples
-
-$ cd azure-spring-data-cosmos-java-sql-api-samples
-```
-
-You can run the samples either from an IDE (Eclipse, IntelliJ IDEA, or VS Code) or from the command line using Maven.
-
-In **application.properties**, set the following values (they can be supplied through environment variables):
-
-```properties
-cosmos.uri=${ACCOUNT_HOST}
-cosmos.key=${ACCOUNT_KEY}
-cosmos.secondaryKey=${SECONDARY_ACCOUNT_KEY}
-
-dynamic.collection.name=spel-property-collection
-# Populate query metrics
-cosmos.queryMetricsEnabled=true
-```
-
-These values give the samples read/write access to your account, databases, and containers.
-
-Your IDE may be able to run the Spring Data sample code directly. Otherwise, you can run the sample from the terminal:
-
-```bash
-mvn spring-boot:run
-```
-
-## Document CRUD examples
-The [samples](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/jav) file shows how to perform the following tasks.
-
-| Task | API reference |
-| | |
-| [Create a document](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/SampleApplication.java#L46-L47) | CosmosRepository.save |
-| [Read a document by ID](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/SampleApplication.java#L56-L58) | CosmosRepository.derivedQueryMethod |
-| [Delete all documents](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/SampleApplication.java#L39-L41) | CosmosRepository.deleteAll |
-
-## Derived query method examples
-The [samples](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/SampleApplication.java) file shows how to perform the following tasks. To learn about Azure Cosmos DB queries before running the following samples, you may find it helpful to read [Baeldung's Derived Query Methods in Spring](https://www.baeldung.com/spring-data-derived-queries) article.
-
-| Task | API reference |
-| | |
-| [Query for documents](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/SampleApplication.java#L73-L77) | CosmosRepository.derivedQueryMethod |
-
-## Custom query examples
-The [samples](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/jav) file shows how to perform the following tasks.
--
-| Task | API reference |
-| | |
-| [Query for all documents](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L20-L22) | @Query annotation |
-| [Query for equality using ==](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L24-L26) | @Query annotation |
-| [Query for inequality using != and NOT](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L28-L38) | @Query annotation |
-| [Query using range operators like >, <, >=, <=](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L40-L42) | @Query annotation |
-| [Query using range operators against strings](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L44-L46) | @Query annotation |
-| [Query with ORDER BY](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L48-L50) | @Query annotation |
-| [Query with DISTINCT](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L52-L54) | @Query annotation |
-| [Query with aggregate functions](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L56-L62) | @Query annotation |
-| [Work with subdocuments](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L64-L66) | @Query annotation |
-| [Query with intra-document Joins](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L68-L85) | @Query annotation |
-| [Query with string, math, and array operators](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/springexamples/quickstart/sync/UserRepository.java#L87-L97) | @Query annotation |
-
-## Next steps
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Sql Query Abs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-abs.md
- Title: ABS in Azure Cosmos DB query language
-description: Learn about how the Absolute (ABS) SQL system function in Azure Cosmos DB returns the positive value of the specified numeric expression
---- Previously updated : 03/04/2020---
-# ABS (Azure Cosmos DB)
-
- Returns the absolute (positive) value of the specified numeric expression.
-
-## Syntax
-
-```sql
-ABS (<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example shows the results of using the `ABS` function on three different numbers.
-
-```sql
-SELECT ABS(-1) AS abs1, ABS(0) AS abs2, ABS(1) AS abs3
-```
-
- Here is the result set.
-
-```json
-[{"abs1": 1, "abs2": 0, "abs3": 1}]
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Acos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-acos.md
- Title: ACOS in Azure Cosmos DB query language
-description: Learn about how the ACOS (arccosine) SQL system function in Azure Cosmos DB returns the angle, in radians, whose cosine is the specified numeric expression
---- Previously updated : 03/03/2020---
-# ACOS (Azure Cosmos DB)
-
- Returns the angle, in radians, whose cosine is the specified numeric expression; also called arccosine.
-
-## Syntax
-
-```sql
-ACOS(<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example returns the `ACOS` of -1.
-
-```sql
-SELECT ACOS(-1) AS acos
-```
-
- Here is the result set.
-
-```json
-[{"acos": 3.1415926535897931}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Aggregate Avg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-aggregate-avg.md
- Title: AVG in Azure Cosmos DB query language
-description: Learn about the Average (AVG) SQL system function in Azure Cosmos DB.
---- Previously updated : 12/02/2020----
-# AVG (Azure Cosmos DB)
-
-This aggregate function returns the average of the values in the expression.
-
-## Syntax
-
-```sql
-AVG(<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
-Returns a numeric expression.
-
-## Examples
-
-The following example returns the average value of `propertyA`:
-
-```sql
-SELECT AVG(c.propertyA)
-FROM c
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy). If any arguments in `AVG` are string, boolean, or null, the entire aggregate system function will return `undefined`. If any argument has an `undefined` value, it will not impact the `AVG` calculation.
-
-## Next steps
-
-- [Mathematical functions in Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions in Azure Cosmos DB](sql-query-system-functions.md)
-- [Aggregate functions in Azure Cosmos DB](sql-query-aggregate-functions.md)
cosmos-db Sql Query Aggregate Count https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-aggregate-count.md
- Title: COUNT in Azure Cosmos DB query language
-description: Learn about the Count (COUNT) SQL system function in Azure Cosmos DB.
---- Previously updated : 12/02/2020----
-# COUNT (Azure Cosmos DB)
-
-This system function returns the count of the values in the expression.
-
-## Syntax
-
-```sql
-COUNT(<scalar_expr>)
-```
-
-## Arguments
-
-*scalar_expr*
- Is any scalar expression
-
-## Return types
-
-Returns a numeric expression.
-
-## Examples
-
-The following example returns the total count of items in a container:
-
-```sql
-SELECT COUNT(1)
-FROM c
-```
-COUNT can take any scalar expression as input. The following query will produce an equivalent result:
-
-```sql
-SELECT COUNT(2)
-FROM c
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy) for any properties in the query's filter.
-
-## Next steps
-
-- [Mathematical functions in Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions in Azure Cosmos DB](sql-query-system-functions.md)
-- [Aggregate functions in Azure Cosmos DB](sql-query-aggregate-functions.md)
cosmos-db Sql Query Aggregate Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-aggregate-functions.md
- Title: Aggregate functions in Azure Cosmos DB
-description: Learn about SQL aggregate function syntax, types of aggregate functions supported by Azure Cosmos DB.
---- Previously updated : 12/02/2020----
-# Aggregate functions in Azure Cosmos DB
-
-Aggregate functions perform a calculation on a set of values in the `SELECT` clause and return a single value. For example, the following query returns the count of items within a container:
-
-```sql
- SELECT COUNT(1)
- FROM c
-```
-
-## Types of aggregate functions
-
-The SQL API supports the following aggregate functions. `SUM` and `AVG` operate on numeric values, and `COUNT`, `MIN`, and `MAX` work on numbers, strings, Booleans, and nulls.
-
-| Function | Description |
-|-|-|
-| [AVG](sql-query-aggregate-avg.md) | Returns the average of the values in the expression. |
-| [COUNT](sql-query-aggregate-count.md) | Returns the number of items in the expression. |
-| [MAX](sql-query-aggregate-max.md) | Returns the maximum value in the expression. |
-| [MIN](sql-query-aggregate-min.md) | Returns the minimum value in the expression. |
-| [SUM](sql-query-aggregate-sum.md) | Returns the sum of all the values in the expression. |
--
-You can also return only the scalar value of the aggregate by using the VALUE keyword. For example, the following query returns the count of values as a single number:
-
-```sql
- SELECT VALUE COUNT(1)
- FROM Families f
-```
-
-The results are:
-
-```json
- [ 2 ]
-```
-
-You can also combine aggregations with filters. For example, the following query returns the count of items with the address state of `WA`.
-
-```sql
- SELECT VALUE COUNT(1)
- FROM Families f
- WHERE f.address.state = "WA"
-```
-
-The results are:
-
-```json
- [ 1 ]
-```
-
-## Remarks
-
-These aggregate system functions will benefit from a [range index](../index-policy.md#includeexclude-strategy). If you expect to do an `AVG`, `COUNT`, `MAX`, `MIN`, or `SUM` on a property, you should [include the relevant path in the indexing policy](../index-policy.md#includeexclude-strategy).
-
-## Next steps
-
-- [Introduction to Azure Cosmos DB](../introduction.md)
-- [System functions](sql-query-system-functions.md)
-- [User defined functions](sql-query-udfs.md)
cosmos-db Sql Query Aggregate Max https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-aggregate-max.md
- Title: MAX in Azure Cosmos DB query language
-description: Learn about the Max (MAX) SQL system function in Azure Cosmos DB.
---- Previously updated : 12/02/2020----
-# MAX (Azure Cosmos DB)
-
-This aggregate function returns the maximum of the values in the expression.
-
-## Syntax
-
-```sql
-MAX(<scalar_expr>)
-```
-
-## Arguments
-
-*scalar_expr*
- Is a scalar expression.
-
-## Return types
-
-Returns a scalar expression.
-
-## Examples
-
-The following example returns the maximum value of `propertyA`:
-
-```sql
-SELECT MAX(c.propertyA)
-FROM c
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy). The arguments in `MAX` can be number, string, boolean, or null. Any undefined values will be ignored.
-
-When comparing values of different types, the following priority order is used (in descending order):
-
-- string
-- number
-- boolean
-- null
-
-## Next steps
-
-- [Mathematical functions in Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions in Azure Cosmos DB](sql-query-system-functions.md)
-- [Aggregate functions in Azure Cosmos DB](sql-query-aggregate-functions.md)
cosmos-db Sql Query Aggregate Min https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-aggregate-min.md
- Title: MIN in Azure Cosmos DB query language
-description: Learn about the Min (MIN) SQL system function in Azure Cosmos DB.
---- Previously updated : 12/02/2020----
-# MIN (Azure Cosmos DB)
-
-This aggregate function returns the minimum of the values in the expression.
-
-## Syntax
-
-```sql
-MIN(<scalar_expr>)
-```
-
-## Arguments
-
-*scalar_expr*
- Is a scalar expression.
-
-## Return types
-
-Returns a scalar expression.
-
-## Examples
-
-The following example returns the minimum value of `propertyA`:
-
-```sql
-SELECT MIN(c.propertyA)
-FROM c
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy). The arguments in `MIN` can be number, string, boolean, or null. Any undefined values will be ignored.
-
-When comparing values of different types, the following priority order is used (in ascending order):
-
-- null
-- boolean
-- number
-- string
-
-## Next steps
-
-- [Mathematical functions in Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions in Azure Cosmos DB](sql-query-system-functions.md)
-- [Aggregate functions in Azure Cosmos DB](sql-query-aggregate-functions.md)
cosmos-db Sql Query Aggregate Sum https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-aggregate-sum.md
- Title: SUM in Azure Cosmos DB query language
-description: Learn about the Sum (SUM) SQL system function in Azure Cosmos DB.
---- Previously updated : 12/02/2020----
-# SUM (Azure Cosmos DB)
-
-This aggregate function returns the sum of the values in the expression.
-
-## Syntax
-
-```sql
-SUM(<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
-Returns a numeric expression.
-
-## Examples
-
-The following example returns the sum of `propertyA`:
-
-```sql
-SELECT SUM(c.propertyA)
-FROM c
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy). If any arguments in `SUM` are string, boolean, or null, the entire aggregate system function will return `undefined`. If any argument has an `undefined` value, it will not impact the `SUM` calculation.
-
-## Next steps
-
-- [Mathematical functions in Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions in Azure Cosmos DB](sql-query-system-functions.md)
-- [Aggregate functions in Azure Cosmos DB](sql-query-aggregate-functions.md)
cosmos-db Sql Query Array Concat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-array-concat.md
- Title: ARRAY_CONCAT in Azure Cosmos DB query language
-description: Learn about how the Array Concat SQL system function in Azure Cosmos DB returns an array that is the result of concatenating two or more array values
---- Previously updated : 03/03/2020---
-# ARRAY_CONCAT (Azure Cosmos DB)
-
- Returns an array that is the result of concatenating two or more array values.
-
-## Syntax
-
-```sql
-ARRAY_CONCAT (<arr_expr1>, <arr_expr2> [, <arr_exprN>])
-```
-
-## Arguments
-
-*arr_expr*
- Is an array expression to concatenate to the other values. The `ARRAY_CONCAT` function requires at least two *arr_expr* arguments.
-
-## Return types
-
- Returns an array expression.
-
-## Examples
-
- The following example shows how to concatenate two arrays.
-
-```sql
-SELECT ARRAY_CONCAT(["apples", "strawberries"], ["bananas"]) AS arrayConcat
-```
-
- Here is the result set.
-
-```json
-[{"arrayConcat": ["apples", "strawberries", "bananas"]}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [Array functions Azure Cosmos DB](sql-query-array-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Array Contains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-array-contains.md
- Title: ARRAY_CONTAINS in Azure Cosmos DB query language
-description: Learn about how the Array Contains SQL system function in Azure Cosmos DB returns a Boolean indicating whether the array contains the specified value
---- Previously updated : 09/13/2019---
-# ARRAY_CONTAINS (Azure Cosmos DB)
-
-Returns a Boolean indicating whether the array contains the specified value. You can check for a partial or full match of an object by using a boolean expression within the command.
-
-## Syntax
-
-```sql
-ARRAY_CONTAINS (<arr_expr>, <expr> [, bool_expr])
-```
-
-## Arguments
-
-*arr_expr*
- Is the array expression to be searched.
-
-*expr*
- Is the expression to be found.
-
-*bool_expr*
- Is a boolean expression. If it evaluates to 'true' and if the specified search value is an object, the command checks for a partial match (the search object is a subset of one of the objects). If it evaluates to 'false', the command checks for a full match of all objects within the array. The default value if not specified is false.
-
-## Return types
-
- Returns a Boolean value.
-
-## Examples
-
- The following example shows how to check for membership in an array using `ARRAY_CONTAINS`.
-
-```sql
-SELECT
- ARRAY_CONTAINS(["apples", "strawberries", "bananas"], "apples") AS b1,
- ARRAY_CONTAINS(["apples", "strawberries", "bananas"], "mangoes") AS b2
-```
-
- Here is the result set.
-
-```json
-[{"b1": true, "b2": false}]
-```
-
-The following example shows how to check for a partial match of a JSON object in an array using ARRAY_CONTAINS.
-
-```sql
-SELECT
- ARRAY_CONTAINS([{"name": "apples", "fresh": true}, {"name": "strawberries", "fresh": true}], {"name": "apples"}, true) AS b1,
- ARRAY_CONTAINS([{"name": "apples", "fresh": true}, {"name": "strawberries", "fresh": true}], {"name": "apples"}) AS b2,
- ARRAY_CONTAINS([{"name": "apples", "fresh": true}, {"name": "strawberries", "fresh": true}], {"name": "mangoes"}, true) AS b3
-```
-
- Here is the result set.
-
-```json
-[{
- "b1": true,
- "b2": false,
- "b3": false
-}]
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
-
-## Next steps
-
-- [Array functions Azure Cosmos DB](sql-query-array-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Array Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-array-functions.md
- Title: Array functions in Azure Cosmos DB query language
-description: Learn about how the array functions let you perform operations on arrays in Azure Cosmos DB
---- Previously updated : 09/13/2019---
-# Array functions (Azure Cosmos DB)
-
-The array functions let you perform operations on arrays in Azure Cosmos DB.
-
-## Functions
-
-The following scalar functions perform an operation on an array input value and return a numeric, Boolean, or array value:
-
-* [ARRAY_CONCAT](sql-query-array-concat.md)
-* [ARRAY_CONTAINS](sql-query-array-contains.md)
-* [ARRAY_LENGTH](sql-query-array-length.md)
-* [ARRAY_SLICE](sql-query-array-slice.md)
--
-
-
-
-## Next steps
-
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
-- [User Defined Functions](sql-query-udfs.md)
-- [Aggregates](sql-query-aggregate-functions.md)
cosmos-db Sql Query Array Length https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-array-length.md
- Title: ARRAY_LENGTH in Azure Cosmos DB query language
-description: Learn about how the Array length SQL system function in Azure Cosmos DB returns the number of elements of the specified array expression
---- Previously updated : 03/03/2020---
-# ARRAY_LENGTH (Azure Cosmos DB)
-
- Returns the number of elements of the specified array expression.
-
-## Syntax
-
-```sql
-ARRAY_LENGTH(<arr_expr>)
-```
-
-## Arguments
-
-*arr_expr*
- Is an array expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example shows how to get the length of an array using `ARRAY_LENGTH`.
-
-```sql
-SELECT ARRAY_LENGTH(["apples", "strawberries", "bananas"]) AS len
-```
-
- Here is the result set.
-
-```json
-[{"len": 3}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [Array functions Azure Cosmos DB](sql-query-array-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Array Slice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-array-slice.md
- Title: ARRAY_SLICE in Azure Cosmos DB query language
-description: Learn about how the Array slice SQL system function in Azure Cosmos DB returns part of an array expression
---- Previously updated : 03/03/2020---
-# ARRAY_SLICE (Azure Cosmos DB)
-
- Returns part of an array expression.
-
-## Syntax
-
-```sql
-ARRAY_SLICE (<arr_expr>, <num_expr> [, <num_expr>])
-```
-
-## Arguments
-
-*arr_expr*
- Is any array expression.
-
-*num_expr*
- Zero-based numeric index at which to begin the array. Negative values may be used to specify the starting index relative to the last element of the array; for example, -1 references the last element in the array.
-
-*num_expr*
- Optional numeric expression that sets the maximum number of elements in the resulting array.
-
-## Return types
-
- Returns an array expression.
-
-## Examples
-
- The following example shows how to get different slices of an array using `ARRAY_SLICE`.
-
-```sql
-SELECT
- ARRAY_SLICE(["apples", "strawberries", "bananas"], 1) AS s1,
- ARRAY_SLICE(["apples", "strawberries", "bananas"], 1, 1) AS s2,
- ARRAY_SLICE(["apples", "strawberries", "bananas"], -2, 1) AS s3,
- ARRAY_SLICE(["apples", "strawberries", "bananas"], -2, 2) AS s4,
- ARRAY_SLICE(["apples", "strawberries", "bananas"], 1, 0) AS s5,
- ARRAY_SLICE(["apples", "strawberries", "bananas"], 1, 1000) AS s6,
- ARRAY_SLICE(["apples", "strawberries", "bananas"], 1, -100) AS s7
-
-```
-
- Here is the result set.
-
-```json
-[{
- "s1": ["strawberries", "bananas"],
- "s2": ["strawberries"],
- "s3": ["strawberries"],
- "s4": ["strawberries", "bananas"],
- "s5": [],
- "s6": ["strawberries", "bananas"],
- "s7": []
-}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [Array functions Azure Cosmos DB](sql-query-array-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Asin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-asin.md
- Title: ASIN in Azure Cosmos DB query language
-description: Learn about how the Arcsine (ASIN) SQL system function in Azure Cosmos DB returns the angle, in radians, whose sine is the specified numeric expression
---- Previously updated : 03/04/2020---
-# ASIN (Azure Cosmos DB)
-
- Returns the angle, in radians, whose sine is the specified numeric expression. This is also called arcsine.
-
-## Syntax
-
-```sql
-ASIN(<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example returns the `ASIN` of -1.
-
-```sql
-SELECT ASIN(-1) AS asin
-```
-
- Here is the result set.
-
-```json
-[{"asin": -1.5707963267948966}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Atan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-atan.md
- Title: ATAN in Azure Cosmos DB query language
-description: Learn about how the Arctangent (ATAN) SQL system function in Azure Cosmos DB returns the angle, in radians, whose tangent is the specified numeric expression
---- Previously updated : 03/04/2020---
-# ATAN (Azure Cosmos DB)
-
- Returns the angle, in radians, whose tangent is the specified numeric expression. This is also called arctangent.
-
-## Syntax
-
-```sql
-ATAN(<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example returns the `ATAN` of the specified value.
-
-```sql
-SELECT ATAN(-45.01) AS atan
-```
-
- Here is the result set.
-
-```json
-[{"atan": -1.5485826962062663}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Atn2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-atn2.md
- Title: ATN2 in Azure Cosmos DB query language
-description: Learn about how the ATN2 SQL system function in Azure Cosmos DB returns the principal value of the arc tangent of y/x, expressed in radians
---- Previously updated : 03/03/2020---
-# ATN2 (Azure Cosmos DB)
-
- Returns the principal value of the arc tangent of y/x, expressed in radians.
-
-## Syntax
-
-```sql
-ATN2(<numeric_expr>, <numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example calculates the ATN2 for the specified x and y components.
-
-```sql
-SELECT ATN2(35.175643, 129.44) AS atn2
-```
-
- Here is the result set.
-
-```json
-[{"atn2": 1.3054517947300646}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Bitwise Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-bitwise-operators.md
- Title: Bitwise operators in Azure Cosmos DB
-description: Learn about SQL bitwise operators supported by Azure Cosmos DB.
------ Previously updated : 06/02/2022--
-# Bitwise operators in Azure Cosmos DB
--
-This article details the bitwise operators supported by Azure Cosmos DB. Bitwise operators are useful for constructing JSON result-sets on the fly. The bitwise operators work similarly to higher-level programming languages like C# and JavaScript. For examples of C# bitwise operators, see [Bitwise and shift operators](/dotnet/csharp/language-reference/operators/bitwise-and-shift-operators).
-
-## Understanding bitwise operations
-
-The following table shows the explanations and examples of bitwise operations in the SQL API between two values.
-
-| Operation | Operator | Description |
-| | | |
-| **Left shift** | ``<<`` | Shift left-hand value *left* by the specified number of bits. |
-| **Right shift** | ``>>`` | Shift left-hand value *right* by the specified number of bits. |
-| **Zero-fill (unsigned) right shift** | ``>>>`` | Shift left-hand value *right* by the specified number of bits, filling the left-most bits with zeros instead of the sign bit. |
-| **AND** | ``&`` | Computes bitwise logical AND. |
-| **OR** | ``|`` | Computes bitwise logical OR. |
-| **XOR** | ``^`` | Computes bitwise logical exclusive OR. |
--
-For example, the following query uses each of the bitwise operators and renders a result.
-
-```sql
-SELECT
- (100 >> 2) AS rightShift,
- (100 << 2) AS leftShift,
- (100 >>> 0) AS zeroFillRightShift,
- (100 & 1000) AS logicalAnd,
- (100 | 1000) AS logicalOr,
- (100 ^ 1000) AS logicalExclusiveOr
-```
-
-Here are the results of the example query, as a JSON document:
-
-```json
-[
- {
- "rightShift": 25,
- "leftShift": 400,
- "zeroFillRightShift": 100,
- "logicalAnd": 96,
- "logicalOr": 1004,
- "logicalExclusiveOr": 908
- }
-]
-```
-
-> [!IMPORTANT]
-> The bitwise operators in Azure Cosmos DB SQL API follow the same behavior as bitwise operators in JavaScript. JavaScript stores numbers as 64-bit floating point numbers, but all bitwise operations are performed on 32-bit binary numbers. Before a bitwise operation is performed, JavaScript converts numbers to 32-bit signed integers. After the bitwise operation is performed, the result is converted back to a 64-bit JavaScript number. For more information about the bitwise operators in JavaScript, see [JavaScript binary bitwise operators at MDN Web Docs](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Operators#binary_bitwise_operators).
-
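-Because the behavior matches JavaScript, you can sanity-check an expression in Node.js before running it as a query. A small sketch reproducing the operations from the query above (the final line is an added illustration of the 32-bit conversion):
-
-```javascript
-// The same expressions evaluated in JavaScript produce the values shown in
-// the query results above.
-console.log(100 >> 2);   // 25
-console.log(100 << 2);   // 400
-console.log(100 >>> 0);  // 100
-console.log(100 & 1000); // 96
-console.log(100 | 1000); // 1004
-console.log(100 ^ 1000); // 908
-
-// Illustration of the 32-bit signed conversion: 2^31 wraps around to -2^31.
-console.log((2147483647 + 1) | 0); // -2147483648
-```
-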
-## Next steps
-
-- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
-- [Keywords](sql-query-keywords.md)
-- [SELECT clause](sql-query-select.md)
cosmos-db Sql Query Ceiling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-ceiling.md
- Title: CEILING in Azure Cosmos DB query language
-description: Learn about how the CEILING SQL system function in Azure Cosmos DB returns the smallest integer value greater than, or equal to, the specified numeric expression.
---- Previously updated : 09/13/2019---
-# CEILING (Azure Cosmos DB)
-
- Returns the smallest integer value greater than, or equal to, the specified numeric expression.
-
-## Syntax
-
-```sql
-CEILING (<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example shows positive numeric, negative, and zero values with the `CEILING` function.
-
-```sql
-SELECT CEILING(123.45) AS c1, CEILING(-123.45) AS c2, CEILING(0.0) AS c3
-```
-
- Here is the result set.
-
-```json
-[{"c1": 124, "c2": -123, "c3": 0}]
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Concat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-concat.md
- Title: CONCAT in Azure Cosmos DB query language
-description: Learn about how the CONCAT SQL system function in Azure Cosmos DB returns a string that is the result of concatenating two or more string values
---- Previously updated : 03/03/2020---
-# CONCAT (Azure Cosmos DB)
-
- Returns a string that is the result of concatenating two or more string values.
-
-## Syntax
-
-```sql
-CONCAT(<str_expr1>, <str_expr2> [, <str_exprN>])
-```
-
-## Arguments
-
-*str_expr*
- Is a string expression to concatenate to the other values. The `CONCAT` function requires at least two *str_expr* arguments.
-
-## Return types
-
- Returns a string expression.
-
-## Examples
-
- The following example returns the concatenated string of the specified values.
-
-```sql
-SELECT CONCAT("abc", "def") AS concat
-```
-
- Here is the result set.
-
-```json
-[{"concat": "abcdef"}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [String functions Azure Cosmos DB](sql-query-string-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Constants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-constants.md
- Title: SQL constants in Azure Cosmos DB
-description: Learn about how the SQL query constants in Azure Cosmos DB are used to represent a specific data value
---- Previously updated : 05/31/2019-----
-# Azure Cosmos DB SQL query constants
-
- A constant, also known as a literal or a scalar value, is a symbol that represents a specific data value. The format of a constant depends on the data type of the value it represents.
-
- **Supported scalar data types:**
-
-|**Type**|**Values order**|
-|-|-|
-|**Undefined**|Single value: **undefined**|
-|**Null**|Single value: **null**|
-|**Boolean**|Values: **false**, **true**.|
-|**Number**|A double-precision floating-point number, IEEE 754 standard.|
-|**String**|A sequence of zero or more Unicode characters. Strings must be enclosed in single or double quotes.|
-|**Array**|A sequence of zero or more elements. Each element can be a value of any scalar data type, except **Undefined**.|
-|**Object**|An unordered set of zero or more name/value pairs. Name is a Unicode string, value can be of any scalar data type, except **Undefined**.|
-
-## <a name="bk_syntax"></a>Syntax
-
-```sql
-<constant> ::=
- <undefined_constant>
- | <null_constant>
- | <boolean_constant>
- | <number_constant>
- | <string_constant>
- | <array_constant>
- | <object_constant>
-
-<undefined_constant> ::= undefined
-
-<null_constant> ::= null
-
-<boolean_constant> ::= false | true
-
-<number_constant> ::= decimal_literal | hexadecimal_literal
-
-<string_constant> ::= string_literal
-
-<array_constant> ::=
- '[' [<constant>][,...n] ']'
-
-<object_constant> ::=
- '{' [{property_name | "property_name"} : <constant>][,...n] '}'
-
-```
-
-## <a name="bk_arguments"></a> Arguments
-
-* `<undefined_constant>; Undefined`
-
- Represents undefined value of type Undefined.
-
-* `<null_constant>; null`
-
- Represents **null** value of type **Null**.
-
-* `<boolean_constant>`
-
- Represents constant of type Boolean.
-
-* `false`
-
- Represents **false** value of type Boolean.
-
-* `true`
-
- Represents **true** value of type Boolean.
-
-* `<number_constant>`
-
- Represents a constant.
-
-* `decimal_literal`
-
- Decimal literals are numbers represented using either decimal notation, or scientific notation.
-
-* `hexadecimal_literal`
-
- Hexadecimal literals are numbers represented using prefix '0x' followed by one or more hexadecimal digits.
-
-* `<string_constant>`
-
- Represents a constant of type String.
-
-* `string_literal`
-
- String literals are Unicode strings represented by a sequence of zero or more Unicode characters or escape sequences. String literals are enclosed in single quotes (apostrophe: ' ) or double quotes (quotation mark: ").
-
- The following escape sequences are allowed:
-
-|**Escape sequence**|**Description**|**Unicode character**|
-|-|-|-|
-|\\'|apostrophe (')|U+0027|
-|\\"|quotation mark (")|U+0022|
-|\\\ |reverse solidus (\\)|U+005C|
-|\\/|solidus (/)|U+002F|
-|\b|backspace|U+0008|
-|\f|form feed|U+000C|
-|\n|line feed|U+000A|
-|\r|carriage return|U+000D|
-|\t|tab|U+0009|
-|\uXXXX|A Unicode character defined by 4 hexadecimal digits.|U+XXXX|
-
-## Next steps
-
-- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
-- [Model document data](../modeling-data.md)
cosmos-db Sql Query Contains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-contains.md
- Title: Contains in Azure Cosmos DB query language
-description: Learn about how the CONTAINS SQL system function in Azure Cosmos DB returns a Boolean indicating whether the first string expression contains the second
---- Previously updated : 04/01/2021---
-# CONTAINS (Azure Cosmos DB)
-
-Returns a Boolean indicating whether the first string expression contains the second.
-
-## Syntax
-
-```sql
-CONTAINS(<str_expr1>, <str_expr2> [, <bool_expr>])
-```
-
-## Arguments
-
-*str_expr1*
- Is the string expression to be searched.
-
-*str_expr2*
- Is the string expression to find.
-
-*bool_expr*
- Optional value for ignoring case. When set to true, CONTAINS will do a case-insensitive search. When unspecified, this value is false.
-
-## Return types
-
- Returns a Boolean expression.
-
-## Examples
-
- The following example checks if "abc" contains "ab" and if "abc" contains "A".
-
-```sql
-SELECT CONTAINS("abc", "ab", false) AS c1, CONTAINS("abc", "A", false) AS c2, CONTAINS("abc", "A", true) AS c3
-```
-
- Here is the result set.
-
-```json
-[
- {
- "c1": true,
- "c2": false,
- "c3": true
- }
-]
-```
-
-## Remarks
-
-Learn about [how this string system function uses the index](sql-query-string-functions.md).
-
-## Next steps
-
-- [String functions Azure Cosmos DB](sql-query-string-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Cos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-cos.md
- Title: COS in Azure Cosmos DB query language
-description: Learn about how the Cosine (COS) SQL system function in Azure Cosmos DB returns the trigonometric cosine of the specified angle, in radians, in the specified expression
---- Previously updated : 03/03/2020---
-# COS (Azure Cosmos DB)
-
- Returns the trigonometric cosine of the specified angle, in radians, in the specified expression.
-
-## Syntax
-
-```sql
-COS(<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example calculates the `COS` of the specified angle.
-
-```sql
-SELECT COS(14.78) AS cos
-```
-
- Here is the result set.
-
-```json
-[{"cos": -0.59946542619465426}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Cot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-cot.md
- Title: COT in Azure Cosmos DB query language
-description: Learn about how the Cotangent (COT) SQL system function in Azure Cosmos DB returns the trigonometric cotangent of the specified angle, in radians, in the specified numeric expression
---- Previously updated : 03/03/2020---
-# COT (Azure Cosmos DB)
-
- Returns the trigonometric cotangent of the specified angle, in radians, in the specified numeric expression.
-
-## Syntax
-
-```sql
-COT(<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example calculates the `COT` of the specified angle.
-
-```sql
-SELECT COT(124.1332) AS cot
-```
-
- Here is the result set.
-
-```json
-[{"cot": -0.040311998371148884}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Date Time Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-date-time-functions.md
- Title: Date and time functions in Azure Cosmos DB query language
-description: Learn about date and time SQL system functions in Azure Cosmos DB to perform DateTime and timestamp operations.
---- Previously updated : 08/18/2020---
-# Date and time functions (Azure Cosmos DB)
-
-The date and time functions let you perform DateTime and timestamp operations in Azure Cosmos DB.
-
-## Functions to obtain the date and time
-
-The following scalar functions allow you to get the current UTC date and time in three forms: a string which conforms to the ISO 8601 format,
-a numeric timestamp whose value is the number of milliseconds which have elapsed since the Unix epoch,
-or numeric ticks whose value is the number of 100 nanosecond ticks which have elapsed since the Unix epoch:
-
-* [GetCurrentDateTime](sql-query-getcurrentdatetime.md)
-* [GetCurrentTimestamp](sql-query-getcurrenttimestamp.md)
-* [GetCurrentTicks](sql-query-getcurrentticks.md)
-
-## Functions to work with DateTime values
-
-The following functions allow you to easily manipulate DateTime, timestamp, and tick values:
-
-* [DateTimeAdd](sql-query-datetimeadd.md)
-* [DateTimeBin](sql-query-datetimebin.md)
-* [DateTimeDiff](sql-query-datetimediff.md)
-* [DateTimeFromParts](sql-query-datetimefromparts.md)
-* [DateTimePart](sql-query-datetimepart.md)
-* [DateTimeToTicks](sql-query-datetimetoticks.md)
-* [DateTimeToTimestamp](sql-query-datetimetotimestamp.md)
-* [TicksToDateTime](sql-query-tickstodatetime.md)
-* [TimestampToDateTime](sql-query-timestamptodatetime.md)
-
-## Next steps
-
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
-- [User Defined Functions](sql-query-udfs.md)
-- [Aggregates](sql-query-aggregate-functions.md)
cosmos-db Sql Query Datetimeadd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-datetimeadd.md
- Title: DateTimeAdd in Azure Cosmos DB query language
-description: Learn about SQL system function DateTimeAdd in Azure Cosmos DB.
---- Previously updated : 07/09/2020----
-# DateTimeAdd (Azure Cosmos DB)
-
-Returns a DateTime string value resulting from adding a specified number (as a signed integer) to a specified DateTime string.
-
-## Syntax
-
-```sql
-DateTimeAdd (<DateTimePart> , <numeric_expr> ,<DateTime>)
-```
-
-## Arguments
-
-*DateTimePart*
- The part of date to which DateTimeAdd adds an integer number. This table lists all valid DateTimePart arguments:
-
-| DateTimePart | abbreviations |
-| | -- |
-| Year | "year", "yyyy", "yy" |
-| Month | "month", "mm", "m" |
-| Day | "day", "dd", "d" |
-| Hour | "hour", "hh" |
-| Minute | "minute", "mi", "n" |
-| Second | "second", "ss", "s" |
-| Millisecond | "millisecond", "ms" |
-| Microsecond | "microsecond", "mcs" |
-| Nanosecond | "nanosecond", "ns" |
-
-*numeric_expr*
- Is a signed integer value that will be added to the DateTimePart of the specified DateTime
-
-*DateTime*
- UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
- For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
-
-## Return types
-
-Returns a UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
-## Remarks
-
-DateTimeAdd will return `undefined` for the following reasons:
-
-- The DateTimePart value specified is invalid
-- The numeric_expr specified is not a valid integer
-- The DateTime in the argument or result is not a valid ISO 8601 DateTime.
-
-## Examples
-
-The following example adds 1 month to the DateTime: `2020-07-09T23:20:13.4575530Z`
-
-```sql
-SELECT DateTimeAdd("mm", 1, "2020-07-09T23:20:13.4575530Z") AS OneMonthLater
-```
-
-```json
-[
- {
- "OneMonthLater": "2020-08-09T23:20:13.4575530Z"
- }
-]
-```
-
-The following example subtracts 2 hours from the DateTime: `2020-07-09T23:20:13.4575530Z`
-
-```sql
-SELECT DateTimeAdd("hh", -2, "2020-07-09T23:20:13.4575530Z") AS TwoHoursEarlier
-```
-
-```json
-[
- {
- "TwoHoursEarlier": "2020-07-09T21:20:13.4575530Z"
- }
-]
-```
-
-## Next steps
-
-- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Datetimebin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-datetimebin.md
-
Title: DateTimeBin in Azure Cosmos DB query language
-description: Learn about SQL system function DateTimeBin in Azure Cosmos DB.
---- Previously updated : 05/27/2022 --
-
-
-# DateTimeBin (Azure Cosmos DB)
- [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-
-Returns the nearest multiple of *BinSize* below the specified DateTime, given the unit of measurement *DateTimePart* and the start value *BinAtDateTime*.
--
-## Syntax
-
-```sql
-DateTimeBin (<DateTime> , <DateTimePart> [,BinSize] [,BinAtDateTime])
-```
--
-## Arguments
-
-*DateTime*
- The string value date and time to be binned. A UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
-For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
-
-*DateTimePart*
- The date time part specifies the units for BinSize. DateTimeBin is Undefined for DayOfWeek, Year, and Month. The finest granularity for binning by Nanosecond is 100 nanosecond ticks; if Nanosecond is specified with a BinSize less than 100, the result is Undefined. This table lists all valid DateTimePart arguments for DateTimeBin:
-
-| DateTimePart | abbreviations |
-| --- | --- |
-| Day | "day", "dd", "d" |
-| Hour | "hour", "hh" |
-| Minute | "minute", "mi", "n" |
-| Second | "second", "ss", "s" |
-| Millisecond | "millisecond", "ms" |
-| Microsecond | "microsecond", "mcs" |
-| Nanosecond | "nanosecond", "ns" |
-
-*BinSize* (optional)
- Numeric value that specifies the size of bins. If not specified, the default value is one.
--
-*BinAtDateTime* (optional)
- A UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` that specifies the start date to bin from. Default value is the Unix epoch, '1970-01-01T00:00:00.000000Z'.
--
-## Return types
-
-Returns the result of binning the *DateTime* value.
--
-## Remarks
-
-DateTimeBin will return `Undefined` for the following reasons:
-- The DateTimePart value specified is invalid
-- The BinSize value is zero or negative
-- The DateTime or BinAtDateTime isn't a valid ISO 8601 DateTime or precedes the year 1601 (the Windows epoch)
-
-## Examples
-
-The following example bins '2021-06-28T17:24:29.2991234Z' by one hour:
-
-```sql
-SELECT DateTimeBin('2021-06-28T17:24:29.2991234Z', 'hh') AS BinByHour
-```
-
-```json
-[
-    {
-        "BinByHour": "2021-06-28T17:00:00.0000000Z"
-    }
-]
-```
-
-The following example bins '2021-06-28T17:24:29.2991234Z' given different *BinAtDateTime* values:
-
-```sql
-SELECT
-DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5) AS One_BinByFiveDaysUnixEpochImplicit,
-DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '1970-01-01T00:00:00.0000000Z') AS Two_BinByFiveDaysUnixEpochExplicit,
-DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '1601-01-01T00:00:00.0000000Z') AS Three_BinByFiveDaysFromWindowsEpoch,
-DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '2021-01-01T00:00:00.0000000Z') AS Four_BinByFiveDaysFromYearStart,
-DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '0001-01-01T00:00:00.0000000Z') AS Five_BinByFiveDaysFromUndefinedYear
-```
-
-```json
-[
-    {
-        "One_BinByFiveDaysUnixEpochImplicit": "2021-06-27T00:00:00.0000000Z",
-        "Two_BinByFiveDaysUnixEpochExplicit": "2021-06-27T00:00:00.0000000Z",
-        "Three_BinByFiveDaysFromWindowsEpoch": "2021-06-28T00:00:00.0000000Z",
-        "Four_BinByFiveDaysFromYearStart": "2021-06-25T00:00:00.0000000Z"
-    }
-]
-```
-
-## Next steps
-
-- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Datetimediff https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-datetimediff.md
- Title: DateTimeDiff in Azure Cosmos DB query language
-description: Learn about SQL system function DateTimeDiff in Azure Cosmos DB.
---- Previously updated : 07/09/2020----
-# DateTimeDiff (Azure Cosmos DB)
-Returns the count (as a signed integer value) of the specified DateTimePart boundaries crossed between the specified *StartDate* and *EndDate*.
-
-## Syntax
-
-```sql
-DateTimeDiff (<DateTimePart> , <StartDate> , <EndDate>)
-```
-
-## Arguments
-
-*DateTimePart*
- The part of date whose boundaries DateTimeDiff counts between the two specified DateTimes. This table lists all valid DateTimePart arguments:
-
-| DateTimePart | abbreviations |
-| --- | --- |
-| Year | "year", "yyyy", "yy" |
-| Month | "month", "mm", "m" |
-| Day | "day", "dd", "d" |
-| Hour | "hour", "hh" |
-| Minute | "minute", "mi", "n" |
-| Second | "second", "ss", "s" |
-| Millisecond | "millisecond", "ms" |
-| Microsecond | "microsecond", "mcs" |
-| Nanosecond | "nanosecond", "ns" |
-
-*StartDate*
- UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
- For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
-
-*EndDate*
- UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ`
-
-## Return types
-
-Returns a signed integer value.
-
-## Remarks
-
-DateTimeDiff will return `undefined` for the following reasons:
-
-- The DateTimePart value specified is invalid
-- The StartDate or EndDate is not a valid ISO 8601 DateTime
-
-DateTimeDiff will always return a signed integer value and is a measurement of the number of DateTimePart boundaries crossed, not a measurement of the time interval.
-
-## Examples
-
-The following example computes the number of day boundaries crossed between `2020-01-01T01:02:03.1234527Z` and `2020-01-03T01:02:03.1234567Z`.
-
-```sql
-SELECT DateTimeDiff("day", "2020-01-01T01:02:03.1234527Z", "2020-01-03T01:02:03.1234567Z") AS DifferenceInDays
-```
-
-```json
-[
- {
- "DifferenceInDays": 2
- }
-]
-```
-
-The following example computes the number of year boundaries crossed between `2028-01-01T01:02:03.1234527Z` and `2020-01-03T01:02:03.1234567Z`.
-
-```sql
-SELECT DateTimeDiff("yyyy", "2028-01-01T01:02:03.1234527Z", "2020-01-03T01:02:03.1234567Z") AS DifferenceInYears
-```
-
-```json
-[
- {
- "DifferenceInYears": -8
- }
-]
-```
-
-The following example computes the number of hour boundaries crossed between `2020-01-01T01:00:00.1234527Z` and `2020-01-01T01:59:59.1234567Z`. Even though these DateTime values are over 0.99 hours apart, `DateTimeDiff` returns 0 because no hour boundaries were crossed.
-
-```sql
-SELECT DateTimeDiff("hh", "2020-01-01T01:00:00.1234527Z", "2020-01-01T01:59:59.1234567Z") AS DifferenceInHours
-```
-
-```json
-[
- {
- "DifferenceInHours": 0
- }
-]
-```
-
-## Next steps
-
-- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Datetimefromparts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-datetimefromparts.md
- Title: DateTimeFromParts in Azure Cosmos DB query language
-description: Learn about SQL system function DateTimeFromParts in Azure Cosmos DB.
---- Previously updated : 07/09/2020----
-# DateTimeFromParts (Azure Cosmos DB)
-
-Returns a string DateTime value constructed from input values.
-
-## Syntax
-
-```sql
-DateTimeFromParts(<numberYear>, <numberMonth>, <numberDay> [, numberHour] [, numberMinute] [, numberSecond] [, numberOfFractionsOfSecond])
-```
-
-## Arguments
-
-*numberYear*
- Integer value for the year in the format `YYYY`
-
-*numberMonth*
- Integer value for the month in the format `MM`
-
-*numberDay*
- Integer value for the day in the format `DD`
-
-*numberHour* (optional)
- Integer value for the hour in the format `hh`
-
-*numberMinute* (optional)
- Integer value for the minute in the format `mm`
-
-*numberSecond* (optional)
- Integer value for the second in the format `ss`
-
-*numberOfFractionsOfSecond* (optional)
- Integer value for the fractional part of a second in the format `.fffffff`
-
-## Return types
-
-Returns a UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
- For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
-
-## Remarks
-
-If the specified integers would create an invalid DateTime, DateTimeFromParts will return `undefined`.
-
-If an optional argument isn't specified, its value will be 0.
-
-## Examples
-
-Here's an example that only includes required arguments to construct a DateTime:
-
-```sql
-SELECT DateTimeFromParts(2020, 9, 4) AS DateTime
-```
-
-```json
-[
- {
- "DateTime": "2020-09-04T00:00:00.0000000Z"
- }
-]
-```
-
-Here's another example that also uses some optional arguments to construct a DateTime:
-
-```sql
-SELECT DateTimeFromParts(2020, 9, 4, 10, 52) AS DateTime
-```
-
-```json
-[
- {
- "DateTime": "2020-09-04T10:52:00.0000000Z"
- }
-]
-```
-
-Here's another example that also uses all optional arguments to construct a DateTime:
-
-```sql
-SELECT DateTimeFromParts(2020, 9, 4, 10, 52, 12, 3456789) AS DateTime
-```
-
-```json
-[
- {
- "DateTime": "2020-09-04T10:52:12.3456789Z"
- }
-]
-```
-
-## Next steps
-
-- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Datetimepart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-datetimepart.md
- Title: DateTimePart in Azure Cosmos DB query language
-description: Learn about SQL system function DateTimePart in Azure Cosmos DB.
---- Previously updated : 08/14/2020----
-# DateTimePart (Azure Cosmos DB)
-
-Returns the value of the specified DateTimePart of the specified DateTime.
-
-## Syntax
-
-```sql
-DateTimePart (<DateTimePart> , <DateTime>)
-```
-
-## Arguments
-
-*DateTimePart*
- The part of the date for which DateTimePart will return the value. This table lists all valid DateTimePart arguments:
-
-| DateTimePart | abbreviations |
-| --- | --- |
-| Year | "year", "yyyy", "yy" |
-| Month | "month", "mm", "m" |
-| Day | "day", "dd", "d" |
-| Hour | "hour", "hh" |
-| Minute | "minute", "mi", "n" |
-| Second | "second", "ss", "s" |
-| Millisecond | "millisecond", "ms" |
-| Microsecond | "microsecond", "mcs" |
-| Nanosecond | "nanosecond", "ns" |
-
-*DateTime*
- UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ`
-
-## Return types
-
-Returns a positive integer value.
-
-## Remarks
-
-DateTimePart will return `undefined` for the following reasons:
-
-- The DateTimePart value specified is invalid
-- The DateTime is not a valid ISO 8601 DateTime
-
-This system function will not utilize the index.
-
-## Examples
-
-Here's an example that returns the integer value of the month:
-
-```sql
-SELECT DateTimePart("m", "2020-01-02T03:04:05.6789123Z") AS MonthValue
-```
-
-```json
-[
- {
- "MonthValue": 1
- }
-]
-```
-
-Here's an example that returns the number of microseconds:
-
-```sql
-SELECT DateTimePart("mcs", "2020-01-02T03:04:05.6789123Z") AS MicrosecondsValue
-```
-
-```json
-[
- {
- "MicrosecondsValue": 678912
- }
-]
-```
-
-## Next steps
-
-- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Datetimetoticks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-datetimetoticks.md
- Title: DateTimeToTicks in Azure Cosmos DB query language
-description: Learn about SQL system function DateTimeToTicks in Azure Cosmos DB.
---- Previously updated : 08/18/2020----
-# DateTimeToTicks (Azure Cosmos DB)
-
-Converts the specified DateTime to ticks. A single tick represents one hundred nanoseconds or one ten-millionth of a second.
-
-## Syntax
-
-```sql
-DateTimeToTicks (<DateTime>)
-```
-
-## Arguments
-
-*DateTime*
- UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ`
-
-## Return types
-
-Returns a signed numeric value, the number of 100-nanosecond ticks that have elapsed since the Unix epoch. In other words, DateTimeToTicks returns the number of 100-nanosecond ticks between 00:00:00 Thursday, 1 January 1970 and the specified DateTime.
-
-## Remarks
-
-DateTimeToTicks will return `undefined` if the DateTime is not a valid ISO 8601 DateTime.
-
-This system function will not utilize the index.
-
-## Examples
-
-Here's an example that returns the number of ticks:
-
-```sql
-SELECT DateTimeToTicks("2020-01-02T03:04:05.6789123Z") AS Ticks
-```
-
-```json
-[
- {
- "Ticks": 15779342456789124
- }
-]
-```
-
-Here's an example that returns the number of ticks without specifying the number of fractional seconds:
-
-```sql
-SELECT DateTimeToTicks("2020-01-02T03:04:05Z") AS Ticks
-```
-
-```json
-[
- {
- "Ticks": 15779342450000000
- }
-]
-```
-
-## Next steps
-
-- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Datetimetotimestamp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-datetimetotimestamp.md
- Title: DateTimeToTimestamp in Azure Cosmos DB query language
-description: Learn about SQL system function DateTimeToTimestamp in Azure Cosmos DB.
---- Previously updated : 08/18/2020----
-# DateTimeToTimestamp (Azure Cosmos DB)
-
-Converts the specified DateTime to a timestamp.
-
-## Syntax
-
-```sql
-DateTimeToTimestamp (<DateTime>)
-```
-
-## Arguments
-
-*DateTime*
- UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
- For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
-
-## Return types
-
-Returns a signed numeric value, the number of milliseconds that have elapsed since the Unix epoch, that is, the number of milliseconds between 00:00:00 Thursday, 1 January 1970 and the specified DateTime.
-
-## Remarks
-
-DateTimeToTimestamp will return `undefined` if the DateTime value specified is invalid
-
-## Examples
-
-The following example converts the DateTime to a timestamp:
-
-```sql
-SELECT DateTimeToTimestamp("2020-07-09T23:20:13.4575530Z") AS Timestamp
-```
-
-```json
-[
- {
- "Timestamp": 1594336813457
- }
-]
-```
-
-Here's another example:
-
-```sql
-SELECT DateTimeToTimestamp("2020-07-09") AS Timestamp
-```
-
-```json
-[
- {
- "Timestamp": 1594252800000
- }
-]
-```
-
-## Next steps
-
-- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Degrees https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-degrees.md
- Title: DEGREES in Azure Cosmos DB query language
-description: Learn about the DEGREES SQL system function in Azure Cosmos DB to return the corresponding angle in degrees for an angle specified in radians
---- Previously updated : 03/03/2020---
-# DEGREES (Azure Cosmos DB)
-
- Returns the corresponding angle in degrees for an angle specified in radians.
-
-## Syntax
-
-```sql
-DEGREES (<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example returns the number of degrees in an angle of PI/2 radians.
-
-```sql
-SELECT DEGREES(PI()/2) AS degrees
-```
-
- Here is the result set.
-
-```json
-[{"degrees": 90}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Endswith https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-endswith.md
- Title: EndsWith in Azure Cosmos DB query language
-description: Learn about the ENDSWITH SQL system function in Azure Cosmos DB to return a Boolean indicating whether the first string expression ends with the second
---- Previously updated : 06/02/2020---
-# ENDSWITH (Azure Cosmos DB)
-
-Returns a Boolean indicating whether the first string expression ends with the second.
-
-## Syntax
-
-```sql
-ENDSWITH(<str_expr1>, <str_expr2> [, <bool_expr>])
-```
-
-## Arguments
-
-*str_expr1*
- Is a string expression.
-
-*str_expr2*
- Is a string expression to be compared to the end of *str_expr1*.
-
-*bool_expr*
- Optional value for ignoring case. When set to true, ENDSWITH will do a case-insensitive search. When unspecified, this value is false.
-
-## Return types
-
- Returns a Boolean expression.
-
-## Examples
-
-The following example checks if the string "abc" ends with "b" and "bC".
-
-```sql
-SELECT ENDSWITH("abc", "b", false) AS e1, ENDSWITH("abc", "bC", false) AS e2, ENDSWITH("abc", "bC", true) AS e3
-```
-
- Here is the result set.
-
-```json
-[
- {
- "e1": false,
- "e2": false,
- "e3": true
- }
-]
-```
-
-## Remarks
-
-Learn about [how this string system function uses the index](sql-query-string-functions.md).
-
-## Next steps
-
-- [String functions Azure Cosmos DB](sql-query-string-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Equality Comparison Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-equality-comparison-operators.md
- Title: Equality and comparison operators in Azure Cosmos DB
-description: Learn about SQL equality and comparison operators supported by Azure Cosmos DB.
---- Previously updated : 01/07/2022----
-# Equality and comparison operators in Azure Cosmos DB
-
-This article details the equality and comparison operators supported by Azure Cosmos DB.
-
-## Understanding equality comparisons
-
-The following table shows the result of equality comparisons in the SQL API between any two JSON types.
-
-| **Op** | **Undefined** | **Null** | **Boolean** | **Number** | **String** | **Object** | **Array** |
-| --- | --- | --- | --- | --- | --- | --- | --- |
-| **Undefined** | Undefined | Undefined | Undefined | Undefined | Undefined | Undefined | Undefined |
-| **Null** | Undefined | **Ok** | Undefined | Undefined | Undefined | Undefined | Undefined |
-| **Boolean** | Undefined | Undefined | **Ok** | Undefined | Undefined | Undefined | Undefined |
-| **Number** | Undefined | Undefined | Undefined | **Ok** | Undefined | Undefined | Undefined |
-| **String** | Undefined | Undefined | Undefined | Undefined | **Ok** | Undefined | Undefined |
-| **Object** | Undefined | Undefined | Undefined | Undefined | Undefined | **Ok** | Undefined |
-| **Array** | Undefined | Undefined | Undefined | Undefined | Undefined | Undefined | **Ok** |
-
-For comparison operators such as `>`, `>=`, `!=`, `<`, and `<=`, comparison across types or between two objects or arrays produces `Undefined`.
-
-If the result of the scalar expression is `Undefined`, the item isn't included in the result, because `Undefined` doesn't equal `true`.
-
-For example, the following query's comparison between a number and string value produces `Undefined`. Therefore, the filter does not include any results.
-
-```sql
-SELECT *
-FROM c
-WHERE 7 = 'a'
-```
-
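-By contrast, here's a sketch of the same filter comparing two values of the same type; the comparison evaluates to `true`, so every item is returned:
-
-```sql
-SELECT *
-FROM c
-WHERE 7 = 7
-```
-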
-## Next steps
-
-- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
-- [Keywords](sql-query-keywords.md)
-- [SELECT clause](sql-query-select.md)
cosmos-db Sql Query Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-exp.md
- Title: EXP in Azure Cosmos DB query language
-description: Learn about the Exponent (EXP) SQL system function in Azure Cosmos DB to return the exponential value of the specified numeric expression
---- Previously updated : 09/13/2019---
-# EXP (Azure Cosmos DB)
-
- Returns the exponential value of the specified numeric expression.
-
-## Syntax
-
-```sql
-EXP (<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Remarks
-
- The constant **e** (2.718281…) is the base of natural logarithms.
-
- The exponent of a number is the constant **e** raised to the power of the number. For example, EXP(1.0) = e^1.0 = 2.71828182845905 and EXP(10) = e^10 = 22026.4657948067.
-
- The exponential of the natural logarithm of a number is the number itself: EXP (LOG (n)) = n. And the natural logarithm of the exponential of a number is the number itself: LOG (EXP (n)) = n.
-
-## Examples
-
- The following example returns the exponential value of 10.
-
-```sql
-SELECT EXP(10) AS exp
-```
-
- Here is the result set.
-
-```json
-[{exp: 22026.465794806718}]
-```
-
- The following example returns the exponential value of the natural logarithm of 20 and the natural logarithm of the exponential of 20. Because these functions are inverse functions of one another, the return value with rounding for floating point math in both cases is 20.
-
-```sql
-SELECT EXP(LOG(20)) AS exp1, LOG(EXP(20)) AS exp2
-```
-
- Here is the result set.
-
-```json
-[{exp1: 19.999999999999996, exp2: 20}]
-```
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Floor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-floor.md
- Title: FLOOR in Azure Cosmos DB query language
-description: Learn about the FLOOR SQL system function in Azure Cosmos DB to return the largest integer less than or equal to the specified numeric expression
---- Previously updated : 09/13/2019---
-# FLOOR (Azure Cosmos DB)
-
- Returns the largest integer less than or equal to the specified numeric expression.
-
-## Syntax
-
-```sql
-FLOOR (<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example shows positive numeric, negative, and zero values with the `FLOOR` function.
-
-```sql
-SELECT FLOOR(123.45) AS fl1, FLOOR(-123.45) AS fl2, FLOOR(0.0) AS fl3
-```
-
- Here is the result set.
-
-```json
-[{fl1: 123, fl2: -124, fl3: 0}]
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query From https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-from.md
- Title: FROM clause in Azure Cosmos DB
-description: Learn about the SQL syntax, and example for FROM clause for Azure Cosmos DB. This article also shows examples to scope results, and get sub items by using the FROM clause.
---- Previously updated : 05/08/2020----
-# FROM clause in Azure Cosmos DB
-
-The FROM (`FROM <from_specification>`) clause is optional, unless the source is filtered or projected later in the query. A query like `SELECT * FROM Families` enumerates over the entire `Families` container. You can also use the special identifier ROOT for the container instead of using the container name.
-
-The `FROM` clause enforces the following rules per query:
-
-* The container can be aliased, such as `SELECT f.id FROM Families AS f` or simply `SELECT f.id FROM Families f`. Here `f` is the alias for `Families`. AS is an optional keyword to [alias](sql-query-working-with-json.md#aliasing) the identifier.
-
-* Once aliased, the original source name cannot be bound. For example, `SELECT Families.id FROM Families f` is syntactically invalid because the identifier `Families` has been aliased and can't be resolved anymore.
-
-* All referenced properties must be fully qualified, to avoid any ambiguous bindings in the absence of strict schema adherence. For example, `SELECT id FROM Families f` is syntactically invalid because the property `id` isn't bound.
-
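-As a quick sketch that follows all three rules (using the same `Families` container and `address.state` property shown in the examples later in this article):
-
-```sql
-SELECT f.id, f.address.state
-FROM Families AS f
-```
-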
-## Syntax
-
-```sql
-FROM <from_specification>
-
-<from_specification> ::=
- <from_source> {[ JOIN <from_source>][,...n]}
-
-<from_source> ::=
- <container_expression> [[AS] input_alias]
- | input_alias IN <container_expression>
-
-<container_expression> ::=
- ROOT
- | container_name
- | input_alias
- | <container_expression> '.' property_name
- | <container_expression> '[' "property_name" | array_index ']'
-```
-
-## Arguments
-
-- `<from_source>`
-
- Specifies a data source, with or without an alias. If an alias is not specified, it will be inferred from the `<container_expression>` using the following rules:
-
- - If the expression is a container_name, then container_name will be used as an alias.
-
- - If the expression is `<container_expression>.property_name`, then property_name will be used as an alias.
-
-- AS `input_alias`
-
- Specifies that the `input_alias` is a set of values returned by the underlying container expression.
-
-- `input_alias` IN
-
- Specifies that the `input_alias` should represent the set of values obtained by iterating over all array elements of each array returned by the underlying container expression. Any value returned by the underlying container expression that is not an array is ignored.
-
-- `<container_expression>`
-
- Specifies the container expression to be used to retrieve the documents.
-
-- `ROOT`
-
- Specifies that document should be retrieved from the default, currently connected container.
-
-- `container_name`
-
- Specifies that document should be retrieved from the provided container. The name of the container must match the name of the container currently connected to.
-
-- `input_alias`
-
- Specifies that document should be retrieved from the other source defined by the provided alias.
-
-- `<container_expression> '.' property_name`
-
- Specifies that document should be retrieved by accessing the `property_name` property.
-
-- `<container_expression> '[' "property_name" | array_index ']'`
-
- Specifies that document should be retrieved by accessing the `property_name` property or array_index array element for all documents retrieved by specified container expression.
-
-## Remarks
-
-All aliases provided or inferred in the `<from_source>`(s) must be unique. The syntax `<container_expression> '.' property_name` is the same as `<container_expression> '[' "property_name" ']'`. However, the latter syntax can be used if a property name contains a non-identifier character.
-
-### Handling missing properties, missing array elements, and undefined values
-
-If a container expression accesses properties or array elements and that value does not exist, that value will be ignored and not processed further.
-
-### Container expression context scoping
-
-A container expression may be container-scoped or document-scoped:
-
-- An expression is container-scoped if the underlying source of the container expression is either ROOT or `container_name`. Such an expression represents a set of documents retrieved from the container directly, and is not dependent on the processing of other container expressions.
-
-- An expression is document-scoped if the underlying source of the container expression is `input_alias` introduced earlier in the query. Such an expression represents a set of documents obtained by evaluating the container expression in the scope of each document belonging to the set associated with the aliased container. The resulting set will be a union of sets obtained by evaluating the container expression for each of the documents in the underlying set.
-
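-For example, in the following sketch (using the `Families` container from the examples below), `f` is container-scoped while `c` is document-scoped, because `c` iterates the `children` array of each document bound to `f`:
-
-```sql
-SELECT f.id, c.grade
-FROM Families f
-JOIN c IN f.children
-```
-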
-## Examples
-
-### Get subitems by using the FROM clause
-
-The FROM clause can reduce the source to a smaller subset. To enumerate only a subtree in each item, the subroot can become the source, as shown in the following example:
-
-```sql
- SELECT *
- FROM Families.children
-```
-
-The results are:
-
-```json
- [
- [
- {
- "firstName": "Henriette Thaulow",
- "gender": "female",
- "grade": 5,
- "pets": [
- {
- "givenName": "Fluffy"
- }
- ]
- }
- ],
- [
- {
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female",
- "grade": 1
- },
- {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8
- }
- ]
- ]
-```
-
-The preceding query used an array as the source, but you can also use an object as the source. The query considers any valid, defined JSON value in the source for inclusion in the result. The following example would exclude `Families` that don't have an `address.state` value.
-
-```sql
- SELECT *
- FROM Families.address.state
-```
-
-The results are:
-
-```json
- [
- "WA",
- "NY"
- ]
-```
-
-## Next steps
-
-- [Getting started](sql-query-getting-started.md)
-- [SELECT clause](sql-query-select.md)
-- [WHERE clause](sql-query-where.md)
cosmos-db Sql Query Geospatial Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-geospatial-index.md
- Title: Index geospatial data with Azure Cosmos DB
-description: Index spatial data with Azure Cosmos DB
---- Previously updated : 11/03/2020----
-# Index geospatial data with Azure Cosmos DB
-
-We designed Azure Cosmos DB's database engine to be truly schema agnostic and provide first class support for JSON. The write optimized database engine of Azure Cosmos DB natively understands spatial data represented in the GeoJSON standard.
-
-In a nutshell, the geometry is projected from geodetic coordinates onto a 2D plane and then divided progressively into cells using a **quadtree**. These cells are mapped to 1D based on the location of the cell within a **Hilbert space-filling curve**, which preserves locality of points. Additionally, when location data is indexed, it goes through a process known as **tessellation**: all the cells that intersect a location are identified and stored as keys in the Azure Cosmos DB index. At query time, arguments like points and Polygons are also tessellated to extract the relevant cell ID ranges, which are then used to retrieve data from the index.
-
-If you specify an indexing policy that includes a spatial index for `/*` (all paths), then all data found within the container is indexed for efficient spatial queries.
-
-> [!NOTE]
-> Azure Cosmos DB supports indexing of Points, LineStrings, Polygons, and MultiPolygons. If you index any one of these types, we will automatically index all other types. In other words, if you index Polygons, we'll also index Points, LineStrings, and MultiPolygons. Indexing a new spatial type does not affect the write RU charge or index size unless you have valid GeoJSON data of that type.
-
-## Modifying geospatial configuration
-
-In your container, the **Geospatial Configuration** specifies how the spatial data will be indexed. Specify one **Geospatial Configuration** per container: geography or geometry.
-
-You can toggle between the **geography** and **geometry** spatial type in the Azure portal. It's important that you create a [valid spatial geometry indexing policy with a bounding box](#geometry-data-indexing-examples) before switching to the geometry spatial type.
-
-Here's how to set the **Geospatial Configuration** in **Data Explorer** within the Azure portal:
--
-You can also modify the `geospatialConfig` in the .NET SDK to adjust the **Geospatial Configuration**:
-
-If not specified, the `geospatialConfig` will default to the geography data type. When you modify the `geospatialConfig`, all existing geospatial data in the container will be reindexed.
-
-Here is an example for modifying the geospatial data type to `geometry` by setting the `geospatialConfig` property and adding a **boundingBox**:
-
-```csharp
- //Retrieve the container's details
- ContainerResponse containerResponse = await client.GetContainer("db", "spatial").ReadContainerAsync();
- //Set GeospatialConfig to Geometry
- GeospatialConfig geospatialConfig = new GeospatialConfig(GeospatialType.Geometry);
- containerResponse.Resource.GeospatialConfig = geospatialConfig;
- // Add a spatial index including the required boundingBox
- SpatialPath spatialPath = new SpatialPath
- {
- Path = "/locations/*",
- BoundingBox = new BoundingBoxProperties(){
- Xmin = 0,
- Ymin = 0,
- Xmax = 10,
- Ymax = 10
- }
- };
- spatialPath.SpatialTypes.Add(SpatialType.Point);
- spatialPath.SpatialTypes.Add(SpatialType.LineString);
- spatialPath.SpatialTypes.Add(SpatialType.Polygon);
- spatialPath.SpatialTypes.Add(SpatialType.MultiPolygon);
-
- containerResponse.Resource.IndexingPolicy.SpatialIndexes.Add(spatialPath);
-
- // Update container with changes
- await client.GetContainer("db", "spatial").ReplaceContainerAsync(containerResponse.Resource);
-```
-
-## Geography data indexing examples
-
-The following JSON snippet shows an indexing policy with spatial indexing enabled for the **geography** data type. It is valid for spatial data with the geography data type and will index any GeoJSON Point, Polygon, MultiPolygon, or LineString found within documents for spatial querying. If you are modifying the indexing policy using the Azure portal, you can specify the following JSON for indexing policy to enable spatial indexing on your container:
-
-**Container indexing policy JSON with geography spatial indexing**
-
-```json
-{
- "automatic": true,
- "indexingMode": "Consistent",
- "includedPaths": [
- {
- "path": "/*"
- }
- ],
- "spatialIndexes": [
- {
- "path": "/*",
- "types": [
- "Point",
- "Polygon",
- "MultiPolygon",
- "LineString"
- ]
- }
- ],
- "excludedPaths": []
-}
-```
-
-> [!NOTE]
-> If the location GeoJSON value within the document is malformed or invalid, then it will not get indexed for spatial querying. You can validate location values using ST_ISVALID and ST_ISVALIDDETAILED.
-
-You can also [modify indexing policy](../how-to-manage-indexing-policy.md) using the Azure CLI, PowerShell, or any SDK.
-
-## Geometry data indexing examples
-
-With the **geometry** data type, similar to the geography data type, you must specify relevant paths and types to index. In addition, you must also specify a `boundingBox` within the indexing policy to indicate the desired area to be indexed for that specific path. Each geospatial path requires its own `boundingBox`.
-
-The bounding box consists of the following properties:
-
-- **xmin**: the minimum indexed x coordinate
-- **ymin**: the minimum indexed y coordinate
-- **xmax**: the maximum indexed x coordinate
-- **ymax**: the maximum indexed y coordinate
-
-A bounding box is required because geometric data occupies a plane that can be infinite. Spatial indexes, however, require a finite space. For the **geography** data type, the Earth is the boundary and you do not need to set a bounding box.
-
-Create a bounding box that contains all (or most) of your data. Only operations computed on the objects that are entirely inside the bounding box will be able to utilize the spatial index. Making the bounding box larger than necessary will negatively impact query performance.
-
-Here is an example indexing policy that indexes **geometry** data with **geospatialConfig** set to `geometry`:
-
-```json
-{
- "indexingMode": "consistent",
- "automatic": true,
- "includedPaths": [
- {
- "path": "/*"
- }
- ],
- "excludedPaths": [
- {
- "path": "/\"_etag\"/?"
- }
- ],
- "spatialIndexes": [
- {
- "path": "/locations/*",
- "types": [
- "Point",
- "LineString",
- "Polygon",
- "MultiPolygon"
- ],
- "boundingBox": {
- "xmin": -10,
- "ymin": -20,
- "xmax": 10,
- "ymax": 20
- }
- }
- ]
-}
-```
-
-The above indexing policy has a **boundingBox** of (-10, 10) for x coordinates and (-20, 20) for y coordinates. The container with the above indexing policy will index all Points, Polygons, MultiPolygons, and LineStrings that are entirely within this region.
-
-> [!NOTE]
-> If you try to add an indexing policy with a **boundingBox** to a container with `geography` data type, it will fail. You should modify the container's **geospatialConfig** to be `geometry` before adding a **boundingBox**. You can add data and modify the remainder of
-> your indexing policy (such as the paths and types) either before or after selecting the geospatial data type for the container.
-
-## Next steps
-
-Now that you have learned how to get started with geospatial support in Azure Cosmos DB, next you can:
-
-* Learn more about [Azure Cosmos DB Query](sql-query-getting-started.md)
-* Learn more about [Querying spatial data with Azure Cosmos DB](sql-query-geospatial-query.md)
-* Learn more about [Geospatial and GeoJSON location data in Azure Cosmos DB](sql-query-geospatial-intro.md)
cosmos-db Sql Query Geospatial Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-geospatial-intro.md
- Title: Geospatial and GeoJSON location data in Azure Cosmos DB
-description: Understand how to create spatial objects with Azure Cosmos DB and the SQL API.
---- Previously updated : 02/17/2022----
-# Geospatial and GeoJSON location data in Azure Cosmos DB
-
-This article is an introduction to the geospatial functionality in Azure Cosmos DB. After reading our documentation on geospatial indexing you will be able to answer the following questions:
-
-* How do I store spatial data in Azure Cosmos DB?
-* How can I query spatial data in Azure Cosmos DB in SQL and LINQ?
-* How do I enable or disable spatial indexing in Azure Cosmos DB?
-
-## Spatial Data Use Cases
-
-Geospatial data often involve proximity queries, for example, "find all coffee shops near my current location". Common use cases are:
-
-* Geolocation Analytics, driving specific located marketing initiatives.
-* Location based personalization, for multiple industries like Retail and Healthcare.
-* Logistics enhancement, for transport optimization.
-* Risk Analysis, especially for insurance and finance companies.
-* Situational awareness, for alerts and notifications.
-
-## Introduction to spatial data
-
-Spatial data describes the position and shape of objects in space. In most applications, these correspond to objects on the earth and are referred to as geospatial data. Spatial data can be used to represent the location of a person, a place of interest, or the boundary of a city or a lake.
-
-Azure Cosmos DB's SQL API supports two spatial data types: the **geometry** data type and the **geography** data type.
-
-- The **geometry** type represents data in a Euclidean (flat) coordinate system
-- The **geography** type represents data in a round-earth coordinate system.
-
-## Supported data types
-
-Azure Cosmos DB supports indexing and querying of geospatial point data that's represented using the [GeoJSON specification](https://tools.ietf.org/html/rfc7946). GeoJSON data structures are always valid JSON objects, so they can be stored and queried using Azure Cosmos DB without any specialized tools or libraries.
-
-Azure Cosmos DB supports the following spatial data types:
-
-- Point
-- LineString
-- Polygon
-- MultiPolygon
-
-> [!TIP]
-> Currently spatial data in Azure Cosmos DB is not supported by Entity Framework. Please use one of the Azure Cosmos DB SDKs instead.
-
-### Points
-
-A **Point** denotes a single position in space. In geospatial data, a Point represents the exact location, which could be a street address of a grocery store, a kiosk, an automobile, or a city. A point is represented in GeoJSON (and Azure Cosmos DB) using its coordinate pair of longitude and latitude.
-
-Here's an example JSON for a point:
-
-**Points in Azure Cosmos DB**
-
-```json
-{
- "type":"Point",
- "coordinates":[ 31.9, -4.8 ]
-}
-```
-
-Spatial data types can be embedded in an Azure Cosmos DB document as shown in this example of a user profile containing location data:
-
-**User Profile with Location stored in Azure Cosmos DB**
-
-```json
-{
- "id":"cosmosdb-profile",
- "screen_name":"@CosmosDB",
- "city":"Redmond",
- "topics":[ "global", "distributed" ],
- "location":{
- "type":"Point",
- "coordinates":[ 31.9, -4.8 ]
- }
-}
-```
-
-### Points in a geometry coordinate system
-
-For the **geometry** data type, GeoJSON specification specifies the horizontal axis first and the vertical axis second.
-
-### Points in a geography coordinate system
-
-For the **geography** data type, GeoJSON specification specifies longitude first and latitude second. Like in other mapping applications, longitude and latitude are angles and represented in terms of degrees. Longitude values are measured from the Prime Meridian and are between -180 degrees and 180.0 degrees, and latitude values are measured from the equator and are between -90.0 degrees and 90.0 degrees.
-
-Azure Cosmos DB interprets coordinates as represented per the WGS-84 reference system. See below for more details about coordinate reference systems.
-
-### LineStrings
-
-**LineStrings** represent a series of two or more points in space and the line segments that connect them. In geospatial data, LineStrings are commonly used to represent highways or rivers.
-
-**LineStrings in GeoJSON**
-
-```json
- "type":"LineString",
- "coordinates":[ [
- [ 31.8, -5 ],
- [ 31.8, -4.7 ]
- ] ]
-```
-
-### Polygons
-
-A **Polygon** is a boundary of connected points that forms a closed LineString. Polygons are commonly used to represent natural formations like lakes or political jurisdictions like cities and states. Here's an example of a Polygon in Azure Cosmos DB:
-
-**Polygons in GeoJSON**
-
-```json
-{
- "type":"Polygon",
- "coordinates":[ [
- [ 31.8, -5 ],
- [ 32, -5 ],
- [ 32, -4.7 ],
- [ 31.8, -4.7 ],
- [ 31.8, -5 ]
- ] ]
-}
-```
-
-> [!NOTE]
-> The GeoJSON specification requires that for valid Polygons, the last coordinate pair provided should be the same as the first, to create a closed shape.
->
-> Points within a Polygon must be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
->
->
-
-### MultiPolygons
-
-A **MultiPolygon** is an array of zero or more Polygons. **MultiPolygons** cannot overlap sides or have any common area. They may touch at one or more points.
-
-**MultiPolygons in GeoJSON**
-
-```json
-{
- "type":"MultiPolygon",
- "coordinates":[[[
- [52.0, 12.0],
- [53.0, 12.0],
- [53.0, 13.0],
- [52.0, 13.0],
- [52.0, 12.0]
- ]],
- [[
- [50.0, 0.0],
- [51.0, 0.0],
- [51.0, 5.0],
- [50.0, 5.0],
- [50.0, 0.0]
- ]]]
-}
-```
-
-## Coordinate reference systems
-
-Since the shape of the earth is irregular, coordinates of geography geospatial data are represented in many coordinate reference systems (CRS), each with its own frame of reference and units of measurement. For example, the "National Grid of Britain" is a reference system that is accurate for the United Kingdom, but not outside it.
-
-The most popular CRS in use today is the World Geodetic System [WGS-84](https://earth-info.nga.mil/GandG/update/index.php). GPS devices and many mapping services, including the Google Maps and Bing Maps APIs, use WGS-84. Azure Cosmos DB supports indexing and querying of geography geospatial data using the WGS-84 CRS only.
-
-## Creating documents with spatial data
-When you create documents that contain GeoJSON values, they are automatically indexed with a spatial index in accordance with the indexing policy of the container. If you're working with an Azure Cosmos DB SDK in a dynamically typed language like Python or Node.js, you must create valid GeoJSON.
-
-**Create Document with Geospatial data in Node.js**
-
-```javascript
-var userProfileDocument = {
- "id":"cosmosdb",
- "location":{
- "type":"Point",
- "coordinates":[ -122.12, 47.66 ]
- }
-};
-
-client.createDocument(`dbs/${databaseName}/colls/${collectionName}`, userProfileDocument, (err, created) => {
- // additional code within the callback
-});
-```
-
-If you're working with the SQL APIs, you can use the `Point`, `LineString`, `Polygon`, and `MultiPolygon` classes within the `Microsoft.Azure.Cosmos.Spatial` namespace to embed location information within your application objects. These classes help simplify the serialization and deserialization of spatial data into GeoJSON.
-
-**Create Document with Geospatial data in .NET**
-
-```csharp
-using Microsoft.Azure.Cosmos.Spatial;
-
-public class UserProfile
-{
- [JsonProperty("id")]
- public string id { get; set; }
-
- [JsonProperty("location")]
- public Point Location { get; set; }
-
- // More properties
-}
-
-await container.CreateItemAsync( new UserProfile
- {
- id = "cosmosdb",
- Location = new Point (-122.12, 47.66)
- });
-```
-
-If you don't have the latitude and longitude information, but have the physical addresses or location name like city or country/region, you can look up the actual coordinates by using a geocoding service like Bing Maps REST Services. Learn more about Bing Maps geocoding [here](/bingmaps/rest-services/).
-
-## Next steps
-
-Now that you have learned how to get started with geospatial support in Azure Cosmos DB, next you can:
-
-* Learn more about [Azure Cosmos DB Query](sql-query-getting-started.md)
-* Learn more about [Querying spatial data with Azure Cosmos DB](sql-query-geospatial-query.md)
-* Learn more about [Index spatial data with Azure Cosmos DB](sql-query-geospatial-index.md)
cosmos-db Sql Query Geospatial Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-geospatial-query.md
- Title: Querying geospatial data with Azure Cosmos DB
-description: Querying spatial data with Azure Cosmos DB
---- Previously updated : 02/20/2020----
-# Querying geospatial data with Azure Cosmos DB
-
-This article will cover how to query geospatial data in Azure Cosmos DB using SQL and LINQ. Currently storing and accessing geospatial data is supported by Azure Cosmos DB SQL API accounts only. Azure Cosmos DB supports the following Open Geospatial Consortium (OGC) built-in functions for geospatial querying. For more information on the complete set of built-in functions in the SQL language, see [Query System Functions in Azure Cosmos DB](sql-query-system-functions.md).
-
-## Spatial SQL built-in functions
-
-Here is a list of geospatial system functions useful for querying in Azure Cosmos DB:
-
-|**Usage**|**Description**|
-|---|---|
-| ST_DISTANCE (spatial_expr, spatial_expr) | Returns the distance between the two GeoJSON Point, Polygon, or LineString expressions.|
-|ST_WITHIN (spatial_expr, spatial_expr) | Returns a Boolean expression indicating whether the first GeoJSON object (Point, Polygon, or LineString) is within the second GeoJSON object (Point, Polygon, or LineString).|
-|ST_INTERSECTS (spatial_expr, spatial_expr)| Returns a Boolean expression indicating whether the two specified GeoJSON objects (Point, Polygon, or LineString) intersect.|
-|ST_ISVALID| Returns a Boolean value indicating whether the specified GeoJSON Point, Polygon, or LineString expression is valid.|
-| ST_ISVALIDDETAILED| Returns a JSON value containing a Boolean value if the specified GeoJSON Point, Polygon, or LineString expression is valid. If invalid, it returns the reason as a string value.|
-
-Spatial functions can be used to perform proximity queries against spatial data. For example, here's a query that returns all family documents that are within 30 km of the specified location using the `ST_DISTANCE` built-in function.
-
-**Query**
-
-```sql
- SELECT f.id
- FROM Families f
- WHERE ST_DISTANCE(f.location, {"type": "Point", "coordinates":[31.9, -4.8]}) < 30000
-```
-
-**Results**
-
-```json
- [{
- "id": "WakefieldFamily"
- }]
-```
-
-If you include spatial indexing in your indexing policy, then "distance queries" will be served efficiently through the index. For more information on spatial indexing, see [geospatial indexing](sql-query-geospatial-index.md). If you don't have a spatial index for the specified paths, the query will do a scan of the container.
-
-`ST_WITHIN` can be used to check if a point lies within a Polygon. Commonly Polygons are used to represent boundaries like zip codes, state boundaries, or natural formations. Again if you include spatial indexing in your indexing policy, then "within" queries will be served efficiently through the index.
-
-Polygon arguments in `ST_WITHIN` can contain only a single ring, that is, the Polygons must not contain holes in them.
-
-**Query**
-
-```sql
- SELECT *
- FROM Families f
- WHERE ST_WITHIN(f.location, {
- "type":"Polygon",
- "coordinates": [[[31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]]]
- })
-```
-
-**Results**
-
-```json
- [{
- "id": "WakefieldFamily",
- }]
-```
-
-> [!NOTE]
-> Similar to how mismatched types work in an Azure Cosmos DB query, if the location value specified in either argument is malformed or invalid, then it evaluates to **undefined** and the document is skipped from the query results. If your query returns no results, run `ST_ISVALIDDETAILED` to debug why the spatial type is invalid.
->
->
-
-Azure Cosmos DB also supports performing inverse queries, that is, you can index polygons or lines in Azure Cosmos DB, then query for the areas that contain a specified point. This pattern is commonly used in logistics to identify, for example, when a truck enters or leaves a designated area.
-
-**Query**
-
-```sql
- SELECT *
- FROM Areas a
- WHERE ST_WITHIN({"type": "Point", "coordinates":[31.9, -4.8]}, a.location)
-```
-
-**Results**
-
-```json
- [{
- "id": "MyDesignatedLocation",
- "location": {
- "type":"Polygon",
- "coordinates": [[[31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]]]
- }
- }]
-```
-
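-`ST_INTERSECTS` follows the same pattern. As a sketch (reusing the `Areas` container from the previous example), the following query returns areas whose stored geometry intersects the specified Polygon:
-
-```sql
-    SELECT a.id
-    FROM Areas a
-    WHERE ST_INTERSECTS(a.location, {
-        "type":"Polygon",
-        "coordinates": [[[31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]]]
-    })
-```
-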
-`ST_ISVALID` and `ST_ISVALIDDETAILED` can be used to check if a spatial object is valid. For example, the following query checks the validity of a point with an out of range latitude value (-132.8). `ST_ISVALID` returns just a Boolean value, and `ST_ISVALIDDETAILED` returns the Boolean and a string containing the reason why it is considered invalid.
-
-**Query**
-
-```sql
- SELECT ST_ISVALID({ "type": "Point", "coordinates": [31.9, -132.8] })
-```
-
-**Results**
-
-```json
- [{
- "$1": false
- }]
-```
-
-These functions can also be used to validate Polygons. For example, here we use `ST_ISVALIDDETAILED` to validate a Polygon that is not closed.
-
-**Query**
-
-```sql
- SELECT ST_ISVALIDDETAILED({ "type": "Polygon", "coordinates": [[
- [ 31.8, -5 ], [ 31.8, -4.7 ], [ 32, -4.7 ], [ 32, -5 ]
- ]]})
-```
-
-**Results**
-
-```json
- [{
- "$1": {
- "valid": false,
- "reason": "The Polygon input is not valid because the start and end points of the ring number 1 are not the same. Each ring of a Polygon must have the same start and end points."
- }
- }]
-```
-
-## LINQ querying in the .NET SDK
-
-The SQL .NET SDK also provides stub methods `Distance()` and `Within()` for use within LINQ expressions. The SQL LINQ provider translates these method calls to the equivalent SQL built-in function calls (ST_DISTANCE and ST_WITHIN, respectively).
-
-Here's an example of a LINQ query that finds all documents in the Azure Cosmos container whose `location` value is within a radius of 30 km of the specified point.
-
-**LINQ query for Distance**
-
-```csharp
- foreach (UserProfile user in container.GetItemLinqQueryable<UserProfile>(allowSynchronousQueryExecution: true)
- .Where(u => u.ProfileType == "Public" && u.Location.Distance(new Point(32.33, -4.66)) < 30000))
- {
- Console.WriteLine("\t" + user);
- }
-```
-
-Similarly, here's a query for finding all the documents whose `location` is within the specified box/Polygon.
-
-**LINQ query for Within**
-
-```csharp
- Polygon rectangularArea = new Polygon(
- new[]
- {
- new LinearRing(new [] {
- new Position(31.8, -5),
- new Position(32, -5),
- new Position(32, -4.7),
- new Position(31.8, -4.7),
- new Position(31.8, -5)
- })
- });
-
- foreach (UserProfile user in container.GetItemLinqQueryable<UserProfile>(allowSynchronousQueryExecution: true)
- .Where(a => a.Location.Within(rectangularArea)))
- {
- Console.WriteLine("\t" + user);
- }
-```
-
-## Next steps
-
-Now that you have learned how to get started with geospatial support in Azure Cosmos DB, next you can:
-
-* Learn more about [Azure Cosmos DB Query](sql-query-getting-started.md)
-* Learn more about [Geospatial and GeoJSON location data in Azure Cosmos DB](sql-query-geospatial-intro.md)
-* Learn more about [Index spatial data with Azure Cosmos DB](sql-query-geospatial-index.md)
cosmos-db Sql Query Getcurrentdatetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-getcurrentdatetime.md
- Title: GetCurrentDateTime in Azure Cosmos DB query language
-description: Learn about SQL system function GetCurrentDateTime in Azure Cosmos DB.
---- Previously updated : 02/03/2021----
-# GetCurrentDateTime (Azure Cosmos DB)
-
-Returns the current UTC (Coordinated Universal Time) date and time as an ISO 8601 string.
-
-## Syntax
-
-```sql
-GetCurrentDateTime ()
-```
-
-## Return types
-
-Returns the current UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
- For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
-
-## Remarks
-
-GetCurrentDateTime() is a nondeterministic function. The result returned is UTC. Precision is 7 digits, with an accuracy of 100 nanoseconds.
-
-> [!NOTE]
-> This system function will not utilize the index. If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
-
-## Examples
-
-The following example shows how to get the current UTC Date Time using the GetCurrentDateTime() built-in function.
-
-```sql
-SELECT GetCurrentDateTime() AS currentUtcDateTime
-```
-
- Here is an example result set.
-
-```json
-[{
- "currentUtcDateTime": "2019-05-03T20:36:17.1234567Z"
-}]
-```
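-
-As noted in the remarks, to keep a time-based filter able to use the index, compute the current time in your application before running the query and pass it in as a constant string. A minimal sketch, assuming items carry an ISO 8601 `createdAt` property (a hypothetical property name):
-
-```sql
-SELECT c.id
-FROM c
-WHERE c.createdAt >= "2019-05-03T00:00:00.0000000Z"
-```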
-
-## Next steps
--- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Getcurrentticks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-getcurrentticks.md
- Title: GetCurrentTicks in Azure Cosmos DB query language
-description: Learn about SQL system function GetCurrentTicks in Azure Cosmos DB.
---- Previously updated : 02/03/2021----
-# GetCurrentTicks (Azure Cosmos DB)
-
-Returns the number of 100-nanosecond ticks that have elapsed since 00:00:00 Thursday, 1 January 1970.
-
-## Syntax
-
-```sql
-GetCurrentTicks ()
-```
-
-## Return types
-
-Returns a signed numeric value: the number of 100-nanosecond ticks that have elapsed since the Unix epoch, that is, since 00:00:00 Thursday, 1 January 1970.
-
-## Remarks
-
-GetCurrentTicks() is a nondeterministic function. The result returned is UTC (Coordinated Universal Time).
-
-> [!NOTE]
-> This system function will not utilize the index. If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
-
-## Examples
-
-Here's an example that returns the current time, measured in ticks:
-
-```sql
-SELECT GetCurrentTicks() AS CurrentTimeInTicks
-```
-
-```json
-[
- {
- "CurrentTimeInTicks": 15973607943002652
- }
-]
-```
-
-## Next steps
--- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Getcurrenttimestamp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-getcurrenttimestamp.md
- Title: GetCurrentTimestamp in Azure Cosmos DB query language
-description: Learn about SQL system function GetCurrentTimestamp in Azure Cosmos DB.
---- Previously updated : 02/03/2021----
-# GetCurrentTimestamp (Azure Cosmos DB)
-
- Returns the number of milliseconds that have elapsed since 00:00:00 Thursday, 1 January 1970.
-
-## Syntax
-
-```sql
-GetCurrentTimestamp ()
-```
-
-## Return types
-
-Returns a signed numeric value: the number of milliseconds that have elapsed since the Unix epoch, that is, since 00:00:00 Thursday, 1 January 1970.
-
-## Remarks
-
-GetCurrentTimestamp() is a nondeterministic function. The result returned is UTC (Coordinated Universal Time).
-
-> [!NOTE]
-> This system function will not utilize the index. If you need to compare values to the current time, obtain the current time before query execution and use that constant string value in the `WHERE` clause.
-
-## Examples
-
- The following example shows how to get the current timestamp using the GetCurrentTimestamp() built-in function.
-
-```sql
-SELECT GetCurrentTimestamp() AS currentUtcTimestamp
-```
-
- Here is an example result set.
-
-```json
-[{
- "currentUtcTimestamp": 1556916469065
-}]
-```
-
-## Next steps
--- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-getting-started.md
- Title: Getting started with SQL queries in Azure Cosmos DB
-description: Learn how to use SQL queries to query data from Azure Cosmos DB. You can upload sample data to a container in Azure Cosmos DB and query it.
---- Previously updated : 08/26/2021----
-# Getting started with SQL queries
-
-In Azure Cosmos DB SQL API accounts, there are two ways to read data:
-
-**Point reads** - You can do a key/value lookup on a single *item ID* and partition key. The *item ID* and partition key combination is the key and the item itself is the value. For a 1 KB document, point reads typically cost 1 [request unit](../request-units.md) with a latency under 10 ms. Point reads return a single whole item, not a partial item or a specific field.
-
-Here are some examples of how to do **Point reads** with each SDK:
--- [.NET SDK](/dotnet/api/microsoft.azure.cosmos.container.readitemasync)-- [Java SDK](/java/api/com.azure.cosmos.cosmoscontainer.readitem#com-azure-cosmos-cosmoscontainer-(t)readitem(java-lang-string-com-azure-cosmos-models-partitionkey-com-azure-cosmos-models-cosmositemrequestoptions-java-lang-class(t)))-- [Node.js SDK](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-read)-- [Python SDK](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-read-item)-
-**SQL queries** - You can query data by writing queries using the Structured Query Language (SQL) as a JSON query language. Queries always cost at least 2.3 request units and, in general, will have a higher and more variable latency than point reads. Queries can return many items.
-
-Most read-heavy workloads on Azure Cosmos DB use a combination of both point reads and SQL queries. If you just need to read a single item, point reads are cheaper and faster than queries. Point reads don't need to use the query engine to access data and can read the data directly. Of course, it's not possible for all workloads to exclusively read data using point reads, so support of SQL as a query language and [schema-agnostic indexing](../index-overview.md) provide a more flexible way to access your data.
-
-Here are some examples of how to do **SQL queries** with each SDK:
--- [.NET SDK](../sql-api-dotnet-v3sdk-samples.md#query-examples)-- [Java SDK](../sql-api-java-sdk-samples.md#query-examples)-- [Node.js SDK](../sql-api-nodejs-samples.md#item-examples)-- [Python SDK](../sql-api-python-samples.md#item-examples)-
-The remainder of this doc shows how to get started writing SQL queries in Azure Cosmos DB. SQL queries can be run through either the SDK or Azure portal.
-
-## Upload sample data
-
-In your SQL API Cosmos DB account, open the [Data Explorer](../data-explorer.md) to create a container called `Families`. After the container is created, use the data structures browser to find and open it. In your `Families` container, you will see the `Items` option right below the name of the container. Open this option and you'll see a button, in the menu bar in the center of the screen, to create a 'New Item'. You will use this feature to create the JSON items below.
-
-### Create JSON items
-
-The following 2 JSON items are documents about the Andersen and Wakefield families. They include parents, children and their pets, address, and registration information.
-
-The first item has strings, numbers, Booleans, arrays, and nested properties:
-
-```json
-{
- "id": "AndersenFamily",
- "lastName": "Andersen",
- "parents": [
- { "firstName": "Thomas" },
- { "firstName": "Mary Kay"}
- ],
- "children": [
- {
- "firstName": "Henriette Thaulow",
- "gender": "female",
- "grade": 5,
- "pets": [{ "givenName": "Fluffy" }]
- }
- ],
- "address": { "state": "WA", "county": "King", "city": "Seattle" },
- "creationDate": 1431620472,
- "isRegistered": true
-}
-```
-
-The second item uses `givenName` and `familyName` instead of `firstName` and `lastName`:
-
-```json
-{
- "id": "WakefieldFamily",
- "parents": [
- { "familyName": "Wakefield", "givenName": "Robin" },
- { "familyName": "Miller", "givenName": "Ben" }
- ],
- "children": [
- {
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female",
- "grade": 1,
- "pets": [
- { "givenName": "Goofy" },
- { "givenName": "Shadow" }
- ]
- },
- {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8 }
- ],
- "address": { "state": "NY", "county": "Manhattan", "city": "NY" },
- "creationDate": 1431620462,
- "isRegistered": false
-}
-```
-
-### Query the JSON items
-
-Try a few queries against the JSON data to understand some of the key aspects of Azure Cosmos DB's SQL query language.
-
-The following query returns the items where the `id` field matches `AndersenFamily`. Since it's a `SELECT *` query, the output of the query is the complete JSON item. For more information about SELECT syntax, see [SELECT statement](sql-query-select.md).
-
-```sql
- SELECT *
- FROM Families f
- WHERE f.id = "AndersenFamily"
-```
-
-The query results are:
-
-```json
- [{
- "id": "AndersenFamily",
- "lastName": "Andersen",
- "parents": [
- { "firstName": "Thomas" },
- { "firstName": "Mary Kay"}
- ],
- "children": [
- {
- "firstName": "Henriette Thaulow", "gender": "female", "grade": 5,
- "pets": [{ "givenName": "Fluffy" }]
- }
- ],
- "address": { "state": "WA", "county": "King", "city": "Seattle" },
- "creationDate": 1431620472,
- "isRegistered": true
- }]
-```
-
-The following query reformats the JSON output into a different shape. The query projects a new JSON `Family` object with two selected fields, `Name` and `City`, when the address city is the same as the state. "NY, NY" matches this case.
-
-```sql
- SELECT {"Name":f.id, "City":f.address.city} AS Family
- FROM Families f
- WHERE f.address.city = f.address.state
-```
-
-The query results are:
-
-```json
- [{
- "Family": {
- "Name": "WakefieldFamily",
- "City": "NY"
- }
- }]
-```
-
-The following query returns all the given names of children in the family whose `id` matches `WakefieldFamily`, ordered by city.
-
-```sql
- SELECT c.givenName
- FROM Families f
- JOIN c IN f.children
- WHERE f.id = 'WakefieldFamily'
- ORDER BY f.address.city ASC
-```
-
-The results are:
-
-```json
- [
- { "givenName": "Jesse" },
- { "givenName": "Lisa"}
- ]
-```
-
-## Remarks
-
-The preceding examples show several aspects of the Cosmos DB query language:
-
-* Since the SQL API works on JSON values, it deals with tree-shaped entities instead of rows and columns. You can refer to tree nodes at any arbitrary depth, like `Node1.Node2.Node3…..Nodem`, similar to the two-part reference of `<table>.<column>` in ANSI SQL (see the sketch after this list).
-
-* Because the query language works with schemaless data, the type system must be bound dynamically. The same expression could yield different types on different items. The result of a query is a valid JSON value, but isn't guaranteed to be of a fixed schema.
-
-* Azure Cosmos DB supports strict JSON items only. The type system and expressions are restricted to deal only with JSON types. For more information, see the [JSON specification](https://www.json.org/).
-
-* A Cosmos container is a schema-free collection of JSON items. The relations within and across container items are implicitly captured by containment, not by primary key and foreign key relations. This feature is important for the intra-item joins that are described in [Joins in Azure Cosmos DB](sql-query-join.md).
-
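-As a small sketch of referring to nodes at arbitrary depth, the following query (against the sample items above) reaches three levels down to the first child's first pet:
-
-```sql
-    SELECT f.children[0].pets[0].givenName AS firstPetName
-    FROM Families f
-```
-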
-## Next steps
--- [Introduction to Azure Cosmos DB](../introduction.md)-- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)-- [SELECT clause](sql-query-select.md)-- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md)
cosmos-db Sql Query Group By https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-group-by.md
- Title: GROUP BY clause in Azure Cosmos DB
-description: Learn about the GROUP BY clause for Azure Cosmos DB.
---- Previously updated : 05/12/2022----
-# GROUP BY clause in Azure Cosmos DB
-
-The GROUP BY clause divides the query's results according to the values of one or more specified properties.
-
-> [!NOTE]
-> The GROUP BY clause is not supported in the Azure Cosmos DB Python SDK.
-
-## Syntax
-
-```sql
-<group_by_clause> ::= GROUP BY <scalar_expression_list>
-
-<scalar_expression_list> ::=
- <scalar_expression>
- | <scalar_expression_list>, <scalar_expression>
-```
-
-## Arguments
--- `<scalar_expression_list>`-
- Specifies the expressions that will be used to divide query results.
--- `<scalar_expression>`
-
- Any scalar expression is allowed except for scalar subqueries and scalar aggregates. Each scalar expression must contain at least one property reference. There is no limit to the number of individual expressions or the cardinality of each expression.
-
-## Remarks
-
- When a query uses a GROUP BY clause, the SELECT clause can only contain the subset of properties and system functions included in the GROUP BY clause. One exception is [aggregate functions](sql-query-aggregate-functions.md), which can appear in the SELECT clause without being included in the GROUP BY clause. You can also always include literal values in the SELECT clause.
-
- The GROUP BY clause must be after the SELECT, FROM, and WHERE clauses and before the OFFSET LIMIT clause. You cannot use GROUP BY with an ORDER BY clause.
-
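-As a sketch of the required clause ordering, the following query (written against the nutrition data set used in the examples below) places GROUP BY after WHERE and before OFFSET LIMIT:
-
-```sql
-SELECT f.foodGroup, COUNT(1) AS itemCount
-FROM Food f
-WHERE f.version = 1
-GROUP BY f.foodGroup
-OFFSET 0 LIMIT 5
-```
-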
- The GROUP BY clause does not allow any of the following:
-
-- Aliasing properties or aliasing system functions (aliasing is still allowed within the SELECT clause)-- Subqueries-- Aggregate system functions (these are only allowed in the SELECT clause)-
-Queries with an aggregate system function and a subquery with `GROUP BY` are not supported. For example, the following query is not supported:
-
-```sql
-SELECT COUNT(UniqueLastNames)
-FROM (
-SELECT AVG(f.age)
-FROM f
-GROUP BY f.lastName
-) AS UniqueLastNames
-```
-
-Additionally, cross-partition `GROUP BY` queries can have a maximum of 21 [aggregate system functions](sql-query-aggregate-functions.md).
-
-## Examples
-
-These examples use a sample [nutrition data set](https://github.com/AzureCosmosDB/labs/blob/master/dotnet/setup/NutritionData.json).
-
-Here's a query which returns the total count of items in each foodGroup:
-
-```sql
-SELECT TOP 4 COUNT(1) AS foodGroupCount, f.foodGroup
-FROM Food f
-GROUP BY f.foodGroup
-```
-
-Some results are (TOP keyword is used to limit results):
-
-```json
-[
- {
- "foodGroupCount": 183,
- "foodGroup": "Cereal Grains and Pasta"
- },
- {
- "foodGroupCount": 133,
- "foodGroup": "Nut and Seed Products"
- },
- {
- "foodGroupCount": 113,
- "foodGroup": "Meals, Entrees, and Side Dishes"
- },
- {
- "foodGroupCount": 64,
- "foodGroup": "Spices and Herbs"
- }
-]
-```
-
-This query has two expressions used to divide results:
-
-```sql
-SELECT TOP 4 COUNT(1) AS foodGroupCount, f.foodGroup, f.version
-FROM Food f
-GROUP BY f.foodGroup, f.version
-```
-
-Some results are:
-
-```json
-[
- {
- "foodGroupCount": 183,
- "foodGroup": "Cereal Grains and Pasta",
- "version": 1
- },
- {
- "foodGroupCount": 133,
- "foodGroup": "Nut and Seed Products",
- "version": 1
- },
- {
- "foodGroupCount": 113,
- "foodGroup": "Meals, Entrees, and Side Dishes",
- "version": 1
- },
- {
- "foodGroupCount": 64,
- "foodGroup": "Spices and Herbs",
- "version": 1
- }
-]
-```
-
-This query has a system function in the GROUP BY clause:
-
-```sql
-SELECT TOP 4 COUNT(1) AS foodGroupCount, UPPER(f.foodGroup) AS upperFoodGroup
-FROM Food f
-GROUP BY UPPER(f.foodGroup)
-```
-
-Some results are:
-
-```json
-[
- {
- "foodGroupCount": 183,
- "upperFoodGroup": "CEREAL GRAINS AND PASTA"
- },
- {
- "foodGroupCount": 133,
- "upperFoodGroup": "NUT AND SEED PRODUCTS"
- },
- {
- "foodGroupCount": 113,
- "upperFoodGroup": "MEALS, ENTREES, AND SIDE DISHES"
- },
- {
- "foodGroupCount": 64,
- "upperFoodGroup": "SPICES AND HERBS"
- }
-]
-```
-
-This query uses both keywords and system functions in the item property expression:
-
-```sql
-SELECT COUNT(1) AS foodGroupCount, ARRAY_CONTAINS(f.tags, {name: 'orange'}) AS containsOrangeTag, f.version BETWEEN 0 AND 2 AS correctVersion
-FROM Food f
-GROUP BY ARRAY_CONTAINS(f.tags, {name: 'orange'}), f.version BETWEEN 0 AND 2
-```
-
-The results are:
-
-```json
-[
- {
- "foodGroupCount": 10,
- "containsOrangeTag": true,
- "correctVersion": true
- },
- {
- "foodGroupCount": 8608,
- "containsOrangeTag": false,
- "correctVersion": true
- }
-]
-```
-
-## Next steps
--- [Getting started](sql-query-getting-started.md)-- [SELECT clause](sql-query-select.md)-- [Aggregate functions](sql-query-aggregate-functions.md)
cosmos-db Sql Query Index Of https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-index-of.md
- Title: INDEX_OF in Azure Cosmos DB query language
-description: Learn about SQL system function INDEX_OF in Azure Cosmos DB.
---- Previously updated : 08/30/2022----
-# INDEX_OF (Azure Cosmos DB)
--
-Returns the starting position of the first occurrence of the second string expression within the first specified string expression, or `-1` if the string isn't found.
-
-## Syntax
-
-```sql
-INDEX_OF(<str_expr1>, <str_expr2> [, <numeric_expr>])
-```
-
-## Arguments
-
-*str_expr1*
- Is the string expression to be searched.
-
-*str_expr2*
- Is the string expression to search for.
-
-*numeric_expr*
- Optional numeric expression that sets the position where the search starts. The first position in *str_expr1* is 0.
-
-## Return types
-
-Returns a numeric expression.
-
-## Examples
-
-The following example returns the index of various substrings inside "abc".
-
-```sql
-SELECT
- INDEX_OF("abc", "ab") AS index_of_prefix,
- INDEX_OF("abc", "b") AS index_of_middle,
- INDEX_OF("abc", "c") AS index_of_last,
- INDEX_OF("abc", "d") AS index_of_missing
-```
-
-Here's the result set.
-
-```json
-[
- {
- "index_of_prefix": 0,
- "index_of_middle": 1,
- "index_of_last": 2,
- "index_of_missing": -1
- }
-]
-```
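-
-As a sketch of the optional start position (the third argument), the following query begins each search at position 1, so any match at position 0 is skipped:
-
-```sql
-SELECT
-    INDEX_OF("abcabc", "a", 1) AS indexOfSecondA,
-    INDEX_OF("abcabc", "b", 1) AS indexOfFirstB
-```
-
-With the documented 0-based positions, `indexOfSecondA` would be `3` and `indexOfFirstB` would be `1`.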
-
-## Next steps
--- [String functions Azure Cosmos DB](sql-query-string-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-is-array.md
- Title: IS_ARRAY in Azure Cosmos DB query language
-description: Learn about SQL system function IS_ARRAY in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# IS_ARRAY (Azure Cosmos DB)
-
- Returns a Boolean value indicating if the type of the specified expression is an array.
-
-## Syntax
-
-```sql
-IS_ARRAY(<expr>)
-```
-
-## Arguments
-
-*expr*
- Is any expression.
-
-## Return types
-
- Returns a Boolean expression.
-
-## Examples
-
- The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_ARRAY` function.
-
-```sql
-SELECT
- IS_ARRAY(true) AS isArray1,
- IS_ARRAY(1) AS isArray2,
- IS_ARRAY("value") AS isArray3,
- IS_ARRAY(null) AS isArray4,
- IS_ARRAY({prop: "value"}) AS isArray5,
- IS_ARRAY([1, 2, 3]) AS isArray6,
- IS_ARRAY({prop: "value"}.prop2) AS isArray7
-```
-
- Here is the result set.
-
-```json
-[{"isArray1":false,"isArray2":false,"isArray3":false,"isArray4":false,"isArray5":false,"isArray6":true,"isArray7":false}]
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
-
-## Next steps
--- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is Bool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-is-bool.md
- Title: IS_BOOL in Azure Cosmos DB query language
-description: Learn about SQL system function IS_BOOL in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# IS_BOOL (Azure Cosmos DB)
-
- Returns a Boolean value indicating if the type of the specified expression is a Boolean.
-
-## Syntax
-
-```sql
-IS_BOOL(<expr>)
-```
-
-## Arguments
-
-*expr*
- Is any expression.
-
-## Return types
-
- Returns a Boolean expression.
-
-## Examples
-
- The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_BOOL` function.
-
-```sql
-SELECT
- IS_BOOL(true) AS isBool1,
- IS_BOOL(1) AS isBool2,
- IS_BOOL("value") AS isBool3,
- IS_BOOL(null) AS isBool4,
- IS_BOOL({prop: "value"}) AS isBool5,
- IS_BOOL([1, 2, 3]) AS isBool6,
- IS_BOOL({prop: "value"}.prop2) AS isBool7
-```
-
- Here is the result set.
-
-```json
-[{"isBool1":true,"isBool2":false,"isBool3":false,"isBool4":false,"isBool5":false,"isBool6":false,"isBool7":false}]
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
-
-## Next steps
--- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is Defined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-is-defined.md
- Title: IS_DEFINED in Azure Cosmos DB query language
-description: Learn about SQL system function IS_DEFINED in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# IS_DEFINED (Azure Cosmos DB)
-
- Returns a Boolean indicating if the property has been assigned a value.
-
-## Syntax
-
-```sql
-IS_DEFINED(<expr>)
-```
-
-## Arguments
-
-*expr*
- Is any expression.
-
-## Return types
-
- Returns a Boolean expression.
-
-## Examples
-
- The following example checks for the presence of a property within the specified JSON document. The first returns true since "a" is present, but the second returns false since "b" is absent.
-
-```sql
-SELECT IS_DEFINED({ "a" : 5 }.a) AS isDefined1, IS_DEFINED({ "a" : 5 }.b) AS isDefined2
-```
-
- Here is the result set.
-
-```json
-[{"isDefined1":true,"isDefined2":false}]
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
-
-## Next steps
--- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is Null https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-is-null.md
- Title: IS_NULL in Azure Cosmos DB query language
-description: Learn about SQL system function IS_NULL in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# IS_NULL (Azure Cosmos DB)
-
- Returns a Boolean value indicating if the type of the specified expression is null.
-
-## Syntax
-
-```sql
-IS_NULL(<expr>)
-```
-
-## Arguments
-
-*expr*
- Is any expression.
-
-## Return types
-
- Returns a Boolean expression.
-
-## Examples
-
- The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_NULL` function.
-
-```sql
-SELECT
- IS_NULL(true) AS isNull1,
- IS_NULL(1) AS isNull2,
- IS_NULL("value") AS isNull3,
- IS_NULL(null) AS isNull4,
- IS_NULL({prop: "value"}) AS isNull5,
- IS_NULL([1, 2, 3]) AS isNull6,
- IS_NULL({prop: "value"}.prop2) AS isNull7
-```
-
- Here is the result set.
-
-```json
-[{"isNull1":false,"isNull2":false,"isNull3":false,"isNull4":true,"isNull5":false,"isNull6":false,"isNull7":false}]
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
-
-## Next steps
--- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-is-number.md
- Title: IS_NUMBER in Azure Cosmos DB query language
-description: Learn about SQL system function IS_NUMBER in Azure Cosmos DB.
---- Previously updated : 09/13/2019----
-# IS_NUMBER (Azure Cosmos DB)
-
-Returns a Boolean value indicating if the type of the specified expression is a number.
-
-## Syntax
-
-```sql
-IS_NUMBER(<expr>)
-```
-
-## Arguments
-
-*expr*
- Is any expression.
-
-## Return types
-
-Returns a Boolean expression.
-
-## Examples
-
-The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_NUMBER` function.
-
-```sql
-SELECT
- IS_NUMBER(true) AS isBooleanANumber,
- IS_NUMBER(1) AS isNumberANumber,
- IS_NUMBER("value") AS isTextStringANumber,
- IS_NUMBER("1") AS isNumberStringANumber,
- IS_NUMBER(null) AS isNullANumber,
- IS_NUMBER({prop: "value"}) AS isObjectANumber,
- IS_NUMBER([1, 2, 3]) AS isArrayANumber,
- IS_NUMBER({stringProp: "value"}.stringProp) AS isObjectStringPropertyANumber,
- IS_NUMBER({numberProp: 1}.numberProp) AS isObjectNumberPropertyANumber
-```
-
-Here's the result set.
-
-```json
-[
- {
- "isBooleanANumber": false,
- "isNumberANumber": true,
- "isTextStringANumber": false,
- "isNumberStringANumber": false,
- "isNullANumber": false,
- "isObjectANumber": false,
- "isArrayANumber": false,
- "isObjectStringPropertyANumber": false,
- "isObjectNumberPropertyANumber": true
- }
-]
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
-
-## Next steps
--- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-is-object.md
- Title: IS_OBJECT in Azure Cosmos DB query language
-description: Learn about SQL system function IS_OBJECT in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# IS_OBJECT (Azure Cosmos DB)
-
- Returns a Boolean value indicating if the type of the specified expression is a JSON object.
-
-## Syntax
-
-```sql
-IS_OBJECT(<expr>)
-```
-
-## Arguments
-
-*expr*
- Is any expression.
-
-## Return types
-
- Returns a Boolean expression.
-
-## Examples
-
- The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_OBJECT` function.
-
-```sql
-SELECT
- IS_OBJECT(true) AS isObj1,
- IS_OBJECT(1) AS isObj2,
- IS_OBJECT("value") AS isObj3,
- IS_OBJECT(null) AS isObj4,
- IS_OBJECT({prop: "value"}) AS isObj5,
- IS_OBJECT([1, 2, 3]) AS isObj6,
- IS_OBJECT({prop: "value"}.prop2) AS isObj7
-```
-
- Here is the result set.
-
-```json
-[{"isObj1":false,"isObj2":false,"isObj3":false,"isObj4":false,"isObj5":true,"isObj6":false,"isObj7":false}]
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
-
-## Next steps
--- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is Primitive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-is-primitive.md
- Title: IS_PRIMITIVE in Azure Cosmos DB query language
-description: Learn about SQL system function IS_PRIMITIVE in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# IS_PRIMITIVE (Azure Cosmos DB)
-
- Returns a Boolean value indicating if the type of the specified expression is a primitive (string, Boolean, numeric, or null).
-
-## Syntax
-
-```sql
-IS_PRIMITIVE(<expr>)
-```
-
-## Arguments
-
-*expr*
- Is any expression.
-
-## Return types
-
- Returns a Boolean expression.
-
-## Examples
-
- The following example checks objects of JSON Boolean, number, string, null, object, array and undefined types using the `IS_PRIMITIVE` function.
-
-```sql
-SELECT
- IS_PRIMITIVE(true) AS isPrim1,
- IS_PRIMITIVE(1) AS isPrim2,
- IS_PRIMITIVE("value") AS isPrim3,
- IS_PRIMITIVE(null) AS isPrim4,
- IS_PRIMITIVE({prop: "value"}) AS isPrim5,
- IS_PRIMITIVE([1, 2, 3]) AS isPrim6,
- IS_PRIMITIVE({prop: "value"}.prop2) AS isPrim7
-```
-
- Here is the result set.
-
-```json
-[{"isPrim1": true, "isPrim2": true, "isPrim3": true, "isPrim4": true, "isPrim5": false, "isPrim6": false, "isPrim7": false}]
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
-
-## Next steps
--- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Is String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-is-string.md
- Title: IS_STRING in Azure Cosmos DB query language
-description: Learn about SQL system function IS_STRING in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# IS_STRING (Azure Cosmos DB)
-
- Returns a Boolean value indicating if the type of the specified expression is a string.
-
-## Syntax
-
-```sql
-IS_STRING(<expr>)
-```
-
-## Arguments
-
-*expr*
- Is any expression.
-
-## Return types
-
- Returns a Boolean expression.
-
-## Examples
-
- The following example checks objects of JSON Boolean, number, string, null, object, array, and undefined types using the `IS_STRING` function.
-
-```sql
-SELECT
- IS_STRING(true) AS isStr1,
- IS_STRING(1) AS isStr2,
- IS_STRING("value") AS isStr3,
- IS_STRING(null) AS isStr4,
- IS_STRING({prop: "value"}) AS isStr5,
- IS_STRING([1, 2, 3]) AS isStr6,
- IS_STRING({prop: "value"}.prop2) AS isStr7
-```
-
- Here is the result set.
-
-```json
-[{"isStr1":false,"isStr2":false,"isStr3":true,"isStr4":false,"isStr5":false,"isStr6":false,"isStr7":false}]
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
-
-## Next steps
--- [Type checking functions Azure Cosmos DB](sql-query-type-checking-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-join.md
- Title: SQL JOIN queries for Azure Cosmos DB
-description: Learn how to JOIN multiple tables in Azure Cosmos DB to query the data
---- Previously updated : 08/27/2021----
-# Joins in Azure Cosmos DB
-
-In a relational database, joins across tables are the logical corollary to designing normalized schemas. In contrast, the SQL API uses the denormalized data model of schema-free items, which is the logical equivalent of a *self-join*.
-
-> [!NOTE]
-> In Azure Cosmos DB, joins are scoped to a single item. Cross-item and cross-container joins are not supported. In NoSQL databases like Azure Cosmos DB, good [data modeling](../modeling-data.md) can help avoid the need for cross-item and cross-container joins.
-
-Joins result in a complete cross product of the sets participating in the join. The result of an N-way join is a set of N-element tuples, where each value in the tuple is associated with the aliased set participating in the join and can be accessed by referencing that alias in other clauses.
-
-## Syntax
-
-The language supports the syntax `<from_source1> JOIN <from_source2> JOIN ... JOIN <from_sourceN>`. This query returns a set of tuples with `N` values. Each tuple has values produced by iterating all container aliases over their respective sets.
-
-Let's look at the following FROM clause: `<from_source1> JOIN <from_source2> JOIN ... JOIN <from_sourceN>`
-
- Let each source define `input_alias1, input_alias2, …, input_aliasN`. This FROM clause returns a set of N-tuples (tuples with N values), where each tuple's values are produced by iterating all container aliases over their respective sets.
-
-**Example 1** - 2 sources
-
-- Let `<from_source1>` be container-scoped and represent set {A, B, C}.
-
-- Let `<from_source2>` be document-scoped referencing input_alias1 and represent sets:
-
- {1, 2} for `input_alias1 = A,`
-
- {3} for `input_alias1 = B,`
-
- {4, 5} for `input_alias1 = C,`
-
-- The FROM clause `<from_source1> JOIN <from_source2>` will result in the following tuples:
-
- (`input_alias1, input_alias2`):
-
- `(A, 1), (A, 2), (B, 3), (C, 4), (C, 5)`
-
-**Example 2** - 3 sources
-
-- Let `<from_source1>` be container-scoped and represent set {A, B, C}.
-
-- Let `<from_source2>` be document-scoped referencing `input_alias1` and represent sets:
-
- {1, 2} for `input_alias1 = A,`
-
- {3} for `input_alias1 = B,`
-
- {4, 5} for `input_alias1 = C,`
-
-- Let `<from_source3>` be document-scoped referencing `input_alias2` and represent sets:
-
- {100, 200} for `input_alias2 = 1,`
-
- {300} for `input_alias2 = 3,`
-
-- The FROM clause `<from_source1> JOIN <from_source2> JOIN <from_source3>` will result in the following tuples:
-
- (input_alias1, input_alias2, input_alias3):
-
- (A, 1, 100), (A, 1, 200), (B, 3, 300)
-
- > [!NOTE]
- > Note the lack of tuples for the other values of `input_alias1` and `input_alias2`, for which `<from_source3>` did not return any values.
-
-**Example 3** - 3 sources
-
-- Let `<from_source1>` be container-scoped and represent set {A, B, C}.
-
-- Let `<from_source2>` be document-scoped referencing `input_alias1` and represent sets:
-
- {1, 2} for `input_alias1 = A,`
-
- {3} for `input_alias1 = B,`
-
- {4, 5} for `input_alias1 = C,`
-
-- Let `<from_source3>` be scoped to `input_alias1` and represent sets:
-
- {100, 200} for `input_alias2 = A,`
-
- {300} for `input_alias2 = C,`
-
-- The FROM clause `<from_source1> JOIN <from_source2> JOIN <from_source3>` will result in the following tuples:
-
- (`input_alias1, input_alias2, input_alias3`):
-
- (A, 1, 100), (A, 1, 200), (A, 2, 100), (A, 2, 200), (C, 4, 300) , (C, 5, 300)
-
- > [!NOTE]
- > This resulted in a cross product between `<from_source2>` and `<from_source3>` because both are scoped to the same `<from_source1>`: 4 (2x2) tuples having value A, 0 (1x0) tuples having value B, and 2 (2x1) tuples having value C.
-
-## Examples
-
-The following examples show how the JOIN clause works. Before you run these examples, upload the sample [family data](sql-query-getting-started.md#upload-sample-data). In the following example, the result is empty, since the cross product of each item from the source and an empty set is empty:
-
-```sql
- SELECT f.id
- FROM Families f
- JOIN f.NonExistent
-```
-
-The result is:
-
-```json
- [{
- }]
-```
-
-In the following example, the join is a cross product between two JSON objects, the item root `id` and the `children` subroot. The fact that `children` is an array doesn't affect the join, because it deals with a single root that is the `children` array. The result contains only two items, because the cross product of each item with the array yields exactly one item.
-
-```sql
- SELECT f.id
- FROM Families f
- JOIN f.children
-```
-
-The results are:
-
-```json
- [
- {
- "id": "AndersenFamily"
- },
- {
- "id": "WakefieldFamily"
- }
- ]
-```
-
-The following example shows a more conventional join:
-
-```sql
- SELECT f.id
- FROM Families f
- JOIN c IN f.children
-```
-
-The results are:
-
-```json
- [
- {
- "id": "AndersenFamily"
- },
- {
- "id": "WakefieldFamily"
- },
- {
- "id": "WakefieldFamily"
- }
- ]
-```
-
-The FROM source of the JOIN clause is an iterator. So, the flow in the preceding example is:
-
-1. Expand each child element `c` in the array.
-2. Apply a cross product with the root of the item `f` with each child element `c` that the first step flattened.
-3. Finally, project the root object `f` `id` property alone.
-
-The first item, `AndersenFamily`, contains only one `children` element, so the result set contains only a single object. The second item, `WakefieldFamily`, contains two `children`, so the cross product produces two objects, one for each `children` element. The root fields in both these items are the same, just as you would expect in a cross product.
-
-The preceding example returns just the `id` property for the result of the query. If we want to return the entire document (all the fields) for each child, we can alter the SELECT portion of the query:
-
-```sql
- SELECT VALUE f
- FROM Families f
- JOIN c IN f.children
- WHERE f.id = 'WakefieldFamily'
- ORDER BY f.address.city ASC
-```
-
-The real utility of the JOIN clause is to form tuples from the cross product in a shape that's otherwise difficult to project. The examples below build such tuples so that you can then filter on a condition the tuple as a whole must satisfy.
-
-```sql
- SELECT
- f.id AS familyName,
- c.givenName AS childGivenName,
- c.firstName AS childFirstName,
- p.givenName AS petName
- FROM Families f
- JOIN c IN f.children
- JOIN p IN c.pets
-```
-
-The results are:
-
-```json
- [
- {
- "familyName": "AndersenFamily",
- "childFirstName": "Henriette Thaulow",
- "petName": "Fluffy"
- },
- {
- "familyName": "WakefieldFamily",
- "childGivenName": "Jesse",
- "petName": "Goofy"
- },
- {
- "familyName": "WakefieldFamily",
- "childGivenName": "Jesse",
- "petName": "Shadow"
- }
- ]
-```
-
-> [!IMPORTANT]
-> This example uses multiple JOIN expressions in a single query. There is a maximum number of JOINs that can be used in a single query. For more information, see [SQL query limits](../concepts-limits.md#sql-query-limits).
-
-The following extension of the preceding example performs a double join. You could view the cross product as the following pseudo-code:
-
-```
- for-each(Family f in Families)
- {
- for-each(Child c in f.children)
- {
- for-each(Pet p in c.pets)
- {
- return (Tuple(f.id AS familyName,
- c.givenName AS childGivenName,
- c.firstName AS childFirstName,
- p.givenName AS petName));
- }
- }
- }
-```
-
-`AndersenFamily` has one child who has one pet, so the cross product yields one row (1\*1\*1) from this family. `WakefieldFamily` has two children, only one of whom has pets, but that child has two pets. The cross product for this family yields 1\*1\*2 = 2 rows.
-
-In the next example, there is an additional filter on `pet`, which excludes all the tuples where the pet name is not `Shadow`. You can build tuples from arrays, filter on any of the elements of the tuple, and project any combination of the elements.
-
-```sql
- SELECT
- f.id AS familyName,
- c.givenName AS childGivenName,
- c.firstName AS childFirstName,
- p.givenName AS petName
- FROM Families f
- JOIN c IN f.children
- JOIN p IN c.pets
- WHERE p.givenName = "Shadow"
-```
-
-The results are:
-
-```json
- [
- {
- "familyName": "WakefieldFamily",
- "childGivenName": "Jesse",
- "petName": "Shadow"
- }
- ]
-```
-
-## Subqueries instead of JOINs
-
-If your query has a JOIN and filters, you can rewrite part of the query as a [subquery](sql-query-subquery.md#optimize-join-expressions) to improve performance. In some cases, you may be able to use a subquery or [ARRAY_CONTAINS](sql-query-array-contains.md) to avoid the need for JOIN altogether and improve query performance.
-
-For example, consider the earlier query that projected the familyName, child's givenName, children's firstName, and pet's givenName. If this query just needed to filter on the pet's name and didn't need to return it, you could use `ARRAY_CONTAINS` or a [subquery](sql-query-subquery.md) to check for pets where `givenName = "Shadow"`.
-
-### Query rewritten with ARRAY_CONTAINS
-
-```sql
- SELECT
- f.id AS familyName,
- c.givenName AS childGivenName,
- c.firstName AS childFirstName
- FROM Families f
- JOIN c IN f.children
- WHERE ARRAY_CONTAINS(c.pets, {givenName: 'Shadow'})
-```
-
-### Query rewritten with subquery
-
-```sql
- SELECT
- f.id AS familyName,
- c.givenName AS childGivenName,
- c.firstName AS childFirstName
- FROM Families f
- JOIN c IN f.children
- WHERE EXISTS (
- SELECT VALUE n
- FROM n IN c.pets
- WHERE n.givenName = "Shadow"
- )
-```
--
-## Next steps
--- [Getting started](sql-query-getting-started.md)-- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmosdb-dotnet)-- [Subqueries](sql-query-subquery.md)
cosmos-db Sql Query Keywords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-keywords.md
- Title: SQL keywords for Azure Cosmos DB
-description: Learn about SQL keywords for Azure Cosmos DB.
---- Previously updated : 10/05/2021----
-# Keywords in Azure Cosmos DB
-
-This article details keywords which may be used in Azure Cosmos DB SQL queries.
-
-## BETWEEN
-
-You can use the `BETWEEN` keyword to express queries against ranges of string or numerical values. For example, the following query returns all items in which the first child's grade is 1-5, inclusive.
-
-```sql
- SELECT *
- FROM Families.children[0] c
- WHERE c.grade BETWEEN 1 AND 5
-```
-
-You can also use the `BETWEEN` keyword in the `SELECT` clause, as in the following example.
-
-```sql
- SELECT (c.grade BETWEEN 0 AND 10)
- FROM Families.children[0] c
-```
-
-In SQL API, unlike ANSI SQL, you can express range queries against properties of mixed types. For example, `grade` might be a number like `5` in some items and a string like `grade4` in others. In these cases, as in JavaScript, the comparison between the two different types results in `Undefined`, so the item is skipped.
-
-## DISTINCT
-
-The `DISTINCT` keyword eliminates duplicates in the query's projection.
-
-In this example, the query projects values for each last name:
-
-```sql
-SELECT DISTINCT VALUE f.lastName
-FROM Families f
-```
-
-The results are:
-
-```json
-[
- "Andersen"
-]
-```
-
-You can also project unique objects. In this case, the lastName field does not exist in one of the two documents, so the query returns an empty object.
-
-```sql
-SELECT DISTINCT f.lastName
-FROM Families f
-```
-
-The results are:
-
-```json
-[
- {
- "lastName": "Andersen"
- },
- {}
-]
-```
-
-`DISTINCT` can also be used in the projection within a subquery:
-
-```sql
-SELECT f.id, ARRAY(SELECT DISTINCT VALUE c.givenName FROM c IN f.children) as ChildNames
-FROM f
-```
-
-This query projects an array which contains each child's givenName with duplicates removed. This array is aliased as ChildNames and projected in the outer query.
-
-The results are:
-
-```json
-[
- {
- "id": "AndersenFamily",
- "ChildNames": []
- },
- {
- "id": "WakefieldFamily",
- "ChildNames": [
- "Jesse",
- "Lisa"
- ]
- }
-]
-```
-
-Queries with an aggregate system function and a subquery with `DISTINCT` are only supported in specific SDK versions. For example, queries with the following shape are only supported in the SDK versions listed below:
-
-```sql
-SELECT COUNT(1) FROM (SELECT DISTINCT f.lastName FROM f)
-```
-
-**Supported SDK versions:**
-
-|**SDK**|**Supported versions**|
-|-|-|
-|.NET SDK|3.18.0 or later|
-|Java SDK|4.19.0 or later|
-|Node.js SDK|Unsupported|
-|Python SDK|Unsupported|
-
-There are some additional restrictions on queries with an aggregate system function and a subquery with `DISTINCT`. The following queries are unsupported:
-
-|**Restriction**|**Example**|
-|-|-|
-|WHERE clause in the outer query|SELECT COUNT(1) FROM (SELECT DISTINCT VALUE c.lastName FROM c) AS lastName WHERE lastName = "Smith"|
-|ORDER BY clause in the outer query|SELECT VALUE COUNT(1) FROM (SELECT DISTINCT VALUE c.lastName FROM c) AS lastName ORDER BY lastName|
-|GROUP BY clause in the outer query|SELECT COUNT(1) as annualCount, d.year FROM (SELECT DISTINCT c.year, c.id FROM c) AS d GROUP BY d.year|
-|Nested subquery|SELECT COUNT(1) FROM (SELECT y FROM (SELECT VALUE StringToNumber(SUBSTRING(d.date, 0, 4)) FROM (SELECT DISTINCT c.date FROM c) d) AS y WHERE y > 2012)|
-|Multiple aggregations|SELECT COUNT(1) as AnnualCount, SUM(d.sales) as TotalSales FROM (SELECT DISTINCT c.year, c.sales, c.id FROM c) AS d|
-|COUNT() must have 1 as a parameter|SELECT COUNT(lastName) FROM (SELECT DISTINCT VALUE c.lastName FROM c) AS lastName|
-
-## LIKE
-
-Returns a Boolean value depending on whether a specific character string matches a specified pattern. A pattern can include regular characters and wildcard characters. You can write logically equivalent queries using either the `LIKE` keyword or the [RegexMatch](sql-query-regexmatch.md) system function. You'll observe the same index utilization regardless of which one you choose. Therefore, use `LIKE` if you prefer its syntax to regular expressions.
-
-> [!NOTE]
-> Because `LIKE` can utilize an index, you should [create a range index](./../index-policy.md) for properties you are comparing using `LIKE`.
-
-You can use the following wildcard characters with LIKE:
-
-| Wildcard character | Description | Example |
-| -- | -- | -- |
-| % | Any string of zero or more characters | WHERE c.description LIKE "%SO%PS%" |
-| _ (underscore) | Any single character | WHERE c.description LIKE "%SO_PS%" |
-| [ ] | Any single character within the specified range ([a-f]) or set ([abcdef]). | WHERE c.description LIKE "%SO[t-z]PS%" |
-| [^] | Any single character not within the specified range ([^a-f]) or set ([^abcdef]). | WHERE c.description LIKE "%SO[^abc]PS%" |
--
-### Using LIKE with the % wildcard character
-
-The `%` character matches any string of zero or more characters. For example, by placing a `%` at the beginning and end of the pattern, the following query returns all items with a description that contains `fruit`:
-
-```sql
-SELECT *
-FROM c
-WHERE c.description LIKE "%fruit%"
-```
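-
-For comparison, a sketch of a logically equivalent query written with the [RegexMatch](sql-query-regexmatch.md) system function instead of `LIKE`:
-
-```sql
-SELECT *
-FROM c
-WHERE RegexMatch(c.description, "fruit")
-```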
-
-If you only used a `%` character at the end of the pattern, you'd only return items with a description that started with `fruit`:
-
-```sql
-SELECT *
-FROM c
-WHERE c.description LIKE "fruit%"
-```
--
-### Using NOT LIKE
-
-The below example returns all items with a description that does not contain `fruit`:
-
-```sql
-SELECT *
-FROM c
-WHERE c.description NOT LIKE "%fruit%"
-```
-
-### Using the escape clause
-
-You can search for patterns that include one or more wildcard characters using the ESCAPE clause. For example, if you wanted to search for descriptions that contained the string `20-30%`, you wouldn't want to interpret the `%` as a wildcard character.
-
-```sql
-SELECT *
-FROM c
-WHERE c.description LIKE '%20-30!%%' ESCAPE '!'
-```
-
-### Using wildcard characters as literals
-
-You can enclose wildcard characters in brackets to treat them as literal characters. When you enclose a wildcard character in brackets, you remove any special attributes. Here are some examples:
-
-| Pattern | Meaning |
-| -- | - |
-| LIKE "20-30[%]" | 20-30% |
-| LIKE "[_]n" | _n |
-| LIKE "[ [ ]" | [ |
-| LIKE "]" | ] |
-
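-As a sketch, the first pattern from the table can be combined with `%` wildcards to match descriptions that contain the literal string `20-30%` (the `description` values themselves are assumed):
-
-```sql
-SELECT *
-FROM c
-WHERE c.description LIKE "%20-30[%]%"
-```
-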
-## IN
-
-Use the IN keyword to check whether a specified value matches any value in a list. For example, the following query returns all family items where the `id` is `WakefieldFamily` or `AndersenFamily`.
-
-```sql
- SELECT *
- FROM Families
- WHERE Families.id IN ('AndersenFamily', 'WakefieldFamily')
-```
-
-The following example returns all items where the state is any of the specified values:
-
-```sql
- SELECT *
- FROM Families
- WHERE Families.address.state IN ("NY", "WA", "CA", "PA", "OH", "OR", "MI", "WI", "MN", "FL")
-```
-
-The SQL API provides support for [iterating over JSON arrays](sql-query-object-array.md#Iteration), with a new construct added via the in keyword in the FROM source.
-
-If you include your partition key in the `IN` filter, your query will automatically filter to only the relevant partitions.
-
-## TOP
-
-The TOP keyword returns the first `N` query results in an undefined order. As a best practice, use TOP with the `ORDER BY` clause to limit results to the first `N` ordered values. Combining these two clauses is the only way to predictably indicate which rows TOP affects.
-
-You can use TOP with a constant value, as in the following example, or with a variable value using parameterized queries.
-
-```sql
- SELECT TOP 1 *
- FROM Families f
-```
-
-The results are:
-
-```json
- [{
- "id": "AndersenFamily",
- "lastName": "Andersen",
- "parents": [
- { "firstName": "Thomas" },
- { "firstName": "Mary Kay"}
- ],
- "children": [
- {
- "firstName": "Henriette Thaulow", "gender": "female", "grade": 5,
- "pets": [{ "givenName": "Fluffy" }]
- }
- ],
- "address": { "state": "WA", "county": "King", "city": "Seattle" },
- "creationDate": 1431620472,
- "isRegistered": true
- }]
-```
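-
-As a sketch of the parameterized form, the `TOP` value can also be supplied as a query parameter; the parameter name `@count` below is only an example, and its value is provided through the SDK when the query runs:
-
-```sql
-    SELECT TOP @count *
-    FROM Families f
-```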
-
-## Next steps
--- [Getting started](sql-query-getting-started.md)-- [Joins](sql-query-join.md)-- [Subqueries](sql-query-subquery.md)
cosmos-db Sql Query Left https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-left.md
- Title: LEFT in Azure Cosmos DB query language
-description: Learn about SQL system function LEFT in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# LEFT (Azure Cosmos DB)
-
- Returns the left part of a string with the specified number of characters.
-
-## Syntax
-
-```sql
-LEFT(<str_expr>, <num_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is the string expression to extract characters from.
-
-*num_expr*
- Is a numeric expression which specifies the number of characters.
-
-## Return types
-
- Returns a string expression.
-
-## Examples
-
- The following example returns the left part of "abc" for various length values.
-
-```sql
-SELECT LEFT("abc", 1) AS l1, LEFT("abc", 2) AS l2
-```
-
- Here is the result set.
-
-```json
-[{"l1": "a", "l2": "ab"}]
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
-
-## Next steps
--- [String functions Azure Cosmos DB](sql-query-string-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Length https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-length.md
- Title: LENGTH in Azure Cosmos DB query language
-description: Learn about SQL system function LENGTH in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# LENGTH (Azure Cosmos DB)
-
- Returns the number of characters of the specified string expression.
-
-## Syntax
-
-```sql
-LENGTH(<str_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is the string expression to be evaluated.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example returns the length of a string.
-
-```sql
-SELECT LENGTH("abc") AS len
-```
-
- Here is the result set.
-
-```json
-[{"len": 3}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
--- [String functions Azure Cosmos DB](sql-query-string-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Linq To Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-linq-to-sql.md
- Title: LINQ to SQL translation in Azure Cosmos DB
-description: Learn the LINQ operators supported and how the LINQ queries are mapped to SQL queries in Azure Cosmos DB.
---- Previously updated : 08/06/2021----
-# LINQ to SQL translation
-
-The Azure Cosmos DB query provider performs a best effort mapping from a LINQ query into a Cosmos DB SQL query. If you want to get the SQL query that is translated from LINQ, use the `ToString()` method on the generated `IQueryable`object. The following description assumes a basic familiarity with [LINQ](/dotnet/csharp/programming-guide/concepts/linq/introduction-to-linq-queries). In addition to LINQ, Azure Cosmos DB also supports [Entity Framework Core](/ef/core/providers/cosmos/?tabs=dotnet-core-cli) which works with SQL API.
-
-> [!NOTE]
-> We recommend using the latest [.NET SDK version](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/)
-
-The query provider type system supports only the JSON primitive types: numeric, Boolean, string, and null.
-
-The query provider supports the following scalar expressions:
--- Constant values, including constant values of the primitive data types at query evaluation time.
-
-- Property/array index expressions that refer to the property of an object or an array element. For example:
-
- ```
- family.Id;
- family.children[0].familyName;
- family.children[0].grade;
- family.children[n].grade; //n is an int variable
- ```
-
-- Arithmetic expressions, including common arithmetic expressions on numerical and Boolean values. For the complete list, see the [Azure Cosmos DB SQL specification](sql-query-aggregate-functions.md).
-
- ```
- 2 * family.children[0].grade;
- x + y;
- ```
-
-- String comparison expressions, which include comparing a string value to some constant string value.
-
- ```
- mother.familyName == "Wakefield";
- child.givenName == s; //s is a string variable
- ```
-
-- Object/array creation expressions, which return an object of compound value type or anonymous type, or an array of such objects. You can nest these values.
-
- ```
- new Parent { familyName = "Wakefield", givenName = "Robin" };
- new { first = 1, second = 2 }; //an anonymous type with two fields
- new int[] { 3, child.grade, 5 };
- ```
-
-## Using LINQ
-
-You can create a LINQ query with `GetItemLinqQueryable`. This example shows LINQ query generation and asynchronous execution with a `FeedIterator`:
-
-```csharp
-using (FeedIterator<Book> setIterator = container.GetItemLinqQueryable<Book>()
-    .Where(b => b.Title == "War and Peace")
-    .ToFeedIterator<Book>())
-{
-    // Asynchronous query execution
-    while (setIterator.HasMoreResults)
-    {
-        foreach (var item in await setIterator.ReadNextAsync())
-        {
-            Console.WriteLine(item.cost);
-        }
-    }
-}
-```
-
-## <a id="SupportedLinqOperators"></a>Supported LINQ operators
-
-The LINQ provider included with the SQL .NET SDK supports the following operators:
--- **Select**: Projections translate to [SELECT](sql-query-select.md), including object construction.-- **Where**: Filters translate to [WHERE](sql-query-where.md), and support translation between `&&`, `||`, and `!` to the SQL operators-- **SelectMany**: Allows unwinding of arrays to the [JOIN](sql-query-join.md) clause. Use to chain or nest expressions to filter on array elements.-- **OrderBy** and **OrderByDescending**: Translate to [ORDER BY](sql-query-order-by.md) with ASC or DESC.-- **Count**, **Sum**, **Min**, **Max**, and **Average** operators for [aggregation](sql-query-aggregate-functions.md), and their async equivalents **CountAsync**, **SumAsync**, **MinAsync**, **MaxAsync**, and **AverageAsync**.-- **CompareTo**: Translates to range comparisons. Commonly used for strings, since they're not comparable in .NET.-- **Skip** and **Take**: Translates to [OFFSET and LIMIT](sql-query-offset-limit.md) for limiting results from a query and doing pagination.-- **Math functions**: Supports translation from .NET `Abs`, `Acos`, `Asin`, `Atan`, `Ceiling`, `Cos`, `Exp`, `Floor`, `Log`, `Log10`, `Pow`, `Round`, `Sign`, `Sin`, `Sqrt`, `Tan`, and `Truncate` to the equivalent [built-in mathematical functions](sql-query-mathematical-functions.md).-- **String functions**: Supports translation from .NET `Concat`, `Contains`, `Count`, `EndsWith`,`IndexOf`, `Replace`, `Reverse`, `StartsWith`, `SubString`, `ToLower`, `ToUpper`, `TrimEnd`, and `TrimStart` to the equivalent [built-in string functions](sql-query-string-functions.md).-- **Array functions**: Supports translation from .NET `Concat`, `Contains`, and `Count` to the equivalent [built-in array functions](sql-query-array-functions.md).-- **Geospatial Extension functions**: Supports translation from stub methods `Distance`, `IsValid`, `IsValidDetailed`, and `Within` to the equivalent [built-in geospatial functions](sql-query-geospatial-query.md).-- **User-Defined Function Extension function**: Supports translation from the stub method [CosmosLinq.InvokeUserDefinedFunction](/dotnet/api/microsoft.azure.cosmos.linq.cosmoslinq.invokeuserdefinedfunction?view=azure-dotnet&preserve-view=true) to the corresponding [user-defined function](sql-query-udfs.md).-- **Miscellaneous**: Supports translation of `Coalesce` and [conditional operators](sql-query-logical-operators.md). Can translate `Contains` to String CONTAINS, ARRAY_CONTAINS, or IN, depending on context.-
-## Examples
-
-The following examples illustrate how some of the standard LINQ query operators translate to queries in Azure Cosmos DB.
-
-### Select operator
-
-The syntax is `input.Select(x => f(x))`, where `f` is a scalar expression. The `input`, in this case, would be an `IQueryable` object.
-
-**Select operator, example 1:**
--- **LINQ lambda expression**
-
- ```csharp
- input.Select(family => family.parents[0].familyName);
- ```
-
-- **SQL**
-
- ```sql
- SELECT VALUE f.parents[0].familyName
- FROM Families f
- ```
-
-**Select operator, example 2:**
--- **LINQ lambda expression**
-
- ```csharp
- input.Select(family => family.children[0].grade + c); // c is an int variable
- ```
-
-- **SQL**
-
- ```sql
- SELECT VALUE f.children[0].grade + c
- FROM Families f
- ```
-
-**Select operator, example 3:**
--- **LINQ lambda expression**
-
- ```csharp
- input.Select(family => new
- {
- name = family.children[0].familyName,
- grade = family.children[0].grade + 3
- });
- ```
-
-- **SQL**
-
- ```sql
- SELECT VALUE {"name":f.children[0].familyName,
- "grade": f.children[0].grade + 3 }
- FROM Families f
- ```
-
-### SelectMany operator
-
-The syntax is `input.SelectMany(x => f(x))`, where `f` is a scalar expression that returns a container type.
--- **LINQ lambda expression**
-
- ```csharp
- input.SelectMany(family => family.children);
- ```
-
-- **SQL**-
- ```sql
- SELECT VALUE child
- FROM child IN Families.children
- ```
-
-### Where operator
-
-The syntax is `input.Where(x => f(x))`, where `f` is a scalar expression that returns a Boolean value.
-
-**Where operator, example 1:**
--- **LINQ lambda expression**
-
- ```csharp
- input.Where(family=> family.parents[0].familyName == "Wakefield");
- ```
-
-- **SQL**
-
- ```sql
- SELECT *
- FROM Families f
- WHERE f.parents[0].familyName = "Wakefield"
- ```
-
-**Where operator, example 2:**
--- **LINQ lambda expression**
-
- ```csharp
- input.Where(
- family => family.parents[0].familyName == "Wakefield" &&
- family.children[0].grade < 3);
- ```
-
-- **SQL**
-
- ```sql
- SELECT *
- FROM Families f
- WHERE f.parents[0].familyName = "Wakefield"
- AND f.children[0].grade < 3
- ```
-
-## Composite SQL queries
-
-You can compose the preceding operators to form more powerful queries. Since Cosmos DB supports nested containers, you can concatenate or nest the composition.
-
-### Concatenation
-
-The syntax is `input(.|.SelectMany())(.Select()|.Where())*`. A concatenated query can start with an optional `SelectMany` query, followed by multiple `Select` or `Where` operators.
-
-**Concatenation, example 1:**
--- **LINQ lambda expression**
-
- ```csharp
- input.Select(family => family.parents[0])
- .Where(parent => parent.familyName == "Wakefield");
- ```
--- **SQL**
-
- ```sql
- SELECT *
- FROM Families f
- WHERE f.parents[0].familyName = "Wakefield"
- ```
-
-**Concatenation, example 2:**
--- **LINQ lambda expression**
-
- ```csharp
- input.Where(family => family.children[0].grade > 3)
- .Select(family => family.parents[0].familyName);
- ```
--- **SQL**
-
- ```sql
- SELECT VALUE f.parents[0].familyName
- FROM Families f
- WHERE f.children[0].grade > 3
- ```
-
-**Concatenation, example 3:**
--- **LINQ lambda expression**
-
- ```csharp
-    input.Select(family => new { grade = family.children[0].grade })
-         .Where(anon => anon.grade < 3);
- ```
-
-- **SQL**
-
- ```sql
- SELECT *
- FROM Families f
- WHERE ({grade: f.children[0].grade}.grade < 3)
- ```
-
-**Concatenation, example 4:**
--- **LINQ lambda expression**
-
- ```csharp
- input.SelectMany(family => family.parents)
- .Where(parent => parent.familyName == "Wakefield");
- ```
-
-- **SQL**
-
- ```sql
- SELECT *
- FROM p IN Families.parents
- WHERE p.familyName = "Wakefield"
- ```
-
-### Nesting
-
-The syntax is `input.SelectMany(x=>x.Q())` where `Q` is a `Select`, `SelectMany`, or `Where` operator.
-
-A nested query applies the inner query to each element of the outer container. One important feature is that the inner query can refer to the fields of the elements in the outer container, like a self-join.
-
-**Nesting, example 1:**
--- **LINQ lambda expression**
-
- ```csharp
- input.SelectMany(family=>
- family.parents.Select(p => p.familyName));
- ```
--- **SQL**
-
- ```sql
- SELECT VALUE p.familyName
- FROM Families f
- JOIN p IN f.parents
- ```
-
-**Nesting, example 2:**
--- **LINQ lambda expression**
-
- ```csharp
- input.SelectMany(family =>
- family.children.Where(child => child.familyName == "Jeff"));
- ```
--- **SQL**
-
- ```sql
- SELECT *
- FROM Families f
- JOIN c IN f.children
- WHERE c.familyName = "Jeff"
- ```
-
-**Nesting, example 3:**
--- **LINQ lambda expression**
-
- ```csharp
- input.SelectMany(family => family.children.Where(
- child => child.familyName == family.parents[0].familyName));
- ```
--- **SQL**
-
- ```sql
- SELECT *
- FROM Families f
- JOIN c IN f.children
- WHERE c.familyName = f.parents[0].familyName
- ```
-
-## Next steps
-
-- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
-- [Model document data](../modeling-data.md)
cosmos-db Sql Query Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-log.md
- Title: LOG in Azure Cosmos DB query language
-description: Learn about the LOG SQL system function in Azure Cosmos DB to return the natural logarithm of the specified numeric expression
---- Previously updated : 09/13/2019---
-# LOG (Azure Cosmos DB)
-
- Returns the natural logarithm of the specified numeric expression.
-
-## Syntax
-
-```sql
-LOG (<numeric_expr> [, <base>])
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-*base*
- Optional numeric argument that sets the base for the logarithm.
-
-## Return types
-
- Returns a numeric expression.
-
-## Remarks
-
- By default, LOG() returns the natural logarithm. You can change the base of the logarithm to another value by using the optional base parameter.
-
- The natural logarithm is the logarithm to the base **e**, where **e** is an irrational constant approximately equal to 2.718281828.
-
- The natural logarithm of the exponential of a number is the number itself: LOG( EXP( n ) ) = n. And the exponential of the natural logarithm of a number is the number itself: EXP( LOG( n ) ) = n.
-
- This system function will not utilize the index.
-
-## Examples
-
- The following example returns the natural logarithm of 10.
-
-```sql
-SELECT LOG(10) AS log
-```
-
- Here is the result set.
-
-```json
-[{log: 2.3025850929940459}]
-```
-
- The following example returns the exponential of the natural logarithm of 10, which yields the original number (within floating-point precision).
-
-```sql
-SELECT EXP(LOG(10)) AS expLog
-```
-
- Here is the result set.
-
-```json
-[{expLog: 10.000000000000002}]
-```
-
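-The optional *base* argument changes the base of the logarithm. For example, the following query (an illustrative addition, not from the original article) computes the base-2 logarithm of 8 and should return 3, subject to floating-point precision:
-
-```sql
-SELECT LOG(8, 2) AS logBase2
-```
-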
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Log10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-log10.md
- Title: LOG10 in Azure Cosmos DB query language
-description: Learn about the LOG10 SQL system function in Azure Cosmos DB to return the base-10 logarithm of the specified numeric expression
---- Previously updated : 09/13/2019---
-# LOG10 (Azure Cosmos DB)
-
- Returns the base-10 logarithm of the specified numeric expression.
-
-## Syntax
-
-```sql
-LOG10 (<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expression*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Remarks
-
- The LOG10 and POWER functions are inversely related to one another. For example, 10 ^ LOG10(n) = n. This system function will not utilize the index.
-
-## Examples
-
- The following example returns the base-10 logarithm of 100.
-
-```sql
-SELECT LOG10(100) AS log10
-```
-
- Here is the result set.
-
-```json
-[{log10: 2}]
-```
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Logical Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-logical-operators.md
- Title: Logical operators in Azure Cosmos DB
-description: Learn about SQL logical operators supported by Azure Cosmos DB.
---- Previously updated : 01/07/2022----
-# Logical operators in Azure Cosmos DB
-
-This article details the logical operators supported by Azure Cosmos DB.
-
-## Understanding logical (AND, OR and NOT) operators
-
-Logical operators operate on Boolean values. The following tables show the logical truth tables for these operators:
-
-**OR operator**
-
-Returns `true` when either of the conditions is `true`.
-
-| | **True** | **False** | **Undefined** |
-| --- | --- | --- | --- |
-| **True** |True |True |True |
-| **False** |True |False |Undefined |
-| **Undefined** |True |Undefined |Undefined |
-
-**AND operator**
-
-Returns `true` when both expressions are `true`.
-
-| | **True** | **False** | **Undefined** |
-| --- | --- | --- | --- |
-| **True** |True |False |Undefined |
-| **False** |False |False |False |
-| **Undefined** |Undefined |False |Undefined |
-
-**NOT operator**
-
-Reverses the value of any Boolean expression.
-
-| | **NOT** |
-| --- | --- |
-| **True** |False |
-| **False** |True |
-| **Undefined** |Undefined |
-
-**Operator Precedence**
-
-The logical operators `OR`, `AND`, and `NOT` have the precedence level shown below:
-
-| **Operator** | **Priority** |
-| --- | --- |
-| **NOT** |1 |
-| **AND** |2 |
-| **OR** |3 |
-
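-Because `AND` binds more tightly than `OR`, the two expressions in the following query (an illustrative sketch, not from the original article) evaluate differently: the first returns `true` and the second returns `false`.
-
-```sql
-SELECT (false AND false OR true) AS withDefaultPrecedence,
-       (false AND (false OR true)) AS withParentheses
-```
-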
-## * operator
-
-The special operator * projects the entire item as is. When used, it must be the only projected field. A query like `SELECT * FROM Families f` is valid, but `SELECT VALUE * FROM Families f` and `SELECT *, f.id FROM Families f` are not valid.
-
-## Next steps
-
-- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
-- [Keywords](sql-query-keywords.md)
-- [SELECT clause](sql-query-select.md)
cosmos-db Sql Query Lower https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-lower.md
- Title: LOWER in Azure Cosmos DB query language
-description: Learn about the LOWER SQL system function in Azure Cosmos DB to return a string expression after converting uppercase character data to lowercase
---- Previously updated : 04/07/2021---
-# LOWER (Azure Cosmos DB)
-
-Returns a string expression after converting uppercase character data to lowercase.
-
-> [!NOTE]
-> This function uses culture-independent (invariant) casing rules when returning the converted string expression.
-
-The LOWER system function doesn't utilize the index. If you plan to do frequent case-insensitive comparisons, the LOWER system function may consume a significant number of RUs. If so, instead of using the LOWER system function to normalize data each time for comparisons, you can normalize the casing upon insertion. Then a query such as `SELECT * FROM c WHERE LOWER(c.name) = 'username'` simply becomes `SELECT * FROM c WHERE c.name = 'username'`.
-
-## Syntax
-
-```sql
-LOWER(<str_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is a string expression.
-
-## Return types
-
-Returns a string expression.
-
-## Examples
-
-The following example shows how to use `LOWER` in a query.
-
-```sql
-SELECT LOWER("Abc") AS lower
-```
-
- Here's the result set.
-
-```json
-[{"lower": "abc"}]
-```
-
-## Remarks
-
-This system function won't [use indexes](../index-overview.md#index-usage).
-
-## Next steps
-
-- [String functions Azure Cosmos DB](sql-query-string-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Ltrim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-ltrim.md
- Title: LTRIM in Azure Cosmos DB query language
-description: Learn about the LTRIM SQL system function in Azure Cosmos DB to return a string expression after it removes leading blanks
---- Previously updated : 09/14/2021---
-# LTRIM (Azure Cosmos DB)
-
- Returns a string expression after it removes leading whitespace or specified characters.
-
-## Syntax
-
-```sql
-LTRIM(<str_expr1>[, <str_expr2>])
-```
-
-## Arguments
-
-*str_expr1*
- Is a string expression
-
-*str_expr2*
- Is an optional string expression to be trimmed from str_expr1. If not set, the default is whitespace.
-
-## Return types
-
- Returns a string expression.
-
-## Examples
-
- The following example shows how to use `LTRIM` inside a query.
-
-```sql
-SELECT LTRIM(" abc") AS t1,
-LTRIM(" abc ") AS t2,
-LTRIM("abc ") AS t3,
-LTRIM("abc") AS t4,
-LTRIM("abc", "ab") AS t5,
-LTRIM("abc", "abc") AS t6
-```
-
- Here is the result set.
-
-```json
-[
- {
- "t1": "abc",
- "t2": "abc ",
- "t3": "abc ",
- "t4": "abc",
- "t5": "c",
- "t6": ""
- }
-]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [String functions Azure Cosmos DB](sql-query-string-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Mathematical Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-mathematical-functions.md
- Title: Mathematical functions in Azure Cosmos DB query language
-description: Learn about the mathematical functions in Azure Cosmos DB to perform a calculation, based on input values that are provided as arguments, and return a numeric value.
---- Previously updated : 06/22/2021---
-# Mathematical functions (Azure Cosmos DB)
-
-The mathematical functions each perform a calculation, based on input values that are provided as arguments, and return a numeric value.
-
-You can run queries like the following example:
-
-```sql
- SELECT VALUE ABS(-4)
-```
-
-The result is:
-
-```json
- [4]
-```
-
-## Functions
-
-The following supported built-in mathematical functions perform a calculation, usually based on input arguments, and return a numeric expression. The **index usage** column assumes, where applicable, that you're comparing the mathematical system function to another value with an equality filter.
-
-| System function | Index usage | [Index usage in queries with scalar aggregate functions](../index-overview.md#index-utilization-for-scalar-aggregate-functions) | Remarks |
-| --- | --- | --- | --- |
-| [ABS](sql-query-abs.md) | Index seek | Index seek | |
-| [ACOS](sql-query-acos.md) | Full scan | Full scan | |
-| [ASIN](sql-query-asin.md) | Full scan | Full scan | |
-| [ATAN](sql-query-atan.md) | Full scan | Full scan | |
-| [ATN2](sql-query-atn2.md) | Full scan | Full scan | |
-| [CEILING](sql-query-ceiling.md) | Index seek | Index seek | |
-| [COS](sql-query-cos.md) | Full scan | Full scan | |
-| [COT](sql-query-cot.md) | Full scan | Full scan | |
-| [DEGREES](sql-query-degrees.md) | Index seek | Index seek | |
-| [EXP](sql-query-exp.md) | Full scan | Full scan | |
-| [FLOOR](sql-query-floor.md) | Index seek | Index seek | |
-| [LOG](sql-query-log.md) | Full scan | Full scan | |
-| [LOG10](sql-query-log10.md) | Full scan | Full scan | |
-| [PI](sql-query-pi.md) | N/A | N/A | PI () returns a constant value. Because the result is deterministic, comparisons with PI() can use the index. |
-| [POWER](sql-query-power.md) | Full scan | Full scan | |
-| [RADIANS](sql-query-radians.md) | Index seek | Index seek | |
-| [RAND](sql-query-rand.md) | N/A | N/A | Rand() returns a random number. Because the result is non-deterministic, comparisons that involve Rand() cannot use the index. |
-| [ROUND](sql-query-round.md) | Index seek | Index seek | |
-| [SIGN](sql-query-sign.md) | Index seek | Index seek | |
-| [SIN](sql-query-sin.md) | Full scan | Full scan | |
-| [SQRT](sql-query-sqrt.md) | Full scan | Full scan | |
-| [SQUARE](sql-query-square.md) | Full scan | Full scan | |
-| [TAN](sql-query-tan.md) | Full scan | Full scan | |
-| [TRUNC](sql-query-trunc.md) | Index seek | Index seek | |
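-
-As an illustrative (not original) example of what the **Index usage** column means, a filter on a hypothetical numeric `price` property such as the following can be served with an index seek because `FLOOR` preserves index usage, whereas replacing `FLOOR` with `LOG` would require a full scan:
-
-```sql
-SELECT *
-FROM c
-WHERE FLOOR(c.price) = 10
-```
-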
-## Next steps
-
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
-- [User Defined Functions](sql-query-udfs.md)
-- [Aggregates](sql-query-aggregate-functions.md)
cosmos-db Sql Query Object Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-object-array.md
- Title: Working with arrays and objects in Azure Cosmos DB
-description: Learn the SQL syntax to create arrays and objects in Azure Cosmos DB. This article also provides some examples to perform operations on array objects
---- Previously updated : 02/02/2021----
-# Working with arrays and objects in Azure Cosmos DB
-
-A key feature of the Azure Cosmos DB SQL API is array and object creation. This document uses examples that can be recreated using the [Family dataset](sql-query-getting-started.md#upload-sample-data).
-
-Here's an example item in this dataset:
-
-```json
-{
- "id": "AndersenFamily",
- "lastName": "Andersen",
- "parents": [
- { "firstName": "Thomas" },
- { "firstName": "Mary Kay"}
- ],
- "children": [
- {
- "firstName": "Henriette Thaulow",
- "gender": "female",
- "grade": 5,
- "pets": [{ "givenName": "Fluffy" }]
- }
- ],
- "address": { "state": "WA", "county": "King", "city": "Seattle" },
- "creationDate": 1431620472,
- "isRegistered": true
-}
-```
-
-## Arrays
-
-You can construct arrays, as shown in the following example:
-
-```sql
-SELECT [f.address.city, f.address.state] AS CityState
-FROM Families f
-```
-
-The results are:
-
-```json
-[
- {
- "CityState": [
- "Seattle",
- "WA"
- ]
- },
- {
- "CityState": [
- "NY",
- "NY"
- ]
- }
-]
-```
-
-You can also use the [ARRAY expression](sql-query-subquery.md#array-expression) to construct an array from a [subquery's](sql-query-subquery.md) results. This query gets all the distinct given names of children in an array.
-
-```sql
-SELECT f.id, ARRAY(SELECT DISTINCT VALUE c.givenName FROM c IN f.children) as ChildNames
-FROM f
-```
-
-The results are:
-
-```json
-[
- {
- "id": "AndersenFamily",
- "ChildNames": []
- },
- {
- "id": "WakefieldFamily",
- "ChildNames": [
- "Jesse",
- "Lisa"
- ]
- }
-]
-```
-
-## <a id="Iteration"></a>Iteration
-
-The SQL API provides support for iterating over JSON arrays, with the [IN keyword](sql-query-keywords.md#in) in the FROM source. In the following example:
-
-```sql
-SELECT *
-FROM Families.children
-```
-
-The results are:
-
-```json
-[
- [
- {
- "firstName": "Henriette Thaulow",
- "gender": "female",
- "grade": 5,
- "pets": [{ "givenName": "Fluffy"}]
- }
- ],
- [
- {
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female",
- "grade": 1
- },
- {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8
- }
- ]
-]
-```
-
-The next query performs iteration over `children` in the `Families` container. The output array is different from the preceding query. This example splits `children`, and flattens the results into a single array:
-
-```sql
-SELECT *
-FROM c IN Families.children
-```
-
-The results are:
-
-```json
-[
- {
- "firstName": "Henriette Thaulow",
- "gender": "female",
- "grade": 5,
- "pets": [{ "givenName": "Fluffy" }]
- },
- {
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female",
- "grade": 1
- },
- {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8
- }
-]
-```
-
-You can filter further on each individual entry of the array, as shown in the following example:
-
-```sql
-SELECT c.givenName
-FROM c IN Families.children
-WHERE c.grade = 8
-```
-
-The results are:
-
-```json
-[{
- "givenName": "Lisa"
-}]
-```
-
-You can also aggregate over the result of an array iteration. For example, the following query counts the number of children among all families:
-
-```sql
-SELECT COUNT(1) AS Count
-FROM child IN Families.children
-```
-
-The results are:
-
-```json
-[
- {
- "Count": 3
- }
-]
-```
-
-> [!NOTE]
-> When using the IN keyword for iteration, you cannot filter or project any properties outside of the array. Instead, you should use [JOINs](sql-query-join.md).
-
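-For example, the following sketch (an illustrative addition) uses a JOIN instead of IN iteration so that the parent item's `id` can be projected alongside the array element:
-
-```sql
-SELECT f.id, c.givenName
-FROM Families f
-JOIN c IN f.children
-WHERE c.grade = 8
-```
-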
-For additional examples, read our [blog post on working with arrays in Azure Cosmos DB](https://devblogs.microsoft.com/cosmosdb/understanding-how-to-query-arrays-in-azure-cosmos-db/).
-
-## Next steps
-
-- [Getting started](sql-query-getting-started.md)
-- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
-- [Joins](sql-query-join.md)
cosmos-db Sql Query Offset Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-offset-limit.md
- Title: OFFSET LIMIT clause in Azure Cosmos DB
-description: Learn how to use the OFFSET LIMIT clause to skip and take some certain values when querying in Azure Cosmos DB
---- Previously updated : 07/29/2020----
-# OFFSET LIMIT clause in Azure Cosmos DB
-
-The OFFSET LIMIT clause is an optional clause to skip then take some number of values from the query. The OFFSET count and the LIMIT count are required in the OFFSET LIMIT clause.
-
-When OFFSET LIMIT is used in conjunction with an ORDER BY clause, the result set is produced by doing skip and take on the ordered values. If no ORDER BY clause is used, it will result in a deterministic order of values.
-
-## Syntax
-
-```sql
-OFFSET <offset_amount> LIMIT <limit_amount>
-```
-
-## Arguments
--- `<offset_amount>`-
- Specifies the integer number of items that the query results should skip.
--- `<limit_amount>`
-
- Specifies the integer number of items that the query results should include
-
-## Remarks
-
- Both the `OFFSET` count and the `LIMIT` count are required in the `OFFSET LIMIT` clause. If an optional `ORDER BY` clause is used, the result set is produced by doing the skip over the ordered values. Otherwise, the query will return a fixed order of values.
-
- The RU charge of a query with `OFFSET LIMIT` will increase as the number of terms being offset increases. For queries that have [multiple pages of results](sql-query-pagination.md), we typically recommend using [continuation tokens](sql-query-pagination.md#continuation-tokens). Continuation tokens are a "bookmark" for the place where the query can later resume. If you use `OFFSET LIMIT`, there is no "bookmark". If you wanted to return the query's next page, you would have to start from the beginning.
-
- You should use `OFFSET LIMIT` for cases when you would like to skip items entirely and save client resources. For example, you should use `OFFSET LIMIT` if you want to skip to the 1000th query result and have no need to view results 1 through 999. On the backend, `OFFSET LIMIT` still loads each item, including those that are skipped. The performance advantage is a savings in client resources by avoiding processing items that are not needed.
-
-## Examples
-
-For example, here's a query that skips the first value and returns the second value (in order of the resident city's name):
-
-```sql
- SELECT f.id, f.address.city
- FROM Families f
- ORDER BY f.address.city
- OFFSET 1 LIMIT 1
-```
-
-The results are:
-
-```json
- [
- {
- "id": "AndersenFamily",
- "city": "Seattle"
- }
- ]
-```
-
-Here's a query that skips the first value and returns the second value (without ordering):
-
-```sql
- SELECT f.id, f.address.city
- FROM Families f
- OFFSET 1 LIMIT 1
-```
-
-The results are:
-
-```json
- [
- {
- "id": "WakefieldFamily",
- "city": "Seattle"
- }
- ]
-```
-
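-To skip whole pages, scale the OFFSET by the page size. For example, with a page size of 10, the following illustrative query (not from the original article) returns the third page of results:
-
-```sql
-SELECT f.id, f.address.city
-FROM Families f
-ORDER BY f.address.city
-OFFSET 20 LIMIT 10
-```
-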
-## Next steps
-
-- [Getting started](sql-query-getting-started.md)
-- [SELECT clause](sql-query-select.md)
-- [ORDER BY clause](sql-query-order-by.md)
cosmos-db Sql Query Order By https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-order-by.md
- Title: ORDER BY clause in Azure Cosmos DB
-description: Learn about SQL ORDER BY clause for Azure Cosmos DB. Use SQL as an Azure Cosmos DB JSON query language.
---- Previously updated : 04/27/2022----
-# ORDER BY clause in Azure Cosmos DB
-
-The optional `ORDER BY` clause specifies the sorting order for results returned by the query.
-
-## Syntax
-
-```sql
-ORDER BY <sort_specification>
-<sort_specification> ::= <sort_expression> [, <sort_expression>]
-<sort_expression> ::= {<scalar_expression> [ASC | DESC]} [ ,...n ]
-```
-
-## Arguments
-
-- `<sort_specification>`
-
- Specifies a property or expression on which to sort the query result set. A sort column can be specified as a name or property alias.
-
- Multiple properties can be specified. Property names must be unique. The sequence of the sort properties in the `ORDER BY` clause defines the organization of the sorted result set. That is, the result set is sorted by the first property and then that ordered list is sorted by the second property, and so on.
-
- The property names referenced in the `ORDER BY` clause must correspond to either a property in the select list or to a property defined in the collection specified in the `FROM` clause without any ambiguities.
-
-- `<sort_expression>`
-
- Specifies one or more properties or expressions on which to sort the query result set.
-
-- `<scalar_expression>`
-
- See the [Scalar expressions](sql-query-scalar-expressions.md) section for details.
-
-- `ASC | DESC`
-
- Specifies that the values in the specified column should be sorted in ascending or descending order. `ASC` sorts from the lowest value to highest value. `DESC` sorts from highest value to lowest value. `ASC` is the default sort order. Null values are treated as the lowest possible values.
-
-## Remarks
-
- The `ORDER BY` clause requires that the indexing policy include an index for the fields being sorted. The Azure Cosmos DB query runtime supports sorting against a property name and not against computed properties. Azure Cosmos DB supports multiple `ORDER BY` properties. In order to run a query with multiple ORDER BY properties, you should define a [composite index](../index-policy.md#composite-indexes) on the fields being sorted.
-
-> [!Note]
-> If the properties being sorted might be undefined for some documents and you want to retrieve them in an ORDER BY query, you must explicitly include this path in the index. The default indexing policy won't allow for the retrieval of the documents where the sort property is undefined. [Review example queries on documents with some missing fields](#documents-with-missing-fields).
-
-## Examples
-
-For example, here's a query that retrieves families in ascending order of the resident city's name:
-
-```sql
- SELECT f.id, f.address.city
- FROM Families f
- ORDER BY f.address.city
-```
-
-The results are:
-
-```json
- [
- {
- "id": "WakefieldFamily",
- "city": "NY"
- },
- {
- "id": "AndersenFamily",
- "city": "Seattle"
- }
- ]
-```
-
-The following query retrieves family `id`s in order of their item creation date. Item `creationDate` is a number representing the *epoch time*, or elapsed time since Jan. 1, 1970 in seconds.
-
-```sql
- SELECT f.id, f.creationDate
- FROM Families f
- ORDER BY f.creationDate DESC
-```
-
-The results are:
-
-```json
- [
- {
- "id": "AndersenFamily",
- "creationDate": 1431620472
- },
- {
- "id": "WakefieldFamily",
- "creationDate": 1431620462
- }
- ]
-```
-
-Additionally, you can order by multiple properties. A query that orders by multiple properties requires a [composite index](../index-policy.md#composite-indexes). Consider the following query:
-
-```sql
- SELECT f.id, f.creationDate
- FROM Families f
- ORDER BY f.address.city ASC, f.creationDate DESC
-```
-
-This query retrieves the family `id` in ascending order of the city name. If multiple items have the same city name, the query will order by the `creationDate` in descending order.
-
-## Documents with missing fields
-
-Queries with `ORDER BY` will return all items, including items where the property in the ORDER BY clause isn't defined.
-
-For example, if you run the below query that includes `lastName` in the `Order By` clause, the results will include all items, even those that don't have a `lastName` property defined.
-
-```sql
- SELECT f.id, f.lastName
- FROM Families f
- ORDER BY f.lastName
-```
-
-> [!Note]
-> Only the .NET SDK 3.4.0 or later and Java SDK 4.13.0 or later support ORDER BY with mixed types. Therefore, if you want to sort by a combination of undefined and defined values, you should use this version (or later).
-
-You can't control the order that different types appear in the results. In the above example, we showed how undefined values were sorted before string values. If instead, for example, you wanted more control over the sort order of undefined values, you could assign any undefined properties a string value of "aaaaaaaaa" or "zzzzzzzz" to ensure they were either first or last.
-
-## Next steps
-
-- [Getting started](sql-query-getting-started.md)
-- [Indexing policies in Azure Cosmos DB](../index-policy.md)
-- [OFFSET LIMIT clause](sql-query-offset-limit.md)
cosmos-db Sql Query Pagination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-pagination.md
- Title: Pagination in Azure Cosmos DB
-description: Learn about paging concepts and continuation tokens
------ Previously updated : 09/15/2021--
-# Pagination in Azure Cosmos DB
-
-In Azure Cosmos DB, queries may have multiple pages of results. This document explains criteria that Azure Cosmos DB's query engine uses to decide whether to split query results into multiple pages. You can optionally use continuation tokens to manage query results that span multiple pages.
-
-## Understanding query executions
-
-Sometimes query results will be split over multiple pages. The results for each page are generated by a separate query execution. When query results can't be returned in one single execution, Azure Cosmos DB automatically splits the results into multiple pages.
-
-You can specify the maximum number of items returned by a query by setting the `MaxItemCount`. The `MaxItemCount` is specified per request and tells the query engine to return that number of items or fewer. You can set `MaxItemCount` to `-1` if you don't want to place a limit on the number of results per query execution.
-
-In addition, there are other reasons that the query engine might need to split query results into multiple pages. These include:
-
-- The container was throttled and there weren't available RUs to return more query results
-- The query execution's response was too large
-- The query execution's time was too long
-- It was more efficient for the query engine to return results in additional executions
-
-The number of items returned per query execution will always be less than or equal to `MaxItemCount`. However, it is possible that other criteria might have limited the number of results the query could return. If you execute the same query multiple times, the number of pages might not be constant. For example, if a query is throttled there may be fewer available results per page, which means the query will have additional pages. In some cases, it is also possible that your query may return an empty page of results.
-
-## Handling multiple pages of results
-
-To ensure accurate query results, you should progress through all pages. You should continue to execute queries until there are no additional pages.
-
-Here are some examples for processing results from queries with multiple pages:
-
-- [.NET SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L294)
-- [Java SDK](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L162-L176)
-- [Node.js SDK](https://github.com/Azure/azure-sdk-for-js/blob/83fcc44a23ad771128d6e0f49043656b3d1df990/sdk/cosmosdb/cosmos/samples/IndexManagement.ts#L128-L140)
-- [Python SDK](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/examples.py#L89)
-
-## Continuation tokens
-
-In the .NET SDK and Java SDK you can optionally use continuation tokens as a bookmark for your query's progress. Azure Cosmos DB query executions are stateless at the server side and can be resumed at any time using the continuation token. In the Python SDK and Node.js SDK, continuation tokens are supported only for single-partition queries, and the partition key must be specified in the options object, because it's not sufficient to have it in the query itself.
-
-Here are some examples for using continuation tokens:
-
-- [.NET SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L230)
-- [Java SDK](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L216)
-- [Node.js SDK](https://github.com/Azure/azure-sdk-for-js/blob/2186357a6e6a64b59915d0cf3cba845be4d115c4/sdk/cosmosdb/cosmos/samples/src/BulkUpdateWithSproc.ts#L16-L31)
-- [Python SDK](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/test/test_query.py#L533)
-
-If the query returns a continuation token, then there are additional query results.
-
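-For instance, here's a minimal .NET SDK v3 sketch (an illustrative addition, assuming an existing `Container` named `container` and an item type `Family`) that reads one page and captures the continuation token so the query can be resumed later:
-
-```csharp
-string continuationToken = null;
-using (FeedIterator<Family> iterator = container.GetItemQueryIterator<Family>(
-    "SELECT * FROM c",
-    continuationToken,
-    new QueryRequestOptions { MaxItemCount = 10 }))
-{
-    if (iterator.HasMoreResults)
-    {
-        FeedResponse<Family> page = await iterator.ReadNextAsync();
-        // Persist this token to resume the query from the next page later.
-        continuationToken = page.ContinuationToken;
-    }
-}
-```
-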
-In Azure Cosmos DB's REST API, you can manage continuation tokens with the `x-ms-continuation` header. As with querying with the .NET or Java SDK, if the `x-ms-continuation` response header is not empty, it means the query has additional results.
-
-As long as you are using the same SDK version, continuation tokens never expire. You can optionally [restrict the size of a continuation token](/dotnet/api/microsoft.azure.documents.client.feedoptions.responsecontinuationtokenlimitinkb). Regardless of the amount of data or number of physical partitions in your container, queries return a single continuation token.
-
-You cannot use continuation tokens for queries with [GROUP BY](sql-query-group-by.md) or [DISTINCT](sql-query-keywords.md#distinct) because these queries would require storing a significant amount of state. For queries with `DISTINCT`, you can use continuation tokens if you add `ORDER BY` to the query.
-
-Here's an example of a query with `DISTINCT` that could use a continuation token:
-
-```sql
-SELECT DISTINCT VALUE c.name
-FROM c
-ORDER BY c.name
-```
-
-## Next steps
-
-- [Introduction to Azure Cosmos DB](../introduction.md)
-- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
-- [ORDER BY clause](sql-query-order-by.md)
cosmos-db Sql Query Parameterized Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-parameterized-queries.md
- Title: Parameterized queries in Azure Cosmos DB
-description: Learn how SQL parameterized queries provide robust handling and escaping of user input, and prevent accidental exposure of data through SQL injection.
---- Previously updated : 07/29/2020---
-# Parameterized queries in Azure Cosmos DB
-
-Azure Cosmos DB supports queries with parameters expressed by the familiar @ notation. Parameterized SQL provides robust handling and escaping of user input, and prevents accidental exposure of data through SQL injection.
-
-## Examples
-
-For example, you can write a query that takes `lastName` and `address.state` as parameters, and execute it for various values of `lastName` and `address.state` based on user input.
-
-```sql
- SELECT *
- FROM Families f
- WHERE f.lastName = @lastName AND f.address.state = @addressState
-```
-
-You can then send this request to Azure Cosmos DB as a parameterized JSON query like the following:
-
-```sql
- {
- "query": "SELECT * FROM Families f WHERE f.lastName = @lastName AND f.address.state = @addressState",
- "parameters": [
- {"name": "@lastName", "value": "Wakefield"},
- {"name": "@addressState", "value": "NY"},
- ]
- }
-```
-
-The following example sets the TOP argument with a parameterized query:
-
-```sql
- {
- "query": "SELECT TOP @n * FROM Families",
- "parameters": [
- {"name": "@n", "value": 10},
- ]
- }
-```
-
-Parameter values can be any valid JSON: strings, numbers, Booleans, null, even arrays or nested JSON. Since Azure Cosmos DB is schemaless, parameters aren't validated against any type.
-
-Here are examples for parameterized queries in each Azure Cosmos DB SDK:
-
-- [.NET SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L195)
-- [Java](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L392-L421)
-- [Node.js](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L58-L79)
-- [Python](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L66-L78)
-
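-In the .NET SDK, for example, a parameterized query is expressed with `QueryDefinition` and `WithParameter`. The following is a minimal sketch (an illustrative addition, assuming an existing `Container` named `container`):
-
-```csharp
-// Build the parameterized query; the SDK sends it as the JSON shape shown above.
-QueryDefinition query = new QueryDefinition(
-        "SELECT * FROM Families f WHERE f.lastName = @lastName AND f.address.state = @addressState")
-    .WithParameter("@lastName", "Wakefield")
-    .WithParameter("@addressState", "NY");
-
-using (FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(query))
-{
-    while (iterator.HasMoreResults)
-    {
-        foreach (var item in await iterator.ReadNextAsync())
-        {
-            Console.WriteLine(item);
-        }
-    }
-}
-```
-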
-## Next steps
-
-- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
-- [Model document data](../modeling-data.md)
cosmos-db Sql Query Pi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-pi.md
- Title: PI in Azure Cosmos DB query language
-description: Learn about SQL system function PI in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# PI (Azure Cosmos DB)
-
- Returns the constant value of PI.
-
-## Syntax
-
-```sql
-PI ()
-```
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example returns the value of `PI`.
-
-```sql
-SELECT PI() AS pi
-```
-
- Here is the result set.
-
-```json
-[{"pi": 3.1415926535897931}]
-```
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Power https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-power.md
- Title: POWER in Azure Cosmos DB query language
-description: Learn about SQL system function POWER in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# POWER (Azure Cosmos DB)
-
- Returns the value of the specified expression to the specified power.
-
-## Syntax
-
-```sql
-POWER (<numeric_expr1>, <numeric_expr2>)
-```
-
-## Arguments
-
-*numeric_expr1*
- Is a numeric expression.
-
-*numeric_expr2*
- Is the power to which to raise *numeric_expr1*.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example demonstrates raising a number to the power of 3 (the cube of the number).
-
-```sql
-SELECT POWER(2, 3) AS pow1, POWER(2.5, 3) AS pow2
-```
-
- Here is the result set.
-
-```json
-[{pow1: 8, pow2: 15.625}]
-```
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Radians https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-radians.md
- Title: RADIANS in Azure Cosmos DB query language
-description: Learn about SQL system function RADIANS in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# RADIANS (Azure Cosmos DB)
-
- Returns radians when a numeric expression, in degrees, is entered.
-
-## Syntax
-
-```sql
-RADIANS (<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example takes a few angles as input and returns their corresponding radian values.
-
-```sql
-SELECT RADIANS(-45.01) AS r1, RADIANS(-181.01) AS r2, RADIANS(0) AS r3, RADIANS(0.1472738) AS r4, RADIANS(197.1099392) AS r5
-```
-
- Here is the result set.
-
-```json
-[{
- "r1": -0.7855726963226477,
- "r2": -3.1592204790349356,
- "r3": 0,
- "r4": 0.0025704127119236249,
- "r5": 3.4402174274458375
- }]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Rand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-rand.md
- Title: RAND in Azure Cosmos DB query language
-description: Learn about SQL system function RAND in Azure Cosmos DB.
---- Previously updated : 09/16/2019---
-# RAND (Azure Cosmos DB)
-
- Returns a randomly generated numeric value from [0,1).
-
-## Syntax
-
-```sql
-RAND ()
-```
-
-## Return types
-
- Returns a numeric expression.
-
-## Remarks
-
- `RAND` is a nondeterministic function. Repetitive calls of `RAND` do not return the same results. This system function will not utilize the index.
--
-## Examples
-
- The following example returns a randomly generated numeric value.
-
-```sql
-SELECT RAND() AS rand
-```
-
- Here is the result set.
-
-```json
-[{"rand": 0.87860053195618093}]
-```
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Regexmatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-regexmatch.md
- Title: RegexMatch in Azure Cosmos DB query language
-description: Learn about the RegexMatch SQL system function in Azure Cosmos DB
---- Previously updated : 08/12/2021----
-# REGEXMATCH (Azure Cosmos DB)
-
-Provides regular expression capabilities. Regular expressions are a concise and flexible notation for finding patterns of text. Azure Cosmos DB uses [PERL compatible regular expressions (PCRE)](http://www.pcre.org/).
-
-## Syntax
-
-```sql
-RegexMatch(<str_expr1>, <str_expr2>, [, <str_expr3>])
-```
-
-## Arguments
-
-*str_expr1*
- Is the string expression to be searched.
-
-*str_expr2*
- Is the regular expression.
-
-*str_expr3*
- Is the string of selected modifiers to use with the regular expression. This string value is optional. If you'd like to run RegexMatch with no modifiers, you can either add an empty string or omit entirely.
-
-You can learn about [syntax for creating regular expressions in Perl](https://perldoc.perl.org/perlre).
-
-Azure Cosmos DB supports the following four modifiers:
-
-| Modifier | Description |
-| | -- |
-| `m` | Treat the string expression to be searched as multiple lines. Without this option, "^" and "$" will match at the beginning or end of the string and not each individual line. |
-| `s` | Allow "." to match any character, including a newline character. |
-| `i` | Ignore case when pattern matching. |
-| `x` | Ignore all whitespace characters. |
-
-## Return types
-
- Returns a Boolean expression. Returns undefined if the string expression to be searched, the regular expression, or the selected modifiers are invalid.
-
-## Examples
-
-The following simple RegexMatch example checks the string "abcd" for regular expression match using a few different modifiers.
-
-```sql
-SELECT RegexMatch ("abcd", "ABC", "") AS NoModifiers,
-RegexMatch ("abcd", "ABC", "i") AS CaseInsensitive,
-RegexMatch ("abcd", "ab.", "") AS WildcardCharacter,
-RegexMatch ("abcd", "ab c", "x") AS IgnoreWhiteSpace,
-RegexMatch ("abcd", "aB c", "ix") AS CaseInsensitiveAndIgnoreWhiteSpace
-```
-
- Here is the result set.
-
-```json
-[
- {
- "NoModifiers": false,
- "CaseInsensitive": true,
- "WildcardCharacter": true,
- "IgnoreWhiteSpace": true,
- "CaseInsensitiveAndIgnoreWhiteSpace": true
- }
-]
-```
-
-With RegexMatch, you can use metacharacters to do more complex string searches that wouldn't otherwise be possible with the StartsWith, EndsWith, Contains, or StringEquals system functions. Here are some additional examples:
-
-> [!NOTE]
-> If you need to use a metacharacter in a regular expression and don't want it to have special meaning, you should escape the metacharacter using `\`.
-
-**Check items that have a description that contains the word "salt" exactly once:**
-
-```sql
-SELECT *
-FROM c
-WHERE RegexMatch (c.description, "salt{1}","")
-```
-
-**Check items that have a description that contain a number between 0 and 99:**
-
-```sql
-SELECT *
-FROM c
-WHERE RegexMatch (c.description, "[0-99]","")
-```
-
-**Check items that have a description that contain four letter words starting with "S" or "s":**
-
-```sql
-SELECT *
-FROM c
-WHERE RegexMatch (c.description, " s... ","i")
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy) if the regular expression can be broken down into either StartsWith, EndsWith, Contains, or StringEquals system functions.
-
-## Next steps
-
-- [String functions Azure Cosmos DB](sql-query-string-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Replace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-replace.md
- Title: REPLACE in Azure Cosmos DB query language
-description: Learn about SQL system function REPLACE in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# REPLACE (Azure Cosmos DB)
-
- Replaces all occurrences of a specified string value with another string value.
-
-## Syntax
-
-```sql
-REPLACE(<str_expr1>, <str_expr2>, <str_expr3>)
-```
-
-## Arguments
-
-*str_expr1*
- Is the string expression to be searched.
-
-*str_expr2*
- Is the string expression to be found.
-
-*str_expr3*
- Is the string expression to replace occurrences of *str_expr2* in *str_expr1*.
-
-## Return types
-
- Returns a string expression.
-
-## Examples
-
- The following example shows how to use `REPLACE` in a query.
-
-```sql
-SELECT REPLACE("This is a Test", "Test", "desk") AS replace
-```
-
- Here is the result set.
-
-```json
-[{"replace": "This is a desk"}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [String functions Azure Cosmos DB](sql-query-string-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Replicate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-replicate.md
- Title: REPLICATE in Azure Cosmos DB query language
-description: Learn about SQL system function REPLICATE in Azure Cosmos DB.
---- Previously updated : 03/03/2020---
-# REPLICATE (Azure Cosmos DB)
-
- Repeats a string value a specified number of times.
-
-## Syntax
-
-```sql
-REPLICATE(<str_expr>, <num_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is a string expression.
-
-*num_expr*
- Is a numeric expression. If *num_expr* is negative or non-finite, the result is undefined.
-
-## Return types
-
- Returns a string expression.
-
-## Remarks
-
- The maximum length of the result is 10,000 characters i.e. (length(*str_expr*) * *num_expr*) <= 10,000. This system function will not utilize the index.
-
-## Examples
-
- The following example shows how to use `REPLICATE` in a query.
-
-```sql
-SELECT REPLICATE("a", 3) AS replicate
-```
-
- Here is the result set.
-
-```json
-[{"replicate": "aaa"}]
-```
-
-## Next steps
-
-- [String functions Azure Cosmos DB](sql-query-string-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Reverse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-reverse.md
- Title: REVERSE in Azure Cosmos DB query language
-description: Learn about SQL system function REVERSE in Azure Cosmos DB.
---- Previously updated : 03/03/2020---
-# REVERSE (Azure Cosmos DB)
-
- Returns the reverse order of a string value.
-
-## Syntax
-
-```sql
-REVERSE(<str_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is a string expression.
-
-## Return types
-
- Returns a string expression.
-
-## Examples
-
- The following example shows how to use `REVERSE` in a query.
-
-```sql
-SELECT REVERSE("Abc") AS reverse
-```
-
- Here is the result set.
-
-```json
-[{"reverse": "cbA"}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [String functions Azure Cosmos DB](sql-query-string-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Right https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-right.md
- Title: RIGHT in Azure Cosmos DB query language
-description: Learn about SQL system function RIGHT in Azure Cosmos DB.
---- Previously updated : 03/03/2020---
-# RIGHT (Azure Cosmos DB)
-
- Returns the right part of a string with the specified number of characters.
-
-## Syntax
-
-```sql
-RIGHT(<str_expr>, <num_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is the string expression to extract characters from.
-
-*num_expr*
- Is a numeric expression which specifies the number of characters.
-
-## Return types
-
- Returns a string expression.
-
-## Examples
-
- The following example returns the right part of "abc" for various length values.
-
-```sql
-SELECT RIGHT("abc", 1) AS r1, RIGHT("abc", 2) AS r2
-```
-
- Here is the result set.
-
-```json
-[{"r1": "c", "r2": "bc"}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [String functions Azure Cosmos DB](sql-query-string-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Round https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-round.md
- Title: ROUND in Azure Cosmos DB query language
-description: Learn about SQL system function ROUND in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# ROUND (Azure Cosmos DB)
-
- Returns a numeric value, rounded to the closest integer value.
-
-## Syntax
-
-```sql
-ROUND(<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Remarks
-
-The rounding operation performed follows midpoint rounding away from zero. If the input is a numeric expression which falls exactly between two integers then the result will be the closest integer value away from zero. This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
-
-|<numeric_expr>|Rounded|
-|-|-|
-|-6.5000|-7|
-|-0.5|-1|
-|0.5|1|
-|6.5000|7|
-
-## Examples
-
-The following example rounds the following positive and negative numbers to the nearest integer.
-
-```sql
-SELECT ROUND(2.4) AS r1, ROUND(2.6) AS r2, ROUND(2.5) AS r3, ROUND(-2.4) AS r4, ROUND(-2.6) AS r5
-```
-
-Here is the result set.
-
-```json
-[{r1: 2, r2: 3, r3: 3, r4: -2, r5: -3}]
-```
-
-## Next steps
-
-- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Rtrim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-rtrim.md
- Title: RTRIM in Azure Cosmos DB query language
-description: Learn about SQL system function RTRIM in Azure Cosmos DB.
---- Previously updated : 09/14/2021---
-# RTRIM (Azure Cosmos DB)
-
- Returns a string expression after it removes trailing whitespace or specified characters.
-
-## Syntax
-
-```sql
-RTRIM(<str_expr1>[, <str_expr2>])
-```
-
-## Arguments
-
-*str_expr1*
- Is a string expression
-
-*str_expr2*
- Is an optional string expression to be trimmed from str_expr1. If not set, the default is whitespace.
-
-## Return types
-
- Returns a string expression.
-
-## Examples
-
- The following example shows how to use `RTRIM` inside a query.
-
-```sql
-SELECT RTRIM(" abc") AS t1,
-RTRIM(" abc ") AS t2,
-RTRIM("abc ") AS t3,
-RTRIM("abc") AS t4,
-RTRIM("abc", "bc") AS t5,
-RTRIM("abc", "abc") AS t6
-```
-
- Here is the result set.
-
-```json
-[
- {
- "t1": " abc",
- "t2": " abc",
- "t3": "abc",
- "t4": "abc",
- "t5": "a",
- "t6": ""
- }
-]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
-
-- [String functions Azure Cosmos DB](sql-query-string-functions.md)
-- [System functions Azure Cosmos DB](sql-query-system-functions.md)
-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Scalar Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-scalar-expressions.md
- Title: Scalar expressions in Azure Cosmos DB SQL queries
-description: Learn about the scalar expression SQL syntax for Azure Cosmos DB. This article also describes how to combine scalar expressions into complex expressions by using operators.
---- Previously updated : 05/17/2019----
-# Scalar expressions in Azure Cosmos DB SQL queries
-
-The [SELECT clause](sql-query-select.md) supports scalar expressions. A scalar expression is a combination of symbols and operators that can be evaluated to obtain a single value. Examples of scalar expressions include: constants, property references, array element references, alias references, or function calls. Scalar expressions can be combined into complex expressions using operators.
-
-## Syntax
-
-```sql
-<scalar_expression> ::=
- <constant>
- | input_alias
- | parameter_name
- | <scalar_expression>.property_name
- | <scalar_expression>'['"property_name"|array_index']'
- | unary_operator <scalar_expression>
- | <scalar_expression> binary_operator <scalar_expression>
- | <scalar_expression> ? <scalar_expression> : <scalar_expression>
- | <scalar_function_expression>
- | <create_object_expression>
- | <create_array_expression>
- | (<scalar_expression>)
-
-<scalar_function_expression> ::=
- 'udf.' Udf_scalar_function([<scalar_expression>][,…n])
- | builtin_scalar_function([<scalar_expression>][,…n])
-
-<create_object_expression> ::=
- '{' [{property_name | "property_name"} : <scalar_expression>][,…n] '}'
-
-<create_array_expression> ::=
- '[' [<scalar_expression>][,…n] ']'
-
-```
-
-## Arguments
-
-- `<constant>`
-
- Represents a constant value. See [Constants](sql-query-constants.md) section for details.
-
-- `input_alias`
-
- Represents a value defined by the `input_alias` introduced in the `FROM` clause.
- This value is guaranteed to not be **undefined**; **undefined** values in the input are skipped.
-
-- `<scalar_expression>.property_name`
-
- Represents a value of the property of an object. If the property does not exist or property is referenced on a value, which is not an object, then the expression evaluates to **undefined** value.
-
-- `<scalar_expression>'['"property_name"|array_index']'`
-
- Represents a value of the property with name `property_name` or array element with index `array_index` of an array. If the property/array index does not exist or the property/array index is referenced on a value that is not an object/array, then the expression evaluates to undefined value.
-
-- `unary_operator <scalar_expression>`
-
- Represents an operator that is applied to a single value.
-
-- `<scalar_expression> binary_operator <scalar_expression>`
-
- Represents an operator that is applied to two values.
-
-- `<scalar_function_expression>`
-
- Represents a value defined by a result of a function call.
-
-- `udf_scalar_function`
-
- Name of the user-defined scalar function.
-
-- `builtin_scalar_function`
-
- Name of the built-in scalar function.
-
-- `<create_object_expression>`
-
- Represents a value obtained by creating a new object with specified properties and their values.
-
-- `<create_array_expression>`
-
- Represents a value obtained by creating a new array with specified values as elements
-
-- `parameter_name`
-
- Represents a value of the specified parameter name. Parameter names must have a single \@ as the first character.
-
-## Remarks
-
- When calling a built-in or user-defined scalar function, all arguments must be defined. If any of the arguments is undefined, the function will not be called and the result will be undefined.
-
- When creating an object, any property that is assigned undefined value will be skipped and not included in the created object.
-
- When creating an array, any element value that is assigned an **undefined** value will be skipped and not included in the created array. This causes the next defined element to take its place, so the created array won't have skipped indexes.
-
-## Examples
-
-```sql
- SELECT ((2 + 11 % 7)-2)/3
-```
-
-The results are:
-
-```json
- [{
- "$1": 1.33333
- }]
-```
-
-In the following query, the result of the scalar expression is a Boolean:
-
-```sql
- SELECT f.address.city = f.address.state AS AreFromSameCityState
- FROM Families f
-```
-
-The results are:
-
-```json
- [
- {
- "AreFromSameCityState": false
- },
- {
- "AreFromSameCityState": true
- }
- ]
-```
-
-## Next steps
--- [Introduction to Azure Cosmos DB](../introduction.md)-- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)-- [Subqueries](sql-query-subquery.md)
cosmos-db Sql Query Select https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-select.md
- Title: SELECT clause in Azure Cosmos DB
-description: Learn about SQL SELECT clause for Azure Cosmos DB. Use SQL as an Azure Cosmos DB JSON query language.
---- Previously updated : 05/08/2020----
-# SELECT clause in Azure Cosmos DB
-
-Every query consists of a `SELECT` clause and optional [FROM](sql-query-from.md) and [WHERE](sql-query-where.md) clauses, per ANSI SQL standards. Typically, the source in the `FROM` clause is enumerated, and the `WHERE` clause applies a filter on the source to retrieve a subset of JSON items. The `SELECT` clause then projects the requested JSON values in the select list.
-
-## Syntax
-
-```sql
-SELECT <select_specification>
-
-<select_specification> ::=
- '*'
- | [DISTINCT] <object_property_list>
- | [DISTINCT] VALUE <scalar_expression> [[ AS ] value_alias]
-
-<object_property_list> ::=
-{ <scalar_expression> [ [ AS ] property_alias ] } [ ,...n ]
-```
-
-## Arguments
-
-- `<select_specification>` -
- Properties or value to be selected for the result set.
-
-- `'*'` -
- Specifies that the value should be retrieved without making any changes. Specifically, if the processed value is an object, all properties are retrieved.
-
-- `<object_property_list>`
-
- Specifies the list of properties to be retrieved. Each returned value will be an object with the properties specified.
-
-- `VALUE` -
- Specifies that the JSON value should be retrieved instead of the complete JSON object. Unlike `<object_property_list>`, this doesn't wrap the projected value in an object.
--- `DISTINCT`
-
- Specifies that duplicates of projected properties should be removed.
--- `<scalar_expression>` -
- Expression representing the value to be computed. See [Scalar expressions](sql-query-scalar-expressions.md) section for details.
-
-## Remarks
-
-The `SELECT *` syntax is valid only if the FROM clause declares exactly one alias. `SELECT *` provides an identity projection, which can be useful if no projection is needed. `SELECT *` is valid only if the FROM clause is specified and introduces only a single input source.
-
-Both `SELECT <select_list>` and `SELECT *` are "syntactic sugar" and can be alternatively expressed by using simple SELECT statements as shown below.
-
-1. `SELECT * FROM ... AS from_alias ...`
-
- is equivalent to:
-
- `SELECT from_alias FROM ... AS from_alias ...`
-
-2. `SELECT <expr1> AS p1, <expr2> AS p2,..., <exprN> AS pN [other clauses...]`
-
- is equivalent to:
-
- `SELECT VALUE { p1: <expr1>, p2: <expr2>, ..., pN: <exprN> }[other clauses...]`
-
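-For example, assuming a `Families` container like the one used in the example that follows, a hypothetical projection `SELECT f.id AS familyId, f.address.city AS city FROM Families f` can equivalently be written with `VALUE` and an object constructor:
-
-```sql
-SELECT VALUE { familyId: f.id, city: f.address.city }
-FROM Families f
-```
-
-Both forms produce items with `familyId` and `city` properties.
-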
-## Examples
-
-The following SELECT query example returns `address` from `Families` whose `id` matches `AndersenFamily`:
-
-```sql
- SELECT f.address
- FROM Families f
- WHERE f.id = "AndersenFamily"
-```
-
-The results are:
-
-```json
- [{
- "address": {
- "state": "WA",
- "county": "King",
- "city": "Seattle"
- }
- }]
-```
-
-## Next steps
--- [Getting started](sql-query-getting-started.md)-- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)-- [WHERE clause](sql-query-where.md)
cosmos-db Sql Query Sign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-sign.md
- Title: SIGN in Azure Cosmos DB query language
-description: Learn about SQL system function SIGN in Azure Cosmos DB.
---- Previously updated : 03/03/2020---
-# SIGN (Azure Cosmos DB)
-
- Returns the positive (+1), zero (0), or negative (-1) sign of the specified numeric expression.
-
-## Syntax
-
-```sql
-SIGN(<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example returns the `SIGN` values of numbers from -2 to 2.
-
-```sql
-SELECT SIGN(-2) AS s1, SIGN(-1) AS s2, SIGN(0) AS s3, SIGN(1) AS s4, SIGN(2) AS s5
-```
-
- Here is the result set.
-
-```json
-[{"s1": -1, "s2": -1, "s3": 0, "s4": 1, "s5": 1}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
--- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Sin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-sin.md
- Title: SIN in Azure Cosmos DB query language
-description: Learn about SQL system function SIN in Azure Cosmos DB.
---- Previously updated : 03/03/2020---
-# SIN (Azure Cosmos DB)
-
- Returns the trigonometric sine of the specified angle, in radians, in the specified expression.
-
-## Syntax
-
-```sql
-SIN(<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example calculates the `SIN` of the specified angle.
-
-```sql
-SELECT SIN(45.175643) AS sin
-```
-
- Here is the result set.
-
-```json
-[{"sin": 0.929607286611012}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
--- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Spatial Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-spatial-functions.md
- Title: Spatial functions in Azure Cosmos DB query language
-description: Learn about spatial SQL system functions in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# Spatial functions (Azure Cosmos DB)
-
-Cosmos DB supports the following Open Geospatial Consortium (OGC) built-in functions for geospatial querying.
-
-## Functions
-
-The following scalar functions perform an operation on a spatial object input value and return a numeric or Boolean value.
-
-* [ST_DISTANCE](sql-query-st-distance.md)
-* [ST_INTERSECTS](sql-query-st-intersects.md)
-* [ST_ISVALID](sql-query-st-isvalid.md)
-* [ST_ISVALIDDETAILED](sql-query-st-isvaliddetailed.md)
-* [ST_WITHIN](sql-query-st-within.md)
----
-
-
-## Next steps
--- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)-- [User Defined Functions](sql-query-udfs.md)-- [Aggregates](sql-query-aggregate-functions.md)
cosmos-db Sql Query Sqrt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-sqrt.md
- Title: SQRT in Azure Cosmos DB query language
-description: Learn about SQL system function SQRT in Azure Cosmos DB.
---- Previously updated : 03/03/2020---
-# SQRT (Azure Cosmos DB)
-
- Returns the square root of the specified numeric value.
-
-## Syntax
-
-```sql
-SQRT(<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example returns the square roots of numbers 1-3.
-
-```sql
-SELECT SQRT(1) AS s1, SQRT(2.0) AS s2, SQRT(3) AS s3
-```
-
- Here is the result set.
-
-```json
-[{"s1": 1, "s2": 1.4142135623730952, "s3": 1.7320508075688772}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
--- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Square https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-square.md
- Title: SQUARE in Azure Cosmos DB query language
-description: Learn about SQL system function SQUARE in Azure Cosmos DB.
---- Previously updated : 03/04/2020---
-# SQUARE (Azure Cosmos DB)
-
- Returns the square of the specified numeric value.
-
-## Syntax
-
-```sql
-SQUARE(<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example returns the squares of numbers 1-3.
-
-```sql
-SELECT SQUARE(1) AS s1, SQUARE(2.0) AS s2, SQUARE(3) AS s3
-```
-
- Here is the result set.
-
-```json
-[{"s1": 1, "s2": 4, "s3": 9}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
--- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query St Distance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-st-distance.md
- Title: ST_DISTANCE in Azure Cosmos DB query language
-description: Learn about SQL system function ST_DISTANCE in Azure Cosmos DB.
---- Previously updated : 02/17/2021---
-# ST_DISTANCE (Azure Cosmos DB)
-
- Returns the distance between the two GeoJSON Point, Polygon, MultiPolygon or LineString expressions. To learn more, see the [Geospatial and GeoJSON location data](sql-query-geospatial-intro.md) article.
-
-## Syntax
-
-```sql
-ST_DISTANCE (<spatial_expr>, <spatial_expr>)
-```
-
-## Arguments
-
-*spatial_expr*
- Is any valid GeoJSON Point, Polygon, or LineString object expression.
-
-## Return types
-
- Returns a numeric expression containing the distance. This is expressed in meters for the default reference system.
-
-## Examples
-
- The following example shows how to return all family documents that are within 30 km of the specified location using the `ST_DISTANCE` built-in function.
-
-```sql
-SELECT f.id
-FROM Families f
-WHERE ST_DISTANCE(f.location, {'type': 'Point', 'coordinates':[31.9, -4.8]}) < 30000
-```
-
- Here is the result set.
-
-```json
-[{
- "id": "WakefieldFamily"
-}]
-```
-
-## Remarks
-
-This system function will benefit from a [geospatial index](../index-policy.md#spatial-indexes) except in queries with aggregates.
-
-> [!NOTE]
-> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
-
-## Next steps
--- [Spatial functions Azure Cosmos DB](sql-query-spatial-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query St Intersects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-st-intersects.md
- Title: ST_INTERSECTS in Azure Cosmos DB query language
-description: Learn about SQL system function ST_INTERSECTS in Azure Cosmos DB.
---- Previously updated : 09/21/2021---
-# ST_INTERSECTS (Azure Cosmos DB)
-
- Returns a Boolean expression indicating whether the GeoJSON object (Point, Polygon, MultiPolygon, or LineString) specified in the first argument intersects the GeoJSON (Point, Polygon, MultiPolygon, or LineString) in the second argument.
-
-## Syntax
-
-```sql
-ST_INTERSECTS (<spatial_expr>, <spatial_expr>)
-```
-
-## Arguments
-
-*spatial_expr*
- Is a GeoJSON Point, Polygon, or LineString object expression.
-
-## Return types
-
- Returns a Boolean value.
-
-## Examples
-
- The following example shows how to find all areas that intersect with the given polygon.
-
-```sql
-SELECT a.id
-FROM Areas a
-WHERE ST_INTERSECTS(a.location, {
- 'type':'Polygon',
- 'coordinates': [[[31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]]]
-})
-```
-
- Here is the result set.
-
-```json
-[{ "id": "IntersectingPolygon" }]
-```
-
-## Remarks
-
-This system function will benefit from a [geospatial index](../index-policy.md#spatial-indexes) except in queries with aggregates.
-
-> [!NOTE]
-> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
-
-## Next steps
--- [Spatial functions Azure Cosmos DB](sql-query-spatial-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query St Isvalid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-st-isvalid.md
- Title: ST_ISVALID in Azure Cosmos DB query language
-description: Learn about SQL system function ST_ISVALID in Azure Cosmos DB.
---- Previously updated : 09/21/2021---
-# ST_ISVALID (Azure Cosmos DB)
-
- Returns a Boolean value indicating whether the specified GeoJSON Point, Polygon, MultiPolygon, or LineString expression is valid.
-
-## Syntax
-
-```sql
-ST_ISVALID(<spatial_expr>)
-```
-
-## Arguments
-
-*spatial_expr*
- Is a GeoJSON Point, Polygon, or LineString expression.
-
-## Return types
-
- Returns a Boolean expression.
-
-## Examples
-
- The following example shows how to check if a point is valid using `ST_ISVALID`.
-
- For example, this point has a latitude value that's not in the valid range of values [-90, 90], so the query returns false.
-
- For polygons, the GeoJSON specification requires that the last coordinate pair provided should be the same as the first, to create a closed shape. Points within a polygon must be specified in counter-clockwise order. A polygon specified in clockwise order represents the inverse of the region within it.
-
-```sql
-SELECT ST_ISVALID({ "type": "Point", "coordinates": [31.9, -132.8] }) AS b
-```
-
- Here is the result set.
-
-```json
-[{ "b": false }]
-```
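-
- The polygon rules described above can be checked the same way. For example, the following sketch passes a ring whose last coordinate pair doesn't match the first, so it isn't a closed shape and the query is expected to return `false`:
-
-```sql
-SELECT ST_ISVALID({
-  "type": "Polygon",
-  "coordinates": [[ [ 31.8, -5 ], [ 31.8, -4.7 ], [ 32, -4.7 ], [ 32, -5 ] ]]
-}) AS b
-```
-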
-> [!NOTE]
-> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
-
-## Next steps
--- [Spatial functions Azure Cosmos DB](sql-query-spatial-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query St Isvaliddetailed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-st-isvaliddetailed.md
- Title: ST_ISVALIDDETAILED in Azure Cosmos DB query language
-description: Learn about SQL system function ST_ISVALIDDETAILED in Azure Cosmos DB.
---- Previously updated : 09/21/2021---
-# ST_ISVALIDDETAILED (Azure Cosmos DB)
-
- Returns a JSON value containing a Boolean value indicating whether the specified GeoJSON Point, Polygon, or LineString expression is valid, and if invalid, additionally the reason as a string value.
-
-## Syntax
-
-```sql
-ST_ISVALIDDETAILED(<spatial_expr>)
-```
-
-## Arguments
-
-*spatial_expr*
- Is a GeoJSON point or polygon expression.
-
-## Return types
-
- Returns a JSON value containing a Boolean value indicating whether the specified GeoJSON point or polygon expression is valid, and if invalid, additionally the reason as a string value.
-
-## Examples
-
- The following example shows how to check validity (with details) using `ST_ISVALIDDETAILED`.
-
-```sql
-SELECT ST_ISVALIDDETAILED({
- "type": "Polygon",
- "coordinates": [[ [ 31.8, -5 ], [ 31.8, -4.7 ], [ 32, -4.7 ], [ 32, -5 ] ]]
-}) AS b
-```
-
- Here is the result set.
-
-```json
-[{
- "b": {
- "valid": false,
- "reason": "The Polygon input is not valid because the start and end points of the ring number 1 are not the same. Each ring of a polygon must have the same start and end points."
- }
-}]
-```
-
-> [!NOTE]
-> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
-
-## Next steps
--- [Spatial functions Azure Cosmos DB](sql-query-spatial-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query St Within https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-st-within.md
- Title: ST_WITHIN in Azure Cosmos DB query language
-description: Learn about SQL system function ST_WITHIN in Azure Cosmos DB.
---- Previously updated : 09/21/2021---
-# ST_WITHIN (Azure Cosmos DB)
-
- Returns a Boolean expression indicating whether the GeoJSON object (Point, Polygon, MultiPolygon, or LineString) specified in the first argument is within the GeoJSON (Point, Polygon, MultiPolygon, or LineString) in the second argument.
-
-## Syntax
-
-```sql
-ST_WITHIN (<spatial_expr>, <spatial_expr>)
-```
-
-## Arguments
-
-*spatial_expr*
- Is a GeoJSON Point, Polygon, or LineString object expression.
-
-## Return types
-
- Returns a Boolean value.
-
-## Examples
-
- The following example shows how to find all family documents within a polygon using `ST_WITHIN`.
-
-```sql
-SELECT f.id
-FROM Families f
-WHERE ST_WITHIN(f.location, {
- 'type':'Polygon',
- 'coordinates': [[[31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]]]
-})
-```
-
- Here is the result set.
-
-```json
-[{ "id": "WakefieldFamily" }]
-```
-
-## Remarks
-
-This system function will benefit from a [geospatial index](../index-policy.md#spatial-indexes) except in queries with aggregates.
-
-> [!NOTE]
-> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
--
-## Next steps
--- [Spatial functions Azure Cosmos DB](sql-query-spatial-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Startswith https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-startswith.md
- Title: StartsWith in Azure Cosmos DB query language
-description: Learn about SQL system function STARTSWITH in Azure Cosmos DB.
---- Previously updated : 04/01/2021---
-# STARTSWITH (Azure Cosmos DB)
-
- Returns a Boolean indicating whether the first string expression starts with the second.
-
-## Syntax
-
-```sql
-STARTSWITH(<str_expr1>, <str_expr2> [, <bool_expr>])
-```
-
-## Arguments
-
-*str_expr1*
- Is a string expression.
-
-*str_expr2*
- Is a string expression to be compared to the beginning of *str_expr1*.
-
-*bool_expr*
- Optional value for ignoring case. When set to true, STARTSWITH will do a case-insensitive search. When unspecified, this value is false.
-
-## Return types
-
- Returns a Boolean expression.
-
-## Examples
-
-The following example checks if the string "abc" begins with "b" and "A".
-
-```sql
-SELECT STARTSWITH("abc", "b", false) AS s1, STARTSWITH("abc", "A", false) AS s2, STARTSWITH("abc", "A", true) AS s3
-```
-
- Here is the result set.
-
-```json
-[
- {
- "s1": false,
- "s2": false,
- "s3": true
- }
-]
-```
-
-## Remarks
-
-Learn about [how this string system function uses the index](sql-query-string-functions.md).
-
-## Next steps
--- [String functions Azure Cosmos DB](sql-query-string-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query String Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-string-functions.md
- Title: String functions in Azure Cosmos DB query language
-description: Learn about string SQL system functions in Azure Cosmos DB.
---- Previously updated : 05/26/2021----
-# String functions (Azure Cosmos DB)
-
-The string functions let you perform operations on strings in Azure Cosmos DB.
-
-## Functions
-
-The following scalar functions perform an operation on a string input value and return a string, numeric, or Boolean value. The **index usage** column assumes, where applicable, that you're comparing the string system function to another value with an equality filter, as in the sketch after the table.
-
-| System function | Index usage | [Index usage in queries with scalar aggregate functions](../index-overview.md#index-utilization-for-scalar-aggregate-functions) | Remarks |
-| -- | | | |
-| [CONCAT](sql-query-concat.md) | Full scan | Full scan | |
-| [CONTAINS](sql-query-contains.md) | Full index scan | Full scan | |
-| [ENDSWITH](sql-query-endswith.md) | Full index scan | Full scan | |
-| [INDEX_OF](sql-query-index-of.md) | Full scan | Full scan | |
-| [LEFT](sql-query-left.md) | Precise index scan | Precise index scan | |
-| [LENGTH](sql-query-length.md) | Full scan | Full scan | |
-| [LOWER](sql-query-lower.md) | Full scan | Full scan | |
-| [LTRIM](sql-query-ltrim.md) | Full scan | Full scan | |
-| [REGEXMATCH](sql-query-regexmatch.md) | Full index scan | Full scan | |
-| [REPLACE](sql-query-replace.md) | Full scan | Full scan | |
-| [REPLICATE](sql-query-replicate.md) | Full scan | Full scan | |
-| [REVERSE](sql-query-reverse.md) | Full scan | Full scan | |
-| [RIGHT](sql-query-right.md) | Full scan | Full scan | |
-| [RTRIM](sql-query-rtrim.md) | Full scan | Full scan | |
-| [STARTSWITH](sql-query-startswith.md) | Precise index scan | Precise index scan | Will be Expanded index scan if case-insensitive option is true. |
-| [STRINGEQUALS](sql-query-stringequals.md) | Index seek | Index seek | Will be Expanded index scan if case-insensitive option is true. |
-| [StringToArray](sql-query-stringtoarray.md) | Full scan | Full scan | |
-| [StringToBoolean](sql-query-stringtoboolean.md) | Full scan | Full scan | |
-| [StringToNull](sql-query-stringtonull.md) | Full scan | Full scan | |
-| [StringToNumber](sql-query-stringtonumber.md) | Full scan | Full scan | |
-
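-For example, a filter shaped like the following sketch (assuming a container aliased `c` with a string `name` property; the property and value are hypothetical) compares the result of a string system function to another value with an equality filter, which is the pattern the index-usage column assumes:
-
-```sql
-SELECT c.id
-FROM c
-WHERE LEFT(c.name, 7) = "Contoso"
-```
-
-Because `LEFT` supports a precise index scan, a filter like this one can be served efficiently from the index.
-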
-Learn about [index usage](../index-overview.md#index-usage) in Azure Cosmos DB.
-
-## Next steps
--- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)-- [User Defined Functions](sql-query-udfs.md)-- [Aggregates](sql-query-aggregate-functions.md)
cosmos-db Sql Query Stringequals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-stringequals.md
- Title: StringEquals in Azure Cosmos DB query language
-description: Learn about how the StringEquals SQL system function in Azure Cosmos DB returns a Boolean indicating whether the first string expression matches the second
---- Previously updated : 05/20/2020----
-# STRINGEQUALS (Azure Cosmos DB)
-
- Returns a Boolean indicating whether the first string expression matches the second.
-
-## Syntax
-
-```sql
-STRINGEQUALS(<str_expr1>, <str_expr2> [, <bool_expr>])
-```
-
-## Arguments
-
-*str_expr1*
- Is the first string expression to compare.
-
-*str_expr2*
- Is the second string expression to compare.
-
-*bool_expr*
- Optional value for ignoring case. When set to true, StringEquals will do a case-insensitive search. When unspecified, this value is false.
-
-## Return types
-
- Returns a Boolean expression.
-
-## Examples
-
- The following example checks if "abc" matches "abc" and if "abc" matches "ABC".
-
-```sql
-SELECT STRINGEQUALS("abc", "abc", false) AS c1, STRINGEQUALS("abc", "ABC", false) AS c2, STRINGEQUALS("abc", "ABC", true) AS c3
-```
-
- Here is the result set.
-
-```json
-[
- {
- "c1": true,
- "c2": false,
- "c3": true
- }
-]
-```
-
-## Remarks
-
-Learn about [how this string system function uses the index](sql-query-string-functions.md).
-
-## Next steps
--- [String functions Azure Cosmos DB](sql-query-string-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Stringtoarray https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-stringtoarray.md
- Title: StringToArray in Azure Cosmos DB query language
-description: Learn about SQL system function StringToArray in Azure Cosmos DB.
---- Previously updated : 03/03/2020---
-# StringToArray (Azure Cosmos DB)
-
- Returns expression translated to an Array. If expression cannot be translated, returns undefined.
-
-## Syntax
-
-```sql
-StringToArray(<str_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is a string expression to be parsed as a JSON Array expression.
-
-## Return types
-
- Returns an array expression or undefined.
-
-## Remarks
- Nested string values must be written with double quotes to be valid JSON. For details on the JSON format, see [json.org](https://json.org/). This system function will not utilize the index.
-
-## Examples
-
- The following example shows how `StringToArray` behaves across different types.
-
- The following are examples with valid input.
-
-```sql
-SELECT
- StringToArray('[]') AS a1,
- StringToArray("[1,2,3]") AS a2,
- StringToArray("[\"str\",2,3]") AS a3,
- StringToArray('[["5","6","7"],["8"],["9"]]') AS a4,
- StringToArray('[1,2,3, "[4,5,6]",[7,8]]') AS a5
-```
-
-Here is the result set.
-
-```json
-[{"a1": [], "a2": [1,2,3], "a3": ["str",2,3], "a4": [["5","6","7"],["8"],["9"]], "a5": [1,2,3,"[4,5,6]",[7,8]]}]
-```
-
-The following is an example of invalid input.
-
- Single quotes within the array are not valid JSON.
-Even though they are valid within a query, they will not parse to valid arrays.
- Strings within the array string must either be escaped "[\\"\\"]" or the surrounding quote must be single '[""]'.
-
-```sql
-SELECT
- StringToArray("['5','6','7']")
-```
-
-Here is the result set.
-
-```json
-[{}]
-```
-
-The following are examples of invalid input.
-
- The expression passed will be parsed as a JSON array; the following do not evaluate to type array and thus return undefined.
-
-```sql
-SELECT
- StringToArray("["),
- StringToArray("1"),
- StringToArray(NaN),
- StringToArray(false),
- StringToArray(undefined)
-```
-
-Here is the result set.
-
-```json
-[{}]
-```
-
-## Next steps
--- [String functions Azure Cosmos DB](sql-query-string-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Stringtoboolean https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-stringtoboolean.md
- Title: StringToBoolean in Azure Cosmos DB query language
-description: Learn about SQL system function StringToBoolean in Azure Cosmos DB.
---- Previously updated : 03/03/2020---
-# StringToBoolean (Azure Cosmos DB)
-
- Returns expression translated to a Boolean. If expression cannot be translated, returns undefined.
-
-## Syntax
-
-```sql
-StringToBoolean(<str_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is a string expression to be parsed as a Boolean expression.
-
-## Return types
-
- Returns a Boolean expression or undefined.
-
-## Examples
-
- The following example shows how `StringToBoolean` behaves across different types.
-
- The following are examples with valid input.
-
-Whitespace is allowed only before or after "true"/"false".
-
-```sql
-SELECT
- StringToBoolean("true") AS b1,
- StringToBoolean(" false") AS b2,
- StringToBoolean("false ") AS b3
-```
-
- Here is the result set.
-
-```json
-[{"b1": true, "b2": false, "b3": false}]
-```
-
-The following are examples with invalid input.
-
- Booleans are case sensitive and must be written with all lowercase characters i.e. "true" and "false".
-
-```sql
-SELECT
- StringToBoolean("TRUE"),
- StringToBoolean("False")
-```
-
-Here is the result set.
-
-```json
-[{}]
-```
-
-The expression passed will be parsed as a Boolean expression; these inputs do not evaluate to type Boolean and thus return undefined.
-
-```sql
-SELECT
- StringToBoolean("null"),
- StringToBoolean(undefined),
- StringToBoolean(NaN),
- StringToBoolean(false),
- StringToBoolean(true)
-```
-
-Here is the result set.
-
-```json
-[{}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
--- [String functions Azure Cosmos DB](sql-query-string-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Stringtonull https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-stringtonull.md
- Title: StringToNull in Azure Cosmos DB query language
-description: Learn about SQL system function StringToNull in Azure Cosmos DB.
---- Previously updated : 03/03/2020---
-# StringToNull (Azure Cosmos DB)
-
- Returns expression translated to null. If expression cannot be translated, returns undefined.
-
-## Syntax
-
-```sql
-StringToNull(<str_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is a string expression to be parsed as a null expression.
-
-## Return types
-
- Returns a null expression or undefined.
-
-## Examples
-
- The following example shows how `StringToNull` behaves across different types.
-
-The following are examples with valid input.
-
- Whitespace is allowed only before or after "null".
-
-```sql
-SELECT
- StringToNull("null") AS n1,
- StringToNull(" null ") AS n2,
- IS_NULL(StringToNull("null ")) AS n3
-```
-
- Here is the result set.
-
-```json
-[{"n1": null, "n2": null, "n3": true}]
-```
-
-The following are examples with invalid input.
-
-Null is case sensitive and must be written with all lowercase characters i.e. "null".
-
-```sql
-SELECT
- StringToNull("NULL"),
- StringToNull("Null")
-```
-
- Here is the result set.
-
-```json
-[{}]
-```
-
-The expression passed will be parsed as a null expression; these inputs do not evaluate to type null and thus return undefined.
-
-```sql
-SELECT
- StringToNull("true"),
- StringToNull(false),
- StringToNull(undefined),
- StringToNull(NaN)
-```
-
- Here is the result set.
-
-```json
-[{}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
--- [String functions Azure Cosmos DB](sql-query-string-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Stringtonumber https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-stringtonumber.md
- Title: StringToNumber in Azure Cosmos DB query language
-description: Learn about SQL system function StringToNumber in Azure Cosmos DB.
---- Previously updated : 03/03/2020---
-# StringToNumber (Azure Cosmos DB)
-
- Returns expression translated to a Number. If expression cannot be translated, returns undefined.
-
-## Syntax
-
-```sql
-StringToNumber(<str_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is a string expression to be parsed as a JSON Number expression. Numbers in JSON must be an integer or a floating point. For details on the JSON format, see [json.org](https://json.org/)
-
-## Return types
-
- Returns a Number expression or undefined.
-
-## Examples
-
- The following example shows how `StringToNumber` behaves across different types.
-
-Whitespace is allowed only before or after the Number.
-
-```sql
-SELECT
- StringToNumber("1.000000") AS num1,
- StringToNumber("3.14") AS num2,
- StringToNumber(" 60 ") AS num3,
- StringToNumber("-1.79769e+308") AS num4
-```
-
- Here is the result set.
-
-```json
-{{"num1": 1, "num2": 3.14, "num3": 60, "num4": -1.79769e+308}}
-```
-
-In JSON, a valid Number must be either an integer or a floating point number.
-
-```sql
-SELECT
- StringToNumber("0xF")
-```
-
- Here is the result set.
-
-```json
-[{}]
-```
-
-The expression passed will be parsed as a Number expression; these inputs do not evaluate to type Number and thus return undefined.
-
-```sql
-SELECT
- StringToNumber("99 54"),
- StringToNumber(undefined),
- StringToNumber("false"),
- StringToNumber(false),
- StringToNumber(" "),
- StringToNumber(NaN)
-```
-
- Here is the result set.
-
-```json
-[{}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
--- [String functions Azure Cosmos DB](sql-query-string-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Stringtoobject https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-stringtoobject.md
- Title: StringToObject in Azure Cosmos DB query language
-description: Learn about SQL system function StringToObject in Azure Cosmos DB.
---- Previously updated : 03/03/2020---
-# StringToObject (Azure Cosmos DB)
-
- Returns expression translated to an Object. If expression cannot be translated, returns undefined.
-
-## Syntax
-
-```sql
-StringToObject(<str_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is a string expression to be parsed as a JSON object expression. Note that nested string values must be written with double quotes to be valid. For details on the JSON format, see [json.org](https://json.org/)
-
-## Return types
-
- Returns an object expression or undefined.
-
-## Examples
-
- The following example shows how `StringToObject` behaves across different types.
-
- The following are examples with valid input.
-
-```sql
-SELECT
- StringToObject("{}") AS obj1,
- StringToObject('{"A":[1,2,3]}') AS obj2,
- StringToObject('{"B":[{"b1":[5,6,7]},{"b2":8},{"b3":9}]}') AS obj3,
- StringToObject("{\"C\":[{\"c1\":[5,6,7]},{\"c2\":8},{\"c3\":9}]}") AS obj4
-```
-
-Here is the result set.
-
-```json
-[{"obj1": {},
- "obj2": {"A": [1,2,3]},
- "obj3": {"B":[{"b1":[5,6,7]},{"b2":8},{"b3":9}]},
- "obj4": {"C":[{"c1":[5,6,7]},{"c2":8},{"c3":9}]}}]
-```
-
- The following are examples with invalid input.
-Even though they are valid within a query, they will not parse to valid objects.
- Strings within the string of object must either be escaped "{\\"a\\":\\"str\\"}" or the surrounding quote must be single
- '{"a": "str"}'.
-
-Single quotes surrounding property names are not valid JSON.
-
-```sql
-SELECT
- StringToObject("{'a':[1,2,3]}")
-```
-
-Here is the result set.
-
-```json
-[{}]
-```
-
-Property names without surrounding quotes are not valid JSON.
-
-```sql
-SELECT
- StringToObject("{a:[1,2,3]}")
-```
-
-Here is the result set.
-
-```json
-[{}]
-```
-
-The following are examples with invalid input.
-
- The expression passed will be parsed as a JSON object; these inputs do not evaluate to type object and thus return undefined.
-
-```sql
-SELECT
- StringToObject("}"),
- StringToObject("{"),
- StringToObject("1"),
- StringToObject(NaN),
- StringToObject(false),
- StringToObject(undefined)
-```
-
- Here is the result set.
-
-```json
-[{}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
--- [String functions Azure Cosmos DB](sql-query-string-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Subquery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-subquery.md
- Title: SQL subqueries for Azure Cosmos DB
-description: Learn about SQL subqueries and their common use cases and different types of subqueries in Azure Cosmos DB
---- Previously updated : 07/30/2021----
-# SQL subquery examples for Azure Cosmos DB
-
-A subquery is a query nested within another query. A subquery is also called an inner query or inner select. The statement that contains a subquery is typically called an outer query.
-
-This article describes SQL subqueries and their common use cases in Azure Cosmos DB. All sample queries in this doc can be run against [a sample nutrition dataset](https://github.com/AzureCosmosDB/labs/blob/master/dotnet/setup/NutritionData.json).
-
-## Types of subqueries
-
-There are two main types of subqueries:
-
-* **Correlated**: A subquery that references values from the outer query. The subquery is evaluated once for each row that the outer query processes.
-* **Non-correlated**: A subquery that's independent of the outer query. It can be run on its own without relying on the outer query.
-
-> [!NOTE]
-> Azure Cosmos DB supports only correlated subqueries.
-
-Subqueries can be further classified based on the number of rows and columns that they return. There are three types:
-* **Table**: Returns multiple rows and multiple columns.
-* **Multi-value**: Returns multiple rows and a single column.
-* **Scalar**: Returns a single row and a single column.
-
-SQL queries in Azure Cosmos DB always return a single column (either a simple value or a complex document). Therefore, only multi-value and scalar subqueries are applicable in Azure Cosmos DB. You can use a multi-value subquery only in the FROM clause as a relational expression. You can use a scalar subquery as a scalar expression in the SELECT or WHERE clause, or as a relational expression in the FROM clause.
-
-## Multi-value subqueries
-
-Multi-value subqueries return a set of documents and are always used within the FROM clause. They're used for:
-
-* Optimizing JOIN expressions.
-* Evaluating expensive expressions once and referencing multiple times.
-
-## Optimize JOIN expressions
-
-Multi-value subqueries can optimize JOIN expressions by pushing predicates after each select-many expression rather than after all cross-joins in the WHERE clause.
-
-Consider the following query:
-
-```sql
-SELECT Count(1) AS Count
-FROM c
-JOIN t IN c.tags
-JOIN n IN c.nutrients
-JOIN s IN c.servings
-WHERE t.name = 'infant formula' AND (n.nutritionValue > 0
-AND n.nutritionValue < 10) AND s.amount > 1
-```
-
-For this query, the index will match any document that has a tag with the name "infant formula," a nutrient item with a value between 0 and 10, and a serving item with an amount greater than 1. The JOIN expression here will perform the cross-product of all items of the tags, nutrients, and servings arrays for each matching document before any filter is applied.
-
-The WHERE clause will then apply the filter predicate on each <c, t, n, s> tuple. For instance, if a matching document had 10 items in each of the three arrays, it will expand to 1 x 10 x 10 x 10 (that is, 1,000) tuples. Using subqueries here can help in filtering out joined array items before joining with the next expression.
-
-This query is equivalent to the preceding one but uses subqueries:
-
-```sql
-SELECT Count(1) AS Count
-FROM c
-JOIN (SELECT VALUE t FROM t IN c.tags WHERE t.name = 'infant formula')
-JOIN (SELECT VALUE n FROM n IN c.nutrients WHERE n.nutritionValue > 0 AND n.nutritionValue < 10)
-JOIN (SELECT VALUE s FROM s IN c.servings WHERE s.amount > 1)
-```
-
-Assume that only one item in the tags array matches the filter, and there are five items for both nutrients and servings arrays. The JOIN expressions will then expand to 1 x 1 x 5 x 5 = 25 items, as opposed to 1,000 items in the first query.
-
-## Evaluate once and reference many times
-
-Subqueries can help optimize queries with expensive expressions such as user-defined functions (UDFs), complex strings, or arithmetic expressions. You can use a subquery along with a JOIN expression to evaluate the expression once but reference it many times.
-
-The following query runs the UDF `GetMaxNutritionValue` twice:
-
-```sql
-SELECT c.id, udf.GetMaxNutritionValue(c.nutrients) AS MaxNutritionValue
-FROM c
-WHERE udf.GetMaxNutritionValue(c.nutrients) > 100
-```
-
-Here's an equivalent query that runs the UDF only once:
-
-```sql
-SELECT TOP 1000 c.id, MaxNutritionValue
-FROM c
-JOIN (SELECT VALUE udf.GetMaxNutritionValue(c.nutrients)) MaxNutritionValue
-WHERE MaxNutritionValue > 100
-```
-
-> [!NOTE]
-> Keep in mind the cross-product behavior of JOIN expressions. If the UDF expression can evaluate to undefined, you should ensure that the JOIN expression always produces a single row by returning an object from the subquery rather than the value directly.
->
-
-Here's a similar example that returns an object rather than a value:
-
-```sql
-SELECT TOP 1000 c.id, m.MaxNutritionValue
-FROM c
-JOIN (SELECT udf.GetMaxNutritionValue(c.nutrients) AS MaxNutritionValue) m
-WHERE m.MaxNutritionValue > 100
-```
-
-The approach is not limited to UDFs. It applies to any potentially expensive expression. For example, you can take the same approach with the mathematical function `avg`:
-
-```sql
-SELECT TOP 1000 c.id, AvgNutritionValue
-FROM c
-JOIN (SELECT VALUE avg(n.nutritionValue) FROM n IN c.nutrients) AvgNutritionValue
-WHERE AvgNutritionValue > 80
-```
-
-## Mimic join with external reference data
-
-You might often need to reference static data that rarely changes, such as units of measurement or country codes. It's better not to duplicate such data for each document. Avoiding this duplication will save on storage and improve write performance by keeping the document size smaller. You can use a subquery to mimic inner-join semantics with a collection of reference data.
-
-For instance, consider this set of reference data:
-
-| **Unit** | **Name** | **Multiplier** | **Base unit** |
-| -- | - | -- | - |
-| ng | Nanogram | 1.00E-09 | Gram |
-| µg | Microgram | 1.00E-06 | Gram |
-| mg | Milligram | 1.00E-03 | Gram |
-| g | Gram | 1.00E+00 | Gram |
-| kg | Kilogram | 1.00E+03 | Gram |
-| Mg | Megagram | 1.00E+06 | Gram |
-| Gg | Gigagram | 1.00E+09 | Gram |
-| nJ | Nanojoule | 1.00E-09 | Joule |
-| µJ | Microjoule | 1.00E-06 | Joule |
-| mJ | Millijoule | 1.00E-03 | Joule |
-| J | Joule | 1.00E+00 | Joule |
-| kJ | Kilojoule | 1.00E+03 | Joule |
-| MJ | Megajoule | 1.00E+06 | Joule |
-| GJ | Gigajoule | 1.00E+09 | Joule |
-| cal | Calorie | 1.00E+00 | calorie |
-| kcal | Calorie | 1.00E+03 | calorie |
-| IU | International units | | |
--
-The following query mimics joining with this data so that you add the name of the unit to the output:
-
-```sql
-SELECT TOP 10 n.id, n.description, n.nutritionValue, n.units, r.name
-FROM food
-JOIN n IN food.nutrients
-JOIN r IN (
- SELECT VALUE [
- {unit: 'ng', name: 'nanogram', multiplier: 0.000000001, baseUnit: 'gram'},
- {unit: 'µg', name: 'microgram', multiplier: 0.000001, baseUnit: 'gram'},
- {unit: 'mg', name: 'milligram', multiplier: 0.001, baseUnit: 'gram'},
- {unit: 'g', name: 'gram', multiplier: 1, baseUnit: 'gram'},
- {unit: 'kg', name: 'kilogram', multiplier: 1000, baseUnit: 'gram'},
- {unit: 'Mg', name: 'megagram', multiplier: 1000000, baseUnit: 'gram'},
- {unit: 'Gg', name: 'gigagram', multiplier: 1000000000, baseUnit: 'gram'},
- {unit: 'nJ', name: 'nanojoule', multiplier: 0.000000001, baseUnit: 'joule'},
- {unit: 'µJ', name: 'microjoule', multiplier: 0.000001, baseUnit: 'joule'},
- {unit: 'mJ', name: 'millijoule', multiplier: 0.001, baseUnit: 'joule'},
- {unit: 'J', name: 'joule', multiplier: 1, baseUnit: 'joule'},
- {unit: 'kJ', name: 'kilojoule', multiplier: 1000, baseUnit: 'joule'},
- {unit: 'MJ', name: 'megajoule', multiplier: 1000000, baseUnit: 'joule'},
- {unit: 'GJ', name: 'gigajoule', multiplier: 1000000000, baseUnit: 'joule'},
- {unit: 'cal', name: 'calorie', multiplier: 1, baseUnit: 'calorie'},
- {unit: 'kcal', name: 'Calorie', multiplier: 1000, baseUnit: 'calorie'},
- {unit: 'IU', name: 'International units'}
- ]
-)
-WHERE n.units = r.unit
-```
-
-## Scalar subqueries
-
-A scalar subquery expression is a subquery that evaluates to a single value. The value of the scalar subquery expression is the value of the projection (SELECT clause) of the subquery. You can use a scalar subquery expression in many places where a scalar expression is valid. For instance, you can use a scalar subquery in any expression in both the SELECT and WHERE clauses.
-
-Using a scalar subquery doesn't always help optimize, though. For example, passing a scalar subquery as an argument to either a system or user-defined function provides no benefit in request unit (RU) consumption or latency.
-
-Scalar subqueries can be further classified as:
-* Simple-expression scalar subqueries
-* Aggregate scalar subqueries
-
-## Simple-expression scalar subqueries
-
-A simple-expression scalar subquery is a correlated subquery that has a SELECT clause that doesn't contain any aggregate expressions. These subqueries provide no optimization benefits because the compiler converts them into one larger simple expression. There's no correlated context between the inner and outer queries.
-
-Here are few examples:
-
-**Example 1**
-
-```sql
-SELECT 1 AS a, 2 AS b
-```
-
-You can rewrite this query, by using a simple-expression scalar subquery, to:
-
-```sql
-SELECT (SELECT VALUE 1) AS a, (SELECT VALUE 2) AS b
-```
-
-Both queries produce this output:
-
-```json
-[
- { "a": 1, "b": 2 }
-]
-```
-
-**Example 2**
-
-```sql
-SELECT TOP 5 Concat('id_', f.id) AS id
-FROM food f
-```
-
-You can rewrite this query, by using a simple-expression scalar subquery, to:
-
-```sql
-SELECT TOP 5 (SELECT VALUE Concat('id_', f.id)) AS id
-FROM food f
-```
-
-Query output:
-
-```json
-[
- { "id": "id_03226" },
- { "id": "id_03227" },
- { "id": "id_03228" },
- { "id": "id_03229" },
- { "id": "id_03230" }
-]
-```
-
-**Example 3**
-
-```sql
-SELECT TOP 5 f.id, Contains(f.description, 'fruit') = true ? f.description : undefined
-FROM food f
-```
-
-You can rewrite this query, by using a simple-expression scalar subquery, to:
-
-```sql
-SELECT TOP 5 f.id, (SELECT f.description WHERE Contains(f.description, 'fruit')).description
-FROM food f
-```
-
-Query output:
-
-```json
-[
- { "id": "03230" },
- { "id": "03238", "description":"Babyfood, dessert, tropical fruit, junior" },
- { "id": "03229" },
- { "id": "03226", "description":"Babyfood, dessert, fruit pudding, orange, strained" },
- { "id": "03227" }
-]
-```
-
-### Aggregate scalar subqueries
-
-An aggregate scalar subquery is a subquery that has an aggregate function in its projection or filter that evaluates to a single value.
-
-**Example 1:**
-
-Here's a subquery with a single aggregate function expression in its projection:
-
-```sql
-SELECT TOP 5
- f.id,
- (SELECT VALUE Count(1) FROM n IN f.nutrients WHERE n.units = 'mg'
-) AS count_mg
-FROM food f
-```
-
-Query output:
-
-```json
-[
- { "id": "03230", "count_mg": 13 },
- { "id": "03238", "count_mg": 14 },
- { "id": "03229", "count_mg": 13 },
- { "id": "03226", "count_mg": 15 },
- { "id": "03227", "count_mg": 19 }
-]
-```
-
-**Example 2**
-
-Here's a subquery with multiple aggregate function expressions:
-
-```sql
-SELECT TOP 5 f.id, (
- SELECT Count(1) AS count, Sum(n.nutritionValue) AS sum
- FROM n IN f.nutrients
- WHERE n.units = 'mg'
-) AS unit_mg
-FROM food f
-```
-
-Query output:
-
-```json
-[
- { "id": "03230","unit_mg": { "count": 13,"sum": 147.072 } },
- { "id": "03238","unit_mg": { "count": 14,"sum": 107.385 } },
- { "id": "03229","unit_mg": { "count": 13,"sum": 141.579 } },
- { "id": "03226","unit_mg": { "count": 15,"sum": 183.91399999999996 } },
- { "id": "03227","unit_mg": { "count": 19,"sum": 94.788999999999987 } }
-]
-```
-
-**Example 3**
-
-Here's a query with an aggregate subquery in both the projection and the filter:
-
-```sql
-SELECT TOP 5
- f.id,
- (SELECT VALUE Count(1) FROM n IN f.nutrients WHERE n.units = 'mg') AS count_mg
-FROM food f
-WHERE (SELECT VALUE Count(1) FROM n IN f.nutrients WHERE n.units = 'mg') > 20
-```
-
-Query output:
-
-```json
-[
- { "id": "03235", "count_mg": 27 },
- { "id": "03246", "count_mg": 21 },
- { "id": "03267", "count_mg": 21 },
- { "id": "03269", "count_mg": 21 },
- { "id": "03274", "count_mg": 21 }
-]
-```
-
-A more optimal way to write this query is to join on the subquery and reference the subquery alias in both the SELECT and WHERE clauses. This query is more efficient because you need to execute the subquery only within the join statement, and not in both the projection and filter.
-
-```sql
-SELECT TOP 5 f.id, count_mg
-FROM food f
-JOIN (SELECT VALUE Count(1) FROM n IN f.nutrients WHERE n.units = 'mg') AS count_mg
-WHERE count_mg > 20
-```
-
-## EXISTS expression
-
-Azure Cosmos DB supports EXISTS expressions. This is an aggregate scalar subquery built into the Azure Cosmos DB SQL API. EXISTS is a Boolean expression that takes a subquery expression and returns true if the subquery returns any rows. Otherwise, it returns false.
-
-Because the Azure Cosmos DB SQL API doesn't differentiate between Boolean expressions and any other scalar expressions, you can use EXISTS in both SELECT and WHERE clauses. This is unlike T-SQL, where a Boolean expression (for example, EXISTS, BETWEEN, and IN) is restricted to the filter.
-
-If the EXISTS subquery returns a single value that's undefined, EXISTS will evaluate to false. For instance, consider the following query that evaluates to false:
-```sql
-SELECT EXISTS (SELECT VALUE undefined)
-```
--
-If the VALUE keyword in the preceding subquery is omitted, the query will evaluate to true:
-```sql
-SELECT EXISTS (SELECT undefined)
-```
-
-The subquery will enclose the list of values in the selected list in an object. If the selected list has no values, the subquery will return the single value `{}`. This value is defined, so EXISTS evaluates to true.
-
-### Example: Rewriting ARRAY_CONTAINS and JOIN as EXISTS
-
-A common use case of ARRAY_CONTAINS is to filter a document by the existence of an item in an array. In this case, we're checking to see if the tags array contains an item named "orange."
-
-```sql
-SELECT TOP 5 f.id, f.tags
-FROM food f
-WHERE ARRAY_CONTAINS(f.tags, {name: 'orange'})
-```
-
-You can rewrite the same query to use EXISTS:
-
-```sql
-SELECT TOP 5 f.id, f.tags
-FROM food f
-WHERE EXISTS(SELECT VALUE t FROM t IN f.tags WHERE t.name = 'orange')
-```
-
-Additionally, ARRAY_CONTAINS can only check if a value is equal to any element within an array. If you need more complex filters on array properties, use JOIN.
-
-Consider the following query that filters based on the units and `nutritionValue` properties in the array:
-
-```sql
-SELECT VALUE c.description
-FROM c
-JOIN n IN c.nutrients
-WHERE n.units= "mg" AND n.nutritionValue > 0
-```
-
-For each of the documents in the collection, a cross-product is performed with its array elements. This JOIN operation makes it possible to filter on properties within the array. However, this query's RU consumption will be significant. For instance, if 1,000 documents had 100 items in each array, it will expand to 1,000 x 100 (that is, 100,000) tuples.
-
-Using EXISTS can help to avoid this expensive cross-product:
-
-```sql
-SELECT VALUE c.description
-FROM c
-WHERE EXISTS(
- SELECT VALUE n
- FROM n IN c.nutrients
- WHERE n.units = "mg" AND n.nutritionValue > 0
-)
-```
-
-In this case, you filter on array elements within the EXISTS subquery. If an array element matches the filter, then you project it and EXISTS evaluates to true.
-
-You can also alias EXISTS and reference it in the projection:
-
-```sql
-SELECT TOP 1 c.description, EXISTS(
- SELECT VALUE n
- FROM n IN c.nutrients
- WHERE n.units = "mg" AND n.nutritionValue > 0) as a
-FROM c
-```
-
-Query output:
-
-```json
-[
- {
- "description": "Babyfood, dessert, fruit pudding, orange, strained",
- "a": true
- }
-]
-```
-
-## ARRAY expression
-
-You can use the ARRAY expression to project the results of a query as an array. You can use this expression only within the SELECT clause of the query.
-
-```sql
-SELECT TOP 1 f.id, ARRAY(SELECT VALUE t.name FROM t in f.tags) AS tagNames
-FROM food f
-```
-
-Query output:
-
-```json
-[
- {
- "id": "03238",
- "tagNames": [
- "babyfood",
- "dessert",
- "tropical fruit",
- "junior"
- ]
- }
-]
-```
-
-As with other subqueries, filters with the ARRAY expression are possible.
-
-```sql
-SELECT TOP 1 c.id, ARRAY(SELECT VALUE t FROM t in c.tags WHERE t.name != 'infant formula') AS tagNames
-FROM c
-```
-
-Query output:
-
-```json
-[
- {
- "id": "03226",
- "tagNames": [
- {
- "name": "babyfood"
- },
- {
- "name": "dessert"
- },
- {
- "name": "fruit pudding"
- },
- {
- "name": "orange"
- },
- {
- "name": "strained"
- }
- ]
- }
-]
-```
-
-Array expressions can also come after the FROM clause in subqueries.
-
-```sql
-SELECT TOP 1 c.id, ARRAY(SELECT VALUE t.name FROM t in c.tags) as tagNames
-FROM c
-JOIN n IN (SELECT VALUE ARRAY(SELECT t FROM t in c.tags WHERE t.name != 'infant formula'))
-```
-
-Query output:
-
-```json
-[
- {
- "id": "03238",
- "tagNames": [
- "babyfood",
- "dessert",
- "tropical fruit",
- "junior"
- ]
- }
-]
-```
-
-## Next steps
--- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)-- [Model document data](../modeling-data.md)
cosmos-db Sql Query Substring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-substring.md
- Title: SUBSTRING in Azure Cosmos DB query language
-description: Learn about SQL system function SUBSTRING in Azure Cosmos DB.
---- Previously updated : 09/13/2019---
-# SUBSTRING (Azure Cosmos DB)
-
- Returns part of a string expression starting at the specified character zero-based position and continues to the specified length, or to the end of the string.
-
-## Syntax
-
-```sql
-SUBSTRING(<str_expr>, <num_expr1>, <num_expr2>)
-```
-
-## Arguments
-
-*str_expr*
- Is a string expression.
-
-*num_expr1*
- Is a numeric expression to denote the start character. A value of 0 is the first character of *str_expr*.
-
-*num_expr2*
- Is a numeric expression to denote the maximum number of characters of *str_expr* to be returned. A value of 0 or less results in empty string.
-
-## Return types
-
- Returns a string expression.
-
-## Examples
-
- The following example returns the substring of "abc" starting at 1 and for a length of 1 character.
-
-```sql
-SELECT SUBSTRING("abc", 1, 1) AS substring
-```
-
- Here is the result set.
-
-```json
-[{"substring": "b"}]
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy) if the starting position is `0`.
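-
-For example, a filter shaped like the following sketch (the container alias `c` and the `code` property are hypothetical) starts the substring at position `0`, so it can take advantage of a range index:
-
-```sql
-SELECT c.id
-FROM c
-WHERE SUBSTRING(c.code, 0, 3) = "abc"
-```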
-
-## Next steps
--- [String functions Azure Cosmos DB](sql-query-string-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query System Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-system-functions.md
- Title: System functions in Azure Cosmos DB query language
-description: Learn about built-in and user defined SQL system functions in Azure Cosmos DB.
---- Previously updated : 02/03/2021---
-# System functions (Azure Cosmos DB)
-
- Cosmos DB provides many built-in SQL functions. The categories of built-in functions are listed below.
-
-|Function group|Description|Operations|
-|--|--|--|
-|[Array functions](sql-query-array-functions.md)|The array functions perform an operation on an array input value and return numeric, Boolean, or array value. | [ARRAY_CONCAT](sql-query-array-concat.md), [ARRAY_CONTAINS](sql-query-array-contains.md), [ARRAY_LENGTH](sql-query-array-length.md), [ARRAY_SLICE](sql-query-array-slice.md) |
-|[Date and Time functions](sql-query-date-time-functions.md)|The date and time functions allow you to get the current UTC date and time in two forms; a numeric timestamp whose value is the Unix epoch in milliseconds or as a string which conforms to the ISO 8601 format. | [GetCurrentDateTime](sql-query-getcurrentdatetime.md), [GetCurrentTimestamp](sql-query-getcurrenttimestamp.md), [GetCurrentTicks](sql-query-getcurrentticks.md) |
-|[Mathematical functions](sql-query-mathematical-functions.md)|The mathematical functions each perform a calculation, usually based on input values that are provided as arguments, and return a numeric value. | [ABS](sql-query-abs.md), [ACOS](sql-query-acos.md), [ASIN](sql-query-asin.md), [ATAN](sql-query-atan.md), [ATN2](sql-query-atn2.md), [CEILING](sql-query-ceiling.md), [COS](sql-query-cos.md), [COT](sql-query-cot.md), [DEGREES](sql-query-degrees.md), [EXP](sql-query-exp.md), [FLOOR](sql-query-floor.md), [LOG](sql-query-log.md), [LOG10](sql-query-log10.md), [PI](sql-query-pi.md), [POWER](sql-query-power.md), [RADIANS](sql-query-radians.md), [RAND](sql-query-rand.md), [ROUND](sql-query-round.md), [SIGN](sql-query-sign.md), [SIN](sql-query-sin.md), [SQRT](sql-query-sqrt.md), [SQUARE](sql-query-square.md), [TAN](sql-query-tan.md), [TRUNC](sql-query-trunc.md) |
-|[Spatial functions](sql-query-spatial-functions.md)|The spatial functions perform an operation on a spatial object input value and return a numeric or Boolean value. | [ST_DISTANCE](sql-query-st-distance.md), [ST_INTERSECTS](sql-query-st-intersects.md), [ST_ISVALID](sql-query-st-isvalid.md), [ST_ISVALIDDETAILED](sql-query-st-isvaliddetailed.md), [ST_WITHIN](sql-query-st-within.md) |
-|[String functions](sql-query-string-functions.md)|The string functions perform an operation on a string input value and return a string, numeric or Boolean value. | [CONCAT](sql-query-concat.md), [CONTAINS](sql-query-contains.md), [ENDSWITH](sql-query-endswith.md), [INDEX_OF](sql-query-index-of.md), [LEFT](sql-query-left.md), [LENGTH](sql-query-length.md), [LOWER](sql-query-lower.md), [LTRIM](sql-query-ltrim.md), [REGEXMATCH](sql-query-regexmatch.md), [REPLACE](sql-query-replace.md), [REPLICATE](sql-query-replicate.md), [REVERSE](sql-query-reverse.md), [RIGHT](sql-query-right.md), [RTRIM](sql-query-rtrim.md), [STARTSWITH](sql-query-startswith.md), [StringToArray](sql-query-stringtoarray.md), [StringToBoolean](sql-query-stringtoboolean.md), [StringToNull](sql-query-stringtonull.md), [StringToNumber](sql-query-stringtonumber.md), [StringToObject](sql-query-stringtoobject.md), [SUBSTRING](sql-query-substring.md), [ToString](sql-query-tostring.md), [TRIM](sql-query-trim.md), [UPPER](sql-query-upper.md) |
-|[Type checking functions](sql-query-type-checking-functions.md)|The type checking functions allow you to check the type of an expression within SQL queries. | [IS_ARRAY](sql-query-is-array.md), [IS_BOOL](sql-query-is-bool.md), [IS_DEFINED](sql-query-is-defined.md), [IS_NULL](sql-query-is-null.md), [IS_NUMBER](sql-query-is-number.md), [IS_OBJECT](sql-query-is-object.md), [IS_PRIMITIVE](sql-query-is-primitive.md), [IS_STRING](sql-query-is-string.md) |
-
-## Built-in versus User Defined Functions (UDFs)
-
-If you're currently using a user-defined function (UDF) for which a built-in function is now available, the corresponding built-in function will be quicker to run and more efficient.
-
-## Built-in versus ANSI SQL functions
-
-The main difference between Cosmos DB functions and ANSI SQL functions is that Cosmos DB functions are designed to work well with schemaless and mixed-schema data. For example, if a property is missing or has a non-numeric value like `undefined`, the item is skipped instead of returning an error.
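-
-For example, here's a minimal sketch against a hypothetical container aliased as `c`, where only some items define a numeric `quantity` property. Items without a numeric `quantity` are simply left out of the result instead of causing an error:
-
-```sql
-SELECT VALUE ABS(c.quantity) -- 'c' and 'quantity' are hypothetical names
-FROM c
-```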
-
-## Next steps
--- [Introduction to Azure Cosmos DB](../introduction.md)-- [Array functions](sql-query-array-functions.md)-- [Date and time functions](sql-query-date-time-functions.md)-- [Mathematical functions](sql-query-mathematical-functions.md)-- [Spatial functions](sql-query-spatial-functions.md)-- [String functions](sql-query-string-functions.md)-- [Type checking functions](sql-query-type-checking-functions.md)-- [User Defined Functions](sql-query-udfs.md)-- [Aggregates](sql-query-aggregate-functions.md)
cosmos-db Sql Query Tan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-tan.md
- Title: TAN in Azure Cosmos DB query language
-description: Learn about SQL system function TAN in Azure Cosmos DB.
---- Previously updated : 03/04/2020---
-# TAN (Azure Cosmos DB)
-
- Returns the tangent of the specified angle, in radians, in the specified expression.
-
-## Syntax
-
-```sql
-TAN (<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example calculates the tangent of PI()/2.
-
-```sql
-SELECT TAN(PI()/2) AS tan
-```
-
- Here is the result set.
-
-```json
-[{"tan": 16331239353195370 }]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
--- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Ternary Coalesce Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-ternary-coalesce-operators.md
- Title: Ternary and coalesce operators in Azure Cosmos DB
-description: Learn about SQL ternary and coalesce operators supported by Azure Cosmos DB.
---- Previously updated : 01/07/2022----
-# Ternary and coalesce operators in Azure Cosmos DB
-
-This article details the ternary and coalesce operators supported by Azure Cosmos DB.
-
-## Understanding ternary and coalesce operators
-
-You can use the Ternary (?) and Coalesce (??) operators to build conditional expressions, as in programming languages like C# and JavaScript.
-
-You can use the ? operator to construct new JSON properties on the fly. For example, the following query classifies grade levels into `elementary` or `other`:
-
-```sql
- SELECT (c.grade < 5)? "elementary": "other" AS gradeLevel
- FROM Families.children[0] c
-```
-
-You can also nest calls to the ? operator, as in the following query:
-
-```sql
- SELECT (c.grade < 5)? "elementary": ((c.grade < 9)? "junior": "high") AS gradeLevel
- FROM Families.children[0] c
-```
-
-As with other query operators, the ? operator excludes items if the referenced properties are missing or the types being compared are different.
-
-Use the ?? operator to efficiently check for a property in an item when querying against semi-structured or mixed-type data. For example, the following query returns `lastName` if present, or `surname` if `lastName` isn't present.
-
-```sql
- SELECT f.lastName ?? f.surname AS familyName
- FROM Families f
-```
-
-## Next steps
--- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)-- [Keywords](sql-query-keywords.md)-- [SELECT clause](sql-query-select.md)
cosmos-db Sql Query Tickstodatetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-tickstodatetime.md
- Title: TicksToDateTime in Azure Cosmos DB query language
-description: Learn about SQL system function TicksToDateTime in Azure Cosmos DB.
---- Previously updated : 08/18/2020----
-# TicksToDateTime (Azure Cosmos DB)
-
-Converts the specified ticks value to a DateTime.
-
-## Syntax
-
-```sql
-TicksToDateTime (<Ticks>)
-```
-
-## Arguments
-
-*Ticks*
-
-A signed numeric value, the current number of 100 nanosecond ticks that have elapsed since the Unix epoch. In other words, it is the number of 100 nanosecond ticks that have elapsed since 00:00:00 Thursday, 1 January 1970.
-
-## Return types
-
-Returns the UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
- For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
-
-## Remarks
-
-TicksToDateTime will return `undefined` if the ticks value specified is invalid.
-
-## Examples
-
-The following example converts the ticks to a DateTime:
-
-```sql
-SELECT TicksToDateTime(15943368134575530) AS DateTime
-```
-
-```json
-[
- {
- "DateTime": "2020-07-09T23:20:13.4575530Z"
- }
-]
-```
-
-## Next steps
--- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Timestamptodatetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-timestamptodatetime.md
- Title: TimestampToDateTime in Azure Cosmos DB query language
-description: Learn about SQL system function TimestampToDateTime in Azure Cosmos DB.
---- Previously updated : 08/18/2020----
-# TimestampToDateTime (Azure Cosmos DB)
-
-Converts the specified timestamp value to a DateTime.
-
-## Syntax
-
-```sql
-TimestampToDateTime (<Timestamp>)
-```
-
-## Arguments
-
-*Timestamp*
-
-A signed numeric value, the current number of milliseconds that have elapsed since the Unix epoch. In other words, the number of milliseconds that have elapsed since 00:00:00 Thursday, 1 January 1970.
-
-## Return types
-
-Returns the UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
- For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
-
-## Remarks
-
-TimestampToDateTime will return `undefined` if the timestamp value specified is invalid.
-
-## Examples
-
-The following example converts the timestamp to a DateTime:
-
-```sql
-SELECT TimestampToDateTime(1594227912345) AS DateTime
-```
-
-```json
-[
- {
- "DateTime": "2020-07-08T17:05:12.3450000Z"
- }
-]
-```
-
-## Next steps
--- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Tostring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-tostring.md
- Title: ToString in Azure Cosmos DB query language
-description: Learn about SQL system function ToString in Azure Cosmos DB.
---- Previously updated : 03/04/2020---
-# ToString (Azure Cosmos DB)
-
- Returns a string representation of scalar expression.
-
-## Syntax
-
-```sql
-ToString(<expr>)
-```
-
-## Arguments
-
-*expr*
- Is any scalar expression.
-
-## Return types
-
- Returns a string expression.
-
-## Examples
-
- The following example shows how `ToString` behaves across different types.
-
-```sql
-SELECT
- ToString(1.0000) AS str1,
- ToString("Hello World") AS str2,
- ToString(NaN) AS str3,
- ToString(Infinity) AS str4,
- ToString(IS_STRING(ToString(undefined))) AS str5,
- ToString(0.1234) AS str6,
- ToString(false) AS str7,
- ToString(undefined) AS str8
-```
-
- Here is the result set.
-
-```json
-[{"str1": "1", "str2": "Hello World", "str3": "NaN", "str4": "Infinity", "str5": "false", "str6": "0.1234", "str7": "false"}]
-```
- Given the following input:
-```json
-{"Products":[{"ProductID":1,"Weight":4,"WeightUnits":"lb"},{"ProductID":2,"Weight":32,"WeightUnits":"kg"},{"ProductID":3,"Weight":400,"WeightUnits":"g"},{"ProductID":4,"Weight":8999,"WeightUnits":"mg"}]}
-```
- The following example shows how `ToString` can be used with other string functions like `CONCAT`.
-
-```sql
-SELECT
-CONCAT(ToString(p.Weight), p.WeightUnits)
-FROM p in c.Products
-```
-
-Here is the result set.
-
-```json
-[{"$1":"4lb" },
-{"$1":"32kg"},
-{"$1":"400g" },
-{"$1":"8999mg" }]
-
-```
-Given the following input.
-```json
-{"id":"08259","description":"Cereals ready-to-eat, KELLOGG, KELLOGG'S CRISPIX","nutrients":[{"id":"305","description":"Caffeine","units":"mg"},{"id":"306","description":"Cholesterol, HDL","nutritionValue":30,"units":"mg"},{"id":"307","description":"Sodium, NA","nutritionValue":612,"units":"mg"},{"id":"308","description":"Protein, ABP","nutritionValue":60,"units":"mg"},{"id":"309","description":"Zinc, ZN","nutritionValue":null,"units":"mg"}]}
-```
-The following example shows how `ToString` can be used with other string functions like `REPLACE`.
-```sql
-SELECT
- n.id AS nutrientID,
- REPLACE(ToString(n.nutritionValue), "6", "9") AS nutritionVal
-FROM food
-JOIN n IN food.nutrients
-```
-Here is the result set:
-
-```json
-[{"nutrientID":"305"},
-{"nutrientID":"306","nutritionVal":"30"},
-{"nutrientID":"307","nutritionVal":"912"},
-{"nutrientID":"308","nutritionVal":"90"},
-{"nutrientID":"309","nutritionVal":"null"}]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
--- [String functions Azure Cosmos DB](sql-query-string-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Trim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-trim.md
- Title: TRIM in Azure Cosmos DB query language
-description: Learn about SQL system function TRIM in Azure Cosmos DB.
---- Previously updated : 09/14/2021---
-# TRIM (Azure Cosmos DB)
-
-Returns a string expression after it removes leading and trailing whitespace or specified characters.
-
-## Syntax
-
-```sql
-TRIM(<str_expr1>[, <str_expr2>])
-```
-
-## Arguments
-
-*str_expr1*
- Is a string expression
-
-*str_expr2*
- Is an optional string expression to be trimmed from str_expr1. If not set, the default is whitespace.
-
-## Return types
-
- Returns a string expression.
-
-## Examples
-
- The following example shows how to use `TRIM` inside a query.
-
-```sql
-SELECT TRIM(" abc") AS t1,
-TRIM(" abc ") AS t2,
-TRIM("abc ") AS t3,
-TRIM("abc") AS t4,
-TRIM("abc", "ab") AS t5,
-TRIM("abc", "abc") AS t6
-```
-
- Here is the result set.
-
-```json
-[
- {
- "t1": "abc",
- "t2": "abc",
- "t3": "abc",
- "t4": "abc",
- "t5": "c",
- "t6": ""
- }
-]
-```
-
-## Remarks
-
-This system function will not utilize the index.
-
-## Next steps
--- [String functions Azure Cosmos DB](sql-query-string-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Trunc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-trunc.md
- Title: TRUNC in Azure Cosmos DB query language
-description: Learn about SQL system function TRUNC in Azure Cosmos DB.
---- Previously updated : 06/22/2021---
-# TRUNC (Azure Cosmos DB)
-
- Returns a numeric value with the fractional part removed (truncated toward zero).
-
-## Syntax
-
-```sql
-TRUNC(<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
- Returns a numeric expression.
-
-## Examples
-
- The following example truncates positive and negative numbers toward zero.
-
-```sql
-SELECT TRUNC(2.4) AS t1, TRUNC(2.6) AS t2, TRUNC(2.5) AS t3, TRUNC(-2.4) AS t4, TRUNC(-2.6) AS t5
-```
-
- Here is the result set.
-
-```json
-[{"t1": 2, "t2": 2, "t3": 2, "t4": -2, "t5": -2}]
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../index-policy.md#includeexclude-strategy).
-
-## Next steps
--- [Mathematical functions Azure Cosmos DB](sql-query-mathematical-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Type Checking Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-type-checking-functions.md
- Title: Type checking functions in Azure Cosmos DB query language
-description: Learn about type checking SQL system functions in Azure Cosmos DB.
---- Previously updated : 05/26/2021----
-# Type checking functions (Azure Cosmos DB)
-
-The type-checking functions let you check the type of an expression within a SQL query. You can use type-checking functions to determine the types of properties within items on the fly, when they're variable or unknown.
-
-## Functions
-
-The following functions support type checking against input values, and each returns a Boolean value. The **index usage** column assumes, where applicable, that you're comparing the type checking functions to another value with an equality filter, as in the sketch after the table.
-
-| System function | Index usage | [Index usage in queries with scalar aggregate functions](../index-overview.md#index-utilization-for-scalar-aggregate-functions) | Remarks |
-| --- | --- | --- | --- |
-| [IS_ARRAY](sql-query-is-array.md) | Full scan | Full scan | |
-| [IS_BOOL](sql-query-is-bool.md) | Index seek | Index seek | |
-| [IS_DEFINED](sql-query-is-defined.md) | Index seek | Index seek | |
-| [IS_NULL](sql-query-is-null.md) | Index seek | Index seek | |
-| [IS_NUMBER](sql-query-is-number.md) | Index seek | Index seek | |
-| [IS_OBJECT](sql-query-is-object.md) | Full scan | Full scan | |
-| [IS_PRIMITIVE](sql-query-is-primitive.md) | Index seek | Index seek | |
-| [IS_STRING](sql-query-is-string.md) | Index seek | Index seek | |
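-
-For example, here's a minimal sketch of that pattern against a hypothetical container aliased as `c` whose items may or may not define an `age` property. Comparing the function's result to `true` with an equality filter is the shape the **index usage** column assumes:
-
-```sql
-SELECT c.id
-FROM c
-WHERE IS_DEFINED(c.age) = true -- 'age' is a hypothetical optional property
-```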
-
-## Next steps
--- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)-- [User Defined Functions](sql-query-udfs.md)-- [Aggregates](sql-query-aggregate-functions.md)
cosmos-db Sql Query Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-udfs.md
- Title: User-defined functions (UDFs) in Azure Cosmos DB
-description: Learn about User-defined functions in Azure Cosmos DB.
---- Previously updated : 04/09/2020-----
-# User-defined functions (UDFs) in Azure Cosmos DB
-
-The SQL API provides support for user-defined functions (UDFs). With scalar UDFs, you can pass in zero or many arguments and return a single argument result. The API checks that each argument is a legal JSON value.
-
-## UDF use cases
-
-The API extends the SQL syntax to support custom application logic using UDFs. You can register UDFs with the SQL API, and reference them in SQL queries. Unlike stored procedures and triggers, UDFs are read-only.
-
-Using UDFs, you can extend Azure Cosmos DB's query language. UDFs are a great way to express complex business logic in a query's projection.
-
-However, we recommend avoiding UDFs when:
-
-- An equivalent [system function](sql-query-system-functions.md) already exists in Azure Cosmos DB. System functions will always use fewer RUs than the equivalent UDF.
-- The UDF is the only filter in the `WHERE` clause of your query. UDFs don't utilize the index, so evaluating the UDF requires loading documents. Combining additional filter predicates that use the index with a UDF in the `WHERE` clause will reduce the number of documents processed by the UDF.
-
-If you must use the same UDF multiple times in a query, you should reference the UDF in a [subquery](sql-query-subquery.md#evaluate-once-and-reference-many-times), allowing you to use a JOIN expression to evaluate the UDF once but reference it many times, as in the sketch below.
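-
-For example, here's a minimal sketch of that pattern, using a hypothetical `udf.GetTax` UDF and a container aliased as `c`. The `JOIN` with a scalar subquery evaluates the UDF once per item, and the alias can then be referenced in both the projection and the filter:
-
-```sql
-SELECT c.id, tax AS computedTax
-FROM c
-JOIN (SELECT VALUE udf.GetTax(c.price)) tax -- 'GetTax' and 'price' are hypothetical
-WHERE tax > 0
-```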
-
-## Examples
-
-The following example registers a UDF under an item container in the Cosmos database. The example creates a UDF whose name is `REGEX_MATCH`. It accepts two JSON string values, `input` and `pattern`, and checks if the first matches the pattern specified in the second using JavaScript's `string.match()` function.
-
-```csharp
- UserDefinedFunction regexMatchUdf = new UserDefinedFunction
- {
- Id = "REGEX_MATCH",
- Body = @"function (input, pattern) {
- return input.match(pattern) !== null;
- };",
- };
-
- UserDefinedFunction createdUdf = client.CreateUserDefinedFunctionAsync(
- UriFactory.CreateDocumentCollectionUri("myDatabase", "families"),
- regexMatchUdf).Result;
-```
-
-Now, use this UDF in a query projection. You must qualify UDFs with the case-sensitive prefix `udf.` when calling them from within queries.
-
-```sql
- SELECT udf.REGEX_MATCH(Families.address.city, ".*eattle")
- FROM Families
-```
-
-The results are:
-
-```json
- [
- {
- "$1": true
- },
- {
- "$1": false
- }
- ]
-```
-
-You can use the UDF qualified with the `udf.` prefix inside a filter, as in the following example:
-
-```sql
- SELECT Families.id, Families.address.city
- FROM Families
- WHERE udf.REGEX_MATCH(Families.address.city, ".*eattle")
-```
-
-The results are:
-
-```json
- [{
- "id": "AndersenFamily",
- "city": "Seattle"
- }]
-```
-
-In essence, UDFs are valid scalar expressions that you can use in both projections and filters.
-
-To expand on the power of UDFs, look at another example with conditional logic:
-
-```csharp
- UserDefinedFunction seaLevelUdf = new UserDefinedFunction()
- {
- Id = "SEALEVEL",
- Body = @"function(city) {
- switch (city) {
- case 'Seattle':
- return 520;
- case 'NY':
- return 410;
- case 'Chicago':
- return 673;
- default:
- return -1;
- }"
- };
-
- UserDefinedFunction createdUdf = await client.CreateUserDefinedFunctionAsync(
- UriFactory.CreateDocumentCollectionUri("myDatabase", "families"),
- seaLevelUdf);
-```
-
-The following example exercises the UDF:
-
-```sql
- SELECT f.address.city, udf.SEALEVEL(f.address.city) AS seaLevel
- FROM Families f
-```
-
-The results are:
-
-```json
- [
- {
- "city": "Seattle",
- "seaLevel": 520
- },
- {
- "city": "NY",
- "seaLevel": 410
- }
- ]
-```
-
-If the properties referred to by the UDF parameters aren't available in the JSON value, the parameter is considered undefined and the UDF invocation is skipped. Similarly, if the result of the UDF is undefined, it's not included in the result.
-
-As the preceding examples show, UDFs integrate the power of JavaScript language with the SQL API. UDFs provide a rich programmable interface to do complex procedural, conditional logic with the help of built-in JavaScript runtime capabilities. The SQL API provides the arguments to the UDFs for each source item at the current WHERE or SELECT clause stage of processing. The result is seamlessly incorporated in the overall execution pipeline. In summary, UDFs are great tools to do complex business logic as part of queries.
-
-## Next steps
--- [Introduction to Azure Cosmos DB](../introduction.md)-- [System functions](sql-query-system-functions.md)-- [Aggregates](sql-query-aggregate-functions.md)
cosmos-db Sql Query Upper https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-upper.md
- Title: UPPER in Azure Cosmos DB query language
-description: Learn about SQL system function UPPER in Azure Cosmos DB.
---- Previously updated : 04/08/2021---
-# UPPER (Azure Cosmos DB)
-
-Returns a string expression after converting lowercase character data to uppercase.
-
-> [!NOTE]
-> This function uses culture-independent (invariant) casing rules when returning the converted string expression.
-
-The UPPER system function doesn't utilize the index. If you plan to do frequent case-insensitive comparisons, the UPPER system function may consume a significant number of RUs. If so, instead of using the UPPER system function to normalize data each time for comparisons, you can normalize the casing upon insertion. Then a query such as `SELECT * FROM c WHERE UPPER(c.name) = 'USERNAME'` simply becomes `SELECT * FROM c WHERE c.name = 'USERNAME'`.
-
-## Syntax
-
-```sql
-UPPER(<str_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is a string expression.
-
-## Return types
-
-Returns a string expression.
-
-## Examples
-
-The following example shows how to use `UPPER` in a query
-
-```sql
-SELECT UPPER("Abc") AS upper
-```
-
-Here's the result set.
-
-```json
-[{"upper": "ABC"}]
-```
-
-## Remarks
-
-This system function won't [use indexes](../index-overview.md#index-usage).
-
-## Next steps
--- [String functions Azure Cosmos DB](sql-query-string-functions.md)-- [System functions Azure Cosmos DB](sql-query-system-functions.md)-- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Sql Query Where https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-where.md
- Title: WHERE clause in Azure Cosmos DB
-description: Learn about SQL WHERE clause for Azure Cosmos DB
---- Previously updated : 03/06/2020----
-# WHERE clause in Azure Cosmos DB
-
-The optional WHERE clause (`WHERE <filter_condition>`) specifies condition(s) that the source JSON items must satisfy for the query to include them in results. A JSON item must evaluate the specified conditions to `true` to be considered for the result. The index layer uses the WHERE clause to determine the smallest subset of source items that can be part of the result.
-
-## Syntax
-
-```sql
-WHERE <filter_condition>
-<filter_condition> ::= <scalar_expression>
-
-```
-
-## Arguments
--- `<filter_condition>`
-
- Specifies the condition to be met for the documents to be returned.
-
-- `<scalar_expression>`
-
- Expression representing the value to be computed. See [Scalar expressions](sql-query-scalar-expressions.md) for details.
-
-## Remarks
-
- For a document to be returned, the expression specified as the filter condition must evaluate to `true`. Only the Boolean value `true` satisfies the condition; any other value (undefined, `null`, `false`, a number, an array, or an object) doesn't.
-
- If you include your partition key in the `WHERE` clause as part of an equality filter, your query will automatically filter to only the relevant partitions.
-
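-For example, here's a minimal sketch that assumes the `Families` container is partitioned on `/lastName` (a hypothetical partition key choice). The equality filter on `lastName` lets the query be routed to just the matching partition:
-
-```sql
-    SELECT f.id
-    FROM Families f
-    WHERE f.lastName = "Andersen" -- assumes /lastName is the partition key (hypothetical)
-```
-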
-## Examples
-
-The following query requests items that contain an `id` property whose value is `AndersenFamily`. It excludes any item that does not have an `id` property or whose value doesn't match `AndersenFamily`.
-
-```sql
- SELECT f.address
- FROM Families f
- WHERE f.id = "AndersenFamily"
-```
-
-The results are:
-
-```json
- [{
- "address": {
- "state": "WA",
- "county": "King",
- "city": "Seattle"
- }
- }]
-```
-
-### Scalar expressions in the WHERE clause
-
-The previous example showed a simple equality query. The SQL API also supports various [scalar expressions](sql-query-scalar-expressions.md). The most commonly used are binary and unary expressions. Property references from the source JSON object are also valid expressions.
-
-You can use the following supported binary operators:
-
-|**Operator type** | **Values** |
-|---|---|
-|Arithmetic | +,-,*,/,% |
-|Bitwise | \|, &, ^, <<, >>, >>> (zero-fill right shift) |
-|Logical | AND, OR, NOT |
-|Comparison | =, !=, &lt;, &gt;, &lt;=, &gt;=, <> |
-|String | \|\| (concatenate) |
-
-The following queries use binary operators:
-
-```sql
- SELECT *
- FROM Families.children[0] c
- WHERE c.grade % 2 = 1 -- matching grades == 5, 1
-
- SELECT *
- FROM Families.children[0] c
- WHERE c.grade ^ 4 = 1 -- matching grades == 5
-
- SELECT *
- FROM Families.children[0] c
- WHERE c.grade >= 5 -- matching grades == 5
-```
-
-You can also use the unary operators +,-, ~, and NOT in queries, as shown in the following examples:
-
-```sql
- SELECT *
- FROM Families.children[0] c
- WHERE NOT(c.grade = 5) -- matching grades == 1
-
- SELECT *
- FROM Families.children[0] c
- WHERE (-c.grade = -5) -- matching grades == 5
-```
-
-You can also use property references in queries. For example, `SELECT * FROM Families f WHERE f.isRegistered` returns the JSON item containing the property `isRegistered` with value equal to `true`. Any other value, such as `false`, `null`, `Undefined`, `<number>`, `<string>`, `<object>`, or `<array>`, excludes the item from the result. Additionally, you can use the `IS_DEFINED` type checking function to query based on the presence or absence of a given JSON property. For instance, `SELECT * FROM Families f WHERE NOT IS_DEFINED(f.isRegistered)` returns any JSON item that does not have a value for `isRegistered`.
-
-## Next steps
--- [Getting started](sql-query-getting-started.md)-- [IN keyword](sql-query-keywords.md#in)-- [FROM clause](sql-query-from.md)
cosmos-db Sql Query Working With Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-working-with-json.md
- Title: Working with JSON in Azure Cosmos DB
-description: Learn about to query and access nested JSON properties and use special characters in Azure Cosmos DB
---- Previously updated : 09/19/2020----
-# Working with JSON in Azure Cosmos DB
-
-In Azure Cosmos DB's SQL (Core) API, items are stored as JSON. The type system and expressions are restricted to deal only with JSON types. For more information, see the [JSON specification](https://www.json.org/).
-
-We'll summarize some important aspects of working with JSON:
-- JSON objects always begin with a `{` left brace and end with a `}` right brace
-- You can have JSON properties [nested](#nested-properties) within one another
-- JSON property values can be arrays
-- JSON property names are case sensitive
-- A JSON property name can be any string value (including spaces or characters that aren't letters)
-
-## Nested properties
-
-You can access nested JSON using a dot accessor. You can use nested JSON properties in your queries the same way that you can use any other properties.
-
-Here's a document with nested JSON:
-
-```JSON
-{
- "id": "AndersenFamily",
- "lastName": "Andersen",
- "address": {
- "state": "WA",
- "county": "King",
- "city": "Seattle"
- },
- "creationDate": 1431620472,
- "isRegistered": true
-}
-```
-
-In this case, the `state`, `county`, and `city` properties are all nested within the `address` property.
-
-The following example projects two nested properties: `f.address.state` and `f.address.city`.
-
-```sql
- SELECT f.address.state, f.address.city
- FROM Families f
- WHERE f.id = "AndersenFamily"
-```
-
-The results are:
-
-```json
- [{
- "state": "WA",
- "city": "Seattle"
- }]
-```
-
-## Working with arrays
-
-In addition to nested properties, JSON also supports arrays.
-
-Here's an example document with an array:
-
-```json
-{
- "id": "WakefieldFamily",
- "children": [
- {
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female",
- "grade": 1,
- },
- {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8
- }
-  ]
-}
-```
-
-When working with arrays, you can access a specific element within the array by referencing its position:
-
-```sql
-SELECT *
-FROM Families f
-WHERE f.children[0].givenName = "Jesse"
-```
-
-In most cases, however, you'll use a [subquery](sql-query-subquery.md) or [self-join](sql-query-join.md) when working with arrays.
-
-For example, here's a document that shows the daily balance of a customer's bank account.
-
-```json
-{
- "id": "Contoso-Checking-Account-2020",
- "balance": [
- {
- "checkingAccount": 1000,
- "savingsAccount": 5000
- },
- {
- "checkingAccount": 100,
- "savingsAccount": 5000
- },
- {
- "checkingAccount": -10,
- "savingsAccount": 5000
- },
- {
- "checkingAccount": 5000,
- "savingsAccount": 5000
- }
-
- ]
-}
-```
-
-If you wanted to run a query that showed all the customers that had a negative balance at some point, you could use [EXISTS](sql-query-subquery.md#exists-expression) with a subquery:
-
-```sql
-SELECT c.id
-FROM c
-WHERE EXISTS(
- SELECT VALUE n
- FROM n IN c.balance
- WHERE n.checkingAccount < 0
-)
-```
-
-## Difference between null and undefined
-
-If a property is not defined in an item, then its value is `undefined`. A property with the value `null` must be explicitly defined and assigned a `null` value.
-
-For example, consider this sample item:
-
-```json
-{
- "id": "AndersenFamily",
- "lastName": "Andersen",
- "address": {
- "state": "WA",
- "county": "King",
- "city": "Seattle"
- },
- "creationDate": null
-}
-```
-
-In this example, the property `isRegistered` has a value of `undefined` because it is omitted from the item. The property `creationDate` has a `null` value.
-
-Azure Cosmos DB supports two helpful type checking system functions for `null` and `undefined` properties:
-
-* [IS_NULL](sql-query-is-null.md) - checks if a property value is `null`
-* [IS_DEFINED](sql-query-is-defined.md) - checks if a property value is defined
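-
-For example, here's a minimal sketch against the sample item above, assuming it's stored in a container aliased as `c`. `IS_NULL(c.creationDate)` evaluates to `true`, while `IS_DEFINED(c.isRegistered)` evaluates to `false`:
-
-```sql
-SELECT
-    IS_NULL(c.creationDate) AS isCreationDateNull,     -- true for the sample item
-    IS_DEFINED(c.isRegistered) AS isRegisteredDefined  -- false, the property is omitted
-FROM c
-WHERE c.id = "AndersenFamily"
-```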
-
-You can learn about [supported operators](sql-query-equality-comparison-operators.md) and their behavior for `null` and `undefined` values.
-
-## Reserved keywords and special characters in JSON
-
-You can access properties using the quoted property operator `[]`. For example, `SELECT c.grade` and `SELECT c["grade"]` are equivalent. This syntax is useful to escape a property that contains spaces, special characters, or has the same name as a SQL keyword or reserved word.
-
-For example, here's a document with a property named `order` and a property `price($)` that contains special characters:
-
-```json
-{
- "id": "AndersenFamily",
- "order": {
- "orderId": "12345",
- "productId": "A17849",
- "price($)": 59.33
- },
- "creationDate": 1431620472,
- "isRegistered": true
-}
-```
-
-If you run a query that includes the `order` property or the `price($)` property, you'll receive a syntax error.
-
-```sql
-SELECT * FROM c where c.order.orderId = "12345"
-```
-
-```sql
-SELECT * FROM c where c.order.price($) > 50
-```
-
-The result is:
-
-`
-Syntax error, incorrect syntax near 'order'
-`
-
-You should rewrite these queries as follows:
-
-```sql
-SELECT * FROM c WHERE c["order"].orderId = "12345"
-```
-
-```sql
-SELECT * FROM c WHERE c["order"]["price($)"] > 50
-```
-
-## JSON expressions
-
-Projection also supports JSON expressions, as shown in the following example:
-
-```sql
- SELECT { "state": f.address.state, "city": f.address.city, "name": f.id }
- FROM Families f
- WHERE f.id = "AndersenFamily"
-```
-
-The results are:
-
-```json
- [{
- "$1": {
- "state": "WA",
- "city": "Seattle",
- "name": "AndersenFamily"
- }
- }]
-```
-
-In the preceding example, the `SELECT` clause needs to create a JSON object, and since the sample provides no key, the clause uses the implicit argument variable name `$1`. The following query returns two implicit argument variables: `$1` and `$2`.
-
-```sql
- SELECT { "state": f.address.state, "city": f.address.city },
- { "name": f.id }
- FROM Families f
- WHERE f.id = "AndersenFamily"
-```
-
-The results are:
-
-```json
- [{
- "$1": {
- "state": "WA",
- "city": "Seattle"
- },
- "$2": {
- "name": "AndersenFamily"
- }
- }]
-```
-
-## Aliasing
-
-You can explicitly alias values in queries. If a query has two properties with the same name, use aliasing to rename one or both of the properties so they're disambiguated in the projected result.
-
-### Examples
-
-The `AS` keyword used for aliasing is optional, as shown in the following example when projecting the second value as `NameInfo`:
-
-```sql
- SELECT
- { "state": f.address.state, "city": f.address.city } AS AddressInfo,
- { "name": f.id } NameInfo
- FROM Families f
- WHERE f.id = "AndersenFamily"
-```
-
-The results are:
-
-```json
- [{
- "AddressInfo": {
- "state": "WA",
- "city": "Seattle"
- },
- "NameInfo": {
- "name": "AndersenFamily"
- }
- }]
-```
-
-### Aliasing with reserved keywords or special characters
-
-You can't use aliasing to project a value as a property name with a space, special character, or reserved word. If you wanted to change a value's projection to, for example, have a property name with a space, you could use a [JSON expression](#json-expressions).
-
-Here's an example:
-
-```sql
- SELECT
- {"JSON expression with a space": { "state": f.address.state, "city": f.address.city }},
- {"JSON expression with a special character!": { "name": f.id }}
- FROM Families f
- WHERE f.id = "AndersenFamily"
-```
-
-## Next steps
--- [Getting started](sql-query-getting-started.md)-- [SELECT clause](sql-query-select.md)-- [WHERE clause](sql-query-where.md)
cosmos-db Sql Sdk Connection Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-sdk-connection-modes.md
- Title: Azure Cosmos DB SQL SDK connectivity modes
-description: Learn about the different connectivity modes available on the Azure Cosmos DB SQL SDKs.
---- Previously updated : 04/28/2022-----
-# Azure Cosmos DB SQL SDK connectivity modes
-
-How a client connects to Azure Cosmos DB has important performance implications, especially for observed client-side latency. Azure Cosmos DB offers a simple, open RESTful programming model over HTTPS called gateway mode. Additionally, it offers an efficient TCP protocol called direct mode, which is also RESTful in its communication model and uses TLS for initial authentication and to encrypt traffic.
-
-## Available connectivity modes
-
-The two available connectivity modes are:
-
- * Gateway mode
-
- Gateway mode is supported on all SDK platforms. If your application runs within a corporate network with strict firewall restrictions, gateway mode is the best choice because it uses the standard HTTPS port and a single DNS endpoint. The performance tradeoff, however, is that gateway mode involves an additional network hop every time data is read from or written to Azure Cosmos DB. We also recommend gateway connection mode when you run applications in environments that have a limited number of socket connections.
-
- When you use the SDK in Azure Functions, particularly in the [Consumption plan](../../azure-functions/consumption-plan.md), be aware of the current [limits on connections](../../azure-functions/manage-connections.md).
-
- * Direct mode
-
- Direct mode supports connectivity through TCP protocol, using TLS for initial authentication and encrypting traffic, and offers better performance because there are fewer network hops. The application connects directly to the backend replicas. Direct mode is currently only supported on .NET and Java SDK platforms.
-
-
-These connectivity modes essentially condition the route that data plane requests - document reads and writes - take from your client machine to partitions in the Azure Cosmos DB back-end. Direct mode is the preferred option for best performance - it allows your client to open TCP connections directly to partitions in the Azure Cosmos DB back-end and send requests *direct*ly with no intermediary. By contrast, in Gateway mode, requests made by your client are routed to a so-called "Gateway" server in the Azure Cosmos DB front end, which in turn fans out your requests to the appropriate partition(s) in the Azure Cosmos DB back-end.
-
-## Service port ranges
-
-When you use direct mode, in addition to the gateway ports, you need to ensure the port range between 10000 and 20000 is open because Azure Cosmos DB uses dynamic TCP ports. When using direct mode on [private endpoints](../how-to-configure-private-endpoints.md), the full range of TCP ports (0 through 65535) should be open. If these ports aren't open and you try to use the TCP protocol, you might receive a 503 Service Unavailable error.
-
-The following table shows a summary of the connectivity modes available for various APIs and the service ports used for each API:
-
-|Connection mode |Supported protocol |Supported SDKs |API/Service port |
-|---|---|---|---|
-|Gateway | HTTPS | All SDKs | SQL (443), MongoDB (10250, 10255, 10256), Table (443), Cassandra (10350), Graph (443) <br> The port 10250 maps to a default Azure Cosmos DB API for MongoDB instance without geo-replication. Whereas the ports 10255 and 10256 map to the instance that has geo-replication. |
-|Direct | TCP (Encrypted via TLS) | .NET SDK Java SDK | When using public/service endpoints: ports in the 10000 through 20000 range<br>When using private endpoints: ports in the 0 through 65535 range |
-
-## <a id="direct-mode"></a> Direct mode connection architecture
-
-As detailed in the [introduction](#available-connectivity-modes), Direct mode clients will directly connect to the backend nodes through TCP protocol. Each backend node represents a replica in a [replica set](../partitioning-overview.md#replica-sets) belonging to a [physical partition](../partitioning-overview.md#physical-partitions).
-
-### Routing
-
-When an Azure Cosmos DB SDK on Direct mode is performing an operation, it needs to resolve which backend replica to connect to. The first step is knowing which physical partition the operation should go to. For that, the SDK obtains the container information, which includes the [partition key definition](../partitioning-overview.md#choose-partitionkey), from a Gateway node; this request is considered [metadata](../concepts-limits.md#metadata-request-limits). The SDK also needs the routing information that contains the replicas' TCP addresses, which is also available from Gateway nodes. Once the SDK obtains the routing information, it can proceed to open the TCP connections to the replicas belonging to the target physical partition and execute the operations.
-
-Each replica set contains one primary replica and three secondaries. Write operations are always routed to primary replica nodes while read operations can be served from primary or secondary nodes.
--
-Because the container and routing information don't change often, it's cached locally on the SDKs so subsequent operations can benefit from this information. The TCP connections already established are also reused across operations. Unless otherwise configured through the SDKs options, connections are permanently maintained during the lifetime of the SDK instance.
-
-As with any distributed architecture, the machines holding replicas might undergo upgrades or maintenance. The service will ensure the replica set maintains consistency but any replica movement would cause existing TCP addresses to change. In these cases, the SDKs need to refresh the routing information and re-connect to the new addresses through new Gateway requests. These events should not affect the overall P99 SLA.
-
-### Volume of connections
-
-Each physical partition has a replica set of four replicas. To provide the best possible performance, SDKs end up opening connections to all replicas for workloads that mix write and read operations. Concurrent operations are load balanced across existing connections to take advantage of the throughput each replica provides.
-
-There are two factors that dictate the number of TCP connections the SDK will open:
-
-* Number of physical partitions
-
- In a steady state, the SDK will have one connection per replica per physical partition. The larger the number of physical partitions in a container, the larger the number of open connections will be. As operations are routed across different partitions, connections are established on demand. The average number of connections would then be the number of physical partitions times four.
-
-* Volume of concurrent requests
-
- Each established connection can serve a configurable number of concurrent operations. If the volume of concurrent operations exceeds this threshold, new connections will be open to serve them, and it's possible that for a physical partition, the number of open connections exceeds the steady state number. This behavior is expected for workloads that might have spikes in their operational volume. For the .NET SDK this configuration is set by [CosmosClientOptions.MaxRequestsPerTcpConnection](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.maxrequestspertcpconnection), and for the Java SDK you can customize using [DirectConnectionConfig.setMaxRequestsPerConnection](/java/api/com.azure.cosmos.directconnectionconfig.setmaxrequestsperconnection).
-
-By default, connections are permanently maintained to benefit the performance of future operations (opening a connection has computational overhead). There might be some scenarios where you might want to close connections that are unused for some time understanding that this might affect future operations slightly. For the .NET SDK this configuration is set by [CosmosClientOptions.IdleTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout), and for the Java SDK you can customize using [DirectConnectionConfig.setIdleConnectionTimeout](/java/api/com.azure.cosmos.directconnectionconfig.setidleconnectiontimeout). It isn't recommended to set these configurations to low values as it might cause connections to be frequently closed and affect overall performance.
-
-### Language specific implementation details
-
-For further implementation details regarding a language see:
-
-* [.NET SDK implementation information](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/docs/SdkDesign.md)
-* [Java SDK direct mode information](performance-tips-java-sdk-v4-sql.md#direct-connection)
-
-## Next steps
-
-For specific SDK platform performance optimizations:
-
-* [.NET V2 SDK performance tips](performance-tips.md)
-
-* [.NET V3 SDK performance tips](performance-tips-dotnet-sdk-v3-sql.md)
-
-* [Java V4 SDK performance tips](performance-tips-java-sdk-v4-sql.md)
-
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/stored-procedures-triggers-udfs.md
- Title: Work with stored procedures, triggers, and UDFs in Azure Cosmos DB
-description: This article introduces the concepts such as stored procedures, triggers, and user-defined functions in Azure Cosmos DB.
---- Previously updated : 08/26/2021----
-# Stored procedures, triggers, and user-defined functions
-
-Azure Cosmos DB provides language-integrated, transactional execution of JavaScript. When using the SQL API in Azure Cosmos DB, you can write **stored procedures**, **triggers**, and **user-defined functions (UDFs)** in the JavaScript language. You can write your logic in JavaScript that executes inside the database engine. You can create and execute triggers, stored procedures, and UDFs by using the [Azure portal](https://portal.azure.com/), the [JavaScript language integrated query API in Azure Cosmos DB](javascript-query-api.md), or the [Cosmos DB SQL API client SDKs](how-to-use-stored-procedures-triggers-udfs.md).
-
-## Benefits of using server-side programming
-
-Writing stored procedures, triggers, and user-defined functions (UDFs) in JavaScript allows you to build rich applications, with the following advantages:
-
-* **Procedural logic:** JavaScript is a high-level programming language that provides a rich and familiar interface to express business logic. You can perform a sequence of complex operations on the data.
-
-* **Atomic transactions:** Azure Cosmos DB database operations that are performed within a single stored procedure or a trigger are atomic. This atomic functionality lets an application combine related operations into a single batch, so that either all of the operations succeed or none of them succeed.
-
-* **Performance:** The JSON data is intrinsically mapped to the JavaScript language type system. This mapping allows for a number of optimizations like lazy materialization of JSON documents in the buffer pool and making them available on-demand to the executing code. There are other performance benefits associated with shipping business logic to the database, which include:
-
- * *Batching:* You can group operations like inserts and submit them in bulk. The network traffic latency costs and the store overhead to create separate transactions are reduced significantly.
-
- * *Pre-compilation:* Stored procedures, triggers, and UDFs are implicitly pre-compiled to the byte code format in order to avoid compilation cost at the time of each script invocation. Due to pre-compilation, the invocation of stored procedures is fast and has a low footprint.
-
-   * *Sequencing:* Sometimes operations need a triggering mechanism that may perform one or more additional updates to the data. In addition to atomicity, there are also performance benefits when executing on the server side.
-
-* **Encapsulation:** Stored procedures can be used to group logic in one place. Encapsulation adds an abstraction layer on top of the data, which enables you to evolve your applications independently from the data. This layer of abstraction is helpful when the data is schema-less and you don't have to manage adding additional logic directly into your application. The abstraction lets you keep the data secure by streamlining the access from the scripts.
-
-> [!TIP]
-> Stored procedures are best suited for operations that are write-heavy and require a transaction across a partition key value. When deciding whether to use stored procedures, optimize around encapsulating the maximum amount of writes possible. Generally speaking, stored procedures are not the most efficient means for doing large numbers of read or query operations, so using stored procedures to batch large numbers of reads to return to the client will not yield the desired benefit. For best performance, these read-heavy operations should be done on the client-side, using the Cosmos SDK.
-
-## Transactions
-
-A transaction in a typical database can be defined as a sequence of operations performed as a single logical unit of work. Each transaction provides **ACID property guarantees**. ACID is a well-known acronym that stands for: **A**tomicity, **C**onsistency, **I**solation, and **D**urability.
-
-* Atomicity guarantees that all the operations done inside a transaction are treated as a single unit, and either all of them are committed or none of them are.
-
-* Consistency makes sure that the data is always in a valid state across transactions.
-
-* Isolation guarantees that no two transactions interfere with each other; many commercial systems provide multiple isolation levels that can be used based on the application needs.
-
-* Durability ensures that any change that is committed in a database will always be present.
-
-In Azure Cosmos DB, JavaScript runtime is hosted inside the database engine. Hence, requests made within the stored procedures and the triggers execute in the same scope as the database session. This feature enables Azure Cosmos DB to guarantee ACID properties for all operations that are part of a stored procedure or a trigger. For examples, see [how to implement transactions](how-to-write-stored-procedures-triggers-udfs.md#transactions) article.
-
-### Scope of a transaction
-
-Stored procedures are associated with an Azure Cosmos container and stored procedure execution is scoped to a logical partition key. Stored procedures must include a logical partition key value during execution that defines the logical partition for the scope of the transaction. For more information, see [Azure Cosmos DB partitioning](../partitioning-overview.md) article.
-
-### Commit and rollback
-
-Transactions are natively integrated into the Azure Cosmos DB JavaScript programming model. Within a JavaScript function, all the operations are automatically wrapped under a single transaction. If the JavaScript logic in a stored procedure completes without any exceptions, all the operations within the transaction are committed to the database. Statements like `BEGIN TRANSACTION` and `COMMIT TRANSACTION` (familiar to relational databases) are implicit in Azure Cosmos DB. If there are any exceptions from the script, the Azure Cosmos DB JavaScript runtime will roll back the entire transaction. As such, throwing an exception is effectively equivalent to a `ROLLBACK TRANSACTION` in Azure Cosmos DB.
-
-### Data consistency
-
-Stored procedures and triggers are always executed on the primary replica of an Azure Cosmos container. This feature ensures that reads from stored procedures offer [strong consistency](../consistency-levels.md). Queries using user-defined functions can be executed on the primary or any secondary replica. Stored procedures and triggers are intended to support transactional writes; read-only logic is best implemented as application-side logic and queries using the [Azure Cosmos DB SQL API SDKs](sql-api-dotnet-samples.md), which will help you saturate the database throughput.
-
-> [!TIP]
-> The queries executed within a stored procedure or trigger may not see changes to items made by the same script transaction. This statement applies both to SQL queries, such as `getContext().getCollection().queryDocuments()`, and to integrated language queries, such as `getContext().getCollection().filter()`.
-
-## Bounded execution
-
-All Azure Cosmos DB operations must complete within the specified timeout duration. Stored procedures have a timeout limit of 5 seconds. This constraint applies to JavaScript functions - stored procedures, triggers, and user-defined functions. If an operation does not complete within that time limit, the transaction is rolled back.
-
-You can either ensure that your JavaScript functions finish within the time limit or implement a continuation-based model to batch/resume execution. In order to simplify development of stored procedures and triggers to handle time limits, all functions under the Azure Cosmos container (for example, create, read, update, and delete of items) return a boolean value that represents whether that operation will complete. If this value is false, it is an indication that the procedure must wrap up execution because the script is consuming more time or provisioned throughput than the configured value. Operations queued prior to the first unaccepted store operation are guaranteed to complete if the stored procedure completes in time and does not queue any more requests. Thus, operations should be queued one at a time by using JavaScript's callback convention to manage the script's control flow. Because scripts are executed in a server-side environment, they are strictly governed. Scripts that repeatedly violate execution boundaries may be marked inactive and can't be executed, and they should be recreated to honor the execution boundaries.
-
-JavaScript functions are also subject to [provisioned throughput capacity](../request-units.md). JavaScript functions could potentially end up using a large number of request units within a short time and may be rate-limited if the provisioned throughput capacity limit is reached. It is important to note that scripts consume additional throughput in addition to the throughput spent executing database operations, although these database operations are slightly less expensive than executing the same operations from the client.
-
-## Triggers
-
-Azure Cosmos DB supports two types of triggers:
-
-### Pre-triggers
-
-Azure Cosmos DB provides triggers that can be invoked by performing an operation on an Azure Cosmos item. For example, you can specify a pre-trigger when you are creating an item. In this case, the pre-trigger will run before the item is created. Pre-triggers cannot have any input parameters. If necessary, the request object can be used to update the document body from the original request. When triggers are registered, users can specify the operations that they can run with. If a trigger was created with `TriggerOperation.Create`, using the trigger in a replace operation will not be permitted. For examples, see [How to write triggers](how-to-write-stored-procedures-triggers-udfs.md#triggers) article.
-
-### Post-triggers
-
-Similar to pre-triggers, post-triggers are also associated with an operation on an Azure Cosmos item and they don't require any input parameters. They run *after* the operation has completed and have access to the response message that is sent to the client. For examples, see [How to write triggers](how-to-write-stored-procedures-triggers-udfs.md#triggers) article.
-
-> [!NOTE]
-> Registered triggers don't run automatically when their corresponding operations (create / delete / replace / update) happen. They have to be explicitly called when executing these operations. To learn more, see [how to run triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) article.
-
-## <a id="udfs"></a>User-defined functions
-
-[User-defined functions](sql-query-udfs.md) (UDFs) are used to extend the SQL API query language syntax and implement custom business logic easily. They can be called only within queries. UDFs do not have access to the context object and are meant to be used as compute only JavaScript. Therefore, UDFs can be run on secondary replicas. For examples, see [How to write user-defined functions](how-to-write-stored-procedures-triggers-udfs.md#udfs) article.
-
-## <a id="jsqueryapi"></a>JavaScript language-integrated query API
-
-In addition to issuing queries using SQL API query syntax, the [server-side SDK](https://azure.github.io/azure-cosmosdb-js-server) allows you to perform queries by using a JavaScript interface without any knowledge of SQL. The JavaScript query API allows you to programmatically build queries by passing predicate functions into sequence of function calls. Queries are parsed by the JavaScript runtime and are executed efficiently within Azure Cosmos DB. To learn about JavaScript query API support, see [Working with JavaScript language integrated query API](javascript-query-api.md) article. For examples, see [How to write stored procedures and triggers using JavaScript Query API](how-to-write-javascript-query-api.md) article.
-
-## Next steps
-
-Learn how to write and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB with the following articles:
-
-* [How to write stored procedures, triggers, and user-defined functions](how-to-write-stored-procedures-triggers-udfs.md)
-
-* [How to use stored procedures, triggers, and user-defined functions](how-to-use-stored-procedures-triggers-udfs.md)
-
-* [Working with JavaScript language integrated query API](javascript-query-api.md)
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Synthetic Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/synthetic-partition-keys.md
- Title: Create a synthetic partition key in Azure Cosmos DB
-description: Learn how to use synthetic partition keys in your Azure Cosmos containers to distribute the data and workload evenly across the partition keys
--- Previously updated : 08/26/2021------
-# Create a synthetic partition key
-
-It's a best practice to have a partition key with many distinct values, such as hundreds or thousands. The goal is to distribute your data and workload evenly across the items associated with these partition key values. If such a property doesn't exist in your data, you can construct a *synthetic partition key*. This document describes several basic techniques for generating a synthetic partition key for your Cosmos container.
-
-## Concatenate multiple properties of an item
-
-You can form a partition key by concatenating multiple property values into a single artificial `partitionKey` property. These keys are referred to as synthetic keys. For example, consider the following document:
-
-```JavaScript
-{
-  "deviceId": "abc-123",
-  "date": 2018
-}
-```
-
-For the previous document, one option is to set `/deviceId` or `/date` as the partition key. Use this option if you want to partition your container based on either device ID or date. Another option is to concatenate these two values into a synthetic `partitionKey` property that's used as the partition key.
-
-```JavaScript
-{
-  "deviceId": "abc-123",
-  "date": 2018,
-  "partitionKey": "abc-123-2018"
-}
-```
-
-In real-world scenarios, you can have thousands of items in a database. Instead of adding the synthetic key manually, define client-side logic to concatenate values and insert the synthetic key into the items in your Cosmos containers.
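-
-A minimal sketch of that client-side logic with the .NET SDK (assuming an existing `Container` named `container` whose partition key path is `/partitionKey`; the property values are illustrative):
-
-```csharp
-using System;
-using Microsoft.Azure.Cosmos;
-
-var deviceId = "abc-123";
-var date = 2018;
-
-var reading = new
-{
-    id = Guid.NewGuid().ToString(),
-    deviceId = deviceId,
-    date = date,
-    // Build the synthetic key before the write; readers must compute it the same way.
-    partitionKey = $"{deviceId}-{date}"
-};
-
-await container.CreateItemAsync(reading, new PartitionKey(reading.partitionKey));
-```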
-
-## Use a partition key with a random suffix
-
-Another possible strategy to distribute the workload more evenly is to append a random number at the end of the partition key value. When you distribute items in this way, you can perform parallel write operations across partitions.
-
-For example, suppose a partition key represents a date. You might choose a random number between 1 and 400 and concatenate it as a suffix to the date. This method results in partition key values like `2018-08-09.1`, `2018-08-09.2`, and so on, through `2018-08-09.400`. Because you randomize the partition key, the write operations on the container on each day are spread evenly across multiple partitions. This method results in better parallelism and overall higher throughput.
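-
-A minimal sketch of generating such a suffixed value (the 1-400 range and the date are illustrative):
-
-```csharp
-using System;
-
-Random random = new Random();
-int suffix = random.Next(1, 401);                  // 1 through 400
-string partitionKeyValue = $"2018-08-09.{suffix}"; // for example, 2018-08-09.237
-```
-
-Keep in mind that reading a specific item back then requires either knowing the suffix that was used or fanning the query out across all suffixed values for that date.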
-
-## Use a partition key with pre-calculated suffixes
-
-The random suffix strategy can greatly improve write throughput, but it's difficult to read a specific item. You don't know the suffix value that was used when you wrote the item. To make it easier to read individual items, use the pre-calculated suffixes strategy. Instead of using a random number to distribute the items among the partitions, use a number that is calculated based on something that you want to query.
-
-Consider the previous example, where a container uses a date as the partition key. Now suppose that each item has a `Vehicle-Identification-Number` (`VIN`) attribute that we want to access. Further, suppose that you often run queries to find items by the `VIN`, in addition to date. Before your application writes the item to the container, it can calculate a hash suffix based on the VIN and append it to the partition key date. The calculation might generate a number between 1 and 400 that is evenly distributed. This result is similar to the results produced by the random suffix strategy method. The partition key value is then the date concatenated with the calculated result.
-
-With this strategy, the writes are evenly spread across the partition key values, and across the partitions. You can easily read a particular item for a specific date, because you can calculate the partition key value for a specific `Vehicle-Identification-Number`. The benefit of this method is that you avoid creating a single hot partition key, that is, a partition key that takes all of the workload.
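-
-A minimal sketch of a deterministic suffix (the hash function, the 400-bucket range, and the parameter names are illustrative, not a prescribed algorithm):
-
-```csharp
-// Both writers and readers call this with the same date and VIN,
-// so point reads can recompute the exact partition key value.
-static string GetPartitionKeyValue(string date, string vin)
-{
-    int bucket = 0;
-    foreach (char c in vin)
-    {
-        bucket = (bucket * 31 + c) % 400;   // stable value in 0-399
-    }
-    return $"{date}.{bucket + 1}";          // for example, 2018-08-09.217
-}
-```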
-
-## Next steps
-
-You can learn more about the partitioning concept in the following articles:
-
-* Learn more about [logical partitions](../partitioning-overview.md).
-* Learn more about how to [provision throughput on Azure Cosmos containers and databases](../set-throughput.md).
-* Learn how to [provision throughput on an Azure Cosmos container](how-to-provision-container-throughput.md).
-* Learn how to [provision throughput on an Azure Cosmos database](how-to-provision-database-throughput.md).
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Templates Samples Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/templates-samples-sql.md
- Title: Azure Resource Manager templates for Azure Cosmos DB Core (SQL API)
-description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB.
---- Previously updated : 08/26/2021----
-# Azure Resource Manager templates for Azure Cosmos DB
-
-This article only shows Azure Resource Manager template examples for Core (SQL) API accounts. You can also find template examples for [Cassandra](../cassandr) APIs.
-
-## Core (SQL) API
-
-|**Template**|**Description**|
-|||
-|[Create an Azure Cosmos account, database, container with autoscale throughput](manage-with-templates.md#create-autoscale) | This template creates a Core (SQL) API account in two regions, a database and container with autoscale throughput. |
-|[Create an Azure Cosmos account, database, container with analytical store](manage-with-templates.md#create-analytical-store) | This template creates a Core (SQL) API account in one region with a container configured with Analytical TTL enabled and option to use manual or autoscale throughput. |
-|[Create an Azure Cosmos account, database, container with standard (manual) throughput](manage-with-templates.md#create-manual) | This template creates a Core (SQL) API account in two regions, a database and container with standard throughput. |
-|[Create an Azure Cosmos account, database and container with a stored procedure, trigger and UDF](manage-with-templates.md#create-sproc) | This template creates a Core (SQL) API account in two regions with a stored procedure, trigger and UDF for a container. |
-|[Create an Azure Cosmos account with Azure AD identity, Role Definitions and Role Assignment](manage-with-templates.md#create-rbac) | This template creates a Core (SQL) API account with Azure AD identity, Role Definitions and Role Assignment on a Service Principal. |
-|[Create a private endpoint for an existing Azure Cosmos account](../how-to-configure-private-endpoints.md#create-a-private-endpoint-by-using-a-resource-manager-template) | This template creates a private endpoint for an existing Azure Cosmos Core (SQL) API account in an existing virtual network. |
-|[Create a free-tier Azure Cosmos account](manage-with-templates.md#free-tier) | This template creates an Azure Cosmos DB Core (SQL) API account on free-tier. |
-
-See the [Azure Resource Manager reference for Azure Cosmos DB](/azure/templates/microsoft.documentdb/allversions) page for reference documentation.
-
-## Next steps
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Terraform Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/terraform-samples.md
- Title: Terraform samples for Azure Cosmos DB Core (SQL API)
-description: Use Terraform to create and configure Azure Cosmos DB.
---- Previously updated : 09/16/2022----
-# Terraform for Azure Cosmos DB
--
-This article shows Terraform samples for Core (SQL) API accounts.
-
-## Core (SQL) API
-
-|**Sample**|**Description**|
-|||
-|[Create an Azure Cosmos account, database, container with autoscale throughput](manage-with-terraform.md#create-autoscale) | Create a Core (SQL) API account in two regions, a database and container with autoscale throughput. |
-|[Create an Azure Cosmos account, database, container with analytical store](manage-with-terraform.md#create-analytical-store) | Create a Core (SQL) API account in one region with a container configured with Analytical TTL enabled and option to use manual or autoscale throughput. |
-|[Create an Azure Cosmos account, database, container with standard (manual) throughput](manage-with-terraform.md#create-manual) | Create a Core (SQL) API account in two regions, a database and container with standard throughput. |
-|[Create an Azure Cosmos account, database and container with a stored procedure, trigger and UDF](manage-with-terraform.md#create-sproc) | Create a Core (SQL) API account in two regions with a stored procedure, trigger and UDF for a container. |
-|[Create an Azure Cosmos account with Azure AD identity, Role Definitions and Role Assignment](manage-with-terraform.md#create-rbac) | Create a Core (SQL) API account with Azure AD identity, Role Definitions and Role Assignment on a Service Principal. |
-|[Create a free-tier Azure Cosmos account](manage-with-terraform.md#free-tier) | Create an Azure Cosmos DB Core (SQL) API account on free-tier. |
-
-## Next steps
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Throughput Control Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/throughput-control-spark.md
- Title: Azure Cosmos DB Spark Connector - Throughput Control
-description: Learn about controlling throughput for bulk data movements in the Azure Cosmos DB Spark Connector
---- Previously updated : 06/22/2022----
-# Azure Cosmos DB Spark Connector - throughput control
-
-The [Spark Connector](create-sql-api-spark.md) allows you to communicate with Azure Cosmos DB using [Apache Spark](https://spark.apache.org/). This article describes how the throughput control feature works. Check out our [Spark samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples) to get started using throughput control.
-
-## Why is throughput control important?
-
- Having throughput control helps to isolate the performance needs of applications running against a container, by limiting the amount of [request units](../request-units.md) that can be consumed by a given Spark client.
-
-There are several advanced scenarios that benefit from client-side throughput control:
-
-- **Different operations and tasks have different priorities** - there can be a need to prevent normal transactions from being throttled due to data ingestion or copy activities. Some operations and/or tasks aren't sensitive to latency, and are more tolerant to being throttled than others.
-
-- **Provide fairness/isolation to different end users/tenants** - An application will usually have many end users. Some users may send too many requests, which consume all available throughput, causing others to get throttled.
-
-- **Load balancing of throughput between different Azure Cosmos DB clients** - in some use cases, it's important to make sure all the clients get a fair (equal) share of the throughput.
-
-Throughput control enables the capability for more granular level RU rate limiting as needed.
-
-## How does throughput control work?
-
-Throughput control for the Spark Connector is configured by first creating a container that will define throughput control metadata, with a partition key of `groupId`, and `ttl` enabled. Here we create this container using Spark SQL, and call it `ThroughputControl`:
--
-```sql
- %sql
- CREATE TABLE IF NOT EXISTS cosmosCatalog.`database-v4`.ThroughputControl
- USING cosmos.oltp
- OPTIONS(spark.cosmos.database = 'database-v4')
- TBLPROPERTIES(partitionKeyPath = '/groupId', autoScaleMaxThroughput = '4000', indexingPolicy = 'AllProperties', defaultTtlInSeconds = '-1');
-```
-
-> [!NOTE]
-> The above example creates a container with [autoscale](../provision-throughput-autoscale.md). If you prefer standard provisioning, you can replace `autoScaleMaxThroughput` with `manualThroughput` instead.
-
-> [!IMPORTANT]
-> The partition key must be defined as `/groupId`, and `ttl` must be enabled, for the throughput control feature to work.
-
-Within the Spark config of a given application, we can then specify parameters for our workload. The example below enables throughput control and defines a throughput control group `name` and a `targetThroughputThreshold`. We also define the `database` and `container` in which the throughput control group is maintained:
-
-```scala
- "spark.cosmos.throughputControl.enabled" -> "true",
- "spark.cosmos.throughputControl.name" -> "SourceContainerThroughputControl",
- "spark.cosmos.throughputControl.targetThroughputThreshold" -> "0.95",
- "spark.cosmos.throughputControl.globalControl.database" -> "database-v4",
- "spark.cosmos.throughputControl.globalControl.container" -> "ThroughputControl"
-```
-
-In the above example, the `targetThroughputThreshold` is defined as **0.95**, so rate limiting will occur (and requests will be retried) when clients consume more than 95% (+/- 5-10 percent) of the throughput that is allocated to the container. This configuration is stored as a document in the throughput control container that looks like the following example:
-
-```json
- {
- "id": "ZGF0YWJhc2UtdjQvY3VzdG9tZXIvU291cmNlQ29udGFpbmVyVGhyb3VnaHB1dENvbnRyb2w.info",
- "groupId": "database-v4/customer/SourceContainerThroughputControl.config",
- "targetThroughput": "",
- "targetThroughputThreshold": "0.95",
- "isDefault": true,
- "_rid": "EHcYAPolTiABAAAAAAAAAA==",
- "_self": "dbs/EHcYAA==/colls/EHcYAPolTiA=/docs/EHcYAPolTiABAAAAAAAAAA==/",
- "_etag": "\"2101ea83-0000-1100-0000-627503dd0000\"",
- "_attachments": "attachments/",
- "_ts": 1651835869
- }
-```
-> [!NOTE]
-> Throughput control does not do RU pre-calculation of each operation. Instead, it tracks the RU usage after the operation based on the response header. As such, throughput control is based on an approximation, and does not guarantee that the target amount of throughput will be available for the group at any given time.
-
-> [!WARNING]
-> The `targetThroughputThreshold` is **immutable**. If you change the target throughput threshold value, a new throughput control group is created (as long as you use version 4.10.0 or later, it can have the same name). Restart all Spark jobs that use the group if you want to ensure they all consume the new threshold immediately; otherwise, they pick up the new threshold after their next restart.
-
-For each Spark client that uses the throughput control group, a record is created in the `ThroughputControl` container with a `ttl` of a few seconds, so the documents vanish quickly if a Spark client is no longer actively running. Each record looks like the following example:
-
-```json
- {
- "id": "Zhjdieidjojdook3osk3okso3ksp3ospojsp92939j3299p3oj93pjp93jsps939pkp9ks39kp9339skp",
- "groupId": "database-v4/customer/SourceContainerThroughputControl.config",
- "_etag": "\"1782728-w98999w-ww9998w9-99990000\"",
- "ttl": 10,
- "initializeTime": "2022-06-26T02:24:40.054Z",
- "loadFactor": 0.97636377638898,
- "allocatedThroughput": 484.89444487847,
- "_rid": "EHcYAPolTiABAAAAAAAAAA==",
- "_self": "dbs/EHcYAA==/colls/EHcYAPolTiA=/docs/EHcYAPolTiABAAAAAAAAAA==/",
- "_etag": "\"2101ea83-0000-1100-0000-627503dd0000\"",
- "_attachments": "attachments/",
- "_ts": 1651835869
- }
-```
-
-In each client record, the `loadFactor` attribute represents the load on the given client, relative to other clients in the throughput control group. The `allocatedThroughput` attribute shows how many RUs are currently allocated to this client. The Spark Connector will adjust allocated throughput for each client based on its load. This will ensure that each client gets a share of the throughput available that is proportional to its load, and all clients together don't consume more than the total allocated for the throughput control group to which they belong.
--
-## Next steps
-
-* [Spark samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples).
-* [Manage data with Azure Cosmos DB Spark 3 OLTP Connector for SQL API](create-sql-api-spark.md).
-* Learn more about [Apache Spark](https://spark.apache.org/).
cosmos-db Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/time-to-live.md
- Title: Expire data in Azure Cosmos DB with Time to Live
-description: With TTL, Microsoft Azure Cosmos DB provides the ability to have documents automatically purged from the system after a period of time.
------ Previously updated : 09/16/2021-
-# Time to Live (TTL) in Azure Cosmos DB
-
-With **Time to Live** or TTL, Azure Cosmos DB provides the ability to delete items automatically from a container after a certain time period. By default, you can set time to live at the container level and override the value on a per-item basis. After you set the TTL at the container or item level, Azure Cosmos DB automatically removes these items after the time period has elapsed since they were last modified. The time to live value is configured in seconds. When you configure TTL, the system automatically deletes the expired items based on the TTL value, without needing a delete operation that is explicitly issued by the client application. The maximum value for TTL is 2147483647 seconds.
-
-Deletion of expired items is a background task that consumes leftover [Request Units](../request-units.md), that is, Request Units that haven't been consumed by user requests. Even after the TTL has expired, if the container is overloaded with requests and there aren't enough RUs available, the data deletion is delayed. Data is deleted once there are enough RUs available to perform the delete operation. Although the data deletion is delayed, data isn't returned by any queries (by any API) after the TTL has expired.
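-
-A minimal sketch of both levels with the .NET SDK v3 (assuming an existing `Database` named `database`; the container name, partition key path, and POCO are illustrative):
-
-```csharp
-using Microsoft.Azure.Cosmos;
-using Newtonsoft.Json;
-
-// Container-level TTL: -1 means items never expire unless they carry their own ttl value.
-ContainerProperties properties = new ContainerProperties("myContainer", "/partitionKey")
-{
-    DefaultTimeToLive = -1
-};
-await database.CreateContainerIfNotExistsAsync(properties);
-
-// Item-level TTL: expose a "ttl" property on the item to override the container default.
-public class SalesOrder
-{
-    [JsonProperty("id")]
-    public string Id { get; set; }
-
-    [JsonProperty("partitionKey")]
-    public string PartitionKey { get; set; }
-
-    // Expires 60 seconds after its last modification; omit (null) to use the container default.
-    [JsonProperty("ttl", NullValueHandling = NullValueHandling.Ignore)]
-    public int? TimeToLive { get; set; } = 60;
-}
-```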
-
-> [!NOTE]
-> This content is related to Azure Cosmos DB transactional store TTL. If you are looking for analytical store TTL, that enables NoETL HTAP scenarios through [Azure Synapse Link](../synapse-link.md), please click [here](../analytical-store-introduction.md#analytical-ttl).
-
-## Time to live for containers and items
-
-The time to live value is set in seconds, and it is interpreted as a delta from the time that an item was last modified. You can set time to live on a container or an item within the container:
-
-1. **Time to Live on a container** (set using `DefaultTimeToLive`):
-
- - If missing (or set to null), items are not expired automatically.
-
- - If present and the value is set to "-1", it is equal to infinity, and items donΓÇÖt expire by default.
-
- If present and the value is set to some *non-zero* number *"n"*, items will expire *"n"* seconds after their last modified time.
-
-2. **Time to Live on an item** (set using `ttl`):
-
- This property is applicable only if `DefaultTimeToLive` is present and is not set to null for the parent container.
-
- - If present, it overrides the `DefaultTimeToLive` value of the parent container.
-
-## Time to Live configurations
-
-- If TTL is set to *"n"* on a container, then the items in that container will expire after *n* seconds. If there are items in the same container that have their own time to live set to -1 (indicating they don't expire), or if some items have overridden the time to live setting with a different number, these items expire based on their own configured TTL value.
-
-- If TTL is not set on a container, then the time to live on an item in this container has no effect.
-
-- If TTL on a container is set to -1, an item in this container that has the time to live set to *n* will expire after *n* seconds, and the remaining items will not expire.
-
-## Examples
-
-This section shows some examples with different time to live values assigned to container and items:
-
-### Example 1
-
-TTL on container is set to null (DefaultTimeToLive = null)
-
-|TTL on item| Result|
-|||
-|ttl = null|TTL is disabled. The item will never expire (default).|
-|ttl = -1|TTL is disabled. The item will never expire.|
-|ttl = 2000|TTL is disabled. The item will never expire.|
-
-### Example 2
-
-TTL on container is set to -1 (DefaultTimeToLive = -1)
-
-|TTL on item| Result|
-|||
-|ttl = null|TTL is enabled. The item will never expire (default).|
-|ttl = -1|TTL is enabled. The item will never expire.|
-|ttl = 2000|TTL is enabled. The item will expire after 2000 seconds.|
-
-### Example 3
-
-TTL on container is set to 1000 (DefaultTimeToLive = 1000)
-
-|TTL on item| Result|
-|||
-|ttl = null|TTL is enabled. The item will expire after 1000 seconds (default).|
-|ttl = -1|TTL is enabled. The item will never expire.|
-|ttl = 2000|TTL is enabled. The item will expire after 2000 seconds.|
-
-## Next steps
-
-Learn how to configure Time to Live in the following articles:
--- [How to configure Time to Live](how-to-time-to-live.md)
cosmos-db Transactional Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/transactional-batch.md
- Title: Transactional batch operations in Azure Cosmos DB using the .NET or Java SDK
-description: Learn how to use TransactionalBatch in the Azure Cosmos DB .NET or Java SDK to perform a group of point operations that either succeed or fail.
----- Previously updated : 10/27/2020--
-# Transactional batch operations in Azure Cosmos DB using the .NET or Java SDK
-
-Transactional batch describes a group of point operations that need to either succeed or fail together with the same partition key in a container. In the .NET and Java SDKs, the `TransactionalBatch` class is used to define this batch of operations. If all operations succeed in the order they're described within the transactional batch operation, the transaction will be committed. However, if any operation fails, the entire transaction is rolled back.
-
-## What's a transaction in Azure Cosmos DB
-
-A transaction in a typical database can be defined as a sequence of operations performed as a single logical unit of work. Each transaction provides ACID (Atomicity, Consistency, Isolation, Durability) property guarantees.
-
-* **Atomicity** guarantees that all the operations done inside a transaction are treated as a single unit, and either all of them are committed or none of them are.
-* **Consistency** makes sure that the data is always in a valid state across transactions.
-* **Isolation** guarantees that no two transactions interfere with each other. Many commercial systems provide multiple isolation levels that can be used based on the application needs.
-* **Durability** ensures that any change that is committed in a database will always be present.
-Azure Cosmos DB supports [full ACID compliant transactions with snapshot isolation](database-transactions-optimistic-concurrency.md) for operations within the same [logical partition key](../partitioning-overview.md).
-
-## Transactional batch operations and stored procedures
-
-Azure Cosmos DB currently supports stored procedures, which also provide the transactional scope on operations. However, transactional batch operations offer the following benefits:
-
-* **Language option** - Transactional batch is supported on the SDK and language you work with already, while stored procedures need to be written in JavaScript.
-* **Code versioning** - Versioning application code and onboarding it onto your CI/CD pipeline is much more natural than orchestrating the update of a stored procedure and making sure the rollover happens at the right time. It also makes rolling back changes easier.
-* **Performance** - Reduced latency on equivalent operations by up to 30% when compared to the stored procedure execution.
-* **Content serialization** - Each operation within a transactional batch can use custom serialization options for its payload.
-
-## How to create a transactional batch operation
-
-### [.NET](#tab/dotnet)
-
-When creating a transactional batch operation, start with a container instance and call [CreateTransactionalBatch](/dotnet/api/microsoft.azure.cosmos.container.createtransactionalbatch):
-
-```csharp
-PartitionKey partitionKey = new PartitionKey("road-bikes");
-
-TransactionalBatch batch = container.CreateTransactionalBatch(partitionKey);
-```
-
-Next, add multiple operations to the batch:
-
-```csharp
-Product bike = new (
- id: "68719520766",
- category: "road-bikes",
- name: "Chropen Road Bike"
-);
-
-batch.CreateItem<Product>(bike);
-
-Part part = new (
- id: "68719519885",
- category: "road-bikes",
- name: "Tronosuros Tire",
- productId: bike.id
-);
-
-batch.CreateItem<Part>(part);
-```
-
-Finally, call [ExecuteAsync](/dotnet/api/microsoft.azure.cosmos.transactionalbatch.executeasync) on the batch:
-
-```csharp
-using TransactionalBatchResponse response = await batch.ExecuteAsync();
-```
-
-Once the response is received, examine if the response is successful. If the response indicates a success, extract the results:
-
-```csharp
-if (response.IsSuccessStatusCode)
-{
- TransactionalBatchOperationResult<Product> productResponse;
- productResponse = response.GetOperationResultAtIndex<Product>(0);
- Product productResult = productResponse.Resource;
-
- TransactionalBatchOperationResult<Part> partResponse;
- partResponse = response.GetOperationResultAtIndex<Part>(1);
- Part partResult = partResponse.Resource;
-}
-```
-
-> [!IMPORTANT]
-> If there's a failure, the failed operation will have a status code of its corresponding error. All the other operations will have a 424 status code (failed dependency). In the example below, the operation fails because it tries to create an item that already exists (409 HttpStatusCode.Conflict). The status code helps you identify the cause of the transaction failure.
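-
-A minimal sketch of inspecting those per-operation status codes (reusing the `batch` from the earlier snippets; the handling shown is illustrative):
-
-```csharp
-using TransactionalBatchResponse batchResponse = await batch.ExecuteAsync();
-
-if (!batchResponse.IsSuccessStatusCode)
-{
-    for (int i = 0; i < batchResponse.Count; i++)
-    {
-        TransactionalBatchOperationResult result = batchResponse[i];
-        // The operation that failed reports its own error code (for example, 409 Conflict);
-        // every other operation reports 424 FailedDependency.
-        Console.WriteLine($"Operation {i} returned {result.StatusCode}");
-    }
-}
-```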
-
-### [Java](#tab/java)
-
-When creating a transactional batch operation, call [CosmosBatch.createCosmosBatch](/java/api/com.azure.cosmos.models.cosmosbatch.createcosmosbatch):
-
-```java
-PartitionKey partitionKey = new PartitionKey("road-bikes");
-
-CosmosBatch batch = CosmosBatch.createCosmosBatch(partitionKey);
-```
-
-Next, add multiple operations to the batch:
-
-```java
-Product bike = new Product();
-bike.setId("68719520766");
-bike.setCategory("road-bikes");
-bike.setName("Chropen Road Bike");
-
-batch.createItemOperation(bike);
-
-Part part = new Part();
-part.setId("68719519885");
-part.setCategory("road-bikes");
-part.setName("Tronosuros Tire");
-part.setProductId(bike.getId());
-
-batch.createItemOperation(part);
-```
-
-Finally, use a container instance to call [executeCosmosBatch](/java/api/com.azure.cosmos.cosmoscontainer.executecosmosbatch) with the batch:
-
-```java
-CosmosBatchResponse response = container.executeCosmosBatch(batch);
-```
-
-Once the response is received, examine if the response is successful. If the response indicates a success, extract the results:
-
-```java
-if (response.isSuccessStatusCode())
-{
- List<CosmosBatchOperationResult> results = response.getResults();
-}
-```
-
-> [!IMPORTANT]
-> If there's a failure, the failed operation will have a status code of its corresponding error. All the other operations will have a 424 status code (failed dependency). For example, if an operation fails because it tries to create an item that already exists, it returns a 409 status code (HttpStatusCode.Conflict). The status code helps you identify the cause of the transaction failure.
---
-## How are transactional batch operations executed
-
-When the `ExecuteAsync` method is called, all operations in the `TransactionalBatch` object are grouped, serialized into a single payload, and sent as a single request to the Azure Cosmos DB service.
-
-The service receives the request and executes all operations within a transactional scope, and returns a response using the same serialization protocol. This response is either a success, or a failure, and supplies individual operation responses per operation.
-
-The SDK exposes the response for you to verify the result and, optionally, extract each of the internal operation results.
-
-## Limitations
-
-Currently, there are two known limits:
-
-* The Azure Cosmos DB request size limit constrains the size of the `TransactionalBatch` payload to not exceed 2 MB, and the maximum execution time is 5 seconds.
-* There's a current limit of 100 operations per `TransactionalBatch` to ensure the performance is as expected and within SLAs.
-
-## Next steps
-
-* Learn more about [what you can do with TransactionalBatch](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/TransactionalBatch)
-
-* Visit our [samples](sql-api-dotnet-v3sdk-samples.md) for more ways to use our Cosmos DB .NET SDK.
cosmos-db Troubleshoot Bad Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-bad-request.md
- Title: Troubleshoot Azure Cosmos DB bad request exceptions
-description: Learn how to diagnose and fix bad request exceptions such as input content or partition key is invalid, partition key doesn't match in Azure Cosmos DB.
--- Previously updated : 03/07/2022-----
-# Diagnose and troubleshoot bad request exceptions in Azure Cosmos DB
-
-HTTP status code 400 indicates that the request contains invalid data or is missing required parameters.
-
-## <a name="missing-id-property"></a>Missing the ID property
-In this scenario, it's common to see the error:
-
-*The input content is invalid because the required properties - 'id; ' - are missing*
-
-A response with this error means the JSON document that is being sent to the service is lacking the required ID property.
-
-### Solution
-Specify an `id` property with a string value, as per the [REST specification](/rest/api/cosmos-db/documents), as part of your document; the SDKs don't autogenerate values for this property.
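-
-A minimal sketch with the .NET SDK (assuming an existing `Container` named `container` with partition key path `/partitionKey`; the property values are illustrative):
-
-```csharp
-using System;
-using Microsoft.Azure.Cosmos;
-
-var item = new
-{
-    id = Guid.NewGuid().ToString(),   // required string property; the SDK won't generate it
-    partitionKey = "road-bikes",
-    name = "Chropen Road Bike"
-};
-
-await container.CreateItemAsync(item, new PartitionKey(item.partitionKey));
-```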
-
-## <a name="invalid-partition-key-type"></a>Invalid partition key type
-In this scenario, it's common to see errors like:
-
-*Partition key ... is invalid.*
-
-A response with this error means the partition key value is of an invalid type.
-
-### Solution
-The value of the partition key should be a string or a number. Make sure the value is of the expected type.
-
-## <a name="wrong-partition-key-value"></a>Wrong partition key value
-In this scenario, it's common to see these errors:
-
-*Response status code does not indicate success: BadRequest (400); Substatus: 1001*
-
-*PartitionKey extracted from document doesn't match the one specified in the header*
-
-A response with this error means you are executing an operation and passing a partition key value that does not match the document's body value for the expected property. If the collection's partition key path is `/myPartitionKey`, the document has a property called `myPartitionKey` with a value that does not match what was provided as partition key value when calling the SDK method.
-
-### Solution
-Send the partition key value parameter that matches the document property value.
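-
-A minimal sketch with the .NET SDK (assuming the container's partition key path is `/myPartitionKey` and `container` is an existing `Container`; the values are illustrative):
-
-```csharp
-using Microsoft.Azure.Cosmos;
-
-var item = new { id = "item-1", myPartitionKey = "value-1" };
-
-// The PartitionKey argument must equal the value of the item's own partition key property.
-await container.ReplaceItemAsync(item, item.id, new PartitionKey(item.myPartitionKey));
-```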
-
-## Next steps
-* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
-* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
-* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4-sql.md) issues when you use the Azure Cosmos DB Java v4 SDK.
-* Learn about performance guidelines for [Java v4 SDK](performance-tips-java-sdk-v4-sql.md).
cosmos-db Troubleshoot Changefeed Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-changefeed-functions.md
- Title: Troubleshoot issues when using Azure Functions trigger for Cosmos DB
-description: Common issues, workarounds, and diagnostic steps, when using the Azure Functions trigger for Cosmos DB
--- Previously updated : 04/14/2022-----
-# Diagnose and troubleshoot issues when using Azure Functions trigger for Cosmos DB
-
-This article covers common issues, workarounds, and diagnostic steps, when you use the [Azure Functions trigger for Cosmos DB](change-feed-functions.md).
-
-## Dependencies
-
-The Azure Functions trigger and bindings for Cosmos DB depend on the extension packages over the base Azure Functions runtime. Always keep these packages updated, because they might include fixes and new features that address potential issues you may encounter:
-
-* For Azure Functions V2, see [Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB).
-* For Azure Functions V1, see [Microsoft.Azure.WebJobs.Extensions.DocumentDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DocumentDB).
-
-This article will always refer to Azure Functions V2 whenever the runtime is mentioned, unless explicitly specified.
-
-## Consume the Azure Cosmos DB SDK independently
-
-The key functionality of the extension package is to provide support for the Azure Functions trigger and bindings for Cosmos DB. It also includes the [Azure Cosmos DB .NET SDK](sql-api-sdk-dotnet-core.md), which is helpful if you want to interact with Azure Cosmos DB programmatically without using the trigger and bindings.
-
-If you want to use the Azure Cosmos DB SDK, make sure that you don't add another NuGet package reference to your project. Instead, **let the SDK reference resolve through the Azure Functions extension package**, and consume the Azure Cosmos DB SDK separately from the trigger and bindings.
-
-Additionally, if you are manually creating your own instance of the [Azure Cosmos DB SDK client](./sql-api-sdk-dotnet-core.md), you should follow the pattern of having only one instance of the client [using a Singleton pattern approach](../../azure-functions/manage-connections.md?tabs=csharp#azure-cosmos-db-clients). This process avoids the potential socket issues in your operations.
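-
-A minimal sketch of that singleton pattern in a C# function (the function name, trigger, and connection setting name are illustrative):
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Extensions.Logging;
-
-public static class OrdersFunction
-{
-    // One client instance shared across all invocations on this Function App instance.
-    private static readonly CosmosClient cosmosClient = new CosmosClient(
-        Environment.GetEnvironmentVariable("CosmosDBConnection"));
-
-    [FunctionName("ProcessOrder")]
-    public static async Task Run([QueueTrigger("orders")] string orderId, ILogger log)
-    {
-        // Reuse the shared client; don't create a new CosmosClient per invocation.
-        Container container = cosmosClient.GetContainer("database", "container");
-        ItemResponse<dynamic> order = await container.ReadItemAsync<dynamic>(orderId, new PartitionKey(orderId));
-        log.LogInformation($"Read order {orderId}: {order.StatusCode}");
-    }
-}
-```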
-
-## Common scenarios and workarounds
-
-### Azure Function fails with error message collection doesn't exist
-
-Azure Function fails with error message "Either the source collection 'collection-name' (in database 'database-name') or the lease collection 'collection2-name' (in database 'database2-name') does not exist. Both collections must exist before the listener starts. To automatically create the lease collection, set 'CreateLeaseCollectionIfNotExists' to 'true'"
-
-This means that either one or both of the Azure Cosmos containers required for the trigger to work don't exist or aren't reachable by the Azure Function. **The error itself tells you which Azure Cosmos database and container the trigger is looking for** based on your configuration.
-
-1. Verify the `ConnectionStringSetting` attribute and that it **references a setting that exists in your Azure Function App**. The value on this attribute shouldn't be the Connection String itself, but the name of the Configuration Setting.
-2. Verify that the `databaseName` and `collectionName` exist in your Azure Cosmos account. If you are using automatic value replacement (using `%settingName%` patterns), make sure the name of the setting exists in your Azure Function App.
-3. If you don't specify a `LeaseCollectionName/leaseCollectionName`, the default is "leases". Verify that such container exists. Optionally you can set the `CreateLeaseCollectionIfNotExists` attribute in your Trigger to `true` to automatically create it.
-4. Verify your [Azure Cosmos account's Firewall configuration](../how-to-configure-firewall.md) to see that it's not blocking the Azure Function.
-
-### Azure Function fails to start with "Shared throughput collection should have a partition key"
-
-The previous versions of the Azure Cosmos DB Extension did not support using a leases container that was created within a [shared throughput database](../set-throughput.md#set-throughput-on-a-database). To resolve this issue, update the [Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB) extension to get the latest version.
-
-### Azure Function fails to start with "PartitionKey must be supplied for this operation."
-
-This error means that you are currently using a partitioned lease collection with an old [extension dependency](#dependencies). Upgrade to the latest available version. If you are currently running on Azure Functions V1, you will need to upgrade to Azure Functions V2.
-
-### Azure Function fails to start with "Forbidden (403); Substatus: 5300... The given request [POST ...] cannot be authorized by AAD token in data plane"
-
-This error means your Function is attempting to [perform a non-data operation using Azure AD identities](troubleshoot-forbidden.md#non-data-operations-are-not-allowed). You cannot use `CreateLeaseContainerIfNotExists = true` when using Azure AD identities.
-
-### Azure Function fails to start with "The lease collection, if partitioned, must have partition key equal to id."
-
-This error means that your current leases container is partitioned, but the partition key path is not `/id`. To resolve this issue, you need to recreate the leases container with `/id` as the partition key.
-
-### You see a "Value cannot be null. Parameter name: o" in your Azure Functions logs when you try to Run the Trigger
-
-This issue appears if you are using the Azure portal and you try to select the **Run** button on the screen when inspecting an Azure Function that uses the trigger. The trigger doesn't require you to select **Run** to start; it starts automatically when the Azure Function is deployed. If you want to check the Azure Function's log stream on the Azure portal, go to your monitored container and insert some new items; you'll see the trigger executing automatically.
-
-### My changes take too long to be received
-
-This scenario can have multiple causes and all of them should be checked:
-
-1. Is your Azure Function deployed in the same region as your Azure Cosmos account? For optimal network latency, both the Azure Function and your Azure Cosmos account should be colocated in the same Azure region.
-2. Are the changes happening in your Azure Cosmos container continuous or sporadic?
-If it's the latter, there could be some delay between the changes being stored and the Azure Function picking them up. This is because internally, when the trigger checks for changes in your Azure Cosmos container and finds none pending to be read, it will sleep for a configurable amount of time (5 seconds, by default) before checking for new changes (to avoid high RU consumption). You can configure this sleep time through the `FeedPollDelay/feedPollDelay` setting in the [configuration](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) of your trigger (the value is expected to be in milliseconds).
-3. Your Azure Cosmos container might be [rate-limited](../request-units.md).
-4. You can use the `PreferredLocations` attribute in your trigger to specify a comma-separated list of Azure regions to define a custom preferred connection order.
-5. The speed at which your Trigger receives new changes is dictated by the speed at which you are processing them. Verify the Function's [Execution Time / Duration](../../azure-functions/analyze-telemetry-data.md); if your Function is slow, that increases the time it takes for your Trigger to get new changes. If you see a recent increase in Duration, there could be a recent code change that might affect it. If the speed at which you are receiving operations on your Azure Cosmos container is faster than the speed of your Trigger, you will keep lagging behind. You might want to investigate which operation in the Function's code is the most time-consuming and how to optimize it.
-
-### Some changes are repeated in my Trigger
-
-The concept of a "change" is an operation on a document. The most common scenarios where events for the same document is received are:
-* The account is using Eventual consistency. While consuming the change feed in an Eventual consistency level, there could be duplicate events in-between subsequent change feed read operations (the last event of one read operation appears as the first of the next).
-* The document is being updated. The change feed can contain multiple operations for the same document; if a document receives updates, the trigger can pick up multiple events (one for each update). One easy way to distinguish among different operations for the same document is to track the `_lsn` [property for each change](../change-feed.md#change-feed-and-_etag-_lsn-or-_ts). If they don't match, these are different changes over the same document.
-* If you are identifying documents just by `id`, remember that the unique identifier for a document is the `id` and its partition key (there can be two documents with the same `id` but different partition key).
-
-### Some changes are missing in my Trigger
-
-If you find that some of the changes that happened in your Azure Cosmos container are not being picked up by the Azure Function, or some changes are missing in the destination when you are copying them, follow the steps below.
-
-When your Azure Function receives the changes, it often processes them, and could optionally, send the result to another destination. When you are investigating missing changes, make sure you **measure which changes are being received at the ingestion point** (when the Azure Function starts), not on the destination.
-
-If some changes are missing on the destination, this could mean that an error is happening during the Azure Function execution after the changes were received.
-
-In this scenario, the best course of action is to add `try/catch` blocks in your code and inside the loops that might be processing the changes, to detect any failure for a particular subset of items and handle them accordingly (send them to another storage for further analysis or retry).
-
-> [!NOTE]
-> The Azure Functions trigger for Cosmos DB, by default, won't retry a batch of changes if there was an unhandled exception during your code execution. This means that if the changes did not arrive at the destination, it's because the Function failed to process them.
-
-If the destination is another Cosmos container and you are performing Upsert operations to copy the items, **verify that the Partition Key Definition on both the monitored and destination container are the same**. Upsert operations could be saving multiple source items as one in the destination because of this configuration difference.
-
-If you find that some changes were not received at all by your trigger, the most common scenario is that there is **another Azure Function running**. It could be another Azure Function deployed in Azure or an Azure Function running locally on a developer's machine that has **exactly the same configuration** (same monitored and lease containers), and this Azure Function is stealing a subset of the changes you would expect your Azure Function to process.
-
-Additionally, you can validate this scenario if you know how many Azure Function App instances you have running. If you inspect your leases container and count the number of lease items within, the distinct values of the `Owner` property in them should be equal to the number of instances of your Function App. If there are more owners than the known Azure Function App instances, it means that these extra owners are the ones "stealing" the changes.
-
-One easy way to work around this situation, is to apply a `LeaseCollectionPrefix/leaseCollectionPrefix` to your Function with a new/different value or, alternatively, test with a new leases container.
-
-### Need to restart and reprocess all the items in my container from the beginning
-To reprocess all the items in a container from the beginning:
-1. Stop your Azure function if it is currently running.
-1. Delete the documents in the lease collection (or delete and re-create the lease collection so it is empty)
-1. Set the [StartFromBeginning](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) CosmosDBTrigger attribute in your function to true.
-1. Restart the Azure function. It will now read and process all changes from the beginning.
-
-Setting [StartFromBeginning](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) to true will tell the Azure function to start reading changes from the beginning of the history of the collection instead of the current time. This only works when there are no already created leases (that is, documents in the leases collection). Setting this property to true when there are leases already created has no effect; in this scenario, when a function is stopped and restarted, it will begin reading from the last checkpoint, as defined in the leases collection. To reprocess from the beginning, follow the above steps 1-4.
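-
-A minimal sketch of a C# trigger with this setting enabled (using the v3-style extension attribute names; the database, container, and connection setting names are illustrative):
-
-```csharp
-using System.Collections.Generic;
-using Microsoft.Azure.Documents;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Extensions.Logging;
-
-public static class ReprocessFunction
-{
-    [FunctionName("ReprocessContainer")]
-    public static void Run(
-        [CosmosDBTrigger(
-            databaseName: "database",
-            collectionName: "monitored",
-            ConnectionStringSetting = "CosmosDBConnection",
-            LeaseCollectionName = "leases",
-            CreateLeaseCollectionIfNotExists = true,
-            StartFromBeginning = true)] IReadOnlyList<Document> changes,
-        ILogger log)
-    {
-        // Changes are delivered from the beginning of the container's history
-        // only when the leases container is empty.
-        log.LogInformation($"Processing {changes.Count} changes");
-    }
-}
-```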
-
-### Binding can only be done with IReadOnlyList\<Document> or JArray
-
-This error happens if your Azure Functions project (or any referenced project) contains a manual NuGet reference to the Azure Cosmos DB SDK with a different version than the one provided by the [Azure Functions Cosmos DB Extension](./troubleshoot-changefeed-functions.md#dependencies).
-
-To work around this situation, remove the manual NuGet reference that was added and let the Azure Cosmos DB SDK reference resolve through the Azure Functions Cosmos DB Extension package.
-
-### Changing Azure Function's polling interval for the detecting changes
-
-As explained earlier for [My changes take too long to be received](#my-changes-take-too-long-to-be-received), Azure function will sleep for a configurable amount of time (5 seconds, by default) before checking for new changes (to avoid high RU consumption). You can configure this sleep time through the `FeedPollDelay/feedPollDelay` setting in the [configuration](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) of your trigger (the value is expected to be in milliseconds).
-
-## Next steps
-
-* [Enable monitoring for your Azure Functions](../../azure-functions/functions-monitoring.md)
-* [Azure Cosmos DB .NET SDK Troubleshooting](./troubleshoot-dot-net-sdk.md)
cosmos-db Troubleshoot Dot Net Sdk Request Header Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-request-header-too-large.md
- Title: Troubleshoot a "Request header too large" message or 400 bad request in Azure Cosmos DB
-description: Learn how to diagnose and fix the request header too large exception.
--- Previously updated : 09/29/2021------
-# Diagnose and troubleshoot Azure Cosmos DB "Request header too large" message
-
-The "Request header too large" message is thrown with an HTTP error code 400. This error occurs if the size of the request header has grown so large that it exceeds the maximum-allowed size. We recommend that you use the latest version of the SDK. Use at least version 3.x or 2.x, because these versions add header size tracing to the exception message.
-
-## Troubleshooting steps
-The "Request header too large" message occurs if the session or the continuation token is too large. The following sections describe the cause of the issue and its solution in each category.
-
-### Session token is too large
-
-#### Cause:
-A 400 bad request most likely occurs because the session token is too large. If the following statements are true, the session token is too large:
-
-* The error occurs on point operations like create, read, and update where there isn't a continuation token.
-* The exception started without making any changes to the application. The session token grows as the number of partitions increases in the container. The number of partitions increases as the amount of data increases or if the throughput is increased.
-
-#### Temporary mitigation:
-Restart your client application to reset all the session tokens. Eventually, the session token will grow back to the previous size that caused the issue. To avoid this issue completely, use the solution in the next section.
-
-#### Solution:
-> [!IMPORTANT]
-> Upgrade to at least .NET [v3.20.1](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/changelog.md) or [v2.16.1](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md). These minor versions contain optimizations to reduce the session token size to prevent the header from growing and hitting the size limit.
-1. Follow the guidance in the [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) or [.NET v2](performance-tips.md) performance tips articles. Convert the application to use the direct connection mode with the Transmission Control Protocol (TCP). The direct connection mode with the TCP protocol doesn't have the header size restriction like the HTTP protocol, so it avoids this issue. Make sure to use the latest version of the SDK, which has a fix for query operations when the service interop isn't available.
-1. If the direct connection mode with the TCP protocol isn't an option for your workload, mitigate it by changing the [client consistency level](how-to-manage-consistency.md). The session token is only used for session consistency, which is the default consistency level for Azure Cosmos DB. Other consistency levels don't use the session token.
-
-### Continuation token is too large
-
-#### Cause:
-The 400 bad request occurs on query operations where the continuation token is used if the continuation token has grown too large or if different queries have different continuation token sizes.
-
-#### Solution:
-1. Follow the guidance in the [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) or [.NET v2](performance-tips.md) performance tips articles. Convert the application to use the direct connection mode with the TCP protocol. The direct connection mode with the TCP protocol doesn't have the header size restriction like the HTTP protocol, so it avoids this issue.
-1. If the direct connection mode with the TCP protocol isn't an option for your workload, set the `ResponseContinuationTokenLimitInKb` option. You can find this option in `FeedOptions` in v2 or `QueryRequestOptions` in v3, as shown in the sketch after this list.
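-
-A minimal sketch for the v3 SDK (assuming an existing `Container` named `container`; the 1-KB limit and the query are illustrative):
-
-```csharp
-using Microsoft.Azure.Cosmos;
-
-QueryRequestOptions options = new QueryRequestOptions
-{
-    ResponseContinuationTokenLimitInKb = 1   // cap the continuation token size returned by the service
-};
-
-FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
-    "SELECT * FROM c",
-    requestOptions: options);
-```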
-
-## Next steps
-* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
-* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
cosmos-db Troubleshoot Dot Net Sdk Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-request-timeout.md
- Title: Troubleshoot Azure Cosmos DB HTTP 408 or request timeout issues with the .NET SDK
-description: Learn how to diagnose and fix .NET SDK request timeout exceptions.
--- Previously updated : 09/16/2022------
-# Diagnose and troubleshoot Azure Cosmos DB .NET SDK request timeout exceptions
-
-The HTTP 408 error occurs if the SDK was unable to complete the request before the timeout limit occurred.
-
-It is important to make sure the application design is following our [guide for designing resilient applications with Azure Cosmos DB SDKs](conceptual-resilient-sdk-applications.md) to make sure it correctly reacts to different network conditions. Your application should have retries in place for timeout errors as these are normally expected in a distributed system.
-
-When evaluating the case for timeout errors:
-
-* What is the impact measured in volume of operations affected compared to the operations succeeding? Is it within the service SLAs?
-* Is the P99 latency / availability affected?
-* Are the failures affecting all your application instances or only a subset? When the issue is reduced to a subset of instances, it's commonly a problem related to those instances.
-
-## Customize the timeout on the Azure Cosmos DB .NET SDK
-
-The SDK has two distinct alternatives to control timeouts, each with a different scope.
-
-### RequestTimeout
-
-The `CosmosClientOptions.RequestTimeout` (or `ConnectionPolicy.RequestTimeout` for SDK v2) configuration allows you to set a timeout that affects each individual network request. An operation started by a user can span multiple network requests (for example, there could be throttling). This configuration would apply for each network request on the retry. This timeout isn't an end-to-end operation request timeout.
-
-### CancellationToken
-
-All the async operations in the SDK have an optional CancellationToken parameter. This [CancellationToken](/dotnet/standard/threading/how-to-listen-for-cancellation-requests-by-polling) parameter is used throughout the entire operation, across all network requests. In between network requests, the cancellation token might be checked and an operation canceled if the related token is expired. The cancellation token should be used to define an approximate expected timeout on the operation scope.
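-
-A minimal sketch that combines both knobs (the `connectionString` variable, names, and timeout values are illustrative):
-
-```csharp
-using System;
-using System.Threading;
-using Microsoft.Azure.Cosmos;
-
-// Per-network-request timeout, applied on every retry.
-CosmosClient client = new CosmosClient(
-    connectionString,
-    new CosmosClientOptions { RequestTimeout = TimeSpan.FromSeconds(10) });
-
-Container container = client.GetContainer("database", "container");
-
-// Approximate end-to-end budget for the whole operation, across all retries.
-using CancellationTokenSource cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
-
-ItemResponse<dynamic> response = await container.ReadItemAsync<dynamic>(
-    "item-id",
-    new PartitionKey("pk-value"),
-    cancellationToken: cts.Token);
-```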
-
-> [!NOTE]
-> The `CancellationToken` parameter is a mechanism where the library will check the cancellation when it [won't cause an invalid state](https://devblogs.microsoft.com/premier-developer/recommended-patterns-for-cancellationtoken/). The operation might not cancel exactly when the time defined in the cancellation is up. Instead, after the time is up, it cancels when it's safe to do so.
-
-## Troubleshooting steps
-
-The following list contains known causes and solutions for request timeout exceptions.
-
-### CosmosOperationCanceledException
-
-This type of exception is common when your application is passing [CancellationTokens](#cancellationtoken) to the SDK operations. The SDK checks the state of the `CancellationToken` in-between [retries](conceptual-resilient-sdk-applications.md#should-my-application-retry-on-errors) and if the `CancellationToken` is canceled, it will abort the current operation with this exception.
-
-The exception's `Message` / `ToString()` will also indicate the state of your `CancellationToken` through `Cancellation Token has expired: true` and it will also contain [Diagnostics](troubleshoot-dot-net-sdk.md#capture-diagnostics) that contain the context of the cancellation for the involved requests.
-
-These exceptions are safe to retry on and can be treated as [timeouts](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503) from the retrying perspective.
-
-#### Solution
-
-Verify the configured time in your `CancellationToken`, make sure that it's greater than your [RequestTimeout](#requesttimeout) and the [CosmosClientOptions.OpenTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.opentcpconnectiontimeout) (if you're using [Direct mode](sql-sdk-connection-modes.md)).
-If the available time in the `CancellationToken` is less than the configured timeouts, and the SDK is facing [transient connectivity issues](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503), the SDK won't be able to retry and will throw `CosmosOperationCanceledException`.
-
-### High CPU utilization
-
-High CPU utilization is the most common case. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries, where a single query might open multiple connections.
-
-# [3.21 and 2.16 or greater SDK](#tab/cpu-new)
-
-The timeouts will contain *Diagnostics*, which contain:
-
-```json
-"systemHistory": [
-{
-"dateUtc": "2021-11-17T23:38:28.3115496Z",
-"cpu": 16.731,
-"memory": 9024120.000,
-"threadInfo": {
-"isThreadStarving": "False",
-....
-}
-
-},
-{
-"dateUtc": "2021-11-17T23:38:28.3115496Z",
-"cpu": 16.731,
-"memory": 9024120.000,
-"threadInfo": {
-"isThreadStarving": "False",
-....
-}
-
-},
-...
-]
-```
-
-* If the `cpu` values are over 70 percent, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
-* If the `threadInfo/isThreadStarving` nodes have `True` values, the cause is thread starvation. In this case, the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.
-* If the `dateUtc` time between measurements isn't approximately 10 seconds, it also indicates contention on the thread pool. CPU is measured as an independent task that is enqueued in the thread pool every 10 seconds. If the time between measurements is longer, it indicates that the async tasks aren't able to be processed in a timely fashion. The most common scenario is when the application code is doing [blocking calls over async code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait).
-
-# [Older SDK](#tab/cpu-old)
-
-If the error contains `TransportException` information, it might also contain `CPU history`:
-
-```
-CPU history:
-(2020-08-28T00:40:09.1769900Z 0.114),
-(2020-08-28T00:40:19.1763818Z 1.732),
-(2020-08-28T00:40:29.1759235Z 0.000),
-(2020-08-28T00:40:39.1763208Z 0.063),
-(2020-08-28T00:40:49.1767057Z 0.648),
-(2020-08-28T00:40:59.1689401Z 0.137),
-CPU count: 8)
-```
-
-* If the CPU measurements are over 70 percent, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
-* If the CPU measurements aren't happening every 10 seconds (for example, there are gaps, or measurement times indicate longer intervals between measurements), the cause is thread starvation. In this case, the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.
---
-#### Solution
-
-The client application that uses the SDK should be scaled up or out.
-
-### Socket or port availability might be low
-
-When running in Azure, clients using the .NET SDK can hit Azure SNAT (PAT) port exhaustion.
-
-#### Solution 1
-
-If you're running on Azure VMs, follow the [SNAT port exhaustion guide](troubleshoot-dot-net-sdk.md#snat).
-
-#### Solution 2
-
-If you're running on Azure App Service, follow the [connection errors troubleshooting guide](../../app-service/troubleshoot-intermittent-outbound-connection-errors.md#cause) and [use App Service diagnostics](https://azure.github.io/AppService/2018/03/01/Deep-Dive-into-TCP-Connections-in-App-Service-Diagnostics.html).
-
-#### Solution 3
-
-If you're running on Azure Functions, verify you're following the [Azure Functions recommendation](../../azure-functions/manage-connections.md#static-clients) of maintaining singleton or static clients for all of the involved services (including Azure Cosmos DB). Check the [service limits](../../azure-functions/functions-scale.md#service-limits) based on the type and size of your Function App hosting.
-
-#### Solution 4
-
-If you use an HTTP proxy, make sure it can support the number of connections configured in the SDK `ConnectionPolicy`. Otherwise, you'll face connection issues.
-
-### Create multiple client instances
-
-Creating multiple client instances might lead to connection contention and timeout issues.
-
-#### Solution
-
-Follow the [performance tips](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage), and use a single `CosmosClient` instance across the entire process.
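-
-One common way to do this (a sketch, not the only valid pattern) is to hold a single lazily initialized instance for the lifetime of the process. The environment variable name is a placeholder.
-
-```csharp
-using System;
-using Microsoft.Azure.Cosmos;
-
-public static class CosmosClientHolder
-{
-    // Sketch: one CosmosClient per process, reused by all operations.
-    private static readonly Lazy<CosmosClient> lazyClient = new Lazy<CosmosClient>(
-        () => new CosmosClient(Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING")));
-
-    public static CosmosClient Client => lazyClient.Value;
-}
-```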
-
-### Hot partition key
-
-Azure Cosmos DB distributes the overall provisioned throughput evenly across physical partitions. When there's a hot partition, one or more logical partition keys on a physical partition are consuming all the physical partition's Request Units per second (RU/s). At the same time, the RU/s on other physical partitions are going unused. As a symptom, the total RU/s consumed will be less than the overall provisioned RU/s at the database or container, but you'll still see throttling (429s) on the requests against the hot logical partition key. Use the [Normalized RU Consumption metric](../monitor-normalized-request-units.md) to see if the workload is encountering a hot partition.
-
-#### Solution
-
-Choose a good partition key that evenly distributes request volume and storage. Learn how to [change your partition key](https://devblogs.microsoft.com/cosmosdb/how-to-change-your-partition-key/).
-
-### High degree of concurrency
-
-The application is doing a high level of concurrency, which can lead to contention on the channel.
-
-#### Solution
-
-The client application that uses the SDK should be scaled up or out.
-
-### Large requests or responses
-
-Large requests or responses can lead to head-of-line blocking on the channel and exacerbate contention, even with a relatively low degree of concurrency.
-
-#### Solution
-The client application that uses the SDK should be scaled up or out.
-
-### Failure rate is within the Azure Cosmos DB SLA
-
-The application should be able to handle transient failures and retry when necessary. 408 exceptions aren't retried on create paths because it's impossible to know whether the service created the item. Sending the same item again for create causes a conflict exception. A user application's business logic might have custom handling for conflicts, which would break because of the ambiguity between an existing item and a conflict caused by a create retry.
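-
-As an illustration only (a hedged sketch, not guidance from the service), application code that must retry creates after a timeout can disambiguate by reading the item before resending it:
-
-```csharp
-using System.Net;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-public static class CreateRetryExample
-{
-    // Sketch: after a 408 on a create, check whether the item already exists before
-    // resending it, so a 409 Conflict isn't misread as a business-logic conflict.
-    public static async Task EnsureCreatedAsync(Container container, object item, string id, string pk)
-    {
-        try
-        {
-            await container.CreateItemAsync(item, new PartitionKey(pk));
-        }
-        catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.RequestTimeout)
-        {
-            // The service might or might not have persisted the item.
-            try
-            {
-                await container.ReadItemAsync<object>(id, new PartitionKey(pk));
-                // The item exists, so the original create succeeded despite the timeout.
-            }
-            catch (CosmosException readEx) when (readEx.StatusCode == HttpStatusCode.NotFound)
-            {
-                await container.CreateItemAsync(item, new PartitionKey(pk));
-            }
-        }
-    }
-}
-```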
-
-### Failure rate violates the Azure Cosmos DB SLA
-
-Contact [Azure Support](https://aka.ms/azure-support).
-
-## Next steps
-
-* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
-* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
- Title: Troubleshoot slow requests in Azure Cosmos DB .NET SDK
-description: Learn how to diagnose and fix slow requests when you use Azure Cosmos DB .NET SDK.
--- Previously updated : 08/30/2022-----
-# Diagnose and troubleshoot slow requests in Azure Cosmos DB .NET SDK
--
-In Azure Cosmos DB, you might notice slow requests. Delays can happen for multiple reasons, such as request throttling or the way your application is designed. This article explains the different root causes for this problem.
-
-## Request rate too large
-
-Request throttling is the most common reason for slow requests. Azure Cosmos DB throttles requests if they exceed the allocated request units for the database or container. The SDK has built-in logic to retry these requests. The [request rate too large](troubleshoot-request-rate-too-large.md#how-to-investigate) troubleshooting article explains how to check if the requests are being throttled. The article also discusses how to scale your account to avoid these problems in the future.
-
-## Application design
-
-When you design your application, [follow the .NET SDK best practices](performance-tips-dotnet-sdk-v3-sql.md) for the best performance. If your application doesn't follow the SDK best practices, you might get slow or failed requests.
-
-Consider the following when developing your application:
-
-* The application should be in the same region as your Azure Cosmos DB account.
-* Your [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion) or [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions) should reflect your regional preference and point to the region your application is deployed in (see the configuration sketch after this list).
-* There might be a bottleneck on the network interface because of high traffic. If the application is running on Azure Virtual Machines, there are possible workarounds:
- * Consider using a [Virtual Machine with Accelerated Networking enabled](../../virtual-network/create-vm-accelerated-networking-powershell.md).
- * Enable [Accelerated Networking on an existing Virtual Machine](../../virtual-network/create-vm-accelerated-networking-powershell.md#enable-accelerated-networking-on-existing-vms).
- * Consider using a [higher end Virtual Machine](../../virtual-machines/sizes.md).
-* Prefer [direct connectivity mode](sql-sdk-connection-modes.md).
-* Avoid high CPU. Make sure to look at the maximum CPU and not the average, which is the default for most logging systems. Anything above roughly 40 percent can increase the latency.
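-
-To make the region and connection-mode bullets concrete, here's a minimal client configuration sketch. The region constant and credential placeholders are illustrative assumptions, not recommendations.
-
-```csharp
-using Microsoft.Azure.Cosmos;
-
-// Sketch: pin the client to the region the application is deployed in and prefer Direct mode.
-CosmosClientOptions options = new CosmosClientOptions
-{
-    ApplicationRegion = Regions.WestUS2,
-    ConnectionMode = ConnectionMode.Direct
-};
-
-CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>", options);
-```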
-
-## Metadata operations
-
-If you need to verify that a database or container exists, don't do so by calling `Create...IfNotExistsAsync` or `Read...Async` before each item operation. Do that validation only on application startup, and only when necessary (for example, if you expect the resources to be deleted). These metadata operations generate extra latency, have no service-level agreement (SLA), and have their own separate [limitations](./troubleshoot-request-rate-too-large.md#rate-limiting-on-metadata-requests). They don't scale like data operations.
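-
-A hedged sketch of that pattern follows: run the metadata calls once at startup, then take lightweight container references for the hot path. The database, container, and partition key path names are placeholders.
-
-```csharp
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-public static class StartupExample
-{
-    // Sketch: perform the metadata operations once, at application startup.
-    public static async Task<Container> InitializeAsync(CosmosClient client)
-    {
-        Database database = await client.CreateDatabaseIfNotExistsAsync("mydb");
-        Container container = await database.CreateContainerIfNotExistsAsync("orders", "/pk");
-        return container;
-    }
-
-    // Sketch: on the hot path, take a container reference without any network call.
-    public static Container GetOrdersContainer(CosmosClient client) =>
-        client.GetContainer("mydb", "orders");
-}
-```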
-
-## Slow requests on bulk mode
-
-[Bulk mode](tutorial-sql-api-dotnet-bulk-import.md) is a throughput-optimized mode meant for high data volume operations, not a latency-optimized mode; it's meant to saturate the available throughput. If you're experiencing slow requests when using bulk mode, make sure that:
-
-* Your application is compiled in Release configuration.
-* You aren't measuring latency while debugging the application (no debuggers attached).
-* The volume of operations is high. Don't use bulk mode for fewer than 1,000 operations. Your provisioned throughput dictates how many operations per second you can process, and your goal with bulk mode is to utilize as much of it as possible.
-* You monitor the container for [throttling scenarios](troubleshoot-request-rate-too-large.md). If the container is getting heavily throttled, the volume of data is larger than your provisioned throughput can handle. Either scale up the container, or reduce the volume of data (for example, create smaller batches of data at a time).
-* You're correctly using the `async/await` pattern to [process all concurrent Tasks](tutorial-sql-api-dotnet-bulk-import.md#step-6-populate-a-list-of-concurrent-tasks) and not [blocking any async operation](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait). A sketch of this pattern follows this list.
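-
-The following sketch, referenced from the last bullet, shows the general shape under these assumptions: bulk mode is enabled on the client, the operation volume is large, and each element carries its own partition key value.
-
-```csharp
-using System.Collections.Generic;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-public static class BulkInsertExample
-{
-    // Sketch: enable bulk mode so the SDK can group concurrent operations.
-    public static CosmosClient CreateBulkClient(string connectionString) =>
-        new CosmosClient(connectionString, new CosmosClientOptions { AllowBulkExecution = true });
-
-    // Sketch: create one Task per item and await them all; never block with .Result or .Wait().
-    public static Task InsertAsync<T>(Container container, IEnumerable<(T Item, string Pk)> items)
-    {
-        List<Task> tasks = new List<Task>();
-        foreach ((T item, string pk) in items)
-        {
-            tasks.Add(container.CreateItemAsync(item, new PartitionKey(pk)));
-        }
-        return Task.WhenAll(tasks);
-    }
-}
-```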
-
-## Capture diagnostics
--
-## Diagnostics in version 3.19 and later
-
-The JSON structure has breaking changes with each version of the SDK, which makes it unsafe to parse. The JSON represents a tree structure of the request going through the SDK. The following sections cover a few key things to look at.
-
-### CPU history
-
-High CPU utilization is the most common cause of slow requests. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries, where a single query might open multiple connections.
-
-# [3.21 or later SDK](#tab/cpu-new)
-
-The timeouts include diagnostics, which contain the following, for example:
-
-```json
-"systemHistory": [
-{
-"dateUtc": "2021-11-17T23:38:28.3115496Z",
-"cpu": 16.731,
-"memory": 9024120.000,
-"threadInfo": {
-"isThreadStarving": "False",
-....
-}
-
-},
-{
-"dateUtc": "2021-11-17T23:38:38.3115496Z",
-"cpu": 16.731,
-"memory": 9024120.000,
-"threadInfo": {
-"isThreadStarving": "False",
-....
-}
-
-},
-...
-]
-```
-
-* If the `cpu` values are over 70 percent, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
-* If the `threadInfo/isThreadStarving` nodes have `True` values, the cause is thread starvation. In this case, the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.
-* If the `dateUtc` time between measurements isn't approximately 10 seconds, it also indicates contention on the thread pool. CPU is measured as an independent task that is enqueued in the thread pool every 10 seconds. If the time between measurements is longer, it indicates that the async tasks aren't able to be processed in a timely fashion. The most common scenario is when your application code is [blocking calls over async code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait).
-
-# [Older SDK](#tab/cpu-old)
-
-If the error contains `TransportException` information, it might also contain `CPU history`:
-
-```
-CPU history:
-(2020-08-28T00:40:09.1769900Z 0.114),
-(2020-08-28T00:40:19.1763818Z 1.732),
-(2020-08-28T00:40:29.1759235Z 0.000),
-(2020-08-28T00:40:39.1763208Z 0.063),
-(2020-08-28T00:40:49.1767057Z 0.648),
-(2020-08-28T00:40:59.1689401Z 0.137),
-CPU count: 8)
-```
-
-* If the CPU measurements are over 70 percent, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
-* If the CPU measurements aren't happening every 10 seconds (for example, there are gaps, or measurement times indicate longer times in between measurements), the cause is thread starvation. In this case the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.
---
-#### Solution
-
-The client application that uses the SDK should be scaled up or out.
-
-### HttpResponseStats
-
-`HttpResponseStats` are requests that go to the [gateway](sql-sdk-connection-modes.md). Even in direct mode, the SDK gets all the metadata information from the gateway.
-
-If the request is slow, first verify that the previous suggestions don't resolve the problem. If the request is still slow, different patterns point to different problems. The following table provides more details.
-
-| Number of requests | Scenario | Description |
-|-|-|-|
-| Single to all | Request timeout or `HttpRequestExceptions` | Points to [SNAT port exhaustion](troubleshoot-dot-net-sdk.md#snat), or a lack of resources on the machine to process the request in time. |
-| Single or small percentage (SLA isn't violated) | All | A single or small percentage of slow requests can be caused by several different transient problems, and should be expected. |
-| All | All | Points to a problem with the infrastructure or networking. |
-| SLA violated | No changes to application, and SLA dropped. | Points to a problem with the Azure Cosmos DB service. |
-
-```json
-"HttpResponseStats": [
- {
- "StartTimeUTC": "2021-06-15T13:53:09.7961124Z",
- "EndTimeUTC": "2021-06-15T13:53:09.7961127Z",
- "RequestUri": "https://127.0.0.1:8081/dbs/347a8e44-a550-493e-88ee-29a19c070ecc/colls/4f72e752-fa91-455a-82c1-bf253a5a3c4e",
- "ResourceType": "Collection",
- "HttpMethod": "GET",
- "ActivityId": "e16e98ec-f2e3-430c-b9e9-7d99e58a4f72",
- "StatusCode": "OK"
- }
-]
-```
-
-### StoreResult
-
-`StoreResult` represents a single request to Azure Cosmos DB, by using direct mode with the TCP protocol.
-
-If it's still slow, different patterns point to different problems. The following table provides more details.
-
-| Number of requests | Scenario | Description |
-|-|-|-|
-| Single to all | `StoreResult` contains `TransportException` | Points to [SNAT port exhaustion](troubleshoot-dot-net-sdk.md#snat), or a lack of resources on the machine to process the request in time. |
-| Single or small percentage (SLA isn't violated) | All | A single or small percentage of slow requests can be caused by several different transient problems, and should be expected. |
-| All | All | A problem with the infrastructure or networking. |
-| SLA violated | Requests contain multiple failure error codes, like `410` and `IsValid is true`. | Points to a problem with the Azure Cosmos DB service. |
-| SLA violated | Requests contain multiple failure error codes, like `410` and `IsValid is false`. | Points to a problem with the machine. |
-| SLA violated | `StorePhysicalAddress` are the same, with no failure status code. | Likely a problem with Azure Cosmos DB. |
-| SLA violated | `StorePhysicalAddress` have the same partition ID, but different replica IDs, with no failure status code. | Likely a problem with Azure Cosmos DB. |
-| SLA violated | `StorePhysicalAddress` is random, with no failure status code. | Points to a problem with the machine. |
-
-For multiple store results for a single request, be aware of the following:
-
-* Strong consistency and bounded staleness consistency always have at least two store results.
-* Check the status code of each `StoreResult`. The SDK retries automatically on multiple different [transient failures](troubleshoot-dot-net-sdk-request-timeout.md). The SDK is constantly improved to cover more scenarios.
-
-### RntbdRequestStats
-
-Shows the time spent in the different stages of sending and receiving a request in the transport layer.
-
-* `ChannelAcquisitionStarted`: The time to get or create a new connection. New connections can be created for numerous reasons. For example, a connection was unexpectedly closed, or too many requests were being sent through the existing connections, so a new connection is created.
-* A large *pipelined* time might be caused by a large request.
-* A large *transit* time points to a networking problem. Compare this number to the `BELatencyInMs`. If `BELatencyInMs` is small, then the time was spent on the network, and not on the Azure Cosmos DB service.
-* A large *received* time might be caused by a thread starvation problem. This is the time between having the response and returning the result.
-
-### ServiceEndpointStatistics
-
-Information about a particular backend server. The SDK can open multiple connections to a single backend server depending upon the number of pending requests and the MaxConcurrentRequestsPerConnection.
-
-* `inflightRequests`: The number of pending requests to a backend server (possibly from different partitions). A high number may lead to more traffic and higher latencies.
-* `openConnections`: The total number of connections open to a single backend server. A very high number can indicate SNAT port exhaustion.
-
-### ConnectionStatistics
-
-Information about the particular connection (new or existing) that the request gets assigned to.
-
-* `waitforConnectionInit`: Indicates whether the current request waited for a new connection initialization to complete. Waiting leads to higher latencies.
-* `callsPendingReceive`: The number of calls that were pending receive before this call was sent. A high number shows that many calls were queued ahead of this one and may lead to higher latencies. It points to a head-of-line blocking issue, possibly caused by another request, such as a query or feed operation, that takes a long time to process. Try lowering `CosmosClientOptions.MaxRequestsPerTcpConnection` to increase the number of channels.
-* `lastSend`: The time of the last request that was sent to this server. Together with `lastReceive`, this can be used to spot connectivity or endpoint issues. For example, if there are many receive timeouts, the send time will be much later than the receive time.
-* `lastReceive`: The time of the last request that was received from this server.
-* `lastSendAttempt`: The time of the last send attempt.
-
-### Request and response sizes
-
-* `requestSizeInBytes`: The total size of the request sent to Cosmos DB
-* `responseMetadataSizeInBytes`: The size of headers returned from Cosmos DB
-* `responseBodySizeInBytes`: The size of content returned from Cosmos DB
-
-```json
-"StoreResult": {
- "ActivityId": "bab6ade1-b8de-407f-b89d-fa2138a91284",
- "StatusCode": "Ok",
- "SubStatusCode": "Unknown",
- "LSN": 453362,
- "PartitionKeyRangeId": "1",
- "GlobalCommittedLSN": 0,
- "ItemLSN": 453358,
- "UsingLocalLSN": true,
- "QuorumAckedLSN": -1,
- "SessionToken": "-1#453362",
- "CurrentWriteQuorum": -1,
- "CurrentReplicaSetSize": -1,
- "NumberOfReadRegions": 0,
- "IsValid": true,
- "StorePhysicalAddress": "rntbd://127.0.0.1:10253/apps/DocDbApp/services/DocDbServer92/partitions/a4cb49a8-38c8-11e6-8106-8cdcd42c33be/replicas/1s/",
- "RequestCharge": 1,
- "RetryAfterInMs": null,
- "BELatencyInMs": "0.304",
- "transportRequestTimeline": {
- "requestTimeline": [
- {
- "event": "Created",
- "startTimeUtc": "2022-05-25T12:03:36.3081190Z",
- "durationInMs": 0.0024
- },
- {
- "event": "ChannelAcquisitionStarted",
- "startTimeUtc": "2022-05-25T12:03:36.3081214Z",
- "durationInMs": 0.0132
- },
- {
- "event": "Pipelined",
- "startTimeUtc": "2022-05-25T12:03:36.3081346Z",
- "durationInMs": 0.0865
- },
- {
- "event": "Transit Time",
- "startTimeUtc": "2022-05-25T12:03:36.3082211Z",
- "durationInMs": 1.3324
- },
- {
- "event": "Received",
- "startTimeUtc": "2022-05-25T12:03:36.3095535Z",
- "durationInMs": 12.6128
- },
- {
- "event": "Completed",
- "startTimeUtc": "2022-05-25T12:03:36.8621663Z",
- "durationInMs": 0
- }
- ],
- "serviceEndpointStats": {
- "inflightRequests": 1,
- "openConnections": 1
- },
- "connectionStats": {
- "waitforConnectionInit": "False",
- "callsPendingReceive": 0,
- "lastSendAttempt": "2022-05-25T12:03:34.0222760Z",
- "lastSend": "2022-05-25T12:03:34.0223280Z",
- "lastReceive": "2022-05-25T12:03:34.0257728Z"
- },
- "requestSizeInBytes": 447,
- "responseMetadataSizeInBytes": 438,
- "responseBodySizeInBytes": 604
- },
- "TransportException": null
-}
-```
-
-### Failure rate violates the Azure Cosmos DB SLA
-
-Contact [Azure support](https://aka.ms/azure-support).
-
-## Next steps
-
-* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) problems when you use the Azure Cosmos DB .NET SDK.
-* Learn about performance guidelines for the [.NET SDK](performance-tips-dotnet-sdk-v3-sql.md).
-* Learn about the best practices for the [.NET SDK](best-practice-dotnet.md)
cosmos-db Troubleshoot Dot Net Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk.md
- Title: Diagnose and troubleshoot issues when using Azure Cosmos DB .NET SDK
-description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues when using .NET SDK.
-- Previously updated : 09/01/2022------
-# Diagnose and troubleshoot issues when using Azure Cosmos DB .NET SDK
-
-> [!div class="op_single_selector"]
-> * [Java SDK v4](troubleshoot-java-sdk-v4-sql.md)
-> * [Async Java SDK v2](troubleshoot-java-async-sdk.md)
-> * [.NET](troubleshoot-dot-net-sdk.md)
->
-
-This article covers common issues, workarounds, diagnostic steps, and tools when you use the [.NET SDK](sql-api-sdk-dotnet.md) with Azure Cosmos DB SQL API accounts.
-The .NET SDK provides client-side logical representation to access the Azure Cosmos DB SQL API. This article describes tools and approaches to help you if you run into any issues.
-
-## Checklist for troubleshooting issues
-
-Consider the following checklist before you move your application to production. Using the checklist will prevent several common issues you might see. You can also quickly diagnose when an issue occurs:
-
-* Use the latest [SDK](sql-api-sdk-dotnet-standard.md). Preview SDKs shouldn't be used for production. This will prevent hitting known issues that are already fixed.
-* Review the [performance tips](performance-tips-dotnet-sdk-v3-sql.md), and follow the suggested practices. This will help prevent scaling, latency, and other performance issues.
-* Enable SDK logging to help you troubleshoot an issue. Enabling logging may affect performance, so it's best to enable it only when troubleshooting issues. You can enable the following logs:
- * [Log metrics](../monitor-cosmos-db.md) by using the Azure portal. Portal metrics show the Azure Cosmos DB telemetry, which is helpful to determine if the issue corresponds to Azure Cosmos DB or if it's from the client side.
- * Log the [diagnostics string](#capture-diagnostics) from the operations and/or exceptions (a logging sketch follows this list).
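-
-For instance (a sketch with an arbitrary 500-millisecond threshold), you might log the diagnostics string only when a point read is slower than your latency goal:
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-
-public static class DiagnosticsLoggingExample
-{
-    // Sketch: capture the diagnostics string when an operation is slower than expected.
-    // The 500 ms threshold is arbitrary; pick one that matches your latency goals.
-    public static async Task<T> ReadAndLogAsync<T>(Container container, string id, string pk)
-    {
-        ItemResponse<T> response = await container.ReadItemAsync<T>(id, new PartitionKey(pk));
-        if (response.Diagnostics.GetClientElapsedTime() > TimeSpan.FromMilliseconds(500))
-        {
-            Console.WriteLine(response.Diagnostics.ToString());
-        }
-        return response.Resource;
-    }
-}
-```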
-
-Take a look at the [Common issues and workarounds](#common-issues-and-workarounds) section in this article.
-
-Check the [GitHub issues section](https://github.com/Azure/azure-cosmos-dotnet-v3/issues), which is actively monitored, to see if any similar issue with a workaround is already filed. If you don't find a solution, file a GitHub issue. You can also open a support ticket for urgent issues.
-
-## Capture diagnostics
--
-## Common issues and workarounds
-
-### General suggestions
-
-* Follow any `aka.ms` link included in the exception details.
-* Run your app in the same Azure region as your Azure Cosmos DB account, whenever possible.
-* You may run into connectivity/availability issues due to lack of resources on your client machine. We recommend monitoring your CPU utilization on nodes running the Azure Cosmos DB client, and scaling up/out if they're running at high load.
-
-### Check the portal metrics
-
-Checking the [portal metrics](../monitor-cosmos-db.md) helps determine whether it's a client-side issue or an issue with the service. For example, if the metrics show a high rate of rate-limited requests (HTTP status code 429), which means the requests are getting throttled, check the [Request rate too large](troubleshoot-request-rate-too-large.md) section.
-
-### Retry design
-
-See our guide to [designing resilient applications with Azure Cosmos SDKs](conceptual-resilient-sdk-applications.md) for guidance on how to design resilient applications and to learn about the retry semantics of the SDK.
-
-### SNAT
-
-If your app is deployed on [Azure Virtual Machines without a public IP address](../../load-balancer/load-balancer-outbound-connections.md), by default [Azure SNAT ports](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports) establish connections to any endpoint outside of your VM. The number of connections allowed from the VM to the Azure Cosmos DB endpoint is limited by the [Azure SNAT configuration](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports). This situation can lead to connection throttling, connection closure, or the above mentioned [Request timeouts](troubleshoot-dot-net-sdk-request-timeout.md).
-
- Azure SNAT ports are used only when your VM has a private IP address and a process from the VM connects to a public IP address. There are two workarounds to avoid the Azure SNAT limitation (provided you're already using a single client instance across the entire application):
-
-* Add your Azure Cosmos DB service endpoint to the subnet of your Azure Virtual Machines virtual network. For more information, see [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).
-
- When the service endpoint is enabled, the requests are no longer sent from a public IP to Azure Cosmos DB. Instead, the virtual network and subnet identity are sent. This change might result in firewall drops if only public IPs are allowed. If you use a firewall, when you enable the service endpoint, add a subnet to the firewall by using [Virtual Network ACLs](/previous-versions/azure/virtual-network/virtual-networks-acl).
-* Assign a [public IP to your Azure VM](../../load-balancer/troubleshoot-outbound-connection.md#configure-an-individual-public-ip-on-vm).
-
-### High network latency
-
-See our [latency troubleshooting guide](troubleshoot-dot-net-sdk-slow-request.md) for details on latency troubleshooting.
-
-### Proxy authentication failures
-
-If you see errors that show as HTTP 407:
-
-```
-Response status code does not indicate success: ProxyAuthenticationRequired (407);
-```
-
-This error isn't generated by the SDK, nor does it come from the Azure Cosmos DB service. It's related to networking configuration: a proxy in your network configuration is most likely missing the required proxy authentication. If you're not expecting to use a proxy, reach out to your network team. If you *are* using a proxy, make sure you're setting the right [WebProxy](/dotnet/api/system.net.webproxy) configuration on [CosmosClientOptions.WebProxy](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.webproxy) when creating the client instance.
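-
-For example (a sketch; the proxy address and credential choice are placeholders), the proxy can be supplied when the client is created:
-
-```csharp
-using System.Net;
-using Microsoft.Azure.Cosmos;
-
-// Sketch: route the SDK's HTTP traffic through an authenticated proxy.
-CosmosClientOptions options = new CosmosClientOptions
-{
-    WebProxy = new WebProxy("http://proxy.contoso.com:8080")
-    {
-        Credentials = CredentialCache.DefaultNetworkCredentials
-    }
-};
-
-CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>", options);
-```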
-
-### Common query issues
-
-The [query metrics](sql-api-query-metrics.md) help determine where the query is spending most of its time. From the query metrics, you can see how much of the time is being spent on the back end versus the client. Learn more in the [query performance guide](performance-tips-query-sdk.md?pivots=programming-language-csharp).
-
-If you encounter the following error: `Unable to load DLL 'Microsoft.Azure.Cosmos.ServiceInterop.dll' or one of its dependencies:` and are using Windows, you should upgrade to the latest Windows version.
-
-## Next steps
-
-* Learn about Performance guidelines for the [.NET SDK](performance-tips-dotnet-sdk-v3-sql.md)
-* Learn about the best practices for the [.NET SDK](best-practice-dotnet.md)
-
- <!--Anchors-->
-[Common issues and workarounds]: #common-issues-workarounds
-[Azure SNAT (PAT) port exhaustion]: #snat
-[Production check list]: #production-check-list
cosmos-db Troubleshoot Forbidden https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-forbidden.md
- Title: Troubleshoot Azure Cosmos DB forbidden exceptions
-description: Learn how to diagnose and fix forbidden exceptions.
--- Previously updated : 04/14/2022-----
-# Diagnose and troubleshoot Azure Cosmos DB forbidden exceptions
-
-The HTTP status code 403 indicates that the request is forbidden to complete.
-
-## Firewall blocking requests
-
-Data plane requests can come to Cosmos DB via the following three paths.
-
-- Public internet (IPv4)
-- Service endpoint
-- Private endpoint
-
-When a data plane request is blocked with 403 Forbidden, the error message will specify via which of the above three paths the request came to Cosmos DB.
-
-- `Request originated from client IP {...} through public internet.`
-- `Request originated from client VNET through service endpoint.`
-- `Request originated from client VNET through private endpoint.`
-
-### Solution
-
-Understand which path the request is **expected** to take to Azure Cosmos DB.
- - If the error message shows that the request didn't come to Azure Cosmos DB via the expected path, the issue is likely with the client-side setup. Double-check your client-side setup against the following documentation.
- - Public internet: [Configure IP firewall in Azure Cosmos DB](../how-to-configure-firewall.md).
- - Service endpoint: [Configure access to Azure Cosmos DB from virtual networks (VNet)](../how-to-configure-vnet-service-endpoint.md). For example, if you expect to use a service endpoint but the request came to Azure Cosmos DB via the public internet, the subnet that the client runs in might not have the service endpoint to Azure Cosmos DB enabled.
- - Private endpoint: [Configure Azure Private Link for an Azure Cosmos account](../how-to-configure-private-endpoints.md). For example, if you expect to use a private endpoint but the request came to Azure Cosmos DB via the public internet, the DNS on the VM might not be configured to resolve the account endpoint to the private IP, so the request went through the account's public IP instead.
- - If the request came to Azure Cosmos DB via the expected path, it was blocked because the source network identity isn't allowed for the account. Check the account's settings depending on the path the request took to Azure Cosmos DB.
- - Public internet: check account's [public network access](../how-to-configure-private-endpoints.md#blocking-public-network-access-during-account-creation) and IP range filter configurations.
- - Service endpoint: check account's [public network access](../how-to-configure-private-endpoints.md#blocking-public-network-access-during-account-creation) and VNET filter configurations.
- - Private endpoint: check account's private endpoint configuration and client's private DNS configuration. This could be due to accessing account from a private endpoint that is set up for a different account.
-
-If you recently updated account's firewall configurations, keep in mind that changes can take **up to 15 minutes to apply**.
-
-## Partition key exceeding storage
-In this scenario, it's common to see errors like the following:
-
-```
-Response status code does not indicate success: Forbidden (403); Substatus: 1014
-```
-
-```
-Partition key reached maximum size of {...} GB
-```
-
-### Solution
-This error means that your current [partitioning design](../partitioning-overview.md#logical-partitions) and workload are trying to store more than the allowed amount of data for a given partition key value. There's no limit to the number of logical partitions in your container, but the size of data that each logical partition can store is limited. You can reach out to support for clarification.
-
-## Non-data operations are not allowed
-This scenario happens when you [attempt to perform non-data operations](../how-to-setup-rbac.md#permission-model) by using Azure Active Directory (Azure AD) identities. In this scenario, it's common to see errors like the following:
-
-```
-Operation 'POST' on resource 'calls' is not allowed through Azure Cosmos DB endpoint
-```
-```
-Forbidden (403); Substatus: 5300; The given request [PUT ...] cannot be authorized by AAD token in data plane.
-```
-
-### Solution
-Perform the operation through Azure Resource Manager, Azure portal, Azure CLI, or Azure PowerShell.
-If you're using the [Azure Functions Cosmos DB Trigger](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md), make sure the `CreateLeaseContainerIfNotExists` property of the trigger isn't set to `true`. Using Azure AD identities blocks any non-data operation, such as creating the lease container.
-
-## Next steps
-* Configure [IP Firewall](../how-to-configure-firewall.md).
-* Configure access from [virtual networks](../how-to-configure-vnet-service-endpoint.md).
-* Configure access from [private endpoints](../how-to-configure-private-endpoints.md).
cosmos-db Troubleshoot Java Async Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-java-async-sdk.md
- Title: Diagnose and troubleshoot Azure Cosmos DB Async Java SDK v2
-description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues in Async Java SDK v2.
-- Previously updated : 05/11/2020-------
-# Troubleshoot issues when you use the Azure Cosmos DB Async Java SDK v2 with SQL API accounts
-
-> [!div class="op_single_selector"]
-> * [Java SDK v4](troubleshoot-java-sdk-v4-sql.md)
-> * [Async Java SDK v2](troubleshoot-java-async-sdk.md)
-> * [.NET](troubleshoot-dot-net-sdk.md)
->
-
-> [!IMPORTANT]
-> This is *not* the latest Java SDK for Azure Cosmos DB! You should upgrade your project to [Azure Cosmos DB Java SDK v4](sql-api-sdk-java-v4.md) and then read the Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4-sql.md). Follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide to upgrade.
->
-> This article covers troubleshooting for Azure Cosmos DB Async Java SDK v2 only. See the Azure Cosmos DB Async Java SDK v2 [Release Notes](sql-api-sdk-async-java.md), [Maven repository](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb) and [performance tips](performance-tips-async-java.md) for more information.
->
-
-> [!IMPORTANT]
-> On August 31, 2024 the Azure Cosmos DB Async Java SDK v2.x
-> will be retired; the SDK and all applications using the SDK
-> **will continue to function**; Azure Cosmos DB will simply cease
-> to provide further maintenance and support for this SDK.
-> We recommend following the instructions above to migrate to
-> Azure Cosmos DB Java SDK v4.
->
-
-This article covers common issues, workarounds, diagnostic steps, and tools when you use the [Java Async SDK](sql-api-sdk-async-java.md) with Azure Cosmos DB SQL API accounts.
-The Java Async SDK provides client-side logical representation to access the Azure Cosmos DB SQL API. This article describes tools and approaches to help you if you run into any issues.
-
-Start with this list:
-
-* Take a look at the [Common issues and workarounds] section in this article.
-* Look at the SDK, which is available [open source on GitHub](https://github.com/Azure/azure-cosmosdb-java). It has an [issues section](https://github.com/Azure/azure-cosmosdb-java/issues) that's actively monitored. Check to see if any similar issue with a workaround is already filed.
-* Review the [performance tips](performance-tips-async-java.md), and follow the suggested practices.
-* Read the rest of this article, if you didn't find a solution. Then file a [GitHub issue](https://github.com/Azure/azure-cosmosdb-java/issues).
-
-## <a name="common-issues-workarounds"></a>Common issues and workarounds
-
-### Network issues, Netty read timeout failure, low throughput, high latency
-
-#### General suggestions
-* Make sure the app is running on the same region as your Azure Cosmos DB account.
-* Check the CPU usage on the host where the app is running. If CPU usage is 90 percent or more, run your app on a host with a higher configuration. Or you can distribute the load on more machines.
-
-#### Connection throttling
-Connection throttling can happen because of either a [connection limit on a host machine] or [Azure SNAT (PAT) port exhaustion].
-
-##### <a name="connection-limit-on-host"></a>Connection limit on a host machine
-Some Linux systems, such as Red Hat, have an upper limit on the total number of open files. Sockets in Linux are implemented as files, so this number limits the total number of connections, too.
-Run the following command.
-
-```bash
-ulimit -a
-```
-The number of max allowed open files, which are identified as "nofile," needs to be at least double your connection pool size. For more information, see [Performance tips](performance-tips-async-java.md).
-
-##### <a name="snat"></a>Azure SNAT (PAT) port exhaustion
-
-If your app is deployed on Azure Virtual Machines without a public IP address, by default [Azure SNAT ports](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports) establish connections to any endpoint outside of your VM. The number of connections allowed from the VM to the Azure Cosmos DB endpoint is limited by the [Azure SNAT configuration](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports).
-
- Azure SNAT ports are used only when your VM has a private IP address and a process from the VM tries to connect to a public IP address. There are two workarounds to avoid Azure SNAT limitation:
-
-* Add your Azure Cosmos DB service endpoint to the subnet of your Azure Virtual Machines virtual network. For more information, see [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).
-
- When the service endpoint is enabled, the requests are no longer sent from a public IP to Azure Cosmos DB. Instead, the virtual network and subnet identity are sent. This change might result in firewall drops if only public IPs are allowed. If you use a firewall, when you enable the service endpoint, add a subnet to the firewall by using [Virtual Network ACLs](/previous-versions/azure/virtual-network/virtual-networks-acl).
-* Assign a public IP to your Azure VM.
-
-##### <a name="cant-connect"></a>Can't reach the Service - firewall
-``ConnectTimeoutException`` indicates that the SDK cannot reach the service.
-You may get a failure similar to the following when using the direct mode:
-```
-GoneException{error=null, resourceAddress='https://cdb-ms-prod-westus-fd4.documents.azure.com:14940/apps/e41242a5-2d71-5acb-2e00-5e5f744b12de/services/d8aa21a5-340b-21d4-b1a2-4a5333e7ed8a/partitions/ed028254-b613-4c2a-bf3c-14bd5eb64500/replicas/131298754052060051p//', statusCode=410, message=Message: The requested resource is no longer available at the server., getCauseInfo=[class: class io.netty.channel.ConnectTimeoutException, message: connection timed out: cdb-ms-prod-westus-fd4.documents.azure.com/101.13.12.5:14940]
-```
-
-If you have a firewall running on your app machine, open the port range 10,000 to 20,000, which is used by direct mode.
-Also follow the [Connection limit on a host machine](#connection-limit-on-host).
-
-#### HTTP proxy
-
-If you use an HTTP proxy, make sure it can support the number of connections configured in the SDK `ConnectionPolicy`.
-Otherwise, you face connection issues.
-
-#### Invalid coding pattern: Blocking Netty IO thread
-
-The SDK uses the [Netty](https://netty.io/) IO library to communicate with Azure Cosmos DB. The SDK has Async APIs and uses non-blocking IO APIs of Netty. The SDK's IO work is performed on IO Netty threads. The number of IO Netty threads is configured to be the same as the number of CPU cores of the app machine.
-
-The Netty IO threads are meant to be used only for non-blocking Netty IO work. The SDK returns the API invocation result on one of the Netty IO threads to the app's code. If the app performs a long-lasting operation after it receives results on the Netty thread, the SDK might not have enough IO threads to perform its internal IO work. Such app coding might result in low throughput, high latency, and `io.netty.handler.timeout.ReadTimeoutException` failures. The workaround is to switch the thread when you know the operation takes time.
-
-For example, take a look at the following code snippet. You might perform long-lasting work that takes more than a few milliseconds on the Netty thread. If so, you eventually can get into a state where no Netty IO thread is present to process IO work. As a result, you get a ReadTimeoutException failure.
-
-### <a id="asyncjava2-readtimeout"></a>Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)
-
-```java
-@Test
-public void badCodeWithReadTimeoutException() throws Exception {
- int requestTimeoutInSeconds = 10;
-
- ConnectionPolicy policy = new ConnectionPolicy();
- policy.setRequestTimeoutInMillis(requestTimeoutInSeconds * 1000);
-
- AsyncDocumentClient testClient = new AsyncDocumentClient.Builder()
- .withServiceEndpoint(TestConfigurations.HOST)
- .withMasterKeyOrResourceToken(TestConfigurations.MASTER_KEY)
- .withConnectionPolicy(policy)
- .build();
-
- int numberOfCpuCores = Runtime.getRuntime().availableProcessors();
- int numberOfConcurrentWork = numberOfCpuCores + 1;
- CountDownLatch latch = new CountDownLatch(numberOfConcurrentWork);
- AtomicInteger failureCount = new AtomicInteger();
-
- for (int i = 0; i < numberOfConcurrentWork; i++) {
- Document docDefinition = getDocumentDefinition();
- Observable<ResourceResponse<Document>> createObservable = testClient
- .createDocument(getCollectionLink(), docDefinition, null, false);
- createObservable.subscribe(r -> {
- try {
- // Time-consuming work is, for example,
- // writing to a file, computationally heavy work, or just sleep.
- // Basically, it's anything that takes more than a few milliseconds.
- // Doing such operations on the IO Netty thread
- // without a proper scheduler will cause problems.
- // The subscriber will get a ReadTimeoutException failure.
- TimeUnit.SECONDS.sleep(2 * requestTimeoutInSeconds);
- } catch (Exception e) {
- }
- },
-
- exception -> {
- //It will be io.netty.handler.timeout.ReadTimeoutException.
- exception.printStackTrace();
- failureCount.incrementAndGet();
- latch.countDown();
- },
- () -> {
- latch.countDown();
- });
- }
-
- latch.await();
- assertThat(failureCount.get()).isGreaterThan(0);
-}
-```
-The workaround is to change the thread on which you perform work that takes time. Define a singleton instance of the scheduler for your app.
-
-### <a id="asyncjava2-scheduler"></a>Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)
-
-```java
-// Have a singleton instance of an executor and a scheduler.
-ExecutorService ex = Executors.newFixedThreadPool(30);
-Scheduler customScheduler = rx.schedulers.Schedulers.from(ex);
-```
-You might need to do work that takes time, for example, computationally heavy work or blocking IO. In this case, switch the thread to a worker provided by your `customScheduler` by using the `.observeOn(customScheduler)` API.
-
-### <a id="asyncjava2-applycustomscheduler"></a>Async Java SDK V2 (Maven com.microsoft.azure::azure-cosmosdb)
-
-```java
-Observable<ResourceResponse<Document>> createObservable = client
- .createDocument(getCollectionLink(), docDefinition, null, false);
-
-createObservable
- .observeOn(customScheduler) // Switches the thread.
- .subscribe(
- // ...
- );
-```
-By using `observeOn(customScheduler)`, you release the Netty IO thread and switch to your own custom thread provided by the custom scheduler.
-This modification solves the problem. You won't get a `io.netty.handler.timeout.ReadTimeoutException` failure anymore.
-
-### Connection pool exhausted issue
-
-`PoolExhaustedException` is a client-side failure. This failure indicates that your app workload is higher than what the SDK connection pool can serve. Increase the connection pool size or distribute the load on multiple apps.
-
-### Request rate too large
-This failure is a server-side failure. It indicates that you consumed your provisioned throughput. Retry later. If you get this failure often, consider an increase in the collection throughput.
-
-### Failure connecting to Azure Cosmos DB Emulator
-
-The Azure Cosmos DB Emulator HTTPS certificate is self-signed. For the SDK to work with the emulator, import the emulator certificate to a Java TrustStore. For more information, see [Export Azure Cosmos DB Emulator certificates](../local-emulator-export-ssl-certificates.md).
-
-### Dependency Conflict Issues
-
-```console
-Exception in thread "main" java.lang.NoSuchMethodError: rx.Observable.toSingle()Lrx/Single;
-```
-
-The above exception suggests you have a dependency on an older version of the RxJava library (for example, 1.2.2). Our SDK relies on RxJava 1.3.8, which has APIs not available in earlier versions of RxJava.
-
-The workaround for such issues is to identify which other dependency brings in RxJava-1.2.2, exclude the transitive dependency on RxJava-1.2.2, and allow the Cosmos DB SDK to bring in the newer version.
-
-To identify which library brings in RxJava-1.2.2, run the following command next to your project pom.xml file:
-```bash
-mvn dependency:tree
-```
-For more information, see the [maven dependency tree guide](https://maven.apache.org/plugins-archives/maven-dependency-plugin-2.10/examples/resolving-conflicts-using-the-dependency-tree.html).
-
-Once you identify which dependency of your project brings in RxJava-1.2.2 as a transitive dependency, modify that dependency in your pom file to exclude the RxJava transitive dependency:
-
-```xml
-<dependency>
- <groupId>${groupid-of-lib-which-brings-in-rxjava1.2.2}</groupId>
- <artifactId>${artifactId-of-lib-which-brings-in-rxjava1.2.2}</artifactId>
- <version>${version-of-lib-which-brings-in-rxjava1.2.2}</version>
- <exclusions>
- <exclusion>
- <groupId>io.reactivex</groupId>
- <artifactId>rxjava</artifactId>
- </exclusion>
- </exclusions>
-</dependency>
-```
-
-For more information, see the [exclude transitive dependency guide](https://maven.apache.org/guides/introduction/introduction-to-optional-and-excludes-dependencies.html).
--
-## <a name="enable-client-sice-logging"></a>Enable client SDK logging
-
-The Java Async SDK uses SLF4j as the logging facade that supports logging into popular logging frameworks such as log4j and logback.
-
-For example, if you want to use log4j as the logging framework, add the following libs in your Java classpath.
-
-```xml
-<dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-log4j12</artifactId>
- <version>${slf4j.version}</version>
-</dependency>
-<dependency>
- <groupId>log4j</groupId>
- <artifactId>log4j</artifactId>
- <version>${log4j.version}</version>
-</dependency>
-```
-
-Also add a log4j config.
-```
-# this is a sample log4j configuration
-
-# Set root logger level to DEBUG and its only appender to A1.
-log4j.rootLogger=INFO, A1
-
-log4j.category.com.microsoft.azure.cosmosdb=DEBUG
-#log4j.category.io.netty=INFO
-#log4j.category.io.reactivex=INFO
-# A1 is set to be a ConsoleAppender.
-log4j.appender.A1=org.apache.log4j.ConsoleAppender
-
-# A1 uses PatternLayout.
-log4j.appender.A1.layout=org.apache.log4j.PatternLayout
-log4j.appender.A1.layout.ConversionPattern=%d %5X{pid} [%t] %-5p %c - %m%n
-```
-
-For more information, see the [slf4j logging manual](https://www.slf4j.org/manual.html).
-
-## <a name="netstats"></a>OS network statistics
-Run the netstat command to get a sense of how many connections are in states such as `ESTABLISHED` and `CLOSE_WAIT`.
-
-On Linux, you can run the following command.
-```bash
-netstat -nap
-```
-Filter the result to only connections to the Azure Cosmos DB endpoint.
-
-The number of connections to the Azure Cosmos DB endpoint in the `ESTABLISHED` state can't be greater than your configured connection pool size.
-
-Many connections to the Azure Cosmos DB endpoint might be in the `CLOSE_WAIT` state. There might be more than 1,000. A number that high indicates that connections are established and torn down quickly. This situation potentially causes problems. For more information, see the [Common issues and workarounds] section.
-
- <!--Anchors-->
-[Common issues and workarounds]: #common-issues-workarounds
-[Enable client SDK logging]: #enable-client-sice-logging
-[Connection limit on a host machine]: #connection-limit-on-host
-[Azure SNAT (PAT) port exhaustion]: #snat
cosmos-db Troubleshoot Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-java-sdk-v4-sql.md
- Title: Diagnose and troubleshoot Azure Cosmos DB Java SDK v4
-description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues in Java SDK v4.
-- Previously updated : 04/01/2022------
-# Troubleshoot issues when you use Azure Cosmos DB Java SDK v4 with SQL API accounts
-
-> [!div class="op_single_selector"]
-> * [Java SDK v4](troubleshoot-java-sdk-v4-sql.md)
-> * [Async Java SDK v2](troubleshoot-java-async-sdk.md)
-> * [.NET](troubleshoot-dot-net-sdk.md)
->
-
-> [!IMPORTANT]
-> This article covers troubleshooting for Azure Cosmos DB Java SDK v4 only. Please see the Azure Cosmos DB Java SDK v4 [Release notes](sql-api-sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), and [performance tips](performance-tips-java-sdk-v4-sql.md) for more information. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
->
-
-This article covers common issues, workarounds, diagnostic steps, and tools when you use Azure Cosmos DB Java SDK v4 with Azure Cosmos DB SQL API accounts.
-Azure Cosmos DB Java SDK v4 provides client-side logical representation to access the Azure Cosmos DB SQL API. This article describes tools and approaches to help you if you run into any issues.
-
-Start with this list:
-
-* Take a look at the [Common issues and workarounds] section in this article.
-* Look at the Java SDK in the Azure Cosmos DB central repo, which is available [open source on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-cosmos). It has an [issues section](https://github.com/Azure/azure-sdk-for-java/issues) that's actively monitored. Check to see if any similar issue with a workaround is already filed. One helpful tip is to filter issues by the *cosmos:v4-item* tag.
-* Review the [performance tips](performance-tips-java-sdk-v4-sql.md) for Azure Cosmos DB Java SDK v4, and follow the suggested practices.
-* Read the rest of this article, if you didn't find a solution. Then file a [GitHub issue](https://github.com/Azure/azure-sdk-for-java/issues). If there is an option to add tags to your GitHub issue, add a *cosmos:v4-item* tag.
-
-## Capture the diagnostics
-
-Database, container, item, and query responses in the Java V4 SDK have a Diagnostics property. This property records all the information related to the single request, including if there were retries or any transient failures.
-
-The Diagnostics are returned as a string. The string changes with each version as it's improved to better troubleshoot different scenarios. With each version of the SDK, the string's format has breaking changes. To avoid breaking changes, don't parse the string.
-
-The following code sample shows how to read diagnostic logs using the Java V4 SDK:
-
-> [!IMPORTANT]
-> We recommend validating the minimum recommended version of the Java V4 SDK and ensure you are using this version or higher. You can check recommended version [here](./sql-api-sdk-java-v4.md#recommended-version).
-
-# [Sync](#tab/sync)
-
-#### Database Operations
-
-```Java
-CosmosDatabaseResponse databaseResponse = client.createDatabaseIfNotExists(databaseName);
-CosmosDiagnostics diagnostics = databaseResponse.getDiagnostics();
-logger.info("Create database diagnostics : {}", diagnostics);
-```
-
-#### Container Operations
-
-```Java
-CosmosContainerResponse containerResponse = database.createContainerIfNotExists(containerProperties,
- throughputProperties);
-CosmosDiagnostics diagnostics = containerResponse.getDiagnostics();
-logger.info("Create container diagnostics : {}", diagnostics);
-```
-
-#### Item Operations
-
-```Java
-// Write Item
-CosmosItemResponse<Family> item = container.createItem(family, new PartitionKey(family.getLastName()),
- new CosmosItemRequestOptions());
-
-CosmosDiagnostics diagnostics = item.getDiagnostics();
-logger.info("Create item diagnostics : {}", diagnostics);
-
-// Read Item
-CosmosItemResponse<Family> familyCosmosItemResponse = container.readItem(documentId,
- new PartitionKey(documentLastName), Family.class);
-
-CosmosDiagnostics diagnostics = familyCosmosItemResponse.getDiagnostics();
-logger.info("Read item diagnostics : {}", diagnostics);
-```
-
-#### Query Operations
-
-```Java
-String sql = "SELECT * FROM c WHERE c.lastName = 'Witherspoon'";
-
-CosmosPagedIterable<Family> filteredFamilies = container.queryItems(sql, new CosmosQueryRequestOptions(),
- Family.class);
-
-// Add handler to capture diagnostics
-filteredFamilies = filteredFamilies.handle(familyFeedResponse -> {
- logger.info("Query Item diagnostics through handle : {}",
- familyFeedResponse.getCosmosDiagnostics());
-});
-
-// Or capture diagnostics through iterableByPage() APIs.
-filteredFamilies.iterableByPage().forEach(familyFeedResponse -> {
- logger.info("Query item diagnostics through iterableByPage : {}",
- familyFeedResponse.getCosmosDiagnostics());
-});
-```
-
-#### Cosmos Exceptions
-
-```Java
-try {
- CosmosItemResponse<Family> familyCosmosItemResponse = container.readItem(documentId,
- new PartitionKey(documentLastName), Family.class);
-} catch (CosmosException ex) {
- CosmosDiagnostics diagnostics = ex.getDiagnostics();
- logger.error("Read item failure diagnostics : {}", diagnostics);
-}
-```
-
-# [Async](#tab/async)
-
-#### Database Operations
-
-```Java
-Mono<CosmosDatabaseResponse> databaseResponseMono = client.createDatabaseIfNotExists(databaseName);
-databaseResponseMono.map(databaseResponse -> {
- CosmosDiagnostics diagnostics = databaseResponse.getDiagnostics();
- logger.info("Create database diagnostics : {}", diagnostics);
-}).subscribe();
-```
-
-#### Container Operations
-
-```Java
-Mono<CosmosContainerResponse> containerResponseMono = database.createContainerIfNotExists(containerProperties,
- throughputProperties);
-containerResponseMono.map(containerResponse -> {
- CosmosDiagnostics diagnostics = containerResponse.getDiagnostics();
- logger.info("Create container diagnostics : {}", diagnostics);
-}).subscribe();
-```
-
-#### Item Operations
-
-```Java
-// Write Item
-Mono<CosmosItemResponse<Family>> itemResponseMono = container.createItem(family,
- new PartitionKey(family.getLastName()),
- new CosmosItemRequestOptions());
-
-itemResponseMono.map(itemResponse -> {
-    CosmosDiagnostics diagnostics = itemResponse.getDiagnostics();
-    logger.info("Create item diagnostics : {}", diagnostics);
-    return itemResponse;
-}).subscribe();
-
-// Read Item
-Mono<CosmosItemResponse<Family>> readItemResponseMono = container.readItem(documentId,
-    new PartitionKey(documentLastName), Family.class);
-
-readItemResponseMono.map(itemResponse -> {
-    CosmosDiagnostics diagnostics = itemResponse.getDiagnostics();
-    logger.info("Read item diagnostics : {}", diagnostics);
-    return itemResponse;
-}).subscribe();
-```
-
-#### Query Operations
-
-```Java
-String sql = "SELECT * FROM c WHERE c.lastName = 'Witherspoon'";
-CosmosPagedFlux<Family> filteredFamilies = container.queryItems(sql, new CosmosQueryRequestOptions(),
- Family.class);
-// Add handler to capture diagnostics
-filteredFamilies = filteredFamilies.handle(familyFeedResponse -> {
- logger.info("Query Item diagnostics through handle : {}",
- familyFeedResponse.getCosmosDiagnostics());
-});
-
-// Or capture diagnostics through byPage() APIs.
-filteredFamilies.byPage().map(familyFeedResponse -> {
-    logger.info("Query item diagnostics through byPage : {}",
-        familyFeedResponse.getCosmosDiagnostics());
-    return familyFeedResponse;
-}).subscribe();
-```
-
-#### Cosmos Exceptions
-
-```Java
-Mono<CosmosItemResponse<Family>> itemResponseMono = container.readItem(documentId,
- new PartitionKey(documentLastName), Family.class);
-
-itemResponseMono.onErrorResume(throwable -> {
- if (throwable instanceof CosmosException) {
- CosmosException cosmosException = (CosmosException) throwable;
- CosmosDiagnostics diagnostics = cosmosException.getDiagnostics();
- logger.error("Read item failure diagnostics : {}", diagnostics);
- }
- return Mono.error(throwable);
-}).subscribe();
-```
--
-## Retry design <a id="retry-logics"></a><a id="retry-design"></a><a id="error-codes"></a>
-See our guide to [designing resilient applications with Azure Cosmos SDKs](conceptual-resilient-sdk-applications.md) for guidance on how to design resilient applications and to learn about the retry semantics of the SDK.
-
-## <a name="common-issues-workarounds"></a>Common issues and workarounds
-
-### Network issues, Netty read timeout failure, low throughput, high latency
-
-#### General suggestions
-For best performance:
-* Make sure the app is running on the same region as your Azure Cosmos DB account.
-* Check the CPU usage on the host where the app is running. If CPU usage is 50 percent or more, run your app on a host with a higher configuration. Or you can distribute the load on more machines.
- * If you are running your application on Azure Kubernetes Service, you can [use Azure Monitor to monitor CPU utilization](../../azure-monitor/containers/container-insights-analyze.md).
-
-#### Connection throttling
-Connection throttling can happen because of either a [connection limit on a host machine] or [Azure SNAT (PAT) port exhaustion].
-
-##### <a name="connection-limit-on-host"></a>Connection limit on a host machine
-Some Linux systems, such as Red Hat, have an upper limit on the total number of open files. Sockets in Linux are implemented as files, so this number limits the total number of connections, too.
-Run the following command.
-
-```bash
-ulimit -a
-```
-The number of max allowed open files, which are identified as "nofile," needs to be at least double your connection pool size. For more information, see the Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4-sql.md).
-
-##### <a name="snat"></a>Azure SNAT (PAT) port exhaustion
-
-If your app is deployed on Azure Virtual Machines without a public IP address, by default [Azure SNAT ports](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports) are used to establish connections to any endpoint outside of your VM. The number of connections allowed from the VM to the Azure Cosmos DB endpoint is limited by the [Azure SNAT configuration](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports).
-
-Azure SNAT ports are used only when your VM has a private IP address and a process from the VM tries to connect to a public IP address. There are two workarounds to avoid the Azure SNAT limitation:
-
-* Add your Azure Cosmos DB service endpoint to the subnet of your Azure Virtual Machines virtual network. For more information, see [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).
-
- When the service endpoint is enabled, the requests are no longer sent from a public IP to Azure Cosmos DB. Instead, the virtual network and subnet identity are sent. This change might result in firewall drops if only public IPs are allowed. If you use a firewall, when you enable the service endpoint, add a subnet to the firewall by using [Virtual Network ACLs](/previous-versions/azure/virtual-network/virtual-networks-acl).
-* Assign a public IP to your Azure VM.
-
-##### <a name="cant-connect"></a>Can't reach the Service - firewall
-``ConnectTimeoutException`` indicates that the SDK cannot reach the service.
-You may get a failure similar to the following when using the direct mode:
-```
-GoneException{error=null, resourceAddress='https://cdb-ms-prod-westus-fd4.documents.azure.com:14940/apps/e41242a5-2d71-5acb-2e00-5e5f744b12de/services/d8aa21a5-340b-21d4-b1a2-4a5333e7ed8a/partitions/ed028254-b613-4c2a-bf3c-14bd5eb64500/replicas/131298754052060051p//', statusCode=410, message=Message: The requested resource is no longer available at the server., getCauseInfo=[class: class io.netty.channel.ConnectTimeoutException, message: connection timed out: cdb-ms-prod-westus-fd4.documents.azure.com/101.13.12.5:14940]
-```
-
-If you have a firewall running on your app machine, open the port range 10,000 to 20,000, which is used by direct mode.
-Also see the [Connection limit on a host machine](#connection-limit-on-host) section.
-
-#### UnknownHostException
-
-UnknownHostException means that the Java framework cannot resolve the DNS entry for the Azure Cosmos DB endpoint on the affected machine. Verify that the machine can resolve the DNS entry. If you use custom DNS resolution software (such as a VPN, a proxy, or a custom solution), make sure it contains the right configuration for the DNS endpoint that the error claims cannot be resolved. If the error is constant, you can verify the machine's DNS resolution by running a `curl` command against the endpoint described in the error.
-
-#### HTTP proxy
-
-If you use an HTTP proxy, make sure it can support the number of connections configured in the SDK `ConnectionPolicy`.
-Otherwise, you face connection issues.
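-
-As a minimal sketch with the Java V4 SDK (assuming gateway mode; the pool size and the endpoint and key placeholders are illustrative), you can keep the SDK connection pool aligned with what your proxy can handle:
-
-```Java
-import com.azure.cosmos.CosmosClient;
-import com.azure.cosmos.CosmosClientBuilder;
-import com.azure.cosmos.GatewayConnectionConfig;
-
-public class ProxySizedClientSketch {
-    public static CosmosClient buildClient() {
-        // Illustrative only: keep the SDK connection pool within what the proxy can support.
-        GatewayConnectionConfig gatewayConfig = new GatewayConnectionConfig()
-            .setMaxConnectionPoolSize(500); // assumption: the proxy supports at least 500 connections
-
-        return new CosmosClientBuilder()
-            .endpoint("<account-endpoint>") // placeholder
-            .key("<account-key>")           // placeholder
-            .gatewayMode(gatewayConfig)
-            .buildClient();
-    }
-}
-```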
-
-#### Invalid coding pattern: Blocking Netty IO thread
-
-The SDK uses the [Netty](https://netty.io/) IO library to communicate with Azure Cosmos DB. The SDK has an Async API and uses non-blocking IO APIs of Netty. The SDK's IO work is performed on IO Netty threads. The number of IO Netty threads is configured to be the same as the number of CPU cores of the app machine.
-
-The Netty IO threads are meant to be used only for non-blocking Netty IO work. The SDK returns the API invocation result on one of the Netty IO threads to the app's code. If the app performs a long-lasting operation after it receives results on the Netty thread, the SDK might not have enough IO threads to perform its internal IO work. Such app coding might result in low throughput, high latency, and `io.netty.handler.timeout.ReadTimeoutException` failures. The workaround is to switch the thread when you know the operation takes time.
-
-For example, take a look at the following code snippet, which adds items to a container (see [this quickstart](create-sql-api-java.md) for guidance on setting up the database and container). You might perform long-lasting work that takes more than a few milliseconds on the Netty thread. If so, you can eventually get into a state where no Netty IO thread is present to process IO work. As a result, you get a ReadTimeoutException failure.
-
-### <a id="java4-readtimeout"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=TroubleshootNeedsSchedulerAsync)]
-
-The workaround is to change the thread on which you perform work that takes time. Define a singleton instance of the scheduler for your app.
-
-### <a id="java4-scheduler"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=TroubleshootCustomSchedulerAsync)]
-
-You might need to do work that takes time, for example, computationally heavy work or blocking IO. In this case, switch the thread to a worker provided by your `customScheduler` by using the `.publishOn(customScheduler)` API.
-
-### <a id="java4-apply-custom-scheduler"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=TroubleshootPublishOnSchedulerAsync)]
-
-By using `publishOn(customScheduler)`, you release the Netty IO thread and switch to your own custom thread provided by the custom scheduler. This modification solves the problem: you no longer get `io.netty.handler.timeout.ReadTimeoutException` failures.
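-
-As a minimal sketch of this pattern with the Java V4 async API (the scheduler name, its sizing, and the `Family` sample POJO from the snippets above are illustrative; the referenced samples show the same idea):
-
-```Java
-import com.azure.cosmos.CosmosAsyncContainer;
-import com.azure.cosmos.models.PartitionKey;
-import reactor.core.scheduler.Scheduler;
-import reactor.core.scheduler.Schedulers;
-
-public class CustomSchedulerSketch {
-    // Define the scheduler once and reuse it across the app (sizing is illustrative).
-    private static final Scheduler customScheduler =
-        Schedulers.newBoundedElastic(10, 10000, "custom-bounded-elastic");
-
-    public static void readAndProcess(CosmosAsyncContainer container, String id, String lastName) {
-        container.readItem(id, new PartitionKey(lastName), Family.class)
-            // Leave the Netty IO thread before doing blocking or CPU-heavy work.
-            .publishOn(customScheduler)
-            .map(response -> doExpensiveWork(response.getItem()))
-            .subscribe();
-    }
-
-    private static Family doExpensiveWork(Family family) {
-        // Placeholder for long-lasting or blocking work.
-        return family;
-    }
-}
-```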
-
-### Request rate too large
-This failure is a server-side failure. It indicates that you consumed your provisioned throughput. Retry later. If you get this failure often, consider increasing the collection throughput.
-
-* **Implement backoff at getRetryAfterInMilliseconds intervals**
-
-   During performance testing, you should increase load until a small rate of requests are throttled. If throttled, the client application should back off for the server-specified retry interval. Respecting the backoff ensures that you spend a minimal amount of time waiting between retries, as shown in the sketch below.
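-
-A minimal sketch of this backoff with the Java V4 SDK, where the retry interval is exposed as `getRetryAfterDuration()` (the `Family` sample POJO and the single retry are illustrative assumptions; the SDK also retries internally before surfacing the exception):
-
-```Java
-import com.azure.cosmos.CosmosContainer;
-import com.azure.cosmos.CosmosException;
-import com.azure.cosmos.models.CosmosItemRequestOptions;
-import com.azure.cosmos.models.PartitionKey;
-
-public class BackoffSketch {
-    public static void createWithBackoff(CosmosContainer container, Family family) throws InterruptedException {
-        try {
-            container.createItem(family, new PartitionKey(family.getLastName()), new CosmosItemRequestOptions());
-        } catch (CosmosException ex) {
-            if (ex.getStatusCode() == 429) {
-                // Wait for the server-specified interval before retrying (single retry shown).
-                Thread.sleep(ex.getRetryAfterDuration().toMillis());
-                container.createItem(family, new PartitionKey(family.getLastName()), new CosmosItemRequestOptions());
-            } else {
-                throw ex;
-            }
-        }
-    }
-}
-```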
-
-### Error handling from Java SDK Reactive Chain
-
-Error handling from the Azure Cosmos DB Java SDK is important for the client's application logic. The [reactor-core framework](https://projectreactor.io/docs/core/release/reference/#error.handling) provides several error-handling mechanisms that can be used in different scenarios. We recommend that customers understand these error-handling operators in detail and use the ones that best fit their retry scenarios.
-
-> [!IMPORTANT]
-> We do not recommend using the [`onErrorContinue()`](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#onErrorContinue-java.util.function.BiConsumer-) operator, because it is not supported in all scenarios.
-> Note that `onErrorContinue()` is a specialist operator that can make the behavior of your reactive chain unclear. It operates on upstream, not downstream, operators; it requires specific operator support to work; and its scope can easily propagate upstream into library code that didn't anticipate it, resulting in unintended behavior. Refer to the [documentation](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#onErrorContinue-java.util.function.BiConsumer-) of `onErrorContinue()` for more details on this special operator.
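-
-For example, a minimal sketch that combines `retryWhen` with exponential backoff and `onErrorResume` (the retry policy and the `Family` sample POJO are illustrative assumptions rather than a prescribed pattern):
-
-```Java
-import com.azure.cosmos.CosmosAsyncContainer;
-import com.azure.cosmos.CosmosException;
-import com.azure.cosmos.models.PartitionKey;
-import reactor.core.publisher.Mono;
-import reactor.util.retry.Retry;
-
-import java.time.Duration;
-
-public class ReactiveErrorHandlingSketch {
-    public static Mono<Family> readWithRetry(CosmosAsyncContainer container, String id, String lastName) {
-        return container.readItem(id, new PartitionKey(lastName), Family.class)
-            .map(response -> response.getItem())
-            // Retry throttled requests with exponential backoff (illustrative policy).
-            .retryWhen(Retry.backoff(3, Duration.ofMillis(500))
-                .filter(t -> t instanceof CosmosException && ((CosmosException) t).getStatusCode() == 429))
-            // Surface anything else as a terminal error carrying the diagnostics.
-            .onErrorResume(CosmosException.class, ex ->
-                Mono.error(new IllegalStateException("Read failed, diagnostics: " + ex.getDiagnostics(), ex)));
-    }
-}
-```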
-
-### Failure connecting to Azure Cosmos DB Emulator
-
-The Azure Cosmos DB Emulator HTTPS certificate is self-signed. For the SDK to work with the emulator, import the emulator certificate to a Java TrustStore. For more information, see [Export Azure Cosmos DB Emulator certificates](../local-emulator-export-ssl-certificates.md).
-
-### Dependency Conflict Issues
-
-The Azure Cosmos DB Java SDK pulls in a number of dependencies; generally speaking, if your project dependency tree includes an older version of an artifact that Azure Cosmos DB Java SDK depends on, this may result in unexpected errors being generated when you run your application. If you are debugging why your application unexpectedly throws an exception, it is a good idea to double-check that your dependency tree is not accidentally pulling in an older version of one or more of the Azure Cosmos DB Java SDK dependencies.
-
-The workaround for such an issue is to identify which of your project dependencies brings in the old version, exclude that transitive dependency, and let the Azure Cosmos DB Java SDK bring in the newer version.
-
-To identify which of your project dependencies brings in an older version of something that Azure Cosmos DB Java SDK depends on, run the following command against your project pom.xml file:
-```bash
-mvn dependency:tree
-```
-For more information, see the [maven dependency tree guide](https://maven.apache.org/plugins-archives/maven-dependency-plugin-2.10/examples/resolving-conflicts-using-the-dependency-tree.html).
-
-Once you know which dependency of your project depends on an older version, you can modify the dependency on that lib in your pom file and exclude the transitive dependency, following the example below (which assumes that *reactor-core* is the outdated dependency):
-
-```xml
-<dependency>
- <groupId>${groupid-of-lib-which-brings-in-reactor}</groupId>
- <artifactId>${artifactId-of-lib-which-brings-in-reactor}</artifactId>
- <version>${version-of-lib-which-brings-in-reactor}</version>
- <exclusions>
- <exclusion>
- <groupId>io.projectreactor</groupId>
- <artifactId>reactor-core</artifactId>
- </exclusion>
- </exclusions>
-</dependency>
-```
-
-For more information, see the [exclude transitive dependency guide](https://maven.apache.org/guides/introduction/introduction-to-optional-and-excludes-dependencies.html).
--
-## <a name="enable-client-sice-logging"></a>Enable client SDK logging
-
-Azure Cosmos DB Java SDK v4 uses SLF4J as the logging facade, which supports logging to popular logging frameworks such as Log4j and Logback.
-
-For example, if you want to use log4j as the logging framework, add the following libs in your Java classpath.
-
-```xml
-<dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-log4j12</artifactId>
- <version>${slf4j.version}</version>
-</dependency>
-<dependency>
- <groupId>log4j</groupId>
- <artifactId>log4j</artifactId>
- <version>${log4j.version}</version>
-</dependency>
-```
-
-Also add a log4j config.
-```
-# this is a sample log4j configuration
-
-# Set root logger level to INFO and its only appender to A1.
-log4j.rootLogger=INFO, A1
-
-log4j.category.com.azure.cosmos=INFO
-#log4j.category.io.netty=OFF
-#log4j.category.io.projectreactor=OFF
-# A1 is set to be a ConsoleAppender.
-log4j.appender.A1=org.apache.log4j.ConsoleAppender
-
-# A1 uses PatternLayout.
-log4j.appender.A1.layout=org.apache.log4j.PatternLayout
-log4j.appender.A1.layout.ConversionPattern=%d %5X{pid} [%t] %-5p %c - %m%n
-```
-
-For more information, see the [slf4j logging manual](https://www.slf4j.org/manual.html).
-
-## <a name="netstats"></a>OS network statistics
-Run the netstat command to get a sense of how many connections are in states such as `ESTABLISHED` and `CLOSE_WAIT`.
-
-On Linux, you can run the following command.
-```bash
-netstat -nap
-```
-
-On Windows, you can run the same command with different argument flags:
-```bash
-netstat -abn
-```
-
-Filter the result to only connections to the Azure Cosmos DB endpoint.
-
-The number of connections to the Azure Cosmos DB endpoint in the `ESTABLISHED` state can't be greater than your configured connection pool size.
-
-Many connections to the Azure Cosmos DB endpoint might be in the `CLOSE_WAIT` state. There might be more than 1,000. A number that high indicates that connections are established and torn down quickly. This situation potentially causes problems. For more information, see the [Common issues and workarounds] section.
-
- <!--Anchors-->
-[Common issues and workarounds]: #common-issues-workarounds
-[Enable client SDK logging]: #enable-client-sice-logging
-[Connection limit on a host machine]: #connection-limit-on-host
-[Azure SNAT (PAT) port exhaustion]: #snat
cosmos-db Troubleshoot Not Found https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-not-found.md
- Title: Troubleshoot Azure Cosmos DB not found exceptions
-description: Learn how to diagnose and fix not found exceptions.
- Previously updated : 05/26/2021
-# Diagnose and troubleshoot Azure Cosmos DB not found exceptions
-
-The HTTP status code 404 indicates that the requested resource no longer exists.
-
-## Expected behavior
-There are many valid scenarios where an application expects a code 404 and correctly handles the scenario.
-
-## A not found exception was returned for an item that should exist or does exist
-Here are the possible reasons for a status code 404 to be returned if the item should exist or does exist.
-
-### The read session is not available for the input session token
-
-#### Solution:
-1. Update your current SDK to the latest version available. The most common causes for this particular error have been fixed in the newest SDK versions.
-
-### Race condition
-There are multiple SDK client instances and the read happened before the write.
-
-#### Solution:
-1. The default account consistency for Azure Cosmos DB is session consistency. When an item is created or updated, the response returns a session token that can be passed between SDK instances to guarantee that the read request is reading from a replica with that change, as shown in the sketch after this list.
-1. Change the [consistency level](../consistency-levels.md) to a [stronger level](../consistency-levels.md).
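-
-A minimal sketch with the Java V4 SDK (the other SDKs expose equivalent options; the `Family` POJO is the sample type used in the Java snippets earlier in this digest): capture the session token from the write response and pass it to a read issued by another client instance.
-
-```Java
-import com.azure.cosmos.CosmosContainer;
-import com.azure.cosmos.models.CosmosItemRequestOptions;
-import com.azure.cosmos.models.CosmosItemResponse;
-import com.azure.cosmos.models.PartitionKey;
-
-public class SessionTokenSketch {
-    public static void writeThenRead(CosmosContainer writeContainer, CosmosContainer readContainer, Family family) {
-        // Capture the session token returned by the write.
-        CosmosItemResponse<Family> writeResponse = writeContainer.createItem(
-            family, new PartitionKey(family.getLastName()), new CosmosItemRequestOptions());
-        String sessionToken = writeResponse.getSessionToken();
-
-        // Pass the token so the read is served from a replica that has seen the write.
-        CosmosItemRequestOptions readOptions = new CosmosItemRequestOptions().setSessionToken(sessionToken);
-        readContainer.readItem(family.getId(), new PartitionKey(family.getLastName()), readOptions, Family.class);
-    }
-}
-```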
-
-### Reading throughput for a container or database resource
-You read the throughput by using PowerShell or the Azure CLI and receive a *not found* error message.
-
-#### Solution:
-Throughput can be provisioned at the database level, the container level, or both. If you get a *not found* error, try reading the throughput at the parent database resource or at the child container resource.
-
-### Invalid partition key and ID combination
-The partition key and ID combination aren't valid.
-
-#### Solution:
-Fix the application logic that's causing the incorrect combination.
-
-### Invalid character in an item ID
-An item is inserted into Azure Cosmos DB with an [invalid character](/dotnet/api/microsoft.azure.documents.resource.id#remarks) in the item ID.
-
-#### Solution:
-Change the ID to a different value that doesn't contain the special characters. If changing the ID isn't an option, you can Base64 encode the ID to escape the special characters. Note that Base64 can still produce a name with the invalid character '/', which needs to be replaced.
-
-Items already inserted in the container for the ID can be replaced by using RID values instead of name-based references.
-```c#
-// Get a container reference that uses RID values.
-ContainerProperties containerProperties = await this.Container.ReadContainerAsync();
-string[] selfLinkSegments = containerProperties.SelfLink.Split('/');
-string databaseRid = selfLinkSegments[1];
-string containerRid = selfLinkSegments[3];
-Container containerByRid = this.cosmosClient.GetContainer(databaseRid, containerRid);
-
-// Invalid characters are listed here.
-// https://learn.microsoft.com/dotnet/api/microsoft.azure.documents.resource.id#remarks
-FeedIterator<JObject> invalidItemsIterator = this.Container.GetItemQueryIterator<JObject>(
- @"select * from t where CONTAINS(t.id, ""/"") or CONTAINS(t.id, ""#"") or CONTAINS(t.id, ""?"") or CONTAINS(t.id, ""\\"") ");
-while (invalidItemsIterator.HasMoreResults)
-{
- foreach (JObject itemWithInvalidId in await invalidItemsIterator.ReadNextAsync())
- {
- // Choose a new ID that doesn't contain special characters.
- // If that isn't possible, then Base64 encode the ID to escape the special characters.
- byte[] plainTextBytes = Encoding.UTF8.GetBytes(itemWithInvalidId["id"].ToString());
- itemWithInvalidId["id"] = Convert.ToBase64String(plainTextBytes).Replace('/', '!');
-
- // Update the item with the new ID value by using the RID-based container reference.
- JObject item = await containerByRid.ReplaceItemAsync<JObject>(
- item: itemWithInvalidId,
-    id: itemWithInvalidId["_rid"].ToString(),
- partitionKey: new Cosmos.PartitionKey(itemWithInvalidId["status"].ToString()));
-
- // Validating the new ID can be read by using the original name-based container reference.
- await this.Container.ReadItemAsync<ToDoActivity>(
- item["id"].ToString(),
-    new Cosmos.PartitionKey(item["status"].ToString()));
- }
-}
-```
-
-### Time to Live purge
-The item had the [Time to Live (TTL)](./time-to-live.md) property set. The item was purged because the TTL property expired.
-
-#### Solution:
-Change the TTL property to prevent the item from being purged.
-
-### Lazy indexing
-The [lazy indexing](../index-policy.md#indexing-mode) hasn't caught up.
-
-#### Solution:
-Wait for the indexing to catch up or change the indexing policy.
-
-### Parent resource deleted
-The database or container that the item exists in was deleted.
-
-#### Solution:
-1. [Restore](../configure-periodic-backup-restore.md#request-restore) the parent resource, or re-create the resources.
-1. Create a new resource to replace the deleted resource.
-
-### Container/Collection names are case-sensitive
-Container/Collection names are case-sensitive in Cosmos DB.
-
-#### Solution:
-Make sure to use the exact name while connecting to Cosmos DB.
-
-## Next steps
-* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
-* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
-* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4-sql.md) issues when you use the Azure Cosmos DB Java v4 SDK.
-* Learn about performance guidelines for [Java v4 SDK](performance-tips-java-sdk-v4-sql.md).
cosmos-db Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-query-performance.md
- Title: Troubleshoot query issues when using Azure Cosmos DB
-description: Learn how to identify, diagnose, and troubleshoot Azure Cosmos DB SQL query issues.
- Previously updated : 04/04/2022
-# Troubleshoot query issues when using Azure Cosmos DB
-
-This article walks through a general recommended approach for troubleshooting queries in Azure Cosmos DB. Although you shouldn't consider the steps outlined in this article a complete defense against potential query issues, we've included the most common performance tips here. You should use this article as a starting place for troubleshooting slow or expensive queries in the Azure Cosmos DB core (SQL) API. You can also use [diagnostics logs](../cosmosdb-monitor-resource-logs.md) to identify queries that are slow or that consume significant amounts of throughput. If you're using Azure Cosmos DB's API for MongoDB, see the [Azure Cosmos DB's API for MongoDB query troubleshooting guide](../mongodb/troubleshoot-query-performance.md).
-
-Query optimizations in Azure Cosmos DB are broadly categorized as follows:
-
-- Optimizations that reduce the Request Unit (RU) charge of the query
-- Optimizations that just reduce latency
-
-If you reduce the RU charge of a query, you'll typically decrease latency as well.
-
-This article provides examples that you can re-create by using the [nutrition dataset](https://github.com/CosmosDB/labs/blob/master/dotnet/setup/NutritionData.json).
-
-## Common SDK issues
-
-Before reading this guide, it is helpful to consider common SDK issues that aren't related to the query engine.
-
-- Follow these [SDK Performance tips for query](performance-tips-query-sdk.md).
-- Sometimes queries may have empty pages even when there are results on a future page. Reasons for this could be:
- - The SDK could be doing multiple network calls.
- - The query might be taking a long time to retrieve the documents.
-- All queries have a continuation token that will allow the query to continue. Be sure to drain the query completely. Learn more about [handling multiple pages of results](sql-query-pagination.md#handling-multiple-pages-of-results).
-
-## Get query metrics
-
-When you optimize a query in Azure Cosmos DB, the first step is always to [get the query metrics](profile-sql-api-query.md) for your query. These metrics are also available through the Azure portal. Once you run your query in the Data Explorer, the query metrics are visible next to the **Results** tab.
-
-After you get the query metrics, compare the **Retrieved Document Count** with the **Output Document Count** for your query. Use this comparison to identify the relevant sections to review in this article.
-
-The **Retrieved Document Count** is the number of documents that the query engine needed to load. The **Output Document Count** is the number of documents that were needed for the results of the query. If the **Retrieved Document Count** is significantly higher than the **Output Document Count**, there was at least one part of your query that was unable to use an index and needed to do a scan.
-
-Refer to the following sections to understand the relevant query optimizations for your scenario.
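-
-You can also capture the metrics from the SDKs. A minimal sketch with the Java V4 SDK (the query text and the use of `JsonNode` as the item type are illustrative); the per-page diagnostics include the query metrics:
-
-```Java
-import com.azure.cosmos.CosmosContainer;
-import com.azure.cosmos.models.CosmosQueryRequestOptions;
-import com.azure.cosmos.util.CosmosPagedIterable;
-import com.fasterxml.jackson.databind.JsonNode;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class QueryMetricsSketch {
-    private static final Logger logger = LoggerFactory.getLogger(QueryMetricsSketch.class);
-
-    public static void runQueryWithMetrics(CosmosContainer container) {
-        // Make sure query metrics are populated (they may already be on by default).
-        CosmosQueryRequestOptions options = new CosmosQueryRequestOptions().setQueryMetricsEnabled(true);
-
-        CosmosPagedIterable<JsonNode> results = container.queryItems(
-            "SELECT * FROM c WHERE c.description = 'Malabar spinach, cooked'", options, JsonNode.class);
-
-        // The diagnostics for each page include the query metrics for that page.
-        results.iterableByPage().forEach(page ->
-            logger.info("Query diagnostics: {}", page.getCosmosDiagnostics()));
-    }
-}
-```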
-
-### Query's RU charge is too high
-
-#### Retrieved Document Count is significantly higher than Output Document Count
-
-- [Include necessary paths in the indexing policy.](#include-necessary-paths-in-the-indexing-policy)
-
-- [Understand which system functions use the index.](#understand-which-system-functions-use-the-index)
-
-- [Improve string system function execution.](#improve-string-system-function-execution)
-
-- [Understand which aggregate queries use the index.](#understand-which-aggregate-queries-use-the-index)
-
-- [Optimize queries that have both a filter and an ORDER BY clause.](#optimize-queries-that-have-both-a-filter-and-an-order-by-clause)
-
-- [Optimize JOIN expressions by using a subquery.](#optimize-join-expressions-by-using-a-subquery)
-
-<br>
-
-#### Retrieved Document Count is approximately equal to Output Document Count
-
-- [Minimize cross partition queries.](#minimize-cross-partition-queries)
-
-- [Optimize queries that have filters on multiple properties.](#optimize-queries-that-have-filters-on-multiple-properties)
-
-- [Optimize queries that have both a filter and an ORDER BY clause.](#optimize-queries-that-have-both-a-filter-and-an-order-by-clause)
-
-<br>
-
-### Query's RU charge is acceptable but latency is still too high
-
-- [Improve proximity.](#improve-proximity)
-
-- [Increase provisioned throughput.](#increase-provisioned-throughput)
-
-- [Increase MaxConcurrency.](#increase-maxconcurrency)
-
-- [Increase MaxBufferedItemCount.](#increase-maxbuffereditemcount)
-
-## Queries where Retrieved Document Count exceeds Output Document Count
-
- The **Retrieved Document Count** is the number of documents that the query engine needed to load. The **Output Document Count** is the number of documents returned by the query. If the **Retrieved Document Count** is significantly higher than the **Output Document Count**, there was at least one part of your query that was unable to use an index and needed to do a scan.
-
-Here's an example of scan query that wasn't entirely served by the index:
-
-Query:
-
-```sql
-SELECT VALUE c.description
-FROM c
-WHERE UPPER(c.description) = "BABYFOOD, DESSERT, FRUIT DESSERT, WITHOUT ASCORBIC ACID, JUNIOR"
-```
-
-Query metrics:
-
-```
-Retrieved Document Count : 60,951
-Retrieved Document Size : 399,998,938 bytes
-Output Document Count : 7
-Output Document Size : 510 bytes
-Index Utilization : 0.00 %
-Total Query Execution Time : 4,500.34 milliseconds
- Query Preparation Times
- Query Compilation Time : 0.09 milliseconds
- Logical Plan Build Time : 0.05 milliseconds
- Physical Plan Build Time : 0.04 milliseconds
- Query Optimization Time : 0.01 milliseconds
- Index Lookup Time : 0.01 milliseconds
- Document Load Time : 4,177.66 milliseconds
- Runtime Execution Times
- Query Engine Times : 322.16 milliseconds
- System Function Execution Time : 85.74 milliseconds
- User-defined Function Execution Time : 0.00 milliseconds
- Document Write Time : 0.01 milliseconds
-Client Side Metrics
- Retry Count : 0
- Request Charge : 4,059.95 RUs
-```
-
-The **Retrieved Document Count** (60,951) is significantly higher than the **Output Document Count** (7), implying that this query resulted in a document scan. In this case, the system function [UPPER()](sql-query-upper.md) doesn't use an index.
-
-### Include necessary paths in the indexing policy
-
-Your indexing policy should cover any properties included in `WHERE` clauses, `ORDER BY` clauses, `JOIN`, and most system functions. The desired paths specified in the index policy should match the properties in the JSON documents.
-
-> [!NOTE]
-> Properties in Azure Cosmos DB indexing policy are case-sensitive
-
-If you run the following simple query on the [nutrition](https://github.com/CosmosDB/labs/blob/master/dotnet/setup/NutritionData.json) dataset, you will observe a much lower RU charge when the property in the `WHERE` clause is indexed:
-
-#### Original
-
-Query:
-
-```sql
-SELECT *
-FROM c
-WHERE c.description = "Malabar spinach, cooked"
-```
-
-Indexing policy:
-
-```json
-{
- "indexingMode": "consistent",
- "automatic": true,
- "includedPaths": [
- {
- "path": "/*"
- }
- ],
- "excludedPaths": [
- {
- "path": "/description/*"
- }
- ]
-}
-```
-
-**RU charge:** 409.51 RUs
-
-#### Optimized
-
-Updated indexing policy:
-
-```json
-{
- "indexingMode": "consistent",
- "automatic": true,
- "includedPaths": [
- {
- "path": "/*"
- }
- ],
- "excludedPaths": []
-}
-```
-
-**RU charge:** 2.98 RUs
-
-You can add properties to the indexing policy at any time, with no effect on write or read availability. You can [track index transformation progress](./how-to-manage-indexing-policy.md#dotnet-sdk).
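-
-A minimal sketch of updating the indexing policy programmatically with the Java V4 SDK (the paths shown are illustrative; the same change can be made in the Azure portal):
-
-```Java
-import com.azure.cosmos.CosmosContainer;
-import com.azure.cosmos.models.CosmosContainerProperties;
-import com.azure.cosmos.models.IncludedPath;
-import com.azure.cosmos.models.IndexingPolicy;
-
-import java.util.Collections;
-
-public class IndexingPolicySketch {
-    public static void includeAllPaths(CosmosContainer container) {
-        // Read the current container definition, widen the indexing policy, and replace it.
-        CosmosContainerProperties properties = container.read().getProperties();
-
-        IndexingPolicy policy = new IndexingPolicy();
-        policy.setIncludedPaths(Collections.singletonList(new IncludedPath("/*")));
-        policy.setExcludedPaths(Collections.emptyList());
-        properties.setIndexingPolicy(policy);
-
-        container.replace(properties);
-    }
-}
-```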
-
-### Understand which system functions use the index
-
-Most system functions use indexes. Here's a list of some common string functions that use indexes:
-
-- StartsWith
-- Contains
-- RegexMatch
-- Left
-- Substring - but only if the first num_expr is 0
-
-Following are some common system functions that don't use the index and must load each document when used in a `WHERE` clause:
-
-| **System function** | **Ideas for optimization** |
-| | |
-| Upper/Lower | Instead of using the system function to normalize data for comparisons, normalize the casing upon insertion. A query like ```SELECT * FROM c WHERE UPPER(c.name) = 'BOB'``` becomes ```SELECT * FROM c WHERE c.name = 'BOB'```. |
-| GetCurrentDateTime/GetCurrentTimestamp/GetCurrentTicks | Calculate the current time before query execution and use that string value in the `WHERE` clause. |
-| Mathematical functions (non-aggregates) | If you need to compute a value frequently in your query, consider storing the value as a property in your JSON document. |
-
-These system functions can use indexes, except when used in queries with aggregates:
-
-| **System function** | **Ideas for optimization** |
-| | |
-| Spatial system functions | Store the query result in a real-time materialized view |
-
-When used in the `SELECT` clause, inefficient system functions will not affect how queries can use indexes.
-
-### Improve string system function execution
-
-For some system functions that use indexes, you can improve query execution by adding an `ORDER BY` clause to the query.
-
-More specifically, any system function whose RU charge increases as the cardinality of the property increases may benefit from having `ORDER BY` in the query. These queries do an index scan, so having the query results sorted can make the query more efficient.
-
-This optimization can improve execution for the following system functions:
-
-- StartsWith (where case-insensitive = true)
-- StringEquals (where case-insensitive = true)
-- Contains
-- RegexMatch
-- EndsWith
-
-For example, consider the following query with `CONTAINS`. `CONTAINS` will use indexes, but sometimes, even after adding the relevant index, you may still observe a very high RU charge when running the query.
-
-Original query:
-
-```sql
-SELECT *
-FROM c
-WHERE CONTAINS(c.town, "Sea")
-```
-
-You can improve query execution by adding `ORDER BY`:
-
-```sql
-SELECT *
-FROM c
-WHERE CONTAINS(c.town, "Sea")
-ORDER BY c.town
-```
-
-The same optimization can help in queries with additional filters. In this case, it's best to also add properties with equality filters to the `ORDER BY` clause.
-
-Original query:
-
-```sql
-SELECT *
-FROM c
-WHERE c.name = "Samer" AND CONTAINS(c.town, "Sea")
-```
-
-You can improve query execution by adding `ORDER BY` and [a composite index](../index-policy.md#composite-indexes) for (c.name, c.town):
-
-```sql
-SELECT *
-FROM c
-WHERE c.name = "Samer" AND CONTAINS(c.town, "Sea")
-ORDER BY c.name, c.town
-```
-
-### Understand which aggregate queries use the index
-
-In most cases, aggregate system functions in Azure Cosmos DB will use the index. However, depending on the filters or additional clauses in an aggregate query, the query engine may be required to load a high number of documents. Typically, the query engine will apply equality and range filters first. After applying these filters,
-the query engine can evaluate additional filters and resort to loading remaining documents to compute the aggregate, if needed.
-
-For example, given these two sample queries, the query with both an equality and `CONTAINS` system function filter will generally be more efficient than a query with just a `CONTAINS` system function filter. This is because the equality filter is applied first and uses the index before documents need to be loaded for the more expensive `CONTAINS` filter.
-
-Query with only `CONTAINS` filter - higher RU charge:
-
-```sql
-SELECT COUNT(1)
-FROM c
-WHERE CONTAINS(c.description, "spinach")
-```
-
-Query with both equality filter and `CONTAINS` filter - lower RU charge:
-
-```sql
-SELECT AVG(c._ts)
-FROM c
-WHERE c.foodGroup = "Sausages and Luncheon Meats" AND CONTAINS(c.description, "spinach")
-```
-
-Here are additional examples of aggregate queries that will not fully use the index:
-
-#### Queries with system functions that don't use the index
-
-You should refer to the relevant [system function's page](sql-query-system-functions.md) to see if it uses the index.
-
-```sql
-SELECT MAX(c._ts)
-FROM c
-WHERE CONTAINS(c.description, "spinach")
-```
-
-#### Aggregate queries with user-defined functions (UDFs)
-
-```sql
-SELECT AVG(c._ts)
-FROM c
-WHERE udf.MyUDF("Sausages and Luncheon Meats")
-```
-
-#### Queries with GROUP BY
-
-The RU charge of queries with `GROUP BY` will increase as the cardinality of the properties in the `GROUP BY` clause increases. In the below query, for example, the RU charge of the query will increase as the number of unique descriptions increases.
-
-The RU charge of an aggregate function with a `GROUP BY` clause will be higher than the RU charge of an aggregate function alone. In this example, the query engine must load every document that matches the `c.foodGroup = "Sausages and Luncheon Meats"` filter so the RU charge is expected to be high.
-
-```sql
-SELECT COUNT(1)
-FROM c
-WHERE c.foodGroup = "Sausages and Luncheon Meats"
-GROUP BY c.description
-```
-
-If you plan to frequently run the same aggregate queries, it may be more efficient to build a real-time materialized view with the [Azure Cosmos DB change feed](../change-feed.md) than to run the individual queries repeatedly.
-
-### Optimize queries that have both a filter and an ORDER BY clause
-
-Although queries that have a filter and an `ORDER BY` clause will normally use a range index, they'll be more efficient if they can be served from a composite index. In addition to modifying the indexing policy, you should add all properties in the composite index to the `ORDER BY` clause. This change to the query will ensure that it uses the composite index. You can observe the impact by running a query on the [nutrition](https://github.com/CosmosDB/labs/blob/master/dotnet/setup/NutritionData.json) dataset:
-
-#### Original
-
-Query:
-
-```sql
-SELECT *
-FROM c
-WHERE c.foodGroup = "Soups, Sauces, and Gravies"
-ORDER BY c._ts ASC
-```
-
-Indexing policy:
-
-```json
-{
-
- "automatic":true,
- "indexingMode":"Consistent",
- "includedPaths":[
- {
- "path":"/*"
- }
- ],
- "excludedPaths":[]
-}
-```
-
-**RU charge:** 44.28 RUs
-
-#### Optimized
-
-Updated query (includes both properties in the `ORDER BY` clause):
-
-```sql
-SELECT *
-FROM c
-WHERE c.foodGroup = "Soups, Sauces, and Gravies"
-ORDER BY c.foodGroup, c._ts ASC
-```
-
-Updated indexing policy:
-
-```json
-{
- "automatic":true,
- "indexingMode":"Consistent",
- "includedPaths":[
- {
- "path":"/*"
- }
- ],
- "excludedPaths":[],
- "compositeIndexes":[
- [
- {
- "path":"/foodGroup",
- "order":"ascending"
- },
- {
- "path":"/_ts",
- "order":"ascending"
- }
- ]
- ]
- }
-
-```
-
-**RU charge:** 8.86 RUs
-
-### Optimize JOIN expressions by using a subquery
-
-Multi-value subqueries can optimize `JOIN` expressions by pushing predicates after each select-many expression rather than after all cross joins in the `WHERE` clause.
-
-Consider this query:
-
-```sql
-SELECT Count(1) AS Count
-FROM c
-JOIN t IN c.tags
-JOIN n IN c.nutrients
-JOIN s IN c.servings
-WHERE t.name = 'infant formula' AND (n.nutritionValue > 0
-AND n.nutritionValue < 10) AND s.amount > 1
-```
-
-**RU charge:** 167.62 RUs
-
-For this query, the index will match any document that has a tag with the name `infant formula`, `nutritionValue` greater than 0, and `amount` greater than 1. The `JOIN` expression here will perform the cross-product of all items of tags, nutrients, and servings arrays for each matching document before any filter is applied. The `WHERE` clause will then apply the filter predicate on each `<c, t, n, s>` tuple.
-
-For example, if a matching document has 10 items in each of the three arrays, it will expand to 1 x 10 x 10 x 10 (that is, 1,000) tuples. The use of subqueries here can help to filter out joined array items before joining with the next expression.
-
-This query is equivalent to the preceding one but uses subqueries:
-
-```sql
-SELECT Count(1) AS Count
-FROM c
-JOIN (SELECT VALUE t FROM t IN c.tags WHERE t.name = 'infant formula')
-JOIN (SELECT VALUE n FROM n IN c.nutrients WHERE n.nutritionValue > 0 AND n.nutritionValue < 10)
-JOIN (SELECT VALUE s FROM s IN c.servings WHERE s.amount > 1)
-```
-
-**RU charge:** 22.17 RUs
-
-Assume that only one item in the tags array matches the filter and that there are five items for both the nutrients and servings arrays. The `JOIN` expressions will expand to 1 x 1 x 5 x 5 = 25 items, as opposed to 1,000 items in the first query.
-
-## Queries where Retrieved Document Count is equal to Output Document Count
-
-If the **Retrieved Document Count** is approximately equal to the **Output Document Count**, the query engine didn't have to scan many unnecessary documents. For many queries, like those that use the `TOP` keyword, **Retrieved Document Count** might exceed **Output Document Count** by 1. You don't need to be concerned about this.
-
-### Minimize cross partition queries
-
-Azure Cosmos DB uses [partitioning](../partitioning-overview.md) to scale individual containers as Request Unit and data storage needs increase. Each physical partition has a separate and independent index. If your query has an equality filter that matches your container's partition key, you'll need to check only the relevant partition's index. This optimization reduces the total number of RUs that the query requires.
-
-If you have a large number of provisioned RUs (more than 30,000) or a large amount of data stored (more than approximately 100 GB), you probably have a large enough container to see a significant reduction in query RU charges.
-
-For example, if you create a container with the partition key foodGroup, the following queries will need to check only a single physical partition:
-
-```sql
-SELECT *
-FROM c
-WHERE c.foodGroup = "Soups, Sauces, and Gravies" and c.description = "Mushroom, oyster, raw"
-```
-
-Queries that have an `IN` filter with the partition key will only check the relevant physical partition(s) and will not "fan-out":
-
-```sql
-SELECT *
-FROM c
-WHERE c.foodGroup IN("Soups, Sauces, and Gravies", "Vegetables and Vegetable Products") and c.description = "Mushroom, oyster, raw"
-```
-
-Queries that have range filters on the partition key, or that don't have any filters on the partition key, will need to "fan-out" and check every physical partition's index for results:
-
-```sql
-SELECT *
-FROM c
-WHERE c.description = "Mushroom, oyster, raw"
-```
-
-```sql
-SELECT *
-FROM c
-WHERE c.foodGroup > "Soups, Sauces, and Gravies" and c.description = "Mushroom, oyster, raw"
-```
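-
-When you know the partition key value up front, you can also scope the query to a single partition through the SDK query options. A minimal sketch with the Java V4 SDK (the key value and the `JsonNode` item type are illustrative):
-
-```Java
-import com.azure.cosmos.CosmosContainer;
-import com.azure.cosmos.models.CosmosQueryRequestOptions;
-import com.azure.cosmos.models.PartitionKey;
-import com.azure.cosmos.util.CosmosPagedIterable;
-import com.fasterxml.jackson.databind.JsonNode;
-
-public class SinglePartitionQuerySketch {
-    public static void querySinglePartition(CosmosContainer container) {
-        // Scope the query to one logical partition so only that partition's index is checked.
-        CosmosQueryRequestOptions options = new CosmosQueryRequestOptions()
-            .setPartitionKey(new PartitionKey("Soups, Sauces, and Gravies"));
-
-        CosmosPagedIterable<JsonNode> results = container.queryItems(
-            "SELECT * FROM c WHERE c.description = 'Mushroom, oyster, raw'", options, JsonNode.class);
-        results.forEach(item -> { /* process each result */ });
-    }
-}
-```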
-
-### Optimize queries that have filters on multiple properties
-
-Although queries that have filters on multiple properties will normally use a range index, they'll be more efficient if they can be served from a composite index. For small amounts of data, this optimization won't have a significant impact. It could be useful, however, for large amounts of data. You can only optimize, at most, one non-equality filter per composite index. If your query has multiple non-equality filters, pick one of them that will use the composite index. The rest will continue to use range indexes. The non-equality filter must be defined last in the composite index. [Learn more about composite indexes](../index-policy.md#composite-indexes).
-
-Here are some examples of queries that could be optimized with a composite index:
-
-```sql
-SELECT *
-FROM c
-WHERE c.foodGroup = "Vegetables and Vegetable Products" AND c._ts = 1575503264
-```
-
-```sql
-SELECT *
-FROM c
-WHERE c.foodGroup = "Vegetables and Vegetable Products" AND c._ts > 1575503264
-```
-
-Here's the relevant composite index:
-
-```json
-{
- "automatic":true,
- "indexingMode":"Consistent",
- "includedPaths":[
- {
- "path":"/*"
- }
- ],
- "excludedPaths":[],
- "compositeIndexes":[
- [
- {
- "path":"/foodGroup",
- "order":"ascending"
- },
- {
- "path":"/_ts",
- "order":"ascending"
- }
- ]
- ]
-}
-```
-
-## Optimizations that reduce query latency
-
-In many cases, the RU charge might be acceptable when query latency is still too high. The following sections give an overview of tips for reducing query latency. If you run the same query multiple times on the same dataset, it will typically have the same RU charge each time. But query latency might vary between query executions.
-
-### Improve proximity
-
-Queries that are run from a different region than the Azure Cosmos DB account will have higher latency than if they were run inside the same region. For example, if you're running code on your desktop computer, you should expect latency to be tens or hundreds of milliseconds higher (or more) than if the query came from a virtual machine within the same Azure region as Azure Cosmos DB. It's simple to [globally distribute data in Azure Cosmos DB](../distribute-data-globally.md) to ensure you can bring your data closer to your app.
-
-### Increase provisioned throughput
-
-In Azure Cosmos DB, your provisioned throughput is measured in Request Units (RUs). Imagine you have a query that consumes 5 RUs of throughput. For example, if you provision 1,000 RUs, you would be able to run that query 200 times per second. If you tried to run the query when there wasn't enough throughput available, Azure Cosmos DB would return an HTTP 429 error. Any of the current Core (SQL) API SDKs will automatically retry this query after waiting for a short time. Throttled requests take longer, so increasing provisioned throughput can improve query latency. You can observe the [total number of throttled requests](../use-metrics.md#understand-how-many-requests-are-succeeding-or-causing-errors) on the **Metrics** blade of the Azure portal.
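-
-You can also change provisioned throughput programmatically. A minimal sketch with the Java V4 SDK (the increment is illustrative, and the sketch assumes manual throughput provisioned on the container):
-
-```Java
-import com.azure.cosmos.CosmosContainer;
-import com.azure.cosmos.models.ThroughputProperties;
-import com.azure.cosmos.models.ThroughputResponse;
-
-public class ScaleThroughputSketch {
-    public static void scaleUp(CosmosContainer container) {
-        // Read the current manual throughput and replace it with a higher value (illustrative).
-        ThroughputResponse current = container.readThroughput();
-        int currentRu = current.getProperties().getManualThroughput();
-        container.replaceThroughput(ThroughputProperties.createManualThroughput(currentRu + 1000));
-    }
-}
-```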
-
-### Increase MaxConcurrency
-
-Parallel queries work by querying multiple partitions in parallel. But data from an individual partitioned collection is fetched serially with respect to the query. So, if you set MaxConcurrency to the number of partitions, you have the best chance of achieving the most performant query, provided all other system conditions remain the same. If you don't know the number of partitions, you can set MaxConcurrency (or MaxDegreeOfParallelism in older SDK versions) to a high number. The system will choose the minimum of the number of partitions and the user-provided input as the maximum degree of parallelism.
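-
-A minimal sketch with the Java V4 SDK, where the equivalent option is `setMaxDegreeOfParallelism` on `CosmosQueryRequestOptions` (the value and the `JsonNode` item type are illustrative):
-
-```Java
-import com.azure.cosmos.CosmosContainer;
-import com.azure.cosmos.models.CosmosQueryRequestOptions;
-import com.azure.cosmos.util.CosmosPagedIterable;
-import com.fasterxml.jackson.databind.JsonNode;
-
-public class ParallelQuerySketch {
-    public static void runParallelQuery(CosmosContainer container) {
-        // Illustrative value: set to (or above) the number of physical partitions.
-        CosmosQueryRequestOptions options = new CosmosQueryRequestOptions().setMaxDegreeOfParallelism(8);
-
-        CosmosPagedIterable<JsonNode> results = container.queryItems(
-            "SELECT * FROM c WHERE c.description = 'Mushroom, oyster, raw'", options, JsonNode.class);
-        results.forEach(item -> { /* process each result */ });
-    }
-}
-```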
-
-### Increase MaxBufferedItemCount
-
-Queries are designed to pre-fetch results while the current batch of results is being processed by the client. Pre-fetching helps to improve the overall latency of a query. Setting MaxBufferedItemCount limits the number of pre-fetched results. If you set this value to the expected number of results returned (or a higher number), the query can get the most benefit from pre-fetching. If you set this value to -1, the system will automatically determine the number of items to buffer.
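-
-A minimal sketch with the Java V4 SDK, where the equivalent option is `setMaxBufferedItemCount` (the values and the `JsonNode` item type are illustrative):
-
-```Java
-import com.azure.cosmos.CosmosContainer;
-import com.azure.cosmos.models.CosmosQueryRequestOptions;
-import com.azure.cosmos.util.CosmosPagedIterable;
-import com.fasterxml.jackson.databind.JsonNode;
-
-public class PrefetchSketch {
-    public static void runQueryWithPrefetch(CosmosContainer container) {
-        CosmosQueryRequestOptions options = new CosmosQueryRequestOptions()
-            .setMaxBufferedItemCount(-1)      // let the SDK decide how many items to buffer
-            .setMaxDegreeOfParallelism(8);    // illustrative
-        CosmosPagedIterable<JsonNode> results = container.queryItems(
-            "SELECT * FROM c WHERE c.foodGroup = 'Soups, Sauces, and Gravies'", options, JsonNode.class);
-        results.forEach(item -> { /* process each result */ });
-    }
-}
-```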
-
-## Next steps
-See the following articles for information on how to measure RUs per query, get execution statistics to tune your queries, and more:
-
-* [Get SQL query execution metrics by using .NET SDK](profile-sql-api-query.md)
-* [Tuning query performance with Azure Cosmos DB](./sql-api-query-metrics.md)
-* [Performance tips for .NET SDK](performance-tips.md)
-* [Performance tips for Java v4 SDK](performance-tips-java-sdk-v4-sql.md)
cosmos-db Troubleshoot Request Rate Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-request-rate-too-large.md
- Title: Troubleshoot Azure Cosmos DB request rate too large exceptions
-description: Learn how to diagnose and fix request rate too large exceptions.
- Previously updated : 03/03/2022
-# Diagnose and troubleshoot Azure Cosmos DB request rate too large (429) exceptions
-
-This article contains known causes and solutions for various 429 status code errors for the SQL API. If you're using the API for MongoDB, see the [Troubleshoot common issues in API for MongoDB](../mongodb/error-codes-solutions.md) article for how to debug status code 16500.
-
-A "Request rate too large" exception, also known as error code 429, indicates that your requests against Azure Cosmos DB are being rate limited.
-
-When you use provisioned throughput, you set the throughput measured in request units per second (RU/s) required for your workload. Database operations against the service such as reads, writes, and queries consume some amount of request units (RUs). Learn more about [request units](../request-units.md).
-
-In a given second, if the operations consume more than the provisioned request units, Azure Cosmos DB will return a 429 exception. Each second, the number of request units available to use is reset.
-
-Before taking an action to change the RU/s, it's important to understand the root cause of rate limiting and address the underlying issue.
-> [!TIP]
-> The guidance in this article applies to databases and containers using provisioned throughput - both autoscale and manual throughput.
-
-There are different error messages that correspond to different types of 429 exceptions:
-- [Request rate is large. More Request Units may be needed, so no changes were made.](#request-rate-is-large)
-- [The request didn't complete due to a high rate of metadata requests.](#rate-limiting-on-metadata-requests)
-- [The request didn't complete due to a transient service error.](#rate-limiting-due-to-transient-service-error)
-
-## Request rate is large
-This is the most common scenario. It occurs when the request units consumed by operations on data exceed the provisioned number of RU/s. If you're using manual throughput, this occurs when you've consumed more RU/s than the manual throughput provisioned. If you're using autoscale, this occurs when you've consumed more than the maximum RU/s provisioned. For example, if you have a resource provisioned with manual throughput of 400 RU/s, you will see 429 when you consume more than 400 request units in a single second. If you have a resource provisioned with autoscale max RU/s of 4000 RU/s (scales between 400 RU/s - 4000 RU/s), you will see 429s when you consume more than 4000 request units in a single second.
-
-### Step 1: Check the metrics to determine the percentage of requests with 429 error
-Seeing 429 error messages doesn't necessarily mean there is a problem with your database or container. A small percentage of 429s is normal whether you are using manual or autoscale throughput, and is a sign that you are maximizing the RU/s you've provisioned.
-
-#### How to investigate
-
-Determine what percent of your requests to your database or container resulted in 429s, compared to the overall count of successful requests. From your Azure Cosmos DB account blade, navigate to **Insights** > **Requests** > **Total Requests by Status Code**. Filter to a specific database and container.
-
-By default, the Azure Cosmos DB client SDKs and data import tools such as Azure Data Factory and the bulk executor library automatically retry requests on 429s. They typically retry up to nine times. As a result, while you may see 429s in the metrics, these errors may not even have been returned to your application.
--
-#### Recommended solution
-In general, for a production workload, **if you see between 1-5% of requests with 429s, and your end to end latency is acceptable, this is a healthy sign that the RU/s are being fully utilized**. No action is required. Otherwise, move to the next troubleshooting steps.
-
-If you're using autoscale, it's possible to see 429s on your database or container, even if the RU/s was not scaled to the maximum RU/s. See the section [Request rate is large with autoscale](#request-rate-is-large-with-autoscale) for an explanation.
-
-One common question that arises is, **"Why am I seeing 429s in the Azure Monitor metrics, but none in my own application monitoring?"** If Azure Monitor Metrics show you have 429s, but you've not seen any in your own application, this is because by default, the Cosmos client SDKs [`automatically retried internally on the 429s`](xref:Microsoft.Azure.Cosmos.CosmosClientOptions.MaxRetryAttemptsOnRateLimitedRequests) and the request succeeded in subsequent retries. As a result, the 429 status code is not returned to the application. In these cases, the overall rate of 429s is typically very low and can be safely ignored, assuming the overall rate is between 1-5% and end to end latency is acceptable to your application.
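-
-If you need to tune how long the SDK retries internally before surfacing the 429 to your code, the retry behavior can be adjusted at client build time. A minimal sketch with the Java V4 SDK (the values and placeholders are illustrative; the property linked above is the .NET equivalent):
-
-```Java
-import com.azure.cosmos.CosmosClient;
-import com.azure.cosmos.CosmosClientBuilder;
-import com.azure.cosmos.ThrottlingRetryOptions;
-
-import java.time.Duration;
-
-public class RetryOptionsSketch {
-    public static CosmosClient buildClient() {
-        // Illustrative values: how many times, and for how long overall, the SDK retries on 429s.
-        ThrottlingRetryOptions retryOptions = new ThrottlingRetryOptions()
-            .setMaxRetryAttemptsOnThrottledRequests(9)
-            .setMaxRetryWaitTime(Duration.ofSeconds(30));
-
-        return new CosmosClientBuilder()
-            .endpoint("<account-endpoint>") // placeholder
-            .key("<account-key>")           // placeholder
-            .throttlingRetryOptions(retryOptions)
-            .buildClient();
-    }
-}
-```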
-
-### Step 2: Determine if there's a hot partition
-A hot partition arises when one or a few logical partition keys consume a disproportionate amount of the total RU/s due to higher request volume. This can be caused by a partition key design that doesn't evenly distribute requests. It results in many requests being directed to a small subset of logical (which implies physical) partitions that become "hot." Because all data for a logical partition resides on one physical partition and total RU/s is evenly distributed among the physical partitions, a hot partition can lead to 429s and inefficient use of throughput.
-
-Here are some examples of partitioning strategies that lead to hot partitions:
-- You have a container storing IoT device data for a write-heavy workload that is partitioned by `date`. All data for a single date will reside on the same logical and physical partition. Because all the data written each day has the same date, this would result in a hot partition every day.
- - Instead, for this scenario, a partition key like `id` (either a GUID or device ID), or a [synthetic partition key](./synthetic-partition-keys.md) combining `id` and `date` would yield a higher cardinality of values and better distribution of request volume.
-- You have a multi-tenant scenario with a container partitioned by `tenantId`. If one tenant is much more active than the others, it results in a hot partition. For example, if the largest tenant has 100,000 users, but most tenants have fewer than 10 users, you will have a hot partition when partitioned by `tenantID`.
- - For this previous scenario, consider having a dedicated container for the largest tenant, partitioned by a more granular property such as `UserId`.
-
-#### How to identify the hot partition
-
-To verify if there's a hot partition, navigate to **Insights** > **Throughput** > **Normalized RU Consumption (%) By PartitionKeyRangeID**. Filter to a specific database and container.
-
-Each PartitionKeyRangeId maps to one physical partition. If there's one PartitionKeyRangeId that has much higher Normalized RU consumption than others (for example, one is consistently at 100%, but others are at 30% or less), this can be a sign of a hot partition. Learn more about the [Normalized RU Consumption metric](../monitor-normalized-request-units.md).
--
-To see which logical partition keys are consuming the most RU/s,
-use [Azure Diagnostic Logs](../cosmosdb-monitor-resource-logs.md). This sample query sums up the total request units consumed per second on each logical partition key.
-
-> [!IMPORTANT]
-> Enabling diagnostic logs incurs a separate charge for the Log Analytics service, which is billed based on the volume of data ingested. It's recommended you turn on diagnostic logs for a limited amount of time for debugging, and turn off when no longer required. See [pricing page](https://azure.microsoft.com/pricing/details/monitor/) for details.
-
-# [Resource-specific](#tab/resource-specific)
-
- ```Kusto
- CDBPartitionKeyRUConsumption
- | where TimeGenerated >= ago(24hour)
- | where CollectionName == "CollectionName"
- | where isnotempty(PartitionKey)
- // Sum total request units consumed by logical partition key for each second
- | summarize sum(RequestCharge) by PartitionKey, OperationName, bin(TimeGenerated, 1s)
- | order by sum_RequestCharge desc
- ```
-# [Azure Diagnostics](#tab/azure-diagnostics)
-
- ```Kusto
- AzureDiagnostics
- | where TimeGenerated >= ago(24hour)
- | where Category == "PartitionKeyRUConsumption"
- | where collectionName_s == "CollectionName"
- | where isnotempty(partitionKey_s)
- // Sum total request units consumed by logical partition key for each second
- | summarize sum(todouble(requestCharge_s)) by partitionKey_s, operationType_s, bin(TimeGenerated, 1s)
- | order by sum_requestCharge_s desc
- ```
--
-This sample output shows that in a particular minute, the logical partition key with value "Contoso" consumed around 12,000 RU/s, while the logical partition key with value "Fabrikam" consumed less than 600 RU/s. If this pattern was consistent during the time period where rate limiting occurred, this would indicate a hot partition.
--
-> [!TIP]
-> In any workload, there will be natural variation in request volume across logical partitions. You should determine if the hot partition is caused by a fundamental skewness due to choice of partition key (which may require changing the key) or temporary spike due to natural variation in workload patterns.
-
-#### Recommended solution
-Review the guidance on [how to choose a good partition key](../partitioning-overview.md#choose-partitionkey).
-
-If there's a high percentage of rate-limited requests and no hot partition:
-- You can [increase the RU/s](../set-throughput.md) on the database or container using the client SDKs, Azure portal, PowerShell, CLI, or an ARM template. Follow [best practices for scaling provisioned throughput (RU/s)](../scaling-provisioned-throughput-best-practices.md) to determine the right RU/s to set.
-
-If there's a high percentage of rate-limited requests and there's an underlying hot partition:
-- Long-term, for best cost and performance, consider **changing the partition key**. The partition key can't be updated in place, so this requires migrating the data to a new container with a different partition key. Azure Cosmos DB supports a [live data migration tool](https://devblogs.microsoft.com/cosmosdb/how-to-change-your-partition-key/) for this purpose.
-- Short-term, you can temporarily increase the RU/s to allow more throughput to the hot partition. This isn't recommended as a long-term strategy, because it leads to overprovisioning RU/s and higher cost.
-
-> [!TIP]
-> When you increase the throughput, the scale-up operation will either complete instantaneously or require up to 5-6 hours to complete, depending on the number of RU/s you want to scale up to. If you want to know the highest number of RU/s you can set without triggering the asynchronous scale-up operation (which requires Azure Cosmos DB to provision more physical partitions), multiply the number of distinct PartitionKeyRangeIds by 10,000 RU/s. For example, if you have 30,000 RU/s provisioned and 5 physical partitions (6,000 RU/s allocated per physical partition), you can increase to 50,000 RU/s (10,000 RU/s per physical partition) in an instantaneous scale-up operation. Increasing to >50,000 RU/s would require an asynchronous scale-up operation. Learn more about [best practices for scaling provisioned throughput (RU/s)](../scaling-provisioned-throughput-best-practices.md).
-
-### Step 3: Determine what requests are returning 429s
-
-#### How to investigate requests with 429s
-Use [Azure Diagnostic Logs](../cosmosdb-monitor-resource-logs.md) to identify which requests are returning 429s and how many RUs they consumed. This sample query aggregates at the minute level.
-
-> [!IMPORTANT]
-> Enabling diagnostic logs incurs a separate charge for the Log Analytics service, which is billed based on the volume of data ingested. We recommend that you turn on diagnostic logs for a limited amount of time for debugging, and turn them off when no longer required. See the [pricing page](https://azure.microsoft.com/pricing/details/monitor/) for details.
-
-# [Resource-specific](#tab/resource-specific)
-
- ```Kusto
- CDBDataPlaneRequests
- | where TimeGenerated >= ago(24h)
- | summarize throttledOperations = dcountif(ActivityId, StatusCode == 429), totalOperations = dcount(ActivityId), totalConsumedRUPerMinute = sum(RequestCharge) by DatabaseName, CollectionName, OperationName, RequestResourceType, bin(TimeGenerated, 1min)
- | extend averageRUPerOperation = 1.0 * totalConsumedRUPerMinute / totalOperations
- | extend fractionOf429s = 1.0 * throttledOperations / totalOperations
- | order by fractionOf429s desc
- ```
-# [Azure Diagnostics](#tab/azure-diagnostics)
-
- ```Kusto
- AzureDiagnostics
- | where TimeGenerated >= ago(24h)
- | where Category == "DataPlaneRequests"
- | summarize throttledOperations = dcountif(activityId_g, statusCode_s == 429), totalOperations = dcount(activityId_g), totalConsumedRUPerMinute = sum(todouble(requestCharge_s)) by databaseName_s, collectionName_s, OperationName, requestResourceType_s, bin(TimeGenerated, 1min)
- | extend averageRUPerOperation = 1.0 * totalConsumedRUPerMinute / totalOperations
- | extend fractionOf429s = 1.0 * throttledOperations / totalOperations
- | order by fractionOf429s desc
- ```
--
-For example, this sample output shows that each minute, 30% of Create Document requests were being rate limited, with each request consuming an average of 17 RUs.
-
-#### Recommended solution
-##### Use the Azure Cosmos DB capacity planner
-You can use the [Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) to understand the best provisioned throughput for your workload (volume and type of operations, and size of documents). You can further customize the calculations by providing sample data to get a more accurate estimation.
-
-##### 429s on create, replace, or upsert document requests
-- In the SQL API, all properties are indexed by default. Tune the [indexing policy](../index-policy.md) to index only the properties that are needed, as shown in the sketch below.
-This will lower the Request Units required per create document operation, which reduces the likelihood of seeing 429s or lets you achieve more operations per second for the same amount of provisioned RU/s.
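-
-As an illustration, here's a hedged Java v4 SDK sketch that creates a container whose indexing policy indexes only the paths the workload actually queries. The `database` variable is assumed to exist, and the container name, partition key, and paths are placeholders, not from the original article.
-
-```java
-import java.util.Arrays;
-import java.util.Collections;
-
-import com.azure.cosmos.CosmosDatabase;
-import com.azure.cosmos.models.CosmosContainerProperties;
-import com.azure.cosmos.models.ExcludedPath;
-import com.azure.cosmos.models.IncludedPath;
-import com.azure.cosmos.models.IndexingPolicy;
-import com.azure.cosmos.models.ThroughputProperties;
-
-// Assumes an existing CosmosDatabase named 'database'; all names and paths are placeholders.
-CosmosContainerProperties properties = new CosmosContainerProperties("WebsiteMetrics", "/CartID");
-
-IndexingPolicy indexingPolicy = new IndexingPolicy();
-// Index only the properties that queries filter or sort on...
-indexingPolicy.setIncludedPaths(Arrays.asList(new IncludedPath("/Country/?"), new IncludedPath("/Item/?")));
-// ...and exclude everything else to lower the RU charge of each write.
-indexingPolicy.setExcludedPaths(Collections.singletonList(new ExcludedPath("/*")));
-properties.setIndexingPolicy(indexingPolicy);
-
-database.createContainerIfNotExists(properties, ThroughputProperties.createManualThroughput(400));
-```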
-
-##### 429s on query document requests
-- Follow the guidance to [troubleshoot queries with high RU charge](troubleshoot-query-performance.md#querys-ru-charge-is-too-high).
-
-##### 429s on execute stored procedures
-- [Stored procedures](stored-procedures-triggers-udfs.md) are intended for operations that require write transactions across a partition key value. It isn't recommended to use stored procedures for a large number of read or query operations. For best performance, these read or query operations should be done on the client side, using the Cosmos SDKs.
-
-## Request rate is large with autoscale
-All the guidance in this article applies to both manual and autoscale throughput.
-
-When using autoscale, a common question that arises is, **"Is it still possible to see 429s with autoscale?"**
-
-Yes. There are two main scenarios where this can occur.
-
-**Scenario 1**: When the overall consumed RU/s exceeds the max RU/s of the database or container, the service will throttle requests accordingly. This is analogous to exceeding the overall manual provisioned throughput of a database or container.
-
-**Scenario 2**: If there is a hot partition, that is, a logical partition key value that has a disproportionately higher amount of requests compared to other partition key values, it is possible for the underlying physical partition to exceed its RU/s budget. As a best practice, to avoid hot partitions, choose a good partition key that results in an even distribution of both storage and throughput. This is similar to when there is a hot partition when using manual throughput.
-
-For example, if you select the 20,000 RU/s max throughput option and have 200 GB of storage with four physical partitions, each physical partition can be autoscaled up to 5,000 RU/s. If there's a hot partition on a particular logical partition key, you'll see 429s when the underlying physical partition it resides in exceeds 5,000 RU/s, that is, exceeds 100% normalized utilization.
-
-Follow the guidance in [Step 1](#step-1-check-the-metrics-to-determine-the-percentage-of-requests-with-429-error), [Step 2](#step-2-determine-if-theres-a-hot-partition), and [Step 3](#step-3-determine-what-requests-are-returning-429s) to debug these scenarios.
-
-Another common question that arises is, **Why is normalized RU consumption 100%, but autoscale didn't scale to the max RU/s?**
-
-This typically occurs for workloads that have temporary or intermittent spikes of usage. When you use autoscale, Azure Cosmos DB only scales the RU/s to the maximum throughput when the normalized RU consumption is 100% for a sustained, continuous period of time in a 5-second interval. This is done to ensure the scaling logic is cost friendly to the user, as it ensures that single, momentary spikes don't lead to unnecessary scaling and higher cost. When there are momentary spikes, the system typically scales up to a value higher than the previously scaled-to RU/s, but lower than the max RU/s. Learn more about how to [interpret the normalized RU consumption metric with autoscale](../monitor-normalized-request-units.md#normalized-ru-consumption-and-autoscale).
-
-## Rate limiting on metadata requests
-
-Metadata rate limiting can occur when you are performing a high volume of metadata operations on databases and/or containers. Metadata operations include:
-- Create, read, update, or delete a container or database
-- List databases or containers in a Cosmos account
-- Query for offers to see the current provisioned throughput
-
-There's a system-reserved RU limit for these operations, so increasing the provisioned RU/s of the database or container will have no impact and isn't recommended. See [limits on metadata operations](../concepts-limits.md#metadata-request-limits).
-
-#### How to investigate
-Navigate to **Insights** > **System** > **Metadata Requests By Status Code**. Filter to a specific database and container if desired.
--
-#### Recommended solution
-- If your application needs to perform metadata operations, consider implementing a backoff policy to send these requests at a lower rate.
-
-- Use static Cosmos DB client instances. When the DocumentClient or CosmosClient is initialized, the Cosmos DB SDK fetches metadata about the account, including information about the consistency level, databases, containers, partitions, and offers. This initialization may consume a high number of RUs, and should be performed infrequently. Use a single DocumentClient instance and use it for the lifetime of your application.
-
-- Cache the names of databases and containers. Retrieve the names of your databases and containers from configuration or cache them on start. Calls like ReadDatabaseAsync/ReadDocumentCollectionAsync or CreateDatabaseQuery/CreateDocumentCollectionQuery will result in metadata calls to the service, which consume from the system-reserved RU limit. These operations should be performed infrequently. A minimal sketch of this pattern follows.
-
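-For example, here's a minimal Java sketch of the singleton pattern described above (the class and all names are illustrative, not from the original article): build the client once, resolve the database and container references once, and reuse them instead of re-issuing metadata calls such as create-if-not-exists on every request.
-
-```java
-import com.azure.cosmos.ConsistencyLevel;
-import com.azure.cosmos.CosmosClient;
-import com.azure.cosmos.CosmosClientBuilder;
-import com.azure.cosmos.CosmosContainer;
-
-// Illustrative holder for a single, process-wide client and a cached container reference.
-public final class CosmosClientHolder {
-    // Built once; getDatabase/getContainer only build references and don't call the service.
-    private static final CosmosClient CLIENT = new CosmosClientBuilder()
-        .endpoint(System.getenv("COSMOS_ENDPOINT"))   // placeholder configuration
-        .key(System.getenv("COSMOS_KEY"))
-        .consistencyLevel(ConsistencyLevel.SESSION)
-        .buildClient();
-
-    private static final CosmosContainer CONTAINER =
-        CLIENT.getDatabase("RetailDatabase").getContainer("WebsiteMetrics"); // placeholder names
-
-    private CosmosClientHolder() { }
-
-    public static CosmosContainer getContainer() {
-        return CONTAINER;
-    }
-}
-```
-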
-## Rate limiting due to transient service error
-
-This 429 error is returned when the request encounters a transient service error. Increasing the RU/s on the database or container will have no impact and isn't recommended.
-
-#### Recommended solution
-Retry the request. If the error persists for several minutes, file a support ticket from the [Azure portal](https://portal.azure.com/).
-
-## Next steps
-* [Monitor normalized RU/s consumption](../monitor-normalized-request-units.md) of your database or container.
-* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
-* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
-* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4-sql.md) issues when you use the Azure Cosmos DB Java v4 SDK.
-* Learn about performance guidelines for [Java v4 SDK](performance-tips-java-sdk-v4-sql.md).
cosmos-db Troubleshoot Request Timeout Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-request-timeout-java-sdk-v4-sql.md
- Title: Troubleshoot Azure Cosmos DB HTTP 408 or request timeout issues with the Java v4 SDK
-description: Learn how to diagnose and fix Java SDK request timeout exceptions with the Java v4 SDK.
--- Previously updated : 10/28/2020-----
-# Diagnose and troubleshoot Azure Cosmos DB Java v4 SDK request timeout exceptions
-
-The HTTP 408 error occurs if the SDK was unable to complete the request before the timeout limit occurred.
-
-## Troubleshooting steps
-The following list contains known causes and solutions for request timeout exceptions.
-
-### Existing issues
-If you're seeing requests getting stuck for longer durations or timing out more frequently, upgrade the Java v4 SDK to the latest version.
-NOTE: We strongly recommend using version 4.18.0 or above. Check out the [Java v4 SDK release notes](sql-api-sdk-java-v4.md) for more details.
-
-### High CPU utilization
-High CPU utilization is the most common cause. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries, where the SDK might open multiple connections for a single query.
-
-#### Solution:
-The client application that uses the SDK should be scaled up or out.
-
-### Connection throttling
-Connection throttling can happen because of either a connection limit on a host machine or Azure SNAT (PAT) port exhaustion.
-
-### Connection limit on a host machine
-Some Linux systems, such as Red Hat, have an upper limit on the total number of open files. Sockets in Linux are implemented as files, so this number limits the total number of connections, too. Run the following command.
-
-```bash
-ulimit -a
-```
-
-#### Solution:
-The maximum number of allowed open files, identified as "nofile", needs to be at least 10,000. For more information, see the Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4-sql.md).
-
-### Socket or port availability might be low
-When running in Azure, clients using the Java SDK can hit Azure SNAT (PAT) port exhaustion.
-
-#### Solution 1:
-If you're running on Azure VMs, follow the [SNAT port exhaustion guide](troubleshoot-java-sdk-v4-sql.md#snat).
-
-#### Solution 2:
-If you're running on Azure App Service, follow the [connection errors troubleshooting guide](../../app-service/troubleshoot-intermittent-outbound-connection-errors.md#cause) and [use App Service diagnostics](https://azure.github.io/AppService/2018/03/01/Deep-Dive-into-TCP-Connections-in-App-Service-Diagnostics.html).
-
-#### Solution 3:
-If you're running on Azure Functions, verify you're following the [Azure Functions recommendation](../../azure-functions/manage-connections.md#static-clients) of maintaining singleton or static clients for all of the involved services (including Azure Cosmos DB). Check the [service limits](../../azure-functions/functions-scale.md#service-limits) based on the type and size of your Function App hosting.
-
-#### Solution 4:
-If you use an HTTP proxy, make sure it can support the number of connections configured in the SDK `GatewayConnectionConfig`. Otherwise, you'll face connection issues.
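-
-As a sketch (assuming the Java v4 SDK in gateway mode; the endpoint, key, and pool size are placeholders, not from the original article), you can align the SDK's connection pool size with what the proxy supports:
-
-```java
-import com.azure.cosmos.CosmosClient;
-import com.azure.cosmos.CosmosClientBuilder;
-import com.azure.cosmos.GatewayConnectionConfig;
-
-// Keep the pool size at or below the number of connections the HTTP proxy can handle.
-GatewayConnectionConfig gatewayConfig = new GatewayConnectionConfig();
-gatewayConfig.setMaxConnectionPoolSize(500); // illustrative value
-
-CosmosClient client = new CosmosClientBuilder()
-    .endpoint("<account-endpoint>")   // placeholder
-    .key("<account-key>")             // placeholder
-    .gatewayMode(gatewayConfig)
-    .buildClient();
-```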
-
-### Create multiple client instances
-Creating multiple client instances might lead to connection contention and timeout issues.
-
-#### Solution 1:
-Follow the [performance tips](performance-tips-java-sdk-v4-sql.md#sdk-usage), and use a single CosmosClient instance across an entire application.
-
-#### Solution 2:
-If it isn't possible to have a singleton CosmosClient in an application, we recommend enabling connection sharing across multiple Cosmos clients through the `connectionSharingAcrossClientsEnabled(true)` API on the client builder.
-When you have multiple Cosmos client instances in the same JVM interacting with multiple Cosmos accounts, enabling this option allows connection sharing in Direct mode between the client instances where possible. Note that when you set this option, the connection configuration (for example, socket timeout and idle timeout) of the first instantiated client is used for all other client instances. A minimal sketch follows.
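-
-A minimal sketch of this option (the endpoints and keys are placeholders, and the sketch isn't from the original article):
-
-```java
-import com.azure.cosmos.CosmosClient;
-import com.azure.cosmos.CosmosClientBuilder;
-
-// Two clients for two accounts in the same JVM; Direct-mode connections are shared where possible.
-// Note: the connection configuration of the first instantiated client applies to both.
-CosmosClient clientA = new CosmosClientBuilder()
-    .endpoint("<account-a-endpoint>")
-    .key("<account-a-key>")
-    .connectionSharingAcrossClientsEnabled(true)
-    .buildClient();
-
-CosmosClient clientB = new CosmosClientBuilder()
-    .endpoint("<account-b-endpoint>")
-    .key("<account-b-key>")
-    .connectionSharingAcrossClientsEnabled(true)
-    .buildClient();
-```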
-
-### Hot partition key
-Azure Cosmos DB distributes the overall provisioned throughput evenly across physical partitions. When there's a hot partition, one or more logical partition keys on a physical partition are consuming all the physical partition's Request Units per second (RU/s). At the same time, the RU/s on other physical partitions are going unused. As a symptom, the total RU/s consumed will be less than the overall provisioned RU/s at the database or container, but you'll still see throttling (429s) on the requests against the hot logical partition key. Use the [Normalized RU Consumption metric](../monitor-normalized-request-units.md) to see if the workload is encountering a hot partition.
-
-#### Solution:
-Choose a good partition key that evenly distributes request volume and storage. Learn how to [change your partition key](https://devblogs.microsoft.com/cosmosdb/how-to-change-your-partition-key/).
-
-### High degree of concurrency
-The application is doing a high level of concurrency, which can lead to contention on the channel.
-
-#### Solution:
-The client application that uses the SDK should be scaled up or out.
-
-### Large requests or responses
-Large requests or responses can lead to head-of-line blocking on the channel and exacerbate contention, even with a relatively low degree of concurrency.
-
-#### Solution:
-The client application that uses the SDK should be scaled up or out.
-
-### Failure rate is within the Azure Cosmos DB SLA
-The application should be able to handle transient failures and retry when necessary. 408 exceptions aren't retried by the SDK because on create paths it's impossible to know whether the service created the item or not; sending the same item again for create causes a conflict exception. The application's business logic might have custom handling for conflicts, and that handling would be confused by the ambiguity between an existing item and a conflict caused by a create retry.
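-
-As an illustration of the application-side handling described above (not SDK behavior; the type and property names are placeholders), a Java v4 sketch might retry a timed-out create and treat a 409 on the retry as a sign that the first attempt may have succeeded:
-
-```java
-import com.azure.cosmos.CosmosContainer;
-import com.azure.cosmos.CosmosException;
-import com.azure.cosmos.models.PartitionKey;
-
-// 'RetailItem' is a placeholder POJO with getId() and getCartId() accessors.
-void createWithTimeoutHandling(CosmosContainer container, RetailItem item) {
-    try {
-        container.createItem(item);
-    } catch (CosmosException ex) {
-        if (ex.getStatusCode() != 408) {
-            throw ex;
-        }
-        try {
-            // The first attempt may or may not have been committed; retry the create.
-            container.createItem(item);
-        } catch (CosmosException retryEx) {
-            if (retryEx.getStatusCode() == 409) {
-                // Conflict on retry: the original create likely succeeded. Read back and verify.
-                container.readItem(item.getId(), new PartitionKey(item.getCartId()), RetailItem.class);
-            } else {
-                throw retryEx;
-            }
-        }
-    }
-}
-```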
-
-### Failure rate violates the Azure Cosmos DB SLA
-Contact [Azure Support](https://aka.ms/azure-support).
-
-## Next steps
-* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4-sql.md) issues when you use the Azure Cosmos DB Java v4 SDK.
-* Learn about performance guidelines for [Java v4](performance-tips-java-sdk-v4-sql.md).
cosmos-db Troubleshoot Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-request-timeout.md
- Title: Troubleshoot Azure Cosmos DB service request timeout exceptions
-description: Learn how to diagnose and fix Azure Cosmos DB service request timeout exceptions.
--- Previously updated : 07/13/2020-----
-# Diagnose and troubleshoot Azure Cosmos DB request timeout exceptions
-
-Azure Cosmos DB returned an HTTP 408 request timeout.
-
-## Troubleshooting steps
-The following list contains known causes and solutions for request timeout exceptions.
-
-### Check the SLA
-Check [Azure Cosmos DB monitoring](../monitor-cosmos-db.md) to see if the number of 408 exceptions violates the Azure Cosmos DB SLA.
-
-#### Solution 1: It didn't violate the Azure Cosmos DB SLA
-The application should handle this scenario and retry on these transient failures.
-
-#### Solution 2: It did violate the Azure Cosmos DB SLA
-Contact [Azure Support](https://aka.ms/azure-support).
-
-### Hot partition key
-Azure Cosmos DB distributes the overall provisioned throughput evenly across physical partitions. When there's a hot partition, one or more logical partition keys on a physical partition are consuming all the physical partition's Request Units per second (RU/s). At the same time, the RU/s on other physical partitions are going unused. As a symptom, the total RU/s consumed will be less than the overall provisioned RU/s at the database or container. You'll still see throttling (429s) on the requests against the hot logical partition key. Use the [Normalized RU Consumption metric](../monitor-normalized-request-units.md) to see if the workload is encountering a hot partition.
-
-#### Solution:
-Choose a good partition key that evenly distributes request volume and storage. Learn how to [change your partition key](https://devblogs.microsoft.com/cosmosdb/how-to-change-your-partition-key/).
-
-## Next steps
-* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
-* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
-* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4-sql.md) issues when you use the Azure Cosmos DB Java v4 SDK.
-* Learn about performance guidelines for [Java v4 SDK](performance-tips-java-sdk-v4-sql.md).
cosmos-db Troubleshoot Sdk Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-sdk-availability.md
- Title: Diagnose and troubleshoot the availability of Azure Cosmos SDKs in multiregional environments
-description: Learn all about the Azure Cosmos SDK availability behavior when operating in multi regional environments.
-- Previously updated : 09/27/2022-----
-# Diagnose and troubleshoot the availability of Azure Cosmos SDKs in multiregional environments
-
-This article describes the behavior of the latest version of Azure Cosmos SDKs when you see a connectivity issue to a particular region or when a region failover occurs.
-
-All the Azure Cosmos SDKs give you an option to customize the regional preference. The following properties are used in different SDKs:
-
-* The [ConnectionPolicy.PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations) property in .NET V2 SDK.
-* The [CosmosClientOptions.ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion) or [CosmosClientOptions.ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions) properties in .NET V3 SDK.
-* The [CosmosClientBuilder.preferredRegions](/java/api/com.azure.cosmos.cosmosclientbuilder.preferredregions) method in Java V4 SDK.
-* The [CosmosClient.preferred_locations](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) parameter in Python SDK.
-* The [CosmosClientOptions.ConnectionPolicy.preferredLocations](/javascript/api/@azure/cosmos/connectionpolicy#preferredlocations) parameter in JS SDK.
-
-When the SDK initializes with a configuration that specifies regional preference, it will first obtain the account information including the available regions from the global endpoint. It will then apply an intersection of the configured regional preference and the account's available regions and use the order in the regional preference to prioritize the result.
-
-If the regional preference configuration contains regions that aren't an available region in the account, the values will be ignored. If these invalid regions are [added later to the account](#adding-a-region-to-an-account), the SDK will use them if they're higher in the preference configuration.
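-
-For example, a minimal Java v4 sketch that sets a regional preference (the endpoint, key, and regions are placeholders; the equivalent properties for the other SDKs are listed above):
-
-```java
-import java.util.Arrays;
-
-import com.azure.cosmos.CosmosClient;
-import com.azure.cosmos.CosmosClientBuilder;
-
-// The SDK intersects this ordered list with the account's available regions.
-CosmosClient client = new CosmosClientBuilder()
-    .endpoint("<account-endpoint>")
-    .key("<account-key>")
-    .preferredRegions(Arrays.asList("West US", "East US", "North Europe"))
-    .buildClient();
-```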
-
-|Account type |Reads |Writes |
-||--|--|
-| Single write region | Preferred region with highest order | Primary region |
-| Multiple write regions | Preferred region with highest order | Preferred region with highest order |
-
-If you **don't set a preferred region**, the SDK client defaults to the primary region:
-
-|Account type |Reads |Writes |
-||--|--|
-| Single write region | Primary region | Primary region |
-| Multiple write regions | Primary region | Primary region |
-
-> [!NOTE]
-> Primary region refers to the first region in the [Azure Cosmos account region list](../distribute-data-globally.md).
-> If the values specified as regional preference do not match with any existing Azure regions, they will be ignored. If they match an existing region but the account is not replicated to it, then the client will connect to the next preferred region that matches or to the primary region.
-
-> [!WARNING]
-> The failover and availability logic described in this document can be disabled on the client configuration, which is not advised unless the user application is going to handle availability errors itself. This can be achieved by:
->
-> * Setting the [ConnectionPolicy.EnableEndpointDiscovery](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.enableendpointdiscovery) property in .NET V2 SDK to false.
-> * Setting the [CosmosClientOptions.LimitToEndpoint](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.limittoendpoint) property in .NET V3 SDK to true.
-> * Setting the [CosmosClientBuilder.endpointDiscoveryEnabled](/java/api/com.azure.cosmos.cosmosclientbuilder.endpointdiscoveryenabled) method in Java V4 SDK to false.
-> * Setting the [CosmosClient.enable_endpoint_discovery](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) parameter in Python SDK to false.
-> * Setting the [CosmosClientOptions.ConnectionPolicy.enableEndpointDiscovery](/javascript/api/@azure/cosmos/connectionpolicy#enableEndpointDiscovery) parameter in JS SDK to false.
-
-Under normal circumstances, the SDK client will connect to the preferred region (if a regional preference is set) or to the primary region (if no preference is set), and the operations will be limited to that region, unless any of the below scenarios occur.
-
-In these cases, the client using the Azure Cosmos SDK exposes logs and includes the retry information as part of the **operation diagnostic information**:
-
-* The *RequestDiagnosticsString* property on responses in .NET V2 SDK.
-* The *Diagnostics* property on responses and exceptions in .NET V3 SDK.
-* The *getDiagnostics()* method on responses and exceptions in Java V4 SDK.
-
-When determining the next region in order of preference, the SDK client will use the account region list, prioritizing the preferred regions (if any).
-
-For a comprehensive detail on SLA guarantees during these events, see the [SLAs for availability](../high-availability.md#slas).
-
-## <a id="remove-region"></a>Removing a region from the account
-
-When you remove a region from an Azure Cosmos account, any SDK client that actively uses the account will detect the region removal through a backend response code. The client then marks the regional endpoint as unavailable. The client retries the current operation and all the future operations are permanently routed to the next region in order of preference. In case the preference list only had one entry (or was empty) but the account has other regions available, it will route to the next region in the account list.
-
-## Adding a region to an account
-
-Every 5 minutes, the Azure Cosmos SDK client reads the account configuration and refreshes the regions that it's aware of.
-
-If you remove a region and later add it back to the account, and the added region has a higher regional preference order in the SDK configuration than the currently connected region, the SDK will switch back to using this region permanently. After the added region is detected, all future requests are directed to it.
-
-If you configure the client to preferably connect to a region that the Azure Cosmos account doesn't have, the preferred region is ignored. If you add that region later, the client detects it, and will switch permanently to that region.
-
-## <a id="manual-failover-single-region"></a>Fail over the write region in a single write region account
-
-If you initiate a failover of the current write region, the next write request will fail with a known backend response. When this response is detected, the client will query the account to learn the new write region, and proceed to retry the current operation and permanently route all future write operations to the new region.
-
-## Regional outage
-
-If the account is single write region and the regional outage occurs during a write operation, the behavior is similar to a [manual failover](#manual-failover-single-region). For read requests or multiple write regions accounts, the behavior is similar to [removing a region](#remove-region).
-
-## Session consistency guarantees
-
-When using [session consistency](../consistency-levels.md#guarantees-associated-with-consistency-levels), the client needs to guarantee that it can read its own writes. In single write region accounts where the read region preference is different from the write region, there could be cases where the user issues a write and when doing a read from a local region, the local region hasn't yet received the data replication (speed of light constraint). In such cases, the SDK detects the specific failure on the read operation and retries the read on the primary region to ensure session consistency.
-
-## Transient connectivity issues on TCP protocol
-
-In scenarios where the Azure Cosmos SDK client is configured to use the TCP protocol, for a given request, there might be situations where the network conditions are temporarily affecting the communication with a particular endpoint. These temporary network conditions can surface as TCP timeouts and Service Unavailable (HTTP 503) errors. The client will, if possible, [retry the request locally](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503) on the same endpoint for some seconds.
-
-If the user has configured a preferred region list with more than one region and the client exhausted all local retries, it can attempt to retry that single operation in the next region from the preference list. Write operations can only be retried in other region if the Azure Cosmos DB account has multiple write regions enabled, while read operations can be retried in any available region.
-
-## Next steps
-
-* Review the [Availability SLAs](../high-availability.md#slas).
-* Use the latest [.NET SDK](sql-api-sdk-dotnet-standard.md)
-* Use the latest [Java SDK](sql-api-sdk-java-v4.md)
-* Use the latest [Python SDK](sql-api-sdk-python.md)
-* Use the latest [Node SDK](sql-api-sdk-node.md)
cosmos-db Troubleshoot Service Unavailable Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-service-unavailable-java-sdk-v4-sql.md
- Title: Troubleshoot Azure Cosmos DB service unavailable exceptions with the Java v4 SDK
-description: Learn how to diagnose and fix Azure Cosmos DB service unavailable exceptions with the Java v4 SDK.
--- Previously updated : 02/03/2022-----
-# Diagnose and troubleshoot Azure Cosmos DB Java v4 SDK service unavailable exceptions
-The Java v4 SDK wasn't able to connect to Azure Cosmos DB.
-
-## Troubleshooting steps
-The following list contains known causes and solutions for service unavailable exceptions.
-
-### The required ports are being blocked
-Verify that all the [required ports](sql-sdk-connection-modes.md#service-port-ranges) are enabled. If the account is configured with a private endpoint, additional ports must be opened.
-
-```
-failed to establish connection to {account name}.documents.azure.com/<unresolved>:3044 due to io.netty.channel.ConnectTimeoutException:
-```
-
-### Client initialization failure
-The following exception occurs if the SDK can't communicate with the Cosmos DB instance. This normally indicates that a firewall rule or another security mechanism is blocking the requests.
-
-```java
- java.lang.RuntimeException: Client initialization failed. Check if the endpoint is reachable and if your auth token is valid
-```
-
-To validate that the SDK can communicate with the Cosmos DB account, execute the following command from the environment where the application is hosted. If it fails, a firewall rule or another security feature is likely blocking the request. If it succeeds, the SDK should be able to communicate with the Cosmos DB account.
-```
-telnet myCosmosDbAccountName.documents.azure.com 443
-```
-
-### Client-side transient connectivity issues
-Service unavailable exceptions can surface when there are transient connectivity problems that are causing timeouts. Typically, the stack trace related to this scenario will contain a `ServiceUnavailableException` error with diagnostic details. For example:
-
-```java
-Exception in thread "main" ServiceUnavailableException{userAgent=azsdk-java-cosmos/4.6.0 Linux/4.15.0-1096-azure JRE/11.0.8, error=null, resourceAddress='null', requestUri='null', statusCode=503, message=Service is currently unavailable, please retry after a while. If this problem persists please contact support.: Message: "" {"diagnostics"}
-```
-
-Follow the [request timeout troubleshooting steps](troubleshoot-request-timeout-java-sdk-v4-sql.md#troubleshooting-steps) to resolve it.
-
-#### UnknownHostException
-UnknownHostException means that the Java framework can't resolve the DNS entry for the Cosmos DB endpoint on the affected machine. Verify that the machine can resolve the DNS entry, or if you have custom DNS resolution software (such as a VPN, a proxy, or a custom solution), make sure it contains the right configuration for the DNS endpoint that the error claims can't be resolved. If the error is constant, you can verify the machine's DNS resolution through a `curl` command to the endpoint described in the error.
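-
-As an additional, illustrative check (not part of the original article), DNS resolution can also be verified from Java itself, which helps rule out JVM-specific DNS configuration issues. The account name below is a placeholder.
-
-```java
-import java.net.InetAddress;
-import java.net.UnknownHostException;
-
-// Prints the resolved addresses for the endpoint, or the resolution failure.
-public class DnsCheck {
-    public static void main(String[] args) {
-        String host = "myCosmosDbAccountName.documents.azure.com";
-        try {
-            for (InetAddress address : InetAddress.getAllByName(host)) {
-                System.out.println(host + " -> " + address.getHostAddress());
-            }
-        } catch (UnknownHostException e) {
-            System.out.println("Could not resolve " + host + ": " + e.getMessage());
-        }
-    }
-}
-```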
-
-### Service outage
-Check the [Azure status](https://azure.status.microsoft/status) to see if there's an ongoing issue.
--
-## Next steps
-* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4-sql.md) issues when you use the Azure Cosmos DB Java v4 SDK.
-* Learn about performance guidelines for [Java v4 SDK](performance-tips-java-sdk-v4-sql.md).
cosmos-db Troubleshoot Service Unavailable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-service-unavailable.md
- Title: Troubleshoot Azure Cosmos DB service unavailable exceptions
-description: Learn how to diagnose and fix Azure Cosmos DB service unavailable exceptions.
--- Previously updated : 08/31/2022-----
-# Diagnose and troubleshoot Azure Cosmos DB service unavailable exceptions
-
-The SDK wasn't able to connect to Azure Cosmos DB. This scenario can be transient or permanent depending on the network conditions.
-
-It's important to make sure the application design follows our [guide for designing resilient applications with Azure Cosmos DB SDKs](conceptual-resilient-sdk-applications.md) so that it correctly reacts to different network conditions. Your application should have retries in place for service unavailable errors.
-
-When evaluating the case for service unavailable errors:
-
-* What is the impact measured in volume of operations affected compared to the operations succeeding? Is it within the service SLAs?
-* Is the P99 latency / availability affected?
-* Are the failures affecting all your application instances or only a subset? When the issue is reduced to a subset of instances, it's commonly a problem related to those instances.
-
-## Troubleshooting steps
-
-The following list contains known causes and solutions for service unavailable exceptions.
-
-### Verify the substatus code
-
-In certain conditions, the HTTP 503 Service Unavailable error will include a substatus code that helps to identify the cause.
-
-| SubStatus Code | Description |
-|-|-|
-| 20001 | The service unavailable error happened because there are client side [connectivity issues](#client-side-transient-connectivity-issues) (failures attempting to connect). The client attempted to recover by [retrying](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503) but all retries failed. |
-| 20002 | The service unavailable error happened because there are client side [timeouts](troubleshoot-dot-net-sdk-request-timeout.md#troubleshooting-steps). The client attempted to recover by [retrying](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503) but all retries failed. |
-| 20003 | The service unavailable error happened because there are underlying I/O errors related to the operating system. See the exception details for the related I/O error. |
-| 20004 | The service unavailable error happened because [client machine's CPU is overloaded](troubleshoot-dot-net-sdk-request-timeout.md#high-cpu-utilization). |
-| 20005 | The service unavailable error happened because client machine's threadpool is starved. Verify any potential [blocking async calls in your code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait). |
-| >= 21001 | This service unavailable error happened due to a transient service condition. Verify the conditions in the above section, confirm if you have retry policies in place. If the volume of these errors is high compared with successes, reach out to Azure Support. |
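-
-For example, a hedged Java v4 SDK sketch (not from the original article; the container variable, item id, and partition key value are placeholders) that surfaces the status code, substatus code, and diagnostics of a failed request so they can be matched against the table above:
-
-```java
-import com.azure.cosmos.CosmosContainer;
-import com.azure.cosmos.CosmosException;
-import com.azure.cosmos.models.PartitionKey;
-
-// Assumes an existing CosmosContainer named 'container'.
-try {
-    container.readItem("item-id", new PartitionKey("partition-key-value"), Object.class);
-} catch (CosmosException ex) {
-    if (ex.getStatusCode() == 503) {
-        // Substatus codes such as 20001-20005 indicate client-side causes (see the table above).
-        System.out.println("Service unavailable, substatus: " + ex.getSubStatusCode());
-        System.out.println(ex.getDiagnostics());
-    }
-    throw ex;
-}
-```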
-
-### The required ports are being blocked
-
-Verify that all the [required ports](sql-sdk-connection-modes.md#service-port-ranges) are enabled.
-
-### Client-side transient connectivity issues
-
-Service unavailable exceptions can surface when there are transient connectivity problems that are causing timeouts and can be safely retried following the [design recommendations](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503).
-
-Follow the [request timeout troubleshooting steps](troubleshoot-dot-net-sdk-request-timeout.md#troubleshooting-steps) to resolve it.
-
-### Service outage
-
-Check the [Azure status](https://azure.status.microsoft/status) to see if there's an ongoing issue.
-
-## Next steps
-
-* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
-* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4-sql.md) issues when you use the Azure Cosmos DB Java SDK.
-* Learn about performance guidelines for [.NET](performance-tips-dotnet-sdk-v3-sql.md).
-* Learn about performance guidelines for [Java](performance-tips-java-sdk-v4-sql.md).
cosmos-db Troubleshoot Unauthorized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-unauthorized.md
- Title: Troubleshoot Azure Cosmos DB unauthorized exceptions
-description: Learn how to diagnose and fix unauthorized exceptions.
--- Previously updated : 07/13/2020-----
-# Diagnose and troubleshoot Azure Cosmos DB unauthorized exceptions
-
-HTTP 401: The MAC signature found in the HTTP request isn't the same as the computed signature.
-If you received the 401 error message "The MAC signature found in the HTTP request is not the same as the computed signature", it can be caused by the following scenarios.
-
-For older SDKs, the exception can appear as an invalid JSON exception instead of the correct 401 unauthorized exception. Newer SDKs properly handle this scenario and give a valid error message.
-
-## Troubleshooting steps
-The following list contains known causes and solutions for unauthorized exceptions.
-
-### The key wasn't properly rotated (the most common scenario)
-The 401 MAC signature error is seen shortly after a key rotation and eventually stops without any changes.
-
-#### Solution:
-The key was rotated without following the [best practices](../secure-access-to-data.md#key-rotation). Azure Cosmos DB account key rotation can take anywhere from a few seconds to possibly days, depending on the Azure Cosmos DB account size.
-
-### The key is misconfigured
-The 401 MAC signature issue will be consistent and happens for all calls using that key.
-
-#### Solution:
-The application is misconfigured and is using the wrong key for the account, or the entire key wasn't copied.
-
-### The application is using the read-only keys for write operations
-The 401 MAC signature issue only occurs for write operations like create or replace, but read requests succeed.
-
-#### Solution:
-Switch the application to use a read/write key to allow the operations to complete successfully.
-
-### Race condition with create container
-The 401 MAC signature issue is seen shortly after a container creation. This issue occurs only until the container creation is completed.
-
-#### Solution:
-There's a race condition with container creation. An application instance is trying to access the container before the container creation is complete. The most common scenario for this race condition is if the application is running and the container is deleted and re-created with the same name. The SDK attempts to use the new container, but the container creation is still in progress so it doesn't have the keys.
-
-### Bulk mode enabled
-When using [Bulk mode enabled](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/), read and write operations are optimized for best network performance and sent to the backend through a dedicated Bulk API. 401 errors while performing read operations with Bulk mode enabled often mean that the application is using the [read-only keys](../secure-access-to-data.md#primary-keys).
-
-#### Solution
-Use the read/write keys or authorization mechanism with write access when performing operations with Bulk mode enabled.
-
-## Next steps
-* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
-* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
-* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4-sql.md) issues when you use the Azure Cosmos DB Java v4 SDK.
-* Learn about performance guidelines for [Java v4 SDK](performance-tips-java-sdk-v4-sql.md).
cosmos-db Tutorial Create Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-create-notebook.md
- Title: |
- Tutorial: Create a Jupyter Notebook in Azure Cosmos DB SQL API to analyze and visualize data (preview)
-description: |
- Learn how to use built-in Jupyter notebooks to import data to Azure Cosmos DB SQL API, analyze the data, and visualize the output.
-- Previously updated : 09/29/2022-----
-# Tutorial: Create a Jupyter Notebook in Azure Cosmos DB SQL API to analyze and visualize data (preview)
--
-> [!IMPORTANT]
-> The Jupyter Notebooks feature of Azure Cosmos DB is currently in a preview state and is progressively rolling out to all customers over time.
-
-This article describes how to use the Jupyter Notebooks feature of Azure Cosmos DB to import sample retail data to an Azure Cosmos DB SQL API account. You'll see how to use the Azure Cosmos DB magic commands to run queries, analyze the data, and visualize the results.
-
-## Prerequisites
--- [Azure Cosmos DB SQL API account](create-cosmosdb-resources-portal.md#create-an-azure-cosmos-db-account) (configured with serverless throughput)-
-## Create a new notebook
-
-In this section, you'll create a new notebook. In later sections, you'll use it to create the Azure Cosmos database and container and to import the retail data into the container.
-
-1. Navigate to your Azure Cosmos DB account and open the **Data Explorer.**
-
-1. Select **New Notebook**.
-
- :::image type="content" source="media/tutorial-create-notebook/new-notebook-option.png" lightbox="media/tutorial-create-notebook/new-notebook-option.png" alt-text="Screenshot of the Data Explorer with the 'New Notebook' option highlighted.":::
-
-1. In the confirmation dialog that appears, select **Create**.
-
- > [!NOTE]
- > A temporary workspace will be created to enable you to work with Jupyter Notebooks. When the session expires, any notebooks in the workspace will be removed.
-
-1. Select the kernel you wish to use for the notebook.
-
-### [Python](#tab/python)
--
-### [C#](#tab/csharp)
----
-> [!TIP]
-> Now that the new notebook has been created, you can rename it to something like **VisualizeRetailData.ipynb**.
-
-## Create a database and container using the SDK
-
-### [Python](#tab/python)
-
-1. Start in the default code cell.
-
-1. Import any packages you require for this tutorial.
-
- ```python
- import azure.cosmos
- from azure.cosmos.partition_key import PartitionKey
- ```
-
-1. Create a database named **RetailIngest** using the built-in SDK.
-
- ```python
- database = cosmos_client.create_database_if_not_exists('RetailIngest')
- ```
-
-1. Create a container named **WebsiteMetrics** with a partition key of `/CartID`.
-
- ```python
- container = database.create_container_if_not_exists(id='WebsiteMetrics', partition_key=PartitionKey(path='/CartID'))
- ```
-
-1. Select **Run** to create the database and container resource.
-
- :::image type="content" source="media/tutorial-create-notebook/run-cell.png" alt-text="Screenshot of the 'Run' option in the menu.":::
-
-### [C#](#tab/csharp)
-
-1. Start in the default code cell.
-
-1. Import any packages you require for this tutorial.
-
- ```csharp
- using Microsoft.Azure.Cosmos;
- ```
-
-1. Create a new instance of the client type using the built-in SDK.
-
- ```csharp
- var cosmosClient = new CosmosClient(Cosmos.Endpoint, Cosmos.Key);
- ```
-
-1. Create a database named **RetailIngest**.
-
- ```csharp
- Database database = await cosmosClient.CreateDatabaseIfNotExistsAsync("RetailIngest");
- ```
-
-1. Create a container named **WebsiteMetrics** with a partition key of `/CartID`.
-
- ```csharp
- Container container = await database.CreateContainerIfNotExistsAsync("WebsiteMetrics", "/CartID");
- ```
-
-1. Select **Run** to create the database and container resource.
-
- :::image type="content" source="media/tutorial-create-notebook/run-cell.png" alt-text="Screenshot of the 'Run' option in the menu.":::
---
-## Import data using magic commands
-
-1. Add a new code cell.
-
-1. Within the code cell, add the following magic command to upload the JSON data from this URL to your existing container: <https://cosmosnotebooksdata.blob.core.windows.net/notebookdata/websiteData.json>
-
- ```python
- %%upload --databaseName RetailIngest --containerName WebsiteMetrics --url https://cosmosnotebooksdata.blob.core.windows.net/notebookdata/websiteData.json
- ```
-
-1. Select **Run Active Cell** to only run the command in this specific cell.
-
- :::image type="content" source="media/tutorial-create-notebook/run-active-cell.png" alt-text="Screenshot of the 'Run Active Cell' option in the menu.":::
-
- > [!NOTE]
- > The import command should take 5-10 seconds to complete.
-
-1. Observe the output from the run command. Ensure that **2,654** documents were imported.
-
- ```output
- Documents successfully uploaded to WebsiteMetrics
- Total number of documents imported:
- Success: 2654
- Failure: 0
- Total time taken : 00:00:04 hours
- Total RUs consumed : 27309.660000001593
- ```
-
-## Visualize your data
-
-### [Python](#tab/python)
-
-1. Create another new code cell.
-
-1. In the code cell, use a SQL query to populate a [Pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html#pandas.DataFrame).
-
- ```python
- %%sql --database RetailIngest --container WebsiteMetrics --output df_cosmos
- SELECT c.Action, c.Price as ItemRevenue, c.Country, c.Item FROM c
- ```
-
-1. Select **Run Active Cell** to only run the command in this specific cell.
-
-1. Create another new code cell.
-
-1. In the code cell, output the top **10** items from the dataframe.
-
- ```python
- df_cosmos.head(10)
- ```
-
-1. Select **Run Active Cell** to only run the command in this specific cell.
-
-1. Observe the output of running the command.
-
- | | Action | ItemRevenue | Country | Item |
- | --- | --- | --- | --- | --- |
- | **0** | Purchased | 19.99 | Macedonia | Button-Up Shirt |
- | **1** | Viewed | 12.00 | Papua New Guinea | Necklace |
- | **2** | Viewed | 25.00 | Slovakia (Slovak Republic) | Cardigan Sweater |
- | **3** | Purchased | 14.00 | Senegal | Flip Flop Shoes |
- | **4** | Viewed | 50.00 | Panama | Denim Shorts |
- | **5** | Viewed | 14.00 | Senegal | Flip Flop Shoes |
- | **6** | Added | 14.00 | Senegal | Flip Flop Shoes |
- | **7** | Added | 50.00 | Panama | Denim Shorts |
- | **8** | Purchased | 33.00 | Palestinian Territory | Red Top |
- | **9** | Viewed | 30.00 | Malta | Green Sweater |
-
-1. Create another new code cell.
-
-1. In the code cell, import the **pandas** package to customize the output of the dataframe.
-
- ```python
- import pandas as pd
- pd.options.display.html.table_schema = True
- pd.options.display.max_rows = None
-
- df_cosmos.groupby("Item").size()
- ```
-
-1. Select **Run Active Cell** to only run the command in this specific cell.
-
-1. In the output, select the **Line Chart** option to view a different visualization of the data.
-
- :::image type="content" source="media/tutorial-create-notebook/pandas-python-line-chart.png" alt-text="Screenshot of the Pandas dataframe visualization for the data as a line chart.":::
-
-### [C#](#tab/csharp)
-
-1. Create a new code cell.
-
-1. In the code cell, create a new C# class to represent an item in the container.
-
- ```csharp
- public class Record
- {
- public string Action { get; set; }
- public decimal Price { get; set; }
- public string Country { get; set; }
- public string Item { get; set; }
- }
- ```
-
-1. Create another new code cell.
-
-1. In the code cell, add code to [execute a SQL query using the SDK](quickstart-dotnet.md#query-items) storing the output of the query in a variable of type <xref:System.Collections.Generic.List%601> named **results**.
-
- ```csharp
- using System.Collections.Generic;
-
- var query = new QueryDefinition(
- query: "SELECT c.Action, c.Price, c.Country, c.Item FROM c"
- );
-
- FeedIterator<Record> feed = container.GetItemQueryIterator<Record>(
- queryDefinition: query
- );
-
- var results = new List<Record>();
- while (feed.HasMoreResults)
- {
- FeedResponse<Record> response = await feed.ReadNextAsync();
- foreach (Record result in response)
- {
- results.Add(result);
- }
- }
- ```
-
-1. Create another new code cell.
-
-1. In the code cell, create a dictionary by adding unique permutations of the **Item** field as the key and the data in the **Price** field as the value.
-
- ```csharp
- var dictionary = new Dictionary<string, decimal>();
-
- foreach(var result in results)
- {
- dictionary.TryAdd (result.Item, result.Price);
- }
-
- dictionary
- ```
-
-1. Select **Run Active Cell** to only run the command in this specific cell.
-
-1. Observe the output with unique combinations of the **Item** and **Price** fields.
-
- ```output
- ...
- Denim Jacket:31.99
- Fleece Jacket:65
- Sandals:12
- Socks:3.75
- Sandal:35.5
- Light Jeans:80
- ...
- ```
-
-1. Create another new code cell.
-
-1. In the code cell, output the **results** variable.
-
- ```csharp
- results
- ```
-
-1. Select **Run Active Cell** to only run the command in this specific cell.
-
-1. In the output, select the **Box Plot** option to view a different visualization of the data.
-
- :::image type="content" source="media/tutorial-create-notebook/pandas-csharp-box-plot.png" alt-text="Screenshot of the Pandas dataframe visualization for the data as a box plot.":::
---
-## Persist your notebook
-
-1. In the **Notebooks** section, open the context menu for the notebook you created for this tutorial and select **Download**.
-
- :::image type="content" source="media/tutorial-create-notebook/download-notebook.png" alt-text="Screenshot of the notebook context menu with the 'Download' option.":::
-
- > [!TIP]
- > To save your work permanently, save your notebooks to a GitHub repository or download the notebooks to your local machine before the session ends.
-
-## Next steps
-- [Learn about the Jupyter Notebooks feature in Azure Cosmos DB](../notebooks-overview.md)
-- [Review the FAQ on Jupyter Notebook support](../notebooks-faq.yml)
cosmos-db Tutorial Global Distribution Sql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-global-distribution-sql-api.md
- Title: 'Tutorial: Azure Cosmos DB global distribution tutorial for the SQL API'
-description: 'Tutorial: Learn how to set up Azure Cosmos DB global distribution using the SQL API with .NET, Java, Python and various other SDKs'
------ Previously updated : 04/03/2022--
-# Tutorial: Set up Azure Cosmos DB global distribution using the SQL API
-
-In this article, we show how to use the Azure portal to set up Azure Cosmos DB global distribution and then connect using the SQL API.
-
-This article covers the following tasks:
-
-> [!div class="checklist"]
-> * Configure global distribution using the Azure portal
-> * Configure global distribution using the [SQL APIs](../introduction.md)
-
-<a id="portal"></a>
-
-## <a id="preferred-locations"></a> Connecting to a preferred region using the SQL API
-
-In order to take advantage of [global distribution](../distribute-data-globally.md), client applications can specify the ordered preference list of regions to be used to perform document operations. Based on the Azure Cosmos DB account configuration, current regional availability and the preference list specified, the most optimal endpoint will be chosen by the SQL SDK to perform write and read operations.
-
-This preference list is specified when initializing a connection using the SQL SDKs. The SDKs accept an optional parameter `PreferredLocations` that is an ordered list of Azure regions.
-
-The SDK will automatically send all writes to the current write region. All reads will be sent to the first available region in the preferred locations list. If the request fails, the client will fail down the list to the next region.
-
-The SDK will only attempt to read from the regions specified in preferred locations. For example, if the Azure Cosmos account is available in four regions but the client specifies only two read (non-write) regions within `PreferredLocations`, then no reads will be served out of the read region that isn't specified in `PreferredLocations`. If the read regions specified in the `PreferredLocations` list aren't available, reads will be served out of the write region.
-
-The application can verify the current write endpoint and read endpoint chosen by the SDK by checking two properties, `WriteEndpoint` and `ReadEndpoint`, available in SDK version 1.8 and above. If the `PreferredLocations` property is not set, all requests will be served from the current write region.
-
-If you don't specify the preferred locations but used the `setCurrentLocation` method, the SDK automatically populates the preferred locations based on the current region that the client is running in. The SDK orders the regions based on the proximity of a region to the current region.
-
-## .NET SDK
-
-The SDK can be used without any code changes. In this case, the SDK automatically directs both reads and writes to the current write region.
-
-In version 1.8 and later of the .NET SDK, the ConnectionPolicy parameter for the DocumentClient constructor has a property called Microsoft.Azure.Documents.ConnectionPolicy.PreferredLocations. This property is of type Collection `<string>` and should contain a list of region names. The string values are formatted per the region name column on the [Azure Regions][regions] page, with no spaces before or after the first and last character respectively.
-
-The current write and read endpoints are available in DocumentClient.WriteEndpoint and DocumentClient.ReadEndpoint respectively.
-
-> [!NOTE]
-> The URLs for the endpoints should not be considered as long-lived constants. The service may update these at any point. The SDK handles this change automatically.
->
-
-# [.NET SDK V2](#tab/dotnetv2)
-
-If you are using the .NET V2 SDK, use the `PreferredLocations` property to set the preferred region.
-
-```csharp
-// Getting endpoints from application settings or other configuration location
-Uri accountEndPoint = new Uri(Properties.Settings.Default.GlobalDatabaseUri);
-string accountKey = Properties.Settings.Default.GlobalDatabaseKey;
-
-ConnectionPolicy connectionPolicy = new ConnectionPolicy();
-
-//Setting read region selection preference
-connectionPolicy.PreferredLocations.Add(LocationNames.WestUS); // first preference
-connectionPolicy.PreferredLocations.Add(LocationNames.EastUS); // second preference
-connectionPolicy.PreferredLocations.Add(LocationNames.NorthEurope); // third preference
-
-// initialize connection
-DocumentClient docClient = new DocumentClient(
- accountEndPoint,
- accountKey,
- connectionPolicy);
-
-// connect to DocDB
-await docClient.OpenAsync().ConfigureAwait(false);
-```
-
-Alternatively, you can use the `SetCurrentLocation` property and let the SDK choose the preferred location based on proximity.
-
-```csharp
-// Getting endpoints from application settings or other configuration location
-Uri accountEndPoint = new Uri(Properties.Settings.Default.GlobalDatabaseUri);
-string accountKey = Properties.Settings.Default.GlobalDatabaseKey;
-
-ConnectionPolicy connectionPolicy = new ConnectionPolicy();
-
-connectionPolicy.SetCurrentLocation("West US 2");
-
-// initialize connection
-DocumentClient docClient = new DocumentClient(
- accountEndPoint,
- accountKey,
- connectionPolicy);
-
-// connect to DocDB
-await docClient.OpenAsync().ConfigureAwait(false);
-```
-
-# [.NET SDK V3](#tab/dotnetv3)
-
-If you are using the .NET V3 SDK, use the `ApplicationPreferredRegions` property to set the preferred region.
-
-```csharp
-
-CosmosClientOptions options = new CosmosClientOptions();
-options.ApplicationName = "MyApp";
-options.ApplicationPreferredRegions = new List<string> {Regions.WestUS, Regions.WestUS2};
-
-CosmosClient client = new CosmosClient(connectionString, options);
-
-```
-
-Alternatively, you can use the `ApplicationRegion` property and let the SDK choose the preferred location based on proximity.
-
-```csharp
-CosmosClientOptions options = new CosmosClientOptions();
-options.ApplicationName = "MyApp";
-// If the application is running in West US
-options.ApplicationRegion = Regions.WestUS;
-
-CosmosClient client = new CosmosClient(connectionString, options);
-```
---
-## Node.js/JavaScript
-
-> [!NOTE]
-> The URLs for the endpoints should not be considered as long-lived constants. The service may update these at any point. The SDK will handle this change automatically.
->
->
-
-Below is a code example for Node.js/JavaScript.
-
-```JavaScript
-// Setting read region selection preference, in the following order -
-// 1 - West US
-// 2 - East US
-// 3 - North Europe
-const preferredLocations = ['West US', 'East US', 'North Europe'];
-
-// initialize the connection
-const client = new CosmosClient({ endpoint, key, connectionPolicy: { preferredLocations } });
-```
-
-## Python SDK
-
-The following code shows how to set preferred locations by using the Python SDK:
-
-```python
-connectionPolicy = documents.ConnectionPolicy()
-connectionPolicy.PreferredLocations = ['West US', 'East US', 'North Europe']
-client = cosmos_client.CosmosClient(ENDPOINT, {'masterKey': MASTER_KEY}, connectionPolicy)
-
-```
-
-## <a id="java4-preferred-locations"></a> Java V4 SDK
-
-The following code shows how to set preferred locations by using the Java SDK:
-
-# [Async](#tab/api-async)
-
- [Java SDK V4](sql-api-sdk-java-v4.md) (Maven [com.azure::azure-cosmos](https://mvnrepository.com/artifact/com.azure/azure-cosmos)) Async API
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=TutorialGlobalDistributionPreferredLocationAsync)]
-
-# [Sync](#tab/api-sync)
-
- [Java SDK V4](sql-api-sdk-java-v4.md) (Maven [com.azure::azure-cosmos](https://mvnrepository.com/artifact/com.azure/azure-cosmos)) Sync API
-
- [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=TutorialGlobalDistributionPreferredLocationSync)]
-
-
-
-## Spark 3 Connector
-
-You can define the preferred regional list using the `spark.cosmos.preferredRegionsList` [configuration](https://github.com/Azure/azure-sdk-for-jav), for example:
-
-```scala
-val sparkConnectorConfig = Map(
- "spark.cosmos.accountEndpoint" -> cosmosEndpoint,
- "spark.cosmos.accountKey" -> cosmosMasterKey,
- "spark.cosmos.preferredRegionsList" -> "[West US, East US, North Europe]"
- // other settings
-)
-```
-
-## REST
-
-Once a database account has been made available in multiple regions, clients can query its availability by performing a GET request on this URI `https://{databaseaccount}.documents.azure.com/`
-
-The service will return a list of regions and their corresponding Azure Cosmos DB endpoint URIs for the replicas. The current write region will be indicated in the response. The client can then select the appropriate endpoint for all further REST API requests as follows.
-
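For illustration, the discovery call is a plain GET against the account endpoint. The following is only a sketch with placeholder values; the `authorization` header must be generated according to the Azure Cosmos DB REST API authorization scheme.

```bash
# Query the account for its readable and writable regions (placeholder account and token).
curl -X GET "https://globaldbexample.documents.azure.com/" \
  -H "x-ms-version: 2018-12-31" \
  -H "x-ms-date: Tue, 11 Oct 2022 00:00:00 GMT" \
  -H "authorization: <generated-master-key-token>"
```
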
-Example response
-
-```json
-{
- "_dbs": "//dbs/",
- "media": "//media/",
- "writableLocations": [
- {
- "Name": "West US",
- "DatabaseAccountEndpoint": "https://globaldbexample-westus.documents.azure.com:443/"
- }
- ],
- "readableLocations": [
- {
- "Name": "East US",
- "DatabaseAccountEndpoint": "https://globaldbexample-eastus.documents.azure.com:443/"
- }
- ],
- "MaxMediaStorageUsageInMB": 2048,
- "MediaStorageUsageInMB": 0,
- "ConsistencyPolicy": {
- "defaultConsistencyLevel": "Session",
- "maxStalenessPrefix": 100,
- "maxIntervalInSeconds": 5
- },
- "addresses": "//addresses/",
- "id": "globaldbexample",
- "_rid": "globaldbexample.documents.azure.com",
- "_self": "",
- "_ts": 0,
- "_etag": null
-}
-```
-
-* All PUT, POST and DELETE requests must go to the indicated write URI
-* All GETs and other read-only requests (for example queries) may go to any endpoint of the client's choice
-
-Write requests to read-only regions will fail with HTTP error code 403 ("Forbidden").
-
-If the write region changes after the client's initial discovery phase, subsequent writes to the previous write region will fail with HTTP error code 403 ("Forbidden"). The client should then GET the list of regions again to get the updated write region.
-
-That completes this tutorial. To learn how to manage the consistency of your globally replicated account, see [Consistency levels in Azure Cosmos DB](../consistency-levels.md). For more information about how global database replication works in Azure Cosmos DB, see [Distribute data globally with Azure Cosmos DB](../distribute-data-globally.md).
-
-## Next steps
-
-In this tutorial, you've done the following:
-
-> [!div class="checklist"]
-> * Configured global distribution by using the Azure portal
-> * Configured global distribution by using the SQL APIs
-
-You can now proceed to the next tutorial to learn how to develop locally using the Azure Cosmos DB local emulator.
-
-> [!div class="nextstepaction"]
-> [Develop locally with the emulator](../local-emulator.md)
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-[regions]: https://azure.microsoft.com/regions/
cosmos-db Tutorial Query Sql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-query-sql-api.md
- Title: 'Tutorial: How to query with SQL in Azure Cosmos DB?'
-description: 'Tutorial: Learn how to query with SQL queries in Azure Cosmos DB using the query playground'
------- Previously updated : 08/26/2021--
-# Tutorial: Query Azure Cosmos DB by using the SQL API
-
-The Azure Cosmos DB [SQL API](../introduction.md) supports querying documents using SQL. This article provides a sample document and two sample SQL queries and results.
-
-This article covers the following tasks:
-
-> [!div class="checklist"]
-> * Querying data with SQL
-
-## Sample document
-
-The SQL queries in this article use the following sample document.
-
-```json
-{
- "id": "WakefieldFamily",
- "parents": [
- { "familyName": "Wakefield", "givenName": "Robin" },
- { "familyName": "Miller", "givenName": "Ben" }
- ],
- "children": [
- {
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female", "grade": 1,
- "pets": [
- { "givenName": "Goofy" },
- { "givenName": "Shadow" }
- ]
- },
- {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8 }
- ],
- "address": { "state": "NY", "county": "Manhattan", "city": "NY" },
- "creationDate": 1431620462,
- "isRegistered": false
-}
-```
-
-## Where can I run SQL queries?
-
-You can run queries using the Data Explorer in the Azure portal and via the [REST API and SDKs](sql-api-sdk-dotnet.md).
-
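If you use an SDK, the queries in this article take only a few lines of code. The following is a minimal sketch using the .NET SDK v3, assuming you already have a `Container` instance named `container`:

```csharp
// Run a parameterized SQL query and stream the results page by page.
QueryDefinition query = new QueryDefinition("SELECT * FROM Families f WHERE f.id = @id")
    .WithParameter("@id", "WakefieldFamily");

FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(query);
while (iterator.HasMoreResults)
{
    FeedResponse<dynamic> page = await iterator.ReadNextAsync();
    foreach (dynamic family in page)
    {
        Console.WriteLine(family);
    }
}
```
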
-For more information about SQL queries, see:
-* [SQL query and SQL syntax](sql-query-getting-started.md)
-
-## Prerequisites
-
-This tutorial assumes you have an Azure Cosmos DB account and collection. Don't have any of those resources? Complete the [5-minute quickstart](create-cosmosdb-resources-portal.md).
-
-## Example query 1
-
-Given the sample family document above, the following SQL query returns the document whose `id` field matches `WakefieldFamily`. Since it's a `SELECT *` statement, the output of the query is the complete JSON document:
-
-**Query**
-
-```sql
- SELECT *
- FROM Families f
- WHERE f.id = "WakefieldFamily"
-```
-
-**Results**
-
-```json
-{
- "id": "WakefieldFamily",
- "parents": [
- { "familyName": "Wakefield", "givenName": "Robin" },
- { "familyName": "Miller", "givenName": "Ben" }
- ],
- "children": [
- {
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female", "grade": 1,
- "pets": [
- { "givenName": "Goofy" },
- { "givenName": "Shadow" }
- ]
- },
- {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8 }
- ],
- "address": { "state": "NY", "county": "Manhattan", "city": "NY" },
- "creationDate": 1431620462,
- "isRegistered": false
-}
-```
-
-## Example query 2
-
-The next query returns all the given names of children in the family whose ID matches `WakefieldFamily`.
-
-**Query**
-
-```sql
- SELECT c.givenName
- FROM Families f
- JOIN c IN f.children
- WHERE f.id = 'WakefieldFamily'
-```
-
-**Results**
-
-```json
-[
- {
- "givenName": "Jesse"
- },
- {
- "givenName": "Lisa"
- }
-]
-```
--
-## Next steps
-
-In this tutorial, you've done the following tasks:
-
-> [!div class="checklist"]
-> * Learned how to query using SQL
-
-You can now proceed to the next tutorial to learn how to distribute your data globally.
-
-> [!div class="nextstepaction"]
-> [Distribute your data globally](tutorial-global-distribution-sql-api.md)
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Tutorial Springboot Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-springboot-azure-kubernetes-service.md
- Title: Tutorial - Spring Boot application with Azure Cosmos DB SQL API and Azure Kubernetes Service
-description: This tutorial demonstrates how to deploy a Spring Boot application to Azure Kubernetes Service and use it to perform operations on data in an Azure Cosmos DB SQL API account.
---- Previously updated : 10/01/2021-----
-# Tutorial - Spring Boot Application with Azure Cosmos DB SQL API and Azure Kubernetes Service
-
-In this tutorial, you will set up and deploy a Spring Boot application that exposes REST APIs to perform CRUD operations on data in Azure Cosmos DB (SQL API account). You will package the application as a Docker image, push it to Azure Container Registry, deploy it to Azure Kubernetes Service, and test the application.
-
-## Prerequisites
--- An Azure account with an active subscription. Create a [free account](https://azure.microsoft.com/free/) or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.-- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the path where the JDK is installed.-- [Azure CLI](/cli/azure/install-azure-cli) to provision Azure services.-- [Docker](https://docs.docker.com/engine/install/) to manage images and containers.-- A recent version of [Maven](https://maven.apache.org/download.cgi) and [Git](https://www.git-scm.com/downloads).-- [curl](https://curl.se/download.html) to invoke REST APIs exposed by the application.-
-## Provision Azure services
-
-In this section, you will create Azure services required for this tutorial.
--- Azure Cosmos DB-- Azure Container Registry-- Azure Kubernetes Service-
-### Create a resource group for the Azure resources used in this tutorial
-
-1. Sign in to your Azure account using Azure CLI:
-
- ```azurecli
- az login
- ```
-
-1. Choose your Azure Subscription:
-
- ```azurecli
- az account set -s <enter subscription ID>
- ```
-
-1. Create a resource group.
-
- ```azurecli
- az group create --name=cosmosdb-springboot-aks-rg --location=eastus
- ```
-
- > [!NOTE]
- > Replace `cosmosdb-springboot-aks-rg` with a unique name for your resource group.
-
-### Create an Azure Cosmos DB SQL API database account
-
-Use this command to create an [Azure Cosmos DB SQL API database account](manage-with-cli.md#create-an-azure-cosmos-db-account) using the Azure CLI.
-
-```azurecli
-az cosmosdb create --name <enter account name> --resource-group <enter resource group name>
-```
-
-### Create a private Azure Container Registry using the Azure CLI
-
-> [!NOTE]
-> Replace `cosmosdbspringbootregistry` with a unique name for your registry.
-
-```azurecli
-az acr create --resource-group cosmosdb-springboot-aks-rg --location eastus \
- --name cosmosdbspringbootregistry --sku Basic
-```
-
-### Create a Kubernetes cluster on Azure using the Azure CLI
-
-1. The following command creates a Kubernetes cluster in the *cosmosdb-springboot-aks-rg* resource group, with *cosmosdb-springboot-aks* as the cluster name, with Azure Container Registry (ACR) `cosmosdbspringbootregistry` attached:
-
- ```azurecli
- az aks create \
- --resource-group cosmosdb-springboot-aks-rg \
- --name cosmosdb-springboot-aks \
- --node-count 1 \
- --generate-ssh-keys \
- --attach-acr cosmosdbspringbootregistry \
- --dns-name-prefix=cosmosdb-springboot-aks-app
- ```
-
- This command may take a while to complete.
-
-1. If you don't have `kubectl` installed, you can do so using the Azure CLI.
-
- ```azurecli
- az aks install-cli
- ```
-
-1. Get access credentials for the Azure Kubernetes Service cluster.
-
- ```azurecli
- az aks get-credentials --resource-group=cosmosdb-springboot-aks-rg --name=cosmosdb-springboot-aks
-
- kubectl get nodes
- ```
-
-## Build the application
-
-1. Clone the application and change into the right directory.
-
- ```bash
- git clone https://github.com/Azure-Samples/cosmosdb-springboot-aks.git
-
- cd cosmosdb-springboot-aks
- ```
-
-1. Use `Maven` to build the application. At the end of this step, you should have the application JAR file created in the `target` folder.
-
- ```bash
- ./mvnw install
- ```
-
-## Run the application locally
-
-If you intend to run the application on Azure Kubernetes Service, skip this section and move on to [Push Docker image to Azure Container Registry](#push-docker-image-to-azure-container-registry).
-
-1. Before you run the application, update the `application.properties` file with the details of your Azure Cosmos DB account.
-
- ```properties
-    azure.cosmos.uri=https://<enter cosmos db account name>.documents.azure.com:443/
- azure.cosmos.key=<enter cosmos db primary key>
- azure.cosmos.database=<enter cosmos db database name>
- azure.cosmos.populateQueryMetrics=false
- ```
-
- > [!NOTE]
- > The database and container (called `users`) will get created automatically once you start the application.
-
-1. Run the application locally.
-
- ```bash
- java -jar target/*.jar
- ```
-
-2. Proceed to [Access the application](#access-the-application) or refer to the next section to deploy the application to Kubernetes.
-
-## Push Docker image to Azure Container Registry
-
-1. Build the Docker image
-
- ```bash
- docker build -t cosmosdbspringbootregistry.azurecr.io/spring-cosmos-app:v1 .
- ```
-
- > [!NOTE]
- > Replace `cosmosdbspringbootregistry` with the name of your Azure Container Registry
-
-1. Log into Azure Container Registry.
-
- ```azurecli
- az acr login -n cosmosdbspringbootregistry
- ```
-
-1. Push image to Azure Container Registry and list it.
-
- ```azurecli
- docker push cosmosdbspringbootregistry.azurecr.io/spring-cosmos-app:v1
-
- az acr repository list --name cosmosdbspringbootregistry --output table
- ```
-
-## Deploy application to Azure Kubernetes Service
-
-1. Edit the `Secret` in `app.yaml` with the details of your Azure Cosmos DB setup.
-
- ```yml
- ...
- apiVersion: v1
- kind: Secret
- metadata:
- name: app-config
- type: Opaque
- stringData:
- application.properties: |
-        azure.cosmos.uri=https://<enter cosmos db account name>.documents.azure.com:443/
- azure.cosmos.key=<enter cosmos db primary key>
- azure.cosmos.database=<enter cosmos db database name>
- azure.cosmos.populateQueryMetrics=false
- ...
- ```
-
- > [!NOTE]
- > The database and a container (`users`) will get created automatically once you start the application.
--
-2. Deploy to Kubernetes and wait for the `Pod` to transition to `Running` state:
-
- ```bash
- kubectl apply -f deploy/app.yaml
-
- kubectl get pods -l=app=spring-cosmos-app -w
- ```
-
- > [!NOTE]
- > You can check application logs using: `kubectl logs -f $(kubectl get pods -l=app=spring-cosmos-app -o=jsonpath='{.items[0].metadata.name}') -c spring-cosmos-app`
--
-## Access the application
-
-If the application is running in Kubernetes and you want to access it locally over port `8080`, run the following command:
-
-```bash
-kubectl port-forward svc/spring-cosmos-app-internal 8080:8080
-```
-
-Invoke the REST endpoints to test the application. You can also navigate to the `Data Explorer` menu of the Azure Cosmos DB account in the Azure portal and access the `users` container to confirm the result of the operations.
-
-1. Create new users
-
- ```bash
- curl -i -X POST -H "Content-Type: application/json" -d '{"email":"john.doe@foobar.com", "firstName": "John", "lastName": "Doe", "city": "NYC"}' http://localhost:8080/users
-
- curl -i -X POST -H "Content-Type: application/json" -d '{"email":"mr.jim@foobar.com", "firstName": "mr", "lastName": "jim", "city": "Seattle"}' http://localhost:8080/users
- ```
-
- If successful, you should get an HTTP `201` response.
-
-1. Update a user
-
- ```bash
- curl -i -X POST -H "Content-Type: application/json" -d '{"email":"john.doe@foobar.com", "firstName": "John", "lastName": "Doe", "city": "Dallas"}' http://localhost:8080/users
- ```
-
-1. List all users
-
- ```bash
- curl -i http://localhost:8080/users
- ```
-
-1. Get an existing user
-
- ```bash
- curl -i http://localhost:8080/users/john.doe@foobar.com
- ```
-
- You should get back a JSON payload with the user details. For example:
-
- ```json
- {
- "email": "john.doe@foobar.com",
- "firstName": "John",
- "lastName": "Doe",
- "city": "Dallas"
- }
- ```
-
-1. Try to get a user that does not exist
-
- ```bash
- curl -i http://localhost:8080/users/not.there@foobar.com
- ```
-
- You should receive an HTTP `404` response.
-
-1. Replace a user
-
- ```bash
- curl -i -X PUT -H "Content-Type: application/json" -d '{"email":"john.doe@foobar.com", "firstName": "john", "lastName": "doe","city": "New Jersey"}' http://localhost:8080/users/
- ```
-
-1. Try to replace a user that does not exist
-
- ```bash
- curl -i -X PUT -H "Content-Type: application/json" -d '{"email":"not.there@foobar.com", "firstName": "john", "lastName": "doe","city": "New Jersey"}' http://localhost:8080/users/
- ```
-
- You should receive an HTTP `404` response.
-
-1. Delete a user
-
- ```bash
- curl -i -X DELETE http://localhost:8080/users/mr.jim@foobar.com
- ```
-
-1. Delete a user that does not exist
-
- ```bash
- curl -X DELETE http://localhost:8080/users/go.nuts@foobar.com
- ```
-
- You should receive an HTTP `404` response.
-
-### Access the application using a public IP address (optional)
-
-Creating a Service of type `LoadBalancer` in Azure Kubernetes Service will result in an Azure Load Balancer getting provisioned. You can then access the application using its public IP address.
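
For reference, a `LoadBalancer` Service manifest for this app could look like the following sketch. The field values are assumptions based on the service name and port used later in this section; the repository's `deploy/load-balancer-service.yaml` file is the authoritative version.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: spring-cosmos-app
spec:
  type: LoadBalancer          # provisions an Azure Load Balancer with a public IP
  selector:
    app: spring-cosmos-app    # routes traffic to the Spring Boot pods
  ports:
    - port: 8080
      targetPort: 8080
```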
-
-1. Create a Kubernetes Service of type `LoadBalancer`
-
- > [!NOTE]
- > This will create an Azure Load Balancer with a public IP address.
-
- ```bash
- kubectl apply -f deploy/load-balancer-service.yaml
- ```
-
-1. Wait for the Azure Load Balancer to get created. Until then, the `EXTERNAL-IP` for the Kubernetes Service will remain in `<pending>` state.
-
- ```bash
- kubectl get service spring-cosmos-app -w
-
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- spring-cosmos-app LoadBalancer 10.0.68.83 <pending> 8080:31523/TCP 6s
- ```
-
- > [!NOTE]
- > `CLUSTER-IP` value may differ in your case
-
-1. Once Azure Load Balancer creation completes, the `EXTERNAL-IP` will be available.
-
- ```bash
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- spring-cosmos-app LoadBalancer 10.0.68.83 20.81.108.180 8080:31523/TCP 18s
- ```
-
- > [!NOTE]
- > `EXTERNAL-IP` value may differ in your case
-
-1. Use the public IP address
-
- Terminate the `kubectl watch` process and repeat the above `curl` commands with the public IP address along with port `8080`. For example, to list all users:
-
- ```bash
- curl -i http://20.81.108.180:8080/users
- ```
-
- > [!NOTE]
-    > Replace `20.81.108.180` with the public IP address for your environment
-
-## Kubernetes resources for the application
-
-Here are some of the key points related to the Kubernetes resources for this application:
--- The Spring Boot application is a Kubernetes `Deployment` based on the [Docker image in Azure Container Registry](https://github.com/Azure-Samples/cosmosdb-springboot-aks/blob/main/deploy/app.yaml#L21)-- Azure Cosmos DB configuration is mounted in `application.properties` at path `/config` [inside the container](https://github.com/Azure-Samples/cosmosdb-springboot-aks/blob/main/deploy/app.yaml#L26).-- This is made possible using a [Kubernetes `Volume`](https://github.com/Azure-Samples/cosmosdb-springboot-aks/blob/main/deploy/app.yaml#L15) that in turn refers to a [Kubernetes Secret](https://github.com/Azure-Samples/cosmosdb-springboot-aks/blob/main/deploy/app.yaml#L49), which was created along with the application. You can run the command below to confirm that this file is present within the application container:-
- ```bash
- kubectl exec -it $(kubectl get pods -l=app=spring-cosmos-app -o=jsonpath='{.items[0].metadata.name}') -c spring-cosmos-app -- cat /config/application.properties
- ```
--- [Liveness and Readiness probes](https://github.com/Azure-Samples/cosmosdb-springboot-aks/blob/main/deploy/app.yaml#L34) configuration for this application refer to the HTTP endpoints that are made available by [Spring Boot Actuator](https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html) when a Spring Boot application is [deployed to a Kubernetes environment](https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.kubernetes-probes) - `/actuator/health/liveness` and `/actuator/health/readiness`. -- A [ClusterIP Service](https://github.com/Azure-Samples/cosmosdb-springboot-aks/blob/main/deploy/app.yaml#L61) can be created to access the REST endpoints of the Spring Boot application *internally* within the Kubernetes cluster.-- A [LoadBalancer Service](https://github.com/Azure-Samples/cosmosdb-springboot-aks/blob/main/deploy/load-balancer-service.yaml) can be created to access the application via a public IP address.-
-## Clean up resources
--
-## Next steps
-
-In this tutorial, you've learned how to deploy a Spring Boot application to Azure Kubernetes Service and use it to perform operations on data in an Azure Cosmos DB SQL API account.
-
-> [!div class="nextstepaction"]
-> [Spring Data Azure Cosmos DB v3 for SQL API](sql-api-sdk-java-spring-v3.md)
cosmos-db Tutorial Sql Api Dotnet Bulk Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-sql-api-dotnet-bulk-import.md
- Title: Bulk import data to Azure Cosmos DB SQL API account by using the .NET SDK
-description: Learn how to import or ingest data to Azure Cosmos DB by building a .NET console application that optimizes provisioned throughput (RU/s) required for importing data
----- Previously updated : 03/25/2022---
-# Bulk import data to Azure Cosmos DB SQL API account by using the .NET SDK
-
-This tutorial shows how to build a .NET console application that optimizes provisioned throughput (RU/s) required to import data to Azure Cosmos DB.
-
->
-> [!VIDEO https://aka.ms/docs.learn-live-dotnet-bulk]
-
-In this article, you'll read data from a sample data source and import it into an Azure Cosmos container.
-This tutorial uses [Version 3.0+](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) of the Azure Cosmos DB .NET SDK, which can be targeted to .NET Framework or .NET Core.
-
-This tutorial covers:
-
-> [!div class="checklist"]
-> * Creating an Azure Cosmos account
-> * Configuring your project
-> * Connecting to an Azure Cosmos account with bulk support enabled
-> * Performing a data import through concurrent create operations
-
-## Prerequisites
-
-Before following the instructions in this article, make sure that you have the following resources:
-
-* An active Azure account. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
-
-* [.NET Core 3 SDK](https://dotnet.microsoft.com/download/dotnet-core). You can verify which version is available in your environment by running `dotnet --version`.
-
-## Step 1: Create an Azure Cosmos DB account
-
-[Create an Azure Cosmos DB SQL API account](create-cosmosdb-resources-portal.md) from the Azure portal or you can create the account by using the [Azure Cosmos DB Emulator](../local-emulator.md).
-
-## Step 2: Set up your .NET project
-
-Open the Windows command prompt or a Terminal window from your local computer. You'll run all the commands in the next sections from the command prompt or terminal. Run the following dotnet new command to create a new app with the name *bulk-import-demo*.
-
- ```bash
- dotnet new console -n bulk-import-demo
- ```
-
-Change your directory to the newly created app folder. You can build the application with:
-
- ```bash
- cd bulk-import-demo
- dotnet build
- ```
-
-The expected output from the build should look something like this:
-
- ```bash
- Restore completed in 100.37 ms for C:\Users\user1\Downloads\CosmosDB_Samples\bulk-import-demo\bulk-import-demo.csproj.
- bulk -> C:\Users\user1\Downloads\CosmosDB_Samples\bulk-import-demo \bin\Debug\netcoreapp2.2\bulk-import-demo.dll
-
- Build succeeded.
- 0 Warning(s)
- 0 Error(s)
-
- Time Elapsed 00:00:34.17
- ```
-
-## Step 3: Add the Azure Cosmos DB package
-
-While still in the application directory, install the Azure Cosmos DB client library for .NET Core by using the dotnet add package command.
-
- ```bash
- dotnet add package Microsoft.Azure.Cosmos
- ```
-
-## Step 4: Get your Azure Cosmos account credentials
-
-The sample application needs to authenticate to your Azure Cosmos account. To authenticate, you should pass the Azure Cosmos account credentials to the application. Get your Azure Cosmos account credentials by following these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Navigate to your Azure Cosmos account.
-1. Open the **Keys** pane and copy the **URI** and **PRIMARY KEY** of your account.
-
-If you're using the Azure Cosmos DB Emulator, obtain the [emulator credentials from this article](../local-emulator.md#authenticate-requests).
-
-## Step 5: Initialize the CosmosClient object with bulk execution support
-
-Open the generated `Program.cs` file in a code editor. You'll create a new instance of CosmosClient with bulk execution enabled and use it to do operations against Azure Cosmos DB.
-
-Let's start by overwriting the default `Main` method and defining the global variables. These global variables include the endpoint and authorization key, the names of the database and container that you'll create, and the number of items that you'll insert in bulk. Make sure to replace the endpoint URL and authorization key values according to your environment.
--
- ```csharp
- using System;
- using System.Collections.Generic;
- using System.Diagnostics;
- using System.IO;
- using System.Text.Json;
- using System.Threading.Tasks;
- using Microsoft.Azure.Cosmos;
-
- public class Program
- {
- private const string EndpointUrl = "https://<your-account>.documents.azure.com:443/";
- private const string AuthorizationKey = "<your-account-key>";
- private const string DatabaseName = "bulk-tutorial";
- private const string ContainerName = "items";
- private const int AmountToInsert = 300000;
-
- static async Task Main(string[] args)
- {
-
- }
- }
- ```
-
-Inside the `Main` method, add the following code to initialize the CosmosClient object:
-
-[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=CreateClient)]
-
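If the embedded snippet doesn't render for you, the initialization boils down to something like the following sketch; enabling bulk support is a single option on `CosmosClientOptions`:

```csharp
// Create the client with bulk execution enabled so the SDK can group
// concurrent point operations into fewer, larger service requests.
CosmosClient cosmosClient = new CosmosClient(
    EndpointUrl,
    AuthorizationKey,
    new CosmosClientOptions { AllowBulkExecution = true });
```
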
-> [!Note]
-> Once bulk execution is specified in the [CosmosClientOptions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions), the options are effectively immutable for the lifetime of the CosmosClient. Changing the values has no effect.
-
-After the bulk execution is enabled, the CosmosClient internally groups concurrent operations into single service calls. This way it optimizes the throughput utilization by distributing service calls across partitions, and finally assigning individual results to the original callers.
-
-You can then create a container to store the items. Define `/pk` as the partition key, 50,000 RU/s as the provisioned throughput, and a custom indexing policy that excludes all fields to optimize write throughput. Add the following code after the CosmosClient initialization statement:
-
-[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=Initialize)]
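
The referenced snippet is roughly equivalent to the following sketch, which creates the database and a container with `/pk` as the partition key, 50,000 RU/s, and an indexing policy that excludes all paths:

```csharp
Database database = await cosmosClient.CreateDatabaseIfNotExistsAsync(DatabaseName);

ContainerProperties containerProperties = new ContainerProperties(ContainerName, partitionKeyPath: "/pk")
{
    IndexingPolicy = new IndexingPolicy
    {
        IndexingMode = IndexingMode.Consistent,
        // Exclude every path to minimize the RU cost of each write.
        ExcludedPaths = { new ExcludedPath { Path = "/*" } }
    }
};

Container container = await database.CreateContainerIfNotExistsAsync(containerProperties, throughput: 50000);
```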
-
-## Step 6: Populate a list of concurrent tasks
-
-To take advantage of the bulk execution support, create a list of asynchronous tasks based on the source of data and the operations you want to perform, and use `Task.WhenAll` to execute them concurrently.
-Let's start by using "Bogus" data to generate a list of items from our data model. In a real-world application, the items would come from your desired data source.
-
-First, add the Bogus package to the solution by using the dotnet add package command.
-
- ```bash
- dotnet add package Bogus
- ```
-
-Next, define the items that you want to save by adding the `Item` class to the `Program.cs` file:
-
-[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=Model)]
-
-Next, create a helper function inside the `Program` class. This helper function takes the number of items that you defined to insert and generates random data for them:
-
-[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=Bogus)]
-
-Use the helper function to initialize a list of documents to work with:
-
-[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=Operations)]
-
-Next, use the list of documents to create concurrent tasks and populate the task list to insert the items into the container. To perform this operation, add the following code to the `Program` class:
-
-[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=ConcurrentTasks)]
-
-All these concurrent point operations will be executed together (that is, in bulk), as described in the introduction section.
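
The pattern in the referenced snippet looks roughly like the following sketch (error handling is simplified, variable names are illustrative, and it assumes the `Item` model exposes its partition key as `pk`):

```csharp
// One task per item; with AllowBulkExecution enabled, the SDK batches these
// point operations into bulk requests behind the scenes.
List<Task> tasks = new List<Task>(itemsToInsert.Count);
foreach (Item item in itemsToInsert)
{
    tasks.Add(container.CreateItemAsync(item, new PartitionKey(item.pk))
        .ContinueWith(itemResponse =>
        {
            if (!itemResponse.IsCompletedSuccessfully)
            {
                Console.WriteLine($"Failed to create an item: {itemResponse.Exception?.Flatten().InnerException?.Message}");
            }
        }));
}

// Wait for all concurrent operations to finish.
await Task.WhenAll(tasks);
```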
-
-## Step 7: Run the sample
-
-To run the sample, use the `dotnet` command:
-
- ```bash
- dotnet run
- ```
-
-## Get the complete sample
-
-If you didn't have time to complete the steps in this tutorial, or just want to download the code samples, you can get them from [GitHub](https://github.com/Azure-Samples/cosmos-dotnet-bulk-import-throughput-optimizer).
-
-After cloning the project, make sure to update the desired credentials inside [Program.cs](https://github.com/Azure-Samples/cosmos-dotnet-bulk-import-throughput-optimizer/blob/main/src/Program.cs#L25).
-
-The sample can be run by changing to the repository directory and using `dotnet`:
-
- ```bash
- cd cosmos-dotnet-bulk-import-throughput-optimizer
- dotnet run
- ```
-
-## Next steps
-
-In this tutorial, you've done the following steps:
-
-> [!div class="checklist"]
-> * Creating an Azure Cosmos account
-> * Configuring your project
-> * Connecting to an Azure Cosmos account with bulk support enabled
-> * Performing a data import through concurrent create operations
-
-You can now proceed to the next tutorial:
-
-> [!div class="nextstepaction"]
->[Query Azure Cosmos DB by using the SQL API](tutorial-query-sql-api.md)
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Working With Dates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/working-with-dates.md
- Title: Working with dates in Azure Cosmos DB
-description: Learn how to store, index, and query DateTime objects in Azure Cosmos DB
------ Previously updated : 04/03/2020--
-# Working with Dates in Azure Cosmos DB
-
-Azure Cosmos DB delivers schema flexibility and rich indexing via a native [JSON](https://www.json.org) data model. All Azure Cosmos DB resources including databases, containers, documents, and stored procedures are modeled and stored as JSON documents. As a requirement for being portable, JSON (and Azure Cosmos DB) supports only a small set of basic types: String, Number, Boolean, Array, Object, and Null. However, JSON is flexible and allows developers and frameworks to represent more complex types by using these primitives and composing them as objects or arrays.
-
-In addition to the basic types, many applications need the DateTime type to represent dates and timestamps. This article describes how developers can store, retrieve, and query dates in Azure Cosmos DB using the .NET SDK.
-
-## Storing DateTimes
-
-Azure Cosmos DB supports JSON types such as string, number, boolean, null, array, and object. It does not directly support a DateTime type. Currently, Azure Cosmos DB doesn't support localization of dates, so you need to store DateTimes as strings. The recommended format for DateTime strings in Azure Cosmos DB is `yyyy-MM-ddTHH:mm:ss.fffffffZ`, which follows the ISO 8601 UTC standard. It is recommended to store all dates in Azure Cosmos DB as UTC. Converting date strings to this format allows dates to be sorted lexicographically. If non-UTC dates are stored, the logic must be handled on the client side: to convert a local DateTime to UTC, the offset must be known or stored as a property in the JSON, and the client can use the offset to compute the UTC DateTime value.
-
-Range queries with DateTime strings as filters are only supported if the DateTime strings are all in UTC and the same length. In Azure Cosmos DB, the [GetCurrentDateTime](sql-query-getcurrentdatetime.md) system function will return the current UTC date and time ISO 8601 string value in the format: `yyyy-MM-ddTHH:mm:ss.fffffffZ`.
-
-Most applications can use the default string representation for DateTime for the following reasons:
-
-* Strings can be compared, and the relative ordering of the DateTime values is preserved when they are transformed to strings.
-* This approach doesn't require any custom code or attributes for JSON conversion.
-* The dates as stored in JSON are human readable.
-* This approach can take advantage of Azure Cosmos DB's index for fast query performance.
-
-For example, the following snippet stores an `Order` object containing two DateTime properties - `ShipDate` and `OrderDate` as a document using the .NET SDK:
-
-```csharp
- public class Order
- {
- [JsonProperty(PropertyName="id")]
- public string Id { get; set; }
- public DateTime OrderDate { get; set; }
- public DateTime ShipDate { get; set; }
- public double Total { get; set; }
- }
-
- await container.CreateItemAsync(
- new Order
- {
- Id = "09152014101",
- OrderDate = DateTime.UtcNow.AddDays(-30),
- ShipDate = DateTime.UtcNow.AddDays(-14),
- Total = 113.39
- });
-```
-
-This document is stored in Azure Cosmos DB as follows:
-
-```json
- {
- "id": "09152014101",
- "OrderDate": "2014-09-15T23:14:25.7251173Z",
- "ShipDate": "2014-09-30T23:14:25.7251173Z",
- "Total": 113.39
- }
-```
-
-Alternatively, you can store DateTimes as Unix timestamps, that is, as a number representing the number of elapsed seconds since January 1, 1970. Azure Cosmos DB's internal Timestamp (`_ts`) property follows this approach. You can use the [UnixDateTimeConverter](/dotnet/api/microsoft.azure.documents.unixdatetimeconverter) class to serialize DateTimes as numbers.
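
For example, the following sketch opts a property into Unix-timestamp serialization with the Json.NET attribute model (the `Payment` class is illustrative):

```csharp
using Microsoft.Azure.Documents;
using Newtonsoft.Json;

public class Payment
{
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }

    // Serialized as the number of seconds elapsed since January 1, 1970,
    // mirroring the behavior of the internal _ts property.
    [JsonConverter(typeof(UnixDateTimeConverter))]
    public DateTime PaidDate { get; set; }
}
```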
-
-## Querying DateTimes in LINQ
-
-The SQL .NET SDK automatically supports querying data stored in Azure Cosmos DB via LINQ. For example, the following snippet shows a LINQ query that filters orders that were shipped in the last three days:
-
-```csharp
- IQueryable<Order> orders = container.GetItemLinqQueryable<Order>(allowSynchronousQueryExecution: true).Where(o => o.ShipDate >= DateTime.UtcNow.AddDays(-3));
-```
-
-This query is translated to the following SQL statement and executed on Azure Cosmos DB:
-
-```sql
- SELECT * FROM root WHERE (root["ShipDate"] >= "2014-09-30T23:14:25.7251173Z")
-```
-
-You can learn more about Azure Cosmos DB's SQL query language and the LINQ provider at [Querying Cosmos DB in LINQ](sql-query-linq-to-sql.md).
-
-## Indexing DateTimes for range queries
-
-Queries that filter on DateTime values are common. To execute these queries efficiently, you must have an index defined on the properties in the query's filter.
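
For example, an indexing policy along the following lines keeps range indexes on the DateTime paths used in this article's `Order` example while excluding everything else. This is only a sketch; adjust the paths to match your own filters:

```json
{
  "indexingMode": "consistent",
  "includedPaths": [
    { "path": "/OrderDate/?" },
    { "path": "/ShipDate/?" }
  ],
  "excludedPaths": [
    { "path": "/*" }
  ]
}
```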
-
-You can learn more about how to configure indexing policies at [Azure Cosmos DB Indexing Policies](../index-policy.md).
-
-## Next Steps
-
-* Download and run the [Code samples on GitHub](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples)
-* Learn more about [SQL queries](sql-query-getting-started.md)
-* Learn more about [Azure Cosmos DB Indexing Policies](../index-policy.md)
cosmos-db Synapse Link Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link-power-bi.md
Last updated 09/29/2022 -+ # Use Power BI and serverless Synapse SQL pool to analyze Azure Cosmos DB data with Synapse Link In this article, you learn how to build a serverless SQL pool database and views over Synapse Link for Azure Cosmos DB. You will query the Azure Cosmos DB containers and then build a model with Power BI over those views to reflect that query. With Azure Synapse Link, you can build near real-time dashboards in Power BI to analyze your Azure Cosmos DB data. There is no performance or cost impact to your transactional workloads, and no complexity of managing ETL pipelines. You can use either [DirectQuery](/power-bi/connect-data/service-dataset-modes-understand#directquery-mode) or [import](/power-bi/connect-data/service-dataset-modes-understand#import-mode) modes. > [!Note]
-> You can build Power BI dashboards with just a few clicks using Azure Cosmos DB portal. For more information, see [Integrated Power BI experience in Azure Cosmos DB portal for Synapse Link enabled accounts](integrated-power-bi-synapse-link.md). This will automatically create T-SQL views in Synapse serverless SQL pools on your Cosmos DB containers. You can simply download the .pbids file that connects to these T-SQL views to start building your BI dashboards.
+> You can build Power BI dashboards with just a few clicks using Azure Cosmos DB portal. For more information, see [Integrated Power BI experience in Azure Cosmos DB portal for Synapse Link enabled accounts](integrated-power-bi-synapse-link.md). This will automatically create T-SQL views in Synapse serverless SQL pools on your Azure Cosmos DB containers. You can simply download the .pbids file that connects to these T-SQL views to start building your BI dashboards.
In this scenario, you will use dummy data about Surface product sales in a partner retail store. You will analyze the revenue per store based on the proximity to large households and the impact of advertising for a specific week. In this article, you create two views named **RetailSales** and **StoreDemographics** and a query between them. You can get the sample product data from this [GitHub](https://github.com/Azure-Samples/Synapse/tree/main/Notebooks/PySpark/Synapse%20Link%20for%20Cosmos%20DB%20samples/Retail/RetailData) repo.
In this scenario, you will use dummy data about Surface product sales in a partn
Make sure to create the following resources before you start:
-* [Create an Azure Cosmos DB account of kind SQL(core) or MongoDB.](create-cosmosdb-resources-portal.md)
+* [Create an Azure Cosmos DB account for API for NoSQL or MongoDB.](nosql/quickstart-portal.md)
-* Enable Azure Synapse Link for your [Azure Cosmos account](configure-synapse-link.md#enable-synapse-link)
+* Enable Azure Synapse Link for your [Azure Cosmos DB account](configure-synapse-link.md#enable-synapse-link)
-* Create a database within the Azure Cosmos account and two containers that have [analytical store enabled.](configure-synapse-link.md)
+* Create a database within the Azure Cosmos DB account and two containers that have [analytical store enabled.](configure-synapse-link.md)
-* Load products data into the Azure Cosmos containers as described in this [batch data ingestion](https://github.com/Azure-Samples/Synapse/blob/main/Notebooks/PySpark/Synapse%20Link%20for%20Cosmos%20DB%20samples/Retail/spark-notebooks/pyspark/1CosmoDBSynapseSparkBatchIngestion.ipynb) notebook.
+* Load products data into the Azure Cosmos DB containers as described in this [batch data ingestion](https://github.com/Azure-Samples/Synapse/blob/main/Notebooks/PySpark/Synapse%20Link%20for%20Cosmos%20DB%20samples/Retail/spark-notebooks/pyspark/1CosmoDBSynapseSparkBatchIngestion.ipynb) notebook.
* [Create a Synapse workspace](../synapse-analytics/quickstart-create-workspace.md) named **SynapseLinkBI**.
-* [Connect the Azure Cosmos database to the Synapse workspace](../synapse-analytics/synapse-link/how-to-connect-synapse-link-cosmos-db.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json).
+* [Connect the Azure Cosmos DB database to the Synapse workspace](../synapse-analytics/synapse-link/how-to-connect-synapse-link-cosmos-db.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json).
## Create a database and views
Creating views in the **master** or **default** databases is not recommended or
Create database RetailCosmosDB ```
-Next, create multiple views across different Synapse Link enabled Azure Cosmos containers. Views will allow you to use T-SQL to join and query Azure Cosmos DB data sitting in different containers. Make sure to select the **RetailCosmosDB** database when creating the views.
+Next, create multiple views across different Synapse Link enabled Azure Cosmos DB containers. Views will allow you to use T-SQL to join and query Azure Cosmos DB data sitting in different containers. Make sure to select the **RetailCosmosDB** database when creating the views.
The following scripts show how to create views on each container. For simplicity, letΓÇÖs use the [automatic schema inference](analytical-store-introduction.md#analytical-schema) feature of serverless SQL pool over Synapse Link enabled containers:
CREATE VIEW  RetailSales
ASΓÇ» SELECT * FROM OPENROWSET (
- 'CosmosDB', N'account=<Your Azure Cosmos account name>;database=<Your Azure Cosmos database name>;region=<Your Azure Cosmos DB Region>;key=<Your Azure Cosmos DB key here>',RetailSales)
+ 'CosmosDB', N'account=<Your Azure Cosmos DB account name>;database=<Your Azure Cosmos DB database name>;region=<Your Azure Cosmos DB Region>;key=<Your Azure Cosmos DB key here>',RetailSales)
AS q1 ```
CREATE VIEW StoreDemographics
ASΓÇ» SELECT * FROM OPENROWSET (
- 'CosmosDB', N'account=<Your Azure Cosmos account name>;database=<Your Azure Cosmos database name>;region=<Your Azure Cosmos DB Region>;key=<Your Azure Cosmos DB key here>', StoreDemographics)
+ 'CosmosDB', N'account=<Your Azure Cosmos DB account name>;database=<Your Azure Cosmos DB database name>;region=<Your Azure Cosmos DB Region>;key=<Your Azure Cosmos DB key here>', StoreDemographics)
AS q1 ```
cosmos-db Synapse Link Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link-use-cases.md
Last updated 09/29/2022-+ # Azure Synapse Link for Azure Cosmos DB: Near real-time analytics use cases [Azure Synapse Link](synapse-link.md) for Azure Cosmos DB is a cloud native hybrid transactional and analytical processing (HTAP) capability that enables you to run near real-time analytics over operational data. Synapse Link creates a tight seamless integration between Azure Cosmos DB and Azure Synapse Analytics.
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md
Last updated 09/29/2022 -+ # What is Azure Synapse Link for Azure Cosmos DB? Azure Synapse Link for Azure Cosmos DB is a cloud-native hybrid transactional and analytical processing (HTAP) capability that enables near real time analytics over operational data in Azure Cosmos DB. Azure Synapse Link creates a tight seamless integration between Azure Cosmos DB and Azure Synapse Analytics.
You can now get rich insights on your operational data in near real-time, using
### No impact on operational workloads
-With Azure Synapse Link, you can run analytical queries against an Azure Cosmos DB analytical store, a column store representation of your data, while the transactional operations are processed using provisioned throughput for the transactional workload, over the Cosmos DB row-based transactional store. The analytical workload is independent of the transactional workload traffic, not consuming any of the provisioned throughput of your operational data.
+With Azure Synapse Link, you can run analytical queries against an Azure Cosmos DB analytical store, a column store representation of your data, while the transactional operations are processed using provisioned throughput for the transactional workload, over the Azure Cosmos DB row-based transactional store. The analytical workload is independent of the transactional workload traffic, not consuming any of the provisioned throughput of your operational data.
### Optimized for large-scale analytics workloads
This integration enables the following HTAP scenarios for different users:
* A Data Engineer, who wants to make data accessible for consumers, by creating SQL or Spark tables over Azure Cosmos DB containers, without manual ETL processes.
-For more information on Azure Synapse Analytics runtime support for Azure Cosmos DB, see [Azure Synapse Analytics for Cosmos DB support](../synapse-analytics/synapse-link/concept-synapse-link-cosmos-db-support.md).
+For more information on Azure Synapse Analytics runtime support for Azure Cosmos DB, see [Azure Synapse Analytics for Azure Cosmos DB support](../synapse-analytics/synapse-link/concept-synapse-link-cosmos-db-support.md).
## When to use Azure Synapse Link for Azure Cosmos DB?
Synapse Link isn't recommended if you're looking for traditional data warehouse
## Limitations
-* Azure Synapse Link for Azure Cosmos DB is not supported for Gremlin API, Cassandra API, and Table API. It is supported for SQL API and API for Mongo DB.
+* Azure Synapse Link for Azure Cosmos DB is not supported for API for Gremlin, Cassandra, and Table. It is supported for API for NoSQL and MongoDB.
* Accessing the Azure Cosmos DB analytics store with Azure Synapse Dedicated SQL Pool currently isn't supported.
-* Enabling Synapse Link on existing Cosmos DB containers is only supported for SQL API accounts. Synapse Link can be enabled on new containers for both SQL API and MongoDB API accounts.
+* Enabling Synapse Link on existing Azure Cosmos DB containers is only supported for API for NoSQL accounts. Synapse Link can be enabled on new containers for both API for NoSQL and MongoDB accounts.
* Although analytical store data is not backed up, and therefore cannot be restored, you can rebuild your analytical store by reenabling Synapse Link in the restored container. Check the [analytical store documentation](analytical-store-introduction.md) for more information. * Currently Synapse Link isn't fully compatible with continuous backup mode. Check the [analytical store documentation](analytical-store-introduction.md) for more information.
-* Granular Role-based Access (RBAC)s isn't supported when querying from Synapse. Users that have access to your Synapse workspace and have access to the Cosmos DB account can access all containers within that account. We currently don't support more granular access to the containers.
+* Granular Role-based Access (RBAC)s isn't supported when querying from Synapse. Users that have access to your Synapse workspace and have access to the Azure Cosmos DB account can access all containers within that account. We currently don't support more granular access to the containers.
* Currently Azure Synapse Workspaces don't support linked services using `Managed Identity`. Always use the `MasterKey` option.
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB Table API
-description: Azure CLI Samples for Azure Cosmos DB Table API
+ Title: Azure CLI Samples for Azure Cosmos DB for Table
+description: Azure CLI Samples for Azure Cosmos DB for Table
-+ Last updated 08/19/2022 -+
-# Azure CLI samples for Azure Cosmos DB Table API
+# Azure CLI samples for Azure Cosmos DB for Table
-The following tables include links to sample Azure CLI scripts for the Azure Cosmos DB Table API and to sample Azure CLI scripts that apply to all Cosmos DB APIs. Common samples are the same across all APIs.
+The following tables include links to sample Azure CLI scripts for Azure Cosmos DB for Table and to sample Azure CLI scripts that apply to all Azure Cosmos DB APIs. Common samples are the same across all APIs.
These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
-## Table API Samples
+## API for Table Samples
|Task | Description | |||
-| [Create an Azure Cosmos account and table](../scripts/cli/table/create.md)| Creates an Azure Cosmos DB account and table for Table API. |
-| [Create a serverless Azure Cosmos account and table](../scripts/cli/table/serverless.md)| Creates a serverless Azure Cosmos DB account and table for Table API. |
-| [Create an Azure Cosmos account and table with autoscale](../scripts/cli/table/autoscale.md)| Creates an Azure Cosmos DB account and table with autoscale for Table API. |
+| [Create an Azure Cosmos DB account and table](../scripts/cli/table/create.md)| Creates an Azure Cosmos DB account and table for API for Table. |
+| [Create a serverless Azure Cosmos DB account and table](../scripts/cli/table/serverless.md)| Creates a serverless Azure Cosmos DB account and table for API for Table. |
+| [Create an Azure Cosmos DB account and table with autoscale](../scripts/cli/table/autoscale.md)| Creates an Azure Cosmos DB account and table with autoscale for API for Table. |
| [Perform throughput operations](../scripts/cli/table/throughput.md) | Read, update and migrate between autoscale and standard throughput on a table.| | [Lock resources from deletion](../scripts/cli/table/lock.md)| Prevent resources from being deleted with resource locks.| ||| ## Common API Samples
-These samples apply to all Azure Cosmos DB APIs. These samples use a SQL (Core) API account, but these operations are identical across all database APIs in Cosmos DB.
+These samples apply to all Azure Cosmos DB APIs. These samples use an API for NoSQL account, but these operations are identical across all database APIs in Azure Cosmos DB.
|Task | Description | ||| | [Add or fail over regions](../scripts/cli/common/regions.md) | Add a region, change failover priority, trigger a manual failover.| | [Perform account key operations](../scripts/cli/common/keys.md) | List account keys, read-only keys, regenerate keys and list connection strings.|
-| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.|
-| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.|
-| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create an Azure Cosmos DB account with IP firewall configured.|
+| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create an Azure Cosmos DB account and secure with service-endpoints.|
+| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update an Azure Cosmos DB account to secure with service-endpoints when the subnet is eventually configured.|
| [Find existing free-tier account](../scripts/cli/common/free-tier.md)| Find whether there is an existing free-tier account in your subscription.| |||
For Azure CLI samples for other APIs see:
- [CLI Samples for Cassandra](../cassandr) - [CLI Samples for Gremlin](../graph/cli-samples.md)-- [CLI Samples for MongoDB API](../mongodb/cli-samples.md)
+- [CLI Samples for API for MongoDB](../mongodb/cli-samples.md)
- [CLI Samples for SQL](../sql/cli-samples.md)
cosmos-db Create Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-dotnet.md
- Title: Quickstart - Azure Cosmos DB Table API for .NET
-description: Learn how to build a .NET app to manage Azure Cosmos DB Table API resources in this quickstart.
----- Previously updated : 08/22/2022---
-# Quickstart: Azure Cosmos DB Table API for .NET
--
-This quickstart shows how to get started with the Azure Cosmos DB Table API from a .NET application. The Cosmos DB Table API is a schemaless data store allowing applications to store structured NoSQL data in the cloud. You'll learn how to create tables and rows, and how to perform basic tasks within your Cosmos DB resource by using the [Azure.Data.Tables Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/).
-
-> [!NOTE]
-> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-table-api-dotnet-samples) are available on GitHub as a .NET project.
-
-[Table API reference documentation](../../storage/tables/index.yml) | [Azure.Data.Tables Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/)
-
-## Prerequisites
-
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* [.NET 6.0](https://dotnet.microsoft.com/download)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
-
-### Prerequisite check
-
-* In a terminal or command window, run ``dotnet --list-sdks`` to check that .NET 6.x is one of the available versions.
-* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
-
-## Setting up
-
-This section walks you through how to create an Azure Cosmos account and set up a project that uses the Table API NuGet packages.
-
-### Create an Azure Cosmos DB account
-
-This quickstart will create a single Azure Cosmos DB account using the Table API.
-
-#### [Azure CLI](#tab/azure-cli)
--
-#### [PowerShell](#tab/azure-powershell)
--
-#### [Portal](#tab/azure-portal)
----
-### Get Table API connection string
-
-#### [Azure CLI](#tab/azure-cli)
--
-#### [PowerShell](#tab/azure-powershell)
--
-#### [Portal](#tab/azure-portal)
----
-### Create a new .NET app
-
-Create a new .NET application in an empty folder using your preferred terminal. Use the [``dotnet new console``](/dotnet/core/tools/dotnet-new) to create a new console app.
-
-```console
-dotnet new console --output <app-name>
-```
-
-### Install the NuGet package
-
-Add the [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables) NuGet package to the new .NET project. Use the [``dotnet add package``](/dotnet/core/tools/dotnet-add-package) command specifying the name of the NuGet package.
-
-```console
-dotnet add package Azure.Data.Tables
-```
-
-### Configure environment variables
--
-## Code examples
-
-* [Authenticate the client](#authenticate-the-client)
-* [Create a table](#create-a-table)
-* [Create an item](#create-an-item)
-* [Get an item](#get-an-item)
-* [Query items](#query-items)
-
-The sample code described in this article creates a table named ``adventureworks``. Each table row contains the details of a product such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
-
-You'll use the following Table API classes to interact with these resources:
-
-* [``TableServiceClient``](/dotnet/api/azure.data.tables.tableserviceclient) - This class provides methods to perform service level operations with Azure Cosmos DB Table API.
-* [``TableClient``](/dotnet/api/azure.data.tables.tableclient) - This class allows you to interact with tables hosted in the Azure Cosmos DB table API.
-* [``TableEntity``](/dotnet/api/azure.data.tables.tableentity) - This class is a reference to a row in a table that allows you to manage properties and column data.
-
-### Authenticate the client
-
-From the project directory, open the *Program.cs* file. In your editor, add a using directive for ``Azure.Data.Tables``.
--
-Define a new instance of the ``TableServiceClient`` class using the constructor, and [``Environment.GetEnvironmentVariable``](/dotnet/api/system.environment.getenvironmentvariables) to read the connection string you set earlier.
--
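A minimal sketch of both steps follows; the environment variable name is an assumption, so use whatever name you configured earlier:

```csharp
using Azure.Data.Tables;

// Read the connection string from an environment variable and create the service client.
string connectionString = Environment.GetEnvironmentVariable("COSMOS_TABLE_CONNECTION_STRING");
TableServiceClient serviceClient = new TableServiceClient(connectionString);
```
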
-### Create a table
-
-Retrieve an instance of the `TableClient` using the `TableServiceClient` class. Use the [``TableClient.CreateIfNotExistsAsync``](/dotnet/api/azure.data.tables.tableclient.createifnotexistsasync) method on the `TableClient` to create a new table if it doesn't already exist. This method will return a reference to the existing or newly created table.
--
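A sketch of this step, assuming the `serviceClient` instance from the previous section and the `adventureworks` table used throughout this quickstart:

```csharp
// Get a TableClient scoped to the table, then create the table if it doesn't exist yet.
TableClient tableClient = serviceClient.GetTableClient("adventureworks");
await tableClient.CreateIfNotExistsAsync();
```
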
-### Create an item
-
-The easiest way to create a new item in a table is to create a class that implements the [``ITableEntity``](/dotnet/api/azure.data.tables.itableentity) interface. You can then add your own properties to the class to populate columns of data in that table row.
--
-Create an item in the collection using the `Product` class by calling [``TableClient.AddEntityAsync<T>``](/dotnet/api/azure.data.tables.tableclient.addentityasync).
--
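A sketch of an `ITableEntity` model and an insert follows; the `Product` properties are assumptions that mirror the columns described earlier:

```csharp
using Azure;
using Azure.Data.Tables;

public class Product : ITableEntity
{
    // Required ITableEntity members.
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTimeOffset? Timestamp { get; set; }
    public ETag ETag { get; set; }

    // Custom columns for this quickstart.
    public string Name { get; set; }
    public int Quantity { get; set; }
    public bool Sale { get; set; }
}

// Create and insert a new row; the partition key doubles as the product category.
Product newItem = new Product
{
    PartitionKey = "gear-surf-surfboards",
    RowKey = Guid.NewGuid().ToString(),
    Name = "Yamba Surfboard",
    Quantity = 12,
    Sale = false
};

await tableClient.AddEntityAsync(newItem);
```
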
-### Get an item
-
-You can retrieve a specific item from a table using the [``TableEntity.GetEntityAsync<T>``](/dotnet/api/azure.data.tables.tableclient.getentity) method. Provide the `partitionKey` and `rowKey` as parameters to identify the correct row to perform a quick *point read* of that item.
--
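A sketch of a point read, reusing the keys from the item created above:

```csharp
// Point read: the partition key and row key together identify a single row.
Response<Product> response = await tableClient.GetEntityAsync<Product>(
    partitionKey: "gear-surf-surfboards",
    rowKey: newItem.RowKey);

Product readItem = response.Value;
Console.WriteLine($"Single product name:\n{readItem.Name}");
```
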
-### Query items
-
-After you insert an item, you can also run a query to get all items that match a specific filter by using the `TableClient.Query<T>` method. This example filters products by category using [Linq](/dotnet/standard/linq) syntax, which is a benefit of using typed `ITableEntity` models like the `Product` class.
-
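-For example, filtering on the category (stored here in the partition key) might look similar to this sketch:
-
-```csharp
-// LINQ-style filter expression evaluated against each entity.
-var queryResults = tableClient.Query<Product>(p => p.PartitionKey == "gear-surf-surfboards");
-
-Console.WriteLine("Multiple products:");
-foreach (Product product in queryResults)
-{
-    Console.WriteLine(product.Name);
-}
-```
-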
-> [!NOTE]
-> You can also query items using [OData](/rest/api/storageservices/querying-tables-and-entities) syntax. You can see an example of this approach in the [Query Data](./tutorial-query-table.md) tutorial.
--
-## Run the code
-
-This app creates an Azure Cosmos DB Table API table. The example then creates an item and reads the exact same item back. Finally, the example creates a second item and performs a query that should return multiple items. With each step, the example outputs metadata to the console about the steps it has performed.
-
-To run the app, use a terminal to navigate to the application directory and run the application.
-
-```dotnetcli
-dotnet run
-```
-
-The output of the app should be similar to this example:
-
-```output
-Single product name:
-Yamba Surfboard
-Multiple products:
-Yamba Surfboard
-Sand Surfboard
-```
-
-## Clean up resources
-
-When you no longer need the Azure Cosmos DB Table API account, you can delete the corresponding resource group.
-
-### [Azure CLI](#tab/azure-cli)
-
-Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delete the resource group.
-
-```azurecli-interactive
-az group delete --name $resourceGroupName
-```
-
-### [PowerShell](#tab/azure-powershell)
-
-Use the [``Remove-AzResourceGroup``](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to delete the resource group.
-
-```azurepowershell-interactive
-$parameters = @{
- Name = $RESOURCE_GROUP_NAME
-}
-Remove-AzResourceGroup @parameters
-```
-
-### [Portal](#tab/azure-portal)
-
-1. Navigate to the resource group you previously created in the Azure portal.
-
- > [!TIP]
- > In this quickstart, we recommended the name ``msdocs-cosmos-quickstart-rg``.
-1. Select **Delete resource group**.
-
- :::image type="content" source="media/dotnet-quickstart/delete-resource-group-option.png" alt-text="Screenshot of the Delete resource group option in the navigation bar for a resource group.":::
-
-1. On the **Are you sure you want to delete** dialog, enter the name of the resource group, and then select **Delete**.
-
- :::image type="content" source="media/dotnet-quickstart/delete-confirmation.png" alt-text="Screenshot of the delete confirmation page for a resource group.":::
---
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB Table API account, create a table, and manage entries using the .NET SDK. You can now dive deeper into the SDK to learn how to perform more advanced data queries and management tasks in your Azure Cosmos DB Table API resources.
-
-> [!div class="nextstepaction"]
-> [Get started with Azure Cosmos DB Table API and .NET](./how-to-dotnet-get-started.md)
cosmos-db Create Table Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-java.md
- Title: Use the Table API and Java to build an app - Azure Cosmos DB
-description: This quickstart shows how to use the Azure Cosmos DB Table API to create an application with the Azure portal and Java
----- Previously updated : 05/28/2020---
-# Quickstart: Build a Table API app with Java SDK and Azure Cosmos DB
--
-> [!div class="op_single_selector"]
-> * [.NET](create-table-dotnet.md)
-> * [Java](create-table-java.md)
-> * [Node.js](create-table-nodejs.md)
-> * [Python](how-to-use-python.md)
->
-
-This quickstart shows how to access the Azure Cosmos DB [Tables API](introduction.md) from a Java application. The Cosmos DB Tables API is a schemaless data store allowing applications to store structured NoSQL data in the cloud. Because data is stored in a schemaless design, new properties (columns) are automatically added to the table when an object with a new attribute is added to the table.
-
-Java applications can access the Cosmos DB Tables API using the [azure-data-tables](https://search.maven.org/artifact/com.azure/azure-data-tables) client library.
-
-## Prerequisites
-
-The sample application is written in [Spring Boot 2.6.4](https://spring.io/projects/spring-boot). You can use either [Visual Studio Code](https://code.visualstudio.com/) or [IntelliJ IDEA](https://www.jetbrains.com/idea/) as an IDE.
--
-## Sample application
-
-The sample application for this tutorial may be cloned or downloaded from the repository [https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-java](https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-java). Both a starter and completed app are included in the sample repository.
-
-```bash
-git clone https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-java
-```
-
-The sample application uses weather data as an example to demonstrate the capabilities of the Tables API. Objects representing weather observations are stored and retrieved using the Table API, including storing objects with additional properties to demonstrate the schemaless capabilities of the Tables API.
--
-## 1 - Create an Azure Cosmos DB account
-
-You first need to create a Cosmos DB Tables API account that will contain the table(s) used in your application. This can be done using the Azure portal, Azure CLI, or Azure PowerShell.
-
-### [Azure portal](#tab/azure-portal)
-
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create a Cosmos DB account.
-
-| Instructions | Screenshot |
-|:|--:|
-| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Cosmos D B accounts in Azure." lightbox="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-1.png"::: |
-| [!INCLUDE [Create cosmos db account step 2](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-2-240px.png" alt-text="A screenshot showing the Create button location on the Cosmos D B accounts page in Azure." lightbox="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-2.png"::: |
-| [!INCLUDE [Create cosmos db account step 3](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-3-240px.png" alt-text="A screenshot showing the Azure Table option as the correct option to select." lightbox="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-3.png"::: |
-| [!INCLUDE [Create cosmos db account step 4](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-4-240px.png" alt-text="A screenshot showing how to fill out the fields on the Cosmos D B Account creation page." lightbox="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-4.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-Cosmos DB accounts are created using the [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Cosmos DB account.
-
-Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Cosmos DB account names must also be unique across Azure.
-
-Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
-
-It typically takes several minutes for the Cosmos DB account creation process to complete.
-
-```azurecli
-LOCATION='eastus'
-RESOURCE_GROUP_NAME='rg-msdocs-tables-sdk-demo'
-COSMOS_ACCOUNT_NAME='cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
-COSMOS_TABLE_NAME='WeatherData'
-
-az group create \
- --location $LOCATION \
- --name $RESOURCE_GROUP_NAME
-
-az cosmosdb create \
- --name $COSMOS_ACCOUNT_NAME \
- --resource-group $RESOURCE_GROUP_NAME \
- --capabilities EnableTable
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet. You must include the `-ApiKind "Table"` option to enable table storage within your Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
-
-Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
-
-Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-az-ps).
-
-It typically takes several minutes for the Cosmos DB account creation process to complete.
-
-```azurepowershell
-$location = 'eastus'
-$resourceGroupName = 'rg-msdocs-tables-sdk-demo'
-$cosmosAccountName = 'cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
-
-# Create a resource group
-New-AzResourceGroup `
- -Location $location `
- -Name $resourceGroupName
-
-# Create an Azure Cosmos DB
-New-AzCosmosDBAccount `
- -Name $cosmosAccountName `
- -ResourceGroupName $resourceGroupName `
- -Location $location `
- -ApiKind "Table"
-```
---
-## 2 - Create a table
-
-Next, you need to create a table within your Cosmos DB account for your application to use. Unlike a traditional database, you only need to specify the name of the table, not the properties (columns) in the table. As data is loaded into your table, the properties (columns) will be automatically created as needed.
-
-### [Azure portal](#tab/azure-portal)
-
-In the [Azure portal](https://portal.azure.com/), complete the following steps to create a table inside your Cosmos DB account.
-
-| Instructions | Screenshot |
-|:--|--:|
-| [!INCLUDE [Create cosmos db table step 1](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-create-cosmos-db-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find your Cosmos D B account." lightbox="./media/create-table-java/azure-portal-create-cosmos-db-table-api-1.png"::: |
-| [!INCLUDE [Create cosmos db table step 2](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-create-cosmos-db-table-api-2-240px.png" alt-text="A screenshot showing the location of the Add Table button." lightbox="./media/create-table-java/azure-portal-create-cosmos-db-table-api-2.png"::: |
-| [!INCLUDE [Create cosmos db table step 3](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-create-cosmos-db-table-api-3-240px.png" alt-text="A screenshot showing how to New Table dialog box for a Cosmos D B table." lightbox="./media/create-table-java/azure-portal-create-cosmos-db-table-api-3.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-Tables in Cosmos DB are created using the [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) command.
-
-```azurecli
-COSMOS_TABLE_NAME='WeatherData'
-
-az cosmosdb table create \
- --account-name $COSMOS_ACCOUNT_NAME \
- --resource-group $RESOURCE_GROUP_NAME \
- --name $COSMOS_TABLE_NAME \
- --throughput 400
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-Tables in Cosmos DB are created using the [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) cmdlet.
-
-```azurepowershell
-$cosmosTableName = 'WeatherData'
-
-# Create the table for the application to use
-New-AzCosmosDBTable `
- -Name $cosmosTableName `
- -AccountName $cosmosAccountName `
- -ResourceGroupName $resourceGroupName
-```
---
-## 3 - Get Cosmos DB connection string
-
-To access your table(s) in Cosmos DB, your app needs the table connection string for the Cosmos DB account. The connection string can be retrieved by using the Azure portal, the Azure CLI, or Azure PowerShell.
-
-### [Azure portal](#tab/azure-portal)
-
-| Instructions | Screenshot |
-|:--|--:|
-| [!INCLUDE [Get cosmos db table connection string step 1](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-cosmos-db-table-connection-string-1-240px.png" alt-text="A screenshot showing the location of the connection strings link on the Cosmos D B page." lightbox="./media/create-table-java/azure-portal-cosmos-db-table-connection-string-1.png"::: |
-| [!INCLUDE [Get cosmos db table connection string step 2](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-cosmos-db-table-connection-string-2-240px.png" alt-text="A screenshot showing the which connection string to select and use in your application." lightbox="./media/create-table-java/azure-portal-cosmos-db-table-connection-string-2.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-To get the primary table storage connection string using the Azure CLI, use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az_cosmosdb_keys_list) command with the `--type connection-strings` option. This command uses a [JMESPath query](/cli/azure/query-azure-cli) to display only the primary table connection string.
-
-```azurecli
-# This gets the primary Table connection string
-az cosmosdb keys list \
- --type connection-strings \
- --resource-group $RESOURCE_GROUP_NAME \
- --name $COSMOS_ACCOUNT_NAME \
- --query "connectionStrings[?description=='Primary Table Connection String'].connectionString" \
- --output tsv
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-To get the primary table storage connection string using Azure PowerShell, use the [Get-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
-
-```azurepowershell
-# This gets the primary Table connection string
-$(Get-AzCosmosDBAccountKey `
- -ResourceGroupName $resourceGroupName `
- -Name $cosmosAccountName `
- -Type "ConnectionStrings")."Primary Table Connection String"
-```
---
-The connection string for your Cosmos DB account is considered an app secret and must be protected like any other app secret or password. This example uses the POM to store the connection string during development and make it available to the application.
-
-```xml
-<profiles>
- <profile>
- <id>local</id>
- <properties>
- <azure.tables.connection.string>
- <![CDATA[YOUR-DATA-TABLES-SERVICE-CONNECTION-STRING]]>
- </azure.tables.connection.string>
- <azure.tables.tableName>WeatherData</azure.tables.tableName>
- </properties>
- <activation>
- <activeByDefault>true</activeByDefault>
- </activation>
- </profile>
-</profiles>
-```
-
-## 4 - Include the azure-data-tables package
-
-To access the Cosmos DB Tables API from a Java application, include the [azure-data-tables](https://search.maven.org/artifact/com.azure/azure-data-tables) package.
-
-```xml
-<dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-data-tables</artifactId>
- <version>12.2.1</version>
-</dependency>
-```
---
-## 5 - Configure the Table client in TableServiceConfig.java
-
-The Azure SDK communicates with Azure using client objects to execute different operations against Azure. The [TableClient](/java/api/com.azure.data.tables.tableclient) object is used to communicate with the Cosmos DB Tables API.
-
-An application typically creates a single [TableClient](/java/api/com.azure.data.tables.tableclient) object per table and uses it throughout the application. To accomplish this, it's recommended to expose the [TableClient](/java/api/com.azure.data.tables.tableclient) object as a bean that's managed by the Spring container as a singleton.
-
-In the `TableServiceConfig.java` file of the application, edit the `tableClientConfiguration()` method to match the following code snippet:
-
-```java
-@Configuration
-public class TableServiceConfiguration {
-
- private static String TABLE_NAME;
-
- private static String CONNECTION_STRING;
-
- @Value("${azure.tables.connection.string}")
- public void setConnectionStringStatic(String connectionString) {
- TableServiceConfiguration.CONNECTION_STRING = connectionString;
- }
-
- @Value("${azure.tables.tableName}")
- public void setTableNameStatic(String tableName) {
- TableServiceConfiguration.TABLE_NAME = tableName;
- }
-
- @Bean
- public TableClient tableClientConfiguration() {
- return new TableClientBuilder()
- .connectionString(CONNECTION_STRING)
- .tableName(TABLE_NAME)
- .buildClient();
- }
-
-}
-```
-
-You'll also need to add the following import statements at the top of the `TableServiceConfig.java` file.
-
-```java
-import com.azure.data.tables.TableClient;
-import com.azure.data.tables.TableClientBuilder;
-```
-
-## 6 - Implement Cosmos DB table operations
-
-All Cosmos DB table operations for the sample app are implemented in the `TableServiceImpl` class located in the *Services* directory. You'll need to import the `com.azure.data.tables` SDK package.
-
-```java
-import com.azure.data.tables.TableClient;
-import com.azure.data.tables.models.ListEntitiesOptions;
-import com.azure.data.tables.models.TableEntity;
-import com.azure.data.tables.models.TableTransactionAction;
-import com.azure.data.tables.models.TableTransactionActionType;
-```
-
-At the start of the `TableServiceImpl` class, add a member variable for the [TableClient](/java/api/com.azure.data.tables.tableclient) object annotated with `@Autowired` so that the [TableClient](/java/api/com.azure.data.tables.tableclient) object is injected into the class.
-
-```java
-@Autowired
-private TableClient tableClient;
-```
-
-### Get rows from a table
-
-The [TableClient](/java/api/com.azure.data.tables.tableclient) class contains a method named [listEntities](/java/api/com.azure.data.tables.tableclient.listentities) which allows you to select rows from the table. In this example, since no parameters are being passed to the method, all rows will be selected from the table.
-
-The method also takes a generic parameter of type [TableEntity](/java/api/com.azure.data.tables.models.tableentity) that specifies the model class data will be returned as. In this case, the built-in class [TableEntity](/java/api/com.azure.data.tables.models.tableentity) is used, meaning the `listEntities` method will return a `PagedIterable<TableEntity>` collection as its results.
-
-```java
-public List<WeatherDataModel> retrieveAllEntities() {
- List<WeatherDataModel> modelList = tableClient.listEntities().stream()
- .map(WeatherDataUtils::mapTableEntityToWeatherDataModel)
- .collect(Collectors.toList());
- return Collections.unmodifiableList(WeatherDataUtils.filledValue(modelList));
-}
-```
-
-The [TableEntity](/java/api/com.azure.data.tables.models.tableentity) class defined in the `com.azure.data.tables.models` package has properties for the partition key and row key values in the table. Together, these two values form a unique key for the row in the table. In this example application, the name of the weather station (city) is stored in the partition key, and the date/time of the observation is stored in the row key. All other properties (temperature, humidity, wind speed) are stored in a dictionary in the `TableEntity` object.
-
-It's common practice to map a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object to an object of your own definition. The sample application defines a class `WeatherDataModel` in the *Models* directory for this purpose. This class has properties for the station name and observation date that the partition key and row key will map to, providing more meaningful property names for these values. It then uses a dictionary to store all the other properties on the object. This is a common pattern when working with Table storage since a row can have any number of arbitrary properties and we want our model objects to be able to capture all of them. This class also contains methods to list the properties on the class.
-
-```java
-public class WeatherDataModel {
-
- public WeatherDataModel(String stationName, String observationDate, OffsetDateTime timestamp, String etag) {
- this.stationName = stationName;
- this.observationDate = observationDate;
- this.timestamp = timestamp;
- this.etag = etag;
- }
-
- private String stationName;
-
- private String observationDate;
-
- private OffsetDateTime timestamp;
-
- private String etag;
-
- private Map<String, Object> propertyMap = new HashMap<String, Object>();
-
- public String getStationName() {
- return stationName;
- }
-
- public void setStationName(String stationName) {
- this.stationName = stationName;
- }
-
- public String getObservationDate() {
- return observationDate;
- }
-
- public void setObservationDate(String observationDate) {
- this.observationDate = observationDate;
- }
-
- public OffsetDateTime getTimestamp() {
- return timestamp;
- }
-
- public void setTimestamp(OffsetDateTime timestamp) {
- this.timestamp = timestamp;
- }
-
- public String getEtag() {
- return etag;
- }
-
- public void setEtag(String etag) {
- this.etag = etag;
- }
-
- public Map<String, Object> getPropertyMap() {
- return propertyMap;
- }
-
- public void setPropertyMap(Map<String, Object> propertyMap) {
- this.propertyMap = propertyMap;
- }
-}
-```
-
-The `mapTableEntityToWeatherDataModel` method is used to map a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object to a `WeatherDataModel` object. The method directly maps the `PartitionKey`, `RowKey`, `Timestamp`, and `Etag` properties, and then uses `properties.keySet()` to iterate over the remaining properties in the `TableEntity` object and map them to the `WeatherDataModel` object, excluding the properties that have already been directly mapped.
-
-Edit the code in the `mapTableEntityToWeatherDataModel` method to match the following code block.
-
-```java
-public static WeatherDataModel mapTableEntityToWeatherDataModel(TableEntity entity) {
- WeatherDataModel observation = new WeatherDataModel(
- entity.getPartitionKey(), entity.getRowKey(),
- entity.getTimestamp(), entity.getETag());
- rearrangeEntityProperties(observation.getPropertyMap(), entity.getProperties());
- return observation;
-}
-
-private static void rearrangeEntityProperties(Map<String, Object> target, Map<String, Object> source) {
- Constants.DEFAULT_LIST_OF_KEYS.forEach(key -> {
- if (source.containsKey(key)) {
- target.put(key, source.get(key));
- }
- });
- source.keySet().forEach(key -> {
- if (Constants.DEFAULT_LIST_OF_KEYS.parallelStream().noneMatch(defaultKey -> defaultKey.equals(key))
- && Constants.EXCLUDE_TABLE_ENTITY_KEYS.parallelStream().noneMatch(defaultKey -> defaultKey.equals(key))) {
- target.put(key, source.get(key));
- }
- });
-}
-```
-
-### Filter rows returned from a table
-To filter the rows returned from a table, you can pass an OData style filter string to the [listEntities](/java/api/com.azure.data.tables.tableclient.listentities) method. For example, if you wanted to get all of the weather readings for Chicago between midnight July 1, 2021 and midnight July 2, 2021 (inclusive) you would pass in the following filter string.
-
-```odata
-PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00 AM' and RowKey le '2021-07-02 12:00 AM'
-```
-
-You can view all OData filter operators on the OData website in the [Filter System Query Option](https://www.odata.org/documentation/odata-version-2-0/uri-conventions/) section.
-
-In the example application, the `FilterResultsInputModel` object is designed to capture any filter criteria provided by the user.
-
-```java
-public class FilterResultsInputModel implements Serializable {
-
- private String partitionKey;
-
- private String rowKeyDateStart;
-
- private String rowKeyTimeStart;
-
- private String rowKeyDateEnd;
-
- private String rowKeyTimeEnd;
-
- private Double minTemperature;
-
- private Double maxTemperature;
-
- private Double minPrecipitation;
-
- private Double maxPrecipitation;
-
- public String getPartitionKey() {
- return partitionKey;
- }
-
- public void setPartitionKey(String partitionKey) {
- this.partitionKey = partitionKey;
- }
-
- public String getRowKeyDateStart() {
- return rowKeyDateStart;
- }
-
- public void setRowKeyDateStart(String rowKeyDateStart) {
- this.rowKeyDateStart = rowKeyDateStart;
- }
-
- public String getRowKeyTimeStart() {
- return rowKeyTimeStart;
- }
-
- public void setRowKeyTimeStart(String rowKeyTimeStart) {
- this.rowKeyTimeStart = rowKeyTimeStart;
- }
-
- public String getRowKeyDateEnd() {
- return rowKeyDateEnd;
- }
-
- public void setRowKeyDateEnd(String rowKeyDateEnd) {
- this.rowKeyDateEnd = rowKeyDateEnd;
- }
-
- public String getRowKeyTimeEnd() {
- return rowKeyTimeEnd;
- }
-
- public void setRowKeyTimeEnd(String rowKeyTimeEnd) {
- this.rowKeyTimeEnd = rowKeyTimeEnd;
- }
-
- public Double getMinTemperature() {
- return minTemperature;
- }
-
- public void setMinTemperature(Double minTemperature) {
- this.minTemperature = minTemperature;
- }
-
- public Double getMaxTemperature() {
- return maxTemperature;
- }
-
- public void setMaxTemperature(Double maxTemperature) {
- this.maxTemperature = maxTemperature;
- }
-
- public Double getMinPrecipitation() {
- return minPrecipitation;
- }
-
- public void setMinPrecipitation(Double minPrecipitation) {
- this.minPrecipitation = minPrecipitation;
- }
-
- public Double getMaxPrecipitation() {
- return maxPrecipitation;
- }
-
- public void setMaxPrecipitation(Double maxPrecipitation) {
- this.maxPrecipitation = maxPrecipitation;
- }
-}
-```
-
-When this object is passed to the `retrieveEntitiesByFilter` method in the `TableServiceImpl` class, it creates a filter string for each non-null property value. It then creates a combined filter string by joining all of the values together with an "and" clause. This combined filter string is passed to the [listEntities](/java/api/com.azure.data.tables.tableclient.listentities) method on the [TableClient](/java/api/com.azure.data.tables.tableclient) object and only rows matching the filter string will be returned. You can use a similar method in your code to construct suitable filter strings as required by your application.
-
-```java
-public List<WeatherDataModel> retrieveEntitiesByFilter(FilterResultsInputModel model) {
-
- List<String> filters = new ArrayList<>();
-
- if (!StringUtils.isEmptyOrWhitespace(model.getPartitionKey())) {
- filters.add(String.format("PartitionKey eq '%s'", model.getPartitionKey()));
- }
- if (!StringUtils.isEmptyOrWhitespace(model.getRowKeyDateStart())
- && !StringUtils.isEmptyOrWhitespace(model.getRowKeyTimeStart())) {
- filters.add(String.format("RowKey ge '%s %s'", model.getRowKeyDateStart(), model.getRowKeyTimeStart()));
- }
- if (!StringUtils.isEmptyOrWhitespace(model.getRowKeyDateEnd())
- && !StringUtils.isEmptyOrWhitespace(model.getRowKeyTimeEnd())) {
- filters.add(String.format("RowKey le '%s %s'", model.getRowKeyDateEnd(), model.getRowKeyTimeEnd()));
- }
- if (model.getMinTemperature() != null) {
- filters.add(String.format("Temperature ge %f", model.getMinTemperature()));
- }
- if (model.getMaxTemperature() != null) {
- filters.add(String.format("Temperature le %f", model.getMaxTemperature()));
- }
- if (model.getMinPrecipitation() != null) {
- filters.add(String.format("Precipitation ge %f", model.getMinPrecipitation()));
- }
- if (model.getMaxPrecipitation() != null) {
- filters.add(String.format("Precipitation le %f", model.getMaxPrecipitation()));
- }
-
- List<WeatherDataModel> modelList = tableClient.listEntities(new ListEntitiesOptions()
- .setFilter(String.join(" and ", filters)), null, null).stream()
- .map(WeatherDataUtils::mapTableEntityToWeatherDataModel)
- .collect(Collectors.toList());
- return Collections.unmodifiableList(WeatherDataUtils.filledValue(modelList));
-}
-```
-
-### Insert data using a TableEntity object
-
-The simplest way to add data to a table is by using a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object. In this example, data is mapped from an input model object to a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object. The properties on the input object representing the weather station name and observation date/time are mapped to the `PartitionKey` and `RowKey` properties respectively, which together form a unique key for the row in the table. The additional properties on the input model object are then mapped to dictionary properties on the [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object. Finally, the [createEntity](/java/api/com.azure.data.tables.tableclient.createentity) method on the [TableClient](/java/api/com.azure.data.tables.tableclient) object is used to insert data into the table.
-
-Modify the `insertEntity` method in the example application to contain the following code.
-
-```java
-public void insertEntity(WeatherInputModel model) {
- tableClient.createEntity(WeatherDataUtils.createTableEntity(model));
-}
-```
-
-### Upsert data using a TableEntity object
-
-If you try to insert a row into a table with a partition key/row key combination that already exists in that table, you'll receive an error. For this reason, it's often preferable to use the [upsertEntity](/java/api/com.azure.data.tables.tableclient.upsertentity) method instead of the [createEntity](/java/api/com.azure.data.tables.tableclient.createentity) method when adding rows to a table. If the given partition key/row key combination already exists in the table, the [upsertEntity](/java/api/com.azure.data.tables.tableclient.upsertentity) method updates the existing row. Otherwise, the row is added to the table.
-
-```java
-public void upsertEntity(WeatherInputModel model) {
- tableClient.upsertEntity(WeatherDataUtils.createTableEntity(model));
-}
-```
-
-### Insert or upsert data with variable properties
-
-One of the advantages of using the Cosmos DB Tables API is that if an object being loaded to a table contains any new properties then those properties are automatically added to the table and the values stored in Cosmos DB. There's no need to run DDL statements like `ALTER TABLE` to add columns as in a traditional database.
-
-This model gives your application flexibility when dealing with data sources that may add or modify what data needs to be captured over time or when different inputs provide different data to your application. In the sample application, we can simulate a weather station that sends not just the base weather data but also some additional values. When an object with these new properties is stored in the table for the first time, the corresponding properties (columns) will be automatically added to the table.
-
-In the sample application, the `ExpandableWeatherObject` class is built around an internal dictionary to support any set of properties on the object. This class represents a typical pattern for when an object needs to contain an arbitrary set of properties.
-
-```java
-public class ExpandableWeatherObject {
-
- private String stationName;
-
- private String observationDate;
-
- private Map<String, Object> propertyMap = new HashMap<String, Object>();
-
- public String getStationName() {
- return stationName;
- }
-
- public void setStationName(String stationName) {
- this.stationName = stationName;
- }
-
- public String getObservationDate() {
- return observationDate;
- }
-
- public void setObservationDate(String observationDate) {
- this.observationDate = observationDate;
- }
-
- public Map<String, Object> getPropertyMap() {
- return propertyMap;
- }
-
- public void setPropertyMap(Map<String, Object> propertyMap) {
- this.propertyMap = propertyMap;
- }
-
- public boolean containsProperty(String key) {
- return this.propertyMap.containsKey(key);
- }
-
- public Object getPropertyValue(String key) {
- return containsProperty(key) ? this.propertyMap.get(key) : null;
- }
-
- public void putProperty(String key, Object value) {
- this.propertyMap.put(key, value);
- }
-
- public List<String> getPropertyKeys() {
- List<String> list = Collections.synchronizedList(new ArrayList<String>());
- Iterator<String> iterators = this.propertyMap.keySet().iterator();
- while (iterators.hasNext()) {
- list.add(iterators.next());
- }
- return Collections.unmodifiableList(list);
- }
-
- public Integer getPropertyCount() {
- return this.propertyMap.size();
- }
-}
-```
-
-To insert or upsert such an object using the Table API, map the properties of the expandable object into a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object and use the [createEntity](/java/api/com.azure.data.tables.tableclient.createentity) or [upsertEntity](/java/api/com.azure.data.tables.tableclient.upsertentity) methods on the [TableClient](/java/api/com.azure.data.tables.tableclient) object as appropriate.
-
-```java
-public void insertExpandableEntity(ExpandableWeatherObject model) {
- tableClient.createEntity(WeatherDataUtils.createTableEntity(model));
-}
-
-public void upsertExpandableEntity(ExpandableWeatherObject model) {
- tableClient.upsertEntity(WeatherDataUtils.createTableEntity(model));
-}
-```
-
-### Update an entity
-
-Entities can be updated by calling the [updateEntity](/java/api/com.azure.data.tables.tableclient.updateentity) method on the [TableClient](/java/api/com.azure.data.tables.tableclient) object. Because an entity (row) stored using the Tables API could contain any arbitrary set of properties, it's often useful to create an update object based around a dictionary object similar to the `ExpandableWeatherObject` discussed earlier. In this case, the only difference is the addition of an `etag` property which is used for concurrency control during updates.
-
-```java
-public class UpdateWeatherObject {
-
- private String stationName;
-
- private String observationDate;
-
- private String etag;
-
- private Map<String, Object> propertyMap = new HashMap<String, Object>();
-
- public String getStationName() {
- return stationName;
- }
-
- public void setStationName(String stationName) {
- this.stationName = stationName;
- }
-
- public String getObservationDate() {
- return observationDate;
- }
-
- public void setObservationDate(String observationDate) {
- this.observationDate = observationDate;
- }
-
- public String getEtag() {
- return etag;
- }
-
- public void setEtag(String etag) {
- this.etag = etag;
- }
-
- public Map<String, Object> getPropertyMap() {
- return propertyMap;
- }
-
- public void setPropertyMap(Map<String, Object> propertyMap) {
- this.propertyMap = propertyMap;
- }
-}
-```
-
-In the sample app, this object is passed to the `updateEntity` method in the `TableServiceImpl` class. This method first loads the existing entity from the Tables API using the [getEntity](/java/api/com.azure.data.tables.tableclient.getentity) method on the [TableClient](/java/api/com.azure.data.tables.tableclient). It then updates that entity object and uses the `updateEntity` method to save the updates to the database. Note how the [updateEntity](/java/api/com.azure.data.tables.tableclient.updateentity) method takes the current `etag` of the object to ensure the object hasn't changed since it was initially loaded. If you want to update the entity regardless, you can pass a wildcard value (`*`) for the `etag` to the `updateEntity` method.
-
-```java
-public void updateEntity(UpdateWeatherObject model) {
- TableEntity tableEntity = tableClient.getEntity(model.getStationName(), model.getObservationDate());
- Map<String, Object> propertiesMap = model.getPropertyMap();
- propertiesMap.keySet().forEach(key -> tableEntity.getProperties().put(key, propertiesMap.get(key)));
- tableClient.updateEntity(tableEntity);
-}
-```
-
-### Remove an entity
-
-To remove an entity from a table, call the [deleteEntity](/java/api/com.azure.data.tables.tableclient.deleteentity) method on the [TableClient](/java/api/com.azure.data.tables.tableclient) object with the partition key and row key of the object.
-
-```java
-public void deleteEntity(WeatherInputModel model) {
- tableClient.deleteEntity(model.getStationName(),
- WeatherDataUtils.formatRowKey(model.getObservationDate(), model.getObservationTime()));
-}
-```
-
-## 7 - Run the code
-
-Run the sample application to interact with the Cosmos DB Tables API. The first time you run the application, there will be no data because the table is empty. Use any of the buttons at the top of the application to add data to the table.
--
-Selecting the **Insert using Table Entity** button opens a dialog allowing you to insert or upsert a new row using a `TableEntity` object.
--
-Selecting the **Insert using Expandable Data** button brings up a dialog that enables you to insert an object with custom properties, demonstrating how the Cosmos DB Tables API automatically adds properties (columns) to the table when needed. Use the *Add Custom Field* button to add one or more new properties and demonstrate this capability.
--
-Use the **Insert Sample Data** button to load some sample data into your Cosmos DB table.
--
-Select the **Filter Results** item in the top menu to be taken to the Filter Results page. On this page, fill out the filter criteria to demonstrate how a filter clause can be built and passed to the Cosmos DB Tables API.
--
-## Clean up resources
-
-When you're finished with the sample application, you should remove all Azure resources related to this article from your Azure account. You can do this by deleting the resource group.
-
-### [Azure portal](#tab/azure-portal)
-
-A resource group can be deleted using the [Azure portal](https://portal.azure.com/) by doing the following.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Delete resource group step 1](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-remove-resource-group-1-240px.png" alt-text="A screenshot showing how to search for a resource group." lightbox="./media/create-table-java/azure-portal-remove-resource-group-1.png"::: |
-| [!INCLUDE [Delete resource group step 2](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-remove-resource-group-2-240px.png" alt-text="A screenshot showing the location of the Delete resource group button." lightbox="./media/create-table-java/azure-portal-remove-resource-group-2.png"::: |
-| [!INCLUDE [Delete resource group step 3](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-remove-resource-group-3-240px.png" alt-text="A screenshot showing the confirmation dialog for deleting a resource group." lightbox="./media/create-table-java/azure-portal-remove-resource-group-3.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az-group-delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
-
-```azurecli
-az group delete --name $RESOURCE_GROUP_NAME
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-To delete a resource group using Azure PowerShell, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
-
-```azurepowershell
-Remove-AzResourceGroup -Name $resourceGroupName
-```
---
-## Next steps
-
-In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the Tables API.
-
-> [!div class="nextstepaction"]
-> [Import table data to the Tables API](table-import.md)
cosmos-db Create Table Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-nodejs.md
- Title: 'Quickstart: Table API with Node.js - Azure Cosmos DB'
-description: This quickstart shows how to use the Azure Cosmos DB Table API to create an application with the Azure portal and Node.js
----- Previously updated : 05/28/2020--
-# Quickstart: Build a Table API app with Node.js and Azure Cosmos DB
-
-> [!div class="op_single_selector"]
-> * [.NET](create-table-dotnet.md)
-> * [Java](create-table-java.md)
-> * [Node.js](create-table-nodejs.md)
-> * [Python](how-to-use-python.md)
->
-
-In this quickstart, you create an Azure Cosmos DB Table API account, and use Data Explorer and a Node.js app cloned from GitHub to create tables and entities. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-
-## Prerequisites
-
-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-- [Node.js 0.10.29+](https://nodejs.org/).
-- [Git](https://git-scm.com/downloads).
-
-## Sample application
-
-The sample application for this tutorial may be cloned or downloaded from the repository [https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-js](https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-js). Both a starter and completed app are included in the sample repository.
-
-```bash
-git clone https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-js
-```
-
-The sample application uses weather data as an example to demonstrate the capabilities of the Table API. Objects representing weather observations are stored and retrieved using the Table API, including storing objects with additional properties to demonstrate the schemaless capabilities of the Table API.
--
-## 1 - Create an Azure Cosmos DB account
-
-You first need to create a Cosmos DB Tables API account that will contain the table(s) used in your application. This can be done using the Azure portal, Azure CLI, or Azure PowerShell.
-
-### [Azure portal](#tab/azure-portal)
-
-Log in to the [Azure portal](https://portal.azure.com/) and follow these steps to create a Cosmos DB account.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Cosmos DB accounts in Azure." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-1.png"::: |
-| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-2-240px.png" alt-text="A screenshot showing the Create button location on the Cosmos DB accounts page in Azure." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-2.png"::: |
-| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-3.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-3-240px.png" alt-text="A screenshot showing the Azure Table option as the correct option to select." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-3.png"::: |
-| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-4.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-4-240px.png" alt-text="A screenshot showing how to fill out the fields on the Cosmos DB Account creation page." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-4.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-Cosmos DB accounts are created using the [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Cosmos DB account.
-
-Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Cosmos DB account names must also be unique across Azure.
-
-Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
-
-It typically takes several minutes for the Cosmos DB account creation process to complete.
-
-```azurecli
-LOCATION='eastus'
-RESOURCE_GROUP_NAME='rg-msdocs-tables-sdk-demo'
-COSMOS_ACCOUNT_NAME='cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
-COSMOS_TABLE_NAME='WeatherData'
-
-az group create \
- --location $LOCATION \
- --name $RESOURCE_GROUP_NAME
-
-az cosmosdb create \
- --name $COSMOS_ACCOUNT_NAME \
- --resource-group $RESOURCE_GROUP_NAME \
- --capabilities EnableTable
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet. You must include the `-ApiKind "Table"` option to enable table storage within your Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
-
-Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
-
-Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-az-ps).
-
-It typically takes several minutes for the Cosmos DB account creation process to complete.
-
-```azurepowershell
-$location = 'eastus'
-$resourceGroupName = 'rg-msdocs-tables-sdk-demo'
-$cosmosAccountName = 'cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
-
-# Create a resource group
-New-AzResourceGroup `
- -Location $location `
- -Name $resourceGroupName
-
-# Create an Azure Cosmos DB
-New-AzCosmosDBAccount `
- -Name $cosmosAccountName `
- -ResourceGroupName $resourceGroupName `
- -Location $location `
- -ApiKind "Table"
-```
---
-## 2 - Create a table
-
-Next, you need to create a table within your Cosmos DB account for your application to use. Unlike a traditional database, you only need to specify the name of the table, not the properties (columns) in the table. As data is loaded into your table, the properties (columns) will be automatically created as needed.
-
-### [Azure portal](#tab/azure-portal)
-
-In the [Azure portal](https://portal.azure.com/), complete the following steps to create a table inside your Cosmos DB account.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Create cosmos db table step 1](./includes/create-table-dotnet/create-cosmos-table-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find your Cosmos DB account." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-1.png"::: |
-| [!INCLUDE [Create cosmos db table step 2](./includes/create-table-dotnet/create-cosmos-table-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-2-240px.png" alt-text="A screenshot showing the location of the Add Table button." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-2.png"::: |
-| [!INCLUDE [Create cosmos db table step 3](./includes/create-table-dotnet/create-cosmos-table-3.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-3-240px.png" alt-text="A screenshot showing how to New Table dialog box for a Cosmos DB table." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-3.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-Tables in Cosmos DB are created using the [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) command.
-
-```azurecli
-COSMOS_TABLE_NAME='WeatherData'
-
-az cosmosdb table create \
- --account-name $COSMOS_ACCOUNT_NAME \
- --resource-group $RESOURCE_GROUP_NAME \
- --name $COSMOS_TABLE_NAME \
- --throughput 400
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-Tables in Cosmos DB are created using the [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) cmdlet.
-
-```azurepowershell
-$cosmosTableName = 'WeatherData'
-
-# Create the table for the application to use
-New-AzCosmosDBTable `
- -Name $cosmosTableName `
- -AccountName $cosmosAccountName `
- -ResourceGroupName $resourceGroupName
-```
---
-## 3 - Get Cosmos DB connection string
-
-To access your table(s) in Cosmos DB, your app needs the table connection string for the Cosmos DB account. The connection string can be retrieved by using the Azure portal, the Azure CLI, or Azure PowerShell.
-
-### [Azure portal](#tab/azure-portal)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Get cosmos db table connection string step 1](./includes/create-table-dotnet/get-cosmos-connection-string-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-1-240px.png" alt-text="A screenshot showing the location of the connection strings link on the Cosmos DB page." lightbox="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-1.png"::: |
-| [!INCLUDE [Get cosmos db table connection string step 2](./includes/create-table-dotnet/get-cosmos-connection-string-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-2-240px.png" alt-text="A screenshot showing which connection string to select and use in your application." lightbox="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-2.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-To get the primary table storage connection string using Azure CLI, use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command with the option `--type connection-strings`. This command uses a [JMESPath query](/cli/azure/query-azure-cli) to display only the primary table connection string.
-
-```azurecli
-# This gets the primary Table connection string
-az cosmosdb keys list \
- --type connection-strings \
- --resource-group $RESOURCE_GROUP_NAME \
- --name $COSMOS_ACCOUNT_NAME \
- --query "connectionStrings[?description=='Primary Table Connection String'].connectionString" \
- --output tsv
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-To get the primary table storage connection string using Azure PowerShell, use the [Get-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
-
-```azurepowershell
-# This gets the primary Table connection string
-$(Get-AzCosmosDBAccountKey `
- -ResourceGroupName $resourceGroupName `
- -Name $cosmosAccountName `
- -Type "ConnectionStrings")."Primary Table Connection String"
-```
-
-The connection string for your Cosmos DB account is considered an app secret and must be protected like any other app secret or password.
---
-## 4 - Install the Azure Data Tables SDK for JS
-
-To access the Cosmos DB Table API from a Node.js application, install the [Azure Data Tables SDK](https://www.npmjs.com/package/@azure/data-tables) package.
-
-```bash
- npm install @azure/data-tables
-```
-
-## 5 - Configure the Table client in env.js file
-
-Copy your Cosmos DB or Storage account connection string from the Azure portal, and create a `TableClient` object by using your copied connection string. Switch to the `1-strater-app` or `2-completed-app` folder. Then, add the values of the corresponding environment variables in the `configure/env.js` file.
-
-```js
-const env = {
- connectionString:"A connection string to an Azure Storage or Cosmos account.",
- tableName: "WeatherData",
-};
-```
-
-The Azure SDK communicates with Azure using client objects to execute different operations against Azure. The `TableClient` class is used to communicate with the Cosmos DB Table API. An application will typically create a single `serviceClient` object per table to be used throughout the application.
-
-```js
-const { TableClient } = require("@azure/data-tables");
-const env = require("../configure/env");
-const serviceClient = TableClient.fromConnectionString(
- env.connectionString,
- env.tableName
-);
-```
---
-## 6 - Implement Cosmos DB table operations
-
-All Cosmos DB table operations for the sample app are implemented in the `serviceClient` object located in the `tableClient.js` file under the *service* directory.
-
-```js
-const { TableClient } = require("@azure/data-tables");
-const env = require("../configure/env");
-const serviceClient = TableClient.fromConnectionString(
- env.connectionString,
- env.tableName
-);
-```
-
-### Get rows from a table
-
-The `serviceClient` object contains a method named `listEntities` which allows you to select rows from the table. In this example, since no parameters are being passed to the method, all rows will be selected from the table.
-
-```js
-const allRowsEntities = serviceClient.listEntities();
-```
-
-### Filter rows returned from a table
-
-To filter the rows returned from a table, you can pass an OData style filter string to the `listEntities` method. For example, if you wanted to get all of the weather readings for Chicago between midnight July 1, 2021 and midnight July 2, 2021 (inclusive) you would pass in the following filter string.
-
-```odata
-PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00' and RowKey le '2021-07-02 12:00'
-```
-
-You can view all OData filter operators on the OData website in the [Filter System Query Option](https://www.odata.org/documentation/odata-version-2-0/uri-conventions/) section.
-
-When the request.args parameter is passed to the `filterEntities` function shown below, the function creates a filter string for each non-null property value. It then creates a combined filter string by joining all of the values together with an "and" clause. This combined filter string is passed to the `listEntities` method on the `serviceClient` object, and only rows matching the filter string will be returned. You can use a similar method in your code to construct suitable filter strings as required by your application.
-
-```js
-const filterEntities = async function (option) {
- /*
- You can query data according to existing fields
- option provides some conditions to query,eg partitionKey, rowKeyDateTimeStart, rowKeyDateTimeEnd
- minTemperature, maxTemperature, minPrecipitation, maxPrecipitation
- */
- const filterEntitiesArray = [];
- const filters = [];
- if (option.partitionKey) {
- filters.push(`PartitionKey eq '${option.partitionKey}'`);
- }
- if (option.rowKeyDateTimeStart) {
- filters.push(`RowKey ge '${option.rowKeyDateTimeStart}'`);
- }
- if (option.rowKeyDateTimeEnd) {
- filters.push(`RowKey le '${option.rowKeyDateTimeEnd}'`);
- }
- if (option.minTemperature !== null) {
- filters.push(`Temperature ge ${option.minTemperature}`);
- }
- if (option.maxTemperature !== null) {
- filters.push(`Temperature le ${option.maxTemperature}`);
- }
- if (option.minPrecipitation !== null) {
- filters.push(`Precipitation ge ${option.minPrecipitation}`);
- }
- if (option.maxPrecipitation !== null) {
- filters.push(`Precipitation le ${option.maxPrecipitation}`);
- }
- const res = serviceClient.listEntities({
- queryOptions: {
- filter: filters.join(" and "),
- },
- });
- for await (const entity of res) {
- filterEntitiesArray.push(entity);
- }
-
- return filterEntitiesArray;
-};
-```
-
-### Insert data using a TableEntity object
-
-The simplest way to add data to a table is by using a `TableEntity` object. In this example, data is mapped from an input model object to a `TableEntity` object. The properties on the input object representing the weather station name and observation date/time are mapped to the `PartitionKey` and `RowKey` properties respectively which together form a unique key for the row in the table. Then the additional properties on the input model object are mapped to dictionary properties on the TableEntity object. Finally, the `createEntity` method on the `serviceClient` object is used to insert data into the table.
-
-Modify the `insertEntity` function in the example application to contain the following code.
-
-```js
-const insertEntity = async function (entity) {
-
- await serviceClient.createEntity(entity);
-
-};
-```
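-
-The following sketch illustrates that mapping with made-up values; in the JavaScript SDK, the key properties on the entity object are `partitionKey` and `rowKey`:
-
-```js
-// Illustrative values: the station name becomes the partition key and the
-// observation date/time becomes the row key.
-const entity = {
-  partitionKey: "Chicago",
-  rowKey: "2021-07-01 06:00",
-  Temperature: 73,
-  Precipitation: 0.1,
-};
-
-await insertEntity(entity);
-```
-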
-### Upsert data using a TableEntity object
-
-If you try to insert a row into a table with a partition key/row key combination that already exists in that table, you will receive an error. For this reason, it is often preferable to use the `upsertEntity` method instead of the `createEntity` method when adding rows to a table. If the given partition key/row key combination already exists in the table, the `upsertEntity` method updates the existing row. Otherwise, the row is added to the table.
-
-```js
-const upsertEntity = async function (entity) {
-
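-  // "Merge" merges the supplied properties into any existing entity with the
-  // same keys; "Replace" would overwrite the stored entity entirely.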
- await serviceClient.upsertEntity(entity, "Merge");
-
-};
-```
-### Insert or upsert data with variable properties
-
-One of the advantages of using the Cosmos DB Table API is that if an object being loaded to a table contains any new properties then those properties are automatically added to the table and the values stored in Cosmos DB. There is no need to run DDL statements like ALTER TABLE to add columns as in a traditional database.
-
-This model gives your application flexibility when dealing with data sources that may add or modify what data needs to be captured over time or when different inputs provide different data to your application. In the sample application, we can simulate a weather station that sends not just the base weather data but also some additional values. When an object with these new properties is stored in the table for the first time, the corresponding properties (columns) will be automatically added to the table.
-
-To insert or upsert such an object using the Table API, map the properties of the expandable object into a `TableEntity` object and use the `createEntity` or `upsertEntity` methods on the `serviceClient` object as appropriate.
-
-In the sample application, the same `insertEntity` and `upsertEntity` functions shown earlier also handle objects with variable properties, so no additional code is required.
-
-```js
-const insertEntity = async function (entity) {
- await serviceClient.createEntity(entity);
-};
-
-const upsertEntity = async function (entity) {
- await serviceClient.upsertEntity(entity, "Merge");
-};
-```
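-
-For example, an entity that carries extra readings can be passed to the same `upsertEntity` function and the new properties are added to the table automatically. The extra field names below (`WindSpeed`, `Humidity`) are illustrative:
-
-```js
-const extendedEntity = {
-  partitionKey: "Chicago",
-  rowKey: "2021-07-01 18:00",
-  Temperature: 76,
-  Precipitation: 0,
-  // Previously unseen properties are created in the table automatically.
-  WindSpeed: 12,
-  Humidity: 0.45,
-};
-
-await upsertEntity(extendedEntity);
-```
-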
-### Update an entity
-
-Entities can be updated by calling the `updateEntity` method on the `serviceClient` object.
-
-In the sample app, the modified entity object is passed to the `updateEntity` function, which calls the `updateEntity` method on the `serviceClient` object with the `Replace` update mode to save the changes to the database.
-
-```js
-const updateEntity = async function (entity) {
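-  // "Replace" overwrites the stored entity with this object; the call fails
-  // if no entity with the same partition key and row key exists.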
- await serviceClient.updateEntity(entity, "Replace");
-};
-```
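-
-A minimal usage sketch (key values are illustrative): read the stored row with `getEntity`, change a value, and pass the result back to `updateEntity`:
-
-```js
-const existing = await serviceClient.getEntity("Chicago", "2021-07-01 06:00");
-existing.Temperature = 81;
-await updateEntity(existing);
-```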
-
-## 7 - Run the code
-
-Run the sample application to interact with the Cosmos DB Table API. The first time you run the application, there will be no data because the table is empty. Use any of the buttons at the top of the application to add data to the table.
--
-Selecting the **Insert using Table Entity** button opens a dialog allowing you to insert or upsert a new row using a `TableEntity` object.
--
-Selecting the **Insert using Expandable Data** button brings up a dialog that enables you to insert an object with custom properties, demonstrating how the Cosmos DB Table API automatically adds properties (columns) to the table when needed. Use the *Add Custom Field* button to add one or more new properties and demonstrate this capability.
--
-Use the **Insert Sample Data** button to load some sample data into your Cosmos DB Table.
--
-Select the **Filter Results** item in the top menu to be taken to the Filter Results page. On this page, fill out the filter criteria to demonstrate how a filter clause can be built and passed to the Cosmos DB Table API.
--
-## Clean up resources
-
-When you are finished with the sample application, you should remove all Azure resources related to this article from your Azure account. You can do this by deleting the resource group.
-
-### [Azure portal](#tab/azure-portal)
-
-A resource group can be deleted using the [Azure portal](https://portal.azure.com/) by doing the following steps.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Delete resource group step 1](./includes/create-table-dotnet/remove-resource-group-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-remove-resource-group-1-240px.png" alt-text="A screenshot showing how to search for a resource group." lightbox="./media/create-table-dotnet/azure-portal-remove-resource-group-1.png"::: |
-| [!INCLUDE [Delete resource group step 2](./includes/create-table-dotnet/remove-resource-group-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-remove-resource-group-2-240px.png" alt-text="A screenshot showing the location of the Delete resource group button." lightbox="./media/create-table-dotnet/azure-portal-remove-resource-group-2.png"::: |
-| [!INCLUDE [Delete resource group step 3](./includes/create-table-dotnet/remove-resource-group-3.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-remove-resource-group-3-240px.png" alt-text="A screenshot showing the confirmation dialog for deleting a resource group." lightbox="./media/create-table-dotnet/azure-portal-remove-resource-group-3.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az-group-delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
-
-```azurecli
-az group delete --name $RESOURCE_GROUP_NAME
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-To delete a resource group using Azure PowerShell, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
-
-```azurepowershell
-Remove-AzResourceGroup -Name $resourceGroupName
-```
---
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run a Node.js app to add table data. Now you can query your data using the Table API.
-
-> [!div class="nextstepaction"]
-> [Import table data to the Table API](table-import.md)
cosmos-db Dotnet Standard Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/dotnet-standard-sdk.md
Title: Azure Cosmos DB Table API .NET Standard SDK & Resources
-description: Learn all about the Azure Cosmos DB Table API and the .NET Standard SDK including release dates, retirement dates, and changes made between each version.
--
+ Title: Azure Cosmos DB for Table .NET Standard SDK & Resources
+description: Learn all about the Azure Cosmos DB for Table and the .NET Standard SDK including release dates, retirement dates, and changes made between each version.
++ -+ ms.devlang: csharp Last updated 11/03/2021--+ # Azure Cosmos DB Table .NET Standard API: Download and release notes > [!div class="op_single_selector"] > > * [.NET](dotnet-sdk.md)
| | Links | ||| |**SDK download**|[NuGet](https://www.nuget.org/packages/Azure.Data.Tables/)|
-|**Sample**|[Cosmos DB Table API .NET Sample](https://github.com/Azure-Samples/azure-cosmos-table-dotnet-core-getting-started)|
-|**Quickstart**|[Quickstart](create-table-dotnet.md)|
+|**Sample**|[Azure Cosmos DB for Table .NET Sample](https://github.com/Azure-Samples/azure-cosmos-table-dotnet-core-getting-started)|
+|**Quickstart**|[Quickstart](quickstart-dotnet.md)|
|**Tutorial**|[Tutorial](tutorial-develop-table-dotnet.md)| |**Current supported framework**|[Microsoft .NET Standard 2.0](https://www.nuget.org/packages/NETStandard.Library)| |**Report Issue**|[Report Issue](https://github.com/Azure/azure-cosmos-table-dotnet/issues)| ## Release notes for 2.0.0 series
-2.0.0 series takes the dependency on [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/), with performance improvements and namespace consolidation to Cosmos DB endpoint.
+2.0.0 series takes the dependency on [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/), with performance improvements and namespace consolidation to the Azure Cosmos DB endpoint.
### <a name="2.0.0-preview"></a>2.0.0-preview
-* initial preview of 2.0.0 Table SDK that takes the dependency on [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/), with performance improvements and namespace consolidation to Cosmos DB endpoint. The public API remains the same.
+* Initial preview of the 2.0.0 Table SDK that takes the dependency on [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/), with performance improvements and namespace consolidation to the Azure Cosmos DB endpoint. The public API remains the same.
## Release notes for 1.0.0 series 1.0.0 series takes the dependency on [Microsoft.Azure.DocumentDB.Core](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.Core/).
* Performance improvement by setting Table SDK default trace level to SourceLevels.Off, which can be opted in via app.config ### <a name="1.0.5"></a>1.0.5
-* Introduce new config under TableClientConfiguration to use Rest Executor to communicate with Cosmos DB Table API
+* Introduce new config under TableClientConfiguration to use Rest Executor to communicate with Azure Cosmos DB for Table
### <a name="1.0.5-preview"></a>1.0.5-preview * Bug fixes
### <a name="0.11.0-preview"></a>0.11.0-preview
-* Changes were made to how CloudTableClient can be configured. It now takes a TableClientConfiguration object during construction. TableClientConfiguration provides different properties to configure the client behavior depending on whether the target endpoint is Cosmos DB Table API or Azure Storage Table API.
-* Added support to TableQuery to return results in sorted order on a custom column. This feature is only supported on Cosmos DB Table endpoints.
-* Added support to expose RequestCharges on various result types. This feature is only supported on Cosmos DB Table endpoints.
+* Changes were made to how CloudTableClient can be configured. It now takes a TableClientConfiguration object during construction. TableClientConfiguration provides different properties to configure the client behavior depending on whether the target endpoint is Azure Cosmos DB for Table or the Azure Storage Table API.
+* Added support to TableQuery to return results in sorted order on a custom column. This feature is only supported on Azure Cosmos DB Table endpoints.
+* Added support to expose RequestCharges on various result types. This feature is only supported on Azure Cosmos DB Table endpoints.
### <a name="0.10.1-preview"></a>0.10.1-preview * Add support for SAS token, operations of TablePermissions, ServiceProperties, and ServiceStats against Azure Storage Table endpoints.
> Some functionalities in previous Azure Storage Table SDKs are not yet supported, such as client-side encryption. ### <a name="0.9.1-preview"></a>0.9.1-preview
-* Azure Cosmos DB Table .NET Standard SDK is a cross-platform .NET library that provides efficient access to the Table data model on Cosmos DB. This initial release supports the full set of Table and Entity CRUD + Query functionalities with similar APIs as the [Cosmos DB Table SDK For .NET Framework](dotnet-sdk.md).
+* Azure Cosmos DB Table .NET Standard SDK is a cross-platform .NET library that provides efficient access to the Table data model on Azure Cosmos DB. This initial release supports the full set of Table and Entity CRUD + Query functionalities with similar APIs as the [Azure Cosmos DB Table SDK For .NET Framework](dotnet-sdk.md).
> [!NOTE] > Azure Storage Table endpoints are not yet supported in the 0.9.1-preview version.
This cross-platform .NET Standard library [Microsoft.Azure.Cosmos.Table](https:/
[!INCLUDE [cosmos-db-sdk-faq](../includes/cosmos-db-sdk-faq.md)] ## See also
-To learn more about the Azure Cosmos DB Table API, see [Introduction to Azure Cosmos DB Table API](introduction.md).
+To learn more about the Azure Cosmos DB for Table, see [Introduction to Azure Cosmos DB for Table](introduction.md).
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/find-request-unit-charge.md
Title: Find request unit (RU) charge for a Table API queries in Azure Cosmos DB
-description: Learn how to find the request unit (RU) charge for Table API queries executed against an Azure Cosmos container. You can use the Azure portal, .NET, Java, Python, and Node.js languages to find the RU charge.
+ Title: Find request unit (RU) charge for API for Table queries in Azure Cosmos DB
+description: Learn how to find the request unit (RU) charge for API for Table queries executed against an Azure Cosmos DB container. You can use the Azure portal, .NET, Java, Python, and Node.js languages to find the RU charge.
-+ Last updated 10/14/2020 ms.devlang: csharp-+
-# Find the request unit charge for operations executed in Azure Cosmos DB Table API
+# Find the request unit charge for operations executed in Azure Cosmos DB for Table
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
-The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and it's considerations](../request-units.md) article.
+The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos DB container, costs are always measured in RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](../request-units.md) article.
-This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Table API. If you are using a different API, see [API for MongoDB](../mongodb/find-request-unit-charge-mongodb.md), [Cassandra API](../cassandr) articles to find the RU/s charge.
+This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB for Table. If you are using a different API, see the [API for MongoDB](../mongodb/find-request-unit-charge.md) and [API for Cassandra](../cassandr) articles to find the RU/s charge.
## Use the .NET SDK
-Currently, the only SDK that returns the RU charge for table operations is the [.NET Standard SDK](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table). The `TableResult` object exposes a `RequestCharge` property that is populated by the SDK when you use it against the Azure Cosmos DB Table API:
+Currently, the only SDK that returns the RU charge for table operations is the [.NET Standard SDK](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table). The `TableResult` object exposes a `RequestCharge` property that is populated by the SDK when you use it against the Azure Cosmos DB for Table:
```csharp CloudTable tableReference = client.GetTableReference("table");
if (tableResult.RequestCharge.HasValue) // would be false when using Azure Stora
} ```
-For more information, see [Quickstart: Build a Table API app by using the .NET SDK and Azure Cosmos DB](create-table-dotnet.md).
+For more information, see [Quickstart: Build an API for Table app by using the .NET SDK and Azure Cosmos DB](quickstart-dotnet.md).
## Next steps
To learn about optimizing your RU consumption, see these articles:
* [Request units and throughput in Azure Cosmos DB](../request-units.md) * [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)
-* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
+* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
cosmos-db How To Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-create-account.md
Title: Create an Azure Cosmos DB Table API account
-description: Learn how to create a new Azure Cosmos DB Table API account
+ Title: Create an Azure Cosmos DB for Table account
+description: Learn how to create a new Azure Cosmos DB for Table account
-+ ms.devlang: csharp Last updated 07/06/2022-+
-# Create an Azure Cosmos DB Table API account
+# Create an Azure Cosmos DB for Table account
-An Azure Cosmos DB Table API account contains all of your Azure Cosmos DB resources: tables and items. The account provides a unique endpoint for various tools and SDKs to connect to Azure Cosmos DB and perform everyday operations. For more information about the resources in Azure Cosmos DB, see [Azure Cosmos DB resource model](../account-databases-containers-items.md).
+An Azure Cosmos DB for Table account contains all of your Azure Cosmos DB resources: tables and items. The account provides a unique endpoint for various tools and SDKs to connect to Azure Cosmos DB and perform everyday operations. For more information about the resources in Azure Cosmos DB, see [Azure Cosmos DB resource model](../account-databases-containers-items.md).
## Prerequisites
An Azure Cosmos DB Table API account contains all of your Azure Cosmos DB resour
## Create an account
-Create a single Azure Cosmos DB account using the Table API.
+Create a single Azure Cosmos DB account using the API for Table.
### [Azure CLI](#tab/azure-cli)
Create a single Azure Cosmos DB account using the Table API.
--location $location ```
-1. Use the [``az cosmosdb create``](/cli/azure/cosmosdb#az-cosmosdb-create) command to create a new Azure Cosmos DB Table API account with default settings.
+1. Use the [``az cosmosdb create``](/cli/azure/cosmosdb#az-cosmosdb-create) command to create a new Azure Cosmos DB for Table account with default settings.
```azurecli-interactive az cosmosdb create \
Create a single Azure Cosmos DB account using the Table API.
New-AzResourceGroup @parameters ```
-1. Use the [``New-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new Azure Cosmos DB Table API account with default settings.
+1. Use the [``New-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new Azure Cosmos DB for Table account with default settings.
```azurepowershell-interactive $parameters = @{
Create a single Azure Cosmos DB account using the Table API.
## Next steps
-In this guide, you learned how to create an Azure Cosmos DB Table API account. You can now import more data to your Azure Cosmos DB account.
+In this guide, you learned how to create an Azure Cosmos DB for Table account. You can now import more data to your Azure Cosmos DB account.
> [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB Table API](../import-data.md)
+> [Import data into Azure Cosmos DB for Table](../import-data.md)
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-create-container.md
Title: Create a container in Azure Cosmos DB Table API
-description: Learn how to create a container in Azure Cosmos DB Table API by using Azure portal, .NET, Java, Python, Node.js, and other SDKs.
+ Title: Create a container in Azure Cosmos DB for Table
+description: Learn how to create a container in Azure Cosmos DB for Table by using Azure portal, .NET, Java, Python, Node.js, and other SDKs.
-++ Last updated 10/16/2020
-# Create a container in Azure Cosmos DB Table API
+# Create a container in Azure Cosmos DB for Table
-This article explains the different ways to create a container in Azure Cosmos DB Table API. It shows how to create a container using Azure portal, Azure CLI, PowerShell, or supported SDKs. This article demonstrates how to create a container, specify the partition key, and provision throughput.
+This article explains the different ways to create a container in Azure Cosmos DB for Table. It shows how to create a container using Azure portal, Azure CLI, PowerShell, or supported SDKs. This article demonstrates how to create a container, specify the partition key, and provision throughput.
-This article explains the different ways to create a container in Azure Cosmos DB Table API. If you are using a different API, see [API for MongoDB](../mongodb/how-to-create-container-mongodb.md), [Cassandra API](../cassandr) articles to create the container.
+If you are using a different API, see the [API for MongoDB](../mongodb/how-to-create-container.md) and [API for Cassandra](../cassandr) articles to create the container.
> [!NOTE] > When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
This article explains the different ways to create a container in Azure Cosmos D
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. [Create a new Azure Cosmos account](create-table-dotnet.md#create-an-azure-cosmos-db-account), or select an existing account.
+1. [Create a new Azure Cosmos DB account](quickstart-dotnet.md#create-an-azure-cosmos-db-account), or select an existing account.
1. Open the **Data Explorer** pane, and select **New Table**. Next, provide the following details:
This article explains the different ways to create a container in Azure Cosmos D
* Enter a throughput to be provisioned (for example, 1000 RUs). * Select **OK**.
- :::image type="content" source="../media/how-to-create-container/partitioned-collection-create-table.png" alt-text="Screenshot of Table API, Add Table dialog box":::
+ :::image type="content" source="../media/how-to-create-container/partitioned-collection-create-table.png" alt-text="Screenshot of API for Table, Add Table dialog box":::
> [!Note]
-> For Table API, the partition key is specified each time you add a new row.
+> For API for Table, the partition key is specified each time you add a new row.
## <a id="cli-mongodb"></a>Create using Azure CLI
-[Create a Table API table with Azure CLI](../scripts/cli/table/create.md). For a listing of all Azure CLI samples across all Azure Cosmos DB APIs see, [Azure CLI samples for Azure Cosmos DB](cli-samples.md).
+[Create an API for Table table with Azure CLI](../scripts/cli/table/create.md). For a listing of all Azure CLI samples across all Azure Cosmos DB APIs, see [Azure CLI samples for Azure Cosmos DB](cli-samples.md).
## Create using PowerShell
-[Create a Table API table with PowerShell](../scripts/powershell/table/create.md). For a listing of all PowerShell samples across all Azure Cosmos DB APIs see, [PowerShell Samples](powershell-samples.md)
+[Create an API for Table table with PowerShell](../scripts/powershell/table/create.md). For a listing of all PowerShell samples across all Azure Cosmos DB APIs, see [PowerShell Samples](powershell-samples.md).
## Next steps * [Partitioning in Azure Cosmos DB](../partitioning-overview.md) * [Request Units in Azure Cosmos DB](../request-units.md) * [Provision throughput on containers and databases](../set-throughput.md)
-* [Work with Azure Cosmos account](../account-databases-containers-items.md)
+* [Work with Azure Cosmos DB account](../account-databases-containers-items.md)
cosmos-db How To Dotnet Create Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-create-item.md
Title: Create an item in Azure Cosmos DB Table API using .NET
-description: Learn how to create an item in your Azure Cosmos DB Table API account using the .NET SDK
+ Title: Create an item in Azure Cosmos DB for Table using .NET
+description: Learn how to create an item in your Azure Cosmos DB for Table account using the .NET SDK
-+ ms.devlang: csharp Last updated 07/06/2022-+
-# Create an item in Azure Cosmos DB Table API using .NET
+# Create an item in Azure Cosmos DB for Table using .NET
-Items in Azure Cosmos DB represent a specific entity stored within a table. In the Table API, an item consists of a set of key-value pairs uniquely identified by the composite of the row and partition keys.
+Items in Azure Cosmos DB represent a specific entity stored within a table. In the API for Table, an item consists of a set of key-value pairs uniquely identified by the composite of the row and partition keys.
## Create a unique identifier for an item
cosmos-db How To Dotnet Create Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-create-table.md
Title: Create a table in Azure Cosmos DB Table API using .NET
-description: Learn how to create a table in your Azure Cosmos DB Table API account using the .NET SDK
+ Title: Create a table in Azure Cosmos DB for Table using .NET
+description: Learn how to create a table in your Azure Cosmos DB for Table account using the .NET SDK
-+ ms.devlang: csharp Last updated 07/06/2022-+
-# Create a table in Azure Cosmos DB Table API using .NET
+# Create a table in Azure Cosmos DB for Table using .NET
-Tables in Azure Cosmos DB Table API are units of management for multiple items. Before you can create or manage items, you must first create a table.
+Tables in Azure Cosmos DB for Table are units of management for multiple items. Before you can create or manage items, you must first create a table.
## Name a table In Azure Cosmos DB, a table is analogous to a table in a relational database. > [!NOTE]
-> With Table API accounts, when you create your first table, a default database is automatically created in your Azure Cosmos account.
+> With API for Table accounts, when you create your first table, a default database is automatically created in your Azure Cosmos DB account.
Here are some quick rules when naming a table:
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-get-started.md
Title: Get started with Azure Cosmos DB Table API and .NET
-description: Get started developing a .NET application that works with Azure Cosmos DB Table API. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB Table API endpoint.
+ Title: Get started with Azure Cosmos DB for Table and .NET
+description: Get started developing a .NET application that works with Azure Cosmos DB for Table. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for Table endpoint.
-+ ms.devlang: csharp Last updated 07/06/2022-+
-# Get started with Azure Cosmos DB Table API and .NET
+# Get started with Azure Cosmos DB for Table and .NET
-This article shows you how to connect to Azure Cosmos DB Table API using the .NET SDK. Once connected, you can perform operations on tables and items.
+This article shows you how to connect to Azure Cosmos DB for Table using the .NET SDK. Once connected, you can perform operations on tables and items.
[Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/) | [Samples](samples-dotnet.md) | [API reference](/dotnet/api/azure.data.tables) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/tables/Azure.Data.Tables) | [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues) | ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* Azure Cosmos DB Table API account. [Create a Table API account](how-to-create-account.md).
+* An Azure Cosmos DB for Table account. [Create an API for Table account](how-to-create-account.md).
* [.NET 6.0 or later](https://dotnet.microsoft.com/download) * [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
Build the project with the [``dotnet build``](/dotnet/core/tools/dotnet-build) c
dotnet build ```
-## Connect to Azure Cosmos DB Table API
+## Connect to Azure Cosmos DB for Table
-To connect to the Table API of Azure Cosmos DB, create an instance of the [``TableServiceClient``](/dotnet/api/azure.data.tables.tableserviceclient) class. This class is the starting point to perform all operations against tables. There are two primary ways to connect to a Table API account using the **TableServiceClient** class:
+To connect to Azure Cosmos DB for Table, create an instance of the [``TableServiceClient``](/dotnet/api/azure.data.tables.tableserviceclient) class. This class is the starting point for performing all operations against tables. There are two primary ways to connect to an API for Table account using the **TableServiceClient** class:
-* [Connect with a Table API connection string](#connect-with-a-connection-string)
+* [Connect with an API for Table connection string](#connect-with-a-connection-string)
* [Connect with Azure Active Directory](#connect-using-the-microsoft-identity-platform) ### Connect with a connection string
The most common constructor for **TableServiceClient** has a single parameter:
| Parameter | Example value | Description | | | | |
-| ``connectionString`` | ``COSMOS_CONNECTION_STRING`` environment variable | Connection string to the Table API account |
+| ``connectionString`` | ``COSMOS_CONNECTION_STRING`` environment variable | Connection string to the API for Table account |
#### Retrieve your account connection string
Create a new instance of the **TableServiceClient** class with the ``COSMOS_CONN
### Connect using the Microsoft Identity Platform
-To connect to your Table API account using the Microsoft Identity Platform and Azure AD, use a security principal. The exact type of principal will depend on where you host your application code. The table below serves as a quick reference guide.
+To connect to your API for Table account using the Microsoft Identity Platform and Azure AD, use a security principal. The exact type of principal will depend on where you host your application code. The table below serves as a quick reference guide.
| Where the application runs | Security principal |--|--||
Create a new instance of the **TableServiceClient** class with the ``COSMOS_ENDP
As you build your application, your code will primarily interact with four types of resources:
-* The Table API account, which is the unique top-level namespace for your Azure Cosmos DB data.
+* The API for Table account, which is the unique top-level namespace for your Azure Cosmos DB data.
* Tables, which contain a set of individual items in your account.
Each type of resource is represented by one or more associated .NET classes or i
||| | [``TableServiceClient``](/dotnet/api/azure.data.tables.tableserviceclient) | This client class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service. | | [``TableClient``](/dotnet/api/azure.data.tables.tableclient) | This client class is a reference to a table that may, or may not, exist in the service yet. The table is validated server-side when you attempt to access it or perform an operation against it. |
-| [``ITableEntity``](/dotnet/api/azure.data.tables.itableentity) | This interface is the base interface for any items that are created in the table or queried from the table. This interface includes all required properties for items in the Table API. |
+| [``ITableEntity``](/dotnet/api/azure.data.tables.itableentity) | This interface is the base interface for any items that are created in the table or queried from the table. This interface includes all required properties for items in the API for Table. |
| [``TableEntity``](/dotnet/api/azure.data.tables.tableentity) | This class is a generic implementation of the ``ITableEntity`` interface as a dictionary of key-value pairs. | The following guides show you how to use each of these classes to build your application.
The following guides show you how to use each of these classes to build your app
## Next steps
-Now that you've connected to a Table API account, use the next guide to create and manage tables.
+Now that you've connected to an API for Table account, use the next guide to create and manage tables.
> [!div class="nextstepaction"]
-> [Create a table in Azure Cosmos DB Table API using .NET](how-to-dotnet-create-table.md)
+> [Create a table in Azure Cosmos DB for Table using .NET](how-to-dotnet-create-table.md)
cosmos-db How To Dotnet Read Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-read-item.md
Title: Read an item in Azure Cosmos DB Table API using .NET
-description: Learn how to read an item in your Azure Cosmos DB Table API account using the .NET SDK
+ Title: Read an item in Azure Cosmos DB for Table using .NET
+description: Learn how to read an item in your Azure Cosmos DB for Table account using the .NET SDK
-+ ms.devlang: csharp Last updated 07/06/2022-+
-# Read an item in Azure Cosmos DB Table API using .NET
+# Read an item in Azure Cosmos DB for Table using .NET
-Items in Azure Cosmos DB represent a specific entity stored within a table. In the Table API, an item consists of a set of key-value pairs uniquely identified by the composite of the row and partition keys.
+Items in Azure Cosmos DB represent a specific entity stored within a table. In the API for Table, an item consists of a set of key-value pairs uniquely identified by the composite of the row and partition keys.
## Reading items using the composite key
-Every item in Azure Cosmos DB Table API has a unique identifier specified by the composite of the **row** and **partition** keys. These composite keys are stored as the ``RowKey`` and ``PartitionKey`` properties respectively. Within the scope of a table, two items can't share the same unique identifier composite.
+Every item in Azure Cosmos DB for Table has a unique identifier specified by the composite of the **row** and **partition** keys. These composite keys are stored as the ``RowKey`` and ``PartitionKey`` properties respectively. Within the scope of a table, two items can't share the same unique identifier composite.
Azure Cosmos DB requires both the unique identifier and the partition key value of an item to perform a read of the item. Specifically, providing the composite key will perform a quick *point read* of that item with a predictable cost in request units (RUs).
The [``TableClient.GetEntityAsync<>``](/dotnet/api/azure.data.tables.tableclient
## Next steps
-Now that you've read various items, try one of our tutorials on querying Azure Cosmos DB Table API data.
+Now that you've read various items, try one of our tutorials on querying Azure Cosmos DB for Table data.
> [!div class="nextstepaction"]
-> [Query Azure Cosmos DB by using the Table API](tutorial-query-table.md)
+> [Query Azure Cosmos DB by using the API for Table](tutorial-query.md)
cosmos-db How To Use C Plus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-c-plus.md
Title: Use Azure Table Storage and Azure Cosmos DB Table API with C++
-description: Store structured data in the cloud using Azure Table storage or the Azure Cosmos DB Table API by using C++.
+ Title: Use Azure Table Storage and Azure Cosmos DB for Table with C++
+description: Store structured data in the cloud using Azure Table storage or Azure Cosmos DB for Table by using C++.
-+ ms.devlang: cpp+ Last updated 10/07/2019--++
-# How to use Azure Table storage and Azure Cosmos DB Table API with C++
+# How to use Azure Table storage and Azure Cosmos DB for Table with C++
[!INCLUDE [storage-selector-table-include](../../../includes/storage-selector-table-include.md)] [!INCLUDE [storage-table-applies-to-storagetable-and-cosmos](../../../includes/storage-table-applies-to-storagetable-and-cosmos.md)]
-This guide shows you common scenarios by using the Azure Table storage service or Azure Cosmos DB Table API. The samples are written in C++ and use the [Azure Storage Client Library for C++](https://github.com/Azure/azure-storage-cpp/blob/master/README.md). This article covers the following scenarios:
+This guide shows you common scenarios by using the Azure Table storage service or Azure Cosmos DB for Table. The samples are written in C++ and use the [Azure Storage Client Library for C++](https://github.com/Azure/azure-storage-cpp/blob/master/README.md). This article covers the following scenarios:
* Create and delete a table * Work with table entities
This guide shows you common scenarios by using the Azure Table storage service o
[!INCLUDE [cosmos-db-create-storage-account](../includes/cosmos-db-create-storage-account.md)]
-### Create an Azure Cosmos DB Table API account
+### Create an Azure Cosmos DB for Table account
[!INCLUDE [cosmos-db-create-tableapi-account](../includes/cosmos-db-create-tableapi-account.md)]
To use the Azure storage APIs to access tables, add the following `include` stat
#include <was/table.h> ```
-An Azure Storage client or Cosmos DB client uses a connection string to store endpoints and credentials to access data management services. When you run a client application, you must provide the storage connection string or Azure Cosmos DB connection string in the appropriate format.
+An Azure Storage client or Azure Cosmos DB client uses a connection string to store endpoints and credentials to access data management services. When you run a client application, you must provide the storage connection string or Azure Cosmos DB connection string in the appropriate format.
### Set up an Azure Storage connection string
For Visual Studio Community Edition, if your project gets build errors because o
[Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md) is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux.
-Follow these links to learn more about Azure Storage and the Table API in Azure Cosmos DB:
+Follow these links to learn more about Azure Storage and the API for Table in Azure Cosmos DB:
-* [Introduction to the Table API](introduction.md)
+* [Introduction to the API for Table](introduction.md)
* [List Azure Storage resources in C++](../../storage/common/storage-c-plus-plus-enumeration.md) * [Storage Client Library for C++ reference](https://azure.github.io/azure-storage-cpp) * [Azure Storage documentation](/azure/storage/)
cosmos-db How To Use Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-go.md
Title: Use the Azure Table client library for Go description: Store structured data in the cloud using the Azure Table client library for Go. -+ ms.devlang: golang + Last updated 03/24/2022--++ # How to use the Azure SDK for Go with Azure Table [!INCLUDE [storage-selector-table-include](../../../includes/storage-selector-table-include.md)] [!INCLUDE [storage-table-applies-to-storagetable-and-cosmos](../../../includes/storage-table-applies-to-storagetable-and-cosmos.md)]
az group delete --resource-group myResourceGroup
## Next steps > [!div class="nextstepaction"]
-> [Import table data to the Table API](table-import.md)
+> [Import table data to the API for Table](import.md)
cosmos-db How To Use Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-java.md
Title: Use the Azure Tables client library for Java description: Store structured data in the cloud using the Azure Tables client library for Java. -+ ms.devlang: Java Last updated 12/10/2020---+++ # How to use the Azure Tables client library for Java [!INCLUDE [storage-selector-table-include](../../../includes/storage-selector-table-include.md)] [!INCLUDE [storage-table-applies-to-storagetable-and-cosmos](../../../includes/storage-table-applies-to-storagetable-and-cosmos.md)]
This article shows you how to create tables, store your data, and perform CRUD operations on said data. The samples are written in Java and use the [Azure Tables client library for Java][Azure Tables client library for Java]. The scenarios covered include **creating**, **listing**, and **deleting** tables, as well as **inserting**, **querying**, **modifying**, and **deleting** entities in a table. For more information on tables, see the [Next steps](#next-steps) section. > [!IMPORTANT]
-> The last version of the Azure Tables client library supporting Table Storage and Cosmos DB Table is [12+][Azure Tables client library for Java].
+> The last version of the Azure Tables client library supporting Table Storage and Azure Cosmos DB Table is [12+][Azure Tables client library for Java].
## Create an Azure service account
import com.azure.data.tables.models.TableTransactionActionType;
## Add your connection string
-You can either connect to the Azure storage account or the Azure Cosmos DB Table API account. Get the connection string based on the type of account you are using.
+You can either connect to the Azure storage account or the Azure Cosmos DB for Table account. Get the connection string based on the type of account you are using.
### Add an Azure Storage connection string
public final String connectionString =
"EndpointSuffix=core.windows.net"; ```
-### Add an Azure Cosmos DB Table API connection string
+### Add an Azure Cosmos DB for Table connection string
An Azure Cosmos DB account uses a connection string to store the table endpoint and your credentials. When running in a client app, you must provide the Azure Cosmos DB connection string in the following format, using the name of your Azure Cosmos DB account and the primary access key for the account listed in the [Azure portal](https://portal.azure.com) for the **AccountName** and **AccountKey** values.
cosmos-db How To Use Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-nodejs.md
Title: Use Azure Table storage or Azure Cosmos DB Table API from Node.js
+ Title: Use Azure Table storage or Azure Cosmos DB for Table from Node.js
description: Store structured data in the cloud using Azure Tables client library for Node.js. -+ ms.devlang: javascript Last updated 07/23/2020---+++
-# How to use Azure Table storage or the Azure Cosmos DB Table API from Node.js
+# How to use Azure Table storage or the Azure Cosmos DB for Table from Node.js
[!INCLUDE [storage-selector-table-include](../../../includes/storage-selector-table-include.md)] [!INCLUDE [storage-table-applies-to-storagetable-and-cosmos](../../../includes/storage-table-applies-to-storagetable-and-cosmos.md)]
This article shows you how to create tables, store your data, and perform CRUD o
[!INCLUDE [cosmos-db-create-storage-account](../includes/cosmos-db-create-storage-account.md)]
-**Create an Azure Cosmos DB Table API account**
+**Create an Azure Cosmos DB for Table account**
[!INCLUDE [cosmos-db-create-tableapi-account](../includes/cosmos-db-create-tableapi-account.md)]
const { TableServiceClient } = require("@azure/data-tables");
## Connect to Azure Table service
-You can either connect to the Azure storage account or the Azure Cosmos DB Table API account. Get the shared key or connection string based on the type of account you are using.
+You can either connect to the Azure storage account or the Azure Cosmos DB for Table account. Get the shared key or connection string based on the type of account you are using.
### Creating the Table service client from a shared key
-The Azure module reads the environment variables AZURE_ACCOUNT and AZURE_ACCESS_KEY and AZURE_TABLES_ENDPOINT for information required to connect to your Azure Storage account or Cosmos DB. If these environment variables are not set, you must specify the account information when calling `TableServiceClient`. For example, the following code creates a `TableServiceClient` object:
+The Azure module reads the environment variables AZURE_ACCOUNT, AZURE_ACCESS_KEY, and AZURE_TABLES_ENDPOINT for the information required to connect to your Azure Storage account or Azure Cosmos DB. If these environment variables are not set, you must specify the account information when calling `TableServiceClient`. For example, the following code creates a `TableServiceClient` object:
```javascript const tableService = new TableServiceClient(
const tableService = new TableServiceClient(
### Creating the Table service client from a connection string
-To add an Azure Cosmos DB or Storage account connection, create a `TableServiceClient` object and specify your account name, primary key, and endpoint. You can copy these values from **Settings** > **Connection String** in the Azure portal for your Cosmos DB account or Storage account. For example:
+To add an Azure Cosmos DB or Storage account connection, create a `TableServiceClient` object and specify your account name, primary key, and endpoint. You can copy these values from **Settings** > **Connection String** in the Azure portal for your Azure Cosmos DB account or Storage account. For example:
```javascript const tableService = TableServiceClient.fromConnectionString("<connection-string>");
cosmos-db How To Use Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-php.md
Title: Use Azure Storage Table service or Azure Cosmos DB Table API from PHP
-description: Store structured data in the cloud using Azure Table storage or the Azure Cosmos DB Table API from PHP.
--
+ Title: Use Azure Storage Table service or Azure Cosmos DB for Table from PHP
+description: Store structured data in the cloud using Azure Table storage or the Azure Cosmos DB for Table from PHP.
++ -+ ms.devlang: php+ Last updated 07/23/2020
-# How to use Azure Storage Table service or the Azure Cosmos DB Table API from PHP
+# How to use Azure Storage Table service or the Azure Cosmos DB for Table from PHP
[!INCLUDE [storage-selector-table-include](../../../includes/storage-selector-table-include.md)] [!INCLUDE [storage-table-applies-to-storagetable-and-cosmos](../../../includes/storage-table-applies-to-storagetable-and-cosmos.md)]
-This article shows you how to create tables, store your data, and perform CRUD operations on the data. Choose either the Azure Table service or the Azure Cosmos DB Table API. The samples are written in PHP and use the [Azure Storage Table PHP Client Library][download]. The scenarios covered include **creating and deleting a table**, and **inserting, deleting, and querying entities in a table**. For more information on the Azure Table service, see the [Next steps](#next-steps) section.
+This article shows you how to create tables, store your data, and perform CRUD operations on the data. Choose either the Azure Table service or the Azure Cosmos DB for Table. The samples are written in PHP and use the [Azure Storage Table PHP Client Library][download]. The scenarios covered include **creating and deleting a table**, and **inserting, deleting, and querying entities in a table**. For more information on the Azure Table service, see the [Next steps](#next-steps) section.
## Create an Azure service account
This article shows you how to create tables, store your data, and perform CRUD o
[!INCLUDE [cosmos-db-create-storage-account](../includes/cosmos-db-create-storage-account.md)]
-**Create an Azure Cosmos DB Table API account**
+**Create an Azure Cosmos DB for Table account**
[!INCLUDE [cosmos-db-create-tableapi-account](../includes/cosmos-db-create-tableapi-account.md)] ## Create a PHP application
-The only requirement to create a PHP application to access the Storage Table service or Azure Cosmos DB Table API is to reference classes in the azure-storage-table SDK for PHP from within your code. You can use any development tools to create your application, including Notepad.
+The only requirement to create a PHP application to access the Storage Table service or Azure Cosmos DB for Table is to reference classes in the azure-storage-table SDK for PHP from within your code. You can use any development tools to create your application, including Notepad.
In this guide, you use Storage Table service or Azure Cosmos DB features that can be called from within a PHP application locally, or in code running within an Azure web role, worker role, or website.
In the examples below, the `require_once` statement is always shown, but only th
## Add your connection string
-You can either connect to the Azure storage account or the Azure Cosmos DB Table API account. Get the connection string based on the type of account you are using.
+You can either connect to the Azure storage account or the Azure Cosmos DB for Table account. Get the connection string based on the type of account you are using.
### Add a Storage Table service connection
cosmos-db How To Use Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-python.md
- Title: 'Quickstart: Table API with Python - Azure Cosmos DB'
-description: This quickstart shows how to access the Azure Cosmos DB Table API from a Python application using the Azure Data Tables SDK
---- Previously updated : 03/23/2021-----
-# Quickstart: Build a Table API app with Python SDK and Azure Cosmos DB
--
-This quickstart shows how to access the Azure Cosmos DB [Table API](introduction.md) from a Python application. The Cosmos DB Table API is a schemaless data store allowing applications to store structured NoSQL data in the cloud. Because data is stored in a schemaless design, new properties (columns) are automatically added to the table when an object with a new attribute is added to the table. Python applications can access the Cosmos DB Table API using the [Azure Data Tables SDK for Python](https://pypi.org/project/azure-data-tables/) package.
-
-## Prerequisites
-
-The sample application is written in [Python 3.6](https://www.python.org/downloads/), though the principles apply to all Python 3.6+ applications. You can use [Visual Studio Code](https://code.visualstudio.com/) as an IDE.
-
-If you don't have an [Azure subscription](../../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/dotnet) before you begin.
-
-## Sample application
-
-The sample application for this tutorial may be cloned or downloaded from the repository https://github.com/Azure-Samples/msdocs-azure-tables-sdk-python-flask. Both a starter and completed app are included in the sample repository.
-
-```bash
-git clone https://github.com/Azure-Samples/msdocs-azure-tables-sdk-python-flask.git
-```
-
-The sample application uses weather data as an example to demonstrate the capabilities of the Table API. Objects representing weather observations are stored and retrieved using the Table API, including storing objects with additional properties to demonstrate the schemaless capabilities of the Table API.
--
-## 1 - Create an Azure Cosmos DB account
-
-You first need to create a Cosmos DB Table API account that will contain the table(s) used in your application. This can be done using the Azure portal, Azure CLI, or Azure PowerShell.
-
-### [Azure portal](#tab/azure-portal)
-
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create a Cosmos DB account.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-python/create-cosmos-db-acct-1.md)] | :::image type="content" source="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Cosmos DB accounts in Azure." lightbox="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-1.png"::: |
-| [!INCLUDE [Create cosmos db account step 2](./includes/create-table-python/create-cosmos-db-acct-2.md)] | :::image type="content" source="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-2-240px.png" alt-text="A screenshot showing the Create button location on the Cosmos DB accounts page in Azure." lightbox="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-2.png"::: |
-| [!INCLUDE [Create cosmos db account step 3](./includes/create-table-python/create-cosmos-db-acct-3.md)] | :::image type="content" source="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-3-240px.png" alt-text="A screenshot showing the Azure Table option as the correct option to select." lightbox="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-3.png"::: |
-| [!INCLUDE [Create cosmos db account step 4](./includes/create-table-python/create-cosmos-db-acct-4.md)] | :::image type="content" source="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-4-240px.png" alt-text="A screenshot showing how to fill out the fields on the Cosmos DB Account creation page." lightbox="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-4.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-Cosmos DB accounts are created using the [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Cosmos DB account.
-
-Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Cosmos DB account names must also be unique across Azure.
-
-Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com/) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
-
-It typically takes several minutes for the Cosmos DB account creation process to complete.
-
-```azurecli
-LOCATION='eastus'
-RESOURCE_GROUP_NAME='rg-msdocs-tables-sdk-demo'
-COSMOS_ACCOUNT_NAME='cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
-COSMOS_TABLE_NAME='WeatherData'
-
-az group create \
- --location $LOCATION \
- --name $RESOURCE_GROUP_NAME
-
-az cosmosdb create \
- --name $COSMOS_ACCOUNT_NAME \
- --resource-group $RESOURCE_GROUP_NAME \
- --capabilities EnableTable
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet. You must include the `-ApiKind "Table"` option to enable table storage within your Azure Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
-
-Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
-
-Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-az-ps).
-
-It typically takes several minutes for the Cosmos DB account creation process to complete.
-
-```azurepowershell
-$location = 'eastus'
-$resourceGroupName = 'rg-msdocs-tables-sdk-demo'
-$cosmosAccountName = 'cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
-
-# Create a resource group
-New-AzResourceGroup `
- -Location $location `
- -Name $resourceGroupName
-
-# Create an Azure Cosmos DB
-New-AzCosmosDBAccount `
- -Name $cosmosAccountName `
- -ResourceGroupName $resourceGroupName `
- -Location $location `
- -ApiKind "Table"
-```
---
-## 2 - Create a table
-
-Next, you need to create a table within your Cosmos DB account for your application to use. Unlike a traditional database, you only need to specify the name of the table, not the properties (columns) in the table. As data is loaded into your table, the properties (columns) will be automatically created as needed.
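-
-If you prefer to create the table from Python code instead of the portal, CLI, or PowerShell, the Azure Data Tables SDK can do this too. The following is a minimal sketch, not part of the sample app; it assumes the `azure-data-tables` package is installed and that `conn_str` holds the connection string retrieved in the "Get Cosmos DB connection string" step later in this article.
-
-```python
-from azure.data.tables import TableServiceClient
-
-# Connect to the Cosmos DB Table API account and create the table if it doesn't already exist.
-table_service = TableServiceClient.from_connection_string(conn_str)
-table_client = table_service.create_table_if_not_exists(table_name="WeatherData")
-print(f"Table ready: {table_client.table_name}")
-```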
-
-### [Azure portal](#tab/azure-portal)
-
-In the [Azure portal](https://portal.azure.com/), complete the following steps to create a table inside your Cosmos DB account.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Create cosmos db table step 1](./includes/create-table-python/create-cosmos-table-1.md)] | :::image type="content" source="./media/create-table-python/azure-portal-create-cosmos-db-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find your Cosmos DB account." lightbox="./media/create-table-python/azure-portal-create-cosmos-db-table-api-1.png"::: |
-| [!INCLUDE [Create cosmos db table step 2](./includes/create-table-python/create-cosmos-table-2.md)] | :::image type="content" source="./media/create-table-python/azure-portal-create-cosmos-db-table-api-2-240px.png" alt-text="A screenshot showing the location of the Add Table button." lightbox="./media/create-table-python/azure-portal-create-cosmos-db-table-api-2.png"::: |
-| [!INCLUDE [Create cosmos db table step 3](./includes/create-table-python/create-cosmos-table-3.md)] | :::image type="content" source="./media/create-table-python/azure-portal-create-cosmos-db-table-api-3-240px.png" alt-text="A screenshot showing the New Table dialog box for a Cosmos DB table." lightbox="./media/create-table-python/azure-portal-create-cosmos-db-table-api-3.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-Tables in Cosmos DB are created using the [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) command.
-
-```azurecli
-COSMOS_TABLE_NAME='WeatherData'
-
-az cosmosdb table create \
- --account-name $COSMOS_ACCOUNT_NAME \
- --resource-group $RESOURCE_GROUP_NAME \
- --name $COSMOS_TABLE_NAME \
- --throughput 400
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-Tables in Cosmos DB are created using the [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) cmdlet.
-
-```azurepowershell
-$cosmosTableName = 'WeatherData'
-
-# Create the table for the application to use
-New-AzCosmosDBTable `
- -Name $cosmosTableName `
- -AccountName $cosmosAccountName `
- -ResourceGroupName $resourceGroupName
-```
---
-## 3 - Get Cosmos DB connection string
-
-To access your table(s) in Cosmos DB, your app needs the table connection string for your Cosmos DB account. The connection string can be retrieved by using the Azure portal, the Azure CLI, or Azure PowerShell.
-
-### [Azure portal](#tab/azure-portal)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Get cosmos db table connection string step 1](./includes/create-table-python/get-cosmos-connection-string-1.md)] | :::image type="content" source="./media/create-table-python/azure-portal-cosmos-db-table-connection-string-1-240px.png" alt-text="A screenshot showing the location of the connection strings link on the Cosmos DB page." lightbox="./media/create-table-python/azure-portal-cosmos-db-table-connection-string-1.png"::: |
-| [!INCLUDE [Get cosmos db table connection string step 2](./includes/create-table-python/get-cosmos-connection-string-2.md)] | :::image type="content" source="./media/create-table-python/azure-portal-cosmos-db-table-connection-string-2-240px.png" alt-text="A screenshot showing which connection string to select and use in your application." lightbox="./media/create-table-python/azure-portal-cosmos-db-table-connection-string-2.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-To get the primary connection string using Azure CLI, use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command with the option `--type connection-strings`. This command uses a [JMESPath query](/cli/azure/query-azure-cli) to display only the primary table connection string.
-
-```azurecli
-# This gets the primary connection string
-az cosmosdb keys list \
- --type connection-strings \
- --resource-group $RESOURCE_GROUP_NAME \
- --name $COSMOS_ACCOUNT_NAME \
- --query "connectionStrings[?description=='Primary Table Connection String'].connectionString" \
- --output tsv
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-To get the primary connection string using Azure PowerShell, use the [Get-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
-
-```azurepowershell
-# This gets the primary connection string
-$(Get-AzCosmosDBAccountKey `
- -ResourceGroupName $resourceGroupName `
- -Name $cosmosAccountName `
- -Type "ConnectionStrings")."Primary Table Connection String"
-```
-
-The connection string for your Cosmos DB account is considered an app secret and must be protected like any other app secret or password.
---
-## 4 - Install the Azure Data Tables SDK for Python
-
-After you've created a Cosmos DB account, your next step is to install the Microsoft [Azure Data Tables SDK for Python](https://pypi.python.org/pypi/azure-data-tables/). For details on installing the SDK, refer to the [README.md](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/tables/azure-data-tables/README.md) file in the Data Tables SDK for Python repository on GitHub.
-
-Install the Azure Tables client library for Python with pip:
-
-```bash
-pip install azure-data-tables
-```
---
-## 5 - Configure the Table client in .env file
-
-Copy your Azure Cosmos DB account connection string from the Azure portal, and create a TableServiceClient object by using the copied connection string. Switch to the `1-strater-app` or `2-completed-app` folder. Then, add the values of the corresponding environment variables to the `.env` file.
-
-```python
-# Configuration Parameters
-conn_str = "A connection string to an Azure Cosmos account."
-table_name = "WeatherData"
-project_root_path = "Project abs path"
-```
-
-The Azure SDK communicates with Azure through client objects that execute different operations against Azure. The `TableServiceClient` object is used to communicate with the Cosmos DB Table API. An application typically has a single `TableServiceClient` overall, and a `TableClient` for each table.
-
-```python
-self.conn_str = os.getenv("AZURE_CONNECTION_STRING")
-self.table_service = TableServiceClient.from_connection_string(self.conn_str)
-```
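-
-As a minimal sketch of that pattern (assuming `conn_str` holds the connection string from the `.env` file), the single `TableServiceClient` hands out one `TableClient` per table:
-
-```python
-from azure.data.tables import TableServiceClient
-
-# One service client for the account, one table client per table.
-table_service = TableServiceClient.from_connection_string(conn_str)
-weather_table = table_service.get_table_client(table_name="WeatherData")
-```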
---
-## 6 - Implement Cosmos DB table operations
-
-All Cosmos DB table operations for the sample app are implemented in the `TableServiceHelper` class located in *helper* file under the *webapp* directory. You will need to import the `TableServiceClient` class at the top of this file to work with objects in the `azure.data.tables` SDK package.
-
-```python
-from azure.data.tables import TableServiceClient
-```
-
-At the start of the `TableServiceHelper` class, create a constructor and add a member variable for the `TableClient` object to allow the `TableClient` object to be injected into the class.
-
-```python
-def __init__(self, table_name=None, conn_str=None):
- self.table_name = table_name if table_name else os.getenv("table_name")
- self.conn_str = conn_str if conn_str else os.getenv("conn_str")
- self.table_service = TableServiceClient.from_connection_string(self.conn_str)
- self.table_client = self.table_service.get_table_client(self.table_name)
-```
-
-### Filter rows returned from a table
-
-To filter the rows returned from a table, you can pass an OData-style filter string to the `query_entities` method. For example, if you wanted to get all of the weather readings for Chicago between midnight July 1, 2021 and midnight July 2, 2021 (inclusive), you would pass in the following filter string.
-
-```odata
-PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00 AM' and RowKey le '2021-07-02 12:00 AM'
-```
-
-You can view related OData filter operators on the azure-data-tables website in the section [Writing Filters](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/tables/azure-data-tables/samples#writing-filters).
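-
-A filter string like the one above can also be passed to `query_entities` directly. The following is a minimal sketch (not part of the sample app) that assumes `table_client` is the `TableClient` created earlier; it uses the SDK's `parameters` option to substitute values into `@`-placeholders rather than formatting them into the string by hand.
-
-```python
-filter_str = "PartitionKey eq @station and RowKey ge @start and RowKey le @end"
-parameters = {
-    "station": "Chicago",
-    "start": "2021-07-01 12:00 AM",
-    "end": "2021-07-02 12:00 AM",
-}
-
-# Only rows matching the filter are returned.
-for entity in table_client.query_entities(filter_str, parameters=parameters):
-    print(entity["RowKey"], entity.get("Temperature"))
-```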
-
-When the `request.args` parameter is passed to the `query_entity` method in the `TableServiceHelper` class, the method creates a filter string for each non-null property value. It then creates a combined filter string by joining all of the values together with an "and" clause. This combined filter string is passed to the `query_entities` method on the `TableClient` object, and only rows matching the filter string are returned. You can use a similar approach in your code to construct suitable filter strings as required by your application.
-
-```python
-def query_entity(self, params):
- filters = []
- if params.get("partitionKey"):
- filters.append("PartitionKey eq '{}'".format(params.get("partitionKey")))
- if params.get("rowKeyDateStart") and params.get("rowKeyTimeStart"):
- filters.append("RowKey ge '{} {}'".format(params.get("rowKeyDateStart"), params.get("rowKeyTimeStart")))
- if params.get("rowKeyDateEnd") and params.get("rowKeyTimeEnd"):
- filters.append("RowKey le '{} {}'".format(params.get("rowKeyDateEnd"), params.get("rowKeyTimeEnd")))
- if params.get("minTemperature"):
- filters.append("Temperature ge {}".format(params.get("minTemperature")))
- if params.get("maxTemperature"):
- filters.append("Temperature le {}".format(params.get("maxTemperature")))
- if params.get("minPrecipitation"):
- filters.append("Precipitation ge {}".format(params.get("minPrecipitation")))
- if params.get("maxPrecipitation"):
- filters.append("Precipitation le {}".format(params.get("maxPrecipitation")))
- return list(self.table_client.query_entities(" and ".join(filters)))
-```
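-
-For example, a hypothetical call to this helper (using a plain dictionary in place of `request.args`, and assuming the `table_name` and `conn_str` environment variables from the `.env` file are set) might look like the following sketch:
-
-```python
-helper = TableServiceHelper()
-
-# Find warm Chicago readings; only the non-null criteria become filter clauses.
-rows = helper.query_entity({"partitionKey": "Chicago", "minTemperature": "75"})
-for row in rows:
-    print(row["RowKey"], row["Temperature"])
-```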
-
-### Insert data using a TableEntity object
-
-The simplest way to add data to a table is by using a `TableEntity` object. In this example, data is mapped from an input model object to a `TableEntity` object. The properties on the input object representing the weather station name and observation date/time are mapped to the `PartitionKey` and `RowKey` properties respectively, which together form a unique key for the row in the table. Then the additional properties on the input model object are mapped to dictionary properties on the `TableEntity` object. Finally, the `create_entity` method on the `TableClient` object is used to insert data into the table.
-
-Modify the `insert_entity` function in the example application to contain the following code.
-
-```python
-def insert_entity(self):
- entity = self.deserialize()
- return self.table_client.create_entity(entity)
-
-@staticmethod
-def deserialize():
- params = {key: request.form.get(key) for key in request.form.keys()}
- params["PartitionKey"] = params.pop("StationName")
- params["RowKey"] = "{} {}".format(params.pop("ObservationDate"), params.pop("ObservationTime"))
- return params
-```
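-
-Outside of the form handling shown above, the entity itself is just a mapping of property names to values. As a minimal sketch (assuming `table_client` is the `TableClient` created earlier), inserting a single reading looks like this:
-
-```python
-entity = {
-    "PartitionKey": "Chicago",          # weather station name
-    "RowKey": "2021-07-01 12:00 AM",    # observation date and time
-    "Temperature": 74.5,
-    "Precipitation": 0.1,
-}
-
-# Fails with an error if this PartitionKey/RowKey combination already exists.
-table_client.create_entity(entity)
-```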
-
-### Upsert data using a TableEntity object
-
-If you try to insert a row into a table with a partition key/row key combination that already exists in that table, you'll receive an error. For this reason, it's often preferable to use the `upsert_entity` method instead of the `create_entity` method when adding rows to a table. If the given partition key/row key combination already exists in the table, the `upsert_entity` method updates the existing row. Otherwise, the row is added to the table.
-
-```python
-def upsert_entity(self):
- entity = self.deserialize()
- return self.table_client.upsert_entity(entity)
-
-@staticmethod
-def deserialize():
- params = {key: request.form.get(key) for key in request.form.keys()}
- params["PartitionKey"] = params.pop("StationName")
- params["RowKey"] = "{} {}".format(params.pop("ObservationDate"), params.pop("ObservationTime"))
- return params
-```
-
-### Insert or upsert data with variable properties
-
-One of the advantages of using the Cosmos DB Table API is that if an object being loaded into a table contains any new properties, those properties are automatically added to the table and the values are stored in Cosmos DB. There's no need to run DDL statements like ALTER TABLE to add columns, as you would with a traditional database.
-
-This model gives your application flexibility when dealing with data sources that may add or modify what data needs to be captured over time or when different inputs provide different data to your application. In the sample application, we can simulate a weather station that sends not just the base weather data but also some additional values. When an object with these new properties is stored in the table for the first time, the corresponding properties (columns) will be automatically added to the table.
-
-To insert or upsert such an object using the Table API, map the properties of the expandable object into a `TableEntity` object and use the `create_entity` or `upsert_entity` methods on the `TableClient` object as appropriate.
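-
-For example, a minimal sketch of a reading that carries extra, previously unseen properties (again assuming `table_client` from earlier) could look like this; the new properties become columns automatically on first write:
-
-```python
-expandable_entity = {
-    "PartitionKey": "Chicago",
-    "RowKey": "2021-07-04 12:00 AM",
-    "Temperature": 81.2,
-    "Precipitation": 0.0,
-    # Properties not present in earlier rows are added to the table automatically.
-    "WindSpeed": 14.7,
-    "SolarRadiation": 543,
-}
-
-table_client.upsert_entity(expandable_entity)
-```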
-
-In the sample application, the `upsert_entity` function also handles inserting or upserting data with variable properties.
-
-```python
-def insert_entity(self):
- entity = self.deserialize()
- return self.table_client.create_entity(entity)
-
-def upsert_entity(self):
- entity = self.deserialize()
- return self.table_client.upsert_entity(entity)
-
-@staticmethod
-def deserialize():
- params = {key: request.form.get(key) for key in request.form.keys()}
- params["PartitionKey"] = params.pop("StationName")
- params["RowKey"] = "{} {}".format(params.pop("ObservationDate"), params.pop("ObservationTime"))
- return params
-```
-
-### Update an entity
-
-Entities can be updated by calling the `update_entity` method on the `TableClient` object.
-
-In the sample app, the updated values are deserialized into an entity object and passed to the `update_entity` method on the `TableClient` object, which saves the changes to the table.
-
-```python
-def update_entity(self):
- entity = self.update_deserialize()
- return self.table_client.update_entity(entity)
-
-@staticmethod
-def update_deserialize():
- params = {key: request.form.get(key) for key in request.form.keys()}
- params["PartitionKey"] = params.pop("StationName")
- params["RowKey"] = params.pop("ObservationDate")
- return params
-```
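-
-By default, the SDK's `update_entity` call merges the supplied properties into the existing row. If you want the row to be replaced entirely instead, the `azure-data-tables` package exposes an `UpdateMode` option; a minimal sketch (not part of the sample app, assuming `entity` and `table_client` from the code above) is shown below:
-
-```python
-from azure.data.tables import UpdateMode
-
-# Replace the whole row rather than merging properties into it.
-table_client.update_entity(entity, mode=UpdateMode.REPLACE)
-```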
-
-### Remove an entity
-
-To remove an entity from a table, call the `delete_entity` method on the `TableClient` object with the partition key and row key of the object.
-
-```python
-def delete_entity(self):
- partition_key = request.form.get("StationName")
- row_key = request.form.get("ObservationDate")
- return self.table_client.delete_entity(partition_key, row_key)
-```
-
-## 7 - Run the code
-
-Run the sample application to interact with the Cosmos DB Table API. The first time you run the application, there will be no data because the table is empty. Use any of the buttons at the top of the application to add data to the table.
--
-Selecting the **Insert using Table Entity** button opens a dialog allowing you to insert or upsert a new row using a `TableEntity` object.
--
-Selecting the **Insert using Expandable Data** button brings up a dialog that enables you to insert an object with custom properties, demonstrating how the Cosmos DB Table API automatically adds properties (columns) to the table when needed. Use the *Add Custom Field* button to add one or more new properties and demonstrate this capability.
--
-Use the **Insert Sample Data** button to load some sample data into your Cosmos DB Table.
--
-Select the **Filter Results** item in the top menu to be taken to the Filter Results page. On this page, fill out the filter criteria to demonstrate how a filter clause can be built and passed to the Cosmos DB Table API.
--
-## Clean up resources
-
-When you are finished with the sample application, you should remove all Azure resources related to this article from your Azure account. You can do this by deleting the resource group.
-
-### [Azure portal](#tab/azure-portal)
-
-A resource group can be deleted using the [Azure portal](https://portal.azure.com/) by following these steps.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Delete resource group step 1](./includes/create-table-python/remove-resource-group-1.md)] | :::image type="content" source="./media/create-table-python/azure-portal-remove-resource-group-1-240px.png" alt-text="A screenshot showing how to search for a resource group." lightbox="./media/create-table-python/azure-portal-remove-resource-group-1.png"::: |
-| [!INCLUDE [Delete resource group step 2](./includes/create-table-python/remove-resource-group-2.md)] | :::image type="content" source="./media/create-table-python/azure-portal-remove-resource-group-2-240px.png" alt-text="A screenshot showing the location of the Delete resource group button." lightbox="./media/create-table-python/azure-portal-remove-resource-group-2.png"::: |
-| [!INCLUDE [Delete resource group step 3](./includes/create-table-python/remove-resource-group-3.md)] | :::image type="content" source="./media/create-table-python/azure-portal-remove-resource-group-3-240px.png" alt-text="A screenshot showing the confirmation dialog for deleting a resource group." lightbox="./media/create-table-python/azure-portal-remove-resource-group-3.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az-group-delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
-
-```azurecli
-az group delete --name $RESOURCE_GROUP_NAME
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-To delete a resource group using Azure PowerShell, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
-
-```azurepowershell
-Remove-AzResourceGroup -Name $resourceGroupName
-```
---
-## Next steps
-
-In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the Table API.
-
-> [!div class="nextstepaction"]
-> [Import table data to the Table API](table-import.md)
cosmos-db How To Use Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-ruby.md
Title: Use Azure Cosmos DB Table API and Azure Table Storage with Ruby
-description: Store structured data in the cloud using Azure Table storage or the Azure Cosmos DB Table API.
+ Title: Use Azure Cosmos DB for Table and Azure Table Storage with Ruby
+description: Store structured data in the cloud using Azure Table storage or the Azure Cosmos DB for Table.
-+ ms.devlang: ruby+ Last updated 07/23/2020--++
-# How to use Azure Table Storage and the Azure Cosmos DB Table API with Ruby
+# How to use Azure Table Storage and the Azure Cosmos DB for Table with Ruby
[!INCLUDE [storage-selector-table-include](../../../includes/storage-selector-table-include.md)] [!INCLUDE [storage-table-applies-to-storagetable-and-cosmos](../../../includes/storage-table-applies-to-storagetable-and-cosmos.md)]
-This article shows you how to create tables, store your data, and perform CRUD operations on the data. Choose either the Azure Table service or the Azure Cosmos DB Table API. The samples described in this article are written in Ruby and uses the [Azure Storage Table Client Library for Ruby](https://github.com/azure/azure-storage-ruby/tree/master/table). The scenarios covered include create a table, delete a table, insert entities, and query entities from the table.
+This article shows you how to create tables, store your data, and perform CRUD operations on the data. Choose either the Azure Table service or the Azure Cosmos DB for Table. The samples described in this article are written in Ruby and use the [Azure Storage Table Client Library for Ruby](https://github.com/azure/azure-storage-ruby/tree/master/table). The scenarios covered include creating a table, deleting a table, inserting entities, and querying entities from a table.
## Create an Azure service account
require "azure/storage/table"
## Add your connection string
-You can either connect to the Azure storage account or the Azure Cosmos DB Table API account. Get the connection string based on the type of account you are using.
+You can either connect to the Azure storage account or the Azure Cosmos DB for Table account. Get the connection string based on the type of account you are using.
### Add an Azure Storage connection
azure_table_service.delete_table("testtable")
* [Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md) is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux. * [Ruby Developer Center](https://azure.microsoft.com/develop/ruby/)
-* [Microsoft Azure Storage Table Client Library for Ruby](https://github.com/azure/azure-storage-ruby/tree/master/table)
+* [Microsoft Azure Storage Table Client Library for Ruby](https://github.com/azure/azure-storage-ruby/tree/master/table)
cosmos-db Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/import.md
+
+ Title: Migrate existing data to an API for Table account in Azure Cosmos DB
+description: Learn how to migrate or import on-premises or cloud data to an API for Table account in Azure Cosmos DB.
++++++ Last updated : 03/03/2022+++
+# Migrate your data to an Azure Cosmos DB for Table account
+
+This tutorial provides instructions on importing data for use with the Azure Cosmos DB [API for Table](introduction.md). If you have data stored in Azure Table Storage, you can use the **Data migration tool** to import your data to the Azure Cosmos DB for Table.
++
+## Prerequisites
+
+* **Increase throughput:** The duration of your data migration depends on the amount of throughput you set up for an individual container or a set of containers. Be sure to increase the throughput for larger data migrations. After you've completed the migration, decrease the throughput to save costs.
+
+* **Create Azure Cosmos DB resources:** Before you start migrating the data, create all your tables from the Azure portal. If you're migrating to an Azure Cosmos DB account that has database-level throughput, make sure to provide a partition key when you create the Azure Cosmos DB tables.
+
+## Data migration tool
+
+> [!IMPORTANT]
+> Ownership of the Data Migration Tool has been transferred to a third party, who maintains this open-source tool. The tool is currently being updated to use the latest NuGet packages, so it doesn't currently work from the main branch. A fork of this tool that does work is available. You can learn more [here](https://github.com/Azure/azure-documentdb-datamigrationtool/issues/89).
+
+You can use the command-line data migration tool (dt.exe) in Azure Cosmos DB to import your existing Azure Table Storage data to an API for Table account.
+
+To migrate table data:
+
+1. Download the migration tool from [GitHub](https://github.com/azure/azure-documentdb-datamigrationtool/tree/archive).
+2. Run `dt.exe` by using the command-line arguments for your scenario. `dt.exe` takes a command in the following format:
+
+ ```bash
+ dt.exe [/<option>:<value>] /s:<source-name> [/s.<source-option>:<value>] /t:<target-name> [/t.<target-option>:<value>]
+ ```
+
+The supported options for this command are:
+
+* **/ErrorLog:** Optional. Name of the CSV file to redirect data transfer failures.
+* **/OverwriteErrorLog:** Optional. Overwrite the error log file.
+* **/ProgressUpdateInterval:** Optional, default is `00:00:01`. The time interval to refresh on-screen data transfer progress.
+* **/ErrorDetails:** Optional, default is `None`. Specifies that detailed error information should be displayed for the following errors: `None`, `Critical`, or `All`.
+* **/EnableCosmosTableLog:** Optional. Direct the log to an Azure Cosmos DB table account. If set, this defaults to the destination account connection string unless `/CosmosTableLogConnectionString` is also provided. This is useful if multiple instances of the tool are being run simultaneously.
+* **/CosmosTableLogConnectionString:** Optional. The connection string to direct the log to a remote Azure Cosmos DB table account.
+
+### Command-line source settings
+
+Use the following source options when you define Azure Table Storage as the source of the migration.
+
+* **/s:AzureTable:** Reads data from Table Storage.
+* **/s.ConnectionString:** Connection string for the table endpoint. You can retrieve this from the Azure portal.
+* **/s.LocationMode:** Optional, default is `PrimaryOnly`. Specifies which location mode to use when connecting to Table Storage: `PrimaryOnly`, `PrimaryThenSecondary`, `SecondaryOnly`, `SecondaryThenPrimary`.
+* **/s.Table:** Name of the Azure table.
+* **/s.InternalFields:** Set to `All` for table migration, because `RowKey` and `PartitionKey` are required for import.
+* **/s.Filter:** Optional. Filter string to apply.
+* **/s.Projection:** Optional. List of columns to select.
+
+To retrieve the source connection string when you import from Table Storage, open the Azure portal. Select **Storage accounts** > **Account** > **Access keys**, and copy the **Connection string**.
++
+### Command-line target settings
+
+Use the following target options when you define the Azure Cosmos DB for Table as the target of the migration.
+
+* **/t:TableAPIBulk:** Uploads data into the Azure Cosmos DB for Table in batches.
+* **/t.ConnectionString:** The connection string for the table endpoint.
+* **/t.TableName:** Specifies the name of the table to write to.
+* **/t.Overwrite:** Optional, default is `false`. Specifies if existing values should be overwritten.
+* **/t.MaxInputBufferSize:** Optional, default is `1GB`. Approximate estimate of input bytes to buffer before flushing data to sink.
+* **/t.Throughput:** Optional, service defaults if not specified. Specifies throughput to configure for table.
+* **/t.MaxBatchSize:** Optional, default is `2MB`. Specify the batch size in bytes.
+
+### Sample command: Source is Table Storage
+
+Here's a command-line sample showing how to import from Table Storage to the API for Table:
+
+```bash
+dt /s:AzureTable /s.ConnectionString:DefaultEndpointsProtocol=https;AccountName=<Azure Table storage account name>;AccountKey=<Account Key>;EndpointSuffix=core.windows.net /s.Table:<Table name> /t:TableAPIBulk /t.ConnectionString:DefaultEndpointsProtocol=https;AccountName=<Azure Cosmos DB account name>;AccountKey=<Azure Cosmos DB account key>;TableEndpoint=https://<Account name>.table.cosmos.azure.com:443 /t.TableName:<Table name> /t.Overwrite
+```
+## Next steps
+
+Learn how to query data by using the Azure Cosmos DB for Table.
+
+> [!div class="nextstepaction"]
+>[How to query data?](tutorial-query.md)
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/introduction.md
Title: Introduction to the Azure Cosmos DB Table API
+ Title: Introduction to the Azure Cosmos DB for Table
description: Learn how you can use Azure Cosmos DB to store and query massive volumes of key-value data with low latency by using the Azure Tables API.--++ -++ Last updated 11/03/2021-
-# Introduction to Azure Cosmos DB: Table API
+# Introduction to Azure Cosmos DB: API for Table
-[Azure Cosmos DB](introduction.md) provides the Table API for applications that are written for Azure Table storage and that need premium capabilities like:
+[Azure Cosmos DB](introduction.md) provides the API for Table for applications that are written for Azure Table storage and that need premium capabilities like:
* [Turnkey global distribution](../distribute-data-globally.md). * [Dedicated throughput](../partitioning-overview.md) worldwide (when using provisioned throughput).
Last updated 11/03/2021
* Guaranteed high availability. * Automatic secondary indexing.
-[Azure Tables SDKs](https://devblogs.microsoft.com/azure-sdk/announcing-the-new-azure-data-tables-libraries/) are available for .NET, Java, Python, Node.js, and Go. These SDKs can be used to target either Table Storage or Cosmos DB Tables. Applications written for Azure Table storage using the Azure Tables SDKs can be migrated to the Azure Cosmos DB Table API with no code changes to take advantage of premium capabilities.
+[Azure Tables SDKs](https://devblogs.microsoft.com/azure-sdk/announcing-the-new-azure-data-tables-libraries/) are available for .NET, Java, Python, Node.js, and Go. These SDKs can be used to target either Table Storage or Azure Cosmos DB Tables. Applications written for Azure Table storage using the Azure Tables SDKs can be migrated to the Azure Cosmos DB for Table with no code changes to take advantage of premium capabilities.
> [!NOTE]
-> The [serverless capacity mode](../serverless.md) is now available on Azure Cosmos DB's Table API.
+> The [serverless capacity mode](../serverless.md) is now available on Azure Cosmos DB's API for Table.
> [!IMPORTANT]
-> The .NET Azure Tables SDK [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables/) offers latest features supported by the Table API. The Azure Tables client library can seamlessly target either Azure Table storage or Azure Cosmos DB table service endpoints with no code changes.
+> The .NET Azure Tables SDK [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables/) offers the latest features supported by the API for Table. The Azure Tables client library can seamlessly target either Azure Table storage or Azure Cosmos DB table service endpoints with no code changes.
## Table offerings
-If you currently use Azure Table Storage, you gain the following benefits by moving to the Azure Cosmos DB Table API:
+If you currently use Azure Table Storage, you gain the following benefits by moving to the Azure Cosmos DB for Table:
-| Feature | Azure Table storage | Azure Cosmos DB Table API |
+| Feature | Azure Table storage | Azure Cosmos DB for Table |
| | | | | Latency | Fast, but no upper bounds on latency. | Single-digit millisecond latency for reads and writes, backed with <10 ms latency for reads and writes at the 99th percentile, at any scale, anywhere in the world. | | Throughput | Variable throughput model. Tables have a scalability limit of 20,000 operations/s. | Highly scalable with [dedicated reserved throughput per table](../request-units.md) that's backed by SLAs. Accounts have no upper limit on throughput and support >10 million operations/s per table. |
If you currently use Azure Table Storage, you gain the following benefits by mov
## Get started
-Create an Azure Cosmos DB account in the [Azure portal](https://portal.azure.com). Then get started with our [Quick Start for Table API by using .NET](create-table-dotnet.md).
+Create an Azure Cosmos DB account in the [Azure portal](https://portal.azure.com). Then get started with our [Quick Start for API for Table by using .NET](quickstart-dotnet.md).
## Next steps Here are a few pointers to get you started:
-* [Build a .NET application by using the Table API](create-table-dotnet.md)
-* [Develop with the Table API in .NET](tutorial-develop-table-dotnet.md)
-* [Query table data by using the Table API](tutorial-query-table.md)
-* [Learn how to set up Azure Cosmos DB global distribution by using the Table API](tutorial-global-distribution-table.md)
+* [Build a .NET application by using the API for Table](quickstart-dotnet.md)
+* [Develop with the API for Table in .NET](tutorial-develop-table-dotnet.md)
+* [Query table data by using the API for Table](tutorial-query.md)
+* [Learn how to set up Azure Cosmos DB global distribution by using the API for Table](tutorial-global-distribution.md)
* [Azure Cosmos DB Table .NET Standard SDK](dotnet-standard-sdk.md) * [Azure Cosmos DB Table .NET SDK](dotnet-sdk.md) * [Azure Cosmos DB Table Java SDK](java-sdk.md)
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/manage-with-bicep.md
Title: Create and manage Azure Cosmos DB Table API with Bicep
-description: Use Bicep to create and configure Azure Cosmos DB Table API.
+ Title: Create and manage Azure Cosmos DB for Table with Bicep
+description: Use Bicep to create and configure Azure Cosmos DB for Table.
-++ Last updated 09/13/2021
-# Manage Azure Cosmos DB Table API resources using Bicep
+# Manage Azure Cosmos DB for Table resources using Bicep
-In this article, you learn how to use Bicep to help deploy and manage your Azure Cosmos DB Table API accounts and tables.
+In this article, you learn how to use Bicep to help deploy and manage your Azure Cosmos DB for Table accounts and tables.
-This article has examples for Table API accounts only. You can also find Bicep samples for [Cassandra](../cassandr) articles.
+This article has examples for API for Table accounts only. You can also find Bicep samples for [Cassandra](../cassandr) articles.
> [!IMPORTANT] > > * Account names are limited to 44 characters, all lowercase. > * To change the throughput values, redeploy the template with updated RU/s.
-> * When you add or remove locations to an Azure Cosmos account, you can't simultaneously modify other properties. These operations must be done separately.
+> * When you add or remove locations to an Azure Cosmos DB account, you can't simultaneously modify other properties. These operations must be done separately.
To create any of the Azure Cosmos DB resources below, copy the following example into a new bicep file. You can optionally create a parameters file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Azure Resource Manager templates including, [Azure CLI](../../azure-resource-manager/bicep/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/bicep/deploy-powershell.md) and [Cloud Shell](../../azure-resource-manager/bicep/deploy-cloud-shell.md). > [!TIP]
-> To enable shared throughput when using Table API, enable account-level throughput in the Azure portal. Account-level shared throughput cannot be set using Bicep.
+> To enable shared throughput when using API for Table, enable account-level throughput in the Azure portal. Account-level shared throughput cannot be set using Bicep.
<a id="create-autoscale"></a>
-## Azure Cosmos account for Table with autoscale throughput
+## Azure Cosmos DB account for Table with autoscale throughput
-Create an Azure Cosmos account for Table API with one table with autoscale throughput.
+Create an Azure Cosmos DB account for API for Table with a single table that uses autoscale throughput.
:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.documentdb/cosmosdb-table-autoscale/main.bicep"::: <a id="create-manual"></a>
-## Azure Cosmos account for Table with standard provisioned throughput
+## Azure Cosmos DB account for Table with standard provisioned throughput
-Create an Azure Cosmos account for Table API with one table with standard throughput.
+Create an Azure Cosmos DB account for API for Table with a single table that uses standard provisioned throughput.
:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.documentdb/cosmosdb-table/main.bicep":::
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/powershell-samples.md
Title: Azure PowerShell samples for Azure Cosmos DB Table API
-description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB Table API
+ Title: Azure PowerShell samples for Azure Cosmos DB for Table
+description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB for Table
-++ Last updated 01/20/2021
-# Azure PowerShell samples for Azure Cosmos DB Table API
+# Azure PowerShell samples for Azure Cosmos DB for Table
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Azure Cosmos DB from our GitHub repository, [Azure Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
## Common Samples |Task | Description | |||
-|[Update an account](../scripts/powershell/common/account-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's default consistency level. |
-|[Update an account's regions](../scripts/powershell/common/update-region.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's regions. |
-|[Change failover priority or trigger failover](../scripts/powershell/common/failover-priority-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Change the regional failover priority of an Azure Cosmos account or trigger a manual failover. |
+|[Update an account](../scripts/powershell/common/account-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update an Azure Cosmos DB account's default consistency level. |
+|[Update an account's regions](../scripts/powershell/common/update-region.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update an Azure Cosmos DB account's regions. |
+|[Change failover priority or trigger failover](../scripts/powershell/common/failover-priority-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Change the regional failover priority of an Azure Cosmos DB account or trigger a manual failover. |
|[Account keys or connection strings](../scripts/powershell/common/keys-connection-strings.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Get primary and secondary keys, connection strings or regenerate an account key of an Azure Cosmos DB account. |
-|[Create a Cosmos Account with IP Firewall](../scripts/powershell/common/firewall-create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account with IP Firewall enabled. |
+|[Create an Azure Cosmos DB Account with IP Firewall](../scripts/powershell/common/firewall-create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account with IP Firewall enabled. |
|||
-## Table API Samples
+## API for Table Samples
|Task | Description | |||
-|[Create an account and table](../scripts/powershell/table/create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account and table. |
-|[Create an account and table with autoscale](../scripts/powershell/table/autoscale.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account and table autoscale. |
+|[Create an account and table](../scripts/powershell/table/create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos DB account and table. |
+|[Create an account and table with autoscale](../scripts/powershell/table/autoscale.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos DB account and a table with autoscale throughput. |
|[List or get tables](../scripts/powershell/table/list-get.md?toc=%2fpowershell%2fmodule%2ftoc.json)| List or get tables. | |[Perform throughput operations](../scripts/powershell/table/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Perform throughput operations for a table including get, update and migrate between autoscale and standard throughput. | |[Lock resources from deletion](../scripts/powershell/table/lock.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Prevent resources from being deleted with resource locks. |
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-dotnet.md
+
+ Title: Quickstart - Azure Cosmos DB for Table for .NET
+description: Learn how to build a .NET app to manage Azure Cosmos DB for Table resources in this quickstart.
++++
+ms.devlang: dotnet
+ Last updated : 08/22/2022+++
+# Quickstart: Azure Cosmos DB for Table for .NET
++
+> [!div class="op_single_selector"]
+>
+> * [.NET](quickstart-dotnet.md)
+> * [Java](quickstart-java.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Python](quickstart-python.md)
+>
+
+This quickstart shows how to get started with the Azure Cosmos DB for Table from a .NET application. The Azure Cosmos DB for Table is a schemaless data store allowing applications to store structured NoSQL data in the cloud. You'll learn how to create tables and rows, and how to perform basic tasks within your Azure Cosmos DB resource by using the [Azure.Data.Tables Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/).
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-table-api-dotnet-samples) are available on GitHub as a .NET project.
+
+[API for Table reference documentation](../../storage/tables/index.yml) | [Azure.Data.Tables Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/)
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* [.NET 6.0](https://dotnet.microsoft.com/download)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+
+### Prerequisite check
+
+* In a terminal or command window, run ``dotnet --list-sdks`` to check that .NET 6.x is one of the available versions.
+* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
+
+## Setting up
+
+This section walks you through how to create an Azure Cosmos DB account and set up a project that uses the API for Table NuGet packages.
+
+### Create an Azure Cosmos DB account
+
+This quickstart will create a single Azure Cosmos DB account using the API for Table.
+
+#### [Azure CLI](#tab/azure-cli)
++
+#### [PowerShell](#tab/azure-powershell)
++
+#### [Portal](#tab/azure-portal)
++++
+### Get API for Table connection string
+
+#### [Azure CLI](#tab/azure-cli)
++
+#### [PowerShell](#tab/azure-powershell)
++
+#### [Portal](#tab/azure-portal)
++++
+### Create a new .NET app
+
+Create a new .NET application in an empty folder using your preferred terminal. Use the [``dotnet new console``](/dotnet/core/tools/dotnet-new) command to create a new console app.
+
+```console
+dotnet new console --output <app-name>
+```
+
+### Install the NuGet package
+
+Add the [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables) NuGet package to the new .NET project. Use the [``dotnet add package``](/dotnet/core/tools/dotnet-add-package) command specifying the name of the NuGet package.
+
+```console
+dotnet add package Azure.Data.Tables
+```
+
+### Configure environment variables
++
+## Code examples
+
+* [Authenticate the client](#authenticate-the-client)
+* [Create a table](#create-a-table)
+* [Create an item](#create-an-item)
+* [Get an item](#get-an-item)
+* [Query items](#query-items)
+
+The sample code described in this article creates a table named ``adventureworks``. Each table row contains the details of a product such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
+
+You'll use the following API for Table classes to interact with these resources:
+
+* [``TableServiceClient``](/dotnet/api/azure.data.tables.tableserviceclient) - This class provides methods to perform service level operations with Azure Cosmos DB for Table.
+* [``TableClient``](/dotnet/api/azure.data.tables.tableclient) - This class allows you to interact with tables hosted in the Azure Cosmos DB table API.
+* [``TableEntity``](/dotnet/api/azure.data.tables.tableentity) - This class is a reference to a row in a table that allows you to manage properties and column data.
+
+### Authenticate the client
+
+From the project directory, open the *Program.cs* file. In your editor, add a using directive for ``Azure.Data.Tables``.
++
+Define a new instance of the ``TableServiceClient`` class using the constructor, and [``Environment.GetEnvironmentVariable``](/dotnet/api/system.environment.getenvironmentvariables) to read the connection string you set earlier.
++
+### Create a table
+
+Retrieve an instance of the `TableClient` using the `TableServiceClient` class. Use the [``TableClient.CreateIfNotExistsAsync``](/dotnet/api/azure.data.tables.tableclient.createifnotexistsasync) method on the `TableClient` to create a new table if it doesn't already exist. This method will return a reference to the existing or newly created table.
++
+### Create an item
+
+The easiest way to create a new item in a table is to create a class that implements the [``ITableEntity``](/dotnet/api/azure.data.tables.itableentity) interface. You can then add your own properties to the class to populate columns of data in that table row.
++
+Create an item in the collection using the `Product` class by calling [``TableClient.AddEntityAsync<T>``](/dotnet/api/azure.data.tables.tableclient.addentityasync).
++
+### Get an item
+
+You can retrieve a specific item from a table using the [``TableEntity.GetEntityAsync<T>``](/dotnet/api/azure.data.tables.tableclient.getentity) method. Provide the `partitionKey` and `rowKey` as parameters to identify the correct row to perform a quick *point read* of that item.
++
+### Query items
+
+After you insert an item, you can also run a query to get all items that match a specific filter by using the `TableClient.Query<T>` method. This example filters products by category using [Linq](/dotnet/standard/linq) syntax, which is a benefit of using typed `ITableEntity` models like the `Product` class.
+
+> [!NOTE]
+> You can also query items using [OData](/rest/api/storageservices/querying-tables-and-entities) syntax. You can see an example of this approach in the [Query Data](./tutorial-query.md) tutorial.
++
+## Run the code
+
+This app creates an Azure Cosmos DB Table API table. The example then creates an item and reads the same item back. Finally, the example creates a second item and performs a query that should return multiple items. With each step, the example outputs metadata to the console about the steps it has performed.
+
+To run the app, use a terminal to navigate to the application directory and run the application.
+
+```dotnetcli
+dotnet run
+```
+
+The output of the app should be similar to this example:
+
+```output
+Single product name:
+Yamba Surfboard
+Multiple products:
+Yamba Surfboard
+Sand Surfboard
+```
+
+## Clean up resources
+
+When you no longer need the Azure Cosmos DB for Table account, you can delete the corresponding resource group.
+
+### [Azure CLI](#tab/azure-cli)
+
+Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delete the resource group.
+
+```azurecli-interactive
+az group delete --name $resourceGroupName
+```
+
+### [PowerShell](#tab/azure-powershell)
+
+Use the [``Remove-AzResourceGroup``](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to delete the resource group.
+
+```azurepowershell-interactive
+$parameters = @{
+ Name = $RESOURCE_GROUP_NAME
+}
+Remove-AzResourceGroup @parameters
+```
+
+### [Portal](#tab/azure-portal)
+
+1. Navigate to the resource group you previously created in the Azure portal.
+
+ > [!TIP]
+ > In this quickstart, we recommended the name ``msdocs-cosmos-quickstart-rg``.
+1. Select **Delete resource group**.
+
+ :::image type="content" source="media/dotnet-quickstart/delete-resource-group-option.png" alt-text="Screenshot of the Delete resource group option in the navigation bar for a resource group.":::
+
+1. On the **Are you sure you want to delete** dialog, enter the name of the resource group, and then select **Delete**.
+
+ :::image type="content" source="media/dotnet-quickstart/delete-confirmation.png" alt-text="Screenshot of the delete confirmation page for a resource group.":::
+++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB for Table account, create a table, and manage entries using the .NET SDK. You can now dive deeper into the SDK to learn how to perform more advanced data queries and management tasks in your Azure Cosmos DB for Table resources.
+
+> [!div class="nextstepaction"]
+> [Get started with Azure Cosmos DB for Table and .NET](./how-to-dotnet-get-started.md)
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-java.md
+
+ Title: Use the API for Table and Java to build an app - Azure Cosmos DB
+description: This quickstart shows how to use the Azure Cosmos DB for Table to create an application with the Azure portal and Java
++++
+ms.devlang: java
+ Last updated : 05/28/2020+++
+# Quickstart: Build an API for Table app with the Java SDK and Azure Cosmos DB
++
+> [!div class="op_single_selector"]
+>
+> * [.NET](quickstart-dotnet.md)
+> * [Java](quickstart-java.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Python](quickstart-python.md)
+>
+
+This quickstart shows how to access the Azure Cosmos DB [Tables API](introduction.md) from a Java application. The Azure Cosmos DB Tables API is a schemaless data store allowing applications to store structured NoSQL data in the cloud. Because data is stored in a schemaless design, new properties (columns) are automatically added to the table when an object with a new attribute is added to the table.
+
+Java applications can access the Azure Cosmos DB Tables API using the [azure-data-tables](https://search.maven.org/artifact/com.azure/azure-data-tables) client library.
+
+## Prerequisites
+
+The sample application is written in [Spring Boot 2.6.4](https://spring.io/projects/spring-boot). You can use either [Visual Studio Code](https://code.visualstudio.com/) or [IntelliJ IDEA](https://www.jetbrains.com/idea/) as an IDE.
++
+## Sample application
+
+The sample application for this tutorial may be cloned or downloaded from the repository [https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-java](https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-java). Both a starter and completed app are included in the sample repository.
+
+```bash
+git clone https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-java
+```
+
+The sample application uses weather data as an example to demonstrate the capabilities of the Tables API. Objects representing weather observations are stored and retrieved using the API for Table, including storing objects with additional properties to demonstrate the schemaless capabilities of the Tables API.
++
+## 1 - Create an Azure Cosmos DB account
+
+You first need to create an Azure Cosmos DB Tables API account that will contain the table(s) used in your application. This can be done using the Azure portal, Azure CLI, or Azure PowerShell.
+
+### [Azure portal](#tab/azure-portal)
+
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create an Azure Cosmos DB account.
+
+| Instructions | Screenshot |
+|:|--:|
+| [!INCLUDE [Create Azure Cosmos DB db account step 1](./includes/quickstart-jav)] | :::image type="content" source="./media/quickstart-java/azure-portal-create-cosmos-db-account-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Azure Cosmos DB accounts in Azure." lightbox="./media/quickstart-java/azure-portal-create-cosmos-db-account-table-api-1.png"::: |
+| [!INCLUDE [Create Azure Cosmos DB db account step 2](./includes/quickstart-jav)] | :::image type="content" source="./media/quickstart-java/azure-portal-create-cosmos-db-account-table-api-2-240px.png" alt-text="A screenshot showing the Create button location on the Azure Cosmos DB accounts page in Azure." lightbox="./media/quickstart-java/azure-portal-create-cosmos-db-account-table-api-2.png"::: |
+| [!INCLUDE [Create Azure Cosmos DB db account step 3](./includes/quickstart-jav)] | :::image type="content" source="./media/quickstart-java/azure-portal-create-cosmos-db-account-table-api-3-240px.png" alt-text="A screenshot showing the Azure Table option as the correct option to select." lightbox="./media/quickstart-java/azure-portal-create-cosmos-db-account-table-api-3.png"::: |
+| [!INCLUDE [Create Azure Cosmos DB db account step 4](./includes/quickstart-jav)] | :::image type="content" source="./media/quickstart-java/azure-portal-create-cosmos-db-account-table-api-4-240px.png" alt-text="A screenshot showing how to fill out the fields on the Azure Cosmos DB Account creation page." lightbox="./media/quickstart-java/azure-portal-create-cosmos-db-account-table-api-4.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+Azure Cosmos DB accounts are created using the [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Azure Cosmos DB account. As all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
+
+Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
+
+Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
+
+It typically takes several minutes for the Azure Cosmos DB account creation process to complete.
+
+```azurecli
+LOCATION='eastus'
+RESOURCE_GROUP_NAME='rg-msdocs-tables-sdk-demo'
+COSMOS_ACCOUNT_NAME='cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
+COSMOS_TABLE_NAME='WeatherData'
+
+az group create \
+ --location $LOCATION \
+ --name $RESOURCE_GROUP_NAME
+
+az cosmosdb create \
+ --name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --capabilities EnableTable
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet. You must include the `-ApiKind "Table"` option to enable table storage within your Azure Cosmos DB account. As all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
+
+Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
+
+Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-az-ps).
+
+It typically takes several minutes for the Azure Cosmos DB account creation process to complete.
+
+```azurepowershell
+$location = 'eastus'
+$resourceGroupName = 'rg-msdocs-tables-sdk-demo'
+$cosmosAccountName = 'cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
+
+# Create a resource group
+New-AzResourceGroup `
+ -Location $location `
+ -Name $resourceGroupName
+
+# Create an Azure Cosmos DB
+New-AzCosmosDBAccount `
+ -Name $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName `
+ -Location $location `
+ -ApiKind "Table"
+```
+++
+## 2 - Create a table
+
+Next, you need to create a table within your Azure Cosmos DB account for your application to use. Unlike a traditional database, you only need to specify the name of the table, not the properties (columns) in the table. As data is loaded into your table, the properties (columns) will be automatically created as needed.
+
+### [Azure portal](#tab/azure-portal)
+
+In the [Azure portal](https://portal.azure.com/), complete the following steps to create a table inside your Azure Cosmos DB account.
+
+| Instructions | Screenshot |
+|:--|--:|
+| [!INCLUDE [Create Azure Cosmos DB db table step 1](./includes/quickstart-jav)] | :::image type="content" source="./media/quickstart-java/azure-portal-create-cosmos-db-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find your Azure Cosmos DB account." lightbox="./media/quickstart-java/azure-portal-create-cosmos-db-table-api-1.png"::: |
+| [!INCLUDE [Create Azure Cosmos DB db table step 2](./includes/quickstart-jav)] | :::image type="content" source="./media/quickstart-java/azure-portal-create-cosmos-db-table-api-2-240px.png" alt-text="A screenshot showing the location of the Add Table button." lightbox="./media/quickstart-java/azure-portal-create-cosmos-db-table-api-2.png"::: |
+| [!INCLUDE [Create Azure Cosmos DB db table step 3](./includes/quickstart-jav)] | :::image type="content" source="./media/quickstart-java/azure-portal-create-cosmos-db-table-api-3-240px.png" alt-text="A screenshot showing the New Table dialog box for an Azure Cosmos DB table." lightbox="./media/quickstart-java/azure-portal-create-cosmos-db-table-api-3.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+Tables in Azure Cosmos DB are created using the [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) command.
+
+```azurecli
+COSMOS_TABLE_NAME='WeatherData'
+
+az cosmosdb table create \
+ --account-name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_TABLE_NAME \
+ --throughput 400
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+Tables in Azure Cosmos DB are created using the [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) cmdlet.
+
+```azurepowershell
+$cosmosTableName = 'WeatherData'
+
+# Create the table for the application to use
+New-AzCosmosDBTable `
+ -Name $cosmosTableName `
+ -AccountName $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName
+```
+++
+## 3 - Get Azure Cosmos DB connection string
+
+To access your table(s) in Azure Cosmos DB, your app needs the table connection string for your Azure Cosmos DB account. The connection string can be retrieved using the Azure portal, Azure CLI, or Azure PowerShell.
+
+### [Azure portal](#tab/azure-portal)
+
+| Instructions | Screenshot |
+|:--|--:|
+| [!INCLUDE [Get Azure Cosmos DB db table connection string step 1](./includes/quickstart-jav)] | :::image type="content" source="./media/quickstart-java/azure-portal-cosmos-db-table-connection-string-1-240px.png" alt-text="A screenshot showing the location of the connection strings link on the Azure Cosmos DB page." lightbox="./media/quickstart-java/azure-portal-cosmos-db-table-connection-string-1.png"::: |
+| [!INCLUDE [Get Azure Cosmos DB db table connection string step 2](./includes/quickstart-jav)] | :::image type="content" source="./media/quickstart-java/azure-portal-cosmos-db-table-connection-string-2-240px.png" alt-text="A screenshot showing which connection string to select and use in your application." lightbox="./media/quickstart-java/azure-portal-cosmos-db-table-connection-string-2.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+To get the primary table storage connection string using Azure CLI, use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az_cosmosdb_keys_list) command with the option `--type connection-strings`. This command uses a [JMESPath query](/cli/azure/query-azure-cli) to display only the primary table connection string.
+
+```azurecli
+# This gets the primary Table connection string
+az cosmosdb keys list \
+ --type connection-strings \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_ACCOUNT_NAME \
+ --query "connectionStrings[?description=='Primary Table Connection String'].connectionString" \
+ --output tsv
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To get the primary table storage connection string using Azure PowerShell, use the [Get-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+```azurepowershell
+# This gets the primary Table connection string
+$(Get-AzCosmosDBAccountKey `
+ -ResourceGroupName $resourceGroupName `
+ -Name $cosmosAccountName `
+ -Type "ConnectionStrings")."Primary Table Connection String"
+```
+++
+The connection string for your Azure Cosmos DB account is considered an app secret and must be protected like any other app secret or password. This example uses the POM to store the connection string during development and make it available to the application.
+
+```xml
+<profiles>
+ <profile>
+ <id>local</id>
+ <properties>
+ <azure.tables.connection.string>
+ <![CDATA[YOUR-DATA-TABLES-SERVICE-CONNECTION-STRING]]>
+ </azure.tables.connection.string>
+ <azure.tables.tableName>WeatherData</azure.tables.tableName>
+ </properties>
+ <activation>
+ <activeByDefault>true</activeByDefault>
+ </activation>
+ </profile>
+</profiles>
+```
+
+## 4 - Include the azure-data-tables package
+
+To access the Azure Cosmos DB Tables API from a Java application, include the [azure-data-tables](https://search.maven.org/artifact/com.azure/azure-data-tables) package.
+
+```xml
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-data-tables</artifactId>
+ <version>12.2.1</version>
+</dependency>
+```
+++
+## 5 - Configure the Table client in TableServiceConfig.java
+
+The Azure SDK communicates with Azure using client objects to execute different operations against Azure. The [TableClient](/java/api/com.azure.data.tables.tableclient) object is the object used to communicate with the Azure Cosmos DB Tables API.
+
+An application will typically create a single [TableClient](/java/api/com.azure.data.tables.tableclient) object per table to be used throughout the application. To accomplish this, it's recommended to declare a method that produces the [TableClient](/java/api/com.azure.data.tables.tableclient) object as a Spring bean so that the Spring container manages it as a singleton.
+
+In the `TableServiceConfig.java` file of the application, edit the `tableClientConfiguration()` method to match the following code snippet:
+
+```java
+@Configuration
+public class TableServiceConfiguration {
+
+ private static String TABLE_NAME;
+
+ private static String CONNECTION_STRING;
+
+ @Value("${azure.tables.connection.string}")
+ public void setConnectionStringStatic(String connectionString) {
+ TableServiceConfiguration.CONNECTION_STRING = connectionString;
+ }
+
+ @Value("${azure.tables.tableName}")
+ public void setTableNameStatic(String tableName) {
+ TableServiceConfiguration.TABLE_NAME = tableName;
+ }
+
+ @Bean
+ public TableClient tableClientConfiguration() {
+ return new TableClientBuilder()
+ .connectionString(CONNECTION_STRING)
+ .tableName(TABLE_NAME)
+ .buildClient();
+ }
+
+}
+```
+
+You'll also need to add the following import statements at the top of the `TableServiceConfig.java` file.
+
+```java
+import com.azure.data.tables.TableClient;
+import com.azure.data.tables.TableClientBuilder;
+```
+
+## 6 - Implement Azure Cosmos DB table operations
+
+All Azure Cosmos DB table operations for the sample app are implemented in the `TableServiceImpl` class located in the *Services* directory. You'll need to import the following classes from the `com.azure.data.tables` SDK package.
+
+```java
+import com.azure.data.tables.TableClient;
+import com.azure.data.tables.models.ListEntitiesOptions;
+import com.azure.data.tables.models.TableEntity;
+import com.azure.data.tables.models.TableTransactionAction;
+import com.azure.data.tables.models.TableTransactionActionType;
+```
+
+At the start of the `TableServiceImpl` class, add a member variable for the [TableClient](/java/api/com.azure.data.tables.tableclient) object, annotated with `@Autowired` so that the [TableClient](/java/api/com.azure.data.tables.tableclient) bean is injected into the class.
+
+```java
+@Autowired
+private TableClient tableClient;
+```
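+
+Field injection via `@Autowired` is shown above for brevity. If you prefer constructor injection, a minimal sketch might look like the following (the class name and `@Service` annotation are illustrative, not taken verbatim from the sample):
+
+```java
+import com.azure.data.tables.TableClient;
+import org.springframework.stereotype.Service;
+
+// Sketch: constructor injection of the TableClient bean declared in TableServiceConfig.java.
+@Service
+public class TableServiceImpl {
+
+    private final TableClient tableClient;
+
+    // With a single constructor, Spring injects the TableClient bean automatically.
+    public TableServiceImpl(TableClient tableClient) {
+        this.tableClient = tableClient;
+    }
+}
+```
+
+Constructor injection makes the dependency explicit and allows the field to be declared `final`.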
+
+### Get rows from a table
+
+The [TableClient](/java/api/com.azure.data.tables.tableclient) class contains a method named [listEntities](/java/api/com.azure.data.tables.tableclient.listentities), which allows you to select rows from the table. In this example, since no parameters are being passed to the method, all rows will be selected from the table.
+
+The entities are returned as [TableEntity](/java/api/com.azure.data.tables.models.tableentity) objects, so the `listEntities` method returns a `PagedIterable<TableEntity>` collection as its results.
+
+```java
+public List<WeatherDataModel> retrieveAllEntities() {
+ List<WeatherDataModel> modelList = tableClient.listEntities().stream()
+ .map(WeatherDataUtils::mapTableEntityToWeatherDataModel)
+ .collect(Collectors.toList());
+ return Collections.unmodifiableList(WeatherDataUtils.filledValue(modelList));
+}
+```
+
+The [TableEntity](/java/api/com.azure.data.tables.models.tableentity) class defined in the `com.azure.data.tables.models` package has properties for the partition key and row key values in the table. Together, these two values form a unique key for the row in the table. In this example application, the name of the weather station (city) is stored in the partition key and the date/time of the observation is stored in the row key. All other properties (temperature, humidity, wind speed) are stored in a dictionary in the `TableEntity` object.
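+
+For example, a minimal sketch that reads the key columns and one weather property straight from the returned entities (the method name and the `Temperature` property are illustrative for this sample's data):
+
+```java
+// Sketch: read the key columns and one dynamic property from each returned entity.
+public void printObservations(TableClient tableClient) {
+    for (TableEntity entity : tableClient.listEntities()) {
+        String stationName = entity.getPartitionKey();                  // weather station (city)
+        String observationDate = entity.getRowKey();                    // observation date/time
+        Object temperature = entity.getProperties().get("Temperature"); // null if the column isn't present
+        System.out.printf("%s %s temperature=%s%n", stationName, observationDate, temperature);
+    }
+}
+```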
+
+It's common practice to map a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object to an object of your own definition. The sample application defines a class `WeatherDataModel` in the *Models* directory for this purpose. This class has properties for the station name and observation date that the partition key and row key will map to, providing more meaningful property names for these values. It then uses a dictionary to store all the other properties on the object. This is a common pattern when working with Table storage since a row can have any number of arbitrary properties and we want our model objects to be able to capture all of them. This class also contains methods to list the properties on the class.
+
+```java
+public class WeatherDataModel {
+
+ public WeatherDataModel(String stationName, String observationDate, OffsetDateTime timestamp, String etag) {
+ this.stationName = stationName;
+ this.observationDate = observationDate;
+ this.timestamp = timestamp;
+ this.etag = etag;
+ }
+
+ private String stationName;
+
+ private String observationDate;
+
+ private OffsetDateTime timestamp;
+
+ private String etag;
+
+ private Map<String, Object> propertyMap = new HashMap<String, Object>();
+
+ public String getStationName() {
+ return stationName;
+ }
+
+ public void setStationName(String stationName) {
+ this.stationName = stationName;
+ }
+
+ public String getObservationDate() {
+ return observationDate;
+ }
+
+ public void setObservationDate(String observationDate) {
+ this.observationDate = observationDate;
+ }
+
+ public OffsetDateTime getTimestamp() {
+ return timestamp;
+ }
+
+ public void setTimestamp(OffsetDateTime timestamp) {
+ this.timestamp = timestamp;
+ }
+
+ public String getEtag() {
+ return etag;
+ }
+
+ public void setEtag(String etag) {
+ this.etag = etag;
+ }
+
+ public Map<String, Object> getPropertyMap() {
+ return propertyMap;
+ }
+
+ public void setPropertyMap(Map<String, Object> propertyMap) {
+ this.propertyMap = propertyMap;
+ }
+}
+```
+
+The `mapTableEntityToWeatherDataModel` method is used to map a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object to a `WeatherDataModel` object. The `mapTableEntityToWeatherDataModel` method directly maps the `PartitionKey`, `RowKey`, `Timestamp`, and `Etag` properties, and then uses `properties.keySet()` to iterate over the remaining properties in the `TableEntity` object and map those to the `WeatherDataModel` object, skipping the properties that have already been directly mapped.
+
+Edit the code in the `mapTableEntityToWeatherDataModel` method to match the following code block.
+
+```java
+public static WeatherDataModel mapTableEntityToWeatherDataModel(TableEntity entity) {
+ WeatherDataModel observation = new WeatherDataModel(
+ entity.getPartitionKey(), entity.getRowKey(),
+ entity.getTimestamp(), entity.getETag());
+ rearrangeEntityProperties(observation.getPropertyMap(), entity.getProperties());
+ return observation;
+}
+
+private static void rearrangeEntityProperties(Map<String, Object> target, Map<String, Object> source) {
+ Constants.DEFAULT_LIST_OF_KEYS.forEach(key -> {
+ if (source.containsKey(key)) {
+ target.put(key, source.get(key));
+ }
+ });
+ source.keySet().forEach(key -> {
+ if (Constants.DEFAULT_LIST_OF_KEYS.parallelStream().noneMatch(defaultKey -> defaultKey.equals(key))
+ && Constants.EXCLUDE_TABLE_ENTITY_KEYS.parallelStream().noneMatch(defaultKey -> defaultKey.equals(key))) {
+ target.put(key, source.get(key));
+ }
+ });
+}
+```
+
+### Filter rows returned from a table
+
+To filter the rows returned from a table, you can pass an OData-style filter string to the [listEntities](/java/api/com.azure.data.tables.tableclient.listentities) method. For example, if you wanted to get all of the weather readings for Chicago between midnight July 1, 2021 and midnight July 2, 2021 (inclusive), you would pass in the following filter string.
+
+```odata
+PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00 AM' and RowKey le '2021-07-02 12:00 AM'
+```
+
+You can view all OData filter operators on the OData website in the section [Filter System Query Option](https://www.odata.org/documentation/odata-version-2-0/uri-conventions/).
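+
+As a minimal sketch, the filter string above can be passed directly to `listEntities` through a `ListEntitiesOptions` object (the method name here is illustrative, not part of the sample):
+
+```java
+// Sketch: select only the Chicago readings for July 1, 2021 using the filter string above.
+public void listChicagoReadings(TableClient tableClient) {
+    ListEntitiesOptions options = new ListEntitiesOptions()
+        .setFilter("PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00 AM' and RowKey le '2021-07-02 12:00 AM'");
+
+    for (TableEntity entity : tableClient.listEntities(options, null, null)) {
+        System.out.println(entity.getRowKey() + " -> " + entity.getProperties().get("Temperature"));
+    }
+}
+```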
+
+In the example application, the `FilterResultsInputModel` object is designed to capture any filter criteria provided by the user.
+
+```java
+public class FilterResultsInputModel implements Serializable {
+
+ private String partitionKey;
+
+ private String rowKeyDateStart;
+
+ private String rowKeyTimeStart;
+
+ private String rowKeyDateEnd;
+
+ private String rowKeyTimeEnd;
+
+ private Double minTemperature;
+
+ private Double maxTemperature;
+
+ private Double minPrecipitation;
+
+ private Double maxPrecipitation;
+
+ public String getPartitionKey() {
+ return partitionKey;
+ }
+
+ public void setPartitionKey(String partitionKey) {
+ this.partitionKey = partitionKey;
+ }
+
+ public String getRowKeyDateStart() {
+ return rowKeyDateStart;
+ }
+
+ public void setRowKeyDateStart(String rowKeyDateStart) {
+ this.rowKeyDateStart = rowKeyDateStart;
+ }
+
+ public String getRowKeyTimeStart() {
+ return rowKeyTimeStart;
+ }
+
+ public void setRowKeyTimeStart(String rowKeyTimeStart) {
+ this.rowKeyTimeStart = rowKeyTimeStart;
+ }
+
+ public String getRowKeyDateEnd() {
+ return rowKeyDateEnd;
+ }
+
+ public void setRowKeyDateEnd(String rowKeyDateEnd) {
+ this.rowKeyDateEnd = rowKeyDateEnd;
+ }
+
+ public String getRowKeyTimeEnd() {
+ return rowKeyTimeEnd;
+ }
+
+ public void setRowKeyTimeEnd(String rowKeyTimeEnd) {
+ this.rowKeyTimeEnd = rowKeyTimeEnd;
+ }
+
+ public Double getMinTemperature() {
+ return minTemperature;
+ }
+
+ public void setMinTemperature(Double minTemperature) {
+ this.minTemperature = minTemperature;
+ }
+
+ public Double getMaxTemperature() {
+ return maxTemperature;
+ }
+
+ public void setMaxTemperature(Double maxTemperature) {
+ this.maxTemperature = maxTemperature;
+ }
+
+ public Double getMinPrecipitation() {
+ return minPrecipitation;
+ }
+
+ public void setMinPrecipitation(Double minPrecipitation) {
+ this.minPrecipitation = minPrecipitation;
+ }
+
+ public Double getMaxPrecipitation() {
+ return maxPrecipitation;
+ }
+
+ public void setMaxPrecipitation(Double maxPrecipitation) {
+ this.maxPrecipitation = maxPrecipitation;
+ }
+}
+```
+
+When this object is passed to the `retrieveEntitiesByFilter` method in the `TableServiceImpl` class, it creates a filter string for each non-null property value. It then creates a combined filter string by joining all of the values together with an "and" clause. This combined filter string is passed to the [listEntities](/java/api/com.azure.data.tables.tableclient.listentities) method on the [TableClient](/java/api/com.azure.data.tables.tableclient) object and only rows matching the filter string will be returned. You can use a similar method in your code to construct suitable filter strings as required by your application.
+
+```java
+public List<WeatherDataModel> retrieveEntitiesByFilter(FilterResultsInputModel model) {
+
+ List<String> filters = new ArrayList<>();
+
+ if (!StringUtils.isEmptyOrWhitespace(model.getPartitionKey())) {
+ filters.add(String.format("PartitionKey eq '%s'", model.getPartitionKey()));
+ }
+ if (!StringUtils.isEmptyOrWhitespace(model.getRowKeyDateStart())
+ && !StringUtils.isEmptyOrWhitespace(model.getRowKeyTimeStart())) {
+ filters.add(String.format("RowKey ge '%s %s'", model.getRowKeyDateStart(), model.getRowKeyTimeStart()));
+ }
+ if (!StringUtils.isEmptyOrWhitespace(model.getRowKeyDateEnd())
+ && !StringUtils.isEmptyOrWhitespace(model.getRowKeyTimeEnd())) {
+ filters.add(String.format("RowKey le '%s %s'", model.getRowKeyDateEnd(), model.getRowKeyTimeEnd()));
+ }
+ if (model.getMinTemperature() != null) {
+ filters.add(String.format("Temperature ge %f", model.getMinTemperature()));
+ }
+ if (model.getMaxTemperature() != null) {
+ filters.add(String.format("Temperature le %f", model.getMaxTemperature()));
+ }
+ if (model.getMinPrecipitation() != null) {
+ filters.add(String.format("Precipitation ge %f", model.getMinPrecipitation()));
+ }
+ if (model.getMaxPrecipitation() != null) {
+ filters.add(String.format("Precipitation le %f", model.getMaxPrecipitation()));
+ }
+
+ List<WeatherDataModel> modelList = tableClient.listEntities(new ListEntitiesOptions()
+ .setFilter(String.join(" and ", filters)), null, null).stream()
+ .map(WeatherDataUtils::mapTableEntityToWeatherDataModel)
+ .collect(Collectors.toList());
+ return Collections.unmodifiableList(WeatherDataUtils.filledValue(modelList));
+}
+```
+
+### Insert data using a TableEntity object
+
+The simplest way to add data to a table is by using a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object. In this example, data is mapped from an input model object to a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object. The properties on the input object representing the weather station name and observation date/time are mapped to the `PartitionKey` and `RowKey` properties respectively, which together form a unique key for the row in the table. Then the additional properties on the input model object are mapped to dictionary properties on the [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object. Finally, the [createEntity](/java/api/com.azure.data.tables.tableclient.createentity) method on the [TableClient](/java/api/com.azure.data.tables.tableclient) object is used to insert data into the table.
+
+Modify the `insertEntity` method in the example application to contain the following code.
+
+```java
+public void insertEntity(WeatherInputModel model) {
+ tableClient.createEntity(WeatherDataUtils.createTableEntity(model));
+}
+```
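+
+The sample's `WeatherDataUtils.createTableEntity` helper isn't shown here. A minimal sketch of what such a model-to-entity mapping might look like (the signature is illustrative and not the sample's actual code; it assumes `java.util.Map` is imported):
+
+```java
+// Sketch: the station name becomes the partition key, the observation date/time
+// becomes the row key, and every other value is added as a regular property (column).
+public static TableEntity createTableEntity(String stationName, String observationDateTime,
+        Map<String, Object> otherValues) {
+    TableEntity entity = new TableEntity(stationName, observationDateTime);
+    otherValues.forEach(entity::addProperty);
+    return entity;
+}
+```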
+
+### Upsert data using a TableEntity object
+
+If you try to insert a row into a table with a partition key/row key combination that already exists in that table, you'll receive an error. For this reason, it's often preferable to use the [upsertEntity](/java/api/com.azure.data.tables.tableclient.upsertentity) method instead of the `insertEntity` method when adding rows to a table. If the given partition key/row key combination already exists in the table, the [upsertEntity](/java/api/com.azure.data.tables.tableclient.upsertentity) method will update the existing row. Otherwise, the row will be added to the table.
+
+```java
+public void upsertEntity(WeatherInputModel model) {
+ tableClient.upsertEntity(WeatherDataUtils.createTableEntity(model));
+}
+```
+
+### Insert or upsert data with variable properties
+
+One of the advantages of using the Azure Cosmos DB Tables API is that if an object being loaded to a table contains any new properties, then those properties are automatically added to the table and the values are stored in Azure Cosmos DB. There's no need to run DDL statements like `ALTER TABLE` to add columns as in a traditional database.
+
+This model gives your application flexibility when dealing with data sources that may add or modify what data needs to be captured over time or when different inputs provide different data to your application. In the sample application, we can simulate a weather station that sends not just the base weather data but also some additional values. When an object with these new properties is stored in the table for the first time, the corresponding properties (columns) will be automatically added to the table.
+
+In the sample application, the `ExpandableWeatherObject` class is built around an internal dictionary to support any set of properties on the object. This class represents a typical pattern for when an object needs to contain an arbitrary set of properties.
+
+```java
+public class ExpandableWeatherObject {
+
+ private String stationName;
+
+ private String observationDate;
+
+ private Map<String, Object> propertyMap = new HashMap<String, Object>();
+
+ public String getStationName() {
+ return stationName;
+ }
+
+ public void setStationName(String stationName) {
+ this.stationName = stationName;
+ }
+
+ public String getObservationDate() {
+ return observationDate;
+ }
+
+ public void setObservationDate(String observationDate) {
+ this.observationDate = observationDate;
+ }
+
+ public Map<String, Object> getPropertyMap() {
+ return propertyMap;
+ }
+
+ public void setPropertyMap(Map<String, Object> propertyMap) {
+ this.propertyMap = propertyMap;
+ }
+
+ public boolean containsProperty(String key) {
+ return this.propertyMap.containsKey(key);
+ }
+
+ public Object getPropertyValue(String key) {
+ return containsProperty(key) ? this.propertyMap.get(key) : null;
+ }
+
+ public void putProperty(String key, Object value) {
+ this.propertyMap.put(key, value);
+ }
+
+ public List<String> getPropertyKeys() {
+ List<String> list = Collections.synchronizedList(new ArrayList<String>());
+ Iterator<String> iterators = this.propertyMap.keySet().iterator();
+ while (iterators.hasNext()) {
+ list.add(iterators.next());
+ }
+ return Collections.unmodifiableList(list);
+ }
+
+ public Integer getPropertyCount() {
+ return this.propertyMap.size();
+ }
+}
+```
+
+To insert or upsert such an object using the API for Table, map the properties of the expandable object into a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object and use the [createEntity](/java/api/com.azure.data.tables.tableclient.createentity) or [upsertEntity](/java/api/com.azure.data.tables.tableclient.upsertentity) methods on the [TableClient](/java/api/com.azure.data.tables.tableclient) object as appropriate.
+
+```java
+public void insertExpandableEntity(ExpandableWeatherObject model) {
+ tableClient.createEntity(WeatherDataUtils.createTableEntity(model));
+}
+
+public void upsertExpandableEntity(ExpandableWeatherObject model) {
+ tableClient.upsertEntity(WeatherDataUtils.createTableEntity(model));
+}
+```
+
+### Update an entity
+
+Entities can be updated by calling the [updateEntity](/java/api/com.azure.data.tables.tableclient.updateentity) method on the [TableClient](/java/api/com.azure.data.tables.tableclient) object. Because an entity (row) stored using the Tables API could contain any arbitrary set of properties, it's often useful to create an update object based around a dictionary object similar to the `ExpandableWeatherObject` discussed earlier. In this case, the only difference is the addition of an `etag` property which is used for concurrency control during updates.
+
+```java
+public class UpdateWeatherObject {
+
+ private String stationName;
+
+ private String observationDate;
+
+ private String etag;
+
+ private Map<String, Object> propertyMap = new HashMap<String, Object>();
+
+ public String getStationName() {
+ return stationName;
+ }
+
+ public void setStationName(String stationName) {
+ this.stationName = stationName;
+ }
+
+ public String getObservationDate() {
+ return observationDate;
+ }
+
+ public void setObservationDate(String observationDate) {
+ this.observationDate = observationDate;
+ }
+
+ public String getEtag() {
+ return etag;
+ }
+
+ public void setEtag(String etag) {
+ this.etag = etag;
+ }
+
+ public Map<String, Object> getPropertyMap() {
+ return propertyMap;
+ }
+
+ public void setPropertyMap(Map<String, Object> propertyMap) {
+ this.propertyMap = propertyMap;
+ }
+}
+```
+
+In the sample app, this object is passed to the `updateEntity` method in the `TableServiceImpl` class. This method first loads the existing entity from the Tables API using the [getEntity](/java/api/com.azure.data.tables.tableclient.getentity) method on the [TableClient](/java/api/com.azure.data.tables.tableclient). It then updates that entity object and uses the `updateEntity` method to save the updates to the database. Note how the [updateEntity](/java/api/com.azure.data.tables.tableclient.updateentity) method uses the current ETag of the object to ensure the object hasn't changed since it was initially loaded. If you want to update the entity regardless of whether it has changed since it was read, you can perform the update without the ETag check.
+
+```java
+public void updateEntity(UpdateWeatherObject model) {
+ TableEntity tableEntity = tableClient.getEntity(model.getStationName(), model.getObservationDate());
+ Map<String, Object> propertiesMap = model.getPropertyMap();
+ propertiesMap.keySet().forEach(key -> tableEntity.getProperties().put(key, propertiesMap.get(key)));
+ tableClient.updateEntity(tableEntity);
+}
+```
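+
+To make the ETag precondition explicit, the [TableClient](/java/api/com.azure.data.tables.tableclient) also exposes an `updateEntityWithResponse` overload with an `ifUnchanged` flag. The following is a minimal sketch; the parameter order is an assumption about the `azure-data-tables` client, so verify it against the SDK version you use.
+
+```java
+// Sketch: merge the changed properties only if the entity's ETag still matches,
+// that is, only if no one else has modified the row since it was read.
+// Assumes: import com.azure.data.tables.models.TableEntityUpdateMode;
+public void updateEntityIfUnchanged(TableClient tableClient, TableEntity tableEntity) {
+    tableClient.updateEntityWithResponse(
+        tableEntity,
+        TableEntityUpdateMode.MERGE,
+        true,   // ifUnchanged: send the entity's ETag as an If-Match precondition
+        null,   // timeout
+        null);  // context
+}
+```
+
+When `ifUnchanged` is `true`, the update is rejected if the entity's ETag no longer matches the row stored in the table.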
+
+### Remove an entity
+
+To remove an entity from a table, call the [deleteEntity](/java/api/com.azure.data.tables.tableclient.deleteentity) method on the [TableClient](/java/api/com.azure.data.tables.tableclient) object with the partition key and row key of the object.
+
+```java
+public void deleteEntity(WeatherInputModel model) {
+ tableClient.deleteEntity(model.getStationName(),
+ WeatherDataUtils.formatRowKey(model.getObservationDate(), model.getObservationTime()));
+}
+```
+
+## 7 - Run the code
+
+Run the sample application to interact with the Azure Cosmos DB Tables API. The first time you run the application, there will be no data because the table is empty. Use any of the buttons at the top of the application to add data to the table.
++
+Selecting the **Insert using Table Entity** button opens a dialog allowing you to insert or upsert a new row using a `TableEntity` object.
++
+Selecting the **Insert using Expandable Data** button brings up a dialog that enables you to insert an object with custom properties, demonstrating how the Azure Cosmos DB Tables API automatically adds properties (columns) to the table when needed. Use the *Add Custom Field* button to add one or more new properties and demonstrate this capability.
++
+Use the **Insert Sample Data** button to load some sample data into your Azure Cosmos DB table.
++
+Select the **Filter Results** item in the top menu to be taken to the Filter Results page. On this page, fill out the filter criteria to demonstrate how a filter clause can be built and passed to the Azure Cosmos DB Tables API.
++
+## Clean up resources
+
+When you're finished with the sample application, you should remove all Azure resources related to this article from your Azure account. You can do this by deleting the resource group.
+
+### [Azure portal](#tab/azure-portal)
+
+A resource group can be deleted using the [Azure portal](https://portal.azure.com/) by doing the following.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Delete resource group step 1](./includes/quickstart-jav)] | :::image type="content" source="./media/quickstart-java/azure-portal-remove-resource-group-1-240px.png" alt-text="A screenshot showing how to search for a resource group." lightbox="./media/quickstart-java/azure-portal-remove-resource-group-1.png"::: |
+| [!INCLUDE [Delete resource group step 2](./includes/quickstart-jav)] | :::image type="content" source="./media/quickstart-java/azure-portal-remove-resource-group-2-240px.png" alt-text="A screenshot showing the location of the Delete resource group button." lightbox="./media/quickstart-java/azure-portal-remove-resource-group-2.png"::: |
+| [!INCLUDE [Delete resource group step 3](./includes/quickstart-jav)] | :::image type="content" source="./media/quickstart-java/azure-portal-remove-resource-group-3-240px.png" alt-text="A screenshot showing the confirmation dialog for deleting a resource group." lightbox="./media/quickstart-java/azure-portal-remove-resource-group-3.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az-group-delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurecli
+az group delete --name $RESOURCE_GROUP_NAME
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To delete a resource group using Azure PowerShell, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurepowershell
+Remove-AzResourceGroup -Name $resourceGroupName
+```
+++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the Tables API.
+
+> [!div class="nextstepaction"]
+> [Import table data to the Tables API](import.md)
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-nodejs.md
+
+ Title: 'Quickstart: API for Table with Node.js - Azure Cosmos DB'
+description: This quickstart shows how to use the Azure Cosmos DB for Table to create an application with the Azure portal and Node.js
++++
+ms.devlang: javascript
+ Last updated : 05/28/2020+++
+# Quickstart: Build an API for Table app with Node.js and Azure Cosmos DB
++
+> [!div class="op_single_selector"]
+>
+> * [.NET](quickstart-dotnet.md)
+> * [Java](quickstart-java.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Python](quickstart-python.md)
+>
+
+In this quickstart, you create an Azure Cosmos DB for Table account, and use Data Explorer and a Node.js app cloned from GitHub to create tables and entities. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+- [Node.js 0.10.29+](https://nodejs.org/).
+- [Git](https://git-scm.com/downloads).
+
+## Sample application
+
+The sample application for this tutorial may be cloned or downloaded from the repository [https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-js](https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-js). Both a starter and completed app are included in the sample repository.
+
+```bash
+git clone https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-js
+```
+
+The sample application uses weather data as an example to demonstrate the capabilities of the API for Table. Objects representing weather observations are stored and retrieved using the API for Table, including storing objects with additional properties to demonstrate the schemaless capabilities of the API for Table.
++
+## 1 - Create an Azure Cosmos DB account
+
+You first need to create an Azure Cosmos DB Tables API account that will contain the table(s) used in your application. This can be done using the Azure portal, Azure CLI, or Azure PowerShell.
+
+### [Azure portal](#tab/azure-portal)
+
+Log in to the [Azure portal](https://portal.azure.com/) and follow these steps to create an Azure Cosmos DB account.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create Azure Cosmos DB db account step 1](./includes/quickstart-nodejs/create-cosmos-db-acct-1.md)] | :::image type="content" source="./media/quickstart-nodejs/azure-portal-create-cosmos-db-account-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Azure Cosmos DB accounts in Azure." lightbox="./media/quickstart-nodejs/azure-portal-create-cosmos-db-account-table-api-1.png"::: |
+| [!INCLUDE [Create Azure Cosmos DB db account step 2](./includes/quickstart-nodejs/create-cosmos-db-acct-2.md)] | :::image type="content" source="./media/quickstart-nodejs/azure-portal-create-cosmos-db-account-table-api-2-240px.png" alt-text="A screenshot showing the Create button location on the Azure Cosmos DB accounts page in Azure." lightbox="./media/quickstart-nodejs/azure-portal-create-cosmos-db-account-table-api-2.png"::: |
+| [!INCLUDE [Create Azure Cosmos DB db account step 3](./includes/quickstart-nodejs/create-cosmos-db-acct-3.md)] | :::image type="content" source="./media/quickstart-nodejs/azure-portal-create-cosmos-db-account-table-api-3-240px.png" alt-text="A screenshot showing the Azure Table option as the correct option to select." lightbox="./media/quickstart-nodejs/azure-portal-create-cosmos-db-account-table-api-3.png"::: |
+| [!INCLUDE [Create Azure Cosmos DB db account step 4](./includes/quickstart-nodejs/create-cosmos-db-acct-4.md)] | :::image type="content" source="./media/quickstart-nodejs/azure-portal-create-cosmos-db-account-table-api-4-240px.png" alt-text="A screenshot showing how to fill out the fields on the Azure Cosmos DB Account creation page." lightbox="./media/quickstart-nodejs/azure-portal-create-cosmos-db-account-table-api-4.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+Azure Cosmos DB accounts are created using the [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Azure Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
+
+Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
+
+Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
+
+It typically takes several minutes for the Azure Cosmos DB account creation process to complete.
+
+```azurecli
+LOCATION='eastus'
+RESOURCE_GROUP_NAME='rg-msdocs-tables-sdk-demo'
+COSMOS_ACCOUNT_NAME='cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
+COSMOS_TABLE_NAME='WeatherData'
+
+az group create \
+ --location $LOCATION \
+ --name $RESOURCE_GROUP_NAME
+
+az cosmosdb create \
+ --name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --capabilities EnableTable
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet. You must include the `-ApiKind "Table"` option to enable table storage within your Azure Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
+
+Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
+
+Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-az-ps).
+
+It typically takes several minutes for the Azure Cosmos DB account creation process to complete.
+
+```azurepowershell
+$location = 'eastus'
+$resourceGroupName = 'rg-msdocs-tables-sdk-demo'
+$cosmosAccountName = 'cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
+
+# Create a resource group
+New-AzResourceGroup `
+ -Location $location `
+ -Name $resourceGroupName
+
+# Create an Azure Cosmos DB
+New-AzCosmosDBAccount `
+ -Name $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName `
+ -Location $location `
+ -ApiKind "Table"
+```
+++
+## 2 - Create a table
+
+Next, you need to create a table within your Azure Cosmos DB account for your application to use. Unlike a traditional database, you only need to specify the name of the table, not the properties (columns) in the table. As data is loaded into your table, the properties (columns) will be automatically created as needed.
+
+### [Azure portal](#tab/azure-portal)
+
+In the [Azure portal](https://portal.azure.com/), complete the following steps to create a table inside your Azure Cosmos DB account.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create Azure Cosmos DB db table step 1](./includes/quickstart-nodejs/create-cosmos-table-1.md)] | :::image type="content" source="./media/quickstart-nodejs/azure-portal-create-cosmos-db-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find your Azure Cosmos DB account." lightbox="./media/quickstart-nodejs/azure-portal-create-cosmos-db-table-api-1.png"::: |
+| [!INCLUDE [Create Azure Cosmos DB db table step 2](./includes/quickstart-nodejs/create-cosmos-table-2.md)] | :::image type="content" source="./media/quickstart-nodejs/azure-portal-create-cosmos-db-table-api-2-240px.png" alt-text="A screenshot showing the location of the Add Table button." lightbox="./media/quickstart-nodejs/azure-portal-create-cosmos-db-table-api-2.png"::: |
+| [!INCLUDE [Create Azure Cosmos DB db table step 3](./includes/quickstart-nodejs/create-cosmos-table-3.md)] | :::image type="content" source="./media/quickstart-nodejs/azure-portal-create-cosmos-db-table-api-3-240px.png" alt-text="A screenshot showing the New Table dialog box for an Azure Cosmos DB table." lightbox="./media/quickstart-nodejs/azure-portal-create-cosmos-db-table-api-3.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+Tables in Azure Cosmos DB are created using the [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) command.
+
+```azurecli
+COSMOS_TABLE_NAME='WeatherData'
+
+az cosmosdb table create \
+ --account-name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_TABLE_NAME \
+ --throughput 400
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+Tables in Azure Cosmos DB are created using the [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) cmdlet.
+
+```azurepowershell
+$cosmosTableName = 'WeatherData'
+
+# Create the table for the application to use
+New-AzCosmosDBTable `
+ -Name $cosmosTableName `
+ -AccountName $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName
+```
+++
+## 3 - Get Azure Cosmos DB connection string
+
+To access your table(s) in Azure Cosmos DB, your app needs the table connection string for your Azure Cosmos DB account. The connection string can be retrieved using the Azure portal, Azure CLI, or Azure PowerShell.
+
+### [Azure portal](#tab/azure-portal)
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Get Azure Cosmos DB db table connection string step 1](./includes/quickstart-nodejs/get-cosmos-connection-string-1.md)] | :::image type="content" source="./media/quickstart-nodejs/azure-portal-cosmos-db-table-connection-string-1-240px.png" alt-text="A screenshot showing the location of the connection strings link on the Azure Cosmos DB page." lightbox="./media/quickstart-nodejs/azure-portal-cosmos-db-table-connection-string-1.png"::: |
+| [!INCLUDE [Get Azure Cosmos DB db table connection string step 2](./includes/quickstart-nodejs/get-cosmos-connection-string-2.md)] | :::image type="content" source="./media/quickstart-nodejs/azure-portal-cosmos-db-table-connection-string-2-240px.png" alt-text="A screenshot showing which connection string to select and use in your application." lightbox="./media/quickstart-nodejs/azure-portal-cosmos-db-table-connection-string-2.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+To get the primary table storage connection string using Azure CLI, use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command with the option `--type connection-strings`. This command uses a [JMESPath query](/cli/azure/query-azure-cli) to display only the primary table connection string.
+
+```azurecli
+# This gets the primary Table connection string
+az cosmosdb keys list \
+ --type connection-strings \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_ACCOUNT_NAME \
+ --query "connectionStrings[?description=='Primary Table Connection String'].connectionString" \
+ --output tsv
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To get the primary table storage connection string using Azure PowerShell, use the [Get-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+```azurepowershell
+# This gets the primary Table connection string
+$(Get-AzCosmosDBAccountKey `
+ -ResourceGroupName $resourceGroupName `
+ -Name $cosmosAccountName `
+ -Type "ConnectionStrings")."Primary Table Connection String"
+```
+
+The connection string for your Azure Cosmos DB account is considered an app secret and must be protected like any other app secret or password.
+++
+## 4 - Install the Azure Data Tables SDK for JS
+
+To access the Azure Cosmos DB for Table from a Node.js application, install the [Azure Data Tables SDK](https://www.npmjs.com/package/@azure/data-tables) package.
+
+```bash
+ npm install @azure/data-tables
+```
+
+## 5 - Configure the Table client in env.js file
+
+Copy your Azure Cosmos DB or Storage account connection string from the Azure portal, and create a `TableClient` object using your copied connection string. Switch to the `1-strater-app` or `2-completed-app` folder. Then, add the values of the corresponding environment variables to the `configure/env.js` file.
+
+```js
+const env = {
+ connectionString:"A connection string to an Azure Storage or Azure Cosmos DB account.",
+ tableName: "WeatherData",
+};
+```
+
+The Azure SDK communicates with Azure using client objects to execute different operations against Azure. The `TableClient` class is used to communicate with the Azure Cosmos DB for Table. An application will typically create a single `serviceClient` object per table to be used throughout the application.
+
+```js
+const { TableClient } = require("@azure/data-tables");
+const env = require("../configure/env");
+const serviceClient = TableClient.fromConnectionString(
+ env.connectionString,
+ env.tableName
+);
+```
+++
+## 6 - Implement Azure Cosmos DB table operations
+
+All Azure Cosmos DB table operations for the sample app are implemented in the `serviceClient` object located in the `tableClient.js` file under the *service* directory.
+
+```js
+const { TableClient } = require("@azure/data-tables");
+const env = require("../configure/env");
+const serviceClient = TableClient.fromConnectionString(
+ env.connectionString,
+ env.tableName
+);
+```
+
+### Get rows from a table
+
+The `serviceClient` object contains a method named `listEntities` which allows you to select rows from the table. In this example, since no parameters are being passed to the method, all rows will be selected from the table.
+
+```js
+const allRowsEntities = serviceClient.listEntities();
+```
+
+### Filter rows returned from a table
+
+To filter the rows returned from a table, you can pass an OData-style filter string to the `listEntities` method. For example, if you wanted to get all of the weather readings for Chicago between midnight July 1, 2021 and midnight July 2, 2021 (inclusive), you would pass in the following filter string.
+
+```odata
+PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00' and RowKey le '2021-07-02 12:00'
+```
+
+You can view all OData filter operators on the OData website in the section [Filter System Query Option](https://www.odata.org/documentation/odata-version-2-0/uri-conventions/).
+
+When the filter criteria from the request are passed to the `filterEntities` function as the `option` object, the function creates a filter string for each non-null property value. It then creates a combined filter string by joining all of the values together with an "and" clause. This combined filter string is passed to the `listEntities` method on the `serviceClient` object, and only rows matching the filter string will be returned. You can use a similar method in your code to construct suitable filter strings as required by your application.
+
+```js
+const filterEntities = async function (option) {
+ /*
+ You can query data according to existing fields
+ option provides some conditions to query,eg partitionKey, rowKeyDateTimeStart, rowKeyDateTimeEnd
+ minTemperature, maxTemperature, minPrecipitation, maxPrecipitation
+ */
+ const filterEntitiesArray = [];
+ const filters = [];
+ if (option.partitionKey) {
+ filters.push(`PartitionKey eq '${option.partitionKey}'`);
+ }
+ if (option.rowKeyDateTimeStart) {
+ filters.push(`RowKey ge '${option.rowKeyDateTimeStart}'`);
+ }
+ if (option.rowKeyDateTimeEnd) {
+ filters.push(`RowKey le '${option.rowKeyDateTimeEnd}'`);
+ }
+ if (option.minTemperature !== null) {
+ filters.push(`Temperature ge ${option.minTemperature}`);
+ }
+ if (option.maxTemperature !== null) {
+ filters.push(`Temperature le ${option.maxTemperature}`);
+ }
+ if (option.minPrecipitation !== null) {
+ filters.push(`Precipitation ge ${option.minPrecipitation}`);
+ }
+ if (option.maxPrecipitation !== null) {
+ filters.push(`Precipitation le ${option.maxPrecipitation}`);
+ }
+ const res = serviceClient.listEntities({
+ queryOptions: {
+ filter: filters.join(" and "),
+ },
+ });
+ for await (const entity of res) {
+ filterEntitiesArray.push(entity);
+ }
+
+ return filterEntitiesArray;
+};
+```
+
+### Insert data using a TableEntity object
+
+The simplest way to add data to a table is by using a `TableEntity` object. In this example, data is mapped from an input model object to a `TableEntity` object. The properties on the input object representing the weather station name and observation date/time are mapped to the `PartitionKey` and `RowKey` properties respectively, which together form a unique key for the row in the table. Then the additional properties on the input model object are mapped to dictionary properties on the `TableEntity` object. Finally, the `createEntity` method on the `serviceClient` object is used to insert data into the table.
+
+Modify the `insertEntity` function in the example application to contain the following code.
+
+```js
+const insertEntity = async function (entity) {
+
+ await serviceClient.createEntity(entity);
+
+};
+```
+
+### Upsert data using a TableEntity object
+
+If you try to insert a row into a table with a partition key/row key combination that already exists in that table, you will receive an error. For this reason, it is often preferable to use the `upsertEntity` method instead of the `createEntity` method when adding rows to a table. If the given partition key/row key combination already exists in the table, the `upsertEntity` method will update the existing row. Otherwise, the row will be added to the table.
+
+```js
+const upsertEntity = async function (entity) {
+
+ await serviceClient.upsertEntity(entity, "Merge");
+
+};
+```
+
+### Insert or upsert data with variable properties
+
+One of the advantages of using the Azure Cosmos DB for Table is that if an object being loaded to a table contains any new properties, then those properties are automatically added to the table and the values are stored in Azure Cosmos DB. There is no need to run DDL statements like ALTER TABLE to add columns as in a traditional database.
+
+This model gives your application flexibility when dealing with data sources that may add or modify what data needs to be captured over time or when different inputs provide different data to your application. In the sample application, we can simulate a weather station that sends not just the base weather data but also some additional values. When an object with these new properties is stored in the table for the first time, the corresponding properties (columns) will be automatically added to the table.
+
+To insert or upsert such an object using the API for Table, map the properties of the expandable object into a `TableEntity` object and use the `createEntity` or `upsertEntity` methods on the `serviceClient` object as appropriate.
+
+In the sample application, the `upsertEntity` function can also be used to insert or upsert data with variable properties.
+
+```js
+const insertEntity = async function (entity) {
+ await serviceClient.createEntity(entity);
+};
+
+const upsertEntity = async function (entity) {
+ await serviceClient.upsertEntity(entity, "Merge");
+};
+```
+
+### Update an entity
+
+Entities can be updated by calling the `updateEntity` method on the `serviceClient` object.
+
+In the sample app, the entity object is passed to the `updateEntity` method on the `serviceClient` object, which replaces the stored entity and saves the updates to the database.
+
+```js
+const updateEntity = async function (entity) {
+ await serviceClient.updateEntity(entity, "Replace");
+};
+```
+
+## 7 - Run the code
+
+Run the sample application to interact with the Azure Cosmos DB for Table. The first time you run the application, there will be no data because the table is empty. Use any of the buttons at the top of the application to add data to the table.
++
+Selecting the **Insert using Table Entity** button opens a dialog allowing you to insert or upsert a new row using a `TableEntity` object.
++
+Selecting the **Insert using Expandable Data** button brings up a dialog that enables you to insert an object with custom properties, demonstrating how the Azure Cosmos DB for Table automatically adds properties (columns) to the table when needed. Use the *Add Custom Field* button to add one or more new properties and demonstrate this capability.
++
+Use the **Insert Sample Data** button to load some sample data into your Azure Cosmos DB Table.
++
+Select the **Filter Results** item in the top menu to be taken to the Filter Results page. On this page, fill out the filter criteria to demonstrate how a filter clause can be built and passed to the Azure Cosmos DB for Table.
++
+## Clean up resources
+
+When you are finished with the sample application, you should remove all Azure resources related to this article from your Azure account. You can do this by deleting the resource group.
+
+### [Azure portal](#tab/azure-portal)
+
+A resource group can be deleted using the [Azure portal](https://portal.azure.com/) by doing the following.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Delete resource group step 1](./includes/quickstart-nodejs/remove-resource-group-1.md)] | :::image type="content" source="./media/quickstart-nodejs/azure-portal-remove-resource-group-1-240px.png" alt-text="A screenshot showing how to search for a resource group." lightbox="./media/quickstart-nodejs/azure-portal-remove-resource-group-1.png"::: |
+| [!INCLUDE [Delete resource group step 2](./includes/quickstart-nodejs/remove-resource-group-2.md)] | :::image type="content" source="./media/quickstart-nodejs/azure-portal-remove-resource-group-2-240px.png" alt-text="A screenshot showing the location of the Delete resource group button." lightbox="./media/quickstart-nodejs/azure-portal-remove-resource-group-2.png"::: |
+| [!INCLUDE [Delete resource group step 3](./includes/quickstart-nodejs/remove-resource-group-3.md)] | :::image type="content" source="./media/quickstart-nodejs/azure-portal-remove-resource-group-3-240px.png" alt-text="A screenshot showing the confirmation dialog for deleting a resource group." lightbox="./media/quickstart-nodejs/azure-portal-remove-resource-group-3.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az-group-delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurecli
+az group delete --name $RESOURCE_GROUP_NAME
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To delete a resource group using Azure PowerShell, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurepowershell
+Remove-AzResourceGroup -Name $resourceGroupName
+```
+++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run a Node.js app to add table data. Now you can query your data using the API for Table.
+
+> [!div class="nextstepaction"]
+> [Import table data to the API for Table](import.md)
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-python.md
+
+ Title: 'Quickstart: API for Table with Python - Azure Cosmos DB'
+description: This quickstart shows how to access the Azure Cosmos DB for Table from a Python application using the Azure Data Tables SDK
+++
+ms.devlang: python
+ Last updated : 03/23/2021+++++
+# Quickstart: Build an API for Table app with the Python SDK and Azure Cosmos DB
++
+> [!div class="op_single_selector"]
+>
+> * [.NET](quickstart-dotnet.md)
+> * [Java](quickstart-java.md)
+> * [Node.js](quickstart-nodejs.md)
+> * [Python](quickstart-python.md)
+>
+
+This quickstart shows how to access the Azure Cosmos DB [API for Table](introduction.md) from a Python application. The Azure Cosmos DB for Table is a schemaless data store allowing applications to store structured NoSQL data in the cloud. Because data is stored in a schemaless design, new properties (columns) are automatically added to the table when an object with a new attribute is added to the table. Python applications can access the Azure Cosmos DB for Table using the [Azure Data Tables SDK for Python](https://pypi.org/project/azure-data-tables/) package.
+
+## Prerequisites
+
+The sample application is written in [Python 3.6](https://www.python.org/downloads/), though the principles apply to all Python 3.6+ applications. You can use [Visual Studio Code](https://code.visualstudio.com/) as an IDE.
+
+If you don't have an [Azure subscription](../../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/dotnet) before you begin.
+
+## Sample application
+
+The sample application for this tutorial may be cloned or downloaded from the repository https://github.com/Azure-Samples/msdocs-azure-tables-sdk-python-flask. Both a starter and completed app are included in the sample repository.
+
+```bash
+git clone https://github.com/Azure-Samples/msdocs-azure-tables-sdk-python-flask.git
+```
+
+The sample application uses weather data as an example to demonstrate the capabilities of the API for Table. Objects representing weather observations are stored and retrieved using the API for Table, including storing objects with additional properties to demonstrate the schemaless capabilities of the API for Table.
++
+## 1 - Create an Azure Cosmos DB account
+
+You first need to create an Azure Cosmos DB Tables API account that will contain the table(s) used in your application. This can be done using the Azure portal, Azure CLI, or Azure PowerShell.
+
+### [Azure portal](#tab/azure-portal)
+
+Log in to the [Azure portal](https://portal.azure.com/) and follow these steps to create an Azure Cosmos DB account.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create Azure Cosmos DB db account step 1](./includes/quickstart-python/create-cosmos-db-acct-1.md)] | :::image type="content" source="./media/quickstart-python/azure-portal-create-cosmos-db-account-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Azure Cosmos DB accounts in Azure." lightbox="./media/quickstart-python/azure-portal-create-cosmos-db-account-table-api-1.png"::: |
+| [!INCLUDE [Create Azure Cosmos DB db account step 2](./includes/quickstart-python/create-cosmos-db-acct-2.md)] | :::image type="content" source="./media/quickstart-python/azure-portal-create-cosmos-db-account-table-api-2-240px.png" alt-text="A screenshot showing the Create button location on the Azure Cosmos DB accounts page in Azure." lightbox="./media/quickstart-python/azure-portal-create-cosmos-db-account-table-api-2.png"::: |
+| [!INCLUDE [Create Azure Cosmos DB db account step 3](./includes/quickstart-python/create-cosmos-db-acct-3.md)] | :::image type="content" source="./media/quickstart-python/azure-portal-create-cosmos-db-account-table-api-3-240px.png" alt-text="A screenshot showing the Azure Table option as the correct option to select." lightbox="./media/quickstart-python/azure-portal-create-cosmos-db-account-table-api-3.png"::: |
+| [!INCLUDE [Create Azure Cosmos DB db account step 4](./includes/quickstart-python/create-cosmos-db-acct-4.md)] | :::image type="content" source="./media/quickstart-python/azure-portal-create-cosmos-db-account-table-api-4-240px.png" alt-text="A screenshot showing how to fill out the fields on the Azure Cosmos DB Account creation page." lightbox="./media/quickstart-python/azure-portal-create-cosmos-db-account-table-api-4.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+Azure Cosmos DB accounts are created using the [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Azure Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
+
+Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
+
+Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com/) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
+
+It typically takes several minutes for the Azure Cosmos DB account creation process to complete.
+
+```azurecli
+LOCATION='eastus'
+RESOURCE_GROUP_NAME='rg-msdocs-tables-sdk-demo'
+COSMOS_ACCOUNT_NAME='cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
+COSMOS_TABLE_NAME='WeatherData'
+
+az group create \
+ --location $LOCATION \
+ --name $RESOURCE_GROUP_NAME
+
+az cosmosdb create \
+ --name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --capabilities EnableTable
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet. You must include the `-ApiKind "Table"` option to enable table storage within your Azure Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
+
+Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
+
+Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-az-ps).
+
+It typically takes several minutes for the Azure Cosmos DB account creation process to complete.
+
+```azurepowershell
+$location = 'eastus'
+$resourceGroupName = 'rg-msdocs-tables-sdk-demo'
+$cosmosAccountName = 'cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
+
+# Create a resource group
+New-AzResourceGroup `
+ -Location $location `
+ -Name $resourceGroupName
+
+# Create an Azure Cosmos DB
+New-AzCosmosDBAccount `
+ -Name $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName `
+ -Location $location `
+ -ApiKind "Table"
+```
+++
+## 2 - Create a table
+
+Next, you need to create a table within your Azure Cosmos DB account for your application to use. Unlike in a traditional database, you need to specify only the name of the table, not its properties (columns). As data is loaded into your table, the properties (columns) are created automatically as needed.
+
+### [Azure portal](#tab/azure-portal)
+
+In the [Azure portal](https://portal.azure.com/), complete the following steps to create a table inside your Azure Cosmos DB account.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create Azure Cosmos DB db table step 1](./includes/quickstart-python/create-cosmos-table-1.md)] | :::image type="content" source="./media/quickstart-python/azure-portal-create-cosmos-db-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find your Azure Cosmos DB account." lightbox="./media/quickstart-python/azure-portal-create-cosmos-db-table-api-1.png"::: |
+| [!INCLUDE [Create Azure Cosmos DB db table step 2](./includes/quickstart-python/create-cosmos-table-2.md)] | :::image type="content" source="./media/quickstart-python/azure-portal-create-cosmos-db-table-api-2-240px.png" alt-text="A screenshot showing the location of the Add Table button." lightbox="./media/quickstart-python/azure-portal-create-cosmos-db-table-api-2.png"::: |
+| [!INCLUDE [Create Azure Cosmos DB db table step 3](./includes/quickstart-python/create-cosmos-table-3.md)] | :::image type="content" source="./media/quickstart-python/azure-portal-create-cosmos-db-table-api-3-240px.png" alt-text="A screenshot showing the New Table dialog box for an Azure Cosmos DB table." lightbox="./media/quickstart-python/azure-portal-create-cosmos-db-table-api-3.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+Tables in Azure Cosmos DB are created using the [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) command.
+
+```azurecli
+COSMOS_TABLE_NAME='WeatherData'
+
+az cosmosdb table create \
+ --account-name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_TABLE_NAME \
+ --throughput 400
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+Tables in Azure Cosmos DB are created using the [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) cmdlet.
+
+```azurepowershell
+$cosmosTableName = 'WeatherData'
+
+# Create the table for the application to use
+New-AzCosmosDBTable `
+ -Name $cosmosTableName `
+ -AccountName $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName
+```
+++
+## 3 - Get Azure Cosmos DB connection string
+
+To access your table(s) in Azure Cosmos DB, your app needs the table connection string for your Azure Cosmos DB for Table account. The connection string can be retrieved using the Azure portal, Azure CLI, or Azure PowerShell.
+
+### [Azure portal](#tab/azure-portal)
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Get Azure Cosmos DB db table connection string step 1](./includes/quickstart-python/get-cosmos-connection-string-1.md)] | :::image type="content" source="./media/quickstart-python/azure-portal-cosmos-db-table-connection-string-1-240px.png" alt-text="A screenshot showing the location of the connection strings link on the Azure Cosmos DB page." lightbox="./media/quickstart-python/azure-portal-cosmos-db-table-connection-string-1.png"::: |
+| [!INCLUDE [Get Azure Cosmos DB db table connection string step 2](./includes/quickstart-python/get-cosmos-connection-string-2.md)] | :::image type="content" source="./media/quickstart-python/azure-portal-cosmos-db-table-connection-string-2-240px.png" alt-text="A screenshot showing which connection string to select and use in your application." lightbox="./media/quickstart-python/azure-portal-cosmos-db-table-connection-string-2.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+To get the primary connection string using Azure CLI, use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command with the option `--type connection-strings`. This command uses a [JMESPath query](/cli/azure/query-azure-cli) to display only the primary table connection string.
+
+```azurecli
+# This gets the primary connection string
+az cosmosdb keys list \
+ --type connection-strings \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_ACCOUNT_NAME \
+ --query "connectionStrings[?description=='Primary Table Connection String'].connectionString" \
+ --output tsv
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To get the primary connection string using Azure PowerShell, use the [Get-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+```azurepowershell
+# This gets the primary connection string
+$(Get-AzCosmosDBAccountKey `
+ -ResourceGroupName $resourceGroupName `
+ -Name $cosmosAccountName `
+ -Type "ConnectionStrings")."Primary Table Connection String"
+```
+
+The connection string for your Azure Cosmos DB account is considered an app secret and must be protected like any other app secret or password.
+++
+## 4 - Install the Azure Data Tables SDK for Python
+
+After you've created an Azure Cosmos DB account, your next step is to install the Microsoft [Azure Data Tables SDK for Python](https://pypi.python.org/pypi/azure-data-tables/). For details on installing the SDK, refer to the [README.md](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/tables/azure-data-tables/README.md) file in the Data Tables SDK for Python repository on GitHub.
+
+Install the Azure Tables client library for Python with pip:
+
+```bash
+pip install azure-data-tables
+```
+++
+## 5 - Configure the Table client in a .env file
+
+Copy your Azure Cosmos DB account connection string from the Azure portal. Switch to the `1-starter-app` or `2-completed-app` folder, and then add the values of the corresponding environment variables to the `.env` file. The app uses these values to create the `TableServiceClient` object that connects to your account.
+
+```python
+# Configuration Parameters
+conn_str = "A connection string to an Azure Cosmos DB account."
+table_name = "WeatherData"
+project_root_path = "Project abs path"
+```
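+
+Because the snippets below read these values with `os.getenv`, the `.env` file needs to be loaded into the process environment at startup. The following is a minimal sketch of that pattern using the python-dotenv package; this is an assumption for illustration, not necessarily how the sample app wires it up.
+
+```python
+import os
+from dotenv import load_dotenv  # assumes the python-dotenv package is installed
+
+# Read key/value pairs from the .env file into the process environment
+load_dotenv()
+
+conn_str = os.getenv("conn_str")
+table_name = os.getenv("table_name")
+```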
+
+The Azure SDK communicates with Azure using client objects to execute different operations against Azure. The `TableServiceClient` object is used to communicate with the Azure Cosmos DB for Table service. An application typically has a single `TableServiceClient` instance overall, and a `TableClient` per table.
+
+```python
+self.conn_str = os.getenv("AZURE_CONNECTION_STRING")
+self.table_service = TableServiceClient.from_connection_string(self.conn_str)
+```
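+
+As a minimal sketch (assuming the `conn_str` and `table_name` values from the `.env` file above are available as environment variables; the variable names are illustrative), a `TableClient` for a single table can be obtained from the shared `TableServiceClient`:
+
+```python
+import os
+from azure.data.tables import TableServiceClient
+
+# One TableServiceClient for the account, one TableClient per table
+table_service = TableServiceClient.from_connection_string(os.getenv("conn_str"))
+table_client = table_service.get_table_client(os.getenv("table_name"))
+```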
+++
+## 6 - Implement Azure Cosmos DB table operations
+
+All Azure Cosmos DB table operations for the sample app are implemented in the `TableServiceHelper` class, located in the *helper* file under the *webapp* directory. You need to import the `TableServiceClient` class at the top of this file to work with objects in the `azure.data.tables` SDK package.
+
+```python
+from azure.data.tables import TableServiceClient
+```
+
+At the start of the `TableServiceHelper` class, create a constructor that accepts optional table name and connection string parameters, so that these values can be injected into the class, and use them to create the `TableClient` member variable that the rest of the class works with.
+
+```python
+def __init__(self, table_name=None, conn_str=None):
+ self.table_name = table_name if table_name else os.getenv("table_name")
+ self.conn_str = conn_str if conn_str else os.getenv("conn_str")
+ self.table_service = TableServiceClient.from_connection_string(self.conn_str)
+ self.table_client = self.table_service.get_table_client(self.table_name)
+```
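+
+As a hypothetical usage sketch (the variable name is illustrative and not part of the sample app), the helper can then be constructed either with explicit values or by relying on the environment variables configured earlier:
+
+```python
+# Explicit values
+helper = TableServiceHelper(table_name="WeatherData", conn_str=os.getenv("conn_str"))
+
+# Or rely on the environment-variable defaults
+helper = TableServiceHelper()
+```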
+
+### Filter rows returned from a table
+
+To filter the rows returned from a table, you can pass an OData-style filter string to the `query_entities` method. For example, if you wanted to get all of the weather readings for Chicago between midnight July 1, 2021, and midnight July 2, 2021 (inclusive), you would pass in the following filter string.
+
+```odata
+PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00 AM' and RowKey le '2021-07-02 12:00 AM'
+```
+
+You can view related OData filter operators on the azure-data-tables website in the section [Writing Filters](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/tables/azure-data-tables/samples#writing-filters).
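+
+As a short sketch (assuming a `table_client` created as shown earlier), this filter string can be passed directly to the `query_entities` method:
+
+```python
+query_filter = ("PartitionKey eq 'Chicago' "
+                "and RowKey ge '2021-07-01 12:00 AM' "
+                "and RowKey le '2021-07-02 12:00 AM'")
+
+# Iterate the matching rows; each returned entity behaves like a dictionary
+for entity in table_client.query_entities(query_filter):
+    print(entity["RowKey"], entity.get("Temperature"))
+```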
+
+When the `request.args` parameter is passed to the `query_entity` method in the `TableServiceHelper` class, it creates a filter string for each non-null property value. It then creates a combined filter string by joining all of the values together with an "and" clause. This combined filter string is passed to the `query_entities` method on the `TableClient` object, and only rows matching the filter string are returned. You can use a similar method in your code to construct suitable filter strings as required by your application.
+
+```python
+def query_entity(self, params):
+ filters = []
+ if params.get("partitionKey"):
+ filters.append("PartitionKey eq '{}'".format(params.get("partitionKey")))
+ if params.get("rowKeyDateStart") and params.get("rowKeyTimeStart"):
+ filters.append("RowKey ge '{} {}'".format(params.get("rowKeyDateStart"), params.get("rowKeyTimeStart")))
+ if params.get("rowKeyDateEnd") and params.get("rowKeyTimeEnd"):
+ filters.append("RowKey le '{} {}'".format(params.get("rowKeyDateEnd"), params.get("rowKeyTimeEnd")))
+ if params.get("minTemperature"):
+ filters.append("Temperature ge {}".format(params.get("minTemperature")))
+ if params.get("maxTemperature"):
+ filters.append("Temperature le {}".format(params.get("maxTemperature")))
+ if params.get("minPrecipitation"):
+ filters.append("Precipitation ge {}".format(params.get("minPrecipitation")))
+ if params.get("maxPrecipitation"):
+ filters.append("Precipitation le {}".format(params.get("maxPrecipitation")))
+ return list(self.table_client.query_entities(" and ".join(filters)))
+```
+
+### Insert data using a TableEntity object
+
+The simplest way to add data to a table is by using a `TableEntity` object. In this example, data is mapped from an input model object to a `TableEntity` object. The properties on the input object representing the weather station name and observation date/time are mapped to the `PartitionKey` and `RowKey` properties respectively, which together form a unique key for the row in the table. The additional properties on the input model object are then mapped to dictionary properties on the `TableEntity` object. Finally, the `create_entity` method on the `TableClient` object is used to insert data into the table.
+
+Modify the `insert_entity` function in the example application to contain the following code.
+
+```python
+def insert_entity(self):
+ entity = self.deserialize()
+ return self.table_client.create_entity(entity)
+
+@staticmethod
+def deserialize():
+ params = {key: request.form.get(key) for key in request.form.keys()}
+ params["PartitionKey"] = params.pop("StationName")
+ params["RowKey"] = "{} {}".format(params.pop("ObservationDate"), params.pop("ObservationTime"))
+ return params
+```
+
+### Upsert data using a TableEntity object
+
+If you try to insert a row into a table with a partition key/row key combination that already exists in that table, you will receive an error. For this reason, it is often preferable to use the `upsert_entity` method instead of the `create_entity` method when adding rows to a table. If the given partition key/row key combination already exists in the table, the `upsert_entity` method updates the existing row. Otherwise, the row is added to the table.
+
+```python
+def upsert_entity(self):
+ entity = self.deserialize()
+ return self.table_client.upsert_entity(entity)
+
+@staticmethod
+def deserialize():
+ params = {key: request.form.get(key) for key in request.form.keys()}
+ params["PartitionKey"] = params.pop("StationName")
+ params["RowKey"] = "{} {}".format(params.pop("ObservationDate"), params.pop("ObservationTime"))
+ return params
+```
+
+### Insert or upsert data with variable properties
+
+One of the advantages of using the Azure Cosmos DB for Table is that if an object being loaded to a table contains any new properties, those properties are automatically added to the table and the values are stored in Azure Cosmos DB. There is no need to run DDL statements like ALTER TABLE to add columns, as you would in a traditional database.
+
+This model gives your application flexibility when dealing with data sources that may add or modify what data needs to be captured over time or when different inputs provide different data to your application. In the sample application, we can simulate a weather station that sends not just the base weather data but also some additional values. When an object with these new properties is stored in the table for the first time, the corresponding properties (columns) will be automatically added to the table.
+
+To insert or upsert such an object using the API for Table, map the properties of the expandable object into a `TableEntity` object and use the `create_entity` or `upsert_entity` methods on the `TableClient` object as appropriate.
+
+In the sample application, the `insert_entity` and `upsert_entity` functions shown earlier also handle objects with variable properties:
+
+```python
+def insert_entity(self):
+ entity = self.deserialize()
+ return self.table_client.create_entity(entity)
+
+def upsert_entity(self):
+ entity = self.deserialize()
+ return self.table_client.upsert_entity(entity)
+
+@staticmethod
+def deserialize():
+ params = {key: request.form.get(key) for key in request.form.keys()}
+ params["PartitionKey"] = params.pop("StationName")
+ params["RowKey"] = "{} {}".format(params.pop("ObservationDate"), params.pop("ObservationTime"))
+ return params
+```
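+
+For example, a concrete sketch of such a call (assuming a `table_client` as above; the extra field names are illustrative) shows how previously unseen properties are simply stored alongside the rest of the entity:
+
+```python
+observation = {
+    "PartitionKey": "Chicago",
+    "RowKey": "2021-07-01 12:00 AM",
+    "Temperature": 73,
+    # New, previously unseen properties are added to the table automatically
+    "WindSpeed": 8,
+    "WindDirection": "NW",
+}
+
+table_client.upsert_entity(observation)
+```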
+
+### Update an entity
+
+Entities can be updated by calling the `update_entity` method on the `TableClient` object.
+
+In the sample app, the `update_deserialize` method builds the entity from the submitted form values, and the `update_entity` method on the `TableClient` object then saves those changes to the table.
+
+```python
+def update_entity(self):
+ entity = self.update_deserialize()
+ return self.table_client.update_entity(entity)
+
+@staticmethod
+def update_deserialize():
+ params = {key: request.form.get(key) for key in request.form.keys()}
+ params["PartitionKey"] = params.pop("StationName")
+ params["RowKey"] = params.pop("ObservationDate")
+ return params
+```
+
+### Remove an entity
+
+To remove an entity from a table, call the `delete_entity` method on the `TableClient` object with the partition key and row key of the object.
+
+```python
+def delete_entity(self):
+ partition_key = request.form.get("StationName")
+ row_key = request.form.get("ObservationDate")
+ return self.table_client.delete_entity(partition_key, row_key)
+```
+
+## 7 - Run the code
+
+Run the sample application to interact with the Azure Cosmos DB for Table. The first time you run the application, there will be no data because the table is empty. Use any of the buttons at the top of the application to add data to the table.
++
+Selecting the **Insert using Table Entity** button opens a dialog allowing you to insert or upsert a new row using a `TableEntity` object.
++
+Selecting the **Insert using Expandable Data** button brings up a dialog that enables you to insert an object with custom properties, demonstrating how the Azure Cosmos DB for Table automatically adds properties (columns) to the table when needed. Use the *Add Custom Field* button to add one or more new properties and demonstrate this capability.
++
+Use the **Insert Sample Data** button to load some sample data into your Azure Cosmos DB Table.
++
+Select the **Filter Results** item in the top menu to be taken to the Filter Results page. On this page, fill out the filter criteria to demonstrate how a filter clause can be built and passed to the Azure Cosmos DB for Table.
++
+## Clean up resources
+
+When you are finished with the sample application, you should remove all Azure resources related to this article from your Azure account. You can do this by deleting the resource group.
+
+### [Azure portal](#tab/azure-portal)
+
+A resource group can be deleted in the [Azure portal](https://portal.azure.com/) by doing the following steps.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Delete resource group step 1](./includes/quickstart-python/remove-resource-group-1.md)] | :::image type="content" source="./media/quickstart-python/azure-portal-remove-resource-group-1-240px.png" alt-text="A screenshot showing how to search for a resource group." lightbox="./media/quickstart-python/azure-portal-remove-resource-group-1.png"::: |
+| [!INCLUDE [Delete resource group step 2](./includes/quickstart-python/remove-resource-group-2.md)] | :::image type="content" source="./media/quickstart-python/azure-portal-remove-resource-group-2-240px.png" alt-text="A screenshot showing the location of the Delete resource group button." lightbox="./media/quickstart-python/azure-portal-remove-resource-group-2.png"::: |
+| [!INCLUDE [Delete resource group step 3](./includes/quickstart-python/remove-resource-group-3.md)] | :::image type="content" source="./media/quickstart-python/azure-portal-remove-resource-group-3-240px.png" alt-text="A screenshot showing the confirmation dialog for deleting a resource group." lightbox="./media/quickstart-python/azure-portal-remove-resource-group-3.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az-group-delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurecli
+az group delete --name $RESOURCE_GROUP_NAME
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To delete a resource group using Azure PowerShell, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurepowershell
+Remove-AzResourceGroup -Name $resourceGroupName
+```
+++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run a Python app to add table data. Now you can query your data using the API for Table.
+
+> [!div class="nextstepaction"]
+> [Import table data to the API for Table](import.md)
cosmos-db Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/resource-manager-templates.md
Title: Resource Manager templates for Azure Cosmos DB Table API
-description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB Table API.
+ Title: Resource Manager templates for Azure Cosmos DB for Table
+description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB for Table.
-++ Last updated 05/19/2020
-# Manage Azure Cosmos DB Table API resources using Azure Resource Manager templates
+# Manage Azure Cosmos DB for Table resources using Azure Resource Manager templates
In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts, databases, and containers.
-This article has examples for Table API accounts only, to find examples for other API type accounts see: use Azure Resource Manager templates with Azure Cosmos DB's API for [Cassandra](../cassandr) articles.
+This article has examples for API for Table accounts only. To find examples for other API types, see the articles on using Azure Resource Manager templates with Azure Cosmos DB's API for [Cassandra](../cassandr).
> [!IMPORTANT] > > * Account names are limited to 44 characters, all lowercase. > * To change the throughput values, redeploy the template with updated RU/s.
-> * When you add or remove locations to an Azure Cosmos account, you can't simultaneously modify other properties. These operations must be done separately.
+> * When you add or remove locations to an Azure Cosmos DB account, you can't simultaneously modify other properties. These operations must be done separately.
To create any of the Azure Cosmos DB resources below, copy the following example template into a new json file. You can optionally create a parameters json file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Azure Resource Manager templates including, [Azure portal](../../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../../azure-resource-manager/templates/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md) and [GitHub](../../azure-resource-manager/templates/deploy-to-azure-button.md). > [!TIP]
-> To enable shared throughput when using Table API, enable account-level throughput in the Azure portal.
+> To enable shared throughput when using API for Table, enable account-level throughput in the Azure portal.
<a id="create-autoscale"></a>
-## Azure Cosmos account for Table with autoscale throughput
+## Azure Cosmos DB account for Table with autoscale throughput
-This template will create an Azure Cosmos account for Table API with one table with autoscale throughput. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+This template creates an Azure Cosmos DB account for API for Table with one table that has autoscale throughput. This template is also available for one-click deploy from the Azure Quickstart Templates gallery.
[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-table-autoscale%2Fazuredeploy.json)
This template will create an Azure Cosmos account for Table API with one table w
<a id="create-manual"></a>
-## Azure Cosmos account for Table with standard provisioned throughput
+## Azure Cosmos DB account for Table with standard provisioned throughput
-This template will create an Azure Cosmos account for Table API with one table with standard throughput. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+This template creates an Azure Cosmos DB account for API for Table with one table that has standard throughput. This template is also available for one-click deploy from the Azure Quickstart Templates gallery.
[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-table%2Fazuredeploy.json)
Here are some additional resources:
* [Azure Resource Manager documentation](../../azure-resource-manager/index.yml) * [Azure Cosmos DB resource provider schema](/azure/templates/microsoft.documentdb/allversions) * [Azure Cosmos DB Quickstart templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.DocumentDB&pageNumber=1&sort=Popular)
-* [Troubleshoot common Azure Resource Manager deployment errors](../../azure-resource-manager/templates/common-deployment-errors.md)
+* [Troubleshoot common Azure Resource Manager deployment errors](../../azure-resource-manager/templates/common-deployment-errors.md)
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/samples-dotnet.md
Title: Examples for Azure Cosmos DB Table API SDK for .NET
-description: Find .NET SDK examples on GitHub for common tasks using the Azure Cosmos DB Table API.
+ Title: Examples for Azure Cosmos DB for Table SDK for .NET
+description: Find .NET SDK examples on GitHub for common tasks using the Azure Cosmos DB for Table.
-+ ms.devlang: csharp Last updated 07/06/2022-+
-# Examples for Azure Cosmos DB Table API SDK for .NET
+# Examples for Azure Cosmos DB for Table SDK for .NET
> [!div class="op_single_selector"] > > * [.NET](samples-dotnet.md) >
-The [cosmos-db-table-api-dotnet-samples](https://github.com/azure-samples/cosmos-db-table-api-dotnet-samples) GitHub repository includes multiple sample projects. These projects illustrate how to perform common operations on Azure Cosmos DB Table API resources.
+The [cosmos-db-table-api-dotnet-samples](https://github.com/azure-samples/cosmos-db-table-api-dotnet-samples) GitHub repository includes multiple sample projects. These projects illustrate how to perform common operations on Azure Cosmos DB for Table resources.
## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* Azure Cosmos DB Table API account. [Create a Table API account](how-to-create-account.md).
+* Azure Cosmos DB for Table account. [Create an API for Table account](how-to-create-account.md).
* [.NET 6.0 or later](https://dotnet.microsoft.com/download) * [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
The sample projects are all self-contained and are designed to be run individually
## Next steps
-Dive deeper into the SDK to read data and manage your Azure Cosmos DB Table API resources.
+Dive deeper into the SDK to read data and manage your Azure Cosmos DB for Table resources.
> [!div class="nextstepaction"]
-> [Get started with Azure Cosmos DB Table API and .NET](how-to-dotnet-get-started.md)
+> [Get started with Azure Cosmos DB for Table and .NET](how-to-dotnet-get-started.md)
cosmos-db Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/support.md
+
+ Title: Azure Table Storage support in Azure Cosmos DB
+description: Learn how Azure Cosmos DB for Table and Azure Storage Tables work together by sharing the same table data model and operations
+++ Last updated : 11/03/2021+++
+ms.devlang: cpp, csharp, java, javascript, php, python, ruby
+++
+# Developing with Azure Cosmos DB for Table and Azure Table storage
+
+Azure Cosmos DB for Table and Azure Table storage share the same table data model and expose the same create, delete, update, and query operations through their SDKs.
+
+> [!NOTE]
+> The [serverless capacity mode](../serverless.md) is now available on Azure Cosmos DB's API for Table.
++
+## Azure SDKs
+
+### Current release
+
+The following SDK packages work with both Azure Cosmos DB for Table and Azure Table storage.
+
+* **.NET** - Use the [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables/) available on NuGet.
+
+* **Python** - Use the [azure-data-tables](https://pypi.org/project/azure-data-tables/) available from PyPi.
+
+* **JavaScript/TypeScript** - Use the [@azure/data-tables](https://www.npmjs.com/package/@azure/data-tables) package available on npm.js.
+
+* **Java** - Use the [azure-data-tables](https://mvnrepository.com/artifact/com.azure/azure-data-tables/12.0.0) package available on Maven.
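+
+For example, with the Python package above, the same client code can target either service; only the connection string differs. The following is a sketch with placeholder connection strings and a placeholder table name:
+
+```python
+from azure.data.tables import TableServiceClient
+
+# Placeholder connection strings: substitute your own values
+cosmos_conn_str = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;TableEndpoint=https://<account>.table.cosmos.azure.com:443/;"
+storage_conn_str = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
+
+# The same code works against an Azure Cosmos DB for Table account or an
+# Azure Table storage account, depending on which connection string is used.
+service = TableServiceClient.from_connection_string(cosmos_conn_str)
+table_client = service.get_table_client("MyTable")  # placeholder table name
+```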
+
+### Prior releases
+
+The following SDK packages work only with Azure Cosmos DB for Table.
+
+* **.NET** - [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables/) available on NuGet. The Azure Tables client library can seamlessly target either Azure Table storage or Azure Cosmos DB table service endpoints with no code changes.
+
+* **Python** - [azure-cosmosdb-table](https://pypi.org/project/azure-cosmosdb-table/) available from PyPi. This SDK connects with both Azure Table storage and Azure Cosmos DB for Table.
+
+* **JavaScript/TypeScript** - [azure-storage](https://www.npmjs.com/package/azure-storage) package available on npm.js. This Azure Storage SDK has the ability to connect to Azure Cosmos DB accounts using the API for Table.
+
+* **Java** - [Microsoft Azure Storage Client SDK for Java](https://mvnrepository.com/artifact/com.microsoft.azure/azure-storage) on Maven. This Azure Storage SDK has the ability to connect to Azure Cosmos DB accounts using the API for Table.
+
+* **C++** - [Azure Storage Client Library for C++](https://github.com/Azure/azure-storage-cpp/). This library enables you to build applications against Azure Storage.
+
+* **Ruby** - [Azure Storage Table Client Library for Ruby](https://github.com/azure/azure-storage-ruby/tree/master/table). This project provides a Ruby package that makes it easy to access Azure storage Table services.
+
+* **PHP** - [Azure Storage Table PHP Client Library](https://github.com/Azure/azure-storage-php/tree/master/azure-storage-table). This project provides a PHP client library that makes it easy to access Azure storage Table services.
+
+* **PowerShell** - [AzureRmStorageTable PowerShell module](https://www.powershellgallery.com/packages/AzureRmStorageTable). This PowerShell module has cmdlets to work with storage Tables.
cosmos-db Table Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/table-import.md
- Title: Migrate existing data to a Table API account in Azure Cosmos DB
-description: Learn how to migrate or import on-premises or cloud data to an Azure Table API account in Azure Cosmos DB.
------ Previously updated : 03/03/2022---
-# Migrate your data to an Azure Cosmos DB Table API account
-
-This tutorial provides instructions on importing data for use with the Azure Cosmos DB [Table API](introduction.md). If you have data stored in Azure Table Storage, you can use the **Data migration tool** to import your data to the Azure Cosmos DB Table API.
--
-## Prerequisites
-
-* **Increase throughput:** The duration of your data migration depends on the amount of throughput you set up for an individual container or a set of containers. Be sure to increase the throughput for larger data migrations. After you've completed the migration, decrease the throughput to save costs.
-
-* **Create Azure Cosmos DB resources:** Before you start migrating the data, create all your tables from the Azure portal. If you're migrating to an Azure Cosmos DB account that has database-level throughput, make sure to provide a partition key when you create the Azure Cosmos DB tables.
-
-## Data migration tool
-
-> [!IMPORTANT]
-> Ownership of the Data Migration Tool has been transferred to a 3rd party who is acting as maintainers of this tool which is open source. The tool is currently being updated to use the latest nuget packages so does not currently work on the main branch. There is a fork of this tool which does work. You can learn more [here](https://github.com/Azure/azure-documentdb-datamigrationtool/issues/89).
-
-You can use the command-line data migration tool (dt.exe) in Azure Cosmos DB to import your existing Azure Table Storage data to a Table API account.
-
-To migrate table data:
-
-1. Download the migration tool from [GitHub](https://github.com/azure/azure-documentdb-datamigrationtool/tree/archive).
-2. Run `dt.exe` by using the command-line arguments for your scenario. `dt.exe` takes a command in the following format:
-
- ```bash
- dt.exe [/<option>:<value>] /s:<source-name> [/s.<source-option>:<value>] /t:<target-name> [/t.<target-option>:<value>]
- ```
-
-The supported options for this command are:
-
-* **/ErrorLog:** Optional. Name of the CSV file to redirect data transfer failures.
-* **/OverwriteErrorLog:** Optional. Overwrite the error log file.
-* **/ProgressUpdateInterval:** Optional, default is `00:00:01`. The time interval to refresh on-screen data transfer progress.
-* **/ErrorDetails:** Optional, default is `None`. Specifies that detailed error information should be displayed for the following errors: `None`, `Critical`, or `All`.
-* **/EnableCosmosTableLog:** Optional. Direct the log to an Azure Cosmos DB table account. If set, this defaults to the destination account connection string unless `/CosmosTableLogConnectionString` is also provided. This is useful if multiple instances of the tool are being run simultaneously.
-* **/CosmosTableLogConnectionString:** Optional. The connection string to direct the log to a remote Azure Cosmos DB table account.
-
-### Command-line source settings
-
-Use the following source options when you define Azure Table Storage as the source of the migration.
-
-* **/s:AzureTable:** Reads data from Table Storage.
-* **/s.ConnectionString:** Connection string for the table endpoint. You can retrieve this from the Azure portal.
-* **/s.LocationMode:** Optional, default is `PrimaryOnly`. Specifies which location mode to use when connecting to Table Storage: `PrimaryOnly`, `PrimaryThenSecondary`, `SecondaryOnly`, `SecondaryThenPrimary`.
-* **/s.Table:** Name of the Azure table.
-* **/s.InternalFields:** Set to `All` for table migration, because `RowKey` and `PartitionKey` are required for import.
-* **/s.Filter:** Optional. Filter string to apply.
-* **/s.Projection:** Optional. List of columns to select,
-
-To retrieve the source connection string when you import from Table Storage, open the Azure portal. Select **Storage accounts** > **Account** > **Access keys**, and copy the **Connection string**.
--
-### Command-line target settings
-
-Use the following target options when you define the Azure Cosmos DB Table API as the target of the migration.
-
-* **/t:TableAPIBulk:** Uploads data into the Azure Cosmos DB Table API in batches.
-* **/t.ConnectionString:** The connection string for the table endpoint.
-* **/t.TableName:** Specifies the name of the table to write to.
-* **/t.Overwrite:** Optional, default is `false`. Specifies if existing values should be overwritten.
-* **/t.MaxInputBufferSize:** Optional, default is `1GB`. Approximate estimate of input bytes to buffer before flushing data to sink.
-* **/t.Throughput:** Optional, service defaults if not specified. Specifies throughput to configure for table.
-* **/t.MaxBatchSize:** Optional, default is `2MB`. Specify the batch size in bytes.
-
-### Sample command: Source is Table Storage
-
-Here's a command-line sample showing how to import from Table Storage to the Table API:
-
-```bash
-dt /s:AzureTable /s.ConnectionString:DefaultEndpointsProtocol=https;AccountName=<Azure Table storage account name>;AccountKey=<Account Key>;EndpointSuffix=core.windows.net /s.Table:<Table name> /t:TableAPIBulk /t.ConnectionString:DefaultEndpointsProtocol=https;AccountName=<Azure Cosmos DB account name>;AccountKey=<Azure Cosmos DB account key>;TableEndpoint=https://<Account name>.table.cosmos.azure.com:443 /t.TableName:<Table name> /t.Overwrite
-```
-## Next steps
-
-Learn how to query data by using the Azure Cosmos DB Table API.
-
-> [!div class="nextstepaction"]
->[How to query data?](tutorial-query-table.md)
----
cosmos-db Table Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/table-support.md
- Title: Azure Table Storage support in Azure Cosmos DB
-description: Learn how Azure Cosmos DB Table API and Azure Storage Tables work together by sharing the same table data model an operations
--- Previously updated : 11/03/2021-----
-# Developing with Azure Cosmos DB Table API and Azure Table storage
-
-Azure Cosmos DB Table API and Azure Table storage share the same table data model and expose the same create, delete, update, and query operations through their SDKs.
-
-> [!NOTE]
-> The [serverless capacity mode](../serverless.md) is now available on Azure Cosmos DB's Table API.
--
-## Azure SDKs
-
-### Current release
-
-The following SDK packages work with both the Azure Cosmos Table API and Azure Table storage.
-
-* **.NET** - Use the [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables/) available on NuGet.
-
-* **Python** - Use the [azure-data-tables](https://pypi.org/project/azure-data-tables/) available from PyPi.
-
-* **JavaScript/TypeScript** - Use the [@azure/data-tables](https://www.npmjs.com/package/@azure/data-tables) package available on npm.js.
-
-* **Java** - Use the [azure-data-tables](https://mvnrepository.com/artifact/com.azure/azure-data-tables/12.0.0) package available on Maven.
-
-### Prior releases
-
-The following SDK packages work only with Azure Cosmos DB Table API.
-
-* **.NET** - [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables/) available on NuGet. The Azure Tables client library can seamlessly target either Azure Table storage or Azure Cosmos DB table service endpoints with no code changes.
-
-* **Python** - [azure-cosmosdb-table](https://pypi.org/project/azure-cosmosdb-table/) available from PyPi. This SDK connects with both Azure Table storage and Azure Cosmos DB Table API.
-
-* **JavaScript/TypeScript** - [azure-storage](https://www.npmjs.com/package/azure-storage) package available on npm.js. This Azure Storage SDK has the ability to connect to Azure Cosmos DB accounts using the Table API.
-
-* **Java** - [Microsoft Azure Storage Client SDK for Java](https://mvnrepository.com/artifact/com.microsoft.azure/azure-storage) on Maven. This Azure Storage SDK has the ability to connect to Azure Cosmos DB accounts using the Table API.
-
-* **C++** - [Azure Storage Client Library for C++](https://github.com/Azure/azure-storage-cpp/). This library enables you to build applications against Azure Storage.
-
-* **Ruby** - [Azure Storage Table Client Library for Ruby](https://github.com/azure/azure-storage-ruby/tree/master/table). This project provides a Ruby package that makes it easy to access Azure storage Table services.
-
-* **PHP** - [Azure Storage Table PHP Client Library](https://github.com/Azure/azure-storage-php/tree/master/azure-storage-table). This project provides a PHP client library that makes it easy to access Azure storage Table services.
-
-* **PowerShell** - [AzureRmStorageTable PowerShell module](https://www.powershellgallery.com/packages/AzureRmStorageTable). This PowerShell module has cmdlets to work with storage Tables.
cosmos-db Tutorial Global Distribution Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-global-distribution-table.md
- Title: Azure Cosmos DB global distribution tutorial for Table API
-description: Learn how global distribution works in Azure Cosmos DB Table API accounts and how to configure the preferred list of regions
----- Previously updated : 01/30/2020--
-# Set up Azure Cosmos DB global distribution using the Table API
-
-This article covers the following tasks:
-
-> [!div class="checklist"]
-> * Configure global distribution using the Azure portal
-> * Configure global distribution using the [Table API](introduction.md)
---
-## Connecting to a preferred region using the Table API
-
-In order to take advantage of the [global distribution](../distribute-data-globally.md), client applications should specify the current location where their application is running. This is done by setting the `CosmosExecutorConfiguration.CurrentRegion` property. The `CurrentRegion` property should contain a single location. Each client instance can specify their own region for low latency reads. The region must be named by using their [display names](/previous-versions/azure/reference/gg441293(v=azure.100)) such as "West US".
-
-The Azure Cosmos DB Table API SDK automatically picks the best endpoint to communicate with based on the account configuration and current regional availability. It prioritizes the closest region to provide better latency to clients. After you set the current `CurrentRegion` property, read and write requests are directed as follows:
-
-* **Read requests:** All read requests are sent to the configured `CurrentRegion`. Based on the proximity, the SDK automatically selects a fallback geo-replicated region for high availability.
-
-* **Write requests:** The SDK automatically sends all write requests to the current write region. In an account with multi-region writes, current region will serve the writes requests as well. Based on the proximity, the SDK automatically selects a fallback geo-replicated region for high availability.
-
-If you don't specify the `CurrentRegion` property, the SDK uses the current write region for all operations.
-
-For example, if an Azure Cosmos account is in "West US" and "East US" regions. If "West US" is the write region and the application is present in "East US". If the CurrentRegion property is not configured, all the read and write requests are always directed to the "West US" region. If the CurrentRegion property is configured, all the read requests are served from "East US" region.
-
-## Next steps
-
-In this tutorial, you've done the following:
-
-> [!div class="checklist"]
-> * Configure global distribution using the Azure portal
-> * Configure global distribution using the Azure Cosmos DB Table APIs
cosmos-db Tutorial Global Distribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-global-distribution.md
+
+ Title: Azure Cosmos DB global distribution tutorial for API for Table
+description: Learn how global distribution works in Azure Cosmos DB for Table accounts and how to configure the preferred list of regions
++++++ Last updated : 01/30/2020++
+# Set up Azure Cosmos DB global distribution using the API for Table
+
+This article covers the following tasks:
+
+> [!div class="checklist"]
+> * Configure global distribution using the Azure portal
+> * Configure global distribution using the [API for Table](introduction.md)
+++
+## Connecting to a preferred region using the API for Table
+
+In order to take advantage of [global distribution](../distribute-data-globally.md), client applications should specify the current location where the application is running. This is done by setting the `CosmosExecutorConfiguration.CurrentRegion` property. The `CurrentRegion` property should contain a single location. Each client instance can specify its own region for low-latency reads. Regions must be named by using their [display names](/previous-versions/azure/reference/gg441293(v=azure.100)), such as "West US".
+
+The Azure Cosmos DB for Table SDK automatically picks the best endpoint to communicate with based on the account configuration and current regional availability. It prioritizes the closest region to provide better latency to clients. After you set the current `CurrentRegion` property, read and write requests are directed as follows:
+
+* **Read requests:** All read requests are sent to the configured `CurrentRegion`. Based on the proximity, the SDK automatically selects a fallback geo-replicated region for high availability.
+
+* **Write requests:** The SDK automatically sends all write requests to the current write region. In an account with multi-region writes, the current region serves the write requests as well. Based on the proximity, the SDK automatically selects a fallback geo-replicated region for high availability.
+
+If you don't specify the `CurrentRegion` property, the SDK uses the current write region for all operations.
+
+For example, suppose an Azure Cosmos DB account is in the "West US" and "East US" regions, "West US" is the write region, and the application is running in "East US". If the `CurrentRegion` property is not configured, all read and write requests are directed to the "West US" region. If the `CurrentRegion` property is configured, all read requests are served from the "East US" region.
+
+## Next steps
+
+In this tutorial, you've done the following:
+
+> [!div class="checklist"]
+> * Configure global distribution using the Azure portal
+> * Configure global distribution using the Azure Cosmos DB Table APIs
cosmos-db Tutorial Query Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-query-table.md
- Title: How to query table data in Azure Cosmos DB?
-description: Learn how to query data stored in the Azure Cosmos DB Table API account by using OData filters and LINQ queries
----- Previously updated : 06/05/2020----
-# Tutorial: Query Azure Cosmos DB by using the Table API
-
-The Azure Cosmos DB [Table API](introduction.md) supports OData and [LINQ](/rest/api/storageservices/fileservices/writing-linq-queries-against-the-table-service) queries against key/value (table) data.
-
-This article covers the following tasks:
-
-> [!div class="checklist"]
-> * Querying data with the Table API
-
-The queries in this article use the following sample `People` table:
-
-| PartitionKey | RowKey | Email | PhoneNumber |
-| | | | |
-| Harp | Walter | Walter@contoso.com| 425-555-0101 |
-| Smith | Ben | Ben@contoso.com| 425-555-0102 |
-| Smith | Jeff | Jeff@contoso.com| 425-555-0104 |
-
-See [Querying Tables and Entities](/rest/api/storageservices/fileservices/querying-tables-and-entities) for details on how to query by using the Table API.
-
-For more information on the premium capabilities that Azure Cosmos DB offers, see [Azure Cosmos DB Table API](introduction.md) and [Develop with the Table API in .NET](tutorial-develop-table-dotnet.md).
-
-## Prerequisites
-
-For these queries to work, you must have an Azure Cosmos DB account and have entity data in the container. Don't have any of those? Complete the [five-minute quickstart](create-table-dotnet.md) or the [developer tutorial](tutorial-develop-table-dotnet.md) to create an account and populate your database.
-
-## Query on PartitionKey and RowKey
-
-Because the PartitionKey and RowKey properties form an entity's primary key, you can use the following special syntax to identify the entity:
-
-**Query**
-
-```
-https://<mytableendpoint>/People(PartitionKey='Harp',RowKey='Walter')
-```
-
-**Results**
-
-| PartitionKey | RowKey | Email | PhoneNumber |
-| | | | |
-| Harp | Walter | Walter@contoso.com| 425-555-0104 |
-
-Alternatively, you can specify these properties as part of the `$filter` option, as shown in the following section. Note that the key property names and constant values are case-sensitive. Both the PartitionKey and RowKey properties are of type String.
-
-## Query by using an OData filter
-
-When you're constructing a filter string, keep these rules in mind:
-
-* Use the logical operators defined by the OData Protocol Specification to compare a property to a value. Note that you can't compare a property to a dynamic value. One side of the expression must be a constant.
-* The property name, operator, and constant value must be separated by URL-encoded spaces. A space is URL-encoded as `%20`.
-* All parts of the filter string are case-sensitive.
-* The constant value must be of the same data type as the property in order for the filter to return valid results. For more information about supported property types, see [Understanding the Table Service Data Model](/rest/api/storageservices/understanding-the-table-service-data-model).
-
-Here's an example query that shows how to filter by the PartitionKey and Email properties by using an OData `$filter`.
-
-**Query**
-
-```
-https://<mytableapi-endpoint>/People()?$filter=PartitionKey%20eq%20'Smith'%20and%20Email%20eq%20'Ben@contoso.com'
-```
-
-For more information on how to construct filter expressions for various data types, see [Querying Tables and Entities](/rest/api/storageservices/querying-tables-and-entities).
-
-**Results**
-
-| PartitionKey | RowKey | Email | PhoneNumber |
-| | | | |
-| Smith |Ben | Ben@contoso.com| 425-555-0102 |
-
-The queries on datetime properties don't return any data when executed in Azure Cosmos DB's Table API. While the Azure Table storage stores date values with time granularity of ticks, the Table API in Azure Cosmos DB uses the `_ts` property. The `_ts` property is at a second level of granularity, which isn't an OData filter. So, the queries on timestamp properties are blocked by Azure Cosmos DB. As a workaround, you can define a custom datetime or long data type property and set the date value from the client.
-
-## Query by using LINQ
-You can also query by using LINQ, which translates to the corresponding OData query expressions. Here's an example of how to build queries by using the .NET SDK:
-
-```csharp
-IQueryable<CustomerEntity> linqQuery = table.CreateQuery<CustomerEntity>()
- .Where(x => x.PartitionKey == "4")
- .Select(x => new CustomerEntity() { PartitionKey = x.PartitionKey, RowKey = x.RowKey, Email = x.Email });
-```
-
-## Next steps
-
-In this tutorial, you've done the following:
-
-> [!div class="checklist"]
-> * Learned how to query by using the Table API
-
-You can now proceed to the next tutorial to learn how to distribute your data globally.
-
-> [!div class="nextstepaction"]
-> [Distribute your data globally](tutorial-global-distribution-table.md)
cosmos-db Tutorial Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-query.md
+
+ Title: How to query table data in Azure Cosmos DB?
+description: Learn how to query data stored in the Azure Cosmos DB for Table account by using OData filters and LINQ queries
+++++ Last updated : 06/05/2020+
+ms.devlang: csharp
+++
+# Tutorial: Query Azure Cosmos DB by using the API for Table
+
+The Azure Cosmos DB [API for Table](introduction.md) supports OData and [LINQ](/rest/api/storageservices/fileservices/writing-linq-queries-against-the-table-service) queries against key/value (table) data.
+
+This article covers the following tasks:
+
+> [!div class="checklist"]
+> * Querying data with the API for Table
+
+The queries in this article use the following sample `People` table:
+
+| PartitionKey | RowKey | Email | PhoneNumber |
+| | | | |
+| Harp | Walter | Walter@contoso.com| 425-555-0101 |
+| Smith | Ben | Ben@contoso.com| 425-555-0102 |
+| Smith | Jeff | Jeff@contoso.com| 425-555-0104 |
+
+See [Querying Tables and Entities](/rest/api/storageservices/fileservices/querying-tables-and-entities) for details on how to query by using the API for Table.
+
+For more information on the premium capabilities that Azure Cosmos DB offers, see [Azure Cosmos DB for Table](introduction.md) and [Develop with the API for Table in .NET](tutorial-develop-table-dotnet.md).
+
+## Prerequisites
+
+For these queries to work, you must have an Azure Cosmos DB account and have entity data in the container. Don't have any of those? Complete the [five-minute quickstart](quickstart-dotnet.md) or the [developer tutorial](tutorial-develop-table-dotnet.md) to create an account and populate your database.
+
+## Query on PartitionKey and RowKey
+
+Because the PartitionKey and RowKey properties form an entity's primary key, you can use the following special syntax to identify the entity:
+
+**Query**
+
+```
+https://<mytableendpoint>/People(PartitionKey='Harp',RowKey='Walter')
+```
+
+**Results**
+
+| PartitionKey | RowKey | Email | PhoneNumber |
+| | | | |
+| Harp | Walter | Walter@contoso.com| 425-555-0104 |
+
+Alternatively, you can specify these properties as part of the `$filter` option, as shown in the following section. Note that the key property names and constant values are case-sensitive. Both the PartitionKey and RowKey properties are of type String.
+
+## Query by using an OData filter
+
+When you're constructing a filter string, keep these rules in mind:
+
+* Use the logical operators defined by the OData Protocol Specification to compare a property to a value. Note that you can't compare a property to a dynamic value. One side of the expression must be a constant.
+* The property name, operator, and constant value must be separated by URL-encoded spaces. A space is URL-encoded as `%20`.
+* All parts of the filter string are case-sensitive.
+* The constant value must be of the same data type as the property in order for the filter to return valid results. For more information about supported property types, see [Understanding the Table Service Data Model](/rest/api/storageservices/understanding-the-table-service-data-model).
+
+Here's an example query that shows how to filter by the PartitionKey and Email properties by using an OData `$filter`.
+
+**Query**
+
+```
+https://<mytableapi-endpoint>/People()?$filter=PartitionKey%20eq%20'Smith'%20and%20Email%20eq%20'Ben@contoso.com'
+```
+
+For more information on how to construct filter expressions for various data types, see [Querying Tables and Entities](/rest/api/storageservices/querying-tables-and-entities).
+
+**Results**
+
+| PartitionKey | RowKey | Email | PhoneNumber |
+| --- | --- | --- | --- |
+| Smith | Ben | Ben@contoso.com| 425-555-0102 |
+
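+The same filter can also be built from code. Here's a sketch, again assuming the Microsoft.Azure.Cosmos.Table SDK, a `CloudTable` named `table`, and a hypothetical `CustomerEntity` class, that generates the equivalent OData expression:
+
+```csharp
+// PartitionKey eq 'Smith' and Email eq 'Ben@contoso.com' (sketch; names are assumptions).
+string partitionFilter = TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "Smith");
+string emailFilter = TableQuery.GenerateFilterCondition("Email", QueryComparisons.Equal, "Ben@contoso.com");
+
+TableQuery<CustomerEntity> query = new TableQuery<CustomerEntity>()
+    .Where(TableQuery.CombineFilters(partitionFilter, TableOperators.And, emailFilter));
+
+foreach (CustomerEntity entity in table.ExecuteQuery(query))
+{
+    Console.WriteLine($"{entity.PartitionKey} {entity.RowKey}: {entity.Email}");
+}
+```
+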
+Queries on datetime properties don't return any data when executed in Azure Cosmos DB's API for Table. While Azure Table storage stores date values with a time granularity of ticks, the API for Table in Azure Cosmos DB uses the `_ts` property, which has second-level granularity and isn't available to OData filters. Queries on timestamp properties are therefore blocked by Azure Cosmos DB. As a workaround, you can define a custom datetime or long data type property and set the date value from the client.
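+
+A sketch of that workaround, assuming the same SDK and a hypothetical `OrderDateTicks` property that the client populates (for example, with `DateTime.UtcNow.Ticks`) when it writes each entity:
+
+```csharp
+// Filter on a client-maintained tick count instead of the Timestamp property (sketch).
+long cutoff = new DateTime(2020, 6, 1, 0, 0, 0, DateTimeKind.Utc).Ticks;
+
+TableQuery<DynamicTableEntity> recentItems = new TableQuery<DynamicTableEntity>()
+    .Where(TableQuery.GenerateFilterConditionForLong("OrderDateTicks", QueryComparisons.GreaterThanOrEqual, cutoff));
+
+foreach (DynamicTableEntity item in table.ExecuteQuery(recentItems))
+{
+    Console.WriteLine(item.RowKey);
+}
+```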
+
+## Query by using LINQ
+You can also query by using LINQ, which translates to the corresponding OData query expressions. Here's an example of how to build queries by using the .NET SDK:
+
+```csharp
+// 'table' is assumed to be a CloudTable and CustomerEntity an ITableEntity implementation (for example, a TableEntity subclass).
+IQueryable<CustomerEntity> linqQuery = table.CreateQuery<CustomerEntity>()
+    .Where(x => x.PartitionKey == "4")
+    .Select(x => new CustomerEntity() { PartitionKey = x.PartitionKey, RowKey = x.RowKey, Email = x.Email });
+```
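+
+Enumerating `linqQuery` sends the translated OData query to the service and returns the matching entities (this relies on the same assumed `table` and `CustomerEntity` as above):
+
+```csharp
+foreach (CustomerEntity customer in linqQuery)
+{
+    Console.WriteLine($"{customer.RowKey}: {customer.Email}");
+}
+```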
+
+## Next steps
+
+In this tutorial, you've done the following:
+
+> [!div class="checklist"]
+> * Learned how to query by using the API for Table
+
+You can now proceed to the next tutorial to learn how to distribute your data globally.
+
+> [!div class="nextstepaction"]
+> [Distribute your data globally](tutorial-global-distribution.md)
cosmos-db Throughput Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/throughput-serverless.md
Last updated 05/09/2022 -+ # How to choose between provisioned throughput and serverless Azure Cosmos DB is available in two different capacity modes: [provisioned throughput](set-throughput.md) and [serverless](serverless.md). You can perform the exact same database operations in both modes, but the way you get billed for these operations is radically different. The following video explains the core differences between these modes and how they fit different types of workloads:
For more information, see [estimating serverless costs](plan-manage-costs.md#est
- Read more about [provisioning throughput on Azure Cosmos DB](set-throughput.md) - Read more about [Azure Cosmos DB serverless](serverless.md)-- Get familiar with the concept of [Request Units](request-units.md)
+- Get familiar with the concept of [Request Units](request-units.md)
cosmos-db Total Cost Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/total-cost-ownership.md
+ Last updated 08/26/2021 # Total Cost of Ownership (TCO) with Azure Cosmos DB Azure Cosmos DB is designed with fine-grained multi-tenancy and resource governance. This design allows Azure Cosmos DB to operate at significantly lower cost and helps users save. Currently, Azure Cosmos DB supports more than 280 customer workloads on a single machine with the density continuously increasing, and thousands of customer workloads within a cluster. It load balances replicas of customers' workloads across different machines in a cluster and across multiple clusters within a data center. To learn more, see [Azure Cosmos DB: Pushing the frontier of globally distributed databases](https://azure.microsoft.com/blog/azure-cosmos-db-pushing-the-frontier-of-globally-distributed-databases/). Because of resource governance, multi-tenancy, and native integration with the rest of the Azure infrastructure, Azure Cosmos DB is on average 4 to 6 times cheaper than MongoDB, Cassandra, or other OSS NoSQL databases running on IaaS and up to 10 times cheaper than database engines running on premises. See the paper on [The total cost of (non) ownership of a NoSQL database cloud service](https://documentdbportalstorage.blob.core.windows.net/papers/11.15.2017/NoSQL%20TCO%20paper.pdf).
The serverless provisioning model of Azure Cosmos DB eliminates the need to over
* **You automatically get all enterprise capabilities, at no additional cost.** Azure Cosmos DB offers the most comprehensive set of compliance certifications, security, and encryption at rest and in motion at no additional cost (compared to our competition). You automatically get regional availability anywhere in the world. You can span your database across any number of Azure regions and add or remove regions at any point.
-* **You can save up to 65% of costs with reserved capacity:** Azure Cosmos DB [reserved capacity](cosmos-db-reserved-capacity.md) helps you save money by pre-paying for Azure Cosmos DB resources for either one year or three years. You can significantly reduce your costs with one-year or three-year upfront commitments and save between 20-65% discounts when compared to the regular pricing. On your mission-critical workloads you can get better SLAs in terms of provisioning capacity.
+* **You can save up to 65% of costs with reserved capacity:** Azure Cosmos DB [reserved capacity](reserved-capacity.md) helps you save money by pre-paying for Azure Cosmos DB resources for either one year or three years. With one-year or three-year upfront commitments, you can significantly reduce your costs and get discounts of 20% to 65% compared to the regular pricing. For your mission-critical workloads, you can also get better SLAs in terms of provisioning capacity.
## Capacity planning
As an aid for estimating TCO, it can be helpful to start with capacity planning.
* Learn more about [Optimizing storage cost](optimize-cost-storage.md) * Learn more about [Optimizing the cost of reads and writes](optimize-cost-reads-writes.md) * Learn more about [Optimizing the cost of queries](./optimize-cost-reads-writes.md)
-* Learn more about [Optimizing the cost of multi-region Cosmos accounts](optimize-cost-regions.md)
+* Learn more about [Optimizing the cost of multi-region Azure Cosmos DB accounts](optimize-cost-regions.md)
* Learn more about [The Total Cost of (Non) Ownership of a NoSQL Database Cloud Service](https://documentdbportalstorage.blob.core.windows.net/papers/11.15.2017/NoSQL%20TCO%20paper.pdf)
cosmos-db Troubleshoot Local Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/troubleshoot-local-emulator.md
Last updated 09/17/2020-+ # Troubleshoot issues when using the Azure Cosmos DB Emulator The Azure Cosmos DB Emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Use the tips in this article to help troubleshoot issues you encounter when installing or using the Azure Cosmos DB Emulator.
If you installed a new version of the emulator and are experiencing errors, ensu
* If you receive a **Service Unavailable** message, the emulator might be failing to initialize the network stack. Check to see if you have the Pulse secure client or Juniper networks client installed, as their network filter drivers may cause the problem. Uninstalling third-party network filter drivers typically fixes the issue. Alternatively, start the emulator with /DisableRIO, which will switch the emulator network communication to regular Winsock.
-* If you encounter **"Forbidden","message":"Request is being made with a forbidden encryption in transit protocol or cipher. Check account SSL/TLS minimum allowed protocol setting..."** connectivity issues, this might be caused by global changes in the OS (for example Insider Preview Build 20170) or the browser settings that enable TLS 1.3 as default. Similar error might occur when using the SDK to execute a request against the Cosmos emulator, such as **Microsoft.Azure.Documents.DocumentClientException: Request is being made with a forbidden encryption in transit protocol or cipher. Check account SSL/TLS minimum allowed protocol setting**. This is expected at this time since Cosmos emulator only accepts and works with TLS 1.2 protocol. The recommended work-around is to change the settings and default to TLS 1.2; for instance, in IIS Manager navigate to "Sites" -> "Default Web Sites" and locate the "Site Bindings" for port 8081 and edit them to disable TLS 1.3. Similar operation can be performed for the Web browser via the "Settings" options.
+* If you encounter **"Forbidden","message":"Request is being made with a forbidden encryption in transit protocol or cipher. Check account SSL/TLS minimum allowed protocol setting..."** connectivity issues, this might be caused by global changes in the OS (for example Insider Preview Build 20170) or the browser settings that enable TLS 1.3 as default. A similar error might occur when using the SDK to execute a request against the Azure Cosmos DB emulator, such as **Microsoft.Azure.Documents.DocumentClientException: Request is being made with a forbidden encryption in transit protocol or cipher. Check account SSL/TLS minimum allowed protocol setting**. This is expected at this time because the Azure Cosmos DB emulator only accepts and works with the TLS 1.2 protocol. The recommended workaround is to change the settings and default to TLS 1.2; for instance, in IIS Manager navigate to "Sites" -> "Default Web Sites", locate the "Site Bindings" for port 8081, and edit them to disable TLS 1.3. A similar operation can be performed for the web browser via the "Settings" options.
* While the emulator is running, if your computer goes to sleep mode or runs any OS updates, you might see a **Service is currently unavailable** message. Reset the emulator's data by right-clicking the icon that appears in the Windows notification tray and selecting **Reset Data**.
cosmos-db Try Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/try-free.md
+ Previously updated : 08/26/2021 Last updated : 10/05/2021 # Try Azure Cosmos DB free
-[Try Azure Cosmos DB](https://aka.ms/trycosmosdb) makes it easy to try out Azure Cosmos DB for free before you commit. There's no credit card required to get started. Your account is free for 30 days. After expiration, a new sandbox account can be created. You can extend beyond 30 days for 24 hours. You can upgrade your active Try Azure Cosmos DB account at any time during the 30 day trial period. If you're using the SQL API, migrate your Try Azure Cosmos DB data to your upgraded account.
-
-This article walks you through how to create your account, limits, and upgrading your account. This article also walks through how to migrate your data from your Try Azure Cosmos DB sandbox to your own account using the SQL API.
+
+[Try Azure Cosmos DB](https://aka.ms/trycosmosdb) makes it easy to try out Azure Cosmos DB for free before you commit. There's no credit card required to get started. Your account is free for 30 days. After expiration, a new sandbox account can be created. You can extend your trial beyond 30 days for an additional 24 hours. You can upgrade your active Try Azure Cosmos DB account at any time during the 30-day trial period. If you're using the API for NoSQL, migrate your Try Azure Cosmos DB data to your upgraded account.
+
+This article walks you through how to create your account, the account limits, and how to upgrade your account. It also walks through how to migrate your data from your Try Azure Cosmos DB sandbox to your own account by using the API for NoSQL.
## Try Azure Cosmos DB limits
The following table lists the limits for the [Try Azure Cosmos DB](https://aka.m
| Resource | Limit | | | | | Duration of the trial | 30 days (a new trial can be requested after expiration) After expiration, the information stored is deleted. Prior to expiration you can upgrade your account and migrate the information stored. |
-| Maximum containers per subscription (SQL, Gremlin, Table API) | 1 |
-| Maximum containers per subscription (MongoDB API) | 3 |
+| Maximum containers per subscription (API for NoSQL, Gremlin, Table) | 1 |
+| Maximum containers per subscription (API for MongoDB) | 3 |
| Maximum throughput per container | 5,000 RU/s | | Maximum throughput per shared-throughput database | 20,000 RU/s | | Maximum total storage per account | 10 GB |
Try Azure Cosmos DB supports global distribution in only the Central US, North E
## Create your Try Azure Cosmos DB account
-From the [Try Azure Cosmos DB home page](https://aka.ms/trycosmosdb), select an API. Azure Cosmos DB provides five APIs: Core (SQL) and MongoDB for document data, Gremlin for graph data, Azure Table, and Cassandra.
+From the [Try Azure Cosmos DB home page](https://aka.ms/trycosmosdb), select an API. Azure Cosmos DB provides five APIs: NoSQL and MongoDB for document data, Gremlin for graph data, Azure Table, and Cassandra.
> [!NOTE] > Not sure which API will best meet your needs? To learn more about the APIs for Azure Cosmos DB, see [Choose an API in Azure Cosmos DB](choose-api.md). ## Launch a Quick Start
-Launch the Quickstart in Data Explorer in Azure portal to start using Azure Cosmos DB or get started with our documentation.
+Launch the Quickstart in Data Explorer in Azure portal to start using Azure Cosmos DB or get started with our documentation.
-* [Core (SQL) API Quickstart](sql/create-cosmosdb-resources-portal.md#add-a-database-and-a-container)
-* [MongoDB API Quickstart](mongodb/create-mongodb-python.md#learn-the-object-model)
-* [Apache Cassandra API](cassandr)
-* [Gremlin (Graph) API](graph/create-graph-console.md#add-a-graph)
-* [Azure Table API](table/create-table-dotnet.md)
+* [API for NoSQL Quickstart](nosql/quickstart-portal.md#create-container-database)
+* [API for MongoDB Quickstart](mongodb/quickstart-python.md#learn-the-object-model)
+* [API for Apache Cassandra](cassandr)
+* [API for Apache Gremlin](gremlin/quickstart-console.md#add-a-graph)
+* [API for Table](table/quickstart-dotnet.md)
You can also get started with one of the learning resources in Data Explorer.
You can also get started with one of the learning resources in Data Explorer.
Your account is free for 30 days. After expiration, a new sandbox account can be created. You can upgrade your active Try Azure Cosmos DB account at any time during the 30 day trial period. Here are the steps to start an upgrade. 1. Select the option to upgrade your current account in the Dashboard page or from the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) page.
-
+ :::image type="content" source="media/try-free/upgrade-account.png" lightbox="media/try-free/upgrade-account.png" alt-text="Confirmation page for the account upgrade experience."::: 1. Select **Sign up for Azure Account** & create an Azure Cosmos DB account.
-You can migrate your database from Try Azure Cosmos DB to your new Azure account if you're utilizing the SQL API after you've signed up for an Azure account. Here are the steps to migrate.
+You can migrate your database from Try Azure Cosmos DB to your new Azure account if you're utilizing the API for NoSQL after you've signed up for an Azure account. Here are the steps to migrate.
### Create an Azure Cosmos DB account
Navigate back to the **Upgrade** page and select **Next** to move on to the thir
## Migrate your Try Azure Cosmos DB data
-If you're using the SQL API, you can migrate your Try Azure Cosmos DB data to your upgraded account. HereΓÇÖs how to migrate your Try Azure Cosmos DB database to your new Azure Cosmos DB Core (SQL) API account.
+If you're using the API for NoSQL, you can migrate your Try Azure Cosmos DB data to your upgraded account. Here's how to migrate your Try Azure Cosmos DB database to your new Azure Cosmos DB API for NoSQL account.
### Prerequisites
-* Must be using the Azure Cosmos DB Core (SQL) API.
+* Must be using the Azure Cosmos DB API for NoSQL.
* Must have an active Try Azure Cosmos DB account and Azure account.
-* Must have an Azure Cosmos DB account using the Core (SQL) API in your Azure account.
+* Must have an Azure Cosmos DB account using the API for NoSQL in your Azure subscription.
### Migrate your data
-1. Locate your **Primary Connection string** for the Azure Cosmos DB account you created for your data.
+1. Locate your **Primary Connection string** for the Azure Cosmos DB account you created for your data.
- 1. Go to your Azure Cosmos DB Account in the Azure portal.
-
- 1. Find the connection string of your new Cosmos DB account within the **Keys** page of your new account.
+ 1. Go to your Azure Cosmos DB Account in the Azure portal.
- :::image type="content" source="media/try-free/migrate-data.png" lightbox="media/try-free/migrate-data.png" alt-text="Screenshot of the Keys page for an Azure Cosmos DB account.":::
+ 1. Find the connection string of your new Azure Cosmos DB account within the **Keys** page of your new account.
-1. Insert the connection string of the new Cosmos DB account in the **Upgrade your account** page.
+ :::image type="content" source="media/try-free/migrate-data.png" lightbox="media/try-free/migrate-data.png" alt-text="Screenshot of the Keys page for an Azure Cosmos DB account.":::
+
+1. Insert the connection string of the new Azure Cosmos DB account in the **Upgrade your account** page.
1. Select **Next** to move the data to your account.
If you're using the SQL API, you can migrate your Try Azure Cosmos DB data to yo
There can only be one free Try Azure Cosmos DB account per Microsoft account. If you want to try different APIs or otherwise start over, you'll have to delete your current account and create a new one. Here's how to delete your account.
-1. Go to the [Try AzureCosmos DB](https://aka.ms/trycosmosdb) page
+1. Go to the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) page.
1. Select Delete my account.
-
+ :::image type="content" source="media/try-free/upgrade-account.png" lightbox="media/try-free/upgrade-account.png" alt-text="Confirmation page for the account upgrade experience."::: ## Next steps After you create a Try Azure Cosmos DB sandbox account, you can start building apps with Azure Cosmos DB with the following articles:
-* Use [SQL API to build a console app using .NET](sql/sql-api-get-started.md) to manage data in Azure Cosmos DB.
-* Use [MongoDB API to build a sample app using Python](mongodb/create-mongodb-python.md) to manage data in Azure Cosmos DB.
+* Use [API for NoSQL to build a console app using .NET](nosql/quickstart-dotnet.md) to manage data in Azure Cosmos DB.
+* Use [API for MongoDB to build a sample app using Python](mongodb/quickstart-python.md) to manage data in Azure Cosmos DB.
* [Create a Jupyter notebook](notebooks-overview.md) and analyze your data. * Learn more about [understanding your Azure Cosmos DB bill](understand-your-bill.md) * Get started with Azure Cosmos DB with one of our quickstarts:
- * [Get started with Azure Cosmos DB SQL API](sql/create-cosmosdb-resources-portal.md#add-a-database-and-a-container)
- * [Get started with Azure Cosmos DB API for MongoDB](mongodb/create-mongodb-python.md#learn-the-object-model)
- * [Get started with Azure Cosmos DB Cassandra API](cassandr)
- * [Get started with Azure Cosmos DB Gremlin API](graph/create-graph-console.md#add-a-graph)
- * [Get started with Azure Cosmos DB Table API](table/create-table-dotnet.md)
+ * [Get started with Azure Cosmos DB for NoSQL](nosql/quickstart-portal.md#create-container-database)
+ * [Get started with Azure Cosmos DB for MongoDB](mongodb/quickstart-python.md#learn-the-object-model)
+ * [Get started with Azure Cosmos DB for Cassandra](cassandr)
+ * [Get started with Azure Cosmos DB for Gremlin](gremlin/quickstart-console.md#add-a-graph)
+ * [Get started with Azure Cosmos DB for Table](table/quickstart-dotnet.md)
* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for [capacity planning](sql/estimate-ru-with-capacity-planner.md). * If all you know is the number of vCores and servers in your existing database cluster, see [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md). * If you know typical request rates for your current database workload, see [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
cosmos-db Tutorial Setup Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/tutorial-setup-ci-cd.md
Title: Set up CI/CD pipeline with Azure Cosmos DB Emulator build task
-description: Tutorial on how to set up build and release workflow in Azure DevOps using the Cosmos DB emulator build task
+description: Tutorial on how to set up build and release workflow in Azure DevOps using the Azure Cosmos DB emulator build task
Last updated 01/28/2020 -+ # Set up a CI/CD pipeline with the Azure Cosmos DB Emulator build task in Azure DevOps > [!NOTE]
-> Due to the full removal of Windows 2016 hosted runners on April 1st, 2022, this method of using the Cosmos DB emulator with build task in Azure DevOps is no longer supported. We are actively working on alternative solutions. Meanwhile, you can follow the below instructions to leverage the Azure Cosmos DB emulator which comes pre-installed when using the "windows-2019" agent type.
+> Due to the full removal of Windows 2016 hosted runners on April 1st, 2022, this method of using the Azure Cosmos DB emulator with the build task in Azure DevOps is no longer supported. We are actively working on alternative solutions. Meanwhile, you can follow the instructions below to use the Azure Cosmos DB emulator, which comes pre-installed when using the "windows-2019" agent type.
The Azure Cosmos DB Emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. The emulator allows you to develop and test your application locally, without creating an Azure subscription or incurring any costs. ## PowerShell Task for Emulator
-A typical PowerShell based task that will start the Cosmos DB emulator can be scripted as follows:
+A typical PowerShell-based task that starts the Azure Cosmos DB emulator can be scripted as follows:
Example of a job configuration, selecting the "windows-2019" agent type. :::image type="content" source="./media/tutorial-setup-ci-cd/powershell-script-2.png" alt-text="Screenshot of the job configuration using windows-2019":::
For agents that do not come with the Azure Cosmos DB emulator preinstalled, you
To learn more about using the emulator for local development and testing, see [Use the Azure Cosmos DB Emulator for local development and testing](./local-emulator.md).
-To export emulator TLS/SSL certificates, see [Export the Azure Cosmos DB Emulator certificates for use with Java, Python, and Node.js](./local-emulator-export-ssl-certificates.md)
+To export emulator TLS/SSL certificates, see [Export the Azure Cosmos DB Emulator certificates for use with Java, Python, and Node.js](./local-emulator-export-ssl-certificates.md)
cosmos-db Understand Your Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/understand-your-bill.md
+ Last updated 03/31/2022 # Understand your Azure Cosmos DB bill As a fully managed cloud-native database service, Azure Cosmos DB simplifies billing by charging only for your database operations and consumed storage. There are no additional license fees, hardware, utility costs, or facility costs compared to on-premises or IaaS-hosted alternatives. When you consider the multi region capabilities of Azure Cosmos DB, the database service provides a substantial reduction in costs compared to existing on-premises or IaaS solutions. -- **Database operations**: The way you get charged for your database operations depends on the type of Azure Cosmos account you are using.
+- **Database operations**: The way you get charged for your database operations depends on the type of Azure Cosmos DB account you are using.
- **Provisioned Throughput**: You are billed hourly for the maximum provisioned throughput for a given hour, in increments of 100 RU/s. - **Serverless**: You are billed hourly for the total amount of Request Units consumed by your database operations.
As a fully managed cloud-native database service, Azure Cosmos DB simplifies bil
See the [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the latest pricing information.
-This article uses some examples to help you understand the details you see on the monthly bill. The numbers shown in the examples may be different if your Azure Cosmos containers have a different amount of throughput provisioned, if they span across multiple regions or run for a different for a period over a month. All the examples in this article calculate the bill based on the pricing information shown in the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/).
+This article uses some examples to help you understand the details you see on the monthly bill. The numbers shown in the examples may be different if your Azure Cosmos DB containers have a different amount of throughput provisioned, if they span multiple regions, or if they run for a different period of time over a month. All the examples in this article calculate the bill based on the pricing information shown in the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/).
> [!NOTE]
-> Billing is for any portion of a wall-clock hour, not a 60 minute duration. All the examples shown in this doc are based on the price for an Azure Cosmos account deployed in a non-government region in the US. The pricing and calculation vary depending on the region you are using, see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for latest pricing information.
+> Billing is for any portion of a wall-clock hour, not a 60-minute duration. All the examples shown in this doc are based on the price for an Azure Cosmos DB account deployed in a non-government region in the US. The pricing and calculation vary depending on the region you are using; see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the latest pricing information.
## Billing examples
If you increase provisioned throughput for a container or a set of containers at
### Billing example: multiple containers, each with dedicated provisioned throughput mode
-* If you create an Azure Cosmos account in East US 2 with two containers with provisioned throughput of 500 RU/sec and 700 RU/sec, respectively, you would have a total provisioned throughput of 1,200 RU/sec.
+* If you create an Azure Cosmos DB account in East US 2 with two containers with provisioned throughput of 500 RU/sec and 700 RU/sec, respectively, you would have a total provisioned throughput of 1,200 RU/sec.
* You would be charged 1,200/100 * $0.008 = $0.096/hour.
If you increase provisioned throughput for a container or a set of containers at
### Billing example: containers with shared (provisioned) throughput mode
-* If you create an Azure Cosmos account in East US 2 with two Azure Cosmos databases (with a set of containers sharing the throughput at the database level) with the provisioned throughput of 50-K RU/sec and 70-K RU/sec, respectively, you would have a total provisioned throughput of 120 K RU/sec.
+* If you create an Azure Cosmos DB account in East US 2 with two Azure Cosmos DB databases (with a set of containers sharing the throughput at the database level) with the provisioned throughput of 50-K RU/sec and 70-K RU/sec, respectively, you would have a total provisioned throughput of 120 K RU/sec.
* You would be charged 1,200 x $0.008 = $9.60/hour.
If you increase provisioned throughput for a container or a set of containers at
## Billing examples with geo-replication
-You can add/remove Azure regions anywhere in the world to your Azure Cosmos database account at any time. The throughput that you have configured for various Azure Cosmos databases and containers will be reserved in each of the Azure regions associated with your Azure Cosmos database account. If the sum of provisioned throughput (RU/sec) configured across all the databases and containers within your Azure Cosmos database account (provisioned per hour) is T and the number of Azure regions associated with your database account is N, then the total provisioned throughput for a given hour, for your Azure Cosmos database account is equal to T x N RU/sec. Provisioned throughput (single write region) costs $0.008/hour per 100 RU/sec and provisioned throughput with multiple writable regions (multi-region writes config) costs $0.016/per hour per 100 RU/sec (see the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/)). Whether its single write region, or multiple write regions, Azure Cosmos DB allows you to read data from any region.
+You can add or remove Azure regions anywhere in the world to or from your Azure Cosmos DB database account at any time. The throughput that you have configured for various Azure Cosmos DB databases and containers will be reserved in each of the Azure regions associated with your Azure Cosmos DB database account. If the sum of provisioned throughput (RU/sec) configured across all the databases and containers within your Azure Cosmos DB database account (provisioned per hour) is T and the number of Azure regions associated with your database account is N, then the total provisioned throughput for a given hour, for your Azure Cosmos DB database account, is equal to T x N RU/sec. Provisioned throughput (single write region) costs $0.008/hour per 100 RU/sec, and provisioned throughput with multiple writable regions (multi-region writes config) costs $0.016/hour per 100 RU/sec (see the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/)). Whether it's a single write region or multiple write regions, Azure Cosmos DB allows you to read data from any region.
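+
+To make the T x N arithmetic concrete, here's a small sketch that computes the hourly charge for provisioned throughput replicated across several regions. The inputs are illustrative; check the pricing page for current rates:
+
+```csharp
+// Hourly provisioned-throughput cost across regions (sketch with assumed inputs).
+double totalRuPerSecond = 10_000;   // T: sum of RU/s provisioned on the account
+int regionCount = 4;                // N: Azure regions associated with the account
+double ratePer100RuHour = 0.008;    // single write region; use 0.016 for multi-region writes
+
+double hourlyCost = (totalRuPerSecond / 100) * ratePer100RuHour * regionCount;
+Console.WriteLine($"${hourlyCost:0.00}/hour");   // 10,000/100 x $0.008 x 4 = $3.20/hour
+```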
-### Billing example: multi-region Azure Cosmos account, single region writes
+### Billing example: multi-region Azure Cosmos DB account, single region writes
-Let's assume you have an Azure Cosmos container in West US. The container is created with throughput 10K RU/sec and you store 1 TB of data this month. Let's assume you add three regions (East US, North Europe, and East Asia) to your Azure Cosmos account, each with the same storage and throughput. Your total monthly bill will be (assuming 30 days in a month). Your bill would be as follows:
+Let's assume you have an Azure Cosmos DB container in West US. The container is created with throughput 10K RU/sec and you store 1 TB of data this month. Let's assume you add three regions (East US, North Europe, and East Asia) to your Azure Cosmos DB account, each with the same storage and throughput. Assuming 30 days in a month, your total monthly bill would be as follows:
|**Item** |**Usage (month)** |**Rate** |**Monthly Cost** | ||||-|
Let's assume you have an Azure Cosmos container in West US. The container is cre
*Let's also assume that you egress 100 GB of data every month from the container in West US to replicate data into East US, North Europe, and East Asia. You're billed for egress as per data transfer rates.*
-### Billing example: multi-region Azure Cosmos account, multi region writes
+### Billing example: multi-region Azure Cosmos DB account, multi region writes
-Let's assume you create an Azure Cosmos container in West US. The container is created with throughput 10K RU/sec and you store 1 TB of data this month. Let's assume you add three regions (East US, North Europe, and East Asia), each with the same storage and throughput and you want the ability to write to the containers in all regions associated with your Azure Cosmos account. Your total monthly bill will be (assuming 30 days in a month) as follows:
+Let's assume you create an Azure Cosmos DB container in West US. The container is created with throughput 10K RU/sec and you store 1 TB of data this month. Let's assume you add three regions (East US, North Europe, and East Asia), each with the same storage and throughput, and you want the ability to write to the containers in all regions associated with your Azure Cosmos DB account. Assuming 30 days in a month, your total monthly bill would be as follows:
|**Item** |**Usage (month)**|**Rate** |**Monthly Cost** | ||||-|
Let's assume you create an Azure Cosmos container in West US. The container is c
*Let's also assume that you egress 100 GB of data every month from the container in West US to replicate data into East US, North Europe, and East Asia. You're billed for egress as per data transfer rates.*
-### Billing example: Azure Cosmos account with multi-region writes, database-level throughput including dedicated throughput mode for some containers
+### Billing example: Azure Cosmos DB account with multi-region writes, database-level throughput including dedicated throughput mode for some containers
-Let's consider the following example, where we have a multi-region Azure Cosmos account where all regions are writable (multiple write regions config). For simplicity, we will assume storage size stays constant and doesn't change and omit it here to keep the example simpler. The provisioned throughput during the month varied as follows (assuming 30 days or 720 hours):
+Let's consider the following example, where we have a multi-region Azure Cosmos DB account where all regions are writable (multiple write regions config). For simplicity, we'll assume that the storage size stays constant and omit it from the example. The provisioned throughput during the month varied as follows (assuming 30 days or 720 hours):
[0-100 hours]:
-* We created a three region Azure Cosmos account (West US, East US, North Europe), where all regions are writable
+* We created a three region Azure Cosmos DB account (West US, East US, North Europe), where all regions are writable
* We created a database (D1) with shared throughput 10K RU/sec
Let's consider the following example, where we have a multi-region Azure Cosmos
[301-400 hours]:
-* We removed one of the regions from Azure Cosmos account (# of writable regions is now 2)
+* We removed one of the regions from the Azure Cosmos DB account (the number of writable regions is now 2)
* We scaled down database (D1) to 10K RU/sec
Total Monthly Cost = $25.00 + $53.57 = $78.57
## Billing with Azure Cosmos DB reserved capacity
-Azure Cosmos DB reserved capacity enables you to purchase provisioned throughput in advance (a reserved capacity or a reservation) that can be applied to all Azure Cosmos databases and containers (for any API or data model) across all Azure regions. Because provisioned throughput price varies per region, it helps to think of reserved capacity as a monetary credit that you've purchased at a discount, that can be drawn from for the provisioned throughput at the respective price in each region. For example, let's say you have an Azure Cosmos account with a single container provisioned with 50-K RU/sec and globally replicated two regions - East US and Japan East. If you choose the pay-as-you-go option, you would pay:
+Azure Cosmos DB reserved capacity enables you to purchase provisioned throughput in advance (a reserved capacity or a reservation) that can be applied to all Azure Cosmos DB databases and containers (for any API or data model) across all Azure regions. Because the provisioned throughput price varies per region, it helps to think of reserved capacity as a monetary credit that you've purchased at a discount, which can be drawn from to pay for provisioned throughput at the respective price in each region. For example, let's say you have an Azure Cosmos DB account with a single container provisioned with 50-K RU/sec and globally replicated in two regions: East US and Japan East. If you choose the pay-as-you-go option, you would pay:
* in East US: for 50-K RU/sec at the rate of $0.008 per 100 RU/sec in that region
What you've effectively purchased is a credit of $8 per hour, for 100 K RU/s
Next you can proceed to learn about cost optimization in Azure Cosmos DB with the following articles:
-* Learn more about [How Cosmos DB pricing model is cost-effective for customers](total-cost-ownership.md)
+* Learn more about [How Azure Cosmos DB pricing model is cost-effective for customers](total-cost-ownership.md)
* Learn more about [Optimizing for development and testing](optimize-dev-test.md) * Learn more about [Optimizing throughput cost](optimize-cost-throughput.md) * Learn more about [Optimizing storage cost](optimize-cost-storage.md) * Learn more about [Optimizing the cost of reads and writes](optimize-cost-reads-writes.md) * Learn more about [Optimizing the cost of queries](./optimize-cost-reads-writes.md)
-* Learn more about [Optimizing the cost of multi-region Azure Cosmos accounts](optimize-cost-regions.md)
+* Learn more about [Optimizing the cost of multi-region Azure Cosmos DB accounts](optimize-cost-regions.md)
* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Unique Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/unique-keys.md
Title: Use unique keys in Azure Cosmos DB
-description: Learn how to define and use unique keys for an Azure Cosmos database. This article also describes how unique keys add a layer of data integrity.
+description: Learn how to define and use unique keys for an Azure Cosmos DB database. This article also describes how unique keys add a layer of data integrity.
-++ Last updated 08/26/2021 # Unique key constraints in Azure Cosmos DB
-Unique keys add a layer of data integrity to an Azure Cosmos container. You create a unique key policy when you create an Azure Cosmos container. With unique keys, you make sure that one or more values within a logical partition is unique. You also can guarantee uniqueness per [partition key](partitioning-overview.md).
+Unique keys add a layer of data integrity to an Azure Cosmos DB container. You create a unique key policy when you create an Azure Cosmos DB container. With unique keys, you make sure that one or more values within a logical partition is unique. You also can guarantee uniqueness per [partition key](partitioning-overview.md).
After you create a container with a unique key policy, the creation of a new or an update of an existing item resulting in a duplicate within a logical partition is prevented, as specified by the unique key constraint. The partition key combined with the unique key guarantees the uniqueness of an item within the scope of the container.
-For example, consider an Azure Cosmos container with `Email address` as the unique key constraint and `CompanyID` as the partition key. When you configure the user's email address with a unique key, each item has a unique email address within a given `CompanyID`. Two items can't be created with duplicate email addresses and with the same partition key value. In Azure Cosmos DB's SQL (Core) API, items are stored as JSON values. These JSON values are case sensitive. When you choose a property as a unique key, you can insert case sensitive values for that property. For example, If you have a unique key defined on the name property, "Gaby" is different from "gaby" and you can insert both into the container.
+For example, consider an Azure Cosmos DB container with `Email address` as the unique key constraint and `CompanyID` as the partition key. When you configure the user's email address with a unique key, each item has a unique email address within a given `CompanyID`. Two items can't be created with duplicate email addresses and with the same partition key value. In Azure Cosmos DB's API for NoSQL, items are stored as JSON values. These JSON values are case-sensitive. When you choose a property as a unique key, you can insert case-sensitive values for that property. For example, if you have a unique key defined on the name property, "Gaby" is different from "gaby" and you can insert both into the container.
To create items with the same email address, but not the same first name, last name, and email address, add more paths to the unique key policy. Instead of creating a unique key based on the email address only, you also can create a unique key with a combination of the first name, last name, and email address. This key is known as a composite unique key. In this case, each unique combination of the three values within a given `CompanyID` is allowed.
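
As a rough sketch of how such a composite unique key might be declared at container creation time with the .NET SDK v3 fluent builder (the `client` is an initialized `CosmosClient`; the database, container, and path names below are assumptions, not the article's sample):

```csharp
// Create a container with a composite unique key scoped to the /CompanyID partition key (sketch).
await client.GetDatabase("usersdb")
    .DefineContainer(name: "users", partitionKeyPath: "/CompanyID")
        .WithUniqueKey()
            .Path("/firstName")
            .Path("/lastName")
            .Path("/emailAddress")
        .Attach()
    .CreateIfNotExistsAsync();
```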
If you attempt to insert another item with the combinations listed in the previo
## Define a unique key
-You can define unique keys only when you create an Azure Cosmos container. A unique key is scoped to a logical partition. In the previous example, if you partition the container based on the ZIP code, you can have the same items in each logical partition. Consider the following properties when you create unique keys:
+You can define unique keys only when you create an Azure Cosmos DB container. A unique key is scoped to a logical partition. In the previous example, if you partition the container based on the ZIP code, you can have the same items in each logical partition. Consider the following properties when you create unique keys:
* You can't update an existing container to use a different unique key. In other words, after a container is created with a unique key policy, the policy can't be changed.
cosmos-db Update Backup Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/update-backup-storage-redundancy.md
Title: Update backup storage redundancy for Azure Cosmos DB periodic backup acco
description: Learn how to update the backup storage redundancy using Azure CLI, and PowerShell. You can also configure an Azure policy on your accounts to enforce the required storage redundancy. + Last updated 12/03/2021 - # Update backup storage redundancy for Azure Cosmos DB periodic backup accounts By default, Azure Cosmos DB stores periodic mode backup data in geo-redundant [blob storage](../storage/common/storage-redundancy.md) that is replicated to a [paired region](../availability-zones/cross-region-replication-azure.md). You can override the default backup storage redundancy. This article explains how to update the backup storage redundancy using Azure CLI and PowerShell. It also shows how to configure an Azure policy on your accounts to enforce the required storage redundancy.
Azure Policy helps you to enforce organizational standards and to assess complia
* Provision an Azure Cosmos DB account with [periodic backup mode](configure-periodic-backup-restore.md). * Provision an account with continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
-* Restore continuous backup account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template).
+* Restore continuous backup account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template).
cosmos-db Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/use-cases.md
Last updated 05/21/2019-+ # Common Azure Cosmos DB use cases
-This article provides an overview of several common use cases for Azure Cosmos DB. The recommendations in this article serve as a starting point as you develop your application with Cosmos DB.
+This article provides an overview of several common use cases for Azure Cosmos DB. The recommendations in this article serve as a starting point as you develop your application with Azure Cosmos DB.
> > [!VIDEO https://aka.ms/docs.essential-use-cases]
After reading this article, you'll be able to answer the following questions:
[Azure Cosmos DB](../cosmos-db/introduction.md) is the Azure solution for a fast NoSQL database, with open APIs for any scale. The service is designed to allow customers to elastically (and independently) scale throughput and storage across any number of geographical regions. Azure Cosmos DB is the first globally distributed database service in the market today to offer comprehensive [service level agreements](https://azure.microsoft.com/support/legal/sla/cosmos-db/) encompassing throughput, latency, availability, and consistency.
-Azure Cosmos DB is a global distributed, multi-model database that is used in a wide range of applications and use cases. It is a good choice for any [serverless](https://azure.com/serverless) application that needs low order-of-millisecond response times, and needs to scale rapidly and globally. It supports multiple data models (key-value, documents, graphs and columnar) and many APIs for data access including [Azure Cosmos DB's API for MongoDB](mongodb/mongodb-introduction.md), [SQL API](./introduction.md), [Gremlin API](graph-introduction.md), and [Tables API](table/introduction.md) natively, and in an extensible manner.
+Azure Cosmos DB is a globally distributed, multi-model database that is used in a wide range of applications and use cases. It is a good choice for any [serverless](https://azure.com/serverless) application that needs low order-of-millisecond response times, and needs to scale rapidly and globally. It supports multiple data models (key-value, documents, graphs, and columnar) and many Azure Cosmos DB APIs for data access including [API for MongoDB](mongodb/introduction.md), [API for NoSQL](introduction.md), [API for Gremlin](gremlin/introduction.md), and [API for Table](table/introduction.md) natively, and in an extensible manner.
The following are some attributes of Azure Cosmos DB that make it well-suited for high-performance applications with global ambition.
IoT use cases commonly share some patterns in how they ingest, process, and stor
:::image type="content" source="./media/use-cases/iot.png" alt-text="Azure Cosmos DB IoT reference architecture" border="false":::
-Bursts of data can be ingested by Azure Event Hubs as it offers high throughput data ingestion with low latency. Data ingested that needs to be processed for real-time insight can be funneled to Azure Stream Analytics for real-time analytics. Data can be loaded into Azure Cosmos DB for adhoc querying. Once the data is loaded into Azure Cosmos DB, the data is ready to be queried. In addition, new data and changes to existing data can be read on change feed. Change feed is a persistent, append only log that stores changes to Cosmos containers in sequential order. Then all data or just changes to data in Azure Cosmos DB can be used as reference data as part of real-time analytics. In addition, data can further be refined and processed by connecting Azure Cosmos DB data to HDInsight for Pig, Hive, or Map/Reduce jobs. Refined data is then loaded back to Azure Cosmos DB for reporting.
+Bursts of data can be ingested by Azure Event Hubs as it offers high-throughput data ingestion with low latency. Ingested data that needs to be processed for real-time insight can be funneled to Azure Stream Analytics for real-time analytics. Data can be loaded into Azure Cosmos DB for ad hoc querying. Once the data is loaded into Azure Cosmos DB, the data is ready to be queried. In addition, new data and changes to existing data can be read on the change feed. The change feed is a persistent, append-only log that stores changes to Azure Cosmos DB containers in sequential order. Then all data, or just changes to data, in Azure Cosmos DB can be used as reference data as part of real-time analytics. In addition, data can further be refined and processed by connecting Azure Cosmos DB data to HDInsight for Pig, Hive, or Map/Reduce jobs. Refined data is then loaded back to Azure Cosmos DB for reporting.
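+
+For the change feed piece of this pipeline, a minimal consumer can be built with the change feed processor in the Azure Cosmos DB .NET SDK. The following sketch assumes an initialized `CosmosClient` named `client` plus illustrative database, container, and lease-container names:
+
+```csharp
+// Minimal change feed processor sketch (all names are assumptions).
+// Requires the Microsoft.Azure.Cosmos package.
+Container monitored = client.GetContainer("iot", "telemetry");
+Container leases = client.GetContainer("iot", "leases");
+
+ChangeFeedProcessor processor = monitored
+    .GetChangeFeedProcessorBuilder<dynamic>("telemetryProcessor",
+        (IReadOnlyCollection<dynamic> changes, CancellationToken cancellationToken) =>
+        {
+            foreach (var change in changes)
+            {
+                Console.WriteLine(change);   // hand off to downstream analytics or enrichment
+            }
+            return Task.CompletedTask;
+        })
+    .WithInstanceName("worker-1")
+    .WithLeaseContainer(leases)
+    .Build();
+
+await processor.StartAsync();
+```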
For a sample IoT solution using Azure Cosmos DB, Event Hubs and Apache Storm, see the [hdinsight-storm-examples repository on GitHub](https://github.com/hdinsight/hdinsight-storm-examples/).
Azure Cosmos DB is often used for event sourcing to power event driven architect
:::image type="content" source="./media/use-cases/event-sourcing.png" alt-text="Azure Cosmos DB ordering pipeline reference architecture" border="false":::
-In addition, data stored in Azure Cosmos DB can be integrated with HDInsight for big data analytics via Apache Spark jobs. For details on the Spark Connector for Azure Cosmos DB, see [Run a Spark job with Cosmos DB and HDInsight](./create-sql-api-spark.md).
+In addition, data stored in Azure Cosmos DB can be integrated with HDInsight for big data analytics via Apache Spark jobs. For details on the Spark Connector for Azure Cosmos DB, see [Run a Spark job with Azure Cosmos DB and HDInsight](./nosql/quickstart-spark.md).
## Gaming The database tier is a crucial component of gaming applications. Modern games perform graphical processing on mobile/console clients, but rely on the cloud to deliver customized and personalized content like in-game stats, social media integration, and high-score leaderboards. Games often require single-millisecond latencies for reads and writes to provide an engaging in-game experience. A game database needs to be fast and be able to handle massive spikes in request rates during new game launches and feature updates.
Azure Cosmos DB is used by games like [The Walking Dead: No Man's Land](https://
:::image type="content" source="./media/use-cases/gaming.png" alt-text="Azure Cosmos DB gaming reference architecture" border="false"::: ## Web and mobile applications
-Azure Cosmos DB is commonly used within web and mobile applications, and is well suited for modeling social interactions, integrating with third-party services, and for building rich personalized experiences. The Cosmos DB SDKs can be used build rich iOS and Android applications using the popular [Xamarin framework](mobile-apps-with-xamarin.md).
+Azure Cosmos DB is commonly used within web and mobile applications, and is well suited for modeling social interactions, integrating with third-party services, and for building rich personalized experiences. The Azure Cosmos DB SDKs can be used to build rich iOS and Android applications using the popular [Xamarin framework](mobile-apps-with-xamarin.md).
### Social Applications
-A common use case for Azure Cosmos DB is to store and query user generated content (UGC) for web, mobile, and social media applications. Some examples of UGC are chat sessions, tweets, blog posts, ratings, and comments. Often, the UGC in social media applications is a blend of free form text, properties, tags, and relationships that are not bounded by rigid structure. Content such as chats, comments, and posts can be stored in Cosmos DB without requiring transformations or complex object to relational mapping layers. Data properties can be added or modified easily to match requirements as developers iterate over the application code, thus promoting rapid development.
+A common use case for Azure Cosmos DB is to store and query user generated content (UGC) for web, mobile, and social media applications. Some examples of UGC are chat sessions, tweets, blog posts, ratings, and comments. Often, the UGC in social media applications is a blend of free form text, properties, tags, and relationships that are not bounded by rigid structure. Content such as chats, comments, and posts can be stored in Azure Cosmos DB without requiring transformations or complex object to relational mapping layers. Data properties can be added or modified easily to match requirements as developers iterate over the application code, thus promoting rapid development.
-Applications that integrate with third-party social networks must respond to changing schemas from these networks. As data is automatically indexed by default in Cosmos DB, data is ready to be queried at any time. Hence, these applications have the flexibility to retrieve projections as per their respective needs.
+Applications that integrate with third-party social networks must respond to changing schemas from these networks. As data is automatically indexed by default in Azure Cosmos DB, data is ready to be queried at any time. Hence, these applications have the flexibility to retrieve projections as per their respective needs.
-Many of the social applications run at global scale and can exhibit unpredictable usage patterns. Flexibility in scaling the data store is essential as the application layer scales to match usage demand. You can scale out by adding additional data partitions under a Cosmos DB account. In addition, you can also create additional Cosmos DB accounts across multiple regions. For Cosmos DB service region availability, see [Azure Regions](https://azure.microsoft.com/regions/#services).
+Many of the social applications run at global scale and can exhibit unpredictable usage patterns. Flexibility in scaling the data store is essential as the application layer scales to match usage demand. You can scale out by adding additional data partitions under an Azure Cosmos DB account. In addition, you can also create additional Azure Cosmos DB accounts across multiple regions. For Azure Cosmos DB service region availability, see [Azure Regions](https://azure.microsoft.com/regions/#services).
:::image type="content" source="./media/use-cases/apps-with-global-reach.png" alt-text="Diagram that shows the Azure Cosmos DB web app reference architecture." border="false"::: ### Personalization Nowadays, modern applications come with complex views and experiences. These are typically dynamic, catering to user preferences or moods and branding needs. Hence, applications need to be able to retrieve personalized settings effectively to render UI elements and experiences quickly.
-JSON, a format supported by Cosmos DB, is an effective format to represent UI layout data as it is not only lightweight, but also can be easily interpreted by JavaScript. Cosmos DB offers tunable consistency levels that allow fast reads with low latency writes. Hence, storing UI layout data including personalized settings as JSON documents in Cosmos DB is an effective means to get this data across the wire.
+JSON, a format supported by Azure Cosmos DB, is an effective format to represent UI layout data as it is not only lightweight, but also can be easily interpreted by JavaScript. Azure Cosmos DB offers tunable consistency levels that allow fast reads with low latency writes. Hence, storing UI layout data including personalized settings as JSON documents in Azure Cosmos DB is an effective means to get this data across the wire.
:::image type="content" source="./media/use-cases/personalization.png" alt-text="Azure Cosmos DB web app reference architecture" border="false"::: ## Next steps
-* To get started with Azure Cosmos DB, follow our [quick starts](create-sql-api-dotnet.md), which walk you through creating an account and getting started with Cosmos DB.
+* To get started with Azure Cosmos DB, follow our [quick starts](create-sql-api-dotnet.md), which walk you through creating an account and getting started with Azure Cosmos DB.
* If you'd like to read more about customers using Azure Cosmos DB, see the [customer case studies](https://azure.microsoft.com/case-studies/?service=cosmos-db) page.
cosmos-db Use Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/use-metrics.md
-+ Last updated 11/08/2021-+ # Monitor and debug with insights in Azure Cosmos DB
-Azure Cosmos DB provides insights for throughput, storage, consistency, availability, and latency. The Azure portal provides an aggregated view of these metrics. You can also view Azure Cosmos DB metrics from Azure Monitor API. The dimension values for the metrics such as container name are case-insensitive. So you need to use case-insensitive comparison when doing string comparisons on these dimension values. To learn about how to view metrics from Azure monitor, see the [Get metrics from Azure Monitor](./monitor-cosmos-db.md) article.
+Azure Cosmos DB provides insights for throughput, storage, consistency, availability, and latency. The Azure portal provides an aggregated view of these metrics. You can also view Azure Cosmos DB metrics from the Azure Monitor API. The dimension values for the metrics, such as container name, are case-insensitive, so you need to use case-insensitive comparison when doing string comparisons on these dimension values. To learn about how to view metrics from Azure Monitor, see the [Get metrics from Azure Monitor](./monitor.md) article.
This article walks through common use cases and how Azure Cosmos DB insights can be used to analyze and debug these issues. By default, the metric insights are collected every five minutes and are kept for seven days.
This article walks through common use cases and how Azure Cosmos DB insights can
1. Open the **Insights** pane. By default, the Insights pane shows the throughput, requests, storage, availability, latency, system, and account management metrics for every container in your account. You can select the **Time Range**, **Database**, and **Container** for which you want to view insights. The **Overview** tab shows RU/s usage, data usage, index usage, throttled requests, and normalized RU/s consumption for the selected database and container.
- :::image type="content" source="./media/use-metrics/performance-metrics.png" alt-text="Cosmos DB performance metrics in Azure portal" lightbox="./media/use-metrics/performance-metrics.png" :::
+ :::image type="content" source="./media/use-metrics/performance-metrics.png" alt-text="Azure Cosmos DB performance metrics in Azure portal" lightbox="./media/use-metrics/performance-metrics.png" :::
1. The following metrics are available from the **Insights** pane:
If you would like to conserve index space, you can adjust the [indexing policy](
## Debug why queries are running slow
-In the SQL API SDKs, Azure Cosmos DB provides query execution statistics.
+In the API for NoSQL SDKs, Azure Cosmos DB provides query execution statistics.
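As a rough sketch, retrieving these statistics with the .NET SDK v2 might look like the following; the endpoint, key, database, and container names are placeholders.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Linq;

public static class QueryMetricsSample
{
    public static async Task PrintQueryMetricsAsync()
    {
        // Placeholder endpoint, key, and database/container names.
        var client = new DocumentClient(new Uri("https://<account>.documents.azure.com:443/"), "<key>");
        Uri collectionUri = UriFactory.CreateDocumentCollectionUri("mydb", "mycoll");

        // PopulateQueryMetrics asks the service to return query execution statistics.
        IDocumentQuery<dynamic> query = client.CreateDocumentQuery<dynamic>(
                collectionUri,
                "SELECT * FROM c",
                new FeedOptions { PopulateQueryMetrics = true, EnableCrossPartitionQuery = true })
            .AsDocumentQuery();

        while (query.HasMoreResults)
        {
            FeedResponse<dynamic> result = await query.ExecuteNextAsync<dynamic>();

            // One QueryMetrics entry per physical partition that served the page.
            IReadOnlyDictionary<string, QueryMetrics> metrics = result.QueryMetrics;
            foreach (KeyValuePair<string, QueryMetrics> partitionMetrics in metrics)
            {
                Console.WriteLine($"Partition {partitionMetrics.Key}:");
                Console.WriteLine(partitionMetrics.Value.ToString());
            }
        }
    }
}
```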
```csharp IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
IReadOnlyDictionary<string, QueryMetrics> metrics = result.QueryMetrics;
You've now learned how to monitor and debug issues using the metrics provided in the Azure portal. You may want to learn more about improving database performance by reading the following articles:
-* To learn about how to view metrics from Azure monitor, see the [Get metrics from Azure Monitor](./monitor-cosmos-db.md) article.
+* To learn about how to view metrics from Azure monitor, see the [Get metrics from Azure Monitor](./monitor.md) article.
* [Performance and scale testing with Azure Cosmos DB](performance-testing.md)
-* [Performance tips for Azure Cosmos DB](performance-tips.md)
+* [Performance tips for Azure Cosmos DB](performance-tips.md)
cosmos-db Visualize Qlik Sense https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/visualize-qlik-sense.md
Title: Connect Qlik Sense to Azure Cosmos DB and visualize your data description: This article describes the steps required to connect Azure Cosmos DB to Qlik Sense and visualize your data. -++
# Connect Qlik Sense to Azure Cosmos DB and visualize your data Qlik Sense is a data visualization tool that combines data from different sources into a single view. Qlik Sense indexes every possible relationship in your data so that you can gain immediate insights into the data. You can visualize Azure Cosmos DB data by using Qlik Sense. This article describes the steps required to connect Azure Cosmos DB to Qlik Sense and visualize your data. > [!NOTE]
-> Connecting Qlik Sense to Azure Cosmos DB is currently supported for SQL API and Azure Cosmos DB's API for MongoDB accounts only.
+> Connecting Qlik Sense to Azure Cosmos DB is currently supported for API for NoSQL and MongoDB accounts only.
You can connect Qlik Sense to Azure Cosmos DB with:
-* Cosmos DB SQL API by using the ODBC connector.
+* Azure Cosmos DB API for NoSQL by using the ODBC connector.
* Azure Cosmos DB's API for MongoDB by using the Qlik Sense MongoDB connector (currently in preview).
-* Azure Cosmos DB's API for MongoDB and SQL API by using REST API connector in Qlik Sense.
+* Azure Cosmos DB's API for MongoDB and API for NoSQL by using the REST API connector in Qlik Sense.
-* Cosmos DB Mongo DB API by using the gRPC connector for Qlik Core.
-This article describes the details of connecting to the Cosmos DB SQL API by using the ODBC connector.
+* Azure Cosmos DB MongoDB API by using the gRPC connector for Qlik Core.
+This article describes the details of connecting to the Azure Cosmos DB API for NoSQL by using the ODBC connector.
## Prerequisites
Before following the instructions in this article, ensure that you have the foll
* Download the [Qlik Sense Desktop](https://www.qlik.com/us/try-or-buy/download-qlik-sense) or set up Qlik Sense in Azure by [Installing the Qlik Sense marketplace item](https://azuremarketplace.microsoft.com/marketplace/apps/qlik.qlik-sense).
-* Download the [video game data](https://www.kaggle.com/gregorut/videogamesales), this sample data is in CSV format. You will store this data in a Cosmos DB account and visualize it in Qlik Sense.
+* Download the [video game data](https://www.kaggle.com/gregorut/videogamesales). This sample data is in CSV format. You'll store this data in an Azure Cosmos DB account and visualize it in Qlik Sense.
-* Create an Azure Cosmos DB SQL API account by using the steps described in [create an account](create-sql-api-dotnet.md#create-account) section of the quickstart article.
+* Create an Azure Cosmos DB API for NoSQL account by using the steps described in [create an account](create-sql-api-dotnet.md#create-account) section of the quickstart article.
-* [Create a database and a collection](create-sql-api-java.md#add-a-container) ΓÇô You can use set the collection throughput value to 1000 RU/s.
+* [Create a database and a collection](nosql/quickstart-java.md#add-a-container) – You can set the collection throughput value to 1000 RU/s (a minimal SDK sketch follows these prerequisites).
-* Load the sample video game sales data to your Cosmos DB account. You can import the data by using Azure Cosmos DB data migration tool, you can do a [sequential](import-data.md#SQLSeqTarget) or a [bulk import](import-data.md#SQLBulkTarget) of data. It takes around 3-5 minutes for the data to import to the Cosmos DB account.
+* Load the sample video game sales data to your Azure Cosmos DB account. You can import the data by using the Azure Cosmos DB data migration tool; you can do a [sequential](import-data.md#SQLSeqTarget) or a [bulk import](import-data.md#SQLBulkTarget) of data. It takes around 3-5 minutes for the data to import to the Azure Cosmos DB account.
-* Download, install, and configure the ODBC driver by using the steps in the [connect to Cosmos DB with ODBC driver](odbc-driver.md) article. The video game data is a simple data set and you donΓÇÖt have to edit the schema, just use the default collection-mapping schema.
+* Download, install, and configure the ODBC driver by using the steps in the [connect to Azure Cosmos DB with ODBC driver](odbc-driver.md) article. The video game data is a simple data set and you don't have to edit the schema; just use the default collection-mapping schema.
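For reference, a minimal sketch of the database and collection setup mentioned in these prerequisites, using the Azure Cosmos DB .NET SDK; the account endpoint, key, database and container names, and partition key path are placeholders.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class SetupSample
{
    public static async Task CreateDatabaseAndContainerAsync()
    {
        var client = new CosmosClient("https://<account>.documents.azure.com:443/", "<key>");

        Database database = await client.CreateDatabaseIfNotExistsAsync("videogamedb");

        // Provision 1000 RU/s at the container level, as the prerequisite suggests.
        var properties = new ContainerProperties(id: "sales", partitionKeyPath: "/Platform");
        await database.CreateContainerIfNotExistsAsync(properties, throughput: 1000);
    }
}
```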
-## Connect Qlik Sense to Cosmos DB
+## Connect Qlik Sense to Azure Cosmos DB
1. Open Qlik Sense and select **Create new app**. Provide a name for your app and select **Create**.
Before following the instructions in this article, ensure that you have the foll
### Limitations when connecting with ODBC
-Cosmos DB is a schema-less distributed database with drivers modeled around developer needs. The ODBC driver requires a database with schema to infer columns, their data types, and other properties. The regular SQL query or the DML syntax with relational capability is not applicable to Cosmos DB SQL API because SQL API is not ANSI SQL. Due to this reason, the SQL statements issued through the ODBC driver are translated into Cosmos DB-specific SQL syntax that doesnΓÇÖt have equivalents for all constructs. To prevent these translation issues, you must apply a schema when setting up the ODBC connection. The [connect with ODBC driver](odbc-driver.md) article gives you suggestions and methods to help you configure the schema. Make sure to create this mapping for every database/collection within the Cosmos DB account.
+Azure Cosmos DB is a schema-less distributed database with drivers modeled around developer needs. The ODBC driver requires a database with a schema to infer columns, their data types, and other properties. Regular SQL queries and DML syntax with relational capability aren't applicable to the Azure Cosmos DB API for NoSQL because the API for NoSQL isn't ANSI SQL. For this reason, the SQL statements issued through the ODBC driver are translated into Azure Cosmos DB-specific SQL syntax that doesn't have equivalents for all constructs. To prevent these translation issues, you must apply a schema when setting up the ODBC connection. The [connect with ODBC driver](odbc-driver.md) article gives you suggestions and methods to help you configure the schema. Make sure to create this mapping for every database/collection within the Azure Cosmos DB account.
## Next Steps If you are using a different visualization tool such as Power BI, you can connect to it by using the instructions in the following doc:
-* [Visualize Cosmos DB data by using the Power BI connector](powerbi-visualize.md)
+* [Visualize Azure Cosmos DB data by using the Power BI connector](powerbi-visualize.md)
cosmos-db Whitepapers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/whitepapers.md
Title: Whitepapers that describe Azure Cosmos DB concepts description: Get the list of whitepapers for Azure Cosmos DB, these whitepapers describe the concepts in depth. -+ Last updated 05/07/2021-+ # Azure Cosmos DB whitepapers Whitepapers allow you to explore Azure Cosmos DB concepts at a deeper level. This article provides you with a list of available whitepapers for Azure Cosmos DB.
cost-management-billing Automation Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/automation-faq.md
See [Migrate from EA to MCA APIs](../costs/migrate-cost-management-api.md).
### When will the [Enterprise Reporting APIs](../manage/enterprise-api.md) get turned off?
-The Enterprise Reporting APIs are deprecated. The date that the API will be turned off is still being determined. We recommend that you migrate away from the APIs as soon as possible. For more information, see [Migrate from Enterprise Reporting to Azure Resource Manager APIs](../costs/migrate-from-enterprise-reporting-to-azure-resource-manager-apis.md).
+The Enterprise Reporting APIs are deprecated. The date that the API will be turned off is still being determined. We recommend that you migrate away from the APIs as soon as possible. For more information, see [Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs](../automate/migrate-ea-reporting-arm-apis-overview.md).
### When will the [Consumption Usage Details API](/rest/api/consumption/usage-details/list) get turned off?
cost-management-billing Migrate Ea Reporting Arm Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-reporting-arm-apis-overview.md
The following information describes the differences between the older Azure Ente
## Migration checklist - Familiarize yourself with the [Azure Resource Manager REST APIs](/rest/api/azure).-- Determine which Enterprise Reporting APIs you use and see which Cost Management APIs to move to at [Enterprise Reporting API mapping to new Cost Management APIs](../costs/migrate-from-enterprise-reporting-to-azure-resource-manager-apis.md#ea-api-mapping-to-new-azure-resource-manager-apis).
+- Determine which Enterprise Reporting APIs you use and see which Cost Management APIs to move to at [Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs](../automate/migrate-ea-reporting-arm-apis-overview.md).
- Configure service authorization and authentication for the Cost Management APIs. For more information, see [Assign permission to ACM APIs](cost-management-api-permissions.md). - Test the APIs and then update any programming code to replace Enterprise Reporting API calls with Cost Management API calls. - Update error handling to use new error codes. Some considerations include: - Cost Management APIs have a timeout period of 60 seconds. - Cost Management APIs have rate limiting in place. This results in a `429 throttling error` if rates are exceeded. Build your solutions so that you don't make too many API calls in a short time period (a retry sketch follows this checklist).-
+- Review the other Cost Management APIs available through Azure Resource Manager and assess for use later. For more information, see [Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs](../automate/migrate-ea-reporting-arm-apis-overview.md).
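As a minimal sketch of the throttling guidance in this checklist, the following C# helper retries a request when it's throttled and honors the `Retry-After` header; the request URL and token acquisition are assumptions supplied by the caller rather than documented API specifics.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class CostManagementCaller
{
    public static async Task<string> GetWithRetryAsync(string url, string accessToken)
    {
        // Match the documented 60-second server-side timeout.
        using var client = new HttpClient { Timeout = TimeSpan.FromSeconds(60) };
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        for (int attempt = 0; attempt < 5; attempt++)
        {
            HttpResponseMessage response = await client.GetAsync(url);

            if (response.StatusCode != (HttpStatusCode)429)
            {
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }

            // Honor the Retry-After header when the API throttles the request.
            TimeSpan delay = response.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(10);
            await Task.Delay(delay);
        }

        throw new HttpRequestException("Request was throttled repeatedly (HTTP 429).");
    }
}
```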
## Enterprise Reporting API mapping to new Cost Management APIs
After you've migrated to the Cost Management APIs for your existing reporting sc
## Next steps - Familiarize yourself with the [Azure Resource Manager REST APIs](/rest/api/azure).-- If needed, determine which Enterprise Reporting APIs you use and see which Cost Management APIs to move to at [Enterprise Reporting API mapping to new Cost Management APIs](../costs/migrate-from-enterprise-reporting-to-azure-resource-manager-apis.md#ea-api-mapping-to-new-azure-resource-manager-apis).
+- If needed, determine which Enterprise Reporting APIs you use and see which Cost Management APIs to move to at [Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs](../automate/migrate-ea-reporting-arm-apis-overview.md).
- If you're not already using Azure Resource Manager APIs, [register your client app with Azure AD](/rest/api/azure/#register-your-client-application-with-azure-ad). - If needed, update any of your programming code to use [Azure AD authentication](/rest/api/azure/#create-the-request) with your service principal.
cost-management-billing Get Started Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/get-started-partners.md
The following data fields are found in usage detail files and Cost Management AP
| **CustomerTenantDomainName** | Domain name for the Azure Active Directory tenant of the customer's subscription. | Customer Azure Active Directory tenant domain. | | **PartnerTenantID** | Identifier for the partner's Azure Active Directory tenant. | Partner Azure Active Directory Tenant ID called as Partner ID, in GUID format. | | **PartnerName** | Name of the partner Azure Active Directory tenant. | Partner name. |
-| **ResellerMPNID** | MPNID for the reseller associated with the subscription. | MPN ID of the reseller on record for the subscription. Not available for current activity. |
+| **ResellerMPNID** | ID for the reseller associated with the subscription. | ID of the reseller on record for the subscription. Not available for current activity. |
| costCenter | Cost center associated to the subscription. | N/A | | billingPeriodStartDate | Billing period start date, as shown on the invoice. | N/A | | billingPeriodEndDate | Billing period end date, as shown on the invoice. | N/A |
cost-management-billing Migrate From Enterprise Reporting To Azure Resource Manager Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/migrate-from-enterprise-reporting-to-azure-resource-manager-apis.md
- Title: Migrate from Enterprise Reporting to Azure Resource Manager APIs
-description: This article helps you understand the differences between the Reporting APIs and the Azure Resource Manager APIs, what to expect when you migrate to the Azure Resource Manager APIs, and the new capabilities that are available with the new Azure Resource Manager APIs.
----- Previously updated : 12/17/2021---
-# Migrate from Enterprise Reporting to Azure Resource Manager APIs
-
-This article helps developers who have built custom solutions using the [Azure Reporting APIs for Enterprise Customers](../manage/enterprise-api.md) to migrate onto the Azure Resource Manager APIs for Cost Management. Service Principal support for the newer Azure Resource Manager APIs is now generally available. Azure Resource Manager APIs are in active development. Consider migrating to them instead of using the older Azure Reporting APIs for Enterprise Customers. The older APIs are being deprecated. This article helps you understand the differences between the Reporting APIs and the Azure Resource Manager APIs, what to expect when you migrate to the Azure Resource Manager APIs, and the new capabilities that are available with the new Azure Resource Manager APIs.
-
-## API differences
-
-The following information describes the differences between the older Reporting APIs for Enterprise Customers and the newer Azure Resource Manager APIs.
-
-| **Use** | **Enterprise Agreement APIs** | **Azure Resource Manager APIs** |
-| | | |
-| Authentication | API Key provisioned in the Enterprise Agreement (EA) portal | Azure Active Directory (Azure AD) Authentication using User tokens or Service Principals. Service Principals take the place of API Keys. |
-| Scopes and Permissions | All requests are at the Enrollment scope. The API Key permission assignments will determine whether data for the entire Enrollment, a Department, or a specific Account is returned. No user authentication. | Users or Service Principals are assigned access to the Enrollment, Department, or Account scope. |
-| URI Endpoint | https://consumption.azure.com | https://management.azure.com |
-| Development Status | In maintenance mode. On the path to deprecation. | Actively being developed |
-| Available APIs | Limited to what is available currently | Equivalent APIs are available to replace each EA API. <p> Additional [Cost Management APIs](/rest/api/cost-management/) are also available to you, including: <p> <ul><li>Budgets</li><li>Alerts<li>Exports</li></ul> |
-
-## Migration checklist
--- Familiarize yourself with the [Azure Resource Manager REST APIs](/rest/api/azure).-- Determine which EA APIs you use and see which Azure Resource Manager APIs to move to at [EA API mapping to new Azure Resource Manager APIs](#ea-api-mapping-to-new-azure-resource-manager-apis).-- Configure service authorization and authentication for the Azure Resource Manager APIs
- - If you're not already using Azure Resource Manager APIs, [register your client app with Azure AD](/rest/api/azure/#register-your-client-application-with-azure-ad). Registration creates a service principal for you to use to call the APIs.
- - Assign the service principal access to the scopes needed, as outlined below.
- - Update any programming code to use [Azure AD authentication](/rest/api/azure/#create-the-request) with your Service Principal.
-- Test the APIs and then update any programming code to replace EA API calls with Azure Resource Manager API calls.-- Update error handling to use new error codes. Some considerations include:
- - Azure Resource Manager APIs have a timeout period of 60 seconds.
- - Azure Resource Manager APIs have rate limiting in place. This results in a 429 throttling error if rates are exceeded. Build your solutions so that you don't place too many API calls in a short time period.
-- Review the other Cost Management APIs available through Azure Resource Manager and assess for use later. For more information, see [Use additional Cost Management APIs](#use-additional-cost-management-apis).-
-## Assign Service Principal access to Azure Resource Manager APIs
-
-After you create a Service Principal to programmatically call the Azure Resource Manager APIs, you need to assign it the proper permissions to authorize against and execute requests in Azure Resource Manager. There are two permission frameworks for different scenarios.
-
-### Azure Billing Hierarchy Access
-
-To assign Service Principal permissions to your Enterprise Billing Account, Departments, or Enrollment Account scopes, see [Assign roles to Azure Enterprise Agreement service principal names](../manage/assign-roles-azure-service-principals.md).
-
-### Azure role-based access control
-
-New Service Principal support extends to Azure-specific scopes, like management groups, subscriptions, and resource groups. You can assign Service Principal permissions to these scopes directly [in the Azure portal](../../active-directory/develop/howto-create-service-principal-portal.md#assign-a-role-to-the-application) or by using [Azure PowerShell](../../active-directory/develop/howto-authenticate-service-principal-powershell.md#assign-the-application-to-a-role).
-
-## EA API mapping to new Azure Resource Manager APIs
-
-Use the table below to identify the EA APIs that you currently use and the replacement Azure Resource Manager API to use instead.
-
-| **Scenario** | **EA APIs** | **Azure Resource Manager APIs** |
-| | | |
-| Balance Summary | [/balancesummary](/rest/api/billing/enterprise/billing-enterprise-api-balance-summary) |[Microsoft.Consumption/balances](/rest/api/consumption/balances/getbybillingaccount) |
-| Price Sheet | [/pricesheet](/rest/api/billing/enterprise/billing-enterprise-api-pricesheet) | [Microsoft.Consumption/pricesheets/default](/rest/api/consumption/pricesheet) ΓÇô use for negotiated prices <p> [Retail Prices API](/rest/api/cost-management/retail-prices/azure-retail-prices) ΓÇô use for retail prices |
-| Reserved Instance Details | [/reservationdetails](/rest/api/billing/enterprise/billing-enterprise-api-reserved-instance-usage) | [Microsoft.CostManagement/generateReservationDetailsReport](../reservations/reservation-utilization.md) |
-| Reserved Instance Summary | [/reservationsummaries](/rest/api/billing/enterprise/billing-enterprise-api-reserved-instance-usage) | [Microsoft.Consumption/reservationSummaries](/rest/api/consumption/reservationssummaries/list#reservationsummariesdailywithbillingaccountid) |
-| Reserved Instance Recommendations | [/SharedReservationRecommendations](/rest/api/billing/enterprise/billing-enterprise-api-reserved-instance-recommendation)<p>[/SingleReservationRecommendations](/rest/api/billing/enterprise/billing-enterprise-api-reserved-instance-recommendation) | [Microsoft.Consumption/reservationRecommendations](/rest/api/consumption/reservationrecommendations/list) |
-| Reserved Instance Charges | [/reservationcharges](/rest/api/billing/enterprise/billing-enterprise-api-reserved-instance-charges) | [Microsoft.Consumption/reservationTransactions](/rest/api/consumption/reservationtransactions/list) |
-
-## Migration details by API
-
-The following sections show old API request examples with new replacement API examples.
-
-### Balance Summary API
-
-Use the following request URIs when calling the new Balance Summary API. Your enrollment number should be used as the billingAccountId.
-
-#### Supported requests
-
-[Get for Enrollment](/rest/api/consumption/balances/getbybillingaccount)
-
-```json
-https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/providers/Microsoft.Consumption/balances?api-version=2019-10-01
-```
-
-### Response body changes
-
-_Old response body_:
-
-```json
-{
- "id": "enrollments/100/billingperiods/201507/balancesummaries",
- "billingPeriodId": 201507,
- "currencyCode": "USD",
- "beginningBalance": 0,
- "endingBalance": 1.1,
- "newPurchases": 1,
- "adjustments": 1.1,
- "utilized": 1.1,
- "serviceOverage": 1,
- "chargesBilledSeparately": 1,
- "totalOverage": 1,
- "totalUsage": 1.1,
- "azureMarketplaceServiceCharges": 1,
- "newPurchasesDetails": [
- {
- "name": "",
- "value": 1
- }
- ],
- "adjustmentDetails": [
- {
- "name": "Promo Credit",
- "value": 1.1
- },
- {
- "name": "SIE Credit",
- "value": 1.0
- }
- ]
- }
-
-```
-
-_New response body_:
-
-The same data is now available in the `properties` field of the new API response. There might be minor changes to the spelling on some of the field names.
-
-```json
-{
- "id": "/providers/Microsoft.Billing/billingAccounts/123456/providers/Microsoft.Billing/billingPeriods/201702/providers/Microsoft.Consumption/balances/balanceId1",
- "name": "balanceId1",
- "type": "Microsoft.Consumption/balances",
- "properties": {
- "currency": "USD ",
- "beginningBalance": 3396469.19,
- "endingBalance": 2922371.02,
- "newPurchases": 0,
- "adjustments": 0,
- "utilized": 474098.17,
- "serviceOverage": 0,
- "chargesBilledSeparately": 0,
- "totalOverage": 0,
- "totalUsage": 474098.17,
- "azureMarketplaceServiceCharges": 609.82,
- "billingFrequency": "Month",
- "priceHidden": false,
- "newPurchasesDetails": [
- {
- "name": "Promo Purchase",
- "value": 1
- }
- ],
- "adjustmentDetails": [
- {
- "name": "Promo Credit",
- "value": 1.1
- },
- {
- "name": "SIE Credit",
- "value": 1
- }
- ]
- }
-}
-
-```
-
-### Price Sheet
-
-Use the following request URIs when calling the new Price Sheet API.
-
-#### Supported requests
-
- You can call the API using the following scopes:
--- Enrollment: `providers/Microsoft.Billing/billingAccounts/{billingAccountId}`-- Subscription: `subscriptions/{subscriptionId}`-
-[_Get for current Billing Period_](/rest/api/consumption/pricesheet/get)
--
-```json
-https://management.azure.com/{scope}/providers/Microsoft.Consumption/pricesheets/default?api-version=2019-10-01
-```
-
-[_Get for specified Billing Period_](/rest/api/consumption/pricesheet/getbybillingperiod)
-
-```json
-https://management.azure.com/{scope}/providers/Microsoft.Billing/billingPeriods/{billingPeriodName}/providers/Microsoft.Consumption/pricesheets/default?api-version=2019-10-01
-```
-
-#### Response body changes
-
-_Old response_:
-
-```json
- [
- {
- "id": "enrollments/57354989/billingperiods/201601/products/343/pricesheets",
- "billingPeriodId": "201704",
- "meterId": "dc210ecb-97e8-4522-8134-2385494233c0",
- "meterName": "A1 VM",
- "unitOfMeasure": "100 Hours",
- "includedQuantity": 0,
- "partNumber": "N7H-00015",
- "unitPrice": 0.00,
- "currencyCode": "USD"
- },
- {
- "id": "enrollments/57354989/billingperiods/201601/products/2884/pricesheets",
- "billingPeriodId": "201404",
- "meterId": "dc210ecb-97e8-4522-8134-5385494233c0",
- "meterName": "Locally Redundant Storage Premium Storage - Snapshots - AU East",
- "unitOfMeasure": "100 GB",
- "includedQuantity": 0,
- "partNumber": "N9H-00402",
- "unitPrice": 0.00,
- "currencyCode": "USD"
- },
- ...
- ]
-
-```
-
-_New response_:
-
-Old data is now in the `pricesheets` field of the new API response. Meter details information is also provided.
-
-```json
-{
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Billing/billingPeriods/201702/providers/Microsoft.Consumption/pricesheets/default",
- "name": "default",
- "type": "Microsoft.Consumption/pricesheets",
- "properties": {
- "nextLink": "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/microsoft.consumption/pricesheets/default?api-version=2018-01-31&$skiptoken=AQAAAA%3D%3D&$expand=properties/pricesheets/meterDetails",
- "pricesheets": [
- {
- "billingPeriodId": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Billing/billingPeriods/201702",
- "meterId": "00000000-0000-0000-0000-000000000000",
- "unitOfMeasure": "100 Hours",
- "includedQuantity": 100,
- "partNumber": "XX-11110",
- "unitPrice": 0.00000,
- "currencyCode": "EUR",
- "offerId": "OfferId 1",
- "meterDetails": {
- "meterName": "Data Transfer Out (GB)",
- "meterCategory": "Networking",
- "unit": "GB",
- "meterLocation": "Zone 2",
- "totalIncludedQuantity": 0,
- "pretaxStandardRate": 0.000
- }
- }
- ]
- }
-}
-
-```
-
-### Reserved instance usage details
-
-Microsoft isn't actively working on synchronous-based Reservation Details APIs. We recommend at you move to the newer SPN-supported asynchronous API call pattern as a part of the migration. Asynchronous requests better handle large amounts of data and will reduce timeout errors.
-
-#### Supported requests
-
-Use the following request URIs when calling the new Asynchronous Reservation Details API. Your enrollment number should be used as the `billingAccountId`. You can call the API with the following scopes:
--- Enrollment: `providers/Microsoft.Billing/billingAccounts/{billingAccountId}`-
-#### Sample request to generate a reservation details report
-
-```json
-POST
-https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/providers/Microsoft.CostManagement/generateReservationDetailsReport?startDate={startDate}&endDate={endDate}&api-version=2019-11-01
-
-```
-
-#### Sample request to poll report generation status
-
-```json
-GET
-https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/providers/Microsoft.CostManagement/reservationDetailsOperationResults/{operationId}?api-version=2019-11-01
-
-```
-
-#### Sample poll response
-
-```json
-{
- "status": "Completed",
- "properties": {
- "reportUrl": "https://storage.blob.core.windows.net/details/20200911/00000000-0000-0000-0000-000000000000?sv=2016-05-31&sr=b&sig=jep8HT2aphfUkyERRZa5LRfd9RPzjXbzB%2F9TNiQ",
- "validUntil": "2020-09-12T02:56:55.5021869Z"
- }
-}
-
-```
-
-#### Response body changes
-
-The response of the older synchronous based Reservation Details API is below.
-
-_Old response_:
-
-```json
-{
- "reservationOrderId": "00000000-0000-0000-0000-000000000000",
- "reservationId": "00000000-0000-0000-0000-000000000000",
- "usageDate": "2018-02-01T00:00:00",
- "skuName": "Standard_F2s",
- "instanceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/resourvegroup1/providers/microsoft.compute/virtualmachines/VM1",
- "totalReservedQuantity": 18.000000000000000,
- "reservedHours": 432.000000000000000,
- "usedHours": 400.000000000000000
-}
-
-```
-
-_New response_:
-
-The new API creates a CSV file for you. See the following file fields.
-
-| **Old Property** | **New Property** | **Notes** |
-| | | |
-| | InstanceFlexibilityGroup | New property for instance flexibility. |
-| | InstanceFlexibilityRatio | New property for instance flexibility. |
-| instanceId | InstanceName | |
-| | Kind | It's a new property. Value is `None`, `Reservation`, or `IncludedQuantity`. |
-| reservationId | ReservationId | |
-| reservationOrderId | ReservationOrderId | |
-| reservedHours | ReservedHours | |
-| skuName | SkuName | |
-| totalReservedQuantity | TotalReservedQuantity | |
-| usageDate | UsageDate | |
-| usedHours | UsedHours | |
-
-### Reserved Instance Usage Summary
-
-Use the following request URIs to call the new Reservation Summaries API.
-
-#### Supported requests
-
- Call the API with the following scopes:
--- Enrollment: `providers/Microsoft.Billing/billingAccounts/{billingAccountId}`-
-[_Get Reservation Summary Daily_](/rest/api/consumption/reservationssummaries/list#reservationsummariesdailywithbillingaccountid)
-
-```json
-https://management.azure.com/{scope}/Microsoft.Consumption/reservationSummaries?grain=daily&$filter=properties/usageDate ge 2017-10-01 AND properties/usageDate le 2017-11-20&api-version=2019-10-01
-```
-
-[_Get Reservation Summary Monthly_](/rest/api/consumption/reservationssummaries/list#reservationsummariesmonthlywithbillingaccountid)
-
-```json
-https://management.azure.com/{scope}/Microsoft.Consumption/reservationSummaries?grain=monthly&$filter=properties/usageDate ge 2017-10-01 AND properties/usageDate le 2017-11-20&api-version=2019-10-01
-```
-
-#### Response body changes
-
-_Old response_:
-
-```json
-[
- {
- "reservationOrderId": "00000000-0000-0000-0000-000000000000",
- "reservationId": "00000000-0000-0000-0000-000000000000",
- "skuName": "Standard_F1s",
- "reservedHours": 24,
- "usageDate": "2018-05-01T00:00:00",
- "usedHours": 23,
- "minUtilizationPercentage": 0,
- "avgUtilizationPercentage": 95.83,
- "maxUtilizationPercentage": 100
- }
-]
-
-```
-
-_New response_:
-
-```json
-{
- "value": [
- {
- "id": "/providers/Microsoft.Billing/billingAccounts/12345/providers/Microsoft.Consumption/reservationSummaries/reservationSummaries_Id1",
- "name": "reservationSummaries_Id1",
- "type": "Microsoft.Consumption/reservationSummaries",
- "tags": null,
- "properties": {
- "reservationOrderId": "00000000-0000-0000-0000-000000000000",
- "reservationId": "00000000-0000-0000-0000-000000000000",
- "skuName": "Standard_B1s",
- "reservedHours": 720,
- "usageDate": "2018-09-01T00:00:00-07:00",
- "usedHours": 0,
- "minUtilizationPercentage": 0,
- "avgUtilizationPercentage": 0,
- "maxUtilizationPercentage": 0
- }
- }
- ]
-}
-
-```
-
-### Reserved instance recommendations
-
-Use the following request URIs to call the new Reservation Recommendations API.
-
-#### Supported requests
-
- Call the API with the following scopes:
--- Enrollment: `providers/Microsoft.Billing/billingAccounts/{billingAccountId}`-- Subscription: `subscriptions/{subscriptionId}`-- Resource Groups: `subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}`-
-[_Get Recommendations_](/rest/api/consumption/reservationrecommendations/list)
-
-Both the shared and the single scope recommendations are available through this API. You can also filter on the scope as an optional API parameter.
-
-```json
-https://management.azure.com/providers/Microsoft.Billing/billingAccounts/123456/providers/Microsoft.Consumption/reservationRecommendations?api-version=2019-10-01
-```
-
-#### Response body changes
-
-Recommendations for Shared and Single scopes are combined into one API.
-
-_Old response_:
-
-```json
-[{
- "subscriptionId": "1111111-1111-1111-1111-111111111111",
- "lookBackPeriod": "Last7Days",
- "meterId": "2e3c2132-1398-43d2-ad45-1d77f6574933",
- "skuName": "Standard_DS1_v2",
- "term": "P1Y",
- "region": "westus",
- "costWithNoRI": 186.27634908960002,
- "recommendedQuantity": 9,
- "totalCostWithRI": 143.12931642978083,
- "netSavings": 43.147032659819189,
- "firstUsageDate": "2018-02-19T00:00:00"
-}
-]
-
-```
-
-_New response_:
-
-```json
-{
- "value": [
- {
- "id": "billingAccount/123456/providers/Microsoft.Consumption/reservationRecommendations/00000000-0000-0000-0000-000000000000",
- "name": "00000000-0000-0000-0000-000000000000",
- "type": "Microsoft.Consumption/reservationRecommendations",
- "location": "westus",
- "sku": "Standard_DS1_v2",
- "kind": "legacy",
- "properties": {
- "meterId": "00000000-0000-0000-0000-000000000000",
- "term": "P1Y",
- "costWithNoReservedInstances": 12.0785105,
- "recommendedQuantity": 1,
- "totalCostWithReservedInstances": 11.4899644807748,
- "netSavings": 0.588546019225182,
- "firstUsageDate": "2019-07-07T00:00:00-07:00",
- "scope": "Shared",
- "lookBackPeriod": "Last7Days",
- "instanceFlexibilityRatio": 1,
- "instanceFlexibilityGroup": "DSv2 Series",
- "normalizedSize": "Standard_DS1_v2",
- "recommendedQuantityNormalized": 1,
- "skuProperties": [
- {
- "name": "Cores",
- "value": "1"
- },
- {
- "name": "Ram",
- "value": "1"
- }
- ]
- }
- },
- ]
-}
-
-```
-
-### Reserved instance charges
-
-Use the following request URIs to call the new Reserved Instance Charges API.
-
-#### Supported requests
-
-[_Get Reservation Charges by Date Range_](/rest/api/consumption/reservationtransactions/list)
-
-```json
-https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/providers/Microsoft.Consumption/reservationTransactions?$filter=properties/eventDate+ge+2020-05-20+AND+properties/eventDate+le+2020-05-30&api-version=2019-10-01
-```
-
-#### Response body changes
-
-_Old response_:
-
-```json
-[
- {
- "purchasingEnrollment": "string",
- "armSkuName": "Standard_F1s",
- "term": "P1Y",
- "region": "eastus",
- "PurchasingsubscriptionGuid": "00000000-0000-0000-0000-000000000000",
- "PurchasingsubscriptionName": "string",
- "accountName": "string",
- "accountOwnerEmail": "string",
- "departmentName": "string",
- "costCenter": "",
- "currentEnrollment": "string",
- "eventDate": "string",
- "reservationOrderId": "00000000-0000-0000-0000-000000000000",
- "description": "Standard_F1s eastus 1 Year",
- "eventType": "Purchase",
- "quantity": int,
- "amount": double,
- "currency": "string",
- "reservationOrderName": "string"
- }
-]
-
-```
-_New response_:
-
-```json
-{
- "value": [
- {
- "id": "/billingAccounts/123456/providers/Microsoft.Consumption/reservationtransactions/201909091919",
- "name": "201909091919",
- "type": "Microsoft.Consumption/reservationTransactions",
- "tags": {},
- "properties": {
- "eventDate": "2019-09-09T19:19:04Z",
- "reservationOrderId": "00000000-0000-0000-0000-000000000000",
- "description": "Standard_DS1_v2 westus 1 Year",
- "eventType": "Cancel",
- "quantity": 1,
- "amount": -21,
- "currency": "USD",
- "reservationOrderName": "Transaction-DS1_v2",
- "purchasingEnrollment": "123456",
- "armSkuName": "Standard_DS1_v2",
- "term": "P1Y",
- "region": "westus",
- "purchasingSubscriptionGuid": "11111111-1111-1111-1111-11111111111",
- "purchasingSubscriptionName": "Infrastructure Subscription",
- "accountName": "Microsoft Infrastructure",
- "accountOwnerEmail": "admin@microsoft.com",
- "departmentName": "Unassigned",
- "costCenter": "",
- "currentEnrollment": "123456",
- "billingFrequency": "recurring"
- }
- },
- ]
-}
-
-```
-
-## Use additional Cost Management APIs
-
-After you've migrated to Azure Resource Manager APIs for your existing reporting scenarios, you can use many other APIs, too. The APIs are also available through Azure Resource Manager and can be automated using Service Principal-based authentication. Here's a quick summary of the new capabilities that you can use.
--- [Budgets](/rest/api/consumption/budgets/createorupdate) - Use to set thresholds to proactively monitor your costs, alert relevant stakeholders, and automate actions in response to threshold breaches.--- [Alerts](/rest/api/cost-management/alerts) - Use to view alert information including, but not limited to, budget alerts, invoice alerts, credit alerts, and quota alerts.--- [Exports](/rest/api/cost-management/exports) - Use to schedule recurring data export of your charges to an Azure Storage account of your choice. It's the recommended solution for customers with a large Azure presence who want to analyze their data and use it in their own internal systems.-
-## Next steps
--- Familiarize yourself with the [Azure Resource Manager REST APIs](/rest/api/azure).-- If needed, determine which EA APIs you use and see which Azure Resource Manager APIs to move to at [EA API mapping to new Azure Resource Manager APIs](#ea-api-mapping-to-new-azure-resource-manager-apis).-- If you're not already using Azure Resource Manager APIs, [register your client app with Azure AD](/rest/api/azure/#register-your-client-application-with-azure-ad).-- If needed, update any of your programming code to use [Azure AD authentication](/rest/api/azure/#create-the-request) with your Service Principal.
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
Title: Tutorial - Create and manage Azure budgets
description: This tutorial helps you plan and account for the costs of Azure services that you consume. Previously updated : 06/07/2022 Last updated : 10/07/2022
To create or update action groups, select **Manage action group** while you're c
Next, select **Add action group** and create the action group.
-Budget integration with action groups works for action groups which have enabled or disabled common alert schema. For more information on how to enable common alert schema, see [How do I enable the common alert schema?](../../azure-monitor/alerts/alerts-common-schema.md#how-do-i-enable-the-common-alert-schema)
+Budget integration with action groups works whether the common alert schema is enabled or disabled for the action group. For more information on how to enable common alert schema, see [How do I enable the common alert schema?](../../azure-monitor/alerts/alerts-common-schema.md#how-do-i-enable-the-common-alert-schema)
+
+## View budgets in the Azure mobile app
+
+You can view budgets for your subscriptions and resource groups from the **Cost Management** card in the Azure mobile app.
+
+1. Navigate to any subscription or resource group.
+1. Find the **Cost Management** card and tap **More**.
+1. Budgets load below the **Current cost** card. They're sorted by descending order of usage.
+ ## Create and edit budgets with PowerShell
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
The following information shows the currently supported [Microsoft Azure offers]
| **Microsoft Developer Network (MSDN)** | MSDN Platforms<sup>3</sup> | MSDN_2014-09-01 | MS-AZR-0062P | October 2, 2018 | | **Pay-As-You-Go** | Pay-As-You-Go | PayAsYouGo_2014-09-01 | MS-AZR-0003P | October 2, 2018 | | **Pay-As-You-Go** | Pay-As-You-Go Dev/Test | MSDNDevTest_2014-09-01 | MS-AZR-0023P | October 2, 2018 |
-| **Pay-As-You-Go** | Microsoft Partner Network | MPN_2014-09-01 | MS-AZR-0025P | October 2, 2018 |
+| **Pay-As-You-Go** | Microsoft Cloud Partner Program | MPN_2014-09-01 | MS-AZR-0025P | October 2, 2018 |
| **Pay-As-You-Go** | Free Trial<sup>3</sup> | FreeTrial_2014-09-01 | MS-AZR-0044P | October 2, 2018 | | **Pay-As-You-Go** | Azure in Open<sup>3</sup> | AzureInOpen_2014-09-01 | MS-AZR-0111P | October 2, 2018 | | **Pay-As-You-Go** | Azure Pass<sup>3</sup> | AzurePass_2014-09-01 | MS-AZR-0120P, MS-AZR-0122P - MS-AZR-0125P, MS-AZR-0128P - MS-AZR-0130P | October 2, 2018 |
cost-management-billing Billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/billing-subscription-transfer.md
When you transfer billing ownership of your subscription to an account in anothe
## Transfer Visual Studio and Partner Network subscriptions
-Visual Studio and Microsoft Partner Network subscriptions have monthly recurring Azure credit associated with them. When you transfer these subscriptions, your credit isn't available in the destination billing account. The subscription uses the credit in the destination billing account. For example, if Bob transfers a Visual Studio Enterprise subscription to Jane's account on September 9 and Jane accepts the transfer. After the transfer is completed, the subscription starts using credit in Jane's account. The credit will reset every ninth day of the month.
+Visual Studio and Microsoft Cloud Partner Program subscriptions have a monthly recurring Azure credit associated with them. When you transfer these subscriptions, the credit from your account isn't available in the destination billing account; instead, the subscription uses the credit in the destination billing account. For example, suppose Bob transfers a Visual Studio Enterprise subscription to Jane's account on September 9 and Jane accepts the transfer. After the transfer is completed, the subscription starts using credit in Jane's account. The credit will reset every ninth day of the month.
## Next steps after accepting billing ownership
cost-management-billing Link Partner Id Power Apps Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/link-partner-id-power-apps-accounts.md
Before you link your partner ID, your customer must give you access to their Pow
## Link your access account to your partner ID
-Linking your access account to your partner ID is also called *PAL association*. When you have access to a Production Environment access account, you can use PAL to link the account to your Microsoft Partner Network ID (Location MPN ID)
+Linking your access account to your partner ID is also called *PAL association*. When you have access to a Production Environment access account, you can use PAL to link the account to your Microsoft partner location ID.
-For directory accounts (user or service), use the graphical web-based Azure portal, PowerShell, or the Azure CLI to link to your Microsoft Partner Network ID (Location Account MPN ID).
+For directory accounts (user or service), use the graphical web-based Azure portal, PowerShell, or the Azure CLI to link to your Microsoft partner location ID.
-For service principal, use PowerShell or the Azure CLI to provide the link your Microsoft Partner Network ID (Location Account MPN ID). Link the partner ID to each customer resource.
+For a service principal, use PowerShell or the Azure CLI to link your Microsoft partner location ID. Link the partner ID to each customer resource.
To use the Azure portal to link to a new partner ID: 1. Sign in to the [Azure portal](https://portal.azure.com). 2. Go to [Link to a partner ID](https://portal.azure.com/#blade/Microsoft_Azure_Billing/managementpartnerblade) in the Azure portal.
-3. Enter the [Microsoft Partner Network](https://partner.microsoft.com/) ID for your organization. Be sure to use the **Associated MPN ID** shown on your partner center profile. It's typically known as your [Partner Location Account MPN ID](/partner-center/account-structure).
+3. Enter the [Microsoft Cloud Partner Program](https://partner.microsoft.com/) ID for your organization. Be sure to use the **Associated Partner ID** shown on your partner center profile. It's typically known as your [partner location ID](/partner-center/account-structure).
:::image type="content" source="./media/link-partner-id-power-apps-accounts/link-partner-id.png" alt-text="Screenshot showing the Link to a partner ID window." lightbox="./media/link-partner-id-power-apps-accounts/link-partner-id.png" ::: > [!NOTE]
Sign into the customer's tenant with either the user account or the service prin
Update-AzManagementPartner -PartnerId 12345 ```
-Link to the new partner ID. The partner ID is the [Microsoft Partner Network](https://partner.microsoft.com/) ID for your organization. Be sure to use the **Associated MPN ID** shown on your partner profile.
+Link to the new partner ID. The partner ID is the [Microsoft Cloud Partner Program](https://partner.microsoft.com/) ID for your organization. Be sure to use the **Associated Partner ID** shown on your partner profile.
```azurepowershell-interactive new-AzManagementPartner -PartnerId 12345
Sign into the customer's tenant with either the user account or the service prin
az login --tenant TenantName ```
-Link to the new partner ID. The partner ID is the [Microsoft Partner Network](https://partner.microsoft.com/) ID for your organization.
+Link to the new partner ID. The partner ID is the [Microsoft Cloud Partner Program](https://partner.microsoft.com/) ID for your organization.
```azurecli-interactive az managementpartner create --partner-id 12345
cost-management-billing Link Partner Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/link-partner-id.md
Microsoft partners provide services that help customers achieve business and mission objectives using Microsoft products. When acting on behalf of the customer to manage, configure, and support Azure services, partner users need access to the customer's environment. Using Partner Admin Link (PAL), partners can associate their partner network ID with the credentials used for service delivery.
-PAL enables Microsoft to identify and recognize partners who drive Azure customer success. Microsoft can attribute influence and Azure consumed revenue to your organization based on the account's permissions (Azure role) and scope (subscription, resource group, resource ). If a group has Azure RBAC access, then PAL is recognized for all the users in the group.
+PAL enables Microsoft to identify and recognize partners who drive Azure customer success. Microsoft can attribute influence and Azure consumed revenue to your organization based on the account's permissions (Azure role) and scope (subscription, resource group, resource). If a group has Azure RBAC access, then PAL is recognized for all the users in the group.
## Get access from your customer
Before you link your partner ID, your customer must give you access to their Azu
## Link to a partner ID
-When you have access to the customer's resources, use the Azure portal, PowerShell, or the Azure CLI to link your Microsoft Partner Network ID (MPN ID) to your user ID or service principal. Link the partner ID in each customer tenant.
+When you have access to the customer's resources, use the Azure portal, PowerShell, or the Azure CLI to link your Partner ID to your user ID or service principal. Link the Partner ID in each customer tenant.
### Use the Azure portal to link to a new partner ID
When you have access to the customer's resources, use the Azure portal, PowerShe
2. Sign in to the Azure portal.
-3. Enter the Microsoft partner ID. The partner ID is the [Microsoft Partner Network](https://partner.microsoft.com/) ID for your organization. Be sure to use the **Associated MPN ID** shown on your partner profile.
+3. Enter the Microsoft partner ID. The partner ID is the [Microsoft Cloud Partner Program](https://partner.microsoft.com/) ID for your organization. Be sure to use the **Associated Partner ID** shown on your partner profile.
![Screenshot that shows Link to a partner ID](./media/link-partner-id/link-partner-id01.png)
When you have access to the customer's resources, use the Azure portal, PowerShe
C:\> Connect-AzAccount -TenantId XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX ```
-3. Link to the new partner ID. The partner ID is the [Microsoft Partner Network](https://partner.microsoft.com/) ID for your organization. Be sure to use the **Associated MPN ID** shown on your partner profile.
+3. Link to the new partner ID. The partner ID is the [Microsoft Cloud Partner Program](https://partner.microsoft.com/) ID for your organization. Be sure to use the **Associated Partner ID** shown on your partner profile.
```azurepowershell-interactive
C:\> Remove-AzManagementPartner -PartnerId 12345
C:\ az login --tenant <tenant> ```
-3. Link to the new partner ID. The partner ID is the [Microsoft Partner Network](https://partner.microsoft.com/) ID for your organization.
+3. Link to the new partner ID. The partner ID is the [Microsoft Cloud Partner Program](https://partner.microsoft.com/) ID for your organization.
```azurecli-interactive C:\ az managementpartner create --partner-id 12345
However, if you are managing customer resources through Azure Lighthouse, you sh
The link is associated at the user account level. Only you can edit or remove the link to the partner ID. The customer and other partners can't change the link to the partner ID.
-**Which MPN ID should I use if my company has multiple?**
+**Which Partner ID should I use if my company has multiple?**
-Be sure to use the **Associated MPN ID** shown in your partner profile.
+Be sure to use the **Associated Partner ID** shown in your partner profile.
**Where can I find influenced revenue reporting for linked partner ID?**
Yes, you can link your partner ID for Azure Stack.
**How do I link my partner ID if my company uses [Azure Lighthouse](../../lighthouse/overview.md) to access customer resources?**
-In order for Azure Lighthouse activities to be recognized, you'll need to associate your MPN ID with at least one user account that has access to each of your onboarded subscriptions. Note that you'll need to do this in your service provider tenant rather than in each customer tenant. For simplicity, we recommend creating a service principal account in your tenant, associating it with your MPN ID, then granting it access to every customer you onboard with an [Azure built-in role that is eligible for partner earned credit](/partner-center/azure-roles-perms-pec). For more information, see [Link your partner ID to track your impact on delegated resources](../../lighthouse/how-to/partner-earned-credit.md).
+In order for Azure Lighthouse activities to be recognized, you'll need to associate your Partner ID with at least one user account that has access to each of your onboarded subscriptions. Note that you'll need to do this in your service provider tenant rather than in each customer tenant. For simplicity, we recommend creating a service principal account in your tenant, associating it with your Partner ID, then granting it access to every customer you onboard with an [Azure built-in role that is eligible for partner earned credit](/partner-center/azure-roles-perms-pec). For more information, see [Link your partner ID to track your impact on delegated resources](../../lighthouse/how-to/partner-earned-credit.md).
**How do I explain Partner Admin Link (PAL) to my Customer?**
-Partner Admin Link (PAL) enables Microsoft to identify and recognize those partners who are helping customers achieve business objectives and realize value in the cloud. Customers must first provide a partner access to their Azure resource. Once access is granted, the partnerΓÇÖs Microsoft Partner Network ID (MPN ID) is associated. This association helps Microsoft understand the ecosystem of IT service providers and to refine the tools and programs needed to best support our common customers.
+Partner Admin Link (PAL) enables Microsoft to identify and recognize those partners who are helping customers achieve business objectives and realize value in the cloud. Customers must first provide a partner access to their Azure resources. Once access is granted, the partner's Microsoft Cloud Partner Program ID is associated. This association helps Microsoft understand the ecosystem of IT service providers and refine the tools and programs needed to best support our common customers.
**What data does PAL collect?**
The PAL association to existing credentials provides no new customer data to Mic
**Does this impact the security of a customer's Azure Environment?**
-PAL association only adds partnerΓÇÖs MPN ID to the credential already provisioned and it does not alter any permissions (Azure role) or provide additional Azure service data to partner or Microsoft.
+PAL association only adds the partner's ID to the credential that's already provisioned. It doesn't alter any permissions (Azure role) or provide additional Azure service data to the partner or Microsoft.
cost-management-billing Manage Azure Subscription Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-azure-subscription-policy.md
The policy allows or stops users from other directories, who have access in the
### Exempted Users
-For governance reasons, global administrators can block all subscription directory moves - in to our out of the current directory. However they might want to allow specific users to do either operations. For either situation, they can configure a list of exempted users that allows the users to bypass the policy setting that applies to everyone else.
+For governance reasons, global administrators can block all subscription directory moves - in to or out of the current directory. However, they might want to allow specific users to do either operation. For either situation, they can configure a list of exempted users that allows those users to bypass the policy setting that applies to everyone else.
## Setting subscription policy
Non-global administrators can still navigate to the subscription policy area to
## Next steps -- Read the [Cost Management + Billing documentation](../index.yml)
+- Read the [Cost Management + Billing documentation](../index.yml)
cost-management-billing Mca Request Billing Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-request-billing-ownership.md
You can request billing ownership of products for the subscription types listed
- Subscription and reservation transfer are supported for direct EA customers. A direct enterprise agreement is one that's signed between Microsoft and an enterprise agreement customer. - Only subscription transfers are supported for indirect EA customers. Reservation transfers aren't supported. An indirect EA agreement is one where a customer signs an agreement with a Microsoft partner. - [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/)-- [Microsoft Partner Network](https://azure.microsoft.com/offers/ms-azr-0025p/)<sup>1</sup>
+- [Microsoft Cloud Partner Program](https://azure.microsoft.com/offers/ms-azr-0025p/)<sup>1</sup>
- [MSDN Platforms](https://azure.microsoft.com/offers/ms-azr-0062p/)<sup>1</sup> - [Visual Studio Enterprise (BizSpark) subscribers](https://azure.microsoft.com/offers/ms-azr-0064p/)<sup>1</sup>-- [Visual Studio Enterprise (MPN) subscribers](https://azure.microsoft.com/offers/ms-azr-0029p/)<sup>1</sup>
+- [Visual Studio Enterprise (Cloud Partner Program) subscribers](https://azure.microsoft.com/offers/ms-azr-0029p/)<sup>1</sup>
- [Visual Studio Enterprise subscribers](https://azure.microsoft.com/offers/ms-azr-0063p/)<sup>1</sup> - [Visual Studio Professional](https://azure.microsoft.com/offers/ms-azr-0059p/)<sup>1</sup> - [Visual Studio Test Professional subscribers](https://azure.microsoft.com/offers/ms-azr-0060p/)<sup>1</sup>
cost-management-billing Mosp Ea Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mosp-ea-transfer.md
Enrollment Number: <EnrollmentNumber>
All Enrollment Administrators can gain access to all of your subscriptions if you proceed. Additionally, all Azure subscriptions for which you are the account owner will be converted to your Enterprise Agreement.
-This includes subscriptions which include a monthly credit (e.g. Visual Studio, MPN, BizSpart, etc.) meaning you will lose the monthly credit by proceeding.
+This includes subscriptions which include a monthly credit (e.g. Visual Studio, Microsoft Cloud Partner Program, BizSpark, etc.) meaning you will lose the monthly credit by proceeding.
All subscriptions based on a Visual Studio subscriber offer (monthly credit for Visual Studio subscribers or Pay-As-You-Go Dev/Test) will be converted to use the Enterprise Dev/Test usage rates and be billed against this enrollment from today onwards. If you wish to retain the monthly credits currently associated with any of your subscriptions, please cancel. Please see additional details.
cost-management-billing Programmatically Create Subscription Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-preview.md
POST https://management.azure.com/providers/Microsoft.Billing/enrollmentAccounts
||-|--|--| | `displayName` | No | String | The display name of the subscription. If not specified, it's set to the name of the offer, like "Microsoft Azure Enterprise." | | `offerType` | Yes | String | The offer of the subscription. The two options for EA are [MS-AZR-0017P](https://azure.microsoft.com/pricing/enterprise-agreement/) (production use) and [MS-AZR-0148P](https://azure.microsoft.com/offers/ms-azr-0148p/) (dev/test, needs to be [turned on using the EA portal](https://ea.azure.com/helpdocs/DevOrTestOffer)). |
-| `owners` | No | String | The Object ID of any user that want to add as an Azure RBAC Owner on the subscription when it's created. |
+| `owners` | No | String | The Object ID of any user to be added as an Azure RBAC Owner on the subscription when it's created. |
In the response, as part of the header `Location`, you get back a URL that you can query for the status of the subscription creation operation. When the subscription creation is finished, a GET on the `Location` URL will return a `subscriptionLink` object, which has the subscription ID. For more details, refer to the [Subscription API documentation](/rest/api/subscription/).
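
To make that flow concrete, here's a minimal Python sketch (using the `requests` library) that posts a creation request with the parameters from the table above and then polls the `Location` URL. The full endpoint path, API version, token, and the exact shape of the `owners` field are assumptions to confirm against the Subscription API documentation, not values taken from this article.

```python
import time
import requests

# Assumed endpoint path and API version -- confirm both against the Subscription API documentation.
ENDPOINT = ("https://management.azure.com/providers/Microsoft.Billing/enrollmentAccounts/"
            "<enrollmentAccountName>/providers/Microsoft.Subscription/createSubscription")
API_VERSION = "<api-version>"
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder token

body = {
    "displayName": "My EA subscription",  # optional; defaults to the offer name
    "offerType": "MS-AZR-0017P",          # production; use MS-AZR-0148P for dev/test
    "owners": [{"objectId": "<user-object-id>"}],  # assumed shape for the Azure RBAC Owner list
}

resp = requests.post(ENDPOINT, params={"api-version": API_VERSION}, json=body, headers=HEADERS)
resp.raise_for_status()

# Poll the Location header until the operation finishes and returns a subscriptionLink.
status_url = resp.headers["Location"]
while True:
    status = requests.get(status_url, headers=HEADERS, params={"api-version": API_VERSION})
    if status.status_code == 200 and "subscriptionLink" in status.json():
        print(status.json()["subscriptionLink"])
        break
    time.sleep(15)
```
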
POST https://management.azure.com<customerId>/providers/Microsoft.Subscription/c
||-|--|--| | `displayName` | Yes | String | The display name of the subscription.| | `skuId` | Yes | String | The sku ID of the Azure plan. Use *0001* for subscriptions of type Microsoft Azure Plan |
-| `resellerId` | No | String | The MPN ID of the reseller who will be associated with the subscription. |
+| `resellerId` | No | String | The ID of the reseller who will be associated with the subscription. |
In the response, you get back a `subscriptionCreationResult` object for monitoring. When the subscription creation is finished, the `subscriptionCreationResult` object returns a `subscriptionLink` object. It has the subscription ID.
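
The same pattern applies to the request described just above; this hedged sketch only differs in the body fields (`displayName`, `skuId`, `resellerId`) and in reading the `subscriptionCreationResult` from the response. The endpoint tail and API version are placeholders, not values confirmed by this article.

```python
import requests

# <customerId> is the full customer resource ID from the POST URL above;
# the endpoint tail and api-version are assumptions to verify in the Subscription API docs.
ENDPOINT = "https://management.azure.com<customerId>/providers/Microsoft.Subscription/createSubscription"
HEADERS = {"Authorization": "Bearer <access-token>"}

body = {
    "displayName": "Customer subscription",  # required
    "skuId": "0001",                         # Microsoft Azure Plan
    "resellerId": "<reseller-id>",           # optional reseller to associate with the subscription
}

resp = requests.post(ENDPOINT, params={"api-version": "<api-version>"}, json=body, headers=HEADERS)
resp.raise_for_status()
# The body is a subscriptionCreationResult; once creation finishes it contains a subscriptionLink.
print(resp.json())
```
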
In the response, you get back a `subscriptionCreationResult` object for monitori
- To view an example of creating an Enterprise Agreement (EA) subscription using .NET, see [sample code on GitHub](https://github.com/Azure-Samples/create-azure-subscription-dotnet-core). * Now that you've created a subscription, you can grant that ability to other users and service principals. For more information, see [Grant access to create Azure Enterprise subscriptions (preview)](grant-access-to-create-subscription.md).
-* For more information about about managing large numbers of subscriptions using management groups, see [Organize your resources with Azure management groups](../../governance/management-groups/overview.md).
+* For more information about managing large numbers of subscriptions using management groups, see [Organize your resources with Azure management groups](../../governance/management-groups/overview.md).
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
The only information available to users for the new account is usage and billing
### Remaining product credits
-If you have a Visual Studio or Microsoft Partner Network product, you get monthly credits. Your credit doesn't carry forward with the product in the new account. The user who accepts the transfer request needs to have a Visual Studio license to accept the transfer request. The product uses the Visual Studio credit that's available in the user's account. For more information, see [Transferring Visual Studio and Partner Network subscriptions](billing-subscription-transfer.md#transfer-visual-studio-and-partner-network-subscriptions).
+If you have a Visual Studio or Microsoft Cloud Partner Program product, you get monthly credits. Your credit doesn't carry forward with the product in the new account. The user who accepts the transfer request needs to have a Visual Studio license to accept the transfer request. The product uses the Visual Studio credit that's available in the user's account. For more information, see [Transferring Visual Studio and Partner Network subscriptions](billing-subscription-transfer.md#transfer-visual-studio-and-partner-network-subscriptions).
### Users keep access to transferred resources
cost-management-billing Troubleshoot Azure Sign Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-azure-sign-up.md
To resolve this error, follow these steps:
1. Sign in to the [Microsoft account center](https://account.microsoft.com/). 1. At the top of the page, select **Your info**.
-1. Verify that your billing and shipping details are are completed and valid.
+1. Verify that your billing and shipping details are completed and valid.
1. When you sign up for the Azure subscription, verify that the billing address for the credit card registration matches your bank records. If you continue to receive the message, try to sign up by using a different browser.
Complete the Agreement.
## Other issues
-### Can't activate Azure benefit plan like Visual Studio, BizSpark, BizSparkPlus, or MPN
+### Can't activate Azure benefit plan like Visual Studio, BizSpark, BizSparkPlus, or Microsoft Cloud Partner Program
Check that you're using the correct sign-in credentials. Then, check the benefit program and verify that you're eligible. - Visual Studio
Check that you're using the correct sign-in credentials. Then, check the benefit
- Microsoft for Startups - Sign in to the [Microsoft for Startups portal](https://startups.microsoft.com/#start-two) to verify your eligibility status for Microsoft for Startups. - If you can't verify your status, you can get help on the [Microsoft for Startups forums](https://www.microsoftpartnercommunity.com/t5/Microsoft-for-Startups/ct-p/Microsoft_Startups).-- MPN
- - Sign in to the [MPN portal](https://mspartner.microsoft.com/Pages/Locale.aspx) to verify your eligibility status. If you have the appropriate [Cloud Platform Competencies](https://mspartner.microsoft.com/pages/membership/cloud-platform-competency.aspx), you may be eligible for additional benefits.
- - If you can't verify your status, contact [MPN Support](https://mspartner.microsoft.com/Pages/Support/Premium/contact-support.aspx).
+- Cloud Partner Program
+ - Sign in to the [Cloud Partner Program portal](https://mspartner.microsoft.com/Pages/Locale.aspx) to verify your eligibility status. If you have the appropriate [Cloud Platform Competencies](https://mspartner.microsoft.com/pages/membership/cloud-platform-competency.aspx), you may be eligible for additional benefits.
+ - If you can't verify your status, contact [Cloud Partner Program Support](https://mspartner.microsoft.com/Pages/Support/Premium/contact-support.aspx).
### Can't activate new Azure In Open subscription
cost-management-billing Understand Mca Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-mca-roles.md
Previously updated : 07/22/2022 Last updated : 10/10/2022
The following table describes the billing roles you use to manage your billing a
|Role|Description| |||
-|Billing account owner (called Organization owner for Marketplace purchases) |Manage everything for billing account|
+|Billing account owner |Manage everything for billing account|
|Billing account contributor|Manage everything except permissions on the billing account| |Billing account reader|Read-only view of everything on billing account| |Billing profile owner|Manage everything for billing profile|
cost-management-billing Understand Vm Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-vm-reservation-charges.md
+ Last updated 10/03/2022
Here's an example. Assume you bought a reservation for five Standard_D1 VMs, the
The reservation discount application ignores the meter used for VMs and only evaluates ServiceType. Look at the `ServiceType` value in `AdditionalInfo` to determine the instance flexibility group/series information for your VMs. The values are in your usage CSV file.
-You can't directly change the instance flexibility group/series of the reservation after purchase. However, you can *exchange* a VM reservation from one instance flexibility group/series to another.
+You can't directly change the instance flexibility group/series of the reservation after purchase. However, you can *exchange* a VM reservation from one instance flexibility group/series to another. For more information about reservation exchanges, see [Exchanges and refunds for Azure Reservations](../reservations/exchange-and-refund-azure-reservations.md).
## Services that get VM reservation discounts
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
+ Previously updated : 06/20/2022 Last updated : 10/10/2022 # Self-service exchanges and refunds for Azure Reservations
-Azure Reservations provide flexibility to help meet your evolving needs. Reservation products are interchangeable with each other if they're the same type of reservation. For example, you can exchange multiple compute reservations including Azure Dedicated Host, Azure VMware Solution, and Azure Virtual Machines with each other all at once. In an other example, you can exchange multiple SQL database reservation types including Managed Instances and Elastic Pool with each other.
-However, you can't exchange dissimilar reservations. For example, you can't exchange a Cosmos DB reservation for SQL Database.
+Azure Reservations provide flexibility to help meet your evolving needs. Reservation products are interchangeable with each other if they're the same type of reservation. For example, you can exchange multiple compute reservations including Azure Dedicated Host, Azure VMware Solution, and Azure Virtual Machines with each other all at once. You can also exchange multiple SQL database reservation types including SQL Managed Instances and Elastic Pool with each other.
+
+However, you can't exchange dissimilar reservations. For example, you can't exchange an Azure Cosmos DB reservation for SQL Database.
You can also exchange a reservation to purchase another reservation of a similar type in a different region. For example, you can exchange a reservation that's in West US 2 region for one that's in West Europe region.
+> [!NOTE]
+> Exchanges will be unavailable for Azure reserved instances for compute services purchased on or after **January 1, 2024**. Microsoft launched Azure savings plan for compute and it's designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to exchange VM sizes (with instance size flexibility) but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations. For a limited time you may [trade-in](../savings-plan/reservation-trade-in.md) your Azure reserved instances for compute for a savings plan. Or, you may continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration you’ll need and want additional savings. Learn more about [Azure savings plan for compute](../savings-plan/index.yml) and how it works with reservations.
When you exchange a reservation, you can change your term from one-year to three-year. You can also refund reservations, but the sum total of all canceled reservation commitments in your billing scope (such as EA, Microsoft Customer Agreement, and Microsoft Partner Agreement) can't exceed USD 50,000 in a 12-month rolling window.
-Azure Databricks reserved capacity, Synapse Analytics Pre-purchase plan, Azure VMware solution by CloudSimple reservation, Azure Red Hat Open Shift reservation, Red Hat plans and, SUSE Linux plans aren't eligible for refunds.
+The following reservations aren't eligible for refunds:
+
+- Azure Databricks reserved capacity
+- Synapse Analytics Pre-purchase plan
+- Azure VMware Solution by CloudSimple
+- Azure Red Hat OpenShift
+- Red Hat plans
+- SUSE Linux plans
> [!NOTE] > - **You must have owner access on the Reservation Order to exchange or refund an existing reservation**. You can [Add or change users who can manage a reservation](./manage-reserved-vm-instance.md#who-can-manage-a-reservation-by-default).
To refund a reservation, go to **Reservation Details** and select **Refund**.
You can return similar types of reservations in one action.
-When you exchange reservations, the new purchase currency amount must be greater than the refund amount. If your new purchase amount is less than the refund amount, you'll get an error. If you see the error, reduce the quantity that you want to return or increase the amount to purchase.
+When you exchange reservations, the new purchase currency amount must be greater than the refund amount. If your new purchase amount is less than the refund amount, you'll get an error. If you see the error, reduce the quantity that you want to return, or increase the amount to purchase.
1. Sign in to the Azure portal and navigate to **Reservations**. 1. In the list of reservations, select the box for each reservation that you want to exchange. 1. At the top of the page, select **Exchange**. 1. If needed, revise the quantity to return for each reservation.
-1. If you select the auto-fill return quantity, you can choose to **Refund all** to fill the list with the full quantity that you own for each reservation or **Optimize for utilization (7-day)** to fill the list with a quantity that optimizes for utilization based on the last seven days of usage. **Select Apply**.
+1. If you select the auto-fill return quantity, you can choose to **Refund all** to fill the list with the full quantity that you own for each reservation. Or, select **Optimize for utilization (7-day)** to fill the list with a quantity that optimizes for utilization based on the last seven days of usage. Select **Apply**.
1. At the bottom of the page, select **Next: Purchase**. 1. On the purchase tab, select the available products that you want to exchange for. You can select multiple products of different types. 1. In the **Select the product you want to purchase** pane, select the products you want, select **Add to cart**, and then select **Close**.
When you exchange reservations, the new purchase currency amount must be greater
## Exchange non-premium storage for premium storage You can exchange a reservation purchased for a VM size that doesn't support premium storage to a corresponding VM size that does. For example, an _F1_ for an _F1s_. To make the exchange, go to Reservation Details and select **Exchange**. The exchange doesn't reset the term of the reserved instance or create a new transaction.
-If you are exchanging for a different size, series, region or payment frequency, the term will be reset for the new reservation.
+If you're exchanging for a different size, series, region or payment frequency, the term will be reset for the new reservation.
## How transactions are processed
-First, Microsoft cancels the existing reservation and refunds the pro-rated amount for that reservation. If there's an exchange, the new purchase is processed. Microsoft processes refunds using one of the following methods, depending on your account type and payment method.
+Microsoft cancels the existing reservation. Then the pro-rated amount for that reservation is refunded. If there's an exchange, the new purchase is processed. Microsoft processes refunds using one of the following methods, depending on your account type and payment method.
### Enterprise agreement customers
Azure has the following policies for cancellations, exchanges, and refunds.
**Refund policies** - We're currently not charging an early termination fee, but in the future there might be a 12% early termination fee for cancellations.-- The total canceled commitment can't exceed 50,000 USD in a 12-month rolling window for a billing profile or single enrollment. For example, for a three-year reservation (36 months) that's 100 USD per month and it's refunded in the 12th month, the canceled commitment is 2,400 USD (for the remaining 24 months). After the refund, your new available limit for refund will be 47,600 USD (50,000-2,400). In 365 days from the refund, the 47,600 USD limit will be increased by 2,400 USD and your new pool will be 50,000 USD. Any other reservation cancellation for the billing profile or EA enrollment will deplete the same pool, and the same replenishment logic will apply.
+- The total canceled commitment can't exceed 50,000 USD in a 12-month rolling window for a billing profile or single enrollment. For example, assume you have a three-year reservation (36 months). It costs 100 USD per month. It's refunded in the 12th month. The canceled commitment is 2,400 USD (for the remaining 24 months). After the refund, your new available limit for refund will be 47,600 USD (50,000-2,400). In 365 days from the refund, the 47,600 USD limit will be increased by 2,400 USD and your new pool will be 50,000 USD. Any other reservation cancellation for the billing profile or EA enrollment will deplete the same pool, and the same replenishment logic will apply.
- Azure won't process any refund that will exceed the 50,000 USD limit in a 12-month window for a billing profile or EA enrollment. - Refunds that result from an exchange don't count against the refund limit. - Refunds are calculated based on the lowest price of either your purchase price or the current price of the reservation.
If you have questions or need help, [create a support request](https://portal.az
## Next steps - To learn how to manage a reservation, see [Manage Azure Reservations](manage-reserved-vm-instance.md).
+- Learn about [Azure savings plan for compute](../savings-plan/index.yml)
- To learn more about Azure Reservations, see the following articles: - [What are Azure Reservations?](save-compute-costs-reservations.md) - [Manage Reservations in Azure](manage-reserved-vm-instance.md)
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
+ Last updated 09/07/2022
You can purchase reservations from Azure portal, APIs, PowerShell, CLI. Read the
- [Azure Blob storage](../../storage/blobs/storage-blob-reserved-capacity.md?toc=/azure/cost-management-billing/reservations/toc.json) - [Azure Files](../../storage/files/files-reserve-capacity.md?toc=/azure/cost-management-billing/reservations/toc.json) - [Azure VMware Solution](../../azure-vmware/reserved-instance.md?toc=/azure/cost-management-billing/reservations/toc.json)-- [Cosmos DB](../../cosmos-db/cosmos-db-reserved-capacity.md?toc=/azure/cost-management-billing/reservations/toc.json)
+- [Azure Cosmos DB](../../cosmos-db/cosmos-db-reserved-capacity.md?toc=/azure/cost-management-billing/reservations/toc.json)
- [Databricks](prepay-databricks-reserved-capacity.md) - [Data Explorer](/azure/data-explorer/pricing-reserved-capacity?toc=/azure/cost-management-billing/reservations/toc.json) - [Dedicated Host](../../virtual-machines/prepay-dedicated-hosts-reserved-instances.md)
cost-management-billing Reservation Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-apis.md
description: Learn about the Azure APIs that you can use to programmatically get
tags: billing+
You can also buy a reservation in the Azure portal. For more information, see th
Service plans: - [Virtual machine](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json)-- [Cosmos DB](../../cosmos-db/cosmos-db-reserved-capacity.md?toc=/azure/cost-management-billing/reservations/toc.json)
+- [Azure Cosmos DB](../../cosmos-db/cosmos-db-reserved-capacity.md?toc=/azure/cost-management-billing/reservations/toc.json)
- [SQL Database](/azure/azure-sql/database/reserved-capacity-overview?toc=/azure/cost-management-billing/reservations/toc.json) Software plans:
To change the scope programmatically, use the API [Reservation - Update](/rest/a
- [Understand reservation usage for your Pay-As-You-Go subscription](understand-reserved-instance-usage.md) - [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md) - [Windows software costs not included with reservations](reserved-instance-windows-software-costs.md)-- [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
+- [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
cost-management-billing Reservation Discount Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-discount-application.md
+ Last updated 09/15/2021
Read the following articles that apply to you to learn how discounts apply to a
- [App Service](reservation-discount-app-service.md) - [Azure Cache for Redis](understand-azure-cache-for-redis-reservation-charges.md)-- [Cosmos DB](understand-cosmosdb-reservation-charges.md)
+- [Azure Cosmos DB](understand-cosmosdb-reservation-charges.md)
- [Database for MariaDB](understand-reservation-charges-mariadb.md) - [Database for MySQL](understand-reservation-charges-mysql.md) - [Database for PostgreSQL](understand-reservation-charges-postgresql.md)
Read the following articles that apply to you to learn how discounts apply to a
- [Manage Azure Reservations](manage-reserved-vm-instance.md) - [Understand reservation usage for your subscription with pay-as-you-go rates](understand-reserved-instance-usage.md) - [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)-- [Windows software costs not included with reservations](reserved-instance-windows-software-costs.md)
+- [Windows software costs not included with reservations](reserved-instance-windows-software-costs.md)
cost-management-billing Troubleshoot Reservation Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/troubleshoot-reservation-utilization.md
description: This article helps you understand and troubleshoot Azure reservatio
+
If you find that your utilization values don't match your expectations, review t
## Other common scenarios - If the reservation status is "No Benefit", a warning message is shown. To solve it, follow the recommendations presented on the reservation's page. - You may have stopped running resource A and started running resource B, which isn't applicable to the reservation you purchased. To solve this, you may need to exchange the reservation to match it to the right resource.
+ - For more information about reservation exchanges, see [Exchanges and refunds for Azure Reservations](exchange-and-refund-azure-reservations.md)
- You may have moved a resource from one subscription or resource group to another, whereas the scope of the reservation is different from where the resource is being moved to. To resolve this case, you may need to change the scope of the reservation. - You may have purchased another reservation that also applied a benefit to the same scope, and as a result, less of an existing reserved instance applied a benefit. To solve this, you may need to exchange/refund one of the reservations. - You may have stopped running a particular resource, and as a result it stopped emitting usage and the benefit stopped applying. To solve this, you may need to exchange the reservation to match it to the right resource.
cost-management-billing Understand Cosmosdb Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-cosmosdb-reservation-charges.md
+ Last updated 06/10/2021
Stopped resources are billed and continue to use reservation hours. Deallocate o
## Reservation discount applied to Azure Cosmos DB accounts
-A reservation discount is applied to [provisioned throughput](../../cosmos-db/request-units.md) in terms of request units per second (RU/s) on an hour-by-hour basis. For Azure Cosmos DB resources that don't run the full hour, the reservation discount is automatically applied to other Cosmos DB resources that match the reservation attributes. The discount can apply to Azure Cosmos DB resources that are running concurrently. If you don't have Cosmos DB resources that run for the full hour and that match the reservation attributes, you don't get the full benefit of the reservation discount for that hour.
+A reservation discount is applied to [provisioned throughput](../../cosmos-db/request-units.md) in terms of request units per second (RU/s) on an hour-by-hour basis. For Azure Cosmos DB resources that don't run the full hour, the reservation discount is automatically applied to other Azure Cosmos DB resources that match the reservation attributes. The discount can apply to Azure Cosmos DB resources that are running concurrently. If you don't have Azure Cosmos DB resources that run for the full hour and that match the reservation attributes, you don't get the full benefit of the reservation discount for that hour.
The discounts are tiered. Reservations with higher request units provide higher discounts.
To learn more about Azure reservations, see the following articles:
* [Manage reservations for Azure](manage-reserved-vm-instance.md) * [Understand reservation usage for your Pay-As-You-Go subscription](understand-reserved-instance-usage.md) * [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)
-* [Understand reservation usage for CSP subscriptions](/partner-center/azure-reservations)
+* [Understand reservation usage for CSP subscriptions](/partner-center/azure-reservations)
cost-management-billing Understand Reserved Instance Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-reserved-instance-usage.md
description: Learn how to read your usage to understand how the Azure reservatio
tags: billing+
Filter on **Additional Info** and type in your **Reservation ID**. The following
4. **Meter ID** is the meter ID for the reservation. The cost of this meter is $0. This meter ID appears for any VM that qualifies for the reservation discount. 5. Standard_DS1_v2 is one vCPU VM and the VM is deployed without Azure Hybrid Benefit. So, this meter covers the extra charge of the Windows software. To find the meter corresponding to D series 1 core VM, see [Azure Reserve VM Instances Windows software costs](reserved-instance-windows-software-costs.md). If you have the Azure Hybrid Benefit, this extra charge is not applied.
-## Usage for SQL Database & Cosmos DB reservations
+## Usage for Azure SQL Database and Azure Cosmos DB reservations
The following sections use Azure SQL Database as example to describe the usage report. You can use same steps to get usage for Azure Cosmos DB as well.
cost-management-billing Understand Storage Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-storage-charges.md
Stopped resources are billed and continue to use reservation hours. Deallocate o
## Discount examples The following examples show how the reserved capacity discount applies, depending on the deployments.
-Suppose that you have purchased 100 TiB of reserved capacity in the in US West 2 region for a 1-year term. Your reservation is for locally redundant storage (LRS) blob storage in the hot access tier.
+Suppose that you have purchased 100 TiB of reserved capacity in the US West 2 region for a 1-year term. Your reservation is for locally redundant storage (LRS) blob storage in the hot access tier.
Assume that the cost of this sample reservation is $18,540. You can either choose to pay the full amount up front or to pay fixed monthly installments of $1,545 per month for the next twelve months.
cost-management-billing Buy Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/buy-savings-plan.md
+
+ Title: Buy an Azure savings plan
+
+description: This article helps you buy an Azure savings plan.
++++++ Last updated : 10/12/2022+++
+# Buy an Azure savings plan
+
+Azure savings plans help you save money by committing to an hourly spend for one-year or three-year plans for Azure compute resources. Savings plan discounts apply to usage from virtual machines, Dedicated Hosts, Container Instances, App Services, and Azure Premium Functions. The hourly commitment is priced in USD for Microsoft Customer Agreement customers and local currency for Enterprise customers. Before you enter a commitment to buy a savings plan, be sure to review the following sections to prepare for your purchase.
+
+## Who can buy a savings plan
+
+You can buy a savings plan for an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P), Microsoft Customer Agreement (MCA) or Microsoft Partner Agreement.
+
+Savings plan discounts only apply to resources associated with subscriptions purchased through an Enterprise Agreement, Microsoft Customer Agreement, or Microsoft Partner Agreement (MPA).
+
+### Enterprise Agreement customers
+
+- EA admins with write permissions can directly purchase savings plans from **Cost Management + Billing** > **Savings plan**. No specific permission for a subscription is needed.
+- Subscription owners for one of the subscriptions in the EA enrollment can purchase savings plans from **Home** > **Savings plan**.
+- Enterprise Agreement (EA) customers can limit purchases to EA admins only by disabling the **Add Savings Plan** option in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to the **Policies** menu to change settings.
+
+### Microsoft Customer Agreement (MCA) customers
+
+- Customers with billing profile contributor permissions and above can purchase savings plans from the **Cost Management + Billing** > **Savings plan** experience. No specific permissions on a subscription are needed.
+- Subscription owners for one of the subscriptions in the billing profile can purchase savings plans from **Home** > **Savings plan**.
+- To disallow savings plan purchases on a billing profile, billing profile contributors can navigate to the Policies menu under the billing profile and adjust the **Azure Savings Plan** option.
+
+### Microsoft Partner Agreement partners
+
+- Partners can use **Home** > **Savings plan** in the Azure portal to purchase savings plans for their customers.
+
+## Scope savings plans
+
+You can scope a savings plan to a shared scope, a management group, a subscription, or a resource group. Setting the scope for a savings plan selects where the savings apply. When you scope the savings plan to a resource group, savings plan discounts apply only to the resource group, not the entire subscription.
+
+### Savings plan scoping options
+
+You have four options to scope a savings plan, depending on your needs:
+
+- **Shared scope** - Applies the savings plan discounts to matching resources in eligible subscriptions that are in the billing scope. If a subscription was moved to a different billing scope, the benefit no longer applies to the subscription. It does continue to apply to other subscriptions in the billing scope.
+ - For Enterprise Agreement customers, the billing scope is the enrollment. The savings plan shared scope would include multiple Active Directory tenants in an enrollment.
+ - For Microsoft Customer Agreement customers, the billing scope is the billing profile.
+ - For Microsoft Partner Agreement, the billing scope is a customer.
+- **Single subscription scope** - Applies the savings plan discounts to the matching resources in the selected subscription.
+- **Management group** - Applies the savings plan discounts to the matching resources in the list of subscriptions that are a part of both the management group and the billing scope. To scope a savings plan to a management group, you must have at least read permission on the management group.
+- **Single resource group scope** - Applies the savings plan discounts to the matching resources in the selected resource group only.
+
+When savings plan discounts are applied to your usage, Azure processes the savings plan in the following order:
+
+1. Savings plans with a single resource group scope
+2. Savings plans with a single subscription scope
+3. Savings plans scoped to a management group
+4. Savings plans with a shared scope (multiple subscriptions), described previously
+
+You can always update the scope after you buy a savings plan. To do so, go to the savings plan, select **Configuration**, and rescope the savings plan. Rescoping a savings plan isn't a commercial transaction. Your savings plan term isn't changed. For more information about updating the scope, see [Update the scope after you purchase a savings plan](manage-savings-plan.md#change-the-savings-plan-scope).
++
+## Discounted subscription and offer types
+
+Savings plan discounts apply to the following eligible subscriptions and offer types.
+
+- Enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P)
+- Microsoft Customer Agreement subscriptions.
+- Microsoft Partner Agreement subscriptions.
+
+## Purchase savings plans
+
+You can purchase savings plans in the Azure portal.
+
+### Buy savings plans with monthly payments
+
+You can pay for savings plans with monthly payments. Unlike an up-front purchase where you pay the full amount, the monthly payment option divides the total cost of the savings plan evenly over each month of the term. The total cost of up-front and monthly savings plans is the same and you don't pay any extra fees when you choose to pay monthly.
+
+If a savings plan is purchased using an MCA, your monthly payment amount may vary, depending on the current month's market exchange rate for your local currency.
+
+## View payments made
+
+You can view payments that were made using APIs, usage data, and cost analysis. For savings plans paid for monthly, the frequency value is shown as **recurring** in the usage data and the Savings Plan Charges API. For savings plans paid up front, the value is shown as **onetime**.
+
+Cost analysis shows monthly purchases in the default view. Apply the **purchase** filter to **Charge type** and **recurring** for **Frequency** to see all purchases. To view only savings plans, apply a filter for **Savings Plan**.
++
+## Reservation trade ins and refunds
+
+Unlike reservations, you can't return or exchange savings plans.
+
+You can trade in one or more reservations for a savings plan. When you trade in reservations, the hourly commitment of the new savings plan must be greater than the leftover payments that are canceled for the returned reservations. There are no other limits or fees for trade ins. You can trade in a reservation that's paid for up front to purchase a new savings plan that's billed monthly. However, the lifetime value of the new savings plan must be greater than the prorated value of the reservations traded in.
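
As a rough, unofficial illustration of that last rule, the sketch below checks whether a proposed savings plan's lifetime commitment exceeds the prorated remaining value of the reservations being traded in. The hours-per-year figure and the sample inputs are assumptions for the example only.

```python
HOURS_PER_YEAR = 24 * 365  # simplifying assumption; actual proration may differ

def trade_in_allowed(hourly_commitment: float, term_years: int, remaining_reservation_value: float) -> bool:
    """True when the new savings plan's lifetime value exceeds the prorated
    value of the reservations being returned, per the rule described above."""
    lifetime_value = hourly_commitment * HOURS_PER_YEAR * term_years
    return lifetime_value > remaining_reservation_value

# Example: a 2.00/hour, three-year savings plan versus 40,000 of remaining reservation value.
print(trade_in_allowed(2.00, 3, 40_000))  # True: 2.00 * 8760 * 3 = 52,560 > 40,000
```
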
+
+## Savings plan notifications
+
+Depending on how you pay for your Azure subscription, email savings plan notifications are sent to the following users in your organization. Notifications are sent for various events including:
+
+- Purchase
+- Upcoming savings plan expiration
+- Expiry
+- Renewal
+- Cancellation
+- Scope change
+
+For customers with EA subscriptions:
+
+- Notifications are sent to EA administrators and EA notification contacts.
+- Users added to a savings plan using Azure RBAC (IAM) permission don't receive any email notifications.
+
+For customers with MCA subscriptions:
+
+- The purchaser receives a purchase notification.
+
+For Microsoft Partner Agreement partners:
+
+- Notifications are sent to the partner.
+
+## Need help? Contact us.
+
+If you have Azure savings plan for compute questions, contact your account team, or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft provides expert support for Azure savings plan for compute in English only.
+
+## Next steps
+
+- [Permissions to view and manage Azure savings plans](permission-view-manage.md)
+- [Manage Azure savings plans](manage-savings-plan.md)
cost-management-billing Cancel Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/cancel-savings-plan.md
+
+ Title: Azure saving plan cancellation policies
+
+description: Learn about Azure saving plan cancellation policies.
+++++++ Last updated : 10/12/2022++
+# Azure saving plan cancellation policies
+
+All savings plan purchases are final. Before you buy a savings plan, review the following policies.
+
+## Savings plans can't be canceled
+
+After you buy a savings plan, you can't cancel it.
+
+## Savings plans can't be refunded
+
+After you buy a savings plan, you can't refund it.
+
+## Savings plans can't be exchanged for a reservation
+
+After you buy a savings plan, you can't exchange it for an Azure reservation. However, you can trade in an Azure reservation for a new savings plan. For more information about reservation trade ins, see [Self-service trade-in for Azure savings plans](reservation-trade-in.md).
+
+## Next steps
+
+- Learn more about savings plan at [What are Azure savings plans for compute?](savings-plan-compute-overview.md)
+- Learn about [Self-service trade-in for Azure savings plans](reservation-trade-in.md).
cost-management-billing Charge Back Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/charge-back-costs.md
+
+ Title: Charge back Azure saving plan costs
+
+description: Learn how to view Azure saving plan costs for chargeback.
++++++ Last updated : 10/12/2022+++
+# Charge back Azure saving plan costs
+
+Enterprise Agreement and Microsoft Customer Agreement billing readers can view amortized cost data for savings plans. They can use the cost data to charge back the monetary value for a subscription, resource group, resource, or a tag to their partners. In amortized data, the effective price is the prorated hourly savings plan cost. The cost is the total cost of savings plan usage by the resource on that day.
+
+Users with an individual subscription can get the amortized cost data from their usage file. When a resource gets a savings plan discount, the _AdditionalInfo_ section in the usage file contains the savings plan details. For more information, see [Download usage from the Azure portal](../understand/download-azure-daily-usage.md#download-usage-from-the-azure-portal-csv).
+
+## View savings plan usage data for show back and charge back
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Navigate to **Cost Management + Billing**.
+3. Select **Cost analysis** from the left navigation menu.
+4. Under **Actual Cost**, select the **Amortized Cost** metric.
+5. To see which resources were used by a savings plan, apply a filter for **Pricing Model** and then select **SavingsPlan**.
+6. Set the **Granularity** to **Monthly** or **Daily**.
+7. Set the chart type to **Table**.
+8. Set the **Group by** option to **Resource**.
++
+## Get the data for show back and charge back
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Navigate to **Cost Management + Billing**.
+3. Select **Export** from the left navigation menu.
+4. Select **Add**.
+5. Select **Amortized cost** as the metric and set up your export.
+
+The *EffectivePrice* for usage that gets a savings plan discount is the prorated cost of the savings plan, instead of being zero. It helps you know the monetary value of savings plan consumption by a subscription, resource group, or resource. It can help you charge back for the savings plan utilization internally. The dataset also has unused savings plan benefits.
+
+## Get Azure consumption and savings plan usage data using API
+
+You can get the data using the API or download it from Azure portal.
+
+You call the [Usage Details API](/rest/api/consumption/usagedetails/list) to get the new data. For more information about terminology, see [Usage terms](../understand/understand-usage.md).
+
+Here's an example call to the Usage Details API:
+
+```http
+https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{enrollmentId}/providers/Microsoft.Billing/billingPeriods/{billingPeriodId}/providers/Microsoft.Consumption/usagedetails?metric={metric}&api-version=2019-05-01&$filter={filter}
+```
+
+For more information about `{enrollmentId}` and `{billingPeriodId}`, see the [Usage Details - List](/rest/api/consumption/usagedetails/list) API article.
+
+The metric and filter information in the following table can help you solve common savings plan problems. A short filtering sketch follows the table.
+
+| Type of API data | API call action |
+| | |
+| **All Charges (usage and purchases)** | Request for an ActualCost report. |
+| **Usage that got savings plan discount** | Request for an ActualCost report. <br><br> Once you've ingested all of the usage, look for records with ChargeType = 'Usage' and PricingModel = 'SavingsPlan'. |
+| **Usage that didn't get savings plan discount** | Request for an ActualCost report.<br><br> Once you've ingested all of the usage, filter for usage records with PricingModel = 'OnDemand'. |
+| **Amortized charges (usage and purchases)** | Request for an AmortizedCost report. |
+| **Unused savings plan report** | Request for an AmortizedCost report.<br><br> Once you've ingested all of the usage, filter for usage records with ChargeType = 'UnusedBenefit' and PricingModel ='SavingsPlan'. |
+| **Savings plan purchases** | Request for an AmortizedCost report.<br><br> Once you've ingested all of the usage, filter for usage records with ChargeType = 'Purchase' and PricingModel = 'SavingsPlan'. |
+| **Refunds** | Request for an AmortizedCost report.<br><br> Once you've ingested all of the usage, filter for usage records with ChargeType = 'Refund'. |
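
To illustrate the filters in the table, here's a small pandas sketch that classifies rows from an ingested AmortizedCost report. The file name is a placeholder, the `ChargeType` and `PricingModel` columns follow the terms used above, and the cost column name (`CostInBillingCurrency` here) is an assumption that varies by export schema.

```python
import pandas as pd

# Placeholder file: an AmortizedCost report you exported or downloaded earlier.
usage = pd.read_csv("amortized-cost-report.csv")

# Usage that received a savings plan discount.
sp_usage = usage[(usage["ChargeType"] == "Usage") & (usage["PricingModel"] == "SavingsPlan")]

# Unused savings plan benefit.
unused = usage[(usage["ChargeType"] == "UnusedBenefit") & (usage["PricingModel"] == "SavingsPlan")]

# Savings plan purchases and refunds.
purchases = usage[(usage["ChargeType"] == "Purchase") & (usage["PricingModel"] == "SavingsPlan")]
refunds = usage[usage["ChargeType"] == "Refund"]

# Cost column name is schema-dependent; adjust to match your export.
print(sp_usage["CostInBillingCurrency"].sum(), unused["CostInBillingCurrency"].sum())
```
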
+
+## Download the usage CSV file with new data
+
+If you're an EA admin, you can download the CSV file that contains new usage data from the Azure portal. This data isn't available from the EA portal (ea.azure.com). To see the new data, download the usage file from the Azure portal (portal.azure.com).
+
+In the Azure portal, navigate to [Cost management + billing](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/BillingAccounts).
+
+1. Select the scope.
+ 1. For EA, your scope is the enrollment.
+ 1. For MCA, your scope is the billing account.
+1. Select **Usage + charges**.
+1. Select **Download**.
+1. In **Usage Details**, select **Amortized usage data**.
+
+The CSV files that you download contain actual costs and amortized costs.
+
+## Need help? Contact us.
+
+If you have Azure savings plan for compute questions, contact your account team, or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft provides expert support for Azure savings plan for compute in English only.
+
+## Next steps
+
+To learn more about Azure savings plan usage data, see the following articles:
+
+- [View Azure savings plan cost and usage details](utilization-cost-reports.md)
+
cost-management-billing Choose Commitment Amount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/choose-commitment-amount.md
+
+ Title: Choose an Azure saving plan commitment amount
+
+description: This article helps you determine how to choose an Azure saving plan commitment amount.
++++++ Last updated : 10/12/2022+++
+# Choose an Azure saving plan commitment amount
+
+You should purchase savings plans based on consistent base usage. Committing to a greater spend than your historical usage could result in an underutilized commitment, which should be avoided when possible. Unused commitment doesn't carry over from one hour to the next. Usage exceeding the savings plan commitment is charged at more expensive pay-as-you-go rates.
+
+## Savings plan purchase recommendations
+
+Savings plan purchase recommendations are calculated by analyzing your hourly usage data over the last 7, 30, and 60 days. Azure calculates what your costs would have been if you had a savings plan and compares that with your actual pay-as-you-go costs incurred over the same period. The calculation is performed for every quantity that you used during the time frame. The commitment amount that maximizes your savings is recommended.
+
+For example, you might use 500 VMs most of the time, but sometimes usage spikes to 700 VMs. In this example, Azure calculates your savings for both the 500 and 700 VM quantities. Since the 700 VM usage is sporadic, the recommendation calculation determines that savings are maximized for a savings plan commitment that is sufficient to cover 500 VMs and the recommendation is provided for that commitment.
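
As a simplified sketch of that logic (not the actual recommendation engine), the snippet below scores candidate hourly commitments against historical hourly pay-as-you-go costs, assuming a flat illustrative discount rate; real discounts vary by product and by your rates.

```python
def hourly_cost_with_plan(payg_cost: float, commitment: float, discount: float) -> float:
    """Cost of one hour with an hourly commitment: eligible usage is discounted until the
    commitment is used up, the rest is pay-as-you-go, and unused commitment is still charged."""
    discounted_usage = payg_cost * (1 - discount)  # the hour's usage priced at savings plan rates
    if discounted_usage <= commitment:
        return commitment  # commitment not fully used, but still paid in full
    # The commitment covers pay-as-you-go value of commitment / (1 - discount); overage stays pay-as-you-go.
    return commitment + (payg_cost - commitment / (1 - discount))


def recommend_commitment(hourly_payg_costs: list[float], candidates: list[float],
                         discount: float = 0.30) -> float:
    """Pick the candidate hourly commitment that maximizes savings versus pure pay-as-you-go."""
    baseline = sum(hourly_payg_costs)
    def savings(c: float) -> float:
        return baseline - sum(hourly_cost_with_plan(p, c, discount) for p in hourly_payg_costs)
    return max(candidates, key=savings)


# Mirrors the 500-vs-700 VM scenario above: steady usage with occasional spikes.
hourly_costs = [500.0] * 650 + [700.0] * 70  # illustrative pay-as-you-go cost per hour
print(recommend_commitment(hourly_costs, [300.0, 350.0, 400.0, 490.0]))  # 350.0 covers the steady base
```
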
+
+Note the following points:
+
+- Savings plan recommendations are calculated using the on-demand usage rates that apply to you.
+- Recommendations are calculated using individual sizes, not for the instance size family.
+- The recommended commitment for a scope is reduced on the same day that you purchase a commitment for the scope.
+ - However, an update for the commitment amount recommendation across scopes can take up to 25 days. For example, if you purchase based on shared scope recommendations, the single subscription scope recommendations can take up to 25 days to adjust down.
+- Currently, Azure doesn't generate recommendations for the management group scope.
+
+## Recommendations in the Azure portal
+
+Savings plan purchase recommendations are calculated by the recommendation engine for the selected term and scope, based on the last 30 days of usage. Recommendations are shown in the savings plan purchase experience in the Azure portal.
+
+## Need help? Contact us.
+
+If you have Azure savings plan for compute questions, contact your account team, or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft provides expert support for Azure savings plan for compute in English only.
+
+## Next steps
+
+- [Manage Azure savings plans](manage-savings-plan.md)
+- [View Azure savings plan cost and usage details](utilization-cost-reports.md)
+- [Software costs not included in saving plan](software-costs-not-included.md)
cost-management-billing Discount Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/discount-application.md
+
+ Title: How an Azure saving plan discount is applied
+
+description: Learn about how the discounts you receive are applied.
+++++++ Last updated : 10/12/2022++
+# How a saving plan discount is applied
+
+Azure savings plans save you money when you have consistent usage of Azure compute resources. An Azure savings plan can help you save money by allowing you to commit to a fixed hourly spend on compute services for one-year or three-year terms. The savings can significantly reduce your resource costs by up to 66% from pay-as-you-go prices. Discount rates per meter vary by commitment term (1-year or 3-year), not commitment amount.
+
+Each hour, your eligible compute usage is discounted until you reach your commitment amount; usage beyond the commitment is priced at pay-as-you-go rates. To be eligible for a savings plan benefit, the usage must be generated by a resource within the savings plan's scope. Each hour's benefit is _use-it-or-lose-it_, and can't be rolled over to another hour.
+
+The benefit is first applied to the product that has the greatest savings plan discount when compared to the equivalent pay-as-you-go rate (see your price list for savings plan pricing). The application prioritization is done to ensure that you receive the maximum benefit from your savings plan investment. We multiply the savings plan rate by that product's usage and deduct the result from the savings plan commitment. The process repeats until the commitment is exhausted (or until there's no more usage to consume the benefit).
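
The following sketch walks through one hour of that application with made-up products and rates (none of these numbers come from a price list): usage is sorted by discount depth, the commitment is drawn down at savings plan rates, and anything left over is billed at pay-as-you-go rates.

```python
# One hour of benefit application, following the order described above.
# Product names and rates are illustrative only, not price-sheet values.
usage = [  # (product, quantity, pay-as-you-go rate, savings plan rate)
    ("VM series A", 10, 1.00, 0.55),
    ("App Service plan", 5, 2.00, 1.30),
    ("VM series B", 20, 0.50, 0.40),
]
commitment = 12.0  # hourly commitment in your billing currency

remaining = commitment
overage = 0.0
# Apply the benefit to the product with the greatest discount versus pay-as-you-go first.
for product, qty, payg_rate, sp_rate in sorted(usage, key=lambda u: 1 - u[3] / u[2], reverse=True):
    sp_cost = qty * sp_rate
    if remaining >= sp_cost:
        remaining -= sp_cost  # this product's usage is fully covered by the commitment
    else:
        uncovered_fraction = 1 - (remaining / sp_cost)
        overage += qty * uncovered_fraction * payg_rate  # leftover usage billed at pay-as-you-go
        remaining = 0.0

print(f"Commitment used: {commitment - remaining:.2f}, pay-as-you-go overage: {overage:.2f}")
```
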
+
+A savings plan discount only applies to resources associated with Enterprise Agreement, Microsoft Partner Agreement, and Microsoft Customer Agreements. Resources that run in a subscription with other offer types don't receive the discount.
+
+## When the savings plan term expires
+
+At the end of the savings plan term, the billing discount expires, and the resources are billed at the pay-as-you-go price. By default, savings plans aren't set to renew automatically. You can choose to enable automatic renewal of a savings plan by selecting the option in the renewal settings.
+
+## Next steps
+
+- [Manage Azure savings plans](manage-savings-plan.md)
cost-management-billing Manage Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/manage-savings-plan.md
+
+ Title: Manage Azure savings plans
+
+description: Learn how to manage savings plans. See steps to change the plan's scope, split a plan, and optimize its use.
++++++ Last updated : 10/12/2022+++
+# Manage Azure savings plans
++
+After you buy an Azure savings plan, you may need to apply the savings plan to a different subscription, change who can manage the savings plan, or change the scope of the savings plan.
+
+_The permissions needed to manage a savings plan are separate from subscription permissions._
+
+## Savings plan order and savings plan
+
+To view a savings plan order, go to **Savings Plan** > select the savings plan, and then select the **Savings plan order ID**.
+
+## Change the savings plan scope
+
+Your savings plan discount applies to virtual machines, Azure Dedicated Hosts, Azure App services, Azure Container Instances, and Azure Premium Functions resources that match your savings plan and run in the savings plan scope. The billing scope is dependent on the subscription used to buy the savings plan.
+
+To update a savings plan scope:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Search for **Cost Management + Billing** > **Savings plans**.
+3. Select the savings plan.
+4. Select **Settings** > **Configuration**.
+5. Change the scope.
+
+If you change from shared to single scope, you can only select subscriptions where you're the owner. If you are a billing administrator, you don't need to be an owner on the subscription. Only subscriptions within the same billing scope as the savings plan can be selected.
+
+The scope only applies to Enterprise offers (MS-AZR-0017P or MS-AZR-0148P), Microsoft Customer Agreements, and Microsoft Partner Agreements.
+
+If all subscriptions are moved out of a management group, the scope of the savings plan is automatically changed to **Shared**.
+
+## Who can manage a savings plan
+
+By default, the following users can view and manage savings plans:
+
+- The person who bought the savings plan and the account owner for the billing subscription get Azure RBAC access to the savings plan order.
+- Enterprise Agreement and Microsoft Customer Agreement billing contributors can manage all savings plans from Cost Management + Billing > **Savings plan**.
+
+For more information, see [Permissions to view and manage Azure savings plans](permission-view-manage.md).
+
+## How billing administrators view or manage savings plans
+
+If you're a billing administrator, you don't need to be an owner on the subscription. Use the following steps to view and manage all savings plans and their transactions.
+
+1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to **Cost Management + Billing**.
+ - If you're an EA admin, in the left menu, select **Billing scopes** and then in the list of billing scopes, select one.
+ - If you're a Microsoft Customer Agreement billing profile owner, in the left menu, select **Billing profiles**. In the list of billing profiles, select one.
+2. In the left menu, select **Products + services** > **Savings plan**.
+3. The complete list of savings plans for your EA enrollment or billing profile is shown.
+
+## Change billing subscription
+
+You can't change the billing subscription after a savings plan is purchased.
+
+## Cancel, exchange, or refund
+
+You can't cancel, exchange, or refund savings plans.
+
+## View savings plan usage
+
+Billing administrators can view savings plan usage in Cost Management + Billing.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Navigate to **Cost Management + Billing** > **Savings plans** and note the **Utilization (%)** for a savings plan.
+1. Select a savings plan.
+1. Review the savings plan use trend over time.
+
+## Need help? Contact us.
+
+If you have Azure savings plan for compute questions, contact your account team, or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft provides expert support for Azure savings plan for compute in English only.
+
+## Next steps
+
+To learn more about Azure savings plans, see the following articles:
+- [View saving plan utilization](utilization-cost-reports.md)
+- [Cancellation policy](cancel-savings-plan.md)
+- [Renew a savings plan](renew-savings-plan.md)
cost-management-billing Permission View Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/permission-view-manage.md
+
+ Title: Permissions to view and manage Azure savings plans
+
+description: Learn how to view and manage your savings plan in the Azure portal.
++++++ Last updated : 10/12/2022+++
+# Permissions to view and manage Azure savings plans
+
+This article explains how savings plan permissions work and how users can view and manage Azure savings plans in the Azure portal.
+
+## Who can manage a savings plan by default
+
+By default, the following users can view and manage savings plans:
+
+- The person who buys a savings plan and the account administrator of the billing subscription used to buy the savings plan are added to the savings plan order.
+- Enterprise Agreement and Microsoft Customer Agreement billing administrators.
+- Users with elevated access to manage all Azure subscriptions and management groups.
+
+The savings plan lifecycle is independent of an Azure subscription, so the savings plan isn't a resource under the Azure subscription. Instead, it's a tenant-level resource with its own Azure RBAC permission separate from subscriptions. Savings plans don't inherit permissions from subscriptions after the purchase.
+
+## View and manage savings plans
+
+If you're a billing administrator, use the following steps to view and manage all savings plans and savings plan transactions in the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and navigate to **Cost Management + Billing**.
+ - If you're an EA admin, in the left menu, select **Billing scopes** and then in the list of billing scopes, select one.
+ - If you're a Microsoft Customer Agreement billing profile owner, in the left menu, select **Billing profiles**. In the list of billing profiles, select one.
+1. In the left menu, select **Products + services** > **Savings plans**.
+1. The complete list of savings plans for your EA enrollment or billing profile is shown.
+
+## Add billing administrators
+
+Add a user as billing administrator to an Enterprise Agreement or a Microsoft Customer Agreement in the Azure portal.
+
+- For an Enterprise Agreement, add users with the _Enterprise Administrator_ role to view and manage all savings plan orders that apply to the Enterprise Agreement. Enterprise administrators can view and manage savings plans in **Cost Management + Billing**.
+ - Users with the _Enterprise Administrator (read only)_ role can only view the savings plan from **Cost Management + Billing**.
+ - Department admins and account owners can't view savings plans _unless_ they're explicitly added to them using Access control (IAM). For more information, see [Managing Azure Enterprise roles](../manage/understand-ea-roles.md).
+- For a Microsoft Customer Agreement, users with the billing profile owner role or the billing profile contributor role can manage all savings plan purchases made using the billing profile. Billing profile readers and invoice managers can view all savings plans that are paid for with the billing profile. However, they can't make changes to savings plans. For more information, see [Billing profile roles and tasks](../manage/understand-mca-roles.md#billing-profile-roles-and-tasks).
+
+## View savings plans with Azure RBAC access
+
+If you purchased the savings plan or you're added to a savings plan, use the following steps to view and manage savings plans in the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Select **All Services** > **Savings plans** to list savings plans that you have access to.
+
+## Manage subscriptions and management groups with elevated access
+
+You can elevate a user's [access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md). After you have elevated access:
+
+1. Navigate to **All Services** > **Savings plans** to see all savings plans that are in the tenant.
+2. To make modifications to the savings plan, add yourself as an owner of the savings plan order using Access control (IAM).
+
+## Grant access to individual savings plans
+
+Users who have owner access on a savings plan and billing administrators can delegate access management for an individual savings plan order in the Azure portal. To allow other people to manage savings plans, you have two options:
+
+- Delegate access management for an individual savings plan order by assigning the Owner role to a user at the resource scope of the savings plan order. If you want to give limited access, select a different role.
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+- Add a user as billing administrator to an Enterprise Agreement or a Microsoft Customer Agreement:
+ - For an Enterprise Agreement, add users with the _Enterprise Administrator_ role to view and manage all savings plan orders that apply to the Enterprise Agreement. Users with the _Enterprise Administrator (read only)_ role can only view the savings plan. Department admins and account owners can't view savings plans _unless_ they're explicitly added to them using Access control (IAM). For more information, see [Managing Azure Enterprise roles](../manage/understand-ea-roles.md).
+ - For a Microsoft Customer Agreement, users with the billing profile owner role or the billing profile contributor role can manage all savings plan purchases made using the billing profile. Billing profile readers and invoice managers can view all savings plans that are paid for with the billing profile. However, they can't make changes to savings plans. For more information, see [Billing profile roles and tasks](../manage/understand-mca-roles.md#billing-profile-roles-and-tasks).
+
+## Next steps
+
+- [Manage savings plans](manage-savings-plan.md).
cost-management-billing Purchase Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/purchase-recommendations.md
+
+ Title: Azure savings plan recommendations
+
+description: Learn about how Azure makes savings plan purchase recommendations.
+ Last updated : 10/12/2022
+# Azure savings plan recommendations
+
+Azure savings plan purchase recommendations are provided through [Azure Advisor](../../advisor/advisor-reference-cost-recommendations.md#reserved-instances), and through the savings plan purchase experience in the Azure portal. The recommended commitment is calculated for the highest possible usage, and it's based on your historical usage. Your recommendation might not be for 100% utilization if you have inconsistent usage. To maximize savings, try to purchase a savings plan that's as close to the recommendation as possible.
+
+The following steps define how recommendations are calculated:
+
+1. The recommendation engine evaluates the hourly usage for your resources in the given scope over the past 7, 30, and 60 days.
+2. Based on the usage data, the engine simulates your costs with and without a savings plan.
+3. The costs are simulated for different commitment amounts, and the commitment amount that maximizes the savings is recommended.
+4. The recommendation calculations include any discounts that you might have on your on-demand usage rates.
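+
+The following Python sketch illustrates the simulation logic described above. It isn't the actual recommendation engine: it assumes a single flat discount rate, made-up hourly usage numbers, and illustrative candidate commitment amounts.
+
+```python
+# A simplified model of the steps above, not the actual recommendation engine.
+# A single flat discount rate and made-up usage numbers are assumed.
+
+def cost_with_plan(on_demand: float, commitment: float, discount: float = 0.3) -> float:
+    """Hourly cost with a savings plan: you always pay the commitment, and usage
+    the commitment can't cover is billed at pay-as-you-go rates."""
+    covered_on_demand = commitment / (1 - discount)  # on-demand value the commitment covers
+    overage = max(on_demand - covered_on_demand, 0.0)
+    return commitment + overage
+
+def recommend(hourly_usage: list[float], candidates: list[float]) -> float:
+    """Return the candidate hourly commitment with the lowest simulated total cost."""
+    return min(candidates, key=lambda c: sum(cost_with_plan(h, c) for h in hourly_usage))
+
+hourly_usage = [4.0] * (24 * 30)                       # 30 days of steady $4/hour usage
+print(recommend(hourly_usage, [1.0, 2.0, 2.8, 4.0]))   # prints 2.8 for this example
+```
+
+Overcommitting (4.0 in this example) wastes benefit, and undercommitting leaves more usage at pay-as-you-go rates, which is why the engine picks the amount that maximizes savings.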
+
+## Purchase recommendations in the Azure portal
+
+The savings plan purchase experience shows up to 10 commitment amounts. All recommendations are based on the last 30 days of usage. For each amount, we show the percentage of your current pay-as-you-go costs that the amount could save you. We also show the percentage of your total compute usage that the commitment amount would cover.
+
+By default, the recommendations are for the entire billing scope (billing account or billing profile for MCA and enrollment for EA). You can view subscription and resource group-level recommendations by restricting benefit application to one of those levels. We don't currently support management group-level recommendations.
+
+The first recommendation value is the one that is projected to result in the highest percent savings. The other values allow you to see how increasing or decreasing your commitment could affect both your savings and compute coverage. When the commitment amount is increased, your savings could be reduced because you could end up with reduced utilization. In other words, you'd pay for an hourly commitment that isn't fully used. If you lower the commitment, your savings could also be reduced. Although you'll have increased utilization, there will likely be periods when your savings plan won't fully cover your use. Usage beyond your hourly commitment will be charged at the more expensive pay-as-you-go rates.
+
+## Reservation trade-in recommendations
+
+When you trade one or more reservations for a savings plan, you're shifting the balance of your previous commitments to a new savings plan commitment. For example, if you have a one-year reservation with a value of $500, and halfway through the term you trade it for a savings plan, you would still have an outstanding commitment of about $250.
+
+The minimum hourly commitment must be at least equal to the outstanding amount divided by (24 times the term length in days).
+
+As part of the trade-in, the outstanding commitment is automatically included in your new savings plan. We divide the outstanding commitment by the number of hours in the term of the new savings plan (24 \* term length in days), and that value becomes the minimum hourly commitment you can make as part of the trade-in. Using the previous example, the $250 amount would be converted into an hourly commitment of about $0.029 for a new one-year savings plan.
+
+If you're trading multiple reservations, the aggregate outstanding commitment is used. You may choose to increase the value, but you can't decrease it. The new savings plan will be used to cover usage of eligible resources.
+
+The minimum value doesn't necessarily represent the hourly commitment necessary to cover the resources that were covered by the exchanged reservation. If you want to cover those resources, you'll most likely have to increase the hourly commitment. To determine the appropriate hourly commitment:
+
+1. Download your price list.
+2. For each reservation order you're returning, find the product in the price sheet and determine its unit price under either a 1-year or 3-year savings plan (filter by term and price type).
+3. Multiply the rate by the number of instances that are being returned.
+4. Repeat for each reservation order to be returned.
+5. Sum the values and enter the total as the hourly commitment (see the sketch after this list).
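+
+The following sketch walks through both calculations with made-up numbers: the minimum hourly commitment derived from an outstanding reservation value, and the larger commitment needed to cover the returned instances based on price-sheet rates. The products and rates shown are illustrative, not actual prices.
+
+```python
+# Illustrative arithmetic only; the reservation value, products, and savings plan
+# rates below are made-up numbers.
+
+HOURS_PER_YEAR = 24 * 365
+
+# Minimum hourly commitment derived from the outstanding reservation commitment.
+outstanding_commitment = 250.00           # for example, half of a $500 one-year reservation
+term_years = 1                            # term of the new savings plan
+min_hourly = outstanding_commitment / (HOURS_PER_YEAR * term_years)
+print(f"Minimum hourly commitment: ${min_hourly:.3f}")        # ~ $0.029
+
+# Hourly commitment needed to cover the returned instances, from the price sheet.
+returned_orders = [
+    {"product": "example VM SKU A", "savings_plan_rate": 0.12, "instances": 4},
+    {"product": "example VM SKU B", "savings_plan_rate": 0.31, "instances": 2},
+]
+coverage = sum(o["savings_plan_rate"] * o["instances"] for o in returned_orders)
+print(f"Hourly commitment to cover returned instances: ${coverage:.2f}")
+```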
+
+## Recommendations in Azure Advisor
+
+When appropriate, a savings plan purchase recommendation can also be found in Azure Advisor. Keep in mind the following points:
+
+- The savings plan recommendations are for a single-subscription scope. If you want to see recommendations for the entire billing scope (billing account or billing profile), then:
+  - In the Azure portal, navigate to **Savings plans** > **Add** and then select the type that you want to see the recommendations for.
+- Recommendations available in Advisor consider your past 30-day usage trend.
+- The recommendation is for a three-year savings plan.
+- The recommendation calculations include any special discounts that you might have on your on-demand usage rates.
+- If you recently purchased a savings plan, Advisor reservation purchase and Azure savings plan recommendations can take up to five days to disappear.
+
+## Next steps
+
+- Learn about [How the Azure savings plan discount is applied](discount-application.md).
+- Learn about how to [trade in reservations](reservation-trade-in.md) for a savings plan.
cost-management-billing Renew Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/renew-savings-plan.md
+
+ Title: Automatically renew your Azure savings plan
+
+description: Learn how you can automatically renew an Azure savings plan to continue getting discounts.
+ Last updated : 10/12/2022
+# Automatically renew your Azure savings plan
+
+You can automatically purchase a replacement savings plan when an existing savings plan expires. Automatic renewal provides an effortless way to continue getting savings plan discounts without having to closely monitor a savings plan's expiration. The renewal setting is turned off by default. Enable or disable the renewal setting anytime, up to the expiration of the existing savings plan.
+
+Renewing a savings plan creates a new savings plan when the existing one expires. It doesn't extend the term of the existing savings plan.
+
+You can opt in to automatically renew at any time.
+
+There's no obligation to renew and you can opt out of the renewal at any time before the existing savings plan expires.
+
+## Required renewal permissions
+
+The following conditions are required to renew a savings plan:
+
+For Enterprise Agreements (EA) and Microsoft Customer Agreements (MCA):
+
+- MCA - You must be a billing profile contributor
+- EA - You must be an EA admin with write access
+
+For Microsoft Partner Agreements (MPA):
+
+- You must be an owner of the existing savings plan.
+- You must be an owner of the subscription if the savings plan is scoped to a single subscription or resource group.
+- You must be an owner of the subscription used to purchase the savings plan if it has a shared scope or management group scope.
+
+## Set up renewal
+
+1. In the Azure portal, search for **Savings plans** and select it.
+
+1. Select the savings plan.
+1. Select **Renewal**.
+1. Select **Automatically renew this savings plan**.
+
+## If you don't automatically renew
+
+Your services continue to run normally. You're charged pay-as-you-go rates for your usage after the savings plan expires. If the savings plan wasn't set for automatic renewal before expiration, you can't renew an expired savings plan. To continue to receive savings, you can buy a new savings plan.
+
+## Default renewal settings
+
+By default, the renewal inherits all properties except the automatic renewal setting from the expiring savings plan. A savings plan renewal purchase has the same billing subscription, term, billing frequency, and savings plan commitment.
+
+However, you can update the renewal commitment, billing frequency, and commitment term to optimize your savings.
+
+## When the new savings plan is purchased
+
+A new savings plan is purchased when the existing savings plan expires. We try to prevent any delay between the two savings plans. Continuity ensures that your costs are predictable, and you continue to get discounts.
+
+## Change parent savings plan after setting renewal
+
+If you make any of the following changes to the expiring savings plan, the savings plan renewal is canceled:
+
+- Transferring the savings plan from one account to another
+- Renewing the enrollment
+
+The new savings plan inherits the scope setting from the expiring savings plan during renewal.
+
+## New savings plan permissions
+
+Azure copies the permissions from the expiring savings plan to the new savings plan. Additionally, the subscription account administrator of the savings plan purchase has access to the new savings plan.
+
+## Potential renewal problems
+
+Azure may not process the renewal if:
+
+- Payment can't be collected
+- A system error occurs during renewal
+- The EA agreement is renewed into a different EA
+
+You'll receive an email notification if any of the preceding conditions occur, and the renewal is deactivated.
+
+## Renewal notification
+
+Renewal notification emails are sent 30 days before expiration and again on the expiration date. The sending email address is `azure-noreply@microsoft.com`. You might want to add the email address to your safe senders or allowlist.
+
+Emails are sent to different people depending on your purchase method:
+
+- EA customers - Emails are sent to the notification contacts set on the EA portal or Enterprise Administrators who are automatically enrolled to receive usage notifications.
+- MPA - No email notifications are currently sent for Microsoft Partner Agreement subscriptions.
+
+## Need help? Contact us.
+
+If you have Azure savings plan for compute questions, contact your account team or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft only provides expert support for Azure savings plan for compute in English.
+
+## Next steps
+
+- To learn more about Azure savings plans, see [What are Azure savings plans for compute?](savings-plan-compute-overview.md)
cost-management-billing Reservation Trade In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/reservation-trade-in.md
+
+ Title: Self-service trade-in for Azure savings plans
+
+description: Learn how you can trade in your reservations for an Azure savings plan.
+ Last updated : 10/12/2022
+# Self-service trade-in for Azure savings plans
+
+Azure savings plans provide flexibility to help meet your evolving needs by offering discounted rates for VMs, dedicated hosts, container instances, Azure premium functions, and Azure app services across all supported regions.
+
+If you find that your Azure VM, Dedicated Host, or Azure App Service reservations don't provide the flexibility you require, you can trade these reservations for a savings plan. When you trade in a reservation and purchase a savings plan, you can select a savings plan term of either one year or three years.
+
+Although you can return the above offerings for a savings plan, you can't exchange a savings plan for them or for another savings plan.
+
+The ability to exchange Azure VM reservations will retire in the future. For more information, see [Self-service exchanges and refunds for Azure Reservations](../reservations/exchange-and-refund-azure-reservations.md).
+
+The following reservations aren't eligible to be traded in for savings plans:
+
+- Azure Databricks reserved capacity
+- Synapse Analytics Pre-purchase plan
+- Azure VMware solution by CloudSimple
+- Azure Red Hat OpenShift
+- Red Hat plans
+- SUSE Linux plans
+
+> [!NOTE]
+> - You must have owner access on the Reservation Order to trade in an existing reservation. You can [Add or change users who can manage a savings plan](manage-savings-plan.md#who-can-manage-a-savings-plan).
+> - Microsoft isn't currently charging early termination fees for reservation trade-ins. We might charge fees in the future. We currently don't have a date for enabling the fee.
+
+## How to trade in an existing reservation
+
+You can trade in your reservations from the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/ReservationsBrowseBlade). When you trade in VM reservations for a savings plan, we cancel your reservations, issue a prorated refund for them, and cancel any future payments (for reservations that were billed monthly).
+
+1. Select the reservations that you want to trade in and select **Exchange**.
+ :::image type="content" source="./media/reservation-trade-in/exchange-refund-return.png" alt-text="Screenshot showing the Exchange window." lightbox="./media/reservation-trade-in/exchange-refund-return.png" :::
+1. For each reservation order selected, enter the quantity of reservation instances you want to return. The bottom of the window shows the amount that will be refunded. It also shows the value of future payments that will be canceled, if applicable.
+1. Select **Compute Savings Plan** as the product that you want to purchase.
+1. Enter a friendly name for the savings plan. Select the scope where the savings plan benefit will apply and select the term length. Scopes include shared, subscription, resource group, and management group.
+
+By default, the hourly commitment is derived from the remaining value of the reservations that are traded in. Although it's the minimum hourly commitment you may make, it might not be a large enough benefit commitment to cover the VMs that were previously covered by the reservations that you're returning.
+
+To determine the remaining commitment amount needed to cover your VMs:
+
+1. Download your price sheet. For more information, see [View and download your Azure pricing](../manage/ea-pricing.md).
+1. Search the price sheet for the 1-year or 3-year savings plan rate for VM products associated with the reservations that you're returning.
+1. For each reservation, multiply the savings plan rate by the quantity you want to return.
+1. Enter the total from the previous step as the hourly commitment, then select **Add** to add the savings plan to your cart.
+1. Review and complete the transaction.
+
+## How transactions are processed
+
+The new savings plan is purchased and then the traded-in reservations are canceled. If the reservations were paid for upfront, we refund a pro-rated amount for the reservations. If the reservations were paid monthly, we refund a pro-rated amount for the current month and cancel any future payments. Microsoft processes refunds using one of the following methods, depending on your account type and payment method.
+
+### Enterprise agreement customers
+
+Money is added to the Azure Prepayment (previously called monetary commitment) for refunds if the original purchase was made using one. If the Azure Prepayment used to purchase the reservation is no longer active, then credit is added to your current enterprise agreement Azure Prepayment term. The credit is valid for 90 days from the date of refund. Unused credit expires at the end of 90 days.
+
+If the original purchase was made as an overage, the original invoice on which the reservation was purchased and all later invoices are reopened and readjusted. Microsoft issues a credit memo for the refunds.
+
+### Microsoft Customer Agreement customers (credit card)
+
+The original invoice is canceled, and a new invoice is created. The money is refunded to the credit card that was used for the original purchase. If you've changed your card, [contact support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Cancel, exchange, and refund policies
+
+You can't cancel, exchange or refund a savings plan.
+
+## Need help? Contact us.
+
+If you have Azure savings plan for compute questions, contact your account team or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft only provides expert support for Azure savings plan for compute in English.
+
+## Next steps
+
+- To learn how to manage a savings plan, see [Manage Azure savings plan](manage-savings-plan.md).
+- To learn more about Azure savings plans, see the following articles:
+ - [What are Azure savings plans?](savings-plan-compute-overview.md)
+ - [How a savings plan discount is applied](discount-application.md)
+ - [View Azure savings plan cost and usage details](utilization-cost-reports.md)
+ - [Software costs not included in savings plan](software-costs-not-included.md)
cost-management-billing Savings Plan Compute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/savings-plan-compute-overview.md
+
+ Title: What are Azure savings plans for compute?
+
+description: Learn how Azure savings plans help you save money by committing to an hourly spend for a one-year or three-year term for Azure compute resources.
+ Last updated : 10/12/2022
+# What are Azure savings plans for compute?
+
+Azure savings plans save you money when you have consistent usage of Azure compute resources. An Azure savings plan helps you save money by allowing you to commit to a fixed hourly spend on compute services for one-year or three-year terms. A savings plan can significantly reduce your resource costs by up to 66% from pay-as-you-go prices. Discount rates per meter vary by commitment term (1-year or 3-year), not commitment amount.
+
+Each hour with a savings plan, your compute usage is discounted until you reach your commitment amount; usage beyond the commitment is priced at pay-as-you-go rates. Savings plan commitments are priced in USD for Microsoft Customer Agreement and Microsoft Partner Agreement customers, and in local currency for Enterprise Agreement customers. Usage from compute services such as VMs, dedicated hosts, container instances, Azure premium functions, and Azure app services is eligible for savings plan discounts.
+
+You can acquire a savings plan by making a new commitment, or you can trade in one or more active reservations for a savings plan. When you acquire a savings plan with a reservation trade-in, the reservation is canceled. The prorated residual value of the unused reservation benefit is converted to the equivalent hourly commitment for the savings plan. That commitment might not be sufficient for your needs; you can't reduce it, but you can increase it to cover your needs.
+
+After you purchase a savings plan, the discount automatically applies to matching resources. Savings plans provide a billing discount and don't affect the runtime state of your resources.
+
+You can pay for a savings plan up front or monthly. The total cost of up-front and monthly savings plans is the same, and you don't pay any extra fees when you choose to pay monthly.
+
+You can buy a savings plan in the Azure portal.
+
+## Why buy a savings plan?
+
+If you have consistent compute spend, buying a savings plan gives you the option to reduce your costs. For example, when you continuously run instances of a service without a savings plan, you're charged at pay-as-you-go rates. When you buy a savings plan, your compute usage is immediately eligible for the savings plan discount. Your discounted rates add up to the commitment amount. Usage covered by a savings plan receives discounted rates, not the pay-as-you-go rates.
+
+## How savings plan discount is applied
+
+Almost immediately after purchase, the savings plan benefit begins to apply without any other action required by you. Every hour, we apply the benefit to savings plan-eligible meters that are within the savings plan's scope. The benefit is applied to the meter with the greatest discount percentage first. The savings plan scope determines where the benefit applies.
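+
+The following sketch shows a simplified version of that per-hour ordering: eligible usage is discounted and draws down the commitment, starting with the meter that has the greatest discount percentage. The meter names, rates, and discount percentages are illustrative only; real benefit application is handled by Azure, not by your code.
+
+```python
+# Simplified illustration of greatest-discount-first benefit application for one hour.
+def apply_benefit(hourly_commitment: float, usage: list[tuple[str, float, float]]) -> None:
+    """usage: (meter name, on-demand cost, discount percentage) for in-scope usage in one hour."""
+    remaining = hourly_commitment
+    for meter, on_demand, discount in sorted(usage, key=lambda u: u[2], reverse=True):
+        discounted = on_demand * (1 - discount)
+        applied = min(discounted, remaining)              # draw down the hourly commitment
+        remaining -= applied
+        covered_share = applied / discounted if discounted else 0.0
+        pay_as_you_go = on_demand * (1 - covered_share)   # uncovered usage at retail rates
+        print(f"{meter}: ${applied:.2f} covered by the plan, ${pay_as_you_go:.2f} pay-as-you-go")
+    print(f"Unused benefit this hour: ${remaining:.2f}")
+
+apply_benefit(3.00, [("VM meter A", 2.00, 0.40), ("App Service meter B", 1.50, 0.25)])
+```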
+
+For more information about how discount is applied, see [Savings plan discount application](discount-application.md).
+
+For more information about how savings plan scope works, see [Scope savings plans](buy-savings-plan.md#scope-savings-plans).
+
+## Determine what to purchase
+
+Usage from compute services such as VMs, dedicated hosts, container instances, Azure premium functions, and Azure app services is eligible for savings plan benefits. Consider savings plan purchases based on your consistent compute usage. You can determine your optimal commitment by analyzing your usage data or by using the savings plan recommendation. Recommendations are available in:
+
+- Azure Advisor (VMs only)
+- Savings plan purchase experience in the Azure portal
+- Cost Management Power BI app
+- APIs
+
+For more information, see [Choose an Azure savings plan commitment amount](choose-commitment-amount.md).
+
+## Buying a savings plan
+
+You can purchase savings plans from the Azure portal. For more information, see [Buy a savings plan](buy-savings-plan.md).
+
+## How is a savings plan billed?
+
+The savings plan is charged to the payment method tied to the subscription. The savings plan cost is deducted from your Azure Prepayment (previously called monetary commitment) balance, if available. When your Azure Prepayment balance doesn't cover the cost of the savings plan, you're billed the overage. If you have a subscription from an individual plan with pay-as-you-go rates, the credit card on your account is billed immediately for up-front and for monthly purchases. Monthly payments that you've made appear on your invoice. When you're billed by invoice, you see the charges on your next invoice.
+
+## Who can manage a savings plan by default
+
+By default, the following users can view and manage savings plans:
+
+- The person who buys a savings plan, and the account administrator of the billing subscription used to buy the savings plan are added to the savings plan order.
+- Enterprise Agreement and Microsoft Customer Agreement billing administrators.
+
+To allow other people to manage savings plans, see [Manage savings plan resources](manage-savings-plan.md).
+
+## Get savings plan details and utilization after purchase
+
+With sufficient permissions, you can view the savings plan and usage in the Azure portal. You can get the data using APIs, as well.
+
+For more information about savings plan permissions in the Azure portal, see [Permissions to view and manage Azure savings plans](permission-view-manage.md).
+
+## Manage savings plan after purchase
+
+After you buy an Azure savings plan, you can update the scope to apply the savings plan to a different subscription and change who can manage the savings plan.
+
+For more information, see [Manage Azure savings plans](manage-savings-plan.md).
+
+## Cancellation and refund policy
+
+Savings plan purchases can't be canceled or refunded.
+
+## Charges covered by savings plan
+
+- **Virtual Machines** - A savings plan only covers the virtual machine and cloud services compute costs. It doesn't cover other software, Windows, networking, or storage charges.
+- **Azure Dedicated Host** - Only the compute costs are included with the Dedicated host.
+- **Container Instances**
+- **Azure Premium Functions**
+- **Azure App Services** - Not all App Services are eligible.
+
+Some exclusions apply to the above services.
+
+For Windows virtual machines and SQL Database, the savings plan discount doesn't apply to the software costs. You can cover the licensing costs with [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/).
+
+## Need help? Contact us.
+
+If you have Azure savings plan for compute questions, contact your account team or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft only provides expert support for Azure savings plan for compute in English.
+
+## Next steps
+
+- Learn [how discounts apply to savings plans](discount-application.md).
+- [Trade in reservations for a savings plan](reservation-trade-in.md).
+- [Buy a savings plan](buy-savings-plan.md).
cost-management-billing Software Costs Not Included https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/software-costs-not-included.md
+
+ Title: Software costs not included in Azure savings plans
+
+description: Learn how savings plan discounts apply only to Azure compute infrastructure costs and not to the software costs.
+ Last updated : 10/12/2022
+# Software costs not included in savings plans
+
+Savings plan discounts apply only to the infrastructure costs and not to the software costs. If you use Windows VMs with a savings plan and don't have Azure Hybrid Benefit for those VMs, you're charged for the software meters listed in the following sections.
+
+## Windows software meters not included in savings plan cost
+
+| **MeterId** | **MeterName in usage file** | **Used by VM** |
+| | | |
+| e7e152ac-f29c-4cce-ad6e-026192c01ef2 | Reservation-Windows Svr Burst (1 Core) | B Series |
+| cac255a2-9f0f-4c62-8bd6-f0fa449c5f76 | Reservation-Windows Svr Burst (2 Core) | B Series |
+| 09756b58-3fb5-4390-976d-9ddd14f9ed18 | Reservation-Windows Svr Burst (4 Core) | B Series |
+| e828cb37-5920-4dc7-b30f-664e4dbcb6c7 | Reservation-Windows Svr Burst (8 Core) | B Series |
+| f65a06cf-c9c3-47a2-8104-f17a8542215a | Reservation-Windows Svr (1 Core) | All except B Series |
+| b99d40ae-41fe-4d1d-842b-56d72f3d15ee | Reservation-Windows Svr (2 Core) | All except B Series |
+| 1cb88381-0905-4843-9ba2-7914066aabe5 | Reservation-Windows Svr (4 Core) | All except B Series |
+| 07d9e10d-3e3e-4672-ac30-87f58ec4b00a | Reservation-Windows Svr (6 Core) | All except B Series |
+| 603f58d1-1e96-460b-a933-ce3775ac7e2e | Reservation-Windows Svr (8 Core) | All except B Series |
+| 36aaadda-da86-484a-b465-c8b5ab292d71 | Reservation-Windows Svr (12 Core) | All except B Series |
+| 02968a6b-1654-4495-ada6-13f378ba7172 | Reservation-Windows Svr (16 Core) | All except B Series |
+| 175434d8-75f9-474b-9906-5d151b6bed84 | Reservation-Windows Svr (20 Core) | All except B Series |
+| 77eb6dd0-88f5-4a16-ab39-05d1742efb25 | Reservation-Windows Svr (24 Core) | All except B Series |
+| 0d5bdf46-b719-4b1f-a780-b9bdfffd0591 | Reservation-Windows Svr (32 Core) | All except B Series |
+| f1214b5c-cc16-445f-be6c-a3bb75f8395a | Reservation-Windows Svr (40 Core) | All except B Series |
+| 637b7c77-65ad-4486-9cc7-dc7b3e9a8731 | Reservation-Windows Svr (64 Core) | All except B Series |
+| da612742-e7cc-4ca3-9334-0fb7234059cd | Reservation-Windows Svr (72 Core) | All except B Series |
+| a485cb8c-069b-4cf3-9a8e-ddd84b323da2 | Reservation-Windows Svr (128 Core) | All except B Series |
+| 904c5c71-1eb7-43a6-961c-d305a9681624 | Reservation-Windows Svr (256 Core) | All except B Series |
+| 6fdab81b-4284-4df9-8939-c237cc7462fe | Reservation-Windows Svr (96 Core) | All except B Series |
+
+## Cloud services software meters not included in savings plan cost
+
+| **MeterId** | **MeterName in usage file** |
+| | |
+| ac9d47ff-ff68-4afc-a145-0c321cf8d0d5 | Cloud Services 1 vCPU License |
+| e0434559-19ee-4132-9c46-05ad4044f3f7 | Cloud Services 2 vCPU License |
+| 6ecc834e-39b3-48b3-8d10-cc5626bacb66 | Cloud Services 4 vCPU License |
+| 13103090-ca72-4825-ab12-7f16c4931d95 | Cloud Services 8 vCPU License |
+| ecd2bb6e-45a5-49aa-a58b-3947ba21c364 | Cloud Services 16 vCPU License |
+| de2c7f1d-06dc-4b16-bc8b-c2ec5f4c8aee | Cloud Services 20 vCPU License |
+| ca1af837-4b35-47f5-8d14-b1988149c4ca | Cloud Services 32 vCPU License |
+| dc72ee45-2ab7-4698-b435-e2cf10d1f9f6 | Cloud Services 64 vCPU License |
+| 7a803026-244c-4659-834c-11e6b2d6b76f | Cloud Services 80 vCPU License |
+
+## Get rates for Azure meters
+
+You can get the pay-as-you-go cost of each of the meters with the Azure Retail Prices API. For information on how to get the rates for an Azure meter, see [Azure Retail Prices overview](/rest/api/cost-management/retail-prices/azure-retail-prices).
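+
+As a minimal sketch, the following Python call looks up the current retail prices for one of the meter IDs listed above. It assumes the API's documented OData `$filter` support for `meterId`; no authentication is required.
+
+```python
+# Look up pay-as-you-go rates for a meter ID with the Azure Retail Prices API.
+import requests
+
+def get_retail_prices(meter_id: str) -> list[dict]:
+    url = "https://prices.azure.com/api/retail/prices"
+    params = {"$filter": f"meterId eq '{meter_id}'"}
+    items = []
+    while url:
+        response = requests.get(url, params=params)
+        response.raise_for_status()
+        payload = response.json()
+        items.extend(payload.get("Items", []))
+        url = payload.get("NextPageLink")    # follow paging when more results exist
+        params = None                        # the next-page link already includes the filter
+    return items
+
+# Example: the Reservation-Windows Svr (1 Core) meter from the table above.
+for item in get_retail_prices("f65a06cf-c9c3-47a2-8104-f17a8542215a"):
+    print(item["meterName"], item["unitPrice"], item["currencyCode"])
+```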
+
+## Need help? Contact us.
+
+If you have Azure savings plan for compute questions, contact your account team or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft only provides expert support for Azure savings plan for compute in English.
+
+## Next steps
+
+To learn more about Azure savings plans, see the following articles:
+
+- [What are Azure savings plans?](buy-savings-plan.md)
+- [Manage an Azure savings plan](manage-savings-plan.md)
+- [How saving plan discount is applied](discount-application.md)
+- [View Azure savings plan cost and usage details](utilization-cost-reports.md)
cost-management-billing Utilization Cost Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/utilization-cost-reports.md
+
+ Title: View Azure savings plan cost and usage
+
+description: Learn how to view savings plan cost and usage details.
+ Last updated : 10/12/2022
+# View Azure savings plan cost and usage details
+
+Enhanced data for savings plan costs and usage is available for Enterprise Agreement (EA) and Microsoft Customer Agreement (MCA) usage in Cost Management. This article helps you:
+
+- Get savings plan purchase data
+- Know which subscription, resource group or resource used the savings plan
+- Calculate savings plan savings
+- Get savings plan under-utilization data
+- Amortize savings plan costs
+
+## Savings plan charges in Azure cost data
+
+Fields in the Azure cost data that are relevant to savings plan scenarios are listed below.
+
+- `BenefitId` and `BenefitName` - They are their own fields in the data and correspond to the Savings Plan ID and Savings Plan name associated with your purchase.
+- `PricingModel` - This field will be "SavingsPlan" for purchase and usage cost records that are relevant to a Savings Plan.
+- `ProductOrderId` - The savings plan order ID, added as its own field.
+- `ProductOrderName` - The product name of the purchased savings plan.
+- `Term` - The time period associated with your savings plan purchase.
+
+In Azure Cost Management, cost details provide savings plan cost in two separate data sets: _Actual Cost_ and _Amortized Cost_. How these two datasets differ:
+
+**Actual Cost** - Provides data to reconcile with your monthly bill. The data has savings plan purchase costs and savings plan application details. With the data, you can know which subscription, resource group, or resource received the savings plan discount on a particular day. The `EffectivePrice` for the usage that receives the savings plan discount is zero.
+
+**Amortized Cost** - This dataset is similar to the Actual Cost dataset, except that the `EffectivePrice` for the usage that gets the savings plan discount is the prorated cost of the savings plan (instead of being zero). It helps you know the monetary value of savings plan consumption by a subscription, resource group, or resource, and it can help you charge back for the savings plan utilization internally. The dataset also includes the unused hours in the savings plan, which are charged at the hourly commitment amount. The dataset doesn't have savings plan purchase records.
+
+Here's a comparison of the two data sets:
+
+| **Data** | **Actual Cost data set** | **Amortized Cost data set** |
+| | | |
+| Savings plan purchases | Available in the view.<br><br>To get the data, filter on ChargeType = `Purchase`.<br><br>Refer to `BenefitID` or `BenefitName` to know which savings plan the charge is for. | Not applicable to the view.<br><br>Purchase costs aren't provided in amortized data. |
+| `EffectivePrice` | The value is zero for usage that gets savings plan discount. | The value is per-hour prorated cost of the savings plan for usage that has the savings plan discount. |
+| Unused benefit (provides the number of hours the savings plan wasn't used in a day and the monetary value of the waste) | Not applicable in the view. | Available in the view.<br><br>To get the data, filter on ChargeType = `UnusedBenefit`.<br><br>Refer to `BenefitID` or `BenefitName` to know which savings plan was underutilized. Indicates how much of the savings plan was wasted for the day. |
+| UnitPrice (price of the resource from your price sheet) | Available | Available |
+
+## Get Azure consumption and savings plan cost data using API
+
+You can get the data using the API or download it from the Azure portal. Call the [Cost Details API](/rest/api/cost-management/generate-cost-details-report/create-operation) to get the new data. For details about terminology, see [Usage terms](../understand/understand-usage.md). To learn more about how to call the Cost Details API, see [Get cost data on demand](../automate/get-small-usage-datasets-on-demand.md).
+
+The metric and filter information in the following table can help you solve common savings plan problems. A short sketch that applies these filters follows the table.
+
+| **Type of API data** | **API call action** |
+|||
+| **All Charges (usage and purchases)** | Request for an ActualCost report. |
+| **Usage that got savings plan discount** | Request for an ActualCost report.<br><br> Once you've ingested all of the usage, look for records with ChargeType = 'Usage' and PricingModel = 'SavingsPlan'. |
+| **Usage that didn't get savings plan discount** | Request for an ActualCost report.<br><br> Once you've ingested all of the usage, filter for usage records with PricingModel = 'OnDemand'. |
+| **Amortized charges (usage and purchases)** | Request for an AmortizedCost report. |
+| **Unused savings plan report** | Request for an AmortizedCost report.<br><br> Once you've ingested all of the usage, filter for usage records with ChargeType = 'UnusedBenefit' and PricingModel ='SavingsPlan'. |
+| **Savings plan purchases** | Request for an AmortizedCost report.<br><br> Once you've ingested all of the usage, filter for usage records with ChargeType = 'Purchase' and PricingModel = 'SavingsPlan'. |
+| **Refunds** | Request for an AmortizedCost report.<br><br> Once you've ingested all of the usage, filter for usage records with ChargeType = 'Refund'. |
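+
+As a minimal sketch, here's how the filters above could be applied with pandas to report files you've already ingested. The file names are placeholders; the column names follow the fields described earlier in this article.
+
+```python
+# Apply the filters from the table above to previously downloaded cost details files.
+import pandas as pd
+
+actual = pd.read_csv("actual-cost-details.csv")          # ActualCost report (placeholder name)
+amortized = pd.read_csv("amortized-cost-details.csv")    # AmortizedCost report (placeholder name)
+
+# Usage that received the savings plan discount.
+discounted_usage = actual[(actual["ChargeType"] == "Usage") &
+                          (actual["PricingModel"] == "SavingsPlan")]
+
+# Unused savings plan benefit and savings plan purchases.
+unused = amortized[(amortized["ChargeType"] == "UnusedBenefit") &
+                   (amortized["PricingModel"] == "SavingsPlan")]
+purchases = amortized[(amortized["ChargeType"] == "Purchase") &
+                      (amortized["PricingModel"] == "SavingsPlan")]
+
+print(unused.groupby("BenefitName")["Cost"].sum())       # unused benefit cost per savings plan
+```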
+
+## Download the cost CSV file with new data
+
+To download your savings plan cost and usage file, use the information in the following sections.
+
+### Download for EA customers
+
+If you're an EA admin, you can download the CSV file that contains new cost data from the Azure portal. This data isn't available from the EA portal (ea.azure.com); you must download the cost file from the Azure portal (portal.azure.com) to see the new data.
+
+In the Azure portal, navigate to [Cost Management + Billing](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/BillingAccounts).
+
+1. Select the enrollment.
+1. Select **Usage + charges**.
+1. Select **Download**.
+1. In **Download Usage + Charges**, under **Usage Details Version 2**, select **All Charges (usage and purchases)** and then select **Download**. Repeat for **Amortized charges (usage and purchases)**.
+
+### Download for MCA customers
+
+If you're an Owner, Contributor, or Reader on your Billing Account, you can download the CSV file that contains new usage data from the Azure portal. In the Azure portal, navigate to Cost Management + Billing.
+
+1. Select the billing account.
+2. Select **Invoices.**
+3. Download the Actual Cost CSV file based on your scenario.
+ 1. To download the usage for the current month, select **Download pending usage**.
+ 2. To download the usage for a previous invoice, select the ellipsis symbol (**...**) and select **Prepare Azure usage file**.
+4. If you want to download the Amortized Cost CSV file, you'll need to use Exports or our Cost Details API.
+ 1. To use Exports, see [Export data](../costs/tutorial-export-acm-data.md).
+ 2. To use the Cost Details API, see [Get small cost datasets on demand](../automate/get-small-usage-datasets-on-demand.md).
+
+## Common cost and usage tasks
+
+The following sections describe common tasks that most people use to view their savings plan cost and usage data.
+
+### Get savings plan purchase costs
+
+Savings plan purchase costs are available in Actual Cost data. Filter for ChargeType = `Purchase`. Refer to `ProductOrderID` to determine which savings plan order the purchase is for.
+
+### Get underutilized savings plan quantity and costs
+
+Get amortized cost data and filter for `ChargeType` = `UnusedBenefit` and `PricingModel` = `SavingsPlan`. You get the daily unused savings plan quantity and the cost. You can filter the data for a savings plan or savings plan order using `BenefitId` and `ProductOrderId` fields, respectively. If a savings plan was 100% utilized, the record has a quantity of 0.
+
+### Amortized savings plan costs
+
+Get amortized cost data and filter for a savings plan order using `ProductOrderID` to get daily amortized costs for a savings plan.
+
+### Chargeback for a savings plan
+
+You can charge back savings plan use to other organizations by subscription, resource group, or tags. Amortized cost data provides the monetary value of a savings plan's utilization at the following levels:
+
+- Resources (such as a VM)
+- Resource group
+- Tags
+- Subscription
+
+### Determine savings plan savings
+
+Get the Amortized costs data and filter the data for a savings plan instance. Then:
+
+1. Get estimated pay-as-you-go costs. Multiply the _UnitPrice_ value by the _Quantity_ value to get estimated pay-as-you-go costs, if the savings plan discount didn't apply to the usage.
+2. Get the savings plan costs. Sum the _Cost_ values to get the monetary value of what you paid for the savings plan. It includes the used and unused costs of the savings plan.
+3. Subtract savings plan costs from estimated pay-as-you-go costs to get the estimated savings.
+
+Keep in mind that if you have an underutilized savings plan, the _UnusedBenefit_ entry for _ChargeType_ becomes a factor to consider. When you have a fully utilized savings plan, you receive the maximum savings possible. Any _UnusedBenefit_ quantity reduces savings.
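+
+As a minimal sketch, the three steps above could look like the following with pandas, assuming you've already downloaded amortized cost data. The file name and benefit ID are placeholders; the column names are the ones referenced in the steps.
+
+```python
+# Estimate savings for one savings plan from amortized cost data.
+import pandas as pd
+
+amortized = pd.read_csv("amortized-cost-details.csv")              # placeholder file name
+plan = amortized[amortized["BenefitId"] == "<your-benefit-id>"]    # placeholder benefit ID
+
+estimated_pay_as_you_go = (plan["UnitPrice"] * plan["Quantity"]).sum()   # step 1
+savings_plan_cost = plan["Cost"].sum()           # step 2: used plus unused benefit cost
+estimated_savings = estimated_pay_as_you_go - savings_plan_cost          # step 3
+
+print(f"Estimated savings: {estimated_savings:.2f}")
+```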
+
+## Purchase and amortization costs in cost analysis
+
+Savings plan costs are available in [cost analysis](https://aka.ms/costanalysis). By default, cost analysis shows **Actual cost**, which is how costs are shown on your bill. To view savings plan purchases broken down and associated with the resources that used the benefit, switch to **Amortized cost**. Here's an example.
+Group by _Charge Type_ to see a breakdown of usage, purchases, and refunds, or by _Pricing Model_ for a breakdown of savings plan and on-demand costs. You can also group by _Benefit_ and use the _BenefitId_ and _BenefitName_ associated with your savings plan to identify the costs related to specific savings plan purchases. The only savings plan costs that you see when looking at actual cost are purchases, but costs are allocated to the individual resources that used the benefit when looking at amortized cost. You'll also see a new _**UnusedBenefit**_ charge type when looking at amortized cost.
+
+## Next steps
+
+- Learn more about how to [Charge back Azure savings plan costs](charge-back-costs.md).
cost-management-billing View Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/view-transactions.md
+
+ Title: View Azure savings plan purchase transactions
+
+description: Learn how to view savings plan purchase transactions and details.
+ Last updated : 10/12/2022
+# View Azure savings plan purchase transactions
+
+You can view savings plan purchase and refund transactions in the Azure portal.
+
+## View savings plan purchases in the Azure portal
+
+Enterprise Agreement and Microsoft Customer Agreement billing readers can view accumulated savings plan purchases in cost analysis.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Navigate to **Cost Management + Billing**.
+3. Select **Cost analysis** in the left menu.
+4. Apply a filter for **Pricing Model** and then select **SavingsPlan**.
+5. To view savings plan purchases, apply a filter for **Charge Type** and then select **purchase**.
+6. Set the **Granularity** to **Monthly**.
+7. Set the chart type to **Column (Stacked)**.
+ :::image type="content" source="./media/view-transactions/accumulated-costs-cost-analysis.png" alt-text="Screenshot showing accumulated cost in cost analysis." lightbox="./media/view-transactions/accumulated-costs-cost-analysis.png" :::
+
+## Need help? Contact us.
+
+If you have Azure savings plan for compute questions, contact your account team or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft only provides expert support for Azure savings plan for compute in English.
+
+## Next steps
+
+- To learn how to manage a savings plan, see [Manage Azure savings plans](manage-savings-plan.md).
cost-management-billing View Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/view-utilization.md
+
+ Title: View Azure savings plan utilization
+
+description: Learn how to view savings plan utilization in the Azure portal.
+ Last updated : 10/12/2022
+# View savings plan utilization after purchase
+
+You can view savings plan utilization percentage in the Azure portal.
+
+## View utilization in the Azure portal with Azure RBAC access
+
+To view savings plan utilization, you must have Azure RBAC access to the savings plan or you must have elevated access to manage all Azure subscriptions and management groups.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Search for **Savings plans** and then select it.
+3. The list shows all the savings plans where you have the Owner or Reader role. Each savings plan shows the last known utilization percentage for both the last day and the last seven days in the list view.
+4. Select the utilization percentage to see the utilization history.
+
+## View utilization as billing administrator
+
+An Enterprise Agreement (EA) administrator or a Microsoft Customer Agreement (MCA) billing administrator can view the utilization from **Cost Management + Billing**.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Go to **Cost Management + Billing** > **Savings plans**.
+3. Select the utilization percentage to see the utilization history.
+
+## Get savings plan utilization with the API
+
+You can get [savings plan utilization](https://go.microsoft.com/fwlink/?linkid=2209373) using the Benefit Utilization Summary API.
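+
+As an illustration only, a call to that API might be wired up as follows. The endpoint path is a placeholder, not the real route; take the request URL and API version from the API reference linked above. The token scope assumes the Azure Resource Manager endpoint.
+
+```python
+# Illustrative pattern for calling the utilization API with an Azure AD token.
+import requests
+from azure.identity import DefaultAzureCredential
+
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+
+# Placeholder path: replace with the request URL from the linked API reference.
+url = "https://management.azure.com/<path-from-the-api-reference>"
+response = requests.get(url, headers={"Authorization": f"Bearer {token}"})
+response.raise_for_status()
+
+for summary in response.json().get("value", []):
+    print(summary)
+```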
+
+## Next steps
+
+- [Manage Azure savings plan](manage-savings-plan.md)
+- [View Azure savings plan cost and usage details](utilization-cost-reports.md)
cost-management-billing Create Sql License Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/create-sql-license-assignments.md
The prerequisite roles differ depending on the agreement type.
> [!NOTE]
-> Centrally assigning licenses to scopes isn't available for Sponsored, MSDN Credit subscriptions or MPN subscriptions. SQL software usage is free for Dev/Test subscriptions (MS-AZR-0148P or MS-AZR-0023P offer types).
+> Centrally assigning licenses to scopes isn't available for Sponsored, MSDN Credit subscriptions or Microsoft Cloud Partner Program subscriptions. SQL software usage is free for Dev/Test subscriptions (MS-AZR-0148P or MS-AZR-0023P offer types).
## Create a SQL license assignment
In the following procedure, you navigate from **Cost Management + Billing** to *
1. Sign in to the Azure portal and navigate to **Cost Management + Billing**. :::image type="content" source="./media/create-sql-license-assignments/select-cost-management.png" alt-text="Screenshot showing Azure portal navigation to Cost Management + Billing." lightbox="./media/create-sql-license-assignments/select-cost-management.png" :::
-1. Use one of the following two steps, depending on you agreement type:
+1. Use one of the following two steps, depending on your agreement type:
- If you have an Enterprise Agreement, select a billing scope. :::image type="content" source="./media/create-sql-license-assignments/select-billing-scope.png" alt-text="Screenshot showing EA billing scope selection." lightbox="./media/create-sql-license-assignments/select-billing-scope.png" ::: - If you have a Microsoft Customer Agreement, select a billing profile.
data-catalog Data Catalog Adopting Data Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-adopting-data-catalog.md
Last updated 02/17/2022
# Approach and process for adopting Azure Data Catalog This article helps you get started adopting **Azure Data Catalog** in your organization. To successfully adopt **Azure Data Catalog**, focus on three key items: define your vision, identify key business use cases within your organization, and choose a pilot project.
data-catalog Data Catalog Common Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-common-scenarios.md
Last updated 02/22/2022
# Azure Data Catalog common scenarios This article presents common scenarios where Azure Data Catalog can help your organization get more value from its existing data sources.
data-catalog Data Catalog Developer Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-developer-concepts.md
Last updated 02/16/2022
# Azure Data Catalog developer concepts Microsoft **Azure Data Catalog** is a fully managed cloud service that provides capabilities for data source discovery and for crowdsourcing data source metadata. Developers can use the service via its REST APIs. Understanding the concepts implemented in the service is important for developers to successfully integrate with **Azure Data Catalog**.
data-catalog Data Catalog Dsr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-dsr.md
Title: Supported data sources in Azure Data Catalog description: This article lists specifications of the currently supported data sources for Azure Data Catalog. + Last updated 02/24/2022 # Supported data sources in Azure Data Catalog You can publish metadata by using a public API or a click-once registration tool, or by manually entering information directly to the Azure Data Catalog web portal. The following table summarizes all data sources that are supported by the catalog today, and the publishing capabilities for each. Also listed are the external data tools that each data source can launch from our portal "open-in" experience. The second table contains a more technical specification of each data-source connection property.
You can publish metadata by using a public API or a click-once registration tool
<td>Γ£ô</td> <td>Γ£ô</td> <td></td>
- <td>Only legacy collections from Azure DocumentDB and SQL API collections in Azure Cosmos DB are compatible. Newer Cosmos DB APIs aren't yet supported. Choose Azure DocumentDB in the Data Source list.</td>
+ <td>Only legacy collections from Azure DocumentDB and Azure Cosmos DB for NoSQL are compatible. Newer Azure Cosmos DB APIs aren't yet supported. Choose Azure DocumentDB in the Data Source list.</td>
</tr> <tr> <td>Generic ODBC table</td>
data-catalog Data Catalog Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-get-started.md
# Quickstart: Create an Azure Data Catalog via the Azure portal Azure Data Catalog is a fully managed cloud service that serves as a system of registration and system of discovery for enterprise data assets. For a detailed overview, see [What is Azure Data Catalog](overview.md).
data-catalog Data Catalog How To Annotate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-annotate.md
Last updated 02/18/2022
# How to annotate data sources in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Big Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-big-data.md
Last updated 02/14/2022
# How to catalog big data in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Business Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-business-glossary.md
Last updated 02/23/2022
# Set up the business glossary for governed tagging ## Introduction
data-catalog Data Catalog How To Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-connect.md
Last updated 02/22/2022
# How to connect to data sources ## Introduction
data-catalog Data Catalog How To Data Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-data-profile.md
Last updated 02/18/2022
# How to data profile data sources in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Discover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-discover.md
Last updated 02/24/2022
# How to discover data sources in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-documentation.md
Last updated 02/17/2022
# How to document data sources in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-manage.md
Last updated 02/15/2022
# Manage data assets in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-register.md
Last updated 02/25/2022
# Register data sources in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Save Pin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-save-pin.md
Last updated 02/10/2022
# Save searches and pin data assets in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Secure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-secure-catalog.md
Last updated 02/14/2022
# How to secure access to data catalog and data assets > [!IMPORTANT] > This feature is available only in the standard edition of Azure Data Catalog.
data-catalog Data Catalog How To View Related Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-view-related-data-assets.md
Last updated 02/11/2022
# How to view related data assets in Azure Data Catalog Azure Data Catalog allows you to view data assets that are related to a selected data asset, and see the relationships between them.
data-catalog Data Catalog Keyboard Shortcuts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-keyboard-shortcuts.md
Last updated 02/11/2022
# Keyboard shortcuts for Azure Data Catalog ## Keyboard shortcuts for the Data Catalog data source registration tool
data-catalog Data Catalog Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-samples.md
Last updated 02/16/2022
# Azure Data Catalog developer samples Get started developing Azure Data Catalog apps using the Data Catalog REST API. The Data Catalog REST API is a REST-based API that provides programmatic access to Data Catalog resources to register, annotate, and search data assets programmatically.
data-catalog Data Catalog Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-terminology.md
Last updated 02/15/2022
# Azure Data Catalog terminology This article provides an introduction to concepts and terms used in Azure Data Catalog documentation.
data-catalog Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/overview.md
Last updated 02/24/2022
# What is Azure Data Catalog? Azure Data Catalog is a fully managed cloud service that lets users discover the data sources they need and understand the data sources they find. At the same time, Data Catalog helps organizations get more value from their existing investments.
data-catalog Register Data Assets Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/register-data-assets-tutorial.md
Last updated 02/24/2022
# Tutorial: Register data assets in Azure Data Catalog In this tutorial, you use the registration tool to register data assets from the database sample with the catalog. Registration is the process of extracting key structural metadata such as names, types, and locations from the data source and the assets it contains, and copying that metadata to the catalog. The data source and data assets remain where they are, but the metadata is used by the catalog to make them more easily discoverable and understandable.
data-catalog Troubleshoot Policy Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/troubleshoot-policy-configuration.md
Last updated 02/10/2022
# Troubleshooting Azure Data Catalog This article describes common troubleshooting concerns for Azure Data Catalog resources.
data-factory Concepts Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture.md
+
+ Title: Change Data Capture
+
+description: Learn about change data capture in Azure Data Factory and Azure Synapse Analytics.
+++++++ Last updated : 07/26/2022++
+# Change data capture in Azure Data Factory and Azure Synapse Analytics
++
+This article describes change data capture (CDC) in Azure Data Factory.
+
+To learn more, see [Azure Data Factory overview](introduction.md) or [Azure Synapse overview](../synapse-analytics/overview-what-is.md).
+
+## Overview
+
+When you perform data integration and ETL processes in the cloud, your jobs can perform much better and run more efficiently when they read only the source data that has changed since the last pipeline run, rather than querying the entire dataset on each run. Many of ADF's source connectors let you read only the latest changed data by enabling a checkbox property in the source transformation. Several ADF connectors also support full-fidelity CDC, which includes row markers for inserts, updates, and deletes, as well as rules for resetting the ADF-managed checkpoint. To capture changes and deltas with user-controlled checkpoints, ADF also supports patterns and templates for managing incremental pipelines, which you'll find in the table below.
+
+## CDC Connector support
+
+| Connector | Full CDC | Incremental CDC | Incremental pipeline pattern |
+| :-- | :-: | :-: | :-: |
+| [ADLS Gen1](load-azure-data-lake-store.md) | &nbsp; | ✓ | &nbsp; |
+| [ADLS Gen2](load-azure-data-lake-storage-gen2.md) | &nbsp; | ✓ | &nbsp; |
+| [Azure Blob Storage](connector-azure-blob-storage.md) | &nbsp; | ✓ | &nbsp; |
+| [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md) | &nbsp; | ✓ | &nbsp; |
+| [Azure Database for MySQL](connector-azure-database-for-mysql.md) | &nbsp; | ✓ | &nbsp; |
+| [Azure Database for PostgreSQL](connector-azure-database-for-postgresql.md) | &nbsp; | ✓ | &nbsp; |
+| [Azure SQL Database](connector-azure-sql-database.md) | ✓ | ✓ | [✓](tutorial-incremental-copy-portal.md) |
+| [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md) | ✓ | ✓ | [✓](tutorial-incremental-copy-change-data-capture-feature-portal.md) |
+| [Azure SQL Server](connector-sql-server.md) | ✓ | ✓ | [✓](tutorial-incremental-copy-multiple-tables-portal.md) |
+| [Common data model](format-common-data-model.md) | &nbsp; | ✓ | &nbsp; |
+| [SAP CDC](connector-sap-change-data-capture.md) | ✓ | ✓ | ✓ |
++
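For the incremental pipeline pattern column above, the usual user-controlled checkpoint is a high-water-mark query in the copy activity source. The following is a minimal, hedged sketch only; the table, column, and Lookup activity names are illustrative, and the watermark values are assumed to come from earlier Lookup activities in the same pipeline:

```json
{
    "name": "IncrementalCopy",
    "type": "Copy",
    "typeProperties": {
        "source": {
            "type": "AzureSqlSource",
            "sqlReaderQuery": "SELECT * FROM data_source_table WHERE LastModifyTime > '@{activity('LookupOldWatermark').output.firstRow.WatermarkValue}' AND LastModifyTime <= '@{activity('LookupNewWatermark').output.firstRow.NewWatermarkValue}'"
        },
        "sink": {
            "type": "DelimitedTextSink"
        }
    }
}
```

After each successful run, the pipeline would update the stored watermark so that the next run reads only the rows changed since this one.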
+ADF makes it simple to enable and use CDC. Many of the connectors listed above surface a checkbox, similar to the one shown below, in the data flow source transformation.
++
+The "Full CDC" and "Incremental CDC" features are available in both ADF and Synapse data flows and pipelines. In each of those options, ADF manages the checkpoint automatically for you. You can turn on the change data capture feature in the data flow source and you can also reset the checkpoint in the data flow activity. To reset the checkpoint for your CDC pipeline, go into the data flow activity in your pipeline and override the checkpoint key. Connectors in ADF that support "full CDC" also provide automatic tagging of rows as update, insert, delete.
+
+## Next steps
+
+- [Learn how to use the checkpoint key in the data flow activity](control-flow-execute-data-flow-activity.md).
data-factory Connector Azure Cosmos Db Mongodb Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db-mongodb-api.md
Title: Copy data from Azure Cosmos DB's API for MongoDB
-description: Learn how to copy data from supported source data stores to or from Azure Cosmos DB's API for MongoDB to supported sink stores using Azure Data Factory or Synapse Analytics pipelines.
+ Title: Copy data from Azure Cosmos DB for MongoDB
+description: Learn how to copy data from supported source data stores to or from Azure Cosmos DB for MongoDB to supported sink stores using Azure Data Factory or Synapse Analytics pipelines.
-+ Last updated 07/04/2022
-# Copy data to or from Azure Cosmos DB's API for MongoDB using Azure Data Factory or Synapse Analytics
+# Copy data to or from Azure Cosmos DB for MongoDB using Azure Data Factory or Synapse Analytics
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use Copy Activity in Azure Data Factory and Synapse Analytics pipelines to copy data from and to Azure Cosmos DB's API for MongoDB. The article builds on [Copy Activity](copy-activity-overview.md), which presents a general overview of Copy Activity.
+This article outlines how to use Copy Activity in Azure Data Factory and Synapse Analytics pipelines to copy data from and to Azure Cosmos DB for MongoDB. The article builds on [Copy Activity](copy-activity-overview.md), which presents a general overview of Copy Activity.
>[!NOTE]
->This connector only supports copy data to/from Azure Cosmos DB's API for MongoDB. For SQL API, refer to [Cosmos DB SQL API connector](connector-azure-cosmos-db.md). Other API types are not supported now.
+>This connector only supports copy data to/from Azure Cosmos DB for MongoDB. For Azure Cosmos DB for NoSQL, refer to the [Azure Cosmos DB for NoSQL connector](connector-azure-cosmos-db.md). Other API types are not currently supported.
## Supported capabilities
-This Azure Cosmos DB's API for MongoDB connector is supported for the following capabilities:
+This Azure Cosmos DB for MongoDB connector is supported for the following capabilities:
| Supported capabilities|IR | Managed private endpoint| || --| --|
This Azure Cosmos DB's API for MongoDB connector is supported for the following
<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
-You can copy data from Azure Cosmos DB's API for MongoDB to any supported sink data store, or copy data from any supported source data store to Azure Cosmos DB's API for MongoDB. For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
+You can copy data from Azure Cosmos DB for MongoDB to any supported sink data store, or copy data from any supported source data store to Azure Cosmos DB for MongoDB. For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
-You can use the Azure Cosmos DB's API for MongoDB connector to:
+You can use the Azure Cosmos DB for MongoDB connector to:
-- Copy data from and to the [Azure Cosmos DB's API for MongoDB](../cosmos-db/mongodb-introduction.md).
+- Copy data from and to the [Azure Cosmos DB for MongoDB](../cosmos-db/mongodb-introduction.md).
- Write to Azure Cosmos DB as **insert** or **upsert**. - Import and export JSON documents as-is, or copy data from or to a tabular dataset. Examples include a SQL database and a CSV file. To copy documents as-is to or from JSON files or to or from another Azure Cosmos DB collection, see Import or export JSON documents.
You can use the Azure Cosmos DB's API for MongoDB connector to:
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
-## Create a linked service to Azure Cosmos DB's API for MongoDB using UI
+## Create a linked service to Azure Cosmos DB for MongoDB using UI
-Use the following steps to create a linked service to Azure Cosmos DB's API for MongoDB in the Azure portal UI.
+Use the following steps to create a linked service to Azure Cosmos DB for MongoDB in the Azure portal UI.
1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
Use the following steps to create a linked service to Azure Cosmos DB's API for
:::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
-2. Search for Azure Cosmos DB (MongoDB API) and select the Azure Cosmos DB's API for MongoDB connector.
+2. Search for *Azure Cosmos DB for MongoDB* and select that connector.
- :::image type="content" source="media/connector-azure-cosmos-db-mongodb-api/azure-cosmos-db-mongodb-api-connector.png" alt-text="Select the Azure Cosmos DB's API for MongoDB connector.":::
+ :::image type="content" source="media/connector-azure-cosmos-db-mongodb-api/azure-cosmos-db-mongodb-api-connector.png" alt-text="Select the Azure Cosmos DB for MongoDB connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-azure-cosmos-db-mongodb-api/configure-azure-cosmos-db-mongodb-api-linked-service.png" alt-text="Configure a linked service to Azure Cosmos DB's API for MongoDB.":::
+ :::image type="content" source="media/connector-azure-cosmos-db-mongodb-api/configure-azure-cosmos-db-mongodb-api-linked-service.png" alt-text="Configure a linked service to Azure Cosmos DB for MongoDB.":::
## Connector configuration details
-The following sections provide details about properties you can use to define Data Factory entities that are specific to Azure Cosmos DB's API for MongoDB.
+The following sections provide details about properties you can use to define Data Factory entities that are specific to Azure Cosmos DB for MongoDB.
## Linked service properties
-The following properties are supported for the Azure Cosmos DB's API for MongoDB linked service:
+The following properties are supported for the Azure Cosmos DB for MongoDB linked service:
| Property | Description | Required | |: |: |: | | type | The **type** property must be set to **CosmosDbMongoDbApi**. | Yes |
-| connectionString |Specify the connection string for your Azure Cosmos DB's API for MongoDB. You can find it in the Azure portal -> your Cosmos DB blade -> primary or secondary connection string. <br/>For 3.2 server version, the string pattern is `mongodb://<cosmosdb-name>:<password>@<cosmosdb-name>.documents.azure.com:10255/?ssl=true&replicaSet=globaldb`. <br/>For 3.6+ server versions, the string pattern is `mongodb://<cosmosdb-name>:<password>@<cosmosdb-name>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<cosmosdb-name>@`.<br/><br />You can also put a password in Azure Key Vault and pull the `password` configuration out of the connection string. Refer to [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) with more details.|Yes |
+| connectionString |Specify the connection string for your Azure Cosmos DB for MongoDB. You can find it in the Azure portal -> your Azure Cosmos DB blade -> primary or secondary connection string. <br/>For the 3.2 server version, the string pattern is `mongodb://<cosmosdb-name>:<password>@<cosmosdb-name>.documents.azure.com:10255/?ssl=true&replicaSet=globaldb`. <br/>For 3.6+ server versions, the string pattern is `mongodb://<cosmosdb-name>:<password>@<cosmosdb-name>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<cosmosdb-name>@`.<br/><br />You can also put a password in Azure Key Vault and pull the `password` configuration out of the connection string. Refer to [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) for more details.|Yes |
| database | Name of the database that you want to access. | Yes | | isServerVersionAbove32 | Specify whether the server version is above 3.2. Allowed values are **true** and **false**(default). This will determine the driver to use in the service. | Yes | | connectVia | The [Integration Runtime](concepts-integration-runtime.md) to use to connect to the data store. You can use the Azure Integration Runtime or a self-hosted integration runtime (if your data store is located in a private network). If this property isn't specified, the default Azure Integration Runtime is used. |No |
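Putting these properties together, a minimal linked service definition might look like the following sketch (all names and the connection string are placeholders):

```json
{
    "name": "CosmosDbMongoDbApiLinkedService",
    "properties": {
        "type": "CosmosDbMongoDbApi",
        "typeProperties": {
            "connectionString": "mongodb://<cosmosdb-name>:<password>@<cosmosdb-name>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<cosmosdb-name>@",
            "database": "<database-name>",
            "isServerVersionAbove32": true
        },
        "connectVia": {
            "referenceName": "<name of integration runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```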
The following properties are supported for the Azure Cosmos DB's API for MongoDB
## Dataset properties
-For a full list of sections and properties that are available for defining datasets, see [Datasets and linked services](concepts-datasets-linked-services.md). The following properties are supported for Azure Cosmos DB's API for MongoDB dataset:
+For a full list of sections and properties that are available for defining datasets, see [Datasets and linked services](concepts-datasets-linked-services.md). The following properties are supported for the Azure Cosmos DB for MongoDB dataset:
| Property | Description | Required | |: |: |: |
For a full list of sections and properties that are available for defining datas
}, "schema": [], "linkedServiceName":{
- "referenceName": "<Azure Cosmos DB's API for MongoDB linked service name>",
+ "referenceName": "<Azure Cosmos DB for MongoDB linked service name>",
"type": "LinkedServiceReference" } }
For a full list of sections and properties that are available for defining datas
## Copy Activity properties
-This section provides a list of properties that the Azure Cosmos DB's API for MongoDB source and sink support.
+This section provides a list of properties that the Azure Cosmos DB for MongoDB source and sink support.
For a full list of sections and properties that are available for defining activities, see [Pipelines](concepts-pipelines-activities.md).
-### Azure Cosmos DB's API for MongoDB as source
+### Azure Cosmos DB for MongoDB as source
The following properties are supported in the Copy Activity **source** section:
The following properties are supported in the Copy Activity **source** section:
| cursorMethods.sort | Specifies the order in which the query returns matching documents. Refer to [cursor.sort()](https://docs.mongodb.com/manual/reference/method/cursor.sort/#cursor.sort). | No | | cursorMethods.limit | Specifies the maximum number of documents the server returns. Refer to [cursor.limit()](https://docs.mongodb.com/manual/reference/method/cursor.limit/#cursor.limit). | No | | cursorMethods.skip | Specifies the number of documents to skip and from where MongoDB begins to return results. Refer to [cursor.skip()](https://docs.mongodb.com/manual/reference/method/cursor.skip/#cursor.skip). | No |
-| batchSize | Specifies the number of documents to return in each batch of the response from MongoDB instance. In most cases, modifying the batch size will not affect the user or the application. Cosmos DB limits each batch cannot exceed 40MB in size, which is the sum of the batchSize number of documents' size, so decrease this value if your document size being large. | No<br/>(the default is **100**) |
+| batchSize | Specifies the number of documents to return in each batch of the response from the MongoDB instance. In most cases, modifying the batch size won't affect the user or the application. Azure Cosmos DB limits each batch to a maximum of 40 MB (the cumulative size of the batchSize documents), so decrease this value if your documents are large. | No<br/>(the default is **100**) |
>[!TIP]
->ADF support consuming BSON document in **Strict mode**. Make sure your filter query is in Strict mode instead of Shell mode. More description can be found at [MongoDB manual](https://docs.mongodb.com/manual/reference/mongodb-extended-json/https://docsupdatetracker.net/index.html).
+>ADF supports consuming BSON documents in **Strict mode**. Make sure your filter query is in Strict mode instead of Shell mode. More details can be found in the [MongoDB manual](https://docs.mongodb.com/manual/reference/mongodb-extended-json/index.html).
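For example, a filter on a date field written in Strict mode uses the extended JSON form rather than the Shell mode `ISODate(...)` helper (the field name here is illustrative):

```json
{ "LastModifiedDate": { "$gt": { "$date": "2019-01-01T00:00:00.000Z" } } }
```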
**Example**
The following properties are supported in the Copy Activity **source** section:
"type": "Copy", "inputs": [ {
- "referenceName": "<Azure Cosmos DB's API for MongoDB input dataset name>",
+ "referenceName": "<Azure Cosmos DB for MongoDB input dataset name>",
"type": "DatasetReference" } ],
The following properties are supported in the Copy Activity **source** section:
] ```
-### Azure Cosmos DB's API for MongoDB as sink
+### Azure Cosmos DB for MongoDB as sink
The following properties are supported in the Copy Activity **sink** section:
You can use this Azure Cosmos DB connector to easily:
To achieve schema-agnostic copy:
-* When you use the Copy Data tool, select the **Export as-is to JSON files or Cosmos DB collection** option.
+* When you use the Copy Data tool, select the **Export as-is to JSON files or Azure Cosmos DB collection** option.
* When you use activity authoring, choose JSON format with the corresponding file store for source or sink. ## Schema mapping
-To copy data from Azure Cosmos DB's API for MongoDB to tabular sink or reversed, refer to [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping).
+To copy data from Azure Cosmos DB for MongoDB to a tabular sink, or the reverse, refer to [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping).
-Specifically for writing into Cosmos DB, to make sure you populate Cosmos DB with the right object ID from your source data, for example, you have an "id" column in SQL database table and want to use the value of that as the document ID in MongoDB for insert/upsert, you need to set the proper schema mapping according to MongoDB strict mode definition (`_id.$oid`) as the following:
+Specifically, when writing into Azure Cosmos DB, make sure you populate Azure Cosmos DB with the right object ID from your source data. For example, if you have an "id" column in a SQL database table and want to use its value as the document ID in MongoDB for insert/upsert, set the proper schema mapping according to the MongoDB strict mode definition (`_id.$oid`), as follows:
:::image type="content" source="./media/connector-azure-cosmos-db-mongodb-api/map-id-in-mongodb-sink.png" alt-text="Map ID in MongoDB sink":::
data-factory Connector Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db.md
Title: Copy and transform data in Azure Cosmos DB (SQL API)
+ Title: Copy and transform data in Azure Cosmos DB for NoSQL
-description: Learn how to copy data to and from Azure Cosmos DB (SQL API), and transform data in Azure Cosmos DB (SQL API) using Azure Data Factory and Azure Synapse Analytics.
+description: Learn how to copy data to and from Azure Cosmos DB for NoSQL, and transform data in Azure Cosmos DB for NoSQL using Azure Data Factory and Azure Synapse Analytics.
-+ Last updated 07/04/2022
-# Copy and transform data in Azure Cosmos DB (SQL API) by using Azure Data Factory
+# Copy and transform data in Azure Cosmos DB for NoSQL by using Azure Data Factory
> [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"] > * [Version 1](v1/data-factory-azure-documentdb-connector.md)
Last updated 07/04/2022
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use Copy Activity in Azure Data Factory to copy data from and to Azure Cosmos DB (SQL API), and use Data Flow to transform data in Azure Cosmos DB (SQL API). To learn more, read the introductory articles for [Azure Data Factory](introduction.md) and [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+This article outlines how to use Copy Activity in Azure Data Factory to copy data from and to Azure Cosmos DB for NoSQL, and use Data Flow to transform data in Azure Cosmos DB for NoSQL. To learn more, read the introductory articles for [Azure Data Factory](introduction.md) and [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
>[!NOTE]
->This connector only support Cosmos DB SQL API. For MongoDB API, refer to [connector for Azure Cosmos DB's API for MongoDB](connector-azure-cosmos-db-mongodb-api.md). Other API types are not supported now.
+>This connector only supports Azure Cosmos DB for NoSQL. For Azure Cosmos DB for MongoDB, refer to the [connector for Azure Cosmos DB for MongoDB](connector-azure-cosmos-db-mongodb-api.md). Other API types are not currently supported.
## Supported capabilities
-This Azure Cosmos DB (SQL API) connector is supported for the following capabilities:
+This Azure Cosmos DB for NoSQL connector is supported for the following capabilities:
| Supported capabilities|IR | Managed private endpoint| || --| --|
This Azure Cosmos DB (SQL API) connector is supported for the following capabili
<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
-For Copy activity, this Azure Cosmos DB (SQL API) connector supports:
+For Copy activity, this Azure Cosmos DB for NoSQL connector supports:
-- Copy data from and to the Azure Cosmos DB [SQL API](../cosmos-db/introduction.md) using key, service principal, or managed identities for Azure resources authentications.
+- Copy data from and to [Azure Cosmos DB for NoSQL](../cosmos-db/introduction.md) by using key, service principal, or managed identities for Azure resources authentication.
- Write to Azure Cosmos DB as **insert** or **upsert**. - Import and export JSON documents as-is, or copy data from or to a tabular dataset. Examples include a SQL database and a CSV file. To copy documents as-is to or from JSON files or to or from another Azure Cosmos DB collection, see [Import and export JSON documents](#import-and-export-json-documents).
Use the following steps to create a linked service to Azure Cosmos DB in the Azu
:::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
-2. Search for Azure Cosmos DB (SQL API) and select the Azure Cosmos DB (SQL API) connector.
+2. Search for Azure Cosmos DB for NoSQL and select the Azure Cosmos DB for NoSQL connector.
- :::image type="content" source="media/connector-azure-cosmos-db/azure-cosmos-db-connector.png" alt-text="Select Azure Cosmos DB (SQL API) connector.":::
+ :::image type="content" source="media/connector-azure-cosmos-db/azure-cosmos-db-connector.png" alt-text="Select Azure Cosmos DB for NoSQL connector.":::
1. Configure the service details, test the connection, and create the new linked service.
Use the following steps to create a linked service to Azure Cosmos DB in the Azu
## Connector configuration details
-The following sections provide details about properties you can use to define entities that are specific to Azure Cosmos DB (SQL API).
+The following sections provide details about properties you can use to define entities that are specific to Azure Cosmos DB for NoSQL.
## Linked service properties
-The Azure Cosmos DB (SQL API) connector supports the following authentication types. See the corresponding sections for details:
+The Azure Cosmos DB for NoSQL connector supports the following authentication types. See the corresponding sections for details:
- [Key authentication](#key-authentication) - [Service principal authentication](#service-principal-authentication)
To use service principal authentication, follow these steps.
- Application key - Tenant ID
-2. Grant the service principal proper permission. See examples on how permission works in Cosmos DB from [Access control lists on files and directories](../cosmos-db/how-to-setup-rbac.md). More specifically, create a role definition, and assign the role to the service principle via service principle object ID.
+2. Grant the service principal proper permissions. For examples of how permissions work in Azure Cosmos DB, see [Access control lists on files and directories](../cosmos-db/how-to-setup-rbac.md). More specifically, create a role definition, and assign the role to the service principal via the service principal object ID.
These properties are supported for the linked service: | Property | Description | Required | |: |: |: | | type | The type property must be set to **CosmosDb**. |Yes |
-| accountEndpoint | Specify the account endpoint URL for the Azure Cosmos DB. | Yes |
+| accountEndpoint | Specify the account endpoint URL for the Azure Cosmos DB instance. | Yes |
| database | Specify the name of the database. | Yes | | servicePrincipalId | Specify the application's client ID. | Yes | | servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes |
You can also store service principal key in Azure Key Vault.
>[!NOTE] >Currently, the system-assigned managed identity authentication is not supported in data flow.
-A data factory or Synapse pipeline can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity), which represents this specific service instance. You can directly use this managed identity for Cosmos DB authentication, similar to using your own service principal. It allows this designated resource to access and copy data to or from your Cosmos DB.
+A data factory or Synapse pipeline can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity), which represents this specific service instance. You can directly use this managed identity for Azure Cosmos DB authentication, similar to using your own service principal. It allows this designated resource to access and copy data to or from your Azure Cosmos DB instance.
To use system-assigned managed identities for Azure resource authentication, follow these steps. 1. [Retrieve the system-assigned managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the **managed identity object ID** generated along with your service.
-2. Grant the system-assigned managed identity proper permission. See examples on how permission works in Cosmos DB from [Access control lists on files and directories](../cosmos-db/how-to-setup-rbac.md). More specifically, create a role definition, and assign the role to the system-assigned managed identity.
+2. Grant the system-assigned managed identity proper permissions. For examples of how permissions work in Azure Cosmos DB, see [Access control lists on files and directories](../cosmos-db/how-to-setup-rbac.md). More specifically, create a role definition, and assign the role to the system-assigned managed identity.
These properties are supported for the linked service: | Property | Description | Required | |: |: |: | | type | The type property must be set to **CosmosDb**. |Yes |
-| accountEndpoint | Specify the account endpoint URL for the Azure Cosmos DB. | Yes |
+| accountEndpoint | Specify the account endpoint URL for the Azure Cosmos DB instance. | Yes |
| database | Specify the name of the database. | Yes | | connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |
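With system-assigned managed identity authentication, a minimal linked service definition reduces to the account endpoint and database (values are placeholders):

```json
{
    "name": "CosmosDbNoSqlLinkedService",
    "properties": {
        "type": "CosmosDb",
        "typeProperties": {
            "accountEndpoint": "https://<cosmosdb-name>.documents.azure.com:443/",
            "database": "<database-name>"
        }
    }
}
```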
These properties are supported for the linked service:
>[!NOTE] >Currently, the user-assigned managed identity authentication is not supported in data flow.
-A data factory or Synapse pipeline can be associated with a [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity), which represents this specific service instance. You can directly use this managed identity for Cosmos DB authentication, similar to using your own service principal. It allows this designated resource to access and copy data to or from your Cosmos DB.
+A data factory or Synapse pipeline can be associated with [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity), which represent this specific service instance. You can directly use a user-assigned managed identity for Azure Cosmos DB authentication, similar to using your own service principal. It allows the designated resource to access and copy data to or from your Azure Cosmos DB instance.
To use user-assigned managed identities for Azure resource authentication, follow these steps.
-1. [Create one or multiple user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and grant the user-assigned managed identity proper permission. See examples on how permission works in Cosmos DB from [Access control lists on files and directories](../cosmos-db/how-to-setup-rbac.md). More specifically, create a role definition, and assign the role to the user-assigned managed identity.
+1. [Create one or multiple user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and grant the user-assigned managed identity proper permissions. For examples of how permissions work in Azure Cosmos DB, see [Access control lists on files and directories](../cosmos-db/how-to-setup-rbac.md). More specifically, create a role definition, and assign the role to the user-assigned managed identity.
2. Assign one or multiple user-assigned managed identities to your data factory and [create credentials](credentials.md) for each user-assigned managed identity.
These properties are supported for the linked service:
| Property | Description | Required | |: |: |: | | type | The type property must be set to **CosmosDb**. |Yes |
-| accountEndpoint | Specify the account endpoint URL for the Azure Cosmos DB. | Yes |
+| accountEndpoint | Specify the account endpoint URL for the Azure Cosmos DB instance. | Yes |
| database | Specify the name of the database. | Yes | | credentials | Specify the user-assigned managed identity as the credential object. | Yes | | connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |
These properties are supported for the linked service:
For a full list of sections and properties that are available for defining datasets, see [Datasets and linked services](concepts-datasets-linked-services.md).
-The following properties are supported for Azure Cosmos DB (SQL API) dataset:
+The following properties are supported for the Azure Cosmos DB for NoSQL dataset:
| Property | Description | Required | |: |: |: |
If you use "DocumentDbCollection" type dataset, it is still supported as-is for
## Copy Activity properties
-This section provides a list of properties that the Azure Cosmos DB (SQL API) source and sink support. For a full list of sections and properties that are available for defining activities, see [Pipelines](concepts-pipelines-activities.md).
+This section provides a list of properties that the Azure Cosmos DB for NoSQL source and sink support. For a full list of sections and properties that are available for defining activities, see [Pipelines](concepts-pipelines-activities.md).
-### Azure Cosmos DB (SQL API) as source
+### Azure Cosmos DB for NoSQL as source
-To copy data from Azure Cosmos DB (SQL API), set the **source** type in Copy Activity to **DocumentDbCollectionSource**.
+To copy data from Azure Cosmos DB for NoSQL, set the **source** type in Copy Activity to **DocumentDbCollectionSource**.
The following properties are supported in the Copy Activity **source** section:
The following properties are supported in the Copy Activity **source** section:
|: |: |: | | type | The **type** property of the copy activity source must be set to **CosmosDbSqlApiSource**. |Yes | | query |Specify the Azure Cosmos DB query to read data.<br/><br/>Example:<br /> `SELECT c.BusinessEntityID, c.Name.First AS FirstName, c.Name.Middle AS MiddleName, c.Name.Last AS LastName, c.Suffix, c.EmailPromotion FROM c WHERE c.ModifiedDate > \"2009-01-01T00:00:00\"` |No <br/><br/>If not specified, this SQL statement is executed: `select <columns defined in structure> from mycollection` |
-| preferredRegions | The preferred list of regions to connect to when retrieving data from Cosmos DB. | No |
+| preferredRegions | The preferred list of regions to connect to when retrieving data from Azure Cosmos DB. | No |
| pageSize | The number of documents per page of the query result. Default is "-1", which means the service-side dynamic page size is used, up to 1000. | No | | detectDatetime | Whether to detect datetime from the string values in the documents. Allowed values are: **true** (default), **false**. | No |
-If you use "DocumentDbCollectionSource" type source, it is still supported as-is for backward compatibility. You are suggested to use the new model going forward which provide richer capabilities to copy data from Cosmos DB.
+If you use "DocumentDbCollectionSource" type source, it is still supported as-is for backward compatibility. You are suggested to use the new model going forward which provide richer capabilities to copy data from Azure Cosmos DB.
**Example**
If you use "DocumentDbCollectionSource" type source, it is still supported as-is
"type": "Copy", "inputs": [ {
- "referenceName": "<Cosmos DB SQL API input dataset name>",
+ "referenceName": "<Cosmos DB for NoSQL input dataset name>",
"type": "DatasetReference" } ],
If you use "DocumentDbCollectionSource" type source, it is still supported as-is
] ```
-When copying data from Cosmos DB, unless you want to [export JSON documents as-is](#import-and-export-json-documents), the best practice is to specify the mapping in copy activity. The service honors the mapping you specified on the activity - if a row doesn't contain a value for a column, a null value is provided for the column value. If you don't specify a mapping, the service infers the schema by using the first row in the data. If the first row doesn't contain the full schema, some columns will be missing in the result of the activity operation.
+When copying data from Azure Cosmos DB, unless you want to [export JSON documents as-is](#import-and-export-json-documents), the best practice is to specify the mapping in copy activity. The service honors the mapping you specified on the activity - if a row doesn't contain a value for a column, a null value is provided for the column value. If you don't specify a mapping, the service infers the schema by using the first row in the data. If the first row doesn't contain the full schema, some columns will be missing in the result of the activity operation.
-### Azure Cosmos DB (SQL API) as sink
+### Azure Cosmos DB for NoSQL as sink
-To copy data to Azure Cosmos DB (SQL API), set the **sink** type in Copy Activity to **DocumentDbCollectionSink**.
+To copy data to Azure Cosmos DB for NoSQL, set the **sink** type in Copy Activity to **DocumentDbCollectionSink**.
The following properties are supported in the Copy Activity **sink** section:
The following properties are supported in the Copy Activity **sink** section:
| type | The **type** property of the Copy Activity sink must be set to **CosmosDbSqlApiSink**. |Yes | | writeBehavior |Describes how to write data to Azure Cosmos DB. Allowed values: **insert** and **upsert**.<br/><br/>The behavior of **upsert** is to replace the document if a document with the same ID already exists; otherwise, insert the document.<br /><br />**Note**: The service automatically generates an ID for a document if an ID isn't specified either in the original document or by column mapping. This means that you must ensure that, for **upsert** to work as expected, your document has an ID. |No<br />(the default is **insert**) | | writeBatchSize | The service uses the [Azure Cosmos DB bulk executor library](https://github.com/Azure/azure-cosmosdb-bulkexecutor-dotnet-getting-started) to write data to Azure Cosmos DB. The **writeBatchSize** property controls the size of documents the service provides to the library. You can try increasing the value for **writeBatchSize** to improve performance and decreasing the value if your documents are large - see the tips below. |No<br />(the default is **10,000**) |
-| disableMetricsCollection | The service collects metrics such as Cosmos DB RUs for copy performance optimization and recommendations. If you are concerned with this behavior, specify `true` to turn it off. | No (default is `false`) |
+| disableMetricsCollection | The service collects metrics such as Azure Cosmos DB RUs for copy performance optimization and recommendations. If you are concerned with this behavior, specify `true` to turn it off. | No (default is `false`) |
| maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | >[!TIP]
->To import JSON documents as-is, refer to [Import or export JSON documents](#import-and-export-json-documents) section; to copy from tabular-shaped data, refer to [Migrate from relational database to Cosmos DB](#migrate-from-relational-database-to-cosmos-db).
+>To import JSON documents as-is, refer to [Import or export JSON documents](#import-and-export-json-documents) section; to copy from tabular-shaped data, refer to [Migrate from relational database to Azure Cosmos DB](#migrate-from-relational-database-to-azure-cosmos-db).
>[!TIP]
->Cosmos DB limits single request's size to 2MB. The formula is Request Size = Single Document Size * Write Batch Size. If you hit error saying **"Request size is too large."**, **reduce the `writeBatchSize` value** in copy sink configuration.
+>Azure Cosmos DB limits a single request's size to 2 MB. The formula is Request Size = Single Document Size * Write Batch Size. If you hit an error saying **"Request size is too large."**, **reduce the `writeBatchSize` value** in the copy sink configuration.
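>For example, if your documents average roughly 200 KB, a `writeBatchSize` of 10 already approaches the 2-MB limit, so a smaller value would be a reasonable starting point.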
-If you use "DocumentDbCollectionSink" type source, it is still supported as-is for backward compatibility. You are suggested to use the new model going forward which provide richer capabilities to copy data from Cosmos DB.
+If you use "DocumentDbCollectionSink" type source, it is still supported as-is for backward compatibility. You are suggested to use the new model going forward which provide richer capabilities to copy data from Azure Cosmos DB.
**Example**
To copy data from Azure Cosmos DB to tabular sink or reversed, refer to [schema
## Mapping data flow properties
-When transforming data in mapping data flow, you can read and write to collections in Cosmos DB. For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows.
+When transforming data in mapping data flow, you can read and write to collections in Azure Cosmos DB. For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows.
> [!Note] > The Azure Cosmos DB serverless is not supported in mapping data flow.
When transforming data in mapping data flow, you can read and write to collectio
Settings specific to Azure Cosmos DB are available in the **Source Options** tab of the source transformation.
-**Include system columns:** If true, ```id```, ```_ts```, and other system columns will be included in your data flow metadata from CosmosDB. When updating collections, it is important to include this so that you can grab the existing row ID.
+**Include system columns:** If true, ```id```, ```_ts```, and other system columns are included in your data flow metadata from Azure Cosmos DB. When updating collections, it is important to include these so that you can get the existing row ID.
**Page size:** The number of documents per page of the query result. Default is "-1", which uses the service-side dynamic page size, up to 1000.
-**Throughput:** Set an optional value for the number of RUs you'd like to apply to your CosmosDB collection for each execution of this data flow during the read operation. Minimum is 400.
+**Throughput:** Set an optional value for the number of RUs you'd like to apply to your Azure Cosmos DB collection for each execution of this data flow during the read operation. Minimum is 400.
**Preferred regions:** Choose the preferred read regions for this process. **Change feed:** If true, you will get data from the [Azure Cosmos DB change feed](../cosmos-db/change-feed.md), which is a persistent record of changes to a container in the order they occur, automatically picked up since the last run. When you set it to true, do not set both **Infer drifted column types** and **Allow schema drift** to true at the same time. For more details, see [Azure Cosmos DB change feed](#azure-cosmos-db-change-feed).
-**Start from beginning:** If true, you will get initial load of full snapshot data in the first run, followed by capturing changed data in next runs. If false, the initial load will be skipped in the first run, followed by capturing changed data in next runs. The setting is aligned with the same setting name in [Cosmos DB reference](https://github.com/Azure/azure-cosmosdb-spark/wiki/Configuration-references#reading-cosmosdb-collection-change-feed). For more details, see [Azure Cosmos DB change feed](#azure-cosmos-db-change-feed).
+**Start from beginning:** If true, you will get an initial load of the full snapshot data in the first run, followed by capturing changed data in subsequent runs. If false, the initial load is skipped in the first run, and only changed data is captured in subsequent runs. The setting is aligned with the same setting name in the [Azure Cosmos DB reference](https://github.com/Azure/azure-cosmosdb-spark/wiki/Configuration-references#reading-cosmosdb-collection-change-feed). For more details, see [Azure Cosmos DB change feed](#azure-cosmos-db-change-feed).
### Sink transformation
Settings specific to Azure Cosmos DB are available in the **Settings** tab of th
* None: No action will be done to the collection. * Recreate: The collection will get dropped and recreated
-**Batch size**: An integer that represents how many objects are being written to Cosmos DB collection in each batch. Usually, starting with the default batch size is sufficient. To further tune this value, note:
+**Batch size**: An integer that represents how many objects are being written to Azure Cosmos DB collection in each batch. Usually, starting with the default batch size is sufficient. To further tune this value, note:
-- Cosmos DB limits single request's size to 2MB. The formula is "Request Size = Single Document Size * Batch Size". If you hit error saying "Request size is too large", reduce the batch size value.
+- Azure Cosmos DB limits a single request's size to 2 MB. The formula is "Request Size = Single Document Size * Batch Size". If you hit an error saying "Request size is too large", reduce the batch size value.
- The larger the batch size, the better the throughput the service can achieve, but make sure you allocate enough RUs to support your workload. **Partition key:** Enter a string that represents the partition key for your collection. Example: ```/movies/title```
-**Throughput:** Set an optional value for the number of RUs you'd like to apply to your CosmosDB collection for each execution of this data flow. Minimum is 400.
+**Throughput:** Set an optional value for the number of RUs you'd like to apply to your Azure Cosmos DB collection for each execution of this data flow. Minimum is 400.
**Write throughput budget:** An integer that represents the RUs you want to allocate for this Data Flow write operation, out of the total throughput allocated to the collection.
To learn details about the properties, check [Lookup activity](control-flow-look
## Import and export JSON documents
-You can use this Azure Cosmos DB (SQL API) connector to easily:
+You can use this Azure Cosmos DB for NoSQL connector to easily:
* Copy documents between two Azure Cosmos DB collections as-is. * Import JSON documents from various sources to Azure Cosmos DB, including from Azure Blob storage, Azure Data Lake Store, and other file-based stores that the service supports.
You can use this Azure Cosmos DB (SQL API) connector to easily:
To achieve schema-agnostic copy:
-* When you use the Copy Data tool, select the **Export as-is to JSON files or Cosmos DB collection** option.
+* When you use the Copy Data tool, select the **Export as-is to JSON files or Azure Cosmos DB collection** option.
* When you use activity authoring, choose JSON format with the corresponding file store for source or sink.
-## Migrate from relational database to Cosmos DB
+## Migrate from relational database to Azure Cosmos DB
-When migrating from a relational database e.g. SQL Server to Azure Cosmos DB, copy activity can easily map tabular data from source to flatten JSON documents in Cosmos DB. In some cases, you may want to redesign the data model to optimize it for the NoSQL use-cases according to [Data modeling in Azure Cosmos DB](../cosmos-db/modeling-data.md), for example, to de-normalize the data by embedding all of the related sub-items within one JSON document. For such case, refer to [this article](../cosmos-db/migrate-relational-to-cosmos-db-sql-api.md) with a walk-through on how to achieve it using the copy activity.
+When migrating from a relational database, for example SQL Server, to Azure Cosmos DB, the copy activity can easily map tabular source data to flattened JSON documents in Azure Cosmos DB. In some cases, you may want to redesign the data model to optimize it for NoSQL use cases, according to [Data modeling in Azure Cosmos DB](../cosmos-db/modeling-data.md), for example by de-normalizing the data and embedding all of the related sub-items within one JSON document. For that case, refer to [this article](../cosmos-db/migrate-relational-to-cosmos-db-sql-api.md) for a walk-through of how to achieve it by using the copy activity.
## Azure Cosmos DB change feed
data-factory Connector Mongodb Atlas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb-atlas.md
-+ Last updated 09/09/2021
The following properties are supported in the copy activity **source** section:
| cursorMethods.sort | Specifies the order in which the query returns matching documents. Refer to [cursor.sort()](https://docs.mongodb.com/manual/reference/method/cursor.sort/#cursor.sort). | No | | cursorMethods.limit | Specifies the maximum number of documents the server returns. Refer to [cursor.limit()](https://docs.mongodb.com/manual/reference/method/cursor.limit/#cursor.limit). | No | | cursorMethods.skip | Specifies the number of documents to skip and from where MongoDB Atlas begins to return results. Refer to [cursor.skip()](https://docs.mongodb.com/manual/reference/method/cursor.skip/#cursor.skip). | No |
-| batchSize | Specifies the number of documents to return in each batch of the response from MongoDB Atlas instance. In most cases, modifying the batch size will not affect the user or the application. Cosmos DB limits each batch cannot exceed 40MB in size, which is the sum of the batchSize number of documents' size, so decrease this value if your document size being large. | No<br/>(the default is **100**) |
+| batchSize | Specifies the number of documents to return in each batch of the response from the MongoDB Atlas instance. In most cases, modifying the batch size won't affect the user or the application. Azure Cosmos DB limits each batch to a maximum of 40 MB (the cumulative size of the batchSize documents), so decrease this value if your documents are large. | No<br/>(the default is **100**) |
>[!TIP] >The service supports consuming BSON documents in **Strict mode**. Make sure your filter query is in Strict mode instead of Shell mode. More details can be found in the [MongoDB manual](https://docs.mongodb.com/manual/reference/mongodb-extended-json/index.html).
data-factory Connector Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb.md
-+ Last updated 09/09/2021
The following properties are supported in the copy activity **source** section:
| cursorMethods.sort | Specifies the order in which the query returns matching documents. Refer to [cursor.sort()](https://docs.mongodb.com/manual/reference/method/cursor.sort/#cursor.sort). | No | | cursorMethods.limit | Specifies the maximum number of documents the server returns. Refer to [cursor.limit()](https://docs.mongodb.com/manual/reference/method/cursor.limit/#cursor.limit). | No | | cursorMethods.skip | Specifies the number of documents to skip and from where MongoDB begins to return results. Refer to [cursor.skip()](https://docs.mongodb.com/manual/reference/method/cursor.skip/#cursor.skip). | No |
-| batchSize | Specifies the number of documents to return in each batch of the response from MongoDB instance. In most cases, modifying the batch size will not affect the user or the application. Cosmos DB limits each batch cannot exceed 40 MB in size, which is the sum of the batchSize number of documents' size, so decrease this value if your document size being large. | No<br/>(the default is **100**) |
+| batchSize | Specifies the number of documents to return in each batch of the response from the MongoDB instance. In most cases, modifying the batch size won't affect the user or the application. Azure Cosmos DB limits each batch to a maximum of 40 MB (the cumulative size of the batchSize documents), so decrease this value if your documents are large. | No<br/>(the default is **100**) |
>[!TIP] >The service supports consuming BSON documents in **Strict mode**. Make sure your filter query is in Strict mode instead of Shell mode. More details can be found in the [MongoDB manual](https://docs.mongodb.com/manual/reference/mongodb-extended-json/index.html).
data-factory Connector Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-office-365.md
Title: Copy data from Microsoft 365 (Office 365)
+ Title: Copy and transform data from Microsoft 365 (Office 365)
-description: Learn how to copy data from Microsoft 365 (Office 365) to supported sink data stores by using copy activity in an Azure Data Factory or Synapse Analytics pipeline.
+description: Learn how to copy and transform data from Microsoft 365 (Office 365) to supported sink data stores by using copy and mapping data flow activity in an Azure Data Factory or Synapse Analytics pipeline.
-+ Previously updated : 08/04/2022 Last updated : 09/29/2022
-# Copy data from Microsoft 365 (Office 365) into Azure using Azure Data Factory or Synapse Analytics
+# Copy and transform data from Microsoft 365 (Office 365) into Azure using Azure Data Factory or Synapse Analytics
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] Azure Data Factory and Synapse Analytics pipelines integrate with [Microsoft Graph data connect](/graph/data-connect-concept-overview), allowing you to bring the rich organizational data in your Microsoft 365 (Office 365) tenant into Azure in a scalable way and build analytics applications and extract insights based on these valuable data assets. Integration with Privileged Access Management provides secured access control for the valuable curated data in Microsoft 365 (Office 365). Please refer to [this link](/graph/data-connect-concept-overview) for an overview on Microsoft Graph data connect and refer to [this link](/graph/data-connect-policies#licensing) for licensing information.
-This article outlines how to use the Copy Activity to copy data from Microsoft 365 (Office 365). It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity.
+This article outlines how to use the Copy Activity to copy data, and Data Flow to transform data, from Microsoft 365 (Office 365). For an introduction to copying data, read the [copy activity overview](copy-activity-overview.md). For an introduction to transforming data, read the [mapping data flow overview](concepts-data-flow-overview.md).
+
+> [!NOTE]
+> Microsoft 365 Data Flow connector is currently in preview. To participate, use this sign-up form: [M365 + Analytics Preview](https://aka.ms/m365-analytics-preview).
## Supported capabilities
This Microsoft 365 (Office 365) connector is supported for the following capabil
| Supported capabilities|IR | || --| |[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312;|
<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
-ADF Microsoft 365 (Office 365) connector and Microsoft Graph data connect enables at scale ingestion of different types of datasets from Exchange Email enabled mailboxes, including address book contacts, calendar events, email messages, user information, mailbox settings, and so on. Refer [here](/graph/data-connect-datasets) to see the complete list of datasets available.
+The ADF Microsoft 365 (Office 365) connector and Microsoft Graph Data Connect enable at-scale ingestion of different types of datasets from Exchange Email-enabled mailboxes, including address book contacts, calendar events, email messages, user information, mailbox settings, and so on. Refer [here](/graph/data-connect-datasets) to see the complete list of available datasets.
-For now, within a single copy activity you can only **copy data from Microsoft 365 (Office 365) into [Azure Blob Storage](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md) in JSON format** (type setOfObjects). If you want to load Microsoft 365 (Office 365) into other types of data stores or in other formats, you can chain the first copy activity with a subsequent copy activity to further load data into any of the [supported ADF destination stores](copy-activity-overview.md#supported-data-stores-and-formats) (refer to "supported as a sink" column in the "Supported data stores and formats" table).
+For now, within a single copy activity and data flow, you can only **ingest data from Microsoft 365 (Office 365) into [Azure Blob Storage](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md) in JSON format** (type setOfObjects). If you want to load Microsoft 365 (Office 365) into other types of data stores or in other formats, you can chain the first copy activity or data flow with a subsequent activity to further load data into any of the [supported ADF destination stores](copy-activity-overview.md#supported-data-stores-and-formats) (refer to "supported as a sink" column in the "Supported data stores and formats" table).
>[!IMPORTANT]
>- The Azure subscription containing the data factory or Synapse workspace and the sink data store must be under the same Azure Active Directory (Azure AD) tenant as the Microsoft 365 (Office 365) tenant.
>- Ensure the Azure Integration Runtime region used for the copy activity, as well as the destination, is in the same region where the Microsoft 365 (Office 365) tenant users' mailbox is located. Refer [here](concepts-integration-runtime.md#integration-runtime-location) to understand how the Azure IR location is determined. Refer to the [table here](/graph/data-connect-datasets#regions) for the list of supported Office regions and corresponding Azure regions.
>- Service Principal authentication is the only authentication mechanism supported for Azure Blob Storage, Azure Data Lake Storage Gen1, and Azure Data Lake Storage Gen2 as destination stores.
-> [!Note]
-> Please use Azure integration runtime in both source and sink linked services. The self-hosted integration runtime and the managed virtual network integration runtime are not supported.
+> [!NOTE]
+> Use the Azure integration runtime in both source and sink linked services. The self-hosted integration runtime and the managed virtual network integration runtime are not supported.
## Prerequisites
-To copy data from Microsoft 365 (Office 365) into Azure, you need to complete the following prerequisite steps:
+To copy and transform data from Microsoft 365 (Office 365) into Azure, you need to complete the following prerequisite steps:
- Your Microsoft 365 (Office 365) tenant admin must complete on-boarding actions as described [here](/events/build-may-2021/microsoft-365-teams/breakouts/od483/).
- Create and configure an Azure AD web application in Azure Active Directory. For instructions, see [Create an Azure AD application](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal). A minimal PowerShell sketch of this step follows the list below.
- Make note of the following values, which you will use to define the linked service for Microsoft 365 (Office 365):
- - Tenant ID. For instructions, see [Get tenant ID](../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in).
- - Application ID and Application key. For instructions, see [Get application ID and authentication key](../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in).
-- Add the user identity who will be making the data access request as the owner of the Azure AD web application (from the Azure AD web application > Settings > Owners > Add owner).
- - The user identity must be in the Microsoft 365 (Office 365) organization you are getting data from and must not be a Guest user.
+ - Tenant ID. For instructions, see [Get tenant ID](../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in).
+ - Application ID and Application key. For instructions, see [Get application ID and authentication key](../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in).
+- Add the user identity who will be making the data access request as the owner of the Azure AD web application (from the Azure AD web application > Settings > Owners > Add owner).
+ - The user identity must be in the Microsoft 365 (Office 365) organization you are getting data from and must not be a Guest user.
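If you prefer to script the app registration instead of using the portal, the following Azure PowerShell sketch covers the same prerequisite. It's a minimal sketch, assuming a recent Az module (Az.Resources 6.x or later); the display name is a placeholder, property names such as `AppId` and `SecretText` differ in older module versions, and you still need to add the requesting user as an owner of the application as described above.

```powershell
# Minimal sketch of the app-registration prerequisite (display name is a placeholder).
Connect-AzAccount

# Tenant ID of the signed-in Azure AD tenant
$tenantId = (Get-AzContext).Tenant.Id

# Register the Azure AD application and create a service principal for it
$app = New-AzADApplication -DisplayName "M365DataConnectApp"
New-AzADServicePrincipal -ApplicationId $app.AppId | Out-Null

# Create a client secret (the application key); note its value together with the application ID
$secret = New-AzADAppCredential -ApplicationId $app.AppId

"Tenant ID:       $tenantId"
"Application ID:  $($app.AppId)"
"Application key: $($secret.SecretText)"
```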
## Approving new data access requests
Refer [here](/graph/data-connect-faq#how-can-i-approve-pam-requests-via-microsof
>[!TIP]
>For a walkthrough of using the Microsoft 365 (Office 365) connector, see the [Load data from Microsoft 365 (Office 365)](load-office-365-data.md) article.
-You can create a pipeline with the copy activity by using one of the following tools or SDKs. Select a link to go to a tutorial with step-by-step instructions to create a pipeline with a copy activity.
+You can create a pipeline with the copy activity and data flow by using one of the following tools or SDKs. Select a link to go to a tutorial with step-by-step instructions for creating a pipeline with a copy activity. If you use Azure PowerShell, a minimal sketch follows the list below.
- [Azure portal](quickstart-create-data-factory-portal.md)
- [.NET SDK](quickstart-create-data-factory-dot-net.md)
- [Python SDK](quickstart-create-data-factory-python.md)
- [Azure PowerShell](quickstart-create-data-factory-powershell.md)
- [REST API](quickstart-create-data-factory-rest-api.md)
-- [Azure Resource Manager template](quickstart-create-data-factory-resource-manager-template.md).
+- [Azure Resource Manager template](quickstart-create-data-factory-resource-manager-template.md).
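If you choose the Azure PowerShell route, the overall flow looks roughly like the sketch below; the resource group, data factory, pipeline name, and definition file path are placeholders, not values required by the connector.

```powershell
# Deploy a pipeline definition (containing the copy activity) from a local JSON file.
Set-AzDataFactoryV2Pipeline -ResourceGroupName "myRG" -DataFactoryName "myDataFactory" `
    -Name "CopyFromOffice365" -DefinitionFile ".\CopyFromOffice365.json"

# Trigger a run and check its status.
$runId = Invoke-AzDataFactoryV2Pipeline -ResourceGroupName "myRG" -DataFactoryName "myDataFactory" `
    -PipelineName "CopyFromOffice365"
Get-AzDataFactoryV2PipelineRun -ResourceGroupName "myRG" -DataFactoryName "myDataFactory" `
    -PipelineRunId $runId
```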
## Create a linked service to Microsoft 365 (Office 365) using UI
To copy data from Microsoft 365 (Office 365), the following properties are supported:
] ```
+## Transform data with the Microsoft 365 connector
+
+Microsoft 365 datasets can be used as a source with mapping data flows. The data flow flattens the dataset automatically, so you can work directly with the flattened data and accelerate your analytics scenarios.
+
+### Mapping data flow properties
+
+To create a mapping data flow using the Microsoft 365 connector as a source, complete the following steps:
+
+1. In ADF Studio, go to the **Data flows** section of the **Author** hub, select the **…** button to drop down the **Data flow actions** menu, and select the **New data flow** item. Turn on debug mode by using the **Data flow debug** button in the top bar of data flow canvas.
+
+ :::image type="content" source="media/connector-office-365/connector-office-365-mapping-data-flow-data-flow-debug.png" alt-text="Screenshot of the data flow debug button in mapping data flow.":::
+
+2. In the mapping data flow editor, select **Add Source**.
+
+ :::image type="content" source="media/connector-office-365/connector-office-365-mapping-data-flow-add-source.png" alt-text="Screenshot of add source in mapping data flow.":::
+
+3. On the **Source settings** tab, select **Inline** for the **Source type** property, select **Microsoft 365 (Office 365)** as the **Inline dataset type**, and select the Microsoft 365 linked service that you created earlier.
+
+ :::image type="content" source="media/connector-office-365/connector-office-365-mapping-data-flow-select-dataset.png" alt-text="Screenshot of the select dataset option in source settings of mapping data flow source.":::
+
+4. On the **Source options** tab, select the **Table name** of the Microsoft 365 table that you would like to transform. Also set the **Auto flatten** option to choose whether you would like the data flow to automatically flatten the source dataset.
+
+ :::image type="content" source="media/connector-office-365/connector-office-365-mapping-data-flow-source-options.png" alt-text="Screenshot of the source options of mapping data flow source.":::
+
+5. For the **Projection**, **Optimize**, and **Inspect** tabs, follow [mapping data flow](concepts-data-flow-overview.md).
+
+6. On the **Data preview** tab, select the **Refresh** button to fetch a sample dataset for validation.
+ ## Next steps
-For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Troubleshoot Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-cosmos-db.md
Title: Troubleshoot the Azure Cosmos DB connector
-description: Learn how to troubleshoot issues with the Azure Cosmos DB and Azure Cosmos DB (SQL API) connectors in Azure Data Factory and Azure Synapse Analytics.
+description: Learn how to troubleshoot issues with the Azure Cosmos DB and Azure Cosmos DB for NoSQL connectors in Azure Data Factory and Azure Synapse Analytics.
Last updated 07/29/2022 -+ # Troubleshoot the Azure Cosmos DB connector in Azure Data Factory and Azure Synapse [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article provides suggestions to troubleshoot common problems with the Azure Cosmos DB and Azure Cosmos DB (SQL API) connectors in Azure Data Factory and Azure Synapse.
+This article provides suggestions to troubleshoot common problems with the Azure Cosmos DB and Azure Cosmos DB for NoSQL connectors in Azure Data Factory and Azure Synapse.
## Error message: Request size is too large
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Message**: `CosmosDbSqlApi operation Failed. ErrorMessage: %msg;.` -- **Cause**: A problem with the CosmosDbSqlApi operation. This applies to the Cosmos DB (SQL API) connector specifically.
+- **Cause**: A problem with the CosmosDbSqlApi operation. This applies to the Azure Cosmos DB for NoSQL connector specifically.
- **Recommendation**: To check the error details, see [Azure Cosmos DB help document](../cosmos-db/troubleshoot-dot-net-sdk.md). For further help, contact the Azure Cosmos DB team.
data-factory Connector Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-guide.md
Last updated 06/29/2022 -+ # Troubleshoot Azure Data Factory and Azure Synapse Analytics connectors
This article describes how to troubleshoot connectors in Azure Data Factory and
You can refer to the troubleshooting pages for each connector to see problems specific to it with explanations of their causes and recommendations to resolve them.

- [Azure Blob Storage](connector-troubleshoot-azure-blob-storage.md)
-- [Azure Cosmos DB (including SQL API connector)](connector-troubleshoot-azure-cosmos-db.md)
+- [Azure Cosmos DB (including Azure Cosmos DB for NoSQL connector)](connector-troubleshoot-azure-cosmos-db.md)
- [Azure Data Lake (Gen1 and Gen2)](connector-troubleshoot-azure-data-lake.md)
- [Azure Database for PostgreSQL](connector-troubleshoot-postgresql.md)
- [Azure Files storage](connector-troubleshoot-azure-files.md)
data-factory Connector Troubleshoot Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-sap.md
+
+ Title: Troubleshoot the SAP Table, SAP Business Warehouse Open Hub, and SAP ODP connectors
+
+description: Learn how to troubleshoot issues with the SAP Table, SAP Business Warehouse Open Hub, and SAP ODP connectors in Azure Data Factory and Azure Synapse Analytics.
++++ Last updated : 08/02/2022++++
+# Troubleshoot the SAP Table, SAP Business Warehouse Open Hub, and SAP ODP connectors in Azure Data Factory and Azure Synapse
++
+This article provides suggestions to troubleshoot common problems with the SAP Table, SAP Business Warehouse Open Hub, and SAP ODP connectors in Azure Data Factory and Azure Synapse.
+
+## Error code: SapRfcDestinationAddFailed
+
+- **Message**: `Get or create destination '%destination; failed.`
+
+- **Cause**: Unable to connect to the SAP server. This may be caused by a connectivity issue between the machine where the integration runtime is installed and the SAP server, or by wrong credentials.
+
+- **Recommendation**: If the error messages are like `'<serverName>:3300' not reached`, first test the connection from the machine where the integration runtime is installed by running the following PowerShell command:
+
+ ```powershell
+
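+  # Verify TCP connectivity from the self-hosted integration runtime machine to the SAP server
+  # (3300 is the dispatcher port of SAP instance 00; adjust it to your instance number).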
+ Test-NetConnection <sap server> -port 3300
+
+ ```
+
+## Next steps
+
+For more troubleshooting help, try these resources:
+
+- [Connector troubleshooting guide](connector-troubleshoot-guide.md)
+- [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
+- [Data Factory feature requests](/answers/topics/azure-data-factory.html)
+- [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory)
+- [Microsoft Q&A page](/answers/topics/azure-data-factory.html)
+- [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
+- [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
data-factory Copy Activity Schema And Type Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-schema-and-type-mapping.md
description: Learn about how copy activity in Azure Data Factory and Azure Synap
-+ Last updated 09/09/2021
You can configure the mapping on the Authoring UI -> copy activity -> mapping ta
| -- | -- | -- |
| name | Name of the source or sink column/field. Apply for tabular source and sink. | Yes |
| ordinal | Column index. Start from 1. <br>Apply and required when using delimited text without header line. | No |
-| path | JSON path expression for each field to extract or map. Apply for hierarchical source and sink, for example, Cosmos DB, MongoDB, or REST connectors.<br>For fields under the root object, the JSON path starts with root `$`; for fields inside the array chosen by `collectionReference` property, JSON path starts from the array element without `$`. | No |
+| path | JSON path expression for each field to extract or map. Apply for hierarchical source and sink, for example, Azure Cosmos DB, MongoDB, or REST connectors.<br>For fields under the root object, the JSON path starts with root `$`; for fields inside the array chosen by `collectionReference` property, JSON path starts from the array element without `$`. | No |
| type | Interim data type of the source or sink column. In general, you don't need to specify or change this property. Learn more about [data type mapping](#data-type-mapping). | No |
| culture | Culture of the source or sink column. Apply when type is `Datetime` or `Datetimeoffset`. The default is `en-us`.<br>In general, you don't need to specify or change this property. Learn more about [data type mapping](#data-type-mapping). | No |
| format | Format string to be used when type is `Datetime` or `Datetimeoffset`. Refer to [Custom Date and Time Format Strings](/dotnet/standard/base-types/custom-date-and-time-format-strings) on how to format datetime. In general, you don't need to specify or change this property. Learn more about [data type mapping](#data-type-mapping). | No |
The following properties are supported under `translator` in addition to `mappin
| Property | Description | Required |
| - | ----------- | -- |
-| collectionReference | Apply when copying data from hierarchical source, for example, Cosmos DB, MongoDB, or REST connectors.<br>If you want to iterate and extract data from the objects **inside an array field** with the same pattern and convert to per row per object, specify the JSON path of that array to do cross-apply. | No |
+| collectionReference | Apply when copying data from a hierarchical source, such as Azure Cosmos DB, MongoDB, or REST connectors.<br>If you want to iterate and extract data from the objects **inside an array field** with the same pattern and convert to per row per object, specify the JSON path of that array to do cross-apply. | No |
#### Tabular source to tabular sink
If you are using the syntax of `"columnMappings": "UserId: MyUserId, Group: MyGr
### Alternative schema-mapping (legacy model)
-You can specify copy activity -> `translator` -> `schemaMapping` to map between hierarchical-shaped data and tabular-shaped data, for example, copy from MongoDB/REST to text file and copy from Oracle to Azure Cosmos DB's API for MongoDB. The following properties are supported in copy activity `translator` section:
+You can specify copy activity -> `translator` -> `schemaMapping` to map between hierarchical-shaped data and tabular-shaped data, for example, copy from MongoDB/REST to text file and copy from Oracle to Azure Cosmos DB for MongoDB. The following properties are supported in copy activity `translator` section:
| Property | Description | Required | | : | :-- | :- |
data-factory Data Flow Alter Row https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-alter-row.md
-+ Last updated 08/03/2022
Use the Alter Row transformation to set insert, delete, update, and upsert polic
:::image type="content" source="media/data-flow/alter-row1.png" alt-text="Alter row settings":::
-Alter Row transformations will only operate on database, REST, or CosmosDB sinks in your data flow. The actions that you assign to rows (insert, update, delete, upsert) won't occur during debug sessions. Run an Execute Data Flow activity in a pipeline to enact the alter row policies on your database tables.
+Alter Row transformations only operate on database, REST, or Azure Cosmos DB sinks in your data flow. The actions that you assign to rows (insert, update, delete, upsert) won't occur during debug sessions. Run an Execute Data Flow activity in a pipeline to enact the alter row policies on your database tables.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4vJYc]
Each alter row policy is represented by an icon that indicates whether an insert
## Allow alter row policies in sink
-For the alter row policies to work, the data stream must write to a database or Cosmos sink. In the **Settings** tab in your sink, enable which alter row policies are allowed for that sink.
+For the alter row policies to work, the data stream must write to a database or Azure Cosmos DB sink. In the **Settings** tab in your sink, enable which alter row policies are allowed for that sink.
:::image type="content" source="media/data-flow/alter-row2.png" alt-text="Alter row sink":::
The default behavior is to only allow inserts. To allow updates, upserts, or del
> [!NOTE]
> If your inserts, updates, or upserts modify the schema of the target table in the sink, the data flow will fail. To modify the target schema in your database, choose **Recreate table** as the table action. This will drop and recreate your table with the new schema definition.
-The sink transformation requires either a single key or a series of keys for unique row identification in your target database. For SQL sinks, set the keys in the sink settings tab. For CosmosDB, set the partition key in the settings and also set the CosmosDB system field "id" in your sink mapping. For CosmosDB, it is mandatory to include the system column "id" for updates, upserts, and deletes.
+The sink transformation requires either a single key or a series of keys for unique row identification in your target database. For SQL sinks, set the keys in the sink settings tab. For Azure Cosmos DB, set the partition key in the settings and also set the Azure Cosmos DB system field "id" in your sink mapping. For Azure Cosmos DB, it is mandatory to include the system column "id" for updates, upserts, and deletes.
## Merges and upserts with Azure SQL Database and Azure Synapse
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-sink.md
-+ Last updated 09/01/2022
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| Connector | Format | Dataset/inline | | | | -- | | [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties) <br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties) <br>[Delta](format-delta.md) <br>[JSON](format-json.md#mapping-data-flow-properties) <br/>[ORC](format-orc.md#mapping-data-flow-properties)<br>[Parquet](format-parquet.md#mapping-data-flow-properties) | Γ£ô/Γ£ô <br>Γ£ô/Γ£ô <br>-/Γ£ô <br>Γ£ô/Γ£ô <br>Γ£ô/Γ£ô<br>Γ£ô/Γ£ô |
-| [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md#mapping-data-flow-properties) | | Γ£ô/- |
+| [Azure Cosmos DB for NoSQL](connector-azure-cosmos-db.md#mapping-data-flow-properties) | | ✓/- |
| [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties) <br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties) <br>[JSON](format-json.md#mapping-data-flow-properties) <br/>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties) | Γ£ô/- <br>Γ£ô/- <br>Γ£ô/- <br>Γ£ô/Γ£ô<br>Γ£ô/- | | [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties) <br/>[Common Data Model](format-common-data-model.md#sink-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties) <br>[Delta](format-delta.md) <br>[JSON](format-json.md#mapping-data-flow-properties) <br/>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties) | Γ£ô/Γ£ô <br>-/Γ£ô <br>Γ£ô/Γ£ô <br>-/Γ£ô <br>Γ£ô/Γ£ô<br>Γ£ô/Γ£ô <br>Γ£ô/Γ£ô | | [Azure Database for MySQL](connector-azure-database-for-mysql.md) | | Γ£ô/Γ£ô |
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-source.md
-+ Last updated 08/23/2022
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
|[Appfigures (Preview)](connector-appfigures.md#mapping-data-flow-properties) | | -/Γ£ô | |[Asana (Preview)](connector-asana.md#mapping-data-flow-properties) | | -/Γ£ô | |[Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Delta](format-delta.md)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties) <br>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | Γ£ô/Γ£ô<br>Γ£ô/Γ£ô<br>Γ£ô/Γ£ô<br>Γ£ô/Γ£ô<br/>Γ£ô/Γ£ô<br>Γ£ô/Γ£ô<br/>Γ£ô/Γ£ô<br>Γ£ô/Γ£ô |
-| [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md#mapping-data-flow-properties) | | Γ£ô/- |
+| [Azure Cosmos DB for NoSQL](connector-azure-cosmos-db.md#mapping-data-flow-properties) | | ✓/- |
| [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties)<br>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | Γ£ô/Γ£ô<br>Γ£ô/Γ£ô<br>Γ£ô/Γ£ô<br/>Γ£ô/Γ£ô<br>Γ£ô/Γ£ô<br/>Γ£ô/Γ£ô<br>Γ£ô/Γ£ô | | [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Common Data Model](format-common-data-model.md#source-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Delta](format-delta.md)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties)<br>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | Γ£ô/Γ£ô<br/>-/Γ£ô<br>Γ£ô/Γ£ô<br>Γ£ô/Γ£ô<br>Γ£ô/Γ£ô<br>Γ£ô/Γ£ô<br/>Γ£ô/Γ£ô<br/>Γ£ô/Γ£ô<br>Γ£ô/Γ£ô | | [Azure Database for MySQL](connector-azure-database-for-mysql.md) | | Γ£ô/Γ£ô |
data-factory Data Flow Troubleshoot Connector Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-connector-format.md
+ Last updated 08/04/2022
To solve this issue, refer to the following recommendations:
### Support customized schemas in the source #### Symptoms
-When you want to use the ADF data flow to move or transfer data from Cosmos DB/JSON into other data stores, some columns of the source data may be missed. 
+When you want to use the ADF data flow to move or transfer data from Azure Cosmos DB/JSON into other data stores, some columns of the source data may be missed. 
#### Cause
For the schema-free connectors (the column number, column name, and column data type of each row can be different compared with other rows), ADF by default uses sample rows (for example, the top 100 or 1,000 rows of data) to infer the schema, and the inferred result is used as the schema to read data. So if your data stores have extra columns that don't appear in the sample rows, the data of these extra columns is not read, moved, or transferred into the sink data stores.
To overwrite the default behavior and bring in additional fields, ADF provides o
### Support map type in the source #### Symptoms
-In ADF data flows, map data type cannot be directly supported in Cosmos DB or JSON source, so you cannot get the map data type under "Import projection".
+In ADF data flows, map data type cannot be directly supported in Azure Cosmos DB or JSON source, so you cannot get the map data type under "Import projection".
#### Cause
-For Cosmos DB and JSON, they are schema-free connectivity and related spark connector uses sample data to infer the schema, and then that schema is used as the Cosmos DB/JSON source schema. When inferring the schema, the Cosmos DB/JSON spark connector can only infer object data as a struct rather than a map data type, and that's why map type cannot be directly supported.
+Azure Cosmos DB and JSON are schema-free connectivity types, and the related Spark connector uses sample data to infer the schema; that schema is then used as the Azure Cosmos DB/JSON source schema. When inferring the schema, the Azure Cosmos DB/JSON Spark connector can only infer object data as a struct rather than a map data type, which is why the map type cannot be directly supported.
#### Recommendation 
-To solve this issue, refer to the following examples and steps to manually update the script (DSL) of the Cosmos DB/JSON source to get the map data type support.
+To solve this issue, refer to the following examples and steps to manually update the script (DSL) of the Azure Cosmos DB/JSON source to get the map data type support.
**Examples**: **Step-1**: Open the script of the data flow activity.
The map type support:
|Excel, CSV |No |Both are tabular data sources with the primitive type, so there is no need to support the map type. |
|Orc, Avro |Yes |None.|
|JSON|Yes |The map type cannot be directly supported, follow the recommendation part in this section to update the script (DSL) under the source projection.|
-|Cosmos DB |Yes |The map type cannot be directly supported, follow the recommendation part in this section to update the script (DSL) under the source projection.|
+|Azure Cosmos DB |Yes |The map type cannot be directly supported, follow the recommendation part in this section to update the script (DSL) under the source projection.|
|Parquet |Yes |Today the complex data type is not supported on the parquet dataset, so you need to use the "Import projection" under the data flow parquet source to get the map type.|
|XML |No |None.|
You use the Azure Synapse Analytics and the linked service actually is a Synapse
1. When you select 'enable staging' in the Source, you face the following error: `shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near 'IDENTITY'.`
1. When you want to fetch data from an external table, you face the following error: `shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: External table 'dbo' is not accessible because location does not exist or it is used by another process.`
-1. When you want to fetch data from Cosmos DB through Serverless pool by query/from view, you face the following error:
+1. When you want to fetch data from Azure Cosmos DB through Serverless pool by query/from view, you face the following error:
`Job failed due to reason: Connection reset.`
1. When you want to fetch data from a view, you may face different errors.
Causes of the symptoms are stated below respectively:
1. Serverless pool cannot be used as a sink. It doesn't support writing data into the database.
1. Serverless pool doesn't support staged data loading, so 'enable staging' isn't supported.
1. The authentication method that you use doesn't have the correct permission to the external data source that the external table refers to.
-1. There is a known limitation in Synapse serverless pool, blocking you to fetch Cosmos DB data from data flows.
+1. There is a known limitation in the Synapse serverless pool that blocks you from fetching Azure Cosmos DB data from data flows.
1. A view is a virtual table based on an SQL statement. The root cause is inside the statement of the view.

#### Recommendation
You can apply the following steps to solve your issues correspondingly.
>[!Note]
> The user-password authentication cannot query external tables. For more information, see [Security model](../synapse-analytics/metadat#security-model).
-1. You can use copy activity to fetch Cosmos DB data from the serverless pool.
+1. You can use copy activity to fetch Azure Cosmos DB data from the serverless pool.
1. You can provide the SQL statement that creates the view to the engineering support team, and they can help analyze if the statement hits an authentication issue or something else.
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-guide.md
+ Previously updated : 09/02/2022 Last updated : 09/29/2022 # Troubleshoot mapping data flows in Azure Data Factory
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-Cosmos-DeleteDataFailed -- **Message**: Failed to delete data from cosmos after 3 times retry.-- **Cause**: The throughput on the Cosmos collection is small and leads to meeting throttling or row data not existing in Cosmos.
+- **Message**: Failed to delete data from Azure Cosmos DB after 3 times retry.
+- **Cause**: The throughput of the Azure Cosmos DB collection is too small, which leads to throttling, or the row data doesn't exist in Azure Cosmos DB.
- **Recommendation**: Please take the following actions to solve this problem:
- - If the error is 404, make sure that the related row data exists in the Cosmos collection.
- - If the error is throttling, please increase the Cosmos collection throughput or set it to the automatic scale.
- - If the error is request timed out, please set 'Batch size' in the Cosmos sink to smaller value, for example 1000.
+ - If the error is 404, make sure that the related row data exists in the Azure Cosmos DB collection.
+  - If the error is throttling, increase the Azure Cosmos DB collection throughput or set it to autoscale.
+  - If the error is a request timeout, set 'Batch size' in the Azure Cosmos DB sink to a smaller value, for example 1000.
### Error code: DF-Cosmos-FailToResetThroughput -- **Message**: Cosmos DB throughput scale operation cannot be performed because another scale operation is in progress, please retry after sometime.
+- **Message**: Azure Cosmos DB throughput scale operation cannot be performed because another scale operation is in progress, please retry after sometime.
- **Cause**: The throughput scale operation of the Azure Cosmos DB can't be performed because another scale operation is in progress.
- **Recommendation**: Log in to your Azure Cosmos DB account and manually change the container throughput to autoscale, or add a custom activity after the mapping data flow to reset the throughput.
This section lists common error codes and messages reported by mapping data flow
- **Message**: Either accountName or accountEndpoint should be specified. - **Cause**: Invalid account information is provided.-- **Recommendation**: In the Cosmos DB linked service, specify the account name or account endpoint.
+- **Recommendation**: In the Azure Cosmos DB linked service, specify the account name or account endpoint.
### Error code: DF-Cosmos-InvalidAccountKey
This section lists common error codes and messages reported by mapping data flow
- **Message**: Invalid connection mode. - **Cause**: An invalid connection mode is provided.-- **Recommendation**: Confirm that the supported mode is **Gateway** and **DirectHttps** in Cosmos DB settings.
+- **Recommendation**: Confirm that the supported mode is **Gateway** and **DirectHttps** in Azure Cosmos DB settings.
### Error code: DF-Cosmos-InvalidPartitionKey
This section lists common error codes and messages reported by mapping data flow
- **Recommendation**: Use the providing partition key in the Azure Cosmos DB sink settings. - **Message**: Partition key is not mapped in sink for delete and update operations. - **Cause**: An invalid partition key is provided.-- **Recommendation**: In Cosmos DB sink settings, use the right partition key that is same as your container's partition key.
+- **Recommendation**: In Azure Cosmos DB sink settings, use the right partition key that is same as your container's partition key.
### Error code: DF-Cosmos-InvalidPartitionKeyContent - **Message**: partition key should start with /. - **Cause**: An invalid partition key is provided.-- **Recommendation**: Ensure that the partition key start with `/` in Cosmos DB sink settings, for example: `/movieId`.
+- **Recommendation**: Ensure that the partition key starts with `/` in the Azure Cosmos DB sink settings, for example: `/movieId`.
### Error code: DF-Cosmos-PartitionKeyMissed
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-Cosmos-ShortTypeNotSupport -- **Message**: Short data type is not supported in Cosmos DB.-- **Cause**: The short data type is not supported in the Azure Cosmos DB.
+- **Message**: Short data type is not supported in Azure Cosmos DB.
+- **Cause**: The short data type is not supported in the Azure Cosmos DB instance.
- **Recommendation**: Add a derived column transformation to convert related columns from short to integer before using them in the Azure Cosmos DB sink transformation. ### Error code: DF-Delimited-ColumnDelimiterMissed
This section lists common error codes and messages reported by mapping data flow
- **Cause**: The SQL database's firewall setting blocks the data flow to access. - **Recommendation**: Please check the firewall setting for your SQL database, and allow Azure services and resources to access this server.
+### Error code: DF-MSSQL-InvalidCertificate
+
+- **Message**: SQL server configuration error, please either install a trusted certificate on your server or change 'encrypt' connection string setting to false and 'trustServerCertificate' connection string setting to true.
+- **Cause**: SQL server configuration error.
+- **Recommendations**: Install a trusted certificate on your SQL server, or change `encrypt` connection string setting to false and `trustServerCertificate` connection string setting to true.
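As a hypothetical illustration only (server, database, and credential values are placeholders), the two settings appear in a SQL Server connection string like this:

```powershell
# Hypothetical connection string showing the two recommended settings; all other values are placeholders.
$connectionString = "Server=mysqlserver.contoso.com;Database=mydb;User ID=myuser;Password=<password>;Encrypt=false;TrustServerCertificate=true"
```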
+++ ### Error code: DF-PGSQL-InvalidCredential - **Message**: User/password should be specified.
This section lists common error codes and messages reported by mapping data flow
- **Cause**: This error is a data flow system error or SAP server system error. - **Recommendation**: Check the error message. If it contains SAP server related error stacktrace, contact SAP admin for assistance. Otherwise, contact Microsoft support for further assistance.
+### Error code: DF-SAPODP-NotReached
+
+- **Message**: partner '.*' not reached
+- **Causes and recommendations**: This is a connectivity issue. Different causes may lead to it. Check the list below for possible cause analysis and related recommendations.
+
+ | Cause analysis | Recommendation |
+ | : | : |
+  | Your SAP server is shut down. | Check that your SAP server is started. |
+  | The IP address or port of the self-hosted integration runtime is not allowed by your SAP network security rule. | Check that the IP address and port of the self-hosted integration runtime are allowed by your SAP network security rule. |
+  | Self-hosted integration runtime proxy issue. | Check your self-hosted integration runtime proxy settings. |
+  | Incorrect input parameters (for example, a wrong SAP server name or IP address). | Check your input parameters: SAP server name and IP address. |
+
+### Error code: DF-SAPODP-DependencyNotFound
+- **Message**: Could not load file or assembly 'sapnco, Version=*
+- **Cause**: The SAP .NET connector isn't downloaded and installed on the machine where the self-hosted integration runtime runs.
+- **Recommendation**: Follow [Set up a self-hosted integration runtime](sap-change-data-capture-shir-preparation.md) to set up the self-hosted integration runtime for the SAP CDC connector.
+
+### Error code: DF-SAPODP-NoAuthForFunctionModule
+- **Message**: No REF authorization for function module RODPS_REPL_CONTEXT_GET_LIST
+- **Cause**: Lack of authorization to execute the related function module.
+- **Recommendation**: Follow this [SAP note](https://launchpad.support.sap.com/#/notes/460089) to add the required authorization profile to your SAP account.
+
+### Error code: DF-SAPODP-OOM
+
+- **Message**: No more memory available to add rows to an internal table
+- **Cause**: The SAP Table connector has limitations for big table extraction. It relies on an RFC that reads all the data from the table into the memory of the SAP system, so an out of memory (OOM) issue happens when extracting big tables.
+- **Recommendation**: Use the SAP CDC connector to do a full load directly from your source system, then move deltas through SAP Landscape Transformation Replication Server (SLT) after the init without delta is released.
+
+### Error code: DF-SAPODP-SourceNotSupportDelta
+
+- **Message**: Source .* does not support deltas
+- **Cause**: The ODP context/ODP name you specified does not support delta.
+- **Recommendation**: Enable delta mode for your SAP source, or select **Full on every run** as run mode in data flow. For more information, see this [document](https://userapps.support.sap.com/sap/support/knowledge/en/2752413).
+
+### Error code: DF-SAPODP-SAPI-LIMITATION
+
+- **Message**: Error Number 518, Source .* not found, not released or not authorized
+- **Cause**: Check whether your context is SAPI. If so, note that in the SAPI context you can only extract data via the relevant extractors for SAP tables.
+- **Recommendations**: Refer to this [document](https://userapps.support.sap.com/sap/support/knowledge/en/2646092).
+
+### Error code: DF-SAPODP-KeyColumnsNotSpecified
++
+- **Message**: Key column(s) should be specified for non-insertable operations (updates/deletes)
+- **Cause**: This error occurs when you skip selecting **Key Columns** in the sink table.
+- **Recommendations**: Allowing delete, upsert and update options requires a key column to be specified. Specify one or more columns for the row matching in sink.
+
+### Error code: DF-SAPODP-InsufficientResource
+
+- **Message**: A short dump has occurred in a database operation
+- **Cause**: The SAP system ran out of resources, which resulted in a short dump on the SAP server.
+- **Recommendations**: Contact your SAP administrator to address the problem in the SAP instance and retry.
+
+### Error code: DF-SAPODP-ExecuteFuncModuleWithPointerFailed
+
+- **Message**: Execute function module .* with pointer .* failed
+- **Cause**: SAP system issue.
+- **Recommendations**: Go to the SAP instance, check ST22 (short dump, similar to a Windows dump), and review the code where the error happened. In most cases, SAP offers hints on various possibilities for further troubleshooting.
+ ### Error code: DF-Snowflake-IncompatibleDataType - **Message**: Expression type does not match column data type, expecting VARIANT but got VARCHAR.
This section lists common error codes and messages reported by mapping data flow
- **Cause**: An invalid storage type is provided for staging. - **Recommendation**: Check the storage type of the linked service used for staging and make sure that it's Blob or Gen2.
+### Error code: DF-SQLDW-StagingStorageNotSupport
+
+- **Message**: Staging Storage with partition DNS enabled is not supported if enable staging. Please uncheck enable staging in sink using Synapse Analytics.
+- **Cause**: Staging storage with partition DNS enabled is not supported if you enable staging.
+- **Recommendations**: Uncheck **Enable staging** in sink when using Azure Synapse Analytics.
+
+### Error code: DF-SQLDW-DataTruncation
+
+- **Message**: Your target table has a column with (n)varchar or (n)varbinary type that has a smaller column length limitation than real data, please either adjust the column definition in your target table or change the source data.
+- **Cause**: Your target table has a column with varchar or varbinary type that has a smaller column length limitation than real data.
+- **Recommendations**: Adjust the column definition in your target table or change the source data.
+ ### Error code: DF-Synapse-DBNotExist - **Cause**: The database does not exist.
data-factory Encrypt Credentials Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/encrypt-credentials-self-hosted-integration-runtime.md
This example shows how to create a linked service to an on-premises SQL Server d
### Create initial linked service JSON file description
-Create a JSON file named **SqlServerLinkedService.json** in any folder with the following content:
+Create a JSON file named **SqlServerLinkedService.json** with the following content:
Replace `<servername>`, `<databasename>`, `<username>`, and `<password>` with values for your SQL Server before saving the file. Also replace `<integration runtime name>` with the name of your integration runtime.
Replace `<servername>`, `<databasename>`, `<username>`, and `<password>` with va
``` ### Encrypt credentials
-To encrypt the sensitive data from the JSON payload on an on-premises self-hosted integration runtime, run **New-AzDataFactoryV2LinkedServiceEncryptedCredential**, and pass on the JSON payload. This cmdlet ensures the credentials are encrypted using DPAPI and stored on the self-hosted integration runtime node locally. It can be run from any machine provided the **Remote access** option is enabled on the targeted self-hosted integration runtime, and PowerShell 7.0 or higher is used to execute it. The output payload containing the encrypted reference to the credential can be redirected to another JSON file (in this case 'encryptedLinkedService.json').
+To encrypt the sensitive data from the JSON payload on an on-premises self-hosted integration runtime, run **New-AzDataFactoryV2LinkedServiceEncryptedCredential**, and pass on the JSON payload. This cmdlet ensures the credentials are encrypted using DPAPI and stored on the self-hosted integration runtime node locally. The output payload containing the encrypted reference to the credential can be redirected to another JSON file (in this case 'encryptedLinkedService.json').
+
+Please ensure the following prerequisites are met:
+- Remote access option is enabled on the self-hosted integration runtime.
+- PowerShell 7.0 or higher is used to execute the cmdlet.
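To confirm the PowerShell prerequisite, you can check the version of the current session before running the cmdlet:

```powershell
# The encryption cmdlet below must run in PowerShell 7.0 or later; verify the session version first.
$PSVersionTable.PSVersion
```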
```powershell
-New-AzDataFactoryV2LinkedServiceEncryptedCredential -DataFactoryName $dataFactoryName -ResourceGroupName $ResourceGroupName -Name "SqlServerLinkedService" -DefinitionFile ".\SQLServerLinkedService.json" > encryptedSQLServerLinkedService.json
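+# 'test-selfhost-ir' is the name of the target self-hosted integration runtime; replace it with the name of your own integration runtime.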
+New-AzDataFactoryV2LinkedServiceEncryptedCredential -DataFactoryName $dataFactoryName -ResourceGroupName $ResourceGroupName -IntegrationRuntimeName 'test-selfhost-ir' -DefinitionFile ".\SQLServerLinkedService.json" > encryptedSQLServerLinkedService.json
``` ### Use the JSON with encrypted credentials
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Previously updated : 09/15/2022 Last updated : 10/06/2022 # Manage Azure Data Factory studio preview experience
UI (user interface) changes have been made to activities in the pipeline editor
#### Adding activities to the canvas > [!NOTE]
-> This experience will soon be available in the default ADF settings.
+> This experience is now available in the default ADF settings.
You now have the option to add an activity using the Add button in the bottom right corner of an activity in the pipeline editor canvas. Clicking the button will open a drop-down list of all activities that you can add.
Select an activity by using the search box or scrolling through the listed activ
#### Iteration and conditionals container view > [!NOTE]
-> This experience will soon be available in the default ADF settings.
-
+> This experience is now available in the default ADF settings.
You can now view the activities contained in iteration and conditional activities.
data-factory How To Sqldb To Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-sqldb-to-cosmosdb.md
+ Last updated 08/10/2022
Last updated 08/10/2022
This guide will explain how to take an existing normalized database schema in Azure SQL Database and convert it into an Azure Cosmos DB denormalized schema for loading into Azure Cosmos DB.
-SQL schemas are typically modeled using third normal form, resulting in normalized schemas that provide high levels of data integrity and fewer duplicate data values. Queries can join entities together across tables for reading. Cosmos DB is optimized for super-quick transactions and querying within a collection or container via denormalized schemas with data self-contained inside a document.
+SQL schemas are typically modeled using third normal form, resulting in normalized schemas that provide high levels of data integrity and fewer duplicate data values. Queries can join entities together across tables for reading. Azure Cosmos DB is optimized for super-quick transactions and querying within a collection or container via denormalized schemas with data self-contained inside a document.
Using Azure Data Factory, we'll build a pipeline that uses a single Mapping Data Flow to read from two Azure SQL Database normalized tables that contain primary and foreign keys as the entity relationship. ADF will join those tables into a single stream using the data flow Spark engine, collect joined rows into arrays and produce individual cleansed documents for insert into a new Azure Cosmos DB container.
The representative SQL query for this guide is:
FROM SalesLT.SalesOrderHeader o; ```
-The resulting Cosmos DB container will embed the inner query into a single document and look like this:
+The resulting Azure Cosmos DB container will embed the inner query into a single document and look like this:
:::image type="content" source="media/data-flow/cosmosb3.png" alt-text="Collection":::
The resulting Cosmos DB container will embed the inner query into a single docum
6. Define the source for "SourceOrderHeader". For dataset, create a new Azure SQL Database dataset that points to the ```SalesOrderHeader``` table.
-7. On the top source, add a Derived Column transformation after "SourceOrderDetails". Call the new transformation "TypeCast". We need to round the ```UnitPrice``` column and cast it to a double data type for Cosmos DB. Set the formula to: ```toDouble(round(UnitPrice,2))```.
+7. On the top source, add a Derived Column transformation after "SourceOrderDetails". Call the new transformation "TypeCast". We need to round the ```UnitPrice``` column and cast it to a double data type for Azure Cosmos DB. Set the formula to: ```toDouble(round(UnitPrice,2))```.
8. Add another derived column and call it "MakeStruct". This is where we will create a hierarchical structure to hold the values from the details table. Remember, details is a ```M:1``` relation to header. Name the new structure ```orderdetailsstruct``` and create the hierarchy in this way, setting each subcolumn to the incoming column name:
The resulting Cosmos DB container will embed the inner query into a single docum
:::image type="content" source="media/data-flow/cosmosb4.png" alt-text="Join":::
-11. Before we can create the arrays to denormalize these rows, we first need to remove unwanted columns and make sure the data values will match Cosmos DB data types.
+11. Before we can create the arrays to denormalize these rows, we first need to remove unwanted columns and make sure the data values match Azure Cosmos DB data types.
12. Add a Select transformation next and set the field mapping to look like this:
The resulting Cosmos DB container will embed the inner query into a single docum
:::image type="content" source="media/data-flow/cosmosb6.png" alt-text="Aggregate":::
-18. We're ready to finish the migration flow by adding a sink transformation. Click "new" next to dataset and add a Cosmos DB dataset that points to your Cosmos DB database. For the collection, we'll call it "orders" and it will have no schema and no documents because it will be created on the fly.
+18. We're ready to finish the migration flow by adding a sink transformation. Click "new" next to dataset and add an Azure Cosmos DB dataset that points to your Azure Cosmos DB database. For the collection, we'll call it "orders" and it will have no schema and no documents because it will be created on the fly.
19. In Sink Settings, Partition Key to ```/SalesOrderID``` and collection action to "recreate". Make sure your mapping tab looks like this:
The resulting Cosmos DB container will embed the inner query into a single docum
:::image type="content" source="media/data-flow/cosmosb8.png" alt-text="Screenshot shows the Data preview tab.":::
-If everything looks good, you are now ready to create a new pipeline, add this data flow activity to that pipeline and execute it. You can execute from debug or a triggered run. After a few minutes, you should have a new denormalized container of orders called "orders" in your Cosmos DB database.
+If everything looks good, you are now ready to create a new pipeline, add this data flow activity to that pipeline and execute it. You can execute from debug or a triggered run. After a few minutes, you should have a new denormalized container of orders called "orders" in your Azure Cosmos DB database.
## Next steps
data-factory Industry Sap Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-connectors.md
description: Overview of the SAP Connectors
-+ Last updated 08/11/2022
The following table shows the SAP connectors and in which activity scenarios the
| :-- | :-- | :-- | :-- | | |[SAP Business Warehouse Open Hub](connector-sap-business-warehouse-open-hub.md) | ✓/− | | ✓ | SAP Business Warehouse version 7.01 or higher. SAP BW/4HANA isn't supported by this connector. | |[SAP Business Warehouse via MDX](connector-sap-business-warehouse.md)| ✓/− | | ✓ | SAP Business Warehouse version 7.x. |
-| [SAP CDC (Preview)](connector-sap-change-data-capture.md) | | Γ£ô/- | | Can connect to all SAP releases supporting SAP Operational Data Provisioning (ODP). This includes most SAP ECC and SAP BW releases, as well as SAP S/4HANA, SAP BW/4HANA and SAP Landscape Transformation Replication Server (SLT). For details, follow [Overview and architecture of the SAP CDC capabilities (preview)](sap-change-data-capture-introduction-architecture.md) |
+| [SAP CDC](connector-sap-change-data-capture.md) | | ✓/- | | Can connect to all SAP releases supporting SAP Operational Data Provisioning (ODP). This includes most SAP ECC and SAP BW releases, as well as SAP S/4HANA, SAP BW/4HANA and SAP Landscape Transformation Replication Server (SLT). For details, follow [Overview and architecture of the SAP CDC capabilities](sap-change-data-capture-introduction-architecture.md) |
| [SAP Cloud for Customer (C4C)](connector-sap-cloud-for-customer.md) | ✓/✓ | | ✓ | SAP Cloud for Customer including the SAP Cloud for Sales, SAP Cloud for Service, and SAP Cloud for Social Engagement solutions. | | [SAP ECC](connector-sap-ecc.md) | ✓/− | | ✓ | SAP ECC on SAP NetWeaver version 7.0 and later. | | [SAP HANA](connector-sap-hana.md) | ✓/✓ | | ✓ | Any version of SAP HANA database |
data-factory Industry Sap Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-overview.md
description: Overview of the ADF SAP Knowledge Center and ADF SAP IP
-+ Last updated 08/11/0222
Azure Data Factory and Azure Synapse Analytics pipelines provide a collection of
## SAP connectors
-Azure Data Factory and Synapse pipelines support extracting data from the following SAP connectors
+Azure Data Factory and Synapse pipelines support extracting data using the following SAP connectors:
- SAP Business Warehouse Open Hub - SAP Business Warehouse via MDX
+- SAP CDC
- SAP Cloud for Customer - SAP ECC - SAP HANA
See [pipeline templates](solution-templates-introduction.md) for an overview of
Templates are offered for the following scenarios:
- Incrementally copy from SAP BW to ADLS Gen 2
- Incrementally copy from SAP Table to Blob
-- Dynamically copy multiple tables from SAP ECC to ADLS Gen 2
- Dynamically copy multiple tables from SAP HANA to ADLS Gen 2

For a summary of the SAP-specific templates and how to use them, see [SAP templates](industry-sap-templates.md).
data-factory Industry Sap Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-templates.md
description: Overview of the SAP templates
-+ Last updated 08/11/2022
The following table shows the templates related to SAP connectors that can be fo
| SAP Data Store | Scenario | Description |
| -- | -- | -- |
| SAP BW via Open Hub | [Incremental copy to Azure Data Lake Storage Gen 2](load-sap-bw-data.md) | Use this template to incrementally copy SAP BW data via LastRequestID watermark to ADLS Gen 2 |
-| SAP ECC | Dynamically copy tables to Azure Data Lake Storage Gen 2 | Use this template to do a full copy of list of tables from SAP ECC to ADLS Gen 2 |
| SAP HANA | Dynamically copy tables to Azure Data Lake Storage Gen 2 | Use this template to do a full copy of a list of tables from SAP HANA to ADLS Gen 2 |
| SAP Table | Incremental copy to Azure Blob Storage | Use this template to incrementally copy SAP Table data via a date timestamp watermark to Azure Blob Storage |
data-factory Parameterize Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameterize-linked-services.md
description: Learn how to parameterize linked services in Azure Data Factory and Azure Synapse Analytics pipelines, and pass dynamic values at run time. -+ Last updated 09/05/2022
All the linked service types are supported for parameterization.
- Amazon S3
- Amazon S3 Compatible Storage
- Azure Blob Storage
-- Azure Cosmos DB (SQL API)
+- Azure Cosmos DB for NoSQL
- Azure Data Explorer
- Azure Data Lake Storage Gen1
- Azure Data Lake Storage Gen2
data-factory Sap Change Data Capture Debug Shir Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-debug-shir-logs.md
Title: Debug SAP CDC connector (preview) by sending logs
+ Title: Debug issues with the SAP CDC connector by sending logs
-description: Learn how to debug issues with the Azure Data Factory SAP CDC (change data capture) connector (preview) by sending self-hosted integration runtime logs to Microsoft.
+description: Learn how to debug issues with the Azure Data Factory SAP CDC (change data capture) connector by sending self-hosted integration runtime logs to Microsoft.
+ Last updated 08/18/2022
-# Debug the SAP CDC connector by sending self-hosted integration runtime logs
+# Debug issues with the SAP CDC connector by sending self-hosted integration runtime logs
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-If you want Microsoft to debug Azure Data Factory issues with your SAP CDC connector (preview), send us your self-hosted integration runtime logs, and then contact us.
+If you want Microsoft to debug Azure Data Factory issues with your SAP CDC connector, send us your self-hosted integration runtime logs, and then contact us.
## Send logs to Microsoft
After you've uploaded and sent your self-hosted integration runtime logs, contac
## Next steps
-[SAP CDC (Change Data Capture) Connector](connector-sap-change-data-capture.md)
+[SAP CDC (Change Data Capture) Connector](connector-sap-change-data-capture.md)
data-factory Sap Change Data Capture Introduction Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-introduction-architecture.md
Title: Overview and architecture of the SAP CDC capabilities (preview)
+ Title: Overview and architecture of the SAP CDC capabilities
-description: Learn about the SAP change data capture (CDC) capabilities (preview) in Azure Data Factory and understand its architecture.
+description: Learn about the SAP change data capture (CDC) capabilities in Azure Data Factory and understand its architecture.
+ Last updated 08/18/2022
-# Overview and architecture of the SAP CDC capabilities (preview)
+# Overview and architecture of the SAP CDC capabilities
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Learn about the SAP change data capture (CDC) capabilities (preview) in Azure Data Factory and understand the architecture.
+Learn about the SAP change data capture (CDC) capabilities in Azure Data Factory and understand the architecture.
Azure Data Factory is an ETL and ELT data integration platform as a service (PaaS). For SAP data integration, Data Factory currently offers six general availability connectors:
This article provides a high-level architecture of the SAP CDC capabilities in A
## How to use the SAP CDC capabilities
-At the core of the SAP CDC capabilities is the new SAP CDC connector (preview). It can connect to all SAP systems that support ODP. This includes SAP ECC, SAP S/4HANA, SAP BW, and SAP BW/4HANA. The solution works either directly at the application layer or indirectly via an SAP Landscape Transformation Replication Server (SLT) as a proxy. It doesn't rely on watermarking to extract SAP data either fully or incrementally. The data the SAP CDC connector extracts includes not only physical tables but also logical objects that are created by using the tables. An example of a table-based object is an SAP Advanced Business Application Programming (ABAP) Core Data Services (CDS) view.
+At the core of the SAP CDC capabilities is the new SAP CDC connector. It can connect to all SAP systems that support ODP. This includes SAP ECC, SAP S/4HANA, SAP BW, and SAP BW/4HANA. The solution works either directly at the application layer or indirectly via an SAP Landscape Transformation Replication Server (SLT) as a proxy. It doesn't rely on watermarking to extract SAP data either fully or incrementally. The data the SAP CDC connector extracts includes not only physical tables but also logical objects that are created by using the tables. An example of a table-based object is an SAP Advanced Business Application Programming (ABAP) Core Data Services (CDS) view.
Use the SAP CDC connector with Data Factory features like mapping data flow activities, and tumbling window triggers for a low-latency SAP CDC replication solution in a self-managed pipeline.
data-factory Sap Change Data Capture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-management.md
Title: Manage your SAP CDC (preview) ETL process
+ Title: Manage the SAP CDC process
-description: Learn how to manage your SAP change data capture (CDC) solution (preview) in Azure Data Factory.
+description: Learn how to manage your SAP change data capture (CDC) process in Azure Data Factory.
+ Last updated 08/18/2022
-# Manage your SAP CDC (preview) ETL process
+# Manage the SAP CDC process
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-After you create a pipeline in Azure Data Factory using the SAP CDC connector (preview), it's important to manage the solution.
+After you create a pipeline and mapping data flow in Azure Data Factory using the SAP CDC connector, it's important to manage the ETL process appropriately.
## Run an SAP data replication pipeline on a recurring schedule
To monitor data extractions on SAP systems:
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-logon-tool.png" alt-text="Screenshot of the SAP Logon Tool.":::
-1. In **Subscriber**, enter the value for the **Subscriber name** property of your SAP CDC (preview) linked service. In the **Request Selection** dropdown, select **All** to show all data extractions that use the linked service.
+1. In **Subscriber**, enter the value for the **Subscriber name** property of your SAP CDC linked service. In the **Request Selection** dropdown, select **All** to show all data extractions that use the linked service.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-monitor-delta-queues.png" alt-text="Screenshot of the SAP ODQMON tool with all data extractions for a specific subscriber.":::
- You can see all registered subscriber processes in the operational delta queue (ODQ). Subscriber processes represent data extractions from Azure Data Factory copy activities that use your SAP CDC linked service. For each ODQ subscription, you can look at details to see all full and delta extractions. For each extraction, you can see individual data packages that were consumed.
+ You can see all registered subscriber processes in the operational delta queue (ODQ). Subscriber processes represent data extractions from Azure Data Factory mapping data flows that use your SAP CDC linked service. For each ODQ subscription, you can look at details to see all full and delta extractions. For each extraction, you can see individual data packages that were consumed.
-1. When Data Factory copy activities that extract SAP data are no longer needed, you should delete their ODQ subscriptions. When you delete ODQ subscriptions, SAP systems can stop tracking their subscription states and remove the unconsumed data packages from the ODQ. To delete an ODQ subscription, select the subscription and select the Delete icon.
+1. When Data Factory mapping data flows that extract SAP data are no longer needed, you should delete their ODQ subscriptions. When you delete ODQ subscriptions, SAP systems can stop tracking their subscription states and remove the unconsumed data packages from the ODQ. To delete an ODQ subscription, select the subscription and select the Delete icon.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-delete-queue-subscriptions.png" alt-text="Screenshot of the SAP ODQMON tool with the delete button highlighted for a specific queue subscription.":::
To monitor data extractions on SAP systems:
The SAP CDC connector in Data Factory reads delta changes from the SAP ODP framework. The deltas are recorded in ODQ tables.
-In scenarios in which data movement works (copy activities finish without errors), but data isn't delivered correctly (no data at all, or maybe just a subset of the expected data), you should first investigate whether the number of records provided on the SAP side match the number of rows transferred by Data Factory. If they match, the issue isn't related to Data Factory, but probably comes from an incorrect or missing configuration on the SAP side.
+In scenarios in which data movement works (mapping data flows finish without errors), but data isn't delivered correctly (no data at all, or only a subset of the expected data), first check in ODQMON whether the number of records provided on the SAP side matches the number of rows transferred by Data Factory. If they match, the issue isn't related to Data Factory, but probably comes from an incorrect or missing configuration on the SAP side.
### Troubleshoot in SAP by using ODQMON To analyze what data the SAP system has provided for your scenario, start transaction ODQMON in your SAP back-end system. If you're using SAP Landscape Transformation Replication Server (SLT) with a standalone server, start the transaction there.
-To find the ODQs that correspond to your copy activities or copy activity runs, use the filter options. In **Queue**, you can use wildcards to narrow the search. For example, you can search by the table name **EKKO**.
+To find the ODQs that correspond to your mapping data flows, use the filter options. In **Queue**, you can use wildcards to narrow the search. For example, you can search by the table name **EKKO**.
Select the **Calculate Data Volume** checkbox to see details about the number of rows and data volume (in bytes) contained in the ODQs. :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-odqmon-troubleshoot-queues.png" alt-text="Screenshot of the SAP ODQMON tool, with delta queues shown.":::
-To view the ODQ subscriptions, double-click the queue. An ODQ can have multiple subscribers, so check for the subscriber name that you entered in the Data Factory linked service. Choose the subscription that has a timestamp that most closely matches the time your copy activity ran. For delta subscriptions, the first run of the copy activity for the subscription is recorded on the SAP side.
+To view the ODQ subscriptions, double-click the queue. An ODQ can have multiple subscribers, so check for the subscriber name that you entered in the Data Factory linked service. Choose the subscription that has a timestamp that most closely matches the time your mapping data flow ran. For delta subscriptions, the first run of the mapping data flow for the subscription is recorded on the SAP side.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-odqmon-troubleshoot-subscriptions.png" alt-text="Screenshot of the SAP ODQMON tool, with delta queue subscriptions shown.":::
-In the subscription, a list of requests corresponds to copy activity runs in Data Factory. In the following figure, you see the result of four copy activity runs:
+In the subscription, a list of requests corresponds to mapping data flow runs in Data Factory. In the following figure, you see the result of four mapping data flow runs:
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-odqmon-troubleshoot-requests.png" alt-text="Screenshot of the SAP ODQMON tool with delta queue requests shown.":::
-Based on the timestamp in the first row, find the line that corresponds to the copy activity run you want to analyze. If the number of rows shown equals the number of rows read by the copy activity, you've verified that Data Factory has read and transferred the data as provided by the SAP system. In this scenario, we recommend that you consult with the team that's responsible for your SAP system.
+Based on the timestamp in the first row, find the line that corresponds to the mapping data flow run you want to analyze. If the number of rows shown equals the number of rows read by the mapping data flow, you've verified that Data Factory has read and transferred the data as provided by the SAP system. In this scenario, we recommend that you consult with the team that's responsible for your SAP system.
## Current limitations Here are current limitations of the SAP CDC connector in Data Factory: -- You can't reset or delete ODQ subscriptions in Data Factory.
+- You can't reset or delete ODQ subscriptions in Data Factory. Use the ODQMON transaction in your SAP system to do so.
- You can't use SAP hierarchies with the solution. ## Next steps
data-factory Sap Change Data Capture Prepare Linked Service Source Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prepare-linked-service-source-dataset.md
Title: Set up a linked service and dataset for the SAP CDC connector (preview)
+ Title: Set up a linked service and dataset for the SAP CDC connector
-description: Learn how to set up a linked service and source dataset to use with the SAP CDC (change data capture) connector (preview) in Azure Data Factory.
+description: Learn how to set up a linked service and source dataset to use with the SAP CDC (change data capture) connector in Azure Data Factory.
+ Last updated 08/18/2022
-# Set up a linked service and source dataset for the SAP CDC connector (preview)
+# Set up a linked service and source dataset for the SAP CDC connector
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Learn how to set up the linked service and source dataset for the SAP CDC connector (preview) in Azure Data Factory.
+Learn how to set up the linked service and source dataset for the SAP CDC connector in Azure Data Factory.
## Set up a linked service
-To set up an SAP CDC (preview) linked service:
+To set up an SAP CDC linked service:
1. In Azure Data Factory Studio, go to the Manage hub of your data factory. In the menu under **Connections**, select **Linked services**. Select **New** to create a new linked service. :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-new-linked-service.png" alt-text="Screenshot of the Manage hub in Azure Data Factory Studio, with the New linked service button highlighted.":::
-1. In **New linked service**, search for **SAP**. Select **SAP CDC (Preview)**, and then select **Continue**.
+1. In **New linked service**, search for **SAP**. Select **SAP CDC**, and then select **Continue**.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-linked-service-selection.png" alt-text="Screenshot of the linked service source selection, with SAP CDC (Preview) selected.":::
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-linked-service-selection.png" alt-text="Screenshot of the linked service source selection, with SAP CDC selected.":::
1. Set the linked service properties. Many of the properties are similar to SAP Table linked service properties. For more information, see [Linked service properties](connector-sap-table.md?tabs=data-factory#linked-service-properties).
To set up an SAP CDC (preview) linked service:
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-new-dataset.png" alt-text="Screenshot that shows creating a new pipeline in the Data Factory Studio Author hub.":::
-1. In **New dataset**, search for **SAP**. Select **SAP CDC (Preview)**, and then select **Continue**.
+1. In **New dataset**, search for **SAP**. Select **SAP CDC**, and then select **Continue**.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-selection.png" alt-text="Screenshot of the SAP CDC (Preview) dataset type in the New dataset dialog.":::
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-selection.png" alt-text="Screenshot of the SAP CDC dataset type in the New dataset dialog.":::
1. In **Set properties**, enter a name for the SAP CDC linked service data source. In **Linked service**, select the dropdown and select **New**.
To set up an SAP CDC (preview) linked service:
To enter the selections directly, select the **Edit** checkbox.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-configuration.png" alt-text="Screenshot of the SAP CDC (Preview) dataset configuration page.":::
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-configuration.png" alt-text="Screenshot of the SAP CDC dataset configuration page.":::
1. Select **OK** to create your new SAP CDC source dataset.
To set up a mapping data flow using the SAP CDC dataset as a source, follow [Tra
## Next steps
-[Debug copy activity by sending self-hosted integration runtime logs](sap-change-data-capture-debug-shir-logs.md)
+[Debug the SAP CDC connector by sending self-hosted integration runtime logs](sap-change-data-capture-debug-shir-logs.md)
data-factory Sap Change Data Capture Prerequisites Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prerequisites-configuration.md
Title: Prerequisites and setup for the SAP CDC connector (preview)
+ Title: Prerequisites and setup for the SAP CDC connector
-description: Learn about the prerequisites and setup for the SAP CDC connector (preview) in Azure Data Factory.
+description: Learn about the prerequisites and setup for the SAP CDC connector in Azure Data Factory.
+ Last updated 08/18/2022
-# Prerequisites and setup for the SAP CDC connector (preview)
+# Prerequisites and setup for the SAP CDC connector
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Learn about the prerequisites for the SAP CDC connector (preview) in Azure Data Factory and how to set up the solution in Azure Data Factory Studio.
+Learn about the prerequisites for the SAP CDC connector in Azure Data Factory and how to set up the solution in Azure Data Factory Studio.
## Prerequisites
-To preview the SAP CDC capabilities in Azure Data Factory, be able to complete these prerequisites:
+To use the SAP CDC capabilities in Azure Data Factory, complete these prerequisites:
-- In Azure Data Factory Studio, [enable the preview experience](how-to-manage-studio-preview-exp.md#how-to-enabledisable-preview-experience). - Set up SAP systems to use the [SAP Operational Data Provisioning (ODP) framework](https://help.sap.com/docs/SAP_LANDSCAPE_TRANSFORMATION_REPLICATION_SERVER/007c373fcacb4003b990c6fac29a26e4/b6e26f56fbdec259e10000000a441470.html?q=SAP%20Operational%20Data%20Provisioning%20%28ODP%29%20framework). - Be familiar with Data Factory concepts like integration runtimes, linked services, datasets, activities, data flows, pipelines, and triggers. - Set up a self-hosted integration runtime to use for the connector.-- Set up an SAP CDC (preview) linked service.-- Set up the Data Factory copy activity with an SAP CDC (preview) source dataset.-- Debug Data Factory copy activity issues by sending self-hosted integration runtime logs to Microsoft.-- Be able to run an SAP data replication pipeline frequently.-- Be able to recover a failed SAP data replication pipeline run.
+- Set up an SAP CDC linked service.
+- Debug issues with the SAP CDC connector by sending self-hosted integration runtime logs to Microsoft.
- Be familiar with monitoring data extractions on SAP systems. ## Set up SAP systems to use the SAP ODP framework
data-factory Sap Change Data Capture Shir Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-shir-preparation.md
Title: Set up a self-hosted integration runtime for the SAP CDC connector (preview)
+ Title: Set up a self-hosted integration runtime for the SAP CDC connector
-description: Learn how to create and set up a self-hosted integration runtime for your SAP change data capture (CDC) solution (preview) in Azure Data Factory.
+description: Learn how to create and set up a self-hosted integration runtime for the SAP change data capture (CDC) connector in Azure Data Factory.
+ Last updated 08/18/2022
-# Set up a self-hosted integration runtime for the SAP CDC connector (preview)
+# Set up a self-hosted integration runtime for the SAP CDC connector
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Learn how to create and set up a self-hosted integration runtime for the SAP CDC connector (preview) in Azure Data Factory.
+Learn how to create and set up a self-hosted integration runtime for the SAP CDC connector in Azure Data Factory.
-To prepare a self-hosted integration runtime to use with the SAP CDC connector (preview), complete the steps that are described in the following sections.
+To prepare a self-hosted integration runtime to use with the SAP CDC connector, complete the steps that are described in the following sections.
## Create and set up a self-hosted integration runtime
zzz.zzz.zzz.zzz sapnw01
## Next steps
-[Set up an SAP ODP linked service and source dataset](sap-change-data-capture-prepare-linked-service-source-dataset.md)
+[Set up an SAP CDC linked service and source dataset](sap-change-data-capture-prepare-linked-service-source-dataset.md)
data-factory Data Factory Azure Documentdb Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-documentdb-connector.md
+ Last updated 10/22/2021
> [!NOTE] > This article applies to version 1 of Data Factory. If you are using the current version of the Data Factory service, see [Azure Cosmos DB connector in V2](../connector-azure-cosmos-db.md).
-This article explains how to use the Copy Activity in Azure Data Factory to move data to/from Azure Cosmos DB (SQL API). It builds on the [Data Movement Activities](data-factory-data-movement-activities.md) article, which presents a general overview of data movement with the copy activity.
+This article explains how to use the Copy Activity in Azure Data Factory to move data to/from Azure Cosmos DB for NoSQL. It builds on the [Data Movement Activities](data-factory-data-movement-activities.md) article, which presents a general overview of data movement with the copy activity.
You can copy data from any supported source data store to Azure Cosmos DB or from Azure Cosmos DB to any supported sink data store. For a list of data stores supported as sources or sinks by the copy activity, see the [Supported data stores](data-factory-data-movement-activities.md#supported-data-stores-and-formats) table. > [!IMPORTANT]
-> Azure Cosmos DB connector only supports the SQL API.
+> The Azure Cosmos DB connector only supports Azure Cosmos DB for NoSQL.
-To copy data as-is to/from JSON files or another Cosmos DB collection, see [Import/Export JSON documents](#importexport-json-documents).
+To copy data as-is to/from JSON files or another Azure Cosmos DB collection, see [Import/Export JSON documents](#importexport-json-documents).
## Getting started You can create a pipeline with a copy activity that moves data to/from Azure Cosmos DB by using different tools/APIs.
Whether you use the tools or APIs, you perform the following steps to create a p
2. Create **datasets** to represent input and output data for the copy operation. 3. Create a **pipeline** with a copy activity that takes a dataset as an input and a dataset as an output.
-When you use the wizard, JSON definitions for these Data Factory entities (linked services, datasets, and the pipeline) are automatically created for you. When you use tools/APIs (except .NET API), you define these Data Factory entities by using the JSON format. For samples with JSON definitions for Data Factory entities that are used to copy data to/from Cosmos DB, see [JSON examples](#json-examples) section of this article.
+When you use the wizard, JSON definitions for these Data Factory entities (linked services, datasets, and the pipeline) are automatically created for you. When you use tools/APIs (except the .NET API), you define these Data Factory entities by using the JSON format. For samples with JSON definitions for Data Factory entities that are used to copy data to/from Azure Cosmos DB, see the [JSON examples](#json-examples) section of this article.
-The following sections provide details about JSON properties that are used to define Data Factory entities specific to Cosmos DB:
+The following sections provide details about JSON properties that are used to define Data Factory entities specific to Azure Cosmos DB:
## Linked service properties The following table provides description for JSON elements specific to Azure Cosmos DB linked service.
The following table provides description for JSON elements specific to Azure Cos
Example:
-```JSON
+```json
{ "name": "CosmosDbLinkedService", "properties": {
The typeProperties section is different for each type of dataset and provides in
| **Property** | **Description** | **Required** | | | | |
-| collectionName |Name of the Cosmos DB document collection. |Yes |
+| collectionName |Name of the Azure Cosmos DB document collection. |Yes |
Example:
-```JSON
+```json
{ "name": "PersonCosmosDbTable", "properties": {
the following properties are available in **typeProperties** section:
| **Property** | **Description** | **Allowed values** | **Required** | | | | | |
-| nestingSeparator |A special character in the source column name to indicate that nested document is needed. <br/><br/>For example above: `Name.First` in the output table produces the following JSON structure in the Cosmos DB document:<br/><br/>"Name": {<br/> "First": "John"<br/>}, |Character that is used to separate nesting levels.<br/><br/>Default value is `.` (dot). |Character that is used to separate nesting levels. <br/><br/>Default value is `.` (dot). |
-| writeBatchSize |Number of parallel requests to Azure Cosmos DB service to create documents.<br/><br/>You can fine-tune the performance when copying data to/from Cosmos DB by using this property. You can expect a better performance when you increase writeBatchSize because more parallel requests to Cosmos DB are sent. However you'll need to avoid throttling that can throw the error message: "Request rate is large".<br/><br/>Throttling is decided by a number of factors, including size of documents, number of terms in documents, indexing policy of target collection, etc. For copy operations, you can use a better collection (e.g. S3) to have the most throughput available (2,500 request units/second). |Integer |No (default: 5) |
+| nestingSeparator |A special character in the source column name to indicate that a nested document is needed. <br/><br/>For the example above, `Name.First` in the output table produces the following JSON structure in the Azure Cosmos DB document:<br/><br/>"Name": {<br/> "First": "John"<br/>}, |Character that is used to separate nesting levels.<br/><br/>Default value is `.` (dot). |No |
+| writeBatchSize |Number of parallel requests to the Azure Cosmos DB service to create documents.<br/><br/>You can fine-tune the performance when copying data to/from Azure Cosmos DB by using this property. You can expect better performance when you increase writeBatchSize because more parallel requests to Azure Cosmos DB are sent. However, you'll need to avoid throttling, which can throw the error message: "Request rate is large".<br/><br/>Throttling is determined by a number of factors, including the size of documents, the number of terms in documents, the indexing policy of the target collection, and so on. For copy operations, you can use a better collection (e.g. S3) to have the most throughput available (2,500 request units/second). |Integer |No (default: 5) |
| writeBatchTimeout |Wait time for the operation to complete before it times out. |timespan<br/><br/> Example: "00:30:00" (30 minutes). |No | ## Import/Export JSON documents
-Using this Cosmos DB connector, you can easily
+Using this Azure Cosmos DB connector, you can easily:
-* Import JSON documents from various sources into Cosmos DB, including Azure Blob, Azure Data Lake, on-premises File System or other file-based stores supported by Azure Data Factory.
-* Export JSON documents from Cosmos DB collection into various file-based stores.
-* Migrate data between two Cosmos DB collections as-is.
+* Import JSON documents from various sources into Azure Cosmos DB, including Azure Blob storage, Azure Data Lake, on-premises file system, or other file-based stores supported by Azure Data Factory.
+* Export JSON documents from Azure Cosmos DB collection into various file-based stores.
+* Migrate data between two Azure Cosmos DB collections as-is.
To achieve such a schema-agnostic copy:
-* When using copy wizard, check the **"Export as-is to JSON files or Cosmos DB collection"** option.
-* When using JSON editing, do not specify the "structure" section in Cosmos DB dataset(s) nor "nestingSeparator" property on Cosmos DB source/sink in copy activity. To import from/export to JSON files, in the file store dataset specify format type as "JsonFormat", config "filePattern" and skip the rest format settings, see [JSON format](data-factory-supported-file-and-compression-formats.md#json-format) section on details.
+* When using the copy wizard, check the **"Export as-is to JSON files or Azure Cosmos DB collection"** option.
+* When using JSON editing, do not specify the "structure" section in the Azure Cosmos DB dataset(s) or the "nestingSeparator" property on the Azure Cosmos DB source/sink in the copy activity. To import from or export to JSON files, in the file store dataset specify the format type as "JsonFormat", configure "filePattern", and skip the remaining format settings; see the [JSON format](data-factory-supported-file-and-compression-formats.md#json-format) section for details. A minimal dataset sketch follows this list.
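As a minimal sketch of such a file store dataset (the dataset, linked service, and container names are placeholders), an Azure Blob dataset for as-is import/export omits the "structure" section and sets the format type to "JsonFormat" with a "filePattern":

```json
{
    "name": "JsonDocumentsInBlob",
    "properties": {
        "type": "AzureBlob",
        "linkedServiceName": "StorageLinkedService",
        "typeProperties": {
            "folderPath": "mycontainer/documents/",
            "format": {
                "type": "JsonFormat",
                "filePattern": "setOfObjects"
            }
        },
        "availability": {
            "frequency": "Hour",
            "interval": 1
        }
    }
}
```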
## JSON examples The following examples provide sample JSON definitions that you can use to create a pipeline by using [Visual Studio](data-factory-copy-activity-tutorial-using-visual-studio.md) or [Azure PowerShell](data-factory-copy-activity-tutorial-using-powershell.md). They show how to copy data to and from Azure Cosmos DB and Azure Blob Storage. However, data can be copied **directly** from any of the sources to any of the sinks stated [here](data-factory-data-movement-activities.md#supported-data-stores-and-formats) using the Copy Activity in Azure Data Factory.
The sample copies data in Azure Cosmos DB to Azure Blob. The JSON properties use
**Azure Cosmos DB linked service:**
-```JSON
+```json
{ "name": "CosmosDbLinkedService", "properties": {
The sample copies data in Azure Cosmos DB to Azure Blob. The JSON properties use
``` **Azure Blob storage linked service:**
-```JSON
+```json
{ "name": "StorageLinkedService", "properties": {
The sample assumes you have a collection named **Person** in an Azure Cosmos DB
Setting "external": "true" and specifying externalData policy information the Azure Data Factory service that the table is external to the data factory and not produced by an activity in the data factory.
-```JSON
+```json
{ "name": "PersonCosmosDbTable", "properties": {
Setting "external": "true" and specifying externalData policy information the Az
Data is copied to a new blob every hour with the path for the blob reflecting the specific datetime with hour granularity.
-```JSON
+```json
{ "name": "PersonBlobTableOut", "properties": {
Data is copied to a new blob every hour with the path for the blob reflecting th
} } ```
-Sample JSON document in the Person collection in a Cosmos DB database:
-```JSON
+Sample JSON document in the Person collection in an Azure Cosmos DB database:
+
+```json
{ "PersonId": 2, "Name": {
Sample JSON document in the Person collection in a Cosmos DB database:
} } ```
-Cosmos DB supports querying documents using a SQL like syntax over hierarchical JSON documents.
+Azure Cosmos DB supports querying documents using a SQL-like syntax over hierarchical JSON documents.
Example:
SELECT Person.PersonId, Person.Name.First AS FirstName, Person.Name.Middle as Mi
The following pipeline copies data from the Person collection in the Azure Cosmos DB database to an Azure blob. As part of the copy activity the input and output datasets have been specified.
-```JSON
+```json
{ "name": "DocDbToBlobPipeline", "properties": {
The sample copies data from Azure blob to Azure Cosmos DB. The JSON properties u
**Azure Blob storage linked service:**
-```JSON
+```json
{ "name": "StorageLinkedService", "properties": {
The sample copies data from Azure blob to Azure Cosmos DB. The JSON properties u
``` **Azure Cosmos DB linked service:**
-```JSON
+```json
{ "name": "CosmosDbLinkedService", "properties": {
The sample copies data from Azure blob to Azure Cosmos DB. The JSON properties u
``` **Azure Blob input dataset:**
-```JSON
+```json
{ "name": "PersonBlobTableIn", "properties": {
The sample copies data from Azure blob to Azure Cosmos DB. The JSON properties u
The sample copies data to a collection named "Person".
-```JSON
+```json
{ "name": "PersonCosmosDbTableOut", "properties": {
The sample copies data to a collection named "Person".
} } ```
-The following pipeline copies data from Azure Blob to the Person collection in the Cosmos DB. As part of the copy activity the input and output datasets have been specified.
-```JSON
+The following pipeline copies data from Azure Blob storage to the Person collection in the Azure Cosmos DB instance. As part of the copy activity the input and output datasets have been specified.
+
+```json
{ "name": "BlobToDocDbPipeline", "properties": {
If the sample blob input is as follows:
``` 1,John,,Doe ```
-Then the output JSON in Cosmos DB will be as:
+Then the output JSON in Azure Cosmos DB will be:
-```JSON
+```json
{ "Id": 1, "Name": {
Azure Cosmos DB is a NoSQL store for JSON documents, where nested structures are
No. Only one collection can be specified at this time. ## Performance and Tuning
-See [Copy Activity Performance & Tuning Guide](data-factory-copy-activity-performance.md) to learn about key factors that impact performance of data movement (Copy Activity) in Azure Data Factory and various ways to optimize it.
+See [Copy Activity Performance & Tuning Guide](data-factory-copy-activity-performance.md) to learn about key factors that impact performance of data movement (Copy Activity) in Azure Data Factory and various ways to optimize it.
data-factory Data Factory Json Scripting Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-json-scripting-reference.md
Last updated 10/22/2021-+ # Azure Data Factory - JSON Scripting Reference
To define an Azure Cosmos DB dataset, set the **type** of the dataset to **Docum
For more information, see [Azure Cosmos DB connector](data-factory-azure-documentdb-connector.md#dataset-properties) article. ### Azure Cosmos DB Collection Source in Copy Activity
-If you are copying data from an Azure Cosmos DB, set the **source type** of the copy activity to **DocumentDbCollectionSource**, and specify following properties in the **source** section:
+If you are copying data from an Azure Cosmos DB instance, set the **source type** of the copy activity to **DocumentDbCollectionSource**, and specify the following properties in the **source** section:
| **Property** | **Description** | **Allowed values** | **Required** |
If you are copying data to Azure Cosmos DB, set the **sink type** of the copy ac
| **Property** | **Description** | **Allowed values** | **Required** | | | | | |
-| nestingSeparator |A special character in the source column name to indicate that nested document is needed. <br/><br/>For example above: `Name.First` in the output table produces the following JSON structure in the Cosmos DB document:<br/><br/>"Name": {<br/> "First": "John"<br/>}, |Character that is used to separate nesting levels.<br/><br/>Default value is `.` (dot). |Character that is used to separate nesting levels. <br/><br/>Default value is `.` (dot). |
+| nestingSeparator |A special character in the source column name to indicate that a nested document is needed. <br/><br/>For the example above, `Name.First` in the output table produces the following JSON structure in the Azure Cosmos DB document:<br/><br/>"Name": {<br/> "First": "John"<br/>}, |Character that is used to separate nesting levels.<br/><br/>Default value is `.` (dot). |No |
| writeBatchSize |Number of parallel requests to the Azure Cosmos DB service to create documents.<br/><br/>You can fine-tune the performance when copying data to/from Azure Cosmos DB by using this property. You can expect better performance when you increase writeBatchSize because more parallel requests to Azure Cosmos DB are sent. However, you'll need to avoid throttling, which can throw the error message: "Request rate is large".<br/><br/>Throttling is determined by a number of factors, including the size of documents, the number of terms in documents, the indexing policy of the target collection, and so on. For copy operations, you can use a better collection (for example, S3) to have the most throughput available (2,500 request units/second). |Integer |No (default: 5) | | writeBatchTimeout |Wait time for the operation to complete before it times out. |timespan<br/><br/> Example: "00:30:00" (30 minutes). |No |
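Putting the source and sink properties together, the following is a minimal sketch of the **typeProperties** section of a copy activity that reads from and writes to Azure Cosmos DB collections (the query is illustrative, and the surrounding activity and pipeline scaffolding is omitted):

```json
{
    "typeProperties": {
        "source": {
            "type": "DocumentDbCollectionSource",
            "query": "SELECT c.id, c.Name.First AS FirstName FROM c"
        },
        "sink": {
            "type": "DocumentDbCollectionSink",
            "nestingSeparator": ".",
            "writeBatchSize": 5,
            "writeBatchTimeout": "00:30:00"
        }
    }
}
```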
For more information, see [SFTP connector](data-factory-sftp-connector.md#copy-a
## HTTP ### Linked service
-To define a HTTP linked service, set the **type** of the linked service to **Http**, and specify following properties in the **typeProperties** section:
+To define an HTTP linked service, set the **type** of the linked service to **Http**, and specify the following properties in the **typeProperties** section:
| Property | Description | Required | | | | |
This linked service links your data factory to an on-premises HTTP web server. I
For more information, see [HTTP connector](data-factory-http-connector.md#linked-service-properties) article. ### Dataset
-To define a HTTP dataset, set the **type** of the dataset to **Http**, and specify the following properties in the **typeProperties** section:
+To define an HTTP dataset, set the **type** of the dataset to **Http**, and specify the following properties in the **typeProperties** section:
| Property | Description | Required | |: |: |: |
To define a HTTP dataset, set the **type** of the dataset to **Http**, and speci
For more information, see [HTTP connector](data-factory-http-connector.md#dataset-properties) article. ### HTTP Source in Copy Activity
-If you are copying data from a HTTP source, set the **source type** of the copy activity to **HttpSource**, and specify following properties in the **source** section:
+If you are copying data from an HTTP source, set the **source type** of the copy activity to **HttpSource**, and specify the following properties in the **source** section:
| Property | Description | Required | | -- | -- | -- |
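As a minimal sketch of the **source** section of such a copy activity (the timeout value is illustrative):

```json
{
    "source": {
        "type": "HttpSource",
        "httpRequestTimeout": "00:01:00"
    }
}
```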
You can specify the following properties in a U-SQL Activity JSON definition. Th
For more information, see [Data Lake Analytics U-SQL Activity](data-factory-usql-activity.md). ## Stored Procedure Activity
-You can specify the following properties in a Stored Procedure Activity JSON definition. The type property for the activity must be: **SqlServerStoredProcedure**. You must create an one of the following linked services and specify the name of the linked service as a value for the **linkedServiceName** property:
+You can specify the following properties in a Stored Procedure Activity JSON definition. The type property for the activity must be: **SqlServerStoredProcedure**. You must create one of the following linked services and specify the name of the linked service as a value for the **linkedServiceName** property (a minimal activity sketch follows the list):
- SQL Server - Azure SQL Database
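The following is a minimal sketch of such an activity (the activity, linked service, procedure, and parameter names are placeholders, and the pipeline scaffolding around the activity is omitted):

```json
{
    "name": "SprocActivitySample",
    "type": "SqlServerStoredProcedure",
    "linkedServiceName": "AzureSqlLinkedService",
    "typeProperties": {
        "storedProcedureName": "usp_sample",
        "storedProcedureParameters": {
            "DateTime": "$$Text.Format('{0:yyyy-MM-dd HH:mm:ss}', SliceStart)"
        }
    },
    "policy": {
        "timeout": "01:00:00",
        "retry": 3
    }
}
```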
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
Previously updated : 09/27/2022 Last updated : 10/10/2022 # What's new in Azure Data Factory
This page is updated monthly, so revisit it regularly. For older months' update
Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly update videos.
+## September 2022
+
+### Data flow
+
+- Amazon S3 source connector added [Learn more](connector-amazon-simple-storage-service.md?tabs=data-factory)
+- Google Sheets REST-based connector added as Source (Preview) [Learn more](connector-google-sheets.md?tabs=data-factory)
+- Maximum column optimization in dataflow [Learn more](format-delimited-text.md#mapping-data-flow-properties)
+- SAP Change Data Capture capabilities in Mapping Data Flow (Preview) - Extract and transform data changes from SAP systems for a more efficient data refresh [Learn more](connector-sap-change-data-capture.md#transform-data-with-the-sap-cdc-connector)
+- Writing data to a lookup field via alternative keys supported in Dynamics 365/CRM connectors for mapping data flows [Learn more](connector-dynamics-crm-office-365.md?tabs=data-factory#writing-data-to-a-lookup-field-via-alternative-keys)
+
+### Data movement
+
+Support to convert Oracle NUMBER type to corresponding integer in source [Learn more](connector-oracle.md?tabs=data-factory#oracle-as-source)
+
+### Monitoring
+
+- Additional monitoring improvements in Azure Data Factory [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/further-adf-monitoring-improvements/ba-p/3607669)
+ - Monitoring loading improvements - pipeline re-run groupings data fetched only when expanded
+ - Pagination added to pipeline activity runs view to show all activity records in pipeline run
+ - Monitoring consumption improvement - loading icon added to know when consumption report is fully calculated
+ - Additional sorting columns in monitoring - sorting added for Pipeline name, Run End, and Status
+ - Time-zone settings now saved in monitoring
+- Gantt chart view now supported in IR monitoring [Learn more](monitor-integration-runtime.md)
+
+### Orchestration
+
DELETE method in the Web activity now supports sending a body with the HTTP request [Learn more](control-flow-web-activity.md#type-properties)
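For example, a Web activity definition can now combine the DELETE method with a request body, along the lines of the following sketch (the activity name, URL, and payload are placeholders):

```json
{
    "name": "DeleteRecord",
    "type": "WebActivity",
    "typeProperties": {
        "url": "https://contoso.example/api/records/42",
        "method": "DELETE",
        "body": {
            "reason": "retention policy cleanup"
        }
    }
}
```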
+
+### User interface
+
+- Native UI support of parameterization added for 6 additional linked services - SAP ODP, ODBC, Microsoft Access, Informix, Snowflake, and DB2 [Learn more](parameterize-linked-services.md?tabs=data-factory#supported-linked-service-types)
+- Pipeline designer enhancements added in Studio Preview experience - users can view workflow inside pipeline objects like For Each, If Then, etc. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/azure-data-factory-updated-pipeline-designer/ba-p/3618755)
++ ## August 2022 ### Data flow
When CI/CD integrating ARM template, instead of turning off all triggers, it can
### Data flow -- Asana connector added as source. [Learn more](connector-asana.md)-- Three new data transformation functions now supported. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/3-new-data-transformation-functions-in-adf/ba-p/3582738)
+- Asana connector added as source [Learn more](connector-asana.md)
+- Three new data transformation functions now supported [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/3-new-data-transformation-functions-in-adf/ba-p/3582738)
- [collectUnique()](data-flow-expressions-usage.md#collectUnique) - Create a new collection of unique values in an array. - [substringIndex()](data-flow-expressions-usage.md#substringIndex) - Extract the substring before n occurrences of a delimiter. - [topN()](data-flow-expressions-usage.md#topN) - Return the top n results after sorting your data.-- Refetch from source available in Refresh for data source change scenarios. [Learn more](concepts-data-flow-debug-mode.md#data-preview)-- User defined functions (GA) - Create reusable and customized expressions to avoid building complex logic over and over. [Learn more](concepts-data-flow-udf.md) [Video](https://www.youtube.com/watch?v=ZFTVoe8eeOc&t=170s)-- Easier configuration on data flow runtime - choose compute size among Small, Medium and Large to pre-configure all integration runtime settings. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-makes-it-easy-to-select-azure-ir-size-for-data-flows/ba-p/3578033)
+- Refetch from source available in Refresh for data source change scenarios [Learn more](concepts-data-flow-debug-mode.md#data-preview)
+- User defined functions (GA) - Create reusable and customized expressions to avoid building complex logic over and over [Learn more](concepts-data-flow-udf.md) [Video](https://www.youtube.com/watch?v=ZFTVoe8eeOc&t=170s)
+- Easier configuration on data flow runtime - choose compute size among Small, Medium and Large to pre-configure all integration runtime settings [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-makes-it-easy-to-select-azure-ir-size-for-data-flows/ba-p/3578033)
### Continuous integration and continuous delivery (CI/CD) Include Global parameters supported in ARM template. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/ci-cd-improvement-using-global-parameters-in-azure-data-factory/ba-p/3557265#M665) ### Developer productivity
-Be a part of Azure Data Factory studio preview features - Experience the latest Azure Data Factory capabilities and be the first to share your feedback. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-the-azure-data-factory-studio-preview-experience/ba-p/3563880)
+Be a part of Azure Data Factory studio preview features - Experience the latest Azure Data Factory capabilities and be the first to share your feedback [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-the-azure-data-factory-studio-preview-experience/ba-p/3563880)
### Video summary
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
databox Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
ddos-protection Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/alerts.md
Title: View and configure DDoS protection alerts for Azure DDoS Protection Standard
-description: Learn how to view and configure DDoS protection alerts for Azure DDoS Protection Standard.
+ Title: 'Tutorial: View and configure Azure DDoS Protection alerts'
+description: Learn how to view and configure DDoS protection alerts for Azure DDoS Protection.
documentationcenter: na na+ Previously updated : 06/07/2022 Last updated : 10/12/2022 -
-# View and configure DDoS protection alerts
+# Tutorial: View and configure Azure DDoS Protection alerts
-Azure DDoS Protection standard provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
+Azure DDoS Protection provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
In this tutorial, you'll learn how to:
In this tutorial, you'll learn how to:
## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Before you can complete the steps in this tutorial, you must first create a [Azure DDoS Standard protection plan](manage-ddos-protection.md) and DDoS Protection Standard must be enabled on a virtual network.
+- Before you can complete the steps in this tutorial, you must first create an [Azure DDoS Protection plan](manage-ddos-protection.md). DDoS Network Protection must be enabled on a virtual network or DDoS IP Protection must be enabled on a public IP address.
- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.   ## Configure alerts through Azure Monitor
-With these templates, you will be able to configure alerts for all public IP addresses that you have enabled diagnostic logging on. Hence in order to use these alert templates, you will first need a Log Analytics Workspace with diagnostic settings enabled. See [View and configure DDoS diagnostic logging](diagnostic-logging.md).
+With these templates, you can configure alerts for all public IP addresses that you have enabled diagnostic logging on. To use these alert templates, you first need a Log Analytics workspace with diagnostic settings enabled. See [View and configure Azure DDoS Protection diagnostic logging](diagnostic-logging.md).
### Azure Monitor alert rule
This [Azure Monitor alert rule](https://aka.ms/DDOSmitigationstatus) will run a
### Azure Monitor alert rule with Logic App
-This [template](https://aka.ms/ddosalert) deploys the necessary components of an enriched DDoS mitigation alert: Azure Monitor alert rule, action group, and Logic App. The result of the process is an email alert with details about the IP address under attack, including information about the resource associated with the IP. The owner of the resource is added as a recipient of the email, along with the security team. A basic application availability test is also performed and the results are included in the email alert.
+This [DDoS Mitigation Alert Enrichment template](https://aka.ms/ddosalert) deploys the necessary components of an enriched DDoS mitigation alert: Azure Monitor alert rule, action group, and Logic App. The result of the process is an email alert with details about the IP address under attack, including information about the resource associated with the IP. The owner of the resource is added as a recipient of the email, along with the security team. A basic application availability test is also performed and the results are included in the email alert.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FDDoS%2520Mitigation%2520Alert%2520Enrichment%2FEnrich-DDoSAlert.json) ## Configure alerts through portal
-You can select any of the available DDoS protection metrics to alert you when thereΓÇÖs an active mitigation during an attack, using the Azure Monitor alert configuration.
+You can select any of the available Azure DDoS Protection metrics to alert you when there's an active mitigation during an attack, using the Azure Monitor alert configuration.
1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to your DDoS Protection Plan.
You can select any of the available DDoS protection metrics to alert you when th
Within a few minutes of attack detection, you should receive an email from Azure Monitor metrics that looks similar to the following picture:
-![Attack alert](./media/manage-ddos-protection/ddos-alert.png)
You can also learn more about [configuring webhooks](../azure-monitor/alerts/alerts-webhooks.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [logic apps](../logic-apps/logic-apps-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for creating alerts.
There are two specific alerts that you will see for any DDoS attack detection an
- **DDoS Attack mitigated for Public IP**: This alert is generated when an attack on the public IP address has been mitigated. To view the alerts, open **Defender for Cloud** in the Azure portal and select **Security alerts**. Under **Threat Protection**, select **Security alerts**. The following screenshot shows an example of the DDoS attack alerts.
-![DDoS Alert in Microsoft Defender for Cloud](./media/manage-ddos-protection/ddos-alert-asc.png)
-The alerts include general information about the public IP address thatΓÇÖs under attack, geo and threat intelligence information, and remediations steps.
+The alerts include general information about the public IP address that's under attack, geo and threat intelligence information, and remediation steps.
## Validate and test
-To simulate a DDoS attack to validate your alerts, see [Validate DDoS detection](test-through-simulations.md).
+To simulate a DDoS attack to validate your alerts, see [Validate Azure DDoS Protection detection](test-through-simulations.md).
## Next steps
ddos-protection Ddos Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-disaster-recovery-guidance.md
Title: Azure DDoS Protection Standard business continuity | Microsoft Docs
-description: Learn what to do in the event of an Azure service disruption impacting Azure DDoS Protection Standard.
+ Title: Azure DDoS Protection business continuity | Microsoft Docs
+description: Learn what to do in the event of an Azure service disruption impacting Azure DDoS Protection.
documentationcenter: na na+ Previously updated : 04/16/2021 Last updated : 10/12/2022
-# Azure DDoS Protection Standard ΓÇô business continuity
+# Azure DDoS Protection ΓÇô business continuity
-Business continuity and disaster recovery in Azure DDoS Protection Standard enables your business to continue operating in the face of a disruption. This article discusses availability (intra-region) and disaster recovery.
+Business continuity and disaster recovery in Azure DDoS Protection enables your business to continue operating in the face of a disruption. This article discusses availability (intra-region) and disaster recovery.
## Overview
-Azure DDoS Protection Standard protects public IP addresses in virtual networks. Protection is simple to enable on any new or existing virtual network and does not require any application or resource changes.
+Azure DDoS Protection protects public IP addresses in virtual networks. Protection is simple to enable on any new or existing virtual network and does not require any application or resource changes.
A Virtual Network (VNet) is a logical representation of your network in the cloud. VNets serve as a trust boundary to host your resources such as Azure Application Gateway, Azure Firewall, and Azure Virtual Machines. A VNet is created within the scope of a region. You can *create* VNets with the same address space in two different regions (for example, US East and US West), but because they have the same address space, you can't connect them together.
ddos-protection Ddos Protection Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-features.md
+
+ Title: Azure DDoS Protection features
+description: Learn Azure DDoS Protection features
+
+documentationcenter: na
+++
+ na
++ Last updated : 10/12/2022++
+# Azure DDoS Protection features
+
+The following sections outline the key features of the Azure DDoS Protection service.
+
+## Always-on traffic monitoring
+
+Azure DDoS Protection monitors actual traffic utilization and constantly compares it against the thresholds defined in the DDoS Policy. When the traffic threshold is exceeded, DDoS mitigation is initiated automatically. When traffic returns below the thresholds, the mitigation is stopped.
++
+During mitigation, traffic sent to the protected resource is redirected by the DDoS protection service and several checks are performed, such as:
+
+- Ensure packets conform to internet specifications and are not malformed.
+- Interact with the client to determine if the traffic is potentially a spoofed packet (for example, by using SYN Auth or SYN Cookie, or by dropping a packet for the source to retransmit it).
+- Rate-limit packets, if no other enforcement method can be performed.
+
+Azure DDoS Protection drops attack traffic and forwards the remaining traffic to its intended destination. Within a few minutes of attack detection, you are notified using Azure Monitor metrics. By configuring logging on DDoS Protection telemetry, you can write the logs to available options for future analysis. Metric data in Azure Monitor for DDoS Protection is retained for 30 days.
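As a minimal sketch of how that logging might be configured (the resource IDs are placeholders and the Log Analytics destination is one possible option among several, not a requirement of the service), a diagnostic setting can be attached to a protected public IP with the Azure CLI:

```azurecli
# Sketch only: send DDoS Protection logs and metrics for a protected public IP
# to a Log Analytics workspace for future analysis. Replace the IDs with your own.
az monitor diagnostic-settings create \
  --name "ddos-diagnostics" \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/publicIPAddresses/<public-ip-name>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"category":"DDoSProtectionNotifications","enabled":true},{"category":"DDoSMitigationFlowLogs","enabled":true},{"category":"DDoSMitigationReports","enabled":true}]' \
  --metrics '[{"category":"AllMetrics","enabled":true}]'
```

The same diagnostic setting could instead target an event hub or a storage account, depending on where you want to keep the data.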
+
+## Adaptive real time tuning
+
+The complexity of attacks (for example, multi-vector DDoS attacks) and the application-specific behaviors of tenants call for per-customer, tailored protection policies. The service accomplishes this by using two insights:
+
+- Automatic learning of per-customer (per-Public IP) traffic patterns for Layer 3 and 4.
+
+- Minimizing false positives, considering that the scale of Azure allows it to absorb a significant amount of traffic.
++
+## DDoS Protection telemetry, monitoring, and alerting
+
+Azure DDoS Protection exposes rich telemetry via [Azure Monitor](../azure-monitor/overview.md). You can configure alerts for any of the Azure Monitor metrics that DDoS Protection uses. You can integrate logging with Splunk (Azure Event Hubs), Azure Monitor logs, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
+
+### Azure DDoS Protection mitigation policies
+
+In the Azure portal, select **Monitor** > **Metrics**. In the **Metrics** pane, select the resource group, select a resource type of **Public IP Address**, and select your Azure public IP address. DDoS metrics are visible in the **Available metrics** pane.
+
+DDoS Protection applies three auto-tuned mitigation policies (TCP SYN, TCP, and UDP) for each public IP of the protected resource, in the virtual network that has DDoS enabled. You can view the policy thresholds by selecting the metric **Inbound packets to trigger DDoS mitigation**.
++
+The policy thresholds are auto-configured via machine learning-based network traffic profiling. DDoS mitigation occurs for an IP address under attack only when the policy threshold is exceeded.
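Outside the portal, the same per-IP metrics can be discovered with the Azure CLI. A hedged sketch follows; the public IP resource ID is a placeholder, and the internal metric names are best confirmed from the command output rather than assumed:

```azurecli
# Sketch only: list the metric definitions available on a protected public IP,
# including the policy-threshold metrics surfaced as
# "Inbound ... packets to trigger DDoS mitigation" in the portal.
PIP_ID="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/publicIPAddresses/<public-ip-name>"

az monitor metrics list-definitions --resource "$PIP_ID" --output table
```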
+
+### Metric for an IP address under DDoS attack
+
+If the public IP address is under attack, the value for the metric **Under DDoS attack or not** changes to 1 as DDoS Protection performs mitigation on the attack traffic.
++
+We recommend configuring an alert on this metric. You'll then be notified when there's an active DDoS mitigation performed on your public IP address.
+
+For more information, see [Manage Azure DDoS Protection using the Azure portal](manage-ddos-protection.md).
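A minimal CLI sketch of such an alert is shown below. It assumes the internal name of the **Under DDoS attack or not** metric is `IfUnderDDoSAttack` (confirm with `az monitor metrics list-definitions`) and that an action group already exists; all resource IDs are placeholders:

```azurecli
# Sketch only: raise an alert when DDoS mitigation becomes active on a public IP.
az monitor metrics alert create \
  --name "ddos-under-attack" \
  --resource-group "<rg>" \
  --scopes "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/publicIPAddresses/<public-ip-name>" \
  --condition "max IfUnderDDoSAttack > 0" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/microsoft.insights/actionGroups/<action-group-name>" \
  --description "DDoS mitigation is active on this public IP"
```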
+
+## Web application firewall for resource attacks
+
+Specific to resource attacks at the application layer, you should configure a web application firewall (WAF) to help secure web applications. A WAF inspects inbound web traffic to block SQL injections, cross-site scripting, DDoS, and other Layer 7 attacks. Azure provides [WAF as a feature of Application Gateway](../web-application-firewall/ag/ag-overview.md) for centralized protection of your web applications from common exploits and vulnerabilities. There are other WAF offerings available from Azure partners that might be more suitable for your needs via the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=WAF&page=1).
+
+Even web application firewalls are susceptible to volumetric and state exhaustion attacks. We strongly recommend enabling DDoS Protection on the WAF virtual network to help protect from volumetric and protocol attacks. For more information, see the [Azure DDoS Protection reference architectures](ddos-protection-reference-architectures.md) section.
+
+## Protection Planning
+
+Planning and preparation are crucial to understand how a system will perform during a DDoS attack. Designing an incident management response plan is part of this effort.
+
+If you have DDoS Protection, make sure that it's enabled on the virtual network of internet-facing endpoints. Configuring DDoS alerts helps you constantly watch for any potential attacks on your infrastructure.
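One way to check and enable this from the command line is sketched below. The virtual network and plan names are placeholders, and the `enableDdosProtection` and `ddosProtectionPlan` property names are assumptions about the VNet resource schema rather than values taken from this article:

```azurecli
# Sketch only: check whether DDoS protection is enabled on an internet-facing virtual network...
az network vnet show \
  --resource-group "<rg>" \
  --name "<vnet-name>" \
  --query "{ddosEnabled: enableDdosProtection, plan: ddosProtectionPlan.id}"

# ...and enable it with an existing DDoS protection plan if it isn't.
az network vnet update \
  --resource-group "<rg>" \
  --name "<vnet-name>" \
  --ddos-protection true \
  --ddos-protection-plan "<ddos-plan-name-or-id>"
```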
+
+Monitor your applications independently. Understand the normal behavior of an application. Prepare to act if the application is not behaving as expected during a DDoS attack.
+
+Learn how your services will respond to an attack by [testing through simulations](test-through-simulations.md).
+
+## Next steps
+
+- Learn how to [create an Azure DDoS Protection plan](manage-ddos-protection.md).
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
Title: Azure DDoS Protection Standard Overview
-description: Learn how the Azure DDoS Protection Standard, when combined with application design best practices, provides defense against DDoS attacks.
+ Title: Azure DDoS Protection Overview
+description: Learn how Azure DDoS Protection, when combined with application design best practices, provides defense against DDoS attacks.
documentationcenter: na na+ Previously updated : 08/17/2022 Last updated : 10/12/2022 -
-# What is Azure DDoS Protection Standard?
+# What is Azure DDoS Protection?
Distributed denial of service (DDoS) attacks are some of the largest availability and security concerns facing customers that are moving their applications to the cloud. A DDoS attack attempts to exhaust an application's resources, making the application unavailable to legitimate users. DDoS attacks can be targeted at any endpoint that is publicly reachable through the internet.
-Azure DDoS Protection Standard, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network. Protection is simple to enable on any new or existing virtual network, and it requires no application or resource changes.
+Azure DDoS Protection, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network. Protection is simple to enable on any new or existing virtual network, and it requires no application or resource changes.
## Key benefits ### Always-on traffic monitoring
- Your application traffic patterns are monitored 24 hours a day, 7 days a week, looking for indicators of DDoS attacks. DDoS Protection Standard instantly and automatically mitigates the attack, once it's detected.
+ Your application traffic patterns are monitored 24 hours a day, 7 days a week, looking for indicators of DDoS attacks. Azure DDoS Protection instantly and automatically mitigates the attack, once it's detected.
### Adaptive real time tuning Intelligent traffic profiling learns your application's traffic over time, and selects and updates the profile that is the most suitable for your service. The profile adjusts as traffic changes over time. ### DDoS Protection telemetry, monitoring, and alerting
-DDoS Protection Standard applies three auto-tuned mitigation policies (TCP SYN, TCP, and UDP) for each public IP of the protected resource, in the virtual network that has DDoS enabled. The policy thresholds are auto-configured via machine learning-based network traffic profiling. DDoS mitigation occurs for an IP address under attack only when the policy threshold is exceeded.
+Azure DDoS Protection applies three auto-tuned mitigation policies (TCP SYN, TCP, and UDP) for each public IP of the protected resource, in the virtual network that has DDoS enabled. The policy thresholds are auto-configured via machine learning-based network traffic profiling. DDoS mitigation occurs for an IP address under attack only when the policy threshold is exceeded.
### Azure DDoS Rapid Response
- During an active attack, Azure DDoS Protection Standard customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack and post-attack analysis. For more information, see [Azure DDoS Rapid Response](ddos-rapid-response.md).
+ During an active attack, Azure DDoS Protection customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack and post-attack analysis. For more information, see [Azure DDoS Rapid Response](ddos-rapid-response.md).
-## SKUs
-
-Azure DDoS Protection has one available SKU named DDoS Protection Standard. For more information about configuring DDoS Protection Standard, see [Quickstart: Create and configure Azure DDoS Protection Standard](manage-ddos-protection.md).
+## SKU
-The following table shows features.
+Azure DDoS Protection is offered in two SKUs: DDoS IP Protection and DDoS Network Protection. For more information about the SKUs, see [SKU comparison](ddos-protection-sku-comparison.md).
-| Feature | DDoS infrastructure protection | DDoS Protection Standard |
-||||
-| Active traffic monitoring & always on detection| Yes | Yes|
-| Automatic attack mitigation | Yes | Yes |
-| Availability guarantee| Not available | Yes |
-| Cost protection | Not available | Yes |
-| Application based mitigation policies | Not available | Yes|
-| Metrics & alerts | Not available | Yes |
-| Mitigation reports | Not available | Yes |
-| Mitigation flow logs| Not available | Yes|
-| Mitigation policy customizations | Not available | Yes|
-| DDoS rapid response support | Not available| Yes|
-
-## Features
### Native platform integration
- Natively integrated into Azure. Includes configuration through the Azure portal. DDoS Protection Standard understands your resources and resource configuration.
+ Natively integrated into Azure. Includes configuration through the Azure portal. Azure DDoS Protection understands your resources and resource configuration.
+ ### Turnkey protection
-Simplified configuration immediately protects all resources on a virtual network as soon as DDoS Protection Standard is enabled. No intervention or user definition is required.
+Simplified configuration immediately protects all resources on a virtual network as soon as DDoS Network Protection is enabled. No intervention or user definition is required. Similarly, simplified configuration immediately protects a public IP resource when DDoS IP Protection is enabled for it.
-### Multi-Layered protection:
-When deployed with a web application firewall (WAF), DDoS Protection Standard protects both at the network layer (Layer 3 and 4, offered by Azure DDoS Protection Standard) and at the application layer (Layer 7, offered by a WAF). WAF offerings include Azure [Application Gateway WAF SKU](../web-application-firewall/ag/ag-overview.md?toc=/azure/virtual-network/toc.json) and third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).
+### Multi-Layered protection
+When deployed with a web application firewall (WAF), Azure DDoS Protection protects both at the network layer (Layer 3 and 4, offered by Azure DDoS Protection) and at the application layer (Layer 7, offered by a WAF). WAF offerings include Azure [Application Gateway WAF SKU](../web-application-firewall/ag/ag-overview.md?toc=/azure/virtual-network/toc.json) and third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).
### Extensive mitigation scale All L3/L4 attack vectors can be mitigated, with global capacity, to protect against the largest known DDoS attacks.+ ### Attack analytics Get detailed reports in five-minute increments during an attack, and a complete summary after the attack ends. Stream mitigation flow logs to [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection) or an offline security information and event management (SIEM) system for near real-time monitoring during an attack. See [View and configure DDoS diagnostic logging](diagnostic-logging.md) to learn more.
Get detailed reports in five-minute increments during an attack, and a complete
### Cost guarantee Receive data-transfer and application scale-out service credit for resource costs incurred as a result of documented DDoS attacks. -- ## Architecture
-DDoS Protection Standard is designed for [services that are deployed in a virtual network](../virtual-network/virtual-network-for-azure-services.md). For other services, the default infrastructure-level DDoS protection applies, which defends against common network-layer attacks. To learn more about supported architectures, see [DDoS Protection reference architectures](./ddos-protection-reference-architectures.md).
+Azure DDoS Protection is designed for [services that are deployed in a virtual network](../virtual-network/virtual-network-for-azure-services.md). For other services, the default infrastructure-level DDoS protection applies, which defends against common network-layer attacks. To learn more about supported architectures, see [DDoS Protection reference architectures](./ddos-protection-reference-architectures.md).
+ ## Pricing
-Under a tenant, a single DDoS protection plan can be used across multiple subscriptions, so there's no need to create more than one DDoS protection plan.
+For DDoS Network Protection, under a tenant, a single DDoS protection plan can be used across multiple subscriptions, so there's no need to create more than one DDoS protection plan.
+For DDoS IP Protection, there's no need to create a DDoS protection plan. Customers can enable DDoS on any public IP resource.
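To illustrate the single-plan model for DDoS Network Protection, a hedged CLI sketch follows: the plan is created once and then referenced by its full resource ID from virtual networks in other subscriptions of the same tenant. All names and IDs below are placeholders:

```azurecli
# Sketch only: create one DDoS protection plan for the tenant...
az network ddos-protection create \
  --resource-group "<shared-rg>" \
  --name "<ddos-plan-name>"

# ...then reference it by resource ID from a virtual network in a different subscription.
az network vnet update \
  --subscription "<other-subscription-id>" \
  --resource-group "<workload-rg>" \
  --name "<vnet-name>" \
  --ddos-protection true \
  --ddos-protection-plan "/subscriptions/<plan-subscription-id>/resourceGroups/<shared-rg>/providers/Microsoft.Network/ddosProtectionPlans/<ddos-plan-name>"
```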
-To learn about Azure DDoS Protection Standard pricing, see [Azure DDoS Protection Standard pricing](https://azure.microsoft.com/pricing/details/ddos-protection/).
+To learn about Azure DDoS Protection pricing, see [Azure DDoS Protection pricing](https://azure.microsoft.com/pricing/details/ddos-protection/).
## DDoS Protection FAQ
ddos-protection Ddos Protection Partner Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-partner-onboarding.md
Title: Partnering with Azure DDoS Protection Standard
-description: "Understand partnering opportunities enabled by Azure DDoS Protection Standard."
+ Title: Partnering with Azure DDoS Protection
+description: "Understand partnering opportunities enabled by Azure DDoS Protection."
+ documentationcenter: na Previously updated : 06/07/2022 Last updated : 10/12/2022
-# Partnering with Azure DDoS Protection Standard
-This article describes partnering opportunities enabled by the Azure DDoS Protection Standard. This article is designed to help product managers and business development roles understand the investment paths and provide insight into the partnering value propositions.
+# Partnering with Azure DDoS Protection
+This article describes partnering opportunities enabled by Azure DDoS Protection. It's designed to help product managers and business development roles understand the investment paths and provide insight into the partnering value propositions.
## Background
-Distributed denial of service (DDoS) attacks are one of the top availability and security concerns voiced by customers moving their applications to the cloud. With extortion and hacktivism being the common motivations behind DDoS attacks, they have been consistently increasing in type, scale, and frequency of occurrence as they are relatively easy and cheap to launch.
+Distributed denial of service (DDoS) attacks are one of the top availability and security concerns voiced by customers moving their applications to the cloud. With extortion and hacktivism being the common motivations behind DDoS attacks, they have been consistently increasing in type, scale, and frequency of occurrence as they're relatively easy and cheap to launch.
Azure DDoS Protection provides countermeasures against the most sophisticated DDoS threats, leveraging the global scale of Azure networking. The service provides enhanced DDoS mitigation capabilities for applications and resources deployed in virtual networks.
-Technology partners can protect their customers' resources natively with Azure DDoS Protection Standard to address the availability and reliability concerns due to DDoS attacks.
+Technology partners can protect their customers' resources natively with Azure DDoS Protection to address the availability and reliability concerns due to DDoS attacks.
-## Introduction to Azure DDoS Protection Standard
-Azure DDoS Protection Standard provides enhanced DDoS mitigation capabilities against Layer 3 and Layer 4 DDoS attacks. The following are the key features of DDoS Protection Standard service.
+## Introduction to Azure DDoS Protection
+Azure DDoS Protection provides enhanced DDoS mitigation capabilities against Layer 3 and Layer 4 DDoS attacks. The following are the key features of the DDoS Protection service.
### Adaptive real-time tuning
-For every protected application, Azure DDoS Protection Standard automatically tunes the DDoS mitigation policy thresholds based on the applicationΓÇÖs traffic profile patterns. The service accomplishes this customization by using two insights:
+For every protected application, Azure DDoS Protection automatically tunes the DDoS mitigation policy thresholds based on the application's traffic profile patterns. The service accomplishes this customization by using two insights:
- Automatic learning of per-customer (per-IP) traffic patterns for Layer 3 and 4. - Minimizing false positives, considering that the scale of Azure allows it to absorb a significant amount of traffic.
-![Adaptive real time tuning](./media/ddos-protection-partner-onboarding/real-time-tuning.png)
### Attack analytics, telemetry, monitoring, and alerting Azure DDoS Protection identifies and mitigates DDoS attacks without any user intervention. -- If the protected resource is in the subscription covered under Microsoft Defender for Cloud, DDoS Protection Standard automatically sends an alert to Defender for Cloud whenever a DDoS attack is detected and mitigated against the protected application.
+- If the protected resource is in the subscription covered under Microsoft Defender for Cloud, DDoS Protection automatically sends an alert to Defender for Cloud whenever a DDoS attack is detected and mitigated against the protected application.
- Alternatively, to get notified when there's an active mitigation for a protected public IP, you can [configure an alert](alerts.md) on the metric Under DDoS attack or not. - You can additionally choose to create alerts for the other DDoS metrics and [configure attack telemetry](telemetry.md) to understand the scale of the attack, traffic being dropped, attack vectors, top contributors, and other details. ![DDoS metrics](./media/ddos-protection-partner-onboarding/ddos-metrics.png) ### DDoS rapid response (DRR)
-DDoS Protection Standard customers have access to [Rapid Response team](ddos-rapid-response.md) during an active attack. DRR can help with attack investigation during an attack as well as post-attack analysis.
+DDoS Protection customers have access to [Rapid Response team](ddos-rapid-response.md) during an active attack. DRR can help with attack investigation during an attack and post-attack analysis.
### SLA guarantee and cost protection
-DDoS Protection Standard service is covered by a 99.99% SLA, and cost protection provides resource credits for scale out during a documented attack. For more information, see [SLA for Azure DDoS Protection](https://azure.microsoft.com/support/legal/sla/ddos-protection/v1_0/).
+DDoS Protection service is covered by a 99.99% SLA, and cost protection provides resource credits for scale-out during a documented attack. For more information, see [SLA for Azure DDoS Protection](https://azure.microsoft.com/support/legal/sla/ddos-protection/v1_0/).
## Featured partner scenarios
-The following are key benefits you can derive by integrating with the Azure DDoS Protection Standard:
+The following are key benefits you can derive by integrating with Azure DDoS Protection:
-- Partners' offered services (load balancer, web application firewall, firewall, etc.) to their customers are automatically protected (white labeled) by Azure DDoS Protection Standard in the back end.-- Partners have access to Azure DDoS Protection Standard attack analytics and telemetry that they can integrate with their own products, offering a unified customer experience.
+- Partners' offered services (load balancer, web application firewall, firewall, etc.) to their customers are automatically protected (white labeled) by Azure DDoS Protection in the back end.
+- Partners have access to Azure DDoS Protection attack analytics and telemetry that they can integrate with their own products, offering a unified customer experience.
- Partners have access to DDoS rapid response support even in the absence of Azure rapid response, for DDoS related issues. - Partners' protected applications are backed by a DDoS SLA guarantee and cost protection in the event of DDoS attacks. ## Technical integration overview
-Azure DDoS Protection Standard partnering opportunities are made available via Azure portal, APIs, and CLI/PS.
+Azure DDoS Protection partnering opportunities are made available via the Azure portal, APIs, and CLI/PowerShell.
-### Integrate with DDoS Protection Standard
-The following steps are required for partners to configure integration with Azure DDoS Protection Standard:
-1. Create a DDoS Protection Plan in your desired (partner) subscription. For step-by-step instructions, see [Create a DDoS Standard Protection plan](manage-ddos-protection.md#create-a-ddos-protection-plan).
+### Integrate with DDoS Protection
+The following steps are required for partners to configure integration with Azure DDoS Protection:
+1. Create a DDoS Protection Plan in your desired (partner) subscription. For step-by-step instructions, see [Create a DDoS Protection plan](manage-ddos-protection.md#create-a-ddos-protection-plan).
> [!NOTE] > Only 1 DDoS Protection Plan needs to be created for a given tenant. 2. Deploy a service with a public endpoint in your (partner) subscriptions, such as a load balancer, firewall, or web application firewall.
-3. Enable Azure DDoS Protection Standard on the virtual network of the service that has public endpoints using DDoS Protection Plan created in the first step. For step-by-step instructions, see [Enable DDoS Standard Protection plan](manage-ddos-protection.md#enable-ddos-protection-for-an-existing-virtual-network)
+3. Enable Azure DDoS Protection on the virtual network of the service that has public endpoints using the DDoS Protection plan created in the first step. For step-by-step instructions, see [Enable DDoS Protection plan](manage-ddos-protection.md#enable-ddos-protection-for-an-existing-virtual-network).
> [!IMPORTANT]
- > After Azure DDoS Protection Standard is enabled on a virtual network, all public IPs within that virtual network are automatically protected. The origin of these public IPs can be either within Azure (client subscription) or outside of Azure.
-4. Optionally, integrate Azure DDoS Protection Standard telemetry and attack analytics in your application-specific customer-facing dashboard. For more information about using telemetry, see [View and configure DDoS protection telemetry](telemetry.md).
+ > After Azure DDoS Protection is enabled on a virtual network, all public IPs within that virtual network are automatically protected. The origin of these public IPs can be either within Azure (client subscription) or outside of Azure.
+4. Optionally, integrate Azure DDoS Protection telemetry and attack analytics in your application-specific customer-facing dashboard. For more information about using telemetry, see [View and configure DDoS protection telemetry](telemetry.md).
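For step 4, one possible way to surface attack state in a customer-facing dashboard is to poll the per-public-IP metrics. A hedged sketch follows; it assumes the internal name of the attack-indicator metric is `IfUnderDDoSAttack` (confirm with `az monitor metrics list-definitions`), and the resource IDs are placeholders:

```azurecli
# Sketch only: poll the attack indicator for a protected public IP at one-minute
# resolution, for display in a partner dashboard.
az monitor metrics list \
  --resource "/subscriptions/<client-sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/publicIPAddresses/<public-ip-name>" \
  --metric "IfUnderDDoSAttack" \
  --interval PT1M \
  --output table
```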
### Onboarding guides and technical documentation
The following steps are required for partners to configure integration with Azur
### Get help -- If you have questions about application, service, or product integrations with Azure DDoS Protection Standard, reach out to the [Azure security community](https://techcommunity.microsoft.com/t5/security-identity/bd-p/Azure-Security).
+- If you have questions about application, service, or product integrations with Azure DDoS Protection, reach out to the [Azure security community](https://techcommunity.microsoft.com/t5/security-identity/bd-p/Azure-Security).
- Follow discussions on [Stack Overflow](https://stackoverflow.com/tags/azure-ddos/). ### Get to market
The following steps are required for partners to configure integration with Azur
View existing partner integrations: - [Barracuda WAF-as-a-service](https://www.barracuda.com/waf-as-a-service)-
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-reference-architectures.md
Previously updated : 09/13/2022 Last updated : 10/12/2022 -+
-# DDoS Protection reference architectures
+# Azure DDoS Protection reference architectures
-DDoS Protection Standard is designed [for services that are deployed in a virtual network](../virtual-network/virtual-network-for-azure-services.md). The following reference architectures are arranged by scenarios, with architecture patterns grouped together.
+Azure DDoS Protection is designed [for services that are deployed in a virtual network](../virtual-network/virtual-network-for-azure-services.md). The following reference architectures are arranged by scenarios, with architecture patterns grouped together.
> [!NOTE] > Protected resources include public IPs attached to an IaaS VM (except for single VM running behind a public IP), Load Balancer (Classic & Standard Load Balancers), Application Gateway (including WAF) cluster, Firewall, Bastion, VPN Gateway, Service Fabric, IaaS based Network Virtual Appliance (NVA) or Azure API Management (Premium tier only), connected to a virtual network (VNet) in the external mode. Protection also covers public IP ranges brought to Azure via Custom IP Prefixes (BYOIPs). PaaS services (multi-tenant), which includes Azure App Service Environment for Power Apps, Azure API Management in deployment modes other than those supported above, or Azure Virtual WAN are not supported at present. - ## Virtual machine (Windows/Linux) workloads
-### Application running on load-balanced VMs
-
-This reference architecture shows a set of proven practices for running multiple Windows VMs in a scale set behind a load balancer, to improve availability and scalability. This architecture can be used for any stateless workload, such as a web server.
+### Application running on load-balanced virtual machines
-![Diagram of the reference architecture for an application running on load-balanced VMs](./media/ddos-best-practices/image-9.png)
+This reference architecture shows a set of proven practices for running multiple Windows virtual machines in a scale set behind a load balancer, to improve availability and scalability. This architecture can be used for any stateless workload, such as a web server.
-In this architecture, a workload is distributed across multiple VM instances. There is a single public IP address, and internet traffic is distributed to the VMs through a load balancer. DDoS Protection Standard is enabled on the virtual network of the Azure (internet) load balancer that has the public IP associated with it.
+In this architecture, a workload is distributed across multiple virtual machine instances. There's a single public IP address, and internet traffic is distributed to the virtual machines through a load balancer.
The load balancer distributes incoming internet requests to the VM instances. Virtual machine scale sets allow the number of VMs to be scaled in or out manually, or automatically based on predefined rules. This is important if the resource is under DDoS attack. For more information on this reference architecture, see
-[this article](/azure/architecture/reference-architectures/virtual-machines-windows/multi-vm).
+[Windows N-tier application on Azure](/azure/architecture/reference-architectures/virtual-machines-windows/multi-vm).
+#### DDoS Network Protection virtual machine architecture
++
+ DDoS Network Protection is enabled on the virtual network of the Azure (internet) load balancer that has the public IP associated with it.
+
+#### DDoS IP Protection virtual machine architecture
++
+DDoS IP Protection is enabled on the frontend public IP address of a public load balancer.
### Application running on Windows N-tier
-There are many ways to implement an N-tier architecture. The following diagram shows a typical three-tier web application. This architecture builds on the article [Run load-balanced VMs for scalability and availability](/azure/architecture/reference-architectures/virtual-machines-windows/multi-vm). The web and business tiers use load-balanced VMs.
+There are many ways to implement an N-tier architecture. The following diagrams show a typical three-tier web application. This architecture builds on the article [Run load-balanced VMs for scalability and availability](/azure/architecture/reference-architectures/virtual-machines-windows/multi-vm). The web and business tiers use load-balanced VMs.
+
+#### DDoS Network Protection Windows N-tier architecture
-![Diagram of the reference architecture for an application running on Windows N-tier](./media/ddos-best-practices/image-10.png)
-In this architecture, DDoS Protection Standard is enabled on the virtual network. All public IPs in the virtual network get DDoS protection for Layer 3 and 4. For Layer 7 protection, deploy Application Gateway in the WAF SKU. For more information on this reference architecture, see
+ In this architecture diagram, DDoS Network Protection is enabled on the virtual network. All public IPs in the virtual network get DDoS protection for Layer 3 and 4. For Layer 7 protection, deploy Application Gateway in the WAF SKU. For more information on this reference architecture, see
[Windows N-tier application on Azure](/azure/architecture/reference-architectures/virtual-machines-windows/n-tier).
+#### DDoS IP Protection Windows N-tier architecture
++
+ In this architecture diagram, DDoS IP Protection is enabled on the public IP address.
> [!NOTE] > Scenarios in which a single VM is running behind a public IP are not supported. DDoS mitigation may not initiate instantaneously when a DDoS attack is detected. As a result, a single VM deployment that can't scale out will go down in such cases.
In this architecture, DDoS Protection Standard is enabled on the virtual network
This reference architecture shows running an Azure App Service application in a single region. This architecture shows a set of proven practices for a web application that uses [Azure App Service](/azure/app-service/) and [Azure SQL Database](/azure/sql-database/). A standby region is set up for failover scenarios.
-![Diagram of the reference architecture for a PaaS web application](./media/ddos-best-practices/image-11.png)
- Azure Traffic Manager routes incoming requests to Application Gateway in one of the regions. During normal operations, it routes requests to Application Gateway in the active region. If that region becomes unavailable, Traffic Manager fails over to Application Gateway in the standby region.
-All traffic from the internet destined to the web application is routed to the [Application Gateway public IP address](../application-gateway/configure-web-app.md) via Traffic Manager. In this scenario, the app service (web app) itself is not directly externally facing and is protected by Application Gateway.
+All traffic from the internet destined to the web application is routed to the [Application Gateway public IP address](../application-gateway/configure-web-app.md) via Traffic Manager. In this scenario, the app service (web app) itself isn't directly externally facing and is protected by Application Gateway.
We recommend that you configure the Application Gateway WAF SKU (prevent mode) to help protect against Layer 7 (HTTP/HTTPS/WebSocket) attacks. Additionally, web apps are configured to [accept only traffic from the Application Gateway](https://azure.microsoft.com/blog/ip-and-domain-restrictions-for-windows-azure-web-sites/) IP address. For more information about this reference architecture, see [Highly available multi-region web application](/azure/architecture/reference-architectures/app-service-web-app/multi-region).
+#### DDoS Network Protection with PaaS web application architecture
++
+ In this architecture diagram, DDoS Network Protection is enabled on the web app gateway virtual network.
+
+#### DDoS IP Protection with PaaS web application architecture
++
+ In this architecture diagram, DDoS IP Protection is enabled on the public IP associated with the web application gateway.
## Mitigation for non-web PaaS services ### HDInsight on Azure
-This reference architecture shows configuring DDoS Protection Standard for an [Azure HDInsight cluster](../hdinsight/index.yml). Make sure that the HDInsight cluster is linked to a virtual network and that DDoS Protection is enabled on the virtual network.
+This reference architecture shows configuring DDoS Protection for an [Azure HDInsight cluster](../hdinsight/index.yml). Make sure that the HDInsight cluster is linked to a virtual network and that DDoS Protection is enabled on the virtual network.
!["HDInsight" and "Advanced settings" panes, with virtual network settings](./media/ddos-best-practices/image-12.png) ![Selection for enabling DDoS Protection](./media/ddos-best-practices/image-13.png)
-In this architecture, traffic destined to the HDInsight cluster from the internet is routed to the public IP associated with the HDInsight gateway load balancer. The gateway load balancer then sends the traffic to the head nodes or the worker nodes directly. Because DDoS Protection Standard is enabled on the HDInsight virtual network, all public IPs in the virtual network get DDoS protection for Layer 3 and 4. This reference architecture can be combined with the N-Tier and multi-region reference architectures.
+In this architecture, traffic destined to the HDInsight cluster from the internet is routed to the public IP associated with the HDInsight gateway load balancer. The gateway load balancer then sends the traffic to the head nodes or the worker nodes directly. Because DDoS Protection is enabled on the HDInsight virtual network, all public IPs in the virtual network get DDoS protection for Layer 3 and 4. This reference architecture can be combined with the N-Tier and multi-region reference architectures.
For more information on this reference architecture, see the [Extend Azure HDInsight using an Azure Virtual Network](../hdinsight/hdinsight-plan-virtual-network-deployment.md?toc=/azure/virtual-network/toc.json) documentation.
documentation.
This reference architecture details a hub-and-spoke topology with Azure Firewall inside the hub as a DMZ for scenarios that require central control over security aspects. Azure Firewall is a managed firewall as a service and is placed in its own subnet. Azure Bastion is deployed and placed in its own subnet.
-There are two spokes that are connected to the hub using VNet peering and there is no spoke-to-spoke connectivity. If you require spoke-to-spoke connectivity, then you need to create routes to forward traffic from one spoke to the firewall, which can then route it to the other spoke.
+There are two spokes that are connected to the hub using VNet peering and there's no spoke-to-spoke connectivity. If you require spoke-to-spoke connectivity, then you need to create routes to forward traffic from one spoke to the firewall, which can then route it to the other spoke. All the Public IPs that are inside the hub are protected by DDoS Protection. In this scenario, the firewall in the hub helps control the ingress traffic from the internet, while the firewall's public IP is being protected. Azure DDoS Protection also protects the public IP of the bastion.
+
+DDoS Protection is designed for services that are deployed in a virtual network. For more information, see [Deploy dedicated Azure service into virtual networks](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network).
+
+#### DDoS Network Protection hub-and-spoke network
++
+ In this architecture diagram, Azure DDoS Network Protection is enabled on the hub virtual network.
+#### DDoS IP Protection hub-and-spoke network
-Azure DDoS Protection Standard is enabled on the hub virtual network. Therefore, all the Public IPs that are inside the hub are protected by the DDoS Standard plan. In this scenario, the firewall in the hub helps control the ingress traffic from the internet, while the firewall's public IP is being protected. Azure DDoS Protection Standard also protects the public IP of the bastion.
-DDoS Protection Standard is designed for services that are deployed in a virtual network. For more information, see [Deploy dedicated Azure service into virtual networks](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network).
+In this architecture diagram, Azure DDoS IP Protection is enabled on the public IP address.
> [!NOTE]
-> DDoS Protection Standard protects the Public IPs of Azure resource. DDoS infrastructure protection, which requires no configuration and is enabled by default, only protects the Azure underlying platform infrastructure (e.g. Azure DNS). For more information, see [Azure DDoS Protection Standard overview](ddos-protection-overview.md).
+> Azure DDoS Protection protects the public IPs of Azure resources. DDoS infrastructure protection, which requires no configuration and is enabled by default, only protects the Azure underlying platform infrastructure (for example, Azure DNS). For more information, see [Azure DDoS Protection overview](ddos-protection-overview.md).
For more information about hub-and-spoke topology, see [Hub-spoke network topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke?tabs=cli). ## Next steps
ddos-protection Ddos Protection Sku Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md
+
+ Title: 'Azure DDoS Protection SKU Comparison'
+description: Learn about the available SKUs for Azure DDoS Protection.
++++ Last updated : 10/12/2022++++
+# About Azure DDoS Protection SKU Comparison
++
+The sections in this article discuss the resources and settings of Azure DDoS Protection.
+
+## DDoS Network Protection
+
+Azure DDoS Network Protection, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network. For more information about enabling DDoS Network Protection, see [Quickstart: Create and configure Azure DDoS Network Protection using the Azure portal](manage-ddos-protection.md).
+
+## DDoS IP Protection
+
+ DDoS IP Protection is a pay-per-protected-IP model. DDoS IP Protection contains the same core engineering features as DDoS Network Protection, but differs in the value-added services that remain exclusive to DDoS Network Protection: DDoS rapid response support, cost protection, and the WAF discount (see the SKU table below).
+
+> [!NOTE]
+> DDoS IP Protection is currently only available in the Azure Preview Portal.
+
+## SKUs
+
+Azure DDoS Protection supports two SKU types, DDoS IP Protection and DDoS Network Protection. The SKU is configured in the Azure portal during the workflow when you configure Azure DDoS Protection.
+
+The following table shows features and corresponding SKUs.
+
+| Feature | DDoS IP Protection | DDoS Network Protection |
+||||
+| Active traffic monitoring & always on detection | Yes| Yes |
+| L3/L4 Automatic attack mitigation | Yes | Yes |
+| Application based mitigation policies | Yes| Yes |
+| Metrics & alerts | Yes | Yes |
+| Mitigation reports | Yes | Yes |
+| Mitigation flow logs| Yes| Yes |
+| Mitigation policies tuned to customers application | Yes| Yes |
+| Integration with Firewall Manager | Yes | Yes |
+| Microsoft Sentinel data connector and workbook | Yes | Yes |
+| DDoS rapid response support | Not available | Yes |
+| Cost protection | Not available | Yes |
+| WAF discount | Not available | Yes |
+| Protection of resources across subscriptions in a tenant | Yes | Yes |
+| Price | Per protected IP | Per 100 protected IP addresses |
+
+>[!Note]
+>At no additional cost, Azure DDoS infrastructure protection protects every Azure service that uses public IPv4 and IPv6 addresses. This DDoS protection service helps to protect all Azure services, including platform as a service (PaaS) services such as Azure DNS. For more information on supported PaaS services, see [DDoS Protection reference architectures](ddos-protection-reference-architectures.md). Azure DDoS infrastructure protection requires no user configuration or application changes. Azure provides continuous protection against DDoS attacks. DDoS protection does not store customer data.
+
+## Next steps
+
+* [Quickstart: Create an Azure DDoS Protection Plan](manage-ddos-protection.md)
+* [Azure DDoS Protection features](ddos-protection-features.md)
ddos-protection Ddos Protection Standard Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-standard-features.md
- Title: Azure DDoS Protection features
-description: Learn Azure DDoS Protection features
----- Previously updated : 09/08/2020---
-# Azure DDoS Protection Standard features
-
-The following sections outline the key features of the Azure DDoS Protection Standard service.
-
-## Always-on traffic monitoring
-
-DDoS Protection Standard monitors actual traffic utilization and constantly compares it against the thresholds defined in the DDoS Policy. When the traffic threshold is exceeded, DDoS mitigation is initiated automatically. When traffic returns below the thresholds, the mitigation is stopped.
-
-![Azure DDoS Protection Standard Mitigation](./media/ddos-protection-overview/mitigation.png)
-
-During mitigation, traffic sent to the protected resource is redirected by the DDoS protection service and several checks are performed, such as:
--- Ensure packets conform to internet specifications and are not malformed.-- Interact with the client to determine if the traffic is potentially a spoofed packet (e.g: SYN Auth or SYN Cookie or by dropping a packet for the source to retransmit it).-- Rate-limit packets, if no other enforcement method can be performed.-
-DDoS protection drops attack traffic and forwards the remaining traffic to its intended destination. Within a few minutes of attack detection, you are notified using Azure Monitor metrics. By configuring logging on DDoS Protection Standard telemetry, you can write the logs to available options for future analysis. Metric data in Azure Monitor for DDoS Protection Standard is retained for 30 days.
-
-## Adaptive real time tuning
-
-The complexity of attacks (for example, multi-vector DDoS attacks) and the application-specific behaviors of tenants call for per-customer, tailored protection policies. The service accomplishes this by using two insights:
--- Automatic learning of per-customer (per-Public IP) traffic patterns for Layer 3 and 4.--- Minimizing false positives, considering that the scale of Azure allows it to absorb a significant amount of traffic.-
-![Diagram of how DDoS Protection Standard works, with "Policy Generation" circled](./media/ddos-best-practices/image-5.png)
-
-## DDoS Protection telemetry, monitoring, and alerting
-
-DDoS Protection Standard exposes rich telemetry via [Azure Monitor](../azure-monitor/overview.md). You can configure alerts for any of the Azure Monitor metrics that DDoS Protection uses. You can integrate logging with Splunk (Azure Event Hubs), Azure Monitor logs, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
-
-### DDoS mitigation policies
-
-In the Azure portal, select **Monitor** > **Metrics**. In the **Metrics** pane, select the resource group, select a resource type of **Public IP Address**, and select your Azure public IP address. DDoS metrics are visible in the **Available metrics** pane.
-
-DDoS Protection Standard applies three autotuned mitigation policies (TCP SYN, TCP, and UDP) for each public IP of the protected resource, in the virtual network that has DDoS enabled. You can view the policy thresholds by selecting the metric **Inbound packets to trigger DDoS mitigation**.
-
-![Available metrics and metrics chart](./media/ddos-best-practices/image-7.png)
-
-The policy thresholds are autoconfigured via machine learning-based network traffic profiling. DDoS mitigation occurs for an IP address under attack only when the policy threshold is exceeded.
-
-### Metric for an IP address under DDoS attack
-
-If the public IP address is under attack, the value for the metric **Under DDoS attack or not** changes to 1 as DDoS Protection performs mitigation on the attack traffic.
-
-!["Under DDoS attack or not" metric and chart](./media/ddos-best-practices/image-8.png)
-
-We recommend configuring an alert on this metric. You'll then be notified when thereΓÇÖs an active DDoS mitigation performed on your public IP address.
-
-For more information, see [Manage Azure DDoS Protection Standard using the Azure portal](manage-ddos-protection.md).
-
-## Web application firewall for resource attacks
-
-Specific to resource attacks at the application layer, you should configure a web application firewall (WAF) to help secure web applications. A WAF inspects inbound web traffic to block SQL injections, cross-site scripting, DDoS, and other Layer 7 attacks. Azure provides [WAF as a feature of Application Gateway](../web-application-firewall/ag/ag-overview.md) for centralized protection of your web applications from common exploits and vulnerabilities. There are other WAF offerings available from Azure partners that might be more suitable for your needs via the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=WAF&page=1).
-
-Even web application firewalls are susceptible to volumetric and state exhaustion attacks. We strongly recommend enabling DDoS Protection Standard on the WAF virtual network to help protect from volumetric and protocol attacks. For more information, see the [DDoS Protection reference architectures](ddos-protection-reference-architectures.md) section.
-
-## Protection Planning
-
-Planning and preparation are crucial to understand how a system will perform during a DDoS attack. Designing an incident management response plan is part of this effort.
-
-If you have DDoS Protection Standard, make sure that it's enabled on the virtual network of internet-facing endpoints. Configuring DDoS alerts helps you constantly watch for any potential attacks on your infrastructure.
-
-Monitor your applications independently. Understand the normal behavior of an application. Prepare to act if the application is not behaving as expected during a DDoS attack.
-
-Learn how your services will respond to an attack by [testing through simulations](test-through-simulations.md).
-
-## Next steps
--- Learn how to [create a DDoS protection plan](manage-ddos-protection.md).
ddos-protection Ddos Rapid Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-rapid-response.md
na+ Previously updated : 08/28/2020 Last updated : 10/12/2022 - # Azure DDoS Rapid Response
-During an active attack, Azure DDoS Protection Standard customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack and post-attack analysis.
+During an active attack, Azure DDoS Protection customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack and post-attack analysis.
## Prerequisites -- Before you can complete the steps in this tutorial, you must first create a [Azure DDoS Standard protection plan](manage-ddos-protection.md).
+- Before you can complete the steps in this tutorial, you must first create an [Azure DDoS Protection plan](manage-ddos-protection.md).
## When to engage DRR You should only engage DRR if: -- During a DDoS attack if you find that the performance of the protected resource is severely degraded, or the resource is not available. -- You think your resource is under DDoS attack, but DDoS Protection service is not mitigating the attack effectively.
+- During a DDoS attack if you find that the performance of the protected resource is severely degraded, or the resource isn't available.
+- You think your resource is under DDoS attack, but DDoS Protection service isn't mitigating the attack effectively.
- You're planning a viral event that will significantly increase your network traffic. - For attacks that have a critical business impact.
You should only engage DRR if:
1. From Azure portal while creating a new support request, choose **Issue Type** as Technical. 2. Choose **Service** as **DDOS Protection**.
-3. Choose a resource in the resource drop-down menu. _You must select a DDoS Plan thatΓÇÖs linked to the virtual network being protected by DDoS Protection Standard to engage DRR._
+3. Choose a resource in the resource drop-down menu. _You must select a DDoS Plan that's linked to the virtual network being protected by DDoS Protection to engage DRR._
![Choose Resource](./media/ddos-rapid-response/choose-resource.png)
-4. On the next **Problem** page select the **severity** as A -Critical Impact and **Problem Type** as ΓÇÿUnder attack.ΓÇÖ
+4. On the next **Problem** page, select the **severity** as A - Critical Impact and **Problem Type** as 'Under attack.'
![Severity and Problem Type](./media/ddos-rapid-response/severity-and-problem-type.png)
You should only engage DRR if:
DRR follows the Azure Rapid Response support model. Refer to [Support scope and responsiveness](https://azure.microsoft.com/support/plans/response/) for more information on Rapid Response.
-To learn more, read the [DDoS Protection Standard documentation](./ddos-protection-overview.md).
+To learn more, read the [DDoS Protection documentation](./ddos-protection-overview.md).
## Next steps
ddos-protection Ddos Response Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-response-strategy.md
Title: Components of a DDoS response strategy
-description: Learn what how to use Azure DDoS Protection Standard to respond to DDoS attacks.
+description: Learn how to use Azure DDoS Protection to respond to DDoS attacks.
documentationcenter: na na+ Previously updated : 09/08/2020 Last updated : 10/12/2022
A DDoS attack that targets Azure resources usually requires minimal intervention
## Microsoft threat intelligence
-Microsoft has an extensive threat intelligence network. This network uses the collective knowledge of an extended security community that supports Microsoft online services, Microsoft partners, and relationships within the internet security community.
+Microsoft has an extensive threat intelligence network. This network uses the collective knowledge of an extended security community that supports Microsoft online services, Microsoft partners, and relationships within the internet security community.
As a critical infrastructure provider, Microsoft receives early warnings about threats. Microsoft gathers threat intelligence from its online services and from its global customer base. Microsoft incorporates all of this threat intelligence back into the Azure DDoS Protection products.
It's imperative to understand the scope of your risk from a DDoS attack on an
- What new publicly available Azure resources need protection? -- Is there a single point of failure in the service?
+- Is there a single point of failure in the service?
- How can services be isolated to limit the impact of an attack while still making services available to valid customers? -- Are there virtual networks where DDoS Protection Standard should be enabled but isn't?
+- Are there virtual networks where DDoS Protection should be enabled but isn't?
- Are my services active/active with failover across multiple regions?
-It is essential that you understand the normal behavior of an application and prepare to act if the application is not behaving as expected during a DDoS attack. Have monitors configured for your business-critical applications that mimic client behavior, and notify you when relevant anomalies are detected. Refer to [monitoring and diagnostics best practices](/azure/architecture/best-practices/monitoring#monitoring-and-diagnostics-scenarios) to gain insights on the health of your application.
+It's essential that you understand the normal behavior of an application and prepare to act if the application isn't behaving as expected during a DDoS attack. Have monitors configured for your business-critical applications that mimic client behavior, and notify you when relevant anomalies are detected. Refer to [monitoring and diagnostics best practices](/azure/architecture/best-practices/monitoring#monitoring-and-diagnostics-scenarios) to gain insights on the health of your application.
[Azure Application Insights](../azure-monitor/app/app-insights-overview.md) is an extensible application performance management (APM) service for web developers on multiple platforms. Use Application Insights to monitor your live web application. It automatically detects performance anomalies. It includes analytics tools to help you diagnose issues and to understand what users do with your app. It's designed to help you continuously improve performance and usability. ## Customer DDoS response team
-Creating a DDoS response team is a key step in responding to an attack quickly and effectively. Identify contacts in your organization who will oversee both planning and execution. This DDoS response team should thoroughly understand the Azure DDoS Protection Standard service. Make sure that the team can identify and mitigate an attack by coordinating with internal and external customers, including the Microsoft support team.
+Creating a DDoS response team is a key step in responding to an attack quickly and effectively. Identify contacts in your organization who will oversee both planning and execution. This DDoS response team should thoroughly understand the Azure DDoS Protection service. Make sure that the team can identify and mitigate an attack by coordinating with internal and external customers, including the Microsoft support team.
We recommend that you use simulation exercises as a normal part of your service availability and continuity planning, and these exercises should include scale testing. See [test through simulations](test-through-simulations.md) to learn how to simulate DDoS test traffic against your Azure public endpoints. ## Alerts during an attack
-Azure DDoS Protection Standard identifies and mitigates DDoS attacks without any user intervention. To get notified when thereΓÇÖs an active mitigation for a protected public IP, you can [configure alerts](alerts.md).
+Azure DDoS Protection identifies and mitigates DDoS attacks without any user intervention. To get notified when there's an active mitigation for a protected public IP, you can [configure alerts](alerts.md).
### When to contact Microsoft support
-Azure DDoS Protection Standard customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack as well as post-attack analysis. See [DDoS Rapid Response](ddos-rapid-response.md) for more details, including when you should engage the DRR team.
+Azure DDoS Protection customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack as well as post-attack analysis. See [DDoS Rapid Response](ddos-rapid-response.md) for more details, including when you should engage the DRR team.
## Post-attack steps
If you suspect you're under a DDoS attack, escalate through your normal Azure Su
## Next steps -- Learn how to [create a DDoS protection plan](manage-ddos-protection.md).
+- Learn how to [create a DDoS protection plan](manage-ddos-protection.md).
ddos-protection Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/diagnostic-logging.md
Title: Azure DDoS Protection Standard reports and flow logs
+ Title: 'Tutorial: View and configure Azure DDoS Protection diagnostic logging'
description: Learn how to configure reports and flow logs. documentationcenter: na
na+ Previously updated : 08/29/2022 Last updated : 10/12/2022 -
-# Tutorial: View and configure DDoS diagnostic logging
+# Tutorial: View and configure Azure DDoS Protection diagnostic logging
-Azure DDoS Protection standard provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
+Azure DDoS Protection provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
-The following diagnostic logs are available for Azure DDoS Protection Standard:
+The following diagnostic logs are available for Azure DDoS Protection:
- **DDoSProtectionNotifications**: Notifications will notify you anytime a public IP resource is under attack, and when attack mitigation is over.-- **DDoSMitigationFlowLogs**: Attack mitigation flow logs allow you to review the dropped traffic, forwarded traffic and other interesting datapoints during an active DDoS attack in near-real time. You can ingest the constant stream of this data into Microsoft Sentinel or to your third-party SIEM systems via event hub for near-real time monitoring, take potential actions and address the need of your defense operations.-- **DDoSMitigationReports**: Attack mitigation reports use the Netflow protocol data, which is aggregated to provide detailed information about the attack on your resource. Anytime a public IP resource is under attack, the report generation will start as soon as the mitigation starts. There will be an incremental report generated every 5 mins and a post-mitigation report for the whole mitigation period. This is to ensure that in an event the DDoS attack continues for a longer duration of time, you'll be able to view the most current snapshot of mitigation report every 5 minutes and a complete summary once the attack mitigation is over. -- **AllMetrics**: Provides all possible metrics available during the duration of a DDoS attack.
+- **DDoSMitigationFlowLogs**: Attack mitigation flow logs allow you to review the dropped traffic, forwarded traffic and other interesting data-points during an active DDoS attack in near-real time. You can ingest the constant stream of this data into Microsoft Sentinel or to your third-party SIEM systems via event hub for near-real time monitoring, take potential actions and address the need of your defense operations.
+- **DDoSMitigationReports**: Attack mitigation reports use the Netflow protocol data, which is aggregated to provide detailed information about the attack on your resource. Anytime a public IP resource is under attack, report generation starts as soon as the mitigation starts. An incremental report is generated every 5 minutes, along with a post-mitigation report for the whole mitigation period. This ensures that if the DDoS attack continues for a longer duration, you can view the most current snapshot of the mitigation report every 5 minutes and a complete summary once the attack mitigation is over.
+- **AllMetrics**: Provides all possible metrics available during the duration of a DDoS attack.
In this tutorial, you'll learn how to: > [!div class="checklist"]
-> * Configure DDoS diagnostic logs, including notifications, mitigation reports and mitigation flow logs.
+> * Configure Azure DDoS Protection diagnostic logs, including notifications, mitigation reports and mitigation flow logs.
> * Enable diagnostic logging on all public IPs in a defined scope. > * View log data in workbooks. ## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Before you can complete the steps in this tutorial, you must first create a [Azure DDoS Standard protection plan](manage-ddos-protection.md) and DDoS Protection Standard must be enabled on a virtual network.-- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.  
+- Before you can complete the steps in this tutorial, you must first create an [Azure DDoS protection plan](manage-ddos-protection.md). DDoS Network Protection must be enabled on a virtual network or DDoS IP Protection must be enabled on a public IP address.
+- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.
-## Configure DDoS diagnostic logs
+## Configure Azure DDoS Protection diagnostic logs
If you want to automatically enable diagnostic logging on all public IPs within an environment, skip to [Enable diagnostic logging on all public IPs](#enable-diagnostic-logging-on-all-public-ips). 1. Select **All services** on the top, left of the portal.
-2. Enter *Monitor* in the **Filter** box. When **Monitor** appears in the results, select it.
-3. Under **Settings**, select **Diagnostic Settings**.
-4. Select the **Subscription** and **Resource group** that contain the public IP address you want to log.
-5. Select **Public IP Address** for **Resource type**, then select the specific public IP address you want to enable logs for.
-6. Select **Add diagnostic setting**. Under **Category Details**, select as many of the following options you require, and then select **Save**.
+1. Enter *Monitor* in the **Filter** box. When **Monitor** appears in the results, select it.
+1. Under **Settings**, select **Diagnostic Settings**.
+1. Select the **Subscription** and **Resource group** that contain the public IP address you want to log.
+1. Select **Public IP Address** for **Resource type**, then select the specific public IP address you want to enable logs for.
+1. Select **Add diagnostic setting**. Under **Category Details**, select as many of the following options as you require, and then select **Save**.
:::image type="content" source="./media/ddos-attack-telemetry/ddos-diagnostic-settings.png" alt-text="Screenshot of DDoS diagnostic settings." lightbox="./media/ddos-attack-telemetry/ddos-diagnostic-settings.png":::
-7. Under **Destination details**, select as many of the following options as you require:
+1. Under **Destination details**, select as many of the following options as you require:
- **Archive to a storage account**: Data is written to an Azure Storage account. To learn more about this option, see [Archive resource logs](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage).
- - **Stream to an event hub**: Allows a log receiver to pick up logs using an Azure Event Hub. Event hubs enable integration with Splunk or other SIEM systems. To learn more about this option, see [Stream resource logs to an event hub](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-event-hubs).
+ - **Stream to an event hub**: Allows a log receiver to pick up logs using Azure Event Hubs. Event hubs enable integration with Splunk or other SIEM systems. To learn more about this option, see [Stream resource logs to an event hub](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-event-hubs).
- **Send to Log Analytics**: Writes logs to the Azure Monitor service. To learn more about this option, see [Collect logs for use in Azure Monitor logs](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-log-analytics-workspace).
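As an alternative to the preceding portal steps, you can script the same configuration. The following Azure PowerShell sketch assumes a public IP address named *myStandardPublicIP*, a resource group named *MyResourceGroup*, and a Log Analytics workspace named *MyWorkspace*; none of these names come from the steps above, and depending on your Az.Monitor version you may need the newer `New-AzDiagnosticSetting` cmdlet instead.

```azurepowershell-interactive
# Hypothetical resource names - replace them with your own.
$publicIp  = Get-AzPublicIpAddress -Name myStandardPublicIP -ResourceGroupName MyResourceGroup
$workspace = Get-AzOperationalInsightsWorkspace -Name MyWorkspace -ResourceGroupName MyResourceGroup

# Enable the DDoS log categories and all metrics, and send them to the Log Analytics workspace.
Set-AzDiagnosticSetting -Name "ddos-diagnostics" `
  -ResourceId $publicIp.Id `
  -WorkspaceId $workspace.ResourceId `
  -Category DDoSProtectionNotifications,DDoSMitigationFlowLogs,DDoSMitigationReports `
  -MetricCategory AllMetrics `
  -Enabled $true
```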
-### Query DDOS protection logs in log analytics workspace
+### Query Azure DDOS Protection logs in log analytics workspace
#### DDoSProtectionNotifications logs 1. Under the **Log analytics workspaces** blade, select your log analytics workspace.
-4. Under **General**, click on **Logs**
+1. Under **General**, select **Logs**.
-5. In Query explorer, type in the following Kusto Query and change the time range to Custom and change the time range to last three months. Then hit Run.
+1. In the query explorer, enter the following Kusto query, change the time range to **Custom**, and set it to the last three months. Then select **Run**.
```kusto AzureDiagnostics | where Category == "DDoSProtectionNotifications" ```
-#### DDoSMitigationFlowLogs
-
-1. Now change the query to the following and keep the same time range and hit Run.
+1. To view **DDoSMitigationFlowLogs**, change the query to the following, keep the same time range, and select **Run**.
```kusto AzureDiagnostics | where Category == "DDoSMitigationFlowLogs" ```
-#### DDoSMitigationReports
-
-1. Now change the query to the following and keep the same time range and hit Run.
+1. To view **DDoSMitigationReports**, change the query to the following, keep the same time range, and select **Run**.
```kusto AzureDiagnostics
The following table lists the field names and descriptions:
## Enable diagnostic logging on all public IPs
-This [built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F752154a7-1e0f-45c6-a880-ac75a7e4f648) automatically enables diagnostic logging on all public IP logs in a defined scope. See [Azure Policy built-in definitions for Azure DDoS Protection Standard](policy-reference.md) for full list of built-in policies.
+This [built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F752154a7-1e0f-45c6-a880-ac75a7e4f648) automatically enables diagnostic logging on all public IP addresses in a defined scope. See [Azure Policy built-in definitions for Azure DDoS Protection](policy-reference.md) for a full list of built-in policies.
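As a minimal sketch, you could assign that built-in policy at subscription scope with Azure PowerShell. The assignment name and the `<subscription-id>` placeholder are assumptions; to use the policy's DeployIfNotExists effect for automatic remediation, the assignment additionally needs a managed identity and the policy's parameters, which aren't shown here.

```azurepowershell-interactive
# Look up the built-in policy definition by its GUID (the same ID as in the link above).
$definition = Get-AzPolicyDefinition -Id '/providers/Microsoft.Authorization/policyDefinitions/752154a7-1e0f-45c6-a880-ac75a7e4f648'

# Assign the policy at subscription scope; replace <subscription-id> with your own.
New-AzPolicyAssignment -Name 'ddos-pip-resource-logs' `
  -Scope '/subscriptions/<subscription-id>' `
  -PolicyDefinition $definition
```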
## View log data in workbooks ### Microsoft Sentinel data connector
-You can connect logs to Microsoft Sentinel, view and analyze your data in workbooks, create custom alerts, and incorporate it into investigation processes. To connect to Microsoft Sentinel, see [Connect to Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection).
+You can connect logs to Microsoft Sentinel, view and analyze your data in workbooks, create custom alerts, and incorporate it into investigation processes. To connect to Microsoft Sentinel, see [Connect to Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection).
:::image type="content" source="./media/ddos-attack-telemetry/azure-sentinel-ddos.png" alt-text="Screenshot of Microsoft Sentinel DDoS Connector." lightbox="./media/ddos-attack-telemetry/azure-sentinel-ddos.png":::
-### Azure DDoS Protection Workbook
+### Azure DDoS Protection workbook
-You can use [this Azure Resource Manager (ARM) template](https://aka.ms/ddosworkbook) to deploy an attack analytics workbook. This workbook allows you to visualize attack data across several filterable panels to easily understand what's at stake.
+You can use [this Azure Resource Manager (ARM) template](https://aka.ms/ddosworkbook) to deploy an attack analytics workbook. This workbook allows you to visualize attack data across several filterable panels to easily understand what's at stake.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%20DDoS%20Protection%2FWorkbook%20-%20Azure%20DDOS%20monitor%20workbook%2FAzureDDoSWorkbook_ARM.json) ## Validate and test
-To simulate a DDoS attack to validate your logs, see [Validate DDoS detection](test-through-simulations.md).
+To simulate a DDoS attack to validate your logs, see [Test with simulation partners](test-through-simulations.md).
## Next steps In this tutorial, you learned how to: -- Configure DDoS diagnostic logs, including notifications, mitigation reports and mitigation flow logs.
+- Configure Azure DDoS Protection diagnostic logs, including notifications, mitigation reports and mitigation flow logs.
- Enable diagnostic logging on all public IPs in a defined scope. - View log data in workbooks.
ddos-protection Fundamental Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/fundamental-best-practices.md
Title: Azure DDoS Protection fundamental best practices
-description: Learn the best security practices using DDoS protection.
+description: Learn the best security practices using Azure DDoS Protection.
documentationcenter: na na+ Previously updated : 09/08/2020 Last updated : 10/12/2022 -
-# Fundamental best practices
+# Azure DDoS Protection fundamental best practices
The following sections give prescriptive guidance to build DDoS-resilient services on Azure. ## Design for security
-Ensure that security is a priority throughout the entire lifecycle of an application, from design and implementation to deployment and operations. Applications can have bugs that allow a relatively low volume of requests to use an inordinate amount of resources, resulting in a service outage.
+Ensure that security is a priority throughout the entire lifecycle of an application, from design and implementation to deployment and operations. Applications can have bugs that allow a relatively low volume of requests to use an inordinate amount of resources, resulting in a service outage.
To help protect a service running on Microsoft Azure, you should have a good understanding of your application architecture and focus on the [five pillars of software quality](/azure/architecture/guide/pillars). You should know typical traffic volumes, the connectivity model between the application and other applications, and the service endpoints that are exposed to the public internet.
We often see customers' on-premises resources getting attacked along with their
## Next steps -- Learn how to [create a DDoS protection plan](manage-ddos-protection.md).
+- Learn how to [create an Azure DDoS protection plan](manage-ddos-protection.md).
ddos-protection Inline Protection Glb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/inline-protection-glb.md
Title: Inline L7 DDoS protection with Gateway Load Balancer and partner NVAs
+ Title: Inline L7 DDoS Protection with Gateway Load Balancer and partner NVAs
description: Learn how to create and enable inline L7 DDoS Protection with Gateway Load Balancer and Partner NVAs documentationcenter: na
na Previously updated : 10/21/2021- Last updated : 10/12/2022+ # Inline L7 DDoS Protection with Gateway Load Balancer and Partner NVAs
-Azure DDoS Protection is always-on but not inline and takes 30-60 seconds from the time an attack is detected until it is mitigated. Azure DDoS Protection Standard also works at L3/4 (network layer) and does not inspect the packet payload i.e. application layer (L7).
+Azure DDoS Protection is always-on but not inline, and it takes 30-60 seconds from the time an attack is detected until it's mitigated. Azure DDoS Protection also works at L3/4 (network layer) and doesn't inspect the packet payload, that is, the application layer (L7).
-Workloads that are highly sensitive to latency and cannot tolerate 30-60 seconds of on-ramp time for DDoS protection to kick in requires inline protection. Inline protection entails that all the traffic always goes through the DDoS protection pipeline. Further, for scenarios such as web protection or gaming workload protection (UDP) it becomes crucial to inspect the packet payload to mitigate against extreme low volume attacks which exploit the vulnerability in the application layer (L7).
+Workloads that are highly sensitive to latency and can't tolerate 30-60 seconds of on-ramp time for DDoS protection to kick in require inline protection. Inline protection entails that all the traffic always goes through the DDoS protection pipeline. Further, for scenarios such as web protection or gaming workload protection (UDP), it becomes crucial to inspect the packet payload to mitigate against extremely low-volume attacks, which exploit vulnerabilities in the application layer (L7).
-Partner NVAs deployed with Gateway Load Balancer and integrated with Azure DDoS Protection Standard offers comprehensive inline L7 DDoS Protection for high performance and high availability scenarios. Inline L7 DDoS Protection combined with Azure DDoS Protection Standard provides comprehensive L3-L7 protection against volumetric as well as low-volume DDoS attacks.
+Partner NVAs deployed with Gateway Load Balancer and integrated with Azure DDoS Protection offer comprehensive inline L7 DDoS Protection for high performance and high availability scenarios. Inline L7 DDoS Protection combined with Azure DDoS Protection provides comprehensive L3-L7 protection against volumetric and low-volume DDoS attacks.
## What is a Gateway Load Balancer? Gateway Load Balancer is a SKU of Azure Load Balancer catered specifically for high performance and high availability scenarios with third-party Network Virtual Appliances (NVAs).
-With the capabilities of Gateway LB, you can deploy, scale, and manage NVAs with ease – chaining a Gateway LB to your public endpoint merely requires one click. You can insert appliances for a variety of scenarios such as firewalls, advanced packet analytics, intrusion detection and prevention systems, or custom scenarios that suit your needs into the network path with Gateway LB. In scenarios with NVAs, it is especially important that flows are 'symmetrical' – this ensures sessions are maintained and symmetrical. Gateway LB maintains flow symmetry to a specific instance in the backend pool.
+With the capabilities of Gateway LB, you can deploy, scale, and manage NVAs with ease. Chaining a Gateway LB to your public endpoint requires only one selection. You can insert appliances for various scenarios such as firewalls, advanced packet analytics, intrusion detection and prevention systems, or custom scenarios that suit your needs into the network path with Gateway LB. In scenarios with NVAs, it's especially important that flows are 'symmetrical', which ensures sessions are maintained. Gateway LB maintains flow symmetry to a specific instance in the backend pool.
-For more details on Gateway Load Balancer refer to the [Gateway LB](../load-balancer/gateway-overview.md) product and documentation.
+For more information on Gateway Load Balancer, see the [Gateway load balancer](../load-balancer/gateway-overview.md) product and documentation.
-## Inline DDoS protection with Gateway LB and Partner NVAs
+## Inline DDoS protection with Gateway Load Balancer and Partner NVAs
-DDoS attacks on high latency sensitive workloads (e.g., gaming) can cause outage ranging from 2-10 seconds resulting in availability disruption. Gateway Load Balancer enables protection of such workloads by ensuring the relevant NVAs are injected into the ingress path of the internet traffic. Once chained to a Standard Public Load Balancer frontend or IP configuration on a virtual machine, no additional configuration is needed to ensure traffic to and from the application endpoint is sent to the Gateway LB.
+DDoS attacks on highly latency-sensitive workloads (for example, gaming) can cause outages of 2-10 seconds, resulting in availability disruption. Gateway Load Balancer enables protection of such workloads by ensuring the relevant NVAs are injected into the ingress path of the internet traffic. Once chained to a Standard Public Load Balancer frontend or IP configuration on a virtual machine, no additional configuration is needed to ensure traffic to and from the application endpoint is sent to the Gateway LB.
Inbound traffic is always inspected via the NVAs in the path, and the clean traffic is returned to the backend infrastructure (game servers). Traffic flows from the consumer virtual network to the provider virtual network and then returns to the consumer virtual network. The consumer virtual network and provider virtual network can be in different subscriptions, tenants, or regions, enabling greater flexibility and ease of management.
-![DDoS inline protection via gateway load balancer](./media/ddos-glb.png)
-Enabling Azure DDoS Protection Standard on the VNet of the Standard Public Load Balancer frontend or VNet of the virtual machine will offer protection from L3/4 DDoS attacks.
+Enabling Azure DDoS Protection on the VNet of the Standard Public Load Balancer frontend, or on the VNet of the virtual machine, offers protection from L3/4 DDoS attacks. A PowerShell sketch of enabling this protection follows the traffic flow steps below.
1. Unfiltered game traffic from the internet is directed to the public IP of the game servers Load Balancer.
-2. Unfiltered game traffic is redirected to the chained Gateway Load Balancer private IP.
-3. The unfiltered game traffic is inspected for DDoS attacks in real time via the partner NVAs.
-4. Filtered game traffic is sent back to the game servers for final processing.
-5. Azure DDoS Protection Standard on the gamer servers Load Balancer protects from L3/4 DDoS attacks and the DDoS protection policies are automatically tuned for game servers traffic profile and application scale.
+1. Unfiltered game traffic is redirected to the chained Gateway Load Balancer private IP.
+1. The unfiltered game traffic is inspected for DDoS attacks in real time via the partner NVAs.
+1. Filtered game traffic is sent back to the game servers for final processing.
+1. Azure DDoS Protection on the game servers' Load Balancer protects from L3/4 DDoS attacks, and the DDoS protection policies are automatically tuned for the game servers' traffic profile and application scale.
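The following Azure PowerShell sketch shows one way to enable DDoS Network Protection on an existing virtual network that hosts the load balancer frontend. The resource group, virtual network, and plan names are placeholder assumptions.

```azurepowershell-interactive
# Hypothetical resource names - replace them with your own.
$plan = Get-AzDdosProtectionPlan -ResourceGroupName MyResourceGroup -Name MyDdosProtectionPlan
$vnet = Get-AzVirtualNetwork -ResourceGroupName MyResourceGroup -Name MyGameServersVNet

# Associate the DDoS protection plan and enable DDoS Network Protection on the virtual network.
$vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId
$vnet.DdosProtectionPlan.Id = $plan.Id
$vnet.EnableDdosProtection = $true
$vnet | Set-AzVirtualNetwork
```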
## Next steps - Learn more about our launch partner [A10 Networks](https://www.a10networks.com/blog/introducing-l3-7-ddos-protection-for-microsoft-azure-tenants/)-- Learn more about [Azure DDoS Protection Standard](./ddos-protection-overview.md)
+- Learn more about [Azure DDoS Protection](./ddos-protection-overview.md)
- Learn more about [Gateway Load Balancer](../load-balancer/gateway-overview.md)
ddos-protection Manage Ddos Protection Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-bicep.md
Title: Create and enable an Azure DDoS Protection plan using Bicep.
+ Title: 'Create and configure Azure DDoS Network Protection using Bicep'
description: Learn how to create and enable an Azure DDoS Protection plan using Bicep. documentationcenter: na
na -- Previously updated : 04/04/2022++ Last updated : 10/12/2022
-# Quickstart: Create an Azure DDoS Protection Standard using Bicep
+# Quickstart: Create and configure Azure DDoS Network Protection using Bicep
-This quickstart describes how to use Bicep to create a distributed denial of service (DDoS) protection plan and virtual network (VNet), then enable the protection plan for the VNet. An Azure DDoS Protection Standard plan defines a set of virtual networks that have DDoS protection enabled across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
+This quickstart describes how to use Bicep to create a distributed denial of service (DDoS) protection plan and virtual network (VNet), then enable the protection plan for the VNet. An Azure DDoS Network Protection plan defines a set of virtual networks that have DDoS protection enabled across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
[!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]
ddos-protection Manage Ddos Protection Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-cli.md
Title: Create and configure an Azure DDoS Protection plan using Azure CLI
+ Title: Create and configure an Azure DDoS Network Protection plan using Azure CLI
description: Learn how to create a DDoS Protection Plan using Azure CLI documentationcenter: na
na+ Previously updated : 08/12/2022 Last updated : 10/12/2022 -
-# Quickstart: Create and configure Azure DDoS Protection Standard using Azure CLI
+# Quickstart: Create and configure Azure DDoS Network Protection using Azure CLI
-Get started with Azure DDoS Protection Standard by using Azure CLI.
+Get started with Azure DDoS Network Protection by using Azure CLI.
-A DDoS protection plan defines a set of virtual networks that have DDoS protection standard enabled, across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
+A DDoS protection plan defines a set of virtual networks that have DDoS Network Protection enabled across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
In this quickstart, you'll create a DDoS protection plan and link it to a virtual network.
ddos-protection Manage Ddos Protection Powershell Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-powershell-ip.md
+
+ Title: 'Quickstart: Create and configure Azure DDoS IP Protection using PowerShell'
+description: Learn how to create Azure DDoS IP Protection using PowerShell
++++ Last updated : 10/12/2022++++
+# Quickstart: Create and configure Azure DDoS IP Protection using Azure PowerShell
+
+Get started with Azure DDoS IP Protection by using Azure PowerShell.
+In this quickstart, you'll enable DDoS IP Protection on a public IP address by using Azure PowerShell.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure PowerShell installed locally or Azure Cloud Shell
+- If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 8.3.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
++++
+## Enable DDoS IP Protection for a public IP address
+
+You can enable DDoS IP Protection when creating a public IP address. In this example, we'll name our public IP address _myStandardPublicIP_:
+
+```azurepowershell-interactive
+#Creates the resource group
+New-AzResourceGroup -Name MyResourceGroup -Location eastus
+
+#Creates the IP address and enables DDoS IP Protection
+New-AzPublicIpAddress -Name myStandardPublicIP -ResourceGroupName MyResourceGroup -Sku Standard -Location "East US" -AllocationMethod Static -DdosProtectionMode Enabled
+```
+> [!NOTE]
+> DDoS IP Protection is enabled only on Public IP Standard SKU.
+
+### Enable DDoS IP Protection for an existing public IP address
+
+You can also enable DDoS IP Protection on an existing public IP address:
+
+```azurepowershell-interactive
+#Gets the public IP address
+$publicIp = Get-AzPublicIpAddress -Name myStandardPublicIP -ResourceGroupName MyResourceGroup
+
+#Enables DDoS IP Protection for the public IP address
+$publicIp.DdosSettings.ProtectionMode = 'Enabled'
+
+#Updates public IP address
+Set-AzPublicIpAddress -PublicIpAddress $publicIp
+```
++
+## Validate and test
+
+Check the details of your public IP address and verify that DDoS IP Protection is enabled.
+
+```azurepowershell-interactive
+#Gets the public IP address
+$publicIp = Get-AzPublicIpAddress -Name myStandardPublicIP -ResourceGroupName MyResourceGroup
+
+#Checks the status of the public IP address
+$protectionMode = $publicIp.DdosSettings.ProtectionMode
+
+#Returns the status of the public IP address
+$protectionMode
+
+```
+## Disable DDoS IP Protection for an existing public IP address
+
+```azurepowershell-interactive
+$publicIp = Get-AzPublicIpAddress -Name myStandardPublicIP -ResourceGroupName MyResourceGroup
+
+$publicIp.DdosSettings.ProtectionMode = 'Disabled'
+
+Set-AzPublicIpAddress -PublicIpAddress $publicIp
+```
+> [!NOTE]
+> When changing DDoS IP protection from **Enabled** to **Disabled**, telemetry for the public IP resource will not be available.
+
+## Clean up resources
+
+You can keep your resources for the next tutorial. If no longer needed, delete the _MyResourceGroup_ resource group. When you delete the resource group, you also delete the public IP address and all its related resources.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name MyResourceGroup
+```
+## Next steps
+
+In this quickstart, you created:
+* A resource group
+* A public IP address
+
+You enabled DDoS IP Protection using Azure PowerShell.
+To learn how to view and configure telemetry for your DDoS protection plan, continue to the tutorials.
+
+> [!div class="nextstepaction"]
+> [View and configure DDoS protection telemetry](telemetry.md)
ddos-protection Manage Ddos Protection Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-powershell.md
Title: Create and configure an Azure DDoS Protection plan using Azure PowerShell
+ Title: Create and configure Azure DDoS Network Protection using Azure PowerShell
description: Learn how to create a DDoS Protection Plan using Azure PowerShell documentationcenter: na
na Previously updated : 04/18/2022 Last updated : 10/12/2022 --+
-# Quickstart: Create and configure Azure DDoS Protection Standard using Azure PowerShell
+# Quickstart: Create and configure Azure DDoS Network Protection using Azure PowerShell
-Get started with Azure DDoS Protection Standard by using Azure PowerShell.
+Get started with Azure DDoS Network Protection by using Azure PowerShell.
-A DDoS protection plan defines a set of virtual networks that have DDoS protection standard enabled, across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
+A DDoS protection plan defines a set of virtual networks that have DDoS Network Protection enabled across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
In this quickstart, you'll create a DDoS protection plan and link it to a virtual network.
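As a preview of what the quickstart covers, here's a minimal Azure PowerShell sketch that creates a plan and a new virtual network protected by it. The resource names, location, and address space are placeholder assumptions, and the sketch assumes your Az.Network version supports the `-DdosProtectionPlanId` and `-EnableDdosProtection` parameters.

```azurepowershell-interactive
# Placeholder names - replace them with your own.
New-AzResourceGroup -Name MyResourceGroup -Location eastus

# Create the DDoS protection plan.
$plan = New-AzDdosProtectionPlan -ResourceGroupName MyResourceGroup -Name MyDdosProtectionPlan -Location eastus

# Create a virtual network with DDoS Network Protection enabled and linked to the plan.
$subnet = New-AzVirtualNetworkSubnetConfig -Name default -AddressPrefix 10.0.0.0/24
New-AzVirtualNetwork -ResourceGroupName MyResourceGroup -Name MyVnet -Location eastus `
  -AddressPrefix 10.0.0.0/16 -Subnet $subnet `
  -DdosProtectionPlanId $plan.Id -EnableDdosProtection
```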
ddos-protection Manage Ddos Protection Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-template.md
Title: Create and enable an Azure DDoS Protection plan using an Azure Resource Manager template (ARM template).
+ Title: 'Quickstart: Create and configure Azure DDoS Network Protection using ARM template.'
description: Learn how to create and enable an Azure DDoS Protection plan using an Azure Resource Manager template (ARM template). documentationcenter: na
na -- Previously updated : 08/12/2022++ Last updated : 10/12/2022
-# Quickstart: Create an Azure DDoS Protection Standard using ARM template
+# Quickstart: Create and configure Azure DDoS Network Protection using ARM template
-This quickstart describes how to use an Azure Resource Manager template (ARM template) to create a distributed denial of service (DDoS) protection plan and virtual network (VNet), then enables the protection plan for the VNet. An Azure DDoS Protection Standard plan defines a set of virtual networks that have DDoS protection enabled across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to create a distributed denial of service (DDoS) protection plan and virtual network (VNet), then enables the protection plan for the VNet. An Azure DDoS Network Protection plan defines a set of virtual networks that have DDoS protection enabled across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection.md
Title: Manage Azure DDoS Protection Standard using the Azure portal
-description: Learn how to use Azure DDoS Protection Standard to mitigate an attack.
--
-tags: azure-resource-manager
-
+ Title: 'Quickstart: Create and configure Azure DDoS Network Protection using the Azure portal'
+description: Learn how to use Azure DDoS Network Protection to mitigate an attack.
++ ---- Previously updated : 05/04/2022---+ Last updated : 10/12/2022+
-# Quickstart: Create and configure Azure DDoS Protection Standard
+# Quickstart: Create and configure Azure DDoS Network Protection using the Azure portal
-Get started with Azure DDoS Protection Standard by using the Azure portal.
+Get started with Azure DDoS Network Protection by using the Azure portal.
-A DDoS protection plan defines a set of virtual networks that have DDoS Protection Standard enabled, across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions under a single AAD tenant to the same plan.
+A DDoS protection plan defines a set of virtual networks that have DDoS Network Protection enabled across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions under a single Azure AD tenant to the same plan.
In this quickstart, you'll create a DDoS protection plan and link it to a virtual network.
In this quickstart, you'll create a DDoS protection plan and link it to a virtua
1. Select **Add**. 1. Select **Next: Security**.
-1. Select **Enable** on the **DDoS Protection Standard** radio.
+1. Select **Enable** on the **DDoS Network Protection** radio.
1. Select **MyDdosProtectionPlan** from the **DDoS protection plan** pane. The plan you select can be in the same, or different subscription than the virtual network, but both subscriptions must be associated to the same Azure Active Directory tenant. 1. Select **Review + create** then **Create**.
In this quickstart, you'll create a DDoS protection plan and link it to a virtua
### Enable DDoS protection for an existing virtual network 1. Create a DDoS protection plan by completing the steps in [Create a DDoS protection plan](#create-a-ddos-protection-plan), if you don't have an existing DDoS protection plan.
-1. Enter the name of the virtual network that you want to enable DDoS Protection Standard for in the **Search resources, services, and docs box** at the top of the Azure portal. When the name of the virtual network appears in the search results, select it.
+1. Enter the name of the virtual network that you want to enable DDoS Network Protection for in the **Search resources, services, and docs box** at the top of the Azure portal. When the name of the virtual network appears in the search results, select it.
1. Select **DDoS protection**, under **Settings**. 1. Select **Enable**. Under **DDoS protection plan**, select an existing DDoS protection plan, or the plan you created in step 1, and then click **Save**. The plan you select can be in the same, or different subscription than the virtual network, but both subscriptions must be associated to the same Azure Active Directory tenant.
Azure Firewall Manager is a platform to manage and protect your network resource
## Enable DDoS protection for all virtual networks
-This [built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94de2ad3-e0c1-4caf-ad78-5d47bbc83d3d) will detect any virtual networks in a defined scope that don't have DDoS Protection Standard enabled. This policy will then optionally create a remediation task that will create the association to protect the Virtual Network. See [Azure Policy built-in definitions for Azure DDoS Protection Standard](policy-reference.md) for full list of built-in policies.
+This [built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94de2ad3-e0c1-4caf-ad78-5d47bbc83d3d) will detect any virtual networks in a defined scope that don't have DDoS Network Protection enabled. This policy can then optionally create a remediation task that creates the association to protect the virtual network. See [Azure Policy built-in definitions for Azure DDoS Network Protection](policy-reference.md) for a full list of built-in policies.
## Validate and test
You can keep your resources for the next tutorial. If no longer needed, delete t
To disable DDoS protection for a virtual network:
-1. Enter the name of the virtual network you want to disable DDoS protection standard for in the **Search resources, services, and docs box** at the top of the portal. When the name of the virtual network appears in the search results, select it.
-1. Under **DDoS Protection Standard**, select **Disable**.
+1. Enter the name of the virtual network you want to disable DDoS Network Protection for in the **Search resources, services, and docs box** at the top of the portal. When the name of the virtual network appears in the search results, select it.
+1. Under **DDoS Network Protection**, select **Disable**.
> [!NOTE] > If you want to delete a DDoS protection plan, you must first dissociate all virtual networks from it.
ddos-protection Manage Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-permissions.md
Title: Azure DDoS Protection Plan permissions
-description: Learn how to manage permission in a protection plan.
+description: Learn how to manage permissions in a DDoS Protection plan.
documentationcenter: na na+ Previously updated : 09/08/2020 Last updated : 10/12/2022 - # Manage DDoS Protection Plans: permissions and restrictions
A DDoS protection plan works across regions and subscriptions. The same plan can
## Prerequisites -- Before you can complete the steps in this tutorial, you must first create a [Azure DDoS Standard protection plan](manage-ddos-protection.md).
+- Before you can complete the steps in this tutorial, you must first create an [Azure DDoS Protection plan](manage-ddos-protection.md).
## Permissions
To enable DDoS protection for a virtual network, your account must also be assig
Creation of more than one plan is not required for most organizations. A plan cannot be moved between subscriptions. If you want to change the subscription a plan is in, you have to delete the existing plan and create a new one.
-For customers who have various subscriptions, and who want to ensure a single plan is deployed across their tenant for cost control, you can use Azure Policy to [restrict creation of Azure DDoS Protection Standard plans](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/Azure%20Policy%20Definitions/Restrict%20creation%20of%20Azure%20DDoS%20Protection%20Standard%20Plans%20with%20Azure%20Policy). This policy will block the creation of any DDoS plans, unless the subscription has been previously marked as an exception. This policy will also show a list of all subscriptions that have a DDoS plan deployed but should not, marking them as out of compliance.
+For customers who have multiple subscriptions, and who want to ensure a single plan is deployed across their tenant for cost control, you can use Azure Policy to [restrict creation of Azure DDoS Protection plans](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/Azure%20Policy%20Definitions/Restrict%20creation%20of%20Azure%20DDoS%20Protection%20Standard%20Plans%20with%20Azure%20Policy). This policy blocks the creation of any DDoS plans unless the subscription has been previously marked as an exception. This policy also shows a list of all subscriptions that have a DDoS plan deployed but shouldn't, marking them as out of compliance.
## Next steps
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
Title: Built-in policy definitions for Azure DDoS Protection Standard
-description: Lists Azure Policy built-in policy definitions for Azure DDoS Protection Standard. These built-in policy definitions provide common approaches to managing your Azure resources.
+ Title: Azure Policy built-in definitions for Azure DDoS Protection
+description: Lists Azure Policy built-in policy definitions for Azure DDoS Protection. These built-in policy definitions provide common approaches to managing your Azure resources.
documentationcenter: na na Previously updated : 09/12/2022 Last updated : 10/12/2022
-# Azure Policy built-in definitions for Azure DDoS Protection Standard
+# Azure Policy built-in definitions for Azure DDoS Protection
This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
-definitions for Azure DDoS Protection Standard. For additional Azure Policy built-ins for other services, see
+definitions for Azure DDoS Protection. For additional Azure Policy built-ins for other services, see
[Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md). The name of each built-in policy definition links to the policy definition in the Azure portal. Use the link in the **Version** column to view the source on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
-## Azure DDoS Protection Standard
+## Azure DDoS Protection
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Virtual networks should be protected by Azure DDoS Protection Standard](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94de2ad3-e0c1-4caf-ad78-5d47bbc83d3d)|Protect your virtual networks against volumetric and protocol attacks with Azure DDoS Protection Standard. For more information, visit [https://aka.ms/ddosprotectiondocs](./ddos-protection-overview.md).|Modify, Audit, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkDdosStandard_Audit.json)|
-|[Public IP addresses should have resource logs enabled for Azure DDoS Protection Standard](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F752154a7-1e0f-45c6-a880-ac75a7e4f648)|Enable resource logs for public IP addresses in diagnostic settings to stream to a Log Analytics workspace. Get detailed visibility into attack traffic and actions taken to mitigate DDoS attacks via notifications, reports and flow logs.|AuditIfNotExists, DeployIfNotExists, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/PublicIpDdosLogging_Audit.json)|
+|[Virtual networks should be protected by Azure DDoS Protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94de2ad3-e0c1-4caf-ad78-5d47bbc83d3d)|Protect your virtual networks against volumetric and protocol attacks with Azure DDoS Protection. For more information, visit [https://aka.ms/ddosprotectiondocs](./ddos-protection-overview.md).|Modify, Audit, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkDdosStandard_Audit.json)|
+|[Public IP addresses should have resource logs enabled for Azure DDoS Protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F752154a7-1e0f-45c6-a880-ac75a7e4f648)|Enable resource logs for public IP addresses in diagnostic settings to stream to a Log Analytics workspace. Get detailed visibility into attack traffic and actions taken to mitigate DDoS attacks via notifications, reports and flow logs.|AuditIfNotExists, DeployIfNotExists, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/PublicIpDdosLogging_Audit.json)|
## Next steps
ddos-protection Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/telemetry.md
Title: 'Tutorial: View and configure DDoS protection telemetry for Azure DDoS Protection Standard'
-description: Learn how to view and configure DDoS protection telemetry for Azure DDoS Protection Standard.
+ Title: 'Tutorial: View and configure DDoS protection telemetry for Azure DDoS Protection'
+description: Learn how to view and configure DDoS protection telemetry for Azure DDoS Protection.
documentationcenter: na na+ Previously updated : 09/14/2022 Last updated : 10/12/2022 -
-# Tutorial: View and configure DDoS protection telemetry
+# Tutorial: View and configure Azure DDoS protection telemetry
-Azure DDoS Protection standard provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
+Azure DDoS Protection provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
In this tutorial, you'll learn how to: > [!div class="checklist"]
-> * View DDoS protection telemetry
-> * View DDoS mitigation policies
-> * Validate and test DDoS protection telemetry
-
+> * View Azure DDoS Protection telemetry
+> * View Azure DDoS Protection mitigation policies
+> * Validate and test Azure DDoS Protection telemetry
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - ## Prerequisites
+* Before you can complete the steps in this tutorial, you must first create an [Azure DDoS Protection plan](manage-ddos-protection.md). DDoS Network Protection must be enabled on a virtual network or DDoS IP Protection must be enabled on a public IP address.
+* DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.
-- Before you can complete the steps in this tutorial, you must first create a [Azure DDoS Standard protection plan](manage-ddos-protection.md) and DDoS Protection Standard must be enabled on a virtual network.-- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine. -
-## View DDoS protection telemetry
+## View Azure DDoS Protection telemetry
Telemetry for an attack is provided through Azure Monitor in real time. While [mitigation triggers](#view-ddos-mitigation-policies) for TCP SYN, TCP & UDP are available during peace-time, other telemetry is available only when a public IP address has been under mitigation.
You can view DDoS telemetry for a protected public IP address through three diff
### Metrics The metric names present different packet types, and bytes vs. packets, with a basic construct of tag names on each metric as follows:-- **Dropped tag name** (for example, **Inbound Packets Dropped DDoS**): The number of packets dropped/scrubbed by the DDoS protection system.-- **Forwarded tag name** (for example **Inbound Packets Forwarded DDoS**): The number of packets forwarded by the DDoS system to the destination VIP – traffic that was not filtered.-- **No tag name** (for example **Inbound Packets DDoS**): The total number of packets that came into the scrubbing system – representing the sum of the packets dropped and forwarded.+
+* **Dropped tag name** (for example, **Inbound Packets Dropped DDoS**): The number of packets dropped/scrubbed by the DDoS protection system.
+
+* **Forwarded tag name** (for example **Inbound Packets Forwarded DDoS**): The number of packets forwarded by the DDoS system to the destination VIP – traffic that wasn't filtered.
+
+* **No tag name** (for example **Inbound Packets DDoS**): The total number of packets that came into the scrubbing system – representing the sum of the packets dropped and forwarded.
+ > [!NOTE] > While multiple options for **Aggregation** are displayed on Azure portal, only the aggregation types listed in the table below are supported for each metric. We apologize for this confusion and we are working to resolve it.
-The following [metrics](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkpublicipaddresses) are available for Azure DDoS Protection Standard. These metrics are also exportable via diagnostic settings (see [View and configure DDoS diagnostic logging](diagnostic-logging.md)).
+The following [metrics](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkpublicipaddresses) are available for Azure DDoS Protection. These metrics are also exportable via diagnostic settings. For more information, see [View and configure DDoS diagnostic logging](diagnostic-logging.md).
| Metric | Metric Display Name | Unit | Aggregation Type | Description | | | | | | |
-| BytesDroppedDDoS | Inbound bytes dropped DDoS | BytesPerSecond | Maximum | Inbound bytes dropped DDoS|
+| BytesDroppedDDoS | Inbound bytes dropped DDoS | BytesPerSecond | Maximum | Inbound bytes dropped DDoS|
| BytesForwardedDDoS | Inbound bytes forwarded DDoS | BytesPerSecond | Maximum | Inbound bytes forwarded DDoS | | BytesInDDoS | Inbound bytes DDoS | BytesPerSecond | Maximum | Inbound bytes DDoS | | DDoSTriggerSYNPackets | Inbound SYN packets to trigger DDoS mitigation | CountPerSecond | Maximum | Inbound SYN packets to trigger DDoS mitigation |
The following [metrics](../azure-monitor/essentials/metrics-supported.md#microso
| UDPPacketsForwardedDDoS | Inbound UDP packets forwarded DDoS | CountPerSecond | Maximum | Inbound UDP packets forwarded DDoS | | UDPPacketsInDDoS | Inbound UDP packets DDoS | CountPerSecond | Maximum | Inbound UDP packets DDoS | - ### View metrics from DDoS protection plan 1. Sign in to the [Azure portal](https://portal.azure.com/) and select your DDoS protection plan. 1. On the Azure portal menu, select or search for and select **DDoS protection plans** then select your DDoS protection plan. 1. Under **Monitoring**, select **Metrics**.
-1. Click **Add metric** then click **Scope**.
-1. In the **Select a scope** menu select the **Subscription** that contains the public IP address you want to log.
+1. Select **Add metric** then select **Scope**.
+1. In the Select a scope menu, select the **Subscription** that contains the public IP address you want to log.
1. Select **Public IP Address** for **Resource type** then select the specific public IP address you want to log metrics for, and then select **Apply**. 1. For **Metric** select **Under DDoS attack or not**. 1. Select the **Aggregation** type as **Max**. :::image type="content" source="./media/ddos-attack-telemetry/ddos-metrics-menu.png" alt-text="Screenshot of creating DDoS protection metrics menu." lightbox="./media/ddos-attack-telemetry/ddos-metrics-menu.png":::
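As an alternative to the portal steps above, you can retrieve the same metric with Azure PowerShell. This sketch assumes a public IP address named *myStandardPublicIP* in a resource group named *MyResourceGroup*, and that the underlying metric name for **Under DDoS attack or not** is `IfUnderDDoSAttack`.

```azurepowershell-interactive
# Hypothetical resource names - replace them with your own.
$publicIp = Get-AzPublicIpAddress -Name myStandardPublicIP -ResourceGroupName MyResourceGroup

# Retrieve the 'Under DDoS attack or not' metric for the last hour, aggregated as Maximum.
Get-AzMetric -ResourceId $publicIp.Id `
  -MetricName 'IfUnderDDoSAttack' `
  -TimeGrain 00:05:00 `
  -AggregationType Maximum `
  -StartTime (Get-Date).AddHours(-1) `
  -EndTime (Get-Date)
```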
-### View metrics from virtual network
+### View metrics from virtual network
1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to your virtual network that has DDoS protection enabled. 1. Under **Monitoring**, select **Metrics**.
-1. Click **Add metric** then click **Scope**.
-1. In the **Select a scope** menu select the **Subscription** that contains the public IP address you want to log.
+1. Select **Add metric**, and then select **Scope**.
+1. In the **Select a scope** menu, select the **Subscription** that contains the public IP address you want to log.
1. Select **Public IP Address** for **Resource type** then select the specific public IP address you want to log metrics for, and then select **Apply**. 1. Under **Metric** select your chosen metric then under **Aggregation** select type as **Max**. >[!NOTE]
->To filter IP Addresses select **Add filter**. Under **Property**, select **Protected IP Address**, and the operator should be set to **=**. Under **Values**, you will see a dropdown of public IP addresses, associated with the virtual network, that are protected by DDoS protection.
+>To filter IP addresses, select **Add filter**. Under **Property**, select **Protected IP Address**, and set the operator to **=**. Under **Values**, you'll see a dropdown of the public IP addresses associated with the virtual network that are protected by Azure DDoS Protection.
:::image type="content" source="./media/ddos-attack-telemetry/vnet-ddos-metrics.png" alt-text="Screenshot of DDoS diagnostic settings." lightbox="./media/ddos-attack-telemetry/vnet-ddos-metrics.png":::
The following [metrics](../azure-monitor/essentials/metrics-supported.md#microso
1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to your public IP address. 1. On the Azure portal menu, select or search for and select **Public IP addresses** then select your public IP address. 1. Under **Monitoring**, select **Metrics**.
-1. Click **Add metric** then click **Scope**.
-1. In the **Select a scope** menu select the **Subscription** that contains the public IP address you want to log.
+1. Select **Add metric**, and then select **Scope**.
+1. In the **Select a scope** menu, select the **Subscription** that contains the public IP address you want to log.
1. Select **Public IP Address** for **Resource type** then select the specific public IP address you want to log metrics for, and then select **Apply**. 1. Under **Metric** select your chosen metric then under **Aggregation** select type as **Max**.
+>[!NOTE]
+>When you change DDoS IP protection from **enabled** to **disabled**, telemetry for the public IP resource won't be available.
## View DDoS mitigation policies
-DDoS Protection Standard applies three auto-tuned mitigation policies (TCP SYN, TCP & UDP) for each public IP address of the protected resource, in the virtual network that has DDoS protection enabled. You can view the policy thresholds by selecting the **Inbound TCP packets to trigger DDoS mitigation** and **Inbound UDP packets to trigger DDoS mitigation** metrics with **aggregation** type as 'Max', as shown in the following picture:
-
+Azure DDoS Protection applies three auto-tuned mitigation policies (TCP SYN, TCP & UDP) for each public IP address of the protected resource in the virtual network that has DDoS protection enabled. You can view the policy thresholds by selecting the **Inbound TCP packets to trigger DDoS mitigation** and **Inbound UDP packets to trigger DDoS mitigation** metrics with the **Aggregation** type set to 'Max', as shown in the following picture:
:::image type="content" source="./media/manage-ddos-protection/view-mitigation-policies.png" alt-text="Screenshot of viewing mitigation policies." lightbox="./media/manage-ddos-protection/view-mitigation-policies.png"::: ## Validate and test
To simulate a DDoS attack to validate DDoS protection telemetry, see [Validate D
In this tutorial, you learned how to: -- Configure alerts for DDoS protection metrics-- View DDoS protection telemetry-- View DDoS mitigation policies-- Validate and test DDoS protection telemetry
+* Configure alerts for DDoS protection metrics
+* View DDoS protection telemetry
+* View DDoS mitigation policies
+* Validate and test DDoS protection telemetry
To learn how to configure attack mitigation reports and flow logs, continue to the next tutorial.
ddos-protection Test Through Simulations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/test-through-simulations.md
na+ Previously updated : 04/21/2022 Last updated : 10/12/2022 - # Test with simulation partners
Our testing partners' simulation environments are built within Azure. You can on
## Prerequisites -- Before you can complete the steps in this tutorial, you must first create a [Azure DDoS Standard protection plan](manage-ddos-protection.md) with protected public IP addresses.
+- Before you can complete the steps in this tutorial, you must first create an [Azure DDoS Protection plan](manage-ddos-protection.md) with protected public IP addresses.
- For BreakingPoint Cloud, you must first [create an account](https://www.ixiacom.com/products/breakingpoint-cloud). ## BreakingPoint Cloud
ddos-protection Types Of Attacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/types-of-attacks.md
Title: Types of attacks Azure DDoS Protection Standard mitigates
-description: Learn what types of attacks Azure DDoS Protection Standard protects against.
+ Title: 'Types of attacks Azure DDoS Protection mitigates'
+description: Learn what types of attacks Azure DDoS Protection protects against.
documentationcenter: na na+ Previously updated : 09/08/2020 Last updated : 10/12/2022 -
-# Types of DDoS attacks overview
+# Types of attacks Azure DDoS Protection mitigates
-DDoS Protection Standard can mitigate the following types of attacks:
+Azure DDoS Protection can mitigate the following types of attacks:
-- **Volumetric attacks**: These attacks flood the network layer with a substantial amount of seemingly legitimate traffic. They include UDP floods, amplification floods, and other spoofed-packet floods. DDoS Protection Standard mitigates these potential multi-gigabyte attacks by absorbing and scrubbing them, with Azure's global network scale, automatically.-- **Protocol attacks**: These attacks render a target inaccessible, by exploiting a weakness in the layer 3 and layer 4 protocol stack. They include SYN flood attacks, reflection attacks, and other protocol attacks. DDoS Protection Standard mitigates these attacks, differentiating between malicious and legitimate traffic, by interacting with the client, and blocking malicious traffic. -- **Resource (application) layer attacks**: These attacks target web application packets, to disrupt the transmission of data between hosts. They include HTTP protocol violations, SQL injection, cross-site scripting, and other layer 7 attacks. Use a Web Application Firewall, such as the Azure [Application Gateway web application firewall](../web-application-firewall/ag/ag-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json), as well as DDoS Protection Standard to provide defense against these attacks. There are also third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).
+- **Volumetric attacks**: These attacks flood the network layer with a substantial amount of seemingly legitimate traffic. They include UDP floods, amplification floods, and other spoofed-packet floods. DDoS Protection mitigates these potential multi-gigabyte attacks by absorbing and scrubbing them, with Azure's global network scale, automatically.
+- **Protocol attacks**: These attacks render a target inaccessible, by exploiting a weakness in the layer 3 and layer 4 protocol stack. They include SYN flood attacks, reflection attacks, and other protocol attacks. DDoS Protection mitigates these attacks, differentiating between malicious and legitimate traffic, by interacting with the client, and blocking malicious traffic.
+- **Resource (application) layer attacks**: These attacks target web application packets, to disrupt the transmission of data between hosts. They include HTTP protocol violations, SQL injection, cross-site scripting, and other layer 7 attacks. Use a Web Application Firewall, such as the Azure [Application Gateway web application firewall](../web-application-firewall/ag/ag-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json), as well as DDoS Protection to provide defense against these attacks. There are also third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).
-## Azure DDoS Protection Standard
+## Azure DDoS Protection
-DDoS Protection Standard protects resources in a virtual network including public IP addresses associated with virtual machines, load balancers, and application gateways. When coupled with the Application Gateway web application firewall, or a third-party web application firewall deployed in a virtual network with a public IP, DDoS Protection Standard can provide full layer 3 to layer 7 mitigation capability.
+Azure DDoS Protection protects resources in a virtual network including public IP addresses associated with virtual machines, load balancers, and application gateways. When coupled with the Application Gateway web application firewall, or a third-party web application firewall deployed in a virtual network with a public IP, Azure DDoS Protection can provide full layer 3 to layer 7 mitigation capability.
## Next steps -- Learn how to [create a DDoS protection plan](manage-ddos-protection.md).
+- Learn how to [create a DDoS protection plan](manage-ddos-protection.md).
dedicated-hsm Deployment Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/deployment-architecture.md
The HSMs are distributed across Microsoft's data centers and can be easily pro
* Canada Central * South Central US * Southeast Asia
-* East Asia
* India Central * India South * Japan East
defender-for-cloud Alert Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alert-validation.md
Title: Alert validation in Microsoft Defender for Cloud description: Learn how to validate that your security alerts are correctly configured in Microsoft Defender for Cloud Previously updated : 07/04/2022 Last updated : 10/06/2022 # Alert validation in Microsoft Defender for Cloud
Last updated 07/04/2022
This document helps you learn how to verify if your system is properly configured for Microsoft Defender for Cloud alerts. ## What are security alerts?+ Alerts are the notifications that Defender for Cloud generates when it detects threats on your resources. It prioritizes and lists the alerts along with the information needed to quickly investigate the problem. Defender for Cloud also provides recommendations for how you can remediate an attack.
-For more information, see [Security alerts in Defender for Cloud](alerts-overview.md) and [Managing and responding to security alerts](managing-and-responding-alerts.md)
+For more information, see [Security alerts in Defender for Cloud](alerts-overview.md) and [Managing and responding to security alerts](managing-and-responding-alerts.md).
+
+## Prerequisites
+
+To receive all the alerts, your machines and the connected Log Analytics workspaces need to be in the same tenant.
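As a quick check that alerts are reaching a subscription, you can also list the alerts Defender for Cloud has already raised by querying Azure Resource Graph. The following is an illustrative Python sketch, assuming the `azure-identity` and `azure-mgmt-resourcegraph` packages; `<subscription-id>` is a placeholder and the `properties` field names follow the common alert schema, so adjust them if your results differ.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

# List the security alerts Defender for Cloud has raised in the subscription.
query = """
securityresources
| where type == 'microsoft.security/locations/alerts'
| extend alertName = tostring(properties.alertDisplayName),
         severity = tostring(properties.severity),
         status = tostring(properties.status)
| project alertName, severity, status
"""

response = client.resources(QueryRequest(subscriptions=["<subscription-id>"], query=query))
for row in response.data:
    print(row)
```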
## Generate sample security alerts
defender-for-cloud Apply Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/apply-security-baseline.md
Title: Harden your Windows and Linux OS with Azure security baseline and Microsoft Defender for Cloud
-description: Learn how Microsoft Defender for Cloud uses the guest configuration to compare your OS hardening with the guidance from Azure Security Benchmark
+description: Learn how Microsoft Defender for Cloud uses the guest configuration to compare your OS hardening with the guidance from Microsoft Cloud Security Benchmark
+ Last updated 11/09/2021- # Apply Azure security baselines to machines To reduce a machine's attack surface and avoid known risks, it's important to configure the operating system (OS) as securely as possible.
-The Azure Security Benchmark has guidance for OS hardening which has led to security baseline documents for [Windows](../governance/policy/samples/guest-configuration-baseline-windows.md) and [Linux](../governance/policy/samples/guest-configuration-baseline-linux.md).
+The Microsoft Cloud Security Benchmark has guidance for OS hardening, which has led to security baseline documents for [Windows](../governance/policy/samples/guest-configuration-baseline-windows.md) and [Linux](../governance/policy/samples/guest-configuration-baseline-linux.md).
Use the security recommendations described in this article to assess the machines in your environment and:
Microsoft Defender for Cloud includes two recommendations that check whether the
- For **Windows** machines, [Vulnerabilities in security configuration on your Windows machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8c3d9ad0-3639-4686-9cd2-2b2ab2609bda) compares the configuration with the [Windows security baseline](../governance/policy/samples/guest-configuration-baseline-windows.md). - For **Linux** machines, [Vulnerabilities in security configuration on your Linux machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1f655fb7-63ca-4980-91a3-56dbc2b715c6) compares the configuration with the [Linux security baseline](../governance/policy/samples/guest-configuration-baseline-linux.md).
-These recommendations use the guest configuration feature of Azure Policy to compare the OS configuration of a machine with the baseline defined in the [Azure Security Benchmark](/security/benchmark/azure/overview).
+These recommendations use the guest configuration feature of Azure Policy to compare the OS configuration of a machine with the baseline defined in the [Microsoft Cloud Security Benchmark](/security/benchmark/azure/overview).
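To track these two recommendations at scale, one option is to query the underlying assessments with Azure Resource Graph. The following is a minimal Python sketch, assuming the `azure-identity` and `azure-mgmt-resourcegraph` packages; the assessment keys come from the recommendation links above, while `<subscription-id>` and the `properties` field paths are assumptions based on the common assessments schema.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

# Assessment keys taken from the recommendations above:
#   8c3d9ad0-... = Windows baseline, 1f655fb7-... = Linux baseline.
query = """
securityresources
| where type == 'microsoft.security/assessments'
| where name in ('8c3d9ad0-3639-4686-9cd2-2b2ab2609bda', '1f655fb7-63ca-4980-91a3-56dbc2b715c6')
| extend status = tostring(properties.status.code),
         machine = tostring(properties.resourceDetails.Id)
| where status == 'Unhealthy'
| project machine, name, status
"""

response = client.resources(QueryRequest(subscriptions=["<subscription-id>"], query=query))
for row in response.data:
    print(row)
```

Each returned row is a machine that currently fails the Windows or Linux baseline assessment.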
## Compare machines in your subscriptions with the OS security baselines
To learn more about these configuration settings, see:
- [Windows security baseline](../governance/policy/samples/guest-configuration-baseline-windows.md) - [Linux security baseline](../governance/policy/samples/guest-configuration-baseline-linux.md)-- [Azure Security Benchmark](/security/benchmark/azure/overview)
+- [Microsoft Cloud Security Benchmark](/security/benchmark/azure/overview)
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
+
+ Title: Reference list of attack paths
+
+description: This article lists Microsoft Defender for Cloud's attack paths, organized by resource.
++ Last updated : 09/21/2022+++
+# Reference list of attack paths
+
+This article lists the attack paths, connections, and insights you might see in Microsoft Defender for Cloud. What you see in your environment depends on the resources you're protecting and your customized configuration.
+
+To learn about how to respond to these attack paths, see [Identify and remediate attack paths](how-to-manage-attack-path.md).
+
+## Attack paths
+
+### Azure VMs
+
+| Attack Path Display Name | Attack Path Description |
+|--|--|
+| Internet exposed VM has high severity vulnerabilities | Virtual machine '\[MachineName]' is reachable from the internet and has high severity vulnerabilities \[RCE] |
+| Internet exposed VM has high severity vulnerabilities and high permission to a subscription | Virtual machine '\[MachineName]' is reachable from the internet, has high severity vulnerabilities \[RCE] and \[IdentityDescription] with \[PermissionType] permission to subscription '\[SubscriptionName]' |
+| Internet exposed VM has high severity vulnerabilities and read permission to a data store with sensitive data | Virtual machine '\[MachineName]' is reachable from the internet, has high severity vulnerabilities \[RCE] and \[IdentityDescription] with read permission to \[DatabaseType] '\[DatabaseName]' containing sensitive data. For more details, you can learn how to [prioritize security actions by data sensitivity](/azure/defender-for-cloud/information-protection). |
+| Internet exposed VM has high severity vulnerabilities and read permission to a data store | Virtual machine '\[MachineName]' is reachable from the internet, has high severity vulnerabilities \[RCE] and \[IdentityDescription] with read permission to \[DatabaseType] '\[DatabaseName]'. |
+| Internet exposed VM has high severity vulnerabilities and read permission to a Key Vault | Virtual machine '\[MachineName]' is reachable from the internet, has high severity vulnerabilities \[RCE] and \[IdentityDescription] with read permission to Key Vault '\[KVName]' |
+| VM has high severity vulnerabilities and high permission to a subscription | Virtual machine '\[MachineName]' has high severity vulnerabilities \[RCE] and has high permission to subscription '\[SubscriptionName]' |
+| VM has high severity vulnerabilities and read permission to a data store with sensitive data | Virtual machine '\[MachineName]' has high severity vulnerabilities \[RCE] and \[IdentityDescription] with read permission to \[DatabaseType] '\[DatabaseName]' containing sensitive data. For more details, you can learn how to [prioritize security actions by data sensitivity](/azure/defender-for-cloud/information-protection). |
+| VM has high severity vulnerabilities and read permission to a Key Vault | Virtual machine '\[MachineName]' has high severity vulnerabilities \[RCE] and \[IdentityDescription] with read permission to Key Vault '\[KVName]' |
+| VM has high severity vulnerabilities and read permission to a data store | Virtual machine '\[MachineName]' has high severity vulnerabilities \[RCE] and \[IdentityDescription] with read permission to \[DatabaseType] '\[DatabaseName]' |
+
+### AWS VMs
+
+| Attack Path Display Name | Attack Path Description |
+|--|--|
+| Internet exposed EC2 instance has high severity vulnerabilities and high permission to an account | AWS EC2 instance '\[EC2Name]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has '\[permission]' permission to account '\[AccountName]' |
+| Internet exposed EC2 instance has high severity vulnerabilities and read permission to a DB | AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has '\[permission]' permission to DB '\[DatabaseName]'|
+| Internet exposed EC2 instance has high severity vulnerabilities and read permission to S3 bucket | Option 1 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[Rolepermission]' permission via IAM policy to S3 bucket '\[BucketName]' <br> <br> Option 2 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[S3permission]' permission via bucket policy to S3 bucket '\[BucketName]' <br> <br> Option 3 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[Rolepermission]' permission via IAM policy and '\[S3permission]' permission via bucket policy to S3 bucket '\[BucketName]'|
+| Internet exposed EC2 instance has high severity vulnerabilities and read permission to a S3 bucket with sensitive data | Option 1 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[Rolepermission]' permission via IAM policy to S3 bucket '\[BucketName]' containing sensitive data <br> <br> Option 2 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[S3permission]' permission via bucket policy to S3 bucket '\[BucketName]' containing sensitive data <br> <br> Option 3 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[Rolepermission]' permission via IAM policy and '\[S3permission]' permission via bucket policy to S3 bucket '\[BucketName]' containing sensitive data. <br><br> For more details, you can learn how to [prioritize security actions by data sensitivity](/azure/defender-for-cloud/information-protection). |
+| Internet exposed EC2 instance has high severity vulnerabilities and read permission to a KMS | Option 1 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has IAM role attached with '\[Rolepermission]' permission via IAM policy to AWS Key Management Service (KMS) '\[KeyName]' <br> <br> Option 2 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has vulnerabilities allowing remote code execution and has IAM role attached with '\[Keypermission]' permission via AWS Key Management Service (KMS) policy to key '\[KeyName]' <br> <br> Option 3 <br> AWS EC2 instance '\[MachineName]' is reachable from the internet, has vulnerabilities allowing remote code execution and has IAM role attached with '\[Rolepermission]' permission via IAM policy and '\[Keypermission]' permission via AWS Key Management Service (KMS) policy to key '\[KeyName]' |
+| Internet exposed EC2 instance has high severity vulnerabilities | AWS EC2 instance '\[EC2Name]' is reachable from the internet and has high severity vulnerabilities\[RCE] |
+
+### Azure data
+
+| Attack Path Display Name | Attack Path Description |
+|--|--|
+| Internet exposed SQL on VM has a user account with commonly used username and allows code execution on the VM | SQL on VM '\[SqlVirtualMachineName]' is reachable from the internet, has a local user account with commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM |
+| Internet exposed SQL on VM has a user account with commonly used username and known vulnerabilities | SQL on VM '\[SqlVirtualMachineName]' is reachable from the internet, has a local user account with commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs) |
+| SQL on VM has a user account with commonly used username and allows code execution on the VM | SQL on VM '\[SqlVirtualMachineName]' has a local user account with commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM |
+| SQL on VM has a user account with commonly used username and known vulnerabilities | SQL on VM '\[SqlVirtualMachineName]' has a local user account with commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs) |
+
+### AWS data
+
+| Attack Path Display Name | Attack Path Description |
+|--|--|
+| Internet exposed AWS S3 Bucket with sensitive data is publicly accessible | S3 bucket '\[BucketName]' with sensitive data is reachable from the internet and allows public read access without authorization required. For more details, you can learn how to [prioritize security actions by data sensitivity](/azure/defender-for-cloud/information-protection). |
+
+### Azure containers
+
+| Attack Path Display Name | Attack Path Description |
+|--|--|
+| Internet exposed Kubernetes pod is running a container with RCE vulnerabilities | Internet exposed Kubernetes pod '\[pod name]' in namespace '\[namespace]' is running a container '\[container name]' using image '\[image name]' which has vulnerabilities allowing remote code execution |
+| Kubernetes pod running on an internet exposed node uses host network is running a container with RCE vulnerabilities | Kubernetes pod '\[pod name]' in namespace '\[namespace]' with host network access enabled is exposed to the internet via the host network. The pod is running container '\[container name]' using image '\[image name]' which has vulnerabilities allowing remote code execution |
+
+## Insights and connections
+
+### Insights
+
+| Insight | Description | Supported entities |
+|--|--|--|
+| Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod. |
+| Contains sensitive data | Indicates that a resource contains sensitive data based on Microsoft Purview scan and applicable only if Microsoft Purview is enabled. For more details, you can learn how to [prioritize security actions by data sensitivity](/azure/defender-for-cloud/information-protection). | Azure SQL Server, Azure Storage Account, AWS S3 bucket. |
+| Has tags | Lists the resource tags of the cloud resource | All Azure and AWS resources. |
+| Installed software | Lists all software installed on the machine. This is applicable only for VMs that have Threat and vulnerability management integration with Defender for Cloud enabled and are connected to Defender for Cloud. | Azure virtual machine, AWS EC2 |
+| Allows public access | Indicates that public read access is allowed to the data store with no authorization required | Azure storage account, AWS S3 bucket |
+| Doesn't have MFA enabled | Indicates that the user account does not have a multi-factor authentication solution enabled | AAD User account, IAM user |
+| Is external user | Indicates that the user account is outside the organization's domain | AAD User account |
+| Is managed | Indicates that an identity is managed by the cloud provider | Azure Managed Identity |
+| Contains common usernames | Indicates that a SQL server has user accounts with common usernames which are prone to brute force attacks. | SQL on VM |
+| Can execute code on the host | Indicates that a SQL server allows executing code on the underlying VM using a built-in mechanism such as xp_cmdshell. | SQL on VM |
+| Has vulnerabilities | Indicates that the SQL server resource has vulnerabilities detected | SQL on VM |
+| DEASM findings | Microsoft Defender External Attack Surface Management (DEASM) internet scanning findings | Public IP |
+| Privileged container | Indicates that a Kubernetes container runs in a privileged mode | Kubernetes container |
+| Uses host network | Indicates that a Kubernetes pod uses the network namespace of its host machine | Kubernetes pod |
+| Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Kubernetes image |
+| Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Kubernetes image |
+| Public IP metadata | Lists the metadata of a public IP | Public IP |
+| Identity metadata | Lists the metadata of an identity | AAD Identity |
+
+### Connections
+
+| Connection | Description | Source entity types | Destination entity types |
+|--|--|--|--|
+| Can authenticate as | Indicates that an Azure resource can authenticate to an identity and use its privileges | Azure VM, Azure VMSS, Azure Storage Account, Azure App Services, SQL Servers | AAD Managed identity |
+| Has permission to | Indicates that an identity has permissions to a resource or a group of resources | AAD user account, Managed Identity, IAM user, EC2 instance | All Azure & AWS resources|
+| Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization | All Azure & AWS resources, All Kubernetes entities, All DevOps entities |
+| Routes traffic to | Indicates that the source entity can route network traffic to the target entity | Public IP, Load Balancer, VNET, Subnet, VPC, Internet Gateway, Kubernetes service, Kubernetes pod| Azure VM, Azure VMSS, AWS EC2, Subnet, Load Balancer, Internet gateway, Kubernetes pod, Kubernetes service |
+| Is running | Indicates that the source entity is running the target entity as a process | Azure VM, Kubernetes container | SQL, Kubernetes image, Kubernetes pod |
+| Member of | Indicates that the source identity is a member of the target identities group | AAD group, AAD user | AAD group |
+| Maintains | Indicates that the source Kubernetes entity manages the life cycle of the target Kubernetes entity | Kubernetes workload controller, Kubernetes replica set, Kubernetes stateful set, Kubernetes daemon set, Kubernetes jobs, Kubernetes cron job | Kubernetes pod |
++
+## Next steps
+
+For related information, see the following:
+- [What are the Cloud Security Graph, Attack Path Analysis, and the Cloud Security Explorer?](concept-attack-path.md)
+- [Identify and remediate attack paths](how-to-manage-attack-path.md)
+- [Cloud Security Explorer](how-to-manage-cloud-security-explorer.md)
defender-for-cloud Auto Deploy Azure Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-azure-monitoring-agent.md
Title: Deploy the Azure Monitor Agent with auto provisioning
-description: Learn how to deploy the Azure Monitor Agent on your Azure, multicloud, and on-premises servers with auto provisioning to support Microsoft Defender for Cloud protections.
+ Title: Deploy the Azure Monitor Agent with Microsoft Defender for Cloud
+description: Learn how to deploy the Azure Monitor Agent on your Azure, multicloud, and on-premises servers to support Microsoft Defender for Cloud protections.
Last updated 08/03/2022-+
-# Auto provision the Azure Monitor Agent to protect your servers with Microsoft Defender for Cloud
+# Deploy the Azure Monitor Agent to protect your servers with Microsoft Defender for Cloud
-To make sure that your server resources are secure, Microsoft Defender for Cloud uses agents installed on your servers to send information about your servers to Microsoft Defender for Cloud for analysis. You can use auto provisioning to quietly deploy the Azure Monitor Agent on your servers.
+To make sure that your server resources are secure, Microsoft Defender for Cloud uses agents installed on your servers to send information about your servers to Microsoft Defender for Cloud for analysis. You can quietly deploy the Azure Monitor Agent on your servers when you enable Defender for Servers.
-In this article, we're going to show you how to use auto provisioning to deploy the agent so that you can protect your servers.
+In this article, we're going to show you how to deploy the agent so that you can protect your servers.
## Availability
In this article, we're going to show you how to use auto provisioning to deploy
## Prerequisites
-Before you enable auto provisioning, you must have the following prerequisites:
+Before you deploy the Azure Monitor Agent (AMA) with Defender for Cloud, you must have the following prerequisites:
- Make sure your multicloud and on-premises machines have Azure Arc installed. - AWS and GCP machines
Before you enable auto provisioning, you must have the following prerequisites:
- [Enable Defender plans on the subscriptions for your AWS VMs](quickstart-onboard-aws.md) - [Enable Defender plans on the subscriptions for your GCP VMs](quickstart-onboard-gcp.md)
-## Deploy the Azure Monitor Agent with auto provisioning
+## Deploy the Azure Monitor Agent with Defender for Cloud
-To deploy the Azure Monitor Agent with auto provisioning:
+To deploy the Azure Monitor Agent with Defender for Cloud:
1. From Defender for Cloud's menu, open **Environment settings**. 1. Select the relevant subscription.
-1. Open the **Auto provisioning** page.
-
- :::image type="content" source="./media/auto-deploy-azure-monitoring-agent/select-auto-provisioning.png" alt-text="Screenshot of the auto provisioning menu item for enabling the Azure Monitor Agent.":::
-
+1. In the Monitoring coverage column of the Defender for Servers plan, select **Settings**.
1. Enable deployment of the Azure Monitor Agent: 1. For the **Log Analytics agent/Azure Monitor Agent**, select the **On** status.
- In the Configuration column, you can see the enabled agent type. When you enable auto provisioning, Defender for Cloud decides which agent to provision based on your environment. In most cases, the default is the Log Analytics agent.
-
- :::image type="content" source="./media/auto-deploy-azure-monitoring-agent/turn-on-azure-monitor-agent-auto-provision.png" alt-text="Screenshot of the auto provisioning page for enabling the Azure Monitor Agent." lightbox="media/auto-deploy-azure-monitoring-agent/turn-on-azure-monitor-agent-auto-provision.png":::
+ In the Configuration column, you can see the enabled agent type. When you enable Defender plans, Defender for Cloud decides which agent to provision based on your environment. In most cases, the default is the Log Analytics agent.
1. For the **Log Analytics agent/Azure Monitor Agent**, select **Edit configuration**.
- :::image type="content" source="./media/auto-deploy-azure-monitoring-agent/configure-azure-monitor-agent-auto-provision.png " alt-text="Screenshot of editing the Azure Monitor Agent configuration." lightbox="media/auto-deploy-azure-monitoring-agent/configure-azure-monitor-agent-auto-provision.png":::
- 1. For the Auto-provisioning configuration agent type, select **Azure Monitor Agent**.
- :::image type="content" source="./media/auto-deploy-azure-monitoring-agent/select-azure-monitor-agent-auto-provision.png" alt-text="Screenshot of selecting the Azure Monitor Agent." lightbox="media/auto-deploy-azure-monitoring-agent/select-azure-monitor-agent-auto-provision.png":::
- By default: - The Azure Monitor Agent is installed on all existing machines in the selected subscription, and on all new machines created in the subscription.
You can run both the Log Analytics and Azure Monitor Agents on the same machine,
- Each machine is billed once in Defender for Cloud, but make sure you track billing of other services connected to the Log Analytics and Azure Monitor, such as the Log Analytics workspace data ingestion. - Both agents have performance impact on the machine.
-When you enable auto provisioning, Defender for Cloud decides which agent to provision. In most cases, the default is the Log Analytics agent.
+When you enable Defender for Servers Plan 2, Defender for Cloud decides which agent to provision. In most cases, the default is the Log Analytics agent.
Learn more about [migrating to the Azure Monitor Agent](/azure/azure-monitor/agents/azure-monitor-agent-migration).
To configure a custom destination workspace for the Azure Monitor Agent:
1. From Defender for Cloud's menu, open **Environment settings**. 1. Select the relevant subscription.
-1. Open the **Auto provisioning** page.
+1. In the Monitoring coverage column of the Defender for Servers plan, select **Settings**.
1. For the **Log Analytics agent/Azure Monitor Agent**, select **Edit configuration**. 1. Select **Custom workspace**, and select the workspace that you want to send data to.
defender-for-cloud Auto Deploy Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md
Title: Configure Microsoft Defender for Cloud to automatically assess machines for vulnerabilities description: Use Microsoft Defender for Cloud to ensure your machines have a vulnerability assessment solution + Last updated 11/09/2021
Last updated 11/09/2021
# Automatically configure vulnerability assessment for your machines
-Defender for Cloud collects data from your machines using agents and extensions. Those agents and extensions *can* be installed manually (see [Manual installation of the Log Analytics agent](enable-data-collection.md#manual-agent)). However, **auto provisioning** reduces management overhead by installing all required agents and extensions on existing - and new - machines to ensure faster security coverage for all supported resources. Learn more in [Configure auto provisioning for agents and extensions from Microsoft Defender for Cloud](enable-data-collection.md).
+Defender for Cloud collects data from your machines using agents and extensions. To save you from manually installing the extensions, such as [the manual installation of the Log Analytics agent](working-with-log-analytics-agent.md#manual-agent-provisioning), Defender for Cloud reduces management overhead by installing all required extensions on existing and new machines. Learn more about [monitoring components](monitoring-components.md).
To assess your machines for vulnerabilities, you can use one of the following solutions:
To assess your machines for vulnerabilities, you can use one of the following so
1. From Defender for Cloud's menu, open **Environment settings**. 1. Select the relevant subscription.
-1. Open the **Auto provisioning** page.
-1. Set the status of auto provisioning for the vulnerability assessment for machines to **On** and select the relevant solution.
- :::image type="content" source="media/deploy-vulnerability-assessment-tvm/auto-provision-vulnerability-assessment-agent.png" alt-text="Configure auto provisioning of the threat and vulnerability management module in Microsoft Defender for Cloud.":::
+1. In the Monitoring coverage column of the Defender for Servers plan, select **Settings**.
+1. Turn on the vulnerability assessment for machines and select the relevant solution.
> [!TIP] > Defender for Cloud enables the following policy: [(Preview) Configure machines to receive a vulnerability assessment provider](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f13ce0167-8ca6-4048-8e6b-f996402e3c1b).
To assess your machines for vulnerabilities, you can use one of the following so
Defender for Cloud also offers vulnerability assessment for your: -- SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md)
+- SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
+- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-acr.md)
+- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-ecr.md)
defender-for-cloud Azure Devops Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md
+
+ Title: Configure the Microsoft Security DevOps Azure DevOps extension
+description: Learn how to configure the Microsoft Security DevOps Azure DevOps extension.
Last updated : 09/20/2022++++
+# Configure the Microsoft Security DevOps Azure DevOps extension
+
+Microsoft Security DevOps is a command line application that integrates static analysis tools into the development lifecycle. Microsoft Security DevOps installs, configures, and runs the latest versions of static analysis tools (including, but not limited to, SDL/security and compliance tools). Microsoft Security DevOps is data-driven with portable configurations that enable deterministic execution across multiple environments.
+
+Microsoft Security DevOps uses the following open-source tools:
+
+| Name | Language | License |
+|--|--|--|
+| [Bandit](https://github.com/PyCQA/bandit) | Python | [Apache License 2.0](https://github.com/PyCQA/bandit/blob/master/LICENSE) |
+| [BinSkim](https://github.com/Microsoft/binskim) | Binary--Windows, ELF | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
+| [ESlint](https://github.com/eslint/eslint) | JavaScript | [MIT License](https://github.com/eslint/eslint/blob/main/LICENSE) |
+| [Credscan](detect-credential-leaks.md) | Credential Scanner (also known as CredScan) is a tool developed and maintained by Microsoft to identify credential leaks such as those in source code and configuration files <br> common types: default passwords, SQL connection strings, Certificates with private keys | Not Open Source |
+| [Template Analyzer](https://github.com/Azure/template-analyzer) | ARM template, Bicep file | [MIT License](https://github.com/Azure/template-analyzer/blob/main/LICENSE.txt) |
+| [Terrascan](https://github.com/accurics/terrascan) | Terraform (HCL2), Kubernetes (JSON/YAML), Helm v3, Kustomize, Dockerfiles, Cloud Formation | [Apache License 2.0](https://github.com/accurics/terrascan/blob/master/LICENSE) |
+| [Trivy](https://github.com/aquasecurity/trivy) | container images, file systems, git repositories | [Apache License 2.0](https://github.com/aquasecurity/trivy/blob/main/LICENSE) |
+
+## Prerequisites
+
+- Admin privileges to the Azure DevOps organization are required to install the extension.
+
+If you don't have access to install the extension, you must request access from your Azure DevOps organization's administrator during the installation process.
+
+## Configure the Microsoft Security DevOps Azure DevOps extension
+
+**To configure the Microsoft Security DevOps Azure DevOps extension**:
+
+1. Sign in to [Azure DevOps](https://dev.azure.com/)
+
+1. Navigate to **Shopping Bag** > **Manage extensions**.
+
+ :::image type="content" source="media/msdo-azure-devops-extension/manage-extensions.png" alt-text="Screenshot that shows how to navigate to the manage extensions screen.":::
+
+1. Select **Shared**.
+
+ > [!Note]
+ > If you have already [installed the Microsoft Security DevOps extension](azure-devops-extension.md), it will be listed under the **Installed** tab.
+
+1. Select **Microsoft Security DevOps**.
+
+ :::image type="content" source="media/msdo-azure-devops-extension/marketplace-shared.png" alt-text="Screenshot that shows where to select Microsoft Security DevOps.":::
+
+1. Select **Install**.
+
+1. Select the appropriate Organization from the dropdown menu.
+
+1. Select **Install**.
+
+1. Select **Proceed to organization**.
+
+## Configure your Pipelines using YAML
+
+**To configure your pipeline using YAML**:
+
+1. Sign in to [Azure DevOps](https://dev.azure.com/)
+
+1. Select your project.
+
+1. Navigate to **Pipelines**
+
+1. Select **New pipeline**.
+
+ :::image type="content" source="media/msdo-azure-devops-extension/create-pipeline.png" alt-text="Screenshot showing where to locate create pipeline in DevOps." lightbox="media/msdo-azure-devops-extension/create-pipeline.png":::
+
+1. Select **Azure Repos Git**.
+
+ :::image type="content" source="media/msdo-azure-devops-extension/repo-git.png" alt-text="Screenshot that shows you where to navigate to, to select Azure repo git.":::
+
+1. Select the relevant repository.
+
+ :::image type="content" source="media/msdo-azure-devops-extension/repository.png" alt-text="Screenshot showing where to select your repository.":::
+
+1. Select **Starter pipeline**.
+
+ :::image type="content" source="media/msdo-azure-devops-extension/starter-piepline.png" alt-text="Screenshot showing where to select starter pipeline.":::
+
+1. Paste the following YAML into the pipeline
+
+ ```yml
+ # Starter pipeline
+ # Start with a minimal pipeline that you can customize to build and deploy your code.
+ # Add steps that build, run tests, deploy, and more:
+ # https://aka.ms/yaml
+ trigger: none
+ pool:
+ vmImage: 'windows-latest'
+ steps:
+ - task: UseDotNet@2
+ displayName: 'Use dotnet'
+ inputs:
+ version: 3.1.x
+ - task: UseDotNet@2
+ displayName: 'Use dotnet'
+ inputs:
+ version: 5.0.x
+ - task: UseDotNet@2
+ displayName: 'Use dotnet'
+ inputs:
+ version: 6.0.x
+ - task: MicrosoftSecurityDevOps@1
+ displayName: 'Microsoft Security DevOps'
+ ```
+
+1. Select **Save and run**.
+
+1. Select **Save and run** to commit the pipeline.
+
+The pipeline will run for a few minutes and save the results.
+
+## Learn more
+
+- Learn how to [create your first pipeline](/azure/devops/pipelines/create-first-pipeline?view=azure-devops&tabs=java%2Ctfs-2018-2%2Cbrowser).
+
+- Learn how to [deploy pipelines to Azure](/azure/devops/pipelines/overview-azure?toc=%2Fazure%2Fdevops%2Fcross-service%2Ftoc.json&bc=%2Fazure%2Fdevops%2Fcross-service%2Fbreadcrumb%2Ftoc.json&view=azure-devops).
+
+## Next steps
+Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
+
+Learn how to [connect your Azure DevOps](quickstart-onboard-devops.md) to Defender for Cloud.
+
+[Discover misconfigurations in Infrastructure as Code (IaC)](iac-vulnerabilities.md)
defender-for-cloud Concept Agentless Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md
+
+ Title: Agentless scanning of cloud machines using Microsoft Defender for Cloud
+description: Learn how Defender for Cloud can gather information about your multicloud compute resources without installing an agent on your machines.
++++ Last updated : 09/28/2022+++
+# Agentless scanning for machines (Preview)
+
+Microsoft Defender for Cloud maximizes coverage on OS posture issues and extends beyond the reach of agent-based assessments. With agentless scanning for VMs, you can get frictionless, wide, and instant visibility on actionable posture issues without installed agents, network connectivity requirements, or machine performance impact.
+
+Agentless scanning for VMs provides vulnerability assessment and software inventory, both powered by Defender vulnerability management, in Azure and Amazon AWS environments. Agentless scanning is available in both [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) and [Defender for Servers P2](defender-for-servers-introduction.md).
+
+## Availability
+
+| Aspect | Details |
+|||
+|Release state:|Preview|
+|Pricing:|Requires either [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#defender-for-servers-plans)|
+| Supported use cases:| :::image type="icon" source="./media/icons/yes-icon.png"::: Vulnerability assessment (powered by Defender vulnerability management)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Software inventory (powered by Defender vulnerability management) |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP accounts |
+| Operating systems: | :::image type="icon" source="./media/icons/yes-icon.png"::: Windows<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Linux |
+| Instance types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs) |
+| Encryption: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted ΓÇô platform managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted ΓÇô customer managed keys (CMK)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted |
+
+## How agentless scanning for VMs works
+
+While agent-based methods use OS APIs in runtime to continuously collect security related data, agentless scanning for VMs uses cloud APIs to collect data. Defender for Cloud takes snapshots of VM disks and does an out-of-band, deep analysis of the OS configuration and file system stored in the snapshot. The copied snapshot doesn't leave the original compute region of the VM, and the VM is never impacted by the scan.
+
+After the necessary metadata is acquired from the disk, Defender for Cloud immediately deletes the copied snapshot of the disk and sends the metadata to Microsoft engines to analyze configuration gaps and potential threats. For example, in vulnerability assessment, the analysis is done by Defender vulnerability management. The results are displayed in Defender for Cloud, seamlessly consolidating agent-based and agentless results.
+
+The scanning environment where disks are analyzed is regional, volatile, isolated, and highly secure. Disk snapshots and data unrelated to the scan aren't stored longer than is necessary to collect the metadata, typically a few minutes.
++
+## FAQ
+
+### How does scanning affect the instances?
+Since the scanning process is an out-of-band analysis of snapshots, it doesn't impact the actual workloads and isn't visible to the guest operating system.
+
+### How does scanning affect the account/subscription?
+
+The scanning process has minimal footprint on your accounts and subscriptions.
+
+| Cloud provider | Changes |
+|||
+| Azure | - Adds a "VM Scanner Operator" role assignment<br>- Adds a "vmScanners" resource with the relevant configurations used to manage the scanning process |
+| AWS | - Adds role assignment<br>- Adds authorized audience to OpenIDConnect provider<br>- Snapshots are created next to the scanned volumes, in the same account, during the scan (typically for a few minutes) |
+
+### What is the scan freshness?
+
+Each VM is scanned every 24 hours.
+
+### Which permissions are used by agentless scanning?
+
+The roles and permissions used by Defender for Cloud to perform agentless scanning on your Azure and AWS environments are listed here. In Azure, these permissions are automatically added to your subscriptions when you enable agentless scanning. In AWS, these permissions are [added to the CloudFormation stack in your AWS connector](enable-vulnerability-assessment-agentless.md#agentless-vulnerability-assessment-on-aws).
+
+- Azure permissions - The built-in role "VM scanner operator" has read-only permissions for VM disks, which are required for the snapshot process. The detailed list of permissions is:
+
+ - `Microsoft.Compute/disks/read`
+ - `Microsoft.Compute/disks/beginGetAccess/action`
+ - `Microsoft.Compute/virtualMachines/instanceView/read`
+ - `Microsoft.Compute/virtualMachines/read`
+ - `Microsoft.Compute/virtualMachineScaleSets/instanceView/read`
+ - `Microsoft.Compute/virtualMachineScaleSets/read`
+ - `Microsoft.Compute/virtualMachineScaleSets/virtualMachines/read`
+ - `Microsoft.Compute/virtualMachineScaleSets/virtualMachines/instanceView/read`
+
+- AWS permissions - The role "VmScanner" is assigned to the scanner when you enable agentless scanning. This role has the minimal permission set to create and clean up snapshots (scoped by tag) and to verify the current state of the VM. The detailed list of permissions is:
+
+ - `ec2:DeleteSnapshot`
+ - `ec2:ModifySnapshotAttribute`
+ - `ec2:DeleteTags`
+ - `ec2:CreateTags`
+ - `ec2:CreateSnapshots`
+ - `ec2:CreateSnapshot`
+ - `ec2:DescribeSnapshots`
+ - `ec2:DescribeInstanceStatus`
+
+### Which data is collected from snapshots?
+
+Agentless scanning collects data similar to the data an agent collects to perform the same analysis. No raw data, PII, or sensitive business data is collected; only metadata results are sent to Defender for Cloud.
+
+### What are the costs related to agentless scanning?
+
+Agentless scanning is included in the Defender Cloud Security Posture Management (CSPM) and Defender for Servers P2 plans. No additional Defender for Cloud costs are incurred when you enable it.
+
+> [!NOTE]
+> AWS charges for retention of disk snapshots. The Defender for Cloud scanning process actively tries to minimize the period during which a snapshot is stored in your account (typically up to a few minutes), but AWS may charge you a minimal overhead cost for disk snapshot storage.
+
+### How are VM snapshots secured?
+
+Agentless scanning protects disk snapshots according to Microsoft's highest security standards. To ensure VM snapshots are private and secure during the analysis process, some of the measures taken are:
+
+- Data is encrypted at rest and in-transit.
+- Snapshots are immediately deleted when the analysis process is complete.
+- Snapshots remain within their original AWS or Azure region. EC2 snapshots aren't copied to Azure.
+- Isolation of environments per customer account/subscription.
+- Only metadata containing scan results is sent outside the isolated scanning environment.
+- All operations are audited.
+
+### Does agentless scanning support encrypted disks?
+Agentless scanning doesn't yet support encrypted disks, except for Azure Disk Encryption.
+
+## Next steps
+
+This article explains how agentless scanning works and how it helps you collect data from your machines.
+
+Learn more about how to [enable vulnerability assessment with agentless scanning](enable-vulnerability-assessment-agentless.md).
defender-for-cloud Concept Attack Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-attack-path.md
+
+ Title: What are the Cloud Security Graph, Attack Path Analysis, and the Cloud Security Explorer?
+description: Learn how to prioritize remediation of cloud misconfigurations and vulnerabilities based on risk.
+++ Last updated : 09/21/2022++
+# What are the Cloud Security Graph, Attack Path Analysis, and the Cloud Security Explorer?
+
+One of the biggest challenges that security teams face today is the number of security issues they deal with on a daily basis. There are numerous security issues that need to be resolved and never enough resources to address them all.
+
+Defender for Cloud's contextual security capabilities help security teams assess the risk behind each security issue and identify the highest risk issues that need to be resolved first. Defender for Cloud helps security teams reduce the risk of an impactful breach to their environment in the most effective way.
+
+## What is Cloud Security Graph?
+
+The Cloud Security Graph is a graph-based context engine that exists within Defender for Cloud. The Cloud Security Graph collects data from your multicloud environment and other data sources, for example, the cloud asset inventory, connections and lateral movement possibilities between resources, exposure to the internet, permissions, network connections, vulnerabilities, and more. The data collected is then used to build a graph representing your multicloud environment.
+
+Defender for Cloud then uses the generated graph to perform an Attack Path Analysis and find the issues with the highest risk that exist within your environment. You can also query the graph using the Cloud Security Explorer.
++
+## What is Attack Path Analysis?
+
+Attack Path Analysis is a graph-based algorithm that scans the Cloud Security Graph. The scans expose exploitable paths that attackers may use to breach your environment and reach your high-impact assets. Attack Path Analysis exposes those attack paths and suggests recommendations for how to best remediate the issues that will break the attack path and prevent a successful breach.
+
+By taking your environment's contextual information into account, such as internet exposure, permissions, lateral movement, and more, Attack Path Analysis identifies issues that may lead to a breach of your environment and helps you remediate the highest risk ones first.
++
+Learn how to use [Attack Path Analysis](how-to-manage-attack-path.md).
+
+## What is Cloud Security Explorer?
+
+Using the Cloud Security Explorer, you can proactively identify security risks in your multicloud environment by running graph-based queries on the Cloud Security Graph. Your security team can use the query builder to search for and locate risks, while taking your organization's specific contextual and conventional information into account.
+
+The Cloud Security Explorer provides proactive exploration capabilities. You can search for security risks within your organization by running graph-based path-finding queries on top of the contextual security data that Defender for Cloud already provides, such as cloud misconfigurations, vulnerabilities, resource context, lateral movement possibilities between resources, and more.
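+
+The Cloud Security Explorer itself runs in the Defender for Cloud portal. As a related, minimal illustration only (not the explorer's own API), the following sketch queries the same kind of security findings through Azure Resource Graph with the `az graph` CLI extension; the exact query shape is an assumption for this example.
+
+```azurecli
+# Minimal sketch (assumption): inspect Defender for Cloud security findings
+# with Azure Resource Graph. Requires the 'resource-graph' CLI extension.
+az extension add --name resource-graph
+
+# List unhealthy Defender for Cloud assessments across your subscriptions.
+az graph query -q "
+securityresources
+| where type == 'microsoft.security/assessments'
+| where properties.status.code == 'Unhealthy'
+| project name, subscriptionId, displayName = tostring(properties.displayName)
+| limit 20"
+```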
+
+Learn how to use the [Cloud Security Explorer](how-to-manage-cloud-security-explorer.md), or check out the list of [insights and connections](attack-path-reference.md#insights-and-connections).
+
+## Next steps
+
+[Identify and remediate attack paths](how-to-manage-attack-path.md)
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
+
+ Title: Overview of Cloud Security Posture Management (CSPM)
+description: Learn more about the new Defender CSPM plan and the other enhanced security features that can be enabled for your multicloud environment through the Defender Cloud Security Posture Management (CSPM) plan.
++ Last updated : 09/20/2022++
+# Cloud Security Posture Management (CSPM)
+
+One of Microsoft Defender for Cloud's main pillars for cloud security is Cloud Security Posture Management (CSPM). CSPM provides you with hardening guidance that helps you efficiently and effectively improve your security. CSPM also gives you visibility into your current security situation.
+
+Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues and shows your security posture in secure score, an aggregated score of the security findings that tells you, at a glance, your current security situation: the higher the score, the lower the identified risk level.
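+
+As a minimal illustration of tracking that score outside the portal, the following sketch reads the secure score per subscription with Azure Resource Graph through the `az graph` CLI extension (the use of this extension and the query shape are assumptions for the example):
+
+```azurecli
+# Minimal sketch: read the secure score with Azure Resource Graph.
+# Assumes the 'resource-graph' CLI extension is installed:
+#   az extension add --name resource-graph
+az graph query -q "
+securityresources
+| where type == 'microsoft.security/securescores'
+| project subscriptionId,
+          current = properties.score.current,
+          max = properties.score.max,
+          percentage = properties.score.percentage"
+```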
+
+## Availability
+
+|Aspect|Details|
+|-|:-|
+|Release state:| Foundational CSPM capabilities: GA <br> Defender Cloud Security Posture Management (CSPM): Preview |
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts <br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP projects|
+
+## Defender CSPM plan options
+
+The Defender CSPM plan comes with two options: foundational CSPM capabilities and Defender Cloud Security Posture Management (CSPM). When you deploy Defender for Cloud to your subscription and resources, you automatically gain the basic coverage offered by the CSPM plan. To gain access to the other capabilities provided by Defender CSPM, you need to [enable the Defender Cloud Security Posture Management (CSPM) plan](enable-enhanced-security.md) on your subscription and resources.
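+
+If you script plan management with the Azure CLI, a minimal sketch such as the following can enable the paid plan on the current subscription. The plan name `CloudPosture` is an assumption for this example; confirm the exact name with `az security pricing list` before relying on it.
+
+```azurecli
+# Minimal sketch (plan name is an assumption): enable the Defender CSPM plan.
+az security pricing create --name CloudPosture --tier Standard
+
+# Review which Defender plans are enabled and at which tier.
+az security pricing list --output table
+```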
+
+The following table summarizes what's included in each plan and their cloud availability.
+
+| Feature | Foundational CSPM capabilities | Defender Cloud Security Posture Management (CSPM) | Cloud availability |
+|--|--|--|--|
+| Continuous assessment of the security configuration of your cloud resources | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
+| [Security recommendations to fix misconfigurations and weaknesses](review-security-recommendations.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png":::| Azure, AWS, GCP, on-premises |
+| [Secure score](secure-score-access-and-track.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
+| [Governance](#security-governance-and-regulatory-compliance) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
+| [Regulatory compliance](#security-governance-and-regulatory-compliance) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
+| [Cloud Security Explorer](#cloud-security-explorer) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
+| [Attack Path Analysis](#attack-path-analysis) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
+| [Agentless scanning for machines](#agentless-scanning-for-machines) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
++
+> [!NOTE]
+> If you have enabled Defender for DevOps, you will only gain the Cloud Security Graph and Attack Path Analysis for the artifacts that arrive through those connectors.
+>
+> To enable governance for DevOps-related recommendations, the Defender Cloud Security Posture Management (CSPM) plan needs to be enabled on the Azure subscription that hosts the DevOps connector.
+
+## Security governance and regulatory compliance
+
+Security governance and regulatory compliance refer to the policies and processes that organizations have in place to ensure that they comply with laws, rules, and regulations set by external bodies (such as governments) that control activity in a given jurisdiction. Defender for Cloud allows you to view your regulatory compliance through the regulatory compliance dashboard.
+
+Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in the standards that you've applied to your subscriptions. The dashboard reflects the status of your compliance with these standards.
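+
+If you prefer the command line, a minimal sketch like the one below lists the standards that the dashboard tracks; it assumes the `az security regulatory-compliance-standards` command group is available in your CLI version.
+
+```azurecli
+# Minimal sketch: list the regulatory compliance standards tracked for the
+# current subscription.
+az security regulatory-compliance-standards list --output table
+```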
+
+Learn more about [security and regulatory compliance in Defender for Cloud](concept-regulatory-compliance.md).
+
+## Cloud Security Explorer
+
+The Cloud Security Graph is a graph-based context engine that exists within Defender for Cloud. The Cloud Security Graph collects data from your multicloud environment and other data sources, such as the cloud asset inventory, connections and lateral movement possibilities between resources, exposure to the internet, permissions, network connections, vulnerabilities, and more. The collected data is then used to build a graph representing your multicloud environment.
+
+Defender for Cloud then uses the generated graph to perform an Attack Path Analysis and find the issues with the highest risk that exist within your environment. You can also query the graph using the Cloud Security Explorer.
+
+Learn more about [Cloud Security Explorer](concept-attack-path.md#what-is-cloud-security-explorer).
+
+## Attack Path Analysis
+
+Attack Path Analysis is a graph-based algorithm that scans the Cloud Security Graph. The scans expose exploitable paths that attackers may use to breach your environment and reach your high-impact assets. Attack Path Analysis exposes those attack paths and suggests recommendations for how to best remediate the issues that will break the attack path and prevent a successful breach.
+
+By taking your environment's contextual information into account, such as internet exposure, permissions, and lateral movement, Attack Path Analysis identifies issues that may lead to a breach of your environment and helps you remediate the highest-risk ones first.
+
+Learn more about [Attack Path Analysis](concept-attack-path.md#what-is-attack-path-analysis).
+
+## Agentless scanning for machines
+
+With agentless scanning for VMs, you can get visibility into actionable OS posture issues without installing agents, requiring network connectivity, or affecting machine performance.
+
+Learn more about [agentless scanning](concept-agentless-data-collection.md).
+
+## Next steps
+
+Learn about [Microsoft Defender for Cloud's basic and enhanced security features](enhanced-security-features-overview.md)
defender-for-cloud Concept Defender For Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-defender-for-cosmos.md
Title: Overview of Defender for Azure Cosmos DB
description: Learn about the benefits and features of Microsoft Defender for Azure Cosmos DB. + Last updated 03/01/2022
Defender for Azure Cosmos DB doesn't access the Azure Cosmos DB account data, an
|Aspect|Details| |-|:-| |Release state:|Preview.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
-|Protected Azure Cosmos DB API | :::image type="icon" source="./media/icons/yes-icon.png"::: SQL/Core API <br> :::image type="icon" source="./media/icons/no-icon.png"::: Cassandra API <br> :::image type="icon" source="./media/icons/no-icon.png"::: MongoDB API <br> :::image type="icon" source="./media/icons/no-icon.png"::: Table API <br> :::image type="icon" source="./media/icons/no-icon.png"::: Gremlin API |
+|Protected Azure Cosmos DB API | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Cosmos DB for NoSQL <br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Cosmos DB for Apache Cassandra <br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Cosmos DB for MongoDB <br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Cosmos DB for Table <br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Cosmos DB for Apache Gremlin |
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government <br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet | ## What are the benefits of Microsoft Defender for Azure Cosmos DB
defender-for-cloud Concept Easm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-easm.md
+
+ Title: External attack surface management (EASM)
+description: Learn how to gain comprehensive visibility and insights over external facing organizational assets and their digital footprint with Defender EASM.
+++ Last updated : 09/21/2022++
+# What is an external attack surface?
+
+An external attack surface is the entire area of an organization or system that is susceptible to an attack from an external source. An organization's attack surface is made up of all the points of access that an unauthorized person could use to enter their system. The larger your attack surface is, the harder it is to protect.
+
+You can use Defender for Cloud's new integration with Microsoft Defender External Attack Surface Management (Defender EASM) to improve your organization's security posture and reduce the potential risk of being attacked. Defender EASM continuously discovers and maps your digital attack surface to provide an external view of your online infrastructure. This visibility enables security and IT teams to identify unknowns, prioritize risk, eliminate threats, and extend vulnerability and exposure control beyond the firewall.
+
+Defender EASM applies Microsoft's crawling technology to discover assets that are related to your known online infrastructure, and actively scans these assets to discover new connections over time. Attack Surface Insights are generated by applying vulnerability and infrastructure data to showcase the key areas of concern for your organization, such as:
+
+- Discover digital assets, always-on inventory
+- Analyze and prioritize risks and threats
+- Pinpoint attacker-exposed weaknesses, anywhere and on-demand
+- Gain visibility into third-party attack surfaces
+
+EASM collects data for publicly exposed assets ("outside-in"), which Defender for Cloud CSPM ("inside-out") can use to assist with internet-exposure validation and discovery capabilities, providing better visibility to customers.
+
+## Learn more
+
+You can learn more about [Defender EASM](../external-attack-surface-management/index.md), and learn about the [pricing](https://azure.microsoft.com/pricing/details/defender-external-attack-surface-management/) options available.
+
+You can also learn how to [deploy Defender EASM](../external-attack-surface-management/deploying-the-defender-easm-azure-resource.md) to your Azure resource.
+
+## Next step
+
+[What are the Cloud Security Graph, Attack Path Analysis, and the Cloud Security Explorer?](concept-attack-path.md)
defender-for-cloud Concept Regulatory Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-regulatory-compliance.md
+
+ Title: Regulatory compliance Microsoft Cloud Security Benchmark
+description: Learn about the Microsoft Cloud Security Benchmark and the benefits it can bring to your compliance standards across your multicloud environments.
+++ Last updated : 09/21/2022++
+# Microsoft Cloud Security Benchmark in Defender for Cloud
+
+Microsoft Defender for Cloud streamlines the process for meeting regulatory compliance requirements, using the **regulatory compliance dashboard**. Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in the standards that you've applied to your subscriptions. The dashboard reflects the status of your compliance with these standards.
+
+The [Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction) (MCSB) is automatically assigned to your subscriptions and accounts when you onboard Defender for Cloud. This benchmark builds on the cloud security principles defined by the Azure Security Benchmark and applies these principles with detailed technical implementation guidance for Azure, for other cloud providers (such as AWS and GCP), and for other Microsoft clouds.
++
+The compliance dashboard gives you a view of your overall compliance standing. Security for non-Azure platforms follows the same cloud-neutral security principles as Azure. Each control within the benchmark provides the same granularity and scope of technical guidance across Azure and other cloud resources.
++
+From the compliance dashboard, you're able to manage all of your compliance requirements for your cloud deployments, including automatic, manual and shared responsibilities.
+
+> [!NOTE]
+> The shared responsibilities feature is only compatible with Azure.
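+
+As a minimal command-line sketch of drilling into a standard from the dashboard (the command group and parameter names are assumptions and may differ in your CLI version):
+
+```azurecli
+# Minimal sketch: list the controls and their compliance state for a standard.
+# Replace the placeholder with a name returned by:
+#   az security regulatory-compliance-standards list
+az security regulatory-compliance-controls list \
+    --standard-name "<standard-name>" \
+    --output table
+```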
+
+## Next steps
+
+- [Improve your regulatory compliance](regulatory-compliance-dashboard.md)
+- [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md)
defender-for-cloud Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md
Title: Workbooks gallery in Microsoft Defender for Cloud description: Learn how to create rich, interactive reports of your Microsoft Defender for Cloud data with the integrated Azure Monitor Workbooks gallery + Last updated 01/23/2022
Learn more about using these scanners:
- [Find vulnerabilities with Microsoft threat and vulnerability management](deploy-vulnerability-assessment-tvm.md) - [Find vulnerabilities with the integrated Qualys scanner](deploy-vulnerability-assessment-vm.md)-- [Scan your registry images for vulnerabilities](defender-for-containers-usage.md)
+- [Scan your ACR images for vulnerabilities](defender-for-containers-va-acr.md)
+- [Scan your ECR images for vulnerabilities](defender-for-containers-va-ecr.md)
- [Scan your SQL resources for vulnerabilities](defender-for-sql-on-machines-vulnerability-assessment.md) Findings for each resource type are reported in separate recommendations:
defender-for-cloud Custom Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-security-policies.md
Title: Create custom security policies in Microsoft Defender for Cloud description: Azure custom policy definitions monitored by Microsoft Defender for Cloud. + Last updated 07/20/2022 zone_pivot_groups: manage-asc-initiatives
Important concepts in Azure Policy:
- An **assignment** is an application of an initiative or a policy to a specific scope (management group, subscription, etc.)
-Defender for Cloud has a built-in initiative, [Azure Security Benchmark](/security/benchmark/azure/introduction), that includes all of its security policies. To assess Defender for Cloud's policies on your Azure resources, you should create an assignment on the management group, or subscription you want to assess.
+Defender for Cloud has a built-in initiative, [Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction), that includes all of its security policies. To assess Defender for Cloud's policies on your Azure resources, you should create an assignment on the management group, or subscription you want to assess.
The built-in initiative has all of Defender for Cloud's policies enabled by default. You can choose to disable certain policies from the built-in initiative. For example, to apply all of Defender for Cloud's policies except **web application firewall**, change the value of the policy's effect parameter to **Disabled**.
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Title: What is Microsoft Defender for Cloud?
description: Use Microsoft Defender for Cloud to protect your Azure, hybrid, and multicloud resources and workloads. - Previously updated : 07/10/2022+ Last updated : 10/04/2022 # What is Microsoft Defender for Cloud?
Defender for Cloud continually assesses your resources, subscriptions, and organ
As soon as you open Defender for Cloud for the first time, Defender for Cloud: -- **Generates a secure score** for your subscriptions based on an assessment of your connected resources compared with the guidance in [Azure Security Benchmark](/security/benchmark/azure/overview). Use the score to understand your security posture, and the compliance dashboard to review your compliance with the built-in benchmark. When you've enabled the enhanced security features, you can customize the standards used to assess your compliance, and add other regulations (such as NIST and Azure CIS) or organization-specific security requirements. You can also apply recommendations, and score based on the AWS Foundational Security Best practices standards.
+- **Generates a secure score** for your subscriptions based on an assessment of your connected resources compared with the guidance in [Microsoft Cloud Security Benchmark](/security/benchmark/azure/overview). Use the score to understand your security posture, and the compliance dashboard to review your compliance with the built-in benchmark. When you've enabled the enhanced security features, you can customize the standards used to assess your compliance, and add other regulations (such as NIST and Azure CIS) or organization-specific security requirements. You can also apply recommendations, and score based on the AWS Foundational Security Best practices standards.
+
+ You can also [learn more about secure score](secure-score-security-controls.md).
- **Provides hardening recommendations** based on any identified security misconfigurations and weaknesses. Use these security recommendations to strengthen the security posture of your organization's Azure, hybrid, and multicloud resources.
-[Learn more about secure score](secure-score-security-controls.md).
+- **Analyzes and secures your attack paths** through the Cloud Security Graph, which is a graph-based context engine that exists within Defender for Cloud. The Cloud Security Graph collects data from your multicloud environment and other data sources, such as the cloud asset inventory, connections and lateral movement possibilities between resources, exposure to the internet, permissions, network connections, vulnerabilities, and more. The collected data is then used to build a graph representing your multicloud environment.
+
+ Attack Path Analysis is a graph-based algorithm that scans the Cloud Security Graph. The scans expose exploitable paths that attackers may use to breach your environment and reach your high-impact assets. Attack Path Analysis exposes those attack paths and suggests recommendations for how to best remediate the issues that will break the attack path and prevent a successful breach.
+
+ By taking your environment's contextual information into account, such as internet exposure, permissions, and lateral movement, Attack Path Analysis identifies issues that may lead to a breach of your environment and helps you remediate the highest-risk ones first.
+
+ Learn more about [attack path analysis](concept-attack-path.md#what-is-attack-path-analysis).
+
+Defender CSPM offers two options to protect your environments and resources: a free option and a premium option. We recommend enabling the premium option to gain the full coverage and benefits of CSPM. You can learn more about the benefits offered by [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) and [the differences between the two plans](concept-cloud-security-posture-management.md).
### CWP - Identify unique workload security requirements
For example, if you've [connected an Amazon Web Services (AWS) account](quicksta
- **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature helping you manage your AWS resources alongside your Azure resources. - **Microsoft Defender for Kubernetes** extends its container threat detection and advanced defenses to your **Amazon EKS Linux clusters**.-- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), File Integrity Monitoring (FIM), and more.
+- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.
Learn more about connecting your [AWS](quickstart-onboard-aws.md) and [GCP](quickstart-onboard-gcp.md) accounts to Microsoft Defender for Cloud.
Review the findings from these vulnerability scanners and respond to them all fr
Learn more on the following pages: - [Defender for Cloud's integrated Qualys scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md)-- [Identify vulnerabilities in images in Azure container registries](defender-for-containers-usage.md#identify-vulnerabilities-in-images-in-other-container-registries)
+- [Identify vulnerabilities in images in Azure container registries](defender-for-containers-va-acr.md#identify-vulnerabilities-in-images-in-other-container-registries)
## Enforce your security policy from the top down
It's a security basic to know and make sure your workloads are secure, and it st
Defender for Cloud continuously discovers new resources that are being deployed across your workloads and assesses whether they're configured according to security best practices. If not, they're flagged and you get a prioritized list of recommendations for what you need to fix. Recommendations help you reduce the attack surface across each of your resources.
-The list of recommendations is enabled and supported by the Azure Security Benchmark. This Microsoft-authored, Azure-specific, benchmark provides a set of guidelines for security and compliance best practices based on common compliance frameworks. Learn more in [Azure Security Benchmark introduction](/security/benchmark/azure/introduction).
+The list of recommendations is enabled and supported by the Microsoft Cloud Security Benchmark. This Microsoft-authored benchmark, based on common compliance frameworks, began with Azure and now provides a set of guidelines for security and compliance best practices for multiple cloud environments. Learn more in [Microsoft Cloud Security Benchmark introduction](/security/benchmark/azure/introduction).
In this way, Defender for Cloud enables you not just to set security policies, but to *apply secure configuration standards across your resources*.
The **Defender plans** of Microsoft Defender for Cloud offer comprehensive defen
- [Microsoft Defender for DNS](defender-for-dns-introduction.md) - [Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md) - [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md)
+- [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md)
+ - [Security governance and regulatory compliance](concept-cloud-security-posture-management.md#security-governance-and-regulatory-compliance)
+ - [Cloud Security Explorer](concept-cloud-security-posture-management.md#cloud-security-explorer)
+ - [Attack Path Analysis](concept-cloud-security-posture-management.md#attack-path-analysis)
+ - [Agentless scanning for machines](concept-cloud-security-posture-management.md#agentless-scanning-for-machines)
+- [Defender for DevOps](defender-for-devops-introduction.md)
+ Use the advanced protection tiles in the [workload protections dashboard](workload-protections-dashboard.md) to monitor and configure each of these protections.
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
Title: Microsoft Defender for container registries - the benefits and features
description: Learn about the benefits and features of Microsoft Defender for container registries. Last updated 04/07/2022 +
If you connect unsupported registries to your Azure subscription, Defender for C
### Can I customize the findings from the vulnerability scanner? Yes. If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't impact your secure score or generate unwanted noise.
-[Learn about creating rules to disable findings from the integrated vulnerability assessment tool](defender-for-containers-usage.md#disable-specific-findings).
+[Learn about creating rules to disable findings from the integrated vulnerability assessment tool](defender-for-containers-va-acr.md#disable-specific-findings).
### Why is Defender for Cloud alerting me to vulnerabilities about an image that isn't in my registry? Defender for Cloud provides vulnerability assessments for every image pushed or pulled in a registry. Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag "Latest" every time you add an image to a digest. In such cases, the 'old' image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it'll expose security vulnerabilities.
Defender for Cloud provides vulnerability assessments for every image pushed or
## Next steps > [!div class="nextstepaction"]
-> [Scan your images for vulnerabilities](defender-for-containers-usage.md)
+> [Scan your images for vulnerabilities](defender-for-containers-va-acr.md)
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
description: Learn about the architecture of Microsoft Defender for Containers f
+ Last updated 06/19/2022 # Defender for Containers architecture
To protect your Kubernetes containers, Defender for Containers receives and anal
- Workload configuration from Azure Policy - Security signals and events from the node level
-## Architecture for each container environment
+## Architecture for each Kubernetes environment
## [**Azure (AKS)**](#tab/defender-for-container-arch-aks)
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Title: How to enable Microsoft Defender for Containers in Microsoft Defender for Cloud description: Enable the container protections of Microsoft Defender for Containers + zone_pivot_groups: k8s-host Last updated 07/25/2022
You can check out the following blogs:
## Next steps
-[Use Defender for Containers to scan your ACR images for vulnerabilities](defender-for-container-registries-usage.md).
+Now that you enabled Defender for Containers, you can:
+
+- [Scan your ACR images for vulnerabilities](defender-for-containers-va-acr.md)
+- [Scan your Amazon AWS ECR images for vulnerabilities](defender-for-containers-va-ecr.md)
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 08/17/2022+ Last updated : 09/11/2022 # Overview of Microsoft Defender for Containers
You can learn more by watching this video from the Defender for Cloud in the Fie
| Release state: | General availability (GA)<br> Certain features are in preview, for a full list see the [availability](supported-machines-endpoint-solutions-clouds-containers.md) section. | | Feature availability | Refer to the [availability](supported-machines-endpoint-solutions-clouds-containers.md) section for additional information on feature release state and availability.| | Pricing: | **Microsoft Defender for Containers** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
-| Required roles and permissions: | • To auto provision the required components, see the [permissions for each of the components](enable-data-collection.md?tabs=autoprovision-containers)<br> • **Security admin** can dismiss alerts<br> • **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
+| Required roles and permissions: | • To deploy the required components, see the [permissions for each of the components](monitoring-components.md#defender-for-containers-extensions)<br> • **Security admin** can dismiss alerts<br> • **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
| Clouds: | **Azure**:<br>:::image type="icon" source="./medi#defender-for-containers-feature-availability). | ## Hardening
You can use the resource filter to review the outstanding recommendations for yo
### Kubernetes data plane hardening
-To protect the workloads of your Kubernetes containers with tailored recommendations, you can install the [Azure Policy for Kubernetes](../governance/policy/concepts/policy-for-kubernetes.md). You can also auto deploy this component as explained in [enable auto provisioning of agents and extensions](enable-data-collection.md#auto-provision-mma).
+To protect the workloads of your Kubernetes containers with tailored recommendations, you can install the [Azure Policy for Kubernetes](../governance/policy/concepts/policy-for-kubernetes.md). Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
With the add-on on your AKS cluster, every request to the Kubernetes API server will be monitored against the predefined set of best practices before being persisted to the cluster. You can then configure it to enforce the best practices and mandate them for future workloads.
You can learn more about [Kubernetes data plane hardening](kubernetes-workload-p
## Vulnerability assessment
-### Scanning images in ACR registries
+### Scanning images in container registries
-Defender for Containers offers vulnerability scanning for images in Azure Container Registries (ACRs). Triggers for scanning an image include:
+Defender for Containers scans the containers in Azure Container Registry (ACR) and Amazon AWS Elastic Container Registry (ECR) to notify you if there are known vulnerabilities in your images.
-- **On push**: When an image is pushed in to a registry for storage, Defender for Containers automatically scans the image.
+When you push an image to a container registry and while the image is stored in the container registry, Defender for Containers automatically scans it. Defender for Containers checks for known vulnerabilities in packages or dependencies defined in the image file.
-- **Recently pulled**: Weekly scans of images that have been pulled in the last 30 days.
+When the scan completes, Defender for Containers provides details for each vulnerability detected, a security classification for each vulnerability detected, and guidance on how to remediate issues and protect vulnerable attack surfaces.
-- **On import**: When you import images into an ACR, Defender for Containers scans any supported images.
+Learn more about:
+- [Vulnerability assessment for Azure Container Registry (ACR)](defender-for-containers-va-acr.md)
+- [Vulnerability assessment for Amazon AWS Elastic Container Registry (ECR)](defender-for-containers-va-ecr.md)
-Learn more in [Vulnerability assessment](defender-for-containers-usage.md).
--
-### View vulnerabilities for running images
+### View vulnerabilities for running images in Azure Container Registry (ACR)
Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false) recommendation.
-Defender for Cloud is able to provide the recommendation, by correlating the inventory of your running containers that are collected by the Defender agent which is installed on your AKS clusters, with the vulnerability assessment scan of images that are stored in ACR. The recommendation then shows your running containers with the vulnerabilities associated with the images that are used by each container and provides you with vulnerability reports and remediation steps.
-
-> [!NOTE]
-> **Windows containers**: There is no Defender agent for Windows containers, the Defender agent is deployed to a Linux node running in the cluster, to retrieve the running container inventory for your Windows nodes.
->
-> Images that aren't pulled from ACR for deployment in AKS won't be checked and will appear under the **Not applicable** tab.
->
-> Images that have been deleted from their ACR registry, but are still running, won't be reported on only 30 days after their last scan occurred in ACR.
+To provide findings for the recommendation, Defender for Cloud uses the inventory of your running containers that's collected by the Defender agent installed on your AKS clusters. Defender for Cloud correlates that inventory with the vulnerability assessment scan of images that are stored in ACR. The recommendation shows your running containers with the vulnerabilities associated with the images that are used by each container and provides vulnerability reports and remediation steps.
:::image type="content" source="media/defender-for-containers/running-image-vulnerabilities-recommendation.png" alt-text="Screenshot showing where the recommendation is viewable." lightbox="media/defender-for-containers/running-image-vulnerabilities-recommendation-expanded.png":::
+Learn more about [viewing vulnerabilities for running images in Azure Container Registry (ACR)](defender-for-containers-va-acr.md).
+ ## Run-time protection for Kubernetes nodes and clusters Defender for Containers provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers. Threat protection at the cluster level is provided by the Defender agent and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
-In addition, our threat detection goes beyond the Kubernetes management layer. Defender for Containers includes host-level threat detection with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload.
+Defender for Containers also includes host-level threat detection with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload.
-This solution monitors the growing attack surface of multicloud Kubernetes deployments and tracks the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework that was developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft and others.
+Defender for Cloud monitors the attack surface of multicloud Kubernetes deployments based on the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft.
## FAQ - Defender for Containers
defender-for-cloud Defender For Containers Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-usage.md
- Title: How to use Defender for Containers to identify vulnerabilities in Microsoft Defender for Cloud
-description: Learn how to use Defender for Containers to scan images in your registries
-- Previously updated : 06/08/2022---
-# Use Defender for Containers to scan your ACR images for vulnerabilities
-
-This page explains how to use Defender for Containers to scan the container images stored in your Azure Resource Manager-based Azure Container Registry, as part of the protections provided within Microsoft Defender for Cloud.
-
-To enable scanning of vulnerabilities in containers, you have to [enable Defender for Containers](defender-for-containers-enable.md). When the scanner, powered by Qualys, reports vulnerabilities, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
-
-> [!TIP]
-> You can also scan container images for vulnerabilities as the images are built in your CI/CD GitHub workflows. Learn more in [Identify vulnerable container images in your CI/CD workflows](defender-for-containers-cicd.md).
-
-There are four triggers for an image scan:
--- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image. To trigger the scan of an image, push it to your repository.--- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; as mentioned above, you're billed once per image.--- **On import** - Azure Container Registry has import tools to bring images to your registry from Docker Hub, Microsoft Container Registry, or another Azure container registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).--- **Continuous scan**- This trigger has two modes:-
- - A continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
-
- - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
-
-This scan typically completes within 2 minutes, but it might take up to 40 minutes. For every vulnerability identified, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue.
-
-Defender for Cloud filters, and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
-
-## Identify vulnerabilities in images in Azure container registries
-
-To enable vulnerability scans of images stored in your Azure Resource Manager-based Azure Container Registry:
-
-1. [Enable Defender for Containers](defender-for-containers-enable.md) for your subscription. Defender for Containers is now ready to scan images in your registries.
-
- >[!NOTE]
- > This feature is charged per image.
-
- When a scan is triggered, findings are available as Defender for Cloud recommendations from 2 minutes up to 15 minutes after the scan is complete.
-
-1. [View and remediate findings as explained below](#view-and-remediate-findings).
-
-## Identify vulnerabilities in images in other container registries
-
-1. Use the ACR tools to bring images to your registry from Docker Hub or Microsoft Container Registry. When the import completes, the imported images are scanned by the built-in vulnerability assessment solution.
-
- Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md)
-
- When the scan completes (typically after approximately 2 minutes, but can be up to 15 minutes), findings are available as Defender for Cloud recommendations.
-
-1. [View and remediate findings as explained below](#view-and-remediate-findings).
-
-## View and remediate findings
-
-1. To view the findings, open the **Recommendations** page. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
-
- ![Recommendation to remediate issues .](media/monitor-container-security/acr-finding.png)
-
-1. Select the recommendation.
-
- The recommendation details page opens with additional information. This information includes the list of registries with vulnerable images ("Affected resources") and the remediation steps.
-
-1. Select a specific registry to see the repositories within it that have vulnerable repositories.
-
- ![Select a registry.](media/monitor-container-security/acr-finding-select-registry.png)
-
- The registry details page opens with the list of affected repositories.
-
-1. Select a specific repository to see the repositories within it that have vulnerable images.
-
- ![Select a repository.](media/monitor-container-security/acr-finding-select-repository.png)
-
- The repository details page opens. It lists the vulnerable images together with an assessment of the severity of the findings.
-
-1. Select a specific image to see the vulnerabilities.
-
- ![Select images.](media/monitor-container-security/acr-finding-select-image.png)
-
- The list of findings for the selected image opens.
-
- ![List of findings.](media/monitor-container-security/acr-findings.png)
-
-1. To learn more about a finding, select the finding.
-
- The findings details pane opens.
-
- [![Findings details pane.](media/monitor-container-security/acr-finding-details-pane.png)](media/monitor-container-security/acr-finding-details-pane.png#lightbox)
-
- This pane includes a detailed description of the issue and links to external resources to help mitigate the threats.
-
-1. Follow the steps in the remediation section of this pane.
-
-1. When you've taken the steps required to remediate the security issue, replace the image in your registry:
-
- 1. Push the updated image to trigger a scan.
-
- 1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
-
- If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
-
- 1. When you're sure the updated image has been pushed, scanned, and is no longer appearing in the recommendation, delete the ΓÇ£oldΓÇ¥ vulnerable image from your registry.
-
-## Disable specific findings
-
-> [!NOTE]
-> [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
-
-If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
-
-When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings. Typical scenarios include:
--- Disable findings with severity below medium-- Disable findings that are non-patchable-- Disable findings with CVSS score below 6.5-- Disable findings with specific text in the security check or category (for example, ΓÇ£RedHatΓÇ¥, ΓÇ£CentOS Security Update for sudoΓÇ¥)-
-> [!IMPORTANT]
-> To create a rule, you need permissions to edit a policy in Azure Policy.
->
-> Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
-
-You can use any of the following criteria:
--- Finding ID-- Category-- Security check-- CVSS v3 scores-- Severity-- Patchable status-
-To create a rule:
-
-1. From the recommendations detail page for [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648), select **Disable rule**.
-1. Select the relevant scope.
-1. Define your criteria.
-1. Select **Apply rule**.
-
- :::image type="content" source="./media/defender-for-containers-usage/new-disable-rule-for-registry-finding.png" alt-text="Create a disable rule for VA findings on registry.":::
-
-1. To view, override, or delete a rule:
- 1. Select **Disable rule**.
- 1. From the scope list, subscriptions with active rules show as **Rule applied**.
- :::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Modify or delete an existing rule.":::
- 1. To view or delete the rule, select the ellipsis menu ("...").
-
-## FAQ
-
-### How does Defender for Containers scan an image?
-
-Defender for Containers pulls the image from the registry and runs it in an isolated sandbox with the Qualys scanner. The scanner extracts a list of known vulnerabilities.
-
-Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying you when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
-
-### Can I get the scan results via REST API?
-
-Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud/sub-assessments/list). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
-
-### What registry types are scanned? What types are billed?
-
-For a list of the types of container registries supported by Microsoft Defender for container registries, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md#additional-information).
-
-If you connect unsupported registries to your Azure subscription, Defender for Containers won't scan them and won't bill you for them.
-
-### Can I customize the findings from the vulnerability scanner?
-
-Yes. If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't impact your secure score or generate unwanted noise.
-
-[Learn about creating rules to disable findings from the integrated vulnerability assessment tool](defender-for-containers-usage.md#disable-specific-findings).
-
-### Why is Defender for Cloud alerting me to vulnerabilities about an image that isnΓÇÖt in my registry?
-
-Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag ΓÇ£LatestΓÇ¥ every time you add an image to a digest. In such cases, the ΓÇÿoldΓÇÖ image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it'll expose security vulnerabilities.
-
-## Next steps
-
-Learn more about the [advanced protection plans of Microsoft Defender for Cloud](enhanced-security-features-overview.md).
defender-for-cloud Defender For Containers Va Acr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-va-acr.md
+
+ Title: Identify vulnerabilities in Azure Container Registry with Microsoft Defender for Cloud
+description: Learn how to use Defender for Containers to scan images in your Azure Container Registry to find vulnerabilities.
++ Last updated : 09/11/2022++++
+# Use Defender for Containers to scan your Azure Container Registry images for vulnerabilities
+
+This page explains how to use Defender for Containers to scan the container images stored in your Azure Resource Manager-based Azure Container Registry, as part of the protections provided within Microsoft Defender for Cloud.
+
+To enable scanning of vulnerabilities in containers, you have to [enable Defender for Containers](defender-for-containers-enable.md). When the scanner, powered by Qualys, reports vulnerabilities, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
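+
+If you script your onboarding, a minimal Azure CLI sketch such as the following enables the plan on the current subscription; verify the plan name with `az security pricing list` for your environment.
+
+```azurecli
+# Minimal sketch: enable the Defender for Containers plan so that images in
+# your registries are scanned for vulnerabilities.
+az security pricing create --name Containers --tier Standard
+```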
+
+Defender for Cloud filters and classifies findings from the scanner. Images without vulnerabilities are marked as healthy and Defender for Cloud doesn't send notifications about healthy images to keep you from getting unwanted informational alerts.
+
+The triggers for an image scan are:
+
+- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image. To trigger the scan of an image, push it to your repository (see the CLI sketch later in this section).
+
+- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; you're billed once per image.
+
+- **On import** - Azure Container Registry has import tools to bring images to your registry from Docker Hub, Microsoft Container Registry, or another Azure container registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
+
+- **Continuous scan**- This trigger has two modes:
+
+ - A continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile or extension.
+
+ - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile or extension is running on the cluster.
+
+ > [!NOTE]
+ > **Windows containers**: There is no Defender agent for Windows containers. The Defender agent is deployed to a Linux node running in the cluster to retrieve the running container inventory for your Windows nodes.
+ >
+ > Images that aren't pulled from ACR for deployment in AKS won't be checked and will appear under the **Not applicable** tab.
+ >
+ > Images that have been deleted from their ACR registry, but are still running, will only be reported on for 30 days after their last scan occurred in ACR.
+
+This scan typically completes within 2 minutes, but it might take up to 40 minutes.
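+
+As a minimal sketch of triggering an on-push scan (the registry and image names below are placeholders):
+
+```azurecli
+# Minimal sketch: push an image to an Azure Container Registry to trigger
+# the on-push scan. Registry and image names are placeholders.
+az acr login --name myregistry
+
+docker pull nginx:latest
+docker tag nginx:latest myregistry.azurecr.io/samples/nginx:v1
+docker push myregistry.azurecr.io/samples/nginx:v1
+
+# Findings appear as Defender for Cloud recommendations after the scan completes.
+```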
+
+Also, check out the ability to scan container images for vulnerabilities as the images are built in your CI/CD GitHub workflows. Learn more in [Identify vulnerable container images in your CI/CD workflows](defender-for-containers-cicd.md).
+
+## Identify vulnerabilities in images in Azure container registries
+
+To enable vulnerability scans of images stored in your Azure Resource Manager-based Azure Container Registry:
+
+1. [Enable Defender for Containers](defender-for-containers-enable.md) for your subscription. Defender for Containers is now ready to scan images in your registries.
+
+ >[!NOTE]
+ > This feature is charged per image.
+
+ When a scan is triggered, findings are available as Defender for Cloud recommendations from 2 minutes up to 15 minutes after the scan is complete.
+
+1. [View and remediate findings as explained below](#view-and-remediate-findings).
+
+## Identify vulnerabilities in images in other container registries
+
+If you want to find vulnerabilities in images stored in other container registries, you can import the images into ACR and scan them.
+
+You can also [scan images in Amazon AWS Elastic Container Registry](defender-for-containers-va-ecr.md) directly from the Azure portal.
+
+1. Use the ACR tools to bring images to your registry from Docker Hub or Microsoft Container Registry, as shown in the sketch after these steps. When the import completes, the imported images are scanned by the built-in vulnerability assessment solution.
+
+ Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md)
+
+ When the scan completes (typically after approximately 2 minutes, but can be up to 15 minutes), findings are available as Defender for Cloud recommendations.
+
+1. [View and remediate findings as explained below](#view-and-remediate-findings).
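+
+A minimal import sketch with the Azure CLI (the registry and image names are placeholders):
+
+```azurecli
+# Minimal sketch: import a public image into your Azure Container Registry.
+# The imported image is then scanned by the built-in vulnerability assessment.
+az acr import \
+    --name myregistry \
+    --source docker.io/library/nginx:latest \
+    --image samples/nginx:v1
+```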
+
+## View and remediate findings
+
+1. To view the findings, open the **Recommendations** page. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
+
+ ![Recommendation to remediate issues .](media/monitor-container-security/acr-finding.png)
+
+1. Select the recommendation.
+
+ The recommendation details page opens with additional information. This information includes the list of registries with vulnerable images ("Affected resources") and the remediation steps.
+
+1. Select a specific registry to see the repositories within it that have vulnerable images.
+
+ ![Select a registry.](media/monitor-container-security/acr-finding-select-registry.png)
+
+ The registry details page opens with the list of affected repositories.
+
+1. Select a specific repository to see the vulnerable images within it.
+
+ ![Select a repository.](media/monitor-container-security/acr-finding-select-repository.png)
+
+ The repository details page opens. It lists the vulnerable images together with an assessment of the severity of the findings.
+
+1. Select a specific image to see the vulnerabilities.
+
+ ![Select images.](media/monitor-container-security/acr-finding-select-image.png)
+
+ The list of findings for the selected image opens.
+
+ ![List of findings.](media/monitor-container-security/acr-findings.png)
+
+1. To learn more about a finding, select the finding.
+
+ The findings details pane opens.
+
+ [![Findings details pane.](media/monitor-container-security/acr-finding-details-pane.png)](media/monitor-container-security/acr-finding-details-pane.png#lightbox)
+
+ This pane includes a detailed description of the issue and links to external resources to help mitigate the threats.
+
+1. Follow the steps in the remediation section of this pane.
+
+1. When you've taken the steps required to remediate the security issue, replace the image in your registry:
+
+ 1. Push the updated image to trigger a scan.
+
+ 1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
+
+ If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
+
+ 1. When you're sure the updated image has been pushed, scanned, and is no longer appearing in the recommendation, delete the "old" vulnerable image from your registry.
+
+## Disable specific findings
+
+> [!NOTE]
+> [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
+
+If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
+
+When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings. Typical scenarios include:
+
+- Disable findings with severity below medium
+- Disable findings that are non-patchable
+- Disable findings with CVSS score below 6.5
+- Disable findings with specific text in the security check or category (for example, "RedHat", "CentOS Security Update for sudo")
+
+> [!IMPORTANT]
+> To create a rule, you need permissions to edit a policy in Azure Policy.
+>
+> Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
+
+You can use any of the following criteria:
+
+- Finding ID
+- Category
+- Security check
+- CVSS v3 scores
+- Severity
+- Patchable status
+
+To create a rule:
+
+1. From the recommendations detail page for [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648), select **Disable rule**.
+1. Select the relevant scope.
+1. Define your criteria.
+1. Select **Apply rule**.
+
+ :::image type="content" source="./media/defender-for-containers-va-acr/new-disable-rule-for-registry-finding.png" alt-text="Create a disable rule for VA findings on registry.":::
+
+1. To view, override, or delete a rule:
+ 1. Select **Disable rule**.
+ 1. From the scope list, subscriptions with active rules show as **Rule applied**.
+ :::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Modify or delete an existing rule.":::
+ 1. To view or delete the rule, select the ellipsis menu ("...").
+
+## FAQ
+
+### How does Defender for Containers scan an image?
+
+Defender for Containers pulls the image from the registry and runs it in an isolated sandbox with the Qualys scanner. The scanner extracts a list of known vulnerabilities.
+
+Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying you when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
+
+### Can I get the scan results via REST API?
+
+Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud/sub-assessments/list). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
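+
+As an illustration, the sketch below queries the sub-assessments through Azure Resource Graph with the Azure CLI. The `securityresources` table, the type filter, and the assessment key (taken from the recommendation link above) reflect published ARG samples, but treat the query as a starting point and adjust it to your results.
+
+```bash
+# Hypothetical sketch: list container registry vulnerability sub-assessments
+# through Azure Resource Graph. Assumes the 'resource-graph' CLI extension and
+# the 'securityresources' table exposed by Defender for Cloud.
+az extension add --name resource-graph
+az graph query -q "
+securityresources
+| where type == 'microsoft.security/assessments/subassessments'
+| where id contains 'dbd0cb49-b563-45e7-9724-889e799fa648'
+| project id, name, properties.status, properties.displayName
+"
+```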
+
+### What registry types are scanned? What types are billed?
+
+For a list of the types of container registries supported by Microsoft Defender for container registries, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md#additional-information). Defender for Containers doesn't scan unsupported registries that you connect to your Azure subscription.
+
+### Why is Defender for Cloud alerting me to vulnerabilities about an image that isn't in my registry?
+
+Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag "Latest" every time you add an image to a digest. In such cases, the 'old' image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it will expose security vulnerabilities.
+
+## Next steps
+
+Learn more about the [advanced protection plans of Microsoft Defender for Cloud](enhanced-security-features-overview.md).
defender-for-cloud Defender For Containers Va Ecr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-va-ecr.md
+
+ Title: Identify vulnerabilities in Amazon AWS Elastic Container Registry with Microsoft Defender for Cloud
+description: Learn how to use Defender for Containers to scan images in your Amazon AWS Elastic Container Registry (ECR) to find vulnerabilities.
++ Last updated : 09/11/2022++++
+# Use Defender for Containers to scan your Amazon AWS Elastic Container Registry images for vulnerabilities (Preview)
+
+Defender for Containers lets you scan the container images stored in your Amazon AWS Elastic Container Registry (ECR) as part of the protections provided within Microsoft Defender for Cloud.
+
+To enable scanning of vulnerabilities in containers, you have to [connect your AWS account to Defender for Cloud](quickstart-onboard-aws.md) and [enable Defender for Containers](defender-for-containers-enable.md). The agentless scanner, powered by the open-source scanner Trivy, scans your ECR repositories and reports vulnerabilities. Defender for Containers creates resources in your AWS account, such as an ECS cluster in a dedicated VPC, an internet gateway, and an S3 bucket, so that images stay within your account for privacy and intellectual property protection. Resources are created in two AWS regions: us-east-1 and eu-central-1.
+
+Defender for Cloud filters and classifies findings from the scanner. Images without vulnerabilities are marked as healthy and Defender for Cloud doesn't send notifications about healthy images to keep you from getting unwanted informational alerts.
+
+The triggers for an image scan are:
+
+- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image within 2 hours.
+
+- **Continuous scan** - Defender for Containers reassesses the images based on Trivy's latest vulnerability database. This reassessment is performed weekly.
+
+## Prerequisites
+
+Before you can scan your ECR images:
+
+- [Connect your AWS account to Defender for Cloud and enable Defender for Containers](quickstart-onboard-aws.md)
+- You must have at least one free VPC in us-east-1 and eu-central-1.
+
+> [!NOTE]
+> - Images that have at least one layer over 2GB are not scanned.
+> - Public repositories and manifest lists are not supported.
+
+## Enable vulnerability assessment
+
+To enable vulnerability assessment:
+
+1. From Defender for Cloud's menu, open **Environment settings**.
+1. Select the AWS connector that connects to your AWS account.
+
+ :::image type="content" source="media/defender-for-kubernetes-intro/select-aws-connector.png" alt-text="Screenshot of Defender for Cloud's environment settings page showing an AWS connector.":::
+
+1. In the Monitoring Coverage section of the Containers plan, select **Settings**.
+
+ :::image type="content" source="media/defender-for-containers-va-ecr/aws-containers-settings.png" alt-text="Screenshot of Containers settings for the AWS connector." lightbox="media/defender-for-containers-va-ecr/aws-containers-settings.png":::
+
+1. Turn on **Vulnerability assessment**.
+
+ :::image type="content" source="media/defender-for-containers-va-ecr/aws-containers-enable-va.png" alt-text="Screenshot of the toggle to turn on vulnerability assessment for ECR images.":::
+
+1. Select **Next: Configure access**.
+
+1. Download the CloudFormation template.
+
+1. Using the downloaded CloudFormation template, create the stack in AWS as instructed on screen. If you're onboarding a management account, you'll need to run the CloudFormation template both as Stack and as StackSet. It takes up to 30 minutes for the AWS resources to be created. The resources have the prefix `defender-for-containers-va`.
+
+1. Select **Next: Review and generate**.
+
+1. Select **Update**.
+
+Findings are available as Defender for Cloud recommendations from 2 hours after vulnerability assessment is turned on.
+
+## View and remediate findings
+
+1. To view the findings, open the **Recommendations** page. If the scan found issues, you'll see the recommendation [Elastic container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/03587042-5d4b-44ff-af42-ae99e3c71c87).
+
+ :::image type="content" source="media/defender-for-containers-va-ecr/elastic-container-registry-recommendation.png" alt-text="Screenshot of the Recommendation to remediate findings in ECR images.":::
+
+1. Select the recommendation.
+
+ The recommendation details page opens with additional information. This information includes the list of repositories with vulnerable images ("Affected resources") and the remediation steps.
+
+1. Select a specific repository to see the vulnerabilities found in its images.
+
+ :::image type="content" source="media/defender-for-containers-va-ecr/elastic-container-registry-unhealthy-repositories.png" alt-text="Screenshot of ECR repositories that have vulnerabilities." lightbox="media/defender-for-containers-va-ecr/elastic-container-registry-unhealthy-repositories.png":::
+
+ The vulnerabilities section shows the identified vulnerabilities.
+
+1. To learn more about a vulnerability, select the vulnerability.
+
+ The vulnerability details pane opens.
+
+ :::image type="content" source="media/defender-for-containers-va-ecr/elastic-container-registry-vulnerability.png" alt-text="Screenshot of vulnerability details in ECR repositories." lightbox="media/defender-for-containers-va-ecr/elastic-container-registry-vulnerability.png":::
+
+ This pane includes a detailed description of the issue and links to external resources to help mitigate the threats.
+
+1. Follow the steps in the remediation section of this pane.
+
+1. When you've taken the steps required to remediate the security issue, replace the image in your registry:
+
+ 1. Push the updated image to trigger a scan.
+
+ 1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
+
+ If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
+
+ 1. When you're sure the updated image has been pushed, scanned, and is no longer appearing in the recommendation, delete the "old" vulnerable image from your registry.
+
+<!--
+## Disable specific findings
+
+> [!NOTE]
+> [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
+
+If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
+
+When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings. Typical scenarios include:
+
+- Disable findings with severity below medium
+- Disable findings that are non-patchable
+- Disable findings with CVSS score below 6.5
+- Disable findings with specific text in the security check or category (for example, "RedHat", "CentOS Security Update for sudo")
+
+> [!IMPORTANT]
+> To create a rule, you need permissions to edit a policy in Azure Policy.
+>
+> Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
+
+You can use any of the following criteria:
+
+- Finding ID
+- Category
+- Security check
+- CVSS v3 scores
+- Severity
+- Patchable status
+
+To create a rule:
+
+1. From the recommendations detail page for [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648), select **Disable rule**.
+1. Select the relevant scope.
+1. Define your criteria.
+1. Select **Apply rule**.
+
+ :::image type="content" source="media/defender-for-containers-va-acr/new-disable-rule-for-registry-finding.png" alt-text="Screenshot of how to create a disable rule for VA findings on registry.":::
+
+1. To view, override, or delete a rule:
+ 1. Select **Disable rule**.
+ 1. From the scope list, subscriptions with active rules show as **Rule applied**.
+ :::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Screenshot of how to modify or delete an existing rule.":::
+ 1. To view or delete the rule, select the ellipsis menu ("..."). -->
+
+## FAQs
+
+### Can I get the scan results via REST API?
+
+Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud/sub-assessments/list). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
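+
+As a sketch, you can call the Sub-Assessments API with `az rest`. The URL path and API version below are assumptions drawn from the linked REST reference; confirm them there and replace the subscription ID placeholder before use.
+
+```bash
+# Hypothetical sketch: list sub-assessments at subscription scope with az rest.
+# The path and api-version are assumptions; verify against the Sub-Assessments
+# REST reference linked above.
+az rest --method get \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Security/subAssessments?api-version=2019-01-01-preview"
+```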
+
+## Next steps
+
+Learn more about:
+
+- [Advanced protection plans of Microsoft Defender for Cloud](enhanced-security-features-overview.md)
+- [Multicloud protections](multicloud.yml) for your AWS account
defender-for-cloud Defender For Databases Enable Cosmos Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-enable-cosmos-protections.md
Title: Enable Microsoft Defender for Azure Cosmos DB
-description: Learn how to enable Microsoft Defender for Azure Cosmos DB's enhanced security features.
+description: Learn how to enable enhanced security features in Microsoft Defender for Azure Cosmos DB.
+ Last updated 06/07/2022
defender-for-cloud Defender For Devops Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-devops-introduction.md
+
+ Title: Microsoft Defender for DevOps - the benefits and features
+description: Learn about the benefits and features of Microsoft Defender for DevOps.
Last updated : 09/20/2022++++
+# Overview of Defender for DevOps
+
+Microsoft Defender for Cloud enables comprehensive visibility, posture management, and threat protection across multicloud environments including Azure, AWS, Google, and on-premises resources. Defender for DevOps integrates with GitHub Advanced Security, which is embedded in both GitHub and Azure DevOps, to empower security teams to protect resources from code to cloud.
+
+Defender for DevOps uses a central console to provide security teams with DevOps insights across multi-pipeline environments, such as GitHub and Azure DevOps. These insights can then be correlated with other contextual cloud security intelligence to prioritize remediation in code and apply consistent security guardrails throughout the application lifecycle. Key capabilities in Defender for DevOps, available through Defender for Cloud, include:
+
+- **Unified visibility into DevOps security posture**: Security administrators are given full visibility into the DevOps inventory, the security posture of pre-production application code, and resource configurations across multi-pipeline and multicloud environments in a single view.
+
+- **Strengthen cloud resource configurations throughout the development lifecycle**: Enables security of Infrastructure as Code (IaC) templates and container images to minimize cloud misconfigurations reaching production environments, allowing security administrators to focus on any critical evolving threats.
+
+- **Prioritize remediation of critical issues in code**: By applying comprehensive code-to-cloud contextual insights within Defender for Cloud, security admins can help developers prioritize critical code fixes with actionable remediation and assign developer ownership by triggering custom workflows that feed directly into the tools developers use and love.
+
+Defender for DevOps strengthens the development lifecycle by protecting code management systems so that security issues can be found early and mitigated before deployment to production. By using security configuration recommendations, security teams have the ability to harden code management systems to protect them from attacks.
+
+## Availability
+
+| Aspect | Details |
+|--|--|
+| Release state: | Preview<br>The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. |
+| Required roles and permissions: | - **Contributor**: on the relevant Azure subscription <br> - **Security Admin Role**: for Defender for Cloud <br>- **GitHub Organization Administrator**<br>- **Developer(s)/Engineer(s)**: Access to setup GitHub workflows and Azure DevOps builds<br>- **Security Administrator(s)**: The ability to set up and evaluate the connector, evaluate and respond to Microsoft Defender for Cloud recommendations <br> - **Azure account**: with permissions to sign into Azure portal <br>- **Security Admin** permissions in Defender for Cloud to configure a connection to GitHub in Defender for Cloud <br>- **Security Reader** permissions in Defender for Cloud to view recommendations |
+
+## Benefits of Defender for DevOps
+
+Defender for DevOps gives Security Operators the ability to see how their organizations' code and development management systems work, without interfering with their developers. Security Operators can implement security operations and controls at every stage of the development lifecycle to make DevSecOps easier to achieve.
+
+Defender for DevOps grants developers the ability to scan code, infrastructure as code, credentials, and containers, making it easier for developers to find and remediate security issues.
+
+Defender for DevOps gives security teams the ability to set, evaluate, and enforce security policies and address risks before they are deployed to the cloud. Security teams gain visibility into their organizations' engineering systems, including security risks and pre-production security debt across multiple development environments and cloud applications.
+
+## Manage your DevOps environments in Defender for Cloud
+
+Defender for DevOps allows you to manage your connected environments and provides your security teams with a high level overview of all the issues that may exist within them.
++
+Here, you can add environments, open and customize DevOps workbooks to show your desired metrics, view our guides and give feedback, and configure your pull request annotations.
+
+### Understanding your metrics
++
+|Page section| Description |
+|--|--|
+| :::image type="content" source="media/defender-for-devops-introduction/number-vulnerabilities.png" alt-text="Screenshot of the vulnerabilities section of the page."::: | From here you can see the total number of vulnerabilities that were found by the Defender for DevOps scanners and you can organize the results by severity level. |
+| :::image type="content" source="media/defender-for-devops-introduction/number-findings.png" alt-text="Screenshot of the findings section and the associated recommendations."::: | Presents the total number of findings by scan type and the associated recommendations for any onboarded resources. Selecting a result will take you to relevant recommendations. |
+| :::image type="content" source="media/defender-for-devops-introduction/connectors-section.png" alt-text="Screenshot of the connectors section."::: | Provides visibility into the number of connectors. The number of repositories that have been onboarded by an environment. |
+
+### Review your findings
+
+The lower half of the page allows you to review all of the onboarded DevOps resources and the security information related to them.
+++
+On this part of the screen you will see:
+
+- **Repositories**: Lists all onboarded repositories from GitHub and Azure DevOps. You can get more information about a specific resource by selecting it.
+
+- **Pull request status**: Shows whether PR annotations are enabled for the repository.
+ - `On` - PR annotations are enabled.
+ - `Off` - PR annotations are not enabled.
+ - `NA` - Defender for Cloud doesn't have information about the enablement. Currently, this information is available only for Azure DevOps repositories.
+
+- **Total exposed secrets** - Shows the number of secrets identified in the repositories.
+
+- **OSS vulnerabilities** - Shows the number of vulnerabilities identified in the repositories. Currently, this information is available only for GitHub repositories.
+
+- **Total code scanning findings** - Shows the number of other code vulnerabilities and misconfigurations identified in the repositories.
+
+## Learn more
+
+- You can learn more about DevOps from our [DevOps resource center](/devops/).
+
+- Learn about [security in DevOps](/devops/operate/security-in-devops).
+
+- You can learn about [securing Azure Pipelines](/azure/devops/pipelines/security/overview?view=azure-devops).
+
+- Learn about [security hardening practices for GitHub Actions](https://docs.github.com/actions/security-guides/security-hardening-for-github-actions).
+
+## Next steps
+
+Learn how to [Connect your GitHub repositories to Microsoft Defender for Cloud](quickstart-onboard-github.md).
+
+Learn how to [Connect your Azure DevOps repositories to Microsoft Defender for Cloud](quickstart-onboard-devops.md).
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Last updated 06/22/2022 + # Overview of Microsoft Defender for Servers
The following table summarizes what's included in each plan.
| **Unified view** | The Defender for Cloud portal displays Defender for Endpoint alerts. You can then drill down into Defender for Endpoint portal, with additional information such as the alert process tree, the incident graph, and a detailed machine timeline showing historical data up to six months.| :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Automatic MDE provisioning** | Automatic provisioning of Defender for Endpoint on Azure, AWS, and GCP resources. | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Microsoft threat and vulnerability management** | Discover vulnerabilities and misconfigurations in real time with Microsoft Defender for Endpoint, without needing other agents or periodic scans. [Learn more](deploy-vulnerability-assessment-tvm.md). | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| **Fileless attack detection** | Fileless attack detection in Defender for Servers and Microsoft Defender for Endpoint (MDE) generate detailed security alerts that accelerate alert triage, correlation, and downstream response time. | :::image type="icon" source="./mediE & Defender for Servers) |
+| **Threat detection for OS and network** | Defender for Servers and Microsoft Defender for Endpoint (MDE) detect threats at the OS and network levels, including VM behavioral detections. | :::image type="icon" source="./mediE & Defender for Servers) |
+| **Threat detection for the control plane** | Defender for Servers detects threats directed at the control plane, including network-based detections. | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| **Security Policy and Regulatory Compliance** | Customize a security policy for your subscription and also compare the configuration of your resources with requirements in industry standards, regulations, and benchmarks. | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Integrated vulnerability assessment powered by Qualys** | Use the Qualys scanner for real-time identification of vulnerabilities in Azure and hybrid VMs. Everything's handled by Defender for Cloud. You don't need a Qualys license or even a Qualys account. [Learn more](deploy-vulnerability-assessment-vm.md). | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Log Analytics 500 MB free data ingestion** | Defender for Cloud leverages Azure Monitor to collect data from Azure VMs and servers, using the Log Analytics agent. | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| **Threat detection** | Defender for Cloud detects threats at the OS level, network layer, and control plane. | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| **Adaptive application controls (AAC)** | [AACs](adaptive-application-controls.md) in Defender for Cloud define allowlists of known safe applications for machines. | |:::image type="icon" source="./media/icons/yes-icon.png"::: | | **File Integrity Monitoring (FIM)** | [FIM](file-integrity-monitoring-overview.md) (change monitoring) examines files and registries for changes that might indicate an attack. A comparison method is used to determine whether suspicious modifications have been made to files. | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Just-in-time VM access for management ports** | Defender for Cloud provides [JIT access](just-in-time-access-overview.md), locking down machine ports to reduce the machine's attack surface.| | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Adaptive network hardening** | Filtering traffic to and from resources with network security groups (NSG) improves your network security posture. You can further improve security by [hardening the NSG rules](adaptive-network-hardening.md) based on actual traffic patterns. | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Docker host hardening** | Defender for Cloud assesses containers hosted on Linux machines running Docker containers, and compares them with the Center for Internet Security (CIS) Docker Benchmark. [Learn more](harden-docker-hosts.md). | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| **Fileless attack detection** | Fileless attack detection in Defender for Cloud generates detailed security alerts that accelerate alert triage, correlation, and downstream response time. | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
<!-- [Learn more](fileless-attack-detection.md). | Future - TVM P2 | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | Future - disk scanning insights | | :::image type="icon" source="./media/icons/yes-icon.png"::: | -->
+> [!NOTE]
+> If you only enable Defender for Cloud at the workspace level, Defender for Cloud won't enable just-in-time VM access, adaptive application controls, and network detections for Azure resources.
+ Want to learn more? Watch an overview of enhanced workload protection features in Defender for Servers in our [Defender for Cloud in the Field](episode-twelve.md) series. ## Provisioning
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md
Title: How to enable Microsoft Defender for SQL servers on machines description: Learn how to protect your Microsoft SQL servers on Azure VMs, on-premises, and in hybrid and multicloud environments with Microsoft Defender for Cloud. + Last updated 07/28/2022
To enable this plan:
<a name="auto-provision-mma"></a> -- **SQL Server on Azure VM** - If your SQL machine is hosted on an Azure VM, you can [enable auto provisioning of the Log Analytics agent](enable-data-collection.md#auto-provision-mma). Alternatively, you can follow the manual procedure for [Onboard your Azure Stack Hub VMs](quickstart-onboard-machines.md?pivots=azure-portal#onboard-your-azure-stack-hub-vms).
+- **SQL Server on Azure VM** - If your SQL machine is hosted on an Azure VM, you can [customize the Log Analytics agent configuration](working-with-log-analytics-agent.md). Alternatively, you can follow the manual procedure for [Onboard your Azure Stack Hub VMs](quickstart-onboard-machines.md?pivots=azure-portal#onboard-your-azure-stack-hub-vms).
- **SQL Server on Azure Arc-enabled servers** - If your SQL Server is managed by [Azure Arc](../azure-arc/index.yml) enabled servers, you can deploy the Log Analytics agent using the Defender for Cloud recommendation "Log Analytics agent should be installed on your Windows-based Azure Arc machines (Preview)". - **SQL Server on-premises** - If your SQL Server is hosted on an on-premises Windows machine without Azure Arc, you can connect the machine to Azure by either:
defender-for-cloud Defender For Storage Exclude https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-exclude.md
Microsoft Defender for Storage can exclude specific active Databricks workspace
The tags will be inherited by the Storage account of the Databricks workspace and prevent Defender for Storage from turning on.
-> [!Note]
+> [!NOTE]
> Tags can't be added directly to the Databricks Storage account, or its Managed Resource Group. ### Prevent auto-enabling on a new Databricks workspace storage account
defender-for-cloud Deploy Vulnerability Assessment Byol Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-byol-vm.md
Title: BYOL VM vulnerability assessment in Microsoft Defender for Cloud description: Deploy a BYOL vulnerability assessment solution on your Azure virtual machines to get recommendations in Microsoft Defender for Cloud that can help you protect your virtual machines. + Last updated 11/09/2021
When you set up your solution, you must choose a resource group to attach it to.
Defender for Cloud also offers vulnerability analysis for your: -- SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md)
+- SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
+- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-acr.md)
+- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-ecr.md)
defender-for-cloud Deploy Vulnerability Assessment Tvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-tvm.md
Title: Use Microsoft Defender for Endpoint's threat and vulnerability management capabilities with Microsoft Defender for Cloud description: Enable, deploy, and use Microsoft Defender for Endpoint's threat and vulnerability management capabilities with Microsoft Defender for Cloud to discover weaknesses in your Azure and hybrid machines + Last updated 07/13/2022
You can check out the following blogs:
Defender for Cloud also offers vulnerability analysis for your: -- SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md)
+- SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
+- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-acr.md)
+- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-ecr.md)
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
description: Install a vulnerability assessment solution on your Azure machines
+ Last updated 07/12/2022 # Defender for Cloud's integrated Qualys vulnerability scanner for Azure and hybrid machines
Within 48 hrs of the disclosure of a critical vulnerability, Qualys incorporates
Defender for Cloud also offers vulnerability analysis for your: - SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - [Use Defender for Containers to scan your ACR images for vulnerabilities](defender-for-containers-usage.md)
+- Azure Container Registry images - [Use Defender for Containers to scan your ACR images for vulnerabilities](defender-for-containers-va-acr.md)
defender-for-cloud Detect Credential Leaks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/detect-credential-leaks.md
+
+ Title: Detect exposed secrets in code
+
+description: Prevent passwords and other secrets that may be stored in your code from being accessed by outside individuals by using Defender for Cloud's secret scanning for Defender for DevOps.
++ Last updated : 09/11/2022++
+# Detect exposed secrets in code
+
+When passwords and other secrets are stored in source code, it poses a significant risk and could compromise the security of your environments. Defender for Cloud offers a solution by using secret scanning to detect credentials, secrets, certificates, and other sensitive content in your source code and your build output. Secret scanning can be run as part of the Microsoft Security DevOps for Azure DevOps extension. To explore the options available for secret scanning in GitHub, learn more [about secret scanning](https://docs.github.com/en/enterprise-cloud@latest/code-security/secret-scanning/about-secret-scanning) in GitHub.
+
+> [!NOTE]
+> During the Defender for DevOps preview period, GitHub Advanced Security for Azure DevOps (GHAS for AzDO) is also providing a free trial of secret scanning.
+
+Check the list of [supported file types and exit codes](#supported-file-types-and-exit-codes).
+
+## Prerequisites
+
+- An Azure subscription. If you don't have a subscription, you can sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).
+
+- [Configure the Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md)
+
+## Set up secret scanning in Azure DevOps
+
+You can run secret scanning as part of the Azure DevOps build process by using the Microsoft Security DevOps (MSDO) Azure DevOps extension.
+
+**To add secret scanning to the Azure DevOps build process**:
+
+1. Sign in to [Azure DevOps](https://dev.azure.com/).
+
+1. Navigate to **Pipelines**.
+
+1. Locate the pipeline where the MSDO Azure DevOps extension is configured.
+
+1. Select **Edit**.
+
+1. Add the following lines to the YAML file:
+
+ ```yml
+ inputs:
+ categories: 'secrets'
+ ```
+
+1. Select **Save**.
+
+By adding these lines to your YAML file, you ensure that secret scanning runs only when you execute a build in your Azure DevOps pipeline.
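+
+For context, here's a minimal sketch of what such a pipeline might look like with only the secrets category enabled. The task name `MicrosoftSecurityDevOps@1`, the `categories` input, trigger, and pool are assumptions based on the MSDO extension prerequisites article; match them to the extension and task version you actually installed.
+
+```yml
+# Hypothetical minimal pipeline sketch. Task name and inputs are assumptions
+# based on the MSDO extension; adjust the trigger, pool, and version as needed.
+trigger:
+- main
+
+pool:
+  vmImage: 'windows-latest'
+
+steps:
+- task: MicrosoftSecurityDevOps@1
+  displayName: 'Microsoft Security DevOps - secrets only'
+  inputs:
+    categories: 'secrets'
+```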
+
+## Suppress false positives
+
+When the scanner runs, it may detect credentials that are false positives. Inline-suppression tools can be used to suppress false positives.
+
+You may want to suppress fake secrets in unit tests or mock paths, or suppress inaccurate results. We don't recommend using suppression for actual test credentials; test credentials can still pose a security risk and should be stored securely.
+
+> [!NOTE]
+> Valid inline suppression syntax depends on the language, data format and CredScan version you are using.
+
+### Suppress a same line secret
+
+To suppress a secret that is found on the same line, add the following code as a comment at the end of the line that has the secret:
+
+```bash
+[SuppressMessage("Microsoft.Security", "CS001:SecretInLine", Justification="... .")]
+```
+
+### Suppress a secret in the next line
+
+To suppress the secret found in the next line, add the following code as a comment before the line that has the secret:
+
+```bash
+[SuppressMessage("Microsoft.Security", "CS002:SecretInNextLine", Justification="... .")]
+```
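+
+For illustration only, the hypothetical YAML snippet below shows where a next-line suppression comment might sit relative to a placeholder value. The exact comment syntax that CredScan honors depends on the language, data format, and CredScan version, so treat this as a sketch rather than a guaranteed pattern.
+
+```yml
+# Hypothetical sketch: a placeholder connection string in a test fixture, with a
+# next-line suppression comment so CredScan skips it. Syntax support varies by
+# file type and CredScan version.
+# [SuppressMessage("Microsoft.Security", "CS002:SecretInNextLine", Justification="Placeholder used only in local mock tests.")]
+connectionString: "Server=tcp:example.database.windows.net;Password=NotARealSecret123!"
+```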
+
+## Supported file types and exit codes
+
+CredScan supports the following file types:
+
+| Supported file types | | | | | |
+|--|--|--|--|--|--|
+| 0.001 |\*.conf | id_rsa |\*.p12 |\*.sarif |\*.wadcfgx |
+| 0.1 |\*.config |\*.iis |\*.p12* |\*.sc |\*.waz |
+| 0.8 |\*.cpp |\*.ijs |\*.params |\*.scala |\*.webtest |
+| *_sk |\*.crt |\*.inc | password |\*.scn |\*.wsx |
+| *password |\*.cs |\*.inf |\*.pem | scopebindings.json |\*.wtl |
+| *pwd*.txt |\*.cscfg |\*.ini |\*.pfx* |\*.scr |\*.xaml |
+|\*.*_/key |\*.cshtm |\*.ino | pgpass |\*.script |\*.xdt |
+|\*.*__/key |\*.cshtml |\*.insecure |\*.php |\*.sdf |\*.xml |
+|\*.1/key |\*.csl |\*.install |\*.pkcs12* |\*.secret |\*.xslt |
+|\*.32bit |\*.csv |\*.ipynb |\*.pl |\*.settings |\*.yaml |
+|\*.3des |\*.cxx |\*.isml |\*.plist |\*.sh |\*.yml |
+|\*.added_cluster |\*.dart |\*.j2 |\*.pm |\*.shf |\*.zaliases |
+|\*.aes128 |\*.dat |\*.ja |\*.pod |\*.side |\*.zhistory |
+|\*.aes192 |\*.data |\*.jade |\*.positive |\*.side2 |\*.zprofile |
+|\*.aes256 |\*.dbg |\*.java |\*.ppk* |\*.snap |\*.zsh_aliases |
+|\*.al |\*.defaults |\*.jks* |\*.priv |\*.snippet |\*.zsh_history |
+|\*.argfile |\*.definitions |\*.js | privatekey |\*.sql |\*.zsh_profile |
+|\*.as |\*.deployment |\*.json | privatkey |\*.ss |\*.zshrc |
+|\*.asax | dockerfile |\*.jsonnet |\*.prop | ssh\\config | |
+|\*.asc | _dsa |\*.jsx |\*.properties | ssh_config | |
+|\*.ascx |\*.dsql | kefile |\*.ps |\*.ste | |
+|\*.asl |\*.dtsx | key |\*.ps1 |\*.svc | |
+|\*.asmmeta | _ecdsa | keyfile |\*.psclass1 |\*.svd | |
+|\*.asmx | _ed25519 |\*.key |\*.psm1 |\*.svg | |
+|\*.aspx |\*.ejs |\*.key* | psql_history |\*.svn-base | |
+|\*.aurora |\*.env |\*.key.* |\*.pub |\*.swift | |
+|\*.azure |\*.erb |\*.keys |\*.publishsettings |\*.tcl | |
+|\*.backup |\*.ext |\*.keystore* |\*.pubxml |\*.template | |
+|\*.bak |\*.ExtendedTests |\*.linq |\*.pubxml.user | template | |
+|\*.bas |\*.FF |\*.loadtest |\*.pvk* |\*.test | |
+|\*.bash_aliases |\*.frm |\*.local |\*.py |\*.textile | |
+|\*.bash_history |\*.gcfg |\*.log |\*.pyo |\*.tf | |
+|\*.bash_profile |\*.git |\*.m |\*.r |\*.tfvars | |
+|\*.bashrc |\*.git/config |\*.managers |\*.rake | tmdb | |
+|\*.bat |\*.gitcredentials |\*.map |\*.razor |\*.trd | |
+|\*.Beta |\*.go |\*.md |\*.rb |\*.trx | |
+|\*.BF |\*.gradle |\*.md-e |\*.rc |\*.ts | |
+|\*.bicep |\*.groovy |\*.mef |\*.rdg |\*.tsv | |
+|\*.bim |\*.grooy |\*.mst |\*.rds |\*.tsx | |
+|\*.bks* |\*.gsh |\*.my |\*.reg |\*.tt | |
+|\*.build |\*.gvy |\*.mysql_aliases |\*.resx |\*.txt | |
+|\*.c |\*.gy |\*.mysql_history |\*.retail |\*.user | |
+|\*.cc |\*.h |\*.mysql_profile |\*.robot | user | |
+|\*.ccf | host | npmrc |\*.rqy | userconfig* | |
+|\*.cfg |\*.hpp |\*.nuspec | _rsa |\*.usersaptinstall | |
+|\*.clean |\*.htm |\*.ois_export |\*.rst |\*.usersaptinstall | |
+|\*.cls |\*.html |\*.omi |\*.ruby |\*.vb | |
+|\*.cmd |\*.htpassword |\*.opn |\*.runsettings |\*.vbs | |
+|\*.code-workspace | hubot |\*.orig |\*.sample |\*.vizfx | |
+|\*.coffee |\*.idl |\*.out |\*.SAMPLE |\*.vue | |
+
+The following exit codes are available in CredScan:
+
+| Code | Description |
+|--|--|
+| 0 | Scan completed successfully with no application warning, no suppressed match, no credential match. |
+| 1 | Partial scan completed with nothing but application warning. |
+| 2 | Scan completed successfully with nothing but suppressed match(es). |
+| 3 | Partial scan completed with both application warning(s) and suppressed match(es). |
+| 4 | Scan completed successfully with nothing but credential match(es). |
+| 5 | Partial scan completed with both application warning(s) and credential match(es). |
+| 6 | Scan completed successfully with both suppressed match(es) and credential match(es). |
+| 7 | Partial scan completed with application warning(s), suppressed match(es) and credential match(es). |
+| -1000 | Scan failed with command line argument error. |
+| -1100 | Scan failed with app settings error. |
+| -1500 | Scan failed with other configuration error. |
+| -1600 | Scan failed with IO error. |
+| -9000 | Scan failed with unknown error. |
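+
+If you run the scanner from your own script step and have its exit code available, a post-scan check like the hypothetical sketch below can translate these codes into a pass or fail decision. This isn't part of the MSDO task; the threshold is a choice you make for your pipeline.
+
+```bash
+# Hypothetical sketch: fail the step when CredScan's exit code indicates
+# credential matches (4-7), while tolerating warnings and suppressed matches.
+exit_code=$?   # exit code of the scanner invocation that ran immediately before
+if [ "$exit_code" -ge 4 ] && [ "$exit_code" -le 7 ]; then
+  echo "CredScan reported credential matches (exit code $exit_code)."
+  exit 1
+fi
+```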
+
+## Next steps
++ Learn how to [configure pull request annotations](tutorial-enable-pull-request-annotations.md) in Defender for Cloud to remediate secrets in code before they are shipped to production.
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-data-collection.md
- Title: Configure auto provisioning of agents for Microsoft Defender for Cloud
-description: This article describes how to set up auto provisioning of the Log Analytics agent and other agents and extensions used by Microsoft Defender for Cloud
--- Previously updated : 08/14/2022--
-# Quickstart: Configure auto provisioning for agents and extensions from Microsoft Defender for Cloud
-
-Microsoft Defender for Cloud collects data from your resources using the relevant agent or extensions for that resource and the type of data collection you've enabled. Use the procedures below to auto-provision the necessary agents and extensions used by Defender for Cloud to your resources.
-
-When you enable auto provisioning of any of the supported extensions, the extensions are installed on existing and future machines in the subscription. When you **disable** auto provisioning for an extension, the extension is not installed on future machines, but it is also not uninstalled from existing machines.
--
-## Prerequisites
-
-To get started with Defender for Cloud, you must have a subscription to Microsoft Azure. If you don't have a subscription, you can sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).
-
-## Availability
-
-### [**Auto provisioning**](#tab/autoprovision-feature)
-
-This table shows the availability details for the auto provisioning **feature** itself.
-
-| Aspect | Details |
-||:|
-| Release state: | Auto provisioning is generally available (GA) |
-| Pricing: | Auto provisioning is free to use |
-| Required roles and permissions: | Depends on the specific extension - see relevant tab |
-| Supported destinations: | Depends on the specific extension - see relevant tab |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet |
--
-### [**Log Analytics agent**](#tab/autoprovision-loganalytic)
-
-| Aspect | Azure virtual machines | Azure Arc-enabled machines |
-||:|:--|
-| Release state: | Generally available (GA) | Preview |
-| Relevant Defender plan: | [Microsoft Defender for Servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) | [Microsoft Defender for Servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) |
-| Required roles and permissions (subscription-level): | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Owner](../role-based-access-control/built-in-roles.md#owner) |
-| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines |
-| Policy-based: | :::image type="icon" source="./media/icons/no-icon.png"::: No | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet |
-
-### [**Azure Monitor Agent**](#tab/autoprovision-ama)
--
-Learn more about [using the Azure Monitor Agent with Defender for Cloud](auto-deploy-azure-monitoring-agent.md).
-
-### [**Vulnerability assessment**](#tab/autoprovision-va)
-
-| Aspect | Details |
-||:--|
-| Release state: | Generally available (GA) |
-| Relevant Defender plan: | [Microsoft Defender for Servers](defender-for-servers-introduction.md) |
-| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) |
-| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines |
-| Policy-based: | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet |
-
-### [**Defender for Endpoint**](#tab/autoprovision-defendpoint)
-
-| Aspect | Linux | Windows |
-||:--|:-|
-| Release state: | Generally available (GA) | Generally available (GA) |
-| Relevant Defender plan: | [Microsoft Defender for Servers](defender-for-servers-introduction.md) | [Microsoft Defender for Servers](defender-for-servers-introduction.md) |
-| Required roles and permissions (subscription-level): | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) |
-| Supported destinations: | :::image type="icon" source="./medi), [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure VMs running Windows 10 |
-| Policy-based: | :::image type="icon" source="./media/icons/no-icon.png"::: No | :::image type="icon" source="./media/icons/no-icon.png"::: No |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet |
--
-### [**Guest Configuration**](#tab/autoprovision-guestconfig)
-
-| Aspect | Details |
-||:--|
-| Release state: | Preview |
-| Relevant Defender plan: | No plan required |
-| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) |
-| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet |
-
-### [**Defender for Containers**](#tab/autoprovision-containers)
-
-This table shows the availability details for the components that are required for auto provisioning to provide the protections offered by [Microsoft Defender for Containers](defender-for-containers-introduction.md).
-
-By default, auto provisioning is enabled when you enable Defender for Containers from the Azure portal.
-
-| Aspect | Azure Kubernetes Service clusters | Azure Arc-enabled Kubernetes clusters |
-||-||
-| Release state: | • Defender profile: GA<br> • Azure Policy add-on: Generally available (GA) | • Defender extension: Preview<br> • Azure Policy extension: Preview |
-| Relevant Defender plan: | [Microsoft Defender for Containers](defender-for-containers-introduction.md) | [Microsoft Defender for Containers](defender-for-containers-introduction.md) |
-| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) | [Owner](../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) |
-| Supported destinations: | The AKS Defender profile only supports [AKS clusters that have RBAC enabled](../aks/concepts-identity.md#kubernetes-rbac). | [See Kubernetes distributions supported for Arc-enabled Kubernetes](supported-machines-endpoint-solutions-clouds-containers.md?tabs=azure-aks#kubernetes-distributions-and-configurations) |
-| Policy-based: | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
-| Clouds: | **Defender profile**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet<br>**Azure Policy add-on**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet|**Defender extension**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet<br>**Azure Policy extension for Azure Arc**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet|
----
-## How does Defender for Cloud collect data?
-
-Defender for Cloud collects data from your Azure virtual machines (VMs), virtual machine scale sets, IaaS containers, and non-Azure (including on-premises) machines to monitor for security vulnerabilities and threats.
-
-Data collection is required to provide visibility into missing updates, misconfigured OS security settings, endpoint protection status, and health and threat protection. Data collection is only needed for compute resources such as VMs, virtual machine scale sets, IaaS containers, and non-Azure computers.
-
-You can benefit from Microsoft Defender for Cloud even if you don't provision agents. However, you'll have limited security and the capabilities listed above aren't supported.
-
-Data is collected using:
--- The **Log Analytics agent**, which reads various security-related configurations and event logs from the machine and copies the data to your workspace for analysis. Examples of such data are: operating system type and version, operating system logs (Windows event logs), running processes, machine name, IP addresses, and logged in user.-- **Security extensions**, such as the [Azure Policy Add-on for Kubernetes](../governance/policy/concepts/policy-for-kubernetes.md), which can also provide data to Defender for Cloud regarding specialized resource types.-
-## Why use auto provisioning?
-
-Any of the agents and extensions described on this page *can* be installed manually (see [Manual installation of the Log Analytics agent](#manual-agent)). However, **auto provisioning** reduces management overhead by installing all required agents and extensions on existing - and new - machines to ensure faster security coverage for all supported resources.
-
-We recommend enabling auto provisioning, but it's disabled by default.
-
-## How does auto provisioning work?
-
-Defender for Cloud's auto provisioning settings page has a toggle for each type of supported extension. When you enable auto provisioning of an extension, you assign the appropriate **Deploy if not exists** policy. This policy type ensures the extension is provisioned on all existing and future resources of that type.
-
-> [!TIP]
-> Learn more about Azure Policy effects including **Deploy if not exists** in [Understand Azure Policy effects](../governance/policy/concepts/effects.md).
--
-<a name="auto-provision-mma"></a>
-
-## Enable auto provisioning of the Log Analytics agent and extensions
-
-When auto provisioning is on for the Log Analytics agent, Defender for Cloud deploys the agent on all supported Azure VMs and any new ones created. For the list of supported platforms, see [Supported platforms in Microsoft Defender for Cloud](security-center-os-coverage.md).
-
-To enable auto provisioning of the Log Analytics agent:
-
-1. From Defender for Cloud's menu, open **Environment settings**.
-1. Select the relevant subscription.
-1. In the **Auto provisioning** page, set the status of auto provisioning for the Log Analytics agent to **On**.
-
- :::image type="content" source="./media/enable-data-collection/enable-automatic-provisioning.png" alt-text="Enabling auto provisioning of the Log Analytics agent." lightbox="./media/enable-data-collection/enable-automatic-provisioning.png":::
-
-1. From the configuration options pane, define the workspace to use.
-
- :::image type="content" source="./media/enable-data-collection/log-analytics-agent-deploy-options.png" alt-text="Configuration options for auto provisioning Log Analytics agents to VMs." lightbox="./media/enable-data-collection/log-analytics-agent-deploy-options.png":::
-
- - **Connect Azure VMs to the default workspace(s) created by Defender for Cloud** - Defender for Cloud creates a new resource group and default workspace in the same geolocation, and connects the agent to that workspace. If a subscription contains VMs from multiple geolocations, Defender for Cloud creates multiple workspaces to ensure compliance with data privacy requirements.
-
- The naming convention for the workspace and resource group is:
- - Workspace: DefaultWorkspace-[subscription-ID]-[geo]
- - Resource Group: DefaultResourceGroup-[geo]
-
- A Defender for Cloud solution is automatically enabled on the workspace per the pricing tier set for the subscription.
-
- > [!TIP]
- > For questions regarding default workspaces, see:
- >
- > - [Am I billed for Azure Monitor logs on the workspaces created by Defender for Cloud?](./faq-data-collection-agents.yml#am-i-billed-for-azure-monitor-logs-on-the-workspaces-created-by-defender-for-cloud-)
- > - [Where is the default Log Analytics workspace created?](./faq-data-collection-agents.yml#where-is-the-default-log-analytics-workspace-created-)
- > - [Can I delete the default workspaces created by Defender for Cloud?](./faq-data-collection-agents.yml#can-i-delete-the-default-workspaces-created-by-defender-for-cloud-)
-
- - **Connect Azure VMs to a different workspace** - From the dropdown list, select the workspace to store collected data. The dropdown list includes all workspaces across all of your subscriptions. You can use this option to collect data from virtual machines running in different subscriptions and store it all in your selected workspace.
-
- If you already have an existing Log Analytics workspace, you might want to use the same workspace (requires read and write permissions on the workspace). This option is useful if you're using a centralized workspace in your organization and want to use it for security data collection. Learn more in [Manage access to log data and workspaces in Azure Monitor](../azure-monitor/logs/manage-access.md).
-
- If your selected workspace already has a "Security" or "SecurityCenterFree" solution enabled, the pricing will be set automatically. If not, install a Defender for Cloud solution on the workspace:
-
- 1. From Defender for Cloud's menu, open **Environment settings**.
- 1. Select the workspace to which you'll be connecting the agents.
- 1. Set Security posture management to **on** or select **Enable all** to turn all Microsoft Defender plans on.
-
-1. From the **Windows security events** configuration, select the amount of raw event data to store:
-    - **None** - Disable security event storage. (Default)
-    - **Minimal** - A small set of events for when you want to minimize the event volume.
-    - **Common** - A set of events that satisfies most customers and provides a full audit trail.
-    - **All events** - For customers who want to make sure all events are stored.
-
- > [!TIP]
- > To set these options at the workspace level, see [Setting the security event option at the workspace level](#setting-the-security-event-option-at-the-workspace-level).
- >
- > For more information of these options, see [Windows security event options for the Log Analytics agent](#data-collection-tier).
-
-1. Select **Apply** in the configuration pane.
-
-1. To enable auto provisioning of an extension other than the Log Analytics agent:
-
- 1. Toggle the status to **On** for the relevant extension.
-
- :::image type="content" source="./media/enable-data-collection/toggle-kubernetes-add-on.png" alt-text="Toggle to enable auto provisioning for K8s policy add-on.":::
-
- 1. Select **Save**. The Azure Policy definition is assigned and a remediation task is created.
-
- |Extension |Policy |
- |||
- |Policy Add-on for Kubernetes |[Deploy Azure Policy Add-on to Azure Kubernetes Service clusters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa8eff44f-8c92-45c3-a3fb-9880802d67a7)|
- |Guest Configuration agent (preview) |[Deploy prerequisites to enable Guest Configuration policies on virtual machines](https://github.com/Azure/azure-policy/blob/64dcfa3033a3ff231ec4e73d2c1dad4db4e3b5dd/built-in-policies/policySetDefinitions/Guest%20Configuration/GuestConfiguration_Prerequisites.json)|
--
-1. Select **Save**. If a workspace needs to be provisioned, agent installation might take up to 25 minutes.
-
-1. You'll be asked if you want to reconfigure monitored VMs that were previously connected to a default workspace:
-
- :::image type="content" source="./media/enable-data-collection/reconfigure-monitored-vm.png" alt-text="Review options to reconfigure monitored VMs.":::
-
- - **No** - your new workspace settings will only be applied to newly discovered VMs that don't have the Log Analytics agent installed.
-    - **Yes** - your new workspace settings will apply to all VMs, and every VM currently connected to a Defender for Cloud-created workspace will be reconnected to the new target workspace.
-
- > [!NOTE]
- > If you select **Yes**, don't delete the workspace(s) created by Defender for Cloud until all VMs have been reconnected to the new target workspace. This operation fails if a workspace is deleted too early.
--
-<a name="data-collection-tier"></a>
-
-## Windows security event options for the Log Analytics agent
-
-When you select a data collection tier in Microsoft Defender for Cloud, the security events of the selected tier are stored in your Log Analytics workspace so that you can investigate, search, and audit the events in your workspace. The Log Analytics agent also collects and analyzes the security events required for Defender for Cloud's threat protection.
-
-### Requirements
-
-The enhanced security protections of Defender for Cloud are required for storing Windows security event data. Learn more about [the enhanced protection plans](defender-for-cloud-introduction.md).
-
-You may be charged for storing data in Log Analytics. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
-
-### Information for Microsoft Sentinel users
-
-Security events collection within the context of a single workspace can be configured from either Microsoft Defender for Cloud or Microsoft Sentinel, but not both. If you want to add Microsoft Sentinel to a workspace that already gets alerts from Microsoft Defender for Cloud and to collect Security Events, you can either:
--- Leave the Security Events collection in Microsoft Defender for Cloud as is. You'll be able to query and analyze these events in both Microsoft Sentinel and Defender for Cloud. If you want to monitor the connector's connectivity status or change its configuration in Microsoft Sentinel, consider the second option.-- Disable Security Events collection in Microsoft Defender for Cloud and then add the Security Events connector in Microsoft Sentinel. You'll be able to query and analyze events in both Microsoft Sentinel, and Defender for Cloud, but you'll also be able to monitor the connector's connectivity status or change its configuration in - and only in - Microsoft Sentinel. To disable Security Events collection in Defender for Cloud, set **Windows security events** to **None** in the configuration of your Log Analytics agent.-
-### What event types are stored for "Common" and "Minimal"?
-
-The **Common** and **Minimal** event sets were designed to address typical scenarios based on customer and industry standards for the unfiltered frequency of each event and their usage.
--- **Minimal** - This set is intended to cover only events that might indicate a successful breach and important events with low volume. Most of the data volume of this set is successful user logon events (event ID 4624), failed user logon events (event ID 4625), and process creation events (event ID 4688). Sign-out events are important for auditing only and have relatively high volume, so they aren't included in this event set.-- **Common** - This set is intended to provide a full user audit trail, including events with low volume. For example, this set contains both user logon events (event ID 4624) and user logoff events (event ID 4634). We include auditing actions like security group changes, key domain controller Kerberos operations, and other events that are recommended by industry organizations.-
-Here's a complete breakdown of the Security and App Locker event IDs for each set:
-
-| Data tier | Collected event indicators |
-|--|--|
-| Minimal | 1102,4624,4625,4657,4663,4688,4700,4702,4719,4720,4722,4723,4724,4727,4728,4732,4735,4737,4739,4740,4754,4755,4756,4767,4799,4825,4946,4948,4956,5024,5033,8001,8002,8003,8004,8005,8006,8007,8222 |
-| Common | 1,299,300,324,340,403,404,410,411,412,413,431,500,501,1100,1102,1107,1108,4608,4610,4611,4614,4622,4624,4625,4634,4647,4648,4649,4657,4661,4662,4663,4665,4666,4667,4688,4670,4672,4673,4674,4675,4689,4697,4700,4702,4704,4705,4716,4717,4718,4719,4720,4722,4723,4724,4725,4726,4727,4728,4729,4733,4732,4735,4737,4738,4739,4740,4742,4744,4745,4746,4750,4751,4752,4754,4755,4756,4757,4760,4761,4762,4764,4767,4768,4771,4774,4778,4779,4781,4793,4797,4798,4799,4800,4801,4802,4803,4825,4826,4870,4886,4887,4888,4893,4898,4902,4904,4905,4907,4931,4932,4933,4946,4948,4956,4985,5024,5033,5059,5136,5137,5140,5145,5632,6144,6145,6272,6273,6278,6416,6423,6424,8001,8002,8003,8004,8005,8006,8007,8222,26401,30004 |
-
-> [!NOTE]
-> - If you're using a Group Policy Object (GPO), we recommend that you enable the Process Creation Event 4688 audit policy and the *CommandLine* field inside event 4688. For more information about Process Creation Event 4688, see Defender for Cloud's [FAQ](./faq-data-collection-agents.yml#what-happens-when-data-collection-is-enabled-). For more information about these audit policies, see [Audit Policy Recommendations](/windows-server/identity/ad-ds/plan/security-best-practices/audit-policy-recommendations).
-> - To enable data collection for [Adaptive application controls](adaptive-application-controls.md), Defender for Cloud configures a local AppLocker policy in Audit mode to allow all applications. This causes AppLocker to generate events, which are then collected and used by Defender for Cloud. This policy won't be configured on machines that already have a configured AppLocker policy.
-> - To collect Windows Filtering Platform [Event ID 5156](https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=5156), you need to enable [Audit Filtering Platform Connection](/windows/security/threat-protection/auditing/audit-filtering-platform-connection) (Auditpol /set /subcategory:"Filtering Platform Connection" /Success:Enable)
->
-
-### Setting the security event option at the workspace level
-
-You can define the level of security event data to store at the workspace level.
-
-1. From Defender for Cloud's menu in the Azure portal, select **Environment settings**.
-1. Select the relevant workspace. The only data collection events for a workspace are the Windows security events described on this page.
-
- :::image type="content" source="media/enable-data-collection/event-collection-workspace.png" alt-text="Setting the security event data to store in a workspace.":::
-
-1. Select the amount of raw event data to store and select **Save**.
-
-<a name="manual-agent"></a>
-
-## Manual agent provisioning
-
-To manually install the Log Analytics agent:
-
-1. Disable auto provisioning.
-
-1. Optionally, create a workspace.
-
-1. Enable Microsoft Defender for Cloud on the workspace on which you're installing the Log Analytics agent:
-
- 1. From Defender for Cloud's menu, open **Environment settings**.
-
- 1. Set the workspace on which you're installing the agent. Make sure the workspace is in the same subscription you use in Defender for Cloud and that you have read/write permissions for the workspace.
-
- 1. Select **Microsoft Defender for Cloud on**, and **Save**.
-
- >[!NOTE]
- >If the workspace already has a **Security** or **SecurityCenterFree** solution enabled, the pricing will be set automatically.
-
-1. To deploy agents on new VMs using a Resource Manager template, install the Log Analytics agent:
-
- - [Install the Log Analytics agent for Windows](../virtual-machines/extensions/oms-windows.md)
- - [Install the Log Analytics agent for Linux](../virtual-machines/extensions/oms-linux.md)
-
-1. To deploy agents on your existing VMs, follow the instructions in [Collect data about Azure Virtual Machines](../azure-monitor/vm/monitor-virtual-machine.md) (the section **Collect event and performance data** is optional).
-
-1. To use PowerShell to deploy the agents, use the instructions from the virtual machines documentation:
-
- - [For Windows machines](../virtual-machines/extensions/oms-windows.md?toc=%2fazure%2fazure-monitor%2ftoc.json#powershell-deployment)
- - [For Linux machines](../virtual-machines/extensions/oms-linux.md?toc=%2fazure%2fazure-monitor%2ftoc.json#azure-cli-deployment)
-
-> [!TIP]
-> For more information about onboarding, see [Automate onboarding of Microsoft Defender for Cloud using PowerShell](powershell-onboarding.md).
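If you script the installation yourself, the Log Analytics agent is deployed to Azure VMs as a VM extension. The following Python sketch shows one way that could look for a Windows VM through the Azure Resource Manager REST API; the api-version and placeholder values are assumptions for this example, and Linux VMs use the `OmsAgentForLinux` extension type instead.

```python
# Illustrative sketch: install the Log Analytics agent VM extension on a Windows VM.
# The api-version and placeholder values are assumptions.
import requests
from azure.identity import DefaultAzureCredential

vm_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)  # placeholder resource ID
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = f"https://management.azure.com{vm_id}/extensions/MicrosoftMonitoringAgent?api-version=2022-08-01"
body = {
    "location": "eastus",  # must match the VM's region
    "properties": {
        "publisher": "Microsoft.EnterpriseCloud.Monitoring",
        "type": "MicrosoftMonitoringAgent",  # use "OmsAgentForLinux" for Linux VMs
        "typeHandlerVersion": "1.0",
        "autoUpgradeMinorVersion": True,
        "settings": {"workspaceId": "<workspace-id>"},             # workspace GUID
        "protectedSettings": {"workspaceKey": "<workspace-key>"},  # workspace key
    },
}
resp = requests.put(url, headers={"Authorization": f"Bearer {token}"}, json=body)
resp.raise_for_status()
print(resp.json()["properties"]["provisioningState"])
```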
-
-<a name="preexisting"></a>
-
-## Auto provisioning in cases of a pre-existing agent installation
-
-The following use cases explain how auto provisioning works in cases when there's already an agent or extension installed.
--- **Log Analytics agent is installed on the machine, but not as an extension (Direct agent)** - If the Log Analytics agent is installed directly on the VM (not as an Azure extension), Defender for Cloud will install the Log Analytics agent extension and might upgrade the Log Analytics agent to the latest version. The installed agent will continue to report to its already configured workspaces and to the workspace configured in Defender for Cloud. (Multi-homing is supported on Windows machines.)-
- If the Log Analytics agent is configured with a user workspace and not Defender for Cloud's default workspace, you'll need to install the "Security" or "SecurityCenterFree" solution on it for Defender for Cloud to start processing events from VMs and computers reporting to that workspace.
-
- For Linux machines, agent multi-homing isn't yet supported. If an existing agent installation is detected, the Log Analytics agent won't be auto provisioned.
-
- For existing machines on subscriptions onboarded to Defender for Cloud before 17 March 2019, when an existing agent is detected, the Log Analytics agent extension isn't installed and the machine isn't affected. For these machines, see the "Resolve monitoring agent health issues on your machines" recommendation to resolve the agent installation issues.
-
-- **System Center Operations Manager agent is installed on the machine** - Defender for Cloud will install the Log Analytics agent extension side by side to the existing Operations Manager. The existing Operations Manager agent will continue to report to the Operations Manager server normally. The Operations Manager agent and Log Analytics agent share common run-time libraries, which will be updated to the latest version during this process. If Operations Manager agent version 2012 is installed, **do not** enable auto provisioning.--- **A pre-existing VM extension is present**:
- - When the Monitoring Agent is installed as an extension, the extension configuration allows reporting to only a single workspace. Defender for Cloud doesn't override existing connections to user workspaces. Defender for Cloud will store security data from the VM in the workspace already connected, if the "Security" or "SecurityCenterFree" solution has been installed on it. Defender for Cloud may upgrade the extension version to the latest version in this process.
-    - To see which workspace the existing extension is sending data to, run the [Validate connectivity with Microsoft Defender for Cloud](/archive/blogs/yuridiogenes/validating-connectivity-with-azure-security-center) test. Alternatively, you can open Log Analytics workspaces, select a workspace, select the VM, and look at the Log Analytics agent connection.
- - If you have an environment where the Log Analytics agent is installed on client workstations and reporting to an existing Log Analytics workspace, review the list of [operating systems supported by Microsoft Defender for Cloud](security-center-os-coverage.md) to make sure your operating system is supported. For more information, see [Existing log analytics customers](./faq-azure-monitor-logs.yml).
-
-
-<a name="offprovisioning"></a>
-
-## Disable auto provisioning
-
-When you disable auto provisioning, agents won't be provisioned on new VMs.
-
-To turn off auto provisioning of an agent:
-
-1. From Defender for Cloud's menu in the portal, select **Environment settings**.
-1. Select the relevant subscription.
-1. Select **Auto provisioning**.
-1. Toggle the status to **Off** for the relevant agent.
-
- :::image type="content" source="./media/enable-data-collection/agent-toggles.png" alt-text="Toggles to disable auto provisioning per agent type.":::
-
-1. Select **Save**. When auto provisioning is disabled, the default workspace configuration section isn't displayed:
-
- :::image type="content" source="./media/enable-data-collection/empty-configuration-column.png" alt-text="When auto provisioning is disabled, the configuration cell is empty":::
--
-> [!NOTE]
-> Disabling auto provisioning does not remove the Log Analytics agent from Azure VMs where the agent was provisioned. For information on removing the OMS extension, see [How do I remove OMS extensions installed by Defender for Cloud](./faq-data-collection-agents.yml#how-do-i-remove-oms-extensions-installed-by-defender-for-cloud-).
->
--
-## Troubleshooting
--- To identify monitoring agent network requirements, see [Troubleshooting monitoring agent network requirements](troubleshooting-guide.md#mon-network-req).-- To identify manual onboarding issues, see [How to troubleshoot Operations Management Suite onboarding issues](https://support.microsoft.com/help/3126513/how-to-troubleshoot-operations-management-suite-onboarding-issues).---
-## Next steps
-
-This page explained how to enable auto provisioning for the Log Analytics agent and other Defender for Cloud extensions. It also described how to define a Log Analytics workspace in which to store the collected data. Both operations are required to enable data collection. Data storage in a new or existing Log Analytics workspace might incur more charges for data storage. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
defender-for-cloud Enable Enhanced Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-enhanced-security.md
Title: Enable Microsoft Defender for Cloud's integrated workload protections
description: Learn how to enable enhanced security features to extend the protections of Microsoft Defender for Cloud to your hybrid and multicloud resources Previously updated : 07/14/2022- Last updated : 09/20/2022+ # Quickstart: Enable enhanced security features
-Get started with Defender for Cloud by using its enhanced security features to protect your hybrid and multicloud environments.
+In this quickstart, you'll learn how to enable the enhanced security features by enabling the Defender for Cloud plans through the Azure portal.
-In this quickstart, you'll learn how to enable the enhanced security features by enabling the different Defender for Cloud plans through the Azure portal.
+Microsoft Defender for Cloud uses [monitoring components](monitoring-components.md) to collect data from your resources. These extensions are automatically deployed when you turn on a Defender plan. Each Defender plan has its own requirements for monitoring components, so it's important that the required extensions are deployed to your resources to get all of the benefits of each plan.
-To learn more about the benefits of enhanced security features, see [Microsoft Defender for Cloud's enhanced security features](enhanced-security-features-overview.md).
+The Defender plans show you the monitoring coverage for each Defender plan. If the monitoring coverage is **Full**, all of the necessary extensions are installed. If the monitoring coverage is **Partial**, the information tooltip tells you what extensions are missing. For some plans, you can configure specific monitoring settings.
-## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+To learn more about the benefits of enhanced security features, see [Microsoft Defender for Cloud's enhanced security features](enhanced-security-features-overview.md).
-- You must have [enabled Defender for Cloud](get-started.md) on your Azure subscription.
+## Prerequisites
-## Enable enhanced security features from the Azure portal
+To get started with Defender for Cloud, you'll need a Microsoft Azure subscription with [Defender for Cloud enabled](get-started.md). If you don't have an Azure subscription, you can [sign up for a free subscription](https://azure.microsoft.com/pricing/free-trial/).
-To enable all Defender for Cloud features including threat protection capabilities, you must enable enhanced security features on the subscription containing the applicable workloads.
+## Enable Defender plans to get the enhanced security features
-If you only enable Defender for Cloud at the workspace level, Defender for Cloud won't enable just-in-time VM access, adaptive application controls, and network detections for Azure resources. In addition, the only Microsoft Defender plans available at the workspace level are Microsoft Defender for Servers and Microsoft Defender for SQL servers on machines.
+To get all of the Defender for Cloud protections, you'll need to enable the Defender plans for each of the workloads that you want to protect.
> [!NOTE] > - You can enable **Microsoft Defender for Storage accounts** at either the subscription level or resource level. > - You can enable **Microsoft Defender for SQL** at either the subscription level or resource level. > - You can enable **Microsoft Defender for open-source relational databases** at the resource level only.
+> - The Microsoft Defender plans available at the workspace level are Microsoft Defender for Servers and Microsoft Defender for SQL servers on machines.
-You can protect an entire Azure subscription with Defender for Cloud's enhanced security features and the protections will be inherited by all resources within the subscription.
+When you enable Defender plans on an entire Azure subscription, the protections are inherited by all resources in the subscription.
-**To enable enhanced security features on one subscription**:
+### Enable enhanced security features on a subscription
+
+**To enable enhanced security features on a subscription**:
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and select **Microsoft Defender for Cloud**.
-1. From Defender for Cloud's main menu, select **Environment settings**.
+1. In the Defender for Cloud menu, select **Environment settings**.
1. Select the subscription or workspace that you want to protect.
-
+ 1. Select **Enable all** to enable all of the plans for Defender for Cloud.
- :::image type="content" source="./media/enhanced-security-features-overview/pricing-tier-page.png" alt-text="Screenshot of the Defender for Cloud's pricing page in the Azure portal." lightbox="media/enhanced-security-features-overview/pricing-tier-page.png":::
+ :::image type="content" source="media/enable-enhanced-security/enable-all-plans.png" alt-text="Screenshot that shows where to select enable all on the plans page." lightbox="media/enable-enhanced-security/enable-all-plans.png":::
1. Select **Save**.
+All of the plans are turned on and the monitoring components required by each plan are deployed to the protected resources.
+
+If you want to disable any of the plans, turn the plan off. The extensions used by the plan aren't uninstalled, but after a short time they stop collecting data.
+
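If you'd rather enable plans from a script than from the portal, each Defender plan is represented by a `Microsoft.Security/pricings` resource on the subscription. The following Python sketch is illustrative only; the plan names and api-version shown are assumptions and might not match every plan available on your subscription.

```python
# Illustrative sketch: set a few Defender plans to the Standard (paid) tier on a
# subscription. The plan names and api-version are assumptions for this example.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder
plans = ["VirtualMachines", "StorageAccounts", "SqlServers"]  # example plan names
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

for plan in plans:
    url = (
        f"https://management.azure.com/subscriptions/{subscription_id}"
        f"/providers/Microsoft.Security/pricings/{plan}?api-version=2022-03-01"
    )
    body = {"properties": {"pricingTier": "Standard"}}  # "Free" turns the plan off
    resp = requests.put(url, headers=headers, json=body)
    resp.raise_for_status()
    print(plan, resp.json()["properties"]["pricingTier"])
```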
+### Enable enhanced security on multiple subscriptions or workspaces
+ **To enable enhanced security on multiple subscriptions or workspaces**: 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and select **Microsoft Defender for Cloud**.
-1. From Defender for Cloud's menu, select **Getting started**.
+1. In the Defender for Cloud menu, select **Getting started**.
- The Upgrade tab lists subscriptions and workspaces eligible for onboarding.
+ The Upgrade tab lists subscriptions and workspaces that you can onboard the Defender plans to.
- :::image type="content" source="./media/enable-enhanced-security/get-started-upgrade-tab.png" alt-text="Screenshot of the upgrade tab of the getting started page." lightbox="media/enable-enhanced-security/get-started-upgrade-tab.png":::
+ :::image type="content" source="./media/enable-enhanced-security/getting-started-upgrade.png" alt-text="Screenshot of enabling Defender plans for multiple subscriptions." lightbox="media/enable-enhanced-security/getting-started-upgrade.png":::
-1. Select the desired subscriptions and workspace from the list.
+1. Select the desired subscriptions and workspaces from the list and select **Upgrade**.
-1. Select **Upgrade**.
-
- :::image type="content" source="./media/enable-enhanced-security/upgrade-selected-workspaces-and-subscriptions.png" alt-text="Screenshot that shows where the upgrade button is located on the screen." lightbox="media/enable-enhanced-security/upgrade-selected-workspaces-and-subscriptions.png":::
+ :::image type="content" source="./media/enable-enhanced-security/upgrade-workspaces-and-subscriptions.png" alt-text="Screenshot that shows where the upgrade button is located on the screen." lightbox="media/enable-enhanced-security/upgrade-workspaces-and-subscriptions-full.png":::
> [!NOTE] > - If you select subscriptions and workspaces that aren't eligible for trial, the next step will upgrade them and charges will begin. > - If you select a workspace that's eligible for a free trial, the next step will begin a trial.
-## Customize plans
-
-Certain plans allow you to customize your protection.
-
-You can learn about the differences between the [Defender for Servers plans](defender-for-servers-introduction.md#defender-for-servers-plans) to help you choose which one you would like to apply to your subscription.
-
-Defender for Databases allows you to [select which type of resources you want to protect](quickstart-enable-database-protections.md). You can learn about the different types of protections offered.
-
-Defender for Containers is available on hybrid and multicloud environments. You can learn more about the [enablement process](defender-for-containers-enable.md) for Defender for Containers for each environment type.
-
-## Disable enhanced security features
-
-If you choose to disable the enhanced security features for a subscription, you'll just need to change the plan to **Off**.
-
-**To disable enhanced security features**:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Search for and select **Microsoft Defender for Cloud**.
-
-1. From Defender for Cloud's menu, select **Environment settings**.
-
-1. Select the relevant subscriptions and workspaces.
-
-1. Find the plan you wish to turn off and select **Off**.
-
- :::image type="content" source="./media/enable-enhanced-security/disable-plans.png" alt-text="Screenshot that shows you how to enable or disable Defender for Cloud's enhanced security features." lightbox="media/enable-enhanced-security/disable-plans.png":::
-
- > [!NOTE]
- > After you disable enhanced security features - whether you disable a single plan or all plans at once - data collection may continue for a short period of time.
+If you want to disable any of the plans, turn the plan off. The extensions used by the plan aren't uninstalled, but after a short time they stop collecting data.
## Next steps
-Now that you've enabled enhanced security features, enable the necessary agents and extensions to perform automatic data collection as described in [auto provisioning agents and extensions](enable-data-collection.md).
+Certain plans allow you to customize your protection.
+
+- Learn about the [Defender for Servers plans](defender-for-servers-introduction.md#defender-for-servers-plans) to help you choose which plan you want to apply to your subscription.
+- Defender for Databases lets you [select which type of resources you want to protect](quickstart-enable-database-protections.md).
+- Learn more about [how to enable Defender for Containers](defender-for-containers-enable.md) for different Kubernetes environments.
+- Learn about the [monitoring components](monitoring-components.md) that the Defender plans use to collect data from your Azure, hybrid, and multicloud resources.
defender-for-cloud Enable Vulnerability Assessment Agentless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-vulnerability-assessment-agentless.md
+
+ Title: Find software and vulnerabilities with agentless scanning - Microsoft Defender for Cloud
+description: Find installed software and software vulnerabilities on your Azure machines and AWS machines without installing an agent.
++++ Last updated : 09/21/2022+
+# Find vulnerabilities and collect software inventory with agentless scanning (Preview)
+
+Agentless scanning provides visibility into installed software and software vulnerabilities on your workloads, extending vulnerability assessment coverage to server workloads that don't have a vulnerability assessment agent installed.
+
+Learn more about [agentless scanning](concept-agentless-data-collection.md).
+
+Agentless vulnerability assessment uses the Defender Vulnerability Management engine to assess vulnerabilities in the software installed on your VMs, without requiring Defender for Endpoint to be installed. Vulnerability assessment shows software inventory and vulnerability results in the same format as the agent-based assessments.
+
+## Compatibility with agent-based vulnerability assessment solutions
+
+Defender for Cloud already supports different agent-based vulnerability scans, including [Microsoft Defender for Endpoint (MDE)](deploy-vulnerability-assessment-tvm.md), [BYOL](deploy-vulnerability-assessment-byol-vm.md) and [Qualys](deploy-vulnerability-assessment-vm.md). Agentless scanning extends the visibility of Defender for Cloud to reach more devices.
+
+When you enable agentless vulnerability assessment:
+
+- If you have **no existing integrated vulnerability assessment** solutions, Defender for Cloud automatically displays vulnerability assessment results from agentless scanning.
+- If you have **Vulnerability assessment with MDE integration**, Defender for Cloud shows a unified and consolidated view that optimizes coverage and freshness.
+
+ - Machines covered by just one of the sources (MDE or agentless) show the results from that source.
+ - Machines covered by both sources show only the agent-based results, for increased freshness.
+
+- If you have **Vulnerability assessment with Qualys or BYOL integrations** - Defender for Cloud shows the agent-based results by default. Results from the agentless scan are shown for machines that don't have an agent installed or for machines that aren't reporting findings correctly.
+
+ If you want to change the default behavior so that Defender for Cloud always displays results from Defender vulnerability management (regardless of a third-party agent solution), select the [Defender vulnerability management](auto-deploy-vulnerability-assessment.md#automatically-enable-a-vulnerability-assessment-solution) setting in the vulnerability assessment solution.
+
+## Enabling agentless scanning for machines
+
+When you enable [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or [Defender for Servers P2](defender-for-servers-introduction.md), agentless scanning is turned on by default.
+
+If you have Defender for Servers P2 already enabled and agentless scanning is turned off, you need to turn on agentless scanning manually.
+
+### Agentless vulnerability assessment on Azure
+
+To enable agentless vulnerability assessment on Azure:
+
+1. From Defender for Cloud's menu, open **Environment settings**.
+1. Select the relevant subscription.
+1. For either the [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or Defender for Servers P2 plan, select **Settings**.
+
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/defender-plan-settings.png" alt-text="Screenshot of link for the settings of the Defender plans for Azure subscriptions." lightbox="media/enable-vulnerability-assessment-agentless/defender-plan-settings.png":::
+
+ The agentless scanning setting is shared by both Defender Cloud Security Posture Management (CSPM) and Defender for Servers P2. When you enable agentless scanning on either plan, the setting is enabled for both plans.
+
+1. In the settings pane, turn on **Agentless scanning for machines**.
+
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/agentless-scanning-off.png" alt-text="Screenshot of the agentless scanning status." lightbox="media/enable-vulnerability-assessment-agentless/agentless-scanning-off.png":::
+
+1. Select **Save**.
+
+### Agentless vulnerability assessment on AWS
+
+1. From Defender for Cloud's menu, open **Environment settings**.
+1. Select the relevant account.
+1. For either the Defender Cloud Security Posture Management (CSPM) or Defender for Servers P2 plan, select **Settings**.
+
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/defender-plan-settings-aws.png" alt-text="Screenshot of link for the settings of the Defender plans for AWS accounts." lightbox="media/enable-vulnerability-assessment-agentless/defender-plan-settings-aws.png":::
+
+ When you enable agentless scanning on either plan, the setting applies to both plans.
+
+1. In the settings pane, turn on **Agentless scanning for machines**.
+
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/agentless-scanning-off-aws.png" alt-text="Screenshot of the agentless scanning status for AWS accounts.":::
+
+1. Select **Save and Next: Configure Access**.
+
+1. Download the CloudFormation template.
+
+1. Using the downloaded CloudFormation template, create the stack in AWS as instructed on screen. If you're onboarding a management account, you'll need to run the CloudFormation template both as Stack and as StackSet. Connectors will be created for the member accounts up to 24 hours after the onboarding.
+
+1. Select **Next: Review and generate**.
+
+1. Select **Update**.
+
+After you enable agentless scanning, software inventory and vulnerability information is updated automatically in Defender for Cloud.
+
+## Exclude machines from scanning
+
+Agentless scanning applies to all of the eligible machines in the subscription. To prevent specific machines from being scanned, you can exclude machines from agentless scanning based on your pre-existing environment tags. When Defender for Cloud performs the continuous discovery for machines, excluded machines are skipped.
+
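For example, if you decide to mark machines with a tag such as `SkipAgentlessScan` (a name used here only for illustration), you could apply it with the Resource Manager tags API, as in the following sketch. The tag name, tag value, and api-version are assumptions, and you still configure the exclusion itself in the steps that follow.

```python
# Illustrative sketch: merge an exclusion tag onto a VM with the Resource Manager
# tags API. The tag name/value and api-version are assumptions for this example.
import requests
from azure.identity import DefaultAzureCredential

vm_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)  # placeholder resource ID
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = f"https://management.azure.com{vm_id}/providers/Microsoft.Resources/tags/default?api-version=2021-04-01"
body = {
    "operation": "Merge",  # keep existing tags, add or update this one
    "properties": {"tags": {"SkipAgentlessScan": "true"}},  # hypothetical tag
}
resp = requests.patch(url, headers={"Authorization": f"Bearer {token}"}, json=body)
resp.raise_for_status()
print(resp.json()["properties"]["tags"])
```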
+To configure machines for exclusion:
+
+1. From Defender for Cloud's menu, open **Environment settings**.
+1. Select the relevant subscription or multicloud connector.
+1. For either the Defender Cloud Security Posture Management (CSPM) or Defender for Servers P2 plan, select **Settings**.
+1. For agentless scanning, select **Edit configuration**.
+
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/agentless-scanning-edit-configuration.png" alt-text="Screenshot of the link to edit the agentless scanning configuration." lightbox="media/enable-vulnerability-assessment-agentless/agentless-scanning-edit-configuration.png":::
+
+1. Enter the tag name and value that applies to the machines that you want to exempt. You can enter multiple tag:value pairs.
+
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/agentless-scanning-exclude-tags.png" alt-text="Screenshot of the tag and value fields for excluding machines from agentless scanning.":::
+
+1. Select **Save** to apply the changes.
+
+## Next steps
+
+In this article, you learned about how to scan your machines for software vulnerabilities without installing an agent.
+
+Learn more about:
+
+- [Vulnerability assessment with Microsoft Defender for Endpoint](deploy-vulnerability-assessment-tvm.md)
+- [Vulnerability assessment with Qualys](deploy-vulnerability-assessment-vm.md)
+- [Vulnerability assessment with BYOL solutions](deploy-vulnerability-assessment-byol-vm.md)
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
Title: Understand the basic and extended security features of Microsoft Defender
description: Learn about the benefits of enabling enhanced security in Microsoft Defender for Cloud Last updated 07/21/2022-+ # Microsoft Defender for Cloud's basic and enhanced security features
-Defender for Cloud offers a number of enhanced security features that can help protect your organization against threats and attacks.
+Defender for Cloud offers many enhanced security features that can help protect your organization against threats and attacks.
- **Basic security features** (Free) - When you open Defender for Cloud in the Azure portal for the first time or if you enable it through the API, Defender for Cloud is enabled for free on all your Azure subscriptions. By default, Defender for Cloud provides the [secure score](secure-score-security-controls.md), [security policy and basic recommendations](security-policy-concept.md), and [network security assessment](protect-network-resources.md) to help you protect your Azure resources.
Defender for Cloud offers a number of enhanced security features that can help p
- **Multicloud security** - Connect your accounts from Amazon Web Services (AWS) and Google Cloud Platform (GCP) to protect resources and workloads on those platforms with a range of Microsoft Defender for Cloud security features. - **Hybrid security** - Get a unified view of security across all of your on-premises and cloud workloads. Apply security policies and continuously assess the security of your hybrid cloud workloads to ensure compliance with security standards. Collect, search, and analyze security data from multiple sources, including firewalls and other partner solutions. - **Threat protection alerts** - Advanced behavioral analytics and the Microsoft Intelligent Security Graph provide an edge over evolving cyber-attacks. Built-in behavioral analytics and machine learning can identify attacks and zero-day exploits. Monitor networks, machines, data stores (SQL servers hosted inside and outside Azure, Azure SQL databases, Azure SQL Managed Instance, and Azure Storage) and cloud services for incoming attacks and post-breach activity. Streamline investigation with interactive tools and contextual threat intelligence.
- - **Track compliance with a range of standards** - Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in [Azure Security Benchmark](/security/benchmark/azure/introduction). When you enable the enhanced security features, you can apply a range of other industry standards, regulatory standards, and benchmarks according to your organization's needs. Add standards and track your compliance with them from the [regulatory compliance dashboard](update-regulatory-compliance-packages.md).
- - **Access and application controls** - Block malware and other unwanted applications by applying machine learning powered recommendations adapted to your specific workloads to create allowlists and blocklists. Reduce the network attack surface with just-in-time, controlled access to management ports on Azure VMs. Access and application controls drastically reduce exposure to brute force and other network attacks.
+ - **Track compliance with a range of standards** - Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in [Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction). When you enable the enhanced security features, you can apply a range of other industry standards, regulatory standards, and benchmarks according to your organization's needs. Add standards and track your compliance with them from the [regulatory compliance dashboard](update-regulatory-compliance-packages.md).
+ - **Access and application controls** - Block malware and other unwanted applications by applying machine learning powered recommendations adapted to your specific workloads to create allowlists and blocklists. Reduce the network attack surface with just-in-time, controlled access to management ports on Azure VMs. Access and application control drastically reduce exposure to brute force and other network attacks.
- **Container security features** - Benefit from vulnerability management and real-time threat protection on your containerized environments. Charges are based on the number of unique container images pushed to your connected registry. After an image has been scanned once, you won't be charged for it again unless it's modified and pushed once more. - **Breadth threat protection for resources connected to Azure** - Cloud-native threat protection for the Azure services common to all of your resources: Azure Resource Manager, Azure DNS, Azure network layer, and Azure Key Vault. Defender for Cloud has unique visibility into the Azure management layer and the Azure DNS layer, and can therefore protect cloud resources that are connected to those layers.
+ - **Manage your Cloud Security Posture Management (CSPM)** - CSPM offers you the ability to remediate security issues and review your security posture through the tools provided. These tools include:
+ - Security governance and regulatory compliance
+ - Cloud security graph
+ - Attack path analysis
+ - Agentless scanning for machines
+
+ Learn more about [CSPM](concept-cloud-security-posture-management.md).
## FAQ - Pricing and billing
You can use any of the following ways to enable enhanced security for your subsc
No. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on an Azure subscription or a connected AWS account, all of the connected machines will be protected by Defender for Servers.
-Another alternative, is to enable Microsoft Defender for Servers at the Log Analytics workspace level. If you do this, only servers reporting to that workspace will be protected and billed. However, several capabilities will be unavailable. These include Microsoft Defender for Endpoint, VA solution (TVM/Qualys), just-in-time VM access, and more.
+Another alternative is to enable Microsoft Defender for Servers at the Log Analytics workspace level. If you do this, only servers reporting to that workspace will be protected and billed. However, several capabilities will be unavailable. These include Microsoft Defender for Endpoint, VA solution (TVM/Qualys), just-in-time VM access, and more.
### If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Defender for Servers?
No. When you enable [Microsoft Defender for Servers](defender-for-servers-introd
### If I enable Defender for Cloud's Servers plan on the subscription level, do I need to enable it on the workspace level?
-When you enable the Servers plan on the subscription level, Defender for Cloud will enable the Servers plan on your default workspace(s) automatically when auto-provisioning is enabled. Enable auto-provisioning on the Auto provisioning page by selecting **Connect Azure VMs to the default workspace(s) created by Defender for Cloud** option and selecting **Apply**.
+When you enable the Servers plan on the subscription level, Defender for Cloud will enable the Servers plan on your default workspaces automatically. Connect to the default workspace by selecting the **Connect Azure VMs to the default workspace(s) created by Defender for Cloud** option and selecting **Apply**.
:::image type="content" source="media/enhanced-security-features-overview/connect-workspace.png" alt-text="Screenshot showing how to auto-provision Defender for Cloud to manage your workspaces.":::
If you enable the Servers plan on cross-subscription workspaces, connected VMs f
### Will I be charged for machines without the Log Analytics agent installed?
-Yes. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on an Azure subscription or a connected AWS account, you'll be charged for all machines that are connected to your Azure subscription or AWS account. The term machines includes Azure virtual machines, Azure virtual machine scale sets instances, and Azure Arc-enabled servers. Machines that don't have Log Analytics installed are covered by protections that don't depend on the Log Analytics agent.
+Yes. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on an Azure subscription or a connected AWS account, you'll be charged for all machines that are connected to your Azure subscription or AWS account. The term machines includes Azure virtual machines, Azure virtual machine scale sets instances, and Azure Arc-enabled servers. Machines that don't have Log Analytics installed are covered by protections that don't depend on the Log Analytics agent.
### If a Log Analytics agent reports to multiple workspaces, will I be charged twice?
defender-for-cloud Exempt Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/exempt-resource.md
Title: Exempt a Microsoft Defender for Cloud recommendation from a resource, subscription, management group, and secure score description: Learn how to create rules to exempt security recommendations from subscriptions or management groups and prevent them from impacting your secure score + Last updated 01/02/2022
A core priority of every security team is to ensure analysts can focus on the ta
When you investigate your security recommendations in Microsoft Defender for Cloud, one of the first pieces of information you review is the list of affected resources.
-Occasionally, a resource will be listed that you feel shouldn't be included. Or a recommendation will show in a scope where you feel it doesn't belong. The resource might have been remediated by a process not tracked by Defender for Cloud. The recommendation might be inappropriate for a specific subscription. Or perhaps your organization has simply decided to accept the risks related to the specific resource or recommendation.
+Occasionally, a resource will be listed that you feel shouldn't be included. Or a recommendation will show in a scope where you feel it doesn't belong. The resource might have been remediated by a process not tracked by Defender for Cloud. The recommendation might be inappropriate for a specific subscription. Or perhaps your organization has decided to accept the risks related to the specific resource or recommendation.
In such cases, you can create an exemption for a recommendation to:
In such cases, you can create an exemption for a recommendation to:
| Aspect | Details | ||:--| | Release state: | Preview<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] |
-| Pricing: | This is a premium Azure Policy capability that's offered at no additional cost for customers with Microsoft Defender for Cloud's enhanced security features enabled. For other users, charges might apply in the future. |
+| Pricing: | This is a premium Azure Policy capability that's offered at no extra cost for customers with Microsoft Defender for Cloud's enhanced security features enabled. For other users, charges might apply in the future. |
| Required roles and permissions: | **Owner** or **Resource Policy Contributor** to create an exemption<br>To create a rule, you need permissions to edit policies in Azure Policy.<br>Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy). |
-| Limitations: | Exemptions can be created only for recommendations included in Defender for Cloud's default initiative, [Azure Security Benchmark](/security/benchmark/azure/introduction), or any of the supplied regulatory standard initiatives. Recommendations that are generated from custom initiatives cannot be exempted. Learn more about the relationships between [policies, initiatives, and recommendations](security-policy-concept.md). |
+| Limitations: | Exemptions can be created only for recommendations included in Defender for Cloud's default initiative, [Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction), or any of the supplied regulatory standard initiatives. Recommendations that are generated from custom initiatives can't be exempted. Learn more about the relationships between [policies, initiatives, and recommendations](security-policy-concept.md). |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
To fine-tune the security recommendations that Defender for Cloud makes for your
- Mark **one or more resources** as "mitigated" or "risk accepted" for a specific recommendation. > [!NOTE]
-> Exemptions can be created only for recommendations included in Defender for Cloud's default initiative, Azure Security Benchmark or any of the supplied regulatory standard initiatives. Recommendations that are generated from any custom initiatives assigned to your subscriptions cannot be exempted. Learn more about the relationships between [policies, initiatives, and recommendations](security-policy-concept.md).
+> Exemptions can be created only for recommendations included in Defender for Cloud's default initiative, Microsoft Cloud Security Benchmark or any of the supplied regulatory standard initiatives. Recommendations that are generated from any custom initiatives assigned to your subscriptions cannot be exempted. Learn more about the relationships between [policies, initiatives, and recommendations](security-policy-concept.md).
> [!TIP] > You can also create exemptions using the API. For an example JSON, and an explanation of the relevant structures see [Azure Policy exemption structure](../governance/policy/concepts/exemption-structure.md).
To keep track of how your users are exercising this capability, we've created an
The asset inventory page of Microsoft Defender for Cloud provides a single page for viewing the security posture of the resources you've connected to Defender for Cloud. Learn more in [Explore and manage your resources with asset inventory](asset-inventory.md).
-The inventory page includes many filters to let you narrow the list of resources to the ones of most interest for any given scenario. One such filter is the **Contains exemptions**. Use this filter to find all resources that have been exempted from one or more recommendation.
+The inventory page includes many filters to let you narrow the list of resources to the ones of most interest for any given scenario. One such filter is **Contains exemptions**. Use this filter to find all resources that have been exempted from one or more recommendations.
:::image type="content" source="media/exempt-resource/inventory-filter-exemptions.png" alt-text="Defender for Cloud's asset inventory page and the filter to find resources with exemptions":::
defender-for-cloud Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md
+
+ Title: Configure the Microsoft Security DevOps GitHub action
+description: Learn how to configure the Microsoft Security DevOps GitHub action.
Last updated : 09/11/2022++++
+# Configure the Microsoft Security DevOps GitHub action
+
+Microsoft Security DevOps is a command-line application that integrates static analysis tools into the development lifecycle. Security DevOps installs, configures, and runs the latest versions of static analysis tools, such as SDL, security, and compliance tools. Security DevOps is data-driven with portable configurations that enable deterministic execution across multiple environments.
+
+Security DevOps uses the following Open Source tools:
+
+| Name | Language | License |
+|--|--|--|
+| [Bandit](https://github.com/PyCQA/bandit) | Python | [Apache License 2.0](https://github.com/PyCQA/bandit/blob/master/LICENSE) |
+| [BinSkim](https://github.com/Microsoft/binskim) | Binary--Windows, ELF | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
+| [ESlint](https://github.com/eslint/eslint) | JavaScript | [MIT License](https://github.com/eslint/eslint/blob/main/LICENSE) |
+| [Template Analyzer](https://github.com/Azure/template-analyzer) | ARM template, Bicep file | [MIT License](https://github.com/Azure/template-analyzer/blob/main/LICENSE.txt) |
+| [Terrascan](https://github.com/accurics/terrascan) | Terraform (HCL2), Kubernetes (JSON/YAML), Helm v3, Kustomize, Dockerfiles, Cloud Formation | [Apache License 2.0](https://github.com/accurics/terrascan/blob/master/LICENSE) |
+| [Trivy](https://github.com/aquasecurity/trivy) | container images, file systems, git repositories | [Apache License 2.0](https://github.com/aquasecurity/trivy/blob/main/LICENSE) |
+
+## Prerequisites
+
+- [Connect your GitHub repositories](quickstart-onboard-github.md).
+
+- Follow the guidance to set up [GitHub Advanced Security](https://docs.github.com/en/organizations/keeping-your-organization-secure/managing-security-settings-for-your-organization/managing-security-and-analysis-settings-for-your-organization).
+
+- Open the [Microsoft Security DevOps GitHub action](https://github.com/marketplace/actions/security-devops-action) in a new window.
+
+## Configure the Microsoft Security DevOps GitHub action
+
+**To set up the GitHub action**:
+
+1. Sign in to [GitHub](https://www.github.com).
+
+1. Select a repository on which you want to configure the GitHub action.
+
+1. Select **Actions**.
+
+ :::image type="content" source="media/msdo-github-action/actions.png" alt-text="Screenshot that shows you where the Actions button is located.":::
+
+1. Select **New workflow**.
+
+1. On the Get started with GitHub Actions page, select **set up a workflow yourself**.
+
+ :::image type="content" source="media/msdo-github-action/new-workflow.png" alt-text="Screenshot showing where to select the new workflow button.":::
+
+1. In the text box, enter a name for your workflow file. For example, `msdevopssec.yml`.
+
+ :::image type="content" source="media/msdo-github-action/devops.png" alt-text="Screenshot that shows you where to enter a name for your new workflow.":::
+
+1. Copy and paste the following [sample action workflow](https://github.com/microsoft/security-devops-action/blob/main/.github/workflows/sample-workflow-windows-latest.yml) into the Edit new file tab.
+
+ ```yml
+ name: MSDO windows-latest
+ on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+ workflow_dispatch:
+
+ jobs:
+ sample:
+
+ # MSDO runs on windows-latest and ubuntu-latest.
+    # macos-latest support coming soon
+ runs-on: windows-latest
+
+ steps:
+ - uses: actions/checkout@v2
+
+ - uses: actions/setup-dotnet@v1
+ with:
+ dotnet-version: |
+ 5.0.x
+ 6.0.x
+
+ # Run analyzers
+ - name: Run Microsoft Security DevOps Analysis
+ uses: microsoft/security-devops-action@preview
+ id: msdo
+
+ # Upload alerts to the Security tab
+ - name: Upload alerts to Security tab
+ uses: github/codeql-action/upload-sarif@v1
+ with:
+ sarif_file: ${{ steps.msdo.outputs.sarifFile }}
+ ```
+
+ For details on various input options, see [action.yml](https://github.com/microsoft/security-devops-action/blob/main/action.yml)
+
+1. Select **Start commit**
+
+ :::image type="content" source="media/msdo-github-action/start-commit.png" alt-text="Screenshot showing you where to select start commit.":::
+
+1. Select **Commit new file**.
+
+ :::image type="content" source="media/msdo-github-action/commit-new.png" alt-text="Screenshot showing you how to commit a new file.":::
+
+ The process can take up to one minute to complete.
+
+1. Select **Actions** and verify the new action is running.
+
+ :::image type="content" source="media/msdo-github-action/verify-actions.png" alt-text="Screenshot showing you where to navigate to, to see that your new action is running." lightbox="media/msdo-github-action/verify-actions.png":::
+
+## View Scan Results
+
+**To view your scan results**:
+
+1. Sign in to [GitHub](https://www.github.com).
+
+1. Navigate to **Security** > **Code scanning alerts** > **Tool**.
+
+1. From the dropdown menu, select **Filter by tool**.
+
+Code scanning findings will be filtered by specific MSDO tools in GitHub. These code scanning results are also pulled into Defender for Cloud recommendations.
+
+## Learn more
+
+- Learn about [GitHub actions for Azure](/azure/developer/github/github-actions).
+
+- Learn how to [deploy apps from GitHub to Azure](/azure/developer/github/deploy-to-azure).
+
+## Next steps
+Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
+
+Learn how to [connect your GitHub](quickstart-onboard-github.md) to Defender for Cloud.
+
+[Discover misconfigurations in Infrastructure as Code (IaC)](iac-vulnerabilities.md)
defender-for-cloud How To Manage Attack Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-attack-path.md
+
+ Title: Identify and remediate attack paths
+
+description: Learn how to manage your attack path analysis and build queries to locate vulnerabilities in your multicloud environment.
++ Last updated : 10/03/2022++
+# Identify and remediate attack paths
+
+Defender for Cloud's contextual security capabilities help security teams reduce the risk of impactful breaches. Defender for Cloud uses environment context to perform a risk assessment of your security issues, identifying the biggest security risks and distinguishing them from less risky issues.
+
+Attack path analysis helps you address the security issues that pose immediate threats and have the greatest potential of being exploited in your environment. Defender for Cloud analyzes which security issues are part of potential attack paths that attackers could use to breach your environment. It also highlights the security recommendations that you need to resolve to mitigate the attack path.
+
+You can check out the full list of [Attack path names and descriptions](attack-path-reference.md).
+
+## Availability
+
+| Aspect | Details |
+|--|--|
+| Release state | Preview |
+| Required plans | - Defender Cloud Security Posture Management (CSPM) enabled |
+| Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
+
+## Features of the attack path overview page
+
+The attack path page shows you an overview of all of your attack paths. You can also see your affected resources and a list of active attack paths.
++
+On this page, you can sort your attack paths by name, environment, path count, and risk categories.
+
+For each attack path, you can see all of its risk categories and any affected resources.
+
+The potential risk categories include Credentials exposure, Compute abuse, Data exposure, and Subscription/account takeover.
+
+## Investigate and remediate attack paths
+
+You can use Attack path analysis to locate the biggest risks to your environment and to remediate them.
+
+**To investigate and remediate an attack path**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Recommendations** > **Attack path**.
+
+ :::image type="content" source="media/how-to-manage-attack-path/attack-path-icon.png" alt-text="Screenshot that shows where the icon is on the recommendations page to get to attack paths.":::
+
+1. Select an attack path.
+
+ :::image type="content" source="media/how-to-manage-cloud-map/attack-path.png" alt-text="Screenshot that shows a sample of attack paths." lightbox="media/how-to-manage-cloud-map/attack-path.png" :::
+
+ > [!NOTE]
+ > An attack path may have more than one path that is at risk. The path count will tell you how many paths need to be remediated. If the attack path has more than one path, you will need to select each path within that attack path to remediate all risks.
+
+1. Select a node.
+
+ :::image type="content" source="media/how-to-manage-cloud-map/node-select.png" alt-text="Screenshot of the attack path screen that shows you where the nodes are located for selection." lightbox="media/how-to-manage-cloud-map/node-select.png":::
+
+1. Select **Insight** to view the associated insights for that node.
+
+ :::image type="content" source="media/how-to-manage-cloud-map/insights.png" alt-text="Screenshot of the insights tab for a specific node.":::
+
+1. Select **Recommendations**.
+
+ :::image type="content" source="media/how-to-manage-cloud-map/attack-path-recommendations.png" alt-text="Screenshot that shows you where to select recommendations on the screen." lightbox="media/how-to-manage-cloud-map/attack-path-recommendations.png":::
+
+1. Select a recommendation.
+
+1. Follow the remediation steps to remediate the recommendation.
+
+1. Select other nodes as necessary to view their insights and recommendations.
+
+Once an attack path is resolved, it can take up to 24 hours for it to be removed from the list.
+
+## View all recommendations with attack path
+
+Attack path analysis also gives you the ability to see all recommendations for an attack path, and to resolve them, without having to check each node individually.
+
+**To resolve all recommendations**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Recommendations** > **Attack paths**.
+
+1. Select an attack path.
+
+1. Select **Recommendations**.
+
+ :::image type="content" source="media/how-to-manage-cloud-map/bulk-recommendations.png" alt-text="Screenshot that shows where to select on the screen to see the attack paths full list of recommendations.":::
+
+1. Select a recommendation.
+
+1. Follow the remediation steps to remediate the recommendation.
+
+Once an attack path is resolved, it can take up to 24 hours for it to be removed from the list.
+
+## External attack surface management (EASM)
+
+An external attack surface is the entire area of an organization or system that is susceptible to an attack from an external source. An organization's attack surface is made up of all the points of access that an unauthorized person could use to enter their system. The larger your attack surface is, the harder it is to protect.
+
+While you are [investigating and remediating an attack path](#investigate-and-remediate-attack-paths), you can also view your EASM if you have enabled Defender EASM on your subscription.
+
+> [!NOTE]
+> To manage your EASM, you must [deploy the Defender EASM Azure resource](../external-attack-surface-management/deploying-the-defender-easm-azure-resource.md) to your subscription. Defender EASM has its own cost and is separate from Defender for Cloud. To learn more about Defender EASM pricing options, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-external-attack-surface-management/).
+
+**To manage your EASM**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Recommendations** > **Attack paths**.
+
+1. Select an attack path.
+
+1. Select a resource.
+
+1. Select **Insights**.
+
+1. Select **Open EASM**.
+
+ :::image type="content" source="media/how-to-manage-attack-path/open-easm.png" alt-text="Screenshot that shows you where on the screen you need to select open Defender EASM from." lightbox="media/how-to-manage-attack-path/easm-zoom.png":::
+
+1. Follow the [Using and managing discovery](../external-attack-surface-management/using-and-managing-discovery.md) instructions.
+
+## Next steps
+
+Learn how to [Build queries with Cloud Security Explorer](how-to-manage-cloud-security-explorer.md).
defender-for-cloud How To Manage Cloud Security Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md
+
+ Title: Build queries with Cloud Security Explorer
+
+description: Learn how to build queries in Cloud Security Explorer to find vulnerabilities that exist on your multicloud environment.
++ Last updated : 10/03/2022++
+# Cloud Security Explorer
+
+Defender for Cloud's contextual security capabilities help security teams reduce the risk of impactful breaches. Defender for Cloud uses environmental context to perform a risk assessment of your security issues, identifying the biggest security risks and distinguishing them from less risky issues.
+
+By using the Cloud Security Explorer, you can proactively identify security risks in your cloud environment by running graph-based queries on the Cloud Security Graph, which is Defender for Cloud's context engine. You can prioritize your security team's concerns, while taking your organization's specific context and conventions into account.
+
+With the Cloud Security Explorer, you can query all of your security issues and environment context, such as asset inventory, internet exposure, permissions, lateral movement between resources, and more.
+
+## Availability
+
+| Aspect | Details |
+|--|--|
+| Release state | Preview |
+| Required plans | - Defender Cloud Security Posture Management (CSPM) enabled |
+| Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
+
+## Build a query with the Cloud Security Explorer
+
+You can use the Cloud Security Explorer to build queries that can proactively hunt for security risks in your environments.
+
+**To build a query**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Cloud Security Explorer**.
+
+ :::image type="content" source="media/concept-cloud-map/cloud-security-explorer.png" alt-text="Screenshot of the Cloud Security Explorer page." lightbox="media/concept-cloud-map/cloud-security-explorer.png":::
+
+1. Select a resource from the drop-down menu.
+
+ :::image type="content" source="media/how-to-manage-cloud-security/select-resource.png" alt-text="Screenshot of the resource drop-down menu.":::
+
+1. Select **+** to add other filters to your query. For each filter that you select, you can add more subfilters as needed.
+
+1. Select **Search**.
+
+ :::image type="content" source="media/how-to-manage-cloud-security/search-query.png" alt-text="Screenshot that shows a full query and where to select on the screen to perform the search.":::
+
+The results appear at the bottom of the page.
+
+## Query templates
+
+You can select an existing query template from the bottom of the page by selecting **Open query**.
++
+You can alter any template to search for specific results by changing the query and selecting **Search**.
+
+## Query options
+
+The following information can be queried in the Cloud Security Explorer:
+
+- **Recommendations** - All Defender for Cloud security recommendations.
+
+- **Vulnerabilities** - All vulnerabilities found by Defender for Cloud.
+
+- **Insights** - Contextual data about your cloud resources.
+
+- **Connections** - Connections that are identified between cloud resources in your environment.
+
+You can review the [full list of recommendations, insights and connections](attack-path-reference.md).
+
+## Next steps
+
+[Create custom security initiatives and policies](custom-security-policies.md)
defender-for-cloud Iac Vulnerabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/iac-vulnerabilities.md
+
+ Title: Discover misconfigurations in Infrastructure as Code
+
+description: Learn how to use Defender for DevOps to discover misconfigurations in Infrastructure as Code (IaC)
Last updated : 09/20/2022++++
+# Discover misconfigurations in Infrastructure as Code (IaC)
+
+After you set up the Microsoft Security DevOps GitHub action or Azure DevOps extension, you can use the YAML configuration to run only a specific tool or a subset of the tools. For example, you can set up the action or extension to run only Infrastructure as Code (IaC) scanning, which can help reduce pipeline run time.
+
+## Prerequisites
+
+- [Configure Microsoft Security DevOps GitHub action](github-action.md).
+- [Configure the Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).
+
+## View the results of the IaC scan in GitHub
+
+1. Sign in to [GitHub](https://www.github.com).
+
+1. Navigate to **`your repository's home page`** > **.github/workflows** > **msdevopssec.yml**, the file that you created in the [prerequisites](github-action.md#configure-the-microsoft-security-devops-github-action-1).
+
+1. Select **Edit file**.
+
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/workflow-yaml.png" alt-text="Screenshot that shows where to find the edit button for the msdevopssec.yml file." lightbox="media/tutorial-iac-vulnerabilities/workflow-yaml.png":::
+
+1. Under the Run Analyzers section, add:
+
+   ```yml
+   with:
+     categories: 'IaC'
+   ```
+
+ > [!NOTE]
+ > Categories are case sensitive.
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/add-to-yaml.png" alt-text="Screenshot that shows the information that needs to be added to the yaml file.":::
+
+1. Select **Start commit**.
+
+1. Select **Commit changes**.
+
+    :::image type="content" source="media/tutorial-iac-vulnerabilities/commit-change.png" alt-text="Screenshot that shows where to select commit change on the GitHub page.":::
+
+1. (Optional) If you already have an IaC template in your repository, skip this step. Otherwise, follow this link to [install an IaC template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-basic-linux):
+
+ 1. Select `azuredeploy.json`.
+
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/deploy-json.png" alt-text="Screenshot that shows where the deploy.json file is located.":::
+
+ 1. Select **Raw**
+
+ 1. Copy all the information in the file.
+
+    ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "webAppName": {
+ "type": "string",
+ "defaultValue": "AzureLinuxApp",
+ "metadata": {
+ "description": "Base name of the resource such as web app name and app service plan "
+ },
+ "minLength": 2
+ },
+ "sku": {
+ "type": "string",
+ "defaultValue": "S1",
+ "metadata": {
+ "description": "The SKU of App Service Plan "
+ }
+ },
+ "linuxFxVersion": {
+ "type": "string",
+ "defaultValue": "php|7.4",
+ "metadata": {
+ "description": "The Runtime stack of current web app"
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ }
+ },
+ "variables": {
+ "webAppPortalName": "[concat(parameters('webAppName'), '-webapp')]",
+ "appServicePlanName": "[concat('AppServicePlan-', parameters('webAppName'))]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2020-06-01",
+ "name": "[variables('appServicePlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "[parameters('sku')]"
+ },
+ "kind": "linux",
+ "properties": {
+ "reserved": true
+ }
+ },
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2020-06-01",
+ "name": "[variables('webAppPortalName')]",
+ "location": "[parameters('location')]",
+ "kind": "app",
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/serverfarms', variables('appServicePlanName'))]"
+ ],
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('appServicePlanName'))]",
+ "siteConfig": {
+ "linuxFxVersion": "[parameters('linuxFxVersion')]"
+ }
+ }
+ }
+ ]
+ }
+ ```
+
+ 1. On GitHub, navigate to your repository.
+
+    1. Select **Add file** > **Create new file**.
+
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/create-file.png" alt-text="Screenshot that shows you where to navigate to, to create a new file." lightbox="media/tutorial-iac-vulnerabilities/create-file.png":::
+
+ 1. Enter a name for the file.
+
+ 1. Paste the copied information into the file.
+
+ 1. Select **Commit new file**.
+
+ The file is now added to your repository.
+
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/file-added.png" alt-text="Screenshot that shows that the new file you created has been added to your repository.":::
+
+1. Select **Actions**.
+
+1. Select the workflow to see the results.
+
+1. Navigate in the results to the scan results section.
+
+1. Navigate to **Security** > **Code scanning alerts** to view the results of the scan.
+
+## View the results of the IaC scan in Azure DevOps
+
+**To view the results of the IaC scan in Azure DevOps**:
+
+1. Sign in to [Azure DevOps](https://dev.azure.com/).
+
+1. Navigate to **Pipeline**.
+
+1. Locate the pipeline where the MSDO Azure DevOps extension is configured.
+
+1. Select **Edit**.
+
+1. Add the following lines to the YAML file:
+
+ ```yml
+ inputs:
+ categories: 'IaC'
+ ```
+
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/addition-to-yaml.png" alt-text="Screenshot showing you where to add this line to the YAML file.":::
+
+1. Select **Save**.
+
+1. Select **Save** to commit directly to the main branch, or create a new branch for this commit.
+
+1. Select **Pipeline** > **`Your created pipeline`** to view the results of the IaC scan.
+
+1. Select any result to see the details.
+
+## Remediate PowerShell-based rules
+
+The following information describes the PowerShell-based rules included through our integration with [PSRule for Azure](https://aka.ms/ps-rule-azure/rules). The tool only evaluates the rules under the [Security pillar](https://azure.github.io/PSRule.Rules.Azure/en/rules/module/#security) unless the `--include-non-security-rules` option is used.
+
+> [!NOTE]
+> Severity levels are scaled from 1 to 3, where 1 = High, 2 = Medium, and 3 = Low.
+
+### JSON-based rules
+
+#### TA-000001: Diagnostic logs in App Services should be enabled
+
+Audits the enabling of diagnostic logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised.
+
+**Recommendation**: To [enable diagnostic logging](../app-service/troubleshoot-diagnostic-logs.md), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json), add (or update) the *detailedErrorLoggingEnabled*, *httpLoggingEnabled*, and *requestTracingEnabled* properties, setting their values to `true`.
+
+**Severity level**: 2
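+
+As a minimal, hedged sketch, a *Microsoft.Web/sites/config* resource with these logging properties enabled might look like the following (the `webAppName` parameter and the API version are illustrative):
+
+```json
+{
+  "type": "Microsoft.Web/sites/config",
+  "apiVersion": "2020-06-01",
+  "name": "[concat(parameters('webAppName'), '/web')]",
+  "properties": {
+    "detailedErrorLoggingEnabled": true,
+    "httpLoggingEnabled": true,
+    "requestTracingEnabled": true
+  }
+}
+```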
+
+#### TA-000002: Remote debugging should be turned off for API Apps
+
+Remote debugging requires inbound ports to be opened on an API app. These ports become easy targets for compromise from various internet-based attacks. If you no longer need to use remote debugging, it should be turned off.
+
+**Recommendation**: To disable remote debugging, in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), remove the *remoteDebuggingEnabled* property or update its value to `false`.
+
+**Severity level**: 3
+
+#### TA-000003: FTPS only should be required in your API App
+
+Enable FTPS enforcement for enhanced security.
+
+**Recommendation**: To [enforce FTPS](/azure/app-service/deploy-ftp?tabs=portal#enforce-ftps), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled.
+
+**Severity level**: 1
+
+#### TA-000004: API App Should Only Be Accessible Over HTTPS
+
+API apps should require HTTPS to ensure connections are made to the expected server and data in transit is protected from network layer eavesdropping attacks.
+
+**Recommendation**: To [use HTTPS to ensure server/service authentication and protect data in transit from network layer eavesdropping attacks](/azure/app-service/configure-ssl-bindings#enforce-https), in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`.
+
+**Severity level**: 2
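+
+As a hedged sketch, an App Service site resource that enforces HTTPS might set the property as follows (names and API version mirror the sample template earlier in this article; other site properties are omitted):
+
+```json
+{
+  "type": "Microsoft.Web/sites",
+  "apiVersion": "2020-06-01",
+  "name": "[variables('webAppPortalName')]",
+  "location": "[parameters('location')]",
+  "properties": {
+    "httpsOnly": true
+  }
+}
+```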
+
+#### TA-000005: Latest TLS version should be used in your API App
+
+API apps should require the latest TLS version.
+
+**Recommendation**: To [enforce the latest TLS version](/azure/app-service/configure-ssl-bindings#enforce-tls-versions), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`.
+
+**Severity level**: 1
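+
+The *remoteDebuggingEnabled* (TA-000002), *ftpsState* (TA-000003), and *minTlsVersion* (TA-000005) settings all live on the same site config resource, so they can be set together. A minimal, hedged sketch with illustrative names and API version:
+
+```json
+{
+  "type": "Microsoft.Web/sites/config",
+  "apiVersion": "2020-06-01",
+  "name": "[concat(parameters('webAppName'), '/web')]",
+  "properties": {
+    "remoteDebuggingEnabled": false,
+    "ftpsState": "FtpsOnly",
+    "minTlsVersion": "1.2"
+  }
+}
+```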
+
+#### TA-000006: CORS should not allow every resource to access your API App
+
+Cross-Origin Resource Sharing (CORS) should not allow all domains to access your API app. Allow only required domains to interact with your API app.
+
+**Recommendation**: To allow only required domains to interact with your API app, in the [Microsoft.Web/sites/config resource cors settings object](/azure/templates/microsoft.web/sites/config-web?tabs=json#corssettings-object), add (or update) the *allowedOrigins* property, setting its value to an array of allowed origins. Ensure it is *not* set to "*" (an asterisk allows all origins).
+
+**Severity level**: 3
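+
+As a hedged sketch, the CORS settings object with an explicit allow list might look like the following (the origins shown are placeholders):
+
+```json
+{
+  "type": "Microsoft.Web/sites/config",
+  "apiVersion": "2020-06-01",
+  "name": "[concat(parameters('webAppName'), '/web')]",
+  "properties": {
+    "cors": {
+      "allowedOrigins": [
+        "https://contoso.com",
+        "https://www.contoso.com"
+      ]
+    }
+  }
+}
+```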
+
+#### TA-000007: Managed identity should be used in your API App
+
+For enhanced authentication security, use a managed identity. On Azure, managed identities eliminate the need for developers to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens.
+
+**Recommendation**: To [use Managed Identity](/azure/app-service/overview-managed-identity?tabs=dotnet), in the [Microsoft.Web/sites resource managed identity property](/azure/templates/microsoft.web/sites?tabs=json#ManagedServiceIdentity), add (or update) the *type* property, setting its value to `"SystemAssigned"` or `"UserAssigned"` and providing any necessary identifiers for the identity if required.
+
+**Severity level**: 2
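+
+As a hedged sketch, a site resource with a system-assigned managed identity (names and API version follow the sample template earlier in this article):
+
+```json
+{
+  "type": "Microsoft.Web/sites",
+  "apiVersion": "2020-06-01",
+  "name": "[variables('webAppPortalName')]",
+  "location": "[parameters('location')]",
+  "identity": {
+    "type": "SystemAssigned"
+  },
+  "properties": {
+    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('appServicePlanName'))]"
+  }
+}
+```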
+
+#### TA-000008: Remote debugging should be turned off for Function Apps
+
+Remote debugging requires inbound ports to be opened on a function app. These ports become easy targets for compromise from various internet-based attacks. If you no longer need to use remote debugging, it should be turned off.
+
+**Recommendation**: To disable remote debugging, in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), remove the *remoteDebuggingEnabled* property or update its value to `false`.
+
+**Severity level**: 3
+
+#### TA-000009: FTPS only should be required in your Function App
+
+Enable FTPS enforcement for enhanced security.
+
+**Recommendation**: To [enforce FTPS](/azure/app-service/deploy-ftp?tabs=portal#enforce-ftps), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled.
+
+**Severity level**: 1
+
+#### TA-000010: Function App Should Only Be Accessible Over HTTPS
+
+Function apps should require HTTPS to ensure connections are made to the expected server and data in transit is protected from network layer eavesdropping attacks.
+
+**Recommendation**: To [use HTTPS to ensure server/service authentication and protect data in transit from network layer eavesdropping attacks](/azure/app-service/configure-ssl-bindings#enforce-https), in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`.
+
+**Severity level**: 2
+
+#### TA-000011: Latest TLS version should be used in your Function App
+
+Function apps should require the latest TLS version.
+
+**Recommendation**: To [enforce the latest TLS version](/azure/app-service/configure-ssl-bindings#enforce-tls-versions), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`.
+
+**Severity level**: 1
+
+#### TA-000012: CORS should not allow every resource to access your Function Apps
+
+Cross-Origin Resource Sharing (CORS) should not allow all domains to access your function app. Allow only required domains to interact with your function app.
+
+**Recommendation**: To allow only required domains to interact with your function app, in the [Microsoft.Web/sites/config resource cors settings object](/azure/templates/microsoft.web/sites/config-web?tabs=json#corssettings-object), add (or update) the *allowedOrigins* property, setting its value to an array of allowed origins. Ensure it is *not* set to "*" (an asterisk allows all origins).
+
+**Severity level**: 3
+
+#### TA-000013: Managed identity should be used in your Function App
+
+For enhanced authentication security, use a managed identity. On Azure, managed identities eliminate the need for developers to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens.
+
+**Recommendation**: To [use Managed Identity](/azure/app-service/overview-managed-identity?tabs=dotnet), in the [Microsoft.Web/sites resource managed identity property](/azure/templates/microsoft.web/sites?tabs=json#ManagedServiceIdentity), add (or update) the *type* property, setting its value to `"SystemAssigned"` or `"UserAssigned"` and providing any necessary identifiers for the identity if required.
+
+**Severity level**: 2
+
+#### TA-000014: Remote debugging should be turned off for Web Applications
+
+Remote debugging requires inbound ports to be opened on a web application. These ports become easy targets for compromise from various internet-based attacks. If you no longer need to use remote debugging, it should be turned off.
+
+**Recommendation**: To disable remote debugging, in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), remove the *remoteDebuggingEnabled* property or update its value to `false`.
+
+**Severity level**: 3
+
+#### TA-000015: FTPS only should be required in your Web App
+
+Enable FTPS enforcement for enhanced security.
+
+**Recommendation**: To [enforce FTPS](/azure/app-service/deploy-ftp?tabs=portal#enforce-ftps), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled.
+
+**Severity level**: 1
+
+#### TA-000016: Web Application Should Only Be Accessible Over HTTPS
+
+Web apps should require HTTPS to ensure connections are made to the expected server and data in transit is protected from network layer eavesdropping attacks.
+
+**Recommendation**: To [use HTTPS to ensure server/service authentication and protect data in transit from network layer eavesdropping attacks](/azure/app-service/configure-ssl-bindings#enforce-https), in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`.
+
+**Severity level**: 2
+
+#### TA-000017: Latest TLS version should be used in your Web App
+
+Web apps should require the latest TLS version.
+
+**Recommendation**:
+To [enforce the latest TLS version](/azure/app-service/configure-ssl-bindings#enforce-tls-versions), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`.
+
+**Severity level**: 1
+
+#### TA-000018: CORS should not allow every resource to access your Web Applications
+
+Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Web application. Allow only required domains to interact with your web app.
+
+**Recommendation**: To allow only required domains to interact with your web app, in the [Microsoft.Web/sites/config resource cors settings object](/azure/templates/microsoft.web/sites/config-web?tabs=json#corssettings-object), add (or update) the *allowedOrigins* property, setting its value to an array of allowed origins. Ensure it is *not* set to "*" (an asterisk allows all origins).
+
+**Severity level**: 3
+
+#### TA-000019: Managed identity should be used in your Web App
+
+For enhanced authentication security, use a managed identity. On Azure, managed identities eliminate the need for developers to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens.
+
+**Recommendation**: To [use Managed Identity](/azure/app-service/overview-managed-identity?tabs=dotnet), in the [Microsoft.Web/sites resource managed identity property](/azure/templates/microsoft.web/sites?tabs=json#ManagedServiceIdentity), add (or update) the *type* property, setting its value to `"SystemAssigned"` or `"UserAssigned"` and providing any necessary identifiers for the identity if required.
+
+**Severity level**: 2
+
+#### TA-000020: Audit usage of custom RBAC roles
+
+Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling.
+
+**Recommendation**: [Use built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles](/azure/role-based-access-control/built-in-roles)
+
+**Severity level**: 3
+
+#### TA-000021: Automation account variables should be encrypted
+
+It is important to enable encryption of Automation account variable assets when storing sensitive data. This step can only be taken at creation time. If you have Automation account variables that store sensitive data and aren't already encrypted, you'll need to delete them and recreate them as encrypted variables. To apply encryption to Automation account variable assets, in Azure PowerShell, run [the following command](/powershell/module/az.automation/set-azautomationvariable?view=azps-5.4.0&viewFallbackFrom=azps-1.4.0): `Set-AzAutomationVariable -AutomationAccountName '{AutomationAccountName}' -Encrypted $true -Name '{VariableName}' -ResourceGroupName '{ResourceGroupName}' -Value '{Value}'`
+
+**Recommendation**: [Enable encryption of Automation account variable assets](/azure/automation/shared-resources/variables?tabs=azure-powershell)
+
+**Severity level**: 1
+
+#### TA-000022: Only secure connections to your Azure Cache for Redis should be enabled
+
+Enable only connections via SSL to Redis Cache. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking.
+
+**Recommendation**: To [enable only connections via SSL to Redis Cache](/security/benchmark/azure/baselines/azure-cache-for-redis-security-baseline?toc=/azure/azure-cache-for-redis/TOC.json#44-encrypt-all-sensitive-information-in-transit), in the [Microsoft.Cache/Redis resource properties](/azure/templates/microsoft.cache/redis?tabs=json#rediscreateproperties-object), update the value of the *enableNonSslPort* property from `true` to `false` or remove the property from the template as the default value is `false`.
+
+**Severity level**: 1
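+
+As a hedged sketch, an Azure Cache for Redis resource with the non-SSL port disabled might look like the following (the name, SKU, and API version are illustrative placeholders):
+
+```json
+{
+  "type": "Microsoft.Cache/Redis",
+  "apiVersion": "2020-06-01",
+  "name": "[parameters('redisCacheName')]",
+  "location": "[parameters('location')]",
+  "properties": {
+    "enableNonSslPort": false,
+    "sku": {
+      "name": "Standard",
+      "family": "C",
+      "capacity": 1
+    }
+  }
+}
+```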
+
+#### TA-000023: Authorized IP ranges should be defined on Kubernetes Services
+
+To ensure that only applications from allowed networks, machines, or subnets can access your cluster, restrict access to your Kubernetes Service Management API server. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster.
+
+**Recommendation**: [Restrict access by defining authorized IP ranges](/azure/aks/api-server-authorized-ip-ranges) or [set up your API servers as private clusters](/azure/aks/private-clusters)
+
+**Severity level**: 1
+
+#### TA-000024: Role-Based Access Control (RBAC) should be used on Kubernetes Services
+
+To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. To Use Role-Based Access Control (RBAC) you must recreate your Kubernetes Service cluster and enable RBAC during the creation process.
+
+**Recommendation**: [Enable RBAC in Kubernetes clusters](/azure/aks/operator-best-practices-identity#use-azure-rbac)
+
+**Severity level**: 1
+
+#### TA-000025: Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version
+
+Upgrade your Kubernetes service cluster to a later Kubernetes version to protect against known vulnerabilities in your current Kubernetes version. [Vulnerability CVE-2019-9946](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9946) has been patched in Kubernetes versions 1.11.9+, 1.12.7+, 1.13.5+, and 1.14.0+. Running on older versions could mean you are not using the latest security classes. Usage of such old classes and types can make your application vulnerable.
+
+**Recommendation**: To [upgrade Kubernetes service clusters](/azure/aks/upgrade-cluster), in the [Microsoft.ContainerService/managedClusters resource properties](/azure/templates/microsoft.containerservice/managedclusters?tabs=json#managedclusterproperties-object), update the *kubernetesVersion* property, setting its value to one of the following versions (making sure to specify the minor version number): 1.11.9+, 1.12.7+, 1.13.5+, or 1.14.0+.
+
+**Severity level**: 1
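+
+As a hedged, partial sketch (other required cluster properties, such as agent pool profiles, are omitted; the names, API version, and version number are only examples that satisfy the guidance above):
+
+```json
+{
+  "type": "Microsoft.ContainerService/managedClusters",
+  "apiVersion": "2021-07-01",
+  "name": "[parameters('clusterName')]",
+  "location": "[parameters('location')]",
+  "properties": {
+    "kubernetesVersion": "1.14.8",
+    "dnsPrefix": "[parameters('dnsPrefix')]"
+  }
+}
+```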
+
+#### TA-000026: Service Fabric clusters should only use Azure Active Directory for client authentication
+
+Service Fabric clusters should only use Azure Active Directory for client authentication. A Service Fabric cluster offers several entry points to its management functionality, including the web-based Service Fabric Explorer, Visual Studio and PowerShell. Access to the cluster must be controlled using AAD.
+
+**Recommendation**: [Enable AAD client authentication on your Service Fabric clusters](/azure/service-fabric/service-fabric-cluster-creation-setup-aad)
+
+**Severity level**: 1
+
+#### TA-000027: Transparent Data Encryption on SQL databases should be enabled
+
+Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements.
+
+**Recommendation**: To [enable transparent data encryption](/azure/azure-sql/database/transparent-data-encryption-tde-overview?tabs=azure-portal), in the [Microsoft.Sql/servers/databases/transparentDataEncryption resource properties](/azure/templates/microsoft.sql/servers/databases/transparentdataencryption?tabs=json), add (or update) the value of the *state* property to `enabled`.
+
+**Severity level**: 3
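+
+As a hedged sketch, a transparent data encryption child resource might look like the following (server and database names are placeholders, and the API version is illustrative):
+
+```json
+{
+  "type": "Microsoft.Sql/servers/databases/transparentDataEncryption",
+  "apiVersion": "2021-02-01-preview",
+  "name": "[concat(parameters('sqlServerName'), '/', parameters('databaseName'), '/current')]",
+  "properties": {
+    "state": "Enabled"
+  }
+}
+```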
+
+#### TA-000028: SQL servers with auditing to storage account destination should be configured with 90 days retention or higher
+
+Set the data retention for your SQL Server's auditing to storage account destination to at least 90 days.
+
+**Recommendation**: For incident investigation purposes, we recommend setting the data retention for your SQL Server's auditing to storage account destination to at least 90 days, in the [Microsoft.Sql/servers/auditingSettings resource properties](/azure/templates/microsoft.sql/2020-11-01-preview/servers/auditingsettings?tabs=json#serverblobauditingpolicyproperties-object), using the *retentionDays* property. Confirm that you are meeting the necessary retention rules for the regions in which you are operating. This is sometimes required for compliance with regulatory standards.
+
+**Severity level**: 3
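+
+As a hedged sketch, an auditing settings resource with 90-day retention might look like the following (the server name and storage parameters are placeholders):
+
+```json
+{
+  "type": "Microsoft.Sql/servers/auditingSettings",
+  "apiVersion": "2020-11-01-preview",
+  "name": "[concat(parameters('sqlServerName'), '/default')]",
+  "properties": {
+    "state": "Enabled",
+    "storageEndpoint": "[parameters('auditStorageEndpoint')]",
+    "storageAccountAccessKey": "[parameters('auditStorageAccountKey')]",
+    "retentionDays": 90
+  }
+}
+```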
+
+#### TA-000029: Azure API Management APIs should use encrypted protocols only
+
+Set the *protocols* property to include only HTTPS.
+
+**Recommendation**: To use encrypted protocols only, add (or update) the *protocols* property in the [Microsoft.ApiManagement/service/apis resource properties](/azure/templates/microsoft.apimanagement/service/apis?tabs=json), to only include HTTPS. Allowing any additional protocols (for example, HTTP, WS) is insecure.
+
+**Severity level**: 1
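+
+As a hedged sketch, an API Management API that allows HTTPS only might look like the following (the service name, API name, display name, path, and API version are placeholders):
+
+```json
+{
+  "type": "Microsoft.ApiManagement/service/apis",
+  "apiVersion": "2021-08-01",
+  "name": "[concat(parameters('apimServiceName'), '/', parameters('apiName'))]",
+  "properties": {
+    "displayName": "Contoso API",
+    "path": "contoso",
+    "protocols": [
+      "https"
+    ]
+  }
+}
+```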
+
+## Learn more
+
+- Learn more about the [Template Best Practice Analyzer](https://github.com/Azure/template-analyzer).
+
+In this tutorial, you learned how to configure the Microsoft Security DevOps GitHub action and Azure DevOps extension to scan only for Infrastructure as Code (IaC) misconfigurations.
+
+## Next steps
+
+Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
+
+Learn how to [connect your GitHub](quickstart-onboard-github.md) to Defender for Cloud.
+
+Learn how to [connect your Azure DevOps](quickstart-onboard-devops.md) to Defender for Cloud.
defender-for-cloud Information Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/information-protection.md
Title: Prioritize security actions by data sensitivity - Microsoft Defender for Cloud description: Use Microsoft Purview's data sensitivity classifications in Microsoft Defender for Cloud++ + Last updated 06/29/2022
-# Prioritize security actions by data sensitivity
+# Prioritize security actions by data sensitivity (Preview)
[Microsoft Purview](../purview/overview.md), Microsoft's data governance service, provides rich insights into the *sensitivity of your data*. With automated data discovery, sensitive data classification, and end-to-end data lineage, Microsoft Purview helps organizations manage and govern data in hybrid and multicloud environments.
-Microsoft Defender for Cloud customers using Microsoft Purview can benefit from an additional vital layer of metadata in alerts and recommendations: information about any potentially sensitive data involved. This knowledge helps solve the triage challenge and ensures security professionals can focus their attention on threats to sensitive data.
+Microsoft Defender for Cloud customers using Microsoft Purview can benefit from another important layer of metadata in alerts and recommendations: information about any potentially sensitive data involved. This knowledge helps solve the triage challenge and ensures security professionals can focus their attention on threats to sensitive data.
This page explains the integration of Microsoft Purview's data sensitivity classification labels within Defender for Cloud.
You can learn more by watching this video from the Defender for Cloud in the Fie
|Aspect|Details| |-|:-| |Release state:|Preview.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
-|Pricing:|You'll need a Microsoft Purview account to create the data sensitivity classifications and run the scans. Viewing the scan results and using the output is free for Defender for Cloud users|
+|Pricing:|You'll need a Microsoft Purview account to create the data sensitivity classifications and run the scans. The integration between Purview and Microsoft Defender for Cloud doesn't incur extra costs, but the data is shown in Microsoft Defender for Cloud only for enabled plans.|
|Required roles and permissions:|**Security admin** and **Security contributor**| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet (**Partial**: Subset of alerts and vulnerability assessment for SQL servers. Behavioral threat protections aren't available.)|
However, where possible, you'd want to focus the security team's efforts on risk
Microsoft Purview's data sensitivity classifications and data sensitivity labels provide that knowledge. ## Discover resources with sensitive data
-To provide the vital information about discovered sensitive data, and help ensure you have that information when you need it, Defender for Cloud displays information from Microsoft Purview in multiple locations.
+To provide information about discovered sensitive data and help ensure you have that information when you need it, Defender for Cloud displays information from Microsoft Purview in multiple locations.
> [!TIP] > If a resource is scanned by multiple Microsoft Purview accounts, the information shown in Defender for Cloud relates to the most recent scan. ### Alerts and recommendations pages
-When you're reviewing a recommendation or investigating an alert, the information about any potentially sensitive data involved is included on the page.
-
-This vital additional layer of metadata helps solve the triage challenge and ensures your security team can focus its attention on the threats to sensitive data.
+When you're reviewing a recommendation or investigating an alert, the information about any potentially sensitive data involved is included on the page. You can also filter the list of alerts by **Data sensitivity classifications** and **Data sensitivity labels** to help you focus on the alerts that relate to sensitive data.
+This vital layer of metadata helps solve the triage challenge and ensures your security team can focus its attention on the threats to sensitive data.
### Inventory filters
When you select a single resource - whether from an alert, recommendation, or th
The resource health page provides a snapshot view of the overall health of a single resource. You can review detailed information about the resource and all recommendations that apply to that resource. Also, if you're using any of the Microsoft Defender plans, you can see outstanding security alerts for that specific resource too.
-When reviewing the health of a specific resource, you'll see the Microsoft Purview information on this page and can use it determine what data has been discovered on this resource alongside the Microsoft Purview account used to scan the resource.
+When reviewing the health of a specific resource, you'll see the Microsoft Purview information on this page and can use it to determine what data has been discovered on this resource. To explore more details and see the list of sensitive files, select the link to launch Microsoft Purview.
:::image type="content" source="./media/information-protection/information-protection-resource-health.png" alt-text="Screenshot of Defender for Cloud's resource health page showing information protection labels and classifications from Microsoft Purview." lightbox="./media/information-protection/information-protection-resource-health.png":::
-### Overview tile
-The dedicated **Information protection** tile in Defender for Cloud's [overview dashboard](overview-page.md) shows Microsoft Purview's coverage. It also shows the resource types with the most sensitive data discovered.
+> [!NOTE]
+> - If the data in the resource is updated and the update affects the resource classifications and labels, Defender for Cloud reflects those changes only after Microsoft Purview rescans the resource.
+> - If the Microsoft Purview account is deleted, the resource classifications and labels are still available in Defender for Cloud.
+> - Defender for Cloud updates the resource classifications and labels within 24 hours of the Purview scan.
+
+## Attack path
+Some of the attack paths consider resources that contain sensitive data, such as "AWS S3 Bucket with sensitive data is publicly accessible", based on Purview scan results.
-A graph shows the number of recommendations and alerts by classified resource types. The tile also includes a link to Microsoft Purview to scan additional resources. Select the tile to see classified resources in Defender for Cloud's asset inventory page.
+## Security explorer
+The Cloud Map shows resources with the "contains sensitive data" insight, based on Purview scan results. You can use resources with this label to explore the map.
+- To see the classification and labels of the resource, go to the [inventory](asset-inventory.md).
+- To see the list of classified files in the resource, go to the [Microsoft Purview compliance portal](../purview/overview.md).
## Learn more
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
Title: Kubernetes data plane hardening description: Learn how to use Microsoft Defender for Cloud's set of Kubernetes data plane hardening security recommendations + Last updated 03/08/2022
When you enable Microsoft Defender for Containers, Azure Kubernetes Service clus
## Configure Defender for Containers components
-If you disabled any of the default protections when you enabled Microsoft Defender for Containers, you can change the configurations and reenable them via auto provisioning.
+If you disabled any of the default protections when you enabled Microsoft Defender for Containers, you can change the configurations and reenable them.
**To configure the Defender for Containers components**:
If you disabled any of the default protections when you enabled Microsoft Defend
1. Select the relevant subscription.
-1. From the left side tool bar, select **Auto provisioning**.
+1. In the Monitoring coverage column of the Defender for Containers plan, select **Settings**.
1. Ensure that Microsoft Defender for Containers components (preview) is toggled to On.
defender-for-cloud Monitoring Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/monitoring-components.md
+
+ Title: Overview of the extensions that collect data from your workloads
+description: Learn about the extensions that collect data from your workloads to let you protect your workloads with Microsoft Defender for Cloud.
++++ Last updated : 09/12/2022++
+# How does Defender for Cloud collect data?
+
+Defender for Cloud collects data from your Azure virtual machines (VMs), virtual machine scale sets, IaaS containers, and non-Azure (including on-premises) machines to monitor for security vulnerabilities and threats. Some Defender plans require monitoring components to collect data from your workloads.
+
+Data collection is required to provide visibility into missing updates, misconfigured OS security settings, endpoint protection status, and health and threat protection. Data collection is only needed for compute resources such as VMs, virtual machine scale sets, IaaS containers, and non-Azure computers.
+
+You can benefit from Microsoft Defender for Cloud even if you don't provision agents. However, you'll have limited security and the capabilities listed above aren't supported.
+
+Data is collected using:
+
+- [Azure Monitor Agent](auto-deploy-azure-monitoring-agent.md) (AMA)
+- [Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) (MDE)
+- [Log Analytics agent](working-with-log-analytics-agent.md)
+- **Security components**, such as the [Azure Policy Add-on for Kubernetes](../governance/policy/concepts/policy-for-kubernetes.md)
+
+## Why use Defender for Cloud to deploy monitoring components?
+
+The security of your workloads depends on the data that the monitoring components collect. The components ensure security coverage for all supported resources.
+
+To save you the effort of manually installing the extensions, Defender for Cloud reduces management overhead by installing all required extensions on existing and new machines. Defender for Cloud assigns the appropriate **Deploy if not exists** policy to the workloads in the subscription. This policy type ensures the extension is provisioned on all existing and future resources of that type.
+
+> [!TIP]
+> Learn more about Azure Policy effects, including **Deploy if not exists**, in [Understand Azure Policy effects](../governance/policy/concepts/effects.md).
+
+## What plans use monitoring components?
+
+These plans use monitoring components to collect data:
+
+- Defender for Servers
+ - [Azure Arc agent](../azure-arc/servers/manage-vm-extensions.md) (For multicloud and on-premises servers)
+ - [Microsoft Defender for Endpoint](#microsoft-defender-for-endpoint)
+ - Vulnerability assessment
+ - [Azure Monitor Agent](#azure-monitor-agent-ama) or [Log Analytics agent](#log-analytics-agent)
+- [Defender for SQL servers on machines](defender-for-sql-on-machines-vulnerability-assessment.md)
+ - [Azure Arc agent](../azure-arc/servers/manage-vm-extensions.md) (For multicloud and on-premises servers)
+ - [Azure Monitor Agent](#azure-monitor-agent-ama) or [Log Analytics agent](#log-analytics-agent)
+ - Automatic SQL server discovery and registration
+- Defender for Containers
+ - [Azure Arc agent](../azure-arc/servers/manage-vm-extensions.md) (For multicloud and on-premises servers)
+ - [Defender profile, Azure Policy Extension, Kubernetes audit log data](defender-for-containers-introduction.md)
+
+## Availability of extensions
++
+### Azure Monitor Agent (AMA)
++
+Learn more about [using the Azure Monitor Agent with Defender for Cloud](auto-deploy-azure-monitoring-agent.md).
+
+### Log Analytics agent
+
+| Aspect | Azure virtual machines | Azure Arc-enabled machines |
+||:|:--|
+| Release state: | Generally available (GA) | Preview |
+| Relevant Defender plan: | [Microsoft Defender for Servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) | [Microsoft Defender for Servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) |
+| Required roles and permissions (subscription-level): | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Owner](../role-based-access-control/built-in-roles.md#owner) |
+| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines |
+| Policy-based: | :::image type="icon" source="./media/icons/no-icon.png"::: No | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet |
+
+<a name="preexisting"></a>
+
+#### Deploying the Log Analytics agent in cases of a pre-existing agent installation
+
+The following use cases explain how deployment of the Log Analytics agent works in cases when there's already an agent or extension installed.
+
+- **Log Analytics agent is installed on the machine, but not as an extension (Direct agent)** - If the Log Analytics agent is installed directly on the VM (not as an Azure extension), Defender for Cloud will install the Log Analytics agent extension and might upgrade the Log Analytics agent to the latest version. The installed agent will continue to report to its already configured workspaces and to the workspace configured in Defender for Cloud. (Multi-homing is supported on Windows machines.)
+
+   If the Log Analytics agent is configured with a user workspace and not Defender for Cloud's default workspace, you'll need to install the "Security" or "SecurityCenterFree" solution on it for Defender for Cloud to start processing events from VMs and computers reporting to that workspace.
+
+ For Linux machines, Agent multi-homing isn't yet supported. If an existing agent installation is detected, the Log Analytics agent won't be deployed.
+
+   For existing machines on subscriptions onboarded to Defender for Cloud before 17 March 2019, when an existing agent is detected, the Log Analytics agent extension won't be installed and the machine won't be affected. For these machines, see the "Resolve monitoring agent health issues on your machines" recommendation to resolve the agent installation issues.
+
+- **System Center Operations Manager agent is installed on the machine** - Defender for Cloud will install the Log Analytics agent extension side by side with the existing Operations Manager agent. The existing Operations Manager agent will continue to report to the Operations Manager server normally. The Operations Manager agent and Log Analytics agent share common run-time libraries, which will be updated to the latest version during this process.
+
+- **A pre-existing VM extension is present**:
+ - When the Monitoring Agent is installed as an extension, the extension configuration allows reporting to only a single workspace. Defender for Cloud doesn't override existing connections to user workspaces. Defender for Cloud will store security data from the VM in the workspace already connected, if the "Security" or "SecurityCenterFree" solution has been installed on it. Defender for Cloud may upgrade the extension version to the latest version in this process.
+  - To see which workspace the existing extension is sending data to, run the test to [Validate connectivity with Microsoft Defender for Cloud](/archive/blogs/yuridiogenes/validating-connectivity-with-azure-security-center). Alternatively, you can open Log Analytics workspaces, select a workspace, select the VM, and look at the Log Analytics agent connection.
+ - If you have an environment where the Log Analytics agent is installed on client workstations and reporting to an existing Log Analytics workspace, review the list of [operating systems supported by Microsoft Defender for Cloud](security-center-os-coverage.md) to make sure your operating system is supported. For more information, see [Existing log analytics customers](./faq-azure-monitor-logs.yml).
+
+Learn more about [working with the Log Analytics agent](working-with-log-analytics-agent.md).
+
+### Microsoft Defender for Endpoint
+
+| Aspect | Linux | Windows |
+||:--|:-|
+| Release state: | Generally available (GA) | Generally available (GA) |
+| Relevant Defender plan: | [Microsoft Defender for Servers](defender-for-servers-introduction.md) | [Microsoft Defender for Servers](defender-for-servers-introduction.md) |
+| Required roles and permissions (subscription-level): | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) |
+| Supported destinations: | :::image type="icon" source="./medi), [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure VMs running Windows 10 |
+| Policy-based: | :::image type="icon" source="./media/icons/no-icon.png"::: No | :::image type="icon" source="./media/icons/no-icon.png"::: No |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet |
+
+Learn more about [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint).
+
+### Vulnerability assessment
+
+| Aspect | Details |
+||:--|
+| Release state: | Generally available (GA) |
+| Relevant Defender plan: | [Microsoft Defender for Servers](defender-for-servers-introduction.md) |
+| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) |
+| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines |
+| Policy-based: | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet |
+
+### Guest Configuration
+
+| Aspect | Details |
+||:--|
+| Release state: | Preview |
+| Relevant Defender plan: | No plan required |
+| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) |
+| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet |
+
+Learn more about Azure's [Guest Configuration extension](../governance/machine-configuration/overview.md).
+
+### Defender for Containers extensions
+
+This table shows the availability details for the components that are required by the protections offered by [Microsoft Defender for Containers](defender-for-containers-introduction.md).
+
+By default, the required extensions are enabled when you enable Defender for Containers from the Azure portal.
+
+| Aspect | Azure Kubernetes Service clusters | Azure Arc-enabled Kubernetes clusters |
+||-||
+| Release state: | • Defender profile: GA<br> • Azure Policy add-on: Generally available (GA) | • Defender extension: Preview<br> • Azure Policy extension: Preview |
+| Relevant Defender plan: | [Microsoft Defender for Containers](defender-for-containers-introduction.md) | [Microsoft Defender for Containers](defender-for-containers-introduction.md) |
+| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) | [Owner](../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) |
+| Supported destinations: | The AKS Defender profile only supports [AKS clusters that have RBAC enabled](../aks/concepts-identity.md#kubernetes-rbac). | [See Kubernetes distributions supported for Arc-enabled Kubernetes](supported-machines-endpoint-solutions-clouds-containers.md?tabs=azure-aks#kubernetes-distributions-and-configurations) |
+| Policy-based: | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
+| Clouds: | **Defender profile**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet<br>**Azure Policy add-on**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet|**Defender extension**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet<br>**Azure Policy extension for Azure Arc**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet|
+
+## Troubleshooting
+
+- To identify monitoring agent network requirements, see [Troubleshooting monitoring agent network requirements](troubleshooting-guide.md#mon-network-req).
+- To identify manual onboarding issues, see [How to troubleshoot Operations Management Suite onboarding issues](https://support.microsoft.com/help/3126513/how-to-troubleshoot-operations-management-suite-onboarding-issues).
+
+## Next steps
+
+This page explained what monitoring components are and how to enable them.
+
+Learn more about:
+
+- [Setting up email notifications](configure-email-notifications.md) for security alerts
+- Protecting workloads with [enhanced security features](enhanced-security-features-overview.md)
defender-for-cloud Os Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/os-coverage.md
Title: Platforms supported by Microsoft Defender for Cloud description: This document provides a list of platforms supported by Microsoft Defender for Cloud. + Last updated 11/09/2021 # Supported platforms
Defender for Cloud depends on the [Log Analytics agent](../azure-monitor/agents/
* [Log Analytics agent for Windows supported operating systems](../azure-monitor/agents/agents-overview.md#supported-operating-systems) * [Log Analytics agent for Linux supported operating systems](../azure-monitor/agents/agents-overview.md#supported-operating-systems)
-Also ensure your Log Analytics agent is [properly configured to send data to Defender for Cloud](enable-data-collection.md#manual-agent)
+Also ensure your Log Analytics agent is [properly configured to send data to Defender for Cloud](working-with-log-analytics-agent.md#manual-agent)
To learn more about the specific Defender for Cloud features available on Windows and Linux, see [Feature coverage for machines](supported-machines-endpoint-solutions-clouds-containers.md).
Protection for VMs residing in Azure Stack Hub is also supported. For more infor
## Next steps -- Learn how [Defender for Cloud collects data using the Log Analytics Agent](enable-data-collection.md).
+- Learn how [Defender for Cloud collects data using the Log Analytics Agent](monitoring-components.md#log-analytics-agent).
- Learn how [Defender for Cloud manages and safeguards data](data-security.md). - Learn how to [plan and understand the design considerations to adopt Microsoft Defender for Cloud](security-center-planning-and-operations-guide.md).
defender-for-cloud Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/overview-page.md
Title: Microsoft Defender for Cloud's main dashboard or 'overview' page description: Learn about the features of the Defender for Cloud overview page Previously updated : 07/20/2022 Last updated : 09/20/2022 + # Microsoft Defender for Cloud's overview page
You can select any element on the page to get more detailed information.
## Features of the overview page ### Metrics
In the center of the page are the **feature tiles**, each linking to a high prof
- **Security posture** - Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues. It then aggregates all the findings into a single score so that you can tell, at a glance, your current security situation: the higher the score, the lower the identified risk level. [Learn more](secure-score-security-controls.md). - **Workload protections** - This is the cloud workload protection platform (CWPP) integrated within Defender for Cloud for advanced, intelligent protection of your workloads running on Azure, on-premises machines, or other cloud providers. For each resource type, there's a corresponding Microsoft Defender plan. The tile shows the coverage of your connected resources (for the currently selected subscriptions) and the recent alerts, color-coded by severity. Learn more about [the enhanced security features](enhanced-security-features-overview.md). - **Regulatory compliance** - Defender for Cloud provides insights into your compliance posture based on continuous assessments of your Azure environment. Defender for Cloud analyzes risk factors in your environment according to security best practices. These assessments are mapped to compliance controls from a supported set of standards. [Learn more](regulatory-compliance-dashboard.md).-- **Firewall Manager** - This tile shows the status of your hubs and networks from [Azure Firewall Manager](../firewall-manager/overview.md). - **Inventory** - The asset inventory page of Microsoft Defender for Cloud provides a single page for viewing the security posture of the resources you've connected to Microsoft Defender for Cloud. All resources with unresolved security recommendations are shown in the inventory. If you've enabled the integration with Microsoft Defender for Endpoint and enabled Microsoft Defender for Servers, you'll also have access to a software inventory. The tile on the overview page shows you at a glance the total healthy and unhealthy resources (for the currently selected subscriptions). [Learn more](asset-inventory.md).-- **Information protection** - A graph on this tile shows the resource types that have been scanned by [Microsoft Purview](../purview/overview.md), found to contain sensitive data, and have outstanding recommendations and alerts. Follow the **scan** link to access the Microsoft Purview accounts and configure new scans, or select any other part of the tile to open the [asset inventory](asset-inventory.md) and view your resources according to your Microsoft Purview data sensitivity classifications. [Learn more](information-protection.md). ### Insights
defender-for-cloud Partner Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/partner-integration.md
Title: Integrate security solutions in Microsoft Defender for Cloud description: Learn about how Microsoft Defender for Cloud integrates with partners to enhance the overall security of your Azure resources. + Last updated 07/14/2022 # Integrate security solutions in Microsoft Defender for Cloud
Learn more about the integration of [vulnerability scanning tools from Qualys](d
Defender for Cloud also offers vulnerability analysis for your: -- SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md)
+- SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
+- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-acr.md)
+- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-ecr.md)
## How security solutions are integrated Azure security solutions that are deployed from Defender for Cloud are automatically connected. You can also connect other security data sources, including computers running on-premises or in other clouds.
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md
Title: Permissions in Microsoft Defender for Cloud description: This article explains how Microsoft Defender for Cloud uses role-based access control to assign permissions to users and identify the permitted actions for each role. + Last updated 05/22/2022
The following table displays roles and allowed actions in Defender for Cloud.
| View alerts and recommendations | ✔ | ✔ | ✔ | ✔ | ✔ |
-For **auto provisioning**, the specific role required depends on the extension you're deploying. For full details, check the tab for the specific extension in the [availability table on the auto provisioning quick start page](enable-data-collection.md#availability).
+The specific role required to deploy monitoring components depends on the extension you're deploying. Learn more about [monitoring components](monitoring-components.md).
> [!NOTE] > We recommend that you assign the least permissive role needed for users to complete their tasks. For example, assign the Reader role to users who only need to view information about the security health of a resource but not take action, such as applying recommendations or editing policies.
defender-for-cloud Plan Multicloud Security Automate Connector Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-automate-connector-deployment.md
+
+ Title: Defender for Cloud planning multicloud security automating connector deployment
+description: Learn about automating connector deployment when planning multicloud deployment with Microsoft Defender for Cloud.
++ Last updated : 10/03/2022+
+# Automate connector deployment
+
+This article is part of a series to guide you in designing a solution for cloud security posture management (CSPM) and cloud workload protection (CWP) across multicloud resources with Microsoft Defender for Cloud.
+
+## Goal
+
+Connect AWS accounts and/or GCP projects programmatically.
+
+## Get started
+
+As an alternative to creating connectors in the Defender for Cloud portal, you can create them programmatically by using the Defender for Cloud REST API.
+Review the [Security Connectors - REST API](/rest/api/defenderforcloud/security-connectors).
+
+- When you use the REST API to create the connector, you also need the CloudFormation template or Cloud Shell script, depending on the environment that you're onboarding to Defender for Cloud.
+- The easiest way to get this script is to download it from the Defender for Cloud portal.
+- The template/script changes depending on the plans you're enabling.
+
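+For illustration, the following is a hedged PowerShell sketch that calls the Security Connectors REST API through `Invoke-AzRestMethod`. The payload shape, offering name, and api-version shown are assumptions for an AWS connector with the CSPM offering - confirm them against the Security Connectors REST API reference before use, and replace the placeholder values.
+
+```powershell
+# Hedged sketch: create an AWS connector with the Security Connectors REST API.
+# Subscription, resource group, account ID, role ARN, and api-version are placeholders/assumptions.
+$body = @{
+    location   = 'westeurope'
+    properties = @{
+        environmentName     = 'AWS'
+        hierarchyIdentifier = '<aws-account-id>'
+        environmentData     = @{ environmentType = 'AwsAccount' }
+        offerings           = @(
+            @{
+                offeringType          = 'CspmMonitorAws'
+                nativeCloudConnection = @{ cloudRoleArn = 'arn:aws:iam::<aws-account-id>:role/CspmMonitorAws' }
+            }
+        )
+    }
+} | ConvertTo-Json -Depth 10
+
+Invoke-AzRestMethod -Method PUT -Payload $body `
+    -Path '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Security/securityConnectors/<connector-name>?api-version=2022-05-01-preview'
+```
+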
+## Next steps
+
+In this article, you've learned that as an alternative to creating connectors in the Defender for Cloud portal, you can create them programmatically by using the Defender for Cloud REST API. For more information, see [other resources](plan-multicloud-security-other-resources.md).
defender-for-cloud Plan Multicloud Security Define Adoption Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-define-adoption-strategy.md
+
+ Title: Defender for Cloud Planning multicloud security defining adoption strategy lifecycle strategy guidance
+description: Learn about defining broad requirements for business needs and ownership in multicloud environment with Microsoft Defender for Cloud.
++ Last updated : 10/03/2022+
+# Define an adoption strategy
+
+This article is part of a series to provide guidance as you design a cloud security posture management (CSPM) and cloud workload protection platform (CWPP) solution across multicloud resources with Microsoft Defender for Cloud.
+
+## Goal
+
+Consider your high-level business needs, the resource and process ownership model for your organization, and an iteration strategy as you continuously add resources to your solution.
+
+## Get started
+
+Think about your broad requirements:
+
+- **Determine business needs**. Keep first steps simple, and then iterate to accommodate future change. Decide your goals for a successful adoption, and then the metrics you'll use to define success.
+- **Determine ownership**. Figure out where multicloud capabilities fall under your teams. Review the [determine ownership requirements](plan-multicloud-security-determine-ownership-requirements.md#determine-ownership-requirements) and [determine access control requirements](plan-multicloud-security-determine-access-control-requirements.md#determine-access-control-requirements) articles to answer these questions:
+
+ - How will your organization use Defender for Cloud as a multicloud solution?
+ - What [cloud security posture management (CSPM)](plan-multicloud-security-determine-multicloud-dependencies.md) and [cloud workload protection (CWP)](plan-multicloud-security-determine-multicloud-dependencies.md) capabilities do you want to adopt?
+ - Which teams will own the different parts of Defender for Cloud?
+ - What is your process for responding to security alerts and recommendations? Remember to consider Defender for Cloud's governance feature when making decisions about recommendation processes.
+ - How will security teams collaborate to prevent friction during remediation?
+
+- **Plan a lifecycle strategy.** As new multicloud resources onboard into Defender for Cloud, you need a strategic plan in place for that onboarding. Remember that you can use [auto-provisioning](/azure/defender-for-cloud/enable-data-collection?tabs=autoprovision-defendpoint) for easier agent deployment.
+
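+For the lifecycle bullet above, a minimal sketch of turning on auto-provisioning with the Az.Security PowerShell module follows; the subscription ID is a placeholder.
+
+```powershell
+# Minimal sketch: enable auto-provisioning of the monitoring agent for the selected subscription.
+# Assumes Az.Security is installed and you're signed in with Connect-AzAccount.
+Set-AzContext -SubscriptionId '<subscription-id>'
+Set-AzSecurityAutoProvisioningSetting -Name 'default' -EnableAutoProvision
+
+# Verify the current setting.
+Get-AzSecurityAutoProvisioningSetting -Name 'default'
+```
+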
+## Next steps
+
+In this article, you've learned how to determine your adoption strategy when designing a multicloud security solution. Continue with the next step to [determine data residency requirements](plan-multicloud-security-determine-data-residency-requirements.md).
defender-for-cloud Plan Multicloud Security Determine Access Control Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-access-control-requirements.md
+
+ Title: Defender for Cloud Planning multicloud security determine access control requirements guidance
+description: Learn about determining access control requirements to meet business goals in multicloud environment with Microsoft Defender for Cloud.
++ Last updated : 10/03/2022+
+# Determine access control requirements
+
+This article is part of a series to provide guidance as you design a cloud security posture management (CSPM) and cloud workload protection (CWP) solution across multicloud resources with Microsoft Defender for Cloud.
+
+## Goal
+
+Figure out what permissions and access controls you need on your multicloud deployment.
+
+## Get started
+
+As part of your multicloud solution design, you should review access requirements for multicloud resources that will be available to users. As you plan, answer the following questions, take notes, and be clear about the reasons for your answers.
+
+- Who should have access to recommendations and alerts for multicloud resources?
+- Are your multicloud resources and environments owned by different teams? If so, does each team need the same level of access?
+- Do you need to limit access to specific resources for specific users and groups? If so, how can you limit access for Azure, AWS, and GCP resources?
+- Does your organization need identity and access management (IAM) permissions to be inherited at the resource group level?
+- Do you need to determine any IAM requirements for people who:
+ - Implement just-in-time (JIT) VM access to reduce the attack surface of VMs and AWS EC2 instances?
+ - Define Adaptive Application Controls (access defined by application owner)?
+ - Run security operations?
+
+With clear answers available, you can figure out your Defender for Cloud access requirements. Other things to consider:
+
+- Defender for Cloud multicloud capabilities support inheritance of IAM permissions.
+- Whatever permissions a user has at the resource group level where the AWS/GCP connectors reside are inherited automatically for multicloud recommendations and security alerts (a sketch for reviewing these assignments follows this list).
+
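+As mentioned above, permissions granted on the resource group that hosts the AWS/GCP connectors flow through to multicloud recommendations and alerts. A minimal sketch for reviewing those assignments (assuming Az.Resources; the resource group name is a placeholder):
+
+```powershell
+# Minimal sketch: list role assignments at the resource group that hosts the multicloud connectors.
+Get-AzRoleAssignment -ResourceGroupName 'rg-defender-connectors' |
+    Select-Object DisplayName, SignInName, RoleDefinitionName, Scope
+```
+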
+## Next steps
+
+In this article, you've learned how to determine access control requirements needs when designing a multicloud security solution. Continue with the next step to [determine multicloud dependencies](plan-multicloud-security-determine-multicloud-dependencies.md).
defender-for-cloud Plan Multicloud Security Determine Business Needs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-business-needs.md
+
+ Title: Defender for Cloud Planning multicloud security determining business needs guidance
+description: Learn about determining business needs to meet business goals in multicloud environment with Microsoft Defender for Cloud.
++ Last updated : 10/03/2022++
+# Determine business needs
+
+This article is part of a series to provide guidance as you design a cloud security posture management (CSPM) and cloud workload protection platform (CWPP) solution across multicloud resources with Microsoft Defender for Cloud.
+
+## Goal
+
+Identify how Defender for Cloud's multicloud capabilities can help your organization to meet its business goals and protect AWS/GCP resources.
+
+## Get started
+
+The first step in designing a multicloud security solution is to determine your business needs. Every company, even if in the same industry, has different requirements. Best practices can provide general guidance, but specific requirements are determined by your unique business needs.
+As you start defining requirements, answer these questions:
+
+- Does your company need to assess and strengthen the security configuration of its cloud resources?
+- Does your company want to manage the security posture of multicloud resources from a single point (single pane of glass)?
+- What boundaries do you want to put in place to ensure that your entire organization is covered, and no areas are missed?
+- Does your company need to comply with industry and regulatory standards? If so, which standards?
+- What are your goals for protecting critical workloads, including containers and servers, against malicious attacks?
+- Do you need a solution only in a specific cloud environment, or a cross-cloud solution?
+- How will the company respond to alerts and recommendations, and remediate non-compliant resources?
+- Will workload owners be expected to remediate issues?
+
+## Mapping Defender for Cloud to business requirements
+
+Defender for Cloud provides a single management point for protecting Azure, on-premises, and multicloud resources. Defender for Cloud can meet your business requirements by:
+
+- Securing and protecting your GCP, AWS, and Azure environments.
+- Assessing and strengthening the security configuration of your cloud workloads.
+- Managing compliance against critical industry and regulatory standards.
+- Providing vulnerability management solutions for servers and containers.
+- Protecting critical workloads, including containers, servers, and databases, against malicious attacks.
+
+The diagram below shows the Defender for Cloud architecture. Defender for Cloud can:
+
+- Provide unified visibility and recommendations across multicloud environments. There's no need to switch between different portals to see the status of your resources.
+- Compare your resource configuration against industry standards, regulations, and benchmarks. [Learn more](/azure/defender-for-cloud/update-regulatory-compliance-packages) about standards.
+- Help security analysts to triage alerts based on threats/suspicious activities. Workload protection capabilities can be applied to critical workloads for threat detection and advanced defenses.
++
+## Next steps
+
+In this article, you've learned how to determine your business needs when designing a multicloud security solution. Continue with the next step to [determine an adoption strategy](plan-multicloud-security-define-adoption-strategy.md).
defender-for-cloud Plan Multicloud Security Determine Compliance Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-compliance-requirements.md
+
+ Title: Defender for Cloud Planning multicloud security compliance requirements guidance AWS standards GCP standards
+description: Learn about determining compliance requirements in multicloud environment with Microsoft Defender for Cloud.
++ Last updated : 10/03/2022+
+# Determine compliance requirements
+
+This article is part of a series to provide guidance as you design a cloud security posture management (CSPM) and cloud workload protection (CWP) solution across multicloud resources with Microsoft Defender for Cloud.
+
+## Goal
+
+Identify compliance requirements in your organization as you design your multicloud solution.
+
+## Get started
+
+Defender for Cloud continually assesses the configuration of your resources against compliance controls and best practices in the standards and benchmarks you've applied in your subscriptions.
+
+- By default, every subscription has the [Azure Security Benchmark](/security/benchmark/azure/introduction) assigned. This benchmark contains Microsoft Azure security and compliance best practices, based on common compliance frameworks.
+- AWS standards include AWS Foundational Security Best Practices, CIS 1.2.0, and PCI DSS 3.2.1.
+- GCP standards include GCP Default, GCP CIS 1.1.0/1.2.0, GCP ISO 27001, GCP NIST 800 53, and PCI DSS 3.2.1.
+- By default, every subscription that contains the AWS connector has the AWS Foundational Security Best Practices assigned.
+- Every subscription with the GCP connector has the GCP Default benchmark assigned.
+- For AWS and GCP, the compliance monitoring freshness interval is 4 hours.
+
+After you enable enhanced security features, you can add other compliance standards to the dashboard. Regulatory compliance is available when you enable at least one Defender plan on the subscription in which the multicloud connector is located, or on the connector.
+
+You can also create your own custom standards and assessments for [AWS](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/custom-assessments-and-standards-in-microsoft-defender-for-cloud/ba-p/3066575) and [GCP](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/custom-assessments-and-standards-in-microsoft-defender-for-cloud/ba-p/3251252) to align with your organizational requirements.
+
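+If you want to script a quick view of the standards currently tracked for a subscription, the following hedged sketch queries the regulatory compliance standards REST API with `Invoke-AzRestMethod`; the api-version shown is an assumption, so confirm it against the current REST API reference.
+
+```powershell
+# Hedged sketch: list the regulatory compliance standards tracked for a subscription.
+# The subscription ID is a placeholder and the api-version is an assumption.
+$response = Invoke-AzRestMethod -Method GET `
+    -Path '/subscriptions/<subscription-id>/providers/Microsoft.Security/regulatoryComplianceStandards?api-version=2019-01-01-preview'
+
+($response.Content | ConvertFrom-Json).value | ForEach-Object { $_.name }
+```
+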
+## Next steps
+
+In this article, you've learned how to determine your compliance requirements when designing a multicloud security solution. Continue with the next step to [determine ownership requirements](plan-multicloud-security-determine-ownership-requirements.md).
defender-for-cloud Plan Multicloud Security Determine Data Residency Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-data-residency-requirements.md
+
+ Title: Defender for Cloud Planning multicloud security determine data residency requirements GDPR agent considerations guidance
+description: Learn about determining data residency requirements when planning multicloud deployment with Microsoft Defender for Cloud.
++ Last updated : 10/03/2022++
+# Determine data residency requirements
+
+This article is one of a series providing guidance as you design a cloud security posture management (CSPM) and cloud workload protection (CWP) solution across multicloud resources with Microsoft Defender for Cloud.
+
+## Goal
+
+Identify data residency constraints as you plan your multicloud deployment.
+
+## Get started
+
+When designing business solutions, data residency (the physical or geographic location of an organization's data) is often top of mind due to compliance requirements. For example, the European Union's General Data Protection Regulation (GDPR) requires all data collected on citizens to be stored in the EU, for it to be subject to European privacy laws.
+
+As you plan, consider these points around data residency:
+
+- When you create connectors to protect multicloud resources, the connector resource is hosted in an Azure resource group that you choose when you set up the connector. Select this resource group in accordance with your data residency requirements.
+- When data is retrieved from AWS/GCP, it's stored in either GDPR-EU or US:
+
+ - Defender for Cloud looks at the region in which the data is stored in the AWS/GCP cloud and matches that.
+ - Anything in the EU is stored in the EU region. Anything else is stored in the US region.
+
+## Agent considerations
+
+There are data considerations around agents and extensions used by Defender for Cloud.
+
+- **CSPM:** CSPM functionality in Defender for Cloud is agentless. No agents are needed for CSPM to work.
+- **CWP:** Some workload protection functionality for Defender for Cloud requires the use of agents to collect data.
+
+## Defender for Servers plan
+
+Agents are used in the Defender for Servers plan as follows:
+
+- Non-Azure public clouds connect to Azure by leveraging the [Azure Arc](/azure/azure-arc/servers/overview) service.
+- The [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) is installed on multicloud machines that onboard as Azure Arc machines. Defender for Cloud should be enabled in the subscription in which the Azure Arc machines are located.
+- Defender for Cloud leverages the Connected Machine agent to install extensions (such as Microsoft Defender for Endpoint) that are needed for [Defender for Servers](/azure/defender-for-cloud/defender-for-servers-introduction) functionality.
+- [Log Analytics agent/Azure Monitor Agent (AMA)](/azure/azure-monitor/agents/agents-overview) is needed for some [Defender for Servers Plan 2](/azure/defender-for-cloud/defender-for-servers-introduction) functionality.
+ - The agents can be provisioned automatically by Defender for Cloud.
+ - When you enable auto-provisioning, you specify where to store collected data. Either in the default Log Analytics workspace created by Defender for Cloud, or in any other workspace in your subscription. [Learn more](/azure/defender-for-cloud/enable-data-collection?tabs=autoprovision-feature).
+ - If you select to continuously export data, you can drill into and configure the types of events and alerts that are saved. [Learn more](/azure/defender-for-cloud/continuous-export?tabs=azure-portal).
+- Log Analytics workspace:
+ - You define the Log Analytics workspace you use at the subscription level. It can be either a default workspace, or a custom-created workspace.
+ - There are [several reasons](/azure/azure-monitor/logs/workspace-design) to select the default workspace rather than the custom workspace.
+ - The location of the default workspace depends on your Azure Arc machine region. [Learn more](https://learn.microsoft.com/azure/defender-for-cloud/faq-data-collection-agents#where-is-the-default-log-analytics-workspace-created-).
+ - The location of the custom-created workspace is set by your organization. [Learn more](https://learn.microsoft.com/azure/defender-for-cloud/faq-data-collection-agents#how-can-i-use-my-existing-log-analytics-workspace-) about using a custom workspace.
+
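+For the workspace choice described in the list above, a minimal sketch of pointing Defender for Cloud at a custom Log Analytics workspace (assuming the Az.Security module; the IDs are placeholders):
+
+```powershell
+# Minimal sketch: store the data Defender for Cloud collects in this subscription
+# in a custom Log Analytics workspace instead of the default one.
+Set-AzSecurityWorkspaceSetting -Name 'default' `
+    -Scope '/subscriptions/<subscription-id>' `
+    -WorkspaceId '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>'
+```
+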
+## Defender for Containers plan
+
+[Defender for Containers](/azure/defender-for-cloud/defender-for-containers-introduction) protects your multicloud container deployments running in:
+
+- **Azure Kubernetes Service (AKS)** - Microsoft's managed service for developing, deploying, and managing containerized applications.
+- **Amazon Elastic Kubernetes Service (EKS) in a connected AWS account** - Amazon's managed service for running Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.
+- **Google Kubernetes Engine (GKE) in a connected GCP project** - Google's managed environment for deploying, managing, and scaling applications using GCP infrastructure.
+- **Other Kubernetes distributions** - using [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview), which allows you to attach and configure Kubernetes clusters running anywhere, including other public clouds and on-premises.
+
+Defender for Containers has both agent-based and agentless components.
+
+- **Agentless collection of Kubernetes audit log data**: [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) or GCP Cloud Logging enables and collects audit log data, and sends the collected information to Defender for Cloud for further analysis. Data storage is based on the EKS cluster AWS region, in accordance with GDPR - EU and US.
+- **Agent-based Azure Arc-enabled Kubernetes**: Connects your EKS and GKE clusters to Azure using [Azure Arc agents](/azure/azure-arc/kubernetes/conceptual-agent-overview), so that they're treated as Azure Arc resources.
+- **Microsoft Defender extension**: A DaemonSet that collects signals from hosts using eBPF technology, and provides runtime protection. The extension is registered with a Log Analytics workspace and used as a data pipeline. The audit log data isn't stored in the Log Analytics workspace.
+- **Azure Policy extension**: Configuration information is collected by the Azure Policy add-on.
+ - The Azure Policy add-on extends the open-source Gatekeeper v3 admission controller webhook for Open Policy Agent.
+ - The extension registers as a web hook to Kubernetes admission control and makes it possible to apply at-scale enforcement, safeguarding your clusters in a centralized, consistent manner.
+
+## Defender for Databases plan
+
+For the [Defender for Databases plan](/azure/defender-for-cloud/quickstart-enable-database-protections) in a multicloud scenario, you leverage Azure Arc to manage the multicloud SQL Server databases. The SQL Server instance is installed in a virtual or physical machine connected to Azure Arc.
+
+- The [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) is installed on machines connected to Azure Arc.
+- The Defender for Databases plan should be enabled in the subscription in which the Azure Arc machines are located.
+- The Log Analytics agent for Microsoft Defender SQL Servers should be provisioned on the Azure Arc machines. It collects security-related configuration settings and event logs from machines.
+- Automatic SQL server discovery and registration needs to be set to On to allow SQL database discovery on the machines.
+
+When it comes to the actual AWS and GCP resources that are protected by Defender for Cloud, their location is set directly from the AWS and GCP clouds.
+
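+As an illustration, enabling protection for SQL servers on machines in the subscription that hosts the Azure Arc machines could look like the following sketch (assuming the Az.Security module; the subscription ID is a placeholder).
+
+```powershell
+# Minimal sketch: enable Microsoft Defender for SQL servers on machines
+# in the subscription where the Azure Arc-enabled machines are located.
+Set-AzContext -SubscriptionId '<subscription-id>'
+Set-AzSecurityPricing -Name 'SqlServerVirtualMachines' -PricingTier 'Standard'
+```
+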
+## Next steps
+
+In this article, you have learned how to determine your data residency requirements when designing a multicloud security solution. Continue with the next step to [determine compliance requirements](plan-multicloud-security-determine-compliance-requirements.md).
defender-for-cloud Plan Multicloud Security Determine Multicloud Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-multicloud-dependencies.md
+
+ Title: Defender for Cloud Planning multicloud security determine multicloud dependencies CSPM CWPP guidance cloud workload protection
+description: Learn about determining multicloud dependencies when planning multicloud deployment with Microsoft Defender for Cloud.
++ Last updated : 10/03/2022++
+# Determine multicloud dependencies
+
+This article is one of a series providing guidance as you design a cloud security posture management (CSPM) and cloud workload protection (CWP) solution across multicloud resources with Microsoft Defender for Cloud.
+
+## Goal
+
+Figure out dependencies that might influence your multicloud design.
+
+## Get started
+
+As you design your multicloud solution, it's important to have a clear picture of the components needed to enjoy all multicloud features in Defender for Cloud.
+
+## CSPM
+
+Defender for Cloud provides Cloud Security Posture Management (CSPM) features for your AWS and GCP workloads.
+
+- After you onboard AWS and GCP, Defender for Cloud starts assessing your multicloud workloads against industry standards, and reports on your security posture.
+- CSPM features are agentless and don't rely on any other components except for successful onboarding of AWS/GCP connectors.
+- It's important to note that the Security Posture Management plan is turned on by default and can't be turned off.
+- Learn about the [IAM permissions](/azure/defender-for-cloud/quickstart-onboard-aws?pivots=env-settings) needed to discover AWS resources for CSPM.
+
+## CWPP
+
+In Defender for Cloud, you enable specific plans to get Cloud Workload Protection Platform (CWPP) features. Plans to protect multicloud resources include:
+
+- [Defender for Servers](/azure/defender-for-cloud/defender-for-servers-introduction): Protect AWS/GCP Windows and Linux machines.
+- [Defender for Containers](/azure/defender-for-cloud/defender-for-containers-introduction): Help secure your Kubernetes clusters with security recommendations and hardening, vulnerability assessments, and runtime protection.
+- [Defender for SQL](/azure/defender-for-cloud/defender-for-sql-usage): Protect SQL databases running in AWS and GCP.
+
+### What agent do I need?
+
+The following table summarizes agent requirements for CWPP.
+
+| Agent | Defender for Servers | Defender for Containers | Defender for SQL on Machines |
+|:--|:-:|:-:|:-:|
+| Azure Arc Agent | ✔ | ✔ | ✔ |
+| Microsoft Defender for Endpoint extension | ✔ | | |
+| Vulnerability assessment | ✔ | | |
+| Log Analytics or Azure Monitor Agent (preview) extension | ✔ | | ✔ |
+| Defender profile | | ✔ | |
+| Azure Policy extension | | ✔ | |
+| Kubernetes audit log data | | ✔ | |
+| SQL servers on machines | | | ✔ |
+| Automatic SQL server discovery and registration | | | ✔ |
+
+### Defender for Servers
+
+Enabling Defender for Servers on your AWS or GCP connector allows Defender for Cloud to provide server protection to your Google Compute Engine VMs and AWS EC2 instances.
+
+#### Review plans
+
+Defender for Servers offers two different plans:
+
+- **Plan 1:**
+ - **MDE integration:** Plan 1 integrates with [Microsoft Defender for Endpoint Plan 2](/microsoft-365/security/defender-endpoint/defender-endpoint-plan-1-2?view=o365-worldwide) to provide a full endpoint detection and response (EDR) solution for machines running a [range of operating systems](/microsoft-365/security/defender-endpoint/minimum-requirements?view=o365-worldwide). Defender for Endpoint features include:
+ - [Reducing the attack surface](/microsoft-365/security/defender-endpoint/overview-attack-surface-reduction?view=o365-worldwide) for machines.
+ - Providing [antivirus](/microsoft-365/security/defender-endpoint/next-generation-protection?view=o365-worldwide) capabilities.
+ - Threat management, including [threat hunting](/microsoft-365/security/defender-endpoint/advanced-hunting-overview?view=o365-worldwide), [detection](/microsoft-365/security/defender-endpoint/overview-endpoint-detection-response?view=o365-worldwide), [analytics](/microsoft-365/security/defender-endpoint/threat-analytics?view=o365-worldwide), and [automated investigation and response](/microsoft-365/security/defender-endpoint/overview-endpoint-detection-response?view=o365-worldwide).
+ - **Provisioning:** Automatic provisioning of the Defender for Endpoint sensor on every supported machine that's connected to Defender for Cloud.
+ - **Licensing:** Charges Defender for Endpoint licenses per hour instead of per seat, lowering costs by protecting virtual machines only when they are in use.
+- **Plan 2:** Includes all the components of Plan 1 along with additional capabilities such as File Integrity Monitoring (FIM), Just-in-time (JIT) VM access, and more.
+
+ Review the [features of each plan](/azure/defender-for-cloud/defender-for-servers-introduction) before onboarding to Defender for Servers.
+
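+A minimal sketch of enabling the Defender for Servers plan on the subscription that hosts your AWS/GCP connector (assuming the Az.Security module; selecting Plan 1 or Plan 2 may be exposed through an additional parameter in newer module versions, so check the cmdlet reference):
+
+```powershell
+# Minimal sketch: enable Microsoft Defender for Servers on the connector's subscription.
+# The subscription ID is a placeholder.
+Set-AzContext -SubscriptionId '<subscription-id>'
+Set-AzSecurityPricing -Name 'VirtualMachines' -PricingTier 'Standard'
+```
+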
+#### Review components
+
+The following components and requirements are needed to receive full protection from the Defender for Servers plan:
+
+- **Azure Arc agent**: AWS and GCP machines connect to Azure using Azure Arc. The Azure Arc agent connects them.
+ - The Azure Arc agent is needed to read security information on the host level and allow Defender for Cloud to deploy the agents/extensions required for complete protection.
+  - To auto-provision the Azure Arc agent, the OS configuration agent on [GCP VM instances](/azure/defender-for-cloud/quickstart-onboard-gcp?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](/azure/defender-for-cloud/quickstart-onboard-aws?pivots=env-settings) must be configured. [Learn more](/azure/azure-arc/servers/agent-overview) about the agent.
+- **Defender for Endpoint capabilities**: The [Microsoft Defender for Endpoint](/azure/defender-for-cloud/integration-defender-for-endpoint?tabs=linux) agent provides comprehensive endpoint detection and response (EDR) capabilities.
+- **Vulnerability assessment**: Using either the integrated [Qualys vulnerability scanner](/azure/defender-for-cloud/deploy-vulnerability-assessment-vm), or the [Microsoft threat and vulnerability management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management?view=o365-worldwide) solution.
+- **Log Analytics agent/[Azure Monitor Agent](/azure/azure-monitor/agents/agents-overview) (AMA) (in preview)**: Collects security-related configuration information and event logs from machines.
+
+#### Check networking requirements
+
+Machines must meet [network requirements](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud) before onboarding the agents. Auto-provisioning is enabled by default.
+
+### Defender for Containers
+
+Enabling Defender for Containers provides GKE and EKS clusters and underlying hosts with threat detection capabilities that include:
+
+- Kubernetes behavioral analytics
+- Anomaly detection
+- Security best practices
+- Built-in admission control policies and more
+
+#### Review components - Defender for Containers
+
+The required [components](/azure/defender-for-cloud/defender-for-containers-introduction) are as follows:
+
+- **Azure Arc Agent**: Connects your GKE and EKS clusters to Azure, and onboards the Defender Profile.
+- **Defender Profile**: Provides host-level runtime threat protection.
+- **Azure Policy extension**: Extends Gatekeeper v3 to monitor every request to the Kubernetes API server, and ensures that security best practices are being followed on clusters and workloads.
+- **Kubernetes audit logs**: Audit logs from the API server allow Defender for Containers to identify suspicious activity within your multicloud servers, and provide deeper insights while investigating alerts. Sending of Kubernetes audit logs needs to be enabled at the connector level.
+
+#### Check networking requirements - Defender for Containers
+
+Make sure to check that your clusters meet network requirements so that the Defender Profile can connect with Defender for Cloud.
+
+### Defender for SQL
+
+Defender for SQL provides threat detection for SQL servers running on GCP Compute Engine and in AWS. The Defender for SQL Server on Machines plan must be enabled on the subscription where the connector is located.
+
+#### Review components - Defender for SQL
+
+To receive the full benefits of Defender for SQL on your multicloud workload, you need these components:
+
+- **Azure Arc agent**: AWS and GCP machines connect to Azure using Azure Arc. The Azure Arc agent connects them.
+ - The Azure Arc agent is needed to read security information on the host level and allow Defender for Cloud to deploy the agents/extensions required for complete protection.
+ - To auto-provision the Azure Arc agent, the OS configuration agent on [GCP VM instances](/azure/defender-for-cloud/quickstart-onboard-gcp?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](/azure/defender-for-cloud/quickstart-onboard-aws?pivots=env-settings) must be configured. [Learn more](/azure/azure-arc/servers/agent-overview) about the agent.
+- **Log Analytics agent/[Azure Monitor Agent](/azure/azure-monitor/agents/agents-overview) (AMA) (in preview)**: Collects security-related configuration information and event logs from machines
+- **Automatic SQL server discovery and registration**: Supports automatic discovery and registration of SQL servers
+
+## Next steps
+
+In this article, you have learned how to determine multicloud dependencies when designing a multicloud security solution. Continue with the next step to [automate connector deployment](plan-multicloud-security-automate-connector-deployment.md).
defender-for-cloud Plan Multicloud Security Determine Ownership Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-ownership-requirements.md
+
+ Title: Defender for Cloud Planning multicloud security determine ownership requirements security functions team alignment best practices guidance
+description: Learn about determining ownership requirements when planning multicloud deployment with Microsoft Defender for Cloud.
++ Last updated : 10/03/2022+
+# Determine ownership requirements
+
+This article is one of a series providing guidance as you design a cloud security posture management (CSPM) and cloud workload protection (CWP) solution across multicloud resources with Microsoft Defender for Cloud.
+
+## Goal
+
+Identify the teams involved in your multicloud security solution, and plan how they will align and work together.
+
+## Security functions
+
+Depending on the size of your organization, separate teams will manage [security functions](/azure/cloud-adoption-framework/organize/cloud-security-compliance-management). In a complex enterprise, functions might be numerous.
+
+| Security function | Details |
+|||
+|[Security Operations (SecOps)](/azure/cloud-adoption-framework/organize/cloud-security-operations-center) | Reducing organizational risk by reducing the time in which bad actors have access to corporate resources. Reactive detection, analysis, response and remediation of attacks. Proactive threat hunting.
+| [Security architecture](/azure/cloud-adoption-framework/organize/cloud-security-architecture)| Security design summarizing and documenting the components, tools, processes, teams, and technologies that protect your business from risk.|
+|[Security compliance management](/azure/cloud-adoption-framework/organize/cloud-security-compliance-management)| Processes that ensure the organization is compliant with regulatory requirements and internal policies.|
+|[People security](/azure/cloud-adoption-framework/organize/cloud-security-people)|Protecting the organization from human risk to security.|
+|[Application security and DevSecOps](/azure/cloud-adoption-framework/organize/cloud-security-application-security-devsecops)| Integrating security into DevOps processes and apps.|
+|[Data security](/azure/cloud-adoption-framework/organize/cloud-security-data-security)| Protecting your organizational data.|
+|[Infrastructure and endpoint security](/azure/cloud-adoption-framework/organize/cloud-security-infrastructure-endpoint)|Providing protection, detection and response for infrastructure, networks, and endpoint devices used by apps and users.|
+|[Identity and key management](/azure/cloud-adoption-framework/organize/cloud-security-identity-keys)|Authenticating and authorizing users, services, devices, and apps. Provide secure distribution and access for cryptographic operations.|
+|[Threat intelligence](/azure/cloud-adoption-framework/organize/cloud-security-threat-intelligence)| Making decisions and acting on security threat intelligence that provides context and actionable insights on active attacks and potential threats.|
+|[Posture management](/azure/cloud-adoption-framework/organize/cloud-security-posture-management)|Continuously reporting on, and improving, your organizational security posture.|
+|[Incident preparation](/azure/cloud-adoption-framework/organize/cloud-security-incident-preparation)|Building tools, processes, and expertise to respond to security incidents.
+
+## Team alignment
+
+Despite the many different teams who manage cloud security, it's critical that they work together to figure out who's responsible for decision making in the multicloud environment. Lack of ownership creates friction that can result in stalled projects and insecure deployments that couldn't wait for security approval.
+
+Security leadership, most commonly under the CISO, should specify who's accountable for security decision making. Typically, responsibilities align as summarized in the table.
+
+|Category | Description | Typical Team|
+| | | |
+|Server endpoint security | Monitor and remediate server security, including patching, configuration, endpoint security, etc.| Joint responsibility of [central IT operations](/azure/cloud-adoption-framework/organize/central-it) and [Infrastructure and endpoint security](/azure/cloud-adoption-framework/organize/cloud-security-infrastructure-endpoint) teams.|
+|Incident monitoring and response| Investigate and remediate security incidents in your organization's SIEM or source console.| [Security operations](/azure/cloud-adoption-framework/organize/cloud-security-operations-center) team.|
+|Policy management|Set direction for Azure role-based access control (Azure RBAC), Microsoft Defender for Cloud, administrator protection strategy, and Azure Policy, in order to govern Azure resources, custom AWS/GCP recommendations etc.|Joint responsibility of [policy and standards](/azure/cloud-adoption-framework/organize/cloud-security-policy-standards) and [security architecture](/azure/cloud-adoption-framework/organize/cloud-security-architecture) teams.|
+|Threat and vulnerability management| Maintain complete visibility and control of the infrastructure, to ensure that critical issues are discovered and remediated as efficiently as possible.| Joint responsibility of [central IT operations](/azure/cloud-adoption-framework/organize/central-it) and [Infrastructure and endpoint security](/azure/cloud-adoption-framework/organize/cloud-security-infrastructure-endpoint) teams.|
+|Application workloads|Focus on security controls for specific workloads. The goal is to integrate security assurances into development processes and custom line of business (LOB) applications.|Joint responsibility of [application development](/azure/cloud-adoption-framework/organize/cloud-security-application-security-devsecops) and [central IT operations](/azure/cloud-adoption-framework/organize/central-it) teams.|
+|Identity security and standards | Understand Permission Creep Index (PCI) for Azure subscriptions, AWS accounts, and GCP projects, in order to identify risks associated with unused or excessive permissions across identities and resources.| Joint responsibility of [identity and key management](/azure/cloud-adoption-framework/organize/cloud-security-identity-keys), [policy and standards](/azure/cloud-adoption-framework/organize/cloud-security-policy-standards), and [security architecture](/azure/cloud-adoption-framework/organize/cloud-security-architecture) teams. |
+
+## Best practices
+
+- Although multicloud security might be divided across different areas of the business, teams should manage security across the entire multicloud estate. This is better than having different teams secure different cloud environments, for example, one team managing Azure and another managing AWS. Teams that work across multicloud environments help to prevent sprawl within the organization, and help to ensure that security policies and compliance requirements are applied in every environment.
+- Often, teams that manage Defender for Cloud don't have privileges to remediate recommendations in workloads. For example, the Defender for Cloud team might not be able to remediate vulnerabilities in an AWS EC2 instance. The security team might be responsible for improving the security posture, but unable to fix the resulting security recommendations. To address this issue:
+ - It's imperative to involve the AWS workload owners.
+ - [Assigning owners with due dates](/azure/defender-for-cloud/governance-rules) and [defining governance rules](/azure/defender-for-cloud/governance-rules) creates accountability and transparency, as you drive processes to improve security posture.
+- Depending on organizational models, we commonly see these options for central security teams operating with workload owners:
+ - **Option 1: Centralized model.** Security controls are defined, deployed, and monitored by a central team.
+
+ - The central security team decides which security policies will be implemented in the organization and who has permissions to control the set policy.
+ - The team might also have the power to remediate non-compliant resources and enforce resource isolation in case of a security threat or configuration issue.
+ - Workload owners on the other hand are responsible for managing their cloud workloads but need to follow the security policies that the central team has deployed.
+ - This model is most suitable for companies with a high level of automation, to ensure automated response processes to vulnerabilities and threats.
+ - **Option 2: Decentralized model.** Security controls are defined, deployed, and monitored by workload owners.
+ - Security control deployment is done by workload owners, as they own the policy set and can therefore decide which security policies are applicable to their resources.
+ - Owners need to be aware of, understand, and act upon security alerts and recommendations for their own resources.
+ - The central security team on the other hand only acts as a controlling entity, without write-access to any of the workloads.
+ - The security team usually has insights into the overall security posture of the organization, and they might hold the workload owners accountable for improving their security posture.
+ - This model is most suitable for organizations that need visibility into their overall security posture, but at the same time want to keep responsibility for security with the workload owners.
+ - Currently, the only way to achieve Option 2 in Defender for Cloud is to assign the workload owners Security Reader permissions on the subscription that's hosting the multicloud connector resource (a sketch follows this list).
+
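+For Option 2, a minimal sketch of granting a workload owner Security Reader on the subscription that hosts the connector resource (assuming Az.Resources; the object ID and subscription ID are placeholders):
+
+```powershell
+# Minimal sketch: give a workload owner read access to Defender for Cloud data
+# by assigning Security Reader on the subscription that hosts the multicloud connector.
+New-AzRoleAssignment -ObjectId '<workload-owner-object-id>' `
+    -RoleDefinitionName 'Security Reader' `
+    -Scope '/subscriptions/<subscription-id>'
+```
+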
+## Next steps
+
+In this article, you have learned how to determine ownership requirements when designing a multicloud security solution. Continue with the next step to [determine access control requirements](plan-multicloud-security-determine-access-control-requirements.md).
defender-for-cloud Plan Multicloud Security Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-get-started.md
+
+ Title: Defender for Cloud Planning multicloud security get started guidance before you begin cloud solution
+description: Learn about designing a solution for securing and protecting your multicloud environment with Microsoft Defender for Cloud.
++ Last updated : 10/03/2022+
+# Get started
+
+This article introduces guidance to help you design a solution for securing and protecting your multicloud environment with Microsoft Defender for Cloud. The guidance can be used by cloud solution and infrastructure architects, security architects and analysts, and anyone else involved in designing a multicloud security solution.
+
+As you capture your functional and technical requirements, the articles provide an overview of multicloud capabilities, planning guidance, and prerequisites.
+
+Follow the guides in order. They build on each other to help you make design decisions. We recommend that you reread the articles as needed, to understand and incorporate all considerations.
+
+## What should I get from this guide?
+
+Use this guide as an aid as you design Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) solutions across multicloud environments. After reading the articles, you should have answers to the following questions:
+
+- What questions should I ask and answer as I design my multicloud solution?
+- What steps do I need to complete to design a solution?
+- What technologies and capabilities are available to me?
+- What trade-offs do I need to consider?
+
+## Problem space
+
+As organizations span multiple cloud providers, it becomes increasingly complex to centralize security, and for security teams to work across multiple environments and vendors.
+
+Defender for Cloud helps you to protect your multicloud environment by strengthening your security posture and protecting your workloads. Defender for Cloud provides a single dashboard to manage protection across all environments.
++
+## Before you begin
+
+Before working through these articles, you should have a basic understanding of Azure, Defender for Cloud, Azure Arc, and your multicloud AWS/GCP environment.
+
+## Next steps
+
+In this article, you've been introduced to the process of designing a multicloud security solution. Continue with the next step to [determine business needs](plan-multicloud-security-determine-business-needs.md).
defender-for-cloud Plan Multicloud Security Other Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-other-resources.md
+
+ Title: Other Resources
+description: Other resources to explore when planning multicloud deployment with Microsoft Defender for Cloud.
++ Last updated : 09/21/2022++
+# Other resources
+
+After designing your multicloud adoption, get ready to start deploying CSPM and CWP features. Resources include:
+
+- Defender for Cloud documentation:
+ - [Multicloud documentation](https://learn.microsoft.com/azure/defender-for-cloud/multicloud)
+ - [Connecting your AWS accounts](https://learn.microsoft.com/azure/defender-for-cloud/quickstart-onboard-aws?pivots=env-settings)
+ - [Connecting your GCP projects](https://learn.microsoft.com/azure/defender-for-cloud/quickstart-onboard-gcp?pivots=env-settings)
+ - [Custom assessment and standards for AWS workloads](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/custom-assessments-and-standards-in-microsoft-defender-for-cloud/ba-p/3066575)
+ - [Custom assessments and standards for GCP workloads](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/custom-assessments-and-standards-in-microsoft-defender-for-cloud/ba-p/3251252)
+- Other resources:
+ - [Important upcoming changes](https://learn.microsoft.com/azure/defender-for-cloud/upcoming-changes)
+ - [What's new in Microsoft Defender for Cloud](https://learn.microsoft.com/azure/defender-for-cloud/release-notes)
+ - [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/bg-p/MicrosoftDefenderCloudBlog)
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
definitions related to Microsoft Defender for Cloud. The following groupings of
available: - The [initiatives](#microsoft-defender-for-cloud-initiatives) group lists the Azure Policy initiative definitions in the "Defender for Cloud" category.-- The [default initiative](#defender-for-clouds-default-initiative-azure-security-benchmark) group lists all the Azure Policy definitions that are part of Defender for Cloud's default initiative, [Azure Security Benchmark](/security/benchmark/azure/introduction). This Microsoft-authored, widely respected benchmark builds on controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security.
+- The [default initiative](#defender-for-clouds-default-initiative-microsoft-cloud-security-benchmark) group lists all the Azure Policy definitions that are part of Defender for Cloud's default initiative, [Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction). This Microsoft-authored, widely respected benchmark builds on controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security.
- The [category](#microsoft-defender-for-cloud-category) group lists all the Azure Policy definitions in the "Defender for Cloud" category. For more information about security policies, see [Working with security policies](./tutorial-security-policy.md). For additional Azure Policy built-ins for other services, see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
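If you'd rather enumerate these built-ins from a shell than browse the groupings below, they can be pulled with Azure PowerShell. The following is a minimal sketch, assuming the Az.Resources module and an authenticated session; the category metadata value on these built-ins is `Security Center`, and the output object shape can differ between Az.Resources versions, so the property paths are assumptions you may need to adjust.

```powershell
# Minimal sketch: list built-in Azure Policy definitions and initiatives in the
# Defender for Cloud category. Assumes Az.Resources and that Connect-AzAccount
# has already been run; adjust the Properties paths for newer module versions.

# Built-in policy definitions (the category metadata value is "Security Center").
Get-AzPolicyDefinition -Builtin |
    Where-Object { $_.Properties.Metadata.category -eq 'Security Center' } |
    ForEach-Object { $_.Properties.DisplayName }

# Built-in initiatives (policy set definitions) in the same category.
Get-AzPolicySetDefinition -Builtin |
    Where-Object { $_.Properties.Metadata.category -eq 'Security Center' } |
    ForEach-Object { $_.Properties.DisplayName }
```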
To learn about the built-in initiatives that are monitored by Defender for Cloud
[!INCLUDE [azure-policy-reference-policyset-security-center](../../includes/policy/reference/bycat/policysets-security-center.md)]
-## Defender for Cloud's default initiative (Azure Security Benchmark)
+## Defender for Cloud's default initiative (Microsoft Cloud Security Benchmark)
To learn about the built-in policies that are monitored by Defender for Cloud, see the following table:
defender-for-cloud Powershell Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-onboarding.md
Title: Onboard to Microsoft Defender for Cloud with PowerShell
description: This document walks you through the process of enabling Microsoft Defender for Cloud with PowerShell cmdlets. Last updated 11/09/2021-+ # Quickstart: Automate onboarding of Microsoft Defender for Cloud using PowerShell
In this example, we will enable Defender for Cloud on a subscription with ID: d0
2. Set the Log Analytics workspace to which the Log Analytics agent will send the data it collects on the VMs associated with the subscription – in this example, an existing user-defined workspace (myWorkspace).
-3. Activate Defender for Cloud's automatic agent provisioning, which [deploys the Log Analytics agent](enable-data-collection.md#auto-provision-mma).
+3. Activate Defender for Cloud's automatic agent provisioning, which [deploys the Log Analytics agent](working-with-log-analytics-agent.md).
5. Set the organization's [CISO as the security contact for Defender for Cloud alerts and notable events](configure-email-notifications.md).
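The steps above map onto the Az.Security cmdlets. The following is only a minimal sketch of that flow, not the article's full script: the subscription ID, workspace resource ID, and contact email are placeholders, and parameter names can vary slightly between Az.Security versions.

```powershell
# Minimal sketch of the onboarding steps described above, using Az and Az.Security.
# All IDs and the email address are placeholders.
Connect-AzAccount
Set-AzContext -Subscription '<subscription-id>'

# Enable a Defender plan (Standard tier) on the subscription, for example for servers.
Set-AzSecurityPricing -Name 'VirtualMachines' -PricingTier 'Standard'

# Point data collection at the existing user-defined Log Analytics workspace (myWorkspace).
Set-AzSecurityWorkspaceSetting -Name 'default' `
    -Scope '/subscriptions/<subscription-id>' `
    -WorkspaceId '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/myWorkspace'

# Turn on automatic provisioning of the Log Analytics agent.
Set-AzSecurityAutoProvisioningSetting -Name 'default' -EnableAutoProvision

# Register the CISO as the security contact for alerts and notable events.
Set-AzSecurityContact -Name 'default1' -Email 'ciso@contoso.com' -AlertAdmin -NotifyOnAlert
```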
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 06/29/2022 Last updated : 09/20/2022 zone_pivot_groups: connect-aws-accounts-+ # Quickstart: Connect your AWS accounts to Microsoft Defender for Cloud
-With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
+With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (AWS), Google Cloud Platform (GCP), GitHub, and Azure DevOps (ADO).
To protect your AWS-based resources, you can connect an AWS account with either:
You can learn more by watching this video from the Defender for Cloud in the Fie
|Aspect|Details| |-|:-| |Release state:|General Availability (GA)|
-|Pricing:|The **CSPM plan** is free.<br>The **[Defender for SQL](defender-for-sql-introduction.md)** plan is billed at the same price as Azure resources.<br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After which, it will be billed for AWS at the same price as for Azure resources.<br>For every AWS machine connected to Azure, the **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines.<br>Learn more about [Defender plan pricing and billing](enhanced-security-features-overview.md#faqpricing-and-billing)|
+|Pricing:|The **[Defender for SQL](defender-for-sql-introduction.md)** plan is billed at the same price as Azure resources.<br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After the preview, it will be billed for AWS at the same price as for Azure resources.<br>For every AWS machine connected to Azure, the **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines.<br>Learn more about [Defender plan pricing and billing](enhanced-security-features-overview.md#faqpricing-and-billing)|
|Required roles and permissions:|**Contributor** permission for the relevant Azure subscription. <br> **Administrator** on the AWS account.| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
The native cloud connector requires:
- Additional extensions should be enabled on the Arc-connected machines. - Log Analytics (LA) agent on Arc machines, and ensure the selected workspace has a security solution installed. The LA agent is currently configured at the subscription level. All of your multicloud AWS accounts and GCP projects under the same subscription will inherit the subscription settings.
- Learn how to [configure auto-provisioning on your subscription](enable-data-collection.md#quickstart-configure-auto-provisioning-for-agents-and-extensions-from-microsoft-defender-for-cloud).
+ Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
- **To enable the Defender for Servers plan**, you'll need:
The native cloud connector requires:
- If you want to manually install Azure Arc on your existing and future EC2 instances, use the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation to identify instances that don't have Azure Arc installed.
- - Additional extensions should be enabled on the Arc-connected machines.
+ - Additional extensions should be enabled on the Arc-connected machines:
- Microsoft Defender for Endpoint
- - VA solution (TVM/ Qualys)
+ - VA solution (TVM/Qualys)
- Log Analytics (LA) agent on Arc machines. Ensure the selected workspace has a security solution installed. The LA agent is currently configured at the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings with regard to the LA agent.
- Learn how to [configure auto-provisioning on your subscription](enable-data-collection.md#quickstart-configure-auto-provisioning-for-agents-and-extensions-from-microsoft-defender-for-cloud).
+ Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
> [!NOTE] > Defender for Servers assigns tags to your AWS resources to manage the auto-provisioning process. You must have these tags properly assigned to your resources so that Defender for Cloud can manage your resources:
The following IAM permissions are needed to discover AWS resources:
| API Gateway | `apigateway:GET` | | Application Auto Scaling | `application-autoscaling:Describe*` | | Auto scaling | `autoscaling-plans:Describe*` <br> `autoscaling:Describe*` |
-| Certificate manager | `acm-pca:Describe*` <br> `acm-pca:List*` <br> `acm:Describe* <br>acm:List*` |
+| Certificate manager | `acm-pca:Describe*` <br> `acm-pca:List*` <br> `acm:Describe*` <br> `acm:List*` |
| CloudFormation | `cloudformation:Describe*` <br> `cloudformation:List*` | | CloudFront | `cloudfront:DescribeFunction` <br> `cloudfront:GetDistribution` <br> `cloudfront:GetDistributionConfig` <br> `cloudfront:List*` | | CloudTrail | `cloudtrail:Describe*` <br> `cloudtrail:GetEventSelectors` <br> `cloudtrail:List*` <br> `cloudtrail:LookupEvents` |
The following IAM permissions are needed to discover AWS resources:
| ELB ΓÇô elastic load balancing (v1/2) | `elasticloadbalancing:Describe*` | | Elastic search | `es:Describe*` <br> `es:List*` | | EMR ΓÇô elastic map reduce | `elasticmapreduce:Describe*` <br> `elasticmapreduce:GetBlockPublicAccessConfiguration` <br> `elasticmapreduce:List*` <br> `elasticmapreduce:View*` |
-| GuardDute | `guardduty:DescribeOrganizationConfiguration` <br> `guardduty:DescribePublishingDestination` <br> `guardduty:List*` |
+| GuardDuty | `guardduty:DescribeOrganizationConfiguration` <br> `guardduty:DescribePublishingDestination` <br> `guardduty:List*` |
| IAM | `iam:Generate*` <br> `iam:Get*` <br> `iam:List*` <br> `iam:Simulate*` | | KMS | `kms:Describe*` <br> `kms:List*` |
-| LAMBDA | `lambda:GetPolicy` <br> `lambda:List*` |
+| Lambda | `lambda:GetPolicy` <br> `lambda:List*` |
| Network firewall | `network-firewall:DescribeFirewall` <br> `network-firewall:DescribeFirewallPolicy` <br> `network-firewall:DescribeLoggingConfiguration` <br> `network-firewall:DescribeResourcePolicy` <br> `network-firewall:DescribeRuleGroup` <br> `network-firewall:DescribeRuleGroupMetadata` <br> `network-firewall:ListFirewallPolicies` <br> `network-firewall:ListFirewalls` <br> `network-firewall:ListRuleGroups` <br> `network-firewall:ListTagsForResource` | | RDS | `rds:Describe*` <br> `rds:List*` | | RedShift | `redshift:Describe*` |
defender-for-cloud Quickstart Onboard Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md
+
+ Title: 'Quickstart: Connect your Azure DevOps repositories to Microsoft Defender for Cloud'
+description: Learn how to connect your Azure DevOps repositories to Defender for Cloud.
Last updated : 09/20/2022++++
+# Quickstart: Connect your Azure DevOps repositories to Microsoft Defender for Cloud
+
+With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (AWS), Google Cloud Platform (GCP), GitHub, and Azure DevOps (ADO).
+
+To protect your ADO-based resources, you can connect your ADO organizations on the environment settings page. This page provides a simple onboarding experience (including auto discovery).
+
+By connecting your Azure DevOps repositories to Defender for Cloud, you'll extend Defender for Cloud's enhanced security features to your ADO resources. These features include:
+
+- **Defender for Cloud's CSPM features** - Assesses your Azure DevOps resources according to ADO-specific security recommendations. These recommendations are also included in your secure score. Resources will be assessed for compliance with built-in standards that are specific to DevOps. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature that helps you manage your Azure DevOps resources alongside your Azure resources.
+
+- **Microsoft Defender for DevOps** - Extends Defender for Cloud's threat detection capabilities and advanced defenses to your Azure DevOps resources.
++
+You can view all of the [recommendations for DevOps](recommendations-reference.md) resources.
+
+## Prerequisites
+
+- An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Availability
+
+| Aspect | Details |
+|--|--|
+| Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. |
+| Pricing: | The Defender for DevOps plan is free during the preview. <br><br> After the preview, it will be billed; pricing will be determined at a later date. |
+| Required roles and permissions: | **Contributor** on the relevant Azure subscription <br> **Security Admin Role** in Defender for Cloud <br> **Azure DevOps Organization Administrator** <br> Third-party applications can gain access using OAuth, which must be set to `On`. [Learn more about OAuth](/azure/devops/organizations/accounts/change-application-access-policies?view=azure-devops)|
+| Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial clouds <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Azure China 21Vianet) |
+
+## Connect your Azure DevOps organization
+
+**To connect your Azure DevOps organization**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment Settings**.
+
+1. Select **Add environment**.
+
+1. Select **Azure DevOps**.
+
+ :::image type="content" source="media/quickstart-onboard-ado/devop-connector.png" alt-text="Screenshot that shows you where to navigate to select the DevOps connector." lightbox="media/quickstart-onboard-ado/devop-connector.png":::
+
+1. Enter a name, and select a subscription, resource group, and region.
+
+ > [!NOTE]
+ > The subscription will be the location where Defender for DevOps will create and store the Azure DevOps connection.
+
+1. Select **Next: Select plans**.
+
+1. Select **Next: Authorize connection**.
+
+1. Select **Authorize**.
+
+1. In the popup screen, read the list of permission requests, and select **Accept**.
+
+ :::image type="content" source="media/quickstart-onboard-ado/accept.png" alt-text="Screenshot that shows you the accept button, to accept the permissions.":::
+
+1. Select your relevant organization(s) from the drop-down menu.
+
+1. For projects, do one of the following:
+
+ - Select **Auto discover projects** to discover all projects automatically and apply auto discover to all current and future projects.
+
+ or
+
+ - Select your relevant project(s) from the drop-down menu.
+
+ > [!NOTE]
> If you select your relevant project(s) from the drop-down menu, you'll also need to choose whether to auto discover repositories or to select individual repositories.
+
+1. Select **Next: Review and create**.
+
+1. Review the information and select **Create**.
+
+The Defender for DevOps service automatically discovers the organizations, projects, and repositories you select and analyzes them for any security issues. The Inventory page populates with your selected repositories, and the Recommendations page shows any security issues related to a selected repository.
+
+## Learn more
+
+- Learn more about [Azure DevOps](https://learn.microsoft.com/azure/devops/?view=azure-devops).
+
+- Learn how to [create your first pipeline](https://learn.microsoft.com/azure/devops/pipelines/create-first-pipeline?view=azure-devops&tabs=java%2Ctfs-2018-2%2Cbrowser).
+
+## Next steps
+Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
+
+Learn how to [configure the MSDO Azure DevOps extension](azure-devops-extension.md).
+
+Learn how to [configure pull request annotations](tutorial-enable-pull-request-annotations.md) in Defender for Cloud.
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project to Microsoft Defender for Cloud description: Monitoring your GCP resources from Microsoft Defender for Cloud Previously updated : 07/11/2022 Last updated : 09/20/2022 zone_pivot_groups: connect-gcp-accounts-+ # Quickstart: Connect your GCP projects to Microsoft Defender for Cloud
-With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
+With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (AWS), Google Cloud Platform (GCP), GitHub, and Azure DevOps (ADO).
To protect your GCP-based resources, you can connect a GCP project with either:
To protect your GCP-based resources, you can connect a GCP project with either:
|Aspect|Details| |-|:-| | Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to the Azure features that are in beta, preview, or otherwise not yet released into general availability. |
-|Pricing:|The **CSPM plan** is free.<br>The **[Defender for SQL](defender-for-sql-introduction.md)** plan is billed at the same price as Azure resources.<br> The **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines. If a GCP VM instance doesn't have the Azure Arc agent deployed, you won't be charged for that machine. <br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After which, it will be billed for GCP at the same price as for Azure resources.|
+|Pricing:|The **[Defender for SQL](defender-for-sql-introduction.md)** plan is billed at the same price as Azure resources.<br> The **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines. If a GCP VM instance doesn't have the Azure Arc agent deployed, you won't be charged for that machine. <br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After the preview, it will be billed for GCP at the same price as for Azure resources.|
|Required roles and permissions:| **Contributor** on the relevant Azure Subscription <br> **Owner** on the GCP organization or project| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet, Other Gov)|
To have full visibility to Microsoft Defender for Servers security content, ensu
The LA agent is currently configured at the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings with regard to the LA agent.
- Learn how to [configure auto-provisioning on your subscription](enable-data-collection.md#quickstart-configure-auto-provisioning-for-agents-and-extensions-from-microsoft-defender-for-cloud).
+ Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
> [!NOTE] > Defender for Servers assigns tags to your GCP resources to manage the auto-provisioning process. You must have these tags properly assigned to your resources so that Defender for Cloud can manage your resources:
To have full visibility to Microsoft Defender for SQL security content, ensure y
The LA agent and SQL servers on machines plan are currently configured at the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings and may result in additional charges.
- Learn how to [configure auto-provisioning on your subscription](enable-data-collection.md#quickstart-configure-auto-provisioning-for-agents-and-extensions-from-microsoft-defender-for-cloud).
+ Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
> [!NOTE] > Defender for SQL assigns tags to your GCP resources to manage the auto-provisioning process. You must have these tags properly assigned to your resources so that Defender for Cloud can manage your resources:
defender-for-cloud Quickstart Onboard Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md
+
+ Title: 'Quickstart: Connect your GitHub repositories to Microsoft Defender for Cloud'
+description: Learn how to connect your GitHub repositories to Defender for Cloud.
Last updated : 09/20/2022++++
+# Quickstart: Connect your GitHub repositories to Microsoft Defender for Cloud
+
+With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (AWS), Google Cloud Platform (GCP), GitHub, and Azure DevOps (ADO).
+
+To protect your GitHub-based resources, you can connect your GitHub organizations on the environment settings page. This page provides a simple onboarding experience (including auto discovery).
+
+By connecting your GitHub repositories to Defender for Cloud, you'll extend Defender for Cloud's enhanced security features to your GitHub resources. These features include:
+
+- **Defender for Cloud's CSPM features** - Assesses your GitHub resources according to GitHub-specific security recommendations. These recommendations are also included in your secure score. Resources will be assessed for compliance with built-in standards that are specific to DevOps. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature that helps you manage your GitHub resources alongside your Azure resources.
+
+- **Microsoft Defender for DevOps** - Extends Defender for Cloud's threat detection capabilities and advanced defenses to your GitHub resources.
+
+You can view all of the [recommendations for DevOps](recommendations-reference.md) resources.
+
+## Prerequisites
++
+- An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Availability
+
+| Aspect | Details |
+|--|--|
+| Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. |
+| Required roles and permissions: | **Contributor** on the relevant Azure subscription <br> **Security Admin Role** in Defender for Cloud <br> **GitHub Organization Administrator** |
+| Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial clouds <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Azure China 21Vianet) |
+
+## Connect your GitHub account
+
+**To connect your GitHub account to Microsoft Defender for Cloud**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment Settings**.
+
+1. Select **Add environment**.
+
+1. Select **GitHub**.
+
+ :::image type="content" source="media/quickstart-onboard-github/select-github.png" alt-text="Screenshot that shows you where to select, to select GitHub." lightbox="media/quickstart-onboard-github/select-github.png":::
+
+1. Enter a name, and select your **subscription**, **resource group**, and **region** from the drop-down menus.
+
+ > [!NOTE]
+ > The subscription will be the location where Defender for DevOps will create and store the GitHub connection.
+
+1. Select **Next: Select plans**.
+
+1. Select **Next: Authorize connection**.
+
+1. Select **Authorize** to grant your Azure subscription access to your GitHub repositories. Sign in, if necessary, with an account that has permissions to the repositories you want to protect.
+
+ :::image type="content" source="media/quickstart-onboard-github/authorize.png" alt-text="Screenshot that shows where the authorize button is located on the screen.":::
+
+1. Select **Install**.
+
+1. Select the repositories on which to install the GitHub application.
+
+ > [!Note]
+ > This will grant Defender for DevOps access to the selected repositories.
+
+1. Select **Next: Review and create**.
+
+1. Select **Create**.
+
+When the process completes, the GitHub connector appears on your Environment settings page.
++
+The Defender for DevOps service automatically discovers the repositories you select and analyzes them for any security issues. The Inventory page populates with your selected repositories, and the Recommendations page shows any security issues related to a selected repository.
+
+## Learn more
+
+- You can learn more about [how Azure and GitHub integrate](https://docs.microsoft.com/azure/developer/github/).
+
+- Learn about [security hardening practices for GitHub Actions](https://docs.github.com/actions/security-guides/security-hardening-for-github-actions).
+
+## Next steps
+Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
+
+Learn how to [configure the MSDO GitHub action](github-action.md).
+
+Learn how to [configure pull request annotations](tutorial-enable-pull-request-annotations.md) in Defender for Cloud.
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
This article lists the recommendations you might see in Microsoft Defender for C
shown in your environment depend on the resources you're protecting and your customized configuration.
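To see which of these recommendations are currently active and unhealthy in your own environment, rather than scanning the full reference, one option is to query Azure Resource Graph. The following is a minimal sketch, assuming the Az.ResourceGraph module is installed; the property paths follow the current `securityresources` schema and may change.

```powershell
# Minimal sketch: list unhealthy Defender for Cloud recommendations (assessments)
# via Azure Resource Graph. Assumes the Az.ResourceGraph module and an
# authenticated session; property paths follow the securityresources schema.
$query = @"
securityresources
| where type == 'microsoft.security/assessments'
| where properties.status.code == 'Unhealthy'
| project recommendation = tostring(properties.displayName),
          resourceId = tostring(properties.resourceDetails.Id),
          severity = tostring(properties.metadata.severity)
"@
Search-AzGraph -Query $query -First 100
```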
-Defender for Cloud's recommendations are based on the [Azure Security Benchmark](../security/benchmarks/introduction.md).
-Azure Security Benchmark is the Microsoft-authored, Azure-specific set of guidelines for security
+Defender for Cloud's recommendations are based on the [Microsoft Cloud Security Benchmark](../security/benchmarks/introduction.md).
+Microsoft Cloud Security Benchmark is the Microsoft-authored, Azure-specific set of guidelines for security
and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on
impact on your secure score.
[!INCLUDE [asc-recs-data](../../includes/asc-recs-data.md)] + ## <a name='recs-identityandaccess'></a>IdentityAndAccess recommendations [!INCLUDE [asc-recs-identityandaccess](../../includes/asc-recs-identityandaccess.md)]
defender-for-cloud Regulatory Compliance Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md
Title: 'Tutorial: Regulatory compliance checks - Microsoft Defender for Cloud' description: 'Tutorial: Learn how to Improve your regulatory compliance using Microsoft Defender for Cloud.'-- Previously updated : 04/26/2022+ Last updated : 09/21/2022 # Tutorial: Improve your regulatory compliance Microsoft Defender for Cloud helps streamline the process for meeting regulatory compliance requirements, using the **regulatory compliance dashboard**. Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in the standards that you've applied to your subscriptions. The dashboard reflects the status of your compliance with these standards.
-When you enable Defender for Cloud on an Azure subscription, the [Azure Security Benchmark](/security/benchmark/azure/introduction) is automatically assigned to that subscription. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security.
+When you enable Defender for Cloud on an Azure subscription, the [Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction) is automatically assigned to that subscription. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/), [PCI-DSS](https://www.pcisecuritystandards.org/), and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security.
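To confirm from a shell that the default initiative is assigned to a given subscription, you can list the policy assignments at that scope. A minimal sketch follows, assuming the Az.Resources module; the display-name filter and output property paths are assumptions and may need adjusting to the assignment name used in your tenant.

```powershell
# Minimal sketch: find the Defender for Cloud default initiative assignment on a
# subscription. Assumes Az.Resources and an authenticated session; the
# display-name filter is an assumption.
$subscriptionId = '<subscription-id>'
Get-AzPolicyAssignment -Scope "/subscriptions/$subscriptionId" |
    Where-Object { $_.Properties.DisplayName -like '*Security Benchmark*' } |
    ForEach-Object { $_.Properties.DisplayName }
```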
The regulatory compliance dashboard shows the status of all the assessments within your environment for your chosen standards and regulations. As you act on the recommendations and reduce risk factors in your environment, your compliance posture improves.
In this tutorial you'll learn how to:
> [!div class="checklist"] > * Evaluate your regulatory compliance using the regulatory compliance dashboard
+> * Check Microsoft's compliance offerings for Azure, Dynamics 365, and Power Platform products
> * Improve your compliance posture by taking action on recommendations > * Download PDF/CSV reports as well as certification reports of your compliance status > * Setup alerts on changes to your compliance status
If you don't have an Azure subscription, create a [free account](https://azure
To step through the features covered in this tutorial: - [Enable enhanced security features](defender-for-cloud-introduction.md). You can enable these for free for 30 days.-- You must be signed in with an account that has reader access to the policy compliance data. The **Global reader** for the subscription has access to the policy compliance data, but the **Security Reader** role does not. At a minimum, you'll need to have **Resource Policy Contributor** and **Security Admin** roles assigned.
+- You must be signed in with an account that has reader access to the policy compliance data. The **Global Reader** for the subscription has access to the policy compliance data, but the **Security Reader** role doesn't. At a minimum, you'll need to have **Resource Policy Contributor** and **Security Admin** roles assigned.
## Assess your regulatory compliance
The regulatory compliance dashboard shows your selected compliance standards wit
Use the regulatory compliance dashboard to help focus your attention on the gaps in compliance with your chosen standards and regulations. This focused view also enables you to continuously monitor your compliance over time within dynamic cloud and hybrid environments.
-1. From Defender for Cloud's menu, select **Regulatory compliance**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
- At the top of the screen, is a dashboard with an overview of your compliance status and the set of supported compliance regulations. You'll see your overall compliance score, and the number of passing vs. failing assessments associated with each standard.
+1. Navigate to **Defender for Cloud** > **Regulatory compliance**.
- :::image type="content" source="./media/regulatory-compliance-dashboard/compliance-dashboard.png" alt-text="Regulatory compliance dashboard." lightbox="./media/regulatory-compliance-dashboard/compliance-dashboard.png":::
+ The dashboard provides you with an overview of your compliance status and the set of supported compliance regulations. You'll see your overall compliance score, and the number of passing vs. failing assessments associated with each standard.
-1. Select a tab for a compliance standard that is relevant to you (1). You'll see which subscriptions the standard is applied on (2), and the list of all controls for that standard (3). For the applicable controls, you can view the details of passing and failing assessments associated with that control (4), and the number of affected resources (5). Some controls are grayed out. These controls don't have any Defender for Cloud assessments associated with them. Check their requirements and assess them in your environment. Some of these might be process-related and not technical.
- :::image type="content" source="./media/regulatory-compliance-dashboard/compliance-drilldown.png" alt-text="Exploring the details of compliance with a specific standard.":::
+ Each numbered item in the following list matches a location in the image above and describes what it shows:
+- Select a compliance standard to see a list of all controls for that standard. (1)
+- View the subscription(s) that the compliance standard is applied on. (2)
+- Select a control to see more details. Expand the control to view the assessments associated with it. Select an assessment to view the list of associated resources and the actions to remediate compliance concerns. (3)
+- Select **Control details** to view the **Overview**, **Your Actions**, and **Microsoft Actions** tabs. (4)
+- In the Your Actions tab, you can see the automated and manual assessments associated to the control. (5)
+- Automated assessments show the number of failed resources and resource types, and link you directly to the remediation experience to address those recommendations. (6)
+- The manual assessments can be manually attested, and evidence can be linked to demonstrate compliance. (7)
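If you also want to pull the same passed/failed numbers outside the portal (for example, to feed a custom report), the data is exposed through Azure Resource Graph. A minimal sketch follows, assuming the Az.ResourceGraph module; the property names are taken from the current `securityresources` schema and may change.

```powershell
# Minimal sketch: summarize passed/failed/skipped controls per compliance standard
# with Azure Resource Graph. Assumes the Az.ResourceGraph module and an
# authenticated session; property names follow the securityresources schema.
$query = @"
securityresources
| where type == 'microsoft.security/regulatorycompliancestandards'
| project standard = name,
          state = tostring(properties.state),
          passed = toint(properties.passedControls),
          failed = toint(properties.failedControls),
          skipped = toint(properties.skippedControls)
| order by failed desc
"@
Search-AzGraph -Query $query
```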
-## Improve your compliance posture
+## Investigate your regulatory compliance issues
-Using the information in the regulatory compliance dashboard, improve your compliance posture by resolving recommendations directly within the dashboard.
+You can use the information in the regulatory compliance dashboard to investigate any issues that may be affecting your compliance posture.
-1. Select any of the failing assessments that appear in the dashboard to view the details for that recommendation. Each recommendation includes a set of remediation steps to resolve the issue.
+**To investigate your compliance issues**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Defender for Cloud** > **Regulatory compliance**.
+
+1. Select a regulatory compliance standard.
+
+1. Select a compliance control to expand it.
+
+1. Select **Control details**.
+
+ :::image type="content" source="media/regulatory-compliance-dashboard/control-detail.png" alt-text="Screenshot that shows you where to navigate to select control details on the screen.":::
+
+ - Select **Overview** to see the specific information about the Control you selected.
+ - Select **Your Actions** to see a detailed view of automated and manual actions you need to take to improve your compliance posture.
+ - Select **Microsoft Actions** to see all the actions Microsoft took to ensure compliance with the selected standard.
+
+1. Under **Your Actions**, you can select a down arrow to view more details and resolve the recommendation for that resource.
+
+ :::image type="content" source="media/regulatory-compliance-dashboard/down-arrow.png" alt-text="Screenshot that shows you where the down arrow is on the screen.":::
+
+ For more information about how to apply recommendations, see [Implementing security recommendations in Microsoft Defender for Cloud](review-security-recommendations.md).
+
+ > [!NOTE]
+ > Assessments run approximately every 12 hours, so you will see the impact on your compliance data only after the next run of the relevant assessment.
+
+## Remediate an automated assessment
+
+Regulatory compliance includes both automated and manual assessments that may need to be remediated. Using the information in the regulatory compliance dashboard, improve your compliance posture by resolving recommendations directly within the dashboard.
+
+**To remediate an automated assessment**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Defender for Cloud** > **Regulatory compliance**.
+
+1. Select a regulatory compliance standard.
+
+1. Select a compliance control to expand it.
+
+1. In the **Automated assessments** section, select a failed assessment to view the details for that recommendation. Each recommendation includes a set of remediation steps to resolve the issue.
1. Select a particular resource to view more details and resolve the recommendation for that resource. <br>For example, in the **Azure CIS 1.1.0** standard, select the recommendation **Disk encryption should be applied on virtual machines**.
Using the information in the regulatory compliance dashboard, improve your compl
For more information about how to apply recommendations, see [Implementing security recommendations in Microsoft Defender for Cloud](review-security-recommendations.md).
-1. After you take action to resolve recommendations, you'll see the result in the compliance dashboard report because your compliance score improves.
+1. After you take action to resolve recommendations, you'll see your compliance score improve on the compliance dashboard.
> [!NOTE] > Assessments run approximately every 12 hours, so you will see the impact on your compliance data only after the next run of the relevant assessment.
+## Remediate a manual assessment
+
+Regulatory compliance includes automated and manual assessments that may need to be remediated. Manual assessments require input from the customer to remediate them.
+
+**To remediate a manual assessment**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Defender for Cloud** > **Regulatory compliance**.
+
+1. Select a regulatory compliance standard.
+
+1. Select a compliance control to expand it.
+
+1. Under the **Manual attestation and evidence** section, select an assessment.
+
+1. Select the relevant subscriptions.
+
+1. Select **Attest**.
+
+1. Enter the relevant information and attach evidence for compliance.
+
+1. Select **Save**.
+ ## Generate compliance status reports and certificates - To generate a PDF report with a summary of your current compliance status for a particular standard, select **Download report**.
Using the information in the regulatory compliance dashboard, improve your compl
:::image type="content" source="media/release-notes/audit-reports-list-regulatory-compliance-dashboard-ga.png" alt-text="Filtering the list of available Azure Audit reports using tabs and filters.":::
- For example, from the PCI tab you can download a ZIP file containing a digitally signed certificate demonstrating Microsoft Azure, Dynamics 365, and Other Online Services' compliance with ISO22301 framework, together with the necessary collateral to interpret and present the certificate.
+ For example, from the PCI tab you can download a ZIP file containing a digitally signed certificate demonstrating Microsoft Azure, Dynamics 365, and Other Online Services' compliance with the ISO 22301 framework, together with the necessary collateral to interpret and present the certificate.
> [!NOTE] > When you download one of these certification reports, you'll be shown the following privacy notice: > > _By downloading this file, you are giving consent to Microsoft to store the current user and the selected subscriptions at the time of download. This data is used in order to notify you in case of changes or updates to the downloaded audit report. This data is used by Microsoft and the audit firms that produce the certification/reports only when notification is required._
+### Check compliance offerings status
+
+The transparency provided by the compliance offerings allows you to view the certification status for each of the services provided by Microsoft before you add your product to the Azure platform.
+
+**To check the compliance offerings status**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Defender for Cloud** > **Regulatory compliance**.
+
+1. Select **Compliance offerings**.
+
+ :::image type="content" source="media/regulatory-compliance-dashboard/compliance-offerings.png" alt-text="Screenshot that shows where to select the compliance offering button from the dashboard." lightbox="media/regulatory-compliance-dashboard/compliance-offerings.png":::
+
+1. Enter a service in the search bar to view its compliance offering.
+
+ :::image type="content" source="media/regulatory-compliance-dashboard/search-service.png" alt-text="Screenshot of the compliance offering screen with the search bar highlighted." lightbox="media/regulatory-compliance-dashboard/search-service.png":::
+ ## Configure frequent exports of your compliance status data
-If you want to track your compliance status with other monitoring tools in your environment, Defender for Cloud includes an export mechanism to make this straightforward. Configure **continuous export** to send select data to an Azure Event Hub or a Log Analytics workspace. Learn more in [continuously export Defender for Cloud data](continuous-export.md).
+If you want to track your compliance status with other monitoring tools in your environment, Defender for Cloud includes an export mechanism to make this straightforward. Configure **continuous export** to send select data to Azure Event Hubs or a Log Analytics workspace. Learn more in [continuously export Defender for Cloud data](continuous-export.md).
-Use continuous export data to an Azure Event Hub or a Log Analytics workspace:
+Use continuous export to send data to Azure Event Hubs or a Log Analytics workspace:
- Export all regulatory compliance data in a **continuous stream**:
Use continuous export data to an Azure Event Hub or a Log Analytics workspace:
> [!TIP] > You can also manually export reports about a single point in time directly from the regulatory compliance dashboard. Generate these **PDF/CSV reports** or **Azure and Dynamics certification reports** using the **Download report** or **Audit reports** toolbar options. See [Assess your regulatory compliance](#assess-your-regulatory-compliance) - ## Run workflow automations when there are changes to your compliance
-Defender for Cloud's workflow automation feature can trigger Logic Apps whenever one of your regulatory compliance assessments change state.
+Defender for Cloud's workflow automation feature can trigger Logic Apps whenever one of your regulatory compliance assessments changes state.
-For example, you might want Defender for Cloud to email a specific user when a compliance assessment fails. You'll need to create the logic app first (using [Azure Logic Apps](../logic-apps/logic-apps-overview.md)) and then set up the trigger in a new workflow automation as explained in [Automate responses to Defender for Cloud triggers](workflow-automation.md).
+For example, you might want Defender for Cloud to email a specific user when a compliance assessment fails. You'll need to first create the logic app (using [Azure Logic Apps](../logic-apps/logic-apps-overview.md)) and then set up the trigger in a new workflow automation as explained in [Automate responses to Defender for Cloud triggers](workflow-automation.md).
:::image type="content" source="media/release-notes/regulatory-compliance-triggers-workflow-automation.png" alt-text="Using changes to regulatory compliance assessments to trigger a workflow automation." lightbox="media/release-notes/regulatory-compliance-triggers-workflow-automation.png":::
+## FAQ - Regulatory compliance dashboard
+- [How do I know which benchmark or standard to use?](#how-do-i-know-which-benchmark-or-standard-to-use)
+- [What standards are supported in the compliance dashboard?](#what-standards-are-supported-in-the-compliance-dashboard)
+- [Why do some controls appear grayed out?](#why-do-some-controls-appear-grayed-out)
+- [How can I remove a built-in standard, like PCI-DSS, ISO 27001, or SOC2 TSP from the dashboard?](#how-can-i-remove-a-built-in-standard-like-pci-dss-iso-27001-or-soc2-tsp-from-the-dashboard)
+- [I made the suggested changes based on the recommendation, but it isn't being reflected in the dashboard?](#i-made-the-suggested-changes-based-on-the-recommendation-but-it-isnt-being-reflected-in-the-dashboard)
+- [What permissions do I need to access the compliance dashboard?](#what-permissions-do-i-need-to-access-the-compliance-dashboard)
+- [The regulatory compliance dashboard isn't loading for me](#the-regulatory-compliance-dashboard-isnt-loading-for-me)
+- [How can I view a report of passing and failing controls per standard in my dashboard?](#how-can-i-view-a-report-of-passing-and-failing-controls-per-standard-in-my-dashboard)
+- [How can I download a report with compliance data in a format other than PDF?](#how-can-i-download-a-report-with-compliance-data-in-a-format-other-than-pdf)
+- [How can I create exceptions for some of the policies in the regulatory compliance dashboard?](#how-can-i-create-exceptions-for-some-of-the-policies-in-the-regulatory-compliance-dashboard)
+- [What Microsoft Defender plans or licenses do I need to use the regulatory compliance dashboard?](#what-microsoft-defender-plans-or-licenses-do-i-need-to-use-the-regulatory-compliance-dashboard)
+- [How do I remediate a manual assessment?](#how-do-i-remediate-a-manual-assessment)
+### How do I know which benchmark or standard to use?
+[Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction) (MCSB) is the canonical set of security recommendations and best practices defined by Microsoft, aligned with common compliance control frameworks including the [CIS Control Framework](https://www.cisecurity.org/benchmark/azure/), [NIST SP 800-53](https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final), and PCI-DSS. MCSB is a comprehensive, cloud-agnostic set of security principles designed to recommend the most up-to-date technical guidelines for Azure along with other clouds such as AWS and GCP. We recommend MCSB to customers who want to maximize their security posture and align their compliance status with industry standards.
-## FAQ - Regulatory compliance dashboard
+The [CIS Benchmark](https://www.cisecurity.org/benchmark/azure/) is authored by an independent entity, the Center for Internet Security (CIS), and contains recommendations on a subset of core Azure services. We work with CIS to try to ensure that their recommendations are up to date with the latest enhancements in Azure, but they are sometimes delayed and can become outdated. Nonetheless, some customers like to use this objective, third-party assessment from CIS as their initial and primary security baseline.
-- [Tutorial: Improve your regulatory compliance](#tutorial-improve-your-regulatory-compliance)
- - [Prerequisites](#prerequisites)
- - [Assess your regulatory compliance](#assess-your-regulatory-compliance)
- - [Improve your compliance posture](#improve-your-compliance-posture)
- - [Generate compliance status reports and certificates](#generate-compliance-status-reports-and-certificates)
- - [Configure frequent exports of your compliance status data](#configure-frequent-exports-of-your-compliance-status-data)
- - [Run workflow automations when there are changes to your compliance](#run-workflow-automations-when-there-are-changes-to-your-compliance)
- - [FAQ - Regulatory compliance dashboard](#faqregulatory-compliance-dashboard)
- - [What standards are supported in the compliance dashboard?](#what-standards-are-supported-in-the-compliance-dashboard)
- - [Why do some controls appear grayed out?](#why-do-some-controls-appear-grayed-out)
- - [How can I remove a built-in standard, like PCI-DSS, ISO 27001, or SOC2 TSP from the dashboard?](#how-can-i-remove-a-built-in-standard-like-pci-dss-iso-27001-or-soc2-tsp-from-the-dashboard)
- - [I made the suggested changes based on the recommendation, but it isn't being reflected in the dashboard?](#i-made-the-suggested-changes-based-on-the-recommendation-but-it-isnt-being-reflected-in-the-dashboard)
- - [What permissions do I need to access the compliance dashboard?](#what-permissions-do-i-need-to-access-the-compliance-dashboard)
- - [The regulatory compliance dashboard isn't loading for me](#the-regulatory-compliance-dashboard-isnt-loading-for-me)
- - [How can I view a report of passing and failing controls per standard in my dashboard?](#how-can-i-view-a-report-of-passing-and-failing-controls-per-standard-in-my-dashboard)
- - [How can I download a report with compliance data in a format other than PDF?](#how-can-i-download-a-report-with-compliance-data-in-a-format-other-than-pdf)
- - [How can I create exceptions for some of the policies in the regulatory compliance dashboard?](#how-can-i-create-exceptions-for-some-of-the-policies-in-the-regulatory-compliance-dashboard)
- - [What Microsoft Defender plans or licenses do I need to use the regulatory compliance dashboard?](#what-microsoft-defender-plans-or-licenses-do-i-need-to-use-the-regulatory-compliance-dashboard)
- - [How do I know which benchmark or standard to use?](#how-do-i-know-which-benchmark-or-standard-to-use)
- - [Next steps](#next-steps)
+Since we've released the Microsoft Cloud Security Benchmark, many customers have chosen to migrate to it as a replacement for CIS benchmarks.
### What standards are supported in the compliance dashboard?
-By default, the regulatory compliance dashboard shows you the Azure Security Benchmark. The Azure Security Benchmark is the Microsoft-authored, Azure-specific guidelines for security, and compliance best practices based on common compliance frameworks. Learn more in the [Azure Security Benchmark introduction](../security/benchmarks/introduction.md).
+By default, the regulatory compliance dashboard shows you the Microsoft Cloud Security Benchmark. The Microsoft Cloud Security Benchmark is the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. Learn more in the [Microsoft Cloud Security Benchmark introduction](../security/benchmarks/introduction.md).
To track your compliance with any other standard, you'll need to explicitly add them to your dashboard.
More standards will be added to the dashboard and included in the information on
### Why do some controls appear grayed out? For each compliance standard in the dashboard, there's a list of the standard's controls. For the applicable controls, you can view the details of passing and failing assessments.
-Some controls are grayed out. These controls don't have any Defender for Cloud assessments associated with them. Some may be procedure or process-related, and so can't be verified by Defender for Cloud. Some don't have any automated policies or assessments implemented yet, but will have in the future. And some controls may be the platform's responsibility as explained in [Shared responsibility in the cloud](../security/fundamentals/shared-responsibility.md).
+Some controls are grayed out. These controls don't have any Defender for Cloud assessments associated with them. Some may be procedure or process-related, and so can't be verified by Defender for Cloud. Some don't have any automated policies or assessments implemented yet, but will have in the future. Some controls may be the platform's responsibility as explained in [Shared responsibility in the cloud](../security/fundamentals/shared-responsibility.md).
### How can I remove a built-in standard, like PCI-DSS, ISO 27001, or SOC2 TSP from the dashboard? To customize the regulatory compliance dashboard, and focus only on the standards that are applicable to you, you can remove any of the displayed regulatory standards that aren't relevant to your organization. To remove a standard, follow the instructions in [Remove a standard from your dashboard](update-regulatory-compliance-packages.md#remove-a-standard-from-your-dashboard).
To customize the regulatory compliance dashboard, and focus only on the standard
After you take action to resolve recommendations, wait 12 hours to see the changes to your compliance data. Assessments are run approximately every 12 hours, so you'll see the effect on your compliance data only after the assessments run. ### What permissions do I need to access the compliance dashboard?
-To view compliance data, you need to have at least **Reader** access to the policy compliance data as well; so Security Reader alone wonΓÇÖt suffice. If you're a global reader on the subscription, that will be enough too.
+To view compliance data, you also need at least **Reader** access to the policy compliance data; **Security Reader** alone won't suffice. If you're a **Global Reader** on the subscription, that's sufficient too.
The minimum set of roles for accessing the dashboard and managing standards is **Resource Policy Contributor** and **Security Admin**.
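If you need to grant those roles to a user, a minimal Azure PowerShell sketch follows, assuming the Az.Resources module and sufficient rights to create role assignments; the UPN and subscription ID are placeholders.

```powershell
# Minimal sketch: grant the two roles needed to manage standards in the
# regulatory compliance dashboard. The UPN and subscription ID are placeholders.
$scope = '/subscriptions/<subscription-id>'
$upn   = 'user@contoso.com'

New-AzRoleAssignment -SignInName $upn -RoleDefinitionName 'Resource Policy Contributor' -Scope $scope
New-AzRoleAssignment -SignInName $upn -RoleDefinitionName 'Security Admin' -Scope $scope
```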
To use the regulatory compliance dashboard, Defender for Cloud must be enabled a
1. Clear your browser's cache. 1. Try a different browser.
-1. Try opening the dashboard from different network location.
+1. Try opening the dashboard from a different network location.
### How can I view a report of passing and failing controls per standard in my dashboard? On the main dashboard, you can see a report of passing and failing controls for (1) the 'top 4' lowest compliance standards in the dashboard. To see all the passing/failing controls status, select (2) **Show all *x*** (where x is the number of standards you're tracking). A context plane displays the compliance status for every one of your tracked standards. :::image type="content" source="media/regulatory-compliance-dashboard/summaries-of-compliance-standards.png" alt-text="Summary section of the regulatory compliance dashboard."::: - ### How can I download a report with compliance data in a format other than PDF? When you select **Download report**, select the standard and the format (PDF or CSV). The resulting report will reflect the current set of subscriptions you've selected in the portal's filter.
For other policies, you can create an exemption directly in the policy itself, b
### What Microsoft Defender plans or licenses do I need to use the regulatory compliance dashboard? If you've got *any* of the Microsoft Defender plans enabled on *any* of your Azure resources, you can access Defender for Cloud's regulatory compliance dashboard and all of its data.
+### How do I remediate a manual assessment?
-### How do I know which benchmark or standard to use?
-[Azure Security Benchmark](/security/benchmark/azure/introduction) (ASB) is the canonical set of security recommendations and best practices defined by Microsoft, aligned with common compliance control frameworks including [CIS Microsoft Azure Foundations Benchmark](https://www.cisecurity.org/benchmark/azure/) and [NIST SP 800-53](https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final). ASB is a comprehensive benchmark, and is designed to recommend the most up-to-date security capabilities of a wide range of Azure services. We recommend ASB to customers who want to maximize their security posture and align their compliance status with industry standards.
-
-The [CIS Benchmark](https://www.cisecurity.org/benchmark/azure/) is authored by an independent entity – Center for Internet Security (CIS) – and contains recommendations on a subset of core Azure services. We work with CIS to try to ensure that their recommendations are up to date with the latest enhancements in Azure, but they do sometimes fall behind and become outdated. Nonetheless, some customers like to use this objective, third-party assessment from CIS as their initial and primary security baseline.
-
-Since we've released the Azure Security Benchmark, many customers have chosen to migrate to it as a replacement for CIS benchmarks.
-
+By selecting attest, you can demonstrate compliance with this control. Learn how to [remediate a manual assessment](#remediate-a-manual-assessment).
## Next steps
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Changes in our roadmap and priorities have removed the need for the network traf
Defender for Containers' image scan now supports Windows images that are hosted in Azure Container Registry. This feature is free while in preview, and will incur a cost when it becomes generally available.
-Learn more in [Use Microsoft Defender for Container to scan your images for vulnerabilities](defender-for-containers-usage.md).
+Learn more in [Use Microsoft Defender for Container to scan your images for vulnerabilities](defender-for-containers-va-acr.md).
### New alert for Microsoft Defender for Storage (preview)
It's likely that this change will impact your secure scores. For most subscripti
### Azure Defender for container registries now scans for vulnerabilities in registries protected with Azure Private Link
-Azure Defender for container registries includes a vulnerability scanner to scan images in your Azure Container Registry registries. Learn how to scan your registries and remediate findings in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md).
+Azure Defender for container registries includes a vulnerability scanner to scan images in your Azure Container Registry registries. Learn how to scan your registries and remediate findings in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-acr.md).
To limit access to a registry hosted in Azure Container Registry, assign virtual network private IP addresses to the registry endpoints and use Azure Private Link as explained in [Connect privately to an Azure container registry using Azure Private Link](../container-registry/container-registry-private-link.md).
With this update, you can now set Security Center to automatically provision thi
:::image type="content" source="media/release-notes/auto-provisioning-guest-configuration.png" alt-text="Enable auto deployment of Guest Configuration extension.":::
-Learn more about how auto provisioning works in [Configure auto provisioning for agents and extensions](enable-data-collection.md).
+Learn more about how auto provisioning works in [Configure auto provisioning for agents and extensions](monitoring-components.md).
### Recommendations to enable Azure Defender plans now support "Enforce"
Learn more about Security Center's vulnerability scanners:
- [Azure Defender's integrated Qualys vulnerability scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md) - [Azure Defender's integrated vulnerability assessment scanner for SQL servers](defender-for-sql-on-machines-vulnerability-assessment.md)-- [Azure Defender's integrated vulnerability assessment scanner for container registries](defender-for-containers-usage.md)
+- [Azure Defender's integrated vulnerability assessment scanner for container registries](defender-for-containers-va-acr.md)
### SQL data classification recommendation severity changed
New vulnerabilities are discovered every day. With this update, container images
Scanning is charged on a per image basis, so there's no additional charge for these rescans.
-Learn more about this scanner in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md).
+Learn more about this scanner in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-acr.md).
### Use Azure Defender for Kubernetes to protect hybrid and multicloud Kubernetes deployments (in preview)
You can now configure the auto provisioning of:
- (New) Azure Policy Add-on for Kubernetes - (New) Microsoft Dependency agent
-Learn more in [Auto provisioning agents and extensions from Azure Security Center](enable-data-collection.md).
+Learn more in [Auto provisioning agents and extensions from Azure Security Center](monitoring-components.md).
### Secure score is now available in continuous export (preview)
This option is available from the recommendations details pages for:
- **Vulnerabilities in Azure Container Registry images should be remediated** - **Vulnerabilities in your virtual machines should be remediated**
-Learn more in [Disable specific findings for your container images](defender-for-containers-usage.md#disable-specific-findings) and [Disable specific findings for your virtual machines](remediate-vulnerability-findings-vm.md#disable-specific-findings).
+Learn more in [Disable specific findings for your container images](defender-for-containers-va-acr.md#disable-specific-findings) and [Disable specific findings for your virtual machines](remediate-vulnerability-findings-vm.md#disable-specific-findings).
### Exempt a resource from a recommendation
The security findings are now available for export through continuous export whe
Related pages: - [Security Center's integrated Qualys vulnerability assessment solution for Azure virtual machines](deploy-vulnerability-assessment-vm.md)-- [Security Center's integrated vulnerability assessment solution for Azure Container Registry images](defender-for-containers-usage.md)
+- [Security Center's integrated vulnerability assessment solution for Azure Container Registry images](defender-for-containers-va-acr.md)
- [Continuous export](continuous-export.md) ### Prevent security misconfigurations by enforcing recommendations when creating new resources
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud-- Previously updated : 09/20/2022 Last updated : 10/03/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
> [!TIP] > If you're looking for items older than six months, you'll find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## October 2022
+
+Updates in October include:
+
+- [Announcing the Microsoft Cloud Security Benchmark](#announcing-the-microsoft-cloud-security-benchmark)
+- [Attack path analysis and contextual security capabilities in Defender for Cloud (Preview)](#attack-path-analysis-and-contextual-security-capabilities-in-defender-for-cloud-preview)
+- [Agentless scanning for Azure and AWS machines (Preview)](#agentless-scanning-for-azure-and-aws-machines-preview)
+- [Defender for DevOps (Preview)](#defender-for-devops-preview)
+- [Regulatory Compliance Dashboard now supports manual control management and detailed information on Microsoft's compliance status](#regulatory-compliance-dashboard-now-supports-manual-control-management-and-detailed-information-on-microsofts-compliance-status)
+- [Auto-provisioning has been renamed to Settings & monitoring and has an updated experience](#auto-provisioning-has-been-renamed-to-settings--monitoring-and-has-an-updated-experience)
+- [Defender Cloud Security Posture Management (CSPM) (Preview)](#defender-cloud-security-posture-management-cspm)
+- [MITRE ATT&CK framework mapping is now available also for AWS and GCP security recommendations](#mitre-attck-framework-mapping-is-now-available-also-for-aws-and-gcp-security-recommendations)
+- [Defender for Containers now supports vulnerability assessment for Elastic Container Registry (Preview)](#defender-for-containers-now-supports-vulnerability-assessment-for-elastic-container-registry-preview)
+
+### Announcing the Microsoft Cloud Security Benchmark
+
+The [Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction) (MCSB) is a new framework defining fundamental cloud security principles based on common industry standards and compliance frameworks, together with detailed technical guidance for implementing these best practices across cloud platforms. Replacing the Azure Security Benchmark, the MCSB provides prescriptive details for how to implement its cloud-agnostic security recommendations on multiple cloud service platforms, initially covering Azure and AWS.
+
+You can now monitor your cloud security compliance posture per cloud in a single, integrated dashboard. You can see MCSB as the default compliance standard when you navigate to Defender for Cloud's regulatory compliance dashboard.
+Microsoft Cloud Security Benchmark is automatically assigned to your Azure subscriptions and AWS accounts when you onboard Defender for Cloud.
+
+Learn more about the [Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction).
+
+### Attack path analysis and contextual security capabilities in Defender for Cloud (Preview)
+
+The new Cloud Security Graph, Attack Path Analysis and contextual cloud security capabilities are now available in Defender for Cloud in preview.
+
+One of the biggest challenges that security teams face today is the sheer number of security issues they encounter every day; there are always more issues to resolve than resources to address them.
+
+Defender for Cloud's new cloud security graph and attack path analysis capabilities let security teams assess the risk behind each security issue and identify the highest-risk issues that need to be resolved first, so they can reduce the risk of an impactful breach to their environment in the most effective way.
+
+Learn more about the new [Cloud Security Graph, Attack Path Analysis and the Cloud Security Explorer](concept-attack-path.md).
+
+### Agentless scanning for Azure and AWS machines (Preview)
+
+Until now, Defender for Cloud based its posture assessments for VMs on agent-based solutions. To help customers maximize coverage and reduce onboarding and management friction, we are releasing agentless scanning for VMs to preview.
+
+With agentless scanning for VMs, you get wide visibility on installed software and software CVEs, without the challenges of agent installation and maintenance, network connectivity requirements, and performance impact on your workloads. The analysis is powered by Microsoft Defender vulnerability management.
+
+Agentless vulnerability scanning is available in both Defender Cloud Security Posture Management (CSPM) and [Defender for Servers P2](defender-for-servers-introduction.md), with native support for AWS and Azure VMs.
+
+- Learn more about [agentless scanning](concept-agentless-data-collection.md).
+- Find out how to enable [agentless vulnerability assessment](enable-vulnerability-assessment-agentless.md).
+
+### Defender for DevOps (Preview)
+
+Microsoft Defender for Cloud enables comprehensive visibility, posture management, and threat protection across multicloud environments including Azure, AWS, Google, and on-premises resources.
+
+Now, the new Defender for DevOps service integrates source code management systems, like GitHub and Azure DevOps, into Defender for Cloud. With this new integration we are empowering security teams to protect their resources from code to cloud.
+
+Defender for DevOps lets you gain visibility into and manage your connected developer environments and code resources. Currently, you can connect [Azure DevOps](quickstart-onboard-devops.md) and [GitHub](quickstart-onboard-github.md) systems to Defender for Cloud and onboard DevOps repositories to Inventory and the new DevOps Security page. That page gives security teams a high-level overview of the security issues discovered across their connected repositories.
+
+Security teams can configure pull request annotations to help developers address secret scanning findings in Azure DevOps directly on their pull requests.
+
+You can configure the Microsoft Security DevOps tools on Azure DevOps pipelines and GitHub workflows to enable the following security scans:
+
+| Name | Language | License |
+|--|--|--|
+| [Bandit](https://github.com/PyCQA/bandit) | Python | [Apache License 2.0](https://github.com/PyCQA/bandit/blob/main/LICENSE) |
+| [BinSkim](https://github.com/Microsoft/binskim) | Binary – Windows, ELF | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
+| [ESLint](https://github.com/eslint/eslint) | JavaScript | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
+| [CredScan](https://secdevtools.azurewebsites.net/helpcredscan.html) (Azure DevOps only) | Credential Scanner (also known as CredScan) is a tool developed and maintained by Microsoft to identify credential leaks, such as those in source code and configuration files. Common types include default passwords, SQL connection strings, and certificates with private keys. | Not Open Source |
+| [Template Analyzer](https://github.com/Azure/template-analyzer) | ARM template, Bicep file | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
+| [Terrascan](https://github.com/tenable/terrascan) | Terraform (HCL2), Kubernetes (JSON/YAML), Helm v3, Kustomize, Dockerfiles, Cloud Formation | [Apache License 2.0](https://github.com/tenable/terrascan/blob/master/LICENSE) |
+| [Trivy](https://github.com/aquasecurity/trivy) | Container images, file systems, Git repositories | [Apache License 2.0](https://github.com/tenable/terrascan/blob/master/LICENSE) |
+
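+As a rough orientation, the following sketch shows where these tools could be wired into an Azure DevOps pipeline once the Microsoft Security DevOps extension is installed. The task name, agent image, and trigger shown here are assumptions for illustration, not the official sample; see [Configure the Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md) for the supported configuration.
+
+```yml
+# Illustrative azure-pipelines.yml sketch (assumptions noted inline).
+# Assumes the Microsoft Security DevOps extension is installed in the organization.
+trigger:
+  branches:
+    include:
+      - main
+
+pool:
+  vmImage: 'windows-latest'   # assumption: the MSDO tools target Windows agents
+
+steps:
+- task: MicrosoftSecurityDevOps@1   # assumption: task name provided by the MSDO extension
+  displayName: 'Run Microsoft Security DevOps scans'
+```
+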
+The following new recommendations are now available for DevOps Security assessments:
+
+| Recommendation | Description | Severity |
+|--|--|--|
+| (Preview) [Code repositories should have code scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/c68a8c2a-6ed4-454b-9e37-4b7654f2165f/showSecurityCenterCommandBar~/false) | Defender for DevOps has found vulnerabilities in code repositories. To improve the security posture of the repositories, it is highly recommended to remediate these vulnerabilities. (No related policy) | Medium |
+| (Preview) [Code repositories should have secret scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/4e07c7d0-e06c-47d7-a4a9-8c7b748d1b27/showSecurityCenterCommandBar~/false) | Defender for DevOps has found a secret in code repositories.  This should be remediated immediately to prevent a security breach.  Secrets found in repositories can be leaked or discovered by adversaries, leading to compromise of an application or service. For Azure DevOps, the Microsoft Security DevOps CredScan tool only scans builds on which it has been configured to run. Therefore, results may not reflect the complete status of secrets in your repositories. (No related policy) | High |
+| (Preview) [Code repositories should have Dependabot scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/822425e3-827f-4f35-bc33-33749257f851/showSecurityCenterCommandBar~/false) | Defender for DevOps has found vulnerabilities in code repositories. To improve the security posture of the repositories, it is highly recommended to remediate these vulnerabilities. (No related policy) | Medium |
+| (Preview) [Code repositories should have infrastructure as code scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/2ebc815f-7bc7-4573-994d-e1cc46fb4a35/showSecurityCenterCommandBar~/false) | (Preview) Code repositories should have infrastructure as code scanning findings resolved | Medium |
+| (Preview) [GitHub repositories should have code scanning enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6672df26-ff2e-4282-83c3-e2f20571bd11/showSecurityCenterCommandBar~/false) | GitHub uses code scanning to analyze code in order to find security vulnerabilities and errors in code. Code scanning can be used to find, triage, and prioritize fixes for existing problems in your code. Code scanning can also prevent developers from introducing new problems. Scans can be scheduled for specific days and times, or scans can be triggered when a specific event occurs in the repository, such as a push. If code scanning finds a potential vulnerability or error in code, GitHub displays an alert in the repository. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project. (No related policy) | Medium |
+| (Preview) [GitHub repositories should have secret scanning enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/1a600c61-6443-4ab4-bd28-7a6b6fb4691d/showSecurityCenterCommandBar~/false) | GitHub scans repositories for known types of secrets, to prevent fraudulent use of secrets that were accidentally committed to repositories. Secret scanning will scan the entire Git history on all branches present in the GitHub repository for any secrets. Examples of secrets are tokens and private keys that a service provider can issue for authentication. If a secret is checked into a repository, anyone who has read access to the repository can use the secret to access the external service with those privileges. Secrets should be stored in a dedicated, secure location outside the repository for the project. (No related policy) | High |
+| (Preview) [GitHub repositories should have Dependabot scanning enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/92643c1f-1a95-4b68-bbd2-5117f92d6e35/showSecurityCenterCommandBar~/false) | GitHub sends Dependabot alerts when it detects vulnerabilities in code dependencies that affect repositories. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project or other projects that use its code. Vulnerabilities vary in type, severity, and method of attack. When code depends on a package that has a security vulnerability, this vulnerable dependency can cause a range of problems. (No related policy) | Medium |
+
+Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
+
+### Regulatory Compliance Dashboard now supports manual control management and detailed information on Microsoft's compliance status
+
+The compliance dashboard in Defender for Cloud is a key tool for customers to help them understand and track their compliance status. Customers can do this by continuously monitoring environments in accordance with requirements from many different standards and regulations.
+
+Now, you can fully manage your compliance posture by manually attesting to operational and non-technical controls. You can now provide evidence of compliance for controls that are not automated. Together with the automated assessments, you can now generate a full report of compliance within a selected scope, addressing the entire set of controls for a given standard.
+
+In addition, with richer control information and in-depth details and evidence for Microsoft's compliance status, you now have all of the information required for audits at your fingertips.
+
+Some of the new benefits include:
+
+- **Manual customer actions** provide a mechanism for manually attesting compliance with non-automated controls. This includes the ability to link evidence, set a compliance date, and set an expiration date.
+
+- Richer control details for supported standards that showcase **Microsoft actions** and **manual customer actions** in addition to the already existing automated customer actions.
+
+- **Microsoft actions** provide transparency into Microsoft's compliance status, including audit assessment procedures, test results, and Microsoft responses to deviations.
+
+- **Compliance Offerings** provide a central location to check Azure, Dynamics 365, and Power Platform products and their respective regulatory compliance certifications.
+
+Learn more on how to [Improve your regulatory compliance](regulatory-compliance-dashboard.md) with Defender for Cloud.
+
+### Auto-provisioning has been renamed to Settings & monitoring and has an updated experience
+
+We have renamed the Auto-provisioning page to **Settings & monitoring**.
+
+Auto-provisioning was meant to allow at-scale enablement of prerequisites, which are needed by Defender for Cloud's advanced features and capabilities. To better support our expanded capabilities, we are launching a new experience with the following changes:
+
+**The Defender for Cloud's plans page now includes**:
+- When you enable Defender plans, a Defender plan that requires monitoring components automatically turns on the required components with default settings. These settings can be edited by the user at any time.
+- You can access the monitoring component settings for each Defender plan from the Defender plan page.
+- The Defender plans page clearly indicates whether all the monitoring components are in place for each Defender plan, or if your monitoring coverage is incomplete.
+
+**The Settings & monitoring page**:
+- Each monitoring component indicates the Defender plans that it is related to.
+
+Learn more about [managing your monitoring settings](monitoring-components.md).
+
+### Defender Cloud Security Posture Management (CSPM)
+
+One of Microsoft Defender for Cloud's main pillars for cloud security is Cloud Security Posture Management (CSPM). CSPM provides you with hardening guidance that helps you efficiently and effectively improve your security. CSPM also gives you visibility into your current security situation.
+
+We are announcing the addition of the new Defender Cloud Security Posture Management (CSPM) plan for Defender for Cloud. Defender Cloud Security Posture Management (CSPM) enhances the security capabilities of Defender for Cloud and includes the following new and expanded features:
+
+- Continuous assessment of the security configuration of your cloud resources
+- Security recommendations to fix misconfigurations and weaknesses
+- Secure score
+- Governance
+- Regulatory compliance
+- Cloud security graph
+- Attack Path Analysis
+- Agentless scanning for machines
+
+You can learn more about the [Defender Cloud Security Posture Management (CSPM) plan](concept-cloud-security-posture-management.md).
+
+### MITRE ATT&CK framework mapping is now available also for AWS and GCP security recommendations
+
+For security analysts, it's essential to identify the potential risks associated with security recommendations and understand the attack vectors, so that they can efficiently prioritize their tasks.
+
+Defender for Cloud makes prioritization easier by mapping the Azure, AWS and GCP security recommendations against the MITRE ATT&CK framework. The MITRE ATT&CK framework is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations, allowing customers to strengthen the secure configuration of their environments.
+
+The MITRE ATT&CK framework has been integrated in three ways:
+
+- Recommendations map to MITRE ATT&CK tactics and techniques.
+- Query MITRE ATT&CK tactics and techniques on recommendations using the Azure Resource Graph.
++
+### Defender for Containers now supports vulnerability assessment for Elastic Container Registry (Preview)
+
+Microsoft Defender for Containers now provides agentless vulnerability assessment scanning for Elastic Container Registry (ECR) in Amazon AWS. This expands coverage for multicloud environments, building on the release earlier this year of advanced threat protection and Kubernetes environment hardening for AWS and Google GCP. The agentless model creates AWS resources in your accounts to scan your images without extracting images out of your AWS accounts and with no footprint on your workload.
+
+Agentless vulnerability assessment scanning for images in ECR repositories helps reduce the attack surface of your containerized estate by continuously scanning images to identify and manage container vulnerabilities. With this new release, Defender for Cloud scans container images after they're pushed to the repository and continually reassesses the ECR container images in the registry. The findings are available in Microsoft Defender for Cloud as recommendations, and you can use Defender for Cloud's built-in automated workflows to take action on the findings, such as opening a ticket for fixing a high-severity vulnerability in an image.
+
+Learn more about [vulnerability assessment for Amazon ECR images](defender-for-containers-va-ecr.md).
+ ## September 2022 Updates in September include:
Updates in September include:
- [Defender for Servers supports File Integrity Monitoring with Azure Monitor Agent](#defender-for-servers-supports-file-integrity-monitoring-with-azure-monitor-agent) - [Legacy Assessments APIs deprecation](#legacy-assessments-apis-deprecation) - [Extra recommendations added to identity](#extra-recommendations-added-to-identity)
+- [Removed security alerts for machines reporting to cross-tenant Log Analytics workspaces](#removed-security-alerts-for-machines-reporting-to-cross-tenant-log-analytics-workspaces)
### Suppress alerts based on Container and Kubernetes entities
-You can now suppress alerts based on these Kubernetes entities so you can use the container environment details to align your alerts your organization's policy and stop receiving unwanted alerts:
--- Container Image-- Container Registry - Kubernetes Namespace - Kubernetes Pod-- Kubernetes Service - Kubernetes Secret - Kubernetes ServiceAccount-- Kubernetes Deployment - Kubernetes ReplicaSet - Kubernetes StatefulSet - Kubernetes DaemonSet
The new release contains the following capabilities:
The recommendations although in preview, will appear next to the recommendations that are currently in GA.
+### Removed security alerts for machines reporting to cross-tenant Log Analytics workspaces
+
+In the past, Defender for Cloud let you choose the workspace that your Log Analytics agents report to. When a machine belonged to one tenant ("Tenant A") but its Log Analytics agent reported to a workspace in a different tenant ("Tenant B"), security alerts about the machine were reported to the first tenant ("Tenant A").
+
+With this change, alerts on machines connected to a Log Analytics workspace in a different tenant no longer appear in Defender for Cloud.
+
+If you want to continue receiving the alerts in Defender for Cloud, connect the Log Analytics agent of the relevant machines to the workspace in the same tenant as the machine.
+
+Learn more about [security alerts](alerts-overview.md).
+ ## August 2022 Updates in August include:
Updates in August include:
- [Vulnerabilities for running images are now visible with Defender for Containers on your Windows containers](#vulnerabilities-for-running-images-are-now-visible-with-defender-for-containers-on-your-windows-containers) - [Azure Monitor Agent integration now in preview](#azure-monitor-agent-integration-now-in-preview) - [Deprecated VM alerts regarding suspicious activity related to a Kubernetes cluster](#deprecated-vm-alerts-regarding-suspicious-activity-related-to-a-kubernetes-cluster)-- [Container vulnerabilities now include detailed package information](#container-vulnerabilities-now-include-detailed-package-information)- ### Vulnerabilities for running images are now visible with Defender for Containers on your Windows containers Defender for Containers now shows vulnerabilities for running Windows containers. When vulnerabilities are detected, Defender for Cloud generates the following security recommendation listing the detected issues: [Running container images should have vulnerability findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false).
-Learn more about [viewing vulnerabilities for running images](defender-for-containers-introduction.md#view-vulnerabilities-for-running-images).
+Learn more about [viewing vulnerabilities for running images](defender-for-containers-introduction.md#view-vulnerabilities-for-running-images-in-azure-container-registry-acr).
### Azure Monitor Agent integration now in preview Defender for Cloud now includes preview support for the [Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA). AMA is intended to replace the legacy Log Analytics agent (also referred to as the Microsoft Monitoring Agent (MMA)), which is on a path to deprecation. AMA [provides many benefits](../azure-monitor/agents/azure-monitor-agent-migration.md#benefits) over legacy agents.
-
-In Defender for Cloud, when you [enable auto provisioning for AMA](auto-deploy-azure-monitoring-agent.md), the agent is deployed on **existing and new** VMs and Azure Arc-enabled machines that are detected in your subscriptions. If Defenders for Cloud plans are enabled, AMA collects configuration information and event logs from Azure VMs and Azure Arc machines. The AMA integration is in preview, so we recommend using it in test environments, rather than in production environments.
+In Defender for Cloud, when you [enable auto provisioning for AMA](auto-deploy-azure-monitoring-agent.md), the agent is deployed on **existing and new** VMs and Azure Arc-enabled machines that are detected in your subscriptions. If Defender for Cloud plans are enabled, AMA collects configuration information and event logs from Azure VMs and Azure Arc machines. The AMA integration is in preview, so we recommend using it in test environments, rather than in production environments.
### Deprecated VM alerts regarding suspicious activity related to a Kubernetes cluster
Learn how to protect and connect your [AWS environment](quickstart-onboard-aws.m
Today's increasing threats to organizations stretch the limits of security personnel to protect their expanding workloads. Security teams are challenged to implement the protections defined in their security policies.
-Now with the governance experience, security teams can assign remediation of security recommendations to the resource owners and require a remediation schedule. They can have full transparency into the progress of the remediation and get notified when tasks are overdue.
-
-This feature is free while it is in the preview phase.
+Now with the governance experience in preview, security teams can assign remediation of security recommendations to the resource owners and require a remediation schedule. They can have full transparency into the progress of the remediation and get notified when tasks are overdue.
Learn more about the governance experience in [Driving your organization to remediate security issues with recommendation governance](governance-rules.md).
defender-for-cloud Security Center Planning And Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/security-center-planning-and-operations-guide.md
Title: Defender for Cloud Planning and Operations Guide description: This document helps you to plan before adopting Defender for Cloud and considerations regarding daily operations. + Last updated 12/14/2021 # Planning and operations guide
When planning access control using Azure RBAC for Defender for Cloud, be sure to
A security policy defines the desired configuration of your workloads and helps ensure compliance with company or regulatory security requirements. In Defender for Cloud, you can define policies for your Azure subscriptions, which can be tailored to the type of workload or the sensitivity of data. Defender for Cloud policies contain the following components:-- [Data collection](enable-data-collection.md): agent provisioning and data collection settings.
+- [Data collection](monitoring-components.md): agent provisioning and data collection settings.
- [Security policy](tutorial-security-policy.md): an [Azure Policy](../governance/policy/overview.md) that determines which controls are monitored and recommended by Defender for Cloud, or use Azure Policy to create new definitions, define additional policies, and assign policies across management groups. - [Email notifications](configure-email-notifications.md): security contacts and notification settings. - [Pricing tier](enhanced-security-features-overview.md): with or without Microsoft Defender for Cloud's enhanced security features, which determine which Defender for Cloud features are available for resources in scope (can be specified for subscriptions and workspaces using the API).
Defender for Cloud automatically creates a default security policy for each of y
Before configuring security policies, review each of the [security recommendations](review-security-recommendations.md), and determine whether these policies are appropriate for your various subscriptions and resource groups. It is also important to understand what action should be taken to address Security Recommendations and who in your organization will be responsible for monitoring for new recommendations and taking the needed steps. ## Data collection and storage
-Defender for Cloud uses the Log Analytics agent – this is the same agent used by the Azure Monitor service – to collect security data from your virtual machines. [Data collected](enable-data-collection.md) from this agent will be stored in your Log Analytics workspace(s).
+Defender for Cloud uses the Log Analytics agent – this is the same agent used by the Azure Monitor service – to collect security data from your virtual machines. [Data collected](monitoring-components.md) from this agent will be stored in your Log Analytics workspace(s).
### Agent
defender-for-cloud Security Policy Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/security-policy-concept.md
Title: Understanding security policies, initiatives, and recommendations in Microsoft Defender for Cloud description: Learn about security policies, initiatives, and recommendations in Microsoft Defender for Cloud. + Last updated 06/06/2022
A security initiative defines the desired configuration of your workloads and he
Like security policies, Defender for Cloud initiatives are also created in Azure Policy. You can use [Azure Policy](../governance/policy/overview.md) to manage your policies, build initiatives, and assign initiatives to multiple subscriptions or for entire management groups.
-The default initiative automatically assigned to every subscription in Microsoft Defender for Cloud is Azure Security Benchmark. This benchmark is the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security. Learn more about [Azure Security Benchmark](/security/benchmark/azure/introduction).
+The default initiative automatically assigned to every subscription in Microsoft Defender for Cloud is Microsoft Cloud Security Benchmark. This benchmark is the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security. Learn more about [Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction).
Defender for Cloud offers the following options for working with security initiatives and policies: -- **View and edit the built-in default initiative** - When you enable Defender for Cloud, the initiative named 'Azure Security Benchmark' is automatically assigned to all Defender for Cloud registered subscriptions. To customize this initiative, you can enable or disable individual policies within it by editing a policy's parameters. See the list of [built-in security policies](./policy-reference.md) to understand the options available out-of-the-box.
+- **View and edit the built-in default initiative** - When you enable Defender for Cloud, the initiative named 'Microsoft Cloud Security Benchmark' is automatically assigned to all Defender for Cloud registered subscriptions. To customize this initiative, you can enable or disable individual policies within it by editing a policy's parameters. See the list of [built-in security policies](./policy-reference.md) to understand the options available out-of-the-box.
- **Add your own custom initiatives** - If you want to customize the security initiatives applied to your subscription, you can do so within Defender for Cloud. You'll then receive recommendations if your machines don't follow the policies you create. For instructions on building and assigning custom policies, see [Using custom security initiatives and policies](custom-security-policies.md).
Recommendations are actions for you to take to secure and harden your resources.
In practice, it works like this:
-1. Azure Security Benchmark is an ***initiative*** that contains requirements.
+1. Microsoft Cloud Security Benchmark is an ***initiative*** that contains requirements.
For example, Azure Storage accounts must restrict network access to reduce their attack surface.
The recommendation details shown are:
## Viewing the relationship between a recommendation and a policy
+As mentioned above, Defender for Cloud's built-in recommendations are based on the Microsoft Cloud Security Benchmark. Almost every recommendation has an underlying policy that is derived from a requirement in the benchmark.
+As mentioned above, Defender for Cloud's built in recommendations are based on the Microsoft Cloud Security Benchmark. Almost every recommendation has an underlying policy that is derived from a requirement in the benchmark.
When you're reviewing the details of a recommendation, it's often helpful to be able to see the underlying policy. For every recommendation supported by a policy, use the **View policy definition** link from the recommendation details page to go directly to the Azure Policy entry for the relevant policy:
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability
description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment. Last updated 08/10/2022-+ # Defender for Containers feature availability
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --| | Compliance | Docker CIS | EC2 | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
-| Vulnerability Assessment | Registry scan | - | - | - | - | - |
+| Vulnerability Assessment | Registry scan | ECR | Preview | - | Agentless | Defender for Containers |
| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - | | Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | EKS | Preview | - | Azure Policy extension | Defender for Containers |
Outbound proxy without authentication and outbound proxy with basic authenticati
## Next steps -- Learn how [Defender for Cloud collects data using the Log Analytics Agent](enable-data-collection.md).
+- Learn how [Defender for Cloud collects data using the Log Analytics Agent](monitoring-components.md).
- Learn how [Defender for Cloud manages and safeguards data](data-security.md). - Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
Title: Microsoft Defender for Cloud's servers features according to OS, machine
description: Learn about the availability of Microsoft Defender for Cloud's servers features according to OS, machine type, and cloud deployment. Last updated 03/08/2022-+ # Feature coverage for machines
For information about when recommendations are generated for each of these solut
| - [Recommendation exemption rules](./exempt-resource.md) | Public Preview | Not Available | Not Available | | - [Alert suppression rules](./alerts-suppression-rules.md) | GA | GA | GA | | - [Email notifications for security alerts](./configure-email-notifications.md) | GA | GA | GA |
-| - [Auto provisioning for agents and extensions](./enable-data-collection.md) | GA | GA | GA |
+| - [Deployment of agents and extensions](monitoring-components.md) | GA | GA | GA |
| - [Asset inventory](./asset-inventory.md) | GA | GA | GA | | - [Azure Monitor Workbooks reports in Microsoft Defender for Cloud's workbooks gallery](./custom-dashboards-azure-workbooks.md) | GA | GA | GA | | - [Integration with Microsoft Defender for Cloud Apps](./other-threat-protections.md#display-recommendations-in-microsoft-defender-for-cloud-apps) | GA | Not Available | Not Available |
For information about when recommendations are generated for each of these solut
## Next steps -- Learn how [Defender for Cloud collects data using the Log Analytics Agent](enable-data-collection.md).
+- Learn how [Defender for Cloud collects data using the Log Analytics Agent](monitoring-components.md#log-analytics-agent).
- Learn how [Defender for Cloud manages and safeguards data](data-security.md). - Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
defender-for-cloud Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md
description: This guide is for IT professionals, security analysts, and cloud ad
+ Last updated 07/17/2022 # Microsoft Defender for Cloud Troubleshooting Guide
GCP connector issues:
## Troubleshooting the Log Analytics agent
-Defender for Cloud uses the Log Analytics agent to [collect and store data](./enable-data-collection.md). The information in this article represents Defender for Cloud functionality after transition to the Log Analytics agent.
+Defender for Cloud uses the Log Analytics agent to [collect and store data](./monitoring-components.md#log-analytics-agent). The information in this article represents Defender for Cloud functionality after transition to the Log Analytics agent.
Alert types:
defender-for-cloud Tutorial Enable Pull Request Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-pull-request-annotations.md
+
+ Title: Tutorial Enable pull request annotations in GitHub or in Azure DevOps
+description: Add pull request annotations in GitHub or in Azure DevOps. By adding pull request annotations, your SecOps and developer teams can stay on the same page when it comes to mitigating issues.
++ Last updated : 09/20/2022++
+# Tutorial: Enable pull request annotations in GitHub and Azure DevOps
+
+With Microsoft Defender for Cloud, you can configure pull request annotations in Azure DevOps. Pull request annotations are enabled in Microsoft Defender for Cloud by security operators and are sent to the developers who can then take action directly in their pull requests. This allows both security operators and developers to see the same security issue information in the systems they're accustomed to working in. Security operators see unresolved findings in Defender for Cloud and developers see them in their source code management systems. These issues can then be acted upon by developers when they submit their pull requests. This helps prevent and fix potential security vulnerabilities and misconfigurations before they enter the production stage.
+
+You can get pull request annotations in GitHub if you're a customer of GitHub Advanced Security.
+
+> [!NOTE]
+> During the Defender for DevOps preview period, GitHub Advanced Security for Azure DevOps (GHAS for AzDO) is also providing a free trial of pull request annotations.
+
+In this tutorial you'll learn how to:
+
+> [!div class="checklist"]
+> * [Enable pull request annotations in GitHub](#enable-pull-request-annotations-in-github).
+> * [Enable pull request annotations in Azure DevOps](#enable-pull-request-annotations-in-azure-devops).
+
+## Prerequisites
+
+Before you can follow the steps in this tutorial, you must:
+
+**For GitHub**:
+
+ - Have an Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/) before you begin
+ - [Enable Defender for Cloud](get-started.md)
+ - Have [enhanced security features](enhanced-security-features-overview.md) enabled on your Azure subscriptions
+ - [Connect your GitHub repositories to Microsoft Defender for Cloud](quickstart-onboard-github.md)
+ - [Configure the Microsoft Security DevOps GitHub action](github-action.md)
+ - Be a [GitHub Advanced Security customer](https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security)
+
+**For Azure DevOps**:
+
+ - Have an Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/) before you begin
+ - [Enable Defender for Cloud](get-started.md)
+ - Have [enhanced security features](enhanced-security-features-overview.md) enabled on your Azure subscriptions
+ - [Connect your Azure DevOps repositories to Microsoft Defender for Cloud](quickstart-onboard-devops.md)
+ - [Configure the Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md)
+ - [Setup secret scanning in Azure DevOps](detect-credential-leaks.md#setup-secret-scanning-in-azure-devops)
+
+## Enable pull request annotations in GitHub
+
+By enabling pull request annotations in GitHub, your developers gain the ability to see their security issues when they submit their pull requests directly to the main branch.
+
+**To enable pull request annotations in GitHub**:
+
+1. Sign in to [GitHub](https://github.com/).
+
+1. Select the relevant repository.
+
+1. Select **.github/workflows**.
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/workflow-folder.png" alt-text="Screenshot that shows where to navigate to, to select the GitHub workflow folder.":::
+
+1. Select **msdevopssec.yml**.
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/devopssec.png" alt-text="Screenshot that shows you where on the screen to select the msdevopssec.yml file.":::
+
+1. Select **edit**.
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/edit-button.png" alt-text="Screenshot that shows you what the edit button looks like.":::
+
+1. Locate and update the trigger section to include:
+
+ ```yml
+ # Triggers the workflow on push or pull request events but only for the main branch
+    push:
+      branches: [ main ]
+    pull_request:
+      branches: [ main ]
+ ```
+
+    By adding these lines to your YAML file, you'll configure the action to run when either a push or pull request event occurs on the designated repository.
+
+ You can also view a [sample repository](https://github.com/microsoft/security-devops-action/tree/main/samples).
+
+    (Optional) You can select which branches to run it on by listing the branch(es) under the trigger section. If you want to include all branches, remove the lines with the branch list. A sketch of what the complete workflow file might look like follows these steps.
+
+1. Select **Start commit**.
+
+1. Select **Commit changes**.
+
+1. Select **Files changed**.
+
+You'll now be able to see all the issues that were discovered by the scanner.
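+
+To show how the trigger section from the previous steps fits into a complete workflow file, here's a minimal sketch of one possible shape of *msdevopssec.yml*. The job layout, runner image, and action version tag are assumptions for illustration; treat the [sample repository](https://github.com/microsoft/security-devops-action/tree/main/samples) as the authoritative reference.
+
+```yml
+# Illustrative .github/workflows/msdevopssec.yml sketch (assumptions noted inline).
+name: MSDO
+
+on:
+  # Triggers the workflow on push or pull request events but only for the main branch
+  push:
+    branches: [ main ]
+  pull_request:
+    branches: [ main ]
+
+jobs:
+  msdo:
+    runs-on: windows-latest   # assumption: the MSDO tools target Windows runners
+    steps:
+      - uses: actions/checkout@v3
+      - name: Run Microsoft Security DevOps
+        uses: microsoft/security-devops-action@latest   # assumption: version tag
+        id: msdo
+```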
+
+### Mitigate GitHub issues found by the scanner
+
+Once you've configured the scanner, you'll be able to view all issues that were detected.
+
+**To mitigate GitHub issues found by the scanner**:
+
+1. Navigate through the page and locate an affected file with an annotation.
+
+1. Select **Dismiss alert**.
+
+1. Select a reason to dismiss:
+
+ - **Won't fix** - The alert is noted but won't be fixed.
+ - **False positive** - The alert isn't valid.
+ - **Used in tests** - The alert isn't in the production code.
+
+## Enable pull request annotations in Azure DevOps
+
+By enabling pull request annotations in Azure DevOps, your developers gain the ability to see their security issues when they submit their pull requests directly to the main branch.
+
+### Enable Build Validation policy for the CI Build
+
+Before you can enable pull request annotations, your main branch must have a Build Validation policy enabled for the CI Build.
+
+**To enable Build Validation policy for the CI Build**:
+
+1. Sign in to your Azure DevOps project.
+
+1. Navigate to **Project settings** > **Repositories**.
+
+1. Select the repository to enable pull requests on.
+
+1. Select **Policies**.
+
+1. Navigate to **Branch Policies** > **Build Validation**.
+
+1. Toggle the CI Build to **On**.
+
+### Enable pull request annotations
+
+**To enable pull request annotations in Azure DevOps**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Defender for Cloud** > **DevOps Security**.
+
+1. Select all relevant repositories to enable pull request annotations on.
+
+1. Select **Configure**.
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/select-configure.png" alt-text="Screenshot that shows you where to select configure, on the screen.":::
+
+1. Toggle Pull request annotations to **On**.
+
+1. Select a category from the drop-down menu.
+
+ > [!NOTE]
+    > Only secret scan results are currently supported.
+
+1. Select a severity level from the drop-down menu.
+
+1. Select **Save**.
+
+All annotations will now be displayed on the relevant lines of code, based on your configuration.
+
+### Mitigate Azure DevOps issues found by the scanner
+
+Once you've configured the scanner, you'll be able to view all issues that were detected.
+
+**To mitigate Azure DevOps issues found by the scanner**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Pull requests**.
+
+1. Scroll through the Overview page and locate an affected line with an annotation.
+
+1. Select **Active**.
+
+1. Select action to take:
+
+ - **Active** - The default status for new annotations.
+ - **Pending** - The finding is being worked on.
+ - **Resolved** - The finding has been addressed.
+ - **Won't fix** - The finding is noted but won't be fixed.
+ - **Closed** - The discussion in this annotation is closed.
+
+## Learn more
+
+In this tutorial, you learned how to enable pull request annotations in GitHub and Azure DevOps.
+
+Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
+
+Learn how to [connect your GitHub](quickstart-onboard-github.md) to Defender for Cloud.
+
+Learn how to [connect your Azure DevOps](quickstart-onboard-devops.md) to Defender for Cloud.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> Now learn more about [Defender for DevOps](defender-for-devops-introduction.md).
defender-for-cloud Tutorial Security Incident https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-incident.md
Title: Alert response tutorial - Microsoft Defender for Cloud
description: In this tutorial, you'll learn how to triage security alerts and determine the root cause & scope of an alert. ms.assetid: 181e3695-cbb8-4b4e-96e9-c4396754862f + Last updated 11/09/2021
If you don't plan to continue, or you want to disable either of these features:
1. From Defender for Cloud's menu, open **Environment settings**. 1. Select the relevant subscription.
-1. Select **Auto provisioning**.
+1. In the Monitoring coverage column of the Defender plan, select **Settings**.
1. Disable the relevant extensions. >[!NOTE]
- > Disabling automatic provisioning does not remove the Log Analytics agent from Azure VMs that already have the agent. Disabling automatic provisioning limits security monitoring for your resources.
+ > Disabling extensions does not remove the Log Analytics agent from Azure VMs that already have the agent, but it does limit security monitoring for your resources.
## Next steps In this tutorial, you learned about Defender for Cloud features to be used when responding to a security alert. For related material see:
defender-for-cloud Tutorial Security Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-policy.md
Title: Working with security policies description: Learn how to work with security policies in Microsoft Defender for Cloud. + Last updated 01/25/2022
For more information about recommendations, see [Managing security recommendatio
## Enable a security policy
-Some policies in your initiatives might be disabled by default. For example, in the Azure Security Benchmark initiative, some policies are provided for you to enable only if they meet a specific regulatory or compliance requirement for your organization. Such policies include recommendations to encrypt data at rest with customer-managed keys, such as "Container registries should be encrypted with a customer-managed key (CMK)".
+Some policies in your initiatives might be disabled by default. For example, in the Microsoft Cloud Security Benchmark initiative, some policies are provided for you to enable only if they meet a specific regulatory or compliance requirement for your organization. Such policies include recommendations to encrypt data at rest with customer-managed keys, such as "Container registries should be encrypted with a customer-managed key (CMK)".
To enable a disabled policy and ensure it's assessed for your resources:
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--| | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | September 2022 |
-| [Removing security alerts for machines reporting to cross tenant Log Analytics workspaces](#removing-security-alerts-for-machines-reporting-to-cross-tenant-log-analytics-workspaces) | September 2022 |
### Multiple changes to identity recommendations
-**Estimated date for change:** September 2022
+**Estimated date for change:** October 2022
-Defender for Cloud includes multiple recommendations for improving the management of users and accounts. In June, we'll be making the changes outlined below.
+Defender for Cloud includes multiple recommendations for improving the management of users and accounts. In October, we'll be making the changes outlined below.
#### New recommendations in preview
The new release will bring the following capabilities:
|Blocked accounts with owner permissions on Azure resources should be removed|050ac097-3dda-4d24-ab6d-82568e7a50cf| |Blocked accounts with read and write permissions on Azure resources should be removed| 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 |
-### Removing security alerts for machines reporting to cross-tenant Log Analytics workspaces
-
-**Estimated date for change:** September 2022
-
-Defender for Cloud lets you choose the workspace that your Log Analytics agents report to. When a machine belongs to one tenant ("Tenant A") but its Log Analytics agent reports to a workspace in a different tenant ("Tenant B"), security alerts about the machine are reported to the first tenant ("Tenant A").
-
-With this change, alerts on machines connected to Log Analytics workspace in a different tenant will no longer appear in Defender for Cloud.
-
-If you want to continue receiving the alerts in Defender for Cloud, connect the Log Analytics agent of the relevant machines to the workspace in the same tenant as the machine.
- ## Next steps
-For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md)
+For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md).
defender-for-cloud Update Regulatory Compliance Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/update-regulatory-compliance-packages.md
Title: The regulatory compliance dashboard in Microsoft Defender for Cloud description: Learn how to add and remove regulatory standards from the regulatory compliance dashboard in Defender for Cloud Previously updated : 09/04/2022+ Last updated : 09/18/2022 # Customize the set of standards in your regulatory compliance dashboard
Microsoft tracks the regulatory standards themselves and automatically improves
## What regulatory compliance standards are available in Defender for Cloud?
-By default, every Azure subscription has the **Azure Security Benchmark** assigned. This is the Microsoft-authored, Azure-specific guidelines for security and compliance best practices based on common compliance frameworks. [Learn more about Azure Security Benchmark](/security/benchmark/azure/introduction).
+By default, every Azure subscription has the **Microsoft Cloud Security Benchmark** assigned. These are the Microsoft-authored, cloud-specific guidelines for security and compliance best practices based on common compliance frameworks. [Learn more about Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction).
Available regulatory standards:
-- PCI-DSS v3.2.1:2018
+- PCI-DSS v3.2.1
- SOC TSP
-- NIST SP 800-53 R4
-- NIST SP 800 171 R2
-- UK OFFICIAL and UK NHS
-- Canada Federal PBMM
-- Azure CIS 1.1.0
-- HIPAA/HITRUST
-- SWIFT CSP CSCF v2020
- ISO 27001:2013
-- New Zealand ISM Restricted
-- CMMC Level 3
+- Azure CIS 1.1.0
- Azure CIS 1.3.0
+- Azure CIS 1.4.0
+- NIST SP 800-53 R4
- NIST SP 800-53 R5
+- NIST SP 800 171 R2
+- CMMC Level 3
- FedRAMP H
- FedRAMP M
+- HIPAA/HITRUST
+- SWIFT CSP CSCF v2020
+- UK OFFICIAL and UK NHS
+- Canada Federal PBMM
+- New Zealand ISM Restricted
+- New Zealand ISM Restricted v3.5
+- Australian Government ISM Protected
+- RMIT Malaysia
-By default, every AWS connector subscription has the **AWS Foundational Security Best Practices** assigned. This is the AWS-specific guidelines for security and compliance best practices based on common compliance frameworks.
+By default, every AWS connector subscription has the **Microsoft Cloud Security Benchmark (MCSB)** assigned along with the **AWS Foundational Security Best Practices**. MCSB comprises the Microsoft-recommended cloud security best practices based on common compliance frameworks, with detailed guidance for applying them to an AWS environment. AWS Foundational Security Best Practices are the AWS-recommended security guidelines.
Available AWS regulatory standards:
- AWS CIS 1.2.0
To add standards to your dashboard:
> [!NOTE] > It may take a few hours for a newly added standard to appear in the compliance dashboard.
- :::image type="content" source="./media/regulatory-compliance-dashboard/compliance-dashboard.png" alt-text="Regulatory compliance dashboard." lightbox="./media/regulatory-compliance-dashboard/compliance-dashboard.png":::
- ## Remove a standard from your dashboard You can continue to customize the regulatory compliance dashboard, to focus only on the standards that are applicable to you, by removing any of the supplied regulatory standards that aren't relevant to your organization.
In this article, you learned how to **add compliance standards** to monitor your
For related material, see the following pages:
-- [Azure Security Benchmark](/security/benchmark/azure/introduction)
+- [Microsoft Cloud Security Benchmark](/security/benchmark/azure/introduction)
- [Defender for Cloud regulatory compliance dashboard](regulatory-compliance-dashboard.md) - Learn how to track and export your regulatory compliance data with Defender for Cloud and external tools
- [Working with security policies](tutorial-security-policy.md)
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
Title: Workflow automation in Microsoft Defender for Cloud description: Learn how to create and automate workflows in Microsoft Defender for Cloud Previously updated : 07/31/2022+ Last updated : 09/21/2022 # Automate responses to Microsoft Defender for Cloud triggers
Unfortunately, this change came with an unavoidable breaking change. The breakin
| Original name | New name|
|--|--|
- |Deploy Workflow Automation for Microsoft Defender for Cloud alerts | When an Microsoft Defender for Clou dAlert is created or triggered <sup>[1](#footnote1)</sup>|
- | Deploy Workflow Automation for Microsoft Defender for Cloud recommendations | When an Microsoft Defender for Cloud Recommendation is created or triggered |
+ |Deploy Workflow Automation for Microsoft Defender for Cloud alerts | When a Microsoft Defender for Cloud dAlert is created or triggered <sup>[1](#footnote1)</sup>|
+ | Deploy Workflow Automation for Microsoft Defender for Cloud recommendations | When a Microsoft Defender for Cloud Recommendation is created or triggered |
| Deploy Workflow Automation for Microsoft Defender for Cloud regulatory compliance | When a Microsoft Defender for Cloud Regulatory Compliance Assessment is created or triggered |

<sup><a name="footnote1"></a>1</sup> The typo `Clou dAlert` is intentional.
defender-for-cloud Working With Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/working-with-log-analytics-agent.md
+
+ Title: Collect data from your workloads with the Log Analytics agent
+description: Learn about how the Log Analytics agent collects data from your workloads to let you protect your workloads with Microsoft Defender for Cloud.
++++ Last updated : 09/12/2022++
+# Collect data from your workloads with the Log Analytics agent
+
+## Configure the Log Analytics agent and workspaces
+
+When the Log Analytics agent is on, Defender for Cloud deploys the agent on all supported Azure VMs and on any new supported VMs that are created. For the list of supported platforms, see [Supported platforms in Microsoft Defender for Cloud](security-center-os-coverage.md).
+
+To configure integration with the Log Analytics agent:
+
+1. From Defender for Cloud's menu, open **Environment settings**.
+1. Select the relevant subscription.
+1. In the Monitoring Coverage column of the Defender plans, select **Settings**.
+1. From the configuration options pane, define the workspace to use.
+
+ :::image type="content" source="./media/enable-data-collection/log-analytics-agent-deploy-options.png" alt-text="Screenshot of configuration options for Log Analytics agents for VMs." lightbox="./media/enable-data-collection/log-analytics-agent-deploy-options.png":::
+
+ - **Connect Azure VMs to the default workspaces created by Defender for Cloud** - Defender for Cloud creates a new resource group and default workspace in the same geolocation, and connects the agent to that workspace. If a subscription contains VMs from multiple geolocations, Defender for Cloud creates multiple workspaces to ensure compliance with data privacy requirements.
+
+ The naming convention for the workspace and resource group is:
+ - Workspace: DefaultWorkspace-[subscription-ID]-[geo]
+ - Resource Group: DefaultResourceGroup-[geo]
+
+ A Defender for Cloud solution is automatically enabled on the workspace per the pricing tier set for the subscription.
+
+ > [!TIP]
+ > For questions regarding default workspaces, see:
+ >
+ > - [Am I billed for Azure Monitor logs on the workspaces created by Defender for Cloud?](./faq-data-collection-agents.yml#am-i-billed-for-azure-monitor-logs-on-the-workspaces-created-by-defender-for-cloud-)
+ > - [Where is the default Log Analytics workspace created?](./faq-data-collection-agents.yml#where-is-the-default-log-analytics-workspace-created-)
+ > - [Can I delete the default workspaces created by Defender for Cloud?](./faq-data-collection-agents.yml#can-i-delete-the-default-workspaces-created-by-defender-for-cloud-)
+
+ - **Connect Azure VMs to a different workspace** - From the dropdown list, select the workspace to store collected data. The dropdown list includes all workspaces across all of your subscriptions. You can use this option to collect data from virtual machines running in different subscriptions and store it all in your selected workspace.
+
+ If you already have an existing Log Analytics workspace, you might want to use the same workspace (requires read and write permissions on the workspace). This option is useful if you're using a centralized workspace in your organization and want to use it for security data collection. Learn more in [Manage access to log data and workspaces in Azure Monitor](../azure-monitor/logs/manage-access.md).
+
+ If your selected workspace already has a "Security" or "SecurityCenterFree" solution enabled, the pricing will be set automatically. If not, install a Defender for Cloud solution on the workspace:
+
+ 1. From Defender for Cloud's menu, open **Environment settings**.
+ 1. Select the workspace to which you'll be connecting the agents.
+ 1. Set Security posture management to **on** or select **Enable all** to turn all Microsoft Defender plans on.
+
+1. From the **Windows security events** configuration, select the amount of raw event data to store:
+ - **None** – Disable security event storage. (Default)
+ - **Minimal** – A small set of events for when you want to minimize the event volume.
+ - **Common** – A set of events that satisfies most customers and provides a full audit trail.
+ - **All events** – For customers who want to make sure all events are stored.
+
+ > [!TIP]
+ > To set these options at the workspace level, see [Setting the security event option at the workspace level](#setting-the-security-event-option-at-the-workspace-level).
+ >
+ > For more information about these options, see [Windows security event options for the Log Analytics agent](#data-collection-tier).
+
+1. Select **Apply** in the configuration pane.
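
If you prefer scripting, the portal configuration above can be approximated with the Azure CLI's `az security` commands. This is a hedged sketch rather than a full equivalent: it covers only auto-provisioning of the agent and pointing the subscription at a specific workspace, and the workspace resource ID is a placeholder you'd replace with your own.

```azurecli
# Turn on automatic provisioning of the Log Analytics agent for the selected subscription.
az security auto-provisioning-setting update --name "default" --auto-provision "On"

# Point Defender for Cloud at a specific (non-default) Log Analytics workspace.
# The target-workspace value is a placeholder resource ID.
az security workspace-setting create \
  --name "default" \
  --target-workspace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```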
+
+<a name="data-collection-tier"></a>
+
+## Windows security event options for the Log Analytics agent
+
+When you select a data collection tier in Microsoft Defender for Cloud, the security events of the selected tier are stored in your Log Analytics workspace so that you can investigate, search, and audit the events in your workspace. The Log Analytics agent also collects and analyzes the security events required for Defender for Cloud's threat protection.
+
+### Requirements
+
+The enhanced security protections of Defender for Cloud are required for storing Windows security event data. Learn more about [the enhanced protection plans](defender-for-cloud-introduction.md).
+
+You may be charged for storing data in Log Analytics. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+
+### Information for Microsoft Sentinel users
+
+Security events collection within the context of a single workspace can be configured from either Microsoft Defender for Cloud or Microsoft Sentinel, but not both. If you want to add Microsoft Sentinel to a workspace that already gets alerts from Microsoft Defender for Cloud and to collect Security Events, you can either:
+
+- Leave the Security Events collection in Microsoft Defender for Cloud as is. You'll be able to query and analyze these events in both Microsoft Sentinel and Defender for Cloud. If you want to monitor the connector's connectivity status or change its configuration in Microsoft Sentinel, consider the second option.
+- Disable Security Events collection in Microsoft Defender for Cloud and then add the Security Events connector in Microsoft Sentinel. You'll be able to query and analyze events in both Microsoft Sentinel and Defender for Cloud, but you'll also be able to monitor the connector's connectivity status or change its configuration in - and only in - Microsoft Sentinel. To disable Security Events collection in Defender for Cloud, set **Windows security events** to **None** in the configuration of your Log Analytics agent.
+
+### What event types are stored for "Common" and "Minimal"?
+
+The **Common** and **Minimal** event sets were designed to address typical scenarios based on customer and industry standards for the unfiltered frequency of each event and their usage.
+
+- **Minimal** - This set is intended to cover only events that might indicate a successful breach and important events with low volume. Most of the data volume of this set is successful user logon events (event ID 4624), failed user logon events (event ID 4625), and process creation events (event ID 4688). Sign out events are important for auditing only and have relatively high volume, so they aren't included in this event set.
+- **Common** - This set is intended to provide a full user audit trail, including events with low volume. For example, this set contains both user logon events (event ID 4624) and user logoff events (event ID 4634). We include auditing actions like security group changes, key domain controller Kerberos operations, and other events that are recommended by industry organizations.
+
+Here's a complete breakdown of the Security and App Locker event IDs for each set:
+
+| Data tier | Collected event indicators |
+| --- | --- |
+| Minimal | 1102,4624,4625,4657,4663,4688,4700,4702,4719,4720,4722,4723,4724,4727,4728,4732,4735,4737,4739,4740,4754,4755, |
+| | 4756,4767,4799,4825,4946,4948,4956,5024,5033,8001,8002,8003,8004,8005,8006,8007,8222 |
+| Common | 1,299,300,324,340,403,404,410,411,412,413,431,500,501,1100,1102,1107,1108,4608,4610,4611,4614,4622, |
+| | 4624,4625,4634,4647,4648,4649,4657,4661,4662,4663,4665,4666,4667,4688,4670,4672,4673,4674,4675,4689,4697, |
+| | 4700,4702,4704,4705,4716,4717,4718,4719,4720,4722,4723,4724,4725,4726,4727,4728,4729,4733,4732,4735,4737, |
+| | 4738,4739,4740,4742,4744,4745,4746,4750,4751,4752,4754,4755,4756,4757,4760,4761,4762,4764,4767,4768,4771, |
+| | 4774,4778,4779,4781,4793,4797,4798,4799,4800,4801,4802,4803,4825,4826,4870,4886,4887,4888,4893,4898,4902, |
+| | 4904,4905,4907,4931,4932,4933,4946,4948,4956,4985,5024,5033,5059,5136,5137,5140,5145,5632,6144,6145,6272, |
+| | 6273,6278,6416,6423,6424,8001,8002,8003,8004,8005,8006,8007,8222,26401,30004 |
+
+> [!NOTE]
+> - If you're using Group Policy Object (GPO), we recommend that you enable the Process Creation Event 4688 audit policy and the *CommandLine* field inside event 4688. For more information about Process Creation Event 4688, see Defender for Cloud's [FAQ](./faq-data-collection-agents.yml#what-happens-when-data-collection-is-enabled-). For more information about these audit policies, see [Audit Policy Recommendations](/windows-server/identity/ad-ds/plan/security-best-practices/audit-policy-recommendations).
+> - To enable data collection for [Adaptive application controls](adaptive-application-controls.md), Defender for Cloud configures a local AppLocker policy in Audit mode to allow all applications. This causes AppLocker to generate events, which are then collected and used by Defender for Cloud. Note that this policy isn't configured on machines that already have a configured AppLocker policy.
+> - To collect Windows Filtering Platform [Event ID 5156](https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=5156), you need to enable [Audit Filtering Platform Connection](/windows/security/threat-protection/auditing/audit-filtering-platform-connection) (Auditpol /set /subcategory:"Filtering Platform Connection" /Success:Enable)
+>
+
+### Setting the security event option at the workspace level
+
+You can define the level of security event data to store at the workspace level.
+
+1. From Defender for Cloud's menu in the Azure portal, select **Environment settings**.
+1. Select the relevant workspace. The only data collection events for a workspace are the Windows security events described on this page.
+
+ :::image type="content" source="media/enable-data-collection/event-collection-workspace.png" alt-text="Screenshot of setting the security event data to store in a workspace.":::
+
+1. Select the amount of raw event data to store and select **Save**.
+
+<a name="manual-agent"></a>
+
+## Manual agent provisioning
+
+To manually install the Log Analytics agent:
+
+1. Turn off the Log Analytics agent in **Environment Settings** > Monitoring coverage > **Settings**.
+
+1. Optionally, create a workspace.
+
+1. Enable Microsoft Defender for Cloud on the workspace on which you're installing the Log Analytics agent:
+
+ 1. From Defender for Cloud's menu, open **Environment settings**.
+
+ 1. Set the workspace on which you're installing the agent. Make sure the workspace is in the same subscription you use in Defender for Cloud and that you have read/write permissions for the workspace.
+
+ 1. Select **Microsoft Defender for Cloud on**, and **Save**.
+
+ >[!NOTE]
+ >If the workspace already has a **Security** or **SecurityCenterFree** solution enabled, the pricing will be set automatically.
+
+1. To deploy agents on new VMs using a Resource Manager template, install the Log Analytics agent:
+
+ - [Install the Log Analytics agent for Windows](../virtual-machines/extensions/oms-windows.md)
+ - [Install the Log Analytics agent for Linux](../virtual-machines/extensions/oms-linux.md)
+
+1. To deploy agents on your existing VMs, follow the instructions in [Collect data about Azure Virtual Machines](../azure-monitor/vm/monitor-virtual-machine.md) (the section **Collect event and performance data** is optional).
+
+1. To use PowerShell to deploy the agents, use the instructions from the virtual machines documentation:
+
+ - [For Windows machines](../virtual-machines/extensions/oms-windows.md?toc=%2fazure%2fazure-monitor%2ftoc.json#powershell-deployment)
+ - [For Linux machines](../virtual-machines/extensions/oms-linux.md?toc=%2fazure%2fazure-monitor%2ftoc.json#azure-cli-deployment)
+
+> [!TIP]
+> For more information about onboarding, see [Automate onboarding of Microsoft Defender for Cloud using PowerShell](powershell-onboarding.md).
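
As a hedged alternative to the Resource Manager template and PowerShell options above, the Log Analytics agent extension can also be installed on an existing VM with the Azure CLI. All resource names below are placeholders; use the `MicrosoftMonitoringAgent` extension for Windows VMs and `OmsAgentForLinux` for Linux VMs.

```azurecli
# Optionally create a workspace and retrieve its keys (placeholder names).
az monitor log-analytics workspace create --resource-group "<resource-group>" --workspace-name "<workspace-name>"
az monitor log-analytics workspace get-shared-keys --resource-group "<resource-group>" --workspace-name "<workspace-name>"

# Install the Log Analytics agent extension on an existing Windows VM.
# <workspace-id> is the workspace's customer ID (GUID); <workspace-key> is a shared key from the command above.
az vm extension set \
  --resource-group "<resource-group>" \
  --vm-name "<vm-name>" \
  --publisher "Microsoft.EnterpriseCloud.Monitoring" \
  --name "MicrosoftMonitoringAgent" \
  --settings '{"workspaceId":"<workspace-id>"}' \
  --protected-settings '{"workspaceKey":"<workspace-key>"}'
```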
+
+<a name="offprovisioning"></a>
+
+To turn off monitoring components:
+
+- Go to the Defender plans and turn off the plan that uses the extension and select **Save**.
+- For Defender plans that have monitoring settings, go to the settings of the Defender plan, turn off the extension, and select **Save**.
+
+> [!NOTE]
+> - Disabling extensions does not remove the extensions from the affected workloads.
+> - For information on removing the OMS extension, see [How do I remove OMS extensions installed by Defender for Cloud](./faq-data-collection-agents.yml#how-do-i-remove-oms-extensions-installed-by-defender-for-cloud-).
defender-for-iot Hpe Proliant Dl20 Plus Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-enterprise.md
-# HPE ProLiant DL20/DL20 Plus
+# HPE ProLiant DL20 Gen10/DL20 Gen10 Plus
-This article describes the **HPE ProLiant DL20** or **HPE ProLiant DL20 Plus** appliance for OT sensors in an enterprise deployment.
+This article describes the **HPE ProLiant DL20 Gen10** or **HPE ProLiant DL20 Gen10 Plus** appliance for OT sensors in an enterprise deployment.
-The HPE ProLiant DL20 Plus is also available for the on-premises management console.
+The HPE ProLiant DL20 Gen10 Plus is also available for the on-premises management console.
| Appliance characteristic |Details | |||
The following image shows a sample of the HPE ProLiant DL20 back panel:
|Dimensions |Four 3.5" chassis: 4.29 x 43.46 x 38.22 cm / 1.70 x 17.11 x 15.05 in | |Weight | Max 7.9 kg / 17.41 lb |
-**DL20 BOM**
+**DL20 Gen10 BOM**
| Quantity | PN| Description: high end | |--|--|--|
The following image shows a sample of the HPE ProLiant DL20 back panel:
|1| P06722-B21 | HPE DL20 Gen10 RPS Enablement FIO Kit | |1| 775612-B21 | HPE 1U Short Friction Rail Kit |
-**DL20 Plus BOM**:
+**DL20 Gen10 Plus BOM**:
|Quantity|PN|Description| |-||-|
Optional modules for port expansion include:
| SFPs for Fiber Optic NICs|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver| | SFPs for Fiber Optic NICs|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
-## HPE ProLiant DL20 / HPE ProLiant DL20 Plus installation
+## HPE ProLiant DL20 Gen10 / HPE ProLiant DL20 Gen10 Plus installation
-This section describes how to install Defender for IoT software on the HPE ProLiant DL20 or HPE ProLiant DL20 Plus appliance.
+This section describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus appliance.
Installation includes:
This procedure describes how to update the HPE BIOS configuration for your OT de
> For **Data-at-Rest** encryption, see the HPE guidance for activating RAID Secure Encryption or using Self-Encrypting-Drives (SED). >
-### Install Defender for IoT software on the HPE ProLiant DL20 or HPE ProLiant DL20 Plus
+### Install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus
-This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 or HPE ProLiant DL20 Plus.
+This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus.
The installation process takes about 20 minutes. After the installation, the system is restarted several times.
defender-for-iot Hpe Proliant Dl20 Plus Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-smb.md
Title: HPE ProLiant DL20/DL20 Plus (NHP 2LFF) for OT monitoring in SMB deployments- Microsoft Defender for IoT
-description: Learn about the HPE ProLiant DL20/DL20 Plus appliance when used for in SMB deployments for OT monitoring with Microsoft Defender for IoT.
+ Title: HPE ProLiant DL20 Gen10/DL20 Gen10 Plus (NHP 2LFF) for OT monitoring in SMB deployments- Microsoft Defender for IoT
+description: Learn about the HPE ProLiant DL20 Gen10/DL20 Gen10 Plus appliance when used in SMB deployments for OT monitoring with Microsoft Defender for IoT.
Last updated 04/24/2022
-# HPE ProLiant DL20/DL20 Plus (NHP 2LFF) for SMB deployments
+# HPE ProLiant DL20 Gen10/DL20 Gen10 Plus (NHP 2LFF) for SMB deployments
-This article describes the **HPE ProLiant DL20** or **HPE ProLiant DL20 Plus** appliance for OT sensors in an SBM deployment.
+This article describes the **HPE ProLiant DL20 Gen10** or **HPE ProLiant DL20 Gen10 Plus** appliance for OT sensors in an SMB deployment.
-The HPE ProLiant DL20 Plus is also available for the on-premises management console.
+The HPE ProLiant DL20 Gen10 Plus is also available for the on-premises management console.
| Appliance characteristic |Details | |||
The HPE ProLiant DL20 Plus is also available for the on-premises management cons
|**Physical specifications** | Mounting: 1U<br>Ports: 4x RJ45| |**Status** | Supported; Available as pre-configured |
-The following image shows a sample of the HPE ProLiant DL20 front panel:
+The following image shows a sample of the HPE ProLiant DL20 Gen10 front panel:
-The following image shows a sample of the HPE ProLiant DL20 back panel:
+The following image shows a sample of the HPE ProLiant DL20 Gen10 back panel:
## Specifications
The following image shows a sample of the HPE ProLiant DL20 back panel:
|512485-B21|HPE iLO Adv 1-Server License 1 Year Support|1| |775612-B21|HPE 1U Short Friction Rail Kit|1|
-## HPE ProLiant DL20/HPE ProLiant DL20 Plus installation
+## HPE ProLiant DL20 Gen10/HPE ProLiant DL20 Gen10 Plus installation
-This section describes how to install Defender for IoT software on the HPE ProLiant DL20 or HPE ProLiant DL20 Plus appliance.
+This section describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus appliance.
Installation includes:
This procedure describes how to update the HPE BIOS configuration for your OT de
:::image type="content" source="../media/tutorial-install-components/boot-override-window-two-v2.png" alt-text="Screenshot that shows the second Boot Override window.":::
-### Install Defender for IoT software on the HPE ProLiant DL20 or HPE ProLiant DL20 Plus
+### Install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus
-This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 or HPE ProLiant DL20 Plus.
+This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus.
The installation process takes about 20 minutes. After the installation, the system is restarted several times.
defender-for-iot Architecture Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture-connections.md
Last updated 09/11/2022
# OT sensor cloud connection methods
-This article describes the architectures and methods supported for connecting your Microsoft Defender for IoT OT sensors to the cloud.
+This article describes the architectures and methods supported for connecting your Microsoft Defender for IoT OT sensors to the cloud. An integral part of the Microsoft Defender for IoT service is the managed cloud service in Azure that acts as the central security monitoring portal for aggregating security information collected from network monitoring sensors and security agents. To ensure the security of IoT/OT at a global scale, the service supports millions of concurrent telemetry sources securely and reliably.
++ The cloud connection methods described in this article are supported only for OT sensor version 22.x and later. All methods provide:
The cloud connection methods described in this article are supported only for OT
- **Improved security**, without needing to configure or lock down any resource security settings in the Azure VNET
+- **Encryption**, Transport Layer Security (TLS1.2/AES-256) provides encrypted communication between the sensor and Azure resources.
+ - **Scalability** for new features supported only in the cloud - **Flexible connectivity** using any of the connection methods described in this article
The following image shows how you can connect your sensors to the Defender for I
With direct connections -- Any sensors connected to Azure data centers directly over the internet have a secure and encrypted connection to the Azure data centers. Transport Layer Security (TLS) provides *always-on* communication between the sensor and Azure resources.
+- Any sensors connected to Azure data centers directly over the internet have a secure and encrypted connection to the Azure data centers. Transport Layer Security (TLS1.2/AES-256) provides *always-on* communication between the sensor and Azure resources.
- The sensor initiates all connections to the Azure portal. Initiating connections only from the sensor protects internal network devices from unsolicited inbound connections, but also means that you don't need to configure any inbound firewall rules.
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
System health checks include the following:
|- Longest Key | Displays the longest keys that might cause extensive memory usage. | |**System** | | |- Core Log | Provides the last 500 rows of the core log, so that you can view the recent log rows without exporting the entire system log. |
-|- Task Manager | Translates the tasks that appear in the table of processes to the following layers: <br><br> - Persistent layer (Redis)<br> - Cash layer (SQL) |
+|- Task Manager | Translates the tasks that appear in the table of processes to the following layers: <br><br> - Persistent layer (Redis)<br> - Cache layer (SQL) |
|- Network Statistics | Displays your network statistics. |
|- TOP | Shows the table of processes. It's a Linux command that provides a dynamic real-time view of the running system. |
|- Backup Memory Check | Provides the status of the backup memory, checking the following:<br><br> - The location of the backup folder<br> - The size of the backup folder<br> - The limitations of the backup folder<br> - When the last backup happened<br> - How much space there is for the extra backup files |
defender-for-iot Integrate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-with-active-directory.md
You can associate Active Directory groups defined here with specific permission
|--|--| | Domain controller FQDN | Set the fully qualified domain name (FQDN) exactly as it appears on your LDAP server. For example, enter `host1.subdomain.domain.com`. | | Domain controller port | Define the port on which your LDAP is configured. |
- | Primary domain | Set the domain name (for example, `subdomain.domain.com`) and the connection type according to your LDAP configuration. |
+ | Primary domain | Set the domain name (for example, `subdomain.domain.com`) |
+ | Connection type | Set the authentication type: LDAPS/NTLMv3 (Recommended), LDAP/NTLMv3 or LDAP/SASL-MD5 |
| Active Directory groups | Enter the group names that are defined in your Active Directory configuration on the LDAP server. You can enter a group name that you'll associate with Admin, Security Analyst and Read-only permission levels. Use these groups when creating new sensor users.| | Trusted endpoints | To add a trusted domain, add the domain name and the connection type of a trusted domain. <br />You can configure trusted endpoints only for users who were defined under users. |
You can associate Active Directory groups defined here with specific permission
## Next steps
-For more information, see [how to create and manage users](./how-to-create-and-manage-users.md).
+For more information, see [how to create and manage users](./how-to-create-and-manage-users.md).
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
You can [order](mailto:hardware.sales@arrow.com) any of the following preconfigu
||||| |**C5600** | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3Gbp/s <br>**Max devices**: 12,000 <br> 32 Cores/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) | |**E1800** | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
-|**E1800** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/512GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
+|**E500** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/512GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
|**L500** | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200Mbp/s<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 | |**L100** | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: 10Mbp/s <br>**Max devices**: 100 <br> 4 Cores/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
defender-for-iot Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-archive.md
Webhook extended can be used to send extra data to the endpoint. The extended fe
### Unicode support for certificate passphrases
-Unicode characters are now supported when working with sensor certificate passphrases. For more information, see [About certificates](how-to-deploy-certificates.md#about-certificates)
+Unicode characters are now supported when working with sensor certificate passphrases. For more information, see [About certificates](how-to-deploy-certificates.md).
## April 2021
In sensor and on-premises management console Alerts, the term Manage this Event
## Next steps
-[Getting started with Defender for IoT](getting-started.md)
+[Getting started with Defender for IoT](getting-started.md)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
| Version | Date released | End support date | |--|--|--|
+| 22.2.7 | 10/2022 | 04/2023 |
| 22.2.6 | 09/2022 | 04/2023 | | 22.2.5 | 08/2022 | 04/2023 | | 22.2.4 | 07/2022 | 04/2023 |
For more information, see the [Microsoft Security Development Lifecycle practice
|Service area |Updates | |||
-|**OT networks** |**All supported OT sensor software versions**: <br>- [Device vulnerabilities from the Azure portal](#device-vulnerabilities-from-the-azure-portal-public-preview)<br>- [Security recommendations for OT networks](#security-recommendations-for-ot-networks-public-preview)<br><br> **All OT sensor software versions 22.x**: [Updates for Azure cloud connection firewall rules](#updates-for-azure-cloud-connection-firewall-rules-public-preview) <br><br>**Sensor software version 22.2.6**: <br> - Bug fixes and stability improvements <br>- Enhancements to the device type classification algorithm<br><br>**Microsoft Sentinel integration**: <br>- [Investigation enhancements with IoT device entities](#investigation-enhancements-with-iot-device-entities-in-microsoft-sentinel)<br>- [Updates to the Microsoft Defender for IoT solution](#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub)|
+|**OT networks** |**All supported OT sensor software versions**: <br>- [Device vulnerabilities from the Azure portal](#device-vulnerabilities-from-the-azure-portal-public-preview)<br>- [Security recommendations for OT networks](#security-recommendations-for-ot-networks-public-preview)<br><br> **All OT sensor software versions 22.x**: [Updates for Azure cloud connection firewall rules](#updates-for-azure-cloud-connection-firewall-rules-public-preview) <br><br>**Sensor software version 22.2.7**: <br> - Bug fixes and stability improvements <br> **Sensor software version 22.2.6**: <br> - Bug fixes and stability improvements <br>- Enhancements to the device type classification algorithm<br><br>**Microsoft Sentinel integration**: <br>- [Investigation enhancements with IoT device entities](#investigation-enhancements-with-iot-device-entities-in-microsoft-sentinel)<br>- [Updates to the Microsoft Defender for IoT solution](#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub)|
### Security recommendations for OT networks (Public preview)
deployment-environments Concept Environments Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environments-key-concepts.md
+
+ Title: Azure Deployment Environments key concepts
+description: Learn the key concepts behind Azure Deployment Environments.
+++++ Last updated : 10/12/2022++
+# Key concepts for new Azure Deployment Environments Preview users
+
+Learn about the key concepts and components of Azure Deployment Environments Preview. This knowledge can help you to more effectively deploy environments for your scenarios.
+
+> [!IMPORTANT]
+> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Dev centers
+
+A dev center is a collection of projects that require similar settings. Dev centers enable dev infrastructure managers to manage the infrastructure-as-code templates that are made available to the projects through catalogs, and to configure the different types of environments that development teams can create by using environment types.
+
+## Projects
+
+A project is the point of access for the development team members. When you associate a project with a dev center, all the settings at the dev center level will be automatically applied to the project. Each project can be associated with only one dev center. Dev infra admins can configure different types of environments made available for the project by specifying the environment types appropriate for the specific development team.
+
+## Environments
+
+An environment is a collection of Azure resources on which your application is deployed. For example, to deploy a web application, you might create an environment consisting of an [App Service](../app-service/overview.md), [Key Vault](../key-vault/general/basic-concepts.md), [Cosmos DB](../cosmos-db/introduction.md) and a [Storage account](../storage/common/storage-account-overview.md). An environment can consist of both Azure PaaS and IaaS resources, such as an AKS cluster, App Service, VMs, and databases.
+
+## Identities
+
+[Managed Identities](../active-directory/managed-identities-azure-resources/overview.md) are used in Azure Deployment Environments to provide elevation-of-privilege capabilities. Identities provide self-serve capabilities to your development teams without giving them direct access to the target subscriptions in which the Azure resources are created. The managed identity attached to the dev center needs appropriate access to connect to the catalogs, and should be granted 'Owner' access on the target deployment subscriptions configured at the project level. The Azure Deployment Environments service uses the configured deployment identity to perform deployments on behalf of the developer.
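
For illustration, granting the dev center's managed identity 'Owner' access on a target subscription could be scripted with the Azure CLI, as in the hedged sketch below. The principal ID and subscription ID are placeholders you'd replace with your own values.

```azurecli
# Hypothetical values; <principal-id> is the object ID of the dev center's managed identity,
# and <subscription-id> is the target deployment subscription configured at the project level.
az role assignment create \
  --assignee "<principal-id>" \
  --role "Owner" \
  --scope "/subscriptions/<subscription-id>"
```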
+
+## Dev center environment types
+
+You can use environment types to define the type of environments the development teams can create, for example, dev, test, sandbox, pre-production, or production. Azure Deployment Environments provides the flexibility to name the environment types as per the nomenclature used in your enterprise. When you create an environment type, you'll be able to configure and apply different settings for different environment types based on specific needs of the development teams.
+
+## Project environment types
+
+Project Environment Types are a subset of the environment types configured per dev center and help you pre-configure the different types of environments specific development teams can create. You'll be able to configure the target subscription in which Azure resources are created per project per environment type. Project environment types will allow you to automatically apply the right set of policies on different environments and help abstract the Azure governance related concepts from your development teams. The service also provides the flexibility to pre-configure the [managed identity](concept-environments-key-concepts.md#identities) that will be used to perform the deployment and the access levels the development teams will get after a specific environment is created.
+
+## Catalogs
+
+Catalogs help you provide a set of curated infra-as-code templates for your development teams to create Environments. You can attach either a [GitHub repository](https://docs.github.com/repositories/creating-and-managing-repositories/about-repositories) or an [Azure DevOps Services repository](/devops/repos/get-started/what-is-repos) as a Catalog. Deployment Environments will scan through the specified folder of the repository to find [Catalog Items](#catalog-items), and make them available for use by all the Projects associated with the dev center.
+
+## Catalog Items
+
+A Catalog Item is a combination of an infra-as-code template and a manifest file. The environment definition is defined in the template, and the manifest provides metadata about the template. Your development teams use the Catalog Items that you provide in the Catalog to create environments in Azure.
+
+> [!NOTE]
+> During public preview, Azure Deployments Environments uses Azure Resource Manager (ARM) templates.
+
+## Azure Resource Manager (ARM) templates
+
+[Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/overview.md) help you implement the infrastructure as code for your Azure solutions by defining the infrastructure and configuration for your project, the resources to deploy, and the properties of those resources.
+
+[Understand the structure and syntax of Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md) describes the structure of an Azure Resource Manager template, the different sections of a template, and the properties that are available in those sections.
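
For a sense of how such templates are consumed, the following hedged Azure CLI sketch deploys an ARM template directly to a resource group, which is conceptually similar to what the service does on your behalf when it creates an environment. The file name and parameter are placeholders.

```azurecli
# Placeholder names; azuredeploy.json stands in for any ARM template used by a catalog item.
az deployment group create \
  --resource-group "<resource-group>" \
  --template-file azuredeploy.json \
  --parameters "name=<app-name>"
```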
+
+## Next steps
+
+[Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md)
deployment-environments Concept Environments Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environments-scenarios.md
+
+ Title: User scenarios for Azure Deployment Environments
+description: Learn about scenarios enabled by Azure Deployment Environments.
+++++ Last updated : 10/12/2022+
+# Scenarios for using Azure Deployment Environments Preview
+
+This article discusses a few possible scenarios and benefits of Azure Deployment Environments Preview, and the resources used to implement those scenarios. Depending on the needs of an enterprise, Azure Deployment Environments can be configured to meet different requirements.
+
+Some possible scenarios are:
+- Environments as part of a CI/CD pipeline
+- Sandbox environments for investigations
+- On-demand test environments
+- Training sessions, hands-on labs, and hackathons
+
+> [!IMPORTANT]
+> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Environments as part of a CI/CD pipeline
+
+Creating and managing environments across an enterprise can require significant effort. With Azure Deployment Environments, different types of product lifecycle environments, such as development, testing, staging, pre-production, and production, can be easily created, updated, and plugged into a CI/CD pipeline.
+
+In this scenario, Azure Deployment Environments provides the following benefits:
+
+- Organizations can attach a [Catalog](./concept-environments-key-concepts.md#catalogs) and provide common 'infra-as-code' templates to create environments ensuring consistency across teams.
+- Developers and testers can test the latest version of their application by quickly provisioning environments by using reusable templates.
+- Development teams can connect their environments to CI/CD pipelines to enable DevOps scenarios.
+- Central Dev IT teams can centrally track costs, security alerts, and manage environments across different projects and dev centers.
+
+## Sandbox environments for investigations
+
+Developers often investigate different technologies or infrastructure designs. By default, every environment created with Azure Deployment Environments is created in its own resource group, and project members get contributor access to those resources.
+
+In this scenario, Azure Deployment Environments provides the following benefits:
+ - Allows developers to add and/or change Azure resources as they need for their development or test environments.
+ - Central Dev IT teams can easily and quickly track costs for all the environments used for investigation purposes.
+
+## On-demand test environments
+
+Developers often need to create ad hoc test environments that mimic their formal development or testing environments to test a new capability before checking in the code and executing a pipeline. With Azure Deployment Environments, test environments can be easily created, updated, or duplicated.
+
+In this scenario, Azure Deployment Environments provides the following benefits:
+- Allows teams to access a fully configured environment when it's needed.
+- Developers can test the latest version of their application by quickly creating new ad hoc environments using reusable templates.
+
+## Training sessions, hands-on labs, and hackathons
+
+A Project in Azure Deployment Environments acts as a great container for transient activities like workshops, hands-on labs, training sessions, or hackathons. The service allows you to create a Project where you can provide custom templates to each user.
+
+In this scenario, Azure Deployment Environments provides the following benefits:
+- Each trainee can create identical and isolated environments for training.
+- Easily delete a Project and all related resources when the training is over.
+
+## Proof of concept deployment vs. scaled deployment
+
+Once you decide to explore Azure Deployment Environments, there are two general paths forward: a proof of concept or a scaled deployment.
+
+### Proof of concept deployment
+
+A **proof of concept** deployment focuses on a concentrated effort from a single team to establish organizational value. While it can be tempting to think of a scaled deployment, the approach tends to fail more often than the proof of concept option. Therefore, we recommend that you start small, learn from the first team, repeat the same approach with two to three additional teams, and then plan for a scaled deployment based on the knowledge gained. For a successful proof of concept, we recommend that you pick one or two teams, and identify their scenarios ([environments as part of a CI/CD pipeline](#environments-as-part-of-a-cicd-pipeline) vs [sandbox environments](#sandbox-environments-for-investigations)), document their current use cases, and then deploy Azure Deployment Environments.
+
+### Scaled deployment
+
+A **scaled deployment** consists of weeks of reviewing and planning with an intent of deploying Azure Deployment Environments to the entire enterprise that has hundreds or thousands of developers.
+
+## Next steps
+
+- To get started with the service, [Quickstart: Create and configure the Azure Deployment Environments dev center](./quickstart-create-and-configure-devcenter.md)
+- Learn more about [Azure Deployment Environments key concepts](./concept-environments-key-concepts.md)
deployment-environments Configure Catalog Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/configure-catalog-item.md
+
+ Title: Configure a Catalog Item in Azure Deployment Environments
+description: This article helps you configure a Catalog Item in GitHub repo or Azure DevOps repo.
++++ Last updated : 10/12/2022+++
+# Configure a Catalog Item in GitHub repo or Azure DevOps repo
+
+In Azure Deployment Environments Preview service, you can use a [Catalog](concept-environments-key-concepts.md#catalogs) to provide your development teams with a curated set of predefined [*infrastructure as code (IaC)*](/devops/deliver/what-is-infrastructure-as-code) templates called [Catalog Items](concept-environments-key-concepts.md#catalog-items). A catalog item is a combination of an *infrastructure as code (IaC)* template (for example, [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/overview.md)) and a manifest (*manifest.yml*) file.
+
+>[!NOTE]
+> Azure Deployment Environments Preview currently only supports Azure Resource Manager (ARM) templates.
+
+The IaC template contains the environment definition, and the manifest file provides metadata about the template. Your development teams use the catalog items that you provide in the catalog to deploy environments in Azure.
+
+We offer an example [Sample Catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can attach as-is, or you can fork and customize the catalog items. You can attach your private repo to use your own catalog items.
+
+After you [attach a catalog](how-to-configure-catalog.md) to your dev center, the service scans the specified folder path to identify folders that contain an ARM template and the associated manifest file. The specified folder path should be a folder that contains subfolders holding the catalog item files, as shown in the sketch that follows.
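
The following layout sketch is purely illustrative; the folder and file names ("Environments", "WebApp", "azuredeploy.json") are hypothetical examples of the structure the configured folder path should point to.

```azurecli
# Illustrative layout only; "Environments" is the folder path you'd configure on the catalog,
# and each subfolder (for example, "WebApp") is one catalog item.
mkdir -p Environments/WebApp

# Environments/
# └── WebApp/
#     ├── azuredeploy.json   # ARM template with the environment definition
#     └── manifest.yaml      # metadata that describes the template
```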
+
+In this article, you'll learn how to:
+
+* Add a new catalog item
+* Update a catalog item
+* Delete a catalog item
+
+> [!IMPORTANT]
+> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Add a new catalog item
+
+Provide a new catalog item to your development team as follows:
+
+1. Create a subfolder in the specified folder path, and then add an *ARM_template.json* and the associated *manifest.yaml* file.
+ :::image type="content" source="../deployment-environments/media/configure-catalog-item/create-subfolder-in-path.png" alt-text="Screenshot of subfolder in folder path containing ARM template and manifest file.":::
+
+ 1. **Add ARM template**
+
+ To implement infrastructure as code for your Azure solutions, use Azure Resource Manager templates (ARM templates).
+
+ [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/overview.md) help you define the infrastructure and configuration of your Azure solution and repeatedly deploy it in a consistent state.
+
+ To learn about how to get started with ARM templates, see the following:
+
+ - [Understand the structure and syntax of Azure Resource Manager Templates](../azure-resource-manager/templates/syntax.md) describes the structure of an Azure Resource Manager template and the properties that are available in the different sections of a template.
+ - [Use linked templates](../azure-resource-manager/templates/linked-templates.md?tabs=azure-powershell#use-relative-path-for-linked-templates) describes how to use linked templates with the new ARM `relativePath` property to easily modularize your templates and share core components between catalog items.
+
+ 1. **Add manifest file**
+
+ The *manifest.yaml* file contains metadata related to the ARM template.
+
+ The following is a sample *manifest.yaml* file.
+
+ ```
+ name: WebApp
+ version: 1.0.0
+ description: Deploys an Azure Web App without a data store
+ engine:
+ type: ARM
+ templatePath: azuredeploy.json
+ ```
+
+ >[!NOTE]
+ > `version` is an optional field, and will later be used to support multiple versions of catalog items.
+
+1. On the **Catalogs** page of the dev center, select the specific repo, and then select **Sync**.
+
+ :::image type="content" source="../deployment-environments/media/configure-catalog-item/sync-catalog-items.png" alt-text="Screenshot showing how to sync the catalog." :::
+
+1. The service scans through the repository to discover any new catalog items and makes them available to all the projects.
+
+## Update an existing catalog item
+
+To modify the configuration of Azure resources in an existing catalog item, directly update the associated *ARM_Template.json* file in the repository. The change is immediately reflected when you create a new environment using the specific catalog item, and when you redeploy an environment associated with that catalog item.
+
+To update any metadata related to the ARM template, modify the *manifest.yaml* and [update the catalog](how-to-configure-catalog.md).
+
+## Delete a catalog item
+To delete an existing Catalog Item, delete the subfolder containing the ARM template and the associated manifest, and then [update the catalog](how-to-configure-catalog.md).
+
+Once you delete a catalog item, development teams will no longer be able to use the specific catalog item to deploy a new environment. You'll need to update the catalog item reference for any existing environments created using the deleted catalog item. Redeploying the environment without updating the reference will result in a deployment failure.
+
+## Next steps
+
+* [Create and configure projects](./quickstart-create-and-configure-projects.md)
+* [Create and configure environment types](quickstart-create-access-environments.md).
deployment-environments How To Configure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md
+
+ Title: Configure a catalog
+
+description: Learn how to configure a catalog in your dev center to provide curated infra-as-code templates to your development teams to deploy self-serve environments.
++++ Last updated : 10/12/2022+++
+# Configure a catalog to provide curated infra-as-code templates
+
+Learn how to configure a dev center [catalog](./concept-environments-key-concepts.md#catalogs) to provide your development teams with a curated set of 'infra-as-code' templates called [catalog items](./concept-environments-key-concepts.md#catalog-items). To learn about configuring catalog items, see [How to configure a catalog item](./configure-catalog-item.md).
+
+The catalog could be a repository hosted in [GitHub](https://github.com) or in [Azure DevOps Services](https://dev.azure.com/).
+
+* To learn how to host a repository in GitHub, see [Get started with GitHub](https://docs.github.com/get-started).
+* To learn how to host a Git repository in an Azure DevOps Services project, see [Azure Repos](https://azure.microsoft.com/services/devops/repos/).
+
+We offer an example [Sample Catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can attach as-is, or you can fork and customize the catalog items. You can attach your private repo to use your own catalog items.
+
+In this article, you'll learn how to:
+
+* [Add a new catalog](#add-a-new-catalog)
+* [Update a catalog](#update-a-catalog)
+* [Delete a catalog](#delete-a-catalog)
+
+## Add a new catalog
+
+To add a new catalog, you'll need to:
+
+ - Get the clone URL for your repository
+ - Create a personal access token and store it as a Key Vault secret
+
+### Get the clone URL for your repository
+
+**Get the clone URL of your GitHub repo**
+
+1. Go to the home page of the GitHub repository that contains the template definitions.
+1. [Get the clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-a-github-repo).
+1. Copy and save the URL. You'll use it later.
+
+**Get the clone URL of your Azure DevOps Services Git repo**
+
+1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`), and then select your project.
+1. [Get the clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-an-azure-repos-git-repo).
+1. Copy and save the URL. You'll use it later.
+
+### Create a personal access token and store it as a Key Vault secret
+
+#### Create a personal access token in GitHub
+
+1. Go to the home page of the GitHub repository that contains the template definitions.
+1. In the upper-right corner of GitHub, select the profile image, and then select **Settings**.
+1. In the left sidebar, select **<> Developer settings**.
+1. In the left sidebar, select **Personal access tokens**.
+1. Select **Generate new token**.
+1. On the **New personal access token** page, add a description for your token in the **Note** field.
+1. Select an expiration for your token from the **Expiration** dropdown.
+1. For a private repository, select the **repo** scope under **Select scopes**.
+1. Select **Generate Token**.
+1. Save the generated token. You'll use the token later.
+
+#### Create a personal access token in Azure DevOps Services
+
+1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`), and then select your project.
+1. [Create a Personal access token](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate#create-a-pat).
+1. Save the generated token. You'll use the token later.
+
+#### Store the personal access token as a Key Vault secret
+
+To store the personal access token (PAT) that you generated as a [Key Vault secret](../key-vault/secrets/about-secrets.md) and copy the secret identifier:
+1. [Create a vault](../key-vault/general/quick-create-portal.md#create-a-vault)
+1. [Add](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault) the personal access token (PAT) as a secret to the Key Vault.
+1. [Open](../key-vault/secrets/quick-create-portal.md#retrieve-a-secret-from-key-vault) the secret and copy the secret identifier.
+
+### Connect your repository as a catalog
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Go to your dev center.
+1. Ensure that the [identity](./how-to-configure-managed-identity.md) attached to the dev center has [access to the Key Vault's secret](./how-to-configure-managed-identity.md#assign-the-managed-identity-access-to-the-key-vault-secret) where the PAT is stored.
+1. Select **Catalogs** from the left pane.
+1. Select **+ Add** from the command bar.
+1. On the **Add catalog** form, enter the following details, and then select **Add**.
+
+ | Field | Value |
+ | -- | -- |
+ | **Name** | Enter a name for the catalog. |
+ | **Git clone URI** | Enter the [Git HTTPS clone URL](#get-the-clone-url-for-your-repository) of the GitHub or Azure DevOps Services repo that you copied earlier.|
+ | **Branch** | Enter the repository branch you'd like to connect to.|
+ | **Folder Path** | Enter the folder path, relative to the clone URI, that contains the sub-folders with your catalog items. The path should point to the folder that contains the catalog item sub-folders, not to an individual catalog item manifest.|
+ | **Secret Identifier**| Enter the [secret identifier](#create-a-personal-access-token-and-store-it-as-a-key-vault-secret) of the secret that contains your personal access token (PAT) for the repository.|
+
+1. Verify that your catalog is listed on the **Catalogs** page. If the connection is successful, the **Status** will show as **Connected**.
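+
+If you prefer to script this step, you can attach the same repository by using the Deployment Environments Azure CLI extension. This is a minimal sketch based on the `az devcenter admin catalog create` command covered in the CLI article; the placeholder values are yours to supply, and `--ado-git` can be used in place of `--git-hub` for an Azure DevOps Services repo.
+
+```azurecli
+# Attach a GitHub repository as a catalog (use --ado-git for Azure DevOps Services).
+az devcenter admin catalog create --git-hub \
+    secret-identifier="https://<key-vault-name>.vault.azure.net/secrets/<secret-name>" \
+    uri=<git-clone-uri> branch=<git-branch> \
+    -g <resource-group-name> --name <catalog-name> --dev-center-name <devcenter-name>
+```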
+
+## Update a catalog
+
+If you update the ARM template contents or definition in the attached repository, you can provide the latest set of catalog items to your development teams by syncing the catalog.
+
+To sync to the updated catalog:
+
+1. Select **Catalogs** from the left pane.
+1. Select the specific catalog and select **Sync**. The service scans through the repository and makes the latest list of catalog items available to all the associated projects in the dev center.
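+
+You can also trigger a sync from the Azure CLI by using the `az devcenter admin catalog sync` command from the Deployment Environments CLI extension; substitute your own catalog, dev center, and resource group names.
+
+```azurecli
+# Re-scan the attached repository and refresh the list of catalog items.
+az devcenter admin catalog sync --name <catalog-name> --dev-center-name <devcenter-name> -g <resource-group-name>
+```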
+
+## Delete a catalog
+
+You can delete a catalog to remove it from the dev center. Templates in a deleted catalog aren't available when you deploy new environments. Update the catalog item reference for any existing environment that was created by using a catalog item in the deleted catalog. If the reference isn't updated and the environment is redeployed, the deployment fails.
+
+To delete a catalog:
+
+1. Select **Catalogs** from the left pane.
+1. Select the specific catalog and select **Delete**.
+1. Confirm to delete the catalog.
+
+## Catalog sync errors
+
+When you add or sync a catalog, you might encounter a sync error. A sync error indicates that some or all of the catalog items have errors. Use the Azure CLI or the REST API to *GET* the catalog. The response shows you the list of invalid catalog items, which failed because of schema, reference, or validation errors, and the list of ignored catalog items, which were detected to be duplicates.
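+
+For example, with the Deployment Environments CLI extension you can list the catalogs in the dev center and inspect the returned objects for their sync status and any reported item errors (the exact properties returned may vary in the preview):
+
+```azurecli
+# List catalogs and review their sync state and reported item errors.
+az devcenter admin catalog list -g <resource-group-name> --dev-center-name <devcenter-name>
+```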
+
+### Handling ignored catalog items
+
+Ignored catalog items are caused by adding two or more catalog items with the same name. You can resolve this issue by renaming catalog items so that each item has a unique name within the catalog.
+
+### Handling invalid catalog items
+
+Invalid catalog items can occur for a variety of reasons. Potential issues include:
+
+ - **Manifest schema errors**
+ - Ensure that your catalog item manifest matches the required schema as described [here](./configure-catalog-item.md#add-a-new-catalog-item).
+
+ - **Validation errors**
+ - Ensure that the manifest's engine type is correctly configured as "ARM".
+ - Ensure that the catalog item name is between 3 and 63 characters.
+ - Ensure that the catalog item name includes only URL-valid characters. This includes alphanumeric characters as well as these symbols: *~!,.';:=-\_+)(\*&$@*
+
+ - **Reference errors**
+ - Ensure that the template path referenced by the manifest is a valid relative path to a file within the repository.
+
+## Next steps
+
+* [Create and Configure Projects](./quickstart-create-and-configure-projects.md).
deployment-environments How To Configure Deployment Environments User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-deployment-environments-user.md
+
+ Title: Configure deployment environments user access
+
+description: Learn how to configure access for developers by using the Deployment Environments Users built-in role.
++++ Last updated : 10/12/2022++++
+# Provide access to developers
+
+Development team members must have access to a specific project before they can create deployment environments. By using the built-in Deployment Environments User role, you can assign permissions to Azure Active Directory users or groups at either the project level or the level of a specific project environment type.
+
+Based on the scope that users are granted access to, a Deployment Environments User can:
+
+* View the project environment types
+* Create an environment
+* Read, write, delete, or perform actions (deploy, reset, etc.) on their own environment
+* Read or perform actions (deploy, reset, etc.) on environments created by other users
+
+When the role is assigned at the project level, the user can perform the preceding actions on all environment types enabled in the project. When the role is assigned for one or more specific environment types, the user can perform the actions only on those environment types.
+
+## Assign permissions to developers to a project
+
+1. Select the project you want to provide your development team members access to.
+2. Select **Access control (IAM)** from the left menu.
+
+ :::image type="content" source=".\media\configure-deployment-environments-user\access-control-page.png" alt-text="Screenshot showing link to access control page.":::
+
+3. Select **Add** > **Add role assignment**.
+
+ :::image type="content" source=".\media\configure-deployment-environments-user\add-role-assignment.png" alt-text="Screenshot showing Add role assignment menu option.":::
+
+4. On the Add role assignment page, on the Role tab, search for *deployment environments user*, select the **Deployment Environments User** built-in role, and then select **Next**.
+5. On the Members tab, select **+ Select Members**.
+6. In **Select members**, select the Active Directory Users or Groups you want to add, and then select **Select**.
+7. On the Members tab, select **Review + assign**.
+
+The user can now view the project and all the Environment Types enabled within it. Deployment Environments users can [create environments from the CLI](./quickstart-create-access-environments.md).
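+
+If you prefer the command line, the same assignment can be made with `az role assignment create`. This is a sketch that assumes the standard project resource ID format; the same approach works for the DevCenter Project Admin role described in the related article.
+
+```azurecli
+# Assign the Deployment Environments User role at the project scope.
+az role assignment create \
+    --assignee "<user-or-group-object-id>" \
+    --role "Deployment Environments User" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.DevCenter/projects/<project-name>"
+```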
+
+## Assign permissions to developers to a specific environment type
+
+1. Select the project you want to provide your development team members access to.
+2. Select **Environment Types** and select the **...** beside the specific environment type.
+
+ :::image type="content" source=".\media\configure-deployment-environments-user\project-environment-types.png" alt-text="Screenshot showing the environment types associated with a project.":::
+
+3. Select **Access Control**.
+
+ :::image type="content" source=".\media\configure-deployment-environments-user\access-control-page.png" alt-text="Screenshot showing link to access control page.":::
+
+4. Select **Add** > **Add role assignment**.
+
+ :::image type="content" source=".\media\configure-deployment-environments-user\add-role-assignment.png" alt-text="Screenshot showing Add role assignment menu option.":::
+
+5. On the Add role assignment page, on the Role tab, search for *deployment environments user*, select the **Deployment Environments User** built-in role, and then select **Next**.
+6. On the Members tab, select **+ Select Members**.
+7. In **Select members**, select the Active Directory Users or Groups you want to add, and then select **Select**.
+8. On the Members tab, select **Review + assign**.
+
+The user can now view the project and the specific environment type that they have been granted access to. Deployment Environments users can [create environments using the CLI](./quickstart-create-access-environments.md).
+
+> [!NOTE]
+> Only users assigned the Deployment Environments User role, the DevCenter Project Admin role, or a built-in role with appropriate permissions will be able to create environments.
+
+## Next steps
+
+* [Create and Configure Projects](./quickstart-create-and-configure-projects.md)
+* [Provide access to Dev Managers](./how-to-configure-project-admin.md)
deployment-environments How To Configure Devcenter Environment Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-devcenter-environment-types.md
+
+ Title: Configure Dev center environment types
+
+description: Learn how to configure dev center environment types to define the types of environments that your developers can deploy.
++++ Last updated : 10/12/2022+++
+# Configure environment types for your Dev center
+
+In Azure Deployment Environments Preview, [environment types](./concept-environments-key-concepts.md#dev-center-environment-types) are used to define the types of environments available to development teams to deploy. You can name environment types to match the nomenclature your enterprise uses, for example, sandbox, dev, test, or production. You can specify deployment settings and the permissions available to developers per environment type per project.
+
+In this article, you'll learn how to:
+
+* Add a new environment type to your dev center
+* Delete an environment type from the dev center
++
+> [!IMPORTANT]
+> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Add a new dev center environment type
+
+Environment types allow your development teams to choose from different types of environments when creating self-service environments.
+
+Add a new environment type to the dev center as follows:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Access Azure Deployment Environments.
+1. Select your dev center from the list.
+1. Select **Environment types** from the left pane.
+1. Select **+ Add**.
+1. On the **Add environment type** page, add the following details:
+ 1. Add a **Name** for the environment type.
+ 1. Add a **Description** (optional).
+ 1. Add **Tags** by adding **Name/Value** (optional).
+1. Select **Add**.
++
+>[!NOTE]
+> A dev center environment type is available to a specific project only after an associated [project environment type](how-to-configure-project-environment-types.md) is added.
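+
+You can also create a dev center environment type with the Deployment Environments CLI extension. A minimal sketch, using the `az devcenter admin environment-type create` command covered in the CLI article:
+
+```azurecli
+# Create a dev center level environment type, for example "Sandbox".
+az devcenter admin environment-type create --dev-center-name <devcenter-name> -g <resource-group-name> --name <environment-type-name>
+```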
+
+## Delete a dev center environment type
+
+> [!NOTE]
+> Environment types can't be deleted if any existing project environment types or deployed environments reference the specific dev center environment type. Delete all the associated deployed environments and project environment types before attempting to delete an environment type.
+
+When you delete an environment type, it'll no longer be available when deploying environments or configuring new project environment types.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Access Azure Deployment Environments.
+1. Select your dev center from the list.
+1. Select **Environment types** from the left pane.
+1. Select the environment type(s) you want to delete.
+1. Select **Delete** and confirm.
+
+## Next steps
+
+* [Create and configure project environment type](how-to-configure-project-environment-types.md) to enable environment types for specific projects.
deployment-environments How To Configure Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-managed-identity.md
+
+ Title: Configure a managed identity
+
+description: Learn how to configure a managed identity that'll be used to deploy environments.
++++ Last updated : 10/12/2022+++
+# Configure a managed identity
+
+A [managed identity](../active-directory/managed-identities-azure-resources/overview.md) is used to provide elevation-of-privilege capabilities and to securely authenticate to any service that supports Azure Active Directory (Azure AD) authentication. The Azure Deployment Environments Preview service uses identities to provide self-serve capabilities to your development teams without granting them access to the target subscriptions in which the Azure resources are created.
+
+The managed identity attached to the dev center should be [granted 'Owner' access to the deployment subscriptions](how-to-configure-managed-identity.md) configured per environment type. When an environment deployment is requested, the service grants appropriate permissions to the deployment identities configured per environment type to perform deployments on behalf of the user.
+The managed identity attached to a dev center will also be used to connect to a [catalog](how-to-configure-catalog.md) and access the [catalog items](configure-catalog-item.md) made available through the catalog.
+
+In this article, you'll learn about:
+
+* Types of managed identities
+* Assigning a subscription role assignment to the managed identity
+* Assigning the identity access to the Key Vault secret
+
+> [!IMPORTANT]
+> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Types of managed identities
+
+In Azure Deployment Environments, you can use two types of managed identities:
+
+* A **system-assigned identity** is tied to either your dev center or the project environment type and is deleted when the attached resource is deleted. A dev center or a project environment type can have only one system-assigned identity.
+* A **user-assigned identity** is a standalone Azure resource that can be assigned to your dev center or to a project environment type. For Azure Deployment Environments Preview, a dev center or a project environment type can have only one user-assigned identity.
+
+> [!NOTE]
+> If you add both a system-assigned identity and a user-assigned identity, only the user-assigned identity will be used by the service.
+
+### Configure a system-assigned managed identity for a dev center
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Access Azure Deployment Environments.
+1. Select your dev center from the list.
+1. Select **Identity** from the left pane.
+1. On the **System assigned** tab, set **Status** to **On**, select **Save**, and then confirm that you want to enable the system-assigned managed identity.
+++
+### Configure a user-assigned managed identity for a dev center
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Access Azure Deployment Environments.
+1. Select your dev center from the list.
+1. Select **Identity** from the left pane.
+1. Switch to the **User assigned** tab and select **+ Add** to attach an existing identity.
++
+1. On the **Add user assigned managed identity** page, add the following details:
+    1. Select the **Subscription** in which the identity exists.
+    1. Select an existing identity from the **User assigned managed identities** dropdown.
+    1. Select **Add**.
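+
+If you create the dev center from the Azure CLI, you can attach a user-assigned identity at creation time instead. A sketch based on the `az devcenter admin devcenter create` command covered in the CLI article; the identity resource ID shown is a placeholder:
+
+```azurecli
+# Create a dev center with an existing user-assigned managed identity attached.
+az devcenter admin devcenter create --identity-type "UserAssigned" \
+    --user-assigned-identity "/subscriptions/<subscription-id>/resourceGroups/<identity-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>" \
+    --location <location-name> -g <resource-group-name> -n <devcenter-name>
+```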
+
+## Assign a subscription role assignment to the managed identity
+
+The identity attached to the dev center should be granted 'Owner' access to all the deployment subscriptions, as well as 'Reader' access to all subscriptions that a project lives in. When a user creates or deploys an environment, the service grants appropriate access to the deployment identity attached to the project environment type and uses that identity to perform the deployment on behalf of the user. This approach lets developers create environments without having direct access to the subscription and abstracts Azure governance-related constructs from them.
+
+1. To add a role assignment to the managed identity:
+ 1. For a system-assigned identity, select **Azure role assignments**.
+
+ :::image type="content" source="./media/configure-managed-identity/system-assigned-azure-role-assignment.png" alt-text="Screenshot showing the Azure role assignment for system assigned identity.":::
+
+ 1. For the user-assigned identity, select the specific identity, and then select the **Azure role assignments** from the left pane.
+
+1. On the **Azure role assignments** page, select **Add role assignment (Preview)** and provide the following details:
+    1. For **Scope**, select **Subscription** from the dropdown.
+    1. For **Subscription**, select the target subscription to use from the dropdown.
+    1. For **Role**, select **Owner** from the dropdown.
+    1. Select **Save**.
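+
+You can grant the same access from the command line with `az role assignment create`. A sketch, assuming you have the managed identity's principal (object) ID at hand:
+
+```azurecli
+# Grant the dev center identity Owner access on a deployment subscription.
+az role assignment create \
+    --assignee "<managed-identity-principal-id>" \
+    --role "Owner" \
+    --scope "/subscriptions/<deployment-subscription-id>"
+```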
+
+## Assign the managed identity access to the Key Vault secret
+
+>[!NOTE]
+> Providing the identity with access to the Key Vault secret, which contains the repo's personal access token (PAT), is a prerequisite to adding the repo as a catalog.
+
+A Key Vault can be configured to use either the [Vault access policy](../key-vault/general/assign-access-policy.md) permission model or the [Azure role-based access control](../key-vault/general/rbac-guide.md) permission model. To grant the identity access to the secret:
+
+1. If the Key Vault is configured to use the **Vault access policy** permission model:
+ 1. Access the [Azure portal](https://portal.azure.com/) and search for the specific Key Vault that contains the PAT secret.
+ 1. Select **Access policies** from the left pane.
+ 1. Select **+ Create**.
+ 1. On the **Create an access policy** page, provide the following details:
+ 1. Enable **Get** for **Secret permissions** on the **Permissions** page.
+ 1. Select the identity that is attached to the dev center as **Principal**.
+ 1. Select **Create** on the **Review + create** page.
+
+1. If the Key Vault is configured to use the **Azure role-based access control** permission model:
+ 1. Select the specific identity and select the **Azure role assignments** from the left pane.
+ 1. Select **Add Role Assignment** and provide the following details:
+ 1. Select Key Vault from the **Scope** dropdown.
+ 1. Select the **Subscription** in which the Key Vault exists.
+ 1. Select the specific Key Vault for **Resource**.
+ 1. Select **Key Vault Secrets User** from the dropdown for **Role**.
+ 1. Select **Save**.
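+
+Both permission models can also be configured from the Azure CLI. A sketch, assuming you know the identity's principal (object) ID and the Key Vault's name or resource ID:
+
+```azurecli
+# Vault access policy model: allow the identity to read secrets.
+az keyvault set-policy --name <key-vault-name> --object-id <managed-identity-principal-id> --secret-permissions get
+
+# Azure RBAC model: assign the Key Vault Secrets User role scoped to the vault.
+az role assignment create \
+    --assignee "<managed-identity-principal-id>" \
+    --role "Key Vault Secrets User" \
+    --scope "<key-vault-resource-id>"
+```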
+
+## Next steps
+
+* [Configure a Catalog](how-to-configure-catalog.md)
+* [Configure a project environment type](how-to-configure-project-environment-types.md)
deployment-environments How To Configure Project Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-project-admin.md
+
+ Title: Configure deployment environments project admin access
+
+description: Learn how to configure access for dev managers by using the DevCenter Project Admin built-in role.
++++ Last updated : 10/12/2022+++
+# Provide access to Dev Managers
+
+You can create multiple projects associated with the dev center to align with each team's specific requirements. By using the built-in DevCenter Project Admin role, you can delegate project administration to a member of a team. Project Admins can configure [project environment types](concept-environments-key-concepts.md#project-environment-types) to enable developers to create different types of [environments](concept-environments-key-concepts.md#environments) and apply different settings to each environment type.
+
+The DevCenter Project Admin role can be assigned at either the project level or the specific project environment type level.
+
+Based on the scope that users are granted access to, a DevCenter Project Admin can:
+
+* View, add, update, disable, or delete the project environment types
+* Create an environment
+* Read, write, delete, or perform actions (deploy, reset, etc.) on their own environment
+* Read or perform actions (deploy, reset, etc.) on environments created by other users
+
+When the role is assigned at the project level, the DevCenter Project Admin can perform the preceding actions on all environment types enabled in the project. When the role is assigned for one or more specific environment types, the DevCenter Project Admin can perform the actions only on those environment types.
+
+## Assign permissions to dev managers to a project
+
+1. Select the project you want to provide your development team members access to.
+2. Select **Access control (IAM)** from the left menu.
+
+ :::image type="content" source=".\media\configure-project-admin\access-control-page.png" alt-text="Screenshot showing link to access control page.":::
+
+3. Select **Add** > **Add role assignment**.
+
+ :::image type="content" source=".\media\configure-project-admin\add-role-assignment.png" alt-text="Screenshot showing Add role assignment menu option.":::
+
+4. On the Add role assignment page, on the Role tab, search for *DevCenter Project Admin*, select the **DevCenter Project Admin** built-in role, and then select **Next**.
+
+ :::image type="content" source=".\media\configure-project-admin\built-in-role.png" alt-text="Screenshot showing built-in DevCenter project admin role.":::
+
+5. On the Members tab, select **+ Select Members**.
+
+ :::image type="content" source=".\media\configure-project-admin\select-role-members.png" alt-text="Screenshot showing link to select role members pane.":::
+
+6. In **Select members**, select the Active Directory Users or Groups you want to add, and then select **Select**.
+
+7. On the Members tab, select **Review + assign**.
+
+The user can now view the project and manage all the environment types that have been enabled within it. A DevCenter Project Admin can also [create environments from the CLI](./quickstart-create-access-environments.md).
+
+## Assign permissions to dev managers to a specific environment type
+
+1. Select the project you want to provide your development team members access to.
+2. Select **Environment Types** and select the **...** beside the specific environment type.
+
+ :::image type="content" source=".\media\configure-project-admin\project-environment-types.png" alt-text="Screenshot showing the environment types associated with a project.":::
+
+3. Select **Access Control**.
+
+ :::image type="content" source=".\media\configure-project-admin\access-control-page.png" alt-text="Screenshot showing link to access control page.":::
+
+4. Select **Add** > **Add role assignment**.
+
+ :::image type="content" source=".\media\configure-project-admin\add-role-assignment.png" alt-text="Screenshot showing Add role assignment menu option.":::
+
+5. On the Add role assignment page, on the Role tab, search for **DevCenter Project Admin**, select the **DevCenter Project Admin** built-in role, and then select **Next**.
+
+ :::image type="content" source=".\media\configure-project-admin\built-in-role.png" alt-text="Screenshot showing built-in DevCenter project admin role.":::
+
+6. On the Members tab, select **+ Select Members**.
+7. In **Select members**, select the Active Directory Users or Groups you want to add, and then select **Select**.
+8. On the Members tab, select **Review + assign**.
+
+The user can now view the project and manage only the specific environment type that they've been granted access to. A DevCenter Project Admin can also [create environments using the CLI](./quickstart-create-access-environments.md).
+
+> [!NOTE]
+> Only users assigned the Deployment Environments User role, the DevCenter Project Admin role, or a built-in role with appropriate permissions will be able to create environments.
+
+## Next steps
+
+* [Create and Configure Projects](./quickstart-create-and-configure-projects.md)
+* [Provide access to developers](./how-to-configure-deployment-environments-user.md)
deployment-environments How To Configure Project Environment Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-project-environment-types.md
+
+ Title: Configure project environment types
+
+description: Learn how to configure environment types to define deployment settings and permissions available to developers when deploying environments in a project.
++++ Last updated : 10/12/2022+++
+# Configure project environment types
+
+Project environment types are a subset of the [environment types configured per dev center](how-to-configure-devcenter-environment-types.md) and help you pre-configure the different types of environments that a specific development team can create. In Azure Deployment Environments Preview, [environment types](concept-environments-key-concepts.md#project-environment-types) added to a project are available to developers when they deploy environments, and they determine the subscription and identity used for those deployments.
+
+Project environment types enable the Dev Infra teams to:
+- Configure the target subscription in which Azure resources will be created, per environment type per project.
+  You can provide different subscriptions for different environment types in a given project and thereby automatically apply the right set of policies to different environments. This also abstracts Azure governance-related concepts from your development teams.
+- Pre-configure the managed identity that will be used to perform the deployment and the access levels development teams get after the specific environment is created.
+
+In this article, you'll learn how to:
+
+* Add a new project environment type
+* Update a project environment type
+* Enable or disable a project environment type
+* Delete a project environment type
+
+> [!IMPORTANT]
+> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+- A [dev center level environment type](how-to-configure-devcenter-environment-types.md).
+
+## Add a new project environment type
+
+>[!NOTE]
+> To configure project environment types, you'll need write [access](/devops/organizations/security/add-users-team-project) to the specific project.
+
+Configuring a new project environment type enables your development teams to create environments of that type. The environment is created in the mapped subscription by using the configured deployment identity, with the configured permissions granted on the resources created as part of the environment and all the associated policies automatically applied.
+
+Add a new project environment type as follows:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Access Azure Deployment Environments.
+1. Select **Projects** from the left pane, and then select the specific Project.
+1. Select **Environment types** from the left pane.
+1. Select **+ Add**.
+
+ :::image type="content" source="./media/configure-project-environment-types/add-new-project-environment-type.png" alt-text="Screenshot showing adding a project environment type.":::
+
+1. On the **Add environment type to Project** page, provide the following details:
+
+ |Name |Value |
+ ||-|
+ |**Type**| Select a dev center level environment type to enable for the specific project.|
+ |**Deployment Subscription**| Select the target subscription in which the environments will be created.|
+ |**Deployment Identity** | Select either a system assigned identity or a user assigned managed identity that'll be used to perform deployments on behalf of the user.|
+ |**Permissions on environment resources** > **Environment Creator Role(s)**| Select the role(s) that'll get access to the environment resources.|
+ |**Permissions on environment resources** > **Additional access** | Select the user(s) or Azure Active Directory (Azure AD) group(s) that'll be granted specific role(s) on the environment resources.|
+ |**Tags** (optional) | Provide a **Name** and **Value**. These tags will be applied on all resources created as part of the environments.|
+
+ :::image type="content" source="./media/configure-project-environment-types/add-project-environment-type-page.png" alt-text="Screenshot showing adding details on the add project environment type page.":::
+
+> [!NOTE]
+> At least one identity (system assigned or user assigned) must be enabled for deployment identity and will be used to perform the environment deployment on behalf of the developer. Additionally, the identity attached to the dev center should be [granted 'Owner' access to the deployment subscription](how-to-configure-managed-identity.md) configured per environment type.
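+
+You can also add a project environment type from the Azure CLI. A sketch based on the `az devcenter admin project-environment-type create` command covered in the CLI article; the deployment target subscription ID and identity type are placeholders, and you can run the command with `--help` to see the full set of parameters:
+
+```azurecli
+# Enable an environment type for a project and map it to a deployment subscription.
+az devcenter admin project-environment-type create --dev-center-name <devcenter-name> \
+    --name <environment-type-name> --resource-group <resource-group-name> \
+    --deployment-target-id "/subscriptions/<deployment-subscription-id>" \
+    --status Enabled --type SystemAssigned
+```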
+
+## Update a project environment type
+
+A project environment type can be updated to use a different subscription or deployment identity when deploying environments. Updating a project environment type affects only the creation of new environments; existing environments continue to exist in the previously mapped subscription.
+
+Update an existing project environment type as follows:
+
+1. Navigate to the Azure Deployment Environments Project.
+1. Select **Environment types** from the left pane of the specific Project.
+1. Select the environment type you want to update.
+1. Select the **Edit** icon (![image](./media/configure-project-environment-types/edit-icon.png)) on the specific row.
+1. On the **Edit environment type** page, update the previous configuration, and then select **Submit**.
+
+## Enable or disable a project environment type
+
+A project environment type can be disabled to prevent developers from creating new environments with the specific environment type. Once a project environment type is disabled, it cannot be used to create a new environment. Existing environments are not affected.
+
+When a disabled environment type is re-enabled, development teams will be able to create new environments with that specific environment type.
+
+1. Navigate to the Azure Deployment Environments project.
+1. Select **Environment types** on the left pane of the specific project.
+1. Select the specific environment type to enable or disable.
+1. Select **Enable** or **Disable** from the command bar and then confirm.
+
+## Delete a project environment type
+
+You can delete a specific project environment type only if it is not being used by any deployed environments in the Project. Once you delete a specific project environment type, development teams will no longer be able to use it to create environments.
+
+1. Navigate to the Azure Deployment Environments project.
+1. Select **Environment types** from the left pane of the specific project.
+1. Select a project environment type to delete.
+1. Select **Delete** from the command bar.
+1. Confirm to delete the project environment type.
+
+## Next steps
+
+* Get started with [creating environments](quickstart-create-access-environments.md)
deployment-environments How To Configure Use Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-use-cli.md
+
+ Title: Configure and use Deployment Environments Azure CLI extension
+
+description: Learn how to set up and use the Deployment Environments Azure CLI extension to configure the Azure Deployment Environments service.
++++ Last updated : 10/12/2022+++
+# Configure Azure Deployment Environments service using Azure CLI
+
+This article shows you how to use the Deployment Environments Azure CLI extension to configure the Azure Deployment Environments Preview service. In Azure Deployment Environments Preview, you'll use the Deployment Environments Azure CLI extension to create [environments](./concept-environments-key-concepts.md#environments).
+
+> [!IMPORTANT]
+> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Setup
+
+1. Install the Deployment Environments Azure CLI Extension:
+ - [Download and install the Azure CLI](/cli/azure/install-azure-cli).
+ - Install the Deployment Environments AZ CLI extension:
+
+ **Automated install**
+
+ Execute the script https://aka.ms/DevCenterEnvironments/Install-DevCenterEnvironmentsCli.ps1 directly in PowerShell to install:
+ ```powershell
+ iex "& { $(irm https://aka.ms/DevCenterEnvironments/Install-DevCenterEnvironmentsCli.ps1 ) }"
+ ```
+
+ This will uninstall any existing dev center extension and install the latest version.
+
+ **Manual install**
+
+ Run the following command in the Azure CLI:
+ ```azurecli
+ az extension add --source https://fidalgosetup.blob.core.windows.net/cli-extensions/devcenter-environments-0.1.0-py3-none-any.whl
+ ```
+1. Sign in to Azure CLI.
+ ```azurecli
+ az login
+ ```
+
+1. Set the default subscription to the subscription where you'll be creating your specific Deployment Environment resources.
+ ```azurecli
+ az account set --subscription {subscriptionId}
+ ```
+
+## Commands
+
+**Create a new resource group**
+
+```azurecli
+az group create -l <region-name> -n <resource-group-name>
+```
+
+Optionally, set defaults (which means there is no need to pass the argument into each command):
+
+```azurecli
+az configure --defaults group=<resource-group-name>
+```
+
+**Get help for a command**
+
+```azurecli
+az devcenter admin <command> --help
+```
+```azurecli
+az devcenter dev <command> --help
+```
+
+### Dev centers
+
+**Create a dev center with User Assigned identity**
+
+```azurecli
+az devcenter admin devcenter create --identity-type "UserAssigned" --user-assigned-identity \
+    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/identityGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/testidentity1" \
+    --location <location-name> -g <resource-group-name> -n <name>
+```
+
+**Create a dev center with System Assigned identity**
+
+```azurecli
+az devcenter admin devcenter create --location <location-name> -g <resource-group-name> -n <name> \
+ --identity-type "SystemAssigned"
+```
+
+**List dev centers (in the selected subscription if resource group is not specified or configured in defaults)**
+
+```azurecli
+az devcenter admin devcenter list --output table
+```
+
+**List dev centers (in the specified resource group)**
+
+```azurecli
+az devcenter admin devcenter list -g <resource-group-name>
+```
+
+**Get a specific dev center**
+
+```azurecli
+az devcenter admin devcenter show -g <resource-group-name> --name <name>
+```
+
+**Delete a dev center**
+
+```azurecli
+az devcenter admin devcenter delete -g <resource-group-name> --name <name>
+```
+
+**Force delete a dev center**
+
+```azurecli
+az devcenter admin devcenter delete -g <resource-group-name> --name <name> --yes
+```
+
+### Environment Types
+
+**Create an Environment Type**
+
+```azurecli
+az devcenter admin environment-type create --dev-center-name <devcenter-name> -g <resource-group-name> --name <name>
+```
+
+**List environment types by dev center**
+
+```azurecli
+az devcenter admin environment-type list --dev-center-name <devcenter-name> --resource-group <resource-group-name>
+```
+
+**List environment types by project**
+
+```azurecli
+az devcenter admin environment-type list --project-name <project-name> --resource-group <resource-group-name>
+```
+
+**Delete an environment type**
+
+```azurecli
+az devcenter admin environment-type delete --dev-center-name <devcenter-name> --name "{environmentTypeName}" \
+ --resource-group <resource-group-name>
+```
+
+**List environment types by dev center and project for developers**
+
+```azurecli
+az devcenter dev environment-type list --dev-center <devcenter-name> --project-name <project-name>
+```
+
+### Project Environment Types
+
+**Create project environment types**
+
+```azurecli
+az devcenter admin project-environment-type create --description "Developer/Testing environment" --dev-center-name \
+ <devcenter-name> --name "{environmentTypeName}" --resource-group <resource-group-name> \
+ --deployment-target-id "/subscriptions/00000000-0000-0000-0000-000000000000" \
+ --status Enabled --type SystemAssigned
+```
+
+**List project environment types by dev center**
+
+```azurecli
+az devcenter admin project-environment-type list --dev-center-name <devcenter-name> \
+ --resource-group <resource-group-name>
+```
+
+**List project environment types by project**
+
+```azurecli
+az devcenter admin project-environment-type list --project-name <project-name> --resource-group <resource-group-name>
+```
+
+**Delete project environment types**
+
+```azurecli
+az devcenter admin project-environment-type delete --project-name <project-name> \
+ --environment-type-name "{environmentTypeName}" --resource-group <resource-group-name>
+```
+
+**List allowed project environment types**
+
+```azurecli
+az devcenter admin project-allowed-environment-type list --project-name <project-name> \
+ --resource-group <resource-group-name>
+```
+
+### Catalogs
+
+**Create a catalog with a GitHub repository**
+
+```azurecli
+az devcenter admin catalog create --git-hub secret-identifier="https://<key-vault-name>.vault.azure.net/secrets/<secret-name>" uri=<git-clone-uri> branch=<git-branch> -g <resource-group-name> --name <name> --dev-center-name <devcenter-name>
+```
+
+**Create a catalog with an Azure DevOps repository**
+
+```azurecli
+az devcenter admin catalog create --ado-git secret-identifier="https://<key-vault-name>.vault.azure.net/secrets/<secret-name>" uri=<git-clone-uri> branch=<git-branch> -g <resource-group-name> --name <name> --dev-center-name <devcenter-name>
+```
+
+**Sync a catalog**
+
+```azurecli
+az devcenter admin catalog sync --name <name> --dev-center-name <devcenter-name> -g <resource-group-name>
+```
+
+**List catalogs in a dev center**
+
+```azurecli
+az devcenter admin catalog list -g <resource-group-name> --dev-center-name <devcenter-name>
+```
+
+**Delete a catalog**
+
+```azurecli
+az devcenter admin catalog delete -g <resource-group-name> --dev-center-name <devcenter-name> -n <name>
+```
+
+### Catalog items
+
+**List catalog items available in a project**
+
+```azurecli
+az devcenter dev catalog-item list --dev-center-name <devcenter-name> --project-name <name>
+```
+
+### Project
+
+**Create a project**
+
+```azurecli
+az devcenter admin project create -g <resource-group-name> -n <project-name> --dev-center-id <devcenter-resource-id>
+```
+
+**List projects (in the selected subscription if resource group is not specified or configured in defaults)**
+
+```azurecli
+az graph query -q "Resources | where type =~ 'microsoft.devcenter/projects' | project id, name"
+```
+
+**List projects (in the specified resource group)**
+
+```azurecli
+az devcenter admin project list -g <resource-group-name>
+```
+
+**Delete a project**
+
+```azurecli
+az devcenter admin project delete -g <resource-group-name> --name <project-name>
+```
+
+### Environments
+
+**Create an environment**
+
+```azurecli
+az devcenter dev environment create -g <resource-group-name> --dev-center-name <devcenter-name> \
+    --project-name <project-name> -n <name> --environment-type <environment-type-name> \
+    --catalog-item-name <catalog-item-name> --catalog-name <catalog-name> \
+    --parameters <deployment-parameters-json-string>
+```
+
+**Deploy an environment**
+
+```azurecli
+az devcenter dev environment deploy-action --action-id "deploy" --dev-center <devcenter-name> \
+ -g <resource-group-name> --project-name <project-name> -n <name> --parameters <parameters-json-string>
+```
+
+**List environments in a project**
+
+```azurecli
+az devcenter dev environment list --dev-center <devcenter-name> --project-name <project-name>
+```
+
+**Delete an environment**
+
+```azurecli
+az devcenter dev environment delete --dev-center <devcenter-name> --project-name <project-name> -n <name> --user-id "me"
+```
deployment-environments Overview What Is Azure Deployment Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/overview-what-is-azure-deployment-environments.md
+
+ Title: What is Azure Deployment Environments?
+description: 'Azure Deployment Environments enables developer teams to quickly spin up app infrastructure with project-based templates, minimizing set-up time while maximizing security, compliance, and cost efficiency.'
++++++ Last updated : 10/12/2022++
+# What is Azure Deployment Environments Preview?
+
+Azure Deployment Environments empowers development teams to quickly and easily spin up app infrastructure with project-based templates that establish consistency and best practices while maximizing security, compliance, and cost efficiency. This on-demand access to secure environments accelerates the different stages of the software development lifecycle in a compliant and cost-efficient manner.
+
+A Deployment Environment is a pre-configured collection of Azure resources deployed in predefined subscriptions, where Azure governance is applied based on the type of environment, such as sandbox, testing, staging, or production.
++
+With Azure Deployment Environments, your Dev Infra Admin can enforce enterprise security policies and provide a curated set of environment templates, which are predefined infrastructure-as-code templates.
+
+>[!NOTE]
+> Azure Deployment Environments Preview currently only supports Azure Resource Manager (ARM) templates.
+
+Learn more about the [key concepts for Azure Deployment Environments](./concept-environments-key-concepts.md).
+
+> [!IMPORTANT]
+> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Usage scenarios
+
+Azure Deployment Environments enables usage [scenarios](./concept-environments-scenarios.md) for both DevOps teams and developers.
+
+Some common use cases:
+
+- Quickly create on-demand Azure environments by using reusable infrastructure-as-code (IaC) templates.
+- Create [sandbox environments](concept-environments-scenarios.md#sandbox-environments-for-investigations) to test your code.
+- Pre-configure various types of environments and seamlessly integrate with your CI/CD pipeline.
+- Create pre-configured environments for training and demos.
+
+### Developer scenarios
+
+Developers have a self-service experience when working with [environments](./concept-environments-key-concepts.md#environments):
+
+>[!NOTE]
+> Developers will have a CLI-based experience to create and manage environments for Azure Deployment Environments Preview.
+
+- Deploy a pre-configured environment for any stage of your development cycle.
+- Spin up a sandbox environment to explore Azure.
+- Create PaaS and IaaS environments quickly and easily by following a few simple steps.
+- Deploy an environment easily and quickly right from where you work.
+
+### Dev Infra scenarios
+
+Azure Deployment Environments enables your Dev Infra Admin to ensure that the right set of policies and settings are applied to different types of environments, control the resource configurations that developers can create, and centrally track environments across different projects by doing the following tasks:
+
+- Provide a project-based, curated set of reusable 'infra as code' templates.
+- Define specific Azure deployment configurations per project and per environment type.
+- Provide a self-service experience without giving developers control over the subscription.
+- Track cost and ensure compliance with enterprise governance policies.
+
+Azure Deployment Environments Preview supports two [built-in roles](../role-based-access-control/built-in-roles.md):
+
+- **Dev center Project Admin**, who can create environments and manage the environment types for a project.
+- **Deployment Environments User**, who can create environments as per appropriate access.
++
+## Benefits
+
+Azure Deployment Environments provides the following benefits when creating, configuring, and managing environments in the cloud.
+
+- **Standardization and collaboration**:
+Capture and share 'infra as code' templates in source control within your team or organization to easily create on-demand environments. Promote collaboration by inner-sourcing templates from source control repositories.
+
+- **Compliance and governance**:
+Dev Infra Teams can curate environment templates to enforce enterprise security policies and map projects to Azure subscriptions, identities, and permissions by environment types.
+
+- **Project-based configurations**:
+Create and organize environment templates by the types of applications that development teams are working on, rather than maintaining an unorganized list of templates or a traditional IaC setup.
+
+- **Worry-free self-service**:
+Enable your development teams to quickly and easily create app infrastructure (PaaS, serverless, and more) resources by using a set of pre-configured templates. You can also track costs on these resources to stay within your budget.
+
+- **Integrate with your existing toolchain**:
+Use the APIs to provision environments directly from your preferred continuous integration (CI) tool, integrated development environment (IDE), or automated release pipeline. You can also use the comprehensive command-line tool.
+
+## Next steps
+Start using Azure Deployment Environments:
+
+- Learn about the [key concepts for Azure Deployment Environments](./concept-environments-key-concepts.md).
+- [Azure Deployment Environments scenarios](./concept-environments-scenarios.md).
+- [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md).
+- [Quickstart: Create and configure project](./quickstart-create-and-configure-projects.md).
+- [Quickstart: Create and access environments](./quickstart-create-access-environments.md)
deployment-environments Quickstart Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md
+
+ Title: Create and access Environments
+description: This quickstart shows you how to create and access environments in an Azure Deployment Environments Project.
+++++ Last updated : 10/12/2022++
+# Quickstart: Create and access Environments
+
+This quickstart shows you how to create and access [environments](concept-environments-key-concepts.md#environments) in an existing Azure Deployment Environments Preview Project. Only users with a [Deployment Environments User](how-to-configure-deployment-environments-user.md) role, a [DevCenter Project Admin](how-to-configure-project-admin.md) role, or a [built-in role](../role-based-access-control/built-in-roles.md) with appropriate permissions can create environments.
+
+In this quickstart, you do the following actions:
+
+* Create an environment
+* Access environments in a project
+
+> [!IMPORTANT]
+> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- [Create and configure a project](quickstart-create-and-configure-projects.md).
+- Install the Deployment Environments Azure CLI Extension
+ 1. [Download and install the Azure CLI](/cli/azure/install-azure-cli).
+ 2. Install the Deployment Environments AZ CLI extension:
+
+ **Automated install**
+ Execute the script https://aka.ms/DevCenterEnvironments/Install-DevCenterEnvironmentsCli.ps1 directly in PowerShell to install:
+ ```powershell
+ iex "& { $(irm https://aka.ms/DevCenterEnvironments/Install-DevCenterEnvironmentsCli.ps1 ) }"
+ ```
+
+ This will uninstall any existing dev center extension and install the latest version.
+
+ **Manual install**
+
+ Run the following command in the Azure CLI:
+ ```azurecli
+ az extension add --source https://fidalgosetup.blob.core.windows.net/cli-extensions/devcenter-environments-0.1.0-py3-none-any.whl
+ ```
+
+## Create an Environment
+
+Complete the following steps in the Azure CLI to create an environment and configure resources. You'll be able to view the outputs as defined in the specific Azure Resource Manager (ARM) template.
+
+1. Sign in to Azure CLI.
+ ```azurecli
+ az login
+ ```
+
+1. List all the Deployment Environments projects you have access to.
+ ```azurecli
+ az graph query -q "Resources | where type =~ 'microsoft.devcenter/projects'" -o table
+ ```
+
+1. Configure the default subscription to the subscription containing the project.
+ ```azurecli
+ az account set --subscription <name>
+ ```
+
+1. Configure the default resource group (RG) to the RG containing the project.
+ ```azurecli
+ az config set defaults.group=<name>
+ ```
+
+1. Once you have set the defaults, list the type of environments you can create in a specific project.
+ ```azurecli
+ az devcenter dev environment-type list --dev-center <name> --project-name <name> -o table
+ ```
+
+1. List the [Catalog Items](concept-environments-key-concepts.md#catalog-items) available to a specific project.
+ ```azurecli
+ az devcenter dev catalog-item list --dev-center <name> --project-name <name> -o table
+ ```
+
+1. Create an environment by using a *catalog-item* ('infra-as-code' template) from the list of available catalog items.
+ ```azurecli
+   az devcenter dev environment create -g <resource-group-name> --dev-center-name <devcenter-name> --project-name <project-name> -n <name> --environment-type <environment-type-name> --catalog-item-name <catalog-item-name> --catalog-name <catalog-name>
+ ```
+
+   If the specific *catalog-item* requires any parameters, use `--parameters` and provide the parameters as a JSON string or JSON file, for example:
+   ```azurecli
+   $params = "{ 'name': 'firstMsi', 'location': 'northeurope' }"
+   az devcenter dev environment create -g <resource-group-name> --dev-center-name <devcenter-name> --project-name <project-name> -n <name> --environment-type <environment-type-name> --catalog-item-name <catalog-item-name> --catalog-name <catalog-name> --parameters $params
+ ```
+
+> [!NOTE]
+> You can use `--help` to view more details about any command, accepted arguments, and examples. For example use `az devcenter dev environment create --help` to view more details about Environment creation.
+
+## Access Environments
+
+1. List existing environments in a specific project.
+ ```azurecli
+ az devcenter dev environment list --dev-center <devcenter-name> --project-name <project-name>
+ ```
+
+1. View the access endpoints to the various resources as defined in the ARM template outputs.
+1. Access the specific resources using the endpoints.
+
+## Next steps
+
+- [Learn how to configure a catalog](how-to-configure-catalog.md).
+- [Learn how to configure a catalog item](configure-catalog-item.md).
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
+
+ Title: Configure the Azure Deployment Environments service
+description: This quickstart shows you how to configure the Azure Deployment Environments service. You'll create a dev center, attach an identity, attach a catalog, and create environment types.
+++++ Last updated : 10/12/2022++
+# Quickstart: Configure the Azure Deployment Environments Preview service
+
+This quickstart shows you how to configure Azure Deployment Environments Preview by using the Azure portal. The Enterprise Dev Infra team typically sets up a Dev center, configures different entities within the Dev center, creates projects, and provides access to development teams. Development teams create [Environments](concept-environments-key-concepts.md#environments) using the [Catalog items](concept-environments-key-concepts.md#catalog-items), connect to individual resources, and deploy their applications.
+
+In this quickstart, you'll perform the following actions:
+
+* Create a Dev center
+* Attach an Identity
+* Attach a Catalog
+* Create Environment types
+
+> [!IMPORTANT]
+> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure RBAC role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner).
+
+## Create a Dev center
+
+The following steps illustrate how to use the Azure portal to create and configure a Dev center in Azure Deployment Environments.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/deployment-environments-add-devcenter.png" alt-text="Screenshot to create and configure an Azure Deployment Environments dev center.":::
+
+1. Select **+ Add** to create a new dev center.
+1. Add the following details on the **Basics** tab of the **Create a dev center** page.
+
+ |Name |Value |
+ |-|--|
+ |**Subscription**|Select the subscription in which you want to create the dev center.|
+ |**Resource group**|Either use an existing resource group or select **Create new**, and enter a name for the resource group.|
+ |**Name**|Enter a name for the dev center.|
+ |**Location**|Select the location/region in which you want the dev center to be created.|
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/create-devcenter-page-basics.png" alt-text="Screenshot of Basics tab of the Create a dev center page.":::
+
+1. [Optional] Select the **Tags** tab and add a **Name**/**Value** pair.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/create-devcenter-page-tags.png" alt-text="Screenshot of Tags tab of a Dev center to apply the same tag to multiple resources and resource groups.":::
+
+1. Select **Review + Create**
+1. Validate all details on the **Review** tab, and then select **Create**.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/create-devcenter-review.png" alt-text="Screenshot of Review tab of a DevCenter to validate all the details.":::
+
+1. Confirm that the dev center is created successfully by checking **Notifications**. Select **Go to resource**.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/azure-notifications.png" alt-text="Screenshot of Notifications to confirm the creation of dev center.":::
+
+1. Confirm that you see the dev center on the **Dev centers** page.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/deployment-envrionments-devcenter-created.png" alt-text="Screenshot of Dev centers page to confirm the dev center is created and displayed on the page":::
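+
+If you prefer to script dev center creation, the Deployment Environments CLI extension offers an equivalent command. A sketch that creates the dev center with a system-assigned managed identity (identities are covered in the next section):
+
+```azurecli
+# Create a dev center with a system-assigned managed identity.
+az devcenter admin devcenter create --location <location-name> -g <resource-group-name> -n <devcenter-name> \
+    --identity-type "SystemAssigned"
+```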
+
+## Attach an Identity
+
+After you've created a dev center, the next step is to attach an [identity](concept-environments-key-concepts.md#identities) to the dev center. Learn about the [types of identities](how-to-configure-managed-identity.md#types-of-managed-identities) (system assigned managed identity or a user assigned managed identity) you can attach.
+
+### Using a system-assigned managed identity
+
+1. Create a [system-assigned managed identity](how-to-configure-managed-identity.md#configure-a-system-assigned-managed-identity-for-a-dev-center).
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/system-assigned-managed-identity.png" alt-text="Screenshot of system assigned managed identity.":::
+
+1. After the system-assigned managed identity is created, select **Azure role assignments** to grant **Owner** access on the subscriptions that will be used to configure [project environment types](concept-environments-key-concepts.md#project-environment-types). Also ensure that the identity has [access to the Key Vault secret](how-to-configure-managed-identity.md#assign-the-managed-identity-access-to-the-key-vault-secret) that contains the personal access token (PAT) used to access your repository.
+
+### Using an existing user-assigned managed identity
+
+1. Attach a [user-assigned managed identity](how-to-configure-managed-identity.md#configure-a-user-assigned-managed-identity-for-a-dev-center).
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/user-assigned-managed-identity.png" alt-text="Screenshot of user assigned managed identity.":::
+
+1. After the identity is attached, ensure that it has **Owner** access on the subscriptions that will be used to configure [project environment types](how-to-configure-project-environment-types.md) and **Reader** access on every subscription that a project lives in. Also ensure that the identity has [access to the Key Vault secrets](how-to-configure-managed-identity.md#assign-the-managed-identity-access-to-the-key-vault-secret) that contain the personal access token (PAT) used to access the repository.
+
+>[!NOTE]
+> The [identity](concept-environments-key-concepts.md#identities) attached to the dev center should be granted **Owner** access to the deployment subscription configured for each environment type. A CLI sketch of this role assignment follows this note.
+
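+As a sketch of the role assignment described in the note, assuming you already have the identity's principal (object) ID and the deployment subscription ID (placeholders below), you can grant **Owner** access with the Azure CLI:
+
+```azurecli
+# Grant the dev center identity Owner access on the deployment subscription
+az role assignment create `
+--assignee-object-id {identityPrincipalId} `
+--assignee-principal-type ServicePrincipal `
+--role "Owner" `
+--scope "/subscriptions/{deploymentSubscriptionId}"
+```
+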
+## Attach a catalog
+
+**Prerequisite**: Before you attach a [catalog](concept-environments-key-concepts.md#catalogs), store the personal access token (PAT) as a [Key Vault secret](../key-vault/secrets/quick-create-portal.md) and copy the **Secret Identifier**. Ensure that the [identity](concept-environments-key-concepts.md#identities) attached to the dev center has [**Get** access to the secret](../key-vault/general/assign-access-policy.md).
+
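+As a sketch of these prerequisites, assuming you already have a key vault and the object ID of the dev center's identity (placeholders below), you can store the PAT and grant **Get** access with the Azure CLI:
+
+```azurecli
+# Store the repository PAT as a Key Vault secret and return its Secret Identifier
+az keyvault secret set --vault-name {keyVaultName} `
+--name {secretName} --value {personalAccessToken} `
+--query id --output tsv
+
+# Allow the dev center identity to read (Get) secrets in the vault
+az keyvault set-policy --name {keyVaultName} `
+--object-id {identityPrincipalId} --secret-permissions get
+```
+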
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Access Azure Deployment Environments.
+1. Select your dev center from the list.
+1. Select **Catalogs** from the left pane and select **+ Add**.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/catalogs-page.png" alt-text="Screenshot of Catalogs page.":::
+
+1. On the **Add New Catalog** page, provide the following details, and then select **Add**.
+
+ |Name |Value |
+ ||-|
+ |**Name**|Provide a name for your catalog.|
+    |**Git clone URI**|Provide the URI of your GitHub or Azure DevOps (ADO) repository.|
+    |**Branch**|Provide the repository branch that you want to connect to.|
+    |**Folder path**|Provide the path within the repository in which the [catalog items](concept-environments-key-concepts.md#catalog-items) exist.|
+    |**Secret identifier**|Provide the identifier of the Key Vault secret that contains your personal access token (PAT) for the repository.|
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/add-new-catalog-form.png" alt-text="Screenshot of add new catalog page.":::
+
+1. Confirm that the catalog is successfully added by checking the **Notifications**.
+
+## Create environment types
+
+Environment types help you define the different types of environments your development teams can deploy. You can apply different settings per environment type.
+
+1. Select **Environment types** from the left pane, and then select **+ Create**.
+1. On the **Create environment type** page, provide the following details and select **Add**.
+
+ |Name |Value |
+ ||-|
+ |**Name**|Add a name for the environment type.|
+ |**Tags**|Provide a **Name** and **Value**.|
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/create-environment-type.png" alt-text="Screenshot of Create environment type form.":::
+
+1. Confirm that the environment type is added by checking the **Notifications**.
+
+Environment types added to the dev center are available in each project it contains, but they aren't enabled by default. When an environment type is enabled at the project level, it determines the managed identity and subscription that are used to deploy environments.
+
+## Next steps
+
+In this quickstart, you created a dev center and configured it with an identity, a catalog, and environment types. To learn about how to create and configure a project, advance to the next quickstart:
+
+* [Quickstart: Create and Configure projects](./quickstart-create-and-configure-projects.md)
deployment-environments Quickstart Create And Configure Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md
+
+ Title: Set up an Azure Deployment Environments Project
+description: This quickstart shows you how to create and configure an Azure Deployment Environments project and associate it with a dev center.
+++++ Last updated : 10/12/2022++
+# Quickstart: Configure an Azure Deployment Environments Project
+
+This quickstart shows you how to create and configure an Azure Deployment Environments Preview project and associate it with the dev center created in [Quickstart: Configure an Azure Deployment Environments service](./quickstart-create-and-configure-devcenter.md). The enterprise dev infrastructure team typically creates projects and provides access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using the [catalog items](concept-environments-key-concepts.md#catalog-items), connect to individual resources, and deploy their applications.
+
+In this quickstart, you'll learn how to:
+
+* Create a project
+* Configure a project
+* Provide access to the development team
+
+> [!IMPORTANT]
+> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure RBAC role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner).
+
+## Create a project
+
+Create and configure a project in your dev center as follows (an equivalent Azure CLI sketch follows the portal steps):
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Access Azure Deployment Environments.
+1. Select **Projects** from the left pane.
+1. Select **+ Create**.
+1. On the **Basics** tab of the **Create a project** page, provide the following details:
+
+ |Name |Value |
+ |-|--|
+ |**Subscription** |Select the subscription in which you want to create the project. |
+ |**Resource group**|Either use an existing resource group or select **Create new**, and enter a name for the resource group. |
+    |**Dev center**|Select the dev center to associate with this project. All dev center-level settings will then apply to the project. |
+ |**Name**|Add a name for the project. |
+    |[Optional] **Description**|Add any project-related details. |
+
+ :::image type="content" source="media/quickstart-create-configure-projects/create-project-page-basics.png" alt-text="Screenshot of the Basics tab of the Create a project page.":::
+
+1. [Optional] On the **Tags** tab, add a **Name**/**Value** pair that you want to assign.
+
+ :::image type="content" source="media/quickstart-create-configure-projects/create-project-page-tags.png" alt-text="Screenshot of the Tags tab of the Create a project page.":::
+
+1. On the **Review + create** tab, validate all the details and select **Create**:
+
+ :::image type="content" source="media/quickstart-create-configure-projects/create-project-page-review-create.png" alt-text="Screenshot of selecting the Create button to validate and create a project.":::
+
+1. Confirm that the project is created successfully by checking the **Notifications**. Select **Go to resource**.
+
+1. Confirm that you see the **Project** page.
+
+ :::image type="content" source="media/quickstart-create-configure-projects/created-project.png" alt-text="Screenshot of the Project page.":::
+
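+As mentioned earlier, the following is a minimal Azure CLI sketch of the same project creation. It assumes the preview `devcenter` CLI extension is installed; the resource group, project, and dev center values are placeholders.
+
+```azurecli
+az devcenter admin project create -g {resourceGroupName} `
+-n {projectName} `
+--description "{projectDescription}" `
+--devcenter-id "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DevCenter/devcenters/{devCenterName}"
+```
+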
+## Configure a project
+
+Add a [project environment type](how-to-configure-project-environment-types.md) as follows:
+
+1. On the Project page, select **Environment types** from the left pane and select **+ Add**.
+
+ :::image type="content" source="media/quickstart-create-configure-projects/add-environment-types.png" alt-text="Screenshot of the Environment types page.":::
+
+1. On the **Add environment type to Project** page, provide the following details:
+
+ |Name |Value |
+ ||-|
+ |**Type**| Select a dev center level environment type to enable for the specific project.|
+ |**Deployment Subscription**| Select the target subscription in which the environments will be created.|
+    |**Deployment Identity** | Select either a system-assigned managed identity or a user-assigned managed identity that will be used to perform deployments on behalf of the user.|
+    |**Permissions on environment resources** > **Environment Creator Role(s)**| Select the role(s) that will be granted access to the environment resources.|
+    |**Permissions on environment resources** > **Additional access** | Select the user(s) or Azure Active Directory (Azure AD) group(s) that will be granted specific role(s) on the environment resources.|
+ |**Tags** | Provide a **Name** and **Value**. These tags will be applied on all resources created as part of the environments.|
+
+ :::image type="content" source="./media/configure-project-environment-types/add-project-environment-type-page.png" alt-text="Screenshot showing adding details on the add project environment type page.":::
++
+> [!NOTE]
+> At least one identity (system-assigned or user-assigned) must be enabled as the deployment identity. It's used to perform the environment deployment on behalf of the developer. Additionally, the identity attached to the dev center should be [granted 'Owner' access to the deployment subscription](how-to-configure-managed-identity.md) configured for each environment type.
+
+## Provide access to the development team
+
+1. On the **Project** page, select **Access Control (IAM)** from the left pane.
+1. Select **+ Add** > **Add role assignment**.
+
+ :::image type="content" source="media/quickstart-create-configure-projects/project-access-control-page.png" alt-text="Screenshot of the Access control page.":::
+
+1. On the **Add role assignment** page, provide the following details, and select **Save**:
+ 1. On the **Role** tab, select either [DevCenter Project Admin](how-to-configure-project-admin.md) or [Deployment Environments user](how-to-configure-deployment-environments-user.md).
+ 1. On the **Members** tab, select either a **User, group, or service principal** or a **Managed identity** to assign access.
+
+ :::image type="content" source="media/quickstart-create-configure-projects/add-role-assignment.png" alt-text="Screenshot of the Add role assignment page.":::
+
+>[!NOTE]
+> Only users who have the [Deployment Environments user](how-to-configure-deployment-environments-user.md) role, the [DevCenter Project Admin](how-to-configure-project-admin.md) role, or a built-in role with the appropriate permissions can create environments. A CLI sketch for assigning one of these roles follows this note.
+
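+As a sketch of the role assignment in the note, assuming {userObjectId} is the Azure AD object ID of a developer and that the role name matches what the portal shows, you can also grant access at the project scope with the Azure CLI:
+
+```azurecli
+az role assignment create `
+--assignee-object-id {userObjectId} `
+--assignee-principal-type User `
+--role "Deployment Environments User" `
+--scope "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DevCenter/projects/{projectName}"
+```
+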
+## Next steps
+
+In this quickstart, you created a project and granted access to your development team. To learn about how your development team members can create environments, advance to the next quickstart:
+
+* [Quickstart: Create & access Environments](quickstart-create-access-environments.md)
dev-box Cli Reference Subset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/cli-reference-subset.md
+
+ Title: Microsoft Dev Box Preview Azure CLI Reference
+
+description: This article contains descriptions and definitions for a subset of the Dev Box Azure CLI extension.
+++++ Last updated : 10/12/2022+
+# Microsoft Dev Box Preview Azure CLI reference
+This article contains descriptions and definitions for a subset of the Microsoft Dev Box Preview CLI extension.
+
+> [!NOTE]
+> Microsoft Dev Box is currently in public preview. Features and commands may change. If you need additional assistance, contact the Dev Box team by using [Report a problem](https://aka.ms/devbox/report).
+
+## Prerequisites
+Install the Azure CLI and the Dev Box CLI extension as described in [Microsoft Dev Box CLI](how-to-install-dev-box-cli.md).
+## Commands
+
+* [Azure Compute Gallery](#azure-compute-gallery)
+* [DevCenter](#devcenter)
+* [Project](#project)
+* [Network Connection](#network-connection)
+* [Dev Box Definition](#dev-box-definition)
+* [Dev Box Pool](#dev-box-pool)
+* [Dev Boxes](#dev-boxes)
+
+### Azure Compute Gallery
+
+#### Create an image definition that meets all requirements
+
+```azurecli
+az sig image-definition create --resource-group {resourceGroupName} `
+--gallery-name {galleryName} --gallery-image-definition {definitionName} `
+--publisher {publisherName} --offer {offerName} --sku {skuName} `
+--os-type windows --os-state Generalized `
+--hyper-v-generation v2 `
+--features SecurityType=TrustedLaunch `
+```
+
+#### Attach a Gallery to the DevCenter
+
+```azurecli
+az devcenter admin gallery create -g demo-rg `
+--devcenter-name contoso-devcenter -n SharedGallery `
+--gallery-resource-id "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/galleries/{computeGalleryName}" `
+```
+
+### DevCenter
+
+#### Create a DevCenter
+
+```azurecli
+az devcenter admin devcenter create -g demo-rg `
+-n contoso-devcenter --identity-type UserAssigned `
+--user-assigned-identity "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{managedIdentityName}" `
+--location {regionName} `
+```
+
+### Project
+
+#### Create a Project
+
+```azurecli
+az devcenter admin project create -g demo-rg `
+-n ContosoProject `
+--description "project description" `
+--devcenter-id /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DevCenter/devcenters/{devCenterName} `
+```
+
+#### Delete a Project
+
+```azurecli
+az devcenter admin project delete `
+-g {resourceGroupName} `
+--project {projectName} `
+```
+
+### Network Connection
+
+#### Create a native AADJ Network Connection
+
+```azurecli
+az devcenter admin network-connection create --location "centralus" `
+--domain-join-type "AzureADJoin" `
+--subnet-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ExampleRG/providers/Microsoft.Network/virtualNetworks/ExampleVNet/subnets/default" `
+--name "{networkConnectionName}" --resource-group "rg1" `
+```
+
+#### Create a hybrid AADJ Network Connection
+
+```azurecli
+az devcenter admin network-connection create --location "centralus" `
+--domain-join-type "HybridAzureADJoin" --domain-name "mydomaincontroller.local" `
+--domain-password "Password value for user" --domain-username "testuser@mydomaincontroller.local" `
+--subnet-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ExampleRG/providers/Microsoft.Network/virtualNetworks/ExampleVNet/subnets/default" `
+--name "{networkConnectionName}" --resource-group "rg1" `
+```
+
+#### Attach a Network Connection to the DevCenter
+
+```azurecli
+az devcenter admin attached-network create --attached-network-connection-name westus3network `
+--devcenter-name contoso-devcenter -g demo-rg `
+--network-connection-id /subscriptions/f141e9f2-4778-45a4-9aa0-8b31e6469454/resourceGroups/demo-rg/providers/Microsoft.DevCenter/networkConnections/netset99 `
+```
+
+### Dev Box Definition
+
+#### List Dev Box Definitions in a DevCenter
+
+```azurecli
+az devcenter admin devbox-definition list `
+--devcenter-name "Contoso" --resource-group "rg1" `
+```
+
+#### List skus available in your subscription
+
+```azurecli
+az devcenter admin sku list
+```
+#### Create a Dev Box Definition with a marketplace image
+
+```azurecli
+az devcenter admin devbox-definition create -g demo-rg `
+--devcenter-name contoso-devcenter -n BaseImageDefinition `
+--image-reference id="/subscriptions/{subscriptionId}/resourceGroups/demo-rg/providers/Microsoft.DevCenter/devcenters/contoso-devcenter/galleries/Default/images/MicrosoftWindowsDesktop_windows-ent-cpc_win11-21h2-ent-cpc-m365" `
+--sku name="general_a_8c32gb_v1" `
+```
+
+#### Create a Dev Box Definition with a custom image
+
+```azurecli
+az devcenter admin devbox-definition create -g demo-rg `
+--devcenter-name contoso-devcenter -n CustomDefinition `
+--image-reference id="/subscriptions/{subscriptionId}/resourceGroups/demo-rg/providers/Microsoft.DevCenter/devcenters/contoso-devcenter/galleries/SharedGallery/images/CustomImageName" `
+--os-storage-type "ssd_1024gb" --sku name=general_a_8c32gb_v1
+```
+
+### Dev Box Pool
+
+#### Create a Pool
+
+```azurecli
+az devcenter admin pool create -g demo-rg `
+--project-name ContosoProject -n MarketplacePool `
+--devbox-definition-name Definition `
+--network-connection-name westus3network `
+--license-type Windows_Client --local-administrator Enabled `
+```
+
+#### Get Pool
+
+```azurecli
+az devcenter admin pool show --resource-group "{resourceGroupName}" `
+--project-name {projectName} --name "{poolName}" `
+```
+
+#### List Pools
+
+```azurecli
+az devcenter admin pool list --resource-group "{resourceGroupName}" `
+--project-name {projectName} `
+```
+
+#### Update Pool
+
+Update Network Connection
+
+```azurecli
+az devcenter admin pool update `
+--resource-group "{resourceGroupName}" `
+--project-name {projectName} `
+--name "{poolName}" `
+--network-connection-name {networkConnectionName}
+```
+
+Update Dev Box Definition
+
+```azurecli
+az devcenter admin pool update `
+--resource-group "{resourceGroupName}" `
+--project-name {projectName} `
+--name "{poolName}" `
+--devbox-definition-name {devBoxDefinitionName} `
+```
+
+#### Delete Pool
+
+```azurecli
+az devcenter admin pool delete `
+--resource-group "{resourceGroupName}" `
+--project-name "{projectName}" `
+--name "{poolName}" `
+```
+
+### Dev Boxes
+
+#### List available Projects
+
+```azurecli
+az devcenter dev project list `
+--devcenter {devCenterName}
+```
+
+#### List Pools in a Project
+
+```azurecli
+az devcenter dev pool list `
+--devcenter {devCenterName} `
+--project-name {ProjectName} `
+```
+
+#### Create a dev box
+
+```azurecli
+az devcenter dev dev-box create `
+--devcenter {devCenterName} `
+--project-name {projectName} `
+--pool-name {poolName} `
+-n {devBoxName} `
+```
+
+#### Get web connection URL for a dev box
+
+```azurecli
+az devcenter dev dev-box show-remote-connection `
+--devcenter {devCenterName} `
+--project-name {projectName} `
+--user-id "me" `
+-n {devBoxName} `
+```
+
+#### List your Dev Boxes
+
+```azurecli
+az devcenter dev dev-box list --devcenter {devCenterName} `
+```
+
+#### View details of a Dev Box
+
+```azurecli
+az devcenter dev dev-box show `
+--devcenter {devCenterName} `
+--project-name {projectName} `
+-n {devBoxName}
+```
+
+#### Stop a Dev Box
+
+```azurecli
+az devcenter dev dev-box stop `
+--devcenter {devCenterName} `
+--project-name {projectName} `
+--user-id "me" `
+-n {devBoxName} `
+```
+
+#### Start a Dev Box
+
+```azurecli
+az devcenter dev dev-box start `
+--devcenter {devCenterName} `
+--project-name {projectName} `
+--user-id "me" `
+-n {devBoxName} `
+```
+
+## Next steps
+
+Learn how to install the Azure CLI and the Dev Box CLI extension at:
+
+- [Microsoft Dev Box CLI](./how-to-install-dev-box-cli.md)
dev-box Concept Dev Box Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-concepts.md
Title: Microsoft Dev Box key concepts-
-description: Learn key concepts and terminology for Microsoft Dev Box.
+ Title: Microsoft Dev Box Preview key concepts
+
+description: Learn key concepts and terminology for Microsoft Dev Box Preview.
Previously updated : 08/10/2022 Last updated : 10/12/2022
Customer intent: As a developer I want to understand Dev Box concepts and terminology so that I can set up Dev Box environment. -->
-# Microsoft Dev Box key concepts
+# Microsoft Dev Box Preview key concepts
This article describes the key concepts and components of Microsoft Dev Box.
dev-box How To Configure Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-azure-compute-gallery.md
Title: Configure an Azure Compute Gallery-+ description: 'Learn how to create a repository for managing and sharing Dev Box images.' Previously updated : 07/28/2022 Last updated : 10/12/2022
You can detach galleries from dev centers so that their images can no longer be
The gallery will be detached from the dev center. The gallery and its images won't be deleted, and you can reattach it if necessary. ## Next steps
-Learn more about Microsoft Dev Box:
-- [Microsoft Dev Box key concepts](./concept-dev-box-concepts.md)
+Learn more about Microsoft Dev Box Preview:
+- [Microsoft Dev Box Preview key concepts](./concept-dev-box-concepts.md)
dev-box How To Dev Box User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-dev-box-user.md
Title: Provide access to dev box users-+ description: Learn how to provide access to projects for dev box users so that they can create and manage dev boxes. Previously updated : 04/15/2022 Last updated : 10/12/2022 # Provide access to projects for dev box users
-Team members must have access to a specific Dev Box project before they can create dev boxes. By using the built-in DevCenter Dev Box User role, you can assign permissions to Active Directory Users or Groups at the project level.
+Team members must have access to a specific Microsoft Dev Box Preview project before they can create dev boxes. By using the built-in DevCenter Dev Box User role, you can assign permissions to Active Directory Users or Groups at the project level.
A DevCenter Dev Box User can:
dev-box How To Install Dev Box Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-install-dev-box-cli.md
+
+ Title: Install the Microsoft Dev Box Preview Azure CLI extension
+
+description: Learn how to install the Azure CLI and the Microsoft Dev Box Preview CLI extension so you can create Dev Box resources from the command line.
+++++ Last updated : 10/12/2022
+Customer intent: As a dev infra admin, I want to install the Dev Box CLI extension so that I can create Dev Box resources from the command line.
++
+# Microsoft Dev Box Preview CLI
+
+In addition to the Azure admin portal and the Dev Box user portal, you can use the Dev Box Azure CLI extension to create resources.
+
+## Install the Dev Box CLI extension
+
+1. Download and install the [Azure CLI](/cli/azure/install-azure-cli).
+
+1. Install the Dev Box Azure CLI extension:
+ #### [Install by using a PowerShell script](#tab/Option1/)
+
+    The script at <https://aka.ms/DevCenter/Install-DevCenterCli.ps1> uninstalls any existing Dev Box CLI extension and installs the latest version:
+
+ ```azurepowershell
+ write-host "Setting Up DevCenter CLI"
+
+ # Get latest version
+ $indexResponse = Invoke-WebRequest -Method Get -Uri "https://fidalgosetup.blob.core.windows.net/cli-extensions/index.json" -UseBasicParsing
+ $index = $indexResponse.Content | ConvertFrom-Json
+ $versions = $index.extensions.devcenter
+ $latestVersion = $versions[0]
+ if ($latestVersion -eq $null) {
+ throw "Could not find a valid version of the CLI."
+ }
+
+ # remove existing
+ write-host "Attempting to remove existing CLI version (if any)"
+ az extension remove -n devcenter
+
+ # Install new version
+ $downloadUrl = $latestVersion.downloadUrl
+ write-host "Installing from url " $downloadUrl
+ az extension add --source=$downloadUrl -y
+ ```
+
+ To execute the script directly in PowerShell:
+
+ ```azurecli
+ iex "& { $(irm https://aka.ms/DevCenter/Install-DevCenterCli.ps1 ) }"
+ ```
+
+    The final command in the script specifies the source from which the extension is downloaded. If you want to install the extension from a different location, update the `--source` value in the script to point to the downloaded file in that location.
+
+ #### [Install manually](#tab/Option2/)
+
+ Remove existing extension if one exists:
+
+ ```azurecli
+ az extension remove --name devcenter
+ ```
+
+ Manually run this command in the CLI:
+
+ ```azurecli
+ az extension add --source https://fidalgosetup.blob.core.windows.net/cli-extensions/devcenter-0.1.0-py3-none-any.whl
+ ```
+
+1. Verify that the Dev Box CLI extension installed successfully by using the following command:
+
+ ```azurecli
+ az extension list
+ ```
+
+ You will see the devcenter extension listed:
+ :::image type="content" source="media/how-to-install-dev-box-cli/dev-box-cli-installed.png" alt-text="Screenshot showing the dev box extension listed.":::
+
+## Configure your Dev Box CLI
+
+1. Sign in to the Azure CLI with your work account.
+
+ ```azurecli
+ az login
+ ```
+
+1. Set your default subscription to the subscription in which you'll create your Dev Box resources:
+
+ ```azurecli
+ az account set --subscription {subscriptionId}
+ ```
+
+1. Set the default resource group so that you don't need to pass it to each command:
+
+ ```azurecli
+ az configure --defaults group={resourceGroupName}
+ ```
+
+1. Get help for a command:
+
+ ```azurecli
+ az devcenter admin --help
+ ```
+
+## Next steps
+
+Discover the Dev Box commands you can use at:
+
+- [Microsoft Dev Box Preview Azure CLI reference](./cli-reference-subset.md)
dev-box How To Manage Dev Box Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-pools.md
Title: How to manage a dev box pool-
-description: This article describes how to create, and delete Microsoft Dev Box dev box pools.
+
+description: This article describes how to create, and delete Microsoft Dev Box Preview dev box pools.
Previously updated : 09/16/2022 Last updated : 10/12/2022
The following steps show you how to create a dev box pool associated with a proj
<!-- how many dev box pools can you create -->
-If you don't have an available dev center with an existing dev box definition and network connection, follow the steps in [Quickstart: Configure the Microsoft Dev Box service](quickstart-configure-dev-box-service.md) to create them.
+If you don't have an available dev center with an existing dev box definition and network connection, follow the steps in [Quickstart: Configure the Microsoft Dev Box Preview service](quickstart-configure-dev-box-service.md) to create them.
1. Sign in to the [Azure portal](https://portal.azure.com).
dev-box How To Manage Dev Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-center.md
Title: How to manage a dev center-
-description: This article describes how to create, delete, and manage Microsoft Dev Box dev centers.
+
+description: This article describes how to create, delete, and manage Microsoft Dev Box Preview dev centers.
Previously updated : 08/18/2022 Last updated : 10/12/2022
-<!-- Intent: As a dev infrastructure manager, I want to be able to manage dev centers so that I can manage my Microsoft Dev Box implementation. -->
+<!-- Intent: As a dev infrastructure manager, I want to be able to manage dev centers so that I can manage my Microsoft Dev Box Preview implementation. -->
# Manage a dev center Development teams vary in the way they function and may have different needs. A dev center helps you to manage these different scenarios by enabling you to group similar sets of projects together and apply similar settings.
The following steps show you how to create a dev center.
:::image type="content" source="./media/how-to-manage-dev-center/create-dev-center-basics.png" alt-text="Screenshot showing the Create dev center Basics tab.":::
- The currently supported Azure locations with capacity are listed here: [Microsoft Dev Box](https://aka.ms/devbox_acom).
+ The currently supported Azure locations with capacity are listed here: [Microsoft Dev Box Preview](https://aka.ms/devbox_acom).
1. [Optional] On the **Tags** tab, enter a name and value pair that you want to assign. :::image type="content" source="./media/how-to-manage-dev-center/create-dev-center-tags.png" alt-text="Screenshot showing the Create dev center Tags tab.":::
dev-box How To Manage Network Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-network-connection.md
Title: How to manage network connections-
-description: This article describes how to create, delete, attach and remove Microsoft Dev Box network connections.
+
+description: This article describes how to create, delete, attach and remove Microsoft Dev Box Preview network connections.
Previously updated : 04/15/2022 Last updated : 10/12/2022
Network ingress and egress can be controlled using a firewall, network security
If your organization routes egress traffic through a firewall, you need to open certain ports to allow the Dev Box service to function. For more information, see [Network requirements](/windows-365/enterprise/requirements-network). ## Plan a network connection
-The following steps show you how to create and configure a network connection in Microsoft Dev Box.
+The following steps show you how to create and configure a network connection in Microsoft Dev Box Preview.
### Types of Azure Active Directory Join The Dev Box service requires a configured and working Azure AD join or Hybrid AD join, which defines how dev boxes join your domain and access resources.
The network connection will no longer be available for use in the dev center.
## Next steps <!-- [Manage a dev center](./how-to-manage-dev-center.md) -->-- [Quickstart: Configure a Microsoft Dev Box Project](./quickstart-configure-dev-box-project.md)
+- [Quickstart: Configure a Microsoft Dev Box Preview Project](./quickstart-configure-dev-box-project.md)
dev-box How To Project Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-project-admin.md
Title: Manage Dev Box projects-
+ Title: Manage Microsoft Dev Box Preview projects
+ description: Learn how to manage multiple projects by delegating permissions to project admins. Previously updated : 07/29/2022 Last updated : 10/12/2022
The user will now be able to manage the project and create dev box pools within
[!INCLUDE [permissions note](./includes/note-permission-to-create-dev-box.md)] ## Next steps -- [Quickstart: Configure the Microsoft Dev Box service](quickstart-configure-dev-box-service.md)
+- [Quickstart: Configure the Microsoft Dev Box Preview service](quickstart-configure-dev-box-service.md)
dev-box Overview What Is Microsoft Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/overview-what-is-microsoft-dev-box.md
Title: What is Microsoft Dev Box?
-description: Microsoft Dev Box gives you self-service access to high-performance, preconfigured, and ready-to-code cloud-based workstations.
+ Title: What is Microsoft Dev Box Preview?
+
+description: Microsoft Dev Box Preview gives you self-service access to high-performance, preconfigured, and ready-to-code cloud-based workstations.
Previously updated : 03/21/2022 Last updated : 10/12/2022 adobe-target: true
Microsoft Dev Box bridges the gap between development teams and IT, bringing con
## Next steps Start using Microsoft Dev Box:-- [Quickstart: Configure the Microsoft Dev Box service](./quickstart-configure-dev-box-service.md)-- [Quickstart: Configure a Microsoft Dev Box Project](./quickstart-configure-dev-box-project.md)
+- [Quickstart: Configure the Microsoft Dev Box Preview service](./quickstart-configure-dev-box-service.md)
+- [Quickstart: Configure a Microsoft Dev Box Preview project](./quickstart-configure-dev-box-project.md)
- [Quickstart: Create a Dev Box](./quickstart-create-dev-box.md)
dev-box Quickstart Configure Dev Box Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-project.md
Title: Configure a Microsoft Dev Box project
-description: 'This quickstart shows you how to configure a Microsoft Dev Box project, create a dev box pool and provide access to dev boxes for your users.'
+ Title: Configure a Microsoft Dev Box Preview project
+
+description: 'This quickstart shows you how to configure a Microsoft Dev Box Preview project, create a dev box pool and provide access to dev boxes for your users.'
Previously updated : 07/03/2022 Last updated : 10/12/2022 <!-- Customer intent: As a Dev Box Project Admin I want to configure projects so that I can provide Dev Boxes for my users. -->
-# Quickstart: Configure a Microsoft Dev Box project
+# Quickstart: Configure a Microsoft Dev Box Preview project
To enable developers to self-serve dev boxes in projects, you must configure dev box pools that specify the dev box definitions and network connections used when dev boxes are created. Dev box users create dev boxes using the dev box pool. In this quickstart, you'll perform the following tasks:
A dev box pool is a collection of dev boxes that you manage together. You must h
The following steps show you how to create a dev box pool associated with a project. You'll use an existing dev box definition and network connection in the dev center to configure a dev box pool.
-If you don't have an available dev center with an existing dev box definition and network connection, follow the steps in [Quickstart: Configure the Microsoft Dev Box service](quickstart-configure-dev-box-service.md) to create them.
+If you don't have an available dev center with an existing dev box definition and network connection, follow the steps in [Quickstart: Configure the Microsoft Dev Box Preview service](quickstart-configure-dev-box-service.md) to create them.
1. Sign in to the [Azure portal](https://portal.azure.com).
dev-box Quickstart Configure Dev Box Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md
Title: Configure the Microsoft Dev Box service
-description: 'This quickstart shows you how to configure the Microsoft Dev Box service to provide dev boxes for your users. You will create a dev center, add a network connection, and then create a dev box definition, and a project.'
+ Title: Configure the Microsoft Dev Box Preview service
+
+description: 'This quickstart shows you how to configure the Microsoft Dev Box Preview service to provide dev boxes for your users. You will create a dev center, add a network connection, and then create a dev box definition, and a project.'
Previously updated : 07/22/2022 Last updated : 10/12/2022 <!--
As an enterprise admin I want to understand how to create and configure dev box components so that I can provide dev box projects my users. -->
-# Quickstart: Configure the Microsoft Dev Box service
+# Quickstart: Configure the Microsoft Dev Box Preview service
This quickstart describes how to configure the Microsoft Dev Box service by using the Azure portal to enable development teams to self-serve dev boxes.
The following steps show you how to create and configure a dev center.
:::image type="content" source="./media/quickstart-configure-dev-box-service/create-devcenter-basics.png" alt-text="Screenshot showing the Create dev center Basics tab.":::
- The currently supported Azure locations with capacity are listed here: [Microsoft Dev Box](https://aka.ms/devbox_acom).
+ The currently supported Azure locations with capacity are listed here: [Microsoft Dev Box Preview](https://aka.ms/devbox_acom).
1. [Optional] On the **Tags** tab, enter a name and value pair that you want to assign. :::image type="content" source="./media/quickstart-configure-dev-box-service/create-devcenter-tags.png" alt-text="Screenshot showing the Create dev center Tags tab.":::
dev-box Quickstart Create Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md
Title: Create a Microsoft Dev Box
-description: This quickstart shows you how to create a Microsoft Dev Box and connect to it through a browser.
+ Title: Create a Microsoft Dev Box Preview
+
+description: This quickstart shows you how to create a Microsoft Dev Box Preview and connect to it through a browser.
Previously updated : 07/29/2022 Last updated : 10/12/2022 <!-- Customer intent:
Last updated 07/29/2022
# Quickstart: Create a dev box by using the developer portal
-Get started with Microsoft Dev Box by creating a dev box through the developer portal. After creating the dev box, you connect to it with a remote desktop (RD) session through a browser, or through a remote desktop app.
+Get started with Microsoft Dev Box Preview by creating a dev box through the developer portal. After creating the dev box, you connect to it with a remote desktop (RD) session through a browser, or through a remote desktop app.
You can create and manage multiple dev boxes as a dev box user. Create a dev box for each task that you're working on, and create multiple dev boxes within a single project to help streamline your workflow.
dev-box Tutorial Connect To Dev Box With Remote Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app.md
Title: 'Tutorial: Use the Remote Desktop client to connect to a dev box'+ description: In this tutorial, you learn how to download a Remote Desktop client and connect to a dev box. - Previously updated : 07/28/2022 Last updated : 10/12/2022
To use a non-Windows Remote Desktop client to connect to your dev box, follow th
:::image type="content" source="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/non-windows-rdp-connect-dev-box.png" alt-text="Screenshot of the non-Windows Remote Desktop client workspace with dev box."::: ## Next steps
-To learn about managing Microsoft Dev Box, see:
+To learn about managing Microsoft Dev Box Preview, see:
- [Provide access to project admins](./how-to-project-admin.md) - [Provide access to dev box users](./how-to-dev-box-user.md)
devtest-labs Report Usage Across Multiple Labs Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/report-usage-across-multiple-labs-subscriptions.md
Title: Azure DevTest Labs usage across multiple labs and subscriptions description: Learn how to report Azure DevTest Labs usage across multiple labs and subscriptions. + Last updated 06/26/2020
The long-term storage can be used to do any text manipulation, for example:
* Creating complex groupings * Aggregating the data
-Some common storage solutions are: [SQL Server](https://azure.microsoft.com/services/sql-database/), [Azure Data Lake](https://azure.microsoft.com/services/storage/data-lake-storage/), and [Cosmos DB](https://azure.microsoft.com/services/cosmos-db/). The long-term storage solution you choose depends on preference. You might consider choosing the tool depending what it offers for interaction availability when visualizing the data.
+Some common storage solutions are [SQL Server](https://azure.microsoft.com/services/sql-database/), [Azure Data Lake](https://azure.microsoft.com/services/storage/data-lake-storage/), and [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/). The long-term storage solution you choose depends on your preference. Consider choosing the tool based on the interaction and visualization capabilities it offers for the data.
## Visualizing data and gathering insights
Once you set up the system and data is moving to the long-term storage, the next
- Which Marketplace images are being used? Are custom images the most common VM base, should a common Image store be built like [Shared Image Gallery](../virtual-machines/shared-image-galleries.md) or [Image factory](image-factory-create.md).-- Which custom images are being used, or not used?
+- Which custom images are being used, or not used?
dms Dms Tools Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-tools-matrix.md
-+ Last updated 03/03/2020
The following tables identify the services and tools you can use to plan for dat
| Oracle | Azure SQL DB, MI, VM | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[MigVisor*](https://www.migvisor.com/) | | | Oracle | Azure Synapse Analytics | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | | | Oracle | Azure DB for PostgreSQL -<br/>Single server | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | | |
-| MongoDB | Cosmos DB | [Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/) | |
-| Cassandra | Cosmos DB | | | |
+| MongoDB | Azure Cosmos DB | [Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/) | |
+| Cassandra | Azure Cosmos DB | | | |
| MySQL | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) | | MySQL | Azure DB for MySQL | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) | | RDS MySQL | Azure DB for MySQL | | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
The following tables identify the services and tools you can use to plan for dat
| Oracle | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | | | Oracle | Azure Synapse Analytics | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | | | Oracle | Azure DB for PostgreSQL -<br/>Single server | | [Ora2Pg*](http://ora2pg.darold.net/start.html) | |
-| MongoDB | Cosmos DB | | [Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/) |
-| Cassandra | Cosmos DB | | | |
+| MongoDB | Azure Cosmos DB | | [Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/) |
+| Cassandra | Azure Cosmos DB | | | |
| MySQL | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/) | | | MySQL | Azure DB for MySQL | | | | | RDS MySQL | Azure DB for MySQL | | | |
The following tables identify the services and tools you can use to plan for dat
| Oracle | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) | | Oracle | Azure Synapse Analytics | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) | | Oracle | Azure DB for PostgreSQL -<br/>Single server | [Ispirer*](https://www.ispirer.com/solutions) | [Ispirer*](https://www.ispirer.com/solutions) | [Ora2Pg*](http://ora2pg.darold.net/start.html) |
-| MongoDB | Cosmos DB | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Cassandra | Cosmos DB | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) |
+| MongoDB | Azure Cosmos DB | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Cassandra | Azure Cosmos DB | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) |
| MySQL | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) | | MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) | | RDS MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
The following tables identify the services and tools you can use to plan for dat
| Oracle | Azure SQL DB, MI, VM | | | Oracle | Azure Synapse Analytics | | | Oracle | Azure DB for PostgreSQL -<br/>Single server | |
-| MongoDB | Cosmos DB | [Cloudamize*](https://www.cloudamize.com/) |
-| Cassandra | Cosmos DB | |
+| MongoDB | Azure Cosmos DB | [Cloudamize*](https://www.cloudamize.com/) |
+| Cassandra | Azure Cosmos DB | |
| MySQL | Azure SQL DB, MI, VM | | | MySQL | Azure DB for MySQL | | | RDS MySQL | Azure DB for MySQL | |
The following tables identify the services and tools you can use to plan for dat
## Next steps
-For an overview of the Azure Database Migration Service, see the article [What is the Azure Database Migration Service](dms-overview.md).
+For an overview of the Azure Database Migration Service, see the article [What is the Azure Database Migration Service](dms-overview.md).
dms Known Issues Mongo Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-mongo-cosmos-db.md
--- "seo-lt-2019"-- kr2b-contr-experiment+ Last updated 05/18/2022
-# Known issues with migrations from MongoDB to Azure Cosmos DB's API
+# Known issues with migrations from MongoDB to Azure Cosmos DB
-The following sections describe known issues and limitations associated with migrations from MongoDB to Cosmos DB's API for MongoDB.
+The following sections describe known issues and limitations associated with migrations from MongoDB to Azure Cosmos DB for MongoDB.
## Migration fails as a result of using the incorrect TLS/SSL Cert
This issue is apparent when a user can't connect to the MongoDB source server. D
| Cause | Resolution | | - | - |
-| Using a self-signed certificate in Azure Database Migration Service might lead to the migration failing because of the incorrect TLS/SSL certificate. The error message might include "The remote certificate is invalid according to the validation procedure." | Use a genuine certificate from CA. Connections to Cosmos DB use TLS over Mongo API. Self-signed certs are generally only used in internal tests. When you install a genuine cert from a CA authority, you can then use TLS in Azure Database Migration Service without issue. |
+| Using a self-signed certificate in Azure Database Migration Service might lead to the migration failing because of the incorrect TLS/SSL certificate. The error message might include "The remote certificate is invalid according to the validation procedure." | Use a genuine certificate from a certificate authority (CA). Connections to Azure Cosmos DB for MongoDB use TLS over the MongoDB API. Self-signed certificates are generally used only in internal tests. When you install a genuine certificate from a CA, you can use TLS in Azure Database Migration Service without issue. |
## Unable to get the list of databases to map in DMS
The migration fails.
## Next steps
-* View the tutorial [Migrate MongoDB to Azure Cosmos DB's API for MongoDB online using DMS](tutorial-mongodb-cosmos-db-online.md).
-* View the tutorial [Migrate MongoDB to Azure Cosmos DB's API for MongoDB offline using DMS](tutorial-mongodb-cosmos-db.md).
+* View the tutorial [Migrate MongoDB to Azure Cosmos DB for MongoDB online using DMS](tutorial-mongodb-cosmos-db-online.md).
+* View the tutorial [Migrate MongoDB to Azure Cosmos DB for MongoDB offline using DMS](tutorial-mongodb-cosmos-db.md).
dms Known Issues Troubleshooting Dms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms.md
-+ Last updated 02/20/2020
When you try to connect to source in the Azure Database Migration service projec
| Cause | Resolution | | - | - |
-| When using [ExpressRoute](https://azure.microsoft.com/services/expressroute/), Azure Database Migration Service [requires](./tutorial-sql-server-to-azure-sql.md) provisioning three service endpoints on the Virtual Network subnet associated with the service:<br> -- Service Bus endpoint<br> -- Storage endpoint<br> -- Target database endpoint (e.g. SQL endpoint, Cosmos DB endpoint)<br><br><br><br><br> | [Enable](./tutorial-sql-server-to-azure-sql.md) the required service endpoints for ExpressRoute connectivity between source and Azure Database Migration Service. <br><br><br><br><br><br><br><br> |
+| When using [ExpressRoute](https://azure.microsoft.com/services/expressroute/), Azure Database Migration Service [requires](./tutorial-sql-server-to-azure-sql.md) provisioning three service endpoints on the Virtual Network subnet associated with the service:<br> -- Service Bus endpoint<br> -- Storage endpoint<br> -- Target database endpoint (e.g. SQL endpoint, Azure Cosmos DB endpoint)<br><br><br><br><br> | [Enable](./tutorial-sql-server-to-azure-sql.md) the required service endpoints for ExpressRoute connectivity between source and Azure Database Migration Service. <br><br><br><br><br><br><br><br> |
## Lock wait timeout error when migrating a MySQL database to Azure DB for MySQL
When you try to connect Azure Database Migration Service to SQL Server source th
* View the article [Azure Database Migration Service PowerShell](/powershell/module/azurerm.datamigration#data_migration). * View the article [How to configure server parameters in Azure Database for MySQL by using the Azure portal](../mysql/howto-server-parameters.md). * View the article [Overview of prerequisites for using Azure Database Migration Service](./pre-reqs.md).
-* See the [FAQ about using Azure Database Migration Service](./faq.yml).
+* See the [FAQ about using Azure Database Migration Service](./faq.yml).
dms Tutorial Mongodb Cosmos Db Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db-online.md
Title: "Tutorial: Migrate MongoDB online to Azure Cosmos DB API for MongoDB"
+ Title: "Tutorial: Migrate MongoDB online to Azure Cosmos DB for MongoDB"
-description: Learn to migrate from MongoDB on-premises to Azure Cosmos DB API for MongoDB online by using Azure Database Migration Service.
+description: Learn to migrate from MongoDB on-premises to Azure Cosmos DB for MongoDB online by using Azure Database Migration Service.
-+ Last updated 09/21/2021
-# Tutorial: Migrate MongoDB to Azure Cosmos DB's API for MongoDB online using DMS
+# Tutorial: Migrate MongoDB to Azure Cosmos DB for MongoDB online using DMS
-> [!IMPORTANT]
+> [!IMPORTANT]
> Please read this entire guide before carrying out your migration steps. >
This MongoDB migration guide is part of series on MongoDB migration. The critica
## Overview of online data migration from MongoDB to Azure Cosmos DB using DMS
-You can use Azure Database Migration Service to perform an online (minimal downtime) migration of databases from an on-premises or cloud instance of MongoDB to Azure Cosmos DB's API for MongoDB.
+You can use Azure Database Migration Service to perform an online (minimal downtime) migration of databases from an on-premises or cloud instance of MongoDB to Azure Cosmos DB for MongoDB.
This tutorial demonstrates the steps associated with using Azure Database Migration Service to migrate MongoDB data to Azure Cosmos DB: > [!div class="checklist"]
This tutorial demonstrates the steps associated with using Azure Database Migrat
> * Verify data in Azure Cosmos DB. > * Complete the migration when you are ready.
-In this tutorial, you migrate a dataset in MongoDB hosted in an Azure Virtual Machine to Azure Cosmos DB's API for MongoDB with minimal downtime by using Azure Database Migration Service. If you don't have a MongoDB source set up already, see the article [Install and configure MongoDB on a Windows VM in Azure](/previous-versions/azure/virtual-machines/windows/install-mongodb).
+In this tutorial, you migrate a dataset in MongoDB hosted in an Azure virtual machine to Azure Cosmos DB for MongoDB with minimal downtime via Azure Database Migration Service. If you don't have a MongoDB source set up already, see [Install and configure MongoDB on a Windows VM in Azure](/previous-versions/azure/virtual-machines/windows/install-mongodb).
> [!NOTE] > Using Azure Database Migration Service to perform an online migration requires creating an instance based on the Premium pricing tier.
In this tutorial, you migrate a dataset in MongoDB hosted in an Azure Virtual Ma
[!INCLUDE [online-offline](../../includes/database-migration-service-offline-online.md)]
-This article describes an online migration from MongoDB to Azure Cosmos DB's API for MongoDB. For an offline migration, see [Migrate MongoDB to Azure Cosmos DB's API for MongoDB offline using DMS](tutorial-mongodb-cosmos-db.md).
+This article describes an online migration from MongoDB to Azure Cosmos DB for MongoDB. For an offline migration, see [Migrate MongoDB to Azure Cosmos DB for MongoDB offline using DMS](tutorial-mongodb-cosmos-db.md).
## Prerequisites To complete this tutorial, you need to: * [Complete the pre-migration](../cosmos-db/mongodb-pre-migration.md) steps such as estimating throughput, choosing a partition key, and the indexing policy.
-* [Create an Azure Cosmos DB's API for MongoDB account](https://portal.azure.com/#create/Microsoft.DocumentDB) and ensure [SSR (server side retry)](../cosmos-db/mongodb/prevent-rate-limiting-errors.md) is enabled.
+* [Create an Azure Cosmos DB for MongoDB account](https://portal.azure.com/#create/Microsoft.DocumentDB) and ensure [SSR (server side retry)](../cosmos-db/mongodb/prevent-rate-limiting-errors.md) is enabled.
> [!NOTE]
- > DMS is currently not supported if you are migrating to API for MongoDB account that is provisioned with serverless mode.
+ > DMS is currently not supported if you're migrating to an Azure Cosmos DB for MongoDB account provisioned with serverless mode.
* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). > [!NOTE] > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned: >
- > * Target database endpoint (for example, SQL endpoint, Cosmos DB endpoint, and so on)
+ > * Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
> * Storage endpoint > * Service bus endpoint >
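
If you prefer to script this prerequisite instead of using the portal, the three service endpoints called out in the note above can be added to the Database Migration Service subnet with the Azure SDK for Python. The following is only a rough sketch under assumed names: the subscription ID, resource group, network names, and address prefix are placeholders, and the `Microsoft.Sql` entry stands in for whichever target database endpoint applies to your migration.

```python
# Sketch (assumed names/IDs): enable the Service Bus, Storage, and target database
# service endpoints on the subnet used by Azure Database Migration Service.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = network_client.subnets.begin_create_or_update(
    "<resource-group>",
    "<vnet-name>",
    "<dms-subnet-name>",
    {
        "address_prefix": "10.0.0.0/24",  # placeholder: keep your subnet's existing prefix
        "service_endpoints": [
            {"service": "Microsoft.ServiceBus"},
            {"service": "Microsoft.Storage"},
            {"service": "Microsoft.Sql"},  # use Microsoft.AzureCosmosDB for an Azure Cosmos DB target
        ],
    },
)
print(poller.result().provisioning_state)
```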
After the service is created, locate it within the Azure portal, open it, and th
3. Select + **New Migration Project**.
-4. On the **New migration project** screen, specify a name for the project, in the **Source server type** text box, select **MongoDB**, in the **Target server type** text box, select **CosmosDB (MongoDB API)**, and then for **Choose type of activity**, select **Online data migration [preview]**.
+4. On the **New migration project** screen, specify a name for the project. In the **Source server type** text box, select **MongoDB**; in the **Target server type** text box, select **Azure Cosmos DB for MongoDB**; and then for **Choose type of activity**, select **Online data migration [preview]**.
![Create Database Migration Service project](media/tutorial-mongodb-to-cosmosdb-online/dms-create-project1.png)
After the service is created, locate it within the Azure portal, open it, and th
* For JSON dumps, the files in the blob container must be placed into folders named after the containing databases. Within each database folder, data files must be placed in a subfolder called "data" and named using the format *collection*.json. Metadata files (if any) must be placed in a subfolder called "metadata" and named using the same format, *collection*.json. The metadata files must be in the same format as produced by the MongoDB bsondump tool. > [!IMPORTANT]
- > It is discouraged to use a self-signed certificate on the mongo server. However, if one is used, please connect to the server using **connection string mode** and ensure that your connection string has ΓÇ£ΓÇ¥
+ > Using a self-signed certificate on the MongoDB server is discouraged. However, if one is used, connect to the server using **connection string mode** and ensure that your connection string contains:
> >``` >&sslVerifyCertificate=false
After the service is created, locate it within the Azure portal, open it, and th
## Specify target details
-1. On the **Migration target details** screen, specify the connection details for the target Azure Cosmos DB account, which is the pre-provisioned Azure Cosmos DB's API for MongoDB account to which you're migrating your MongoDB data.
+1. On the **Migration target details** screen, specify the connection details for the target Azure Cosmos DB account, which is the pre-provisioned Azure Cosmos DB for MongoDB account to which you're migrating your MongoDB data.
![Specify target details](media/tutorial-mongodb-to-cosmosdb-online/dms-specify-target1.png)
After the service is created, locate it within the Azure portal, open it, and th
If the string **Create** appears next to the database name, it indicates that Azure Database Migration Service didn't find the target database, and the service will create the database for you.
- At this point in the migration, if you want share throughput on the database, specify a throughput RU. In Cosmos DB, you can provision throughput either at the database-level or individually for each collection. Throughput is measured in [Request Units](../cosmos-db/request-units.md) (RUs). Learn more about [Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/).
+   At this point in the migration, if you want to share throughput on the database, specify a throughput RU. In Azure Cosmos DB, you can provision throughput either at the database level or individually for each collection. Throughput is measured in [Request Units](../cosmos-db/request-units.md) (RUs). Learn more about [Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/).
![Map to target databases](media/tutorial-mongodb-to-cosmosdb-online/dms-map-target-databases1.png)
After the service is created, locate it within the Azure portal, open it, and th
![Activity status replaying](media/tutorial-mongodb-to-cosmosdb-online/dms-activity-replaying.png)
-## Verify data in Cosmos DB
+## Verify data in Azure Cosmos DB
1. Make changes to your source MongoDB database.
-2. Connect to COSMOS DB to verify if the data is replicated from the source MongoDB server.
+2. Connect to Azure Cosmos DB to verify that the data is replicated from the source MongoDB server.
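
As one way to spot-check replication from outside the portal, a short pymongo script can count documents in a migrated collection. This is a sketch with placeholder values only; copy the real connection string, database, and collection names from your own environment.

```python
# Sketch (placeholder values): spot-check that documents are arriving in the
# Azure Cosmos DB for MongoDB target during the online migration.
from pymongo import MongoClient

# Placeholder: use the connection string shown for your Azure Cosmos DB account.
client = MongoClient("<azure-cosmos-db-for-mongodb-connection-string>")

db = client["<database-name>"]                        # hypothetical database name
count = db["<collection-name>"].count_documents({})   # hypothetical collection name
print(f"Documents replicated so far: {count}")
```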
![Screenshot that shows where you can verify that the data was replicated.](media/tutorial-mongodb-to-cosmosdb-online/dms-verify-data.png) ## Complete the migration
-* After all documents from the source are available on the COSMOS DB target, select **Finish** from the migration activityΓÇÖs context menu to complete the migration.
+* After all documents from the source are available on the Azure Cosmos DB target, select **Finish** from the migration activity's context menu to complete the migration.
This action will finish replaying all the pending changes and complete the migration.
After the service is created, locate it within the Azure portal, open it, and th
## Post-migration optimization
-After you migrate the data stored in MongoDB database to Azure Cosmos DBΓÇÖs API for MongoDB, you can connect to Azure Cosmos DB and manage the data. You can also perform other post-migration optimization steps such as optimizing the indexing policy, update the default consistency level, or configure global distribution for your Azure Cosmos DB account. For more information, see the [Post-migration optimization](../cosmos-db/mongodb-post-migration.md) article.
+After you migrate the data stored in a MongoDB database to Azure Cosmos DB for MongoDB, you can connect to Azure Cosmos DB and manage the data. You can also perform other post-migration optimization steps such as optimizing the indexing policy, updating the default consistency level, or configuring global distribution for your Azure Cosmos DB account. For more information, see the [Post-migration optimization](../cosmos-db/mongodb-post-migration.md) article.
## Additional resources
-* [Cosmos DB service information](https://azure.microsoft.com/services/cosmos-db/)
+* [Azure Cosmos DB service information](https://azure.microsoft.com/services/cosmos-db/)
* Trying to do capacity planning for a migration to Azure Cosmos DB? * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../cosmos-db/convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](../cosmos-db/mongodb/estimate-ru-capacity-planner.md) ## Next steps
-* Review migration guidance for additional scenarios in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
+* Review migration guidance for additional scenarios in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
dms Tutorial Mongodb Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db.md
Title: "Tutorial: Migrate MongoDB offline to Azure Cosmos DB API for MongoDB"
+ Title: "Tutorial: Migrate MongoDB offline to Azure Cosmos DB for MongoDB"
-description: Migrate from MongoDB on-premises to Azure Cosmos DB API for MongoDB offline, by using Azure Database Migration Service.
+description: Migrate from MongoDB on-premises to Azure Cosmos DB for MongoDB offline via Azure Database Migration Service.
-+ Last updated 09/21/2021
-# Tutorial: Migrate MongoDB to Azure Cosmos DB API for MongoDB offline
+# Tutorial: Migrate MongoDB to Azure Cosmos DB for MongoDB offline
-> [!IMPORTANT]
+> [!IMPORTANT]
> Please read this entire guide before carrying out your migration steps. >
This MongoDB migration guide is part of series on MongoDB migration. The critica
## Overview of offline data migration from MongoDB to Azure Cosmos DB using DMS
-Use Azure Database Migration Service to perform an offline, one-time migration of databases from an on-premises or cloud instance of MongoDB to the Azure Cosmos DB API for MongoDB.
+Use Azure Database Migration Service to perform an offline, one-time migration of databases from an on-premises or cloud instance of MongoDB to Azure Cosmos DB for MongoDB.
In this tutorial, you learn how to: > [!div class="checklist"]
In this tutorial, you learn how to:
> * Run the migration. > * Monitor the migration.
-In this tutorial, you migrate a dataset in MongoDB that is hosted in an Azure virtual machine. By using Azure Database Migration Service, you migrate the dataset to the Azure Cosmos DB API for MongoDB. If you don't have a MongoDB source set up already, see [Install and configure MongoDB on a Windows VM in Azure](/previous-versions/azure/virtual-machines/windows/install-mongodb).
+In this tutorial, you migrate a dataset in MongoDB that is hosted in an Azure virtual machine. By using Azure Database Migration Service, you migrate the dataset to Azure Cosmos DB for MongoDB. If you don't have a MongoDB source set up already, see [Install and configure MongoDB on a Windows VM in Azure](/previous-versions/azure/virtual-machines/windows/install-mongodb).
## Prerequisites To complete this tutorial, you need to: * [Complete the pre-migration](../cosmos-db/mongodb-pre-migration.md) steps, such as estimating throughput and choosing a partition key.
-* [Create an account for the Azure Cosmos DB API for MongoDB](https://portal.azure.com/#create/Microsoft.DocumentDB).
+* [Create an Azure Cosmos DB for MongoDB account](https://portal.azure.com/#create/Microsoft.DocumentDB).
> [!NOTE]
- > DMS is currently not supported if you are migrating to API for MongoDB account that is provisioned with serverless mode.
+ > DMS is currently not supported if you're migrating to an Azure Cosmos DB for MongoDB account that is provisioned with serverless mode.
* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using Azure Resource Manager. This deployment model provides site-to-site connectivity to your on-premises source servers by using either [Azure ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Azure Virtual Network documentation](../virtual-network/index.yml), especially the "quickstart" articles with step-by-step details.
After you create the service, locate it within the Azure portal, and open it. Th
3. Select **+ New Migration Project**.
-4. On **New migration project**, specify a name for the project, and in the **Source server type** text box, select **MongoDB**. In the **Target server type** text box, select **CosmosDB (MongoDB API)**, and then for **Choose type of activity**, select **Offline data migration**.
+4. On **New migration project**, specify a name for the project, and in the **Source server type** text box, select **MongoDB**. In the **Target server type** text box, select **Azure Cosmos DB for MongoDB**, and then for **Choose type of activity**, select **Offline data migration**.
![Screenshot that shows project options.](media/tutorial-mongodb-to-cosmosdb/dms-create-project.png)
After you create the service, locate it within the Azure portal, and open it. Th
## Specify target details
-1. On the **Migration target details** screen, specify the connection details for the target Azure Cosmos DB account. This account is the pre-provisioned Azure Cosmos DB API for MongoDB account to which you're migrating your MongoDB data.
+1. On the **Migration target details** screen, specify the connection details for the target Azure Cosmos DB account. This account is the pre-provisioned Azure Cosmos DB for MongoDB account to which you're migrating your MongoDB data.
![Screenshot that shows specifying target details.](media/tutorial-mongodb-to-cosmosdb/dms-specify-target.png)
After the migration finishes, you can check your Azure Cosmos DB account to veri
## Post-migration optimization
-After you migrate the data stored in MongoDB database to the Azure Cosmos DB API for MongoDB, you can connect to Azure Cosmos DB and manage the data. You can also perform other post-migration optimization steps. These might include optimizing the indexing policy, updating the default consistency level, or configuring global distribution for your Azure Cosmos DB account. For more information, see [Post-migration optimization](../cosmos-db/mongodb-post-migration.md).
+After you migrate the data stored in a MongoDB database to Azure Cosmos DB for MongoDB, you can connect to Azure Cosmos DB and manage the data. You can also perform other post-migration optimization steps. These might include optimizing the indexing policy, updating the default consistency level, or configuring global distribution for your Azure Cosmos DB account. For more information, see [Post-migration optimization](../cosmos-db/mongodb-post-migration.md).
## Additional resources
After you migrate the data stored in MongoDB database to the Azure Cosmos DB API
## Next steps Review migration guidance for additional scenarios in the [Azure Database Migration Guide](https://datamigration.microsoft.com/).---
dms Tutorial Mysql Azure Mysql Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-mysql-offline-portal.md
-+ Last updated 04/11/2021
To complete this tutorial, you need to:
> [!NOTE] > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned: >
- > * Target database endpoint (for example, SQL endpoint, Cosmos DB endpoint, and so on)
+ > * Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
> * Storage endpoint > * Service bus endpoint >
dms Tutorial Postgresql Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online-portal.md
-+ Last updated 04/11/2020
To complete this tutorial, you need to:
> [!NOTE] > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned: >
- > * Target database endpoint (for example, SQL endpoint, Cosmos DB endpoint, and so on)
+ > * Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
> * Storage endpoint > * Service bus endpoint >
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online.md
-+ Last updated 04/11/2020
To complete this tutorial, you need to:
> [!NOTE] > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned: >
- > * Target database endpoint (for example, SQL endpoint, Cosmos DB endpoint, and so on)
+ > * Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
> * Storage endpoint > * Service bus endpoint >
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
-+ Last updated 08/20/2021
To complete this tutorial, you need to:
> [!NOTE] > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned: >
- > * Target database endpoint (for example, SQL endpoint, Cosmos DB endpoint, and so on)
+ > * Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
> * Storage endpoint > * Service bus endpoint >
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md
-+ Last updated 01/03/2021
To complete this tutorial, you need to:
> [!NOTE] > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned: >
- > - Target database endpoint (for example, SQL endpoint, Cosmos DB endpoint, and so on)
+ > - Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
> - Storage endpoint > - Service bus endpoint >
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md
-+ Last updated 08/16/2021
To complete this tutorial, you need to:
> [!NOTE] > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned:
- > - Target database endpoint (for example, SQL endpoint, Cosmos DB endpoint, and so on)
+ > - Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
> - Storage endpoint > - Service bus endpoint >
dns Dns Private Resolver Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-portal.md
Last updated 09/27/2022 -+ #Customer intent: As an experienced network administrator, I want to create an Azure private DNS resolver, so I can resolve host names on my private virtual networks.
-# Quickstart: Create an Azure private DNS Resolver using the Azure portal
+# Quickstart: Create an Azure Private DNS Resolver using the Azure portal
-This quickstart walks you through the steps to create an Azure DNS Private Resolver (Public Preview) using the Azure portal. If you prefer, you can complete this quickstart using [Azure PowerShell](private-dns-getstarted-powershell.md).
+This quickstart walks you through the steps to create an Azure DNS Private Resolver using the Azure portal. If you prefer, you can complete this quickstart using [Azure PowerShell](private-dns-getstarted-powershell.md).
Azure DNS Private Resolver enables you to query Azure DNS private zones from an on-premises environment, and vice versa, without deploying VM based DNS servers. You no longer need to provision IaaS based solutions on your virtual networks to resolve names registered on Azure private DNS zones. You can configure conditional forwarding of domains back to on-premises, multicloud and public DNS servers. For more information, including benefits, capabilities, and regional availability, see [What is Azure DNS Private Resolver](dns-private-resolver-overview.md).
Next, add a virtual network to the resource group that you created, and configur
## Create a DNS resolver inside the virtual network
-1. To display the **DNS Private Resolvers** resource during public preview, open the following [preview-enabled Azure portal link](https://go.microsoft.com/fwlink/?linkid=2194569).
-2. Search for and select **DNS Private Resolvers**, select **Create**, and then on the **Basics** tab for **Create a DNS Private Resolver** enter the following:
+1. Open the Azure portal and search for **DNS Private Resolvers**.
+2. Select **DNS Private Resolvers**, select **Create**, and then on the **Basics** tab for **Create a DNS Private Resolver** enter the following:
- Subscription: Choose the subscription name you're using. - Resource group: Choose the name of the resource group that you created. - Name: Enter a name for your DNS resolver (ex: mydnsresolver).
dns Dns Private Resolver Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md
Last updated 09/27/2022 -+ #Customer intent: As an experienced network administrator, I want to create an Azure private DNS resolver, so I can resolve host names on my private virtual networks.
This article walks you through the steps to create your first private DNS zone a
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-Azure DNS Private Resolver is a new service currently in public preview. Azure DNS Private Resolver enables you to query Azure DNS private zones from an on-premises environment and vice versa without deploying VM based DNS servers. For more information, including benefits, capabilities, and regional availability, see [What is Azure DNS Private Resolver](dns-private-resolver-overview.md).
+Azure DNS Private Resolver is a new service that enables you to query Azure DNS private zones from an on-premises environment and vice versa without deploying VM based DNS servers. For more information, including benefits, capabilities, and regional availability, see [What is Azure DNS Private Resolver](dns-private-resolver-overview.md).
## Prerequisites
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Title: What is Azure DNS Private Resolver? description: In this article, get started with an overview of the Azure DNS Private Resolver service. -+
Azure DNS Private Resolver is a new service that enables you to query Azure DNS private zones from an on-premises environment and vice versa without deploying VM based DNS servers.
-> [!IMPORTANT]
-> Azure DNS Private Resolver is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## How does it work? Azure DNS Private Resolver requires an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md). When you create an Azure DNS Private Resolver inside a virtual network, one or more [inbound endpoints](#inbound-endpoints) are established that can be used as the destination for DNS queries. The resolver's [outbound endpoint](#outbound-endpoints) processes DNS queries based on a [DNS forwarding ruleset](#dns-forwarding-rulesets) that you configure. DNS queries that are initiated in networks linked to a ruleset can be sent to other DNS servers.
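
As a quick illustration of the inbound endpoint acting as a DNS destination, the following dnspython sketch sends a query directly to a resolver's inbound endpoint. The IP address and record name are hypothetical placeholders.

```python
# Sketch (hypothetical IP and record name): query a private resolver inbound endpoint.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.10.0.4"]  # assumed private IP of the inbound endpoint

answer = resolver.resolve("app1.contoso.internal", "A")  # record in a linked private DNS zone
for record in answer:
    print(record.address)
```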
Azure DNS Private Resolver provides the following benefits:
Azure DNS Private Resolver is available in the following regions: -- Australia East-- UK South-- North Europe-- South Central US-- West US 3-- East US-- North Central US-- West Central US-- East US 2-- West Europe
+| Americas | Europe | Asia & Africa |
+|---|---|---|
+| East US | West Europe | East Asia |
+| East US 2 | North Europe | Southeast Asia |
+| Central US | UK South | Japan East |
+| South Central US | France Central | Korea Central |
+| North Central US | Sweden Central | South Africa North|
+| West Central US | Switzerland North| Australia East |
+| West US 3 | | |
+| Canada Central | | |
+| Brazil South | | |
## Data residency
Outbound endpoints have the following limitations:
### Ruleset restrictions -- Rulesets can have no more than 25 rules in Public Preview.
+- Rulesets can have up to 1000 rules.
### Other restrictions -- IPv6 enabled subnets aren't supported in Public Preview.
+- IPv6 enabled subnets aren't supported.
## Next steps
dns Private Resolver Endpoints Rulesets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-endpoints-rulesets.md
description: In this article, understand the Azure DNS Private Resolver endpoint
+ Last updated 09/09/2022
In this article, you'll learn about components of the [Azure DNS Private Resolver](dns-private-resolver-overview.md). Inbound endpoints, outbound endpoints, and DNS forwarding rulesets are discussed. Properties and settings of these components are described, and examples are provided for how to use them.
-> [!IMPORTANT]
-> Azure DNS Private Resolver is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- The architecture for Azure DNS Private Resolver is summarized in the following figure. In this example network, a DNS resolver is deployed in a hub vnet that peers with a spoke vnet. [Ruleset links](#ruleset-links) are provisioned in the [DNS forwarding ruleset](#dns-forwarding-rulesets) to both the hub and spoke vnets, enabling resources in both vnets to resolve custom DNS namespaces using DNS forwarding rules. A private DNS zone is also deployed and linked to the hub vnet, enabling resources in the hub vnet to resolve records in the zone. The spoke vnet resolves records in the private zone by using a DNS forwarding [rule](#rules) that forwards private zone queries to the inbound endpoint VIP in the hub vnet. [ ![Diagram that shows private resolver architecture](./media/private-resolver-endpoints-rulesets/ruleset.png) ](./media/private-resolver-endpoints-rulesets/ruleset-high.png#lightbox)
dns Private Resolver Hybrid Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-hybrid-dns.md
description: Configure Azure and on-premises DNS to resolve private DNS zones an
+ Previously updated : 08/18/2022 Last updated : 09/08/2022
-# Customer intent: As an administrator, I want to resolve on-premises domains in Azure and resolve Azure private zones on-premises.
+#Customer intent: As an administrator, I want to resolve on-premises domains in Azure and resolve Azure private zones on-premises.
# Resolve Azure and on-premises domains
+## Hybrid DNS resolution
+ This article provides guidance on how to configure hybrid DNS resolution by using an [Azure DNS Private Resolver](#azure-dns-private-resolver) with a [DNS forwarding ruleset](#dns-forwarding-ruleset). *Hybrid DNS resolution* is defined here as enabling Azure resources to resolve your on-premises domains, and on-premises DNS to resolve your Azure private DNS zones.
-> [!IMPORTANT]
-> Azure DNS Private Resolver is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Azure DNS Private Resolver The [Azure DNS Private Resolver](dns-private-resolver-overview.md) is a service that can resolve on-premises DNS queries for Azure DNS private zones. Previously, it was necessary to [deploy a VM-based custom DNS resolver](/azure/hdinsight/connect-on-premises-network), or use non-Microsoft DNS, DHCP, and IPAM (DDI) solutions to perform this function.
dns Tutorial Dns Private Resolver Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-dns-private-resolver-failover.md
description: A tutorial on how to configure regional failover using the Azure DN
+ Last updated 09/27/2022
This article details how to eliminate a single point of failure in your on-premises DNS services by using two or more Azure DNS private resolvers deployed across different regions. DNS failover is enabled by assigning a local resolver as your primary DNS and the resolver in an adjacent region as secondary DNS. If the primary DNS server fails to respond, DNS clients automatically retry using the secondary DNS server.
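
To make the retry behavior concrete, here is a small client-side sketch (hypothetical endpoint IPs and record name) that queries the local inbound endpoint first and falls back to the adjacent region's endpoint if the primary times out.

```python
# Sketch only (assumed IPs and name): fail over from a primary to a secondary
# DNS Private Resolver inbound endpoint when the primary does not respond.
import dns.exception
import dns.resolver

PRIMARY = "10.10.0.4"    # assumed inbound endpoint IP in the local region
SECONDARY = "10.20.0.4"  # assumed inbound endpoint IP in the adjacent region

def resolve_with_failover(name: str, record_type: str = "A"):
    for nameserver in (PRIMARY, SECONDARY):
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        resolver.lifetime = 2  # seconds to wait before treating the server as unresponsive
        try:
            return resolver.resolve(name, record_type)
        except dns.exception.Timeout:
            continue  # primary didn't answer; try the secondary endpoint
    raise RuntimeError("No resolver endpoint responded")

for record in resolve_with_failover("app1.contoso.internal"):
    print(record.address)
```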
-> [!IMPORTANT]
-> Azure DNS Private Resolver is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- In this tutorial, you learn how to: > [!div class="checklist"]
energy-data-services How To Add More Data Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-add-more-data-partitions.md
Last updated 07/05/2022-+ # How to manage data partitions?
Each partition provides the highest level of data isolation within a single depl
3. **Choose a name for your data partition**
- Each data partition name needs to be - "1-10 characters long and be a combination of lowercase letters, numbers and hyphens only" The data partition name will be prepended with the name of the MEDS instance. Choose a name for your data partition and hit create. Soon as you hit create, the deployment of the underlying data partition resources such as Cosmos DB and Storage Accounts is started.
+   Each data partition name must be 1-10 characters long and a combination of lowercase letters, numbers, and hyphens only. The data partition name will be prepended with the name of the MEDS instance. Choose a name for your data partition and hit create. As soon as you hit create, the deployment of the underlying data partition resources, such as Azure Cosmos DB and Azure Storage accounts, starts. A quick validation sketch follows the note below.
>[!NOTE] >It generally takes 15-20 minutes to create a data partition.
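
The naming rule above is easy to check before submitting the partition. Here is a minimal validation sketch; the regular expression simply restates the documented rule.

```python
import re

# Documented rule: 1-10 characters, lowercase letters, numbers, and hyphens only.
PARTITION_NAME = re.compile(r"^[a-z0-9-]{1,10}$")

def is_valid_partition_name(name: str) -> bool:
    return bool(PARTITION_NAME.match(name))

print(is_valid_partition_name("seismic-01"))  # True
print(is_valid_partition_name("Seismic_01"))  # False: uppercase letter and underscore
```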
energy-data-services How To Generate Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-generate-refresh-token.md
Previously updated : 08/25/2022 Last updated : 10/06/2022
+#Customer intent: As a developer, I want to learn how to generate a refresh token
-# OAuth 2.0 authorization
+# How to generate a refresh token
-The following are the basic steps to use the OAuth 2.0 authorization code grant flow to get a refresh token from the Microsoft identity platform endpoint:
+In this article, you will learn how to generate a refresh token. The following are the basic steps to use the OAuth 2.0 authorization code grant flow to get a refresh token from the Microsoft identity platform endpoint:
1. Register your app with Azure AD. 2. Get authorization.
The following are the basic steps to use the OAuth 2.0 authorization code grant
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-## 1. Register your app with Azure AD
+## Register your app with Azure AD
To use the Microsoft Energy Data Services Preview platform endpoint, you must register your app using the [Azure app registration portal](https://go.microsoft.com/fwlink/?linkid=2083908). You can use either a Microsoft account or a work or school account to register an app. To configure an app to use the OAuth 2.0 authorization code grant flow, save the following values when registering the app:
To configure an app to use the OAuth 2.0 authorization code grant flow, save the
- The `Directory (tenant) ID` that will be used in place of `{Tenant ID}` - The `application (client) ID` assigned by the app registration portal, which will be used instead of `client_id`. - A `client (application) secret`, either a password or a public/private key pair (certificate). The client secret isn't required for native apps. This secret will be used instead of `{AppReg Secret}` later.-- A `redirect URI (or reply URL)` for your app to receive responses from Azure AD.
-
-> [!NOTE]
-> If there's no redirect URIs specified, add a platform, select "Web", then add `http://localhost:8080`, and select save.
-
+- A `redirect URI (or reply URL)` for your app to receive responses from Azure AD. If no redirect URI is specified, add a platform, select "Web", then add `http://localhost:8080`, and select save.
For steps on how to configure an app in the Azure portal, see [Register your app](/azure/active-directory/develop/quickstart-register-app#register-an-application).
-## 2. Get authorization
-The first step to getting an access token for many OpenID Connect (OIDC) and OAuth 2.0 flows is to redirect the user to the Microsoft identity platform /authorize endpoint. Azure AD will sign the user in and request their consent for the permissions your app requests. In the authorization code grant flow, after consent is obtained, Azure AD will return an `authorization_code` to your app that it can redeem at the Microsoft identity platform /token endpoint for an access token.
+## Get authorization
+The first step to getting an access token for many OpenID Connect (OIDC) and OAuth 2.0 flows is to redirect the user to the Microsoft identity platform `/authorize` endpoint. Azure AD will sign the user in and request their consent for the permissions your app requests. In the authorization code grant flow, after consent is obtained, Azure AD will return an `authorization_code` to your app that it can redeem at the Microsoft identity platform `/token` endpoint for an access token.
### Authorization request
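
As a rough illustration of what this authorization request looks like (not the article's own sample), the following sketch builds the `/authorize` URL with placeholder tenant, client, scope, and redirect values.

```python
# Sketch (placeholder values): build the Microsoft identity platform /authorize URL.
from urllib.parse import urlencode

tenant_id = "<Tenant ID>"
params = {
    "client_id": "<client_id>",
    "response_type": "code",
    "redirect_uri": "http://localhost:8080",
    "response_mode": "query",
    "scope": "openid offline_access <your-api-scope>",  # offline_access is needed to receive a refresh token
    "state": "12345",
}

authorize_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/authorize?{urlencode(params)}"
print(authorize_url)  # open this URL in a browser and sign in
```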
Copy the code between `code=` and `&state`.
> [!WARNING] > Running the URL in Postman won't work as it requires extra configuration for token retrieval.
-## 3. Get a refresh token
+## Get a refresh token
Your app uses the authorization code received in the previous step to request an access token by sending a POST request to the `/token` endpoint. ### Sample request
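
For orientation only (placeholder values, not the article's own sample request), redeeming the authorization code with the `requests` library might look like this:

```python
# Sketch (placeholder values): exchange the authorization code for tokens at the
# Microsoft identity platform /token endpoint.
import requests

tenant_id = "<Tenant ID>"
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"

response = requests.post(
    token_url,
    data={
        "client_id": "<client_id>",
        "client_secret": "<AppReg Secret>",
        "grant_type": "authorization_code",
        "code": "<authorization code copied between code= and &state>",
        "redirect_uri": "http://localhost:8080",
        "scope": "openid offline_access <your-api-scope>",
    },
)
tokens = response.json()
print(tokens.get("refresh_token"))  # present when offline_access was requested
```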
Your app uses the authorization code received in the previous step to request an
For more information, see [Generate refresh tokens](/graph/auth-v2-user#2-get-authorization).
-## Alternative options
-
-If you're struggling with getting a proper authorization token, follow the steps in [OSDU&trade; auth app](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/release/0.15/tools/rest/osduauth) to locally run a static webpage that generates the refresh token for you. Once it's running, fill in the correct values in the UI of the static webpage (they may be filled in with the wrong values to start). Use the UI to generate a refresh token.
- OSDU&trade; is a trademark of The Open Group. ## Next steps
-<!-- Add a context sentence for the following links -->
+To learn how to use the generated refresh token, see the following article:
> [!div class="nextstepaction"] > [How to convert segy to ovds](how-to-convert-segy-to-zgy.md)
energy-data-services How To Manage Data Security And Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-data-security-and-encryption.md
+
+ Title: Data security and encryption in Microsoft Energy Data Services Preview
+description: Guide on security in Microsoft Energy Data Services and how to set up customer managed keys on Microsoft Energy Data Services
++++ Last updated : 10/06/2022++
+#Customer intent: As a developer, I want to set up customer-managed keys on Microsoft Energy Data Services.
+
+# Data security and encryption in Microsoft Energy Data Services Preview
+
+This article provides an overview of security features in Microsoft Energy Data Services Preview. It covers the major areas of [encryption at rest](../security/fundamentals/encryption-atrest.md), encryption in transit, TLS, HTTPS, Microsoft-managed keys, and customer-managed keys.
+
+## Encrypt data at rest
+
+Microsoft Energy Data Services Preview uses several storage resources for storing metadata, user data, in-memory data, and so on. The platform uses service-side encryption to automatically encrypt all the data when it is persisted to the cloud. Data encryption at rest protects your data and helps you meet your organizational security and compliance commitments. All data in Microsoft Energy Data Services is encrypted with Microsoft-managed keys by default.
+In addition to Microsoft-managed keys, you can use your own encryption key to protect the data in Microsoft Energy Data Services Preview. When you specify a customer-managed key, that key is used to protect and control access to the Microsoft-managed key that encrypts your data.
+
+## Encrypt data in transit
+
+Microsoft Energy Data Services Preview supports the Transport Layer Security (TLS) 1.2 protocol to protect data when it's traveling between the cloud services and customers. TLS provides strong authentication, message privacy, and integrity (enabling detection of message tampering, interception, and forgery), interoperability, and algorithm flexibility.
+
+In addition to TLS, when you interact with Microsoft Energy Data Services, all transactions take place over HTTPS.
+
+## Set up customer-managed keys (CMK) for a Microsoft Energy Data Services Preview instance
+> [!IMPORTANT]
+> You cannot edit CMK settings once the Microsoft Energy Data Services instance is created.
+
+### Prerequisites
+
+**Step 1 - Configure the key vault**
+
+1. You can use a new or existing key vault to store customer-managed keys. To learn more about Azure Key Vault, see [Azure Key Vault overview](../key-vault/general/overview.md) and [What is Azure Key Vault?](../key-vault/general/basic-concepts.md).
+2. Using customer-managed keys with Microsoft Energy Data Services requires that both soft delete and purge protection be enabled for the key vault. Soft delete is enabled by default when you create a new key vault and cannot be disabled. You can enable purge protection either when you create the key vault or after it is created.
+3. To learn how to create a key vault with the Azure portal, see [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md). When you create the key vault, select Enable purge protection.
+
+ [![Screenshot of enabling purge protection and soft delete while creating key vault](media/how-to-manage-data-security-and-encryption/customer-managed-key-1-create-key-vault.png)](media/how-to-manage-data-security-and-encryption/customer-managed-key-1-create-key-vault.png#lightbox)
+
+4. To enable purge protection on an existing key vault, follow these steps:
+ 1. Navigate to your key vault in the Azure portal.
+ 2. Under **Settings**, choose **Properties**.
+ 3. In the **purge protection** section, choose **Enable purge protection**.
+
+**Step 2 - Add a key**
+1. Next, add a key to the key vault.
+2. To learn how to add a key with the Azure portal, see [Quickstart: Set and retrieve a key from Azure Key Vault using the Azure portal](../key-vault/keys/quick-create-portal.md).
+3. It is recommended that the RSA key size be 3072; see [Configure customer-managed keys for your Azure Cosmos DB account | Microsoft Learn](../cosmos-db/how-to-setup-customer-managed-keys.md#generate-a-key-in-azure-key-vault). A minimal key-creation sketch follows this list.
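
The following is a minimal key-creation sketch with the Azure SDK for Python. The vault URL and key name are placeholders, and it assumes you already have permission to create keys in the vault.

```python
# Sketch (assumed names): create a 3072-bit RSA key in an existing key vault.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

vault_url = "https://<your-key-vault-name>.vault.azure.net"  # placeholder vault URI
key_client = KeyClient(vault_url=vault_url, credential=DefaultAzureCredential())

key = key_client.create_rsa_key("meds-cmk-key", size=3072)  # hypothetical key name
print(key.id)  # key identifier to reference when configuring CMK
```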
+
+**Step 3 - Choose a managed identity to authorize access to the key vault**
+1. When you enable customer-managed keys for a Microsoft Energy Data Services Preview instance, you must specify a managed identity that will be used to authorize access to the key vault that contains the key. The managed identity must have permissions to access the key in the key vault.
+2. You can create a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity).
+
+### Configure customer-managed keys when creating an instance
+1. Create a **Microsoft Energy Data Services** instance.
+2. Select the **Encryption** tab.
+
+   [![Screenshot of Encryption tab while creating Microsoft Energy Data Services](media/how-to-manage-data-security-and-encryption/customer-managed-key-2-encryption-tab.png)](media/how-to-manage-data-security-and-encryption/customer-managed-key-2-encryption-tab.png#lightbox)
+
+3. On the **Encryption** tab, select **Customer-managed keys (CMK)**.
+4. To use CMK, you need to select the key vault where the key is stored.
+5. For **Encryption key**, choose "**Select a key vault and key**."
+6. Then, select "**Select a key vault and key**."
+7. Next, select the **key vault** and **key**.
+8. Next, select the user-assigned managed identity that will be used to authorize access to the key vault that contains the key.
+9. Select "**Select a user identity**," and then select the user-assigned managed identity that you created in the prerequisites.
+
+ [![Screenshot of key vault, key, user assigned identity, and CMK on encryption tab](media/how-to-manage-data-security-and-encryption/customer-managed-key-3-enable-cmk.png)](media/how-to-manage-data-security-and-encryption/customer-managed-key-3-enable-cmk.png#lightbox)
+
+10. This user-assigned identity must have _get key_, _list key_, _wrap key_, and _unwrap key_ permissions on the key vault. For more information on assigning Azure Key Vault access policies, see [Assign a Key Vault Access Policy](../key-vault/general/assign-access-policy.md).
+
+ [![Screenshot of get, list, wrap, and upwrap key access policy](media/how-to-manage-data-security-and-encryption/customer-managed-key-4-access-policy.png)](media/how-to-manage-data-security-and-encryption/customer-managed-key-4-access-policy.png#lightbox)
+
+11. You can also set **Encryption key** to "**Enter key from URI**." The key must have soft delete and purge protection enabled; you'll have to confirm that by selecting the check box shown below.
+
+ [![Screenshot of key vault uri for encryption](media/how-to-manage-data-security-and-encryption/customer-managed-key-5-key-vault-url.png)](media/how-to-manage-data-security-and-encryption/customer-managed-key-5-key-vault-url.png#lightbox)
+
+12. Next, select **Review + create** after completing the other tabs.
+13. Select the **Create** button.
+14. A Microsoft Energy Data Services instance is created with customer-managed keys.
+15. Once CMK is enabled, you'll see its status on the **Overview** screen.
+
+ [![Screenshot of CMK enabled on MEDS overview page](media/how-to-manage-data-security-and-encryption/customer-managed-key-6-cmk-enabled-meds-overview.png)](media/how-to-manage-data-security-and-encryption/customer-managed-key-6-cmk-enabled-meds-overview.png#lightbox)
+
+16. You can navigate to **Encryption** and see that CMK is enabled with a user-assigned managed identity.
+
+ [![Screenshot of CMK settings disabled once MEDS instance is installed](media/how-to-manage-data-security-and-encryption/customer-managed-key-7-cmk-disabled-meds-instance-created.png)](media/how-to-manage-data-security-and-encryption/customer-managed-key-7-cmk-disabled-meds-instance-created.png#lightbox)
+++
+## Next steps
+To learn more about Private Link, see:
+> [!div class="nextstepaction"]
+> [How to set up private links](how-to-set-up-private-links.md)
energy-data-services How To Set Up Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-set-up-private-links.md
Title: Microsoft Energy Data Services - how to set up private links #Required; page title is displayed in search results. Include the brand.
-description: Guide to set up private links on Microsoft Energy Data Services #Required; article description that is displayed in search results.
----
+ Title: Create a private endpoint for Microsoft Energy Data Services
+description: Learn how to set up private endpoints for Microsoft Energy Data Services by using Azure Private Link.
++++ Last updated 09/29/2022-
-#Customer intent: As a developer, I want to set up private links on Microsoft Energy Data Services
+
+#Customer intent: As a developer, I want to set up private endpoints for Microsoft Energy Data Services.
-# Private Links in Microsoft Energy Data Services
+# Create a private endpoint for Microsoft Energy Data Services
[Azure Private Link](../private-link/private-link-overview.md) provides private connectivity from a virtual network to Azure platform as a service (PaaS). It simplifies the network architecture and secures the connection between endpoints in Azure by eliminating data exposure to the public internet.
-By using Azure Private Link, you can connect to a Microsoft Energy Data Services Preview instance from your virtual network via a private endpoint, which is a set of private IP addresses in a subnet within the virtual network.
+By using Azure Private Link, you can connect to a Microsoft Energy Data Services Preview instance from your virtual network via a private endpoint, which is a set of private IP addresses in a subnet within the virtual network. You can then limit access to your Microsoft Energy Data Services instance over these private IP addresses.
-You can then limit access to your Microsoft Energy Data Services Preview instance over these private IP addresses.
-You can connect to a Microsoft Energy Data Services configured with Private Link by using the automatic or manual approval method. To [learn more](../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow), see the Approval workflow section of the Private Link documentation.
+You can connect to a Microsoft Energy Data Services instance that's configured with Private Link by using an automatic or manual approval method. To learn more, see the [Private Link documentation](../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow).
-
-This article describes how to set up private endpoints for Microsoft Energy Data Services Preview.
+This article describes how to set up a private endpoint for Microsoft Energy Data Services.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-## Pre-requisites
+## Prerequisites
-Create a virtual network in the same subscription as the Microsoft Energy Data Services instance. [Learn more](../virtual-network/quick-create-portal.md). This will allow auto-approval of the private link endpoint.
+[Create a virtual network](../virtual-network/quick-create-portal.md) in the same subscription as the Microsoft Energy Data Services instance. This virtual network will allow automatic approval of the Private Link endpoint.
## Create a private endpoint by using the Azure portal
-Use the following steps to create a private endpoint for an existing Microsoft Energy Data Services instance by using the Azure portal:
-1. From the **All resources** pane, choose a Microsoft Energy Data Services Preview instance.
-2. Select **Networking** from the list of settings.
-
- [![Screenshot of public access under Networking tab for Private Links.](media/how-to-manage-private-links/private-links-1-Networking.png)](media/how-to-manage-private-links/private-links-1-Networking.png#lightbox)
-
-
-3. Select **Public Access** and select **Enabled from all networks** to allow traffic from all networks.
-4. To block traffic from all networks, select **Disabled**.
-5. Select **Private access** tab and select **Create a private endpoint**, to create a Private Endpoint Connection.
+Use the following steps to create a private endpoint for an existing Microsoft Energy Data Services Preview instance by using the Azure portal:
+
+1. From the **All resources** pane, choose a Microsoft Energy Data Services Preview instance.
+1. Select **Networking** from the list of settings.
+1. On the **Public Access** tab, select **Enabled from all networks** to allow traffic from all networks.
+
+ [![Screenshot of the Public Access tab.](media/how-to-manage-private-links/private-links-1-Networking.png)](media/how-to-manage-private-links/private-links-1-Networking.png#lightbox)
+
+ If you want to block traffic from all networks, select **Disabled**.
+
+1. Select the **Private Access** tab, and then select **Create a private endpoint**.
- [![Screenshot of private access for Private Links.](media/how-to-manage-private-links/private-links-2-create-private-endpoint.png)](media/how-to-manage-private-links/private-links-2-create-private-endpoint.png#lightbox)
+ [![Screenshot of the Private Access tab.](media/how-to-manage-private-links/private-links-2-create-private-endpoint.png)](media/how-to-manage-private-links/private-links-2-create-private-endpoint.png#lightbox)
-6. In the Create a private endpoint - **Basics pane**, enter or select the following details:
+1. In the **Create a private endpoint** wizard, on the **Basics** page, enter or select the following details:
- |Setting| Value|
+ |Setting| Value|
|--|--|
- |Project details|
- |Subscription| Select your subscription.|
- |Resource group| Select a resource group.|
- |Instance details|
- |Name| Enter any name for your private endpoint. If this name is taken, create a unique one.|
- |Region| Select the region where you want to deploy Private Link. |
-
- [![Screenshot of creating a MEDS instance with private link.](media/how-to-manage-private-links/private-links-3-basics.png)](media/how-to-manage-private-links/private-links-3-basics.png#lightbox)
-
-> [!NOTE]
-> Auto-approval only happens when the Microsoft Energy Data Services Preview instance and the vnet for the private link are in the same subscription.
+ |**Subscription**| Select your subscription for the project.|
+ |**Resource group**| Select a resource group for the project.|
+ |**Name**| Enter a name for your private endpoint. The name must be unique.|
+ |**Region**| Select the region where you want to deploy Private Link. |
+ [![Screenshot of entering basic information for a private endpoint.](media/how-to-manage-private-links/private-links-3-basics.png)](media/how-to-manage-private-links/private-links-3-basics.png#lightbox)
+
+ > [!NOTE]
+ > Automatic approval happens only when the Microsoft Energy Data Services instance and the virtual network for the private endpoint are in the same subscription.
-7. Select **Next: Resource.**
-8. In **Create a private endpoint - Resource**, the following information should be selected or available:
+1. Select **Next: Resource**. On the **Resource** page, confirm the following information:
- |Setting | Value |
+ |Setting| Value|
|--|--|
- |Subscription| Your subscription.|
- |Resource type| Microsoft.OpenEnergyPlatform/energyServices|
- |Resource |Your Microsoft Energy Data Services instance.|
- |Target sub-resource| This defaults to MEDS. |
+ |**Subscription**| Your subscription|
+ |**Resource type**| **Microsoft.OpenEnergyPlatform/energyServices**|
+ |**Resource**| Your Microsoft Energy Data Services instance|
+ |**Target sub-resource**| **MEDS** (for Microsoft Energy Data Services) by default|
- [![Screenshot of resource tab for private link during a MEDS instance creation.](media/how-to-manage-private-links/private-links-4-resource.png)](media/how-to-manage-private-links/private-links-4-resource.png#lightbox)
-
+ [![Screenshot of resource information for a private endpoint.](media/how-to-manage-private-links/private-links-4-resource.png)](media/how-to-manage-private-links/private-links-4-resource.png#lightbox)
-9. Select **Next: Virtual Network.**
-10. In the Virtual Network screen, you can:
-
- * Configure Networking and Private IP Configuration settings. [Learn more](../private-link/create-private-endpoint-portal.md#create-a-private-endpoint)
-
- * Configure private endpoint with ASG. [Learn more](../private-link/configure-asg-private-endpoint.md#create-private-endpoint-with-an-asg)
+1. Select **Next: Virtual Network**. On the **Virtual Network** page, you can:
- [![Screenshot of virtual network tab for private link during a MEDS instance creation.](media/how-to-manage-private-links/private-links-4-virtual-network.png)](media/how-to-manage-private-links/private-links-4-virtual-network.png#lightbox)
+ * Configure network and private IP settings. [Learn more](../private-link/create-private-endpoint-portal.md#create-a-private-endpoint).
+ * Configure a private endpoint with an application security group. [Learn more](../private-link/configure-asg-private-endpoint.md#create-private-endpoint-with-an-asg).
-11. Select **Next: DNS**. You can leave the default settings or learn more about DNS configuration. [Learn more](../private-link/private-endpoint-overview.md#dns-configuration)
+ [![Screenshot of virtual network information for a private endpoint.](media/how-to-manage-private-links/private-links-4-virtual-network.png)](media/how-to-manage-private-links/private-links-4-virtual-network.png#lightbox)
+1. Select **Next: DNS**. On the **DNS** page, you can leave the default settings or configure private DNS integration. [Learn more](../private-link/private-endpoint-overview.md#dns-configuration).
- [![Screenshot of DNS tab for private link during a MEDS instance creation.](media/how-to-manage-private-links/private-links-5-dns.png)](media/how-to-manage-private-links/private-links-5-dns.png#lightbox)
+ [![Screenshot of DNS information for a private endpoint.](media/how-to-manage-private-links/private-links-5-dns.png)](media/how-to-manage-private-links/private-links-5-dns.png#lightbox)
-12. Select **Next: Tags** and add tags to categorize resources.
-13. Select **Review + create**. On the Review + create page, Azure validates your configuration.
-14. When you see the Validation passed message, select **Create**.
+1. Select **Next: Tags**. On the **Tags** page, you can add tags to categorize resources.
+1. Select **Review + create**. On the **Review + create** page, Azure validates your configuration.
- [![Screenshot of summary screen while creating MEDS instance.](media/how-to-manage-private-links/private-links-6-review.png)](media/how-to-manage-private-links/private-links-6-review.png#lightbox)
+ When you see **Validation passed**, select **Create**.
+ [![Screenshot of the page that summarizes and validates configuration of your private endpoint.](media/how-to-manage-private-links/private-links-6-review.png)](media/how-to-manage-private-links/private-links-6-review.png#lightbox)
-15. Once the deployment is complete, select **Go to resource**.
+1. After the deployment is complete, select **Go to resource**.
- [![Screenshot of MEDS resource created.](media/how-to-manage-private-links/private-links-7-deploy.png)](media/how-to-manage-private-links/private-links-7-deploy.png#lightbox)
-
-
-16. The Private Endpoint created is **Auto-approved**.
+ [![Screenshot that shows an overview of a private endpoint deployment.](media/how-to-manage-private-links/private-links-7-deploy.png)](media/how-to-manage-private-links/private-links-7-deploy.png#lightbox)
+
+1. Confirm that the private endpoint that you created was automatically approved.
- [![Screenshot of private link created with auto-approval.](media/how-to-manage-private-links/private-links-8-request-response.png)](media/how-to-manage-private-links/private-links-8-request-response.png#lightbox)
+ [![Screenshot of information about a private endpoint with an indication of automatic approval.](media/how-to-manage-private-links/private-links-8-request-response.png)](media/how-to-manage-private-links/private-links-8-request-response.png#lightbox)
-17. Select the **Microsoft Energy Data Services** instance and navigate to the **Networking** tab to see the Private Endpoint created.
-
- [![Screenshot of private link showing connection state as auto-approved.](media/how-to-manage-private-links/private-links-9-auto-approved.png)](media/how-to-manage-private-links/private-links-9-auto-approved.png#lightbox)
-
+1. Select the **Microsoft Energy Data Services** instance, select **Networking**, and then select the **Private Access** tab. Confirm that your newly created private endpoint connection appears in the list.
-18. When the Microsoft Energy Data Services and vnet are in different tenants or subscriptions, you will be required to **Approve** or **Reject** the **Private Endpoint** creation request.
-
- [![Screenshot of private link showing approve or reject option.](media/how-to-manage-private-links/private-links-10-awaiting-approval.png)](media/how-to-manage-private-links/private-links-10-awaiting-approval.png#lightbox)
+ [![Screenshot of the Private Access tab with an automatically approved private endpoint connection.](media/how-to-manage-private-links/private-links-9-auto-approved.png)](media/how-to-manage-private-links/private-links-9-auto-approved.png#lightbox)
+> [!NOTE]
+> When the Microsoft Energy Data Services instance and the virtual network are in different tenants or subscriptions, you have to manually approve the request to create a private endpoint. The **Approve** and **Reject** buttons appear on the **Private Access** tab.
+>
+> [![Screenshot that shows options for rejecting or approving a request to create a private endpoint.](media/how-to-manage-private-links/private-links-10-awaiting-approval.png)](media/how-to-manage-private-links/private-links-10-awaiting-approval.png#lightbox)
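
If you script your deployments, you can also create the private endpoint with the Azure CLI instead of the portal. The following is only a sketch of the portal steps above: all names are placeholders, and the `--group-id` value must match the sub-resource (group ID) that your Microsoft Energy Data Services instance exposes for private endpoints.

```bash
# Sketch only: replace the placeholders, and confirm the group ID that
# Microsoft Energy Data Services exposes for private endpoints before you run this.
az network private-endpoint create \
  --name <private-endpoint-name> \
  --resource-group <resource-group> \
  --vnet-name <virtual-network-name> \
  --subnet <subnet-name> \
  --private-connection-resource-id <microsoft-energy-data-services-instance-resource-id> \
  --group-id <sub-resource-name> \
  --connection-name <connection-name>
```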
## Next steps <!-- Add a context sentence for the following links -->
energy-data-services Overview Ddms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-ddms.md
Last updated 09/01/2022 -+ # Domain data management services (DDMS)
Domain data management services (DDMS) store, access, and retrieve metadata and
### Aspirational components for any DDMS - Direct connection to OSDU&trade; core
- - Connection to adjacent or proximal databases (blob storage, Cosmos, external) and client applications
+ - Connection to adjacent or proximal databases (Blob storage, Azure Cosmos DB, external) and client applications
- Configure infrastructure provisioning to enable optimal performance for data streaming and access ### Additional components for most DDMS (may include but not be limited to)
energy-data-services Resources Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/resources-partner-solutions.md
+
+ Title: Microsoft Energy Data Services partners
+description: Lists third-party partner solutions that support Microsoft Energy Data Services.
++ Last updated : 09/24/2022+++++
+# Microsoft Energy Data Services Preview partners
+
+The partner community is the growth engine for Microsoft. To help our customers quickly realize the benefits of Microsoft Energy Data Services Preview, we've worked closely with many partners who have tested their software applications and tools on our data platform.
+
+## Partner solutions
+This article highlights Microsoft partners with software solutions officially supporting Microsoft Energy Data Services.
+
+| Partner | Description | Website/Product link |
+| - | -- | -- |
+| Bluware | Bluware enables energy companies to explore the full value of seismic data for exploration, carbon capture, wind farms, and geothermal workflows. Bluware technology on Microsoft Energy Data Services is increasing workflow productivity utilizing the power of Azure. Bluware's flagship seismic deep learning solution, InteractivAI&trade;, drastically improves the effectiveness of interpretation workflows. The interactive experience reduces seismic interpretation time by 10 times from weeks to hours and provides full control over interpretation results. | [Bluware technologies on Azure](https://go.bluware.com/bluware-on-azure-markeplace) [Bluware Products and Evaluation Packages](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bluwarecorp1581537274084.bluwareazurelisting)|
+| Katalyst | Katalyst Data Management&reg; provides the only integrated, end-to-end subsurface data management solution for the oil and gas industry. Over 160 employees operate in North America, Europe and Asia-Pacific, dedicated to enabling digital transformation and optimizing the value of geotechnical information for exploration, production, and M&A activity. |[Katalyst Data Management solution](https://www.katalystdm.com/seismic-news/katalyst-announces-sub-surface-data-management-solution-powered-by-microsoft-energy-data-services/) |
+| Interica | Interica OneView&trade; harnesses the power of application connectors to extract rich metadata from live projects discovered across the organization. IOV scans automatically discover content and extract detailed metadata at the sub-element level. Quickly and easily discover data across multiple file systems and data silos, and clearly determine which projects contain selected data objects to inform business decisions. Live data discovery enables businesses to see a complete holistic view of subsurface project landscapes for improved time to decisions, more efficient data search, and effective storage management. | [Accelerate Microsoft Energy Data Services adoption with Interica OneView&trade;](https://www.petrosys.com.au/interica-oneview-connecting-to-microsoft-data-services/) [Interica OneView&trade;](https://www.petrosys.com.au/assets/Interica_OneView_Accelerate_MEDS_Azure_adoption.pdf) [Interica OneView&trade; connecting to Microsoft Data Services](https://youtu.be/uPEOo3H01w4)|
++
+## Next steps
+To learn more about Microsoft Energy Data Services, see the following article:
+> [!div class="nextstepaction"]
+> [What is Microsoft Energy Data Services Preview?](overview-microsoft-energy-data-services.md)
energy-data-services Tutorial Well Delivery Ddms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-well-delivery-ddms.md
Title: Microsoft Energy Data Services Preview - Steps to interact with Well Delivery DDMS #Required; page title is displayed in search results. Include the brand.
-description: This tutorial shows you how to interact with Well Delivery DDMS #Required; article description that is displayed in search results.
----
+ Title: Tutorial - Work with well data records by using Well Delivery DDMS APIs
+description: Learn how to work with well data records in your Microsoft Energy Data Services Preview instance by using Well Delivery Domain Data Management Services (Well Delivery DDMS) APIs in Postman.
++++ Last updated 07/28/2022-+
-# Tutorial: Sample steps to interact with Well Delivery DDMS
+# Tutorial: Work with well data records by using Well Delivery DDMS APIs
-Well Delivery DDMS provides the capability to manage well related data in the Microsoft Energy Data Services Preview instance.
+Use Well Delivery Domain Data Management Services (Well Delivery DDMS) APIs in Postman to work with well data in your instance of Microsoft Energy Data Services Preview.
In this tutorial, you'll learn how to:- > [!div class="checklist"]
-> * Utilize Well Delivery DDMS API's to store and retrieve well data
+>
+> - Set up Postman to use a Well Delivery DDMS collection.
+> - Set up Postman to use a Well Delivery DDMS environment.
+> - Send requests via Postman.
+> - Generate an authorization token.
+> - Use Well Delivery DDMS APIs to work with well data records.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
+For more information about DDMS, see [DDMS concepts](concepts-ddms.md).
+ ## Prerequisites
-### Get Microsoft Energy Data Services Preview instance details
+- An Azure subscription
+- An instance of [Microsoft Energy Data Services Preview](quickstart-create-microsoft-energy-data-services-instance.md) created in your Azure subscription
+
+## Get your Microsoft Energy Data Services instance details
+
+The first step is to get the following information from your [Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) in the [Azure portal](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden):
+
+| Parameter | Value | Example |
+| | |-- |
+| CLIENT_ID | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx |
+| CLIENT_SECRET | Client secrets | _fl****************** |
+| TENANT_ID | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx |
+| SCOPE | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx |
+| base_uri | URI | `<instance>.energy.azure.com` |
+| data-partition-id | Data Partition(s) | `<instance>-<data-partition-name>` |
+
+You'll use this information later in the tutorial.
+
+## Set up Postman
+
+Next, set up Postman:
+
+1. Download and install the [Postman](https://www.postman.com/downloads/) desktop app.
+
+1. Import the following files in Postman:
+
+ - [Well Delivery DDMS Postman collection](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/WelldeliveryDDMS.postman_collection.json)
+ - [Well Delivery DDMS Postman environment](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/WelldeliveryDDMSEnviroment.postman_environment.json)
+
+ To import the files:
+
+ 1. Create two JSON files on your computer by copying the data that's in the collection and environment files.
+
+ 1. In Postman, select **Import** > **Files** > **Choose Files**, and then select the two JSON files on your computer.
+
+ 1. In **Import Entities** in Postman, select **Import**.
-* Once the [Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) is created, note down the following details:
+ :::image type="content" source="media/tutorial-well-delivery/postman-import-files.png" alt-text="Screenshot that shows importing collection and environment files in Postman." lightbox="media/tutorial-well-delivery/postman-import-files.png":::
+
+1. In the Postman environment, update **CURRENT VALUE** with the information from your [Microsoft Energy Data Services instance](#get-your-microsoft-energy-data-services-instance-details):
-```Table
- | Parameter | Value to use | Example |
- | | |-- |
- | CLIENT_ID | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx |
- | CLIENT_SECRET | Client secrets | _fl****************** |
- | TENANT_ID | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx |
- | SCOPE | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx |
- | base_uri | URI | <instance>.energy.azure.com |
- | data-partition-id | Data Partition(s) | <instance>-<data-partition-name> |
-```
+ 1. In Postman, in the left menu, select **Environments**, and then select **WellDelivery Environment**.
-### How to set up Postman
+ 1. In the **CURRENT VALUE** column, enter the information that's described in the table in [Get your Microsoft Energy Data Services instance details](#get-your-microsoft-energy-data-services-instance-details). Scroll to see all relevant variables.
-* Download and install [Postman](https://www.postman.com/) desktop app.
-* Import the following files into Postman:
- * [Well Delivery DDMS Postman collection](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/WelldeliveryDDMS.postman_collection.json)
- * [Well Delivery DDMS Postman Environment](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/WelldeliveryDDMSEnviroment.postman_environment.json)
-
-* Update the **CURRENT_VALUE** of the Postman Environment with the information obtained in [Microsoft Energy Data Services Preview instance details](#get-microsoft-energy-data-services-preview-instance-details).
+ :::image type="content" source="media/tutorial-well-delivery/postman-environment-current-values.png" alt-text="Screenshot that shows where to enter current values in the Well Delivery DDMS environment.":::
-### How to execute Postman requests
+## Send a Postman request
-* The Postman collection for Well Delivery DDMS contains requests that allows interaction with wells, wellbore, well planning, wellbore planning, well activity program and well trajectory data.
-* Make sure to choose the **Well Delivery DDMS Environment** before triggering the Postman collection.
-* Each request can be triggered by clicking the **Send** Button.
-* On every request Postman will validate the actual API response code against the expected response code; if there's any mismatch the Test Section will indicate failures.
+The Postman collection for Well Delivery DDMS contains requests that you can use to interact with well, wellbore, well log, and well trajectory data in your Microsoft Energy Data Services instance.
-### Generate a token
+For an example of how to send a Postman request, see the [Wellbore DDMS tutorial](tutorial-wellbore-ddms.md#send-an-example-postman-request).
-1. **Get a Token** - Import the CURL command in Postman to generate the bearer token. Update the bearerToken in well delivery DDMS environment. Use Bearer Token as Authorization type in other API calls.
- ```bash
+In the next sections, generate a token, and then use it to work with Well Delivery DDMS APIs.
+
+## Generate a token to use in APIs
+
+To generate a token:
+
+1. Import the following cURL command in Postman to generate a bearer token. Use the values from your Microsoft Energy Data Services instance.
+
+    ```bash
    curl --location --request POST 'https://login.microsoftonline.com/{{TENANT_ID}}/oauth2/v2.0/token' \
    --header 'Content-Type: application/x-www-form-urlencoded' \
    --data-urlencode 'grant_type=client_credentials' \
    --data-urlencode 'client_id={{CLIENT_ID}}' \
    --data-urlencode 'client_secret={{CLIENT_SECRET}}' \
    --data-urlencode 'scope={{SCOPE}}'
    ```
- :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-generate-token.png" alt-text="Screenshot of the well delivery generate token." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-generate-token.png":::
--
-## Store and retrieve well data with Well Delivery ddms APIs
-
-1. **Create a Legal Tag** - Create a legal tag that will be added automatically to the environment for data compliance purpose.
-1. **Create Well** - Creates the well record.
- :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-create-well.png" alt-text="Screenshot of the well delivery - create well." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-create-well.png":::
-1. **Create Wellbore** - Creates the wellbore record.
- :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-create-well-bore.png" alt-text="Screenshot of the well delivery - create wellbore." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-create-well-bore.png":::
-1. **Get Well Version** - Returns the well record based on given WellId.
- :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-get-well.png" alt-text="Screenshot of the well delivery - get well." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-get-well.png":::
-1. **Get Wellbore Version** - Returns the wellbore record based on given WellboreId.
- :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-get-well-bore.png" alt-text="Screenshot of the well delivery - get wellbore." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-get-well-bore.png":::
-1. **Create ActivityPlan** - Create the ActivityPlan.
- :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-create-activity-plan.png" alt-text="Screenshot of the well delivery - create activity plan." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-create-activity-plan.png":::
-1. **Get ActivityPlan by Well Id** - Returns the Activity Plan object from a wellId generated in Step 1.
- :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-activity-plans-by-well.png" alt-text="Screenshot of the well delivery - get activity plan by well." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-activity-plans-by-well.png":::
-1. **Delete wellbore record** - Deletes the specified wellbore record.
- :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-delete-well-bore.png" alt-text="Screenshot of the well delivery - delete wellbore." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-delete-well-bore.png":::
-1. **Delete well record** - Deletes the specified well record.
- :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-delete-well.png" alt-text="Screenshot of the well delivery - delete well." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-delete-well.png":::
-
-Completion of the above steps indicates successful creation and retrieval of well and wellbore records. Similar steps could be followed for well planning, wellbore planning, well activity program and wellbore trajectory data.
-
-## See also
-Advance to the next tutorial to learn how to use sdutil to load seismic data into seismic store
+
+ :::image type="content" source="media/tutorial-well-delivery/postman-generate-token.png" alt-text="Screenshot of the Well Delivery DDMS generate token cURL code." lightbox="media/tutorial-well-delivery/postman-generate-token.png":::
+
+1. Use the token output to update `access_token` in your Well Delivery DDMS environment. Then, you can use the bearer token as an authorization type in other API calls.
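
   Outside Postman, you can use the same bearer token from any HTTP client. The following sketch assumes the headers that the Postman collection sends (`Authorization` and `data-partition-id`); the API path is a placeholder, so take the actual paths from the requests in the imported collection.

   ```bash
   # Sketch only: BASE_URI, ACCESS_TOKEN, and DATA_PARTITION_ID come from your
   # instance details and the generated token. The path below is a placeholder.
   curl --request GET "https://$BASE_URI/<well-delivery-api-path>" \
     --header "Authorization: Bearer $ACCESS_TOKEN" \
     --header "data-partition-id: $DATA_PARTITION_ID" \
     --header "Content-Type: application/json"
   ```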
+
+## Use Well Delivery DDMS APIs to work with well data records
+
+Successfully completing the Postman requests that are described in the following Well Delivery DDMS API sections indicates successful ingestion and retrieval of well records in your Microsoft Energy Data Services instance.
+
+### Create a well
+
+Create a well record.
+
+API: **UC1** > **entity_create well**
+
+Method: PUT
++
+### Create a wellbore
+
+Create a wellbore record.
+
+API: **UC1** > **entity_create wellbore**
+
+Method: PUT
++
+### Get a well version
+
+Get a well record based on a specific well ID.
+
+API: **UC1** > **entity_create well Copy**
+
+Method: GET
++
+### Create an activity plan
+
+Create an activity plan.
+
+API: **UC1** > **entity_create activityplan**
+
+Method: PUT
++
+### Get activity plan by well ID
+
+Get the activity plan object for a specific well ID.
+
+API: **UC2** > **activity_plans_by_well**
+
+Method: GET
++
+### Delete a wellbore record
+
+You can delete a wellbore record in your Microsoft Energy Data Services instance by using Well Delivery DDMS APIs. For example:
++
+### Delete a well record
+
+You can delete a well record in your Microsoft Energy Data Services instance by using Well Delivery DDMS APIs. For example:
++
+## Next steps
+
+Go to the next tutorial to learn how to work with well data by using Wellbore DDMS APIs:
+ > [!div class="nextstepaction"]
-> [Tutorial: Sample steps to interact with Wellbore DDMS](tutorial-wellbore-ddms.md)
+> [Tutorial: Work with well data records by using Wellbore DDMS APIs](tutorial-wellbore-ddms.md)
energy-data-services Tutorial Wellbore Ddms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-wellbore-ddms.md
Title: Tutorial - Sample steps to interact with Wellbore DDMS #Required; page title is displayed in search results. Include the brand.
-description: This tutorial shows you how to interact with Wellbore DDMS in Microsoft Energy Data Services #Required; article description that is displayed in search results.
----
+ Title: Tutorial - Work with well data records by using Wellbore DDMS APIs
+description: Learn how to work with well data records in your Microsoft Energy Data Services Preview instance by using Wellbore Domain Data Management Services (Wellbore DDMS) APIs in Postman.
++++ Last updated 09/07/2022-+
-# Tutorial: Sample steps to interact with Wellbore DDMS
+# Tutorial: Work with well data records by using Wellbore DDMS APIs
-Wellbore DDMS provides the capability to operate on well data in the Microsoft Energy Data Services instance.
+Use Wellbore Domain Data Management Services (Wellbore DDMS) APIs in Postman to work with well data in your instance of Microsoft Energy Data Services Preview.
In this tutorial, you'll learn how to: > [!div class="checklist"]
-> * Utilize Wellbore DDMS APIs to store and retrieve Wellbore and well log data
+>
+> - Set up Postman to use a Wellbore DDMS collection.
+> - Set up Postman to use a Wellbore DDMS environment.
+> - Send requests via Postman.
+> - Generate an authorization token.
+> - Use Wellbore DDMS APIs to work with well data records.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
+For more information about DDMS, see [DDMS concepts](concepts-ddms.md).
+ ## Prerequisites
-### Microsoft Energy Data Services instance details
+- An Azure subscription
+- An instance of [Microsoft Energy Data Services Preview](quickstart-create-microsoft-energy-data-services-instance.md) created in your Azure subscription.
+
+## Get your Microsoft Energy Data Services instance details
+
+The first step is to get the following information from your [Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) in the [Azure portal](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden):
+
+| Parameter | Value | Example |
+| | |-- |
+| CLIENT_ID | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx |
+| CLIENT_SECRET | Client secrets | _fl****************** |
+| TENANT_ID | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx |
+| SCOPE | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx |
+| base_uri | URI | `<instance>.energy.azure.com` |
+| data-partition-id | Data Partition(s) | `<instance>-<data-partition-name>` |
+
+You'll use this information later in the tutorial.
+
+## Set up Postman
+
+Next, set up Postman:
-* Once the [Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) is created, save the following details:
+1. Download and install the [Postman](https://www.postman.com/downloads/) desktop app.
-```Table
- | Parameter | Value to use | Example |
- | | |-- |
- | CLIENT_ID | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx |
- | CLIENT_SECRET | Client secrets | _fl****************** |
- | TENANT_ID | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx |
- | SCOPE | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx |
- | base_uri | URI | <instance>.energy.azure.com |
- | data-partition-id | Data Partition(s) | <instance>-<data-partition-name> |
-```
+1. Import the following files in Postman:
-### Postman setup
+ - [Wellbore DDMS Postman collection](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/WellboreDDMS.postman_collection.json)
+ - [Wellbore DDMS Postman environment](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/WellboreDDMSEnvironment.postman_environment.json)
-1. Download and install [Postman](https://www.postman.com/) desktop app
-1. Import the following files into Postman:
- 1. [Wellbore ddms Postman collection](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/WellboreDDMS.postman_collection.json)
- 1. [Wellbore ddms Postman Environment](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/WellboreDDMSEnvironment.postman_environment.json)
+ To import the files:
+
+ 1. Create two JSON files on your computer by copying the data that's in the collection and environment files.
+
+ 1. In Postman, select **Import** > **Files** > **Choose Files**, and then select the two JSON files on your computer.
+
+ 1. In **Import Entities** in Postman, select **Import**.
+
+ :::image type="content" source="media/tutorial-wellbore-ddms/postman-import-files.png" alt-text="Screenshot that shows importing collection and environment files in Postman." lightbox="media/tutorial-wellbore-ddms/postman-import-files.png":::
-1. Update the **CURRENT_VALUE** of the Postman Environment with the information obtained in [Microsoft Energy Data Services instance details](#microsoft-energy-data-services-instance-details)
+1. In the Postman environment, update **CURRENT VALUE** with the information from your [Microsoft Energy Data Services instance](#get-your-microsoft-energy-data-services-instance-details):
+
+ 1. In Postman, in the left menu, select **Environments**, and then select **Wellbore DDMS Environment**.
+
+ 1. In the **CURRENT VALUE** column, enter the information that's described in the table in [Get your Microsoft Energy Data Services instance details](#get-your-microsoft-energy-data-services-instance-details).
+
+ :::image type="content" source="media/tutorial-wellbore-ddms/postman-environment-current-values.png" alt-text="Screenshot that shows where to enter current values in the Wellbore DDMS environment.":::
-### Executing Postman Requests
+## Send an example Postman request
-1. The Postman collection for Wellbore ddms contains requests that allows interaction with wells, wellbore, well log and well trajectory data.
-2. Make sure to choose the **Wellbore DDMS Environment** before triggering the Postman collection.
- :::image type="content" source="media/tutorial-wellbore-ddms/tutorial-postman-choose-wellbore-environment.png" alt-text="Choose environment." lightbox="media/tutorial-wellbore-ddms/tutorial-postman-choose-wellbore-environment.png":::
-3. Each request can be triggered by clicking the **Send** Button.
-4. On every request Postman will validate the actual API response code against the expected response code; if there's any mismatch the Test Section will indicate failures.
+The Postman collection for Wellbore DDMS contains requests you can use to interact with data about wells, wellbores, well logs, and well trajectory data in your Microsoft Energy Data Services instance.
-**Successful Postman Call**
+1. In Postman, in the left menu, select **Collections**, and then select **Wellbore DDMS**. Under **Setup**, select **Get an SPN Token**.
+1. In the environment dropdown in the upper-right corner, select **Wellbore DDMS Environment**.
-**Failed Postman Call**
+ :::image type="content" source="media/tutorial-wellbore-ddms/postman-get-spn-token.png" alt-text="Screenshot that shows the Get an SPN Token setup and selecting the environment." lightbox="media/tutorial-wellbore-ddms/postman-get-spn-token.png":::
+1. To send the request, select **Send**.
-### Generate a token
+ :::image type="content" source="media/tutorial-wellbore-ddms/postman-request-send.png" alt-text="Screenshot that shows the Send button for a request in Postman.":::
+
+1. The request validates the actual API response code against the expected response code. Select the **Test Results** tab to see whether the request succeeded or failed.
+
+ **Example of a successful Postman call:**
+
+ :::image type="content" source="media/tutorial-wellbore-ddms/postman-test-success.png" alt-text="Screenshot that shows success for a Postman call." lightbox="media/tutorial-wellbore-ddms/postman-test-success.png":::
+
+ **Example of a failed Postman call:**
+
+ :::image type="content" source="media/tutorial-wellbore-ddms/postman-test-failure.png" alt-text="Screenshot that shows failure for a Postman call." lightbox="media/tutorial-wellbore-ddms/postman-test-failure.png":::
+
+## Generate a token to use in APIs
+
+To generate a token:
+
+1. Import the following cURL command in Postman to generate a bearer token. Use the values from your Microsoft Energy Data Services instance.
-1. **Get a Token** - Import the CURL command in Postman to generate the bearer token. Update the bearerToken in wellbore ddms environment. Use Bearer Token as Authorization type in other API calls.
    ```bash
    curl --location --request POST 'https://login.microsoftonline.com/{{TENANT_ID}}/oauth2/v2.0/token' \
    --header 'Content-Type: application/x-www-form-urlencoded' \
    --data-urlencode 'grant_type=client_credentials' \
    --data-urlencode 'client_id={{CLIENT_ID}}' \
    --data-urlencode 'client_secret={{CLIENT_SECRET}}' \
    --data-urlencode 'scope={{SCOPE}}'
    ```
- :::image type="content" source="media/tutorial-wellbore-ddms/tutorial-of-the-wellbore-generate-token.png" alt-text="Screenshot of the wellbore generate token." lightbox="media/tutorial-wellbore-ddms/tutorial-of-the-wellbore-generate-token.png":::
-## Store and retrieve Wellbore and Well Log data with Wellbore ddms APIs
+   :::image type="content" source="media/tutorial-wellbore-ddms/postman-generate-token.png" alt-text="Screenshot of the Wellbore DDMS generate token cURL code." lightbox="media/tutorial-wellbore-ddms/postman-generate-token.png":::
+
+1. Use the token output to update `access_token` in your Wellbore DDMS environment. Then, you can use the bearer token as an authorization type in other API calls.
+
+## Use Wellbore DDMS APIs to work with well data records
+
+Successfully completing the Postman requests that are described in the following Wellbore DDMS API sections indicates successful ingestion and retrieval of well records in your Microsoft Energy Data Services instance.
+
+### Create a legal tag
+
+Create a legal tag that's automatically added to your Wellbore DDMS environment for data compliance.
+
+API: **Setup** > **Create Legal Tag for WDMS**
-1. **Create a Legal Tag** - Create a legal tag that will be added automatically to the environment for data compliance purpose.
-2. **Create Well** - Creates the wellbore record in Microsoft Energy Data Services instance.
- :::image type="content" source="media/tutorial-wellbore-ddms/tutorial-create-well.png" alt-text="Screenshot of creating a Well." lightbox="media/tutorial-wellbore-ddms/tutorial-create-well.png":::
-3. **Get Wells** - Returns the well data created in the last step.
- :::image type="content" source="media/tutorial-wellbore-ddms/tutorial-get-wells.png" alt-text="Screenshot of getting all wells." lightbox="media/tutorial-wellbore-ddms/tutorial-get-wells.png":::
-1. **Get Well Versions** - Returns the versions of each ingested well record.
- :::image type="content" source="media/tutorial-wellbore-ddms/tutorial-get-well-versions.png" alt-text="Screenshot of getting all Well versions." lightbox="media/tutorial-wellbore-ddms/tutorial-get-well-versions.png":::
-1. **Get specific Well Version** - Returns the details of specified version of specified record.
- :::image type="content" source="media/tutorial-wellbore-ddms/tutorial-get-specific-well-version.png" alt-text="Screenshot of getting a specific well version." lightbox="media/tutorial-wellbore-ddms/tutorial-get-specific-well-version.png":::
-1. **Delete well record** - Deletes the specified record.
- :::image type="content" source="media/tutorial-wellbore-ddms/tutorial-delete-well.png" alt-text="Screenshot of delete well record." lightbox="media/tutorial-wellbore-ddms/tutorial-delete-well.png":::
+Method: POST
-***Successful completion of above steps indicates success ingestion and retrieval of well records***
+
+For more information, see [Manage legal tags](how-to-manage-legal-tags.md).
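
If you want to reproduce this request outside Postman, the following sketch assumes the OSDU&trade; Legal service path (`/api/legal/v1/legaltags`) and a request body in a local `legaltag.json` file that follows the legal tag schema described in the linked article. The path and body are assumptions; treat the imported Postman request as the source of truth.

```bash
# Sketch only: assumes the OSDU Legal service endpoint and a legaltag.json file
# whose contents follow the legal tag schema described in the linked article.
curl --request POST "https://$BASE_URI/api/legal/v1/legaltags" \
  --header "Authorization: Bearer $ACCESS_TOKEN" \
  --header "data-partition-id: $DATA_PARTITION_ID" \
  --header "Content-Type: application/json" \
  --data @legaltag.json
```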
+
+### Create a well
+
+Create a well record in your Microsoft Energy Data Services instance.
+
+API: **Well** > **Create Well**.
+
+Method: POST
++
+### Get a well record
+
+Get the well record data for your Microsoft Energy Data Services instance.
+
+API: **Well** > **Well**
+
+Method: GET
++
+### Get well versions
+
+Get the versions of each ingested well record in your Microsoft Energy Data Services instance.
+
+API: **Well** > **Well versions**
+
+Method: GET
++
+### Get a specific well version
+
+Get the details of a specific version for a specific well record in your Microsoft Energy Data Services instance.
+
+API: **Well** > **Well Specific version**
+
+Method: GET
++
+### Delete a well record
+
+Delete a specific well record from your Microsoft Energy Data Services instance.
+
+API: **Clean up** > **Well Record**
+
+Method: DELETE
+ ## Next steps
-Advance to the next tutorial to learn about sdutil
+
+Go to the Seismic Store DDMS sdutil tutorial to learn how to use sdutil to load seismic data into Seismic Store:
+ > [!div class="nextstepaction"]
-> [Tutorial: Seismic store sdutil](tutorial-seismic-ddms-sdutil.md)
+> [Tutorial: Seismic Store DDMS sdutil](tutorial-seismic-ddms-sdutil.md)
event-grid Auth0 Log Stream App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/auth0-log-stream-app-insights.md
+
+ Title: Send Auth0 events to Azure Monitor's Application Insights
+description: This article shows how to send Auth0 events received by Azure Event Grid to Azure Monitor's Application Insights.
+ Last updated : 10/12/2022++
+# Send Auth0 events to Azure Monitor's Application Insights
+This article shows how to send Auth0 events received by Azure Event Grid to Azure Monitor's Application Insights.
+
+## Prerequisites
+
+[Create an Azure Event Grid stream on Auth0](https://marketplace.auth0.com/integrations/azure-log-streaming).
+
+## Create an Azure function
+
+1. Create an Azure function by following instructions from the **Create a local project** section of [Quickstart: Create a JavaScript function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-node.md).
+ 1. Select **Azure Event Grid trigger** for the function template instead of **HTTP trigger** as mentioned in the quickstart.
+ 1. Continue to follow the steps, but use the following **index.js**.
+
+ > [!IMPORTANT]
+ > Update the **package.json** to include `applicationinsights` as a dependency.
+
+ **index.js**
+
+ ```javascript
+ const appInsights = require("applicationinsights");
+ appInsights.setup();
+ const appInsightsClient = appInsights.defaultClient;
+
+ // Event Grid always sends an array of data and may send more
+ // than one event in the array. The runtime invokes this function
+ // once for each array element, so we are always dealing with one.
+ // See: https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-event-grid-trigger?tabs=
+ module.exports = async function (context, eventGridEvent) {
+ context.log(typeof eventGridEvent);
+ context.log(eventGridEvent);
+
+ // As written, the Application Insights custom event will not be
+ // correlated to any other context or span. If the custom event
+ // should be correlated to the parent function invocation, use
+ // the tagOverrides property. For example:
+ // var operationIdOverride = {
+ // "ai.operation.id": context.traceContext.traceparent
+ // };
+ // client.trackEvent({
+ // name: "correlated to function invocation",
+ // tagOverrides: operationIdOverride,
+ // properties: {}
+ // });
+
+ context.log(`Sending to App Insights...`);
+
+ appInsightsClient.trackEvent({
+ name: "Event Grid Event",
+ properties: {
+ eventGridEvent: eventGridEvent
+ }
+ });
+
+ context.log(`Sent to App Insights successfully`);
+ };
+ ```
+1. Create an Azure function app using instructions from [Quick function app create](../azure-functions/functions-develop-vs-code.md?tabs=csharp#quick-function-app-create).
+1. Deploy your function to the function app on Azure using instructions from [Deploy project files](../azure-functions/functions-develop-vs-code.md?tabs=csharp#republish-project-files).
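
As the earlier callout notes, the function needs `applicationinsights` in **package.json**. If you're editing the project locally, a standard way to add the dependency is with npm:

```bash
# Adds applicationinsights to package.json and installs it locally.
npm install applicationinsights --save
```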
+
+
+## Create event subscription for partner topic using function
+
+1. In the Azure portal, navigate to the Event Grid **partner topic** created by your **Auth0 log stream**.
+
+ :::image type="content" source="./media/auth0-log-stream-blob-storage/add-event-subscription-menu.png" alt-text="Screenshot showing the Event Grid Partner Topics page with Add Event Subscription button selected.":::
+1. On the **Create Event Subscription** page, follow these steps:
+ 1. Enter a **name** for the event subscription.
+ 1. For **Endpoint type**, select **Azure Function**.
+
+ :::image type="content" source="./media/auth0-log-stream-blob-storage/select-endpoint-type.png" alt-text="Screenshot showing the Create Event Subscription page with Azure Functions selected as the endpoint type.":::
+ 1. Click **Select an endpoint** to specify details about the function.
+1. On the **Select Azure Function** page, follow these steps.
+ 1. Select the **Azure subscription** that contains the function.
+ 1. Select the **resource group** that contains the function.
+ 1. Select your **function app**.
+ 1. Select your **Azure function**.
+ 1. Then, select **Confirm Selection**.
+1. Now, back on the **Create Event Subscription** page, select **Create** to create the event subscription.
+1. After the event subscription is created successfully, you see the event subscription in the bottom pane of the **Event Grid Partner Topic - Overview** page.
+1. Select the link to your Azure function at the bottom of the page.
+1. On the **Azure Function** page, select **Application Insights** under **Settings** on the left menu.
+1. Select the **Application Insights** link, and then select **View Application Insights data**.
+1. After your Auth0 logs are generated, your data should be visible in Application Insights.
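
If you prefer the command line to the portal, the following sketch queries the custom events by using the Azure CLI `application-insights` extension. The event name matches the `trackEvent` call in the function code; the resource names are placeholders.

```bash
# Sketch only: requires the Azure CLI application-insights extension.
az monitor app-insights query \
  --app <application-insights-resource-name> \
  --resource-group <resource-group> \
  --analytics-query "customEvents | where name == 'Event Grid Event' | take 10"
```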
+
+ > [!NOTE]
+   > You can use the steps in this article to handle events from other event sources too. For a generic example of sending Event Grid events to Azure Blob Storage or Azure Monitor's Application Insights, see [this example on GitHub](https://github.com/awkwardindustries/azure-monitor-handler).
+
+## Next steps
+
+- [Auth0 Partner Topic](auth0-overview.md)
+- [Subscribe to Auth0 events](auth0-how-to.md)
+- [Send Auth0 events to Azure Monitor's Application Insights](auth0-log-stream-app-insights.md)
event-grid Auth0 Log Stream Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/auth0-log-stream-blob-storage.md
+
+ Title: Send Auth0 events to Blob Storage via Azure Event Grid
+description: This article shows how to send Auth0 events received by Azure Event Grid to Azure Blob Storage by using Azure Functions.
+ Last updated : 10/12/2022++
+# Send Auth0 events to Azure Blob Storage
+This article shows you how to send Auth0 events to Azure Blob Storage via Azure Event Grid by using Azure Functions.
+
+## Prerequisites
+- [Create an Azure Event Grid stream on Auth0](https://marketplace.auth0.com/integrations/azure-log-streaming).
+- [Create an Azure Blob Storage resource](../storage/common/storage-account-create.md?tabs=azure-portal)
+- [Get connection string to Azure Storage account](../storage/common/storage-account-keys-manage.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-portal#view-account-access-keys). Make sure you select the **Copy** button to copy the connection string to the clipboard.
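
If you'd rather retrieve the connection string from the command line than the portal, the following Azure CLI sketch returns it (the account and resource group names are placeholders):

```bash
az storage account show-connection-string \
  --name <storage-account-name> \
  --resource-group <resource-group> \
  --query connectionString \
  --output tsv
```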
+
+## Create an Azure function
+1. Create an Azure function by following instructions from the **Create a local project** section of [Quickstart: Create a JavaScript function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-node.md).
+ 1. Select **Azure Event Grid trigger** for the function template instead of **HTTP trigger** as mentioned in the quickstart.
+ 1. Continue to follow the steps, but use the following **index.js** and **function.json** files.
+
+ > [!IMPORTANT]
+ > Update the **package.json** to include `@azure/storage-blob` as a dependency.
+
+ **function.json**
+ ```json
+ {
+ "bindings": [{
+ "type": "eventGridTrigger",
+ "name": "eventGridEvent",
+ "direction": "in"
+
+ },
+ {
+ "type": "blob",
+ "name": "outputBlob",
+ "path": "events/{rand-guid}.json",
+ "connection": "OUTPUT_STORAGE_ACCOUNT",
+ "direction": "out"
+
+ }
+ ]
+ }
+ ```
+
+ **index.js**
+
+ ```javascript
+ // Event Grid always sends an array of data and may send more
+ // than one event in the array. The runtime invokes this function
+ // once for each array element, so we are always dealing with one.
+ // See: https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-event-grid-trigger?tabs=
+ module.exports = async function (context, eventGridEvent) {
+ context.log(JSON.stringify(context.bindings));
+ context.log(JSON.stringify(context.bindingData));
+
+ context.bindings.outputBlob = JSON.stringify(eventGridEvent);
+ };
+ ```
+1. Create an Azure function app using instructions from [Quick function app create](../azure-functions/functions-develop-vs-code.md?tabs=csharp#quick-function-app-create).
+1. Deploy your function to the function app on Azure using instructions from [Deploy project files](../azure-functions/functions-develop-vs-code.md?tabs=csharp#republish-project-files).
+
+
+## Configure Azure function to use your blob storage
+1. Configure your Azure function to use your storage account.
+ 1. Select **Configuration** under **Settings** on the left menu.
+ 1. On the **Application settings** page, select **+ New connection string** on the command bar.
+ 1. Set **Name** to **AzureWebJobsOUTPUT_STORAGE_ACCOUNT**.
+ 1. Set **Value** to the connection string to the storage account that you copied to the clipboard in the previous step.
+ 1. Select **OK**.
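
If you script your configuration, a roughly equivalent step with the Azure CLI is to set the value as an app setting named `AzureWebJobsOUTPUT_STORAGE_ACCOUNT`, which the `OUTPUT_STORAGE_ACCOUNT` binding connection can also resolve. This is an assumption about how the binding resolves its connection, so verify that it matches your portal configuration.

```bash
# Sketch only: the function app and resource group names are placeholders.
az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group> \
  --settings "AzureWebJobsOUTPUT_STORAGE_ACCOUNT=<storage-connection-string>"
```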
+
+## Create event subscription for partner topic using function
+
+1. In the Azure portal, navigate to the Event Grid **partner topic** created by your **Auth0 log stream**.
+
+ :::image type="content" source="./media/auth0-log-stream-blob-storage/add-event-subscription-menu.png" alt-text="Screenshot showing the Event Grid Partner Topics page with Add Event Subscription button selected.":::
+1. On the **Create Event Subscription** page, follow these steps:
+ 1. Enter a **name** for the event subscription.
+ 1. For **Endpoint type**, select **Azure Function**.
+
+ :::image type="content" source="./media/auth0-log-stream-blob-storage/select-endpoint-type.png" alt-text="Screenshot showing the Create Event Subscription page with Azure Functions selected as the endpoint type.":::
+ 1. Click **Select an endpoint** to specify details about the function.
+1. On the **Select Azure Function** page, follow these steps.
+ 1. Select the **Azure subscription** that contains the function.
+ 1. Select the **resource group** that contains the function.
+ 1. Select your **function app**.
+ 1. Select your **Azure function**.
+ 1. Then, select **Confirm Selection**.
+1. Now, back on the **Create Event Subscription** page, select **Create** to create the event subscription.
+1. After the event subscription is created successfully, you see the event subscription in the bottom pane of the **Event Grid Partner Topic - Overview** page.
+1. Select the link to your Azure function at the bottom of the page.
+1. On the **Azure Function** page, select **Monitor** and confirm data is successfully being sent. You may need to trigger logs from Auth0.
+
+## Verify that logs are stored in the storage account
+
+1. Locate your storage account in the Azure portal.
+1. Select **Containers** under **Data Storage** on the left menu.
+1. Confirm that you see a container named **events**.
+1. Select the container and verify that your Auth0 logs are being stored.
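
You can also list the blobs from the command line. The following Azure CLI sketch assumes that you're signed in and that your account has data-plane access to the storage account; the account name is a placeholder.

```bash
az storage blob list \
  --account-name <storage-account-name> \
  --container-name events \
  --auth-mode login \
  --output table
```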
+
+ > [!NOTE]
+   > You can use the steps in this article to handle events from other event sources too. For a generic example of sending Event Grid events to Azure Blob Storage or Azure Monitor's Application Insights, see [this example on GitHub](https://github.com/awkwardindustries/azure-monitor-handler).
+
+## Next steps
+
+- [Auth0 Partner Topic](auth0-overview.md)
+- [Subscribe to Auth0 events](auth0-how-to.md)
+- [Send Auth0 events to Azure Blob Storage](auth0-log-stream-blob-storage.md)
event-grid Cloud Event Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/cloud-event-schema.md
Title: CloudEvents v1.0 schema with Azure Event Grid description: Describes how to use the CloudEvents v1.0 schema for events in Azure Event Grid. The service supports events in the JSON implementation of Cloud Events. + Last updated 07/22/2021
This article describes CloudEvents schema with Event Grid.
Here is an example of an Azure Blob Storage event in CloudEvents format:
-``` JSON
+```json
{ "specversion": "1.0", "type": "Microsoft.Storage.BlobCreated",
event-grid Cloudevents Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/cloudevents-schema.md
description: Describes how to use the CloudEvents schema for events in Azure Eve
Last updated 07/20/2022 ms.devlang: csharp, javascript-+ # Use CloudEvents v1.0 schema with Event Grid
This article describes how to use the CloudEvents schema with Event Grid.
Here's an example of an Azure Blob Storage event in CloudEvents format:
-``` JSON
+```json
{ "specversion": "1.0", "type": "Microsoft.Storage.BlobCreated",
event-grid Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Grid description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
event-hubs Connect Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/connect-event-hub.md
+
+ Title: Connect to an Azure event hub - .NET
+description: This article shows different ways of connecting to an event hub in Azure Event Hubs.
+ Last updated : 10/10/2022 ++
+# Connect to an event hub (.NET)
+This article shows how to connect to an event hub in different ways by using the .NET SDK. The examples use [EventHubProducerClient](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient), which is used to send messages to an event hub. You can use similar variations of constructors for [EventHubConsumerClient](/dotnet/api/azure.messaging.eventhubs.consumer.eventhubconsumerclient) to consume events from an event hub.
+
+## Connect using a connection string
+This section shows how to connect to an event hub using a connection string to a namespace or an event hub.
+
+If you have the connection string to the namespace and the event hub name, use the [EventHubProducerClient](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient) constructor that takes the connection string and the event hub name as parameters.
+
+```csharp
+// Use the constructor that takes the connection string to the namespace and event hub name
+producerClient = new EventHubProducerClient("NAMESPACE-CONNECTIONSTRING", "EVENTHUBNAME");
+```
+
+Alternatively, you can append `;EntityPath=<EVENTHUBNAME>` to the namespace's connection string and use the constructor that takes only the connection string.
+
+```csharp
+// Use the constructor that takes only the connection string (with the event hub name appended as EntityPath)
+producerClient = new EventHubProducerClient(connectionString);
+```
+
+You can also use this constructor if you have a connection string to the event hub (not the namespace).
+
+## Connect using a policy name and its key value
+The following example shows you how to connect to an event hub using a name and value of the SAS policy you created for an event hub.
+
+```csharp
+// Use the constructor that takes an AzureNamedKeyCredential parameter
+producerClient = new EventHubProducerClient("<NAMESPACENAME>.servicebus.windows.net", "EVENTHUBNAME", new AzureNamedKeyCredential("SASPOLICYNAME", "KEYVALUE"));
+```
+
+## Connect using a SAS token
+The following example shows you how to connect to an event hub using a SAS token that's generated using a SAS policy.
+
+```csharp
+var token = createToken("NAMESPACENAME.servicebus.windows.net", "SASPOLICYNAME", "KEYVALUE");
+producerClient = new EventHubProducerClient("NAMESPACENAME.servicebus.windows.net", "EVENTHUBNAME", new AzureSasCredential(token));
+```
+
+Here's the sample code for generating a token using a SAS policy and key value:
+
+```csharp
+private static string createToken(string resourceUri, string keyName, string key)
+{
+ TimeSpan sinceEpoch = DateTime.UtcNow - new DateTime(1970, 1, 1);
+ var week = 60 * 60 * 24 * 7;
+ var expiry = Convert.ToString((int)sinceEpoch.TotalSeconds + week);
+ string stringToSign = HttpUtility.UrlEncode(resourceUri) + "\n" + expiry;
+ HMACSHA256 hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key));
+ var signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
+ var sasToken = String.Format(CultureInfo.InvariantCulture, "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}", HttpUtility.UrlEncode(resourceUri), HttpUtility.UrlEncode(signature), expiry, keyName);
+ return sasToken;
+}
+```
+
+## Connect using Azure AD application
+
+1. Create an Azure AD application.
+1. Assign the application's service principal to the appropriate [role-based access control (RBAC) role](authorize-access-azure-active-directory.md#azure-built-in-roles-for-azure-event-hubs) (owner, sender, or receiver). For more information, see [Authorize access with Azure Active Directory](authorize-access-azure-active-directory.md).
+
+```csharp
+var clientSecretCredential = new ClientSecretCredential("TENANTID", "CLIENTID", "CLIENTSECRET");
+producerClient = new EventHubProducerClient("NAMESPACENAME.servicebus.windows.net", "EVENTHUBNAME", clientSecretCredential);
+```
+
+## Next steps
+
+Review samples on GitHub:
+
+* [Azure.Messaging.EventHubs samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventhub/Azure.Messaging.EventHubs/samples)
+* [Azure.Messaging.EventHubs.Processor](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventhub/Azure.Messaging.EventHubs.Processor)
event-hubs Event Hubs Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-create.md
Title: Azure Quickstart - Create an event hub using the Azure portal description: In this quickstart, you learn how to create an Azure event hub using Azure portal. Previously updated : 10/20/2021 Last updated : 10/10/2022
To complete this quickstart, make sure that you have:
A resource group is a logical collection of Azure resources. All resources are deployed and managed in a resource group. To create a resource group: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the left navigation, select **Resource groups**. Then select **Add**.
-
- ![Resource groups - Add button](./media/event-hubs-quickstart-portal/resource-groups1.png)
+1. In the left navigation, select **Resource groups**, and then select **Create**.
+ :::image type="content" source="./media/event-hubs-quickstart-portal/resource-groups1.png" alt-text="Screenshot showing the Resource groups page with the selection of the Create button.":::
1. For **Subscription**, select the name of the Azure subscription in which you want to create the resource group. 1. Type a unique **name for the resource group**. The system immediately checks to see if the name is available in the currently selected Azure subscription. 1. Select a **region** for the resource group. 1. Select **Review + Create**.
- ![Resource group - create](./media/event-hubs-quickstart-portal/resource-groups2.png)
+ :::image type="content" source="./media/event-hubs-quickstart-portal/resource-groups2.png" alt-text="Screenshot showing the Create a resource group page.":::
1. On the **Review + Create** page, select **Create**. ## Create an Event Hubs namespace
An Event Hubs namespace provides a unique scoping container, in which you create
1. In the Azure portal, and select **Create a resource** at the top left of the screen. 1. Select **All services** in the left menu, and select **star (`*`)** next to **Event Hubs** in the **Analytics** category. Confirm that **Event Hubs** is added to **FAVORITES** in the left navigational menu.
-
- ![Search for Event Hubs](./media/event-hubs-quickstart-portal/select-event-hubs-menu.png)
-1. Select **Event Hubs** under **FAVORITES** in the left navigational menu, and select **Add** on the toolbar.
- ![Add button](./media/event-hubs-quickstart-portal/event-hubs-add-toolbar.png)
+ :::image type="content" source="./media/event-hubs-quickstart-portal/select-event-hubs-menu.png" alt-text="Screenshot showing the selection of Event Hubs in the All services page.":::
+1. Select **Event Hubs** under **FAVORITES** in the left navigational menu, and select **Create** on the toolbar.
+
+ :::image type="content" source="./media/event-hubs-quickstart-portal/event-hubs-add-toolbar.png" alt-text="Screenshot showing the selection of Create button on the Event hubs page.":::
1. On the **Create namespace** page, take the following steps: 1. Select the **subscription** in which you want to create the namespace. 1. Select the **resource group** you created in the previous step.
An Event Hubs namespace provides a unique scoping container, in which you create
1. Leave the **throughput units** (for standard tier) or **processing units** (for premium tier) settings as it is. To learn about throughput units or processing units: [Event Hubs scalability](event-hubs-scalability.md). 1. Select **Review + Create** at the bottom of the page.
- ![Create an event hub namespace](./media/event-hubs-quickstart-portal/create-event-hub1.png)
- 1. On the **Review + Create** page, review the settings, and select **Create**. Wait for the deployment to complete.
-
- ![Review + create page](./media/event-hubs-quickstart-portal/review-create.png)
+ :::image type="content" source="./media/event-hubs-quickstart-portal/create-event-hub1.png" alt-text="Screenshot of the Create Namespace page in the Azure portal.":::
+ 1. On the **Review + Create** page, review the settings, and select **Create**. Wait for the deployment to complete.
+1. On the **Deployment** page, select **Go to resource** to navigate to the page for your namespace.
- 1. On the **Deployment** page, select **Go to resource** to navigate to the page for your namespace.
-
- ![Deployment complete - go to resource](./media/event-hubs-quickstart-portal/deployment-complete.png)
- 1. Confirm that you see the **Event Hubs Namespace** page similar to the following example:
-
- ![Home page for the namespace](./media/event-hubs-quickstart-portal/namespace-home-page.png)
+ :::image type="content" source="./media/event-hubs-quickstart-portal/deployment-complete.png" alt-text="Screenshot of the Deployment complete page with the link to resource.":::
+1. Confirm that you see the **Event Hubs Namespace** page similar to the following example:
+
+ :::image type="content" source="./media/event-hubs-quickstart-portal/namespace-home-page.png" lightbox="./media/event-hubs-quickstart-portal/namespace-home-page.png" alt-text="Screenshot of the home page for your Event Hubs namespace in the Azure portal.":::
> [!NOTE] > Azure Event Hubs provides you with a Kafka endpoint. This endpoint enables your Event Hubs namespace to natively understand [Apache Kafka](https://kafka.apache.org/intro) message protocol and APIs. With this capability, you can communicate with your event hubs as you would with Kafka topics without changing your protocol clients or running your own clusters. Event Hubs supports [Apache Kafka versions 1.0](https://kafka.apache.org/10/documentation.html) and later. For more information, see [Use Event Hubs from Apache Kafka applications](event-hubs-for-kafka-ecosystem-overview.md).
An Event Hubs namespace provides a unique scoping container, in which you create
To create an event hub within the namespace, do the following actions:
-1. On the Event Hubs Namespace page, select **Event Hubs** in the left menu.
-1. At the top of the window, select **+ Event Hub**.
+1. On the **Overview** page, select **+ Event hub** on the command bar.
- ![Add Event Hub - button](./media/event-hubs-quickstart-portal/create-event-hub4.png)
-1. Type a name for your event hub, then select **Create**.
+ :::image type="content" source="./media/event-hubs-quickstart-portal/create-event-hub4.png" lightbox="./media/event-hubs-quickstart-portal/create-event-hub4.png" alt-text="Screenshot of the selection of Add event hub button on the command bar.":::
+1. Type a name for your event hub, then select **Review + create**.
- ![Create event hub](./media/event-hubs-quickstart-portal/create-event-hub5.png)
+   :::image type="content" source="./media/event-hubs-quickstart-portal/create-event-hub5.png" alt-text="Screenshot of the Create event hub page.":::
The **partition count** setting allows you to parallelize consumption across many consumers. For more information, see [Partitions](event-hubs-scalability.md#partitions). The **message retention** setting specifies how long the Event Hubs service keeps data. For more information, see [Event retention](event-hubs-features.md#event-retention).
+1. On the **Review + create** page, select **Create**.
1. You can check the status of the event hub creation in alerts. After the event hub is created, you see it in the list of event hubs.
- ![Event hub created](./media/event-hubs-quickstart-portal/event-hub-created.png)
+ :::image type="content" source="./media/event-hubs-quickstart-portal/event-hub-created.png" lightbox="./media/event-hubs-quickstart-portal/event-hub-created.png" alt-text="Screenshot showing the list of event hubs.":::
## Next steps
event-hubs Event Hubs Dedicated Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dedicated-overview.md
The Event Hubs dedicated offering is billed at a fixed monthly price, with a **m
For more information about quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas.md)
-## High availability with Azure Availability Zones
-Event Hubs dedicated clusters offer [availability zones](../availability-zones/az-overview.md#availability-zones) support where you can run event streaming workloads in physically separate locations within each Azure region that are tolerant to local failures.
+## High availability with availability zones
+Event Hubs standard, premium, and dedicated tiers offer [availability zones](../availability-zones/az-overview.md#availability-zones) support where you can run event streaming workloads in physically separate locations within each Azure region that are tolerant to local failures.
> [!IMPORTANT] > Event Hubs dedicated clusters require at least 8 Capacity Units (CUs) to enable availability zones. Clusters with self-serve scaling don't support availability zones yet. Availability zone support is only available in [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
event-hubs Event Hubs Federation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-federation-overview.md
Title: Multi-site and multi-region federation - Azure Event Hubs | Microsoft Docs description: This article provides an overview of multi-site and multi-region federation with Azure Event Hubs. + Last updated 09/28/2021
example, you can:
Azure Synapse Analytics, etc.) to perform batch analytics or train machine learning models based on very large, indexed pools of historical data. - Store projections (also called "materialized views") in databases ([SQL
- Database](../stream-analytics/sql-database-output.md), [Cosmos
- DB](../stream-analytics/azure-cosmos-db-output.md) ).
+ Database](../stream-analytics/sql-database-output.md), [Azure Cosmos DB](../stream-analytics/azure-cosmos-db-output.md) ).
### Stateless replication applications in Azure Functions
custom extensions for
triggers will dynamically adapt to the throughput needs by scaling the number of concurrently executing instances up and down based on documented metrics. For building log projections, Azure Functions supports output bindings for
-[Cosmos DB](../azure-functions/functions-bindings-cosmosdb-v2-output.md)
+[Azure Cosmos DB](../azure-functions/functions-bindings-cosmosdb-v2-output.md)
and [Azure Table Storage](../azure-functions/functions-bindings-storage-table-output.md). Azure Functions can run under an [Azure managed
event-hubs Event Hubs Federation Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-federation-patterns.md
description:
replication task patterns Last updated 09/28/2021+ # Event replication tasks patterns
key. The shape of the target database is ultimately up to you and your
application's needs. This pattern is also referred to as "event sourcing". > [!TIP]
-> You can easily create log projections into [Azure SQL
-> Database](../stream-analytics/sql-database-output.md) and [Azure Cosmos
-> DB](../stream-analytics/azure-cosmos-db-output.md) in Azure Stream Analytics and
-> you should prefer that option.
+> You can easily create log projections into [Azure SQL Database](../stream-analytics/sql-database-output.md) and [Azure Cosmos DB](../stream-analytics/azure-cosmos-db-output.md) in Azure Stream Analytics, and you should prefer that option.
-The following Azure Function projects the contents of an Event Hub
-compacted into an Azure CosmosDB collection.
+The following Azure Function projects the contents of an Event Hub compacted into an Azure Cosmos DB collection.
```C# [FunctionName("Eh1ToCosmosDb1Json")]
event-hubs Event Hubs Java Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-java-get-started-send.md
Title: Send or receive events from Azure Event Hubs using Java (latest) description: This article provides a walkthrough of creating a Java application that sends/receives events to/from Azure Event Hubs using the latest azure-messaging-eventhubs package. Previously updated : 04/30/2021 Last updated : 10/10/2022 ms.devlang: java
This section shows you how to create a Java application to send events an event
### Add reference to Azure Event Hubs library
-The Java client library for Event Hubs is available in the [Maven Central Repository](https://search.maven.org/search?q=a:azure-messaging-eventhubs). You can reference this library using the following dependency declaration inside your Maven project file:
+First, create a new **Maven** project for a console/shell application in your favorite Java development environment. Update the `pom.xml` file with the following dependency. The Java client library for Event Hubs is available in the [Maven Central Repository](https://search.maven.org/search?q=a:azure-messaging-eventhubs).
```xml <dependency>
The Java client library for Event Hubs is available in the [Maven Central Reposi
### Write code to send messages to the event hub
-For the following sample, first create a new Maven project for a console/shell application in your favorite Java development environment. Add a class named `Sender`, and add the following code to the class:
+Add a class named `Sender`, and add the following code to the class:
> [!IMPORTANT] > Update `<Event Hubs namespace connection string>` with the connection string to your Event Hubs namespace. Update `<Event hub name>` with the name of your event hub in the namespace.
event-hubs Event Hubs Premium Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-premium-overview.md
Azure Event Hubs provides encryption of data at rest with Azure Storage Service
The premium tier offers all the features of the standard plan, but with better performance, isolation, and more generous quotas. For more information about quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas.md)
-## High availability with Azure Availability Zones
-Event Hubs premium offers [availability zones](../availability-zones/az-overview.md#availability-zones) support with no extra cost. Using availability zones, you can run event streaming workloads in physically separate locations within each Azure region that are tolerant to local failures.
+## High availability with availability zones
+Event Hubs standard, premium, and dedicated tiers offer [availability zones](../availability-zones/az-overview.md#availability-zones) support with no extra cost. Using availability zones, you can run event streaming workloads in physically separate locations within each Azure region that are tolerant to local failures.
> [!IMPORTANT] > Availability zone support is only available in [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
event-hubs Event Hubs Python Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-python-get-started-send.md
Title: Send or receive events from Azure Event Hubs using Python (latest) description: This article provides a walkthrough for creating a Python application that sends/receives events to/from Azure Event Hubs using the latest azure-eventhub package. Previously updated : 09/01/2021 Last updated : 10/10/2022 ms.devlang: python
To complete this quickstart, you need the following prerequisites:
pip install azure-eventhub ```
- Install the following package for receiving the events by using Azure Blob storage as the checkpoint store:
+ Install the following package for receiving the events using Azure Blob storage as the checkpoint store:
```cmd pip install azure-eventhub-checkpointstoreblob-aio ```-- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the **connection string for the Event Hubs namespace** by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#azure-portal). You use the connection string later in this quickstart.
+- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create an Event Hubs namespace, and obtain the management credentials that your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the **connection string for the Event Hubs namespace** by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#azure-portal). You'll use the connection string later in this quickstart.
## Send events
-In this section, you create a Python script to send events to the event hub that you created earlier.
+In this section, create a Python script to send events to the event hub that you created earlier.
1. Open your favorite Python editor, such as [Visual Studio Code](https://code.visualstudio.com/). 2. Create a script called *send.py*. This script sends a batch of events to the event hub that you created earlier.
event-hubs Monitor Event Hubs Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/monitor-event-hubs-reference.md
Title: Monitoring Azure Event Hubs data reference
description: Important reference material needed when you monitor Azure Event Hubs. Previously updated : 02/10/2022 Last updated : 10/06/2022
Counts the number of data and management operations requests.
| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions | | - | - | -- | | | |
-| Incoming Requests| Yes | Count | Total | The number of requests made to the Event Hubs service over a specified period. This metric includes all the data and management plane operations. | Entity name|
-| Successful Requests| No | Count | Total | The number of successful requests made to the Event Hubs service over a specified period. | Entity name<br/><br/>Operation Result |
-| Throttled Requests| No | Count | Total | The number of requests that were throttled because the usage was exceeded. | Entity name<br/><br/>Operation Result |
+| Incoming Requests| Yes | Count | Count | The number of requests made to the Event Hubs service over a specified period. This metric includes all the data and management plane operations. | Entity name|
+| Successful Requests| No | Count | Count | The number of successful requests made to the Event Hubs service over a specified period. | Entity name<br/><br/>Operation Result |
+| Throttled Requests| No | Count | Count | The number of requests that were throttled because the usage was exceeded. | Entity name<br/><br/>Operation Result |
The following two types of errors are classified as **user errors**:
The following two types of errors are classified as **user errors**:
### Message metrics | Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions | | - | - | -- | | | |
-|Incoming Messages| Yes | Count | Total | The number of events or messages sent to Event Hubs over a specified period. | Entity name|
-|Outgoing Messages| Yes | Count | Total | The number of events or messages received from Event Hubs over a specified period. | Entity name |
-| Captured Messages| No | Count| Total | The number of captured messages. | Entity name |
-|Incoming Bytes | Yes | Bytes | Total | Incoming bytes for an event hub over a specified period. | Entity name|
-|Outgoing Bytes | Yes | Bytes | Total |Outgoing bytes for an event hub over a specified period. | Entity name |
+|Incoming Messages| Yes | Count | Count | The number of events or messages sent to Event Hubs over a specified period. | Entity name|
+|Outgoing Messages| Yes | Count | Count | The number of events or messages received from Event Hubs over a specified period. | Entity name |
+| Captured Messages| No | Count| Count | The number of captured messages. | Entity name |
+|Incoming Bytes | Yes | Bytes | Count | Incoming bytes for an event hub over a specified period. | Entity name|
+|Outgoing Bytes | Yes | Bytes | Count | Outgoing bytes for an event hub over a specified period. | Entity name |
| Size | No | Bytes | Average | Size of an event hub in bytes.|Entity name |
The following two types of errors are classified as **user errors**:
### Capture metrics | Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions | | - | -- | | | | |
-| Captured Messages| No | Count| Total | The number of captured messages. | Entity name |
-| Captured Bytes | No | Bytes | Total | Captured bytes for an event hub | Entity name |
-| Capture Backlog | No | Count| Total | Capture backlog for an event hub | Entity name |
+| Captured Messages| No | Count| Count | The number of captured messages. | Entity name |
+| Captured Bytes | No | Bytes | Count | Captured bytes for an event hub | Entity name |
+| Capture Backlog | No | Count| Count | Capture backlog for an event hub | Entity name |
### Connection metrics
The following two types of errors are classified as **user errors**:
### Error metrics | Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions | | - | -- | | | | |
-|Server Errors| No | Count | Total | The number of requests not processed because of an error in the Event Hubs service over a specified period. | Entity name<br/><br/>Operation Result |
-|User Errors | No | Count | Total | The number of requests not processed because of user errors over a specified period. | Entity name<br/><br/>Operation Result|
-|Quota Exceeded Errors | No |Count | Total | The number of errors caused by exceeding quotas over a specified period. | Entity name<br/><br/>Operation Result|
+|Server Errors| No | Count | Count | The number of requests not processed because of an error in the Event Hubs service over a specified period. | Entity name<br/><br/>Operation Result |
+|User Errors | No | Count | Count | The number of requests not processed because of user errors over a specified period. | Entity name<br/><br/>Operation Result|
+|Quota Exceeded Errors | No |Count | Count | The number of errors caused by exceeding quotas over a specified period. | Entity name<br/><br/>Operation Result|
> [!NOTE] > Logic Apps creates epoch receivers and receivers may be moved from one node to another depending on the service load. During those moves, `ReceiverDisconnection` exceptions may occur. They are counted as user errors on the Event Hubs service side. Logic Apps may collect failures from Event Hubs clients so that you can view them in user logs.
event-hubs Resource Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/resource-governance-overview.md
Title: Resource governance with application groups description: This article describes how to enable resource governance using application groups. Previously updated : 08/23/2022- Last updated : 10/12/2022+
-# Resource governance with application groups (preview)
+# Resource governance with application groups
Azure Event Hubs enables you to govern event streaming workloads of client applications that connect to Event Hubs. You can create logical groups known as *application groups* where each group is a collection of client applications, and then apply quota and access management policies for an application group (group of client applications).
Azure Event Hubs enables you to govern event streaming workloads of client appli
## Application groups
-An application group is a collection of one or more client applications that interact with the Event Hubs data plane. Each application group is scoped to a single Event Hubs namespace and should use a uniquely identifying condition such as the security context - shared access signatures (SAS) or Azure Active Directory (Azure AD) application ID - of the client application.
+An application group is a collection of one or more client applications that interact with the Event Hubs data plane. Each application group can be scoped to a single Event Hubs namespace or to an event hub (entity) within a namespace, and should use a uniquely identifying condition, such as the security context - a shared access signature (SAS) or Azure Active Directory (Azure AD) application ID - of the client application.
-Event Hubs currently supports using security contexts for creating application groups. Therefore, each application group must have a unique SAS policy or Azure AD application ID associated with them.
+Event Hubs currently supports using security contexts for creating application groups. Therefore, each application group must have a unique SAS policy or Azure AD application ID associated with it. If preferred, you can use a security context at the event hub level to scope an application group to a specific event hub within a namespace.
Application groups are logical entities that are created at the namespace level. Therefore, client applications interacting with event hubs don't need to be aware of the existence of an application group. Event Hubs can associate any client application to an application group by using the identifying condition.
You can have throttling policies specified using different ingress and egress me
When policies for application groups are applied, the client application workload may slow down or encounter server busy exceptions.
+### Throttling policy - threshold limits
+
+The following table shows the minimum threshold limits that you can set for each metric ID in a throttling policy:
+
+| Metric ID | Minimum limit |
+| | - |
+| IncomingByte | 1 KB |
+| OutgoingByte | 1 KB |
+| IncomingMessage | 1 |
+| OutgoingMessage | 1 |
+
+> [!NOTE]
+> Limits set on the throttling policy's threshold value take precedence over any value set for Kafka topic properties. For example, the `IncomingBytes` threshold takes priority over `message.max.bytes`.
+
+### Protocol support and error codes
+
+Application groups support throttling of operations that happen over the following protocols: AMQP, Kafka, and HTTP. The following table shows the error codes returned by application groups:
+
+| Protocol | Operation | Error code | Error message |
+| -- | | - | - |
+| AMQP | Send | 50004 | Application group is throttled with application group ID & policy name |
+| HTTP | Send | 503 | Subcode: 50004. Application group is throttled with application group ID and policy name |
+| Kafka | Send | PolicyViolation | Broker: policy violation |
+
+Due to restrictions at the protocol level, error messages aren't supported for receive operations. When an application group throttles receive operations, you'll experience sluggish consumption of messages on the consumer side.
+ ### Disabling application groups + An application group is enabled by default, which means all the client applications can access the Event Hubs namespace for publishing and consuming events, as long as they adhere to the application group policies. When an application group is disabled, the client can still connect to the event hub, but the authorization fails and the client connection is closed. Therefore, you'll see many successful open and close connections, with the same number of authorization failures, in diagnostic logs.
event-hubs Resource Governance With App Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/resource-governance-with-app-groups.md
Title: Govern resources for client applications with application groups description: Learn how to use application groups to govern resources for client applications that connect with Event Hubs. - Previously updated : 05/24/2022 Last updated : 10/12/2022+ # Govern resources for client applications with application groups
This article shows you how to perform the following tasks:
- Create an application group. - Enable or disable an application group-- Apply throttling policies to an application group
+- Define threshold limits and apply throttling policies to an application group
> [!NOTE] > Application groups are available only in **premium** and **dedicated** tiers.
You can create an application group using the Azure portal by following these st
1. Confirm that **Enabled** is selected. To have the application group in the disabled state first, clear the **Enabled** option. This flag determines whether the clients of an application group can access Event Hubs or not. 1. For **Security context type**, select **Shared access policy** or **AAD application**. When you create the application group, you should associate it with either a shared access signature (SAS) policy or an Azure Active Directory (Azure AD) application ID, which is used by client applications. 1. If you selected **Shared access policy**:
- 1. For **SAS key name**, select the SAS policy that can be used as a security context for this application group. You can select **Add SAS Policy** to add a new policy and then associate with the application group.
- 1. Review the auto-generated **Client group ID**, which is the unique ID associated with the application group. You can update it if you like.
+ 1. For **SAS key name**, select the SAS policy that can be used as a security context for this application group. An application group supports selecting a SAS key at either the namespace level or the entity (event hub) level. You can select **Add SAS Policy** to add a new policy and then associate it with the application group.
+ 1. Review the auto-generated **Client group ID**, which is the unique ID associated with the application group. You can update it if you like. The following table shows the auto-generated client group ID for keys at each level:
- :::image type="content" source="./media/resource-governance-with-app-groups/add-app-group.png" alt-text="Screenshot of the Add application group page with Shared access policy option selected.":::
- 1. If you selected **AAD application**:
- 1. For **AAD Application (client) ID**, specify the Azure Active Directory (Azure AD) application or client ID.
- 1. Review the auto-generated **Client group ID**, which is the unique ID associated with the application group. You can update it if you like.
+ | Key type | Auto-generated client group ID |
+ | -- | |
+ | Namespace-level key | `NamespaceSASKeyName=RootManageSharedAccessKey` |
+ | Entity-level key | `EntitySASKeyName=RootManageSharedAccessKey` |
+
+ > [!NOTE]
+ > All existing application groups created with a namespace-level key continue to work with a client group ID that starts with `SASKeyName`. However, all new application groups have an updated client group ID as shown above.
+
+
+ :::image type="content" source="./media/resource-governance-with-app-groups/add-app-group.png" alt-text="Screenshot of the Add application group page with Shared access policy option selected.":::
+ 1. If you selected **AAD application**:
+ 1. For **AAD Application (client) ID**, specify the Azure Active Directory (Azure AD) application or client ID.
+ 1. Review the auto-generated **Client group ID**, which is the unique ID associated with the application group. You can update it if you like. The scope of application governance (namespace or entity level) depends on the access level of the Azure AD application ID that's used.
:::image type="content" source="./media/resource-governance-with-app-groups/add-app-group-active-directory.png" alt-text="Screenshot of the Add application group page with Azure AD option."::: 1. To add a policy, follow these steps:
The following ARM template shows how to update an existing namespace (`contosona
```
+### Decide on a threshold value for throttling policies
+
+Azure Event Hubs provides the [runtime audit logs](monitor-event-hubs-reference.md#runtime-audit-logs) functionality to help you decide on a throttling threshold value based on your usual throughput. Follow these steps to find a good threshold value:
+
+1. Turn on [diagnostic settings](monitor-event-hubs.md#collection-and-routing) in Event Hubs with **runtime audit logs** as the selected category, and choose **Log Analytics** as the destination.
+2. Create an empty application group without any throttling policy.
+3. Continue sending messages or events to the event hub at your usual throughput.
+4. Go to the **Log Analytics workspace** and query for the right activity name (based on the metric ID) in the **AzureDiagnostics** table. The following sample query tracks the threshold value for incoming messages:
+
+ ```kusto
+ AzureDiagnostics
+ | where ActivityName_s =="IncomingMessages"
+ | where Outcome_s =="Success"
+ ```
+5. Select the **Chart** section in the Log Analytics workspace and plot a chart of the count of messages sent against the time generated (a query that aggregates message counts over time is sketched after these steps).
+
+ :::image type="content" source="./media/resource-governance-with-app-groups/azure-monitor-logs.png" lightbox="./media/resource-governance-with-app-groups/azure-monitor-logs.png" alt-text="Screenshot of the Azure Monitor logs page in the Azure portal.":::
+
+ In this example, you can see that the usual throughput never exceeded 550 messages (the expected current throughput). This observation helps you define the actual threshold value.
+6. Once you've decided on the best threshold value, add a new throttling policy to the application group.
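For instance, here's a minimal sketch (assuming the same `AzureDiagnostics` columns used in the sample query above, and an arbitrary 5-minute bin size) that aggregates the successful incoming-message records into time buckets and renders them as a chart:

```kusto
AzureDiagnostics
| where ActivityName_s == "IncomingMessages"
| where Outcome_s == "Success"
// Count the successful audit-log records (not individual messages) in 5-minute buckets.
| summarize MessageOperations = count() by bin(TimeGenerated, 5m)
// Render the aggregated counts as a time chart to visualize the usual throughput.
| render timechart
```

Adjust the bin size, or sum a message-count column if your runtime audit log schema exposes one, to match the granularity you need when choosing the threshold.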
## Publish or consume events Once you successfully add throttling policies to the application group, you can test the throttling behavior by either publishing or consuming events using client applications that are part of the `contosoAppGroup` application group. To test, you can use either an [AMQP client](event-hubs-dotnet-standard-getstarted-send.md) or a [Kafka client](event-hubs-quickstart-kafka-enabled-event-hubs.md) application and the same SAS policy name or Azure AD application ID that was used to create the application group.
event-hubs Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
event-hubs Transport Layer Security Audit Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-audit-minimum-version.md
Last updated 04/25/2022
-# Use Azure Policy to audit for compliance of minimum TLS version for an Azure Event Hubs namespace (Preview)
+# Use Azure Policy to audit for compliance of minimum TLS version for an Azure Event Hubs namespace
If you have a large number of Microsoft Azure Event Hubs namespaces, you may want to perform an audit to make sure that all namespaces are configured for the minimum version of TLS that your organization requires. To audit a set of Event Hubs namespaces for their compliance, use Azure Policy. Azure Policy is a service that you can use to create, assign, and manage policies that apply rules to Azure resources. Azure Policy helps you to keep those resources compliant with your corporate standards and service level agreements. For more information, see [Overview of Azure Policy](../governance/policy/overview.md).
event-hubs Transport Layer Security Configure Client Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-configure-client-version.md
Last updated 04/25/2022
-# Configure Transport Layer Security (TLS) for an Event Hubs client application (Preview)
+# Configure Transport Layer Security (TLS) for an Event Hubs client application
For security purposes, an Azure Event Hubs namespace may require that clients use a minimum version of Transport Layer Security (TLS) to send requests. Calls to Azure Event Hubs will fail if the client is using a version of TLS that is lower than the minimum required version. For example, if a namespace requires TLS 1.2, then a request sent by a client who is using TLS 1.1 will fail.
event-hubs Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-configure-minimum-version.md
Last updated 07/06/2022
-# Configure the minimum TLS version for an Event Hubs namespace (Preview)
+# Configure the minimum TLS version for an Event Hubs namespace
Azure Event Hubs namespaces permit clients to send and receive data with TLS 1.0 and above. To enforce stricter security measures, you can configure your Event Hubs namespace to require that clients send and receive data with a newer version of TLS. If an Event Hubs namespace requires a minimum version of TLS, then any requests made with an older version will fail. For conceptual information about this feature, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-enforce-minimum-version.md).
event-hubs Transport Layer Security Enforce Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-enforce-minimum-version.md
Last updated 04/25/2022
-# Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace (Preview)
+# Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace
Communication between a client application and an Azure Event Hubs namespace is encrypted using Transport Layer Security (TLS). TLS is a standard cryptographic protocol that ensures privacy and data integrity between clients and services over the Internet. For more information about TLS, see [Transport Layer Security](https://datatracker.ietf.org/wg/tls/about/).
expressroute Expressroute Erdirect About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-erdirect-about.md
description: Learn about key features of Azure ExpressRoute Direct and informati
+ Last updated 08/31/2021
ExpressRoute Direct gives you the ability to connect directly into Microsoft's
Key features that ExpressRoute Direct provides include, but aren't limited to:
-* Massive Data Ingestion into services like Storage and Cosmos DB
+* Massive data ingestion into services like Azure Storage and Azure Cosmos DB
* Physical isolation for industries that are regulated and require dedicated and isolated connectivity like: Banking, Government, and Retail * Granular control of circuit distribution based on business unit
expressroute Expressroute Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-introduction.md
+ Last updated 10/05/2020 - # What is Azure ExpressRoute? ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private connection with the help of a connectivity provider. With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure and Microsoft 365.
ExpressRoute Direct provides customers the opportunity to connect directly into
Key features that ExpressRoute Direct provides include, but aren't limited to:
-* Massive Data Ingestion into services like Storage and Cosmos DB
+* Massive data ingestion into services like Azure Storage and Azure Cosmos DB
* Physical isolation for industries that are regulated and require dedicated and isolated connectivity, such as: Banking, Government, and Retail * Granular control of circuit distribution based on business unit
expressroute Expressroute Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-routing.md
Last updated 12/07/2021 ---+ # ExpressRoute routing requirements To connect to Microsoft cloud services using ExpressRoute, you'll need to set up and manage routing. Some connectivity providers offer setting up and managing routing as a managed service. Check with your connectivity provider to see if they offer this service. If they don't, you must adhere to the following requirements:
Refer to the [ExpressRoute partners and peering locations](expressroute-location
You can purchase more than one ExpressRoute circuit per geopolitical region. Having multiple connections offers you significant benefits on high availability due to geo-redundancy. In cases where you have multiple ExpressRoute circuits, you will receive the same set of prefixes advertised from Microsoft on the Microsoft peering and public peering paths. This means you will have multiple paths from your network into Microsoft. This can potentially cause suboptimal routing decisions to be made within your network. As a result, you may experience suboptimal connectivity experiences to different services. You can rely on the community values to make appropriate routing decisions to offer [optimal routing to users](expressroute-optimize-routing.md).
-| **Microsoft Azure region** | **Regional BGP community (private peering)** | **Regional BGP community (Microsoft peering)** | **Storage BGP community** | **SQL BGP community** | **Cosmos DB BGP community** | **Backup BGP community** |
+| **Microsoft Azure region** | **Regional BGP community (private peering)** | **Regional BGP community (Microsoft peering)** | **Storage BGP community** | **SQL BGP community** | **Azure Cosmos DB BGP community** | **Backup BGP community** |
| | | | | | | | | **North America** | | | East US | 12076:50004 | 12076:51004 | 12076:52004 | 12076:53004 | 12076:54004 | 12076:55004 |
firewall-manager Secure Hybrid Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-hybrid-network.md
description: In this tutorial, you learn how to secure your virtual network with
+ Last updated 06/15/2022
Create the on-premises to hub virtual network connection. This step is similar t
1. Open the **FW-Hybrid-Test** resource group and select the **GW-Onprem** gateway. 2. Select **Connections** in the left column. 3. Select **Add**.
-4. The the connection name, type **Onprem-to-Hub**.
+4. For the connection name, type **Onprem-to-Hub**.
5. Select **VNet-to-VNet** for **Connection type**. 6. For the **Second virtual network gateway**, select **GW-hub**. 7. For **Shared key (PSK)**, type **AzureA1b2C3**.
You can keep your firewall resources for further investigation, or if no longer
## Next steps > [!div class="nextstepaction"]
-> [Tutorial: Secure your virtual WAN using Azure Firewall Manager](secure-cloud-network.md)
+> [Tutorial: Secure your virtual WAN using Azure Firewall Manager](secure-cloud-network.md)
firewall-manager Secured Virtual Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secured-virtual-hub.md
Previously updated : 06/09/2022 Last updated : 10/07/2022
A virtual hub is a Microsoft-managed virtual network that enables connectivity from other resources. When a virtual hub is created from a Virtual WAN in the Azure portal, a virtual hub VNet and gateways (optional) are created as its components.
-A *secured* virtual hub is an [Azure Virtual WAN Hub](../virtual-wan/virtual-wan-about.md#resources) with associated security and routing policies configured by Azure Firewall Manager. Use secured virtual hubs to easily create hub-and-spoke and transitive architectures with native security services for traffic governance and protection.
+A *secured* virtual hub is an [Azure Virtual WAN Hub](../virtual-wan/virtual-wan-about.md#resources) with associated security and routing policies configured by Azure Firewall Manager. Use secured virtual hubs to easily create hub-and-spoke and transitive architectures with native security services for traffic governance and protection.
+
+> [!IMPORTANT]
+> Currently, Azure Firewall in secured virtual hubs (vWAN) is not supported in Qatar.
You can use a secured virtual hub to filter traffic between virtual networks (V2V), virtual networks and branch offices (B2V) and traffic to the Internet (B2I/V2I). A secured virtual hub provides automated routing. There's no need to configure your own UDRs (user defined routes) to route traffic through your firewall.
firewall Dns Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/dns-settings.md
Previously updated : 01/28/2022 Last updated : 10/07/2022
If you want to enable FQDN (fully qualified domain name) filtering in network ru
:::image type="content" source="media/dns-settings/dns-proxy-2.png" alt-text="D N S proxy configuration using a custom D N S server.":::
-If you enable FQDN filtering in network rules, and you don't configure client virtual machines to use the firewall as a DNS proxy, then DNS requests from these clients might travel to a DNS server at a different time or return a different response compared to that of the firewall. DNS proxy puts Azure Firewall in the path of the client requests to avoid inconsistency.
-
+If you enable FQDN filtering in network rules, and you don't configure client virtual machines to use the firewall as a DNS proxy, then DNS requests from these clients might travel to a DNS server at a different time or return a different response compared to that of the firewall. It's recommended to configure client virtual machines to use the Azure Firewall as their DNS proxy. This puts Azure Firewall in the path of the client requests to avoid inconsistency.
When Azure Firewall is a DNS proxy, two caching function types are possible:
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
Previously updated : 08/29/2022 Last updated : 10/12/2022
The Azure Firewall signatures/rulesets include:
- Over 58,000 rules in over 50 categories. - The categories include malware command and control, phishing, trojans, botnets, informational events, exploits, vulnerabilities, SCADA network protocols, exploit kit activity, and more. - 20 to 40+ new rules are released each day.-- Low false positive rating by using state-of-the-art malware sandbox and global sensor network feedback loop.
+- Low false positive rating by using state-of-the-art malware detection techniques, such as a global sensor network feedback loop.
IDPS allows you to detect attacks in all ports and protocols for non-encrypted traffic. However, when HTTPS traffic needs to be inspected, Azure Firewall can use its TLS inspection capability to decrypt the traffic and better detect malicious activities.
firewall Tutorial Hybrid Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-hybrid-portal.md
description: In this article, you learn how to deploy and configure Azure Firewa
+ Last updated 04/29/2021
In this step, you create the connection from the hub virtual network to the on-p
1. Open the **FW-Hybrid-Test** resource group and select the **GW-hub** gateway. 2. Select **Connections** in the left column. 3. Select **Add**.
-4. The the connection name, type **Hub-to-Onprem**.
+4. For the connection name, type **Hub-to-Onprem**.
5. Select **VNet-to-VNet** for **Connection type**. 6. For the **Second virtual network gateway**, select **GW-Onprem**. 7. For **Shared key (PSK)**, type **AzureA1b2C3**.
firewall Web Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/web-categories.md
Web categories lets administrators allow or deny user access to web site categor
|Professional networking | Sites that enable professional networking for online communities. | |Search engines + portals |Sites enabling the searching of the Web, newsgroups, images, directories, and other online content. Includes portal and directory sites such as white/yellow pages. | |Translators | Sites that translate Web pages or phrases from one language to another. These sites bypass the proxy server, presenting the risk that unauthorized content may be accessed, similar to using an anonymizer. |
-|File repository | Web pages including collections of shareware, freeware, open source, and other software downloads. |
+|Web repository + storage | Web pages including collections of shareware, freeware, open source, and other software downloads. |
|Web-based email | Sites that enable users to send and receive email through a web accessible email account. | | | |
frontdoor Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/private-link.md
Azure Front Door private link is available in the following regions:
## Limitations
-Origin support for direct private end point connectivity is limited to Storage (Azure Blobs), App Services and internal load balancers.
+Origin support for direct private endpoint connectivity is currently limited to:
+* Storage (Azure Blobs)
+* App Services
+* Internal load balancers
The Azure Front Door Private Link feature is region agnostic, but for the best latency, you should always pick the Azure region closest to your origin when you enable an Azure Front Door Private Link endpoint.
frontdoor Quickstart Create Front Door https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door.md
documentationcenter: na
Previously updated : 06/08/2022 Last updated : 10/12/2022
If you don't already have a web app, use the following steps to set up example w
1. Select **Review + create**, review the **Summary**, and then select **Create**. It might take several minutes for the deployment to complete.
- :::image type="content" source="media/quickstart-create-front-door/create-web-app.png" alt-text="Review summary for web app." lightbox="./media/quickstart-create-front-door/create-web-app.png":::
+ :::image type="content" source="media/quickstart-create-front-door/create-web-app.png" alt-text="Screenshot showing Create Web App page." lightbox="./media/quickstart-create-front-door/create-web-app.png":::
After your deployment is complete, create a second web app. Use the same procedure with the same values, except for the following values:
Finally, add a routing rule. A routing rule maps your frontend host to the backe
1. In **Add a rule**, for **Name**, enter *LocationRule*. Accept all the default values, then select **Add** to add the routing rule.
- :::image type="content" source="media/quickstart-create-front-door/front-door-add-a-rule.png" alt-text="Add a rule to your Front Door." lightbox="./media/quickstart-create-front-door/front-door-add-a-rule.png":::
+ :::image type="content" source="media/quickstart-create-front-door/front-door-add-a-rule.png" alt-text="Screenshot showing Add a rule when creating Front Door." lightbox="./media/quickstart-create-front-door/front-door-add-a-rule.png":::
>[!WARNING] > You **must** ensure that each of the frontend hosts in your Front Door has a routing rule with a default path (`/*`) associated with it. That is, across all of your routing rules there must be at least one routing rule for each of your frontend hosts defined at the default path (`/*`). Failing to do so may result in your end-user traffic not getting routed correctly.
governance Machine Configuration Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-custom.md
the `.zip` file for the content package, the configuration name in the MOF file,
and the guest assignment name in the Azure Resource Manager template, must be the same.
+## Running commands in Windows PowerShell
+
+You can run Windows modules in PowerShell by using the following pattern in your DSC resources. The pattern temporarily sets the `PSModulePath` so that Windows PowerShell, rather than PowerShell Core, is used to discover the required modules that are available in Windows PowerShell. This sample is a snippet from the [Secure Web Server](https://github.com/Azure/azure-policy/blob/master/samples/GuestConfiguration/package-samples/resource-modules/SecureProtocolWebServer/DSCResources/SecureWebServer/SecureWebServer.psm1#L253) built-in DSC resource.
+
+This pattern temporarily sets the PowerShell execution path to run from full PowerShell and discovers the required cmdlet, which in this case is `Get-WindowsFeature`. The output of the command is returned and then standardized for compatibility requirements. Once the cmdlet has been executed, the `PSModulePath` is set back to the original path.
+
+```powershell
+
+ # This command needs to be run through full PowerShell rather than through PowerShell Core which is what the Policy engine runs
+ $null = Invoke-Command -ScriptBlock {
+ param ($fileName)
+ $fullPowerShellExePath = "$env:SystemRoot\System32\WindowsPowershell\v1.0\powershell.exe"
+ $oldPSModulePath = $env:PSModulePath
+ try
+ {
+ # Set env variable to full powershell module path so that powershell can discover Get-WindowsFeature cmdlet.
+ $env:PSModulePath = "$env:SystemRoot\System32\WindowsPowershell\v1.0\Modules"
+ &$fullPowerShellExePath -command "if (Get-Command 'Get-WindowsFeature' -errorAction SilentlyContinue){Get-WindowsFeature -Name Web-Server | ConvertTo-Json | Out-File $fileName} else { Add-Content -Path $fileName -Value 'NotServer'}"
+ }
+ finally
+ {
+ $env:PSModulePath = $oldPSModulePath
+ }
+ }
+
+```
+ ## Common DSC features not available during machine configuration public preview During public preview, machine configuration doesn't support
governance Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/manage.md
To move a subscription in PowerShell, you use the New-AzManagementGroupSubscript
New-AzManagementGroupSubscription -GroupId 'Contoso' -SubscriptionId '12345678-1234-1234-1234-123456789012' ```
-To remove the link between and subscription and the management group use the
+To remove the link between the subscription and the management group use the
Remove-AzManagementGroupSubscription command. ```azurepowershell-interactive
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/assignment-structure.md
Title: Details of the policy assignment structure description: Describes the policy assignment definition used by Azure Policy to relate policy definitions and parameters to resources for evaluation. Previously updated : 09/21/2022 Last updated : 10/03/2022
You use JavaScript Object Notation (JSON) to create a policy assignment. The pol
- non-compliance messages - parameters - identity
+- resource selectors (preview)
+- overrides (preview)
For example, the following JSON shows a policy assignment in _DoNotEnforce_ mode with dynamic parameters:
parameters:
"value": "-LC" } }
+ "identity": {
+ "type": "SystemAssigned"
+ }
+ "resourceSelectors": []
+ "overrides": []
} } ```
_common_ properties used by Azure Policy. Each `metadata` property has a limit o
} ``` +
+## Resource selectors (preview)
+
+The optional **resourceSelectors** property facilitates safe deployment practices (SDP) by enabling you to gradually roll
+out policy assignments based on factors like resource location, resource type, or whether a resource has a location. When resource selectors are used, Azure Policy evaluates only the resources that are applicable to the specifications made in the resource selectors. Resource selectors can also be used to narrow down the scope of [exemptions](exemption-structure.md) in the same way.
+
+In the following example scenario, the new policy assignment will be evaluated only if the resource's location is
+either **East US** or **West US**.
+
+```json
+{
+ "properties": {
+ "policyDefinitionId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyDefinitions/ResourceLimit",
+ "definitionVersion": "1.1",
+ "resourceSelectors": [
+ {
+ "name": "SDPRegions",
+ "selectors": [
+ {
+ "kind": "resourceLocation",
+ "in": [ "eastus", "westus" ]
+ }
+ ]
+ }
+ ]
+ },
+ "systemData": { ... },
+ "id": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/ResourceLimit",
+ "type": "Microsoft.Authorization/policyAssignments",
+ "name": "ResourceLimit"
+}
+```
+
+When you're ready to expand the evaluation scope for your policy, you just have to modify the assignment. The following example
+shows our policy assignment with two additional Azure regions added to the **SDPRegions** selector. Note that in this example, _SDP_ stands for _Safe Deployment Practice_:
+
+```json
+{
+ "properties": {
+ "policyDefinitionId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyDefinitions/ResourceLimit",
+ "definitionVersion": "1.1",
+ "resourceSelectors": [
+ {
+ "name": "SDPRegions",
+ "selectors": [
+ {
+ "kind": "resourceLocation",
+ "in": [ "eastus", "westus", "centralus", "southcentralus" ]
+ }
+ ]
+ }
+ ]
+ },
+ "systemData": { ... },
+ "id": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/ResourceLimit",
+ "type": "Microsoft.Authorization/policyAssignments",
+ "name": "ResourceLimit"
+}
+```
+
+Resource selectors have the following properties:
+- `name`: The name of the resource selector.
+- `selectors`: The factor used to determine which subset of resources applicable to the policy assignment should be evaluated for compliance.
+ - `kind`: The property of a `selector` that describes what characteristic will narrow down the set of evaluated resources. Each 'kind' can only be used once in a single resource selector. Allowed values are:
+ - `resourceLocation`: This is used to select resources based on their location. Can be used in up to 10 resource selectors. Cannot be used in the same resource selector as `resourceWithoutLocation`.
+ - `resourceType`: This is used to select resources based on their type.
+ - `resourceWithoutLocation`: This is used to select resources at the subscription level which do not have a location. Currently only supports `subscriptionLevelResources`. Cannot be used in the same resource selector as `resourceLocation`.
+ - `in`: The list of allowed values for the specified `kind`. Cannot be used with `notIn`. Can contain up to 50 values.
+ - `notIn`: The list of not-allowed values for the specified `kind`. Cannot be used with `in`. Can contain up to 50 values.
+
+A **resource selector** can contain multiple **selectors**. To be applicable to a resource selector, a resource must meet requirements specified by all its selectors. Further, multiple **resource selectors** can be specified in a single assignment. In-scope resources are evaluated when they satisfy any one of these resource selectors.
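As an illustrative sketch only (the resource type and location values below are assumptions, reusing the _ResourceLimit_ assignment from the earlier examples), a single resource selector can combine a `resourceType` selector with a `resourceLocation` selector so that only storage accounts in **East US** are evaluated:

```json
{
  "properties": {
    "policyDefinitionId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyDefinitions/ResourceLimit",
    "definitionVersion": "1.1",
    "resourceSelectors": [
      {
        "name": "SDPEastUSStorage",
        "selectors": [
          {
            "kind": "resourceType",
            "in": [ "Microsoft.Storage/storageAccounts" ]
          },
          {
            "kind": "resourceLocation",
            "in": [ "eastus" ]
          }
        ]
      }
    ]
  },
  "id": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/ResourceLimit",
  "type": "Microsoft.Authorization/policyAssignments",
  "name": "ResourceLimit"
}
```

Because both selectors live in the same resource selector, a resource must satisfy both of them to be evaluated; splitting them into two separate resource selectors would instead evaluate resources that satisfy either one.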
+
+## Overrides (preview)
+
+The optional **overrides** property allows you to change the effect of a policy definition without modifying
+the underlying policy definition or using a parameterized effect in the policy definition.
+
+The most common use case for overrides is policy initiatives with a large number of associated policy definitions. In this situation, managing multiple policy effects can consume significant administrative effort, especially when the effect needs to be updated from time to time. Overrides can be used to simultaneously update the effects of multiple policy definitions within an initiative.
+
+Let's take a look at an example. Imagine you have a policy initiative named _CostManagement_ that includes a custom policy definition with `policyDefinitionReferenceId` _corpVMSizePolicy_ and a single effect of `audit`. Suppose you want to assign the _CostManagement_ initiative, but do not yet want to see compliance reported for this policy. This policy's `audit` effect can be replaced by `disabled` through an override on the initiative assignment, as shown below:
+
+```json
+{
+ "properties": {
+ "policyDefinitionId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policySetDefinitions/CostManagement",
+ "overrides": [
+ {
+ "kind": "policyEffect",
+ "value": "disabled",
+ "selectors": [
+ {
+ "kind": "policyDefinitionReferenceId",
+ "in": [ "corpVMSizePolicy" ]
+ }
+ ]
+ }
+ ]
+ },
+ "systemData": { ... },
+ "id": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/CostManagement",
+ "type": "Microsoft.Authorization/policyAssignments",
+ "name": "CostManagement"
+}
+```
+
+Note that one override can be used to replace the effect of many policies by specifying multiple values in the `policyDefinitionReferenceId` array. A single override can be used for up to 50 `policyDefinitionReferenceId` values, and a single policy assignment can contain up to 10 overrides, evaluated in the order in which they are specified. Before the assignment is created, the effect chosen in the override is validated against the policy rule and parameter allowed value list, in cases where the effect is [parameterized](definition-structure.md#parameters).
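As a hypothetical sketch (the additional reference IDs below are invented for illustration), a single override could disable several policies within the _CostManagement_ initiative at once by listing their reference IDs together:

```json
{
  "properties": {
    "policyDefinitionId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policySetDefinitions/CostManagement",
    "overrides": [
      {
        "kind": "policyEffect",
        "value": "disabled",
        "selectors": [
          {
            "kind": "policyDefinitionReferenceId",
            "in": [ "corpVMSizePolicy", "corpDiskSkuPolicy", "corpVMImagePolicy" ]
          }
        ]
      }
    ]
  },
  "id": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/CostManagement",
  "type": "Microsoft.Authorization/policyAssignments",
  "name": "CostManagement"
}
```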
+ ## Enforcement mode The **enforcementMode** property provides customers the ability to test the outcome of a policy on
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
Title: Details of the policy definition structure description: Describes how policy definitions are used to establish conventions for Azure resources in your organization. Previously updated : 06/27/2022 Last updated : 08/29/2022
_common_ properties used by Azure Policy and in built-ins. Each `metadata` prope
## Parameters
-Parameters help simplify your policy management by reducing the number of policy definitions. Think
+Parameters help simplify your policy management by reducing the number of policy definitions. Think
of parameters like the fields on a form - `name`, `address`, `city`, `state`. These parameters always stay the same, however their values change based on the individual filling out the form. Parameters work the same way when building policies. By including parameters in a policy definition,
governance NZ_ISM_Restricted_V3_5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/NZ_ISM_Restricted_v3_5.md
Title: Regulatory Compliance details for NZ ISM Restricted v3.5 description: Details of the NZ ISM Restricted v3.5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance RBI_ITF_Banks_V2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/RBI_ITF_Banks_v2016.md
+
+ Title: Regulatory Compliance details for Reserve Bank of India IT Framework for Banks v2016
+description: Details of the Reserve Bank of India IT Framework for Banks v2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Last updated : 10/10/2022+++
+# Details of the Reserve Bank of India IT Framework for Banks v2016 Regulatory Compliance built-in initiative
+
+The following article details how the Azure Policy Regulatory Compliance built-in initiative
+definition maps to **compliance domains** and **controls** in Reserve Bank of India IT Framework for Banks v2016.
+For more information about this compliance standard, see
+[Reserve Bank of India IT Framework for Banks v2016](https://rbidocs.rbi.org.in/rdocs/notification/PDFs/NT41893F697BC1D57443BB76AFC7AB56272EB.PDF). To understand
+_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and
+[Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
+
+The following mappings are to the **Reserve Bank of India IT Framework for Banks v2016** controls. Use the
+navigation on the right to jump directly to a specific **compliance domain**. Many of the controls
+are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete
+initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
+Then, find and select the **[Preview]: Reserve Bank of India - IT Framework for Banks** Regulatory Compliance built-in
+initiative definition.
+
+> [!IMPORTANT]
+> Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
+> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
+> control; however, there often is not a one-to-one or complete match between a control and one or
+> more policies. As such, **Compliant** in Azure Policy refers only to the policy definitions
+> themselves; this doesn't ensure you're fully compliant with all requirements of a control. In
+> addition, the compliance standard includes controls that aren't addressed by any Azure Policy
+> definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your
+> overall compliance status. The associations between compliance domains, controls, and Azure Policy
+> definitions for this compliance standard may change over time. To view the change history, see the
+> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/RBI_ITF_Banks_v2016.json).
+
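As background on how a Regulatory Compliance initiative encodes these mappings, the following is a minimal, hypothetical excerpt of a policy set definition (not the actual RBI initiative JSON): controls are declared as `policyDefinitionGroups`, and each member policy lists the controls it supports in `groupNames`. The policy ID below is the MFA-on-owner-permissions definition that appears under control 9.1; the group name and reference ID are illustrative.

```json
{
  "properties": {
    "displayName": "[Preview]: Reserve Bank of India - IT Framework for Banks",
    "policyType": "BuiltIn",
    "policyDefinitionGroups": [
      {
        "name": "Authentication_Framework_For_Customers_9.1",
        "displayName": "Authentication Framework For Customers-9.1"
      }
    ],
    "policyDefinitions": [
      {
        "policyDefinitionReferenceId": "mfaOnOwnerAccounts",
        "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/aa633080-8b72-40c4-a2d7-d00c03e80bed",
        "groupNames": [
          "Authentication_Framework_For_Customers_9.1"
        ]
      }
    ]
  }
}
```

The per-control tables that follow mirror this grouping: each control section lists the definitions whose `groupNames` reference that control.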
+## Authentication Framework For Customers
+
+### Authentication Framework For Customers-9.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[MFA should be enabled for accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
+|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
+
+### Authentication Framework For Customers-9.3
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) |
+|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) |
+|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) |
+|[MFA should be enabled for accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
+|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
+
+## Network Management And Security
+
+### Network Inventory-4.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
+|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) |
+|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) |
+|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
+|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) |
+|[Network Watcher flow logs should have traffic analytics enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f080164-9f4d-497e-9db6-416dc9f7b48a) |Traffic analytics analyzes Network Watcher network security group flow logs to provide insights into traffic flow in your Azure cloud. It can be used to visualize network activity across your Azure subscriptions and identify hot spots, identify security threats, understand traffic flow patterns, pinpoint network misconfigurations and more. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_TrafficAnalytics_Audit.json) |
+
+### Network Device Configuration Management-4.3
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
+|[\[Preview\]: vTPM should be enabled on supported virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c30f9cd-b84c-49cc-aa2c-9288447cc3b3) |Enable virtual TPM device on supported virtual machines to facilitate Measured Boot and other OS security features that require a TPM. Once enabled, vTPM can be used to attest boot integrity. This assessment only applies to trusted launch enabled virtual machines. |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableVTPM_Audit.json) |
+|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
+|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
+|[App Service apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bb220d9-2698-4ee4-8404-b9c30c9df609) |Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_ClientCert.json) |
+|[App Service apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
+|[Azure firewall policy should enable TLS inspection within application rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa58ac66d-92cb-409c-94b8-8e48d7a96596) |Enabling TLS inspection is recommended for all application rules to detect, alert, and mitigate malicious activity in HTTPS. To learn more about TLS inspection with Azure Firewall, visit [https://aka.ms/fw-tlsinspect](https://aka.ms/fw-tlsinspect) |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ACAT_FirewallPolicy_EnbaleTlsForAllAppRules_Audit.json) |
+|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) |
+|[Function apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
+|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
+|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) |
+|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) |
+|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
+|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
+|[Web Application Firewall (WAF) should enable all firewall rules for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F632d3993-e2c0-44ea-a7db-2eca131f356d) |Enabling all Web Application Firewall (WAF) rules strengthens your application security and protects your web applications against common vulnerabilities. To learn more about Web Application Firewall (WAF) with Application Gateway, visit [https://aka.ms/waf-ag](https://aka.ms/waf-ag) |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ACAT_WAF_AppGatewayAllRulesEnabled_Audit.json) |
+
+### Anomaly Detection-4.7
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
+|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
+|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
+|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) |
+|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
+|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) |
+|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) |
+|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
+|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
+
+### Security Operation Centre-4.9
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
+|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) |
+|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) |
+|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6) |Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario level monitoring enables you to diagnose problems at an end to end network level view. It is required to have a network watcher resource group to be created in every region where a virtual network is present. An alert is enabled if a network watcher resource group is not available in a particular region. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_Enabled_Audit.json) |
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
+
+### Perimeter Protection And Detection-4.10
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
+|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
+|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) |
+|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
+|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) |
+|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) |
+|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
+|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
+
+## Preventing Execution Of Unauthorised Software
+
+### Software Inventory-2.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
+|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) |
+
+### Authorised Software Installation-2.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
+|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) |
+
+### Security Update Management-2.3
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[App Service apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for web apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_java_Latest.json) |
+|[App Service apps that use PHP should use the latest 'PHP version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3) |Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. Using the latest PHP version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_PHP_Latest.json) |
+|[App Service apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7008174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_python_Latest.json) |
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Function apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_java_Latest.json) |
+|[Function apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps since Python is not supported on Windows apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_python_Latest.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) |
+|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
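+
+These AuditIfNotExists definitions only surface findings; remediation is still up to you. A minimal sketch of listing resources that are non-compliant with the "System updates should be installed on your machines" definition from the table above, assuming a corresponding assignment already exists in the subscription:
+
+```azurecli
+# List non-compliant resources for the built-in system-updates definition
+# (GUID from the table above). Results depend on existing assignments.
+az policy state list \
+  --filter "policyDefinitionName eq '86b3d65f-7626-441e-b690-81a8b71cff60' and complianceState eq 'NonCompliant'" \
+  --query '[].{resource: resourceId, state: complianceState}' \
+  -o table
+```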
+
+## Patch/Vulnerability & Change Management
+
+### Patch/Vulnerability & Change Management-7.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[App Service apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for web apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_java_Latest.json) |
+|[App Service apps that use PHP should use the latest 'PHP version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3) |Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. Using the latest PHP version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_PHP_Latest.json) |
+|[App Service apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7008174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_python_Latest.json) |
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Function apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_java_Latest.json) |
+|[Function apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps since Python is not supported on Windows apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_python_Latest.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) |
+|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
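+
+Compliance results for the vulnerability assessment policies above refresh on the standard evaluation cycle. As a sketch, assuming you want fresher results for a particular resource group (placeholder name), you can start an on-demand evaluation with the Azure CLI:
+
+```azurecli
+# Trigger an on-demand policy compliance evaluation for one resource group.
+# <resource-group> is a placeholder; omit --resource-group to evaluate the
+# whole subscription. The scan runs asynchronously.
+az policy state trigger-scan --resource-group '<resource-group>' --no-wait
+```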
+
+### Patch/Vulnerability & Change Management-7.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[App Service apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for web apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_java_Latest.json) |
+|[App Service apps that use PHP should use the latest 'PHP version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3) |Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. Using the latest PHP version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_PHP_Latest.json) |
+|[App Service apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7008174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_python_Latest.json) |
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Function apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_java_Latest.json) |
+|[Function apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps since Python is not supported on Windows apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_python_Latest.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) |
+|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
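+
+Because the same definitions map to many controls, it's usually simpler to assign the regulatory compliance initiative (policy set) that groups them than to assign each definition individually. A minimal sketch with placeholder names, since the initiative name isn't shown in these tables:
+
+```azurecli
+# Find the initiative (policy set definition) for this compliance standard,
+# then assign it at subscription scope. '<initiative-name>' and
+# '<subscription-id>' are placeholders.
+az policy set-definition list \
+  --query '[].{name: name, displayName: displayName}' -o table
+
+az policy assignment create \
+  --name 'compliance-initiative' \
+  --policy-set-definition '<initiative-name>' \
+  --scope '/subscriptions/<subscription-id>'
+```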
+
+### Patch/Vulnerability & Change Management-7.6
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[App Service apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for web apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_java_Latest.json) |
+|[App Service apps that use PHP should use the latest 'PHP version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3) |Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. Using the latest PHP version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_PHP_Latest.json) |
+|[App Service apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7008174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_python_Latest.json) |
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
+|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) |
+|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) |
+|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Function apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_java_Latest.json) |
+|[Function apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps since Python is not supported on Windows apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_python_Latest.json) |
+|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) |
+|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
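+
+Several rows in this table audit whether specific Microsoft Defender (formerly Azure Defender) plans are enabled on the subscription. A minimal sketch of turning one of those plans on with the Azure CLI, assuming the target subscription is already selected; note that Standard-tier plans incur charges:
+
+```azurecli
+# Enable the Defender for servers plan referenced by the "Azure Defender for
+# servers should be enabled" policy above. Other plan names include
+# 'AppServices', 'SqlServers', 'KeyVaults', and 'StorageAccounts'.
+az security pricing create --name 'VirtualMachines' --tier 'Standard'
+
+# Confirm the current tier for the plan.
+az security pricing show --name 'VirtualMachines' --query 'pricingTier' -o tsv
+```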
+
+### Patch/Vulnerability & Change Management-7.7
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Private endpoint should be configured for Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) |Private link provides a way to connect Key Vault to your Azure resources without sending traffic over the public internet. Private link provides defense in depth protection against data exfiltration. |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultPrivateEndpointEnabled_Audit.json) |
+|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
+|[API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef619a2c-cc4d-4d03-b2ba-8c94a834d85b) |Azure Virtual Network deployment provides enhanced security, isolation and allows you to place your API Management service in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enables access to your backend services within the network and/or on-premises. The developer portal and API gateway, can be configured to be accessible either from the Internet or only within the virtual network. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_VNETEnabled_Audit.json) |
+|[App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca610c1d-041c-4332-9d88-7ed3094967c7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_Audit.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) |
+|[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) |
+|[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) |
+|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) |
+|[Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf35e2a4-ef96-44e7-a9ae-853dd97032c4) |Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from the Internet. 2. Enable Azure Spring Cloud to interact with systems in either on-premises data centers or Azure services in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud. |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Platform/Spring_VNETEnabled_Audit.json) |
+|[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink](https://aka.ms/acr/privatelink), [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
+|[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
+|[Private endpoint connections on Azure SQL Database should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7698e800-9299-47a6-b3b6-5a0fee576eed) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure SQL Database. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PrivateEndpoint_Audit.json) |
+|[Private endpoint should be enabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a1302fb-a631-4106-9753-f3d494733990) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MariaDB. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_EnablePrivateEndPoint_Audit.json) |
+|[Private endpoint should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7595c971-233d-4bcf-bd18-596129188c49) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MySQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnablePrivateEndPoint_Audit.json) |
+|[Private endpoint should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0564d078-92f5-4f97-8398-b9f58a51f70b) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnablePrivateEndPoint_Audit.json) |
+|[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) |
+|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
+|[Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) |
+|[Storage accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6edd7eda-6dd8-40f7-810d-67160c639cd9) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your storage account, data leakage risks are reduced. Learn more about private links at - [https://aka.ms/azureprivatelinkoverview](https://aka.ms/azureprivatelinkoverview) |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountPrivateEndpointEnabled_Audit.json) |
+|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](../../../virtual-machines/linux/image-builder-networking.md#deploy-using-an-existing-vnet). |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) |
+
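+Each entry above maps to a built-in Azure Policy definition that can be assigned by the GUID at the end of its Azure portal link. As a minimal sketch, assuming the Azure CLI and a placeholder assignment name (not part of this baseline), the following assigns the "Storage accounts should restrict network access" definition from the table at subscription scope:
+
+```azurecli
+# Sketch only: the assignment name is a placeholder.
+subscriptionId=$(az account show --query id --output tsv)
+
+# 34c877ad-507e-4c82-993e-3452a6e0ad3c is the definition GUID shown in the
+# table for "Storage accounts should restrict network access".
+az policy assignment create \
+  --name 'restrict-storage-network-access' \
+  --display-name 'Storage accounts should restrict network access' \
+  --scope "/subscriptions/$subscriptionId" \
+  --policy '34c877ad-507e-4c82-993e-3452a6e0ad3c'
+```
+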
+## Maintenance, Monitoring, And Analysis Of Audit Logs
+
+### Maintenance, Monitoring, And Analysis Of Audit Logs-16.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile if it does not export activities from all Azure-supported regions, including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) |
+|[Flow logs should be configured for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audit network security groups to verify whether flow logs are configured. Enabling flow logs allows you to log information about IP traffic flowing through a network security group. The data can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) |
+|[Flow logs should be enabled for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27960feb-a23c-4577-8d36-ef8b5f35e0be) |Audit flow log resources to verify whether the flow log status is enabled. Enabling flow logs allows you to log information about IP traffic flowing through a network security group. The data can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcherFlowLog_Enabled_Audit.json) |
+|[Log duration should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e8f3) |This policy helps audit any PostgreSQL databases in your environment without log_duration setting enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDuration_Audit.json) |
+|[Network Watcher flow logs should have traffic analytics enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f080164-9f4d-497e-9db6-416dc9f7b48a) |Traffic analytics analyzes Network Watcher network security group flow logs to provide insights into traffic flow in your Azure cloud. It can be used to visualize network activity across your Azure subscriptions and identify hot spots, identify security threats, understand traffic flow patterns, pinpoint network misconfigurations and more. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_TrafficAnalytics_Audit.json) |
+
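+Once audit policies such as these are assigned, their results surface as Azure Policy compliance states. A minimal sketch, assuming the `az policy state` commands in a recent Azure CLI and a hypothetical assignment named `collect-activity-logs`:
+
+```azurecli
+# Summarize compliance for one assignment (the name is a placeholder).
+az policy state summarize --policy-assignment 'collect-activity-logs'
+
+# List the resource IDs that are currently non-compliant for that assignment.
+az policy state list \
+  --policy-assignment 'collect-activity-logs' \
+  --filter "complianceState eq 'NonCompliant'" \
+  --query "[].resourceId" \
+  --output tsv
+```
+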
+### Maintenance, Monitoring, And Analysis Of Audit Logs-16.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[\[Preview\]: Log Analytics extension should be installed on your Linux Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F842c54e8-c2f9-4d79-ae8d-38d8b8019373) |This policy audits Linux Azure Arc machines if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Arc_Linux_LogAnalytics_Audit.json) |
+|[\[Preview\]: Log Analytics extension should be installed on your Windows Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd69b1763-b96d-40b8-a2d9-ca31e9fd0d3e) |This policy audits Windows Azure Arc machines if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Arc_Windows_LogAnalytics_Audit.json) |
+|[Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a4e592a-6a6e-44a5-9814-e36264ca96e7) |This policy ensures that a log profile collects logs for categories 'write,' 'delete,' and 'action' |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllCategories.json) |
+|[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy checks whether a log profile is enabled for exporting activity logs. It audits subscriptions that have no log profile created to export the logs either to a storage account or to an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) |
+
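+The two log profile policies above evaluate the subscription's Azure Monitor log profile. As a minimal sketch, assuming the Azure CLI (and that the profile's `categories` and `locations` properties are returned at the top level, as they usually are), you can review what the current profile collects before the policies flag it:
+
+```azurecli
+# Show each log profile's collected categories and regions so they can be
+# compared against 'write', 'delete', 'action' and the full region list.
+az monitor log-profiles list \
+  --query "[].{name:name, categories:categories, locations:locations}" \
+  --output json
+```
+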
+### Maintenance, Monitoring, And Analysis Of Audit Logs-16.3
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) |
+|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) |
+|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile if it does not export activities from all Azure-supported regions, including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) |
+|[Log Analytics agent should be installed on your virtual machine scale sets for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3a6ea0c-e018-4933-9ef0-5aaa1501449b) |Security Center collects data from your Azure virtual machines (VMs) to monitor for security vulnerabilities and threats. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLaAgentOnVmss.json) |
+|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Virtual Machine Scale Sets should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c1b1214-f927-48bf-8882-84f0af6588b1) |It is recommended to enable logs so that the activity trail can be recreated when investigations are required in the event of an incident or a compromise. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ServiceFabric_and_VMSS_AuditVMSSDiagnostics.json) |
+
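+The Effect(s) column reflects the values each definition accepts for its effect parameter. As a minimal sketch, assuming the Azure CLI and that the definition exposes its effect through a parameter named `effect` (as most of these built-ins do), the following inspects the "Resource logs in Key Vault should be enabled" definition listed above:
+
+```azurecli
+# cf820ca0-f99e-4f3e-84fb-66e913812d21 is the definition GUID shown in the
+# table for "Resource logs in Key Vault should be enabled".
+az policy definition show \
+  --name 'cf820ca0-f99e-4f3e-84fb-66e913812d21' \
+  --query "{displayName:displayName, allowedEffects:parameters.effect.allowedValues, defaultEffect:parameters.effect.defaultValue}" \
+  --output json
+```
+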
+### Patch/Vulnerability & Change Management-7.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[App Service apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for web apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_java_Latest.json) |
+|[App Service apps that use PHP should use the latest 'PHP version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3) |Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. Using the latest PHP version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_PHP_Latest.json) |
+|[App Service apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7008174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_python_Latest.json) |
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Function apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_java_Latest.json) |
+|[Function apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps since Python is not supported on Windows apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_python_Latest.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) |
+|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+
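+The effect applied by an assignment can be chosen from the values in the Effect(s) column at assignment time, provided the definition exposes an `effect` parameter (an assumption that holds for most of the definitions above). A minimal sketch with placeholder names:
+
+```azurecli
+# ef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9 is the definition GUID shown in the
+# table for "Vulnerability assessment should be enabled on your SQL servers".
+az policy assignment create \
+  --name 'va-on-sql-servers' \
+  --scope "/subscriptions/$(az account show --query id --output tsv)" \
+  --policy 'ef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9' \
+  --params '{"effect": {"value": "AuditIfNotExists"}}'
+```
+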
+### Patch/Vulnerability & Change Management-7.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[App Service apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for web apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_java_Latest.json) |
+|[App Service apps that use PHP should use the latest 'PHP version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3) |Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. Using the latest PHP version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_PHP_Latest.json) |
+|[App Service apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7008174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_python_Latest.json) |
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Function apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_java_Latest.json) |
+|[Function apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps since Python is not supported on Windows apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_python_Latest.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) |
+|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+
+### Patch/Vulnerability & Change Management-7.6
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[App Service apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for web apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_java_Latest.json) |
+|[App Service apps that use PHP should use the latest 'PHP version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3) |Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. Using the latest PHP version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_PHP_Latest.json) |
+|[App Service apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7008174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_python_Latest.json) |
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
+|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) |
+|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) |
+|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Function apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_java_Latest.json) |
+|[Function apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps since Python is not supported on Windows apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_python_Latest.json) |
+|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) |
+|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+
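+The definitions above only audit whether the corresponding Microsoft Defender (formerly Azure Defender) plans are enabled and whether findings are resolved. As a minimal, non-authoritative sketch, the following Azure CLI commands enable several of the audited plans at subscription scope; the plan names shown and the assumption that you hold sufficient rights (for example, Security Admin) on the subscription are placeholders to adapt to your environment.
+
+```azurecli
+# Minimal sketch: enable a few of the Microsoft Defender plans audited above
+# on the current subscription (plan names follow Microsoft.Security/pricings).
+az security pricing create --name VirtualMachines --tier standard
+az security pricing create --name StorageAccounts --tier standard
+az security pricing create --name KeyVaults --tier standard
+az security pricing create --name Dns --tier standard
+az security pricing create --name Arm --tier standard
+
+# Verify the resulting tier for each plan.
+az security pricing list --output table
+```
+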
+### Patch/Vulnerability & Change Management-7.7
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Private endpoint should be configured for Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) |Private link provides a way to connect Key Vault to your Azure resources without sending traffic over the public internet. Private link provides defense in depth protection against data exfiltration. |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultPrivateEndpointEnabled_Audit.json) |
+|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
+|[API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef619a2c-cc4d-4d03-b2ba-8c94a834d85b) |Azure Virtual Network deployment provides enhanced security and isolation, and allows you to place your API Management service in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enables access to your backend services within the network and/or on-premises. The developer portal and API gateway can be configured to be accessible either from the Internet or only within the virtual network. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_VNETEnabled_Audit.json) |
+|[App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca610c1d-041c-4332-9d88-7ed3094967c7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_Audit.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) |
+|[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) |
+|[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) |
+|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) |
+|[Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf35e2a4-ef96-44e7-a9ae-853dd97032c4) |Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from the Internet. 2. Enable Azure Spring Cloud to interact with systems in either on-premises data centers or Azure services in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud. |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Platform/Spring_VNETEnabled_Audit.json) |
+|[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink](https://aka.ms/acr/privatelink), [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network), and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
+|[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
+|[Private endpoint connections on Azure SQL Database should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7698e800-9299-47a6-b3b6-5a0fee576eed) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure SQL Database. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PrivateEndpoint_Audit.json) |
+|[Private endpoint should be enabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a1302fb-a631-4106-9753-f3d494733990) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MariaDB. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_EnablePrivateEndPoint_Audit.json) |
+|[Private endpoint should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7595c971-233d-4bcf-bd18-596129188c49) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MySQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnablePrivateEndPoint_Audit.json) |
+|[Private endpoint should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0564d078-92f5-4f97-8398-b9f58a51f70b) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnablePrivateEndPoint_Audit.json) |
+|[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) |
+|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
+|[Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) |
+|[Storage accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6edd7eda-6dd8-40f7-810d-67160c639cd9) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your storage account, data leakage risks are reduced. Learn more about private links at - [https://aka.ms/azureprivatelinkoverview](https://aka.ms/azureprivatelinkoverview) |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountPrivateEndpointEnabled_Audit.json) |
+|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](../../../virtual-machines/linux/image-builder-networking.md#deploy-using-an-existing-vnet). |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) |
+
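+Most of the definitions in this table audit network exposure: private link or private endpoints, firewalls, and public network access. As a rough, hedged illustration of the kind of remediation they look for, the following Azure CLI sketch locks down a storage account and a key vault; the resource names (`mystorage`, `mykeyvault`, `myrg`, `myvnet`, `mysubnet`) are placeholders, not values from this article.
+
+```azurecli
+# Deny public access to a storage account by default and allow only a
+# specific virtual network subnet (placeholder names).
+az storage account update \
+  --name mystorage --resource-group myrg \
+  --default-action Deny
+az storage account network-rule add \
+  --account-name mystorage --resource-group myrg \
+  --vnet-name myvnet --subnet mysubnet
+
+# Enable the Key Vault firewall so only allowed networks and trusted Azure
+# services can reach the vault.
+az keyvault update \
+  --name mykeyvault --resource-group myrg \
+  --default-action Deny --bypass AzureServices
+```
+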
+## Secure Configuration
+
+### Secure Configuration-5.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
+|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Microsoft Defender for Azure Cosmos DB should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fadbe85b5-83e6-4350-ab58-bf3a4f736e5e) |Microsoft Defender for Azure Cosmos DB is an Azure-native layer of security that detects attempts to exploit databases in your Azure Cosmos DB accounts. Defender for Azure Cosmos DB detects potential SQL injections, known bad actors based on Microsoft Threat Intelligence, suspicious access patterns, and potential exploitations of your database through compromised identities or malicious insiders. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_Azure_Cosmos_DB_Audit.json) |
+|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |
+
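+These built-in definitions take effect only once they are assigned. As a minimal sketch, the following Azure CLI commands assign one of the definitions listed above (Windows Defender Exploit Guard, using the definition ID shown in the table) at subscription scope and then summarize compliance; the assignment name and the `<subscription-id>` placeholder are assumptions, not prescribed values.
+
+```azurecli
+# Assign the built-in definition at subscription scope (definition ID taken
+# from the table above; scope and assignment name are placeholders).
+az policy assignment create \
+  --name audit-exploit-guard \
+  --display-name "Windows Defender Exploit Guard should be enabled on your machines" \
+  --policy bed48b13-6647-468e-aa2f-1af1d3f4dd40 \
+  --scope /subscriptions/<subscription-id>
+
+# Summarize compliance results for the new assignment.
+az policy state summarize --policy-assignment audit-exploit-guard
+```
+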
+### Secure Configuration-5.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Hotpatch should be enabled for Windows Server Azure Edition VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d02d2f7-e38b-4bdc-96f3-adc0a8726abc) |Minimize reboots and install updates quickly with hotpatch. Learn more at [https://docs.microsoft.com/azure/automanage/automanage-hotpatch](../../../automanage/automanage-hotpatch.md) |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automanage/HotpatchShouldBeEnabledforWindowsServerAzureEditionVMs.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+
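+Hotpatch is available only on supported Windows Server Azure Edition images and is enabled when the VM is created (or through a later update to the VM's patch settings). The sketch below assumes a current Azure CLI that supports the `--enable-hotpatching` and `--patch-mode` flags and uses an example image URN; verify both against the hotpatch documentation linked in the table before relying on them.
+
+```azurecli
+# Sketch: create a Windows Server Azure Edition VM with hotpatch enabled
+# (resource names, admin credentials, and the image URN are placeholders).
+az vm create \
+  --resource-group myrg --name myvm \
+  --image MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest \
+  --admin-username azureuser \
+  --enable-hotpatching true \
+  --patch-mode AutomaticByPlatform
+```
+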
+## Secure Mail And Messaging Systems
+
+### Secure Mail And Messaging Systems-10.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
+|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
+|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
+|[App Service apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_WebApp_Audit.json) |
+|[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Upgrade to the latest TLS version. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
+|[Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe802a67a-daf5-4436-9ea6-f6d821dd0c5d) |Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableSSL_Audit.json) |
+|[Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd158790f-bfb0-486c-8631-2dc6b4e8e6af) |Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableSSL_Audit.json) |
+|[Function apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
+|[Function apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) |
+|[Function apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Upgrade to the latest TLS version. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
+|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
+|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) |
+|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) |
+|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
+|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
+|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
+
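+The definitions in this table largely audit encryption in transit (HTTPS, TLS version, FTPS, SSL) and network restrictions. A minimal sketch of the corresponding remediation for a few of the audited resource types follows; all resource names are placeholders and parameter availability may vary by CLI version.
+
+```azurecli
+# App Service: require HTTPS, the latest TLS version, and FTPS-only deployments.
+az webapp update --name mywebapp --resource-group myrg --https-only true
+az webapp config set --name mywebapp --resource-group myrg \
+  --min-tls-version 1.2 --ftps-state FtpsOnly
+
+# Storage: require secure transfer (HTTPS only).
+az storage account update --name mystorage --resource-group myrg --https-only true
+
+# Azure Database for MySQL (single server): enforce SSL connections.
+az mysql server update --name mymysql --resource-group myrg --ssl-enforcement Enabled
+```
+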
+### Secure Mail And Messaging Systems-10.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
+|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
+|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
+|[App Service apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_WebApp_Audit.json) |
+|[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Upgrade to the latest TLS version. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
+|[Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe802a67a-daf5-4436-9ea6-f6d821dd0c5d) |Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableSSL_Audit.json) |
+|[Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd158790f-bfb0-486c-8631-2dc6b4e8e6af) |Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableSSL_Audit.json) |
+|[Function apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
+|[Function apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) |
+|[Function apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Upgrade to the latest TLS version. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
+|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
+|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) |
+|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) |
+|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
+|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
+|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
+
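+As an illustration of how a resource owner might remediate one of these data-in-transit findings, the following minimal sketch turns on the "secure transfer required" setting that the *Secure transfer to storage accounts should be enabled* policy audits. It assumes the `azure-identity` and `azure-mgmt-storage` Python packages and uses hypothetical subscription, resource group, and account names.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.storage import StorageManagementClient
+from azure.mgmt.storage.models import StorageAccountUpdateParameters
+
+# Hypothetical identifiers; substitute your own values.
+subscription_id = "<subscription-id>"
+resource_group = "my-resource-group"
+account_name = "mystorageaccount"
+
+client = StorageManagementClient(DefaultAzureCredential(), subscription_id)
+
+# Require HTTPS for all requests so the account satisfies the
+# "Secure transfer to storage accounts should be enabled" audit.
+client.storage_accounts.update(
+    resource_group_name=resource_group,
+    account_name=account_name,
+    parameters=StorageAccountUpdateParameters(enable_https_traffic_only=True),
+)
+```
+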
+## User Access Control / Management
+
+### User Access Control / Management-8.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit usage of custom RBAC rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error-prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
+|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) |
+|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) |
+|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
+|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
+|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) |
+|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) |
+|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) |
+|[MFA should be enabled for accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
+|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
+|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+
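+These built-in definitions can be assigned at management group, subscription, or resource group scope. As a minimal sketch of what that looks like with the Azure SDK for Python (assuming the `azure-identity` and `azure-mgmt-resource` packages, a placeholder subscription ID, and an assignment name chosen here purely for illustration), the following assigns the *MFA should be enabled on accounts with owner permissions on your subscription* definition from the table above:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.resource import PolicyClient
+from azure.mgmt.resource.policy.models import PolicyAssignment
+
+subscription_id = "<subscription-id>"          # placeholder
+scope = f"/subscriptions/{subscription_id}"
+
+policy_client = PolicyClient(DefaultAzureCredential(), subscription_id)
+
+# Built-in definition GUID taken from the table above.
+definition_id = (
+    "/providers/Microsoft.Authorization/policyDefinitions/"
+    "aa633080-8b72-40c4-a2d7-d00c03e80bed"
+)
+
+assignment = policy_client.policy_assignments.create(
+    scope=scope,
+    policy_assignment_name="audit-mfa-owner-accounts",  # illustrative name
+    parameters=PolicyAssignment(
+        display_name="MFA should be enabled on accounts with owner permissions",
+        policy_definition_id=definition_id,
+    ),
+)
+print(assignment.id)
+```
+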
+### User Access Control / Management-8.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
+|[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) |
+|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
+|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
+|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) |
+
+### User Access Control / Management-8.3
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
+|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
+
+### User Access Control / Management-8.4
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[App Service apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
+|[Function apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
+|[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) |
+
+### User Access Control / Management-8.5
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
+|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
+|[Audit usage of custom RBAC rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error-prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
+|[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) |
+|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
+|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) |
+|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
+
+### User Access Control / Management-8.8
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit usage of custom RBAC rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error-prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
+|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+
+## Vulnerability Assessment And Penetration Test And Red Team Exercises
+
+### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+
+### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+
+### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.4
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+
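+Once these definitions are assigned, compliance results can be pulled programmatically. The sketch below assumes the `azure-identity` and `azure-mgmt-policyinsights` Python packages (parameter and attribute names can vary between SDK versions) and a placeholder subscription ID, and lists resources that are non-compliant with the *A vulnerability assessment solution should be enabled on your virtual machines* definition from the tables above:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.policyinsights import PolicyInsightsClient
+from azure.mgmt.policyinsights.models import QueryOptions
+
+subscription_id = "<subscription-id>"  # placeholder
+client = PolicyInsightsClient(DefaultAzureCredential(), subscription_id=subscription_id)
+
+# Restrict the query to the vulnerability-assessment definition and to
+# non-compliant resource states only.
+definition_id = (
+    "/providers/microsoft.authorization/policydefinitions/"
+    "501541f7-f7e7-4cd6-868c-4190fdad3ac9"
+)
+options = QueryOptions(
+    filter=(
+        f"policyDefinitionId eq '{definition_id}' "
+        "and complianceState eq 'NonCompliant'"
+    )
+)
+
+states = client.policy_states.list_query_results_for_subscription(
+    policy_states_resource="latest",
+    subscription_id=subscription_id,
+    query_options=options,
+)
+for state in states:
+    print(state.resource_id, state.compliance_state)
+```
+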
+## Risk Based Transaction Monitoring
+
+### Risk Based Transaction Monitoring-20.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+
+## Maintenance, Monitoring, And Analysis Of Audit Logs
+
+### Maintenance, Monitoring, And Analysis Of Audit Logs-16.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) |
+|[Flow logs should be configured for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audit for network security groups to verify if flow logs are configured. Enabling flow logs allows you to log information about IP traffic flowing through a network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) |
+|[Flow logs should be enabled for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27960feb-a23c-4577-8d36-ef8b5f35e0be) |Audit for flow log resources to verify if flow log status is enabled. Enabling flow logs allows you to log information about IP traffic flowing through a network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcherFlowLog_Enabled_Audit.json) |
+|[Log duration should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e8f3) |This policy helps audit any PostgreSQL databases in your environment without log_duration setting enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDuration_Audit.json) |
+|[Network Watcher flow logs should have traffic analytics enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f080164-9f4d-497e-9db6-416dc9f7b48a) |Traffic analytics analyzes Network Watcher network security group flow logs to provide insights into traffic flow in your Azure cloud. It can be used to visualize network activity across your Azure subscriptions and identify hot spots, identify security threats, understand traffic flow patterns, pinpoint network misconfigurations and more. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_TrafficAnalytics_Audit.json) |
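+
+The built-in definitions in this table are applied by assigning them to a scope (management group, subscription, or resource group) and referencing the definition ID that appears in each portal link. The following ARM template resource is a minimal, hypothetical sketch of such an assignment at subscription scope for the 'Azure Monitor should collect activity logs from all regions' definition; the assignment name is illustrative, and the API version should be checked against the `Microsoft.Authorization/policyAssignments` versions available in your environment.
+
+```json
+{
+  "type": "Microsoft.Authorization/policyAssignments",
+  "apiVersion": "2021-06-01",
+  "name": "collect-activity-logs-all-regions",
+  "properties": {
+    "displayName": "Azure Monitor should collect activity logs from all regions",
+    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/41388f1c-2db0-4c25-95b2-35d7f5ccbfa9"
+  }
+}
+```
+
+Because this definition uses the `AuditIfNotExists` effect, the assignment only reports non-compliant log profiles; it does not change them.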
+
+### Maintenance, Monitoring, And Analysis Of Audit Logs-16.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Log Analytics extension should be installed on your Linux Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F842c54e8-c2f9-4d79-ae8d-38d8b8019373) |This policy audits Linux Azure Arc machines if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Arc_Linux_LogAnalytics_Audit.json) |
+|[\[Preview\]: Log Analytics extension should be installed on your Windows Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd69b1763-b96d-40b8-a2d9-ca31e9fd0d3e) |This policy audits Windows Azure Arc machines if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Arc_Windows_LogAnalytics_Audit.json) |
+|[Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a4e592a-6a6e-44a5-9814-e36264ca96e7) |This policy ensures that a log profile collects logs for categories 'write,' 'delete,' and 'action' |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllCategories.json) |
+|[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy checks whether a log profile is enabled for exporting activity logs. It audits subscriptions that have no log profile created to export the logs either to a storage account or to an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) |
+
+### Maintenance, Monitoring, And Analysis Of Audit Logs-16.3
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) |
+|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) |
+|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) |
+|[Log Analytics agent should be installed on your virtual machine scale sets for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3a6ea0c-e018-4933-9ef0-5aaa1501449b) |Security Center collects data from your Azure virtual machines (VMs) to monitor for security vulnerabilities and threats. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLaAgentOnVmss.json) |
+|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Virtual Machine Scale Sets should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c1b1214-f927-48bf-8882-84f0af6588b1) |It is recommended to enable logs so that the activity trail can be recreated when investigations are required in the event of an incident or a compromise. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ServiceFabric_and_VMSS_AuditVMSSDiagnostics.json) |
+
+## Metrics
+
+### Metrics-21.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.2.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
+|[\[Preview\]: Private endpoint should be configured for Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) |Private link provides a way to connect Key Vault to your Azure resources without sending traffic over the public internet. Private link provides defense in depth protection against data exfiltration. |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultPrivateEndpointEnabled_Audit.json) |
+|[Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f905d99-2ab7-462c-a6b0-f709acca6c8f) |Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_CMK_Deny.json) |
+|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) |
+|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[Azure Machine Learning workspaces should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fba769a63-b8cc-4b2d-abf6-ac33c7204be8) |Manage encryption at rest of Azure Machine Learning workspace data with customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/azureml-workspaces-cmk](https://aka.ms/azureml-workspaces-cmk). |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_CMKEnabled_Audit.json) |
+|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](https://go.microsoft.com/fwlink/?linkid=2121321). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
+|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](https://aka.ms/acr/CMK). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
+|[Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |
+|[Key vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1e66c121-a66a-4b1f-9b83-0fd99bf0fc2d) |Deleting a key vault without soft delete enabled permanently deletes all secrets, keys, and certificates stored in the key vault. Accidental deletion of a key vault can lead to permanent data loss. Soft delete allows you to recover an accidentally deleted key vault for a configurable retention period. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_SoftDeleteMustBeEnabled_Audit.json) |
+|[MySQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83cef61d-dbd1-4b20-a4fc-5fbc7da10833) |Use customer-managed keys to manage the encryption at rest of your MySQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableByok_Audit.json) |
+|[PostgreSQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) |
+|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac01ad65-10e5-46df-bdd9-6b0cad13e1d2) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) |
+|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a370ff3-6cab-4e85-8995-295fd854c5b8) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) |
+|[Storage accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) |
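+
+Many of the definitions above expose an effect parameter (reflected in the Effect(s) column) that can be tightened at assignment time. As a hedged sketch, the following hypothetical ARM template resource assigns 'Key vaults should have purge protection enabled' with the effect pinned to `Deny`; the parameter name `effect` is the convention used by most built-ins, but confirm it in the linked definition JSON before relying on it.
+
+```json
+{
+  "type": "Microsoft.Authorization/policyAssignments",
+  "apiVersion": "2021-06-01",
+  "name": "deny-keyvault-without-purge-protection",
+  "properties": {
+    "displayName": "Key vaults should have purge protection enabled",
+    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/0b60c0b2-2dc2-4e1c-b5c9-abbed971de53",
+    "parameters": {
+      "effect": { "value": "Deny" }
+    }
+  }
+}
+```
+
+With `Deny`, new key vaults that do not enable purge protection are blocked at creation time rather than merely being flagged for remediation.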
+
+### Metrics-21.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Hotpatch should be enabled for Windows Server Azure Edition VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d02d2f7-e38b-4bdc-96f3-adc0a8726abc) |Minimize reboots and install updates quickly with hotpatch. Learn more at [https://docs.microsoft.com/azure/automanage/automanage-hotpatch](../../../automanage/automanage-hotpatch.md) |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automanage/HotpatchShouldBeEnabledforWindowsServerAzureEditionVMs.json) |
+
+## Authentication Framework For Customers
+
+### Authentication Framework For Customers-9.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[MFA should be enabled for accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
+|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
+
+### Authentication Framework For Customers-9.3
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) |
+|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) |
+|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) |
+|[MFA should be enabled for accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
+|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
+
+## Audit Log Settings
+
+### Audit Log Settings-17.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) |
+|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) |
+|[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) |
+|[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) |
+|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
+|[Resource logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057ef27e-665e-4328-8ea3-04b3122bd9fb) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeStore_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9be5368-9bf5-4b84-9e0a-7850da98bb46) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Stream%20Analytics/StreamAnalytics_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc95c74d9-38fe-4f0d-af86-0c7d626a315c) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeAnalytics_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Event Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83a214f7-d01a-484b-91a9-ed54470c9a6a) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8d36e2f-389b-4ee4-898d-21aeb69a0f45) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Virtual Machine Scale Sets should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c1b1214-f927-48bf-8882-84f0af6588b1) |It is recommended to enable logs so that the activity trail can be recreated when investigations are required in the event of an incident or a compromise. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ServiceFabric_and_VMSS_AuditVMSSDiagnostics.json) |
+|[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) |
+|[Windows machines should meet requirements of the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureWindowsBaseline_AINE.json) |
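+
+Several of the definitions above audit whether resource logs are enabled. One way to satisfy them is a diagnostic setting that routes a resource's logs to a Log Analytics workspace. The snippet below is a hedged, hypothetical sketch for a key vault; the resource and workspace names are placeholders, and the `AuditEvent` category and API version should be verified against the diagnostic settings supported by your resource.
+
+```json
+{
+  "type": "Microsoft.Insights/diagnosticSettings",
+  "apiVersion": "2021-05-01-preview",
+  "scope": "Microsoft.KeyVault/vaults/contoso-vault",
+  "name": "send-audit-logs",
+  "properties": {
+    "workspaceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace>",
+    "logs": [
+      {
+        "category": "AuditEvent",
+        "enabled": true
+      }
+    ]
+  }
+}
+```
+
+Comparable diagnostic settings for the other resource types in the table (Event Hub, Service Bus, Logic Apps, and so on) use the same resource shape with the log categories those services expose.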
+
+## Anti-Phishing
+
+### Anti-Phishing-14.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Azure Key Vaults should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6abeaec-4d90-4a02-805f-6b26c4d3fbe9) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to key vault, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/akvprivatelink](https://aka.ms/akvprivatelink). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVault_Should_Use_PrivateEndpoint_Audit.json) |
+|[\[Preview\]: Azure Recovery Services vaults should use private link for backup](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdeeddb44-9f94-4903-9fa0-081d524406e3) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/AB-PrivateEndpoints](https://aka.ms/AB-PrivateEndpoints). |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/RecoveryServices_PrivateEndpoint_Audit.json) |
+|[\[Preview\]: Private endpoint should be configured for Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) |Private link provides a way to connect Key Vault to your Azure resources without sending traffic over the public internet. Private link provides defense in depth protection against data exfiltration. |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultPrivateEndpointEnabled_Audit.json) |
+|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
+|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
+|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) |
+|[API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef619a2c-cc4d-4d03-b2ba-8c94a834d85b) |Azure Virtual Network deployment provides enhanced security and isolation, and allows you to place your API Management service in a non-internet-routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enables access to your backend services within the network and/or on-premises. The developer portal and API gateway can be configured to be accessible either from the Internet or only within the virtual network. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_VNETEnabled_Audit.json) |
+|[App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca610c1d-041c-4332-9d88-7ed3094967c7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_Audit.json) |
+|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) |
+|[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) |
+|[Azure File Sync should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d320205-c6a1-4ac6-873d-46224024e8e2) |Creating a private endpoint for the indicated Storage Sync Service resource allows you to address your Storage Sync Service resource from within the private IP address space of your organization's network, rather than through the internet-accessible public endpoint. Creating a private endpoint by itself does not disable the public endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageSync_PrivateEndpoint_AuditIfNotExists.json) |
+|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) |
+|[Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf35e2a4-ef96-44e7-a9ae-853dd97032c4) |Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from Internet. 2. Enable Azure Spring Cloud to interact with systems in either on premises data centers or Azure service in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud. |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Platform/Spring_VNETEnabled_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
+|[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink](https://aka.ms/acr/privatelink), [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
+|[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
+|[Private endpoint connections on Azure SQL Database should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7698e800-9299-47a6-b3b6-5a0fee576eed) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure SQL Database. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PrivateEndpoint_Audit.json) |
+|[Private endpoint should be enabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a1302fb-a631-4106-9753-f3d494733990) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MariaDB. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_EnablePrivateEndPoint_Audit.json) |
+|[Private endpoint should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7595c971-233d-4bcf-bd18-596129188c49) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MySQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnablePrivateEndPoint_Audit.json) |
+|[Private endpoint should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0564d078-92f5-4f97-8398-b9f58a51f70b) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnablePrivateEndPoint_Audit.json) |
+|[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) |
+|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
+|[Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) |
+|[Storage accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6edd7eda-6dd8-40f7-810d-67160c639cd9) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your storage account, data leakage risks are reduced. Learn more about private links at - [https://aka.ms/azureprivatelinkoverview](https://aka.ms/azureprivatelinkoverview) |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountPrivateEndpointEnabled_Audit.json) |
+|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](../../../virtual-machines/linux/image-builder-networking.md#deploy-using-an-existing-vnet). |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) |
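+
+The definitions in this table are built-in Azure Policy definitions, so they can be assigned directly at a management group, subscription, or resource group scope. The following Azure CLI sketch is illustrative only: the assignment name and the `<subscription-id>` placeholder are hypothetical, and it assumes you're signed in with rights to create policy assignments at the target scope. The GUID is the definition name listed above for *Storage accounts should restrict network access*; unless you override parameters, the assignment evaluates with the definition's default effect.
+
+```azurecli
+# Minimal sketch: assign the built-in definition "Storage accounts should
+# restrict network access" (GUID from the table above) at subscription scope.
+az policy assignment create \
+  --name 'restrict-storage-network-access' \
+  --display-name 'Storage accounts should restrict network access' \
+  --policy '34c877ad-507e-4c82-993e-3452a6e0ad3c' \
+  --scope '/subscriptions/<subscription-id>'
+```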
+
+## Advanced Real-Time Threat Defence and Management
+
+### Advanced Real-Time Threat Defence and Management-13.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Guest Attestation extension should be installed on supported Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F672fe5a1-2fcd-42d7-b85d-902b6e28c6ff) |Install Guest Attestation extension on supported Linux virtual machines to allow Azure Security Center to proactively attest and monitor the boot integrity. Once installed, boot integrity will be attested via Remote Attestation. This assessment only applies to trusted launch enabled Linux virtual machines. |AuditIfNotExists, Disabled |[5.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLinuxGAExtOnVm_Audit.json) |
+|[\[Preview\]: Guest Attestation extension should be installed on supported Linux virtual machines scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa21f8c92-9e22-4f09-b759-50500d1d2dda) |Install Guest Attestation extension on supported Linux virtual machines scale sets to allow Azure Security Center to proactively attest and monitor the boot integrity. Once installed, boot integrity will be attested via Remote Attestation. This assessment only applies to trusted launch enabled Linux virtual machine scale sets. |AuditIfNotExists, Disabled |[4.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLinuxGAExtOnVmss_Audit.json) |
+|[\[Preview\]: Guest Attestation extension should be installed on supported Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1cb4d9c2-f88f-4069-bee0-dba239a57b09) |Install Guest Attestation extension on supported virtual machines to allow Azure Security Center to proactively attest and monitor the boot integrity. Once installed, boot integrity will be attested via Remote Attestation. This assessment only applies to trusted launch enabled virtual machines. |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallWindowsGAExtOnVm_Audit.json) |
+|[\[Preview\]: Guest Attestation extension should be installed on supported Windows virtual machines scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff655e522-adff-494d-95c2-52d4f6d56a42) |Install Guest Attestation extension on supported virtual machines scale sets to allow Azure Security Center to proactively attest and monitor the boot integrity. Once installed, boot integrity will be attested via Remote Attestation. This assessment only applies to trusted launch enabled virtual machine scale sets. |AuditIfNotExists, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallWindowsGAExtOnVmss_Audit.json) |
+|[\[Preview\]: Secure Boot should be enabled on supported Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97566dd7-78ae-4997-8b36-1c7bfe0d8121) |Enable Secure Boot on supported Windows virtual machines to mitigate against malicious and unauthorized changes to the boot chain. Once enabled, only trusted bootloaders, kernel and kernel drivers will be allowed to run. This assessment only applies to trusted launch enabled Windows virtual machines. |Audit, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableWindowsSB_Audit.json) |
+|[\[Preview\]: vTPM should be enabled on supported virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c30f9cd-b84c-49cc-aa2c-9288447cc3b3) |Enable virtual TPM device on supported virtual machines to facilitate Measured Boot and other OS security features that require a TPM. Once enabled, vTPM can be used to attest boot integrity. This assessment only applies to trusted launch enabled virtual machines. |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableVTPM_Audit.json) |
+|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
+|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) |
+|[App Service apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bb220d9-2698-4ee4-8404-b9c30c9df609) |Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_ClientCert.json) |
+|[App Service apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
+|[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) |
+|[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Upgrade to the latest TLS version. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) |
+|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from the latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](/azure/security-center/security-center-endpoint-protection). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssuesShouldBeResolvedOnYourMachines_Audit.json) |
+|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f7c564c-0a90-4d44-b7e1-9d456cffaee8) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionShouldBeInstalledOnYourMachines_Audit.json) |
+|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machine scale sets to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) |
+|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) |
+|[Function apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
+|[Function apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0820b7b9-23aa-4725-a1ce-ae4558f718e5) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_FuntionApp_Audit.json) |
+|[Function apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Upgrade to the latest TLS version. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
+|[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) |
+|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
+|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
+|[Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37e0d2fe-28a5-43d6-a273-67d37d1f5606) |Use new Azure Resource Manager for your storage accounts to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Classic_AuditForClassicStorages_Audit.json) |
+|[Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d84d5fb-01f6-4d12-ba4f-4a26081d403d) |Use new Azure Resource Manager for your virtual machines to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ClassicCompute_Audit.json) |
+|[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) |
+|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |
+|[Windows machines should meet requirements of the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureWindowsBaseline_AINE.json) |
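+
+Once one of these definitions is assigned, you can check how the machines in scope are doing against it by querying Azure Policy compliance states. The snippet below is a minimal sketch, assuming the `az policy state` commands are available in your CLI version and that the definition is already assigned somewhere in scope; the GUID in the filter is the built-in *Monitor missing Endpoint Protection in Azure Security Center* definition from the table above.
+
+```azurecli
+# Minimal sketch: summarize compliance results for a single built-in definition.
+# For built-in definitions, policyDefinitionName is the definition's GUID.
+az policy state summarize \
+  --filter "policyDefinitionName eq 'af6cd1bd-1635-48cb-bde7-5b15693900b9'"
+
+# List the individual non-compliant resources for the same definition.
+az policy state list \
+  --filter "policyDefinitionName eq 'af6cd1bd-1635-48cb-bde7-5b15693900b9' and complianceState eq 'NonCompliant'"
+```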
+
+### Advanced Real-Time Threat Defence and Management-13.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have Microsoft Defender for Cloud extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Microsoft Defender for Cloud extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from all nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc](../../../defender-for-cloud/defender-for-containers-enable.md?pivots=defender-for-container-arc). |AuditIfNotExists, Disabled |[6.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_Arc_Extension_Audit.json) |
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
+|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) |
+|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) |
+|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](../../../defender-for-cloud/defender-for-containers-introduction.md?tabs=defender-for-container-arch-aks) |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
+|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from the latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](/azure/security-center/security-center-endpoint-protection). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssuesShouldBeResolvedOnYourMachines_Audit.json) |
+|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f7c564c-0a90-4d44-b7e1-9d456cffaee8) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionShouldBeInstalledOnYourMachines_Audit.json) |
+|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machine scale sets to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) |
+|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |
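+
+Many of the definitions in this table audit whether a specific Microsoft Defender for Cloud (formerly Azure Defender) plan is enabled on the subscription. The sketch below shows one way to turn a plan on with the Azure CLI; it's an illustrative example rather than a prescribed remediation, and it assumes the `az security pricing` commands behave as in current CLI releases. The plan name `VirtualMachines` corresponds to *Azure Defender for servers should be enabled*, and enabling a plan incurs charges on the subscription.
+
+```azurecli
+# Minimal sketch: enable the Defender for servers plan, which the
+# "Azure Defender for servers should be enabled" definition checks for.
+az security pricing create --name 'VirtualMachines' --tier 'standard'
+
+# Review the current tier of every Defender plan on the subscription.
+az security pricing list
+```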
+
+### Advanced Real-Time Threat Defence and Management-13.3
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
+|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
+|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
+|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
+|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
+|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
+|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) |
+|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) |
+|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
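+
+Several definitions in this table flag subnets and virtual machines that have no network security group attached. As an illustration only, the following sketch associates an existing NSG with a subnet, which is one way to address the *Subnets should be associated with a Network Security Group* recommendation; every resource name shown is a hypothetical placeholder.
+
+```azurecli
+# Minimal sketch: attach an existing NSG to a subnet so it stops being flagged
+# by "Subnets should be associated with a Network Security Group".
+# All names below are placeholders.
+az network vnet subnet update \
+  --resource-group 'my-rg' \
+  --vnet-name 'my-vnet' \
+  --name 'workload-subnet' \
+  --network-security-group 'my-nsg'
+```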
+
+### Advanced Real-Time Threat Defence and Management-13.4
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
+|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
+|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
+|[App Service apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_WebApp_Audit.json) |
+|[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Upgrade to the latest TLS version. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) |
+|[Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f905d99-2ab7-462c-a6b0-f709acca6c8f) |Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_CMK_Deny.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
+|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) |
+|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Azure Machine Learning workspaces should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fba769a63-b8cc-4b2d-abf6-ac33c7204be8) |Manage encryption at rest of Azure Machine Learning workspace data with customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/azureml-workspaces-cmk](https://aka.ms/azureml-workspaces-cmk). |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_CMKEnabled_Audit.json) |
+|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) |
+|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](https://go.microsoft.com/fwlink/?linkid=2121321). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
+|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](https://aka.ms/acr/CMK). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
+|[Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe802a67a-daf5-4436-9ea6-f6d821dd0c5d) |Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableSSL_Audit.json) |
+|[Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd158790f-bfb0-486c-8631-2dc6b4e8e6af) |Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableSSL_Audit.json) |
+|[Function apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
+|[Function apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) |
+|[Function apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Upgrade to the latest TLS version. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
+|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
+|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) |
+|[MySQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83cef61d-dbd1-4b20-a4fc-5fbc7da10833) |Use customer-managed keys to manage the encryption at rest of your MySQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableByok_Audit.json) |
+|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) |
+|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) |
+|[PostgreSQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) |
+|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
+|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac01ad65-10e5-46df-bdd9-6b0cad13e1d2) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) |
+|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a370ff3-6cab-4e85-8995-295fd854c5b8) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) |
+|[Storage accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) |
+|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
+|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
+|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse), Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
+|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
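+
+Any of the built-in definitions listed above can be assigned directly with Azure CLI by its definition GUID (the GUID is the last segment of the portal link in the **Name** column). The following is a minimal sketch, not a prescribed procedure: the assignment name and `<subscription-id>` are placeholders, and the GUID shown is the one for 'Management ports should be closed on your virtual machines' from the table above.
+
+```azurecli
+# Sketch: assign the built-in definition "Management ports should be closed
+# on your virtual machines" (GUID taken from the table above) at subscription
+# scope. The assignment name and <subscription-id> are placeholders.
+az policy assignment create \
+  --name 'close-mgmt-ports' \
+  --display-name 'Management ports should be closed on your virtual machines' \
+  --policy '22730e10-96f6-4aac-ad84-9383d35b5917' \
+  --scope '/subscriptions/<subscription-id>'
+```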
+
+## Application Security Life Cycle (ASLC)
+
+### Application Security Life Cycle (ASLC)-6.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+
+### Application Security Life Cycle (ASLC)-6.3
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+
+### Application Security Life Cycle (ASLC)-6.4
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) |
+|[App Service apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
+|[Application Insights components should block log ingestion and querying from public networks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc02227-0cb6-4e11-8f53-eb0b22eab7e8) |Improve Application Insights security by blocking log ingestion and querying from public networks. Only private-link connected networks will be able to ingest and query logs of this component. Learn more at [https://aka.ms/AzMonPrivateLink#configure-application-insights](https://aka.ms/AzMonPrivateLink#configure-application-insights). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponents_NetworkAccessEnabled_Deny.json) |
+|[Application Insights components should block non-Azure Active Directory based ingestion.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F199d5677-e4d9-4264-9465-efe1839c06bd) |Enforcing log ingestion to require Azure Active Directory authentication prevents unauthenticated logs from an attacker which could lead to incorrect status, false alerts, and incorrect logs stored in the system. |Deny, Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponents_DisableLocalAuth_Deny.json) |
+|[Application Insights components with Private Link enabled should use Bring Your Own Storage accounts for profiler and debugger.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0c4bd2e8-8872-4f37-a654-03f6f38ddc76) |To support private link and customer-managed key policies, create your own storage account for profiler and debugger. Learn more in [https://docs.microsoft.com/azure/azure-monitor/app/profiler-bring-your-own-storage](/azure/azure-monitor/app/profiler-bring-your-own-storage) |Deny, Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponents_ForceCustomerStorageForProfiler_Deny.json) |
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) |
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Azure Monitor Logs for Application Insights should be linked to a Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd550e854-df1a-4de9-bf44-cd894b39a95e) |Link the Application Insights component to a Log Analytics workspace for logs encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your data in Azure Monitor. Linking your component to a Log Analytics workspace that's enabled with a customer-managed key, ensures that your Application Insights logs meet this compliance requirement, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponent_WorkspaceAssociation_Deny.json) |
+|[Function apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
+|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) |
+
+### Application Security Life Cycle (ASLC)-6.6
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+
+### Application Security Life Cycle (ASLC)-6.7
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
+|[Web Application Firewall (WAF) should enable all firewall rules for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F632d3993-e2c0-44ea-a7db-2eca131f356d) |Enabling all Web Application Firewall (WAF) rules strengthens your application security and protects your web applications against common vulnerabilities. To learn more about Web Application Firewall (WAF) with Application Gateway, visit [https://aka.ms/waf-ag](https://aka.ms/waf-ag) |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ACAT_WAF_AppGatewayAllRulesEnabled_Audit.json) |
+|[Web Application Firewall (WAF) should use the specified mode for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F12430be1-6cc8-4527-a9a8-e3d38f250096) |Mandates the use of 'Detection' or 'Prevention' mode to be active on all Web Application Firewall policies for Application Gateway. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayMode_Audit.json) |
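+
+Before assigning any of the WAF or container definitions above, the full policy rule and its parameters can be inspected by GUID. A minimal sketch, assuming Azure CLI and using the Application Gateway WAF definition GUID from the table above:
+
+```azurecli
+# Sketch: retrieve the built-in definition "Web Application Firewall (WAF)
+# should be enabled for Application Gateway" by the GUID shown in the table above.
+az policy definition show --name '564feb30-bf6a-4854-b4bb-0d2d2d1e6c66'
+```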
+
+### Maintenance, Monitoring, and Analysis of Audit Logs-16.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) |
+|[Flow logs should be configured for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audit for network security groups to verify if flow logs are configured. Enabling flow logs allows to log information about IP traffic flowing through network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) |
+|[Flow logs should be enabled for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27960feb-a23c-4577-8d36-ef8b5f35e0be) |Audit for flow log resources to verify if flow log status is enabled. Enabling flow logs allows to log information about IP traffic flowing through network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcherFlowLog_Enabled_Audit.json) |
+|[Log duration should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e8f3) |This policy helps audit any PostgreSQL databases in your environment without log_duration setting enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDuration_Audit.json) |
+|[Network Watcher flow logs should have traffic analytics enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f080164-9f4d-497e-9db6-416dc9f7b48a) |Traffic analytics analyzes Network Watcher network security group flow logs to provide insights into traffic flow in your Azure cloud. It can be used to visualize network activity across your Azure subscriptions and identify hot spots, identify security threats, understand traffic flow patterns, pinpoint network misconfigurations and more. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_TrafficAnalytics_Audit.json) |
+
+### Maintenance, Monitoring, and Analysis of Audit Logs-16.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Log Analytics extension should be installed on your Linux Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F842c54e8-c2f9-4d79-ae8d-38d8b8019373) |This policy audits Linux Azure Arc machines if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Arc_Linux_LogAnalytics_Audit.json) |
+|[\[Preview\]: Log Analytics extension should be installed on your Windows Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd69b1763-b96d-40b8-a2d9-ca31e9fd0d3e) |This policy audits Windows Azure Arc machines if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Arc_Windows_LogAnalytics_Audit.json) |
+|[Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a4e592a-6a6e-44a5-9814-e36264ca96e7) |This policy ensures that a log profile collects logs for categories 'write,' 'delete,' and 'action' |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllCategories.json) |
+|[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy ensures if a log profile is enabled for exporting activity logs. It audits if there is no log profile created to export the logs either to a storage account or to an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) |
+
+### Maintenance, Monitoring, and Analysis of Audit Logs-16.3
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) |
+|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) |
+|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits any Azure Monitor log profile that does not export activities from all Azure-supported regions, including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) |
+|[Log Analytics agent should be installed on your virtual machine scale sets for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3a6ea0c-e018-4933-9ef0-5aaa1501449b) |Security Center collects data from your Azure virtual machines (VMs) to monitor for security vulnerabilities and threats. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLaAgentOnVmss.json) |
+|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Virtual Machine Scale Sets should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c1b1214-f927-48bf-8882-84f0af6588b1) |It is recommended to enable Logs so that activity trail can be recreated when investigations are required in the event of an incident or a compromise. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ServiceFabric_and_VMSS_AuditVMSSDiagnostics.json) |
+
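To put one of these audit policies into effect, you assign its definition to a scope. The definitions in this table only report compliance (AuditIfNotExists), so an assignment with no parameters is enough. The following is a rough sketch, not a definitive implementation: it assumes the `azure-identity` and `requests` packages, a placeholder subscription ID and assignment name, and the `2021-06-01` api-version of the policy assignments REST API (check the current ARM reference for the version you target).

```python
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"              # placeholder
scope = f"/subscriptions/{subscription_id}"
assignment_name = "audit-keyvault-resource-logs"   # hypothetical assignment name

# "Resource logs in Key Vault should be enabled" from the table above.
definition_id = (
    "/providers/Microsoft.Authorization/policyDefinitions/"
    "cf820ca0-f99e-4f3e-84fb-66e913812d21"
)

# Acquire an ARM token with whatever credential DefaultAzureCredential resolves to.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com{scope}/providers/Microsoft.Authorization/"
    f"policyAssignments/{assignment_name}"
)
body = {
    "properties": {
        "displayName": "Resource logs in Key Vault should be enabled",
        "policyDefinitionId": definition_id,
    }
}

resp = requests.put(
    url,
    params={"api-version": "2021-06-01"},  # assumed api-version; verify before use
    json=body,
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print(resp.json()["id"])
```
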
+### Application Security Life Cycle (ASLC)-6.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+
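After the container policies above are assigned, the Policy Insights `policyStates` endpoint can list the registries and images that still have open findings. A sketch under the same assumptions as the earlier assignment example (`azure-identity`, `requests`, placeholder subscription ID), plus the assumption that the `2019-10-01` api-version and the `policyDefinitionId`/`complianceState` `$filter` fields are accepted by `Microsoft.PolicyInsights`:

```python
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder
definition_id = (
    "/providers/Microsoft.Authorization/policyDefinitions/"
    "5f0f936f-2f01-4bf5-b6be-d423792fa562"  # container registry vulnerability findings
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}/providers/"
    "Microsoft.PolicyInsights/policyStates/latest/queryResults"
)
params = {
    "api-version": "2019-10-01",  # assumed api-version; verify before use
    "$filter": (
        f"policyDefinitionId eq '{definition_id}' and complianceState eq 'NonCompliant'"
    ),
}

resp = requests.post(url, params=params, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

# Print the resource IDs of non-compliant container registries.
for state in resp.json().get("value", []):
    print(state["resourceId"])
```
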
+### Application Security Life Cycle (ASLC)-6.3
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+
+### Application Security Life Cycle (ASLC)-6.4
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) |
+|[App Service apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
+|[Application Insights components should block log ingestion and querying from public networks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc02227-0cb6-4e11-8f53-eb0b22eab7e8) |Improve Application Insights security by blocking log ingestion and querying from public networks. Only private-link connected networks will be able to ingest and query logs of this component. Learn more at [https://aka.ms/AzMonPrivateLink#configure-application-insights](https://aka.ms/AzMonPrivateLink#configure-application-insights). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponents_NetworkAccessEnabled_Deny.json) |
+|[Application Insights components should block non-Azure Active Directory based ingestion.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F199d5677-e4d9-4264-9465-efe1839c06bd) |Enforcing log ingestion to require Azure Active Directory authentication prevents unauthenticated logs from an attacker which could lead to incorrect status, false alerts, and incorrect logs stored in the system. |Deny, Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponents_DisableLocalAuth_Deny.json) |
+|[Application Insights components with Private Link enabled should use Bring Your Own Storage accounts for profiler and debugger.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0c4bd2e8-8872-4f37-a654-03f6f38ddc76) |To support private link and customer-managed key policies, create your own storage account for profiler and debugger. Learn more in [https://docs.microsoft.com/azure/azure-monitor/app/profiler-bring-your-own-storage](/azure/azure-monitor/app/profiler-bring-your-own-storage) |Deny, Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponents_ForceCustomerStorageForProfiler_Deny.json) |
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) |
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Azure Monitor Logs for Application Insights should be linked to a Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd550e854-df1a-4de9-bf44-cd894b39a95e) |Link the Application Insights component to a Log Analytics workspace for logs encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your data in Azure Monitor. Linking your component to a Log Analytics workspace that's enabled with a customer-managed key, ensures that your Application Insights logs meet this compliance requirement, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponent_WorkspaceAssociation_Deny.json) |
+|[Function apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
+|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) |
+
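Several definitions in the preceding table expose their effect as a parameter (the **Effect(s)** column lists every allowed value), so the same definition can run in audit-only mode first and be switched to `Deny` later. The following sketch shows only the assignment body that pins the effect, reusing the hypothetical REST call from the earlier example; the `effect` parameter name follows the convention most built-ins use, but confirm it against the definition's JSON (linked in the **Version** column) before relying on it.

```python
import json

# Request body for the PUT .../policyAssignments/{name} call shown earlier.
# Here the "Application Insights components should block non-Azure Active
# Directory based ingestion." definition is assigned with its effect forced
# to Deny instead of the default.
assignment_body = {
    "properties": {
        "displayName": "Block non-AAD ingestion for Application Insights",
        "policyDefinitionId": (
            "/providers/Microsoft.Authorization/policyDefinitions/"
            "199d5677-e4d9-4264-9465-efe1839c06bd"
        ),
        # Assignment parameter values are wrapped in a {"value": ...} object;
        # "effect" is an assumed parameter name - check the definition JSON.
        "parameters": {"effect": {"value": "Deny"}},
    }
}

print(json.dumps(assignment_body, indent=2))
```
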
+### Application Security Life Cycle (ASLC)-6.6
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+
+### Application Security Life Cycle (ASLC)-6.7
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
+|[Web Application Firewall (WAF) should enable all firewall rules for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F632d3993-e2c0-44ea-a7db-2eca131f356d) |Enabling all Web Application Firewall (WAF) rules strengthens your application security and protects your web applications against common vulnerabilities. To learn more about Web Application Firewall (WAF) with Application Gateway, visit [https://aka.ms/waf-ag](https://aka.ms/waf-ag) |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ACAT_WAF_AppGatewayAllRulesEnabled_Audit.json) |
+|[Web Application Firewall (WAF) should use the specified mode for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F12430be1-6cc8-4527-a9a8-e3d38f250096) |Mandates the use of 'Detection' or 'Prevention' mode to be active on all Web Application Firewall policies for Application Gateway. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayMode_Audit.json) |
+
+## Data Leak Prevention Strategy
+
+### Data Leak Prevention Strategy-15.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
+|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](/azure/security-center/security-center-endpoint-protection). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssuesShouldBeResolvedOnYourMachines_Audit.json) |
+|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f7c564c-0a90-4d44-b7e1-9d456cffaee8) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionShouldBeInstalledOnYourMachines_Audit.json) |
+|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machines scale sets, to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) |
+|[Flow logs should be enabled for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27960feb-a23c-4577-8d36-ef8b5f35e0be) |Audit flow log resources to verify whether flow log status is enabled. Enabling flow logs lets you log information about IP traffic flowing through a network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcherFlowLog_Enabled_Audit.json) |
+|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
+|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
+|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |
+
+### Data Leak Prevention Strategy-15.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) |
+|[Storage accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2982f36-99f2-4db5-8eff-283140c09693) |To improve the security of Storage Accounts, ensure that they aren't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://aka.ms/storageaccountpublicnetworkaccess](https://aka.ms/storageaccountpublicnetworkaccess). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StoragePublicNetworkAccess_AuditDeny.json) |
+|[Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) |
+
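The public-network-access controls above are often enforced narrowly at first, for example on a single resource group that holds internet-facing data stores, before a subscription-wide rollout. Policy assignments accept any ARM scope, so the earlier assignment sketch only needs a different `scope` string. The snippet below is a resource-group-scoped sketch under the same assumptions (`azure-identity`, `requests`, assumed `2021-06-01` api-version); the resource group, subscription ID, and assignment names are placeholders, and no parameters are passed so each definition's default effect applies.

```python
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"   # placeholder
resource_group = "<resource-group>"     # placeholder
scope = f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

# Definition GUIDs taken from the Data Leak Prevention Strategy-15.2 table above.
public_access_policies = {
    "audit-public-mariadb": "fdccbe47-f3e3-4213-ad5d-ea459b2fa077",
    "audit-public-mysql": "d9844e8a-1437-4aeb-a32c-0c992f056095",
    "audit-public-storage": "b2982f36-99f2-4db5-8eff-283140c09693",
}

for assignment_name, guid in public_access_policies.items():
    url = (
        f"https://management.azure.com{scope}/providers/Microsoft.Authorization/"
        f"policyAssignments/{assignment_name}"
    )
    body = {
        "properties": {
            # No parameters: the definition's default effect applies.
            "policyDefinitionId": (
                f"/providers/Microsoft.Authorization/policyDefinitions/{guid}"
            )
        }
    }
    resp = requests.put(
        url, params={"api-version": "2021-06-01"}, json=body, headers=headers
    )
    resp.raise_for_status()
    print("assigned:", resp.json()["name"])
```
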
+### Data Leak Prevention Strategy-15.3
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](/azure/security-center/security-center-endpoint-protection). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssuesShouldBeResolvedOnYourMachines_Audit.json) |
+|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f7c564c-0a90-4d44-b7e1-9d456cffaee8) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionShouldBeInstalledOnYourMachines_Audit.json) |
+|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machines scale sets, to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) |
+|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
+|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |
+
+## Forensics
+
+### Forensics-22.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+
+## Incident Response & Management
+
+### Responding To Cyber-Incidents-19.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
+
+### Recovery From Cyber-Incidents-19.4
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+
+### Recovery From Cyber-Incidents-19.5
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
+
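If you prefer not to copy GUIDs out of tables like the one above, the built-in definitions can be enumerated over REST and matched on display name in your own code. The sketch below looks up the geo-redundant backup definitions listed for this control; it assumes `azure-identity` and `requests`, an assumed `2021-06-01` api-version for `Microsoft.Authorization/policyDefinitions`, and client-side display-name matching as a convenience — the GUIDs in the table remain the authoritative identifiers.

```python
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = "https://management.azure.com/providers/Microsoft.Authorization/policyDefinitions"
params = {
    "api-version": "2021-06-01",          # assumed api-version; verify before use
    "$filter": "policyType eq 'BuiltIn'",  # restrict to built-in definitions
}

resp = requests.get(url, params=params, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

# Match the three "Geo-redundant backup should be enabled for Azure Database for ..."
# definitions from the table above. Paging via the response's nextLink is omitted
# for brevity, so a single page may not contain every built-in definition.
wanted_prefix = "Geo-redundant backup should be enabled for Azure Database for"
for definition in resp.json().get("value", []):
    display_name = definition["properties"].get("displayName", "")
    if display_name.startswith(wanted_prefix):
        print(definition["id"], "-", display_name)
```
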
+### Recovery From Cyber-Incidents-19.6
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
+
+### Recovery From Cyber-Incidents-19.6b
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+
+### Recovery From Cyber-Incidents-19.6c
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
+
+### Recovery From Cyber-Incidents-19.6e
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+
+### Metrics-21.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.2.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
+|[\[Preview\]: Private endpoint should be configured for Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) |Private link provides a way to connect Key Vault to your Azure resources without sending traffic over the public internet. Private link provides defense in depth protection against data exfiltration. |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultPrivateEndpointEnabled_Audit.json) |
+|[Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f905d99-2ab7-462c-a6b0-f709acca6c8f) |Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_CMK_Deny.json) |
+|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) |
+|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[Azure Machine Learning workspaces should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fba769a63-b8cc-4b2d-abf6-ac33c7204be8) |Manage encryption at rest of Azure Machine Learning workspace data with customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/azureml-workspaces-cmk](https://aka.ms/azureml-workspaces-cmk). |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_CMKEnabled_Audit.json) |
+|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](https://go.microsoft.com/fwlink/?linkid=2121321). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
+|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](https://aka.ms/acr/CMK). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
+|[Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |
+|[Key vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1e66c121-a66a-4b1f-9b83-0fd99bf0fc2d) |Deleting a key vault without soft delete enabled permanently deletes all secrets, keys, and certificates stored in the key vault. Accidental deletion of a key vault can lead to permanent data loss. Soft delete allows you to recover an accidentally deleted key vault for a configurable retention period. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_SoftDeleteMustBeEnabled_Audit.json) |
+|[MySQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83cef61d-dbd1-4b20-a4fc-5fbc7da10833) |Use customer-managed keys to manage the encryption at rest of your MySQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableByok_Audit.json) |
+|[PostgreSQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) |
+|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac01ad65-10e5-46df-bdd9-6b0cad13e1d2) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) |
+|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a370ff3-6cab-4e85-8995-295fd854c5b8) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) |
+|[Storage accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) |
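
Most of the Key Vault controls in this table audit settings that you can also configure directly on the vault. As a rough, non-authoritative sketch (the vault, resource group, and subnet names are placeholders and are not part of the mapping above), the following Azure CLI commands enable purge protection and restrict the vault firewall to a single subnet:

```azurecli
# Enable purge protection; soft delete is enabled by default on new vaults.
az keyvault update --name <key-vault-name> --resource-group <resource-group> --enable-purge-protection true

# Deny public traffic by default, then allow one virtual network subnet.
az keyvault update --name <key-vault-name> --resource-group <resource-group> --default-action Deny
az keyvault network-rule add --name <key-vault-name> --resource-group <resource-group> --subnet <subnet-resource-id>
```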
+
+### Metrics-21.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Hotpatch should be enabled for Windows Server Azure Edition VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d02d2f7-e38b-4bdc-96f3-adc0a8726abc) |Minimize reboots and install updates quickly with hotpatch. Learn more at [https://docs.microsoft.com/azure/automanage/automanage-hotpatch](../../../automanage/automanage-hotpatch.md) |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automanage/HotpatchShouldBeEnabledforWindowsServerAzureEditionVMs.json) |
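
To have this check evaluated continuously rather than reviewed ad hoc, one option is to assign the built-in definition listed above at subscription scope. A minimal sketch, where the assignment name and subscription ID shown here are placeholders:

```azurecli
# Assign the built-in policy definition by its ID (taken from the table above).
az policy assignment create \
  --name audit-hotpatch \
  --display-name "Hotpatch should be enabled for Windows Server Azure Edition VMs" \
  --policy 6d02d2f7-e38b-4bdc-96f3-adc0a8726abc \
  --scope /subscriptions/<subscription-id>
```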
+
+### Data Leak Prevention Strategy-15.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
+|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from the latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](/azure/security-center/security-center-endpoint-protection). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssuesShouldBeResolvedOnYourMachines_Audit.json) |
+|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f7c564c-0a90-4d44-b7e1-9d456cffaee8) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionShouldBeInstalledOnYourMachines_Audit.json) |
+|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machine scale sets to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) |
+|[Flow logs should be enabled for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27960feb-a23c-4577-8d36-ef8b5f35e0be) |Audit flow log resources to verify whether flow logs are enabled. Enabling flow logs allows you to log information about IP traffic flowing through a network security group. This information can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcherFlowLog_Enabled_Audit.json) |
+|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent are monitored by Azure Security Center and surfaced as recommendations. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
+|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
+|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |
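
Several of these definitions flag storage accounts and network security groups that are open to the public internet. As an illustrative example only (the account, resource group, and network names are placeholders, not values from the mapping), the storage-account restriction can be remediated with the Azure CLI like this:

```azurecli
# Deny traffic by default, then allow a specific virtual network subnet.
az storage account update --name <storage-account> --resource-group <resource-group> --default-action Deny
az storage account network-rule add --account-name <storage-account> --resource-group <resource-group> \
    --vnet-name <vnet-name> --subnet <subnet-name>
```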
+
+### Data Leak Prevention Strategy-15.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) |
+|[Storage accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2982f36-99f2-4db5-8eff-283140c09693) |To improve the security of Storage Accounts, ensure that they aren't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://aka.ms/storageaccountpublicnetworkaccess](https://aka.ms/storageaccountpublicnetworkaccess). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StoragePublicNetworkAccess_AuditDeny.json) |
+|[Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) |
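
The definitions in this table all audit the public network access property on PaaS data services. A hedged sketch of the corresponding remediation follows; the server and account names are placeholders, and parameter availability can vary by Azure CLI version:

```azurecli
# Disable public network access on an Azure Database for PostgreSQL single server.
az postgres server update --name <server-name> --resource-group <resource-group> --public-network-access Disabled

# The equivalent switch on a storage account.
az storage account update --name <storage-account> --resource-group <resource-group> --public-network-access Disabled
```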
+
+### Data Leak Prevention Strategy-15.3
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from the latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](/azure/security-center/security-center-endpoint-protection). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssuesShouldBeResolvedOnYourMachines_Audit.json) |
+|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f7c564c-0a90-4d44-b7e1-9d456cffaee8) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionShouldBeInstalledOnYourMachines_Audit.json) |
+|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machine scale sets to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) |
+|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent are monitored by Azure Security Center and surfaced as recommendations. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
+|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |
+
+### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.1
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
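
After assigning definitions like these, compliance results can take time to appear. If you don't want to wait for the next evaluation cycle, one approach (the resource group name here is a placeholder) is to trigger an on-demand scan and then summarize the state:

```azurecli
# Trigger an on-demand compliance evaluation, then summarize the results.
az policy state trigger-scan --resource-group <resource-group>
az policy state summarize --resource-group <resource-group>
```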
+
+### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.2
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+
+### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.4
+
+**ID**:
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in the security configuration of machines with Docker installed and display them as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers that do not satisfy the configured baseline are monitored by Azure Security Center and surfaced as recommendations. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+
+## Next steps
+
+Additional articles about Azure Policy:
+
+- Read the [Regulatory Compliance](../concepts/regulatory-compliance.md) overview.
+- See the [initiative definition structure](../concepts/initiative-definition-structure.md).
+- Review other examples at [Azure Policy samples](./index.md).
+- Review [Understanding policy effects](../concepts/effects.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
governance Australia Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md
Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark description: Details of the Azure Security Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Azure Security Benchmarkv1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmarkv1.md
Title: Regulatory Compliance details for Azure Security Benchmark v1 description: Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Canada Federal Pbmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md
Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
This built-in initiative is deployed as part of the
|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
|[Protect special information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa315c657-4a00-8eba-15ac-44692ad24423) |CMA_0409 - Protect special information |Manual, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0409.json) |
-### Ensure that 'Unattached disks' are encrypted
+### Ensure that only approved extensions are installed
-**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 7.3
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 7.4
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
This built-in initiative is deployed as part of the
|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
|[Windows machines should meet requirements for 'Security Options - Network Access'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ff60f98-7fa4-410a-9f7f-0b00f5afdbdd) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Network Access' for including access for anonymous users, local accounts, and remote access to the registry. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsNetworkAccess_AINE.json) |
-### Separate the duties of individuals to reduce the risk of malevolent activity without collusion.
+### Control the flow of CUI in accordance with approved authorizations.
-**ID**: CMMC L3 AC.3.017
+**ID**: CMMC L3 AC.2.016
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
governance Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
Policy And Procedures
|[Create a data inventory](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F043c1e56-5a16-52f8-6af8-583098ff3e60) |CMA_0096 - Create a data inventory |Manual, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0096.json) |
|[Maintain records of processing of personal data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F92ede480-154e-0e22-4dca-8b46a74a3a51) |CMA_0353 - Maintain records of processing of personal data |Manual, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0353.json) |
-### Updates During Installations / Removals
+### Automated Unauthorized Component Detection
-**ID**: FedRAMP High CM-8 (1)
+**ID**: FedRAMP High CM-8 (3)
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
governance Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Gov Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark (Azure Government) description: Details of the Azure Security Benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Gov Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Gov Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Gov Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Gov Dod Impact Level 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-dod-impact-level-4.md
initiative definition.
|[Bot Service should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51522a96-0869-4791-82f3-981000c2c67f) |Azure Bot Service automatically encrypts your resource to protect your data and meet organizational security and compliance commitments. By default, Microsoft-managed encryption keys are used. For greater flexibility in managing keys or controlling access to your subscription, select customer-managed keys, also known as bring your own key (BYOK). Learn more about Azure Bot Service encryption: [https://docs.microsoft.com/azure/bot-service/bot-service-encryption](/azure/bot-service/bot-service-encryption). |disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Bot%20Service/BotService_CMKEnabled_Audit.json) |
|[Both operating systems and data disks in Azure Kubernetes Service clusters should be encrypted by customer-managed keys](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7be79c-23ba-4033-84dd-45e2a5ccdd67) |Encrypting OS and data disks using customer-managed keys provides more control and greater flexibility in key management. This is a common requirement in many regulatory and industry compliance standards. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_CMK_Deny.json) |
|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](../../../cognitive-services/encryption/cognitive-services-encryption-keys-portal.md). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
-|[Container registries should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/container-registry-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
+|[Container registries should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/tutorial-enable-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
|[Event Hub namespaces should use a customer-managed key for encryption](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1ad735a-e96f-45d2-a7b2-9a4932cab7ec) |Azure Event Hubs supports the option of encrypting data at rest with either Microsoft-managed keys (default) or customer-managed keys. Choosing to encrypt data using customer-managed keys enables you to assign, rotate, disable, and revoke access to the keys that Event Hub will use to encrypt data in your namespace. Note that Event Hub only supports encryption with customer-managed keys for namespaces in dedicated clusters. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_CustomerManagedKeyEnabled_Audit.json) |
|[Logic Apps Integration Service Environment should be encrypted with customer-managed keys](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fafeaf6-7927-4059-a50a-8eb2a7a6f2b5) |Deploy into Integration Service Environment to manage encryption at rest of Logic Apps data using customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Logic%20Apps/LogicApps_ISEWithCustomerManagedKey_AuditDeny.json) |
|[Managed disks should be double encrypted with both platform-managed and customer-managed keys](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca91455f-eace-4f96-be59-e6e2c35b4816) |High security sensitive customers who are concerned of the risk associated with any particular encryption algorithm, implementation, or key being compromised can opt for additional layer of encryption using a different encryption algorithm/mode at the infrastructure layer using platform managed encryption keys. The disk encryption sets are required to use double encryption. Learn more at [https://aka.ms/disks-doubleEncryption](/azure/virtual-machines/disk-encryption#double-encryption-at-rest). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DoubleEncryptionRequired_Deny.json) |
governance Gov Dod Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-dod-impact-level-5.md
initiative definition.
|[Bot Service should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51522a96-0869-4791-82f3-981000c2c67f) |Azure Bot Service automatically encrypts your resource to protect your data and meet organizational security and compliance commitments. By default, Microsoft-managed encryption keys are used. For greater flexibility in managing keys or controlling access to your subscription, select customer-managed keys, also known as bring your own key (BYOK). Learn more about Azure Bot Service encryption: [https://docs.microsoft.com/azure/bot-service/bot-service-encryption](/azure/bot-service/bot-service-encryption). |disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Bot%20Service/BotService_CMKEnabled_Audit.json) |
|[Both operating systems and data disks in Azure Kubernetes Service clusters should be encrypted by customer-managed keys](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7be79c-23ba-4033-84dd-45e2a5ccdd67) |Encrypting OS and data disks using customer-managed keys provides more control and greater flexibility in key management. This is a common requirement in many regulatory and industry compliance standards. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_CMK_Deny.json) |
|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](../../../cognitive-services/encryption/cognitive-services-encryption-keys-portal.md). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
-|[Container registries should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/container-registry-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
+|[Container registries should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/tutorial-enable-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
|[Event Hub namespaces should use a customer-managed key for encryption](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1ad735a-e96f-45d2-a7b2-9a4932cab7ec) |Azure Event Hubs supports the option of encrypting data at rest with either Microsoft-managed keys (default) or customer-managed keys. Choosing to encrypt data using customer-managed keys enables you to assign, rotate, disable, and revoke access to the keys that Event Hub will use to encrypt data in your namespace. Note that Event Hub only supports encryption with customer-managed keys for namespaces in dedicated clusters. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_CustomerManagedKeyEnabled_Audit.json) | |[Logic Apps Integration Service Environment should be encrypted with customer-managed keys](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fafeaf6-7927-4059-a50a-8eb2a7a6f2b5) |Deploy into Integration Service Environment to manage encryption at rest of Logic Apps data using customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Logic%20Apps/LogicApps_ISEWithCustomerManagedKey_AuditDeny.json) | |[Managed disks should be double encrypted with both platform-managed and customer-managed keys](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca91455f-eace-4f96-be59-e6e2c35b4816) |High security sensitive customers who are concerned of the risk associated with any particular encryption algorithm, implementation, or key being compromised can opt for additional layer of encryption using a different encryption algorithm/mode at the infrastructure layer using platform managed encryption keys. The disk encryption sets are required to use double encryption. Learn more at [https://aka.ms/disks-doubleEncryption](/azure/virtual-machines/disk-encryption#double-encryption-at-rest). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/DoubleEncryptionRequired_Deny.json) |
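The policy rows above list the available effects (for example, Audit, Deny, Disabled). Purely as a hedged illustration (not part of the built-in initiative documentation), the sketch below assigns one of the definitions listed above, the container-registry customer-managed-key policy, at a placeholder resource group scope and selects the Audit effect; the subscription ID, resource group, assignment name, and the parameter name `effect` are assumptions.

```bash
# Hedged sketch: assign the 'Container registries should be encrypted with a
# customer-managed key' built-in definition (GUID taken from the table above).
# The subscription ID, resource group, assignment name, and the 'effect'
# parameter name are placeholders/assumptions.
az policy assignment create \
  --name acr-cmk-audit \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup" \
  --policy 5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580 \
  --params '{ "effect": { "value": "Audit" } }'
```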
governance Gov Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Gov Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government) description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Gov Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 (Azure Government) description: Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Gov Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 (Azure Government) description: Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Gov Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r4.md
initiative definition.
|[Bot Service should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51522a96-0869-4791-82f3-981000c2c67f) |Azure Bot Service automatically encrypts your resource to protect your data and meet organizational security and compliance commitments. By default, Microsoft-managed encryption keys are used. For greater flexibility in managing keys or controlling access to your subscription, select customer-managed keys, also known as bring your own key (BYOK). Learn more about Azure Bot Service encryption: [https://docs.microsoft.com/azure/bot-service/bot-service-encryption](/azure/bot-service/bot-service-encryption). |disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Bot%20Service/BotService_CMKEnabled_Audit.json) | |[Both operating systems and data disks in Azure Kubernetes Service clusters should be encrypted by customer-managed keys](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7be79c-23ba-4033-84dd-45e2a5ccdd67) |Encrypting OS and data disks using customer-managed keys provides more control and greater flexibility in key management. This is a common requirement in many regulatory and industry compliance standards. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_CMK_Deny.json) | |[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](../../../cognitive-services/encryption/cognitive-services-encryption-keys-portal.md). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
-|[Container registries should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/container-registry-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
+|[Container registries should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/tutorial-enable-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
|[Event Hub namespaces should use a customer-managed key for encryption](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1ad735a-e96f-45d2-a7b2-9a4932cab7ec) |Azure Event Hubs supports the option of encrypting data at rest with either Microsoft-managed keys (default) or customer-managed keys. Choosing to encrypt data using customer-managed keys enables you to assign, rotate, disable, and revoke access to the keys that Event Hub will use to encrypt data in your namespace. Note that Event Hub only supports encryption with customer-managed keys for namespaces in dedicated clusters. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_CustomerManagedKeyEnabled_Audit.json) | |[IoT Hub device provisioning service data should be encrypted using customer-managed keys (CMK)](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47031206-ce96-41f8-861b-6a915f3de284) |Use customer-managed keys to manage the encryption at rest of your IoT Hub device provisioning service. The data is automatically encrypted at rest with service-managed keys, but customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. Learn more about CMK encryption at [https://aka.ms/dps/CMK](../../../iot-dps/iot-dps-customer-managed-keys.md). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTDps_CMKEncryptionEnabled_AuditDeny.json) | |[Logic Apps Integration Service Environment should be encrypted with customer-managed keys](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fafeaf6-7927-4059-a50a-8eb2a7a6f2b5) |Deploy into Integration Service Environment to manage encryption at rest of Logic Apps data using customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Logic%20Apps/LogicApps_ISEWithCustomerManagedKey_AuditDeny.json) |
governance Gov Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 (Azure Government) description: Details of the NIST SP 800-53 Rev. 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md
Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
This built-in initiative is deployed as part of the
|[Conduct risk assessment and document its results](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1dbd51c2-2bd1-5e26-75ba-ed075d8f0d68) |CMA_C1542 - Conduct risk assessment and document its results |Manual, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1542.json) | |[Establish a risk management strategy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd36700f2-2f0d-7c2a-059c-bdadd1d79f70) |CMA_0258 - Establish a risk management strategy |Manual, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0258.json) |
-### 10.01 Security Requirements of Information Systems
+### 03.01 Risk Management Program
-**ID**: 1780.10a1Organizational.1-10.a
+**ID**: 1737.03d2Organizational.5-03.d
**Ownership**: Shared |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
This built-in initiative is deployed as part of the
|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) | |[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) |
-### Restrictions on changes to software packages
+### Technical review of applications after operating platform changes
-**ID**: ISO 27001:2013 A.14.2.4
+**ID**: ISO 27001:2013 A.14.2.3
**Ownership**: Shared |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/new-zealand-ism.md
Title: Regulatory Compliance details for New Zealand ISM Restricted description: Details of the New Zealand ISM Restricted Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r4.md
initiative definition.
|[Bot Service should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51522a96-0869-4791-82f3-981000c2c67f) |Azure Bot Service automatically encrypts your resource to protect your data and meet organizational security and compliance commitments. By default, Microsoft-managed encryption keys are used. For greater flexibility in managing keys or controlling access to your subscription, select customer-managed keys, also known as bring your own key (BYOK). Learn more about Azure Bot Service encryption: [https://docs.microsoft.com/azure/bot-service/bot-service-encryption](/azure/bot-service/bot-service-encryption). |disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Bot%20Service/BotService_CMKEnabled_Audit.json) | |[Both operating systems and data disks in Azure Kubernetes Service clusters should be encrypted by customer-managed keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7be79c-23ba-4033-84dd-45e2a5ccdd67) |Encrypting OS and data disks using customer-managed keys provides more control and greater flexibility in key management. This is a common requirement in many regulatory and industry compliance standards. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_CMK_Deny.json) | |[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](../../../cognitive-services/encryption/cognitive-services-encryption-keys-portal.md). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
-|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/container-registry-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
+|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/tutorial-enable-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
|[Event Hub namespaces should use a customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1ad735a-e96f-45d2-a7b2-9a4932cab7ec) |Azure Event Hubs supports the option of encrypting data at rest with either Microsoft-managed keys (default) or customer-managed keys. Choosing to encrypt data using customer-managed keys enables you to assign, rotate, disable, and revoke access to the keys that Event Hub will use to encrypt data in your namespace. Note that Event Hub only supports encryption with customer-managed keys for namespaces in dedicated clusters. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_CustomerManagedKeyEnabled_Audit.json) | |[HPC Cache accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F970f84d8-71b6-4091-9979-ace7e3fb6dbb) |Manage encryption at rest of Azure HPC Cache with customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |Audit, Disabled, Deny |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageCache_CMKEnabled.json) | |[IoT Hub device provisioning service data should be encrypted using customer-managed keys (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47031206-ce96-41f8-861b-6a915f3de284) |Use customer-managed keys to manage the encryption at rest of your IoT Hub device provisioning service. The data is automatically encrypted at rest with service-managed keys, but customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. Learn more about CMK encryption at [https://aka.ms/dps/CMK](../../../iot-dps/iot-dps-customer-managed-keys.md). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTDps_CMKEncryptionEnabled_AuditDeny.json) |
governance Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 description: Details of the NIST SP 800-53 Rev. 5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Pci Dss 3 2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md
Title: Regulatory Compliance details for PCI DSS 3.2.1 description: Details of the PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Rbi_Itf_Nbfc_V2017 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi_itf_nbfc_v2017.md
Title: Regulatory Compliance details for Reserve Bank of India - IT Framework for NBFC description: Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Rmit Malaysia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rmit-malaysia.md
Title: Regulatory Compliance details for RMIT Malaysia description: Details of the RMIT Malaysia Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Swift Cscf V2021 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-cscf-v2021.md
initiative definition.
|[Audit VMs that do not use managed disks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F06a78e20-9358-41c9-923c-fb736d382a4d) |This policy audits VMs that do not use managed disks |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VMRequireManagedDisk_Audit.json) | |[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) | |[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
-|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/container-registry-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
+|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/tutorial-enable-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
|[Function App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) | |[Geo-redundant storage should be enabled for Storage Accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbf045164-79ba-4215-8f95-f8048dc1780b) |Use geo-redundancy to create highly available applications |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/GeoRedundant_StorageAccounts_Audit.json) | |[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
governance Ukofficial Uknhs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/ukofficial-uknhs.md
Title: Regulatory Compliance details for UK OFFICIAL and UK NHS description: Details of the UK OFFICIAL and UK NHS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/03/2022 Last updated : 10/10/2022
governance Query Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/query-language.md
Here is the list of KQL tabular operators supported by Resource Graph with speci
|[join](/azure/data-explorer/kusto/query/joinoperator) |[Key vault with subscription name](../samples/advanced.md#join) |Join flavors supported: [innerunique](/azure/data-explorer/kusto/query/joinoperator#default-join-flavor), [inner](/azure/data-explorer/kusto/query/joinoperator#inner-join), [leftouter](/azure/data-explorer/kusto/query/joinoperator#left-outer-join). Limit of 3 `join` in a single query, 1 of which may be a cross-table `join`. If all cross-table `join` use is between _Resource_ and _ResourceContainers_, then 3 cross-table `join` are allowed. Custom join strategies, such as broadcast join, aren't allowed. For which tables can use `join`, see [Resource Graph tables](#resource-graph-tables). | |[limit](/azure/data-explorer/kusto/query/limitoperator) |[List all public IP addresses](../samples/starter.md#list-publicip) |Synonym of `take`. Doesn't work with [Skip](./work-with-data.md#skipping-records). | |[mvexpand](/azure/data-explorer/kusto/query/mvexpandoperator) | | Legacy operator, use `mv-expand` instead. _RowLimit_ max of 400. The default is 128. |
-|[mv-expand](/azure/data-explorer/kusto/query/mvexpandoperator) |[List Cosmos DB with specific write locations](../samples/advanced.md#mvexpand-cosmosdb) |_RowLimit_ max of 400. The default is 128. Limit of 2 `mv-expand` in a single query.|
+|[mv-expand](/azure/data-explorer/kusto/query/mvexpandoperator) |[List Azure Cosmos DB with specific write locations](../samples/advanced.md#mvexpand-cosmosdb) |_RowLimit_ max of 400. The default is 128. Limit of 2 `mv-expand` in a single query.|
|[order](/azure/data-explorer/kusto/query/orderoperator) |[List resources sorted by name](../samples/starter.md#list-resources) |Synonym of `sort` | |[parse](/azure/data-explorer/kusto/query/parseoperator) |[Get virtual networks and subnets of network interfaces](../samples/advanced.md#parse-subnets) |It's optimal to access properties directly if they exist instead of using `parse`. | |[project](/azure/data-explorer/kusto/query/projectoperator) |[List resources sorted by name](../samples/starter.md#list-resources) | |
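As a hedged illustration of the operator limits described in the table above (not taken from the article), the sketch below combines `mv-expand` and one cross-table `join` in a single Resource Graph query, run through the Azure CLI `resource-graph` extension; the Cosmos DB property names follow the write-locations sample pattern and may need adjusting.

```bash
# Minimal sketch, assuming the Azure CLI 'resource-graph' extension is installed
# (az extension add --name resource-graph). Property names are assumptions.
az graph query -q "
Resources
| where type =~ 'microsoft.documentdb/databaseaccounts'
| mv-expand writeLocation = properties.writeLocations
| join kind=inner (
    ResourceContainers
    | where type =~ 'microsoft.resources/subscriptions'
    | project subscriptionId, subscriptionName = name) on subscriptionId
| project name, subscriptionName, writeRegion = tostring(writeLocation.locationName)
| limit 10"
```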
governance Work With Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/work-with-data.md
Title: Work with large data sets description: Understand how to get, format, page, and skip records in large data sets while working with Azure Resource Graph. Previously updated : 09/29/2021+ Last updated : 10/06/2022 + # Working with large Azure resource data sets
set returned would be the smaller value configured by **top** or **limit**.
**First** has a maximum allowed value of _1000_.
+## CSV export result size limitation
+
+When using the comma-separated value (CSV) export functionality of Azure Resource Graph Explorer, the
+result set is limited to 55,000 records. This is a platform limit; it can't be increased, even by filing an Azure support ticket.
+
+To download CSV results from the Azure portal, browse to the Azure Resource Graph Explorer and run a
+query. On the toolbar, click **Download as CSV**.
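As a scriptable alternative to the portal's **Download as CSV** button, a query's results can also be written to CSV from the command line. The following is a sketch under assumptions, not from the article: it presumes the Azure CLI `resource-graph` extension and `jq` are installed, and the `--first` value stays within the 1,000-record maximum noted above.

```bash
# Hedged sketch: run a Resource Graph query from the CLI and convert the JSON
# 'data' array to CSV with jq. Assumes 'az extension add --name resource-graph'
# and jq are installed; the output file name is a placeholder.
az graph query --first 1000 \
  -q "Resources | project name, type, location" \
  | jq -r '.data[] | [.name, .type, .location] | @csv' > resources.csv
```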
+ ## Skipping records The next option for working with large data sets is the **Skip** control. This control allows your
governance First Query Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-portal.md
Title: 'Quickstart: Your first portal query' description: In this quickstart, you follow the steps to run your first query from Azure portal using Azure Resource Graph Explorer. Previously updated : 08/17/2021++ Last updated : 10/12/2022
The schema browser is a great way to discover properties for use in queries. Be
_INSERT\_VALUE\_HERE_ with your own value, adjust the query with conditions, operators, and functions to achieve your intended results.
+## Download query results as a CSV file
+
+To download CSV results from the Azure portal, browse to the Azure Resource Graph Explorer and run a
+query. On the toolbar, click **Download as CSV** as shown in the following screenshot:
++
+> [!NOTE]
+> When using the comma-separated value (CSV) export functionality of Azure Resource Graph Explorer, the result set is limited to 55,000 records. This is a platform limit; it can't be increased, even by filing an Azure support ticket.
+ ## Create a chart from the Resource Graph query After running the previous query, if you select the **Charts** tab, you get a message that "the
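For the Charts tab described above, typically only aggregated result sets (for example, counts grouped by a column) can be rendered as a chart. The following is a rough sketch of such a query, an assumption based on the general pattern rather than the article's exact query; it's shown through the CLI, although in the portal you would paste only the query text.

```bash
# Hedged sketch: an aggregated query (count of resources per type) of the kind
# the portal's Charts tab can typically render. In Azure Resource Graph
# Explorer, paste only the query between the quotes.
az graph query -q "
Resources
| summarize resourceCount = count() by type
| order by resourceCount desc
| limit 10"
```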
governance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/advanced.md
We'll walk through the following advanced queries:
- [Remove columns from results](#remove-column) - [List all tag names](#list-all-tags) - [Virtual machines matched by regex](#vm-regex)-- [List Cosmos DB with specific write locations](#mvexpand-cosmosdb)
+- [List Azure Cosmos DB with specific write locations](#mvexpand-cosmosdb)
- [Key vaults with subscription name](#join) - [List SQL Databases and their elastic pools](#join-sql) - [List virtual machines with their network interface and public IP](#join-vmpip)
Search-AzGraph -Query "Resources | where type =~ 'microsoft.compute/virtualmachi
-## <a name="mvexpand-cosmosdb"></a>List Cosmos DB with specific write locations
+## <a name="mvexpand-cosmosdb"></a>List Azure Cosmos DB with specific write locations
The following query limits to Azure Cosmos DB resources, uses `mv-expand` to expand the property bag for **properties.writeLocations**, then project specific fields and limit the results further to
governance Create Share Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/tutorials/create-share-query.md
Title: "Tutorial: Manage queries in the Azure portal" description: In this tutorial, you create a Resource Graph Query and share the new query with others in the Azure portal. Previously updated : 06/15/2022-- + Last updated : 10/06/2022+ # Tutorial: Create and share an Azure Resource Graph query in the Azure portal
hdinsight Apache Kafka Spark Structured Streaming Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/apache-kafka-spark-structured-streaming-cosmosdb.md
Title: Apache Spark & Apache Kafka with Cosmos DB - Azure HDInsight
+ Title: Apache Spark and Apache Kafka with Azure Cosmos DB - Azure HDInsight
description: Learn how to use Apache Spark Structured Streaming to read data from Apache Kafka and then store it into Azure Cosmos DB. In this example, you stream data using a Jupyter Notebook from Spark on HDInsight. -+ Last updated 04/08/2022
Last updated 04/08/2022
Learn how to use [Apache Spark](https://spark.apache.org/) [Structured Streaming](https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html) to read data from [Apache Kafka](https://kafka.apache.org/) on Azure HDInsight, and then store the data into Azure Cosmos DB.
-[Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) is a globally distributed, multi-model database. This example uses a SQL API database model. For more information, see the [Welcome to Azure Cosmos DB](../cosmos-db/introduction.md) document.
+[Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) is a globally distributed, multi-model database. This example uses an Azure Cosmos DB for NoSQL database model. For more information, see the [Welcome to Azure Cosmos DB](../cosmos-db/introduction.md) document.
Spark structured streaming is a stream processing engine built on Spark SQL. It allows you to express streaming computations the same as batch computation on static data. For more information on Structured Streaming, see the [Structured Streaming Programming Guide](https://spark.apache.org/docs/2.2.0/structured-streaming-programming-guide.html) at Apache.org.
While you can create an Azure virtual network, Kafka, and Spark clusters manuall
* An Azure Virtual Network, which contains the HDInsight clusters. The virtual network created by the template uses the 10.0.0.0/16 address space.
- * An Azure Cosmos DB SQL API database.
+ * An Azure Cosmos DB for NoSQL database.
> [!IMPORTANT] > The structured streaming notebook used in this example requires Spark on HDInsight 4.0. If you use an earlier version of Spark on HDInsight, you receive errors when using the notebook.
While you can create an Azure virtual network, Kafka, and Spark clusters manuall
||| |Subscription|Select your Azure subscription.| |Resource group|Create a group or select an existing one. This group contains the HDInsight cluster.|
- |Cosmos DB Account Name|This value is used as the name for the Cosmos DB account. The name can only contain lowercase letters, numbers, and the hyphen (-) character. It must be between 3-31 characters in length.|
+ |Azure Cosmos DB Account Name|This value is used as the name for the Azure Cosmos DB account. The name can only contain lowercase letters, numbers, and the hyphen (-) character. It must be between 3-31 characters in length.|
|Base Cluster Name|This value is used as the base name for the Spark and Kafka clusters. For example, entering **myhdi** creates a Spark cluster named __spark-myhdi__ and a Kafka cluster named **kafka-myhdi**.| |Cluster Version|The HDInsight cluster version. This example is tested with HDInsight 4.0, and may not work with other cluster types.| |Cluster Login User Name|The admin user name for the Spark and Kafka clusters.|
While you can create an Azure virtual network, Kafka, and Spark clusters manuall
1. Read the **Terms and Conditions**, and then select **I agree to the terms and conditions stated above**.
-1. Finally, select **Purchase**. It may take up to 45 minutes to create the clusters, virtual network, and Cosmos DB account.
+1. Finally, select **Purchase**. It may take up to 45 minutes to create the clusters, virtual network, and Azure Cosmos DB account.
-## Create the Cosmos DB database and collection
+## Create the Azure Cosmos DB database and collection
-The project used in this document stores data in Cosmos DB. Before running the code, you must first create a _database_ and _collection_ in your Cosmos DB instance. You must also retrieve the document endpoint and the _key_ used to authenticate requests to Cosmos DB.
+The project used in this document stores data in Azure Cosmos DB. Before running the code, you must first create a _database_ and _collection_ in your Azure Cosmos DB instance. You must also retrieve the document endpoint and the _key_ used to authenticate requests to Azure Cosmos DB.
One way to do this is to use the [Azure CLI](/cli/azure/). The following script will create a database named `kafkadata` and a collection named `kafkacollection`. It then returns the primary key.
One way to do this is to use the [Azure CLI](/cli/azure/). The following script
# Replace 'myresourcegroup' with the name of your resource group resourceGroupName='myresourcegroup'
-# Replace 'mycosmosaccount' with the name of your Cosmos DB account name
+# Replace 'mycosmosaccount' with the name of your Azure Cosmos DB account
name='mycosmosaccount' # WARNING: If you change the databaseName or collectionName
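The remainder of the script isn't shown in this excerpt. Purely as a hedged sketch of how the same setup could look with current Azure CLI commands (not necessarily the article's exact script; the partition key path is an assumption):

```bash
# Hedged sketch, not the article's exact script. Creates the database and
# collection named above and prints the primary key. The partition key path
# '/id' is an assumption.
az cosmosdb sql database create \
  --account-name $name --resource-group $resourceGroupName --name kafkadata
az cosmosdb sql container create \
  --account-name $name --resource-group $resourceGroupName \
  --database-name kafkadata --name kafkacollection --partition-key-path "/id"
az cosmosdb keys list \
  --name $name --resource-group $resourceGroupName \
  --query primaryMasterKey --output tsv
```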
hdinsight Hdinsight Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/hdinsight-security-overview.md
Title: Overview of enterprise security in Azure HDInsight
description: Learn the various methods to ensure enterprise security in Azure HDInsight. -+ Last updated 04/14/2022 #Customer intent: As a user of Azure HDInsight, I want to learn the means that Azure HDInsight offers to ensure security for the enterprise.
The following table provides links to resources for each type of security soluti
| Data Access Security | Configure [access control lists ACLs](../../storage/blobs/data-lake-storage-access-control.md) for Azure Data Lake Storage Gen1 and Gen2 | Customer | | | Enable the ["Secure transfer required"](../../storage/common/storage-require-secure-transfer.md) property on storage accounts. | Customer | | | Configure [Azure Storage firewalls](../../storage/common/storage-network-security.md) and virtual networks | Customer |
-| | Configure [Azure virtual network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) for Cosmos DB and [Azure SQL DB](/azure/azure-sql/database/vnet-service-endpoint-rule-overview) | Customer |
+| | Configure [Azure virtual network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) for Azure Cosmos DB and [Azure SQL DB](/azure/azure-sql/database/vnet-service-endpoint-rule-overview) | Customer |
| | Ensure that the [Encryption in transit](./encryption-in-transit.md) feature is enabled to use TLS and IPSec for intra-cluster communication. | Customer | | | Configure [customer-managed keys](../../storage/common/customer-managed-keys-configure-key-vault.md) for Azure Storage encryption | Customer | | | Control access to your data by Azure support using [Customer lockbox](../../security/fundamentals/customer-lockbox-overview.md) | Customer |
hdinsight Apache Hadoop On Premises Migration Best Practices Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-infrastructure.md
Title: 'Infrastructure: On-premises Apache Hadoop to Azure HDInsight'
description: Learn infrastructure best practices for migrating on-premises Hadoop clusters to Azure HDInsight. -+ Last updated 06/29/2022
Applications or components that were available in on-premises clusters but aren'
|Arcadia|IaaS  |Atlas|None (Only HDP) |Datameer|HDInsight edge node
-|Datastax (Cassandra)|IaaS (CosmosDB an alternative on Azure)
+|Datastax (Cassandra)|IaaS (Azure Cosmos DB is an alternative on Azure)
|DataTorrent|IaaS  |Drill|IaaS  |Ignite|IaaS |Jethro|IaaS  |Mapador|IaaS 
-|Mongo|IaaS (CosmosDB an alternative on Azure)
+|MongoDB|IaaS (Azure Cosmos DB is an alternative on Azure)
|NiFi|IaaS  |Presto|IaaS or HDInsight edge node |Python 2|PaaS 
For more information, see the following articles:
## Securely connect to Azure services with Azure Virtual Network service endpoints
-HDInsight supports [virtual network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md), which allow you to securely connect to Azure Blob Storage, Azure Data Lake Storage Gen2, Cosmos DB, and SQL databases. By enabling a Service Endpoint for Azure HDInsight, traffic flows through a secured route from within the Azure data center. With this enhanced level of security at the networking layer, you can lock down big data storage accounts to their specified Virtual Networks (VNETs) and still use HDInsight clusters seamlessly to access and process that data.
+HDInsight supports [virtual network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md), which allow you to securely connect to Azure Blob Storage, Azure Data Lake Storage Gen2, Azure Cosmos DB, and SQL databases. By enabling a Service Endpoint for Azure HDInsight, traffic flows through a secured route from within the Azure data center. With this enhanced level of security at the networking layer, you can lock down big data storage accounts to their specified Virtual Networks (VNETs) and still use HDInsight clusters seamlessly to access and process that data.
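As a hedged, illustrative sketch only (the resource group, virtual network, and subnet names are placeholders, and this isn't from the article), enabling service endpoints on the cluster's subnet could look like this with the Azure CLI:

```bash
# Hedged sketch: enable service endpoints for Storage, SQL, and Azure Cosmos DB
# on the subnet that hosts the HDInsight cluster. All names are placeholders.
az network vnet subnet update \
  --resource-group myresourcegroup \
  --vnet-name myhdinsightvnet \
  --name default \
  --service-endpoints Microsoft.Storage Microsoft.Sql Microsoft.AzureCosmosDB
```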
For more information, see the following articles:
hdinsight Apache Hadoop On Premises Migration Motivation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-motivation.md
Title: 'Benefits: Migrate on-premises Apache Hadoop to Azure HDInsight' description: Learn the motivation and benefits for migrating on-premises Hadoop clusters to Azure HDInsight. + Last updated 04/28/2022
This section provides template questionnaires to help gather important informati
| Preferred Region|US East|| |VNet preferred?|Yes|| |HA / DR Needed?|Yes||
-|Integration with other cloud services?|ADF, CosmosDB||
+|Integration with other cloud services?|ADF, Azure Cosmos DB||
|**Topic**: **Data Movement** ||| |Initial load preference|DistCp, Data box, ADF, WANDisco|| |Data transfer delta|DistCp, AzCopy||
hdinsight Hbase Troubleshoot Rest Not Spending https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-rest-not-spending.md
Title: Apache HBase REST not responding to requests in Azure HDInsight
description: Resolve issue with Apache HBase REST not responding to requests in Azure HDInsight. Previously updated : 04/07/2022 Last updated : 10/10/2022 # Scenario: Apache HBase REST not responding to requests in Azure HDInsight
System.Net.Sockets.SocketException : A connection attempt failed because the con
Restart HBase REST using the following command after you SSH to the host. You can also use script actions to restart this service on all worker nodes: ```bash
-sudo service hdinsight-hbrest restart
+sudo /usr/hdp/current/hbase-master/bin/hbase-daemon.sh restart rest
```
-This command will stop HBase Region Server on the same host. You can either manually start HBase Region Server through Ambari, or let Ambari auto restart functionality recover HBase Region Server automatically.
- If the issue still persists, you can install the following mitigation script as a CRON job that runs every 5 minutes on every worker node. This mitigation script pings the REST service and restarts it in case the REST service does not respond. ```bash
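#!/bin/bash
# The mitigation script itself isn't shown in this excerpt. The lines below are
# a hedged sketch only, not the article's exact script: they probe the local
# HBase REST endpoint and restart the daemon if it doesn't respond. The REST
# port (8090) and the 10-second timeout are assumptions for illustration.
if ! curl --silent --fail --max-time 10 "http://localhost:8090/version/cluster" > /dev/null; then
    sudo /usr/hdp/current/hbase-master/bin/hbase-daemon.sh restart rest
fi
```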
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
The new updates and capabilities fall in to the following categories:
* ***Support for Azure Data Lake Storage Gen2*** ΓÇô HDInsight will support the Preview release of Azure Data Lake Storage Gen2. In the available regions, customers will be able to choose an ADLS Gen2 account as the Primary or Secondary store for their HDInsight clusters.
-* ***HDInsight Enterprise Security Package Updates (Preview)*** ΓÇô (Preview) [Virtual Network Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) support for Azure Blob Storage, ADLS Gen1, Cosmos DB, and Azure DB.
+* ***HDInsight Enterprise Security Package Updates (Preview)*** ΓÇô (Preview) [Virtual Network Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) support for Azure Blob Storage, ADLS Gen1, Azure Cosmos DB, and Azure DB.
### Component versions
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-17621*](https://issues.apache.org/jira/browse/HIVE-17621): Hive-site settings are ignored during HCatInputFormat split-calculation. -- [*HIVE-17629*](https://issues.apache.org/jira/browse/HIVE-17629): CachedStore: Have a approved/not-approved config to allow selective caching of tables/partitions and allow read while prewarming.
+- [*HIVE-17629*](https://issues.apache.org/jira/browse/HIVE-17629): CachedStore: Have an approved/not-approved config to allow selective caching of tables/partitions and allow read while prewarming.
- [*HIVE-17636*](https://issues.apache.org/jira/browse/HIVE-17636): Add multiple\_agg.q test for blobstores.
hdinsight Hortonworks Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hortonworks-release-notes.md
- Title: Hortonworks release notes associated with Azure HDInsight versions
-description: Learn the Apache Hadoop components and versions in Azure HDInsight.
--- Previously updated : 05/26/2022--
-# Hortonworks release notes associated with HDInsight versions
-
-The section provides links to release notes for the Hortonworks Data Platform distributions and Apache components that are used with HDInsight.
-
-## Current versions
-
-* HDInsight cluster version 4.0 uses a Hadoop distribution that is based on [Hortonworks Data Platform 3.0](https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/release-notes/content/relnotes.html).
-
-* HDInsight cluster version 3.6 uses a Hadoop distribution that is based on [Hortonworks Data Platform 2.6](https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_release-notes/content/ch_relnotes.html).
-
-## Older versions
-
-* HDInsight cluster version 3.5 uses a Hadoop distribution that is based on [Hortonworks Data Platform 2.5](https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_release-notes/content/ch_relnotes_v250.html). HDInsight cluster version 3.5 is the _default_ Hadoop cluster that is created in the Azure portal.
-
-* HDInsight cluster version 3.4 uses a Hadoop distribution that is based on [Hortonworks Data Platform 2.4](https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_HDP_RelNotes/content/ch_relnotes_v240.html).
-
-* HDInsight cluster version 3.3 uses a Hadoop distribution that is based on [Hortonworks Data Platform 2.3](https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_HDP_RelNotes/content/ch_relnotes_v230.html).
-
- * [Apache Storm release notes](https://storm.apache.org/2015/11/05/storm0100-released.html) are available on the Apache website.
- * [Apache Hive release notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12332384&styleName=Text&projectId=12310843) are available on the Apache website.
-
-* HDInsight cluster version 3.2 uses a Hadoop distribution that is based on [Hortonworks Data Platform 2.2][hdp-2-2].
-
- * Release notes for specific Apache components are available as follows: [Hive 0.14](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310843&version=12326450), [Pig 0.14](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310730&version=12326954), [HBase 0.98.4](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12326810), [Phoenix 4.2.0](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315120&version=12327581), [M/R 2.6](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310941&version=12327180), [HDFS 2.6](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310942&version=12327181), [YARN 2.6](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313722&version=12327197), [Common](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310240&version=12327179), [Tez 0.5.2](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12314426&version=12328742), [Ambari 2.0](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12312020&version=12327486), [Storm 0.9.3](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12314820&version=12327112), and [Oozie 4.1.0](https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12324960&projectId=12311620).
-
-* HDInsight cluster version 3.1 uses a Hadoop distribution that is based on [Hortonworks Data Platform 2.1.7][hdp-2-1-7]. HDInsight 3.1 clusters created before November, 7, 2014, are based on [Hortonworks Data Platform 2.1.1][hdp-2-1-1].
-
-* HDInsight cluster version 3.0 uses a Hadoop distribution that is based on [Hortonworks Data Platform 2.0][hdp-2-0-8].
-
-* HDInsight cluster version 2.1 uses a Hadoop distribution that is based on [Hortonworks Data Platform 1.3][hdp-1-3-0].
-
-* HDInsight cluster version 1.6 uses a Hadoop distribution that is based on [Hortonworks Data Platform 1.1][hdp-1-1-0].
-
-## Next steps
-
-* [Apache Hadoop components on HDInsight](./hdinsight-component-versioning.md)
-* [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md)
-
-[hdp-2-2]: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.9/bk_HDP_RelNotes/content/ch_relnotes_v229.html
-
-[hdp-2-1-7]: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7-Win/bk_releasenotes_HDP-Win/content/ch_relnotes-HDP-2.1.7.html
-
-[hdp-2-1-1]: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.1/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.1.html
-
-[hdp-2-0-8]: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.8.0/bk_releasenotes_hdp_2.0/content/ch_relnotes-hdp2.0.8.0.html
-
-[hdp-1-3-0]: https://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.3.0/bk_releasenotes_hdp_1.x/content/ch_relnotes-hdp1.3.0_1.html
-
-[hdp-1-1-0]: https://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.3.0/bk_releasenotes_hdp_1.x/content/ch_relnotes-hdp1.1.1.16_1.html
hdinsight Apache Spark Load Data Run Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-load-data-run-query.md
Title: 'Tutorial: Load data & run queries with Apache Spark - Azure HDInsight'
description: Tutorial - Learn how to load data and run interactive queries on Spark clusters in Azure HDInsight. -+ Last updated 06/08/2022
-# Customer intent: As a developer new to Apache Spark and to Apache Spark in Azure HDInsight, I want to learn how to load data into a Spark cluster, so I can run interactive SQL queries against the data.
+#Customer intent: As a developer new to Apache Spark and to Apache Spark in Azure HDInsight, I want to learn how to load data into a Spark cluster, so I can run interactive SQL queries against the data.
# Tutorial: Load data and run queries on an Apache Spark cluster in Azure HDInsight
Jupyter Notebook is an interactive notebook environment that supports various pr
## Create a dataframe from a csv file
-Applications can create dataframes directly from files or folders on the remote storage such as Azure Storage or Azure Data Lake Storage; from a Hive table; or from other data sources supported by Spark, such as Cosmos DB, Azure SQL DB, DW, and so on. The following screenshot shows a snapshot of the HVAC.csv file used in this tutorial. The csv file comes with all HDInsight Spark clusters. The data captures the temperature variations of some buildings.
+Applications can create dataframes directly from files or folders on the remote storage such as Azure Storage or Azure Data Lake Storage; from a Hive table; or from other data sources supported by Spark, such as Azure Cosmos DB, Azure SQL DB, DW, and so on. The following screenshot shows a snapshot of the HVAC.csv file used in this tutorial. The csv file comes with all HDInsight Spark clusters. The data captures the temperature variations of some buildings.
:::image type="content" source="./media/apache-spark-load-data-run-query/hdinsight-spark-sample-data-interactive-spark-sql-query.png " alt-text="Snapshot of data for interactive Spark SQL query" border="true":::
hdinsight Apache Spark Manage Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-manage-dependencies.md
description: This article provides an introduction of how to manage Spark depend
-+ Last updated 07/22/2022-
-# Customer intent: As a developer for Apache Spark and Apache Spark in Azure HDInsight, I want to learn how to manage my Spark application dependencies and install packages on my HDInsight cluster.
+#Customer intent: As a developer for Apache Spark and Apache Spark in Azure HDInsight, I want to learn how to manage my Spark application dependencies and install packages on my HDInsight cluster.
# Manage Spark application dependencies
After locating the package from Maven Repository, gather the values for **GroupI
:::image type="content" source="./media/apache-spark-manage-dependencies/spark-package-schema.png" alt-text="Concatenate package schema" border="true":::
-Make sure the values you gather match your cluster. In this case, we're using Spark Cosmos DB connector package for Scala 2.11 and Spark 2.3 for HDInsight 3.6 Spark cluster. If you are not sure, run `scala.util.Properties.versionString` in code cell on Spark kernel to get cluster Scala version. Run `sc.version` to get cluster Spark version.
+Make sure the values you gather match your cluster. In this case, we're using the Spark Azure Cosmos DB connector package for Scala 2.11 and Spark 2.3 on an HDInsight 3.6 Spark cluster. If you aren't sure, run `scala.util.Properties.versionString` in a code cell on the Spark kernel to get the cluster's Scala version. Run `sc.version` to get the cluster's Spark version.
```
%%configure
{ "conf": {"spark.jars.packages": "com.microsoft.azure:azure-cosmosdb-spark_2.3.0_2.11:1.3.3" }}
```
hdinsight Apache Spark Troubleshoot Job Fails Noclassdeffounderror https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-job-fails-noclassdeffounderror.md
Title: NoClassDefFoundError - Apache Spark with Apache Kafka data in Azure HDIns
description: Apache Spark streaming job that reads data from an Apache Kafka cluster fails with a NoClassDefFoundError in Azure HDInsight Previously updated : 07/29/2019 Last updated : 10/07/2022 # Apache Spark streaming job that reads Apache Kafka data fails with NoClassDefFoundError in HDInsight
Use the Spark-submit command with the `--packages` option, and ensure that the
## Next steps
hdinsight Transport Layer Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/transport-layer-security.md
description: Transport layer security (TLS) and secure sockets layer (SSL) are c
Previously updated : 04/21/2020 Last updated : 09/29/2022 # Transport layer security in Azure HDInsight
-Connections to the HDInsight cluster via the public cluster endpoint `https://CLUSTERNAME.azurehdinsight.net` are proxied through cluster gateway nodes. These connections are secured using a protocol called TLS. Enforcing higher versions of TLS on gateways improves the security for these connections. For more information on why you should use newer versions of TLS, see [Solving the TLS 1.0 Problem](/security/solving-tls1-problem).
-
-By default, Azure HDInsight clusters accept TLS 1.2 connections on public HTTPS endpoints, and older versions for backward compatibility. You can control the minimum TLS version supported on the gateway nodes during cluster creation using either the Azure portal, or a Resource Manager template. For the portal, select the TLS version from the **Security + networking** tab during cluster creation. For a Resource Manager template at deployment time, use the **minSupportedTlsVersion** property. For a sample template, see [HDInsight minimum TLS 1.2 Quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-minimum-tls/azuredeploy.json). This property supports three values: "1.0", "1.1" and "1.2", which correspond to TLS 1.0+, TLS 1.1+ and TLS 1.2+ respectively.
+Connections to the HDInsight cluster via the public cluster endpoint `https://CLUSTERNAME.azurehdinsight.net` are proxied through cluster gateway nodes. These connections are secured using a protocol called TLS. Enforcing higher versions of TLS on gateways improves the security for these connections.
+By default, Azure HDInsight clusters accept TLS 1.2 connections on public HTTPS endpoints. You can control the minimum TLS version supported on the gateway nodes during cluster creation using either the Azure portal or a Resource Manager template. For the portal, select the TLS version from the **Security + networking** tab during cluster creation. For a Resource Manager template at deployment time, use the **minSupportedTlsVersion** property. For a sample template, see [HDInsight minimum TLS 1.2 Quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-minimum-tls/azuredeploy.json). This property supports one value: "1.2", which corresponds to TLS 1.2+.
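To illustrate where the property goes, the following trimmed Resource Manager fragment is a sketch only: the API version, parameter references, and the omitted cluster properties are placeholders, so verify them against the linked Quickstart template before deploying.

```json
{
  "type": "Microsoft.HDInsight/clusters",
  "apiVersion": "2021-06-01",
  "name": "[parameters('clusterName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "clusterVersion": "4.0",
    "osType": "Linux",
    "minSupportedTlsVersion": "1.2"
  }
}
```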
## Next steps * [Plan a virtual network for Azure HDInsight](./hdinsight-plan-virtual-network-deployment.md) * [Create virtual networks for Azure HDInsight clusters](hdinsight-create-virtual-network.md).
-* [Network security groups](../virtual-network/network-security-groups-overview.md).
+* [Network security groups](../virtual-network/network-security-groups-overview.md).
healthcare-apis Autoscale Azure Api Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/autoscale-azure-api-fhir.md
description: This article describes the autoscale feature for Azure API for FHIR
+ Last updated 06/02/2022
You can adjust the max `RU/s` or `Tmax` value through the portal if it's a valid
## How to estimate throughput RU/s required?
-The data size is one of several factors used in calculating the total throughput RU/s required for manual scale and autoscale. You can find the data size using the Metrics menu option under **Monitoring**. Start a new chart and select "Cosmos DB Collection Size" in the Metric dropdown box and "Max" in the "Aggregation" box.
+The data size is one of several factors used in calculating the total throughput RU/s required for manual scale and autoscale. You can find the data size using the Metrics menu option under **Monitoring**. Start a new chart and select **Cosmos DB Collection Size** in the **Metric** dropdown box and **Max** in the **Aggregation** box.
[ ![Screenshot of metrics_new_chart](media/cosmosdb/metrics-new-chart.png) ](media/cosmosdb/metrics-new-chart.png#lightbox)
healthcare-apis Configure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-database.md
description: This article describes how to configure Database settings in Azure
+ Last updated 06/03/2022
Azure API for FHIR uses database to store its data. Performance of the underlying database depends on the number of Request Units (RU) selected during service provisioning or in database settings after the service has been provisioned.
-Azure API for FHIR borrows the concept of RUs from Cosmos DB (see [Request Units in Azure Cosmos DB](../../cosmos-db/request-units.md)) when setting the performance of underlying database.
+Azure API for FHIR borrows the concept of [Request Units (RUs) in Azure Cosmos DB](../../cosmos-db/request-units.md) when setting the performance of the underlying database.
Throughput must be provisioned to ensure that sufficient system resources are available for your database at all times. How many RUs you need for your application depends on operations you perform. Operations can range from simple read and writes to more complex queries.
If the database throughput is greater than 10,000 RU/s or if the data stored in
> [!NOTE] > Higher value means higher Azure API for FHIR throughput and higher cost of the service.
-![Config Cosmos DB](media/database/database-settings.png)
+![Configure Azure Cosmos DB](media/database/database-settings.png)
## Next steps
healthcare-apis Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/customer-managed-key.md
Title: Configure customer-managed keys for Azure API for FHIR
-description: Bring your own key feature supported in Azure API for FHIR through Cosmos DB
+description: Bring your own key feature supported in Azure API for FHIR via Azure Cosmos DB
Last updated 06/03/2022 -+ ms.devlang: azurecli
ms.devlang: azurecli
When you create a new Azure API for FHIR account, your data is encrypted using Microsoft-managed keys by default. Now, you can add a second layer of encryption for the data using your own key that you choose and manage yourself.
-In Azure, this is typically accomplished using an encryption key in the customer's Azure Key Vault. Azure SQL, Azure Storage, and Cosmos DB are some examples that provide this capability today. Azure API for FHIR leverages this support from Cosmos DB. When you create an account, you'll have the option to specify an Azure Key Vault key URI. This key will be passed on to Cosmos DB when the DB account is provisioned. When a Fast Healthcare Interoperability Resources (FHIR&#174;) request is made, Cosmos DB fetches your key and uses it to encrypt/decrypt the data.
+In Azure, this is typically accomplished using an encryption key in the customer's Azure Key Vault. Azure SQL, Azure Storage, and Azure Cosmos DB are some examples that provide this capability today. Azure API for FHIR leverages this support from Azure Cosmos DB. When you create an account, you'll have the option to specify an Azure Key Vault key URI. This key will be passed on to Azure Cosmos DB when the DB account is provisioned. When a Fast Healthcare Interoperability Resources (FHIR&#174;) request is made, Azure Cosmos DB fetches your key and uses it to encrypt/decrypt the data.
To get started, refer to the following links:
New-AzResourceGroupDeployment `
In this article, you learned how to configure customer-managed keys at rest using the Azure portal, PowerShell, CLI, and Resource Manager Template. You can refer to the Azure Cosmos DB FAQ section for more information. >[!div class="nextstepaction"]
->[Cosmos DB: how to setup CMK](../../cosmos-db/how-to-setup-cmk.md#frequently-asked-questions)
+>[Azure Cosmos DB: how to set up CMK](../../cosmos-db/how-to-setup-cmk.md#frequently-asked-questions)
FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
The Azure API For FHIR supports $export at the following levels:
* [Patient](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointall-patients): `GET https://<<FHIR service base URL>>/Patient/$export`
* [Group of patients*](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointgroup-of-patients) - Azure API for FHIR exports all related resources but doesn't export the characteristics of the group: `GET https://<<FHIR service base URL>>/Group/[ID]/$export`
-When data is exported, a separate file is created for each resource type. To ensure that the exported files don't become too large. We create a new file after the size of a single-exported file becomes larger than 64 MB. The result is that you may get multiple files for each resource type, which will be enumerated (that is, Patient-1.ndjson, Patient-2.ndjson).
-
+With export, data is exported in multiple files, each containing resources of only one type. No individual file will exceed 100,000 resource records. The result is that you may get multiple files for a resource type, which will be enumerated (for example, `Patient-1.ndjson`, `Patient-2.ndjson`).
> [!Note] > `Patient/$export` and `Group/[ID]/$export` may export duplicate resources if the resource is in a compartment of more than one resource, or is in multiple groups.
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-features-supported.md
+ Last updated 06/03/2022
Below is a summary of the supported RESTful capabilities. For more information o
| intermediaries | No | No |

> [!Note]
-> In the Azure API for FHIR and the open-source FHIR server backed by Cosmos, the chained search and reverse chained search is an MVP implementation. To accomplish chained search on Cosmos DB, the implementation walks down the search expression and issues sub-queries to resolve the matched resources. This is done for each level of the expression. If any query returns more than 1000 results, an error will be thrown.
+> In the Azure API for FHIR and the open-source FHIR server backed by Azure Cosmos DB, the chained search and reverse chained search is an MVP implementation. To accomplish chained search on Azure Cosmos DB, the implementation walks down the search expression and issues sub-queries to resolve the matched resources. This is done for each level of the expression. If any query returns more than 1000 results, an error will be thrown.
## Extended Operations
The Microsoft FHIR Server has a pluggable persistence module (see [`Microsoft.He
Currently the FHIR Server open-source code includes an implementation for [Azure Cosmos DB](../../cosmos-db/index-overview.md) and [SQL Database](https://azure.microsoft.com/services/sql-database/).
-Cosmos DB is a globally distributed multi-model (SQL API, MongoDB API, etc.) database. It supports different [consistency levels](../../cosmos-db/consistency-levels.md). The default deployment template configures the FHIR Server with `Strong` consistency, but the consistency policy can be modified (generally relaxed) on a request by request basis using the `x-ms-consistency-level` request header.
+Azure Cosmos DB is a globally distributed multi-model (NoSQL, MongoDB, and others) database. It supports different [consistency levels](../../cosmos-db/consistency-levels.md). The default deployment template configures the FHIR Server with `Strong` consistency, but the consistency policy can be modified (generally relaxed) on a request by request basis using the `x-ms-consistency-level` request header.
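For example, a request along the following lines would relax consistency for a single call. This is a sketch only: the server address is a placeholder, and `Eventual` is simply one of the Azure Cosmos DB consistency levels linked above.

```http
GET https://<your-fhir-server>/Patient?name=smith
x-ms-consistency-level: Eventual
```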
## Role-based access control
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/how-to-run-a-reindex.md
description: This article describes how to run a reindex job to index any search
+ Last updated 06/03/2022
Below is a table outlining the available parameters, defaults, and recommended r
| QueryDelayIntervalInMilliseconds | The delay between each batch of resources being kicked off during the reindex job. A smaller number will speed up the job while a higher number will slow it down. | 500 ms (.5 seconds) | 50-500000 |
| MaximumResourcesPerQuery | The maximum number of resources included in the batch of resources to be reindexed. | 100 | 1-5000 |
| MaximumConcurrency | The number of batches done at a time. | 1 | 1-10 |
-| targetDataStoreUsagePercentage | Allows you to specify what percent of your data store to use for the reindex job. For example, you could specify 50% and that would ensure that at most the reindex job would use 50% of available RUs on Cosmos DB. | Not present, which means that up to 100% can be used. | 0-100 |
+| targetDataStoreUsagePercentage | Allows you to specify what percent of your data store to use for the reindex job. For example, you could specify 50% and that would ensure that at most the reindex job would use 50% of available RUs on Azure Cosmos DB. | Not present, which means that up to 100% can be used. | 0-100 |
If you want to use any of the parameters above, you can pass them into the Parameters resource when you start the reindex job.
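For illustration, a request body along the following lines would pass two of these settings when starting the job. This is a sketch only; confirm the exact parameter names, casing, and value types against the reindex documentation for your API version.

```json
{
  "resourceType": "Parameters",
  "parameter": [
    {
      "name": "maximumConcurrency",
      "valueInteger": 3
    },
    {
      "name": "targetDataStoreUsagePercentage",
      "valueInteger": 50
    }
  ]
}
```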
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/overview-of-search.md
description: This article describes an overview of FHIR search that is implement
+ Last updated 06/03/2022
There are [common search parameters](https://www.hl7.org/fhir/search.html#all) t
| _type | Yes | Yes |
| _security | Yes | Yes |
| _profile | Yes | Yes |
-| _has | Partial | Yes | Support for _has is in MVP in the Azure API for FHIR and the OSS version backed by Cosmos DB. More details are included under the chaining section below. |
+| _has | Partial | Yes | Support for _has is in MVP in the Azure API for FHIR and the OSS version backed by Azure Cosmos DB. More details are included under the chaining section below. |
| _query | No | No |
| _filter | No | No |
| _list | No | No |
To help manage the returned resources, there are search result parameters that y
| - | -- | - | |
| _elements | Yes | Yes |
| _count | Yes | Yes | _count is limited to 1000 resources. If it's set higher than 1000, only 1000 will be returned and a warning will be returned in the bundle. |
-| _include | Yes | Yes | Included items are limited to 100. _include on PaaS and OSS on Cosmos DB don't include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). |
-| _revinclude | Yes | Yes |Included items are limited to 100. _revinclude on PaaS and OSS on Cosmos DB don't include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). There's also an incorrect status code for a bad request [#1319](https://github.com/microsoft/fhir-server/issues/1319) |
+| _include | Yes | Yes | Included items are limited to 100. _include on PaaS and OSS on Azure Cosmos DB don't include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). |
+| _revinclude | Yes | Yes |Included items are limited to 100. _revinclude on PaaS and OSS on Azure Cosmos DB don't include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). There's also an incorrect status code for a bad request [#1319](https://github.com/microsoft/fhir-server/issues/1319) |
| _summary | Yes | Yes |
| _total | Partial | Partial | _total=none and _total=accurate |
-| _sort | Partial | Partial | sort=_lastUpdated is supported on Azure API for FHIR and the FHIR service. For Azure API for FHIR and OSS Cosmos DB databases created after April 20, 2021, sort is supported on first name, last name, birthdate, and clinical date. |
+| _sort | Partial | Partial | sort=_lastUpdated is supported on Azure API for FHIR and the FHIR service. For Azure API for FHIR and OSS Azure Cosmos DB databases created after April 20, 2021, sort is supported on first name, last name, birthdate, and clinical date. |
| _contained | No | No |
| _containedType | No | No |
| _score | No | No |
A [chained search](https://www.hl7.org/fhir/search.html#chaining) allows you to
Similarly, you can do a reverse chained search. This allows you to get resources where you specify criteria on other resources that refer to them. For more examples of chained and reverse chained search, refer to the [FHIR search examples](search-samples.md) page. > [!NOTE]
-> In the Azure API for FHIR and the open source backed by Cosmos DB, there's a limitation where each subquery required for the chained and reverse chained searches will only return 1000 items. If there are more than 1000 items found, you'll receive the following error message: "Subqueries in a chained expression can't return more than 1000 results, please use a more selective criteria." To get a successful query, you'll need to be more specific in what you are looking for.
+> In the Azure API for FHIR and the open source backed by Azure Cosmos DB, there's a limitation where each subquery required for the chained and reverse chained searches will only return 1000 items. If there are more than 1000 items found, you'll receive the following error message: "Subqueries in a chained expression can't return more than 1000 results, please use a more selective criteria." To get a successful query, you'll need to be more specific in what you are looking for.
## Pagination
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
|Resolved retry 503 error |Related information |
| :-- | : |
-|Retry 503 error from Cosmos DB. |[#2106](https://github.com/microsoft/fhir-server/pull/2106)|
+|Retry 503 error from Azure Cosmos DB. |[#2106](https://github.com/microsoft/fhir-server/pull/2106)|
|Fixes processing 429s from StoreProcedures. |[#2165](https://github.com/microsoft/fhir-server/pull/2165)|

|GitHub issues closed |Related information |
healthcare-apis Search Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/search-samples.md
description: How to search using different search parameters, modifiers, and oth
+ Last updated 06/03/2022
GET [your-fhir-server]/Patient?general-practitioner:Practitioner.name=Sarah&gene
This would return all `Patient` resources that have "Sarah" as the `generalPractitioner` and have a `generalPractitioner` that has an address in the state WA. In other words, if a patient had Sarah from the state NY and Bill from the state WA both referenced as the patient's `generalPractitioner`, they would be returned.
-For scenarios in which the search has to be an AND operation that covers all conditions as a group, refer to the **composite search** example below.
+For scenarios in which the search has to be an `AND` operation that covers all conditions as a group, refer to the **composite search** example below.
## Reverse chain search
GET [base]/Patient?_has:Observation:patient:_has:AuditEvent:entity:agent:Practit
```

> [!NOTE]
-> In the Azure API for FHIR and the open-source FHIR server backed by Cosmos, the chained search and reverse chained search is an MVP implementation. To accomplish chained search on Cosmos DB, the implementation walks down the search expression and issues sub-queries to resolve the matched resources. This is done for each level of the expression. If any query returns more than 100 results, an error will be thrown.
+> In the Azure API for FHIR and the open-source FHIR server backed by Azure Cosmos DB, the chained search and reverse chained search is an MVP implementation. To accomplish chained search on Azure Cosmos DB, the implementation walks down the search expression and issues sub-queries to resolve the matched resources. This is done for each level of the expression. If any query returns more than 100 results, an error will be thrown.
## Composite search
In this article, you learned about how to search using different search paramete
>[!div class="nextstepaction"] >[Overview of FHIR Search](overview-of-search.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/validation-against-profiles.md
For example:
`POST https://myAzureAPIforFHIR.azurehealthcareapis.com/Patient/$validate`
-This request will create the new resource you're specifying in the request payload and validate the uploaded resource. Then, it will return an `OperationOutcome` as a result of the validation on the new resource.
+This request will first validate the resource. The new resource you're specifying in the request will be created after validation. The server will always return an `OperationOutcome` as the result.
## Validate on resource CREATE or resource UPDATE
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
Last updated 06/06/2022-+
-# Configure bulk-import settings (Preview)
+# Configure bulk-import settings
The FHIR service supports $import operation that allows you to import data into FHIR service account from a storage account.
To get the request URL and body, browse to the Azure portal of your FHIR service
[![Screenshot of Get JSON View](media/bulk-import/fhir-json-view.png)](media/bulk-import/fhir-json-view.png#lightbox)
+Select the API version **2022-06-01** or a later version.
Copy the URL as the request URL, and make the following changes to the JSON body (a sketch follows the list):
- Set **enabled** in **importConfiguration** to **true**.
- Add or change **integrationDataStore** to the target storage account name.
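As a rough sketch of what those two edits look like in the JSON body (only the fields mentioned above are shown; the surrounding structure and the storage account name are placeholders to verify against your own JSON view):

```json
{
  "properties": {
    "importConfiguration": {
      "enabled": true,
      "integrationDataStore": "<target-storage-account-name>"
    }
  }
}
```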
After you've completed this final step, you're ready to import data using $impor
You can also use the **Deploy to Azure** button below to open custom Resource Manager template that updates the configuration for $import.
- [![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fiotc-device-bridge%2Fmaster%2Fazuredeploy.json)
+ [![Deploy to Azure Button.](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Ffhir-import%2Fazuredeploy.json)
+ ## Next steps
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview-of-search.md
description: This article describes an overview of FHIR search that is implement
+ Last updated 08/18/2022
FHIR specifies a set of search result parameters to help manage the information
| - | -- | - | |
| `_elements` | Yes | Yes |
| `_count` | Yes | Yes | `_count` is limited to 1000 resources. If it's set higher than 1000, only 1000 will be returned and a warning will be included in the bundle. |
-| `_include` | Yes | Yes | Items retrieved with `_include` are limited to 100. `_include` on PaaS and OSS on Cosmos DB doesn't support `:iterate` [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). |
-| `_revinclude` | Yes | Yes |Items retrieved with `_revinclude` are limited to 100. `_revinclude` on PaaS and OSS on Cosmos DB doesn't support `:iterate` [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). There's also an incorrect status code for a bad request [#1319](https://github.com/microsoft/fhir-server/issues/1319). |
+| `_include` | Yes | Yes | Items retrieved with `_include` are limited to 100. `_include` on PaaS and OSS on Azure Cosmos DB doesn't support `:iterate` [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). |
+| `_revinclude` | Yes | Yes |Items retrieved with `_revinclude` are limited to 100. `_revinclude` on PaaS and OSS on Azure Cosmos DB doesn't support `:iterate` [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). There's also an incorrect status code for a bad request [#1319](https://github.com/microsoft/fhir-server/issues/1319). |
| `_summary` | Yes | Yes |
| `_total` | Partial | Partial | `_total=none` and `_total=accurate` |
-| `_sort` | Partial | Partial | `sort=_lastUpdated` is supported on the FHIR service. For the FHIR service and the OSS SQL DB FHIR servers, sorting by strings and dateTime fields are supported. For Azure API for FHIR and OSS Cosmos DB databases created after April 20, 2021, sort is supported on first name, last name, birthdate, and clinical date. |
+| `_sort` | Partial | Partial | `sort=_lastUpdated` is supported on the FHIR service. For the FHIR service and the OSS SQL DB FHIR servers, sorting by strings and dateTime fields are supported. For Azure API for FHIR and OSS Azure Cosmos DB databases created after April 20, 2021, sort is supported on first name, last name, birthdate, and clinical date. |
| `_contained` | No | No |
| `_containedType` | No | No |
| `_score` | No | No |
healthcare-apis Deploy 02 New Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-02-new-button.md
Title: Deploy the MedTech service with a QuickStart template - Azure Health Data Services description: In this article, you'll learn how to deploy the MedTech service in the Azure portal using a QuickStart template.-+ Previously updated : 09/30/2022- Last updated : 10/10/2022+
-# Deploy the MedTech service with an Azure Resource Manager QuickStart template
+# Deploy the MedTech service with an Azure Resource Manager Quickstart template
In this article, you'll learn how to deploy the MedTech service in the Azure portal using an Azure Resource Manager (ARM) Quickstart template. This template will be used with the **Deploy to Azure** button to make it easy to provide the information you need to automatically set up the infrastructure and configuration of your deployment. For more information about Azure ARM templates, see [What are ARM templates?](../../azure-resource-manager/templates/overview.md).
-If you need to see a diagram with information on the MedTech service deployment, there is an architecture overview at [Choose a deployment method](deploy-iot-connector-in-azure.md#deployment-architecture-overview). This diagram shows the data flow steps of deployment and how MedTech service processes data into a Fast Healthcare Interoperability Resource (FHIR) Observation.
+If you need to see a diagram with information on the MedTech service deployment, there is an architecture overview at [Choose a deployment method](deploy-iot-connector-in-azure.md#deployment-architecture-overview). This diagram shows the data flow steps of deployment and how MedTech service processes data into a Fast Healthcare Interoperability Resources (FHIR&#174;) Observation.
There are four simple tasks you need to complete in order to deploy MedTech service with the ARM template **Deploy to Azure** button. They are:
healthcare-apis Deploy 03 New Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-03-new-manual.md
Title: Overview of manually deploying the MedTech service using the Azure portal - Azure Health Data Services description: In this article, you'll see an overview of how to manually deploy the MedTech service in the Azure portal.-+ Previously updated : 09/30/2022- Last updated : 10/10/2022+ # How to manually deploy MedTech service using the Azure portal
The explanation of MedTech service manual deployment using the Azure portal is d
- Part 2: Configuration (see [Configure for manual deployment](./deploy-05-new-config.md)) - Part 3: Deployment and Post Deployment (see [Manual deployment and post-deployment](./deploy-06-new-deploy.md))
-If you need a diagram with information on the MedTech service deployment, there is an architecture overview at [Choose a deployment method](deploy-iot-connector-in-azure.md#deployment-architecture-overview). This diagram shows the data flow steps of deployment and how MedTech service processes data into a Fast Healthcare Interoperability Resource (FHIR) Observation.
+If you need a diagram with information on the MedTech service deployment, there is an architecture overview at [Choose a deployment method](deploy-iot-connector-in-azure.md#deployment-architecture-overview). This diagram shows the data flow steps of deployment and how MedTech service processes data into a Fast Healthcare Interoperability Resources (FHIR&#174;) Observation.
## Part 1: Prerequisites
healthcare-apis Deploy 05 New Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-05-new-config.md
Title: Configuring the MedTech service for deployment using the Azure portal - Azure Health Data Services description: In this article, you'll learn how to configure the MedTech service for manual deployment using the Azure portal.-+ Previously updated : 09/30/2022- Last updated : 10/10/2022+ # Part 2: Configure the MedTech service for manual deployment using the Azure portal
Follow these six steps to fill in the Basics tab configuration:
The Event Hubs Namespace is the name of the **Event Hubs Namespace** that you previously deployed. For this example, we'll use `eh-azuredocsdemo` with our MedTech service device messages. > [!TIP]
- >
> For information about deploying an Azure Event Hubs Namespace, see [Create an Event Hubs Namespace](../../event-hubs/event-hubs-create.md#create-an-event-hubs-namespace). > > For more information about Azure Event Hubs Namespaces, see [Namespace](../../event-hubs/event-hubs-features.md?WT.mc_id=Portal-Microsoft_Healthcare_APIs#namespace) in the Features and terminology in Azure Event Hubs document.
Follow these six steps to fill in the Basics tab configuration:
The Event Hubs name is the name of the event hub that you previously deployed within the Event Hubs Namespace. For this example, we'll use `devicedata` with our MedTech service device messages. > [!TIP]
- >
> For information about deploying an Azure event hub, see [Create an event hub](../../event-hubs/event-hubs-create.md#create-an-event-hub). 4. Enter the **Consumer group**.
Follow these six steps to fill in the Basics tab configuration:
6. By default, a consumer group named **$Default** is created during the deployment of an event hub. Use this consumer group for your MedTech service deployment. > [!IMPORTANT]
- >
> If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group. > > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). > > Examples:-
+ >
> - Two MedTech services accessing the same device message event hub.-
+ >
> - A MedTech service and a storage writer application accessing the same device message event hub. The Basics tab should now look like this after you have filled it out:
To begin configuring destination mapping, go to the Create MedTech service page
Under the **Destination** tab, use these values to enter the destination properties for your MedTech service instance: -- First, enter the name of your **FHIR server** using the following four steps:
+- First, enter the name of your **Fast Healthcare Interoperability Resources (FHIR&#174;) server** using the following four steps:
1. The **FHIR Server** name (also known as the **FHIR service**) can be located by using the **Search** bar at the top of the screen. 1. To connect to your FHIR service instance, enter the name of the FHIR service you used in the manual deploy configuration article at [Deploy the FHIR service](deploy-03-new-manual.md#deploy-the-fhir-service).
healthcare-apis Deploy 06 New Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-06-new-deploy.md
Title: Manual deployment and post-deployment of the MedTech service using the Azure portal - Azure Health Data Services description: In this article, you'll learn how to manually create a deployment and post-deployment of the MedTech service in the Azure portal.-+ Last updated 08/30/2022-+ # Part 3: Manual Deployment and Post-deployment of MedTech service
Your screen should look something like this:
## Manual Post-deployment requirements
-There are two post-deployment steps you must perform or the MedTech service can't read device data from the device message event hub, and it also can't read or write to the FHIR service. These steps are:
+There are two post-deployment steps you must perform or the MedTech service can't read device data from the device message event hub, and it also can't read or write to the Fast Healthcare Interoperability Resources (FHIR&#174;) service. These steps are:
1. Grant access to the device message event hub. 2. Grant access to the FHIR service.
healthcare-apis Deploy 08 New Ps Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-08-new-ps-cli.md
Title: Using Azure PowerShell and Azure CLI to deploy the MedTech service with Azure Resource Manager templates - Azure Health Data Services description: In this article, you'll learn how to use Azure PowerShell and Azure CLI to deploy the MedTech service using an Azure Resource Manager template.-+ Previously updated : 09/30/2022- Last updated : 10/10/2022+ # Using Azure PowerShell and Azure CLI to deploy the MedTech service with Azure Resource Manager templates
The ARM template will help you automatically configure and deploy the following
The ARM template used in this article is available from the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/iotconnectors/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json).
-If you need to see a diagram with information on the MedTech service deployment, there is an architecture overview at [Choose a deployment method](deploy-iot-connector-in-azure.md#deployment-architecture-overview). This diagram shows the data flow steps of deployment and how MedTech service processes data into a Fast Healthcare Interoperability Resource (FHIR) Observation.
+If you need to see a diagram with information on the MedTech service deployment, there is an architecture overview at [Choose a deployment method](deploy-iot-connector-in-azure.md#deployment-architecture-overview). This diagram shows the data flow steps of deployment and how MedTech service processes data into a FHIR Observation.
## Azure PowerShell prerequisites
az group delete --name <ResourceGroupName>
For example: `az group delete --name ArmTestDeployment` > [!TIP]
->
> For a step-by-step tutorial that guides you through the process of creating an ARM template, see [Tutorial: Create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md). ## Next steps
healthcare-apis Deploy Iot Connector In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md
Title: Choosing a method of deployment for MedTech service in Azure - Azure Health Data Services description: In this article, you'll learn how to choose a method to deploy the MedTech service in Azure.-+ Previously updated : 09/30/2022- Last updated : 10/10/2022+ # Choose a deployment method
The different deployment methods are:
- Azure PowerShell and Azure CLI automation - Manual deployment
-## Azure ARM QuickStart template with Deploy to Azure button
+## Azure ARM Quickstart template with Deploy to Azure button
-Using a Quickstart template with Azure portal is the easiest and fastest deployment method because it automates most of your configuration with the touch of a **Deploy to Azure** button. This button automatically generates the following configurations and resources: managed identity RBAC roles, a provisioned workspace and namespace, an Event Hubs instance, a FHIR service instance, and a MedTech service instance. All you need to add are post-deployment device mapping, destination mapping, and a shared access policy key. This method simplifies your deployment, but does not allow for much customization.
+Using a Quickstart template with Azure portal is the easiest and fastest deployment method because it automates most of your configuration with the touch of a **Deploy to Azure** button. This button automatically generates the following configurations and resources: managed identity RBAC roles, a provisioned workspace and namespace, an Event Hubs instance, a Fast Healthcare Interoperability Resources (FHIR&#174;) service instance, and a MedTech service instance. All you need to add are post-deployment device mapping, destination mapping, and a shared access policy key. This method simplifies your deployment, but does not allow for much customization.
For more information about the Quickstart template and the Deploy to Azure button, see [Deploy the MedTech service with a QuickStart template](deploy-02-new-button.md).
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Health Data Services FHIR service description: Lists Azure Policy Regulatory Compliance controls available. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
industrial-iot Tutorial Deploy Industrial Iot Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-deploy-industrial-iot-platform.md
description: In this tutorial, you learn how to deploy the IIoT Platform.
+ Last updated 3/22/2021
The Azure Industrial IoT Platform is a Microsoft suite of modules (OPC Publisher
The deployment script allows you to select which set of components to deploy. - Minimum dependencies: - [IoT Hub](https://azure.microsoft.com/services/iot-hub/) to communicate with the edge and ingress raw OPC UA telemetry data
- - [Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) to persist state that is not persisted in IoT Hub
+ - [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) to persist state that is not persisted in IoT Hub
- [Service Bus](https://azure.microsoft.com/services/service-bus/) as integration event bus - [Event Hubs](https://azure.microsoft.com/services/event-hubs/) contains processed and contextualized OPC UA telemetry data - [Key Vault](https://azure.microsoft.com/services/key-vault/) to manage secrets and certificates
industry Disaster Recovery For Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/disaster-recovery-for-farmbeats.md
Title: Disaster recovery for FarmBeats
description: This article describes how data recovery protects from losing your data. + Last updated 04/13/2020
The following sections provide information about how you can configure data reco
FarmBeats stores data in three Azure first party services, which are **Azure storage**, **Cosmos DB** and **Time Series Insights**. Use the following steps to enable data redundancy for these services to a paired Azure region: 1. **Azure Storage** - Follow this guideline to enable data redundancy for each storage account in your FarmBeats deployment.
-2. **Azure Cosmos DB** - Follow this guideline to enable data redundancy for Cosmos DB account your FarmBeats deployment.
+2. **Azure Cosmos DB** - Follow this guideline to enable data redundancy for the Azure Cosmos DB account used for your FarmBeats deployment.
3. **Azure Time Series Insights (TSI)** - TSI currently doesn't offer data redundancy. To recover Time Series Insights data, go to your sensor/weather partner and push the data again to FarmBeats deployment. ## Restore service from online backup
-You can initiate failover and recover data stored for which, each of the above-mentioned data stores for your FarmBeats deployment. Once you've recovered the data for Azure storage and Cosmos DB, create another FarmBeats deployment in the Azure paired region and then configure the new deployment to use data from restored data stores (i.e. Azure Storage and Cosmos DB) by using the below steps:
+You can initiate failover and recover the data stored in each of the above-mentioned data stores for your FarmBeats deployment. Once you've recovered the data for Azure Storage and Azure Cosmos DB, create another FarmBeats deployment in the Azure paired region, and then configure the new deployment to use data from the restored data stores (that is, Azure Storage and Azure Cosmos DB) by using the below steps:
-1. [Configure Cosmos DB](#configure-cosmos-db)
+1. [Configure Azure Cosmos DB](#configure-azure-cosmos-db)
2. [Configure Storage Account](#configure-storage-account)
-### Configure Cosmos DB
+### Configure Azure Cosmos DB
-Copy the access key of the restored Cosmos DB and update the new FarmBeats Datahub Key Vault.
+Copy the access key of the restored Azure Cosmos DB and update the new FarmBeats Datahub Key Vault.
![Screenshot that highlights where to get the copy of the access key.](./media/disaster-recovery-for-farmbeats/key-vault-secrets.png) > [!NOTE]
-> Copy the URL of restored Cosmos DB and update it in the new FarmBeats Datahub App Service Configuration. You can now delete Cosmos DB account in the new FarmBeats deployment.
+> Copy the URL of the restored Azure Cosmos DB account and update it in the new FarmBeats Datahub App Service configuration. You can now delete the Azure Cosmos DB account in the new FarmBeats deployment.
- ![Screenshot that shows where to copy the URL of restored Cosmos DB.](./media/disaster-recovery-for-farmbeats/configuration.png)
+ ![Screenshot that shows where to copy the URL of restored Azure Cosmos DB.](./media/disaster-recovery-for-farmbeats/configuration.png)
### Configure Storage Account
internet-peering Howto Peering Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-peering-service-portal.md
# Enable Azure Peering Service on a Direct peering by using the Azure portal
-This article describes how to enable [Azure Peering Service](/articles/peering-service/about.md) on a Direct peering by using the Azure portal.
+This article describes how to enable [Azure Peering Service](../peering-service/about.md) on a Direct peering by using the Azure portal.
If you prefer, you can complete this guide by using [PowerShell](howto-peering-service-powershell.md).
internet-peering Howto Peering Service Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-peering-service-powershell.md
# Enable Azure Peering Service on a Direct peering by using PowerShell
-This article describes how to enable [Azure Peering Service](/articles/peering-service/about.md) on a Direct peering by using PowerShell cmdlets and the Azure Resource Manager deployment model.
+This article describes how to enable [Azure Peering Service](../peering-service/about.md) on a Direct peering by using PowerShell cmdlets and the Azure Resource Manager deployment model.
If you prefer, you can complete this guide by using the Azure [portal](howto-peering-service-portal.md).
internet-peering Walkthrough Communications Services Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-communications-services-partner.md
Previously updated : 03/30/2021 Last updated : 10/10/2022
Below are the steps to activate the prefix.
**Q.** When will my BGP peer come up?
-**A.** After the LAG comes up our automated process configures BGP. Note, BFD must be configured on the non-MSFT peer to start route exchange.
+**A.** After the LAG comes up, our automated process configures BGP with BFD. The peer must also configure BGP with BFD. Note that BFD must be configured and up on the non-MSFT peer to start route exchange.
**Q.** When will peering IP addresses be allocated and displayed in the Azure portal?
Below are the steps to activate the prefix.
**Q.** Are there any AS path constraints?
-**A.** Yes, for registered prefixes smaller than /24, advertised AS path must be less than four. Path of four or longer will cause the advertisement to be rejected by policy.
+**A.** Yes, a private ASN can't be in the AS path. For registered prefixes smaller than /24, the AS path length must be less than four.
**Q.** I need to set the prefix limit. How many routes will Microsoft announce?
iot-central Howto Manage Dashboards With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-dashboards-with-rest-api.md
+
+ Title: Use the REST API to manage dashboards in Azure IoT Central
+description: How to use the IoT Central REST API to manage dashboards in an application
++ Last updated : 10/06/2022++++++
+# How to use the IoT Central REST API to manage dashboards
+
+The IoT Central REST API lets you develop client applications that integrate with IoT Central applications. You can use the REST API to manage dashboards in your IoT Central application.
+
+Every IoT Central REST API call requires an authorization header. To learn more, see [How to authenticate and authorize IoT Central REST API calls](howto-authorize-rest-api.md).
+
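+As a minimal sketch (the subdomain and token below are placeholders), a call that lists dashboards might look like the following; an IoT Central API token can be sent in the same header in place of an Azure AD bearer token:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/dashboards?api-version=2022-06-30-preview
+Authorization: Bearer <your-access-token>
+```
+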
+For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).
++
+To learn how to manage dashboards by using the IoT Central UI, see [How to manage dashboards.](../core/howto-manage-dashboards.md)
+
+## Dashboards
+
+You can create dashboards that are associated with a specific organization. An organization dashboard is only visible to users who have access to the organization the dashboard is associated with. Only users in a role that has [organization dashboard permissions](howto-manage-users-roles.md#customizing-the-app) can create, edit, and delete organization dashboards.
+
+All users can create *personal dashboards*, visible only to themselves. Users can switch between organization and personal dashboards.
+
+> [!NOTE]
+> Creating personal dashboards by using the API is currently not supported.
+
+## Dashboards REST API
+
+The IoT Central REST API lets you:
+
+* Add a dashboard to your application
+* Update a dashboard in your application
+* Get a list of the dashboards in the application
+* Get a dashboard by ID
+* Delete a dashboard in your application
+
+## Add a dashboard
+
+Use the following request to create a dashboard.
+
+```http
+PUT https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-06-30-preview
+```
+
+`dashboardId` - A unique [DTMI](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#digital-twin-model-identifier) identifier for the dashboard.
+
+The request body has some required fields:
+
+* `displayName`: Display name of the dashboard.
+* `favorite`: Whether the dashboard is in the favorites list.
+* `group`: Device group ID.
+* `tiles`: Configuration for each tile, including its layout, display name, and settings.
+
+Each tile has some required fields:
+
+| Name | Description |
+| - | -- |
+| `displayName` | Display name of the tile |
+| `height` | Height of the tile |
+| `width` | Width of the tile |
+| `x` | Horizontal position of the tile |
+| `y` | Vertical position of the tile |
+
+The dimensions and location of a tile both use integer units. The smallest possible tile has a height and width of one.
+
+You can configure a tile object to display multiple types of data. This article includes examples of tiles that show line charts, markdown, and last known value. To learn more about the different tile types you can add to a dashboard, see [Tile types](howto-manage-dashboards.md#tile-types).
+
+### Line chart tile
+
+Plot one or more aggregate telemetry values for one or more devices over a time period. For example, you can display a line chart to plot the average temperature and pressure of one or more devices during the past hour.
+
+The line chart tile has the following configuration:
+
+| Name | Description |
+|--|--|
+| `capabilities` | Specifies the aggregate value of the telemetry to display. |
+| `devices` | The list of devices to display. |
+| `format` | The format configuration of the chart such as the axes to use. |
+| `group` | The ID of the device group to display. |
+| `queryRange` | The time range and resolution to display.|
+| `type` | `lineChart` |
+
+### Markdown tile
+
+Clickable tiles that display a heading and description text formatted in Markdown. The URL can be a relative link to another page in the application or an absolute link to an external site.
+
+The markdown tile has the following configuration:
+
+| Name | Description |
+|--|--|
+| `description` | The markdown string to render inside the tile. |
+| `href` | The link to visit when the tile is selected. |
+| `image` | A base64 encoded image to display. |
+| `type` | `markdown` |
+
+### Last known value tile
+
+Display the latest telemetry values for one or more devices. For example, you can use this tile to display the most recent temperature, pressure, and humidity values for one or more devices.
+
+The last known value (LKV) tile has the following configuration:
+
+| Name | Description |
+|--|--|
+| `capabilities` | Specifies the telemetry to display. |
+| `devices` | The list of devices to display. |
+| `format` | The format configuration of the LKV tile, such as text size or word wrapping. |
+| `group` | The ID of the device group to display. |
+| `showTrend` | Show the difference between the last known value and the previous value. |
+| `type` | `lkv` |
+
+The following example shows a request body that adds a new dashboard with line chart, markdown, and last known value tiles. The LKV and line chart tiles are `2x2` tiles. The markdown tile is a `1x1` tile. The tiles are arranged on the top row of the dashboard:
+
+```json
+{
+ "displayName": "My Dashboard ",
+ "tiles": [
+ {
+ "displayName": "LKV Temperature",
+ "configuration": {
+ "type": "lkv",
+ "capabilities": [
+ {
+ "capability": "temperature",
+ "aggregateFunction": "avg"
+ }
+ ],
+ "group": "0fb6cf08-f03c-4987-93f6-72103e9f6100",
+ "devices": [
+ "3xksbkqm8r",
+ "1ak6jtz2m5q",
+ "h4ow04mv3d"
+ ],
+ "format": {
+ "abbreviateValue": false,
+ "wordWrap": false,
+ "textSize": 14
+ }
+ },
+ "x": 0,
+ "y": 0,
+ "width": 2,
+ "height": 2
+ },
+ {
+ "displayName": "Documentation",
+ "configuration": {
+ "type": "markdown",
+ "description": "Comprehensive help articles and links to more support.",
+ "href": "https://aka.ms/iotcentral-pnp-docs",
+ "image": "4d6c6373-0220-4191-be2e-d58ca2a289e1"
+ },
+ "x": 2,
+ "y": 0,
+ "width": 1,
+ "height": 1
+ },
+ {
+ "displayName": "Average temperature",
+ "configuration": {
+ "type": "lineChart",
+ "capabilities": [
+ {
+ "capability": "temperature",
+ "aggregateFunction": "avg"
+ }
+ ],
+ "devices": [
+ "3xksbkqm8r",
+ "1ak6jtz2m5q",
+ "h4ow04mv3d"
+ ],
+ "group": "0fb6cf08-f03c-4987-93f6-72103e9f6100",
+ "format": {
+ "xAxisEnabled": true,
+ "yAxisEnabled": true,
+ "legendEnabled": true
+ },
+ "queryRange": {
+ "type": "time",
+ "duration": "PT30M",
+ "resolution": "PT1M"
+ }
+ },
+ "x": 3,
+ "y": 0,
+ "width": 2,
+ "height": 2
+ }
+ ],
+ "favorite": false
+}
+```
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "dtmi:kkfvwa2xi:p7pyt5x38",
+ "displayName": "My Dashboard",
+ "personal": false,
+ "tiles": [
+ {
+ "displayName": "lineChart",
+ "configuration": {
+ "type": "lineChart",
+ "capabilities": [
+ {
+ "capability": "temperature",
+ "aggregateFunction": "avg"
+ }
+ ],
+ "devices": [
+ "1cfqhp3tue3",
+ "mcoi4i2qh3"
+ ],
+ "group": "da48c8fe-bac7-42bc-81c0-d8158551f066",
+ "format": {
+ "xAxisEnabled": true,
+ "yAxisEnabled": true,
+ "legendEnabled": true
+ },
+ "queryRange": {
+ "type": "time",
+ "duration": "PT30M",
+ "resolution": "PT1M"
+ }
+ },
+ "x": 5,
+ "y": 0,
+ "width": 2,
+ "height": 2
+ }
+ ],
+ "favorite": false
+}
+```
+
+## Get a dashboard
+
+Use the following request to retrieve the details of a dashboard by using a dashboard ID.
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-06-30-preview
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "dtmi:kkfvwa2xi:p7pyt5x38",
+ "displayName": "My Dashboard",
+ "personal": false,
+ "tiles": [
+ {
+ "displayName": "lineChart",
+ "configuration": {
+ "type": "lineChart",
+ "capabilities": [
+ {
+ "capability": "AvailableMemory",
+ "aggregateFunction": "avg"
+ }
+ ],
+ "devices": [
+ "1cfqhp3tue3",
+ "mcoi4i2qh3"
+ ],
+ "group": "da48c8fe-bac7-42bc-81c0-d8158551f066",
+ "format": {
+ "xAxisEnabled": true,
+ "yAxisEnabled": true,
+ "legendEnabled": true
+ },
+ "queryRange": {
+ "type": "time",
+ "duration": "PT30M",
+ "resolution": "PT1M"
+ }
+ },
+ "x": 5,
+ "y": 0,
+ "width": 2,
+ "height": 2
+ }
+ ],
+ "favorite": false
+}
+```
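+
+As a minimal illustration (not from the official samples), the following Python sketch sends the same GET request with the `requests` library and prints the dashboard's display name. The application subdomain, dashboard ID, and authorization token are placeholders that you supply, such as an IoT Central API token or an Azure AD bearer token:
+
+```python
+import requests
+
+# Placeholders: replace with your application subdomain, dashboard ID, and token.
+app_subdomain = "my-iot-central-app"
+dashboard_id = "dtmi:kkfvwa2xi:p7pyt5x38"
+token = "<IoT Central API token or 'Bearer <Azure AD token>'>"
+
+response = requests.get(
+    f"https://{app_subdomain}.azureiotcentral.com/api/dashboards/{dashboard_id}",
+    params={"api-version": "2022-06-30-preview"},
+    headers={"Authorization": token},
+)
+response.raise_for_status()
+
+dashboard = response.json()
+print(dashboard["displayName"])            # "My Dashboard"
+print(len(dashboard["tiles"]), "tile(s)")  # number of tiles on the dashboard
+```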
+
+## Update a dashboard
+
+Use the following request to update an existing dashboard:
+
+```http
+PATCH https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-06-30-preview
+```
+
+The following example shows a request body that updates the display name of the dashboard and the size of a tile:
+
+```json
+{
+ "displayName": "New Dashboard Name",
+ "tiles": [
+ {
+ "displayName": "lineChart",
+ "configuration": {
+ "type": "lineChart",
+ "capabilities": [
+ {
+ "capability": "AvailableMemory",
+ "aggregateFunction": "avg"
+ }
+ ],
+ "devices": [
+ "1cfqhp3tue3",
+ "mcoi4i2qh3"
+ ],
+ "group": "da48c8fe-bac7-42bc-81c0-d8158551f066",
+ "format": {
+ "xAxisEnabled": true,
+ "yAxisEnabled": true,
+ "legendEnabled": true
+ },
+ "queryRange": {
+ "type": "time",
+ "duration": "PT30M",
+ "resolution": "PT1M"
+ }
+ },
+ "x": 5,
+ "y": 0,
+ "width": 5,
+ "height": 5
+ }
+ ],
+ "favorite": false
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "dtmi:kkfvwa2xi:p7pyt5x38",
+ "displayName": "New Dashboard Name",
+ "personal": false,
+ "tiles": [
+ {
+ "displayName": "lineChart",
+ "configuration": {
+ "type": "lineChart",
+ "capabilities": [
+ {
+ "capability": "AvailableMemory",
+ "aggregateFunction": "avg"
+ }
+ ],
+ "devices": [
+ "1cfqhp3tue3",
+ "mcoi4i2qh3"
+ ],
+ "group": "da48c8fe-bac7-42bc-81c0-d8158551f066",
+ "format": {
+ "xAxisEnabled": true,
+ "yAxisEnabled": true,
+ "legendEnabled": true
+ },
+ "queryRange": {
+ "type": "time",
+ "duration": "PT30M",
+ "resolution": "PT1M"
+ }
+ },
+ "x": 5,
+ "y": 0,
+ "width": 5,
+ "height": 5
+ }
+ ],
+ "favorite": false
+}
+```
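+
+Purely as a hypothetical sketch, here's one way you might send the same PATCH request from Python, reusing the request body shown earlier; the subdomain, dashboard ID, and token are placeholders:
+
+```python
+import requests
+
+app_subdomain = "my-iot-central-app"   # placeholder
+dashboard_id = "dtmi:kkfvwa2xi:p7pyt5x38"
+token = "<IoT Central API token or 'Bearer <Azure AD token>'>"
+
+# Rename the dashboard and resize the line chart tile to 5x5, as in the example above.
+patch_body = {
+    "displayName": "New Dashboard Name",
+    "tiles": [
+        {
+            "displayName": "lineChart",
+            "configuration": {
+                "type": "lineChart",
+                "capabilities": [{"capability": "AvailableMemory", "aggregateFunction": "avg"}],
+                "devices": ["1cfqhp3tue3", "mcoi4i2qh3"],
+                "group": "da48c8fe-bac7-42bc-81c0-d8158551f066",
+                "format": {"xAxisEnabled": True, "yAxisEnabled": True, "legendEnabled": True},
+                "queryRange": {"type": "time", "duration": "PT30M", "resolution": "PT1M"},
+            },
+            "x": 5, "y": 0, "width": 5, "height": 5,
+        }
+    ],
+    "favorite": False,
+}
+
+response = requests.patch(
+    f"https://{app_subdomain}.azureiotcentral.com/api/dashboards/{dashboard_id}",
+    params={"api-version": "2022-06-30-preview"},
+    headers={"Authorization": token},
+    json=patch_body,
+)
+response.raise_for_status()
+print(response.json()["displayName"])  # "New Dashboard Name"
+```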
+
+## Delete a dashboard
+
+Use the following request to delete a dashboard by using the dashboard ID:
+
+```http
+DELETE https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-06-30-preview
+```
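+
+A minimal Python sketch of the same call, with the same placeholder subdomain, dashboard ID, and token as in the earlier examples, might look like the following:
+
+```python
+import requests
+
+app_subdomain = "my-iot-central-app"   # placeholder
+dashboard_id = "dtmi:kkfvwa2xi:p7pyt5x38"
+token = "<IoT Central API token or 'Bearer <Azure AD token>'>"
+
+response = requests.delete(
+    f"https://{app_subdomain}.azureiotcentral.com/api/dashboards/{dashboard_id}",
+    params={"api-version": "2022-06-30-preview"},
+    headers={"Authorization": token},
+)
+response.raise_for_status()  # raises if the deletion failed
+print("Deleted dashboard, status code:", response.status_code)
+```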
+
+## List dashboards
+
+Use the following request to retrieve a list of dashboards from your application:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/dashboards?api-version=2022-06-30-preview
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "value": [
+ {
+ "id": "dtmi:kkfvwa2xi:p7pyt5x3o",
+ "displayName": "Dashboard",
+ "personal": false,
+ "tiles": [
+ {
+ "displayName": "Device templates",
+ "configuration": {
+ "type": "markdown",
+ "description": "Get started by adding your first device.",
+ "href": "/device-templates/new/devicetemplates",
+ "image": "f5ba1b00-1d24-4781-869b-6f954df48736"
+ },
+ "x": 1,
+ "y": 0,
+ "width": 1,
+ "height": 1
+ },
+ {
+ "displayName": "Quick start demo",
+ "configuration": {
+ "type": "markdown",
+ "description": "Learn how to use Azure IoT Central in minutes.",
+ "href": "https://aka.ms/iotcentral-pnp-video",
+ "image": "9eb01d71-491a-44e5-8fac-7af3bc9f9acd"
+ },
+ "x": 2,
+ "y": 0,
+ "width": 1,
+ "height": 1
+ },
+ {
+ "displayName": "Tutorials",
+ "configuration": {
+ "type": "markdown",
+ "description": "Step-by-step articles teach you how to create apps and devices.",
+ "href": "https://aka.ms/iotcentral-pnp-tutorials",
+ "image": "7d9fc12c-d46e-41c6-885f-0a67c619366e"
+ },
+ "x": 3,
+ "y": 0,
+ "width": 1,
+ "height": 1
+ },
+ {
+ "displayName": "Documentation",
+ "configuration": {
+ "type": "markdown",
+ "description": "Comprehensive help articles and links to more support.",
+ "href": "https://aka.ms/iotcentral-pnp-docs",
+ "image": "4d6c6373-0220-4191-be2e-d58ca2a289e1"
+ },
+ "x": 4,
+ "y": 0,
+ "width": 1,
+ "height": 1
+ },
+ {
+ "displayName": "IoT Central Image",
+ "configuration": {
+ "type": "image",
+ "format": {
+ "backgroundColor": "#FFFFFF",
+ "fitImage": true,
+ "showTitle": false,
+ "textColor": "#FFFFFF",
+ "textSize": 0,
+ "textSizeUnit": "px"
+ },
+ "image": ""
+ },
+ "x": 0,
+ "y": 0,
+ "width": 1,
+ "height": 1
+ },
+ {
+ "displayName": "Contoso Image",
+ "configuration": {
+ "type": "image",
+ "format": {
+ "backgroundColor": "#FFFFFF",
+ "fitImage": true,
+ "showTitle": false,
+ "textColor": "#FFFFFF",
+ "textSize": 0,
+ "textSizeUnit": "px"
+ },
+ "image": "c9ac5af4-f38e-4cd3-886a-e0cb107f391c"
+ },
+ "x": 0,
+ "y": 1,
+ "width": 5,
+ "height": 3
+ },
+ {
+ "displayName": "Available Memory",
+ "configuration": {
+ "type": "lineChart",
+ "capabilities": [
+ {
+ "capability": "AvailableMemory",
+ "aggregateFunction": "avg"
+ }
+ ],
+ "devices": [
+ "1cfqhp3tue3",
+ "mcoi4i2qh3"
+ ],
+ "group": "da48c8fe-bac7-42bc-81c0-d8158551f066",
+ "format": {
+ "xAxisEnabled": true,
+ "yAxisEnabled": true,
+ "legendEnabled": true
+ },
+ "queryRange": {
+ "type": "time",
+ "duration": "PT30M",
+ "resolution": "PT1M"
+ }
+ },
+ "x": 5,
+ "y": 0,
+ "width": 2,
+ "height": 2
+ }
+ ],
+ "favorite": false
+ }
+ ]
+}
+```
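+
+As a rough sketch with the same placeholder subdomain and token as earlier, the following Python code sends the list request and prints each dashboard's ID and display name from the `value` array in the response:
+
+```python
+import requests
+
+app_subdomain = "my-iot-central-app"   # placeholder
+token = "<IoT Central API token or 'Bearer <Azure AD token>'>"
+
+response = requests.get(
+    f"https://{app_subdomain}.azureiotcentral.com/api/dashboards",
+    params={"api-version": "2022-06-30-preview"},
+    headers={"Authorization": token},
+)
+response.raise_for_status()
+
+for dashboard in response.json()["value"]:
+    print(dashboard["id"], "-", dashboard["displayName"])
+```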
+
+## Next steps
+
+Now that you've learned how to manage dashboards with the REST API, a suggested next step is to learn [how to manage file uploads with the REST API](howto-upload-file-rest-api.md).
iot-develop About Iot Develop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/about-iot-develop.md
description: Learn how to use Azure IoT to do embedded device development and bu
- Previously updated : 01/11/2021+ Last updated : 10/07/2022+ # What is Azure IoT device and application development?
iot-develop About Iot Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/about-iot-sdks.md
Title: Overview of Azure IoT device SDK options description: Learn which Azure IoT device SDK to use based on your development role and tasks.--++ Previously updated : 02/11/2021- Last updated : 10/07/2022+ # Overview of Azure IoT Device SDKs
iot-develop Concepts Iot Device Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-iot-device-types.md
Previously updated : 01/11/2021 Last updated : 10/07/2022+ # Overview of Azure IoT device types
Some important factors when choosing your hardware are cost, power consumption,
* **Power:** How much power a device consumes is important if the device will be utilizing batteries and not connected to the power grid. MCUs are often designed for lower power scenarios and can be a better choice for extending battery life.
-* **Network Access:** There are many ways to connect a device to a cloud service. Ethernet, Wi-fi and cellular and some of the available options. The connection type you choose will depend on where the device is deployed and how it is used. For example, cellular can be an attractive option given the high coverage, however for high traffic devices it can an expensive. Hardwired ethernet provides cheaper data costs but with the downside of being less portable.
+* **Network Access:** There are many ways to connect a device to a cloud service. Ethernet, Wi-Fi, and cellular are some of the available options. The connection type you choose will depend on where the device is deployed and how it's used. For example, cellular can be an attractive option given its high coverage; however, for high-traffic devices it can be expensive. Hardwired Ethernet provides cheaper data costs but with the downside of being less portable.
* **Input and Outputs:** The inputs and outputs available on the device directly affect the device's operating capabilities. A microcontroller will typically have many I/O functions built directly into the chip and provides a wide choice of sensors to connect directly.
IoT devices can be separated into two broad categories, microcontrollers (MCUs)
**MCUs** are less expensive and simpler to operate than MPUs. An MCU will contain many of the functions, such as memory, interfaces, and I/O within the chip itself. An MPU will draw this functionality from components in supporting chips. An MCU will often use a real-time OS (RTOS) or run bare-metal (No OS) and provide real-time response and highly deterministic reactions to external events.
-**MPUs** will generally run a general purpose OS, such as Windows, Linux, or MacOSX, that provide a non-deterministic real-time response. There is typically no guarantee to when a task will be completed.
+**MPUs** will generally run a general-purpose OS, such as Windows, Linux, or macOS, that provides a non-deterministic real-time response. There's typically no guarantee of when a task will be completed.
:::image type="content" border="false" source="media/concepts-iot-device-types/mcu-vs-mpu.png" alt-text="MCU vs MPU":::
iot-develop Concepts Overview Connection Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-overview-connection-options.md
Previously updated : 02/11/2021 Last updated : 10/07/2022+ # Overview: Connection options for Azure IoT device developers
-As a developer who works with devices, you have several options for connecting and managing Azure IoT devices. This article overviews the most commonly used options and tools to help you connect and manage devices.
+As a developer who works with devices, you have several options for connecting and managing Azure IoT devices. This article gives an overview of the most commonly used options and tools to help you connect and manage devices.
As you evaluate Azure IoT connection options for your devices, it's helpful to compare the following items: - Azure IoT device application platforms
As you evaluate Azure IoT connection options for your devices, it's helpful to c
## Application platforms: IoT Central and IoT Hub Azure IoT contains two services that are platforms for device-enabled cloud applications: Azure IoT Central, and Azure IoT Hub. You can use either one to host an IoT application and connect devices. - [IoT Central](../iot-central/core/overview-iot-central.md) is designed to reduce the complexity and cost of working with IoT solutions. It's a software-as-a-service (SaaS) application that provides a complete platform for hosting IoT applications. You can use the IoT Central web UI to streamline the entire lifecycle of creating and managing IoT applications. The web UI simplifies the tasks of creating applications, and connecting and managing from a few up to millions of devices. IoT Central uses IoT Hub to create and manage applications, but keeps the details transparent to the user. In general, IoT Central provides reduced complexity and development effort, and simplified device management through the web UI.-- [IoT Hub](../iot-hub/about-iot-hub.md) acts as a central message hub for bi-directional communication between IoT applications and connected devices. It's a platform-as-a-service (PaaS) application that also provides a platform for hosting IoT applications. Like IoT Central, it can scale to support millions of devices. In general, IoT Hub offers greater control and customization over your application design, and more developer tool options for working with the service, at the cost of some increase in development and management complexity.
+- [IoT Hub](../iot-hub/about-iot-hub.md) acts as a central message hub for bi-directional communication between IoT applications and connected devices. It's a platform-as-a-service (PaaS) application that also provides a platform for hosting IoT applications. Like IoT Central, it can scale to support millions of devices. In general, IoT Hub offers greater control and customization over your application design. It also offers more developer tool options for working with the service, at the cost of some increase in development and management complexity.
## Tools to connect and manage devices After you select IoT Hub or IoT Central to host your IoT application, you have several options of developer tools. You can use these tools to connect to your selected IoT application platform, and to create and manage applications and devices. The following table summarizes common tool options.
After you select IoT Hub or IoT Central to host your IoT application, you have s
||||| |Central web UI | Central | [Central quickstart](../iot-central/core/quick-deploy-iot-central.md) | Browser-based portal for IoT Central. | |Azure portal | Hub, Central | [Create an IoT hub with Azure portal](../iot-hub/iot-hub-create-through-portal.md), [Manage IoT Central from the Azure portal](../iot-central/core/howto-manage-iot-central-from-portal.md)| Browser-based portal for IoT Hub and devices. Also works with other Azure resources including IoT Central. |
-|Azure IoT Explorer | Hub | [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer#azure-iot-explorer-preview) | Cannot create IoT hubs. Connects to an existing IoT hub to manage devices. Often used with CLI or Portal.|
+|Azure IoT Explorer | Hub | [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer#azure-iot-explorer-preview) | Can't create IoT hubs. Connects to an existing IoT hub to manage devices. Often used with CLI or Portal.|
|Azure CLI | Hub, Central | [Create an IoT hub with CLI](../iot-hub/iot-hub-create-using-cli.md), [Manage IoT Central from Azure CLI](../iot-central/core/howto-manage-iot-central-from-cli.md) | Command-line interface for creating and managing IoT applications. | |Azure PowerShell | Hub, Central | [Create an IoT hub with PowerShell](../iot-hub/iot-hub-create-using-powershell.md), [Manage IoT Central from Azure PowerShell](../iot-central/core/howto-manage-iot-central-from-powershell.md) | PowerShell interface for creating and managing IoT applications | |Azure IoT Tools for VS Code | Hub | [Create an IoT hub with Tools for VS Code](../iot-hub/iot-hub-create-use-iot-toolkit.md) | VS Code extension for IoT Hub applications. |
iot-develop Troubleshoot Embedded Device Quickstarts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/troubleshoot-embedded-device-quickstarts.md
Previously updated : 06/10/2021- Last updated : 10/07/2022+ # Troubleshooting the Azure RTOS embedded device quickstarts
iot-edge How To Configure Api Proxy Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-api-proxy-module.md
Configure the following modules at the **top layer**:
| Name | Value | | - | -- |
- | `BLOB_UPLOAD_ROUTE_ADDRESS` | The blob storage module name and open port. For example, `azureblobstorageoniotedge:1102`. |
+ | `BLOB_UPLOAD_ROUTE_ADDRESS` | The blob storage module name and open port. For example, `azureblobstorageoniotedge:11002`. |
| `NGINX_DEFAULT_PORT` | The port that the nginx proxy listens on for requests from downstream devices. For example, `8000`. | * Configure the following createOptions:
iot-fundamentals Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/security-recommendations.md
Title: Security recommendations for Azure IoT | Microsoft Docs description: This article summarizes additional steps to ensure security in your Azure IoT Hub solution. - Last updated 08/24/2022 -+ # Security recommendations for Azure Internet of Things (IoT) deployment
Some of the recommendations included in this article can be automatically monito
| Recommendation | Comments | |-|-| | Define access control for the hub | [Understand and define the type of access](iot-security-deployment.md#securing-the-cloud) each component will have in your IoT Hub solution, based on the functionality. The allowed permissions are *Registry Read*, *RegistryReadWrite*, *ServiceConnect*, and *DeviceConnect*. Default [shared access policies in your IoT hub](../iot-hub/iot-hub-dev-guide-sas.md#access-control-and-permissions) can also help define the permissions for each component based on its role. |
-| Define access control for backend services | Data ingested by your IoT Hub solution can be consumed by other Azure services such as [Cosmos DB](../cosmos-db/index.yml), [Stream Analytics](../stream-analytics/index.yml), [App Service](../app-service/index.yml), [Logic Apps](../logic-apps/index.yml), and [Blob storage](../storage/blobs/storage-blobs-introduction.md). Make sure to understand and allow appropriate access permissions as documented for these services. |
+| Define access control for backend services | Data ingested by your IoT Hub solution can be consumed by other Azure services such as [Azure Cosmos DB](../cosmos-db/index.yml), [Stream Analytics](../stream-analytics/index.yml), [App Service](../app-service/index.yml), [Logic Apps](../logic-apps/index.yml), and [Blob storage](../storage/blobs/storage-blobs-introduction.md). Make sure to understand and allow appropriate access permissions as documented for these services. |
## Data protection
iot-hub Iot Hub Azure Service Health Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-azure-service-health-integration.md
# Check IoT Hub service and resource health
-Azure IoT Hub integrates with the [Azure Service Health service](../service-health/overview.md) to provide you with the ability to monitor the service-level health of the IoT Hub service and individual IoT hubs. You can also set up alerts to be notified when the status of the IoT Hub service or an IoT hub changes. The Azure Service Health service is a combination of three smaller
+Azure IoT Hub integrates with [Azure Service Health](../service-health/overview.md) to enable service-level health monitoring of the IoT Hub service and individual IoT hubs. You can also set up alerts to be notified when the status of the IoT Hub service or an IoT hub (instance) changes. Azure Service Health is a combination of three smaller
-Azure Service Health service helps you monitor service-level events like outages and upgrades that may affect the availability of the IoT Hub service and your individual IoT hubs. IoT Hub also integrates with Azure Monitor to provide IoT Hub platform metrics and IoT Hub resource logs that you can use to monitor operational errors and conditions that occur on a specific IoT hub. To learn more, see [Monitor IoT Hub](monitor-iot-hub.md).
+Azure Service Health helps you monitor service-level events like outages and upgrades that may affect the availability of the IoT Hub service and your individual IoT hubs. IoT Hub also integrates with Azure Monitor to provide IoT Hub platform metrics and IoT Hub resource logs that you can use to monitor operational errors and conditions that occur on a specific IoT hub. To learn more, see [Monitor IoT Hub](monitor-iot-hub.md).
-## Check health of an IoT hub with Azure Resource Health
+## Check IoT hub health with Azure Resource Health
-Azure Resource Health is a part of Azure Service Health service that tracks the health of individual resources. You can check the health status of your IoT hub directly from the portal.
+Azure Resource Health is part of Azure Service Health and tracks the health of individual resources. You can check the health status of your IoT hub directly from the portal.
To see status and status history of your IoT hub using the portal, follow these steps:
-1. In [Azure portal](https://portal.azure.com), go to your IoT hub in Azure portal.
+1. In the [Azure portal](https://portal.azure.com), go to your IoT hub.
1. On the left pane, under **Support + troubleshooting**, select **Resource Health**.
- :::image type="content" source="./media/iot-hub-azure-service-health-integration/iot-hub-resource-health.png" alt-text="Resource health page for an IoT hub":::
+ :::image type="content" source="media/iot-hub-azure-service-health-integration/iot-hub-resource-health.png" alt-text="Screenshot of the 'Resource health' page in an IoT hub" lightbox="media/iot-hub-azure-service-health-integration/iot-hub-resource-health.png":::
To learn more about Azure Resource Health and how to interpret health data, see [Resource Health overview](../service-health/resource-health-overview.md) in the Azure Service Health documentation.
-You can also select **Add resource health alert** to set up alerts to trigger when the health status of your IoT hub changes. To learn more, see [Configure alerts for service health events](../service-health/alerts-activity-log-service-notifications-portal.md) and related topics in the Azure Service Health documentation.
+You can also select **Add resource health alert** to configure alerts to trigger when the health status of your IoT hub changes. To learn more, see [Configure alerts for service health events](../service-health/alerts-activity-log-service-notifications-portal.md) and related topics in the Azure Service Health documentation.
-## Check health of IoT hubs in your subscription with Azure Service Health
+## Check all IoT hubs' health with Azure Service Health
-With Azure Service Health, you can check the health status of all of the IoT hubs in your subscription.
-
-To check the health of your IoT hubs, follow these steps:
+With Azure Service Health, you can check the health status of all IoT hubs in your subscription.
1. Sign in to the [Azure portal](https://portal.azure.com). 2. Navigate to **Service Health** > **Resource health**.
-3. From the drop-down boxes, select your subscription then select **IoT Hub** as the resource type.
+3. From the drop-down menus, select your subscription then **IoT Hub** as the resource type.
+
+ You see a list of all IoT hubs in your subscription.
To learn more about Azure Service Health and how to interpret health data, see [Service Health overview](../service-health/service-health-overview.md) in the Azure Service Health documentation.
To learn how to set up alerts with Azure Service Health, see [Configure alerts f
## Check health of the IoT Hub service by region on Azure status page
-To see the status of IoT Hub and other services by region world-wide, you can use the [Azure status page](https://azure.status.microsoft/status). For more information about the Azure status page, see [Azure status overview](../service-health/azure-status-overview.md) in the Azure Service Health documentation.
+To check the status of IoT Hub and other services by region worldwide, view the [Azure status page](https://azure.status.microsoft/status). For more information about the Azure status page, see [Azure status overview](../service-health/azure-status-overview.md) in the Azure Service Health documentation.
## Next steps
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-identity-registry.md
Title: Understand the Azure IoT Hub identity registry | Microsoft Docs
+ Title: Understand the Azure IoT Hub identity registry
description: Developer guide - description of the IoT Hub identity registry and how to use it to manage your devices. Includes information about the import and export of device identities in bulk.
Last updated 06/29/2021-+ # Understand the identity registry in your IoT hub
Device identities can also be exported and imported from an IoT Hub via the Serv
## Device provisioning
-The device data that a given IoT solution stores depends on the specific requirements of that solution. But, as a minimum, a solution must store device identities and authentication keys. Azure IoT Hub includes an identity registry that can store values for each device such as IDs, authentication keys, and status codes. A solution can use other Azure services such as table storage, blob storage, or Cosmos DB to store any additional device data.
+The device data that a given IoT solution stores depends on the specific requirements of that solution. But, as a minimum, a solution must store device identities and authentication keys. Azure IoT Hub includes an identity registry that can store values for each device such as IDs, authentication keys, and status codes. A solution can use other Azure services such as Table storage, Blob storage, or Azure Cosmos DB to store any additional device data.
*Device provisioning* is the process of adding the initial device data to the stores in your solution. To enable a new device to connect to your hub, you must add a device ID and keys to the IoT Hub identity registry. As part of the provisioning process, you might need to initialize device-specific data in other solution stores. You can also use the Azure IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning to one or more IoT hubs without requiring human intervention. To learn more, see the [provisioning service documentation](/azure/iot-dps).
The IoT Hub identity registry contains a field called **connectionState**. Only
If your IoT solution needs to know if a device is connected, you can implement the *heartbeat pattern*. In the heartbeat pattern, the device sends device-to-cloud messages at least once every fixed amount of time (for example, at least once every hour). Therefore, even if a device does not have any data to send, it still sends an empty device-to-cloud message (usually with a property that identifies it as a heartbeat). On the service side, the solution maintains a map with the last heartbeat received for each device. If the solution does not receive a heartbeat message within the expected time from the device, it assumes that there is a problem with the device.
-A more complex implementation could include the information from [Azure Monitor](../azure-monitor/index.yml) and [Azure Resource Health](../service-health/resource-health-overview.md) to identify devices that are trying to connect or communicate but failing. To learn more about using these services with IoT Hub, see [Monitor IoT Hub](monitor-iot-hub.md) and [Check IoT Hub resource health](iot-hub-azure-service-health-integration.md#check-health-of-an-iot-hub-with-azure-resource-health). For more specific information about using Azure Monitor or Event Grid to monitor device connectivity, see [Monitor, diagnose, and troubleshoot device connectivity](iot-hub-troubleshoot-connectivity.md). When you implement the heartbeat pattern, make sure to check [IoT Hub Quotas and Throttles](iot-hub-devguide-quotas-throttling.md).
+A more complex implementation could include the information from [Azure Monitor](../azure-monitor/index.yml) and [Azure Resource Health](../service-health/resource-health-overview.md) to identify devices that are trying to connect or communicate but failing. To learn more about using these services with IoT Hub, see [Monitor IoT Hub](monitor-iot-hub.md) and [Check IoT Hub resource health](iot-hub-azure-service-health-integration.md#check-iot-hub-health-with-azure-resource-health). For more specific information about using Azure Monitor or Event Grid to monitor device connectivity, see [Monitor, diagnose, and troubleshoot device connectivity](iot-hub-troubleshoot-connectivity.md). When you implement the heartbeat pattern, make sure to check [IoT Hub Quotas and Throttles](iot-hub-devguide-quotas-throttling.md).
> [!NOTE] > If an IoT solution uses the connection state solely to determine whether to send cloud-to-device messages, and messages are not broadcast to large sets of devices, consider using the simpler *short expiry time* pattern. This pattern achieves the same result as maintaining a device connection state registry using the heartbeat pattern, while being more efficient. If you request message acknowledgements, IoT Hub can notify you about which devices are able to receive messages and which are not.
Module identities are represented as JSON documents with the following propertie
| generationId |required, read-only |An IoT hub-generated, case-sensitive string up to 128 characters long. This value is used to distinguish devices with the same **deviceId**, when they have been deleted and re-created. | | etag |required, read-only |A string representing a weak ETag for the device identity, as per [RFC7232](https://tools.ietf.org/html/rfc7232). | | authentication |optional |A composite object containing authentication information and security materials. For more information, see [Authentication Mechanism](/rest/api/iothub/service/modules/get-identity#authenticationmechanism) in the REST API documentation. |
-| managedBy | optional | Identifies who manages this module. For instance, this value is "IotEdge" if the edge runtime owns this module. |
+| managedBy | optional | Identifies who manages this module. For instance, this value is "Iot Edge" if the edge runtime owns this module. |
| cloudToDeviceMessageCount | read-only | The number of cloud-to-module messages currently queued to be sent to the module. | | connectionState |read-only |A field indicating connection status: either **Connected** or **Disconnected**. This field represents the IoT Hub view of the device connection status. **Important**: This field should be used only for development/debugging purposes. The connection state is updated only for devices using MQTT or AMQP. Also, it is based on protocol-level pings (MQTT pings, or AMQP pings), and it can have a maximum delay of only 5 minutes. For these reasons, there can be false positives, such as devices reported as connected but that are disconnected. | | connectionStateUpdatedTime |read-only |A temporal indicator, showing the date and last time the connection state was updated. |
iot-hub Iot Hub Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-distributed-tracing.md
Title: Add correlation IDs to IoT messages w/distributed tracing (pre)
-description: Learn how to use the distributed tracing ability to trace IoT messages throughout the Azure services used by your solution.
+ Title: Add correlation IDs to IoT messages with distributed tracing (preview)
+description: Learn how to use distributed tracing to trace IoT messages throughout the Azure services that your solution uses.
-# Trace Azure IoT device-to-cloud messages with distributed tracing (preview)
+# Trace Azure IoT device-to-cloud messages by using distributed tracing (preview)
Microsoft Azure IoT Hub currently supports distributed tracing as a [preview feature](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-IoT Hub is one of the first Azure services to support distributed tracing. As more Azure services support distributed tracing, you'll be able trace IoT messages throughout the Azure services involved in your solution. For a background on distributed tracing, see [Distributed Tracing](../azure-monitor/app/distributed-tracing.md).
+IoT Hub is one of the first Azure services to support distributed tracing. As more Azure services support distributed tracing, you'll be able to trace Internet of Things (IoT) messages throughout the Azure services involved in your solution. For a background on the feature, see [What is distributed tracing?](../azure-monitor/app/distributed-tracing.md).
-Enabling distributed tracing for IoT Hub gives you the ability to:
+When you enable distributed tracing for IoT Hub, you can:
-- Precisely monitor the flow of each message through IoT Hub using [trace context](https://github.com/w3c/trace-context). This trace context includes correlation IDs that allow you to correlate events from one component with events from another component. It can be applied for a subset or all IoT device messages using [device twin](iot-hub-devguide-device-twins.md).
+- Precisely monitor the flow of each message through IoT Hub by using [trace context](https://github.com/w3c/trace-context). Trace context includes correlation IDs that allow you to correlate events from one component with events from another component. You can apply it for a subset or all IoT device messages by using a [device twin](iot-hub-devguide-device-twins.md).
- Automatically log the trace context to [Azure Monitor Logs](monitor-iot-hub.md). - Measure and understand message flow and latency from devices to IoT Hub and routing endpoints. - Start considering how you want to implement distributed tracing for the non-Azure services in your IoT solution.
In this article, you use the [Azure IoT device SDK for C](iot-hub-device-sdk-c-i
## Prerequisites -- The preview of distributed tracing is currently only supported for IoT Hubs created in the following regions:
+- The preview of distributed tracing is currently supported only for IoT hubs created in the following regions:
- - **North Europe**
- - **Southeast Asia**
- - **West US 2**
+ - North Europe
+ - Southeast Asia
+ - West US 2
-- This article assumes that you're familiar with sending telemetry messages to your IoT hub. Make sure you've completed the [Send telemetry C Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c).
+- This article assumes that you're familiar with sending telemetry messages to your IoT hub. Make sure you've completed the [quickstart for sending telemetry in C](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c).
-- Register a device with your IoT hub (steps available in the Quickstart) and save the connection string.
+- Register a device with your IoT hub and save the connection string. Registration steps are available in the quickstart.
- Install the latest version of [Git](https://git-scm.com/download/).
-## Configure IoT Hub
+## Configure an IoT hub
-In this section, you configure an IoT Hub to log distributed tracing attributes (correlation IDs and timestamps).
+In this section, you configure an IoT hub to log distributed tracing attributes (correlation IDs and time stamps).
-1. Navigate to your IoT hub in the [Azure portal](https://portal.azure.com/).
+1. Go to your IoT hub in the [Azure portal](https://portal.azure.com/).
-1. In the left pane for your IoT hub, scroll down to the **Monitoring** section and click **Diagnostics settings**.
+1. On the left pane for your IoT hub, scroll down to the **Monitoring** section and select **Diagnostics settings**.
-1. Click **Add diagnostic setting**.
+1. Select **Add diagnostic setting**.
-1. In the **Diagnostic setting name** field, enter a name for a new diagnostic setting. For example, **DistributedTracingSettings**.
+1. In the **Diagnostic setting name** box, enter a name for a new diagnostic setting. For example, enter **DistributedTracingSettings**.
- :::image type="content" source="media/iot-hub-distributed-tracing/diagnostic-setting-name.png" alt-text="Screenshot showing where to add a name for your diagnostic settings." lightbox="media/iot-hub-distributed-tracing/diagnostic-setting-name.png":::
+ :::image type="content" source="media/iot-hub-distributed-tracing/diagnostic-setting-name.png" alt-text="Screenshot that shows where to add a name for your diagnostic settings." lightbox="media/iot-hub-distributed-tracing/diagnostic-setting-name.png":::
1. Choose one or more of the following options under **Destination details** to determine where the logging will be sent: - **Archive to a storage account**: Configure a storage account to contain the logging information. - **Stream to an event hub**: Configure an event hub to contain the logging information.
- - **Send to Log Analytics**: Configure a log analytics workspace to contain the logging information.
+ - **Send to Log Analytics**: Configure a Log Analytics workspace to contain the logging information.
-1. In the **Logs** section, select the operations you want to log.
+1. In the **Logs** section, select the operations that you want to log.
- Include **Distributed Tracing** and configure a **Retention** for how many days you want the logging retained. Log retention affects storage costs.
+ Include **Distributed Tracing** and configure a **Retention** period for how many days you want the logging retained. Log retention affects storage costs.
- :::image type="content" source="media/iot-hub-distributed-tracing/select-distributed-tracing.png" alt-text="Screenshot showing where the Distributed Tracing operation is for IoT Hub diagnostic settings.":::
+ :::image type="content" source="media/iot-hub-distributed-tracing/select-distributed-tracing.png" alt-text="Screenshot that shows where the Distributed Tracing operation is for IoT Hub diagnostic settings.":::
1. Select **Save** for this new setting. 1. (Optional) To see the messages flow to different places, set up [routing rules to at least two different endpoints](iot-hub-devguide-messages-d2c.md).
-Once the logging is turned on, IoT Hub records a log when a message containing valid trace properties is encountered in any of the following situations:
+After the logging is turned on, IoT Hub records a log when a message that contains valid trace properties is encountered in any of the following situations:
-- The message arrives at IoT Hub's gateway.-- The message is processed by the IoT Hub.
+- The message arrives at the IoT hub's gateway.
+- The IoT hub processes the message.
- The message is routed to custom endpoints. Routing must be enabled. To learn more about these logs and their schemas, see [Monitor IoT Hub](monitor-iot-hub.md) and [Distributed tracing in IoT Hub resource logs](monitor-iot-hub-reference.md#distributed-tracing-preview).
-## Set up device
+## Set up a device
In this section, you prepare a development environment for use with the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). Then, you modify one of the samples to enable distributed tracing on your device's telemetry messages.
These instructions are for building the sample on Windows. For other environment
### Clone the source code and initialize
-1. Install ["Desktop development with C++" workload](/cpp/build/vscpp-step-0-installation?view=vs-2019&preserve-view=true) for Visual Studio 2022. Visual Studio 2019 is also supported.
+1. Install the [Desktop development with C++](/cpp/build/vscpp-step-0-installation?view=vs-2019&preserve-view=true) workload for Visual Studio 2022. Visual Studio 2019 is also supported.
-1. Install [CMake](https://cmake.org/). Ensure it's in your `PATH` by typing `cmake -version` from a command prompt.
+1. Install [CMake](https://cmake.org/). Ensure that it's in your `PATH` by entering `cmake -version` from a command prompt.
1. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository:
These instructions are for building the sample on Windows. For other environment
git submodule update --init ```
- Expect this operation to take several minutes to complete.
+ Expect this operation to take several minutes to finish.
-1. Run the following commands from the `azure-iot-sdk-c` directory to create a `cmake` subdirectory and navigate to the `cmake` folder.
+1. Run the following commands from the `azure-iot-sdk-c` directory to create a `cmake` subdirectory and go to the `cmake` folder:
```cmd mkdir cmake
These instructions are for building the sample on Windows. For other environment
cmake .. ```
- If `cmake` can't find your C++ compiler, you might get build errors while running the above command. If that happens, try running this command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
+ If CMake can't find your C++ compiler, you might get build errors while running the preceding command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
- Once the build succeeds, the last few output lines will look similar to the following output:
+ After the build succeeds, the last few output lines will look similar to the following output:
```cmd $ cmake ..
These instructions are for building the sample on Windows. For other environment
-- Build files have been written to: E:/IoT Testing/azure-iot-sdk-c/cmake ```
-### Edit the send telemetry sample to enable distributed tracing
+### Edit the telemetry sample to enable distributed tracing
> [!div class="button"] > <a href="https://github.com/Azure-Samples/azure-iot-distributed-tracing-sample/blob/master/iothub_ll_telemetry_sample-c/iothub_ll_telemetry_sample.c" target="_blank">Get the sample on GitHub</a>
These instructions are for building the sample on Windows. For other environment
:::code language="c" source="~/samples-iot-distributed-tracing/iothub_ll_telemetry_sample-c/iothub_ll_telemetry_sample.c" range="56-60" highlight="2":::
- Replace the value of the `connectionString` constant with the device connection string you saved in the [register a device](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c#register-a-device) section of the [Send telemetry C Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c).
+ Replace the value of the `connectionString` constant with the device connection string that you saved in the [Register a device](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c#register-a-device) section of the quickstart for sending telemetry.
-1. Find the line of code that calls `IoTHubDeviceClient_LL_SetConnectionStatusCallback` to register a connection status callback function before the send message loop. Add code under that line as shown below to call `IoTHubDeviceClient_LL_EnablePolicyConfiguration` enabling distributed tracing for the device:
+1. Find the line of code that calls `IoTHubDeviceClient_LL_SetConnectionStatusCallback` to register a connection status callback function before the send message loop. Add code under that line to call `IoTHubDeviceClient_LL_EnablePolicyConfiguration` and enable distributed tracing for the device:
:::code language="c" source="~/samples-iot-distributed-tracing/iothub_ll_telemetry_sample-c/iothub_ll_telemetry_sample.c" range="144-152" highlight="5":::
- The `IoTHubDeviceClient_LL_EnablePolicyConfiguration` function enables policies for specific IoTHub features that are configured via [device twins](./iot-hub-devguide-device-twins.md). Once `POLICY_CONFIGURATION_DISTRIBUTED_TRACING` is enabled with the line of code above, the tracing behavior of the device will reflect distributed tracing changes made on the device twin.
+ The `IoTHubDeviceClient_LL_EnablePolicyConfiguration` function enables policies for specific IoT Hub features that are configured via [device twins](./iot-hub-devguide-device-twins.md). After you enable `POLICY_CONFIGURATION_DISTRIBUTED_TRACING` by using the extra line of code, the tracing behavior of the device will reflect distributed tracing changes made on the device twin.
1. To keep the sample app running without using up all your quota, add a one-second delay at the end of the send message loop:
These instructions are for building the sample on Windows. For other environment
### Compile and run
-1. Navigate to the *iothub_ll_telemetry_sample* project directory from the CMake directory (`azure-iot-sdk-c/cmake`) you created earlier, and compile the sample:
+1. Go to the `iothub_ll_telemetry_sample` project directory from the CMake directory (`azure-iot-sdk-c/cmake`) that you created earlier, and compile the sample:
```cmd cd iothub_client/samples/iothub_ll_telemetry_sample cmake --build . --target iothub_ll_telemetry_sample --config Debug ```
-1. Run the application. The device sends telemetry supporting distributed tracing.
+1. Run the application. The device sends telemetry that supports distributed tracing.
```cmd Debug/iothub_ll_telemetry_sample.exe ```
-1. Keep the app running. Optionally observe the message being sent to IoT Hub by looking at the console window.
+1. Keep the app running. You can observe the message being sent to IoT Hub by looking at the console window.
<!-- For a client app that can receive sampling decisions from the cloud, check out [this sample](https://aka.ms/iottracingCsample). --> ### Workaround for third-party clients
-Implementing the distributed tracing feature without using the C SDK is more complex. Therefore, this approach isn't recommended.
+Implementing the distributed tracing feature without using the C SDK is more complex. We don't recommend it.
-First, you must implement all the IoT Hub protocol primitives in your messages by following the dev guide [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md). Then, edit the protocol properties in the MQTT/AMQP messages to add `tracestate` as **system property**.
+First, you must implement all the IoT Hub protocol primitives in your messages by following the developer guide [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md). Then, edit the protocol properties in the MQTT and AMQP messages to add `tracestate` as a system property.
Specifically:
-* For MQTT, add `%24.tracestate=timestamp%3d1539243209` to the message topic, where `1539243209` should be replaced with the creation time of the message in the unix timestamp format. As an example, refer to the implementation [in the C SDK](https://github.com/Azure/azure-iot-sdk-c/blob/6633c5b18710febf1af7713cf1a336fd38f623ed/iothub_client/src/iothubtransport_mqtt_common.c#L761)
+* For MQTT, add `%24.tracestate=timestamp%3d1539243209` to the message topic. Replace `1539243209` with the creation time of the message in Unix time-stamp format. As an example, refer to the implementation [in the C SDK](https://github.com/Azure/azure-iot-sdk-c/blob/6633c5b18710febf1af7713cf1a336fd38f623ed/iothub_client/src/iothubtransport_mqtt_common.c#L761).
* For AMQP, add `key("tracestate")` and `value("timestamp=1539243209")` as message annotation. For a reference implementation, see the [uamqp_messaging.c](https://github.com/Azure/azure-iot-sdk-c/blob/6633c5b18710febf1af7713cf1a336fd38f623ed/iothub_client/src/uamqp_messaging.c#L527) file.
-To control the percentage of messages containing this property, implement logic to listen to cloud-initiated events such as twin updates.
+To control the percentage of messages that contain this property, implement logic to listen to cloud-initiated events such as twin updates.
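+
+Purely as an illustration (not part of the SDKs), the following Python snippet shows one way to build such an MQTT topic string. It assumes the standard IoT Hub device-to-cloud topic format `devices/{deviceId}/messages/events/{propertyBag}`; the device ID is a placeholder:
+
+```python
+import time
+import urllib.parse
+
+device_id = "myDevice"  # placeholder device ID
+
+# Encode the tracestate system property ("$" becomes "%24", "=" in the value becomes "%3D")
+# and append it to the device-to-cloud topic as a property bag entry.
+tracestate_value = urllib.parse.quote(f"timestamp={int(time.time())}", safe="")
+topic = f"devices/{device_id}/messages/events/%24.tracestate={tracestate_value}"
+
+print(topic)  # for example: devices/myDevice/messages/events/%24.tracestate=timestamp%3D1539243209
+```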
## Update sampling options
-To change the percentage of messages to be traced from the cloud, you must update the device twin. Updates can be made, using the JSON editor in the Azure portal or the IoT Hub service SDK. The following subsections provide examples.
+To change the percentage of messages to be traced from the cloud, you must update the device twin. You can make updates by using the JSON editor in the Azure portal or the IoT Hub service SDK. The following subsections provide examples.
-### Update using the portal
+### Update by using the portal
-1. Navigate to your IoT hub in the [Azure portal](https://portal.azure.com/), then select **Devices** from the menu.
+1. Go to your IoT hub in the [Azure portal](https://portal.azure.com/), and then select **Devices** from the menu.
1. Choose your device.
-1. Look for the gear icon under **Distributed Tracing (preview)**, then select the icon. A new blade opens. Select the **Enable** option. Choose a **Sampling rate** between 0% and 100%, then select **Save**.
+1. Under **Distributed Tracing (preview)**, select the gear icon. In the panel that opens:
+
+ 1. Select the **Enable** option.
+ 1. For **Sampling rate**, choose a percentage between 0 and 100.
+ 1. Select **Save**.
- :::image type="content" source="media/iot-hub-distributed-tracing/enable-distributed-tracing.png" alt-text="Screenshot showing how to enable distributed tracing in the Azure portal." lightbox="media/iot-hub-distributed-tracing/enable-distributed-tracing.png":::
+ :::image type="content" source="media/iot-hub-distributed-tracing/enable-distributed-tracing.png" alt-text="Screenshot that shows how to enable distributed tracing in the Azure portal." lightbox="media/iot-hub-distributed-tracing/enable-distributed-tracing.png":::
-1. Wait a few seconds, and hit **Refresh**, then if successfully acknowledged by device, a sync icon with a checkmark appears.
+1. Wait a few seconds, and then select **Refresh**. If the device successfully acknowledges your changes, a sync icon with a check mark appears.
-1. Go back to the console window for the telemetry message app. You'll see messages being sent with `tracestate` in the application properties.
+1. Go back to the console window for the telemetry message app. Confirm that messages are being sent with `tracestate` in the application properties.
- :::image type="content" source="media/iot-hub-distributed-tracing/MicrosoftTeams-image.png" alt-text="Screenshot showing trace state messages." lightbox="media/iot-hub-distributed-tracing/MicrosoftTeams-image.png":::
+ :::image type="content" source="media/iot-hub-distributed-tracing/MicrosoftTeams-image.png" alt-text="Screenshot that shows trace state messages." lightbox="media/iot-hub-distributed-tracing/MicrosoftTeams-image.png":::
1. (Optional) Change the sampling rate to a different value, and observe the change in frequency that messages include `tracestate` in the application properties.
-### Update using Azure IoT Hub for VS Code
+### Update by using Azure IoT Hub for Visual Studio Code
-1. With VS Code installed, install the latest version of [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) for VS Code.
+1. With Visual Studio Code installed, install the latest version of [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) for Visual Studio Code.
-1. Open VS Code and go to the **Explorer** tab and **Azure IoT Hub** section. Select the `...` next to **Azure IoT Hub** to see a submenu. Choose the **Select IoT Hub** option to retrieve your IoT hub from Azure. A popup window appears at the top of VS Code where you can select your subscription and IoT hub.
+1. Open Visual Studio Code, and go to the **Explorer** tab and the **Azure IoT Hub** section.
- See a demonstration on the [**vscode-azure-iot-toolkit**](https://github.com/Microsoft/vscode-azure-iot-toolkit/wiki/Select-IoT-Hub) GitHub page.
+1. Select the ellipsis (...) next to **Azure IoT Hub** to see a submenu. Choose the **Select IoT Hub** option to retrieve your IoT hub from Azure.
-1. Expand your device under **Devices** and look for right-click **Distributed Tracing Setting (Preview)**. Select the option **Update Distributed Tracing Setting (Preview)**. A popup window will appear at the top where you can select **Enable**. You now see **Enable Distributed Tracing: Enabled** under **Desired** of your **Distributed Tracing Setting (Preview)**.
+ In the pop-up window that appears at the top of Visual Studio Code, you can select your subscription and IoT hub.
- :::image type="content" source="media/iot-hub-distributed-tracing/enable-distributed-tracing-vsc.png" alt-text="Screenshot showing how to enable distributed tracing in the Azure IoT Hub extension.":::
+ See a demonstration on the [vscode-azure-iot-toolkit](https://github.com/Microsoft/vscode-azure-iot-toolkit/wiki/Select-IoT-Hub) GitHub page.
-1. Next a popup will appear for **Sampling Rate**. Add **100**, then press ENTER. You now see **Sample rate: 100(%)** under the **Desired** section as well.
+1. Expand your device under **Devices**. Right-click **Distributed Tracing Setting (Preview)**, and then select **Update Distributed Tracing Setting (Preview)**.
- ![Update sampling rate](./media/iot-hub-distributed-tracing/update-distributed-tracing-setting-3.png)
+1. In the pop-up pane that appears at the top of the window, select **Enable**.
+
+ :::image type="content" source="media/iot-hub-distributed-tracing/enable-distributed-tracing-vsc.png" alt-text="Screenshot that shows how to enable distributed tracing in the Azure IoT Hub extension.":::
+
+ **Enable Distributed Tracing: Enabled** now appears under **Distributed Tracing Setting (Preview)** > **Desired**.
+
+1. In the pop-up pane that appears for the sampling rate, type **100**, and then select the Enter key.
+
+ ![Screenshot that shows entering a sampling rate](./media/iot-hub-distributed-tracing/update-distributed-tracing-setting-3.png)
+
+ **Sample rate: 100(%)** now also appears under **Distributed Tracing Setting (Preview)** > **Desired**.
### Bulk update for multiple devices
-To update the distributed tracing sampling configuration for multiple devices, use [automatic device configuration](./iot-hub-automatic-device-management.md). Make sure you follow this twin schema:
+To update the distributed tracing sampling configuration for multiple devices, use [automatic device configuration](./iot-hub-automatic-device-management.md). Follow this twin schema:
```json {
To update the distributed tracing sampling configuration for multiple devices, u
| Element name | Required | Type | Description | |--|-||--|
-| `sampling_mode` | Yes | Integer | Two mode values are currently supported to turn sampling on and off. `1` is On and, `2` is Off. |
+| `sampling_mode` | Yes | Integer | Two mode values are currently supported to turn sampling on and off. `1` is on, and `2` is off. |
| `sampling_rate` | Yes | Integer | This value is a percentage. Only values from `0` to `100` (inclusive) are permitted. | ## Query and visualize
-To see all the traces logged by an IoT Hub, query the log store that you selected in diagnostic settings. This section walks through a couple different options.
+To see all the traces that an IoT hub has logged, query the log store that you selected in diagnostic settings. This section shows how to query by using Log Analytics.
-### Query using Log Analytics
-
-If you've set up [Log Analytics with resource logs](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage), query by looking for logs in the `DistributedTracing` category. For example, this query shows all the traces logged:
+If you've set up [Log Analytics with resource logs](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage), query by looking for logs in the `DistributedTracing` category. For example, this query shows all the logged traces:
```Kusto // All distributed traces
AzureDiagnostics
| order by TimeGenerated asc ```
-Example logs as shown by Log Analytics:
+Here are a few example logs in Log Analytics:
-| TimeGenerated | OperationName | Category | Level | CorrelationId | DurationMs | Properties |
+| Time generated | Operation name | Category | Level | Correlation ID | Duration in milliseconds | Properties |
|--||--|||||
-| 2018-02-22T03:28:28.633Z | DiagnosticIoTHubD2C | DistributedTracing | Informational | 00-8cd869a412459a25f5b4f31311223344-0144d2590aacd909-01 | | {"deviceId":"AZ3166","messageSize":"96","callerLocalTimeUtc":"2018-02-22T03:27:28.633Z","calleeLocalTimeUtc":"2018-02-22T03:27:28.687Z"} |
-| 2018-02-22T03:28:38.633Z | DiagnosticIoTHubIngress | DistributedTracing | Informational | 00-8cd869a412459a25f5b4f31311223344-349810a9bbd28730-01 | 20 | {"isRoutingEnabled":"false","parentSpanId":"0144d2590aacd909"} |
-| 2018-02-22T03:28:48.633Z | DiagnosticIoTHubEgress | DistributedTracing | Informational | 00-8cd869a412459a25f5b4f31311223344-349810a9bbd28730-01 | 23 | {"endpointType":"EventHub","endpointName":"myEventHub", "parentSpanId":"0144d2590aacd909"} |
+| 2018-02-22T03:28:28.633Z | DiagnosticIoTHubD2C | DistributedTracing | Informational | 00-8cd869a412459a25f5b4f31311223344-0144d2590aacd909-01 | | `{"deviceId":"AZ3166","messageSize":"96","callerLocalTimeUtc":"2018-02-22T03:27:28.633Z","calleeLocalTimeUtc":"2018-02-22T03:27:28.687Z"}` |
+| 2018-02-22T03:28:38.633Z | DiagnosticIoTHubIngress | DistributedTracing | Informational | 00-8cd869a412459a25f5b4f31311223344-349810a9bbd28730-01 | 20 | `{"isRoutingEnabled":"false","parentSpanId":"0144d2590aacd909"}` |
+| 2018-02-22T03:28:48.633Z | DiagnosticIoTHubEgress | DistributedTracing | Informational | 00-8cd869a412459a25f5b4f31311223344-349810a9bbd28730-01 | 23 | `{"endpointType":"EventHub","endpointName":"myEventHub", "parentSpanId":"0144d2590aacd909"}` |
-To understand the different types of logs, see [Azure IoT Hub distributed tracing logs](monitor-iot-hub-reference.md#distributed-tracing-preview).
+To understand the types of logs, see [Azure IoT Hub distributed tracing logs](monitor-iot-hub-reference.md#distributed-tracing-preview).
## Understand Azure IoT distributed tracing
-### Context
-
-Many IoT solutions, including our own [reference architecture](/azure/architecture/reference-architectures/iot) (English only), generally follow a variant of the [microservice architecture](/azure/architecture/microservices/). As an IoT solution grows more complex, you end up using a dozen or more microservices. These microservices may or may not be from Azure. Pinpointing where IoT messages are dropping or slowing down can become challenging. For example, you have an IoT solution that uses 5 different Azure services and 1500 active devices. Each device sends 10 device-to-cloud messages/second (for a total of 15,000 messages/second), but you notice that your web app sees only 10,000 messages/second. Where is the issue? How do you find the culprit?
-
-### Distributed tracing pattern in microservice architecture
-
-To reconstruct the flow of an IoT message across different services, each service should propagate a *correlation ID* that uniquely identifies the message. Once collected by Azure Monitor in a centralized system, correlation IDs enable you to see message flow. This method is called the [distributed tracing pattern](/azure/architecture/microservices/logging-monitoring#distributed-tracing).
+Many IoT solutions, including the [Azure IoT reference architecture](/azure/architecture/reference-architectures/iot) (English only), generally follow a variant of the [microservice architecture](/azure/architecture/microservices/). As an IoT solution grows more complex, you end up using a dozen or more microservices. These microservices might or might not be from Azure.
-To support wider adoption for distributed tracing, Microsoft is contributing to [W3C standard proposal for distributed tracing](https://w3c.github.io/trace-context/).
+Pinpointing where IoT messages are dropping or slowing down can be challenging. For example, imagine that you have an IoT solution that uses 5 different Azure services and 1,500 active devices. Each device sends 10 device-to-cloud messages per second, for a total of 15,000 messages per second. But you notice that your web app sees only 10,000 messages per second. How do you find the culprit?
-### IoT Hub support
+For you to reconstruct the flow of an IoT message across services, each service should propagate a *correlation ID* that uniquely identifies the message. After Azure Monitor collects correlation IDs in a centralized system, you can use those IDs to see message flow. This method is called the [distributed tracing pattern](/azure/architecture/microservices/logging-monitoring#distributed-tracing).
-Once enabled, distributed tracing support for IoT Hub will follow this flow:
+To support wider adoption of distributed tracing, Microsoft is contributing to the [W3C standard proposal for distributed tracing](https://w3c.github.io/trace-context/). When distributed tracing support for IoT Hub is enabled, it will follow this flow:
1. A message is generated on the IoT device.
-1. The IoT device decides (with help from cloud) that this message should be assigned with a trace context.
-1. The SDK adds a `tracestate` to the message property, containing the message creation timestamp.
+1. The IoT device decides (with help from the cloud) that this message should be assigned a trace context.
+1. The SDK adds a `tracestate` value to the message property, which contains the time stamp for message creation.
1. The IoT device sends the message to IoT Hub.
-1. The message arrives at IoT hub gateway.
-1. IoT Hub looks for the `tracestate` in the message properties, and checks to see if it's in the correct format.
-1. If so, IoT Hub generates a globally unique `trace-id` for the message, a `span-id` for the "hop", and logs them to [IoT Hub distributed tracing logs](monitor-iot-hub-reference.md#distributed-tracing-preview) under the operation `DiagnosticIoTHubD2C`.
-1. Once the message processing is finished, IoT Hub generates another `span-id` and logs it along with the existing `trace-id` under the operation `DiagnosticIoTHubIngress`.
-1. If routing is enabled for the message, IoT Hub writes it to the custom endpoint, and logs another `span-id` with the same `trace-id` under the category `DiagnosticIoTHubEgress`.
-1. The steps above are repeated for each message generated.
+1. The message arrives at the IoT Hub gateway.
+1. IoT Hub looks for the `tracestate` value in the message properties and checks whether it's in the correct format. If so, IoT Hub generates a globally unique `trace-id` value for the message and a `span-id` value for the "hop." IoT Hub records these values in the [IoT Hub distributed tracing logs](monitor-iot-hub-reference.md#distributed-tracing-preview) under the `DiagnosticIoTHubD2C` operation.
+1. When the message processing is finished, IoT Hub generates another `span-id` value and logs it, along with the existing `trace-id` value, under the `DiagnosticIoTHubIngress` operation.
+1. If routing is enabled for the message, IoT Hub writes it to the custom endpoint. IoT Hub logs another `span-id` value with the same `trace-id` value under the `DiagnosticIoTHubEgress` category.
+1. IoT Hub repeats the preceding steps for each message that's generated.
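For illustration only, the correlation IDs shown in the example logs earlier appear to follow the W3C trace-context format (`<version>-<trace-id>-<span-id>-<flags>`). A small shell sketch like the following, using a sample value from the example table, splits one into its parts; this isn't an official tool:

```bash
# Hedged illustration: split a logged correlation ID into its
# W3C trace-context parts (version, trace-id, span-id, flags).
CORRELATION_ID="00-8cd869a412459a25f5b4f31311223344-0144d2590aacd909-01"
IFS='-' read -r VERSION TRACE_ID SPAN_ID FLAGS <<< "${CORRELATION_ID}"
echo "trace-id: ${TRACE_ID}"
echo "span-id:  ${SPAN_ID}"
```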
## Public preview limits and considerations

-- Proposal for W3C Trace Context standard is currently a working draft.
-- Currently, the only development language supported by client SDK is C.
-- Cloud-to-device twin capability isn't available for [IoT Hub basic tier](iot-hub-scaling.md#basic-and-standard-tiers). However, IoT Hub will still log to Azure Monitor if it sees a properly composed trace context header.
+- The proposal for the W3C Trace Context standard is currently a working draft.
+- The only development language that the client SDK currently supports is C.
+- Cloud-to-device twin capability isn't available for the [IoT Hub basic tier](iot-hub-scaling.md#basic-and-standard-tiers). However, IoT Hub will still log to Azure Monitor if it sees a properly composed trace context header.
- To ensure efficient operation, IoT Hub will impose a throttle on the rate of logging that can occur as part of distributed tracing.

## Next steps
Once enabled, distributed tracing support for IoT Hub will follow this flow:
- To learn more about the general distributed tracing pattern in microservices, see [Microservice architecture pattern: distributed tracing](https://microservices.io/patterns/observability/distributed-tracing.html).
- To set up configuration to apply distributed tracing settings to a large number of devices, see [Configure and monitor IoT devices at scale](./iot-hub-automatic-device-management.md).
- To learn more about Azure Monitor, see [What is Azure Monitor?](../azure-monitor/overview.md).
-- To learn more about using Azure Monitor with IoT Hub, see [Monitor IoT Hub](monitor-iot-hub.md)
+- To learn more about using Azure Monitor with IoT Hub, see [Monitor IoT Hub](monitor-iot-hub.md).
iot-hub Iot Hub How To Order Connection State Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-order-connection-state-events.md
Title: Order device connection events fr Azure IoT Hub w/Azure Cosmos DB
description: This article describes how to order and record device connection events from Azure IoT Hub using Azure Cosmos DB to maintain the latest connection state + Last updated 04/11/2019 - # Order device connection events from Azure IoT Hub using Azure Cosmos DB
-Azure Event Grid helps you build event-based applications and easily integrate IoT events in your business solutions. This article walks you through a setup which can be used to track and store the latest device connection state in Cosmos DB. We will use the sequence number available in the Device Connected and Device Disconnected events and store the latest state in Cosmos DB. We are going to use a stored procedure, which is an application logic that is executed against a collection in Cosmos DB.
+Azure Event Grid helps you build event-based applications and easily integrate IoT events in your business solutions. This article walks you through a setup that you can use to track and store the latest device connection state in Azure Cosmos DB. We will use the sequence number available in the Device Connected and Device Disconnected events and store the latest state in Azure Cosmos DB. We will use a stored procedure, which is application logic that runs against a collection in Azure Cosmos DB.
The sequence number is a string representation of a hexadecimal number. You can use string compare to identify the larger number. If you are converting the string to hex, then the number will be a 256-bit number. The sequence number is strictly increasing, and the latest event will have a higher number than other events. This is useful if you have frequent device connects and disconnects, and want to ensure only the latest event is used to trigger a downstream action, as Azure Event Grid doesn't support ordering of events.
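Purely to illustrate that comparison (this isn't the stored procedure itself, and the values below are hypothetical, padded to the 64 hexadecimal characters of a 256-bit number), a lexicographic string compare is enough to pick the newer event:

```bash
# Hedged illustration with hypothetical sequence numbers: because they're
# fixed-width hexadecimal strings, a lexicographic comparison finds the newer event.
LATEST="000000000000000000000000000000000000000000000000000000000000001d"
INCOMING="000000000000000000000000000000000000000000000000000000000000001e"
if [[ "${INCOMING}" > "${LATEST}" ]]; then
  echo "Incoming event is newer; record it as the latest connection state."
fi
```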
The sequence number is a string representation of a hexadecimal number. You can
* An active Azure account. If you don't have one, you can [create a free account](https://azure.microsoft.com/pricing/free-trial/).
-* An active Azure Cosmos DB SQL API account. If you haven't created one yet, see [Create a database account](../cosmos-db/create-sql-api-java.md#create-a-database-account) for a walkthrough.
+* An active Azure Cosmos DB for NoSQL account. If you haven't created one yet, see [Create a database account](../cosmos-db/create-sql-api-java.md#create-a-database-account) for a walkthrough.
* A collection in your database. See [Add a collection](../cosmos-db/create-sql-api-java.md#add-a-container) for a walkthrough. When you create your collection, use `/id` for the partition key.
The sequence number is a string representation of a hexadecimal number. You can
First, create a stored procedure and set it up to run a logic that compares sequence numbers of incoming events and records the latest event per device in the database.
-1. In your Cosmos DB SQL API, select **Data Explorer** > **Items** > **New Stored Procedure**.
+1. In your Azure Cosmos DB for NoSQL account, select **Data Explorer** > **Items** > **New Stored Procedure**.
![Create stored procedure](./media/iot-hub-how-to-order-connection-state-events/create-stored-procedure.png)
In your logic app workflow, conditions help run specific actions after passing t
![Add action if true](./media/iot-hub-how-to-order-connection-state-events/action-if-true.png)
-3. Search for Cosmos DB and select **Azure Cosmos DB - Execute stored procedure**
+3. Search for Azure Cosmos DB and select **Azure Cosmos DB - Execute stored procedure**
- ![Search for CosmosDB](./media/iot-hub-how-to-order-connection-state-events/cosmosDB-search.png)
+ ![Search for Azure Cosmos DB](./media/iot-hub-how-to-order-connection-state-events/cosmosDB-search.png)
4. Fill in **cosmosdb-connection** for the **Connection Name** and select the entry in the table, then select **Create**. You see the **Execute stored procedure** panel. Enter the values for the fields:
You see something similar to the following output that shows the sensor data and
You have now run a sample application to collect sensor data and send it to your IoT hub.
-### Observe events in Cosmos DB
+### Observe events in Azure Cosmos DB
-You can see results of the executed stored procedure in your Cosmos DB document. Here's what it looks like. Each row contains the latest device connection state per device.
+You can see results of the executed stored procedure in your Azure Cosmos DB document. Here's what it looks like. Each row contains the latest device connection state per device.
![How to outcome](./media/iot-hub-how-to-order-connection-state-events/cosmosDB-outcome.png)
To remove an Azure Cosmos DB account from the Azure portal, right-click the acco
* Learn about what else you can do with [Event Grid](../event-grid/overview.md)
-* Learn how to use Event Grid and Azure Monitor to [Monitor, diagnose, and troubleshoot device connectivity to IoT Hub](iot-hub-troubleshoot-connectivity.md)
+* Learn how to use Event Grid and Azure Monitor to [Monitor, diagnose, and troubleshoot device connectivity to IoT Hub](iot-hub-troubleshoot-connectivity.md)
iot-hub Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure IoT Hub description: Lists Azure Policy Regulatory Compliance controls available for Azure IoT Hub. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
iot-hub Troubleshoot Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/troubleshoot-error-codes.md
There can be a number of causes for a 500xxx error response. In all cases, the i
To mitigate 500xxx errors, issue a retry from the device. To [automatically manage retries](iot-hub-reliability-features-in-sdks.md#connection-and-retry), make sure you use the latest version of the [Azure IoT SDKs](iot-hub-devguide-sdks.md). For best practice on transient fault handling and retries, see [Transient fault handling](/azure/architecture/best-practices/transient-faults).
-If the problem persists, check [Resource Health](iot-hub-azure-service-health-integration.md#check-health-of-an-iot-hub-with-azure-resource-health) and [Azure Status](https://azure.status.microsoft/) to see if IoT Hub has a known problem. You can also use the [manual failover feature](tutorial-manual-failover.md).
+If the problem persists, check [Resource Health](iot-hub-azure-service-health-integration.md#check-iot-hub-health-with-azure-resource-health) and [Azure Status](https://azure.status.microsoft/) to see if IoT Hub has a known problem. You can also use the [manual failover feature](tutorial-manual-failover.md).
If there are no known problems and the issue continues, [contact support](https://azure.microsoft.com/support/options/) for further investigation.
key-vault About Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/about-certificates.md
Certificate contacts contain contact information to send notifications triggered
TLS certificates can help encrypt communications over the internet and establish the identity of websites, making the entry point and mode of communication secure. Additionally, a chained certificate signed by a public CA can help verify that the entities holding the certificates are whom they claim to be. As an example, the following are some excellent use cases of using certificates to secure communication and enable authentication:
* Intranet/Internet websites: protect access to your intranet site and ensure encrypted data transfer over the internet using TLS certificates.
* IoT and Networking devices: protect and secure your devices by using certificates for authentication and communication.
-* Cloud/Multi-Cloud: secure cloud-based applications on-prem, cross-cloud, or in your cloud provider's tenant.
+* Cloud/Multi-Cloud: secure cloud-based applications on-premises, cross-cloud, or in your cloud provider's tenant.
### Code signing

A certificate can help secure the code/script of software, thereby ensuring that the author can share the software over the internet without being changed by malicious entities. Furthermore, once the author signs the code using a certificate leveraging the code signing technology, the software is marked with a stamp of authentication displaying the author and their website. Therefore, the certificate used in code signing helps validate the software's authenticity, promoting end-to-end security.

## Next steps-
- [About Key Vault](../general/overview.md)
- [About keys, secrets, and certificates](../general/about-keys-secrets-certificates.md)
- [About keys](../keys/about-keys.md)
- [About secrets](../secrets/about-secrets.md)
+- [Key management in Azure](../../security/fundamentals/key-management.md)
- [Authentication, requests, and responses](../general/authentication-requests-and-responses.md)
- [Key Vault Developer's Guide](../general/developers-guide.md)
key-vault Overview Vnet Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-vnet-service-endpoints.md
Here's a list of trusted services that are allowed to access a key vault if the
| Azure Application Gateway |[Using Key Vault certificates for HTTPS-enabled listeners](../../application-gateway/key-vault-certs.md) |
| Azure Backup|Allow backup and restore of relevant keys and secrets during Azure Virtual Machines backup, by using [Azure Backup](../../backup/backup-overview.md).|
| Azure CDN | [Configure HTTPS on an Azure CDN custom domain: Grant Azure CDN access to your key vault](../../cdn/cdn-custom-ssl.md?tabs=option-2-enable-https-with-your-own-certificate#grant-azure-cdn-access-to-your-key-vault)|
-| Azure Container Registry|[Registry encryption using customer-managed keys](../../container-registry/container-registry-customer-managed-keys.md)
+| Azure Container Registry|[Registry encryption using customer-managed keys](../../container-registry/tutorial-enable-customer-managed-keys.md)
| Azure Data Factory|[Fetch data store credentials in Key Vault from Data Factory](https://go.microsoft.com/fwlink/?linkid=2109491)|
| Azure Data Lake Store|[Encryption of data in Azure Data Lake Store](../../data-lake-store/data-lake-store-encryption.md) with a customer-managed key.|
| Azure Database for MySQL | [Data encryption for Azure Database for MySQL](../../mysql/howto-data-encryption-cli.md) |
Here's a list of trusted services that are allowed to access a key vault if the
| Azure Storage|[Storage Service Encryption using customer-managed keys in Azure Key Vault](../../storage/common/customer-managed-keys-configure-key-vault.md).|
| Azure Synapse Analytics|[Encryption of data using customer-managed keys in Azure Key Vault](../../synapse-analytics/security/workspaces-encryption.md)|
| Azure Virtual Machines deployment service|[Deploy certificates to VMs from customer-managed Key Vault](/archive/blogs/kv/updated-deploy-certificates-to-vms-from-customer-managed-key-vault).|
-|Exchange Online, SharePoint Online, M365DataAtRestEncryption | Allow access to customer managed keys for Data-At-Rest Encryption with [Customer Key](/microsoft-365/compliance/customer-key-overview?view=o365-worldwide).|
+|Exchange Online, SharePoint Online, M365DataAtRestEncryption | Allow access to customer managed keys for Data-At-Rest Encryption with [Customer Key](/microsoft-365/compliance/customer-key-overview?view=o365-worldwide&preserve-view=true).|
| Microsoft Purview|[Using credentials for source authentication in Microsoft Purview](../../purview/manage-credentials.md)

> [!NOTE]
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview.md
# About Azure Key Vault
-Azure Key Vault helps solve the following problems:
+Azure Key Vault is one of several [key management solutions in Azure](../../security/fundamentals/key-management.md), and helps solve the following problems:
- **Secrets Management** - Azure Key Vault can be used to securely store and tightly control access to tokens, passwords, certificates, API keys, and other secrets
- **Key Management** - Azure Key Vault can be used as a Key Management solution. Azure Key Vault makes it easy to create and control the encryption keys used to encrypt your data.
As a secure store in Azure, Key Vault has been used to simplify scenarios like:
Key Vault itself can integrate with storage accounts, event hubs, and log analytics.

## Next steps-
+- [Key management in Azure](../../security/fundamentals/key-management.md)
- Learn more about [keys, secrets, and certificates](about-keys-secrets-certificates.md)
- [Quickstart: Create an Azure Key Vault using the CLI](../secrets/quick-create-cli.md)
- [Authentication, requests, and responses](../general/authentication-requests-and-responses.md)
key-vault About Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/about-keys.md
See [Key types, algorithms, and operations](about-keys-details.md) for details a
| Keyless TLS | - Use key [Client Libraries](../general/client-libraries.md#client-libraries-per-language-and-object) |

## Next steps
+- [Key management in Azure](../../security/fundamentals/key-management.md)
- [About Key Vault](../general/overview.md)
- [About Managed HSM](../managed-hsm/overview.md)
- [About secrets](../secrets/about-secrets.md)
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/overview.md
# What is Azure Key Vault Managed HSM?
-Azure Key Vault Managed HSM (Hardware Security Module) is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using **FIPS 140-2 Level 3** validated HSMs.
+Azure Key Vault Managed HSM (Hardware Security Module) is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using **FIPS 140-2 Level 3** validated HSMs. It is one of several [key management solutions in Azure](../../security/fundamentals/key-management.md).
For pricing information, please see Managed HSM Pools section on [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/). For supported key types, see [About keys](../keys/about-keys.md).
For pricing information, please see Managed HSM Pools section on [Azure Key Vaul
- Generate HSM-protected keys in your on-premises HSM and import them securely into Managed HSM.

## Next steps
+- [Key management in Azure](../../security/fundamentals/key-management.md)
- See [Quickstart: Provision and activate a managed HSM using Azure CLI](quick-create-cli.md) to create and activate a managed HSM
- See [Best Practices using Azure Key Vault Managed HSM](best-practices.md)
- [Managed HSM Status](https://azure.status.microsoft)
key-vault About Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/about-secrets.md
You can specify additional application-specific metadata in the form of tags. Ke
## Next steps
+- [Key management in Azure](../../security/fundamentals/key-management.md)
- [Best practices for secrets management in Key Vault](secrets-best-practices.md)
- [About Key Vault](../general/overview.md)
- [About keys, secrets, and certificates](../general/about-keys-secrets-certificates.md)
key-vault Tutorial Rotation Dual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/tutorial-rotation-dual.md
Last updated 06/22/2020 -+ # Automate the rotation of a secret for resources that have two sets of authentication credentials
Rotation functions template for two sets of credentials and several ready to use
- [Project template](https://serverlesslibrary.net/sample/bc72c6c3-bd8f-4b08-89fb-c5720c1f997f)
- [Redis Cache](https://serverlesslibrary.net/sample/0d42ac45-3db2-4383-86d7-3b92d09bc978)
- [Storage Account](https://serverlesslibrary.net/sample/0e4e6618-a96e-4026-9e3a-74b8412213a4)
-- [Cosmos DB](https://serverlesslibrary.net/sample/bcfaee79-4ced-4a5c-969b-0cc3997f47cc)
+- [Azure Cosmos DB](https://serverlesslibrary.net/sample/bcfaee79-4ced-4a5c-969b-0cc3997f47cc)
> [!NOTE]
> Above rotation functions are created by a member of the community and not by Microsoft. Community Azure Functions are not supported under any Microsoft support programme or service, and are made available AS IS without warranty of any kind.
key-vault Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Key Vault description: Lists Azure Policy Regulatory Compliance controls available for Azure Key Vault. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
kubernetes-fleet Architectural Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/architectural-overview.md
+
+ Title: "Azure Kubernetes Fleet Manager architectural overview"
+description: This article provides an architectural overview of Azure Kubernetes Fleet Manager
Last updated : 10/03/2022+++++++
+# Architectural overview of Azure Kubernetes Fleet Manager
+
+Azure Kubernetes Fleet Manager is meant to solve at-scale and multi-cluster problems for Azure Kubernetes Service (AKS) clusters. This document provides an architectural overview of the topological relationship between a Fleet resource and AKS clusters. It also provides a conceptual overview of scenarios available on top of the Fleet resource, like Kubernetes resource propagation and multi-cluster Layer 4 load balancing.
++
+## Relationship between Fleet and Azure Kubernetes Service clusters
+
+[ ![Diagram that shows relationship between Fleet and Azure Kubernetes Service clusters.](./media/conceptual-fleet-aks-relationship.png) ](./media/conceptual-fleet-aks-relationship.png#lightbox)
+
+Fleet supports joining the following types of existing AKS clusters as member clusters:
+
+* AKS clusters across the same or different resource groups within the same subscription
+* AKS clusters across different subscriptions of the same Azure AD tenant
+* AKS clusters from different regions but within the same tenant
+
+During preview, you can join up to 20 AKS clusters as member clusters to the same fleet resource.
+
+Once a cluster is joined to a fleet resource, a `MemberCluster` custom resource is created on the fleet.
+
+The member clusters can be viewed by running the following command:
+
+```bash
+kubectl get memberclusters
+```
+
+The complete specification of the `MemberCluster` custom resource can be viewed by running the following command:
+
+```bash
+kubectl get crd memberclusters.fleet.azure.com -o yaml
+```
+
+The following labels are added automatically to all member clusters, which can then be used for target cluster selection in resource propagation.
+
+* `fleet.azure.com/location`
+* `fleet.azure.com/resource-group`
+* `fleet.azure.com/subscription-id`
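To see these labels on your own member clusters, here's a quick sketch; it assumes the kubeconfig for the fleet cluster has been saved to a file named `fleet`, as in the related how-to articles:

```bash
# A hedged sketch: list member clusters together with the labels the fleet added.
KUBECONFIG=fleet kubectl get memberclusters --show-labels
```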
+
+## Kubernetes resource propagation
+
+Fleet provides `ClusterResourcePlacement` as a mechanism to control how cluster-scoped Kubernetes resources are propagated to member clusters.
+
+[ ![Diagram that shows how Kubernetes resource are propagated to member clusters.](./media/conceptual-resource-propagation.png) ](./media/conceptual-resource-propagation.png#lightbox)
+
+A `ClusterResourcePlacement` has two parts to it:
+
+* **Resource selection**: The `ClusterResourcePlacement` custom resource is used to select which cluster-scoped Kubernetes resource objects need to be propagated from the fleet cluster and to select which member clusters to propagate these objects to. It supports the following forms of resource selection:
+ * Select resources by specifying just the *<group, version, kind>*. This selection propagates all resources with matching *<group, version, kind>*.
+ * Select resources by specifying the *<group, version, kind>* and name. This selection propagates only one resource that matches the *<group, version, kind>* and name.
+ * Select resources by specifying the *<group, version, kind>* and a set of labels using `ClusterResourcePlacement` -> `LabelSelector`. This selection propagates all resources that match the *<group, version, kind>* and label specified.
+
+ > [!NOTE]
+ > `ClusterResourcePlacement` can be used to select and propagate namespaces, which are cluster-scoped resources. When a namespace is selected, all the namespace-scoped objects under this namespace are propagated to the selected member clusters along with this namespace.
+
+* **Target cluster selection**: The `ClusterResourcePlacement` custom resource can also be used to limit propagation of selected resources to a specific subset of member clusters. The following forms of target cluster selection are supported:
+
+ * Select all the clusters by specifying empty policy under `ClusterResourcePlacement`
+ * Select clusters by listing names of `MemberCluster` custom resources
+ * Select clusters using cluster selectors to match labels present on `MemberCluster` custom resources
+
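Putting resource selection and target cluster selection together, here's a minimal sketch that places a namespace on two member clusters selected by name. The placement, namespace, and cluster names are hypothetical, and the kubeconfig file name `fleet` is assumed:

```bash
# A hedged sketch (hypothetical names): place a namespace on two member
# clusters that are selected by name.
KUBECONFIG=fleet kubectl apply -f - <<EOF
apiVersion: fleet.azure.com/v1alpha1
kind: ClusterResourcePlacement
metadata:
  name: example-placement
spec:
  resourceSelectors:
    - group: ""
      version: v1
      kind: Namespace
      name: example-namespace
  policy:
    clusterNames:
      - aks-member-1
      - aks-member-2
EOF
```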
+## Multi-cluster load balancing
+
+Fleet can be used to set up layer 4 multi-cluster load balancing across workloads deployed across a fleet's member clusters.
+
+[ ![Diagram that shows how multi-cluster load balancing works.](./media/conceptual-load-balancing.png) ](./media/conceptual-load-balancing.png#lightbox)
+
+For multi-cluster load balancing, Fleet requires target clusters to be using [Azure CNI networking](../aks/configure-azure-cni.md). Azure CNI networking enables pod IPs to be directly addressable on the Azure virtual network so that they can be routed to from the Azure Load Balancer.
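To confirm that a member cluster meets this requirement, here's a quick sketch with placeholder resource group and cluster names; expect the value `azure` for Azure CNI:

```bash
# A hedged sketch: check which network plugin an AKS cluster uses.
az aks show \
  --resource-group <resource-group> \
  --name <cluster-name> \
  --query networkProfile.networkPlugin \
  --output tsv
```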
+
+The ServiceExport itself can be propagated from the fleet cluster to a member cluster by using the Kubernetes resource propagation feature described above, or it can be created directly on the member cluster. Once this ServiceExport resource is created, a ServiceImport is created on the fleet cluster and on all other member clusters so that they're aware of the service.
+
+The user can then create a `MultiClusterService` custom resource to indicate that they want to set up Layer 4 multi-cluster load balancing. This `MultiClusterService` results in the Azure Load Balancer that's mapped to the member cluster being configured to load balance incoming traffic across the endpoints of this service on multiple member clusters.
+
+## Next steps
+
+* Create an [Azure Kubernetes Fleet Manager resource and join member clusters](./quickstart-create-fleet-and-members.md)
kubernetes-fleet Configuration Propagation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/configuration-propagation.md
+
+ Title: "Propagate Kubernetes resource objects from an Azure Kubernetes Fleet Manager resource to member clusters (preview)"
+description: Learn how to control how Kubernetes resource objects get propagated to all or a subset of member clusters of an Azure Kubernetes Fleet Manager resource.
+ Last updated : 09/09/2022++++++
+# Propagate Kubernetes resource objects from an Azure Kubernetes Fleet Manager resource to member clusters (preview)
+
+Platform admins and application developers need a way to deploy the same Kubernetes resource objects across all member clusters or just a subset of member clusters of the fleet. Kubernetes Fleet Manager (Fleet) provides `ClusterResourcePlacement` as a mechanism to control how cluster-scoped Kubernetes resources are propagated to member clusters.
++
+## Prerequisites
+
+* You have a Fleet resource with one or more member clusters. If not, follow the [quickstart](quickstart-create-fleet-and-members.md) to create a Fleet resource and join Azure Kubernetes Service (AKS) clusters as members.
+
+* Set the following environment variables and obtain the kubeconfigs for the fleet and all member clusters:
+
+ ```bash
+ export GROUP=<resource-group>
+ export FLEET=<fleet-name>
+ export MEMBER_CLUSTER_1=aks-member-1
+ export MEMBER_CLUSTER_2=aks-member-2
+ export MEMBER_CLUSTER_3=aks-member-3
+
+ az fleet get-credentials --resource-group ${GROUP} --name ${FLEET} --file fleet
+
+ az aks get-credentials --resource-group ${GROUP} --name ${MEMBER_CLUSTER_1} --file aks-member-1
+
+ az aks get-credentials --resource-group ${GROUP} --name ${MEMBER_CLUSTER_2} --file aks-member-2
+
+ az aks get-credentials --resource-group ${GROUP} --name ${MEMBER_CLUSTER_3} --file aks-member-3
+ ```
+
+* Follow the [conceptual overview of this feature](./architectural-overview.md#kubernetes-resource-propagation), which provides an explanation of resource selection, target cluster selection, and the allowed inputs.
+
+## Resource selection
+
+1. Create a sample namespace by running the following command on the fleet cluster:
+
+ ```bash
+ KUBECONFIG=fleet kubectl create namespace hello-world
+ ```
+
+1. Create the following `ClusterResourcePlacement` in a file called `crp.yaml`. Notice we're selecting clusters in the `eastus` region:
+
+ ```yaml
+ apiVersion: fleet.azure.com/v1alpha1
+ kind: ClusterResourcePlacement
+ metadata:
+ name: hello-world
+ spec:
+ resourceSelectors:
+ - group: ""
+ version: v1
+ kind: Namespace
+ name: hello-world
+ policy:
+ affinity:
+ clusterAffinity:
+ clusterSelectorTerms:
+ - labelSelector:
+ matchLabels:
+ fleet.azure.com/location: eastus
+ ```
+
+ > [!TIP]
+   > The above example propagates the `hello-world` namespace to only those member clusters that are from the `eastus` region. If your desired target clusters are from a different region, you can replace `eastus` with that region instead.
++
+1. Apply the `ClusterResourcePlacement`:
+
+ ```bash
+ KUBECONFIG=fleet kubectl apply -f crp.yaml
+ ```
+
+ If successful, the output will look similar to the following example:
+
+ ```console
+ clusterresourceplacement.fleet.azure.com/hello-world created
+ ```
+
+1. Check the status of the `ClusterResourcePlacement`:
+
+ ```bash
+ KUBECONFIG=fleet kubectl get clusterresourceplacements
+ ```
+
+ If successful, the output will look similar to the following example:
+
+ ```console
+ NAME GEN SCHEDULED SCHEDULEDGEN APPLIED APPLIEDGEN AGE
+ hello-world 1 True 1 True 1 16s
+ ```
+
+1. On each member cluster, check if the namespace has been propagated:
+
+ ```bash
+ KUBECONFIG=aks-member-1 kubectl get namespace hello-world
+ ```
+
+ The output will look similar to the following example:
+
+ ```console
+ NAME STATUS AGE
+ hello-world Active 96s
+ ```
+
+ ```bash
+ KUBECONFIG=aks-member-2 kubectl get namespace hello-world
+ ```
+
+ The output will look similar to the following example:
+
+ ```console
+ NAME STATUS AGE
+ hello-world Active 1m16s
+ ```
+
+ ```bash
+ KUBECONFIG=aks-member-3 kubectl get namespace hello-world
+ ```
+
+ The output will look similar to the following example:
+
+ ```console
+ Error from server (NotFound): namespaces "hello-world" not found
+ ```
+
+   We observe that the `ClusterResourcePlacement` has resulted in the namespace being propagated only to the clusters in the `eastus` region, and not to the `aks-member-3` cluster in the `westcentralus` region. You can also inspect the placement status directly on the fleet cluster, as shown in the sketch after these steps.
+
+ > [!TIP]
+ > The above steps describe an example using one way of selecting the resources to be propagated using labels and cluster selectors. More methods and their examples can be found in this [sample repository](https://github.com/Azure/AKS/tree/master/examples/fleet/helloworld).
+
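Beyond the per-cluster checks above, you can inspect the placement's full status on the fleet cluster. This is a minimal sketch; the placement name matches the example above:

```bash
# A hedged sketch: view the full ClusterResourcePlacement status, including
# the member clusters the namespace was scheduled to.
KUBECONFIG=fleet kubectl get clusterresourceplacement hello-world -o yaml
```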
+## Target cluster selection
+
+1. Create a sample namespace by running the following command:
+
+ ```bash
+ KUBECONFIG=fleet kubectl create namespace hello-world-1
+ ```
+
+1. Create the following `ClusterResourcePlacement` in a file named `crp-1.yaml`:
++
+ ```yaml
+ apiVersion: fleet.azure.com/v1alpha1
+ kind: ClusterResourcePlacement
+ metadata:
+ name: hello-world-1
+ spec:
+ resourceSelectors:
+ - group: ""
+ version: v1
+ kind: Namespace
+ name: hello-world-1
+ policy:
+ clusterNames:
+ - aks-member-1
+ ```
+
+ Apply this `ClusterResourcePlacement` to the cluster:
+
+ ```bash
+ KUBECONFIG=fleet kubectl apply -f crp-1.yaml
+ ```
+
+1. Check the status of the `ClusterResourcePlacement`:
++
+ ```bash
+ KUBECONFIG=fleet kubectl get clusterresourceplacements
+ ```
+
+ If successful, the output will look similar to the following example:
+
+ ```console
+ NAME GEN SCHEDULED SCHEDULEDGEN APPLIED APPLIEDGEN AGE
+ hello-world-1 1 True 1 True 1 18s
+ ```
+
+1. On each AKS cluster, run the following command to see if the namespace has been propagated:
+
+ ```bash
+ KUBECONFIG=aks-member-1 kubectl get namespace hello-world-1
+ ```
+
+ The output will look similar to the following example:
+
+ ```console
+ NAME STATUS AGE
+ hello-world-1 Active 70s
+ ```
+
+ ```bash
+ KUBECONFIG=aks-member-2 kubectl get namespace hello-world-1
+ ```
+
+ The output will look similar to the following example:
+
+ ```console
+ Error from server (NotFound): namespaces "hello-world-1" not found
+ ```
+
+ ```bash
+ KUBECONFIG=aks-member-3 kubectl get namespace hello-world-1
+ ```
+
+ The output will look similar to the following example:
+
+ ```console
+ Error from server (NotFound): namespaces "hello-world-1" not found
+ ```
+
+   We're able to verify that the namespace has been propagated only to the `aks-member-1` cluster, and not to the other clusters.
++
+> [!TIP]
+> The above steps gave an example of one method of identifying the target clusters specifically by name. More methods and their examples can be found in this [sample repository](https://github.com/Azure/AKS/tree/master/examples/fleet/helloworld).
+
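If you want to tidy up the sample resources created on the fleet cluster in this article, here's a hedged sketch; whether the propagated copies on member clusters are also removed depends on the placement controller's cleanup behavior:

```bash
# A hedged sketch: remove the sample placements and namespaces from the
# fleet (hub) cluster.
KUBECONFIG=fleet kubectl delete clusterresourceplacement hello-world hello-world-1
KUBECONFIG=fleet kubectl delete namespace hello-world hello-world-1
```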
+## Next steps
+
+* [Set up multi-cluster Layer 4 load balancing](./l4-load-balancing.md)
kubernetes-fleet Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/faq.md
+
+ Title: "Frequently asked questions - Azure Kubernetes Fleet Manager"
+description: This article covers the frequently asked questions for Azure Kubernetes Fleet Manager
Last updated : 10/03/2022+++++++
+# Frequently Asked Questions - Azure Kubernetes Fleet Manager
+
+This article covers the frequently asked questions for Azure Kubernetes Fleet Manager.
++
+## Relationship to Azure Kubernetes Service clusters
+
+Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks, like health monitoring and maintenance. Since the Kubernetes control plane is managed by Azure, you only manage and maintain the agent nodes. You run your actual workloads on the AKS clusters.
+
+Azure Kubernetes Fleet Manager (Fleet) will help you address at-scale and multi-cluster scenarios for Azure Kubernetes Service clusters. Azure Kubernetes Fleet Manager only provides a group representation for your AKS clusters and helps users with orchestrating Kubernetes resource propagation and multi-cluster load balancing. User workloads can't be run on the fleet cluster itself.
+
+## Creation of AKS clusters from fleet resource
+
+The current preview of the Azure Kubernetes Fleet Manager resource supports joining only existing AKS clusters as member clusters. Creation and lifecycle management of new AKS clusters from the fleet cluster is in the [roadmap](https://aka.ms/fleet/roadmap).
+
+## Number of clusters
+
+During preview, you can join up to 20 AKS clusters as member clusters to the same fleet resource.
+
+## AKS clusters that can be joined as members
+
+Fleet supports joining the following types of AKS clusters as member clusters:
+
+* AKS clusters across the same or different resource groups within the same subscription
+* AKS clusters across different subscriptions of the same Azure AD tenant
+* AKS clusters from different regions but within the same tenant
+
+## Relationship to Azure Arc-enabled Kubernetes
+
+The current preview of the Azure Kubernetes Fleet Manager resource supports joining only AKS clusters as member clusters. Support for joining Azure Arc-enabled Kubernetes clusters as member clusters is in the [roadmap](https://aka.ms/fleet/roadmap).
+
+## Regional or global
+
+Azure Kubernetes Fleet Manager resource is a regional resource. Support for region failover for disaster recovery use cases is in the [roadmap](https://aka.ms/fleet/roadmap).
+
+## Roadmap
+
+The roadmap for Azure Kubernetes Fleet Manager resource is available [here](https://aka.ms/fleet/roadmap).
+
+## Next steps
+
+* Create an [Azure Kubernetes Fleet Manager resource and join member clusters](./quickstart-create-fleet-and-members.md)
kubernetes-fleet L4 Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/l4-load-balancing.md
+
+ Title: "How to set up multi-cluster Layer 4 load balancing across Azure Kubernetes Fleet Manager member clusters (preview)"
+description: Learn how to use Azure Kubernetes Fleet Manager to set up multi-cluster Layer 4 load balancing across workloads deployed on multiple member clusters.
+ Last updated : 09/09/2022++++++
+# Set up multi-cluster layer 4 load balancing across Azure Kubernetes Fleet Manager member clusters (preview)
+
+After an application has been deployed across multiple clusters, admins often want to set up load balancing for incoming traffic across these application endpoints on member clusters.
+
+In this how-to guide, you'll set up layer 4 load balancing across workloads deployed across a fleet's member clusters.
++
+## Prerequisites
++
+* You must have a Fleet resource with member clusters to which a workload has been deployed. If you don't have this resource, follow [Quickstart: Create a Fleet resource and join member clusters](quickstart-create-fleet-and-members.md) and [Propagate Kubernetes configurations from a Fleet resource to member clusters](configuration-propagation.md)
+
+* These target clusters should be using [Azure CNI networking](../aks/configure-azure-cni.md).
+
+* The target AKS clusters on which the workloads are deployed need to be present on either the same [virtual network](../virtual-network/virtual-networks-overview.md) or on [peered virtual networks](../virtual-network/virtual-network-peering-overview.md).
+
+* These target clusters have to be [added as member clusters to the Fleet resource](./quickstart-create-fleet-and-members.md#join-member-clusters).
+
+* Follow the [conceptual overview of this feature](./architectural-overview.md#multi-cluster-load-balancing), which provides an explanation of `ServiceExport` and `MultiClusterService` objects referenced in this document.
+
+* Set the following environment variables and obtain the kubeconfigs for the fleet and all member clusters:
+
+ ```bash
+ export GROUP=<resource-group>
+ export FLEET=<fleet-name>
+ export MEMBER_CLUSTER_1=aks-member-1
+
+ az fleet get-credentials --resource-group ${GROUP} --name ${FLEET} --file fleet
+
+ az aks get-credentials --resource-group ${GROUP} --name ${MEMBER_CLUSTER_1} --file aks-member-1
+ ```
++
+## Deploy a sample workload to demo clusters
+
+> [!NOTE]
+>
+> * The steps in this how-to guide refer to a sample application for demonstration purposes only. You can substitute this workload for any of your own existing Deployment and Service objects.
+>
+> * These steps deploy the sample workload from the Fleet cluster to member clusters using Kubernetes configuration propagation. Alternatively, you can choose to deploy these Kubernetes configurations to each member cluster separately, one at a time.
+
+1. Create a namespace on the fleet cluster:
+
+ ```bash
+ KUBECONFIG=fleet kubectl create namespace kuard-demo
+ ```
+
+ Output will look similar to the following example:
+
+ ```console
+ namespace/kuard-demo created
+ ```
+
+1. Apply the Deployment, Service, ServiceExport objects:
+
+ ```bash
+ KUBECONFIG=fleet kubectl apply -f https://raw.githubusercontent.com/Azure/AKS/master/examples/fleet/kuard/kuard-export-service.yaml
+ ```
+
+ The `ServiceExport` specification in the above file allows you to export a service from member clusters to the Fleet resource. Once successfully exported, the service and all its endpoints will be synced to the fleet cluster and can then be used to set up multi-cluster load balancing across these endpoints. The output will look similar to the following example:
+
+ ```console
+ deployment.apps/kuard created
+ service/kuard created
+ serviceexport.networking.fleet.azure.com/kuard created
+ ```
+
+1. Create the following `ClusterResourcePlacement` in a file called `crp-2.yaml`. Notice we're selecting clusters in the `eastus` region:
+
+ ```yaml
+ apiVersion: fleet.azure.com/v1alpha1
+ kind: ClusterResourcePlacement
+ metadata:
+ name: kuard-demo
+ spec:
+ resourceSelectors:
+ - group: ""
+ version: v1
+ kind: Namespace
+ name: kuard-demo
+ policy:
+ affinity:
+ clusterAffinity:
+ clusterSelectorTerms:
+ - labelSelector:
+ matchLabels:
+ fleet.azure.com/location: eastus
+ ```
+
+1. Apply the `ClusterResourcePlacement`:
+
+ ```bash
+ KUBECONFIG=fleet kubectl apply -f crp-2.yaml
+ ```
+
+ If successful, the output will look similar to the following example:
+
+ ```console
+ clusterresourceplacement.fleet.azure.com/kuard-demo created
+ ```
+
+1. Check the status of the `ClusterResourcePlacement`:
++
+ ```bash
+ KUBECONFIG=fleet kubectl get clusterresourceplacements
+ ```
+
+ If successful, the output will look similar to the following example:
+
+ ```console
+ NAME GEN SCHEDULED SCHEDULEDGEN APPLIED APPLIEDGEN AGE
+ kuard-demo 1 True 1 True 1 20s
+ ```
+
+## Create MultiClusterService to load balance across the service endpoints in multiple member clusters
++
+1. Check the member clusters in `eastus` region to see if the service is successfully exported:
+
+ ```bash
+ KUBECONFIG=aks-member-1 kubectl get serviceexport kuard --namespace kuard-demo
+ ```
+
+ Output will look similar to the following example:
+
+ ```console
+ NAME IS-VALID IS-CONFLICTED AGE
+ kuard True False 25s
+ ```
+
+ ```bash
+ KUBECONFIG=aks-member-2 kubectl get serviceexport kuard --namespace kuard-demo
+ ```
+
+ Output will look similar to the following example:
+
+ ```console
+ NAME IS-VALID IS-CONFLICTED AGE
+ kuard True False 55s
+ ```
+
+   You should see that the service is valid for export (the `IS-VALID` field is `true`) and has no conflicts with other exports (`IS-CONFLICTED` is `false`).
+
+ > [!NOTE]
+ > It may take a minute or two for the ServiceExport to be propagated.
+
+1. Apply the MultiClusterService on one of these members to load balance across the service endpoints in these clusters:
+
+ ```bash
+ KUBECONFIG=aks-member-1 kubectl apply -f https://raw.githubusercontent.com/Azure/AKS/master/examples/fleet/kuard/kuard-mcs.yaml
+ ```
+
+ Output will look similar to the following example:
+
+ ```console
+ multiclusterservice.networking.fleet.azure.com/kuard created
+ ```
+
+1. Verify the MultiClusterService is valid by running the following command:
+
+ ```bash
+ KUBECONFIG=aks-member-1 kubectl get multiclusterservice kuard --namespace kuard-demo
+ ```
+
+ The output should look similar to the following example:
+
+ ```console
+ NAME SERVICE-IMPORT EXTERNAL-IP IS-VALID AGE
+ kuard kuard <a.b.c.d> True 40s
+ ```
+
+   The `IS-VALID` field should be `true` in the output. Check out the external load balancer IP address (`EXTERNAL-IP`) in the output. It may take a while before the import is fully processed and the IP address becomes available. If it's not available yet, you can watch for it by using the sketch after these steps.
+
+1. Run the following command multiple times using the External IP address from above:
+
+ ```bash
+ curl <a.b.c.d>:8080 | grep addrs
+ ```
+
+   Notice that the IP addresses of the pods serving the requests change, and that these pods are from the member clusters `aks-member-1` and `aks-member-2` in the `eastus` region. You can verify the pod IPs by running the following commands on the clusters from the `eastus` region:
+
+ ```bash
+ KUBECONFIG=aks-member-1 kubectl get pods -n kuard-demo -o wide
+ ```
+
+ ```bash
+ KUBECONFIG=aks-member-2 kubectl get pods -n kuard-demo -o wide
+ ```
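If the `EXTERNAL-IP` value was still empty when you checked the `MultiClusterService`, a minimal sketch like the following can watch for it to be assigned before you run the `curl` check:

```bash
# A hedged sketch: watch the multi-cluster service until an external IP is assigned.
KUBECONFIG=aks-member-1 kubectl get multiclusterservice kuard --namespace kuard-demo --watch
```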
kubernetes-fleet Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/overview.md
+
+ Title: "Overview of Azure Kubernetes Fleet Manager (preview)"
+++ Last updated : 08/29/2022+++
+description: "This article provides an overview of Azure Kubernetes Fleet Manager."
+keywords: "Kubernetes, Azure, multi-cluster, multi, containers"
++
+# What is Azure Kubernetes Fleet Manager (preview)?
+
+Azure Kubernetes Fleet Manager (Fleet) enables multi-cluster and at-scale scenarios for Azure Kubernetes Service (AKS) clusters. A Fleet resource creates a cluster that can be used to manage other member clusters.
+
+Fleet supports the following scenarios:
+
+* Create a Fleet resource and group AKS clusters as member clusters.
+
+* Create Kubernetes resource objects on the Fleet resource's cluster and control their propagation to all or a subset of all member clusters.
+
+* Export a service from one member cluster to the Fleet resource. Once successfully exported, the service and its endpoints are synced to the hub, which other member clusters (or any Fleet resource-scoped load balancer) can consume.
++
+## Next steps
+
+[Create an Azure Kubernetes Fleet Manager resource and group multiple AKS clusters as member clusters of the fleet](./quickstart-create-fleet-and-members.md).
kubernetes-fleet Quickstart Create Fleet And Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-create-fleet-and-members.md
+
+ Title: "Quickstart: Create an Azure Kubernetes Fleet Manager resource and join member clusters (preview)"
+description: In this quickstart, you learn how to create an Azure Kubernetes Fleet Manager resource and join member clusters.
Last updated : 09/06/2022++++
+ms.devlang: azurecli
+++
+# Quickstart: Create an Azure Kubernetes Fleet Manager resource and join member clusters (preview)
+
+Get started with Azure Kubernetes Fleet Manager (Fleet) by using the Azure CLI to create a Fleet resource and later connect Azure Kubernetes Service (AKS) clusters as member clusters.
++
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* A basic understanding of [Kubernetes core concepts](../aks/concepts-clusters-workloads.md).
+
+* An identity (user or service principal) which can be used to [log in to Azure CLI](/cli/azure/authenticate-azure-cli). This identity needs to have the following permissions on the Fleet and AKS resource types for completing the steps listed in this quickstart:
+
+ * Microsoft.ContainerService/fleets/read
+ * Microsoft.ContainerService/fleets/write
+ * Microsoft.ContainerService/fleets/listCredentials/action
+ * Microsoft.ContainerService/fleets/members/read
+ * Microsoft.ContainerService/fleets/members/write
+ * Microsoft.ContainerService/fleetMemberships/read
+ * Microsoft.ContainerService/fleetMemberships/write
+ * Microsoft.ContainerService/managedClusters/read
+ * Microsoft.ContainerService/managedClusters/write
+ * Microsoft.ContainerService/managedClusters/listClusterUserCredential/action
+
+* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version `2.37.0` or later
+
+* Enable the following feature for each subscription where you'll create the fleet resource or where the AKS clusters that you'll join as members are located (you can verify the registration by using the sketch after this list):
+
+ ```azurecli
+ az feature register --namespace Microsoft.ContainerService --name FleetResourcePreview
+ ```
+
+* Install the **fleet** Azure CLI extension. Make sure your version is at least `0.1.0`:
+
+ ```azurecli
+ az extension add --name fleet
+ ```
+
+* Set the following environment variables:
+
+ ```azurecli
+ export SUBSCRIPTION_ID=<subscription_id>
+ export GROUP=<your_resource_group_name>
+ export FLEET=<your_fleet_name>
+ ```
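After registering the preview feature in the step above, you can check the registration state and refresh the resource provider so the registration takes effect. This is a hedged sketch that uses standard Azure CLI commands:

```bash
# A hedged sketch: confirm the preview feature is registered, then refresh
# the Microsoft.ContainerService provider registration.
az feature show --namespace Microsoft.ContainerService --name FleetResourcePreview --query properties.state
az provider register --namespace Microsoft.ContainerService
```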
+
+## Create a resource group
+
+An [Azure resource group](../azure-resource-manager/management/overview.md) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're prompted to specify a location. This location is:
+
+* The storage location of your resource group metadata.
+* Where your resources will run in Azure if you don't specify another location during resource creation.
+
+Set the Azure subscription and create a resource group using the [az group create](/cli/azure/group#az-group-create) command.
+
+```azurecli-interactive
+az account set -s ${SUBSCRIPTION_ID}
+az group create --name ${GROUP} --location eastus
+```
+
+The following output example resembles successful creation of the resource group:
+
+```json
+{
+ "id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/fleet-demo",
+ "location": "eastus",
+ "managedBy": null,
+ "name": "fleet-demo",
+ "properties": {
+ "provisioningState": "Succeeded"
+ },
+ "tags": null,
+ "type": "Microsoft.Resources/resourceGroups"
+}
+```
+
+## Create a Fleet resource
+
+A Fleet resource can be created to later group your AKS clusters as member clusters. This resource enables multi-cluster scenarios, such as Kubernetes object propagation to member clusters and north-south load balancing across endpoints deployed on these multiple member clusters.
+
+Create a Fleet resource using the [az fleet create](/cli/azure/fleet#az-fleet-create) command:
+
+```azurecli-interactive
+az fleet create --resource-group ${GROUP} --name ${FLEET} --location eastus
+```
+
+The output will look similar to the following example:
+
+```json
+{
+ "etag": "...",
+ "hubProfile": {
+ "dnsPrefix": "fleet-demo-fleet-demo-3959ec",
+ "fqdn": "<unique>.eastus.azmk8s.io",
+ "kubernetesVersion": "1.24.6"
+ },
+ "id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/fleet-demo/providers/Microsoft.ContainerService/fleets/fleet-demo",
+ "location": "eastus",
+ "name": "fleet-demo",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "fleet-demo",
+ "systemData": {
+ "createdAt": "2022-10-04T18:40:22.317686+00:00",
+ "createdBy": "<user>",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-10-04T18:40:22.317686+00:00",
+ "lastModifiedBy": "<user>",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "Microsoft.ContainerService/fleets"
+}
+```
+
+## Join member clusters
+
+Fleet currently supports joining existing AKS clusters as member clusters.
+
+1. If you already have existing AKS clusters that you want to join to the fleet resource, you can skip to Step 2. If not, you can create three AKS clusters using the following commands:
+
+ **Create virtual network and subnets**
+
+ ```azurecli-interactive
+ export FIRST_VNET=first-vnet
+ export SECOND_VNET=second-vnet
+ export MEMBER_1_SUBNET=member-1
+ export MEMBER_2_SUBNET=member-2
+ export MEMBER_3_SUBNET=member-3
+
+ az network vnet create \
+ --name ${FIRST_VNET} \
+ --resource-group ${GROUP} \
+ --location eastus \
+ --address-prefixes 10.0.0.0/8
+
+ az network vnet create \
+ --name ${SECOND_VNET} \
+ --resource-group ${GROUP} \
+ --location westcentralus \
+ --address-prefixes 10.0.0.0/8
+
+ az network vnet subnet create \
+ --vnet-name ${FIRST_VNET} \
+ --name ${MEMBER_1_SUBNET} \
+ --resource-group ${GROUP} \
+ --address-prefixes 10.1.0.0/16
+
+ az network vnet subnet create \
+ --vnet-name ${FIRST_VNET} \
+ --name ${MEMBER_2_SUBNET} \
+ --resource-group ${GROUP} \
+ --address-prefixes 10.2.0.0/16
+
+ az network vnet subnet create \
+ --vnet-name ${SECOND_VNET} \
+ --name ${MEMBER_3_SUBNET} \
+ --resource-group ${GROUP} \
+ --address-prefixes 10.1.0.0/16
+ ```
+
+ **Create AKS clusters**
+
+ ```azurecli-interactive
+ export MEMBER_CLUSTER_1=aks-member-1
+
+ az aks create \
+ --resource-group ${GROUP} \
+ --location eastus \
+ --name ${MEMBER_CLUSTER_1} \
+ --node-count 1 \
+ --network-plugin azure \
+ --vnet-subnet-id "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.Network/virtualNetworks/${FIRST_VNET}/subnets/${MEMBER_1_SUBNET}"
+ ```
+
+ ```azurecli-interactive
+ export MEMBER_CLUSTER_2=aks-member-2
+
+ az aks create \
+ --resource-group ${GROUP} \
+ --location eastus \
+ --name ${MEMBER_CLUSTER_2} \
+ --node-count 1 \
+ --network-plugin azure \
+ --vnet-subnet-id "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.Network/virtualNetworks/${FIRST_VNET}/subnets/${MEMBER_2_SUBNET}"
+ ```
+
+ ```azurecli-interactive
+ export MEMBER_CLUSTER_3=aks-member-3
+
+ az aks create \
+ --resource-group ${GROUP} \
+ --location westcentralus \
+ --name ${MEMBER_CLUSTER_3} \
+ --node-count 1 \
+ --network-plugin azure \
+ --vnet-subnet-id "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.Network/virtualNetworks/${SECOND_VNET}/subnets/${MEMBER_3_SUBNET}"
+ ```
+
+ We created the third cluster in a different region above to demonstrate that fleet can support joining clusters from different regions. Fleet also supports joining clusters from different subscriptions. The only requirement for AKS clusters being joined to fleet as members is that they all need to be a part of the same Azure AD tenant.
+
+1. Set the following environment variables for members:
+
+ ```azurecli-interactive
+ export MEMBER_CLUSTER_ID_1=/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.ContainerService/managedClusters/${MEMBER_CLUSTER_1}
+ export MEMBER_NAME_1=aks-member-1
+
+ export MEMBER_CLUSTER_ID_2=/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.ContainerService/managedClusters/${MEMBER_CLUSTER_2}
+ export MEMBER_NAME_2=aks-member-2
+
+ export MEMBER_CLUSTER_ID_3=/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.ContainerService/managedClusters/${MEMBER_CLUSTER_3}
+ export MEMBER_NAME_3=aks-member-3
+ ```
+
+1. Join these clusters to the Fleet resource using the following commands:
+
+ ```azurecli-interactive
+ az fleet member create \
+ --resource-group ${GROUP} \
+ --fleet-name ${FLEET} \
+ --name ${MEMBER_NAME_1} \
+ --member-cluster-id ${MEMBER_CLUSTER_ID_1}
+ ```
+
+ The output will look similar to the following example:
+
+ ```json
+ {
+ "clusterResourceId": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/managedClusters/aks-member-1",
+ "etag": "...",
+ "id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/fleets/<FLEET>/members/aks-member-1",
+ "name": "aks-member-1",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "<GROUP>",
+ "systemData": {
+ "createdAt": "2022-10-04T19:04:56.455813+00:00",
+ "createdBy": "<user>",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-10-04T19:04:56.455813+00:00",
+ "lastModifiedBy": "<user>",
+ "lastModifiedByType": "User"
+ },
+ "type": "Microsoft.ContainerService/fleets/members"
+ }
+ ```
+
+ ```azurecli-interactive
+ az fleet member create \
+ --resource-group ${GROUP} \
+ --fleet-name ${FLEET} \
+ --name ${MEMBER_NAME_2} \
+ --member-cluster-id ${MEMBER_CLUSTER_ID_2}
+ ```
+
+ The output will look similar to the following example:
+
+ ```json
+ {
+ "clusterResourceId": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/managedClusters/aks-member-2",
+ "etag": "...",
+ "id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/fleets/<FLEET>/members/aks-member-2",
+ "name": "aks-member-2",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "<GROUP>",
+ "systemData": {
+ "createdAt": "2022-10-04T19:05:06.483268+00:00",
+ "createdBy": "<user>",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-10-04T19:05:06.483268+00:00",
+ "lastModifiedBy": "<user>",
+ "lastModifiedByType": "User"
+ },
+ "type": "Microsoft.ContainerService/fleets/members"
+ }
+ ```
+
+ ```azurecli-interactive
+ az fleet member create \
+ --resource-group ${GROUP} \
+ --fleet-name ${FLEET} \
+ --name ${MEMBER_NAME_3} \
+ --member-cluster-id ${MEMBER_CLUSTER_ID_3}
+ ```
+
+ The output will look similar to the following example:
+
+ ```json
+ {
+ "clusterResourceId": "/subscriptions/<SUBSCRIPTION_ID>/resourcegroups/<GROUP>/providers/Microsoft.ContainerService/managedClusters/aks-member-3",
+ "etag": "...",
+ "id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/fleets/<FLEET>/members/aks-member-3",
+ "name": "aks-member-3",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "fleet-demo",
+ "systemData": {
+ "createdAt": "2022-10-04T19:05:19.497275+00:00",
+ "createdBy": "<user>",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-10-04T19:05:19.497275+00:00",
+ "lastModifiedBy": "<user>",
+ "lastModifiedByType": "User"
+ },
+ "type": "Microsoft.ContainerService/fleets/members"
+ }
+ ```
+
+1. Verify that the member clusters have successfully joined by running the following command:
+
+ ```azurecli-interactive
+ az fleet member list --resource-group ${GROUP} --fleet-name ${FLEET} -o table
+ ```
+
+ If successful, the output will look similar to the following example:
+
+ ```console
+ ClusterResourceId Name ProvisioningState ResourceGroup
+ -------------------------------------------------------------------------------------------------------------------------  ------------  -------------------  ---------------
+ /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/managedClusters/aks-member-1 aks-member-1 Succeeded <GROUP>
+ /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/managedClusters/aks-member-2 aks-member-2 Succeeded <GROUP>
+ /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/managedClusters/aks-member-3 aks-member-3 Succeeded <GROUP>
+ ```
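+
+A member cluster doesn't have to be in the same subscription as the fleet. As a minimal sketch (the subscription ID, resource group, and cluster name below are placeholders, not values from this quickstart), joining a cluster from another subscription only changes the resource ID that you pass:
+
+```azurecli-interactive
+# Hypothetical example: join a cluster that lives in a different subscription.
+# Both subscriptions must belong to the same Azure AD tenant.
+export OTHER_SUBSCRIPTION_ID=<other-subscription-id>
+export OTHER_CLUSTER_ID=/subscriptions/${OTHER_SUBSCRIPTION_ID}/resourceGroups/<other-resource-group>/providers/Microsoft.ContainerService/managedClusters/<other-cluster-name>
+
+az fleet member create \
+    --resource-group ${GROUP} \
+    --fleet-name ${FLEET} \
+    --name aks-member-other-sub \
+    --member-cluster-id ${OTHER_CLUSTER_ID}
+```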
+
+## (Optional) Access the Kubernetes API of the Fleet resource cluster
+
+An Azure Kubernetes Fleet Manager resource is itself a Kubernetes cluster that you use to centrally orchestrate scenarios, like Kubernetes object propagation. To access the Fleet cluster's Kubernetes API, complete the following steps:
+
+1. Get the kubeconfig file of the Fleet resource:
+
+ ```azurecli-interactive
+ az fleet get-credentials --resource-group ${GROUP} --name ${FLEET}
+ ```
+
+ The output will look similar to the following example:
+
+ ```console
+ Merged "hub" as current context in /home/fleet/.kube/config
+ ```
+
+1. Set the following environment variable for the `id` of the Fleet resource:
+
+ ```azurecli-interactive
+ export FLEET_ID=/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.ContainerService/fleets/${FLEET}
+ ```
+
+1. Authorize your identity to access the Fleet resource's Kubernetes API:
+
+ ```azurecli-interactive
+ export IDENTITY=$(az ad signed-in-user show --query "id" --output tsv)
+ export ROLE="Azure Kubernetes Fleet Manager RBAC Cluster Admin"
+ az role assignment create --role "${ROLE}" --assignee ${IDENTITY} --scope ${FLEET_ID}
+ ```
+
+ For the `ROLE` environment variable in the preceding command, you can use any of the following four built-in role definitions (a sketch that uses one of these roles appears after these steps):
+
+ * Azure Kubernetes Fleet Manager RBAC Reader
+ * Azure Kubernetes Fleet Manager RBAC Writer
+ * Azure Kubernetes Fleet Manager RBAC Admin
+ * Azure Kubernetes Fleet Manager RBAC Cluster Admin
+
+ You should see output similar to the following example:
+
+ ```json
+ {
+ "canDelegate": null,
+ "condition": null,
+ "conditionVersion": null,
+ "description": null,
+ "id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/fleets/<FLEET>/providers/Microsoft.Authorization/roleAssignments/<assignment>",
+ "name": "<name>",
+ "principalId": "<id>",
+ "principalType": "User",
+ "resourceGroup": "<GROUP>",
+ "roleDefinitionId": "/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.Authorization/roleDefinitions/18ab4d3d-a1bf-4477-8ad9-8359bc988f69",
+ "scope": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/fleets/<FLEET>",
+ "type": "Microsoft.Authorization/roleAssignments"
+ }
+ ```
+
+1. Verify the status of the member clusters:
+
+ ```bash
+ kubectl get memberclusters
+ ```
+
+ If successful, the output will look similar to the following example:
+
+ ```console
+ NAME JOINED AGE
+ aks-member-1 True 2m
+ aks-member-2 True 2m
+ aks-member-3 True 2m
+ ```
+
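+If you want to grant a narrower role from the list in step 3, such as read-only access for a teammate, the same command pattern applies. A minimal sketch (the Azure AD group object ID is a placeholder):
+
+```azurecli-interactive
+# Hypothetical example: grant read-only access to the fleet's Kubernetes API
+# to an Azure AD group or user. Replace the object ID with a real object ID.
+az role assignment create \
+    --role "Azure Kubernetes Fleet Manager RBAC Reader" \
+    --assignee <azure-ad-group-object-id> \
+    --scope ${FLEET_ID}
+```
+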
+## Next steps
+
+* Learn how to use [Kubernetes resource objects propagation](./configuration-propagation.md)
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide.md
When you're assigning roles, it helps to follow these tips:
## Content filtering
-Your school may need to do content filtering to prevent students from accessing inappropriate websites. For example, to comply with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act). Lab Services doesn't offer built-in support for content filtering.
+Your school may need to do content filtering to prevent students from accessing inappropriate websites. For example, to comply with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act). Azure Lab Services doesn't offer built-in support for content filtering, and doesn't support network-level filtering.
-Schools typically approach content filtering by installing third-party software that performs content filtering on each computer. Azure Lab Services does not support network-level filtering.
+Schools typically approach content filtering by installing third-party software that performs content filtering on each computer. To take this approach with Azure Lab Services, install the content filtering software on each lab's template VM.
-By default, Azure Lab Services hosts each lab's virtual network within a Microsoft-managed Azure subscription. You'll need to use [advanced networking](how-to-connect-vnet-injection.md) in the lab plan. Make sure to check known limitations of VNet injection before proceeding.
-
-We recommend the second approach, which is to install third-party software on each lab's template VM. There are a few key points to highlight as part of this solution:
+There are a few key points to highlight as part of this solution:
- If you plan to use the [auto-shutdown settings](./cost-management-guide.md#automatic-shutdown-settings-for-cost-control), you'll need to unblock several Azure host names in the third-party software. The auto-shutdown settings use a diagnostic extension that must be able to communicate back to Lab Services. Otherwise, the auto-shutdown settings will fail to enable for the lab. - You may also want to have each student use a non-admin account on their VM so that they can't uninstall the content filtering software. Adding a non-admin account must be done when creating the lab.
lab-services Reliability In Azure Lab Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/reliability-in-azure-lab-services.md
Title: Reliability in Azure Lab Services description: Learn about reliability in Azure Lab Services -+ Last updated 08/18/2022
Currently, the service is not zonal. That is, you can't configure a lab or the
There are no increased SLAs available for availability in Azure Lab Services. For the monthly uptime SLAs for Azure Lab Services, see [SLA for Azure Lab Services](https://azure.microsoft.com/support/legal/sla/lab-services/v1_0/).
-The Azure Lab Services infrastructure uses Cosmos DB storage. The Cosmos DB storage region is the same as the region where the lab plan is located. All the regional Cosmos DB accounts are single region. In the zone-redundant regions listed in this article, the Cosmos DB accounts are single region with Availability Zones. In the other regions, the accounts are single region without Availability Zones. For high availability capabilities for these account types, see [SLAs for Cosmos DB](/azure/cosmos-db/high-availability#slas).
+The Azure Lab Services infrastructure uses Azure Cosmos DB storage. The Azure Cosmos DB storage region is the same as the region where the lab plan is located. All the regional Azure Cosmos DB accounts are single region. In the zone-redundant regions listed in this article, the Azure Cosmos DB accounts are single region with Availability Zones. In the other regions, the accounts are single region without Availability Zones. For high availability capabilities for these account types, see [SLAs for Azure Cosmos DB](/azure/cosmos-db/high-availability#slas).
### Zone down experience
In the event of a zone outage in these regions, you can still perform the follow
- Configure lab schedules - Create/manage labs and VMs in regions unaffected by the zone outage.
-Data loss may occur only with an unrecoverable disaster in the Cosmos DB region. For more information, see [Region Outages](/azure/cosmos-db/high-availability#region-outages).
+Data loss may occur only with an unrecoverable disaster in the Azure Cosmos DB region. For more information, see [Region outages](/azure/cosmos-db/high-availability#region-outages).
For regions not listed, access to the Azure Lab Services infrastructure is not guaranteed when there is a zone outage in the region containing the lab plan. You will only be able to perform the following tasks:
Azure Lab Services does not provide any service-specific signals about an outage
## Next steps > [!div class="nextstepaction"]
-> [Resiliency in Azure](/azure/availability-zones/overview)
+> [Resiliency in Azure](/azure/availability-zones/overview)
lighthouse Migration At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/migration-at-scale.md
The workflow for this model will be similar to the following:
## Partner recognition for customer migrations
-As a member of the [Microsoft Partner Network](https://partner.microsoft.com), you can link your partner ID with the credentials used to manage delegated customer resources. This allows Microsoft to attribute influence and Azure consumed revenue to your organization based on the tasks you perform for customers, including migration projects.
+As a member of the [Microsoft Cloud Partner Program](https://partner.microsoft.com), you can link your partner ID with the credentials used to manage delegated customer resources. This allows Microsoft to attribute influence and Azure consumed revenue to your organization based on the tasks you perform for customers, including migration projects.
For more information, see [Link your partner ID to track your impact on delegated resources](partner-earned-credit.md).
lighthouse Onboard Customer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/onboard-customer.md
Whenever possible, we recommend using Azure AD user groups for each assignment w
When defining your authorizations, be sure to follow the principle of least privilege so that users only have the permissions needed to complete their job. For information about supported roles and best practices, see [Tenants, users, and roles in Azure Lighthouse scenarios](../concepts/tenants-users-roles.md).
-To track your impact across customer engagements and receive recognition, associate your Microsoft Partner Network (MPN) ID with at least one user account that has access to each of your onboarded subscriptions. You'll need to perform this association in your service provider tenant. We recommend creating a service principal account in your tenant that is associated with your MPN ID, then including that service principal every time you onboard a customer. For more info, see [Link your partner ID to enable partner earned credit on delegated resources](partner-earned-credit.md).
+To track your impact across customer engagements and receive recognition, associate your Microsoft Cloud Partner Program ID with at least one user account that has access to each of your onboarded subscriptions. You'll need to perform this association in your service provider tenant. We recommend creating a service principal account in your tenant that is associated with your partner ID, then including that service principal every time you onboard a customer. For more info, see [Link your partner ID to enable partner earned credit on delegated resources](partner-earned-credit.md).
> [!TIP] > You can also create *eligible authorizations* that let users in your managing tenant temporarily elevate their role. This feature is currently in public preview and has specific licensing requirements. For more information, see [Create eligible authorizations](create-eligible-authorizations.md).
lighthouse Partner Earned Credit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/partner-earned-credit.md
# Link your partner ID to track your impact on delegated resources
-If you're a member of the [Microsoft Partner Network](https://partner.microsoft.com/), you can link your partner ID with the credentials used to manage delegated customer resources. This link allows Microsoft to identify and recognize partners who drive Azure customer success. It also allows [CSP (Cloud Solution Provider)](/partner-center/csp-overview) partners to receive [partner earned credit for managed services (PEC)](/partner-center/partner-earned-credit) for customers who have [signed the Microsoft Customer Agreement (MCA)](/partner-center/confirm-customer-agreement) and are [under the Azure plan](/partner-center/azure-plan-get-started).
+If you're a member of the [Microsoft Cloud Partner Program](https://partner.microsoft.com/), you can link your partner ID with the credentials used to manage delegated customer resources. This link allows Microsoft to identify and recognize partners who drive Azure customer success. It also allows [CSP (Cloud Solution Provider)](/partner-center/csp-overview) partners to receive [partner earned credit for managed services (PEC)](/partner-center/partner-earned-credit) for customers who have [signed the Microsoft Customer Agreement (MCA)](/partner-center/confirm-customer-agreement) and are [under the Azure plan](/partner-center/azure-plan-get-started).
-To earn recognition for Azure Lighthouse activities, you'll need to [link your MPN ID](../../cost-management-billing/manage/link-partner-id.md) with at least one user account in your managing tenant, and ensure that the linked account has access to each of your onboarded subscriptions.
+To earn recognition for Azure Lighthouse activities, you'll need to [link your partner ID](../../cost-management-billing/manage/link-partner-id.md) with at least one user account in your managing tenant, and ensure that the linked account has access to each of your onboarded subscriptions.
## Associate your partner ID when you onboard new customers
-Use the following process to link your partner ID (and enable partner earned credit, if applicable). You'll need to know your [MPN partner ID](/partner-center/partner-center-account-setup#locate-your-mpn-id) to complete these steps. Be sure to use the **Associated MPN ID** shown on your partner profile.
+Use the following process to link your partner ID (and enable partner earned credit, if applicable). You'll need to know your [partner ID](/partner-center/partner-center-account-setup#locate-your-partner-id) to complete these steps. Be sure to use the **Associated Partner ID** shown on your partner profile.
-For simplicity, we recommend creating a service principal account in your tenant, linking it to your **Associated MPN ID**, then granting it an [Azure built-in role that is eligible for PEC](/partner-center/azure-roles-perms-pec) to every customer that you onboard.
+For simplicity, we recommend creating a service principal account in your tenant, linking it to your **Associated Partner ID**, then granting it an [Azure built-in role that is eligible for PEC](/partner-center/azure-roles-perms-pec) to every customer that you onboard.
1. [Create a service principal user account](../../active-directory/develop/howto-authenticate-service-principal-powershell.md) in your managing tenant. For this example, we'll use the name *Provider Automation Account* for this service principal account.
-1. Using that service principal account, [link to your Associated MPN ID](../../cost-management-billing/manage/link-partner-id.md#link-to-a-partner-id) in your managing tenant. You only need to do this one time.
+1. Using that service principal account, [link to your Associated Partner ID](../../cost-management-billing/manage/link-partner-id.md#link-to-a-partner-id) in your managing tenant. You only need to do this one time.
1. When you onboard a customer [using ARM templates](onboard-customer.md) or [Managed Service offers](publish-managed-services-offers.md), be sure to include at least one authorization which includes the Provider Automation Account as a user with an [Azure built-in role that is eligible for PEC](/partner-center/azure-roles-perms-pec). This role must be granted as a permanent assignment, not as a just-in-time [eligible authorization](create-eligible-authorizations.md), in order for PEC to apply. By following these steps, every customer tenant you manage will be associated with your partner ID. The Provider Automation Account does not need to authenticate or perform any actions in the customer tenant.
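
A minimal CLI sketch of steps 1 and 2 (all IDs below are placeholders, and the `managementpartner` Azure CLI extension is assumed):

```azurecli
# Hypothetical sketch: create the service principal in the managing tenant.
az ad sp create-for-rbac --name "Provider-Automation-Account"

# Sign in as that service principal, then link it to your Associated Partner ID.
az login --service-principal --username <app-id> --password <password> --tenant <managing-tenant-id>
az extension add --name managementpartner
az managementpartner partner create --partner-id <associated-partner-id>
```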
By following these steps, every customer tenant you manage will be associated wi
## Add your partner ID to previously onboarded customers
-If you have already onboarded a customer, you may not want to perform another deployment to add your Provider Automation Account service principal. Instead, you can link your **Associated MPN ID** with a user account which already has access to work in that customer's tenant. Be sure that the account has been granted an [Azure built-in role that is eligible for PEC](/partner-center/azure-roles-perms-pec) as a permanent role assignment.
+If you have already onboarded a customer, you may not want to perform another deployment to add your Provider Automation Account service principal. Instead, you can link your **Associated Partner ID** with a user account which already has access to work in that customer's tenant. Be sure that the account has been granted an [Azure built-in role that is eligible for PEC](/partner-center/azure-roles-perms-pec) as a permanent role assignment.
-Once the account has been [linked to your Associated MPN ID](../../cost-management-billing/manage/link-partner-id.md#link-to-a-partner-id) in your managing tenant, you will be able to track recognition for your impact on that customer.
+Once the account has been [linked to your Associated Partner ID](../../cost-management-billing/manage/link-partner-id.md#link-to-a-partner-id) in your managing tenant, you will be able to track recognition for your impact on that customer.
## Confirm partner earned credit
You can also use the [Partner Center SDK](/partner-center/develop/get-invoice-un
## Next steps -- Learn more about the [Microsoft Partner Network](/partner-center/mpn-overview).
+- Learn more about the [Microsoft Cloud Partner Program](/partner-center/mpn-overview).
- Learn [how PEC is calculated and paid](/partner-center/partner-earned-credit-explanation). - [Onboard customers](onboard-customer.md) to Azure Lighthouse.
load-balancer Load Balancer Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-basic-upgrade-guidance.md
Last updated 09/19/2022
# Upgrading from basic Load Balancer - Guidance >[!Important]
->On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement(https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/)]. If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. This article will help guide you through the upgrade process.
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. This article will help guide you through the upgrade process.
In this article, we'll discuss guidance for upgrading your Basic Load Balancer instances to Standard Load Balancer. Standard Load Balancer is recommended for all production instances and provides many [key differences](#basic-load-balancer-sku-vs-standard-load-balancer-sku) to your infrastructure.
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
If all probes for all instances in a backend pool fail, existing UDP flows will
## Probe source IP address
-Load Balancer uses a distributed probing service for its internal health model. The probing service resides on each host where VMs and can be programmed on-demand to generate health probes per the customer's configuration. The health probe traffic is directly between the probing service that generates the health probe and the customer VM. All Load Balancer health probes originate from the IP address 168.63.129.16 as their source. You can use an IP address space inside of a virtual network that isn't RFC1918 space. Use of a globally reserved, Microsoft owned IP address reduces the chance of an IP address conflict with the IP address space you use inside the virtual network. The IP address is the same in all regions. The IP doesn't change and isn't a security risk. Only the internal Azure platform can source a packet from the IP address.
+Load Balancer uses a distributed probing service for its internal health model. The probing service resides on each host that runs VMs and can be programmed on demand to generate health probes per the customer's configuration. The health probe traffic flows directly between the probing service that generates the health probe and the customer VM. All Load Balancer health probes originate from the IP address 168.63.129.16 as their source.
The **AzureLoadBalancer** service tag identifies this source IP address in your [network security groups](../virtual-network/network-security-groups-overview.md) and permits health probe traffic by default.
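
If you manage NSG rules explicitly, the following is a minimal sketch of an inbound rule that uses the service tag (the resource names and destination port are placeholders):

```azurecli
# Hypothetical example: explicitly allow health probes from the Load Balancer
# probing service. The default NSG rules already permit this traffic.
az network nsg rule create \
    --resource-group <resource-group> \
    --nsg-name <nsg-name> \
    --name AllowAzureLBHealthProbes \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes AzureLoadBalancer \
    --source-port-ranges '*' \
    --destination-address-prefixes '*' \
    --destination-port-ranges 80
```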
load-balancer Move Across Regions Internal Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-internal-load-balancer-portal.md
The following steps show how to prepare the internal load balancer for the move
``` For more information on the differences between basic and standard sku load balancers, see [Azure Standard Load Balancer overview](./load-balancer-overview.md)
+ * **Availability zone** - You can change the zone(s) of the load balancer's frontend by changing the **zone** property. If the zone property isn't specified, the frontend will be created as no-zone. You can specify a single zone to create a zonal frontend, or all three zones for a zone-redundant frontend.
+
+ ```json
+ "frontendIPConfigurations": [
+ {
+ "name": "myfrontendIPinbound",
+ "etag": "W/\"39e5e9cd-2d6d-491f-83cf-b37a259d86b6\"",
+ "type": "Microsoft.Network/loadBalancers/frontendIPConfigurations",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "privateIPAddress": "10.0.0.6",
+ "privateIPAllocationMethod": "Dynamic",
+ "subnet": {
+ "id": "[concat(parameters('virtualNetworks_myVNET2_internalid'), '/subnet-1')]"
+ },
+ "loadBalancingRules": [
+ {
+ "id": "[concat(resourceId('Microsoft.Network/loadBalancers', parameters('loadBalancers_myLoadBalancer_name')), '/loadBalancingRules/myInboundRule')]"
+ }
+ ],
+ "privateIPAddressVersion": "IPv4"
+ },
+ "zones": [
+ "1",
+ "2",
+ "3"
+ ]
+ },
+ ```
+ For more about availability zones, see [Regions and availability zones in Azure](../availability-zones/az-overview.md).
+ * **Load balancing rules** - You can add or remove load balancing rules in the configuration by adding or removing entries to the **loadBalancingRules** section of the **template.json** file: ```json
In this tutorial, you moved an Azure internal load balancer from one region to a
- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)-- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
load-balancer Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/skus.md
# Azure Load Balancer SKUs >[!Important]
->On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement(https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/)]. If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. See [Upgrading from Basic Load Balancer - Guidance](load-balancer-basic-upgrade-guidance.md) for upgrade guidance.
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. For upgrade guidance, see [Upgrading from Basic Load Balancer - Guidance](load-balancer-basic-upgrade-guidance.md).
Azure Load Balancer has three SKUs. ## <a name="skus"></a> SKU comparison Azure Load Balancer has 3 SKUs - Basic, Standard, and Gateway. Each SKU is catered towards a specific scenario and has differences in scale, features, and pricing.
-To compare and understand the differences between Basic and Standard SKU, see the following table. For more information, see [Azure Standard Load Balancer overview](./load-balancer-overview.md). For information on Gateway SKU - catered for third-party network virtual appliances (NVAs), see [Gateway Load Balancer overview](gateway-overview.md)
+To compare and understand the differences between Basic and Standard SKU, see the following table.
->[!NOTE]
-> Microsoft recommends Standard load balancer. See [Upgrade from Basic to Standard Load Balancer](upgrade-basic-standard.md) for a guided instruction on upgrading SKUs along with an upgrade script.
->
-> Standalone VMs, availability sets, and virtual machine scale sets can be connected to only one SKU, never both. Load balancer and the public IP address SKU must match when you use them with public IP addresses. Load balancer and public IP SKUs aren't mutable.
-
-| | Standard Load Balancer | Basic Load Balancer |
+| | Standard Load Balancer | Basic Load Balancer | Gateway Load Balancer |
| | | | | **Scenario** | Equipped for load-balancing network layer traffic when high performance and ultra-low latency is needed. Routes traffic within and across regions, and to availability zones for high resiliency. | Equipped for small-scale applications that don't need high availability or redundancy. Not compatible with availability zones. | | **Backend type** | IP based, NIC based | NIC based |
To compare and understand the differences between Basic and Standard SKU, see th
| **[Private Link Support](../private-link/private-link-overview.md)** | Standard ILB is supported via Private Link | Not supported | | **[Global tier (Preview)](./cross-region-overview.md)** | Standard LB supports the Global tier for Public LBs enabling cross-region load balancing | Not supported |
-For more information, see [Load balancer limits](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer). For Standard Load Balancer details, see [overview](./load-balancer-overview.md), [pricing](https://aka.ms/lbpricing), and [SLA](https://aka.ms/lbsla).
+For more information, see [Load balancer limits](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer). For Standard Load Balancer details, see [overview](./load-balancer-overview.md), [pricing](https://aka.ms/lbpricing), and [SLA](https://aka.ms/lbsla). For information on Gateway SKU - catered for third-party network virtual appliances (NVAs), see [Gateway Load Balancer overview](gateway-overview.md)
## Limitations- - A standalone virtual machine resource, availability set resource, or virtual machine scale set resource can reference one SKU, never both. - [Move operations](../azure-resource-manager/management/move-resource-group-and-subscription.md): - Resource group move operations (within same subscription) **are supported** for Standard Load Balancer and Standard Public IP. - [Subscription group move operations](../azure-resource-manager/management/move-support-resources.md) are **not** supported for Standard Load Balancers. ## Next steps- - See [Create a public Standard Load Balancer](quickstart-load-balancer-standard-public-portal.md) to get started with using a Load Balancer. - Learn about using [Standard Load Balancer and Availability Zones](load-balancer-standard-availability-zones.md). - Learn about [Health Probes](load-balancer-custom-probe-overview.md).
load-balancer Troubleshoot Outbound Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-outbound-connection.md
description: Resolutions for common problems with outbound connectivity with Azu
+ Last updated 04/21/2022
For smaller scale deployments, you can consider assigning a public IP to a VM. I
We highly recommend using NAT gateway instead, as assigning individual public IP addresses isn't a scalable solution. > [!NOTE]
-> If you need to connect your Azure virtual network to Azure PaaS services like Storage, SQL, Cosmos DB, or any other of the Azure services [listed here](../private-link/availability.md), you can leverage Azure Private Link to avoid SNAT entirely. Azure Private Link sends traffic from your virtual network to Azure services over the Azure backbone network instead of over the internet.
+> If you need to connect your Azure virtual network to Azure PaaS services like Azure Storage, Azure SQL, Azure Cosmos DB, or other [available Azure services](../private-link/availability.md), you can use Azure Private Link to avoid SNAT entirely. Azure Private Link sends traffic from your virtual network to Azure services over the Azure backbone network instead of over the internet.
> >Private Link is the recommended option over service endpoints for private access to Azure hosted services. For more information on the difference between Private Link and service endpoints, see [Compare Private Endpoints and Service Endpoints](../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints).
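
As a rough sketch of the NAT gateway recommendation above (all resource names are placeholders), you associate a NAT gateway with the backend subnet so that outbound flows no longer depend on load balancer SNAT ports:

```azurecli
# Hypothetical example: attach a NAT gateway to the backend subnet for outbound SNAT.
az network public-ip create \
    --resource-group <resource-group> \
    --name myNatGatewayIP \
    --sku Standard

az network nat gateway create \
    --resource-group <resource-group> \
    --name myNatGateway \
    --public-ip-addresses myNatGatewayIP \
    --idle-timeout 4

az network vnet subnet update \
    --resource-group <resource-group> \
    --vnet-name <vnet-name> \
    --name <backend-subnet-name> \
    --nat-gateway myNatGateway
```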
load-balancer Upgrade Basic Standard Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-virtual-machine-scale-sets.md
# Upgrade a basic load balancer used with Virtual Machine Scale Sets >[!Important]
->On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement(https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/)]. If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. See [Upgrading from Basic Load Balancer - Guidance](load-balancer-basic-upgrade-guidance.md) for upgrade guidance.
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. This article will help guide you through the upgrade process.
[Azure Standard Load Balancer](load-balancer-overview.md) offers a rich set of functionality and high availability through zone redundancy. To learn more about Load Balancer SKU, see [comparison table](./skus.md#skus).
load-balancer Upgrade Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard.md
# Upgrade from a basic public to standard public load balancer >[!Important]
->On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement(https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/)]. If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. See [Upgrading from Basic Load Balancer - Guidance](load-balancer-basic-upgrade-guidance.md) for upgrade guidance.
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. This article will help guide you through the upgrade process.
[Azure Standard Load Balancer](load-balancer-overview.md) offers a rich set of functionality and high availability through zone redundancy. To learn more about Azure Load Balancer SKUs, see [comparison table](./skus.md#skus).
load-balancer Upgrade Basicinternal Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basicInternal-standard.md
# Upgrade Azure Internal Load Balancer- No Outbound Connection Required >[!Important]
->On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement(https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/)]. If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. See [Upgrading from Basic Load Balancer - Guidance](load-balancer-basic-upgrade-guidance.md) for upgrade guidance.
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. This article will help guide you through the upgrade process.
[Azure Standard Load Balancer](load-balancer-overview.md) offers a rich set of functionality and high availability through zone redundancy. To learn more about Load Balancer SKU, see [comparison table](./skus.md#skus).
load-balancer Upgrade Internalbasic To Publicstandard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-internalbasic-to-publicstandard.md
# Upgrade an internal basic load balancer - Outbound connections required >[!Important]
->On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement(https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/)]. If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. See [Upgrading from Basic Load Balancer - Guidance](load-balancer-basic-upgrade-guidance.md) for upgrade guidance.
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. This article will help guide you through the upgrade process.
A standard [Azure Load Balancer](load-balancer-overview.md) offers increased functionality and high availability through zone redundancy. For more information about Azure Load Balancer SKUs, see [Azure Load Balancer SKUs](./skus.md#skus). A standard internal Azure Load Balancer doesn't provide outbound connectivity. The PowerShell script in this article migrates the basic load balancer configuration to a standard public load balancer.
load-balancer Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/whats-new.md
You can also find the latest Azure Load Balancer updates and subscribe to the RS
| Type |Name |Description |Date added | | ||||
+| SKU | [Basic Load Balancer is retiring on 30 September 2025](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/) | Basic Load Balancer will retire on 30th September 2025. Make sure to [migrate to Standard SKU](load-balancer-basic-upgrade-guidance.md) before this date. | September 2022 |
| SKU | [Gateway Load Balancer now generally available](https://azure.microsoft.com/updates/generally-available-azure-gateway-load-balancer/) | Gateway Load Balancer is a new SKU of Azure Load Balancer targeted for scenarios requiring transparent NVA (network virtual appliance) insertion. Learn more about [Gateway Load Balancer](gateway-overview.md) or our supported [third party partners](gateway-partners.md). | July 2022 | | SKU | [Gateway Load Balancer public preview](https://azure.microsoft.com/updates/gateway-load-balancer-preview/) | Gateway Load Balancer is a fully managed service enabling you to deploy, scale, and enhance the availability of third party network virtual appliances (NVAs) in Azure. You can add your favorite third party appliance whether it is a firewall, inline DDoS appliance, deep packet inspection system, or even your own custom appliance into the network path transparently ΓÇô all with a single click.| November 2021 | | Feature | [Support for IP-based backend pools (General Availability)](https://azure.microsoft.com/updates/iplbg)|March 2021 |
load-testing How To Test Secured Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-secured-endpoints.md
# Load test secured endpoints with Azure Load Testing Preview
-In this article, you learn how to load test applications with Azure Load Testing Preview that require authentication. Azure Load Testing enables you to [authenticate with endpoints by using shared secrets or credentials](#authenticate-with-a-shared-secret-or-credentials), or to [authenticate with client certificates](#authenticate-with-client-certificates).
+In this article, you learn how to load test secured applications with Azure Load Testing Preview. Secured applications require authentication to access the endpoint. Azure Load Testing enables you to [authenticate with endpoints by using shared secrets or credentials](#authenticate-with-a-shared-secret-or-credentials), or to [authenticate with client certificates](#authenticate-with-client-certificates).
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Prerequisites
+* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* An Azure Load Testing resource. To create a load testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md#create-an-azure-load-testing-resource).
+* If you're using client certificates, an Azure key vault. To create a key vault, see the quickstart [Create a key vault using the Azure portal](/azure/key-vault/general/quick-create-portal).
+ ## Authenticate with a shared secret or credentials In this scenario, the application endpoint requires that you use a shared secret, such as an access token, an API key, or user credentials to authenticate. In the JMeter script, you have to provide this security information with each application request. For example, to load test a web endpoint that uses OAuth 2.0, you add an `Authorization` header, which contains the access token, to the HTTP request.
-To avoid storing, and disclosing, security information in the JMeter script, Azure Load Testing enables you to securely store secrets in Azure Key Vault or in the CI/CD secrets store. By using a custom JMeter function `GetSecret`, you can retrieve the secret value and pass it to the application endpoint.
-
-The following diagram shows how to use shared secrets or credentials to authenticate with an application endpoint in your load test.
+The following diagram shows how to use shared secrets or credentials to authenticate with an application endpoint in your load test. To avoid storing, and disclosing, security information in the JMeter script, you can securely store secrets in Azure Key Vault or in the CI/CD secrets store. In the JMeter script, you then use a custom JMeter function `GetSecret` to retrieve the secret value. Finally, you specify the secret value in the JMeter request to the application endpoint.
:::image type="content" source="./media/how-to-test-secured-endpoints/load-test-authentication-with-shared-secret.png" alt-text="Diagram that shows how to use shared-secret authentication with Azure Load Testing.":::
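
Before the test runs, the secret has to exist in the secrets store. A minimal sketch of storing an access token in Azure Key Vault (the vault name, secret name, and token value are placeholders):

```azurecli
# Hypothetical example: store the access token as a Key Vault secret so the
# load test can retrieve it with the GetSecret custom function.
az keyvault secret set \
    --vault-name <your-key-vault-name> \
    --name appToken \
    --value "<access-token-value>"
```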
The following diagram shows how to use shared secrets or credentials to authenti
1. Update the JMeter script to retrieve the secret value: 1. Create a user-defined variable that retrieves the secret value with the `GetSecret` custom function:
- <!-- Add screenshot -->
- 1. Update the JMeter sampler component to pass the secret in the request. For example, to provide an OAuth2 access token, you configure the `Authentication` HTTP header:
- <!-- Add screenshot -->
+ :::image type="content" source="./media/how-to-test-secured-endpoints/jmeter-user-defined-variables.png" alt-text="Screenshot that shows how to add a user-defined variable that uses the GetSecret function in JMeter.":::
+
+ 1. Update the JMeter sampler component to pass the secret in the request. For example, to provide an OAuth2 access token, you configure the `Authorization` HTTP header:
+ :::image type="content" source="./media/how-to-test-secured-endpoints/jmeter-add-http-header.png" alt-text="Screenshot that shows how to add an authorization header to a request in JMeter.":::
+
When you now run your load test, the JMeter script can retrieve the secret information from the secrets store and authenticate with the application endpoint. ## Authenticate with client certificates In this scenario, the application endpoint requires that you use a client certificate to authenticate. Azure Load Testing supports Public Key Certificate Standard #12 (PKCS12) type of certificates. You can use only one client certificate in a load test.
-To avoid storing, and disclosing, the client certificate alongside the JMeter script, Azure Load Testing uses Azure Key Vault to store the certificate. When you run the load test, Azure Load Testing passes the certificate to JMeter, which uses it to authenticate with the application endpoint. You don't have to update the JMeter script to use the client certificate.
-
-The following diagram shows how to use a client certificate to authenticate with an application endpoint in your load test.
+The following diagram shows how to use a client certificate to authenticate with an application endpoint in your load test. To avoid storing, and disclosing, the client certificate alongside the JMeter script, you store the certificate in Azure Key Vault. When you run the load test, Azure Load Testing reads the certificate from the key vault, and automatically passes it to JMeter. JMeter then transparently passes the certificate in all application requests. You don't have to update the JMeter script to use the client certificate.
:::image type="content" source="./media/how-to-test-secured-endpoints/load-test-authentication-with-client-certificate.png" alt-text="Diagram that shows how to use client-certificate authentication with Azure Load Testing.":::
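
A minimal sketch of importing a PKCS12 (.pfx) client certificate into the key vault (the vault name, certificate name, file path, and password are placeholders):

```azurecli
# Hypothetical example: import the PKCS12 client certificate into Azure Key Vault
# so that the load test can reference it.
az keyvault certificate import \
    --vault-name <your-key-vault-name> \
    --name my-client-certificate \
    --file ./client-certificate.pfx \
    --password "<pfx-password>"
```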
logic-apps Create Custom Built In Connector Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-custom-built-in-connector-standard.md
ms.suite: integration + Last updated 08/20/2022
-# As a developer, I want learn how to create my own custom built-in connector operations to use and run in my Standard logic app workflows.
# Create custom built-in connectors for Standard logic apps in single-tenant Azure Logic Apps
Last updated 08/20/2022
If you need connectors that aren't available in Standard logic app workflows, you can create your own built-in connectors using the same extensibility model that's used by the [*service provider-based built-in connectors*](custom-connector-overview.md#service-provider-interface-implementation) available for Standard workflows in single-tenant Azure Logic Apps. This extensibility model is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md).
-This article shows how to create an example custom built-in Cosmos DB connector, which has a single Azure Functions-based trigger and no actions. The trigger fires when a new document is added to the lease collection or container in Cosmos DB and then runs a workflow that uses the input payload as the Cosmos document.
+This article shows how to create an example custom built-in Azure Cosmos DB connector, which has a single Azure Functions-based trigger and no actions. The trigger fires when a new document is added to the lease collection or container in Azure Cosmos DB and then runs a workflow that uses the input payload as the Azure Cosmos DB document.
| Operation | Operation details | Description | |--|-|-|
-| Trigger | When a document is received | This trigger operation runs when an insert operation happens in the specified Cosmos DB database and collection. |
+| Trigger | When a document is received | This trigger operation runs when an insert operation happens in the specified Azure Cosmos DB database and collection. |
| Action | None | This connector doesn't define any action operations. | ||||
-This sample connector uses the same functionality as the [Azure Cosmos DB trigger for Azure Functions](../azure-functions/functions-bindings-cosmosdb-v2-trigger.md), which is based on [Azure Functions triggers and bindings](../azure-functions/functions-triggers-bindings.md). For the complete sample, review [Sample custom built-in Cosmos DB connector - Azure Logic Apps Connector Extensions](https://github.com/Azure/logicapps-connector-extensions/tree/CosmosDB/src/CosmosDB).
+This sample connector uses the same functionality as the [Azure Cosmos DB trigger for Azure Functions](../azure-functions/functions-bindings-cosmosdb-v2-trigger.md), which is based on [Azure Functions triggers and bindings](../azure-functions/functions-triggers-bindings.md). For the complete sample, review [Sample custom built-in Azure Cosmos DB connector - Azure Logic Apps Connector Extensions](https://github.com/Azure/logicapps-connector-extensions/tree/CosmosDB/src/CosmosDB).
For more information, review the following documentation:
For more information, review the following documentation:
> > This authoring capability is currently available only in Visual Studio Code.
-* An Azure Cosmos account, database, and container or collection. For more information, review [Quickstart: Create an Azure Cosmos account, database, container and items from the Azure portal](../cosmos-db/sql/create-cosmosdb-resources-portal.md).
+* An Azure Cosmos DB account, database, and container or collection. For more information, review [Quickstart: Create an Azure Cosmos DB account, database, container and items from the Azure portal](../cosmos-db/sql/create-cosmosdb-resources-portal.md).
## High-level steps
The following outline describes the high-level steps to build the example connec
To provide the operations for the sample built-in connector, in the **Microsoft.Azure.Workflows.WebJobs.Extension** NuGet package, implement the methods for the following interfaces. The following diagram shows the interfaces with the method implementations that the Azure Logic Apps designer and runtime expect for a custom built-in connector that has an Azure Functions-based trigger:
-![Conceptual class diagram showing method implementation for sample Cosmos DB custom built-in connector.](./media/create-custom-built-in-connector-standard/service-provider-cosmos-db-example.png)
+![Conceptual class diagram showing method implementation for sample Azure Cosmos DB custom built-in connector.](./media/create-custom-built-in-connector-standard/service-provider-cosmos-db-example.png)
### IServiceOperationsProvider
This interface includes the following methods that provide the operation manifes
If your connector has actions, the runtime in Azure Logic Apps requires the [**InvokeOperation()**](#invokeoperation) method to call each action in your connector that runs during workflow execution. If your connector doesn't have actions, you don't have to implement the **InvokeOperation()** method.
- In this example, the Cosmos DB custom built-in connector doesn't have actions. However, the method is included in this example for completeness.
+ In this example, the Azure Cosmos DB custom built-in connector doesn't have actions. However, the method is included in this example for completeness.
For more information about these methods and their implementation, review these methods later in this article.
public string GetBindingConnectionInformation(string operationId, InsensitiveDic
#### InvokeOperation()
-The example Cosmos DB custom built-in connector doesn't have actions, but the following method is included for completeness:
+The example Azure Cosmos DB custom built-in connector doesn't have actions, but the following method is included for completeness:
```csharp public Task<ServiceOperationResponse> InvokeOperation(string operationId, InsensitiveDictionary<JToken> connectionParameters, ServiceOperationRequest serviceOperationRequest)
This method has a default implementation, so you don't need to explicitly implem
## Register your connector
-To load your custom built-in connector extension during the Azure Functions runtime start process, you have to add the Azure Functions extension registration as a startup job and register your connector as a service provider in service provider list. Based on the type of data that your built-in trigger needs as inputs, optionally add the converter. This example converts the **Document** data type for Cosmos DB Documents to a **JObject** array.
+To load your custom built-in connector extension during the Azure Functions runtime start process, you have to add the Azure Functions extension registration as a startup job and register your connector as a service provider in the service provider list. Based on the type of data that your built-in trigger needs as inputs, optionally add a converter. This example converts the **Document** data type for Azure Cosmos DB documents to a **JObject** array.
The following sections show how to register your custom built-in connector as an Azure Functions extension.
The following sections show how to register your custom built-in connector as an
1. Implement the **IWebJobsStartup** interface. In the **Configure()** method, register the extension and inject the service provider.
- For example, the following code snippet shows the startup class implementation for the sample custom built-in Cosmos DB connector:
+ For example, the following code snippet shows the startup class implementation for the sample custom built-in Azure Cosmos DB connector:
```csharp using Microsoft.Azure.WebJobs;
The following sections show how to register your custom built-in connector as an
### Register the service provider
-Now, register the service provider implementation as an Azure Functions extension with the Azure Logic Apps engine. This example uses the built-in [Azure Cosmos DB trigger for Azure Functions](../azure-functions/functions-bindings-cosmosdb-v2-trigger.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp) as a new trigger. This example also registers the new Cosmos DB service provider for an existing list of service providers, which is already part of the Azure Logic Apps extension. For more information, review [Register Azure Functions binding extensions](../azure-functions/functions-bindings-register.md).
+Now, register the service provider implementation as an Azure Functions extension with the Azure Logic Apps engine. This example uses the built-in [Azure Cosmos DB trigger for Azure Functions](../azure-functions/functions-bindings-cosmosdb-v2-trigger.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp) as a new trigger. This example also registers the new Azure Cosmos DB service provider for an existing list of service providers, which is already part of the Azure Logic Apps extension. For more information, review [Register Azure Functions binding extensions](../azure-functions/functions-bindings-register.md).
```csharp using Microsoft.Azure.Documents;
namespace ServiceProviders.CosmosDb.Extensions
serviceOperationsProvider.RegisterService(serviceName: CosmosDBServiceOperationsProvider.ServiceName, serviceOperationsProviderId: CosmosDBServiceOperationsProvider.ServiceId, serviceOperationsProviderInstance: operationsProvider); }
- // Convert the Cosmos Document array to a generic JObject array.
+ // Convert the Azure Cosmos DB Document array to a generic JObject array.
public static JObject[] ConvertDocumentToJObject(IReadOnlyList<Document> data) { List<JObject> jobjects = new List<JObject>();
namespace ServiceProviders.CosmosDb.Extensions
// In the Initialize method, you can add any custom implementation. public void Initialize(ExtensionConfigContext context) {
- // Convert the Cosmos Document list to a JObject array.
+ // Convert the Azure Cosmos DB Document list to a JObject array.
context.AddConverter<IReadOnlyList<Document>, JObject[]>(ConvertDocumentToJObject); } }
namespace ServiceProviders.CosmosDb.Extensions
Azure Logic Apps has a generic way to handle any Azure Functions built-in trigger by using the **JObject** array. However, if you want to convert the read-only list of Azure Cosmos DB documents into a **JObject** array, you can add a converter. When the converter is ready, register the converter as part of **ExtensionConfigContext** as shown earlier in this example: ```csharp
-// Convert the Cosmos document list to a JObject array.
+// Convert the Azure Cosmos DB document list to a JObject array.
context.AddConverter<IReadOnlyList<Document>, JObject[]>(ConvertDocumentToJObject); ```
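The converter method itself is truncated in the snippet above. As a rough sketch of how it can be completed, each received `Document` can be parsed into a `JObject`. The wrapper class below is only for illustration; in the sample, the method is a static member of the extension class.

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Newtonsoft.Json.Linq;

public static class DocumentConversion
{
    // Sketch: convert each Azure Cosmos DB Document into a JObject by parsing
    // its JSON payload, and return the results as an array for the workflow engine.
    public static JObject[] ConvertDocumentToJObject(IReadOnlyList<Document> data)
    {
        var jobjects = new List<JObject>();

        foreach (Document document in data)
        {
            // Document.ToString() returns the document's serialized JSON.
            jobjects.Add(JObject.Parse(document.ToString()));
        }

        return jobjects.ToArray();
    }
}
```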
To add the NuGet reference from the previous section, in the extension bundle na
The operations picker shows your custom built-in connector and trigger, for example:
- ![Screenshot showing Visual Studio Code and the designer for a Standard logic app workflow with the new custom built-in Cosmos DB connector.](./media/create-custom-built-in-connector-standard/visual-studio-code-built-in-connector-picker.png)
+ ![Screenshot showing Visual Studio Code and the designer for a Standard logic app workflow with the new custom built-in Azure Cosmos DB connector.](./media/create-custom-built-in-connector-standard/visual-studio-code-built-in-connector-picker.png)
1. From the **Triggers** list, select your custom built-in trigger to start your workflow.
To add the NuGet reference from the previous section, in the extension bundle na
| Property | Required | Value | Description | |-|-|-|-|
- | **Connection name** | Yes | <*Cosmos-DB-connection-name*> | The name for the Cosmos DB connection to create |
- | **Connection String** | Yes | <*Cosmos-DB-connection-string*> | The connection string for the Azure Cosmos DB database collection or lease collection where you want to add each new received document. |
+ | **Connection name** | Yes | <*Azure-Cosmos-DB-connection-name*> | The name for the Azure Cosmos DB connection to create |
+ | **Connection String** | Yes | <*Azure-Cosmos-DB-connection-string*> | The connection string for the Azure Cosmos DB database collection or lease collection where you want to add each new received document. |
||||| ![Screenshot showing the connection pane when using the connector for the first time.](./media/create-custom-built-in-connector-standard/visual-studio-code-built-in-connector-create-connection.png)
To add the NuGet reference from the previous section, in the extension bundle na
| Property | Required | Value | Description | |-|-|-|-|
- | **Database name** | Yes | <*Cosmos-DB-database-name*> | The name for the Cosmos DB database to use |
- | **Collection name** | Yes | <*Cosmos-DB-collection-name*> | The name for the Cosmos DB collection where you want to add each new received document. |
+ | **Database name** | Yes | <*Azure-Cosmos-DB-database-name*> | The name for the Azure Cosmos DB database to use |
+ | **Collection name** | Yes | <*Azure-Cosmos-DB-collection-name*> | The name for the Azure Cosmos DB collection where you want to add each new received document. |
||||| ![Screenshot showing the trigger properties pane.](./media/create-custom-built-in-connector-standard/visual-studio-code-built-in-connector-trigger-properties.png)
To add the NuGet reference from the previous section, in the extension bundle na
1. To trigger your workflow, in the Azure portal, open your Azure Cosmos DB account. On the account menu, select **Data Explorer**. Browse to the database and collection that you specified in the trigger. Add an item to the collection.
- ![Screenshot showing the Azure portal, Cosmos DB account, and Data Explorer open to the specified database and collection.](./media/create-custom-built-in-connector-standard/cosmos-db-account-test-add-item.png)
+ ![Screenshot showing the Azure portal, Azure Cosmos DB account, and Data Explorer open to the specified database and collection.](./media/create-custom-built-in-connector-standard/cosmos-db-account-test-add-item.png)
## Next steps
-* [Source for sample custom built-in Cosmos DB connector - Azure Logic Apps Connector Extensions](https://github.com/Azure/logicapps-connector-extensions/tree/CosmosDB/src/CosmosDB)
+* [Source for sample custom built-in Azure Cosmos DB connector - Azure Logic Apps Connector Extensions](https://github.com/Azure/logicapps-connector-extensions/tree/CosmosDB/src/CosmosDB)
* [Built-in Service Bus trigger: batching and session handling](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/azure-logic-apps-running-anywhere-built-in-service-bus-trigger/ba-p/2079995)
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
ms.suite: integration Previously updated : 08/22/2022 Last updated : 10/12/2022
Azure Logic Apps supports the [*system-assigned* managed identity](../active-dir
* A logic app resource can share the same user-assigned identity across a group of other logic app resources.
-This article shows how to enable and set up a managed identity for your logic app and provides an example for how use the identity for authentication. Unlike the system-assigned identity, which you don't have to manually create, you *do* have to manually create the user-assigned identity. This article shows how to create a user-assigned identity using the Azure portal and Azure Resource Manager template (ARM template). For Azure PowerShell, Azure CLI, and Azure REST API, review the following documentation:
+This article shows how to enable and set up a managed identity for your logic app and provides an example for how to use the identity for authentication. Unlike the system-assigned identity, which you don't have to manually create, you *do* have to manually create the user-assigned identity. This article shows how to create a user-assigned identity using the Azure portal and Azure Resource Manager template (ARM template). For Azure PowerShell, Azure CLI, and Azure REST API, review the following documentation:
| Tool | Documentation | |||
The following table lists the connectors that support using a managed identity i
| Connector type | Supported connectors | |-|-| | Built-in | - Azure API Management <br>- Azure App Services <br>- Azure Functions <br>- HTTP <br>- HTTP + Webhook <p>**Note**: HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. However, they don't support the user-assigned managed identity for authenticating the same connections. |
-| Managed | - Azure AD <br>- Azure AD Identity Protection <br>- Azure App Service <br>- Azure Automation <br>- Azure Blob Storage <br>- Azure Container Instance <br>- Azure Cosmos DB <br>- Azure Data Explorer <br>- Azure Data Factory <br>- Azure Data Lake <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure IoT Central V2 <br>- Azure IoT Central V3 <br>- Azure Key Vault <br>- Azure Log Analytics <br>- Azure Queues <br>- Azure Resource Manager <br>- Azure Service Bus <br>- Azure Sentinel <br>- Azure VM <br>- HTTP with Azure AD <br>- SQL Server |
+| Managed | - Azure AD Identity Protection <br>- Azure App Service <br>- Azure Automation <br>- Azure Blob Storage <br>- Azure Container Instance <br>- Azure Cosmos DB <br>- Azure Data Explorer <br>- Azure Data Factory <br>- Azure Data Lake <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure IoT Central V2 <br>- Azure IoT Central V3 <br>- Azure Key Vault <br>- Azure Log Analytics <br>- Azure Queues <br>- Azure Resource Manager <br>- Azure Service Bus <br>- Azure Sentinel <br>- Azure VM <br>- HTTP with Azure AD <br>- SQL Server |
### [Standard](#tab/standard)
The following table lists the connectors that support using a managed identity i
| Connector type | Supported connectors | |-|-|
-| Built-in | - Azure Event Hubs <br>- Azure Service Bus <br>- HTTP <br>- HTTP + Webhook <br>- SQL Server <br><br>**Note**: HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. |
-| Managed connector | - Azure AD <br>- Azure AD Identity Protection <br>- Azure App Service <br>- Azure Automation <br>- Azure Blob Storage <br>- Azure Container Instance <br>- Azure Cosmos DB <br>- Azure Data Explorer <br>- Azure Data Factory <br>- Azure Data Lake <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure IoT Central V2 <br>- Azure IoT Central V3 <br>- Azure Key Vault <br>- Azure Log Analytics <br>- Azure Queues <br>- Azure Resource Manager <br>- Azure Service Bus <br>- Azure Sentinel <br>- Azure VM <br>- HTTP with Azure AD <br>- SQL Server |
+| Built-in | - Azure Automation <br>- Azure Blob Storage <br>- Azure Event Hubs <br>- Azure Service Bus <br>- Azure Queues <br>- Azure Tables <br>- HTTP <br>- HTTP + Webhook <br>- SQL Server <br><br>**Note**: HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. |
+| Managed connector | - Azure AD Identity Protection <br>- Azure App Service <br>- Azure Automation <br>- Azure Blob Storage <br>- Azure Container Instance <br>- Azure Cosmos DB <br>- Azure Data Explorer <br>- Azure Data Factory <br>- Azure Data Lake <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure IoT Central V2 <br>- Azure IoT Central V3 <br>- Azure Key Vault <br>- Azure Log Analytics <br>- Azure Queues <br>- Azure Resource Manager <br>- Azure Service Bus <br>- Azure Sentinel <br>- Azure VM <br>- HTTP with Azure AD <br>- SQL Server |
logic-apps Logic Apps Enterprise Integration Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-metadata.md
Title: Manage integration account artifact metadata
-description: Add or get artifact metadata from integration accounts in Azure Logic Apps with Enterprise Integration Pack.
+ Title: Manage artifact metadata in integration accounts
+description: In Azure Logic Apps, retrieve metadata that you add to an integration account artifact. Use that metadata in a logic app workflow.
ms.suite: integration Previously updated : 01/17/2019 Last updated : 10/12/2022
+#Customer intent: As an Azure Logic Apps developer, I want to define custom metadata for integration account artifacts so that my logic app workflow can use that metadata.
-# Manage artifact metadata in integration accounts with Azure Logic Apps and Enterprise Integration Pack
+# Manage artifact metadata in integration accounts for Azure Logic Apps
-You can define custom metadata for artifacts in integration accounts
-and get that metadata during runtime for your logic app to use.
-For example, you can provide metadata for artifacts, such as partners,
-agreements, schemas, and maps - all store metadata using key-value pairs.
+You can define custom metadata for artifacts in integration accounts and get that metadata during runtime for your logic app workflow to use. For example, you can provide metadata for artifacts, such as partners, agreements, schemas, and maps. All these artifact types store metadata as key-value pairs.
+
+This how-to guide shows how to add metadata to an integration account artifact. You can then use actions in your workflow to retrieve and use the metadata values.
## Prerequisites
-* An Azure subscription. If you don't have a subscription,
-<a href="https://azure.microsoft.com/free/" target="_blank">sign up for a free Azure account</a>.
+* An Azure account and subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* A basic [integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md)
-that has the artifacts where you want to add metadata, for example:
+* An [integration account](logic-apps-enterprise-integration-create-integration-account.md) that has the artifacts where you want to add metadata. The artifacts can be the following types:
* [Partner](logic-apps-enterprise-integration-partners.md) * [Agreement](logic-apps-enterprise-integration-agreements.md) * [Schema](logic-apps-enterprise-integration-schemas.md) * [Map](logic-apps-enterprise-integration-maps.md)
-* A logic app that's linked to the integration account
-and artifact metadata you want to use. If your logic app
-isn't already linked, learn [how to link logic apps to integration accounts](logic-apps-enterprise-integration-create-integration-account.md#link-account).
+* The logic app workflow where you want to use the artifact metadata. Make sure that your workflow has at least a trigger, such as the **Request** or **HTTP** trigger, and the action that you want to use for working with artifact metadata. The example in this article uses the **Request** trigger named **When a HTTP request is received**.
+
+ * If you don't have a logic app workflow, learn [how to create a Consumption logic app workflow](quickstart-create-first-logic-app-workflow.md) or [how to create a Standard logic app workflow](create-single-tenant-workflows-azure-portal.md).
- If you don't have a logic app yet, learn [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md).
- Add the trigger and actions you want to use for managing
- artifact metadata. Or, to just try things out, add a trigger
- such as **Request** or **HTTP** to your logic app.
+* Make sure to [link your integration account to your Consumption logic app resource](logic-apps-enterprise-integration-create-integration-account.md?tabs=azure-portal%2Cconsumption#link-account) or [to your Standard logic app workflow](logic-apps-enterprise-integration-create-integration-account.md?tabs=azure-portal%2Cstandard#link-account).
## Add metadata to artifacts
-1. Sign in to the <a href="https://portal.azure.com" target="_blank">Azure portal</a>
-with your Azure account credentials. Find and open your integration account.
+1. In the [Azure portal](https://portal.azure.com), go to your integration account.
-1. Select the artifact where you want to add metadata,
-and choose **Edit**. Enter the metadata details for
-that artifact, for example:
+1. Select the artifact where you want to add metadata, and then select **Edit**.
- ![Enter metadata](media/logic-apps-enterprise-integration-metadata/add-partner-metadata.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-metadata/edit-partner-metadata.png" alt-text="Screenshot of Azure portal, integration account, and 'Partners' page with 'TradingPartner1' and 'Edit' button selected.":::
-1. When you're done, choose **OK**.
+1. On the **Edit** pane, enter the metadata details for that artifact, and then select **OK**. The following screenshot shows three metadata key-value pairs:
-1. To view this metadata in the JavaScript Object Notation (JSON)
-definition for the integration account, choose **Edit as JSON**
-so that the JSON editor opens:
+ :::image type="content" source="media/logic-apps-enterprise-integration-metadata/add-partner-metadata.png" alt-text="Screenshot of the 'Edit' pane for 'TradingPartner1'. Under 'Metadata', three key-value pairs are highlighted and 'OK' is selected.":::
- ![JSON for partner metadata](media/logic-apps-enterprise-integration-metadata/partner-metadata.png)
+1. To view this metadata in the integration account's JavaScript Object Notation (JSON) definition, select **Edit as JSON**, which opens the JSON editor.
+
+ :::image type="content" source="media/logic-apps-enterprise-integration-metadata/partner-metadata.png" alt-text="Screenshot of the JSON code that contains information about 'TradingPartner1'. In the 'metadata' object, three key-value pairs are highlighted.":::
## Get artifact metadata
-1. In the Azure portal, open the logic app that's
-linked to the integration account you want.
+1. In the Azure portal, open the logic app resource that's linked to your integration account.
+
+1. On the logic app navigation menu, select **Logic app designer**.
+
+1. In the designer, add the **Integration Account Artifact Lookup** action to get the metadata.
+
+ 1. Under the trigger or an existing action, select **New step**.
+
+ 1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **integration account**.
+
+ 1. From the actions list, select the action named **Integration Account Artifact Lookup**.
-1. In the Logic App Designer, if you're adding the step for
-getting metadata under the trigger or last action in the workflow,
-choose **New step** > **Add an action**.
+ :::image type="content" source="media/logic-apps-enterprise-integration-metadata/integration-account-artifact-lookup.png" alt-text="Screenshot of the designer for a Consumption logic app workflow with the 'Integration Account Artifact Lookup' action selected.":::
-1. In the search box, enter "integration account".
-Under the search box, choose **All**. From the actions list,
-select this action: **Integration Account Artifact Lookup - Integration Account**
+1. Provide the following information for the artifact that you want to find:
- ![Select "Integration Account Artifact Lookup"](media/logic-apps-enterprise-integration-metadata/integration-account-artifact-lookup.png)
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Artifact Type** | Yes | **Schema**, **Map**, **Partner**, **Agreement**, or a custom type | The type for the artifact you want to get |
+ | **Artifact Name** | Yes | <*artifact-name*> | The name for the artifact you want to get |
-1. Provide this information for the artifact you want to find:
+ This example gets the metadata for a trading partner artifact by following these steps:
- | Property | Required | Value | Description |
- |-||-|-|
- | **Artifact Type** | Yes | **Schema**, **Map**, **Partner**, **Agreement**, or a custom type | The type for the artifact you want |
- | **Artifact Name** | Yes | <*artifact-name*> | The name for the artifact you want |
- |||
+ 1. For **Artifact Type**, select **Partner**.
- For example, suppose you want to get the metadata
- for a trading partner artifact:
+ 1. For **Artifact Name**, click inside the edit box. When the dynamic content list appears, select the **name** output from the trigger.
- ![Select artifact type and enter artifact name](media/logic-apps-enterprise-integration-metadata/artifact-lookup-information.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-metadata/artifact-lookup-information.png" alt-text="Screenshot of the 'Integration Account Artifact Lookup' action with the 'Artifact Type' and 'Artifact Name' properties highlighted.":::
-1. Add the action that you want for handling that metadata, for example:
+1. Now, add the action that you want to use for working with the metadata. This example continues with the built-in **HTTP** action.
- 1. Under the **Integration Account Artifact Lookup** action,
- choose **Next step**, and select **Add an action**.
+ :::image type="content" source="media/logic-apps-enterprise-integration-metadata/http-action.png" alt-text="Screenshot of the designer search box with 'http' entered, the 'Built-in' tab highlighted, and the HTTP action selected.":::
- 1. In the search box, enter "http". Under the search box,
- choose **Built-ins**, and select this action: **HTTP - HTTP**
+1. Provide the following information for the artifact metadata that you want the HTTP action to use.
- ![Add HTTP action](media/logic-apps-enterprise-integration-metadata/http-action.png)
+ For example, suppose you want to get the `routingUrl` metadata that you added earlier. Here are the property values that you might specify:
- 1. Provide information for the artifact metadata you want to manage.
+ | Property | Required | Value | Description | Example value |
+ |-|-|-|-||
+ | **Method** | Yes | <*operation-to-run*> | The HTTP operation to run on the artifact. | Use the **GET** method for this HTTP action. |
+ | **URI** | Yes | <*metadata-location*> | The endpoint where you want to send the outgoing request. | To reference the `routingUrl` metadata value from the artifact that you retrieved, follow these steps: <br><br>1. Click inside the **URI** box. <br><br>2. In the dynamic content list that opens, select **Expression**. <br><br>3. In the expression editor, enter an expression like the following example:<br><br>`outputs('Integration_Account_Artifact_Lookup')['properties']['metadata']['routingUrl']` <br><br>4. When you're done, select **OK**. |
+ | **Headers** | No | <*header-values*> | Any header outputs from the trigger that you want to pass to the HTTP action. | To pass in the `Content-Type` value from the trigger header, follow these steps for the first row under **Headers**: <br><br>1. In the first column, enter `Content-Type` as the header name. <br><br>2. In the second column, use the expression editor to enter the following expression as the header value: <br><br>`triggeroutputs()['headers']['Content-Type']` <br><br>To pass in the `Host` value from the trigger header, follow these steps for the second row under **Headers**: <br><br>1. In the first column, enter `Host` as the header name. <br><br>2. In the second column, use the expression editor to enter the following expression as the header value: <br><br>`triggeroutputs()['headers']['Host']` |
+ | **Body** | No | <*body-content*> | Any other content that you want to pass through the HTTP action's `body` property. | To pass the artifact's `properties` values to the HTTP action: <br><br>1. Click inside the **Body** box to open the dynamic content list. If no properties appear, select **See more**. <br><br>2. From the dynamic content list, under **Integration Account Artifact Lookup**, select **Properties**. |
- For example, suppose you want to get the `routingUrl` metadata
- that's added earlier in this topic. Here are the property
- values you might specify:
+ The following screenshot shows the example values:
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Method** | Yes | <*operation-to-run*> | The HTTP operation to run on the artifact. For example, this HTTP action uses the **GET** method. |
- | **URI** | Yes | <*metadata-location*> | To access the `routingUrl` metadata value from the artifact you retrieved, you can use an expression, for example: <p>`@{outputs('Integration_Account_Artifact_Lookup')['properties']['metadata']['routingUrl']}` |
- | **Headers** | No | <*header-values*> | Any header outputs from the trigger you want to pass into the HTTP action. For example, to pass in the trigger's `headers` property value: you can use an expression, for example: <p>`@triggeroutputs()['headers']` |
- | **Body** | No | <*body-content*> | Any other content you want to pass through the HTTP action's `body` property. This example passes the artifact's `properties` values into the HTTP action: <p>1. Click inside the **Body** property so the dynamic content list appears. If no properties appear, choose **See more**. <br>2. From the dynamic content list, under **Integration Account Artifact Lookup**, select **Properties**. |
- ||||
+ :::image type="content" source="media/logic-apps-enterprise-integration-metadata/add-http-action-values.png" alt-text="Screenshot of the designer with an HTTP action. Some property values are highlighted. The dynamic content list is open with 'Properties' highlighted.":::
- For example:
+1. To check the information that you provided for the HTTP action, you can view your workflow's JSON definition. On the designer toolbar, select **Code view**.
- ![Specify values and expressions for HTTP action](media/logic-apps-enterprise-integration-metadata/add-http-action-values.png)
+ The workflow's JSON definition appears, as shown in the following example:
- 1. To check the information you provided for the HTTP action,
- view your logic app's JSON definition. On the Logic App
- Designer toolbar, choose **Code view** so the app's JSON
- definition appears, for example:
+ :::image type="content" source="media/logic-apps-enterprise-integration-metadata/finished-http-action-definition.png" alt-text="Screenshot of the HTTP action's JSON definition with the 'body', 'headers', 'method', and 'URI' properties highlighted.":::
- ![Logic app JSON definition](media/logic-apps-enterprise-integration-metadata/finished-logic-app-definition.png)
+1. On the code view toolbar, select **Designer**.
- After you switch back to the Logic App Designer,
- any expressions you used now appear resolved,
- for example:
+ Any expressions that you entered in the designer now appear resolved.
- ![Resolved expressions in Logic App Designer](media/logic-apps-enterprise-integration-metadata/resolved-expressions.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-metadata/resolved-expressions.png" alt-text="Screenshot of the designer with the 'URI', 'Headers', and 'Body' expressions now resolved.":::
## Next steps
logic-apps Logic Apps Exceed Default Page Size With Pagination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-exceed-default-page-size-with-pagination.md
property along with the `"minimumItemCount"` property in that action's
}, ```
+In this case, the response returns an array that contains JSON objects.
+ ## Get support For questions, visit the
logic-apps Logic Apps Gateway Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-gateway-install.md
ms.suite: integration Previously updated : 10/04/2022 Last updated : 10/12/2022
-#Customer intent: As a software developer, I want to install and set up the on-premises data gateway so that I can create logic app workflows that can access data in on-premises systems.
+#Customer intent: As a software developer, I want to create logic app workflows that can access data in on-premises systems, which requires that I install and set up the on-premises data gateway.
# Install on-premises data gateway for Azure Logic Apps
-Before you can [connect to on-premises data sources from Azure Logic Apps](../logic-apps/logic-apps-gateway-connection.md), download and install the [on-premises data gateway](https://aka.ms/on-premises-data-gateway-installer) on a local computer. The gateway works as a bridge that provides quick data transfer and encryption between data sources on premises and your logic apps. You can use the same gateway installation with other cloud services, such as Power Automate, Power BI, Power Apps, and Azure Analysis Services. For information about how to use the gateway with these services, see these articles:
+In Consumption logic app workflows, some connectors provide access to on-premises data sources. However, before you can create these connections, you have to download and install the [on-premises data gateway](https://aka.ms/on-premises-data-gateway-installer) and then create an Azure resource for that gateway installation. The gateway works as a bridge that provides quick data transfer and encryption between on-premises data sources and your workflows. You can use the same gateway installation with other cloud services, such as Power Automate, Power BI, Power Apps, and Azure Analysis Services.
+
+In Standard logic app workflows, [built-in service provider connectors](/azure/logic-apps/connectors/built-in/reference/) don't need the gateway to access your on-premises data source. Instead, you provide information that authenticates your identity and authorizes access to your data source. If a built-in connector isn't available for your data source, but a managed connector is available, you'll need the on-premises data gateway.
+
+This article shows how to download, install, and set up your on-premises data gateway so that you can access on-premises data sources from Azure Logic Apps. You can also learn more about [how the data gateway works](#gateway-cloud-service) later in this topic. For more information about the gateway, see [What is an on-premises gateway](/data-integration/gateway/service-gateway-onprem)? To automate gateway installation and management tasks, visit the PowerShell gallery for the [DataGateway PowerShell cmdlets](https://www.powershellgallery.com/packages/DataGateway/3000.15.15).
+
+For information about how to use the gateway with these services, see these articles:
* [Microsoft Power Automate on-premises data gateway](/power-automate/gateway-reference) * [Microsoft Power BI on-premises data gateway](/power-bi/service-gateway-onprem) * [Microsoft Power Apps on-premises data gateway](/powerapps/maker/canvas-apps/gateway-reference) * [Azure Analysis Services on-premises data gateway](../analysis-services/analysis-services-gateway.md)
-This article shows how to download, install, and set up your on-premises data gateway so that you can access on-premises data sources from Azure Logic Apps. You can also learn more about [how the data gateway works](#gateway-cloud-service) later in this topic. For more information about the gateway, see [What is an on-premises gateway](/data-integration/gateway/service-gateway-onprem)? To automate gateway installation and management tasks, visit the PowerShell gallery for the [DataGateway PowerShell cmdlets](https://www.powershellgallery.com/packages/DataGateway/3000.15.15).
- <a name="requirements"></a> ## Prerequisites * An Azure account and subscription. If you don't have an Azure account with a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- * Your Azure account needs to be either a work account or school account, which looks like `username@contoso.com`. You can't use Azure B2B (guest) accounts or personal Microsoft accounts, such as @hotmail.com or @outlook.com.
+ * Your Azure account needs to use either a work account or school account, which looks like `username@contoso.com`. You can't use Azure B2B (guest) accounts or personal Microsoft accounts, such as @hotmail.com or @outlook.com.
> [!NOTE] > If you signed up for a Microsoft 365 offering and didn't provide your work email address,
This article shows how to download, install, and set up your on-premises data ga
* Later in the Azure portal, you need to use the same Azure account to create an Azure gateway resource that links to your gateway installation. You can link only one gateway installation and one Azure gateway resource to each other. However, your Azure account can link to different gateway installations that are each associated with an Azure gateway resource. Your logic apps can then use this gateway resource in triggers and actions that can access on-premises data sources.
-* Here are requirements for your local computer:
+* Local computer requirements:
**Minimum requirements**
This article shows how to download, install, and set up your on-premises data ga
* Install the on-premises data gateway only on a local computer, not a domain controller. You don't have to install the gateway on the same computer as your data source. You need only one gateway for all your data sources, so you don't need to install the gateway for each data source.
- > [!TIP]
- > To minimize latency, you can install the gateway as close
- > as possible to your data source, or on the same computer,
- > assuming that you have permissions.
+ * To minimize latency, install the gateway as close as possible to your data source, or on the same computer, assuming that you have permissions.
* Install the gateway on a local computer that's on a wired network, connected to the internet, always turned on, and doesn't go to sleep. Otherwise, the gateway can't run, and performance might suffer over a wireless network.
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
ms.suite: integration Previously updated : 08/19/2022 Last updated : 10/12/2022
The following table identifies the authentication types that are available on th
||-| | [Basic](#basic-authentication) | Azure API Management, Azure App Services, HTTP, HTTP + Swagger, HTTP Webhook | | [Client Certificate](#client-certificate-authentication) | Azure API Management, Azure App Services, HTTP, HTTP + Swagger, HTTP Webhook |
-| [Active Directory OAuth](#azure-active-directory-oauth-authentication) | - **Consumption**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook <br><br>- **Standard**: Azure Event Hubs, Azure Service Bus, Azure Event Hubs, Azure Service Bus, HTTP, HTTP Webhook, SQL Server |
+| [Active Directory OAuth](#azure-active-directory-oauth-authentication) | - **Consumption**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook <br><br>- **Standard**: Azure Automation, Azure Blob Storage, Azure Event Hubs, Azure Queues, Azure Service Bus, Azure Tables, HTTP, HTTP Webhook, SQL Server |
| [Raw](#raw-authentication) | Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook |
-| [Managed identity](#managed-identity-authentication) | **Built-in connectors**: <br><br>- **Consumption**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <br><br>- **Standard**: Azure Event Hubs, Azure Service Bus, HTTP, HTTP Webhook, SQL Server <br><br>**Managed connectors**: Azure AD, Azure AD Identity Protection, Azure App Service, Azure Automation, Azure Blob Storage, Azure Container Instance, Azure Cosmos DB, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Event Hubs, Azure IoT Central V2, Azure IoT Central V3, Azure Key Vault, Azure Log Analytics, Azure Queues, Azure Resource Manager, Azure Service Bus, Azure Sentinel, Azure VM, HTTP with Azure AD, SQL Server |
+| [Managed identity](#managed-identity-authentication) | **Built-in connectors**: <br><br>- **Consumption**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <br><br>- **Standard**: Azure Automation, Azure Blob Storage, Azure Event Hubs, Azure Queues, Azure Service Bus, Azure Tables, HTTP, HTTP Webhook, SQL Server <br><br>**Managed connectors**: Azure AD Identity Protection, Azure App Service, Azure Automation, Azure Blob Storage, Azure Container Instance, Azure Cosmos DB, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Event Hubs, Azure IoT Central V2, Azure IoT Central V3, Azure Key Vault, Azure Log Analytics, Azure Queues, Azure Resource Manager, Azure Service Bus, Azure Sentinel, Azure VM, HTTP with Azure AD, SQL Server |
<a name="secure-inbound-requests"></a>
logic-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Logic Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Logic Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
machine-learning Algorithm Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/algorithm-cheat-sheet.md
adobe-target: true
The **Azure Machine Learning Algorithm Cheat Sheet** helps you choose the right algorithm from the designer for a predictive analytics model.
+>[!Note]
+> Designer supports two types of components: classic prebuilt components and custom components. These two types of components are not compatible.
+>
+>Classic prebuilt components are provided mainly for data processing and traditional machine learning tasks like regression and classification. This type of component continues to be supported but will not have any new components added.
+>
+>
+>Custom components allow you to provide your own code as a component. They support sharing across workspaces and seamless authoring across Studio, CLI, and SDK interfaces.
+>
+>This article applies to classic prebuilt components.
+ Azure Machine Learning has a large library of algorithms from the ***classification***, ***recommender systems***, ***clustering***, ***anomaly detection***, ***regression***, and ***text analytics*** families. Each is designed to address a different type of machine learning problem. For more information, see [How to select algorithms](how-to-select-algorithms.md).
machine-learning Azure Machine Learning Release Notes Cli V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes-cli-v2.md
__RSS feed__: Get notified when this page is updated by copying and pasting the
## 2021-05-25
-### Announcing the CLI (v2) (preview) for Azure Machine Learning
+### Announcing the CLI (v2) for Azure Machine Learning
The `ml` extension to the Azure CLI is the next-generation interface for Azure Machine Learning. It enables you to train and deploy models from the command line, with features that accelerate scaling data science up and out while tracking the model lifecycle. [Install and get started](how-to-configure-cli.md).
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes.md
Previously updated : 08/29/2022 Last updated : 09/26/2022 # Azure Machine Learning Python SDK release notes
In this article, learn about Azure Machine Learning Python SDK releases. For th
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://learn.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us` +
+## 2022-09-26
+
+### Azure Machine Learning SDK for Python v1.46.0
+ + **azureml-automl-dnn-nlp**
+ + Customers will no longer be allowed to specify a line in CoNLL that comprises only a token. The line must always be either an empty newline or one with exactly one token followed by exactly one space followed by exactly one label.
+ + **azureml-contrib-automl-dnn-forecasting**
+ + Fixed a corner case where samples are reduced to 1 after the cross-validation split, but `sample_size` still pointed to the count before the split, so `batch_size` could end up greater than the sample count. This fix initializes `sample_size` after the split.
+ + **azureml-core**
+ + Added a deprecation warning when inference customers use CLI/SDK v1 model deployment APIs to deploy models, and also when the Python version is 3.6 or earlier.
+ + The following values of `AZUREML_LOG_DEPRECATION_WARNING_ENABLED` change the behavior as follows:
+ + Default - displays the warning when the customer uses Python 3.6 or earlier and for CLI/SDK v1.
+ + `True` - displays the sdk v1 deprecation warning on azureml-sdk packages.
+ + `False` - disables the sdk v1 deprecation warning on azureml-sdk packages.
+ + Command to be executed to set the environment variable to disable the deprecation message:
+ + Windows - `setx AZUREML_LOG_DEPRECATION_WARNING_ENABLED "False"`
+ + Linux - `export AZUREML_LOG_DEPRECATION_WARNING_ENABLED="False"`
+ + **azureml-interpret**
+ + update azureml-interpret package to interpret-community 0.27.*
+ + **azureml-pipeline-core**
+ + Fix schedule default time zone to UTC.
+ + Fix incorrect reuse when using SqlDataReference in DataTransfer step.
+ + **azureml-responsibleai**
+ + update azureml-responsibleai package and curated images to raiwidgets and responsibleai v0.22.0
+ + **azureml-train-automl-runtime**
+ + Fixed a bug in generated scripts that caused certain metrics to not render correctly in the UI.
+ + Many Models now supports rolling forecasts for inferencing.
+ + Added support for returning the top `N` models in the Many Models scenario.
++++ ## 2022-08-29 ### Azure Machine Learning SDK for Python v1.45.0
This breaking change comes from the June release of `azureml-inference-server-ht
## 2021-05-25
-### Announcing the CLI (v2) (preview) for Azure Machine Learning
+### Announcing the CLI (v2) for Azure Machine Learning
The `ml` extension to the Azure CLI is the next-generation interface for Azure Machine Learning. It enables you to train and deploy models from the command line, with features that accelerate scaling data science up and out while tracking the model lifecycle. [Install and get started](how-to-configure-cli.md).
Access the following web-based authoring tools from the studio:
+ **New features** + Dataset: Add two options `on_error` and `out_of_range_datetime` for `to_pandas_dataframe` to fail when data has error values instead of filling them with `None`.
- + Workspace: Added the `hbi_workspace` flag for workspaces with sensitive data that enables further encryption and disables advanced diagnostics on workspaces. We also added support for bringing your own keys for the associated Cosmos DB instance, by specifying the `cmk_keyvault` and `resource_cmk_uri` parameters when creating a workspace, which creates a Cosmos DB instance in your subscription while provisioning your workspace. To learn more, see the [Azure Cosmos DB section of data encryption article](./concept-data-encryption.md#azure-cosmos-db).
+ + Workspace: Added the `hbi_workspace` flag for workspaces with sensitive data that enables further encryption and disables advanced diagnostics on workspaces. We also added support for bringing your own keys for the associated Azure Cosmos DB instance, by specifying the `cmk_keyvault` and `resource_cmk_uri` parameters when creating a workspace, which creates an Azure Cosmos DB instance in your subscription while provisioning your workspace. To learn more, see the [Azure Cosmos DB section of data encryption article](./concept-data-encryption.md#azure-cosmos-db).
+ **Bug fixes and improvements** + **azureml-automl-runtime**
At the time, of this release, the following browsers are supported: Chrome, Fire
+ **azureml-core** + Fixed issue with blob_cache_timeout parameter ordering. + Added external fit and transform exception types to system errors.
- + Added support for Key Vault secrets for remote runs. Add a azureml.core.keyvault.Keyvault class to add, get, and list secrets from the keyvault associated with your workspace. Supported operations are:
+ + Added support for Key Vault secrets for remote runs. Add an `azureml.core.keyvault.Keyvault` class to add, get, and list secrets from the key vault associated with your workspace. Supported operations are:
+ azureml.core.workspace.Workspace.get_default_keyvault() + azureml.core.keyvault.Keyvault.set_secret(name, value) + azureml.core.keyvault.Keyvault.set_secrets(secrets_dict)
machine-learning Component Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/component-reference.md
Last updated 11/09/2020
# Algorithm & component reference for Azure Machine Learning designer
-This reference content provides the technical background on each of the built-in machine learning algorithms and components available in Azure Machine Learning designer.
+>[!Note]
+> Designer supports two types of components: classic prebuilt components and custom components. These two types of components are not compatible.
+>
+>Classic prebuilt components are provided mainly for data processing and traditional machine learning tasks like regression and classification. This type of component continues to be supported but will not have any new components added.
+>
+>
+>Custom components allow you to provide your own code as a component. They support sharing across workspaces and seamless authoring across Studio, CLI, and SDK interfaces.
+>
+>This article applies to classic prebuilt components.
+
+This reference content provides the technical background on each of the classic prebuilt components available in Azure Machine Learning designer.
+ Each component represents a set of code that can run independently and perform a machine learning task, given the required inputs. A component might contain a particular algorithm, or perform a task that is important in machine learning, such as missing value replacement, or statistical analysis.
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
Last updated 03/15/2022-+ # What is automated machine learning (AutoML)?
-Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of machine learning model development. It allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity all while sustaining model quality. Automated ML in Azure Machine Learning is based on a breakthrough from our [Microsoft Research division](https://www.microsoft.com/research/project/automl/).
+
+> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
-Traditional machine learning model development is resource-intensive, requiring significant domain knowledge and time to produce and compare dozens of models. With automated machine learning, you'll accelerate the time it takes to get production-ready ML models with great ease and efficiency.
+> * [v1](./v1/concept-automated-ml-v1.md)
+> * [v2 (current version)](concept-automated-ml.md)
-## Ways to use AutoML in Azure Machine Learning
+Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of machine learning model development. It allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity all while sustaining model quality. Automated ML in Azure Machine Learning is based on a breakthrough from our [Microsoft Research division](https://www.microsoft.com/research/project/automl/).
-Azure Machine Learning offers the following two experiences for working with automated ML. See the following sections to understand feature availability in each experience.
+* For code-experienced customers, [Azure Machine Learning Python SDK](https://aka.ms/sdk-v2-install). Get started with [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
-* For code-experienced customers, [Azure Machine Learning Python SDK](https://aka.ms/sdk-v2-install). Get started with [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md)
-* For limited/no-code experience customers, Azure Machine Learning studio at [https://ml.azure.com](https://ml.azure.com/). Get started with these tutorials:
- * [Tutorial: Create a classification model with automated ML in Azure Machine Learning](tutorial-first-experiment-automated-ml.md).
- * [Tutorial: Forecast demand with automated machine learning](tutorial-automated-ml-forecast.md)
+## How does AutoML work?
-### Experiment settings
+During training, Azure Machine Learning creates a number of pipelines in parallel that try different algorithms and parameters for you. The service iterates through ML algorithms paired with feature selections, where each iteration produces a model with a training score. The better the score for the metric you want to optimize for, the better the model is considered to "fit" your data. It will stop once it hits the exit criteria defined in the experiment.
-The following settings allow you to configure your automated ML experiment.
+Using **Azure Machine Learning**, you can design and run your automated ML training experiments with these steps:
-| |The Python SDK|The studio web experience|
-|-|:-:|:-:|
-|**Split data into train/validation sets**| ✓|✓
-|**Supports ML tasks: classification, regression, & forecasting**| ✓| ✓
-|**Supports computer vision tasks (preview): image classification, object detection & instance segmentation**| ✓|
-|**NLP-Text**| ✓| ✓
-|**Optimizes based on primary metric**| ✓| ✓
-|**Supports Azure ML compute as compute target** | ✓|✓
-|**Configure forecast horizon, target lags & rolling window**|✓|✓
-|**Set exit criteria** |✓|✓
-|**Set concurrent iterations**| ✓|✓
-|**Block algorithms**|✓|✓
-|**Cross validation** |✓|✓
-|**Supports training on Azure Databricks clusters**| ✓|
-|**View engineered feature names**|✓|
-|**Featurization summary**| ✓|
-|**Featurization for holidays**|✓|
-|**Log file verbosity levels**| ✓|
+1. **Identify the ML problem** to be solved: classification, forecasting, regression, computer vision or NLP.
-### Model settings
+1. **Choose whether you want a code-first experience or a no-code studio web experience**: Users who prefer a code-first experience can use the [AzureML SDKv2](how-to-configure-auto-train.md) or the [AzureML CLIv2](how-to-train-cli.md). Get started with [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md). Users who prefer a limited/no-code experience can use the [web interface](how-to-use-automated-ml-for-ml-models.md) in Azure Machine Learning studio at [https://ml.azure.com](https://ml.azure.com/). Get started with [Tutorial: Create a classification model with automated ML in Azure Machine Learning](tutorial-first-experiment-automated-ml.md).
+
+1. **Specify the source of the labeled training data**: You can bring your data to AzureML in [many different ways](concept-data.md).
+
+1. **Configure the automated machine learning parameters** that determine how many iterations over different models, hyperparameter settings, advanced preprocessing/featurization, and what metrics to look at when determining the best model.
+1. **Submit the training job.**
-These settings can be applied to the best model as a result of your automated ML experiment.
+1. **Review the results**
-| |The Python SDK|The studio web experience|
-|-|:-:|:-:|
-|**Best model registration, deployment, explainability**| ✓|✓|
-|**Enable voting ensemble & stack ensemble models**| ✓|✓|
-|**Show best model based on non-primary metric**|✓||
-|**Enable/disable ONNX model compatibility**|✓||
-|**Test the model** | ✓| ✓ (preview)|
+The following diagram illustrates this process.
+![Automated Machine learning](./media/concept-automated-ml/automl-concept-diagram2.png)
-### Job control settings
-These settings allow you to review and control your experiment jobs and its child jobs.
+You can also inspect the logged job information, which [contains metrics](how-to-understand-automated-ml.md) gathered during the job. The training job produces a Python serialized object (`.pkl` file) that contains the model and data preprocessing.
-| |The Python SDK|The studio web experience|
-|-|:-:|:-:|
-|**Job summary table**| ✓|✓|
-|**Cancel jobs & child jobs**| ✓|✓|
-|**Get guardrails**| ✓|✓|
+While model building is automated, you can also [learn how important or relevant features are](./v1/how-to-configure-auto-train-v1.md#explain) to the generated models.
## When to use AutoML: classification, regression, forecasting, computer vision & NLP
ML professionals and developers across industries can use automated ML to:
### Classification
-Classification is a common machine learning task. Classification is a type of supervised learning in which models learn using training data, and apply those learnings to new data. Azure Machine Learning offers featurizations specifically for these tasks, such as deep neural network text featurizers for classification. Learn more about [featurization options](how-to-configure-auto-features.md#featurization).
+Classification is a type of supervised learning in which models learn using training data, and apply those learnings to new data. Azure Machine Learning offers featurizations specifically for these tasks, such as deep neural network text featurizers for classification. Learn more about [featurization options](how-to-configure-auto-train.md#data-featurization). You can also find the list of algorithms supported by AutoML [here](how-to-configure-auto-train.md#supported-algorithms).
-The main goal of classification models is to predict which categories new data will fall into based on learnings from its training data. Common classification examples include fraud detection, handwriting recognition, and object detection. Learn more and see an example at [Create a classification model with automated ML](tutorial-first-experiment-automated-ml.md).
+The main goal of classification models is to predict which categories new data will fall into based on learnings from its training data. Common classification examples include fraud detection, handwriting recognition, and object detection.
-See examples of classification and automated machine learning in these Python notebooks: [Fraud Detection](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb), [Marketing Prediction](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb), and [Newsgroup Data Classification](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn)
+See an example of classification and automated machine learning in this Python notebook: [Bank Marketing](https://github.com/Azure/azureml-examples/blob/main/sdk/jobs/automl-standalone-jobs/automl-classification-task-bankmarketing/automl-classification-task-bankmarketing.ipynb).
### Regression
-Similar to classification, regression tasks are also a common supervised learning task. Azure Machine Learning offers [featurizations specifically for these tasks](how-to-configure-auto-features.md#featurization).
+Similar to classification, regression tasks are also a common supervised learning task. AzureML offers featurization specific to regression problems. Learn more about [featurization options](how-to-configure-auto-train.md#data-featurization). You can also find the list of algorithms supported by AutoML [here](how-to-configure-auto-train.md#supported-algorithms).
+
+Different from classification where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, predicting automobile price based on features like gas mileage and safety rating.
-Different from classification where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, automobile price based on features like, gas mileage, safety rating, etc. Learn more and see an example of [regression with automated machine learning](v1/how-to-auto-train-models-v1.md).
+See an example of regression and automated machine learning for predictions in this Python notebook: [Hardware Performance](https://github.com/Azure/azureml-examples/blob/main/sdk/jobs/automl-standalone-jobs/automl-regression-task-hardware-performance/automl-regression-task-hardware-performance.ipynb).
-See examples of regression and automated machine learning for predictions in these Python notebooks: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization),
### Time-series forecasting
-Building forecasts is an integral part of any business, whether it's revenue, inventory, sales, or customer demand. You can use automated ML to combine techniques and approaches and get a recommended, high-quality time-series forecast. Learn more with this how-to: [automated machine learning for time series forecasting](how-to-auto-train-forecast.md).
+Building forecasts is an integral part of any business, whether it's revenue, inventory, sales, or customer demand. You can use automated ML to combine techniques and approaches and get a recommended, high-quality time-series forecast. You can find the list of algorithms supported by AutoML [here](how-to-configure-auto-train.md#supported-algorithms).
An automated time-series experiment is treated as a multivariate regression problem. Past time-series values are "pivoted" to become additional dimensions for the regressor together with other predictors. This approach, unlike classical time series methods, has an advantage of naturally incorporating multiple contextual variables and their relationship to one another during training. Automated ML learns a single, but often internally branched model for all items in the dataset and prediction horizons. More data is thus available to estimate model parameters and generalization to unseen series becomes possible.
Advanced forecasting configuration includes:
* configurable lags
* rolling window aggregate features
+See an example of forecasting and automated machine learning in this Python notebook: [Energy Demand](https://github.com/Azure/azureml-examples/blob/main/sdk/jobs/automl-standalone-jobs/automl-forecasting-task-energy-demand/automl-forecasting-task-energy-demand-advanced.ipynb).
-See examples of regression and automated machine learning for predictions in these Python notebooks: [Sales Forecasting](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb), [Demand Forecasting](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb), and [Forecasting GitHub's Daily Active Users](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb).
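As a sketch of the advanced forecasting configuration mentioned above (lags and rolling window aggregate features), a forecasting job in the SDK v2 might look like the following; the time column, horizon, and lag values are illustrative assumptions rather than recommended settings.

```python
from azure.ai.ml import Input, automl

forecasting_job = automl.forecasting(
    compute="cpu-cluster",                               # placeholder compute target
    experiment_name="automl-forecasting-example",
    training_data=Input(type="mltable", path="./train-mltable-folder"),
    target_column_name="demand",                         # placeholder target column
    primary_metric="normalized_root_mean_squared_error",
)

# Forecast-specific settings: time column, horizon, lags, and rolling window.
forecasting_job.set_forecast_settings(
    time_column_name="timestamp",          # placeholder time column
    forecast_horizon=24,                   # predict 24 periods ahead (illustrative)
    target_lags=[1, 2, 3],                 # configurable lags on the target
    target_rolling_window_size=12,         # rolling window aggregate features
)
forecasting_job.set_limits(timeout_minutes=120, max_trials=10)
```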
-
-### Computer vision (preview)
-
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+### Computer vision
Support for computer vision tasks allows you to easily generate models trained on image data for scenarios like image classification and object detection.
Instance segmentation | Tasks to identify objects in an image at the pixel level
<a name="nlp"></a>
-### Natural language processing: NLP (preview)
-
+### Natural language processing: NLP
Support for natural language processing (NLP) tasks in automated ML allows you to easily generate models trained on text data for text classification and named entity recognition scenarios. Authoring automated ML trained NLP models is supported via the Azure Machine Learning Python SDK. The resulting experimentation jobs, models, and outputs can be accessed from the Azure Machine Learning studio UI.
The NLP capability supports:
Learn how to [set up AutoML training for NLP models](how-to-auto-train-nlp-models.md).
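For orientation, the following hedged sketch shows how a text classification NLP job could be configured with the SDK v2; NLP training needs GPU compute, and the compute name, data paths, and label column shown here are placeholder assumptions.

```python
from azure.ai.ml import Input, automl

# AutoML NLP fine-tunes text DNN models and requires a GPU compute target.
text_classification_job = automl.text_classification(
    compute="gpu-cluster",                               # placeholder GPU compute target
    experiment_name="automl-nlp-text-classification",
    training_data=Input(type="mltable", path="./train-mltable-folder"),
    validation_data=Input(type="mltable", path="./valid-mltable-folder"),
    target_column_name="Sentiment",                      # placeholder label column
    primary_metric="accuracy",
)
text_classification_job.set_limits(timeout_minutes=120)
```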
-## How automated ML works
-
-During training, Azure Machine Learning creates a number of pipelines in parallel that try different algorithms and parameters for you. The service iterates through ML algorithms paired with feature selections, where each iteration produces a model with a training score. The higher the score, the better the model is considered to "fit" your data. It will stop once it hits the exit criteria defined in the experiment.
-
-Using **Azure Machine Learning**, you can design and run your automated ML training experiments with these steps:
-
-1. **Identify the ML problem** to be solved: classification, forecasting, regression or computer vision (preview).
-
-1. **Choose whether you want to use the Python SDK or the studio web experience**:
- Learn about the parity between the [Python SDK and studio web experience](#ways-to-use-automl-in-azure-machine-learning).
-
- * For limited or no code experience, try the Azure Machine Learning studio web experience at [https://ml.azure.com](https://ml.azure.com/)
- * For Python developers, check out the [Azure Machine Learning Python SDK](how-to-configure-auto-train.md)
-
-1. **Specify the source and format of the labeled training data**: Numpy arrays or Pandas dataframe
-
-1. **Configure the automated machine learning parameters** that determine how many iterations over different models, hyperparameter settings, advanced preprocessing/featurization, and what metrics to look at when determining the best model.
-1. **Submit the training job.**
-
-1. **Review the results**
-
-The following diagram illustrates this process.
-![Automated Machine learning](./media/concept-automated-ml/automl-concept-diagram2.png)
--
-You can also inspect the logged job information, which [contains metrics](how-to-understand-automated-ml.md) gathered during the job. The training job produces a Python serialized object (`.pkl` file) that contains the model and data preprocessing.
-
-While model building is automated, you can also [learn how important or relevant features are](./v1/how-to-configure-auto-train-v1.md#explain) to the generated models.
-
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE2Xc9t]
-
-<a name="local-remote"></a>
-
-## Guidance on local vs. remote managed ML compute targets
-
-The web interface for automated ML always uses a remote [compute target](concept-compute-target.md). But when you use the Python SDK, you will choose either a local compute or a remote compute target for automated ML training.
-
-* **Local compute**: Training occurs on your local laptop or VM compute.
-* **Remote compute**: Training occurs on Machine Learning compute clusters.
-
-### Choose compute target
-Consider these factors when choosing your compute target:
-
- * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short trains (i.e. seconds or a couple of minutes per child job), training on your local computer might be a better choice. There is no setup time, the infrastructure resources (your PC or VM) are directly available.
- * **Choose a remote ML compute cluster**: If you are training with larger datasets like in production training creating models which need longer trains, remote compute will provide much better end-to-end time performance because `AutoML` will parallelize trains across the cluster's nodes. On a remote compute, the start-up time for the internal infrastructure will add around 1.5 minutes per child job, plus additional minutes for the cluster infrastructure if the VMs are not yet up and running.
-
-### Pros and cons
-
-Consider these pros and cons when choosing to use local vs. remote.
-
-| | Pros (Advantages) |Cons (Handicaps) |
-|||||
-|**Local compute target** | <li> No environment start-up time | <li> Subset of features<li> Can't parallelize jobs <li> Worse for large data. <li>No data streaming while training <li> No DNN-based featurization <li> Python SDK only |
-|**Remote ML compute clusters**| <li> Full set of features <li> Parallelize child jobs <li> Large data support<li> DNN-based featurization <li> Dynamic scalability of compute cluster on demand <li> No-code experience (web UI) also available | <li> Start-up time for cluster nodes <li> Start-up time for each child job |
-
-### Feature availability
-
-More features are available when you use the remote compute, as shown in the table below.
-
-| Feature | Remote | Local |
-||--|-|
-| Data streaming (Large data support, up to 100 GB) | ✓ | |
-| DNN-BERT-based text featurization and training | ✓ | |
-| Out-of-the-box GPU support (training and inference) | ✓ | |
-| Image Classification and Labeling support | ✓ | |
-| Auto-ARIMA, Prophet and ForecastTCN models for forecasting | ✓ | |
-| Multiple jobs/iterations in parallel | ✓ | |
-| Create models with interpretability in AutoML studio web experience UI | ✓ | |
-| Feature engineering customization in studio web experience UI | ✓ | |
-| Azure ML hyperparameter tuning | ✓ | |
-| Azure ML Pipeline workflow support | ✓ | |
-| Continue a job | ✓ | |
-| Forecasting | ✓ | ✓ |
-| Create and run experiments in notebooks | ✓ | ✓ |
-| Register and visualize experiment's info and metrics in UI | ✓ | ✓ |
-| Data guardrails | ✓ | ✓ |
## Training, validation and test data
-With automated ML you provide the **training data** to train ML models, and you can specify what type of model validation to perform. Automated ML performs model validation as part of training. That is, automated ML uses **validation data** to tune model hyperparameters based on the applied algorithm to find the best combination that best fits the training data. However, the same validation data is used for each iteration of tuning, which introduces model evaluation bias since the model continues to improve and fit to the validation data.
+With automated ML you provide the **training data** to train ML models, and you can specify what type of model validation to perform. Automated ML performs model validation as part of training. That is, automated ML uses **validation data** to tune model hyperparameters based on the applied algorithm to find the combination that best fits the training data. However, the same validation data is used for each iteration of tuning, which introduces model evaluation bias since the model continues to improve and fit to the validation data.
To help confirm that such bias isn't applied to the final recommended model, automated ML supports the use of **test data** to evaluate the final model that automated ML recommends at the end of your experiment. When you provide test data as part of your AutoML experiment configuration, this recommended model is tested by default at the end of your experiment (preview).

>[!IMPORTANT]
> Testing your models with a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
-Learn how to [configure AutoML experiments to use test data (preview) with the SDK](how-to-configure-cross-validation-data-splits.md#provide-test-data-preview) or with the [Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment).
+Learn how to [configure AutoML experiments to use test data (preview) with the SDK](how-to-configure-auto-train.md#training-validation-and-test-data) or with the [Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment).
-You can also [test any existing automated ML model (preview)](./v1/how-to-configure-auto-train-v1.md#test-existing-automated-ml-model)), including models from child jobs, by providing your own test data or by setting aside a portion of your training data.
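The following sketch shows how test data might be supplied alongside training and validation data in an SDK v2 job configuration. It assumes your azure-ai-ml version exposes the preview test-data support described above, and all paths and column names are placeholders.

```python
from azure.ai.ml import Input, automl

classification_job = automl.classification(
    compute="cpu-cluster",                               # placeholder compute target
    experiment_name="automl-with-test-data",
    training_data=Input(type="mltable", path="./train-mltable-folder"),
    validation_data=Input(type="mltable", path="./valid-mltable-folder"),
    # Preview: the final recommended model is evaluated once on this held-out set.
    test_data=Input(type="mltable", path="./test-mltable-folder"),
    target_column_name="y",                              # placeholder label column
    primary_metric="accuracy",
)
```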
## Feature engineering
Enable this setting with:
+ Azure Machine Learning studio: Enable **Automatic featurization** in the **View additional configuration** section [with these steps](how-to-use-automated-ml-for-ml-models.md#customize-featurization).
-+ Python SDK: Specify `"feauturization": 'auto' / 'off' / 'FeaturizationConfig'` in your [AutoMLConfig](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig) object. Learn more about [enabling featurization](how-to-configure-auto-features.md).
++ Python SDK: Specify featurization in your [AutoML Job](/python/api/azure-ai-ml/azure.ai.ml.automl) object. Learn more about [enabling featurization](how-to-configure-auto-train.md#data-featurization).

## <a name="ensemble"></a> Ensemble models
Automated machine learning supports ensemble models, which are enabled by defaul
The [Caruana ensemble selection algorithm](http://www.niculescu-mizil.org/papers/shotgun.icml04.revised.rev2.pdf) with sorted ensemble initialization is used to decide which models to use within the ensemble. At a high level, this algorithm initializes the ensemble with up to five models with the best individual scores, and verifies that these models are within a 5% threshold of the best score to avoid a poor initial ensemble. Then, for each ensemble iteration, a new model is added to the existing ensemble and the resulting score is calculated. If a new model improves the existing ensemble score, the ensemble is updated to include the new model.
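To make the selection procedure concrete, here's an illustrative, simplified Python sketch of Caruana-style greedy ensemble selection. It isn't AutoML's internal implementation; it assumes a higher-is-better, non-negative metric and combines ensemble members by simple averaging of their predictions.

```python
import numpy as np

def greedy_ensemble_selection(val_preds, y_true, score_fn, max_rounds=20):
    """Illustrative Caruana-style selection (not AutoML's internal code).

    val_preds: dict of model name -> validation predictions (arrays of equal shape).
    score_fn:  higher-is-better metric applied to averaged predictions,
               e.g. R^2 for regression or AUC on averaged class probabilities.
    """
    # Score each candidate model individually on the validation data.
    individual = {name: score_fn(y_true, preds) for name, preds in val_preds.items()}
    best_single = max(individual.values())

    # Initialize with up to five of the best models within a 5% threshold of the top score.
    ranked = sorted(individual, key=individual.get, reverse=True)
    ensemble = [name for name in ranked[:5] if individual[name] >= 0.95 * best_single]

    def ensemble_score(members):
        combined = np.mean([val_preds[m] for m in members], axis=0)  # simple averaging combiner
        return score_fn(y_true, combined)

    current = ensemble_score(ensemble)
    for _ in range(max_rounds):
        # Try adding each candidate (with replacement) and keep the best improvement.
        trial_scores = {name: ensemble_score(ensemble + [name]) for name in val_preds}
        candidate, candidate_score = max(trial_scores.items(), key=lambda kv: kv[1])
        if candidate_score <= current:
            break  # no candidate improves the ensemble, so stop
        ensemble.append(candidate)
        current = candidate_score
    return ensemble, current
```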
-See the [how-to](./v1/how-to-configure-auto-train-v1.md#ensemble) for changing default ensemble settings in automated machine learning.
+See the [AutoML package](/python/api/azure-ai-ml/azure.ai.ml.automl) for changing default ensemble settings in automated machine learning.
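For example, if ensembling isn't wanted for a given experiment, the v2 job's training settings can be used to turn the default ensemble iterations off. A short, hedged sketch; it assumes a job object already configured as in the earlier examples:

```python
# Disable the voting and stacking ensemble iterations that run by default.
classification_job.set_training(
    enable_vote_ensemble=False,
    enable_stack_ensemble=False,
)
```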
<a name="use-with-onnx"></a>
There are multiple resources to get you up and running with AutoML.
### Tutorials/ how-tos Tutorials are end-to-end introductory examples of AutoML scenarios.
-+ **For a code first experience**, follow the [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md)
++ **For a code first experience**, follow the [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md) + **For a low or no-code experience**, see the [Tutorial: Train a classification model with no-code AutoML in Azure Machine Learning studio](tutorial-first-experiment-automated-ml.md).-
-+ **For using AutoML to train computer vision models**, see the [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
How-to articles provide additional detail into what functionality automated ML offers. For example,
How-to articles provide additional detail into what functionality automated ML o
+ [Without code in the Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md). + [With the Python SDK](how-to-configure-auto-train.md).
-+ Learn how to [train forecasting models with time series data](how-to-auto-train-forecast.md).
- + Learn how to [train computer vision models with Python](how-to-auto-train-image-models.md). + Learn how to [view the generated code from your automated ML models](how-to-generate-automl-training-code.md). ### Jupyter notebook samples
-Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml).
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/sdk/jobs/automl-standalone-jobs).
+ ### Python SDK reference
-Deepen your expertise of SDK design patterns and class specifications with the [AutoML class reference documentation](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig).
+Deepen your expertise of SDK design patterns and class specifications with the [AutoML Job class reference documentation](/python/api/azure-ai-ml/azure.ai.ml.automl).
> [!Note] > Automated machine learning capabilities are also available in other Microsoft solutions such as,
machine-learning Concept Azure Machine Learning V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-azure-machine-learning-v2.md
description: This article gives you a high-level understanding of the resources
-+
An Azure Machine Learning [component](concept-component.md) is a self-contained
## Next steps * [How to migrate from v1 to v2](how-to-migrate-from-v1.md)
-* [Train models with the v2 CLI and SDK (preview)](how-to-train-model.md)
+* [Train models with the v2 CLI and SDK](how-to-train-model.md)
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
-+ Last updated 05/24/2022 #Customer intent: As an MLOps administrator, I want to understand what a managed endpoint is and why I need it.
You can use the following options for input data when invoking a batch endpoint:
> [!NOTE] > - If you are using existing V1 FileDataset for batch endpoint, we recommend migrating them to V2 data assets and refer to them directly when invoking batch endpoints. Currently only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 Dataset. > - You can also extract the URI or path on datastore extracted from V1 FileDataset by using `az ml dataset show` command with `--query` parameter and use that information for invoke.
-> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
+> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
For more information on supported input options, see [Batch scoring with batch endpoint](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-with-different-input-options).
machine-learning Concept Machine Learning Registries Mlops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-machine-learning-registries-mlops.md
+
+ Title: Machine Learning registries (preview)
+
+description: Learn what Azure Machine Learning registries are and how to use them for MLOps
++++++ Last updated : 9/9/2022++++
+# Machine Learning registries (preview) for MLOps
+
+In this article, you'll learn how to scale MLOps across development, testing, and production environments. The number of environments can vary from a few to many, based on the complexity of your IT environment, and is influenced by factors such as:
+
+* Security and compliance policies - do production environments need to be isolated from development environments in terms of access controls, network architecture, data exposure, etc.?
+* Subscriptions - Are your development environments in one subscription and production environments in a different subscription? Often, separate subscriptions are used for billing, budgeting, and cost management purposes.
+* Regions - Do you need to deploy to different Azure regions to support latency and redundancy requirements?
+
+In such scenarios, you may be using different AzureML workspaces for development, testing and production. This configuration presents the following challenges for model training and deployment:
+* You need to train a model in a development workspace but deploy it to an endpoint in a production workspace, possibly in a different Azure subscription or region. In this case, you must be able to trace back to the training job, for example, to analyze the metrics, logs, code, environment, and data used to train the model if you encounter accuracy or performance issues with the production deployment.
+* You need to develop a training pipeline with test data or anonymized data in the development workspace but retrain the model with production data in the production workspace. In this case, you may need to compare training metrics on sample vs production data to ensure the training optimizations are performing well with actual data.
+
+## Cross-workspace MLOps with registries
+
+Registries, much like a Git repository, decouple ML assets from workspaces and host them in a central location, making them available to all workspaces in your organization.
+
+If you want to promote models across environments (dev, test, prod), start by iteratively developing a model in dev. When you have a good candidate model, you can publish it to a registry. You can then deploy the model from the registry to endpoints in different workspaces.
+
+> [!TIP]
+> If you already have models registered in a workspace, you can promote them to a registry. You can also register a model directly in a registry from the output of a training job.
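As a rough sketch of that flow with the preview SDK support, a model can be published to a registry through a registry-scoped client and then referenced from any workspace by its registry URI. The registry name, model name, and path below are placeholder assumptions.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.identity import DefaultAzureCredential

# Client scoped to the registry (preview) instead of a single workspace.
registry_client = MLClient(
    credential=DefaultAzureCredential(),
    registry_name="my-shared-registry",         # placeholder registry name
)

# Publish a model so every workspace in the organization can reference it.
model = Model(
    name="credit-default-model",                # placeholder model name
    version="1",
    path="./model",                             # local folder or job output with the model files
    type="custom_model",
    description="Model promoted from the dev workspace",
)
registry_client.models.create_or_update(model)

# A workspace deployment can then reference the model by its registry URI, for example:
# azureml://registries/my-shared-registry/models/credit-default-model/versions/1
```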
+
+If you want to develop a pipeline in one workspace and then run it in others, start by registering the components and environments that form the building blocks of the pipeline. When you submit the pipeline job, the workspace it runs in is selected by the compute and training data, which are unique to each workspace.
+
+The following diagram illustrates promotion of pipelines between exploratory and dev workspaces, then model promotion between dev, test, and production.
++
+## Next steps
+
+* [Create a registry](./how-to-manage-registries.md).
+* [Share models, components, and environments using registries](./how-to-share-models-pipelines-across-workspaces-with-registries.md).
machine-learning Concept Manage Ml Pitfalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-manage-ml-pitfalls.md
description: Identify and manage common pitfalls of ML models with Azure Machine
+
The following techniques are additional options to handle imbalanced data **outs
See examples and learn how to build models using automated machine learning:
-+ Follow the [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
++ Follow the [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md). + Configure the settings for automatic training experiment: + In Azure Machine Learning studio, [use these steps](how-to-use-automated-ml-for-ml-models.md).
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
Last updated 08/15/2022 -+ # MLflow and Azure Machine Learning
Azure Machine Learning uses MLflow Tracking for metric logging and artifact storage for your experiments, whether you created the experiments via the Azure Machine Learning Python SDK, the Azure Machine Learning CLI, or Azure Machine Learning studio. We recommend using MLflow for tracking experiments. To get started, see [Log metrics, parameters, and files with MLflow](how-to-log-view-metrics.md). > [!NOTE]
-> Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the SDK v2 (preview). We recommend that you use MLflow for logging.
+> Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the SDK v2. We recommend that you use MLflow for logging.
With MLflow Tracking, you can connect Azure Machine Learning as the back end of your MLflow experiments. The workspace provides a centralized, secure, and scalable location to store training metrics and models.
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
-+ Last updated 05/11/2022
-# MLOps: Model management, deployment, lineage, and monitoring with Azure Machine Learning
+# MLOps: Model management, deployment, and monitoring with Azure Machine Learning
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
Last updated 08/30/2022-+ ms.devlang: azurecli
ms.devlang: azurecli
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"] > * [v1](v1/concept-train-machine-learning-model-v1.md)
-> * [v2 (preview)](concept-train-machine-learning-model.md)
+> * [v2 (current)](concept-train-machine-learning-model.md)
Azure Machine Learning provides several ways to train your models, from code-first solutions using the SDK to low-code solutions such as automated machine learning and the visual designer. Use the following list to determine which training method is right for you:
Define the iterations, hyperparameter settings, featurization, and other setting
Machine learning pipelines can use the previously mentioned training methods. Pipelines are more about creating a workflow, so they encompass more than just the training of models. * [What are ML pipelines in Azure Machine Learning?](concept-ml-pipelines.md)
-* [Tutorial: Create production ML pipelines with Python SDK v2 (preview) in a Jupyter notebook](tutorial-pipeline-python-sdk.md)
+* [Tutorial: Create production ML pipelines with Python SDK v2 in a Jupyter notebook](tutorial-pipeline-python-sdk.md)
### Understand what happens when you submit a training job
You can use the VS Code extension to run and manage your training jobs. See the
## Next steps
-Learn how to [Tutorial: Create production ML pipelines with Python SDK v2 (preview) in a Jupyter notebook](tutorial-pipeline-python-sdk.md).
+Learn how to [Tutorial: Create production ML pipelines with Python SDK v2 in a Jupyter notebook](tutorial-pipeline-python-sdk.md).
machine-learning Concept V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-v2.md
Last updated 04/29/2022-+ #Customer intent: As a data scientist, I want to know whether to use v1 or v2 of CLI, SDK.
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-Azure Machine Learning CLI v2 and Azure Machine Learning Python SDK v2 (preview) introduce a consistency of features and terminology across the interfaces. In order to create this consistency, the syntax of commands differs, in some cases significantly, from the first versions (v1).
+Azure Machine Learning CLI v2 and Azure Machine Learning Python SDK v2 introduce a consistency of features and terminology across the interfaces. In order to create this consistency, the syntax of commands differs, in some cases significantly, from the first versions (v1).
## Azure Machine Learning CLI v2
The CLI v2 is useful in the following scenarios:
* Managed inference deployments
- Azure ML offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2 (preview).
+ Azure ML offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2.
* Reusable components in pipelines Azure ML introduces [components](concept-component.md) for managing and reusing common logic across pipelines. This functionality is available only via CLI v2 and SDK v2.
-## Azure Machine Learning Python SDK v2 (preview)
-
-> [!IMPORTANT]
-> SDK v2 is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+## Azure Machine Learning Python SDK v2
Azure ML Python SDK v2 is an updated Python SDK package, which allows users to:
The Azure Machine Learning CLI v1 has been deprecated. We recommend you to use C
* You don't want to use a Python SDK - CLI v2 allows you to use YAML with scripts in python, R, Java, Julia or C# * You were a user of R SDK previously - Azure ML won't support an SDK in `R`. However, the CLI v2 has support for `R` scripts. * You want to use command line based automation/deployments
+* You don't need Spark Jobs. This feature is currently available in preview in CLI v2.
### SDK v2
The Azure Machine Learning Python SDK v1 doesn't have a planned deprecation date
* You want to use new features like - reusable components, managed inferencing * You're starting a new workflow or pipeline - all new features and future investments will be introduced in v2 * You want to take advantage of the improved usability of the Python SDK v2 - ability to compose jobs and pipelines using Python functions, easy evolution from simple to complex tasks etc.
-* You don't need features like AutoML in pipelines, Parallel Run Steps, Scheduling Pipelines and Spark Jobs. These features are not yet available in SDK v2.
+* You don't need Spark Jobs. This feature is currently available in preview in SDK v2.
## Next steps
The Azure Machine Learning Python SDK v1 doesn't have a planned deprecation date
* Get started with SDK v2 * [Install and set up SDK (v2)](https://aka.ms/sdk-v2-install)
- * [Train models with the Azure ML Python SDK v2 (preview)](how-to-train-model.md)
- * [Tutorial: Create production ML pipelines with Python SDK v2 (preview) in a Jupyter notebook](tutorial-pipeline-python-sdk.md)
+ * [Train models with the Azure ML Python SDK v2](how-to-train-model.md)
+ * [Tutorial: Create production ML pipelines with Python SDK v2 in a Jupyter notebook](tutorial-pipeline-python-sdk.md)
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
description: The workspace is the top-level resource for Azure Machine Learning.
-+
When you create a new workspace, it automatically creates several Azure resource
+ [Azure Container Registry](https://azure.microsoft.com/services/container-registry/): Registers docker containers that are used for the following components: * [Azure Machine Learning environments](concept-environments.md) when training and deploying models * [AutoML](concept-automated-ml.md) when deploying
- * [Data profiling](v1/how-to-connect-data-ui.md#data-profile-and-preview)
+ * [Data profiling](v1/how-to-connect-data-ui.md#data-preview-and-profile)
To minimize costs, ACR is **lazy-loaded** until images are needed.
machine-learning Dsvm Tools Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-ingestion.md
description: Learn about the data ingestion tools and utilities that are preinst
keywords: data science tools, data science virtual machine, tools for data science, linux data science + Last updated 05/12/2021- # Data Science Virtual Machine data ingestion tools
Here are some data movement tools that are available in the DSVM.
|--|--| | - | - |
-| What is it? | Tool to import data from various sources into Azure Cosmos DB, a NoSQL database in the cloud. These sources include JSON files, CSV files, SQL, MongoDB, Azure Table storage, Amazon DynamoDB, and Azure Cosmos DB SQL API collections. |
+| What is it? | Tool to import data from various sources into Azure Cosmos DB, a NoSQL database in the cloud. These sources include JSON files, CSV files, SQL, MongoDB, Azure Table storage, Amazon DynamoDB, and Azure Cosmos DB for NoSQL collections. |
| Supported DSVM versions | Windows |
-| Typical uses | Importing files from a VM to CosmosDB, importing data from Azure table storage to CosmosDB, and importing data from a Microsoft SQL Server database to CosmosDB. |
+| Typical uses | Importing files from a VM to Azure Cosmos DB, importing data from Azure table storage to Azure Cosmos DB, and importing data from a Microsoft SQL Server database to Azure Cosmos DB. |
| How to use / run it? | To use the command-line version, open a command prompt and type `dt`. To use the GUI tool, open a command prompt and type `dtui`. |
-| Links to samples | [CosmosDB Import data](../../cosmos-db/import-data.md) |
+| Links to samples | [Import data into Azure Cosmos DB](../../cosmos-db/import-data.md) |
## Azure Storage Explorer
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Selected version updates are:
- CUDA 11.3, cuDNN 8, NCCL2 - Python 3.8 - R 4.0.5-- Spark 3.1 incl. mmlspark, connectors to Blob Storage, Data Lake, Cosmos DB
+- Spark 3.1 incl. mmlspark, connectors to Blob Storage, Data Lake, Azure Cosmos DB
- Java 11 (OpenJDK) - Jupyter Lab 3.0.14 - PyTorch 1.8.1 incl. torchaudio torchtext torchvision, torch-tb-profiler
machine-learning Tools Included https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/tools-included.md
Last updated 06/23/2022--+ # What tools are included on the Azure Data Science Virtual Machine?
The Data Science Virtual Machine comes with the most useful data-science tools p
| [Azure CLI](/cli/azure) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | | | [AzCopy](../../storage/common/storage-use-azcopy-v10.md) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | [AzCopy on the DSVM](./dsvm-tools-ingestion.md#azcopy) | | [Blob FUSE driver](https://github.com/Azure/azure-storage-fuse) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span></br> | <span class='red-x'>&#10060;</span></br> | [blobfuse on the DSVM](./dsvm-tools-ingestion.md#blobfuse) |
-| [Azure Cosmos DB Data Migration Tool](../../cosmos-db/import-data.md) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | [Cosmos DB on the DSVM](./dsvm-tools-ingestion.md#azure-cosmos-db-data-migration-tool) |
+| [Azure Cosmos DB Data Migration Tool](../../cosmos-db/import-data.md) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | [Azure Cosmos DB on the DSVM](./dsvm-tools-ingestion.md#azure-cosmos-db-data-migration-tool) |
| Unix/Linux command-line tools | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | | | Apache Spark 3.1 (standalone) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
Last updated 04/07/2022 -+ #Customer intent: As a data scientist, I want to securely access Azure resources for my machine learning model deployment with an online endpoint and managed identity. # Access Azure resources from an online endpoint with a managed identity --
-> [!IMPORTANT]
-> SDK v2 is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Learn how to access Azure resources from your scoring script with an online endpoint and either a system-assigned managed identity or a user-assigned managed identity.
machine-learning How To Attach Kubernetes Anywhere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-anywhere.md
-+ Last updated 08/31/2022 #Customer intent: As part of ML Professionals focusing on ML infratrasture setup using self-managed compute, I want to understand what Kubernetes compute target is and why do I need it.
With a simple cluster extension deployment on AKS or Arc Kubernetes cluster, Kub
- Create and manage instance types for different ML workload scenarios and gain efficient compute resource utilization. - Trouble shooting workload issues related to Kubernetes cluster.
-**Data-science team**. Once the IT-operations team finishes compute setup and compute target(s) creation, the data-science team can discover a list of available compute targets and instance types in AzureML workspace. These compute resources can be used for training or inference workload. Data science specifies compute target name and instance type name using their preferred tools or APIs such as AzureML CLI v2, Python SDK v2 (preview), or Studio UI.
+**Data-science team**. Once the IT-operations team finishes compute setup and compute target(s) creation, the data-science team can discover a list of available compute targets and instance types in the AzureML workspace. These compute resources can be used for training or inference workloads. Data scientists specify the compute target name and instance type name using their preferred tools or APIs, such as the AzureML CLI v2, Python SDK v2, or Studio UI.
## Kubernetes usage scenarios
machine-learning How To Authenticate Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-online-endpoint.md
Last updated 05/10/2022 -+ # Key and token-based authentication for online endpoints
You can set the authentication type when you create an online endpoint. Set the
When deploying using CLI v2, set this value in the [online endpoint YAML file](reference-yaml-endpoint-online.md). For more information, see [How to deploy an online endpoint](how-to-deploy-managed-online-endpoints.md).
-When deploying using the Python SDK v2 (preview), use the [OnlineEndpoint](/python/api/azure-ai-ml/azure.ai.ml.entities.onlineendpoint) class.
+When deploying using the Python SDK v2, use the [OnlineEndpoint](/python/api/azure-ai-ml/azure.ai.ml.entities.onlineendpoint) class.
## Get the key or token
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Title: Set up AutoML for computer vision
-description: Set up Azure Machine Learning automated ML to train computer vision models with the CLI v2 and Python SDK v2 (preview).
+description: Set up Azure Machine Learning automated ML to train computer vision models with the CLI v2 and Python SDK v2.
-+ Last updated 07/13/2022 #Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
Last updated 07/13/2022
> * [v1](v1/how-to-auto-train-image-models-v1.md) > * [v2 (current version)](how-to-auto-train-image-models.md)
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-In this article, you learn how to train computer vision models on image data with automated ML with the Azure Machine Learning CLI extension v2 or the Azure Machine Learning Python SDK v2 (preview).
+In this article, you learn how to train computer vision models on image data with automated ML with the Azure Machine Learning CLI extension v2 or the Azure Machine Learning Python SDK v2.
Automated ML supports model training for computer vision tasks like image classification, object detection, and instance segmentation. Authoring AutoML models for computer vision tasks is currently supported via the Azure Machine Learning Python SDK. The resulting experimentation runs, models, and outputs are accessible from the Azure Machine Learning studio UI. [Learn more about automated ml for computer vision tasks on image data](concept-automated-ml.md).
Automated ML supports model training for computer vision tasks like image classi
* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
-* The Azure Machine Learning Python SDK v2 (preview) installed.
+* The Azure Machine Learning Python SDK v2 installed.
To install the SDK you can either, * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. For more information, see [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md).
Automated ML supports model training for computer vision tasks like image classi
```python pip install azure-ai-ml ```
-
- > [!NOTE]
- > Only Python 3.6 and 3.7 are compatible with automated ML support for computer vision tasks.
-
+
## Select your task type
Field| Description
The following is a sample JSONL file for image classification:
-```python
+```json
{ "image_url": "azureml://subscriptions/<my-subscription-id>/resourcegroups/<my-resource-group>/workspaces/<my-workspace>/datastores/<my-datastore>/paths/image_data/Image_01.png", "image_details":
The following is a sample JSONL file for image classification:
The following code is a sample JSONL file for object detection:
- ```python
+ ```json
{ "image_url": "azureml://subscriptions/<my-subscription-id>/resourcegroups/<my-resource-group>/workspaces/<my-workspace>/datastores/<my-datastore>/paths/image_data/Image_01.png", "image_details":
If you want to use tiling, and want to control tiling behavior, the following pa
### Test the deployment Please check this [Test the deployment](./tutorial-auto-train-image-models.md#test-the-deployment) section to test the deployment and visualize the detections from the model.
+## Large datasets
+
+If you're using AutoML to train on large datasets, there are some experimental settings that may be useful.
+
+> [!IMPORTANT]
+> These settings are currently in public preview. They are provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+### Multi-GPU and multi-node training
+
+By default, each model trains on a single VM. If training a model is taking too much time, using VMs that contain multiple GPUs may help. The time to train a model on large datasets should decrease in roughly linear proportion to the number of GPUs used. (For instance, a model should train roughly twice as fast on a VM with two GPUs as on a VM with one GPU.) If the time to train a model is still high on a VM with multiple GPUs, you can increase the number of VMs used to train each model. Similar to multi-GPU training, the time to train a model on large datasets should also decrease in roughly linear proportion to the number of VMs used. When training a model across multiple VMs, be sure to use a compute SKU that supports [InfiniBand](how-to-train-distributed-gpu.md#infiniband) for best results. You can configure the number of VMs used to train a single model by setting the `node_count_per_trial` property of the AutoML job.
+
+# [Azure CLI](#tab/cli)
++
+```yaml
+properties:
+ node_count_per_trial: "2"
+```
+
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+Multi-node training is supported for all tasks. The `node_count_per_trial` property can be specified using the task-specific `automl` functions. For instance, for object detection:
+
+```python
+from azure.ai.ml import automl
+
+image_object_detection_job = automl.image_object_detection(
+ ...,
+ properties={"node_count_per_trial": 2}
+)
+```
++
+### Streaming image files from storage
+
+By default, all image files are downloaded to disk prior to model training. If the size of the image files is greater than available disk space, the run will fail. Instead of downloading all images to disk, you can select to stream image files from Azure storage as they're needed during training. Image files are streamed from Azure storage directly to system memory, bypassing disk. At the same time, as many files as possible from storage are cached on disk to minimize the number of requests to storage.
+
+> [!NOTE]
+> If streaming is enabled, ensure the Azure storage account is located in the same region as compute to minimize cost and latency.
+
+# [Azure CLI](#tab/cli)
++
+```yaml
+training_parameters:
+ advanced_settings: >
+ {"stream_image_files": true}
+```
+
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+```python
+from azure.ai.ml import automl
+
+image_object_detection_job = automl.image_object_detection(...)
+
+image_object_detection_job.set_training_parameters(
+ ...,
+ advanced_settings='{"stream_image_files": true}'
+)
+```
++ ## Example notebooks Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs). Please check the folders with 'automl-image-' prefix for samples specific to building computer vision models.
Review detailed code examples and use cases in the [GitHub notebook repository f
## Next steps
-* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
+* [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md).
* [Troubleshoot automated ML experiments](how-to-troubleshoot-auto-ml.md).
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
-+ Last updated 03/15/2022 #Customer intent: I'm a data scientist with ML knowledge in the natural language processing space, looking to build ML models using language specific data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
Last updated 03/15/2022
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
-In this article, you learn how to train natural language processing (NLP) models with [automated ML](concept-automated-ml.md) in Azure Machine Learning. You can create NLP models with automated ML via the Azure Machine Learning Python SDK v2 (preview) or the Azure Machine Learning CLI v2.
+In this article, you learn how to train natural language processing (NLP) models with [automated ML](concept-automated-ml.md) in Azure Machine Learning. You can create NLP models with automated ML via the Azure Machine Learning Python SDK v2 or the Azure Machine Learning CLI v2.
Automated ML supports NLP which allows ML professionals and data scientists to bring their own text data and build custom models for tasks such as, multi-class text classification, multi-label text classification, and named entity recognition (NER).
You can also run your NLP experiments with distributed training on an Azure ML c
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
-This is handled automatically by automated ML when the parameters `max_concurrent_iterations = number_of_vms` and `enable_distributed_dnn_training = True` are provided in your `AutoMLConfig` during experiment set up. Doing so, schedules distributed training of the NLP models and automatically scales to every GPU on your virtual machine or cluster of virtual machines. The max number of virtual machines allowed is 32. The training is scheduled with number of virtual machines that is in powers of two.
+This is handled automatically by automated ML when the parameters `max_concurrent_iterations = number_of_vms` and `enable_distributed_dnn_training = True` are provided in your `AutoMLConfig` during experiment setup. Doing so schedules distributed training of the NLP models and automatically scales to every GPU on your virtual machine or cluster of virtual machines. The maximum number of virtual machines allowed is 32. The training is scheduled with a number of virtual machines that is a power of two.
```python max_concurrent_iterations = number_of_vms
https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/au
+## Model sweeping and hyperparameter tuning (preview)
+
+AutoML NLP allows you to provide a list of models and combinations of hyperparameters, via the hyperparameter search space in the config. Hyperdrive generates several child runs, each of which is a fine-tuning run for a given NLP model and set of hyperparameter values that were chosen and swept over based on the provided search space.
+
+## Supported model algorithms
+
+All the pre-trained text DNN models currently available in AutoML NLP for fine-tuning are listed below:
+
+* bert_base_cased
+* bert_large_uncased
+* bert_base_multilingual_cased
+* bert_base_german_cased
+* bert_large_cased
+* distilbert_base_cased
+* distilbert_base_uncased
+* roberta_base
+* roberta_large
+* distilroberta_base
+* xlm_roberta_base
+* xlm_roberta_large
+* xlnet_base_cased
+* xlnet_large_cased
+
+Note that the large models are significantly larger than their base counterparts. They are typically more performant, but they require more GPU memory and longer training times. As such, their SKU requirements are more stringent: we recommend running on ND-series VMs for the best results.
+
+## Supported hyperparameters
+
+The following table describes the hyperparameters that AutoML NLP supports.
+
+| Parameter name | Description | Syntax |
+|-|||
+| gradient_accumulation_steps | The number of backward operations whose gradients are to be summed up before performing one step of gradient descent by calling the optimizer's step function. <br><br> This is leveraged to use an effective batch size which is gradient_accumulation_steps times larger than the maximum size that fits the GPU. | Must be a positive integer.
+| learning_rate | Initial learning rate. | Must be a float in the range (0, 1). |
+| learning_rate_scheduler |Type of learning rate scheduler. | Must choose from `linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup`. |
+| model_name | Name of one of the supported models. | Must choose from `bert_base_cased, bert_base_uncased, bert_base_multilingual_cased, bert_base_german_cased, bert_large_cased, bert_large_uncased, distilbert_base_cased, distilbert_base_uncased, roberta_base, roberta_large, distilroberta_base, xlm_roberta_base, xlm_roberta_large, xlnet_base_cased, xlnet_large_cased`. |
+| number_of_epochs | Number of training epochs. | Must be a positive integer. |
+| training_batch_size | Training batch size. | Must be a positive integer. |
+| validation_batch_size | Validation batch size. | Must be a positive integer. |
+| warmup_ratio | Ratio of total training steps used for a linear warmup from 0 to learning_rate. | Must be a float in the range [0, 1]. |
+| weight_decay | Value of weight decay when optimizer is sgd, adam, or adamw. | Must be a float in the range [0, 1]. |
+
+All discrete hyperparameters only allow choice distributions, such as the integer-typed `training_batch_size` and the string-typed `model_name` hyperparameters. All continuous hyperparameters like `learning_rate` support all distributions.
+
+## Configure your sweep settings
+
+You can configure all the sweep-related parameters. Multiple model subspaces can be constructed with hyperparameters conditional to the respective model, as seen below in each example.
+
+The same discrete and continuous distribution options that are available for general HyperDrive jobs are supported here. See all nine options in [Hyperparameter tuning a model](how-to-tune-hyperparameters.md#define-the-search-space).
++
+# [Azure CLI](#tab/cli)
++
+```yaml
+limits:
+ timeout_minutes: 120
+ max_trials: 4
+ max_concurrent_trials: 2
+
+sweep:
+ sampling_algorithm: grid
+ early_termination:
+ type: bandit
+ evaluation_interval: 10
+ slack_factor: 0.2
+
+search_space:
+ - model_name:
+ type: choice
+ values: [bert_base_cased, roberta_base]
+ number_of_epochs:
+ type: choice
+ values: [3, 4]
+ - model_name:
+ type: choice
+ values: [distilbert_base_cased]
+ learning_rate:
+ type: uniform
+ min_value: 0.000005
+ max_value: 0.00005
+```
+
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
++
+You can set the limits for your model sweeping job:
+
+```python
+text_ner_job.set_limits(
+ timeout_minutes=120,
+ trial_timeout_minutes=20,
+ max_trials=4,
+ max_concurrent_trials=2,
+ max_nodes=4)
+```
+
+You can define a search space with customized settings:
+
+```python
+text_ner_job.extend_search_space(
+ [
+ SearchSpace(
+ model_name=Choice([NlpModels.BERT_BASE_CASED, NlpModels.ROBERTA_BASE])
+ ),
+ SearchSpace(
+ model_name=Choice([NlpModels.DISTILROBERTA_BASE]),
+ learning_rate_scheduler=Choice([NlpLearningRateScheduler.LINEAR,
+ NlpLearningRateScheduler.COSINE]),
+ learning_rate=Uniform(5e-6, 5e-5)
+ )
+ ]
+)
+ ```
+
+You can configure the sweep procedure via the sampling algorithm and early termination policy:
+```python
+text_ner_job.set_sweep(
+ sampling_algorithm="Random",
+ early_termination=BanditPolicy(
+ evaluation_interval=2, slack_factor=0.05, delay_evaluation=6
+ )
+)
+```
+++
+### Sampling methods for the sweep
+
+When sweeping hyperparameters, you need to specify the sampling method to use for sweeping over the defined parameter space. Currently, the following sampling methods are supported with the `sampling_algorithm` parameter:
+
+| Sampling type | AutoML Job syntax |
+|-||
+|[Random Sampling](how-to-tune-hyperparameters.md#random-sampling)| `random` |
+|[Grid Sampling](how-to-tune-hyperparameters.md#grid-sampling)| `grid` |
+|[Bayesian Sampling](how-to-tune-hyperparameters.md#bayesian-sampling)| `bayesian` |
+
+### Experiment budget
+
+You can optionally specify the experiment budget for your AutoML NLP training job using the `timeout_minutes` parameter in the `limits` section - the amount of time in minutes before the experiment terminates. If none is specified, the default experiment timeout is seven days (maximum 60 days).
+
+AutoML NLP also supports `trial_timeout_minutes`, the maximum amount of time in minutes an individual trial can run before being terminated, and `max_nodes`, the maximum number of nodes from the backing compute cluster to leverage for the job. These parameters also belong to the `limits` section.
++++
+```yaml
+limits:
+ timeout_minutes: 60
+ trial_timeout_minutes: 20
+ max_nodes: 2
+```
++
+### Early termination policies
+
+You can automatically end poorly performing runs with an early termination policy. Early termination improves computational efficiency, saving compute resources that would have been otherwise spent on less promising configurations. AutoML NLP supports early termination policies using the `early_termination` parameter. If no termination policy is specified, all configurations are run to completion.
+
+Learn more about [how to configure the early termination policy for your hyperparameter sweep.](how-to-tune-hyperparameters.md#early-termination)
+
+### Resources for the sweep
+
+You can control the resources spent on your hyperparameter sweep by specifying the `max_trials` and the `max_concurrent_trials` for the sweep.
+
+Parameter | Detail
+--|-
+`max_trials` | Required parameter for maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model algorithm, set this parameter to 1.
+`max_concurrent_trials`| Maximum number of runs that can run concurrently. If not specified, all runs launch in parallel. If specified, must be an integer between 1 and 100. <br><br> **NOTE:** The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency.
+
+You can configure all the sweep related parameters as shown in the example below.
+++
+```yaml
+sweep:
+ limits:
+ max_trials: 10
+ max_concurrent_trials: 2
+ sampling_algorithm: random
+ early_termination:
+ type: bandit
+ evaluation_interval: 2
+ slack_factor: 0.2
+ delay_evaluation: 6
+```
++ ## Known Issues

Dealing with very low scores, or higher loss values:
machine-learning How To Change Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-change-storage-access-key.md
Last updated 10/21/2021-+ # Regenerate storage account access keys
To update Azure Machine Learning to use the new key, use the following steps:
```
- 1. **To re-register datastores via the studio**, select **Datastores** from the left pane of the studio.
+ 1. **To re-register datastores via the studio**
+ 1. In the studio, select **Data** on the left pane under **Assets**.
+ 1. At the top, select **Datastores**.
1. Select which datastore you want to update. 1. Select the **Update credentials** button on the top left. 1. Use your new access key from step 1 to populate the form and click **Save**.
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
Title: Set up AutoML with Python (v2)
-description: Learn how to set up an AutoML training run with the Azure Machine Learning Python SDK v2 (preview) using Azure Machine Learning automated ML.
+description: Learn how to set up an AutoML training run with the Azure Machine Learning Python SDK v2 using Azure Machine Learning automated ML.
Last updated 04/20/2022 -+
-# Set up AutoML training with the Azure ML Python SDK v2 (preview)
+# Set up AutoML training with the Azure ML Python SDK v2
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python you are using:"]
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
-In this guide, learn how to set up an automated machine learning, AutoML, training job with the [Azure Machine Learning Python SDK v2 (preview)](/python/api/overview/azure/ml/intro). Automated ML picks an algorithm and hyperparameters for you and generates a model ready for deployment. This guide provides details of the various options that you can use to configure automated ML experiments.
+In this guide, learn how to set up an automated machine learning, AutoML, training job with the [Azure Machine Learning Python SDK v2](/python/api/overview/azure/ml/intro). Automated ML picks an algorithm and hyperparameters for you and generates a model ready for deployment. This guide provides details of the various options that you can use to configure automated ML experiments.
If you prefer a no-code experience, you can also [Set up no-code AutoML training in the Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md).
If you prefer to submit training jobs with the Azure Machine learning CLI v2 ext
For this article you need: * An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
-* The Azure Machine Learning Python SDK v2 (preview) installed.
+* The Azure Machine Learning Python SDK v2 installed.
To install the SDK you can either, * Create a compute instance, which already has installed the latest AzureML Python SDK and is pre-configured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) for more information.
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
Last updated 04/08/2022 -+ # Install and set up the CLI (v2)
> * [v1](v1/reference-azure-machine-learning-cli.md) > * [v2 (current version)](how-to-configure-cli.md)
-The `ml` extension (preview) to the [Azure CLI](/cli/azure/) is the enhanced interface for Azure Machine Learning. It enables you to train and deploy models from the command line, with features that accelerate scaling data science up and out while tracking the model lifecycle.
+The `ml` extension to the [Azure CLI](/cli/azure/) is the enhanced interface for Azure Machine Learning. It enables you to train and deploy models from the command line, with features that accelerate scaling data science up and out while tracking the model lifecycle.
## Prerequisites
machine-learning How To Configure Cross Validation Data Splits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cross-validation-data-splits.md
-+
You can also provide test data to evaluate the recommended model that automated
> [!WARNING] > This feature is not available for the following automated ML scenarios
-> * [Computer vision tasks (preview)](how-to-auto-train-image-models.md)
+> * [Computer vision tasks](how-to-auto-train-image-models.md)
> * [Many models and hierarchical time series forecasting training (preview)](how-to-auto-train-forecast.md) > * [Forecasting tasks where deep learning neural networks (DNN) are enabled](how-to-auto-train-forecast.md#enable-deep-learning) > * [Automated ML runs from local computes or Azure Databricks clusters](how-to-configure-auto-train.md#compute-to-run-experiment)
machine-learning How To Configure Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-environment.md
Last updated 03/22/2021 -+ # Set up a Python development environment for Azure Machine Learning
To configure a local development environment or remote VM:
1. Install the [Azure Machine Learning Python SDK](/python/api/overview/azure/ai-ml-readme). 1. To configure your local environment to use your Azure Machine Learning workspace, [create a workspace configuration file](#local-and-dsvm-only-create-a-workspace-configuration-file) or use an existing one.
-Now that you have your local environment set up, you're ready to start working with Azure Machine Learning. See the [Azure Machine Learning Python getting started guide](tutorial-1st-experiment-hello-world.md) to get started.
+Now that you have your local environment set up, you're ready to start working with Azure Machine Learning. See the [Tutorial: Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md) to get started.
### Jupyter Notebooks
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-private-link.md
-+
To enable public access, use the following steps:
> [!TIP] > There are two possible properties that you can configure: > * `allow_public_access_when_behind_vnet` - used by the Python SDK v1
-> * `public_network_access` - used by the CLI and Python SDK v2 (preview)
+> * `public_network_access` - used by the CLI and Python SDK v2
> Each property overrides the other. For example, setting `public_network_access` will override any previous setting to `allow_public_access_when_behind_vnet`. > > Microsoft recommends using `public_network_access` to enable or disable public access to a workspace.
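+
+If you manage the workspace from the Python SDK v2 instead, the same property can typically be set on the workspace object. A minimal sketch, assuming the `Workspace` entity and `ml_client.workspaces.begin_update` are available in your installed `azure-ai-ml` version:
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import Workspace
+from azure.identity import DefaultAzureCredential
+
+ml_client = MLClient(
+    DefaultAzureCredential(), "<SUBSCRIPTION_ID>", "<RESOURCE_GROUP>", "<WORKSPACE_NAME>"
+)
+
+# Enable public network access for the workspace.
+ws = Workspace(name="<WORKSPACE_NAME>", public_network_access="Enabled")
+ml_client.workspaces.begin_update(ws).result()
+```
+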
To enable public access, use the following steps:
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-When using the Azure CLI [extension 2.0 CLI preview for machine learning](how-to-configure-cli.md), use the `az ml update` command to enable `public_network_access` for the workspace:
+When using the Azure CLI [extension 2.0 CLI for machine learning](how-to-configure-cli.md), use the `az ml workspace update` command to enable `public_network_access` for the workspace:
```azurecli az ml workspace update \
machine-learning How To Create Component Pipeline Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipeline-python.md
Title: 'Create and run machine learning pipelines using components with the Azure Machine Learning SDK v2 (Preview)'
+ Title: 'Create and run machine learning pipelines using components with the Azure Machine Learning SDK v2'
description: Build a machine learning pipeline for image classification. Focus on machine learning instead of infrastructure and automation.
Last updated 05/26/2022-+
-# Create and run machine learning pipelines using components with the Azure Machine Learning SDK v2 (Preview)
+# Create and run machine learning pipelines using components with the Azure Machine Learning SDK v2
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
machine-learning How To Create Component Pipelines Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-ui.md
Last updated 05/10/2022 -+ # Create and run machine learning pipelines using components with the Azure Machine Learning studio (Preview)
In the example below take using CLI for example. If you want to learn more about
- Use [these Jupyter notebooks on GitHub](https://github.com/Azure/azureml-examples/tree/pipeline/builder_function_samples/cli/jobs/pipelines-with-components) to explore machine learning pipelines further - Learn [how to use CLI v2 to create pipeline using components](how-to-create-component-pipelines-cli.md).-- Learn [how to use SDK v2 (preview) to create pipeline using components](how-to-create-component-pipeline-python.md)
+- Learn [how to use SDK v2 to create pipeline using components](how-to-create-component-pipeline-python.md)
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
-+ Previously updated : 05/24/2022-
-# Customer intent: As an experienced data scientist, I need to package my data into a consumable and reusable object to train my machine learning models.
- Last updated : 09/22/2022
+#Customer intent: As an experienced data scientist, I need to package my data into a consumable and reusable object to train my machine learning models.
# Create data assets
my_data = Data(
ml_client.data.create_or_update(my_data) ```
+# [Studio](#tab/Studio)
+
+To create a Folder data asset in the Azure Machine Learning studio, use the following steps:
+
+1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
+
+1. Under __Assets__ in the left navigation, select __Data__. On the __Data assets__ tab, select __Create__.
+
+1. Give your data asset a name and optional description. Then, select the "Folder (uri_folder)" option under Type, if it is not already selected.
+
+1. You have a few options for your data source. If you already have the path to the folder you want to upload, choose "From a URI". If your folder is already stored in Azure, choose "From Azure storage". If you want to upload your folder from your local drive, choose "From local files".
+
+1. Follow the remaining steps. When you reach the Review step, select __Create__ on the last page.
## Create a `uri_file` data asset
my_data = Data(
ml_client.data.create_or_update(my_data) ```
+# [Studio](#tab/Studio)
+
+To create a File data asset in the Azure Machine Learning studio, use the following steps:
+
+1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
+1. Under __Assets__ in the left navigation, select __Data__. On the __Data assets__ tab, select __Create__.
+
+1. Give your data asset a name and optional description. Then, select the "File (uri_file)" option under Type.
+
+1. You have a few options for your data source. If you already have the path to the file you want to upload, choose "From a URI". If your file is already stored in Azure, choose "From Azure storage". If you want to upload your file from your local drive, choose "From local files".
+
+1. Follow the remaining steps. When you reach the Review step, select __Create__ on the last page.
## Create a `mltable` data asset
my_data = Data(
ml_client.data.create_or_update(my_data) ```
+> [!NOTE]
+> The path points to the **folder** containing the MLTable artifact.
+
+# [Studio](#tab/Studio)
+To create a Table data asset in the Azure Machine Learning studio, use the following steps.
> [!NOTE]
-> The path points to the **folder** containing the MLTable artifact.
+> You must have a **folder** prepared containing the MLTable artifact.
+1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
+
+1. Under __Assets__ in the left navigation, select __Data__. On the __Data assets__ tab, select __Create__.
+
+1. Give your data asset a name and optional description. Then, select the "Table (mltable)" option under Type.
+
+1. You have a few options for your data source. If you already have the path to the folder you want to upload, choose "From a URI". If your folder is already stored in Azure, choose "From Azure storage". If you want to upload your folder from your local drive, choose "From local files".
+
+1. Follow the remaining steps. When you reach the Review step, select __Create__ on the last page.
+ ## Next steps - [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)
machine-learning How To Create Image Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-image-labeling-projects.md
Last updated 10/21/2021-+ # Create an image labeling project and export labels
Access exported Azure Machine Learning datasets in the **Datasets** section of M
![Exported dataset](./media/how-to-create-labeling-projects/exported-dataset.png)
-Once you have exported your labeled data to an Azure Machine Learning dataset, you can use AutoML to build computer vision models trained on your labeled data. Learn more at [Set up AutoML to train computer vision models with Python (preview)](how-to-auto-train-image-models.md)
+Once you have exported your labeled data to an Azure Machine Learning dataset, you can use AutoML to build computer vision models trained on your labeled data. Learn more at [Set up AutoML to train computer vision models with Python](how-to-auto-train-image-models.md)
## Troubleshooting
machine-learning How To Create Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-workspace-template.md
-+ Last updated 08/12/2022--
-# Customer intent: As a DevOps person, I need to automate or customize the creation of Azure Machine Learning by using templates.
+#Customer intent: As a DevOps person, I need to automate or customize the creation of Azure Machine Learning by using templates.
# Use an Azure Resource Manager template to create a workspace for Azure Machine Learning
New-AzResourceGroupDeployment `
The following example template demonstrates how to create a workspace with three settings:
-* Enable high confidentiality settings for the workspace. This creates a new Cosmos DB instance.
+* Enable high confidentiality settings for the workspace. This creates a new Azure Cosmos DB instance.
* Enable encryption for the workspace.
-* Uses an existing Azure Key Vault to retrieve customer-managed keys. Customer-managed keys are used to create a new Cosmos DB instance for the workspace.
+* Uses an existing Azure Key Vault to retrieve customer-managed keys. Customer-managed keys are used to create a new Azure Cosmos DB instance for the workspace.
> [!IMPORTANT] > Once a workspace has been created, you cannot change the settings for confidential data, encryption, key vault ID, or key identifiers. To change these values, you must create a new workspace using the new values.
New-AzResourceGroupDeployment `
```
-When using a customer-managed key, Azure Machine Learning creates a secondary resource group which contains the Cosmos DB instance. For more information, see [encryption at rest - Cosmos DB](concept-data-encryption.md#encryption-at-rest).
+When using a customer-managed key, Azure Machine Learning creates a secondary resource group which contains the Azure Cosmos DB instance. For more information, see [Encryption at rest in Azure Cosmos DB](concept-data-encryption.md#encryption-at-rest).
An additional configuration you can provide for your data is to set the **confidential_data** parameter to **true**. Doing so does the following:
To avoid this problem, we recommend one of the following approaches:
* [Creating and deploying Azure resource groups through Visual Studio](../azure-resource-manager/templates/create-visual-studio-deployment-project.md). * [For other templates related to Azure Machine Learning, see the Azure Quickstart Templates repository](https://github.com/Azure/azure-quickstart-templates). * [How to use workspace diagnostics](how-to-workspace-diagnostic-api.md).
-* [Move an Azure Machine Learning workspace to another subscription](how-to-move-workspace.md).
+* [Move an Azure Machine Learning workspace to another subscription](how-to-move-workspace.md).
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
Last updated 11/03/2021 -+ ms.devlang: azurecli #Customer intent: As a machine learning engineer, I want to test and debug online endpoints locally using Visual Studio Code before deploying them Azure. # Debug online endpoints locally in Visual Studio Code (preview) --
-> [!IMPORTANT]
-> SDK v2 is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Learn how to use the Visual Studio Code (VS Code) debugger to test and debug online endpoints locally before deploying them to Azure.
This guide assumes you have the following items installed locally on your PC.
For more information, see the guide on [how to prepare your system to deploy managed online endpoints](how-to-deploy-managed-online-endpoints.md#prepare-your-system).
-The examples in this article can be found in the [Debug online endpoints locally in Visual Studio Code](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/endpoints/online/managed/debug-online-endpoints-locally-in-visual-studio-code.ipynb) notebook within the[azureml-examples](https://github.com/azure/azureml-examples) repository. To run the code locally, clone the repo and then change directories to the notebook's parent directory `sdk/endpoints/online/managed`.
+The examples in this article can be found in the [Debug online endpoints locally in Visual Studio Code](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/managed/debug-online-endpoints-locally-in-visual-studio-code.ipynb) notebook within the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the code locally, clone the repo and then change directories to the notebook's parent directory `sdk/python/endpoints/online/managed`.
```azurecli git clone https://github.com/Azure/azureml-examples --depth 1 cd azureml-examples
-cd sdk/endpoints/online/managed
+cd sdk/python/endpoints/online/managed
``` Import the required modules:
machine-learning How To Deploy Automl Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md
Last updated 05/11/2022 -+ ms.devlang: azurecli
ms.devlang: azurecli
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!IMPORTANT]
-> SDK v2 is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- In this article, you'll learn how to deploy an AutoML-trained machine learning model to an online (real-time inference) endpoint. Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of developing a machine learning model. For more, see [What is automated machine learning (AutoML)?](concept-automated-ml.md). Specifically, you'll learn how to deploy an AutoML-trained machine learning model to online endpoints using:
machine-learning How To Deploy Batch With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-batch-with-rest.md
Last updated 05/24/2022 -+ # Deploy models with REST for batch scoring
Below are some examples using different types of input data.
> - If you want to use local data, you can upload it to Azure Machine Learning registered datastore and use REST API for Cloud data. > - If you are using existing V1 FileDataset for batch endpoint, we recommend migrating them to V2 data assets and refer to them directly when invoking batch endpoints. Currently only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 Dataset. > - You can also extract the URI or path on datastore extracted from V1 FileDataset by using `az ml dataset show` command with `--query` parameter and use that information for invoke.
-> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
+> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
#### Configure the output location and overwrite settings
machine-learning How To Deploy Managed Online Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoint-sdk-v2.md
- Title: Deploy machine learning models to managed online endpoint using Python SDK v2 (preview).-
-description: Learn to deploy your machine learning model to Azure using Python SDK v2 (preview).
------ Previously updated : 05/25/2022----
-# Deploy and score a machine learning model with managed online endpoint using Python SDK v2 (preview)
--
-> [!IMPORTANT]
-> SDK v2 is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-In this article, you learn how to deploy your machine learning model to managed online endpoint and get predictions. You'll begin by deploying a model on your local machine to debug any errors, and then you'll deploy and test it in Azure.
-
-## Prerequisites
--
-* To deploy locally, you must install [Docker Engine](https://docs.docker.com/engine/) on your local computer. We highly recommend this option, so it's easier to debug issues.
-
-### Clone examples repository
-
-To run the training examples, first clone the examples repository and change into the `sdk` directory:
-
-```bash
-git clone --depth 1 https://github.com/Azure/azureml-examples
-cd azureml-examples/sdk
-```
-
-> [!TIP]
-> Use `--depth 1` to clone only the latest commit to the repository, which reduces time to complete the operation.
-
-## Connect to Azure Machine Learning workspace
-
-The [workspace](concept-workspace.md) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
-
-1. Import the required libraries:
-
- ```python
- # import required libraries
- from azure.ai.ml import MLClient
- from azure.ai.ml.entities import (
- ManagedOnlineEndpoint,
- ManagedOnlineDeployment,
- Model,
- Environment,
- CodeConfiguration,
- )
- from azure.identity import DefaultAzureCredential
- ```
-
-1. Configure workspace details and get a handle to the workspace:
-
- To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).
-
- ```python
- # enter details of your AzureML workspace
- subscription_id = "<SUBSCRIPTION_ID>"
- resource_group = "<RESOURCE_GROUP>"
- workspace = "<AZUREML_WORKSPACE_NAME>"
- ```
-
- ```python
- # get a handle to the workspace
- ml_client = MLClient(
- DefaultAzureCredential(), subscription_id, resource_group, workspace
- )
- ```
-
-## Create local endpoint and deployment
-
-> [!NOTE]
-> To deploy locally, [Docker Engine](https://docs.docker.com/engine/install/) must be installed.
-> Docker Engine must be running. Docker Engine typically starts when the computer starts. If it doesn't, you can [troubleshoot Docker Engine](https://docs.docker.com/config/daemon/#start-the-daemon-manually).
-
-1. Create local endpoint:
-
- The goal of a local endpoint deployment is to validate and debug your code and configuration before you deploy to Azure. Local deployment has the following limitations:
-
- * Local endpoints don't support traffic rules, authentication, or probe settings.
- * Local endpoints support only one deployment per endpoint.
-
- ```python
- # Creating a local endpoint
- import datetime
-
- local_endpoint_name = "local-" + datetime.datetime.now().strftime("%m%d%H%M%f")
-
- # create an online endpoint
- endpoint = ManagedOnlineEndpoint(
- name=local_endpoint_name, description="this is a sample local endpoint"
- )
- ```
-
- ```python
- ml_client.online_endpoints.begin_create_or_update(endpoint, local=True)
- ```
-
-1. Create local deployment:
-
- The example contains all the files needed to deploy a model on an online endpoint. To deploy a model, you must have:
-
- * Model files (or the name and version of a model that's already registered in your workspace). In the example, we have a scikit-learn model that does regression.
- * The code that's required to score the model. In this case, we have a score.py file.
- * An environment in which your model runs. As you'll see, the environment might be a Docker image with Conda dependencies, or it might be a Dockerfile.
- * Settings to specify the instance type and scaling capacity.
-
- **Key aspects of deployment**
- * `name` - Name of the deployment.
- * `endpoint_name` - Name of the endpoint to create the deployment under.
- * `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
- * `environment` - The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification.
- * `code_configuration` - the configuration for the source code and scoring script
- * `path`- Path to the source code directory for scoring the model
- * `scoring_script` - Relative path to the scoring file in the source code directory
- * `instance_type` - The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
- * `instance_count` - The number of instances to use for the deployment
-
- ```python
- model = Model(path="../model-1/model/sklearn_regression_model.pkl")
- env = Environment(
- conda_file="../model-1/environment/conda.yml",
- image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
- )
-
- blue_deployment = ManagedOnlineDeployment(
- name="blue",
- endpoint_name=local_endpoint_name,
- model=model,
- environment=env,
- code_configuration=CodeConfiguration(
- code="../model-1/onlinescoring", scoring_script="score.py"
- ),
- instance_type="Standard_F2s_v2",
- instance_count=1,
- )
- ```
-
- ```python
- ml_client.online_deployments.begin_create_or_update(
- deployment=blue_deployment, local=True
- )
- ```
-
-## Verify the local deployment succeeded
-
-1. Check the status to see whether the model was deployed without error:
-
- ```python
- ml_client.online_endpoints.get(name=local_endpoint_name, local=True)
- ```
-
-1. Get logs:
-
- ```python
- ml_client.online_deployments.get_logs(
- name="blue", endpoint_name=local_endpoint_name, local=True, lines=50
- )
- ```
-
-## Invoke the local endpoint
-
-Invoke the endpoint to score the model by using the convenience command invoke and passing query parameters that are stored in a JSON file
-
-```python
-ml_client.online_endpoints.invoke(
- endpoint_name=local_endpoint_name,
- request_file="../model-1/sample-request.json",
- local=True,
-)
-```
-
-## Deploy your online endpoint to Azure
-
-Next, deploy your online endpoint to Azure.
-
-1. Configure online endpoint:
-
- > [!TIP]
- > * `endpoint_name`: The name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
- > * `auth_mode` : Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md).
- > * Optionally, you can add description, tags to your endpoint.
-
- ```python
- # Creating a unique endpoint name with current datetime to avoid conflicts
- import datetime
-
- online_endpoint_name = "endpoint-" + datetime.datetime.now().strftime("%m%d%H%M%f")
-
- # create an online endpoint
- endpoint = ManagedOnlineEndpoint(
- name=online_endpoint_name,
- description="this is a sample online endpoint",
- auth_mode="key",
- tags={"foo": "bar"},
- )
- ```
-
-1. Create the endpoint:
-
- Using the `MLClient` created earlier, we'll now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
-
- ```python
- ml_client.begin_create_or_update(endpoint)
- ```
-
-1. Configure online deployment:
-
- A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class.
-
- ```python
- model = Model(path="../model-1/model/sklearn_regression_model.pkl")
- env = Environment(
- conda_file="../model-1/environment/conda.yml",
- image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
- )
-
- blue_deployment = ManagedOnlineDeployment(
- name="blue",
- endpoint_name=online_endpoint_name,
- model=model,
- environment=env,
- code_configuration=CodeConfiguration(
- code="../model-1/onlinescoring", scoring_script="score.py"
- ),
- instance_type="Standard_F2s_v2",
- instance_count=1,
- )
- ```
-
-1. Create the deployment:
-
- Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
-
- ```python
- ml_client.begin_create_or_update(blue_deployment)
- ```
-
- ```python
- # blue deployment takes 100 traffic
- endpoint.traffic = {"blue": 100}
- ml_client.begin_create_or_update(endpoint)
- ```
-
-## Test the endpoint with sample data
-
-Using the `MLClient` created earlier, we'll get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:
-
-* `endpoint_name` - Name of the endpoint
-* `request_file` - File with request data
-* `deployment_name` - Name of the specific deployment to test in an endpoint
-
-We'll send a sample request using a [json](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/endpoints/online/model-1/sample-request.json) file.
-
-```python
-# test the blue deployment with some sample data
-ml_client.online_endpoints.invoke(
- endpoint_name=online_endpoint_name,
- deployment_name="blue",
- request_file="../model-1/sample-request.json",
-)
-```
-
-## Managing endpoints and deployments
-
-1. Get details of the endpoint:
-
- ```python
- # Get the details for online endpoint
- endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)
-
- # existing traffic details
- print(endpoint.traffic)
-
- # Get the scoring URI
- print(endpoint.scoring_uri)
- ```
-
-1. Get the logs for the new deployment:
-
- Get the logs for the green deployment and verify as needed
-
- ```python
- ml_client.online_deployments.get_logs(
- name="blue", endpoint_name=online_endpoint_name, lines=50
- )
- ```
-
-## Delete the endpoint
-
-```python
-ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
-```
-
-## Next steps
-
-Try these next steps to learn how to use the Azure Machine Learning SDK (v2) for Python:
-* [Managed online endpoint safe rollout](how-to-safely-rollout-managed-endpoints-sdk-v2.md)
-* Explore online endpoint samples - [https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/endpoints](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/endpoints)
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
Title: Deploy an ML model by using an online endpoint
+ Title: Deploy machine learning models to online endpoints
-description: Learn to deploy your machine learning model as a web service that's to Azure.
+description: Learn to deploy your machine learning model as an online endpoint in Azure.
Previously updated : 08/31/2022 Last updated : 10/06/2022 -+ # Deploy and score a machine learning model by using an online endpoint [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] +
+> [!IMPORTANT]
+> SDK v2 is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Learn how to use an online endpoint to deploy your model, so you don't have to create and manage the underlying infrastructure. You'll begin by deploying a model on your local machine to debug any errors, and then you'll deploy and test it in Azure. You'll also learn how to view the logs and monitor the service-level agreement (SLA). You start with a model and end up with a scalable HTTPS/REST endpoint that you can use for online and real-time scoring.
-Managed online endpoints help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure. The main example in this doc uses managed online endpoints for deployment. To use Kubernetes instead, see the notes in this document inline with the managed online endpoint discussion. For more information, see [What are Azure Machine Learning endpoints?](concept-endpoints.md).
+Online endpoints are endpoints that are used for online (real-time) inferencing. There are two types of online endpoints: **managed online endpoints** and **Kubernetes online endpoints**. For more information on endpoints, and differences between managed online endpoints and Kubernetes online endpoints, see [What are Azure Machine Learning endpoints?](concept-endpoints.md#managed-online-endpoints-vs-kubernetes-online-endpoints).
-## Prerequisites
+Managed online endpoints help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure.
-* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+The main example in this doc uses managed online endpoints for deployment. To use Kubernetes instead, see the notes in this document inline with the managed online endpoint discussion.
-* Install and configure the Azure CLI and the `ml` extension to the Azure CLI. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+## Prerequisites
-* You must have an Azure resource group, and you (or the service principal you use) must have Contributor access to it. A resource group is created in [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+# [Azure CLI](#tab/azure-cli)
-* You must have an Azure Machine Learning workspace. A workspace is created in [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+
+* Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
* If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:
Managed online endpoints help to deploy your ML models in a turnkey manner. Mana
az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group> ```
-* Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
- * (Optional) To deploy locally, you must [install Docker Engine](https://docs.docker.com/engine/install/) on your local computer. We *highly recommend* this option, so it's easier to debug issues. > [!IMPORTANT] > The examples in this document assume that you are using the Bash shell. For example, from a Linux system or [Windows Subsystem for Linux](/windows/wsl/about).
+# [Python](#tab/python)
+++
+* Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+
+* (Optional) To deploy locally, you must [install Docker Engine](https://docs.docker.com/engine/install/) on your local computer. We *highly recommend* this option, so it's easier to debug issues.
+++ ## Prepare your system
-To follow along with this article, first clone the samples repository (azureml-examples). Then, run the following code to go to the samples directory:
+# [Azure CLI](#tab/azure-cli)
+
+### Clone the sample repository
+
+To follow along with this article, first clone the [samples repository (azureml-examples)](https://github.com/azure/azureml-examples). Then, run the following code to go to the samples directory:
```azurecli
-git clone https://github.com/Azure/azureml-examples
+git clone --depth 1 https://github.com/Azure/azureml-examples
cd azureml-examples cd cli ```
-To set your endpoint name, choose one of the following commands, depending on your operating system (replace `YOUR_ENDPOINT_NAME` with a unique name).
+> [!TIP]
+> Use `--depth 1` to clone only the latest commit to the repository, which reduces time to complete the operation.
+
+### Set an endpoint name
+
+To set your endpoint name, run the following command (replace `YOUR_ENDPOINT_NAME` with a unique name).
For Unix, run this command:
For Unix, run this command:
> [!NOTE] > Endpoint names must be unique within an Azure region. For example, in the Azure `westus2` region, there can be only one endpoint with the name `my-endpoint`.
-## Review the endpoint and deployment configurations
+# [Python](#tab/python)
+
+### Clone the sample repository
+
+To run the training examples, first clone the [examples repository (azureml-examples)](https://github.com/azure/azureml-examples) and change into the `azureml-examples/sdk/python/endpoints/online/managed` directory:
+
+```bash
+git clone --depth 1 https://github.com/Azure/azureml-examples
+cd azureml-examples/sdk/python/endpoints/online/managed
+```
+
+> [!TIP]
+> Use `--depth 1` to clone only the latest commit to the repository, which reduces time to complete the operation.
+
+The information in this article is based on the [online-endpoints-simple-deployment.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/managed/online-endpoints-simple-deployment.ipynb) notebook. It contains the same content as this article, although the order of the code is slightly different.
+
+### Connect to Azure Machine Learning workspace
+
+The [workspace](concept-workspace.md) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
+
+1. Import the required libraries:
+
+ ```python
+ # import required libraries
+ from azure.ai.ml import MLClient
+ from azure.ai.ml.entities import (
+ ManagedOnlineEndpoint,
+ ManagedOnlineDeployment,
+ Model,
+ Environment,
+ CodeConfiguration,
+ )
+ from azure.identity import DefaultAzureCredential
+ ```
+
+1. Configure workspace details and get a handle to the workspace:
+
+ To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).
+
+ ```python
+ # enter details of your AzureML workspace
+ subscription_id = "<SUBSCRIPTION_ID>"
+ resource_group = "<RESOURCE_GROUP>"
+ workspace = "<AZUREML_WORKSPACE_NAME>"
+ ```
+
+ ```python
+ # get a handle to the workspace
+ ml_client = MLClient(
+ DefaultAzureCredential(), subscription_id, resource_group, workspace
+ )
+ ```
+++
+## Define the endpoint and deployment
+
+# [Azure CLI](#tab/azure-cli)
The following snippet shows the *endpoints/online/managed/sample/endpoint.yml* file:
The following snippet shows the *endpoints/online/managed/sample/endpoint.yml* f
The reference for the endpoint YAML format is described in the following table. To learn how to specify these attributes, see the YAML example in [Prepare your system](#prepare-your-system) or the [online endpoint YAML reference](reference-yaml-endpoint-online.md). For information about limits related to managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
-| Key | Description |
-| | |
-| `$schema` | (Optional) The YAML schema. To see all available options in the YAML file, you can view the schema in the preceding example in a browser.|
-| `name` | The name of the endpoint. It must be unique in the Azure region.<br>Naming rules are defined under [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).|
+| Key | Description |
+| -- | -- |
+| `$schema` | (Optional) The YAML schema. To see all available options in the YAML file, you can view the schema in the preceding example in a browser. |
+| `name` | The name of the endpoint. It must be unique in the Azure region.<br>Naming rules are defined under [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). |
| `auth_mode` | Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. `key` doesn't expire, but `aml_token` does expire. (Get the most recent token by using the `az ml online-endpoint get-credentials` command.) | The example contains all the files needed to deploy a model on an online endpoint. To deploy a model, you must have:
The following snippet shows the *endpoints/online/managed/sample/blue-deployment
The table describes the attributes of a `deployment`:
-| Key | Description |
-| | |
-| `name` | The name of the deployment. |
-| `model` | In this example, we specify the model properties inline: `path`. Model files are automatically uploaded and registered with an autogenerated name. For related best practices, see the tip in the next section. |
-| `code_configuration.code.path` | The directory on the local development environment that contains all the Python source code for scoring the model. You can use nested directories and packages. |
+| Key | Description |
+| -- | -- |
+| `name` | The name of the deployment. |
+| `model` | In this example, we specify the model properties inline: `path`. Model files are automatically uploaded and registered with an autogenerated name. For related best practices, see the tip in the next section. |
+| `code_configuration.code.path` | The directory on the local development environment that contains all the Python source code for scoring the model. You can use nested directories and packages. |
| `code_configuration.scoring_script` | The Python file that's in the `code_configuration.code.path` scoring directory on the local development environment. This Python code must have an `init()` function and a `run()` function. The function `init()` will be called after the model is created or updated (you can use it to cache the model in memory, for example). The `run()` function is called at every invocation of the endpoint to do the actual scoring and prediction. |
-| `environment` | Contains the details of the environment to host the model and code. In this example, we have inline definitions that include the`path`. We'll use `environment.docker.image` for the image. The `conda_file` dependencies will be installed on top of the image. For more information, see the tip in the next section. |
-| `instance_type` | The VM SKU that will host your deployment instances. For more information, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). |
-| `instance_count` | The number of instances in the deployment. Base the value on the workload you expect. For high availability, we recommend that you set `instance_count` to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). |
+| `environment` | Contains the details of the environment to host the model and code. In this example, we have inline definitions that include the`path`. We'll use `environment.docker.image` for the image. The `conda_file` dependencies will be installed on top of the image. For more information, see the tip in the next section. |
+| `instance_type` | The VM SKU that will host your deployment instances. For more information, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). |
+| `instance_count` | The number of instances in the deployment. Base the value on the workload you expect. For high availability, we recommend that you set `instance_count` to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). |
During deployment, the local files such as the Python source for the scoring model, are uploaded from the development environment.
For more information about the YAML schema, see the [online endpoint YAML refere
> > All the commands that are used in this article (except the optional SLA monitoring and Azure Log Analytics integration) can be used either with managed endpoints or with Kubernetes endpoints.
+# [Python](#tab/python)
+
+In this article, we first define the names of the online endpoint and deployment for local debugging.
+
+1. Define an endpoint (with a name for the local endpoint):
+ ```python
+ # Creating a local endpoint
+ import datetime
+
+ local_endpoint_name = "local-" + datetime.datetime.now().strftime("%m%d%H%M%f")
+
+ # create an online endpoint
+ endpoint = ManagedOnlineEndpoint(
+ name=local_endpoint_name, description="this is a sample local endpoint"
+ )
+ ```
+
+1. Define a deployment (with a name for the local deployment):
+
+ The example contains all the files needed to deploy a model on an online endpoint. To deploy a model, you must have:
+
+ * Model files (or the name and version of a model that's already registered in your workspace). In the example, we have a scikit-learn model that does regression.
+ * The code that's required to score the model. In this case, we have a score.py file.
+ * An environment in which your model runs. As you'll see, the environment might be a Docker image with Conda dependencies, or it might be a Dockerfile.
+ * Settings to specify the instance type and scaling capacity.
+
+ **Key aspects of deployment**
+ * `name` - Name of the deployment.
+ * `endpoint_name` - Name of the endpoint to create the deployment under.
+ * `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
+ * `environment` - The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification.
+ * `code_configuration` - the configuration for the source code and scoring script
+ * `path`- Path to the source code directory for scoring the model
+ * `scoring_script` - Relative path to the scoring file in the source code directory
+ * `instance_type` - The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
+ * `instance_count` - The number of instances to use for the deployment
+
+ ```python
+ model = Model(path="../model-1/model/sklearn_regression_model.pkl")
+ env = Environment(
+ conda_file="../model-1/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ )
+
+ blue_deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=local_endpoint_name,
+ model=model,
+ environment=env,
+ code_configuration=CodeConfiguration(
+ code="../model-1/onlinescoring", scoring_script="score.py"
+ ),
+ instance_type="Standard_DS2_v2",
+ instance_count=1,
+ )
+ ```
+++ ### Register your model and environment separately
+# [Azure CLI](#tab/azure-cli)
+ In this example, we specify the `path` (where to upload files from) inline. The CLI automatically uploads the files and registers the model and environment. As a best practice for production, you should register the model and environment and specify the registered name and version separately in the YAML. Use the form `model: azureml:my-model:1` or `environment: azureml:my-env:1`. For registration, you can extract the YAML definitions of `model` and `environment` into separate YAML files and use the commands `az ml model create` and `az ml environment create`. To learn more about these commands, run `az ml model create -h` and `az ml environment create -h`.
+# [Python](#tab/python)
+
+In this example, we specify the `path` (where to upload files from) inline. The SDK automatically uploads the files and registers the model and environment. As a best practice for production, you should register the model and environment and specify the registered name and version separately in your code.
+
+For more information on registering your model as an asset, see [Register your model as an asset in Machine Learning by using the SDK](how-to-manage-models.md#register-your-model-as-an-asset-in-machine-learning-by-using-the-sdk).
+
+For more information on creating an environment, see [Manage Azure Machine Learning environments with the CLI & SDK (v2)](how-to-manage-environments-v2.md#create-an-environment).
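+
+For illustration, a sketch of registering the model and environment first and then referencing the returned objects in the deployment; the asset names below are placeholders:
+
+```python
+from azure.ai.ml.entities import Environment, Model
+
+# Register the model and environment once, then reuse them across deployments.
+registered_model = ml_client.models.create_or_update(
+    Model(path="../model-1/model/sklearn_regression_model.pkl", name="my-model")
+)
+registered_env = ml_client.environments.create_or_update(
+    Environment(
+        name="my-env",
+        conda_file="../model-1/environment/conda.yml",
+        image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+    )
+)
+
+# In the deployment definition, pass the registered assets instead of the inline specs:
+#   model=registered_model, environment=registered_env
+```
+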
+++ ### Use different CPU and GPU instance types
-The preceding YAML uses a general-purpose type (`Standard_F2s_v2`) and a non-GPU Docker image (in the YAML, see the `image` attribute). For GPU compute, choose a GPU compute type SKU and a GPU Docker image.
+The preceding examples use a general-purpose VM type (`Standard_DS2_v2`) and a non-GPU Docker image (see the `image` attribute). For GPU compute, choose a GPU compute type SKU and a GPU Docker image.
For supported general-purpose and GPU instance types, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). For a list of Azure Machine Learning CPU and GPU base images, see [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers).
Currently, you can specify only one model per deployment in the YAML. If you've
> [!TIP] > The format of the scoring script for online endpoints is the same format that's used in the preceding version of the CLI and in the Python SDK.
-As noted earlier, the `code_configuration.scoring_script` must have an `init()` function and a `run()` function. This example uses the [score.py file](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/model-1/onlinescoring/score.py). The `init()` function is called when the container is initialized or started. Initialization typically occurs shortly after the deployment is created or updated. Write logic here for global initialization operations like caching the model in memory (as we do in this example). The `run()` function is called for every invocation of the endpoint and should do the actual scoring and prediction. In the example, we extract the data from the JSON input, call the scikit-learn model's `predict()` method, and then return the result.
+# [Azure CLI](#tab/azure-cli)
+As noted earlier, the script specified in `code_configuration.scoring_script` must have an `init()` function and a `run()` function. This example uses the [score.py file](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/model-1/onlinescoring/score.py).
+
+# [Python](#tab/python)
+As noted earlier, the script specified in `CodeConfiguration(scoring_script="score.py")` must have an `init()` function and a `run()` function. This example uses the [score.py file](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/model-1/onlinescoring/score.py).
+++
+The `init()` function is called when the container is initialized or started. Initialization typically occurs shortly after the deployment is created or updated. Write logic here for global initialization operations like caching the model in memory (as we do in this example). The `run()` function is called for every invocation of the endpoint and should do the actual scoring and prediction. In the example, we extract the data from the JSON input, call the scikit-learn model's `predict()` method, and then return the result.
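+
+For orientation, here's a minimal sketch of what such a scoring script can look like; the real score.py in the examples repo differs in its details, and the model file name and path below are assumptions:
+
+```python
+import json
+import os
+
+import joblib  # the example model is a scikit-learn model
+
+
+def init():
+    # Called once when the container starts; cache the model in memory.
+    global model
+    # AZUREML_MODEL_DIR points to the folder where the registered model is mounted.
+    model_path = os.path.join(
+        os.environ["AZUREML_MODEL_DIR"], "sklearn_regression_model.pkl"  # assumed file name
+    )
+    model = joblib.load(model_path)
+
+
+def run(raw_data):
+    # Called on every invocation; extract the input, score it, and return the result.
+    data = json.loads(raw_data)["data"]
+    result = model.predict(data)
+    return result.tolist()
+```
+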
## Deploy and debug locally by using local endpoints
To save time debugging, we *highly recommend* that you test-run your endpoint lo
### Deploy the model locally
-First create the endpoint. Optionally, for a local endpoint, you can skip this step and directly create the deployment (next step), which will, in turn, create the required metadata. This is useful for development and testing purposes.
+First create an endpoint. Optionally, for a local endpoint, you can skip this step and directly create the deployment (next step), which will, in turn, create the required metadata. Deploying models locally is useful for development and testing purposes.
+
+# [Azure CLI](#tab/azure-cli)
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-local-endpoint.sh" ID="create_endpoint":::
+# [Python](#tab/python)
+
+```python
+ml_client.online_endpoints.begin_create_or_update(endpoint, local=True)
+```
+++ Now, create a deployment named `blue` under the endpoint.
+# [Azure CLI](#tab/azure-cli)
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-local-endpoint.sh" ID="create_deployment"::: The `--local` flag directs the CLI to deploy the endpoint in the Docker environment.
+# [Python](#tab/python)
+
+```python
+ml_client.online_deployments.begin_create_or_update(
+ deployment=blue_deployment, local=True
+)
+```
+
+The `local=True` flag directs the SDK to deploy the endpoint in the Docker environment.
+++ > [!TIP] > Use Visual Studio Code to test and debug your endpoints locally. For more information, see [debug online endpoints locally in Visual Studio Code](how-to-debug-managed-online-endpoints-visual-studio-code.md).
The `--local` flag directs the CLI to deploy the endpoint in the Docker environm
Check the status to see whether the model was deployed without error:
+# [Azure CLI](#tab/azure-cli)
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-local-endpoint.sh" ID="get_status"::: The output should appear similar to the following JSON. The `provisioning_state` is `Succeeded`.
The output should appear similar to the following JSON. The `provisioning_state`
} ```
+# [Python](#tab/python)
+
+```python
+ml_client.online_endpoints.get(name=local_endpoint_name, local=True)
+```
+
+The method returns a [`ManagedOnlineEndpoint` entity](/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint.md). The `provisioning_state` is `Succeeded`.
+
+```python
+ManagedOnlineEndpoint({'public_network_access': None, 'provisioning_state': 'Succeeded', 'scoring_uri': 'http://localhost:49158/score', 'swagger_uri': None, 'name': 'local-10061534497697', 'description': 'this is a sample local endpoint', 'tags': {}, 'properties': {}, 'id': None, 'Resource__source_path': None, 'base_path': '/path/to/your/working/directory', 'creation_context': None, 'serialize': <msrest.serialization.Serializer object at 0x7ffb781bccd0>, 'auth_mode': 'key', 'location': 'local', 'identity': None, 'traffic': {}, 'mirror_traffic': {}, 'kind': None})
+```
+++ The following table contains the possible values for `provisioning_state`:
-| State | Description |
-| -- | -- |
-| __Creating__ | The resource is being created. |
-| __Updating__ | The resource is being updated. |
-| __Deleting__ | The resource is being deleted. |
-| __Succeeded__ | The create/update operation was successful. |
-| __Failed__ | The create/update/delete operation has failed. |
+| State | Description |
+| - | - |
+| __Creating__ | The resource is being created. |
+| __Updating__ | The resource is being updated. |
+| __Deleting__ | The resource is being deleted. |
+| __Succeeded__ | The create/update operation was successful. |
+| __Failed__ | The create/update/delete operation has failed. |
### Invoke the local endpoint to score data by using your model
+# [Azure CLI](#tab/azure-cli)
+ Invoke the endpoint to score the model by using the convenience command `invoke` and passing query parameters that are stored in a JSON file: :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-local-endpoint.sh" ID="test_endpoint"::: If you want to use a REST client (like curl), you must have the scoring URI. To get the scoring URI, run `az ml online-endpoint show --local -n $ENDPOINT_NAME`. In the returned data, find the `scoring_uri` attribute. Sample curl-based commands are available later in this article.
+# [Python](#tab/python)
+
+Invoke the endpoint to score the model by using the convenience `invoke` method and passing query parameters that are stored in a JSON file:
+
+```python
+ml_client.online_endpoints.invoke(
+ endpoint_name=local_endpoint_name,
+ request_file="../model-1/sample-request.json",
+ local=True,
+)
+```
+
+If you want to use a REST client (like curl), you must have the scoring URI. To get the scoring URI, run the following code. In the returned data, find the `scoring_uri` attribute. Sample curl-based commands are available later in this article.
+
+```python
+endpoint = ml_client.online_endpoints.get(endpoint_name)
+scoring_uri = endpoint.scoring_uri
+```
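As an illustration, a hedged sketch of posting the sample request file to that scoring URI with the `requests` library follows. It reuses the `scoring_uri` variable and the sample request path from this article; add an `Authorization` header with the endpoint key if your endpoint enforces key authentication.

```python
import json

import requests

# Load the same sample request file used with the invoke method above
with open("../model-1/sample-request.json") as f:
    payload = json.load(f)

# scoring_uri comes from the preceding snippet; add an auth header if a key is required
response = requests.post(scoring_uri, json=payload)
print(response.json())
```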
+++ ### Review the logs for output from the invoke operation
-In the example *score.py* file, the `run()` method logs some output to the console. You can view this output by using the `get-logs` command again:
+In the example *score.py* file, the `run()` method logs some output to the console.
+
+# [Azure CLI](#tab/azure-cli)
+
+You can view this output by using the `get-logs` command:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-local-endpoint.sh" ID="get_logs":::
+# [Python](#tab/python)
+
+You can view this output by using the `get_logs` method:
+
+```python
+ml_client.online_deployments.get_logs(
+ name="blue", endpoint_name=local_endpoint_name, local=True, lines=50
+)
+```
+++ ## Deploy your online endpoint to Azure Next, deploy your online endpoint to Azure. ### Deploy to Azure
+# [Azure CLI](#tab/azure-cli)
+ To create the endpoint in the cloud, run the following code: ::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="create_endpoint" :::
To create the deployment named `blue` under the endpoint, run the following code
This deployment might take up to 15 minutes, depending on whether the underlying environment or image is being built for the first time. Subsequent deployments that use the same environment will finish processing more quickly.
+> [!TIP]
+> * If you prefer not to block your CLI console, you may add the flag `--no-wait` to the command. However, this will stop the interactive display of the deployment status.
+ > [!IMPORTANT] > The `--all-traffic` flag in the above `az ml online-deployment create` allocates 100% of the traffic to the endpoint to the newly created deployment. Though this is helpful for development and testing purposes, for production, you might want to open traffic to the new deployment through an explicit command. For example, > `az ml online-endpoint update -n $ENDPOINT_NAME --traffic "blue=100"`
+# [Python](#tab/python)
+
+1. Configure online endpoint:
+
+ > [!TIP]
+ > * `endpoint_name`: The name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
+ > * `auth_mode` : Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md).
+ > * Optionally, you can add a description and tags to your endpoint.
+
+ ```python
+ # Creating a unique endpoint name with current datetime to avoid conflicts
+ import datetime
+
+ online_endpoint_name = "endpoint-" + datetime.datetime.now().strftime("%m%d%H%M%f")
+
+ # create an online endpoint
+ endpoint = ManagedOnlineEndpoint(
+ name=online_endpoint_name,
+ description="this is a sample online endpoint",
+ auth_mode="key",
+ tags={"foo": "bar"},
+ )
+ ```
+
+1. Create the endpoint:
+
+ Using the `MLClient` created earlier, we'll now create the endpoint in the workspace. This method starts the endpoint creation and returns a confirmation response while the endpoint creation continues.
+
+ ```python
+ ml_client.online_endpoints.begin_create_or_update(endpoint)
+ ```
+
+2. Configure online deployment:
+
+ A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class.
+
+ ```python
+ model = Model(path="../model-1/model/sklearn_regression_model.pkl")
+ env = Environment(
+ conda_file="../model-1/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ )
+
+ blue_deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=online_endpoint_name,
+ model=model,
+ environment=env,
+ code_configuration=CodeConfiguration(
+ code="../model-1/onlinescoring", scoring_script="score.py"
+ ),
+ instance_type="Standard_DS2_v2",
+ instance_count=1,
+ )
+ ```
+
+3. Create the deployment:
+
+ Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This method starts the deployment creation and returns a confirmation response while the deployment creation continues.
+
+ ```python
+ ml_client.online_deployments.begin_create_or_update(blue_deployment)
+ ```
+
+ > [!TIP]
+ > * If you prefer not to block your Python console, you may pass `no_wait=True` to the method. However, this will stop the interactive display of the deployment status.
+
+ ```python
+ # blue deployment takes 100 traffic
+ endpoint.traffic = {"blue": 100}
+ ml_client.online_endpoints.begin_create_or_update(endpoint)
+ ```
+++ > [!TIP]
-> * If you prefer not to block your CLI console, you may add the flag `--no-wait` to the command. However, this will stop the interactive display of the deployment status.
->
> * Use [Troubleshooting online endpoints deployment](./how-to-troubleshoot-online-endpoints.md) to debug errors.
-### Check the status of the deployment
+### Check the status of the endpoint
+
+# [Azure CLI](#tab/azure-cli)
The `show` command contains information in `provisioning_status` for endpoint and deployment:
You can list all the endpoints in the workspace in a table format by using the `
az ml online-endpoint list --output table ```
-### Check the status of the cloud deployment
+# [Python](#tab/python)
+
+Check the status to see whether the model was deployed without error:
+
+```python
+ml_client.online_endpoints.get(name=online_endpoint_name)
+```
+
+You can list all the endpoints in the workspace in a table format by using the `list` method:
+
+```python
+for endpoint in ml_client.online_endpoints.list():
+ print(endpoint.name)
+```
+
+The method returns a list (iterator) of `ManagedOnlineEndpoint` entities. You can get other information by specifying [parameters](/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint.md#parameters).
+
+For example, output the list of endpoints like a table:
+
+```python
+print("Kind\tLocation\tName")
+print("-\t-\t")
+for endpoint in ml_client.online_endpoints.list():
+ print(f"{endpoint.kind}\t{endpoint.location}\t{endpoint.name}")
+```
+++
+### Check the status of the online deployment
Check the logs to see whether the model was deployed without error:
+# [Azure CLI](#tab/azure-cli)
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="get_logs" ::: By default, logs are pulled from inference-server. To see the logs from storage-initializer (it mounts assets like model and code to the container), add the `--container storage-initializer` flag. +
+# [Python](#tab/python)
+
+You can view this output by using the `get_logs` method:
+
+```python
+ml_client.online_deployments.get_logs(
+ name="blue", endpoint_name=online_endpoint_name, lines=50
+)
+```
+
+By default, logs are pulled from inference-server. To see the logs from storage-initializer (it mounts assets like model and code to the container), add the `container_type="storage-initializer"` option.
+
+```python
+ml_client.online_deployments.get_logs(
+ name="blue", endpoint_name=online_endpoint_name, lines=50, container_type="storage-initializer"
+)
+```
++
+For more information on deployment logs, see [Get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs).
+ ### Invoke the endpoint to score data by using your model
+# [Azure CLI](#tab/azure-cli)
+ You can use either the `invoke` command or a REST client of your choice to invoke the endpoint and score some data: ::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="test_endpoint" :::
To see the invocation logs, run `get-logs` again.
For information on authenticating using a token, see [Authenticate to online endpoints](how-to-authenticate-online-endpoint.md). +
+# [Python](#tab/python)
+
+Using the `MLClient` created earlier, we'll get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:
+
+* `endpoint_name` - Name of the endpoint
+* `request_file` - File with request data
+* `deployment_name` - Name of the specific deployment to test in an endpoint
+
+We'll send a sample request using a [JSON](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/endpoints/online/model-1/sample-request.json) file.
+
+```python
+# test the blue deployment with some sample data
+ml_client.online_endpoints.invoke(
+ endpoint_name=online_endpoint_name,
+ deployment_name="blue",
+ request_file="../model-1/sample-request.json",
+)
+```
+++ ### (Optional) Update the deployment
+# [Azure CLI](#tab/azure-cli)
+ If you want to update the code, model, or environment, update the YAML file, and then run the `az ml online-endpoint update` command. > [!NOTE]
The `update` command also works with local deployments. Use the same `az ml onli
> [!TIP] > With the `update` command, you can use the [`--set` parameter in the Azure CLI](/cli/azure/use-cli-effectively#generic-update-arguments) to override attributes in your YAML *or* to set specific attributes without passing the YAML file. Using `--set` for single attributes is especially valuable in development and test scenarios. For example, to scale up the `instance_count` value for the first deployment, you could use the `--set instance_count=2` flag. However, because the YAML isn't updated, this technique doesn't facilitate [GitOps](https://www.atlassian.com/git/tutorials/gitops).
+# [Python](#tab/python)
+
+If you want to update the code, model, or environment, update the configuration, and then run the `MLClient`'s [`online_deployments.begin_create_or_update`](/python/api/azure-ai-ml/azure.ai.ml.operations.onlinedeploymentoperations.md#azure-ai-ml-operations-onlinedeploymentoperations-begin-create-or-update) method.
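For example, a hedged sketch of scaling out the deployment by changing `instance_count` on the `blue_deployment` object configured earlier and reapplying it:

```python
# Scale the existing deployment to two instances, then reapply the configuration
blue_deployment.instance_count = 2
ml_client.online_deployments.begin_create_or_update(blue_deployment)
```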
+
+> [!NOTE]
+> If you update the instance count along with other model settings (code, model, or environment) in a single `begin_create_or_update` call, the scaling operation is performed first, and then the other updates are applied. In a production environment, it's a good practice to perform these operations separately.
+
+To understand how `begin_create_or_update` works:
+
+1. Open the file *online/model-1/onlinescoring/score.py*.
+2. Change the last line of the `init()` function: After `logging.info("Init complete")`, add `logging.info("Updated successfully")`.
+3. Save the file.
+4. Run the method:
+
+ ```python
+ ml_client.online_deployments.begin_create_or_update(blue_deployment)
+ ```
+
+5. Because you modified the `init()` function (`init()` runs when the endpoint is created or updated), the message `Updated successfully` will be in the logs. Retrieve the logs by running:
+
+ ```python
+ ml_client.online_deployments.get_logs(
+ name="blue", endpoint_name=online_endpoint_name, lines=50
+ )
+ ```
+
+The `begin_create_or_update` method also works with local deployments. Use the same method with the `local=True` flag.
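For instance, a minimal sketch of applying the same update to the local deployment created earlier:

```python
# Reapply the updated deployment configuration to the local Docker environment
ml_client.online_deployments.begin_create_or_update(blue_deployment, local=True)
```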
+++ > [!Note] > The above is an example of an in-place rolling update. > * For a managed online endpoint, the same deployment is updated with the new configuration, with 20% of the nodes at a time. That is, if the deployment has 10 nodes, 2 nodes are updated at a time.
To view metrics and set alerts based on your SLA, complete the steps that are de
### (Optional) Integrate with Log Analytics
-The `get-logs` command provides only the last few hundred lines of logs from an automatically selected instance. However, Log Analytics provides a way to durably store and analyze logs. For more information on using logging, see [Monitor online endpoints](how-to-monitor-online-endpoints.md#logs)
+The `get-logs` command for the CLI or the `get_logs` method for the SDK provides only the last few hundred lines of logs from an automatically selected instance. However, Log Analytics provides a way to durably store and analyze logs. For more information on using logging, see [Monitor online endpoints](how-to-monitor-online-endpoints.md#logs).
[!INCLUDE [Email Notification Include](../../includes/machine-learning-email-notifications.md)]
The `get-logs` command provides only the last few hundred lines of logs from an
If you aren't going to use the deployment, you should delete it by running the following code (it deletes the endpoint and all the underlying deployments):
+# [Azure CLI](#tab/azure-cli)
+ ::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="delete_endpoint" :::
+# [Python](#tab/python)
+
+```python
+ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
+```
+++ ## Next steps
+Try safe rollout of your models as a next step:
+- [Safe rollout for online endpoints (CLI v2)](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints (SDK v2)](how-to-safely-rollout-managed-endpoints-sdk-v2.md)
+ To learn more, review these articles: - [Deploy models with REST](how-to-deploy-with-rest.md) - [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)-- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md) - [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md) - [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md)-- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)-- [Access Azure resources with a online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)
+- [Access Azure resources from an online endpoint with a managed identity](how-to-access-resources-from-endpoints-managed-identities.md)
- [Troubleshoot online endpoints deployment](how-to-troubleshoot-online-endpoints.md) - [Enable network isolation with managed online endpoints](how-to-secure-online-endpoint.md)
+- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
-+ ms.devlang: azurecli # High-performance serving with Triton Inference Server (Preview) --
-> [!IMPORTANT]
-> SDK v2 is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Learn how to use [NVIDIA Triton Inference Server](https://aka.ms/nvidia-triton-docs) in Azure Machine Learning with [online endpoints](concept-endpoints.md#what-are-online-endpoints).
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
-+ Last updated 02/16/2022
-# View automated ML model's training code (preview)
--
+# View training code for an Automated ML model
In this article, you learn how to view the generated training code from any automated machine learning trained model.
The following diagram illustrates that you can generate the code for automated M
* Automated ML code generation is only available for experiments run on remote Azure ML compute targets. Code generation isn't supported for local runs.
-* To enable code generation with the SDK, you have the following options:
-
- * You can run your code via a Jupyter notebook in an [Azure Machine Learning compute instance](), which contains the latest Azure ML SDK already installed. The compute instance comes with a ready-to-use Conda environment that is compatible with the automated ML code generation (preview) capability.
-
- * Alternatively, you can create a new local Conda environment on your local machine and then install the latest Azure ML SDK. [How to install AutoML client SDK in Conda environment with the `automl` package](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml#setup-using-a-local-conda-environment).
-
-## Code generation with the SDK
+* All automated ML runs triggered through AzureML Studio, SDKv2 or CLIv2 will have code generation enabled.
+## Get generated code and model artifacts
By default, each automated ML trained model generates its training code after training completes. Automated ML saves this code in the experiment's `outputs/generated_code` for that specific model. You can view them in the Azure ML studio UI on the **Outputs + logs** tab of the selected model.
-You can also explicitly enable code generation for your automated ML experiments in your AutoMLConfig object with the `enable_code_generation=True` parameter. This parameter must be set prior to submitting your experiment.
-
-Confirm that you call `experiment.submit()` from a Conda environment that contains the latest Azure ML SDK with automated ML. This ensures that code generation is triggered properly for the experiments that are run on a remote compute target.
-
-```python
-config = AutoMLConfig( task="classification",
- training_data=data,
- label_column_name="label",
- compute_target=compute_target,
- enable_code_generation=True
- )
-```
-
- In some troubleshooting cases, you might want to disable code generation. Before you submit your automated ML experiment, you can disable code generation in your `AutoMLConfig` object with the `enable_code_generation=False` parameter.
-
-```python
-# Disabling Code Generation
-config = AutoMLConfig( task="classification",
- training_data=data,
- label_column_name="label",
- compute_target=compute_target,
- enable_code_generation=False
- )
-```
-
-There are two main files with the generated code,
- * **script.py** This is the model's training code that you likely want to analyze with the featurization steps, specific algorithm used, and hyperparameters.
-* **script_run_notebook.ipynb** Notebook with boiler-plate code to run the model's training code (script.py) in AzureML compute through Azure ML SDK classes such as `ScriptRunConfig`.
--
-## Get generated code and model artifacts
-
-After the automated ML training run completes, you can get the `script.py` and the `script_run_notebook.ipynb` files.
-The following code gets the best child run and downloads both files.
-
-```python
-
-best_run = remote_run.get_best_child()
-
-best_run.download_file("outputs/generated_code/script.py", "script.py")
-best_run.download_file("outputs/generated_code/script_run_notebook.ipynb", "script_run_notebook.ipynb")
-```
+* **script_run_notebook.ipynb** Notebook with boilerplate code to run the model's training code (script.py) in AzureML compute through Azure ML SDKv2.
-You also can view the generated code and prepare it for code customization via the Azure Machine Learning studio UI.
+After the automated ML training run completes, you can access the `script.py` and the `script_run_notebook.ipynb` files via the Azure Machine Learning studio UI.
-To do so, navigate to the **Models** tab of the automated ML experiment parent run page. After you select one of the trained models, you can select the **View generated code (preview)** button. This button redirects you to the **Notebooks** portal extension, where you can view, edit and run the generated code for that particular selected model.
+To do so, navigate to the **Models** tab of the automated ML experiment parent run page. After you select one of the trained models, you can select the **View generated code** button. This button redirects you to the **Notebooks** portal extension, where you can view, edit and run the generated code for that particular selected model.
![parent run models tab view generate code button](./media/how-to-generate-automl-training-code/parent-run-view-generated-code.png)
-Alternatively, you can also access to the model's generated code from the top of the child run's page once you navigate into that child run's page of a particular model.
+You can also access the model's generated code from the top of the child run's page once you navigate into that child run's page of a particular model.
![child run page view generated code button](./media/how-to-generate-automl-training-code/child-run-view-generated-code.png)
+If you're using the Python SDKv2, you can also download *script.py* and *script_run_notebook.ipynb* by retrieving the best run via MLflow and downloading the resulting artifacts.
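The following is a hedged sketch of that approach, assuming the MLflow tracking URI is already set to your workspace. The parent job name placeholder and the tag used to locate the best child run are assumptions; verify them against the tags on your own parent run.

```python
from mlflow.tracking import MlflowClient

mlflow_client = MlflowClient()

# "<PARENT_JOB_NAME>" is a placeholder for the name of the completed automated ML job
parent_run = mlflow_client.get_run("<PARENT_JOB_NAME>")

# Tag name is an assumption; inspect parent_run.data.tags to confirm it for your run
best_run_id = parent_run.data.tags["automl_best_child_run_id"]

# Download the generated training code artifacts from the best child run
local_dir = mlflow_client.download_artifacts(
    best_run_id, "outputs/generated_code", "./generated_code"
)
print(local_dir)
```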
## script.py
def get_training_dataset(dataset_id):
return dataset.to_pandas_dataframe() ```
-When running as part of a script run, `Run.get_context().experiment.workspace` retrieves the correct workspace. However, if this script is run inside of a different workspace or run locally without using `ScriptRunConfig`, you need to modify the script to [explicitly specify the appropriate workspace](/python/api/azureml-core/azureml.core.workspace.workspace).
+When running as part of a script run, `Run.get_context().experiment.workspace` retrieves the correct workspace. However, if this script is run inside of a different workspace or run locally, you need to modify the script to [explicitly specify the appropriate workspace](/python/api/azureml-core/azureml.core.workspace.workspace).
Once the workspace has been retrieved, the original dataset is retrieved by its ID. Another dataset with exactly the same structure could also be specified by ID or name with the [`get_by_id()`](/python/api/azureml-core/azureml.core.dataset.dataset#get-by-id-workspace--id-) or [`get_by_name()`](/python/api/azureml-core/azureml.core.dataset.dataset#get-by-name-workspace--name--version--latest--), respectively. You can find the ID later on in the script, in a similar section as the following code.
You can also opt to replace this entire function with your own data loading mech
### Data preparation code The function `prepare_data()` cleans the data, splits out the feature and sample weight columns and prepares the data for use in training.
-This function can vary depending on the type of dataset and the experiment task type: classification, regression, or time-series forecasting.
+This function can vary depending on the type of dataset and the experiment task type: classification, regression, time-series forecasting, images or NLP tasks.
The following example shows that in general, the dataframe from the data loading step is passed in. The label column and sample weights, if originally specified, are extracted and rows containing `NaN` are dropped from the input data.
For example, possible data transformation that can happen in this function can b
The following is a transformer of type `StringCastTransformer()` that can be used to transform a set of columns. In this case, the set indicated by `column_names`. ```python
-def get_mapper_c6ba98(column_names):
+def get_mapper_0(column_names):
# ... Multiple imports to package dependencies, removed for simplicity ... definition = gen_features(
def generate_data_transformation_config():
column_group_3 = ['ps_ind_01', 'ps_ind_02_cat', 'ps_ind_03', 'ps_ind_04_cat', 'ps_ind_05_cat', 'ps_ind_14', 'ps_ind_15', 'ps_car_01_cat', 'ps_car_02_cat', 'ps_car_03_cat', 'ps_car_04_cat', 'ps_car_05_cat', 'ps_car_06_cat', 'ps_car_07_cat', 'ps_car_09_cat', 'ps_car_10_cat', 'ps_car_11', 'ps_calc_04', 'ps_calc_05', 'ps_calc_06', 'ps_calc_07', 'ps_calc_08', 'ps_calc_09', 'ps_calc_10', 'ps_calc_11', 'ps_calc_12', 'ps_calc_13', 'ps_calc_14'] feature_union = FeatureUnion([
- ('mapper_ab1045', get_mapper_ab1045(column_group_1)),
- ('mapper_c6ba98', get_mapper_c6ba98(column_group_3)),
- ('mapper_9133f9', get_mapper_9133f9(column_group_2)),
+ ('mapper_0', get_mapper_0(column_group_1)),
+ ('mapper_1', get_mapper_1(column_group_3)),
+ ('mapper_2', get_mapper_2(column_group_2)),
]) return feature_union ```
This notebook is similar to the existing automated ML sample notebooks however,
### Environment
-Typically, the training environment for an automated ML run is automatically set by the SDK. However, when running a custom script run like the generated code, automated ML is no longer driving the process, so the environment must be specified for the script run to succeed.
+Typically, the training environment for an automated ML run is automatically set by the SDK. However, when running a custom script run like the generated code, automated ML is no longer driving the process, so the environment must be specified for the command job to succeed.
Code generation reuses the environment that was used in the original automated ML experiment, if possible. Doing so guarantees that the training script run doesn't fail due to missing dependencies, and has a side benefit of not needing a Docker image rebuild, which saves time and compute resources.
-If you make changes to `script.py` that require additional dependencies, or you would like to use your own environment, you need to update the `Create environment` cell in `script_run_notebook.ipynb` accordingly.
+If you make changes to `script.py` that require additional dependencies, or you would like to use your own environment, you need to update the environment in the `script_run_notebook.ipynb` accordingly.
-For more information about AzureML environments, see [the Environment class documentation](/python/api/azureml-core/azureml.core.environment.environment).
### Submit the experiment
-Since the generated code isnΓÇÖt driven by automated ML anymore, instead of creating an `AutoMLConfig` and then passing it to `experiment.submit()`, you need to create a [`ScriptRunConfig`](/python/api/azureml-core/azureml.core.scriptrunconfig) and provide the generated code (script.py) to it.
-
-The following example contains the parameters and regular dependencies needed to run `ScriptRunConfig`, such as compute, environment, etc. For more information on how to use ScriptRunConfig, see [Configure and submit training runs](v1/how-to-set-up-training-targets.md).
+Since the generated code isn't driven by automated ML anymore, instead of creating and submitting an AutoML Job, you need to create a [`Command Job`](/how-to-train-sdk) and provide the generated code (script.py) to it.
+The following example contains the parameters and regular dependencies needed to run a Command Job, such as compute, environment, etc.
```python
-from azureml.core import ScriptRunConfig
-
-src = ScriptRunConfig(source_directory=project_folder,
- script='script.py',
- compute_target=cpu_cluster,
- environment=myenv,
- docker_runtime_config=docker_config)
+from azure.ai.ml import command, Input
+
+# To test with new training / validation datasets, replace the default dataset id(s) taken from parent run below
+training_dataset_id = '<DATASET_ID>'
+
+dataset_arguments = {'training_dataset_id': training_dataset_id}
+command_str = 'python script.py --training_dataset_id ${{inputs.training_dataset_id}}'
+
+command_job = command(
+ code=project_folder,
+ command=command_str,
+ environment='AutoML-Non-Prod-DNN:25',
+ inputs=dataset_arguments,
+ compute='automl-e2e-cl2',
+ experiment_name='build_70775722_9249eda8'
+)
-run = experiment.submit(config=src)
-```
-
-### Download and load the serialized trained model in-memory
-
-Once you have a trained model, you can save/serialize it to a `.pkl` file with `pickle.dump()` and `pickle.load()`. You can also use `joblib.dump()` and `joblib.load()`.
-
-The following example is how you download and load a model in-memory that was trained in AzureML compute with `ScriptRunConfig`. This code can run in the same notebook you used the Azure ML SDK `ScriptRunConfig`.
-
-```python
-import joblib
-
-# Load the fitted model from the script run.
-
-# Note that if training dependencies are not installed on the machine
-# this notebook is being run from, this step can fail.
-try:
- # Download the model from the run in the Workspace
- run.download_file("outputs/model.pkl", "model.pkl")
-
- # Load the model into memory
- model = joblib.load("model.pkl")
-
-except ImportError:
- print('Required dependencies are missing; please run pip install azureml-automl-runtime.')
- raise
-
-```
-
-### Making predictions with the model in-memory
-
-Finally, you can load test data in a Pandas dataframe and use the model to make predictions.
-
-```python
-import os
-import numpy as np
-import pandas as pd
-
-DATA_DIR = "."
-filepath = os.path.join(DATA_DIR, 'porto_seguro_safe_driver_test_dataset.csv')
-
-test_data_df = pd.read_csv(filepath)
-
-print(test_data_df.shape)
-test_data_df.head(5)
-
-#test_data_df is a Pandas dataframe with test data
-y_predictions = model.predict(test_data_df)
+returned_job = ml_client.create_or_update(command_job)
+print(returned_job.studio_url) # link to navigate to the submitted run in AzureML Studio
```
-In an Azure ML compute instance you have all the automated ML dependencies, so youΓÇÖre able to load the model and predict from any notebook in a compute instance recently created.
-
-However, in order to load that model in a notebook in your custom local Conda environment, you need to have all the dependencies coming from the environment used when training (AutoML environment) installed.
- ## Next steps * Learn more about [how and where to deploy a model](/azure/machine-learning/v1/how-to-deploy-and-where).
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
Azure Machine Learning is composed of multiple Azure services. There are multipl
* Azure Machine Learning uses Azure Container Registry (ACR) to store Docker images used to train and deploy models. If you allow Azure ML to automatically create ACR, it will enable the __admin account__. * The Azure ML compute cluster uses a __managed identity__ to retrieve connection information for datastores from Azure Key Vault and to pull Docker images from ACR. You can also configure identity-based access to datastores, which will instead use the managed identity of the compute cluster. * Data access can happen along multiple paths depending on the data storage service and your configuration. For example, authentication to the datastore may use an account key, token, security principal, managed identity, or user identity.-
- For more information on how data access is authenticated, see the [Data administration](how-to-administrate-data-authentication.md) article. For information on configuring identity based access to data, see [Create datastores](how-to-datastore.md).
- * Managed online endpoints can use a managed identity to access Azure resources when performing inference. For more information, see [Access Azure resources from an online endpoint](how-to-access-resources-from-endpoints-managed-identities.md). ## Prerequisites
During cluster creation or when editing compute cluster details, in the **Advanc
+### Data storage
+
+When you create a datastore that uses **identity-based data access**, your Azure account ([Azure Active Directory token](../active-directory/fundamentals/active-directory-whatis.md)) is used to confirm you have permission to access the storage service. In the **identity-based data access** scenario, no authentication credentials are saved. Only the storage account information is stored in the datastore.
+
+In contrast, datastores that use **credential-based authentication** cache connection information, like your storage account key or SAS token, in the [key vault](https://azure.microsoft.com/services/key-vault/) that's associated with the workspace. This approach has the limitation that other workspace users with sufficient permissions can retrieve those credentials, which may be a security concern for some organizations.
+
+For more information on how data access is authenticated, see the [Data administration](how-to-administrate-data-authentication.md) article. For information on configuring identity based access to data, see [Create datastores](how-to-datastore.md).
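As a hedged illustration with the Python SDK v2, the following sketch creates an identity-based blob datastore; the placeholder values are assumptions, and omitting any account key or SAS token is what makes the datastore identity-based.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AzureBlobDatastore
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# No account key or SAS token is supplied, so data access uses the caller's identity
blob_datastore = AzureBlobDatastore(
    name="identity_based_blob_store",
    account_name="<STORAGE_ACCOUNT_NAME>",
    container_name="<CONTAINER_NAME>",
)
ml_client.create_or_update(blob_datastore)
```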
+
+There are two scenarios in which you can apply identity-based data access in Azure Machine Learning. These scenarios are a good fit for identity-based access when you're working with confidential data and need more granular data access management:
+
+- Accessing storage services
+- Training machine learning models
+
+Identity-based access allows you to use [role-based access controls (RBAC)](/azure/storage/blobs/assign-azure-role-data-access) to restrict which identities, such as users or compute resources, have access to the data.
+
+### Accessing storage services
+
+You can connect to storage services via identity-based data access with [Azure Machine Learning datastores](how-to-datastore.md).
+
+When you use identity-based data access, Azure Machine Learning prompts you for your Azure Active Directory token for data access authentication instead of keeping your credentials in the datastore. That approach allows for data access management at the storage level and keeps credentials confidential.
+
+The same behavior applies when you work with data interactively via a Jupyter Notebook on your local computer or [compute instance](concept-compute-instance.md).
+
+> [!NOTE]
+> Credentials stored via credential-based authentication include subscription IDs, shared access signature (SAS) tokens, and storage access key and service principal information, like client IDs and tenant IDs.
+
+To help ensure that you securely connect to your storage service on Azure, Azure Machine Learning requires that you have permission to access the corresponding data storage.
+
+> [!WARNING]
+> Cross tenant access to storage accounts is not supported. If cross tenant access is needed for your scenario, please reach out to the AzureML Data Support team alias at amldatasupport@microsoft.com for assistance with a custom code solution.
+
+Identity-based data access supports connections to **only** the following storage services.
+
+* Azure Blob Storage
+* Azure Data Lake Storage Gen1
+* Azure Data Lake Storage Gen2
+
+To access these storage services, you must have at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../storage/blobs/assign-azure-role-data-access.md).
+
+If you prefer to not use your user identity (Azure Active Directory), you can also grant a workspace managed-system identity (MSI) permission to create the datastore. To do so, you must have Owner permissions to the storage account and [specify the MSI credentials when creating the datastore](how-to-datastore.md?tabs=cli-identity-based-access%2Ccli-adls-sp%2Ccli-azfiles-account-key%2Ccli-adlsgen1-sp).
+
+If you're training a model on a remote compute target and want to access the data for training, the compute identity must be granted at least the Storage Blob Data Reader role from the storage service. Learn how to [set up managed identity on a compute cluster](#compute-cluster).
+
+### Working with private data
+
+Certain machine learning scenarios involve working with private data. In such cases, data scientists may not have direct access to data as Azure AD users. In this scenario, the managed identity of a compute can be used for data access authentication, and the data can only be accessed from a compute instance or a machine learning compute cluster executing a training job.
+
+With this approach, the admin grants the compute instance or compute cluster managed identity Storage Blob Data Reader permissions on the storage. The individual data scientists don't need to be granted access. For more information on configuring the managed identity for the compute cluster, see the [compute cluster](#compute-cluster) section. For information on configuring Azure RBAC for the storage, see [role-based access controls](/azure/storage/blobs/assign-azure-role-data-access).
+
+### Work with virtual networks
+
+By default, Azure Machine Learning can't communicate with a storage account that's behind a firewall or in a virtual network.
+
+You can configure storage accounts to allow access only from within specific virtual networks. This configuration requires extra steps to ensure data isn't leaked outside of the network. This behavior is the same for credential-based data access. For more information, see [How to configure virtual network scenarios](v1/how-to-access-data.md#virtual-network).
+
+If your storage account has virtual network settings, those settings dictate what identity type and permissions are needed for access. For example, for data preview and data profile, the virtual network settings determine what type of identity is used to authenticate data access.
+
+* In scenarios where only certain IPs and subnets are allowed to access the storage, Azure Machine Learning uses the workspace MSI to accomplish data previews and profiles.
+
+* If your storage is ADLS Gen 2 or Blob and has virtual network settings, customers can use either user identity or workspace MSI depending on the datastore settings defined during creation.
+
+* If the virtual network setting is "Allow Azure services on the trusted services list to access this storage account", then Workspace MSI is used.
+ ## Scenario: Azure Container Registry without admin user When you disable the admin user for ACR, Azure ML uses a managed identity to build and pull Docker images. There are two workflows when configuring Azure ML to use an ACR with the admin user disabled:
In this scenario, Azure Machine Learning service builds the training or inferenc
description: Environment created from private ACR. ```
+## Scenario: Access data for training jobs on compute clusters (preview)
++
+When training on [Azure Machine Learning compute clusters](how-to-create-attach-compute-cluster.md#what-is-a-compute-cluster), you can authenticate to storage with your user Azure Active Directory token.
+
+This authentication mode allows you to:
+* Set up fine-grained permissions, where different workspace users can have access to different storage accounts or folders within storage accounts.
+* Let data scientists re-use existing permissions on storage systems.
+* Audit storage access because the storage logs show which identities were used to access data.
+
+> [!IMPORTANT]
+> This functionality has the following limitations:
+> * Feature is only supported for experiments submitted via the [Azure Machine Learning CLI](how-to-configure-cli.md)
+> * Only CommandJobs, and PipelineJobs with CommandSteps and AutoMLSteps are supported
+> * User identity and compute managed identity cannot be used for authentication within the same job.
+
+> [!WARNING]
+> This feature is __public preview__ and is __not secure for production workloads__. Ensure that only trusted users have permissions to access your workspace and storage accounts.
+>
+> Preview features are provided without a service-level agreement, and are not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+>
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+The following steps outline how to set up identity-based data access for training jobs on compute clusters.
+
+1. Grant the user identity access to storage resources. For example, grant Storage Blob Data Reader access to the specific storage account you want to use or grant ACL-based permission to specific folders or files in Azure Data Lake Gen 2 storage.
+
+1. Create an Azure Machine Learning datastore without cached credentials for the storage account. If a datastore has cached credentials, such as storage account key, those credentials are used instead of user identity.
+
+1. Submit a training job with property **identity** set to **type: user_identity**, as shown in following job specification. During the training job, the authentication to storage happens via the identity of the user that submits the job.
+
+ > [!NOTE]
+ > If the **identity** property is left unspecified and the datastore does not have cached credentials, then the compute managed identity becomes the fallback option.
+
+ ```yaml
+ command: |
+ echo "--census-csv: ${{inputs.census_csv}}"
+ python hello-census.py --census-csv ${{inputs.census_csv}}
+ code: src
+ inputs:
+ census_csv:
+ type: uri_file
+ path: azureml://datastores/mydata/paths/census.csv
+ environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
+ compute: azureml:cpu-cluster
+ identity:
+ type: user_identity
+ ```
+ ## Next steps * Learn more about [enterprise security in Azure Machine Learning](concept-enterprise-security.md)
machine-learning How To Log Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-mlflow-models.md
A model in MLflow is also an artifact, but with a specific structure that serves
Logging models has the following advantages: > [!div class="checklist"]
-> * You don't need to provide a scoring script nor an environment for deployment.
-> * Swagger is enabled in endpoints automatically and the __Test__ feature can be used in Azure ML studio.
+> * Models can be loaded directly for inference by using `mlflow.<flavor>.load_model` and the `predict` function.
> * Models can be used as pipelines inputs directly.
+> * Models can be deployed without providing a scoring script or an environment.
+> * Swagger is enabled in deployed endpoints automatically and the __Test__ feature can be used in Azure ML studio.
> * You can use the Responsible AI dashboard. There are different ways to start using the model's concept in Azure Machine Learning with MLflow, as explained in the following sections:
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
For example, the following code snippet demonstrates configuring the experiment,
```python import mlflow
+# Set the experiment
mlflow.set_experiment("mlflow-experiment")
-# Start the run, log metrics, end the run
+# Start the run
mlflow_run = mlflow.start_run()
+# Log metrics or other information
mlflow.log_metric('mymetric', 1)
+# End run
mlflow.end_run() ``` > [!TIP]
-> Technically you don't have to call `start_run()` as a new run is created if one doesn't exist and you call a logging API. In that case, you can use `mlflow.active_run()` to retrieve the run. However, the `mlflow.ActiveRun` object returned by `mlflow.active_run()` won't contain items like parameters, metrics, etc. For more information, see [mlflow.active_run()](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.active_run).
+> Technically you don't have to call `start_run()` as a new run is created if one doesn't exist and you call a logging API. In that case, you can use `mlflow.active_run()` to retrieve the run currently being used. For more information, see [mlflow.active_run()](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.active_run).
You can also use the context manager paradigm:
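A minimal sketch of that paradigm, reusing the experiment and metric from the snippet above; the run ends automatically when the block exits:

```python
import mlflow

mlflow.set_experiment("mlflow-experiment")

# The run is ended automatically when the with-block exits, even if an error occurs
with mlflow.start_run():
    mlflow.log_metric("mymetric", 1)
```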
machine-learning How To Manage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md
Last updated 04/15/2022 -+ # Work with models in Azure Machine Learning
To create a model in Machine Learning, from the UI, open the **Models** page. Se
## Next steps
-* [Install and set up Python SDK v2 (preview)](https://aka.ms/sdk-v2-install)
+* [Install and set up Python SDK v2](https://aka.ms/sdk-v2-install)
* [No-code deployment for MLflow models](how-to-deploy-mlflow-models-online-endpoints.md) * Learn more about [MLflow and Azure Machine Learning](concept-mlflow.md)
machine-learning How To Manage Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-registries.md
+
+ Title: Create and manage registries (preview)
+
+description: Learn how create registries with the CLI, Azure portal and AzureML Studio
++++++ Last updated : 09/21/2022++++
+# Manage Azure Machine Learning registries (preview)
+
+Azure Machine Learning entities can be grouped into two broad categories:
+
+* Assets such as __models__, __environments__, __components__, and __datasets__ are durable entities that are _workspace agnostic_. For example, a model can be registered with any workspace and deployed to any endpoint.
+* Resources such as __compute__, __job__, and __endpoints__ are _transient entities that are workspace specific_. For example, an online endpoint has a scoring URI that is unique to a specific instance in a specific workspace. Similarly, a job runs for a known duration and generates logs and metrics each time it's run.
+
+Assets lend themselves to being stored in a central repository and used in different workspaces, possibly in different regions. Resources are workspace specific.
+
+AzureML registries (preview) enable you to create and use those assets in different workspaces. Registries support multi-region replication for low latency access to assets, so you can use assets in workspaces located in different Azure regions. Creating a registry provisions the Azure resources required to facilitate replication: first, an Azure Blob storage account in each supported region; second, a single Azure Container Registry with replication enabled to each supported region.
++
+## Prerequisites
++
+## Prepare to create registry
+
+You need to carefully decide on the following information before you create a registry:
+
+### Choose a name
+
+Consider the following factors before picking a name.
+* Registries are meant to facilitate sharing of ML assets across teams within your organization across all workspaces. Choose a name that is reflective of the sharing scope. The name should help identify your group, division or organization.
+* Registry names must be unique within your organization (Azure Active Directory tenant). It's recommended to prefix the name with your team or organization name and avoid generic names.
+* Registry names can't be changed once created because they're used in IDs of models, environments and components that are referenced in code.
+ * Length can be 2-32 characters.
+ * Alphanumerics, underscore, hyphen are allowed. No other special characters. No spaces - registry names are part of model, environment, and component IDs that can be referenced in code.
+ * Name can contain underscore or hyphen but can't start with an underscore or hyphen. Needs to start with an alphanumeric.
+
+### Choose Azure regions
+
+Registries enable sharing of assets across workspaces. To do so, a registry replicates content across multiple Azure regions. You need to define the list of regions that a registry supports when creating the registry. Create a list of all regions in which you have workspaces today and plan to add in the near future. This list is a good set of regions to start with. When creating a registry, you define a primary region and a set of additional regions. The primary region can't be changed after registry creation, but the additional regions can be updated at a later point.
+
+### Check permissions
+
+Make sure you're the "Owner" or "Contributor" of the subscription or resource group in which you plan to create the registry. If you don't have one of these built-in roles, review the section on permissions toward the end of this article.
++
+## Create a registry
+
+# [Azure CLI](#tab/cli)
+
+Create the YAML definition and name it `registry.yml`.
+
+> [!NOTE]
+> The primary location is listed twice in the YAML file. In the following example, `eastus` is listed first as the primary location (`location` item) and also in the `replication_locations` list.
+
+```YAML
+name: DemoRegistry1
+description: Basic registry with one primary region and two additional regions
+tags:
+ foo: bar
+location: eastus
+replication_locations:
+ - location: eastus
+ - location: eastus2
+ - location: westus
+```
++
+> [!TIP]
+> You typically see display names of Azure regions such as 'East US' in the Azure portal, but the registry creation YAML needs region names without spaces and in lowercase letters. Use `az account list-locations -o table` to find the mapping of region display names to the name of the region that can be specified in YAML.
+
+Run the registry create command.
+
+`az ml registry create --file registry.yml`
+
+# [AzureML studio](#tab/studio)
+
+You can create registries in AzureML studio using the following steps:
+
+1. In the [AzureML studio](https://ml.azure.com), select the __Registries__, and then __Manage registries__. Select __+ Create registry__.
+
+ > [!TIP]
+ > If you are in a workspace, navigate to the global UI by clicking your organization or tenant name in the navigation pane to find the __Registries__ entry. You can also go directly there by navigating to [https://ml.azure.com/registries](https://ml.azure.com/registries).
+
+ :::image type="content" source="./media/how-to-manage-registries/studio-create-registry-button.png" alt-text="Screenshot of the create registry screen.":::
+
+1. Enter the registry name, select the subscription and resource group and then select __Next__.
+
+ :::image type="content" source="./media/how-to-manage-registries/studio-create-registry-basics.png" alt-text="Screenshot of the registry creation basics tab.":::
+
+1. Select the __Primary region__ and __Additional region__, then select __Next__.
+
+ :::image type="content" source="./media/how-to-manage-registries/studio-registry-select-regions.png" alt-text="Screenshot of the registry region selection":::
+
+1. Review the information you provided, and then select __Create__. You can track the progress of the create operation in the Azure portal. Once the registry is successfully created, you can find it listed in the __Manage Registries__ tab.
+
+ :::image type="content" source="./media/how-to-manage-registries/studio-create-registry-review.png" alt-text="Screenshot of the create + review tab.":::
+# [Azure portal](#tab/portal)
+
+1. From the [Azure portal](https://portal.azure.com), navigate to the Azure Machine Learning service. You can get there by searching for __Azure Machine Learning__ in the search bar at the top of the page or going to __All Services__ looking for __Azure Machine Learning__ under the __AI + machine learning__ category.
+
+1. Select __Create__, and then select __Azure Machine Learning registry__. Enter the registry name, select the subscription, resource group and primary region, then select __Next__.
+
+1. Select the additional regions the registry must support, then select __Next__ until you arrive at the __Review + Create__ tab.
+
+ :::image type="content" source="./media/how-to-manage-registries/create-registry-review.png" alt-text="Screenshot of the review + create tab.":::
+
+1. Review the information and select __Create__.
+++
+## Specify storage account type and SKU (optional)
+
+> [!TIP]
+> Specifying the Azure Storage Account type and SKU is only available from the Azure CLI.
+
+Azure storage offers several types of storage accounts with different features and pricing. For more information, see the [Types of storage accounts](/azure/storage/common/storage-account-overview#types-of-storage-accounts) article. Once you identify the optimal storage account SKU that best suits your needs, [find the value for the appropriate SKU type](/rest/api/storagerp/srp_sku_types). In the YAML file, use your selected SKU type as the value of the `storage_account_type` field. This field is under each `location` in the `replication_locations` list.
+
+Next, decide if you want to use an [Azure Blob storage](/azure/storage/blobs/storage-blobs-introduction) account or [Azure Data Lake Storage Gen2](/azure/storage/blobs/data-lake-storage-introduction). To create Azure Data Lake Storage Gen2, set `storage_account_hns` to `true`. To create Azure Blob Storage, set `storage_account_hns` to `false`. The `storage_account_hns` field is under each `location` in the `replication_locations` list.
+
+> [!NOTE]
+>The `hns` portion of `storage_account_hns` refers to the [hierarchical namespace](/azure/storage/blobs/data-lake-storage-namespace) capability of Azure Data Lake Storage Gen2 accounts.
+
+Below is an example YAML that demonstrates this advanced storage configuration:
+
+```YAML
+name: DemoRegistry2
+description: Registry with additional configuration for storage accounts
+tags:
+ foo: bar
+location: eastus
+replication_locations:
+ - location: eastus
+ storage_config:
+ storage_account_hns: False
+ storage_account_type: Standard_LRS
+ - location: eastus2
+ storage_config:
+ storage_account_hns: False
+ storage_account_type: Standard_LRS
+ - location: westus
+ storage_config:
+ storage_account_hns: False
+ storage_account_type: Standard_LRS
+```
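+
+To apply this configuration, you would typically pass the YAML file to the registry create command. The following is a hedged sketch; it assumes the definition above is saved as `registry.yml` and that your Azure CLI defaults (subscription, resource group) are already configured:
+
+```azurecli
+# A minimal sketch, assuming the YAML above is saved as registry.yml and that
+# CLI defaults are configured; otherwise add --resource-group <resource-group>.
+az ml registry create --file registry.yml
+```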
+
+## Add users to the registry
+
+Decide if you want to allow users to only use assets (models, environments, and components) from the registry, or to both use and create assets in the registry. Review [steps to assign a role](/azure/role-based-access-control/role-assignments-steps) if you aren't familiar with how to manage permissions using [Azure role-based access control](/azure/role-based-access-control/overview).
+
+### Allow users to use assets from the registry
+
+To let a user only read assets, you can grant the user the built-in __Reader__ role. If you don't want to use the built-in role, create a custom role with the following permissions:
+
+Permission | Description
+--|--
+Microsoft.MachineLearningServices/registries/read | Allows the user to list registries and get registry metadata
+Microsoft.MachineLearningServices/registries/assets/read | Allows the user to browse assets and use the assets in a workspace
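+
+For example, you can grant the built-in __Reader__ role at the scope of the registry with the Azure CLI. The following is a hedged sketch with placeholder values; the resource ID format is an assumption based on the `Microsoft.MachineLearningServices/registries` provider shown above:
+
+```azurecli
+# A minimal sketch: assign the built-in Reader role scoped to a registry.
+# Replace the assignee and the IDs in the scope with your own values.
+az role assignment create \
+  --assignee "user@contoso.com" \
+  --role "Reader" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/registries/<registry-name>"
+```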
+
+### Allow users to create and use assets from the registry
+
+To let the user both read and create or delete assets, grant the following write and delete permissions in addition to the read permissions above.
+
+Permission | Description
+--|--
+Microsoft.MachineLearningServices/registries/assets/write | Create assets in registries
+Microsoft.MachineLearningServices/registries/assets/delete | Delete assets in registries
+
+> [!WARNING]
+> The built-in __Contributor__ and __Owner__ roles allow users to create, update and delete registries. You must create a custom role if you want the user to create and use assets from the registry, but not create or update registries. Review [custom roles](/azure/role-based-access-control/custom-roles) to learn how to create custom roles from permissions.
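+
+As a hedged sketch of such a custom role, you could capture the asset permissions in a role definition JSON and create it with the Azure CLI. The role name, description, and assignable scope below are placeholders:
+
+```azurecli
+# A minimal sketch: a custom role that can use, create, and delete registry assets
+# but cannot create or update registries. Names and scopes are placeholders.
+cat > registry-asset-user.json <<'EOF'
+{
+  "Name": "Registry Asset User",
+  "IsCustom": true,
+  "Description": "Read, create, and delete assets in registries without managing registries.",
+  "Actions": [
+    "Microsoft.MachineLearningServices/registries/read",
+    "Microsoft.MachineLearningServices/registries/assets/read",
+    "Microsoft.MachineLearningServices/registries/assets/write",
+    "Microsoft.MachineLearningServices/registries/assets/delete"
+  ],
+  "NotActions": [],
+  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
+}
+EOF
+az role definition create --role-definition @registry-asset-user.json
+```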
+
+### Allow users to create and manage registries
+
+To let users create, update, and delete registries, grant them the built-in __Contributor__ or __Owner__ role. If you don't want to use the built-in roles, create a custom role with the following permissions, in addition to all the above permissions to read, create, and delete assets in the registry.
+
+Permission | Description
+--|--
+Microsoft.MachineLearningServices/registries/write | Allows the user to create or update registries
+Microsoft.MachineLearningServices/registries/delete | Allows the user to delete registries
+++
+## Next steps
+
+* [Learn how to share models, components and environments across workspaces with registries (preview)](./how-to-share-models-pipelines-across-workspaces-with-registries.md)
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
Last updated 09/21/2022 -+ # Manage Azure Machine Learning workspaces in the portal or with the Python SDK (v2)
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](v1/how-to-manage-workspace.md)
-> * [v2 (preview)](how-to-manage-workspace.md)
+> * [v2 (current)](how-to-manage-workspace.md)
In this article, you create, view, and delete [**Azure Machine Learning workspaces**](concept-workspace.md) for [Azure Machine Learning](overview-what-is-azure-machine-learning.md), using the [Azure portal](https://portal.azure.com) or the [SDK for Python](/python/api/overview/azure/ml/).
machine-learning How To Migrate From V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-v1.md
Last updated 09/23/2022 -+
-# How to migrate from v1 to v2
+# Upgrade to v2
-Azure Machine Learning's v2 REST APIs, Azure CLI extension, and Python SDK (preview) introduce consistency and a set of new features to accelerate the production machine learning lifecycle. This article provides an overview of migrating from v1 to v2 with recommendations to help you decide on v1, v2, or both.
+Azure Machine Learning's v2 REST APIs, Azure CLI extension, and Python SDK introduce consistency and a set of new features to accelerate the production machine learning lifecycle. This article provides an overview of upgrading to v2 with recommendations to help you decide on v1, v2, or both.
## Prerequisites
Azure Machine Learning's v2 REST APIs, Azure CLI extension, and Python SDK (prev
## Should I use v2?
-You should use v2 if you're starting a new machine learning project. A new v2 project can reuse resources like workspaces and compute and assets like models and environments created using v1. You can also use v1 and v2 in tandem, for example using the v1 Python SDK within jobs that are submitted from the v2 CLI extension. However, see the [section below](#can-i-use-v1-and-v2-together) for details on why separating v1 and v2 use is recommended.
+You should use v2 if you're starting a new machine learning project or workflow, or if you want to use the new features offered in v2. The features include:
+* Managed Inferencing
+* Reusable components in pipelines
+* Improved scheduling of pipelines
+* Responsible AI dashboard
+* Registry of assets
-We recommend assessing the effort needed to migrate a project from v1 to v2. First, you should ensure all the features needed from v1 are available in v2. Some notable feature gaps include:
+A new v2 project can reuse existing resources like workspaces and compute and existing assets like models and environments created using v1.
-- Spark support in jobs.-- Publishing jobs (pipelines in v1) as endpoints.-- AutoML jobs within pipeline jobs (AutoML step in a pipeline in v1).-- Model deployment to Azure Container Instance (ACI), replaced with managed online endpoints.-- An equivalent for ParallelRunStep in jobs.
+Some feature gaps in v2 include:
+
+- Spark support in jobs - this is currently in preview in v2.
+- Publishing jobs (pipelines in v1) as endpoints. You can, however, schedule pipelines without publishing.
- Support for SQL/database datastores.-- Built-in components in the designer.
+- Ability to use classic prebuilt components in the designer with v2.
-You should then ensure the features you need in v2 meet your organization's requirements, such as being generally available. You and your team will need to assess on a case-by-case basis whether migrating to v2 is right for you.
+You should then ensure the features you need in v2 meet your organization's requirements, such as being generally available.
> [!IMPORTANT] > New features in Azure ML will only be launched in v2.
-## How do I migrate to v2?
-
-To migrate to v2, start by prototyping an existing v1 workflow into v2. Migrating will typically include:
+## Should I upgrade existing code to v2?
-- Optionally (and recommended in most cases), re-create resources and assets with v2.-- Refactor model training code to de-couple Azure ML code from ML model code (model training, model logging, and other model tracking code).-- Refactor Azure ML model deployment code and test with v2 endpoints.-- Refactor CI/CD code to use the v2 CLI (recommended), v2 Python SDK, or directly use REST.
+You can reuse your existing assets in your v2 workflows. For instance, a model created in v1 can be used to perform Managed Inferencing in v2.
-Based on this prototype, you can estimate the effort involved for a full migration to v2. Consider the workflow patterns (like [GitOps](#a-note-on-gitops-with-v2)) your organization wants to establish for use with v2 and factor this effort in.
+If you want to upgrade specific parts of your existing code to v2, refer to the comparison links provided for each resource or asset in the rest of this document.
## Which v2 API should I use?
-In v2 interfaces via REST API, CLI, and Python SDK (preview) are available. The interface you should use depends on your scenario and preferences.
+In v2, interfaces are available via the REST API, CLI, and Python SDK. The interface you should use depends on your scenario and preferences.
|API|Notes| |-|-|
In v2 interfaces via REST API, CLI, and Python SDK (preview) are available. The
## Can I use v1 and v2 together?
-Generally, yes. Resources like workspace, compute, and datastore work across v1 and v2, with exceptions. A user can call the v1 Python SDK to change a workspace's description, then using the v2 CLI extension change it again. Jobs (experiments/runs/pipelines in v1) can be submitted to the same workspace from the v1 or v2 Python SDK. A workspace can have both v1 and v2 model deployment endpoints. You can also call v1 Python SDK code within jobs created via v2, though [this pattern isn't recommended](#production-model-training).
+v1 and v2 can co-exist in a workspace. You can reuse your existing assets in your v2 workflows. For instance, a model created in v1 can be used to perform Managed Inferencing in v2. Resources like workspace, compute, and datastore work across v1 and v2, with exceptions. A user can call the v1 Python SDK to change a workspace's description, and then change it again using the v2 CLI extension. Jobs (experiments/runs/pipelines in v1) can be submitted to the same workspace from the v1 or v2 Python SDK. A workspace can have both v1 and v2 model deployment endpoints.
-We recommend creating a new workspace for using v2 to keep v1/v2 entities separate and avoid backward/forward compatibility considerations.
+### Using v1 and v2 code together
+We don't recommend using the v1 and v2 SDKs together in the same code. It's technically possible to use v1 and v2 in the same code because they use different Azure namespaces. However, there are many classes with the same name across these namespaces (like Workspace and Model), which can cause confusion and make code readability and debuggability challenging.
> [!IMPORTANT] > If your workspace uses a private endpoint, it will automatically have the `v1_legacy_mode` flag enabled, preventing usage of v2 APIs. See [how to configure network isolation with v2](how-to-configure-network-isolation-with-v2.md) for details.
-## Migrating resources and assets
+## Resources and assets in v1 and v2
-This section gives an overview of migration recommendations for specific resources and assets in Azure ML. See the concept article for each entity for details on their usage in v2.
+This section gives an overview of specific resources and assets in Azure ML. See the concept article for each entity for details on their usage in v2.
### Workspace
-Workspaces don't need to be migrated with v2. You can use the same workspace, regardless of whether you're using v1 or v2. We recommend creating a new workspace for using v2 to keep v1/v2 entities separate and avoid backward/forward compatibility considerations.
+Workspaces don't need to be migrated with v2. You can use the same workspace, regardless of whether you're using v1 or v2.
+
+If you create workspaces using automation, consider migrating the code for creating a workspace to v2. Typically, Azure resources are managed via Azure Resource Manager (and Bicep) or similar resource provisioning tools. Alternatively, you can use the [CLI (v2) and YAML files](how-to-manage-workspace-cli.md#create-a-workspace).
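+
+As a minimal sketch of the CLI (v2) approach (the names and location are placeholders):
+
+```azurecli
+# A hedged sketch: create a workspace with the v2 CLI. Replace the placeholder
+# names and location with your own values.
+az ml workspace create --name <workspace-name> --resource-group <resource-group> --location <location>
+```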
-Do consider migrating the code for creating a workspace to v2. Typically Azure resources are managed via Azure Resource Manager (and Bicep) or similar resource provisioning tools. Alternatively, you can use the [CLI (v2) and YAML files](how-to-manage-workspace-cli.md#create-a-workspace).
+For a comparison of SDK v1 and v2 code, see [Workspace management in SDK v1 and SDK v2](migrate-to-v2-resource-workspace.md).
> [!IMPORTANT] > If your workspace uses a private endpoint, it will automatically have the `v1_legacy_mode` flag enabled, preventing usage of v2 APIs. See [how to configure network isolation with v2](how-to-configure-network-isolation-with-v2.md) for details. - ### Connection (workspace connection in v1) Workspace connections from v1 are persisted on the workspace, and fully available with v2.
-We recommend migrating the code for creating connections to v2.
-For a comparison of SDK v1 and v2 code, see [Migrate workspace management from SDK v1 to SDK v2](migrate-to-v2-resource-workspace.md).
+For a comparison of SDK v1 and v2 code, see [Workspace management in SDK v1 and SDK v2](migrate-to-v2-resource-workspace.md).
### Datastore Object storage datastore types created with v1 are fully available for use in v2. Database datastores are not supported; export to object storage (usually Azure Blob) is the recommended migration path.
-We recommend migrating the code for [creating datastores](how-to-datastore.md) to v2.
+For a comparison of SDK v1 and v2 code, see [Datastore management in SDK v1 and SDK v2](migrate-to-v2-resource-datastore.md).
### Compute Compute of type `AmlCompute` and `ComputeInstance` are fully available for use in v2.
-We recommend migrating the code for creating compute to v2.
+For a comparison of SDK v1 and v2 code, see [Compute management in SDK v1 and SDK v2](migrate-to-v2-resource-compute.md).
### Endpoint and deployment (endpoint and web service in v1)
For upgrade steps from your existing ACI web services to managed online endpoint
In v2, "experiments", "runs", and "pipelines" are consolidated into jobs. A job has a type. Most jobs are `command` jobs that run a command, like `python main.py`. What runs in a job is agnostic to any programming language, so you can run `bash` scripts, invoke `python` interpreters, run a bunch of `curl` commands, or anything else. Another common type of job is `pipeline`, which defines child jobs that may have input/output relationships, forming a directed acyclic graph (DAG).
-To migrate, you'll need to change your code for submitting jobs to v2. We recommend refactoring the control-plane code authoring a job into YAML file specification, which can then be submitted through the v2 CLI or Python SDK (preview). A simple `command` job looks like this:
+For a comparison of SDK v1 and v2 code, see:
+* [Run a script](migrate-to-v2-command-job.md)
+* [Local runs](migrate-to-v2-local-runs.md)
+* [Hyperparameter tuning](migrate-to-v2-execution-hyperdrive.md)
+* [Parallel Run](migrate-to-v2-execution-parallel-run-step.md)
+* [Pipelines](migrate-to-v2-execution-pipeline.md)
+* [AutoML](migrate-to-v2-execution-automl.md)
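+
+To make the `command` job described above concrete, here's a minimal, hedged sketch of submitting one with the v2 CLI. It assumes your CLI defaults are configured, and the environment and compute names are assumptions; substitute your own:
+
+```azurecli
+# A hedged sketch: define a simple command job in YAML and submit it with the v2 CLI.
+# The curated environment and the cpu-cluster compute are placeholders/assumptions.
+cat > hello-job.yml <<'EOF'
+$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
+command: python main.py
+code: ./src
+environment: azureml:AzureML-sklearn-0.24-ubuntu18.04-py37-cpu@latest
+compute: azureml:cpu-cluster
+EOF
+az ml job create --file hello-job.yml
+```
+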
+### Designer
-What you run *within* the job does not need to be migrated to v2. However, it is recommended to remove any code specific to Azure ML from your model training scripts. This separation allows for an easier transition between local and cloud and is considered best practice for mature MLOps. In practice, this means removing `azureml.*` lines of code. Model logging and tracking code should be replaced with MLflow. For more details, see [how to use MLflow in v2](how-to-use-mlflow-cli-runs.md).
+You can use the designer to build pipelines using your own v2 custom components and the new prebuilt components from the registry. In this situation, you can use v1 or v2 data assets in your pipeline.
-We recommend migrating the code for creating jobs to v2. You can see [how to train models](how-to-train-model.md) and the [job YAML references](reference-yaml-job-command.md) for authoring jobs in v2 YAMLs.
+You can continue to use the designer to build pipelines using classic prebuilt components and v1 dataset types (tabular, file). You cannot use existing designer classic prebuilt components with v2 data assets.
-For a comparison of SDK v1 and v2 code, see [Migrate script run from SDK v1 to SDK v2](migrate-to-v2-command-job.md).
+You cannot build a pipeline using both existing designer classic prebuilt components and v2 custom components.
### Data (datasets in v1)
-Datasets are renamed to data assets. Interoperability between v1 datasets and v2 data assets is the most complex of any entity in Azure ML.
-
-Data assets in v2 (or File Datasets in v1) are *references* to files in object storage. Thus, deleting a data asset (or v1 dataset) doesn't actually delete anything in underlying storage, only a reference. Therefore, it may be easier to avoid backward and forward compatibility considerations for data by re-creating v1 datasets as v2 data assets.
+Datasets are renamed to data assets. *Backwards compatibility* is provided, which means you can use V1 Datasets in V2. When you consume a V1 Dataset in a V2 job, it's automatically mapped into a V2 type as follows:
-For details on data in v2, see the [data concept article](concept-data.md).
+* V1 FileDataset = V2 Folder (`uri_folder`)
+* V1 TabularDataset = V2 Table (`mltable`)
-We recommend migrating the code for [creating data assets](how-to-create-data-assets.md) to v2.
+It should be noted that *forwards compatibility* is **not** provided, which means you **cannot** use V2 data assets in V1.
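+
+As a hedged illustration of this mapping, an existing FileDataset created with v1 can be referenced by name and version as a `uri_folder` input when submitting a v2 CLI job. The input name and dataset name below are placeholders:
+
+```azurecli
+# A minimal sketch: pass a v1-created FileDataset as a uri_folder input to a v2 job.
+# <v1-file-dataset-name>, <version>, and the input name are placeholders.
+az ml job create --file job.yml \
+  --set inputs.training_data.type=uri_folder \
+  --set inputs.training_data.path=azureml:<v1-file-dataset-name>:<version>
+```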
-For a comparison of SDK v1 and v2 code, see [Migrate data management from SDK v1 to v2](migrate-to-v2-assets-data.md).
+For more information about handling data in v2, see [Read and write data in a job](how-to-read-write-data-v2.md).
+For a comparison of SDK v1 and v2 code, see [Data assets in SDK v1 and v2](migrate-to-v2-assets-data.md).
### Model
-Models created from v1 can be used in v2. In v2, explicit model types are introduced. Similar to data assets, it may be easier to re-create a v1 model as a v2 model, setting the type appropriately.
+Models created from v1 can be used in v2.
-We recommend migrating the code for creating models. For more information, see [How to train models](how-to-train-model.md).
-
-For a comparison of SDK v1 and v2 code, see
-
-* [Migrate model management from SDK v1 to SDK v2](migrate-to-v2-assets-model.md)
-* [Migrate AutoML from SDK v1 to SDK v2](migrate-to-v2-execution-automl.md)
-* [Migrate hyperparameter tuning from SDK v1 to SDK v2](migrate-to-v2-execution-hyperdrive.md)
-* [Migrate parallel run step from SDK v1 to SDK v2](migrate-to-v2-execution-parallel-run-step.md)
+For a comparison of SDK v1 and v2 code, see [Model management in SDK v1 and SDK v2](migrate-to-v2-assets-model.md).
### Environment Environments created from v1 can be used in v2. In v2, environments have new features like creation from a local Docker context.
-We recommend migrating the code for creating environments to v2.
- ## Managing secrets The management of Key Vault secrets differs significantly in V2 compared to V1. The V1 set_secret and get_secret SDK methods are not available in V2. Instead, direct access using Key Vault client libraries should be used.
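+
+For illustration, the same direct access is also available from the Azure CLI (a hedged sketch with placeholder names), as an alternative to the client libraries:
+
+```azurecli
+# A minimal sketch: set and read a Key Vault secret directly, instead of the
+# removed v1 set_secret/get_secret methods. Names are placeholders.
+az keyvault secret set --vault-name <key-vault-name> --name <secret-name> --value <secret-value>
+az keyvault secret show --vault-name <key-vault-name> --name <secret-name> --query value -o tsv
+```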
We recommend v2 for prototyping models. You may consider using the CLI for an in
We recommend v2 for production model training. Jobs consolidate the terminology and provide a set of consistency that allows for easier transition between types (for example, `command` to `sweep`) and a GitOps-friendly process for serializing jobs into YAML files.
-With v2, you should separate your machine learning code from the control plane code. This separation allows for easier iteration and allows for easier transition between local and cloud.
-
-Typically, converting to v2 will involve refactoring your code to use MLflow for tracking and model logging. See the [MLflow concept article](concept-mlflow.md) for details.
+With v2, you should separate your machine learning code from the control plane code. This separation allows for easier iteration and allows for easier transition between local and cloud. We also recommend using MLflow for tracking and model logging. See the [MLflow concept article](concept-mlflow.md) for details.
### Production model deployment
Kubernetes deployments are supported in v2 through AKS or Azure Arc, enabling Az
### Machine learning operations (MLOps)
-A MLOps workflow typically involves CI/CD through an external tool. It's recommended refactor existing CI/CD workflows to use v2 APIs. Typically a CLI is used in CI/CD, though you can alternatively invoke Python or directly use REST.
+An MLOps workflow typically involves CI/CD through an external tool. Typically, the CLI is used in CI/CD, though you can alternatively invoke Python or directly use REST.
The solution accelerator for MLOps with v2 is being developed at https://github.com/Azure/mlops-v2 and can be used as a reference or adopted for setup and automation of the machine learning lifecycle.
machine-learning How To Monitor Tensorboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-tensorboard.md
How you launch TensorBoard with Azure Machine Learning experiments depends on th
* Azure Machine Learning compute instance - no downloads or installation necessary * Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository. * In the samples folder on the notebook server, find two completed and expanded notebooks by navigating to these directories:
- * **v1 (`<version>`) > how-to-use-azureml > track-and-monitor-experiments > tensorboard > export-run-history-to-tensorboard > export-run-history-to-tensorboard.ipynb**
- * **v1 (`<version>`) > how-to-use-azureml > track-and-monitor-experiments > tensorboard > tensorboard > tensorboard.ipynb**
+ * **SDK v1 > how-to-use-azureml > track-and-monitor-experiments > tensorboard > export-run-history-to-tensorboard > export-run-history-to-tensorboard.ipynb**
+ * **SDK v1 > how-to-use-azureml > track-and-monitor-experiments > tensorboard > tensorboard > tensorboard.ipynb**
* Your own Jupyter notebook server * [Install the Azure Machine Learning SDK](/python/api/overview/azure/ml/install) with the `tensorboard` extra * [Create an Azure Machine Learning workspace](quickstart-create-resources.md).
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md
-+ Last updated 05/26/2022
-# Prepare data for computer vision tasks with automated machine learning (preview)
+# Prepare data for computer vision tasks with automated machine learning
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
Last updated 05/26/2022-+ #Customer intent: As an experienced Python developer, I need to read in my data to make it available to a remote compute to train my machine learning models.
> * [v1](v1/how-to-train-with-datasets.md) > * [v2 (current version)](how-to-read-write-data-v2.md)
-Learn how to read and write data for your jobs with the Azure Machine Learning Python SDK v2(preview) and the Azure Machine Learning CLI extension v2.
+Learn how to read and write data for your jobs with the Azure Machine Learning Python SDK v2 and the Azure Machine Learning CLI extension v2.
## Prerequisites
returned_job.services["Studio"].endpoint
## Data in pipelines
-If you're working with Azure Machine Learning pipelines, you can read data into and move data between pipeline components with the Azure Machine Learning CLI v2 extension or the Python SDK v2 (preview).
+If you're working with Azure Machine Learning pipelines, you can read data into and move data between pipeline components with the Azure Machine Learning CLI v2 extension or the Python SDK v2.
### Azure Machine Learning CLI v2 The following YAML file demonstrates how to use the output data from one component as the input for another component of the pipeline using the Azure Machine Learning CLI v2 extension:
The following YAML file demonstrates how to use the output data from one compone
:::code language="yaml" source="~/azureml-examples-main/CLI/jobs/pipelines-with-components/basics/3b_pipeline_with_data/pipeline.yml":::
-### Python SDK v2 (preview)
+### Python SDK v2
The following example defines a pipeline containing three nodes and moves data between each node.
machine-learning How To Run Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-jupyter-notebooks.md
+ Last updated 02/28/2022 #Customer intent: As a data scientist, I want to run Jupyter notebooks in my workspace in Azure Machine Learning studio.
Use the kernel dropdown on the right to change to any of the installed kernels.
## Manage packages
-Since your compute instance has multiple kernels, make sure use `%pip` or `%conda` [magic functions](https://ipython.readthedocs.io/en/stable/interactive/magics.html), which install packages into the currently-running kernel. Don't use `!pip` or `!conda`, which refers to all packages (including packages outside the currently-running kernel).
+Since your compute instance has multiple kernels, make sure to use `%pip` or `%conda` [magic functions](https://ipython.readthedocs.io/en/stable/interactive/magics.html), which install packages into the currently running kernel. Don't use `!pip` or `!conda`, which refer to all packages (including packages outside the currently running kernel).
### Status indicators
An indicator next to the **Kernel** dropdown shows its status.
Find details about your compute instances on the **Compute** page in [studio](https://ml.azure.com). ## Useful keyboard shortcuts
-Similar to Jupyter Notebooks, Azure Machine Learning Studio notebooks have a modal user interface. The keyboard does different things depending on which mode the notebook cell is in. Azure Machine Learning Studio notebooks support the following two modes for a given code cell: command mode and edit mode.
+Similar to Jupyter Notebooks, Azure Machine Learning studio notebooks have a modal user interface. The keyboard does different things depending on which mode the notebook cell is in. Azure Machine Learning studio notebooks support the following two modes for a given code cell: command mode and edit mode.
### Command mode shortcuts
Using the following keystroke shortcuts, you can more easily navigate and run co
* **File upload limit**: When uploading a file through the notebook's file explorer, you're limited to files that are smaller than 5 TB. If you need to upload a file larger than this, we recommend that you use one of the following methods:
- * Use the SDK to upload the data to a datastore. For more information, see the [Upload the data](./tutorial-1st-experiment-bring-data.md#upload) section of the tutorial.
+ * Use the SDK to upload the data to a datastore. For more information, see [Create data assets](how-to-create-data-assets.md?tabs=Python-SDK).
* Use [Azure Data Factory](v1/how-to-data-ingest-adf.md) to create a data ingestion pipeline.
machine-learning How To Safely Rollout Managed Endpoints Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-managed-endpoints-sdk-v2.md
Title: Safe rollout for managed online endpoints using Python SDK v2 (preview).
+ Title: Safe rollout for managed online endpoints using Python SDK v2.
-description: Safe rollout for online endpoints using Python SDK v2 (preview).
+description: Safe rollout for online endpoints using Python SDK v2.
Last updated 05/25/2022 -+
-# Safe rollout for managed online endpoints using Python SDK v2 (preview)
+# Safe rollout for managed online endpoints using Python SDK v2
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!IMPORTANT]
-> SDK v2 is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- In this article, you learn how to deploy a new version of the model without causing any disruption. You'll use a blue-green deployment (also known as a safe rollout), an approach in which a new version of a web service is introduced to production by rolling out the change to a small subset of users or requests before rolling it out completely. This article assumes you're using online endpoints; for more information, see [Azure Machine Learning endpoints](concept-endpoints.md). In this article, you'll learn to:
machine-learning How To Schedule Pipeline Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-schedule-pipeline-job.md
Last updated 08/15/2022 --+ # Schedule machine learning pipeline jobs (preview) [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!IMPORTANT]
-> SDK v2 is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- In this article, you'll learn how to programmatically schedule a pipeline to run on Azure. You can create a schedule based on elapsed time. Time-based schedules can be used to take care of routine tasks, such as retrain models or do batch predictions regularly to keep them up-to-date. After learning how to create schedules, you'll learn how to retrieve, update and deactivate them via CLI and SDK. ## Prerequisites
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
Previously updated : 06/06/2022 Last updated : 10/04/2022 # Use network isolation with managed online endpoints (preview) + When deploying a machine learning model to a managed online endpoint, you can secure communication with the online endpoint by using [private endpoints](../private-link/private-endpoint-overview.md). Using a private endpoint with online endpoints is currently a preview feature. [!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
The following diagram shows how communications flow through private endpoints to
* To use Azure machine learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* You must install and configure the Azure CLI and ML extension. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+* You must install and configure the Azure CLI and ML extension or the AzureML Python SDK v2. For more information, see the following articles:
+
+ * [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+ * [Install the Python SDK v2](https://aka.ms/sdk-v2-install).
* You must have an Azure Resource Group, in which you (or the service principal you use) need to have `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
The following diagram shows how communications flow through private endpoints to
To secure scoring requests to the online endpoint to your virtual network, set the `public_network_access` flag for the endpoint to `disabled`:
+# [Azure CLI](#tab/cli)
+ ```azurecli az ml online-endpoint create -f endpoint.yml --set public_network_access=disabled ```
-When `public_network_access` is `disabled`, inbound scoring requests are received using the [private endpoint of the Azure Machine Learning workspace](./how-to-configure-private-link.md) and the endpoint can't be reached from public networks.
+# [Python SDK](#tab/python)
+
+```python
+from azure.ai.ml.entities import ManagedOnlineEndpoint
+from azure.ai.ml.entities._common import PublicNetworkAccess
+
+endpoint = ManagedOnlineEndpoint(name='my-online-endpoint',
+ description='this is a sample online endpoint',
+ tags={'foo': 'bar'},
+ auth_mode="key",
+ public_network_access=PublicNetworkAccess.Disabled
+ # public_network_access=PublicNetworkAccess.Enabled
+)
+
+ml_client.begin_create_or_update(endpoint)
+```
++
+When `public_network_access` is `Disabled`, inbound scoring requests are received using the [private endpoint of the Azure Machine Learning workspace](./how-to-configure-private-link.md) and the endpoint can't be reached from public networks.
## Outbound (resource access)
The following are the resources that the deployment communicates with over the p
When you configure the `egress_public_network_access` to `disabled`, a new private endpoint is created per deployment, per service. For example, if you set the flag to `disabled` for three deployments to an online endpoint, nine private endpoints are created. Each deployment would have three private endpoints that are used to communicate with the workspace, blob, and container registry.
+# [Azure CLI](#tab/cli)
+ ```azurecli az ml online-deployment create -f deployment.yml --set egress_public_network_access=disabled ```
+# [Python SDK](#tab/python)
+
+```python
+from azure.ai.ml.entities import ManagedOnlineDeployment, CodeConfiguration
+
+blue_deployment = ManagedOnlineDeployment(name='blue',
+ endpoint_name='my-online-endpoint',
+ model=model,
+ code_configuration=CodeConfiguration(code_local_path='./model-1/onlinescoring/',
+ scoring_script='score.py'),
+ environment=env,
+ instance_type='Standard_DS2_v2',
+ instance_count=1,
+ egress_public_network_access=PublicNetworkAccess.Disabled
+ # egress_public_network_access=PublicNetworkAccess.Enabled
+)
+
+ml_client.begin_create_or_update(blue_deployment)
+```
+++ ## Scenarios The following table lists the supported configurations when configuring inbound and outbound communications for an online endpoint:
machine-learning How To Select Algorithms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-select-algorithms.md
Along with guidance in the Azure Machine Learning Algorithm Cheat Sheet, keep in
## Comparison of machine learning algorithms
+>[!Note]
+> Designer supports two types of components, classic prebuilt components and custom components. These two types of components are not compatible.
+>
+> Classic prebuilt components provide components mainly for data processing and traditional machine learning tasks like regression and classification. This type of component continues to be supported but won't have any new components added.
+>
+> Custom components allow you to provide your own code as a component. They support sharing across workspaces and seamless authoring across the studio, CLI, and SDK interfaces.
+>
+> This article applies to classic prebuilt components.
+ Some learning algorithms make particular assumptions about the structure of the data or the desired results. If you can find one that fits your needs, it can give you more useful results, more accurate predictions, or faster training times. The following table summarizes some of the most important characteristics of algorithms from the classification, regression, and clustering families:
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md
Last updated 09/23/2022 -+ # Set up authentication for Azure Machine Learning resources and workflows+ > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](./v1/how-to-setup-authentication.md) > * [v2 (current version)](how-to-setup-authentication.md)
-Learn how to set up client to Azure authentication to your Azure Machine Learning workspace. Specifically, authenticating from the Azure CLI or the Azure Machine Learning SDK v2 (preview). Authentication to your Azure Machine Learning workspace is based on __Azure Active Directory__ (Azure AD) for most things. In general, there are four authentication workflows that you can use when connecting to the workspace:
+Learn how to set up authentication to your Azure Machine Learning workspace from the Azure CLI or Azure Machine Learning SDK v2. Authentication to your Azure Machine Learning workspace is based on __Azure Active Directory__ (Azure AD) for most things. In general, there are four authentication workflows that you can use when connecting to the workspace:
* __Interactive__: You use your account in Azure Active Directory to either directly authenticate, or to get a token that is used for authentication. Interactive authentication is used during _experimentation and iterative development_. Interactive authentication enables you to control access to resources (such as a web service) on a per-user basis.
Azure AD Conditional Access can be used to further control or restrict access to
* Create an [Azure Machine Learning workspace](how-to-manage-workspace.md). * [Configure your development environment](how-to-configure-environment.md) or use a [Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) and install the [Azure Machine Learning SDK v2](https://aka.ms/sdk-v2-install).
- > [!IMPORTANT]
- > SDK v2 is currently in public preview.
- > The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
- > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- * Install the [Azure CLI](/cli/azure/install-azure-cli). ## Azure Active Directory
machine-learning How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-customer-managed-keys.md
description: 'Learn how to improve data security with Azure Machine Learning by
-+
Once the workspace has been created, you'll notice that Azure resource group is
> [!WARNING] > __Don't delete the resource group__ that contains this Azure Cosmos DB instance, or any of the resources automatically created in this group. If you need to delete the resource group or Microsoft-managed services in it, you must delete the Azure Machine Learning workspace that uses it. The resource group resources are deleted when the associated workspace is deleted.
-For more information on customer-managed keys with Cosmos DB, see [Configure customer-managed keys for your Azure Cosmos DB account](../cosmos-db/how-to-setup-cmk.md).
+For more information on customer-managed keys with Azure Cosmos DB, see [Configure customer-managed keys for your Azure Cosmos DB account](../cosmos-db/how-to-setup-cmk.md).
### Azure Container Instance
machine-learning How To Share Models Pipelines Across Workspaces With Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-share-models-pipelines-across-workspaces-with-registries.md
+
+ Title: Share models, components, and environments across workspaces with registries (preview)
+
+description: Learn how to practice cross-workspace MLOps and collaborate across teams by sharing models, components, and environments through registries.
++++++ Last updated : 09/21/2022++++
+# Share models, components and environments across workspaces with registries (preview)
+
+Azure Machine Learning registry (preview) enables you to collaborate across workspaces within your organization. Using registries, you can share models, components, and environments.
+
+There are two scenarios where you'd want to use the same set of models, components and environments in multiple workspaces:
+
+* __Cross-workspace MLOps__: You're training a model in a `dev` workspace and need to deploy it to `test` and `prod` workspaces. In this case, you want to have end-to-end lineage between endpoints to which the model is deployed in `test` or `prod` workspaces and the training job, metrics, code, data, and environment that was used to train the model in the `dev` workspace.
+* __Share and reuse models and pipelines across different teams__: Sharing and reuse improve collaboration and productivity. In this scenario, you may want to publish a trained model and the associated components and environments used to train it to a central catalog. From there, colleagues from other teams can search and reuse the assets you shared in their own experiments.
+
+In this article, you'll learn how to:
+
+* Create an environment and component in the registry.
+* Use the component from registry to submit a model training job in a workspace.
+* Register the trained model in the registry.
+* Deploy the model from the registry to an online-endpoint in the workspace, then submit an inference request.
+## Prerequisites
+
+Before following the steps in this article, make sure you have the following prerequisites:
+
+* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+- An Azure Machine Learning registry (preview) to share models, components and environments. To create a registry, see [Learn how to create a registry](how-to-manage-registries.md).
+
+- An Azure Machine Learning workspace. If you don't have one, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one.
+
+ > [!IMPORTANT]
+ > The Azure region (location) where you create your workspace must be in the list of supported regions for Azure ML registry
+
+- The Azure CLI and the `ml` extension __or__ the Azure Machine Learning Python SDK v2:
+
+ # [Azure CLI](#tab/cli)
+
+ To install the Azure CLI and extension, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+
+ > [!IMPORTANT]
+ > * The CLI examples in this article assume that you are using the Bash (or compatible) shell. For example, from a Linux system or [Windows Subsystem for Linux](/windows/wsl/about).
+ > * The examples also assume that you have configured defaults for the Azure CLI so that you don't have to specify the parameters for your subscription, workspace, resource group, or location. To set default settings, use the following commands. Replace the following parameters with the values for your configuration:
+ >
+ > * Replace `<subscription>` with your Azure subscription ID.
+ > * Replace `<workspace>` with your Azure Machine Learning workspace name.
+ > * Replace `<resource-group>` with the Azure resource group that contains your workspace.
+ > * Replace `<location>` with the Azure region that contains your workspace.
+ >
+ > ```azurecli
+ > az account set --subscription <subscription>
+ > az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
+ > ```
+ > You can see what your current defaults are by using the `az configure -l` command.
+
+ # [Python SDK](#tab/python)
+
+ To install the Python SDK v2, use the following command:
+
+ ```bash
+ pip install --pre azure-ai-ml
+ ```
+
+
+
+### Clone examples repository
+
+The code examples in this article are based on the `nyc_taxi_data_regression` sample in the [examples repository](https://github.com/Azure/azureml-examples). To use these files on your development environment, use the following commands to clone the repository and change directories to the example:
+
+```bash
+git clone https://github.com/Azure/azureml-examples
+cd azureml-examples
+# changing branch is temporary until samples merge to main
+git checkout mabables/registry
+```
+
+# [Azure CLI](#tab/cli)
+
+For the CLI example, change directories to `cli/jobs/pipelines-with-components/nyc_taxi_data_regression` in your local clone of the [examples repository](https://github.com/Azure/azureml-examples).
+
+```bash
+cd cli/jobs/pipelines-with-components/nyc_taxi_data_regression
+```
+
+# [Python SDK](#tab/python)
+
+For the Python SDK example, use the `nyc_taxi_data_regression` sample from the [examples repository](https://github.com/Azure/azureml-examples). The sample notebook, [share-models-components-environments.ipynb](https://github.com/Azure/azureml-examples/tree/main/sdk/python/assets/assets-in-registry/share-models-components-environments.ipynb), is available in the `sdk/python/assets/assets-in-registry` folder. All the sample YAML for components, model training code, and sample data for training and inference is available in `cli/jobs/pipelines-with-components/nyc_taxi_data_regression`. Change to the `sdk/python/assets/assets-in-registry` directory and open the `share-models-components-environments.ipynb` notebook if you'd like to step through a notebook to try out the code in this document.
+++
+### Create SDK connection
+
+> [!TIP]
+> This step is only needed when using the Python SDK.
+
+Create a client connection to both the AzureML workspace and registry:
+
+```python
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+# Authenticate once and reuse the credential for both clients
+credential = DefaultAzureCredential()
+
+ml_client_workspace = MLClient( credential=credential,
+                                subscription_id = "<workspace-subscription>",
+                                resource_group_name = "<workspace-resource-group>",
+ workspace_name = "<workspace-name>")
+print(ml_client_workspace)
+
+ml_client_registry = MLClient ( credential=credential,
+ registry_name = "<registry-name>")
+print(ml_client_registry)
+```
+
+## Create environment in registry
+
+Environments define the docker container and python dependencies required to run training jobs or deploy models. For more information on environments, see the following articles:
+
+* [Environment concepts](./concept-environments.md)
+* [How to create environments (CLI)](./how-to-manage-environments-v2.md)
+
+# [Azure CLI](#tab/cli)
+
+> [!TIP]
+> The same CLI command `az ml environment create` can be used to create environments in a workspace or registry. Running the command with `--workspace-name` creates the environment in a workspace, whereas running it with `--registry-name` creates the environment in the registry.
+
+We'll create an environment that uses the `python:3.8` docker image and installs the Python packages required to run a training job using the SciKit Learn framework. If you've cloned the examples repo and are in the folder `cli/jobs/pipelines-with-components/nyc_taxi_data_regression`, you should be able to see the environment definition file `env_train.yml` that references the Dockerfile `env_train/Dockerfile`. The `env_train.yml` is shown below for your reference:
+
+```YAML
+$schema: https://azuremlschemas.azureedge.net/latest/environment.schema.json
+name: SKLearnEnv
+version: 1
+build:
+ path: ./env_train
+```
+
+Create the environment using `az ml environment create` as follows:
+
+```azurecli
+az ml environment create --file env_train.yml --registry-name <registry-name>
+```
+
+If you get an error that an environment with this name and version already exists in the registry, you can either edit the `version` field in `env_train.yml` or specify a different version on the CLI that overrides the version value in `env_train.yml`.
+
+```azurecli
+# use shell epoch time as the version
+version=$(date +%s)
+az ml environment create --file env_train.yml --registry-name <registry-name> --set version=$version
+```
+
+> [!TIP]
+> `version=$(date +%s)` works only in Linux. Replace `$version` with a random number if this does not work.
+
+Note down the `name` and `version` of the environment from the output of the `az ml environment create` command and use them with `az ml environment show` commands as follows. You'll need the `name` and `version` in the next section when you create a component in the registry.
+
+```azurecli
+az ml environment show --name SKLearnEnv --version 1 --registry-name <registry-name>
+```
+
+> [!TIP]
+> If you used a different environment name or version, replace the `--name` and `--version` parameters accordingly.
+
+ You can also use `az ml environment list --registry-name <registry-name>` to list all environments in the registry.
+
+# [Python SDK](#tab/python)
+
+> [!TIP]
+> The same `MLClient.environments.create_or_update()` can be used to create environments in either a workspace or a registry depending on the target it has been initialized with. Since you work with both the workspace and registry in this document, you have initialized `ml_client_workspace` and `ml_client_registry` to work with the workspace and registry respectively.
++
+We'll create an environment that uses the `python:3.8` docker image and installs Python packages required to run a training job using the SciKit Learn framework. The `Dockerfile` with base image and list of Python packages to install is available in `cli/jobs/pipelines-with-components/nyc_taxi_data_regression/env_train`. Initialize the environment object and create the environment.
+
+```python
+from azure.ai.ml.entities import Environment, BuildContext
+
+env_docker_context = Environment(
+ build=BuildContext(path="../../../../cli/jobs/pipelines-with-components/nyc_taxi_data_regression/env_train/"),
+ name="SKLearnEnv",
+ version=str(1),
+ description="Scikit Learn environment",
+)
+ml_client_registry.environments.create_or_update(env_docker_context)
+```
+
+> [!TIP]
+> If you get an error that an environment with this name and version already exists in the registry, specify a different version for the `version` parameter.
+
+Note down the `name` and `version` of the environment from the output and pass them to the `ml_client_registry.environments.get()` method to fetch the environment from registry.
+
+You can also use `ml_client_registry.environments.list()` to list all environments in the registry.
+++
+You can browse all environments in the AzureML studio. Make sure you navigate to the global UI and look for the __Registries__ entry.
++
+
+## Create a component in registry
+
+Components are reusable building blocks of Machine Learning pipelines in AzureML. You can package the code, command, environment, input interface and output interface of an individual pipeline step into a component. Then you can reuse the component across multiple pipelines without having to worry about porting dependencies and code each time you write a different pipeline.
+
+Creating a component in a workspace allows you to use the component in any pipeline job within that workspace. Creating a component in a registry allows you to use the component in any pipeline in any workspace within your organization. Creating components in a registry is a great way to build modular reusable utilities or shared training tasks that can be used for experimentation by different teams within your organization.
+
+For more information on components, see the following articles:
+* [Component concepts](concept-component.md)
+* [How to use components in pipelines (CLI)](how-to-create-component-pipelines-cli.md)
+* [How to use components in pipelines (SDK)](how-to-create-component-pipeline-python.md)
+
+# [Azure CLI](#tab/cli)
+
+Make sure you are in the folder `cli/jobs/pipelines-with-components/nyc_taxi_data_regression`. You'll find the component definition file `train.yml` that packages a Scikit Learn training script `train_src/train.py` and the [curated environment](resource-curated-environments.md) `AzureML-sklearn-0.24-ubuntu18.04-py37-cpu`. We'll use the Scikit Learn environment created in the previous step instead of the curated environment. You can edit the `environment` field in `train.yml` to refer to your Scikit Learn environment. The resulting component definition file `train.yml` will be similar to the following example:
+
+```YAML
+# <component>
+$schema: https://azuremlschemas.azureedge.net/latest/commandComponent.schema.json
+name: train_linear_regression_model
+display_name: TrainLinearRegressionModel
+version: 1
+type: command
+inputs:
+ training_data:
+ type: uri_folder
+ test_split_ratio:
+ type: number
+ min: 0
+ max: 1
+ default: 0.2
+outputs:
+ model_output:
+ type: mlflow_model
+ test_data:
+ type: uri_folder
+code: ./train_src
+environment: azureml://registries/<registry-name>/environments/SKLearnEnv/versions/1
+command: >-
+ python train.py
+ --training_data ${{inputs.training_data}}
+ --test_data ${{outputs.test_data}}
+ --model_output ${{outputs.model_output}}
+ --test_split_ratio ${{inputs.test_split_ratio}}
+
+```
+
+If you used a different name or version, the more generic representation looks like this: `environment: azureml://registries/<registry-name>/environments/<sklearn-environment-name>/versions/<sklearn-environment-version>`, so make sure you replace the `<registry-name>`, `<sklearn-environment-name>`, and `<sklearn-environment-version>` accordingly. You then run the `az ml component create` command to create the component as follows.
+
+```azurecli
+az ml component create --file train.yml --registry-name <registry-name>
+```
+
+> [!TIP]
+> The same CLI command `az ml component create` can be used to create components in a workspace or registry. Running the command with `--workspace-name` creates the component in a workspace, whereas running it with `--registry-name` creates the component in the registry.
+
+If you prefer to not edit the `train.yml`, you can override the environment name on the CLI as follows:
+
+```azurecli
+az ml component create --file train.yml --registry-name <registry-name> --set environment=azureml://registries/<registry-name>/environments/SKLearnEnv/versions/1
+# or if you used a different name or version, replace <sklearn-environment-name> and <sklearn-environment-version> accordingly
+az ml component create --file train.yml --registry-name <registry-name> --set environment=azureml://registries/<registry-name>/environments/<sklearn-environment-name>/versions/<sklearn-environment-version>
+```
+
+> [!TIP]
+> If you get an error that the name of the component already exists in the registry, you can either edit the version in `train.yml` or override the version on the CLI with a random version.
+
+Note down the `name` and `version` of the component from the output of the `az ml component create` command and use them with the `az ml component show` command as follows. You'll need the `name` and `version` in the next section when you submit a training job in the workspace.
+
+```azurecli
+az ml component show --name <component_name> --version <component_version> --registry-name <registry-name>
+```
+ You can also use `az ml component list --registry-name <registry-name>` to list all components in the registry.
+
+# [Python SDK](#tab/python)
+
+Review the component definition file `train.yml` and the Python code `train_src/train.py` to train a regression model using Scikit Learn available in the `cli/jobs/pipelines-with-components/nyc_taxi_data_regression` folder. Load the component object from the component definition file `train.yml`.
+
+```python
+from azure.ai.ml import load_component
+
+parent_dir = "../../../../cli/jobs/pipelines-with-components/nyc_taxi_data_regression"
+train_model = load_component(path=parent_dir + "/train.yml")
+print(train_model)
+```
+
+Update the `environment` to point to the `SKLearnEnv` environment created in the previous section and then create the component.
+
+```python
+# Fetch the environment created earlier from the registry
+env_from_registry = ml_client_registry.environments.get(name="SKLearnEnv", version="1")
+train_model.environment=env_from_registry
+ml_client_registry.components.create_or_update(train_model)
+```
+
+> [!TIP]
+> If you get an error that the name of the component already exists in the registry, update the version with `train_model.version=<unique_version_number>` before creating the component.
+
+Note down the `name` and `version` of the component from the output and pass them to the `ml_client_registry.components.get()` method to fetch the component from the registry.
+
+You can also use `ml_client_registry.components.list()` to list all components in the registry, or browse all components in the AzureML studio UI. Make sure you navigate to the global UI and look for the __Registries__ entry.
+++
+You can browse all components in the AzureML studio. Make sure you navigate to the global UI and look for the __Registries__ entry.
++
+## Run a pipeline job in a workspace using component from registry
+
+When running a pipeline job that uses a component from a registry, the _compute_ resources and _training data_ are local to the workspace. For more information on running jobs, see the following articles:
+
+* [Running jobs (CLI)](./how-to-train-cli.md)
+* [Running jobs (SDK)](./how-to-train-sdk.md)
+* [Pipeline jobs with components (CLI)](./how-to-create-component-pipelines-cli.md)
+* [Pipeline jobs with components (SDK)](./how-to-create-component-pipeline-python.md)
+
+# [Azure CLI](#tab/cli)
+
+We'll run a pipeline job with the Scikit Learn training component created in the previous section to train a model. Check that you are in the folder `cli/jobs/pipelines-with-components/nyc_taxi_data_regression`. The training dataset is located in the `data_transformed` folder. Edit the `component` section under the `train_job` section of the `single-job-pipeline.yml` file to refer to the training component created in the previous section. The resulting `single-job-pipeline.yml` is shown below.
+
+```YAML
+$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
+type: pipeline
+display_name: nyc_taxi_data_regression_single_job
+description: Single job pipeline to train regression model based on nyc taxi dataset
+
+jobs:
+  train_job:
+    type: command
+    component: azureml://registries/<registry-name>/components/train_linear_regression_model/versions/1
+    compute: azureml:cpu-cluster
+    inputs:
+      training_data:
+        type: uri_folder
+        path: ./data_transformed
+    outputs:
+      model_output:
+        type: mlflow_model
+      test_data:
+```
+
+The key aspect is that this pipeline is going to run in a workspace using a component that isn't in that specific workspace. The component is in a registry that can be used with any workspace in your organization. You can run this training job in any workspace you have access to without having to worry about making the training code and environment available in that workspace.
+
+> [!WARNING]
+> * Before running the pipeline job, confirm that the workspace in which you will run the job is in an Azure region that is supported by the registry in which you created the component.
+> * Confirm that the workspace has a compute cluster with the name `cpu-cluster` or edit the `compute` field under `jobs.train_job.compute` with the name of your compute.
+
+Run the pipeline job with the `az ml job create` command.
+
+```azurecli
+az ml job create --file single-job-pipeline.yml
+```
+
+> [!TIP]
+> If you have not configured the default workspace and resource group as explained in the prerequisites section, you will need to specify the `--workspace-name` and `--resource-group` parameters for the `az ml job create` to work.
++
+Alternatively, you can skip editing `single-job-pipeline.yml` and override the component used by `train_job` from the CLI.
+
+```azurecli
+az ml job create --file single-job-pipeline.yml --set jobs.train_job.component=azureml://registries/<registry-name>/components/train_linear_regression_model/versions/1
+```
+
+Since the component used in the training job is shared through a registry, you can submit the job to any workspace that you have access to in your organization, even across different subscriptions. For example, if you have `dev-workspace`, `test-workspace` and `prod-workspace`, running the training job in these three workspaces is as easy as running three `az ml job create` commands.
+
+```azurecli
+az ml job create --file single-job-pipeline.yml --workspace-name dev-workspace --resource-group <resource-group-of-dev-workspace>
+az ml job create --file single-job-pipeline.yml --workspace-name test-workspace --resource-group <resource-group-of-test-workspace>
+az ml job create --file single-job-pipeline.yml --workspace-name prod-workspace --resource-group <resource-group-of-prod-workspace>
+```
+
+# [Python SDK](#tab/python)
+
+You'll run a pipeline job with the Scikit Learn training component created in the previous section to train a model. The training dataset is located in the `cli/jobs/pipelines-with-components/nyc_taxi_data_regression/data_transformed` folder. Construct the pipeline using the component created in the previous step.
+
+The key aspect is that this pipeline is going to run in a workspace using a component that isn't in that specific workspace. The component is in a registry that can be used with any workspace in your organization. You can run this training job in any workspace you have access to without having to worry about making the training code and environment available in that workspace.
+
+```Python
+@pipeline()
+def pipeline_with_registered_components(
+    training_data
+):
+    train_job = train_component_from_registry(
+        training_data=training_data,
+    )
+pipeline_job = pipeline_with_registered_components(
+    training_data=Input(type="uri_folder", path=parent_dir + "/data_transformed/"),
+)
+pipeline_job.settings.default_compute = "cpu-cluster"
+print(pipeline_job)
+```
+
+> [!WARNING]
+> * Before you run the pipeline job, confirm that the workspace in which you will run the job is in an Azure region that is supported by the registry in which you created the component.
+> * Confirm that the workspace has a compute cluster with the name `cpu-cluster`, or update the compute with `pipeline_job.settings.default_compute=<compute-cluster-name>`.
+
+Run the pipeline job and wait for it to complete.
+
+```python
+pipeline_job = ml_client_workspace.jobs.create_or_update(
+    pipeline_job, experiment_name="sdk_job_component_from_registry", skip_validation=True
+)
+ml_client_workspace.jobs.stream(pipeline_job.name)
+pipeline_job=ml_client_workspace.jobs.get(pipeline_job.name)
+pipeline_job
+```
+
+> [!TIP]
+> Notice that you use `ml_client_workspace` to run the pipeline job, whereas you used `ml_client_registry` to create the environment and component.
+
+Since the component used in the training job is shared through a registry, you can submit the job to any workspace that you have access to in your organization, even across different subscriptions. For example, if you have `dev-workspace`, `test-workspace` and `prod-workspace`, you can connect to those workspaces and resubmit the job.
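+
+As a rough sketch (the workspace details below are placeholders), submitting to another workspace only requires creating an `MLClient` for that workspace and reusing the same pipeline object:
+
+```python
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+# Placeholders: fill in the subscription, resource group, and name of the target workspace.
+ml_client_test_workspace = MLClient(
+    credential=DefaultAzureCredential(),
+    subscription_id="<subscription-id-of-test-workspace>",
+    resource_group_name="<resource-group-of-test-workspace>",
+    workspace_name="test-workspace",
+)
+# Resubmit the same pipeline job object to the other workspace.
+ml_client_test_workspace.jobs.create_or_update(
+    pipeline_job, experiment_name="sdk_job_component_from_registry"
+)
+```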
+++
+In AzureML studio, select the link in the job output to view the job. Here you can analyze training metrics, verify that the job is using the component and environment from the registry, and review the trained model. Note down the `name` of the job from the output, or find the same information from the job overview in AzureML studio. You'll need this information to download the trained model in the next section on creating models in a registry.
++
+## Create a model in registry
+
+In this section, you'll learn how to create models in a registry. Review [manage models](./how-to-manage-models.md) to learn more about model management in AzureML. We'll look at two different ways to create a model in a registry: first, from local files; second, by copying a model registered in the workspace to a registry.
+
+In both options, you'll create a model in the [MLflow format](./how-to-manage-models-mlflow.md), which helps you to [deploy this model for inference without writing any inference code](./how-to-deploy-mlflow-models-online-endpoints.md).
+
+### Create a model in registry from local files
+
+# [Azure CLI](#tab/cli)
+
+Download the model, which is available as an output of the `train_job`, by replacing `<job-name>` with the pipeline job name from the previous section. The model, along with the MLflow metadata files, should then be available in the `./artifacts/model/` folder.
+
+```azurecli
+# fetch the name of the train_job by listing all child jobs of the pipeline job
+train_job_name=$(az ml job list --parent-job-name <job-name> --query [0].name | sed 's/\"//g')
+# download the default outputs of the train_job
+az ml job download --name $train_job_name
+# review the model files
+ls -l ./artifacts/model/
+```
+
+> [!TIP]
+> If you have not configured the default workspace and resource group as explained in the prerequisites section, you will need to specify the `--workspace-name` and `--resource-group` parameters for the `az ml job list` and `az ml job download` commands to work.
+
+> [!WARNING]
+> The output of `az ml job list` is passed to `sed`. This works only on Linux shells. If you are on Windows, run `az ml job list --parent-job-name <job-name> --query [0].name ` and strip any quotes you see in the train job name.
+
+If you're unable to download the model, you can find a sample MLflow model trained by the training job of the previous section in the `cli/jobs/pipelines-with-components/nyc_taxi_data_regression/artifacts/model/` folder.
+
+Create the model in the registry:
+
+```azurecli
+# create model in registry
+az ml model create --name nyc-taxi-model --version 1 --type mlflow_model --path ./artifacts/model/ --registry-name <registry-name>
+```
+
+> [!TIP]
+> * Use a random number for the `version` parameter if you get an error that the model name and version already exist.
+> * The same CLI command `az ml model create` can be used to create models in a workspace or a registry. Running the command with `--workspace-name` creates the model in a workspace, whereas running it with `--registry-name` creates the model in the registry.
+
+# [Python SDK](#tab/python)
+
+Make sure you use the `pipeline_job` object from the previous section, or fetch the pipeline job using the `ml_client_workspace.jobs.get(name="<pipeline-job-name>")` method, to get the list of child jobs in the pipeline. You'll then look for the job with `display_name` set to `train_job` and download the trained model from the `train_job` output. The downloaded model, along with the MLflow metadata files, should be available in the `./artifacts/model/` folder.
+
+```python
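+# List the child jobs of the pipeline, find the one displayed as train_job, and download its default outputs.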
+jobs = ml_client_workspace.jobs.list(parent_job_name=pipeline_job.name)
+for job in jobs:
+    if job.display_name == "train_job":
+        print(job.name)
+        ml_client_workspace.jobs.download(job.name)
+```
+
+If you're unable to download the model, you can find a sample MLflow model trained by the training job of the previous section in the `sdk/resources/registry/model` folder.
+
+Create the model in the registry.
+
+```python
+mlflow_model = Model(
+ path="./artifacts/model/",
+ type=AssetTypes.MLFLOW_MODEL,
+ name="nyc-taxi-model",
+ version=str(1), # use str(int(time.time())) if you want a random model number
+ description="MLflow model created from local path",
+)
+ml_client_registry.models.create_or_update(mlflow_model)
+```
+++
+### Copy a model from workspace to registry
+
+In this workflow, you'll first create the model in the workspace and then copy it to the registry. This workflow is useful when you want to test the model in the workspace before sharing it. For example, deploy it to endpoints, try out inference with some test data and then copy the model to a registry if everything looks good. This workflow may also be useful when you're developing a series of models using different techniques, frameworks or parameters and want to promote just one of them to the registry as a production candidate.
+
+# [Azure CLI](#tab/cli)
+
+Make sure you have the name of the pipeline job from the previous section and replace that in the command to fetch the training job name below. You'll then register the model from the output of the training job into the workspace. Note how the `--path` parameter refers to the `train_job` output with the `azureml://jobs/$train_job_name/outputs/artifacts/paths/model` syntax.
+
+```azurecli
+# fetch the name of the train_job by listing all child jobs of the pipeline job
+train_job_name=$(az ml job list --parent-job-name <job-name> --workspace-name <workspace-name> --resource-group <workspace-resource-group> --query [0].name | sed 's/\"//g')
+# create model in workspace
+az ml model create --name nyc-taxi-model --version 1 --type mlflow_model --path azureml://jobs/$train_job_name/outputs/artifacts/paths/model
+```
+
+> [!TIP]
+> * Use a random number for the `version` parameter if you get an error that the model name and version already exist.
+> * If you have not configured the default workspace and resource group as explained in the prerequisites section, you will need to specify the `--workspace-name` and `--resource-group` parameters for the `az ml model create` to work.
+
+Note down the model name and version. You can validate that the model is registered in the workspace by browsing it in the studio UI or by using the `az ml model show --name nyc-taxi-model --version <model-version>` command.
+
+Next, copy the model from the workspace to the registry. Note how the `--path` parameter now refers to the model in the workspace using the `azureml://subscriptions/<subscription-id-of-workspace>/resourceGroups/<resource-group-of-workspace>/workspaces/<workspace-name>/models/<model-name>/versions/<model-version>` syntax.
++
+```azurecli
+# copy model registered in workspace to registry
+az ml model create --registry-name <registry-name> --path azureml://subscriptions/<subscription-id-of-workspace>/resourceGroups/<resource-group-of-workspace>/workspaces/<workspace-name>/models/nyc-taxi-model/versions/1
+```
+
+> [!TIP]
+> * Make sure to use the right model name and version if you changed it in the `az ml model create` command.
+> * The above command creates the model in the registry with the same name and version. You can provide a different name or version with the `--name` or `--version` parameters.
+
+Note down the `name` and `version` of the model from the output of the `az ml model create` command and use them with `az ml model show` commands as follows. You'll need the `name` and `version` in the next section when you deploy the model to an online endpoint for inference.
+
+```azurecli
+az ml model show --name <model_name> --version <model_version> --registry-name <registry-name>
+```
+
+You can also use `az ml model list --registry-name <registry-name>` to list all models in the registry or browse all models in the AzureML studio UI. Make sure you navigate to the global UI and look for the Registries hub.
+
+# [Python SDK](#tab/python)
+
+Make sure you use the `pipeline_job` object from the previous section, or fetch the pipeline job using the `ml_client_workspace.jobs.get(name="<pipeline-job-name>")` method, to get the list of child jobs in the pipeline. You'll then look for the job with `display_name` set to `train_job` and use the `name` of the `train_job` to construct the path pointing to the model output, which looks like this: `azureml://jobs/<job_name>/outputs/artifacts/paths/model`.
+
+```python
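+# Find the train_job child job and construct the azureml:// path to its model output.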
+jobs = ml_client_workspace.jobs.list(parent_job_name=pipeline_job.name)
+for job in jobs:
+    if job.display_name == "train_job":
+        print(job.name)
+        model_path_from_job = "azureml://jobs/{job_name}/outputs/artifacts/paths/model".format(job_name=job.name)
+
+print(model_path_from_job)
+```
+
+Register the model from the output of the training job into the workspace using the path constructed above.
+
+```python
+mlflow_model = Model(
+    path=model_path_from_job,
+    type=AssetTypes.MLFLOW_MODEL,
+    name="nyc-taxi-model",
+    version=version_timestamp,
+    description="MLflow model created from job output",
+)
+ml_client_workspace.models.create_or_update(mlflow_model)
+```
+
+> [!TIP]
+> Notice that you are using MLClient object `ml_client_workspace` since you are creating the model in the workspace.
+
+Note down the model name and version. You can validate that the model is registered in the workspace by browsing it in the studio UI or by fetching it with the `ml_client_workspace.models.get()` method.
+
+Next, copy the model from the workspace to the registry. Construct the path to the model in the workspace using the `azureml://subscriptions/<subscription-id-of-workspace>/resourceGroups/<resource-group-of-workspace>/workspaces/<workspace-name>/models/<model-name>/versions/<model-version>` syntax.
++
+```python
+# fetch the model from workspace
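+# `version` is assumed to hold the version you used when registering the model in the workspace (for example, "1").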
+model_in_workspace = ml_client_workspace.models.get(name="nyc-taxi-model", version=version)
+print(model_in_workspace)
+# change the format so that the registry understands the model (when you print the model_ready_to_copy object, notice the asset ID)
+model_ready_to_copy = ml_client_workspace.models._prepare_to_copy(model_in_workspace)
+print(model_ready_to_copy)
+# copy the model from the workspace to the registry
+ml_client_registry.models.create_or_update(model_ready_to_copy)
+```
+
+> [!TIP]
+> Make sure to use the right model name and version if you changed it in the `ml_client_workspace.models.create_or_update()` method used to create the model in the workspace.
+
+Note down the `name` and `version` of the model from the output and use them with the `ml_client_registry.models.get()` method as follows. You'll need the `name` and `version` in the next section when you deploy the model to an online endpoint for inference.
+
+```python
+mlflow_model_from_registry = ml_client_registry.models.get(name="nyc-taxi-model", version=str(1))
+print(mlflow_model_from_registry)
+```
+
+You can also use `ml_client_registry.models.list()` to list all models in the registry or browse all models in the AzureML studio UI. Make sure you navigate to the global UI and look for the Registries hub.
+++
+The following screenshot shows a model in a registry in AzureML studio. If you created a model from the job output and then copied the model from the workspace to registry, you'll see that the model has a link to the job that trained the model. You can use that link to navigate to the training job to review the code, environment and data used to train the model.
++
+## Deploy model from registry to online endpoint in workspace
+
+In this last section, you'll deploy a model from the registry to an online endpoint in a workspace. You can choose to deploy to any workspace you have access to in your organization, provided the location of the workspace is one of the locations supported by the registry. This capability is helpful if you trained a model in a `dev` workspace and now need to deploy the model to a `test` or `prod` workspace, while preserving the lineage information about the code, environment, and data used to train the model.
+
+Online endpoints let you deploy models and submit inference requests through REST APIs. For more information, see the following articles:
+
+* [Managed endpoints (CLI)](how-to-deploy-managed-online-endpoints.md)
+* [Managed endpoints (SDK)](how-to-deploy-managed-online-endpoint-sdk-v2.md)
+
+# [Azure CLI](#tab/cli)
+
+Create an online endpoint.
+
+```azurecli
+az ml online-endpoint create --name reg-ep-1234
+```
+
+Update the `model:` line in the `deploy.yml` file, available in the `cli/jobs/pipelines-with-components/nyc_taxi_data_regression` folder, to refer to the model name and version from the previous step. Then create an online deployment to the online endpoint. The `deploy.yml` file is shown below for reference.
+
+```YAML
+$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
+name: demo
+endpoint_name: reg-ep-1234
+model: azureml://registries/<registry-name>/models/nyc-taxi-model/versions/1
+instance_type: Standard_DS2_v2
+instance_count: 1
+```
+
+Create the online deployment. The deployment takes several minutes to complete.
+
+```azurecli
+az ml online-deployment create --file deploy.yml --all-traffic
+```
+
+Fetch the scoring URI and submit a sample scoring request. Sample data for the scoring request is available in `scoring-data.json` in the `cli/jobs/pipelines-with-components/nyc_taxi_data_regression` folder.
+
+```azurecli
+ENDPOINT_KEY=$(az ml online-endpoint get-credentials -n reg-ep-1234 -o tsv --query primaryKey)
+SCORING_URI=$(az ml online-endpoint show -n reg-ep-1234 -o tsv --query scoring_uri)
+curl --request POST "$SCORING_URI" --header "Authorization: Bearer $ENDPOINT_KEY" --header 'Content-Type: application/json' --data @./scoring-data.json
+```
+
+> [!TIP]
+> * The `curl` command works only on Linux shells.
+> * If you have not configured the default workspace and resource group as explained in the prerequisites section, you will need to specify the `--workspace-name` and `--resource-group` parameters for the `az ml online-endpoint` and `az ml online-deployment` commands to work.
+
+# [Python SDK](#tab/python)
+
+Create an online endpoint.
+
+```python
+online_endpoint_name = "endpoint-" + datetime.datetime.now().strftime("%m%d%H%M%f")
+endpoint = ManagedOnlineEndpoint(
+    name=online_endpoint_name,
+    description="this is a sample online endpoint for mlflow model",
+    auth_mode="key"
+)
+ml_client_workspace.begin_create_or_update(endpoint)
+```
+
+Make sure you have the `mlflow_model_from_registry` model object from the previous section, or fetch the model from the registry using the `ml_client_registry.models.get()` method. Pass it to the deployment configuration object and create the online deployment. The deployment takes several minutes to complete. Then set all traffic to be routed to the new deployment.
+
+```python
+demo_deployment = ManagedOnlineDeployment(
+ name="demo",
+ endpoint_name=online_endpoint_name,
+ model=mlflow_model_from_registry,
+ instance_type="Standard_F4s_v2",
+ instance_count=1
+)
+ml_client_workspace.online_deployments.begin_create_or_update(demo_deployment)
+
+endpoint.traffic = {"demo": 100}
+ml_client_workspace.begin_create_or_update(endpoint)
+```
+
+Submit a sample scoring request using the sample data file `scoring-data.json`. This file is available in the `cli/jobs/pipelines-with-components/nyc_taxi_data_regression` folder.
+
+```python
+# test the deployment with some sample data
+ml_client_workspace.online_endpoints.invoke(
+    endpoint_name=online_endpoint_name,
+    deployment_name="demo",
+    request_file=parent_dir + "/scoring-data.json"
+)
+```
+++
+## Clean up resources
+
+If you aren't going to use the deployment, you should delete it to reduce costs. The following example deletes the endpoint and all the underlying deployments:
+
+# [Azure CLI](#tab/cli)
+
+```azurecli
+az ml online-endpoint delete --name reg-ep-1234 --yes --no-wait
+```
+
+# [Python SDK](#tab/python)
+
+```python
+ml_client_workspace.online_endpoints.begin_delete(name=online_endpoint_name)
+```
+++
+## Next steps
+
+* [How to create and manage registries](how-to-manage-registries.md)
+* [How to manage environments](how-to-manage-environments-v2.md)
+* [How to train models](how-to-train-cli.md)
+* [How to create pipelines using components](how-to-create-component-pipeline-python.md)
machine-learning How To Track Experiments Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md
Last updated 06/08/2022 -+ # Query & compare experiments and runs with MLflow
Experiments and runs in Azure Machine Learning can be queried using MLflow. This removes the need of any Azure Machine Learning specific SDKs to manage anything that happens inside of a training job, allowing dependencies removal and creating a more seamless transition between local runs and cloud. > [!NOTE]
-> The Azure Machine Learning Python SDK v2 (preview) does not provide native logging or tracking capabilities. This applies not just for logging but also for querying the metrics logged. Instead, we recommend to use MLflow to manage experiments and runs. This article explains how to use MLflow to manage experiments and runs in Azure ML.
+> The Azure Machine Learning Python SDK v2 does not provide native logging or tracking capabilities. This applies not just for logging but also for querying the metrics logged. Instead, we recommend to use MLflow to manage experiments and runs. This article explains how to use MLflow to manage experiments and runs in Azure ML.
MLflow allows you to:
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-keras.md
Title: Train deep learning Keras models
+ Title: Train deep learning Keras models (SDK v2)
-description: Learn how to train and register a Keras deep neural network classification model running on TensorFlow using Azure Machine Learning.
+description: Learn how to train and register a Keras deep neural network classification model running on TensorFlow using Azure Machine Learning SDK (v2).
--- Previously updated : 09/28/2020+++ Last updated : 10/05/2022 -+ #Customer intent: As a Python Keras developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale. # Train Keras models at scale with Azure Machine Learning
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
+> * [v1](v1/how-to-train-keras.md)
+> * [v2 (preview)](how-to-train-keras.md)
-In this article, learn how to run your Keras training scripts with Azure Machine Learning.
+In this article, learn how to run your Keras training scripts using the Azure Machine Learning (AzureML) Python SDK v2.
-The example code in this article shows you how to train and register a Keras classification model built using the TensorFlow backend with Azure Machine Learning. It uses the popular [MNIST dataset](http://yann.lecun.com/exdb/mnist/) to classify handwritten digits using a deep neural network (DNN) built using the [Keras Python library](https://keras.io) running on top of [TensorFlow](https://www.tensorflow.org/overview).
+The example code in this article uses AzureML to train, register, and deploy a Keras model built using the TensorFlow backend. The model, a deep neural network (DNN) built with the [Keras Python library](https://keras.io) running on top of [TensorFlow](https://www.tensorflow.org/overview), classifies handwritten digits from the popular [MNIST dataset](http://yann.lecun.com/exdb/mnist/).
-Keras is a high-level neural network API capable of running top of other popular DNN frameworks to simplify development. With Azure Machine Learning, you can rapidly scale out training jobs using elastic cloud compute resources. You can also track your training runs, version models, deploy models, and much more.
+Keras is a high-level neural network API capable of running on top of other popular DNN frameworks to simplify development. With AzureML, you can rapidly scale out training jobs using elastic cloud compute resources. You can also track your training runs, version models, deploy models, and much more.
-Whether you're developing a Keras model from the ground-up or you're bringing an existing model into the cloud, Azure Machine Learning can help you build production-ready models.
+Whether you're developing a Keras model from the ground-up or you're bringing an existing model into the cloud, AzureML can help you build production-ready models.
> [!NOTE] > If you are using the Keras API **tf.keras** built into TensorFlow and not the standalone Keras package, refer instead to [Train TensorFlow models](how-to-train-tensorflow.md). ## Prerequisites
-Run this code on either of these environments:
+To benefit from this article, you'll need to:
-- Azure Machine Learning compute instance - no downloads or installation necessary
+- Access an Azure subscription. If you don't have one already, [create a free account](https://azure.microsoft.com/free/).
+- Run the code in this article using either an Azure Machine Learning compute instance or your own Jupyter notebook.
+ - Azure Machine Learning compute instance - no downloads or installation necessary
+ - Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
+ - In the samples deep learning folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **v2 > sdk > python > jobs > single-step > tensorflow > train-hyperparameter-tune-deploy-with-keras**.
+ - Your Jupyter notebook server
+ - [Install the Azure Machine Learning SDK (v2)](https://aka.ms/sdk-v2-install).
+- Download the training scripts [keras_mnist.py](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/src/keras_mnist.py) and [utils.py](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/src/utils.py).
- - Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
- - In the samples folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **how-to-use-azureml > ml-frameworks > keras > train-hyperparameter-tune-deploy-with-keras** folder.
+You can also find a completed [Jupyter Notebook version](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb) of this guide on the GitHub samples page.
- - [Install the Azure Machine Learning SDK](/python/api/overview/azure/ml/install) (>= 1.15.0).
- - [Create a workspace configuration file](v1/how-to-configure-environment-v1.md).
- - [Download the sample script files](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras) `keras_mnist.py` and `utils.py`
+## Set up the job
- You can also find a completed [Jupyter Notebook version](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb) of this guide on the GitHub samples page. The notebook includes expanded sections covering intelligent hyperparameter tuning, model deployment, and notebook widgets.
+This section sets up the job for training by loading the required Python packages, connecting to a workspace, creating a compute resource to run a command job, and creating an environment to run the job.
+### Connect to the workspace
+
+First, you'll need to connect to your AzureML workspace. The [AzureML workspace](concept-workspace.md) is the top-level resource for the service. It provides you with a centralized place to work with all the artifacts you create when you use Azure Machine Learning.
-## Set up the experiment
+We're using `DefaultAzureCredential` to get access to the workspace. This credential should be capable of handling most Azure SDK authentication scenarios.
-This section sets up the training experiment by loading the required Python packages, initializing a workspace, creating the FileDataset for the input training data, creating the compute target, and defining the training environment.
+If `DefaultAzureCredential` doesn't work for you, see [`azure-identity reference documentation`](/python/api/azure-identity/azure.identity) or [`Set up authentication`](how-to-setup-authentication.md?tabs=sdk) for more available credentials.
-### Import packages
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=credential)]
-First, import the necessary Python libraries.
+If you prefer to use a browser to sign in and authenticate, you should uncomment the following code and use it instead.
+
+```python
+# Handle to the workspace
+# from azure.ai.ml import MLClient
-```Python
-import os
-import azureml
-from azureml.core import Experiment
-from azureml.core import Environment
-from azureml.core import Workspace, Run
-from azureml.core.compute import ComputeTarget, AmlCompute
-from azureml.core.compute_target import ComputeTargetException
+# Authentication package
+# from azure.identity import InteractiveBrowserCredential
+# credential = InteractiveBrowserCredential()
```
-### Initialize a workspace
+Next, get a handle to the workspace by providing your Subscription ID, Resource Group name, and workspace name. To find these parameters:
-The [Azure Machine Learning workspace](concept-workspace.md) is the top-level resource for the service. It provides you with a centralized place to work with all the artifacts you create. In the Python SDK, you can access the workspace artifacts by creating a [`workspace`](/python/api/azureml-core/azureml.core.workspace.workspace) object.
+1. Look for your workspace name in the upper-right corner of the Azure Machine Learning studio toolbar.
+2. Select your workspace name to show your Resource Group and Subscription ID.
+3. Copy the values for Resource Group and Subscription ID into the code.
-Create a workspace object from the `config.json` file created in the [prerequisites section](#prerequisites).
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=ml_client)]
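+
+The referenced notebook cell isn't rendered here; as a rough sketch, and assuming placeholder values for the subscription, resource group, and workspace, the handle is created along these lines:
+
+```python
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+# Placeholders: use the values you copied from the studio toolbar.
+ml_client = MLClient(
+    credential=DefaultAzureCredential(),
+    subscription_id="<subscription-id>",
+    resource_group_name="<resource-group>",
+    workspace_name="<workspace-name>",
+)
+```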
-```Python
-ws = Workspace.from_config()
-```
+The result of running this script is a workspace handle that you'll use to manage other resources and jobs.
-### Create a file dataset
+> [!NOTE]
+> - Creating `MLClient` will not connect the client to the workspace. The client initialization is lazy and will wait for the first time it needs to make a call. In this article, this will happen during compute creation.
-A `FileDataset` object references one or multiple files in your workspace datastore or public urls. The files can be of any format, and the class provides you with the ability to download or mount the files to your compute. By creating a `FileDataset`, you create a reference to the data source location. If you applied any transformations to the data set, they will be stored in the data set as well. The data remains in its existing location, so no extra storage cost is incurred. See the [how-to](./v1/how-to-create-register-datasets.md) guide on the `Dataset` package for more information.
+### Create a compute resource to run the job
-```python
-from azureml.core.dataset import Dataset
-
-web_paths = [
- 'http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz',
- 'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz',
- 'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz',
- 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'
- ]
-dataset = Dataset.File.from_files(path=web_paths)
-```
+AzureML needs a compute resource to run a job. This resource can be single or multi-node machines with Linux or Windows OS, or a specific compute fabric like Spark.
-You can use the `register()` method to register the data set to your workspace so they can be shared with others, reused across various experiments, and referred to by name in your training script.
+In the following example script, we provision a Linux [`compute cluster`](/azure/machine-learning/how-to-create-attach-compute-cluster?tabs=python). You can see the [`Azure Machine Learning pricing`](https://azure.microsoft.com/pricing/details/machine-learning/) page for the full list of VM sizes and prices. Since we need a GPU cluster for this example, let's pick a *STANDARD_NC6* model and create an AzureML compute.
-```python
-dataset = dataset.register(workspace=ws,
- name='mnist-dataset',
- description='training and test dataset',
- create_new_version=True)
-```
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=cpu_compute_target)]
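+
+If you can't open the referenced notebook cell, the following is a minimal sketch of provisioning such a cluster, assuming the `gpu-cluster` name used later in this article:
+
+```python
+from azure.ai.ml.entities import AmlCompute
+
+gpu_compute_target = "gpu-cluster"
+# Provision (or reuse) a GPU cluster that scales between 0 and 4 nodes.
+gpu_cluster = AmlCompute(
+    name=gpu_compute_target,
+    type="amlcompute",
+    size="STANDARD_NC6",
+    min_instances=0,
+    max_instances=4,
+    idle_time_before_scale_down=180,
+)
+ml_client.begin_create_or_update(gpu_cluster)
+```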
-### Create a compute target
+### Create a job environment
-Create a compute target for your training job to run on. In this example, create a GPU-enabled Azure Machine Learning compute cluster.
+To run an AzureML job, you'll need an environment. An AzureML [environment](concept-environments.md) encapsulates the dependencies (such as software runtime and libraries) needed to run your machine learning training script on your compute resource. This environment is similar to a Python environment on your local machine.
+AzureML allows you to either use a curated (or ready-made) environment or create a custom environment using a Docker image or a Conda configuration. In this article, you'll create a custom Conda environment for your jobs, using a Conda YAML file.
-```Python
-cluster_name = "gpu-cluster"
+#### Create a custom environment
-try:
- compute_target = ComputeTarget(workspace=ws, name=cluster_name)
- print('Found existing compute target')
-except ComputeTargetException:
- print('Creating a new compute target...')
- compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
- max_nodes=4)
+To create your custom environment, you'll define your Conda dependencies in a YAML file. First, create a directory for storing the file. In this example, we've named the directory `dependencies`.
- compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=dependencies_folder)]
+Then, create the file in the dependencies directory. In this example, we've named the file `conda.yml`.
- compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
-```
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=conda_file)]
+The specification contains some usual packages (such as numpy and pip) that you'll use in your job.
-For more information on compute targets, see the [what is a compute target](concept-compute-target.md) article.
+Next, use the YAML file to create and register this custom environment in your workspace. The environment will be packaged into a Docker container at runtime.
-### Define your environment
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=custom_environment)]
-Define the Azure ML [Environment](concept-environments.md) that encapsulates your training script's dependencies.
+For more information on creating and using environments, see [Create and use software environments in Azure Machine Learning](how-to-use-environments.md).
-First, define your conda dependencies in a YAML file; in this example the file is named `conda_dependencies.yml`.
+## Configure and submit your training job
-```yaml
-channels:
-- conda-forge
-dependencies:
-- python=3.6.2-- pip:
- - azureml-defaults
- - tensorflow-gpu==2.0.0
- - keras<=2.3.1
- - matplotlib
-```
+In this section, we'll begin by introducing the data for training. We'll then cover how to run a training job, using a training script that we've provided. You'll learn to build the training job by configuring the command for running the training script. Then, you'll submit the training job to run in AzureML.
-Create an Azure ML environment from this conda environment specification. The environment will be packaged into a Docker container at runtime.
+### Obtain the training data
+You'll use data from the Modified National Institute of Standards and Technology (MNIST) database of handwritten digits. This data is sourced from Yann LeCun's website and stored in an Azure storage account.
-By default if no base image is specified, Azure ML will use a CPU image `azureml.core.environment.DEFAULT_CPU_IMAGE` as the base image. Since this example runs training on a GPU cluster, you will need to specify a GPU base image that has the necessary GPU drivers and dependencies. Azure ML maintains a set of base images published on Microsoft Container Registry (MCR) that you can use, see the [Azure/AzureML-Containers](https://github.com/Azure/AzureML-Containers) GitHub repo for more information.
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=data_url)]
-```python
-keras_env = Environment.from_conda_specification(name='keras-env', file_path='conda_dependencies.yml')
+For more information about the MNIST dataset, visit [Yann LeCun's website](http://yann.lecun.com/exdb/mnist/).
-# Specify a GPU base image
-keras_env.docker.enabled = True
-keras_env.docker.base_image = 'mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.0-cudnn7-ubuntu18.04'
-```
+### Prepare the training script
-For more information on creating and using environments, see [Create and use software environments in Azure Machine Learning](how-to-use-environments.md).
+In this article, we've provided the training script *keras_mnist.py*. In practice, you should be able to take any custom training script as is and run it with AzureML without having to modify your code.
-## Configure and submit your training run
+The provided training script does the following:
+ - handles the data preprocessing, splitting the data into test and train data;
+ - trains a model, using the data; and
+ - returns the output model.
-### Create a ScriptRunConfig
-First get the data from the workspace datastore using the `Dataset` class.
+During the pipeline run, you'll use MLFlow to log the parameters and metrics. To learn how to enable MLFlow tracking, see [Track ML experiments and models with MLflow](how-to-use-mlflow-cli-runs.md).
-```python
-dataset = Dataset.get_by_name(ws, 'mnist-dataset')
+In the training script `keras_mnist.py`, we create a simple deep neural network (DNN). This DNN has:
-# list the files referenced by mnist-dataset
-dataset.to_path()
-```
+- An input layer with 28 * 28 = 784 neurons. Each neuron represents an image pixel.
+- Two hidden layers. The first hidden layer has 300 neurons and the second hidden layer has 100 neurons.
+- An output layer with 10 neurons. Each neuron represents a targeted label from 0 to 9.
-Create a ScriptRunConfig object to specify the configuration details of your training job, including your training script, environment to use, and the compute target to run on.
-Any arguments to your training script will be passed via command line if specified in the `arguments` parameter. The DatasetConsumptionConfig for our FileDataset is passed as an argument to the training script, for the `--data-folder` argument. Azure ML will resolve this DatasetConsumptionConfig to the mount-point of the backing datastore, which can then be accessed from the training script.
+### Build the training job
-```python
-from azureml.core import ScriptRunConfig
-
-args = ['--data-folder', dataset.as_mount(),
- '--batch-size', 50,
- '--first-layer-neurons', 300,
- '--second-layer-neurons', 100,
- '--learning-rate', 0.001]
-
-src = ScriptRunConfig(source_directory=script_folder,
- script='keras_mnist.py',
- arguments=args,
- compute_target=compute_target,
- environment=keras_env)
-```
+Now that you have all the assets required to run your job, it's time to build it using the AzureML Python SDK v2. For this example, we'll be creating a `command`.
+
+An AzureML `command` is a resource that specifies all the details needed to execute your training code in the cloud. These details include the inputs and outputs, type of hardware to use, software to install, and how to run your code. The `command` contains information to execute a single command.
++
+#### Configure the command
+
+You'll use the general purpose `command` to run the training script and perform your desired tasks. Create a `Command` object to specify the configuration details of your training job.
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=job)]
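+
+The referenced cell isn't shown here; the following is a rough sketch of such a command, assuming the hyperparameter names used later in this article and a placeholder `web_path` for the MNIST data (the identity configuration described below is omitted):
+
+```python
+from azure.ai.ml import command, Input
+
+web_path = "<web-path-to-mnist-data>"  # placeholder; use the data location from the previous section
+
+job = command(
+    code="./src",  # folder containing keras_mnist.py and utils.py
+    command="python keras_mnist.py --data-folder ${{inputs.data_folder}} "
+    "--batch-size ${{inputs.batch_size}} --first-layer-neurons ${{inputs.first_layer_neurons}} "
+    "--second-layer-neurons ${{inputs.second_layer_neurons}} --learning-rate ${{inputs.learning_rate}}",
+    inputs=dict(
+        data_folder=Input(type="uri_folder", path=web_path),
+        batch_size=50,
+        first_layer_neurons=300,
+        second_layer_neurons=100,
+        learning_rate=0.001,
+    ),
+    compute="gpu-cluster",
+    environment="keras-env@latest",
+    display_name="keras-dnn-image-classify",
+    experiment_name="keras-dnn-image-classify",
+)
+```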
+
+- The inputs for this command include the data location, batch size, number of neurons in the first and second layer, and learning rate. Notice that we've passed in the web path directly as an input.
+
+- For the parameter values:
+ - provide the compute cluster `gpu_compute_target = "gpu-cluster"` that you created for running this command;
+ - provide the custom environment `keras-env` that you created for running the AzureML job;
+ - configure the command line action itself; in this case, the command is `python keras_mnist.py`. You can access the inputs and outputs in the command via the `${{ ... }}` notation; and
+ - configure metadata such as the display name and experiment name; where an experiment is a container for all the iterations one does on a certain project. All the jobs submitted under the same experiment name would be listed next to each other in AzureML studio.
+
+- In this example, you'll use the `UserIdentity` to run the command. Using a user identity means that the command will use your identity to run the job and access the data from the blob.
+
+### Submit the job
-For more information on configuring jobs with ScriptRunConfig, see [Configure and submit training runs](v1/how-to-set-up-training-targets.md).
+It's now time to submit the job to run in AzureML. This time, you'll use `create_or_update` on `ml_client.jobs`.
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=create_job)]
+
+Once completed, the job will register a model in your workspace (as a result of training) and output a link for viewing the job in AzureML studio.
> [!WARNING]
-> If you were previously using the TensorFlow estimator to configure your Keras training jobs, please note that Estimators have been deprecated as of the 1.19.0 SDK release. With Azure ML SDK >= 1.15.0, ScriptRunConfig is the recommended way to configure training jobs, including those using deep learning frameworks. For common migration questions, see the [Estimator to ScriptRunConfig migration guide](how-to-migrate-from-estimators-to-scriptrunconfig.md).
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory.
-### Submit your run
+### What happens during job execution
+As the job is executed, it goes through the following stages:
-The [Run object](/python/api/azureml-core/azureml.core.run%28class%29) provides the interface to the run history while the job is running and after it has completed.
+- **Preparing**: A docker image is created according to the environment defined. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the job history and can be viewed to monitor progress. If a curated environment is specified, the cached image backing that curated environment will be used.
-```Python
-run = Experiment(workspace=ws, name='Tutorial-Keras-Minst').submit(src)
-run.wait_for_completion(show_output=True)
-```
+- **Scaling**: The cluster attempts to scale up if it requires more nodes to execute the run than are currently available.
-### What happens during run execution
-As the run is executed, it goes through the following stages:
+- **Running**: All scripts in the script folder *src* are uploaded to the compute target, data stores are mounted or copied, and the script is executed. Outputs from *stdout* and the *./logs* folder are streamed to the job history and can be used to monitor the job.
-- **Preparing**: A docker image is created according to the environment defined. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the run history and can be viewed to monitor progress. If a curated environment is specified instead, the cached image backing that curated environment will be used.
+## Tune model hyperparameters
-- **Scaling**: The cluster attempts to scale up if the Batch AI cluster requires more nodes to execute the run than are currently available.
+You've trained the model with one set of parameters; let's now see if you can further improve the accuracy of your model. You can tune and optimize your model's hyperparameters using Azure Machine Learning's [`sweep`](/python/api/azure-ai-ml/azure.ai.ml.sweep) capabilities.
-- **Running**: All scripts in the script folder are uploaded to the compute target, data stores are mounted or copied, and the `script` is executed. Outputs from stdout and the **./logs** folder are streamed to the run history and can be used to monitor the run.
+To tune the model's hyperparameters, define the parameter space in which to search during training. You'll do this by replacing some of the parameters (`batch_size`, `first_layer_neurons`, `second_layer_neurons`, and `learning_rate`) passed to the training job with special inputs from the `azure.ml.sweep` package.
-- **Post-Processing**: The **./outputs** folder of the run is copied over to the run history.
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=job_for_sweep)]
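+
+As a sketch, and assuming the search ranges are illustrative rather than the notebook's exact values, the replacement looks like this:
+
+```python
+from azure.ai.ml.sweep import Choice, LogUniform
+
+# Calling the command job with new inputs returns a copy whose fixed values
+# are replaced by search spaces (the ranges below are illustrative).
+job_for_sweep = job(
+    batch_size=Choice(values=[25, 50, 100]),
+    first_layer_neurons=Choice(values=[10, 50, 200, 300, 500]),
+    second_layer_neurons=Choice(values=[10, 50, 200, 500]),
+    learning_rate=LogUniform(min_value=-6, max_value=-1),
+)
+```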
-## Register the model
+Then, you'll configure sweep on the command job, using some sweep-specific parameters, such as the primary metric to watch and the sampling algorithm to use.
-Once you've trained the model, you can register it to your workspace. Model registration lets you store and version your models in your workspace to simplify [model management and deployment](concept-model-management-and-deployment.md).
+In the following code, we use random sampling to try different configuration sets of hyperparameters in an attempt to maximize our primary metric, `validation_acc`.
-```Python
-model = run.register_model(model_name='keras-mnist', model_path='outputs/model')
-```
+We also define an early termination policy, the `BanditPolicy`. This policy operates by checking the job every two iterations. If the primary metric, `validation_acc`, falls outside the top ten percent range, AzureML will terminate the job. This saves the model from continuing to explore hyperparameters that show no promise of helping to reach the target metric.
-> [!TIP]
-> The deployment how-to
-contains a section on registering models, but you can skip directly to [creating a compute target](./v1/how-to-deploy-and-where.md#choose-a-compute-target) for deployment, since you already have a registered model.
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=sweep_job)]
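+
+A rough sketch of that sweep configuration, with illustrative trial limits, looks like the following:
+
+```python
+from azure.ai.ml.sweep import BanditPolicy
+
+sweep_job = job_for_sweep.sweep(
+    compute="gpu-cluster",
+    sampling_algorithm="random",
+    primary_metric="validation_acc",
+    goal="Maximize",
+    max_total_trials=20,        # illustrative limit
+    max_concurrent_trials=4,    # illustrative limit
+    early_termination_policy=BanditPolicy(slack_factor=0.1, evaluation_interval=2),
+)
+```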
-You can also download a local copy of the model. This can be useful for doing additional model validation work locally. In the training script, `keras_mnist.py`, a TensorFlow saver object persists the model to a local folder (local to the compute target). You can use the Run object to download a copy from the run history.
+Now, you can submit this job as before. This time, you'll be running a sweep job that sweeps over your train job.
-```Python
-# Create a model folder in the current directory
-os.makedirs('./model', exist_ok=True)
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=create_sweep_job)]
+
+You can monitor the job by using the studio user interface link that is presented during the job run.
+
+## Find and register the best model
+
+Once all the runs complete, you can find the run that produced the model with the highest accuracy.
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=model)]
+
+You can then register this model.
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=register_model)]
++
+## Deploy the model as an online endpoint
+
+After you've registered your model, you can deploy it as an [online endpoint](concept-endpoints.md), that is, as a web service in the Azure cloud.
+
+To deploy a machine learning service, you'll typically need:
+- The model assets that you want to deploy. These assets include the model's file and metadata that you already registered in your training job.
+- Some code to run as a service. The code executes the model on a given input request (an entry script). This entry script receives data submitted to a deployed web service and passes it to the model. After the model processes the data, the script returns the model's response to the client. The script is specific to your model and must understand the data that the model expects and returns. When you use an MLFlow model, AzureML automatically creates this script for you.
+
+For more information about deployment, see [Deploy and score a machine learning model with managed online endpoint using Python SDK v2](how-to-deploy-managed-online-endpoint-sdk-v2.md).
+
+### Create a new online endpoint
+
+As a first step to deploying your model, you need to create your online endpoint. The endpoint name must be unique in the entire Azure region. For this article, you'll create a unique name using a universally unique identifier (UUID).
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=online_endpoint_name)]
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=endpoint)]
+
+Once you've created the endpoint, you can retrieve it as follows:
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=get_endpoint)]
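+
+As a minimal sketch, the retrieval amounts to a `get` call on the endpoint name:
+
+```python
+# Fetch the endpoint by name and inspect its state and scoring URI.
+endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)
+print(endpoint.provisioning_state, endpoint.scoring_uri)
+```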
+
+### Deploy the model to the endpoint
+
+After you've created the endpoint, you can deploy the model with the entry script. An endpoint can have multiple deployments. Using rules, the endpoint can then direct traffic to these deployments.
+
+In the following code, you'll create a single deployment that handles 100% of the incoming traffic. We've specified an arbitrary color name (*tff-blue*) for the deployment. You could also use any other name such as *tff-green* or *tff-red* for the deployment.
+The code to deploy the model to the endpoint does the following:
+
+- deploys the best version of the model that you registered earlier;
+- scores the model, using the `score.py` file; and
+- uses the custom environment (that you created earlier) to perform inferencing.
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=blue_deployment)]
+
+> [!NOTE]
+> Expect this deployment to take a bit of time to finish.
+
+### Test the deployed model
+
+Now that you've deployed the model to the endpoint, you can predict the output of the deployed model, using the `invoke` method on the endpoint.
+
+To test the endpoint, you need some test data. Let's locally download the test data that we used in our training script.
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=download_test_data)]
+
+Load these into a test dataset.
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=load_test_data)]
+
+Pick 30 random samples from the test set and write them to a JSON file.
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=generate_test_json)]
+
+You can then invoke the endpoint, print the returned predictions, and plot them along with the input images. Use red font color and inverted image (white on black) to highlight the misclassified samples.
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=invoke_and_test)]
++
+> [!NOTE]
+> Because the model accuracy is high, you might have to run the cell a few times before seeing a misclassified sample.
+
+### Clean up resources
+
+If you won't be using the endpoint, delete it to stop using the resource. Make sure no other deployments are using the endpoint before you delete it.
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb?name=delete_endpoint)]
+
+> [!NOTE]
+> Expect this cleanup to take a bit of time to finish.
-for f in run.get_file_names():
- if f.startswith('outputs/model'):
- output_file_path = os.path.join('./model', f.split('/')[-1])
- print('Downloading from {} to {} ...'.format(f, output_file_path))
- run.download_file(name=f, output_file_path=output_file_path)
-```
## Next steps
-In this article, you trained and registered a Keras model on Azure Machine Learning. To learn how to deploy a model, continue on to our model deployment article.
+In this article, you trained and registered a Keras model. You also deployed the model to an online endpoint. See these other articles to learn more about Azure Machine Learning.
-* [How and where to deploy models](./v1/how-to-deploy-and-where.md)
-* [Track run metrics during training](how-to-log-view-metrics.md)
-* [Tune hyperparameters](how-to-tune-hyperparameters.md)
-* [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
+- [Track run metrics during training](how-to-log-view-metrics.md)
+- [Tune hyperparameters](how-to-tune-hyperparameters.md)
+- [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
machine-learning How To Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-model.md
Last updated 08/25/2022 -+ # Train models with Azure Machine Learning CLI, SDK, and REST API
Azure Machine Learning provides multiple ways to submit ML training jobs. In thi
* Python SDK v2 for Azure Machine Learning. * REST API: The API that the CLI and SDK are built on.
-> [!IMPORTANT]
-> SDK v2 is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Prerequisites * An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
For more information on the Azure CLI commands, Python SDK classes, or REST APIs
* [Azure CLI `ml` extension](/cli/azure/ml) * [Python SDK](/python/api/azure-ai-ml/azure.ai.ml)
-* [REST API](/rest/api/azureml/)
+* [REST API](/rest/api/azureml/)
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
Title: Train deep learning PyTorch models
+ Title: Train deep learning PyTorch models (SDK v2)
-description: Learn how to run your PyTorch training scripts at enterprise scale using Azure Machine Learning.
+description: Learn how to run your PyTorch training scripts at enterprise scale using Azure Machine Learning SDK (v2).
-- Previously updated : 02/28/2022+++ Last updated : 10/05/2022 -+ #Customer intent: As a Python PyTorch developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale. # Train PyTorch models at scale with Azure Machine Learning
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
+> * [v1](v1/how-to-train-pytorch.md)
+> * [v2 (preview)](how-to-train-pytorch.md)
-In this article, learn how to run your [PyTorch](https://pytorch.org/) training scripts at enterprise scale using Azure Machine Learning.
+In this article, you'll learn to train, hyperparameter tune, and deploy a [PyTorch](https://pytorch.org/) model using the Azure Machine Learning (AzureML) Python SDK v2.
-The example scripts in this article are used to classify chicken and turkey images to build a deep learning neural network (DNN) based on [PyTorch's transfer learning tutorial](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html). Transfer learning is a technique that applies knowledge gained from solving one problem to a different but related problem. Transfer learning shortens the training process by requiring less data, time, and compute resources than training from scratch. To learn more about transfer learning, see the [deep learning vs machine learning](./concept-deep-learning-vs-machine-learning.md#what-is-transfer-learning) article.
+You'll use the example scripts in this article to classify chicken and turkey images to build a deep learning neural network (DNN) based on [PyTorch's transfer learning tutorial](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html). Transfer learning is a technique that applies knowledge gained from solving one problem to a different but related problem. Transfer learning shortens the training process by requiring less data, time, and compute resources than training from scratch. To learn more about transfer learning, see the [deep learning vs machine learning](./concept-deep-learning-vs-machine-learning.md#what-is-transfer-learning) article.
-Whether you're training a deep learning PyTorch model from the ground-up or you're bringing an existing model into the cloud, you can use Azure Machine Learning to scale out open-source training jobs using elastic cloud compute resources. You can build, deploy, version, and monitor production-grade models with Azure Machine Learning.
+Whether you're training a deep learning PyTorch model from the ground-up or you're bringing an existing model into the cloud, you can use AzureML to scale out open-source training jobs using elastic cloud compute resources. You can build, deploy, version, and monitor production-grade models with AzureML.
## Prerequisites
-Run this code on either of these environments:
+To benefit from this article, you'll need to:
-- Azure Machine Learning compute instance - no downloads or installation necessary
+- Access an Azure subscription. If you don't have one already, [create a free account](https://azure.microsoft.com/free/).
+- Run the code in this article using either an Azure Machine Learning compute instance or your own Jupyter notebook.
+ - Azure Machine Learning compute instance - no downloads or installation necessary
+ - Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
+ - In the samples deep learning folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **v2 > sdk > python > jobs > single-step > pytorch > train-hyperparameter-tune-deploy-with-pytorch**.
+ - Your Jupyter notebook server
+ - [Install the Azure Machine Learning SDK (v2)](https://aka.ms/sdk-v2-install).
+ - Download the training script file [pytorch_train.py](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/src/pytorch_train.py).
- - Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
- - In the samples deep learning folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **how-to-use-azureml > ml-frameworks > pytorch > train-hyperparameter-tune-deploy-with-pytorch** folder.
-
- - [Install the Azure Machine Learning SDK](/python/api/overview/azure/ml/install) (>= 1.15.0).
- - [Create a workspace configuration file](v1/how-to-configure-environment-v1.md).
- - [Download the sample script files](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch) `pytorch_train.py`
-
- You can also find a completed [Jupyter Notebook version](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) of this guide on the GitHub samples page. The notebook includes expanded sections covering intelligent hyperparameter tuning, model deployment, and notebook widgets.
+You can also find a completed [Jupyter Notebook version](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) of this guide on the GitHub samples page.
[!INCLUDE [gpu quota](../../includes/machine-learning-gpu-quota-prereq.md)]
-## Set up the experiment
+## Set up the job
+
+This section sets up the job for training by loading the required Python packages, connecting to a workspace, creating a compute resource to run a command job, and creating an environment to run the job.
+
+### Connect to the workspace
-This section sets up the training experiment by loading the required Python packages, initializing a workspace, creating the compute target, and defining the training environment.
+First, you'll need to connect to your AzureML workspace. The [AzureML workspace](concept-workspace.md) is the top-level resource for the service. It provides you with a centralized place to work with all the artifacts you create when you use Azure Machine Learning.
-### Import packages
+We're using `DefaultAzureCredential` to get access to the workspace. This credential should be capable of handling most Azure SDK authentication scenarios.
-First, import the necessary Python libraries.
+If `DefaultAzureCredential` doesn't work for you, see [`azure-identity reference documentation`](/python/api/azure-identity/azure.identity) or [`Set up authentication`](how-to-setup-authentication.md?tabs=sdk) for more available credentials.
-```Python
-import os
-import shutil
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=credential)]
-from azureml.core.workspace import Workspace
-from azureml.core import Experiment
-from azureml.core import Environment
+If you prefer to use a browser to sign in and authenticate, you should uncomment the following code and use it instead.
+
+```python
+# Handle to the workspace
+# from azure.ai.ml import MLClient
-from azureml.core.compute import ComputeTarget, AmlCompute
-from azureml.core.compute_target import ComputeTargetException
+# Authentication package
+# from azure.identity import InteractiveBrowserCredential
+# credential = InteractiveBrowserCredential()
```
-### Initialize a workspace
+Next, get a handle to the workspace by providing your Subscription ID, Resource Group name, and workspace name. To find these parameters:
-The [Azure Machine Learning workspace](concept-workspace.md) is the top-level resource for the service. It provides you with a centralized place to work with all the artifacts you create. In the Python SDK, you can access the workspace artifacts by creating a [`workspace`](/python/api/azureml-core/azureml.core.workspace.workspace) object.
+1. Look for your workspace name in the upper-right corner of the Azure Machine Learning studio toolbar.
+2. Select your workspace name to show your Resource Group and Subscription ID.
+3. Copy the values for Resource Group and Subscription ID into the code.
-Create a workspace object from the `config.json` file created in the [prerequisites section](#prerequisites).
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=ml_client)]
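If you're not running the sample notebook, getting the workspace handle is roughly equivalent to the following sketch; the placeholder IDs are assumptions that you replace with your own values.

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Authenticate with the default credential chain (CLI login, managed identity, and so on).
credential = DefaultAzureCredential()

# The placeholders below are assumptions; replace them with your own values.
ml_client = MLClient(
    credential=credential,
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)
```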
-```Python
-ws = Workspace.from_config()
-```
+The result of running this script is a workspace handle that you'll use to manage other resources and jobs.
-### Get the data
+> [!NOTE]
+> - Creating `MLClient` will not connect the client to the workspace. The client initialization is lazy and will wait for the first time it needs to make a call. In this article, this will happen during compute creation.
-The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We'll download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/https://docsupdatetracker.net/index.html). For more steps on creating a JSONL to train with your own data, see this [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb).
+### Create a compute resource to run the job
-### Prepare training script
+AzureML needs a compute resource to run a job. This resource can be single or multi-node machines with Linux or Windows OS, or a specific compute fabric like Spark.
-In this tutorial, the training script, `pytorch_train.py`, is already provided. In practice, you can take any custom training script, as is, and run it with Azure Machine Learning.
+In the following example script, we provision a Linux [`compute cluster`](/azure/machine-learning/how-to-create-attach-compute-cluster?tabs=python). See the [`Azure Machine Learning pricing`](https://azure.microsoft.com/pricing/details/machine-learning/) page for the full list of VM sizes and prices. Because this example needs a GPU cluster, let's pick a *STANDARD_NC6* size and create an AzureML compute cluster.
-Create a folder for your training script(s).
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=gpu_compute_target)]
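The compute provisioning cell broadly corresponds to the following sketch. The cluster name `gpu-cluster` comes from the article; the scale settings and idle timeout are assumptions.

```python
from azure.ai.ml.entities import AmlCompute

gpu_compute_target = "gpu-cluster"

try:
    # Reuse the cluster if it already exists in the workspace.
    gpu_cluster = ml_client.compute.get(gpu_compute_target)
    print(f"Found existing cluster {gpu_compute_target}; reusing it.")
except Exception:
    print("Creating a new GPU compute cluster...")
    gpu_cluster = AmlCompute(
        name=gpu_compute_target,
        type="amlcompute",
        size="STANDARD_NC6",
        min_instances=0,              # assumption: scale to zero when idle
        max_instances=4,              # assumption: up to four nodes
        idle_time_before_scale_down=180,
    )
    gpu_cluster = ml_client.compute.begin_create_or_update(gpu_cluster).result()
```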
-```python
-project_folder = './pytorch-birds'
-os.makedirs(project_folder, exist_ok=True)
-shutil.copy('pytorch_train.py', project_folder)
-```
+### Create a job environment
-### Create a compute target
+To run an AzureML job, you'll need an environment. An AzureML [environment](concept-environments.md) encapsulates the dependencies (such as software runtime and libraries) needed to run your machine learning training script on your compute resource. This environment is similar to a Python environment on your local machine.
-Create a compute target for your PyTorch job to run on. In this example, create a GPU-enabled Azure Machine Learning compute cluster.
+AzureML allows you to either use a curated (or ready-made) environment or create a custom environment using a Docker image or a Conda configuration. In this article, you'll reuse the curated AzureML environment `AzureML-pytorch-1.9-ubuntu18.04-py37-cuda11-gpu`. You'll use the latest version of this environment using the `@latest` directive.
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=curated_env_name)]
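For reference, referring to the curated environment usually amounts to a single name string; the `@latest` directive resolves to the newest available version when the job is submitted.

```python
# Reference the curated environment by name; @latest resolves at job submission time.
curated_env_name = "AzureML-pytorch-1.9-ubuntu18.04-py37-cuda11-gpu@latest"
```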
-```Python
+## Configure and submit your training job
-# Choose a name for your CPU cluster
-cluster_name = "gpu-cluster"
+In this section, we'll begin by introducing the data for training. We'll then cover how to run a training job, using a training script that we've provided. You'll learn to build the training job by configuring the command for running the training script. Then, you'll submit the training job to run in AzureML.
-# Verify that cluster does not exist already
-try:
- compute_target = ComputeTarget(workspace=ws, name=cluster_name)
- print('Found existing compute target')
-except ComputeTargetException:
- print('Creating a new compute target...')
- compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
- max_nodes=4)
+### Obtain the training data
+You'll use data that is stored on a public blob as a [zip file](https://azureopendatastorage.blob.core.windows.net/testpublic/temp/fowl_data.zip). This dataset consists of about 120 training images each for two classes (turkeys and chickens), with 100 validation images for each class. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/https://docsupdatetracker.net/index.html). We'll download and extract the dataset as part of our training script `pytorch_train.py`.
- # Create the cluster with the specified name and configuration
- compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
+### Prepare the training script
- # Wait for the cluster to complete, show the output log
- compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
-```
+In this article, we've provided the training script *pytorch_train.py*. In practice, you should be able to take any custom training script as is and run it with AzureML without having to modify your code.
-If you instead want to create a CPU cluster, provide a different VM size to the vm_size parameter, such as STANDARD_D2_V2.
+The provided training script downloads the data, trains a model, and registers the model.
+### Build the training job
-For more information on compute targets, see the [what is a compute target](concept-compute-target.md) article.
+Now that you have all the assets required to run your job, it's time to build it using the AzureML Python SDK v2. For this example, we'll be creating a `command`.
-### Define your environment
+An AzureML `command` is a resource that specifies all the details needed to execute your training code in the cloud. These details include the inputs and outputs, type of hardware to use, software to install, and how to run your code. The `command` contains information to execute a single command.
-To define the [Azure ML Environment](concept-environments.md) that encapsulates your training script's dependencies, you can either define a custom environment or use an Azure ML curated environment.
-#### Use a curated environment
+#### Configure the command
-Azure ML provides prebuilt, [curated environments](resource-curated-environments.md) if you don't want to define your own environment. There are several CPU and GPU curated environments for PyTorch corresponding to different versions of PyTorch.
+You'll use the general purpose `command` to run the training script and perform your desired tasks. Create a `Command` object to specify the configuration details of your training job.
-If you want to use a curated environment, you can run the following command instead:
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=job)]
-```python
-curated_env_name = 'AzureML-PyTorch-1.6-GPU'
-pytorch_env = Environment.get(workspace=ws, name=curated_env_name)
-```
+- The inputs for this command include the number of epochs, learning rate, momentum, and output directory.
+- For the parameter values:
+ - provide the compute cluster `gpu_compute_target = "gpu-cluster"` that you created for running this command;
+ - provide the curated environment `AzureML-pytorch-1.9-ubuntu18.04-py37-cuda11-gpu` that you initialized earlier;
+ - configure the command line action itself - in this case, the command is `python pytorch_train.py`. You can access the inputs and outputs in the command via the `${{ ... }}` notation; and
+ - configure metadata such as the display name and experiment name, where an experiment is a container for all the iterations you do on a certain project. All the jobs submitted under the same experiment name are listed next to each other in AzureML studio. (See the sketch after this list.)
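Outside the notebook, the configured command roughly corresponds to the following sketch. The input values, command-line flag names, source folder, and job names are assumptions drawn from the description above, not values confirmed by the article.

```python
from azure.ai.ml import command

job = command(
    inputs=dict(
        num_epochs=30,           # assumption: example values for the script's arguments
        learning_rate=0.001,
        momentum=0.9,
        output_dir="./outputs",
    ),
    compute=gpu_compute_target,   # "gpu-cluster" created earlier
    environment=curated_env_name, # curated environment referenced earlier
    code="./src",                 # assumption: folder containing pytorch_train.py
    command=(
        "python pytorch_train.py "
        "--num_epochs ${{inputs.num_epochs}} --learning_rate ${{inputs.learning_rate}} "
        "--momentum ${{inputs.momentum}} --output_dir ${{inputs.output_dir}}"
    ),
    experiment_name="pytorch-birds",              # assumption
    display_name="pytorch-birds-image-classify",  # assumption
)

# Submitting the job (covered in the next section) is a single call:
# returned_job = ml_client.jobs.create_or_update(job)
```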
-To see the packages included in the curated environment, you can write out the conda dependencies to disk:
+### Submit the job
-```python
-pytorch_env.save_to_directory(path=curated_env_name)
-```
+It's now time to submit the job to run in AzureML. This time, you'll use `create_or_update` on `ml_client.jobs`.
-Make sure the curated environment includes all the dependencies required by your training script. If not, you'll have to modify the environment to include the missing dependencies. If the environment is modified, you'll have to give it a new name, as the 'AzureML' prefix is reserved for curated environments. If you modified the conda dependencies YAML file, you can create a new environment from it with a new name, for example:
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=create_job)]
-```python
-pytorch_env = Environment.from_conda_specification(name='pytorch-1.6-gpu', file_path='./conda_dependencies.yml')
-```
+Once completed, the job will register a model in your workspace (as a result of training) and output a link for viewing the job in AzureML studio.
-If you had instead modified the curated environment object directly, you can clone that environment with a new name:
+> [!WARNING]
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory.
-```python
-pytorch_env = pytorch_env.clone(new_name='pytorch-1.6-gpu')
-```
+### What happens during job execution
+As the job is executed, it goes through the following stages:
-#### Create a custom environment
+- **Preparing**: A docker image is created according to the environment defined. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the job history and can be viewed to monitor progress. If a curated environment is specified, the cached image backing that curated environment will be used.
-You can also create your own Azure ML environment that encapsulates your training script's dependencies.
+- **Scaling**: The cluster attempts to scale up if it requires more nodes to execute the run than are currently available.
-First, define your conda dependencies in a YAML file; in this example the file is named `conda_dependencies.yml`.
+- **Running**: All scripts in the script folder *src* are uploaded to the compute target, data stores are mounted or copied, and the script is executed. Outputs from *stdout* and the *./logs* folder are streamed to the job history and can be used to monitor the job.
-```yaml
-channels:
-- conda-forge
-dependencies:
-- python=3.6.2-- pip=21.3.1-- pip:
- - azureml-defaults
- - torch==1.6.0
- - torchvision==0.7.0
- - future==0.17.1
- - pillow
-```
+## Tune model hyperparameters
-Create an Azure ML environment from this conda environment specification. The environment will be packaged into a Docker container at runtime.
+You've trained the model with one set of parameters. Now, let's see whether you can further improve the accuracy of your model. You can tune and optimize your model's hyperparameters by using Azure Machine Learning's [`sweep`](/python/api/azure-ai-ml/azure.ai.ml.sweep) capabilities.
-By default if no base image is specified, Azure ML will use a CPU image `azureml.core.environment.DEFAULT_CPU_IMAGE` as the base image. Since this example runs training on a GPU cluster, you'll need to specify a GPU base image that has the necessary GPU drivers and dependencies. Azure ML maintains a set of base images published on Microsoft Container Registry (MCR) that you can use. For more information, see [AzureML-Containers GitHub repo](https://github.com/Azure/AzureML-Containers).
+To tune the model's hyperparameters, define the parameter space in which to search during training. You'll do this by replacing some of the parameters passed to the training job with special inputs from the `azure.ml.sweep` package.
-```python
-pytorch_env = Environment.from_conda_specification(name='pytorch-1.6-gpu', file_path='./conda_dependencies.yml')
+Since the training script uses a learning rate schedule to decay the learning rate every several epochs, you can tune the initial learning rate and the momentum parameters.
-# Specify a GPU base image
-pytorch_env.docker.enabled = True
-pytorch_env.docker.base_image = 'mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.1-cudnn7-ubuntu18.04'
-```
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=job_for_sweep)]
-> [!TIP]
-> Optionally, you can just capture all your dependencies directly in a custom Docker image or Dockerfile, and create your environment from that. For more information, see [Train with custom image](how-to-train-with-custom-image.md).
+Then, you'll configure sweep on the command job, using some sweep-specific parameters, such as the primary metric to watch and the sampling algorithm to use.
-For more information on creating and using environments, see [Create and use software environments in Azure Machine Learning](how-to-use-environments.md).
+In the following code, we use random sampling to try different configuration sets of hyperparameters in an attempt to maximize our primary metric, `best_val_acc`.
-## Configure and submit your training run
+We also define an early termination policy, the `BanditPolicy`, to terminate poorly performing runs early.
+The `BanditPolicy` terminates any run whose primary metric doesn't fall within the slack factor of the best-performing run. This policy is applied every epoch (because we report the `best_val_acc` metric every epoch and `evaluation_interval`=1). Notice that the first policy evaluation is delayed until after the first 10 epochs (`delay_evaluation`=10).
-### Create a ScriptRunConfig
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=sweep_job)]
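Taken together, the two notebook cells above correspond to a sketch along these lines. The parameter ranges, slack factor, and limit settings are assumptions; only the sampling algorithm, primary metric, and the `evaluation_interval`/`delay_evaluation` values come from the description above.

```python
from azure.ai.ml.sweep import Uniform, BanditPolicy

# Replace fixed hyperparameters with search-space inputs (ranges are assumptions).
job_for_sweep = job(
    learning_rate=Uniform(min_value=0.0005, max_value=0.005),
    momentum=Uniform(min_value=0.9, max_value=0.99),
)

# Sweep over the command job: random sampling, maximizing the reported best_val_acc.
sweep_job = job_for_sweep.sweep(
    compute=gpu_compute_target,
    sampling_algorithm="random",
    primary_metric="best_val_acc",
    goal="Maximize",
)

# Limit the total number of trials and concurrency (values are assumptions).
sweep_job.set_limits(max_total_trials=8, max_concurrent_trials=4, timeout=7200)

# Terminate poorly performing runs early: evaluate every epoch, after a 10-epoch delay.
sweep_job.early_termination = BanditPolicy(
    slack_factor=0.15, evaluation_interval=1, delay_evaluation=10
)

# Submit the sweep job as shown in the next cell:
# returned_sweep_job = ml_client.jobs.create_or_update(sweep_job)
```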
-Create a [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) object to specify the configuration details of your training job, including your training script, environment to use, and the compute target to run on. Any arguments to your training script will be passed via command line if specified in the `arguments` parameter. The following code will configure a single-node PyTorch job.
+Now, you can submit this job as before. This time, you'll be running a sweep job that sweeps over your train job.
-```python
-from azureml.core import ScriptRunConfig
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=create_sweep_job)]
-src = ScriptRunConfig(source_directory=project_folder,
- script='pytorch_train.py',
- arguments=['--num_epochs', 30, '--output_dir', './outputs'],
- compute_target=compute_target,
- environment=pytorch_env)
-```
+You can monitor the job by using the studio user interface link that is presented during the job run.
-> [!WARNING]
-> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory . Instead, access your data using an Azure ML [dataset](v1/how-to-train-with-datasets.md).
+## Find the best model
-For more information on configuring jobs with ScriptRunConfig, see [Configure and submit training runs](v1/how-to-set-up-training-targets.md).
+Once all the runs complete, you can find the run that produced the model with the highest accuracy.
-> [!WARNING]
-> If you were previously using the PyTorch estimator to configure your PyTorch training jobs, please note that Estimators have been deprecated as of the 1.19.0 SDK release. With Azure ML SDK >= 1.15.0, ScriptRunConfig is the recommended way to configure training jobs, including those using deep learning frameworks. For common migration questions, see the [Estimator to ScriptRunConfig migration guide](how-to-migrate-from-estimators-to-scriptrunconfig.md).
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=model)]
-## Submit your run
+## Deploy the model as an online endpoint
-The [Run object](/python/api/azureml-core/azureml.core.run%28class%29) provides the interface to the run history while the job is running and after it has completed.
+You can now deploy your model as an [online endpoint](concept-endpoints.md), that is, as a web service in the Azure cloud.
-```Python
-run = Experiment(ws, name='Tutorial-pytorch-birds').submit(src)
-run.wait_for_completion(show_output=True)
-```
+To deploy a machine learning service, you'll typically need:
+- The model assets that you want to deploy. These assets include the model's file and metadata that you already registered in your training job.
+- Some code to run as a service. The code executes the model on a given input request (an entry script). This entry script receives data submitted to a deployed web service and passes it to the model. After the model processes the data, the script returns the model's response to the client. The script is specific to your model and must understand the data that the model expects and returns. When you use an MLFlow model, AzureML automatically creates this script for you.
-### What happens during run execution
+For more information about deployment, see [Deploy and score a machine learning model with managed online endpoint using Python SDK v2](how-to-deploy-managed-online-endpoint-sdk-v2.md).
-As the run is executed, it goes through the following stages:
+### Create a new online endpoint
-- **Preparing**: A docker image is created according to the environment defined. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the run history and can be viewed to monitor progress. If a curated environment is specified instead, the cached image backing that curated environment will be used.
+As a first step to deploying your model, you need to create your online endpoint. The endpoint name must be unique in the entire Azure region. For this article, you'll create a unique name using a universally unique identifier (UUID).
-- **Scaling**: The cluster attempts to scale up if the Batch AI cluster requires more nodes to execute the run than are currently available.
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=online_endpoint_name)]
-- **Running**: All scripts in the script folder are uploaded to the compute target, data stores are mounted or copied, and the `script` is executed. Outputs from stdout and the **./logs** folder are streamed to the run history and can be used to monitor the run.
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=endpoint)]
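For reference, creating the endpoint outside the notebook might look like this sketch; the name prefix, description, and auth mode are assumptions.

```python
import uuid
from azure.ai.ml.entities import ManagedOnlineEndpoint

# Build a unique name for the endpoint, as described above (prefix is an assumption).
online_endpoint_name = "pytorch-endpoint-" + str(uuid.uuid4())[:8]

endpoint = ManagedOnlineEndpoint(
    name=online_endpoint_name,
    description="Online endpoint for the PyTorch image classifier",  # assumption
    auth_mode="key",  # assumption: key-based authentication
)

# Create the endpoint; this is a long-running operation.
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```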
-- **Post-Processing**: The **./outputs** folder of the run is copied over to the run history.
+Once you've created the endpoint, you can retrieve it as follows:
-## Register or download a model
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=get_endpoint)]
-Once you've trained the model, you can register it to your workspace. Model registration lets you store and version your models in your workspace to simplify [model management and deployment](concept-model-management-and-deployment.md).
+### Deploy the model to the endpoint
-```Python
-model = run.register_model(model_name='pytorch-birds', model_path='outputs/model.pt')
-```
+After you've created the endpoint, you can deploy the model with the entry script. An endpoint can have multiple deployments. Using rules, the endpoint can then direct traffic to these deployments.
-> [!TIP]
-> The deployment how-to
-contains a section on registering models, but you can skip directly to [creating a compute target](./v1/how-to-deploy-and-where.md#choose-a-compute-target) for deployment, since you already have a registered model.
+In the following code, you'll create a single deployment that handles 100% of the incoming traffic. We've specified an arbitrary color name (*aci-blue*) for the deployment. You could also use any other name, such as *aci-green* or *aci-red*.
+The code to deploy the model to the endpoint does the following:
-You can also download a local copy of the model by using the Run object. In the training script `pytorch_train.py`, a PyTorch save object persists the model to a local folder (local to the compute target). You can use the Run object to download a copy.
+- deploys the best version of the model that you registered earlier;
+- scores the model, using the `score.py` file; and
+- uses the curated environment (that you specified earlier) to perform inferencing.
-```Python
-# Create a model folder in the current directory
-os.makedirs('./model', exist_ok=True)
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=blue_deployment)]
-# Download the model from run history
-run.download_file(name='outputs/model.pt', output_file_path='./model/model.pt'),
-```
+> [!NOTE]
+> Expect this deployment to take a bit of time to finish.
+
+### Test the deployed model
+
+Now that you've deployed the model to the endpoint, you can predict the output of the deployed model, using the `invoke` method on the endpoint.
+
+To test the endpoint, let's use a sample image for prediction. First, let's display the image.
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=display_image)]
+
+Create a function to format and resize the image.
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=process_image)]
+
+Format the image and convert it to a JSON file.
+
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=test_json)]
+
+You can then invoke the endpoint with this JSON and print the result.
-## Distributed training
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=test_deployment)]
-Azure Machine Learning also supports multi-node distributed PyTorch jobs so that you can scale your training workloads. You can easily run distributed PyTorch jobs and Azure ML will manage the orchestration for you.
+### Clean up resources
-Azure ML supports running distributed PyTorch jobs with both Horovod and PyTorch's built-in DistributedDataParallel module.
+If you're not going to use the endpoint, delete it to release the underlying resources. Before you delete the endpoint, make sure that no other deployments are using it.
-For more information about distributed training, see the [Distributed GPU training guide](how-to-train-distributed-gpu.md).
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=delete_endpoint)]
-## Export to ONNX
+> [!NOTE]
+> Expect this cleanup to take a bit of time to finish.
-To optimize inference with the [ONNX Runtime](concept-onnx.md), convert your trained PyTorch model to the ONNX format. Inference, or model scoring, is the phase where the deployed model is used for prediction, most commonly on production data. For an example, see the [Exporting model from PyTorch to ONNX tutorial](https://github.com/onnx/tutorials/blob/master/tutorials/PytorchOnnxExport.ipynb).
## Next steps
-In this article, you trained and registered a deep learning, neural network using PyTorch on Azure Machine Learning. To learn how to deploy a model, continue on to our model deployment article.
+In this article, you trained and registered a deep learning neural network using PyTorch on Azure Machine Learning. You also deployed the model to an online endpoint. See these other articles to learn more about Azure Machine Learning.
-- [How and where to deploy models](./v1/how-to-deploy-and-where.md) - [Track run metrics during training](how-to-log-view-metrics.md) - [Tune hyperparameters](how-to-tune-hyperparameters.md)-- [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
+- [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
To benefit from this article, you'll need to:
- Access an Azure subscription. If you don't have one already, [create a free account](https://azure.microsoft.com/free/). - Run the code in this article using either an Azure Machine Learning compute instance or your own Jupyter notebook.
- - Azure Machine Learning compute instance - no downloads or installation necessary
+ - Azure Machine Learning compute instance - no downloads or installation necessary
- Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
- - In the samples deep learning folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **v2 > sdk > jobs > single-step > tensorflow > train-hyperparameter-tune-deploy-with-tensorflow**.
- - You can use the pre-populated code in the samples deep learning folder to complete this tutorial.
-- Your Jupyter notebook server
- - [Install the Azure Machine Learning SDK (v2)](https://aka.ms/sdk-v2-install).
- - Download the following files:
- - training script [tf_mnist.py](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/src/tf_mnist.py)
- - scoring script [score.py](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/src/score.py)
- - sample request file [sample-request.json](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/request/sample-request.json)
+ - In the samples deep learning folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **v2 > sdk > python > jobs > single-step > tensorflow > train-hyperparameter-tune-deploy-with-tensorflow**.
+ - Your Jupyter notebook server
+ - [Install the Azure Machine Learning SDK (v2)](https://aka.ms/sdk-v2-install).
+- Download the following files:
+ - training script [tf_mnist.py](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/src/tf_mnist.py)
+ - scoring script [score.py](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/src/score.py)
+ - sample request file [sample-request.json](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/request/sample-request.json)
- You can also find a completed [Jupyter Notebook version](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb) of this guide on the GitHub samples page.
+You can also find a completed [Jupyter Notebook version](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb) of this guide on the GitHub samples page.
[!INCLUDE [gpu quota](../../includes/machine-learning-gpu-quota-prereq.md)]
For more information about the MNIST dataset, visit [Yan LeCun's website](http:/
In this article, we've provided the training script *tf_mnist.py*. In practice, you should be able to take any custom training script as is and run it with AzureML without having to modify your code.
-> [!NOTE]
-> The provided training script does the following:
-> - handles the data preprocessing, splitting the data into test and train data;
-> - trains a model, using the data; and
-> - returns the output model.
+The provided training script does the following:
+- handles the data preprocessing, splitting the data into test and train data;
+- trains a model, using the data; and
+- returns the output model.
During the pipeline run, you'll use MLFlow to log the parameters and metrics. To learn how to enable MLFlow tracking, see [Track ML experiments and models with MLflow](how-to-use-mlflow-cli-runs.md).
Once completed, the job will register a model in your workspace (as a result of
### What happens during job execution As the job is executed, it goes through the following stages: -- **Preparing**: A docker image is created according to the environment defined. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the run history and can be viewed to monitor progress. If a curated environment is specified, the cached image backing that curated environment will be used.
+- **Preparing**: A docker image is created according to the environment defined. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the job history and can be viewed to monitor progress. If a curated environment is specified, the cached image backing that curated environment will be used.
- **Scaling**: The cluster attempts to scale up if it requires more nodes to execute the run than are currently available. -- **Running**: All scripts in the script folder *src* are uploaded to the compute target, data stores are mounted or copied, and the script is executed. Outputs from *stdout* and the *./logs* folder are streamed to the run history and can be used to monitor the run.
+- **Running**: All scripts in the script folder *src* are uploaded to the compute target, data stores are mounted or copied, and the script is executed. Outputs from *stdout* and the *./logs* folder are streamed to the job history and can be used to monitor the job.
## Tune model hyperparameters
In the following code, you'll create a single deployment that handles 100% of th
The code to deploy the model to the endpoint does the following: - deploys the best version of the model that you registered earlier;-- scores the model, using the `core.py` file; and
+- scores the model, using the `score.py` file; and
- uses the same curated environment (that you declared earlier) to perform inferencing. [!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=blue_deployment)]
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
Last updated 03/01/2022 -+ # Troubleshooting environment image builds using troubleshooting log error messages
image builds on the environment
#### **"R section is deprecated"** - The Azure Machine Learning SDK for R will be deprecated by the end of 2021 to make way for an improved R training and deployment experience using Azure Machine Learning CLI 2.0-- See the [samples repository](https://aka.ms/azureml/environment/train-r-models-cli-v2) to get started with the Public Preview edition of the 2.0 CLI
+- See the [samples repository](https://aka.ms/azureml/environment/train-r-models-cli-v2) to get started with the Azure Machine Learning CLI 2.0.
## *Image build problems*
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
Last updated 04/12/2022 -+ #Customer intent: As a data scientist, I want to figure out why my online endpoint deployment failed so that I can fix it.
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!IMPORTANT]
-> SDK v2 is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Learn how to resolve common issues in the deployment and scoring of Azure Machine Learning online endpoints.
When you access online endpoints with REST requests, the returned status codes a
| 404 | Not found | Your URL isn't correct. | | 408 | Request timeout | The model execution took longer than the timeout supplied in `request_timeout_ms` under `request_settings` of your model deployment config.| | 424 | Model Error | If your model container returns a non-200 response, Azure returns a 424. Check the `Model Status Code` dimension under the `Requests Per Minute` metric on your endpoint's [Azure Monitor Metric Explorer](../azure-monitor/essentials/metrics-getting-started.md). Or check response headers `ms-azureml-model-error-statuscode` and `ms-azureml-model-error-reason` for more information. |
+| 429 | Too many pending requests | Your model is getting more requests than it can handle. We allow a maximum of `max_concurrent_requests_per_instance` * `instance_count` / `request_process_time (in seconds)` requests per second. Additional requests are rejected. You can confirm these settings in your model deployment config under `request_settings` and `scale_settings`. If you're using auto-scaling, your model is getting requests faster than the system can scale up. With auto-scaling, you can try to resend requests with [exponential backoff](https://aka.ms/exponential-backoff). Doing so can give the system time to adjust. Apart from enabling auto-scaling, you can also increase the number of instances by using the [code below](#how-to-calculate-instance-count). |
| 429 | Rate-limiting | The number of requests per second reached the [limit](./how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) of managed online endpoints.|
-| 429 | Too many pending requests | Your model is getting more requests than it can handle. We allow 2 * `max_concurrent_requests_per_instance` * `instance_count` requests in parallel at any time. Additional requests are rejected. You can confirm these settings in your model deployment config under `request_settings` and `scale_settings`. If you're using auto-scaling, your model is getting requests faster than the system can scale up. With auto-scaling, you can try to resend requests with [exponential backoff](https://aka.ms/exponential-backoff). Doing so can give the system time to adjust. |
| 500 | Internal server error | Azure ML-provisioned infrastructure is failing. |
+### How to calculate instance count
+To increase the number of instances, you can calculate the required number of replicas by using the following code.
+```python
+from math import ceil
+# target requests per second
+target_qps = 20
+# time to process the request (in seconds)
+request_process_time = 10
+# Maximum concurrent requests per instance
+max_concurrent_requests_per_instance = 1
+# The target CPU usage of the model container. 70% in this example
+target_utilization = .7
+
+concurrent_requests = target_qps * request_process_time / target_utilization
+
+# Required number of instances
+instance_count = ceil(concurrent_requests / max_concurrent_requests_per_instance)
+```
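For example, with the values shown above (20 requests per second, a 10-second processing time, 70% target utilization, and 1 concurrent request per instance), `concurrent_requests` works out to about 285.7, so `instance_count` comes out to 286.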
+ ## Common network isolation issues [!INCLUDE [network isolation issues](../../includes/machine-learning-online-endpoint-troubleshooting.md)]
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Last updated 11/15/2021 -+ # Set up no-code AutoML training with the studio UI
If you specified a test dataset or opted for a train/test split during your expe
> [!WARNING] > This feature is not available for the following automated ML scenarios
-> * [Computer vision tasks (preview)](how-to-auto-train-image-models.md)
+> * [Computer vision tasks](how-to-auto-train-image-models.md)
> * [Many models and hierarchical time series forecasting training (preview)](how-to-auto-train-forecast.md) > * [Forecasting tasks where deep learning neural networks (DNN) are enabled](how-to-auto-train-forecast.md#enable-deep-learning) > * [Automated ML jobs from local computes or Azure Databricks clusters](how-to-configure-auto-train.md#compute-to-run-experiment)
The model test job generates the predictions.csv file that's stored in the defau
> [!WARNING] > This feature is not available for the following automated ML scenarios
-> * [Computer vision tasks (preview)](how-to-auto-train-image-models.md)
+> * [Computer vision tasks](how-to-auto-train-image-models.md)
> * [Many models and hierarchical time series forecasting training (preview)](how-to-auto-train-forecast.md) > * [Forecasting tasks where deep learning neural networks (DNN) are enabled](how-to-auto-train-forecast.md#enable-deep-learning) > * [Automated ML runs from local computes or Azure Databricks clusters](how-to-configure-auto-train.md#compute-to-run-experiment)
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-small-object-detect.md
Title: Use AutoML to detect small objects in images
-description: Set up Azure Machine Learning automated ML to train small object detection models with the CLI v2 and Python SDK v2 (preview).
+description: Set up Azure Machine Learning automated ML to train small object detection models with the CLI v2 and Python SDK v2.
Last updated 10/13/2021-+
-# Train a small object detection model with AutoML (preview)
+# Train a small object detection model with AutoML
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"] > * [v1](v1/how-to-use-automl-small-object-detect-v1.md) > * [v2 (current version)](how-to-use-automl-small-object-detect.md)
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
In this article, you'll learn how to train an object detection model to detect small objects in high-resolution images with [automated ML](concept-automated-ml.md) in Azure Machine Learning.
training_parameters:
tile_grid_size: '3x2' ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK v2](#tab/SDK-v2)
```python image_object_detection_job.set_training_parameters(
search_space:
values: ['2x1', '3x2', '5x3'] ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK v2](#tab/SDK-v2)
```python image_object_detection_job.extend_search_space(
See the [object detection sample notebook](https://github.com/Azure/azureml-exam
* Learn more about [how and where to deploy a model](/azure/machine-learning/how-to-deploy-managed-online-endpoints). * For definitions and examples of the performance charts and metrics provided for each job, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md).
-* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
+* [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md).
* See [what hyperparameters are available for computer vision tasks](reference-automl-images-hyperparameters.md). * [Make predictions with ONNX on computer vision models from AutoML](how-to-inference-onnx-automl-image-models.md)
machine-learning How To Use Batch Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint-sdk-v2.md
Title: 'Use batch endpoints for batch scoring using Python SDK v2 (preview)'
+ Title: 'Use batch endpoints for batch scoring using Python SDK v2'
-description: In this article, learn how to create a batch endpoint to continuously batch score large data using Python SDK v2 (preview).
+description: In this article, learn how to create a batch endpoint to continuously batch score large data using Python SDK v2.
Last updated 05/25/2022-+ #Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
-# Use batch endpoints for batch scoring using Python SDK v2 (preview)
+# Use batch endpoints for batch scoring using Python SDK v2
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!IMPORTANT]
-> SDK v2 is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Learn how to use batch endpoints to do batch scoring using Python SDK v2. Batch endpoints simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. For more information, see [What are Azure Machine Learning endpoints?](concept-endpoints.md). In this article, you'll learn to:
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
Last updated 05/24/2022-+ #Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
There are several options to specify the data inputs in CLI `invoke`.
> [!NOTE] > - If you are using existing V1 FileDataset for batch endpoint, we recommend migrating them to V2 data assets and refer to them directly when invoking batch endpoints. Currently only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 Dataset. > - You can also extract the URI or path on datastore extracted from V1 FileDataset by using `az ml dataset show` command with `--query` parameter and use that information for invoke.
-> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
+> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
#### Configure the output location and overwrite settings
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
Last updated 07/01/2022 -+ # Track Azure Databricks ML experiments with MLflow and Azure Machine Learning
In this article, learn how to enable MLflow to connect to Azure Machine Learning
See [MLflow and Azure Machine Learning](concept-mlflow.md) for additional MLflow and Azure Machine Learning functionality integrations.
-If you have an MLflow Project to train with Azure Machine Learning, see [Train ML models with MLflow Projects and Azure Machine Learning (preview)](how-to-train-mlflow-projects.md).
+If you have an MLflow Project to train with Azure Machine Learning, see [Train ML models with MLflow Projects and Azure Machine Learning](how-to-train-mlflow-projects.md).
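As a rough sketch of the wiring involved (the Databricks-side configuration is covered in the article itself), the snippet below points MLflow at an Azure Machine Learning workspace's tracking URI using the v2 `MLClient`; the workspace identifiers and experiment name are placeholders.

```python
# Hypothetical sketch: point MLflow at an Azure ML workspace tracking URI.
# Requires the azureml-mlflow plugin; workspace identifiers are placeholders.
import mlflow
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# Fetch the workspace and use its MLflow tracking URI.
workspace = ml_client.workspaces.get(ml_client.workspace_name)
mlflow.set_tracking_uri(workspace.mlflow_tracking_uri)

mlflow.set_experiment("databricks-to-azureml-tracking")
with mlflow.start_run():
    mlflow.log_metric("connection_check", 1.0)
```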
## Prerequisites
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-secrets-in-runs.md
Last updated 09/16/2022 -+ # Use authentication credential secrets in Azure Machine Learning jobs
Before following the steps in this article, make sure you have the following pre
## Next steps
-For an example of submitting a training job using the Azure Machine Learning Python SDK v2 (preview), see [Train models with the Python SDK v2](how-to-train-sdk.md).
+For an example of submitting a training job using the Azure Machine Learning Python SDK v2, see [Train models with the Python SDK v2](how-to-train-sdk.md).
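For context, a commonly used pattern for reading a workspace Key Vault secret inside a submitted job looks roughly like the sketch below (SDK v1 `Run.get_secret`); the secret name is an assumption and must already exist in the workspace's key vault.

```python
# Hypothetical sketch: read a secret from the workspace key vault inside a job.
# Assumes a secret named "my-api-key" was stored earlier (for example, with
# Keyvault.set_secret from the SDK v1 Workspace object).
from azureml.core import Run

run = Run.get_context()
api_key = run.get_secret(name="my-api-key")
# Use api_key with your external service; avoid printing or logging its value.
```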
machine-learning How To Use Sweep In Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-sweep-in-pipeline.md
Last updated 05/26/2022-+
-# How to do hyperparameter tuning in pipeline (V2) (preview)
+# How to do hyperparameter tuning in pipeline (v2)
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
machine-learning Migrate Register Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-register-dataset.md
description: Rebuild Studio (classic) datasets in Azure Machine Learning designe
-+ Previously updated : 10/21/2021 Last updated : 09/28/2022 # Migrate a Studio (classic) dataset to Azure Machine Learning
To convert your dataset to a CSV and download the results:
### Upload your dataset to Azure Machine Learning
-After you download the data file, you can register the dataset in Azure Machine Learning:
+After you download the data file, you can register it as a data asset in Azure Machine Learning:
+1. Navigate to [Azure Machine Learning studio](https://ml.azure.com).
-1. Go to Azure Machine Learning studio ([ml.azure.com](https://ml.azure.com)).
-1. In the left navigation pane, select the **Datasets** tab.
-1. Select **Create dataset** > **From local files**.
-
- :::image type="content" source="./media/migrate-register-dataset/register-dataset.png" alt-text="Screenshot showing the datasets tab and the button for creating a local file.":::
-1. Enter a name and description.
-1. For **Dataset type**, select **Tabular**.
+1. Under __Assets__ in the left navigation, select __Data__. On the **Data assets** tab, select **Create**.
+1. Give your data asset a name and an optional description. Then select the **Tabular** option under **Type**, in the **Dataset types** section of the dropdown.
> [!NOTE]
- > You can also upload ZIP files as datasets. To upload a ZIP file, select **File** for **Dataset type**.
+ > You can also upload ZIP files as data assets. To upload a ZIP file, select **File** for **Type**, in the **Dataset types** section of the dropdown.
+
+1. For the data source, select the **From local files** option to upload your dataset.
-1. **For Datastore and file selection**, select the datastore you want to upload your dataset file to.
+1. For file selection, first choose where you want your data to be stored in Azure. You will be selecting an Azure Machine Learning datastore. For more information on datastores, see [Connect to storage services](v1/how-to-access-data.md). Next, upload the dataset you downloaded earlier.
- By default, Azure Machine Learning stores the dataset to the default workspace blobstore. For more information on datastores, see [Connect to storage services](how-to-access-data.md).
+1. Follow the steps to set the data parsing settings and schema for your data asset.
-1. Set the data parsing settings and schema for your dataset. Then, confirm your settings.
+1. When you reach the **Review** step, select **Create** on the last page.
## Import data from cloud sources
Use the following steps to register a dataset to Azure Machine Learning from a c
1. [Create a datastore](v1/how-to-connect-data-ui.md#create-datastores), which links the cloud storage service to your Azure Machine Learning workspace.
-1. [Register a dataset](v1/how-to-connect-data-ui.md#create-datasets). If you are migrating a Studio (classic) dataset, select the **Tabular** dataset setting.
+1. [Register a dataset](v1/how-to-connect-data-ui.md#create-data-assets). If you are migrating a Studio (classic) dataset, select the **Tabular** dataset setting.
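If you prefer code over the studio UI, a roughly equivalent SDK v1 sketch is shown below; the datastore name, file path, and dataset name are assumptions for illustration only.

```python
# Hypothetical SDK v1 sketch: register a Tabular dataset from a datastore path.
# The datastore name, relative path, and dataset name are placeholders.
from azureml.core import Dataset, Datastore, Workspace

ws = Workspace.from_config()
datastore = Datastore.get(ws, "my_cloud_datastore")

# Point at the CSV exported from Studio (classic) and register it.
dataset = Dataset.Tabular.from_delimited_files(
    path=[(datastore, "studio-classic-export/my-data.csv")]
)
dataset.register(workspace=ws, name="migrated-studio-classic-dataset", create_new_version=True)
```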
After you register a dataset in Azure Machine Learning, you can use it in designer:
machine-learning Overview What Happened To Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-happened-to-workbench.md
-+ Last updated 07/01/2022
For an overview of the service, read [What is Azure Machine Learning?](overview-
Start with [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md). Then use these resources to create your first experiment with your preferred method:
- + [Run a "Hello world!" Python script (part 1 of 3)](tutorial-1st-experiment-hello-world.md)
+ + [Tutorial: Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md)
 + [Use a Jupyter notebook to train image classification models](tutorial-train-deploy-notebook.md)
 + [Use automated machine learning](tutorial-first-experiment-automated-ml.md)
 + [Use the designer's drag & drop capabilities](tutorial-designer-automobile-price-train-score.md)
machine-learning Overview What Is Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-azure-machine-learning.md
Last updated 09/22/2022-+ adobe-target: true
Machine learning projects often require a team with varied skill set to build an
Anyone on an ML team can use their preferred tools to get the job done. Whether you're running rapid experiments, hyperparameter-tuning, building pipelines, or managing inferences, you can use familiar interfaces including: * [Azure Machine Learning studio](https://ml.azure.com)
-* [Python SDK](https://aka.ms/sdk-v2-install)
-* [CLI v2 ](how-to-configure-cli.md))
-* [Azure Resource Manager REST APIs (preview)](/rest/api/azureml/)
+* [Python SDK (v2)](https://aka.ms/sdk-v2-install)
+* [CLI (v2)](how-to-configure-cli.md)
+* [Azure Resource Manager REST APIs](/rest/api/azureml/)
As you're refining the model and collaborating with others throughout the rest of Machine Learning development cycle, you can share and find assets, resources, and metrics for your projects on the Azure Machine Learning studio UI.
Also, Azure Machine Learning includes features for monitoring and auditing:
Start using Azure Machine Learning:
- [Set up an Azure Machine Learning workspace](quickstart-create-resources.md)
- [Tutorial: Build a first machine learning project](tutorial-1st-experiment-hello-world.md)
-- [How to run training jobs(how-to-train-model.md)
+- [How to run training jobs](how-to-train-model.md)
machine-learning Overview What Is Machine Learning Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-machine-learning-studio.md
description: The studio is a web portal for Azure Machine Learning workspaces. T
+
Visit the [studio](https://ml.azure.com), or explore the different authoring opt
Start with [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md). Then use these resources to create your first experiment with your preferred method:
- + [Run a "Hello world!" Python script (part 1 of 3)](tutorial-1st-experiment-hello-world.md)
- + [Use a Jupyter notebook to train image classification models](tutorial-train-deploy-notebook.md)
+ + [Tutorial: Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md)
 + [Use automated machine learning to train & deploy models](tutorial-first-experiment-automated-ml.md)
 + [Use the designer to train & deploy models](tutorial-designer-automobile-price-train-score.md)
 + [Use studio in a secured virtual network](how-to-enable-studio-virtual-network.md)
machine-learning Quickstart Create Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-create-resources.md
Last updated 08/26/2022 adobe-target: true-+ #Customer intent: As a data scientist, I want to create a workspace so that I can start to use Azure Machine Learning.
You now have an Azure Machine Learning workspace that contains:
Use these resources to learn more about Azure Machine Learning and train a model with Python scripts. > [!div class="nextstepaction"]
-> [Learn more with Python scripts](tutorial-1st-experiment-hello-world.md)
+> [Quickstart: Run Jupyter notebooks in Azure Machine Learning studio](quickstart-run-notebooks.md)
>
machine-learning Quickstart Run Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-run-notebooks.md
+
+ Title: "Quickstart: Run notebooks"
+
+description: Learn to run Jupyter notebooks in studio, and find sample notebooks to learn more about Azure Machine Learning.
+++++++ Last updated : 09/28/2022
+adobe-target: true
+#Customer intent: As a data scientist, I want to run notebooks and explore sample notebooks in Azure Machine Learning.
++
+# Quickstart: Run Jupyter notebooks in studio
+
+Get started with Azure Machine Learning by using Jupyter notebooks to learn more about the Python SDK.
+
+In this quickstart, you'll learn how to run notebooks on a *compute instance* in Azure Machine Learning studio. A compute instance is an online compute resource that has a development environment already installed and ready to go.
+
+You'll also learn where to find sample notebooks to help jump-start your path to training and deploying models with Azure Machine Learning.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Run the [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md) to create a workspace and a compute instance.
+
+## Create a new notebook
+
+Create a new notebook in studio.
+
+1. Sign into [Azure Machine Learning studio](https://ml.azure.com).
+1. Select your workspace, if it isn't already open.
+1. On the left, select **Notebooks**.
+1. Select **Create new file**.
+
+ :::image type="content" source="media/quickstart-run-notebooks/create-new-file.png" alt-text="Screenshot: create a new notebook file.":::
+
+1. Name your new notebook **my-new-notebook.ipynb**.
++
+## Create a markdown cell
+
+1. On the upper right of each notebook cell is a toolbar of actions you can use for that cell. Select the **Convert to markdown cell** tool to change the cell to markdown.
+
+ :::image type="content" source="media/quickstart-run-notebooks/convert-to-markdown.png" alt-text="Screenshot: Convert to markdown.":::
+
+1. Double-click on the cell to open it.
+1. Inside the cell, type:
+
+ ```markdown
+ # Testing a new notebook
+ Use markdown cells to add nicely formatted content to the notebook.
+ ```
+
+## Create a code cell
+
+1. Just below the cell, select **+ Code** to create a new code cell.
+1. Inside this cell, add:
+
+ ```python
+ print("Hello, world!")
+ ```
+
+## Run the code
+
+1. If you stopped your compute instance at the end of the [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md), start it again now:
+
+ :::image type="content" source="media/quickstart-run-notebooks/start-compute.png" alt-text="Screenshot: Start a compute instance.":::
+
+1. Wait until the compute instance status is **Running**. When it's running, the **Compute instance** dot is green. You can also see the status after the compute instance name; you may have to select the arrow to see the full name.
+
+ :::image type="content" source="media/quickstart-run-notebooks/compute-running.png" alt-text="Screenshot: Compute is running.":::
+
+1. You can run code cells either by using **Shift + Enter**, or by selecting the **Run cell** tool to the right of the cell. Use one of these methods to run the cell now.
+
+ :::image type="content" source="media/quickstart-run-notebooks/run-cell.png" alt-text="Screenshot: run cell tool.":::
+
+1. The brackets to the left of the cell now have a number inside. The number represents the order in which cells were run. Since this is the first cell you've run, you'll see `[1]` next to the cell. You also see the output of the cell, `Hello, world!`.
+
+1. Run the cell again. You'll see the same output (since you didn't change the code), but now the brackets contain `[2]`. As your notebook gets larger, these numbers help you understand what code was run, and in what order.
+
+## Run a second code cell
+
+1. Add a second code cell:
+
+ ```python
+ two = 1 + 1
+ print("One plus one is ",two)
+ ```
+
+1. Run the new cell.
+1. Your notebook now looks like:
+
+ :::image type="content" source="media/quickstart-run-notebooks/notebook.png" alt-text="Screenshot: Notebook contents.":::
+
+## See your variables
+
+Use the **Variable explorer** to see the variables that are defined in your session.
+
+1. Select the **"..."** in the notebook toolbar.
+1. Select **Variable explorer**.
+
+    :::image type="content" source="media/quickstart-run-notebooks/variable-explorer.png" alt-text="Screenshot: Variable explorer tool.":::
+
+ The explorer appears at the bottom. You currently have one variable, `two`, assigned.
+
+1. Add another code cell:
+
+ ```python
+ three = 1+two
+ ```
+
+1. Run this cell to see the variable `three` appear in the variable explorer.
+
+## Learn from sample notebooks
+
+There are sample notebooks available in studio to help you learn more about Azure Machine Learning. To find these samples:
+
+1. Still in the **Notebooks** section, select **Samples** at the top.
+
+ :::image type="content" source="media/quickstart-run-notebooks/samples.png" alt-text="Screenshot: Sample notebooks.":::
+
+1. The **SDK v1** folder contains samples for the previous version of the SDK (v1). If you're just starting, you won't need these samples.
+1. Use notebooks in the **SDK v2** folder for examples that show the current version of the SDK, v2.
+1. Select the notebook **SDK v2/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb**. You'll see a read-only version of the notebook.
+1. To get your own copy, you can select **Clone this notebook**. This action will also copy the rest of the folder's content for that notebook. No need to do that now, though, as you're going to instead clone the whole folder.
+
+## Clone tutorials folder
+
+You can also clone an entire folder. The **tutorials** folder is a good place to start learning more about how Azure Machine Learning works.
+
+1. Open the **SDK v2** folder.
+1. Select the **"..."** at the right of **tutorials** folder to get the menu, then select **Clone**.
+
+ :::image type="content" source="media/quickstart-run-notebooks/clone-folder.png" alt-text="Screenshot: clone v2 tutorials folder.":::
+
+1. Your new folder is now displayed in the **Files** section.
+1. Run the notebooks in this folder to learn more about using the Python SDK v2 to train and deploy models.
+
+## Clean up resources
+
+If you plan to continue now to the next tutorial, skip to [Next steps](#next-steps).
+
+### Delete all resources
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md)
+>
machine-learning Reference Automl Images Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-hyperparameters.md
description: Learn which hyperparameters are available for computer vision tasks
+
The following hyperparameters are for object detection and instance segmentation
## Next steps
-* Learn how to [Set up AutoML to train computer vision models with Python (preview)](how-to-auto-train-image-models.md).
+* Learn how to [Set up AutoML to train computer vision models with Python](how-to-auto-train-image-models.md).
-* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
+* [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md).
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md
Title: JSONL format for computer vision tasks
-description: Learn how to format your JSONL files for data consumption in automated ML experiments for computer vision tasks with the CLI v2 and Python SDK v2 (preview).
+description: Learn how to format your JSONL files for data consumption in automated ML experiments for computer vision tasks with the CLI v2 and Python SDK v2.
+
Last updated 09/09/2022
> * [v1](v1/reference-automl-images-schema-v1.md) > * [v2 (current version)](reference-automl-images-schema.md)
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Learn how to format your JSONL files for data consumption in automated ML experiments for computer vision tasks during training and inference.
In instance segmentation, output consists of multiple boxes with their scaled to
* Learn how to [Prepare data for training computer vision models with automated ML](how-to-prepare-datasets-for-automl-images.md). * [Set up computer vision tasks in AutoML](how-to-auto-train-image-models.md)
-* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
+* [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md).
machine-learning Reference Managed Online Endpoints Vm Sku List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-managed-online-endpoints-vm-sku-list.md
Last updated 06/02/2022
# Managed online endpoints SKU list

This table shows the VM SKUs that are supported for Azure Machine Learning managed online endpoints.

* The full SKU names listed in the table can be used for Azure CLI or Azure Resource Manager templates (ARM templates) requests to create and update deployments.
* For more information on configuration details such as CPU and RAM, see [Azure Machine Learning Pricing](https://azure.microsoft.com/pricing/details/machine-learning/) and [VM sizes](../virtual-machines/sizes.md).
-> [!IMPORTANT]
-> If you use a Windows-based image for your deployment, we recommend using a VM SKU that provides a minimum of 4 cores.
-
-| Size | General Purpose | Compute Optimized | Memory Optimized | GPU |
+| Relative Size | General Purpose | Compute Optimized | Memory Optimized | GPU |
| --- | --- | --- | --- | --- |
| V.Small | Standard_DS1_v2 <br/> Standard_DS2_v2 | Standard_F2s_v2 | Standard_E2s_v3 | Standard_NC4as_T4_v3 |
| Small | Standard_DS3_v2 | Standard_F4s_v2 | Standard_E4s_v3 | Standard_NC6s_v2 <br/> Standard_NC6s_v3 <br/> Standard_NC8as_T4_v3 |
| Medium | Standard_DS4_v2 | Standard_F8s_v2 | Standard_E8s_v3 | Standard_NC12s_v2 <br/> Standard_NC12s_v3 <br/> Standard_NC16as_T4_v3 |
| Large | Standard_DS5_v2 | Standard_F16s_v2 | Standard_E16s_v3 | Standard_NC24s_v2 <br/> Standard_NC24s_v3 <br/> Standard_NC64as_T4_v3 |
-| X-Large| - | Standard_F32s_v2 <br/> Standard_F48s_v2 <br/> Standard_F64s_v2 <br/> Standard_F72s_v2 | Standard_E32s_v3 <br/> Standard_E48s_v3 <br/> Standard_E64s_v3 | - |
+| X-Large| - | Standard_F32s_v2 <br/> Standard_F48s_v2 <br/> Standard_F64s_v2 <br/> Standard_F72s_v2 <br/> Standard_FX24mds <br/> Standard_FX36mds <br/> Standard_FX48mds| Standard_E32s_v3 <br/> Standard_E48s_v3 <br/> Standard_E64s_v3 | Standard_ND40rs_v2 <br/> Standard_ND96asr_v4 <br/> Standard_ND96amsr_A100_v4 <br/>|
+
+> [!CAUTION]
+> `Standard_DS1_v2` and `Standard_DS2_v2` may be too small for compute resources used with managed online endpoints. If you want to reduce the cost of deploying multiple models, see [the example for deploying multiple models](how-to-deploy-managed-online-endpoints.md#use-more-than-one-model).
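To show where these SKU names end up, here's a hedged Python SDK v2 sketch that uses one of the listed sizes as the `instance_type` of a managed online deployment; the endpoint and model names are placeholders, and an MLflow-format registered model is assumed so no scoring script or environment is required.

```python
# Hypothetical sketch: use a SKU from the table as the instance_type of a
# managed online deployment. The endpoint is assumed to already exist, and the
# model is assumed to be a registered MLflow model (so no scoring script is set).
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="my-online-endpoint",
    model="azureml:my-mlflow-model:1",
    instance_type="Standard_DS3_v2",  # a "Small" general purpose SKU from the table
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```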
machine-learning Reference Migrate Sdk V1 Mlflow Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-migrate-sdk-v1-mlflow-tracking.md
-+
Last updated 05/04/2022
-# Migrate logging from SDK v1 to SDK v2 (preview)
+# Migrate logging from SDK v1 to SDK v2
-The Azure Machine Learning Python SDK v2 does not provide native logging APIs. Instead, we recommend that you use [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html). If you're migrating from SDK v1 to SDK v2 (preview), use the information in this section to understand the MLflow equivalents of SDK v1 logging APIs.
+The Azure Machine Learning Python SDK v2 does not provide native logging APIs. Instead, we recommend that you use [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html). If you're migrating from SDK v1 to SDK v2, use the information in this section to understand the MLflow equivalents of SDK v1 logging APIs.
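As a quick orientation before the side-by-side comparisons, the sketch below shows the bare MLflow calls a training script typically uses in place of the v1 `Run` logging methods; the parameter, metric, and artifact names are illustrative only.

```python
# Illustrative sketch: MLflow calls used instead of SDK v1 Run logging inside a
# training script. Inside an Azure ML job with azureml-mlflow installed, the
# tracking URI and active run are usually configured automatically.
import mlflow

mlflow.log_param("learning_rate", 0.01)                      # log a hyperparameter
mlflow.log_metric("accuracy", 0.91)                          # log a numeric metric
mlflow.log_dict({"classes": ["cat", "dog"]}, "labels.json")  # log a small JSON artifact
```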
## Setup
experiment = Experiment(ws, "create-experiment-sdk-v1")
azureml_run = experiment.start_logging() ```
-__SDK v2 (preview) with MLflow__
+__SDK v2 with MLflow__
```python # Set the MLflow experiment and start a run
__SDK v1__
azureml_run.log("sample_int_metric", 1) ```
-__SDK v2 (preview) with MLflow__
+__SDK v2 with MLflow__
```python mlflow.log_metric("sample_int_metric", 1)
__SDK v1__
azureml_run.log("sample_boolean_metric", True) ```
-__SDK v2 (preview) with MLflow__
+__SDK v2 with MLflow__
```python mlflow.log_metric("sample_boolean_metric", 1)
__SDK v1__
azureml_run.log("sample_string_metric", "a_metric") ```
-__SDK v2 (preview) with MLflow__
+__SDK v2 with MLflow__
```python mlflow.log_text("sample_string_text", "string.txt")
__SDK v1__
azureml_run.log_image("sample_image", path="Azure.png") ```
-__SDK v2 (preview) with MLflow__
+__SDK v2 with MLflow__
```python mlflow.log_artifact("Azure.png")
plt.plot([1, 2, 3])
azureml_run.log_image("sample_pyplot", plot=plt) ```
-__SDK v2 (preview) with MLflow__
+__SDK v2 with MLflow__
```python import matplotlib.pyplot as plt
list_to_log = [1, 2, 3, 2, 1, 2, 3, 2, 1]
azureml_run.log_list('sample_list', list_to_log) ```
-__SDK v2 (preview) with MLflow__
+__SDK v2 with MLflow__
```python list_to_log = [1, 2, 3, 2, 1, 2, 3, 2, 1]
__SDK v1__
azureml_run.log_row("sample_table", col1=5, col2=10) ```
-__SDK v2 (preview) with MLflow__
+__SDK v2 with MLflow__
```python metrics = {"sample_table.col1": 5, "sample_table.col2": 10}
table = {
azureml_run.log_table("table", table) ```
-__SDK v2 (preview) with MLflow__
+__SDK v2 with MLflow__
```python # Add a metric for each column prefixed by metric name. Similar to log_row
ACCURACY_TABLE = '{"schema_type": "accuracy_table", "schema_version": "v1", "dat
azureml_run.log_accuracy_table('v1_accuracy_table', ACCURACY_TABLE) ```
-__SDK v2 (preview) with MLflow__
+__SDK v2 with MLflow__
```python ACCURACY_TABLE = '{"schema_type": "accuracy_table", "schema_version": "v1", "data": {"probability_tables": ' +\
CONF_MATRIX = '{"schema_type": "confusion_matrix", "schema_version": "v1", "data
azureml_run.log_confusion_matrix('v1_confusion_matrix', json.loads(CONF_MATRIX)) ```
-__SDK v2 (preview) with MLflow__
+__SDK v2 with MLflow__
```python CONF_MATRIX = '{"schema_type": "confusion_matrix", "schema_version": "v1", "data": {"class_labels": ' + \
PREDICTIONS = '{"schema_type": "predictions", "schema_version": "v1", "data": {"
azureml_run.log_predictions('test_predictions', json.loads(PREDICTIONS)) ```
-__SDK v2 (preview) with MLflow__
+__SDK v2 with MLflow__
```python PREDICTIONS = '{"schema_type": "predictions", "schema_version": "v1", "data": {"bin_averages": [0.25,' + \
RESIDUALS = '{"schema_type": "residuals", "schema_version": "v1", "data": {"bin_
azureml_run.log_residuals('test_residuals', json.loads(RESIDUALS)) ```
-__SDK v2 (preview) with MLflow__
+__SDK v2 with MLflow__
```python RESIDUALS = '{"schema_type": "residuals", "schema_version": "v1", "data": {"bin_edges": [100, 200, 300], ' + \
machine-learning Reference Yaml Deployment Managed Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-managed-online.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `tags` | object | Dictionary of tags for the deployment. | | | | `endpoint_name` | string | **Required.** Name of the endpoint to create the deployment under. | | | | `model` | string or object | The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification. <br><br> To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. <br><br> To define a model inline, follow the [Model schema](reference-yaml-model.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the model separately and reference it here. <br><br> This field is optional for [custom container deployment](how-to-deploy-custom-container.md) scenarios.| | |
-| `model_mount_path` | string | The path to mount the model in a custom container. Applicable only for [custom container deployment](how-to-deploy-custom-container.md) scenarios. If the `model` field is specified, it is mounted on this path in the container. | | |
+| `model_mount_path` | string | The path to mount the model in a custom container. Applicable only for [custom container deployment](how-to-deploy-custom-container.md) scenarios. If the `model` field is specified, it's mounted on this path in the container. | | |
| `code_configuration` | object | Configuration for the scoring code logic. <br><br> This field is optional for [custom container deployment](how-to-deploy-custom-container.md) scenarios. | | | | `code_configuration.code` | string | Local path to the source code directory for scoring the model. | | | | `code_configuration.scoring_script` | string | Relative path to the scoring file in the source code directory. | | |
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `instance_type` | string | **Required.** The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). | | | | `instance_count` | integer | **Required.** The number of instances to use for the deployment. Specify the value based on the workload you expect. For high availability, Microsoft recommends you set it to at least `3`. <br><br> `instance_count` can be updated after deployment creation using `az ml online-deployment update` command. <br><br> We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). | | | | `app_insights_enabled` | boolean | Whether to enable integration with the Azure Application Insights instance associated with your workspace. | | `false` |
-| `scale_settings` | object | The scale settings for the deployment. Currently only the `default` scale type is supported, so you do not need to specify this property. <br><br> With this `default` scale type, you can either manually scale the instance count up and down after deployment creation by updating the `instance_count` property, or create an [autoscaling policy](how-to-autoscale-endpoints.md). | | |
+| `scale_settings` | object | The scale settings for the deployment. Currently only the `default` scale type is supported, so you don't need to specify this property. <br><br> With this `default` scale type, you can either manually scale the instance count up and down after deployment creation by updating the `instance_count` property, or create an [autoscaling policy](how-to-autoscale-endpoints.md). | | |
| `scale_settings.type` | string | The scale type. | `default` | `default` | | `request_settings` | object | Scoring request settings for the deployment. See [RequestSettings](#requestsettings) for the set of configurable properties. | | | | `liveness_probe` | object | Liveness probe settings for monitoring the health of the container regularly. See [ProbeSettings](#probesettings) for the set of configurable properties. | | |
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Default value | | | - | -- | - | | `request_timeout_ms` | integer | The scoring timeout in milliseconds. | `5000` |
-| `max_concurrent_requests_per_instance` | integer | The maximum number of concurrent requests per instance allowed for the deployment. <br><br> **Do not change this setting from the default value unless instructed by Microsoft Technical Support or a member of the Azure ML team.** | `1` |
+| `max_concurrent_requests_per_instance` | integer | The maximum number of concurrent requests per instance allowed for the deployment. <br><br> Set it to the number of requests that your model can process concurrently on a single node. Setting this value higher than your model's actual concurrency can lead to higher latencies. Setting it too low may lead to underutilized nodes and may also result in requests being rejected with a 429 HTTP status code, as the system will opt to fail fast. <br><br> For more information, see [Troubleshooting online endpoints: HTTP status codes](how-to-troubleshoot-online-endpoints.md#http-status-codes). | `1` |
| `max_queue_wait_ms` | integer | The maximum amount of time in milliseconds a request will stay in the queue. | `500` | ### ProbeSettings
machine-learning Samples Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/samples-notebooks.md
Last updated 08/30/2022-+ #Customer intent: As a professional data scientist, I find and run example Jupyter Notebooks for Azure Machine Learning.
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"]
-> * [v1](<v1/samples-notebooks-v1.md>)
-> * [v2 (preview)](samples-notebooks.md)
+> * [v1](v1/samples-notebooks-v1.md)
+> * [v2](samples-notebooks.md)
The [AzureML-Examples](https://github.com/Azure/azureml-examples) repository includes the latest (v2) Azure Machine Learning Python CLI and SDK samples. For information on the various example types, see the [readme](https://github.com/Azure/azureml-examples#azure-machine-learning-examples).
This article shows you how to access the repository from the following environme
The easiest way to get started with the samples is to complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md). Once completed, you'll have a dedicated notebook server pre-loaded with the SDK and the Azure Machine Learning Notebooks repository. No downloads or installation necessary.
-To add the community-driven repository, [use a compute instance terminal](how-to-access-terminal.md). In the terminal window, clone the repository:
+To view example notebooks:
+ 1. Sign in to [studio](https://ml.azure.com) and select your workspace if necessary.
+ 1. Select **Notebooks**.
+ 1. Select the **Samples** tab. Use the **SDK v2** folder for examples using Python SDK v2.
-```bash
-git clone https://github.com/Azure/azureml-examples.git --depth 1
-```
## Option 2: Access on your own notebook server
Try these tutorials:
- [Train and deploy an image classification model with MNIST](tutorial-train-deploy-notebook.md)
-- [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md)
+- [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md)
machine-learning Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Machine Learning description: Lists Azure Policy Regulatory Compliance controls available for Azure Machine Learning. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
machine-learning Tutorial 1St Experiment Bring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-1st-experiment-bring-data.md
- Title: "Tutorial: Upload data and train a model (SDK v2)"-
-description: How to upload and use your own data in a remote training job. This is part 3 of a three-part getting-started series.
------- Previously updated : 07/10/2022---
-# Tutorial: Upload data and train a model (part 3 of 3)
--
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](v1/tutorial-1st-experiment-bring-data.md)
-> * [v2 (preview)](tutorial-1st-experiment-bring-data.md)
-
-This tutorial shows you how to upload and use your own data to train machine learning models in Azure Machine Learning. This tutorial is *part 3 of a three-part tutorial series*.
-
-In [Part 2: Train a model](tutorial-1st-experiment-sdk-train.md), you trained a model in the cloud, using sample data from `PyTorch`. You also downloaded that data through the `torchvision.datasets.CIFAR10` method in the PyTorch API. In this tutorial, you'll use the downloaded data to learn the workflow for working with your own data in Azure Machine Learning.
-
-In this tutorial, you:
-
-> [!div class="checklist"]
->
-> * Upload data to Azure.
-> * Create a control script.
-> * Understand the new Azure Machine Learning concepts (passing parameters, data inputs).
-> * Submit and run your training script.
-> * View your code output in the cloud.
-
-## Prerequisites
-
-You'll need the data that was downloaded in the previous tutorial. Make sure you have completed these steps:
-
-1. [Create the training script](tutorial-1st-experiment-sdk-train.md#create-training-scripts).
-1. [Test locally](tutorial-1st-experiment-sdk-train.md#test-local).
-
-## Adjust the training script
-
-By now you have your training script (get-started/src/train.py) running in Azure Machine Learning, and you can monitor the model performance. Let's parameterize the training script by introducing arguments. Using arguments will allow you to easily compare different hyperparameters.
-
-Our training script is currently set to download the CIFAR10 dataset on each run. The following Python code has been adjusted to read the data from a directory.
-
-> [!NOTE]
-> The use of `argparse` parameterizes the script.
-
-1. Open *train.py* and replace it with this code:
-
- ```python
- import os
- import argparse
- import torch
- import torch.optim as optim
- import torchvision
- import torchvision.transforms as transforms
- from model import Net
- import mlflow
-
- if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument(
- '--data_path',
- type=str,
- help='Path to the training data'
- )
- parser.add_argument(
- '--learning_rate',
- type=float,
- default=0.001,
- help='Learning rate for SGD'
- )
- parser.add_argument(
- '--momentum',
- type=float,
- default=0.9,
- help='Momentum for SGD'
- )
- args = parser.parse_args()
- print("===== DATA =====")
- print("DATA PATH: " + args.data_path)
- print("LIST FILES IN DATA PATH...")
- print(os.listdir(args.data_path))
- print("================")
- # prepare DataLoader for CIFAR10 data
- transform = transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
- ])
- trainset = torchvision.datasets.CIFAR10(
- root=args.data_path,
- train=True,
- download=False,
- transform=transform,
- )
- trainloader = torch.utils.data.DataLoader(
- trainset,
- batch_size=4,
- shuffle=True,
- num_workers=2
- )
- # define convolutional network
- net = Net()
- # set up pytorch loss / optimizer
- criterion = torch.nn.CrossEntropyLoss()
- optimizer = optim.SGD(
- net.parameters(),
- lr=args.learning_rate,
- momentum=args.momentum,
- )
- # train the network
- for epoch in range(2):
- running_loss = 0.0
- for i, data in enumerate(trainloader, 0):
- # unpack the data
- inputs, labels = data
- # zero the parameter gradients
- optimizer.zero_grad()
- # forward + backward + optimize
- outputs = net(inputs)
- loss = criterion(outputs, labels)
- loss.backward()
- optimizer.step()
- # print statistics
- running_loss += loss.item()
- if i % 2000 == 1999:
- loss = running_loss / 2000
- mlflow.log_metric('loss', loss)
- print(f'epoch={epoch + 1}, batch={i + 1:5}: loss {loss:.2f}')
- running_loss = 0.0
- print('Finished Training')
- ```
-
-1. **Save** the file. Close the tab if you wish.
-
-### Understanding the code changes
-
-The code in `train.py` has used the `argparse` library to set up `data_path`, `learning_rate`, and `momentum`.
-
-```python
-# .... other code
-parser = argparse.ArgumentParser()
-parser.add_argument('--data_path', type=str, help='Path to the training data')
-parser.add_argument('--learning_rate', type=float, default=0.001, help='Learning rate for SGD')
-parser.add_argument('--momentum', type=float, default=0.9, help='Momentum for SGD')
-args = parser.parse_args()
-# ... other code
-```
-
-Also, the `train.py` script was adapted to update the optimizer to use the user-defined parameters:
-
-```python
-optimizer = optim.SGD(
- net.parameters(),
- lr=args.learning_rate, # get learning rate from command-line argument
- momentum=args.momentum, # get momentum from command-line argument
-)
-```
-
-## <a name="upload"></a> Upload the data to Azure
-
-To run this script in Azure Machine Learning, you need to make your training data available in Azure. Your Azure Machine Learning workspace comes equipped with a _default_ datastore. This is an Azure Blob Storage account where you can store your training data.
-
-> [!NOTE]
-> Azure Machine Learning allows you to connect other cloud-based storages that store your data. For more details, see the [data documentation](./concept-data.md).
-
-> [!TIP]
-> There is no additional step needed for uploading data, the control script will define and upload the CIFAR10 training data.
-
-## <a name="control-script"></a> Create a control script
-
-As you've done previously, create a new Python control script called *run-pytorch-data.py* in the **get-started** folder:
-
-```python
-# run-pytorch-data.py
-from azure.ai.ml import MLClient, command, Input
-from azure.identity import DefaultAzureCredential
-from azure.ai.ml.entities import Environment
-from azure.ai.ml import command, Input
-from azure.ai.ml.entities import Data
-from azure.ai.ml.constants import AssetTypes
-from azureml.core import Workspace
-
-if __name__ == "__main__":
- # get details of the current Azure ML workspace
- ws = Workspace.from_config()
-
- # default authentication flow for Azure applications
- default_azure_credential = DefaultAzureCredential()
- subscription_id = ws.subscription_id
- resource_group = ws.resource_group
- workspace = ws.name
-
- # client class to interact with Azure ML services and resources, e.g. workspaces, jobs, models and so on.
- ml_client = MLClient(
- default_azure_credential,
- subscription_id,
- resource_group,
- workspace)
-
- # the key here should match the key passed to the command
- my_job_inputs = {
- "data_path": Input(type=AssetTypes.URI_FOLDER, path="./data")
- }
-
- env_name = "pytorch-env"
- env_docker_image = Environment(
- image="pytorch/pytorch:latest",
- name=env_name,
- conda_file="pytorch-env.yml",
- )
- ml_client.environments.create_or_update(env_docker_image)
-
- # target name of compute where job will be executed
- computeName="cpu-cluster"
- job = command(
- code="./src",
- # the parameter will match the training script argument name
- # inputs.data_path key should match the dictionary key
- command="python train.py --data_path ${{inputs.data_path}}",
- inputs=my_job_inputs,
- environment=f"{env_name}@latest",
- compute=computeName,
- display_name="day1-experiment-data",
- )
-
- returned_job = ml_client.create_or_update(job)
- aml_url = returned_job.studio_url
- print("Monitor your job at", aml_url)
-```
-
-> [!TIP]
-> If you used a different name when you created your compute cluster, make sure to adjust the name in the code `computeName='cpu-cluster'` as well.
-
-### Understand the code changes
-
-The control script is similar to the one from [part 2 of this series](tutorial-1st-experiment-sdk-train.md), with the following new lines:
-
- :::column span="":::
- `my_job_inputs = { "data_path": Input(type=AssetTypes.URI_FOLDER, path="./data")}`
- :::column-end:::
- :::column span="2":::
- An [Input](/python/api/azure-ai-ml/azure.ai.ml.input) is used to reference inputs to your job. These can encompass data, either uploaded as part of the job or references to previously registered data assets. URI\*FOLDER tells that the reference points to a folder of data. The data will be mounted by default to the compute for the job.
- :::column-end:::
- :::column span="":::
- `command="python train.py --data_path ${{inputs.data_path}}"`
- :::column-end:::
- :::column span="2":::
- `--data_path` matches the argument defined in the updated training script. `${{inputs.data_path}}` passes the input defined by the input dictionary, and the keys must match.
- :::column-end:::
-
-## <a name="submit-to-cloud"></a> Submit the job to Azure Machine Learning
-
-Select **Save and run script in terminal** to run the *run-pytorch-data.py* script. This job will train the model on the compute cluster using the data you uploaded.
-
-This code will print a URL to the experiment in the Azure Machine Learning studio. If you go to that link, you'll be able to see your code running.
--
-### <a name="inspect-log"></a> Inspect the log file
-
-In the studio, go to the experiment job (by selecting the previous URL output) followed by **Outputs + logs**. Select the `std_log.txt` file. Scroll down through the log file until you see the following output:
-
-```txt
-===== DATA =====
-DATA PATH: /mnt/azureml/cr/j/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/cap/data-capability/wd/INPUT_data_path
-LIST FILES IN DATA PATH...
-['.amlignore', 'cifar-10-batches-py', 'cifar-10-python.tar.gz']
-================
-epoch=1, batch= 2000: loss 2.20
-epoch=1, batch= 4000: loss 1.90
-epoch=1, batch= 6000: loss 1.70
-epoch=1, batch= 8000: loss 1.58
-epoch=1, batch=10000: loss 1.54
-epoch=1, batch=12000: loss 1.48
-epoch=2, batch= 2000: loss 1.41
-epoch=2, batch= 4000: loss 1.38
-epoch=2, batch= 6000: loss 1.33
-epoch=2, batch= 8000: loss 1.30
-epoch=2, batch=10000: loss 1.29
-epoch=2, batch=12000: loss 1.25
-Finished Training
-
-```
-
-Notice:
--- Azure Machine Learning has mounted Blob Storage to the compute cluster automatically for you, passing the mount point into `--data_path`. Compared to the previous job, there is no on the fly data download.-- The `inputs=my_job_inputs` used in the control script resolves to the mount point.-
-## Clean up resources
-
-If you plan to continue now to another tutorial, or to start your own training jobs, skip to [Next steps](#next-steps).
-
-### Stop compute instance
-
-If you're not going to use it now, stop the compute instance:
-
-1. In the studio, on the left, select **Compute**.
-1. In the top tabs, select **Compute instances**
-1. Select the compute instance in the list.
-1. On the top toolbar, select **Stop**.
-
-### Delete all resources
--
-You can also keep the resource group but delete a single workspace. Display the workspace properties and select **Delete**.
-
-## Next steps
-
-In this tutorial, we saw how to upload data to Azure by using `Datastore`. The datastore served as cloud storage for your workspace, giving you a persistent and flexible place to keep your data.
-
-You saw how to modify your training script to accept a data path via the command line. By using `Dataset`, you were able to mount a directory to the remote job.
-
-Now that you have a model, learn:
-
-> [!div class="nextstepaction"]
-> [How to deploy models with Azure Machine Learning](how-to-deploy-managed-online-endpoints.md).
machine-learning Tutorial 1St Experiment Hello World https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-1st-experiment-hello-world.md
- Title: "Tutorial: Get started with a Python script (SDK v2)"-
-description: Get started with your first Python script in Azure Machine Learning. This is part 1 of a three-part getting-started series.
------- Previously updated : 07/10/2022---
-# Tutorial: Get started with a Python script in Azure Machine Learning (part 1 of 3)
--
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](v1/tutorial-1st-experiment-hello-world.md)
-> * [v2 (preview)](tutorial-1st-experiment-hello-world.md)
-
-In this tutorial, you run your first Python script in the cloud with Azure Machine Learning. This tutorial is *part 1 of a three-part tutorial series*.
-
-This tutorial avoids the complexity of training a machine learning model. You will run a "Hello World" Python script in the cloud. You will learn how a control script is used to configure and create a run in Azure Machine Learning.
-
-In this tutorial, you will:
-
-> [!div class="checklist"]
-> * Create and run a "Hello world!" Python script.
-> * Create a Python control script to submit "Hello world!" to Azure Machine Learning.
-> * Understand the Azure Machine Learning concepts in the control script.
-> * Submit and run the "Hello world!" script.
-> * View your code output in the cloud.
-
-## Prerequisites
--- Complete [Quickstart: Set up your workspace to get started with Azure Machine Learning](quickstart-create-resources.md) to create a workspace, compute instance, and compute cluster to use in this tutorial series.--- The Azure Machine Learning Python SDK v2 (preview) installed.-
- To install the SDK you can either,
- * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. For more information, see [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md).
-
- * Use the following commands to install Azure ML Python SDK v2:
- * Uninstall previous preview version:
- ```python
- pip uninstall azure-ai-ml
- ```
- * Install the Azure ML Python SDK v2:
- ```python
- pip install azure-ai-ml
- ```
-
-## Create and run a Python script
-
-This tutorial will use the compute instance as your development computer. First create a few folders and the script:
-
-1. Sign in to the [Azure Machine Learning studio](https://ml.azure.com) and select your workspace if prompted.
-1. On the left, select **Notebooks**
-1. In the **Files** toolbar, select **+**, then select **Create new folder**.
- :::image type="content" source="media/tutorial-1st-experiment-hello-world/create-folder.png" alt-text="Screenshot shows create a new folder tool in toolbar.":::
-1. Name the folder **get-started**.
-1. To the right of the folder name, use the **...** to create another folder under **get-started**.
- :::image type="content" source="media/tutorial-1st-experiment-hello-world/create-sub-folder.png" alt-text="Screenshot shows create a subfolder menu.":::
-1. Name the new folder **src**. Use the **Edit location** link if the file location is not correct.
-1. To the right of the **src** folder, use the **...** to create a new file in the **src** folder.
-1. Name your file *hello.py*. Switch the **File type** to *Python (*.py)*.
-
-Copy this code into your file:
-
-```python
-# src/hello.py
-print("Hello world!")
-```
-
-Your project folder structure will now look like:
--
-### <a name="test"></a>Test your script
-
-You can run your code locally, which in this case means on the compute instance. Running code locally has the benefit of interactive debugging of code.
-
-If you have previously stopped your compute instance, start it now with the **Start compute** tool to the right of the compute dropdown. Wait about a minute for state to change to *Running*.
--
-Select **Save and run script in terminal** to run the script.
--
-You'll see the output of the script in the terminal window that opens. Close the tab and select **Terminate** to close the session.
-
-## <a name="control-script"></a> Create a control script
-
-A *control script* allows you to run your `hello.py` script on different compute resources. You use the control script to control how and where your machine learning code is run.
-
-Select the **...** at the end of **get-started** folder to create a new file. Create a Python file called *run-hello.py* and copy/paste the following code into that file:
-
-```python
-# get-started/run-hello.py
-from azure.ai.ml import MLClient, command, Input
-from azure.identity import DefaultAzureCredential
-from azureml.core import Workspace
-
-# get details of the current Azure ML workspace
-ws = Workspace.from_config()
-
-# default authentication flow for Azure applications
-default_azure_credential = DefaultAzureCredential()
-subscription_id = ws.subscription_id
-resource_group = ws.resource_group
-workspace = ws.name
-
-# client class to interact with Azure ML services and resources, e.g. workspaces, jobs, models and so on.
-ml_client = MLClient(
- default_azure_credential,
- subscription_id,
- resource_group,
- workspace)
-
-# target name of compute where job will be executed
-computeName="cpu-cluster"
-job = command(
- code="./src",
- command="python hello.py",
- environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu@latest",
- compute=computeName,
- display_name="hello-world-example",
-)
-
-returned_job = ml_client.create_or_update(job)
-aml_url = returned_job.studio_url
-print("Monitor your job at", aml_url)
-```
-
-> [!TIP]
-> If you used a different name when you created your compute cluster, make sure to adjust the name in the code `computeName='cpu-cluster'` as well.
-
-### Understand the code
-
-Here's a description of how the control script works:
-
- :::column span="":::
- `ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)`
- :::column-end:::
- :::column span="2":::
- [MLClient](/python/api/azure-ai-ml/azure.ai.ml.mlclient) manages your Azure Machine Learning workspace and its assets and resources.
- :::column-end:::
- :::column span="":::
- `job = command(...)`
- :::column-end:::
- :::column span="2":::
- [command](/python/api/azure-ai-ml/azure.ai.ml#azure-ai-ml-command) provides a simple way to construct a standalone command job or one as part of a dsl.pipeline.
- :::column-end:::
- :::column span="":::
- `returned_job = ml_client.create_or_update(job)`
- :::column-end:::
- :::column span="2":::
- Submits your script. This submission is called a [job](/python/api/azure-ai-ml/azure.ai.ml.entities.job). A job encapsulates a single execution of your code. Use a job to monitor the script progress, capture the output, analyze the results, visualize metrics, and more.
- :::column-end:::
- :::column span="":::
- `aml_url = returned_job.studio_url`
- :::column-end:::
- :::column span="2":::
- The `job` object provides a handle on the execution of your code. Monitor its progress from the Azure Machine Learning studio with the URL that's printed from the Python script.
- :::column-end:::
-
-## <a name="submit"></a> Submit and run your code in the cloud
-
-1. Select **Save and run script in terminal** to run your control script, which in turn runs `hello.py` on the compute cluster that you created in the [setup tutorial](quickstart-create-resources.md).
-
-1. In the terminal, you may be asked to sign in to authenticate. Copy the code and follow the link to complete this step.
-
-1. Once you're authenticated, you'll see a link in the terminal. Select the link to view the job.
-
- [!INCLUDE [amlinclude-info](../../includes/machine-learning-py38-ignore.md)]
-
-## View the output
-
-1. In the page that opens, you'll see the job status.
-1. When the status of the job is **Completed**, select **Output + logs** at the top of the page.
-1. Select **std_log.txt** to view the output of your job.
-
-## <a name="monitor"></a>Monitor your code in the cloud in the studio
-
-The output from your script will contain a link to the studio that looks something like this:
-`https://ml.azure.com/runs/<run-id>?wsid=/subscriptions/<subscription-id>/resourcegroups/<resource-group>/workspaces/<workspace-name>`.
-
-Follow the link. At first, you'll see a status of **Queued** or **Preparing**. The very first run will take 5-10 minutes to complete. This is because the following occurs:
-
-* A docker image is built in the cloud
-* The compute cluster is resized from 0 to 1 node
-* The docker image is downloaded to the compute.
-
-Subsequent jobs are much quicker (~15 seconds) as the docker image is cached on the compute. You can test this by resubmitting the code below after the first job has completed.
-
-Wait about 10 minutes. You'll see a message that the run has completed. Then use **Refresh** to see the status change to _Completed_. Once the job completes, go to the **Outputs + logs** tab.
-
-The `std_log.txt` file contains the standard output from a run. This file can be useful when you're debugging remote runs in the cloud.
-
-```txt
-hello world
-```
-
-## Next steps
-
-In this tutorial, you took a simple "Hello world!" script and ran it on Azure. You saw how to connect to your Azure Machine Learning workspace, create an experiment, and submit your `hello.py` code to the cloud.
-
-In the next tutorial, you build on these learnings by running something more interesting than `print("Hello world!")`.
-
-> [!div class="nextstepaction"]
-> [Tutorial: Train a model](tutorial-1st-experiment-sdk-train.md)
-
-> [!NOTE]
-> If you want to finish the tutorial series here and not progress to the next step, remember to [clean up your resources](tutorial-1st-experiment-bring-data.md#clean-up-resources).
machine-learning Tutorial 1St Experiment Sdk Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-1st-experiment-sdk-train.md
- Title: "Tutorial: Train a first Python machine learning model (SDK v2)"-
-description: How to train a machine learning model in Azure Machine Learning. This is part 2 of a three-part getting-started series.
------- Previously updated : 07/10/2022---
-# Tutorial: Train your first machine learning model (part 2 of 3)
--
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](v1/tutorial-1st-experiment-sdk-train.md)
-> * [v2 (preview)](tutorial-1st-experiment-sdk-train.md)
-
-This tutorial shows you how to train a machine learning model in Azure Machine Learning. This tutorial is _part 2 of a three-part tutorial series_.
-
-In [Part 1: Run "Hello world!"](tutorial-1st-experiment-hello-world.md) of the series, you learned how to use a control script to run a job in the cloud.
-
-In this tutorial, you take the next step by submitting a script that trains a machine learning model. This example will help you understand how Azure Machine Learning eases consistent behavior between local debugging and remote runs.
-
-In this tutorial, you:
-
-> [!div class="checklist"]
-> * Create a training script.
-> * Use Conda to define an Azure Machine Learning environment.
-> * Create a control script.
-> * Understand Azure Machine Learning classes (`Environment`, `Run`, `Metrics`).
-> * Submit and run your training script.
-> * View your code output in the cloud.
-> * Log metrics to Azure Machine Learning.
-> * View your metrics in the cloud.
-
-## Prerequisites
--- Completion of [part 1](tutorial-1st-experiment-hello-world.md) of the series.-
-## Create training scripts
-
-First you define the neural network architecture in a *model.py* file. All your training code will go into the `src` subdirectory, including *model.py*.
-
-The training code is taken from [this introductory example](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html) from PyTorch. Note that the Azure Machine Learning concepts apply to any machine learning code, not just PyTorch.
-
-1. Create a *model.py* file in the **src** subfolder. Copy this code into the file:
-
- ```python
- import torch.nn as nn
- import torch.nn.functional as F
-
-
- class Net(nn.Module):
- def __init__(self):
- super(Net, self).__init__()
- self.conv1 = nn.Conv2d(3, 6, 5)
- self.pool = nn.MaxPool2d(2, 2)
- self.conv2 = nn.Conv2d(6, 16, 5)
- self.fc1 = nn.Linear(16 * 5 * 5, 120)
- self.fc2 = nn.Linear(120, 84)
- self.fc3 = nn.Linear(84, 10)
-
- def forward(self, x):
- x = self.pool(F.relu(self.conv1(x)))
- x = self.pool(F.relu(self.conv2(x)))
- x = x.view(-1, 16 * 5 * 5)
- x = F.relu(self.fc1(x))
- x = F.relu(self.fc2(x))
- x = self.fc3(x)
- return x
- ```
-
-1. On the toolbar, select **Save** to save the file. Close the tab if you wish.
-
-1. Next, define the training script, also in the **src** subfolder. This script downloads the CIFAR10 dataset by using the PyTorch `torchvision.datasets` APIs, sets up the network defined in *model.py*, and trains it for two epochs by using standard SGD and cross-entropy loss.
-
- Create a *train.py* script in the **src** subfolder:
-
- ```python
- import torch
- import torch.optim as optim
- import torchvision
- import torchvision.transforms as transforms
-
- from model import Net
-
- # download CIFAR 10 data
- trainset = torchvision.datasets.CIFAR10(
- root="../data",
- train=True,
- download=True,
- transform=torchvision.transforms.ToTensor(),
- )
- trainloader = torch.utils.data.DataLoader(
- trainset, batch_size=4, shuffle=True, num_workers=2
- )
-
-
- if __name__ == "__main__":
-
- # define convolutional network
- net = Net()
-
- # set up pytorch loss / optimizer
- criterion = torch.nn.CrossEntropyLoss()
- optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
-
- # train the network
- for epoch in range(2):
-
- running_loss = 0.0
- for i, data in enumerate(trainloader, 0):
- # unpack the data
- inputs, labels = data
-
- # zero the parameter gradients
- optimizer.zero_grad()
-
- # forward + backward + optimize
- outputs = net(inputs)
- loss = criterion(outputs, labels)
- loss.backward()
- optimizer.step()
-
- # print statistics
- running_loss += loss.item()
- if i % 2000 == 1999:
- loss = running_loss / 2000
- print(f"epoch={epoch + 1}, batch={i + 1:5}: loss {loss:.2f}")
- running_loss = 0.0
-
- print("Finished Training")
- ```
-
-1. You now have the following folder structure:
-
- :::image type="content" source="media/tutorial-1st-experiment-sdk-train/directory-structure.png" alt-text="Directory structure shows train.py in src subdirectory":::
-
-## <a name="test-local"></a> Test locally
-
-Select **Save and run script in terminal** to run the *train.py* script directly on the compute instance.
-
-After the script completes, select **Refresh** above the file folders. You'll see the new data folder called **get-started/data**. Expand this folder to view the downloaded data.
--
-## Create a Python environment
-
-Azure Machine Learning provides the concept of an [environment](/python/api/azure-ai-ml/azure.ai.ml.entities.environment) to represent a reproducible, versioned Python environment for running experiments. It's easy to create an environment from a local Conda or pip environment.
-
-First you'll create a file with the package dependencies.
-
-1. Create a new file in the **get-started** folder called `pytorch-env.yml`:
-
- ```yml
- name: pytorch-env
- channels:
- - defaults
- - pytorch
- dependencies:
- - python=3.8.5
- - pytorch
- - torchvision
- ```
-1. On the toolbar, select **Save** to save the file. Close the tab if you wish.
-
-## <a name="create-local"></a> Create the control script
-
-The difference between the following control script and the one that you used to submit "Hello world!" is that you add a couple of extra lines to set the environment.
-
-Create a new Python file in the **get-started** folder called `run-pytorch.py`:
-
-```python
-# run-pytorch.py
-from azure.ai.ml import MLClient, command, Input
-from azure.identity import DefaultAzureCredential
-from azure.ai.ml.entities import Environment
-from azureml.core import Workspace
-
-if __name__ == "__main__":
- # get details of the current Azure ML workspace
- ws = Workspace.from_config()
-
- # default authentication flow for Azure applications
- default_azure_credential = DefaultAzureCredential()
- subscription_id = ws.subscription_id
- resource_group = ws.resource_group
- workspace = ws.name
-
- # client class to interact with Azure ML services and resources, e.g. workspaces, jobs, models and so on.
- ml_client = MLClient(
- default_azure_credential,
- subscription_id,
- resource_group,
- workspace)
-
- env_name = "pytorch-env"
- env_docker_image = Environment(
- image="pytorch/pytorch:latest",
- name=env_name,
- conda_file="pytorch-env.yml",
- )
- ml_client.environments.create_or_update(env_docker_image)
-
- # target name of compute where job will be executed
- computeName="cpu-cluster"
- job = command(
- code="./src",
- command="python train.py",
- environment=f"{env_name}@latest",
- compute=computeName,
- display_name="day1-experiment-train",
- )
-
- returned_job = ml_client.create_or_update(job)
- aml_url = returned_job.studio_url
- print("Monitor your job at", aml_url)
-```
-
-> [!TIP]
-> If you used a different name when you created your compute cluster, make sure to adjust the name in the code `computeName='cpu-cluster'` as well.
-
-### Understand the code changes
-
- :::column span="":::
- `env_docker_image = ...`
- :::column-end:::
- :::column span="2":::
- Creates the custom environment against the pytorch base image, with additional conda file to install.
- :::column-end:::
- :::column span="":::
- `environment=f"{env_name}@latest"`
- :::column-end:::
- :::column span="2":::
- Adds the environment to [command](/python/api/azure-ai-ml/azure.ai.ml#azure-ai-ml-command).
- :::column-end:::
-
-## <a name="submit"></a> Submit the job to Azure Machine Learning
-
-1. Select **Save and run script in terminal** to run the *run-pytorch.py* script.
-
-1. You'll see a link in the terminal window that opens. Select the link to view the job.
-
- [!INCLUDE [amlinclude-info](../../includes/machine-learning-py38-ignore.md)]
-
-### View the output
-
-1. In the page that opens, you'll see the job status. The first time you run this script, Azure Machine Learning will build a new Docker image from your PyTorch environment. The whole job might take around 10 minutes to complete. This image will be reused in future jobs to make them much quicker.
-1. You can view the Docker build logs in the Azure Machine Learning studio. Select the **Outputs + logs** tab, and then select **20_image_build_log.txt**.
-1. When the status of the job is **Completed**, select **Outputs + logs**.
-1. Select **std_log.txt** to view the output of your job.
-
-```txt
-Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ../data/cifar-10-python.tar.gz
-Extracting ../data/cifar-10-python.tar.gz to ../data
-epoch=1, batch= 2000: loss 2.30
-epoch=1, batch= 4000: loss 2.17
-epoch=1, batch= 6000: loss 1.99
-epoch=1, batch= 8000: loss 1.87
-epoch=1, batch=10000: loss 1.72
-epoch=1, batch=12000: loss 1.63
-epoch=2, batch= 2000: loss 1.54
-epoch=2, batch= 4000: loss 1.53
-epoch=2, batch= 6000: loss 1.50
-epoch=2, batch= 8000: loss 1.46
-epoch=2, batch=10000: loss 1.44
-epoch=2, batch=12000: loss 1.41
-Finished Training
-
-```
-
-If you see the error `Your total snapshot size exceeds the limit`, the **data** folder is located inside the `code` directory (`./src`) used by the `command` job, so it's being included in the job snapshot.
-
-Select the **...** at the end of the folder, then select **Move** to move **data** to the **get-started** folder.
-
-## <a name="log"></a> Log training metrics
-
-Now that you have a model training script in Azure Machine Learning, let's start tracking some performance metrics.
-
-The current training script prints metrics to the terminal. Azure Machine Learning supports logging and tracking experiments using [MLflow tracking](https://www.mlflow.org/docs/latest/tracking.html). By adding a few lines of code, you gain the ability to visualize metrics in the studio and to compare metrics between multiple jobs.
-
-### Modify *train.py* to include logging
-
-1. Modify your *train.py* script to include two more lines of code:
-
- ```python
- import torch
- import torch.optim as optim
- import torchvision
- import torchvision.transforms as transforms
- from model import Net
- import mlflow
--
- # ADDITIONAL CODE: OPTIONAL: turn on autolog
- # mlflow.autolog()
-
- # download CIFAR 10 data
- trainset = torchvision.datasets.CIFAR10(
- root='./data',
- train=True,
- download=True,
- transform=torchvision.transforms.ToTensor()
- )
- trainloader = torch.utils.data.DataLoader(
- trainset,
- batch_size=4,
- shuffle=True,
- num_workers=2
- )
--
- if __name__ == "__main__":
- # define convolutional network
- net = Net()
- # set up pytorch loss / optimizer
- criterion = torch.nn.CrossEntropyLoss()
- optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
- # train the network
- for epoch in range(2):
- running_loss = 0.0
- for i, data in enumerate(trainloader, 0):
- # unpack the data
- inputs, labels = data
- # zero the parameter gradients
- optimizer.zero_grad()
- # forward + backward + optimize
- outputs = net(inputs)
- loss = criterion(outputs, labels)
- loss.backward()
- optimizer.step()
- # print statistics
- running_loss += loss.item()
- if i % 2000 == 1999:
- loss = running_loss / 2000
- # ADDITIONAL CODE: log loss metric to AML
- mlflow.log_metric('loss', loss)
- print(f'epoch={epoch + 1}, batch={i + 1:5}: loss {loss:.2f}')
- running_loss = 0.0
- print('Finished Training')
- ```
-
-2. **Save** this file, then close the tab if you wish.
-
-#### Understand the additional two lines of code
-
-```python
-# ADDITIONAL CODE: OPTIONAL: turn on autolog
-mlflow.autolog()
-```
-
-With Azure Machine Learning and MLFlow, users can log metrics, model parameters and model artifacts automatically when training a model.
-
-```python
-# ADDITIONAL CODE: log loss metric to AML
-mlflow.log_metric('loss', loss)
-```
-
-You can log individual metrics as well.
-
-Metrics in Azure Machine Learning are:
-- Organized by experiment and job, so it's easy to keep track of and compare metrics.
-- Equipped with a UI so you can visualize training performance in the studio.
-- Designed to scale, so you keep these benefits even as you run hundreds of experiments.
-
-### Update the Conda environment file
-
-The `train.py` script just took new dependencies on `mlflow` and `azureml-mlflow`. Update `pytorch-env.yml` to reflect this change:
-
-```yml
-name: pytorch-env
-channels:
- - defaults
- - pytorch
-dependencies:
- - python=3.8.5
- - pytorch
- - torchvision
- - pip
- - pip:
- - mlflow
- - azureml-mlflow
-```
-
-Make sure you save this file before you submit the job.
-
-### <a name="submit-again"></a> Submit the job to Azure Machine Learning
-
-Select the tab for the *run-pytorch.py* script, then select **Save and run script in terminal** to re-run the *run-pytorch.py* script. Make sure you've saved your changes to `pytorch-env.yml` first.
-
-This time when you visit the studio, go to the **Metrics** tab where you can now see live updates on the model training loss! It may take 1 to 2 minutes before the training begins.
--
-## Next steps
-
-In this session, you upgraded from a basic "Hello world!" script to a more realistic training script that required a specific Python environment to run. You saw how to use curated Azure Machine Learning environments. Finally, you saw how in a few lines of code you can log metrics to Azure Machine Learning.
-
-In the next session, you'll see how to work with data in Azure Machine Learning by uploading the CIFAR10 dataset to Azure.
-
-> [!div class="nextstepaction"]
-> [Tutorial: Bring your own data](tutorial-1st-experiment-bring-data.md)
-
-> [!NOTE]
-> If you want to finish the tutorial series here and not progress to the next step, remember to [clean up your resources](tutorial-1st-experiment-bring-data.md#clean-up-resources).
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
Title: 'Tutorial: AutoML- train object detection model'
-description: Train an object detection model to identify if an image contains certain objects with automated ML and the Azure Machine Learning CLI v2 and Python SDK v2(preview).
+description: Train an object detection model to identify if an image contains certain objects with automated ML and the Azure Machine Learning CLI v2 and Python SDK v2.
Last updated 05/26/2022-+
-# Tutorial: Train an object detection model (preview) with AutoML and Python
+# Tutorial: Train an object detection model with AutoML and Python
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
> * [v1](v1/tutorial-auto-train-image-models-v1.md) > * [v2 (current version)](tutorial-auto-train-image-models.md)
->[!IMPORTANT]
-> The features presented in this article are in preview. They should be considered [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features that might change at any time.
-In this tutorial, you learn how to train an object detection model using Azure Machine Learning automated ML with the Azure Machine Learning CLI extension v2 or the Azure Machine Learning Python SDK v2 (preview).
+In this tutorial, you learn how to train an object detection model using Azure Machine Learning automated ML with the Azure Machine Learning CLI extension v2 or the Azure Machine Learning Python SDK v2.
This object detection model identifies whether the image contains objects, such as a can, carton, milk bottle, or water bottle. Automated ML accepts training data and configuration settings, and automatically iterates through combinations of different feature normalization/standardization methods, models, and hyperparameter settings to arrive at the best model.
limits:
# [Python SDK](#tab/python) [!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=limit-settings)]+
az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZUR
``` # [Python SDK](#tab/python)
- [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
When you've configured your AutoML Job to the desired settings, you can submit the job.
In this automated machine learning tutorial, you did the following tasks:
> * Deployed your model > * Visualized detections
-* [Learn more about computer vision in automated ML (preview)](concept-automated-ml.md#computer-vision-preview).
-* [Learn how to set up AutoML to train computer vision models with Python (preview)](how-to-auto-train-image-models.md).
+* [Learn more about computer vision in automated ML](concept-automated-ml.md#computer-vision).
+* [Learn how to set up AutoML to train computer vision models with Python](how-to-auto-train-image-models.md).
* [Learn how to configure incremental training on computer vision models](how-to-auto-train-image-models.md#incremental-training-optional). * See [what hyperparameters are available for computer vision tasks](reference-automl-images-hyperparameters.md). * Code examples:
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md
Last updated 10/21/2021-
-# Customer intent: As a non-coding data scientist, I want to use automated machine learning to build a demand forecasting model.
+
+#Customer intent: As a non-coding data scientist, I want to use automated machine learning to build a demand forecasting model.
# Tutorial: Forecast demand with no-code automated machine learning in the Azure Machine Learning studio
You won't write any code in this tutorial, you'll use the studio interface to pe
Also try automated machine learning for these other model types: * For a no-code example of a classification model, see [Tutorial: Create a classification model with automated ML in Azure Machine Learning](tutorial-first-experiment-automated-ml.md).
-* For a code first example of an object detection model, see the [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
+* For a code first example of an object detection model, see the [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md).
## Prerequisites
machine-learning Tutorial Azure Ml In A Day https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-azure-ml-in-a-day.md
+
+ Title: "Tutorial: Azure ML in a day"
+
+description: Use Azure Machine Learning to train and deploy a model in a cloud-based Python Jupyter Notebook.
++++++ Last updated : 09/15/2022+
+#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
++
+# Tutorial: Azure Machine Learning in a day
++
+Learn how a data scientist uses Azure Machine Learning (Azure ML) to train a model, then uses the model for prediction. This tutorial will help you become familiar with the core concepts of Azure ML and their most common usage.
+
+You'll learn how to submit a *command job* to run your *training script* on a specified *compute resource*, configured with the *job environment* necessary to run the script.
+
+The *training script* handles the data preparation, then trains and registers a model. Once you have the model, you'll deploy it as an *endpoint*, then call the endpoint for inferencing.
+
+The steps you'll take are:
+
+> [!div class="checklist"]
+> * Connect to your Azure ML workspace
+> * Create your compute resource and job environment
+> * Create your training script
+> * Create and run your command job to run the training script on the compute resource, configured with the appropriate job environment
+> * View the output of your training script
+> * Deploy the newly-trained model as an endpoint
+> * Call the Azure ML endpoint for inferencing
++
+## Prerequisites
+
+* Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to:
+ * Create a workspace.
+ * Create a cloud-based compute instance to use for your development environment.
+
+* Create a new notebook or copy our notebook.
+ * Follow the [Quickstart: Run Jupyter notebooks in Azure Machine Learning studio](quickstart-run-notebooks.md) steps to create a new notebook.
+ * Or use the steps in the quickstart to [clone the v2 tutorials folder](quickstart-run-notebooks.md#learn-from-sample-notebooks), then open the notebook from the **tutorials/azureml-in-a-day/azureml-in-a-day.ipynb** folder in your **File** section.
+
+## Run your notebook
+
+On the top bar, select the compute instance you created during the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to use for running the notebook.
+
+> [!Important]
+> The rest of this tutorial contains cells of the tutorial notebook. Copy/paste them into your new notebook, or switch to the notebook now if you cloned it.
+>
+> To run a single code cell in a notebook, click the code cell and hit **Shift+Enter**. Or, run the entire notebook by choosing **Run all** from the top toolbar.
+
+## Connect to the workspace
+
+Before you dive into the code, you'll need to connect to your Azure ML workspace. The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning.
+
+We're using `DefaultAzureCredential` to get access to the workspace.
+`DefaultAzureCredential` is used to handle most Azure SDK authentication scenarios.
+
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=credential)]
+
+In the next cell, enter your Subscription ID, Resource Group name and Workspace name. To find these values:
+
+1. In the upper right Azure Machine Learning studio toolbar, select your workspace name.
+1. Copy the value for workspace, resource group and subscription ID into the code.
+1. You'll need to copy one value, close the area and paste, then come back for the next one.
++
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=ml_client)]
+
+The result is a handler to the workspace that you'll use to manage other resources and jobs.
+
+> [!IMPORTANT]
+> Creating MLClient will not connect to the workspace. The client initialization is lazy; it waits until the first time it needs to make a call (in the notebook below, that will happen during compute creation).
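The referenced notebook cells contain the actual connection code. For orientation only, here's a minimal sketch of what connecting with `DefaultAzureCredential` and `MLClient` typically looks like; the subscription, resource group, and workspace placeholders are assumptions you'd replace with your own values.

```python
# Minimal sketch (assumed placeholder values): connect to an Azure ML workspace with the v2 SDK.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

ml_client = MLClient(
    credential=credential,
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)
# No request is made yet; the client connects lazily on its first call.
```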
+
+## Create a compute resource to run your job
+
+You'll need a compute resource for running a job. It can be a single- or multi-node machine with a Linux or Windows OS, or a specific compute fabric like Spark.
+
+You'll provision a Linux compute cluster. See the [full list of VM sizes and prices](https://azure.microsoft.com/pricing/details/machine-learning/).
+
+For this example, you only need a basic cluster, so you'll use a Standard_DS3_v2 VM size with 2 vCPU cores and 7 GB of RAM to create an Azure ML compute cluster.
+
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=cpu_compute_target)]
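The notebook cell above provisions the cluster. As a rough sketch of the pattern (not the notebook's exact code), creating an `AmlCompute` cluster with the v2 SDK looks something like the following; the cluster name and scaling settings shown here are assumptions.

```python
# Minimal sketch (assumed names and settings): provision a CPU compute cluster.
from azure.ai.ml.entities import AmlCompute

cpu_cluster = AmlCompute(
    name="cpu-cluster",            # assumed cluster name; use your own
    type="amlcompute",
    size="Standard_DS3_v2",
    min_instances=0,               # scale down to zero when idle
    max_instances=4,
    idle_time_before_scale_down=180,
    tier="Dedicated",
)
# ml_client is the client created when you connected to the workspace.
cpu_cluster = ml_client.compute.begin_create_or_update(cpu_cluster).result()
```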
+
+## Create a job environment
+
+To run your AzureML job on your compute resource, you'll need an [environment](concept-environments.md). An environment lists the software runtime and libraries that you want installed on the compute where you'll be training. It's similar to your Python environment on your local machine.
+
+AzureML provides many curated or ready-made environments, which are useful for common training and inference scenarios. You can also create your own custom environments using a docker image, or a conda configuration.
+
+In this example, you'll create a custom conda environment for your jobs, using a conda yaml file.
+
+First, create a directory to store the file in.
+
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=dependencies_dir)]
+
+Now, create the file in the dependencies directory.
+
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=write_model)]
+
+The specification contains some common packages that you'll use in your job (numpy, pip).
+
+Reference this *yaml* file to create and register this custom environment in your workspace:
+
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=custom_env_name)]
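For reference, registering a custom environment from a conda file generally follows this pattern; the environment name, conda file path, and base image below are assumptions rather than the notebook's exact values.

```python
# Minimal sketch (assumed names and paths): register a custom conda environment.
import os
from azure.ai.ml.entities import Environment

dependencies_dir = "./dependencies"              # assumed folder holding the conda file
custom_env = Environment(
    name="aml-scikit-learn",                     # assumed environment name
    description="Custom environment for the tutorial job",
    conda_file=os.path.join(dependencies_dir, "conda.yml"),
    image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",  # example base image
)
custom_env = ml_client.environments.create_or_update(custom_env)
print(f"Registered environment {custom_env.name}, version {custom_env.version}")
```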
+
+## What is a command job?
+
+You'll create an Azure ML *command job* to train a model for credit default prediction. The command job is used to run a *training script* in a specified environment on a specified compute resource. You've already created the environment and the compute resource. Next you'll create the training script.
+
+The *training script* handles the data preparation, training and registering of the trained model. In this tutorial, you'll create a Python training script.
+
+Command jobs can be run from CLI, Python SDK, or studio interface. In this tutorial, you'll use the Azure ML Python SDK v2 to create and run the command job.
+
+After running the training job, you'll deploy the model, then use it to produce a prediction.
++
+## Create training script
+
+Let's start by creating the training script - the *main.py* python file.
+
+First create a source folder for the script:
+
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=train_src_dir)]
+
+This script handles the preprocessing of the data, splitting it into test and train data. It then consumes this data to train a tree-based model and returns the output model.
+
+[MLFlow](https://mlflow.org/docs/latest/tracking.html) will be used to log the parameters and metrics during our pipeline run.
+
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=write_main)]
+
+As you can see in this script, once the model is trained, the model file is saved and registered to the workspace. Now you can use the registered model in inferencing endpoints.
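The notebook writes the full *main.py* for you. The following is only an abbreviated sketch of the shape of such a script, assuming a scikit-learn classifier, MLflow autologging, and a hypothetical `target` column; the argument names and model choice are illustrative, not the notebook's exact code.

```python
# Abbreviated sketch of a training script (illustrative, not the notebook's main.py).
import argparse
import mlflow
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--data", type=str, help="path or URL to the input data")
    parser.add_argument("--test_train_ratio", type=float, default=0.25)
    parser.add_argument("--learning_rate", type=float, default=0.1)
    parser.add_argument("--registered_model_name", type=str)
    args = parser.parse_args()

    mlflow.start_run()
    mlflow.sklearn.autolog()  # log parameters, metrics, and the model automatically

    df = pd.read_csv(args.data)
    X = df.drop(columns=["target"])   # "target" column name is an assumption
    y = df["target"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=args.test_train_ratio, random_state=0
    )

    clf = GradientBoostingClassifier(learning_rate=args.learning_rate)
    clf.fit(X_train, y_train)
    print("Test accuracy:", clf.score(X_test, y_test))

    # Register the trained model so it can be deployed later.
    mlflow.sklearn.log_model(
        sk_model=clf,
        artifact_path=args.registered_model_name,
        registered_model_name=args.registered_model_name,
    )
    mlflow.end_run()

if __name__ == "__main__":
    main()
```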
+
+## Configure the command
+
+Now that you have a script that can perform the desired tasks, you'll use the general-purpose **command** that can run command-line actions. This command-line action can directly call system commands or run a script.
+
+Here, you'll create input variables to specify the input data, split ratio, learning rate and registered model name. The command script will:
+* Use the compute created earlier to run this command.
+* Use the environment created earlier - you can use the `@latest` notation to indicate the latest version of the environment when the command is run.
+* Configure some metadata like display name, experiment name etc. An *experiment* is a container for all the iterations you do on a certain project. All the jobs submitted under the same experiment name would be listed next to each other in Azure ML studio.
+* Configure the command line action itself - `python main.py` in this case. The inputs/outputs are accessible in the command via the `${{ ... }}` notation.
++
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=registered_model_name)]
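As a sketch of how those pieces fit together with the v2 SDK (the input names, data path, environment name, and model name below are placeholders, not the notebook's exact values):

```python
# Minimal sketch (placeholder values): configure a command job with inputs.
from azure.ai.ml import command, Input

registered_model_name = "credit_defaults_model"   # assumed model name

job = command(
    inputs=dict(
        data=Input(type="uri_file", path="<URL_OR_PATH_TO_DATA>"),
        test_train_ratio=0.2,
        learning_rate=0.25,
        registered_model_name=registered_model_name,
    ),
    code="./src",                                  # folder that contains main.py
    command=(
        "python main.py --data ${{inputs.data}} "
        "--test_train_ratio ${{inputs.test_train_ratio}} "
        "--learning_rate ${{inputs.learning_rate}} "
        "--registered_model_name ${{inputs.registered_model_name}}"
    ),
    environment="aml-scikit-learn@latest",         # environment registered earlier
    compute="cpu-cluster",                         # compute cluster created earlier
    experiment_name="train_model_credit_default_prediction",
    display_name="credit_default_prediction",
)
```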
+
+## Submit the job
+
+It's now time to submit the job to run in AzureML. This time you'll use `create_or_update` on `ml_client.jobs`.
++
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=create_job)]
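As a sketch, submitting the configured command object is a single call; the returned job object includes a studio URL you can open to monitor progress.

```python
# Submit the command job configured above and print its studio URL.
returned_job = ml_client.jobs.create_or_update(job)
print("Monitor your job at", returned_job.studio_url)
```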
+
+## View job output and wait for job completion
+
+View the job in AzureML studio by selecting the link in the output of the previous cell.
+
+The output of this job will look like this in the AzureML studio. Explore the tabs for various details like metrics, outputs, and so on. Once completed, the job will register a model in your workspace as a result of training.
+
+![Screenshot that shows the job overview](media/tutorial-azure-ml-in-a-day/view-job.gif "Overview of the job.")
+
+> [!IMPORTANT]
+> Wait until the status of the job is complete before returning to this notebook to continue. The job will take 2 to 3 minutes to run. It could take longer (up to 10 minutes) if the compute cluster has been scaled down to zero nodes and the custom environment is still building.
+
+## Deploy the model as an online endpoint
+
+Now deploy your machine learning model as a web service in the Azure cloud, an [`online endpoint`](concept-endpoints.md).
+
+To deploy a machine learning service, you usually need:
+
+* The model assets (file, metadata) that you want to deploy. You've already registered these assets in your training job.
+* Some code to run as a service. The code executes the model on a given input request. This entry script receives data submitted to a deployed web service and passes it to the model, then returns the model's response to the client. The script is specific to your model. The entry script must understand the data that the model expects and returns. With an MLFlow model, as in this tutorial, this script is automatically created for you. Samples of scoring scripts can be found [here](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/endpoints/online).
++
+## Create a new online endpoint
+
+Now that you have a registered model and an inference script, it's time to create your online endpoint. The endpoint name needs to be unique in the entire Azure region. For this tutorial, you'll create a unique name using [`UUID`](https://en.wikipedia.org/wiki/Universally_unique_identifier).
++
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=online_endpoint_name)]
+
+> [!NOTE]
+> Expect the endpoint creation to take approximately 6 to 8 minutes.
+
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=endpoint)]
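A rough sketch of the pattern, with the endpoint name built from a UUID; the name prefix, description, and auth mode shown here are assumptions.

```python
# Minimal sketch (assumed values): create a managed online endpoint with a unique name.
import uuid
from azure.ai.ml.entities import ManagedOnlineEndpoint

online_endpoint_name = "credit-endpoint-" + str(uuid.uuid4())[:8]

endpoint = ManagedOnlineEndpoint(
    name=online_endpoint_name,
    description="Online endpoint for the tutorial model",
    auth_mode="key",
)
endpoint = ml_client.online_endpoints.begin_create_or_update(endpoint).result()
print(f"Endpoint {endpoint.name} provisioning state: {endpoint.provisioning_state}")
```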
+
+Once you've created an endpoint, you can retrieve it as below:
+
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=retrieve_endpoint)]
+
+## Deploy the model to the endpoint
+
+Once the endpoint is created, deploy the model with the entry script. Each endpoint can have multiple deployments. Direct traffic to these deployments can be specified using rules. Here you'll create a single deployment that handles 100% of the incoming traffic. The deployment uses an arbitrary color name, for example *blue*, *green*, or *red*.
+
+You can check the **Models** page in Azure ML studio to identify the latest version of your registered model. Alternatively, the code below retrieves the latest version number for you to use.
++
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=latest_model_version)]
+
+Deploy the latest version of the model.
+
+> [!NOTE]
+> Expect this deployment to take approximately 6 to 8 minutes.
++
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=blue_deployment)]
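For orientation, a sketch of picking the latest registered model version and deploying it follows; the model name, deployment name, instance size, and traffic settings are assumptions consistent with the sketches above, not the notebook's exact code.

```python
# Minimal sketch (assumed values): deploy the latest registered model version to the endpoint.
from azure.ai.ml.entities import ManagedOnlineDeployment

registered_model_name = "credit_defaults_model"    # assumed model name

# Pick the highest version number of the registered model.
latest_model_version = max(
    int(m.version) for m in ml_client.models.list(name=registered_model_name)
)
model = ml_client.models.get(name=registered_model_name, version=latest_model_version)

blue_deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name=online_endpoint_name,
    model=model,
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
blue_deployment = ml_client.online_deployments.begin_create_or_update(blue_deployment).result()

# Route all incoming traffic to this deployment.
endpoint.traffic = {"blue": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```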
++
+### Test with a sample query
+
+Now that the model is deployed to the endpoint, you can run inference with it.
+
+Create a sample request file following the design expected in the run method in the score script.
+
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=deploy_dir)]
++
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=write_sample)]
+
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=test)]
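A sketch of invoking the endpoint with a JSON request file; the file path and deployment name are assumptions consistent with the sketches above.

```python
# Minimal sketch (assumed path): score a sample request against the "blue" deployment.
response = ml_client.online_endpoints.invoke(
    endpoint_name=online_endpoint_name,
    request_file="./deploy/sample-request.json",   # assumed location of the sample request
    deployment_name="blue",
)
print(response)
```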
+
+## Clean up resources
+
+If you're not going to use the endpoint, delete it to stop using the resource. Make sure no other deployments are using an endpoint before you delete it.
+
+> [!NOTE]
+> Expect this step to take approximately 6 to 8 minutes.
+
+[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=delete_endpoint)]
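As a sketch, deleting the endpoint (and its deployments) is a single long-running call.

```python
# Minimal sketch: delete the endpoint to stop incurring charges for the deployment.
ml_client.online_endpoints.begin_delete(name=online_endpoint_name).result()
```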
++
+### Delete everything
+
+Use these steps to delete your Azure Machine Learning workspace and all compute resources.
+++
+## Next steps
+
++ Convert this tutorial into a production ready [pipeline with reusable components](tutorial-pipeline-python-sdk.md).
++ Learn about all of the [deployment options](how-to-deploy-managed-online-endpoints.md) for Azure Machine Learning.
++ Learn how to [authenticate to the deployed model](how-to-authenticate-online-endpoint.md).
machine-learning Tutorial Create Secure Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace-template.md
description: Use a template to create an Azure Machine Learning workspace and re
+
The Bicep template is made up of the [main.bicep](https://github.com/Azure/azure
| [keyvault.bicep](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure/modules/keyvault.bicep) | Defines the Azure Key Vault used by the workspace. | | [containerregistry.bicep](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure/modules/containerregistry.bicep) | Defines the Azure Container Registry used by the workspace. | | [applicationinsights.bicep](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure/modules/applicationinsights.bicep) | Defines the Azure Application Insights instance used by the workspace. |
-| [machinelearningnetworking.bicep](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure/modules/machinelearningnetworking.bicep) | Defines te private endpoints and DNS zones for the Azure Machine Learning workspace. |
+| [machinelearningnetworking.bicep](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure/modules/machinelearningnetworking.bicep) | Defines the private endpoints and DNS zones for the Azure Machine Learning workspace. |
| [Machinelearning.bicep](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure/modules/machinelearning.bicep) | Defines the Azure Machine Learning workspace. | | [machinelearningcompute.bicep](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure/modules/machinelearningcompute.bicep) | Defines an Azure Machine Learning compute cluster and compute instance. | | [privateaks.bicep](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure/modules/privateaks.bicep) | Defines an Azure Kubernetes Services cluster instance. |
After the template completes, use the following steps to connect to the DSVM:
> * [Create/manage VMs (Windows)](../virtual-machines/windows/tutorial-manage-vm.md). > * [Create/manage compute instance](how-to-create-manage-compute-instance.md).
-To continue learning how to use the secured workspace from the DSVM, see [Tutorial: Get started with a Python script in Azure Machine Learning](tutorial-1st-experiment-hello-world.md).
+To continue learning how to use the secured workspace from the DSVM, see [Tutorial: Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md).
To learn more about common secure workspace configurations and input/output requirements, see [Azure Machine Learning secure workspace traffic flow](concept-secure-network-traffic-flow.md).
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
Last updated 09/06/2022 -+ # How to create a secure workspace
Use the following steps to create a Data Science Virtual Machine for use as a ju
1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Provide values for the following fields: * __Virtual machine name__: A unique name for the VM.
- * __Username__: The username you will use to login to the VM.
+ * __Username__: The username you will use to log in to the VM.
* __Password__: The password for the username. * __Security type__: Standard. * __Image__: Data Science Virtual Machine - Windows Server 2019 - Gen1.
When Azure Container Registry is behind the virtual network, Azure Machine Learn
> > As an alternative to Azure Container Instances, try Azure Machine Learning managed online endpoints. For more information, see [Enable network isolation for managed online endpoints (preview)](how-to-secure-online-endpoint.md).
-At this point, you can use studio to interactively work with notebooks on the compute instance and run training jobs on the compute cluster. For a tutorial on using the compute instance and compute cluster, see [run a Python script](tutorial-1st-experiment-hello-world.md).
+At this point, you can use studio to interactively work with notebooks on the compute instance and run training jobs on the compute cluster. For a tutorial on using the compute instance and compute cluster, see [Tutorial: Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md).
## Stop compute instance and jump box
To delete all resources created in this tutorial, use the following steps:
1. Enter the resource group name, then select __Delete__. ## Next steps
-Now that you have created a secure workspace and can access studio, learn how to [run a Python script](tutorial-1st-experiment-hello-world.md) using Azure Machine Learning.
+Now that you have created a secure workspace and can access studio, learn how to [run a Python script](tutorial-azure-ml-in-a-day.md) using Azure Machine Learning.
machine-learning Tutorial Designer Automobile Price Train Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-designer-automobile-price-train-score.md
You need an Azure Machine Learning workspace to use the designer. The workspace
### Create the pipeline
+>[!Note]
+> Designer supports two types of components: classic prebuilt components and custom components. The two types of components aren't compatible.
+>
+>Classic prebuilt components provide components mainly for data processing and traditional machine learning tasks like regression and classification. This type of component continues to be supported, but no new components will be added.
+>
+>
+>Custom components allow you to provide your own code as a component. They support sharing across workspaces and seamless authoring across studio, CLI, and SDK interfaces.
+>
+>This article applies to classic prebuilt components.
+ 1. Sign in to <a href="https://ml.azure.com?tabs=jre" target="_blank">ml.azure.com</a>, and select the workspace you want to work with. 1. Select **Designer**.
machine-learning Tutorial First Experiment Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-first-experiment-automated-ml.md
Last updated 10/21/2021--
-# Customer intent: As a non-coding data scientist, I want to use automated machine learning techniques so that I can build a classification model.
+
+#Customer intent: As a non-coding data scientist, I want to use automated machine learning techniques so that I can build a classification model.
# Tutorial: Train a classification model with no-code AutoML in the Azure Machine Learning studio
You won't write any code in this tutorial, you'll use the studio interface to pe
Also try automated machine learning for these other model types: * For a no-code example of forecasting, see [Tutorial: Demand forecasting & AutoML](tutorial-automated-ml-forecast.md).
-* For a code first example of an object detection model, see the [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md),
+* For a code first example of an object detection model, see the [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md),
## Prerequisites
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-pipeline-python-sdk.md
Title: "Tutorial: ML pipelines with Python SDK v2 (preview)"
+ Title: "Tutorial: ML pipelines with Python SDK v2"
-description: Use Azure Machine Learning to create your production-ready ML project in a cloud-based Python Jupyter Notebook using Azure ML Python SDK V2 (preview).
+description: Use Azure Machine Learning to create your production-ready ML project in a cloud-based Python Jupyter Notebook using Azure ML Python SDK v2.
Last updated 08/29/2022-+ #Customer intent: This tutorial is intended to introduce Azure ML to data scientists who want to scale up or publish their ML projects. By completing a familiar end-to-end project, which starts by loading the data and ends by creating and calling an online inference endpoint, the user should become familiar with the core concepts of Azure ML and their most common usage. Each step of this tutorial can be modified or performed in other ways that might have security or scalability advantages. We will cover some of those in the Part II of this tutorial, however, we suggest the reader use the provide links in each section to learn more on each topic.
-# Tutorial: Create production ML pipelines with Python SDK v2 (preview) in a Jupyter notebook
+# Tutorial: Create production ML pipelines with Python SDK v2 in a Jupyter notebook
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!IMPORTANT]
-> SDK v2 is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- > [!NOTE] > For a tutorial that uses SDK v1 to build a pipeline, see [Tutorial: Build an Azure Machine Learning pipeline for image classification](v1/tutorial-pipeline-python-sdk.md) >
-In this tutorial, you'll use Azure Machine Learning (Azure ML) to create a production ready machine learning (ML) project, using AzureML Python SDK v2 (preview).
+In this tutorial, you'll use Azure Machine Learning (Azure ML) to create a production ready machine learning (ML) project, using AzureML Python SDK v2.
You'll learn how to use the AzureML Python SDK v2 to:
You'll learn how to use the AzureML Python SDK v2 to:
* Create a workspace. * Create a cloud-based compute instance to use for your development environment. * Create a cloud-based compute cluster to use for training your model.
+* Complete the [Quickstart: Run Jupyter notebooks in studio](quickstart-run-notebooks.md) to clone the **SDK v2/tutorials** folder.
-## Install the SDK
-
-You'll complete the following experiment setup and run steps in Azure Machine Learning studio. This consolidated interface includes machine learning tools to perform data science scenarios for data science practitioners of all skill levels.
-
-First you'll install the v2 SDK on your compute instance:
-
-1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/).
-
-1. Select the subscription and the workspace you created as part of the [Prerequisites](#prerequisites).
-
-1. On the left, select **Compute**.
-
-1. From the list of **Compute Instances**, find the one you created.
-
-1. Select "Terminal", to open the terminal session on the compute instance.
-
-1. In the terminal window, install Python SDK v2 (preview) with this command:
-
- ```
- pip install --pre azure-ai-ml
- ```
-
- For more information, see [Install the Python SDK v2](https://aka.ms/sdk-v2-install).
-
-## Clone the azureml-examples repo
-
-1. Now on the terminal, run the command:
-
- ```
- git clone --depth 1 https://github.com/Azure/azureml-examples
- ```
-
-1. On the left, select **Notebooks**.
-
-1. Now, on the left, Select the **Files**
-
- :::image type="content" source="media/tutorial-pipeline-python-sdk/clone-tutorials-users-files.png" alt-text="Screenshot that shows the Clone tutorials folder.":::
-
-1. A list of folders shows each user who accesses the workspace. Select your folder, you'll find **azureml-samples** is cloned.
-## Open the cloned notebook
+## Open the notebook
-1. Open the **tutorials** folder that was cloned into your **User files** section.
+1. Open the **tutorials** folder that was cloned into your **Files** section from the [Quickstart: Run Jupyter notebooks in studio](quickstart-run-notebooks.md).
-1. Select the **e2e-ml-workflow.ipynb** file from your **azureml-examples/tutorials/e2e-ds-experience/** folder.
+1. Select the **e2e-ml-workflow.ipynb** file from your **tutorials/azureml-examples/tutorials/e2e-ds-experience/** folder.
:::image type="content" source="media/tutorial-pipeline-python-sdk/expand-folder.png" alt-text="Screenshot shows the open tutorials folder.":::
machine-learning Concept Automated Ml V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-automated-ml-v1.md
Last updated 03/15/2022-+ # Automated machine learning (AutoML)?
Tutorials are end-to-end introductory examples of AutoML scenarios.
+ **For a low or no-code experience**, see the [Tutorial: Train a classification model with no-code AutoML in Azure Machine Learning studio](../tutorial-first-experiment-automated-ml.md).
-+ **For using AutoML to train computer vision models**, see the [Tutorial: Train an object detection model (preview) with AutoML and Python (v1)](../tutorial-auto-train-image-models.md).
++ **For using AutoML to train computer vision models**, see the [Tutorial: Train an object detection model (preview) with AutoML and Python (v1)](./tutorial-auto-train-image-models-v1.md). How-to articles provide additional detail into what functionality automated ML offers. For example,
machine-learning Concept Azure Machine Learning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-azure-machine-learning-architecture.md
Last updated 10/21/2021-+ #Customer intent: As a data scientist, I want to understand the big picture about how Azure Machine Learning works.
Because Machine Learning Compute is a managed compute target (that is, it's mana
1. Management code is written to the user's Azure Files share. 1. The container is started with an initial command. That is, management code as described in the previous step.
-* After the run completes, you can query runs and metrics. In the flow diagram below, this step occurs when the training compute target writes the run metrics back to Azure Machine Learning from storage in the Cosmos DB database. Clients can call Azure Machine Learning. Machine Learning will in turn pull metrics from the Cosmos DB database and return them back to the client.
+* After the run completes, you can query runs and metrics. In the flow diagram below, this step occurs when the training compute target writes the run metrics back to Azure Machine Learning from storage in the Azure Cosmos DB database. Clients can call Azure Machine Learning. Machine Learning will in turn pull metrics from the Azure Cosmos DB database and return them back to the client.
[![Training workflow](media/concept-Azure-machine-learning-architecture/training-and-metrics.png)](media/concept-azure-machine-learning-architecture/training-and-metrics.png#lightbox)
machine-learning Concept Train Machine Learning Model V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-train-machine-learning-model-v1.md
Last updated 08/30/2022-+ ms.devlang: azurecli
ms.devlang: azurecli
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"] > * [v1](concept-train-machine-learning-model-v1.md)
-> * [v2 (preview)](../concept-train-machine-learning-model.md)
+> * [v2 (current)](../concept-train-machine-learning-model.md)
Azure Machine Learning provides several ways to train your models, from code-first solutions using the SDK to low-code solutions such as automated machine learning and the visual designer. Use the following list to determine which training method is right for you:
machine-learning How To Auto Train Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-image-models-v1.md
Last updated 01/18/2022-+ #Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
Review detailed code examples and use cases in the [GitHub notebook repository f
## Next steps
-* [Tutorial: Train an object detection model (preview) with AutoML and Python](../tutorial-auto-train-image-models.md).
+* [Tutorial: Train an object detection model with AutoML and Python](../tutorial-auto-train-image-models.md).
* [Make predictions with ONNX on computer vision models from AutoML](../how-to-inference-onnx-automl-image-models.md) * [Troubleshoot automated ML experiments](../how-to-troubleshoot-auto-ml.md).
machine-learning How To Auto Train Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-models-v1.md
If you donΓÇÖt have an Azure subscription, create a free account before you begi
* After you complete the quickstart: 1. Select **Notebooks** in the studio. 1. Select the **Samples** tab.
- 1. Open the *tutorials/regression-automl-nyc-taxi-data/regression-automated-ml.ipynb* notebook.
+ 1. Open the *SDK v1/tutorials/regression-automl-nyc-taxi-data/regression-automated-ml.ipynb* notebook.
1. To run each cell in the tutorial, select **Clone this notebook** This article is also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to run it in your own [local environment](how-to-configure-environment-v1.md).
machine-learning How To Configure Auto Train V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train-v1.md
Last updated 01/24/2021 -+ # Set up AutoML training with Python
For this article you need,
Before you begin your experiment, you should determine the kind of machine learning problem you are solving. Automated machine learning supports task types of `classification`, `regression`, and `forecasting`. Learn more about [task types](../concept-automated-ml.md#when-to-use-automl-classification-regression-forecasting-computer-vision--nlp). >[!NOTE]
-> Support for computer vision tasks: image classification (multi-class and multi-label), object detection, and instance segmentation is available in public preview. [Learn more about computer vision tasks in automated ML](../concept-automated-ml.md#computer-vision-preview).
->
>Support for natural language processing (NLP) tasks: text classification (multi-class and multi-label) and named entity recognition is available in public preview. [Learn more about NLP tasks in automated ML](../concept-automated-ml.md#nlp). > > These preview capabilities are provided without a service-level agreement. Certain features might not be supported or might have constrained functionality. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
RunDetails(run).show()
> [!WARNING] > This feature is not available for the following automated ML scenarios
-> * [Computer vision tasks (preview)](../how-to-auto-train-image-models.md)
+> * [Computer vision tasks](../how-to-auto-train-image-models.md)
> * [Many models and hierarchical time series forecasting training (preview)](../how-to-auto-train-forecast.md) > * [Forecasting tasks where deep learning neural networks (DNN) are enabled](../how-to-auto-train-forecast.md#enable-deep-learning) > * [Automated ML runs from local computes or Azure Databricks clusters](../how-to-configure-auto-train.md#compute-to-run-experiment)
machine-learning How To Connect Data Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-connect-data-ui.md
Previously updated : 01/18/2021- Last updated : 09/28/2021+ #Customer intent: As low code experience data scientist, I need to make my data in storage on Azure available to my remote compute to train my ML models.
Create a new datastore in a few steps with the Azure Machine Learning studio.
> If your data storage account is in a virtual network, additional configuration steps are required to ensure the studio has access to your data. See [Network isolation & privacy](../how-to-enable-studio-virtual-network.md) to ensure the appropriate configuration steps are applied. 1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/).
-1. Select **Datastores** on the left pane under **Manage**.
-1. Select **+ New datastore**.
+1. Select **Data** on the left pane under **Assets**.
+1. At the top, select **Datastores**.
+1. Select **+Create**.
1. Complete the form to create and register a new datastore. The form intelligently updates itself based on your selections for Azure storage type and authentication type. See the [storage access and permissions section](#access-validation) to understand where to find the authentication credentials you need to populate this form. The following example demonstrates what the form looks like when you create an **Azure blob datastore**:
Create a new datastore in a few steps with the Azure Machine Learning studio. Le
> If your data storage account is in a virtual network, additional configuration steps are required to ensure the studio has access to your data. See [Network isolation & privacy](../how-to-enable-studio-virtual-network.md) to ensure the appropriate configuration steps are applied. 1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/).
-1. Select **Datastores** on the left pane under **Manage**.
-1. Select **+ New datastore**.
+1. Select **Data** on the left pane under **Assets**.
+1. At the top, select **Datastores**.
+1. Select **+Create**.
1. Complete the form to create and register a new datastore. The form intelligently updates itself based on your selections for Azure storage type. See [which storage types support identity-based](how-to-identity-based-data-access.md#storage-access-permissions) data access. 1. Customers need to choose the storage acct and container name they want to use Blob reader role (for ADLS Gen 2 and Blob storage) is required; whoever is creating needs permissions to see the contents of the storage
The following example demonstrates what the form looks like when you create an *
-## Create datasets
+## Create data assets
After you create a datastore, create a dataset to interact with your data. Datasets package your data into a lazily evaluated consumable object for machine learning tasks, like training. [Learn more about datasets](how-to-create-register-datasets.md). There are two types of datasets, FileDataset and TabularDataset. [FileDatasets](how-to-create-register-datasets.md#filedataset) create references to single or multiple files or public URLs. Whereas [TabularDatasets](how-to-create-register-datasets.md#tabulardataset) represent your data in a tabular format. You can create TabularDatasets from .csv, .tsv, .parquet, .jsonl files, and from SQL query results.
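Although the steps below use the studio UI, a hedged sketch of the equivalent SDK v1 calls for creating and registering a TabularDataset from a CSV file on a datastore may help; the file path and dataset name here are placeholders, not values from this article.

```python
# Minimal sketch (SDK v1, placeholder values): create and register a TabularDataset.
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Point the path at a real delimited file in your datastore.
tabular_ds = Dataset.Tabular.from_delimited_files(path=(datastore, "samples/example.csv"))
tabular_ds = tabular_ds.register(
    workspace=ws,
    name="example-tabular-dataset",
    description="Example tabular dataset registered from a datastore file",
)
```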
-The following steps and animation show how to create a dataset in [Azure Machine Learning studio](https://ml.azure.com).
+The following steps describe how to create a dataset in [Azure Machine Learning studio](https://ml.azure.com).
> [!Note] > Datasets created through Azure Machine Learning studio are automatically registered to the workspace.
-![Create a dataset with the UI](./media/how-to-connect-data-ui/create-dataset-ui.gif)
+1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
-To create a dataset in the studio:
-1. Sign in to the [Azure Machine Learning studio](https://ml.azure.com/).
-1. Select **Datasets** in the **Assets** section of the left pane.
-1. Select **Create Dataset** to choose the source of your dataset. This source can be local files, a datastore, public URLs, or [Azure Open Datasets](/azure/open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset).
-1. Select **Tabular** or **File** for Dataset type.
-1. Select **Next** to open the **Datastore and file selection** form. On this form you select where to keep your dataset after creation, and select what data files to use for your dataset.
+1. Under __Assets__ in the left navigation, select __Data__. On the __Data assets__ tab, select __Create__.
+
+1. Give your data asset a name and optional description. Then, under **Type**, select one of the Dataset types, either **File** or **Tabular**.
+
+1. You have a few options for your data source. If your data is already stored in Azure, choose "From Azure storage". If you want to upload data from your local drive, choose "From local files". If your data is stored at a public web location, choose "From web files". You can also create a data asset from a SQL database, or from [Azure Open Datasets](/azure/open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset).
+
+1. For the file selection step, select where you want your data to be stored in Azure, and what data files you want to use.
1. Enable skip validation if your data is in a virtual network. Learn more about [virtual network isolation and privacy](../how-to-enable-studio-virtual-network.md).
-1. Select **Next** to populate the **Settings and preview** and **Schema** forms; they're intelligently populated based on file type and you can further configure your dataset prior to creation on these forms.
- 1. On the Settings and preview form, you can indicate if your data contains multi-line data.
- 1. On the Schema form, you can specify that your TabularDataset has a time component by selecting type: **Timestamp** for your date or time column.
- 1. If your data is formatted into subsets, for example time windows, and you want to use those subsets for training, select type **Partition timestamp**. Doing so enables time series operations on your dataset. Learn more about how to [use partitions in your dataset for training](how-to-monitor-datasets.md?tabs=azure-studio#create-target-dataset).
-1. Select **Next** to review the **Confirm details** form. Check your selections and create an optional data profile for your dataset. Learn more about [data profiling](#profile).
-1. Select **Create** to complete your dataset creation.
+1. Follow the steps to set the data parsing settings and schema for your data asset. The settings are pre-populated based on the file type, and you can further configure them before creating the data asset.
+
+1. When you reach the **Review** step, select **Create** on the last page.
<a name="profile"></a>
-### Data profile and preview
+### Data preview and profile
-After you create your dataset, verify you can view the profile and preview in the studio with the following steps.
+After you create your dataset, verify you can view the preview and profile in the studio with the following steps:
1. Sign in to the [Azure Machine Learning studio](https://ml.azure.com/)
-1. Select **Datasets** in the **Assets** section of the left pane.
+1. Under __Assets__ in the left navigation, select __Data__.
1. Select the name of the dataset you want to view.
-1. Select the **Explore** tab.
-1. Select the **Preview** or **Profile** tab.
-
-![View dataset profile and preview](./media/how-to-connect-data-ui/dataset-preview-profile.gif)
+1. Select the **Explore** tab.
+1. Select the **Preview** tab.
+1. Select the **Profile** tab.
You can get a wide variety of summary statistics across your dataset to verify whether it's ML-ready. For non-numeric columns, these include only basic statistics such as min, max, and error count. For numeric columns, you can also review statistical moments and estimated quantiles.
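As a rough local counterpart to these summary statistics, the sketch below loads a registered TabularDataset into pandas and computes basic descriptive statistics; the dataset name is a placeholder, and the studio profile itself is generated by the service rather than by this code.

```python
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()

# "titanic-tabular" is a hypothetical registered TabularDataset name.
dataset = Dataset.get_by_name(ws, name="titanic-tabular")

# Pull the data into pandas and compute basic statistics locally.
df = dataset.to_pandas_dataframe()
print(df.describe(include="all"))  # min, max, quantiles, counts per column
print(df.isna().sum())             # missing-value counts per column
```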
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-register-datasets.md
-+ Previously updated : 05/11/2022 Last updated : 09/28/2022 #Customer intent: As an experienced data scientist, I need to package my data into a consumable and reusable object to train my machine learning models.
In this article, you learn how to create Azure Machine Learning datasets to acce
By creating a dataset, you create a reference to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. Datasets are also lazily evaluated, which improves workflow performance. You can create datasets from datastores, public URLs, and [Azure Open Datasets](../../open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset.md).
-For a low-code experience, [Create Azure Machine Learning datasets with the Azure Machine Learning studio.](how-to-connect-data-ui.md#create-datasets)
+For a low-code experience, [Create Azure Machine Learning datasets with the Azure Machine Learning studio.](how-to-connect-data-ui.md#create-data-assets)
With Azure Machine Learning datasets, you can:
If your data is already cleansed, and ready to use in training experiments, you
We recommend FileDatasets for your machine learning workflows, since the source files can be in any format, which enables a wider range of machine learning scenarios, including deep learning.
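As an illustration, here is a minimal SDK v1 sketch that creates a FileDataset from files in a registered datastore; the datastore name and path pattern are assumptions for this example only.

```python
from azureml.core import Workspace, Datastore, Dataset

ws = Workspace.from_config()

# "images_datastore" and the glob pattern below are hypothetical.
datastore = Datastore.get(ws, datastore_name="images_datastore")

# Reference all PNG files under the given folder; no data is copied.
file_ds = Dataset.File.from_files(path=(datastore, "animal-images/**/*.png"))

# Inspect a few of the referenced file paths (resolved lazily).
print(file_ds.to_path()[:5])
```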
-Create a FileDataset with the [Python SDK](#create-a-filedataset) or the [Azure Machine Learning studio](how-to-connect-data-ui.md#create-datasets)
+Create a FileDataset with the [Python SDK](#create-a-filedataset) or the [Azure Machine Learning studio](how-to-connect-data-ui.md#create-data-assets).

### TabularDataset
A [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) represe
With TabularDatasets, you can specify a time stamp from a column in the data or from wherever the path pattern data is stored to enable a time series trait. This specification allows for easy and efficient filtering by time. For an example, see [Tabular time series-related API demo with NOAA weather data](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb).
-Create a TabularDataset with [the Python SDK](#create-a-tabulardataset) or [Azure Machine Learning studio](how-to-connect-data-ui.md#create-datasets).
+Create a TabularDataset with [the Python SDK](#create-a-tabulardataset) or [Azure Machine Learning studio](how-to-connect-data-ui.md#create-data-assets).
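The following sketch, with assumed datastore, path, and column names, shows one way to create a TabularDataset from delimited files and assign a timestamp column to enable the time series trait (on older SDK releases the parameter is named `fine_grain_timestamp`).

```python
from datetime import timedelta
from azureml.core import Workspace, Datastore, Dataset

ws = Workspace.from_config()

# "weather_datastore" and the path pattern are placeholders for this example.
datastore = Datastore.get(ws, datastore_name="weather_datastore")

tabular_ds = Dataset.Tabular.from_delimited_files(path=(datastore, "noaa/*.csv"))

# Mark a date/time column to enable the time series trait.
# The column name "datetime" is an assumption about the source data.
tabular_ds = tabular_ds.with_timestamp_columns(timestamp="datetime")

# Time-based filtering is then available, for example the last seven days:
recent = tabular_ds.time_recent(timedelta(days=7))
```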
>[!NOTE] > [Automated ML](../concept-automated-ml.md) workflows generated via the Azure Machine Learning studio currently only support TabularDatasets.
machine-learning How To Designer Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-designer-import-data.md
Previously updated : 10/21/2021 Last updated : 09/22/2022 -+ # Import data into Azure Machine Learning designer
We recommend that you use [datasets](concept-data.md) to import data into the de
### Register a dataset
-You can register existing datasets [programatically with the SDK](how-to-create-register-datasets.md#create-datasets-from-datastores) or [visually in Azure Machine Learning studio](how-to-connect-data-ui.md#create-datasets).
+You can register existing datasets [programmatically with the SDK](how-to-create-register-datasets.md#create-datasets-from-datastores) or [visually in Azure Machine Learning studio](how-to-connect-data-ui.md#create-data-assets).
You can also register the output for any designer component as a dataset.
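For example, a dataset created with the SDK can be registered under a name so that the designer (and other consumers) can reference it. The datastore, path, and dataset names below are placeholders; this is a sketch rather than the article's own sample.

```python
from azureml.core import Workspace, Datastore, Dataset

ws = Workspace.from_config()

# Hypothetical datastore and file path, used only for illustration.
datastore = Datastore.get(ws, datastore_name="workspaceblobstore")
dataset = Dataset.Tabular.from_delimited_files(path=(datastore, "samples/iris.csv"))

# Registering makes the dataset selectable by name in the designer.
dataset = dataset.register(
    workspace=ws,
    name="iris-tabular",
    description="Sample registration for use in the designer",
    create_new_version=True,
)
```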
While we recommend that you use datasets to import data, you can also use the [I
For detailed information on how to use the Import Data component, see the [Import Data reference page](../algorithm-module-reference/import-data.md). > [!NOTE]
-> If your dataset has too many columns, you may encounter the following error: "Validation failed due to size limitation". To avoid this, [register the dataset in the Datasets interface](how-to-connect-data-ui.md#create-datasets).
+> If your dataset has too many columns, you may encounter the following error: "Validation failed due to size limitation". To avoid this, [register the dataset in the Datasets interface](how-to-connect-data-ui.md#create-data-assets).
## Supported sources
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-log-view-metrics.md
-+ Last updated 04/19/2021
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python SDK you are using:"] > * [v1](how-to-log-view-metrics.md)
-> * [v2 (preview)](../how-to-log-view-metrics.md)
+> * [v2](../how-to-log-view-metrics.md)
Log real-time information using both the default Python logging package and Azure Machine Learning Python SDK-specific functionality. You can log locally and send logs to your workspace in the portal.
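As one possible illustration, a minimal training-script sketch that combines the standard `logging` package with SDK v1 run metrics follows; the metric names and values are arbitrary examples.

```python
import logging
from azureml.core import Run

# Standard Python logging: messages go to the local log stream / driver log.
logging.basicConfig(level=logging.INFO)
logging.info("starting training script")

# SDK-specific logging: metrics are attached to the current run in the workspace.
run = Run.get_context()
run.log("learning_rate", 0.01)                    # a single numeric value
run.log_list("loss_per_epoch", [0.9, 0.6, 0.4])   # a list of values

run.complete()  # typically optional when the run is managed by the service
```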
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace.md
Last updated 03/08/2022 -+ # Manage Azure Machine Learning workspaces with the Python SDK (v1)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](how-to-manage-workspace.md)
-> * [v2 (preview)](../how-to-manage-workspace.md)
+> * [v2 (current version)](../how-to-manage-workspace.md)
In this article, you create, view, and delete [**Azure Machine Learning workspaces**](../concept-workspace.md) for [Azure Machine Learning](../overview-what-is-azure-machine-learning.md), using the [SDK for Python](/python/api/overview/azure/ml/).
machine-learning How To Prepare Datasets For Automl Images V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prepare-datasets-for-automl-images-v1.md
-+ Last updated 10/13/2021
print("Training dataset name: " + training_dataset.name)
* [Train computer vision models with automated machine learning](../how-to-auto-train-image-models.md). * [Train a small object detection model with automated machine learning](../how-to-use-automl-small-object-detect.md).
-* [Tutorial: Train an object detection model (preview) with AutoML and Python](../tutorial-auto-train-image-models.md).
+* [Tutorial: Train an object detection model with AutoML and Python](../tutorial-auto-train-image-models.md).
machine-learning How To Set Up Training Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-set-up-training-targets.md
If you want to run a [distributed training](../how-to-train-distributed-gpu.md)
For more information and examples on running distributed Horovod, TensorFlow and PyTorch jobs, see:
-* [Train PyTorch models](../how-to-train-pytorch.md#distributed-training)
+* [Distributed training of deep learning models on Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
## Submit the experiment
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-keras.md
# Train Keras models at scale with Azure Machine Learning (SDK v1) [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
+> * [v1](how-to-train-keras.md)
+> * [v2 (preview)](../how-to-train-keras.md)
In this article, learn how to run your Keras training scripts with Azure Machine Learning.
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-pytorch.md
# Train PyTorch models at scale with Azure Machine Learning SDK (v1) [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
+> * [v1](how-to-train-pytorch.md)
+> * [v2 (preview)](../how-to-train-pytorch.md)
In this article, learn how to run your [PyTorch](https://pytorch.org/) training scripts at enterprise scale using Azure Machine Learning.
machine-learning How To Use Labeled Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-labeled-dataset.md
-+ Last updated 08/17/2022 #Customer intent: As an experienced Python developer, I need to export my data labels and use them for machine learning tasks.
You can access the exported Azure Machine Learning dataset in the **Datasets** s
![Exported dataset](../media/how-to-create-labeling-projects/exported-dataset.png) > [!TIP]
-> Once you have exported your labeled data to an Azure Machine Learning dataset, you can use AutoML to build computer vision models trained on your labeled data. Learn more at [Set up AutoML to train computer vision models with Python (preview)](../how-to-auto-train-image-models.md)
+> Once you have exported your labeled data to an Azure Machine Learning dataset, you can use AutoML to build computer vision models trained on your labeled data. Learn more at [Set up AutoML to train computer vision models with Python](../how-to-auto-train-image-models.md)
## Explore labeled datasets via pandas dataframe
imgplot = plt.imshow(img)
## Next steps * Learn to [train image classification models in Azure](../tutorial-train-deploy-notebook.md)
-* [Set up AutoML to train computer vision models with Python (preview)](../how-to-auto-train-image-models.md)
+* [Set up AutoML to train computer vision models with Python](../how-to-auto-train-image-models.md)
machine-learning How To Use Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-reinforcement-learning.md
Run this code in either of these environments. We recommend you try Azure Machin
- Azure Machine Learning compute instance
- - Learn how to clone sample notebooks in [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md).
- - Clone the **v1 (`<version>`) > how-to-use-azureml** folder instead of **tutorials**
+ - Learn how to clone sample notebooks in [Quickstart: Run Jupyter notebooks in studio](../quickstart-run-notebooks.md).
+ - Clone the **SDK v1 > how-to-use-azureml** folder instead of **SDK v2 > tutorials**
- Run the virtual network setup notebook located at `/how-to-use-azureml/reinforcement-learning/setup/devenv_setup.ipynb` to open network ports used for distributed reinforcement learning. - Run the sample notebook `/how-to-use-azureml/reinforcement-learning/atari-on-distributed-compute/pong_rllib.ipynb`
machine-learning Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/introduction.md
Last updated 05/10/2022-+ # Azure Machine Learning SDK & CLI (v1)
All articles in this section document the use of the first version of Azure Mach
## SDK v1
-The Azure SDK examples in articles in this section require the `azureml-core`, or Python SDK v1 for Azure Machine Learning. The Python SDK v2 is now available in preview.
+The Azure SDK examples in articles in this section require the `azureml-core` package, also known as the Python SDK v1 for Azure Machine Learning. The Python SDK v2 is now available.
The v1 and v2 Python SDK packages are incompatible, and the v2 style of coding will not work for articles in this directory. However, machine learning workspaces and all underlying resources can be accessed from either version, meaning one user can create a workspace with the SDK v1 and another can submit jobs to the same workspace with the SDK v2.
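As a small illustration of that interoperability (with placeholder identifiers), a v1 script can attach to an existing workspace regardless of which SDK version created it:

```python
from azureml.core import Workspace

# All identifiers below are placeholders for this sketch.
ws = Workspace.get(
    name="my-workspace",
    subscription_id="<subscription-id>",
    resource_group="my-resource-group",
)

# The same workspace can also be addressed from the v2 SDK (azure-ai-ml)
# by another user or process; the underlying Azure resource is shared.
print(ws.name, ws.location)
```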
machine-learning Samples Notebooks V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/samples-notebooks-v1.md
Last updated 12/27/2021-+ #Customer intent: As a professional data scientist, I find and run example Jupyter Notebooks for Azure Machine Learning.
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"] > * [v1](<samples-notebooks-v1.md>)
-> * [v2 (preview)](../samples-notebooks.md)
+> * [v2](../samples-notebooks.md)
The [Azure Machine Learning Notebooks repository](https://github.com/azure/machinelearningnotebooks) includes Azure Machine Learning Python SDK (v1) samples. These Jupyter notebooks are designed to help you explore the SDK and serve as models for your own machine learning projects. In this repository, you'll find tutorial notebooks in the **tutorials** folder and feature-specific notebooks in the **how-to-use-azureml** folder.
This article shows you how to access the repositories from the following environ
## Option 1: Access on Azure Machine Learning compute instance (recommended)
-The easiest way to get started with the samples is to complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md). Once completed, you'll have a dedicated notebook server pre-loaded with the SDK and the Azure Machine Learning Notebooks repository. No downloads or installation necessary.
+The easiest way to get started with the samples is to complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md). Once completed, you'll have a dedicated notebook server pre-loaded with the SDK and the Azure Machine Learning Notebooks repository. No downloads or installation necessary.
+
+To view example notebooks:
+ 1. Sign in to [studio](https://ml.azure.com) and select your workspace if necessary.
+ 1. Select **Notebooks**.
+ 1. Select the **Samples** tab. Use the **SDK v1** folder for examples using Python SDK v1.
## Option 2: Access on your own notebook server
Explore the [MachineLearningNotebooks](https://github.com/Azure/MachineLearningN
For more GitHub sample projects and examples, see these repos: + [Microsoft/MLOps](https://github.com/Microsoft/MLOps) + [Microsoft/MLOpsPython](https://github.com/microsoft/MLOpsPython)-
machine-learning Tutorial 1St Experiment Bring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-1st-experiment-bring-data.md
Last updated 07/29/2022-+ # Tutorial: Upload data and train a model (SDK v1, part 3 of 3)
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](tutorial-1st-experiment-bring-data.md)
-> * [v2 (preview)](../tutorial-1st-experiment-bring-data.md)
+> * [v2](../tutorial-1st-experiment-bring-data.md)
This tutorial shows you how to upload and use your own data to train machine learning models in Azure Machine Learning. This tutorial is *part 3 of a three-part tutorial series*.
machine-learning Tutorial 1St Experiment Hello World https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-1st-experiment-hello-world.md
Last updated 07/29/2022-+ # Tutorial: Get started with a Python script in Azure Machine Learning (SDK v1, part 1 of 3)
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](tutorial-1st-experiment-hello-world.md)
-> * [v2 (preview)](../tutorial-1st-experiment-hello-world.md)
+> * [v2](../tutorial-1st-experiment-hello-world.md)
In this tutorial, you run your first Python script in the cloud with Azure Machine Learning. This tutorial is *part 1 of a three-part tutorial series*.
machine-learning Tutorial 1St Experiment Sdk Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-1st-experiment-sdk-train.md
Last updated 07/29/2022-+ # Tutorial: Train your first machine learning model (SDK v1, part 2 of 3)
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](tutorial-1st-experiment-sdk-train.md)
-> * [v2 (preview)](../tutorial-1st-experiment-sdk-train.md)
+> * [v2](../tutorial-1st-experiment-sdk-train.md)
This tutorial shows you how to train a machine learning model in Azure Machine Learning. This tutorial is *part 2 of a three-part tutorial series*.
machine-learning Tutorial Auto Train Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-auto-train-image-models-v1.md
Last updated 10/06/2021-+ # Tutorial: Train an object detection model (preview) with AutoML and Python (v1)
In this automated machine learning tutorial, you did the following tasks:
> * Deployed your model > * Visualized detections
-* [Learn more about computer vision in automated ML (preview)](../concept-automated-ml.md#computer-vision-preview).
-* [Learn how to set up AutoML to train computer vision models with Python (preview)](../how-to-auto-train-image-models.md).
+* [Learn more about computer vision in automated ML](../concept-automated-ml.md#computer-vision).
+* [Learn how to set up AutoML to train computer vision models with Python](../how-to-auto-train-image-models.md).
* [Learn how to configure incremental training on computer vision models](../how-to-auto-train-image-models.md#incremental-training-optional). * See [what hyperparameters are available for computer vision tasks](../reference-automl-images-hyperparameters.md). * Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models.
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-pipeline-python-sdk.md
Last updated 01/28/2022-+ # Tutorial: Build an Azure Machine Learning pipeline for image classification
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!NOTE]
-> For a tutorial that uses SDK v2 to build a pipeline, see [Tutorial: Use ML pipelines for production ML workflows with Python SDK v2 (preview) in a Jupyter Notebook](../tutorial-pipeline-python-sdk.md).
+> For a tutorial that uses SDK v2 to build a pipeline, see [Tutorial: Use ML pipelines for production ML workflows with Python SDK v2 in a Jupyter Notebook](../tutorial-pipeline-python-sdk.md).
In this tutorial, you learn how to build an [Azure Machine Learning pipeline](../concept-ml-pipelines.md) to prepare data and train a machine learning model. Machine learning pipelines optimize your workflow with speed, portability, and reuse, so you can focus on machine learning instead of infrastructure and automation.
machine-learning Tutorial Train Deploy Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-train-deploy-notebook.md
You complete the following experiment setup and run steps in Azure Machine Learn
1. At the top, select the **Samples** tab.
-1. Open the **v1 (`<version>`)** folder. The version number represents the current v1 release for the Python SDK.
+1. Open the **SDK v1** folder.
1. Select the **...** button at the right of the **tutorials** folder, and then select **Clone**.
managed-grafana Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/encryption.md
Last updated 07/22/2022-+ # Encryption in Azure Managed Grafana
This article provides a short description of encryption within Azure Managed Gra
- Resource-provider related system metadata is stored in Azure Cosmos DB. - Grafana instance user data is stored in a per instance Azure Database for PostgreSQL.
-## Encryption in Cosmos DB and Azure Database for PostgreSQL
+## Encryption in Azure Cosmos DB and Azure Database for PostgreSQL
-Managed Grafana leverages encryption offered by Cosmos DB and Azure Database for PostgreSQL.
+Managed Grafana leverages encryption offered by Azure Cosmos DB and Azure Database for PostgreSQL.
-Data stored in Cosmos DB and Azure Database for PostgreSQL is encrypted at rest on storage devices and in transport over the network.
+Data stored in Azure Cosmos DB and Azure Database for PostgreSQL is encrypted at rest on storage devices and in transport over the network.
For more information, go to [Encryption at rest in Azure Cosmos DB](/azure/cosmos-db/database-encryption-at-rest) and [Security in Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/concepts-security).
managed-instance-apache-cassandra Add Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/add-service-principal.md
Title: How to add Cosmos DB service principal for Azure Managed Instance for Apache Cassandra
-description: Learn how to add a Cosmos DB service principal to an existing virtual network for Azure Managed Instance for Apache Cassandra
+ Title: How to add Azure Cosmos DB service principal for Azure Managed Instance for Apache Cassandra
+description: Learn how to add an Azure Cosmos DB service principal to an existing virtual network for Azure Managed Instance for Apache Cassandra
Last updated 11/02/2021 -+
-# Use Azure portal to add Cosmos DB service principal
+# Use Azure portal to add Azure Cosmos DB service principal
For successful deployment into an existing virtual network, Azure Managed Instance for Apache Cassandra requires the Azure Cosmos DB service principal with a role (such as Network Contributor) that allows the action `Microsoft.Network/virtualNetworks/subnets/join/action`. In some circumstances, it may be required to add these permissions manually. This article shows how to do this using Azure portal.
-## Add Cosmos DB service principal
+## Add Azure Cosmos DB service principal
1. Sign in to the [Azure portal](https://portal.azure.com/).
For successful deployment into an existing virtual network, Azure Managed Instan
1. Ensure that `User, group, or service principal` is selected for `Assign access to`, and then click `Select members` to search for the `Azure Cosmos DB` service principal. Select it in the right hand side window:
- :::image type="content" source="./media/add-service-principal/service-principal-3.png" alt-text="Select Cosmos DB service principal" lightbox="./media/add-service-principal/service-principal-3.png" border="true":::
+ :::image type="content" source="./media/add-service-principal/service-principal-3.png" alt-text="Select Azure Cosmos DB service principal" lightbox="./media/add-service-principal/service-principal-3.png" border="true":::
-1. Click on the `Review + assign` tab at the top, then click the `Review + assign` button at the bottom. The Cosmos DB service principal should now be assigned.
+1. Click on the `Review + assign` tab at the top, then click the `Review + assign` button at the bottom. The Azure Cosmos DB service principal should now be assigned.
:::image type="content" source="./media/add-service-principal/service-principal-4.png" alt-text="Review and assign" lightbox="./media/add-service-principal/service-principal-4.png" border="true":::
managed-instance-apache-cassandra Compare Cosmosdb Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/compare-cosmosdb-managed-instance.md
Title: Differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB Cassandra API
-description: Learn about the differences between Azure Managed Instance for Apache Cassandra and Cassandra API in Azure Cosmos DB. You also learn the benefits of each of these services and when to choose them.
+ Title: Differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra
+description: Learn about the differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra. You also learn the benefits of each of these services and when to choose them.
Last updated 12/10/2021-+
-# Differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB Cassandra API
+# Differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra
-In this article, you will learn the differences between Azure Managed Instance for Apache Cassandra and the Cassandra API in Azure Cosmos DB. This article provides recommendations on how to choose between the two services, or when to host your own Apache Cassandra environment.
+In this article, you will learn the differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra. This article provides recommendations on how to choose between the two services, or when to host your own Apache Cassandra environment.
## Key differences Azure Managed Instance for Apache Cassandra provides automated deployment, scaling, and operations to maintain the node health for open-source Apache Cassandra instances in Azure. It also provides the capability to scale out the capacity of existing on-premises or cloud self-hosted Apache Cassandra clusters. It scales out by adding managed Cassandra datacenters to the existing cluster ring.
-The [Cassandra API](../cosmos-db/cassandra-introduction.md) in Azure Cosmos DB is a compatibility layer over Microsoft's globally distributed cloud-native database service [Azure Cosmos DB](../cosmos-db/index.yml). The combination of these services in Azure provides a continuum of choices for users of Apache Cassandra in complex hybrid cloud environments.
+[Azure Cosmos DB for Apache Cassandra](../cosmos-db/cassandra-introduction.md) is a compatibility layer over Microsoft's globally distributed cloud-native database service [Azure Cosmos DB](../cosmos-db/index.yml). The combination of these services in Azure provides a continuum of choices for users of Apache Cassandra in complex hybrid cloud environments.
## How to choose? The following table shows the common scenarios, workload requirements, and aspirations where each of this deployment approaches fit:
-| |Self-hosted Apache Cassandra on-premises or in Azure | Azure Managed Instance for Apache Cassandra | Azure Cosmos DB Cassandra API |
+| |Self-hosted Apache Cassandra on-premises or in Azure | Azure Managed Instance for Apache Cassandra | Azure Cosmos DB for Apache Cassandra |
|||||
|**Deployment type**| You have a highly customized Apache Cassandra deployment with custom patches or snitches. | You have a standard open-source Apache Cassandra deployment without any custom code. | You are content with a platform that is not Apache Cassandra underneath but is compliant with all open-source client drivers at a [wire protocol](../cosmos-db/cassandra-support.md) level. |
| **Operational overhead**| You have existing Cassandra experts who can deploy, configure, and maintain your clusters. | You want to lower the operational overhead for your Apache Cassandra node health, but still maintain control over the platform-level configurations such as replication and consistency. | You want to eliminate the operational overhead by using a fully managed Platform-as-a-Service database in the cloud. |
| **Operating system requirements**| You have a requirement to maintain custom or golden Virtual Machine operating system images. | You can use vanilla images but want to have control over SKUs, memory, disks, and IOPS. | You want capacity provisioning to be simplified and expressed as a single normalized metric, with a one-to-one relationship to throughput, such as [request units](../cosmos-db/request-units.md) in Azure Cosmos DB. |
| **Pricing model**| You want to use management software such as Datastax tooling and are happy with licensing costs. | You prefer pure open-source licensing and VM instance-based pricing. | You want to use cloud-native pricing, which includes [autoscale](../cosmos-db/manage-scale-cassandra.md#use-autoscale) and [serverless](../cosmos-db/serverless.md) offers. |
-| **Analytics**| You want full control over the provisioning of analytical pipelines regardless of the overhead to build and maintain them. | You want to use cloud-based analytical services like Azure Databricks. | You want near real-time hybrid transactional analytics built into the platform with [Azure Synapse Link for Cosmos DB](../cosmos-db/synapse-link.md). |
+| **Analytics**| You want full control over the provisioning of analytical pipelines regardless of the overhead to build and maintain them. | You want to use cloud-based analytical services like Azure Databricks. | You want near real-time hybrid transactional analytics built into the platform with [Azure Synapse Link for Azure Cosmos DB](../cosmos-db/synapse-link.md). |
| **Workload pattern**| Your workload is fairly steady-state and you don't require scaling nodes in the cluster frequently. | Your workload is volatile and you need to be able to scale up or scale down nodes in a data center or add/remove data centers easily. | Your workload is often volatile and you need to be able to scale up or scale down quickly and at a significant volume. |
| **SLAs**| You are happy with your processes for maintaining SLAs on consistency, throughput, availability, and disaster recovery. | You are happy with your processes for maintaining SLAs on consistency and throughput, but want an [SLA for availability](https://azure.microsoft.com/support/legal/sl#backup-and-restore). | You want [fully comprehensive SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_4/) on consistency, throughput, availability, and disaster recovery. |
managed-instance-apache-cassandra Configure Hybrid Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/configure-hybrid-cluster.md
The above instructions provide guidance for configuring a hybrid cluster. Howeve
1. Temporarily disable automatic repairs in Azure Managed Instance for Apache Cassandra for the duration of the migration: ```azurecli-interactive
- az managed-cassandra cluster update --cluster-name --resource-group--repair-enabled false
+ az managed-cassandra cluster update \
+ --resource-group $resourceGroupName \
+ --cluster-name $clusterName --repair-enabled false
```
-1. Run `nodetool repair --full` on each node in your existing cluster's data center. You should run this **only after all of the prior steps have been taken**. This should ensure that all historical data is replicated to your new data centers in Azure Managed Instance for Apache Cassandra. For most installations you can only run one or two in parallel to not overload the cluster. You can monitor a particular repair run by checking `nodetool netsats` and `nodetool compactionstats` against the specific node. If you have a very large amount of data in your existing cluster, it may be necessary to run the repairs at the keyspace or even table level - see [here](https://cassandra.apache.org/doc/latest/cassandra/operating/repair.html) for more details on running repairs in Cassandra.
+1. In Azure CLI, run the below command to execute `nodetool rebuild` on each node in your new Azure Managed Instance for Apache Cassandra data center, replacing `<ip address>` with the IP address of the node, and `<sourcedc>` with the name of your existing data center (the one you are migrating from):
+ ```azurecli-interactive
+ az managed-cassandra cluster invoke-command \
+ --resource-group $resourceGroupName \
+ --cluster-name $clusterName \
+ --host <ip address> \
+ --command-name nodetool --arguments rebuild="" "<sourcedc>"=""
+ ```
+ You should run this **only after all of the prior steps have been taken**. This ensures that all historical data is replicated to your new data centers in Azure Managed Instance for Apache Cassandra. You can run rebuild on one or more nodes at the same time. Run on one node at a time to reduce the impact on the existing cluster, or on multiple nodes when the cluster can handle the extra I/O and network pressure. For most installations, run only one or two in parallel to avoid overloading the cluster.
- > [!NOTE]
- > To speed up repairs we advise (if system load permits it) to increase both stream throughput and compaction throughput as in the example below:
- >```azure-cli
- > az managed-cassandra cluster invoke-command --resource-group $resourceGroupName --cluster-name $clusterName --host $host --command-name nodetool --arguments "setstreamthroughput"="" "7000"=""
- >
- > az managed-cassandra cluster invoke-command --resource-group $resourceGroupName --cluster-name $clusterName --host $host --command-name nodetool --arguments "setcompactionthroughput"="" "960"=""
+ > [!WARNING]
+ > You must specify the source *data center* when running `nodetool rebuild`. If you provide the data center incorrectly on the first attempt, this will result in token ranges being copied, without data being copied for your non-system tables. Subsequent attempts will fail even if you provide the data center correctly. You can resolve this by deleting entries for each non-system keyspace in `system.available_ranges` via the `cqlsh` query tool in your target Cassandra MI data center:
+ > ```shell
+ > delete from system.available_ranges where keyspace_name = 'myKeyspace';
+ > ```
1. Cut over your application code to point to the seed nodes in your new Azure Managed Instance for Apache Cassandra data center(s). > [!IMPORTANT] > As also mentioned in the hybrid setup instructions, if the data center(s) in your existing cluster do not enforce [client-to-node encryption (SSL)](https://cassandra.apache.org/doc/3.11/cassandra/operating/security.html#client-to-node-encryption), you will need to enable this in your application code, as Cassandra Managed Instance enforces this.
-1. Run nodetool repair **again** on all the nodes in your existing cluster's data center, in the same manner as in step 3 above (to ensure any deltas are replicated following application cut over).
- 1. Run ALTER KEYSPACE for each keyspace, in the same manner as done earlier, but now removing your old data center(s). 1. Run [nodetool decommission](https://cassandra.apache.org/doc/latest/cassandra/tools/nodetool/decommission.html) for each old data center node.
The above instructions provide guidance for configuring a hybrid cluster. Howeve
1. Re-enable automatic repairs: ```azurecli-interactive
- az managed-cassandra cluster update --cluster-name --resource-group--repair-enabled true
+ az managed-cassandra cluster update \
+ --resource-group $resourceGroupName \
+ --cluster-name $clusterName --repair-enabled true
``` ## Troubleshooting
managed-instance-apache-cassandra Create Multi Region Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-multi-region-cluster.md
Last updated 11/02/2021--- ignite-fall-2021-- mode-other-- devx-track-azurecli-- kr2b-contr-experiment+ ms.devlang: azurecli
If you encounter errors when you run `az role assignment create`, you might not
## Troubleshooting
-If you encounter an error when applying permissions to your Virtual Network using Azure CLI, you can apply the same permission manually from the Azure portal. An example error might be *Cannot find user or service principal in graph database for 'e5007d2c-4b13-4a74-9b6a-605d99f03501'*. For more information, see [Use Azure portal to add Cosmos DB service principal](add-service-principal.md).
+If you encounter an error when applying permissions to your Virtual Network using Azure CLI, you can apply the same permission manually from the Azure portal. An example error might be *Cannot find user or service principal in graph database for 'e5007d2c-4b13-4a74-9b6a-605d99f03501'*. For more information, see [Use Azure portal to add an Azure Cosmos DB service principal](add-service-principal.md).
> [!NOTE] > The Azure Cosmos DB role assignment is used for deployment purposes only. Azure Managed Instance for Apache Cassandra has no backend dependencies on Azure Cosmos DB.
managed-instance-apache-cassandra Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/faq.md
Last updated 11/02/2021-+ # Frequently asked questions about Azure Managed Instance for Apache Cassandra
The Apache Cassandra database is the right choice when you need scalability and
It can be used either entirely in the cloud or as a part of a hybrid cloud and on-premises cluster. This service is a great choice when you want the fine-grained configuration and control you have in open-source Apache Cassandra, without the maintenance overhead.
-### Why should I use this service instead of Azure Cosmos DB Cassandra API?
+### Why should I use this service instead of Azure Cosmos DB for Apache Cassandra?
-Azure Managed Instance for Apache Cassandra is delivered by the Azure Cosmos DB team. It's a standalone managed service for deploying, maintaining, and scaling open-source Apache Cassandra data-centers and clusters. [Azure Cosmos DB Cassandra API](../cosmos-db/cassandra-introduction.md) on the other hand is a Platform-as-a-Service, providing an interoperability layer for the Apache Cassandra wire protocol. If your expectation is for the platform to behave in exactly the same way as any Apache Cassandra cluster, you should choose the managed instance service. To learn more, see the [Azure Managed Instance for Apache Cassandra Vs Azure Cosmos DB Cassandra API](compare-cosmosdb-managed-instance.md) article.
+Azure Managed Instance for Apache Cassandra is delivered by the Azure Cosmos DB team. It's a standalone managed service for deploying, maintaining, and scaling open-source Apache Cassandra data-centers and clusters. [Azure Cosmos DB for Apache Cassandra](../cosmos-db/cassandra-introduction.md) on the other hand is a Platform-as-a-Service, providing an interoperability layer for the Apache Cassandra wire protocol. If your expectation is for the platform to behave in exactly the same way as any Apache Cassandra cluster, you should choose the managed instance service. To learn more, see [Differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra](compare-cosmosdb-managed-instance.md).
### Is Azure Managed Instance for Apache Cassandra dependent on Azure Cosmos DB?
mariadb Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for MariaDB description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
marketplace Find Tenant Object Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/find-tenant-object-id.md
This section describes how to find tenant, object, and PartnerID (formerly MPN I
## Next steps -- [Supported countries and regions](sell-from-countries.md)
+- [Supported countries and regions](supported-countries-regions.md)
marketplace Plan Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-saas-offer.md
For billing terms with equal payments, payment collection will be enforced for t
For SaaS apps that run in your (the publisher's) Azure subscription, infrastructure usage is billed to you directly; customers do not see actual infrastructure usage fees. You should bundle Azure infrastructure usage fees into your software license pricing to compensate for the cost of the infrastructure you deployed to run the solution.
-SaaS app offers that are sold through Microsoft support one-time upfront monthly or annual billing (payment option) based on a flat fee, per user, or consumption charges using the [metered billing service](./partner-center-portal/saas-metered-billing.md). The commercial marketplace operates on an agency model, whereby publishers set prices, Microsoft bills customers, and Microsoft pays revenue to publishers while withholding an agency fee.
+SaaS app offers that are sold through Microsoft support one-time upfront, monthly, or annual billing (payment options) based on a flat fee, per user, or consumption charges using the [metered billing service](./partner-center-portal/saas-metered-billing.md). The commercial marketplace operates on an agency model, whereby publishers set prices, Microsoft bills customers, and Microsoft pays revenue to publishers while withholding an agency fee.
The following example shows a sample breakdown of costs and payouts to demonstrate the agency model. In this example, Microsoft bills $100.00 to the customer for your software license and pays out $97.00 to the publisher.
marketplace Supported Countries Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/supported-countries-regions.md
+
+ Title: Supported publisher countries and regions
+description: List of countries from which you can publish an offer to the Microsoft commercial marketplace.
+++++++ Last updated : 01/04/2021++
+# Supported publisher countries and regions
+
+To publish an offer to the Microsoft commercial marketplace, your company must legally reside in one of the following countries or regions:
+
+- Afghanistan
+- Åland Islands
+- Albania
+- Algeria
+- American Samoa
+- Andorra
+- Angola
+- Anguilla
+- Antarctica
+- Antigua and Barbuda
+- Argentina
+- Armenia
+- Aruba
+- Australia
+- Austria
+- Azerbaijan
+- Bahamas
+- Bahrain
+- Bangladesh
+- Barbados
+- Belarus
+- Belgium
+- Belize
+- Benin
+- Bermuda
+- Bhutan
+- Bolivia
+- Bonaire, Sint Eustatius and Saba
+- Bosnia and Herzegovina
+- Botswana
+- Bouvet Island
+- Brazil
+- British Indian Ocean Territory
+- British Virgin Islands
+- Brunei
+- Bulgaria
+- Burkina Faso
+- Burundi
+- Cabo Verde
+- Cambodia
+- Cameroon
+- Canada
+- Cayman Islands
+- Central African Republic
+- Chad
+- Chile
+- Christmas Island
+- Cocos (Keeling) Islands
+- Colombia
+- Comoros
+- Congo
+- Congo (DRC)
+- Cook Islands
+- Costa Rica
+- Croatia
+- Curaçao
+- Cyprus
+- Côte d'Ivoire
+- Czechia
+- Denmark
+- Djibouti
+- Dominica
+- Dominican Republic
+- Ecuador
+- Egypt
+- El Salvador
+- Equatorial Guinea
+- Eritrea
+- Estonia
+- Ethiopia
+- Falkland Islands
+- Faroe Islands
+- Fiji
+- Finland
+- France
+- French Guiana
+- French Polynesia
+- French Southern Territories
+- Gabon
+- Gambia
+- Georgia
+- Germany
+- Ghana
+- Gibraltar
+- Greece
+- Greenland
+- Grenada
+- Guadeloupe
+- Guam
+- Guatemala
+- Guernsey
+- Guinea
+- Guinea-Bissau
+- Guyana
+- Haiti
+- Heard Island and McDonald Islands
+- Honduras
+- Hong Kong SAR
+- Hungary
+- Iceland
+- India
+- Indonesia
+- Iraq
+- Ireland
+- Isle of Man
+- Israel
+- Italy
+- Jamaica
+- Japan
+- Jersey
+- Jordan
+- Kazakhstan
+- Kenya
+- Kiribati
+- Kosovo
+- Kuwait
+- Kyrgyzstan
+- Laos
+- Latvia
+- Lebanon
+- Lesotho
+- Liberia
+- Libya
+- Liechtenstein
+- Lithuania
+- Luxembourg
+- Macao SAR
+- Madagascar
+- Malawi
+- Malaysia
+- Maldives
+- Mali
+- Malta
+- Marshall Islands
+- Martinique
+- Mauritania
+- Mauritius
+- Mayotte
+- Mexico
+- Micronesia
+- Moldova
+- Monaco
+- Mongolia
+- Montenegro
+- Montserrat
+- Morocco
+- Mozambique
+- Myanmar
+- Namibia
+- Nauru
+- Nepal
+- Netherlands
+- New Caledonia
+- New Zealand
+- Nicaragua
+- Niger
+- Nigeria
+- Niue
+- Norfolk Island
+- North Macedonia
+- Northern Mariana Islands
+- Norway
+- Oman
+- Pakistan
+- Palau
+- Palestinian Authority
+- Panama
+- Papua New Guinea
+- Paraguay
+- Peru
+- Philippines
+- Pitcairn Islands
+- Poland
+- Portugal
+- Puerto Rico
+- Qatar
+- Romania
+- Russia
+- Rwanda
+- Réunion
+- Saba
+- Saint Barthélemy
+- Saint Kitts and Nevis
+- Saint Lucia
+- Saint Martin
+- Saint Pierre and Miquelon
+- Saint Vincent and the Grenadines
+- Samoa
+- San Marino
+- Saudi Arabia
+- Senegal
+- Serbia
+- Seychelles
+- Sierra Leone
+- Singapore
+- Sint Eustatius
+- Sint Maarten
+- Slovakia
+- Slovenia
+- Solomon Islands
+- Somalia
+- South Africa
+- South Georgia and South Sandwich Islands
+- South Korea (Republic of Korea)
+- South Sudan
+- Spain
+- Sri Lanka
+- St Helena, Ascension, Tristan da Cunha
+- Suriname
+- Svalbard and Jan Mayen
+- Swaziland
+- Sweden
+- Switzerland
+- São Tomé and Príncipe
+- Taiwan
+- Tajikistan
+- Tanzania
+- Thailand
+- Timor-Leste
+- Togo
+- Tokelau
+- Tonga
+- Trinidad and Tobago
+- Tunisia
+- Turkey
+- Turkmenistan
+- Turks and Caicos Islands
+- Tuvalu
+- U.S. Outlying Islands
+- U.S. Virgin Islands
+- Uganda
+- Ukraine
+- United Arab Emirates
+- United Kingdom
+- United States
+- Uruguay
+- Uzbekistan
+- Vanuatu
+- Vatican City
+- Venezuela
+- Vietnam
+- Wallis and Futuna
+- Yemen
+- Zambia
+- Zimbabwe
migrate Common Questions Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-appliance.md
ms. + Last updated 11/01/2021
Data that's collected by the Azure Migrate appliance is stored in the Azure loca
Here's more information about how data is stored: -- The collected data is securely stored in Cosmos DB in a Microsoft subscription. The data is deleted when you delete the project. Storage is handled by Azure Migrate. You can't specifically choose a storage account for collected data.
+- The collected data is securely stored in Azure Cosmos DB in a Microsoft subscription. The data is deleted when you delete the project. Storage is handled by Azure Migrate. You can't specifically choose a storage account for collected data.
- If you use [dependency visualization](concepts-dependency-visualization.md), the data that's collected is stored in an Azure Log Analytics workspace created in your Azure subscription. The data is deleted when you delete the Log Analytics workspace in your subscription. ## How much data is uploaded during continuous profiling?
The volume of data that's sent to Azure Migrate depends on multiple parameters.
Yes, for both: - Metadata is securely sent to the Azure Migrate service over the internet via HTTPS.-- Metadata is stored in an [Azure Cosmos](../cosmos-db/database-encryption-at-rest.md) database and in [Azure Blob storage](../storage/common/storage-service-encryption.md) in a Microsoft subscription. The metadata is encrypted at rest for storage.
+- Metadata is stored in an [Azure Cosmos DB](../cosmos-db/database-encryption-at-rest.md) database and in [Azure Blob storage](../storage/common/storage-service-encryption.md) in a Microsoft subscription. The metadata is encrypted at rest for storage.
- The data for dependency analysis also is encrypted in transit (by secure HTTPS). It's stored in a Log Analytics workspace in your subscription. The data is encrypted at rest for dependency analysis. ## How does the appliance connect to vCenter Server?
mysql Concepts Service Tiers Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-service-tiers-storage.md
-++ Last updated 05/24/2022
You can create an Azure Database for MySQL Flexible Server in one of three diffe
| Resource / Tier | **Burstable** | **General Purpose** | **Business Critical** | |:|:-|:--|:|
-| VM series| [B-series](/azure/virtual-machines/sizes-b-series-burstable) | [Ddsv4-series](/azure/virtual-machines/ddv4-ddsv4-series#ddsv4-series) | [Edsv4](/azure/virtual-machines/edv4-edsv4-series#edsv4-series)/[Edsv5-series](/azure/virtual-machines/edv5-edsv5-series#edsv5-series)*|
+| VM series| [B-series](/azure/virtual-machines/sizes-b-series-burstable) | [Dadsv5-series](/azure/virtual-machines/dasv5-dadsv5-series#dadsv5-series)/[Ddsv4-series](/azure/virtual-machines/ddv4-ddsv4-series#ddsv4-series) | [Edsv4](/azure/virtual-machines/edv4-edsv4-series#edsv4-series)/[Edsv5-series](/azure/virtual-machines/edv5-edsv5-series#edsv5-series)*/[Eadsv5-series](/azure/virtual-machines/easv5-eadsv5-series#eadsv5-series)|
| vCores | 1, 2, 4, 8, 12, 16, 20 | 2, 4, 8, 16, 32, 48, 64 | 2, 4, 8, 16, 32, 48, 64, 80, 96 |
-| Memory per vCore | Variable | 4 GiB | 8 GiB * |
+| Memory per vCore | Variable | 4 GiB | 8 GiB ** |
| Storage size | 20 GiB to 16 TiB | 20 GiB to 16 TiB | 20 GiB to 16 TiB | | Database backup retention period | 1 to 35 days | 1 to 35 days | 1 to 35 days |
-\* With the exception of E64ds_v4 (Business Critical) SKU, which has 504 GB of memory
+\** With the exception of 64, 80, and 96 vCores, which have 504, 504, and 672 GiB of memory, respectively.
\* Ev5 compute provides best performance among other VM series in terms of QPS and latency. learn more about performance and region availability of Ev5 compute from [here](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/boost-azure-mysql-business-critical-flexible-server-performance/ba-p/3603698).
The detailed specifications of the available server types are as follows:
|Standard_B16ms | 16 | 64 | 4300 | 10923 |Standard_B20ms | 20 | 80 | 5000 | 13653 |**General Purpose**|
+|Standard_D2ads_v5 |2 |8 |3200 |1365
|Standard_D2ds_v4 |2 |8 |3200 |1365
+|Standard_D4ads_v5 |4 |16 |6400 |2731
|Standard_D4ds_v4 |4 |16 |6400 |2731
+|Standard_D8ads_v5 |8 |32 |12800 |5461
|Standard_D8ds_v4 |8 |32 |12800 |5461
+|Standard_D16ads_v5 |16 |64 |20000 |10923
|Standard_D16ds_v4 |16 |64 |20000 |10923
+|Standard_D32ads_v5 |32 |128 |20000 |21845
|Standard_D32ds_v4 |32 |128 |20000 |21845
+|Standard_D48ads_v5 |48 |192 |20000 |32768
|Standard_D48ds_v4 |48 |192 |20000 |32768
+|Standard_D64ads_v5 |64 |256 |20000 |43691
|Standard_D64ds_v4 |64 |256 |20000 |43691 |**Business Critical** | |Standard_E2ds_v4 | 2 | 16 | 5000 | 2731
+|Standard_E2ads_v5 | 2 | 16 | 5000 | 2731
|Standard_E4ds_v4 | 4 | 32 | 10000 | 5461
+|Standard_E4ads_v5 | 4 | 32 | 10000 | 5461
|Standard_E8ds_v4 | 8 | 64 | 18000 | 10923
+|Standard_E8ads_v5 | 8 | 64 | 18000 | 10923
|Standard_E16ds_v4 | 16 | 128 | 28000 | 21845
+|Standard_E16ads_v5 | 16 | 128 | 28000 | 21845
|Standard_E32ds_v4 | 32 | 256 | 38000 | 43691
+|Standard_E32ads_v5 | 32 | 256 | 38000 | 43691
|Standard_E48ds_v4 | 48 | 384 | 48000 | 65536
+|Standard_E48ads_v5 | 48 | 384 | 48000 | 65536
|Standard_E64ds_v4 | 64 | 504 | 48000 | 86016
+|Standard_E64ads_v5 | 64 | 504 | 48000 | 86016
|Standard_E80ids_v4 | 80 | 504 | 48000 | 86016 |Standard_E2ds_v5 | 2 | 16 | 5000 | 2731 |Standard_E4ds_v5 | 4 | 32 | 10000 | 5461
The detailed specifications of the available server types are as follows:
|Standard_E64ds_v5 | 64 | 512 | 48000 | 87383 |Standard_E96ds_v5 | 96 | 672 | 48000 | 100000
-To get more details about the compute series available, refer to Azure VM documentation for [Burstable (B-series)](../../virtual-machines/sizes-b-series-burstable.md), [General Purpose (Ddsv4-series)](../../virtual-machines/ddv4-ddsv4-series.md), and Business Critical [Edsv4-series](../../virtual-machines/edv4-edsv4-series.md)/ [Edsv5-series](../../virtual-machines/edv5-edsv5-series.md)]
+To get more details about the available compute series, refer to the Azure VM documentation for [Burstable (B-series)](../../virtual-machines/sizes-b-series-burstable.md), General Purpose [Dadsv5-series](/azure/virtual-machines/dasv5-dadsv5-series#dadsv5-series)/[Ddsv4-series](/azure/virtual-machines/ddv4-ddsv4-series#ddsv4-series), and Business Critical [Edsv4](/azure/virtual-machines/edv4-edsv4-series#edsv4-series)/[Edsv5-series](/azure/virtual-machines/edv5-edsv5-series#edsv5-series)/[Eadsv5-series](/azure/virtual-machines/easv5-eadsv5-series#eadsv5-series).
+ >[!NOTE] >For [Burstable (B-series) compute tier](../../virtual-machines/sizes-b-series-burstable.md) if the VM is started/stopped or restarted, the credits may be lost. For more information, see [Burstable (B-Series) FAQ](../../virtual-machines/sizes-b-series-burstable.md#q-why-is-my-remaining-credit-set-to-0-after-a-redeploy-or-a-stopstart).
The maximum IOPS is dependent on the maximum available IOPS per compute size. Re
You can monitor your I/O consumption in the Azure portal (with Azure Monitor) using the [IO percent](./concepts-monitoring.md) metric. If you need more IOPS than the maximum IOPS available for your compute size, you need to scale up your server's compute.
+## Autoscale IOPS
+A cornerstone of Azure Database for MySQL - Flexible Server is its ability to achieve the best performance for tier 1 workloads, which can be improved by letting the server automatically scale the IO performance of its database servers seamlessly, depending on workload needs. This is an opt-in feature that enables users to scale IOPS on demand without having to pre-provision a certain amount of IO per second. With the Autoscale IOPS feature enabled, you can enjoy worry-free IO management in Azure Database for MySQL - Flexible Server because the server scales IOPS up or down automatically depending on workload needs.
+
+With Autoscale IOPS, you pay only for the IO the server uses and no longer need to provision and pay for resources that aren't fully used, saving both time and money. In addition, mission-critical Tier-1 applications can achieve consistent performance because additional IO is made available to the workload at any time. Autoscale IOPS eliminates the administration required to provide the best performance at the least cost for Azure Database for MySQL customers.
+ ## Backup The service automatically takes backups of your server. You can select a retention period from a range of 1 to 35 days. Learn more about backups in the [backup and restore concepts article](concepts-backup-restore.md).
mysql How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-upgrade.md
+
+ Title: Azure Database for MySQL - flexible server - major version upgrade
+description: Learn how to upgrade major version for an Azure Database for MySQL - Flexible server.
+++++++ Last updated : 10/12/2022++
+# Major version upgrade in Azure Database for MySQL - Flexible Server (Preview)
++
+>[!Note]
+> This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we will remove it from this article.
+
+This article describes how you can upgrade your MySQL major version in-place in Azure Database for MySQL Flexible server.
+This feature enables customers to perform in-place upgrades of their MySQL 5.7 servers to MySQL 8.0 with the select of a button, without any data movement or the need for any application connection string changes.
+
+>[!Important]
+> - Major version upgrade for Azure Database for MySQL Flexible Server is available in public preview.
+> - Major version upgrade is currently not available for Burstable SKU 5.7 servers.
+> - Duration of downtime will vary based on the size of your database instance and the number of tables on the database.
+> - Upgrading the major MySQL version is irreversible. Your deployment might fail if validation identifies that the server is configured with any features that are [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) or [deprecated](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-deprecations). You can make the necessary configuration changes on the server and try the upgrade again.
+
+## Prerequisites
+
+- Read replicas with MySQL version 5.7 should be upgraded before the primary server so that replication remains compatible between MySQL versions. For more information, see [Replication Compatibility between MySQL versions](https://dev.mysql.com/doc/mysql-replication-excerpt/8.0/en/replication-compatibility.html).
+- Before you upgrade your production servers, we strongly recommend that you test your application compatibility and verify your database compatibility with the features [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals)/[deprecated](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-deprecations) in the new MySQL version.
+- Trigger an [on-demand backup](./how-to-trigger-on-demand-backup.md) before you perform the major version upgrade on your production server. The backup can be used to [roll back to version 5.7](./how-to-restore-server-portal.md) if needed (see the hedged CLI sketch after this list).
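+As referenced in the last prerequisite, the on-demand backup can also be triggered from the Azure CLI. The resource group, server, and backup names below are placeholders (a hedged sketch, not the only supported method):
+
+```azurecli
+# Take an on-demand backup before starting the major version upgrade
+az mysql flexible-server backup create \
+  --resource-group myresourcegroup \
+  --name mydemoserver \
+  --backup-name pre-upgrade-backup
+```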
++
+## Perform Planned Major version upgrade from MySQL 5.7 to MySQL 8.0 using Azure portal
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.7 server.
+ >[!Important]
+    > We recommend performing the upgrade first on a restored copy of the server rather than upgrading production directly. See [how to perform point-in-time restore](./how-to-restore-server-portal.md).
+
+2. From the overview page, select the Upgrade button in the toolbar.
+
+ >[!Important]
+    > Before upgrading, review the list of [features removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) in MySQL 8.0.
+    > Verify deprecated [sql_mode](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) values and remove/deselect them from your current Flexible Server 5.7 using the Server Parameters blade in the Azure portal to avoid deployment failure.
+    > The [sql_mode](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) values NO_AUTO_CREATE_USER, NO_FIELD_OPTIONS, NO_KEY_OPTIONS, and NO_TABLE_OPTIONS are no longer supported in MySQL 8.0.
+
+ :::image type="content" source="./media/how-to-upgrade/1-how-to-upgrade.png" alt-text="Screenshot showing Azure Database for MySQL Upgrade.":::
+
+3. In the Upgrade sidebar, verify the major version to upgrade to, that is, 8.0.
+
+ :::image type="content" source="./media/how-to-upgrade/2-how-to-upgrade.png" alt-text="Screenshot showing Upgrade.":::
+
+4. For a primary server, select the confirmation checkbox to confirm that all your replica servers were upgraded before the primary server. Once you confirm that all your replicas are upgraded, the Upgrade button is enabled. For read replicas and standalone servers, the Upgrade button is enabled by default.
+
+ :::image type="content" source="./media/how-to-upgrade/3-how-to-upgrade.png" alt-text="Screenshot showing confirmation.":::
+
+5. Once the Upgrade button is enabled, select it to proceed with the deployment.
+
+ :::image type="content" source="./media/how-to-upgrade/4-how-to-upgrade.png" alt-text="Screenshot showing upgrade.":::
+
+## Perform Planned Major version upgrade from MySQL 5.7 to MySQL 8.0 using Azure CLI
+
+Follow these steps to perform a major version upgrade for your Azure Database for MySQL 5.7 server using the Azure CLI.
+
+1. Install [Azure CLI](/cli/azure/install-azure-cli) for Windows or use [Azure CLI](../../cloud-shell/overview.md) in Azure Cloud Shell to run the upgrade commands.
+
+    This upgrade requires version 2.40.0 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed. Run `az version` to find the version and dependent libraries that are installed. To upgrade to the latest version, run `az upgrade`.
+
+2. After you sign in, run the [az mysql server upgrade](/cli/azure/mysql/server#az-mysql-server-upgrade) command.
+
+ ```azurecli
+ az mysql server upgrade --name testsvr --resource-group testgroup --subscription MySubscription --version 8.0
+ ```
+
+3. At the confirmation prompt, type "y" to confirm or "n" to stop the upgrade process, and then press Enter. A hedged sketch for verifying the upgraded version follows.
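+After the upgrade completes, and assuming you're working with a flexible server, one way to confirm the new engine version from the CLI is to query the server resource. The server and resource group names reuse the placeholders from the upgrade command above (a hedged sketch):
+
+```azurecli
+# Confirm the server now reports MySQL version 8.0
+az mysql flexible-server show \
+  --resource-group testgroup \
+  --name testsvr \
+  --query "version" \
+  --output tsv
+```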
+
+## Perform major version upgrade from MySQL 5.7 to MySQL 8.0 on read replica using Azure portal
+
+1. In the Azure portal, select your existing Azure Database for MySQL 5.7 read replica server.
+
+2. From the Overview page, select the Upgrade button in the toolbar.
+>[!Important]
+> Before upgrading, review the list of [features removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) in MySQL 8.0.
+> Verify deprecated [sql_mode](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) values and remove/deselect them from your current Flexible Server 5.7 using the Server Parameters blade in the Azure portal to avoid deployment failure.
+
+3. In the Upgrade section, select the Upgrade button to upgrade your Azure Database for MySQL 5.7 read replica server to 8.0.
+
+4. A notification confirms that the upgrade was successful.
+
+5. From the Overview page, confirm that your Azure Database for MySQL read replica server version is 8.0.
+
+6. Now go to your primary server and perform the major version upgrade on it.
+
+## Perform minimal downtime major version upgrade from MySQL 5.7 to MySQL 8.0 using read replicas
+
+1. In the Azure portal, select your existing Azure Database for MySQL 5.7 server.
+
+2. Create a [read replica](./how-to-read-replicas-portal.md) from your primary server.
+
+3. Upgrade your read replica to [version 8.0](#perform-planned-major-version-upgrade-from-mysql-57-to-mysql-80-using-azure-cli).
+
+4. Once you confirm that the replica server is running on version 8.0, stop your application from connecting to your primary server.
+
+5. Check the replication status to make sure the replica has caught up with the primary, so that all the data is in sync, and ensure that no new operations are performed on the primary.
+Run the `SHOW SLAVE STATUS` command on the replica server to view the replication status.
+
+   ```sql
+   SHOW SLAVE STATUS\G
+   ```
+   If the states of `Slave_IO_Running` and `Slave_SQL_Running` are "Yes" and the value of `Seconds_Behind_Master` is "0", replication is working well. `Seconds_Behind_Master` indicates how far behind the replica is. If the value isn't "0", the replica is still processing updates. Once you confirm that `Seconds_Behind_Master` is "0", it's safe to stop replication.
+
+6. Promote your read replica to primary by stopping replication.
+
+7. Set the server parameter `read_only` to 0 (that is, OFF) to start writing on the promoted primary.
+
+   Point your application to the new primary (former replica), which is running MySQL 8.0. Each server has a unique connection string. Update your application to point to the (former) replica instead of the source. A scripted alternative for steps 6 and 7 is sketched below.
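+If you prefer to script steps 6 and 7, the following Azure CLI sketch stops replication on the replica (promoting it to a standalone server) and then turns off `read_only`. The resource group and replica server names are placeholders; treat this as a hedged outline rather than the prescribed procedure:
+
+```azurecli
+# Promote the read replica by stopping replication (this is irreversible)
+az mysql flexible-server replica stop-replication \
+  --resource-group myresourcegroup \
+  --name myreplicaserver
+
+# Allow writes on the promoted server by turning off read_only
+az mysql flexible-server parameter set \
+  --resource-group myresourcegroup \
+  --server-name myreplicaserver \
+  --name read_only \
+  --value OFF
+```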
+
+>[!Note]
+> This scenario will have downtime during steps 4, 5 and 6 only.
+
+## Frequently asked questions
+
+- Will this cause downtime of the server and if so, how long?
+  To have minimal downtime during upgrades, follow the steps in [Perform minimal downtime major version upgrade from MySQL 5.7 to MySQL 8.0 using read replicas](#perform-minimal-downtime-major-version-upgrade-from-mysql-57-to-mysql-80-using-read-replicas).
+  The server will be unavailable during the upgrade process, so we recommend that you perform this operation during your planned maintenance window. The estimated downtime depends on the database size, storage size provisioned (IOPS provisioned), and the number of tables in the database. The upgrade time is directly proportional to the number of tables on the server. To estimate the downtime for your server environment, we recommend that you first perform the upgrade on a restored copy of the server.
++
+- When will this upgrade feature be GA?
+  GA of this feature is planned for December 2022. However, the feature is production ready and fully supported by Azure, so you can run it with confidence in your environment. As a recommended best practice, we strongly suggest that you first run and test it on a restored copy of the server so you can estimate the downtime during the upgrade, and perform an application compatibility test before you run it in production.
+
+- What happens to my backups after upgrade?
+  All backups (automated/on-demand) taken before the major version upgrade, when used for restoration, will always restore to a server with the older version (5.7).
+  All backups (automated/on-demand) taken after the major version upgrade will restore to a server with the upgraded version (8.0). We highly recommend taking an on-demand backup before you perform the major version upgrade for an easy rollback.
++
+## Next steps
+- Learn more on [how to configure scheduled maintenance](./how-to-maintenance-portal.md) for your Azure Database for MySQL flexible server.
+- Learn about what's new in [MySQL version 8.0](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html).
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
> [!NOTE] > This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+## October 2022
+
+- **AMD compute SKUs for General Purpose and Business Critical tiers in Azure Database for MySQL - Flexible Server**
+
+  You can now choose between Intel and AMD hardware for Azure Database for MySQL flexible servers based on the General Purpose (Dadsv5-series) and Business Critical (Eadsv5-series) tiers. AMD SKUs offer competitive price-performance options to all Azure Database for MySQL - Flexible Server users. To ensure transparency when working in the portal, you can select the compute hardware vendor for both the primary and secondary server. After you determine the best compute processor for your workload, deploy flexible servers in an increased number of availability regions and zones. [Learn more](./concepts-service-tiers-storage.md)
+
+- **Autoscale IOPS in Azure Database for MySQL - Flexible Server**
+
+  You can now scale IOPS on demand without having to pre-provision a certain amount of IOPS. With this feature, you can enjoy worry-free IO management in Azure Database for MySQL - Flexible Server because the server scales IOPS up or down automatically depending on workload needs. You pay only for the IO you use and no longer need to provision and pay for resources that aren't fully used, saving both time and money. In addition, mission-critical Tier-1 applications can achieve consistent performance because additional IO is made available to the workload at any time. Autoscale IOPS eliminates the administration required to provide the best performance at the least cost for Azure Database for MySQL customers. [Learn more](./concepts-service-tiers-storage.md)
+ ## September 2022 - **Read replica for HA enabled Azure Database for MySQL - Flexible Server (General Availability)**
mysql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/security-controls-policy.md
Previously updated : 10/03/2022 Last updated : 10/10/2022 # Azure Policy Regulatory Compliance controls for Azure Database for MySQL
mysql Whats Happening To Mysql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/whats-happening-to-mysql-single-server.md
-+ Last updated 09/29/2022
Yes, migration from lower version MySQL servers (v5.6 and above) to higher versi
2. For *Issue type*, select **Technical**. 3. For *Subscription*, select your subscription. 4. For *Service*, select **My services**.
-5. For *Service type*, select **Azure Database for MySQL Single Server**.**
+5. For *Service type*, select **Azure Database for MySQL Single Server**.
6. For *Resource*, select your resource. 7. For *Problem type*, select **Migration**. 8. For *Problem subtype*, select **Migrating from single to flexible server**
network-watcher Azure Monitor Agent With Connection Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/azure-monitor-agent-with-connection-monitor.md
+
+ Title: Monitor Network Connectivity using Azure Monitor Agent
+description: This article describes how to monitor network connectivity in Connection Monitor using the Azure monitor agent.
+++++ Last updated : 09/11/2022+
+#Customer intent: I need to monitor a connection using Azure Monitor agent.
++
+# Monitor Network Connectivity using Azure Monitor Agent with Connection Monitor
+
+Connection Monitor supports the Azure Monitor Agent extension, thereby eliminating the dependency on the legacy Log Analytics agent.
+
+With Azure Monitor Agent, a single agent consolidates all the features necessary to collect connectivity logs and metrics across Azure and on-premises machines, instead of running multiple monitoring agents.
+
+Compared to the legacy Log Analytics agent, the Azure Monitor Agent provides enhanced security and performance, cost savings through more efficient data collection, and easier troubleshooting through simpler data collection management.
+
+[Learn more](../azure-monitor/agents/agents-overview.md) about Azure Monitor Agent.
+
+To start using Connection Monitor for monitoring, do the following:
+* Install monitoring agents
+* Create a connection monitor
+* Analyze monitoring data and set alerts
+* Diagnose issues in your network
+
+The following sections provide details on the installation of Monitor Agents. For more information, see [Connection Monitor](connection-monitor-overview.md).
+
+## Installing monitoring agents for Azure and Non-Azure resources
+
+The Connection Monitor relies on lightweight executable files to run connectivity checks. It supports connectivity checks from both Azure environments and on-premises environments. The executable file that you use depends on whether your VM is hosted on Azure or on-premises.
+
+### Agents for Azure virtual machines and scale sets
+
+See [Agents for Azure virtual machines and virtual machine scale sets](connection-monitor-overview.md#agents-for-azure-virtual-machines-and-virtual-machine-scale-sets) to install agents for Azure virtual machines and scale sets.
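+If you prefer the CLI to the portal flow linked above, the Network Watcher agent extension can be installed on an Azure VM with `az vm extension set`. The resource group and VM names are placeholders, and the example uses the Linux variant of the extension (a hedged sketch; use `NetworkWatcherAgentWindows` for Windows VMs):
+
+```azurecli
+# Install the Network Watcher agent extension on a Linux Azure VM
+az vm extension set \
+  --resource-group myresourcegroup \
+  --vm-name myvm \
+  --publisher Microsoft.Azure.NetworkWatcher \
+  --name NetworkWatcherAgentLinux \
+  --version 1.4
+```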
+
+### Agents for on-premises machines
+
+To make Connection Monitor recognize your on-premises machines as sources for monitoring, do the following:
+* Enable your hybrid endpoints for [Azure Arc-enabled servers](../azure-arc/overview.md).
+* Connect hybrid machines by installing the [Azure Connected Machine agent](../azure-arc/servers/overview.md) on each machine.
+ This agent doesn't deliver any other functionality, and it doesn't replace the Azure Monitor Agent. The Azure Connected Machine agent simply enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers.
+* Install the [Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) to enable the Network Watcher extension.
+* The agent collects monitoring logs and data from the hybrid sources and delivers it to Azure Monitor.
+
+## Coexistence with other agents
+
+The Azure Monitor agent can coexist (run side by side on the same machine) with the legacy Log Analytics agents so that you can continue to use their existing functionality during evaluation or migration. While this allows you to begin the transition, given the limitations, you must review the following:
+* Ensure that you do not collect duplicate data because it could alter query results and affect downstream features like alerts, dashboards, or workbooks. For example, VM insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents. If you install the Azure Monitor agent and create a data collection rule for these same events and performance data, it will result in duplicate data. So, ensure you're not collecting the same data from both agents. If you are, ensure they're collecting from different machines or going to separate destinations.
+* Besides data duplication, this would also generate more charges for data ingestion and retention.
+* Running two telemetry agents on the same machine would result in double the resource consumption, including but not limited to CPU, memory, storage space and network bandwidth.
+
+## Next steps
+
+- Install [Azure Connected machine agent](connection-monitor-connected-machine-agent.md).
network-watcher Connection Monitor Connected Machine Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-connected-machine-agent.md
+
+ Title: Install agents for connection monitor
+description: This article describes how to install Azure Connected machine agent
+++++ Last updated : 09/11/2022+
+#Customer intent: I need to monitor a connection using Azure Monitor agent.
++
+# Install Azure Connected Machine agent to enable Arc
+
+This article describes the procedure to install the Azure Connected Machine agent.
+
+## Prerequisites
+
+* An Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Administrator permissions to install and configure the Connected Machine agent. On Linux, the installation and configuration are done using the root account, and on Windows, with an account that is a member of the Local Administrators group.
+* Register the Microsoft.HybridCompute, Microsoft.GuestConfiguration, and Microsoft.HybridConnectivity resource providers on your subscription. You can [register these resource providers](../azure-arc/servers/prerequisites.md#azure-resource-providers) ahead of time, or while completing the steps in this article (see the hedged CLI sketch after this list).
+* Review the [agent prerequisites](../azure-arc/servers/prerequisites.md) and ensure the following:
+ * Your target machine is running a supported [operating system](../azure-arc/servers/prerequisites.md#supported-operating-systems).
+ * Your account has the [required Azure built-in roles](../azure-arc/servers/prerequisites.md#required-permissions).
+ * The machine is in a [supported region](../azure-arc/overview.md).
+ * If the machine connects through a firewall or proxy server to communicate over the Internet, make sure the listed [URLs](../azure-arc/servers/network-requirements.md#urls) aren't blocked.
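+As noted in the resource provider prerequisite above, you can register the required resource providers ahead of time with the Azure CLI (a hedged sketch; registration can take a few minutes per provider):
+
+```azurecli
+# Register the resource providers required by Azure Arc-enabled servers
+az provider register --namespace Microsoft.HybridCompute
+az provider register --namespace Microsoft.GuestConfiguration
+az provider register --namespace Microsoft.HybridConnectivity
+
+# Check the registration status of a provider
+az provider show --namespace Microsoft.HybridCompute --query "registrationState" --output tsv
+```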
+
+## Generate installation script
+
+Use the Azure portal to create a script that automates the download and installation of the agent and establishes the connection with Azure Arc.
+
+1. In the Azure portal, search for **Servers - Azure Arc** and select it.
+
+ 1. On the **Servers - Azure Arc** page, select **Add**.
+
+1. On the next screen, from the **Add a single server** tile, select **Generate script**.
+
+1. Review the information on the **Prerequisites** page, then select **Next**.
+
+1. On the **Resource details** page, provide the following:
+
+ 1. Select the subscription and resource group where you want the machine to be managed within Azure.
+ 1. For **Region**, choose the Azure region in which the server's metadata will be stored.
+ 1. For **Operating system**, select the operating system of the server you want to connect.
+ 1. For **Connectivity method**, choose how the Azure Connected Machine agent should connect to the internet. If you select **Proxy server**, enter the proxy server IP address, or the name and port number that the machine will use in the format `http://<proxyURL>:<proxyport>`.
+ 1. Select **Next**.
+
+1. On the **Tags** page, review the default **Physical location tags** suggested and enter a value, or specify one or more **Custom tags** to support your standards. Then select **Next**.
+
+1. On the **Download and run script** page, select the **Register** button to register the required resource providers in your subscription, if you haven't already done so.
+
+1. In the **Download or copy the following script** section, review the script. If you want to make any changes, use the **Previous** button to go back and update your selections. Otherwise, select **Download** to save the script file.
+
+## Install the agent using the script
+
+After you've generated the script, the next step is to run it on the server that you want to onboard to Azure Arc. The script will download the Connected Machine agent from the Microsoft Download Center, install the agent on the server, create the Azure Arc-enabled server resource, and associate it with the agent.
+
+Follow the steps corresponding to the operating system of your server.
+
+# [Windows agent](#tab/WindowsScript)
+
+1. Sign in to the server.
+
+1. Open an elevated 64-bit PowerShell command prompt.
+
+1. Change to the folder or share that you copied the script to, then execute it on the server by running the `./OnboardingScript.ps1` script.
+
+### [Linux agent](#tab/LinuxScript)
+
+1. To install the Linux agent on the target machine that can directly communicate to Azure, run the following command:
+
+ ```bash
+ bash ~/Install_linux_azcmagent.sh
+ ```
+
+1. Alternately, if the target machine communicates through a proxy server, run the following command:
+
+ ```bash
+ bash ~/Install_linux_azcmagent.sh --proxy "{proxy-url}:{proxy-port}"
+ ```
+
+
+## Verify the connection with Azure Arc
+
+After you install the agent and configure it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the server has successfully connected. View your machine in the [Azure portal](https://aka.ms/hybridmachineportal).
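+You can also check the connected machine from the Azure CLI if the `connectedmachine` extension is installed. The machine and resource group names are placeholders (a hedged sketch):
+
+```azurecli
+# Show the Azure Arc-enabled server resource to confirm it was created
+az connectedmachine show \
+  --name myarcserver \
+  --resource-group myresourcegroup \
+  --output table
+```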
+
+## Connect hybrid machines to Azure by using PowerShell
+
+You can enable Azure Arc-enabled servers for one or more Windows or Linux machines in your environment by following the manual steps described above. Alternatively, you can use the PowerShell cmdlet `Connect-AzConnectedMachine` to download the Connected Machine agent, install the agent, and register the machine with Azure Arc. The cmdlet downloads the Windows agent package (Windows Installer) from the Microsoft Download Center, and the Linux agent package from the Microsoft package repository.
+For the required steps, see how to install the [Arc agent via PowerShell](../azure-arc/servers/onboard-powershell.md).
+
+## Connect hybrid machines to Azure from Windows Admin Center
+
+You can enable Azure Arc-enabled servers for one or more Windows machines in your environment manually or use the Windows Admin Center to deploy the Connected Machine agent and register your on-premises servers without having to perform any steps outside of this tool. [Learn more](../azure-arc/servers/onboard-windows-admin-center.md) about installing Arc agent via the Windows Admin Center.
+
+## Next steps
+
+- Install [Azure Monitor agent](connection-monitor-install-azure-monitor-agent.md).
network-watcher Connection Monitor Create Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-portal.md
Title: Create Connection Monitor - Azure portal
description: This article describes how to create a monitor in Connection Monitor by using the Azure portal. + Last updated 11/23/2020 #Customer intent: I need to create a connection monitor to monitor communication between one VM and another.
-# Create a monitor in Connection Monitor by using the Azure portal
+# Create a monitor in Connection Monitor using the Azure portal
> [!IMPORTANT] > Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You will also not be able to add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor ](migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before 29 February 2024. > [!IMPORTANT]
-> Connection Monitor will now support end to end connectivity checks from and to *Azure Virtual Machine Scale Sets*, enabling faster performance monitoring and network troubleshooting across scale sets
+> Connection Monitor supports end to end connectivity checks from and to *Azure Virtual Machine Scale Sets*, enabling faster performance monitoring and network troubleshooting across scale sets
-Learn how to use Connection Monitor to monitor communication between your resources. This article describes how to create a monitor by using the Azure portal. Connection Monitor supports hybrid and Azure cloud deployments.
+This article describes how to create a monitor using the Azure portal. Connection Monitor supports hybrid and Azure cloud deployments.
## Before you begin
-In connection monitors that you create by using Connection Monitor, you can add both on-premises machines, Azure VMs and Azure Virtual Machine scale sets as sources. These connection monitors can also monitor connectivity to endpoints. The endpoints can be on Azure or on any other URL or IP.
+In monitors that you create using the Connection Monitor, you can add on-premises machines, Azure VMs and Azure virtual machine scale sets as sources. These connection monitors can also monitor connectivity to endpoints. The endpoints can be on Azure or on any other URL or IP.
Here are some definitions to get you started:
-* **Connection monitor resource**. A region-specific Azure resource. All the following entities are properties of a connection monitor resource.
-* **Endpoint**. A source or destination that participates in connectivity checks. Examples of endpoints include:
- * Azure VMs.
- * Azure virtual networks.
- * Azure subnets.
- * On-premises agents.
- * On-premises subnets.
- * On-premises custom networks that include multiple subnets.
- * URLs and IPs.
-* **Test configuration**. A protocol-specific configuration for a test. Based on the protocol you choose, you can define the port, thresholds, test frequency, and other parameters.
-* **Test group**. The group that contains source endpoints, destination endpoints, and test configurations. A connection monitor can contain more than one test group.
-* **Test**. The combination of a source endpoint, destination endpoint, and test configuration. A test is the most granular level at which monitoring data is available. The monitoring data includes the percentage of checks that failed and the round-trip time (RTT).
+* **Connection monitor resource** - A region-specific Azure resource. All the following entities are properties of a connection monitor resource.
+* **Endpoint** - A source or destination that participates in connectivity checks. Examples of endpoints include:
+ * Azure VMs
+ * Azure virtual networks
+ * Azure subnets
+ * On-premises agents
+ * On-premises subnets
+ * On-premises custom networks that include multiple subnets
+ * URLs and IPs
+* **Test configuration** - A protocol-specific configuration for a test. Based on the protocol you choose, you can define the port, thresholds, test frequency, and other parameters.
+* **Test group** - The group that contains source endpoints, destination endpoints, and test configurations. A connection monitor can contain more than one test group.
+* **Test** - The combination of a source endpoint, destination endpoint, and test configuration. A test is the most granular level at which monitoring data is available. The monitoring data includes the percentage of checks that failed and the round-trip time (RTT).
:::image type="content" source="./media/connection-monitor-2-preview/cm-tg-2.png" alt-text="Diagram that shows a connection monitor and defines the relationship between test groups and tests.":::
Here are some definitions to get you started:
## Create a connection monitor
-To create a monitor in Connection Monitor by using the Azure portal:
+> [!Note]
+> Connection Monitor now supports the Azure Monitor Agent extension, thus eliminating the dependency on the legacy Log Analytics agent.
+
+To create a connection monitor using the Azure portal, follow these steps:
1. On the Azure portal home page, go to **Network Watcher**. 1. In the left pane, in the **Monitoring** section, select **Connection monitor**.
- You'll see all the connection monitors that were created in Connection Monitor. To see the connection monitors that were created in the classic Connection Monitor, go to the **Connection monitor** tab.
+   All the monitors created in Connection Monitor are displayed. To see the connection monitors created in the classic Connection Monitor, go to the **Connection monitor** tab.
:::image type="content" source="./media/connection-monitor-2-preview/cm-resource-view.png" alt-text="Screenshot that shows connection monitors created in Connection Monitor.":::
-
-
-1. In the **Connection Monitor** dashboard, in the upper-left corner, select **Create**.
-
-
+
+1. In the **Connection Monitor** dashboard, select **Create**.
-1. On the **Basics** tab, enter information for your connection monitor:
- * **Connection Monitor Name**: Enter a name for your connection monitor. Use the standard naming rules for Azure resources.
- * **Subscription**: Select a subscription for your connection monitor.
- * **Region**: Select a region for your connection monitor. You can select only the source VMs that are created in this region.
- * **Workspace configuration**: Choose a custom workspace or the default workspace. Your workspace holds your monitoring data.
- * To use the default workspace, select the check box.
- * To choose a custom workspace, clear the check box. Then select the subscription and region for your custom workspace.
+1. In the **Basics** tab, enter the following details:
+ * **Connection Monitor Name** - Enter a name for your connection monitor. Use the standard naming rules for Azure resources.
+ * **Subscription** - Select a subscription for your connection monitor.
+ * **Region** - Select a region for your connection monitor. You can select only the source VMs that are created in this region.
+ * **Workspace configuration** - Choose a custom workspace or the default workspace. Your workspace holds your monitoring data.
+ * To choose a custom workspace, clear the default workspace check box. Then, select the subscription and region for your custom workspace.
:::image type="content" source="./media/connection-monitor-2-preview/create-cm-basics.png" alt-text="Screenshot that shows the Basics tab in Connection Monitor.":::
-1. At the bottom of the tab, select **Next: Test groups**.
+1. Select **Next: Test groups**.
1. Add sources, destinations, and test configurations in your test groups. To learn about setting up your test groups, see [Create test groups in Connection Monitor](#create-test-groups-in-a-connection-monitor). :::image type="content" source="./media/connection-monitor-2-preview/create-tg.png" alt-text="Screenshot that shows the Test groups tab in Connection Monitor.":::
-1. At the bottom of the tab, select **Next: Create Alerts**. To learn about creating alerts, see [Create alerts in Connection Monitor](#create-alerts-in-connection-monitor).
+1. Select **Next: Create Alerts**. [Learn more](#create-alerts-in-connection-monitor).
:::image type="content" source="./media/connection-monitor-2-preview/create-alert.png" alt-text="Screenshot that shows the Create alert tab.":::
To create a monitor in Connection Monitor by using the Azure portal:
1. When you're ready to create the connection monitor, at the bottom of the **Review + create** tab, select **Create**.
-Connection Monitor creates the connection monitor resource in the background.
+The Connection Monitor creates the connection monitor resource in the background.
## Create test groups in a connection monitor
- >[!NOTE]
- >> Connection Monitor now supports auto enablement of monitoring extensions for Azure & Non-Azure endpoints, thus eliminating the need for manual installation of monitoring solutions during the creation of Connection Monitor.
+> [!NOTE]
+> Connection Monitor now supports auto enablement of monitoring extensions for Azure & Non-Azure endpoints, thus eliminating the need for manual installation of monitoring solutions during the creation of Connection Monitor.
Each test group in a connection monitor includes sources and destinations that get tested on network parameters. They're tested for the percentage of checks that fail and the RTT over test configurations. In the Azure portal, to create a test group in a connection monitor, you specify values for the following fields:
-* **Disable test group**: You can select this check box to disable monitoring for all sources and destinations that the test group specifies. This selection is cleared by default.
-* **Name**: Name your test group.
-* **Sources**: You can specify both Azure VMs and on-premises machines as sources if agents are installed on them. To learn about installing an agent for your source, see [Install monitoring agents](./connection-monitor-overview.md#install-monitoring-agents).
- * To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs or Virtual Machine scale sets that are bound to the region that you specified when you created the connection monitor. By default, VMs and Virtual Machine scale sets are grouped into the subscription that they belong to. These groups are collapsed.
+* **Test group Name** - Enter the name of your test group.
+* **Sources** - Select **Add sources** to specify both Azure VMs and on-premises machines as sources if agents are installed on them. To learn about installing an agent for your source, see [Install monitoring agents](./connection-monitor-overview.md#install-monitoring-agents).
+ * To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs or virtual machine scale sets that are bound to the region that you specified when you created the connection monitor. By default, VMs and virtual machine scale sets are grouped into the subscription that they belong to. These groups are collapsed.
- You can drill down from the **Subscription** level to other levels in the hierarchy:
+ You can drill down to further levels in the hierarchy from the **Subscription** level:
**Subscription** > **Resource group** > **VNET** > **Subnet** > **VMs with agents** You can also change the **Group by** selector to start the tree from any other level. For example, if you group by virtual network, you see the VMs that have agents in the hierarchy **VNET** > **Subnet** > **VMs with agents**.
- When you select a VNET, subnet, a single VM or a virtual machine scale set the corresponding resource ID is set as the endpoint. By default, all VMs in the selected VNET or subnet participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
+ When you select a VNET, subnet, a single VM, or a virtual machine scale set, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected VNET or subnet participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
- :::image type="content" source="./media/connection-monitor-2-preview/add-sources-1.png" alt-text="Screenshot that shows the Add Sources pane and the Azure endpoints including V M S S tab in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/add-sources-1.png" alt-text="Screenshot that shows the Add Sources pane and the Azure endpoints including virtual machine scale sets tab in Connection Monitor.":::
- * To choose on-premises agents, select the **NonΓÇôAzure endpoints** tab. By default, agents are grouped into workspaces by region. All these workspaces have the Network Performance Monitor configured.
+   * To choose on-premises agents, select the **Non-Azure endpoints** tab. Select from a list of on-premises hosts with the Log Analytics agent installed. Select **Arc Endpoint** as the **Type** and select the subscriptions from the **Subscription** dropdown. The list of hosts that have the [Arc endpoint](azure-monitor-agent-with-connection-monitor.md) extension and the [AMA extension](connection-monitor-install-azure-monitor-agent.md) enabled is displayed.
+
+ :::image type="content" source="./media/connection-monitor-2-preview/arc-endpoint.png" alt-text="Screenshot of Arc enabled and AMA enabled hosts.":::
- If you need to add Network Performance Monitor to your workspace, get it from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/solarwinds.solarwinds-orion-network-performance-monitor?tab=Overview). For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](../azure-monitor/insights/solutions.md). For information about how to configure agents for on-premises machines, see [Agents for on-premises machines](connection-monitor-overview.md#agents-for-on-premises-machines).
+ If you need to add Network Performance Monitor to your workspace, get it from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/solarwinds.solarwinds-orion-network-performance-monitor?tab=Overview). For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](../azure-monitor/insights/solutions.md). For information about how to configure agents for on-premises machines, see [Agents for on-premises machines](connection-monitor-overview.md#agents-for-on-premises-machines).
- Under **Create Connection Monitor**, on the **Basics** tab, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring. If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created. You can also change the **Group by** selector to group by agents.
+ Under **Create Connection Monitor**, on the **Basics** tab, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring. If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created. You can also change the **Group by** selector to group by agents.
- :::image type="content" source="./media/connection-monitor-2-preview/add-non-azure-sources.png" alt-text="Screenshot that shows the Add Sources pane and the Non-Azure endpoints tab in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/add-non-azure-sources.png" alt-text="Screenshot that shows the Add Sources pane and the Non-Azure endpoints tab in Connection Monitor.":::
- * To choose recently used endpoints, you can use the **Recent endpoint** tab
+ * Select the recently used endpoints from the **Recent endpoint** tab.
* You need not choose the endpoints with monitoring agents enabled only. You can select Azure or Non-Azure endpoints without the agent enabled and proceed with the creation of Connection Monitor. During the creation process, the monitoring agents for the endpoints will be automatically enabled. :::image type="content" source="./media/connection-monitor-2-preview/unified-enablement.png" alt-text="Screenshot that shows the Add Sources pane and the Non-Azure endpoints tab in Connection Monitor with unified enablement.":::
In the Azure portal, to create a test group in a connection monitor, you specify
:::image type="content" source="./media/connection-monitor-2-preview/add-test-config.png" alt-text="Screenshot that shows where to set up a test configuration in Connection Monitor."::: * **Test Groups**: You can add one or more Test Groups to a Connection Monitor. These test groups can consist of multiple Azure or Non-Azure endpoints.
- * For selected Azure VMs or Azure virtual machine scale sets and Non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the NPM solution for Non-Azure endpoints will be auto enablement once the creation of Connection Monitor begins.
+   * For selected Azure VMs or Azure virtual machine scale sets and Non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the Network Performance Monitor (NPM) solution for Non-Azure endpoints will be automatically enabled once the creation of the Connection Monitor begins.
 * If the selected virtual machine scale set is set for manual upgrade, you have to upgrade the scale set after the Network Watcher extension installation in order to continue setting up the Connection Monitor with the virtual machine scale set as endpoints. If the virtual machine scale set is set to auto upgrade, you don't need to worry about any upgrade after the Network Watcher extension installation.
- * In the scenario mentioned above, user can consent to auto upgradation of virtual machine scale set with auto enablement of Network Watcher extension during the creation of Connection Monitor for Virtual Machine scale sets with manual upgradation. This would eliminate the need for the user to manually upgrade the virtual machine scale set after installing the Network Watcher extension.
+   * In the scenario mentioned above, you can consent to auto upgrade of the virtual machine scale set with auto enablement of the Network Watcher extension during the creation of the Connection Monitor for virtual machine scale sets set to manual upgrade. This eliminates the need to manually upgrade the virtual machine scale set after installing the Network Watcher extension.
- :::image type="content" source="./media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png" alt-text="Screenshot that shows where to set up a test groups and consent for auto-upgradation of V M S S in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png" alt-text="Screenshot that shows where to set up a test groups and consent for auto-upgradation of virtual machine scale set in Connection Monitor.":::
+* **Disable test group**: You can select this check box to disable monitoring for all sources and destinations that the test group specifies. This selection is cleared by default.
## Create alerts in Connection Monitor
In the Azure portal, to create alerts for a connection monitor, you specify valu
:::image type="content" source="./media/connection-monitor-2-preview/unified-enablement-create.png" alt-text="Screenshot that shows the Create alert tab in Connection Monitor."::: Once all the steps are completed, the process will proceed with unified enablement of monitoring extensions for all endpoints without monitoring agents enabled, followed by creation of Connection Monitor.
-Once the creation process is successful , it will take ~ 5 mins for the connection monitor to show up on the dashboard.
+Once the creation process is successful, it will take ~ 5 mins for the connection monitor to show up on the dashboard.
## Scale limits
network-watcher Connection Monitor Install Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-install-azure-monitor-agent.md
+
+ Title: Install Azure monitor agent for connection monitor
+description: This article describes how to install Azure monitor agent
+++++ Last updated : 09/11/2022+
+#Customer intent: I need to monitor a connection using Azure Monitor agent.
++
+# Install Azure Monitor Agent
+
+The Azure Monitor agent is implemented as an Azure VM extension. It can be installed using any of the methods for installing virtual machine extensions, including those described in this article. [Learn more](../azure-monitor/agents/agents-overview.md) about Azure Monitor Agent.
+
+The following section covers installing an Azure Monitor Agent on Arc-enabled servers using PowerShell and Azure CLI. For more information, see [Manage the Azure Monitor Agent](../azure-monitor/agents/azure-monitor-agent-manage.md?tabs=ARMAgentPowerShell%2CPowerShellWindows%2CPowerShellWindowsArc%2CCLIWindows%2CCLIWindowsArc).
+
+## Using PowerShell
+You can install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers using `New-AzConnectedMachineExtension`, the PowerShell cmdlet for adding a virtual machine extension.
+
+### Install on Azure Arc-enabled servers
+Use the following PowerShell commands to install the Azure Monitor agent on Azure Arc-enabled servers.
+# [Windows](#tab/PowerShellWindowsArc)
+```powershell
+New-AzConnectedMachineExtension -Name AMAWindows -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location>
+```
+# [Linux](#tab/PowerShellLinuxArc)
+```powershell
+New-AzConnectedMachineExtension -Name AMALinux -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location>
+```
++
+### Uninstall on Azure Arc-enabled servers
+Use the following PowerShell commands to uninstall the Azure Monitor agent from the Azure Arc-enabled servers.
+
+# [Windows](#tab/PowerShellWindowsArc)
+```powershell
+Remove-AzConnectedMachineExtension -MachineName <arc-server-name> -ResourceGroupName <resource-group-name> -Name AMAWindows
+```
+# [Linux](#tab/PowerShellLinuxArc)
+```powershell
+Remove-AzConnectedMachineExtension -MachineName <arc-server-name> -ResourceGroupName <resource-group-name> -Name AMALinux
+```
++
+### Upgrade on Azure Arc-enabled servers
+
+To perform a **one time** upgrade of the agent, use the following PowerShell commands:
+
+# [Windows](#tab/PowerShellWindowsArc)
+```powershell
+$target = @{"Microsoft.Azure.Monitor.AzureMonitorWindowsAgent" = @{"targetVersion"=<target-version-number>}}
+Update-AzConnectedExtension -ResourceGroupName $env.ResourceGroupName -MachineName <arc-server-name> -ExtensionTarget $target
+```
+# [Linux](#tab/PowerShellLinuxArc)
+```powershell
+$target = @{"Microsoft.Azure.Monitor.AzureMonitorLinuxAgent" = @{"targetVersion"=<target-version-number>}}
+Update-AzConnectedExtension -ResourceGroupName $env.ResourceGroupName -MachineName <arc-server-name> -ExtensionTarget $target
+```
+
+## Using Azure CLI
+
+You can install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers using the Azure CLI command for adding a virtual machine extension.
+
+### Install on Azure Arc-enabled servers
+Use the following CLI commands to install the Azure Monitor agent on Azure Arc-enabled servers.
+
+# [Windows](#tab/CLIWindowsArc)
+```azurecli
+az connectedmachine extension create --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location>
+```
+# [Linux](#tab/CLILinuxArc)
+```azurecli
+az connectedmachine extension create --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location>
+```
++
+### Uninstall on Azure Arc-enabled servers
+Use the following CLI commands to uninstall the Azure Monitor agent from the Azure Arc-enabled servers.
+
+# [Windows](#tab/CLIWindowsArc)
+```azurecli
+az connectedmachine extension delete --name AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name>
+```
+# [Linux](#tab/CLILinuxArc)
+```azurecli
+az connectedmachine extension delete --name AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name>
+```
++
+### Upgrade on Azure Arc-enabled servers
+To perform a **one time upgrade** of the agent, use the following CLI commands:
+# [Windows](#tab/CLIWindowsArc)
+```azurecli
+az connectedmachine upgrade-extension --extension-targets "{\"Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\":{\"targetVersion\":\"<target-version-number>\"}}" --machine-name <arc-server-name> --resource-group <resource-group-name>
+```
+# [Linux](#tab/CLILinuxArc)
+```azurecli
+az connectedmachine upgrade-extension --extension-targets "{\"Microsoft.Azure.Monitor.AzureMonitorLinuxAgent\":{\"targetVersion\":\"<target-version-number>\"}}" --machine-name <arc-server-name> --resource-group <resource-group-name>
+```
++
+## Enable Network Watcher Agent
+
+After your machine is Arc-enabled, it is recognized as an Azure resource. After enabling the Azure Monitor Agent extension, follow these steps to install the Network Watcher extension. The steps are similar to installing the Network Watcher extension on an Azure VM.
+
+To make Connection Monitor recognize your Azure Arc-enabled on-premises machines with Azure Monitor Agent extension as monitoring sources, install the Network Watcher Agent virtual machine extension on them. This extension is also known as the Network Watcher extension.
+See [Agents for Azure virtual machines and virtual machine scale sets](connection-monitor-overview.md#agents-for-azure-virtual-machines-and-virtual-machine-scale-sets) to install the Network Watcher extension on your Azure Arc-enabled servers that have the Azure Monitor Agent extension installed.
+
+You can also use the following command to install the Network Watcher extension in your Arc-enabled machine with Azure Monitor Agent extension.
+
+# [Windows](#tab/PowerShellWindowsArc)
+```powershell
+New-AzConnectedMachineExtension -Name AzureNetworkWatcherExtension -ExtensionType NetworkWatcherAgentWindows -Publisher Microsoft.Azure.NetworkWatcher -ResourceGroupName $rg -MachineName $vm -Location $location
+```
+# [Linux](#tab/PowerShellLinuxArc)
+```powershell
+New-AzConnectedMachineExtension -Name AzureNetworkWatcherExtension -ExtensionType NetworkWatcherAgentLinux -Publisher Microsoft.Azure.NetworkWatcher -ResourceGroupName $rg -MachineName $vm -Location $location
+```
++
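+If you prefer the Azure CLI over PowerShell for this step, the same Network Watcher extension can be added with `az connectedmachine extension create`, mirroring the Azure Monitor agent commands earlier in this article. The machine, resource group, and location values are placeholders, and the example uses the Windows extension type (a hedged sketch; use `NetworkWatcherAgentLinux` for Linux machines):
+
+```azurecli
+# Install the Network Watcher agent extension on a Windows Arc-enabled machine
+az connectedmachine extension create \
+  --name AzureNetworkWatcherExtension \
+  --publisher Microsoft.Azure.NetworkWatcher \
+  --type NetworkWatcherAgentWindows \
+  --machine-name <arc-server-name> \
+  --resource-group <resource-group-name> \
+  --location <arc-server-location>
+```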
+## Next steps
+
+- After installing the monitoring agents, proceed to [create a Connection Monitor](connection-monitor-create-using-portal.md#create-a-connection-monitor). After creating a Connection Monitor, analyze monitoring data and set alerts. Diagnose issues in your connection monitor and your network.
+
+- Monitor the network connectivity of your Azure and Non-Azure set-ups using [Connection Monitor](connection-monitor-overview.md).
network-watcher Network Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-overview.md
Last updated 11/25/2020 --+ # Azure Monitor Network Insights
-Azure Monitor Network Insights provides a comprehensive view of [health](../service-health/resource-health-checks-resource-types.md) and [metrics](../azure-monitor/essentials/metrics-supported.md) for all deployed network resources, without requiring any configuration. It also provides access to network monitoring capabilities like [Connection Monitor](../network-watcher/connection-monitor-overview.md), [flow logging for network security groups (NSGs)](../network-watcher/network-watcher-nsg-flow-logging-overview.md), and [Traffic Analytics](../network-watcher/traffic-analytics.md). And it provides other network [diagnostic](../network-watcher/network-watcher-monitoring-overview.md#diagnostics) features.
+Azure Monitor Network Insights provides a comprehensive and visual representation through [topologies](network-insights-topology.md), of [health](../service-health/resource-health-checks-resource-types.md) and [metrics](../azure-monitor/essentials/metrics-supported.md) for all deployed network resources, without requiring any configuration. It also provides access to network monitoring capabilities like [Connection Monitor](../network-watcher/connection-monitor-overview.md), [flow logging for network security groups (NSGs)](../network-watcher/network-watcher-nsg-flow-logging-overview.md), and [Traffic Analytics](../network-watcher/traffic-analytics.md). And it provides other network [diagnostic](../network-watcher/network-watcher-monitoring-overview.md#diagnostics) features.
Azure Monitor Network Insights is structured around these key components of monitoring:
+- [Topology](#topology)
- [Network health and metrics](#networkhealth) - [Connectivity](#connectivity) - [Traffic](#traffic) - [Diagnostic Toolkit](#diagnostictoolkit)
-## <a name="networkhealth"></a>Network health and metrics
+## Topology
+
+[Topology](network-insights-topology.md) helps you visualize how a resource is configured. It provides a graphic representation of the entire hybrid network for understanding network configuration. Topology is a unified visualization tool for resource inventory and troubleshooting.
-The Azure Monitor Network Insights **Overview** page provides an easy way to visualize the inventory of your networking resources, together with resource health and alerts. It's divided into four key functional areas: search and filtering, resource health and metrics, alerts, and dependency view.
+It provides an interactive interface to view resources and their relationships in Azure, spanning across multiple subscriptions, resource groups, and locations. You can also drill down to the basic unit of each topology and view the resource view diagram of each unit.
-[![Screenshot that shows the Overview page](media/network-insights-overview/overview.png)](media/network-insights-overview/overview.png#lightbox)
+## <a name="networkhealth"></a>Network health and metrics
+
+The Azure Monitor Network Insights **Overview** page provides an easy way to visualize the inventory of your networking resources, together with resource health and alerts. It's divided into four key functional areas: search and filtering, resource health and metrics, alerts, and resource view.
### Search and filtering You can customize the resource health and alerts view by using filters like **Subscription**, **Resource Group**, and **Type**.
You can use the search box to search for resources and their associated resource
[![Screenshot that shows Azure Monitor Network Insights search results.](media/network-insights-overview/search.png)](media/network-insights-overview/search.png#lightbox) - ### Resource health and metrics In the following example, each tile represents a resource type. The tile displays the number of instances of that resource type deployed across all selected subscriptions. It also displays the health status of the resource. In this example, there are 105 ER and VPN connections deployed. 103 are healthy, and 2 are unavailable.
You can select any item in the grid view. Select the icon in the **Health** colu
### Alerts The **Alert** box on the right side of the page provides a view of all alerts generated for the selected resources across all subscriptions. Select the alert counts to go to a detailed alerts page.
-### Dependency view
-Dependency view helps you visualize how a resource is configured. Dependency view is currently available for Azure Application Gateway, Azure Virtual WAN, and Azure Load Balancer. For example, for Application Gateway, you can access dependency view by selecting the Application Gateway resource name in the metrics grid view. You can do the same thing for Virtual WAN and Load Balancer.
+### Resource view
+The Resource view helps you visualize how a resource is configured. The Resource view is currently available for Azure Application Gateway, Azure Virtual WAN, and Azure Load Balancer. For example, for Application Gateway, you can access the resource view by selecting the Application Gateway resource name in the metrics grid view. You can do the same thing for Virtual WAN and Load Balancer.
![Screenshot that shows Application Gateway view in Azure Monitor Network Insights.](media/network-insights-overview/application-gateway.png)
-The dependency view for Application Gateway provides a simplified view of how the front-end IPs are connected to the listeners, rules, and backend pool. The connecting lines are color coded and provide additional details based on the backend pool health. The view also provides a detailed view of Application Gateway metrics and metrics for all related backend pools, like virtual machine scale set and VM instances.
+The resource view for Application Gateway provides a simplified view of how the front-end IPs are connected to the listeners, rules, and backend pool. The connecting lines are color coded and provide additional details based on the backend pool health. The view also provides a detailed view of Application Gateway metrics and metrics for all related backend pools, like virtual machine scale set and VM instances.
[![Screenshot that shows dependency view in Azure Monitor Network Insights.](media/network-insights-overview/dependency-view.png)](media/network-insights-overview/dependency-view.png#lightbox)
The dependency graph provides easy navigation to configuration settings. Right-c
![Screenshot that shows the dependency view menu in Azure Monitor Network Insights.](media/network-insights-overview/dependency-view-menu.png)
-The search and filter bar on the dependency view provides an easy way to search through the graph. For example, if you search for **AppGWTestRule** in the previous example, the view will scale down to all nodes connected via AppGWTestRule:
+The search and filter bar on the resource view provides an easy way to search through the graph. For example, if you search for **AppGWTestRule** in the previous example, the view will scale down to all nodes connected via AppGWTestRule:
![Screenshot that shows an example of a search in Azure Monitor Network Insights.](media/network-insights-overview/search-example.png)
You can select any source or destination tile to open a metric view:
[![Screenshot that shows connectivity metrics in Azure Monitor Network Insights.](media/network-insights-overview/azure-monitor-for-networks-connectivity-metrics.png)](media/network-insights-overview/azure-monitor-for-networks-connectivity-metrics.png#lightbox)

You can select any item in the grid view. Select the icon in the **Reachability** column to go to the Connection Monitor portal page and view the hop-by-hop topology and any connectivity-affecting issues identified. Select the value in the **Alert** column to go to alerts. Select the graphs in the **Checks Failed Percent** and **Round-Trip Time (ms)** columns to go to the metrics page for the selected connection monitor.

The **Alert** box on the right side of the page provides a view of all alerts generated for the connectivity tests configured across all subscriptions. Select the alert counts to go to a detailed alerts page.
Diagnostic Toolkit provides access to all the diagnostic features available for
## Availability of resources
-By default, all networking resources are visible in Network Insights. Customers can click on the resource type for viewing resource health and metrics (if available), subscription details, location, etc. A subset of networking resources have been _Onboarded_. For Onboarded resources, customers have access to a resource specific topology view and a built-in metrics workbook. These out-of-the-box experiences make it easier to explore resource metrics and troubleshoot issues.
+By default, all networking resources are visible in Network Insights. Customers can select a resource type to view its resource health and metrics (if available), subscription details, location, and more. A subset of networking resources has been _Onboarded_. For Onboarded resources, customers have access to a resource-specific topology view and a built-in metrics workbook. These out-of-the-box experiences make it easier to explore resource metrics and troubleshoot issues.
Resources that have been onboarded are:
- Application Gateway
Resources that have been onboarded are:
- Virtual Network
- Virtual Network NAT
- Virtual WAN
-- ER/VPN Gateway
+- ER/VPN Gateway
- Virtual Hub
-## Troubleshooting
-For general troubleshooting guidance, see the dedicated workbook-based insights [troubleshooting article](../azure-monitor/insights/troubleshoot-workbooks.md).
-This section will help you diagnose and troubleshoot some common problems you might encounter when you use Azure Monitor Network Insights.
-
-### How do I resolve performance problems or failures?
-
-To learn about troubleshooting any networking-related problems you identify with Azure Monitor Network Insights, see the troubleshooting documentation for the malfunctioning resource.
-
-Here are some links to troubleshooting articles for frequently used services. For more troubleshooting articles about these services, see the other articles in the Troubleshooting section of the table of contents for the service.
-* [Azure Virtual Network](../virtual-network/virtual-network-troubleshoot-peering-issues.md)
-* [Azure Application Gateway](../application-gateway/create-gateway-internal-load-balancer-app-service-environment.md)
-* [Azure VPN Gateway](../vpn-gateway/vpn-gateway-troubleshoot.md)
-* [Azure ExpressRoute](../expressroute/expressroute-troubleshooting-expressroute-overview.md)
-* [Azure Load Balancer](../load-balancer/load-balancer-troubleshoot.md)
-* [Azure NAT Gateway](../virtual-network/nat-gateway/troubleshoot-nat.md)
-
-### Why don't I see the resources for all the subscriptions I've selected?
-
-Azure Monitor Network Insights can show resources for only five subscriptions at a time.
-
-### How do I make changes or add visualizations to Azure Monitor Network Insights?
-
-To make changes, select **Edit Mode** to modify the workbook. You can then save your changes as a new workbook that's tied to a designated subscription and resource group.
-
-### What's the time grain after I pin any part of the workbooks?
-
-Azure Monitor Network Insights uses the **Auto** time grain, so the time grain is based on the selected time range.
-
-### What's the time range when any part of a workbook is pinned?
-
-The time range depends on the dashboard settings.
-
-### What if I want to see other data or make my own visualizations? How can I make changes to Azure Monitor Network Insights?
-
-You can edit the workbook you see in any side-panel or detailed metric view by using the edit mode. You can then save your changes as a new workbook.
## Next steps
-- Learn more about network monitoring: [What is Azure Network Watcher?](../network-watcher/network-watcher-monitoring-overview.md)
-- Learn the scenarios workbooks are designed to support, how to create reports and customize existing reports, and more: [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md)
+- Learn more about network monitoring: [What is Azure Network Watcher?](../network-watcher/network-watcher-monitoring-overview.md).
+- Learn the scenarios workbooks are designed to support, how to create reports and customize existing reports, and more: [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md).
network-watcher Network Insights Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-topology.md
+
+ Title: Network Insights topology
+description: An overview of topology, which provides a pictorial representation of the resources.
++++ Last updated : 09/09/2022+++
+# Topology (Preview)
+
+Topology provides a visualization of the entire hybrid network for understanding network configuration. It provides an interactive interface to view resources and their relationships in Azure, spanning multiple subscriptions, resource groups, and locations. You can also drill down to the resource view of individual resources to see their component-level visualization.
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
+- An account with the necessary [RBAC permissions](required-rbac-permissions.md) to use Network Watcher capabilities.
+
+## Supported resource types
+
+The following are the resource types supported by Topology:
+
+- Application gateways
+- ExpressRoute Circuits
+- Load balancers
+- Network Interfaces
+- Network Security Groups
+- PrivateLink Endpoints
+- PrivateLink Services
+- Public IP Addresses
+- Virtual Machines
+- Virtual Network Gateways
+- Virtual Networks
+
+## View Topology
+
+To view a topology, follow these steps:
+
+1. Log into the [Azure portal](https://portal.azure.com) with an account that has the necessary [permissions](required-rbac-permissions.md).
+2. Select **More services**.
+3. In the **All services** screen, enter **Monitor** in the **Filter services** search box and select it from the search result.
+4. Under **Insights**, select **Networks**.
+5. In the **Networks** screen that appears, select **Topology**.
+6. Select **Scope** to define the scope of the Topology.
+7. In the **Select scope** pane, select the list of **Subscriptions**, **Resource groups**, and **Locations** of the resources for which you want to view the topology. Select **Save**.
+
+ The duration to render the topology may vary depending on the number of subscriptions selected.
+8. Select the [**Resource type**](#supported-resource-types) that you want to include in the topology and select **Apply**.
+
+The topology that contains the resources matching the specified scope and resource types appears.
+
+Each edge of the topology represents an association between resources. In the topology, similar types of resources are grouped together.
+
+## Add regions
+
+You can add regions that aren't part of the existing topology. The number of such regions is displayed.
+To add a region, follow these steps:
+
+1. Hover over **Regions** under **Azure Regions**.
+2. From the list of **Hidden Resources**, select the regions to be added, and then select **Add to View**.
+
+You can view the resources in the added region as part of the topology.
+
+## Drilldown resources
+
+To drill down to the basic unit of each network, select the plus sign on each resource. When you hover over a resource, you can see its details. Selecting a resource displays a pane on the right with a summary of the resource.
+
+Drilling down into Azure resources such as Application Gateways and Firewalls displays the resource view diagram of that resource.
+
+## Integration with diagnostic tools
+
+When you drill down to a VM within the topology, the summary pane contains the **Insights + Diagnostics** section, from which you can find the next hop. Follow these steps to find the next hop (a CLI sketch follows the steps).
+1. Select **Next hop** and enter the destination IP address.
+2. Select **Check Next Hop**. The [Next hop](network-watcher-next-hop-overview.md) check determines whether the destination IP address is reachable from the source VM.
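+
+If you prefer to run the same check from the command line, the following is a minimal Azure CLI sketch that uses Network Watcher's next hop capability. The resource group, VM name, and IP addresses are illustrative placeholders.
+
+```azurecli-interactive
+# Check the next hop from a source VM to a destination IP address.
+# The resource group, VM name, and IP addresses are placeholders.
+az network watcher show-next-hop \
+  --resource-group myResourceGroup \
+  --vm myVM \
+  --source-ip 10.0.0.4 \
+  --dest-ip 10.0.1.10
+```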
+
+## Next steps
+
+[Learn more](/connection-monitor-overview.md) about connectivity-related metrics.
network-watcher Network Insights Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-troubleshooting.md
+
+ Title: Azure Monitor Network Insights troubleshooting
+description: Troubleshooting steps for issues that may arise while using Network insights
+++++ Last updated : 09/09/2022++
+# Troubleshooting
+
+For general troubleshooting guidance, see the dedicated workbook-based insights [troubleshooting article](../azure-monitor/insights/troubleshoot-workbooks.md).
+
+This article helps you diagnose and troubleshoot some common problems you might encounter when you use Azure Monitor Network Insights.
+
+## How do I resolve performance problems or failures?
+
+To learn about troubleshooting any networking-related problems you identify with Azure Monitor Network Insights, see the troubleshooting documentation for the malfunctioning resource.
+
+For more troubleshooting articles about these services, see the other articles in the Troubleshooting section of the table of contents for the service.
+- Application Gateway
+- Azure ExpressRoute
+- Azure Firewall
+- Azure Private Link
+- Connections
+- Load Balancer
+- Local Network Gateway
+- Network Interface
+- Network Security Groups
+- Public IP addresses
+- Route Table / UDR
+- Traffic Manager
+- Virtual Network
+- Virtual Network NAT
+- Virtual WAN
+- ER/VPN Gateway
+- Virtual Hub
+
+## How do I make changes or add visualizations to Azure Monitor Network Insights?
+
+To make changes, select **Edit Mode** to modify the workbook. You can then save your changes as a new workbook that's tied to a designated subscription and resource group.
+
+## What's the time grain after I pin any part of the workbooks?
+
+Azure Monitor Network Insights uses the **Auto** time grain, so the time grain is based on the selected time range.
+
+## What's the time range when any part of a workbook is pinned?
+
+The time range depends on the dashboard settings.
+
+## What if I want to see other data or make my own visualizations? How can I make changes to Azure Monitor Network Insights?
+
+You can edit the workbook you see in any side-panel or detailed metric view by using the edit mode. You can then save your changes as a new workbook.
+
+## Next steps
+- Learn more about network monitoring: [What is Azure Network Watcher?](../network-watcher/network-watcher-monitoring-overview.md)
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
$virtualNetwork | Set-AzVirtualNetwork
**Incompatible Services**: Due to current platform limitations, a small set of Azure services are not supported by NSG Flow Logs. The current list of incompatible services is
- [Azure Container Instances (ACI)](https://azure.microsoft.com/services/container-instances/)
- [Logic Apps](https://azure.microsoft.com/services/logic-apps/)
+- [Azure Functions](https://azure.microsoft.com/services/functions/)
> [!NOTE]
> App services deployed under an App Service Plan do not support NSG Flow Logs. Refer to [this documentation](../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works) for additional details.
network-watcher View Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/view-network-topology.md
na+ Last updated 05/09/2018
# View the topology of an Azure virtual network
+> [!IMPORTANT]
+> Try the new [Topology](network-insights-topology.md) experience, which offers visualization of Azure resources for ease of inventory management and monitoring your network at scale. Use it to visualize resources and their dependencies across subscriptions, regions, and locations. [Click here](https://ms.portal.azure.com/#view/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/~/overview) to navigate to the experience.
+ In this article, you learn how to view resources in a Microsoft Azure virtual network, and the relationships between the resources. For example, a virtual network contains subnets. Subnets contain resources, such as Azure Virtual Machines (VM). VMs have one or more network interfaces. Each subnet can have a network security group and a route table associated to it. The topology capability of Azure Network Watcher enables you to view all of the resources in a virtual network, the resources associated to resources in a virtual network, and the relationships between the resources. You can use the [Azure portal](#azure-portal), the [Azure CLI](#azure-cli), or [PowerShell](#powershell) to view a topology.
networking Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure networking services description: Lists Azure Policy Regulatory Compliance controls available for Azure networking services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
openshift Configure Azure Ad Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/configure-azure-ad-cli.md
# Configure Azure Active Directory authentication for an Azure Red Hat OpenShift 4 cluster (CLI)
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.6.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.30.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
Retrieve your cluster-specific URLs that are going to be used to configure the Azure Active Directory application.
openshift Howto Create Private Cluster 4X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-private-cluster-4x.md
description: Learn how to create an Azure Red Hat OpenShift private cluster runn
Last updated 03/12/2020--++ keywords: aro, openshift, az aro, red hat, cli #Customer intent: As an operator, I need to create a private Azure Red Hat OpenShift cluster
In this article, you'll prepare your environment to create Azure Red Hat OpenShi
> * Setup the prerequisites and create the required virtual network and subnets > * Deploy a cluster with a private API server endpoint and a private ingress controller
-If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.6.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.30.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
## Before you begin
openshift Howto Create Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-service-principal.md
The following sections explain how to create and use a service principal to depl
## Prerequisites - Azure CLI
-If youΓÇÖre using the Azure CLI, youΓÇÖll need Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+If you're using the Azure CLI, you'll need Azure CLI version 2.30.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
## Create a resource group - Azure CLI
To create a service principal, see [Use the portal to create an Azure AD applica
openshift Howto Update Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-update-certificates.md
+
+ Title: Update ARO cluster certificates
+description: Learn how to manually update Azure Red Hat OpenShift cluster certificates
++ Last updated : 10/05/2022++
+keywords: aro, openshift, az aro, red hat, cli, azure, update, certificates
+#Customer intent: I need to understand how to manually update my Azure Red Hat OpenShift cluster certificates.
++
+# Update Azure Red Hat OpenShift cluster certificates
+
+Azure Red Hat OpenShift uses cluster certificates stored on worker machines for API and application ingress. These certificates are normally updated in a transparent process during routine maintenance. In some cases, cluster certificates might fail to update during maintenance.
+
+If you're experiencing certificate issues, you can manually update your certificates using [the `az aro update` command](/cli/azure/aro#az-aro-update):
+
+```azurecli-interactive
+az aro update --name MyCluster --resource-group MyResourceGroup --refresh-credentials
+```
+where:
+
+* `name` is the name of the cluster.
+* `resource-group` is the name of the resource group. You can configure the default group using `az configure --defaults group=<name>`.
+* `refresh-credentials` refreshes the cluster application credentials.
+
+Running this command restarts worker machines and updates the cluster certificates, setting the cluster to a known, proper state.
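+
+One hedged way to confirm that the cluster has returned to a healthy state after the update (not part of the official procedure; the cluster and resource group names are placeholders) is to check its provisioning state:
+
+```azurecli-interactive
+# Look for "provisioningState": "Succeeded" in the output.
+az aro show --name MyCluster --resource-group MyResourceGroup --output json
+```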
+
+> [!NOTE]
+> Certificates for custom domains need to be updated manually. For more information, see the [Red Hat OpenShift documentation](https://docs.openshift.com/rosa/applications/deployments/osd-config-custom-domains-applications.html).
+
openshift Tutorial Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/tutorial-create-cluster.md
Title: Tutorial - Create an Azure Red Hat OpenShift 4 cluster description: Learn how to create a Microsoft Azure Red Hat OpenShift cluster using the Azure CLI--++ Last updated 10/26/2020
In this tutorial, part one of three, you'll prepare your environment to create a
## Before you begin
-If you choose to install and use the CLI locally, this tutorial requires that you're running the Azure CLI version 2.6.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+If you choose to install and use the CLI locally, this tutorial requires that you're running the Azure CLI version 2.30.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster. The default Azure resource quota for a new Azure subscription does not meet this requirement. To request an increase in your resource limit, see [Standard quota: Increase limits by VM series](../azure-portal/supportability/per-vm-quota-requests.md).
orbital Geospatial Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/geospatial-reference-architecture.md
description: Show how to architect end-to-end geospatial data on Azure.
-+ Last updated 06/13/2022
-# Customer intent: As a geospatial architect, I'd like to understand how to architecture a solution on Azure.
+#Customer intent: As a geospatial architect, I'd like to understand how to architecture a solution on Azure.
# End-to-end geospatial storage, analysis, and visualization
Azure has many native geospatial capabilities. In this diagram and the ones that
:::image type="content" source="media/geospatial-overview.png" alt-text="Geospatial On Azure" lightbox="media/geospatial-overview.png":::
-This architecture flow assumes that the data may be coming from databases, files or streaming sources and not stored in a native GIS format. Once the data is ingested with Azure Data Factory, or via Azure IoT, Event Hubs and Stream Analytics, it could then be stored permanently in warm storage with Azure SQL, Azure SQL Managed Instance, Azure Database for PostgreSQL or Azure Data Lake Storage. From there, the data can be transformed and processed in batch with Azure Batch or Synapse Spark Pool, of which both can be automated through the usage of an Azure Data Factory or Synapse pipeline. For real-time data, it can be further transformed or processed with Stream Analytics, Azure Maps or brought into context with Azure Digital Twins. Once the data is transformed, it can then once again be served for additional uses in Azure SQL DB or Azure Database for PostgreSQL, Synapse SQL Pool (for abstracted non-geospatial data), Cosmos DB or Azure Data Explorer. Once ready, the data can be queried directly through the data base API, but frequently a publish layer is used. The Azure Maps Data API would suffice for small datasets, otherwise a non-native service can be introduced based on OSS or COTS, for accessing the data through web services or desktop applications. Finally, the Azure Maps Web SDK hosted in Azure App Service would allow for geovisualization. Another option is to use Azure Maps in Power BI. Lastly, HoloLens and Azure Spatial Anchors can be used to view the data and place it in the real-world for virtual reality (VR) and augmented reality (AR) experiences.
+This architecture flow assumes that the data may be coming from databases, files or streaming sources and not stored in a native GIS format. Once the data is ingested with Azure Data Factory, or via Azure IoT, Event Hubs and Stream Analytics, it could then be stored permanently in warm storage with Azure SQL, Azure SQL Managed Instance, Azure Database for PostgreSQL or Azure Data Lake Storage. From there, the data can be transformed and processed in batch with Azure Batch or Synapse Spark Pool, of which both can be automated through the usage of an Azure Data Factory or Synapse pipeline. For real-time data, it can be further transformed or processed with Stream Analytics, Azure Maps or brought into context with Azure Digital Twins. Once the data is transformed, it can then once again be served for additional uses in Azure SQL DB or Azure Database for PostgreSQL, Synapse SQL Pool (for abstracted non-geospatial data), Azure Cosmos DB, or Azure Data Explorer. Once ready, the data can be queried directly through the database API, but frequently a publish layer is used. The Azure Maps Data API would suffice for small datasets, otherwise a non-native service can be introduced based on OSS or COTS, for accessing the data through web services or desktop applications. Finally, the Azure Maps Web SDK hosted in Azure App Service would allow for geovisualization. Another option is to use Azure Maps in Power BI. Lastly, HoloLens and Azure Spatial Anchors can be used to view the data and place it in the real world for virtual reality (VR) and augmented reality (AR) experiences.
It should be noted as well that many of these options are optional and could be supplemented with OSS to reduce cost while also maintaining scalability, or with third-party tools to utilize their specific capabilities. The next section addresses this need.
partner-solutions Add Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/add-connectors.md
Title: Azure services and Confluent Cloud integration - Azure partner solutions description: This article describes how to use Azure services and install connectors for Confluent Cloud integration. + Last updated 06/24/2022
This article describes how to use Azure services like Azure Functions, and insta
## Azure Cosmos DB connector
-**Azure Cosmos DB Sink Connector fully managed connector** is generally available within Confluent Cloud. The fully managed connector eliminates the need for the development and management of custom integrations, and reduces the overall operational burden of connecting your data between Confluent Cloud and Azure Cosmos DB. The Microsoft Azure Cosmos Sink Connector for Confluent Cloud reads from and writes data to a Microsoft Azure Cosmos database. The connector polls data from Kafka and writes to database containers.
+**Azure Cosmos DB Sink Connector fully managed connector** is generally available within Confluent Cloud. The fully managed connector eliminates the need for the development and management of custom integrations, and reduces the overall operational burden of connecting your data between Confluent Cloud and Azure Cosmos DB. The Azure Cosmos DB Sink Connector for Confluent Cloud reads from and writes data to an Azure Cosmos DB database. The connector polls data from Kafka and writes to database containers.
To set up your connector, see [Azure Cosmos DB Sink Connector for Confluent Cloud](https://docs.confluent.io/cloud/current/connectors/cc-azure-cosmos-sink.html).
-**Azure Cosmos DB Self Managed connector** must be installed manually. First download an uber JAR from the [Cosmos DB Releases page](https://github.com/microsoft/kafka-connect-cosmosdb/releases). Or, you can [build your own uber JAR directly from the source code](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/README_Sink.md#install-sink-connector). Complete the installation by following the guidance described in the Confluent documentation for [installing connectors manually](https://docs.confluent.io/home/connect/install.html#install-connector-manually).
+**Azure Cosmos DB Self Managed connector** must be installed manually. First download an uber JAR from the [Azure Cosmos DB Releases page](https://github.com/microsoft/kafka-connect-cosmosdb/releases). Or, you can [build your own uber JAR directly from the source code](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/README_Sink.md#install-sink-connector). Complete the installation by following the guidance described in the Confluent documentation for [installing connectors manually](https://docs.confluent.io/home/connect/install.html#install-connector-manually).
## Azure Functions
postgresql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-azure-ad-authentication.md
+
+ Title: Active Directory authentication - Azure Database for PostgreSQL - Flexible Server
+description: Learn about the concepts of Azure Active Directory for authentication with Azure Database for PostgreSQL - Flexible Server
+++ Last updated : 10/12/2022+++++
+# Azure Active Directory Authentication (Azure AD) with PostgreSQL Flexible Server
++
+> [!NOTE]
+> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
+
+Microsoft Azure Active Directory authentication is a mechanism of connecting to Azure Database for PostgreSQL using identities defined in Azure AD.
+With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management.
+
+Benefits of using Azure AD include:
+
+- Authentication of users across Azure Services in a uniform way
+- Management of password policies and password rotation in a single place
+- Multiple forms of authentication supported by Azure Active Directory, which can eliminate the need to store passwords
+- Customers can manage database permissions using external (Azure AD) groups.
+- Azure AD authentication uses PostgreSQL database roles to authenticate identities at the database level
+- Support of token-based authentication for applications connecting to Azure Database for PostgreSQL
+
+## Azure Active Directory Authentication (Single Server vs. Flexible Server)
+
+Azure Active Directory Authentication for Flexible Server builds on the experience and feedback collected from Azure Database for PostgreSQL Single Server, and supports the following features and improvements over Single Server.
+
+The following table provides a list of high-level Azure AD features and capabilities comparisons between Single Server and Flexible Server
+
+| **Feature / Capability** | **Single Server** | **Flexible Server** |
+| - | - | - |
+| Multiple Azure AD Admins | No | Yes|
+| Managed Identities (System & User assigned) | Partial | Full|
+| Invited User Support | No | Yes |
+| Disable Password Authentication | Not Available | Available|
+| Service Principal can act as group member| No | Yes |
+| Audit Azure AD Logins | No | Yes |
+| PG bouncer support | No | Planned for GA |
+
+## How Azure AD Works In Flexible Server
+
+The following high-level diagram summarizes how authentication works using Azure AD authentication with Azure Database for PostgreSQL. The arrows indicate communication pathways.
+
+![authentication flow][1]
+
+ To configure Azure AD with Azure Database for PostgreSQL Flexible Server, see [Configure and sign in with Azure AD for Azure Database for PostgreSQL Flexible Server](how-to-configure-sign-in-azure-ad-authentication.md).
+
+## Manage PostgreSQL Access For AD Principals
+
+When Azure AD authentication is enabled and an Azure AD principal is added as an Azure AD administrator, that account gets the same privileges as the original PostgreSQL administrator. Only an Azure AD administrator can manage other Azure AD-enabled roles on the server, using the Azure portal or the database API. The Azure AD administrator login can be an Azure AD user, an Azure AD group, a service principal, or a managed identity. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the PostgreSQL server. Multiple Azure AD administrators can be configured at any time, and you can optionally disable password authentication to an Azure Database for PostgreSQL Flexible Server for better auditing and compliance needs.
+
+![admin structure][2]
+
+ > [!NOTE]
+ > A service principal or managed identity can now act as a fully functional Azure AD administrator in Flexible Server; this was a limitation in Single Server.
+
+Azure AD administrators that are created via the Azure portal, API, or SQL have the same permissions as the regular admin user created during server provisioning. Additionally, database permissions for non-admin Azure AD-enabled roles are managed similarly to regular roles.
+
+## Connect using Azure AD identities
+
+Azure Active Directory authentication supports the following methods of connecting to a database using Azure AD identities:
+
+- Azure Active Directory Password
+- Azure Active Directory Integrated
+- Azure Active Directory Universal with MFA
+- Using Active Directory Application certificates or client secrets
+- [Managed Identity](how-to-connect-with-managed-identity.md)
+
+Once you've authenticated against the Active Directory, you then retrieve a token. This token is your password for logging in.
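+
+As an illustration, the following minimal sketch uses the Azure CLI to retrieve a token and pass it to `psql` as the password. The server, user, and database names are placeholders.
+
+```azurecli-interactive
+# Sign in and request an access token for Azure Database for PostgreSQL.
+az login
+
+# The token is used as the password; server, user, and database names are illustrative placeholders.
+export PGPASSWORD=$(az account get-access-token \
+  --resource https://ossrdbms-aad.database.windows.net \
+  --query accessToken --output tsv)
+
+psql "host=mydemoserver.postgres.database.azure.com user=user@contoso.com dbname=postgres sslmode=require"
+```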
+
+> [!NOTE]
+> To configure Azure AD with Azure Database for PostgreSQL Flexible Server, see [Configure and sign in with Azure AD for Azure Database for PostgreSQL Flexible Server](how-to-configure-sign-in-azure-ad-authentication.md).
+
+## Additional considerations
+
+- Multiple Azure AD principals (a user, group, service principal or managed identity) can be configured as Azure AD Administrator for an Azure Database for PostgreSQL server at any time.
+- Only an Azure AD administrator for PostgreSQL can initially connect to the Azure Database for PostgreSQL using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users.
+- If an Azure AD principal is deleted from Azure AD, it still remains as a PostgreSQL role, but it will no longer be able to acquire a new access token. In this case, although the matching role still exists in the database, it won't be able to authenticate to the server. Database administrators need to transfer ownership and drop such roles manually.
+
+> [!NOTE]
+> Sign-in with a deleted Azure AD user is still possible until the token expires (up to 60 minutes from token issuance). If you also remove the user from Azure Database for PostgreSQL, this access is revoked immediately.
+
+- Azure Database for PostgreSQL Flexible Server matches access tokens to the database role using the user's unique Azure Active Directory user ID, as opposed to using the username. If an Azure AD user is deleted and a new user is created with the same name, Azure Database for PostgreSQL Flexible Server considers that a different user. Therefore, if a user is deleted from Azure AD and a new user is added with the same name, the new user won't be able to connect with the existing role.
+
+## Next steps
+
+- To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for PostgreSQL, see [Configure and sign in with Azure AD for Azure Database for PostgreSQL](how-to-configure-sign-in-azure-ad-authentication.md).
+- For an overview of logins, users, and database roles Azure Database for PostgreSQL, see [Create users in Azure Database for PostgreSQL - Flexible Server](how-to-create-users.md).
+- To learn how to manage Azure AD users for Flexible Server, see [Manage Azure Active Directory users - Azure Database for PostgreSQL - Flexible Server](how-to-manage-azure-ad-users.md).
+
+<!--Image references-->
+
+[1]: ./media/concepts-azure-ad-authentication/authentication-flow.png
+[2]: ./media/concepts-azure-ad-authentication/admin-structure.png
postgresql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-azure-advisor-recommendations.md
Title: Azure Advisor for PostgreSQL - Flexible Server
description: Learn about Azure Advisor recommendations for PostgreSQL - Flexible Server. +
The Azure Advisor system uses telemetry to issue performance and reliability rec
Advisor recommendations are split among our PostgreSQL database offerings: * Azure Database for PostgreSQL - Single Server * Azure Database for PostgreSQL - Flexible Server
-* Azure Database for PostgreSQL - Hyperscale (Citus)
Some recommendations are common to multiple product offerings, while other recommendations are based on product-specific optimizations. ## Where can I view my recommendations?
Recommendations are available from the **Overview** navigation sidebar in the Az
## Recommendation types Azure Database for PostgreSQL prioritize the following types of recommendations: * **Performance**: To improve the speed of your PostgreSQL server. This includes CPU usage, memory pressure, connection pooling, disk utilization, and product-specific server parameters. For more information, see [Advisor Performance recommendations](../../advisor/advisor-performance-recommendations.md).
-* **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limits, connection limits, and hyperscale data distribution recommendations. For more information, see [Advisor Reliability recommendations](../../advisor/advisor-high-availability-recommendations.md).
+* **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limits, and connection limits. For more information, see [Advisor Reliability recommendations](../../advisor/advisor-high-availability-recommendations.md).
* **Cost**: To optimize and reduce your overall Azure spending. This includes server right-sizing recommendations. For more information, see [Advisor Cost recommendations](../../advisor/advisor-cost-recommendations.md). ## Understanding your recommendations
Azure Database for PostgreSQL prioritize the following types of recommendations:
* **Performance history**: Some of our recommendations are based on performance history. These recommendations will only appear after a server has been operating with the same configuration for 7 days. This allows us to detect patterns of heavy usage (e.g. high CPU activity or high connection volume) over a sustained time period. If you provision a new server or change to a new vCore configuration, these recommendations will be paused temporarily. This prevents legacy telemetry from triggering recommendations on a newly reconfigured server. However, this also means that performance history-based recommendations may not be identified immediately. ## Next steps
-For more information, see [Azure Advisor Overview](../../advisor/advisor-overview.md).
+For more information, see [Azure Advisor Overview](../../advisor/advisor-overview.md).
postgresql Concepts Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-data-encryption.md
+
+ Title: Data encryption with customer-managed key - Azure Database for PostgreSQL - Flexible server
+description: Azure Database for PostgreSQL Flexible server data encryption with a customer-managed key enables you to Bring Your Own Key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data.
++++++ Last updated : 10/12/2022++
+# Azure Database for PostgreSQL - Flexible Server Data Encryption with a Customer-managed Key (Preview)
++
+Azure PostgreSQL uses [Azure Storage encryption](../../storage/common/storage-service-encryption.md) to encrypt data at rest by default using Microsoft-managed keys. For Azure PostgreSQL users, it's similar to Transparent Data Encryption (TDE) in other databases such as SQL Server. Many organizations require full control of access to the data using a customer-managed key. Data encryption with customer-managed keys for Azure Database for PostgreSQL Flexible server - Preview enables you to bring your own key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you're responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys.
+
+Data encryption with customer-managed keys for Azure Database for PostgreSQL Flexible server - Preview is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the service's data encryption key (DEK). The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) instance. The Key Encryption Key (KEK) and Data Encryption Key (DEK) are described in more detail later in this article.
+
+Key Vault is a cloud-based, external key management system. It's highly available and provides scalable, secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). It doesn't allow direct access to a stored key but provides encryption and decryption services to authorized entities. Key Vault can generate the key, import it, or have it transferred from an on-premises HSM device.
+
+## Benefits
+
+Data encryption with customer-managed keys for Azure Database for PostgreSQL - Flexible Server (Preview) provides the following benefits:
+
+- You fully control data access, with the ability to remove the key and make the database inaccessible.
+
+- Full control over the key lifecycle, including rotation of the key to align with corporate policies.
+
+- Central management and organization of keys in Azure Key Vault.
+
+- Enabling encryption doesn't have any additional performance impact with or without customer-managed keys (CMK), as PostgreSQL relies on the Azure storage layer for data encryption in both scenarios. The only difference is that when a CMK is used, the **Azure Storage encryption key**, which performs the actual data encryption, is encrypted using the CMK.
+
+- Ability to implement separation of duties between security officers, DBA, and system administrators.
+
+## Terminology and description
+
+**Data encryption key (DEK)**: A symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes crypto analysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that encrypts and decrypts a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key.
+
+**Key encryption key (KEK)**: An encryption key used to encrypt the DEKs. A KEK that never leaves Key Vault allows the DEKs themselves to be encrypted and controlled. The entity that has access to the KEK might be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which all DEKs can be deleted, by deleting the KEK.
+
+The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption at rest](../../security/fundamentals/encryption-atrest.md).
+
+## How data encryption with a customer-managed key works
++
+For a PostgreSQL server to use customer-managed keys stored in Key Vault for encryption of the DEK, a Key Vault administrator gives the following access rights to the server:
+
+- **get**: For retrieving the public part and properties of the key in the key Vault.
+
+- **list**: For listing and iterating through keys in the key vault.
+
+- **wrapKey**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for PostgreSQL.
+
+- **unwrapKey**: To be able to decrypt the DEK. Azure Database for PostgreSQL needs the decrypted DEK to encrypt/decrypt the data.
+
+The key vault administrator can also [enable logging of Key Vault audit events](/azure/key-vault/general/howto-logging?tabs=azure-cli), so they can be audited later.
+
+When the server is configured to use the customer-managed key stored in the key vault, the server sends the DEK to the key vault for encryption. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the server sends the protected DEK to the key vault for decryption. Auditors can use Azure Monitor to review Key Vault audit event logs, if logging is enabled.
+
+## Requirements for configuring data encryption in preview for Azure Database for PostgreSQL Flexible server
+
+The following are requirements for configuring Key Vault:
+
+- Key Vault and Azure Database for PostgreSQL Flexible server must belong to the same Azure Active Directory (Azure AD) tenant. Cross-tenant Key Vault and server interactions aren't supported. Moving the Key Vault resource afterward requires you to reconfigure the data encryption.
+
+- The key Vault must be set with 90 days for 'Days to retain deleted vaults'. If the existing key Vault has been configured with a lower number, you'll need to create a new key vault as it can't be modified after creation.
+
+- Enable the soft-delete feature on the key Vault, to protect from data loss if an accidental key (or Key Vault) deletion happens. Soft-deleted resources are retained for 90 days unless the user recovers or purges them in the meantime. The recover and purge actions have their own permissions associated with a Key Vault access policy. The soft-delete feature is off by default, but you can enable it through PowerShell or the Azure CLI (note that you can't enable it through the Azure portal).
+
+- Enable Purge protection to enforce a mandatory retention period for deleted vaults and vault objects
+
+- Grant the Azure Database for PostgreSQL Flexible server access to the key vault with the get, list, wrapKey, and unwrapKey permissions using its unique managed identity (a CLI sketch of this step follows the list).
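+
+As an example, the following minimal Azure CLI sketch grants those key permissions to the user-assigned managed identity used by the server. The vault name and object ID are placeholders.
+
+```azurecli-interactive
+# Grant get, list, wrapKey, and unwrapKey key permissions to the identity used by the server.
+# The vault name and object ID are illustrative placeholders.
+az keyvault set-policy \
+  --name myCmkVault \
+  --object-id <principal-id-of-the-user-assigned-identity> \
+  --key-permissions get list wrapKey unwrapKey
+```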
+
+The following are requirements for configuring the customer-managed key in Flexible Server:
+
+- The customer-managed key to be used for encrypting the DEK can be only asymmetric, RSA 2048.
+
+- The key activation date (if set) must be a date and time in the past. The expiration date (if set) must be a future date and time.
+
+- The key must be in the *Enabled* state.
+
+- If you're importing an existing key into the Key Vault, provide it in the supported file formats (`.pfx`, `.byok`, `.backup`).
+
+### Recommendations
+
+When you're using data encryption by using a customer-managed key, here are recommendations for configuring Key Vault:
+
+- Set a resource lock on Key Vault to control who can delete this critical resource and prevent accidental or unauthorized deletion.
+
+- Enable auditing and reporting on all encryption keys. Key Vault provides logs that are easy to inject into other security information and event management tools. Azure Monitor Log Analytics is one example of a service that's already integrated.
+
+- Ensure that Key Vault and Azure Database for PostgreSQL - Flexible server reside in the same region to ensure faster access for DEK wrap and unwrap operations.
+
+- Lock down Azure Key Vault by **disabling public access** and allowing only *trusted Microsoft services* to secure the resources.
+
+ :::image type="content" source="media/concepts-data-encryption/key-vault-trusted-service.png" alt-text="Screenshot of an image of networking screen with trusted-service-with-AKV setting." lightbox="media/concepts-data-encryption/key-vault-trusted-service.png":::
+
+Here are recommendations for configuring a customer-managed key:
+
+- Keep a copy of the customer-managed key in a secure place, or escrow it to an escrow service.
+
+- If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault.
+
+### Accidental key access revocation from Key Vault
+
+It might happen that someone with sufficient access rights to Key Vault accidentally disables server access to the key by:
+
+- Revoking the Key Vault's list, get, wrapKey, and unwrapKey permissions from the server.
+
+- Deleting the key.
+
+- Deleting the Key Vault.
+
+- Changing the Key Vault's firewall rules.
+
+- Deleting the managed identity of the server in Azure AD.
+
+## Monitor the customer-managed key in Key Vault
+
+To monitor the database state, and to enable alerting for the loss of transparent data encryption protector access, configure the following Azure features:
+
+- [Azure Resource Health](../../service-health/resource-health-overview.md): An inaccessible database that has lost access to the Customer Key shows as "Inaccessible" after the first connection to the database has been denied.
+- [Activity log](../../service-health/alerts-activity-log-service-notifications-portal.md): When access to the Customer Key in the customer-managed Key Vault fails, entries are added to the activity log. You can reinstate access if you create alerts for these events as soon as possible.
+- [Action groups](../../azure-monitor/alerts/action-groups.md): Define these groups to send you notifications and alerts based on your preferences, for example by using the sketch that follows this list.
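+
+For example, a minimal Azure CLI sketch of creating such an action group with an email receiver (the names and email address are placeholders):
+
+```azurecli-interactive
+# Action group with a single email receiver; names and the address are illustrative placeholders.
+az monitor action-group create \
+  --name MyKeyVaultAlerts \
+  --resource-group myResourceGroup \
+  --short-name kvalerts \
+  --action email admin admin@contoso.com
+```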
+
+## Restore and replicate with a customer's managed key in Key Vault
+
+After Azure Database for PostgreSQL - Flexible Server is encrypted with a customer's managed key stored in Key Vault, any newly created server copy is also encrypted. You can make this new copy through a [PITR restore](concepts-backup-restore.md) operation or read replicas.
+
+> [!NOTE]
+> At this time we don't support revoking the original encryption key after restoring a CMK-enabled server to another server.
+
+Avoid issues while setting up customer-managed data encryption during restore or read replica creation by following these steps on the primary and restored/replica servers:
+
+- Initiate the restore or read replica creation process from the primary Azure Database for PostgreSQL - Flexible server.
+
+- On the restored/replica server, you can change the customer-managed key and/or the Azure Active Directory (Azure AD) identity used to access Azure Key Vault in the data encryption settings. Ensure that the newly created server is given list, wrap, and unwrap permissions to the key stored in Key Vault.
+
+- Don't revoke the original key after restoring, because at this time we don't support key revocation after restoring a CMK-enabled server to another server.
+
+## Inaccessible customer-managed key condition
+
+When you configure data encryption with a customer-managed key in Key Vault, continuous access to this key is required for the server to stay online. If the server loses access to the customer-managed key in Key Vault, the server begins denying all connections within 10 minutes. The server issues a corresponding error message, and changes the server state to *Inaccessible*.
+
+## Set up Customer Managed Key during Server Creation
+
+Prerequisites:
+
+- An Azure Active Directory (Azure AD) user-assigned managed identity in the region where the Postgres Flexible Server will be created. Follow this [tutorial](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) to create the identity.
+
+- A Key Vault with a key in the region where the Postgres Flexible Server will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create a Key Vault and generate a key. Follow the [requirements section above](#requirements-for-configuring-data-encryption-in-preview-for-azure-database-for-postgresql-flexible-server) for the required Azure Key Vault settings. A CLI sketch of these prerequisites follows this list.
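+
+The following minimal Azure CLI sketch shows one way to create these prerequisites. The names, region, and retention period are illustrative placeholders.
+
+```azurecli-interactive
+# Names, region, and retention values below are illustrative placeholders.
+
+# User-assigned managed identity in the same region as the server.
+az identity create --name myCmkIdentity --resource-group myResourceGroup --location eastus
+
+# Key vault with purge protection and a 90-day retention period for deleted vaults.
+az keyvault create --name myCmkVault --resource-group myResourceGroup --location eastus \
+  --enable-purge-protection true --retention-days 90
+
+# RSA 2048 key used as the key encryption key (KEK).
+az keyvault key create --vault-name myCmkVault --name myCmkKey --kty RSA --size 2048
+```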
+
+Follow the steps below to enable CMK while creating Postgres Flexible Server.
+
+1. Navigate to the Azure Database for PostgreSQL - Flexible Server create blade via the Azure portal.
+
+2. Provide the required information on the Basics and Networking tabs.
+
+3. Navigate to the Security (preview) tab. On the screen, provide the Azure Active Directory (Azure AD) identity that has access to the Key Vault, and the key in the Key Vault in the same region where you're creating this server.
+
+4. On the Review Summary tab, make sure that you provided the correct information in the Security section, and then select the Create button.
+
+5. Once creation is finished, you should be able to navigate to the Data Encryption (preview) screen for the server and update the identity or key if necessary.
+
+## Update Customer Managed Key on the CMK enabled Flexible Server
+
+Prerequisites:
+
+- Azure Active Directory (Azure AD) user-managed identity in region where Postgres Flex Server will be created. Follow this [tutorial](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) to create identity.
+
+- Key Vault with key in region where Postgres Flex Server will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key.
+
+Follow the steps below to update CMK on CMK enabled Flexible Server:
+
+1. Navigate to your Azure Database for PostgreSQL - Flexible Server page via the Azure portal.
+
+2. Navigate to the Data Encryption (preview) screen under the Security tab.
+
+3. Select a different identity to connect to Azure Key Vault, remembering that this identity needs to have proper access rights to the Key Vault.
+
+4. Select a different key by choosing the subscription, Key Vault, and key from the dropdowns provided.
+
+## Limitations
+
+The following are limitations for configuring the customer-managed key in Flexible Server:
+
+- CMK encryption can only be configured during creation of a new server, not as an update to the existing Flexible Server.
+
+- Once enabled, CMK encryption can't be removed. If a customer wants to remove this feature, it can only be done by restoring the server to a non-CMK server.
+
+- CMK encryption isn't available on Burstable SKU.
+
+The following are other limitations for the public preview of configuring the customer-managed key that we expect to remove at the General Availability of this feature:
+
+- No support for Geo backup enabled servers
+
+- **No support for revoking key after restoring CMK enabled server to another server**
+
+- No support for Azure HSM Key Vault
+
+- No CLI or PowerShell support
+
+## Next steps
+
+- [Azure Active Directory](../../active-directory-domain-services/overview.md)
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md
+
+ Title: Use Azure Active Directory - Azure Database for PostgreSQL - Flexible Server
+description: Learn about how to set up Azure Active Directory (Azure AD) for authentication with Azure Database for PostgreSQL - Flexible Server
+++ Last updated : 10/12/2022+++++
+# Use Azure Active Directory (Azure AD) for authentication with PostgreSQL Flexible Server
++
+> [!NOTE]
+> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
+
+This article walks you through the steps to configure Azure Active Directory access with Azure Database for PostgreSQL Flexible Server, and how to connect using an Azure AD token.
+
+## Enable Azure AD Authentication
+
+Azure Active Directory Authentication for Azure Database for PostgreSQL Flexible Server can be configured either during server provisioning or later.
+Only Azure AD administrator users can create/enable users for Azure AD-based authentication. We recommend not using the Azure AD administrator for regular database operations, as it has elevated user permissions (for example, CREATEDB). You can now have multiple Azure AD admin users with Flexible Server, and an Azure AD admin user can be a user, a group, or a service principal.
+
+## Prerequisites
+
+The three steps below are mandatory for using Azure Active Directory authentication with Azure Database for PostgreSQL Flexible Server. They must be run by a tenant administrator or a user with tenant admin rights, and this is a one-time activity per tenant.
+
+Install the AzureAD PowerShell module.
+
+### Step 1: Connect to user tenant.
+
+```powershell
+Connect-AzureAD -TenantId <customer tenant id>
+```
+### Step 2: Grant Flexible Server Service Principal read access to customer tenant
+
+```powershell
+New-AzureADServicePrincipal -AppId 5657e26c-cc92-45d9-bc47-9da6cfdb4ed
+```
+This command grants the Azure Database for PostgreSQL Flexible Server Service Principal read access to the customer tenant so that it can request Graph API tokens for Azure AD validation tasks. The AppID (5657e26c-cc92-45d9-bc47-9da6cfdb4ed) in the above command is the AppID for the Azure Database for PostgreSQL Flexible Server service.
+
+### Step 3: Networking Requirements
+
+Azure Active Directory is a multi-tenant application that requires outbound connectivity to perform certain operations, like adding Azure AD admin groups. Depending on your network topology, additional networking rules might be required for Azure AD connectivity to work.
+
+`Public access (allowed IP addresses)`
+
+No additional networking rules are required.
+
+`Private access (VNet Integration)`
+
+* An outbound NSG rule to allow virtual network traffic to reach the AzureActiveDirectory service tag only (a CLI sketch follows after this list).
+
+* Optionally, if you're using a proxy, add a new firewall rule to allow HTTP/S traffic to reach the AzureActiveDirectory service tag only.
+
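+If you manage the NSG with the Azure CLI, the following sketch shows one way to add such an outbound rule. The resource group, NSG name, rule name, and priority are hypothetical placeholders, not values from this article:
+
+```azurecli-interactive
+# Hypothetical names; adjust to your environment.
+az network nsg rule create \
+  --resource-group myResourceGroup \
+  --nsg-name myNsg \
+  --name AllowAzureADOutbound \
+  --priority 300 \
+  --direction Outbound \
+  --access Allow \
+  --protocol Tcp \
+  --source-address-prefixes VirtualNetwork \
+  --destination-address-prefixes AzureActiveDirectory \
+  --destination-port-ranges 443
+```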
+Complete the above prerequisite steps before adding an Azure AD administrator to your server. To set the Azure AD admin during server provisioning, follow these steps:
+
+1. In the Azure portal, during server provisioning, select either `PostgreSQL and Azure Active Directory authentication` or `Azure Active Directory authentication only` as the authentication method.
+1. Set the Azure AD admin on the `set admin` tab, and select a valid Azure AD user, group, service principal, or managed identity in the customer tenant to be the Azure AD administrator.
+1. Optionally, you can also add a local PostgreSQL admin account if you prefer the `PostgreSQL and Azure Active Directory authentication` method.
+
+Note that only one Azure AD admin user can be added during server provisioning. You can add multiple Azure AD admin users after the server is created.
+
+![set-azure-ad-administrator][3]
+
+To set the Azure AD administrator after server creation, follow these steps:
+
+1. In the Azure portal, select the instance of Azure Database for PostgreSQL that you want to enable for Azure AD.
+1. Under Security, select Authentication, and then choose either `PostgreSQL and Azure Active Directory authentication` or `Azure Active Directory authentication only` as the authentication method, based on your requirements.
+
+![set azure ad administrator][2]
+
+1. Select `Add Azure AD Admins`, and then select a valid Azure AD user, group, service principal, or managed identity in the customer tenant to be the Azure AD administrator.
+1. Select Save.
+
+> [!IMPORTANT]
+> When setting the administrator, a new user is added to the Azure Database for PostgreSQL server with full administrator permissions.
+
+## Connect to Azure Database for PostgreSQL using Azure AD
+
+The following high-level diagram summarizes the workflow of using Azure AD authentication with Azure Database for PostgreSQL:
+
+![authentication flow][1]
+
+We've designed the Azure AD integration to work with common PostgreSQL tools like psql, which aren't Azure AD aware and only support specifying username and password when connecting to PostgreSQL. We pass the Azure AD token as the password as shown in the picture above.
+
+We've currently tested the following clients:
+
+- psql commandline (utilize the PGPASSWORD variable to pass the token, see step 3 for more information)
+- Azure Data Studio (using the PostgreSQL extension)
+- Other libpq based clients (for example, common application frameworks and ORMs)
+- PgAdmin (uncheck connect now at server creation. See step 4 for more information)
+
+The following steps describe how a user or application authenticates with Azure AD:
+
+### CLI Prerequisites
+
+You can follow along in Azure Cloud Shell, an Azure VM, or on your local machine. Make sure you have the [Azure CLI installed](/cli/azure/install-azure-cli).
+
+## Authenticate with Azure AD as a Flexible user
+
+### Step 1: Log in to the user's Azure subscription
+
+Start by authenticating with Azure AD using the Azure CLI tool. This step isn't required in Azure Cloud Shell.
+
+```azurecli-interactive
+az login
+```
+
+The command launches a browser window to the Azure AD authentication page. It requires you to provide your Azure AD user ID and password.
+
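+If a browser isn't available in your session (for example, when connected over SSH), the device code flow is an alternative way to sign in:
+
+```azurecli-interactive
+az login --use-device-code
+```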
+### Step 2: Retrieve Azure AD access token
+
+Invoke the Azure CLI tool to acquire an access token for the Azure AD authenticated user from step 1 to access Azure Database for PostgreSQL.
+
+Example (for Public Cloud):
+
+```azurecli-interactive
+az account get-access-token --resource https://ossrdbms-aad.database.windows.net
+```
+
+The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using:
+
+```azurecli-interactive
+az cloud show
+```
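+
+To return only the PostgreSQL resource endpoint rather than the full cloud description, you can filter the output. The `ossrdbmsResourceId` endpoint key used here is an assumption based on typical `az cloud show` output and may vary across CLI versions:
+
+```azurecli-interactive
+az cloud show --query endpoints.ossrdbmsResourceId --output tsv
+```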
+
+For Azure CLI version 2.0.71 and later, the command can be specified in the following more convenient version for all clouds:
+
+```azurecli-interactive
+az account get-access-token --resource-type oss-rdbms
+```
+
+After authentication is successful, Azure AD will return an access token:
+
+```json
+{
+ "accessToken": "TOKEN",
+ "expiresOn": "...",
+ "subscription": "...",
+ "tenant": "...",
+ "tokenType": "Bearer"
+}
+```
+
+The token is a Base64 string that encodes all the information about the authenticated user and is targeted to the Azure Database for PostgreSQL service.
+
+### Step 3: Use token as password for logging in with client psql
+
+When connecting you need to use the access token as the PostgreSQL user password.
+
+While using the `psql` command line client, the access token needs to be passed through the `PGPASSWORD` environment variable, since the access token exceeds the password length that `psql` can accept directly:
+
+Windows Example:
+
+```cmd
+set PGPASSWORD=<copy/pasted TOKEN value from step 2>
+```
+
+```PowerShell
+$env:PGPASSWORD='<copy/pasted TOKEN value from step 2>'
+```
+
+Linux/macOS Example:
+
+```shell
+export PGPASSWORD=<copy/pasted TOKEN value from step 2>
+```
+
+Now you can initiate a connection with Azure Database for PostgreSQL like you normally would:
+
+```shell
+psql "host=mydb.postgres... user=user@tenant.onmicrosoft.com dbname=postgres sslmode=require"
+```
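+
+If you're scripting the sign-in, you can combine steps 2 and 3 and pass the token straight into `PGPASSWORD`. The following sketch assumes a bash shell; the server name is a hypothetical placeholder:
+
+```bash
+# Acquire a token and export it as the password in one step
+export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)
+
+# Connect with the Azure AD user name (hypothetical server)
+psql "host=mydemoserver.postgres.database.azure.com user=user@tenant.onmicrosoft.com dbname=postgres sslmode=require"
+```
+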
+### Step 4: Use the token as a password for logging in with PgAdmin
+
+To connect by using an Azure AD token with pgAdmin, follow these steps:
+1. Uncheck the connect now option at server creation.
+1. Enter your server details in the connection tab and save.
+1. From the browser menu, select connect to the Azure Database for PostgreSQL Flexible Server.
+1. Enter the Azure AD token as the password when prompted.
+
+Important considerations when connecting:
+
+* `user@tenant.onmicrosoft.com` is the name of the Azure AD user.
+* Make sure to spell the Azure AD user name exactly as it appears - Azure AD user and group names are case sensitive.
+* If the name contains spaces, use `\` before each space to escape it.
+* The access token is valid for anywhere between 5 and 60 minutes. We recommend that you get the access token just before initiating the sign-in to Azure Database for PostgreSQL.
+
+You're now authenticated to your Azure Database for PostgreSQL server using Azure AD authentication.
+
+## Authenticate with Azure AD as a group member
+
+### Step 1: Create Azure AD groups in Azure Database for PostgreSQL
+
+To enable an Azure AD group for access to your database, use the same mechanism as for users, but instead specify the group name:
+
+Example:
+
+```sql
+select * from pgaadauth_create_principal('Prod DB Readonly', false, false);
+```
+
+When logging in, members of the group will use their personal access tokens but sign in with the group name specified as the username.
+
+> [!NOTE]
+> PostgreSQL Flexible Servers supports Managed Identities as group members.
+
+### Step 2: Log in to the user's Azure Subscription
+
+Authenticate with Azure AD by using the Azure CLI tool. This step isn't required in Azure Cloud Shell. The user needs to be a member of the Azure AD group.
+
+```azurecli-interactive
+az login
+```
+
+### Step 3: Retrieve Azure AD access token
+
+Invoke the Azure CLI tool to acquire an access token for the Azure AD authenticated user from step 2 to access Azure Database for PostgreSQL.
+
+Example (for Public Cloud):
+
+```azurecli-interactive
+az account get-access-token --resource https://ossrdbms-aad.database.windows.net
+```
+
+The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using:
+
+```azurecli-interactive
+az cloud show
+```
+
+For Azure CLI version 2.0.71 and later, the command can be specified in the following more convenient version for all clouds:
+
+```azurecli-interactive
+az account get-access-token --resource-type oss-rdbms
+```
+
+After authentication is successful, Azure AD will return an access token:
+
+```json
+{
+ "accessToken": "TOKEN",
+ "expiresOn": "...",
+ "subscription": "...",
+ "tenant": "...",
+ "tokenType": "Bearer"
+}
+```
+
+### Step 4: Use token as password for logging in with psql or PgAdmin (see above steps for user connection)
+
+Important considerations when connecting as a group member:
+* The group name is the name of the Azure AD group you're trying to connect as.
+* Make sure to spell the Azure AD group name exactly as it appears.
+* Azure AD user and group names are case sensitive.
+* When connecting as a group, use only the group name and not the alias of a group member (see the sketch after this list).
+* If the name contains spaces, use `\` before each space to escape it.
+* The access token is valid for anywhere between 5 and 60 minutes. We recommend that you get the access token just before initiating the sign-in to Azure Database for PostgreSQL.
+
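+For example, putting the pieces together, a member of the hypothetical group `Prod DB Readonly` would acquire their own token and then connect with the group name as the user name. Shell quoting is used here to handle the space in the group name; backslash escaping, as noted above, works as well. The server name is a hypothetical placeholder:
+
+```bash
+# Token for the signed-in group member
+export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)
+
+# Connect using the group name, not the member's own name
+psql --host=mydemoserver.postgres.database.azure.com --username="Prod DB Readonly" --dbname=postgres
+```
+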
+You're now authenticated to your PostgreSQL server using Azure AD authentication.
+
+## Next steps
+
+* Review the overall concepts for [Azure Active Directory authentication with Azure Database for PostgreSQL - Flexible Server](concepts-azure-ad-authentication.md)
+* Learn how to create and manage Azure AD enabled PostgreSQL roles: [Manage Azure AD roles in Azure Database for PostgreSQL - Flexible Server](how-to-manage-azure-ad-users.md)
+
+<!--Image references-->
+
+[1]: ./media/concepts-azure-ad-authentication/authentication-flow.png
+[2]: ./media/concepts-azure-ad-authentication/set-azure-ad-admin.png
+[3]: ./media/concepts-azure-ad-authentication/set-azure-ad-admin-server-creation.png
postgresql How To Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-with-managed-identity.md
+
+ Title: Connect with Managed Identity - Azure Database for PostgreSQL - Flexible Server
+description: Learn about how to connect and authenticate using Managed Identity for authentication with Azure Database for PostgreSQL Flexible Server
++++++ Last updated : 09/26/2022++
+# Connect with Managed Identity to Azure Database for PostgreSQL Flexible Server
++
+> [!NOTE]
+> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
+
+You can use both system-assigned and user-assigned managed identities to authenticate to Azure Database for PostgreSQL. This article shows you how to use a system-assigned managed identity for an Azure Virtual Machine (VM) to access an Azure Database for PostgreSQL server. Managed Identities are automatically managed by Azure and enable you to authenticate to services that support Azure AD authentication, without needing to insert credentials into your code.
+
+You learn how to:
+- Grant your VM access to an Azure Database for PostgreSQL Flexible server
+- Create a user in the database that represents the VM's system-assigned identity
+- Get an access token using the VM identity and use it to query an Azure Database for PostgreSQL Flexible server
+- Implement the token retrieval in a C# example application
+
+## Prerequisites
+
+- If you're not familiar with the managed identities for Azure resources feature, see this [overview](../../../articles/active-directory/managed-identities-azure-resources/overview.md). If you don't have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.
+- To do the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../../articles/role-based-access-control/role-assignments-portal.md).
+- You need an Azure VM (for example, running Ubuntu Linux) that you'd like to use to access your database by using Managed Identity
+- You need an Azure Database for PostgreSQL database server that has [Azure AD authentication](how-to-configure-sign-in-azure-ad-authentication.md) configured
+- To follow the C# example, first complete the guide how to [Connect with C#](connect-csharp.md)
+
+## Creating a system-assigned managed identity for your VM
+
+Use the [az vm identity assign](/cli/azure/vm/identity/) command to enable the system-assigned identity on an existing VM:
+
+```azurecli-interactive
+az vm identity assign -g myResourceGroup -n myVm
+```
+
+Retrieve the application ID for the system-assigned managed identity, which you'll need in the next few steps:
+
+```azurecli
+# Get the client ID (application ID) of the system-assigned managed identity
+
+az ad sp list --display-name vm-name --query [*].appId --out tsv
+```
+
+## Creating a PostgreSQL user for your Managed Identity
+
+Now, connect as the Azure AD administrator user to your PostgreSQL database, and run the following SQL statement, replacing `<identity_name>` with the name of your managed identity:
+
+```sql
+select * from pgaadauth_create_principal('<identity_name>', false, false);
+```
+
+For more details on managing Azure AD enabled database roles, see [how to manage Azure AD enabled PostgreSQL roles](./how-to-manage-azure-ad-users.md).
+
+The managed identity now has access when authenticating with the identity name as the role name and an Azure AD token as the password.
+
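+As an alternative to calling the Azure Instance Metadata Service directly (shown in the next section), if the Azure CLI is installed on the VM you can sign in with the managed identity and reuse the same token flow. This is a sketch; the server name is a hypothetical placeholder and `<identity_name>` is the role you created above:
+
+```bash
+# Sign in to the Azure CLI using the VM's system-assigned managed identity
+az login --identity
+
+# Acquire a token for Azure Database for PostgreSQL and use it as the password
+export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)
+
+# Connect with the role name that was created for the identity
+psql "host=mydemoserver.postgres.database.azure.com user=<identity_name> dbname=postgres sslmode=require"
+```
+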
+## Retrieving the access token from Azure Instance Metadata service
+
+Your application can now retrieve an access token from the Azure Instance Metadata service and use it for authenticating with the database.
+
+This token retrieval is done by making an HTTP request to `http://169.254.169.254/metadata/identity/oauth2/token` and passing the following parameters:
+
+* `api-version` = `2018-02-01`
+* `resource` = `https://ossrdbms-aad.database.windows.net`
+* `client_id` = `CLIENT_ID` (that you retrieved earlier)
+
+You'll get back a JSON result that contains an `access_token` field - this long text value is the Managed Identity access token, which you should use as the password when connecting to the database.
+
+For testing purposes, you can run the following commands in your shell. Note you need `curl`, `jq`, and the `psql` client installed.
+
+```bash
+# Retrieve the access token
+
+export PGPASSWORD=`curl -s 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=CLIENT_ID' -H Metadata:true | jq -r .access_token`
+
+# Connect to the database
+
+psql -h SERVER --user USER DBNAME
+```
+
+You are now connected to the database you've configured earlier.
+
+## Connecting using Managed Identity in C#
+
+This section shows how to get an access token using the VM's system-assigned managed identity and use it to call Azure Database for PostgreSQL. Azure Database for PostgreSQL natively supports Azure AD authentication, so it can directly accept access tokens obtained using managed identities for Azure resources. When creating a connection to PostgreSQL, you pass the access token in the password field.
+
+Here's a .NET code example of opening a connection to PostgreSQL using an access token. This code must run on the VM to use the system-assigned managed identity to obtain an access token from Azure AD. Replace the values of HOST, USER, and DATABASE.
+
+```csharp
+using System;
+using System.Net;
+using System.IO;
+using System.Collections;
+using System.Collections.Generic;
+using System.Text.Json;
+using System.Text.Json.Serialization;
+using System.Threading.Tasks;
+using Npgsql;
+using Azure.Identity;
+
+namespace Driver
+{
+ class Script
+ {
+ // Obtain connection string information from the portal for use in the following variables
+ private static string Host = "HOST";
+ private static string User = "USER";
+ private static string Database = "DATABASE";
+
+ static async Task Main(string[] args)
+ {
+ //
+ // Get an access token for PostgreSQL.
+ //
+ Console.Out.WriteLine("Getting access token from Azure AD...");
+
+ // Azure AD resource ID for Azure Database for PostgreSQL Flexible Server is https://ossrdbms-aad.database.windows.net/
+ string accessToken = null;
+
+ try
+ {
+ // Call managed identities for Azure resources endpoint.
+ var sqlServerTokenProvider = new DefaultAzureCredential();
+ accessToken = (await sqlServerTokenProvider.GetTokenAsync(
+ new Azure.Core.TokenRequestContext(scopes: new string[] { "https://ossrdbms-aad.database.windows.net/.default" }) { })).Token;
+
+ }
+ catch (Exception e)
+ {
+ Console.Out.WriteLine("{0} \n\n{1}", e.Message, e.InnerException != null ? e.InnerException.Message : "Acquire token failed");
+ System.Environment.Exit(1);
+ }
+
+ //
+ // Open a connection to the PostgreSQL server using the access token.
+ //
+ string connString =
+ String.Format(
+ "Server={0}; User Id={1}; Database={2}; Port={3}; Password={4}; SSLMode=Prefer",
+ Host,
+ User,
+ Database,
+ 5432,
+ accessToken);
+
+ using (var conn = new NpgsqlConnection(connString))
+ {
+ Console.Out.WriteLine("Opening connection using access token...");
+ conn.Open();
+
+ using (var command = new NpgsqlCommand("SELECT version()", conn))
+ {
+
+ var reader = command.ExecuteReader();
+ while (reader.Read())
+ {
+ Console.WriteLine("\nConnected!\n\nPostgres version: {0}", reader.GetString(0));
+ }
+ }
+ }
+ }
+ }
+}
+```
+
+When run, this application gives output similar to the following:
+
+```
+Getting access token from Azure AD...
+Opening connection using access token...
+
+Connected!
+
+Postgres version: PostgreSQL 11.11, compiled by Visual C++ build 1800, 64-bit
+```
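+
+The example assumes a .NET console project that references the `Npgsql` and `Azure.Identity` NuGet packages. A minimal sketch for adding the packages and running the program on the VM (project setup details are assumptions, not part of the article):
+
+```bash
+# Add the packages the example depends on, then run it on the VM
+dotnet add package Npgsql
+dotnet add package Azure.Identity
+dotnet run
+```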
+
+## Next steps
+
+* Review the overall concepts for [Azure Active Directory authentication with Azure Database for PostgreSQL](concepts-azure-ad-authentication.md)
postgresql How To Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-create-users.md
+
+ Title: Create users - Azure Database for PostgreSQL - Flexible Server
+description: This article describes how you can create new user accounts to interact with an Azure Database for PostgreSQL - Flexible Server.
+++++ Last updated : 09/26/2022++
+# Create users in Azure Database for PostgreSQL - Flexible Server
++
+This article describes how you can create users within an Azure Database for PostgreSQL server.
+
+> [!NOTE]
+> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
+
+If you would like to learn about how to create and manage Azure subscription users and their privileges, you can visit the [Azure role-based access control (Azure RBAC) article](../../role-based-access-control/built-in-roles.md) or review [how to customize roles](../../role-based-access-control/custom-roles.md).
+
+## The server admin account
+
+When you first created your Azure Database for PostgreSQL, you provided a server admin user name and password. For more information, you can follow the [Quickstart](quickstart-create-server-portal.md) to see the step-by-step approach. Since the server admin user name is a custom name, you can locate the chosen server admin user name from the Azure portal.
+
+The Azure Database for PostgreSQL server is created with three default roles defined. You can see these roles by running the command `SELECT rolname FROM pg_roles;`:
+
+- azure_pg_admin
+- azure_superuser
+- your server admin user
+
+Your server admin user is a member of the azure_pg_admin role. However, the server admin account is not part of the azure_superuser role. Since this service is a managed PaaS service, only Microsoft is part of the super user role.
+
+The PostgreSQL engine uses privileges to control access to database objects, as discussed in the [PostgreSQL product documentation](https://www.postgresql.org/docs/current/static/sql-createrole.html). In Azure Database for PostgreSQL, the server admin user is granted these privileges:
+ LOGIN, NOSUPERUSER, INHERIT, CREATEDB, CREATEROLE, REPLICATION
+
+The server admin user account can be used to create additional users and grant those users the azure_pg_admin role. Also, the server admin account can be used to create less privileged users and roles that have access to individual databases and schemas.
+
+## How to create additional admin users in Azure Database for PostgreSQL
+
+1. Get the connection information and admin user name.
+ To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
+
+2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as pgAdmin or psql.
+ If you are unsure of how to connect, see [the quickstart](./quickstart-create-server-portal.md)
+
+3. Edit and run the following SQL code. Replace the placeholder value `<new_user>` with your new user name, and replace the placeholder password with your own strong password. (A connection sketch follows after these steps.)
+
+ ```sql
+ CREATE ROLE <new_user> WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD '<StrongPassword!>';
+
+ GRANT azure_pg_admin TO <new_user>;
+ ```
+
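+To verify the new admin account, you can connect with it directly. The following sketch uses hypothetical server and user names, and you're prompted for the password you set above:
+
+```shell
+psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=new_user --dbname=postgres
+```
+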
+## How to create database users in Azure Database for PostgreSQL
+
+1. Get the connection information and admin user name.
+ To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
+
+2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as pgAdmin or psql.
+
+3. Edit and run the following SQL code. Replace the placeholder value `<db_user>` with your intended new user name, and placeholder value `<newdb>` with your own database name. Replace the placeholder password with your own strong password.
+
+   This SQL code creates a new database (for example, one named testdb), then creates a new user in the PostgreSQL service and grants that user connect privileges to the new database.
+
+ ```sql
+ CREATE DATABASE <newdb>;
+
+ CREATE ROLE <db_user> WITH LOGIN NOSUPERUSER INHERIT CREATEDB NOCREATEROLE NOREPLICATION PASSWORD '<StrongPassword!>';
+
+ GRANT CONNECT ON DATABASE <newdb> TO <db_user>;
+ ```
+
+4. Using an admin account, you may need to grant additional privileges to secure the objects in the database. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/ddl-priv.html) for further details on database roles and privileges. For example:
+
+ ```sql
+ GRANT ALL PRIVILEGES ON DATABASE <newdb> TO <db_user>;
+ ```
+
+   If a user creates a table, the table belongs to that user (role). If another user needs access to the table, you must grant privileges to the other user at the table level.
+
+ For example:
+
+ ```sql
+ GRANT SELECT ON ALL TABLES IN SCHEMA <schema_name> TO <db_user>;
+ ```
+
+5. Log in to your server, specifying the designated database, using the new user name and password. This example shows the psql command line. With this command, you are prompted for the password for the user name. Replace the server name, database name, and user name with your own.
+
+ ```shell
+ psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=db_user@mydemoserver --dbname=newdb
+ ```
+
+## Next steps
+
+Open the firewall for the IP addresses of the new users' machines to enable them to connect:
+[Create and manage Azure Database for PostgreSQL firewall rules by using the Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md).
+
+For more information regarding user account management, see PostgreSQL product documentation for [Database Roles and Privileges](https://www.postgresql.org/docs/current/static/user-manag.html), [GRANT Syntax](https://www.postgresql.org/docs/current/static/sql-grant.html), and [Privileges](https://www.postgresql.org/docs/current/static/ddl-priv.html).
postgresql How To Manage Azure Ad Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-azure-ad-users.md
+
+ Title: Manage Azure Active Directory Users - Azure Database for PostgreSQL - Flexible Server
+description: This article describes how you can manage Azure AD enabled roles to interact with an Azure Database for PostgreSQL - Flexible Server.
+++ Last updated : 10/12/2022+++++
+# Manage Azure Active Directory (Azure AD) roles in Azure Database for PostgreSQL - Flexible Server
++
+This article describes how you can create Azure AD enabled database roles within an Azure Database for PostgreSQL server.
+
+> [!NOTE]
+> This guide assumes you already enabled Azure Active Directory authentication on your PostgreSQL Flexible server.
+> See [How to Configure Azure AD Authentication](./how-to-configure-sign-in-azure-ad-authentication.md)
+
+> [!NOTE]
+> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
+
+If you'd like to learn about how to create and manage Azure subscription users and their privileges, you can visit the [Azure role-based access control (Azure RBAC) article](../../role-based-access-control/built-in-roles.md) or review [how to customize roles](../../role-based-access-control/custom-roles.md).
+
+## Create or Delete Azure AD administrators using Azure portal or Azure Resource Manager (ARM) API
+
+1. Open the **Authentication** page for your Azure Database for PostgreSQL Flexible Server in the Azure portal.
+1. To add an administrator, select **Add Azure AD Admin** and select a user, group, application, or managed identity from the current Azure AD tenant.
+1. To remove an administrator, select the **Delete** icon for the one to remove.
+1. Select **Save** and wait for the provisioning operation to complete.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/how-to-manage-azure-ad-users/add-aad-principal-via-portal.png" alt-text="Screenshot of managing Azure AD administrators via portal":::
+
+> [!NOTE]
+> Support for Azure AD administrator management via the Azure SDK, Azure CLI, and Azure PowerShell is coming soon.
+
+## Manage Azure AD roles using SQL
+
+Once the first Azure AD administrator is created from the Azure portal or API, you can use that administrator role to manage Azure AD roles in your Azure Database for PostgreSQL Flexible Server.
+
+We recommend getting familiar with the [Microsoft identity platform](../../active-directory/develop/v2-overview.md) for best use of Azure AD integration with Azure Database for PostgreSQL Flexible Servers.
+
+### Principal Types
+
+Azure Database for PostgreSQL Flexible Server internally stores a mapping between PostgreSQL database roles and unique identifiers of Azure AD objects.
+Each PostgreSQL database role can be mapped to one of the following Azure AD object types:
+
+1. **User** - Including tenant-local and guest users.
+1. **Service Principal** - Including [applications and managed identities](../../active-directory/develop/app-objects-and-service-principals.md).
+1. **Group** - When a PostgreSQL role is linked to an Azure AD group, any user or service principal member of this group can connect to the Azure Database for PostgreSQL Flexible Server instance with the group role.
+
+### List Azure AD roles using SQL
+
+```sql
+select * from pgaadauth_list_principals(isAdmin);
+```
+
+**Parameters:**
+- *isAdmin* - Boolean flag that indicates whether the function should return only admin users or all users.
+
+## Create a role using Azure AD principal name
+
+```sql
+select * from pgaadauth_create_principal('mary@contoso.com', false, false);
+```
+
+**Parameters:**
+- *roleName* - Name of the role to be created. This **must match the name of an Azure AD principal**:
+   - For **users**, use the User Principal Name from the profile. For guest users, include the full name in their home domain with the #EXT# tag.
+   - For **groups** and **service principals**, use the display name. The name must be unique in the tenant.
+- *isAdmin* - Set to **true** when creating an admin user and **false** for a regular user. An admin user created this way has the same privileges as one created via the portal or API.
+- *isMfa* - Flag that indicates whether multifactor authentication must be enforced for this role. (A usage sketch follows after this list.)
+
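+For example, the following sketch creates an admin role for a hypothetical group named `DBA Group` with MFA enforced, running the function through `psql` as an existing Azure AD administrator (the server and admin names are placeholders):
+
+```bash
+# Acquire a token for the signed-in Azure AD administrator
+export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)
+
+# Run the function against the server (hypothetical server and admin names)
+psql "host=mydemoserver.postgres.database.azure.com user=aad_admin@contoso.com dbname=postgres sslmode=require" \
+  -c "select * from pgaadauth_create_principal('DBA Group', true, true);"
+```
+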
+## Create a role using Azure AD object identifier
+
+```sql
+select * from pgaadauth_create_principal_with_oid('accounting_application', '00000000-0000-0000-0000-000000000000', 'service', false, false);
+```
+
+**Parameters:**
+- *roleName* - Name of the role to be created.
+- *objectId* - Unique object identifier of the Azure AD object:
+   - For **users**, **groups**, and **managed identities**, the object ID can be found by searching for the object name on the Azure AD page in the Azure portal. [See this guide as an example](/partner-center/find-ids-and-domain-names).
+   - For **applications**, the object ID of the corresponding **service principal** must be used. In the Azure portal, the required object ID can be found on the **Enterprise applications** page.
+- *objectType* - Type of the Azure AD object to link to this role.
+- *isAdmin* - Set to **true** when creating an admin user and **false** for a regular user. An admin user created this way has the same privileges as one created via the portal or API.
+- *isMfa* - Flag that indicates whether multifactor authentication must be enforced for this role.
+
+## Enable Azure AD authentication for an existing PostgreSQL role using SQL
+
+Azure Database for PostgreSQL Flexible Servers uses Security Labels associated with database roles to store Azure AD mapping. During preview, we don't provide a function to associate existing Azure AD roles.
+
+You can use the following SQL to assign security label:
+
+```sql
+SECURITY LABEL for "pgaadauth" on role "<roleName>" is 'aadauth,oid=<objectId>'
+```
+
+**Parameters:**
+- *roleName* - Name of an existing PostgreSQL role to which Azure AD authentication needs to be enabled.
+- *objectId* - Unique object identifier of the Azure AD object.
+
+## Next steps
+
+- Review the overall concepts for [Azure Active Directory authentication with Azure Database for PostgreSQL - Flexible Server](concepts-azure-ad-authentication.md)
postgresql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restart-server-portal.md
Follow these steps to restart your flexible server.
6. A notification will be shown that the restart operation has been initiated.
+> [!NOTE]
+> If you're using a custom RBAC role to restart the server, make sure that in addition to the Microsoft.DBforPostgreSQL/flexibleServers/restart/action permission, the role also has the Microsoft.DbforPostgreSQL/servers/write permission granted to it.
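+
+To confirm that a custom role includes both permissions, you can inspect its actions with the Azure CLI. The role name below is a hypothetical placeholder:
+
+```azurecli-interactive
+az role definition list --custom-role-only true --name "My PostgreSQL Restart Role" --query "[].permissions[].actions"
+```
+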
## Next steps - Learn about [business continuity](./concepts-business-continuity.md)
postgresql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-server-portal.md
Title: Restore - Azure portal - Azure Database for PostgreSQL - Flexible Server
-description: This article describes how to perform restore operations in Azure Database for PostgreSQL through the Azure portal.
+description: This article describes how to perform restore operations in Azure Database for PostgreSQL Flexible Server through the Azure portal.
Last updated 11/30/2021
# Point-in-time restore of a Flexible Server
-[! INCLUDE [!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
This article provides step-by-step procedure to perform point-in-time recoveries in flexible server using backups. You can perform either to a latest restore point or a custom restore point within your retention period.
Follow these steps to restore your flexible server using an existing backup.
2. From the overview page, click **Restore**. :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Restore overview":::
-3. Restore page will be shown with an option to choose between the latest restore point and Custom restore point.
+3. Restore page will be shown with an option to choose between the latest restore point, custom restore point and fast restore point.
4. Choose **Custom restore point**.
Follow these steps to restore your flexible server using an existing backup.
6. Click **OK**.
+7. A notification will be shown that the restore operation has been initiated.
+
+## Restoring using fast restore
+
+Follow these steps to restore your flexible server using a fast restore option.
+
+1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
+
+2. Click **Overview** from the left panel and click **Restore**
+
+ :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Restore overview":::
+
+3. Restore page will be shown with an option to choose between the latest restore point, custom restore point and fast restore point.
+
+4. Choose **Fast restore point (Restore using full backup only)**.
+
+5. Select the full backup of your choice from the Fast Restore Point drop-down. Provide a **new server name**, and optionally choose the **Availability zone** to restore to.
+
+
+6. Click **OK**.
+ 7. A notification will be shown that the restore operation has been initiated. ## Performing Geo-Restore
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
Last updated 08/11/2022-+ # Overview - Azure Database for PostgreSQL - Flexible Server [!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-[Azure Database for PostgreSQL](../overview.md) powered by the PostgreSQL community edition is available in three deployment modes:
+[Azure Database for PostgreSQL](../overview.md) powered by the PostgreSQL community edition is available in two deployment modes:
- [Single Server](../overview-single-server.md) - [Flexible Server](./overview.md) -- [Hyperscale (Citus)](../hyperscale/overview.md) In this article, we will provide an overview and introduction to core concepts of flexible server deployment model.
The flexible server comes with a [built-in PgBouncer](concepts-pgbouncer.md), a
One advantage of running your workload in Azure is global reach. The flexible server is currently available in the following Azure regions:
-| Region | V3/V4 compute availability | Zone-redundant HA | Geo-Redundant backup |
-| | | | |
-| Australia East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Australia Southeast | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| Brazil South | :heavy_check_mark: (v3 only) | :x: | :x: |
-| Canada Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Canada East | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| Central India | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| China East 3 | :heavy_check_mark: | :x: | :x:|
-| China North 3 | :heavy_check_mark: | :x: | :x:|
-| East Asia | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: |
-| East US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| East US 2 | :heavy_check_mark: | :x: $ | :heavy_check_mark: |
-| France Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| France South | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| Germany West Central | :x: $$ | :x: $ | :x: |
-| Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Japan West | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| Jio India West | :heavy_check_mark: (v3 only)| :x: | :x: |
-| Korea Central | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: |
-| Korea South | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| North Central US | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| North Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Norway East | :heavy_check_mark: | :x: | :x: |
-| Qatar Central | :heavy_check_mark: | :x: | :x: |
-| South Africa North | :heavy_check_mark: | :heavy_check_mark: ** | :x: |
-| South Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| South India | :x: $$ | :x: | :heavy_check_mark: |
-| Southeast Asia | :heavy_check_mark: | :x: $ | :heavy_check_mark: |
-| Sweden Central | :heavy_check_mark: | :x: | :x: |
-| Switzerland North | :heavy_check_mark: | :x: $ ** | :heavy_check_mark: |
-| Switzerland West | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| UAE North | :heavy_check_mark: | :x: | :x: |
-| US Gov Arizona | :heavy_check_mark: | :x: | :x: |
-| US Gov Virginia | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| UK South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| UK West | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| West Central US | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| West US | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| West US 2 | :x: $$ | :x: $ | :heavy_check_mark:|
-| West US 3 | :heavy_check_mark: | :heavy_check_mark: ** | :x: |
+| Region | V3/V4 compute availability | Zone-Redundant HA | Same-Zone HA | Geo-Redundant backup |
+| | | | | |
+| Australia East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Australia Southeast | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| Brazil South | :heavy_check_mark: (v3 only) | :x: | :heavy_check_mark: | :x: |
+| Canada Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Canada East | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| Central India | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| China East 3 | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: |
+| China North 3 | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: |
+| East Asia | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :heavy_check_mark: |
+| East US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| East US 2 | :heavy_check_mark: | :x: $ | :heavy_check_mark: | :heavy_check_mark: |
+| France Central | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| France South | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| Germany West Central | :x: $$ | :x: $ | :heavy_check_mark: |:x: |
+| Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Japan West | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| Jio India West | :heavy_check_mark: (v3 only)| :x: | :heavy_check_mark: |:x: |
+| Korea Central | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :heavy_check_mark: |
+| Korea South | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| North Central US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| North Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Norway East | :heavy_check_mark: | :x: | :heavy_check_mark: |:x: |
+| Qatar Central | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: |
+| South Africa North | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :x: |
+| South Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| South India | :x: $$ | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| Southeast Asia | :heavy_check_mark: | :x: $ | :heavy_check_mark: | :heavy_check_mark: |
+| Sweden Central | :heavy_check_mark: | :x: | :heavy_check_mark: |:x: |
+| Switzerland North | :heavy_check_mark: | :x: $ ** | :heavy_check_mark: | :heavy_check_mark: |
+| Switzerland West | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| UAE North | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: |
+| US Gov Arizona | :heavy_check_mark: | :x: | :heavy_check_mark: |:x: |
+| US Gov Virginia | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| UK South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| UK West | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| West Central US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| West US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| West US 2 | :x: $$ | :x: $ | :heavy_check_mark: | :heavy_check_mark:|
+| West US 3 | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :x: |
$ New Zone-redundant high availability deployments are temporarily blocked in these regions. Already provisioned HA servers are fully supported.
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-audit.md
- Title: Audit logging - Azure Database for PostgreSQL - Hyperscale (Citus)
-description: Concepts for pgAudit audit logging in Azure Database for PostgreSQL - Hyperscale (Citus).
----- Previously updated : 08/03/2021--
-# Audit logging in Azure Database for PostgreSQL - Hyperscale (Citus)
--
-> [!IMPORTANT]
-> The pgAudit extension in Hyperscale (Citus) is currently in preview. This
-> preview version is provided without a service level agreement, and it's not
-> recommended for production workloads. Certain features might not be supported
-> or might have constrained capabilities.
->
-> You can see a complete list of other new features in [preview features for
-> Hyperscale (Citus)](product-updates.md).
-
-Audit logging of database activities in Azure Database for PostgreSQL - Hyperscale (Citus) is available through the PostgreSQL Audit extension: [pgAudit](https://www.pgaudit.org/). pgAudit provides detailed session or object audit logging.
-
-If you want Azure resource-level logs for operations like compute and storage scaling, see the [Azure Activity Log](../../azure-monitor/essentials/platform-logs-overview.md).
-
-## Usage considerations
-By default, pgAudit log statements are emitted along with your regular log statements by using Postgres's standard logging facility. In Azure Database for PostgreSQL - Hyperscale (Citus), you can configure all logs to be sent to Azure Monitor Log store for later analytics in Log Analytics. If you enable Azure Monitor resource logging, your logs will be automatically sent (in JSON format) to Azure Storage, Event Hubs, or Azure Monitor logs, depending on your choice.
-
-## Enabling pgAudit
-
-The pgAudit extension is pre-installed and enabled on a limited number of
-Hyperscale (Citus) server groups at this time. It may or may not be available
-for preview yet on your server group.
-
-## pgAudit settings
-
-pgAudit allows you to configure session or object audit logging. [Session audit logging](https://github.com/pgaudit/pgaudit/blob/master/README.md#session-audit-logging) emits detailed logs of executed statements. [Object audit logging](https://github.com/pgaudit/pgaudit/blob/master/README.md#object-audit-logging) is audit scoped to specific relations. You can choose to set up one or both types of logging.
-
-> [!NOTE]
-> pgAudit settings are specified globally and cannot be specified at a database or role level.
->
-> Also, pgAudit settings are specified per-node in a server group. To make a change on all nodes, you must apply it to each node individually.
-
-You must configure pgAudit parameters to start logging. The [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#settings) provides the definition of each parameter. Test the parameters first and confirm that you're getting the expected behavior.
-
-> [!NOTE]
-> Setting `pgaudit.log_client` to ON will redirect logs to a client process (like psql) instead of being written to file. This setting should generally be left disabled. <br> <br>
-> `pgaudit.log_level` is only enabled when `pgaudit.log_client` is on.
-
-> [!NOTE]
-> In Azure Database for PostgreSQL - Hyperscale (Citus), `pgaudit.log` cannot be set using a `-` (minus) sign shortcut as described in the pgAudit documentation. All required statement classes (READ, WRITE, etc.) should be individually specified.
-
-## Audit log format
-Each audit entry is indicated by `AUDIT:` near the beginning of the log line. The format of the rest of the entry is detailed in the [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#format).
-
-## Getting started
-To quickly get started, set `pgaudit.log` to `WRITE`, and open your server logs to review the output.
-
-## Viewing audit logs
-The way you access the logs depends on which endpoint you choose. For Azure Storage, see the [logs storage account](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) article. For Event Hubs, see the [stream Azure logs](../../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs) article.
-
-For Azure Monitor Logs, logs are sent to the workspace you selected. The Postgres logs use the **AzureDiagnostics** collection mode, so they can be queried from the AzureDiagnostics table. The fields in the table are described below. Learn more about querying and alerting in the [Azure Monitor Logs query](../../azure-monitor/logs/log-query-overview.md) overview.
-
-You can use this query to get started. You can configure alerts based on queries.
-
-Search for all pgAudit entries in Postgres logs for a particular server in the last day
-```kusto
-AzureDiagnostics
-| where LogicalServerName_s == "myservername"
-| where TimeGenerated > ago(1d)
-| where Message contains "AUDIT:"
-```
-
-## Next steps
--- [Learn how to setup logging in Azure Database for PostgreSQL - Hyperscale (Citus) and how to access logs](howto-logging.md)
postgresql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-backup.md
- Title: Backup and restore ΓÇô Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Protecting data from accidental corruption or deletion
----- Previously updated : 04/14/2021--
-# Backup and restore in Azure Database for PostgreSQL - Hyperscale (Citus)
--
-Azure Database for PostgreSQL ΓÇô Hyperscale (Citus) automatically creates
-backups of each node and stores them in locally redundant storage. Backups can
-be used to restore your Hyperscale (Citus) server group to a specified time.
-Backup and restore are an essential part of any business continuity strategy
-because they protect your data from accidental corruption or deletion.
-
-## Backups
-
-At least once a day, Azure Database for PostgreSQL takes snapshot backups of
-data files and the database transaction log. The backups allow you to restore a
-server to any point in time within the retention period. (The retention period
-is currently 35 days for all server groups.) All backups are encrypted using
-AES 256-bit encryption.
-
-In Azure regions that support availability zones, backup snapshots are stored
-in three availability zones. As long as at least one availability zone is
-online, the Hyperscale (Citus) server group is restorable.
-
-Backup files can't be exported. They may only be used for restore operations
-in Azure Database for PostgreSQL.
-
-### Backup storage cost
-
-For current backup storage pricing, see the Azure Database for PostgreSQL -
-Hyperscale (Citus) [pricing
-page](https://azure.microsoft.com/pricing/details/postgresql/hyperscale-citus/).
-
-## Restore
-
-You can restore a Hyperscale (Citus) server group to any point in time within
-the last 35 days. Point-in-time restore is useful in multiple scenarios. For
-example, when a user accidentally deletes data, drops an important table or
-database, or if an application accidentally overwrites good data with bad data.
-
-> [!IMPORTANT]
-> Deleted Hyperscale (Citus) server groups can't be restored. If you delete the
-> server group, all nodes that belong to the server group are deleted and can't
-> be recovered. To protect server group resources, post deployment, from
-> accidental deletion or unexpected changes, administrators can leverage
-> [management locks](../../azure-resource-manager/management/lock-resources.md).
-
-The restore process creates a new server group in the same Azure region,
-subscription, and resource group as the original. The server group has the
-original's configuration: the same number of nodes, number of vCores, storage
-size, user roles, PostgreSQL version, and version of the Citus extension.
-
-Firewall settings and PostgreSQL server parameters are not preserved from the
-original server group, they are reset to default values. The firewall will
-prevent all connections. You will need to manually adjust these settings after
-restore. In general, see our list of suggested [post-restore
-tasks](howto-restore-portal.md#post-restore-tasks).
-
-## Next steps
-
-* See the steps to [restore a server group](howto-restore-portal.md)
- in the Azure portal.
-* Learn aboutΓÇ»[Azure availability zones](../../availability-zones/az-overview.md).
postgresql Concepts Colocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-colocation.md
- Title: Table colocation - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: How to store related information together for faster queries
----- Previously updated : 05/06/2019--
-# Table colocation in Azure Database for PostgreSQL ΓÇô Hyperscale (Citus)
--
-Colocation means storing related information together on the same nodes. Queries can go fast when all the necessary data is available without any network traffic. Colocating related data on different nodes allows queries to run efficiently in parallel on each node.
-
-## Data colocation for hash-distributed tables
-
-In Azure Database for PostgreSQL ΓÇô Hyperscale (Citus), a row is stored in a shard if the hash of the value in the distribution column falls within the shard's hash range. Shards with the same hash range are always placed on the same node. Rows with equal distribution column values are always on the same node across tables.
--
-## A practical example of colocation
-
-Consider the following tables that might be part of a multi-tenant web
-analytics SaaS:
-
-```sql
-CREATE TABLE event (
- tenant_id int,
- event_id bigint,
- page_id int,
- payload jsonb,
- primary key (tenant_id, event_id)
-);
-
-CREATE TABLE page (
- tenant_id int,
- page_id int,
- path text,
- primary key (tenant_id, page_id)
-);
-```
-
-Now we want to answer queries that might be issued by a customer-facing
-dashboard. An example query is "Return the number of visits in the past week for
-all pages starting with '/blog' in tenant six."
-
-If our data was in the Single-Server deployment option, we could easily express
-our query by using the rich set of relational operations offered by SQL:
-
-```sql
-SELECT page_id, count(event_id)
-FROM
- page
-LEFT JOIN (
- SELECT * FROM event
- WHERE (payload->>'time')::timestamptz >= now() - interval '1 week'
-) recent
-USING (tenant_id, page_id)
-WHERE tenant_id = 6 AND path LIKE '/blog%'
-GROUP BY page_id;
-```
-
-As long as the [working set](https://en.wikipedia.org/wiki/Working_set) for this query fits in memory, a single-server table is an appropriate solution. Let's consider the opportunities of scaling the data model with the Hyperscale (Citus) deployment option.
-
-### Distribute tables by ID
-
-Single-server queries start slowing down as the number of tenants and the data stored for each tenant grows. The working set stops fitting in memory and CPU becomes a bottleneck.
-
-In this case, we can shard the data across many nodes by using Hyperscale (Citus). The
-first and most important choice we need to make when we decide to shard is the
-distribution column. Let's start with a naive choice of using `event_id` for
-the event table and `page_id` for the `page` table:
-
-```sql
naively use event_id and page_id as distribution columns-
-SELECT create_distributed_table('event', 'event_id');
-SELECT create_distributed_table('page', 'page_id');
-```
-
-When data is dispersed across different workers, we can't perform a join like we would on a single PostgreSQL node. Instead, we need to issue two queries:
-
-```sql
(Q1) get the relevant page_ids
-SELECT page_id FROM page WHERE path LIKE '/blog%' AND tenant_id = 6;
- (Q2) get the counts
-SELECT page_id, count(*) AS count
-FROM event
-WHERE page_id IN (/*…page IDs from first query…*/)
- AND tenant_id = 6
- AND (payload->>'time')::date >= now() - interval '1 week'
-GROUP BY page_id ORDER BY count DESC LIMIT 10;
-```
-
-Afterwards, the results from the two steps need to be combined by the
-application.
-
-Running the queries must consult data in shards scattered across nodes.
--
-In this case, the data distribution creates substantial drawbacks:
--- Overhead from querying each shard and running multiple queries.-- Overhead of Q1 returning many rows to the client.-- Q2 becomes large.-- The need to write queries in multiple steps requires changes in the application.-
-The data is dispersed, so the queries can be parallelized. It's
-only beneficial if the amount of work that the query does is substantially
-greater than the overhead of querying many shards.
-
-### Distribute tables by tenant
-
-In Hyperscale (Citus), rows with the same distribution column value are guaranteed to
-be on the same node. Starting over, we can create our tables with `tenant_id`
-as the distribution column.
-
-```sql
co-locate tables by using a common distribution column
-SELECT create_distributed_table('event', 'tenant_id');
-SELECT create_distributed_table('page', 'tenant_id', colocate_with => 'event');
-```
-
-Now Hyperscale (Citus) can answer the original single-server query without modification (Q1):
-
-```sql
-SELECT page_id, count(event_id)
-FROM
- page
-LEFT JOIN (
- SELECT * FROM event
- WHERE (payload->>'time')::timestamptz >= now() - interval '1 week'
-) recent
-USING (tenant_id, page_id)
-WHERE tenant_id = 6 AND path LIKE '/blog%'
-GROUP BY page_id;
-```
-
-Because of filter and join on tenant_id, Hyperscale (Citus) knows that the entire
-query can be answered by using the set of colocated shards that contain the data
-for that particular tenant. A single PostgreSQL node can answer the query in
-a single step.
--
-In some cases, queries and table schemas must be changed to include the tenant ID in unique constraints and join conditions. This change is usually straightforward.
-
-## Next steps
--- See how tenant data is colocated in the [multi-tenant tutorial](tutorial-design-database-multi-tenant.md).
postgresql Concepts Columnar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-columnar.md
- Title: Columnar table storage - Azure PostgreSQL Hyperscale (Citus)
-description: Learn how to compress data using columnar storage.
----- Previously updated : 05/23/2022---
-# Compress data with columnar tables in Hyperscale (CItus)
--
-Azure Database for PostgreSQL - Hyperscale (Citus) supports append-only
-columnar table storage for analytic and data warehousing workloads. When
-columns (rather than rows) are stored contiguously on disk, data becomes more
-compressible, and queries can request a subset of columns more quickly.
-
-## Create a table
-
-To use columnar storage, specify `USING columnar` when creating a table:
-
-```postgresql
-CREATE TABLE contestant (
- handle TEXT,
- birthdate DATE,
- rating INT,
- percentile FLOAT,
- country CHAR(3),
- achievements TEXT[]
-) USING columnar;
-```
-
-Hyperscale (Citus) converts rows to columnar storage in "stripes" during
-insertion. Each stripe holds one transaction's worth of data, or 150000 rows,
-whichever is less. (The stripe size and other parameters of a columnar table
-can be changed with the
-[alter_columnar_table_set](reference-functions.md#alter_columnar_table_set)
-function.)
-
-For example, the following statement puts all five rows into the same stripe,
-because all values are inserted in a single transaction:
-
-```postgresql
insert these values into a single columnar stripe-
-INSERT INTO contestant VALUES
- ('a','1990-01-10',2090,97.1,'XA','{a}'),
- ('b','1990-11-01',2203,98.1,'XA','{a,b}'),
- ('c','1988-11-01',2907,99.4,'XB','{w,y}'),
- ('d','1985-05-05',2314,98.3,'XB','{}'),
- ('e','1995-05-05',2236,98.2,'XC','{a}');
-```
-
-It's best to make large stripes when possible, because Hyperscale (Citus)
-compresses columnar data separately per stripe. We can see facts about our
-columnar table like compression rate, number of stripes, and average rows per
-stripe by using `VACUUM VERBOSE`:
-
-```postgresql
-VACUUM VERBOSE contestant;
-```
-```
-INFO: statistics for "contestant":
-storage id: 10000000000
-total file size: 24576, total data size: 248
-compression rate: 1.31x
-total row count: 5, stripe count: 1, average rows per stripe: 5
-chunk count: 6, containing data for dropped columns: 0, zstd compressed: 6
-```
-
-The output shows that Hyperscale (Citus) used the zstd compression algorithm to
-obtain 1.31x data compression. The compression rate compares a) the size of
-inserted data as it was staged in memory against b) the size of that data
-compressed in its eventual stripe.
-
-Because of how it's measured, the compression rate may or may not match the
-size difference between row and columnar storage for a table. The only way
-to truly find that difference is to construct a row and columnar table that
-contain the same data, and compare.
-
-## Measuring compression
-
-Let's create a new example with more data to benchmark the compression savings.
-
-```postgresql
--- first a wide table using row storage
-CREATE TABLE perf_row(
- c00 int8, c01 int8, c02 int8, c03 int8, c04 int8, c05 int8, c06 int8, c07 int8, c08 int8, c09 int8,
- c10 int8, c11 int8, c12 int8, c13 int8, c14 int8, c15 int8, c16 int8, c17 int8, c18 int8, c19 int8,
- c20 int8, c21 int8, c22 int8, c23 int8, c24 int8, c25 int8, c26 int8, c27 int8, c28 int8, c29 int8,
- c30 int8, c31 int8, c32 int8, c33 int8, c34 int8, c35 int8, c36 int8, c37 int8, c38 int8, c39 int8,
- c40 int8, c41 int8, c42 int8, c43 int8, c44 int8, c45 int8, c46 int8, c47 int8, c48 int8, c49 int8,
- c50 int8, c51 int8, c52 int8, c53 int8, c54 int8, c55 int8, c56 int8, c57 int8, c58 int8, c59 int8,
- c60 int8, c61 int8, c62 int8, c63 int8, c64 int8, c65 int8, c66 int8, c67 int8, c68 int8, c69 int8,
- c70 int8, c71 int8, c72 int8, c73 int8, c74 int8, c75 int8, c76 int8, c77 int8, c78 int8, c79 int8,
- c80 int8, c81 int8, c82 int8, c83 int8, c84 int8, c85 int8, c86 int8, c87 int8, c88 int8, c89 int8,
- c90 int8, c91 int8, c92 int8, c93 int8, c94 int8, c95 int8, c96 int8, c97 int8, c98 int8, c99 int8
-);
--- next a table with identical columns using columnar storage
-CREATE TABLE perf_columnar(LIKE perf_row) USING COLUMNAR;
-```
-
-Fill both tables with the same large dataset:
-
-```postgresql
-INSERT INTO perf_row
- SELECT
- g % 00500, g % 01000, g % 01500, g % 02000, g % 02500, g % 03000, g % 03500, g % 04000, g % 04500, g % 05000,
- g % 05500, g % 06000, g % 06500, g % 07000, g % 07500, g % 08000, g % 08500, g % 09000, g % 09500, g % 10000,
- g % 10500, g % 11000, g % 11500, g % 12000, g % 12500, g % 13000, g % 13500, g % 14000, g % 14500, g % 15000,
- g % 15500, g % 16000, g % 16500, g % 17000, g % 17500, g % 18000, g % 18500, g % 19000, g % 19500, g % 20000,
- g % 20500, g % 21000, g % 21500, g % 22000, g % 22500, g % 23000, g % 23500, g % 24000, g % 24500, g % 25000,
- g % 25500, g % 26000, g % 26500, g % 27000, g % 27500, g % 28000, g % 28500, g % 29000, g % 29500, g % 30000,
- g % 30500, g % 31000, g % 31500, g % 32000, g % 32500, g % 33000, g % 33500, g % 34000, g % 34500, g % 35000,
- g % 35500, g % 36000, g % 36500, g % 37000, g % 37500, g % 38000, g % 38500, g % 39000, g % 39500, g % 40000,
- g % 40500, g % 41000, g % 41500, g % 42000, g % 42500, g % 43000, g % 43500, g % 44000, g % 44500, g % 45000,
- g % 45500, g % 46000, g % 46500, g % 47000, g % 47500, g % 48000, g % 48500, g % 49000, g % 49500, g % 50000
- FROM generate_series(1,50000000) g;
-
-INSERT INTO perf_columnar
- SELECT
- g % 00500, g % 01000, g % 01500, g % 02000, g % 02500, g % 03000, g % 03500, g % 04000, g % 04500, g % 05000,
- g % 05500, g % 06000, g % 06500, g % 07000, g % 07500, g % 08000, g % 08500, g % 09000, g % 09500, g % 10000,
- g % 10500, g % 11000, g % 11500, g % 12000, g % 12500, g % 13000, g % 13500, g % 14000, g % 14500, g % 15000,
- g % 15500, g % 16000, g % 16500, g % 17000, g % 17500, g % 18000, g % 18500, g % 19000, g % 19500, g % 20000,
- g % 20500, g % 21000, g % 21500, g % 22000, g % 22500, g % 23000, g % 23500, g % 24000, g % 24500, g % 25000,
- g % 25500, g % 26000, g % 26500, g % 27000, g % 27500, g % 28000, g % 28500, g % 29000, g % 29500, g % 30000,
- g % 30500, g % 31000, g % 31500, g % 32000, g % 32500, g % 33000, g % 33500, g % 34000, g % 34500, g % 35000,
- g % 35500, g % 36000, g % 36500, g % 37000, g % 37500, g % 38000, g % 38500, g % 39000, g % 39500, g % 40000,
- g % 40500, g % 41000, g % 41500, g % 42000, g % 42500, g % 43000, g % 43500, g % 44000, g % 44500, g % 45000,
- g % 45500, g % 46000, g % 46500, g % 47000, g % 47500, g % 48000, g % 48500, g % 49000, g % 49500, g % 50000
- FROM generate_series(1,50000000) g;
-
-VACUUM (FREEZE, ANALYZE) perf_row;
-VACUUM (FREEZE, ANALYZE) perf_columnar;
-```
-
-For this data, you can see a compression ratio of better than 8X in the
-columnar table.
-
-```postgresql
-SELECT pg_total_relation_size('perf_row')::numeric/
- pg_total_relation_size('perf_columnar') AS compression_ratio;
- compression_ratio
- 8.0196135873627944
-(1 row)
-```
-
-## Example
-
-Columnar storage works well with table partitioning. For an example, see the
-Citus Engine community documentation, [archiving with columnar
-storage](https://docs.citusdata.com/en/stable/use_cases/timeseries.html#archiving-with-columnar-storage).
-
-## Gotchas
-
-* Columnar storage compresses per stripe. Stripes are created per transaction,
- so inserting one row per transaction will put single rows into their own
-  stripes. Compression and performance of single-row stripes will be worse than
-  those of a row table. Always insert in bulk to a columnar table.
-* If you accidentally create many tiny stripes, they can't be compacted in place.
-  The only fix is to create a new columnar table and copy
-  data from the original in one transaction:
- ```postgresql
- BEGIN;
- CREATE TABLE foo_compacted (LIKE foo) USING columnar;
- INSERT INTO foo_compacted SELECT * FROM foo;
- DROP TABLE foo;
- ALTER TABLE foo_compacted RENAME TO foo;
- COMMIT;
- ```
-* Fundamentally non-compressible data can be a problem, although columnar
- storage is still useful when selecting specific columns. It doesn't need
- to load the other columns into memory.
-* On a partitioned table with a mix of row and columnar partitions, updates must
-  be carefully targeted. Filter them to hit only the row partitions. (A hedged
-  setup sketch follows this list.)
-  * If the operation is targeted at a specific row partition (for example,
-    `UPDATE p2 SET i = i + 1`), it will succeed; if targeted at a specific columnar
-    partition (for example, `UPDATE p1 SET i = i + 1`), it will fail.
- * If the operation is targeted at the partitioned table and has a WHERE
- clause that excludes all columnar partitions (for example
- `UPDATE parent SET i = i + 1 WHERE timestamp = '2020-03-15'`),
- it will succeed.
- * If the operation is targeted at the partitioned table, but does not
- filter on the partition key columns, it will fail. Even if there are
- WHERE clauses that match rows in only columnar partitions, it's not
- enough--the partition key must also be filtered.
-
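-The bullets above assume a mixed layout like the following hedged sketch, where
-`p1` is converted to columnar storage with the Citus
-`alter_table_set_access_method` helper and `p2` keeps row storage; the table
-definition and partition bounds are illustrative assumptions:
-
-```postgresql
--- hypothetical parent table partitioned on the column filtered in the examples above
-CREATE TABLE parent ("timestamp" timestamptz, i int) PARTITION BY RANGE ("timestamp");
-CREATE TABLE p1 PARTITION OF parent FOR VALUES FROM ('2020-03-01') TO ('2020-03-15');
-CREATE TABLE p2 PARTITION OF parent FOR VALUES FROM ('2020-03-15') TO ('2020-04-01');
--- convert p1 to columnar; p2 keeps the default row storage
-SELECT alter_table_set_access_method('p1', 'columnar');
-```
-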
-## Limitations
-
-This feature still has significant limitations. See [Hyperscale
-(Citus) limits and limitations](reference-limits.md#columnar-storage).
-
-## Next steps
-
-* See an example of columnar storage in a Citus [time series
- tutorial](https://docs.citusdata.com/en/stable/use_cases/timeseries.html)
- (external link).
postgresql Concepts Connection Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-connection-pool.md
- Title: Connection pooling – Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Scaling client database connections
----- Previously updated : 05/31/2022--
-# Azure Database for PostgreSQL – Hyperscale (Citus) connection pooling
--
-Establishing new connections takes time. That works against most applications,
-which request many short-lived connections. We recommend using a connection
-pooler, both to reduce idle transactions and reuse existing connections. To
-learn more, visit our [blog
-post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717).
-
-You can run your own connection pooler, or use PgBouncer managed by Azure.
-
-## Managed PgBouncer
-
-Connection poolers such as PgBouncer allow more clients to connect to the
-coordinator node at once. Applications connect to the pooler, and the pooler
-relays commands to the destination database.
-
-When clients connect through PgBouncer, the number of connections that can
-actively run in the database doesn't change. Instead, PgBouncer queues excess
-connections and runs them when the database is ready.
-
-Hyperscale (Citus) is now offering a managed instance of PgBouncer for server
-groups. It supports up to 2,000 simultaneous client connections. Additionally,
-if a server group has [high availability](concepts-high-availability.md) (HA)
-enabled, then so does its managed PgBouncer.
-
-To connect through PgBouncer, follow these steps:
-
-1. Go to the **Connection strings** page for your server group in the Azure
- portal.
-2. Enable the checkbox **PgBouncer connection strings**. (The listed connection
- strings will change.)
-3. Update client applications to connect with the new string. A hedged example appears below.
-
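-For example, a psql connection through the managed PgBouncer might look like the
-following sketch. The server group name is illustrative, and port 6432 is an
-assumption; copy the exact string shown in the portal instead:
-
-```bash
-psql "host=c.mygroup01.postgres.database.azure.com port=6432 dbname=citus user=citus sslmode=require"
-```
-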
-## Next steps
-
-Discover more about the [limits and limitations](reference-limits.md)
-of Hyperscale (Citus).
postgresql Concepts Distributed Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-distributed-data.md
- Title: Distributed data – Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Learn about distributed tables, reference tables, local tables, and shards in Azure Database for PostgreSQL.
----- Previously updated : 05/06/2019--
-# Distributed data in Azure Database for PostgreSQL – Hyperscale (Citus)
--
-This article outlines the three table types in Azure Database for PostgreSQL – Hyperscale (Citus).
-It shows how distributed tables are stored as shards, and the way that shards are placed on nodes.
-
-## Table types
-
-There are three types of tables in a Hyperscale (Citus) server group, each
-used for different purposes.
-
-### Type 1: Distributed tables
-
-The first type, and most common, is distributed tables. They
-appear to be normal tables to SQL statements, but they're horizontally
-partitioned across worker nodes. What this means is that the rows
-of the table are stored on different nodes, in fragment tables called
-shards.
-
-Hyperscale (Citus) runs not only SQL but DDL statements throughout a cluster.
-Changing the schema of a distributed table cascades to update
-all the table's shards across workers.
-
-#### Distribution column
-
-Hyperscale (Citus) uses algorithmic sharding to assign rows to shards. The assignment is made deterministically based on the value
-of a table column called the distribution column. The cluster
-administrator must designate this column when distributing a table.
-Making the right choice is important for performance and functionality.
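-
-For example, the `github_events` table used later in this article could be
-distributed on a hypothetical `repo_id` column like this:
-
-```sql
-SELECT create_distributed_table('github_events', 'repo_id');
-```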
-
-### Type 2: Reference tables
-
-A reference table is a type of distributed table whose entire contents are
-concentrated into a single shard. The shard is replicated on every worker and
-the coordinator. Queries on any worker can access the reference information
-locally, without the network overhead of requesting rows from another node.
-Reference tables have no distribution column because there's no need to
-distinguish separate shards per row.
-
-Reference tables are typically small and are used to store data that's
-relevant to queries running on any worker node. An example is enumerated
-values like order statuses or product categories.
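-
-For example, a small hypothetical `order_statuses` table could be replicated to
-every node with the Citus `create_reference_table` function:
-
-```sql
-SELECT create_reference_table('order_statuses');
-```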
-
-### Type 3: Local tables
-
-When you use Hyperscale (Citus), the coordinator node you connect to is a regular PostgreSQL database. You can create ordinary tables on the coordinator and choose not to shard them.
-
-A good candidate for local tables would be small administrative tables that don't participate in join queries. An example is a users table for application sign-in and authentication.
-
-## Shards
-
-The previous section described how distributed tables are stored as shards on
-worker nodes. This section discusses more technical details.
-
-The `pg_dist_shard` metadata table on the coordinator contains a
-row for each shard of each distributed table in the system. The row
-matches a shard ID with a range of integers in a hash space
-(shardminvalue, shardmaxvalue).
-
-```sql
-SELECT * from pg_dist_shard;
- logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue
----------------+---------+--------------+---------------+---------------
- github_events | 102026 | t | 268435456 | 402653183
- github_events | 102027 | t | 402653184 | 536870911
- github_events | 102028 | t | 536870912 | 671088639
- github_events | 102029 | t | 671088640 | 805306367
- (4 rows)
-```
-
-If the coordinator node wants to determine which shard holds a row of
-`github_events`, it hashes the value of the distribution column in the
-row. Then the node checks which shard's range contains the hashed value. The
-ranges are defined so that the image of the hash function is their
-disjoint union.
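-
-The same lookup can be done by hand with the Citus helper below; the distribution
-column value `4` is only an illustration:
-
-```sql
-SELECT get_shard_id_for_distribution_column('github_events', 4);
-```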
-
-### Shard placements
-
-Suppose that shard 102027 is associated with the row in question. The row
-is read or written in a table called `github_events_102027` in one of
-the workers. Which worker? That's determined entirely by the metadata
-tables. The mapping of shard to worker is known as the shard placement.
-
-The coordinator node
-rewrites queries into fragments that refer to the specific tables
-like `github_events_102027` and runs those fragments on the
-appropriate workers. Here's an example of a query run behind the scenes to find the node holding shard ID 102027.
-
-```sql
-SELECT
- shardid,
- node.nodename,
- node.nodeport
-FROM pg_dist_placement placement
-JOIN pg_dist_node node
- ON placement.groupid = node.groupid
- AND node.noderole = 'primary'::noderole
-WHERE shardid = 102027;
-```
-
-```output
-┌─────────┬───────────┬──────────┐
-│ shardid │ nodename  │ nodeport │
-├─────────┼───────────┼──────────┤
-│  102027 │ localhost │     5433 │
-└─────────┴───────────┴──────────┘
-```
-
-## Next steps
--- Learn how to [choose a distribution column](howto-choose-distribution-column.md) for distributed tables.
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-firewall-rules.md
- Title: Public access - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: This article describes public access for Azure Database for PostgreSQL - Hyperscale (Citus).
----- Previously updated : 10/15/2021--
-# Public access in Azure Database for PostgreSQL - Hyperscale (Citus)
---
-This page describes the public access option. For private access, see
-[here](concepts-private-access.md).
-
-## Firewall overview
-
-The Azure Database for PostgreSQL server firewall prevents all access to your Hyperscale (Citus) coordinator node until you specify which computers have permission. The firewall grants access to the server based on the originating IP address of each request.
-To configure your firewall, you create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level.
-
-**Firewall rules:** These rules enable clients to access your Hyperscale (Citus) coordinator node, that is, all the databases within the same logical server. Server-level firewall rules can be configured by using the Azure portal. To create server-level firewall rules, you must be the subscription owner or a subscription contributor.
-
-All database access to your coordinator node is blocked by the firewall by default. To begin using your server from another computer, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify which IP address ranges from the Internet to allow. Access to the Azure portal website itself is not impacted by the firewall rules.
-Connection attempts from the Internet and Azure must first pass through the firewall before they can reach your PostgreSQL Database, as shown in the following diagram:
--
-## Connecting from the Internet and from Azure
-
-A Hyperscale (Citus) server group firewall controls who can connect to the group's coordinator node. The firewall determines access by consulting a configurable list of rules. Each rule is an IP address, or range of addresses, that are allowed in.
-
-When the firewall blocks connections, it can cause application errors. Using the PostgreSQL JDBC driver, for instance, raises an error like this:
-
-> java.util.concurrent.ExecutionException: java.lang.RuntimeException:
-> org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "123.45.67.890", user "citus", database "citus", SSL
-
-See [Create and manage firewall rules](howto-manage-firewall-using-portal.md) to learn how the rules are defined.
-
-## Troubleshooting the database server firewall
-When access to the Microsoft Azure Database for PostgreSQL - Hyperscale (Citus) service doesn't behave as you expect, consider these points:
-
-* **Changes to the allow list have not taken effect yet:** There may be as much as a five-minute delay for changes to the Hyperscale (Citus) firewall configuration to take effect.
-
-* **The user is not authorized or an incorrect password was used:** If a user does not have permissions on the server or the password used is incorrect, the connection to the server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server; each client must still provide the necessary security credentials.
-
-For example, using a JDBC client, the following error may appear.
-> java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "yourusername"
-
-* **Dynamic IP address:** If you have an Internet connection with dynamic IP addressing and you are having trouble getting through the firewall, you could try one of the following solutions:
-
-* Ask your Internet Service Provider (ISP) for the IP address range assigned to your client computers that access the Hyperscale (Citus) coordinator node, and then add the IP address range as a firewall rule.
-
-* Get static IP addressing instead for your client computers, and then add the static IP address as a firewall rule.
-
-## Next steps
-For articles on creating server-level and database-level firewall rules, see:
-* [Create and manage Azure Database for PostgreSQL firewall rules using the Azure portal](howto-manage-firewall-using-portal.md)
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-high-availability.md
- Title: High availability – Hyperscale (Citus) - Azure Database for PostgreSQL
-description: High availability and disaster recovery concepts
----- Previously updated : 07/15/2022--
-# High availability in Azure Database for PostgreSQL – Hyperscale (Citus)
--
-High availability (HA) avoids database downtime by maintaining standby replicas
-of every node in a server group. If a node goes down, Hyperscale (Citus)
-switches incoming connections from the failed node to its standby. Failover
-happens within a few minutes, and promoted nodes always have fresh data through
-PostgreSQL synchronous streaming replication.
-
-All primary nodes in a server group are provisioned into one availability zone
-for better latency between the nodes. The standby nodes are provisioned into
-another zone. The Azure portal
-[displays](concepts-server-group.md#node-availability-zone) the availability
-zone of each node in a server group.
-
-Even without HA enabled, each Hyperscale (Citus) node has its own locally
-redundant storage (LRS) with three synchronous replicas maintained by Azure
-Storage service. If there's a single replica failure, it's detected by Azure
-Storage service and is transparently re-created. For LRS storage durability,
-see metrics [on this
-page](../../storage/common/storage-redundancy.md#summary-of-redundancy-options).
-
-When HA *is* enabled, Hyperscale (Citus) runs one standby node for each primary
-node in the server group. The primary and its standby use synchronous
-PostgreSQL replication. This replication allows customers to have predictable
-downtime if a primary node fails. In a nutshell, our service detects a failure
-on primary nodes, and fails over to standby nodes with zero data loss.
-
-To take advantage of HA on the coordinator node, database applications need to
-detect and retry dropped connections and failed transactions. The newly
-promoted coordinator will be accessible with the same connection string.
-
-## High availability states
-
-Recovery can be broken into three stages: detection, failover, and full
-recovery. Hyperscale (Citus) runs periodic health checks on every node, and
-after four failed checks it determines that a node is down. Hyperscale (Citus)
-then promotes a standby to primary node status (failover), and creates a new
-standby-to-be. Streaming replication begins, bringing the new node up to date.
-When all data has been replicated, the node has reached full recovery.
-
-Hyperscale (Citus) displays its failover progress state on the Overview page
-for server groups in the Azure portal.
-
-* **Healthy**: HA is enabled and the node is fully replicated to its standby.
-* **Failover in progress**: A failure was detected on the primary node and
- a failover to standby was initiated. This state will transition into
- **Creating standby** once failover to the standby node is completed, and the
- standby becomes the new primary.
-* **Creating standby**: The previous standby was promoted to primary, and a
-  new standby is being created for it. When the new standby is ready, this
- state will transition into **Replication in progress**.
-* **Replication in progress**: The new standby node is provisioned and data
- synchronization is in progress. Once all data is replicated to the new
- standby, synchronous replication will be enabled between the primary and
- standby nodes, and the nodes' state will transition back to **Healthy**.
-* **No**: HA isn't enabled on this node.
-
-## Next steps
--- Learn how to [enable high
- availability](howto-high-availability.md) in a Hyperscale (Citus) server
- group.
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-maintenance.md
- Title: Scheduled maintenance - Azure Database for PostgreSQL - Hyperscale (Citus)
-description: This article describes the scheduled maintenance feature in Azure Database for PostgreSQL - Hyperscale (Citus).
----- Previously updated : 02/14/2022--
-# Scheduled maintenance in Azure Database for PostgreSQL – Hyperscale (Citus)
--
-Azure Database for PostgreSQL - Hyperscale (Citus) does periodic maintenance to
-keep your managed database secure, stable, and up-to-date. During maintenance,
-all nodes in the server group get new features, updates, and patches.
-
-The key features of scheduled maintenance for Hyperscale (Citus) are:
-
-* Updates are applied at the same time on all nodes in the server group
-* Notifications about upcoming maintenance are posted to Azure Service Health
- five days in advance
-* Usually there are at least 30 days between successful maintenance events for
- a server group
-* Preferred day of the week and time window within that day for maintenance
- start can be defined for each server group individually
-
-## Selecting a maintenance window and notification about upcoming maintenance
-
-You can schedule maintenance during a specific day of the week and a time
-window within that day. Or you can let the system pick a day and a time window
-for you automatically. Either way, the system will alert you five days before
-running any maintenance. The system will also let you know when maintenance is
-started, and when it's successfully completed.
-
-Notifications about upcoming scheduled maintenance are posted to Azure Service
-Health and can be:
-
-* Emailed to a specific address
-* Emailed to an Azure Resource Manager Role
-* Sent in a text message (SMS) to mobile devices
-* Pushed as a notification to an Azure app
-* Delivered as a voice message
-
-When specifying preferences for the maintenance schedule, you can pick a day of
-the week and a time window. If you don't specify, the system will pick times
-between 11pm and 7am in your server group's region time. You can define
-different schedules for each Hyperscale (Citus) server group in your Azure
-subscription.
-
-> [!IMPORTANT]
-> Normally there are at least 30 days between successful scheduled maintenance
-> events for a server group.
->
-> However, in case of a critical emergency update such as a severe
-> vulnerability, the notification window could be shorter than five days. The
-> critical update may be applied to your server even if a successful scheduled
-> maintenance was performed in the last 30 days.
-
-You can update scheduling settings at any time. If there's maintenance
-scheduled for your Hyperscale (Citus) server group and you update the schedule,
-the pre-existing events will be rescheduled.
-
-If maintenance fails or gets canceled, the system will create a notification.
-It will try maintenance again according to current scheduling settings, and
-notify you five days before the next maintenance event.
-
-## Next steps
-
-* Learn how to [change the maintenance schedule](howto-maintenance.md)
-* Learn how to [get notifications about upcoming maintenance](../../service-health/service-notifications.md) using Azure Service Health
-* Learn how to [set up alerts about upcoming scheduled maintenance events](../../service-health/resource-health-alert-monitor-guide.md)
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-monitoring.md
- Title: Monitor and tune - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: This article describes monitoring and tuning features in Azure Database for PostgreSQL - Hyperscale (Citus)
----- Previously updated : 09/27/2022--
-# Monitor and tune Azure Database for PostgreSQL - Hyperscale (Citus)
--
-Monitoring data about your servers helps you troubleshoot and optimize for your
-workload. Hyperscale (Citus) provides various monitoring options to provide
-insight into the behavior of nodes in a server group.
-
-## Metrics
-
-Hyperscale (Citus) provides metrics for nodes in a server group, and aggregate
-metrics for the group as a whole. The metrics give insight into the behavior of
-supporting resources. Each metric is emitted at a one-minute frequency, and has
-up to 30 days of history.
-
-In addition to viewing graphs of the metrics, you can configure alerts. For
-step-by-step guidance, see [How to set up
-alerts](howto-alert-on-metric.md). Other tasks include setting up
-automated actions, running advanced analytics, and archiving history. For more
-information, see the [Azure Metrics
-Overview](../../azure-monitor/data-platform.md).
-
-### Per node vs aggregate
-
-By default, the Azure portal aggregates Hyperscale (Citus) metrics across nodes
-in a server group. However, some metrics, such as disk usage percentage, are
-more informative on a per-node basis. To see metrics for nodes displayed
-individually, use Azure Monitor [metric
-splitting](howto-monitoring.md#view-metrics-per-node) by server
-name.
-
-> [!NOTE]
->
-> Some Hyperscale (Citus) server groups do not support metric splitting. On
-> these server groups, you can view metrics for individual nodes by clicking
-> the node name in the server group **Overview** page. Then open the
-> **Metrics** page for the node.
-
-### List of metrics
-
-These metrics are available for Hyperscale (Citus) nodes:
-
-|Metric|Metric Display Name|Unit|Description|
-|||||
-|active_connections|Active Connections|Count|The number of active connections to the server.|
-|apps_reserved_memory_percent|Reserved Memory Percent|Percent|Calculated from the ratio of Committed_AS/CommitLimit as shown in /proc/meminfo.|
-|cpu_percent|CPU percent|Percent|The percentage of CPU in use.|
-|iops|IOPS|Count|See the [IOPS definition](../../virtual-machines/premium-storage-performance.md#iops) and [Hyperscale (Citus) throughput](resources-compute.md)|
-|memory_percent|Memory percent|Percent|The percentage of memory in use.|
-|network_bytes_ingress|Network In|Bytes|Network In across active connections.|
-|network_bytes_egress|Network Out|Bytes|Network Out across active connections.|
-|replication_lag|Replication Lag|Seconds|How far read replica nodes are behind their counterparts in the primary cluster.|
-|storage_percent|Storage percentage|Percent|The percentage of storage used out of the server's maximum.|
-|storage_used|Storage used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.|
-
-Azure supplies no aggregate metrics for the cluster as a whole, but metrics for
-multiple nodes can be placed on the same graph.
-
-## Next steps
--- Learn how to [view metrics](howto-monitoring.md) for a
- Hyperscale (Citus) server group.
-- See [how to set up alerts](howto-alert-on-metric.md) for guidance
- on creating an alert on a metric.
-- Learn how to do [metric
- splitting](../../azure-monitor/essentials/metrics-charts.md#metric-splitting) to
- inspect metrics per node in a server group.
-- See other measures of database health with [useful diagnostic queries](howto-useful-diagnostic-queries.md).
postgresql Concepts Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-nodes.md
- Title: Nodes – Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Learn about the types of nodes and tables in a server group in Azure Database for PostgreSQL.
----- Previously updated : 07/28/2019--
-# Nodes and tables in Azure Database for PostgreSQL – Hyperscale (Citus)
--
-## Nodes
-
-The Hyperscale (Citus) hosting type allows Azure Database for PostgreSQL
-servers (called nodes) to coordinate with one another in a "shared nothing"
-architecture. The nodes in a server group collectively hold more data and use
-more CPU cores than would be possible on a single server. The architecture also
-allows the database to scale by adding more nodes to the server group.
-
-### Coordinator and workers
-
-Every server group has a coordinator node and multiple workers. Applications
-send their queries to the coordinator node, which relays them to the relevant
-workers and accumulates their results. Applications are not able to connect
-directly to workers.
-
-Hyperscale (Citus) allows the database administrator to *distribute* tables,
-storing different rows on different worker nodes. Distributed tables are the
-key to Hyperscale (Citus) performance. A table that isn't distributed stays entirely
-on the coordinator node and can't take advantage of cross-machine parallelism.
-
-For each query on distributed tables, the coordinator either routes it to a
-single worker node, or parallelizes it across several depending on whether the
-required data lives on a single node or multiple. The coordinator decides what
-to do by consulting metadata tables. These tables track the DNS names and
-health of worker nodes, and the distribution of data across nodes.
-
-## Table types
-
-There are three types of tables in a Hyperscale (Citus) server group, each
-stored differently on nodes and used for different purposes.
-
-### Type 1: Distributed tables
-
-The first type, and most common, is distributed tables. They
-appear to be normal tables to SQL statements, but they're horizontally
-partitioned across worker nodes. What this means is that the rows
-of the table are stored on different nodes, in fragment tables called
-shards.
-
-Hyperscale (Citus) runs not only SQL but DDL statements throughout a cluster.
-Changing the schema of a distributed table cascades to update
-all the table's shards across workers.
-
-#### Distribution column
-
-Hyperscale (Citus) uses algorithmic sharding to assign rows to shards. The assignment is made deterministically based on the value
-of a table column called the distribution column. The cluster
-administrator must designate this column when distributing a table.
-Making the right choice is important for performance and functionality.
-
-### Type 2: Reference tables
-
-A reference table is a type of distributed table whose entire
-contents are concentrated into a single shard. The shard is replicated on every worker. Queries on any worker can access the reference information locally, without the network overhead of requesting rows from another node. Reference tables have no distribution column
-because there's no need to distinguish separate shards per row.
-
-Reference tables are typically small and are used to store data that's
-relevant to queries running on any worker node. An example is enumerated
-values like order statuses or product categories.
-
-### Type 3: Local tables
-
-When you use Hyperscale (Citus), the coordinator node you connect to is a regular PostgreSQL database. You can create ordinary tables on the coordinator and choose not to shard them.
-
-A good candidate for local tables would be small administrative tables that don't participate in join queries. An example is a users table for application sign-in and authentication.
-
-## Shards
-
-The previous section described how distributed tables are stored as shards on
-worker nodes. This section discusses more technical details.
-
-The `pg_dist_shard` metadata table on the coordinator contains a
-row for each shard of each distributed table in the system. The row
-matches a shard ID with a range of integers in a hash space
-(shardminvalue, shardmaxvalue).
-
-```sql
-SELECT * from pg_dist_shard;
- logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue
----------------+---------+--------------+---------------+---------------
- github_events | 102026 | t | 268435456 | 402653183
- github_events | 102027 | t | 402653184 | 536870911
- github_events | 102028 | t | 536870912 | 671088639
- github_events | 102029 | t | 671088640 | 805306367
- (4 rows)
-```
-
-If the coordinator node wants to determine which shard holds a row of
-`github_events`, it hashes the value of the distribution column in the
-row. Then the node checks which shard's range contains the hashed value. The
-ranges are defined so that the image of the hash function is their
-disjoint union.
-
-### Shard placements
-
-Suppose that shard 102027 is associated with the row in question. The row
-is read or written in a table called `github_events_102027` in one of
-the workers. Which worker? That's determined entirely by the metadata
-tables. The mapping of shard to worker is known as the shard placement.
-
-The coordinator node
-rewrites queries into fragments that refer to the specific tables
-like `github_events_102027` and runs those fragments on the
-appropriate workers. Here's an example of a query run behind the scenes to find the node holding shard ID 102027.
-
-```sql
-SELECT
- shardid,
- node.nodename,
- node.nodeport
-FROM pg_dist_placement placement
-JOIN pg_dist_node node
- ON placement.groupid = node.groupid
- AND node.noderole = 'primary'::noderole
-WHERE shardid = 102027;
-```
-
-```output
-┌─────────┬───────────┬──────────┐
-│ shardid │ nodename  │ nodeport │
-├─────────┼───────────┼──────────┤
-│  102027 │ localhost │     5433 │
-└─────────┴───────────┴──────────┘
-```
-
-## Next steps
--- [Determine your application's type](howto-app-type.md) to prepare for data modeling
postgresql Concepts Performance Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-performance-tuning.md
- Title: Performance tuning - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Improving query performance in the distributed database
----- Previously updated : 08/30/2022--
-# Hyperscale (Citus) performance tuning
--
-A distributed database running at its full potential offers high performance.
-However, reaching that potential can require adjustments in application
-code and data modeling. This article covers some of the most common--and
-effective--techniques to improve performance.
-
-## Client-side connection pooling
-
-A connection pool holds open database connections for reuse. An application
-requests a connection from the pool when needed, and the pool returns one that
-is already established if possible, or establishes a new one. When done, the
-application releases the connection back to the pool rather than closing it.
-
-Adding a client-side connection pool is an easy way to boost application
-performance with minimal code changes. In our measurements, running single-row
-insert statements goes about **24x faster** on a Hyperscale (Citus) server
-group with pooling enabled.
-
-For language-specific examples of adding pooling in application code, see the
-[app stacks guide](quickstart-app-stacks-overview.md).
-
-> [!NOTE]
->
-> Hyperscale (Citus) also provides [server-side connection
-> pooling](concepts-connection-pool.md) using pgbouncer, but it mainly serves
-> to increase the client connection limit. An individual application's
-> performance benefits more from client- rather than server-side pooling.
-> (Although both forms of pooling can be used at once without harm.)
-
-## Scoping distributed queries
-
-### Updates
-
-When updating a distributed table, try to filter queries on the distribution
-column--at least when it makes sense and the new filters don't change the
-meaning of the query.
-
-In some workloads, it's easy. Transactional/operational workloads like
-multi-tenant SaaS apps or the Internet of Things distribute tables by tenant or
-device. Queries are scoped to a tenant- or device-ID.
-
-For instance, in our [multi-tenant
-tutorial](tutorial-design-database-multi-tenant.md#use-psql-utility-to-create-a-schema)
-we have an `ads` table distributed by `company_id`. The naive way to update an
-ad is to single it out like this:
-
-```sql
--- slow
-UPDATE ads
- SET impressions_count = impressions_count+1
- WHERE id = 42; -- missing filter on distribution column
-```
-
-Although the query uniquely identifies a row and updates it, Hyperscale (Citus)
-doesn't know, at planning time, which shard the query will update. Citus takes a
-ShareUpdateExclusiveLock on all shards to be safe, which blocks other queries
-trying to update the table.
-
-Even though the `id` was sufficient to identify a row, we can include an
-extra filter to make the query faster:
-
-```sql
--- fast
-UPDATE ads
- SET impressions_count = impressions_count+1
- WHERE id = 42
- AND company_id = 1; -- the distribution column
-```
-
-The Hyperscale (Citus) query planner sees a direct filter on the distribution
-column and knows exactly which single shard to lock. In our tests, adding
-filters for the distribution column increased parallel update performance by
-**100x**.
-
-### Joins and CTEs
-
-We've seen how UPDATE statements should scope by the distribution column to
-avoid unnecessary shard locks. Other queries benefit from scoping too, usually
-to avoid the network overhead of unnecessarily shuffling data between worker
-nodes.
-
-For example, the following query looks up a single ad together with its campaign:
-
-```sql
--- logically correct, but slow
-WITH single_ad AS (
- SELECT *
- FROM ads
- WHERE id=1
-)
-SELECT *
- FROM single_ad s
- JOIN campaigns c ON (s.campaign_id=c.id);
-```
-
-We can speed up the query by filtering on the distribution column,
-`company_id`, in the CTE and main SELECT statement.
-
-```sql
--- faster, joining on distribution column
-WITH single_ad AS (
- SELECT *
- FROM ads
- WHERE id=1 and company_id=1
-)
-SELECT *
- FROM single_ad s
- JOIN campaigns c ON (s.campaign_id=c.id)
- WHERE s.company_id=1 AND c.company_id = 1;
-```
-
-In general, when joining distributed tables, try to include the distribution
-column in the join conditions. However, when joining a distributed table with a
-reference table it's not required, because reference table contents are
-replicated across all worker nodes.
-
-If it seems inconvenient to add the extra filters to all your queries, keep in
-mind there are helper libraries for several popular application frameworks that
-make it easier. Here are instructions:
-
-* [Ruby on Rails](https://docs.citusdata.com/en/stable/develop/migration_mt_ror.html),
-* [Django](https://docs.citusdata.com/en/stable/develop/migration_mt_django.html),
-* [ASP.NET](https://docs.citusdata.com/en/stable/develop/migration_mt_asp.html),
-* [Java Hibernate](https://www.citusdata.com/blog/2018/02/13/using-hibernate-and-spring-to-build-multitenant-java-apps/).
-
-## Efficient database logging
-
-Logging all SQL statements all the time adds overhead. In our measurements,
-using a more judicious log level improved the transactions per second by
-**10x** vs full logging.
-
-For efficient everyday operation, you can disable logging except for errors and
-abnormally long-running queries:
-
-| setting | value | reason |
-||-|--|
-| log_statement_stats | OFF | Avoid profiling overhead |
-| log_duration | OFF | Don't need to know the duration of normal queries |
-| log_statement | NONE | Don't log queries without a more specific reason |
-| log_min_duration_statement | A value longer than what you think normal queries should take | Shows the abnormally long queries |
-
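-On a self-managed PostgreSQL instance, the settings above could be applied with a
-sketch like the following; on the managed service, adjust the equivalent server
-parameters in the Azure portal instead. The 5-second threshold is only an
-illustration:
-
-```sql
-ALTER SYSTEM SET log_statement_stats = off;
-ALTER SYSTEM SET log_duration = off;
-ALTER SYSTEM SET log_statement = 'none';
-ALTER SYSTEM SET log_min_duration_statement = '5s';
--- apply the changes without a restart
-SELECT pg_reload_conf();
-```
-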
-> [!NOTE]
->
-> The log-related settings in our managed service take the above
-> recommendations into account. You can leave them as they are. However, we've
-> sometimes seen customers change the settings to make logging aggressive,
-> which has led to performance issues.
-
-## Lock contention
-
-The database uses locks to keep data consistent under concurrent access.
-However, some query patterns require an excessive amount of locking, and faster
-alternatives exist.
-
-### System health and locks
-
-Before diving into common locking inefficiencies, let's see how to view locks
-and activity throughout the database cluster. The
-[citus_stat_activity](reference-metadata.md#distributed-query-activity) view
-gives a detailed view.
-
-The view shows, among other things, how queries are blocked by "wait events,"
-including locks. Grouping by
-[wait_event_type](https://www.postgresql.org/docs/14/monitoring-stats.html#WAIT-EVENT-TABLE)
-paints a picture of system health:
-
-```sql
--- general system health
-SELECT wait_event_type, count(*)
- FROM citus_stat_activity
- WHERE state != 'idle'
- GROUP BY 1
- ORDER BY 2 DESC;
-```
-
-A NULL `wait_event_type` means the query isn't waiting on anything.
-
-If you do see locks in the stat activity output, you can view the specific
-blocked queries using `citus_lock_waits`:
-
-```sql
-SELECT * FROM citus_lock_waits;
-```
-
-For example, if one query is blocked on another trying to update the same row,
-you'll see the blocked and blocking statements appear:
-
-```
--[ RECORD 1 ]-------------------------+---------------------------------------
-waiting_gpid | 10000011981
-blocking_gpid | 10000011979
-blocked_statement | UPDATE numbers SET j = 3 WHERE i = 1;
-current_statement_in_blocking_process | UPDATE numbers SET j = 2 WHERE i = 1;
-waiting_nodeid | 1
-blocking_nodeid | 1
-```
-
-To see not only the locks happening at the moment, but historical patterns, you
-can capture locks in the PostgreSQL logs. To learn more, see the
-[log_lock_waits](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-LOCK-WAITS)
-server setting in the PostgreSQL documentation. Another great resource is
-[seven tips for dealing with
-locks](https://www.citusdata.com/blog/2018/02/22/seven-tips-for-dealing-with-postgres-locks/)
-on the Citus Data Blog.
-
-### Common problems and solutions
-
-#### DDL commands
-
-DDL commands like `truncate`, `drop`, and `create index` all take write locks,
-and block writes on the entire table. Minimizing such operations reduces
-locking issues.
-
-Tips:
-
-* Try to consolidate DDL into maintenance windows, or use them less often.
-
-* PostgreSQL supports [building indices
-  concurrently](https://www.postgresql.org/docs/current/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY),
- to avoid taking a write lock on the table.
-
-* Consider setting
- [lock_timeout](https://www.postgresql.org/docs/14/runtime-config-client.html#GUC-LOCK-TIMEOUT)
- in a SQL session prior to running a heavy DDL command. With `lock_timeout`,
- PostgreSQL will abort the DDL command if the command waits too long for a write
- lock. A DDL command waiting for a lock can cause later queries to queue behind
-  itself. A brief hedged sketch of this pattern follows this list.
-
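-For instance, a session might set the timeout before a concurrent index build;
-the index name and the 2-second value are illustrative assumptions:
-
-```sql
--- give up on the DDL if its lock isn't granted within 2 seconds
-SET lock_timeout = '2s';
-CREATE INDEX CONCURRENTLY ads_company_id_idx ON ads (company_id);
-```
-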
-#### Idle in transaction connections
-
-Idle (uncommitted) transactions sometimes block other queries unnecessarily.
-For example:
-
-```sql
-BEGIN;
-
-UPDATE ... ;
--- Suppose the client waits now and doesn't COMMIT right away.
--- Other queries that want to update the same rows will be blocked.
-
-COMMIT; -- finally!
-```
-
-To manually clean up any long-idle queries on the coordinator node, you can run
-a command like this:
-
-```sql
-SELECT pg_terminate_backend(pid)
-FROM pg_stat_activity
-WHERE datname = 'citus'
- AND pid <> pg_backend_pid()
- AND state in ('idle in transaction')
- AND state_change < current_timestamp - INTERVAL '15' MINUTE;
-```
-
-PostgreSQL also offers an
-[idle_in_transaction_session_timeout](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-IDLE-IN-TRANSACTION-SESSION-TIMEOUT)
-setting to automate idle session termination.
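-
-For example, the setting could be applied to the `citus` database used above; the
-15-minute value mirrors the manual query and is only an illustration:
-
-```sql
-ALTER DATABASE citus SET idle_in_transaction_session_timeout = '15min';
-```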
-
-#### Deadlocks
-
-Citus detects distributed deadlocks and cancels their queries, but the
-situation is less performant than avoiding deadlocks in the first place. A
-common source of deadlocks comes from updating the same set of rows in a
-different order from multiple transactions at once.
-
-For instance, running these transactions in parallel:
-
-Session A:
-
-```sql
-BEGIN;
-UPDATE ads SET updated_at = now() WHERE id = 1 AND company_id = 1;
-UPDATE ads SET updated_at = now() WHERE id = 2 AND company_id = 1;
-```
-
-Session B:
-
-```sql
-BEGIN;
-UPDATE ads SET updated_at = now() WHERE id = 2 AND company_id = 1;
-UPDATE ads SET updated_at = now() WHERE id = 1 AND company_id = 1;
--- ERROR: canceling the transaction since it was involved in a distributed deadlock
-```
-
-Session A updated ID 1 then 2, whereas session B updated 2 then 1. Write
-SQL code for transactions carefully to update rows in the same order. (The
-update order is sometimes called a "locking hierarchy.")
-
-In our measurements, bulk updating a set of rows with many transactions went
-**3x faster** when deadlocks were avoided.
-
-## I/O during ingestion
-
-I/O bottlenecking is typically less of a problem for Hyperscale (Citus) than
-for single-node PostgreSQL because of sharding. The shards are individually
-smaller tables, with better index and cache hit rates, yielding better
-performance.
-
-However, even with Hyperscale (Citus), as tables and indices grow larger, disk
-I/O can become a problem for data ingestion. Things to look out for are an
-increasing number of 'IO' `wait_event_type` entries appearing in
-`citus_stat_activity`:
-
-```sql
-SELECT wait_event_type, wait_event, count(*)
- FROM citus_stat_activity
- WHERE state='active'
- GROUP BY 1,2;
-```
-
-Run the above query repeatedly to capture wait event related information. Note
-how the counts of different wait event types change.
-
-Also look at [metrics in the Azure portal](concepts-monitoring.md),
-particularly the IOPS metric maxing out.
-
-Tips:
--- If your data is naturally ordered, such as in a time series, use PostgreSQL
- table partitioning. See [this
- guide](https://docs.citusdata.com/en/stable/use_cases/timeseries.html) to learn
- how to partition distributed tables in Hyperscale (Citus).
--- Remove unused indices. Index maintenance causes I/O amplification during
- ingestion. To find which indices are unused, use [this
- query](howto-useful-diagnostic-queries.md#identifying-unused-indices).
--- If possible, avoid indexing randomized data. For instance, some UUID
-  generation algorithms follow no order. Indexing such a value causes a lot of
-  overhead. Try a bigint sequence instead, or monotonically increasing UUIDs.
-
-## Summary of results
-
-In benchmarks of simple ingestion with INSERTs, UPDATEs, and transaction blocks, we
-observed the following query speedups for the techniques in this article.
-
-| Technique | Query speedup |
-|--||
-| Scoping queries | 100x |
-| Connection pooling | 24x |
-| Efficient logging | 10x |
-| Avoiding deadlock | 3x |
-
-## Next steps
-
-* [Advanced query performance tuning](https://docs.citusdata.com/en/stable/performance/performance_tuning.html)
-* [Useful diagnostic queries](howto-useful-diagnostic-queries.md)
-* Build fast [app stacks](quickstart-app-stacks-overview.md)
postgresql Concepts Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-private-access.md
- Title: Private access - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: This article describes private access for Azure Database for PostgreSQL - Hyperscale (Citus).
----- Previously updated : 10/15/2021--
-# Private access in Azure Database for PostgreSQL - Hyperscale (Citus)
---
-This page describes the private access option. For public access, see
-[here](concepts-firewall-rules.md).
-
-## Definitions
-
-**Virtual network**. An Azure Virtual Network (VNet) is the fundamental
-building block for private networking in Azure. Virtual networks enable many
-types of Azure resources, such as database servers and Azure Virtual Machines
-(VMs), to securely communicate with each other. Virtual networks support on-premises
-connections, allow hosts in multiple virtual networks to interact with each
-other through peering, and provide added benefits of scale, security options,
-and isolation. Each private endpoint for a Hyperscale (Citus) server group
-requires an associated virtual network.
-
-**Subnet**. Subnets segment a virtual network into one or more subnetworks.
-Each subnetwork gets a portion of the address space, improving address
-allocation efficiency. You can secure resources within subnets using Network
-Security Groups. For more information, see Network security groups.
-
-When you select a subnet for a Hyperscale (Citus) server group's private endpoint, make sure
-enough private IP addresses are available in that subnet for your current and
-future needs.
-
-**Private endpoint**. A private endpoint is a network interface that uses a
-private IP address from a virtual network. This network interface connects
-privately and securely to a service powered by Azure Private Link. Private
-endpoints bring the services into your virtual network.
-
-Enabling private access for Hyperscale (Citus) creates a private endpoint for
-the server group's coordinator node. The endpoint allows hosts in the selected
-virtual network to access the coordinator. You can optionally create private
-endpoints for worker nodes too.
-
-**Private DNS zone**. An Azure private DNS zone resolves hostnames within a
-linked virtual network, and within any peered virtual network. Domain records
-for Hyperscale (Citus) nodes are created in a private DNS zone selected for
-their server group. Be sure to use fully qualified domain names (FQDN) for
-nodes' PostgreSQL connection strings.
-
-## Private link
-
-You can use [private endpoints](../../private-link/private-endpoint-overview.md)
-for your Hyperscale (Citus) server groups to allow hosts on a virtual network
-(VNet) to securely access data over a [Private
-Link](../../private-link/private-link-overview.md).
-
-The server group's private endpoint uses an IP address from the virtual
-network's address space. Traffic between hosts on the virtual network and
-Hyperscale (Citus) nodes goes over a private link on the Microsoft backbone
-network, eliminating exposure to the public Internet.
-
-Applications in the virtual network can connect to the Hyperscale (Citus) nodes
-over the private endpoint seamlessly, using the same connection strings and
-authorization mechanisms that they would use otherwise.
-
-You can select private access during Hyperscale (Citus) server group creation,
-and you can switch from public access to private access at any point.
-
-### Using a private DNS zone
-
-A new private DNS zone is automatically provisioned for each private endpoint,
-unless you select one of the private DNS zones previously created by Hyperscale
-(Citus). For more information, see the [private DNS zones
-overview](../../dns/private-dns-overview.md).
-
-Hyperscale (Citus) service creates DNS records such as
-`c.privatelink.mygroup01.postgres.database.azure.com` in the selected private
-DNS zone for each node with a private endpoint. When you connect to a
-Hyperscale (Citus) node from an Azure VM via private endpoint, Azure DNS
-resolves the node's FQDN into a private IP address.
-
-Private DNS zone settings and virtual network peering are independent of each
-other. If you want to connect to a node in the server group from a client
-that's provisioned in another virtual network (from the same region or a
-different region), you have to link the private DNS zone with the virtual
-network. For more information, see [Link the virtual
-network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network).
-
-> [!NOTE]
->
-> The service also always creates public CNAME records such as
-> `c.mygroup01.postgres.database.azure.com` for every node. However, selected
-> computers on the public internet can connect to the public hostname only if
-> the database administrator enables [public
-> access](concepts-firewall-rules.md) to the server group.
-
-If you're using a custom DNS server, you must use a DNS forwarder to resolve
-the FQDN of Hyperscale (Citus) nodes. The forwarder IP address should be
-168.63.129.16. The custom DNS server should be inside the virtual network or
-reachable via the virtual network's DNS server setting. To learn more, see
-[Name resolution that uses your own DNS
-server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
-
-### Recommendations
-
-When you enable private access for your Hyperscale (Citus) server group,
-consider:
-
-* **Subnet size**: When selecting subnet size for Hyperscale (Citus) server
- group, consider current needs such as IP addresses for coordinator or all
- nodes in that server group, and future needs such as growth of that server
- group. Make sure you have enough private IP addresses for the current and
- future needs. Keep in mind, Azure reserves five IP addresses in each subnet.
- See more details [in this
- FAQ](../../virtual-network/virtual-networks-faq.md#configuration).
-* **Private DNS zone**: DNS records with private IP addresses are
-  maintained by the Hyperscale (Citus) service. Make sure you don't delete the private
-  DNS zone used for Hyperscale (Citus) server groups.
-
-## Limits and limitations
-
-See the Hyperscale (Citus) [limits and limitations](reference-limits.md)
-page.
-
-## Next steps
-
-* Learn how to [enable and manage private access](howto-private-access.md)
-* Follow a [tutorial](tutorial-private-access.md) to see private access in
- action.
-* Learn about [private
- endpoints](../../private-link/private-endpoint-overview.md)
-* Learn about [virtual
- networks](../../virtual-network/concepts-and-best-practices.md)
-* Learn about [private DNS zones](../../dns/private-dns-overview.md)
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-read-replicas.md
- Title: Read replicas - Azure Database for PostgreSQL - Hyperscale (Citus)
-description: This article describes the read replica feature in Azure Database for PostgreSQL - Hyperscale (Citus).
----- Previously updated : 09/27/2022--
-# Read replicas in Azure Database for PostgreSQL - Hyperscale (Citus)
--
-The read replica feature allows you to replicate data from a Hyperscale (Citus)
-server group to a read-only server group. Replicas are updated
-**asynchronously** with PostgreSQL physical replication technology. You can
-run up to five replicas from the primary server.
-
-Replicas are new server groups that you manage similar to regular Hyperscale
-(Citus) server groups. For each read replica, you're billed for the provisioned
-compute in vCores and storage in GiB/month. Compute and storage costs for
-replica server groups are the same as for regular server groups.
-
-Learn how to [create and manage replicas](howto-read-replicas-portal.md).
-
-## When to use a read replica
-
-The read replica feature helps to improve the performance and scale of
-read-intensive workloads. Read workloads can be isolated to the replicas, while
-write workloads can be directed to the primary.
-
-A common scenario is to have BI and analytical workloads use the read replica
-as the data source for reporting.
-
-Because replicas are read-only, they don't directly reduce write-capacity
-burdens on the primary.
-
-### Considerations
-
-The feature is meant for scenarios where replication lag is acceptable, and is
-meant for offloading queries. It isn't meant for synchronous replication
-scenarios where replica data is expected to be up to date. There will be a
-measurable delay between the primary and the replica. The delay can be minutes
-or even hours, depending on the workload and the latency between primary and
-replica. The data on the replica eventually becomes consistent with the
-data on the primary. Use this feature for workloads that can accommodate this
-delay.
-
-## Create a replica
-
-When you start the create replica workflow, a blank Hyperscale (Citus) server
-group is created. The new group is filled with the data that was on the primary
-server group. The creation time depends on the amount of data on the primary
-and the time since the last weekly full backup. The time can range from a few
-minutes to several hours.
-
-The read replica feature uses PostgreSQL physical replication, not logical
-replication. The default mode is streaming replication using replication slots.
-When necessary, log shipping is used to catch up.
-
-Learn how to [create a read replica in the Azure
-portal](howto-read-replicas-portal.md).
-
-## Connect to a replica
-
-When you create a replica, it doesn't inherit firewall rules from the primary
-server group. These rules must be set up independently for the replica.
-
-The replica inherits the admin (`citus`) account from the primary server group.
-All user accounts are replicated to the read replicas. You can only connect to
-a read replica by using the user accounts that are available on the primary
-server.
-
-You can connect to the replica's coordinator node by using its hostname and a
-valid user account, as you would on a regular Hyperscale (Citus) server group.
-For instance, given a server group named **myreplica** with the admin username
-**citus**, you can connect to the coordinator node of the replica by using
-psql:
-
-```bash
-psql -h c.myreplica.postgres.database.azure.com -U citus@myreplica -d postgres
-```
-
-At the prompt, enter the password for the user account.
-
-## Replica promotion to independent server group
-
-You can promote a replica to an independent server group that is readable and
-writable. A promoted replica no longer receives updates from its original, and
-promotion can't be undone. Promoted replicas can have replicas of their own.
-
-There are two common scenarios for promoting a replica:
-
-1. **Disaster recovery.** If something goes wrong with the primary, or with an
- entire region, you can open another server group for writes as an emergency
- procedure.
-2. **Migrating to another region.** If you want to move to another region,
- create a replica in the new region, wait for data to catch up, then promote
- the replica. To avoid potentially losing data during promotion, you may want
- to disable writes to the original server group after the replica catches up.
-
- You can see how far a replica has caught up using the `replication_lag`
- metric. See [metrics](concepts-monitoring.md#metrics) for more information.
-
-## Considerations
-
-This section summarizes considerations about the read replica feature.
-
-### New replicas
-
-A read replica is created as a new Hyperscale (Citus) server group. An existing
-server group can't be made into a replica. You can't create a replica of
-another read replica.
-
-### Replica configuration
-
-Replicas inherit compute, storage, and worker node settings from their
-primaries. You can change some, but not all, settings on a replica. For
-instance, you can change compute, firewall rules for public access, and private
-endpoints for private access. You can't change the storage size or number of
-worker nodes.
-
-Remember to keep replicas powerful enough to keep up with changes arriving
-from the primary. For instance, be sure to scale up compute on the replicas if
-you scale it up on the primary.
-
-Firewall rules and parameter settings aren't inherited from the primary server
-group to the replica, either when the replica is created or afterward.
-
-### Cross-region replication
-
-Read replicas can be created in the region of the primary server group, or in
-any other [region supported by Hyperscale (Citus)](resources-regions.md). The
-limit of five replicas per server group counts across all regions, meaning five
-total, not five per region.
-
-## Next steps
-
-* Learn how to [create and manage read replicas in the Azure
- portal](howto-read-replicas-portal.md).
postgresql Concepts Row Level Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-row-level-security.md
- Title: Row level security – Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Multi-tenant security through database roles
----- Previously updated : 06/30/2022--
-# Row-level security in Hyperscale (Citus)
--
-PostgreSQL [row-level security
-policies](https://www.postgresql.org/docs/current/ddl-rowsecurity.html)
-restrict which users can modify or access which table rows. Row-level security
-can be especially useful in a multi-tenant Hyperscale (Citus) server group. It
-allows individual tenants to have full SQL access to the database while hiding
-each tenant's information from other tenants.
-
-## Implementing for multi-tenant apps
-
-We can implement the separation of tenant data by using a naming convention for
-database roles that ties into table row-level security policies. We'll assign
-each tenant a database role in a numbered sequence: `tenant1`, `tenant2`,
-etc. Tenants will connect to Citus using these separate roles. Row-level
-security policies can compare the role name to values in the `tenant_id`
-distribution column to decide whether to allow access.
-
-Here's how to apply the approach on a simplified events table distributed by
-`tenant_id`. First [create the roles](howto-create-users.md) `tenant1` and
-`tenant2`. Then run the following SQL commands as the `citus` administrator
-user:
-
-```postgresql
-CREATE TABLE events(
- tenant_id int,
- id int,
- type text
-);
-
-SELECT create_distributed_table('events','tenant_id');
-
-INSERT INTO events VALUES (1,1,'foo'), (2,2,'bar');
--- assumes that roles tenant1 and tenant2 exist
-GRANT select, update, insert, delete
- ON events TO tenant1, tenant2;
-```
-
-As it stands, anyone with select permissions for this table can see both rows.
-Users from either tenant can see and update the row of the other tenant. We can
-solve the data leak with row-level table security policies.
-
-Each policy consists of two clauses: USING and WITH CHECK. When a user tries to
-read or write rows, the database evaluates each row against these clauses.
-PostgreSQL checks existing table rows against the expression specified in the
-USING clause, and rows that would be created via INSERT or UPDATE against the
-WITH CHECK clause.
-
-```postgresql
--- first a policy for the system admin "citus" user
-CREATE POLICY admin_all ON events
- TO citus -- apply to this role
- USING (true) -- read any existing row
- WITH CHECK (true); -- insert or update any row
--- next a policy which allows role "tenant<n>" to access rows where tenant_id = <n>
-CREATE POLICY user_mod ON events
- USING (current_user = 'tenant' || tenant_id::text);
- -- lack of CHECK means same condition as USING
--- enforce the policies
-ALTER TABLE events ENABLE ROW LEVEL SECURITY;
-```
-
-Now roles `tenant1` and `tenant2` get different results for their queries:
-
-**Connected as tenant1:**
-
-```sql
-SELECT * FROM events;
-```
-```
-┌───────────┬────┬──────┐
-│ tenant_id │ id │ type │
-├───────────┼────┼──────┤
-│         1 │  1 │ foo  │
-└───────────┴────┴──────┘
-```
-
-**Connected as tenant2:**
-
-```sql
-SELECT * FROM events;
-```
-```
-┌───────────┬────┬──────┐
-│ tenant_id │ id │ type │
-├───────────┼────┼──────┤
-│         2 │  2 │ bar  │
-└───────────┴────┴──────┘
-```
-```sql
-INSERT INTO events VALUES (3,3,'surprise');
-/*
-ERROR: new row violates row-level security policy for table "events_102055"
-*/
-```
-
-## Next steps
-
-Learn how to [create roles](howto-create-users.md) in a Hyperscale (Citus)
-server group.
postgresql Concepts Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-security-overview.md
- Title: Security overview - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Information protection and network security for Azure Database for PostgreSQL - Hyperscale (Citus).
----- Previously updated : 01/14/2022--
-# Security in Azure Database for PostgreSQL – Hyperscale (Citus)
--
-This page outlines the multiple layers of security available to protect the data in your
-Hyperscale (Citus) server group.
-
-## Information protection and encryption
-
-### In transit
-
-Whenever data is ingested into a node, Hyperscale (Citus) secures your data by
-encrypting it in-transit with Transport Layer Security 1.2. Encryption
-(SSL/TLS) is always enforced, and can't be disabled.
-
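As a quick sanity check on an existing session, you can query the standard PostgreSQL `pg_stat_ssl` view to confirm that your connection is encrypted. This is only a verification step, not a configuration step:

```sql
-- confirm that the current connection is encrypted with TLS
SELECT ssl, version, cipher
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();
```
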
-### At rest
-
-The Hyperscale (Citus) service uses the FIPS 140-2 validated cryptographic
-module for storage encryption of data at rest. Data, including backups, is
-encrypted on disk, including the temporary files created while running queries.
-The service uses the AES 256-bit cipher included in Azure storage encryption,
-and the keys are system-managed. Storage encryption is always on, and can't be
-disabled.
-
-## Network security
--
-## Limits and limitations
-
-See Hyperscale (Citus) [limits and limitations](reference-limits.md)
-page.
-
-## Next steps
-
-* Learn how to [enable and manage private access](howto-private-access.md)
-* Learn about [private
- endpoints](../../private-link/private-endpoint-overview.md)
-* Learn about [virtual
- networks](../../virtual-network/concepts-and-best-practices.md)
-* Learn about [private DNS zones](../../dns/private-dns-overview.md)
postgresql Concepts Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-server-group.md
- Title: Server group - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: What is a server group in Azure Database for PostgreSQL - Hyperscale (Citus)
----- Previously updated : 07/15/2022--
-# Hyperscale (Citus) server group
--
-## Nodes
-
-The Azure Database for PostgreSQL - Hyperscale (Citus) deployment option allows
-PostgreSQL servers (called nodes) to coordinate with one another in a "server
-group." The server group's nodes collectively hold more data and use more CPU
-cores than would be possible on a single server. The architecture also allows
-the database to scale by adding more nodes to the server group.
-
-To learn more about the types of Hyperscale (Citus) nodes, see [nodes and
-tables](concepts-nodes.md).
-
-### Node status
-
-Hyperscale (Citus) displays the status of nodes in a server group on the
-Overview page in the Azure portal. Each node can have one of these status
-values:
-
-* **Provisioning**: Initial node provisioning, either as a part of its server
- group provisioning, or when a worker node is added.
-* **Available**: Node is in a healthy state.
-* **Need attention**: An issue is detected on the node. The node is attempting
- to self-heal. If self-healing fails, an issue gets put in the queue for our
- engineers to investigate.
-* **Dropping**: Server group deletion started.
-* **Disabled**: The server group's Azure subscription is in the Disabled
-  state. For more information about subscription states, see [this
- page](../../cost-management-billing/manage/subscription-states.md).
-
-### Node availability zone
-
-Hyperscale (Citus) displays the [availability
-zone](../../availability-zones/az-overview.md#availability-zones) of each node
-in a server group on the Overview page in the Azure portal. The **Availability
-zone** column contains either the name of the zone, or `--` if the node isn't
-assigned to a zone. (Only [certain
-regions](https://azure.microsoft.com/global-infrastructure/geographies/#geographies)
-support availability zones.)
-
-If high availability is enabled for the server group, and a node [fails
-over](concepts-high-availability.md) to a standby, you may see that its
-availability zone differs from that of the other nodes. In this case, the
-nodes will be moved back
-into the same availability zone together during the next [maintenance
-window](concepts-maintenance.md).
-
-## Tiers
-
-The basic tier in Azure Database for PostgreSQL - Hyperscale (Citus) is a
-simple way to create a small server group that you can scale later. While
-server groups in the standard tier have a coordinator node and at least two
-worker nodes, the basic tier runs everything in a single database node.
-
-Other than using fewer nodes, the basic tier has all the features of the
-standard tier. Like the standard tier, it supports high availability, read
-replicas, and columnar table storage, among other features.
-
-### Choosing basic vs standard tier
-
-The basic tier can be an economical and convenient deployment option for
-initial development, testing, and continuous integration. It uses a single
-database node and presents the same SQL API as the standard tier. You can test
-applications with the basic tier and later [graduate to the standard
-tier](howto-scale-grow.md#add-worker-nodes) with confidence that the
-interface remains the same.
-
-The basic tier is also appropriate for smaller workloads in production. There's
-room to scale vertically *within* the basic tier by increasing the number of
-server vCores.
-
-When greater scale is required right away, use the standard tier. Its smallest
-allowed server group has one coordinator node and two workers. You can choose
-to use more nodes based on your use-case, as described in our [initial
-sizing](howto-scale-initial.md) how-to.
-
-#### Tier summary
-
-**Basic tier**
-
-* 2 to 8 vCores, 8 to 32 gigabytes of memory.
-* Consists of a single database node, which can be scaled vertically.
-* Supports sharding on a single node and can be easily upgraded to a standard tier.
-* Economical deployment option for initial development, testing.
-
-**Standard tier**
-
-* 8 to 1000+ vCores, up to 8+ TiB of memory.
-* Distributed Postgres cluster, which consists of a dedicated coordinator
- node and at least two worker nodes.
-* Supports sharding on multiple worker nodes. The cluster can be scaled
- horizontally by adding new worker nodes, and scaled vertically by
- increasing the node vCores.
-* Best for performance and scale.
-
-## Next steps
-
-* Learn to [provision the basic tier](quickstart-create-basic-tier.md)
-* When you're ready, see [how to graduate](howto-scale-grow.md#add-worker-nodes) from the basic tier to the standard tier
-* The [columnar storage](concepts-columnar.md) option is available in both the basic and standard tier
postgresql Concepts Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-upgrade.md
- Title: Server group upgrades - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Types of upgrades, and their precautions
----- Previously updated : 08/29/2022--
-# Hyperscale (Citus) server group upgrades
--
-The Hyperscale (Citus) managed service can handle upgrades of both the
-PostgreSQL server, and the Citus extension. You can choose these versions
-mostly independently of one another, except Citus 11 requires PostgreSQL 13 or
-higher.
-
-## Upgrade precautions
-
-Upgrades require some downtime in the database cluster. The exact time depends
-on the source and destination versions of the upgrade. To prepare for the
-production cluster upgrade, we recommend [testing the
-upgrade](howto-upgrade.md#test-the-upgrade-first) and measuring downtime
-during the test.
-
-Also, upgrading a major version of Citus can introduce changes in behavior.
-It's best to familiarize yourself with new product features and changes to
-avoid surprises.
-
-Noteworthy Citus 11 changes:
-
-* Table shards may disappear in your SQL client. Their visibility
-  is now controlled by
-  [citus.show_shards_for_app_name_prefixes](reference-parameters.md#citusshow_shards_for_app_name_prefixes-text)
-  (see the sketch after this list).
-* There are several [deprecated
- features](https://www.citusdata.com/updates/v11-0/#deprecated-features).
-
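For example, a session can opt back in to seeing shard tables by listing its client's `application_name` prefix in the parameter. The following is a sketch; the `psql` prefix is only an example value:

```sql
-- make shard tables visible to clients whose application_name starts with "psql"
SET citus.show_shards_for_app_name_prefixes TO 'psql';
```
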
-## Next steps
-
-> [!div class="nextstepaction"]
-> [How to perform upgrades](howto-upgrade.md)
postgresql Howto Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-alert-on-metric.md
- Title: Configure alerts - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: This article describes how to configure and access metric alerts for Azure Database for PostgreSQL - Hyperscale (Citus)
----- Previously updated : 3/16/2020--
-# Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Hyperscale (Citus)
--
-This article shows you how to set up Azure Database for PostgreSQL alerts using the Azure portal. You can receive an alert based on [monitoring metrics](concepts-monitoring.md) for your Azure services.
-
-We'll set up an alert to trigger when the value of a specified metric crosses a threshold. The alert triggers when the condition is first met, and continues to trigger afterwards.
-
-You can configure an alert to do the following actions when it triggers:
-* Send email notifications to the service administrator and coadministrators.
-* Send email to additional emails that you specify.
-* Call a webhook.
-
-You can configure and get information about alert rules using:
-* [Azure portal](../../azure-monitor/alerts/alerts-metric.md#create-with-azure-portal)
-* [Azure CLI](../../azure-monitor/alerts/alerts-metric.md#with-azure-cli)
-* [Azure Monitor REST API](/rest/api/monitor/metricalerts)
-
-## Create an alert rule on a metric from the Azure portal
-1. In the [Azure portal](https://portal.azure.com/), select the Azure Database for PostgreSQL server you want to monitor.
-
-2. Under the **Monitoring** section of the sidebar, select **Alerts** as shown:
-
- :::image type="content" source="../media/howto-hyperscale-alert-on-metric/2-alert-rules.png" alt-text="Select Alert Rules":::
-
-3. Select **New alert rule** (+ icon).
-
-4. The **Create rule** page opens as shown below. Fill in the required information:
-
- :::image type="content" source="../media/howto-hyperscale-alert-on-metric/4-add-rule-form.png" alt-text="Add metric alert form":::
-
-5. Within the **Condition** section, select **Add**.
-
-6. Select a metric from the list of signals to be alerted on. In this example, select "Storage percent".
-
- :::image type="content" source="../media/howto-hyperscale-alert-on-metric/6-configure-signal-logic.png" alt-text="Screenshot shows the Configure signal logic page where you can view several signals.":::
-
-7. Configure the alert logic:
-
- * **Operator** (ex. "Greater than")
- * **Threshold value** (ex. 85 percent)
-   * **Aggregation granularity**: the amount of time the metric rule must be satisfied before the alert triggers (ex. "Over the last 30 minutes")
-   * **Frequency of evaluation** (ex. "1 minute")
-
- Select **Done** when complete.
-
- :::image type="content" source="../media/howto-hyperscale-alert-on-metric/7-set-threshold-time.png" alt-text="Screenshot shows the pane where you can configure Alert logic.":::
-
-8. Within the **Action Groups** section, select **Create New** to create a new group to receive notifications on the alert.
-
-9. Fill out the "Add action group" form with a name, short name, subscription, and resource group.
-
- :::image type="content" source="../media/howto-hyperscale-alert-on-metric/9-add-action-group.png" alt-text="Screenshot shows the Add action group form where you can enter the described values.":::
-
-10. Configure an **Email/SMS/Push/Voice** action type.
-
- Choose "Email Azure Resource Manager Role" to send notifications to subscription owners, contributors, and readers.
-
- Select **OK** when completed.
-
- :::image type="content" source="../media/howto-hyperscale-alert-on-metric/10-action-group-type.png" alt-text="Screenshot shows the Email/S M S/Push/Voice pane.":::
-
-11. Specify an Alert rule name, Description, and Severity.
-
- :::image type="content" source="../media/howto-hyperscale-alert-on-metric/11-name-description-severity.png" alt-text="Screenshot shows the Alert Details pane.":::
-
-12. Select **Create alert rule** to create the alert.
-
- Within a few minutes, the alert is active and triggers as previously described.
-
-### Managing alerts
-
-Once you've created an alert, you can select it and do the following actions:
-
-* View a graph showing the metric threshold and the actual values from the previous day relevant to this alert.
-* **Edit** or **Delete** the alert rule.
-* **Disable** or **Enable** the alert, if you want to temporarily stop or resume receiving notifications.
-
-## Suggested alerts
-
-### Disk space
-
-Monitoring and alerting is important for every production Hyperscale (Citus) server group. The underlying PostgreSQL database requires free disk space to operate correctly. If the disk becomes full, the database server node will go offline and refuse to start until space is available. At that point, it requires a Microsoft support request to fix the situation.
-
-We recommend setting disk space alerts on every node in every server group, even for non-production usage. Disk space usage alerts provide the advance warning needed to intervene and keep nodes healthy. For best results, try a series of alerts at 75%, 85%, and 95% usage. The percentages to choose depend on data ingestion speed, since fast data ingestion fills up the disk faster.
-
-As the disk approaches its space limit, try these techniques to get more free space:
-
-* Review data retention policy. Move older data to cold storage if feasible.
-* Consider [adding nodes](howto-scale-grow.md#add-worker-nodes) to the server group and rebalancing shards. Rebalancing distributes the data across more computers (see the sketch after this list).
-* Consider [growing the capacity](howto-scale-grow.md#increase-or-decrease-vcores-on-nodes) of worker nodes. Each worker can have up to 2 TiB of storage. However, adding nodes should be attempted before resizing nodes, because adding nodes completes faster.
-
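The following sketch, which assumes a distributed table named `events`, shows one way to check where disk space is going and to spread shards onto newly added worker nodes:

```sql
-- total size of a distributed table across all worker nodes
SELECT pg_size_pretty(citus_total_relation_size('events'));

-- after adding worker nodes, move some existing shards onto them
SELECT rebalance_table_shards();
```
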
-### CPU usage
-
-Monitoring CPU usage is useful to establish a baseline for performance. For example, you may notice that CPU usage is usually around 40-60%. If CPU usage suddenly begins hovering around 95%, you can recognize an anomaly. The CPU usage may reflect organic growth, but it may also reveal a stray query. When creating a CPU alert, set a long aggregation granularity to catch prolonged increases and ignore momentary spikes.
-
-## Next steps
-* Learn more about [configuring webhooks in alerts](../../azure-monitor/alerts/alerts-webhooks.md).
-* Get an [overview of metrics collection](../../azure-monitor/data-platform.md) to make sure your service is available and responsive.
postgresql Howto App Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-app-type.md
- Title: Determine application type - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Identify your application for effective distributed data modeling
----- Previously updated : 07/17/2020--
-# Determining Application Type
--
-Running efficient queries on a Hyperscale (Citus) server group requires that
-tables be properly distributed across servers. The recommended distribution
-varies by the type of application and its query patterns.
-
-There are broadly two kinds of applications that work well on Hyperscale
-(Citus). The first step in data modeling is to identify which of them more
-closely resembles your application.
-
-## At a Glance
-
-| Multi-Tenant Applications | Real-Time Applications |
-|--|-|
-| Sometimes dozens or hundreds of tables in schema | Small number of tables |
-| Queries relating to one tenant (company/store) at a time | Relatively simple analytics queries with aggregations |
-| OLTP workloads for serving web clients | High ingest volume of mostly immutable data |
-| OLAP workloads that serve per-tenant analytical queries | Often centering around large table of events |
-
-## Examples and Characteristics
-
-**Multi-Tenant Application**
-
-> These are typically SaaS applications that serve other companies,
-> accounts, or organizations. Most SaaS applications are inherently
-> relational. They have a natural dimension on which to distribute data
-> across nodes: just shard by tenant\_id.
->
-> Hyperscale (Citus) enables you to scale out your database to millions of
-> tenants without having to re-architect your application. You can keep the
-> relational semantics you need, like joins, foreign key constraints,
-> transactions, ACID, and consistency.
->
-> - **Examples**: Websites which host store-fronts for other
-> businesses, such as a digital marketing solution, or a sales
-> automation tool.
-> - **Characteristics**: Queries relating to a single tenant rather
-> than joining information across tenants. This includes OLTP
-> workloads for serving web clients, and OLAP workloads that serve
-> per-tenant analytical queries. Having dozens or hundreds of tables
-> in your database schema is also an indicator for the multi-tenant
-> data model.
->
-> Scaling a multi-tenant app with Hyperscale (Citus) also requires minimal
-> changes to application code. We have support for popular frameworks like Ruby
-> on Rails and Django.
-
-**Real-Time Analytics**
-
-> Applications needing massive parallelism, coordinating hundreds of cores for
-> fast results to numerical, statistical, or counting queries. By sharding and
-> parallelizing SQL queries across multiple nodes, Hyperscale (Citus) makes it
-> possible to perform real-time queries across billions of records in under a
-> second.
->
-> Tables in real-time analytics data models are typically distributed by
-> columns like user\_id, host\_id, or device\_id.
->
-> - **Examples**: Customer-facing analytics dashboards requiring
-> sub-second response times.
-> - **Characteristics**: Few tables, often centering around a big
-> table of device-, site- or user-events and requiring high ingest
-> volume of mostly immutable data. Relatively simple (but
-> computationally intensive) analytics queries involving several
-> aggregations and GROUP BYs.
-
-If your situation resembles either case above, then the next step is to decide
-how to shard your data in the server group. The database administrator's
-choice of distribution columns needs to match the access patterns of typical
-queries to ensure performance.
-
-## Next steps
-
-* [Choose a distribution
- column](howto-choose-distribution-column.md) for tables in your
- application to distribute data efficiently
postgresql Howto Choose Distribution Column https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-choose-distribution-column.md
- Title: Choose distribution columns – Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Learn how to choose distribution columns in common scenarios in Azure Database for PostgreSQL - Hyperscale (Citus).
----- Previously updated : 02/28/2022--
-# Choose distribution columns in Azure Database for PostgreSQL – Hyperscale (Citus)
--
-Choosing each table's distribution column is one of the most important modeling
-decisions you'll make. Azure Database for PostgreSQL – Hyperscale (Citus)
-stores rows in shards based on the value of the rows' distribution column.
-
-The correct choice groups related data together on the same physical nodes,
-which makes queries fast and adds support for all SQL features. An incorrect
-choice makes the system run slowly.
-
-## General tips
-
-Here are four criteria for choosing the ideal distribution column for your
-distributed tables.
-
-1. **Pick a column that is a central piece in the application workload.**
-
- You might think of this column as the "heart," "central piece," or "natural dimension"
- for partitioning data.
-
- Examples:
-
- * `device_id` in an IoT workload
- * `security_id` for a financial app that tracks securities
- * `user_id` in user analytics
- * `tenant_id` for a multi-tenant SaaS application
-
-2. **Pick a column with decent cardinality, and an even statistical
- distribution.**
-
-   The column should have many values, and distribute thoroughly and evenly
-   between all shards. (A quick way to check this is sketched after this list.)
-
- Examples:
-
- * Cardinality over 1000
- * Don't pick a column that has the same value on a large percentage of rows
- (data skew)
- * In a SaaS workload, having one tenant much bigger than the rest can cause
- data skew. For this situation, you can use [tenant
- isolation](reference-functions.md#isolate_tenant_to_new_shard) to create a
- dedicated shard to handle the tenant.
-
-3. **Pick a column that benefits your existing queries.**
-
- For a transactional or operational workload (where most queries take only a
- few milliseconds), pick a column that appears as a filter in `WHERE` clauses
- for at least 80% of queries. For instance, the `device_id` column in `SELECT *
- FROM events WHERE device_id=1`.
-
- For an analytical workload (where most queries take 1-2 seconds), pick a
- column that enables queries to be parallelized across worker nodes. For
- instance, a column frequently occurring in GROUP BY clauses, or queried over
- multiple values at once.
-
-4. **Pick a column that is present in the majority of large tables.**
-
- Tables over 50 GB should be distributed. Picking the same distribution column
- for all of them enables you to co-locate data for that column on worker nodes.
- Co-location makes it efficient to run JOINs and rollups, and enforce foreign
- keys.
-
- The other (smaller) tables can be local or reference tables. If the smaller
- table needs to JOIN with distributed tables, make it a reference table.
-
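As a quick check of the cardinality and skew criteria above, you can inspect a candidate column with plain SQL before distributing the table. The sketch below assumes an `events` table with a candidate column `device_id`:

```sql
-- how many distinct values does the candidate column have?
SELECT count(DISTINCT device_id) AS cardinality
FROM events;

-- does any single value account for a large share of rows (data skew)?
SELECT device_id, count(*) AS row_count
FROM events
GROUP BY device_id
ORDER BY row_count DESC
LIMIT 10;
```
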
-## Use-case examples
-
-We've seen general criteria for picking the distribution column. Now let's see
-how they apply to common use cases.
-
-### Multi-tenant apps
-
-The multi-tenant architecture uses a form of hierarchical database modeling to
-distribute queries across nodes in the server group. The top of the data
-hierarchy is known as the *tenant ID* and needs to be stored in a column on
-each table.
-
-Hyperscale (Citus) inspects queries to see which tenant ID they involve and
-finds the matching table shard. It routes the query to a single worker node
-that contains the shard. Running a query with all relevant data placed on the
-same node is called colocation.
-
-The following diagram illustrates colocation in the multi-tenant data model. It
-contains two tables, Accounts and Campaigns, each distributed by `account_id`.
-The shaded boxes represent shards. Green shards are stored together on one
-worker node, and blue shards are stored on another worker node. Notice how a
-join query between Accounts and Campaigns has all the necessary data together
-on one node when both tables are restricted to the same account\_id.
-
-![Multi-tenant colocation](../media/concepts-hyperscale-choosing-distribution-column/multi-tenant-colocation.png)
-
-To apply this design in your own schema, identify what constitutes a tenant in
-your application. Common instances include company, account, organization, or
-customer. The column name will be something like `company_id` or `customer_id`.
-Examine each of your queries and ask yourself, would it work if it had
-more WHERE clauses to restrict all tables involved to rows with the same
-tenant ID? Queries in the multi-tenant model are scoped to a tenant. For
-instance, queries on sales or inventory are scoped within a certain store.
-
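A minimal sketch of the pattern in the diagram, assuming `accounts` and `campaigns` tables keyed by `account_id` (the table definitions are illustrative):

```sql
CREATE TABLE accounts (
    account_id bigint PRIMARY KEY,
    name       text
);

CREATE TABLE campaigns (
    account_id  bigint,
    campaign_id bigint,
    name        text,
    PRIMARY KEY (account_id, campaign_id)
);

-- distribute both tables by the tenant ID so that matching rows are colocated
SELECT create_distributed_table('accounts', 'account_id');
SELECT create_distributed_table('campaigns', 'account_id');

-- a join restricted to one tenant is routed to the single worker node
-- that holds that tenant's shards
SELECT a.name AS account, c.name AS campaign
FROM accounts a
JOIN campaigns c USING (account_id)
WHERE a.account_id = 42;
```
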
-#### Best practices
-
-- **Distribute tables by a common tenant\_id column.** For
- instance, in a SaaS application where tenants are companies, the
- tenant\_id is likely to be the company\_id.
-- **Convert small cross-tenant tables to reference tables.** When
- multiple tenants share a small table of information, distribute it
- as a reference table.
-- **Filter all application queries by tenant\_id.** Each
- query should request information for one tenant at a time.
-
-Read the [multi-tenant
-tutorial](./tutorial-design-database-multi-tenant.md) for an example of how to
-build this kind of application.
-
-### Real-time apps
-
-The multi-tenant architecture introduces a hierarchical structure and uses data
-colocation to route queries per tenant. By contrast, real-time architectures
-depend on specific distribution properties of their data to achieve highly
-parallel processing.
-
-We use "entity ID" as a term for distribution columns in the real-time model.
-Typical entities are users, hosts, or devices.
-
-Real-time queries typically ask for numeric aggregates grouped by date or
-category. Hyperscale (Citus) sends these queries to each shard for partial
-results and assembles the final answer on the coordinator node. Queries run
-fastest when as many nodes contribute as possible, and when no single node must
-do a disproportionate amount of work.
-
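For example, with an `events` table distributed by `device_id` (an assumed schema), an aggregation grouped by the distribution column is computed in parallel on every shard and merged on the coordinator:

```sql
CREATE TABLE events (
    device_id  bigint,
    event_time timestamptz NOT NULL,
    payload    jsonb
);

SELECT create_distributed_table('events', 'device_id');

-- each worker computes partial counts for its shards;
-- the coordinator merges the partial results
SELECT device_id, count(*) AS event_count
FROM events
WHERE event_time > now() - interval '1 day'
GROUP BY device_id
ORDER BY event_count DESC
LIMIT 20;
```
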
-#### Best practices
-
-- **Choose a column with high cardinality as the distribution
- column.** For comparison, a Status field on an order table with
- values New, Paid, and Shipped is a poor choice of distribution column.
- It assumes only those few values, which limits the number of shards
- that can hold the data, and the number of nodes that can process it.
- Among columns with high cardinality, it's also good to choose those
- columns that are frequently used in group-by clauses or as join keys.
-- **Choose a column with even distribution.** If you distribute a
- table on a column skewed to certain common values, data in the
- table tends to accumulate in certain shards. The nodes that hold
- those shards end up doing more work than other nodes.
-- **Distribute fact and dimension tables on their common columns.**
- Your fact table can have only one distribution key. Tables that join
- on another key won't be colocated with the fact table. Choose
- one dimension to colocate based on how frequently it's joined and
- the size of the joining rows.
-- **Change some dimension tables into reference tables.** If a
- dimension table can't be colocated with the fact table, you can
- improve query performance by distributing copies of the dimension
- table to all of the nodes in the form of a reference table.
-
-Read the [real-time dashboard tutorial](./tutorial-design-database-realtime.md)
-for an example of how to build this kind of application.
-
-### Time-series data
-
-In a time-series workload, applications query recent information while they
-archive old information.
-
-The most common mistake in modeling time-series information in Hyperscale
-(Citus) is to use the timestamp itself as a distribution column. A hash
-distribution based on time distributes times seemingly at random into different
-shards rather than keeping ranges of time together in shards. Queries that
-involve time generally reference ranges of time, for example, the most recent
-data. This type of hash distribution leads to network overhead.
-
-#### Best practices
-
-- **Don't choose a timestamp as the distribution column.** Choose a
- different distribution column. In a multi-tenant app, use the tenant
- ID, or in a real-time app use the entity ID.
-- **Use PostgreSQL table partitioning for time instead.** Use table
- partitioning to break a large table of time-ordered data into
- multiple inherited tables with each table containing different time
- ranges. Distributing a Postgres-partitioned table in Hyperscale (Citus)
-  creates shards for the inherited tables (see the sketch after this list).
-
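A sketch of that combination, assuming a `sensor_events` table distributed by `device_id` and partitioned by `event_time` (the names are illustrative):

```sql
-- partition by time locally, distribute by the entity ID
CREATE TABLE sensor_events (
    device_id  bigint,
    event_time timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (event_time);

CREATE TABLE sensor_events_2022_10 PARTITION OF sensor_events
    FOR VALUES FROM ('2022-10-01') TO ('2022-11-01');

-- distributing the parent also creates shards for each partition
SELECT create_distributed_table('sensor_events', 'device_id');
```
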
-## Next steps
-
-- Learn how [colocation](concepts-colocation.md) between distributed data helps queries run fast.
-- Discover the distribution column of a distributed table, and other [useful diagnostic queries](howto-useful-diagnostic-queries.md).
postgresql Howto Compute Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-compute-quota.md
- Title: Change compute quotas - Azure portal - Azure Database for PostgreSQL - Hyperscale (Citus)
-description: Learn how to increase vCore quotas per region in Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal.
----- Previously updated : 12/10/2021--
-# Change compute quotas in Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal
--
-Azure enforces a vCore quota per subscription per region. There are two
-independently adjustable limits: vCores for coordinator nodes, and vCores for
-worker nodes.
-
-## Request quota increase
-
-1. Select **New support request** in the Azure portal menu for your Hyperscale
- (Citus) server group.
-2. Fill out **Summary** with the quota increase request for your region, for
- example "Quota increase in West Europe region."
-3. These fields should be autoselected, but verify:
- * **Issue Type** should be "Technical + your subscription"
- * **Service type** should be "Azure Database for PostgreSQL"
-4. Select "Create, Update, and Drop Resources" for **Problem type**.
-5. Select "Node compute or storage scaling" for **Problem subtype**.
-6. Select **Next: Solutions >>** then **Next: Details >>**
-7. In the problem description include two pieces of information:
- * The region where you want the quota(s) increased
- * Quota increase details, for example "Need to increase worker node quota
- in West Europe to 512 vCores"
-
-![support request in Azure portal](../media/howto-hyperscale-compute-quota/support-request.png)
-
-## Next steps
-
-* Learn about other Hyperscale (Citus) [quotas and limits](reference-limits.md).
postgresql Howto Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-connect.md
- Title: Connect to server - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Learn how to connect to and query a Hyperscale (Citus) server group
----- Previously updated : 08/11/2022--
-# Connect to a server group
--
-Choose your database client below to learn how to configure it to connect to
-Hyperscale (Citus).
-
-# [pgAdmin](#tab/pgadmin)
-
-[pgAdmin](https://www.pgadmin.org/) is a popular and feature-rich open source
-administration and development platform for PostgreSQL.
-
-1. [Download](https://www.pgadmin.org/download/) and install pgAdmin.
-
-2. Open the pgAdmin application on your client computer. From the Dashboard,
- select **Add New Server**.
-
- ![pgAdmin dashboard](../media/howto-hyperscale-connect/pgadmin-dashboard.png)
-
-3. Choose a **Name** in the General tab. Any name will work.
-
- ![pgAdmin general connection settings](../media/howto-hyperscale-connect/pgadmin-general.png)
-
-4. Enter connection details in the Connection tab.
-
- ![pgAdmin db connection settings](../media/howto-hyperscale-connect/pgadmin-connection.png)
-
- Customize the following fields:
-
- * **Host name/address**: Obtain this value from the **Overview** page for your
- server group in the Azure portal. It's listed there as **Coordinator name**.
- It will be of the form, `c.myservergroup.postgres.database.azure.com`.
- * **Maintenance database**: use the value `citus`.
- * **Username**: use the value `citus`.
- * **Password**: the connection password.
- * **Save password**: enable if desired.
-
-5. In the SSL tab, set **SSL mode** to **Require**.
-
- ![pgAdmin ssl settings](../media/howto-hyperscale-connect/pgadmin-ssl.png)
-
-6. Select **Save** to save and connect to the database.
-
-# [psql](#tab/psql)
-
-The [psql utility](https://www.postgresql.org/docs/current/app-psql.html) is a
-terminal-based front-end to PostgreSQL. It enables you to type in queries
-interactively, issue them to PostgreSQL, and see the query results.
-
-1. Install psql. It's included with a [PostgreSQL
- installation](https://www.postgresql.org/docs/current/tutorial-install.html),
- or available separately in package managers for several operating systems.
-
-2. Obtain the connection string. In the server group page, select the
- **Connection strings** menu item.
-
- ![get connection string](../media/quickstart-connect-psql/get-connection-string.png)
-
- Find the string marked **psql**. It will be of the form, `psql
- "host=c.servergroup.postgres.database.azure.com port=5432 dbname=citus
- user=citus password={your_password} sslmode=require"`
-
- * Copy the string.
- * Replace "{your\_password}" with the administrative password you chose earlier.
- * Notice the hostname starts with a `c.`, for instance
- `c.demo.postgres.database.azure.com`. This prefix indicates the
- coordinator node of the server group.
-   * The default dbname and username are `citus` and can't be changed.
-
-3. In a local terminal prompt, paste the psql connection string, *substituting
- your password for the string `{your_password}`*, then press enter.
---
-**Next steps**
-
-* Troubleshoot [connection issues](howto-troubleshoot-common-connection-issues.md).
-* [Verify TLS](howto-ssl-connection-security.md) certificates in your
- connections.
-* Now that you can connect to the database, learn how to [build scalable
- apps](quickstart-build-scalable-apps-overview.md).
postgresql Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-create-users.md
- Title: Create users - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: This article describes how you can create new user accounts to interact with an Azure Database for PostgreSQL - Hyperscale (Citus).
----- Previously updated : 1/8/2019--
-# Create users in Azure Database for PostgreSQL - Hyperscale (Citus)
--
-## The server admin account
-
-The PostgreSQL engine uses
-[roles](https://www.postgresql.org/docs/current/sql-createrole.html) to control
-access to database objects, and a newly created Hyperscale (Citus) server group
-comes with several roles pre-defined:
-
-* The [default PostgreSQL roles](https://www.postgresql.org/docs/current/default-roles.html)
-* `azure_pg_admin`
-* `postgres`
-* `citus`
-
-Since Hyperscale (Citus) is a managed PaaS service, only Microsoft can sign in with the
-`postgres` superuser role. For limited administrative access, Hyperscale (Citus)
-provides the `citus` role.
-
-Permissions for the `citus` role:
-
-* Read all configuration variables, even variables normally visible only to
- superusers.
-* Read all pg\_stat\_\* views and use various statistics-related
- extensions--even views or extensions normally visible only to superusers.
-* Execute monitoring functions that may take ACCESS SHARE locks on tables,
- potentially for a long time.
-* [Create PostgreSQL extensions](reference-extensions.md) (because
- the role is a member of `azure_pg_admin`).
-
-Notably, the `citus` role has some restrictions:
-
-* Can't create roles
-* Can't create databases
-
-## How to create additional user roles
-
-As mentioned, the `citus` admin account lacks permission to create additional
-users. To add a user, use the Azure portal interface.
-
-1. Go to the **Roles** page for your Hyperscale (Citus) server group, and
- select **+ Add**:
-
- :::image type="content" source="../media/howto-hyperscale-create-users/1-role-page.png" alt-text="The roles page":::
-
-2. Enter the role name and password. Select **Save**.
-
- :::image type="content" source="../media/howto-hyperscale-create-users/2-add-user-fields.png" alt-text="Add role":::
-
-The user will be created on the coordinator node of the server group,
-and propagated to all the worker nodes. Roles created through the Azure
-portal have the `LOGIN` attribute, which means they're true users who
-can sign in to the database.
-
-## How to modify privileges for user role
-
-New user roles are commonly used to provide database access with restricted
-privileges. To modify user privileges, use standard PostgreSQL commands, using
-a tool such as PgAdmin or psql. (See [Connect to a Hyperscale (Citus) server
-group](quickstart-connect-psql.md).)
-
-For example, to allow `db_user` to read `mytable`, grant the permission:
-
-```sql
-GRANT SELECT ON mytable TO db_user;
-```
-
-Hyperscale (Citus) propagates single-table GRANT statements through the entire
-cluster, applying them on all worker nodes. It also propagates GRANTs that are
-system-wide (for example, for all tables in a schema):
-
-```sql
--- applies to the coordinator node and propagates to workers
-GRANT SELECT ON ALL TABLES IN SCHEMA public TO db_user;
-```
-
-## How to delete a user role or change their password
-
-To update a user, visit the **Roles** page for your Hyperscale (Citus) server group,
-and select the ellipses **...** next to the user. The ellipses will open a menu
-to delete the user or reset their password.
-
- :::image type="content" source="../media/howto-hyperscale-create-users/edit-role.png" alt-text="Edit a role":::
-
-The `citus` role is privileged and can't be deleted.
-
-## Next steps
-
-Open the firewall for the IP addresses of the new users' machines to enable
-them to connect: [Create and manage Hyperscale (Citus) firewall rules using
-the Azure portal](howto-manage-firewall-using-portal.md).
-
-For more information about database user management, see PostgreSQL
-product documentation:
-
-* [Database Roles and Privileges](https://www.postgresql.org/docs/current/static/user-manag.html)
-* [GRANT Syntax](https://www.postgresql.org/docs/current/static/sql-grant.html)
-* [Privileges](https://www.postgresql.org/docs/current/static/ddl-priv.html)
postgresql Howto High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-high-availability.md
- Title: Configure high availability - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: How to enable or disable high availability
----- Previously updated : 07/27/2020--
-# Configure Hyperscale (Citus) high availability
--
-Azure Database for PostgreSQL - Hyperscale (Citus) provides high availability
-(HA) to avoid database downtime. With HA enabled, every node in a server group
-will get a standby. If the original node becomes unhealthy, its standby will be
-promoted to replace it.
-
-> [!IMPORTANT]
-> Because HA doubles the number of servers in the group, it will also double
-> the cost.
-
-Enabling HA is possible during server group creation, or afterward in the
-**Compute + storage** tab for your server group in the Azure portal. The user
-interface looks similar in either case. Drag the slider for **High
-availability** from NO to YES:
--
-Click the **Save** button to apply your selection. Enabling HA can take some
-time as the server group provisions standbys and streams data to them.
-
-The **Overview** tab for the server group will list all nodes and their
-standbys, along with a **High availability** column indicating whether HA is
-successfully enabled for each node.
--
-### Next steps
-
-Learn more about [high availability](concepts-high-availability.md).
postgresql Howto Ingest Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-ingest-azure-data-factory.md
- Title: Azure Data Factory
-description: Step-by-step guide for using Azure Data Factory for ingestion on Hyperscale Citus
----- Previously updated : 06/27/2022--
-# How to ingest data using Azure Data Factory
-
-[Azure Data Factory](../../data-factory/introduction.md) (ADF) is a cloud-based
-ETL and data integration service. It allows you to create data-driven workflows
-to move and transform data at scale.
-
-Using Azure Data Factory, you can create and schedule data-driven workflows
-(called pipelines) that ingest data from disparate data stores. Pipelines can
-run on-premises, in Azure, or on other cloud providers for analytics and
-reporting.
-
-ADF has a data sink for Hyperscale (Citus). The data sink allows you to bring
-your data (relational, NoSQL, data lake files) into Hyperscale (Citus) tables
-for storage, processing, and reporting.
-
-![Dataflow diagram for Azure Data Factory.](../media/howto-hyperscale-ingestion/azure-data-factory-architecture.png)
-
-## ADF for real-time ingestion to Hyperscale (Citus)
-
-Here are key reasons to choose Azure Data Factory for ingesting data into
-Hyperscale (Citus):
-
-* **Easy-to-use** - Offers a code-free visual environment for orchestrating and automating data movement.
-* **Powerful** - Uses the full capacity of underlying network bandwidth, up to 5 GiB/s throughput.
-* **Built-in Connectors** - Integrates all your data sources, with more than 90 built-in connectors.
-* **Cost Effective** - Supports a pay-as-you-go, fully managed serverless cloud service that scales on demand.
-
-## Steps to use ADF with Hyperscale (Citus)
-
-In this article, we'll create a data pipeline by using the Azure Data Factory
-user interface (UI). The pipeline in this data factory copies data from Azure
-Blob storage to a database in Hyperscale (Citus). For a list of data stores
-supported as sources and sinks, see the [supported data
-stores](../../data-factory/copy-activity-overview.md#supported-data-stores-and-formats)
-table.
-
-In Azure Data Factory, you can use the **Copy** activity to copy data among
-data stores located on-premises and in the cloud to Hyperscale (Citus). If you're
-new to Azure Data Factory, here's a quick guide on how to get started:
-
-1. Once ADF is provisioned, go to your data factory. You'll see the Data
- Factory home page as shown in the following image:
-
- :::image type="content" source="../media/howto-hyperscale-ingestion/azure-data-factory-home.png" alt-text="Landing page of Azure Data Factory." border="true":::
-
-2. On the home page, select **Orchestrate**.
-
- :::image type="content" source="../media/howto-hyperscale-ingestion/azure-data-factory-orchestrate.png" alt-text="Orchestrate page of Azure Data Factory." border="true":::
-
-3. In the General panel under **Properties**, specify the desired pipeline name.
-
-4. In the **Activities** toolbox, expand the **Move and Transform** category,
- and drag and drop the **Copy Data** activity to the pipeline designer
- surface. Specify the activity name.
-
- :::image type="content" source="../media/howto-hyperscale-ingestion/azure-data-factory-pipeline-copy.png" alt-text="Pipeline in Azure Data Factory." border="true":::
-
-5. Configure **Source**
-
-   :::image type="content" source="../media/howto-hyperscale-ingestion/azure-data-factory-configure-source.png" alt-text="Configuring Source in Azure Data Factory." border="true":::
-
-   1. Go to the Source tab. Select **+ New** to create a source dataset.
- 2. In the **New Dataset** dialog box, select **Azure Blob Storage**, and then select **Continue**.
- 3. Choose the format type of your data, and then select **Continue**.
- 4. Under the **Linked service** text box, select **+ New**.
-   5. Specify the Linked Service name and select your storage account from the **Storage account name** list. Test the connection.
- 6. Next to **File path**, select **Browse** and select the desired file from BLOB storage.
- 7. Select **Ok** to save the configuration.
-
-6. Configure **Sink**
-
-   :::image type="content" source="../media/howto-hyperscale-ingestion/azure-data-factory-configure-sink.png" alt-text="Configuring Sink in Azure Data Factory." border="true":::
-
- 1. Go to the Sink tab. Select **+ New** to create a source dataset.
- 2. In the **New Dataset** dialog box, select **Azure Database for PostgreSQL**, and then select **Continue**.
- 3. Under the **Linked service** text box, select **+ New**.
-   4. Specify the Linked Service name and select your server group from the list of Hyperscale (Citus) server groups. Add connection details and test the connection.
-
- > [!NOTE]
- >
- > If your server group is not present in the drop down, use the **Enter
- > manually** option to add server details.
-
- 5. Select the table name where you want to ingest the data.
- 6. Specify **Write method** as COPY command.
- 7. Select **Ok** to save the configuration.
-
-7. From the toolbar above the canvas, select **Validate** to validate pipeline
- settings. Fix errors (if any), revalidate and ensure that the pipeline has
- been successfully validated.
-
-8. Select **Debug** from the toolbar to execute the pipeline.
-
-   :::image type="content" source="../media/howto-hyperscale-ingestion/azure-data-factory-execute.png" alt-text="Debug and Execute in Azure Data Factory." border="true":::
-
-9. Once the pipeline runs successfully, select **Publish all** in the top
-   toolbar. This action publishes the entities (datasets and pipelines) you
-   created to Data Factory.
-
-## Calling a Stored Procedure in ADF
-
-In some scenarios, you might want to call a stored procedure or function to
-push aggregated data from a staging table to a summary table. ADF doesn't
-currently offer a Stored Procedure activity for Azure Database for PostgreSQL,
-but as a workaround you can use the Lookup activity with a query that calls
-the stored procedure, as shown below:
--
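For example, the query in the Lookup activity can simply select the function; `refresh_summary()` here is a hypothetical function name standing in for your own aggregation logic:

```sql
-- query used in the ADF Lookup activity; refresh_summary() is a
-- hypothetical function that moves aggregated rows from the staging
-- table into the summary table
SELECT refresh_summary();
```
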
-## Next steps
-
-Learn how to create a [real-time
-dashboard](tutorial-design-database-realtime.md) with Hyperscale (Citus).
postgresql Howto Ingest Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-ingest-azure-stream-analytics.md
- Title: Real-time data ingestion with Azure Stream Analytics - Hyperscale (Citus) - Azure DB for PostgreSQL
-description: How to transform and ingest streaming data
----- Previously updated : 08/11/2022--
-# How to ingest data using Azure Stream Analytics
-
-[Azure Stream
-Analytics](https://azure.microsoft.com/services/stream-analytics/#features)
-(ASA) is a real-time analytics and event-processing engine that is designed to
-process high volumes of fast streaming data from devices, sensors, and web
-sites. It's also available on the Azure IoT Edge runtime, enabling data
-processing on IoT devices.
--
-Hyperscale (Citus) shines at real-time workloads such as
-[IoT](quickstart-build-scalable-apps-model-high-throughput.md). For these workloads,
-Azure Stream Analytics (ASA) can act as a no-code, performant and scalable
-alternative to pre-process and stream data from Event Hubs, IoT Hub and Azure
-Blob Storage into Hyperscale (Citus).
-
-## Steps to set up ASA with Hyperscale (Citus)
-
-> [!NOTE]
->
-> This article uses [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md)
-> as an example datasource, but the technique is applicable to any other source
-> supported by ASA. Also, the demonstration data shown below comes from the
-> [Azure IoT Device Telemetry
-> Simulator](https://github.com/Azure-Samples/Iot-Telemetry-Simulator). This
-> article doesn't cover setting up the simulator.
-
-1. Open **Azure portal** and select **Create a resource** in the upper left-hand corner of the Azure portal.
-1. Select **Analytics** > **Stream Analytics job** from the results list.
-1. Fill out the Stream Analytics job page with the following information:
- * **Job name** - Name to identify your Stream Analytics job.
- * **Subscription** - Select the Azure subscription that you want to use for this job.
- * **Resource group** - Select the same resource group as your IoT Hub.
- * **Location** - Select geographic location where you can host your Stream Analytics job. Use the location that's closest to your users for better performance and to reduce the data transfer cost.
- * **Streaming units** - Streaming units represent the computing resources that are required to execute a job.
- * **Hosting environment** - **Cloud** allows you to deploy to Azure Cloud, and **Edge** allows you to deploy to an IoT Edge device.
-1. Select **Create**. You should see a **Deployment in progress...** notification displayed in the top right of your browser window.
-
- :::image type="content" source="../media/howto-hyperscale-ingestion/azure-stream-analytics-02-create.png" alt-text="Create Azure Stream Analytics form." border="true":::
-
-1. Configure job input.
-
- :::image type="content" source="../media/howto-hyperscale-ingestion/azure-stream-analytics-03-input.png" alt-text="Configure job input in Azure Stream Analytics." border="true":::
-
- 1. Once the resource deployment is complete, navigate to your Stream Analytics
- job. Select **Inputs** > **Add Stream input** > **IoT Hub**.
-
- 1. Fill out the IoT Hub page with the following values:
- * **Input alias** - Name to identify the job's input.
- * **Subscription** - Select the Azure subscription that has the IOT Hub account you created.
-      * **IoT Hub** - Select the name of the IoT Hub you have already created.
-      * Leave the other options at their default values.
- 1. Select **Save** to save the settings.
-   1. Once the input stream is added, you can verify or download the dataset flowing in.
-      Below is a sample event for our use case:
-
- ```json
- {
- "deviceId": "sim000001",
- "time": "2022-04-25T13:49:11.6892185Z",
- "counter": 1,
- "EventProcessedUtcTime": "2022-04-25T13:49:41.4791613Z",
- "PartitionId": 3,
- "EventEnqueuedUtcTime": "2022-04-25T13:49:12.1820000Z",
- "IoTHub": {
- "MessageId": null,
- "CorrelationId": "990407b8-4332-4cb6-a8f4-d47f304397d8",
- "ConnectionDeviceId": "sim000001",
- "ConnectionDeviceGenerationId": "637842405470327268",
- "EnqueuedTime": "2022-04-25T13:49:11.7060000Z"
- }
- }
- ```
-
-1. Configure job output.
-
- :::image type="content" source="../media/howto-hyperscale-ingestion/azure-stream-analytics-04-output.png" alt-text="Configure job output in Azure Stream Analytics." border="true":::
-
- 1. Navigate to the Stream Analytics job that you created earlier.
- 1. Select **Outputs** > **Add** > **Azure PostgreSQL**.
- 1. Fill out the **Azure PostgreSQL** page with the following values:
- * **Output alias** - Name to identify the job's output.
- * Select **"Provide PostgreSQL database settings manually"** and enter the DB server connection details like server FQDN, database, table name, username, and password.
-      * For our example dataset, we chose the table name `device_data` (a sketch of a possible table definition appears after these steps).
- 1. Select **Save** to save the settings.
-
- > [!NOTE]
- > The **Test Connection** feature for Hyperscale (Citus) is currently not
- > supported and might throw an error, even when the connection works fine.
-
-1. Define transformation query.
-
- :::image type="content" source="../media/howto-hyperscale-ingestion/azure-stream-analytics-05-transformation-query.png" alt-text="Transformation query in Azure Stream Analytics." border="true":::
-
- 1. Navigate to the Stream Analytics job that you created earlier.
-   1. For this tutorial, we'll ingest only alternate events (every other event) from IoT Hub into Hyperscale (Citus) to reduce the overall data size.
-
- ```sql
- select
- counter,
- iothub.connectiondeviceid,
- iothub.correlationid,
- iothub.connectiondevicegenerationid,
- iothub.enqueuedtime
- from
- [src-iot-hub]
- where counter%2 = 0;
- ```
-
-   1. Select **Save query**.
-
- > [!NOTE]
-   > The query not only samples the data, but also extracts the desired
-   > attributes from the data stream. The custom query option in Stream
-   > Analytics is helpful for preprocessing or transforming the data before
-   > it's ingested into the database.
-
-1. Start the Stream Analytics job and verify output.
-
-   1. Return to the job overview page and select **Start**.
-   1. Under **Start job**, select **Now** for the **Job output start time** field. Then, select **Start** to start your job.
-   1. After a few minutes, you can query the Hyperscale (Citus) database to verify that the data loaded. The job takes some time to start the first time, but once triggered it continues to run as data arrives.
-
- ```
- citus=> SELECT * FROM public.device_data LIMIT 10;
-
- counter | connectiondeviceid | correlationid | connectiondevicegenerationid | enqueuedtime
-   ---------+--------------------+--------------------------------------+------------------------------+------------------------------
- 2 | sim000001 | 7745c600-5663-44bc-a70b-3e249f6fc302 | 637842405470327268 | 2022-05-25T18:24:03.4600000Z
- 4 | sim000001 | 389abfde-5bec-445c-a387-18c0ed7af227 | 637842405470327268 | 2022-05-25T18:24:05.4600000Z
- 6 | sim000001 | 3932ce3a-4616-470d-967f-903c45f71d0f | 637842405470327268 | 2022-05-25T18:24:07.4600000Z
- 8 | sim000001 | 4bd8ecb0-7ee1-4238-b034-4e03cb50f11a | 637842405470327268 | 2022-05-25T18:24:09.4600000Z
- 10 | sim000001 | 26cebc68-934e-4e26-80db-e07ade3775c0 | 637842405470327268 | 2022-05-25T18:24:11.4600000Z
- 12 | sim000001 | 067af85c-a01c-4da0-b208-e4d31a24a9db | 637842405470327268 | 2022-05-25T18:24:13.4600000Z
- 14 | sim000001 | 740e5002-4bb9-4547-8796-9d130f73532d | 637842405470327268 | 2022-05-25T18:24:15.4600000Z
- 16 | sim000001 | 343ed04f-0cc0-4189-b04a-68e300637f0e | 637842405470327268 | 2022-05-25T18:24:17.4610000Z
- 18 | sim000001 | 54157941-2405-407d-9da6-f142fc8825bb | 637842405470327268 | 2022-05-25T18:24:19.4610000Z
- 20 | sim000001 | 219488e5-c48a-4f04-93f6-12c11ed00a30 | 637842405470327268 | 2022-05-25T18:24:21.4610000Z
- (10 rows)
- ```
-
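-As a point of reference, the destination table (here `device_data`) needs to
-exist in Hyperscale (Citus) before the job writes to it. A minimal sketch of a
-possible definition, matching the columns produced by the transformation query
-above (the column types are assumptions for illustration):
-
-```sql
--- Hypothetical table definition for the ASA output; adjust types as needed.
-CREATE TABLE public.device_data (
-    counter bigint,
-    connectiondeviceid text,
-    correlationid text,
-    connectiondevicegenerationid text,
-    enqueuedtime text
-);
-
--- Optionally distribute the table on the device column for scale-out ingest.
-SELECT create_distributed_table('device_data', 'connectiondeviceid');
-```
-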
-## Next steps
-
-Learn how to create a [real-time
-dashboard](tutorial-design-database-realtime.md) with Hyperscale (Citus).
postgresql Howto Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-logging.md
- Title: Logs - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: How to access database logs for Azure Database for PostgreSQL - Hyperscale (Citus)
----- Previously updated : 9/13/2021--
-# Logs in Azure Database for PostgreSQL - Hyperscale (Citus)
--
-PostgreSQL database server logs are available for every node of a Hyperscale
-(Citus) server group. You can ship logs to a storage server, or to an analytics
-service. The logs can be used to identify, troubleshoot, and repair
-configuration errors and suboptimal performance.
-
-## Capturing logs
-
-To access PostgreSQL logs for a Hyperscale (Citus) coordinator or worker node,
-you have to enable the PostgreSQLLogs diagnostic setting. In the Azure
-portal, open **Diagnostic settings**, and select **+ Add diagnostic setting**.
--
-Pick a name for the new diagnostic setting, check the **PostgreSQLLogs** box,
-and check the **Send to Log Analytics workspace** box. Then select **Save**.
--
-## Viewing logs
-
-To view and filter the logs, we'll use Kusto queries. Open **Logs** in the
-Azure portal for your Hyperscale (Citus) server group. If a query selection
-dialog appears, close it:
--
-You'll then see an input box to enter queries.
--
-Enter the following query and select the **Run** button.
-
-```kusto
-AzureDiagnostics
-| project TimeGenerated, Message, errorLevel_s, LogicalServerName_s
-```
-
-The above query lists log messages from all nodes, along with their severity
-and timestamp. You can add `where` clauses to filter the results. For instance,
-to see errors from the coordinator node only, filter the error level and server
-name like this:
-
-```kusto
-AzureDiagnostics
-| project TimeGenerated, Message, errorLevel_s, LogicalServerName_s
-| where LogicalServerName_s == 'example-server-group-c'
-| where errorLevel_s == 'ERROR'
-```
-
-Replace the server name in the above example with the name of your server. The
-coordinator node name has the suffix `-c` and worker nodes are named
-with a suffix of `-w0`, `-w1`, and so on.
-
-The Azure logs can be filtered in different ways. Here's how to find logs
-within the past day whose messages match a regular expression.
-
-```kusto
-AzureDiagnostics
-| where TimeGenerated > ago(24h)
-| order by TimeGenerated desc
-| where Message matches regex ".*error.*"
-```
-
-## Next steps
-
-- [Get started with log analytics queries](../../azure-monitor/logs/log-analytics-tutorial.md)
-- Learn about [Azure event hubs](../../event-hubs/event-hubs-about.md)
postgresql Howto Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-maintenance.md
- Title: Azure Database for PostgreSQL - Hyperscale (Citus) - Scheduled maintenance - Azure portal
-description: Learn how to configure scheduled maintenance settings for an Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal.
----- Previously updated : 04/07/2021--
-# Manage scheduled maintenance settings for Azure Database for PostgreSQL - Hyperscale (Citus)
--
-You can specify maintenance options for each Hyperscale (Citus) server group in
-your Azure subscription. Options include the maintenance schedule and
-notification settings for upcoming and finished maintenance events.
-
-## Prerequisites
-
-To complete this how-to guide, you need:
-
-- An [Azure Database for PostgreSQL - Hyperscale (Citus) server
-  group](quickstart-create-portal.md)
-
-## Specify maintenance schedule options
-
-1. On the Hyperscale (Citus) server group page, under the **Settings** heading,
- choose **Maintenance** to open scheduled maintenance options.
-2. The default (system-managed) schedule is a random day of the week, with a
-   30-minute maintenance start window between 11pm and 7am in the server
-   group's [Azure region time](https://go.microsoft.com/fwlink/?linkid=2143646). If you
- want to customize this schedule, choose **Custom schedule**. You can then
- select a preferred day of the week, and a 30-minute window for maintenance
- start time.
-
-## Notifications about scheduled maintenance events
-
-You can use Azure Service Health to [view
-notifications](../../service-health/service-notifications.md) about upcoming
-and past scheduled maintenance on your Hyperscale (Citus) server group. You can
-also [set up](../../service-health/resource-health-alert-monitor-guide.md)
-alerts in Azure Service Health to get notifications about maintenance events.
-
-## Next steps
-
-* Learn about [scheduled maintenance in Azure Database for PostgreSQL - Hyperscale (Citus)](concepts-maintenance.md)
-* Learn about [Azure Service Health](../../service-health/overview.md)
postgresql Howto Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-manage-firewall-using-portal.md
- Title: Manage firewall rules - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Create and manage firewall rules for Azure Database for PostgreSQL - Hyperscale (Citus) using the Azure portal
----- Previously updated : 11/16/2021-
-# Manage public access for Azure Database for PostgreSQL - Hyperscale (Citus)
--
-Server-level firewall rules can be used to manage [public
-access](concepts-firewall-rules.md) to a Hyperscale (Citus)
-coordinator node from a specified IP address (or range of IP addresses) in the
-public internet.
-
-## Prerequisites
-To step through this how-to guide, you need:
-- A server group. See [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server group](quickstart-create-portal.md).
-
-## Create a server-level firewall rule in the Azure portal
-
-> [!NOTE]
-> These settings are also accessible during the creation of an Azure Database for PostgreSQL - Hyperscale (Citus) server group. Under the **Networking** tab, select **Public access (allowed IP address)**.
->
-> :::image type="content" source="../media/howto-hyperscale-manage-firewall-using-portal/0-create-public-access.png" alt-text="Azure portal - networking tab":::
-
-1. On the PostgreSQL server group page, under the Security heading, click **Networking** to open the Firewall rules.
-
- :::image type="content" source="../media/howto-hyperscale-manage-firewall-using-portal/1-connection-security.png" alt-text="Azure portal - click Networking":::
-
-2. Select **Allow public access from Azure services and resources within Azure to this server group**.
-
-3. If desired, select **Enable access to the worker nodes**. With this option, the firewall rules will allow access to all worker nodes as well as the coordinator node.
-
-4. Click **Add current client IP address** to create a firewall rule with the public IP address of your computer, as perceived by the Azure system.
-
-Alternatively, clicking **+Add 0.0.0.0 - 255.255.255.255** (to the right of option B) allows not just your IP, but the whole internet, to access the coordinator node's port 5432 (and 6432 for connection pooling). In this situation, clients still must log in with the correct username and password to use the cluster. Nevertheless, we recommend allowing worldwide access only for short periods of time and only for non-production databases.
-
-5. Verify your IP address before saving the configuration. In some situations, the IP address observed by Azure portal differs from the IP address used when accessing the internet and Azure servers. Thus, you may need to change the Start IP and End IP to make the rule function as expected.
- Use a search engine or other online tool to check your own IP address. For example, search for "what is my IP."
-
- :::image type="content" source="../media/howto-hyperscale-manage-firewall-using-portal/3-what-is-my-ip.png" alt-text="Bing search for What is my IP":::
-
-6. Add more address ranges. In the firewall rules, you can specify a single IP address or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the field for Start IP and End IP. Opening the firewall enables administrators, users, and applications to access the coordinator node on ports 5432 and 6432.
-
-7. Click **Save** on the toolbar to save this server-level firewall rule. Wait for the confirmation that the update to the firewall rules was successful.
-
-## Connecting from Azure
-
-There's an easy way to grant Hyperscale (Citus) database access to applications hosted on Azure (such as an Azure Web Apps application, or one running in an Azure VM). In the portal, on the **Networking** pane, select the **Allow Azure services and resources to access this server group** checkbox, and then select **Save**.
-
-> [!IMPORTANT]
-> This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
-
-## Manage existing server-level firewall rules through the Azure portal
-Repeat the steps to manage the firewall rules.
-* To add the current computer, click **+ Add current client IP address**. Click **Save** to save the changes.
-* To add more IP addresses, type in the Rule Name, Start IP Address, and End IP Address. Click **Save** to save the changes.
-* To modify an existing rule, click any of the fields in the rule and modify. Click **Save** to save the changes.
-* To delete an existing rule, click the ellipsis […] and click **Delete** to remove the rule. Click **Save** to save the changes.
-
-## Next steps
-- Learn more about [firewall rules](concepts-firewall-rules.md), including how to troubleshoot connection problems.
postgresql Howto Modify Distributed Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-modify-distributed-tables.md
- Title: Modify distributed tables - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: SQL commands to create and modify distributed tables - Hyperscale (Citus) using the Azure portal
----- Previously updated : 08/02/2022--
-# Distribute and modify tables
--
-## Distributing tables
-
-To create a distributed table, you need to first define the table schema. To do
-so, you can define a table using the [CREATE
-TABLE](http://www.postgresql.org/docs/current/static/sql-createtable.html)
-statement in the same way as you would do with a regular PostgreSQL table.
-
-```sql
-CREATE TABLE github_events
-(
- event_id bigint,
- event_type text,
- event_public boolean,
- repo_id bigint,
- payload jsonb,
- repo jsonb,
- actor jsonb,
- org jsonb,
- created_at timestamp
-);
-```
-
-Next, you can use the create\_distributed\_table() function to specify
-the table distribution column and create the worker shards.
-
-```sql
-SELECT create_distributed_table('github_events', 'repo_id');
-```
-
-The function call informs Hyperscale (Citus) that the github\_events table
-should be distributed on the repo\_id column (by hashing the column value).
-
-It creates a total of 32 shards by default, where each shard owns a portion of
-a hash space and gets replicated based on the default
-citus.shard\_replication\_factor configuration value. The shard replicas
-created on the worker have the same table schema, index, and constraint
-definitions as the table on the coordinator. Once the replicas are created, the
-function saves all distributed metadata on the coordinator.
-
-Each created shard is assigned a unique shard ID and all its replicas have the
-same shard ID. Shards are represented on the worker node as regular PostgreSQL
-tables named 'tablename\_shardid' where tablename is the name of the
-distributed table, and shard ID is the unique ID assigned. You can connect to
-the worker postgres instances to view or run commands on individual shards.
-
-You're now ready to insert data into the distributed table and run queries on
-it. You can also learn more about the UDF used in this section in the [table
-and shard DDL](reference-functions.md#table-and-shard-ddl)
-reference.
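-
-For example, a quick smoke test against the schema above (the values here are
-illustrative):
-
-```postgresql
--- insert a row; it is routed to the shard that owns this repo_id
-INSERT INTO github_events (event_id, event_type, event_public, repo_id, created_at)
-VALUES (1, 'PushEvent', true, 42, now());
-
--- queries filtered by the distribution column are routed to a single shard
-SELECT count(*) FROM github_events WHERE repo_id = 42;
-```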
-
-### Reference Tables
-
-The above method distributes tables into multiple horizontal shards. Another
-possibility is distributing tables into a single shard and replicating the
-shard to every worker node. Tables distributed this way are called *reference
-tables.* They are used to store data that needs to be frequently accessed by
-multiple nodes in a cluster.
-
-Common candidates for reference tables include:
-
-- Smaller tables that need to join with larger distributed tables.
-- Tables in multi-tenant apps that lack a tenant ID column or which aren't
-  associated with a tenant. (Or, during migration, even for some tables
-  associated with a tenant.)
-- Tables that need unique constraints across multiple columns and are
-  small enough.
-
-For instance, suppose a multi-tenant eCommerce site needs to calculate sales
-tax for transactions in any of its stores. Tax information isn't specific to
-any tenant. It makes sense to put it in a shared table. A US-centric reference
-table might look like this:
-
-```postgresql
--- a reference table
-
-CREATE TABLE states (
- code char(2) PRIMARY KEY,
- full_name text NOT NULL,
- general_sales_tax numeric(4,3)
-);
-
--- distribute it to all workers
-
-SELECT create_reference_table('states');
-```
-
-Now queries such as one calculating tax for a shopping cart can join on the
-`states` table with no network overhead, and can add a foreign key to the state
-code for better validation.
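-
-As a rough sketch of both ideas, assuming a distributed `orders` table with a
-`state_code` column (the table, column, and constraint names here are
-illustrative, not part of the original article):
-
-```postgresql
--- a foreign key from a distributed table to the reference table is supported
-ALTER TABLE orders
-  ADD CONSTRAINT orders_state_fk
-  FOREIGN KEY (state_code) REFERENCES states (code);
-
--- the join to the reference table is resolved locally on each worker
-SELECT o.id, o.subtotal * (1 + s.general_sales_tax) AS total_with_tax
-FROM orders o
-JOIN states s ON s.code = o.state_code;
-```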
-
-In addition to distributing a table as a single replicated shard, the
-`create_reference_table` UDF marks it as a reference table in the Hyperscale
-(Citus) metadata tables. Hyperscale (Citus) automatically performs two-phase
-commits ([2PC](https://en.wikipedia.org/wiki/Two-phase_commit_protocol)) for
-modifications to tables marked this way, which provides strong consistency
-guarantees.
-
-For another example of using reference tables, see the [multi-tenant database
-tutorial](tutorial-design-database-multi-tenant.md).
-
-### Distributing Coordinator Data
-
-If an existing PostgreSQL database is converted into the coordinator node for a
-Hyperscale (Citus) cluster, the data in its tables can be distributed
-efficiently and with minimal interruption to an application.
-
-The `create_distributed_table` function described earlier works on both empty
-and non-empty tables, and for the latter it automatically distributes table
-rows throughout the cluster. You will know if it copies data by the presence of
-the message, "NOTICE: Copying data from local table..." For example:
-
-```postgresql
-CREATE TABLE series AS SELECT i FROM generate_series(1,1000000) i;
-SELECT create_distributed_table('series', 'i');
-NOTICE: Copying data from local table...
- create_distributed_table
---------------------------
-
- (1 row)
-```
-
-Writes on the table are blocked while the data is migrated, and pending writes
-are handled as distributed queries once the function commits. (If the function
-fails then the queries become local again.) Reads can continue as normal and
-will become distributed queries once the function commits.
-
-When distributing tables A and B, where A has a foreign key to B, distribute
-the key destination table B first. Doing it in the wrong order will cause an
-error:
-
-```
-ERROR: cannot create foreign key constraint
-DETAIL: Referenced table must be a distributed table or a reference table.
-```
-
-If it's not possible to distribute in the correct order, then drop the foreign
-keys, distribute the tables, and recreate the foreign keys.
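-
-A minimal sketch of that workaround, using the hypothetical tables `A` and `B`
-from above where `A` references `B` (the constraint and column names are
-illustrative):
-
-```postgresql
--- drop the foreign key, distribute both tables, then recreate the key
-ALTER TABLE A DROP CONSTRAINT a_b_fk;
-
-SELECT create_distributed_table('A', 'b_id');
-SELECT create_distributed_table('B', 'b_id');
-
--- valid because the key includes the distribution column of both tables
-ALTER TABLE A ADD CONSTRAINT a_b_fk
-  FOREIGN KEY (b_id) REFERENCES B (b_id);
-```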
-
-When migrating data from an external database, such as from Amazon RDS to
-Hyperscale (Citus) Cloud, first create the Hyperscale (Citus) distributed
-tables via `create_distributed_table`, then copy the data into the table.
-Copying into distributed tables avoids running out of space on the coordinator
-node.
-
-## Colocating tables
-
-Colocation means keeping related information on the same machines. It
-enables efficient queries, while taking advantage of the horizontal scalability
-for the whole dataset. For more information, see
-[colocation](concepts-colocation.md).
-
-Tables are colocated in groups. To manually control a table's colocation group
-assignment, use the optional `colocate_with` parameter of
-`create_distributed_table`. If you don't care about a table's colocation then
-omit this parameter. It defaults to the value `'default'`, which groups the
-table with any other default colocation table having the same distribution
-column type, shard count, and replication factor. If you want to break or
-update this implicit colocation, you can use
-`update_distributed_table_colocation()`.
-
-```postgresql
--- these tables are implicitly co-located by using the same distribution column type and shard count with the default co-location group
-
-SELECT create_distributed_table('A', 'some_int_col');
-SELECT create_distributed_table('B', 'other_int_col');
-```
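-
-A sketch of the call, reusing the tables from the block above (`colocate_with`
-accepts another table name or `'none'`):
-
-```postgresql
--- explicitly colocate table A with table B
-SELECT update_distributed_table_colocation('A', colocate_with => 'B');
-
--- or break the colocation entirely
-SELECT update_distributed_table_colocation('A', colocate_with => 'none');
-```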
-
-When a new table is not related to others in its would-be implicit
-colocation group, specify `colocate_with => 'none'`.
-
-```postgresql
--- not co-located with other tables
-
-SELECT create_distributed_table('A', 'foo', colocate_with => 'none');
-```
-
-Splitting unrelated tables into their own colocation groups will improve [shard
-rebalancing](howto-scale-rebalance.md) performance, because
-shards in the same group have to be moved together.
-
-When tables are indeed related (for instance when they will be joined), it can
-make sense to explicitly colocate them. The gains of appropriate colocation are
-more important than any rebalancing overhead.
-
-To explicitly colocate multiple tables, distribute one and then put the others
-into its colocation group. For example:
-
-```postgresql
--- distribute stores
-SELECT create_distributed_table('stores', 'store_id');
-
--- add to the same group as stores
-SELECT create_distributed_table('orders', 'store_id', colocate_with => 'stores');
-SELECT create_distributed_table('products', 'store_id', colocate_with => 'stores');
-```
-
-Information about colocation groups is stored in the
-[pg_dist_colocation](reference-metadata.md#colocation-group-table)
-table, while
-[pg_dist_partition](reference-metadata.md#partition-table) reveals
-which tables are assigned to which groups.
-
-## Dropping tables
-
-You can use the standard PostgreSQL DROP TABLE command to remove your
-distributed tables. As with regular tables, DROP TABLE removes any indexes,
-rules, triggers, and constraints that exist for the target table. It also
-drops the shards on the worker nodes and cleans up their metadata.
-
-```sql
-DROP TABLE github_events;
-```
-
-## Modifying tables
-
-Hyperscale (Citus) automatically propagates many kinds of DDL statements.
-Modifying a distributed table on the coordinator node will update shards on the
-workers too. Other DDL statements require manual propagation, and certain
-others are prohibited, such as any that would modify a distribution column.
-Attempting to run DDL that is ineligible for automatic propagation will raise
-an error and leave tables on the coordinator node unchanged.
-
-Here is a reference of the categories of DDL statements that propagate.
-
-### Adding/Modifying Columns
-
-Hyperscale (Citus) propagates most [ALTER
-TABLE](https://www.postgresql.org/docs/current/static/ddl-alter.html) commands
-automatically. Adding columns or changing their default values works as it
-would in a single-machine PostgreSQL database:
-
-```postgresql
--- Adding a column
-ALTER TABLE products ADD COLUMN description text;
-
--- Changing default value
-ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77;
-```
-
-Significant changes to an existing column like renaming it or changing its data
-type are fine too. However, the data type of the [distribution
-column](concepts-nodes.md#distribution-column) cannot be altered.
-This column determines how table data distributes through the Hyperscale
-(Citus) cluster, and modifying its data type would require moving the data.
-
-Attempting to do so causes an error:
-
-```postgres
--- assuming store_id is the distribution column for products, and that it has type integer
-
-ALTER TABLE products
-ALTER COLUMN store_id TYPE text;
-
-/*
-ERROR: XX000: cannot execute ALTER TABLE command involving partition column
-LOCATION: ErrorIfUnsupportedAlterTableStmt, multi_utility.c:2150
-*/
-```
-
-### Adding/Removing Constraints
-
-Using Hyperscale (Citus) allows you to continue to enjoy the safety of a
-relational database, including database constraints (see the PostgreSQL
-[docs](https://www.postgresql.org/docs/current/static/ddl-constraints.html)).
-Due to the nature of distributed systems, Hyperscale (Citus) will not
-cross-reference uniqueness constraints or referential integrity between worker
-nodes.
-
-To set up a foreign key between colocated distributed tables, always include
-the distribution column in the key. Including the distribution column may
-involve making the key compound.
-
-Foreign keys may be created in these situations:
-
-- between two local (non-distributed) tables,
-- between two reference tables,
-- between two [colocated](concepts-colocation.md) distributed
-  tables when the key includes the distribution column, or
-- as a distributed table referencing a [reference
-  table](concepts-nodes.md#type-2-reference-tables)
-
-Foreign keys from reference tables to distributed tables are not
-supported.
-
-> [!NOTE]
->
-> Primary keys and uniqueness constraints must include the distribution
-> column. Adding them to a non-distribution column will generate an error.
-
-This example shows how to create primary and foreign keys on distributed
-tables:
-
-```postgresql
--- Adding a primary key
--- We'll distribute these tables on the account_id. The ads and clicks tables must use compound keys that include account_id.
-
-ALTER TABLE accounts ADD PRIMARY KEY (id);
-ALTER TABLE ads ADD PRIMARY KEY (account_id, id);
-ALTER TABLE clicks ADD PRIMARY KEY (account_id, id);
-
--- Next distribute the tables
-
-SELECT create_distributed_table('accounts', 'id');
-SELECT create_distributed_table('ads', 'account_id');
-SELECT create_distributed_table('clicks', 'account_id');
-
--- Adding foreign keys
--- Note that this can happen before or after distribution, as long as there exists a uniqueness constraint on the target column(s) which can only be enforced before distribution.
-
-ALTER TABLE ads ADD CONSTRAINT ads_account_fk
- FOREIGN KEY (account_id) REFERENCES accounts (id);
-ALTER TABLE clicks ADD CONSTRAINT clicks_ad_fk
- FOREIGN KEY (account_id, ad_id) REFERENCES ads (account_id, id);
-```
-
-Similarly, include the distribution column in uniqueness constraints:
-
-```postgresql
--- Suppose we want every ad to use a unique image. Notice we can enforce it only per account when we distribute by account id.
-
-ALTER TABLE ads ADD CONSTRAINT ads_unique_image
- UNIQUE (account_id, image_url);
-```
-
-Not-null constraints can be applied to any column (distribution or not)
-because they require no lookups between workers.
-
-```postgresql
-ALTER TABLE ads ALTER COLUMN image_url SET NOT NULL;
-```
-
-### Using NOT VALID Constraints
-
-In some situations it can be useful to enforce constraints for new rows, while
-allowing existing non-conforming rows to remain unchanged. Hyperscale (Citus)
-supports this feature for CHECK constraints and foreign keys, using
-PostgreSQL's "NOT VALID" constraint designation.
-
-For example, consider an application that stores user profiles in a
-[reference table](concepts-nodes.md#type-2-reference-tables).
-
-```postgres
--- we're using the "text" column type here, but a real application might use "citext" which is available in a postgres contrib module
-
-CREATE TABLE users ( email text PRIMARY KEY );
-SELECT create_reference_table('users');
-```
-
-Imagine that, in the course of time, a few non-addresses get into the
-table.
-
-```postgres
-INSERT INTO users VALUES
- ('foo@example.com'), ('hacker12@aol.com'), ('lol');
-```
-
-We would like to validate the addresses, but PostgreSQL does not
-ordinarily allow us to add a CHECK constraint that fails for existing
-rows. However, it *does* allow a constraint marked not valid:
-
-```postgres
-ALTER TABLE users
-ADD CONSTRAINT syntactic_email
-CHECK (email ~
- '^[a-zA-Z0-9.!#$%&''*+/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$'
-) NOT VALID;
-```
-
-New rows are now protected.
-
-```postgres
-INSERT INTO users VALUES ('fake');
-
-/*
-ERROR: new row for relation "users_102010" violates
- check constraint "syntactic_email_102010"
-DETAIL: Failing row contains (fake).
-*/
-```
-
-Later, during non-peak hours, a database administrator can attempt to
-fix the bad rows and revalidate the constraint.
-
-```postgres
--- later, attempt to validate all rows
-ALTER TABLE users
-VALIDATE CONSTRAINT syntactic_email;
-```
-
-The PostgreSQL documentation has more information about NOT VALID and
-VALIDATE CONSTRAINT in the [ALTER
-TABLE](https://www.postgresql.org/docs/current/sql-altertable.html)
-section.
-
-### Adding/Removing Indices
-
-Hyperscale (Citus) supports adding and removing
-[indices](https://www.postgresql.org/docs/current/static/sql-createindex.html):
-
-```postgresql
--- Adding an index
-CREATE INDEX clicked_at_idx ON clicks USING BRIN (clicked_at);
-
--- Removing an index
-DROP INDEX clicked_at_idx;
-```
-
-Adding an index takes a write lock, which can be undesirable in a
-multi-tenant "system-of-record." To minimize application downtime,
-create the index
-[concurrently](https://www.postgresql.org/docs/current/static/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY)
-instead. This method requires more total work than a standard index
-build and takes longer to complete. However, since it
-allows normal operations to continue while the index is built, this
-method is useful for adding new indexes in a production environment.
-
-```postgresql
--- Adding an index without locking table writes
-CREATE INDEX CONCURRENTLY clicked_at_idx ON clicks USING BRIN (clicked_at);
-```
-### Types and Functions
-
-Creating custom SQL types and user-defined functions propagates them to worker
-nodes. However, creating such database objects in a transaction with
-distributed operations involves tradeoffs.
-
-Hyperscale (Citus) parallelizes operations such as `create_distributed_table()`
-across shards using multiple connections per worker. By contrast, when creating
-a database object, Citus propagates it to worker nodes using a single connection
-per worker. Combining the two operations in a single transaction may cause
-issues, because the parallel connections will not be able to see the object
-that was created over a single connection but not yet committed.
-
-Consider a transaction block that creates a type, a table, loads data, and
-distributes the table:
-
-```postgresql
-BEGIN;
--- type creation over a single connection:
-CREATE TYPE coordinates AS (x int, y int);
-CREATE TABLE positions (object_id text primary key, position coordinates);
--- data loading thus goes over a single connection:
-SELECT create_distributed_table('positions', 'object_id');
-\COPY positions FROM 'positions.csv'
-
-COMMIT;
-```
-
-Prior to Citus 11.0, Citus would defer creating the type on the worker nodes,
-and commit it separately when creating the distributed table. This enabled the
-data copying in `create_distributed_table()` to happen in parallel. However, it
-also meant that the type was not always present on the Citus worker nodes, or
-if the transaction rolled back, the type would remain on the worker nodes.
-
-With Citus 11.0, the default behavior changes to prioritize schema consistency
-between coordinator and worker nodes. The new behavior has a downside: if
-object propagation happens after a parallel command in the same transaction,
-then the transaction can no longer be completed, as highlighted by the ERROR in
-the code block below:
-
-```postgresql
-BEGIN;
-CREATE TABLE items (key text, value text);
--- parallel data loading:
-SELECT create_distributed_table('items', 'key');
-\COPY items FROM 'items.csv'
-CREATE TYPE coordinates AS (x int, y int);
-
-ERROR: cannot run type command because there was a parallel operation on a distributed table in the transaction
-```
-
-If you run into this issue, there are two simple workarounds:
-
-1. Set `citus.create_object_propagation` to `automatic` to defer creation
-   of the type in this situation, in which case there may be some inconsistency
-   between which database objects exist on different nodes.
-1. Set `citus.multi_shard_modify_mode` to `sequential` to disable per-node
-   parallelism. Data load in the same transaction might be slower. A sketch of
-   both settings appears after this list.
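-
-A minimal sketch of applying either workaround to the failing transaction above
-(only one of the two `SET` statements is needed):
-
-```postgresql
-BEGIN;
--- workaround 1: defer object propagation for this transaction
-SET LOCAL citus.create_object_propagation TO automatic;
--- workaround 2: disable per-node parallelism instead
--- SET LOCAL citus.multi_shard_modify_mode TO 'sequential';
-
-CREATE TABLE items (key text, value text);
-SELECT create_distributed_table('items', 'key');
-\COPY items FROM 'items.csv'
-CREATE TYPE coordinates AS (x int, y int);
-COMMIT;
-```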
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Useful diagnostic queries](howto-useful-diagnostic-queries.md)
postgresql Howto Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-monitoring.md
- Title: How to view metrics - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: How to access database metrics for Azure Database for PostgreSQL - Hyperscale (Citus)
----- Previously updated : 10/05/2021--
-# How to view metrics in Azure Database for PostgreSQL - Hyperscale (Citus)
--
-Resource metrics are available for every node of a Hyperscale (Citus) server
-group, and in aggregate across the nodes.
-
-## View metrics
-
-To access metrics for a Hyperscale (Citus) server group, open **Metrics**
-under **Monitoring** in the Azure portal.
--
-Choose a dimension and an aggregation, for instance **CPU percent** and
-**Max**, to view the metric aggregated across all nodes. For an explanation of
-each metric, see [here](concepts-monitoring.md#list-of-metrics).
--
-### View metrics per node
-
-Viewing each node's metrics separately on the same graph is called "splitting."
-To enable it, select **Apply splitting**:
--
-Select the value by which to split. For Hyperscale (Citus) nodes, choose **Server name**.
--
-The metrics will now be plotted in one color-coded line per node.
--
-## Next steps
-
-* Review Hyperscale (Citus) [monitoring concepts](concepts-monitoring.md)
postgresql Howto Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-private-access.md
- Title: Enable private access - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: How to set up private link in a server group for Azure Database for PostgreSQL - Hyperscale (Citus)
----- Previously updated : 01/14/2022--
-# Private access in Azure Database for PostgreSQL Hyperscale (Citus)
--
-[Private access](concepts-private-access.md) allows resources in an Azure
-virtual network to connect securely and privately to nodes in a Hyperscale
-(Citus) server group. This how-to assumes you've already created a virtual
-network and subnet. For an example of setting up prerequisites, see the
-[private access tutorial](tutorial-private-access.md).
-
-## Create a server group with a private endpoint
-
-1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
-
-2. Select **Databases** from the **New** page, and select **Azure Database for
- PostgreSQL** from the **Databases** page.
-
-3. For the deployment option, select the **Create** button under **Hyperscale
- (Citus) server group**.
-
-4. Fill out the new server details form with your resource group, desired
- server group name, location, and database user password.
-
-5. Select **Configure server group**, choose the desired plan, and select
- **Save**.
-
-6. Select **Next: Networking** at the bottom of the page.
-
-7. Select **Private access**.
-
-8. A screen called **Create private endpoint** appears. Choose appropriate values
- for your existing resources, and click **OK**:
-
- - **Resource group**
- - **Location**
- - **Name**
- - **Target sub-resource**
- - **Virtual network**
- - **Subnet**
- - **Integrate with private DNS zone**
-
-9. After creating the private endpoint, select **Review + create** to create
- your Hyperscale (Citus) server group.
-
-## Enable private access on an existing server group
-
-To create a private endpoint to a node in an existing server group, open the
-**Networking** page for the server group.
-
-1. Select **+ Add private endpoint**.
-
- :::image type="content" source="../media/howto-hyperscale-private-access/networking.png" alt-text="Networking screen":::
-
-2. In the **Basics** tab, confirm the **Subscription**, **Resource group**, and
- **Region**. Enter a **Name** for the endpoint, such as `my-server-group-eq`.
-
- > [!NOTE]
- >
- > Unless you have a good reason to choose otherwise, we recommend picking a
- > subscription and region that match those of your server group. The
- > default values for the form fields may not be correct; check them and
- > update if necessary.
-
-3. Select **Next: Resource >**. In the **Target sub-resource** choose the target
- node of the server group. Generally `coordinator` is the desired node.
-
-4. Select **Next: Configuration >**. Choose the desired **Virtual network** and
- **Subnet**. Customize the **Private DNS integration** or accept its default
- settings.
-
-5. Select **Next: Tags >** and add any desired tags.
-
-6. Finally, select **Review + create >**. Review the settings and select
- **Create** when satisfied.
-
-## Next steps
-
-* Learn more about [private access](concepts-private-access.md).
-* Follow a [tutorial](tutorial-private-access.md) to see private access in
- action.
postgresql Howto Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-read-replicas-portal.md
- Title: Manage read replicas - Azure portal - Azure Database for PostgreSQL - Hyperscale (Citus)
-description: Learn how to manage read replicas Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal.
----- Previously updated : 09/27/2022--
-# Create and manage read replicas in Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal
--
-In this article, you learn how to create and manage read replicas in Hyperscale
-(Citus) from the Azure portal. To learn more about read replicas, see the
-[overview](concepts-read-replicas.md).
--
-## Prerequisites
-
-A [Hyperscale (Citus) server group](quickstart-create-portal.md) to
-be the primary.
-
-## Create a read replica
-
-To create a read replica, follow these steps:
-
-1. Select an existing Azure Database for PostgreSQL server group to use as the
- primary.
-
-2. On the server group sidebar, under **Server group management**, select
- **Replication**.
-
-3. Select **Add Replica**.
-
-4. Enter a name for the read replica.
-
-5. Select a value from the **Location** drop-down.
-
-6. Select **OK** to confirm the creation of the replica.
-
-After the read replica is created, it can be viewed from the **Replication** window.
-
-> [!IMPORTANT]
->
-> Review the [considerations section of the Read Replica
-> overview](concepts-read-replicas.md#considerations).
->
-> Before a primary server group setting is updated to a new value, update the
-> replica setting to an equal or greater value. This action helps the replica
-> keep up with any changes made to the primary.
-
-## Delete a primary server group
-
-To delete a primary server group, you use the same steps as to delete a
-standalone Hyperscale (Citus) server group. From the Azure portal, follow these
-steps:
-
-1. In the Azure portal, select your primary Azure Database for PostgreSQL
- server group.
-
-2. Open the **Overview** page for the server group. Select **Delete**.
-
-3. Enter the name of the primary server group to delete. Select **Delete** to
- confirm deletion of the primary server group.
-
-## Promote a replica to an independent server group
-
-To promote a server group replica, follow these steps:
-
-1. In the Azure portal, open the **Replication** page for your server group.
-
-2. Select the **Promote** icon for the desired replica.
-
-3. Select the checkbox indicating you understand the action is irreversible.
-
-4. Select **Promote** to confirm.
-
-## Delete a replica
-
-You can delete a read replica similarly to how you delete a primary server
-group.
-
-- In the Azure portal, open the **Overview** page for the read replica. Select
-  **Delete**.
-
-You can also delete the read replica from the **Replication** window by
-following these steps:
-
-1. In the Azure portal, select your primary Hyperscale (Citus) server group.
-
-2. On the server group menu, under **Server group management**, select
- **Replication**.
-
-3. Select the read replica to delete.
-
-4. Select **Delete replica**.
-
-5. Enter the name of the replica to delete. Select **Delete** to confirm
- deletion of the replica.
-
-## Next steps
-
-* Learn more about [read replicas in Azure Database for
- PostgreSQL - Hyperscale (Citus)](concepts-read-replicas.md).
postgresql Howto Restart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-restart.md
- Title: Restart server - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Learn how to restart all nodes in a Hyperscale (Citus) server group from the Azure portal.
------ Previously updated : 05/06/2022--
-# Restart Azure Database for PostgreSQL - Hyperscale (Citus)
--
-You can restart your Hyperscale (Citus) server group from the Azure portal. Restarting the server group applies to all nodes; you can't selectively restart
-individual nodes. The restart applies to all PostgreSQL server processes in the nodes. Any applications attempting to use the database will experience
-connectivity downtime while the restart happens.
-
-1. In the Azure portal, navigate to the server group's **Overview** page.
-
-1. Select **Restart** on the top bar.
- > [!NOTE]
- > If the Restart button is not yet present for your server group, please open
- > an Azure support request to restart the server group.
-
-1. In the confirmation dialog, select **Restart all** to continue.
-
-**Next steps**
-
-- Changing some server parameters requires a restart. See the list of [all
- server parameters](reference-parameters.md) configurable on
- Hyperscale (Citus).
postgresql Howto Restore Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-restore-portal.md
- Title: Restore - Hyperscale (Citus) - Azure Database for PostgreSQL - Azure portal
-description: This article describes how to perform restore operations in Azure Database for PostgreSQL - Hyperscale (Citus) through the Azure portal.
----- Previously updated : 07/09/2021--
-# Point-in-time restore of a Hyperscale (Citus) server group
--
-This article provides step-by-step procedures to perform [point-in-time
-recoveries](concepts-backup.md#restore) for a Hyperscale (Citus)
-server group using backups. You can restore either to the earliest backup or to
-a custom restore point within your retention period.
-
-## Restoring to the earliest restore point
-
-Follow these steps to restore your Hyperscale (Citus) server group to its
-earliest existing backup.
-
-1. In the [Azure portal](https://portal.azure.com/), choose the server group
- that you want to restore.
-
-2. Click **Overview** from the left panel and click **Restore**.
-
- > [!IMPORTANT]
- > If the **Restore** button is not yet present for your server group,
- > please open an Azure support request to restore your server group.
-
-3. The restore page will ask you to choose between the **Earliest** and a
- **Custom** restore point, and will display the earliest date.
-
-4. Select **Earliest restore point**.
-
-5. Provide a new server group name in the **Restore to new server** field. The
- other fields (subscription, resource group, and location) are displayed but
- not editable.
-
-6. Click **OK**.
-
-7. A notification will be shown that the restore operation has been initiated.
-
-Finally, follow the [post-restore tasks](#post-restore-tasks).
-
-## Restoring to a custom restore point
-
-Follow these steps to restore your Hyperscale (Citus) server group to a date
-and time of your choosing.
-
-1. In the [Azure portal](https://portal.azure.com/), choose the server group
- that you want to restore.
-
-2. Click **Overview** from the left panel and click **Restore**.
-
- > [!IMPORTANT]
- > If the **Restore** button is not yet present for your server group,
- > please open an Azure support request to restore your server group.
-
-3. The restore page will ask you to choose between the **Earliest** and a
- **Custom** restore point, and will display the earliest date.
-
-4. Choose **Custom restore point**.
-
-5. Select date and time for **Restore point (UTC)**, and provide a new server
- group name in the **Restore to new server** field. The other fields
- (subscription, resource group, and location) are displayed but not editable.
-
-6. Click **OK**.
-
-7. A notification will be shown that the restore operation has been
- initiated.
-
-Finally, follow the [post-restore tasks](#post-restore-tasks).
-
-## Post-restore tasks
-
-After a restore, you should do the following to get your users and applications
-back up and running:
-
-* If the new server is meant to replace the original server, redirect clients
- and client applications to the new server
-* Ensure appropriate server-level firewall rules are in place for
-  users to connect. These rules aren't copied from the original server group.
-* Adjust PostgreSQL server parameters as needed. The parameters aren't copied
- from the original server group.
-* Ensure appropriate logins and database level permissions are in place.
-* Configure alerts, as appropriate.
-
-## Next steps
-
-* Learn more about [backup and restore](concepts-backup.md) in
- Hyperscale (Citus).
-* Set [suggested
- alerts](./howto-alert-on-metric.md#suggested-alerts) on Hyperscale
- (Citus) server groups.
postgresql Howto Scale Grow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-grow.md
- Title: Scale server group - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Adjust server group memory, disk, and CPU resources to deal with increased load
----- Previously updated : 12/10/2021--
-# Scale a Hyperscale (Citus) server group
--
-Azure Database for PostgreSQL - Hyperscale (Citus) provides self-service
-scaling to deal with increased load. The Azure portal makes it easy to add new
-worker nodes, and to increase the vCores of existing nodes. Adding nodes causes
-no downtime, and even moving shards to the new nodes (called [shard
-rebalancing](howto-scale-rebalance.md)) happens without interrupting
-queries.
-
-## Add worker nodes
-
-To add nodes, go to the **Compute + storage** tab in your Hyperscale (Citus) server
-group. Dragging the slider for **Worker node count** changes the value.
-
-> [!NOTE]
->
-> A Hyperscale (Citus) server group created with the [basic
-> tier](concepts-tiers.md) has no workers. Increasing the worker
-> count automatically graduates the server group to the standard tier. After
-> graduating a server group to the standard tier, you can't downgrade it back
-> to the basic tier.
--
-Click the **Save** button to make the changed value take effect.
-
-> [!NOTE]
-> Once increased and saved, the number of worker nodes cannot be decreased
-> using the slider.
-
-> [!NOTE]
-> To take advantage of newly added nodes you must [rebalance distributed table
-> shards](howto-scale-rebalance.md), which means moving some
-> [shards](concepts-distributed-data.md#shards) from existing nodes
-> to the new ones. Rebalancing can work in the background, and requires no
-> downtime.
-
-## Increase or decrease vCores on nodes
-
-In addition to adding new nodes, you can increase the capabilities of existing
-nodes. Adjusting compute capacity up and down can be useful for performance
-experiments, and short- or long-term changes to traffic demands.
-
-To change the vCores for all worker nodes, adjust the **vCores** slider under
-**Configuration (per worker node)**. The coordinator node's vCores can be
-adjusted independently. Adjust the **vCores** slider under **Configuration
-(coordinator node)**.
-
-> [!NOTE]
-> There is a vCore quota per Azure subscription per region. The default quota
-> should be more than enough to experiment with Hyperscale (Citus). If you
-> need more vCores for a region in your subscription, see how to [adjust
-> compute quotas](howto-compute-quota.md).
-
-## Increase storage on nodes
-
-In addition to adding new nodes, you can increase the disk space of existing
-nodes. Increasing disk space can allow you to do more with existing worker
-nodes before needing to add more worker nodes.
-
-To change the storage for all worker nodes, adjust the **storage** slider under
-**Configuration (per worker node)**. The coordinator node's storage can be
-adjusted independently. Adjust the **storage** slider under **Configuration
-(coordinator node)**.
-
-> [!NOTE]
-> Once increased and saved, the storage per node cannot be decreased using the
-> slider.
-
-## Next steps
-
-- Learn more about server group [performance options](resources-compute.md).
-- [Rebalance distributed table shards](howto-scale-rebalance.md)
-  so that all worker nodes can participate in parallel queries
-- See the sizes of distributed tables, and other [useful diagnostic
-  queries](howto-useful-diagnostic-queries.md).
postgresql Howto Scale Initial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-initial.md
- Title: Initial server group size - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Pick the right initial size for your use case
----- Previously updated : 08/03/2021--
-# Pick initial size for Hyperscale (Citus) server group
--
-The size of a server group, both number of nodes and their hardware capacity,
-is [easy to change](howto-scale-grow.md). However, you still need to
-choose an initial size for a new server group. Here are some tips for a
-reasonable choice.
-
-## Use-cases
-
-Hyperscale (Citus) is frequently used in the following ways.
-
-### Multi-tenant SaaS
-
-When migrating to Hyperscale (Citus) from an existing single-node PostgreSQL
-database instance, choose a cluster where the number of worker vCores and RAM
-in total equals that of the original instance. In such scenarios we have seen
-2-3x performance improvements because sharding improves resource utilization,
-allowing smaller indices etc.
-
-The vCore count is actually the only decision. RAM allocation is currently
-determined based on vCore count, as described in the [compute and
-storage](resources-compute.md) page. The coordinator node doesn't require as
-much RAM as workers, but there's no way to choose RAM and vCores independently.
-
-### Real-time analytics
-
-Total vCores: when working data fits in RAM, you can expect a linear
-performance improvement on Hyperscale (Citus) proportional to the number of
-worker cores. To determine the right number of vCores for your needs, consider
-the current latency for queries in your single-node database and the required
-latency in Hyperscale (Citus). Divide current latency by desired latency, and
-round the result.
-
-Worker RAM: the best case would be providing enough memory that most of the
-working set fits in memory. The type of queries your application uses affects
-memory requirements. You can run EXPLAIN ANALYZE on a query to determine how
-much memory it requires. Remember that vCores and RAM are scaled together as
-described in the [compute and storage](resources-compute.md) article.
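-
-A rough sketch of that check, using a hypothetical query (the table and filter
-are illustrative):
-
-```sql
--- Look at the reported memory usage (for example, sort and hash memory)
--- in the plan output to gauge the query's working-memory needs.
-EXPLAIN (ANALYZE, BUFFERS)
-SELECT device_id, count(*)
-FROM events
-WHERE created_at > now() - interval '1 day'
-GROUP BY device_id;
-```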
-
-## Choosing a Hyperscale (Citus) tier
-
-The sections above give an idea how many vCores and how much RAM are needed for
-each use case. You can meet these demands through a choice between two
-Hyperscale (Citus) tiers: the basic tier and the standard tier.
-
-The basic tier uses a single database node to perform processing, while the
-standard tier allows more nodes. The tiers are otherwise identical, offering
-the same features. In some cases, a single node's vCores and disk space can be
-scaled to suffice, and in other cases it requires the cooperation of multiple
-nodes.
-
-For a comparison of the tiers, see the [basic
-tier](concepts-tiers.md) concepts page.
-
-## Next steps
-
-- [Scale a server group](howto-scale-grow.md)
-- Learn more about server group [performance
-  options](concepts-configuration-options.md).
postgresql Howto Scale Rebalance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-rebalance.md
- Title: Rebalance shards - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Learn how to use the Azure portal to rebalance data in a server group using the Shard rebalancer.
------ Previously updated : 07/20/2021--
-# Rebalance shards in Hyperscale (Citus) server group
--
-To take advantage of newly added nodes, rebalance distributed table
-[shards](concepts-distributed-data.md#shards). Rebalancing moves shards from existing nodes to the new ones. Hyperscale (Citus) offers
-zero-downtime rebalancing, meaning queries continue without interruption during
-shard rebalancing.
-
-## Determine if the server group is balanced
-
-The Azure portal shows whether data is distributed equally between
-worker nodes in a server group or not. From the **Server group management** menu, select **Shard rebalancer**.
-
-- If data is skewed between workers: You'll see the message, **Rebalancing is recommended**, and a list of the size of each node.
-- If data is balanced: You'll see the message, **Rebalancing is not recommended at this time**.
-
-## Run the Shard rebalancer
-
-To start the Shard rebalancer, connect to the coordinator node of the server group and then run the [rebalance_table_shards](reference-functions.md#rebalance_table_shards) SQL function on distributed tables.
-
-The function rebalances all tables in the
-[colocation](concepts-colocation.md) group of the table named in its
-argument. You don't have to call the function for every distributed
-table. Instead, call it on a representative table from each colocation group.
-
-```sql
-SELECT rebalance_table_shards('distributed_table_name');
-```
-
-## Monitor rebalance progress
-
-You can view the rebalance progress from the Azure portal. From the **Server group management** menu, select **Shard rebalancer**. The
-message **Rebalancing is underway** displays with two tables:
-
-- The first table shows the number of shards moving into or out of a node. For
-  example, "6 of 24 moved in."
-- The second table shows progress per database table: name, shard count affected, data size affected, and rebalancing status.
-
-Select **Refresh** to update the page. When rebalancing is complete, you'll see the message **Rebalancing is not recommended at this time**.
-
-## Next steps
-
-- Learn more about server group [performance options](resources-compute.md).
-- [Scale a server group](howto-scale-grow.md) up or out
-- See the
-  [rebalance_table_shards](reference-functions.md#rebalance_table_shards)
-  reference material
postgresql Howto Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-ssl-connection-security.md
- Title: Transport Layer Security (TLS) - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Instructions and information to configure Azure Database for PostgreSQL - Hyperscale (Citus) and associated applications to properly use TLS connections.
----- Previously updated : 07/16/2020-
-# Configure TLS in Azure Database for PostgreSQL - Hyperscale (Citus)
-
-The Hyperscale (Citus) coordinator node requires client applications to connect with Transport Layer Security (TLS). Enforcing TLS between the database server and client applications helps keep data confidential in transit. Extra verification settings described below also protect against "man-in-the-middle" attacks.
-
-## Enforcing TLS connections
-Applications use a "connection string" to identify the destination database and settings for a connection. Different clients require different settings. To see a list of connection strings used by common clients, consult the **Connection Strings** section for your server group in the Azure portal.
-
-The TLS parameters `ssl` and `sslmode` vary based on the capabilities of the connector, for example `ssl=true` or `sslmode=require` or `sslmode=required`.
-
-## Ensure your application or framework supports TLS connections
-Some application frameworks don't enable TLS by default for PostgreSQL connections. However, without a secure connection an application can't connect to a Hyperscale (Citus) coordinator node. Consult your application's documentation to learn how to enable TLS connections.
-
-## Applications that require certificate verification for TLS connectivity
-In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file (.cer) to connect securely. The certificate to connect to an Azure Database for PostgreSQL - Hyperscale (Citus) is located at https://cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem. Download the certificate file and save it to your preferred location.
-
-> [!NOTE]
->
-> To check the certificate's authenticity, you can verify its SHA-256
-> fingerprint using the OpenSSL command line tool:
->
-> ```sh
-> openssl x509 -in DigiCertGlobalRootCA.crt.pem -noout -sha256 -fingerprint
->
-> # should output:
-> # 43:48:A0:E9:44:4C:78:CB:26:5E:05:8D:5E:89:44:B4:D8:4F:96:62:BD:26:DB:25:7F:89:34:A4:43:C7:01:61
-> ```
-
-### Connect using psql
-The following example shows how to connect to your Hyperscale (Citus) coordinator node using the psql command-line utility. Use the `sslmode=verify-full` connection string setting to enforce TLS certificate verification. Pass the local certificate file path to the `sslrootcert` parameter.
-
-Below is an example of the psql connection string:
-```
-psql "sslmode=verify-full sslrootcert=DigiCertGlobalRootCA.crt.pem host=mydemoserver.postgres.database.azure.com dbname=citus user=citus password=your_pass"
-```
-> [!TIP]
-> Confirm that the value passed to `sslrootcert` matches the file path for the certificate you saved.
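Once connected, you can confirm that the session is actually encrypted by querying the standard PostgreSQL `pg_stat_ssl` view; a quick check:

```sql
-- Check whether the current connection uses TLS, and which protocol version.
SELECT ssl, version, cipher
  FROM pg_stat_ssl
 WHERE pid = pg_backend_pid();
```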
-
-## Next steps
-Increase security further with [Firewall rules in Azure Database for PostgreSQL - Hyperscale (Citus)](concepts-firewall-rules.md).
postgresql Howto Table Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-table-size.md
- Title: Determine table size - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: How to find the true size of distributed tables in a Hyperscale (Citus) server group
----- Previously updated : 12/06/2021--
-# Determine table and relation size
--
-The usual way to find table sizes in PostgreSQL, `pg_total_relation_size`,
-drastically under-reports the size of distributed tables on Hyperscale (Citus).
-On a Hyperscale (Citus) server group, this function reveals only the size of
-tables on the coordinator node. In reality, the data in distributed tables
-lives on the worker nodes (in shards), not on the coordinator. A true measure
-of distributed table size is obtained as a sum of shard sizes. Hyperscale
-(Citus) provides helper functions to query this information.
-
-<table>
-<colgroup>
-<col width="40%" />
-<col width="59%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>Function</th>
-<th>Returns</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td>citus_relation_size(relation_name)</td>
-<td><ul>
-<li>Size of actual data in table (the "<a href="https://www.postgresql.org/docs/current/static/storage-file-layout.html">main fork</a>").</li>
-<li>A relation can be the name of a table or an index.</li>
-</ul></td>
-</tr>
-<tr class="even">
-<td>citus_table_size(relation_name)</td>
-<td><ul>
-<li><p>citus_relation_size plus:</p>
-<blockquote>
-<ul>
-<li>size of <a href="https://www.postgresql.org/docs/current/static/storage-fsm.html">free space map</a></li>
-<li>size of <a href="https://www.postgresql.org/docs/current/static/storage-vm.html">visibility map</a></li>
-</ul>
-</blockquote></li>
-</ul></td>
-</tr>
-<tr class="odd">
-<td>citus_total_relation_size(relation_name)</td>
-<td><ul>
-<li><p>citus_table_size plus:</p>
-<blockquote>
-<ul>
-<li>size of indices</li>
-</ul>
-</blockquote></li>
-</ul></td>
-</tr>
-</tbody>
-</table>
-
-These functions are analogous to three of the standard PostgreSQL [object size
-functions](https://www.postgresql.org/docs/current/static/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE),
-except if they can't connect to a node, they error out.
-
-## Example
-
-Here's how to list the sizes of all distributed tables:
-
-``` postgresql
-SELECT logicalrelid AS name,
- pg_size_pretty(citus_table_size(logicalrelid)) AS size
- FROM pg_dist_partition;
-```
-
-Output:
-
-```
-┌───────────────┬───────┐
-│ name          │ size  │
-├───────────────┼───────┤
-│ github_users  │ 39 MB │
-│ github_events │ 37 MB │
-└───────────────┴───────┘
-```
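As a quick sanity check, you can compare the coordinator-only measurement with the distributed measurement for a single table; the gap illustrates why `pg_total_relation_size` under-reports. A sketch (replace `github_users` with one of your own distributed tables):

```sql
-- Coordinator-only size vs. true distributed size (including indexes).
SELECT pg_size_pretty(pg_total_relation_size('github_users'))    AS coordinator_only,
       pg_size_pretty(citus_total_relation_size('github_users')) AS distributed_total;
```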
-
-## Next steps
-
-* Learn to [scale a server group](howto-scale-grow.md) to hold more data.
-* Distinguish [table types](concepts-nodes.md) in a Hyperscale (Citus) server group.
-* See other [useful diagnostic queries](howto-useful-diagnostic-queries.md).
postgresql Howto Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-troubleshoot-common-connection-issues.md
- Title: Troubleshoot connections - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Learn how to troubleshoot connection issues to Azure Database for PostgreSQL - Hyperscale (Citus)
-keywords: postgresql connection,connection string,connectivity issues,transient error,connection error
----- Previously updated : 12/17/2021--
-# Troubleshoot connection issues to Azure Database for PostgreSQL - Hyperscale (Citus)
--
-Connection problems may be caused by several things, such as:
-
-* Firewall settings
-* Connection time-out
-* Incorrect sign in information
-* Connection limit reached for server group
-* Issues with the infrastructure of the service
-* Service maintenance
-* The coordinator node failing over to new hardware
-
-Generally, connection issues to Hyperscale (Citus) can be classified as follows:
-
-* Transient errors (short-lived or intermittent)
-* Persistent or non-transient errors (errors that regularly recur)
-
-## Troubleshoot transient errors
-
-Transient errors occur for a number of reasons. The most common include system
-maintenance, hardware or software failures, and coordinator node vCore
-upgrades.
-
-Enabling high availability for Hyperscale (Citus) server group nodes can mitigate these
-types of problems automatically. However, your application should still be
-prepared to lose its connection briefly. Other events can also take longer to
-mitigate, such as when a large transaction causes a long-running recovery.
-
-### Steps to resolve transient connectivity issues
-
-1. Check the [Microsoft Azure Service
- Dashboard](https://azure.microsoft.com/status) for any known outages that
- occurred during the time in which the application was reporting errors.
-2. Applications that connect to a cloud service such as Hyperscale (Citus)
- should expect transient errors and react gracefully. For instance,
- applications should implement retry logic to handle these errors instead of
- surfacing them as application errors to users.
-3. As the server group approaches its resource limits, errors can seem like
- transient connectivity issues. Increasing node RAM, or adding worker nodes
- and rebalancing data may help.
-4. If connectivity problems continue, or last longer than 60 seconds, or happen
- more than once per day, file an Azure support request by
- selecting **Get Support** on the [Azure
- Support](https://azure.microsoft.com/support/options) site.
-
-## Troubleshoot persistent errors
-
-If the application persistently fails to connect to Hyperscale (Citus), the
-most common causes are firewall misconfiguration or user error.
-
-* Coordinator node firewall configuration: Make sure that the Hyperscale (Citus) server
- firewall is configured to allow connections from your client, including proxy
- servers and gateways.
-* Client firewall configuration: The firewall on your client must allow
-  connections to your database server. Some firewalls require that you allow not
-  only the application by name, but also the IP addresses and ports of the server.
-* User error: Double-check the connection string. You might have mistyped
- parameters like the server name. You can find connection strings for various
- language frameworks and psql in the Azure portal. Go to the **Connection
- strings** page in your Hyperscale (Citus) server group. Also keep in mind that
- Hyperscale (Citus) clusters have only one database and its predefined name is
- **citus**.
-
-### Steps to resolve persistent connectivity issues
-
-1. Set up [firewall rules](howto-manage-firewall-using-portal.md) to
- allow the client IP address. For temporary testing purposes only, set up a
- firewall rule using 0.0.0.0 as the starting IP address and using
- 255.255.255.255 as the ending IP address. That rule opens the server to all IP
- addresses. If the rule resolves your connectivity issue, remove it and
- create a firewall rule for an appropriately limited IP address or address
- range.
-2. On all firewalls between the client and the internet, make sure that port
- 5432 is open for outbound connections (and 6432 if using [connection
- pooling](concepts-connection-pool.md)).
-3. Verify your connection string and other connection settings.
-4. Check the service health in the dashboard.
-
-## Next steps
-
-* Learn the concepts of [Firewall rules in Azure Database for PostgreSQL - Hyperscale (Citus)](concepts-firewall-rules.md)
-* See how to [Manage firewall rules for Azure Database for PostgreSQL - Hyperscale (Citus)](howto-manage-firewall-using-portal.md)
postgresql Howto Troubleshoot Read Only https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-troubleshoot-read-only.md
- Title: Troubleshoot read-only access - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Learn why a Hyperscale (Citus) server group can become read-only, and what to do
-keywords: postgresql connection,read only
----- Previously updated : 08/03/2021--
-# Troubleshoot read-only access to Azure Database for PostgreSQL - Hyperscale (Citus)
--
-PostgreSQL can't run on a machine without some free disk space. To maintain
-access to PostgreSQL servers, it's necessary to prevent the disk space from
-running out.
-
-In Hyperscale (Citus), nodes are set to a read-only (RO) state when the disk is
-almost full. Preventing writes stops the disk from continuing to fill, and
-keeps the node available for reads. During the read-only state, you can take
-measures to free more disk space.
-
-Specifically, a Hyperscale (Citus) node becomes read-only when it has less than
-5 GiB of free storage left. When the server becomes read-only, all existing
-sessions are disconnected, and uncommitted transactions are rolled back. Any
-write operations and transaction commits will fail, while read queries will
-continue to work.
-
-## Ways to recover write-access
-
-### On the coordinator node
-
-* [Increase storage
- size](howto-scale-grow.md#increase-storage-on-nodes)
- on the coordinator node, and/or
-* Distribute local tables to worker nodes, or drop data. You'll need to run
-  `SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE` after you've
-  connected to the database and before you execute other commands, as shown in
-  the sketch below.
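For example, a minimal sketch of freeing space on the coordinator after connecting (the table name and retention window are placeholders):

```sql
-- Allow writes for this session only; the node itself stays read-only.
SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE;

-- Free space by removing data you no longer need (placeholder example),
-- or by distributing a large local table to the worker nodes.
DELETE FROM events_local WHERE created_at < now() - interval '90 days';
-- SELECT create_distributed_table('events_local', 'tenant_id');
```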
-
-### On a worker node
-
-* [Increase storage
- size](howto-scale-grow.md#increase-storage-on-nodes)
- on the worker nodes, and/or
-* [Rebalance data](howto-scale-rebalance.md) to other nodes, or drop
- some data.
- * You'll need to set the worker node as read-write temporarily. You can
- connect directly to worker nodes and use `SET SESSION CHARACTERISTICS` as
- described above for the coordinator node.
-
-## Prevention
-
-We recommend that you set up an alert to notify you when server storage is
-approaching the threshold. That way you can act early to avoid getting into the
-read-only state. For more information, see the documentation about [recommended
-alerts](howto-alert-on-metric.md#suggested-alerts).
-
-## Next steps
-
-* [Set up Azure
- alerts](howto-alert-on-metric.md#suggested-alerts)
- for advance notice so you can take action before reaching the read-only state.
-* Learn about [disk
- usage](https://www.postgresql.org/docs/current/diskusage.html) in PostgreSQL
- documentation.
-* Learn about [session
- characteristics](https://www.postgresql.org/docs/13/sql-set-transaction.html)
- in PostgreSQL documentation.
postgresql Howto Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-upgrade.md
- Title: Upgrade server group - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: This article describes how you can upgrade PostgreSQL and Citus in Azure Database for PostgreSQL - Hyperscale (Citus).
----- Previously updated : 08/29/2022--
-# Upgrade Hyperscale (Citus) server group
--
-These instructions describe how to upgrade to a new major version of PostgreSQL
-on all server group nodes.
-
-## Test the upgrade first
-
-Upgrading PostgreSQL causes more changes than you might imagine, because
-Hyperscale (Citus) will also upgrade the [database
-extensions](reference-extensions.md), including the Citus extension. Upgrades
-also require downtime in the database cluster.
-
-We strongly recommend that you test your application with the new PostgreSQL and
-Citus version before you upgrade your production environment. Also, see
-our list of [upgrade precautions](concepts-upgrade.md).
-
-A convenient way to test is to make a copy of your server group using
-[point-in-time restore](concepts-backup.md#restore). Upgrade the
-copy and test your application against it. Once you've verified everything
-works properly, upgrade the original server group.
-
-## Upgrade a server group in the Azure portal
-
-1. In the **Overview** section of a Hyperscale (Citus) server group, select the
- **Upgrade** button.
-1. A dialog appears, showing the current version of PostgreSQL and Citus.
- Choose a new PostgreSQL version in the **Upgrade to** list.
-1. Verify the value in **Citus version after upgrade** is what you expect.
- This value changes based on the PostgreSQL version you selected.
-1. Select the **Upgrade** button to continue.
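After the upgrade finishes, you can confirm the new versions from `psql`; for example:

```sql
-- Confirm the PostgreSQL and Citus versions after the upgrade.
SELECT version();
SELECT citus_version();
```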
-
-## Next steps
-
-* Learn about [supported PostgreSQL versions](reference-versions.md).
-* See [which extensions](reference-extensions.md) are packaged with
- each PostgreSQL version in a Hyperscale (Citus) server group.
-* Learn more about [upgrades](concepts-upgrade.md)
postgresql Howto Useful Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-useful-diagnostic-queries.md
- Title: Useful diagnostic queries - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Queries to learn about distributed data and more
----- Previously updated : 8/23/2021--
-# Useful Diagnostic Queries
--
-## Finding which node contains data for a specific tenant
-
-In the multi-tenant use case, we can determine which worker node contains the
-rows for a specific tenant. Hyperscale (Citus) groups the rows of distributed
-tables into shards, and places each shard on a worker node in the server group.
-
-Suppose our application's tenants are stores, and we want to find which worker
-node holds the data for store ID=4. In other words, we want to find the
-placement for the shard containing rows whose distribution column has value 4:
-
-``` postgresql
-SELECT shardid, shardstate, shardlength, nodename, nodeport, placementid
- FROM pg_dist_placement AS placement,
- pg_dist_node AS node
- WHERE placement.groupid = node.groupid
- AND node.noderole = 'primary'
- AND shardid = (
- SELECT get_shard_id_for_distribution_column('stores', 4)
- );
-```
-
-The output contains the host and port of the worker database.
-
-```
-┌─────────┬────────────┬─────────────┬───────────┬──────────┬─────────────┐
-│ shardid │ shardstate │ shardlength │ nodename  │ nodeport │ placementid │
-├─────────┼────────────┼─────────────┼───────────┼──────────┼─────────────┤
-│  102009 │          1 │           0 │ 10.0.0.16 │     5432 │           2 │
-└─────────┴────────────┴─────────────┴───────────┴──────────┴─────────────┘
-```
-
-## Finding the distribution column for a table
-
-Each distributed table in Hyperscale (Citus) has a "distribution column." (For
-more information, see [Distributed Data
-Modeling](howto-choose-distribution-column.md).) It can be
-important to know which column it is. For instance, when joining or filtering
-tables, you may see error messages with hints like, "add a filter to the
-distribution column."
-
-The `pg_dist_*` tables on the coordinator node contain diverse metadata about
-the distributed database. In particular `pg_dist_partition` holds information
-about the distribution column for each table. You can use a convenient utility
-function to look up the distribution column name from the low-level details in
-the metadata. Here's an example and its output:
-
-``` postgresql
--- create example table
-CREATE TABLE products (
- store_id bigint,
- product_id bigint,
- name text,
- price money,
-
- CONSTRAINT products_pkey PRIMARY KEY (store_id, product_id)
-);
--- pick store_id as distribution column
-SELECT create_distributed_table('products', 'store_id');
--- get distribution column name for products table
-SELECT column_to_column_name(logicalrelid, partkey) AS dist_col_name
- FROM pg_dist_partition
- WHERE logicalrelid='products'::regclass;
-```
-
-Example output:
-
-```
-┌───────────────┐
-│ dist_col_name │
-├───────────────┤
-│ store_id      │
-└───────────────┘
-```
-
-## Detecting locks
-
-This query will run across all worker nodes and identify locks, how long
-they've been open, and the offending queries:
-
-``` postgresql
-SELECT run_command_on_workers($cmd$
- SELECT array_agg(
- blocked_statement || ' $ ' || cur_stmt_blocking_proc
- || ' $ ' || cnt::text || ' $ ' || age
- )
- FROM (
- SELECT blocked_activity.query AS blocked_statement,
- blocking_activity.query AS cur_stmt_blocking_proc,
- count(*) AS cnt,
- age(now(), min(blocked_activity.query_start)) AS "age"
- FROM pg_catalog.pg_locks blocked_locks
- JOIN pg_catalog.pg_stat_activity blocked_activity
- ON blocked_activity.pid = blocked_locks.pid
- JOIN pg_catalog.pg_locks blocking_locks
- ON blocking_locks.locktype = blocked_locks.locktype
- AND blocking_locks.DATABASE IS NOT DISTINCT FROM blocked_locks.DATABASE
- AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
- AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
- AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
- AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
- AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
- AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
- AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
- AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
- AND blocking_locks.pid != blocked_locks.pid
- JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
- WHERE NOT blocked_locks.GRANTED
- AND blocking_locks.GRANTED
- GROUP BY blocked_activity.query,
- blocking_activity.query
- ORDER BY 4
- ) a
-$cmd$);
-```
-
-Example output:
-
-```
-┌────────────────────────────────────────────────────────────────────────────────────┐
-│                               run_command_on_workers                                │
-├────────────────────────────────────────────────────────────────────────────────────┤
-│ (10.0.0.16,5432,t,"")                                                               │
-│ (10.0.0.20,5432,t,"{""update ads_102277 set name = 'new name' where id = 1; $ sel… │
-│ …ect * from ads_102277 where id = 1 for update; $ 1 $ 00:00:03.729519""}")          │
-└────────────────────────────────────────────────────────────────────────────────────┘
-```
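Once you've identified the blocking statement and the node it runs on, you can connect to that node and cancel or terminate the offending backend with the standard PostgreSQL admin functions. A sketch (the `pid` value is a placeholder taken from `pg_locks` / `pg_stat_activity` on that node):

```sql
-- Ask the blocking backend to cancel its current query (gentler).
SELECT pg_cancel_backend(12345);

-- Forcefully terminate the backend if cancellation is not enough.
SELECT pg_terminate_backend(12345);
```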
-
-## Querying the size of your shards
-
-This query will provide you with the size of every shard of a given
-distributed table, called `my_distributed_table`:
-
-``` postgresql
-SELECT *
-FROM run_command_on_shards('my_distributed_table', $cmd$
- SELECT json_build_object(
- 'shard_name', '%1$s',
- 'size', pg_size_pretty(pg_table_size('%1$s'))
- );
-$cmd$);
-```
-
-Example output:
-
-```
-┌─────────┬─────────┬─────────────────────────────────────────────────────────────────────┐
-│ shardid │ success │ result                                                              │
-├─────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
-│  102008 │ t       │ {"shard_name" : "my_distributed_table_102008", "size" : "2416 kB"} │
-│  102009 │ t       │ {"shard_name" : "my_distributed_table_102009", "size" : "3960 kB"} │
-│  102010 │ t       │ {"shard_name" : "my_distributed_table_102010", "size" : "1624 kB"} │
-│  102011 │ t       │ {"shard_name" : "my_distributed_table_102011", "size" : "4792 kB"} │
-└─────────┴─────────┴─────────────────────────────────────────────────────────────────────┘
-```
-
-## Querying the size of all distributed tables
-
-This query gets a list of the sizes for each distributed table plus the
-size of their indices.
-
-``` postgresql
-SELECT
- tablename,
- pg_size_pretty(
- citus_total_relation_size(tablename::text)
- ) AS total_size
-FROM pg_tables pt
-JOIN pg_dist_partition pp
- ON pt.tablename = pp.logicalrelid::text
-WHERE schemaname = 'public';
-```
-
-Example output:
-
-```
-┌───────────────┬────────────┐
-│ tablename     │ total_size │
-├───────────────┼────────────┤
-│ github_users  │ 39 MB      │
-│ github_events │ 98 MB      │
-└───────────────┴────────────┘
-```
-
-Note there are other Hyperscale (Citus) functions for querying distributed
-table size, see [determining table size](howto-table-size.md).
-
-## Identifying unused indices
-
-The following query will identify unused indexes on worker nodes for a given
-distributed table (`my_distributed_table`):
-
-``` postgresql
-SELECT *
-FROM run_command_on_shards('my_distributed_table', $cmd$
- SELECT array_agg(a) as infos
- FROM (
- SELECT (
- schemaname || '.' || relname || '##' || indexrelname || '##'
- || pg_size_pretty(pg_relation_size(i.indexrelid))::text
- || '##' || idx_scan::text
- ) AS a
- FROM pg_stat_user_indexes ui
- JOIN pg_index i
- ON ui.indexrelid = i.indexrelid
- WHERE NOT indisunique
- AND idx_scan < 50
- AND pg_relation_size(relid) > 5 * 8192
- AND (schemaname || '.' || relname)::regclass = '%s'::regclass
- ORDER BY
- pg_relation_size(i.indexrelid) / NULLIF(idx_scan, 0) DESC nulls first,
- pg_relation_size(i.indexrelid) DESC
- ) sub
-$cmd$);
-```
-
-Example output:
-
-```
-┌─────────┬─────────┬──────────────────────────────────────────────────────────────────────┐
-│ shardid │ success │ result                                                               │
-├─────────┼─────────┼──────────────────────────────────────────────────────────────────────┤
-│  102008 │ t       │                                                                      │
-│  102009 │ t       │ {"public.my_distributed_table_102009##some_index_102009##28 MB##0"} │
-│  102010 │ t       │                                                                      │
-│  102011 │ t       │                                                                      │
-└─────────┴─────────┴──────────────────────────────────────────────────────────────────────┘
-```
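If an index turns out to be unused, you can drop it on the coordinator against the distributed table; Citus propagates the DDL to the shards on the worker nodes. A sketch with a placeholder index name:

```sql
-- Drop an unused index on the distributed table; the change is applied
-- to the corresponding shard indexes on the workers as well.
DROP INDEX IF EXISTS some_index;
```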
-
-## Monitoring client connection count
-
-The following query counts the connections open on the coordinator, and groups
-them by state.
-
-``` sql
-SELECT state, count(*)
-FROM pg_stat_activity
-GROUP BY state;
-```
-
-Example output:
-
-```
-┌────────┬───────┐
-│ state  │ count │
-├────────┼───────┤
-│ active │     3 │
-│ idle   │     3 │
-│ ∅      │     6 │
-└────────┴───────┘
-```
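To see connection counts on the worker nodes as well, you can wrap a similar query in `run_command_on_workers`, which the earlier examples already use. A sketch:

```sql
-- Count non-idle backends on each worker node.
SELECT nodename, result AS non_idle_connections
  FROM run_command_on_workers($cmd$
    SELECT count(*) FROM pg_stat_activity WHERE state <> 'idle'
  $cmd$);
```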
-
-## Viewing system queries
-
-### Active queries
-
-The `pg_stat_activity` view shows which queries are currently executing. You
-can filter to find the actively executing ones, along with the process ID of
-their backend:
-
-```sql
-SELECT pid, query, state
- FROM pg_stat_activity
- WHERE state != 'idle';
-```
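On a Hyperscale (Citus) server group you can also look at distributed queries across the whole cluster. A sketch using the Citus `citus_dist_stat_activity` view (column names may vary slightly by Citus version):

```sql
-- Show distributed queries currently running across the cluster.
SELECT query, state
  FROM citus_dist_stat_activity
 WHERE state != 'idle';
```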
-
-### Why are queries waiting
-
-We can also query to see the most common reasons that non-idle queries are
-waiting. For an explanation of the reasons, check the [PostgreSQL
-documentation](https://www.postgresql.org/docs/current/monitoring-stats.html#WAIT-EVENT-TABLE).
-
-```sql
-SELECT wait_event || ':' || wait_event_type AS type, count(*) AS number_of_occurences
- FROM pg_stat_activity
- WHERE state != 'idle'
-GROUP BY wait_event, wait_event_type
-ORDER BY number_of_occurences DESC;
-```
-
-Example output when running `pg_sleep` in a separate query concurrently:
-
-```
-┌─────────────────┬──────────────────────┐
-│ type            │ number_of_occurences │
-├─────────────────┼──────────────────────┤
-│ ∅               │                    1 │
-│ PgSleep:Timeout │                    1 │
-└─────────────────┴──────────────────────┘
-```
-
-## Index hit rate
-
-This query will provide you with your index hit rate across all nodes. Index
-hit rate is useful in determining how often indices are used when querying.
-A value of 95% or higher is ideal.
-
-``` postgresql
--- on coordinator
-SELECT 100 * (sum(idx_blks_hit) - sum(idx_blks_read)) / sum(idx_blks_hit) AS index_hit_rate
- FROM pg_statio_user_indexes;
--- on workers
-SELECT nodename, result as index_hit_rate
-FROM run_command_on_workers($cmd$
- SELECT 100 * (sum(idx_blks_hit) - sum(idx_blks_read)) / sum(idx_blks_hit) AS index_hit_rate
- FROM pg_statio_user_indexes;
-$cmd$);
-```
-
-Example output:
-
-```
-┌───────────┬────────────────┐
-│ nodename  │ index_hit_rate │
-├───────────┼────────────────┤
-│ 10.0.0.16 │ 96.0           │
-│ 10.0.0.20 │ 98.0           │
-└───────────┴────────────────┘
-```
-
-## Cache hit rate
-
-Most applications typically access a small fraction of their total data at
-once. PostgreSQL keeps frequently accessed data in memory to avoid slow reads
-from disk. You can see statistics about it in the
-[pg_statio_user_tables](https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STATIO-ALL-TABLES-VIEW)
-view.
-
-An important measurement is what percentage of data comes from the memory cache
-vs the disk in your workload:
-
-``` postgresql
--- on coordinator
-SELECT
- sum(heap_blks_read) AS heap_read,
- sum(heap_blks_hit) AS heap_hit,
- 100 * sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) AS cache_hit_rate
-FROM
- pg_statio_user_tables;
--- on workers
-SELECT nodename, result as cache_hit_rate
-FROM run_command_on_workers($cmd$
- SELECT
- 100 * sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) AS cache_hit_rate
- FROM
- pg_statio_user_tables;
-$cmd$);
-```
-
-Example output:
-
-```
-┌───────────┬──────────┬─────────────────────┐
-│ heap_read │ heap_hit │ cache_hit_rate      │
-├───────────┼──────────┼─────────────────────┤
-│         1 │      132 │ 99.2481203007518796 │
-└───────────┴──────────┴─────────────────────┘
-```
-
-If the ratio is significantly lower than 99%, consider increasing the cache
-available to your database.
-
-## Next steps
-
-* Learn about other [system tables](reference-metadata.md)
- that are useful for diagnostics
postgresql Moved https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/moved.md
+
+ Title: Where is Azure Database for PostgreSQL - Hyperscale (Citus)
+description: Hyperscale (Citus) is now Azure Cosmos DB for PostgreSQL
+++++
+recommendations: false
Last updated : 10/12/2022++
+# Azure Database for PostgreSQL - Hyperscale (Citus) is now Azure Cosmos DB for PostgreSQL
+
+Existing Hyperscale (Citus) server groups will automatically become Azure
+Cosmos DB for PostgreSQL clusters under the new name, with zero downtime.
+All features and pricing, including reserved compute pricing and
+regional availability, will be preserved under the new name.
+
+Once the name change is complete, all Hyperscale (Citus) information such as
+product overview, pricing information, documentation, and more will be moved
+under the Azure Cosmos DB sections in the Azure portal.
+
+> [!NOTE]
+>
+> The name change in the Azure portal for existing Hyperscale (Citus) customers
+> will happen at the end of October. During this process, the cluster may
+> temporarily disappear in the Azure portal in both Hyperscale (Citus) and
+> Azure Cosmos DB. There will be no service downtime for users of the database,
+> only a possible interruption in the portal administrative interface.
+
+## Find your cluster in the renamed service
+
+View the list of Azure Cosmos DB for PostgreSQL clusters in your subscription.
+
+# [Direct link](#tab/direct)
+
+Go to the [list of Azure Cosmos DB for PostgreSQL clusters](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.DocumentDb%2FdatabaseAccounts) in the Azure portal.
+
+# [Portal search](#tab/portal-search)
+
+In the [Azure portal](https://portal.azure.com), search for `cosmosdb` and
+select **Azure Cosmos DB** from the results.
++++
+Your cluster will appear in this list. Once it's listed in Azure Cosmos DB, it
+will no longer appear as an Azure Database for PostgreSQL server group.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Try Azure Cosmos DB for PostgreSQL >](../../cosmos-db/postgresql/quickstart-create-portal.md)
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/overview.md
- Title: Overview of Azure Database for PostgreSQL - Hyperscale (Citus)
-description: A guide to running Hyperscale (Citus) on Azure
------
-recommendations: false
Previously updated : 08/11/2022-
-<!-- markdownlint-disable MD033 -->
-<!-- markdownlint-disable MD026 -->
-
-# Azure Database for PostgreSQL - Hyperscale (Citus)
-
-## The superpower of distributed tables
-
-<!-- markdownlint-disable MD034 -->
-
-> [!VIDEO https://www.youtube.com/embed/Q30KQ5wRGxU]
-
-<!-- markdownlint-enable MD034 -->
-
-Hyperscale (Citus) is PostgreSQL extended with the superpower of "distributed
-tables." This superpower enables you to build highly scalable relational apps.
-You can start building apps on a single node server group, the same way you
-would with PostgreSQL. As your app's scalability and performance requirements
-grow, you can seamlessly scale to multiple nodes by transparently distributing
-your tables.
-
-Real-world customer applications built on Citus include SaaS apps, real-time
-operational analytics apps, and high throughput transactional apps. These apps
-span various verticals such as sales & marketing automation, healthcare,
-IOT/telemetry, finance, logistics, and search.
-
-![distributed architecture](../media/overview-hyperscale/distributed.png)
-
-## Implementation checklist
-
-As you're looking to create applications with Hyperscale (Citus), ensure you've
-reviewed the following topics:
-
-<!-- markdownlint-disable MD032 -->
-
-> [!div class="checklist"]
-> - Learn how to [build scalable apps](quickstart-build-scalable-apps-overview.md)
-> - Connect and query with your [app stack](quickstart-app-stacks-overview.md)
-> - See how the [Hyperscale (Citus) API](reference-overview.md) extends
-> PostgreSQL, and try [useful diagnostic
-> queries](howto-useful-diagnostic-queries.md)
-> - Pick the best [server group size](howto-scale-initial.md) for your workload
-> - [Monitor](howto-monitoring.md) server group performance
-> - Ingest data efficiently with [Azure Stream Analytics](howto-ingest-azure-stream-analytics.md)
-> and [Azure Data Factory](howto-ingest-azure-data-factory.md)
-
-<!-- markdownlint-enable MD032 -->
-
-## Fully managed, resilient database
-
-As Hyperscale (Citus) is a fully managed service, it has all the features for
-worry-free operation in production. Features include:
-
-* automatic high availability
-* backups
-* built-in pgBouncer
-* read-replicas
-* easy monitoring
-* private endpoints
-* encryption
-* and more
-
-> [!div class="nextstepaction"]
-> [Try the quickstart >](quickstart-create-portal.md)
-
-## Always the latest PostgreSQL features
-
-Hyperscale (Citus) is built around the open-source
-[Citus](https://github.com/citusdata/citus) extension to PostgreSQL. Because
-Citus is an extension--not a fork--of the underlying database, it always
-supports the latest PostgreSQL version within one day of release.
-
-Your apps can use the newest PostgreSQL features and extensions, such as
-native partitioning for performance, JSONB support to store and query
-unstructured data, and geospatial functionality via the PostGIS extension.
-It's the speed you need, on the database you love.
-
-## Start simply, scale seamlessly
-
-The Basic Tier allows you to deploy Hyperscale (Citus) as a single node, while
-having the superpower of distributing tables. At a few dollars a day, it's the
-most cost-effective way to experience Hyperscale (Citus). Later, if your
-application requires greater scale, you can add nodes and rebalance your data.
-
-![graduating to standard tier](../media/overview-hyperscale/graduate.png)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Try the quickstart >](quickstart-create-portal.md)
postgresql Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/product-updates.md
- Title: Product updates for Azure Database for PostgreSQL - Hyperscale (Citus)
-description: New features and features in preview
------ Previously updated : 09/27/2022--
-# Product updates for PostgreSQL - Hyperscale (Citus)
--
-## Updates feed
-
-The Microsoft Azure website lists newly available features per product, plus
-features in preview and development. Check the [Hyperscale (Citus)
-updates](https://azure.microsoft.com/updates/?category=databases&query=citus)
-section for the latest. An RSS feed is also available on that page.
-
-## Features in preview
-
-Azure Database for PostgreSQL - Hyperscale (Citus) offers
-previews for unreleased features. Preview versions are provided
-without a service level agreement, and aren't recommended for
-production workloads. Certain features might not be supported or
-might have constrained capabilities. For more information, see
-[Supplemental Terms of Use for Microsoft Azure
-Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
-
-Here are the features currently available for preview:
-
-* **[pgAudit](concepts-audit.md)**. Provides detailed
- session and object audit logging via the standard PostgreSQL
- logging facility. It produces audit logs required to pass
- certain government, financial, or ISO certification audits.
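If you enable the preview, you can verify from `psql` whether the pgAudit library is loaded and inspect its current settings; a quick check:

```sql
-- Returns rows only when the pgaudit library is loaded.
SELECT name, setting
  FROM pg_settings
 WHERE name LIKE 'pgaudit%';
```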
-
-## Contact us
-
-Let us know about your experience using preview features, by emailing [Ask
-Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com).
-(This email address isn't a technical support channel. For technical problems,
-open a [support
-request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).)
postgresql Quickstart App Stacks Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-app-stacks-csharp.md
- Title: C# app to connect and query Hyperscale (Citus)
-description: Learn to query Hyperscale (Citus) using C#
-----
-recommendations: false
Previously updated : 08/24/2022--
-# C# app to connect and query Hyperscale (Citus)
--
-In this document, you'll learn how to connect to a Hyperscale (Citus) database using a C# application. You'll see how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you're familiar with developing using C#, and are new to working with Hyperscale (Citus).
-
-> [!TIP]
->
-> The process of creating a C# app with Hyperscale (Citus) is the same as working with ordinary PostgreSQL.
-
-## Prerequisites
-
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free)
-* Create a Hyperscale (Citus) server group by following [Create Hyperscale (Citus) server group](quickstart-create-portal.md)
-* Install the [.NET SDK](https://dotnet.microsoft.com/download) for your platform (Windows, Ubuntu Linux, or macOS).
-* Install [Visual Studio](https://www.visualstudio.com/downloads/) to build your project.
-* Install the [Npgsql](https://www.nuget.org/packages/Npgsql/) NuGet package in Visual Studio.
-
-## Get database connection information
-
-To get the database credentials, you can use the **Connection strings** tab in the Azure portal. See the following screenshot.
-
-![Diagram showing C# connection string.](../media/howto-app-stacks/03-csharp-connection-string.png)
-
-## Step 1: Connect, create table, and insert data
-
-Use the following code to connect and load the data using CREATE TABLE and INSERT INTO SQL statements. The code uses these `NpgsqlConnection` and `NpgsqlCommand` class methods:
-
-* [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to Hyperscale (Citus),
-* [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) to set the CommandText property
-* [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run database commands.
--
-```csharp
-using System;
-using Npgsql;
-namespace Driver
-{
- public class AzurePostgresCreate
- {
-
- static void Main(string[] args)
- {
- // Replace below argument with connection string from portal.
- var connStr = new NpgsqlConnectionStringBuilder("Server = <host> Database = citus; Port = 5432; User Id = citus; Password = {your password}; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50 ");
-
- connStr.TrustServerCertificate = true;
-
- using (var conn = new NpgsqlConnection(connStr.ToString()))
- {
- Console.Out.WriteLine("Opening connection");
- conn.Open();
- using (var command = new NpgsqlCommand("DROP TABLE IF EXISTS pharmacy;", conn))
- {
- command.ExecuteNonQuery();
- Console.Out.WriteLine("Finished dropping table (if existed)");
- }
- using (var command = new NpgsqlCommand("CREATE TABLE pharmacy (pharmacy_id integer ,pharmacy_name text,city text,state text,zip_code integer);", conn))
- {
- command.ExecuteNonQuery();
- Console.Out.WriteLine("Finished creating table");
- }
- using (var command = new NpgsqlCommand("CREATE INDEX idx_pharmacy_id ON pharmacy(pharmacy_id);", conn))
- {
- command.ExecuteNonQuery();
- Console.Out.WriteLine("Finished creating index");
- }
- using (var command = new NpgsqlCommand("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (@n1, @q1, @a, @b, @c)", conn))
- {
- command.Parameters.AddWithValue("n1", 0);
- command.Parameters.AddWithValue("q1", "Target");
- command.Parameters.AddWithValue("a", "Sunnyvale");
- command.Parameters.AddWithValue("b", "California");
- command.Parameters.AddWithValue("c", 94001);
- int nRows = command.ExecuteNonQuery();
- Console.Out.WriteLine(String.Format("Number of rows inserted={0}", nRows));
- }
-
- }
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-```
-
-## Step 2: Use the super power of distributed tables
-
-Hyperscale (Citus) gives you [the super power of distributing tables](overview.md#the-superpower-of-distributed-tables) across multiple nodes for scalability. The command below enables you to distribute a table. You can learn more about `create_distributed_table` and the distribution column [here](quickstart-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key).
-
-> [!TIP]
->
-> Distributing your tables is optional if you are using the Basic Tier of Hyperscale (Citus), which is a single-node server group.
-
-```csharp
-using System;
-using Npgsql;
-namespace Driver
-{
- public class AzurePostgresCreate
- {
-
- static void Main(string[] args)
- {
- // Replace below argument with connection string from portal.
- var connStr = new NpgsqlConnectionStringBuilder("Server = <host> Database = citus; Port = 5432; User Id = citus; Password = {your password}; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50");
-
- connStr.TrustServerCertificate = true;
-
- using (var conn = new NpgsqlConnection(connStr.ToString()))
- {
- Console.Out.WriteLine("Opening connection");
- conn.Open();
- using (var command = new NpgsqlCommand("select create_distributed_table('pharmacy','pharmacy_id');", conn))
- {
- command.ExecuteNonQuery();
- Console.Out.WriteLine("Finished distributing the table");
- }
-
- }
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-```
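To confirm from `psql` that the table was distributed, you can check the Citus metadata (the same `pg_dist_partition` table used in the diagnostic queries article); for example:

```sql
-- Lists the table if it has been distributed, along with its partition method.
SELECT logicalrelid, partmethod
  FROM pg_dist_partition
 WHERE logicalrelid = 'pharmacy'::regclass;
```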
-
-## Step 3: Read data
-
-Use the following code to connect and read the data using a SELECT SQL statement. The code uses these `NpgsqlConnection`, `NpgsqlCommand`, and `NpgsqlDataReader` class methods:
-
-* [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to Hyperscale (Citus).
-* [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) and [ExecuteReader()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteReader) to run the database commands.
-* [Read()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_Read) to advance to the record in the results.
-* [GetInt32()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_GetInt32_System_Int32_) and [GetString()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_GetString_System_Int32_) to parse the values in the record.
-
-```csharp
-using System;
-using Npgsql;
-namespace Driver
-{
- public class read
- {
-
- static void Main(string[] args)
- {
- // Replace below argument with connection string from portal.
- var connStr = new NpgsqlConnectionStringBuilder("Server = <host> Database = citus; Port = 5432; User Id = citus; Password = {your password}; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50 ");
-
- connStr.TrustServerCertificate = true;
-
- using (var conn = new NpgsqlConnection(connStr))
- {
- Console.Out.WriteLine("Opening connection");
- conn.Open();
- using (var command = new NpgsqlCommand("SELECT * FROM pharmacy", conn))
- {
- var reader = command.ExecuteReader();
- while (reader.Read())
- {
- Console.WriteLine(
- string.Format(
- "Reading from table=({0}, {1}, {2}, {3}, {4})",
- reader.GetInt32(0).ToString(),
- reader.GetString(1),
- reader.GetString(2),
- reader.GetString(3),
- reader.GetInt32(4).ToString()
- )
- );
- }
- reader.Close();
- }
- }
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-```
-
-## Step 4: Update data
-
-Use the following code to connect and update data using an UPDATE SQL statement.
-
-```csharp
-using System;
-using Npgsql;
-namespace Driver
-{
- public class AzurePostgresUpdate
- {
- static void Main(string[] args)
- {
- // Replace below argument with connection string from portal.
- var connStr = new NpgsqlConnectionStringBuilder("Server = <host> Database = citus; Port = 5432; User Id = citus; Password = {your password}; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50 ");
-
- connStr.TrustServerCertificate = true;
-
- using (var conn = new NpgsqlConnection(connStr.ToString()))
- {
- Console.Out.WriteLine("Opening connection");
- conn.Open();
- using (var command = new NpgsqlCommand("UPDATE pharmacy SET city = @q WHERE pharmacy_id = @n", conn))
- {
- command.Parameters.AddWithValue("n", 0);
- command.Parameters.AddWithValue("q", "guntur");
- int nRows = command.ExecuteNonQuery();
- Console.Out.WriteLine(String.Format("Number of rows updated={0}", nRows));
- }
- }
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-```
-
-## Step 5: Delete data
-
-Use the following code to connect and delete data using a DELETE SQL statement.
-
-```csharp
-using System;
-using Npgsql;
-namespace Driver
-{
- public class AzurePostgresDelete
- {
-
- static void Main(string[] args)
- {
- // Replace below argument with connection string from portal.
- var connStr = new NpgsqlConnectionStringBuilder("Server = <host> Database = citus; Port = 5432; User Id = citus; Password = {your password}; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50 ");
-
- connStr.TrustServerCertificate = true;
-
- using (var conn = new NpgsqlConnection(connStr.ToString()))
- {
-
- Console.Out.WriteLine("Opening connection");
- conn.Open();
- using (var command = new NpgsqlCommand("DELETE FROM pharmacy WHERE pharmacy_id = @n", conn))
- {
- command.Parameters.AddWithValue("n", 0);
- int nRows = command.ExecuteNonQuery();
- Console.Out.WriteLine(String.Format("Number of rows deleted={0}", nRows));
- }
- }
- Console.WriteLine("Press RETURN to exit");
- Console.ReadLine();
- }
- }
-}
-```
-
-## COPY command for super fast ingestion
-
-The COPY command can yield [tremendous throughput](https://www.citusdata.com/blog/2016/06/15/copy-postgresql-distributed-tables) while ingesting data into Hyperscale (Citus). The COPY command can ingest data in files, or from micro-batches of data in memory for real-time ingestion.
-
-### COPY command to load data from a file
-
-The following code is an example for copying data from a CSV file to a database table.
-
-It requires the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv).
-
-```csharp
-using System;
-using System.IO;
-using Npgsql;
-public class csvtotable
-{
-
- static void Main(string[] args)
- {
- String sDestinationSchemaAndTableName = "pharmacy";
- String sFromFilePath = "C:\\Users\\Documents\\pharmacies.csv";
-
- // Replace below argument with connection string from portal.
- var connStr = new NpgsqlConnectionStringBuilder("Server = <host> Database = citus; Port = 5432; User Id = citus; Password = {your password}; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50 ");
-
- connStr.TrustServerCertificate = true;
-
- NpgsqlConnection conn = new NpgsqlConnection(connStr.ToString());
- NpgsqlCommand cmd = new NpgsqlCommand();
-
- conn.Open();
-
- if (File.Exists(sFromFilePath))
- {
- using (var writer = conn.BeginTextImport("COPY " + sDestinationSchemaAndTableName + " FROM STDIN WITH(FORMAT CSV, HEADER true,NULL ''); "))
- {
- foreach (String sLine in File.ReadAllLines(sFromFilePath))
- {
- writer.WriteLine(sLine);
- }
- }
-            Console.WriteLine("csv file data copied successfully");
- }
- }
-}
-```
-
-### COPY command to load data in-memory
-
-The following code is an example for copying in-memory data to table.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using Npgsql;
-using NpgsqlTypes;
-namespace Driver
-{
- public class InMemory
- {
-
- static async Task Main(string[] args)
- {
-
- // Replace below argument with connection string from portal.
- var connStr = new NpgsqlConnectionStringBuilder("Server = <host> Database = citus; Port = 5432; User Id = citus; Password = {your password}; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50 ");
-
- connStr.TrustServerCertificate = true;
-
- using (var conn = new NpgsqlConnection(connStr.ToString()))
- {
- conn.Open();
- var text = new dynamic[] { 0, "Target", "Sunnyvale", "California", 94001 };
- using (var writer = conn.BeginBinaryImport("COPY pharmacy FROM STDIN (FORMAT BINARY)"))
- {
- writer.StartRow();
- foreach (var item in text)
- {
- writer.Write(item);
- }
- writer.Complete();
- }
-                Console.WriteLine("in-memory data copied successfully");
- }
- }
- }
-}
-```
-## App retry during database request failures
-
-Use the following code to retry a database request when a transient connection failure occurs. The example waits between attempts and retries the query up to a fixed number of times.
-
-```csharp
-using System;
-using System.Data;
-using System.Runtime.InteropServices;
-using System.Text;
-using Npgsql;
-
-namespace Driver
-{
- public class Reconnect
- {
- static string connStr = new NpgsqlConnectionStringBuilder("Server = <host name>; Database = citus; Port = 5432; User Id = citus; Password = {Your Password}; Ssl Mode = Require; Pooling = true; Minimum Pool Size=0; Maximum Pool Size =50;TrustServerCertificate = true").ToString();
- static string executeRetry(string sql, int retryCount)
- {
- for (int i = 0; i < retryCount; i++)
- {
- try
- {
- using (var conn = new NpgsqlConnection(connStr))
- {
- conn.Open();
- DataTable dt = new DataTable();
- using (var _cmd = new NpgsqlCommand(sql, conn))
- {
- NpgsqlDataAdapter _dap = new NpgsqlDataAdapter(_cmd);
- _dap.Fill(dt);
- conn.Close();
- if (dt != null)
- {
- if (dt.Rows.Count > 0)
- {
- int J = dt.Rows.Count;
- StringBuilder sb = new StringBuilder();
-
- for (int k = 0; k < dt.Rows.Count; k++)
- {
- for (int j = 0; j < dt.Columns.Count; j++)
- {
- sb.Append(dt.Rows[k][j] + ",");
- }
- sb.Remove(sb.Length - 1, 1);
- sb.Append("\n");
- }
- return sb.ToString();
- }
- }
- }
- }
- return null;
- }
- catch (Exception e)
- {
- Thread.Sleep(60000);
- Console.WriteLine(e.Message);
- }
- }
- return null;
- }
- static void Main(string[] args)
- {
- string result = executeRetry("select 1",5);
- Console.WriteLine(result);
- }
- }
-}
-```
-
-## Next steps
-
postgresql Quickstart App Stacks Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-app-stacks-java.md
- Title: Java app to connect and query Hyperscale (Citus)
-description: Learn building a simple app on Hyperscale (Citus) using Java
-----
-recommendations: false
Previously updated : 08/24/2022--
-# Java app to connect and query Hyperscale (Citus)
--
-In this document, you'll learn how to connect to a Hyperscale (Citus) server group using a Java application. You'll see how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you're familiar with developing using Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity), and are new to working with Hyperscale (Citus).
-
-> [!TIP]
->
-> The process of creating a Java app with Hyperscale (Citus) is the same as working with ordinary PostgreSQL.
-
-## Prerequisites
-
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free)
-* Create a Hyperscale (Citus) database by following [Create Hyperscale (Citus) server group](quickstart-create-portal.md)
-* A supported [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 (included in Azure Cloud Shell).
-* The [Apache Maven](https://maven.apache.org/) build tool.
-
-## Setup
-
-### Get Database Connection Information
-
-To get the database credentials, you can use the **Connection strings** tab in the Azure portal. Replace the password placeholder with the actual password. See the following screenshot.
-
-![Diagram showing Java connection string.](../media/howto-app-stacks/02-java-connection-string.png)
-
-### Create a new Java project
-
-Using your favorite IDE, create a new Java project with groupId **test** and artifactId **crud**. Add a `pom.xml` file in its root directory:
-
-```XML
-<?xml version="1.0" encoding="UTF-8"?>
-
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
- <modelVersion>4.0.0</modelVersion>
-
- <groupId>test</groupId>
- <artifactId>crud</artifactId>
- <version>0.0.1-SNAPSHOT</version>
- <packaging>jar</packaging>
-
- <name>crud</name>
- <url>http://www.example.com</url>
-
- <properties>
- <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
- <maven.compiler.source>1.8</maven.compiler.source>
- <maven.compiler.target>1.8</maven.compiler.target>
- </properties>
-
- <dependencies>
- <dependency>
- <groupId>org.junit.jupiter</groupId>
- <artifactId>junit-jupiter-engine</artifactId>
- <version>5.7.1</version>
- <scope>test</scope>
- </dependency>
- <dependency>
- <groupId>org.postgresql</groupId>
- <artifactId>postgresql</artifactId>
- <version>42.2.12</version>
- </dependency>
- <!-- https://mvnrepository.com/artifact/com.zaxxer/HikariCP -->
- <dependency>
- <groupId>com.zaxxer</groupId>
- <artifactId>HikariCP</artifactId>
- <version>5.0.0</version>
- </dependency>
- <dependency>
- <groupId>org.junit.jupiter</groupId>
- <artifactId>junit-jupiter-params</artifactId>
- <version>5.7.1</version>
- <scope>test</scope>
- </dependency>
- </dependencies>
-
- <build>
- <plugins>
- <plugin>
- <groupId>org.apache.maven.plugins</groupId>
- <artifactId>maven-surefire-plugin</artifactId>
- <version>3.0.0-M5</version>
- </plugin>
- </plugins>
- </build>
-</project>
-```
-
-This file configures [Apache Maven](https://maven.apache.org/) to use:
-
-* Java 8
-* A recent PostgreSQL driver for Java
-
-### Prepare a configuration file to connect to Hyperscale (Citus)
-
-Create a `src/main/resources/application.properties` file, and add:
-
-``` properties
-driver.class.name=org.postgresql.Driver
-db.url=jdbc:postgresql://<host>:5432/citus?ssl=true&sslmode=require
-db.username=citus
-db.password=<password>
-```
-
-Replace \<host\> with the host name from the connection string that you gathered previously. Replace \<password\> with the password that you set for the database.
-
-> [!NOTE]
->
-> We append `?ssl=true&sslmode=require` to the `db.url` configuration property to tell the JDBC driver to use TLS (Transport Layer Security) when connecting to the database. Using TLS is mandatory with Hyperscale (Citus), and it's a good security practice.
-
-## Create tables in Hyperscale (Citus)
-
-### Create an SQL file to generate the database schema
-
-We'll use a `src/main/resources/schema.sql` file to create the database schema. Create that file with the following content:
-
-``` SQL
-DROP TABLE IF EXISTS public.pharmacy;
-CREATE TABLE public.pharmacy(pharmacy_id integer, pharmacy_name text, city text, state text, zip_code integer);
-CREATE INDEX idx_pharmacy_id ON public.pharmacy(pharmacy_id);
-```
-
-### Use the super power of distributed tables
-
-Hyperscale (Citus) gives you [the super power of distributing tables](overview.md#the-superpower-of-distributed-tables) across multiple nodes for scalability. The command below enables you to distribute a table. You can learn more about `create_distributed_table` and the distribution column [here](quickstart-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key).
-
-> [!TIP]
->
-> Distributing your tables is optional if you are using the Basic Tier of Hyperscale (Citus), which is a single-node server group.
-
-If you want to distribute your table, append the following command to the `schema.sql` file from the previous section.
-
-```SQL
-select create_distributed_table('public.pharmacy','pharmacy_id');
-```
-
-### Connect to the database and create the schema
-
-Next, add the Java code that will use JDBC to store and retrieve data from your Hyperscale (Citus) server group.
-
-#### Connection Pooling Setup
-
-Using the code below, create a `DButil.java` file, which contains the `DButil`
-class. The `DButil` class sets up a connection pool to PostgreSQL using
-[HikariCP](https://github.com/brettwooldridge/HikariCP). In the example
-application, we'll be using this class to connect to PostgreSQL and start
-querying.
--
-```java
-//DButil.java
-package test.crud;
-
-import java.io.FileInputStream;
-import java.io.IOException;
-import java.sql.SQLException;
-import java.util.Properties;
-
-import javax.sql.DataSource;
-
-import com.zaxxer.hikari.HikariDataSource;
-
-public class DButil {
- private static final String DB_USERNAME = "db.username";
- private static final String DB_PASSWORD = "db.password";
- private static final String DB_URL = "db.url";
- private static final String DB_DRIVER_CLASS = "driver.class.name";
- private static Properties properties = null;
- private static HikariDataSource datasource;
-
- static {
- try {
- properties = new Properties();
- properties.load(new FileInputStream("src/main/resources/application.properties"));
-
- datasource = new HikariDataSource();
- datasource.setDriverClassName(properties.getProperty(DB_DRIVER_CLASS));
- datasource.setJdbcUrl(properties.getProperty(DB_URL));
- datasource.setUsername(properties.getProperty(DB_USERNAME));
- datasource.setPassword(properties.getProperty(DB_PASSWORD));
- datasource.setMinimumIdle(10);
- datasource.setMaximumPoolSize(100);
- datasource.setAutoCommit(true);
- datasource.setLoginTimeout(3);
- } catch (IOException | SQLException e) {
- e.printStackTrace();
- }
- }
- public static DataSource getDataSource() {
- return datasource;
- }
-}
-```
-
-Create a `src/main/java/DemoApplication.java` file that contains:
-
-``` java
-package test.crud;
-import java.io.IOException;
-import java.sql.*;
-import java.util.*;
-import java.util.logging.Logger;
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
-import org.postgresql.copy.CopyManager;
-import org.postgresql.core.BaseConnection;
-import java.io.IOException;
-import java.io.Reader;
-import java.io.StringReader;
-
-public class DemoApplication {
-
- private static final Logger log;
-
- static {
- System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
- log = Logger.getLogger(DemoApplication.class.getName());
- }
- public static void main(String[] args) throws Exception
- {
- log.info("Connecting to the database");
- Connection connection = DButil.getDataSource().getConnection();
- System.out.println("The Connection Object is of Class: " + connection.getClass());
- log.info("Database connection test: " + connection.getCatalog());
- log.info("Creating table");
- log.info("Creating index");
- log.info("distributing table");
- Scanner scanner = new Scanner(DemoApplication.class.getClassLoader().getResourceAsStream("schema.sql"));
- Statement statement = connection.createStatement();
- while (scanner.hasNextLine()) {
- statement.execute(scanner.nextLine());
- }
- log.info("Closing database connection");
- connection.close();
- }
-
-}
-```
-
-The above code will use the **application.properties** and **schema.sql** files to connect to Hyperscale (Citus) and create the schema.
-
-> [!NOTE]
->
-> The database credentials are stored in the `db.username` and `db.password` properties of the *application.properties* file. The `DButil` class reads these properties to configure the HikariCP connection pool, and `DButil.getDataSource().getConnection()` uses that pool to open connections.
-
-You can now execute this main class with your favorite tool:
-
-* Using your IDE, you should be able to right-click on the `DemoApplication` class and execute it.
-* Using Maven, you can run the application by executing: `mvn exec:java -Dexec.mainClass="test.crud.DemoApplication"`.
-
-The application should connect to your Hyperscale (Citus) server group, create the database schema, and then close the connection, as you can see in the console logs:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: citus
-[INFO ] Creating table
-[INFO ] Creating index
-[INFO ] distributing table
-[INFO ] Closing database connection
-```
-
-## Create a domain class
-
-Create a new `Pharmacy` Java class, next to the `DemoApplication` class, and add the following code:
-
-``` Java
-public class Pharmacy {
- private Integer pharmacy_id;
- private String pharmacy_name;
- private String city;
- private String state;
- private Integer zip_code;
- public Pharmacy() { }
- public Pharmacy(Integer pharmacy_id, String pharmacy_name, String city,String state,Integer zip_code)
- {
- this.pharmacy_id = pharmacy_id;
- this.pharmacy_name = pharmacy_name;
- this.city = city;
- this.state = state;
- this.zip_code = zip_code;
- }
-
- public Integer getpharmacy_id() {
- return pharmacy_id;
- }
-
- public void setpharmacy_id(Integer pharmacy_id) {
- this.pharmacy_id = pharmacy_id;
- }
-
- public String getpharmacy_name() {
- return pharmacy_name;
- }
-
- public void setpharmacy_name(String pharmacy_name) {
- this.pharmacy_name = pharmacy_name;
- }
-
- public String getcity() {
- return city;
- }
-
- public void setcity(String city) {
- this.city = city;
- }
-
- public String getstate() {
- return state;
- }
-
- public void setstate(String state) {
- this.state = state;
- }
-
- public Integer getzip_code() {
- return zip_code;
- }
-
- public void setzip_code(Integer zip_code) {
- this.zip_code = zip_code;
- }
- @Override
- public String toString() {
- return "TPharmacy{" +
- "pharmacy_id=" + pharmacy_id +
- ", pharmacy_name='" + pharmacy_name + '\'' +
- ", city='" + city + '\'' +
- ", state='" + state + '\'' +
- ", zip_code='" + zip_code + '\'' +
- '}';
- }
-}
-```
-
-This class is a domain model mapped on the `Pharmacy` table that you created when executing the `schema.sql` script.
-
-## Insert data into Hyperscale (Citus)
-
-In the `src/main/java/DemoApplication.java` file, after the `main` method, add the following method to insert data into the database:
-
-``` Java
-private static void insertData(Pharmacy todo, Connection connection) throws SQLException {
- log.info("Insert data");
- PreparedStatement insertStatement = connection
- .prepareStatement("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (?, ?, ?, ?, ?);");
-
- insertStatement.setInt(1, todo.getpharmacy_id());
- insertStatement.setString(2, todo.getpharmacy_name());
- insertStatement.setString(3, todo.getcity());
- insertStatement.setString(4, todo.getstate());
- insertStatement.setInt(5, todo.getzip_code());
-
- insertStatement.executeUpdate();
-}
-```
-
-You can now add the two following lines in the main method:
-
-```java
-Pharmacy todo = new Pharmacy(0,"Target","Sunnyvale","California",94001);
-insertData(todo, connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: citus
-[INFO ] Creating table
-[INFO ] Creating index
-[INFO ] distributing table
-[INFO ] Insert data
-[INFO ] Closing database connection
-```
-
-## Reading data from Hyperscale (Citus)
-
-Let's read the data previously inserted, to validate that our code works correctly.
-
-In the `src/main/java/DemoApplication.java` file, after the `insertData` method, add the following method to read data from the database:
-
-``` java
-private static Pharmacy readData(Connection connection) throws SQLException {
- log.info("Read data");
- PreparedStatement readStatement = connection.prepareStatement("SELECT * FROM Pharmacy;");
- ResultSet resultSet = readStatement.executeQuery();
- if (!resultSet.next()) {
- log.info("There is no data in the database!");
- return null;
- }
- Pharmacy todo = new Pharmacy();
- todo.setpharmacy_id(resultSet.getInt("pharmacy_id"));
- todo.setpharmacy_name(resultSet.getString("pharmacy_name"));
- todo.setcity(resultSet.getString("city"));
- todo.setstate(resultSet.getString("state"));
- todo.setzip_code(resultSet.getInt("zip_code"));
- log.info("Data read from the database: " + todo.toString());
- return todo;
-}
-```
-
-You can now add the following line in the main method:
-
-``` java
-todo = readData(connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: citus
-[INFO ] Creating table
-[INFO ] Creating index
-[INFO ] distributing table
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Sunnyvale', state='California', zip_code='94001'}
-[INFO ] Closing database connection
-```
-
-## Updating data in Hyperscale (Citus)
-
-Let's update the data we previously inserted.
-
-Still in the `src/main/java/DemoApplication.java` file, after the `readData` method, add the following method to update data inside the database:
-
-``` java
-private static void updateData(Pharmacy todo, Connection connection) throws SQLException {
- log.info("Update data");
- PreparedStatement updateStatement = connection
- .prepareStatement("UPDATE pharmacy SET city = ? WHERE pharmacy_id = ?;");
-
- updateStatement.setString(1, todo.getcity());
-
- updateStatement.setInt(2, todo.getpharmacy_id());
- updateStatement.executeUpdate();
- readData(connection);
-}
-
-```
-
-You can now add the two following lines in the main method:
-
-``` java
-todo.setcity("Guntur");
-updateData(todo, connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: citus
-[INFO ] Creating table
-[INFO ] Creating index
-[INFO ] distributing table
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Sunnyvale', state='California', zip_code='94001'}
-[INFO ] Update data
-[INFO ] Read data
-[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Guntur', state='California', zip_code='94001'}
-[INFO ] Closing database connection
-```
-
-## Deleting data in Hyperscale (Citus)
-
-Finally, let's delete the data we previously inserted.
-
-Still in the `src/main/java/DemoApplication.java` file, after the `updateData` method, add the following method to delete data inside the database:
-
-``` java
-private static void deleteData(Pharmacy todo, Connection connection) throws SQLException {
- log.info("Delete data");
- PreparedStatement deleteStatement = connection.prepareStatement("DELETE FROM pharmacy WHERE pharmacy_id = ?;");
- deleteStatement.setLong(1, todo.getpharmacy_id());
- deleteStatement.executeUpdate();
- readData(connection);
-}
-```
-
-You can now add the following line in the main method:
-
-``` java
-deleteData(todo, connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: citus
-[INFO ] Creating table
-[INFO ] Creating index
-[INFO ] distributing table
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Sunnyvale', state='California', zip_code='94001'}
-[INFO ] Update data
-[INFO ] Read data
-[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Guntur', state='California', zip_code='94001'}
-[INFO ] Delete data
-[INFO ] Read data
-[INFO ] There is no data in the database!
-[INFO ] Closing database connection
-```
-
-## COPY command for super fast ingestion
-
-The COPY command can yield [tremendous throughput](https://www.citusdata.com/blog/2016/06/15/copy-postgresql-distributed-tables) while ingesting data into Hyperscale (Citus). The COPY command can ingest data in files, or from micro-batches of data in memory for real-time ingestion.
-
-### COPY command to load data from a file
-
-The following code is an example for copying data from a CSV file to a database table.
-
-It requires the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv).
-
-```java
-public static long copyFromFile(Connection connection, String filePath, String tableName)
-        throws SQLException, IOException {
- long count = 0;
- FileInputStream fileInputStream = null;
-
- try {
- Connection unwrap = connection.unwrap(Connection.class);
- BaseConnection connSec = (BaseConnection) unwrap;
-
- CopyManager copyManager = new CopyManager((BaseConnection) connSec);
- fileInputStream = new FileInputStream(filePath);
- count = copyManager.copyIn("COPY " + tableName + " FROM STDIN delimiter ',' csv", fileInputStream);
- } finally {
- if (fileInputStream != null) {
- try {
- fileInputStream.close();
- } catch (IOException e) {
- e.printStackTrace();
- }
- }
- }
- return count;
-}
-```
-
-You can now add the following line in the main method:
-
-``` java
-int c = (int) copyFromFile(connection,"C:\\Users\\pharmacies.csv", "pharmacy");
-log.info("Copied "+ c +" rows using COPY command");
-```
-
-Executing the `main` class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: citus
-[INFO ] Creating table
-[INFO ] Creating index
-[INFO ] distributing table
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Sunnyvale', state='California', zip_code='94001'}
-[INFO ] Update data
-[INFO ] Read data
-[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Guntur', state='California', zip_code='94001'}
-[INFO ] Delete data
-[INFO ] Read data
-[INFO ] There is no data in the database!
-[INFO ] Copied 5000 rows using COPY command
-[INFO ] Closing database connection
-```
-
-### COPY command to load data in-memory
-
-The following code is an example of copying in-memory data to a table.
-
-```java
-private static void inMemory(Connection connection) throws SQLException,IOException
- {
- log.info("Copying inmemory data into table");
-
- final List<String> rows = new ArrayList<>();
- rows.add("0,Target,Sunnyvale,California,94001");
- rows.add("1,Apollo,Guntur,Andhra,94003");
-
- final BaseConnection baseConnection = (BaseConnection) connection.unwrap(Connection.class);
- final CopyManager copyManager = new CopyManager(baseConnection);
-
- // COPY command can change based on the format of rows. This COPY command is for above rows.
- final String copyCommand = "COPY pharmacy FROM STDIN with csv";
-
- try (final Reader reader = new StringReader(String.join("\n", rows))) {
- copyManager.copyIn(copyCommand, reader);
- }
-}
-```
-
-You can now add the following line in the main method:
-
-``` java
-inMemory(connection);
-```
-
-Executing the main class should now produce the following output:
-
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: citus
-[INFO ] Creating table
-[INFO ] Creating index
-[INFO ] distributing table
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Sunnyvale', state='California', zip_code='94001'}
-[INFO ] Update data
-[INFO ] Read data
-[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Guntur', state='California', zip_code='94001'}
-[INFO ] Delete data
-[INFO ] Read data
-[INFO ] There is no data in the database!
-[INFO ] Copied 5000 rows using COPY command
-[INFO ] Copying in-memory data into table
-[INFO ] Closing database connection
-```
-
-## App retry during database request failures
-
-The following example shows one way to handle transient database failures: it retries the request up to `retryCount` times, waiting 60 seconds between attempts. Replace `<Server Name>` and `<Your Password>` with your own values.
-
-```java
-package test.crud;
-
-import java.sql.Connection;
-import java.sql.PreparedStatement;
-import java.sql.ResultSet;
-import java.sql.ResultSetMetaData;
-import java.util.logging.Logger;
-import com.zaxxer.hikari.HikariDataSource;
-
-public class DemoApplication
-{
- private static final Logger log;
-
- static
- {
- System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
- log = Logger.getLogger(DemoApplication.class.getName());
- }
- private static final String DB_USERNAME = "citus";
- private static final String DB_PASSWORD = "<Your Password>";
- private static final String DB_URL = "jdbc:postgresql://<Server Name>:5432/citus?sslmode=require";
- private static final String DB_DRIVER_CLASS = "org.postgresql.Driver";
- private static HikariDataSource datasource;
-
- private static String executeRetry(String sql, int retryCount) throws InterruptedException
- {
- Connection con = null;
- PreparedStatement pst = null;
- ResultSet rs = null;
- for (int i = 1; i <= retryCount; i++)
- {
- try
- {
- datasource = new HikariDataSource();
- datasource.setDriverClassName(DB_DRIVER_CLASS);
- datasource.setJdbcUrl(DB_URL);
- datasource.setUsername(DB_USERNAME);
- datasource.setPassword(DB_PASSWORD);
- datasource.setMinimumIdle(10);
- datasource.setMaximumPoolSize(1000);
- datasource.setAutoCommit(true);
- datasource.setLoginTimeout(3);
- log.info("Connecting to the database");
- con = datasource.getConnection();
- log.info("Connection established");
- log.info("Read data");
- pst = con.prepareStatement(sql);
- rs = pst.executeQuery();
- StringBuilder builder = new StringBuilder();
- int columnCount = rs.getMetaData().getColumnCount();
- while (rs.next())
- {
- for (int j = 0; j < columnCount;)
- {
- builder.append(rs.getString(j + 1));
- if (++j < columnCount)
- builder.append(",");
- }
- builder.append("\r\n");
- }
- return builder.toString();
- }
- catch (Exception e)
- {
- Thread.sleep(60000);
- System.out.println(e.getMessage());
- }
- }
- return null;
- }
-
- public static void main(String[] args) throws Exception
- {
- String result = executeRetry("select 1", 5);
- System.out.print(result);
- }
-}
-```
-
-## Next steps
-
postgresql Quickstart App Stacks Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-app-stacks-nodejs.md
- Title: Node.js app to connect and query Hyperscale (Citus)
-description: Learn to query Hyperscale (Citus) using Node.js
-----
-recommendations: false
Previously updated : 08/24/2022--
-# Node.js app to connect and query Hyperscale (Citus)
--
-In this article, you'll connect to a Hyperscale (Citus) server group using a Node.js application. We'll see how to use SQL statements to query, insert, update and delete data in the database. The steps in this article assume that you're familiar with developing using Node.js and are new to working with Hyperscale (Citus).
-
-> [!TIP]
->
-> The process of creating a Node.js application with Hyperscale (Citus) is the same as working with ordinary PostgreSQL.
-
-## Setup
-
-### Prerequisites
-
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free)
-* A Hyperscale (Citus) server group. To create one, see [Create a Hyperscale (Citus) server group](quickstart-create-portal.md)
-* [Node.js](https://nodejs.org/)
-
-Install [pg](https://www.npmjs.com/package/pg), a PostgreSQL client for Node.js. To do so, run the node package manager (npm) from your command line:
-
-```bash
-npm install pg
-```
-
-Verify the installation by listing the packages installed.
-
-```bash
-npm list
-```
-
-### Get database connection information
-
-To get the database credentials, you can use the **Connection strings** tab in the Azure portal. See the screenshot below.
-
-![Diagram showing NodeJS connection string.](../media/howto-app-stacks/01-python-connection-string.png)
-
-### Running JavaScript code in Node.js
-
-You can launch Node.js from the Bash shell, Terminal, or Windows Command Prompt by typing `node`, and then run the example JavaScript code interactively by copying and pasting it at the prompt. Alternatively, you can save the JavaScript code into a text file and run `node filename.js` with the file name as a parameter.
-
-## Connect, create table and insert data
-
-All examples in this article need to connect to the database. Let's put the
-connection logic into its own module for reuse. We'll use the
-[pg.Pool](https://node-postgres.com/) object, which manages a pool of
-`pg.Client` connections, to interface with the PostgreSQL server.
--
-Create a folder called `db`, and inside this folder create a `citus.js` file with the common connection code:
-
-```javascript
-/**
-* file: db/citus.js
-*/
-
-const { Pool } = require('pg');
-
-const pool = new Pool({
- max: 300,
- connectionTimeoutMillis: 5000,
-
- host: '<host>',
- port: 5432,
- user: 'citus',
- password: '<your password>',
- database: 'citus',
- ssl: true,
-});
-
-module.exports = {
- pool,
-};
-```
-
-Next, use the following code to connect and load the data using CREATE TABLE
-and INSERT INTO SQL statements.
-
-```javascript
-/**
-* file: create.js
-*/
-
-const { pool } = require('./db/citus');
-
-async function queryDatabase() {
- const queryString = `
- DROP TABLE IF EXISTS pharmacy;
- CREATE TABLE pharmacy (pharmacy_id integer,pharmacy_name text,city text,state text,zip_code integer);
- INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (0,'Target','Sunnyvale','California',94001);
- INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (1,'CVS','San Francisco','California',94002);
- INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (2,'Walgreens','San Diego','California',94003);
- CREATE INDEX idx_pharmacy_id ON pharmacy(pharmacy_id);
- `;
-
- try {
- /* Real application code would probably request a dedicated client with
- pool.connect() and run multiple queries with the client. In this
- example, we're running only one query, so we use the pool.query()
- helper method to run it on the first available idle client.
- */
-
- await pool.query(queryString);
- console.log('Created the Pharmacy table and inserted rows.');
- } catch (err) {
- console.log(err.stack);
- } finally {
- pool.end();
- }
-}
-
-queryDatabase();
-```
-
-To execute the code above, run `node create.js`. This command will create a new "pharmacy" table and insert some sample data.
-
-## Super power of Distributed Tables
-
-Hyperscale (Citus) gives you [the super power of distributing tables](overview.md#the-superpower-of-distributed-tables) across multiple nodes for scalability. The command below enables you to distribute a table. You can learn more about `create_distributed_table` and the distribution column [here](howto-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key).
-
-> [!TIP]
->
-> Distributing your tables is optional if you are using the Basic Tier of Hyperscale (Citus), which is a single-node server group.
-
-Use the following code to connect to the database and distribute the table.
-
-```javascript
-/**
-* file: distribute-table.js
-*/
-
-const { pool } = require('./db/citus');
-
-async function queryDatabase() {
- const queryString = `
- SELECT create_distributed_table('pharmacy', 'pharmacy_id');
- `;
-
- try {
- await pool.query(queryString);
- console.log('Distributed pharmacy table.');
- } catch (err) {
- console.log(err.stack);
- } finally {
- pool.end();
- }
-}
-
-queryDatabase();
-```
-
-## Read data
-
-Use the following code to connect and read the data using a SELECT SQL statement.
-
-```javascript
-/**
-* file: read.js
-*/
-
-const { pool } = require('./db/citus');
-
-async function queryDatabase() {
- const queryString = `
- SELECT * FROM pharmacy;
- `;
-
- try {
- const res = await pool.query(queryString);
- console.log(res.rows);
- } catch (err) {
- console.log(err.stack);
- } finally {
- pool.end();
- }
-}
-
-queryDatabase();
-```
-
-## Update data
-
-Use the following code to connect and update the data using an UPDATE SQL statement.
-
-```javascript
-/**
-* file: update.js
-*/
-
-const { pool } = require('./db/citus');
-
-async function queryDatabase() {
- const queryString = `
- UPDATE pharmacy SET city = 'Long Beach'
- WHERE pharmacy_id = 1;
- `;
-
- try {
- const result = await pool.query(queryString);
- console.log('Update completed.');
- console.log(`Rows affected: ${result.rowCount}`);
- } catch (err) {
- console.log(err.stack);
- } finally {
- pool.end();
- }
-}
-
-queryDatabase();
-```
-
-## Delete data
-
-Use the following code to connect and delete the data using a DELETE SQL statement.
-
-```javascript
-/**
-* file: delete.js
-*/
-
-const { pool } = require('./db/citus');
-
-async function queryDatabase() {
- const queryString = `
- DELETE FROM pharmacy
- WHERE pharmacy_name = 'Target';
- `;
-
- try {
- const result = await pool.query(queryString);
- console.log('Delete completed.');
- console.log(`Rows affected: ${result.rowCount}`);
- } catch (err) {
- console.log(err.stack);
- } finally {
- pool.end();
- }
-}
-
-queryDatabase();
-```
-
-## COPY command for super fast ingestion
-
-The COPY command can yield [tremendous throughput](https://www.citusdata.com/blog/2016/06/15/copy-postgresql-distributed-tables) while ingesting data into Hyperscale (Citus). The COPY command can ingest data in files, or from micro-batches of data in memory for real-time ingestion.
-
-### COPY command to load data from a file
-
-Before running the code below, install
-[pg-copy-streams](https://www.npmjs.com/package/pg-copy-streams). To do so,
-run the node package manager (npm) from your command line.
-
-```bash
-npm install pg-copy-streams
-```
-
-The following code is an example for copying data from a CSV file to a database table.
-It requires the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv).
-
-```javascript
-/**
-* file: copycsv.js
-*/
-
-const inputFile = require('path').join(__dirname, '/pharmacies.csv');
-const fileStream = require('fs').createReadStream(inputFile);
-const copyFrom = require('pg-copy-streams').from;
-const { pool } = require('./db/citus');
-
-async function importCsvDatabase() {
- return new Promise((resolve, reject) => {
- const queryString = `
- COPY pharmacy FROM STDIN WITH (FORMAT CSV, HEADER true, NULL '');
- `;
-
- fileStream.on('error', reject);
-
- pool
- .connect()
- .then(client => {
- const stream = client
- .query(copyFrom(queryString))
- .on('error', reject)
- .on('end', () => {
- reject(new Error('Connection closed!'));
- })
- .on('finish', () => {
- client.release();
- resolve();
- });
-
- fileStream.pipe(stream);
- })
- .catch(err => {
- reject(new Error(err));
- });
- });
-}
-
-(async () => {
- console.log('Copying from CSV...');
- await importCsvDatabase();
- await pool.end();
- console.log('Inserted csv successfully');
-})();
-```
-
-### COPY command to load data in-memory
-
-Before running the code below, install the
-[through2](https://www.npmjs.com/package/through2) package, which allows pipe
-chaining. Install it with the node package manager (npm):
-
-```bash
-npm install through2
-```
-
-The following code is an example for copying in-memory data to a table.
-
-```javascript
-/**
- * file: copyinmemory.js
- */
-
-const through2 = require('through2');
-const copyFrom = require('pg-copy-streams').from;
-const { pool } = require('./db/citus');
-
-async function importInMemoryDatabase() {
- return new Promise((resolve, reject) => {
- pool
- .connect()
- .then(client => {
- const stream = client
- .query(copyFrom('COPY pharmacy FROM STDIN'))
- .on('error', reject)
- .on('end', () => {
- reject(new Error('Connection closed!'));
- })
- .on('finish', () => {
- client.release();
- resolve();
- });
-
- const internDataset = [
- ['100', 'Target', 'Sunnyvale', 'California', '94001'],
- ['101', 'CVS', 'San Francisco', 'California', '94002'],
- ];
-
- let started = false;
- const internStream = through2.obj((arr, _enc, cb) => {
- const rowText = (started ? '\n' : '') + arr.join('\t');
- started = true;
- cb(null, rowText);
- });
-
- internStream.on('error', reject).pipe(stream);
-
- internDataset.forEach((record) => {
- internStream.write(record);
- });
-
- internStream.end();
- })
- .catch(err => {
- reject(new Error(err));
- });
- });
-}
-(async () => {
- await importInMemoryDatabase();
- await pool.end();
- console.log('Inserted inmemory data successfully.');
-})();
-```
-
-## App retry during database request failures
-
-The following example retries a query up to `retryCount` times, waiting 60 seconds between attempts, to handle transient database failures. It uses the [sleep](https://www.npmjs.com/package/sleep) package, which you can install with `npm install sleep`. Replace `<host>` and `<your password>` with your own values.
-
-```javascript
-const { Pool } = require('pg');
-const { sleep } = require('sleep');
-
-const pool = new Pool({
- host: '<host>',
- port: 5432,
- user: 'citus',
- password: '<your password>',
- database: 'citus',
- ssl: true,
- connectionTimeoutMillis: 0,
- idleTimeoutMillis: 0,
- min: 10,
- max: 20,
-});
-
-(async function() {
- res = await executeRetry('select nonexistent_thing;',5);
- console.log(res);
- process.exit(res ? 0 : 1);
-})();
-
-async function executeRetry(sql,retryCount)
-{
- for (let i = 0; i < retryCount; i++) {
- try {
- result = await pool.query(sql)
- return result;
- } catch (err) {
- console.log(err.message);
- sleep(60);
- }
- }
-
- // didn't succeed after all the tries
- return null;
-}
-```
-
-## Next steps
-
postgresql Quickstart App Stacks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-app-stacks-overview.md
- Title: Writing apps to connect and query Hyperscale (Citus)
-description: Learn to query Hyperscale (Citus) in multiple languages
-----
-recommendations: false
Previously updated : 08/11/2022--
-# Query Hyperscale (Citus) from your app stack
--
-Select your development language to learn how to connect to Hyperscale (Citus)
-to create, read, update, and delete data.
-
-**Next steps**
-
-> [!div class="nextstepaction"]
-> [Python >](quickstart-app-stacks-python.md)
-
-> [!div class="nextstepaction"]
-> [Node JS >](quickstart-app-stacks-nodejs.md)
-
-> [!div class="nextstepaction"]
-> [C# >](quickstart-app-stacks-csharp.md)
-
-> [!div class="nextstepaction"]
-> [Java >](quickstart-app-stacks-java.md)
-
-> [!div class="nextstepaction"]
-> [Ruby >](quickstart-app-stacks-ruby.md)
postgresql Quickstart App Stacks Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-app-stacks-python.md
- Title: Python app to connect and query Hyperscale (Citus)
-description: Learn to query Hyperscale (Citus) using Python
-----
-recommendations: false
Previously updated : 08/24/2022--
-# Python app to connect and query Hyperscale (Citus)
--
-In this article, you'll learn how to connect to the database on Hyperscale (Citus) and run SQL statements to query using Python on macOS, Ubuntu Linux, or Windows.
-
-> [!TIP]
->
-> The process of creating a Python app with Hyperscale (Citus) is the same as working with ordinary PostgreSQL.
-
-## Setup
-
-### Prerequisites
-
-For this article you need:
-
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* A Hyperscale (Citus) server group. To create one, see [Create a Hyperscale (Citus) server group](quickstart-create-portal.md).
-* [Python](https://www.python.org/downloads/) 2.7 or 3.6+.
-* The latest [pip](https://pip.pypa.io/en/stable/installing/) package installer.
-* Install [psycopg2](https://pypi.python.org/pypi/psycopg2-binary/) using pip in a terminal or command prompt window. For more information, see [how to install psycopg2](https://www.psycopg.org/docs/install.html).
-
-### Get database connection information
-
-To get the database credentials, you can use the **Connection strings** tab in the Azure portal:
-
-![Diagram showing python connection string.](../media/howto-app-stacks/01-python-connection-string.png)
-
-Replace the following values:
-
-* \<host\> with the value you copied from the Azure portal.
-* \<password\> with the server password you created.
-* Use the default admin user, which is `citus`.
-* Use the default database, which is `citus`.
-
-## Step 1: Connect, create table, and insert data
-
-The following code example creates a connection pool to your Postgres database using
-the [psycopg2.pool](https://www.psycopg.org/docs/pool.html) library. **pool.getconn()** is used to get a connection from the pool.
-The [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) function executes the SQL query against the database.
--
-```python
-import psycopg2
-from psycopg2 import pool
-
-# NOTE: fill in these variables for your own server group
-host = "<host>"
-dbname = "citus"
-user = "citus"
-password = "<password>"
-sslmode = "require"
-
-# now we'll build a connection string from the variables
-conn_string = "host={0} user={1} dbname={2} password={3} sslmode={4}".format(host, user, dbname, password, sslmode)
-
-postgreSQL_pool = psycopg2.pool.SimpleConnectionPool(1, 20,conn_string)
-if (postgreSQL_pool):
- print("Connection pool created successfully")
-
-# Use getconn() to Get Connection from connection pool
-conn = postgreSQL_pool.getconn()
-
-cursor = conn.cursor()
-
-# Drop previous table of same name if one exists
-cursor.execute("DROP TABLE IF EXISTS pharmacy;")
-print("Finished dropping table (if existed)")
-
-# Create a table
-cursor.execute("CREATE TABLE pharmacy (pharmacy_id integer, pharmacy_name text, city text, state text, zip_code integer);")
-print("Finished creating table")
-
-# Create an index
-cursor.execute("CREATE INDEX idx_pharmacy_id ON pharmacy(pharmacy_id);")
-print("Finished creating index")
-
-# Insert some data into the table
-cursor.execute("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (%s, %s, %s, %s,%s);", (1,"Target","Sunnyvale","California",94001))
-cursor.execute("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (%s, %s, %s, %s,%s);", (2,"CVS","San Francisco","California",94002))
-print("Inserted 2 rows of data")
-
-# Clean up
-conn.commit()
-cursor.close()
-conn.close()
-```
-
-When the code runs successfully, it produces the following output:
-
-```
-Connection pool created successfully
-Finished dropping table (if existed)
-Finished creating table
-Finished creating index
-Inserted 2 rows of data
-```
-
-## Step 2: Use the super power of distributed tables
-
-Hyperscale (Citus) gives you [the super power of distributing tables](overview.md#the-superpower-of-distributed-tables) across multiple nodes for scalability. The command below enables you to distribute a table. You can learn more about `create_distributed_table` and the distribution column [here](quickstart-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key).
-
-> [!TIP]
->
-> Distributing your tables is optional if you are using the Basic Tier of Hyperscale (Citus), which is a single-node server group.
-
-```python
-# Create distribute table
-cursor.execute("select create_distributed_table('pharmacy','pharmacy_id');")
-print("Finished distributing the table")
-```
-
-## Step 3: Read data
-
-The following code example uses these APIs:
-
-* [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL SELECT statement to read data.
-* [cursor.fetchall()](https://www.psycopg.org/docs/cursor.html#cursor.fetchall) retrieves all the rows of the query result so that you can iterate over them.
-
-```python
-# Fetch all rows from table
-cursor.execute("SELECT * FROM pharmacy;")
-rows = cursor.fetchall()
-
-# Print all rows
-for row in rows:
- print("Data row = (%s, %s)" %(str(row[0]), str(row[1])))
-```
-
-## Step 4: Update data
-
-The following code example uses [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL UPDATE statement to update data.
-
-```python
-# Update a data row in the table
-cursor.execute("UPDATE pharmacy SET city = %s WHERE pharmacy_id = %s;", ("guntur",1))
-print("Updated 1 row of data")
-```
-
-## Step 5: Delete data
-
-The following code example runs [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL DELETE statement to delete the data.
-
-```python
-# Delete data row from table
-cursor.execute("DELETE FROM pharmacy WHERE pharmacy_name = %s;", ("Target",))
-print("Deleted 1 row of data")
-```
-
-## COPY command for super fast ingestion
-
-The COPY command can yield [tremendous throughput](https://www.citusdata.com/blog/2016/06/15/copy-postgresql-distributed-tables) while ingesting data into Hyperscale (Citus). The COPY command can ingest data in files, or from micro-batches of data in memory for real-time ingestion.
-
-### COPY command to load data from a file
-
-The following code is an example for copying data from a CSV file to a database table.
-
-It requires the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv).
-
-```python
-with open('pharmacies.csv', 'r') as f:
- # Notice that we don't need the `csv` module.
- next(f) # Skip the header row.
- cursor.copy_from(f, 'pharmacy', sep=',')
-print("copying data completed")
-```
-
-### COPY command to load data in-memory
-
-The following code is an example of copying in-memory data to a table.
-
-```python
-data = [[3,"Walgreens","Sunnyvale","California",94006], [4,"Target","Sunnyvale","California",94016]]
-buf = io.StringIO()
-writer = csv.writer(buf)
-writer.writerows(data)
-
-buf.seek(0)
-with conn.cursor() as cur:
- cur.copy_from(buf, "pharmacy", sep=",")
-
-conn.commit()
-conn.close()
-```
-## App retry during database request failures
-
-The following example retries a query up to `retryCount` times, waiting 60 seconds between attempts, to handle transient database failures. Replace `<host>` and the password placeholder with your own values.
-
-```python
-import psycopg2
-import time
-from psycopg2 import pool
-
-host = "<host>"
-dbname = "citus"
-user = "citus"
-password = "{your password}"
-sslmode = "require"
-
-conn_string = "host={0} user={1} dbname={2} password={3} sslmode={4}".format(
- host, user, dbname, password, sslmode)
-postgreSQL_pool = psycopg2.pool.SimpleConnectionPool(1, 20, conn_string)
-
-def executeRetry(query, retryCount):
- for x in range(retryCount):
- try:
- if (postgreSQL_pool):
- # Use getconn() to Get Connection from connection pool
- conn = postgreSQL_pool.getconn()
- cursor = conn.cursor()
- cursor.execute(query)
- return cursor.fetchall()
- break
- except Exception as err:
- print(err)
- postgreSQL_pool.putconn(conn)
- time.sleep(60)
- return None
-
-print(executeRetry("select 1", 5))
-```
-
-## Next steps
-
postgresql Quickstart App Stacks Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-app-stacks-ruby.md
- Title: Ruby app to connect and query Hyperscale (Citus)
-description: Learn to query Hyperscale (Citus) using Ruby
-----
-recommendations: false
Previously updated : 08/24/2022--
-# Ruby app to connect and query Hyperscale (Citus)
--
-In this how-to article, you'll connect to a Hyperscale (Citus) server group using a Ruby application. We'll see how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you're familiar with developing using Ruby, and are new to working with Hyperscale (Citus).
-
-> [!TIP]
->
-> The process of creating a Ruby app with Hyperscale (Citus) is the same as working with ordinary PostgreSQL.
-
-## Setup
-
-### Prerequisites
-
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free)
-* A Hyperscale (Citus) server group. To create one, see [Create a Hyperscale (Citus) server group](quickstart-create-portal.md)
-* [Ruby](https://www.ruby-lang.org/en/downloads/)
-* [Ruby pg](https://rubygems.org/gems/pg/), the PostgreSQL module for Ruby
-
-### Get database connection information
-
-To get the database credentials, you can use the **Connection strings** tab in the Azure portal. See the following screenshot.
-
-![Diagram showing ruby connection string.](../media/howto-app-stacks/01-python-connection-string.png)
-
-## Connect, create table, insert data
-
-Use the following code to connect and create a table using a CREATE TABLE SQL statement, followed by INSERT INTO SQL statements to add rows to the table.
-
-The code uses a `PG::Connection` object with constructor `new` to connect to Hyperscale (Citus). Then it calls method `exec()` to run the DROP, CREATE TABLE, and INSERT INTO commands. The code checks for errors using the `PG::Error` class. Then it calls method `close()` to close the connection before terminating. For more information about these classes and methods, see the [Ruby pg reference documentation](https://rubygems.org/gems/pg).
--
-```ruby
-require 'pg'
-begin
- # NOTE: Replace the host and password arguments in the connection string.
- # (The connection string can be obtained from the Azure portal)
- connection = PG::Connection.new("host=<server name> port=5432 dbname=citus user=citus password={your password} sslmode=require")
- puts 'Successfully created connection to database'
-
- # Drop previous table of same name if one exists
- connection.exec('DROP TABLE IF EXISTS pharmacy;')
- puts 'Finished dropping table (if existed).'
-
- # Create a table.
- connection.exec('CREATE TABLE pharmacy (pharmacy_id integer ,pharmacy_name text,city text,state text,zip_code integer);')
- puts 'Finished creating table.'
-
- # Insert some data into table.
- connection.exec("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (0,'Target','Sunnyvale','California',94001);")
- connection.exec("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (1,'CVS','San Francisco','California',94002);")
- puts 'Inserted 2 rows of data.'
-
- # Create index
- connection.exec("CREATE INDEX idx_pharmacy_id ON pharmacy(pharmacy_id);")
-rescue PG::Error => e
- puts e.message
-ensure
- connection.close if connection
-end
-```
-
-## Use the super power of distributed tables
-
-Hyperscale (Citus) gives you [the super power of distributing tables](overview.md#the-superpower-of-distributed-tables) across multiple nodes for scalability. The command below enables you to distribute a table. You can learn more about `create_distributed_table` and the distribution column [here](quickstart-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key).
-
-> [!TIP]
->
-> Distributing your tables is optional if you are using the Basic Tier of Hyperscale (Citus), which is a single-node server group.
-
-Use the following code to connect to the database and distribute the table:
-
-```ruby
-require 'pg'
-begin
- # NOTE: Replace the host and password arguments in the connection string.
- # (The connection string can be obtained from the Azure portal)
- connection = PG::Connection.new("host=<server name> port=5432 dbname=citus user=citus password={your password} sslmode=require")
- puts 'Successfully created connection to database'
-
- # Super power of Distributed Tables.
- connection.exec("select create_distributed_table('pharmacy','pharmacy_id');")
-rescue PG::Error => e
- puts e.message
-ensure
- connection.close if connection
-end
-```
-
-## Read data
-
-Use the following code to connect and read the data using a SELECT SQL statement.
-
-The code uses a `PG::Connection` object with constructor new to connect to Hyperscale (Citus). Then it calls method `exec()` to run the SELECT command, keeping the results in a result set. The result set collection is iterated using the `resultSet.each` do loop, keeping the current row values in the row variable. The code checks for errors using the `PG::Error` class. Then it calls method `close()` to close the connection before terminating. For more information about these classes and methods, see the [Ruby pg reference documentation](https://rubygems.org/gems/pg).
-
-```ruby
-require 'pg'
-begin
- # NOTE: Replace the host and password arguments in the connection string.
- # (The connection string can be obtained from the Azure portal)
- connection = PG::Connection.new("host=<server name> port=5432 dbname=citus user=citus password={your password} sslmode=require")
- puts 'Successfully created connection to database'
-
- resultSet = connection.exec('SELECT * from pharmacy')
- resultSet.each do |row|
- puts 'Data row = (%s, %s, %s, %s, %s)' % [row['pharmacy_id'], row['pharmacy_name'], row['city'], row['state'], row['zip_code']]
- end
-rescue PG::Error => e
- puts e.message
-ensure
- connection.close if connection
-end
-```
-
-## Update data
-
-Use the following code to connect and update the data using an UPDATE SQL statement.
-
-The code uses a `PG::Connection` object with constructor to connect to Hyperscale (Citus). Then it calls method `exec()` to run the UPDATE command. The code checks for errors using the `PG::Error` class. Then it calls method `close()` to close the connection before terminating. For more information about these classes and methods, see the [Ruby pg reference documentation](https://rubygems.org/gems/pg).
-
-```ruby
-require 'pg'
-begin
- # NOTE: Replace the host and password arguments in the connection string.
- # (The connection string can be obtained from the Azure portal)
- connection = PG::Connection.new("host=<server name> port=5432 dbname=citus user=citus password={your password} sslmode=require")
- puts 'Successfully created connection to database'
-
- # Modify some data in table.
- connection.exec('UPDATE pharmacy SET city = %s WHERE pharmacy_id = %d;' % ['\'guntur\'',100])
- puts 'Updated 1 row of data.'
-rescue PG::Error => e
- puts e.message
-ensure
- connection.close if connection
-end
-```
-
-## Delete data
-
-Use the following code to connect and delete the data using a DELETE SQL statement.
-
-The code uses a `PG::Connection` object with constructor new to connect to Hyperscale (Citus). Then it calls method `exec()` to run the DELETE command. The code checks for errors using the `PG::Error` class. Then it calls method `close()` to close the connection before terminating. For more information about these classes and methods, see the [Ruby pg reference documentation](https://rubygems.org/gems/pg).
-
-```ruby
-require 'pg'
-begin
- # NOTE: Replace the host and password arguments in the connection string.
- # (The connection string can be obtained from the Azure portal)
- connection = PG::Connection.new("host=<server name> port=5432 dbname=citus user=citus password={your password} sslmode=require")
- puts 'Successfully created connection to database'
-
- # Delete some data in table.
- connection.exec('DELETE FROM pharmacy WHERE city = %s;' % ['\'guntur\''])
- puts 'Deleted 1 row of data.'
-rescue PG::Error => e
- puts e.message
-ensure
- connection.close if connection
-end
-```
-
-## COPY command for super fast ingestion
-
-The COPY command can yield [tremendous throughput](https://www.citusdata.com/blog/2016/06/15/copy-postgresql-distributed-tables) while ingesting data into Hyperscale (Citus). The COPY command can ingest data in files, or from micro-batches of data in memory for real-time ingestion.
-
-### COPY command to load data from a file
-
-The following code is an example for copying data from a CSV file to a database table.
-
-It requires the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv).
-
-```ruby
-require 'pg'
-begin
- filename = String('pharmacies.csv')
-
- # NOTE: Replace the host and password arguments in the connection string.
- # (The connection string can be obtained from the Azure portal)
- connection = PG::Connection.new("host=<server name> port=5432 dbname=citus user=citus password={your password} sslmode=require")
- puts 'Successfully created connection to database'
-
- # Copy the data from Csv to table.
- result = connection.copy_data "COPY pharmacy FROM STDIN with csv" do
- File.open(filename , 'r').each do |line|
- connection.put_copy_data line
- end
- puts 'Copied csv data successfully .'
- end
-rescue PG::Error => e
- puts e.message
-ensure
- connection.close if connection
-end
-```
-
-### COPY command to load data in-memory
-
-The following code is an example for copying in-memory data to a table.
-
-```ruby
-require 'pg'
-begin
- # NOTE: Replace the host and password arguments in the connection string.
- # (The connection string can be obtained from the Azure portal)
- connection = PG::Connection.new("host=<server name> port=5432 dbname=citus user=citus password={your password} sslmode=require")
- puts 'Successfully created connection to database'
-
- enco = PG::TextEncoder::CopyRow.new
- connection.copy_data "COPY pharmacy FROM STDIN", enco do
- connection.put_copy_data [5000,'Target','Sunnyvale','California','94001']
- connection.put_copy_data [5001, 'CVS','San Francisco','California','94002']
- puts 'Copied inmemory data successfully .'
- end
-rescue PG::Error => e
- puts e.message
-ensure
- connection.close if connection
-end
-```
-## App retry during database request failures
-
-The following example retries a query up to `retryCount` times, waiting 60 seconds between attempts, to handle transient database failures. Replace the host and password in the connection string with your own values.
-
-```ruby
-require 'pg'
-
-def executeretry(sql,retryCount)
- begin
- for a in 1..retryCount do
- begin
- # NOTE: Replace the host and password arguments in the connection string.
- # (The connection string can be obtained from the Azure portal)
- connection = PG::Connection.new("host=<Server Name> port=5432 dbname=citus user=citus password={Your Password} sslmode=require")
- resultSet = connection.exec(sql)
- return resultSet.each
- rescue PG::Error => e
- puts e.message
- sleep 60
- ensure
- connection.close if connection
- end
- end
- end
- return nil
-end
-
-var = executeretry('select 1',5)
-
-if var !=nil then
- var.each do |row|
- puts 'Data row = (%s)' % [row]
- end
-end
-```
-
-## Next steps
-
postgresql Quickstart Build Scalable Apps Classify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-build-scalable-apps-classify.md
- Title: Classify application workload - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Classify workload for scalable application
-----
-recommendations: false
Previously updated : 08/11/2022--
-# Classify application workload
--
-Here are common characteristics of the workloads that are the best fit for
-Hyperscale (Citus).
-
-## Prerequisites
-
-This article assumes you know the [fundamental concepts for
-scaling](quickstart-build-scalable-apps-concepts.md). If you haven't read about
-them, take a moment to do so.
-
-## Characteristics of multi-tenant SaaS
-
-* Tenants see their own data; they can't see other tenants' data.
-* Most B2B SaaS apps are multi-tenant. Examples include Salesforce or Shopify.
-* In most B2B SaaS apps, there are hundreds to tens of thousands of tenants, and
- more tenants keep joining.
-* Multi-tenant SaaS apps are primarily operational/transactional, with single
- digit millisecond latency requirements for their database queries.
-* These apps have a classic relational data model, and are built using ORMs
-  such as RoR, Hibernate, and Django.
- <br><br>
- > [!VIDEO https://www.youtube.com/embed/7gAW08du6kk]
-
-## Characteristics of real-time operational analytics
-
-* These apps have a customer/user facing interactive analytics dashboard, with
- a subsecond query latency requirement.
-* High concurrency required - at least 20 users.
-* Analyzes data that's fresh, within the last one second to few minutes.
-* Most have time series data such as events, logs, etc.
-* Common data models in these apps include:
- * Star Schema - few large/fact tables, the rest being small/dimension tables
- * Mostly fewer than 20 major tables
- <br><br>
- > [!VIDEO https://www.youtube.com/embed/xGWVVTva434]
-
-## Characteristics of high-throughput transactional
-
-* Run NoSQL/document style workloads, but require PostgreSQL features such as
- transactions, foreign/primary keys, triggers, extension like PostGIS, etc.
-* The workload is based on a single key. It has CRUD and lookups based on that
- key.
-* These apps have high throughput requirements: thousands to hundreds of thousands of
- TPS.
-* Query latency in single-digit milliseconds, with a high concurrency
- requirement.
-* Time series data, such as internet of things.
- <br><br>
- > [!VIDEO https://www.youtube.com/embed/A9q7w96yO_E]
-
-## Next steps
-
-Choose whichever fits your application the best:
-
-> [!div class="nextstepaction"]
-> [Model multi-tenant SaaS app >](quickstart-build-scalable-apps-model-multi-tenant.md)
-
-> [!div class="nextstepaction"]
-> [Model real-time analytics app](quickstart-build-scalable-apps-model-real-time.md)
-
-> [!div class="nextstepaction"]
-> [Model high-throughput app](quickstart-build-scalable-apps-model-high-throughput.md)
postgresql Quickstart Build Scalable Apps Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-build-scalable-apps-concepts.md
- Title: Fundamental concepts for scaling - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Ideas you need to know to build relational apps that scale
-----
-recommendations: false
Previously updated : 08/11/2022--
-# Fundamental concepts for scaling
--
-Before we investigate the steps of building a new app, it's helpful to see a
-quick overview of the terms and concepts involved.
-
-## Architectural overview
-
-Hyperscale (Citus) gives you the power to distribute tables across multiple
-machines in a server group and transparently query them the same way you query
-plain PostgreSQL:
-
-![Diagram of the coordinator node sharding a table onto worker nodes.](../media/howto-hyperscale-build-scalable-apps/architecture.png)
-
-In the Hyperscale (Citus) architecture, there are multiple kinds of nodes:
-
-* The **coordinator** node stores distributed table metadata and is responsible
- for distributed planning.
-* By contrast, the **worker** nodes store the actual data and do the computation.
-* Both the coordinator and workers are plain PostgreSQL databases, with the
- `citus` extension loaded.
-
-To distribute a normal PostgreSQL table, like `campaigns` in the diagram above,
-run a command called `create_distributed_table()`. Once you run this
-command, Hyperscale (Citus) transparently creates shards for the table across
-worker nodes. In the diagram, shards are represented as blue boxes.
-
-> [!NOTE]
->
-> On the basic tier, shards of distributed tables are on the coordinator node,
-> not worker nodes.
-
-Shards are plain (but specially named) PostgreSQL tables that hold slices of
-your data. In our example, because we distributed `campaigns` by `company_id`,
-the shards hold campaigns, where the campaigns of different companies are
-assigned to different shards.
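-
-For example, a minimal sketch of that command for the `campaigns` table in the diagram, assuming `company_id` is the chosen distribution column, looks like this:
-
-```postgresql
-SELECT create_distributed_table('campaigns', 'company_id');
-```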
-
-## Distribution column (also known as shard key)
-
-`create_distributed_table()` is the magic function that Hyperscale (Citus)
-provides to distribute tables and use resources across multiple machines.
-
-```postgresql
-SELECT create_distributed_table(
- 'table_name',
- 'distribution_column');
-```
-
-The second argument above picks a column from the table as a **distribution
-column**. It can be any column with a native PostgreSQL type (with integer and
-text being most common). The value of the distribution column determines which
-rows go into which shards, which is why the distribution column is also called
-the **shard key**.
-
-Hyperscale (Citus) decides how to run queries based on their use of the shard
-key:
-
-| Query involves | Where it runs |
-|----------------|------------------------------------------|
-| just one shard key | on the worker node that holds its shard |
-| multiple shard keys | parallelized across multiple nodes |
-
-The choice of shard key dictates the performance and scalability of your
-applications.
-
-* Uneven data distribution per shard keys (also known as *data skew*) isn't optimal
- for performance. For example, don't choose a column for which a single value
- represents 50% of data.
-* Shard keys with low cardinality can affect scalability. You can use only as
- many shards as there are distinct key values. Choose a key with cardinality
- in the hundreds to thousands. (The sketch after this list shows one way to
- check both properties.)
-* Joining two large tables with different shard keys can be slow. Choose a
- common shard key across large tables. Learn more in
- [colocation](#colocation).
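
As a quick sanity check before distributing a table, you can measure a candidate shard key's cardinality and skew with ordinary SQL. A minimal sketch using the `campaigns` table and `company_id` column from the example above:

```postgresql
-- How many distinct values does the candidate shard key have?
SELECT count(DISTINCT company_id) AS distinct_keys
FROM campaigns;

-- Does any single value account for an outsized share of rows (data skew)?
SELECT company_id,
       count(*) AS row_count,
       round(100.0 * count(*) / sum(count(*)) OVER (), 2) AS pct_of_rows
FROM campaigns
GROUP BY company_id
ORDER BY row_count DESC
LIMIT 10;
```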
-
-## Colocation
-
-Another concept closely related to the shard key is *colocation*. Tables sharded
-on the same distribution column are colocated: the shards of colocated
-tables are stored together on the same workers.
-
-Below are two tables sharded by the same key, `site_id`. They're colocated.
-
-![Diagram of tables http_request and http_request_1min colocated by site_id.](../media/howto-hyperscale-build-scalable-apps/colocation.png)
-
-Hyperscale (Citus) ensures that rows with a matching `site_id` value in both
-tables are stored on the same worker node. You can see that, for both tables,
-rows with `site_id=1` are stored on worker 1. Similarly for other site IDs.
-
-Colocation helps optimize JOINs across these tables. If you join the two tables
-on `site_id`, Hyperscale (Citus) can perform the join locally on worker nodes
-without shuffling data between nodes.
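
For example, here's a minimal sketch of such a colocated join, using the two tables from the diagram (the `request_count` column is an illustrative assumption):

```postgresql
-- Both tables are distributed by site_id and colocated, so this join is
-- executed locally on each worker without cross-node data shuffling.
SELECT r.site_id,
       count(*)             AS raw_requests,
       max(m.request_count) AS peak_requests_per_minute
FROM http_request r
JOIN http_request_1min m USING (site_id)
GROUP BY r.site_id;
```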
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Classify application workload >](quickstart-build-scalable-apps-classify.md)
postgresql Quickstart Build Scalable Apps Model High Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-build-scalable-apps-model-high-throughput.md
- Title: Model high throughput apps - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Techniques for scalable high-throughput transactional apps
-----
-recommendations: false
Previously updated : 08/11/2022--
-# Model high-throughput transactional apps
--
-## Common filter as shard key
-
-To pick the shard key for a high-throughput transactional application, follow
-these guidelines:
-
-* Choose a column that is used for point lookups and is present in most
- create, read, update, and delete operations.
-* Choose a column that is a natural dimension in the data, or a central piece
- of the application. For example:
- * In an IoT workload, `device_id` is a good distribution column.
-
-The choice of a good shard key helps optimize network hops, while taking
-advantage of memory and compute to achieve millisecond latency.
-
-## Optimal data model for high-throughput apps
-
-Below is a sample data model for an IoT app that captures
-telemetry (time series data) from devices. There are two tables for capturing
-telemetry: `devices` and `events`. There could be other tables, but they're not
-covered in this example.
-
-![Diagram of events and devices tables, and partitions of events.](../media/howto-hyperscale-build-scalable-apps/high-throughput-data-model.png)
-
-When building a high-throughput app, keep a few optimizations in mind.
-
-* Distribute large tables on a common column that is a central piece of the app,
- and the column that your app mostly queries. In the above example of an IoT
- app, `device_id` is that column, and it colocates the events and devices
- tables (see the sketch after this list).
-* The rest of the small tables can be reference tables.
-* As IoT apps have a time dimension, partition your distributed tables based on
- time. You can use native Hyperscale (Citus) time series capabilities to
- create and maintain partitions.
- * Partitioning helps efficiently filter data for queries with time filters.
- * Expiring old data is also fast, because dropping a partition (DROP) is much
- cheaper than a row-by-row DELETE.
- * The events table in our example is partitioned by month.
-* Use the JSONB datatype to store semi-structured data. Device telemetry
- data is typically not structured; every device has its own metrics.
- * In our example, the events table has a `detail` column, which is JSONB.
-* If your IoT app requires geospatial features, you can use the PostGIS
- extension, which Hyperscale (Citus) supports natively.
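
Here's a minimal sketch that applies these guidelines to the example `devices` and `events` tables. The column lists are illustrative assumptions rather than the exact schema from the diagram:

```sql
-- Illustrative schemas; real column lists will differ.
CREATE TABLE devices (
    device_id     bigint PRIMARY KEY,
    device_type   text,
    registered_at timestamptz
);

CREATE TABLE events (
    device_id  bigint,
    event_id   bigint,
    detail     jsonb,                  -- semi-structured telemetry
    created_at timestamptz NOT NULL,
    PRIMARY KEY (device_id, event_id, created_at)
) PARTITION BY RANGE (created_at);

-- Distribute both tables by device_id so that they're colocated.
SELECT create_distributed_table('devices', 'device_id');
SELECT create_distributed_table('events', 'device_id', colocate_with => 'devices');

-- Create monthly partitions for the next year with the built-in helper.
SELECT create_time_partitions(
    table_name         := 'events',
    partition_interval := '1 month',
    end_at             := now() + interval '12 months'
);
```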
-
-## Next steps
-
-Now we've finished exploring data modeling for scalable apps. The next step is
-connecting and querying the database with your programming language of choice.
-
-> [!div class="nextstepaction"]
-> [App stacks >](quickstart-app-stacks-overview.md)
postgresql Quickstart Build Scalable Apps Model Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-build-scalable-apps-model-multi-tenant.md
- Title: Model multi-tenant apps - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Techniques for scalable multi-tenant SaaS apps
-----
-recommendations: false
Previously updated : 08/11/2022--
-# Model multi-tenant SaaS apps
--
-## Tenant ID as the shard key
-
-The tenant ID is the column at the root of the workload, or the top of the
-hierarchy in your data-model. For example, in this SaaS e-commerce schema,
-it would be the store ID:
-
-![Diagram of tables, with the store_id column highlighted.](../media/howto-hyperscale-build-scalable-apps/multi-tenant-id.png)
-
-This data model would be typical for a business such as Shopify. It hosts sites
-for multiple online stores, where each store interacts with its own data.
-
-* This data model has several tables: stores, products, orders, line items,
- and countries.
-* The stores table is at the top of the hierarchy. Products, orders, and
- line items are all associated with stores, and are thus lower in the hierarchy.
-* The countries table isn't related to individual stores; it's shared across
- stores.
-
-In this example, `store_id`, which is at the top of the hierarchy, is the
-identifier for a tenant. It's the right shard key. Picking `store_id` as the
-shard key enables colocating data across all tables for a single store on a
-single worker.
-
-Colocating tables by store has advantages:
-
-* Provides SQL coverage such as foreign keys and JOINs. Transactions for a single
- tenant are localized to the worker node where that tenant's data lives.
-* Achieves single-digit millisecond performance. Queries for a single tenant are
- routed to a single node instead of being parallelized, which helps optimize
- network hops and still scale compute/memory.
-* It scales. As the number of tenants grows, you can add nodes and rebalance
- the tenants to new nodes, or even isolate large tenants to their own nodes.
- Tenant isolation allows you to provide dedicated resources.
-
-![Diagram of tables colocated to the same nodes.](../media/howto-hyperscale-build-scalable-apps/multi-tenant-colocation.png)
-
-## Optimal data model for multi-tenant apps
-
-In this example, we should distribute the store-specific tables by store ID,
-and make `countries` a reference table.
-
-![Diagram of tables with store_id more universally highlighted.](../media/howto-hyperscale-build-scalable-apps/multi-tenant-data-model.png)
-
-Notice that tenant-specific tables include the tenant ID and are distributed. In
-our example, stores, products, and line\_items are distributed, while the
-countries table is a reference table.
-
-```sql
-- Distribute large tables by the tenant ID
-SELECT create_distributed_table('stores', 'store_id');
-SELECT create_distributed_table('products', 'store_id', colocate_with => 'stores');
-- etc. for the rest of the tenant tables...

-- Then, make "countries" a reference table, with a synchronized copy of the
-- table maintained on every worker node
-SELECT create_reference_table('countries');
-```
-
-Large tables should all have the tenant ID.
-
-* If you're **migrating an existing** multi-tenant app to Hyperscale (Citus),
- you may need to denormalize a little and add the tenant ID column to large
- tables if it's missing, then backfill the missing values of the column.
-* For **new apps** on Hyperscale (Citus), make sure the tenant ID is present
- on all tenant-specific tables.
-
-Be sure to include the tenant ID in primary, unique, and foreign key constraints
-on distributed tables in the form of a composite key. For example, if a table
-has a primary key of `id`, turn it into the composite key `(tenant_id, id)`.
-There's no need to change keys for reference tables.
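
For example, here's a minimal sketch of two tenant tables keyed this way (the column lists are illustrative):

```sql
CREATE TABLE orders (
    store_id   uuid   NOT NULL,        -- tenant ID
    order_id   bigint NOT NULL,
    status     text,
    ordered_at timestamptz,
    PRIMARY KEY (store_id, order_id)   -- composite key includes the tenant ID
);

CREATE TABLE line_items (
    store_id     uuid   NOT NULL,      -- tenant ID
    line_item_id bigint NOT NULL,
    order_id     bigint NOT NULL,
    quantity     int,
    PRIMARY KEY (store_id, line_item_id)
);

SELECT create_distributed_table('orders', 'store_id');
SELECT create_distributed_table('line_items', 'store_id', colocate_with => 'orders');

-- Foreign keys between colocated distributed tables must also include the tenant ID.
ALTER TABLE line_items
  ADD FOREIGN KEY (store_id, order_id) REFERENCES orders (store_id, order_id);
```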
-
-## Query considerations for best performance
-
-Distributed queries that filter on the tenant ID run most efficiently in
-multi-tenant apps. Ensure that your queries are always scoped to a single
-tenant.
-
-```sql
-SELECT *
- FROM orders
- WHERE order_id = 123
- AND store_id = 42; -- ← tenant ID filter
-```
-
-It's necessary to add the tenant ID filter even if the original filter
-conditions unambiguously identify the rows you want. The tenant ID filter,
-while seemingly redundant, tells Hyperscale (Citus) how to route the query to a
-single worker node.
-
-Similarly, when you're joining two distributed tables, ensure that both
-tables are scoped to a single tenant. Scoping can be done by ensuring that the
-join conditions include the tenant ID.
-
-```sql
-SELECT sum(l.quantity)
- FROM line_items l
- INNER JOIN products p
- ON l.product_id = p.product_id
- AND l.store_id = p.store_id -- ← tenant ID in join
- WHERE p.name='Awesome Wool Pants'
- AND l.store_id='8c69aa0d-3f13-4440-86ca-443566c1fc75';
- -- ↑ tenant ID filter
-```
-
-There are helper libraries for several popular application frameworks that make
-it easy to include a tenant ID in queries. Here are instructions:
-
-* [Ruby on Rails instructions](https://docs.citusdata.com/en/stable/develop/migration_mt_ror.html)
-* [Django instructions](https://docs.citusdata.com/en/stable/develop/migration_mt_django.html)
-* [ASP.NET](https://docs.citusdata.com/en/stable/develop/migration_mt_asp.html)
-* [Java Hibernate](https://www.citusdata.com/blog/2018/02/13/using-hibernate-and-spring-to-build-multitenant-java-apps/)
-
-## Next steps
-
-Now we've finished exploring data modeling for scalable apps. The next step is
-connecting and querying the database with your programming language of choice.
-
-> [!div class="nextstepaction"]
-> [App stacks >](quickstart-app-stacks-overview.md)
postgresql Quickstart Build Scalable Apps Model Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-build-scalable-apps-model-real-time.md
- Title: Model real-time apps - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Techniques for scalable real-time analytical apps
-----
-recommendations: false
Previously updated : 08/11/2022--
-# Model real-time analytics apps
--
-## Colocate large tables with shard key
-
-To pick the shard key for a real-time operational analytics application, follow
-these guidelines:
-
-* Choose a column that is common on large tables.
-* Choose a column that is a natural dimension in the data, or a central piece
- of the application. Some examples:
- * In the financial world, an application that analyzes security trends would
- probably use `security_id`.
- * In a user analytics workload where you want to analyze website usage
- metrics, `user_id` would be a good distribution column.
-
-By colocating large tables, you can push SQL queries down to worker nodes in
-parallel. Pushing down queries avoids shuffling data between nodes over the
-network. Operations such as JOINs, aggregates, rollups, filters, and LIMITs can
-be executed efficiently.
-
-To visualize parallel distributed queries on colocated tables, consider this
-diagram:
-
-![Diagram of joins happening within worker nodes.](../media/howto-hyperscale-build-scalable-apps/real-time-join.png)
-
-The `users` and `events` tables are both sharded by `user_id`, so related
-rows for the same user ID are placed together on the same worker node. The
-SQL JOINs can happen without pulling information between workers.
-
-## Optimal data model for real-time apps
-
-Let's continue with the example of an application that analyzes user website
-visits and metrics. There are two "fact" tables--users and events--and other
-smaller "dimension" tables.
-
-![Diagram of users, events, and miscellaneous tables.](../media/howto-hyperscale-build-scalable-apps/real-time-data-model.png)
-
-To apply the superpower of distributed tables on Hyperscale (Citus), follow
-these steps:
-
-* Distribute large fact tables on a common column. In our case, users and
- events are distributed on `user_id`.
-* Mark the small/dimension tables (`device_types`, `countries`, and
- `event_types`) as Hyperscale (Citus) reference tables.
-* Be sure to include the distribution column in primary, unique, and foreign
- key constraints on distributed tables. Including the column may require making
- the keys composite. There's no need to update keys for reference tables.
-* When you're joining large distributed tables, be sure to join using the
- shard key (see the rollup sketch after the following code block).
-
-```sql
-- Distribute the fact tables
-SELECT create_distributed_table('users', 'user_id');
-SELECT create_distributed_table('events', 'user_id', colocate_with => 'users');

-- Turn dimension tables into reference tables, with synchronized copies maintained on every worker node
-SELECT create_reference_table('countries');
-- similarly for device_types and event_types...
-```
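
Once the tables are distributed as above, a rollup that joins `users` and `events` on the shard key is pushed down to the workers in parallel. A minimal sketch; the `created_at` column is an illustrative assumption:

```sql
-- Joining on the shard key (user_id) keeps the join local to each worker.
SELECT u.user_id,
       count(*) AS events_last_week
FROM users u
JOIN events e ON e.user_id = u.user_id
WHERE e.created_at > now() - interval '7 days'
GROUP BY u.user_id
ORDER BY events_last_week DESC
LIMIT 10;
```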
-
-## Next steps
-
-Now we've finished exploring data modeling for scalable apps. The next step is
-connecting and querying the database with your programming language of choice.
-
-> [!div class="nextstepaction"]
-> [App stacks >](quickstart-app-stacks-overview.md)
postgresql Quickstart Build Scalable Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-build-scalable-apps-overview.md
- Title: Build scalable apps - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: How to build relational apps that scale
-----
-recommendations: false
Previously updated : 08/11/2022--
-# Build scalable apps
-
-Early in the quickstart, we [created a server
-group](quickstart-create-portal.md) using the [basic
-tier](concepts-server-group.md#tiers). The basic tier is good for apps that a
-single database node (64 vCores, 256-GB RAM, and 512-GB storage) can handle
-for the near future (~6 months). Later, you can add more nodes, rebalance your
-data, and scale out seamlessly.
-
-If your app requires multiple database nodes in the short term, start
-with the Hyperscale (Citus) **Standard Tier**.
-
-> [!TIP]
->
-> If you choose the Basic Tier, you can treat Hyperscale (Citus) just like
-> standard PostgreSQL, and achieve full feature parity. You don't need any
-> distributed data modeling techniques while building your app. If you decide
-> to go that route, you can skip this section.
-
-## Three steps for building highly scalable apps
-
-There are three steps involved in building scalable apps with Hyperscale
-(Citus):
-
-1. Classify your application workload. There are use cases where Hyperscale
- (Citus) shines: multi-tenant SaaS, real-time operational analytics, and high
- throughput OLTP. Determine whether your app falls into one of these categories.
-2. Based on the workload, identify the optimal shard key for the distributed
- tables. Classify your tables as reference, distributed, or local.
-3. Update the database schema and application queries to make them run
- efficiently across nodes.
-
-## Next steps
-
-Before you start building a new app, it's helpful to review a little more about
-the architecture of Hyperscale (Citus).
-
-> [!div class="nextstepaction"]
-> [Fundamental concepts for scaling >](quickstart-build-scalable-apps-concepts.md)
postgresql Quickstart Connect Psql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-connect-psql.md
- Title: 'Quickstart: connect to a server group with psql - Hyperscale (Citus) - Azure Database for PostgreSQL'
-description: Quickstart to connect psql to Azure Database for PostgreSQL - Hyperscale (Citus).
--
-recommendations: false
---- Previously updated : 05/05/2022--
-# Connect to a Hyperscale (Citus) server group with psql
--
-## Prerequisites
-
-To follow this quickstart, you'll first need to:
-
-* [Create a server group](quickstart-create-portal.md) in the Azure portal.
-
-## Connect
-
-When you create your Hyperscale (Citus) server group, a default database named **citus** is created. To connect to your database server, you need a connection string and the admin password.
-
-1. Obtain the connection string. In the server group page, select the
- **Connection strings** menu item.
-
- ![get connection string](../media/quickstart-connect-psql/get-connection-string.png)
-
- Find the string marked **psql**. It will be of the form, `psql
- "host=c.servergroup.postgres.database.azure.com port=5432 dbname=citus
- user=citus password={your_password} sslmode=require"`
-
- * Copy the string.
- * Replace "{your\_password}" with the administrative password you chose earlier.
- * Notice the hostname starts with a `c.`, for instance
- `c.demo.postgres.database.azure.com`. This prefix indicates the
- coordinator node of the server group.
- * The default dbname and username are `citus` and can't be changed.
-
-2. Open the Azure Cloud Shell. Select the **Cloud Shell** icon in the Azure portal.
-
- ![cloud shell icon](../media/quickstart-connect-psql/open-cloud-shell.png)
-
- If prompted, choose an Azure subscription in which to store Cloud Shell data.
-
-3. In the shell, paste the psql connection string, *substituting your password
- for the string `{your_password}`*, then press enter. For example:
-
- ![run psql in cloud
- shell](../media/quickstart-connect-psql/cloud-shell-run-psql.png)
-
- When psql successfully connects to the database, you'll see a new prompt:
-
- ```
- psql (14.2 (Debian 14.2-1.pgdg100+1))
- SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
- Type "help" for help.
-
- citus=>
- ```
-
-4. Run a test query. Copy the following command and paste it into the psql
- prompt, then press enter to run:
-
- ```sql
- SHOW server_version;
- ```
-
- You should see a result matching the PostgreSQL version you selected
- during server group creation. For instance:
-
- ```
- server_version
-----------------
- 14.2
-(1 row)
- ```
-
-## Next steps
-
-Now that you've connected to the server group, the next step is to create
-tables and shard them for horizontal scaling.
-
-> [!div class="nextstepaction"]
-> [Create and distribute tables >](quickstart-distribute-tables.md)
postgresql Quickstart Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-create-portal.md
- Title: 'Quickstart: create a server group - Hyperscale (Citus) - Azure Database for PostgreSQL'
-description: Quickstart to create and query distributed tables on Azure Database for PostgreSQL Hyperscale (Citus).
--
-recommendations: false
---- Previously updated : 06/29/2022
-#Customer intent: As a developer, I want to provision a hyperscale server group so that I can run queries quickly on large datasets.
--
-# Create a Hyperscale (Citus) server group in the Azure portal
--
-Azure Database for PostgreSQL - Hyperscale (Citus) is a managed service that
-allows you to run horizontally scalable PostgreSQL databases in the cloud.
-
-## Prerequisites
-
-To follow this quickstart, you'll first need to:
-
-* Create a [free account](https://azure.microsoft.com/free/) (if you don't have
- an Azure subscription).
-* Sign in to the [Azure portal](https://portal.azure.com).
-
-## Get started with the Basic Tier
-
-The Basic Tier allows you to deploy Hyperscale (Citus) as a single node, while
-having the superpower of distributing tables. At a few dollars a day, it's the
-most cost-effective way to experience Hyperscale (Citus). Later, if your
-application requires greater scale, you can add nodes and rebalance your data.
-
-Let's get started!
--
-## Next steps
-
-With your server group created, it's time to connect with a SQL client.
-
-> [!div class="nextstepaction"]
-> [Connect to your server group >](quickstart-connect-psql.md)
postgresql Quickstart Distribute Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-distribute-tables.md
- Title: 'Quickstart: distribute tables - Hyperscale (Citus) - Azure Database for PostgreSQL'
-description: Quickstart to distribute table data across nodes in Azure Database for PostgreSQL - Hyperscale (Citus).
--
-recommendations: false
---- Previously updated : 08/11/2022--
-# Create and distribute tables
--
-In this example, we'll use Hyperscale (Citus) distributed tables to store and
-query events recorded from GitHub open source contributors.
-
-## Prerequisites
-
-To follow this quickstart, you'll first need to:
-
-1. [Create a server group](quickstart-create-portal.md) in the Azure portal.
-2. [Connect to the server group](quickstart-connect-psql.md) with psql to
- run SQL commands.
-
-## Create tables
-
-Once you've connected via psql, let's create our tables. Copy and paste the
-following commands into the psql terminal window, and press enter to run them:
-
-```sql
-CREATE TABLE github_users
-(
- user_id bigint,
- url text,
- login text,
- avatar_url text,
- gravatar_id text,
- display_login text
-);
-
-CREATE TABLE github_events
-(
- event_id bigint,
- event_type text,
- event_public boolean,
- repo_id bigint,
- payload jsonb,
- repo jsonb,
- user_id bigint,
- org jsonb,
- created_at timestamp
-);
-
-CREATE INDEX event_type_index ON github_events (event_type);
-CREATE INDEX payload_index ON github_events USING GIN (payload jsonb_path_ops);
-```
-
-Notice the GIN index on `payload` in `github_events`. The index allows fast
-querying of the JSONB column. Since Citus is a PostgreSQL extension, Hyperscale
-(Citus) supports advanced PostgreSQL features like the JSONB datatype for
-storing semi-structured data.
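
For instance, once data is loaded (in a later step), a JSONB containment query like the following can use that GIN index; the filter value is just an illustrative query shape:

```sql
-- The @> containment operator is supported by the jsonb_path_ops GIN index.
SELECT count(*)
FROM github_events
WHERE payload @> '{"ref":"refs/heads/master"}';
```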
-
-## Distribute tables
-
-`create_distributed_table()` is the magic function that Hyperscale (Citus)
-provides to distribute tables and use resources across multiple machines. The
-function decomposes tables into shards, which can be spread across nodes for
-increased storage and compute performance.
-
-The server group in this quickstart uses the Basic Tier, so the shards will be
-stored on just one node. However, if you later decide to graduate to the
-Standard Tier, then the shards can be spread across more nodes. With Hyperscale
-(Citus), you can start small and scale seamlessly.
-
-Let's distribute the tables:
-
-```sql
-SELECT create_distributed_table('github_users', 'user_id');
-SELECT create_distributed_table('github_events', 'user_id');
-```
--
-## Load data into distributed tables
-
-We're ready to fill the tables with sample data. For this quickstart, we'll use
-a dataset previously captured from the GitHub API.
-
-Run the following commands to download example CSV files and load them into the
-database tables. (The `curl` command downloads the files, and comes
-pre-installed in the Azure Cloud Shell.)
-
-```
-- download users and store in table
-\COPY github_users FROM PROGRAM 'curl https://examples.citusdata.com/users.csv' WITH (FORMAT CSV)
-- download events and store in table
-\COPY github_events FROM PROGRAM 'curl https://examples.citusdata.com/events.csv' WITH (FORMAT CSV)
-```
-
-We can review details of our distributed tables, including their sizes, with
-the `citus_tables` view:
-
-```sql
-SELECT * FROM citus_tables;
-```
-
-```
-  table_name   | citus_table_type | distribution_column | colocation_id | table_size | shard_count | table_owner | access_method
----------------+------------------+---------------------+---------------+------------+-------------+-------------+---------------
- github_events | distributed      | user_id             |             1 | 388 MB     |          32 | citus       | heap
- github_users  | distributed      | user_id             |             1 | 39 MB      |          32 | citus       | heap
-(2 rows)
-```
-
-## Next steps
-
-Now that we've distributed the tables and loaded them with data, let's try
-running queries across the distributed tables.
-
-> [!div class="nextstepaction"]
-> [Run distributed queries >](quickstart-run-queries.md)
postgresql Quickstart Run Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-run-queries.md
- Title: 'Quickstart: Run queries - Hyperscale (Citus) - Azure Database for PostgreSQL'
-description: Quickstart to run queries on table data in Azure Database for PostgreSQL - Hyperscale (Citus).
--
-recommendations: false
---- Previously updated : 08/11/2022--
-# Run queries
--
-## Prerequisites
-
-To follow this quickstart, you'll first need to:
-
-1. [Create a server group](quickstart-create-portal.md) in the Azure portal.
-2. [Connect to the server group](quickstart-connect-psql.md) with psql to
- run SQL commands.
-3. [Create and distribute tables](quickstart-distribute-tables.md) with our
- example dataset.
-
-## Distributed queries
-
-Now it's time for the fun part in our quickstart series--running queries.
-Let's start with a simple `count(*)` to verify how much data we loaded in
-the previous section.
-
-```sql
-- count all rows (across shards)
-SELECT count(*) FROM github_users;
-```
-
-```
- count
--------
- 264308
-(1 row)
-```
-
-Recall that `github_users` is a distributed table, meaning its data is divided
-between multiple shards. Hyperscale (Citus) automatically runs the count on all
-shards in parallel, and combines the results.
-
-Let's continue looking at a few more query examples:
-
-```sql
-- Find all events for a single user. (A common transactional/operational query)
-SELECT created_at, event_type, repo->>'name' AS repo_name
- FROM github_events
- WHERE user_id = 3861633;
-```
-
-```
-     created_at      |  event_type  |              repo_name
----------------------+--------------+--------------------------------------
- 2016-12-01 06:28:44 | PushEvent | sczhengyabin/Google-Image-Downloader
- 2016-12-01 06:29:27 | CreateEvent | sczhengyabin/Google-Image-Downloader
- 2016-12-01 06:36:47 | ReleaseEvent | sczhengyabin/Google-Image-Downloader
- 2016-12-01 06:42:35 | WatchEvent | sczhengyabin/Google-Image-Downloader
- 2016-12-01 07:45:58 | IssuesEvent | sczhengyabin/Google-Image-Downloader
-(5 rows)
-```
-
-## More complicated queries
-
-Here's an example of a more complicated query, which retrieves hourly
-statistics for push events on GitHub. It uses PostgreSQL's JSONB feature to
-handle semi-structured data.
-
-```sql
-- Querying JSONB type. Query is parallelized across nodes.
-- Find the number of commits on the master branch per hour.
-SELECT date_trunc('hour', created_at) AS hour,
- sum((payload->>'distinct_size')::int) AS num_commits
-FROM github_events
-WHERE event_type = 'PushEvent' AND
- payload @> '{"ref":"refs/heads/master"}'
-GROUP BY hour
-ORDER BY hour;
-```
-
-```
-        hour         | num_commits
----------------------+-------------
- 2016-12-01 05:00:00 | 13051
- 2016-12-01 06:00:00 | 43480
- 2016-12-01 07:00:00 | 34254
- 2016-12-01 08:00:00 | 29307
-(4 rows)
-```
-
-Hyperscale (Citus) combines the power of SQL and NoSQL datastores
-with structured and semi-structured data.
-
-In addition to running queries, Hyperscale (Citus) also applies data definition
-changes across the shards of a distributed table:
-
-```sql
-- DDL commands that are also parallelized
-ALTER TABLE github_users ADD COLUMN dummy_column integer;
-```
-
-## Next steps
-
-You've successfully created a scalable Hyperscale (Citus) server group, created
-tables, distributed them, loaded data, and run distributed queries.
-
-Now you're ready to learn to build applications with Hyperscale (Citus).
-
-> [!div class="nextstepaction"]
-> [Build scalable applications >](quickstart-build-scalable-apps-overview.md)
postgresql Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-extensions.md
- Title: Extensions - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Describes the ability to extend the functionality of your database by using extensions in Azure Database for PostgreSQL - Hyperscale (Citus)
----- Previously updated : 08/02/2022-
-# PostgreSQL extensions in Azure Database for PostgreSQL - Hyperscale (Citus)
--
-PostgreSQL provides the ability to extend the functionality of your database by using extensions. Extensions allow for bundling multiple related SQL objects together in a single package that can be loaded or removed from your database with a single command. After being loaded in the database, extensions can function like built-in features. For more information on PostgreSQL extensions, see [Package related objects into an extension](https://www.postgresql.org/docs/current/static/extend-extensions.html).
-
-## Use PostgreSQL extensions
-
-PostgreSQL extensions must be installed in your database before you can use them. To install a particular extension, run the [CREATE EXTENSION](https://www.postgresql.org/docs/current/static/sql-createextension.html) command from the psql tool to load the packaged objects into your database.
-
-> [!NOTE]
-> If `CREATE EXTENSION` fails with a permission denied error, try the
-> `create_extension()` function instead. For instance:
->
-> ```sql
-> SELECT create_extension('postgis');
-> ```
-
-Azure Database for PostgreSQL - Hyperscale (Citus) currently supports a subset of key extensions as listed here. Extensions other than the ones listed aren't supported. You can't create your own extension with Azure Database for PostgreSQL.
-
-## Extensions supported by Azure Database for PostgreSQL
-
-The following tables list the standard PostgreSQL extensions that are currently supported by Azure Database for PostgreSQL. This information is also available by running `SELECT * FROM pg_available_extensions;`.
-
-The versions of each extension installed in a server group sometimes differ based on the version of PostgreSQL (11, 12, 13, or 14). The tables list extension versions per database version.
-
-### Citus extension
-
-> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
-> |---|---|---|---|---|---|
-> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.11 | 10.0.7 | 10.2.6 | 11.0.4 |
-
-### Data types extensions
-
-> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
-> |---|---|---|---|---|---|
-> | [citext](https://www.postgresql.org/docs/current/static/citext.html) | Provides a case-insensitive character string type. | 1.5 | 1.6 | 1.6 | 1.6 |
-> | [cube](https://www.postgresql.org/docs/current/static/cube.html) | Provides a data type for multidimensional cubes. | 1.4 | 1.4 | 1.4 | 1.5 |
-> | [hll](https://github.com/citusdata/postgresql-hll) | Provides a HyperLogLog data structure. | 2.16 | 2.16 | 2.16 | 2.16 |
-> | [hstore](https://www.postgresql.org/docs/current/static/hstore.html) | Provides a data type for storing sets of key-value pairs. | 1.5 | 1.6 | 1.7 | 1.8 |
-> | [isn](https://www.postgresql.org/docs/current/static/isn.html) | Provides data types for international product numbering standards. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 |
-> | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 |
-> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.2.0 | 1.2.0 | 1.2.0 | 1.4.0 |
-> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.4.0 | 2.4.0 | 2.4.0 | 2.4.0 |
-
-### Full-text search extensions
-
-> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
-> |---|---|---|---|---|---|
-> | [dict\_int](https://www.postgresql.org/docs/current/static/dict-int.html) | Provides a text search dictionary template for integers. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [dict\_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) | Text search dictionary template for extended synonym processing. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [unaccent](https://www.postgresql.org/docs/current/static/unaccent.html) | A text search dictionary that removes accents (diacritic signs) from lexemes. | 1.1 | 1.1 | 1.1 | 1.1 |
-
-### Functions extensions
-
-> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
-> |---|---|---|---|---|---|
-> | [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.7) | Functions for autoincrementing fields. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [earthdistance](https://www.postgresql.org/docs/current/static/earthdistance.html) | Provides a means to calculate great-circle distances on the surface of the Earth. | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [fuzzystrmatch](https://www.postgresql.org/docs/current/static/fuzzystrmatch.html) | Provides several functions to determine similarities and distance between strings. | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [insert\_username](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.8) | Functions for tracking who changed a table. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 |
-> | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.6.0 | 4.6.0 | 4.6.0 | 4.6.2 |
-> | [pg\_surgery](https://www.postgresql.org/docs/current/pgsurgery.html) | Functions to perform surgery on a damaged relation. | | | | 1.0 |
-> | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 |
-> | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [refint](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.5) | Functions for implementing referential integrity (obsolete). | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [tablefunc](https://www.postgresql.org/docs/current/static/tablefunc.html) | Provides functions that manipulate whole tables, including crosstab. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [tcn](https://www.postgresql.org/docs/current/tcn.html) | Triggered change notifications. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [timetravel](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.6) | Functions for implementing time travel. | 1.0 | | | |
-> | [uuid-ossp](https://www.postgresql.org/docs/current/static/uuid-ossp.html) | Generates universally unique identifiers (UUIDs). | 1.1 | 1.1 | 1.1 | 1.1 |
-
-### Index types extensions
-
-> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
-> |---|---|---|---|---|---|
-> | [bloom](https://www.postgresql.org/docs/current/bloom.html) | Bloom access method - signature file-based index. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [btree\_gin](https://www.postgresql.org/docs/current/static/btree-gin.html) | Provides sample GIN operator classes that implement B-tree-like behavior for certain data types. | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [btree\_gist](https://www.postgresql.org/docs/current/static/btree-gist.html) | Provides GiST index operator classes that implement B-tree. | 1.5 | 1.5 | 1.5 | 1.6 |
-
-### Language extensions
-
-> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
-> |---|---|---|---|---|---|
-> | [plpgsql](https://www.postgresql.org/docs/current/static/plpgsql.html) | PL/pgSQL loadable procedural language. | 1.0 | 1.0 | 1.0 | 1.0 |
-
-### Miscellaneous extensions
-
-> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
-> |---|---|---|---|---|---|
-> | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. | 1.1 | 1.2 | 1.2 | 1.3 |
-> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [old\_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html) | Allows inspection of the server state that is used to implement old_snapshot_threshold. | | | | 1.0 |
-> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 |
-> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.4 | 1.4 | 1.4 | 1.4 |
-> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 |
-> | [pg\_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility information. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pgrowlocks](https://www.postgresql.org/docs/current/static/pgrowlocks.html) | Provides a means for showing row-level locking information. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pgstattuple](https://www.postgresql.org/docs/current/static/pgstattuple.html) | Provides a means for showing tuple-level statistics. | 1.5 | 1.5 | 1.5 | 1.5 |
-> | [postgres\_fdw](https://www.postgresql.org/docs/current/static/postgres-fdw.html) | Foreign-data wrapper used to access data stored in external PostgreSQL servers. See the "dblink and postgres_fdw" section for information about this extension.| 1.0 | 1.0 | 1.0 | 1.1 |
-> | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about TLS/SSL certificates. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [tsm\_system\_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) | TABLESAMPLE method, which accepts number of rows as a limit. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [tsm\_system\_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method, which accepts time in milliseconds as a limit. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [xml2](https://www.postgresql.org/docs/current/xml2.html) | XPath querying and XSLT. | 1.1 | 1.1 | 1.1 | 1.1 |
--
-### PostGIS extensions
-
-> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
-> |---|---|---|---|---|---|
-> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
-> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
-> | postgis\_sfcgal | PostGIS SFCGAL functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
-> | postgis\_topology | PostGIS topology spatial types and functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
--
-## pg_stat_statements
-The [pg\_stat\_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded on every Azure Database for PostgreSQL server to provide you with a means of tracking execution statistics of SQL statements.
-
-The setting `pg_stat_statements.track` controls what statements are counted by the extension. It defaults to `top`, which means that all statements issued directly by clients are tracked. The two other tracking levels are `none` and `all`. This setting is configurable as a server parameter through the [Azure portal](../howto-configure-server-parameters-using-portal.md) or the [Azure CLI](../howto-configure-server-parameters-using-cli.md).
-
-There's a tradeoff between the query execution information pg_stat_statements provides and the effect on server performance as it logs each SQL statement. If you aren't actively using the pg_stat_statements extension, we recommend that you set `pg_stat_statements.track` to `none`. Some third-party monitoring services might rely on pg_stat_statements to deliver query performance insights, so confirm whether that applies to you before changing the setting.
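
When the extension is active, you can query the collected statistics directly. A minimal sketch; the timing column names assume PostgreSQL 13 or later (`total_exec_time`, `mean_exec_time`):

```sql
-- Top five statements by total execution time.
SELECT calls,
       round(total_exec_time::numeric, 2) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
```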
-
-## dblink and postgres_fdw
-
-You can use dblink and postgres\_fdw to connect from one PostgreSQL server to
-another, or to another database in the same server. The receiving server needs
-to allow connections from the sending server through its firewall. To use
-these extensions to connect between Azure Database for PostgreSQL servers or
-Hyperscale (Citus) server groups, set **Allow Azure services and resources to
-access this server group (or server)** to ON. You also need to turn this
-setting ON if you want to use the extensions to loop back to the same server.
-The **Allow Azure services and resources to access this server group** setting
-can be found in the Azure portal page for the Hyperscale (Citus) server group
-under **Networking**. Currently, outbound connections from Azure Database for
-PostgreSQL Single server and Hyperscale (Citus) aren't supported, except for
-connections to other Azure Database for PostgreSQL servers and Hyperscale
-(Citus) server groups.
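
For illustration, with that setting enabled, a dblink query against another server group could look like the following sketch; the host name, password, and remote query are placeholders for your own values:

```sql
-- Run a query on another server group over dblink and read the results locally.
SELECT *
FROM dblink(
       'host=c.othergroup.postgres.database.azure.com port=5432 dbname=citus user=citus password={your_password} sslmode=require',
       'SELECT user_id, login FROM github_users LIMIT 10'
     ) AS t(user_id bigint, login text);
```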
postgresql Reference Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-functions.md
- Title: SQL functions - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Functions in the Hyperscale (Citus) SQL API
----- Previously updated : 02/24/2022--
-# Functions in the Hyperscale (Citus) SQL API
--
-This section contains reference information for the user-defined functions
-provided by Hyperscale (Citus). These functions help in providing
-distributed functionality to Hyperscale (Citus).
-
-> [!NOTE]
->
-> Hyperscale (Citus) server groups running older versions of the Citus Engine may not
-> offer all the functions listed below.
-
-## Table and Shard DDL
-
-### create\_distributed\_table
-
-The create\_distributed\_table() function is used to define a distributed table
-and create its shards if it's a hash-distributed table. This function takes in
-a table name, the distribution column, and an optional distribution method and
-inserts appropriate metadata to mark the table as distributed. The function
-defaults to 'hash' distribution if no distribution method is specified. If the
-table is hash-distributed, the function also creates worker shards based on the
-shard count and shard replication factor configuration values. If the table
-contains any rows, they're automatically distributed to worker nodes.
-
-This function replaces usage of master\_create\_distributed\_table() followed
-by master\_create\_worker\_shards().
-
-#### Arguments
-
-**table\_name:** Name of the table that needs to be distributed.
-
-**distribution\_column:** The column on which the table is to be
-distributed.
-
-**distribution\_type:** (Optional) The method according to which the
-table is to be distributed. Permissible values are append or hash, with
-a default value of 'hash'.
-
-**colocate\_with:** (Optional) include current table in the colocation group
-of another table. By default tables are colocated when they're distributed by
-columns of the same type, have the same shard count, and have the same
-replication factor. Possible values for `colocate_with` are `default`, `none`
-to start a new colocation group, or the name of another table to colocate
-with that table. (See [table colocation](concepts-colocation.md).)
-
-Keep in mind that the default value of `colocate_with` does implicit
-colocation. [Colocation](concepts-colocation.md)
-can be a great thing when tables are related or will be joined. However when
-two tables are unrelated but happen to use the same datatype for their
-distribution columns, accidentally colocating them can decrease performance
-during [shard rebalancing](howto-scale-rebalance.md). The
-table shards will be moved together unnecessarily in a "cascade."
-
-If a new distributed table isn't related to other tables, it's best to
-specify `colocate_with => 'none'`.
-
-#### Return Value
-
-N/A
-
-#### Example
-
-This example informs the database that the github\_events table should
-be distributed by hash on the repo\_id column.
-
-```postgresql
-SELECT create_distributed_table('github_events', 'repo_id');
-- alternatively, to be more explicit:
-SELECT create_distributed_table('github_events', 'repo_id',
- colocate_with => 'github_repo');
-```
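
And, if `github_events` were unrelated to any existing table, a sketch of opting out of implicit colocation, as recommended above:

```postgresql
SELECT create_distributed_table('github_events', 'repo_id',
    colocate_with => 'none');
```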
-
-### truncate\_local\_data\_after\_distributing\_table
-
-Truncate all local rows after distributing a table, and prevent constraints
-from failing due to outdated local records. The truncation cascades to tables
-having a foreign key to the designated table. If the referring tables aren't themselves distributed, then truncation is forbidden until they are, to protect
-referential integrity:
-
-```
-ERROR: cannot truncate a table referenced in a foreign key constraint by a local table
-```
-
-Truncating local coordinator node table data is safe for distributed tables
-because their rows, if any, are copied to worker nodes during
-distribution.
-
-#### Arguments
-
-**table_name:** Name of the distributed table whose local counterpart on the
-coordinator node should be truncated.
-
-#### Return Value
-
-N/A
-
-#### Example
-
-```postgresql
-- requires that the argument is a distributed table
-SELECT truncate_local_data_after_distributing_table('public.github_events');
-```
-
-### create\_reference\_table
-
-The create\_reference\_table() function is used to define a small
-reference or dimension table. This function takes in a table name, and
-creates a distributed table with just one shard, replicated to every
-worker node.
-
-#### Arguments
-
-**table\_name:** Name of the small dimension or reference table that
-needs to be distributed.
-
-#### Return Value
-
-N/A
-
-#### Example
-
-This example informs the database that the nation table should be
-defined as a reference table.
-
-```postgresql
-SELECT create_reference_table('nation');
-```
-
-### alter_distributed_table
-
-The alter_distributed_table() function can be used to change the distribution
-column, shard count or colocation properties of a distributed table.
-
-#### Arguments
-
-**table\_name:** Name of the table that will be altered.
-
-**distribution\_column:** (Optional) Name of the new distribution column.
-
-**shard\_count:** (Optional) The new shard count.
-
-**colocate\_with:** (Optional) The table that the current distributed table
-will be colocated with. Possible values are `default`, `none` to start a new
-colocation group, or the name of another table with which to colocate. (See
-[table colocation](concepts-colocation.md).)
-
-**cascade_to_colocated:** (Optional) When this argument is set to "true",
-`shard_count` and `colocate_with` changes will also be applied to all of the
-tables that were previously colocated with the table, and the colocation will
-be preserved. If it is "false", the current colocation of this table will be
-broken.
-
-#### Return Value
-
-N/A
-
-#### Example
-
-```postgresql
-- change distribution column
-SELECT alter_distributed_table('github_events', distribution_column:='event_id');
-- change shard count of all tables in colocation group
-SELECT alter_distributed_table('github_events', shard_count:=6, cascade_to_colocated:=true);
-- change colocation
-SELECT alter_distributed_table('github_events', colocate_with:='another_table');
-```
-
-### update_distributed_table_colocation
-
-The update_distributed_table_colocation() function is used to update colocation
-of a distributed table. This function can also be used to break colocation of a
-distributed table. Citus will implicitly colocate two tables if their
-distribution columns have the same type, which can be useful if the tables are
-related and will be joined. If tables A and B are colocated, and table A
-gets rebalanced, table B will also be rebalanced. If table B doesn't have a
-replica identity, the rebalance will fail. Therefore, this function can be
-useful for breaking the implicit colocation in that case.
-
-This function doesn't move any data around physically.
-
-#### Arguments
-
-**table_name:** Name of the table whose colocation will be updated.
-
-**colocate_with:** The table with which the given table should be colocated.
-
-If you want to break the colocation of a table, you should specify
-`colocate_with => 'none'`.
-
-#### Return Value
-
-N/A
-
-#### Example
-
-This example shows how to update the colocation of table A to match the
-colocation of table B.
-
-```postgresql
-SELECT update_distributed_table_colocation('A', colocate_with => 'B');
-```
-
-Assume that table A and table B are colocated (possibly implicitly). If you
-want to break the colocation:
-
-```postgresql
-SELECT update_distributed_table_colocation('A', colocate_with => 'none');
-```
-
-Now, assume that table A, table B, table C and table D are colocated and you
-want to colocate table A and table B together, and table C and table D
-together:
-
-```postgresql
-SELECT update_distributed_table_colocation('C', colocate_with => 'none');
-SELECT update_distributed_table_colocation('D', colocate_with => 'C');
-```
-
-If you have a hash distributed table named none and you want to update its
-colocation, you can do:
-
-```postgresql
-SELECT update_distributed_table_colocation('"none"', colocate_with => 'some_other_hash_distributed_table');
-```
-
-### undistribute\_table
-
-The undistribute_table() function undoes the action of create_distributed_table
-or create_reference_table. Undistributing moves all data from shards back into
-a local table on the coordinator node (assuming the data can fit), then deletes
-the shards.
-
-Citus won't undistribute tables that have--or are referenced by--foreign
-keys, unless the cascade_via_foreign_keys argument is set to true. If this
-argument is false (or omitted), then you must manually drop the offending
-foreign key constraints before undistributing.
-
-#### Arguments
-
-**table_name:** Name of the distributed or reference table to undistribute.
-
-**cascade_via_foreign_keys:** (Optional) When this argument is set to "true,"
-undistribute_table also undistributes all tables that are related to table_name
-through foreign keys. Use caution with this parameter, because it can
-potentially affect many tables.
-
-#### Return Value
-
-N/A
-
-#### Example
-
-This example distributes a `github_events` table and then undistributes it.
-
-```postgresql
-- first distribute the table
-SELECT create_distributed_table('github_events', 'repo_id');
-- undo that and make it local again
-SELECT undistribute_table('github_events');
-```
-
-### create\_distributed\_function
-
-Propagates a function from the coordinator node to workers, and marks it for
-distributed execution. When a distributed function is called on the
-coordinator, Hyperscale (Citus) uses the value of the "distribution argument"
-to pick a worker node to run the function. Executing the function on workers
-increases parallelism, and can bring the code closer to data in shards for
-lower latency.
-
-The Postgres search path isn't propagated from the coordinator to workers
-during distributed function execution, so distributed function code should
-fully qualify the names of database objects. Also, notices emitted by the
-function won't be displayed to the user.
-
-#### Arguments
-
-**function\_name:** Name of the function to be distributed. The name
-must include the function's parameter types in parentheses, because
-multiple functions can have the same name in PostgreSQL. For instance,
-`'foo(int)'` is different from `'foo(int, text)'`.
-
-**distribution\_arg\_name:** (Optional) The argument name by which to
-distribute. For convenience (or if the function arguments don't have
-names), a positional placeholder is allowed, such as `'$1'`. If this
-parameter isn't specified, then the function named by `function_name`
-is merely created on the workers. If worker nodes are added in the
-future, the function will automatically be created there too.
-
-**colocate\_with:** (Optional) When the distributed function reads or writes to
-a distributed table (or, more generally, colocation group), be sure to name
-that table using the `colocate_with` parameter. Then each invocation of the
-function will run on the worker node containing relevant shards.
-
-#### Return Value
-
-N/A
-
-#### Example
-
-```postgresql
-- an example function which updates a hypothetical event_responses table,
-- which itself is distributed by event_id
-CREATE OR REPLACE FUNCTION
- register_for_event(p_event_id int, p_user_id int)
-RETURNS void LANGUAGE plpgsql AS $fn$
-BEGIN
- INSERT INTO event_responses VALUES ($1, $2, 'yes')
- ON CONFLICT (event_id, user_id)
- DO UPDATE SET response = EXCLUDED.response;
-END;
-$fn$;
-- distribute the function to workers, using the p_event_id argument to determine
-- which shard each invocation affects, and explicitly colocating with
-- event_responses which the function updates
-SELECT create_distributed_function(
- 'register_for_event(int, int)', 'p_event_id',
- colocate_with := 'event_responses'
-);
-```
-
-### alter_columnar_table_set
-
-The alter_columnar_table_set() function changes settings on a [columnar
-table](concepts-columnar.md). Calling this function on a
-non-columnar table gives an error. All arguments except the table name are
-optional.
-
-To view current options for all columnar tables, consult this table:
-
-```postgresql
-SELECT * FROM columnar.options;
-```
-
-The default values for columnar settings for newly created tables can be
-overridden with these GUCs:
-
-* columnar.compression
-* columnar.compression_level
-* columnar.stripe_row_count
-* columnar.chunk_row_count
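
For example, to change the defaults that apply to columnar tables created later in the session (the values shown are illustrative):

```postgresql
SET columnar.compression = 'lz4';
SET columnar.stripe_row_count = 100000;
```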
-
-#### Arguments
-
-**table_name:** Name of the columnar table.
-
-**chunk_row_count:** (Optional) The maximum number of rows per chunk for
-newly inserted data. Existing chunks of data won't be changed and may have
-more rows than this maximum value. The default value is 10000.
-
-**stripe_row_count:** (Optional) The maximum number of rows per stripe for
-newly inserted data. Existing stripes of data won't be changed and may have
-more rows than this maximum value. The default value is 150000.
-
-**compression:** (Optional) `[none|pglz|zstd|lz4|lz4hc]` The compression type
-for newly inserted data. Existing data won't be recompressed or
-decompressed. The default and suggested value is zstd (if support has
-been compiled in).
-
-**compression_level:** (Optional) Valid settings are from 1 through 19. If the
-compression method doesn't support the level chosen, the closest level will be
-selected instead.
-
-#### Return value
-
-N/A
-
-#### Example
-
-```postgresql
-SELECT alter_columnar_table_set(
- 'my_columnar_table',
- compression => 'none',
- stripe_row_count => 10000);
-```
-
-### alter_table_set_access_method
-
-The alter_table_set_access_method() function changes access method of a table
-(for example, heap or columnar).
-
-#### Arguments
-
-**table_name:** Name of the table whose access method will change.
-
-**access_method:** Name of the new access method.
-
-#### Return Value
-
-N/A
-
-#### Example
-
-```postgresql
-SELECT alter_table_set_access_method('github_events', 'columnar');
-```
-
-### create_time_partitions
-
-The create_time_partitions() function creates partitions of a given interval to
-cover a given range of time.
-
-#### Arguments
-
-**table_name:** (regclass) table for which to create new partitions. The table
-must be partitioned on one column, of type date, timestamp, or timestamptz.
-
-**partition_interval:** an interval of time, such as `'2 hours'` or
-`'1 month'`, to use when setting ranges on new partitions.
-
-**end_at:** (timestamptz) create partitions up to this time. The last partition
-will contain the point end_at, and no later partitions will be created.
-
-**start_from:** (timestamptz, optional) pick the first partition so that it
-contains the point start_from. The default value is `now()`.
-
-#### Return Value
-
-True if it needed to create new partitions, false if they all existed already.
-
-#### Example
-
-```postgresql
--- create a year's worth of monthly partitions in table foo, starting from the current time
-SELECT create_time_partitions(
- table_name := 'foo',
- partition_interval := '1 month',
- end_at := now() + '12 months'
-);
-```
-
-### drop_old_time_partitions
-
-The drop_old_time_partitions() function removes all partitions whose intervals
-fall before a given timestamp. In addition to using this function, you might
-consider
-[alter_old_partitions_set_access_method](#alter_old_partitions_set_access_method)
-to compress the old partitions with columnar storage.
-
-#### Arguments
-
-**table_name:** (regclass) table for which to remove partitions. The table must
-be partitioned on one column, of type date, timestamp, or timestamptz.
-
-**older_than:** (timestamptz) drop partitions whose upper range is less than or
-equal to older_than.
-
-#### Return Value
-
-N/A
-
-#### Example
-
-```postgresql
--- drop partitions that are over a year old
-CALL drop_old_time_partitions('foo', now() - interval '12 months');
-```
-
-### alter_old_partitions_set_access_method
-
-In a timeseries use case, tables are often partitioned by time, and old
-partitions are compressed into read-only columnar storage. The
-alter_old_partitions_set_access_method() procedure changes the access method
-(for example, from heap to columnar) of each partition whose upper range is
-less than or equal to a given timestamp.
-
-#### Arguments
-
-**parent_table_name:** (regclass) table for which to change partitions. The
-table must be partitioned on one column, of type date, timestamp, or
-timestamptz.
-
-**older_than:** (timestamptz) change partitions whose upper range is less than
-or equal to older_than.
-
-**new_access_method:** (name) either 'heap' for row-based storage, or
-'columnar' for columnar storage.
-
-#### Return Value
-
-N/A
-
-#### Example
-
-```postgresql
-CALL alter_old_partitions_set_access_method(
- 'foo', now() - interval '6 months',
- 'columnar'
-);
-```
-
-## Metadata / Configuration Information
-
-### get\_shard\_id\_for\_distribution\_column
-
-Hyperscale (Citus) assigns every row of a distributed table to a shard based on
-the value of the row's distribution column and the table's method of
-distribution. In most cases, the precise mapping is a low-level detail that the
-database administrator can ignore. However, it can be useful to determine a
-row's shard, either for manual database maintenance tasks or just to satisfy
-curiosity. The `get_shard_id_for_distribution_column` function provides this
-info for hash-distributed, range-distributed, and reference tables. It
-doesn't work for the append distribution.
-
-#### Arguments
-
-**table\_name:** The distributed table.
-
-**distribution\_value:** The value of the distribution column.
-
-#### Return Value
-
-The shard ID Hyperscale (Citus) associates with the distribution column value
-for the given table.
-
-#### Example
-
-```postgresql
-SELECT get_shard_id_for_distribution_column('my_table', 4);
-
- get_shard_id_for_distribution_column
---------------------------------------
-                               540007
-(1 row)
-```
-
-### column\_to\_column\_name
-
-Translates the `partkey` column of `pg_dist_partition` into a textual column
-name. The translation is useful to determine the distribution column of a
-distributed table.
-
-For a more detailed discussion, see [choosing a distribution
-column](howto-choose-distribution-column.md).
-
-#### Arguments
-
-**table\_name:** The distributed table.
-
-**column\_var\_text:** The value of `partkey` in the `pg_dist_partition`
-table.
-
-#### Return Value
-
-The name of `table_name`'s distribution column.
-
-#### Example
-
-```postgresql
--- get distribution column name for products table
-SELECT column_to_column_name(logicalrelid, partkey) AS dist_col_name
- FROM pg_dist_partition
- WHERE logicalrelid='products'::regclass;
-```
-
-Output:
-
-```
- dist_col_name
----------------
- company_id
-```
-
-### citus\_relation\_size
-
-Get the disk space used by all the shards of the specified distributed table.
-The disk space includes the size of the "main fork," but excludes the
-visibility map and free space map for the shards.
-
-#### Arguments
-
-**logicalrelid:** the name of a distributed table.
-
-#### Return Value
-
-Size in bytes as a bigint.
-
-#### Example
-
-```postgresql
-SELECT pg_size_pretty(citus_relation_size('github_events'));
-```
-
-```
-pg_size_pretty
-----------------
- 23 MB
-```
-
-### citus\_table\_size
-
-Get the disk space used by all the shards of the specified distributed table,
-excluding indexes (but including TOAST, free space map, and visibility map).
-
-#### Arguments
-
-**logicalrelid:** the name of a distributed table.
-
-#### Return Value
-
-Size in bytes as a bigint.
-
-#### Example
-
-```postgresql
-SELECT pg_size_pretty(citus_table_size('github_events'));
-```
-
-```
-pg_size_pretty
-----------------
- 37 MB
-```
-
-### citus\_total\_relation\_size
-
-Get the total disk space used by all the shards of the specified
-distributed table, including all indexes and TOAST data.
-
-#### Arguments
-
-**logicalrelid:** the name of a distributed table.
-
-#### Return Value
-
-Size in bytes as a bigint.
-
-#### Example
-
-```postgresql
-SELECT pg_size_pretty(citus_total_relation_size('github_events'));
-```
-
-```
-pg_size_pretty
-----------------
- 73 MB
-```
-
-### citus\_stat\_statements\_reset
-
-Removes all rows from
-[citus_stat_statements](reference-metadata.md#query-statistics-table).
-This function works independently from `pg_stat_statements_reset()`. To reset
-all stats, call both functions.
-
-#### Arguments
-
-N/A
-
-#### Return Value
-
-None
-
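-#### Example
-
-A minimal sketch that clears both Citus and standard PostgreSQL statement
-statistics, as described above:
-
-```postgresql
--- clear Citus query statistics
-SELECT citus_stat_statements_reset();
--- clear pg_stat_statements as well, so that all stats are reset
-SELECT pg_stat_statements_reset();
-```
-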
-### citus_get_active_worker_nodes
-
-The citus_get_active_worker_nodes() function returns a list of active worker
-host names and port numbers.
-
-#### Arguments
-
-N/A
-
-#### Return Value
-
-List of tuples where each tuple contains the following information:
-
-**node_name:** DNS name of the worker node
-
-**node_port:** Port on the worker node on which the database server is
-listening
-
-#### Example
-
-```postgresql
-SELECT * from citus_get_active_worker_nodes();
- node_name | node_port
------------+-----------
- localhost | 9700
- localhost | 9702
- localhost | 9701
-
-(3 rows)
-```
-
-## Server group management and repair
-
-### master\_copy\_shard\_placement
-
-If a shard placement fails to be updated during a modification command or a DDL
-operation, then it gets marked as inactive. The master\_copy\_shard\_placement
-function can then be called to repair an inactive shard placement using data
-from a healthy placement.
-
-To repair a shard, the function first drops the unhealthy shard placement and
-recreates it using the schema on the coordinator. Once the shard placement is
-created, the function copies data from the healthy placement and updates the
-metadata to mark the new shard placement as healthy. This function ensures that
-the shard will be protected from any concurrent modifications during the
-repair.
-
-#### Arguments
-
-**shard\_id:** ID of the shard to be repaired.
-
-**source\_node\_name:** DNS name of the node on which the healthy shard
-placement is present (the "source" node).
-
-**source\_node\_port:** The port on the source worker node on which the
-database server is listening.
-
-**target\_node\_name:** DNS name of the node on which the invalid shard
-placement is present (the "target" node).
-
-**target\_node\_port:** The port on the target worker node on which the
-database server is listening.
-
-#### Return Value
-
-N/A
-
-#### Example
-
-The example below will repair an inactive shard placement of shard
-12345, which is present on the database server running on 'bad\_host'
-on port 5432. To repair it, it will use data from a healthy shard
-placement present on the server running on 'good\_host' on port
-5432.
-
-```postgresql
-SELECT master_copy_shard_placement(12345, 'good_host', 5432, 'bad_host', 5432);
-```
-
-### master\_move\_shard\_placement
-
-This function moves a given shard (and shards colocated with it) from one node
-to another. It's typically used indirectly during shard rebalancing rather
-than being called directly by a database administrator.
-
-There are two ways to move the data: blocking or nonblocking. The blocking
-approach means that during the move all modifications to the shard are paused.
-The second way, which avoids blocking shard writes, relies on Postgres 10
-logical replication.
-
-After a successful move operation, shards in the source node get deleted. If
-the move fails at any point, this function throws an error and leaves the
-source and target nodes unchanged.
-
-#### Arguments
-
-**shard\_id:** ID of the shard to be moved.
-
-**source\_node\_name:** DNS name of the node on which the healthy shard
-placement is present (the "source" node).
-
-**source\_node\_port:** The port on the source worker node on which the
-database server is listening.
-
-**target\_node\_name:** DNS name of the node to which the shard placement
-will be moved (the "target" node).
-
-**target\_node\_port:** The port on the target worker node on which the
-database server is listening.
-
-**shard\_transfer\_mode:** (Optional) Specify the method of replication,
-whether to use PostgreSQL logical replication or a cross-worker COPY
-command. The possible values are:
-
-> - `auto`: Require replica identity if logical replication is
-> possible, otherwise use legacy behaviour (e.g. for shard repair,
-> PostgreSQL 9.6). This is the default value.
-> - `force_logical`: Use logical replication even if the table
-> doesn't have a replica identity. Any concurrent update/delete
-> statements to the table will fail during replication.
-> - `block_writes`: Use COPY (blocking writes) for tables lacking
-> primary key or replica identity.
-
-#### Return Value
-
-N/A
-
-#### Example
-
-```postgresql
-SELECT master_move_shard_placement(12345, 'from_host', 5432, 'to_host', 5432);
-```
-
-### rebalance\_table\_shards
-
-The rebalance\_table\_shards() function moves shards of the given table to
-distribute them evenly among the workers. The function first calculates the
-list of moves it needs to make in order to ensure that the server group is
-balanced within the given threshold. Then, it moves shard placements one by one
-from the source node to the destination node and updates the corresponding
-shard metadata to reflect the move.
-
-Every shard is assigned a cost when determining whether shards are "evenly
-distributed." By default each shard has the same cost (a value of 1), so
-distributing to equalize the cost across workers is the same as equalizing the
-number of shards on each. The constant cost strategy is called
-"by\_shard\_count" and is the default rebalancing strategy.
-
-The default strategy is appropriate under these circumstances:
-
-* The shards are roughly the same size
-* The shards get roughly the same amount of traffic
-* Worker nodes are all the same size/type
-* Shards haven't been pinned to particular workers
-
-If any of these assumptions don't hold, then the default rebalancing
-can result in a bad plan. In this case you may customize the strategy,
-using the `rebalance_strategy` parameter.
-
-It's advisable to call
-[get_rebalance_table_shards_plan](#get_rebalance_table_shards_plan) before
-running rebalance\_table\_shards, to see and verify the actions to be
-performed.
-
-#### Arguments
-
-**table\_name:** (Optional) The name of the table whose shards need to
-be rebalanced. If NULL, then rebalance all existing colocation groups.
-
-**threshold:** (Optional) A float number between 0.0 and 1.0 that
-indicates the maximum difference ratio of node utilization from average
-utilization. For example, specifying 0.1 will cause the shard rebalancer
-to attempt to balance all nodes to hold the same number of shards ±10%.
-Specifically, the shard rebalancer will try to converge utilization of
-all worker nodes to the range from (1 - threshold) \* average\_utilization
-to (1 + threshold) \* average\_utilization.
-
-**max\_shard\_moves:** (Optional) The maximum number of shards to move.
-
-**excluded\_shard\_list:** (Optional) Identifiers of shards that
-shouldn't be moved during the rebalance operation.
-
-**shard\_transfer\_mode:** (Optional) Specify the method of replication,
-whether to use PostgreSQL logical replication or a cross-worker COPY
-command. The possible values are:
-
-> - `auto`: Require replica identity if logical replication is
-> possible, otherwise use legacy behaviour (e.g. for shard repair,
-> PostgreSQL 9.6). This is the default value.
-> - `force_logical`: Use logical replication even if the table
-> doesn't have a replica identity. Any concurrent update/delete
-> statements to the table will fail during replication.
-> - `block_writes`: Use COPY (blocking writes) for tables lacking
-> primary key or replica identity.
-
-**drain\_only:** (Optional) When true, move shards off worker nodes that have
-`shouldhaveshards` set to false in
-[pg_dist_node](reference-metadata.md#worker-node-table); move no
-other shards.
-
-**rebalance\_strategy:** (Optional) the name of a strategy in
-[pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table).
-If this argument is omitted, the function chooses the default strategy, as
-indicated in the table.
-
-#### Return Value
-
-N/A
-
-#### Example
-
-The example below will attempt to rebalance the shards of the
-github\_events table within the default threshold.
-
-```postgresql
-SELECT rebalance_table_shards('github_events');
-```
-
-This example usage will attempt to rebalance the github\_events table
-without moving shards with ID 1 and 2.
-
-```postgresql
-SELECT rebalance_table_shards('github_events', excluded_shard_list:='{1,2}');
-```
-
-### get\_rebalance\_table\_shards\_plan
-
-Output the planned shard movements of
-[rebalance_table_shards](#rebalance_table_shards) without performing them.
-While it's unlikely, get\_rebalance\_table\_shards\_plan can output a slightly
-different plan than what a rebalance\_table\_shards call with the same
-arguments will do. They aren't executed at the same time, so facts about the
-server group (for example, disk space) might differ between the calls.
-
-#### Arguments
-
-The same arguments as rebalance\_table\_shards: relation, threshold,
-max\_shard\_moves, excluded\_shard\_list, and drain\_only. See
-documentation of that function for the arguments' meaning.
-
-#### Return Value
-
-Tuples containing these columns:
-
-- **table\_name**: The table whose shards would move
-- **shardid**: The shard in question
-- **shard\_size**: Size in bytes
-- **sourcename**: Hostname of the source node
-- **sourceport**: Port of the source node
-- **targetname**: Hostname of the destination node
-- **targetport**: Port of the destination node
-
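-#### Example
-
-A minimal sketch, reusing the `github_events` table from the
-rebalance\_table\_shards examples above, that previews planned moves without
-performing them:
-
-```postgresql
--- list the shard moves a rebalance of github_events would perform
-SELECT * FROM get_rebalance_table_shards_plan('github_events');
-```
-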
-### get\_rebalance\_progress
-
-Once a shard rebalance begins, the `get_rebalance_progress()` function lists
-the progress of every shard involved. It monitors the moves planned and
-executed by `rebalance_table_shards()`.
-
-#### Arguments
-
-N/A
-
-#### Return Value
-
-Tuples containing these columns:
-
-- **sessionid**: Postgres PID of the rebalance monitor
-- **table\_name**: The table whose shards are moving
-- **shardid**: The shard in question
-- **shard\_size**: Size in bytes
-- **sourcename**: Hostname of the source node
-- **sourceport**: Port of the source node
-- **targetname**: Hostname of the destination node
-- **targetport**: Port of the destination node
-- **progress**: 0 = waiting to be moved; 1 = moving; 2 = complete
-
-#### Example
-
-```sql
-SELECT * FROM get_rebalance_progress();
-```
-
-```
- sessionid | table_name | shardid | shard_size |  sourcename   | sourceport |  targetname   | targetport | progress
------------+------------+---------+------------+---------------+------------+---------------+------------+----------
-      7083 | foo        |  102008 |    1204224 | n1.foobar.com |       5432 | n4.foobar.com |       5432 |        0
-      7083 | foo        |  102009 |    1802240 | n1.foobar.com |       5432 | n4.foobar.com |       5432 |        0
-      7083 | foo        |  102018 |     614400 | n2.foobar.com |       5432 | n4.foobar.com |       5432 |        1
-      7083 | foo        |  102019 |       8192 | n3.foobar.com |       5432 | n4.foobar.com |       5432 |        2
-```
-
-### citus\_add\_rebalance\_strategy
-
-Append a row to
-[pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table).
-
-#### Arguments
-
-For more about these arguments, see the corresponding column values in
-`pg_dist_rebalance_strategy`.
-
-**name:** identifier for the new strategy
-
-**shard\_cost\_function:** identifies the function used to determine the
-"cost" of each shard
-
-**node\_capacity\_function:** identifies the function to measure node
-capacity
-
-**shard\_allowed\_on\_node\_function:** identifies the function that
-determines which shards can be placed on which nodes
-
-**default\_threshold:** a floating point threshold that tunes how
-precisely the cumulative shard cost should be balanced between nodes
-
-**minimum\_threshold:** (Optional) a safeguard column that holds the
-minimum value allowed for the threshold argument of
-rebalance\_table\_shards(). Its default value is 0
-
-#### Return Value
-
-N/A
-
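-#### Example
-
-A minimal sketch of registering a strategy. The strategy name
-`by_disk_size_custom` is hypothetical; the cost, capacity, and placement
-functions are the built-in ones listed in the
-[rebalancer strategy table](reference-metadata.md#rebalancer-strategy-table):
-
-```postgresql
-SELECT citus_add_rebalance_strategy(
-    'by_disk_size_custom',              -- name (hypothetical)
-    'citus_shard_cost_by_disk_size',    -- shard_cost_function
-    'citus_node_capacity_1',            -- node_capacity_function
-    'citus_shard_allowed_on_node_true', -- shard_allowed_on_node_function
-    0.2,                                -- default_threshold
-    0.01                                -- minimum_threshold
-);
-```
-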
-### citus\_set\_default\_rebalance\_strategy
-
-Update the
-[pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table)
-table, changing the strategy named by its argument to be the default chosen
-when rebalancing shards.
-
-#### Arguments
-
-**name:** the name of the strategy in pg\_dist\_rebalance\_strategy
-
-#### Return Value
-
-N/A
-
-#### Example
-
-```postgresql
-SELECT citus_set_default_rebalance_strategy('by_disk_size');
-```
-
-### citus\_remote\_connection\_stats
-
-The citus\_remote\_connection\_stats() function shows the number of
-active connections to each remote node.
-
-#### Arguments
-
-N/A
-
-#### Example
-
-```postgresql
-SELECT * from citus_remote_connection_stats();
-```
-
-```
- hostname | port | database_name | connection_count_to_node
-----------------+------+---------------+--------------------------
- citus_worker_1 | 5432 | postgres | 3
-(1 row)
-```
-
-### isolate\_tenant\_to\_new\_shard
-
-This function creates a new shard to hold rows with a specific single value in
-the distribution column. It's especially handy for the multi-tenant Hyperscale
-(Citus) use case, where a large tenant can be placed alone on its own shard and
-ultimately its own physical node.
-
-#### Arguments
-
-**table\_name:** The name of the table to get a new shard.
-
-**tenant\_id:** The value of the distribution column that will be
-assigned to the new shard.
-
-**cascade\_option:** (Optional) When set to "CASCADE," also isolates a shard
-from all tables in the current table's [colocation
-group](concepts-colocation.md).
-
-#### Return Value
-
-**shard\_id:** The function returns the unique ID assigned to the newly
-created shard.
-
-#### Examples
-
-Create a new shard to hold the lineitems for tenant 135:
-
-```postgresql
-SELECT isolate_tenant_to_new_shard('lineitem', 135);
-```
-
-```
- isolate_tenant_to_new_shard
-------------------------------
-                       102240
-```
-
-## Next steps
-
-* Many of the functions in this article modify system [metadata tables](reference-metadata.md)
-* [Server parameters](reference-parameters.md) customize the behavior of some functions
postgresql Reference Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-limits.md
- Title: Limits and limitations – Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Current limits for Hyperscale (Citus) server groups
- Previously updated : 02/25/2022
-# Azure Database for PostgreSQL – Hyperscale (Citus) limits and limitations
--
-The following sections describe capacity and functional limits in the
-Hyperscale (Citus) service.
-
-## Naming
-
-### Server group name
-
-A Hyperscale (Citus) server group must have a name that is 40 characters or
-shorter.
-
-## Networking
-
-### Maximum connections
-
-Every PostgreSQL connection (even idle ones) uses at least 10 MB of memory, so
-it's important to limit simultaneous connections. Here are the limits we chose
-to keep nodes healthy:
-
-* Maximum connections per node
- * 300 for 0-3 vCores
- * 500 for 4-15 vCores
- * 1000 for 16+ vCores
-
-The connection limits above are for *user* connections (`max_connections` minus
-`superuser_reserved_connections`). We reserve extra connections for
-administration and recovery.
-
-The limits apply to both worker nodes and the coordinator node. Attempts to
-connect beyond these limits will fail with an error.
-
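-To check the server-wide limit and the current number of connections on a
-node, a quick sketch:
-
-```postgresql
--- configured connection limit and reserved superuser slots
-SHOW max_connections;
-SHOW superuser_reserved_connections;
-
--- current number of backend connections
-SELECT count(*) FROM pg_stat_activity;
-```
-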
-#### Connection pooling
-
-You can scale connections further using [connection
-pooling](concepts-connection-pool.md). Hyperscale (Citus) offers a
-managed pgBouncer connection pooler configured for up to 2,000 simultaneous
-client connections.
-
-## Storage
-
-### Storage scaling
-
-Storage on coordinator and worker nodes can be scaled up (increased) but can't
-be scaled down (decreased).
-
-### Storage size
-
-Up to 2 TiB of storage is supported on coordinator and worker nodes. See the
-available storage options and IOPS calculation in
-[compute and storage](resources-compute.md) for node and cluster sizes.
-
-## Compute
-
-### Subscription vCore limits
-
-Azure enforces a vCore quota per subscription per region. There are two
-independently adjustable quotas: vCores for coordinator nodes, and vCores for
-worker nodes. The default quota should be more than enough to experiment with
-Hyperscale (Citus). If you do need more vCores for a region in your
-subscription, see how to [adjust compute
-quotas](howto-compute-quota.md).
-
-## PostgreSQL
-
-### Database creation
-
-The Azure portal provides credentials to connect to exactly one database per
-Hyperscale (Citus) server group, the `citus` database. Creating another
-database is currently not allowed, and the CREATE DATABASE command will fail
-with an error.
-
-### Columnar storage
-
-Hyperscale (Citus) currently has these limitations with [columnar
-tables](concepts-columnar.md):
-
-* Compression is on disk, not in memory
-* Append-only (no UPDATE/DELETE support)
-* No space reclamation (for example, rolled-back transactions may still consume
- disk space)
-* No index support, index scans, or bitmap index scans
-* No tidscans
-* No sample scans
-* No TOAST support (large values supported inline)
-* No support for ON CONFLICT statements (except DO NOTHING actions with no
- target specified).
-* No support for tuple locks (SELECT ... FOR SHARE, SELECT ... FOR UPDATE)
-* No support for serializable isolation level
-* Support for PostgreSQL server versions 12+ only
-* No support for foreign keys, unique constraints, or exclusion constraints
-* No support for logical decoding
-* No support for intra-node parallel scans
-* No support for AFTER ... FOR EACH ROW triggers
-* No UNLOGGED columnar tables
-* No TEMPORARY columnar tables
-
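-As a point of reference for these limitations, here's a minimal sketch (the
-`events` table and its columns are hypothetical) of creating a columnar table
-and using the one ON CONFLICT form noted above as supported:
-
-```postgresql
--- hypothetical append-only table stored with the columnar access method
-CREATE TABLE events (
-    event_id bigint,
-    payload jsonb
-) USING columnar;
-
--- allowed: DO NOTHING with no conflict target specified
-INSERT INTO events VALUES (1, '{"type": "login"}')
-ON CONFLICT DO NOTHING;
-```
-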
-## Next steps
-
-* Learn how to [create a Hyperscale (Citus) server group in the
- portal](quickstart-create-portal.md).
-* Learn to enable [connection pooling](concepts-connection-pool.md).
postgresql Reference Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-metadata.md
- Title: System tables – Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Metadata for distributed query execution
- Previously updated : 02/18/2022
-# System tables and views
--
-Hyperscale (Citus) creates and maintains special tables that contain
-information about distributed data in the server group. The coordinator node
-consults these tables when planning how to run queries across the worker nodes.
-
-## Coordinator Metadata
-
-Hyperscale (Citus) divides each distributed table into multiple logical shards
-based on the distribution column. The coordinator then maintains metadata
-tables to track statistics and information about the health and location of
-these shards.
-
-In this section, we describe each of these metadata tables and their schema.
-You can view and query these tables using SQL after logging into the
-coordinator node.
-
-> [!NOTE]
->
-> Hyperscale (Citus) server groups running older versions of the Citus Engine may not
-> offer all the tables listed below.
-
-### Partition table
-
-The pg\_dist\_partition table stores metadata about which tables in the
-database are distributed. For each distributed table, it also stores
-information about the distribution method and detailed information about
-the distribution column.
-
-| Name | Type | Description |
-|--|-|-|
-| logicalrelid | regclass | Distributed table to which this row corresponds. This value references the relfilenode column in the pg_class system catalog table. |
-| partmethod | char | The method used for partitioning / distribution. The values of this column corresponding to different distribution methods are append: 'a', hash: 'h', reference table: 'n' |
-| partkey | text | Detailed information about the distribution column including column number, type and other relevant information. |
-| colocationid | integer | Colocation group to which this table belongs. Tables in the same group allow colocated joins and distributed rollups among other optimizations. This value references the colocationid column in the pg_dist_colocation table. |
-| repmodel | char | The method used for data replication. The values of this column corresponding to different replication methods are: Citus statement-based replication: 'c', postgresql streaming replication: 's', two-phase commit (for reference tables): 't' |
-
-```
-SELECT * from pg_dist_partition;
- logicalrelid | partmethod | partkey | colocationid | repmodel
---------------+------------+---------+--------------+----------
- github_events | h | {VAR :varno 1 :varattno 4 :vartype 20 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 4 :location -1} | 2 | c
- (1 row)
-```
-
-### Shard table
-
-The pg\_dist\_shard table stores metadata about individual shards of a
-table. It records which distributed table each shard belongs to, and it stores
-statistics about the distribution column for each shard.
-For append distributed tables, these statistics correspond to min / max
-values of the distribution column. For hash distributed tables,
-they're hash token ranges assigned to that shard. These statistics are
-used for pruning away unrelated shards during SELECT queries.
-
-| Name | Type | Description |
-||-|-|
-| logicalrelid | regclass | Distributed table to which this row corresponds. This value references the relfilenode column in the pg_class system catalog table. |
-| shardid | bigint | Globally unique identifier assigned to this shard. |
-| shardstorage | char | Type of storage used for this shard. Different storage types are discussed in the table below. |
-| shardminvalue | text | For append distributed tables, minimum value of the distribution column in this shard (inclusive). For hash distributed tables, minimum hash token value assigned to that shard (inclusive). |
-| shardmaxvalue | text | For append distributed tables, maximum value of the distribution column in this shard (inclusive). For hash distributed tables, maximum hash token value assigned to that shard (inclusive). |
-
-```
-SELECT * from pg_dist_shard;
- logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue
---------------+---------+--------------+---------------+---------------
- github_events | 102026 | t | 268435456 | 402653183
- github_events | 102027 | t | 402653184 | 536870911
- github_events | 102028 | t | 536870912 | 671088639
- github_events | 102029 | t | 671088640 | 805306367
- (4 rows)
-```
-
-#### Shard Storage Types
-
-The shardstorage column in pg\_dist\_shard indicates the type of storage
-used for the shard. A brief overview of different shard storage types
-and their representation is below.
-
-| Storage Type | Shardstorage value | Description |
-|--|--||
-| TABLE | 't' | Indicates that shard stores data belonging to a regular distributed table. |
-| COLUMNAR | 'c' | Indicates that shard stores columnar data. (Used by distributed cstore_fdw tables) |
-| FOREIGN | 'f' | Indicates that shard stores foreign data. (Used by distributed file_fdw tables) |
--
-### Shard information view
-
-In addition to the low-level shard metadata table described above, Hyperscale
-(Citus) provides a `citus_shards` view to easily check:
-
-* Where each shard is (node, and port),
-* What kind of table it belongs to, and
-* Its size
-
-This view helps you inspect shards to find, among other things, any size
-imbalances across nodes.
-
-```postgresql
-SELECT * FROM citus_shards;
-
- table_name | shardid | shard_name | citus_table_type | colocation_id | nodename | nodeport | shard_size
-------------+---------+------------+------------------+---------------+----------+----------+------------
- dist | 102170 | dist_102170 | distributed | 34 | localhost | 9701 | 90677248
- dist | 102171 | dist_102171 | distributed | 34 | localhost | 9702 | 90619904
- dist | 102172 | dist_102172 | distributed | 34 | localhost | 9701 | 90701824
- dist | 102173 | dist_102173 | distributed | 34 | localhost | 9702 | 90693632
- ref | 102174 | ref_102174 | reference | 2 | localhost | 9701 | 8192
- ref | 102174 | ref_102174 | reference | 2 | localhost | 9702 | 8192
- dist2 | 102175 | dist2_102175 | distributed | 34 | localhost | 9701 | 933888
- dist2 | 102176 | dist2_102176 | distributed | 34 | localhost | 9702 | 950272
- dist2 | 102177 | dist2_102177 | distributed | 34 | localhost | 9701 | 942080
- dist2 | 102178 | dist2_102178 | distributed | 34 | localhost | 9702 | 933888
-```
-
-The colocation_id refers to the colocation group.
-
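-For example, a quick sketch that aggregates this view to spot size imbalances
-across worker nodes:
-
-```postgresql
--- total shard size per worker node, from the citus_shards view described above
-SELECT nodename, nodeport, pg_size_pretty(sum(shard_size)) AS total_size
-  FROM citus_shards
- GROUP BY nodename, nodeport
- ORDER BY sum(shard_size) DESC;
-```
-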
-### Shard placement table
-
-The pg\_dist\_placement table tracks the location of shard replicas on
-worker nodes. Each replica of a shard assigned to a specific node is
-called a shard placement. This table stores information about the health
-and location of each shard placement.
-
-| Name | Type | Description |
-|-|--|-|
-| shardid | bigint | Shard identifier associated with this placement. This value references the shardid column in the pg_dist_shard catalog table. |
-| shardstate | int | Describes the state of this placement. Different shard states are discussed in the section below. |
-| shardlength | bigint | For append distributed tables, the size of the shard placement on the worker node in bytes. For hash distributed tables, zero. |
-| placementid | bigint | Unique autogenerated identifier for each individual placement. |
-| groupid | int | Denotes a group of one primary server and zero or more secondary servers when the streaming replication model is used. |
-
-```
-SELECT * from pg_dist_placement;
- shardid | shardstate | shardlength | placementid | groupid
- ---------+------------+-------------+-------------+---------
- 102008 | 1 | 0 | 1 | 1
- 102008 | 1 | 0 | 2 | 2
- 102009 | 1 | 0 | 3 | 2
- 102009 | 1 | 0 | 4 | 3
- 102010 | 1 | 0 | 5 | 3
- 102010 | 1 | 0 | 6 | 4
- 102011 | 1 | 0 | 7 | 4
-```
-
-#### Shard Placement States
-
-Hyperscale (Citus) manages shard health on a per-placement basis. If a placement
-puts the system in an inconsistent state, Citus automatically marks it as unavailable. Placement state is recorded in the pg_dist_shard_placement table,
-within the shardstate column. Here's a brief overview of different shard placement states:
-
-| State name | Shardstate value | Description |
-|||-|
-| FINALIZED | 1 | The state new shards are created in. Shard placements in this state are considered up to date and are used in query planning and execution. |
-| INACTIVE | 3 | Shard placements in this state are considered inactive due to being out-of-sync with other replicas of the same shard. The state can occur when an append, modification (INSERT, UPDATE, DELETE), or a DDL operation fails for this placement. The query planner will ignore placements in this state during planning and execution. Users can synchronize the data in these shards with a finalized replica as a background activity. |
-| TO_DELETE | 4 | If Citus attempts to drop a shard placement in response to a master_apply_delete_command call and fails, the placement is moved to this state. Users can then delete these shards as a subsequent background activity. |
-
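-For example, a small sketch that lists any placements that aren't in the
-FINALIZED state (shardstate value 1 in the table above):
-
-```postgresql
--- shard placements that are not currently considered healthy
-SELECT shardid, shardstate, placementid, groupid
-  FROM pg_dist_placement
- WHERE shardstate <> 1;
-```
-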
-### Worker node table
-
-The pg\_dist\_node table contains information about the worker nodes in
-the cluster.
-
-| Name | Type | Description |
-|||--|
-| nodeid | int | Autogenerated identifier for an individual node. |
-| groupid | int | Identifier used to denote a group of one primary server and zero or more secondary servers, when the streaming replication model is used. By default it's the same as the nodeid. |
-| nodename | text | Host Name or IP Address of the PostgreSQL worker node. |
-| nodeport | int | Port number on which the PostgreSQL worker node is listening. |
-| noderack | text | (Optional) Rack placement information for the worker node. |
-| hasmetadata | boolean | Reserved for internal use. |
-| isactive | boolean | Whether the node is active and accepting shard placements. |
-| noderole | text | Whether the node is a primary or secondary |
-| nodecluster | text | The name of the cluster containing this node |
-| shouldhaveshards | boolean | If false, shards will be moved off the node (drained) when rebalancing, and shards from new distributed tables won't be placed on the node unless they're colocated with shards that are already there |
-
-```
-SELECT * from pg_dist_node;
- nodeid | groupid | nodename | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster | shouldhaveshards
---------+---------+-----------+----------+----------+-------------+----------+----------+-------------+------------------
- 1 | 1 | localhost | 12345 | default | f | t | primary | default | t
- 2 | 2 | localhost | 12346 | default | f | t | primary | default | t
- 3 | 3 | localhost | 12347 | default | f | t | primary | default | t
-(3 rows)
-```
-
-### Distributed object table
-
-The citus.pg\_dist\_object table contains a list of objects such as
-types and functions that have been created on the coordinator node and
-propagated to worker nodes. When an administrator adds new worker nodes
-to the cluster, Hyperscale (Citus) automatically creates copies of the distributed
-objects on the new nodes (in the correct order to satisfy object
-dependencies).
-
-| Name | Type | Description |
-|--|||
-| classid | oid | Class of the distributed object |
-| objid | oid | Object ID of the distributed object |
-| objsubid | integer | Object sub ID of the distributed object, for example, attnum |
-| type | text | Part of the stable address used during pg upgrades |
-| object_names | text[] | Part of the stable address used during pg upgrades |
-| object_args | text[] | Part of the stable address used during pg upgrades |
-| distribution_argument_index | integer | Only valid for distributed functions/procedures |
-| colocationid | integer | Only valid for distributed functions/procedures |
-
-\"Stable addresses\" uniquely identify objects independently of a
-specific server. Hyperscale (Citus) tracks objects during a PostgreSQL upgrade using
-stable addresses created with the
-[pg\_identify\_object\_as\_address()](https://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-INFO-OBJECT-TABLE)
-function.
-
-Here's an example of how `create_distributed_function()` adds entries
-to the `citus.pg_dist_object` table:
-
-```psql
-CREATE TYPE stoplight AS enum ('green', 'yellow', 'red');
-
-CREATE OR REPLACE FUNCTION intersection()
-RETURNS stoplight AS $$
-DECLARE
- color stoplight;
-BEGIN
- SELECT *
- FROM unnest(enum_range(NULL::stoplight)) INTO color
- ORDER BY random() LIMIT 1;
- RETURN color;
-END;
-$$ LANGUAGE plpgsql VOLATILE;
-
-SELECT create_distributed_function('intersection()');
--- will have two rows, one for the TYPE and one for the FUNCTION
-TABLE citus.pg_dist_object;
-```
-
-```
--[ RECORD 1 ]+
-classid | 1247
-objid | 16780
-objsubid | 0
-type |
-object_names |
-object_args |
-distribution_argument_index |
-colocationid |
--[ RECORD 2 ]+
-classid | 1255
-objid | 16788
-objsubid | 0
-type |
-object_names |
-object_args |
-distribution_argument_index |
-colocationid |
-```
-
-### Distributed tables view
-
-The `citus_tables` view shows a summary of all tables managed by Hyperscale
-(Citus) (distributed and reference tables). The view combines information from
-Hyperscale (Citus) metadata tables for an easy, human-readable overview of
-these table properties:
-
-* Table type
-* Distribution column
-* Colocation group ID
-* Human-readable size
-* Shard count
-* Owner (database user)
-* Access method (heap or columnar)
-
-Here's an example:
-
-```postgresql
-SELECT * FROM citus_tables;
- table_name | citus_table_type | distribution_column | colocation_id | table_size | shard_count | table_owner | access_method
-------------+------------------+----------------------+---------------+------------+-------------+-------------+---------------
- foo.test   | distributed      | test_column          |             1 | 0 bytes    |          32 | citus       | heap
- ref        | reference        | <none>               |             2 | 24 GB      |           1 | citus       | heap
- test       | distributed      | id                   |             1 | 248 TB     |          32 | citus       | heap
-```
-
-### Time partitions view
-
-Hyperscale (Citus) provides UDFs to manage partitions for the Timeseries Data
-use case. It also maintains a `time_partitions` view to inspect the partitions
-it manages.
-
-Columns:
-
-* **parent_table** the table that is partitioned
-* **partition_column** the column on which the parent table is partitioned
-* **partition** the name of a partition table
-* **from_value** lower bound in time for rows in this partition
-* **to_value** upper bound in time for rows in this partition
-* **access_method** heap for row-based storage, and columnar for columnar
- storage
-
-```postgresql
-SELECT * FROM time_partitions;
-      parent_table      | partition_column |                partition                |     from_value      |      to_value       | access_method
-------------------------+------------------+-----------------------------------------+---------------------+---------------------+---------------
- github_columnar_events | created_at       | github_columnar_events_p2015_01_01_0000 | 2015-01-01 00:00:00 | 2015-01-01 02:00:00 | columnar
- github_columnar_events | created_at       | github_columnar_events_p2015_01_01_0200 | 2015-01-01 02:00:00 | 2015-01-01 04:00:00 | columnar
- github_columnar_events | created_at       | github_columnar_events_p2015_01_01_0400 | 2015-01-01 04:00:00 | 2015-01-01 06:00:00 | columnar
- github_columnar_events | created_at       | github_columnar_events_p2015_01_01_0600 | 2015-01-01 06:00:00 | 2015-01-01 08:00:00 | heap
-```
-
-### Colocation group table
-
-The pg\_dist\_colocation table contains information about which tables' shards
-should be placed together, or [colocated](concepts-colocation.md). When two
-tables are in the same colocation group, Hyperscale (Citus) ensures shards with
-the same distribution column values will be placed on the same worker nodes.
-Colocation enables join optimizations, certain distributed rollups, and foreign
-key support. Shard colocation is inferred when the shard counts, replication
-factors, and partition column types all match between two tables; however, a
-custom colocation group may be specified when creating a distributed table, if
-so desired.
-
-| Name | Type | Description |
-|||-|
-| colocationid | int | Unique identifier for the colocation group this row corresponds to. |
-| shardcount | int | Shard count for all tables in this colocation group |
-| replicationfactor | int | Replication factor for all tables in this colocation group. |
-| distributioncolumntype | oid | The type of the distribution column for all tables in this colocation group. |
-
-```
-SELECT * from pg_dist_colocation;
- colocationid | shardcount | replicationfactor | distributioncolumntype
- --------------+------------+-------------------+------------------------
- 2 | 32 | 2 | 20
- (1 row)
-```
-
-### Rebalancer strategy table
-
-This table defines strategies that
-[rebalance_table_shards](reference-functions.md#rebalance_table_shards)
-can use to determine where to move shards.
-
-| Name | Type | Description |
-|--|||
-| default_strategy | boolean | Whether rebalance_table_shards should choose this strategy by default. Use citus_set_default_rebalance_strategy to update this column |
-| shard_cost_function | regproc | Identifier for a cost function, which must take a shardid as bigint, and return its notion of a cost, as type real |
-| node_capacity_function | regproc | Identifier for a capacity function, which must take a nodeid as int, and return its notion of node capacity as type real |
-| shard_allowed_on_node_function | regproc | Identifier for a function that given shardid bigint, and nodeidarg int, returns boolean for whether Citus may store the shard on the node |
-| default_threshold | float4 | Threshold for deeming a node too full or too empty, which determines when the rebalance_table_shards should try to move shards |
-| minimum_threshold | float4 | A safeguard to prevent the threshold argument of rebalance_table_shards() from being set too low |
-
-A Hyperscale (Citus) installation ships with these strategies in the table:
-
-```postgresql
-SELECT * FROM pg_dist_rebalance_strategy;
-```
-
-```
--[ RECORD 1 ]-+--
-Name | by_shard_count
-default_strategy | true
-shard_cost_function | citus_shard_cost_1
-node_capacity_function | citus_node_capacity_1
-shard_allowed_on_node_function | citus_shard_allowed_on_node_true
-default_threshold | 0
-minimum_threshold | 0
--[ RECORD 2 ]-+--
-Name | by_disk_size
-default_strategy | false
-shard_cost_function | citus_shard_cost_by_disk_size
-node_capacity_function | citus_node_capacity_1
-shard_allowed_on_node_function | citus_shard_allowed_on_node_true
-default_threshold | 0.1
-minimum_threshold | 0.01
-```
-
-The default strategy, `by_shard_count`, assigns every shard the same
-cost. Its effect is to equalize the shard count across nodes. The other
-predefined strategy, `by_disk_size`, assigns a cost to each shard
-matching its disk size in bytes plus that of the shards that are
-colocated with it. The disk size is calculated using
-`pg_total_relation_size`, so it includes indices. This strategy attempts
-to achieve the same disk space on every node. Note the threshold of 0.1: it
-prevents unnecessary shard movement caused by insignificant differences in
-disk space.
-
-#### Creating custom rebalancer strategies
-
-Here are examples of functions that can be used within new shard rebalancer
-strategies, and registered in the
-[pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table)
-table with the
-[citus_add_rebalance_strategy](reference-functions.md#citus_add_rebalance_strategy)
-function.
-
-- Setting a node capacity exception by hostname pattern:
-
- ```postgresql
- CREATE FUNCTION v2_node_double_capacity(nodeidarg int)
- RETURNS real AS $$
- SELECT
- (CASE WHEN nodename LIKE '%.v2.worker.citusdata.com' THEN 2 ELSE 1 END)::real
- FROM pg_dist_node where nodeid = nodeidarg
- $$ LANGUAGE sql;
- ```
-
-- Rebalancing by number of queries that go to a shard, as measured by the
- [citus_stat_statements](reference-metadata.md#query-statistics-table):
-
- ```postgresql
- -- example of shard_cost_function
-
- CREATE FUNCTION cost_of_shard_by_number_of_queries(shardid bigint)
- RETURNS real AS $$
- SELECT coalesce(sum(calls)::real, 0.001) as shard_total_queries
- FROM citus_stat_statements
- WHERE partition_key is not null
- AND get_shard_id_for_distribution_column('tab', partition_key) = shardid;
- $$ LANGUAGE sql;
- ```
-
-- Isolating a specific shard (10000) on a node (address '10.0.0.1'):
-
- ```postgresql
- -- example of shard_allowed_on_node_function
-
- CREATE FUNCTION isolate_shard_10000_on_10_0_0_1(shardid bigint, nodeidarg int)
- RETURNS boolean AS $$
- SELECT
- (CASE WHEN nodename = '10.0.0.1' THEN shardid = 10000 ELSE shardid != 10000 END)
- FROM pg_dist_node where nodeid = nodeidarg
- $$ LANGUAGE sql;
-
- -- The next two definitions are recommended in combination with the above function.
- -- This way the average utilization of nodes is not impacted by the isolated shard.
- CREATE FUNCTION no_capacity_for_10_0_0_1(nodeidarg int)
- RETURNS real AS $$
- SELECT
- (CASE WHEN nodename = '10.0.0.1' THEN 0 ELSE 1 END)::real
- FROM pg_dist_node where nodeid = nodeidarg
- $$ LANGUAGE sql;
- CREATE FUNCTION no_cost_for_10000(shardid bigint)
- RETURNS real AS $$
- SELECT
- (CASE WHEN shardid = 10000 THEN 0 ELSE 1 END)::real
- $$ LANGUAGE sql;
- ```
-
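-A sketch of registering such functions as a strategy with
-[citus_add_rebalance_strategy](reference-functions.md#citus_add_rebalance_strategy);
-the strategy name `isolate_tenant_shard` is hypothetical, and the threshold is
-illustrative:
-
-```postgresql
-SELECT citus_add_rebalance_strategy(
-    'isolate_tenant_shard',              -- name (hypothetical)
-    'no_cost_for_10000',                 -- shard_cost_function (defined above)
-    'no_capacity_for_10_0_0_1',          -- node_capacity_function (defined above)
-    'isolate_shard_10000_on_10_0_0_1',   -- shard_allowed_on_node_function (defined above)
-    0.1                                  -- default_threshold
-);
-```
-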
-### Query statistics table
-
-Hyperscale (Citus) provides `citus_stat_statements` for stats about how queries are
-being executed, and for whom. It's analogous to (and can be joined
-with) the
-[pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html)
-view in PostgreSQL, which tracks statistics about query speed.
-
-This view can trace queries to originating tenants in a multi-tenant
-application, which helps for deciding when to do tenant isolation.
-
-| Name | Type | Description |
-||--|-|
-| queryid | bigint | identifier (good for pg_stat_statements joins) |
-| userid | oid | user who ran the query |
-| dbid | oid | database instance of coordinator |
-| query | text | anonymized query string |
-| executor | text | Citus executor used: adaptive, real-time, task-tracker, router, or insert-select |
-| partition_key | text | value of distribution column in router-executed queries, else NULL |
-| calls | bigint | number of times the query was run |
-
-```sql
--- create and populate distributed table
-create table foo ( id int );
-select create_distributed_table('foo', 'id');
-insert into foo select generate_series(1,100);
--- enable stats
--- pg_stat_statements must be in shared_preload_libraries
-create extension pg_stat_statements;
-
-select count(*) from foo;
-select * from foo where id = 42;
-
-select * from citus_stat_statements;
-```
-
-Results:
-
-```
--[ RECORD 1 ]-+-
-queryid | -909556869173432820
-userid | 10
-dbid | 13340
-query | insert into foo select generate_series($1,$2)
-executor | insert-select
-partition_key |
-calls | 1
--[ RECORD 2 ]-+-
-queryid | 3919808845681956665
-userid | 10
-dbid | 13340
-query | select count(*) from foo;
-executor | adaptive
-partition_key |
-calls | 1
--[ RECORD 3 ]-+-
-queryid | 5351346905785208738
-userid | 10
-dbid | 13340
-query | select * from foo where id = $1
-executor | adaptive
-partition_key | 42
-calls | 1
-```
-
-Caveats:
-
-- The stats data isn't replicated, and won't survive database
- crashes or failover
-- Tracks a limited number of queries, set by the
- `pg_stat_statements.max` GUC (default 5000)
-- To truncate the table, use the `citus_stat_statements_reset()`
- function
-
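-For example, a small sketch of using this view to find the busiest tenants by
-query count, which can help inform tenant isolation decisions (the LIMIT and
-ordering are illustrative):
-
-```postgresql
--- total queries per tenant (distribution column value), busiest first
-SELECT partition_key AS tenant_id, sum(calls) AS total_queries
-  FROM citus_stat_statements
- WHERE partition_key IS NOT NULL
- GROUP BY partition_key
- ORDER BY total_queries DESC
- LIMIT 10;
-```
-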
-### Distributed Query Activity
-
-Hyperscale (Citus) provides special views to watch queries and locks throughout the
-cluster, including shard-specific queries used internally to build
-results for distributed queries.
-
-- **citus\_dist\_stat\_activity**: shows the distributed queries that
- are executing on all nodes. A superset of `pg_stat_activity`, usable
- wherever the latter is.
-- **citus\_worker\_stat\_activity**: shows queries on workers,
- including fragment queries against individual shards.
-- **citus\_lock\_waits**: Blocked queries throughout the cluster.
-
-The first two views include all columns of
-[pg\_stat\_activity](https://www.postgresql.org/docs/current/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW)
-plus the host/port of the worker that initiated the query and the
-host/port of the coordinator node of the cluster.
-
-For example, consider counting the rows in a distributed table:
-
-```postgresql
--- run from worker on localhost:9701
-SELECT count(*) FROM users_table;
-```
-
-We can see that the query appears in `citus_dist_stat_activity`:
-
-```postgresql
-SELECT * FROM citus_dist_stat_activity;
---[ RECORD 1 ]-+-
-query_hostname | localhost
-query_hostport | 9701
-master_query_host_name | localhost
-master_query_host_port | 9701
-transaction_number | 1
-transaction_stamp | 2018-10-05 13:27:20.691907+03
-datid | 12630
-datname | postgres
-pid | 23723
-usesysid | 10
-usename | citus
-application\_name | psql
-client\_addr |
-client\_hostname |
-client\_port | -1
-backend\_start | 2018-10-05 13:27:14.419905+03
-xact\_start | 2018-10-05 13:27:16.362887+03
-query\_start | 2018-10-05 13:27:20.682452+03
-state\_change | 2018-10-05 13:27:20.896546+03
-wait\_event_type | Client
-wait\_event | ClientRead
-state | idle in transaction
-backend\_xid |
-backend\_xmin |
-query | SELECT count(*) FROM users_table;
-backend\_type | client backend
-```
-
-This query requires information from all shards. Some of the information is in
-shard `users_table_102038`, which happens to be stored in `localhost:9700`. We can
-see a query accessing the shard by looking at the `citus_worker_stat_activity`
-view:
-
-```postgresql
-SELECT * FROM citus_worker_stat_activity;
---[ RECORD 1 ]-+--
-query_hostname | localhost
-query_hostport | 9700
-master_query_host_name | localhost
-master_query_host_port | 9701
-transaction_number | 1
-transaction_stamp | 2018-10-05 13:27:20.691907+03
-datid | 12630
-datname | postgres
-pid | 23781
-usesysid | 10
-usename | citus
-application\_name | citus
-client\_addr | ::1
-client\_hostname |
-client\_port | 51773
-backend\_start | 2018-10-05 13:27:20.75839+03
-xact\_start | 2018-10-05 13:27:20.84112+03
-query\_start | 2018-10-05 13:27:20.867446+03
-state\_change | 2018-10-05 13:27:20.869889+03
-wait\_event_type | Client
-wait\_event | ClientRead
-state | idle in transaction
-backend\_xid |
-backend\_xmin |
-query | COPY (SELECT count(*) AS count FROM users_table_102038 users_table WHERE true) TO STDOUT
-backend\_type | client backend
-```
-
-The `query` field shows data being copied out of the shard to be
-counted.
-
-> [!NOTE]
-> If a router query (e.g. single-tenant in a multi-tenant application, `SELECT
-> * FROM table WHERE tenant_id = X`) is executed without a transaction block,
-> then master\_query\_host\_name and master\_query\_host\_port columns will be
-> NULL in citus\_worker\_stat\_activity.
-
-Here are examples of useful queries you can build using
-`citus_worker_stat_activity`:
-
-```postgresql
--- active queries' wait events on a certain node
-SELECT query, wait_event_type, wait_event
- FROM citus_worker_stat_activity
- WHERE query_hostname = 'xxxx' and state='active';
--- active queries' top wait events
-SELECT wait_event, wait_event_type, count(*)
- FROM citus_worker_stat_activity
- WHERE state='active'
- GROUP BY wait_event, wait_event_type
- ORDER BY count(*) desc;
--- total internal connections generated per node by Citus
-SELECT query_hostname, count(*)
- FROM citus_worker_stat_activity
- GROUP BY query_hostname;
--- total internal active connections generated per node by Citus
-SELECT query_hostname, count(*)
- FROM citus_worker_stat_activity
- WHERE state='active'
- GROUP BY query_hostname;
-```
-
-The next view is `citus_lock_waits`. To see how it works, we can generate a
-locking situation manually. First we'll set up a test table from the
-coordinator:
-
-```postgresql
-CREATE TABLE numbers AS
- SELECT i, 0 AS j FROM generate_series(1,10) AS i;
-SELECT create_distributed_table('numbers', 'i');
-```
-
-Then, using two sessions on the coordinator, we can run this sequence of
-statements:
-
-```postgresql
--- session 1                              -- session 2
-BEGIN;
-UPDATE numbers SET j = 2 WHERE i = 1;
-                                          BEGIN;
-                                          UPDATE numbers SET j = 3 WHERE i = 1;
-                                          -- (this blocks)
-```
-
-The `citus_lock_waits` view shows the situation.
-
-```postgresql
-SELECT * FROM citus_lock_waits;
---[ RECORD 1 ]-+-
-waiting_pid | 88624
-blocking_pid | 88615
-blocked_statement | UPDATE numbers SET j = 3 WHERE i = 1;
-current_statement_in_blocking_process | UPDATE numbers SET j = 2 WHERE i = 1;
-waiting_node_id | 0
-blocking_node_id | 0
-waiting_node_name | coordinator_host
-blocking_node_name | coordinator_host
-waiting_node_port | 5432
-blocking_node_port | 5432
-```
-
-In this example the queries originated on the coordinator, but the view
-can also list locks between queries originating on workers (executed
-with Hyperscale (Citus) MX for instance).
-
-## Next steps
-
-* Learn how some [Hyperscale (Citus) functions](reference-functions.md) alter system tables
-* Review the concepts of [nodes and tables](concepts-nodes.md)
postgresql Reference Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-overview.md
- Title: Reference – Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Overview of the Hyperscale (Citus) SQL API
- Previously updated : 08/02/2022
-# The Hyperscale (Citus) SQL API
--
-Azure Database for PostgreSQL - Hyperscale (Citus) includes features beyond
-standard PostgreSQL. Below is a categorized reference of functions and
-configuration options for:
-
-* Parallelizing query execution across shards
-* Managing sharded data between multiple servers
-* Compressing data with columnar storage
-* Automating timeseries partitioning
-
-## SQL functions
-
-### Sharding
-
-| Name | Description |
-||-|
-| [alter_distributed_table](reference-functions.md#alter_distributed_table) | Change the distribution column, shard count or colocation properties of a distributed table |
-| [citus_copy_shard_placement](reference-functions.md#master_copy_shard_placement) | Repair an inactive shard placement using data from a healthy placement |
-| [create_distributed_table](reference-functions.md#create_distributed_table) | Turn a PostgreSQL table into a distributed (sharded) table |
-| [create_reference_table](reference-functions.md#create_reference_table) | Maintain full copies of a table in sync across all nodes |
-| [isolate_tenant_to_new_shard](reference-functions.md#isolate_tenant_to_new_shard) | Create a new shard to hold rows with a specific single value in the distribution column |
-| [truncate_local_data_after_distributing_table](reference-functions.md#truncate_local_data_after_distributing_table) | Truncate all local rows after distributing a table |
-| [undistribute_table](reference-functions.md#undistribute_table) | Undo the action of create_distributed_table or create_reference_table |
-
-### Shard rebalancing
-
-| Name | Description |
-||-|
-| [citus_add_rebalance_strategy](reference-functions.md#citus_add_rebalance_strategy) | Append a row to `pg_dist_rebalance_strategy` |
-| [citus_move_shard_placement](reference-functions.md#master_move_shard_placement) | Typically used indirectly during shard rebalancing rather than being called directly by a database administrator |
-| [citus_set_default_rebalance_strategy](reference-functions.md#citus_set_default_rebalance_strategy) | Change the strategy named by its argument to be the default chosen when rebalancing shards |
-| [get_rebalance_progress](reference-functions.md#get_rebalance_progress) | Monitor the moves planned and executed by `rebalance_table_shards` |
-| [get_rebalance_table_shards_plan](reference-functions.md#get_rebalance_table_shards_plan) | Output the planned shard movements of rebalance_table_shards without performing them |
-| [rebalance_table_shards](reference-functions.md#rebalance_table_shards) | Move shards of the given table to distribute them evenly among the workers |
-
-### Colocation
-
-| Name | Description |
-||-|
-| [create_distributed_function](reference-functions.md#create_distributed_function) | Make function run on workers near colocated shards |
-| [update_distributed_table_colocation](reference-functions.md#update_distributed_table_colocation) | Update or break colocation of a distributed table |
-
-### Columnar storage
-
-| Name | Description |
-||-|
-| [alter_columnar_table_set](reference-functions.md#alter_columnar_table_set) | Change settings on a columnar table |
-| [alter_table_set_access_method](reference-functions.md#alter_table_set_access_method) | Convert a table between heap or columnar storage |
-
-### Timeseries partitioning
-
-| Name | Description |
-||-|
-| [alter_old_partitions_set_access_method](reference-functions.md#alter_old_partitions_set_access_method) | Change storage method of partitions |
-| [create_time_partitions](reference-functions.md#create_time_partitions) | Create partitions of a given interval to cover a given range of time |
-| [drop_old_time_partitions](reference-functions.md#drop_old_time_partitions) | Remove all partitions whose intervals fall before a given timestamp |
-
-### Informational
-
-| Name | Description |
-||-|
-| [citus_get_active_worker_nodes](reference-functions.md#citus_get_active_worker_nodes) | Get active worker host names and port numbers |
-| [citus_relation_size](reference-functions.md#citus_relation_size) | Get disk space used by all the shards of the specified distributed table |
-| [citus_remote_connection_stats](reference-functions.md#citus_remote_connection_stats) | Show the number of active connections to each remote node |
-| [citus_stat_statements_reset](reference-functions.md#citus_stat_statements_reset) | Remove all rows from `citus_stat_statements` |
-| [citus_table_size](reference-functions.md#citus_table_size) | Get disk space used by all the shards of the specified distributed table, excluding indexes |
-| [citus_total_relation_size](reference-functions.md#citus_total_relation_size) | Get total disk space used by all the shards of the specified distributed table, including all indexes and TOAST data |
-| [column_to_column_name](reference-functions.md#column_to_column_name) | Translate the `partkey` column of `pg_dist_partition` into a textual column name |
-| [get_shard_id_for_distribution_column](reference-functions.md#get_shard_id_for_distribution_column) | Find the shard ID associated with a value of the distribution column |
-
-## Server parameters
-
-### Query execution
-
-| Name | Description |
-||-|
-| [citus.all_modifications_commutative](reference-parameters.md#citusall_modifications_commutative) | Allow all commands to claim a shared lock |
-| [citus.count_distinct_error_rate](reference-parameters.md#cituscount_distinct_error_rate-floating-point) | Tune error rate of postgresql-hll approximate counting |
-| [citus.enable_repartition_joins](reference-parameters.md#citusenable_repartition_joins-boolean) | Allow JOINs made on non-distribution columns |
-| [citus.enable_repartitioned_insert_select](reference-parameters.md#citusenable_repartitioned_insert_select-boolean) | Allow repartitioning rows from the SELECT statement and transferring them between workers for insertion |
-| [citus.limit_clause_row_fetch_count](reference-parameters.md#cituslimit_clause_row_fetch_count-integer) | The number of rows to fetch per task for limit clause optimization |
-| [citus.local_table_join_policy](reference-parameters.md#cituslocal_table_join_policy-enum) | Where data moves when doing a join between local and distributed tables |
-| [citus.multi_shard_commit_protocol](reference-parameters.md#citusmulti_shard_commit_protocol-enum) | The commit protocol to use when performing COPY on a hash distributed table |
-| [citus.propagate_set_commands](reference-parameters.md#cituspropagate_set_commands-enum) | Which SET commands are propagated from the coordinator to workers |
-| [citus.create_object_propagation](reference-parameters.md#cituscreate_object_propagation-enum) | Behavior of CREATE statements in transactions for supported objects |
-| [citus.use_citus_managed_tables](reference-parameters.md#citususe_citus_managed_tables-boolean) | Allow local tables to be accessed in worker node queries |
-
-### Informational
-
-| Name | Description |
-||-|
-| [citus.explain_all_tasks](reference-parameters.md#citusexplain_all_tasks-boolean) | Make EXPLAIN output show all tasks |
-| [citus.explain_analyze_sort_method](reference-parameters.md#citusexplain_analyze_sort_method-enum) | Sort method of the tasks in the output of EXPLAIN ANALYZE |
-| [citus.log_remote_commands](reference-parameters.md#cituslog_remote_commands-boolean) | Log queries the coordinator sends to worker nodes |
-| [citus.multi_task_query_log_level](reference-parameters.md#citusmulti_task_query_log_level-enum-multi_task_logging) | Log-level for any query that generates more than one task |
-| [citus.stat_statements_max](reference-parameters.md#citusstat_statements_max-integer) | Max number of rows to store in `citus_stat_statements` |
-| [citus.stat_statements_purge_interval](reference-parameters.md#citusstat_statements_purge_interval-integer) | Frequency at which the maintenance daemon removes records from `citus_stat_statements` that are unmatched in `pg_stat_statements` |
-| [citus.stat_statements_track](reference-parameters.md#citusstat_statements_track-enum) | Enable/disable statement tracking |
-| [citus.show_shards_for_app_name_prefixes](reference-parameters.md#citusshow_shards_for_app_name_prefixes-text) | Allows shards to be displayed for selected clients that want to see them |
-| [citus.override_table_visibility](reference-parameters.md#citusoverride_table_visibility-boolean) | Enable/disable shard hiding |
-
-### Inter-node connection management
-
-| Name | Description |
-||-|
-| [citus.executor_slow_start_interval](reference-parameters.md#citusexecutor_slow_start_interval-integer) | Time to wait in milliseconds between opening connections to the same worker node |
-| [citus.force_max_query_parallelization](reference-parameters.md#citusforce_max_query_parallelization-boolean) | Open as many connections as possible |
-| [citus.max_adaptive_executor_pool_size](reference-parameters.md#citusmax_adaptive_executor_pool_size-integer) | Max worker connections per session |
-| [citus.max_cached_conns_per_worker](reference-parameters.md#citusmax_cached_conns_per_worker-integer) | Number of connections kept open to speed up subsequent commands |
-| [citus.node_connection_timeout](reference-parameters.md#citusnode_connection_timeout-integer) | Max duration (in milliseconds) to wait for connection establishment |
-
-### Data transfer
-
-| Name | Description |
-||-|
-| [citus.enable_binary_protocol](reference-parameters.md#citusenable_binary_protocol-boolean) | Use PostgreSQL's binary serialization format (when applicable) to transfer data with workers |
-| [citus.max_intermediate_result_size](reference-parameters.md#citusmax_intermediate_result_size-integer) | Size in KB of intermediate results for CTEs and subqueries that are unable to be pushed down |
-
-### Deadlock
-
-| Name | Description |
-||-|
-| [citus.distributed_deadlock_detection_factor](reference-parameters.md#citusdistributed_deadlock_detection_factor-floating-point) | Time to wait before checking for distributed deadlocks |
-| [citus.log_distributed_deadlock_detection](reference-parameters.md#cituslog_distributed_deadlock_detection-boolean) | Whether to log distributed deadlock detection-related processing in the server log |
-
-## System tables
-
-The Hyperscale (Citus) coordinator node contains metadata tables and views to
-help you see data properties and query activity across the server group.
-
-| Name | Description |
-||-|
-| [citus_dist_stat_activity](reference-metadata.md#distributed-query-activity) | Distributed queries that are executing on all nodes |
-| [citus_lock_waits](reference-metadata.md#distributed-query-activity) | Queries blocked throughout the server group |
-| [citus_shards](reference-metadata.md#shard-information-view) | The location of each shard, the type of table it belongs to, and its size |
-| [citus_stat_statements](reference-metadata.md#query-statistics-table) | Stats about how queries are being executed, and for whom |
-| [citus_tables](reference-metadata.md#distributed-tables-view) | A summary of all distributed and reference tables |
-| [citus_worker_stat_activity](reference-metadata.md#distributed-query-activity) | Queries on workers, including tasks on individual shards |
-| [pg_dist_colocation](reference-metadata.md#colocation-group-table) | Which tables' shards should be placed together |
-| [pg_dist_node](reference-metadata.md#worker-node-table) | Information about worker nodes in the server group |
-| [pg_dist_object](reference-metadata.md#distributed-object-table) | Objects such as types and functions that have been created on the coordinator node and propagated to worker nodes |
-| [pg_dist_placement](reference-metadata.md#shard-placement-table) | The location of shard replicas on worker nodes |
-| [pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table) | Strategies that `rebalance_table_shards` can use to determine where to move shards |
-| [pg_dist_shard](reference-metadata.md#shard-table) | The table, distribution column, and value ranges for every shard |
-| [time_partitions](reference-metadata.md#time-partitions-view) | Information about each partition managed by such functions as `create_time_partitions` and `drop_old_time_partitions` |
--
-## Next steps
-
-* Learn some [useful diagnostic queries](howto-useful-diagnostic-queries.md)
-* Review the list of [configuration
- parameters](reference-parameters.md#postgresql-parameters) in the underlying
- PostgreSQL database.
postgresql Reference Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-parameters.md
- Title: Server parameters – Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Parameters in the Hyperscale (Citus) SQL API
- Previously updated : 08/02/2022
-# Server parameters
--
-There are various server parameters that affect the behavior of Hyperscale
-(Citus), some from standard PostgreSQL and some specific to Hyperscale (Citus).
-These parameters can be set in the Azure portal for a Hyperscale (Citus) server
-group. Under the **Settings** category, choose **Worker node parameters** or
-**Coordinator node parameters**. These pages allow you to set parameters for
-all worker nodes, or just for the coordinator node.
-
-## Hyperscale (Citus) parameters
-
-> [!NOTE]
->
-> Hyperscale (Citus) server groups running older versions of the Citus Engine may not
-> offer all the parameters listed below.
-
-### General configuration
-
-#### citus.use\_secondary\_nodes (enum)
-
-Sets the policy to use when choosing nodes for SELECT queries. If it's set to
-'always', the planner queries only nodes whose noderole is marked 'secondary' in
-[pg_dist_node](reference-metadata.md#worker-node-table).
-
-The supported values for this enum are:
-
-- **never:** (default) All reads happen on primary nodes.
-- **always:** Reads run against secondary nodes instead, and
-  insert/update statements are disabled.
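-
-For example, a read-only analytics session that can tolerate replication lag might opt into secondary reads like this (the `github_events` table is only illustrative):
-
-```postgresql
--- route SELECT queries in this session to nodes with the 'secondary' noderole
-SET citus.use_secondary_nodes TO 'always';
-
--- reads now run on secondary nodes; inserts and updates are disabled
-SELECT count(*) FROM github_events;
-```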
-
-#### citus.cluster\_name (text)
-
-Informs the coordinator node planner which cluster it coordinates. Once
-cluster\_name is set, the planner will query worker nodes in that
-cluster alone.
-
-#### citus.enable\_version\_checks (boolean)
-
-Upgrading the Hyperscale (Citus) version requires a server restart (to pick up
-the new shared library), followed by the ALTER EXTENSION UPDATE command.
-Failing to execute both steps could cause errors or crashes. Hyperscale
-(Citus) therefore validates that the version of the code and that of the
-extension match, and errors out if they don't.
-
-This value defaults to true, and is effective on the coordinator. In
-rare cases, complex upgrade processes may require setting this parameter
-to false, thus disabling the check.
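-
-If a complex upgrade procedure does require it, the check can be disabled temporarily, as in this sketch:
-
-```postgresql
--- disable the version check on the coordinator for the current session
-SET citus.enable_version_checks TO false;
-
--- re-enable it once the upgrade steps are complete
-SET citus.enable_version_checks TO true;
-```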
-
-#### citus.log\_distributed\_deadlock\_detection (boolean)
-
-Whether to log distributed deadlock detection-related processing in the
-server log. It defaults to false.
-
-#### citus.distributed\_deadlock\_detection\_factor (floating point)
-
-Sets the time to wait before checking for distributed deadlocks. In particular,
-the time to wait is this value multiplied by PostgreSQL's
-[deadlock\_timeout](https://www.postgresql.org/docs/current/static/runtime-config-locks.html)
-setting. The default value is `2`. A value of `-1` disables distributed
-deadlock detection.
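-
-For example, with PostgreSQL's `deadlock_timeout` at its default of one second, the following sketch makes distributed deadlock checks wait about five seconds (the database name `foo` follows the earlier example):
-
-```postgresql
--- wait 5 x deadlock_timeout before checking for distributed deadlocks
-ALTER DATABASE foo
-SET citus.distributed_deadlock_detection_factor = 5;
-```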
-
-#### citus.node\_connection\_timeout (integer)
-
-The `citus.node_connection_timeout` GUC sets the maximum duration (in
-milliseconds) to wait for connection establishment. Hyperscale (Citus) raises
-an error if the timeout elapses before at least one worker connection is
-established. This GUC affects connections from the coordinator to workers, and
-workers to each other.
-
-- Default: five seconds
-- Minimum: 10 milliseconds
-- Maximum: one hour
-
-```postgresql
--- set to 30 seconds
-ALTER DATABASE foo
-SET citus.node_connection_timeout = 30000;
-```
-
-#### citus.log_remote_commands (boolean)
-
-Log all commands that the coordinator sends to worker nodes. For instance:
-
-```postgresql
--- reveal the per-shard queries behind the scenes
-SET citus.log_remote_commands TO on;
-
--- run a query on the distributed table "github_events"
-SELECT count(*) FROM github_events;
-```
-
-The output reveals several queries running on workers because of the single
-`count(*)` query on the coordinator.
-
-```
-NOTICE: issuing SELECT count(*) AS count FROM public.github_events_102040 github_events WHERE true
-DETAIL: on server citus@private-c.demo.postgres.database.azure.com:5432 connectionId: 1
-NOTICE: issuing SELECT count(*) AS count FROM public.github_events_102041 github_events WHERE true
-DETAIL: on server citus@private-c.demo.postgres.database.azure.com:5432 connectionId: 1
-NOTICE: issuing SELECT count(*) AS count FROM public.github_events_102042 github_events WHERE true
-DETAIL: on server citus@private-c.demo.postgres.database.azure.com:5432 connectionId: 1
-... etc, one for each of the 32 shards
-```
-
-#### citus.show\_shards\_for\_app\_name\_prefixes (text)
-
-By default, Citus hides shards from the list of tables PostgreSQL gives to SQL
-clients. It does this because there are multiple shards per distributed table,
-and the shards can be distracting to the SQL client.
-
-The `citus.show_shards_for_app_name_prefixes` GUC allows shards to be displayed
-for selected clients that want to see them. Its default value is ''.
-
-```postgresql
--- show shards to psql only (hide in other clients, like pgAdmin)
-SET citus.show_shards_for_app_name_prefixes TO 'psql';
-
--- also accepts a comma separated list
-SET citus.show_shards_for_app_name_prefixes TO 'psql,pg_dump';
-```
-
-Shard hiding can be disabled entirely using
-[citus.override_table_visibility](#citusoverride_table_visibility-boolean).
-
-#### citus.override\_table\_visibility (boolean)
-
-Determines whether
-[citus.show_shards_for_app_name_prefixes](#citusshow_shards_for_app_name_prefixes-text)
-is active. The default value is 'true'. When set to 'false', shards are visible
-to all client applications.
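-
-For example, a diagnostic session could make shards visible to every client and then restore the default:
-
-```postgresql
--- disable shard hiding so any client can list the shard tables
-SET citus.override_table_visibility TO false;
-
--- restore the default behavior
-SET citus.override_table_visibility TO true;
-```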
-
-#### citus.use\_citus\_managed\_tables (boolean)
-
-Allow new [local tables](concepts-nodes.md#type-3-local-tables) to be accessed
-by queries on worker nodes. Adds all newly created tables to Citus metadata
-when enabled. The default value is 'false'.
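-
-For example, a minimal sketch (the `events_local` table is hypothetical):
-
-```postgresql
--- register newly created local tables in Citus metadata
-SET citus.use_citus_managed_tables TO true;
-
--- this table can now be accessed by queries running on worker nodes
-CREATE TABLE events_local (id bigint, payload jsonb);
-```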
-
-### Query Statistics
-
-#### citus.stat\_statements\_purge\_interval (integer)
-
-Sets the frequency at which the maintenance daemon removes records from
-[citus_stat_statements](reference-metadata.md#query-statistics-table)
-that are unmatched in `pg_stat_statements`. This configuration value sets the
-time interval between purges in seconds, with a default value of 10. A value of
-0 disables the purges.
-
-```psql
-SET citus.stat_statements_purge_interval TO 5;
-```
-
-This parameter is effective on the coordinator and can be changed at
-runtime.
-
-#### citus.stat_statements_max (integer)
-
-The maximum number of rows to store in `citus_stat_statements`. Defaults to
-50000, and may be changed to any value in the range 1000 - 10000000. Each row requires 140 bytes of storage, so setting `stat_statements_max` to its
-maximum value of 10M would consume 1.4 GB of memory.
-
-Changing this GUC won't take effect until PostgreSQL is restarted.
-
-#### citus.stat_statements_track (enum)
-
-Recording statistics for `citus_stat_statements` requires extra CPU resources.
-When the database is experiencing load, the administrator may wish to disable
-statement tracking. The `citus.stat_statements_track` GUC can turn tracking on
-and off.
-
-* **all:** (default) Track all statements.
-* **none:** Disable tracking.
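-
-For example, tracking could be paused during a load spike and resumed later:
-
-```postgresql
--- pause statement tracking while the cluster is under heavy load
-SET citus.stat_statements_track TO 'none';
-
--- resume tracking all statements
-SET citus.stat_statements_track TO 'all';
-```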
-
-### Data Loading
-
-#### citus.multi\_shard\_commit\_protocol (enum)
-
-Sets the commit protocol to use when performing COPY on a hash distributed
-table. On each individual shard placement, the COPY is performed in a
-transaction block to ensure that no data is ingested if an error occurs during
-the COPY. However, there's a particular failure case in which the COPY
-succeeds on all placements, but a (hardware) failure occurs before all
-transactions commit. This parameter can be used to prevent data loss in that
-case by choosing between the following commit protocols:
-
-- **2pc:** (default) The transactions in which COPY is performed on
-  the shard placements are first prepared using PostgreSQL's
- [two-phase
- commit](http://www.postgresql.org/docs/current/static/sql-prepare-transaction.html)
- and then committed. Failed commits can be manually recovered or
- aborted using COMMIT PREPARED or ROLLBACK PREPARED, respectively.
- When using 2pc,
- [max\_prepared\_transactions](http://www.postgresql.org/docs/current/static/runtime-config-resource.html)
- should be increased on all the workers, typically to the same value
- as max\_connections.
-- **1pc:** The transactions in which COPY is performed on the shard
-  placements are committed in a single round. Data may be lost if a
- commit fails after COPY succeeds on all placements (rare).
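-
-For example, a sketch that trades the stronger guarantees of 2PC for lower overhead during a bulk load, accepting the rare failure case described above:
-
-```postgresql
--- use single-phase commit for COPY into hash-distributed tables in this session
-SET citus.multi_shard_commit_protocol TO '1pc';
-
--- switch back to the default two-phase commit afterwards
-SET citus.multi_shard_commit_protocol TO '2pc';
-```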
-
-#### citus.shard\_replication\_factor (integer)
-
-Sets the replication factor for shards, that is, the number of nodes on which
-shards will be placed. It defaults to 1. This parameter can be set at run-time
-and is effective on the coordinator. The ideal value for this parameter depends
-on the size of the cluster and rate of node failure. For example, you may want
-to increase this replication factor if you run large clusters and observe node
-failures on a more frequent basis.
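-
-For example, a sketch that places each shard of a newly distributed table on two nodes (the `page_views` table is hypothetical):
-
-```postgresql
--- place every shard of subsequently distributed tables on two worker nodes
-SET citus.shard_replication_factor TO 2;
-
-CREATE TABLE page_views (tenant_id bigint, page text, viewed_at timestamptz);
-SELECT create_distributed_table('page_views', 'tenant_id');
-```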
-
-### Planner Configuration
-
-#### citus.local_table_join_policy (enum)
-
-This GUC determines how Hyperscale (Citus) moves data when doing a join between
-local and distributed tables. Customizing the join policy can help reduce the
-amount of data sent between worker nodes.
-
-Hyperscale (Citus) will send either the local or distributed tables to nodes as
-necessary to support the join. Copying table data is referred to as a
-"conversion." If a local table is converted, then it will be sent to any
-workers that need its data to perform the join. If a distributed table is
-converted, then it will be collected in the coordinator to support the join.
-The Citus planner sends only the necessary rows when doing a conversion.
-
-There are four modes available to express conversion preference:
-
-* **auto:** (Default) Citus will convert either all local or all distributed
- tables to support local and distributed table joins. Citus decides which to
- convert using a heuristic. It will convert distributed tables if they're
- joined using a constant filter on a unique index (such as a primary key). The
- conversion ensures less data gets moved between workers.
-* **never:** Citus won't allow joins between local and distributed tables.
-* **prefer-local:** Citus will prefer converting local tables to support local
- and distributed table joins.
-* **prefer-distributed:** Citus will prefer converting distributed tables to
- support local and distributed table joins. If the distributed tables are
- huge, using this option might result in moving lots of data between workers.
-
-For example, assume `citus_table` is a distributed table distributed by the
-column `x`, and that `postgres_table` is a local table:
-
-```postgresql
-CREATE TABLE citus_table(x int primary key, y int);
-SELECT create_distributed_table('citus_table', 'x');
-
-CREATE TABLE postgres_table(x int, y int);
-
--- even though the join is on primary key, there isn't a constant filter
--- hence postgres_table will be sent to worker nodes to support the join
-SELECT * FROM citus_table JOIN postgres_table USING (x);
-
--- there is a constant filter on a primary key, hence the filtered row from
--- the distributed table will be pulled to coordinator to support the join
-SELECT * FROM citus_table JOIN postgres_table USING (x) WHERE citus_table.x = 10;
-
-SET citus.local_table_join_policy to 'prefer-distributed';
--- since we prefer distributed tables, citus_table will be pulled to coordinator
--- to support the join. Note that citus_table can be huge.
-SELECT * FROM citus_table JOIN postgres_table USING (x);
-
-SET citus.local_table_join_policy to 'prefer-local';
--- even though there is a constant filter on primary key for citus_table,
--- postgres_table will be sent to necessary workers because we are using 'prefer-local'
-SELECT * FROM citus_table JOIN postgres_table USING (x) WHERE citus_table.x = 10;
-```
-
-#### citus.limit\_clause\_row\_fetch\_count (integer)
-
-Sets the number of rows to fetch per task for limit clause optimization.
-In some cases, select queries with limit clauses may need to fetch all
-rows from each task to generate results. In those cases, and where an
-approximation would produce meaningful results, this configuration value
-sets the number of rows to fetch from each shard. Limit approximations
-are disabled by default and this parameter is set to -1. This value can
-be set at run-time and is effective on the coordinator.
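-
-For example, a sketch that accepts approximate results by fetching at most 1000 rows per task (the `github_events` query is illustrative):
-
-```postgresql
--- fetch at most 1000 rows from each task when approximating LIMIT results
-SET citus.limit_clause_row_fetch_count TO 1000;
-
-SELECT repo_id, count(*)
-  FROM github_events
- GROUP BY repo_id
- ORDER BY count(*) DESC
- LIMIT 10;
-```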
-
-#### citus.count\_distinct\_error\_rate (floating point)
-
-Hyperscale (Citus) can calculate count(distinct) approximates using the
-postgresql-hll extension. This configuration entry sets the desired
-error rate when calculating count(distinct). 0.0, which is the default,
-disables approximations for count(distinct); and 1.0 provides no
-guarantees about the accuracy of results. We recommend setting this
-parameter to 0.005 for best results. This value can be set at run-time
-and is effective on the coordinator.
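-
-For example, a sketch that enables approximate distinct counts at the recommended error rate (the `github_events` table and `user_id` column are illustrative):
-
-```postgresql
--- allow roughly 0.5% error in count(distinct) results
-SET citus.count_distinct_error_rate TO 0.005;
-
-SELECT count(DISTINCT user_id) FROM github_events;
-```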
-
-#### citus.task\_assignment\_policy (enum)
-
-> [!NOTE]
-> This GUC is applicable only when
-> [shard_replication_factor](reference-parameters.md#citusshard_replication_factor-integer)
-> is greater than one, or for queries against
-> [reference_tables](concepts-distributed-data.md#type-2-reference-tables).
-
-Sets the policy to use when assigning tasks to workers. The coordinator
-assigns tasks to workers based on shard locations. This configuration
-value specifies the policy to use when making these assignments.
-Currently, there are three possible task assignment policies that can
-be used.
-
-- **greedy:** The greedy policy is the default and aims to evenly
- distribute tasks across workers.
-- **round-robin:** The round-robin policy assigns tasks to workers in
- a round-robin fashion alternating between different replicas. This policy
- enables better cluster utilization when the shard count for a
- table is low compared to the number of workers.
-- **first-replica:** The first-replica policy assigns tasks based on the insertion order of placements (replicas) for the
- shards. In other words, the fragment query for a shard is assigned to the worker that has the first replica of that shard.
- This method allows you to have strong guarantees about which shards
- will be used on which nodes (that is, stronger memory residency
- guarantees).
-
-This parameter can be set at run-time and is effective on the
-coordinator.
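-
-For example, a sketch that spreads queries against replicated shards across their replicas:
-
-```postgresql
--- alternate between shard replicas to spread load across workers
-SET citus.task_assignment_policy TO 'round-robin';
-
--- revert to the default policy
-SET citus.task_assignment_policy TO 'greedy';
-```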
-
-### Intermediate Data Transfer
-
-#### citus.max\_intermediate\_result\_size (integer)
-
-The maximum size in KB of intermediate results for CTEs that are unable
-to be pushed down to worker nodes for execution, and for complex
-subqueries. The default is 1 GB, and a value of -1 means no limit.
-Queries exceeding the limit will be canceled and produce an error
-message.
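-
-For example, a sketch that caps intermediate results at 256 MB for the current session:
-
-```postgresql
--- cancel queries whose CTE or subquery intermediate results exceed 256 MB
-SET citus.max_intermediate_result_size TO 262144;  -- value is in KB
-```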
-
-### Executor Configuration
-
-#### General
-
-##### citus.all\_modifications\_commutative
-
-Hyperscale (Citus) enforces commutativity rules and acquires appropriate locks
-for modify operations in order to guarantee correctness of behavior. For
-example, it assumes that an INSERT statement commutes with another INSERT
-statement, but not with an UPDATE or DELETE statement. Similarly, it assumes
-that an UPDATE or DELETE statement doesn't commute with another UPDATE or
-DELETE statement. This precaution means that UPDATEs and DELETEs require
-Hyperscale (Citus) to acquire stronger locks.
-
-If you have UPDATE statements that are commutative with your INSERTs or
-other UPDATEs, then you can relax these commutativity assumptions by
-setting this parameter to true. When this parameter is set to true, all
-commands are considered commutative and claim a shared lock, which can
-improve overall throughput. This parameter can be set at runtime and is
-effective on the coordinator.
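-
-For example, a sketch for a workload whose UPDATEs are known to commute, where relaxed locking is acceptable:
-
-```postgresql
--- treat all modifications as commutative so they take only a shared lock
-SET citus.all_modifications_commutative TO true;
-```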
-
-##### citus.remote\_task\_check\_interval (integer)
-
-Sets the frequency at which Hyperscale (Citus) checks for statuses of jobs
-managed by the task tracker executor. It defaults to 10 ms. The coordinator
-assigns tasks to workers, and then regularly checks with them about each
-task's progress. This configuration value sets the time interval between two
-consecutive checks. This parameter is effective on the coordinator and can be
-set at runtime.
-
-##### citus.task\_executor\_type (enum)
-
-Hyperscale (Citus) has three executor types for running distributed SELECT
-queries. The desired executor can be selected by setting this configuration
-parameter. The accepted values for this parameter are:
-
-- **adaptive:** The default. It's optimal for fast responses to
- queries that involve aggregations and colocated joins spanning
- across multiple shards.
-- **task-tracker:** The task-tracker executor is well suited for long
- running, complex queries that require shuffling of data across
- worker nodes and efficient resource management.
-- **real-time:** (deprecated) Serves a similar purpose as the adaptive
- executor, but is less flexible and can cause more connection
- pressure on worker nodes.
-
-This parameter can be set at run-time and is effective on the coordinator.
-
-##### citus.multi\_task\_query\_log\_level (enum) {#multi_task_logging}
-
-Sets a log-level for any query that generates more than one task (that is,
-which hits more than one shard). Logging is useful during a multi-tenant
-application migration, as you can choose to error or warn for such queries, to
-find them and add a tenant\_id filter to them. This parameter can be set at
-runtime and is effective on the coordinator. The default value for this
-parameter is 'off'.
-
-The supported values for this enum are:
-
-- **off:** Turn off logging any queries that generate multiple tasks
-  (that is, span multiple shards)
-- **debug:** Logs statement at DEBUG severity level.
-- **log:** Logs statement at LOG severity level. The log line will
-  include the SQL query that was run.
-- **notice:** Logs statement at NOTICE severity level.
-- **warning:** Logs statement at WARNING severity level.
-- **error:** Logs statement at ERROR severity level.
-
-It may be useful to use `error` during development testing,
-and a lower log-level like `log` during actual production deployment.
-Choosing `log` will cause multi-task queries to appear in the database
-logs with the query itself shown after "STATEMENT."
-
-```
-LOG: multi-task query about to be executed
-HINT: Queries are split to multiple tasks if they have to be split into several queries on the workers.
-STATEMENT: select * from foo;
-```
-
-##### citus.propagate_set_commands (enum)
-
-Determines which SET commands are propagated from the coordinator to workers.
-The default value for this parameter is 'none'.
-
-The supported values are:
-
-* **none:** no SET commands are propagated.
-* **local:** only SET LOCAL commands are propagated.
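-
-For example, with propagation set to 'local', a SET LOCAL issued inside a transaction on the coordinator is also applied on the workers that participate in the transaction (the `enable_hashagg` setting and `github_events` table are illustrative):
-
-```postgresql
-SET citus.propagate_set_commands TO 'local';
-
-BEGIN;
--- this setting now also applies on the workers for the rest of the transaction
-SET LOCAL enable_hashagg TO off;
-SELECT count(*) FROM github_events;
-COMMIT;
-```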
-
-##### citus.create\_object\_propagation (enum)
-
-Controls the behavior of CREATE statements in transactions for supported
-objects.
-
-When objects are created in a multi-statement transaction block, Citus switches
-sequential mode to ensure created objects are visible to later statements on
-shards. However, the switch to sequential mode is not always desirable. By
-overriding this behavior, the user can trade off performance for full
-transactional consistency in the creation of new objects.
-
-The default value for this parameter is 'immediate'.
-
-The supported values are:
-
-* **immediate:** raises an error in transactions where parallel operations like
- create\_distributed\_table happen before an attempted CREATE TYPE.
-* **automatic:** defer creation of types when sharing a transaction with a
- parallel operation on distributed tables. There may be some inconsistency
- between which database objects exist on different nodes.
-* **deferred:** return to pre-11.0 behavior, which is like automatic but with
- other subtle corner cases. We recommend the automatic setting over deferred,
- unless you require true backward compatibility.
-
-For an example of this GUC in action, see [type
-propagation](howto-modify-distributed-tables.md#types-and-functions).
-
-##### citus.enable\_repartition\_joins (boolean)
-
-Ordinarily, attempting to perform repartition joins with the adaptive executor
-will fail with an error message. However, setting
-`citus.enable_repartition_joins` to true allows Hyperscale (Citus) to
-temporarily switch into the task-tracker executor to perform the join. The
-default value is false.
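-
-For example, a sketch of a join on non-distribution columns (the `orders` and `line_items` tables are hypothetical):
-
-```postgresql
-SET citus.enable_repartition_joins TO true;
-
--- join on columns other than the distribution column; Citus repartitions
--- the data through the task-tracker executor to answer the query
-SELECT count(*)
-  FROM orders o
-  JOIN line_items l ON o.order_number = l.order_number;
-```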
-
-##### citus.enable_repartitioned_insert_select (boolean)
-
-By default, an INSERT INTO … SELECT statement that can’t be pushed down will
-attempt to repartition rows from the SELECT statement and transfer them between
-workers for insertion. However, if the target table has too many shards, then
-repartitioning will probably not perform well. The overhead of processing the
-shard intervals when determining how to partition the results is too great.
-Repartitioning can be disabled manually by setting
-`citus.enable_repartitioned_insert_select` to false.
-
-##### citus.enable_binary_protocol (boolean)
-
-Setting this parameter to true instructs the coordinator node to use
-PostgreSQL's binary serialization format (when applicable) to transfer data
-with workers. Some column types don't support binary serialization.
-
-Enabling this parameter is mostly useful when the workers must return large
-amounts of data. Examples are when many rows are requested, the rows have
-many columns, or they use wide types such as `hll` from the postgresql-hll
-extension.
-
-The default value is true for Postgres versions 14 and higher. For Postgres
-versions 13 and lower the default is false, which means all results are encoded
-and transferred in text format.
-
-##### citus.max_adaptive_executor_pool_size (integer)
-
-The `citus.max_adaptive_executor_pool_size` GUC limits worker connections from
-the current session. This GUC is useful for:
-
-* Preventing a single backend from getting all the worker resources
-* Providing priority management: designate low priority sessions with low
- max_adaptive_executor_pool_size, and high priority sessions with higher
- values
-
-The default value is 16.
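-
-For example, a sketch that caps a low-priority role at four worker connections per session (the `reporting` role is hypothetical):
-
-```postgresql
--- limit every session of the reporting role to 4 worker connections
-ALTER ROLE reporting SET citus.max_adaptive_executor_pool_size = 4;
-
--- or lower the limit just for the current session
-SET citus.max_adaptive_executor_pool_size TO 4;
-```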
-
-##### citus.executor_slow_start_interval (integer)
-
-Time to wait in milliseconds between opening connections to the same worker
-node.
-
-When the individual tasks of a multi-shard query take little time, they
-can often be finished over a single (often already cached) connection. To avoid
-redundantly opening more connections, the executor waits between
-connection attempts for the configured number of milliseconds. At the end of
-the interval, it increases the number of connections it's allowed to open next
-time.
-
-For long queries (those taking >500 ms), slow start might add latency, but for
-short queries it's faster. The default value is 10 ms.
-
-##### citus.max_cached_conns_per_worker (integer)
-
-Each backend opens connections to the workers to query the shards. At the end
-of the transaction, the configured number of connections is kept open to speed
-up subsequent commands. Increasing this value will reduce the latency of
-multi-shard queries, but will also increase overhead on the workers.
-
-The default value is 1. A larger value such as 2 might be helpful for clusters
-that use a small number of concurrent sessions, but it's not wise to go much
-further (for example, 16 would be too high).
-
-##### citus.force_max_query_parallelization (boolean)
-
-Simulates the deprecated and now nonexistent real-time executor. This is used
-to open as many connections as possible to maximize query parallelization.
-
-When this GUC is enabled, Citus will force the adaptive executor to use as many
-connections as possible while executing a parallel distributed query. If not
-enabled, the executor might choose to use fewer connections to optimize overall
-query execution throughput. Internally, setting this GUC to true ends up using one
-connection per task.
-
-One place where this is useful is in a transaction whose first query is
-lightweight and requires few connections, while a subsequent query would
-benefit from more connections. Citus decides how many connections to use in a
-transaction based on the first statement, which can throttle other queries
-unless we use the GUC to provide a hint.
-
-```postgresql
-BEGIN;
--- add this hint
-SET citus.force_max_query_parallelization TO ON;
-
--- a lightweight query that doesn't require many connections
-SELECT count(*) FROM table WHERE filter = x;
-
--- a query that benefits from more connections, and can obtain them
--- since we forced max parallelization above
-SELECT ... very .. complex .. SQL;
-COMMIT;
-```
-
-The default value is false.
-
-#### Task tracker executor configuration
-
-##### citus.task\_tracker\_delay (integer)
-
-This parameter sets the task tracker sleep time between task management rounds
-and defaults to 200 ms. The task tracker process wakes up regularly, walks over
-all tasks assigned to it, and schedules and executes these tasks. Then, the
-task tracker sleeps for a time period before walking over these tasks again.
-This configuration value determines the length of that sleeping period. This
-parameter is effective on the workers and needs to be changed in the
-postgresql.conf file. After editing the config file, users can send a SIGHUP
-signal or restart the server for the change to take effect.
-
-This parameter can be decreased to trim the delay caused by the task
-tracker executor by reducing the time gap between the management rounds.
-Decreasing the delay is useful in cases when the shard queries are short and
-hence update their status regularly.
-
-##### citus.max\_assign\_task\_batch\_size (integer)
-
-The task tracker executor on the coordinator synchronously assigns tasks in
-batches to the daemon on the workers. This parameter sets the maximum number of
-tasks to assign in a single batch. Choosing a larger batch size allows for
-faster task assignment. However, if the number of workers is large, then it may
-take longer for all workers to get tasks. This parameter can be set at runtime
-and is effective on the coordinator.
-
-##### citus.max\_running\_tasks\_per\_node (integer)
-
-The task tracker process schedules and executes the tasks assigned to it as
-appropriate. This configuration value sets the maximum number of tasks to
-execute concurrently on one node at any given time and defaults to 8.
-
-The limit ensures that you don't have many tasks hitting disk at the same
-time, and helps in avoiding disk I/O contention. If your queries are served
-from memory or SSDs, you can increase max\_running\_tasks\_per\_node without
-much concern.
-
-##### citus.partition\_buffer\_size (integer)
-
-Sets the buffer size to use for partition operations and defaults to 8 MB.
-Hyperscale (Citus) allows for table data to be repartitioned into multiple
-files when two large tables are being joined. After this partition buffer fills
-up, the repartitioned data is flushed into files on disk. This configuration
-entry can be set at run-time and is effective on the workers.
-
-#### Explain output
-
-##### citus.explain\_all\_tasks (boolean)
-
-By default, Hyperscale (Citus) shows the output of a single, arbitrary task
-when running
-[EXPLAIN](http://www.postgresql.org/docs/current/static/sql-explain.html) on a
-distributed query. In most cases, the explain output will be similar across
-tasks. Occasionally, some of the tasks will be planned differently or have much
-higher execution times. In those cases, it can be useful to enable this
-parameter, after which the EXPLAIN output will include all tasks. Explaining
-all tasks may cause the EXPLAIN to take longer.
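-
-For example, a sketch that shows the plan of every task instead of a single arbitrary one (the `github_events` query is illustrative):
-
-```postgresql
-SET citus.explain_all_tasks TO true;
-
-EXPLAIN SELECT count(*) FROM github_events;
-```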
-
-##### citus.explain_analyze_sort_method (enum)
-
-Determines the sort method of the tasks in the output of EXPLAIN ANALYZE. The
-default value of `citus.explain_analyze_sort_method` is `execution-time`.
-
-The supported values are:
-
-* **execution-time:** sort by execution time.
-* **taskId:** sort by task ID.
-
-## PostgreSQL parameters
-
-* [DateStyle](https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-DATETIME-OUTPUT) - Sets the display format for date and time values
-* [IntervalStyle](https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-OUTPUT) - Sets the display format for interval values
-* [TimeZone](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-TIMEZONE) - Sets the time zone for displaying and interpreting time stamps
-* [application_name](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-APPLICATION-NAME) - Sets the application name to be reported in statistics and logs
-* [array_nulls](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-ARRAY-NULLS) - Enables input of NULL elements in arrays
-* [autovacuum](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM) - Starts the autovacuum subprocess
-* [autovacuum_analyze_scale_factor](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-ANALYZE-SCALE-FACTOR) - Number of tuple inserts, updates, or deletes prior to analyze as a fraction of reltuples
-* [autovacuum_analyze_threshold](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-ANALYZE-THRESHOLD) - Minimum number of tuple inserts, updates, or deletes prior to analyze
-* [autovacuum_naptime](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-NAPTIME) - Time to sleep between autovacuum runs
-* [autovacuum_vacuum_cost_delay](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-COST-DELAY) - Vacuum cost delay in milliseconds, for autovacuum
-* [autovacuum_vacuum_cost_limit](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-COST-LIMIT) - Vacuum cost amount available before napping, for autovacuum
-* [autovacuum_vacuum_scale_factor](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-SCALE-FACTOR) - Number of tuple updates or deletes prior to vacuum as a fraction of reltuples
-* [autovacuum_vacuum_threshold](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-THRESHOLD) - Minimum number of tuple updates or deletes prior to vacuum
-* [autovacuum_work_mem](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-AUTOVACUUM-WORK-MEM) - Sets the maximum memory to be used by each autovacuum worker process
-* [backend_flush_after](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-BACKEND-FLUSH-AFTER) - Number of pages after which previously performed writes are flushed to disk
-* [backslash_quote](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-BACKSLASH-QUOTE) - Sets whether "\'" is allowed in string literals
-* [bgwriter_delay](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-BGWRITER-DELAY) - Background writer sleep time between rounds
-* [bgwriter_flush_after](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-BGWRITER-FLUSH-AFTER) - Number of pages after which previously performed writes are flushed to disk
-* [bgwriter_lru_maxpages](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-BGWRITER-LRU-MAXPAGES) - Background writer maximum number of LRU pages to flush per round
-* [bgwriter_lru_multiplier](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-BGWRITER-LRU-MULTIPLIER) - Multiple of the average buffer usage to free per round
-* [bytea_output](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-BYTEA-OUTPUT) - Sets the output format for bytea
-* [check_function_bodies](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-CHECK-FUNCTION-BODIES) - Checks function bodies during CREATE FUNCTION
-* [checkpoint_completion_target](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-CHECKPOINT-COMPLETION-TARGET) - Time spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval
-* [checkpoint_timeout](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-CHECKPOINT-TIMEOUT) - Sets the maximum time between automatic WAL checkpoints
-* [checkpoint_warning](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-CHECKPOINT-WARNING) - Enables warnings if checkpoint segments are filled more frequently than this
-* [client_encoding](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-CLIENT-ENCODING) - Sets the client's character set encoding
-* [client_min_messages](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-CLIENT-MIN-MESSAGES) - Sets the message levels that are sent to the client
-* [commit_delay](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-COMMIT-DELAY) - Sets the delay in microseconds between transaction commit and flushing WAL to disk
-* [commit_siblings](https://www.postgresql.org/docs/12/runtime-config-wal.html#GUC-COMMIT-SIBLINGS) - Sets the minimum concurrent open transactions before performing commit_delay
-* [constraint_exclusion](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CONSTRAINT-EXCLUSION) - Enables the planner to use constraints to optimize queries
-* [cpu_index_tuple_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CPU-INDEX-TUPLE-COST) - Sets the planner's estimate of the cost of processing each index entry during an index scan
-* [cpu_operator_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CPU-OPERATOR-COST) - Sets the planner's estimate of the cost of processing each operator or function call
-* [cpu_tuple_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CPU-TUPLE-COST) - Sets the planner's estimate of the cost of processing each tuple (row)
-* [cursor_tuple_fraction](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CURSOR-TUPLE-FRACTION) - Sets the planner's estimate of the fraction of a cursor's rows that will be retrieved
-* [deadlock_timeout](https://www.postgresql.org/docs/current/runtime-config-locks.html#GUC-DEADLOCK-TIMEOUT) - Sets the amount of time, in milliseconds, to wait on a lock before checking for deadlock
-* [debug_pretty_print](https://www.postgresql.org/docs/current/runtime-config-logging.html#id-1.6.6.11.5.2.3.1.3) - Indents parse and plan tree displays
-* [debug_print_parse](https://www.postgresql.org/docs/current/runtime-config-logging.html#id-1.6.6.11.5.2.2.1.3) - Logs each query's parse tree
-* [debug_print_plan](https://www.postgresql.org/docs/current/runtime-config-logging.html#id-1.6.6.11.5.2.2.1.3) - Logs each query's execution plan
-* [debug_print_rewritten](https://www.postgresql.org/docs/current/runtime-config-logging.html#id-1.6.6.11.5.2.2.1.3) - Logs each query's rewritten parse tree
-* [default_statistics_target](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET) - Sets the default statistics target
-* [default_tablespace](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TABLESPACE) - Sets the default tablespace to create tables and indexes in
-* [default_text_search_config](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TEXT-SEARCH-CONFIG) - Sets default text search configuration
-* [default_transaction_deferrable](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TRANSACTION-DEFERRABLE) - Sets the default deferrable status of new transactions
-* [default_transaction_isolation](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TRANSACTION-ISOLATION) - Sets the transaction isolation level of each new transaction
-* [default_transaction_read_only](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TRANSACTION-READ-ONLY) - Sets the default read-only status of new transactions
-* default_with_oids - Creates new tables with OIDs by default
-* [effective_cache_size](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-EFFECTIVE-CACHE-SIZE) - Sets the planner's assumption about the size of the disk cache
-* [enable_bitmapscan](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-BITMAPSCAN) - Enables the planner's use of bitmap-scan plans
-* [enable_gathermerge](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-GATHERMERGE) - Enables the planner's use of gather merge plans
-* [enable_hashagg](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-HASHAGG) - Enables the planner's use of hashed aggregation plans
-* [enable_hashjoin](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-HASHJOIN) - Enables the planner's use of hash join plans
-* [enable_indexonlyscan](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-INDEXONLYSCAN) - Enables the planner's use of index-only-scan plans
-* [enable_indexscan](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-INDEXSCAN) - Enables the planner's use of index-scan plans
-* [enable_material](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-MATERIAL) - Enables the planner's use of materialization
-* [enable_mergejoin](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-MERGEJOIN) - Enables the planner's use of merge join plans
-* [enable_nestloop](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-NESTLOOP) - Enables the planner's use of nested loop join plans
-* [enable_seqscan](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-SEQSCAN) - Enables the planner's use of sequential-scan plans
-* [enable_sort](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-SORT) - Enables the planner's use of explicit sort steps
-* [enable_tidscan](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-TIDSCAN) - Enables the planner's use of TID scan plans
-* [escape_string_warning](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-ESCAPE-STRING-WARNING) - Warns about backslash escapes in ordinary string literals
-* [exit_on_error](https://www.postgresql.org/docs/current/runtime-config-error-handling.html#GUC-EXIT-ON-ERROR) - Terminates session on any error
-* [extra_float_digits](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-EXTRA-FLOAT-DIGITS) - Sets the number of digits displayed for floating-point values
-* [force_parallel_mode](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-FORCE-PARALLEL-MODE) - Forces use of parallel query facilities
-* [from_collapse_limit](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-FROM-COLLAPSE-LIMIT) - Sets the FROM-list size beyond which subqueries aren't collapsed
-* [geqo](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO) - Enables genetic query optimization
-* [geqo_effort](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-EFFORT) - GEQO: effort is used to set the default for other GEQO parameters
-* [geqo_generations](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-GENERATIONS) - GEQO: number of iterations of the algorithm
-* [geqo_pool_size](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-POOL-SIZE) - GEQO: number of individuals in the population
-* [geqo_seed](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-SEED) - GEQO: seed for random path selection
-* [geqo_selection_bias](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-SELECTION-BIAS) - GEQO: selective pressure within the population
-* [geqo_threshold](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-THRESHOLD) - Sets the threshold of FROM items beyond which GEQO is used
-* [gin_fuzzy_search_limit](https://www.postgresql.org/docs/current/runtime-config-client.html#id-1.6.6.14.5.2.2.1.3) - Sets the maximum allowed result for exact search by GIN
-* [gin_pending_list_limit](https://www.postgresql.org/docs/current/runtime-config-client.html#id-1.6.6.14.2.2.23.1.3) - Sets the maximum size of the pending list for GIN index
-* [idle_in_transaction_session_timeout](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-IDLE-IN-TRANSACTION-SESSION-TIMEOUT) - Sets the maximum allowed duration of any idling transaction
-* [join_collapse_limit](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-JOIN-COLLAPSE-LIMIT) - Sets the FROM-list size beyond which JOIN constructs aren't flattened
-* [lc_monetary](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-LC-MONETARY) - Sets the locale for formatting monetary amounts
-* [lc_numeric](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-LC-NUMERIC) - Sets the locale for formatting numbers
-* [lo_compat_privileges](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-LO-COMPAT-PRIVILEGES) - Enables backward compatibility mode for privilege checks on large objects
-* [lock_timeout](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-LOCK-TIMEOUT) - Sets the maximum allowed duration (in milliseconds) of any wait for a lock. 0 turns this off
-* [log_autovacuum_min_duration](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#) - Sets the minimum execution time above which autovacuum actions will be logged
-* [log_checkpoints](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-CHECKPOINTS) - Logs each checkpoint
-* [log_connections](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-CONNECTIONS) - Logs each successful connection
-* [log_destination](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-DESTINATION) - Sets the destination for server log output
-* [log_disconnections](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-DISCONNECTIONS) - Logs end of a session, including duration
-* [log_duration](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-DURATION) - Logs the duration of each completed SQL statement
-* [log_error_verbosity](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ERROR-VERBOSITY) - Sets the verbosity of logged messages
-* [log_lock_waits](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-LOCK-WAITS) - Logs long lock waits
-* [log_min_duration_statement](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-MIN-DURATION-STATEMENT) - Sets the minimum execution time (in milliseconds) above which statements will be logged. -1 disables logging statement durations
-* [log_min_error_statement](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-MIN-ERROR-STATEMENT) - Causes all statements generating error at or above this level to be logged
-* [log_min_messages](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-MIN-MESSAGES) - Sets the message levels that are logged
-* [log_replication_commands](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-REPLICATION-COMMANDS) - Logs each replication command
-* [log_statement](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-STATEMENT) - Sets the type of statements logged
-* [log_statement_stats](https://www.postgresql.org/docs/current/runtime-config-statistics.html#id-1.6.6.12.3.2.1.1.3) - For each query, writes cumulative performance statistics to the server log
-* [log_temp_files](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-TEMP-FILES) - Logs the use of temporary files larger than this number of kilobytes
-* [maintenance_work_mem](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM) - Sets the maximum memory to be used for maintenance operations
-* [max_parallel_workers](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS) - Sets the maximum number of parallel workers that can be active at one time
-* [max_parallel_workers_per_gather](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS-PER-GATHER) - Sets the maximum number of parallel processes per executor node
-* [max_pred_locks_per_page](https://www.postgresql.org/docs/current/runtime-config-locks.html#GUC-MAX-PRED-LOCKS-PER-PAGE) - Sets the maximum number of predicate-locked tuples per page
-* [max_pred_locks_per_relation](https://www.postgresql.org/docs/current/runtime-config-locks.html#GUC-MAX-PRED-LOCKS-PER-RELATION) - Sets the maximum number of predicate-locked pages and tuples per relation
-* [max_standby_archive_delay](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-STANDBY-ARCHIVE-DELAY) - Sets the maximum delay before canceling queries when a hot standby server is processing archived WAL data
-* [max_standby_streaming_delay](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-STANDBY-STREAMING-DELAY) - Sets the maximum delay before canceling queries when a hot standby server is processing streamed WAL data
-* [max_sync_workers_per_subscription](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-SYNC-WORKERS-PER-SUBSCRIPTION) - Maximum number of table synchronization workers per subscription
-* [max_wal_size](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-MAX-WAL-SIZE) - Sets the WAL size that triggers a checkpoint
-* [min_parallel_index_scan_size](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-MIN-PARALLEL-INDEX-SCAN-SIZE) - Sets the minimum amount of index data for a parallel scan
-* [min_wal_size](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-MIN-WAL-SIZE) - Sets the minimum size to shrink the WAL to
-* [operator_precedence_warning](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-OPERATOR-PRECEDENCE-WARNING) - Emits a warning for constructs that changed meaning since PostgreSQL 9.4
-* [parallel_setup_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-PARALLEL-SETUP-COST) - Sets the planner's estimate of the cost of starting up worker processes for parallel query
-* [parallel_tuple_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-PARALLEL-TUPLE-COST) - Sets the planner's estimate of the cost of passing each tuple (row) from worker to master backend
-* [pg_stat_statements.save](https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.38.8) - Saves pg_stat_statements statistics across server shutdowns
-* [pg_stat_statements.track](https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.38.8) - Selects which statements are tracked by pg_stat_statements
-* [pg_stat_statements.track_utility](https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.38.8) - Selects whether utility commands are tracked by pg_stat_statements
-* [quote_all_identifiers](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-QUOTE-ALL-IDENTIFIERS) - When generating SQL fragments, quotes all identifiers
-* [random_page_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-RANDOM-PAGE-COST) - Sets the planner's estimate of the cost of a nonsequentially fetched disk page
-* [row_security](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-ROW-SECURITY) - Enables row security
-* [search_path](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SEARCH-PATH) - Sets the schema search order for names that aren't schema-qualified
-* [seq_page_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-SEQ-PAGE-COST) - Sets the planner's estimate of the cost of a sequentially fetched disk page
-* [session_replication_role](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SESSION-REPLICATION-ROLE) - Sets the session's behavior for triggers and rewrite rules
-* [standard_conforming_strings](https://www.postgresql.org/docs/current/runtime-config-compatible.html#id-1.6.6.16.2.2.7.1.3) - Causes '...' strings to treat backslashes literally
-* [statement_timeout](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-STATEMENT-TIMEOUT) - Sets the maximum allowed duration (in milliseconds) of any statement. 0 turns this off
-* [synchronize_seqscans](https://www.postgresql.org/docs/current/runtime-config-compatible.html#id-1.6.6.16.2.2.8.1.3) - Enables synchronized sequential scans
-* [synchronous_commit](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT) - Sets the current transaction's synchronization level
-* [tcp_keepalives_count](https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-TCP-KEEPALIVES-COUNT) - Maximum number of TCP keepalive retransmits
-* [tcp_keepalives_idle](https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-TCP-KEEPALIVES-IDLE) - Time between issuing TCP keepalives
-* [tcp_keepalives_interval](https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-TCP-KEEPALIVES-INTERVAL) - Time between TCP keepalive retransmits
-* [temp_buffers](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-TEMP-BUFFERS) - Sets the maximum number of temporary buffers used by each database session
-* [temp_tablespaces](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-TEMP-TABLESPACES) - Sets the tablespace(s) to use for temporary tables and sort files
-* [track_activities](https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-ACTIVITIES) - Collects information about executing commands
-* [track_counts](https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-COUNTS) - Collects statistics on database activity
-* [track_functions](https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-FUNCTIONS) - Collects function-level statistics on database activity
-* [track_io_timing](https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-IO-TIMING) - Collects timing statistics for database I/O activity
-* [transform_null_equals](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-TRANSFORM-NULL-EQUALS) - Treats "expr=NULL" as "expr IS NULL"
-* [vacuum_cost_delay](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-DELAY) - Vacuum cost delay in milliseconds
-* [vacuum_cost_limit](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-LIMIT) - Vacuum cost amount available before napping
-* [vacuum_cost_page_dirty](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-PAGE-DIRTY) - Vacuum cost for a page dirtied by vacuum
-* [vacuum_cost_page_hit](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-PAGE-HIT) - Vacuum cost for a page found in the buffer cache
-* [vacuum_cost_page_miss](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-PAGE-MISS) - Vacuum cost for a page not found in the buffer cache
-* [vacuum_defer_cleanup_age](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-VACUUM-DEFER-CLEANUP-AGE) - Number of transactions by which VACUUM and HOT cleanup should be deferred, if any
-* [vacuum_freeze_min_age](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-VACUUM-FREEZE-MIN-AGE) - Minimum age at which VACUUM should freeze a table row
-* [vacuum_freeze_table_age](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-VACUUM-FREEZE-TABLE-AGE) - Age at which VACUUM should scan whole table to freeze tuples
-* [vacuum_multixact_freeze_min_age](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-VACUUM-MULTIXACT-FREEZE-MIN-AGE) - Minimum age at which VACUUM should freeze a MultiXactId in a table row
-* [vacuum_multixact_freeze_table_age](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-VACUUM-MULTIXACT-FREEZE-TABLE-AGE) - Multixact age at which VACUUM should scan whole table to freeze tuples
-* [wal_receiver_status_interval](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-WAL-RECEIVER-STATUS-INTERVAL) - Sets the maximum interval between WAL receiver status reports to the primary
-* [wal_writer_delay](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-WAL-WRITER-DELAY) - Time between WAL flushes performed in the WAL writer
-* [wal_writer_flush_after](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-WAL-WRITER-FLUSH-AFTER) - Amount of WAL written out by WAL writer that triggers a flush
-* [work_mem](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-WORK-MEM) - Sets the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files
-* [xmlbinary](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-XMLBINARY) - Sets how binary values are to be encoded in XML
-* [xmloption](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-XMLOPTION) - Sets whether XML data in implicit parsing and serialization operations is to be considered as documents or content fragments
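If you want to check how any of these parameters is currently set, one option is to query the standard `pg_settings` view from a client connection; the parameter names below are only examples:

```sql
-- inspect the current value, unit, and allowed range of a few parameters
SELECT name, setting, unit, min_val, max_val
FROM pg_settings
WHERE name IN ('work_mem', 'max_parallel_workers', 'statement_timeout');
```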
-
-## Next steps
-
-* Besides server parameters, another form of configuration is the resource [configuration options](resources-compute.md) in a Hyperscale (Citus) server group.
-* The underlying PostgreSQL database also has [configuration parameters](http://www.postgresql.org/docs/current/static/runtime-config.html).
postgresql Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-versions.md
- Title: Supported versions – Hyperscale (Citus) - Azure Database for PostgreSQL
-description: PostgreSQL versions available in Azure Database for PostgreSQL - Hyperscale (Citus)
----- Previously updated : 06/28/2021--
-# Supported database versions in Azure Database for PostgreSQL – Hyperscale (Citus)
--
-## PostgreSQL versions
-
-The version of PostgreSQL running in a Hyperscale (Citus) server group is
-customizable during creation. Hyperscale (Citus) currently supports the
-following major [PostgreSQL
-versions](https://www.postgresql.org/docs/release/):
-
-### PostgreSQL version 14
-
-The current minor release is 14.4. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/14/release-14-1.html) to
-learn more about improvements and fixes in this minor release.
-
-### PostgreSQL version 13
-
-The current minor release is 13.7. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/13/release-13-5.html) to
-learn more about improvements and fixes in this minor release.
-
-### PostgreSQL version 12
-
-The current minor release is 12.11. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/12/release-12-9.html) to
-learn more about improvements and fixes in this minor release.
-
-### PostgreSQL version 11
-
-The current minor release is 11.16. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/11/release-11-14.html) to
-learn more about improvements and fixes in this minor release.
-
-### PostgreSQL version 10 and older
-
-We don't support PostgreSQL version 10 and older for Azure Database for
-PostgreSQL - Hyperscale (Citus).
-
-## Citus and other extension versions
-
-Depending on which version of PostgreSQL is running in a server group,
-different [versions of PostgreSQL extensions](reference-extensions.md)
-will be installed as well. In particular, PostgreSQL 14 comes with Citus 11, PostgreSQL versions 12 and 13 come with
-Citus 10, and earlier PostgreSQL versions come with Citus 9.5.
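To confirm the exact versions running in your own server group, you can connect with psql and run a quick check such as the following (the `citus.version` setting is exposed by the Citus extension):

```sql
-- report the PostgreSQL server version and the Citus extension version
SELECT version();
SHOW citus.version;
```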
-
-## Next steps
-
-* See which [extensions](reference-extensions.md) are installed in
- which versions.
-* Learn to [create a Hyperscale (Citus) server
- group](quickstart-create-portal.md).
postgresql Resources Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-compute.md
- Title: Compute and storage – Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Options for a Hyperscale (Citus) server group, including node compute and storage
----- Previously updated : 07/08/2022--
-# Azure Database for PostgreSQL – Hyperscale (Citus) compute and storage
-
-
-You can select the compute and storage settings independently for
-worker nodes and the coordinator node in a Hyperscale (Citus) server
-group. Compute resources are provided as vCores, which represent
-the logical CPU of the underlying hardware. The storage size for
-provisioning refers to the capacity available to the coordinator
-and worker nodes in your Hyperscale (Citus) server group. The storage
-includes database files, temporary files, transaction logs, and
-the Postgres server logs.
-
-## Standard tier
-
-| Resource | Worker node | Coordinator node |
-|--|--|--|
-| Compute, vCores | 4, 8, 16, 32, 64, 96, 104 | 4, 8, 16, 32, 64, 96 |
-| Memory per vCore, GiB | 8 | 4 |
-| Storage size, TiB | 0.5, 1, 2 | 0.5, 1, 2 |
-| Storage type | General purpose (SSD) | General purpose (SSD) |
-| IOPS | Up to 3 IOPS/GiB | Up to 3 IOPS/GiB |
-
-The total amount of RAM in a single Hyperscale (Citus) node is based on the
-selected number of vCores.
-
-| vCores | One worker node, GiB RAM | Coordinator node, GiB RAM |
-|--|--||
-| 4 | 32 | 16 |
-| 8 | 64 | 32 |
-| 16 | 128 | 64 |
-| 32 | 256 | 128 |
-| 64 | 432 or 512 | 256 |
-| 96 | 672 | 384 |
-| 104 | 672 | n/a |
-
-The total amount of storage you provision also defines the I/O capacity
-available to each worker and coordinator node.
-
-| Storage size, TiB | Maximum IOPS |
-|-|--|
-| 0.5 | 1,536 |
-| 1 | 3,072 |
-| 2 | 6,148 |
-
-For the entire Hyperscale (Citus) cluster, the aggregated IOPS work out to the
-following values:
-
-| Worker nodes | 0.5 TiB, total IOPS | 1 TiB, total IOPS | 2 TiB, total IOPS |
-|--|--|--|--|
-| 2 | 3,072 | 6,144 | 12,296 |
-| 3 | 4,608 | 9,216 | 18,444 |
-| 4 | 6,144 | 12,288 | 24,592 |
-| 5 | 7,680 | 15,360 | 30,740 |
-| 6 | 9,216 | 18,432 | 36,888 |
-| 7 | 10,752 | 21,504 | 43,036 |
-| 8 | 12,288 | 24,576 | 49,184 |
-| 9 | 13,824 | 27,648 | 55,332 |
-| 10 | 15,360 | 30,720 | 61,480 |
-| 11 | 16,896 | 33,792 | 67,628 |
-| 12 | 18,432 | 36,864 | 73,776 |
-| 13 | 19,968 | 39,936 | 79,924 |
-| 14 | 21,504 | 43,008 | 86,072 |
-| 15 | 23,040 | 46,080 | 92,220 |
-| 16 | 24,576 | 49,152 | 98,368 |
-| 17 | 26,112 | 52,224 | 104,516 |
-| 18 | 27,648 | 55,296 | 110,664 |
-| 19 | 29,184 | 58,368 | 116,812 |
-| 20 | 30,720 | 61,440 | 122,960 |
-
-## Basic tier
-
-The Hyperscale (Citus) [basic tier](concepts-tiers.md) is a server
-group with just one node. Because there isn't a distinction between
-coordinator and worker nodes, it's less complicated to choose compute and
-storage resources.
-
-| Resource | Available options |
-|--|--|
-| Compute, vCores | 2, 4, 8, 16, 32, 64 |
-| Memory per vCore, GiB | 4 |
-| Storage size, GiB | 128, 256, 512 |
-| Storage type | General purpose (SSD) |
-| IOPS | Up to 3 IOPS/GiB |
-
-The total amount of RAM in a single Hyperscale (Citus) node is based on the
-selected number of vCores.
-
-| vCores | GiB RAM |
-|--|--|
-| 2 | 8 |
-| 4 | 16 |
-| 8 | 32 |
-| 16 | 64 |
-| 32 | 128 |
-| 64 | 256 |
-
-The total amount of storage you provision also defines the I/O capacity
-available to the basic tier node.
-
-| Storage size, GiB | Maximum IOPS |
-|-|--|
-| 128 | 384 |
-| 256 | 768 |
-| 512 | 1,536 |
-
-## Next steps
-
-* Learn how to [create a Hyperscale (Citus) server group in the portal](quickstart-create-portal.md)
-* Change [compute quotas](howto-compute-quota.md) for a subscription and region
postgresql Resources Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-pricing.md
- Title: Pricing – Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Pricing and how to save with Hyperscale (Citus)
----- Previously updated : 02/23/2022--
-# Pricing for Azure Database for PostgreSQL – Hyperscale (Citus)
--
-## General pricing
-
-For the most up-to-date pricing information, see the service
-[pricing page](https://azure.microsoft.com/pricing/details/postgresql/).
-To see the cost for the configuration you want, the
-[Azure portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer)
-shows the monthly cost on the **Configure** tab based on the options you
-select. If you don't have an Azure subscription, you can use the Azure pricing
-calculator to get an estimated price. On the
-[Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/)
-website, select **Add items**, expand the **Databases** category, and choose
-**Azure Database for PostgreSQL – Hyperscale (Citus)** to customize the
-options.
-
-## Prepay for compute resources with reserved capacity
-
-Azure Database for PostgreSQL – Hyperscale (Citus) helps you save money by letting you prepay for compute resources at a discount compared to pay-as-you-go prices. With Hyperscale (Citus) reserved capacity, you make an upfront commitment on a Hyperscale (Citus) server group for a one- or three-year period to get a significant discount on the compute costs. To purchase Hyperscale (Citus) reserved capacity, you specify the Azure region, reservation term, and billing frequency.
-
-> [!IMPORTANT]
-> This article is about reserved capacity for Azure Database for PostgreSQL – Hyperscale (Citus). For information about reserved capacity for Azure Database for PostgreSQL – Single Server, see [Prepay for Azure Database for PostgreSQL – Single Server compute resources with reserved capacity](../concept-reserved-pricing.md).
-
-You don't need to assign the reservation to specific Hyperscale (Citus) server groups. An already running Hyperscale (Citus) server group or ones that are newly deployed automatically get the benefit of reserved pricing. By purchasing a reservation, you're prepaying for the compute costs for one year or three years. As soon as you buy a reservation, the Hyperscale (Citus) compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates.
-
-A reservation doesn't cover software, networking, or storage charges associated with the Hyperscale (Citus) server groups. At the end of the reservation term, the billing benefit expires, and the Hyperscale (Citus) server groups are billed at the pay-as-you-go price. Reservations don't autorenew. For pricing information, see the [Azure Database for PostgreSQL – Hyperscale (Citus) reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/hyperscale-citus/).
-
-You can buy Hyperscale (Citus) reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity:
-
-* You must be in the owner role for at least one Enterprise Agreement (EA) or individual subscription with pay-as-you-go rates.
-* For Enterprise Agreement subscriptions, **Add Reserved Instances** must be enabled in the [EA Portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an Enterprise Agreement admin on the subscription.
-* For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Hyperscale (Citus) reserved capacity.
-
-For information on how Enterprise Agreement customers and pay-as-you-go customers are charged for reservation purchases, see:
-- [Understand Azure reservation usage for your Enterprise Agreement enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
-- [Understand Azure reservation usage for your pay-as-you-go subscription](../../cost-management-billing/reservations/understand-reserved-instance-usage.md)
-### Determine the right server group size before purchase
-
-The size of the reservation is based on the total amount of compute used by the existing or soon-to-be-deployed coordinator and worker nodes in Hyperscale (Citus) server groups within a specific region.
-
-For example, let's suppose you're running one Hyperscale (Citus) server group with a 16 vCore coordinator node and three 8 vCore worker nodes. Further, let's assume that within the next month you plan to deploy an additional Hyperscale (Citus) server group with a 32 vCore coordinator node and two 4 vCore worker nodes. Let's also suppose you need these resources for at least one year.
-
-In this case, purchase a one-year reservation for:
-
-* Total 16 vCores + 32 vCores = 48 vCores for coordinator nodes
-* Total 3 nodes x 8 vCores + 2 nodes x 4 vCores = 24 + 8 = 32 vCores for worker nodes
-
-### Buy Azure Database for PostgreSQL reserved capacity
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Select **All services** > **Reservations**.
-1. Select **Add**. In the **Purchase reservations** pane, select **Azure Database for PostgreSQL** to purchase a new reservation for your PostgreSQL databases.
-1. Select the **Hyperscale (Citus) Compute** type to purchase, and click **Select**.
-1. Review the quantity for the selected compute type on the **Products** tab.
-1. Continue to the **Buy + Review** tab to finish your purchase.
-
-The following table describes required fields.
-
-| Field | Description |
-|--|--|
-| Subscription | The subscription used to pay for the Azure Database for PostgreSQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for PostgreSQL reserved capacity reservation. The subscription type must be an Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an Enterprise Agreement subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription. |
-| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select **Shared**, the vCore reservation discount is applied to Hyperscale (Citus) server groups running in any subscriptions within your billing context. For Enterprise Agreement customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For pay-as-you-go customers, the shared scope is all pay-as-you-go subscriptions created by the account administrator. If you select **Management group**, the reservation discount is applied to Hyperscale (Citus) server groups running in any subscriptions that are a part of both the management group and billing scope. If you select **Single subscription**, the vCore reservation discount is applied to Hyperscale (Citus) server groups in this subscription. If you select **Single resource group**, the reservation discount is applied to Hyperscale (Citus) server groups in the selected subscription and the selected resource group within that subscription. |
-| Region | The Azure region that's covered by the Azure Database for PostgreSQL – Hyperscale (Citus) reserved capacity reservation. |
-| Term | One year or three years. |
-| Quantity | The amount of compute resources being purchased within the Hyperscale (Citus) reserved capacity reservation. In particular, the number of coordinator or worker node vCores in the selected Azure region that are being reserved and which will get the billing discount. For example, if you're running (or plan to run) Hyperscale (Citus) server groups with the total compute capacity of 64 coordinator node vCores and 32 worker node vCores in the East US region, specify the quantity as 64 and 32 for coordinator and worker nodes, respectively, to maximize the benefit for all servers. |
---
-### Cancel, exchange, or refund reservations
-
-You can cancel, exchange, or refund reservations with certain limitations. For more information, see [Self-service exchanges and refunds for Azure reservations](../../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
-
-### vCore size flexibility
-
-vCore size flexibility helps you scale up or down coordinator and worker nodes within a region, without losing the reserved capacity benefit.
-
-### Need help? Contact us
-
-If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-
-## Next steps
-
-The vCore reservation discount is applied automatically to the number of Hyperscale (Citus) server groups that match the Azure Database for PostgreSQL reserved capacity reservation scope and attributes. You can update the scope of the Azure Database for PostgreSQL – Hyperscale (Citus) reserved capacity reservation through the Azure portal, PowerShell, the Azure CLI, or the API.
-
-To learn more about Azure reservations, see the following articles:
-
-* [What are Azure reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md)
-* [Manage Azure reservations](../../cost-management-billing/reservations/manage-reserved-vm-instance.md)
-* [Understand reservation usage for your Enterprise Agreement enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
postgresql Resources Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-regions.md
- Title: Regional availability – Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Where you can run a Hyperscale (Citus) server group
------ Previously updated : 06/21/2022--
-# Regional availability for Azure Database for PostgreSQL – Hyperscale (Citus)
--
-Hyperscale (Citus) server groups are available in the following Azure regions:
-
-* Americas:
- * Brazil South
- * Canada Central
- * Central US
- * East US
- * East US 2
- * North Central US
- * South Central US
- * West Central US
- * West US
- * West US 2
- * West US 3
-* Asia Pacific:
- * Australia East
- * Central India
- * East Asia
- * Japan East
- * Japan West
- * Korea Central
- * Southeast Asia
-* Europe:
- * France Central
- * Germany West Central
- * North Europe
- * Switzerland North
- * UK South
- * West Europe
-
-Some of these regions may not be initially activated on all Azure
-subscriptions. If you want to use a region from the list above and don't see it
-in your subscription, or if you want to use a region not on this list, open a
-[support
-request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-
-## Next steps
-
-Learn how to [create a Hyperscale (Citus) server group in the portal](quickstart-create-portal.md).
postgresql Tutorial Design Database Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-design-database-multi-tenant.md
- Title: Multi-tenant database - Azure PostgreSQL Hyperscale (Citus)
-description: Learn how to design a scalable multi-tenant application with Azure Database for PostgreSQL Hyperscale (Citus).
------ Previously updated : 06/29/2022
-#Customer intent: As a developer, I want to design a hyperscale database so that my multi-tenant application runs efficiently for all tenants.
--
-# Design a multi-tenant database using Azure Database for PostgreSQL – Hyperscale (Citus)
--
-In this tutorial, you use Azure Database for PostgreSQL - Hyperscale (Citus) to learn how to:
-
-> [!div class="checklist"]
-> * Create a Hyperscale (Citus) server group
-> * Use psql utility to create a schema
-> * Shard tables across nodes
-> * Ingest sample data
-> * Query tenant data
-> * Share data between tenants
-> * Customize the schema per-tenant
-
-## Prerequisites
--
-## Use psql utility to create a schema
-
-Once you're connected to Azure Database for PostgreSQL - Hyperscale (Citus) with psql, you can complete some basic tasks. This tutorial walks you through creating a web app that allows advertisers to track their campaigns.
-
-Multiple companies can use the app, so let's create a table to hold companies and another for their campaigns. In the psql console, run these commands:
-
-```sql
-CREATE TABLE companies (
- id bigserial PRIMARY KEY,
- name text NOT NULL,
- image_url text,
- created_at timestamp without time zone NOT NULL,
- updated_at timestamp without time zone NOT NULL
-);
-
-CREATE TABLE campaigns (
- id bigserial,
- company_id bigint REFERENCES companies (id),
- name text NOT NULL,
- cost_model text NOT NULL,
- state text NOT NULL,
- monthly_budget bigint,
- blacklisted_site_urls text[],
- created_at timestamp without time zone NOT NULL,
- updated_at timestamp without time zone NOT NULL,
-
- PRIMARY KEY (company_id, id)
-);
-```
-
->[!NOTE]
-> This article contains references to the term *blacklisted*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-
-Each campaign will pay to run ads. Add a table for ads too, by running the following code in psql after the code above:
-
-```sql
-CREATE TABLE ads (
- id bigserial,
- company_id bigint,
- campaign_id bigint,
- name text NOT NULL,
- image_url text,
- target_url text,
- impressions_count bigint DEFAULT 0,
- clicks_count bigint DEFAULT 0,
- created_at timestamp without time zone NOT NULL,
- updated_at timestamp without time zone NOT NULL,
-
- PRIMARY KEY (company_id, id),
- FOREIGN KEY (company_id, campaign_id)
- REFERENCES campaigns (company_id, id)
-);
-```
-
-Finally, we'll track statistics about clicks and impressions for each ad:
-
-```sql
-CREATE TABLE clicks (
- id bigserial,
- company_id bigint,
- ad_id bigint,
- clicked_at timestamp without time zone NOT NULL,
- site_url text NOT NULL,
- cost_per_click_usd numeric(20,10),
- user_ip inet NOT NULL,
- user_data jsonb NOT NULL,
-
- PRIMARY KEY (company_id, id),
- FOREIGN KEY (company_id, ad_id)
- REFERENCES ads (company_id, id)
-);
-
-CREATE TABLE impressions (
- id bigserial,
- company_id bigint,
- ad_id bigint,
- seen_at timestamp without time zone NOT NULL,
- site_url text NOT NULL,
- cost_per_impression_usd numeric(20,10),
- user_ip inet NOT NULL,
- user_data jsonb NOT NULL,
-
- PRIMARY KEY (company_id, id),
- FOREIGN KEY (company_id, ad_id)
- REFERENCES ads (company_id, id)
-);
-```
-
-You can see the newly created tables in the list of tables now in psql by running:
-
-```postgres
-\dt
-```
-
-Multi-tenant applications can enforce uniqueness only per tenant,
-which is why all primary and foreign keys include the company ID.
-
-## Shard tables across nodes
-
-A hyperscale deployment stores table rows on different nodes based on the value of a user-designated column. This "distribution column" marks which tenant owns which rows.
-
-Let's set the distribution column to be company\_id, the tenant
-identifier. In psql, run these functions:
-
-```sql
-SELECT create_distributed_table('companies', 'id');
-SELECT create_distributed_table('campaigns', 'company_id');
-SELECT create_distributed_table('ads', 'company_id');
-SELECT create_distributed_table('clicks', 'company_id');
-SELECT create_distributed_table('impressions', 'company_id');
-```
--
-## Ingest sample data
-
-Outside of psql now, in the normal command line, download sample data sets:
-
-```bash
-for dataset in companies campaigns ads clicks impressions geo_ips; do
- curl -O https://examples.citusdata.com/mt_ref_arch/${dataset}.csv
-done
-```
-
-Back inside psql, bulk load the data. Be sure to run psql in the same directory where you downloaded the data files.
-
-```sql
-SET CLIENT_ENCODING TO 'utf8';
-
-\copy companies from 'companies.csv' with csv
-\copy campaigns from 'campaigns.csv' with csv
-\copy ads from 'ads.csv' with csv
-\copy clicks from 'clicks.csv' with csv
-\copy impressions from 'impressions.csv' with csv
-```
-
-This data will now be spread across worker nodes.
-
-## Query tenant data
-
-When the application requests data for a single tenant, the database
-can execute the query on a single worker node. Single-tenant queries
-filter by a single tenant ID. For example, the following query
-filters `company_id = 5` for ads and impressions. Try running it in
-psql to see the results.
-
-```sql
-SELECT a.campaign_id,
- RANK() OVER (
- PARTITION BY a.campaign_id
- ORDER BY a.campaign_id, count(*) desc
- ), count(*) as n_impressions, a.id
- FROM ads as a
- JOIN impressions as i
- ON i.company_id = a.company_id
- AND i.ad_id = a.id
- WHERE a.company_id = 5
-GROUP BY a.campaign_id, a.id
-ORDER BY a.campaign_id, n_impressions desc;
-```
-
-## Share data between tenants
-
-Until now all tables have been distributed by `company_id`. However,
-some data doesn't naturally "belong" to any tenant in particular,
-and can be shared. For instance, all companies in the example ad
-platform might want to get geographical information for their
-audience based on IP addresses.
-
-Create a table to hold shared geographic information. Run the following commands in psql:
-
-```sql
-CREATE TABLE geo_ips (
- addrs cidr NOT NULL PRIMARY KEY,
- latlon point NOT NULL
- CHECK (-90 <= latlon[0] AND latlon[0] <= 90 AND
- -180 <= latlon[1] AND latlon[1] <= 180)
-);
-CREATE INDEX ON geo_ips USING gist (addrs inet_ops);
-```
-
-Next make `geo_ips` a "reference table" to store a copy of the
-table on every worker node.
-
-```sql
-SELECT create_reference_table('geo_ips');
-```
-
-Load it with example data. Remember to run this command in psql from inside the directory where you downloaded the dataset.
-
-```sql
-\copy geo_ips from 'geo_ips.csv' with csv
-```
-
-Joining the clicks table with geo\_ips is efficient on all nodes.
-Here's a join to find the locations of everyone who clicked on ad
-290. Try running the query in psql.
-
-```sql
-SELECT c.id, clicked_at, latlon
- FROM geo_ips, clicks c
- WHERE addrs >> c.user_ip
- AND c.company_id = 5
- AND c.ad_id = 290;
-```
-
-## Customize the schema per-tenant
-
-Each tenant may need to store special information not needed by
-others. However, all tenants share a common infrastructure with
-an identical database schema. Where can the extra data go?
-
-One trick is to use an open-ended column type like PostgreSQL's
-JSONB. Our schema has a JSONB field in `clicks` called `user_data`.
-A company (say company five), can use the column to track whether
-the user is on a mobile device.
-
-Here's a query to find who clicks more: mobile, or traditional
-visitors.
-
-```sql
-SELECT
- user_data->>'is_mobile' AS is_mobile,
- count(*) AS count
-FROM clicks
-WHERE company_id = 5
-GROUP BY user_data->>'is_mobile'
-ORDER BY count DESC;
-```
-
-We can optimize this query for a single company by creating a
-[partial
-index](https://www.postgresql.org/docs/current/static/indexes-partial.html).
-
-```sql
-CREATE INDEX click_user_data_is_mobile
-ON clicks ((user_data->>'is_mobile'))
-WHERE company_id = 5;
-```
-
-More generally, we can create a [GIN
-index](https://www.postgresql.org/docs/current/static/gin-intro.html) on
-every key and value within the column.
-
-```sql
-CREATE INDEX click_user_data
-ON clicks USING gin (user_data);
--- this speeds up queries like, "which clicks have the is_mobile key present in user_data?"
-SELECT id
- FROM clicks
- WHERE user_data ? 'is_mobile'
- AND company_id = 5;
-```
-
-## Clean up resources
-
-In the preceding steps, you created Azure resources in a server group. If you don't expect to need these resources in the future, delete the server group. Select the *Delete* button in the *Overview* page for your server group. When prompted on a pop-up page, confirm the name of the server group and select the final *Delete* button.
-
-## Next steps
-
-In this tutorial, you learned how to provision a Hyperscale (Citus) server group. You connected to it with psql, created a schema, and distributed data. You learned to query data both within and between tenants, and to customize the schema per tenant.
-
-- Learn about server group [node types](./concepts-nodes.md)
-- Determine the best [initial size](howto-scale-initial.md) for your server group
postgresql Tutorial Design Database Realtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-design-database-realtime.md
- Title: 'Tutorial: Design a real-time dashboard - Hyperscale (Citus) - Azure Database for PostgreSQL'
-description: This tutorial shows how to parallelize real-time dashboard queries with Azure Database for PostgreSQL Hyperscale (Citus).
------ Previously updated : 06/29/2022
-#Customer intent: As a developer, I want to parallelize queries so that I can make a real-time dashboard application.
--
-# Tutorial: Design a real-time analytics dashboard by using Azure Database for PostgreSQL – Hyperscale (Citus)
--
-In this tutorial, you use Azure Database for PostgreSQL - Hyperscale (Citus) to learn how to:
-
-> [!div class="checklist"]
-> * Create a Hyperscale (Citus) server group
-> * Use psql utility to create a schema
-> * Shard tables across nodes
-> * Generate sample data
-> * Perform rollups
-> * Query raw and aggregated data
-> * Expire data
-
-## Prerequisites
--
-## Use psql utility to create a schema
-
-Once you're connected to Azure Database for PostgreSQL - Hyperscale (Citus) with psql, you can complete some basic tasks. This tutorial walks you through ingesting traffic data from web analytics, then rolling up the data to provide real-time dashboards based on that data.
-
-Let's create a table that will consume all of our raw web traffic data. Run the following commands in the psql terminal:
-
-```sql
-CREATE TABLE http_request (
- site_id INT,
- ingest_time TIMESTAMPTZ DEFAULT now(),
-
- url TEXT,
- request_country TEXT,
- ip_address TEXT,
-
- status_code INT,
- response_time_msec INT
-);
-```
-
-We're also going to create a table that will hold our per-minute aggregates, and a table that maintains the position of our last rollup. Run the following commands in psql as well:
-
-```sql
-CREATE TABLE http_request_1min (
- site_id INT,
- ingest_time TIMESTAMPTZ, -- which minute this row represents
-
- error_count INT,
- success_count INT,
- request_count INT,
- average_response_time_msec INT,
- CHECK (request_count = error_count + success_count),
- CHECK (ingest_time = date_trunc('minute', ingest_time))
-);
-
-CREATE INDEX http_request_1min_idx ON http_request_1min (site_id, ingest_time);
-
-CREATE TABLE latest_rollup (
- minute timestamptz PRIMARY KEY,
-
- CHECK (minute = date_trunc('minute', minute))
-);
-```
-
-You can see the newly created tables in the list of tables now with this psql command:
-
-```postgres
-\dt
-```
-
-## Shard tables across nodes
-
-A hyperscale deployment stores table rows on different nodes based on the value of a user-designated column. This "distribution column" marks how data is sharded across nodes.
-
-Let's set the distribution column to be site\_id, the shard
-key. In psql, run these functions:
-
- ```sql
-SELECT create_distributed_table('http_request', 'site_id');
-SELECT create_distributed_table('http_request_1min', 'site_id');
-```
--
-## Generate sample data
-
-Now our server group should be ready to ingest some data. We can run the
-following locally from our `psql` connection to continuously insert data.
-
-```sql
-DO $$
- BEGIN LOOP
- INSERT INTO http_request (
- site_id, ingest_time, url, request_country,
- ip_address, status_code, response_time_msec
- ) VALUES (
- trunc(random()*32), clock_timestamp(),
- concat('http://example.com/', md5(random()::text)),
- ('{China,India,USA,Indonesia}'::text[])[ceil(random()*4)],
- concat(
- trunc(random()*250 + 2), '.',
- trunc(random()*250 + 2), '.',
- trunc(random()*250 + 2), '.',
- trunc(random()*250 + 2)
- )::inet,
- ('{200,404}'::int[])[ceil(random()*2)],
- 5+trunc(random()*150)
- );
- COMMIT;
- PERFORM pg_sleep(random() * 0.25);
- END LOOP;
-END $$;
-```
-
-The query inserts approximately eight rows every second. The rows are stored on different worker nodes as directed by the distribution column, `site_id`.
-
- > [!NOTE]
- > Leave the data generation query running, and open a second psql
- > connection for the remaining commands in this tutorial.
- >
-
-## Query
-
-The hyperscale hosting option allows multiple nodes to process queries in
-parallel for speed. For instance, the database calculates aggregates like SUM
-and COUNT on worker nodes, and combines the results into a final answer.
-
-Here's a query to count web requests per minute along with a few statistics.
-Try running it in psql and observe the results.
-
-```sql
-SELECT
- site_id,
- date_trunc('minute', ingest_time) as minute,
- COUNT(1) AS request_count,
- SUM(CASE WHEN (status_code between 200 and 299) THEN 1 ELSE 0 END) as success_count,
- SUM(CASE WHEN (status_code between 200 and 299) THEN 0 ELSE 1 END) as error_count,
- SUM(response_time_msec) / COUNT(1) AS average_response_time_msec
-FROM http_request
-WHERE date_trunc('minute', ingest_time) > now() - '5 minutes'::interval
-GROUP BY site_id, minute
-ORDER BY minute ASC;
-```
-
-## Rolling up data
-
-The previous query works fine in the early stages, but its performance
-degrades as your data scales. Even with distributed processing, it's faster to pre-compute the data than to recalculate it repeatedly.
-
-We can ensure our dashboard stays fast by regularly rolling up the
-raw data into an aggregate table. You can experiment with the aggregation duration. We used a per-minute aggregation table, but you could break data into 5, 15, or 60 minutes instead.
-
-To run this roll-up more easily, we're going to put it into a plpgsql function. Run these commands in psql to create the `rollup_http_request` function.
-
-```sql
--- initialize to a time long ago
-INSERT INTO latest_rollup VALUES ('10-10-1901');
--- function to do the rollup
-CREATE OR REPLACE FUNCTION rollup_http_request() RETURNS void AS $$
-DECLARE
- curr_rollup_time timestamptz := date_trunc('minute', now());
- last_rollup_time timestamptz := minute from latest_rollup;
-BEGIN
- INSERT INTO http_request_1min (
- site_id, ingest_time, request_count,
- success_count, error_count, average_response_time_msec
- ) SELECT
- site_id,
- date_trunc('minute', ingest_time),
- COUNT(1) as request_count,
- SUM(CASE WHEN (status_code between 200 and 299) THEN 1 ELSE 0 END) as success_count,
- SUM(CASE WHEN (status_code between 200 and 299) THEN 0 ELSE 1 END) as error_count,
- SUM(response_time_msec) / COUNT(1) AS average_response_time_msec
- FROM http_request
- -- roll up only data new since last_rollup_time
- WHERE date_trunc('minute', ingest_time) <@
- tstzrange(last_rollup_time, curr_rollup_time, '(]')
- GROUP BY 1, 2;
-
- -- update the value in latest_rollup so that next time we run the
- -- rollup it will operate on data newer than curr_rollup_time
- UPDATE latest_rollup SET minute = curr_rollup_time;
-END;
-$$ LANGUAGE plpgsql;
-```
-
-With our function in place, execute it to roll up the data:
-
-```sql
-SELECT rollup_http_request();
-```
-
-And with our data in a pre-aggregated form we can query the rollup
-table to get the same report as earlier. Run the following query:
-
-```sql
-SELECT site_id, ingest_time as minute, request_count,
- success_count, error_count, average_response_time_msec
- FROM http_request_1min
- WHERE ingest_time > date_trunc('minute', now()) - '5 minutes'::interval;
- ```
-
-## Expiring old data
-
-The rollups make queries faster, but we still need to expire old data to avoid unbounded storage costs. Decide how long you'd like to keep data for each granularity, and use standard queries to delete expired data. In the following example, we decided to keep raw data for one day, and per-minute aggregations for one month:
-
-```sql
-DELETE FROM http_request WHERE ingest_time < now() - interval '1 day';
-DELETE FROM http_request_1min WHERE ingest_time < now() - interval '1 month';
-```
-
-In production, you could wrap these queries in a function and call it every minute in a cron job.
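As a rough sketch of that idea (the `run_maintenance` name is made up here, and scheduler availability depends on your environment), the rollup and the deletions could be combined into one function that a scheduler calls every minute:

```sql
-- hypothetical maintenance function: roll up new data, then expire old data
CREATE OR REPLACE FUNCTION run_maintenance() RETURNS void AS $$
BEGIN
  PERFORM rollup_http_request();
  DELETE FROM http_request      WHERE ingest_time < now() - interval '1 day';
  DELETE FROM http_request_1min WHERE ingest_time < now() - interval '1 month';
END;
$$ LANGUAGE plpgsql;

-- a cron job (or the pg_cron extension, where available) would then run:
SELECT run_maintenance();
```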
-
-## Clean up resources
-
-In the preceding steps, you created Azure resources in a server group. If you don't expect to need these resources in the future, delete the server group. Select the *Delete* button in the *Overview* page for your server group. When prompted on a pop-up page, confirm the name of the server group and select the final *Delete* button.
-
-## Next steps
-
-In this tutorial, you learned how to provision a Hyperscale (Citus) server group. You connected to it with psql, created a schema, and distributed data. You learned to query data in the raw form, regularly aggregate that data, query the aggregated tables, and expire old data.
-
-- Learn about server group [node types](./concepts-nodes.md)
-- Determine the best [initial size](howto-scale-initial.md) for your server group
postgresql Tutorial Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-private-access.md
- Title: Create server group with private access - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Connect a VM to a server group private endpoint
----- Previously updated : 01/14/2022--
-# Create server group with private access in Azure Database for PostgreSQL - Hyperscale (Citus)
--
-This tutorial creates a virtual machine and a Hyperscale (Citus) server group,
-and establishes [private access](concepts-private-access.md) between
-them.
-
-## Create a virtual network
-
-First, we'll set up a resource group and virtual network. It will hold our
-server group and virtual machine.
-
-```sh
-az group create \
- --name link-demo \
- --location eastus
-
-az network vnet create \
- --resource-group link-demo \
- --name link-demo-net \
- --address-prefix 10.0.0.0/16
-
-az network nsg create \
- --resource-group link-demo \
- --name link-demo-nsg
-
-az network vnet subnet create \
- --resource-group link-demo \
- --vnet-name link-demo-net \
- --name link-demo-subnet \
- --address-prefixes 10.0.1.0/24 \
- --network-security-group link-demo-nsg
-```
-
-## Create a virtual machine
-
-For demonstration, we'll use a virtual machine running Debian Linux, and the
-`psql` PostgreSQL client.
-
-```sh
-# provision the VM
-
-az vm create \
- --resource-group link-demo \
- --name link-demo-vm \
- --vnet-name link-demo-net \
- --subnet link-demo-subnet \
- --nsg link-demo-nsg \
- --public-ip-address link-demo-net-ip \
- --image debian \
- --admin-username azureuser \
- --generate-ssh-keys
-
-# install psql database client
-
-az vm run-command invoke \
- --resource-group link-demo \
- --name link-demo-vm \
- --command-id RunShellScript \
- --scripts \
- "sudo touch /home/azureuser/.hushlogin" \
- "sudo DEBIAN_FRONTEND=noninteractive apt-get update" \
- "sudo DEBIAN_FRONTEND=noninteractive apt-get install -q -y postgresql-client"
-```
-
-## Create a server group with a private link
-
-1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
-
-2. Select **Databases** from the **New** page, and select **Azure Database for
- PostgreSQL** from the **Databases** page.
-
-3. For the deployment option, select the **Create** button under **Hyperscale
- (Citus) server group**.
-
-4. Fill out the new server details form with the following information:
-
- - **Resource group**: `link-demo`
- - **Server group name**: `link-demo-sg`
- - **Location**: `East US`
- - **Password**: (your choice)
-
- > [!NOTE]
- >
- > The server group name must be globally unique across Azure because it
- > creates a DNS entry. If `link-demo-sg` is unavailable, please choose
- > another name and adjust the steps below accordingly.
-
-5. Select **Configure server group**, choose the **Basic** plan, and select
- **Save**.
-
-6. Select **Next: Networking** at the bottom of the page.
-
-7. Select **Private access**.
-
-8. A screen appears called **Create private endpoint**. Enter these values and
- select **OK**:
-
- - **Resource group**: `link-demo`
- - **Location**: `(US) East US`
- - **Name**: `link-demo-sg-c-pe1`
- - **Target sub-resource**: `coordinator`
- - **Virtual network**: `link-demo-net`
- - **Subnet**: `link-demo-subnet`
- - **Integrate with private DNS zone**: Yes
-
-9. After creating the private endpoint, select **Review + create** to create
- your Hyperscale (Citus) server group.
-
-## Access the server group privately from the virtual machine
-
-The private link allows our virtual machine to connect to our server group,
-and prevents external hosts from doing so. In this step, we'll check that
-the `psql` database client on our virtual machine can communicate with the
-coordinator node of the server group.
-
-```sh
-# save db URI
-
-#
-# obtained from Settings -> Connection Strings in the Azure portal
-
-#
-# replace {your_password} in the string with your actual password
-
-PG_URI='host=c.link-demo-sg.postgres.database.azure.com port=5432 dbname=citus user=citus password={your_password} sslmode=require'
-
-# attempt to connect to server group with psql in the virtual machine
-
-az vm run-command invoke \
- --resource-group link-demo \
- --name link-demo-vm \
- --command-id RunShellScript \
- --scripts "psql '$PG_URI' -c 'SHOW citus.version;'" \
- --query 'value[0].message' \
- | xargs printf
-```
-
-You should see a version number for Citus in the output. If you do, then psql
-was able to execute the command, and the private link worked.
-
-## Clean up resources
-
-We've seen how to create a private link between a virtual machine and a
-Hyperscale (Citus) server group. Now we can deprovision the resources.
-
-Delete the resource group, and the resources inside will be deprovisioned:
-
-```sh
-az group delete --resource-group link-demo
-
-# press y to confirm
-
-```
-
-## Next steps
-
-* Learn more about [private access](concepts-private-access.md)
-* Learn about [private
- endpoints](../../private-link/private-endpoint-overview.md)
-* Learn about [virtual
- networks](../../virtual-network/concepts-and-best-practices.md)
-* Learn about [private DNS zones](../../dns/private-dns-overview.md)
postgresql Tutorial Shard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-shard.md
- Title: 'Tutorial: Shard data on worker nodes - Hyperscale (Citus) - Azure Database for PostgreSQL'
-description: This tutorial shows how to create distributed tables and visualize their data distribution with Azure Database for PostgreSQL Hyperscale (Citus).
------ Previously updated : 12/16/2020--
-# Tutorial: Shard data on worker nodes in Azure Database for PostgreSQL – Hyperscale (Citus)
--
-In this tutorial, you use Azure Database for PostgreSQL - Hyperscale (Citus) to learn how to:
-
-> [!div class="checklist"]
-> * Create hash-distributed shards
-> * See where table shards are placed
-> * Identify skewed distribution
-> * Create constraints on distributed tables
-> * Run queries on distributed data
-
-## Prerequisites
-
-This tutorial requires a running Hyperscale (Citus) server group with two
-worker nodes. If you don't have a running server group, follow the [create
-server group](tutorial-server-group.md) tutorial and then come back
-to this one.
-
-## Hash-distributed data
-
-Distributing table rows across multiple PostgreSQL servers is a key technique
-for scalable queries in Hyperscale (Citus). Together, multiple nodes can hold
-more data than a traditional database, and in many cases can use worker CPUs in
-parallel to execute queries.
-
-In the prerequisites section, we created a Hyperscale (Citus) server group with
-two worker nodes.
-
-![coordinator and two workers](../tutorial-hyperscale-shard/nodes.png)
-
-The coordinator node's metadata tables track workers and distributed data. We
-can check the active workers in the
-[pg_dist_node](reference-metadata.md#worker-node-table) table.
-
-```sql
-select nodeid, nodename from pg_dist_node where isactive;
-```
-```
- nodeid | nodename
--------+------------
- 1 | 10.0.0.21
- 2 | 10.0.0.23
-```
-
-> [!NOTE]
-> Nodenames on Hyperscale (Citus) are internal IP addresses in a virtual
-> network, and the actual addresses you see may differ.
-
-### Rows, shards, and placements
-
-To use the CPU and storage resources of worker nodes, we have to distribute
-table data throughout the server group. Distributing a table assigns each row
-to a logical group called a *shard.* Let's create a table and distribute it:
-
-```sql
--- create a table on the coordinator
-create table users ( email text primary key, bday date not null );
--- distribute it into shards on workers
-select create_distributed_table('users', 'email');
-```
-
-Hyperscale (Citus) assigns each row to a shard based on the value of the
-*distribution column*, which, in our case, we specified to be `email`. Every
-row will be in exactly one shard, and every shard can contain multiple rows.
-
-![users table with rows pointing to shards](../tutorial-hyperscale-shard/table.png)
-
-By default `create_distributed_table()` makes 32 shards, as we can see by
-counting in the metadata table
-[pg_dist_shard](reference-metadata.md#shard-table):
-
-```sql
-select logicalrelid, count(shardid)
- from pg_dist_shard
- group by logicalrelid;
-```
-```
- logicalrelid | count
--------------+-------
- users | 32
-```
-
-Hyperscale (Citus) uses the `pg_dist_shard` table to assign rows to shards,
-based on a hash of the value in the distribution column. The hashing details
-are unimportant for this tutorial. What matters is that we can query to see
-which values map to which shard IDs:
-
-```sql
--- Where would a row containing hi@test.com be stored?
--- (The value doesn't have to actually be present in users;
--- the mapping is a mathematical operation consulting pg_dist_shard.)
-select get_shard_id_for_distribution_column('users', 'hi@test.com');
-```
-```
- get_shard_id_for_distribution_column
- 102008
-```
-
-The mapping of rows to shards is purely logical. Shards must be assigned to
-specific worker nodes for storage, in what Hyperscale (Citus) calls *shard
-placement*.
-
-![shards assigned to workers](../tutorial-hyperscale-shard/shard-placement.png)
-
-We can look at the shard placements in
-[pg_dist_placement](reference-metadata.md#shard-placement-table).
-Joining it with the other metadata tables we've seen shows where each shard
-lives.
-
-```sql
--- limit the output to the first five placements
-select
- shard.logicalrelid as table,
- placement.shardid as shard,
- node.nodename as host
-from
- pg_dist_placement placement,
- pg_dist_node node,
- pg_dist_shard shard
-where placement.groupid = node.groupid
- and shard.shardid = placement.shardid
-order by shard
-limit 5;
-```
-```
- table | shard | host
-------+--------+-----------
- users | 102008 | 10.0.0.21
- users | 102009 | 10.0.0.23
- users | 102010 | 10.0.0.21
- users | 102011 | 10.0.0.23
- users | 102012 | 10.0.0.21
-```
-
-### Data skew
-
-A server group runs most efficiently when you place data evenly on worker
-nodes, and when you place related data together on the same workers. In this
-section we'll focus on the first part, the uniformity of placement.
-
-To demonstrate, let's create sample data for our `users` table:
-
-```sql
--- load sample data
-insert into users
-select
- md5(random()::text) || '@test.com',
- date_trunc('day', now() - random()*'100 years'::interval)
-from generate_series(1, 1000);
-```
-
-To see shard sizes, we can run [table size
-functions](https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE)
-on the shards.
-
-```sql
--- sizes of the first five shards
-select *
-from
- run_command_on_shards('users', $cmd$
- select pg_size_pretty(pg_table_size('%1$s'));
- $cmd$)
-order by shardid
-limit 5;
-```
-```
- shardid | success | result
---------+---------+--------
- 102008 | t | 16 kB
- 102009 | t | 16 kB
- 102010 | t | 16 kB
- 102011 | t | 16 kB
- 102012 | t | 16 kB
-```
-
-We can see the shards are of equal size. We already saw that placements are
-evenly distributed among workers, so we can infer that the worker nodes hold
-roughly equal numbers of rows.
-
-The rows in our `users` example were distributed evenly because of two
-properties of the distribution column, `email`:
-
-1. The number of email addresses was greater than or equal to the number of shards.
-2. The number of rows per email address was similar (in our case, exactly one
- row per address because we declared email a key).
-
-Any choice of table and distribution column where either property fails will
-end up with uneven data size on workers, that is, *data skew*.
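One way to spot skew is to total the size of a table's shards on each worker. The following is only a sketch: it assumes the shards live in the default `public` schema and follow the `users_<shardid>` naming convention that Citus uses for shard tables.

```sql
-- approximate size of the users shards stored on each worker node
SELECT nodename, result AS users_shards_size
FROM run_command_on_workers($cmd$
  SELECT pg_size_pretty(coalesce(sum(pg_table_size(c.oid)), 0))
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE n.nspname = 'public'
    AND c.relname LIKE 'users\_%'
    AND c.relkind = 'r'
$cmd$);
```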
-
-### Add constraints to distributed data
-
-Using Hyperscale (Citus) allows you to continue to enjoy the safety of a
-relational database, including [database
-constraints](https://www.postgresql.org/docs/current/ddl-constraints.html).
-However, there's a limitation. Because of the nature of distributed systems,
-Hyperscale (Citus) won't cross-reference uniqueness constraints or referential
-integrity between worker nodes.
-
-Let's consider our `users` table example with a related table.
-
-```sql
--- books that users own
-create table books (
- owner_email text references users (email),
- isbn text not null,
- title text not null
-);
--- distribute it
-select create_distributed_table('books', 'owner_email');
-```
-
-For efficiency, we distribute `books` the same way as `users`: by the owner's
-email address. Distributing by similar column values is called
-[colocation](concepts-colocation.md).
-
-We had no problem distributing books with a foreign key to users, because the
-key was on a distribution column. However, we would have trouble making `isbn`
-a key:
-
-```sql
--- will not work
-alter table books add constraint books_isbn unique (isbn);
-```
-```
-ERROR: cannot create constraint on "books"
-DETAIL: Distributed relations cannot have UNIQUE, EXCLUDE, or
- PRIMARY KEY constraints that do not include the partition column
- (with an equality operator if EXCLUDE).
-```
-
-In a distributed table the best we can do is make columns unique modulo
-the distribution column:
-
-```sql
--- a weaker constraint is allowed
-alter table books add constraint books_isbn unique (owner_email, isbn);
-```
-
-The above constraint merely makes `isbn` unique per user. Another option is to
-make books a [reference
-table](howto-modify-distributed-tables.md#reference-tables) rather
-than a distributed table, and create a separate distributed table associating
-books with users.
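
A rough sketch of that second option follows. The table and column names here are illustrative only (in a live session you'd first drop or rename the distributed `books` table created above): `books` becomes a reference table, so `isbn` can be a true primary key, and a separate distributed table records ownership.

```sql
-- books as a reference table, replicated to every node,
-- so isbn can be a genuine primary key
create table books (
  isbn  text primary key,
  title text not null
);
select create_reference_table('books');

-- a separate distributed table associates books with users;
-- distributing it by owner_email keeps it co-located with users
create table books_users (
  owner_email text references users (email),
  isbn        text references books (isbn),
  primary key (owner_email, isbn)
);
select create_distributed_table('books_users', 'owner_email');
```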
-
-## Query distributed tables
-
-In the previous sections, we saw how distributed table rows are placed in shards
-on worker nodes. Most of the time you don't need to know how or where data is
-stored in a server group. Hyperscale (Citus) has a distributed query executor
-that automatically splits up regular SQL queries. It runs them in parallel on
-worker nodes close to the data.
-
-For instance, we can run a query to find the average age of users, treating the
-distributed `users` table like it's a normal table on the coordinator.
-
-```sql
-select avg(current_date - bday) as avg_days_old from users;
-```
-```
-    avg_days_old
----------------------
- 17926.348000000000
-```
-
-![query going to shards via coordinator](../tutorial-hyperscale-shard/query-fragments.png)
-
-Behind the scenes, the Hyperscale (Citus) executor creates a separate query for
-each shard, runs them on the workers, and combines the result. You can see it
-if you use the PostgreSQL EXPLAIN command:
-
-```sql
-explain select avg(current_date - bday) from users;
-```
-```
- QUERY PLAN
-------------------------------------------------------------------------------------
- Aggregate (cost=500.00..500.02 rows=1 width=32)
- -> Custom Scan (Citus Adaptive) (cost=0.00..0.00 rows=100000 width=16)
- Task Count: 32
- Tasks Shown: One of 32
- -> Task
- Node: host=10.0.0.21 port=5432 dbname=citus
- -> Aggregate (cost=41.75..41.76 rows=1 width=16)
- -> Seq Scan on users_102040 users (cost=0.00..22.70 rows=1270 width=4)
-```
-
-The output shows an example of an execution plan for a *query fragment* running
-on shard 102040 (the table `users_102040` on worker 10.0.0.21). The other
-fragments aren't shown because they're similar. We can see that the worker node
-scans the shard tables and applies the aggregate. The coordinator node combines
-aggregates for the final result.
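
If you want the plan for every fragment rather than a single representative task, Citus exposes a session setting for that. The snippet below is a sketch and assumes the `citus.explain_all_tasks` setting is available on your server group.

```sql
-- show the plan for every task instead of "One of 32"
set citus.explain_all_tasks = on;
explain select avg(current_date - bday) from users;
```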
-
-## Next steps
-
-In this tutorial, we created a distributed table, and learned about its shards
-and placements. We saw a challenge of using uniqueness and foreign key
-constraints, and finally saw how distributed queries work at a high level.
-
-* Read more about Hyperscale (Citus) [table types](concepts-nodes.md)
-* Get more tips on [choosing a distribution column](howto-choose-distribution-column.md)
-* Learn the benefits of [table colocation](concepts-colocation.md)
postgresql How To Migrate Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-online.md
Title: Minimal-downtime migration to Azure Database for PostgreSQL - Single Serv
description: This article describes how to perform a minimal-downtime migration of a PostgreSQL database to Azure Database for PostgreSQL - Single Server by using the Azure Database Migration Service. +
Last updated 5/6/2019
# Minimal-downtime migration to Azure Database for PostgreSQL - Single Server You can perform PostgreSQL migrations to Azure Database for PostgreSQL with minimal downtime by using the newly introduced **continuous sync capability** for the [Azure Database Migration Service](https://aka.ms/get-dms) (DMS). This functionality limits the amount of downtime that is incurred by the application.
Azure DMS performs an initial load of your on-premises to Azure Database for Pos
## Next steps - View the video [App Modernization with Microsoft Azure](https://medius.studios.ms/Embed/Video/BRK2102?sid=BRK2102), which contains a demo showing how to migrate PostgreSQL apps to Azure Database for PostgreSQL.-- See the tutorial [Migrate PostgreSQL to Azure Database for PostgreSQL online using DMS](../../dms/tutorial-postgresql-azure-postgresql-online.md).
+- See the tutorial [Migrate PostgreSQL to Azure Database for PostgreSQL online using DMS](../../dms/tutorial-postgresql-azure-postgresql-online.md).
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
The structure of the JSON is:
```bash { "properties": {
- "SourceDBServerResourceId":"subscriptions/<subscriptionid>/resourceGroups/<src_ rg_name>/providers/Microsoft.DBforPostgreSQL/servers/<source server name>",
+ "SourceDBServerResourceId":"/subscriptions/<subscriptionid>/resourceGroups/<src_ rg_name>/providers/Microsoft.DBforPostgreSQL/servers/<source server name>",
"SourceDBServerFullyQualifiedDomainName":ΓÇ»"fqdn of the source server as per the custom DNS server",
-"TargetDBServerFullyQualifiedDomainName":ΓÇ»"fqdn of the target server as per the custom DNS server"
+"TargetDBServerFullyQualifiedDomainName":ΓÇ»"fqdn of the target server as per the custom DNS server",
"SecretParameters": { "AdminCredentials":
The structure of the JSON is:
"MigrationResourceGroup": {
- "ResourceId":"subscriptions/<subscriptionid>/resourceGroups/<temp_rg_name>",
+ "ResourceId":"/subscriptions/<subscriptionid>/resourceGroups/<temp_rg_name>",
"SubnetResourceId":"/subscriptions/<subscriptionid>/resourceGroups/<rg_name>/providers/Microsoft.Network/virtualNetworks/<Vnet_name>/subnets/<subnet_name>" },
postgresql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concept-reserved-pricing.md
Title: Reserved compute pricing - Azure Database for PostgreSQL
description: Prepay for Azure Database for PostgreSQL compute resources with reserved capacity +
Azure Database for PostgreSQL now helps you save money by prepaying for compute
You don't need to assign the reservation to specific Azure Database for PostgreSQL servers. An already running Azure Database for PostgreSQL server (or ones that are newly deployed) will automatically get the benefit of reserved pricing. By purchasing a reservation, you're pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for PostgreSQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation does not cover software, networking, or storage charges associated with the PostgreSQL Database servers. At the end of the reservation term, the billing benefit expires, and the Azure Database for PostgreSQL servers are billed at the pay-as-you-go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/). </br> > [!IMPORTANT]
-> Reserved capacity pricing is available for the Azure Database for PostgreSQL in [Single server](./overview.md#azure-database-for-postgresqlsingle-server), [Flexible Server](../flexible-server/overview.md), and [Hyperscale Citus](./overview.md#azure-database-for-postgresql--hyperscale-citus) deployment options. For information about RI pricing on Hyperscale (Citus), see [this page](../hyperscale/concepts-reserved-pricing.md).
+> Reserved capacity pricing is available for the Azure Database for PostgreSQL in [Single server](./overview.md#azure-database-for-postgresqlsingle-server) and [Flexible Server](../flexible-server/overview.md) deployment options.
You can buy Azure Database for PostgreSQL reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity:
postgresql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-azure-advisor-recommendations.md
Title: Azure Advisor for PostgreSQL
description: Learn about Azure Advisor recommendations for PostgreSQL. +
Advisor recommendations are split among our PostgreSQL database offerings:
* Azure Database for PostgreSQL - Single Server * Azure Database for PostgreSQL - Flexible Server
-* Azure Database for PostgreSQL - Hyperscale (Citus)
Some recommendations are common to multiple product offerings, while other recommendations are based on product-specific optimizations. ## Where can I view my recommendations?
Recommendations are available from the **Overview** navigation sidebar in the Az
Azure Database for PostgreSQL prioritizes the following types of recommendations: * **Performance**: To improve the speed of your PostgreSQL server. This includes CPU usage, memory pressure, connection pooling, disk utilization, and product-specific server parameters. For more information, see [Advisor Performance recommendations](../../advisor/advisor-performance-recommendations.md).
-* **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limits, connection limits, and hyperscale data distribution recommendations. For more information, see [Advisor Reliability recommendations](../../advisor/advisor-high-availability-recommendations.md).
+* **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limits and connection limits. For more information, see [Advisor Reliability recommendations](../../advisor/advisor-high-availability-recommendations.md).
* **Cost**: To optimize and reduce your overall Azure spending. This includes server right-sizing recommendations. For more information, see [Advisor Cost recommendations](../../advisor/advisor-cost-recommendations.md). ## Understanding your recommendations
postgresql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-version-policy.md
Last updated 09/14/2022--+ # Azure Database for PostgreSQL versioning policy
This page describes the Azure Database for PostgreSQL versioning policy, and is
* Single Server * Flexible Server
-* Hyperscale (Citus)
## Supported PostgreSQL versions Azure Database for PostgreSQL supports the following database versions.
-| Version | Single Server | Flexible Server | Hyperscale (Citus) |
-| -- | :: | :-: | :-: |
-| PostgreSQL 14 | | X | X |
-| PostgreSQL 13 | | X | X |
-| PostgreSQL 12 | | X | X |
-| PostgreSQL 11 | X | X | X |
-| PostgreSQL 10 | X | | |
-| *PostgreSQL 9.6 (retired)* | See [policy](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql) | | |
-| *PostgreSQL 9.5 (retired)* | See [policy](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql) | | |
+| Version | Single Server | Flexible Server |
+| --- | :---: | :---: |
+| PostgreSQL 14 | | X |
+| PostgreSQL 13 | | X |
+| PostgreSQL 12 | | X |
+| PostgreSQL 11 | X | X |
+| PostgreSQL 10 | X | |
+| *PostgreSQL 9.6 (retired)* | See [policy](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql) | |
+| *PostgreSQL 9.5 (retired)* | See [policy](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql) | |
## Major version support
Azure Database for PostgreSQL automatically performs minor version upgrades to t
The table below provides the retirement details for PostgreSQL major versions. The dates follow the [PostgreSQL community versioning policy](https://www.postgresql.org/support/versioning/). | Version | What's New | Azure support start date | Retirement date (Azure)|
-| -- | -- | | -- |
+| --- | --- | --- | --- |
| [PostgreSQL 9.5 (retired)](https://www.postgresql.org/about/news/postgresql-132-126-1111-1016-9621-and-9525-released-2165/)| [Features](https://www.postgresql.org/docs/9.5/release-9-5.html) | April 18, 2018 | February 11, 2021
-| [PostgreSQL 9.6 (retired)](https://www.postgresql.org/about/news/postgresql-96-released-1703/) | [Features](https://wiki.postgresql.org/wiki/NewIn96) | April 18, 2018 | November 11, 2021
+| [PostgreSQL 9.6 (retired)](https://www.postgresql.org/about/news/postgresql-96-released-1703/) | [Features](https://wiki.postgresql.org/wiki/NewIn96) | April 18, 2018 | November 11, 2021
| [PostgreSQL 10](https://www.postgresql.org/about/news/postgresql-10-released-1786/) | [Features](https://wiki.postgresql.org/wiki/New_in_postgres_10) | June 4, 2018 | November 10, 2022
-| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | July 24, 2019 | November 9, 2024 [Single Server, Flexible Server] <br> Nov 9, 2023 [Hyperscale Citus]
+| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | July 24, 2019 | November 9, 2024 [Single Server, Flexible Server] |
| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Sept 22, 2020 | November 14, 2024 | [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | May 25, 2021 | November 13, 2025
-| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | October 1, 2021 (Hyperscale Citus) <br> June 29, 2022 (Flexible Server)| November 12, 2026
+| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | June 29, 2022 (Flexible Server)| November 12, 2026
## PostgreSQL 11 support in Single Server and Flexible Server
Before PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.pos
- See Azure Database for PostgreSQL - Single Server [supported versions](./concepts-supported-versions.md) - See Azure Database for PostgreSQL - Flexible Server [supported versions](../flexible-server/concepts-supported-versions.md)-- See Azure Database for PostgreSQL - Hyperscale (Citus) [supported versions](../hyperscale/reference-versions.md)
postgresql Overview Postgres Choose Server Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview-postgres-choose-server-options.md
-+ Last updated 06/24/2022
With Azure, your PostgreSQL Server workloads can run in a hosted virtual machine
When making your decision, consider the following three options in PaaS or alternatively running on Azure VMs (IaaS) - [Azure Database for PostgreSQL Single Server](./overview-single-server.md) - [Azure Database for PostgreSQL Flexible Server](../flexible-server/overview.md)-- [Azure Database for PostgreSQL Hyperscale (Citus)](../hyperscale/index.yml) **PostgreSQL on Azure VMs** option falls into the industry category of IaaS. With this service, you can run PostgreSQL Server inside a fully managed virtual machine on the Azure cloud platform. All recent versions and editions of PostgreSQL can be installed on an IaaS virtual machine. In the most significant difference from Azure Database for PostgreSQL, PostgreSQL on Azure VMs offers control over the database engine. However, this control comes at the cost of responsibility to manage the VMs and many database administration (DBA) tasks. These tasks include maintaining and patching database servers, database recovery, and high-availability design.
The main differences between these options are listed in the following table:
| **Attribute** | **Postgres on Azure VMs** | **PostgreSQL as PaaS** | | -- | -- | -- |
-| **Availability SLA** |- [Virtual Machine SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines) | - [Single Server, Flexible Server, and Hyperscale (Citus) SLA](https://azure.microsoft.com/support/legal/sla/postgresql)|
-| **OS and PostgreSQL patching** | - Customer managed | - Single Server ΓÇô Automatic <br> - Flexible Server ΓÇô Automatic with optional customer managed window <br> - Hyperscale (Citus) ΓÇô Automatic |
-| **High availability** | - Customers architect, implement, test, and maintain high availability. Capabilities might include clustering, replication etc. | - Single Server: built-in <br> - Flexible Server: built-in <br> - Hyperscale (Citus): built with standby |
-| **Zone Redundancy** | - Azure VMs can be set up to run in different availability zones. For an on-premises solution, customers must create, manage, and maintain their own secondary data center. | - Single Server: No <br> - Flexible Server: Yes <br> - Hyperscale (Citus): No |
-| **Hybrid Scenario** | - Customer managed |- Single Server: Read-replica <br> - Flexible Server: Not available during Preview <br> - Hyperscale (Citus): No |
-| **Backup and Restore** | - Customer Managed | - Single Server: built-in with user configuration for local and geo <br> - Flexible Server: built-in with user configuration on zone-redundant storage <br> - Hyperscale (Citus): built-in |
-| **Monitoring Database Operations** | - Customer Managed | - Single Server, Flexible Server, and Hyperscale (Citus): All offer customers the ability to set alerts on the database operation and act upon reaching thresholds. |
-| **Advanced Threat Protection** | - Customers must build this protection for themselves. |- Single Server: Yes <br> - Flexible Server: Not available during Preview <br> - Hyperscale (Citus): No |
-| **Disaster Recovery** | - Customer Managed | - Single Server: Geo redundant backup and geo read-replica <br> - Flexible Server: Not available during Preview <br> - Hyperscale (Citus): No |
-| **Intelligent Performance** | - Customer Managed | - Single Server: Yes <br> - Flexible Server: Not available during Preview <br> - Hyperscale (Citus): No |
+| **Availability SLA** |- [Virtual Machine SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines) | - [Single Server and Flexible Server](https://azure.microsoft.com/support/legal/sla/postgresql)|
+| **OS and PostgreSQL patching** | - Customer managed | - Single Server ΓÇô Automatic <br> - Flexible Server ΓÇô Automatic with optional customer managed window |
+| **High availability** | - Customers architect, implement, test, and maintain high availability. Capabilities might include clustering, replication etc. | - Single Server: built-in <br> - Flexible Server: built-in |
+| **Zone Redundancy** | - Azure VMs can be set up to run in different availability zones. For an on-premises solution, customers must create, manage, and maintain their own secondary data center. | - Single Server: No <br> - Flexible Server: Yes |
+| **Hybrid Scenario** | - Customer managed |- Single Server: Read-replica <br> - Flexible Server: Not available during Preview |
+| **Backup and Restore** | - Customer Managed | - Single Server: built-in with user configuration for local and geo <br> - Flexible Server: built-in with user configuration on zone-redundant storage |
+| **Monitoring Database Operations** | - Customer Managed | - Single Server and Flexible Server: All offer customers the ability to set alerts on the database operation and act upon reaching thresholds. |
+| **Advanced Threat Protection** | - Customers must build this protection for themselves. |- Single Server: Yes <br> - Flexible Server: Not available during Preview |
+| **Disaster Recovery** | - Customer Managed | - Single Server: Geo redundant backup and geo read-replica <br> - Flexible Server: Not available during Preview |
+| **Intelligent Performance** | - Customer Managed | - Single Server: Yes <br> - Flexible Server: Not available during Preview |
## Total cost of ownership (TCO)
postgresql Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview-single-server.md
-+ Last updated 06/24/2022
Last updated 06/24/2022
[!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
-[Azure Database for PostgreSQL](./overview.md) powered by the PostgreSQL community edition is available in three deployment modes:
+[Azure Database for PostgreSQL](./overview.md) powered by the PostgreSQL community edition is available in two deployment modes:
- [Single Server](./overview-single-server.md) - [Flexible Server](../flexible-server/overview.md)-- [Hyperscale (Citus)](../hyperscale/overview.md)
-In this article, we will provide an overview and introduction to core concepts of single server deployment model. To learn about flexible server deployment mode, see [flexible server overview](../flexible-server/overview.md) and Hyperscale (Citus) Overview respectively.
+In this article, we will provide an overview and introduction to the core concepts of the single server deployment model. To learn about the flexible server deployment mode, see the [flexible server overview](../flexible-server/overview.md).
## Overview
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview.md
-+ Last updated 06/24/2022 adobe-target: true
Azure Database for PostgreSQL powered by the PostgreSQL community edition is ava
- Single Server - Flexible Server-- Hyperscale (Citus) ### Azure Database for PostgreSQL - Single Server
Flexible servers are best suited for
For a detailed overview of flexible server deployment mode, see [flexible server overview](../flexible-server/overview.md).
-### Azure Database for PostgreSQL ΓÇô Hyperscale (Citus)
-
-The Hyperscale (Citus) option horizontally scales queries across multiple machines using sharding. Its query engine parallelizes incoming SQL queries across these servers for faster responses on large datasets. It serves applications that require greater scale and performance, generally workloads that are approaching--or already exceed--100 GB of data.
-
-The Hyperscale (Citus) deployment option delivers:
--- Horizontal scaling across multiple machines using sharding-- Query parallelization across these servers for faster responses on large datasets-- Excellent support for multi-tenant applications, real-time operational analytics, and high-throughput transactional workloads-
-Applications built for PostgreSQL can run distributed queries on Hyperscale (Citus) with standard [connection libraries](./concepts-connection-libraries.md) and minimal changes.
- ## Next steps Learn more about the three deployment modes for Azure Database for PostgreSQL and choose the right options based on your needs. - [Single Server](./overview-single-server.md) - [Flexible Server](../flexible-server/overview.md)-- [Hyperscale (Citus)](../hyperscale/overview.md)
postgresql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/security-controls-policy.md
Previously updated : 10/03/2022 Last updated : 10/10/2022 # Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
Last updated 3/15/2021-+ # Azure Private Link availability
The following tables list the Private Link services and the regions where they'r
|Supported services |Available regions | Other considerations | Status | |:-|:--|:-|:--| | Azure SQL Database | All public regions <br/> All Government regions<br/>All China regions | Supported for Proxy [connection policy](/azure/azure-sql/database/connectivity-architecture#connection-policy) | GA <br/> [Learn how to create a private endpoint for Azure SQL](./tutorial-private-endpoint-sql-portal.md) |
-|Azure Cosmos DB| All public regions<br/> All Government regions</br> All China regions | |GA <br/> [Learn how to create a private endpoint for Cosmos DB.](./tutorial-private-endpoint-cosmosdb-portal.md)|
+|Azure Cosmos DB| All public regions<br/> All Government regions</br> All China regions | |GA <br/> [Learn how to create a private endpoint for Azure Cosmos DB.](./tutorial-private-endpoint-cosmosdb-portal.md)|
| Azure Database for PostgreSQL - Single server | All public regions <br/> All Government regions<br/>All China regions | Supported for General Purpose and Memory Optimized pricing tiers | GA <br/> [Learn how to create a private endpoint for Azure Database for PostgreSQL.](../postgresql/concepts-data-access-and-security-private-link.md) | | Azure Database for MySQL | All public regions<br/> All Government regions<br/>All China regions | | GA <br/> [Learn how to create a private endpoint for Azure Database for MySQL.](../mysql/concepts-data-access-security-private-link.md) | | Azure Database for MariaDB | All public regions<br/> All Government regions<br/>All China regions | | GA <br/> [Learn how to create a private endpoint for Azure Database for MariaDB.](../mariadb/concepts-data-access-security-private-link.md) |
The following tables list the Private Link services and the regions where they'r
|:-|:--|:-|:--| |Azure Event Grid| All public regions<br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Event Grid.](../event-grid/network-security.md) | |Azure Service Bus | All public region<br/>All Government regions | Supported with premium tier of Azure Service Bus. [Select for tiers](../service-bus-messaging/service-bus-premium-messaging.md) | GA <br/> [Learn how to create a private endpoint for Azure Service Bus.](../service-bus-messaging/private-link-service.md) |
-| Azure API Management | All public regions<br/> All Government regions | | GA <br/> [Connect privately to API Management using a private endpoint.](../event-grid/network-security.md) |
+| Azure API Management | All public regions<br/> All Government regions | | Preview <br/> [Connect privately to API Management using a private endpoint.](../event-grid/network-security.md) |
### Internet of Things (IoT)
The following tables list the Private Link services and the regions where they'r
Learn more about Azure Private Link service: - [What is Azure Private Link?](private-link-overview.md)-- [Create a Private Endpoint using the Azure portal](create-private-endpoint-portal.md)
+- [Create a Private Endpoint using the Azure portal](create-private-endpoint-portal.md)
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
description: In this article, you'll learn how to use the Private Endpoint feature of Azure Private Link.
-# Customer intent: As someone who has a basic network background but is new to Azure, I want to understand the capabilities of private endpoints so that I can securely connect to my Azure PaaS services within the virtual network.
Last updated 08/10/2022 -+
+#Customer intent: As someone who has a basic network background but is new to Azure, I want to understand the capabilities of private endpoints so that I can securely connect to my Azure PaaS services within the virtual network.
# What is a private endpoint?
A private-link resource is the destination target of a specified private endpoin
| Azure Storage | Microsoft.Storage/storageAccounts | Blob (blob, blob_secondary)<BR> Table (table, table_secondary)<BR> Queue (queue, queue_secondary)<BR> File (file, file_secondary)<BR> Web (web, web_secondary)<BR> Dfs (dfs, dfs_secondary) | | Azure File Sync | Microsoft.StorageSync/storageSyncServices | File Sync Service | | Azure Synapse | Microsoft.Synapse/privateLinkHubs | web |
-| Azure Synapse Analytics | Microsoft.Synapse/workspaces | SQL, SqlOnDemand, Dev |
+| Azure Synapse Analytics | Microsoft.Synapse/workspaces | Sql, SqlOnDemand, Dev |
| Azure App Service | Microsoft.Web/hostingEnvironments | hosting environment | | Azure App Service | Microsoft.Web/sites | sites | | Azure Static Web Apps | Microsoft.Web/staticSites | staticSites |
The following table shows an example of a dual port NSG rule:
- The following services may require all destination ports to be open when leveraging a private endpoint and adding NSG security filters:
- - Cosmos DB - For more information see, [Service port ranges](../cosmos-db/sql/sql-sdk-connection-modes.md#service-port-ranges).
+ - Azure Cosmos DB - For more information, see [Service port ranges](../cosmos-db/sql/sql-sdk-connection-modes.md#service-port-ranges).
### UDR
private-link Troubleshoot Private Endpoint Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/troubleshoot-private-endpoint-connectivity.md
editor: ''
na+ Last updated 01/31/2020 - # Troubleshoot Azure Private Endpoint connectivity problems
Review these steps to make sure all the usual configurations are as expected to
1. It's always good to narrow down before raising the support ticket.
- a. If the Source is On-Premises connecting to Private Endpoint in Azure having issues, then try to connect
- - To another Virtual Machine from On-Premises and check if you have IP connectivity to the Virtual Network from On-Premises.
+ a. If the source is on-premises and is having issues connecting to the Private Endpoint in Azure, then try to connect
+ - To another Virtual Machine from on-premises and check if you have IP connectivity to the Virtual Network from on-premises.
- From a Virtual Machine in the Virtual Network to the Private Endpoint. b. If the Source is Azure and Private Endpoint is in different Virtual Network, then try to connect
- - To the Private Endpoint from a different Source. By doing this you can isolate any Virtual Machine specific issues.
- - To any Virtual Machine which is part of the same Virtual Network of that of Private Endpoint.
+ - To the Private Endpoint from a different source. By doing this, you can isolate any Virtual Machine-specific issues.
+ - To any Virtual Machine that is part of the same Virtual Network as the Private Endpoint.
-1. If the Private Endpoint is linked to a [Private Link Service](./troubleshoot-private-link-connectivity.md) which is linked to a Load Balancer, check if the backend pool is reporting healthy. Fixing the Load Balancer health will fix the issue with connecting to the Private Endpoint.
+1. If the Private Endpoint is linked to a [Private Link Service](./troubleshoot-private-link-connectivity.md), which is linked to a Load Balancer, check if the backend pool is reporting healthy. Fixing the Load Balancer health will fix the issue with connecting to the Private Endpoint.
- - You can see a visual diagram or a [dependency view](../network-watcher/network-insights-overview.md#dependency-view) of the related resources, metrics, and insights by going to:
+ - You can see a visual diagram or a [resource view](../network-watcher/network-insights-overview.md#resource-view) of the related resources, metrics, and insights by going to:
- Azure Monitor - Networks - Private endpoints
- - Dependency view
+ - Resource view
![Monitor-Networks](https://user-images.githubusercontent.com/20302679/134994620-0660b9e2-e2a3-4233-8953-d3e49b93e2f2.png)
private-link Troubleshoot Private Link Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/troubleshoot-private-link-connectivity.md
editor: ''
na+ Last updated 01/31/2020 - # Troubleshoot Azure Private Link Service connectivity problems
If you experience connectivity problems with your private link setup, review the
![Verify private link service metrics](./media/private-link-tsg/pls-metrics.png)
-1. Use [Azure Monitor - Networks](../network-watcher/network-insights-overview.md#dependency-view) for insights and to see a dependency view of the resources by going to:
+1. Use [Azure Monitor - Networks](../network-watcher/network-insights-overview.md#resource-view) for insights and to see a resource view of the resources by going to:
- Azure Monitor - Networks - Private Link services
- - Dependency view
+ - Resource view
![AzureMonitor](https://user-images.githubusercontent.com/20302679/135001735-56a9484b-f9b4-484b-a503-cfb9d20b264a.png)
private-link Tutorial Private Endpoint Cosmosdb Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-cosmosdb-portal.md
Title: 'Tutorial: Connect to an Azure Cosmos account using an Azure Private endpoint'
+ Title: 'Tutorial: Connect to an Azure Cosmos DB account using an Azure Private endpoint'
description: Get started with this tutorial using Azure Private endpoint to connect to an Azure Cosmos DB account privately.
Last updated 06/22/2022-+
-# Tutorial: Connect to an Azure Cosmos account using an Azure Private Endpoint
+# Tutorial: Connect to an Azure Cosmos DB account using an Azure Private Endpoint
Azure Private endpoint is the fundamental building block for Private Link in Azure. It enables Azure resources, like virtual machines (VMs), to privately and securely communicate with Private Link resources such as Azure Cosmos DB.
In this tutorial, you learn how to:
> [!div class="checklist"] > * Create a virtual network and bastion host. > * Create a virtual machine.
-> * Create a Cosmos DB account with a private endpoint.
-> * Test connectivity to Cosmos DB account private endpoint.
+> * Create an Azure Cosmos DB account with a private endpoint.
+> * Test connectivity to the private endpoint.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
In this section, you'll create a virtual machine that will be used to test the p
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
-## Create a Cosmos DB account with a private endpoint
+## Create an Azure Cosmos DB account with a private endpoint
-In this section, you'll create a Cosmos DB account and configure the private endpoint.
+In this section, you'll create an Azure Cosmos DB account and configure the private endpoint.
1. In the left-hand menu, select **Create a resource** > **Databases** > **Azure Cosmos DB**, or search for **Azure Cosmos DB** in the search box.
-2. In **Select API option** page, Select **Create** under **Core (SQL)**.
+2. On the **Select API option** page, select **Create** under **Azure Cosmos DB for NoSQL**.
-2. In the **Basics** tab of **Create Cosmos DB account** enter or select the following information:
+2. In the **Basics** tab of **Create Azure Cosmos DB account**, enter or select the following information:
| Setting | Value | |--|-|
In this section, you'll create a Cosmos DB account and configure the private end
| Resource Group | Select **myResourceGroup**. | | Location | Select **East US**. | | Name | Enter **myPrivateEndpoint**. |
- | CosmosDB sub-resource | Leave the default **Core (SQL) - Recommended**. |
+ | Azure Cosmos DB sub-resource | Leave the default **Azure Cosmos DB for NoSQL - Recommended**. |
| **Networking** | | | Virtual network | Select **myVNet**. | | Subnet | Select **mySubnet**. |
In this section, you'll create a Cosmos DB account and configure the private end
5. Select **OK**.
-6. In the **Settings** section of the Cosmos DB account, select **Keys**.
+6. In the **Settings** section of the Azure Cosmos DB account, select **Keys**.
7. Select copy on the **PRIMARY CONNECTION STRING**. A valid connection string is in the format: `AccountEndpoint=https://<cosmosdb-account-name>.documents.azure.com:443/;AccountKey=<accountKey>;` ## Test connectivity to private endpoint
-In this section, you'll use the virtual machine you created in the previous steps to connect to the Cosmos DB account across the private endpoint using **Azure Cosmos DB Explorer**.
+In this section, you'll use the virtual machine you created in the previous steps to connect to the Azure Cosmos DB account across the private endpoint using **Azure Cosmos DB Explorer**.
1. Select **Resource groups** in the left-hand navigation pane.
In this section, you'll use the virtual machine you created in the previous step
1. Open Windows PowerShell on the server after you connect.
-1. Enter `nslookup <cosmosdb-account-name>.documents.azure.com` and validate the name resolution. Replace **\<cosmosdb-account-name>** with the name of the Cosmos DB account you created in the previous steps. You'll receive a message similar to what is displayed below:
+1. Enter `nslookup <cosmosdb-account-name>.documents.azure.com` and validate the name resolution. Replace **\<cosmosdb-account-name>** with the name of the Azure Cosmos DB account you created in the previous steps. You'll receive a message similar to what is displayed below:
```powershell Server: UnKnown
In this section, you'll use the virtual machine you created in the previous step
Address: 10.1.0.5 Aliases: mycosmosdb.documents.azure.com ```
- A private IP address of **10.1.0.5** is returned for the Cosmos DB account name. This address is in **mySubnet** subnet of **myVNet** virtual network you created previously.
+ A private IP address of **10.1.0.5** is returned for the Azure Cosmos DB account name. This address is in **mySubnet** subnet of **myVNet** virtual network you created previously.
-1. Go to https://cosmos.azure.com/. Select **Connect to your account with connection string**, then paste the connection string that you copied in the previous steps and select **Connect**.
+1. Go to [Azure Cosmos DB](https://cosmos.azure.com/). Select **Connect to your account with connection string**, then paste the connection string that you copied in the previous steps and select **Connect**.
-1. Under the SQL API left-hand menu, you see **mydatabaseid** and **mycontainerid** that you previously created in **mycosmosdb**.
+1. Under the **Azure Cosmos DB for NoSQL** menu on the left, you see **mydatabaseid** and **mycontainerid** that you previously created in **mycosmosdb**.
1. Close the connection to **myVM**. ## Clean up resources
-If you're not going to continue to use this application, delete the virtual network, virtual machine, and Cosmos DB account with the following steps:
+If you're not going to continue to use this application, delete the virtual network, virtual machine, and Azure Cosmos DB account with the following steps:
1. From the left-hand menu, select **Resource groups**.
In this tutorial, you learned how to create:
* Virtual network and bastion host. * Virtual Machine.
-* Cosmos DB Account.
+* Azure Cosmos DB account.
Learn how to connect to a web app using an Azure Private Endpoint: > [!div class="nextstepaction"]
private-link Tutorial Private Endpoint Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-storage-portal.md
Last updated 06/22/2022-+ # Tutorial: Connect to a storage account using an Azure Private Endpoint
In this tutorial, you learned how to create:
* Virtual machine. * Storage account and a container.
-Learn how to connect to an Azure Cosmos DB account using an Azure Private Endpoint:
+Learn how to connect to an Azure Cosmos DB account via Azure Private Endpoint:
+ > [!div class="nextstepaction"]
-> [Connect to Azure Cosmos using Private Endpoint](tutorial-private-endpoint-cosmosdb-portal.md)
+> [Connect to Azure Cosmos DB using Private Endpoint](tutorial-private-endpoint-cosmosdb-portal.md)
public-multi-access-edge-compute-mec Considerations For Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/considerations-for-deployment.md
For Azure public MEC, follow these best practices:
- Deploy in Azure public MEC only those components of the application that are latency sensitive or need low latency compute at the Azure public MEC. Deploy in the parent Azure region those components of the application that perform control plane and management plane functionalities. -- Because Azure public MEC sites are connected to the telecommunications network, accessing resources deployed in it over the internet isn't allowed. To access VMs deployed in the Azure public MEC, deploy jump box virtual machines (VMs) or Azure Bastion in a virtual network (VNet) in the parent region.
+- To access VMs deployed in the Azure public MEC, deploy jump box virtual machines (VMs) or Azure Bastion in a virtual network (VNet) in the parent region.
- For compute resources in the Azure public MEC, deploy Azure Key Vault in the Azure region to provide secrets management and key management services.
purview Concept Best Practices Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-scanning.md
Running a *scan* invokes the process to ingest metadata from the registered data
The curation process applies automated classification labels on the schema attributes based on the scan rule set configured. Sensitivity labels are applied if your Microsoft Purview account is connected to the Microsoft Purview compliance portal.
+> [!IMPORTANT]
+> If you have any [Azure Policies](../governance/policy/overview.md) preventing **updates to Storage accounts**, this will cause errors for Microsoft Purview's scanning process. Follow the [Microsoft Purview exception tag guide](create-azure-purview-portal-faq.md) to create an exception for Microsoft Purview accounts.
+ ## Why do you need best practices to manage data sources? Best practices enable you to:
purview Create Microsoft Purview Portal Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-microsoft-purview-portal-faq.md
Previously updated : 08/26/2021 Last updated : 10/10/2022 # Create an exception to deploy Microsoft Purview
-Many subscriptions have [Azure Policies](../governance/policy/overview.md) in place that restrict the creation of some resources. This is to maintain subscription security and cleanliness. However, Microsoft Purview accounts deploy two other Azure resources when they're created: an Azure Storage account, and optionally an Event Hubs namespace. When you [create Microsoft Purview Account](create-catalog-portal.md), these resources will be deployed. They'll be managed by Azure, so you don't need to maintain them, but you'll need to deploy them. Existing policies may block this deployment, and you may receive an error when attempting to create a Microsoft Purview account.
+Many subscriptions have [Azure Policies](../governance/policy/overview.md) in place that restrict the creation or update of some resources. This is to maintain subscription security and cleanliness. However, Microsoft Purview accounts deploy two other Azure resources when they're created: an Azure Storage account, and optionally an Event Hubs namespace. When you [create Microsoft Purview Account](create-catalog-portal.md), these resources will be deployed. They'll be managed by Azure, so you don't need to maintain them, but you'll need to deploy them. Existing policies may block this deployment, and you may receive an error when attempting to create a Microsoft Purview account.
-To maintain your policies in your subscription, but still allow the creation of these managed resources, you can create an exception.
+Microsoft Purview also regularly updates its Azure Storage account after creation, so any policies blocking updates to this storage account will cause errors during scanning.
+
+To maintain your policies in your subscription, but still allow the creation of and updates to these managed resources, you can create an exception.
## Create an Azure policy exception for Microsoft Purview
purview Create Microsoft Purview Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-microsoft-purview-portal.md
Title: 'Quickstart: Create a Microsoft Purview (formerly Azure Purview) account'
description: This Quickstart describes how to create a Microsoft Purview (formerly Azure Purview) account and configure permissions to begin using it. Previously updated : 06/20/2022 Last updated : 10/11/2022
For more information about the governance capabilities of Microsoft Purview, for
## Create an account
+> [!IMPORTANT]
+> If you have any [Azure Policies](../governance/policy/overview.md) preventing creation of **Storage accounts** or **Event Hub namespaces**, or preventing **updates to Storage accounts**, first follow the [Microsoft Purview exception tag guide](create-azure-purview-portal-faq.md) to create an exception for Microsoft Purview accounts. Otherwise, you will not be able to deploy Microsoft Purview.
+ 1. Search for **Microsoft Purview** in the [Azure portal](https://portal.azure.com). :::image type="content" source="media/create-catalog-portal/purview-accounts-page.png" alt-text="Screenshot showing the purview accounts page in the Azure portal":::
purview Create Sensitivity Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-sensitivity-label.md
+ Last updated 04/22/2022
Microsoft Purview allows you to apply sensitivity labels to assets, enabling you
* **Label travels with the data:** The sensitivity labels created in Microsoft Purview Information Protection can also be extended to the Microsoft Purview Data Map, SharePoint, Teams, Power BI, and SQL. When you apply a label on an office document and then scan it into the Microsoft Purview Data Map, the label will be applied to the data asset. While the label is applied to the actual file in Microsoft Purview Information Protection, it's only added as metadata in the Microsoft Purview map. While there are differences in how a label is applied to an asset across various services/applications, labels travel with the data and are recognized by all the services you extend them to. * **Overview of your data estate:** Microsoft Purview provides insights into your data through pre-canned reports. When you scan data into the Microsoft Purview Data Map, we hydrate the reports with information on what assets you have, scan history, classifications found in your data, labels applied, glossary terms, etc. * **Automatic labeling:** Labels can be applied automatically based on sensitivity of the data. When an asset is scanned for sensitive data, autolabeling rules are used to decide which sensitivity label to apply. You can create autolabeling rules for each sensitivity label, defining which classification/sensitive information type constitutes a label.
-* **Apply labels to files and database columns:** Labels can be applied to files in storage like Azure Data Lake, Azure Files, etc. and to schematized data like columns in Azure SQL DB, Cosmos DB, etc.
+* **Apply labels to files and database columns:** Labels can be applied to files in storage such as Azure Data Lake or Azure Files as well as to schematized data such as columns in Azure SQL Database or Azure Cosmos DB.
Sensitivity labels are tags that you can apply on assets to classify and protect your data. Learn more about [sensitivity labels here](/microsoft-365/compliance/create-sensitivity-labels).
Sensitivity labels are supported in the Microsoft Purview Data Map for the follo
|Data type |Sources | ||| |Automatic labeling for files | - Azure Blob Storage</br>- Azure Files</br>- Azure Data Lake Storage Gen 1 and Gen 2</br>- Amazon S3|
-|Automatic labeling for schematized data assets | - SQL server</br>- Azure SQL database</br>- Azure SQL Managed Instance</br>- Azure Synapse Analytics workspaces</br>- Azure Cosmos Database (SQL API)</br> - Azure database for MySQL</br> - Azure database for PostgreSQL</br> - Azure Data Explorer</br> |
+|Automatic labeling for schematized data assets | - SQL server</br>- Azure SQL database</br>- Azure SQL Managed Instance</br>- Azure Synapse Analytics workspaces</br>- Azure Cosmos DB for NoSQL</br> - Azure Database for MySQL</br> - Azure Database for PostgreSQL</br> - Azure Data Explorer</br> |
| | | ## Labeling for SQL databases
purview How To Automatically Label Your Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-automatically-label-your-content.md
+ Last updated 07/07/2022
For more information on how to set up scans on various assets in the Microsoft P
|Source |Reference | ||| |**Files within Storage** | [Register and Scan Azure Blob Storage](register-scan-azure-blob-storage-source.md) </br> [Register and scan Azure Files](register-scan-azure-files-storage-source.md) [Register and scan Azure Data Lake Storage Gen1](register-scan-adls-gen1.md) </br>[Register and scan Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)</br>[Register and scan Amazon S3](register-scan-amazon-s3.md) |
-|**database columns** | [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md) </br>[Register and scan an Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md) </br> [Register and scan Dedicated SQL pools](register-scan-azure-synapse-analytics.md)</br> [Register and scan Azure Synapse Analytics workspaces](register-scan-azure-synapse-analytics.md) </br> [Register and scan Azure Cosmos Database (SQL API)](register-scan-azure-cosmos-database.md) </br> [Register and scan an Azure MySQL database](register-scan-azure-mysql-database.md) </br> [Register and scan an Azure database for PostgreSQL](register-scan-azure-postgresql.md) |
+|**database columns** | [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md) </br>[Register and scan an Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md) </br> [Register and scan Dedicated SQL pools](register-scan-azure-synapse-analytics.md)</br> [Register and scan Azure Synapse Analytics workspaces](register-scan-azure-synapse-analytics.md) </br> [Register and scan Azure Cosmos DB for NoSQL database](register-scan-azure-cosmos-database.md) </br> [Register and scan an Azure MySQL database](register-scan-azure-mysql-database.md) </br> [Register and scan an Azure database for PostgreSQL](register-scan-azure-postgresql.md) |
| | | ## View labels on assets in the catalog
Find insights on your classified and labeled data in the Microsoft Purview Data
> [Overview of Labeling in Microsoft Purview](create-sensitivity-label.md) > [!div class="nextstepaction"]
-> [Labeling Frequently Asked Questions](sensitivity-labels-frequently-asked-questions.yml)
+> [Labeling Frequently Asked Questions](sensitivity-labels-frequently-asked-questions.yml)
purview How To Enable Data Use Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-enable-data-use-management.md
Title: Enabling Data use management on your Microsoft Purview sources
-description: Step-by-step guide on how to enable data use access for your registered sources.
+description: Step-by-step guide on how to enable Data use management for your registered sources.
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-*Data use management* (DUM) is an option within the data source registration in Microsoft Purview. This option lets Microsoft Purview manage data access for your resources. The high level concept is that the data owner allows its data resource to be available for access policies by enabling *DUM*.
+*Data use management* is an option within the data source registration in Microsoft Purview. This option lets Microsoft Purview manage data access for your resources. The high-level concept is that the data owner makes their data resource available for access policies by enabling *Data use management*.
-Currently, a data owner can enable DUM on a data resource for these types of access policies:
+Currently, a data owner can enable Data use management on a data resource, which enables it for these types of access policies:
* [Data owner access policies](concept-policies-data-owner.md) - access policies authored via Microsoft Purview data policy experience. * [Self-service access policies](concept-self-service-data-access-policy.md) - access policies automatically generated by Microsoft Purview after a [self-service access request](how-to-request-access.md) is approved.
-To be able to create any data policy on a resource, DUM must first be enabled on that resource. This article will explain how to enable DUM on your resources in Microsoft Purview.
+To be able to create any data policy on a resource, Data use management must first be enabled on that resource. This article will explain how to enable Data use management on your resources in Microsoft Purview.
>[!IMPORTANT]
->Because Data use management directly affects access to your data, it directly affects your data security. Review [**additional considerations**](#additional-considerations-related-to-data-use-management) and [**best practices**](#data-use-management-best-practices) below before enabling DUM in your environment.
+>Because Data use management directly affects access to your data, it directly affects your data security. Review [**additional considerations**](#additional-considerations-related-to-data-use-management) and [**best practices**](#data-use-management-best-practices) below before enabling Data use management in your environment.
## Prerequisites [!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)]
purview How To Monitor Scan Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-monitor-scan-runs.md
In Microsoft Purview, you can register and scan various types of data sources, a
1. You can come back to the **Scan Status** page by following the breadcrumbs in the top-left corner of the run history page.
+## Scans no longer run
+
+If your Microsoft Purview scans used to run successfully but are now failing, check these things:
+1. Have credentials to your resource changed or been rotated? If so, you'll need to update your scan to have the correct credentials.
+1. Is an [Azure Policy](../governance/policy/overview.md) preventing **updates to Storage accounts**? If so, follow the [Microsoft Purview exception tag guide](create-azure-purview-portal-faq.md) to create an exception for Microsoft Purview accounts.
+1. Are you using a self-hosted integration runtime? Check that it's up to date with the latest software and that it's connected to your network.
+ ## Next steps * [Microsoft Purview supported data sources and file types](azure-purview-connector-overview.md)
purview How To Policies Data Owner Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-arc-sql-server.md
Previously updated : 08/12/2022 Last updated : 10/11/2022 # Provision access by data owner for SQL Server on Azure Arc-enabled servers (preview) [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[Access policies](concept-policies-data-owner.md) allow you to manage access from Microsoft Purview to data sources that have been registered for *Data Use Management*.
+[Data owner policies](concept-policies-data-owner.md) are a type of Microsoft Purview access policy. They allow you to manage access to user data in sources that have been registered for *Data Use Management* in Microsoft Purview. These policies can be authored directly in the Microsoft Purview governance portal, and after publishing, they're enforced by the data source.
-This how-to guide describes how a data owner can delegate authoring policies in Microsoft Purview to enable access to SQL Server on Azure Arc-enabled servers. The following actions are currently enabled: *SQL Performance Monitoring*, *SQL Security Auditing* and *Read*. These 3 actions are only supported for policies at server level. *Modify* is not supported at this point.
+This guide covers how a data owner can delegate authoring policies in Microsoft Purview to enable access to SQL Server on Azure Arc-enabled servers. The following actions are currently enabled: *SQL Performance Monitoring*, *SQL Security Auditing* and *Read*. These 3 actions are only supported for policies at server level. *Modify* is not supported at this point.
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]-- Get SQL server version 2022 RC 1 or later running on Windows and install it. [Follow this link](https://www.microsoft.com/sql-server/sql-server-2022).-- Complete process to onboard that SQL server with Azure Arc [Follow this link](https://learn.microsoft.com/sql/sql-server/azure-arc/connect).-- Enable Azure AD Authentication in that SQL server. [Follow this guide to learn how](https://learn.microsoft.com/sql/relational-databases/security/authentication-access/azure-ad-authentication-sql-server-setup-tutorial). For a simpler setup [follow this link](https://learn.microsoft.com/sql/relational-databases/security/authentication-access/azure-ad-authentication-sql-server-automation-setup-tutorial#setting-up-azure-ad-admin-using-the-azure-portal).
-**Enforcement of policies for this data source is available only in the following regions for Microsoft Purview**
-- East US-- East US 2-- South Central US-- West US 3-- Canada Central-- West Europe-- North Europe-- UK South-- France Central-- UAE North-- Central India-- Korea Central-- Japan East-- Australia East-
-## Security considerations
-- The Server admin can turn off the Microsoft Purview policy enforcement.-- Arc Admin/Server admin permissions empower the Arc admin or Server admin with the ability to change the ARM path of the given server. Given that mappings in Microsoft Purview use ARM paths, this can lead to wrong policy enforcements. -- SQL Admin (DBA) can gain the power of Server admin and can tamper with the cached policies from Microsoft Purview.-- The recommended configuration is to create a separate App Registration per SQL server instance. This prevents SQL server2 from reading the policies meant for SQL server1, in case a rogue admin in SQL server2 tampers with the ARM path.-
-## Configuration
-
-> [!Important]
-> You can assign the data source side permission (i.e., *IAM Owner*) **only** by entering Azure portal through this [special link](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=sqlrbacmain#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/sqlServers). Alternatively, you can configure this permission at the parent resource group level so that it gets inherited by the "SQL Server - Azure Arc" data source.
-
-### SQL Server on Azure Arc-enabled server configuration
-This section describes the steps to configure the SQL Server on Azure Arc to use Microsoft Purview.
-
-1. Sign in to Azure portal with a [special link](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=sqlrbacmain#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/sqlServers) that contains feature flags to list SQL Servers on Azure Arc
-
-1. Navigate to a SQL Server you want to configure
-
-1. Navigate to **Azure Active Directory** feature on the left pane
-
-1. Verify that Azure Active Directory Authentication is configured. This means that all these have been entered: an admin login, a SQL Server service certificate, and a SQL Server app registration.
-![Screenshot shows how to configure Microsoft Purview endpoint in Azure AD section.](./media/how-to-policies-data-owner-sql/setup-sql-on-arc-for-purview.png)
-
-1. Scroll down to set **External Policy Based Authorization** to enabled
-
-1. Enter **Microsoft Purview Endpoint** in the format *https://\<purview-account-name\>.purview.azure.com*. You can see the names of Microsoft Purview accounts in your tenant through [this link](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Purview%2FAccounts). Optionally, you can confirm the endpoint by navigating to the Microsoft Purview account, then to the Properties section on the left menu and scrolling down until you see "Scan endpoint". The full endpoint path will be the one listed without the "/Scan" at the end.
-
-1. Make a note of the **App registration ID**, as you will need it when you register and enable this data source for *Data Use Management* in Microsoft Purview.
-
-1. Select the **Save** button to save the configuration.
+## Microsoft Purview configuration
### Register data sources in Microsoft Purview Register each data source with Microsoft Purview to later define access policies.
This section contains a reference of how actions in Microsoft Purview data polic
## Next steps Check blog, demo and related how-to guides
-* [Demo of access policy for Azure Storage](https://learn-video.azurefd.net/vod/player?id=caa25ad3-7927-4dcc-88dd-6b74bcae98a2)
* [Concepts for Microsoft Purview data owner policies](./concept-policies-data-owner.md)
-* Blog: [Private preview: controlling access to Azure SQL at scale with policies in Purview](https://techcommunity.microsoft.com/t5/azure-sql-blog/private-preview-controlling-access-to-azure-sql-at-scale-with/ba-p/2945491)
* [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-policies-data-owner-resource-group.md) * [Enable Microsoft Purview data owner policies on an Azure SQL DB](./how-to-policies-data-owner-azure-sql-db.md)
purview How To Policies Data Owner Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-authoring-generic.md
Previously updated : 08/22/2022 Last updated : 10/10/2022 # Authoring and publishing data owner access policies (Preview) [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-Access policies allow a data owner to delegate in Microsoft Purview access management to a data source. These policies can be authored directly in the Microsoft Purview governance portal, and after publishing, they get enforced by the data source. This tutorial describes how to create, update, and publish access policies in the Microsoft Purview governance portal.
+[Data owner policies](concept-policies-data-owner.md) are a type of Microsoft Purview access policy. They allow you to manage access to user data in sources that have been registered for *Data Use Management* in Microsoft Purview. These policies can be authored directly in the Microsoft Purview governance portal, and after publishing, they get enforced by the data source.
+
+This guide describes how to create, update, and publish data owner policies in the Microsoft Purview governance portal.
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]
-## Configuration
+## Microsoft Purview configuration
### Data source configuration
Before authoring data policies in the Microsoft Purview governance portal, you'l
1. Follow any policy-specific prerequisites for your source. Check the [Microsoft Purview supported data sources table](microsoft-purview-connector-overview.md) and select the link in the **Access Policy** column for sources where access policies are available. Follow any steps listed in the Access policy or Prerequisites sections. 1. Register the data source in Microsoft Purview. Follow the **Prerequisites** and **Register** sections of the [source pages](microsoft-purview-connector-overview.md) for your resources.
-1. [Enable the Data use management toggle on the data source](how-to-enable-data-use-management.md#enable-data-use-management). Additional permissions for this step are described in the linked document.
+1. Enable the Data use management option on the data source. Data Use Management needs certain permissions and can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
+
+
## Create a new policy This section describes the steps to create a new policy in Microsoft Purview.
-Ensure you have the *Policy Author* permission as described [here](#permissions-for-policy-authoring-and-publishing).
+Ensure you have the *Policy Author* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-needed-to-create-and-publish-data-owner-policies).
1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
Now that you have created your policy, you will need to publish it for it to bec
## Publish a policy A newly created policy is in the **draft** state. The process of publishing associates the new policy with one or more data sources under governance. This is called "binding" a policy to a data source.
-Ensure you have the *Data Source Admin* permission as described [here](#permissions-for-policy-authoring-and-publishing)
+Ensure you have the *Data Source Admin* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-needed-to-create-and-publish-data-owner-policies)
The steps to publish a policy are as follows:
The steps to publish a policy are as follows:
## Update or delete a policy Steps to update or delete a policy in Microsoft Purview are as follows.
-Ensure you have the *Policy Author* permission as described [here](#permissions-for-policy-authoring-and-publishing)
+Ensure you have the *Policy Author* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-needed-to-create-and-publish-data-owner-policies)
1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
purview How To Policies Data Owner Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-azure-sql-db.md
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[Access policies](concept-policies-data-owner.md) allow you to manage access from Microsoft Purview to data sources that have been registered for *Data Use Management*.
+[Data owner policies](concept-policies-data-owner.md) are a type of Microsoft Purview access policy. They allow you to manage access to user data in sources that have been registered for *Data Use Management* in Microsoft Purview. These policies can be authored directly in the Microsoft Purview governance portal, and after publishing, they get enforced by the data source.
-This how-to guide describes how a data owner can delegate authoring policies in Microsoft Purview to enable access to Azure SQL DB. The following actions are currently enabled: *SQL Performance Monitoring*, *SQL Security Auditing* and *Read*. The first two actions are supported only at server level. *Modify* is not supported at this point.
+This guide covers how a data owner can delegate authoring policies in Microsoft Purview to enable access to Azure SQL Database. The following actions are currently enabled: *SQL Performance Monitoring*, *SQL Security Auditing*, and *Read*. The first two actions are supported only at the server level. *Modify* is not supported at this point.
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]
-## Microsoft Purview Configuration
+## Microsoft Purview configuration
### Register the data sources in Microsoft Purview
-The Azure SQL DB resources need to be registered first with Microsoft Purview to later define access policies. You can follow these guides:
+The Azure SQL Database data source needs to be registered first with Microsoft Purview before creating access policies. You can follow these guides:
-[Register and scan Azure SQL DB](./register-scan-azure-sql-database.md)
+[Register and scan Azure SQL Database](./register-scan-azure-sql-database.md)
After you've registered your resources, you'll need to enable Data Use Management. Data Use Management can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to Data Use Management in this guide**:- [How to enable Data Use Management](./how-to-enable-data-use-management.md)
-Once your data source has the **Data Use Management** toggle *Enabled*, it will look like this picture. This will enable the access policies to be used with the given SQL server and all its contained databases.
+Once your data source has the **Data Use Management** toggle *Enabled*, it will look like this screenshot. This will enable the access policies to be used with the given Azure SQL server and all its contained databases.
![Screenshot shows how to register a data source for policy.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-azure-sql-db.png)
purview How To Policies Data Owner Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-resource-group.md
Previously updated : 05/27/2022 Last updated : 10/10/2022 # Resource group and subscription access provisioning by data owner (Preview) [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[Access policies](concept-policies-data-owner.md) allow you to manage access from Microsoft Purview to data sources that have been registered for *Data Use Management*.
+[Data owner policies](concept-policies-data-owner.md) are a type of Microsoft Purview access policy. They allow you to manage access to user data in sources that have been registered for *Data Use Management* in Microsoft Purview. These policies can be authored directly in the Microsoft Purview governance portal, and after publishing, they get enforced by the data source.
-You can also [register an entire resource group or subscription](register-scan-azure-multiple-sources.md), and create a single policy that will manage access to **all** data sources in that resource group or subscription. That single policy will cover all existing data sources and any data sources that are created afterwards. This article describes how this is done.
+In this guide we cover how to register an entire resource group or subscription and then create a single policy that will manage access to **all** data sources in that resource group or subscription. That single policy will cover all existing data sources and any data sources that are created afterwards.
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]
You can also [register an entire resource group or subscription](register-scan-a
(*) Only the *SQL Performance monitoring* and *Security auditing* actions are fully supported for SQL-type data sources. The *Read* action needs a workaround described later in this guide. The *Modify* action is not currently supported for SQL-type data sources.
-## Configuration
+## Microsoft Purview configuration
### Register the subscription or resource group for Data Use Management The subscription or resource group needs to be registered with Microsoft Purview to later define access policies.
To register your subscription or resource group, follow the **Prerequisites** an
After you've registered your resources, you'll need to enable Data Use Management. Data Use Management needs certain permissions and can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
-In the end, your resource will have the **Data Use Management** toggle **Enabled**, as shown in the picture:
+In the end, your resource will have the **Data Use Management** toggle **Enabled**, as shown in the screenshot:
![Screenshot shows how to register a resource group or subscription for policy by toggling the enable tab in the resource editor.](./media/how-to-policies-data-owner-resource-group/register-resource-group-for-policy.png)
purview How To Policies Data Owner Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-storage.md
Previously updated : 8/11/2022 Last updated : 10/10/2022 # Access provisioning by data owner to Azure Storage datasets (Preview) [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[Access policies](concept-policies-data-owner.md) allow you to manage access from Microsoft Purview to data sources that have been registered for *Data Use Management*.
-
-This article describes how a data owner can delegate in Microsoft Purview management of access to Azure Storage datasets. Currently, these two Azure Storage sources are supported:
+[Data owner policies](concept-policies-data-owner.md) are a type of Microsoft Purview access policy. They allow you to manage access to user data in sources that have been registered for *Data Use Management* in Microsoft Purview. These policies can be authored directly in the Microsoft Purview governance portal, and after publishing, they get enforced by the data source.
+This guide covers how a data owner can delegate the management of access to Azure Storage datasets in Microsoft Purview. Currently, these two Azure Storage sources are supported:
- Blob storage - Azure Data Lake Storage (ADLS) Gen2 ## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]- [!INCLUDE [Azure Storage specific pre-requisites](./includes/access-policies-prerequisites-storage.md)]
-## Configuration
+## Microsoft Purview configuration
### Register the data sources in Microsoft Purview for Data Use Management The Azure Storage resources need to be registered first with Microsoft Purview to later define access policies.
To register your resources, follow the **Prerequisites** and **Register** sectio
After you've registered your resources, you'll need to enable Data Use Management. Data Use Management needs certain permissions and can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
-Once your data source has the **Data Use Management** toggle **Enabled**, it will look like this picture:
+Once your data source has the **Data Use Management** toggle **Enabled**, it will look like this screenshot:
:::image type="content" source="./media/how-to-policies-data-owner-storage/register-data-source-for-policy-storage.png" alt-text="Screenshot that shows how to register a data source for policy by toggling the enable tab in the resource editor.":::
Execute the steps in the **Create a new policy** and **Publish a policy** sectio
> - Publish is a background operation. Azure Storage accounts can take up to **2 hours** to reflect the changes. ## Data Consumption-- Data consumer can access the requested dataset using tools such as PowerBI or Azure Synapse Analytics workspace.-- Sub-container access: Policy statements set below container level on a Storage account are supported. However, users will not be able to browse to the data asset using Azure Portal's Storage Browser or Microsoft Azure Storage Explorer tool if access is granted only at file or folder level of the Azure Storage account. This is because these apps attempt to crawl down the hierarchy starting at container level, and the request fails because no access has been granted at that level. Instead, the App that requests the data must execute a direct access by providing a fully qualified name to the data object. The following documents show examples of how to perform a direct access. See also the blogs in the *Next steps* section of this how-to-guide.
+- A data consumer can access the requested dataset using tools such as Power BI or an Azure Synapse Analytics workspace.
+- Sub-container access: Policy statements set below container level on a Storage account are supported. However, users will not be able to browse to the data asset using the Azure portal's Storage Browser or the Microsoft Azure Storage Explorer tool if access is granted only at the file or folder level of the Azure Storage account. This is because these apps attempt to crawl down the hierarchy starting at the container level, and the request fails because no access has been granted at that level. Instead, the app that requests the data must perform direct access by providing a fully qualified name for the data object. The following documents show examples of how to perform direct access, and a PowerShell sketch follows the links. See also the blogs in the *Next steps* section of this how-to guide.
- [*abfs* for ADLS Gen2](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md#access-files-from-the-cluster) - [*az storage blob download* for Blob Storage](../storage/blobs/storage-quickstart-blobs-cli.md#download-a-blob)
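For illustration, here's a minimal Azure PowerShell sketch of the direct-access pattern. The account, container, and blob names are placeholders, and it assumes the signed-in user has been granted access at the file or folder level by a published Microsoft Purview policy:

```powershell
# Sign in and build a storage context that uses the caller's Azure AD identity.
Connect-AzAccount
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount

# Direct access with a fully qualified blob path succeeds even though browsing
# from the container level in Storage Browser or Storage Explorer would fail.
Get-AzStorageBlobContent -Context $ctx `
    -Container "finance" `
    -Blob "reports/2022/q3-results.csv" `
    -Destination "./q3-results.csv"
```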
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-credentials.md
Previously updated : 05/09/2022 Last updated : 10/12/2022
purview Microsoft Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/microsoft-purview-connector-overview.md
Previously updated : 08/03/2022- Last updated : 10/10/2022+ # Supported data sources and file types
The table below shows the supported capabilities for each data source. Select th
|**Category**| **Data Store** |**Technical metadata** |**Classification** |**Lineage** | **Access Policy** | **Data Sharing** | ||||||||
-| Azure | [Azure Blob Storage](register-scan-azure-blob-storage-source.md)| [Yes](register-scan-azure-blob-storage-source.md#register) | [Yes](register-scan-azure-blob-storage-source.md#scan)| Limited* | [Yes (Preview)](register-scan-azure-blob-storage-source.md#access-policy) | [Yes](register-scan-azure-blob-storage-source.md#data-sharing)|
+| Azure |[Multiple sources](register-scan-azure-multiple-sources.md)| [Yes](register-scan-azure-multiple-sources.md#register) | [Yes](register-scan-azure-multiple-sources.md#scan) | No |[Yes (Preview)](register-scan-azure-multiple-sources.md#access-policy) | No |
+||[Azure Blob Storage](register-scan-azure-blob-storage-source.md)| [Yes](register-scan-azure-blob-storage-source.md#register) | [Yes](register-scan-azure-blob-storage-source.md#scan)| Limited* | [Yes (Preview)](register-scan-azure-blob-storage-source.md#access-policy) | [Yes](register-scan-azure-blob-storage-source.md#data-sharing)|
|| [Azure Cosmos DB](register-scan-azure-cosmos-database.md)| [Yes](register-scan-azure-cosmos-database.md#register) | [Yes](register-scan-azure-cosmos-database.md#scan)|No*|No| No| || [Azure Data Explorer](register-scan-azure-data-explorer.md)| [Yes](register-scan-azure-data-explorer.md#register) | [Yes](register-scan-azure-data-explorer.md#scan)| No* | No | No| || [Azure Data Factory](how-to-link-azure-data-factory.md) | [Yes](how-to-link-azure-data-factory.md) | No | [Yes](how-to-link-azure-data-factory.md) | No | No|
For all structured file formats, the Microsoft Purview Data Map scanner samples
- For document file formats, it samples the first 20 MB of each file. - If a document file is larger than 20 MB, then it isn't subject to a deep scan (subject to classification). In that case, Microsoft Purview captures only basic meta data like file name and fully qualified name. - For **tabular data sources (SQL)**, it samples the top 128 rows.-- For **Azure Cosmos DB (SQL API)**, up to 300 distinct properties from the first 10 documents in a container will be collected for the schema and for each property, values from up to 128 documents or the first 1 MB will be sampled.
+- For **Azure Cosmos DB for NoSQL**, up to 300 distinct properties from the first 10 documents in a container will be collected for the schema and for each property, values from up to 128 documents or the first 1 MB will be sampled.
## Resource set file sampling
File sampling for resource sets by file types:
- **Delimited files (CSV, PSV, SSV, TSV)** - 1 in 100 files are sampled (L3 scan) within a folder or group of partition files that are considered a 'Resource set' - **Data Lake file types (Parquet, Avro, Orc)** - 1 in 18446744073709551615 (long max) files are sampled (L3 scan) within a folder or group of partition files that are considered a 'Resource set' - **Other structured file types (JSON, XML, TXT)** - 1 in 100 files are sampled (L3 scan) within a folder or group of partition files that are considered a 'Resource set'-- **SQL objects and CosmosDB entities** - Each file is L3 scanned.
+- **SQL objects and Azure Cosmos DB entities** - Each file is L3 scanned.
- **Document file types** - Each file is L3 scanned. Resource set patterns don't apply to these file types.
-## Classification
-
-All 208 system classification rules apply to structured file formats. Only the MCE classification rules apply to document file types (Not the data scan native regex patterns, bloom filter-based detection). For more information on supported classifications, see [Supported classifications in the Microsoft Purview Data Map](supported-classifications.md).
## Next steps
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen2.md
Source storage account can support up to 20 targets, and target storage account
## Access policy
+### Supported policies
+The following types of policies are supported on this data resource from Microsoft Purview:
+- [Data owner policies](concept-policies-data-owner.md)
+- [Self-service access policies](concept-self-service-data-access-policy.md)
+ ### Access policy pre-requisites on Azure Storage accounts [!INCLUDE [Access policies Azure Storage specific pre-requisites](./includes/access-policies-prerequisites-storage.md)]
Source storage account can support up to 20 targets, and target storage account
### Register the data source in Microsoft Purview for Data Use Management The Azure Storage resource needs to be registered first with Microsoft Purview before you can create access policies. To register your resource, follow the **Prerequisites** and **Register** sections of this guide:-- [Register and scan Azure Data Lake Storage (ADLS) Gen2 - Microsoft Purview](register-scan-adls-gen2.md#prerequisites)
+- [Register Azure Data Lake Storage (ADLS) Gen2 in Microsoft Purview](register-scan-adls-gen2.md#prerequisites)
After you've registered the data source, you'll need to enable Data Use Management. This is a pre-requisite before you can create policies on the data source. Data Use Management can impact the security of your data, as it delegates to certain Microsoft Purview roles managing access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
To create an access policy for Azure Data Lake Storage Gen2, follow these guides
* [Data owner policy covering all sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md) - This guide will allow you to provision access on all enabled data sources in a resource group, or across an Azure subscription. The pre-requisite is that the subscription or resource group is registered with the Data use management option enabled. ## Next steps- Follow the below guides to learn more about Microsoft Purview and your data. - [Data owner policies in Microsoft Purview](concept-policies-data-owner.md) - [Data Estate Insights in Microsoft Purview](concept-insights.md)
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
Source storage account can support up to 20 targets, and target storage account
## Access policy
+### Supported policies
+The following types of policies are supported on this data resource from Microsoft Purview:
+- [Data owner policies](concept-policies-data-owner.md)
+- [Self-service access policies](concept-self-service-data-access-policy.md)
+ ### Access policy pre-requisites on Azure Storage accounts [!INCLUDE [Access policies Azure Storage specific pre-requisites](./includes/access-policies-prerequisites-storage.md)]
Source storage account can support up to 20 targets, and target storage account
### Register the data source in Microsoft Purview for Data Use Management The Azure Storage resource needs to be registered first with Microsoft Purview before you can create access policies. To register your resource, follow the **Prerequisites** and **Register** sections of this guide:-- [Register and scan Azure Storage Blob - Microsoft Purview](register-scan-azure-blob-storage-source.md#prerequisites)
+- [Register Azure Blob Storage in Microsoft Purview](register-scan-azure-blob-storage-source.md#prerequisites)
After you've registered the data source, you'll need to enable Data Use Management. This is a pre-requisite before you can create policies on the data source. Data Use Management can impact the security of your data, as it delegates to certain Microsoft Purview roles managing access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
purview Register Scan Azure Cosmos Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-cosmos-database.md
Title: 'Register and scan Azure Cosmos Database (SQL API)'
-description: This article outlines the process to register an Azure Cosmos data source (SQL API) in Microsoft Purview including instructions to authenticate and interact with the Azure Cosmos database
+ Title: 'Register and scan Azure Cosmos DB Database (SQL API)'
+description: This article outlines the process to register an Azure Cosmos DB instance in Microsoft Purview including instructions to authenticate and interact with the Azure Cosmos DB database
Last updated 09/14/2022-+
-# Connect to Azure Cosmos database (SQL API) in Microsoft Purview
+# Connect to Azure Cosmos DB for NoSQL in Microsoft Purview
-This article outlines the process to register an Azure Cosmos database (SQL API) in Microsoft Purview including instructions to authenticate and interact with the Azure Cosmos database source
+This article outlines the process to register an Azure Cosmos DB for NoSQL instance in Microsoft Purview, including instructions to authenticate and interact with the Azure Cosmos DB database source.
## Supported capabilities
This article outlines the process to register an Azure Cosmos database (SQL API)
## Register
-This section will enable you to register the Azure Cosmos database (SQL API) and set up an appropriate authentication mechanism to ensure successful scanning of the data source.
+This section will enable you to register the Azure Cosmos DB for NoSQL instance and set up an appropriate authentication mechanism to ensure successful scanning of the data source.
### Steps to register
It is important to register the data source in Microsoft Purview prior to settin
:::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-collections.png" alt-text="Screenshot that shows the collection menu to create collection hierarchy":::
-1. Navigate to the appropriate collection under the **Sources** menu and select the **Register** icon to register a new Azure Cosmos database
+1. Navigate to the appropriate collection under the **Sources** menu and select the **Register** icon to register a new Azure Cosmos DB database
:::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-register-data-source.png" alt-text="Screenshot that shows the collection used to register the data source":::
-1. Select the **Azure Cosmos DB (SQL API)** data source and select **Continue**
+1. Select the **Azure Cosmos DB for NoSQL** data source and select **Continue**
:::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-select-data-source.png" alt-text="Screenshot that allows selection of the data source":::
It is important to register the data source in Microsoft Purview prior to settin
:::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-data-source-details.png" alt-text="Screenshot that shows the details to be entered in order to register the data source":::
-1. The _Azure Cosmos database_ storage account will be shown under the selected Collection
+1. The _Azure Cosmos DB database_ storage account will be shown under the selected Collection
:::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-collection-mapping.png" alt-text="Screenshot that shows the data source mapped to the collection to initiate scanning":::
It is important to register the data source in Microsoft Purview prior to settin
### Authentication for a scan
-In order to have access to scan the data source, an authentication method in the Azure Cosmos database Storage account needs to be configured.
+To scan the data source, an authentication method in the Azure Cosmos DB database storage account needs to be configured.
-There is only one way to set up authentication for Azure Cosmos Database:
+There is only one way to set up authentication for an Azure Cosmos DB database:
**Account Key** - Secrets can be created inside an Azure Key Vault to store credentials in order to enable access for Microsoft Purview to scan data sources securely using the secrets. A secret can be a storage account key, SQL login password or a password.
There is only one way to set up authentication for Azure Cosmos Database:
You need to get your access key and store it in the key vault (a scripted sketch follows these steps):
-1. Navigate to your Azure Cosmos database storage account
+1. Navigate to your Azure Cosmos DB database storage account
1. Select **Settings > Keys** :::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-access-keys.png" alt-text="Screenshot that shows the access keys in the storage account":::
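If you'd rather script this step, a minimal Azure PowerShell sketch could look like the following; the resource names are placeholders, and it assumes the Az.CosmosDB and Az.KeyVault modules are installed:

```powershell
# Placeholder names; retrieve the account key and store it as a Key Vault secret
# so the Microsoft Purview credential can reference it.
$keys = Get-AzCosmosDBAccountKey -ResourceGroupName "my-rg" -Name "my-cosmos-account" -Type "Keys"

$secret = ConvertTo-SecureString -String $keys.PrimaryMasterKey -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName "my-key-vault" -Name "cosmos-scan-key" -SecretValue $secret
```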
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
This article outlines how to register multiple Azure sources and how to authenti
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**| |||||||||
-| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan)| [Yes](#scan)| [Yes](how-to-policies-data-owner-resource-group.md) | [Source Dependant](catalog-lineage-user-guide.md)| No |
+| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan)| [Yes](#scan)| [Yes](#access-policy) | [Source Dependant](catalog-lineage-user-guide.md)| No |
## Prerequisites
To manage a scan, do the following:
- You can delete the scan by selecting **Delete**. - If the scan is running, you can cancel it by selecting **Cancel**.
+## Access Policy
+
+### Supported policies
+The following types of policies are supported on this data resource from Microsoft Purview:
+- [Data owner policies](concept-policies-data-owner.md)
++
+### Access policy pre-requisites on Azure Storage accounts
+To be able to enforce policies from Microsoft Purview, data sources under a resource group or subscription need to be configured first. Instructions vary based on the data source type.
+Review whether your data source types support Microsoft Purview policies and, if so, the specific instructions to enable them, under the **Access Policy** link in the [Microsoft Purview connector document](./microsoft-purview-connector-overview.md).
+
+### Configure the Microsoft Purview account for policies
+
+### Register the data source in Microsoft Purview for Data Use Management
+The Azure subscription or resource group needs to be registered first with Microsoft Purview before you can create access policies.
+To register your resource, follow the **Prerequisites** and **Register** sections of this guide:
+- [Register multiple sources in Microsoft Purview](register-scan-azure-multiple-sources.md#prerequisites)
+
+After you've registered the data resource, you'll need to enable Data Use Management. This is a pre-requisite before you can create policies on the data resource. Data Use Management can impact the security of your data, as it delegates to certain Microsoft Purview roles managing access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
+
+Once your data source has the **Data Use Management** option set to **Enabled**, it will look like this screenshot:
+![Screenshot shows how to register a data source for policy with the option Data use management set to enable.](./media/how-to-policies-data-owner-resource-group/register-resource-group-for-policy.png)
+
+### Create a policy
+To create an access policy on an entire Azure subscription or resource group, follow this guide:
+* [Data owner policy covering all sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md#create-and-publish-a-data-owner-policy) - This guide will allow you to provision access on all enabled data sources in a resource group, or across an Azure subscription. The pre-requisite is that the subscription or resource group is registered with the Data use management option enabled.
++ ## Next steps Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data.-
+- [Data owner policies in Microsoft Purview](concept-policies-data-owner.md)
- [Data Estate Insights in Microsoft Purview](concept-insights.md) - [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
Scans can be managed or run again on completion
## Access policy
+### Supported policies
+The following types of policies are supported on this data resource from Microsoft Purview:
+- [Data owner policies](concept-policies-data-owner.md)
+ ### Access policy pre-requisites on Azure SQL Database [!INCLUDE [Access policies specific Azure SQL DB pre-requisites](./includes/access-policies-prerequisites-azure-sql-db.md)]
Scans can be managed or run again on completion
### Register the data source and enable Data use management The Azure SQL Database resource needs to be registered first with Microsoft Purview before you can create access policies. To register your resources, follow the **Prerequisites** and **Register** sections of this guide:
-[Register Azure SQL Database](./register-scan-azure-sql-database.md#prerequisites)
+[Register Azure SQL Database in Microsoft Purview](./register-scan-azure-sql-database.md#prerequisites)
After you've registered the data source, you'll need to enable Data Use Management. This is a pre-requisite before you can create policies on the data source. Data Use Management can impact the security of your data, as it delegates to certain Microsoft Purview roles managing access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
purview Troubleshoot Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/troubleshoot-connections.md
To verify this, do the following steps:
If you don't see your Microsoft Purview managed identity listed, then follow the steps in [Create and manage credentials for scans](manage-credentials.md) to add it.
+## Scans no longer run
+
+If your Microsoft Purview scans used to run successfully but are now failing, check these things:
+1. Have the credentials for your resource changed or been rotated? If so, you'll need to update your scan to use the correct credentials.
+1. Is an [Azure Policy](../governance/policy/overview.md) preventing **updates to Storage accounts**? If so, follow the [Microsoft Purview exception tag guide](create-azure-purview-portal-faq.md) to create an exception for Microsoft Purview accounts.
+1. Are you using a self-hosted integration runtime? Check that it's up to date with the latest software and that it's connected to your network.
+ ## Next steps - [Browse the Microsoft Purview Data catalog](how-to-browse-catalog.md)
resource-mover Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/common-questions.md
description: Get answers to common questions about Azure Resource Mover
+ Last updated 02/21/2021 - # Common questions
No. Resource Mover service doesn't store customer data, it only stores metadata
### Where is the metadata for moving across regions stored?
-It's stored in an [Azure Cosmos](../cosmos-db/database-encryption-at-rest.md) database, and in [Azure blob storage](../storage/common/storage-service-encryption.md), in a Microsoft subscription. Currently metadata is stored in East US 2 and North Europe. We will expand this coverage to other regions. This doesn't restrict you from moving resources across any public regions.
+It's stored in an [Azure Cosmos DB](../cosmos-db/database-encryption-at-rest.md) database, and in [Azure Blob storage](../storage/common/storage-service-encryption.md), in a Microsoft subscription. Currently metadata is stored in East US 2 and North Europe. We will expand this coverage to other regions. This doesn't restrict you from moving resources across any public regions.
### Is the collected metadata encrypted?
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
The following table provides a brief description of each built-in role. Click th
> | [Azure Connected SQL Server Onboarding](#azure-connected-sql-server-onboarding) | Allows for read and write access to Azure resources for SQL Server on Arc-enabled servers. | e8113dce-c529-4d33-91fa-e9b972617508 | > | [Cosmos DB Account Reader Role](#cosmos-db-account-reader-role) | Can read Azure Cosmos DB account data. See [DocumentDB Account Contributor](#documentdb-account-contributor) for managing Azure Cosmos DB accounts. | fbdf93bf-df7d-467e-a4d2-9458aa1360c8 | > | [Cosmos DB Operator](#cosmos-db-operator) | Lets you manage Azure Cosmos DB accounts, but not access data in them. Prevents access to account keys and connection strings. | 230815da-be43-4aae-9cb4-875f7bd000aa |
-> | [CosmosBackupOperator](#cosmosbackupoperator) | Can submit restore request for a Cosmos DB database or a container for an account | db7b14f2-5adf-42da-9f96-f2ee17bab5cb |
-> | [CosmosRestoreOperator](#cosmosrestoreoperator) | Can perform restore action for Cosmos DB database account with continuous backup mode | 5432c526-bc82-444a-b7ba-57c5b0b5b34f |
+> | [CosmosBackupOperator](#cosmosbackupoperator) | Can submit restore request for an Azure Cosmos DB database or a container for an account | db7b14f2-5adf-42da-9f96-f2ee17bab5cb |
+> | [CosmosRestoreOperator](#cosmosrestoreoperator) | Can perform restore action for an Azure Cosmos DB database account with continuous backup mode | 5432c526-bc82-444a-b7ba-57c5b0b5b34f |
> | [DocumentDB Account Contributor](#documentdb-account-contributor) | Can manage Azure Cosmos DB accounts. Azure Cosmos DB is formerly known as DocumentDB. | 5bd9cd88-fe45-4216-938b-f97437e15450 | > | [Redis Cache Contributor](#redis-cache-contributor) | Lets you manage Redis caches, but not access to them. | e0f68234-74aa-48ed-b826-c38b57376e17 | > | [SQL DB Contributor](#sql-db-contributor) | Lets you manage SQL databases, but not access to them. Also, you can't manage their security-related policies or their parent SQL servers. | 9b7fa17d-e63e-47b0-bb0a-15c516ac86ec |
Lets you manage Azure Cosmos DB accounts, but not access data in them. Prevents
### CosmosBackupOperator
-Can submit restore request for a Cosmos DB database or a container for an account [Learn more](../cosmos-db/role-based-access-control.md)
+Can submit restore request for an Azure Cosmos DB database or a container for an account [Learn more](../cosmos-db/role-based-access-control.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Azure service: [Azure Data Lake Store](../storage/blobs/data-lake-storage-introd
> | Microsoft.DataLakeStore/accounts/delete | Delete a DataLakeStore account. | > | Microsoft.DataLakeStore/accounts/enableKeyVault/action | Enable KeyVault for a DataLakeStore account. | > | Microsoft.DataLakeStore/accounts/Superuser/action | Grant Superuser on Data Lake Store when granted with Microsoft.Authorization/roleAssignments/write. |
-> | Microsoft.DataLakeStore/accounts/cosmosCertMappings/read | Get information about a Cosmos Cert Mapping. |
-> | Microsoft.DataLakeStore/accounts/cosmosCertMappings/write | Create or update a Cosmos Cert Mapping. |
-> | Microsoft.DataLakeStore/accounts/cosmosCertMappings/delete | Delete a Cosmos Cert Mapping. |
+> | Microsoft.DataLakeStore/accounts/cosmosCertMappings/read | Get information about an Azure Cosmos DB Cert Mapping. |
+> | Microsoft.DataLakeStore/accounts/cosmosCertMappings/write | Create or update an Azure Cosmos DB Cert Mapping. |
+> | Microsoft.DataLakeStore/accounts/cosmosCertMappings/delete | Delete an Azure Cosmos DB Cert Mapping. |
> | Microsoft.DataLakeStore/accounts/eventGridFilters/read | Get an EventGrid Filter. | > | Microsoft.DataLakeStore/accounts/eventGridFilters/write | Create or update an EventGrid Filter. | > | Microsoft.DataLakeStore/accounts/eventGridFilters/delete | Delete an EventGrid Filter. |
role-based-access-control Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure RBAC description: Lists Azure Policy Regulatory Compliance controls available for Azure role-based access control (Azure RBAC). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md
na Previously updated : 09/26/2022 Last updated : 10/06/2022
Removing the last Owner role assignment for a subscription is not supported to a
If you want to cancel your subscription, see [Cancel your Azure subscription](../cost-management-billing/manage/cancel-azure-subscription.md).
-You are allowed to remove the last Owner (or User Access Administrator) role assignment at subscription scope, if you are the Global Administrator for the tenant. In this case, there is no constraint for deletion. However, if the call comes from some other principal, then you won't be able to remove the last Owner role assignment at subscription scope.
+You are allowed to remove the last Owner (or User Access Administrator) role assignment at subscription scope, if you are a Global Administrator for the tenant or a classic administrator (Service Administrator or Co-Administrator) for the subscription. In this case, there is no constraint for deletion. However, if the call comes from some other principal, then you won't be able to remove the last Owner role assignment at subscription scope.
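For reference, removing an Owner role assignment at subscription scope from PowerShell looks roughly like the following sketch; the object ID and subscription ID are placeholders:

```powershell
# Placeholder IDs; requires the Az.Resources module. For the last Owner assignment,
# the caller must be a Global Administrator for the tenant or a classic administrator.
Remove-AzRoleAssignment -ObjectId "11111111-1111-1111-1111-111111111111" `
    -RoleDefinitionName "Owner" `
    -Scope "/subscriptions/00000000-0000-0000-0000-000000000000"
```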
### Symptom - Role assignment is not moved after moving a resource
You added managed identities to a group and assigned a role to that group. The b
It can take several hours for changes to a managed identity's group or role membership to take effect. For more information, see [Limitation of using managed identities for authorization](../active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md#limitation-of-using-managed-identities-for-authorization).
+### Symptom - Removing role assignments using PowerShell takes several minutes
+
+You use the [Remove-AzRoleAssignment](/powershell/module/az.resources/remove-azroleassignment) command to remove a role assignment. You then use the [Get-AzRoleAssignment](/powershell/module/az.resources/get-azroleassignment) command to verify the role assignment was removed for a security principal. For example:
+
+```powershell
+Get-AzRoleAssignment -ObjectId $securityPrincipalObject.Id
+```
+
+The [Get-AzRoleAssignment](/powershell/module/az.resources/get-azroleassignment) command indicates that the role assignment was not removed. However, if you wait 5-10 minutes and run [Get-AzRoleAssignment](/powershell/module/az.resources/get-azroleassignment) again, the output indicates the role assignment was removed.
+
+**Cause**
+
+The role assignment has been removed. However, to improve performance, PowerShell uses a cache when listing role assignments. There can be a delay of around 10 minutes for the cache to be refreshed.
+
+**Solution**
+
+Instead of listing the role assignments for a security principal, list all the role assignments at the subscription scope and filter the output. For example, the following command:
+
+```powershell
+$validateRemovedRoles = Get-AzRoleAssignment -ObjectId $securityPrincipalObject.Id
+```
+
+Can be replaced with this command instead:
+
+```powershell
+$validateRemovedRoles = Get-AzRoleAssignment -Scope /subscriptions/$subId | Where-Object -Property ObjectId -EQ $securityPrincipalObject.Id
+```
+ ## Custom roles ### Symptom - Unable to update a custom role
search Cognitive Search Common Errors Warnings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-common-errors-warnings.md
+ Last updated 06/23/2022
Indexer was unable to read the document from the data source. This can happen du
| Reason | Details/Example | Resolution | | | | | | Inconsistent field types across different documents | `Type of value has a mismatch with column type. Couldn't store '{47.6,-122.1}' in authors column. Expected type is JArray.` `Error converting data type nvarchar to float.` `Conversion failed when converting the nvarchar value '12 months' to data type int.` `Arithmetic overflow error converting expression to data type int.` | Ensure that the type of each field is the same across different documents. For example, if the first document `'startTime'` field is a DateTime, and in the second document it's a string, this error will be hit. |
-| Errors from the data source's underlying service | From Cosmos DB: `{"Errors":["Request rate is large"]}` | Check your storage instance to ensure it's healthy. You may need to adjust your scaling/partitioning. |
+| Errors from the data source's underlying service | From Azure Cosmos DB: `{"Errors":["Request rate is large"]}` | Check your storage instance to ensure it's healthy. You might need to adjust your scaling or partitioning. |
| Transient issues | `A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host` | Occasionally there are unexpected connectivity issues. Try running the document through your indexer again later. | <a name="could-not-extract-document-content"></a>
The indexer ran the skill in the skillset, but the response from the Web API req
## `Warning: The current indexer configuration does not support incremental progress`
-This warning only occurs for Cosmos DB data sources.
+This warning only occurs for Azure Cosmos DB data sources.
Incremental progress during indexing ensures that if indexer execution is interrupted by transient failures or execution time limit, the indexer can pick up where it left off next time it runs, instead of having to re-index the entire collection from scratch. This is especially important when indexing large collections.
To work around this warning, determine what the text encoding for this blob is a
<a name="cosmos-db-collection-has-a-lazy-indexing-policy"></a>
-## `Warning: Cosmos DB collection 'X' has a Lazy indexing policy. Some data may be lost`
+## `Warning: Azure Cosmos DB collection 'X' has a Lazy indexing policy. Some data may be lost`
Collections with [Lazy](../cosmos-db/index-policy.md#indexing-mode) indexing policies can't be queried consistently, resulting in your indexer missing data. To work around this warning, change your indexing policy to Consistent. ## `Warning: The document contains very long words (longer than 64 characters). These words may result in truncated and/or unreliable model predictions.`
-This warning is passed from the Language service of Azure Cognitive Services. In some cases, it's safe to ignore this warning, such as when your document contains a long URL (which likely isn't a key phrase or driving sentiment, etc.). Be aware that when a word is longer than 64 characters, it will be truncated to 64 characters which can affect model predictions.
+This warning is passed from the Language service of Azure Cognitive Services. In some cases, it's safe to ignore this warning, such as when your document contains a long URL (which likely isn't a key phrase or driving sentiment, etc.). Be aware that when a word is longer than 64 characters, it will be truncated to 64 characters, which can affect model predictions.
search Cognitive Search Concept Annotations Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-annotations-syntax.md
description: Explains the annotation syntax and how to reference inputs and outp
+ Last updated 09/16/2022
Paths to an annotation are specified in the "context" and "source" properties of
:::image type="content" source="media/cognitive-search-annotations-syntax/content-source-annotation-path.png" alt-text="Screenshot of a skillset definition with context and source elements highlighted.":::
-The example in the screenshot illustrates the path for an item in a Cosmos DB collection.
+The example in the screenshot illustrates the path for an item in an Azure Cosmos DB collection.
+ "context" path is `/document/HotelId` because the collection is partitioned into documents by the `/HotelId` field.
All paths start with `/document`. An enriched document is created in the "docume
The following list includes several common examples: + `/document` is the root node and indicates an entire blob in Azure Storage, or a row in a SQL table.
-+ `/document/{key}` is the syntax for a document or item in a Cosmos DB collection, where `{key}` is the actual key, such as `/document/HotelId` in the previous example.
++ `/document/{key}` is the syntax for a document or item in an Azure Cosmos DB collection, where `{key}` is the actual key, such as `/document/HotelId` in the previous example. + `/document/content` specifies the "content" property of a JSON blob. + `/document/{field}` is the syntax for an operation performed on a specific field, such as translating the `/document/Description` field, seen in the previous example. + `/document/pages/*` or `/document/sentences/*` become the context if you're breaking a large document into smaller chunks for processing. If "context" is `/document/pages/*`, the skill executes once over each page in the document. Because there might be more than one page or sentence, you'll append `/*` to catch them all.
search Cognitive Search Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-intro.md
Title: AI enrichment concepts
-description: Content extraction, natural language processing (NLP), and image processing are used to create searchable content in Azure Cognitive Search indexes. AI enrichment can use both pre-defined cognitive skills and custom AI algorithms.
+description: Content extraction, natural language processing (NLP), and image processing can create searchable content in Azure Cognitive Search indexes.
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-how-to-debug-skillset.md
+ Last updated 06/15/2022
A debug session is a cached indexer and skillset execution, scoped to a single d
A Debug Session works with all generally available [indexer data sources](search-data-sources-gallery.md) and most preview data sources. The following list notes the exceptions:
-+ The MongoDB API (preview) of Cosmos DB is currently not supported.
++ Azure Cosmos DB for MongoDB is currently not supported.
-+ For the SQL API of Cosmos DB, if a row fails during index and there's no corresponding metadata, the debug session might not pick the correct row.
++ For the Azure Cosmos DB for NoSQL, if a row fails during index and there's no corresponding metadata, the debug session might not pick the correct row.
-+ For the SQL API of Cosmos DB, if a partitioned collection was previously non-partitioned, a Debug Session won't find the document.
++ For Azure Cosmos DB for NoSQL, if a partitioned collection was previously non-partitioned, a Debug Session won't find the document. ## Create a debug session
At this point, new requests from your debug session should now be sent to your l
Now that you understand the layout and capabilities of the Debug Sessions visual editor, try the tutorial for a hands-on experience. > [!div class="nextstepaction"]
-> [Tutorial: Explore Debug sessions](./cognitive-search-tutorial-debug-sessions.md)
+> [Tutorial: Explore Debug sessions](./cognitive-search-tutorial-debug-sessions.md)
search Cognitive Search Output Field Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-output-field-mapping.md
description: Export the enriched content created by a skillset by mapping its ou
+ Last updated 09/14/2022
If your source data is composed of nested or hierarchical JSON, you can't use fi
This section walks you through an import process that produces a one-to-one reflection of a complex document on both the source and target sides. Next, it uses the same source document to illustrate the retrieval and flattening of individual nodes into string collections.
-Here's an example of a document in Cosmos DB with nested JSON:
+Here's an example of a document in Azure Cosmos DB with nested JSON:
```json {
Here's a sample indexer definition that executes the import (notice there are no
} ```
-The result is the following sample search document, similar to the original in Cosmos DB.
+The result is the following sample search document, similar to the original in Azure Cosmos DB.
```json {
Results from the above definition are as follows. Simplifying the structure lose
+ [Define field mappings in a search indexer](search-indexer-field-mappings.md)
+ [AI enrichment overview](cognitive-search-concept-intro.md)
-+ [Skillset overview](cognitive-search-working-with-skillsets.md)
++ [Skillset overview](cognitive-search-working-with-skillsets.md)
search Cognitive Search Skill Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-ocr.md
+ Last updated 06/24/2022
The **Optical character recognition (OCR)** skill recognizes printed and handwri
An OCR skill uses the machine learning models provided by [Computer Vision](../cognitive-services/computer-vision/overview.md) API [v3.2](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) in Cognitive Services. The **OCR** skill maps to the following functionality:
-+ For the languages listed under [Cognitive Services Computer Vision language support](../cognitive-services/computer-vision/language-support.md#optical-character-recognition-ocr), the [Read API](../cognitive-services/computer-vision/overview-ocr.md#read-api) is used.
++ For the languages listed under [Cognitive Services Computer Vision language support](../cognitive-services/computer-vision/language-support.md#optical-character-recognition-ocr), the [Read API](../cognitive-services/computer-vision/overview-ocr.md) is used.
+ For Greek and Serbian Cyrillic, the [legacy OCR](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) API is used.

The **OCR** skill extracts text from image files. Supported file formats include:
search Cognitive Search Working With Skillsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-working-with-skillsets.md
description: Skillsets are where you author an AI enrichment pipeline in Azure C
+ Last updated 07/14/2022
The initial content is metadata and the *root node* (`document/content`). The ro
||||
|Blob Storage|/document/content<br>/document/normalized_images/*<br>…|/document/{key1}<br>/document/{key2}<br>…|
|Azure SQL|/document/{column1}<br>/document/{column2}<br>…|N/A |
-|Cosmos DB|/document/{key1}<br>/document/{key2}<br>…|N/A|
+|Azure Cosmos DB|/document/{key1}<br>/document/{key2}<br>…|N/A|
As skills execute, output is added to the enrichment tree as new nodes. If skill execution is over the entire document, nodes are added at the first level under the root.
search Query Lucene Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/query-lucene-syntax.md
Last updated 04/04/2022
# Lucene query syntax in Azure Cognitive Search
-When creating queries, you can opt for the [Lucene Query Parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/classic/package-summary.html) syntax for specialized query forms: wildcard, fuzzy search, proximity search, regular expressions. Much of the Lucene Query Parser syntax is [implemented intact in Azure Cognitive Search](search-lucene-query-architecture.md), with the exception of *range searches* which are constructed through **`$filter`** expressions.
+When creating queries in Azure Cognitive Search, you can opt for the full [Lucene Query Parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/classic/package-summary.html) syntax for specialized query forms: wildcard, fuzzy search, proximity search, regular expressions. Much of the Lucene Query Parser syntax is [implemented intact in Azure Cognitive Search](search-lucene-query-architecture.md), except for *range searches*, which are constructed through **`$filter`** expressions.
To use full Lucene syntax, you'll set the queryType to "full" and pass in a query expression patterned for wildcard, fuzzy search, or one of the other query forms supported by the full syntax. In REST, query expressions are provided in the **`search`** parameter of a [Search Documents (REST API)](/rest/api/searchservice/search-documents) request.
POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
While not specific to any query type, the **`searchMode`** parameter is relevant in this example. Whenever operators are on the query, you should generally set `searchMode=all` to ensure that *all* of the criteria is matched.
-For additional examples, see [Lucene query syntax examples](search-query-lucene-examples.md). For details about the query request and parameters, including searchMode, see [Search Documents (REST API)](/rest/api/searchservice/Search-Documents).
+For more examples, see [Lucene query syntax examples](search-query-lucene-examples.md). For details about the query request and parameters, including searchMode, see [Search Documents (REST API)](/rest/api/searchservice/Search-Documents).
## <a name="bkmk_syntax"></a> Syntax fundamentals
Special characters that require escaping include the following:
### Encoding unsafe and reserved characters in URLs
-Please ensure all unsafe and reserved characters are encoded in a URL. For example, '#' is an unsafe character because it is a fragment/anchor identifier in a URL. The character must be encoded to `%23` if used in a URL. '&' and '=' are examples of reserved characters as they delimit parameters and specify values in Azure Cognitive Search. Please see [RFC1738: Uniform Resource Locators (URL)](https://www.ietf.org/rfc/rfc1738.txt) for more details.
+Ensure all unsafe and reserved characters are encoded in a URL. For example, '#' is an unsafe character because it's a fragment/anchor identifier in a URL. The character must be encoded to `%23` if used in a URL. '&' and '=' are examples of reserved characters as they delimit parameters and specify values in Azure Cognitive Search. See [RFC1738: Uniform Resource Locators (URL)](https://www.ietf.org/rfc/rfc1738.txt) for more details.
Unsafe characters are ``" ` < > # % { } | \ ^ ~ [ ]``. Reserved characters are `; / ? : @ = + &`.
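As an illustration (a hypothetical GET request against the hotels-sample index shown earlier), a search term such as `Budget & Spa` must have its space and ampersand percent-encoded before it's placed in the URL:

```http
GET /indexes/hotels-sample-index/docs?search=Budget%20%26%20Spa&api-version=2020-06-30
```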
You can embed Boolean operators in a query string to improve the precision of a
|--|--|--|--|
| AND | `+` | `wifi AND luxury` | Specifies terms that a match must contain. In the example, the query engine will look for documents containing both `wifi` and `luxury`. The plus character (`+`) can also be used directly in front of a term to make it required. For example, `+wifi +luxury` stipulates that both terms must appear somewhere in the field of a single document.|
| OR | (none) <sup>1</sup> | `wifi OR luxury` | Finds a match when either term is found. In the example, the query engine will return a match on documents containing either `wifi` or `luxury` or both. Because OR is the default conjunction operator, you could also leave it out, such that `wifi luxury` is the equivalent of `wifi OR luxury`.|
-| NOT | `!`, `-` | `wifi ΓÇôluxury` | Returns matches on documents that exclude the term. For example, `wifi ΓÇôluxury` will search for documents that have the `wifi` term but not `luxury`. </p>The `searchMode` parameter on a query request controls whether a term with the NOT operator is ANDed or ORed with other terms in the query (assuming there's no boolean operators on the other terms). Valid values include `any` or `all`. </p>`searchMode=any` increases the recall of queries by including more results, and by default `-` will be interpreted as "OR NOT". For example, `wifi -luxury` will match documents that either contain the term `wifi` or those that don't contain the term `luxury`. </p>`searchMode=all` increases the precision of queries by including fewer results, and by default - will be interpreted as "AND NOT". For example, `wifi -luxury` will match documents that contain the term `wifi` and don't contain the term "luxury". This is arguably a more intuitive behavior for the `-` operator. Therefore, you should consider using `searchMode=all` instead of `searchMode=any` if you want to optimize searches for precision instead of recall, *and* Your users frequently use the `-` operator in searches.</p>When deciding on a `searchMode` setting, consider the user interaction patterns for queries in various applications. Users who are searching for information are more likely to include an operator in a query, as opposed to e-commerce sites that have more built-in navigation structures. |
+| NOT | `!`, `-` | `wifi -luxury` | Returns matches on documents that exclude the term. For example, `wifi -luxury` will search for documents that have the `wifi` term but not `luxury`. </p>The `searchMode` parameter on a query request controls whether a term with the NOT operator is ANDed or ORed with other terms in the query (assuming there are no Boolean operators on the other terms). Valid values include `any` or `all`. </p>`searchMode=any` increases the recall of queries by including more results, and by default `-` will be interpreted as "OR NOT". For example, `wifi -luxury` will match documents that either contain the term `wifi` or those that don't contain the term `luxury`. </p>`searchMode=all` increases the precision of queries by including fewer results, and by default `-` will be interpreted as "AND NOT". For example, `wifi -luxury` will match documents that contain the term `wifi` and don't contain the term "luxury". This is arguably a more intuitive behavior for the `-` operator. Therefore, you should consider using `searchMode=all` instead of `searchMode=any` if you want to optimize searches for precision instead of recall, *and* your users frequently use the `-` operator in searches.</p> When deciding on a `searchMode` setting, consider the user interaction patterns for queries in various applications. Users who are searching for information are more likely to include an operator in a query, as opposed to e-commerce sites that have more built-in navigation structures. |
-<sup>1</sup> The `|` character is not supported for OR operations.
+<sup>1</sup> The `|` character isn't supported for OR operations.
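To see the `searchMode` behavior described above in an actual request, here's a sketch of a Search Documents POST body (the index and field contents are assumed). With `searchMode` set to `all`, the `wifi -luxury` expression is evaluated as "wifi AND NOT luxury".

```json
{
  "search": "wifi -luxury",
  "queryType": "full",
  "searchMode": "all",
  "count": true
}
```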
## <a name="bkmk_fields"></a> Fielded search
Fuzzy search can only be applied to terms, not phrases, but you can append the t
## <a name="bkmk_proximity"></a> Proximity search
-Proximity searches are used to find terms that are near each other in a document. Insert a tilde "~" symbol at the end of a phrase followed by the number of words that create the proximity boundary. For example, `"hotel airport"~5` will find the terms "hotel" and "airport" within 5 words of each other in a document.
+Proximity searches are used to find terms that are near each other in a document. Insert a tilde "~" symbol at the end of a phrase followed by the number of words that create the proximity boundary. For example, `"hotel airport"~5` will find the terms "hotel" and "airport" within five words of each other in a document.
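Sent as a request body, the proximity example above looks like the following sketch; note that the phrase quotes must be escaped in JSON:

```json
{
  "search": "\"hotel airport\"~5",
  "queryType": "full"
}
```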
## <a name="bkmk_termboost"></a> Term boosting
-Term boosting refers to ranking a document higher if it contains the boosted term, relative to documents that do not contain the term. This differs from scoring profiles in that scoring profiles boost certain fields, rather than specific terms.
+Term boosting refers to ranking a document higher if it contains the boosted term, relative to documents that don't contain the term. This differs from scoring profiles in that scoring profiles boost certain fields, rather than specific terms.
The following example helps illustrate the differences. Suppose that there's a scoring profile that boosts matches in a certain field, say *genre* in the [musicstoreindex example](index-add-scoring-profiles.md#bkmk_ex). Term boosting could be used to further boost certain search terms higher than others. For example, `rock^2 electronic` will boost documents that contain the search terms in the genre field higher than other searchable fields in the index. Further, documents that contain the search term *rock* will be ranked higher than the other search term *electronic* as a result of the term boost value (2).
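A minimal request body for the term-boosting example above might look like the following sketch; restricting `searchFields` to `genre` is an assumption based on the musicstoreindex description, not something the boost itself requires.

```json
{
  "search": "rock^2 electronic",
  "queryType": "full",
  "searchFields": "genre"
}
```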
The following example helps illustrate the differences. Suppose that there's a s
For example, to find documents containing "motel" or "hotel", specify `/[mh]otel/`. Regular expression searches are matched against single words.
-Some tools and languages impose additional escape character requirements. For JSON, strings that include a forward slash are escaped with a backward slash: "microsoft.com/azure/" becomes `search=/.*microsoft.com\/azure\/.*/` where `search=/.* <string-placeholder>.*/` sets up the regular expression, and `microsoft.com\/azure\/` is the string with an escaped forward slash.
+Some tools and languages impose other escape character requirements. For JSON, strings that include a forward slash are escaped with a backward slash: "microsoft.com/azure/" becomes `search=/.*microsoft.com\/azure\/.*/` where `search=/.* <string-placeholder>.*/` sets up the regular expression, and `microsoft.com\/azure\/` is the string with an escaped forward slash.
Two common symbols in regex queries are `.` and `*`. A `.` matches any one character and a `*` matches the previous character zero or more times. For example, `/be./` will match the terms "bee" and "bet" while `/be*/` would match "be", "bee", and "beee" but not "bet". Together, `.*` allow you to match any series of characters so `/be.*/` would match any term that starts with "be" such as "better".
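For instance, the `/[mh]otel/` pattern mentioned earlier can be sent as-is in a request body, because it contains no characters that need JSON escaping (a sketch):

```json
{
  "search": "/[mh]otel/",
  "queryType": "full"
}
```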
Suffix matching requires the regular expression forward slash `/` delimiters. Ge
### Impact of an analyzer on wildcard queries
-During query parsing, queries that are formulated as prefix, suffix, wildcard, or regular expressions are passed as-is to the query tree, bypassing [lexical analysis](search-lucene-query-architecture.md#stage-2-lexical-analysis). Matches will only be found if the index contains the strings in the format your query specifies. In most cases, you will need an analyzer during indexing that preserves string integrity so that partial term and pattern matching succeeds. For more information, see [Partial term search in Azure Cognitive Search queries](search-query-partial-matching.md).
+During query parsing, queries that are formulated as prefix, suffix, wildcard, or regular expressions are passed as-is to the query tree, bypassing [lexical analysis](search-lucene-query-architecture.md#stage-2-lexical-analysis). Matches will only be found if the index contains the strings in the format your query specifies. In most cases, you'll need an analyzer during indexing that preserves string integrity so that partial term and pattern matching succeeds. For more information, see [Partial term search in Azure Cognitive Search queries](search-query-partial-matching.md).
Consider a situation where you may want the search query 'terminat*' to return results that contain terms such as 'terminate', 'termination' and 'terminates'.
-If you were to use the en.lucene (English Lucene) analyzer, it would apply aggressive stemming of each term. For example, 'terminate', 'termination', 'terminates' will all be tokenized down to the token 'termi' in your index. On the other side, terms in queries using wildcards or fuzzy search are not analyzed at all., so there would be no results that would match the 'terminat*' query.
+If you were to use the en.lucene (English Lucene) analyzer, it would apply aggressive stemming to each term. For example, 'terminate', 'termination', and 'terminates' will all be tokenized down to the token 'termi' in your index. However, terms in queries that use wildcards or fuzzy search aren't analyzed at all, so there would be no results that would match the 'terminat*' query.
On the other side, the Microsoft analyzers (in this case, the en.microsoft analyzer) are a bit more advanced and use lemmatization instead of stemming. This means that all generated tokens should be valid English words. For example, 'terminate', 'terminates' and 'termination' will mostly stay whole in the index, and would be a preferable choice for scenarios that depend a lot on wildcards and fuzzy search.
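If wildcard and fuzzy queries matter for a given field, the analyzer is assigned in the field definition at design time. The following field sketch (the field name is illustrative) uses the en.microsoft analyzer discussed above:

```json
{
  "name": "Description",
  "type": "Edm.String",
  "searchable": true,
  "retrievable": true,
  "analyzer": "en.microsoft"
}
```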
Azure Cognitive Search uses frequency-based scoring ([BM25](https://en.wikipedia
## Special characters
-In some circumstances, you may want to search for a special character, like an '❤' emoji or the '€' sign. In such cases, make sure that the analyzer you use does not filter those characters out. The standard analyzer bypasses many special characters, excluding them from your index.
+In some circumstances, you may want to search for a special character, like an '❤' emoji or the '€' sign. In such cases, make sure that the analyzer you use doesn't filter those characters out. The standard analyzer bypasses many special characters, excluding them from your index.
Analyzers that will tokenize special characters include the "whitespace" analyzer, which treats any character sequence separated by whitespace as a token (so the "❤" string would be considered a token). Also, a language analyzer like the Microsoft English analyzer ("en.microsoft") would take the "€" string as a token. You can [test an analyzer](/rest/api/searchservice/test-analyzer) to see what tokens it generates for a given query.
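For example, you might POST the following body to the Analyze Text endpoint (`/indexes/<index-name>/analyze?api-version=2020-06-30`) to check whether the '❤' and '€' characters survive tokenization. The sample text and analyzer choice are illustrative only.

```json
{
  "text": "I ❤ the €5 special",
  "analyzer": "en.microsoft"
}
```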
Field grouping is similar but scopes the grouping to a single field. For example
## Query size limits
-Azure Cognitive Search imposes limits on query size and composition because unbounded queries can destabilize your search service. There are limits on query size and composition (the number of clauses). Limits also exist for the length of prefix search and for the complexity of regex search and wildcard search. If your application generates search queries programmatically, we recommend designing it in such a way that it does not generate queries of unbounded size.
+Azure Cognitive Search imposes limits on query size and composition because unbounded queries can destabilize your search service. There are limits on query size and composition (the number of clauses). Limits also exist for the length of prefix search and for the complexity of regex search and wildcard search. If your application generates search queries programmatically, we recommend designing it in such a way that it doesn't generate queries of unbounded size.
For more information on query limits, see [API request limits](search-limits-quotas-capacity.md#api-request-limits).
search Resource Demo Sites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-demo-sites.md
+
+ Title: Demo sites for search features
+
+description: This page provides links to demo sites that are built on Azure Cognitive Search. Try a web app to see how search performs.
+++++ Last updated : 09/20/2022++
+# Demos - Azure Cognitive Search
+
+Demos are hosted apps that showcase search and AI enrichment functionality in Azure Cognitive Search. Several of these demos include source code on GitHub so that you can see how they were made.
+
+The following demos are built and hosted by Microsoft.
+
+| Demo name | Description | Source code |
+|--|--|--|
+| [NYC Jobs demo](https://azjobsdemo.azurewebsites.net/) | An ASP.NET app with facets, filters, details, geo-search (map controls). | [https://github.com/Azure-Samples/search-dotnet-asp-net-mvc-jobs](https://github.com/Azure-Samples/search-dotnet-asp-net-mvc-jobs) |
+| [JFK files demo](https://jfk-demo-2019.azurewebsites.net/#/) | An ASP.NET web app built on a public data set, transformed with custom and predefined skills to extract searchable content from scanned document (JPEG) files. [Learn more...](https://www.microsoft.com/ai/ai-lab-jfk-files) | [https://github.com/Microsoft/AzureSearch_JFK_Files](https://github.com/Microsoft/AzureSearch_JFK_Files) |
+| [Semantic search for retail](https://brave-meadow-0f59c9b1e.1.azurestaticapps.net/) | Web app for a fictitious online retailer, "Terra". | Not available |
+| [Semantic search for content](https://semantic-search-demo-web.azurewebsites.net/) | Web app that includes search index for three data sets: Covid-19, MS-Marco, and Microsoft documentation. | Not available |
+| [Wolters Kluwer demo search app](https://wolterskluwereap.azurewebsites.net/) | Financial files demo that uses custom skills and forms recognition to make fictitious business documents searchable. | Not available |
search Resource Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-tools.md
+
+ Title: Tools for search indexing
+
+description: Use existing code samples or build your own tools for working with a search index in Azure Cognitive Search.
+++++ Last updated : 09/20/2022++
+# Tools - Azure Cognitive Search
+
+Tools are provided as source code that you can download, modify, and build to create an app that helps you develop or maintain a search solution.
+
+The following tools are built by engineers at Microsoft, but aren't part of the Azure Cognitive Search service and aren't covered by a Service Level Agreement (SLA).
+
+| Tool name | Description | Source code |
+|--|--|--|
+| [Azure Cognitive Search Lab readme](https://github.com/Azure-Samples/azure-search-lab/blob/main/README.md) | Connects to your search service with a Web UI that exercises the full REST API, including the ability to edit a live search index. | [https://github.com/Azure-Samples/azure-search-lab](https://github.com/Azure-Samples/azure-search-lab) |
+| [Knowledge Mining Accelerator readme](https://github.com/Azure-Samples/azure-search-knowledge-mining/blob/main/README.md) | Code and docs to jump start a knowledge store using your data. | [https://github.com/Azure-Samples/azure-search-knowledge-mining](https://github.com/Azure-Samples/azure-search-knowledge-mining) |
+| [Back up and Restore readme](https://github.com/liamca/azure-search-backup-restore) | Download a search index to your local device and then upload the index to a new search service. | [https://github.com/liamca/azure-search-backup-restore](https://github.com/liamca/azure-search-backup-restore) |
+| [Performance testing readme](https://github.com/Azure-Samples/azure-search-performance-testing/blob/main/README.md) | This pipeline helps you load test Azure Cognitive Search. It uses Apache JMeter as an open-source load and performance testing tool, and Terraform to dynamically provision and destroy the required infrastructure on Azure. | [https://github.com/Azure-Samples/azure-search-performance-testing](https://github.com/Azure-Samples/azure-search-performance-testing) |
search Resource Training https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-training.md
+
+ Title: Search training modules
+
+description: Get hands-on training on Azure Cognitive Search from Microsoft and other third-party training providers.
+++++ Last updated : 09/20/2022++
+# Training - Azure Cognitive Search
+
+Training modules deliver an end-to-end experience that helps you build skills and develop insights while working through a progression of exercises. Visit the following links to begin learning with prepared lessons from Microsoft and other training providers.
+
++ [Introduction to Azure Cognitive Search (Microsoft)](/training/modules/intro-to-azure-search/)
++ [Implement knowledge mining with Azure Cognitive Search (Microsoft)](/training/paths/implement-knowledge-mining-azure-cognitive-search/)
++ [Add search to apps (Pluralsight)](https://www.pluralsight.com/courses/azure-adding-search-abilities-apps)
++ [Developer course (Pluralsight)](https://www.pluralsight.com/courses/microsoft-azure-textual-content-search-enabling)
search Search Api Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-migration.md
+ Last updated 10/03/2022
Existing code written against earlier API versions will break on api-version=201
#### Indexer for Azure Cosmos DB - datasource is now `"type": "cosmosdb"`
-If you're using a [Cosmos DB indexer](search-howto-index-cosmosdb.md ), you must change `"type": "documentdb"` to `"type": "cosmosdb"`.
+If you're using an [Azure Cosmos DB indexer](search-howto-index-cosmosdb.md ), you must change `"type": "documentdb"` to `"type": "cosmosdb"`.
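For reference, a minimal updated data source definition might look like the following sketch; the name and placeholders are illustrative, and only the `type` value changes from the older `documentdb` syntax.

```json
{
  "name": "my-cosmosdb-datasource",
  "type": "cosmosdb",
  "credentials": {
    "connectionString": "AccountEndpoint=https://<account-name>.documents.azure.com;AccountKey=<account-key>;Database=<database-name>"
  },
  "container": { "name": "<collection-name>" }
}
```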
#### Indexer execution result errors no longer have status
search Search Api Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-preview.md
+ Last updated 06/15/2022
Preview features that transition to general availability are removed from this l
| [**Reset Documents**](search-howto-run-reset-indexers.md) | Indexer | Reprocesses individually selected search documents in indexer workloads. | Use the [Reset Documents REST API](/rest/api/searchservice/preview-api/reset-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
| [**SharePoint Indexer**](search-howto-index-sharepoint-online.md) | Indexer data source | New data source for indexer-based indexing of SharePoint content. | [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) is required so that support can be enabled for your subscription on the backend. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview, or the Azure portal. |
| [**MySQL indexer data source**](search-howto-index-mysql.md) | Indexer data source | Index content and metadata from Azure MySQL data sources.| [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) is required so that support can be enabled for your subscription on the backend. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview, [.NET SDK 11.2.1](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype.mysql), and Azure portal. |
-| [**Cosmos DB indexer: MongoDB API, Gremlin API**](search-howto-index-cosmosdb.md) | Indexer data source | For Cosmos DB, SQL API is generally available, but MongoDB and Gremlin APIs are in preview. | For MongoDB and Gremlin, [sign up first](https://aka.ms/azure-cognitive-search/indexer-preview) so that support can be enabled for your subscription on the backend. MongoDB data sources can be configured in the portal. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
+| [**Azure Cosmos DB indexer: Azure Cosmos DB for MongoDB, Azure Cosmos DB for Apache Gremlin**](search-howto-index-cosmosdb.md) | Indexer data source | For Azure Cosmos DB, the SQL API is generally available, but Azure Cosmos DB for MongoDB and Azure Cosmos DB for Apache Gremlin are in preview. | For MongoDB and Gremlin, [sign up first](https://aka.ms/azure-cognitive-search/indexer-preview) so that support can be enabled for your subscription on the backend. MongoDB data sources can be configured in the portal. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
| [**Native blob soft delete**](search-howto-index-changed-deleted-blobs.md) | Indexer data source | The Azure Blob Storage indexer in Azure Cognitive Search will recognize blobs that are in a soft deleted state, and remove the corresponding search document during indexing. | Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
| [**Semantic search**](semantic-search-overview.md) | Relevance (scoring) | Semantic ranking of results, captions, and answers. | Configure semantic search using [Search Documents](/rest/api/searchservice/preview-api/search-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview, and Search Explorer (portal). |
| [**speller**](cognitive-search-aml-skill.md) | Query | Optional spelling correction on query term inputs for simple, full, and semantic queries. | [Search Preview REST API](/rest/api/searchservice/preview-api/search-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview, and Search Explorer (portal). |
search Search Data Sources Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-data-sources-gallery.md
+ layout: LandingPage Last updated 06/21/2022- # Data sources gallery
Extract blob metadata and content, serialized into JSON documents, and imported
-### Azure Cosmos DB (SQL API)
+### Azure Cosmos DB for NoSQL
by [Cognitive Search](search-what-is-azure-search.md)
-Connect to Cosmos DB through the SQL API to extract items from a container, serialized into JSON documents, and imported into a search index as search documents. Configure change tracking to refresh the search index with the latest changes in your database.
+Connect to Azure Cosmos DB for NoSQL to extract items from a container, serialized into JSON documents, and imported into a search index as search documents. Configure change tracking to refresh the search index with the latest changes in your database.
[More details](search-howto-index-cosmosdb.md)
New data sources are issued as preview features. [Sign up](https://aka.ms/azure-
-### Cosmos DB (Gremlin API)
+### Azure Cosmos DB for Apache Gremlin
by [Cognitive Search](search-what-is-azure-search.md)
-Connect to Cosmos DB through the Gremlin API to extract items from a container, serialized into JSON documents, and imported into a search index as search documents. Configure change tracking to refresh the search index with the latest changes in your database.
+Connect to Azure Cosmos DB for Apache Gremlin to extract items from a container, serialized into JSON documents, and imported into a search index as search documents. Configure change tracking to refresh the search index with the latest changes in your database.
[More details](search-howto-index-cosmosdb-gremlin.md)
Connect to Cosmos DB through the Gremlin API to extract items from a container,
-### Cosmos DB (Mongo API)
+### Azure Cosmos DB for MongoDB
by [Cognitive Search](search-what-is-azure-search.md)
-Connect to Cosmos DB through the Mongo API to extract items from a container, serialized into JSON documents, and imported into a search index as search documents. Configure change tracking to refresh the search index with the latest changes in your database.
+Connect to Azure Cosmos DB for MongoDB to extract items from a container, serialized into JSON documents, and imported into a search index as search documents. Configure change tracking to refresh the search index with the latest changes in your database.
[More details](search-howto-index-cosmosdb.md)
The File Share Connector makes it possible to surface content from File Shares (
by [Accenture](https://www.accenture.com)
-The File system connector will crawl content from a file system location, allowing incremental crawling, metadata extraction, filtering of documents by path, supporting Windows/Linux/MacOS file systems.
+The File System connector will crawl content from a file system location, allowing incremental crawling, metadata extraction, filtering of documents by path, supporting Windows/Linux/macOS file systems.
[More details](https://contentanalytics.digital.accenture.com/display/aspire40/File+System+Connector)
search Search Dotnet Mgmt Sdk Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-mgmt-sdk-migration.md
ms.devlang: csharp+ Last updated 10/03/2022
Version 3.0 adds private endpoint protection by restricting access to IP ranges,
| [NetworkRuleSet](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#networkruleset) | IP firewall | Restrict access to a service endpoint to a list of allowed IP addresses. See [Configure IP firewall](service-configure-firewall.md) for concepts and portal instructions. |
| [Shared Private Link Resource](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources) | Private Link | Create a shared private link resource to be used by a search service. |
| [Private Endpoint Connections](/rest/api/searchmanagement/2021-04-01-preview/private-endpoint-connections) | Private Link | Establish and manage connections to a search service through private endpoint. See [Create a private endpoint](service-create-private-endpoint.md) for concepts and portal instructions.|
-| [Private Link Resources](/rest/api/searchmanagement/2021-04-01-preview/private-link-resources) | Private Link | For a search service that has a private endpoint connection, get a list of all services used in the same virtual network. If your search solution includes indexers that pull from Azure data sources (Azure Storage, Cosmos DB, Azure SQL), or uses Cognitive Services or Key Vault, then all of those resources should have endpoints in the virtual network, and this API should return a list. |
+| [Private Link Resources](/rest/api/searchmanagement/2021-04-01-preview/private-link-resources) | Private Link | For a search service that has a private endpoint connection, get a list of all services used in the same virtual network. If your search solution includes indexers that pull from Azure data sources (such as Azure Storage, Azure Cosmos DB, Azure SQL), or uses Cognitive Services or Key Vault, then all of those resources should have endpoints in the virtual network, and this API should return a list. |
| [PublicNetworkAccess](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#publicnetworkaccess)| Private Link | This is a property on Create or Update Service requests. When disabled, private link is the only access modality. |

### Breaking changes
Version 2 of the Azure Search .NET Management SDK is a minor upgrade, so changin
## Next steps
-If you encounter problems, the best forum for posting questions is [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-cognitive-search?tab=Newest). If you find a bug, you can file an issue in the [Azure .NET SDK GitHub repository](https://github.com/Azure/azure-sdk-for-net/issues). Make sure to label your issue title with "[search]".
+If you encounter problems, the best forum for posting questions is [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-cognitive-search?tab=Newest). If you find a bug, you can file an issue in the [Azure .NET SDK GitHub repository](https://github.com/Azure/azure-sdk-for-net/issues). Make sure to label your issue title with "[search]".
search Search Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-filters.md
Title: Filter on search results
-description: Filter by user security identity, language, geo-location, or numeric values to reduce search results on queries in Azure Cognitive Search, a hosted cloud search service on Microsoft Azure.
+description: Apply filter criteria to include or exclude content before query execution in Azure Cognitive Search.
Previously updated : 12/13/2021 Last updated : 10/06/2022 # Filters in Azure Cognitive Search
-A *filter* provides value-based criteria for selecting which documents to include in search results. A filter can be a single value or an OData [filter expression](search-query-odata-filter.md). In contrast with full text search, a filter succeeds only if an exact match is made.
+A *filter* provides value-based criteria for including or excluding content before query execution. For example, you could set filters to select documents based on dates, locations, or some other field. Filters are specified on individual fields. A field definition must be attributed as "filterable" if you want to use it in filter expressions.
-Filters are specified on individual fields. A field definition must be attributed as "filterable" if you want to use it in filter expressions.
+A filter can be a single value or an OData [filter expression](search-query-odata-filter.md). In contrast with full text search, a filter succeeds only if an exact match is made.
## When to use a filter
Common scenarios include the following:
+ [Geospatial search](search-query-odata-geo-spatial-functions.md) uses a filter to pass coordinates of the current location in "find near me" apps and functions that match within an area or by distance.
+ [Security filters](search-security-trimming-for-azure-search.md) pass security identifiers as filter criteria, where a match in the index serves as a proxy for access rights to the document.
-+ Do a "numbers search". Numeric fields are retrievable and can appear in search results, but they are not searchable (subject to full text search) individually. If you need selection criteria based on numeric data, use a filter.
++ Do a "numbers search". Numeric fields are retrievable and can appear in search results, but they aren't searchable (subject to full text search) individually. If you need selection criteria based on numeric data, use a filter. ## How filters are executed
Filtering occurs in tandem with search, qualifying which documents to include in
Filters are OData expressions, articulated in the [filter syntax](search-query-odata-filter.md) supported by Cognitive Search.
-You can specify one filter for each **search** operation, but the filter itself can include multiple fields, multiple criteria, and if you use an **ismatch** function, multiple full-text search expressions. In a multi-part filter expression, you can specify predicates in any order (subject to the rules of operator precedence). There is no appreciable gain in performance if you try to rearrange predicates in a particular sequence.
+You can specify one filter for each **search** operation, but the filter itself can include multiple fields, multiple criteria, and if you use an **ismatch** function, multiple full-text search expressions. In a multi-part filter expression, you can specify predicates in any order (subject to the rules of operator precedence). There's no appreciable gain in performance if you try to rearrange predicates in a particular sequence.
-One of the limits on a filter expression is the maximum size limit of the request. The entire request, inclusive of the filter, can be a maximum of 16 MB for POST, or 8 KB for GET. There is also a limit on the number of clauses in your filter expression. A good rule of thumb is that if you have hundreds of clauses, you are at risk of running into the limit. We recommend designing your application in such a way that it does not generate filters of unbounded size.
+One of the limits on a filter expression is the maximum size limit of the request. The entire request, inclusive of the filter, can be a maximum of 16 MB for POST, or 8 KB for GET. There's also a limit on the number of clauses in your filter expression. A good rule of thumb is that if you have hundreds of clauses, you are at risk of running into the limit. We recommend designing your application in such a way that it doesn't generate filters of unbounded size.
The following examples represent prototypical filter definitions in several APIs.
POST https://[service name].search.windows.net/indexes/hotels/docs/search?api-ve
The following examples illustrate several usage patterns for filter scenarios. For more ideas, see [OData expression syntax > Examples](./search-query-odata-filter.md#examples).
-+ Standalone **$filter**, without a query string, useful when the filter expression is able to fully qualify documents of interest. Without a query string, there is no lexical or linguistic analysis, no scoring, and no ranking. Notice the search string is just an asterisk, which means "match all documents".
++ Standalone **$filter**, without a query string, useful when the filter expression is able to fully qualify documents of interest. Without a query string, there's no lexical or linguistic analysis, no scoring, and no ranking. Notice the search string is just an asterisk, which means "match all documents". ```json {
The following examples illustrate several usage patterns for filter scenarios. F
$filter=search.ismatchscoring('luxury | high-end', 'Description') or Category eq 'Luxury'&$count=true ```
- It is also possible to combine full-text search via `search.ismatchscoring` with filters using `and` instead of `or`, but this is functionally equivalent to using the `search` and `$filter` parameters in a search request. For example, the following two queries produce the same result:
+ It's also possible to combine full-text search via `search.ismatchscoring` with filters using `and` instead of `or`, but this is functionally equivalent to using the `search` and `$filter` parameters in a search request. For example, the following two queries produce the same result:
```http $filter=search.ismatchscoring('pool') and Rating ge 4
You can't modify existing fields to make them filterable. Instead, you need to a
Text filters match string fields against literal strings that you provide in the filter: `$filter=Category eq 'Resort and Spa'`
-Unlike full-text search, there is no lexical analysis or word-breaking for text filters, so comparisons are for exact matches only. For example, assume a field *f* contains "sunny day", `$filter=f eq 'sunny'` does not match, but `$filter=f eq 'sunny day'` will.
+Unlike full-text search, there's no lexical analysis or word-breaking for text filters, so comparisons are for exact matches only. For example, assume a field *f* contains "sunny day", `$filter=f eq 'sunny'` doesn't match, but `$filter=f eq 'sunny day'` will.
-Text strings are case-sensitive which means text filters are case sensitive by default. For example, `$filter=f eq 'Sunny day'` will not find "sunny day". However, you can use a [normalizer](search-normalizers.md) to make it so filtering isn't case sensitive.
+Text strings are case-sensitive, which means text filters are case sensitive by default. For example, `$filter=f eq 'Sunny day'` won't find "sunny day". However, you can use a [normalizer](search-normalizers.md) to make it so filtering isn't case sensitive.
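For example, a field definition that opts into case-insensitive filtering might assign the predefined lowercase normalizer (a sketch; the field name is illustrative):

```json
{
  "name": "Category",
  "type": "Edm.String",
  "filterable": true,
  "normalizer": "lowercase"
}
```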
### Approaches for filtering on text
Text strings are case-sensitive which means text filters are case sensitive by d
## Numeric filter fundamentals
-Numeric fields are not `searchable` in the context of full text search. Only strings are subject to full text search. For example, if you enter 99.99 as a search term, you won't get back items priced at $99.99. Instead, you would see items that have the number 99 in string fields of the document. Thus, if you have numeric data, the assumption is that you will use them for filters, including ranges, facets, groups, and so forth.
+Numeric fields aren't `searchable` in the context of full text search. Only strings are subject to full text search. For example, if you enter 99.99 as a search term, you won't get back items priced at $99.99. Instead, you would see items that have the number 99 in string fields of the document. Thus, if you have numeric data, the assumption is that you'll use them for filters, including ranges, facets, groups, and so forth.
-Documents that contain numeric fields (price, size, SKU, ID) provide those values in search results if the field is marked `retrievable`. The point here is that full text search itself is not applicable to numeric field types.
+Documents that contain numeric fields (price, size, SKU, ID) provide those values in search results if the field is marked `retrievable`. The point here is that full text search itself isn't applicable to numeric field types.
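For example, a request that selects and orders documents by a numeric `Rating` field might look like the following sketch:

```json
{
  "search": "*",
  "filter": "Rating ge 4",
  "orderby": "Rating desc",
  "count": true
}
```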
## Next steps
search Search Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal.md
Check the service overview page to find out how many indexes, indexers, and data
Search queries iterate over an [*index*](search-what-is-an-index.md) that contains searchable data, metadata, and other constructs that optimize certain search behaviors.
-For this tutorial, we use a built-in sample dataset that can be crawled using an [*indexer*](search-indexer-overview.md) via the [**Import data** wizard](search-import-data-portal.md). An indexer is a source-specific crawler that can read metadata and content from supported Azure data sources. Normally, indexers are used programmatically, but in the portal, you can access them through the **Import data** wizard.
+For this tutorial, we'll create and load the index using a built-in sample dataset that can be crawled using an [*indexer*](search-indexer-overview.md) via the [**Import data wizard**](search-import-data-portal.md). The hotels-sample dataset is hosted by Microsoft on Azure Cosmos DB and accessed over an internal connection. You don't need your own Azure Cosmos DB account or source files to access the data.
+
+An indexer is a source-specific crawler that can read metadata and content from supported Azure data sources. Normally, indexers are created programmatically, but in the portal, you can create them through the **Import data wizard**.
### Step 1 - Start the Import data wizard and create a data source
search Search Howto Complex Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-complex-data-types.md
tags: complex data types; compound data types; aggregate data types+ Last updated 11/17/2021
As with top-level simple fields, simple sub-fields of complex fields can only be
## Next steps
-Try the [Hotels data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/hotels) in the **Import data** wizard. You'll need the Cosmos DB connection information provided in the readme to access the data.
+Try the [Hotels data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/hotels) in the **Import data** wizard. You'll need the Azure Cosmos DB connection information provided in the readme to access the data.
With that information in hand, your first step in the wizard is to create a new Azure Cosmos DB data source. Further on in the wizard, when you get to the target index page, you'll see an index with complex types. Create and load this index, and then execute queries to understand the new structure.
search Search Howto Create Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-create-indexers.md
+ Last updated 05/11/2022
Indexers have the following requirements:
Parameters are optional and modify run time behaviors, such as how many errors to accept before failing the entire job. The parameters above are available for all indexers and are documented in the [REST API reference](/rest/api/searchservice/create-indexer#request-body).
-Source-specific indexers for blobs, SQL, and Cosmos DB provide extra "configuration" parameters for source-specific behaviors. For example, if the source is Blob Storage, you can set a parameter that filters on file extensions: `"parameters" : { "configuration" : { "indexedFileNameExtensions" : ".pdf,.docx" } }`.
+Source-specific indexers for blobs, SQL, and Azure Cosmos DB provide extra "configuration" parameters for source-specific behaviors. For example, if the source is Blob Storage, you can set a parameter that filters on file extensions: `"parameters" : { "configuration" : { "indexedFileNameExtensions" : ".pdf,.docx" } }`.
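Putting those pieces together, an indexer definition with both a general parameter and a blob-specific configuration parameter might look like the following sketch (the names are illustrative):

```json
{
  "name": "my-blob-indexer",
  "dataSourceName": "my-blob-datasource",
  "targetIndexName": "my-index",
  "parameters": {
    "maxFailedItems": 5,
    "configuration": { "indexedFileNameExtensions": ".pdf,.docx" }
  }
}
```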
[Field mappings](search-indexer-field-mappings.md) are used to explicitly map source-to-destination fields if those fields differ by name or type.
Change detection logic is built into the data platforms. How an indexer supports
+ Azure Storage has built-in change detection, which means an indexer can recognize new and updated documents automatically. Blob Storage, Azure Table Storage, and Azure Data Lake Storage Gen2 stamp each blob or row update with a date and time. An indexer can use this information to determine which documents to update in the index.
-+ Azure SQL and Cosmos DB provide optional change detection features in their platforms. You can specify the change detection policy in your data source definition.
++ Azure SQL and Azure Cosmos DB provide optional change detection features in their platforms. You can specify the change detection policy in your data source definition.

For large indexing loads, an indexer also keeps track of the last document it processed through an internal "high water mark". The marker is never exposed in the API, but internally the indexer keeps track of where it stopped. When indexing resumes, either through a scheduled run or an on-demand invocation, the indexer references the high water mark so that it can pick up where it left off.
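For example, an Azure Cosmos DB data source might declare a high water mark change detection policy on the `_ts` property. The fragment below is a sketch of just the relevant property in a data source definition:

```json
{
  "dataChangeDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
    "highWaterMarkColumnName": "_ts"
  }
}
```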
If you need to clear the high water mark to re-index in full, you can use [Reset
+ [Index data from Azure SQL database](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)
+ [Index data from Azure Data Lake Storage Gen2](search-howto-index-azure-data-lake-storage.md)
+ [Index data from Azure Table Storage](search-howto-indexing-azure-tables.md)
-+ [Index data from Azure Cosmos DB](search-howto-index-cosmosdb.md)
++ [Index data from Azure Cosmos DB](search-howto-index-cosmosdb.md)
search Search Howto Index Cosmosdb Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-gremlin.md
Title: Azure Cosmos DB Gremlin indexer
-description: Set up an Azure Cosmos DB indexer to automate indexing of Gremlin API content for full text search in Azure Cognitive Search. This article explains how index data using the Gremlin API protocol.
+description: Set up an Azure Cosmos DB indexer to automate indexing of Azure Cosmos DB for Apache Gremlin content for full text search in Azure Cognitive Search. This article explains how to index data using the Azure Cosmos DB for Apache Gremlin protocol.
+ Last updated 09/08/2022
-# Index data from Azure Cosmos DB using the Gremlin API
+# Index data in Azure Cosmos DB for Apache Gremlin
> [!IMPORTANT]
-> The Gremlin API indexer is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Currently, there is no SDK support.
+> The Azure Cosmos DB for Apache Gremlin indexer is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Currently, there is no SDK support.
-In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content using the Gremlin API from Azure Cosmos DB.
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content via Azure Cosmos DB for Apache Gremlin.
-This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Cosmos DB [Gremlin API](../cosmos-db/choose-api.md#gremlin-api). It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to [Azure Cosmos DB for Apache Gremlin](../cosmos-db/choose-api.md#gremlin-api). It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
-Because terminology can be confusing, it's worth noting that [Cosmos DB indexing](../cosmos-db/index-overview.md) and [Cognitive Search indexing](search-what-is-an-index.md) are different operations. Indexing in Cognitive Search creates and loads a search index on your search service.
+Because terminology can be confusing, it's worth noting that [Azure Cosmos DB indexing](../cosmos-db/index-overview.md) and [Cognitive Search indexing](search-what-is-an-index.md) are different operations. Indexing in Cognitive Search creates and loads a search index on your search service.
## Prerequisites

+ [Register for the preview](https://aka.ms/azure-cognitive-search/indexer-preview) to provide feedback and get help with any issues you encounter.
-+ An [Azure Cosmos DB account, database, container and items](../cosmos-db/sql/create-cosmosdb-resources-portal.md). Use the same region for both Cognitive Search and Cosmos DB for lower latency and to avoid bandwidth charges.
++ An [Azure Cosmos DB account, database, container, and items](../cosmos-db/sql/create-cosmosdb-resources-portal.md). Use the same region for both Cognitive Search and Azure Cosmos DB for lower latency and to avoid bandwidth charges.
-+ An [automatic indexing policy](../cosmos-db/index-policy.md) on the Cosmos DB collection, set to [Consistent](../cosmos-db/index-policy.md#indexing-mode). This is the default configuration. Lazy indexing isn't recommended and may result in missing data.
++ An [automatic indexing policy](../cosmos-db/index-policy.md) on the Azure Cosmos DB collection, set to [Consistent](../cosmos-db/index-policy.md#indexing-mode). This is the default configuration. Lazy indexing isn't recommended and may result in missing data.
+ Read permissions. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Cosmos DB Account Reader Role** permissions.
Because terminology can be confusing, it's worth noting that [Cosmos DB indexing
The data source definition specifies the data to index, credentials, and policies for identifying changes in the data. A data source is defined as an independent resource so that it can be used by multiple indexers.
-For this call, specify a [preview REST API version](search-api-preview.md) (2020-06-30-Preview or 2021-04-30-Preview) to create a data source that connects using the Gremlin API.
+For this call, specify a [preview REST API version](search-api-preview.md) (2020-06-30-Preview or 2021-04-30-Preview) to create a data source that connects via Azure Cosmos DB for Apache Gremlin.
1. [Create or update a data source](/rest/api/searchservice/preview-api/create-or-update-data-source) to set its definition:
For this call, specify a [preview REST API version](search-api-preview.md) (2020
1. Set "container" to the collection. The "name" property is required and it specifies the ID of the graph.
- The "query" property is optional. By default the Azure Cognitive Search Cosmos DB Gremlin API indexer will make every vertex in your graph a document in the index. Edges will be ignored. The query default is `g.V()`. Alternatively, you could set the query to only index the edges. To index the edges, set the query to `g.E()`.
+ The "query" property is optional. By default the Azure Cognitive Search indexer for Azure Cosmos DB for Apache Gremlin makes every vertex in your graph a document in the index. Edges will be ignored. The query default is `g.V()`. Alternatively, you could set the query to only index the edges. To index the edges, set the query to `g.E()`.
1. [Set "dataChangeDetectionPolicy"](#DataChangeDetectionPolicy) if data is volatile and you want the indexer to pick up just the new and updated items on subsequent runs. Incremental progress will be enabled by default using `_ts` as the high water mark column.
For this call, specify a [preview REST API version](search-api-preview.md) (2020
### Supported credentials and connection strings
-Indexers can connect to a collection using the following connections. For connections that target the [Gremlin API](../cosmos-db/graph/graph-introduction.md), be sure to include "ApiKind" in the connection string.
+Indexers can connect to a collection using the following connections. For connections that target [Azure Cosmos DB for Apache Gremlin](../cosmos-db/graph/graph-introduction.md), be sure to include "ApiKind" in the connection string.
Avoid port numbers in the endpoint URL. If you include the port number, the connection will fail.

| Full access connection string |
|--|
|`{ "connectionString" : "AccountEndpoint=https://<Cosmos DB account name>.documents.azure.com;AccountKey=<Cosmos DB auth key>;Database=<Cosmos DB database id>;ApiKind=Gremlin" }` |
-| You can get the connection string from the Cosmos DB account page in Azure portal by selecting **Keys** in the left navigation pane. Make sure to select a full connection string and not just a key. |
+| You can get the connection string from the Azure Cosmos DB account page in Azure portal by selecting **Keys** in the left navigation pane. Make sure to select a full connection string and not just a key. |
| Managed identity connection string | || |`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.DocumentDB/databaseAccounts/<your cosmos db account name>/;(ApiKind=[api-kind];)" }`|
-|This connection string doesn't require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-data-sources.md) and created a role assignment that grants **Cosmos DB Account Reader Role** permissions. See [Setting up an indexer connection to a Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md) for more information. |
+|This connection string doesn't require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-data-sources.md) and created a role assignment that grants **Cosmos DB Account Reader Role** permissions. See [Setting up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md) for more information. |
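For illustration, a data source fragment that uses the managed identity connection string format from this table might look like the following sketch. The subscription, resource group, and account values are placeholders, and `ApiKind=Gremlin` is assumed for a graph connection.

```http
{
    "name": "cosmosdb-gremlin-msi-datasource",
    "type": "cosmosdb",
    "credentials": {
        "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[resource-group]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account]/;ApiKind=Gremlin;"
    },
    "container": { "name": "[graph-id]", "query": "g.V()" }
}
```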
## Add search fields to an index
-In a [search index](search-what-is-an-index.md), add fields to accept the source JSON documents or the output of your custom query projection. Ensure that the search index schema is compatible with your graph. For content in Cosmos DB, your search index schema should correspond to the [Azure Cosmos DB items](../cosmos-db/account-databases-containers-items.md#azure-cosmos-db-items) in your data source.
+In a [search index](search-what-is-an-index.md), add fields to accept the source JSON documents or the output of your custom query projection. Ensure that the search index schema is compatible with your graph. For content in Azure Cosmos DB, your search index schema should correspond to the [Azure Cosmos DB items](../cosmos-db/account-databases-containers-items.md#azure-cosmos-db-items) in your data source.
1. [Create or update an index](/rest/api/searchservice/create-index) to define search fields that will store data:
In a [search index](search-what-is-an-index.md), add fields to accept the source
} ```
-1. Create a document key field ("key": true). For partitioned collections, the default document key is Azure Cosmos DB's `_rid` property, which Azure Cognitive Search automatically renames to `rid` because field names can't start with an underscore character. Also, Azure Cosmos DB `_rid` values contain characters that are invalid in Azure Cognitive Search keys. For this reason, the `_rid` values are Base64 encoded.
+1. Create a document key field ("key": true). For partitioned collections, the default document key is the Azure Cosmos DB `_rid` property, which Azure Cognitive Search automatically renames to `rid` because field names can't start with an underscore character. Also, Azure Cosmos DB `_rid` values contain characters that are invalid in Azure Cognitive Search keys. For this reason, the `_rid` values are Base64 encoded.
1. Create additional fields for more searchable content. See [Create an index](search-how-to-create-search-index.md) for details.
In a [search index](search-what-is-an-index.md), add fields to accept the source
| GeoJSON objects such as { "type": "Point", "coordinates": [long, lat] } |Edm.GeographyPoint | | Other JSON objects |N/A |
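As an illustration of the guidance above, a minimal index definition might pair the `rid` key with a few vertex properties. The `label` and `pages` fields below are hypothetical examples, not required names.

```http
POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
Content-Type: application/json
api-key: [Search service admin key]

{
    "name": "cosmosdb-gremlin-index",
    "fields": [
        { "name": "rid", "type": "Edm.String", "key": true, "searchable": false },
        { "name": "id", "type": "Edm.String", "searchable": true },
        { "name": "label", "type": "Edm.String", "searchable": true, "filterable": true },
        { "name": "pages", "type": "Edm.Int32", "filterable": true, "sortable": true }
    ]
}
```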
-## Configure and run the Cosmos DB indexer
+## Configure and run the Azure Cosmos DB indexer
Once the index and data source have been created, you're ready to create the indexer. Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
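A minimal indexer definition that ties a data source to an index might look like the following sketch. The names reference the hypothetical examples earlier in this article, and the schedule is optional.

```http
POST https://[service name].search.windows.net/indexers?api-version=2021-04-30-Preview
Content-Type: application/json
api-key: [Search service admin key]

{
    "name": "cosmosdb-gremlin-indexer",
    "dataSourceName": "cosmosdb-gremlin-datasource",
    "targetIndexName": "cosmosdb-gremlin-index",
    "schedule": { "interval": "PT2H" }
}
```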
Once an indexer has fully populated a search index, you might want subsequent in
To enable incremental indexing, set the "dataChangeDetectionPolicy" property in your data source definition. This property tells the indexer which change tracking mechanism is used on your data.
-For Cosmos DB indexers, the only supported policy is the [`HighWaterMarkChangeDetectionPolicy`](/dotnet/api/azure.search.documents.indexes.models.highwatermarkchangedetectionpolicy) using the `_ts` (timestamp) property provided by Azure Cosmos DB.
+For Azure Cosmos DB indexers, the only supported policy is the [`HighWaterMarkChangeDetectionPolicy`](/dotnet/api/azure.search.documents.indexes.models.highwatermarkchangedetectionpolicy) using the `_ts` (timestamp) property provided by Azure Cosmos DB.
The following example shows a [data source definition](#define-the-data-source) with a change detection policy:
Even if you enable deletion detection policy, deleting complex (`Edm.ComplexType
## Mapping graph data to fields in a search index
-The Cosmos DB Gremlin API indexer will automatically map a couple pieces of graph data for you:
+The Azure Cosmos DB for Apache Gremlin indexer automatically maps a few pieces of graph data:
1. The indexer will map `_rid` to an `rid` field in the index if it exists, and Base64 encode it. 1. The indexer will map `_id` to an `id` field in the index if it exists.
-1. When querying your Cosmos DB database using the Gremlin API you may notice that the JSON output for each property has an `id` and a `value`. Azure Cognitive Search Cosmos DB indexer will automatically map the properties `value` into a field in your search index that has the same name as the property if it exists. In the following example, 450 would be mapped to a `pages` field in the search index.
+1. When querying your Azure Cosmos DB database with Azure Cosmos DB for Apache Gremlin, you may notice that the JSON output for each property has an `id` and a `value`. The Azure Cognitive Search indexer for Azure Cosmos DB automatically maps each property's `value` into a field in your search index that has the same name as the property, if it exists. In the following example, 450 would be mapped to a `pages` field in the search index.
```http {
Notice how the Output Field Mapping starts with `/document` and does not include
## Next steps
-+ To learn more about Azure Cosmos DB Gremlin API, see the [Introduction to Azure Cosmos DB: Gremlin API](../cosmos-db/graph-introduction.md).
++ To learn more about Azure Cosmos DB for Apache Gremlin, see the [Introduction to Azure Cosmos DB for Apache Gremlin](../cosmos-db/graph-introduction.md). + For more information about Azure Cognitive Search scenarios and pricing, see the [Search service page on azure.microsoft.com](https://azure.microsoft.com/services/search/).
search Search Howto Index Cosmosdb Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-mongodb.md
Title: Azure Cosmos DB MongoDB indexer
+ Title: Indexing with Azure Cosmos DB for MongoDB
-description: Set up a search indexer to index data stored in Azure Cosmos DB for full text search in Azure Cognitive Search. This article explains how index data using the MongoDB API protocol.
+description: Set up a search indexer to index data stored in Azure Cosmos DB for full text search in Azure Cognitive Search. This article explains how to index data in Azure Cosmos DB for MongoDB.
+ Last updated 07/12/2022
-# Index data from Azure Cosmos DB using the MongoDB API
+# Indexing with Azure Cosmos DB for MongoDB
> [!IMPORTANT] > MongoDB API support is currently in public preview under [supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Currently, there is no SDK support.
-In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content using the MongoDB API from Azure Cosmos DB.
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Cosmos DB for MongoDB.
-This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Cosmos DB [MongoDB API](../cosmos-db/choose-api.md#api-for-mongodb). It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to [Azure Cosmos DB for MongoDB](../cosmos-db/choose-api.md#api-for-mongodb). It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
-Because terminology can be confusing, it's worth noting that [Cosmos DB indexing](../cosmos-db/index-overview.md) and [Cognitive Search indexing](search-what-is-an-index.md) are different operations. Indexing in Cognitive Search creates and loads a search index on your search service.
+Because terminology can be confusing, it's worth noting that [Azure Cosmos DB indexing](../cosmos-db/index-overview.md) and [Cognitive Search indexing](search-what-is-an-index.md) are different operations. Indexing in Cognitive Search creates and loads a search index on your search service.
## Prerequisites + [Register for the preview](https://aka.ms/azure-cognitive-search/indexer-preview) to provide feedback and get help with any issues you encounter.
-+ An [Azure Cosmos DB account, database, collection, and documents](../cosmos-db/sql/create-cosmosdb-resources-portal.md). Use the same region for both Cognitive Search and Cosmos DB for lower latency and to avoid bandwidth charges.
++ An [Azure Cosmos DB account, database, collection, and documents](../cosmos-db/sql/create-cosmosdb-resources-portal.md). Use the same region for both Cognitive Search and Azure Cosmos DB for lower latency and to avoid bandwidth charges.
-+ An [automatic indexing policy](../cosmos-db/index-policy.md) on the Cosmos DB collection, set to [Consistent](../cosmos-db/index-policy.md#indexing-mode). This is the default configuration. Lazy indexing isn't recommended and may result in missing data.
++ An [automatic indexing policy](../cosmos-db/index-policy.md) on the Azure Cosmos DB collection, set to [Consistent](../cosmos-db/index-policy.md#indexing-mode). This is the default configuration. Lazy indexing isn't recommended and may result in missing data. + Read permissions. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Cosmos DB Account Reader Role** permissions.
These are the limitations of this feature:
The data source definition specifies the data to index, credentials, and policies for identifying changes in the data. A data source is defined as an independent resource so that it can be used by multiple indexers.
-For this call, specify a [preview REST API version](search-api-preview.md) (2020-06-30-Preview or 2021-04-30-Preview) to create a data source that connects using MongoDB API.
+For this call, specify a [preview REST API version](search-api-preview.md) (2020-06-30-Preview or 2021-04-30-Preview) to create a data source that connects via the MongoDB API.
1. [Create or update a data source](/rest/api/searchservice/preview-api/create-or-update-data-source) to set its definition:
For this call, specify a [preview REST API version](search-api-preview.md) (2020
1. Set "credentials" to a connection string. The next section describes the supported formats.
-1. Set "container" to the collection. The "name" property is required and it specifies the ID of the database collection to be indexed. For the MongoDB API, "query" isn't supported.
+1. Set "container" to the collection. The "name" property is required and it specifies the ID of the database collection to be indexed. For Azure Cosmos DB for MongoDB, "query" isn't supported.
1. [Set "dataChangeDetectionPolicy"](#DataChangeDetectionPolicy) if data is volatile and you want the indexer to pick up just the new and updated items on subsequent runs.
Avoid port numbers in the endpoint URL. If you include the port number, the conn
| Full access connection string | |--| |`{ "connectionString" : "AccountEndpoint=https://<Cosmos DB account name>.documents.azure.com;AccountKey=<Cosmos DB auth key>;Database=<Cosmos DB database id>;ApiKind=MongoDb" }` |
-| You can get the *Cosmos DB auth key* from the Cosmos DB account page in Azure portal by selecting **Connection String** in the left navigation pane. Make sure to copy **Primary Password** and replace *Cosmos DB auth key* value with it. |
+| You can get the *Cosmos DB auth key* from the Azure Cosmos DB account page in the Azure portal by selecting **Connection String** in the left navigation pane. Make sure to copy **Primary Password** and replace the *Cosmos DB auth key* value with it. |
| Managed identity connection string | || |`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.DocumentDB/databaseAccounts/<your cosmos db account name>/;(ApiKind=[api-kind];)" }`|
-|This connection string doesn't require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-data-sources.md) and created a role assignment that grants **Cosmos DB Account Reader Role** permissions. See [Setting up an indexer connection to a Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md) for more information. |
+|This connection string doesn't require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-data-sources.md) and created a role assignment that grants **Cosmos DB Account Reader Role** permissions. See [Setting up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md) for more information. |
## Add search fields to an index
-In a [search index](search-what-is-an-index.md), add fields to accept the source JSON documents or the output of your custom query projection. Ensure that the search index schema is compatible with source data. For content in Cosmos DB, your search index schema should correspond to the [Azure Cosmos DB items](../cosmos-db/account-databases-containers-items.md#azure-cosmos-db-items) in your data source.
+In a [search index](search-what-is-an-index.md), add fields to accept the source JSON documents or the output of your custom query projection. Ensure that the search index schema is compatible with source data. For content in Azure Cosmos DB, your search index schema should correspond to the [Azure Cosmos DB items](../cosmos-db/account-databases-containers-items.md#azure-cosmos-db-items) in your data source.
1. [Create or update an index](/rest/api/searchservice/create-index) to define search fields that will store data:
In a [search index](search-what-is-an-index.md), add fields to accept the source
+ "doc_id" represents "_id" for the object identifier. If you specify a field of "doc_id" in the index, the indexer populates it with the values of the object identifier.
- + "rid" is a system property in Cosmos DB. If you specify a field of "rid" in the index, the indexer populates it with the base64-encoded value of the "rid" property.
+ + "rid" is a system property in Azure Cosmos DB. If you specify a field of "rid" in the index, the indexer populates it with the base64-encoded value of the "rid" property.
+ For any other field, your search field should have the same name as defined in the collection.
In a [search index](search-what-is-an-index.md), add fields to accept the source
| GeoJSON objects such as { "type": "Point", "coordinates": [long, lat] } |Edm.GeographyPoint | | Other JSON objects |N/A |
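To illustrate, a minimal index might use `doc_id` as the key and add a few collection fields. The `description` and `category` fields below are hypothetical examples.

```http
POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
Content-Type: application/json
api-key: [Search service admin key]

{
    "name": "cosmosdb-mongodb-index",
    "fields": [
        { "name": "doc_id", "type": "Edm.String", "key": true, "searchable": false },
        { "name": "rid", "type": "Edm.String", "searchable": false },
        { "name": "description", "type": "Edm.String", "searchable": true },
        { "name": "category", "type": "Edm.String", "filterable": true, "facetable": true }
    ]
}
```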
-## Configure and run the Cosmos DB indexer
+## Configure and run the Azure Cosmos DB indexer
Once the index and data source have been created, you're ready to create the indexer. Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
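As with the other APIs, a minimal indexer definition referencing the hypothetical data source and index above might look like this sketch.

```http
POST https://[service name].search.windows.net/indexers?api-version=2021-04-30-Preview
Content-Type: application/json
api-key: [Search service admin key]

{
    "name": "cosmosdb-mongodb-indexer",
    "dataSourceName": "cosmosdb-mongodb-datasource",
    "targetIndexName": "cosmosdb-mongodb-index"
}
```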
Once an indexer has fully populated a search index, you might want subsequent in
To enable incremental indexing, set the "dataChangeDetectionPolicy" property in your data source definition. This property tells the indexer which change tracking mechanism is used on your data.
-For Cosmos DB indexers, the only supported policy is the [`HighWaterMarkChangeDetectionPolicy`](/dotnet/api/azure.search.documents.indexes.models.highwatermarkchangedetectionpolicy) using the `_ts` (timestamp) property provided by Azure Cosmos DB.
+For Azure Cosmos DB indexers, the only supported policy is the [`HighWaterMarkChangeDetectionPolicy`](/dotnet/api/azure.search.documents.indexes.models.highwatermarkchangedetectionpolicy) using the `_ts` (timestamp) property provided by Azure Cosmos DB.
The following example shows a [data source definition](#define-the-data-source) with a change detection policy:
api-key: [Search service admin key]
You can now control how you [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md). The following articles apply to indexers that pull content from Azure Cosmos DB:
-+ [Set up an indexer connection to a Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md)
++ [Set up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md) + [Index large data sets](search-howto-large-index.md) + [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md)
search Search Howto Index Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb.md
description: Set up a search indexer to index data stored in Azure Cosmos DB for
+ Last updated 07/12/2022
Last updated 07/12/2022
In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content using the SQL API from Azure Cosmos DB.
-This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Cosmos DB [SQL API](../cosmos-db/choose-api.md#coresql-api). It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to [Azure Cosmos DB for NoSQL](../cosmos-db/choose-api.md#coresql-api). It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
-Because terminology can be confusing, it's worth noting that [Cosmos DB indexing](../cosmos-db/index-overview.md) and [Cognitive Search indexing](search-what-is-an-index.md) are different operations. Indexing in Cognitive Search creates and loads a search index on your search service.
+Because terminology can be confusing, it's worth noting that [Azure Cosmos DB indexing](../cosmos-db/index-overview.md) and [Cognitive Search indexing](search-what-is-an-index.md) are different operations. Indexing in Cognitive Search creates and loads a search index on your search service.
## Prerequisites
-+ An [Azure Cosmos DB account, database, container and items](../cosmos-db/sql/create-cosmosdb-resources-portal.md). Use the same region for both Cognitive Search and Cosmos DB for lower latency and to avoid bandwidth charges.
++ An [Azure Cosmos DB account, database, container and items](../cosmos-db/sql/create-cosmosdb-resources-portal.md). Use the same region for both Cognitive Search and Azure Cosmos DB for lower latency and to avoid bandwidth charges.
-+ An [automatic indexing policy](../cosmos-db/index-policy.md) on the Cosmos DB collection, set to [Consistent](../cosmos-db/index-policy.md#indexing-mode). This is the default configuration. Lazy indexing isn't recommended and may result in missing data.
++ An [automatic indexing policy](../cosmos-db/index-policy.md) on the Azure Cosmos DB collection, set to [Consistent](../cosmos-db/index-policy.md#indexing-mode). This is the default configuration. Lazy indexing isn't recommended and may result in missing data. + Read permissions. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Cosmos DB Account Reader Role** permissions.
Avoid port numbers in the endpoint URL. If you include the port number, the conn
| Full access connection string | |--| |`{ "connectionString" : "AccountEndpoint=https://<Cosmos DB account name>.documents.azure.com;AccountKey=<Cosmos DB auth key>;Database=<Cosmos DB database id>" }` |
-| You can get the connection string from the Cosmos DB account page in Azure portal by selecting **Keys** in the left navigation pane. Make sure to select a full connection string and not just a key. |
+| You can get the connection string from the Azure Cosmos DB account page in the Azure portal by selecting **Keys** in the left navigation pane. Make sure to select a full connection string and not just a key. |
| Managed identity connection string | || |`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.DocumentDB/databaseAccounts/<your cosmos db account name>/;(ApiKind=[api-kind];)" }`|
-|This connection string doesn't require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-data-sources.md) and created a role assignment that grants **Cosmos DB Account Reader Role** permissions. See [Setting up an indexer connection to a Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md) for more information.|
+|This connection string doesn't require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-data-sources.md) and created a role assignment that grants **Cosmos DB Account Reader Role** permissions. See [Setting up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md) for more information.|
<a name="flatten-structures"></a>
SELECT DISTINCT VALUE c.name FROM c ORDER BY c.name
SELECT TOP 4 COUNT(1) AS foodGroupCount, f.foodGroup FROM Food f GROUP BY f.foodGroup ```
-Although Cosmos DB has a workaround to support [SQL query pagination with the DISTINCT keyword by using the ORDER BY clause](../cosmos-db/sql-query-pagination.md#continuation-tokens), it isn't compatible with Azure Cognitive Search. The query will return a single JSON value, whereas Azure Cognitive Search expects a JSON object.
+Although Azure Cosmos DB has a workaround to support [SQL query pagination with the DISTINCT keyword by using the ORDER BY clause](../cosmos-db/sql-query-pagination.md#continuation-tokens), it isn't compatible with Azure Cognitive Search. The query will return a single JSON value, whereas Azure Cognitive Search expects a JSON object.
```sql -- The following query returns a single JSON value and isn't supported by Azure Cognitive Search
SELECT DISTINCT VALUE c.name FROM c ORDER BY c.name
## Add search fields to an index
-In a [search index](search-what-is-an-index.md), add fields to accept the source JSON documents or the output of your custom query projection. Ensure that the search index schema is compatible with source data. For content in Cosmos DB, your search index schema should correspond to the [Azure Cosmos DB items](../cosmos-db/account-databases-containers-items.md#azure-cosmos-db-items) in your data source.
+In a [search index](search-what-is-an-index.md), add fields to accept the source JSON documents or the output of your custom query projection. Ensure that the search index schema is compatible with source data. For content in Azure Cosmos DB, your search index schema should correspond to the [Azure Cosmos DB items](../cosmos-db/account-databases-containers-items.md#azure-cosmos-db-items) in your data source.
1. [Create or update an index](/rest/api/searchservice/create-index) to define search fields that will store data:
In a [search index](search-what-is-an-index.md), add fields to accept the source
} ```
-1. Create a document key field ("key": true). For partitioned collections, the default document key is Azure Cosmos DB's `_rid` property, which Azure Cognitive Search automatically renames to `rid` because field names can't start with an underscore character. Also, Azure Cosmos DB `_rid` values contain characters that are invalid in Azure Cognitive Search keys. For this reason, the `_rid` values are Base64 encoded.
+1. Create a document key field ("key": true). For partitioned collections, the default document key is the Azure Cosmos DB `_rid` property, which Azure Cognitive Search automatically renames to `rid` because field names can't start with an underscore character. Also, Azure Cosmos DB `_rid` values contain characters that are invalid in Azure Cognitive Search keys. For this reason, the `_rid` values are Base64 encoded.
1. Create additional fields for more searchable content. See [Create an index](search-how-to-create-search-index.md) for details.
In a [search index](search-what-is-an-index.md), add fields to accept the source
| GeoJSON objects such as { "type": "Point", "coordinates": [long, lat] } |Edm.GeographyPoint | | Other JSON objects |N/A |
-## Configure and run the Cosmos DB indexer
+## Configure and run the Azure Cosmos DB indexer
Once the index and data source have been created, you're ready to create the indexer. Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
Once an indexer has fully populated a search index, you might want subsequent in
To enable incremental indexing, set the "dataChangeDetectionPolicy" property in your data source definition. This property tells the indexer which change tracking mechanism is used on your data.
-For Cosmos DB indexers, the only supported policy is the [`HighWaterMarkChangeDetectionPolicy`](/dotnet/api/azure.search.documents.indexes.models.highwatermarkchangedetectionpolicy) using the `_ts` (timestamp) property provided by Azure Cosmos DB.
+For Azure Cosmos DB indexers, the only supported policy is the [`HighWaterMarkChangeDetectionPolicy`](/dotnet/api/azure.search.documents.indexes.models.highwatermarkchangedetectionpolicy) using the `_ts` (timestamp) property provided by Azure Cosmos DB.
The following example shows a [data source definition](#define-the-data-source) with a change detection policy:
If you're using a [custom query to retrieve documents](#flatten-structures), mak
In some cases, even if your query contains an `ORDER BY [collection alias]._ts` clause, Azure Cognitive Search may not infer that the query is ordered by the `_ts`. You can tell Azure Cognitive Search that results are ordered by setting the `assumeOrderByHighWaterMarkColumn` configuration property.
-To specify this hint, [create or update your indexer definition](#configure-and-run-the-cosmos-db-indexer) as follows:
+To specify this hint, [create or update your indexer definition](#configure-and-run-the-azure-cosmos-db-indexer) as follows:
```http {
For data accessed through the SQL API protocol, you can use the .NET SDK to auto
You can now control how you [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md). The following articles apply to indexers that pull content from Azure Cosmos DB:
-+ [Set up an indexer connection to a Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md)
++ [Set up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md) + [Index large data sets](search-howto-large-index.md) + [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md)
search Search Howto Managed Identities Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-cosmos-db.md
Title: Connect to Cosmos DB
+ Title: Set up an indexer connection to Azure Cosmos DB via a managed identity
-description: Learn how to set up an indexer connection to a Cosmos DB account using a managed identity
+description: Learn how to set up an indexer connection to an Azure Cosmos DB account via a managed identity
Last updated 09/19/2022-+
-# Set up an indexer connection to Cosmos DB using a managed identity
+# Set up an indexer connection to Azure Cosmos DB via a managed identity
This article explains how to set up an indexer connection to an Azure Cosmos DB database using a managed identity instead of providing credentials in the connection string.
-You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure Active Directory logins and require Azure role assignments to access data in Cosmos DB.
+You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure Active Directory logins and require Azure role assignments to access data in Azure Cosmos DB.
## Prerequisites * [Create a managed identity](search-howto-managed-identities-data-sources.md) for your search service.
-* [Assign a role](search-howto-managed-identities-data-sources.md#assign-a-role) in Cosmos DB.
+* [Assign a role](search-howto-managed-identities-data-sources.md#assign-a-role) in Azure Cosmos DB.
- For data reader access, you'll need the **Cosmos DB Account Reader** role and the identity used to make the request. This role works for all Cosmos DB APIs supported by Cognitive Search. This is a control plane RBAC role.
+ For data reader access, assign the **Cosmos DB Account Reader** role to the identity used to make the request. This role works for all Azure Cosmos DB APIs supported by Cognitive Search. This is a control plane RBAC role.
- At this time, Cognitive Search obtains keys with the identity and uses those keys to connect to the Cosmos DB account. This means that [enforcing RBAC as the only authentication method in Cosmos DB](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) isn't supported when using Search with managed identities to connect to Cosmos DB.
+ At this time, Cognitive Search obtains keys with the identity and uses those keys to connect to the Azure Cosmos DB account. This means that [enforcing RBAC as the only authentication method in Azure Cosmos DB](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) isn't supported when using Search with managed identities to connect to Azure Cosmos DB.
* You should be familiar with [indexer concepts](search-indexer-overview.md) and [configuration](search-howto-index-cosmosdb.md).
Create the data source and provide either a system-assigned managed identity or
The [REST API](/rest/api/searchservice/create-data-source), Azure portal, and the [.NET SDK](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourceconnection) support using a system-assigned managed identity.
-When you're connecting with a system-assigned managed identity, the only change to the data source definition is the format of the "credentials" property. You'll provide the database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of Cosmos DB, the resource group, and the Cosmos DB account name.
+When you're connecting with a system-assigned managed identity, the only change to the data source definition is the format of the "credentials" property. You'll provide the database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of Azure Cosmos DB, the resource group, and the Azure Cosmos DB account name.
* For SQL collections, the connection string doesn't require "ApiKind". * For MongoDB collections, add "ApiKind=MongoDb" to the connection string and use a preview REST API.
api-key: [Search service admin key]
The 2021-04-30-preview REST API supports connections based on a user-assigned managed identity. When you're connecting with a user-assigned managed identity, there are two changes to the data source definition:
-* First, the format of the "credentials" property is the database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of Cosmos DB, the resource group, and the Cosmos DB account name.
+* First, the format of the "credentials" property is the database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of Azure Cosmos DB, the resource group, and the Azure Cosmos DB account name.
* For SQL collections, the connection string doesn't require "ApiKind". * For MongoDB collections, add "ApiKind=MongoDb" to the connection string
api-key: [admin key]
An indexer connects a data source with a target search index and provides a schedule to automate the data refresh. Once the index and data source have been created, you're ready to create and run the indexer. If the indexer is successful, the connection syntax and role assignments are valid.
-Here's a [Create Indexer](/rest/api/searchservice/create-indexer) REST API call with a Cosmos DB indexer definition. The indexer will run when you submit the request.
+Here's a [Create Indexer](/rest/api/searchservice/create-indexer) REST API call with an Azure Cosmos DB indexer definition. The indexer will run when you submit the request.
```http POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
Here's a [Create Indexer](/rest/api/searchservice/create-indexer) REST API call
## Troubleshooting
-If you recently rotated your Cosmos DB account keys you'll need to wait up to 15 minutes for the managed identity connection string to work.
+If you recently rotated your Azure Cosmos DB account keys, you'll need to wait up to 15 minutes for the managed identity connection string to work.
-Check to see if the Cosmos DB account has its access restricted to select networks. You can rule out any firewall issues by trying the connection without restrictions in place.
+Check to see if the Azure Cosmos DB account has its access restricted to select networks. You can rule out any firewall issues by trying the connection without restrictions in place.
## See also
-* [Azure Cosmos DB indexer using SQL API](search-howto-index-cosmosdb.md)
-* [Azure Cosmos DB indexer using MongoDB API](search-howto-index-cosmosdb-mongodb.md)
-* [Azure Cosmos DB indexer using Gremlin API](search-howto-index-cosmosdb-gremlin.md)
+* [Indexing via Azure Cosmos DB for NoSQL](search-howto-index-cosmosdb.md)
+* [Indexing via Azure Cosmos DB for MongoDB](search-howto-index-cosmosdb-mongodb.md)
+* [Indexing via Azure Cosmos DB for Apache Gremlin](search-howto-index-cosmosdb-gremlin.md)
search Search Howto Managed Identities Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-data-sources.md
+ Last updated 07/28/2022
A managed identity must be paired with an Azure role that determines permissions
+ Contributor (write) permissions are needed for AI enrichment features that use Azure Storage for hosting debug session data, enrichment caching, and long-term content storage in a knowledge store.
-The following steps are for Azure Storage. If your resource is Cosmos DB or Azure SQL, the steps are similar.
+The following steps are for Azure Storage. If your resource is Azure Cosmos DB or Azure SQL, the steps are similar.
1. [Sign in to Azure portal](https://portal.azure.com) and [find your Azure resource](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) to which the search service must have access.
search Search Howto Run Reset Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-run-reset-indexers.md
+ Last updated 01/07/2022
Indexer limits vary by the workload. For each workload, the following job limits
[Run Indexer](/rest/api/searchservice/run-indexer) will detect and process only what is necessary to synchronize the search index with changes in the underlying data source. Incremental indexing starts by locating an internal high-water mark to find the last updated search document, which becomes the starting point for indexer execution over new and updated documents in the data source.
-Change detection is essential for determining what's new or updated in the data source. If the content is unchanged, Run has no effect. Blob storage has built-in change detection through its LastModified property. Other data sources, such as Azure SQL or Cosmos DB, have to be configured for change detection before the indexer can read new and updated rows.
+Change detection is essential for determining what's new or updated in the data source. If the content is unchanged, Run has no effect. Blob storage has built-in change detection through its LastModified property. Other data sources, such as Azure SQL or Azure Cosmos DB, have to be configured for change detection before the indexer can read new and updated rows.
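For reference, an on-demand run is a single REST call against the indexer; the indexer name below is a placeholder.

```http
POST https://[service name].search.windows.net/indexers/[indexer name]/run?api-version=2020-06-30
api-key: [Search service admin key]
```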
<a name="reset-indexers"></a>
search Search Howto Schedule Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-schedule-indexers.md
+ Last updated 01/21/2022
Indexers can be configured to run on a schedule when you set the "schedule" prop
+ A valid indexer configured with a data source and index.
-+ Change detection in the data source. Azure Storage and SharePoint have built-in change detection. Other data sources, such as [Azure SQL](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) and [Cosmos DB](search-howto-index-cosmosdb.md) must be enabled manually.
++ Change detection in the data source. Azure Storage and SharePoint have built-in change detection. Other data sources, such as [Azure SQL](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) and [Azure Cosmos DB](search-howto-index-cosmosdb.md) must be enabled manually. ## Schedule definition
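A minimal sketch of the "schedule" property, assuming a two-hour interval expressed as an ISO 8601 duration; the indexer, data source, and index names are placeholders.

```http
PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30
Content-Type: application/json
api-key: [Search service admin key]

{
    "dataSourceName": "[data source name]",
    "targetIndexName": "[index name]",
    "schedule": {
        "interval": "PT2H",
        "startTime": "2022-01-01T00:00:00Z"
    }
}
```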
For indexers that run on a schedule, you can monitor operations by retrieving st
+ [Monitor search indexer status](search-howto-monitor-indexers.md) + [Collect and analyze log data](monitor-azure-cognitive-search.md)
-+ [Index large data sets](search-howto-large-index.md)
++ [Index large data sets](search-howto-large-index.md)
search Search Indexer Howto Access Ip Restricted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-ip-restricted.md
+ Last updated 06/21/2022
Last updated 06/21/2022
On behalf of an indexer, a search service will issue outbound calls to an external Azure resource to pull in data during indexing. If your Azure resource uses IP firewall rules to filter incoming calls, you'll need to create an inbound rule in your firewall that admits indexer requests.
-This article explains how to find the IP address of your search service and configure an inbound IP rule on an Azure Storage account. While specific to Azure Storage, this approach also works for other Azure resources that use IP firewall rules for data access, such as Cosmos DB and Azure SQL.
+This article explains how to find the IP address of your search service and configure an inbound IP rule on an Azure Storage account. While specific to Azure Storage, this approach also works for other Azure resources that use IP firewall rules for data access, such as Azure Cosmos DB and Azure SQL.
> [!NOTE] > A storage account and your search service must be in different regions if you want to define IP firewall rules. If your setup doesn't permit this, try the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md) or [resource instance rule](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances) instead.
It can take five to ten minutes for the firewall rules to be updated, after whic
## Next Steps - [Configure Azure Storage firewalls](../storage/common/storage-network-security.md)-- [Configure IP firewall for Cosmos DB](../cosmos-db/how-to-configure-firewall.md)
+- [Configure an IP firewall for Azure Cosmos DB](../cosmos-db/how-to-configure-firewall.md)
- [Configure IP firewall for Azure SQL Server](/azure/azure-sql/database/firewall-configure)
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
+ Last updated 06/30/2022
When setting up a shared private link resource, make sure the group ID value is
| Azure Storage - Blob | `blob` <sup>1,</sup> <sup>2</sup> | | Azure Storage - Data Lake Storage Gen2 | `dfs` and `blob` | | Azure Storage - Tables | `table` <sup>2</sup> |
-| Azure Cosmos DB - SQL API | `Sql`|
+| Azure Cosmos DB for NoSQL | `Sql`|
| Azure SQL Database | `sqlServer`| | Azure Database for MySQL (preview) | `mysqlServer`| | Azure Key Vault for [customer-managed keys](search-security-manage-encryption-keys.md) | `vault` |
search Search Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-overview.md
+ Last updated 07/27/2022
Indexers crawl data stores on Azure and outside of Azure.
+ [Azure Files](search-file-storage-integration.md) (in preview) + [Azure MySQL](search-howto-index-mysql.md) (in preview) + [SharePoint in Microsoft 365](search-howto-index-sharepoint-online.md) (in preview)
-+ [Azure Cosmos DB (MongoDB API)](search-howto-index-cosmosdb-mongodb.md) (in preview)
-+ [Azure Cosmos DB (Gremlin API)](search-howto-index-cosmosdb-gremlin.md) (in preview)
++ [Azure Cosmos DB for MongoDB](search-howto-index-cosmosdb-mongodb.md) (in preview)++ [Azure Cosmos DB for Apache Gremlin](search-howto-index-cosmosdb-gremlin.md) (in preview) Indexers accept flattened row sets, such as a table or view, or items in a container or folder. In most cases, an indexer creates one search document per row, record, or item.
Indexer connections to remote data sources can be made using standard Internet c
## Stages of indexing
-On an initial run, when the index is empty, an indexer will read in all of the data provided in the table or container. On subsequent runs, the indexer can usually detect and retrieve just the data that has changed. For blob data, change detection is automatic. For other data sources like Azure SQL or Cosmos DB, change detection must be enabled.
+On an initial run, when the index is empty, an indexer will read in all of the data provided in the table or container. On subsequent runs, the indexer can usually detect and retrieve just the data that has changed. For blob data, change detection is automatic. For other data sources like Azure SQL or Azure Cosmos DB, change detection must be enabled.
For each document it receives, an indexer implements or coordinates multiple steps, from document retrieval to a final search engine "handoff" for indexing. Optionally, an indexer also drives [skillset execution and outputs](cognitive-search-concept-intro.md), assuming a skillset is defined.
Depending on the data source, the indexer will try different operations to extra
+ When the document is a record in [Azure SQL](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md), the indexer will extract non-binary content from each field in each record.
-+ When the document is a record in [Cosmos DB](search-howto-index-cosmosdb.md), the indexer will extract non-binary content from fields and subfields from the Cosmos DB document.
++ When the document is a record in [Azure Cosmos DB](search-howto-index-cosmosdb.md), the indexer will extract non-binary content from fields and subfields from the Azure Cosmos DB document. ### Stage 2: Field mappings
search Search Indexer Securing Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-securing-resources.md
+ Last updated 06/20/2022
Your Azure resources could be protected using any number of the network isolatio
| | | - | | Azure Storage for text-based indexing (blobs, ADLS Gen 2, files, tables) | Supported only if the storage account and search service are in different regions. | Supported | | Azure Storage for AI enrichment (caching, debug sessions, knowledge store) | Supported only if the storage account and search service are in different regions. | Supported |
-| Azure Cosmos DB - SQL API | Supported | Supported |
-| Azure Cosmos DB - MongoDB API | Supported | Unsupported |
-| Azure Cosmos DB - Gremlin API | Supported | Unsupported |
+| Azure Cosmos DB for NoSQL | Supported | Supported |
+| Azure Cosmos DB for MongoDB | Supported | Unsupported |
+| Azure Cosmos DB for Apache Gremlin | Supported | Unsupported |
| Azure SQL Database | Supported | Supported | | SQL Server on Azure virtual machines | Supported | N/A | | SQL Managed Instance | Supported | N/A |
When search and storage are in different regions, you can use the previously men
Now that you're familiar with indexer data access options for solutions deployed in an Azure virtual network, review either of the following how-to articles as your next step: - [How to make indexer connections to a private endpoint](search-indexer-howto-access-private.md)-- [How to make indexer connections through an IP firewall](search-indexer-howto-access-ip-restricted.md)
+- [How to make indexer connections through an IP firewall](search-indexer-howto-access-ip-restricted.md)
search Search Indexer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-troubleshooting.md
+ Last updated 06/24/2022
For data sources that are secured by Azure network security mechanisms, indexers
### Firewall rules
-Azure Storage, Cosmos DB and Azure SQL provide a configurable firewall. There's no specific error message when the firewall is enabled. Typically, firewall errors are generic. Some common errors include:
+Azure Storage, Azure Cosmos DB and Azure SQL provide a configurable firewall. There's no specific error message when the firewall is enabled. Typically, firewall errors are generic. Some common errors include:
* `The remote server returned an error: (403) Forbidden` * `This request is not authorized to perform this operation` * `Credentials provided in the connection string are invalid or have expired`
Details for configuring IP address range restrictions for each data source type
* [Azure Storage](../storage/common/storage-network-security.md#grant-access-from-an-internet-ip-range)
-* [Cosmos DB](../storage/common/storage-network-security.md#grant-access-from-an-internet-ip-range)
+* [Azure Cosmos DB](../storage/common/storage-network-security.md#grant-access-from-an-internet-ip-range)
* [Azure SQL](/azure/azure-sql/database/firewall-configure#create-and-manage-ip-firewall-rules)
api-key: [admin key]
} ```
-## Missing content from Cosmos DB
+## Missing content from Azure Cosmos DB
-Azure Cognitive Search has an implicit dependency on Cosmos DB indexing. If you turn off automatic indexing in Cosmos DB, Azure Cognitive Search returns a successful state, but fails to index container contents. For instructions on how to check settings and turn on indexing, see [Manage indexing in Azure Cosmos DB](../cosmos-db/how-to-manage-indexing-policy.md#use-the-azure-portal).
+Azure Cognitive Search has an implicit dependency on Azure Cosmos DB indexing. If you turn off automatic indexing in Azure Cosmos DB, Azure Cognitive Search returns a successful state, but fails to index container contents. For instructions on how to check settings and turn on indexing, see [Manage indexing in Azure Cosmos DB](../cosmos-db/how-to-manage-indexing-policy.md#use-the-azure-portal).
## Indexer reflects a different document count than data source or index
Azure Cognitive Search has an implicit dependency on Cosmos DB indexing. If you
An indexer may show a different document count than the data source, the index, or the count in your code at a given point in time, depending on specific circumstances. Here are some possible causes: - The indexer has a Deleted Document Policy. The deleted documents get counted on the indexer end if they are indexed before they get deleted.-- If the ID column in the data source is not unique. This is for data sources that have the concept of column, such as Cosmos DB.
+- If the ID column in the data source is not unique. This is for data sources that have the concept of columns, such as Azure Cosmos DB.
- If the data source definition has a different query than the one you are using to estimate the number of records. For example, in your database you might be querying the total record count, while the data source definition query selects just a subset of records to index. - The counts are checked at different intervals for each component of the pipeline: data source, indexer, and index. - The index may take some minutes to show the real document count.
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
+ Last updated 07/11/2022- # Security overview for Azure Cognitive Search
For multitenancy solutions requiring security boundaries at the index level, suc
If you require granular, per-user control over search results, you can build security filters on your queries, returning documents associated with a given security identity.
-Conceptually equivalent to "row-level security", authorization to content within the index isn't natively supported using predefined roles or role assignments that map to entities in Azure Active Directory. Any user permissions on data in external systems, such as Cosmos DB, don't transfer with that data as its being indexed by Cognitive Search.
+Conceptually equivalent to "row-level security", authorization to content within the index isn't natively supported using predefined roles or role assignments that map to entities in Azure Active Directory. Any user permissions on data in external systems, such as Azure Cosmos DB, don't transfer with that data as it's being indexed by Cognitive Search.
Workarounds for solutions that require "row-level security" include creating a field in the data source that represents a security group or user identity, and then using filters in Cognitive Search to selectively trim search results of documents and content based on identities. The following table describes two approaches for trimming search results of unauthorized content.
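To illustrate the filter-based approach, suppose each document carries a hypothetical `group_ids` collection field populated during indexing; a query could then trim results to the caller's groups with an OData filter such as the following sketch.

```http
POST https://[service name].search.windows.net/indexes/[index name]/docs/search?api-version=2020-06-30
Content-Type: application/json
api-key: [query key]

{
    "search": "*",
    "filter": "group_ids/any(g: search.in(g, 'group1, group2'))"
}
```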
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md
Last updated 07/22/2022-+ # What is Azure Cognitive Search?
On the search service itself, the two primary workloads are *indexing* and *quer
+ [**Indexing**](search-what-is-an-index.md) is an intake process that loads content into your search service and makes it searchable. Internally, inbound text is processed into tokens and stored in inverted indexes for fast scans. You can upload JSON documents, or use an indexer to serialize your data into JSON.
- Additionally, if your content includes mixed file types, you have the option of adding *AI enrichment* through [cognitive skills](cognitive-search-working-with-skillsets.md). AI enrichment can extract text embedded in application files, and also infer text and structure from non-text files by analyzing the content.
+ [AI enrichment](cognitive-search-concept-intro.md) through [cognitive skills](cognitive-search-working-with-skillsets.md) is an extension of indexing. If your content needs image or language analysis before it can be indexed, AI enrichment can extract text embedded in application files, translate text, and also infer text and structure from non-text files by analyzing the content.
-+ [**Querying**](search-query-overview.md) can happen once an index is populated with searchable text, when your client app sends query requests to a search service and handles responses. All query execution is over a search index that you control.
++ [**Querying**](search-query-overview.md) can happen once an index is populated with searchable text, when your client app sends query requests to a search service and handles responses. All query execution is over a search index that you control.+
+ [Semantic search](semantic-search-overview.md) is an extension of query execution. It adds language understanding to search results processing, promoting the most semantically relevant results to the top.
## Why use Cognitive Search? Azure Cognitive Search is well suited for the following application scenarios:
-+ Consolidate heterogeneous content into a private, user-defined search index. Offload indexing and query workloads onto a dedicated search service.
++ Consolidate heterogeneous content into a private, user-defined search index. +++ Offload indexing and query workloads onto a dedicated search service. + Easily implement search-related features: relevance tuning, faceted navigation, filters (including geo-spatial search), synonym mapping, and autocomplete.
-+ Transform large undifferentiated text or image files, or application files stored in Azure Blob Storage or Cosmos DB, into searchable chunks. This is achieved during indexing through [cognitive skills](cognitive-search-concept-intro.md) that add external processing.
++ Transform large undifferentiated text or image files, or application files stored in Azure Blob Storage or Azure Cosmos DB, into searchable chunks. This is achieved during indexing through [cognitive skills](cognitive-search-concept-intro.md) that add external processing. + Add linguistic or custom text analysis. If you have non-English content, Azure Cognitive Search supports both Lucene analyzers and Microsoft's natural language processors. You can also configure analyzers to achieve specialized processing of raw content, such as filtering out diacritics, or recognizing and preserving patterns in strings.
For more information about specific functionality, see [Features of Azure Cognit
## How to get started
-Functionality is exposed through simple [REST APIs](/rest/api/searchservice/), or Azure SDKs like the [Azure SDK for .NET](search-howto-dotnet-sdk.md).
-
-You can also use the Azure portal for service administration and content management, with tools for prototyping and querying your indexes and skillsets.
+Functionality is exposed through the Azure portal, simple [REST APIs](/rest/api/searchservice/), or Azure SDKs like the [Azure SDK for .NET](search-howto-dotnet-sdk.md). The Azure portal supports service administration and content management, with tools for prototyping and querying your indexes and skillsets.
An end-to-end exploration of core search features can be accomplished in four steps:
Customers often ask how Azure Cognitive Search compares with other search-relate
|-|--| | Microsoft Search | [Microsoft Search](/microsoftsearch/overview-microsoft-search) is for Microsoft 365 authenticated users who need to query over content in SharePoint. It's offered as a ready-to-use search experience, enabled and configured by administrators, with the ability to accept external content through connectors from Microsoft and other sources. If this describes your scenario, then Microsoft Search with Microsoft 365 is an attractive option to explore.<br/><br/>In contrast, Azure Cognitive Search executes queries over an index that you define, populated with data and documents you own, often from diverse sources. Azure Cognitive Search has crawler capabilities for some Azure data sources through [indexers](search-indexer-overview.md), but you can push any JSON document that conforms to your index schema into a single, consolidated searchable resource. You can also customize the indexing pipeline to include machine learning and lexical analyzers. Because Cognitive Search is built to be a plug-in component in larger solutions, you can integrate search into almost any app, on any platform.| |Bing | [Bing Web Search API](../cognitive-services/bing-web-search/index.yml) searches the indexes on Bing.com for matching terms you submit. Indexes are built from HTML, XML, and other web content on public sites. Built on the same foundation, [Bing Custom Search](/azure/cognitive-services/bing-custom-search/) offers the same crawler technology for web content types, scoped to individual web sites.<br/><br/>In Cognitive Search, you can define and populate the index. You can use [indexers](search-indexer-overview.md) to crawl data on Azure data sources, or push any index-conforming JSON document to your search service. |
-|Database search | Many database platforms include a built-in search experience. SQL Server has [full text search](/sql/relational-databases/search/full-text-search). Cosmos DB and similar technologies have queryable indexes. When evaluating products that combine search and storage, it can be challenging to determine which way to go. Many solutions use both: DBMS for storage, and Azure Cognitive Search for specialized search features.<br/><br/>Compared to DBMS search, Azure Cognitive Search stores content from heterogeneous sources and offers specialized text processing features such as linguistic-aware text processing (stemming, lemmatization, word forms) in [56 languages](/rest/api/searchservice/language-support). It also supports autocorrection of misspelled words, [synonyms](/rest/api/searchservice/synonym-map-operations), [suggestions](/rest/api/searchservice/suggestions), [scoring controls](/rest/api/searchservice/add-scoring-profiles-to-a-search-index), [facets](search-faceted-navigation.md), and [custom tokenization](/rest/api/searchservice/custom-analyzers-in-azure-search). The [full text search engine](search-lucene-query-architecture.md) in Azure Cognitive Search is built on Apache Lucene, an industry standard in information retrieval. However, while Azure Cognitive Search persists data in the form of an inverted index, it isn't a replacement for true data storage and we don't recommend using it in that capacity. For more information, see this [forum post](https://stackoverflow.com/questions/40101159/can-azure-search-be-used-as-a-primary-database-for-some-data). <br/><br/>Resource utilization is another inflection point in this category. Indexing and some query operations are often computationally intensive. Offloading search from the DBMS to a dedicated solution in the cloud preserves system resources for transaction processing. Furthermore, by externalizing search, you can easily adjust scale to match query volume.|
+|Database search | Many database platforms include a built-in search experience. SQL Server has [full text search](/sql/relational-databases/search/full-text-search). Azure Cosmos DB and similar technologies have queryable indexes. When evaluating products that combine search and storage, it can be challenging to determine which way to go. Many solutions use both: DBMS for storage, and Azure Cognitive Search for specialized search features.<br/><br/>Compared to DBMS search, Azure Cognitive Search stores content from heterogeneous sources and offers specialized text processing features such as linguistic-aware text processing (stemming, lemmatization, word forms) in [56 languages](/rest/api/searchservice/language-support). It also supports autocorrection of misspelled words, [synonyms](/rest/api/searchservice/synonym-map-operations), [suggestions](/rest/api/searchservice/suggestions), [scoring controls](/rest/api/searchservice/add-scoring-profiles-to-a-search-index), [facets](search-faceted-navigation.md), and [custom tokenization](/rest/api/searchservice/custom-analyzers-in-azure-search). The [full text search engine](search-lucene-query-architecture.md) in Azure Cognitive Search is built on Apache Lucene, an industry standard in information retrieval. However, while Azure Cognitive Search persists data in the form of an inverted index, it isn't a replacement for true data storage and we don't recommend using it in that capacity. For more information, see this [forum post](https://stackoverflow.com/questions/40101159/can-azure-search-be-used-as-a-primary-database-for-some-data). <br/><br/>Resource utilization is another inflection point in this category. Indexing and some query operations are often computationally intensive. Offloading search from the DBMS to a dedicated solution in the cloud preserves system resources for transaction processing. Furthermore, by externalizing search, you can easily adjust scale to match query volume.|
|Dedicated search solution | Assuming you've decided on dedicated search with full spectrum functionality, a final categorical comparison is between on-premises solutions and a cloud service. Many search technologies offer controls over indexing and query pipelines, access to richer query and filtering syntax, control over rank and relevance, and features for self-directed and intelligent search. <br/><br/>A cloud service is the right choice if you want a turn-key solution with minimal overhead and maintenance, and adjustable scale. <br/><br/>Within the cloud paradigm, several providers offer comparable baseline features, with full-text search, geospatial search, and the ability to handle a certain level of ambiguity in search inputs. Typically, it's a [specialized feature](search-features-list.md), or the ease and overall simplicity of APIs, tools, and management that determines the best fit. | Among cloud providers, Azure Cognitive Search is strongest for full text search workloads over content stores and databases on Azure, for apps that rely primarily on search for both information retrieval and content navigation.
Key strengths include:
+ [Full search experience](search-features-list.md): rich query language, relevance tuning and semantic ranking, faceting, autocomplete queries and suggested results, and synonyms. + Azure scale, reliability, and world-class availability.
-Among our customers, those able to leverage the widest range of features in Azure Cognitive Search include online catalogs, line-of-business programs, and document discovery applications.
+Among our customers, those able to apply the widest range of features in Azure Cognitive Search include online catalogs, line-of-business programs, and document discovery applications.
## Watch this video In this 15-minute video, review the main capabilities of Azure Cognitive Search.
->[!VIDEO https://www.youtube.com/embed/kOJU0YZodVk?version=3]
+>[!VIDEO https://www.youtube.com/embed/kOJU0YZodVk?version=3]
search Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Search description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
search Troubleshoot Shared Private Link Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/troubleshoot-shared-private-link-resources.md
+ Last updated 02/26/2022
Some common errors that occur during the creation phase are listed below.
| | | | | Azure Storage - Blob (or) ADLS Gen 2 | `blob`| `2020-08-01` | | Azure Storage - Tables | `table`| `2020-08-01` |
- | Azure Cosmos DB - SQL API | `Sql`| `2020-08-01` |
+ | Azure Cosmos DB for NoSQL | `Sql`| `2020-08-01` |
| Azure SQL Database | `sqlServer`| `2020-08-01` | | Azure Database for MySQL (preview) | `mysqlServer`| `2020-08-01-Preview` | | Azure Key Vault | `vault` | `2020-08-01` |
Shared private link resources that have failed Azure Resource Manager deployment
| Deployment failure reason | Description | Resolution | | | | |
-| Network resource provider not registered on target resource's subscription | A private endpoint (and associated DNS mappings) is created for the target resource (Storage Account, Cosmos DB, Azure SQL) via the `Microsoft.Network` resource provider (RP). If the subscription that hosts the target resource ("target subscription") isn't registered with `Microsoft.Network` RP, then the Azure Resource Manager deployment can fail. | You need to register this RP in their target subscription. You can [register the resource provider](../azure-resource-manager/management/resource-providers-and-types.md) using the Azure portal, PowerShell, or CLI.|
-| Invalid `groupId` for the target resource | When Cosmos DB accounts are created, you can specify the API type for the database account. While Cosmos DB offers several different API types, Azure Cognitive Search only supports "Sql" as the `groupId` for shared private link resources. When a shared private link of type "Sql" is created for a `privateLinkResourceId` pointing to a non-Sql database account, the Azure Resource Manager deployment will fail because of the `groupId` mismatch. The Azure resource ID of a Cosmos DB account isn't sufficient to determine the API type that is being used. Azure Cognitive Search tries to create the private endpoint, which is then denied by Cosmos DB. | You should ensure that the `privateLinkResourceId` of the specified Cosmos DB resource is for a database account of "Sql" API type |
+| Network resource provider not registered on target resource's subscription | A private endpoint (and associated DNS mappings) is created for the target resource (Storage Account, Azure Cosmos DB, Azure SQL) via the `Microsoft.Network` resource provider (RP). If the subscription that hosts the target resource ("target subscription") isn't registered with `Microsoft.Network` RP, then the Azure Resource Manager deployment can fail. | You need to register this RP in the target subscription. You can [register the resource provider](../azure-resource-manager/management/resource-providers-and-types.md) using the Azure portal, PowerShell, or CLI.|
+| Invalid `groupId` for the target resource | When Azure Cosmos DB accounts are created, you can specify the API type for the database account. While Azure Cosmos DB offers several different API types, Azure Cognitive Search only supports "Sql" as the `groupId` for shared private link resources. When a shared private link of type "Sql" is created for a `privateLinkResourceId` pointing to a non-Sql database account, the Azure Resource Manager deployment will fail because of the `groupId` mismatch. The Azure resource ID of an Azure Cosmos DB account isn't sufficient to determine the API type that is being used. Azure Cognitive Search tries to create the private endpoint, which is then denied by Azure Cosmos DB. | You should ensure that the `privateLinkResourceId` of the specified Azure Cosmos DB resource is for a database account of "Sql" API type |
| Target resource not found | The existence of the target resource specified in `privateLinkResourceId` is checked only when the Azure Resource Manager deployment starts. If the target resource is no longer available, then the deployment will fail. | You should ensure that the target resource is present in the specified subscription and resource group and hasn't been moved or deleted. | | Transient/other errors | The Azure Resource Manager deployment can fail if there's an infrastructure outage or for other unexpected reasons. This should be rare and usually indicates a transient state. | Retry creating this resource at a later time. If the problem persists, reach out to Azure Support. |
Some common errors that occur during the deletion phase are listed below.
Learn more about shared private link resources and how to use it for secure access to protected content. + [Accessing protected content via indexers](search-indexer-howto-access-private.md)
-+ [REST API reference](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources)
++ [REST API reference](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources)
search Tutorial Multiple Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-multiple-data-sources.md
Last updated 08/29/2022-+ # Tutorial: Index from multiple data sources using the .NET SDK Azure Cognitive Search can import, analyze, and index data from multiple data sources into a single consolidated search index.
-This tutorial uses C# and the [Azure.Search.Documents](/dotnet/api/overview/azure/search) client library in the Azure SDK for .NET to index sample hotel data from an Azure Cosmos DB, and merge that with hotel room details drawn from Azure Blob Storage documents. The result will be a combined hotel search index containing hotel documents, with rooms as a complex data types.
+This tutorial uses C# and the [Azure.Search.Documents](/dotnet/api/overview/azure/search) client library in the Azure SDK for .NET to index sample hotel data from an Azure Cosmos DB instance, and merge that with hotel room details drawn from Azure Blob Storage documents. The result will be a combined hotel search index containing hotel documents, with rooms as complex data types.
In this tutorial, you'll perform the following tasks:
If possible, create all services in the same region and resource group for proxi
This sample uses two small sets of data that describe seven fictional hotels. One set describes the hotels themselves, and will be loaded into an Azure Cosmos DB database. The other set contains hotel room details, and is provided as seven separate JSON files to be uploaded into Azure Blob Storage.
-### Start with Cosmos DB
+### Start with Azure Cosmos DB
1. Sign in to the [Azure portal](https://portal.azure.com), and then navigate to your Azure Cosmos DB account Overview page.
In Azure Cognitive Search, the key field uniquely identifies each document. Ever
When indexing data from multiple data sources, make sure each incoming row or document contains a common document key to merge data from two physically distinct source documents into a new search document in the combined index.
-It often requires some up-front planning to identify a meaningful document key for your index, and make sure it exists in both data sources. In this demo, the `HotelId` key for each hotel in Cosmos DB is also present in the rooms JSON blobs in Blob storage.
+It often requires some up-front planning to identify a meaningful document key for your index, and make sure it exists in both data sources. In this demo, the `HotelId` key for each hotel in Azure Cosmos DB is also present in the rooms JSON blobs in Blob storage.
-Azure Cognitive Search indexers can use field mappings to rename and even reformat data fields during the indexing process, so that source data can be directed to the correct index field. For example, in Cosmos DB, the hotel identifier is called **`HotelId`**. But in the JSON blob files for the hotel rooms, the hotel identifier is named **`Id`**. The program handles this discrepancy by mapping the **`Id`** field from the blobs to the **`HotelId`** key field in the indexer.
+Azure Cognitive Search indexers can use field mappings to rename and even reformat data fields during the indexing process, so that source data can be directed to the correct index field. For example, in Azure Cosmos DB, the hotel identifier is called **`HotelId`**. But in the JSON blob files for the hotel rooms, the hotel identifier is named **`Id`**. The program handles this discrepancy by mapping the **`Id`** field from the blobs to the **`HotelId`** key field in the indexer.
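As an illustration of that mapping (not the tutorial's own listing), a sketch using the `Azure.Search.Documents.Indexes` model types might look like the following; the indexer, data source, and index names are placeholders.

```csharp
// Sketch of the field mapping described above: the blob "Id" field is mapped
// onto the index's "HotelId" key field. Indexer, data source, and index names
// are placeholders.
using Azure.Search.Documents.Indexes.Models;

var blobIndexer = new SearchIndexer(
    "hotel-rooms-blob-indexer",   // indexer name (placeholder)
    "hotel-rooms-blob-ds",        // data source name (placeholder)
    "hotels");                    // target index name (placeholder)

blobIndexer.FieldMappings.Add(
    new FieldMapping("Id") { TargetFieldName = "HotelId" });
```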
> [!NOTE] > In most cases, auto-generated document keys, such as those created by default by some indexers, do not make good document keys for combined indexes. In general you will want to use a meaningful, unique key value that already exists in, or can be easily added to, your data sources.
This simple C#/.NET console app performs the following tasks:
* Creates a new index based on the data structure of the C# Hotel class (which also references the Address and Room classes). * Creates a new data source and an indexer that maps Azure Cosmos DB data to index fields. These are both objects in Azure Cognitive Search.
-* Runs the indexer to load Hotel data from Cosmos DB.
+* Runs the indexer to load Hotel data from Azure Cosmos DB.
* Creates a second data source and an indexer that maps JSON blob data to index fields. * Runs the second indexer to load Rooms data from Blob storage.
private static async Task CreateAndRunCosmosDbIndexerAsync(string indexName, Sea
connectionString: cosmosConnectString, container: new SearchIndexerDataContainer("hotels"));
- // The Cosmos DB data source does not need to be deleted if it already exists,
+ // The Azure Cosmos DB data source does not need to be deleted if it already exists,
// but the connection string might need to be updated if it has changed. await indexerClient.CreateOrUpdateDataSourceConnectionAsync(cosmosDbDataSource); ```
catch (RequestFailedException ex) when (ex.Status == 404)
await indexerClient.CreateOrUpdateIndexerAsync(cosmosDbIndexer);
-Console.WriteLine("Running Cosmos DB indexer...\n");
+Console.WriteLine("Running Azure Cosmos DB indexer...\n");
try {
You can find and manage resources in the portal, using the All resources or Reso
## Next steps
-Now that you're familiar with the concept of ingesting data from multiple sources, let's take a closer look at indexer configuration, starting with Cosmos DB.
+Now that you're familiar with the concept of ingesting data from multiple sources, let's take a closer look at indexer configuration, starting with Azure Cosmos DB.
> [!div class="nextstepaction"] > [Configure an Azure Cosmos DB indexer](search-howto-index-cosmosdb.md)
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Learn about the latest updates to Azure Cognitive Search. The following links su
| May | [Azure MySQL indexer (preview)](search-howto-index-mysql.md) | Public preview, REST api-version=2020-06-30-Preview, [.NET SDK 11.2.1](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype.mysql), and Azure portal. | | May | [More queryLanguages for spell check and semantic results](/rest/api/searchservice/preview-api/search-documents#queryLanguage) | See [Announcement (techcommunity blog)](https://techcommunity.microsoft.com/t5/azure-ai/introducing-multilingual-support-for-semantic-search-on-azure/ba-p/2385110). Public preview ([by request](https://aka.ms/SemanticSearchPreviewSignup)). Use [Search Documents (REST)](/rest/api/searchservice/preview-api/search-documents) api-version=2020-06-30-Preview, [Azure.Search.Documents 11.3.0-beta.2](https://www.nuget.org/packages/Azure.Search.Documents/11.3.0-beta.2), or [Search explorer](search-explorer.md) in Azure portal. | | May| [More regions for double encryption](search-security-manage-encryption-keys.md#double-encryption) | Generally available in all regions, subject to [service creation dates](search-security-manage-encryption-keys.md#double-encryption). |
-| April | [Gremlin API support (preview)](search-howto-index-cosmosdb-gremlin.md) | Public preview ([by request](https://aka.ms/azure-cognitive-search/indexer-preview)), using api-version=2020-06-30-Preview. |
+| April | [Azure Cosmos DB for Apache Gremlin support (preview)](search-howto-index-cosmosdb-gremlin.md) | Public preview ([by request](https://aka.ms/azure-cognitive-search/indexer-preview)), using api-version=2020-06-30-Preview. |
| March | [Semantic search (preview)](semantic-search-overview.md) | Search results relevance scoring based on semantic models. Public preview ([by request](https://aka.ms/SemanticSearchPreviewSignup)). Use [Search Documents (REST)](/rest/api/searchservice/preview-api/search-documents) api-version=2020-06-30-Preview or [Search explorer](search-explorer.md) in Azure portal. Region and tier restrictions apply. | | March | [Spell check query terms (preview)](speller-how-to-add.md) | The `speller` option works with any query type (simple, full, or semantic). Public preview, REST only, api-version=2020-06-30-Preview| | March | [SharePoint indexer (preview)](search-howto-index-sharepoint-online.md) | Public preview, REST only, api-version=2020-06-30-Preview |
security Threat Modeling Tool Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-authorization.md
na
Last updated 02/07/2017 --+ # Security Frame: Authorization | Mitigations
SELECT data
FROM personaldata WHERE userID=:id < - session var ```
-Now an possible attacker can not tamper and change the application operation since the identifier for retrieving the data is handled server-side.
+Now a possible attacker can't tamper with the request to change the application's operation, because the identifier used to retrieve the data is handled server-side.
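A minimal sketch of this pattern for ASP.NET (the class, table, and session key names are illustrative assumptions, not the article's own sample) might look like:

```csharp
// Sketch (ASP.NET with System.Data.SqlClient): the user identifier comes from
// the server-side session, never from client-supplied input.
using System.Data;
using System.Data.SqlClient;
using System.Web;

public class PersonalDataRepository
{
    private readonly string _connectionString;

    public PersonalDataRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public DataTable GetDataForCurrentUser()
    {
        // Identifier taken from the session established at sign-in.
        int userId = (int)HttpContext.Current.Session["UserId"];

        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT data FROM personaldata WHERE userID = @userId", connection))
        {
            command.Parameters.Add("@userId", SqlDbType.Int).Value = userId;

            var results = new DataTable();
            new SqlDataAdapter(command).Fill(results);
            return results;
        }
    }
}
```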
## <a id="enumerable-browsing"></a>Ensure that content and resources are not enumerable or accessible via forceful browsing
Please note that RLS as an out-of-the-box database feature is applicable only to
| **References** | [Event Hubs authentication and security model overview](../../event-hubs/authenticate-shared-access-signature.md) | | **Steps** | Provide least privilege permissions to various back-end applications that connect to the Event Hub. Generate separate SAS keys for each back-end application and grant only the required permissions (Send, Receive, or Manage) to each one.|
-## <a id="resource-docdb"></a>Use resource tokens to connect to Cosmos DB whenever possible
+## <a id="resource-docdb"></a>Use resource tokens to connect to Azure Cosmos DB whenever possible
| Title | Details | | -- | |
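The Steps for this mitigation aren't shown in this excerpt. As an illustration only, a trusted mid-tier service could issue a scoped resource token with the `Microsoft.Azure.Cosmos` SDK and hand it to the client in place of the account keys; the endpoint, database, user, and container names below are placeholders, and the sketch assumes the user was created earlier.

```csharp
// Sketch with the Microsoft.Azure.Cosmos SDK: a trusted mid-tier service issues
// a scoped, read-only resource token; the client connects with that token
// instead of the account's primary key. Names and endpoints are placeholders.
using Microsoft.Azure.Cosmos;

string accountEndpoint = "https://<your-account>.documents.azure.com:443/";
string primaryKey = "<primary-key>"; // held only by the trusted mid-tier service

// Mid-tier (trusted) code: create a read-only permission for one container.
// Assumes the user "user-1" was created earlier with CreateUserAsync.
CosmosClient adminClient = new CosmosClient(accountEndpoint, primaryKey);
Database database = adminClient.GetDatabase("appdb");
User user = database.GetUser("user-1");

PermissionResponse permission = await user.CreatePermissionAsync(
    new PermissionProperties("orders-read", PermissionMode.Read, database.GetContainer("orders")));

string resourceToken = permission.Resource.Token;

// Client code: connect with the resource token only.
CosmosClient clientWithToken = new CosmosClient(accountEndpoint, resourceToken);
```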
public class CustomController : ApiController
| **Applicable Technologies** | Generic | | **Attributes** | N/A | | **References** | N/A |
-| **Steps** | The Field Gateway should authorize the caller to check if the caller has the required permissions to perform the action requested. For e.g. there should be different permissions for an admin user interface/API used to configure a field gateway v/s devices that connect to it.|
+| **Steps** | The Field Gateway should authorize the caller to check whether the caller has the required permissions to perform the requested action. For example, there should be different permissions for an admin user interface/API used to configure a field gateway versus the devices that connect to it.|
security Threat Modeling Tool Input Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-input-validation.md
na
Last updated 02/07/2017 --+ # Security Frame: Input Validation | Mitigations
In the preceding code example, the input value cannot be longer than 11 characte
| **Applicable Technologies** | Generic, MVC5, MVC6 | | **Attributes** | N/A | | **References** | [Adding Validation](https://www.asp.net/mvc/overview/getting-started/introduction/adding-validation), [Validating Model Data in an MVC Application](/previous-versions/dd410404(v=vs.90)), [Guiding Principles For Your ASP.NET MVC Applications](/archive/msdn-magazine/2009/brownfield/extreme-asp-net-guiding-principles-for-your-asp-net-mvc-applications) |
-| **Steps** | <p>All the input parameters must be validated before they are used in the application to ensure that the application is safeguarded against malicious user inputs. Validate the input values using regular expression validations on server side with a allowed list validation strategy. Unsanitized user inputs / parameters passed to the methods can cause code injection vulnerabilities.</p><p>For web applications, entry points can also include form fields, QueryStrings, cookies, HTTP headers, and web service parameters.</p><p>The following input validation checks must be performed upon model binding:</p><ul><li>The model properties should be annotated with RegularExpression annotation, for accepting allowed characters and maximum permissible length</li><li>The controller methods should perform ModelState validity</li></ul>|
+| **Steps** | <p>All the input parameters must be validated before they are used in the application to ensure that the application is safeguarded against malicious user inputs. Validate the input values using regular expression validations on server side with an allowed list validation strategy. Unsanitized user inputs / parameters passed to the methods can cause code injection vulnerabilities.</p><p>For web applications, entry points can also include form fields, QueryStrings, cookies, HTTP headers, and web service parameters.</p><p>The following input validation checks must be performed upon model binding:</p><ul><li>The model properties should be annotated with RegularExpression annotation, for accepting allowed characters and maximum permissible length</li><li>The controller methods should perform ModelState validity</li></ul>|
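A minimal sketch of these checks for ASP.NET MVC 5 (the model and controller names are illustrative) might look like:

```csharp
// Sketch of the model-binding checks described above
// (ASP.NET MVC 5, System.ComponentModel.DataAnnotations).
using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;

public class RegistrationModel
{
    // Allowed-list pattern plus a maximum permissible length.
    [Required]
    [RegularExpression(@"^[a-zA-Z0-9 ]*$", ErrorMessage = "Only letters, digits, and spaces are allowed.")]
    [StringLength(50)]
    public string DisplayName { get; set; }
}

public class RegistrationController : Controller
{
    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Register(RegistrationModel model)
    {
        // Reject the request if any annotation failed during model binding.
        if (!ModelState.IsValid)
        {
            return View(model);
        }

        // ... process the validated input ...
        return RedirectToAction("Index");
    }
}
```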
## <a id="richtext"></a>Sanitization should be applied on form fields that accept all characters, e.g, rich text editor
myCommand.Fill(userDataset);
``` In the preceding code example, the input value cannot be longer than 11 characters. If the data does not conform to the type or length defined by the parameter, the SqlParameter class throws an exception.
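The code example referred to above is truncated in this excerpt. The following sketch shows the general pattern being described; the table, column, and method names are assumptions (based on the classic `authors`/`au_id` sample), not necessarily those of the original listing.

```csharp
// Sketch of a typed SqlParameter with an explicit maximum length.
using System.Data;
using System.Data.SqlClient;

public static class AuthorLookup
{
    public static DataSet GetAuthor(SqlConnection connection, string userSuppliedId)
    {
        var myCommand = new SqlDataAdapter(
            "SELECT au_lname, au_fname FROM authors WHERE au_id = @au_id", connection);

        // The parameter is declared with an explicit type and a maximum length of 11,
        // so input is constrained to that type and length rather than concatenated
        // into the query text.
        myCommand.SelectCommand.Parameters.Add("@au_id", SqlDbType.VarChar, 11);
        myCommand.SelectCommand.Parameters["@au_id"].Value = userSuppliedId;

        var userDataset = new DataSet();
        myCommand.Fill(userDataset);
        return userDataset;
    }
}
```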
-## <a id="sql-docdb"></a>Use parameterized SQL queries for Cosmos DB
+## <a id="sql-docdb"></a>Use parameterized SQL queries for Azure Cosmos DB
| Title | Details | | -- | |
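The Steps for this section aren't shown in this excerpt. As an illustration only, a parameterized query with the `Microsoft.Azure.Cosmos` SDK might look like the following sketch; the container, collection alias, and property names are placeholders.

```csharp
// Sketch: build the Azure Cosmos DB query with QueryDefinition parameters
// instead of concatenating user input into the query text.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class BookQueries
{
    public static async Task PrintBooksByTitleAsync(Container container, string userSuppliedTitle)
    {
        QueryDefinition query = new QueryDefinition(
                "SELECT * FROM books b WHERE b.title = @title")
            .WithParameter("@title", userSuppliedTitle);

        FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(query);
        while (iterator.HasMoreResults)
        {
            foreach (var item in await iterator.ReadNextAsync())
            {
                Console.WriteLine(item);
            }
        }
    }
}
```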
In the preceding code example, the input value cannot be longer than 11 characte
| **Applicable Technologies** | Generic, NET Framework 3 | | **Attributes** | N/A | | **References** | [MSDN](/previous-versions/msp-n-p/ff647875(v=pandp.10)) |
-| **Steps** | <p>Input and data validation represents one important line of defense in the protection of your WCF application. You should validate all parameters exposed in WCF service operations to protect the service from attack by a malicious client. Conversely, you should also validate all return values received by the client to protect the client from attack by a malicious service</p><p>WCF provides different extensibility points that allow you to customize the WCF runtime behavior by creating custom extensions. Message Inspectors and Parameter Inspectors are two extensibility mechanisms used to gain greater control over the data passing between a client and a service. You should use parameter inspectors for input validation and use message inspectors only when you need to inspect the entire message flowing in and out of a service.</p><p>To perform input validation, you will build a .NET class and implement a custom parameter inspector in order to validate parameters on operations in your service. You will then implement a custom endpoint behavior to enable validation on both the client and the service. Finally, you will implement a custom configuration element on the class that allows you to expose the extended custom endpoint behavior in the configuration file of the service or the client</p>|
+| **Steps** | <p>Input and data validation represents one important line of defense in the protection of your WCF application. You should validate all parameters exposed in WCF service operations to protect the service from attack by a malicious client. Conversely, you should also validate all return values received by the client to protect the client from attack by a malicious service</p><p>WCF provides different extensibility points that allow you to customize the WCF runtime behavior by creating custom extensions. Message Inspectors and Parameter Inspectors are two extensibility mechanisms used to gain greater control over the data passing between a client and a service. You should use parameter inspectors for input validation and use message inspectors only when you need to inspect the entire message flowing in and out of a service.</p><p>To perform input validation, you will build a .NET class and implement a custom parameter inspector in order to validate parameters on operations in your service. You will then implement a custom endpoint behavior to enable validation on both the client and the service. Finally, you will implement a custom configuration element on the class that allows you to expose the extended custom endpoint behavior in the configuration file of the service or the client</p>|
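As an illustration of the parameter inspector described above (not the article's own sample), a minimal `IParameterInspector` might look like the following; the validation rule is a placeholder, and wiring it up still requires the custom endpoint behavior and configuration element that the steps mention.

```csharp
// Sketch of a custom WCF parameter inspector (System.ServiceModel.Dispatcher).
using System;
using System.ServiceModel;
using System.ServiceModel.Dispatcher;
using System.Text.RegularExpressions;

public class AllowedCharactersParameterInspector : IParameterInspector
{
    private static readonly Regex Allowed = new Regex(@"^[a-zA-Z0-9 ]{0,50}$");

    // Called before the operation executes; reject string inputs that fail validation.
    public object BeforeCall(string operationName, object[] inputs)
    {
        foreach (object input in inputs)
        {
            if (input is string text && !Allowed.IsMatch(text))
            {
                throw new FaultException("Invalid characters in request parameter.");
            }
        }
        return null; // correlation state (not used here)
    }

    // Called after the operation executes; return values could be validated here too.
    public void AfterCall(string operationName, object[] outputs, object returnValue, object correlationState)
    {
    }
}
```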
security Threat Modeling Tool Sensitive Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-sensitive-data.md
na
Last updated 02/07/2017 -+ # Security Frame: Sensitive Data | Mitigations
cacheLocation: 'localStorage', // enable this for IE, as sessionStorage does not
}; ```
-## <a id="encrypt-docdb"></a>Encrypt sensitive data stored in Cosmos DB
+## <a id="encrypt-docdb"></a>Encrypt sensitive data stored in Azure Cosmos DB
| Title | Details | | -- | |
Security Mode Across all service bindings there are five possible security modes
</binding> </bindings> </system.serviceModel>
- ```
+ ```
security Backup Plan To Protect Against Ransomware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/backup-plan-to-protect-against-ransomware.md
Previously updated : 08/27/2021 Last updated : 10/10/2022
Apply these best practices before an attack.
| Identify the important systems that you need to bring back online first (using top five categories above) and immediately begin performing regular backups of those systems. | To get back up and running as quickly as possible after an attack, determine today what is most important to you. | | Migrate your organization to the cloud. <br><br>Consider purchasing a Microsoft Unified Support plan or working with a Microsoft partner to help support your move to the cloud. | Reduce your on-premises exposure by moving data to cloud services with automatic backup and self-service rollback. Microsoft Azure has a robust set of tools to help you backup your business-critical systems and restore your backups faster. <br><br>[Microsoft Unified Support](https://www.microsoft.com/en-us/msservices/unified-support-solutions) is a cloud services support model that is there to help you whenever you need it. Unified Support: <br><br>Provides a designated team that is available 24x7 with as-needed problem resolution and critical incident escalation <br><br>Helps you monitor the health of your IT environment and works proactively to make sure problems are prevented before they happen | | Move user data to cloud solutions like OneDrive and SharePoint to take advantage of [versioning and recycle bin capabilities](/compliance/assurance/assurance-malware-and-ransomware-protection#sharepoint-online-and-onedrive-for-business-protection-against-ransomware). <br><br>Educate users on how to recover their files by themselves to reduce delays and cost of recovery. For example, if a userΓÇÖs OneDrive files were infected by malware, they can [restore](https://support.microsoft.com/office/restore-your-onedrive-fa231298-759d-41cf-bcd0-25ac53eb8a15?ui=en-US&rs=en-US&ad=US) their entire OneDrive to a previous time. <br><br>Consider a defense strategy, such as [Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-defender), before allowing users to restore their own files. | User data in the Microsoft cloud can be protected by built-in security and data management features. <br><br>It's good to teach users how to restore their own files but you need to be careful that your users do not restore the malware used to carry out the attack. You need to: <br><br>Ensure your users don't restore their files until you are confident that the attacker has been evicted <br><br>Have a mitigation in place in case a user does restore some of the malware <br><br>Microsoft 365 Defender uses AI-powered automatic actions and playbooks to remediate impacted assets back to a secure state. Microsoft 365 Defender leverages automatic remediation capabilities of the suite products to ensure all impacted assets related to an incident are automatically remediated where possible. |
-| Implement [Azure Security Benchmark](/security/benchmark/azure/introduction). | Azure Security Benchmark is AzureΓÇÖs own security control framework based on industry-based security control frameworks such as NIST SP800-53, CIS Controls v7.1. It provides organizations guidance on how to configure Azure and Azure services and implement the security controls. See [Backup and Recovery](/security/benchmark/azure/security-controls-v3-backup-recovery). |
+| Implement the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction). | The Microsoft cloud security benchmark is our security control framework based on industry-based security control frameworks such as NIST SP800-53, CIS Controls v7.1. It provides organizations guidance on how to configure Azure and Azure services and implement the security controls. See [Backup and Recovery](/security/benchmark/azure/security-controls-v3-backup-recovery). |
| Regularly exercise your business continuity/disaster recovery (BC/DR) plan. <br><br>Simulate incident response scenarios. Exercises you perform in preparing for an attack should be planned and conducted around your prioritized backup and restore lists. <br><br>Regularly test the 'Recover from Zero' scenario to ensure your BC/DR can rapidly bring critical business operations online from zero functionality (all systems down). | Ensures rapid recovery of business operations by treating a ransomware or extortion attack with the same importance as a natural disaster. <br><br>Conduct practice exercise(s) to validate cross-team processes and technical procedures, including out of band employee and customer communications (assume all email and chat is down). | | Consider creating a risk register to identify potential risks and address how you will mitigate them through preventive controls and actions. Add ransomware to the risk register as a high-likelihood, high-impact scenario. | A risk register can help you prioritize risks based on the likelihood of that risk occurring and the severity to your business should that risk occur. <br><br>Track mitigation status via [Enterprise Risk Management (ERM)](/compliance/assurance/assurance-risk-management) assessment cycle. | | Backup all critical business systems automatically on a regular schedule (including backup of critical dependencies like Active Directory). <br><br>Validate that your backup is good as it is created. | Allows you to recover data up to the last backup. |
Apply these best practices before an attack.
| Protect backups against deliberate erasure and encryption: <br><br>Store backups in offline or off-site storage and/or immutable storage. <br><br>Require out of band steps (such as [MFA](../../active-directory/authentication/concept-mfa-howitworks.md) or a security PIN) before permitting an online backup to be modified or erased. <br><br>Create private endpoints within your Azure Virtual Network to securely back up and restore data from your Recovery Services vault. | Backups that are accessible by attackers can be rendered unusable for business recovery. <br><br>Offline storage ensures robust transfer of backup data without using any network bandwidth. Azure Backup supports [offline backup](../../backup/offline-backup-overview.md), which transfers initial backup data offline, without the use of network bandwidth. It provides a mechanism to copy backup data onto physical storage devices. The devices are then shipped to a nearby Azure datacenter and uploaded onto a [Recovery Services vault](../../backup/backup-azure-recovery-services-vault-overview.md). <br><br>Online immutable storage (such as [Azure Blob](../../storage/blobs/immutable-storage-overview.md)) enables you to store business-critical data objects in a WORM (Write Once, Read Many) state. This state makes the data non-erasable and non-modifiable for a user-specified interval. <br><br>[Multifactor authentication (MFA)](../../active-directory/authentication/concept-mfa-howitworks.md) should be mandatory for all admin accounts and is strongly recommended for all users. The preferred method is to use an authenticator app rather than SMS or voice where possible. When you set up Azure Backup you can configure your recovery services to enable MFA using a security PIN generated in the Azure portal. This ensures that a security pin is generated to perform critical operations such as updating or removing a recovery point. | | Designate [protected folders](/windows/security/threat-protection/microsoft-defender-atp/controlled-folders). | Makes it more difficult for unauthorized applications to modify the data in these folders. | | Review your permissions: <br><br>Discover broad write/delete permissions on file shares, SharePoint, and other solutions. Broad is defined as many users having write/delete permissions for business-critical data. <br><br>Reduce broad permissions while meeting business collaboration requirements. <br><br>Audit and monitor to ensure broad permissions donΓÇÖt reappear. | Reduces risk from broad access-enabling ransomware activities. |
-| Protect against a phishing attempt: <br><br>Conduct security awareness training regularly to help users identify a phishing attempt and avoid clicking on something that can create an initial entry point for a compromise. <br><br>Apply security filtering controls to email to detect and minimize the likelihood of a successful phishing attempt. | The most common method used by attackers to infiltrate an organization is phishing attempts via email. [Exchange Online Protection (EOP)](/microsoft-365/security/office-365-security/exchange-online-protection-overview) is the cloud-based filtering service that protects your organization against spam, malware, and other email threats. EOP is included in all Microsoft 365 organizations with Exchange Online mailboxes. <br><br>An example of a security filtering control for email is [Safe Links](/microsoft-365/security/office-365-security/safe-links). Safe Links is a feature in Defender for Office 365 that provides URL scanning and rewriting of inbound email messages in mail flow, and time-of-click verification of URLs and links in email messages and other locations. Safe Links scanning occurs in addition to the regular anti-spam and anti-malware protection in inbound email messages in EOP. Safe Links scanning can help protect your organization from malicious links that are used in phishing and other attacks. <br><br>Learn more about [anti-phishing protection](/microsoft-365/security/office-365-security/tuning-anti-phishing). |
+| Protect against a phishing attempt: <br><br>Conduct security awareness training regularly to help users identify a phishing attempt and avoid clicking on something that can create an initial entry point for a compromise. <br><br>Apply security filtering controls to email to detect and minimize the likelihood of a successful phishing attempt. | The most common method used by attackers to infiltrate an organization is phishing attempts via email. [Exchange Online Protection (EOP)](/microsoft-365/security/office-365-security/exchange-online-protection-overview) is the cloud-based filtering service that protects your organization against spam, malware, and other email threats. EOP is included in all Microsoft 365 organizations with Exchange Online mailboxes. <br><br>An example of a security filtering control for email is [Safe Links](/microsoft-365/security/office-365-security/safe-links). Safe Links is a feature in Defender for Office 365 that provides scanning and rewriting of URLs and links in email messages during inbound mail flow, and time-of-click verification of URLs and links in email messages and other locations (Microsoft Teams and Office documents). Safe Links scanning occurs in addition to the regular anti-spam and anti-malware protection in inbound email messages in EOP. Safe Links scanning can help protect your organization from malicious links that are used in phishing and other attacks. <br><br>Learn more about [anti-phishing protection](/microsoft-365/security/office-365-security/tuning-anti-phishing). |
## What to do during an attack
In this article, you learned how to improve your backup and restore plan to prot
Key industry information: -- [Microsoft Digital Defense Report](https://www.microsoft.com/security/business/security-intelligence-report?rtc=1) (see pages 22-24)
+- [2021 Microsoft Digital Defense Report](https://www.microsoft.com/security/business/microsoft-digital-defense-report) (see pages 10-19)
Microsoft Azure:
security Best Practices And Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/best-practices-and-patterns.md
The best practices are intended to be a resource for IT pros. This might include
## Next steps
-Microsoft has found that using security benchmarks can help you quickly secure cloud deployments. Benchmark recommendations from your cloud service provider give you a starting point for selecting specific security configuration settings in your environment and allow you to quickly reduce risk to your organization. See the [Azure Security Benchmark](/security/benchmark/azure/introduction) for a collection of high-impact security recommendations you can use to help secure the services you use in Azure.
+Microsoft has found that using security benchmarks can help you quickly secure cloud deployments. Benchmark recommendations from your cloud service provider give you a starting point for selecting specific security configuration settings in your environment and allow you to quickly reduce risk to your organization. See the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction) for a collection of high-impact security recommendations you can use to help secure the services you use in Azure.
security Customer Lockbox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/customer-lockbox-overview.md
As an example:
![Azure Customer Lockbox - activity logs](./media/customer-lockbox-overview/customer-lockbox-activitylogs.png)
-## Customer Lockbox integration with Azure Security Benchmark
+## Customer Lockbox integration with the Microsoft cloud security benchmark
-We've introduced a new baseline control ([3.13](../benchmarks/security-control-identity-access-control.md#313-provide-microsoft-with-access-to-relevant-customer-data-during-support-scenarios)) in Azure Security Benchmark that covers Customer Lockbox applicability. Customers can now leverage the benchmark to review Customer Lockbox applicability for a service.
+We've introduced a new baseline control ([PA-8: Determine access process for cloud provider support](/security/benchmark/azure/mcsb-privileged-access#pa-8-determine-access-process-for-cloud-provider-support)) in the Microsoft cloud security benchmark that covers Customer Lockbox applicability. Customers can now leverage the benchmark to review Customer Lockbox applicability for a service.
## Exclusions
security Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-overview.md
ms.assetid: + Last updated 10/26/2021 - # Azure encryption overview
With Azure SQL Database, you can apply symmetric encryption to a column of data
CLE has built-in functions that you can use to encrypt data by using either symmetric or asymmetric keys, the public key of a certificate, or a passphrase using 3DES.
-### Cosmos DB database encryption
+### Azure Cosmos DB database encryption
-[Azure Cosmos DB](../../cosmos-db/database-encryption-at-rest.md) is Microsoft's globally distributed, multi-model database. User data that's stored in Cosmos DB in non-volatile storage (solid-state drives) is encrypted by default. There are no controls to turn it on or off. Encryption at rest is implemented by using a number of security technologies, including secure key storage systems, encrypted networks, and cryptographic APIs. Encryption keys are managed by Microsoft and are rotated per Microsoft internal guidelines. Optionally, you can choose to add a second layer of encryption with keys you manage using the [customer-managed keys or CMK](../../cosmos-db/how-to-setup-cmk.md) feature.
+[Azure Cosmos DB](../../cosmos-db/database-encryption-at-rest.md) is Microsoft's globally distributed, multi-model database. User data that's stored in Azure Cosmos DB in non-volatile storage (solid-state drives) is encrypted by default. There are no controls to turn it on or off. Encryption at rest is implemented by using a number of security technologies, including secure key storage systems, encrypted networks, and cryptographic APIs. Encryption keys are managed by Microsoft and are rotated per Microsoft internal guidelines. Optionally, you can choose to add a second layer of encryption with keys you manage using the [customer-managed keys or CMK](../../cosmos-db/how-to-setup-cmk.md) feature.
### At-rest encryption in Data Lake
Key Vault relieves organizations of the need to configure, patch, and maintain h
- [Azure database security overview](/azure/azure-sql/database/security-overview) - [Azure virtual machines security overview](virtual-machines-overview.md) - [Data encryption at rest](encryption-atrest.md)-- [Data security and encryption best practices](data-encryption-best-practices.md)
+- [Data security and encryption best practices](data-encryption-best-practices.md)
security End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/end-to-end.md
The security services map organizes services by the resources they protect (colu
- Detect threats ΓÇô Services that identify suspicious activities and facilitate mitigating the threat. - Investigate and respond ΓÇô Services that pull logging data so you can assess a suspicious activity and respond.
-The diagram includes the Azure Security Benchmark program, a collection of high-impact security recommendations you can use to help secure the services you use in Azure.
+The diagram includes the Microsoft cloud security benchmark, a collection of high-impact security recommendations you can use to help secure the services you use in Azure.
:::image type="content" source="media/end-to-end/security-diagram.svg" alt-text="Diagram showing end-to-end security services in Azure." border="false"::: ## Security controls and baselines
-The [Azure Security Benchmark](../benchmarks/introduction.md) program includes a collection of high-impact security recommendations you can use to help secure the services you use in Azure:
+The [Microsoft cloud security benchmark](/security/benchmark/azure/introduction) includes a collection of high-impact security recommendations you can use to help secure the services you use in Azure:
- Security controls - These recommendations are generally applicable across your Azure tenant and Azure services. Each recommendation identifies a list of stakeholders that are typically involved in planning, approval, or implementation of the benchmark. - Service baselines - These apply the controls to individual Azure services to provide recommendations on that service's security configuration.
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
description: This article describes security feature availability in Azure and A
+ Last updated 12/30/2021
The following table displays the current Defender for Cloud feature availability
| <li> [Recommendation exemption rules](../../defender-for-cloud/exempt-resource.md) | Public Preview | Not Available | | <li> [Alert suppression rules](../../defender-for-cloud/alerts-suppression-rules.md) | GA | GA | | <li> [Email notifications for security alerts](../../defender-for-cloud/configure-email-notifications.md) | GA | GA |
-| <li> [Auto provisioning for agents and extensions](../../defender-for-cloud/enable-data-collection.md) | GA | GA |
+| <li> [Auto provisioning for agents and extensions](../../defender-for-cloud/monitoring-components.md) | GA | GA |
| <li> [Asset inventory](../../defender-for-cloud/asset-inventory.md) | GA | GA | | <li> [Azure Monitor Workbooks reports in Microsoft Defender for Cloud's workbooks gallery](../../defender-for-cloud/custom-dashboards-azure-workbooks.md) | GA | GA | | <li> [Integration with Microsoft Defender for Cloud Apps](../../defender-for-cloud/other-threat-protections.md#display-recommendations-in-microsoft-defender-for-cloud-apps) | GA | Not Available |
security Management Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/management-monitoring-overview.md
Learn more:
Microsoft Defender for Cloud helps you prevent, detect, and respond to threats. Defender for Cloud gives you increased visibility into, and control over, the security of your Azure resources as well as those in your hybrid cloud environment.
-Defender for Cloud performs continuous security assessments of your connected resources and compares their configuration and deployment against the [Azure Security Benchmark](../benchmarks/introduction.md) to provide detailed security recommendations tailored for your environment.
+Defender for Cloud performs continuous security assessments of your connected resources and compares their configuration and deployment against the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction) to provide detailed security recommendations tailored for your environment.
Defender for Cloud helps you optimize and monitor the security of your Azure resources by:
security Ransomware Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-prepare.md
description: Prepare for a ransomware attack
+ Last updated 01/10/2022- # Prepare for a ransomware attack ## Adopt a Cybersecurity framework
-A good place to start is to adopt the [Azure Security Benchmark](/security/benchmark/azure/) to secure the Azure environment. The Azure Security Benchmark is the Azure security control framework, based on industry-based security control frameworks such as NIST SP800-53, CIS Controls v7.1.
+A good place to start is to adopt the [Microsoft cloud security benchmark](/security/benchmark/azure) (MCSB) to secure the Azure environment. The Microsoft cloud security benchmark is the Azure security control framework, based on industry-based security control frameworks such as NIST SP800-53, CIS Controls v7.1.
:::image type="content" source="./media/ransomware/ransomware-13.png" alt-text="Screenshot of the NS-1: Establish Network Segmentation Boundaries security control":::
-The Azure Security Benchmark provides organizations guidance on how to configure Azure and Azure Services and implement the security controls. Organizations can use [Microsoft Defender for Cloud](../../defender-for-cloud/index.yml) to monitor their live Azure environment status with all the Azure Security Benchmark controls.
+The Microsoft cloud security benchmark provides organizations guidance on how to configure Azure and Azure Services and implement the security controls. Organizations can use [Microsoft Defender for Cloud](../../defender-for-cloud/index.yml) to monitor their live Azure environment status with all the MCSB controls.
Ultimately, the Framework is aimed at reducing and better managing cybersecurity risks.
-| Azure Security Benchmark stack |
+| Microsoft cloud security benchmark stack |
|--|
-| [Network&nbsp;security&nbsp;(NS)](/security/benchmark/azure/security-controls-v3-network-security) |
-| [Identity&nbsp;Management&nbsp;(IM)](/security/benchmark/azure/security-controls-v3-identity-management) |
-| [Privileged&nbsp;Access&nbsp;(PA)](/security/benchmark/azure/security-controls-v3-privileged-access) |
-| [Data&nbsp;Protection&nbsp;(DP)](/security/benchmark/azure/security-controls-v3-data-protection) |
-| [Asset&nbsp;Management&nbsp;(AM)](/security/benchmark/azure/security-controls-v3-asset-management) |
+| [Network&nbsp;security&nbsp;(NS)](/security/benchmark/azure/mcsb-network-security) |
+| [Identity&nbsp;Management&nbsp;(IM)](/security/benchmark/azure/mcsb-identity-management) |
+| [Privileged&nbsp;Access&nbsp;(PA)](/security/benchmark/azure/mcsb-privileged-access) |
+| [Data&nbsp;Protection&nbsp;(DP)](/security/benchmark/azure/mcsb-data-protection) |
+| [Asset&nbsp;Management&nbsp;(AM)](/security/benchmark/azure/mcsb-asset-management) |
| [Logging&nbsp;and&nbsp;Threat&nbsp;Detection (LT)](/security/benchmark/azure/security-controls-v2-logging-threat-detection) |
-| [Incident&nbsp;Response&nbsp;(IR)](/security/benchmark/azure/security-controls-v3-incident-response) |
-| [Posture&nbsp;and&nbsp;Vulnerability&nbsp;Management&nbsp;(PV)](/security/benchmark/azure/security-controls-v3-posture-vulnerability-management) |
-| [Endpoint&nbsp;Security&nbsp;(ES)](/security/benchmark/azure/security-controls-v3-endpoint-security) |
-| [Backup&nbsp;and&nbsp;Recovery&nbsp;(BR)](/security/benchmark/azure/security-controls-v3-backup-recovery) |
-| [DevOps&nbsp;Security&nbsp;(DS)](/security/benchmark/azure/security-controls-v3-devops-security) |
-| [Governance&nbsp;and&nbsp;Strategy&nbsp;(GS)](/security/benchmark/azure/security-controls-v3-governance-strategy) |
+| [Incident&nbsp;Response&nbsp;(IR)](/security/benchmark/azure/mcsb-incident-response) |
+| [Posture&nbsp;and&nbsp;Vulnerability&nbsp;Management&nbsp;(PV)](/security/benchmark/azure/mcsb-posture-vulnerability-management) |
+| [Endpoint&nbsp;Security&nbsp;(ES)](/security/benchmark/azure/mcsb-endpoint-security) |
+| [Backup&nbsp;and&nbsp;Recovery&nbsp;(BR)](/security/benchmark/azure/mcsb-backup-recovery) |
+| [DevOps&nbsp;Security&nbsp;(DS)](/security/benchmark/azure/mcsb-devops-security) |
+| [Governance&nbsp;and&nbsp;Strategy&nbsp;(GS)](/security/benchmark/azure/mcsb-governance-strategy) |
## Prioritize mitigation
Local (operational) backups with Azure Backup
- Azure Disks Built-in backups from Azure services-- Data services like Azure Databases (SQL, MySQL, MariaDB, PostgreSQL), Cosmos DB, and ANF offer built-in backup capabilities
+- Data services like Azure Databases (SQL, MySQL, MariaDB, PostgreSQL), Azure Cosmos DB, and ANF offer built-in backup capabilities
## What's Next
Other articles in this series:
- [Ransomware protection in Azure](ransomware-protection.md) - [Detect and respond to ransomware attack](ransomware-detect-respond.md)-- [Azure features and resources that help you protect, detect, and respond](ransomware-features-resources.md)
+- [Azure features and resources that help you protect, detect, and respond](ransomware-features-resources.md)
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/recover-from-identity-compromise.md
Implementing new updates will help identify any prior campaigns and prevent futu
Therefore, we recommend also taking the following actions: -- Make sure that you've applied the [Azure security benchmark documentation](/security/benchmark/azure/), and are monitoring compliance via [Microsoft Defender for Cloud](../../security-center/index.yml).
+- Make sure that you've applied the [Microsoft cloud security benchmark](/security/benchmark/azure), and are monitoring compliance via [Microsoft Defender for Cloud](../../security-center/index.yml).
- Incorporate threat intelligence feeds into your SIEM, such as by configuring Microsoft Purview Data Connectors in [Microsoft Sentinel](../../sentinel/understand-threat-intelligence.md).
security Technical Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/technical-capabilities.md
Resource Manager provides several benefits:
## Next step
-The [Azure Security Benchmark](../benchmarks/introduction.md) program includes a collection of security recommendations you can use to help secure the services you use in Azure.
+The [Microsoft cloud security benchmark](/security/benchmark/azure) includes a collection of security recommendations you can use to help secure the services you use in Azure.
security Threat Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/threat-detection.md
You can create and manage DSC resources that are hosted in Azure and apply them
Microsoft Defender for Cloud helps protect your hybrid cloud environment. By performing continuous security assessments of your connected resources, it's able to provide detailed security recommendations for the discovered vulnerabilities.
-Defender for Cloud's recommendations are based on the [Azure Security Benchmark](../benchmarks/introduction.md) - the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud centric security.
+Defender for Cloud's recommendations are based on the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction) - the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud centric security.
Enabling Defender for Cloud's enhanced security features brings advanced, intelligent protection of your Azure, hybrid, and multi-cloud resources and workloads. Learn more in [Microsoft Defender for Cloud's enhanced security features](../../defender-for-cloud/enhanced-security-features-overview.md).
sentinel Basic Logs Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/basic-logs-use-cases.md
Last updated 04/25/2022
-# Log sources to use for Basic Logs ingestion (preview)
+# Log sources to use for Basic Logs ingestion
Log collection is critical to a successful security analytics program. The more log sources you have for an investigation or threat hunt, the more you might accomplish.
The primary log sources used for detection often contain the metadata and contex
Event log data in Basic Logs can't be used as the primary log source for security incidents and alerts. But Basic Log event data is useful to correlate and draw conclusions when you investigate an incident or perform threat hunting.
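For example, during an investigation you might pivot from an indicator in an incident into a storage access table kept on the Basic Logs plan. The sketch below is illustrative only: the table, columns, and IP address are placeholders, and Basic Logs queries are limited to a single table and a reduced set of KQL operators.

```kusto
// Illustrative only: look up a suspect IP address from an incident in a
// storage access log table configured for Basic Logs.
// Table name, column names, and the IP value are placeholders.
StorageBlobLogs
| where TimeGenerated > ago(7d)
| where CallerIpAddress startswith "203.0.113.15"
| project TimeGenerated, OperationName, StatusText, Uri, CallerIpAddress
```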
-This topic highlights log sources to consider configuring for Basic Logs when they're stored in Log Analytics tables. Before configuring tables as Basic Logs, [compare log data plans (preview)](../azure-monitor/logs/log-analytics-workspace-overview.md#log-data-plans-preview).
-
-> [!IMPORTANT]
-> The Basic Logs feature is currently in **PREVIEW**. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
+This topic highlights log sources to consider configuring for Basic Logs when they're stored in Log Analytics tables. Before configuring tables as Basic Logs, [compare log data plans](../azure-monitor/logs/log-analytics-workspace-overview.md#log-data-plans).
## Storage access logs for cloud providers
A new and growing source of log data is Internet of Things (IoT) connected devic
## Next steps -- [Log plans](../azure-monitor/logs/log-analytics-workspace-overview.md#log-data-plans-preview)-- [Configure Basic Logs in Azure Monitor (Preview)](../azure-monitor/logs/basic-logs-configure.md)
+- [Log plans](../azure-monitor/logs/log-analytics-workspace-overview.md#log-data-plans)
+- [Configure Basic Logs in Azure Monitor](../azure-monitor/logs/basic-logs-configure.md)
- [Start an investigation by searching for events in large datasets (preview)](investigate-large-datasets.md)
sentinel Best Practices Workspace Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/best-practices-workspace-architecture.md
For more information, see [Data residency in Azure](https://azure.microsoft.com/
You may have situations planned where different teams will need access to the same data. For example, your SOC team must have access to all Microsoft Sentinel data, while operations and applications teams will need access to only specific parts. Independent security teams may also need to access Microsoft Sentinel features, but with varying sets of data.
-Combine [resource-context RBAC](resource-context-rbac.md) and [table-level RBAC](../azure-monitor/logs/manage-access.md#table-level-azure-rbac) to provide your teams with a wide range of access options that should support most use cases.
+Combine [resource-context RBAC](resource-context-rbac.md) and [table-level RBAC](../azure-monitor/logs/manage-access.md#set-table-level-read-access) to provide your teams with a wide range of access options that should support most use cases.
For more information, see [Permissions in Microsoft Sentinel](roles.md).
sentinel Create Codeless Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-codeless-connector.md
description: Learn how to create a codeless connector in Microsoft Sentinel usin
+ Last updated 06/30/2022 # Create a codeless connector for Microsoft Sentinel (Public preview)
The `pollingConfig` section includes the following properties:
| Name | Type | Description |
| ---- | ---- | ----------- |
-| **id** | String | Mandatory. Defines a unique identifier for a rule or configuration entry, using one of the following values: <br><br>- A GUID (recommended) <br>- A document ID, if the data source resides in a Cosmos DB |
+| **id** | String | Mandatory. Defines a unique identifier for a rule or configuration entry, using one of the following values: <br><br>- A GUID (recommended) <br>- A document ID, if the data source resides in Azure Cosmos DB |
| **auth** | String | Describes the authentication properties for polling the data. For more information, see [auth configuration](#auth-configuration). |
| <a name="authtype"></a>**auth.authType** | String | Mandatory. Defines the type of authentication, nested inside the `auth` object, as one of the following values: `Basic`, `APIKey`, `OAuth2`, `Session`, `CiscoDuo` |
| **request** | Nested JSON | Mandatory. Describes the request payload for polling the data, such as the API endpoint. For more information, see [request configuration](#request-configuration). |
sentinel Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/customer-managed-keys.md
Last updated 11/09/2021 -+ # Set up Microsoft Sentinel customer-managed key
To provision CMK, follow these steps: 
2. Enable CMK on your Log Analytics workspace.
-3. Register to the Cosmos DB Resource Provider.
+3. Register to the Azure Cosmos DB Resource Provider.
4. Add an access policy to your Azure Key Vault instance.
To provision CMK, follow these steps: 
Follow the instructions in [Azure Monitor customer-managed key configuration](../azure-monitor/logs/customer-managed-keys.md) in order to create a CMK workspace that will be used as the Microsoft Sentinel workspace in the following steps.
-### STEP 3: Register to the Cosmos DB Resource Provider
+### STEP 3: Register to the Azure Cosmos DB Resource Provider
-Microsoft Sentinel works with Cosmos DB as an additional storage resource. Make sure to register to the Cosmos DB Resource Provider.
+Microsoft Sentinel works with Azure Cosmos DB as an additional storage resource. Make sure to register to the Azure Cosmos DB Resource Provider.
-Follow the Cosmos DB instruction to [Register the Azure Cosmos DB Resource Provider](../cosmos-db/how-to-setup-cmk.md#register-resource-provider) resource provider for your Azure subscription.
+Follow the instructions to [Register the Azure Cosmos DB Resource Provider](../cosmos-db/how-to-setup-cmk.md#register-resource-provider) for your Azure subscription.
### STEP 4: Add an access policy to your Azure Key Vault instance
-Make sure to add access from Cosmos DB to your Azure Key Vault instance. Follow the Cosmos DB instruction to [add an access policy to your Azure Key Vault instance](../cosmos-db/how-to-setup-cmk.md#add-access-policy) with Azure Cosmos DB principal.
+Make sure to add access from Azure Cosmos DB to your Azure Key Vault instance. Follow the Azure Cosmos DB instructions to [add an access policy to your Azure Key Vault instance](../cosmos-db/how-to-setup-cmk.md#add-access-policy) with an Azure Cosmos DB principal.
### STEP 5: Onboard the workspace to Microsoft Sentinel via the onboarding API
Onboard the workspace to Microsoft Sentinel via the [Onboarding API](https://git
## Key Encryption Key revocation or deletion
-In the event that a user revokes the key encryption key (the CMK), either by deleting it or removing access for the dedicated cluster and Cosmos DB Resource Provider, Microsoft Sentinel will honor the change and behave as if the data is no longer available, within one hour. At this point, any operation that uses persistent storage resources such as data ingestion, persistent configuration changes, and incident creation, will be prevented. Previously stored data will not be deleted but will remain inaccessible. Inaccessible data is governed by the data-retention policy and will be purged in accordance with that policy.
+If a user revokes the key encryption key (the CMK), either by deleting it or by removing access for the dedicated cluster and the Azure Cosmos DB Resource Provider, Microsoft Sentinel will honor the change within one hour and behave as if the data is no longer available. At this point, any operation that uses persistent storage resources, such as data ingestion, persistent configuration changes, and incident creation, will be prevented. Previously stored data will not be deleted but will remain inaccessible. Inaccessible data is governed by the data-retention policy and will be purged in accordance with that policy.
The only operation possible after the encryption key is revoked or deleted is account deletion. If access is restored after revocation, Microsoft Sentinel will restore access to the data within an hour.
-Access to the data can be revoked by disabling the customer-managed key in the key vault, or deleting the access policy to the key, for both the dedicated Log Analytics cluster and Cosmos DB. Revoking access by removing the key from the dedicated Log Analytics cluster, or by removing the identity associated with the dedicated Log Analytics cluster is not supported.
+Access to the data can be revoked by disabling the customer-managed key in the key vault, or deleting the access policy to the key, for both the dedicated Log Analytics cluster and Azure Cosmos DB. Revoking access by removing the key from the dedicated Log Analytics cluster, or by removing the identity associated with the dedicated Log Analytics cluster is not supported.
To understand more about how this works in Azure Monitor, see [Azure Monitor CMK revocation](../azure-monitor/logs/customer-managed-keys.md#key-revocation).
sentinel Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new-archive.md
Title: Archive for What's new in Azure Sentinel
-description: A description of what's new and changed in Azure Sentinel from six months ago and earlier.
+ Title: Archive for What's new in Microsoft Sentinel
+description: A description of what's new and changed in Microsoft Sentinel from six months ago and earlier.
Last updated 08/31/2022-
-# Archive for What's new in Azure Sentinel
-
+# Archive for What's new in Microsoft Sentinel
The primary [What's new in Sentinel](whats-new.md) release notes page contains updates for the last six months, while this page contains older items.
-For information about earlier features delivered, see our [Tech Community blogs](https://techcommunity.microsoft.com/t5/azure-sentinel/bg-p/AzureSentinelBlog/label-name/What's%20New).
+For information about earlier features delivered, see our [Tech Community blogs](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/bg-p/MicrosoftSentinelBlog/label-name/What's%20New).
Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > [!TIP]
-> Our threat hunting teams across Microsoft contribute queries, playbooks, workbooks, and notebooks to the [Azure Sentinel Community](https://github.com/Azure/Azure-Sentinel), including specific [hunting queries](https://github.com/Azure/Azure-Sentinel) that your teams can adapt and use.
+> Our threat hunting teams across Microsoft contribute queries, playbooks, workbooks, and notebooks to the [Microsoft Sentinel Community](https://github.com/Azure/Azure-Sentinel), including specific [hunting queries](https://github.com/Azure/Azure-Sentinel) that your teams can adapt and use.
>
-> You can also contribute! Join us in the [Azure Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).
+> You can also contribute! Join us in the [Microsoft Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).
## December 2021
We're evolving our free trial experience to include the following updates:
Only the Microsoft Sentinel charges are waived during the 31-day trial period.
-Usage beyond these limits will be charged per the pricing listed on the [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/azure-sentinel) page. Charges related to additional capabilities for [automation](automation.md) and [bring your own machine learning](bring-your-own-ml.md) are still applicable during the free trial.
+Usage beyond these limits will be charged per the pricing listed on the [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/microsoft-sentinel/) page. Charges related to additional capabilities for [automation](automation.md) and [bring your own machine learning](bring-your-own-ml.md) are still applicable during the free trial.
> [!TIP] > During your free trial, find resources for cost management, training, and more on the **News & guides > Free trial** tab in Microsoft Sentinel. This tab also displays details about the dates of your free trial, and how many days you've left until it expires.
For more information, see [Use Jupyter notebooks to hunt for security threats](n
### Microsoft Sentinel renaming
-Starting in November 2021, Azure Sentinel is being renamed to Microsoft Sentinel, and you'll see upcoming updates in the portal, documentation, and other resources in parallel.
+Starting in November 2021, Azure Sentinel is being renamed to Microsoft Sentinel, and you'll see upcoming updates in the portal, documentation, and other resources in parallel.
Earlier entries in this article and the older [Archive for What's new in Sentinel](whats-new-archive.md) continue to use the name *Azure* Sentinel, as that was the service name when those features were new.
In addition to those from Microsoft Defender for Endpoint, you can now ingest ra
A playbook template is a pre-built, tested, and ready-to-use workflow that can be customized to meet your needs. Templates can also serve as a reference for best practices when developing playbooks from scratch, or as inspiration for new automation scenarios.
-Playbook templates have been developed by the Sentinel community, independent software vendors (ISVs), and Microsoft's own experts, and you can find them in the **Playbook templates** tab (under **Automation**), as part of an [Azure Sentinel solution](sentinel-solutions.md), or in the [Azure Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks).
+Playbook templates have been developed by the Sentinel community, independent software vendors (ISVs), and Microsoft's own experts, and you can find them in the **Playbook templates** tab (under **Automation**), as part of a [Microsoft Sentinel solution](sentinel-solutions.md), or in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks).
For more information, see [Create and customize playbooks from built-in templates](use-playbook-templates.md). ### Manage template versions for your scheduled analytics rules (Public preview)
-When you create analytics rules from [built-in Azure Sentinel rule templates](detect-threats-built-in.md), you effectively create a copy of the template. Past that point, the active rule is ***not*** dynamically updated to match any changes that get made to the originating template.
+When you create analytics rules from [built-in Microsoft Sentinel rule templates](detect-threats-built-in.md), you effectively create a copy of the template. Past that point, the active rule is ***not*** dynamically updated to match any changes that get made to the originating template.
However, rules created from templates ***do*** remember which templates they came from, which allows you two advantages:
However, rules created from templates ***do*** remember which templates they cam
### DHCP normalization schema (Public preview)
-The Advanced Security Information Model (ASIM) now supports a DHCP normalization schema, which is used to describe events reported by a DHCP server and is used by Azure Sentinel to enable source-agnostic analytics.
+The Advanced Security Information Model (ASIM) now supports a DHCP normalization schema, which is used to describe events reported by a DHCP server and is used by Microsoft Sentinel to enable source-agnostic analytics.
Events described in the DHCP normalization schema include serving requests for DHCP IP address leased from client systems and updating a DNS server with the leases granted. For more information, see: -- [Azure Sentinel DHCP normalization schema reference (Public preview)](dhcp-normalization-schema.md)-- [Normalization and the Azure Sentinel Information Model (ASIM)](normalization.md)
+- [Microsoft Sentinel DHCP normalization schema reference (Public preview)](dhcp-normalization-schema.md)
+- [Normalization and the Microsoft Sentinel Information Model (ASIM)](normalization.md)
## September 2021
For more information, see:
### Data connector health enhancements (Public preview)
-Azure Sentinel now provides the ability to enhance your data connector health monitoring with a new *SentinelHealth* table. The *SentinelHealth* table is created after you [turn on the Azure Sentinel health feature](monitor-sentinel-health.md) in your Azure Sentinel workspace, at the first success or failure health event generated.
+Microsoft Sentinel now provides the ability to enhance your data connector health monitoring with a new *SentinelHealth* table. The *SentinelHealth* table is created after you [turn on the Microsoft Sentinel health feature](monitor-sentinel-health.md) in your Microsoft Sentinel workspace, at the first success or failure health event generated.
-For more information, see [Monitor the health of your data connectors with this Azure Sentinel workbook](monitor-data-connector-health.md).
+For more information, see [Monitor the health of your data connectors with this Microsoft Sentinel workbook](monitor-data-connector-health.md).
> [!NOTE] > The *SentinelHealth* data table is currently supported only for selected data connectors. For more information, see [Supported data connectors](monitor-data-connector-health.md#supported-data-connectors).
For more information, see [Monitor the health of your data connectors with this
### New in docs: scaling data connector documentation
-As we continue to add more and more built-in data connectors for Azure Sentinel, we reorganized our data connector documentation to reflect this scaling.
+As we continue to add more and more built-in data connectors for Microsoft Sentinel, we reorganized our data connector documentation to reflect this scaling.
For most data connectors, we replaced full articles that describe an individual connector with a series of generic procedures and a full reference of all currently supported connectors.
-Check the [Azure Sentinel data connectors reference](data-connectors-reference.md) for details about your connector, including references to the relevant generic procedure, as well as extra information and configurations required.
+Check the [Microsoft Sentinel data connectors reference](data-connectors-reference.md) for details about your connector, including references to the relevant generic procedure, as well as extra information and configurations required.
For more information, see:
For more information, see:
- **Generic how-to articles**: - [Connect to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md)
- - [Connect your data source to the Azure Sentinel Data Collector API to ingest data](connect-rest-api-template.md)
- - [Get CEF-formatted logs from your device or appliance into Azure Sentinel](connect-common-event-format.md)
+ - [Connect your data source to the Microsoft Sentinel Data Collector API to ingest data](connect-rest-api-template.md)
+ - [Get CEF-formatted logs from your device or appliance into Microsoft Sentinel](connect-common-event-format.md)
- [Collect data from Linux-based sources using Syslog](connect-syslog.md)
- - [Collect data in custom log formats to Azure Sentinel with the Log Analytics agent](connect-custom-logs.md)
- - [Use Azure Functions to connect your data source to Azure Sentinel](connect-azure-functions-template.md)
- - [Resources for creating Azure Sentinel custom connectors](create-custom-connector.md)
+ - [Collect data in custom log formats to Microsoft Sentinel with the Log Analytics agent](connect-custom-logs.md)
+ - [Use Azure Functions to connect your data source to Microsoft Sentinel](connect-azure-functions-template.md)
+ - [Resources for creating Microsoft Sentinel custom connectors](create-custom-connector.md)
### Azure Storage account connector changes
You'll only see the storage types that you actually have defined resources for.
### Advanced incident search (Public preview)
-By default, incident searches run across the **Incident ID**, **Title**, **Tags**, **Owner**, and **Product name** values only. Azure Sentinel now provides [advanced search options](investigate-cases.md#search-for-incidents) to search across more data, including alert details, descriptions, entities, tactics, and more.
+By default, incident searches run across the **Incident ID**, **Title**, **Tags**, **Owner**, and **Product name** values only. Microsoft Sentinel now provides [advanced search options](investigate-cases.md#search-for-incidents) to search across more data, including alert details, descriptions, entities, tactics, and more.
For example:
For more information, see [Search for incidents](investigate-cases.md#search-for
### Fusion detection for Ransomware (Public preview)
-Azure Sentinel now provides new Fusion detections for possible Ransomware activities, generating incidents titled as **Multiple alerts possibly related to Ransomware activity detected**.
+Microsoft Sentinel now provides new Fusion detections for possible Ransomware activities, generating incidents titled as **Multiple alerts possibly related to Ransomware activity detected**.
Incidents are generated for alerts that are possibly associated with ransomware activities, when they occur during a specific time frame and are associated with the Execution and Defense Evasion stages of an attack. You can use the alerts listed in the incident to analyze the techniques possibly used by attackers to compromise a host or device and to evade detection.
Supported data connectors include:
- [Microsoft Defender for Endpoint](./data-connectors-reference.md#microsoft-defender-for-endpoint) - [Microsoft Defender for Identity](./data-connectors-reference.md#microsoft-defender-for-identity) - [Microsoft Cloud App Security](./data-connectors-reference.md#microsoft-defender-for-cloud-apps)-- [Azure Sentinel scheduled analytics rules](detect-threats-built-in.md#scheduled)
+- [Microsoft Sentinel scheduled analytics rules](detect-threats-built-in.md#scheduled)
For more information, see [Multiple alerts possibly related to Ransomware activity detected](fusion.md#fusion-for-ransomware). ### Watchlist templates for UEBA data (Public preview)
-Azure Sentinel now provides built-in watchlist templates for UEBA data, which you can customize for your environment and use during investigations.
+Microsoft Sentinel now provides built-in watchlist templates for UEBA data, which you can customize for your environment and use during investigations.
After UEBA watchlists are populated with data, you can correlate that data with analytics rules, view it in the entity pages and investigation graphs as insights, create custom uses such as to track VIP or sensitive users, and more.
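For example, once a watchlist built from one of the UEBA templates is populated, a hunting or analytics query can reference it with the `_GetWatchlist()` function along these lines. The watchlist alias and the column used for matching are assumptions for illustration, not values taken from this article.

```kusto
// Illustrative sketch: flag sign-ins by users in a watchlist created from a
// UEBA template. The alias 'VIPUsers' and the use of SearchKey are assumptions.
let vipUsers = _GetWatchlist('VIPUsers') | project SearchKey;
SigninLogs
| where TimeGenerated > ago(1d)
| where UserPrincipalName in~ (vipUsers)
| project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName, ResultType
```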
For more information, see [Create watchlists in Microsoft Sentinel](watchlists-c
### File Event normalization schema (Public preview)
-The Azure Sentinel Information Model (ASIM) now supports a File Event normalization schema, which is used to describe file activity, such as creating, modifying, or deleting files or documents. File events are reported by operating systems, file storage systems such as Azure Files, and document management systems such as Microsoft SharePoint.
+The Microsoft Sentinel Information Model (ASIM) now supports a File Event normalization schema, which is used to describe file activity, such as creating, modifying, or deleting files or documents. File events are reported by operating systems, file storage systems such as Azure Files, and document management systems such as Microsoft SharePoint.
For more information, see: -- [Azure Sentinel File Event normalization schema reference (Public preview)](file-event-normalization-schema.md)-- [Normalization and the Azure Sentinel Information Model (ASIM)](normalization.md)
+- [Microsoft Sentinel File Event normalization schema reference (Public preview)](file-event-normalization-schema.md)
+- [Normalization and the Microsoft Sentinel Information Model (ASIM)](normalization.md)
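As a rough illustration, content written against the schema can query the ASIM unifying parser instead of individual source tables. The parser name and field values below follow the published File Event schema but should be treated as an unverified sketch.

```kusto
// Illustrative sketch of a source-agnostic query over the ASIM File Event schema.
// Parser name (imFileEvent) and field names are assumptions based on the schema docs.
imFileEvent
| where TimeGenerated > ago(1h)
| where EventType == "FileDeleted"
| summarize DeletedFiles = count() by ActorUsername, TargetFilePath
```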
### New in docs: Best practice guidance
In response to multiple requests from customers and our support teams, we added
For more information, see: -- [Prerequisites for deploying Azure Sentinel](prerequisites.md)-- [Best practices for Azure Sentinel](best-practices.md)-- [Azure Sentinel workspace architecture best practices](best-practices-workspace-architecture.md)-- [Design your Azure Sentinel workspace architecture](design-your-workspace-architecture.md)-- [Azure Sentinel sample workspace designs](sample-workspace-designs.md)
+- [Prerequisites for deploying Microsoft Sentinel](prerequisites.md)
+- [Best practices for Microsoft Sentinel](best-practices.md)
+- [Microsoft Sentinel workspace architecture best practices](best-practices-workspace-architecture.md)
+- [Design your Microsoft Sentinel workspace architecture](design-your-workspace-architecture.md)
+- [Microsoft Sentinel sample workspace designs](sample-workspace-designs.md)
- [Data collection best practices](best-practices-data.md) > [!TIP]
For more information, see:
## July 2021 - [Microsoft Threat Intelligence Matching Analytics (Public preview)](#microsoft-threat-intelligence-matching-analytics-public-preview)-- [Use Azure AD data with Azure Sentinel's IdentityInfo table (Public preview)](#use-azure-ad-data-with-azure-sentinels-identityinfo-table-public-preview)
+- [Use Azure AD data with Microsoft Sentinel's IdentityInfo table (Public preview)](#use-azure-ad-data-with-microsoft-sentinels-identityinfo-table-public-preview)
- [Enrich Entities with geolocation data via API (Public preview)](#enrich-entities-with-geolocation-data-via-api-public-preview) - [Support for ADX cross-resource queries (Public preview)](#support-for-adx-cross-resource-queries-public-preview) - [Watchlists are in general availability](#watchlists-are-in-general-availability)
For more information, see:
### Microsoft Threat Intelligence Matching Analytics (Public preview)
-Azure Sentinel now provides the built-in **Microsoft Threat Intelligence Matching Analytics** rule, which matches Microsoft-generated threat intelligence data with your logs. This rule generates high-fidelity alerts and incidents, with appropriate severities based on the context of the logs detected. After a match is detected, the indicator is also published to your Azure Sentinel threat intelligence repository.
+Microsoft Sentinel now provides the built-in **Microsoft Threat Intelligence Matching Analytics** rule, which matches Microsoft-generated threat intelligence data with your logs. This rule generates high-fidelity alerts and incidents, with appropriate severities based on the context of the logs detected. After a match is detected, the indicator is also published to your Microsoft Sentinel threat intelligence repository.
The **Microsoft Threat Intelligence Matching Analytics** rule currently matches domain indicators against the following log sources:
The **Microsoft Threat Intelligence Matching Analytics** rule currently matches
For more information, see [Detect threats using matching analytics (Public preview)](use-matching-analytics-to-detect-threats.md).
-### Use Azure AD data with Azure Sentinel's IdentityInfo table (Public preview)
+### Use Azure AD data with Microsoft Sentinel's IdentityInfo table (Public preview)
As attackers often use the organization's own user and service accounts, data about those user accounts, including the user identification and privileges, are crucial for the analysts in the process of an investigation.
-Now, having [UEBA enabled](enable-entity-behavior-analytics.md) in your Azure Sentinel workspace also synchronizes Azure AD data into the new **IdentityInfo** table in Log Analytics. Synchronizations between your Azure AD and the **IdentifyInfo** table create a snapshot of your user profile data that includes user metadata, group information, and the Azure AD roles assigned to each user.
+Now, having [UEBA enabled](enable-entity-behavior-analytics.md) in your Microsoft Sentinel workspace also synchronizes Azure AD data into the new **IdentityInfo** table in Log Analytics. Synchronizations between your Azure AD and the **IdentityInfo** table create a snapshot of your user profile data that includes user metadata, group information, and the Azure AD roles assigned to each user.
Use the **IdentityInfo** table during investigations and when fine-tuning analytics rules for your organization to reduce false positives.
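For example, an investigation query might join sign-in data with the latest identity snapshot to focus on privileged accounts. The field names below are assumptions for illustration rather than a verified schema reference.

```kusto
// Illustrative sketch: correlate sign-ins with the latest IdentityInfo snapshot
// to surface activity by privileged accounts. Field names are assumptions.
IdentityInfo
| summarize arg_max(TimeGenerated, *) by AccountUPN
| where AssignedRoles has "Global Administrator"
| join kind=inner (
    SigninLogs
    | where TimeGenerated > ago(1d)
  ) on $left.AccountUPN == $right.UserPrincipalName
| project SigninTime = TimeGenerated1, AccountUPN, IPAddress, AppDisplayName
```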
For more information, see [IdentityInfo table](ueba-reference.md#identityinfo-ta
### Enrich entities with geolocation data via API (Public preview)
-Azure Sentinel now offers an API to enrich your data with geolocation information. Geolocation data can then be used to analyze and investigate security incidents.
+Microsoft Sentinel now offers an API to enrich your data with geolocation information. Geolocation data can then be used to analyze and investigate security incidents.
-For more information, see [Enrich entities in Azure Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md) and [Classify and analyze data using entities in Azure Sentinel](entities.md).
+For more information, see [Enrich entities in Microsoft Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md) and [Classify and analyze data using entities in Microsoft Sentinel](entities.md).
### Support for ADX cross-resource queries (Public preview)
-The hunting experience in Azure Sentinel now supports [ADX cross-resource queries](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md#cross-query-your-log-analytics-or-application-insights-resources-and-azure-data-explorer).
+The hunting experience in Microsoft Sentinel now supports [ADX cross-resource queries](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md#cross-query-your-log-analytics-or-application-insights-resources-and-azure-data-explorer).
-Although Log Analytics remains the primary data storage location for performing analysis with Azure Sentinel, there are cases where ADX is required to store data due to cost, retention periods, or other factors. This capability enables customers to hunt over a wider set of data and view the results in the [Azure Sentinel hunting experiences](hunting.md), including hunting queries, [livestream](livestream.md), and the Log Analytics search page.
+Although Log Analytics remains the primary data storage location for performing analysis with Microsoft Sentinel, there are cases where ADX is required to store data due to cost, retention periods, or other factors. This capability enables customers to hunt over a wider set of data and view the results in the [Microsoft Sentinel hunting experiences](hunting.md), including hunting queries, [livestream](livestream.md), and the Log Analytics search page.
To query data stored in ADX clusters, use the adx() function to specify the ADX cluster, database name, and desired table. You can then query the output as you would any other table. See more information in the pages linked above.
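For instance, a hunting query could reach into an ADX table and filter it like any workspace table. The cluster URI, database, table, and column names below are placeholders, not real resources.

```kusto
// Illustrative sketch of an ADX cross-resource query from the hunting experience.
// The cluster URI, database, table, and columns are placeholders.
adx('https://mycluster.westeurope.kusto.windows.net/ArchiveDB').FirewallArchive
| where TimeGenerated > ago(30d)
| where DestinationIP == "203.0.113.15"
| take 100
```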
The [watchlists](watchlists.md) feature is now generally available. Use watchlis
### Support for data residency in more geos
-Azure Sentinel now supports full data residency in the following additional geos:
+Microsoft Sentinel now supports full data residency in the following additional geos:
Brazil, Norway, South Africa, Korea, Germany, United Arab Emirates (UAE), and Switzerland.
See the [complete list of supported geos](quickstart-onboard.md#geographical-ava
### Bidirectional sync in Azure Defender connector (Public preview)
-The Azure Defender connector now supports bi-directional syncing of alerts' status between Defender and Azure Sentinel. When you close a Sentinel incident containing a Defender alert, the alert will automatically be closed in the Defender portal as well.
+The Azure Defender connector now supports bi-directional syncing of alerts' status between Defender and Microsoft Sentinel. When you close a Sentinel incident containing a Defender alert, the alert will automatically be closed in the Defender portal as well.
See this [complete description of the updated Azure Defender connector](connect-defender-for-cloud.md). ## June 2021 -- [Upgrades for normalization and the Azure Sentinel Information Model](#upgrades-for-normalization-and-the-azure-sentinel-information-model)
+- [Upgrades for normalization and the Microsoft Sentinel Information Model](#upgrades-for-normalization-and-the-microsoft-sentinel-information-model)
- [Updated service-to-service connectors](#updated-service-to-service-connectors) - [Export and import analytics rules (Public preview)](#export-and-import-analytics-rules-public-preview) - [Alert enrichment: alert details (Public preview)](#alert-enrichment-alert-details-public-preview) - [More help for playbooks!](#more-help-for-playbooks) - [New documentation reorganization](#new-documentation-reorganization)
-### Upgrades for normalization and the Azure Sentinel Information Model
+### Upgrades for normalization and the Microsoft Sentinel Information Model
-The Azure Sentinel Information Model enables you to use and create source-agnostic content, simplifying your analysis of the data in your Azure Sentinel workspace.
+The Microsoft Sentinel Information Model enables you to use and create source-agnostic content, simplifying your analysis of the data in your Microsoft Sentinel workspace.
In this month's update, we've enhanced our normalization documentation, providing new levels of detail and full DNS, process event, and authentication normalization schemas. For more information, see: -- [Normalization and the Azure Sentinel Information Model (ASIM)](normalization.md) (updated)-- [Azure Sentinel Authentication normalization schema reference (Public preview)](authentication-normalization-schema.md) (new!)-- [Azure Sentinel data normalization schema reference](./network-normalization-schema.md)-- [Azure Sentinel DNS normalization schema reference (Public preview)](dns-normalization-schema.md) (new!)-- [Azure Sentinel Process Event normalization schema reference (Public preview)](process-events-normalization-schema.md) (new!)-- [Azure Sentinel Registry Event normalization schema reference (Public preview)](registry-event-normalization-schema.md) (new!)
+- [Normalization and the Microsoft Sentinel Information Model (ASIM)](normalization.md) (updated)
+- [Microsoft Sentinel Authentication normalization schema reference (Public preview)](authentication-normalization-schema.md) (new!)
+- [Microsoft Sentinel data normalization schema reference](./network-normalization-schema.md)
+- [Microsoft Sentinel DNS normalization schema reference (Public preview)](dns-normalization-schema.md) (new!)
+- [Microsoft Sentinel Process Event normalization schema reference (Public preview)](process-events-normalization-schema.md) (new!)
+- [Microsoft Sentinel Registry Event normalization schema reference (Public preview)](registry-event-normalization-schema.md) (new!)
### Updated service-to-service connectors
The upgrades are not automatic. Users of these connectors are encouraged to enab
### Export and import analytics rules (Public preview)
-You can now export your analytics rules to JSON-format Azure Resource Manager (ARM) template files, and import rules from these files, as part of managing and controlling your Azure Sentinel deployments as code. Any type of [analytics rule](detect-threats-built-in.md) - not just **Scheduled** - can be exported to an ARM template. The template file includes all the rule's information, from its query to its assigned MITRE ATT&CK tactics.
+You can now export your analytics rules to JSON-format Azure Resource Manager (ARM) template files, and import rules from these files, as part of managing and controlling your Microsoft Sentinel deployments as code. Any type of [analytics rule](detect-threats-built-in.md) - not just **Scheduled** - can be exported to an ARM template. The template file includes all the rule's information, from its query to its assigned MITRE ATT&CK tactics.
For more information, see [Export and import analytics rules to and from ARM templates](import-export-analytics-rules.md).
For more information, see [Export and import analytics rules to and from ARM tem
In addition to enriching your alert content with entity mapping and custom details, you can now custom-tailor the way alerts - and by extension, incidents - are presented and displayed, based on their particular content. Like the other alert enrichment features, this is configurable in the [analytics rule wizard](detect-threats-custom.md).
-For more information, see [Customize alert details in Azure Sentinel](customize-alert-details.md).
+For more information, see [Customize alert details in Microsoft Sentinel](customize-alert-details.md).
### More help for playbooks! Two new documents can help you get started or get more comfortable with creating and working with playbooks.-- [Authenticate playbooks to Azure Sentinel](authenticate-playbooks-to-sentinel.md) helps you understand the different authentication methods by which Logic Apps-based playbooks can connect to and access information in Azure Sentinel, and when it's appropriate to use each one.
+- [Authenticate playbooks to Microsoft Sentinel](authenticate-playbooks-to-sentinel.md) helps you understand the different authentication methods by which Logic Apps-based playbooks can connect to and access information in Microsoft Sentinel, and when it's appropriate to use each one.
- [Use triggers and actions in playbooks](playbook-triggers-actions.md) explains the difference between the **incident trigger** and the **alert trigger** and which to use when, and shows you some of the different actions you can take in playbooks in response to incidents, including how to access the information in [custom details](playbook-triggers-actions.md#work-with-custom-details). Playbook documentation also explicitly addresses the multi-tenant MSSP scenario. ### New documentation reorganization
-This month we've reorganized our [Azure Sentinel documentation](index.yml), restructuring into intuitive categories that follow common customer journeys. Use the filtered docs search and updated landing page to navigate through Azure Sentinel docs.
+This month we've reorganized our [Microsoft Sentinel documentation](index.yml), restructuring into intuitive categories that follow common customer journeys. Use the filtered docs search and updated landing page to navigate through Microsoft Sentinel docs.
## May 2021 -- [Azure Sentinel PowerShell module](#azure-sentinel-powershell-module)
+- [Microsoft Sentinel PowerShell module](#microsoft-sentinel-powershell-module)
- [Alert grouping enhancements](#alert-grouping-enhancements)-- [Azure Sentinel solutions (Public preview)](#azure-sentinel-solutions-public-preview)
+- [Microsoft Sentinel solutions (Public preview)](#microsoft-sentinel-solutions-public-preview)
- [Continuous Threat Monitoring for SAP solution (Public preview)](#continuous-threat-monitoring-for-sap-solution-public-preview) - [Threat intelligence integrations (Public preview)](#threat-intelligence-integrations-public-preview) - [Fusion over scheduled alerts (Public preview)](#fusion-over-scheduled-alerts-public-preview)
This month we've reorganized our [Azure Sentinel documentation](index.yml), rest
- [IP Entity page (Public preview)](#ip-entity-page-public-preview) - [Activity customization (Public preview)](#activity-customization-public-preview) - [Hunting dashboard (Public preview)](#hunting-dashboard-public-preview)-- [Incident teams - collaborate in Microsoft Teams (Public preview)](#azure-sentinel-incident-teamcollaborate-in-microsoft-teams-public-preview)
+- [Incident teams - collaborate in Microsoft Teams (Public preview)](#microsoft-sentinel-incident-teamcollaborate-in-microsoft-teams-public-preview)
- [Zero Trust (TIC3.0) workbook](#zero-trust-tic30-workbook)
-### Azure Sentinel PowerShell module
+### Microsoft Sentinel PowerShell module
-The official Azure Sentinel PowerShell module to automate daily operational tasks has been released as GA!
+The official Microsoft Sentinel PowerShell module to automate daily operational tasks has been released as GA!
You can download it here: [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.SecurityInsights/).
Then, select your entity type and the relevant details you want to match:
For more information, see [Alert grouping](detect-threats-custom.md#alert-grouping).
-### Azure Sentinel solutions (Public preview)
+### Microsoft Sentinel solutions (Public preview)
-Azure Sentinel now offers **packaged content** [solutions](sentinel-solutions-catalog.md) that include combinations of one or more data connectors, workbooks, analytics rules, playbooks, hunting queries, parsers, watchlists, and other components for Azure Sentinel.
+Microsoft Sentinel now offers **packaged content** [solutions](sentinel-solutions-catalog.md) that include combinations of one or more data connectors, workbooks, analytics rules, playbooks, hunting queries, parsers, watchlists, and other components for Microsoft Sentinel.
Solutions provide improved in-product discoverability, single-step deployment, and end-to-end product scenarios. For more information, see [Centrally discover and deploy built-in content and solutions](sentinel-solutions-deploy.md). ### Continuous Threat Monitoring for SAP solution (Public preview)
-Azure Sentinel solutions now includes **Continuous Threat Monitoring for SAP**, enabling you to monitor SAP systems for sophisticated threats within the business and application layers.
+Microsoft Sentinel now includes the **Continuous Threat Monitoring for SAP** solution, enabling you to monitor SAP systems for sophisticated threats within the business and application layers.
-The SAP data connector streams a multitude of 14 application logs from the entire SAP system landscape, and collects logs from both Advanced Business Application Programming (ABAP) via NetWeaver RFC calls and file storage data via OSSAP Control interface. The SAP data connector adds to Azure Sentinels ability to monitor the SAP underlying infrastructure.
+The SAP data connector streams 14 application logs from the entire SAP system landscape, and collects logs from both Advanced Business Application Programming (ABAP) via NetWeaver RFC calls and file storage data via OSSAP Control interface. The SAP data connector adds to Microsoft Sentinel's ability to monitor the underlying SAP infrastructure.
-To ingest SAP logs into Azure Sentinel, you must have the Azure Sentinel SAP data connector installed on your SAP environment. After the SAP data connector is deployed, deploy the rich SAP solution security content to smoothly gain insight into your organization's SAP environment and improve any related security operation capabilities.
+To ingest SAP logs into Microsoft Sentinel, you must have the Microsoft Sentinel SAP data connector installed on your SAP environment. After the SAP data connector is deployed, deploy the rich SAP solution security content to smoothly gain insight into your organization's SAP environment and improve any related security operation capabilities.
For more information, see [Deploying SAP continuous threat monitoring](sap/deployment-overview.md). ### Threat intelligence integrations (Public preview)
-Azure Sentinel gives you a few different ways to [use threat intelligence](./understand-threat-intelligence.md) feeds to enhance your security analysts' ability to detect and prioritize known threats.
+Microsoft Sentinel gives you a few different ways to [use threat intelligence](./understand-threat-intelligence.md) feeds to enhance your security analysts' ability to detect and prioritize known threats.
You can now use one of many newly available integrated threat intelligence platform (TIP) products, connect to TAXII servers to take advantage of any STIX-compatible threat intelligence source, and make use of any custom solutions that can communicate directly with the [Microsoft Graph Security tiIndicators API](/graph/api/resources/tiindicator). You can also connect to threat intelligence sources from playbooks, in order to enrich incidents with TI information that can help direct investigation and response actions.
-For more information, see [Threat intelligence integration in Azure Sentinel](threat-intelligence-integration.md).
+For more information, see [Threat intelligence integration in Microsoft Sentinel](threat-intelligence-integration.md).
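Once indicators flow in from a TIP product or TAXII server, they can be reviewed in the workspace with a query along these lines. The table and column names reflect the commonly used threat intelligence indicator layout, but treat this as an unverified sketch.

```kusto
// Illustrative sketch: summarize recently ingested, active indicators by source
// and threat type. Table and column names are assumptions for illustration.
ThreatIntelligenceIndicator
| where TimeGenerated > ago(14d)
| where Active == true
| summarize Indicators = count() by SourceSystem, ThreatType
| order by Indicators desc
```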
### Fusion over scheduled alerts (Public preview) The **Fusion** machine-learning correlation engine can now detect multi-stage attacks using alerts generated by a set of [scheduled analytics rules](detect-threats-custom.md) in its correlations, in addition to the alerts imported from other data sources.
-For more information, see [Advanced multistage attack detection in Azure Sentinel](fusion.md).
+For more information, see [Advanced multistage attack detection in Microsoft Sentinel](fusion.md).
### SOC-ML anomalies (Public preview)
-Azure Sentinel's SOC-ML machine learning-based anomalies can identify unusual behavior that might otherwise evade detection.
+Microsoft Sentinel's SOC-ML machine learning-based anomalies can identify unusual behavior that might otherwise evade detection.
SOC-ML uses analytics rule templates that can be put to work right out of the box. While anomalies don't necessarily indicate malicious or even suspicious behavior by themselves, they can be used to improve the fidelity of detections, investigations, and threat hunting.
-For more information, see [Use SOC-ML anomalies to detect threats in Azure Sentinel](soc-ml-anomalies.md).
+For more information, see [Use SOC-ML anomalies to detect threats in Microsoft Sentinel](soc-ml-anomalies.md).
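Anomaly results are surfaced as log data in the workspace, so they can be reviewed and correlated like any other signal. The table and column names in this sketch are assumptions for illustration.

```kusto
// Illustrative sketch: review the volume of recent anomalies per rule.
// The 'Anomalies' table and 'RuleName' column are assumptions.
Anomalies
| where TimeGenerated > ago(7d)
| summarize AnomalyCount = count() by RuleName
| order by AnomalyCount desc
```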
### IP Entity page (Public preview)
-Azure Sentinel now supports the IP address entity, and you can now view IP entity information in the new IP entity page.
+Microsoft Sentinel now supports the IP address entity, and you can now view IP entity information in the new IP entity page.
Like the user and host entity pages, the IP page includes general information about the IP, a list of activities the IP has been found to be a part of, and more, giving you an ever-richer store of information to enhance your investigation of security incidents.
The **Hunting** blade has gotten a refresh. The new dashboard lets you run all y
Identify where to start hunting by looking at result count, spikes, or the change in result count over a 24-hour period. You can also sort and filter by favorites, data source, MITRE ATT&CK tactic and technique, results, or results delta. View the queries that do not yet have the necessary data sources connected, and get recommendations on how to enable these queries.
-For more information, see [Hunt for threats with Azure Sentinel](hunting.md).
+For more information, see [Hunt for threats with Microsoft Sentinel](hunting.md).
-### Azure Sentinel incident team - collaborate in Microsoft Teams (public preview)
+### Microsoft Sentinel incident team - collaborate in Microsoft Teams (public preview)
-Azure Sentinel now supports a direct integration with Microsoft Teams, enabling you to collaborate seamlessly across the organization and with external stakeholders.
+Microsoft Sentinel now supports a direct integration with Microsoft Teams, enabling you to collaborate seamlessly across the organization and with external stakeholders.
-Directly from the incident in Azure Sentinel, create a new *incident team* to use for central communication and coordination.
+Directly from the incident in Microsoft Sentinel, create a new *incident team* to use for central communication and coordination.
-Incident teams are especially helpful when used as a dedicated conference bridge for high-severity, ongoing incidents. Organizations that already use Microsoft Teams for communication and collaboration can use the Azure Sentinel integration to bring security data directly into their conversations and daily work.
+Incident teams are especially helpful when used as a dedicated conference bridge for high-severity, ongoing incidents. Organizations that already use Microsoft Teams for communication and collaboration can use the Microsoft Sentinel integration to bring security data directly into their conversations and daily work.
-In Microsoft Teams, the new team's **Incident page** tab always has the most updated and recent data from Azure Sentinel, ensuring that your teams have the most relevant data right at hand.
+In Microsoft Teams, the new team's **Incident page** tab always has the most updated and recent data from Microsoft Sentinel, ensuring that your teams have the most relevant data right at hand.
[ ![Incident page in Microsoft Teams.](media/collaborate-in-microsoft-teams/incident-in-teams.png) ](media/collaborate-in-microsoft-teams/incident-in-teams.png#lightbox)
For more information, see [Collaborate in Microsoft Teams (Public preview)](coll
### Zero Trust (TIC3.0) workbook
-The new, Azure Sentinel Zero Trust (TIC3.0) workbook provides an automated visualization of [Zero Trust](/security/zero-trust/) principles, cross-walked to the [Trusted Internet Connections](https://www.cisa.gov/trusted-internet-connections) (TIC) framework.
+The new, Microsoft Sentinel Zero Trust (TIC3.0) workbook provides an automated visualization of [Zero Trust](/security/zero-trust/) principles, cross-walked to the [Trusted Internet Connections](https://www.cisa.gov/trusted-internet-connections) (TIC) framework.
-We know that compliance isn't just an annual requirement, and organizations must monitor configurations over time like a muscle. Azure Sentinel's Zero Trust workbook uses the full breadth of Microsoft security offerings across Azure, Office 365, Teams, Intune, Azure Virtual Desktop, and many more.
+We know that compliance isn't just an annual requirement, and organizations must monitor configurations over time like a muscle. Microsoft Sentinel's Zero Trust workbook uses the full breadth of Microsoft security offerings across Azure, Office 365, Teams, Intune, Azure Virtual Desktop, and many more.
[ ![Zero Trust workbook.](media/zero-trust-workbook.gif) ](media/zero-trust-workbook.gif#lightbox)
For more information, see [Visualize and monitor your data](monitor-your-data.md
### Azure Policy-based data connectors
-Azure Policy allows you to apply a common set of diagnostics logs settings to all (current and future) resources of a particular type whose logs you want to ingest into Azure Sentinel.
+Azure Policy allows you to apply a common set of diagnostics logs settings to all (current and future) resources of a particular type whose logs you want to ingest into Microsoft Sentinel.
Continuing our efforts to bring the power of [Azure Policy](../governance/policy/overview.md) to the task of data collection configuration, we are now offering another Azure Policy-enhanced data collector, for [Azure Storage account](./data-connectors-reference.md#azure-storage-account) resources, released to public preview.
For example:
:::image type="content" source="media/investigate-cases/incident-timeline.png" alt-text="Incident timeline tab":::
-For more information, see [Tutorial: Investigate incidents with Azure Sentinel](investigate-cases.md).
+For more information, see [Tutorial: Investigate incidents with Microsoft Sentinel](investigate-cases.md).
For more information, see [Tutorial: Investigate incidents with Azure Sentinel](
- [New detections for Azure Firewall](#new-detections-for-azure-firewall) - [Automation rules and incident-triggered playbooks (Public preview)](#automation-rules-and-incident-triggered-playbooks-public-preview) (including all-new playbook documentation) - [New alert enrichments: enhanced entity mapping and custom details (Public preview)](#new-alert-enrichments-enhanced-entity-mapping-and-custom-details-public-preview)-- [Print your Azure Sentinel workbooks or save as PDF](#print-your-azure-sentinel-workbooks-or-save-as-pdf)
+- [Print your Microsoft Sentinel workbooks or save as PDF](#print-your-microsoft-sentinel-workbooks-or-save-as-pdf)
- [Incident filters and sort preferences now saved in your session (Public preview)](#incident-filters-and-sort-preferences-now-saved-in-your-session-public-preview) - [Microsoft 365 Defender incident integration (Public preview)](#microsoft-365-defender-incident-integration-public-preview) - [New Microsoft service connectors using Azure Policy](#new-microsoft-service-connectors-using-azure-policy) ### Set workbooks to automatically refresh while in view mode
-Azure Sentinel users can now use the new [Azure Monitor ability](https://techcommunity.microsoft.com/t5/azure-monitor/azure-workbooks-set-it-to-auto-refresh/ba-p/2228555) to automatically refresh workbook data during a view session.
+Microsoft Sentinel users can now use the new [Azure Monitor ability](https://techcommunity.microsoft.com/t5/azure-monitor/azure-workbooks-set-it-to-auto-refresh/ba-p/2228555) to automatically refresh workbook data during a view session.
In each workbook or workbook template, select :::image type="icon" source="media/whats-new/auto-refresh-workbook.png" border="false"::: **Auto refresh** to display your interval options. Select the option you want to use for the current view session, and select **Apply**.
For more information, see [Visualize and monitor your data](monitor-your-data.md
### New detections for Azure Firewall
-Several out-of-the-box detections for Azure Firewall have been added to the [Analytics](./understand-threat-intelligence.md) area in Azure Sentinel. These new detections allow security teams to get alerts if machines on the internal network attempt to query or connect to internet domain names or IP addresses that are associated with known IOCs, as defined in the detection rule query.
+Several out-of-the-box detections for Azure Firewall have been added to the [Analytics](./understand-threat-intelligence.md) area in Microsoft Sentinel. These new detections allow security teams to get alerts if machines on the internal network attempt to query or connect to internet domain names or IP addresses that are associated with known IOCs, as defined in the detection rule query.
The new detections include:
Detections for Azure Firewalls are continuously added to the built-in template g
:::image type="content" source="media/whats-new/new-detections-analytics-efficiency-workbook.jpg" alt-text="New detections in the Analytics efficiency workbook":::
-For more information, see [New detections for Azure Firewall in Azure Sentinel](https://techcommunity.microsoft.com/t5/azure-network-security/new-detections-for-azure-firewall-in-azure-sentinel/ba-p/2244958).
+For more information, see [New detections for Azure Firewall in Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-network-security/new-detections-for-azure-firewall-in-azure-sentinel/ba-p/2244958).
### Automation rules and incident-triggered playbooks (Public preview)
-Automation rules are a new concept in Azure Sentinel, allowing you to centrally manage the automation of incident handling. Besides letting you assign playbooks to incidents (not just to alerts as before), automation rules also allow you to automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, and control the order of actions that are executed. Automation rules will streamline automation use in Azure Sentinel and will enable you to simplify complex workflows for your incident orchestration processes.
+Automation rules are a new concept in Microsoft Sentinel, allowing you to centrally manage the automation of incident handling. Besides letting you assign playbooks to incidents (not just to alerts as before), automation rules also allow you to automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, and control the order of actions that are executed. Automation rules will streamline automation use in Microsoft Sentinel and will enable you to simplify complex workflows for your incident orchestration processes.
Learn more with this [complete explanation of automation rules](automate-incident-handling-with-automation-rules.md).
Give your investigative and response capabilities an even greater boost by custo
-### Print your Azure Sentinel workbooks or save as PDF
+### Print your Microsoft Sentinel workbooks or save as PDF
-Now you can print Azure Sentinel workbooks, which also enables you to export to them to PDFs and save locally or share.
+Now you can print Microsoft Sentinel workbooks, which also enables you to export them to PDFs and save them locally or share them.
In your workbook, select the options menu > :::image type="icon" source="media/whats-new/print-icon.png" border="false"::: **Print content**. Then select your printer, or select **Save as PDF** as needed.
For more information, see [Visualize and monitor your data](monitor-your-data.md
### Incident filters and sort preferences now saved in your session (Public preview)
-Now your incident filters and sorting is saved throughout your Azure Sentinel session, even while navigating to other areas of the product.
-As long as you're still in the same session, navigating back to the [Incidents](investigate-cases.md) area in Azure Sentinel shows your filters and sorting just as you left it.
+Now your incident filters and sorting are saved throughout your Microsoft Sentinel session, even while navigating to other areas of the product.
+As long as you're still in the same session, navigating back to the [Incidents](investigate-cases.md) area in Microsoft Sentinel shows your filters and sorting just as you left it.
> [!NOTE]
-> Incident filters and sorting are not saved after leaving Azure Sentinel or refreshing your browser.
+> Incident filters and sorting are not saved after leaving Microsoft Sentinel or refreshing your browser.
### Microsoft 365 Defender incident integration (Public preview)
-Azure Sentinel's [Microsoft 365 Defender (M365D)](/microsoft-365/security/mtp/microsoft-threat-protection) incident integration allows you to stream all M365D incidents into Azure Sentinel and keep them synchronized between both portals. Incidents from M365D (formerly known as Microsoft Threat Protection or MTP) include all associated alerts, entities, and relevant information, providing you with enough context to perform triage and preliminary investigation in Azure Sentinel. Once in Sentinel, Incidents will remain bi-directionally synced with M365D, allowing you to take advantage of the benefits of both portals in your incident investigation.
+Microsoft Sentinel's [Microsoft 365 Defender (M365D)](/microsoft-365/security/mtp/microsoft-threat-protection) incident integration allows you to stream all M365D incidents into Microsoft Sentinel and keep them synchronized between both portals. Incidents from M365D (formerly known as Microsoft Threat Protection or MTP) include all associated alerts, entities, and relevant information, providing you with enough context to perform triage and preliminary investigation in Microsoft Sentinel. Once in Microsoft Sentinel, incidents remain bi-directionally synced with M365D, allowing you to take advantage of the benefits of both portals in your incident investigation.
-Using both Azure Sentinel and Microsoft 365 Defender together gives you the best of both worlds. You get the breadth of insight that a SIEM gives you across your organization's entire scope of information resources, and also the depth of customized and tailored investigative power that an XDR delivers to protect your Microsoft 365 resources, both of these coordinated and synchronized for seamless SOC operation.
+Using both Microsoft Sentinel and Microsoft 365 Defender together gives you the best of both worlds. You get the breadth of insight that a SIEM gives you across your organization's entire scope of information resources, and also the depth of customized and tailored investigative power that an XDR delivers to protect your Microsoft 365 resources, both of these coordinated and synchronized for seamless SOC operation.
-For more information, see [Microsoft 365 Defender integration with Azure Sentinel](microsoft-365-defender-sentinel-integration.md).
+For more information, see [Microsoft 365 Defender integration with Microsoft Sentinel](microsoft-365-defender-sentinel-integration.md).
### New Microsoft service connectors using Azure Policy [Azure Policy](../governance/policy/overview.md) is an Azure service which allows you to use policies to enforce and control the properties of a resource. The use of policies ensures that resources stay compliant with your IT governance standards.
-Among the properties of resources that can be controlled by policies are the creation and handling of diagnostics and auditing logs. Azure Sentinel now uses Azure Policy to allow you to apply a common set of diagnostics logs settings to all (current and future) resources of a particular type whose logs you want to ingest into Azure Sentinel. Thanks to Azure Policy, you'll no longer have to set diagnostics logs settings resource by resource.
+Among the properties of resources that can be controlled by policies are the creation and handling of diagnostics and auditing logs. Microsoft Sentinel now uses Azure Policy to allow you to apply a common set of diagnostics logs settings to all (current and future) resources of a particular type whose logs you want to ingest into Microsoft Sentinel. Thanks to Azure Policy, you'll no longer have to set diagnostics logs settings resource by resource.
Azure Policy-based connectors are now available for the following Azure - [Azure Key Vault](./data-connectors-reference.md#azure-key-vault) (public preview)
Customers will still be able to send the logs manually for specific instances an
### Cybersecurity Maturity Model Certification (CMMC) workbook
-The Azure Sentinel CMMC Workbook provides a mechanism for viewing log queries aligned to CMMC controls across the Microsoft portfolio, including Microsoft security offerings, Office 365, Teams, Intune, Azure Virtual Desktop and many more.
+The Microsoft Sentinel CMMC Workbook provides a mechanism for viewing log queries aligned to CMMC controls across the Microsoft portfolio, including Microsoft security offerings, Office 365, Teams, Intune, Azure Virtual Desktop and many more.
The CMMC workbook enables security architects, engineers, security operations analysts, managers, and IT professionals to gain situational awareness visibility for the security posture of cloud workloads. There are also recommendations for selecting, designing, deploying, and configuring Microsoft offerings for alignment with respective CMMC requirements and practices. Even if you aren't required to comply with CMMC, the CMMC workbook is helpful in building Security Operations Centers, developing alerts, visualizing threats, and providing situational awareness of workloads.
-Access the CMMC workbook in the Azure Sentinel **Workbooks** area. Select **Template**, and then search for **CMMC**.
+Access the CMMC workbook in the Microsoft Sentinel **Workbooks** area. Select **Template**, and then search for **CMMC**.
:::image type="content" source="media/whats-new/cmmc-guide-toggle.gif" alt-text="GIF recording of the C M M C workbook guide toggled on and off." lightbox="media/whats-new/cmmc-guide-toggle.gif"::: For more information, see: -- [Azure Sentinel Cybersecurity Maturity Model Certification (CMMC) Workbook](https://techcommunity.microsoft.com/t5/public-sector-blog/azure-sentinel-cybersecurity-maturity-model-certification-cmmc/ba-p/2110524)
+- [Microsoft Sentinel Cybersecurity Maturity Model Certification (CMMC) Workbook](https://techcommunity.microsoft.com/t5/public-sector-blog/azure-sentinel-cybersecurity-maturity-model-certification-cmmc/ba-p/2110524)
- [Visualize and monitor your data](monitor-your-data.md)
Our collection of third-party integrations continues to grow, with thirty connec
### UEBA insights in the entity page (Public preview)
-The Azure Sentinel entity details pages provide an [Insights pane](entity-pages.md#entity-insights), which displays behavioral insights on the entity and help to quickly identify anomalies and security threats.
+The Microsoft Sentinel entity details pages provide an [Insights pane](entity-pages.md#entity-insights), which displays behavioral insights on the entity and helps you quickly identify anomalies and security threats.
If you have [UEBA enabled](ueba-reference.md), and have selected a timeframe of at least four days, this Insights pane will now also include the following new sections for UEBA insights:
If you have [UEBA enabled](ueba-reference.md), and have selected a timeframe of
### Improved incident search (Public preview)
-We've improved the Azure Sentinel incident searching experience, enabling you to navigate faster through incidents as you investigate a specific threat.
+We've improved the Microsoft Sentinel incident searching experience, enabling you to navigate faster through incidents as you investigate a specific threat.
-When searching for incidents in Azure Sentinel, you're now able to search by the following incident details:
+When searching for incidents in Microsoft Sentinel, you're now able to search by the following incident details:
- ID - Title
When searching for incidents in Azure Sentinel, you're now able to search by the
### Analytics rule wizard: Improved query editing experience (Public preview)
-The Azure Sentinel Scheduled analytics rule wizard now provides the following enhancements for writing and editing queries:
+The Microsoft Sentinel Scheduled analytics rule wizard now provides the following enhancements for writing and editing queries:
- An expandable editing window, providing you with more screen space to view your query. - Key word highlighting in your query code.
The Azure Sentinel Scheduled analytics rule wizard now provides the following en
For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md). ### Az.SecurityInsights PowerShell module (Public preview)
-Azure Sentinel now supports the new [Az.SecurityInsights](https://www.powershellgallery.com/packages/Az.SecurityInsights/) PowerShell module.
+Microsoft Sentinel now supports the new [Az.SecurityInsights](https://www.powershellgallery.com/packages/Az.SecurityInsights/) PowerShell module.
-The **Az.SecurityInsights** module supports common Azure Sentinel use cases, like interacting with incidents to change statues, severity, owner, and so on, adding comments and labels to incidents, and creating bookmarks.
+The **Az.SecurityInsights** module supports common Microsoft Sentinel use cases, like interacting with incidents to change statuses, severity, owner, and so on, adding comments and labels to incidents, and creating bookmarks.
Although we recommend using [Azure Resource Manager (ARM)](../azure-resource-manager/templates/index.yml) templates for your CI/CD pipeline, the **Az.SecurityInsights** module is useful for post-deployment tasks, and is targeted for SOC automation. For example, your SOC automation might include steps to configure data connectors, create analytics rules, or add automation actions to analytics rules.
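For example, a minimal triage sketch with the module might look like the following. The resource group, workspace, and severity values are placeholders, and cmdlet parameter names can vary between module versions, so verify them with `Get-Help` before relying on this.

```azurepowershell-interactive
# Placeholder values - replace with your own resource group and workspace.
$resourceGroup = "myresourcegroup"
$workspace     = "myworkspace"

# List the incidents in the workspace (requires the Az.SecurityInsights module).
$incidents = Get-AzSentinelIncident -ResourceGroupName $resourceGroup -WorkspaceName $workspace

# Pick the first new incident and raise its severity.
# Parameter names shown here are assumptions - confirm them with:
#   Get-Help Update-AzSentinelIncident -Full
$incident = $incidents | Where-Object { $_.Status -eq "New" } | Select-Object -First 1

Update-AzSentinelIncident -ResourceGroupName $resourceGroup `
    -WorkspaceName $workspace `
    -IncidentId $incident.Name `
    -Title $incident.Title `
    -Severity "High" `
    -Status "Active"
```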
For more information, including a full list and description of the available cmd
### SQL database connector
-Azure Sentinel now provides an Azure SQL database connector, which you to stream your databases' auditing and diagnostic logs into Azure Sentinel and continuously monitor activity in all your instances.
+Microsoft Sentinel now provides an Azure SQL database connector, which lets you stream your databases' auditing and diagnostic logs into Microsoft Sentinel and continuously monitor activity in all your instances.
Azure SQL is a fully managed, Platform-as-a-Service (PaaS) database engine that handles most database management functions, such as upgrading, patching, backups, and monitoring, without user involvement.
For more information, see [Connect Azure SQL database diagnostics and auditing l
### Dynamics 365 connector (Public preview)
-Azure Sentinel now provides a connector for Microsoft Dynamics 365, which lets you collect your Dynamics 365 applications' user, admin, and support activity logs into Azure Sentinel. You can use this data to help you audit the entirety of data processing actions taking place and analyze it for possible security breaches.
+Microsoft Sentinel now provides a connector for Microsoft Dynamics 365, which lets you collect your Dynamics 365 applications' user, admin, and support activity logs into Microsoft Sentinel. You can use this data to help you audit the entirety of data processing actions taking place and analyze it for possible security breaches.
-For more information, see [Connect Dynamics 365 activity logs to Azure Sentinel](./data-connectors-reference.md#dynamics-365).
+For more information, see [Connect Dynamics 365 activity logs to Microsoft Sentinel](./data-connectors-reference.md#dynamics-365).
### Improved incident comments
Our improved incident commenting experience enables you to format your comments
For more information, see [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md). ### Dedicated Log Analytics clusters
-Azure Sentinel now supports dedicated Log Analytics clusters as a deployment option. We recommend considering a dedicated cluster if you:
+Microsoft Sentinel now supports dedicated Log Analytics clusters as a deployment option. We recommend considering a dedicated cluster if you:
-- **Ingest over 1 Tb per day** into your Azure Sentinel workspace-- **Have multiple Azure Sentinel workspaces** in your Azure enrollment
+- **Ingest over 1 TB per day** into your Microsoft Sentinel workspace
+- **Have multiple Microsoft Sentinel workspaces** in your Azure enrollment
Dedicated clusters enable you to use features like customer-managed keys, lockbox, double encryption, and faster cross-workspace queries when you have multiple workspaces on the same cluster.
For more information, see [Azure Monitor logs dedicated clusters](../azure-monit
### Logic apps managed identities
-Azure Sentinel now supports managed identities for the Azure Sentinel Logic Apps connector, enabling you to grant permissions directly to a specific playbook to operate on Azure Sentinel instead of creating extra identities.
+Microsoft Sentinel now supports managed identities for the Microsoft Sentinel Logic Apps connector, enabling you to grant permissions directly to a specific playbook to operate on Microsoft Sentinel instead of creating extra identities.
-- **Without a managed identity**, the Logic Apps connector requires a separate identity with an Azure Sentinel RBAC role in order to run on Azure Sentinel. The separate identity can be an Azure AD user or a Service Principal, such as an Azure AD registered application.
+- **Without a managed identity**, the Logic Apps connector requires a separate identity with a Microsoft Sentinel RBAC role in order to run on Microsoft Sentinel. The separate identity can be an Azure AD user or a Service Principal, such as an Azure AD registered application.
-- **Turning on managed identity support in your Logic App** registers the Logic App with Azure AD and provides an object ID. Use the object ID in Azure Sentinel to assign the Logic App with an Azure RBAC role in your Azure Sentinel workspace.
+- **Turning on managed identity support in your Logic App** registers the Logic App with Azure AD and provides an object ID. Use the object ID in Microsoft Sentinel to assign the Logic App with an Azure RBAC role in your Microsoft Sentinel workspace.
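For example, once you have the playbook's managed identity object ID, the role assignment itself can be made with a standard `New-AzRoleAssignment` call, as in the sketch below. The object ID, role name, and workspace resource ID are placeholders; choose the built-in Sentinel role that matches what the playbook needs to do.

```azurepowershell-interactive
# Placeholder values - replace with your playbook's managed identity object ID
# and the full resource ID of your Microsoft Sentinel (Log Analytics) workspace.
$objectId    = "00000000-0000-0000-0000-000000000000"
$workspaceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"

# Grant the playbook's managed identity a Sentinel role scoped to the workspace.
# "Microsoft Sentinel Responder" is one example of a suitable built-in role.
New-AzRoleAssignment -ObjectId $objectId `
    -RoleDefinitionName "Microsoft Sentinel Responder" `
    -Scope $workspaceId
```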
For more information, see: - [Authenticating with Managed Identity in Azure Logic Apps](../logic-apps/create-managed-service-identity.md)-- [Azure Sentinel Logic Apps connector documentation](/connectors/azuresentinel)
+- [Microsoft Sentinel Logic Apps connector documentation](/connectors/azuresentinel)
### Improved rule tuning with the analytics rule preview graphs (Public preview)
-Azure Sentinel now helps you better tune your analytics rules, helping you to increase their accuracy and decrease noise.
+Microsoft Sentinel now helps you better tune your analytics rules, helping you to increase their accuracy and decrease noise.
After editing an analytics rule on the **Set rule logic** tab, find the **Results simulation** area on the right.
-Select **Test with current data** to have Azure Sentinel run a simulation of the last 50 runs of your analytics rule. A graph is generated to show the average number of alerts that the rule would have generated, based on the raw event data evaluated.
+Select **Test with current data** to have Microsoft Sentinel run a simulation of the last 50 runs of your analytics rule. A graph is generated to show the average number of alerts that the rule would have generated, based on the raw event data evaluated.
For more information, see [Define the rule query logic and configure settings](detect-threats-custom.md#define-the-rule-query-logic-and-configure-settings).
For more information, see [Define the rule query logic and configure settings](d
### 80 new built-in hunting queries
-Azure Sentinel's built-in hunting queries empower SOC analysts to reduce gaps in current detection coverage and ignite new hunting leads.
+Microsoft Sentinel's built-in hunting queries empower SOC analysts to reduce gaps in current detection coverage and ignite new hunting leads.
-This update for Azure Sentinel includes new hunting queries that provide coverage across the MITRE ATT&CK framework matrix:
+This update for Microsoft Sentinel includes new hunting queries that provide coverage across the MITRE ATT&CK framework matrix:
- **Collection** - **Command and Control**
The added hunting queries are designed to help you find suspicious activity in y
If after running these queries, you are confident with the results, you may want to convert them to analytics rules or add hunting results to existing or new incidents.
-All of the added queries are available via the Azure Sentinel Hunting page. For more information, see [Hunt for threats with Azure Sentinel](hunting.md).
+All of the added queries are available via the Microsoft Sentinel Hunting page. For more information, see [Hunt for threats with Microsoft Sentinel](hunting.md).
### Log Analytics agent improvements
-Azure Sentinel users benefit from the following Log Analytics agent improvements:
+Microsoft Sentinel users benefit from the following Log Analytics agent improvements:
- **Support for more operating systems**, including CentOS 8, RedHat 8, and SUSE Linux 15. - **Support for Python 3** in addition to Python 2
-Azure Sentinel uses the Log Analytics agent to sent events to your workspace, including Windows Security events, Syslog events, CEF logs, and more.
+Microsoft Sentinel uses the Log Analytics agent to send events to your workspace, including Windows Security events, Syslog events, CEF logs, and more.
> [!NOTE] > The Log Analytics agent is sometimes referred to as the OMS Agent or the Microsoft Monitoring Agent (MMA).
Azure Sentinel uses the Log Analytics agent to sent events to your workspace, in
For more information, see the [Log Analytics documentation](../azure-monitor/agents/log-analytics-agent.md) and the [Log Analytics agent release notes](https://github.com/microsoft/OMS-Agent-for-Linux/releases). ## November 2020 -- [Monitor your Playbooks' health in Azure Sentinel](#monitor-your-playbooks-health-in-azure-sentinel)
+- [Monitor your Playbooks' health in Microsoft Sentinel](#monitor-your-playbooks-health-in-microsoft-sentinel)
- [Microsoft 365 Defender connector (Public preview)](#microsoft-365-defender-connector-public-preview)
-### Monitor your Playbooks' health in Azure Sentinel
+### Monitor your Playbooks' health in Microsoft Sentinel
-Azure Sentinel playbooks are based on workflows built in [Azure Log Apps](../logic-apps/index.yml), a cloud service that helps you schedule, automate, and orchestrate tasks, business processes, and workflows. Playbooks can be automatically invoked when an incident is created, or when triaging and working with incidents.
+Microsoft Sentinel playbooks are based on workflows built in [Azure Logic Apps](../logic-apps/index.yml), a cloud service that helps you schedule, automate, and orchestrate tasks, business processes, and workflows. Playbooks can be automatically invoked when an incident is created, or when triaging and working with incidents.
To provide insights into the health, performance, and usage of your playbooks, we've added a [workbook](../azure-monitor/visualize/workbooks-overview.md) named **Playbooks health monitoring**. Use the **Playbooks health monitoring** workbook to monitor the health of your playbooks, or look for anomalies in the amount of succeeded or failed runs.
-The **Playbooks health monitoring** workbook is now available in the Azure Sentinel Templates gallery:
+The **Playbooks health monitoring** workbook is now available in the Microsoft Sentinel Templates gallery:
:::image type="content" source="media/whats-new/playbook-monitoring-workbook.gif" alt-text="Sample Playbooks health monitoring workbook":::
For more information, see:
### Microsoft 365 Defender connector (Public preview)
-The Microsoft 365 Defender connector for Azure Sentinel enables you to stream advanced hunting logs (a type of raw event data) from Microsoft 365 Defender into Azure Sentinel.
+The Microsoft 365 Defender connector for Microsoft Sentinel enables you to stream advanced hunting logs (a type of raw event data) from Microsoft 365 Defender into Microsoft Sentinel.
-With the integration of [Microsoft Defender for Endpoint (MDATP)](/windows/security/threat-protection/) into the [Microsoft 365 Defender](/microsoft-365/security/mtp/microsoft-threat-protection) security umbrella, you can now collect your Microsoft Defender for Endpoint advanced hunting events using the Microsoft 365 Defender connector, and stream them straight into new purpose-built tables in your Azure Sentinel workspace.
+With the integration of [Microsoft Defender for Endpoint (MDATP)](/windows/security/threat-protection/) into the [Microsoft 365 Defender](/microsoft-365/security/mtp/microsoft-threat-protection) security umbrella, you can now collect your Microsoft Defender for Endpoint advanced hunting events using the Microsoft 365 Defender connector, and stream them straight into new purpose-built tables in your Microsoft Sentinel workspace.
-The Azure Sentinel tables are built on the same schema that's used in the Microsoft 365 Defender portal, and provide you with complete access to the full set of advanced hunting logs.
+The Microsoft Sentinel tables are built on the same schema that's used in the Microsoft 365 Defender portal, and provide you with complete access to the full set of advanced hunting logs.
-For more information, see [Connect data from Microsoft 365 Defender to Azure Sentinel](connect-microsoft-365-defender.md).
+For more information, see [Connect data from Microsoft 365 Defender to Microsoft Sentinel](connect-microsoft-365-defender.md).
> [!NOTE] > Microsoft 365 Defender was formerly known as Microsoft Threat Protection or MTP. Microsoft Defender for Endpoint was formerly known as Microsoft Defender Advanced Threat Protection or MDATP.
For more information, see [Connect data from Microsoft 365 Defender to Azure Sen
## Next steps > [!div class="nextstepaction"]
->[On-board Azure Sentinel](quickstart-onboard.md)
+>[On-board Microsoft Sentinel](quickstart-onboard.md)
> [!div class="nextstepaction"] >[Get visibility into alerts](get-visibility.md)
service-bus-messaging Duplicate Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/duplicate-detection.md
Title: Azure Service Bus duplicate message detection | Microsoft Docs description: This article explains how you can detect duplicates in Azure Service Bus messages. The duplicate message can be ignored and dropped. + Last updated 05/31/2022
The *MessageId* can always be some GUID, but anchoring the identifier to the bus
>- When **partitioning** is **enabled**, `MessageId+PartitionKey` is used to determine uniqueness. When sessions are enabled, partition key and session ID must be the same. >- When **partitioning** is **disabled** (default), only `MessageId` is used to determine uniqueness. >- For information about SessionId, PartitionKey, and MessageId, see [Use of partition keys](service-bus-partitioning.md#use-of-partition-keys).
->- The [premier tier](service-bus-premium-messaging.md) doesn't support partitioning, so we recommend that you use unique message IDs in your applications and not rely on partition keys for duplicate detection.
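As an illustration, duplicate detection is turned on when the queue is created. A minimal Azure PowerShell sketch might look like the following; the `-RequiresDuplicateDetection` and `-DuplicateDetectionHistoryTimeWindow` parameter names are assumptions to confirm against the Az.ServiceBus help before use.

```azurepowershell-interactive
# Sketch: create a queue with duplicate detection enabled and a 10-minute window.
# Parameter names are assumptions - confirm with: Get-Help New-AzServiceBusQueue -Full
New-AzServiceBusQueue -ResourceGroup myresourcegroup `
    -NamespaceName mynamespace `
    -QueueName myqueue `
    -RequiresDuplicateDetection $True `
    -DuplicateDetectionHistoryTimeWindow "00:10:00"
```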
- ## Duplicate detection window size
Try the samples in the language of your choice to explore Azure Service Bus feat
Find samples for the older .NET and Java client libraries below: - [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)-
service-bus-messaging Enable Partitions Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-partitions-basic-standard.md
+
+ Title: Enable partitioning in Azure Service Bus basic or standard
+description: This article explains how to enable partitioning in Azure Service Bus queues and topics by using Azure portal, PowerShell, CLI, and programming languages (C#, Java, Python, and JavaScript)
+ Last updated : 10/12/2022 +
+ms.devlang: azurecli
++
+# Enable partitioning in Azure Service Bus basic or standard
+
+Service Bus partitions enable queues and topics, or messaging entities, to be partitioned across multiple message brokers. Partitioning means that the overall throughput of a partitioned entity is no longer limited by the performance of a single message broker. In addition, a temporary outage of a message broker, for example during an upgrade, doesn't render a partitioned queue or topic unavailable. Partitioned queues and topics can contain all advanced Service Bus features, such as support for transactions and sessions. For more information, see [Partitioned queues and topics](service-bus-partitioning.md). This article shows you different ways to enable partitioning for a Service Bus queue or a topic.
+
+> [!IMPORTANT]
+> - Partitioning is available at entity creation for all queues and topics in **Basic** or **Standard** SKUs.
+> - It's not possible to change the partitioning option on any existing queue or topic. You can only set the option when you create a queue or a topic.
+> - In a **Standard** tier namespace, you can create Service Bus queues and topics in 1, 2, 3, 4, or 5-GB sizes (the default is 1 GB). With partitioning enabled, Service Bus creates 16 copies (16 partitions) of the entity, each of the same size specified. As such, if you create a queue that's 5 GB in size, with 16 partitions the maximum queue size becomes (5 \* 16) = 80 GB.
+
+## Use Azure portal
+When creating a **queue** in the Azure portal, select **Enable partitioning** as shown in the following image.
++
+When creating a topic in the Azure portal, select **Enable partitioning** as shown in the following image.
++
+## Use Azure CLI
+To **create a queue with partitioning enabled**, use the [`az servicebus queue create`](/cli/azure/servicebus/queue#az-servicebus-queue-create) command with `--enable-partitioning` set to `true`.
+
+```azurecli-interactive
+az servicebus queue create \
+ --resource-group myresourcegroup \
+ --namespace-name mynamespace \
+ --name myqueue \
+ --enable-partitioning true
+```
+
+To **create a topic with partitioning enabled**, use the [`az servicebus topic create`](/cli/azure/servicebus/topic#az-servicebus-topic-create) command with `--enable-partitioning` set to `true`.
+
+```azurecli-interactive
+az servicebus topic create \
+ --resource-group myresourcegroup \
+ --namespace-name mynamespace \
+ --name mytopic \
+ --enable-partitioning true
+```
+
+## Use Azure PowerShell
+To **create a queue with partitioning enabled**, use the [`New-AzServiceBusQueue`](/powershell/module/az.servicebus/new-azservicebusqueue) command with `-EnablePartitioning` set to `$True`.
+
+```azurepowershell-interactive
+New-AzServiceBusQueue -ResourceGroup myresourcegroup `
+ -NamespaceName mynamespace `
+ -QueueName myqueue `
+ -EnablePartitioning $True
+```
+
+To **create a topic with partitioning enabled**, use the [`New-AzServiceBusTopic`](/powershell/module/az.servicebus/new-azservicebustopic) command with `-EnablePartitioning` set to `$True`.
+
+```azurepowershell-interactive
+New-AzServiceBusTopic -ResourceGroup myresourcegroup `
+ -NamespaceName mynamespace `
+ -Name mytopic `
+ -EnablePartitioning $True
+```
+
+## Use Azure Resource Manager template
+To **create a queue with partitioning enabled**, set `enablePartitioning` to `true` in the queue properties section. For more information, see [Microsoft.ServiceBus namespaces/queues template reference](/azure/templates/microsoft.servicebus/namespaces/queues?tabs=json).
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "serviceBusNamespaceName": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of the Service Bus namespace"
+ }
+ },
+ "serviceBusQueueName": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of the Queue"
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.ServiceBus/namespaces",
+ "apiVersion": "2018-01-01-preview",
+ "name": "[parameters('serviceBusNamespaceName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard"
+ },
+ "properties": {},
+ "resources": [
+ {
+ "type": "Queues",
+ "apiVersion": "2017-04-01",
+ "name": "[parameters('serviceBusQueueName')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.ServiceBus/namespaces', parameters('serviceBusNamespaceName'))]"
+ ],
+ "properties": {
+ "enablePartitioning": true
+ }
+ }
+ ]
+ }
+ ]
+}
+```
+
+To **create a topic with partitioning enabled**, set `enablePartitioning` to `true` in the topic properties section. For more information, see [Microsoft.ServiceBus namespaces/topics template reference](/azure/templates/microsoft.servicebus/namespaces/topics?tabs=json).
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "service_BusNamespace_Name": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of the Service Bus namespace"
+ }
+ },
+ "serviceBusTopicName": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of the Topic"
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ }
+ },
+ "resources": [
+ {
+ "apiVersion": "2018-01-01-preview",
+ "name": "[parameters('service_BusNamespace_Name')]",
+ "type": "Microsoft.ServiceBus/namespaces",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard"
+ },
+ "properties": {},
+ "resources": [
+ {
+ "apiVersion": "2017-04-01",
+ "name": "[parameters('serviceBusTopicName')]",
+ "type": "topics",
+ "dependsOn": [
+ "[resourceId('Microsoft.ServiceBus/namespaces/', parameters('service_BusNamespace_Name'))]"
+ ],
+ "properties": {
+ "enablePartitioning": true
+ }
+ }
+ ]
+ }
+ ]
+}
+```
+
+## Next steps
+Try the samples in the language of your choice to explore Azure Service Bus features.
+
+- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)
+- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)
+- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/)
+- [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/)
+- [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)
+
+Find samples for the older .NET and Java client libraries below:
+- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)
+- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)
service-bus-messaging Enable Partitions Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-partitions-premium.md
+
+ Title: Enable partitioning in Azure Service Bus Premium namespaces
+description: This article explains how to enable partitioning in Azure Service Bus Premium namespaces by using Azure portal, PowerShell, CLI, and programming languages (C#, Java, Python, and JavaScript)
+ Last updated : 10/12/2022 +
+ms.devlang: azurecli
++
+# Enable partitioning for an Azure Service Bus Premium namespace (Preview)
+Service Bus partitions enable queues and topics, or messaging entities, to be partitioned across multiple message brokers. Partitioning means that the overall throughput of a partitioned entity is no longer limited by the performance of a single message broker. In addition, a temporary outage of a message broker, for example during an upgrade, doesn't render a partitioned queue or topic unavailable. Partitioned queues and topics can contain all advanced Service Bus features, such as support for transactions and sessions. For more information, see [Partitioned queues and topics](service-bus-partitioning.md). This article shows you different ways to enable partitioning for a Service Bus Premium namespace. All entities in this namespace will be partitioned.
+
+> [!IMPORTANT]
+> This feature is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
++
+> [!NOTE]
+> - Partitioning is available at entity creation for namespaces in the Premium SKU. Any previously existing partitioned entities in Premium namespaces continue to work as expected.
+> - It's not possible to change the partitioning option on any existing namespace. You can only set the option when you create a namespace.
+> - The assigned messaging units are always a multiple of the number of partitions in a namespace, and are equally distributed across the partitions. For example, in a namespace with 16 MU and 4 partitions, each partition is assigned 4 MU.
+>
+> Some limitations may be encountered during public preview, which will be resolved before going into GA.
+> - It is currently not possible to use JMS on partitioned entities.
+> - Metrics are currently only available on an aggregated namespace level, not for individual partitions.
+> - This feature is rolling out during Ignite 2022, and will initially be available in East US and South Central US, with more regions following later.
+
+## Use Azure portal
+When creating a **namespace** in the Azure portal, set the **Partitioning** to **Enabled** and choose the number of partitions, as shown in the following image.
+
+## Use Azure Resource Manager template
+To **create a namespace with partitioning enabled**, set `partitions` to a number larger than 1 in the namespace properties section. In the example below a partitioned namespace is created with 4 partitions, and 1 messaging unit assigned to each partition. For more information, see [Microsoft.ServiceBus namespaces template reference](/azure/templates/microsoft.servicebus/namespaces?tabs=json).
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "serviceBusNamespaceName": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of the Service Bus namespace"
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.ServiceBus/namespaces",
+ "apiVersion": "2022-10-01-preview",
+ "name": "[parameters('serviceBusNamespaceName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Premium",
+ "capacity": 4
+ },
+ "properties": {
+ "premiumMessagingPartitions": 4
+ }
+ }
+ ]
+}
+```
+
+## Next steps
+Try the samples in the language of your choice to explore Azure Service Bus features.
+
+- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)
+- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)
+- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/)
+- [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/)
+- [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)
+
+Find samples for the older .NET and Java client libraries below:
+- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)
+- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)
service-bus-messaging Enable Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-partitions.md
- Title: Enable partitioning in Azure Service Bus queues and topics
-description: This article explains how to enable partitioning in Azure Service Bus queues and topics by using Azure portal, PowerShell, CLI, and programming languages (C#, Java, Python, and JavaScript)
- Previously updated : 04/19/2021 ---
-# Enable partitioning for an Azure Service Bus queue or a topic
-Service Bus partitions enable queues and topics, or messaging entities, to be partitioned across multiple message brokers and messaging stores. Partitioning means that the overall throughput of a partitioned entity is no longer limited by the performance of a single message broker or messaging store. In addition, a temporary outage of a messaging store doesn't render a partitioned queue or topic unavailable. Partitioned queues and topics can contain all advanced Service Bus features, such as support for transactions and sessions. For more information, See [Partitioned queues and topics](service-bus-partitioning.md). This article shows you different ways to enable duplicate message detection for a Service Bus queue or a topic.
-
-> [!IMPORTANT]
-> - Partitioning is available at entity creation for all queues and topics in Basic or Standard SKUs. It isn't available for the Premium messaging SKU, but any previously existing partitioned entities in Premium namespaces continue to work as expected.
-> - It's not possible to change the partitioning option on any existing queue or topic. You can only set the option when you create a queue or a topic.
-> - In a **Standard** tier namespace, you can create Service Bus queues and topics in 1, 2, 3, 4, or 5-GB sizes (the default is 1 GB). With partitioning enabled, Service Bus creates 16 copies (16 partitions) of the entity, each of the same size specified. As such, if you create a queue that's 5 GB in size, with 16 partitions the maximum queue size becomes (5 \* 16) = 80 GB.
-> - In a **Premium** tier namespace, partitioning entities are not supported. However, you can still create Service Bus queues and topics in 1, 2, 3, 4, 5, 10, 20, 40, or 80-GB sizes (the default is 1 GB). You can see the maximum size of your partitioned queue or topic on the **Overview** page in the [Azure portal](https://portal.azure.com).
--
-## Using Azure portal
-When creating a **queue** in the Azure portal, select **Enable partitioning** as shown in the following image.
--
-When creating a topic in the Azure portal, select **Enable partitioning** as shown in the following image.
--
-## Using Azure CLI
-To **create a queue with partitioning enabled**, use the [`az servicebus queue create`](/cli/azure/servicebus/queue#az-servicebus-queue-create) command with `--enable-partitioning` set to `true`.
-
-```azurecli-interactive
-az servicebus queue create \
- --resource-group myresourcegroup \
- --namespace-name mynamespace \
- --name myqueue \
- --enable-partitioning true
-```
-
-To **create a topic with partitioning enabled**, use the [`az servicebus topic create`](/cli/azure/servicebus/topic#az-servicebus-topic-create) command with `--enable-partitioning` set to `true`.
-
-```azurecli-interactive
-az servicebus topic create \
- --resource-group myresourcegroup \
- --namespace-name mynamespace \
- --name mytopic \
- --enable-partitioning true
-```
-
-## Using Azure PowerShell
-To **create a queue with partitioning enabled**, use the [`New-AzServiceBusQueue`](/powershell/module/az.servicebus/new-azservicebusqueue) command with `-EnablePartitioning` set to `$True`.
-
-```azurepowershell-interactive
-New-AzServiceBusQueue -ResourceGroup myresourcegroup `
- -NamespaceName mynamespace `
- -QueueName myqueue `
- -EnablePartitioning $True
-```
-
-To **create a topic with partitioning enabled**, use the [`New-AzServiceBusTopic`](/powershell/module/az.servicebus/new-azservicebustopic) command with `-EnablePartitioning` set to `true`.
-
-```azurepowershell-interactive
-New-AzServiceBusTopic -ResourceGroup myresourcegroup `
- -NamespaceName mynamespace `
- -Name mytopic `
- -EnablePartitioning $True
-```
-
-## Using Azure Resource Manager template
-To **create a queue with partitioning enabled**, set `enablePartitioning` to `true` in the queue properties section. For more information, see [Microsoft.ServiceBus namespaces/queues template reference](/azure/templates/microsoft.servicebus/namespaces/queues?tabs=json).
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "serviceBusNamespaceName": {
- "type": "string",
- "metadata": {
- "description": "Name of the Service Bus namespace"
- }
- },
- "serviceBusQueueName": {
- "type": "string",
- "metadata": {
- "description": "Name of the Queue"
- }
- },
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]",
- "metadata": {
- "description": "Location for all resources."
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.ServiceBus/namespaces",
- "apiVersion": "2018-01-01-preview",
- "name": "[parameters('serviceBusNamespaceName')]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard"
- },
- "properties": {},
- "resources": [
- {
- "type": "Queues",
- "apiVersion": "2017-04-01",
- "name": "[parameters('serviceBusQueueName')]",
- "dependsOn": [
- "[resourceId('Microsoft.ServiceBus/namespaces', parameters('serviceBusNamespaceName'))]"
- ],
- "properties": {
- "enablePartitioning": true
- }
- }
- ]
- }
- ]
-}
-
-```
-
-To **create a topic with duplicate detection enabled**, set `enablePartitioning` to `true` in the topic properties section. For more information, see [Microsoft.ServiceBus namespaces/topics template reference](/azure/templates/microsoft.servicebus/namespaces/topics?tabs=json).
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "service_BusNamespace_Name": {
- "type": "string",
- "metadata": {
- "description": "Name of the Service Bus namespace"
- }
- },
- "serviceBusTopicName": {
- "type": "string",
- "metadata": {
- "description": "Name of the Topic"
- }
- },
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]",
- "metadata": {
- "description": "Location for all resources."
- }
- }
- },
- "resources": [
- {
- "apiVersion": "2018-01-01-preview",
- "name": "[parameters('service_BusNamespace_Name')]",
- "type": "Microsoft.ServiceBus/namespaces",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard"
- },
- "properties": {},
- "resources": [
- {
- "apiVersion": "2017-04-01",
- "name": "[parameters('serviceBusTopicName')]",
- "type": "topics",
- "dependsOn": [
- "[resourceId('Microsoft.ServiceBus/namespaces/', parameters('service_BusNamespace_Name'))]"
- ],
- "properties": {
- "enablePartitioning": true
- }
- }
- ]
- }
- ]
-}
-```
---
-## Next steps
-Try the samples in the language of your choice to explore Azure Service Bus features.
--- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/) -- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)-- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/)-- [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/)-- [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)-
-Find samples for the older .NET and Java client libraries below:
-- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)-- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)
service-bus-messaging Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/explorer.md
Title: Use Azure Service Bus Explorer to run data operations (Preview)
+ Title: Use Azure Service Bus Explorer to run data operations
description: This article provides information on how to use the portal-based Azure Service Bus Explorer to access Azure Service Bus data. - Previously updated : 06/17/2022+ Last updated : 09/26/2022
-# Use Service Bus Explorer to run data operations on Service Bus (Preview)
+# Use Service Bus Explorer to run data operations on Service Bus
Azure Service Bus allows sender and receiver client applications to decouple their business logic with the use of familiar point-to-point (Queue) and publish-subscribe (Topic-Subscription) semantics. > [!NOTE]
To use the Service Bus Explorer, navigate to the Service Bus namespace on which
:::image type="content" source="./media/service-bus-explorer/queue-topics-left-navigation.png" alt-text="Screenshot of left side navigation, where entity can be selected." lightbox="./media/service-bus-explorer/queue-topics-left-navigation.png"::: 1. After selecting **Queues** or **Topics**, select the specific queue or topic.
-1. Select the **Service Bus Explorer (preview)** from the left navigation menu
+1. Select **Service Bus Explorer** from the left navigation menu.
:::image type="content" source="./media/service-bus-explorer/left-navigation-menu-selected.png" alt-text="Screenshot of queue blade where Service Bus Explorer can be selected." lightbox="./media/service-bus-explorer/left-navigation-menu-selected.png":::
service-bus-messaging Monitor Service Bus Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/monitor-service-bus-reference.md
Title: Monitoring Azure Service Bus data reference
description: Important reference material needed when you monitor Azure Service Bus. Previously updated : 02/10/2022 Last updated : 10/11/2022
The following two types of errors are classified as **user errors**:
| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions | | - | - | -- | | | |
-|Incoming Messages| Yes | Count | Total | The number of events or messages sent to Service Bus over a specified period. This metric doesn't include messages that are auto forwarded. | Entity name|
-|Outgoing Messages| Yes | Count | Total | The number of events or messages received from Service Bus over a specified period. | Entity name|
+|Incoming Messages| Yes | Count | Total | The number of events or messages sent to Service Bus over a specified period. For basic and standard tiers, incoming auto-forwarded messages are included in this metric. And, for the premium tier, they aren't included. | Entity name|
+|Outgoing Messages| Yes | Count | Total | The number of events or messages received from Service Bus over a specified period. The outgoing auto-forwarded messages aren't included in this metric. | Entity name|
| Messages| No | Count | Average | Count of messages in a queue/topic. | Entity name | | Active Messages| No | Count | Average | Count of active messages in a queue/topic. | Entity name | | Dead-lettered messages| No | Count | Average | Count of dead-lettered messages in a queue/topic. | Entity name |
The following two types of errors are classified as **user errors**:
| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions | | - | - | -- | | | | |Active Connections| No | Count | Total | The number of active connections on a namespace and on an entity in the namespace. Value for this metric is a point-in-time value. Connections that were active immediately after that point-in-time may not be reflected in the metric. | |
-|Connections Opened | No | Count | Average | The number of connections opened. Value for this metric is an aggregation, and includes all connections that were opened in the aggregration time window. | Entity name|
-|Connections Closed | No | Count | Average | The number of connections closed. Value for this metric is an aggregation, and includes all connections that were opened in the aggregration time window. | Entity name|
+|Connections Opened | No | Count | Average | The number of connections opened. Value for this metric is an aggregation, and includes all connections that were opened in the aggregation time window. | Entity name|
+|Connections Closed | No | Count | Average | The number of connections closed. Value for this metric is an aggregation, and includes all connections that were closed in the aggregation time window. | Entity name|
### Resource usage metrics
service-bus-messaging Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Service Bus Messaging description: Lists Azure Policy Regulatory Compliance controls available for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
service-bus-messaging Service Bus Async Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-async-messaging.md
Title: Service Bus asynchronous messaging | Microsoft Docs description: Learn how Azure Service Bus supports asynchronism via a store and forward mechanism with queues, topics, and subscriptions. + Last updated 04/23/2021
Asynchronous messaging can be implemented in a variety of different ways. With q
Applications typically use asynchronous messaging patterns to enable a number of communication scenarios. You can build applications in which clients can send messages to services, even when the service is not running. For applications that experience bursts of communications, a queue can help level the load by providing a place to buffer communications. Finally, you can get a simple but effective load balancer to distribute messages across multiple machines.
-In order to maintain availability of any of these entities, consider a number of different ways in which these entities can appear unavailable for a durable messaging system. Generally speaking, we see the entity become unavailable to applications we write in the following different ways:
+In order to maintain availability of any of these entities, consider a number of different ways in which these entities can appear unavailable for a durable messaging system. Generally speaking, the entity becomes unavailable to the applications we write in the following ways:
* Unable to send messages. * Unable to receive messages.
There are several ways to handle message and entity issues, and there are guidel
* Throttling from an external system on which Service Bus depends. Throttling occurs from interactions with storage and compute resources. * Issue for a system on which Service Bus depends. For example, a given part of storage can encounter issues. * Failure of Service Bus on single subsystem. In this situation, a compute node can get into an inconsistent state and must restart itself, causing all entities it serves to load balance to other nodes. This in turn can cause a short period of slow message processing.
-* Failure of Service Bus within an Azure datacenter. This is a "catastrophic failure" during which the system is unreachable for many minutes or a few hours.
+* Failure of Service Bus within an Azure data-center. This is a "catastrophic failure" during which the system is unreachable for many minutes or a few hours.
> [!NOTE] > The term **storage** can mean both Azure Storage and SQL Azure.
Service Bus contains a number of mitigations for these issues. The following sec
### Throttling With Service Bus, throttling enables cooperative message rate management. Each individual Service Bus node houses many entities. Each of those entities makes demands on the system in terms of CPU, memory, storage, and other facets. When any of these facets detects usage that exceeds defined thresholds, Service Bus can deny a given request. The caller receives a server busy exception and retries after 10 seconds.
-As a mitigation, the code must read the error and halt any retries of the message for at least 10 seconds. Since the error can happen across pieces of the customer application, it is expected that each piece independently executes the retry logic. The code can reduce the probability of being throttled by enabling partitioning on a queue or topic.
+As a mitigation, the code must read the error and halt any retries of the message for at least 10 seconds. Since the error can happen across pieces of the customer application, it is expected that each piece independently executes the retry logic. The code can reduce the probability of being throttled by enabling partitioning on a namespace, queue, or topic.
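The back-off guidance above can be illustrated with a small, generic helper. This is a sketch rather than anything shipped with the Service Bus SDKs (which already include built-in retry policies); the function name, defaults, and the commented-out `Send-MyMessage` call are invented for the example.

```powershell
# Sketch: retry an operation, waiting at least 10 seconds after a server-busy
# style failure. Production code should prefer the SDK retry policies.
function Invoke-WithServerBusyRetry {
    param(
        [Parameter(Mandatory)] [scriptblock] $Operation,
        [int] $MaxAttempts  = 5,
        [int] $DelaySeconds = 10
    )

    for ($attempt = 1; $attempt -le $MaxAttempts; $attempt++) {
        try {
            return & $Operation                        # Success: return the operation's result.
        }
        catch {
            if ($attempt -eq $MaxAttempts) { throw }   # Give up after the last attempt.
            Start-Sleep -Seconds $DelaySeconds         # Halt retries for at least 10 seconds.
        }
    }
}

# Example usage: wrap whatever send operation your tooling performs (hypothetical command).
# Invoke-WithServerBusyRetry -Operation { Send-MyMessage -Queue "myqueue" }
```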
### Issue for an Azure dependency Other components within Azure can occasionally have service issues. For example, when a system that Service Bus uses is being upgraded, that system can temporarily experience reduced capabilities. To work around these types of issues, Service Bus regularly investigates and implements mitigations. Side effects of these mitigations do appear. For example, to handle transient issues with storage, Service Bus implements a system that allows message send operations to work consistently. Due to the nature of the mitigation, a sent message can take up to 15 minutes to appear in the affected queue or subscription and be ready for a receive operation. Generally speaking, most entities will not experience this issue. However, given the number of entities in Service Bus within Azure, this mitigation is sometimes needed for a small subset of Service Bus customers.
In these cases, the client application generates a timeout exception or a messag
Now that you've learned the basics of asynchronous messaging in Service Bus, read more details about [handling outages and disasters][handling outages and disasters]. [Best practices for insulating applications against Service Bus outages and disasters]: service-bus-outages-disasters.md
-[handling outages and disasters]: service-bus-outages-disasters.md
+[handling outages and disasters]: service-bus-outages-disasters.md
service-bus-messaging Service Bus Auto Forwarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-auto-forwarding.md
Title: Auto-forwarding Azure Service Bus messaging entities
description: This article describes how to chain an Azure Service Bus queue or subscription to another queue or topic. Last updated 07/27/2022-+ # Chaining Service Bus entities with autoforwarding
If Alice goes on vacation, her personal queue, rather than the ERP topic, fills
> When autoforwarding is setup, the value for `AutoDeleteOnIdle` on the source entity is automatically set to the maximum value of the data type. > > - On the source side, autoforwarding acts as a receive operation, so the source that has autoforwarding enabled is never really "idle" and hence it won't be automatically deleted.
-> - Autoforwarding doesn't make any changes to the destination entity. If `AutoDeleteOnIdle` is enabled on destination entity, the entity is automatically deleted if it's inactive for the specified idle interval. We recommend that you don't enable `AutoDeleteOnIdle` on the destination entity because if the destination entity is deleted, the souce entity will continually see exceptions when trying to forward messages that destination.
+> - Autoforwarding doesn't make any changes to the destination entity. If `AutoDeleteOnIdle` is enabled on the destination entity, the entity is automatically deleted if it's inactive for the specified idle interval. We recommend that you don't enable `AutoDeleteOnIdle` on the destination entity because if the destination entity is deleted, the source entity will continually see exceptions when trying to forward messages to that destination.
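As a minimal sketch of setting up autoforwarding with the Azure.Messaging.ServiceBus administration client (the connection string and the queue and topic names are placeholders; the destination entity must already exist):

```csharp
using Azure.Messaging.ServiceBus.Administration;

string connectionString = "<service-bus-connection-string>";
var adminClient = new ServiceBusAdministrationClient(connectionString);

// Every message sent to "source-queue" is automatically forwarded to "destination-topic".
var sourceQueue = new CreateQueueOptions("source-queue")
{
    ForwardTo = "destination-topic"
};

await adminClient.CreateQueueAsync(sourceQueue);
```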
## Autoforwarding considerations
If the destination entity accumulates too many messages and exceeds the quota, o
When chaining together individual topics to obtain a composite topic with many subscriptions, it's recommended that you have a moderate number of subscriptions on the first-level topic and many subscriptions on the second-level topics. For example, a first-level topic with 20 subscriptions, each of them chained to a second-level topic with 200 subscriptions, allows for higher throughput than a first-level topic with 200 subscriptions, each chained to a second-level topic with 20 subscriptions.
-Service Bus bills one operation for each forwarded message. For example, sending a message to a topic with 20 subscriptions, each of them configured to autoforward messages to another queue or topic, is billed as 21 operations if all first-level subscriptions receive a copy of the message.
+Service Bus bills one operation for each forwarded message. For example, sending a message to a topic with 20 subscriptions, each of them configured to auto-forward messages to another queue or topic, is billed as 21 operations if all first-level subscriptions receive a copy of the message.
To create a subscription that is chained to another queue or topic, the creator of the subscription must have **Manage** permissions on both the source and the destination entity. Sending messages to the source topic only requires **Send** permissions on the source topic.
service-bus-messaging Service Bus Azure And Service Bus Queues Compared Contrasted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md
Title: Compare Azure Storage queues and Service Bus queues description: Analyzes differences and similarities between two types of queues offered by Azure. + Last updated 06/15/2021
This section compares Storage queues and Service Bus queues from the perspective
### Additional information * Service Bus enforces queue size limits. The maximum queue size is specified when creating a queue. It can be between 1 GB and 80 GB. If the queue's size reaches this limit, additional incoming messages will be rejected and the caller receives an exception. For more information about quotas in Service Bus, see [Service Bus Quotas](service-bus-quotas.md).
-* Partitioning isn't supported in the [Premium tier](service-bus-premium-messaging.md). In the Standard messaging tier, you can create Service Bus queues and topics in 1 (default), 2, 3, 4, or 5-GB sizes. With partitioning enabled, Service Bus creates 16 copies (16 partitions) of the entity, each of the same size specified. As such, if you create a queue that's 5 GB in size, with 16 partitions the maximum queue size becomes (5 * 16) = 80 GB. You can see the maximum size of your partitioned queue or topic in the [Azure portal][Azure portal].
+* In the Standard messaging tier, you can create Service Bus queues and topics in 1 (default), 2, 3, 4, or 5-GB sizes. When you enable partitioning in the Standard tier, Service Bus creates 16 copies (16 partitions) of the entity, each of the same size specified. As such, if you create a queue that's 5 GB in size, with 16 partitions the maximum queue size becomes (5 * 16) = 80 GB. You can see the maximum size of your partitioned queue or topic in the [Azure portal][Azure portal].
* With Storage queues, if the content of the message isn't XML-safe, then it must be **Base64** encoded. If you **Base64**-encode the message, the user payload can be up to 48 KB, instead of 64 KB (see the sketch after this list).
* With Service Bus queues, each message stored in a queue is composed of two parts: a header and a body. The total size of the message can't exceed the maximum message size supported by the service tier.
* When clients communicate with Service Bus queues over the TCP protocol, the maximum number of concurrent connections to a single Service Bus queue is limited to 100. This number is shared between senders and receivers. If this quota is reached, requests for additional connections will be rejected and an exception will be received by the calling code. This limit isn't imposed on clients connecting to the queues using REST-based API.
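As a sketch of the Storage-queue point above, using the Azure.Storage.Queues client library (the connection string and queue name are placeholders), the client can Base64-encode content that isn't XML-safe:

```csharp
using Azure.Storage.Queues;

string connectionString = "<storage-connection-string>";

// Base64-encode outgoing message text (and decode on receive), which keeps
// non-XML-safe payloads valid at the cost of a smaller effective payload (about 48 KB).
var queueClient = new QueueClient(connectionString, "orders",
    new QueueClientOptions { MessageEncoding = QueueMessageEncoding.Base64 });

await queueClient.CreateIfNotExistsAsync();
await queueClient.SendMessageAsync("non-XML-safe content goes here");
```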
The following articles provide more guidance and information about using Storage
* [Best practices for performance improvements using Service Bus brokered messaging](service-bus-performance-improvements.md) [Azure portal]: https://portal.azure.com-
service-bus-messaging Service Bus Manage With Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-manage-with-ps.md
Title: Use PowerShell to manage Azure Service Bus resources | Microsoft Docs
description: This article explains how to use Azure PowerShell module to create and manage Service Bus entities (namespaces, queues, topics, subscriptions). Last updated 09/20/2021-+ # Use PowerShell to manage Service Bus resources
else
{ Write-Host "The $QueueName queue does not exist." Write-Host "Creating the $QueueName queue in the $Location region..."
- New-AzServiceBusQueue -ResourceGroup $ResGrpName -NamespaceName $Namespace -QueueName $QueueName -EnablePartitioning $True
+ New-AzServiceBusQueue -ResourceGroup $ResGrpName -NamespaceName $Namespace -QueueName $QueueName
$CurrentQ = Get-AzServiceBusQueue -ResourceGroup $ResGrpName -NamespaceName $Namespace -QueueName $QueueName Write-Host "The $QueueName queue in Resource Group $ResGrpName in the $Location region has been successfully created." }
service-bus-messaging Service Bus Migrate Standard Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-migrate-standard-premium.md
Title: Migrate Azure Service Bus namespaces - standard to premium description: Guide to allow migration of existing Azure Service Bus standard namespaces to premium + Last updated 06/27/2022
Here is a list of features not supported by Premium and their mitigation -
If you utilize Azure Resource Manager (ARM) templates, please ensure that you remove the 'enableExpress' flag from the deployment configuration so that your automated workflows execute without errors.
-### Partitioned entities
-
- Partitioned entities were supported in the Standard tier to provide better availability in a multi-tenant setup. With the provision of dedicated resources available per namespace in the Premium tier, this is no longer needed.
-
- During migration, any partitioned entity in the Standard namespace is created on the Premium namespace as a non-partitioned entity.
-
- If your ARM template sets 'enablePartitioning' to 'true' for a specific Queue or Topic, then it will be ignored by the broker.
- ### RBAC settings The role-based access control (RBAC) settings on the namespace aren't migrated to the premium namespace. You'll need to add them manually after the migration.
However, if you can migrate during a planned maintenance/housekeeping window, an
* Learn more about the [differences between standard and premium Messaging](./service-bus-premium-messaging.md). * Learn about the [High-Availability and Geo-Disaster recovery aspects for Service Bus premium](service-bus-outages-disasters.md#protecting-against-outages-and-disastersservice-bus-premium).-
service-bus-messaging Service Bus Outages Disasters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-outages-disasters.md
Title: Insulate Azure Service Bus applications against outages and disasters
-description: This articles provides techniques to protect applications against a potential Azure Service Bus outage.
+description: This article provides techniques to protect applications against a potential Azure Service Bus outage.
+ Last updated 02/10/2021
To learn more about disaster recovery, see these articles:
* [Designing resilient applications for Azure][Azure resiliency technical guidance] [Service Bus Authentication]: service-bus-authentication-and-authorization.md
-[Partitioned messaging entities]: service-bus-partitioning.md
[Asynchronous messaging patterns and high availability]: service-bus-async-messaging.md#failure-of-service-bus-within-an-azure-datacenter [BrokeredMessage.MessageId]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage [BrokeredMessage.Label]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
service-bus-messaging Service Bus Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-partitioning.md
Title: Create partitioned Azure Service Bus queues and topics | Microsoft Docs description: Describes how to partition Service Bus queues and topics by using multiple message brokers. Previously updated : 09/21/2021 Last updated : 10/12/2022 ms.devlang: csharp-+ # Partitioned queues and topics
Azure Service Bus employs multiple message brokers to process messages and multiple messaging stores to store messages. A conventional queue or topic is handled by a single message broker and stored in one messaging store. Service Bus *partitions* enable queues and topics, or *messaging entities*, to be partitioned across multiple message brokers and messaging stores. Partitioning means that the overall throughput of a partitioned entity is no longer limited by the performance of a single message broker or messaging store. In addition, a temporary outage of a messaging store doesn't render a partitioned queue or topic unavailable. Partitioned queues and topics can contain all advanced Service Bus features, such as support for transactions and sessions. > [!NOTE]
-> Partitioning is available at entity creation for all queues and topics in Basic or Standard SKUs. It isn't available for the Premium messaging SKU anymore, but any previously existing partitioned entities from when they were supported in Premium namespaces will continue to work as expected.
+> There are some differences between the Basic / Standard and Premium SKU when it comes to partitioning.
+> - Partitioning is available at entity creation for all queues and topics in Basic or Standard SKUs. A namespace can have both partitioned and non-partitioned entities.
+> - Partitioning is available at namespace creation for the Premium messaging SKU, and all queues and topics in that namespace will be partitioned. Any previously migrated partitioned entities in Premium namespaces will continue to work as expected.
+> - When partitioning is enabled in the Basic or Standard SKUs, we will always create 16 partitions.
+> - When partitioning is enabled in the Premium SKU, the number of partitions is specified during namespace creation.
-It isn't possible to change the partitioning option on any existing queue or topic; you can only set the option when you create the entity.
+It isn't possible to change the partitioning option on any existing namespace, queue, or topic; you can only set the option when you create the namespace or entity.
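For example, here's a minimal sketch of creating a Standard-tier queue with partitioning enabled at creation time, using the Azure.Messaging.ServiceBus administration client; the connection string and queue name are placeholders:

```csharp
using Azure.Messaging.ServiceBus.Administration;

string connectionString = "<service-bus-connection-string>";
var adminClient = new ServiceBusAdministrationClient(connectionString);

// Partitioning can only be chosen here, at creation time.
// With a 5-GB size and 16 partitions, the total queue size becomes 5 * 16 = 80 GB.
var options = new CreateQueueOptions("partitioned-queue")
{
    EnablePartitioning = true,
    MaxSizeInMegabytes = 5120
};

await adminClient.CreateQueueAsync(options);
```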
## How it works
Each partitioned queue or topic consists of multiple partitions. Each partition
When a client wants to receive a message from a partitioned queue, or from a subscription to a partitioned topic, Service Bus queries all partitions for messages, then returns the first message that is obtained from any of the messaging stores to the receiver. Service Bus caches the other messages and returns them when it receives more receive requests. A receiving client isn't aware of the partitioning; the client-facing behavior of a partitioned queue or topic (for example, read, complete, defer, deadletter, prefetching) is identical to the behavior of a regular entity.
-The peek operation on a non-partitioned entity always returns the oldest message, but not on a partitioned entity. Instead, it returns the oldest message in one of the partitions whose message broker responded first. There is no guarantee that the returned message is the oldest one across all partitions.
+The peek operation on a non-partitioned entity always returns the oldest message, but not on a partitioned entity. Instead, it returns the oldest message in one of the partitions whose message broker responded first. There's no guarantee that the returned message is the oldest one across all partitions.
-There is no extra cost when sending a message to, or receiving a message from, a partitioned queue or topic.
+There's no extra cost when sending a message to, or receiving a message from, a partitioned queue or topic.
> [!NOTE] > The peek operation returns the oldest message from the partition based on its sequence number. For partitioned entities, the sequence number is issued relative to the partition. For more information, see [Message sequencing and timestamps](../service-bus-messaging/message-sequencing.md).
For a more in-depth discussion of the tradeoff between availability (no partitio
To give Service Bus enough time to enqueue the message into a different partition, the timeout value specified by the client that sends the message must be greater than 15 seconds. The default value of 60 seconds is recommended.
-A partition key "pins" a message to a specific partition. If the messaging store that holds this partition is unavailable, Service Bus returns an error. In the absence of a partition key, Service Bus can choose a different partition and the operation succeeds. Therefore, it is recommended that you don't supply a partition key unless it is required.
+A partition key "pins" a message to a specific partition. If the messaging store that holds this partition is unavailable, Service Bus returns an error. In the absence of a partition key, Service Bus can choose a different partition and the operation succeeds. Therefore, it's recommended that you don't supply a partition key unless it's required.
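When a partition key is genuinely required, for example to keep related messages together, it can be supplied on the message itself. A minimal sketch with placeholder names:

```csharp
using Azure.Messaging.ServiceBus;

string connectionString = "<service-bus-connection-string>";
await using var client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender("partitioned-queue");

var message = new ServiceBusMessage("order accepted")
{
    // Pins the message to the partition that owns this key.
    // Omit the key to let Service Bus choose any healthy partition.
    PartitionKey = "customer-42"
};

await sender.SendMessageAsync(message);
```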
## Advanced topics
using (TransactionScope ts = new TransactionScope(committableTransaction))
committableTransaction.Commit(); ```
-If any of the properties that serve as a partition key are set, Service Bus pins the message to a specific partition. This behavior occurs whether or not a transaction is used. It is recommended that you don't specify a partition key if it isn't necessary.
+If any of the properties that serve as a partition key are set, Service Bus pins the message to a specific partition. This behavior occurs whether or not a transaction is used. It's recommended that you don't specify a partition key if it isn't necessary.
### Use transactions in sessions with partitioned entities
Service Bus supports automatic message forwarding from, to, or between partition
## Considerations and guidelines * **High consistency features**: If an entity uses features such as sessions, duplicate detection, or explicit control of partitioning key, then the messaging operations are always routed to specific partition. If any of the partitions experience high traffic or the underlying store is unhealthy, those operations fail and availability is reduced. Overall, the consistency is still much higher than non-partitioned entities; only a subset of traffic is experiencing issues, as opposed to all the traffic. For more information, see this [discussion of availability and consistency](../event-hubs/event-hubs-availability-and-consistency.md). * **Management**: Operations such as Create, Update, and Delete must be performed on all the partitions of the entity. If any partition is unhealthy, it could result in failures for these operations. For the Get operation, information such as message counts must be aggregated from all partitions. If any partition is unhealthy, the entity availability status is reported as limited.
-* **Low volume message scenarios**: For such scenarios, especially when using the HTTP protocol, you may have to perform multiple receive operations in order to obtain all the messages. For receive requests, the front end performs a receive on all the partitions and caches all the responses received. A subsequent receive request on the same connection would benefit from this caching and receive latencies will be lower. However, if you have multiple connections or use HTTP, that establishes a new connection for each request. As such, there is no guarantee that it would land on the same node. If all existing messages are locked and cached in another front end, the receive operation returns **null**. Messages eventually expire and you can receive them again. HTTP keep-alive is recommended. When using partitioning in low-volume scenarios, receive operations may take longer than expected. Hence, we recommend that you don't use partitioning in these scenarios. Delete any existing partitioned entities and recreate them with partitioning disabled to improve performance.
-* **Browse/Peek messages**: The peek operation doesn't always return the number of messages asked for. There are two common reasons for this behavior. One reason is that the aggregated size of the collection of messages exceeds the maximum size of 256 KB. Another reason is that in partitioned queues or topics, a partition may not have enough messages to return the requested number of messages. In general, if an application wants to peek/browse a specific number of messages, it should call the peek operation repeatedly until it gets that number of messages, or there are no more messages to peek. For more information, including code samples, see [Message browsing](message-browsing.md).
+* **Low volume message scenarios**: For such scenarios, especially when using the HTTP protocol, you may have to perform multiple receive operations in order to obtain all the messages. For receive requests, the front end performs a receive on all the partitions and caches all the responses received. A subsequent receive request on the same connection would benefit from this caching and receive latencies will be lower. However, if you have multiple connections or use HTTP, a new connection is established for each request. As such, there's no guarantee that it would land on the same node. If all existing messages are locked and cached in another front end, the receive operation returns **null**. Messages eventually expire and you can receive them again. HTTP keep-alive is recommended. When using partitioning in low-volume scenarios, receive operations may take longer than expected. Hence, we recommend that you don't use partitioning in these scenarios. Delete any existing partitioned entities and recreate them with partitioning disabled to improve performance.
+* **Browse/Peek messages**: The peek operation doesn't always return the number of messages asked for. There are two common reasons for this behavior. One reason is that the aggregated size of the collection of messages exceeds the maximum size. Another reason is that in partitioned queues or topics, a partition may not have enough messages to return the requested number of messages. In general, if an application wants to peek/browse a specific number of messages, it should call the peek operation repeatedly until it gets that number of messages, or there are no more messages to peek. For more information, including code samples, see [Message browsing](message-browsing.md).
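As a sketch of the browse/peek guidance above (placeholder names, Azure.Messaging.ServiceBus), an application that wants a specific number of messages keeps peeking until it has them or the entity runs out:

```csharp
using System.Collections.Generic;
using Azure.Messaging.ServiceBus;

string connectionString = "<service-bus-connection-string>";
await using var client = new ServiceBusClient(connectionString);
ServiceBusReceiver receiver = client.CreateReceiver("partitioned-queue");

const int wanted = 100;
var peeked = new List<ServiceBusReceivedMessage>();

while (peeked.Count < wanted)
{
    // A single call may return fewer messages than requested,
    // especially on partitioned entities.
    IReadOnlyList<ServiceBusReceivedMessage> batch =
        await receiver.PeekMessagesAsync(wanted - peeked.Count);

    if (batch.Count == 0)
    {
        break;   // nothing left to peek
    }

    peeked.AddRange(batch);
}
```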
## Partitioned entities limitations Currently Service Bus imposes the following limitations on partitioned queues and topics:
-* Partitioned queues and topics aren't supported in the Premium messaging tier. Sessions are supported in the premier tier by using SessionId.
* Partitioned queues and topics don't support sending messages that belong to different sessions in a single transaction.
-* Service Bus currently allows up to 100 partitioned queues or topics per namespace. Each partitioned queue or topic counts towards the quota of 10,000 entities per namespace (doesn't apply to Premium tier).
-
+* Service Bus currently allows up to 100 partitioned queues or topics per namespace for the Basic and Standard SKUs. Each partitioned queue or topic counts towards the quota of 10,000 entities per namespace.
## Next steps
-You can enable partitioning by using Azure portal, PowerShell, CLI, Resource Manager template, .NET, Java, Python, and JavaScript. For more information, see [Enable partitioning](enable-partitions.md).
+You can enable partitioning by using Azure portal, PowerShell, CLI, Resource Manager template, .NET, Java, Python, and JavaScript. For more information, see [Enable partitioning (Basic / Standard)](enable-partitions-basic-standard.md) or [Enable partitioning (Premium)](enable-partitions-premium.md).
Read about the core concepts of the AMQP 1.0 messaging specification in the [AMQP 1.0 protocol guide](service-bus-amqp-protocol-guide.md). [Azure portal]: https://portal.azure.com
-[QueueDescription.EnablePartitioning]: /dotnet/api/microsoft.servicebus.messaging.queuedescription.enablepartitioning
-[TopicDescription.EnablePartitioning]: /dotnet/api/microsoft.servicebus.messaging.topicdescription.enablepartitioning
-[QueueDescription.ForwardTo]: /dotnet/api/microsoft.servicebus.messaging.queuedescription.forwardto
[AMQP 1.0 support for Service Bus partitioned queues and topics]: ./service-bus-amqp-protocol-guide.md
service-bus-messaging Service Bus Performance Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-performance-improvements.md
description: Describes how to use Service Bus to optimize performance when excha
Last updated 09/28/2022 ms.devlang: csharp+ # Best Practices for performance improvements using Service Bus Messaging
As expected, throughput is higher for smaller message payloads that can be batch
#### Benchmarks
-Here's a [GitHub sample](https://github.com/Azure-Samples/service-bus-dotnet-messaging-performance) which you can run to see the expected throughput you'll receive for your SB namespace. In our [benchmark tests](https://techcommunity.microsoft.com/t5/Service-Bus-blog/Premium-Messaging-How-fast-is-it/ba-p/370722), we observed approximately 4 MB/second per Messaging Unit (MU) of ingress and egress.
+Here's a [GitHub sample](https://github.com/Azure-Samples/service-bus-dotnet-messaging-performance) that you can run to see the expected throughput you'll receive for your SB namespace. In our [benchmark tests](https://techcommunity.microsoft.com/t5/Service-Bus-blog/Premium-Messaging-How-fast-is-it/ba-p/370722), we observed approximately 4 MB/second per Messaging Unit (MU) of ingress and egress.
The benchmarking sample doesn't use any advanced features, so the throughput your applications observe will be different based on your scenarios.
service-bus-messaging Service Bus Premium Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-premium-messaging.md
Title: Azure Service Bus premium and standard tiers description: This article describes standard and premium tiers of Azure Service Bus. Compares these tiers and provides technical differences. Previously updated : 11/08/2021-+ Last updated : 10/12/2022 # Service Bus Premium and Standard messaging tiers
Some high-level differences are highlighted in the following table.
**Service Bus Premium Messaging** provides resource isolation at the CPU and memory level so that each customer workload runs in isolation. This resource container is called a *messaging unit*. Each premium namespace is allocated at least one messaging unit. You can purchase 1, 2, 4, 8 or 16 messaging units for each Service Bus Premium namespace. A single workload or entity can span multiple messaging units and the number of messaging units can be changed at will. The result is predictable and repeatable performance for your Service Bus-based solution.
-Not only is this performance more predictable and available, but it is also faster. With Premium Messaging, peak performance is much faster than with the Standard tier.
+Not only is this performance more predictable and available, but it's also faster. With Premium Messaging, peak performance is much faster than with the Standard tier.
## Premium Messaging technical differences The following sections discuss a few differences between Premium and Standard messaging tiers.
-### Partitioned queues and topics
-
-Partitioned queues and topics aren't supported in Premium Messaging. For more information about partitioning, see [Partitioned queues and topics](service-bus-partitioning.md).
- ### Express entities Because Premium messaging runs in an isolated run-time environment, express entities aren't supported in Premium namespaces. An express entity holds a message in memory temporarily before writing it to persistent storage. If you have code running under Standard messaging and want to port it to the Premium tier, ensure that the express entity feature is disabled.
The CPU and memory usage are tracked and displayed to you for the following reas
- Understand the capacity of resources purchased. - Capacity planning that helps you decide to scale up/down.
-## Messaging unit - How many are needed?
+## How many messaging units are needed?
-When provisioning an Azure Service Bus Premium namespace, the number of messaging units allocated must be specified. These messaging units are dedicated resources that are allocated to the namespace.
+You specify the number of messaging units when provisioning an Azure Service Bus Premium namespace. These messaging units are dedicated resources that are allocated to the namespace. When partitioning has been enabled on the namespace, the messaging units are equally distributed across the partitions.
The number of messaging units allocated to the Service Bus Premium namespace can be **dynamically adjusted** to factor in the change (increase or decrease) in workloads. There are a few factors to take into consideration when deciding the number of messaging units for your architecture: -- Start with ***1 or 2 messaging units*** allocated to your namespace.
+- Start with ***1 or 2 messaging units*** allocated to your namespace, or ***1 messaging unit per partition***.
- Study the CPU usage metrics within the [Resource usage metrics](monitor-service-bus-reference.md#resource-usage-metrics) for your namespace. - If CPU usage is ***below 20%***, you might be able to ***scale down*** the number of messaging units allocated to your namespace. - If CPU usage is ***above 70%***, your application will benefit from ***scaling up*** the number of messaging units allocated to your namespace.
To learn how to configure a Service Bus namespace to automatically scale (increa
## Get started with Premium Messaging
-Getting started with Premium Messaging is straightforward and the process is similar to that of Standard Messaging. Begin by [creating a namespace](service-bus-create-namespace-portal.md) in the [Azure portal](https://portal.azure.com). Make sure you select **Premium** under **Pricing tier**. Click **View full pricing details** to see more information about each tier.
+Getting started with Premium Messaging is straightforward and the process is similar to that of Standard Messaging. Begin by [creating a namespace](service-bus-create-namespace-portal.md) in the [Azure portal](https://portal.azure.com). Make sure you select **Premium** under **Pricing tier**. Select **View full pricing details** to see more information about each tier.
:::image type="content" source="./media/service-bus-premium-messaging/select-premium-tier.png" alt-text="Screenshot that shows the selection of premium tier when creating a namespace.":::
Here are some considerations when sending large messages on Azure Service Bus -
* Sending large messages will result in decreased throughput and increased latency. * While 100 MB message payloads are supported, it's recommended to keep the message payloads as small as possible to ensure reliable performance from the Service Bus namespace. * The max message size is enforced only for messages sent to the queue or topic. The size limit isn't enforced for the receive operation. It allows you to update the max message size for a given queue (or topic).
- * Batching is not supported.
+ * Batching isn't supported.
### Enabling large messages support for a new queue (or topic)
service-bus-messaging Service Bus Queues Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-queues-topics-subscriptions.md
Title: Azure Service Bus messaging - queues, topics, and subscriptions description: This article provides an overview of Azure Service Bus messaging entities (queue, topics, and subscriptions). Previously updated : 08/27/2021 Last updated : 10/11/2022 # Service Bus queues, topics, and subscriptions
-Azure Service Bus supports a set of cloud-based, message-oriented middleware technologies including reliable message queuing and durable publish/subscribe messaging. These brokered messaging capabilities can be thought of as decoupled messaging features that support publish-subscribe, temporal decoupling, and load-balancing scenarios using the Service Bus messaging workload. Decoupled communication has many advantages. For example, clients and servers can connect as needed and do their operations in an asynchronous fashion.
-The messaging entities that form the core of the messaging capabilities in Service Bus are **queues**, **topics and subscriptions**, and rules/actions.
+Azure Service Bus supports reliable message queuing and durable publish/subscribe messaging. The messaging entities that form the core of the messaging capabilities in Service Bus are **queues**, **topics and subscriptions**.
+
+> [!IMPORTANT]
+> If you are new to Azure Service Bus, read [What is Azure Service Bus?](service-bus-messaging-overview.md) before going through this article.
## Queues
-Queues offer **First In, First Out** (FIFO) message delivery to one or more competing consumers. That is, receivers typically receive and process messages in the order in which they were added to the queue. And, only one message consumer receives and processes each message. A key benefit of using queues is to achieve **temporal decoupling of application components**. In other words, the producers (senders) and consumers (receivers) don't have to send and receive messages at the same time. That's because messages are stored durably in the queue. Furthermore, the producer doesn't have to wait for a reply from the consumer to continue to process and send messages.
+
+Queues offer **First In, First Out** (FIFO) message delivery to one or more competing consumers. That is, receivers typically receive and process messages in the order in which they were added to the queue. And, only one message consumer receives and processes each message.
++
+A key benefit of using queues is to achieve **temporal decoupling of application components**. In other words, the producers (senders) and consumers (receivers) don't have to send and receive messages at the same time. That's because messages are stored durably in the queue. Furthermore, the producer doesn't have to wait for a reply from the consumer to continue to process and send messages.
A related benefit is **load-leveling**, which enables producers and consumers to send and receive messages at different rates. In many applications, the system load varies over time. However, the processing time required for each unit of work is typically constant. Intermediating message producers and consumers with a queue means that the consuming application only has to be able to handle average load instead of peak load. The depth of the queue grows and contracts as the incoming load varies. This capability directly saves money regarding the amount of infrastructure required to service the application load. As the load increases, more worker processes can be added to read from the queue. Each message is processed by only one of the worker processes. Furthermore, this pull-based load balancing allows for best use of the worker computers even if the worker computers with processing power pull messages at their own maximum rate. This pattern is often termed the **competing consumer** pattern.
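To illustrate the competing consumer pattern, here's a minimal sketch using the Azure.Messaging.ServiceBus client library; the connection string and queue name are placeholders. Each worker instance runs a processor against the same queue, and every message is handled by exactly one of them:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

string connectionString = "<service-bus-connection-string>";
await using var client = new ServiceBusClient(connectionString);

// Each worker process creates its own processor on the same queue;
// Service Bus delivers every message to only one of the competing workers.
ServiceBusProcessor processor = client.CreateProcessor("orders",
    new ServiceBusProcessorOptions { MaxConcurrentCalls = 4 });

processor.ProcessMessageAsync += async args =>
{
    Console.WriteLine($"Processing {args.Message.MessageId}");
    await args.CompleteMessageAsync(args.Message);
};

processor.ProcessErrorAsync += args =>
{
    Console.WriteLine(args.Exception.ToString());
    return Task.CompletedTask;
};

await processor.StartProcessingAsync();
// ... keep the process alive; call StopProcessingAsync during shutdown.
```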
-Using queues to intermediate between message producers and consumers provides an inherent loose coupling between the components. Because producers and consumers aren't aware of each other, a consumer can be upgraded without having any effect on the producer.
+Using queues to intermediate between message producers and consumers provides an inherent loose coupling between the components. Because producers and consumers aren't aware of each other, a consumer can be **upgraded** without having any effect on the producer.
### Create queues
-You can create queues using the [Azure portal](service-bus-quickstart-portal.md), [PowerShell](service-bus-quickstart-powershell.md), [CLI](service-bus-quickstart-cli.md), or [Azure Resource Manager templates (ARM templates)](service-bus-resource-manager-namespace-queue.md). Then, send and receive messages using clients written in [C#](service-bus-dotnet-get-started-with-queues.md), [Java](service-bus-java-how-to-use-queues.md), [Python](service-bus-python-how-to-use-queues.md), and [JavaScript](service-bus-nodejs-how-to-use-queues.md).
+
+You can create queues using one of the following options:
+
+- [Azure portal](service-bus-quickstart-portal.md)
+- [PowerShell](service-bus-quickstart-powershell.md)
+- [CLI](service-bus-quickstart-cli.md)
+- [Azure Resource Manager templates (ARM templates)](service-bus-resource-manager-namespace-queue.md).
+
+Then, send and receive messages using clients written in programming languages including the following ones:
+
+- [C#](service-bus-dotnet-get-started-with-queues.md)
+- [Java](service-bus-java-how-to-use-queues.md)
+- [Python](service-bus-python-how-to-use-queues.md)
+- [JavaScript](service-bus-nodejs-how-to-use-queues.md).
### Receive modes
-You can specify two different modes in which Service Bus receives messages.
-- **Receive and delete**. In this mode, when Service Bus receives the request from the consumer, it marks the message as being consumed and returns it to the consumer application. This mode is the simplest model. It works best for scenarios in which the application can tolerate not processing a message if a failure occurs. To understand this scenario, consider a scenario in which the consumer issues the receive request and then crashes before processing it. As Service Bus marks the message as being consumed, the application begins consuming messages upon restart. It will miss the message that it consumed before the crash.
+You can specify two different modes in which consumers can receive messages from Service Bus.
+
+- **Receive and delete**. In this mode, when Service Bus receives the request from the consumer, it marks the message as being consumed and returns it to the consumer application. This mode is the simplest model. It works best for scenarios in which the application can tolerate not processing a message if a failure occurs. For example, consider a scenario in which the consumer issues the receive request and then crashes before processing it. Because Service Bus has already marked the message as consumed, the application misses that message when it restarts and begins consuming again. This process is often called **at-most once** processing.
- **Peek lock**. In this mode, the receive operation becomes two-stage, which makes it possible to support applications that can't tolerate missing messages. 1. Finds the next message to be consumed, **locks** it to prevent other consumers from receiving it, and then returns the message to the application.
- 1. After the application finishes processing the message, it requests the Service Bus service to complete the second stage of the receive process. Then, the service **marks the message as being consumed**.
+ 1. After the application finishes processing the message, it requests the Service Bus service to complete the second stage of the receive process. Then, the service **marks the message as consumed**.
If the application is unable to process the message for some reason, it can request the Service Bus service to **abandon** the message. Service Bus **unlocks** the message and makes it available to be received again, either by the same consumer or by another competing consumer. Secondly, there's a **timeout** associated with the lock. If the application fails to process the message before the lock timeout expires, Service Bus unlocks the message and makes it available to be received again.
- If the application crashes after it processes the message, but before it requests the Service Bus service to complete the message, Service Bus redelivers the message to the application when it restarts. This process is often called **at-least once** processing. That is, each message is processed at least once. However, in certain situations the same message may be redelivered. If your scenario can't tolerate duplicate processing, add additional logic in your application to detect duplicates. For more information, see [Duplicate detection](duplicate-detection.md). This feature is known as **exactly once** processing.
+   If the application crashes after it processes the message, but before it requests the Service Bus service to complete the message, Service Bus redelivers the message to the application when it restarts. This process is often called **at-least once** processing. That is, each message is processed at least once. However, in certain situations the same message may be redelivered. If your scenario can't tolerate duplicate processing, add additional logic in your application to detect duplicates. For more information, see [Duplicate detection](duplicate-detection.md). This approach is known as **exactly once** processing.
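The difference between the two modes is easiest to see in code. Here's a minimal sketch with placeholder names: the first receiver risks losing a message on a crash, while the second settles the message only after processing succeeds.

```csharp
using Azure.Messaging.ServiceBus;

string connectionString = "<service-bus-connection-string>";
await using var client = new ServiceBusClient(connectionString);

// Receive and delete: the message is marked consumed as soon as it is handed out (at-most once).
ServiceBusReceiver fireAndForget = client.CreateReceiver("orders",
    new ServiceBusReceiverOptions { ReceiveMode = ServiceBusReceiveMode.ReceiveAndDelete });

// Peek lock (the default): the message stays locked until it is explicitly settled (at-least once).
ServiceBusReceiver peekLock = client.CreateReceiver("orders");

ServiceBusReceivedMessage message = await peekLock.ReceiveMessageAsync();
if (message != null)
{
    try
    {
        // ... process the message ...
        await peekLock.CompleteMessageAsync(message);   // second stage: mark the message as consumed
    }
    catch
    {
        await peekLock.AbandonMessageAsync(message);    // unlock it so it can be received again
    }
}
```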
> [!NOTE] > For more information about these two modes, see [Settling receive operations](message-transfers-locks-settlement.md#settling-receive-operations). ## Topics and subscriptions
-A queue allows processing of a message by a single consumer. In contrast to queues, topics and subscriptions provide a one-to-many form of communication in a **publish and subscribe** pattern. It's useful for scaling to large numbers of recipients. Each published message is made available to each subscription registered with the topic. Publisher sends a message to a topic and one or more subscribers receive a copy of the message, depending on filter rules set on these subscriptions. The subscriptions can use additional filters to restrict the messages that they want to receive. Publishers send messages to a topic in the same way that they send messages to a queue. But, consumers don't receive messages directly from the topic. Instead, consumers receive messages from subscriptions of the topic. A topic subscription resembles a virtual queue that receives copies of the messages that are sent to the topic. Consumers receive messages from a subscription identically to the way they receive messages from a queue.
+
+A queue allows processing of a message by a single consumer. In contrast to queues, topics and subscriptions provide a one-to-many form of communication in a **publish and subscribe** pattern. It's useful for scaling to large numbers of recipients. Each published message is made available to each subscription registered with the topic. Publisher sends a message to a topic and one or more subscribers receive a copy of the message.
++
+The subscriptions can use additional filters to restrict the messages that they want to receive. Publishers send messages to a topic in the same way that they send messages to a queue. But, consumers don't receive messages directly from the topic. Instead, consumers receive messages from subscriptions of the topic. A topic subscription resembles a virtual queue that receives copies of the messages that are sent to the topic. Consumers receive messages from a subscription identically to the way they receive messages from a queue.
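Here's a minimal publish/subscribe sketch with placeholder names: the publisher sends to the topic, and each consumer reads from its own subscription exactly as it would from a queue.

```csharp
using Azure.Messaging.ServiceBus;

string connectionString = "<service-bus-connection-string>";
await using var client = new ServiceBusClient(connectionString);

// Publisher: sends to the topic, not to any particular subscription.
ServiceBusSender sender = client.CreateSender("orders-topic");
await sender.SendMessageAsync(new ServiceBusMessage("order created"));

// Subscriber: receives its own copy of the message from a subscription.
ServiceBusReceiver receiver = client.CreateReceiver("orders-topic", "audit-subscription");
ServiceBusReceivedMessage copy = await receiver.ReceiveMessageAsync();
```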
The message-sending functionality of a queue maps directly to a topic and its message-receiving functionality maps to a subscription. Among other things, this feature means that subscriptions support the same patterns described earlier in this section regarding queues: competing consumer, temporal decoupling, load leveling, and load balancing. + ### Create topics and subscriptions
-Creating a topic is similar to creating a queue, as described in the previous section. You can create topics and subscriptions using the [Azure portal](service-bus-quickstart-topics-subscriptions-portal.md), [PowerShell](service-bus-quickstart-powershell.md), [CLI](service-bus-tutorial-topics-subscriptions-cli.md), or [ARM templates](service-bus-resource-manager-namespace-topic.md). Then, send messages to a topic and receive messages from subscriptions using clients written in [C#](service-bus-dotnet-how-to-use-topics-subscriptions.md), [Java](service-bus-java-how-to-use-topics-subscriptions.md), [Python](service-bus-python-how-to-use-topics-subscriptions.md), and [JavaScript](service-bus-nodejs-how-to-use-topics-subscriptions.md).
+
+Creating a topic is similar to creating a queue, as described in the previous section. You can create topics and subscriptions using one of the following options:
+
+- [Azure portal](service-bus-quickstart-topics-subscriptions-portal.md)
+- [PowerShell](service-bus-quickstart-powershell.md)
+- [CLI](service-bus-tutorial-topics-subscriptions-cli.md)
+- [ARM templates](service-bus-resource-manager-namespace-topic.md).
+
+Then, send messages to a topic and receive messages from subscriptions using clients written in programming languages including the following ones:
+
+- [C#](service-bus-dotnet-how-to-use-topics-subscriptions.md)
+- [Java](service-bus-java-how-to-use-topics-subscriptions.md)
+- [Python](service-bus-python-how-to-use-topics-subscriptions.md)
+- [JavaScript](service-bus-nodejs-how-to-use-topics-subscriptions.md).
### Rules and actions
-In many scenarios, messages that have specific characteristics must be processed in different ways. To enable this processing, you can configure subscriptions to find messages that have desired properties and then perform certain modifications to those properties. While Service Bus subscriptions see all messages sent to the topic, it is possible to only copy a subset of those messages to the virtual subscription queue. This filtering is accomplished using subscription filters. Such modifications are called **filter actions**. When a subscription is created, you can supply a filter expression that operates on the properties of the message. The properties can be both the system properties (for example, **Label**) and custom application properties (for example, **StoreName**.) The SQL filter expression is optional in this case. Without a SQL filter expression, any filter action defined on a subscription will be done on all the messages for that subscription.
-For a full working example, see the [TopicFilters sample](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples/TopicFilters) on GitHub.
+In many scenarios, messages that have specific characteristics must be processed in different ways. To enable this processing, you can configure subscriptions to find messages that have desired properties and then perform certain modifications to those properties. While Service Bus subscriptions see all messages sent to the topic, it's possible to only copy a subset of those messages to the virtual subscription queue. This filtering is accomplished using subscription filters. Such modifications are called **filter actions**. When a subscription is created, you can supply a filter expression that operates on the properties of the message. The properties can be both the system properties (for example, **Label**) and custom application properties (for example, **StoreName**.) The SQL filter expression is optional in this case. Without a SQL filter expression, any filter action defined on a subscription will be done on all the messages for that subscription.
-For more information about filters, see [Topic filters and actions](topic-filters.md).
+For a full working example, see the [TopicFilters sample](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples/TopicFilters) on GitHub. For more information about filters, see [Topic filters and actions](topic-filters.md).
## Java message service (JMS) 2.0 entities The following entities are accessible through the Java message service (JMS) 2.0 API.
service-bus-messaging Topic Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/topic-filters.md
Title: Azure Service Bus topic filters | Microsoft Docs description: This article explains how subscribers can define which messages they want to receive from a topic by specifying filters. Previously updated : 07/19/2021 Last updated : 10/05/2022 # Topic filters and actions
Each newly created topic subscription has an initial default subscription rule.
Service Bus supports three filter conditions: - *SQL Filters* - A **SqlFilter** holds a SQL-like conditional expression that is evaluated in the broker against the arriving messages' user-defined properties and system properties. All system properties must be prefixed with `sys.` in the conditional expression. The [SQL-language subset for filter conditions](service-bus-messaging-sql-filter.md) tests for the existence of properties (`EXISTS`), null-values (`IS NULL`), logical NOT/AND/OR, relational operators, simple numeric arithmetic, and simple text pattern matching with `LIKE`.+
+ **.NET example for defining a SQL filter:**
+ ```csharp
+ var adminClient = new ServiceBusAdministrationClient(connectionString);
+
+ // Create a SQL filter with color set to blue and quantity to 10
+ await adminClient.CreateSubscriptionAsync(
+ new CreateSubscriptionOptions(topicName, "ColorBlueSize10Orders"),
+ new CreateRuleOptions("BlueSize10Orders", new SqlRuleFilter("color='blue' AND quantity=10")));
+
+ // Create a SQL filter with color set to red
+ // Action is defined to set the quantity to half if the color is red
+ await adminClient.CreateRuleAsync(topicName, "ColorRed", new CreateRuleOptions
+ {
+ Name = "RedOrdersWithAction",
+ Filter = new SqlRuleFilter("user.color='red'"),
+ Action = new SqlRuleAction("SET quantity = quantity / 2;")
+ });
+ ```
- *Boolean filters* - The **TrueFilter** and **FalseFilter** either cause all arriving messages (**true**) or none of the arriving messages (**false**) to be selected for the subscription. These two filters derive from the SQL filter. +
+ **.NET example for defining a boolean filter:**
+ ```csharp
+ // Create a True Rule filter with an expression that always evaluates to true
+ // It's equivalent to using SQL rule filter with 1=1 as the expression
+ await adminClient.CreateSubscriptionAsync(
+ new CreateSubscriptionOptions(topicName, subscriptionAllOrders),
+ new CreateRuleOptions("AllOrders", new TrueRuleFilter()));
+ ```
- *Correlation Filters* - A **CorrelationFilter** holds a set of conditions that are matched against one or more of an arriving message's user and system properties. A common use is to match against the **CorrelationId** property, but the application can also choose to match against the following properties: - **ContentType**
Service Bus supports three filter conditions:
- **To** - any user-defined properties.
- A match exists when an arriving message's value for a property is equal to the value specified in the correlation filter. For string expressions, the comparison is case-sensitive. When specifying multiple match properties, the filter combines them as a logical AND condition, meaning for the filter to match, all conditions must match.
+ A match exists when an arriving message's value for a property is equal to the value specified in the correlation filter. For string expressions, the comparison is case-sensitive. If you specify multiple match properties, the filter combines them as a logical AND condition, meaning for the filter to match, all conditions must match.
+
+ **.NET example for defining a correlation filter:**
+
+ ```csharp
+ // Create a correlation filter with color set to Red and priority set to High
+ await adminClient.CreateSubscriptionAsync(
+ new CreateSubscriptionOptions(topicName, "HighPriorityRedOrders"),
+ new CreateRuleOptions("HighPriorityRedOrdersRule", new CorrelationRuleFilter() {Subject = "red", CorrelationId = "high"} ));
+ ```
All filters evaluate message properties. Filters can't evaluate the message body.
Complex filter rules require processing capacity. In particular, the use of SQL
With SQL filter conditions, you can define an action that can annotate the message by adding, removing, or replacing properties and their values. The action [uses a SQL-like expression](service-bus-messaging-sql-rule-action.md) that loosely leans on the SQL UPDATE statement syntax. The action is done on the message after it has been matched and before the message is selected into the subscription. The changes to the message properties are private to the message copied into the subscription.
+**.NET example:**
+
+```csharp
+var adminClient = new ServiceBusAdministrationClient(connectionString);
+
+// Create a SQL filter with color set to red
+// Action is defined to set the quantity to half if the color is red
+await adminClient.CreateRuleAsync(topicName, "ColorRed", new CreateRuleOptions
+{
+ Name = "RedOrdersWithAction",
+ Filter = new SqlRuleFilter("user.color='red'"),
+ Action = new SqlRuleAction("SET quantity = quantity / 2;")
+});
+```
+ ## Usage patterns The simplest usage scenario for a topic is that every subscription gets a copy of each message sent to a topic, which enables a broadcast pattern.
service-bus-messaging Transport Layer Security Audit Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-audit-minimum-version.md
+ Previously updated : 04/22/2022 Last updated : 09/26/2022
-# Use Azure Policy to audit for compliance of minimum TLS version for an Azure Service Bus namespace (Preview)
+# Use Azure Policy to audit for compliance of minimum TLS version for an Azure Service Bus namespace
If you have a large number of Microsoft Azure Service Bus namespaces, you may want to perform an audit to make sure that all namespaces are configured for the minimum version of TLS that your organization requires. To audit a set of Service Bus namespaces for their compliance, use Azure Policy. Azure Policy is a service that you can use to create, assign, and manage policies that apply rules to Azure resources. Azure Policy helps you to keep those resources compliant with your corporate standards and service level agreements. For more information, see [Overview of Azure Policy](../governance/policy/overview.md).
service-bus-messaging Transport Layer Security Configure Client Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-client-version.md
+ Previously updated : 06/23/2022 Last updated : 09/26/2022
-# Configure Transport Layer Security (TLS) for a Service Bus client application (Preview)
+# Configure Transport Layer Security (TLS) for a Service Bus client application
For security purposes, an Azure Service Bus namespace may require that clients use a minimum version of Transport Layer Security (TLS) to send requests. Calls to Azure Service Bus will fail if the client is using a version of TLS that is lower than the minimum required version. For example, if a namespace requires TLS 1.2, then a request sent by a client who is using TLS 1.1 will fail.
See the following documentation for more information.
- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a Service Bus namespace](transport-layer-security-enforce-minimum-version.md) - [Configure the minimum TLS version for a Service Bus namespace](transport-layer-security-configure-minimum-version.md)-- [Use Azure Policy to audit for compliance of minimum TLS version for a Service Bus namespace](transport-layer-security-audit-minimum-version.md)
+- [Use Azure Policy to audit for compliance of minimum TLS version for a Service Bus namespace](transport-layer-security-audit-minimum-version.md)
service-bus-messaging Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md
+ Previously updated : 06/06/2022 Last updated : 09/26/2022
-# Configure the minimum TLS version for a Service Bus namespace (Preview)
+# Configure the minimum TLS version for a Service Bus namespace
Azure Service Bus namespaces permit clients to send and receive data with TLS 1.0 and above. To enforce stricter security measures, you can configure your Service Bus namespace to require that clients send and receive data with a newer version of TLS. If a Service Bus namespace requires a minimum version of TLS, then any requests made with an older version will fail. For conceptual information about this feature, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a Service Bus namespace](transport-layer-security-enforce-minimum-version.md).
You can also specify the minimum TLS version for an existing namespace on the **
:::image type="content" source="./media/transport-layer-security-configure-minimum-version/existing-namespace-tls.png" alt-text="Screenshot showing the page to set the minimum TLS version for an existing namespace.":::
+## Use Azure CLI
+To **create a namespace with minimum TLS version set to 1.2**, use the [`az servicebus namespace create`](/cli/azure/servicebus/namespace#az-servicebus-namespace-create) command with `--min-tls` set to `1.2`.
+
+```azurecli-interactive
+az servicebus namespace create \
+ --name mynamespace \
+ --resource-group myresourcegroup \
+ --min-tls 1.2
+```
+
+## Use Azure PowerShell
+To **create a namespace with minimum TLS version set to 1.2**, use the [`New-AzServiceBusNamespace`](/powershell/module/az.servicebus/new-azservicebusnamespace) command with `-MinimumTlsVersion` set to `1.2`.
+
+```azurepowershell-interactive
+New-AzServiceBusNamespace `
+ -ResourceGroup myresourcegroup `
+ -Name mynamespace `
+ -MinimumTlsVersion 1.2
+```
++ ## Create a template to configure the minimum TLS version To configure the minimum TLS version for a Service Bus namespace, set the `MinimumTlsVersion` property to 1.0, 1.1, or 1.2. When you create a Service Bus namespace with an Azure Resource Manager template, the `MinimumTlsVersion` property is set to 1.2 by default, unless explicitly set to another version.
service-bus-messaging Transport Layer Security Enforce Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-enforce-minimum-version.md
+ Previously updated : 04/12/2022 Last updated : 09/26/2022
-# Enforce a minimum required version of Transport Layer Security (TLS) for requests to a Service Bus namespace (Preview)
+# Enforce a minimum required version of Transport Layer Security (TLS) for requests to a Service Bus namespace
Communication between a client application and an Azure Service Bus namespace is encrypted using Transport Layer Security (TLS). TLS is a standard cryptographic protocol that ensures privacy and data integrity between clients and services over the Internet. For more information about TLS, see [Transport Layer Security](https://datatracker.ietf.org/wg/tls/about/).
service-connector How To Integrate Cosmos Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-cassandra.md
Title: Integrate the Azure Cosmos DB Cassandra API with Service Connector
-description: Integrate the Azure Cosmos DB Cassandra API into your application with Service Connector
+ Title: Integrate the Azure Cosmos DB for Apache Cassandra with Service Connector
+description: Integrate the Azure Cosmos DB for Apache Cassandra into your application with Service Connector
Last updated 09/19/2022-+
-# Integrate the Azure Cosmos DB API for Cassandra with Service Connector
+# Integrate the Azure Cosmos DB for Cassandra with Service Connector
-This page shows the supported authentication types and client types for the Azure Cosmos DB Cassandra API using Service Connector. You might still be able to connect to the Azure Cosmos DB API for Cassandra in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types for the Azure Cosmos DB for Apache Cassandra using Service Connector. You might still be able to connect to the Azure Cosmos DB for Cassandra in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute services
Supported authentication and clients for App Service, Container Apps and Azure S
## Default environment variable names or application properties
-Use the connection details below to connect your compute services to the Cosmos DB Cassandra API. For each example below, replace the placeholder texts `<Azure-Cosmos-DB-account>`, `keyspace`, `<username>`, `<password>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`,`<client-secret>`, `<tenant-id>`, and `<Azure-region>` with your own information.
+Use the connection details below to connect your compute services to the Azure Cosmos DB for Apache Cassandra. For each example below, replace the placeholder texts `<Azure-Cosmos-DB-account>`, `keyspace`, `<username>`, `<password>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`,`<client-secret>`, `<tenant-id>`, and `<Azure-region>` with your own information.
### Azure App Service and Azure Container Apps
Use the connection details below to connect your compute services to the Cosmos
| Default environment variable name | Description | Example value | |--|--||
-| AZURE_COSMOS_CONTACTPOINT | Cassandra API contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` |
+| AZURE_COSMOS_CONTACTPOINT | Azure Cosmos DB for Apache Cassandra contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` |
| AZURE_COSMOS_PORT | Cassandra connection port | 10350 | | AZURE_COSMOS_KEYSPACE | Cassandra keyspace | `<keyspace>` | | AZURE_COSMOS_USERNAME | Cassandra username | `<username>` |
Use the connection details below to connect your compute services to the Cosmos
| AZURE_COSMOS_LISTKEYURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<Azure-Cosmos-DB-account>/listKeys?api-version=2021-04-15` | | AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` | | AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-account>.documents.azure.com:443/` |
-| AZURE_COSMOS_CONTACTPOINT | Cassandra API contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` |
+| AZURE_COSMOS_CONTACTPOINT | Azure Cosmos DB for Apache Cassandra contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` |
| AZURE_COSMOS_PORT | Cassandra connection port | 10350 | | AZURE_COSMOS_KEYSPACE | Cassandra keyspace | `<keyspace>` | | AZURE_COSMOS_USERNAME | Cassandra username | `<username>` |
Use the connection details below to connect your compute services to the Cosmos
| AZURE_COSMOS_LISTKEYURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<Azure-Cosmos-DB-account>/listKeys?api-version=2021-04-15` | | AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` | | AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-account>.documents.azure.com:443/` |
-| AZURE_COSMOS_CONTACTPOINT | Cassandra API contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` |
+| AZURE_COSMOS_CONTACTPOINT | Azure Cosmos DB for Apache Cassandra contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` |
| AZURE_COSMOS_PORT | Cassandra connection port | 10350 | | AZURE_COSMOS_KEYSPACE | Cassandra keyspace | `<keyspace>` | | AZURE_COSMOS_USERNAME | Cassandra username | `<username>` |
Use the connection details below to connect your compute services to the Cosmos
| AZURE_COSMOS_LISTKEYURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<Azure-Cosmos-DB-account>/listKeys?api-version=2021-04-15` | | AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` | | AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-account>.documents.azure.com:443/` |
-| AZURE_COSMOS_CONTACTPOINT | Cassandra API contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` |
+| AZURE_COSMOS_CONTACTPOINT | Azure Cosmos DB for Apache Cassandra contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` |
| AZURE_COSMOS_PORT | Cassandra connection port | 10350 | | AZURE_COSMOS_KEYSPACE | Cassandra keyspace | `<keyspace>` | | AZURE_COSMOS_USERNAME | Cassandra username | `<username>` |
Use the connection details below to connect your compute services to the Cosmos
| Default environment variable name | Description | Example value | |-|--|--|
-| spring.data.cassandra.contact_points | Cassandra API contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` |
+| spring.data.cassandra.contact_points | Azure Cosmos DB for Apache Cassandra contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` |
| spring.data.cassandra.port | Cassandra connection port | 10350 | | spring.data.cassandra.keyspace_name | Cassandra keyspace | `<keyspace>` | | spring.data.cassandra.username | Cassandra username | `<username>` |
service-connector How To Integrate Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-db.md
Title: Integrate the Azure Cosmos DB MongoDB API with Service Connector
-description: Integrate the Azure Cosmos DB MongoDB API into your application with Service Connector
+ Title: Integrate Azure Cosmos DB for MongoDB with Service Connector
+description: Integrate Azure Cosmos DB for MongoDB into your application with Service Connector
Last updated 09/19/2022-+
-# Integrate the Azure Cosmos DB API for MongoDB with Service Connector
+# Integrate Azure Cosmos DB for MongoDB with Service Connector
-This page shows the supported authentication types and client types for the Azure Cosmos DB Mongo API using Service Connector. You might still be able to connect to the Azure Cosmos DB API for MongoDB in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types for the Azure Cosmos DB for MongoDB using Service Connector. You might still be able to connect to the Azure Cosmos DB for MongoDB in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute services
This page shows the supported authentication types and client types for the Azur
## Supported authentication types and client types
-Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
+Supported authentication and clients for App Service, Container Apps, and Azure Spring Apps:
### [Azure App Service](#tab/app-service)
Supported authentication and clients for App Service, Container Apps and Azure S
## Default environment variable names or application properties
-Use the connection details below to connect compute services to Cosmos DB. For each example below, replace the placeholder texts `<mongo-db-admin-user>`, `<password>`, `<Azure-Cosmos-DB-API-for-MongoDB-account>`, `<subscription-ID>`, `<resource-group-name>`, `<client-secret>`, and `<tenant-id>` with your own information.
+Use the connection details below to connect compute services to Azure Cosmos DB. For each example below, replace the placeholder texts `<mongo-db-admin-user>`, `<password>`, `<Azure-Cosmos-DB-API-for-MongoDB-account>`, `<subscription-ID>`, `<resource-group-name>`, `<client-secret>`, and `<tenant-id>` with your own information.
### Azure App Service and Azure Container Apps
service-connector How To Integrate Cosmos Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-gremlin.md
Title: Integrate the Azure Cosmos DB Gremlin API with Service Connector
-description: Integrate the Azure Cosmos DB Gremlin API into your application with Service Connector
+ Title: Integrate the Azure Cosmos DB for Apache Gremlin with Service Connector
+description: Integrate the Azure Cosmos DB for Apache Gremlin into your application with Service Connector
Last updated 09/19/2022-+
-# Integrate the Azure Cosmos DB API for Gremlin with Service Connector
+# Integrate the Azure Cosmos DB for Gremlin with Service Connector
-This page shows the supported authentication types and client types for the Azure Cosmos DB Gremlin API using Service Connector. You might still be able to connect to the Azure Cosmos DB API for Gremlin in other programming languages without using Service Connector. This page also shows default environment variable names and values you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types for the Azure Cosmos DB for Apache Gremlin using Service Connector. You might still be able to connect to the Azure Cosmos DB for Gremlin in other programming languages without using Service Connector. This page also shows default environment variable names and values you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute services
Supported authentication and clients for App Service, Container Apps and Azure S
## Default environment variable names or application properties
-Use the connection details below to connect your compute services to the Cosmos DB Gremlin API. For each example below, replace the placeholder texts `<Azure-Cosmos-DB-account>`, `<database>`, `<collection or graphs>`, `<username>`, `<password>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`,`<client-secret>`, and `<tenant-id>` with your own information.
+Use the connection details below to connect your compute services to the Azure Cosmos DB for Apache Gremlin. For each example below, replace the placeholder texts `<Azure-Cosmos-DB-account>`, `<database>`, `<collection or graphs>`, `<username>`, `<password>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`,`<client-secret>`, and `<tenant-id>` with your own information.
### Azure App Service and Azure Container Apps
service-connector How To Integrate Cosmos Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-sql.md
Title: Integrate the Azure Cosmos DB SQL API with Service Connector
+ Title: Integrate the Azure Cosmos DB for NoSQL with Service Connector
description: Integrate the Azure Cosmos DB SQL into your application with Service Connector Last updated 09/19/2022-+
-# Integrate the Azure Cosmos DB API for SQL with Service Connector
+# Integrate the Azure Cosmos DB for NoSQL with Service Connector
-This page shows the supported authentication types and client types for the Azure Cosmos DB SQL API using Service Connector. You might still be able to connect to the Azure Cosmos DB API for SQL in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types for the Azure Cosmos DB for NoSQL using Service Connector. You might still be able to connect to the Azure Cosmos DB for SQL in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute services
Supported authentication and clients for App Service, Container Apps and Azure S
## Default environment variable names or application properties
-Use the connection details below to connect your compute services to the Cosmos DB SQL API. For each example below, replace the placeholder texts `<database-server>`, `<database-name>`,`<account-key>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`, `<SQL-server>`, `<client-secret>`, `<tenant-id>`, and `<access-key>` with your own information.
+Use the connection details below to connect your compute services to the Azure Cosmos DB for NoSQL. For each example below, replace the placeholder texts `<database-server>`, `<database-name>`,`<account-key>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`, `<SQL-server>`, `<client-secret>`, `<tenant-id>`, and `<access-key>` with your own information.
### Azure App Service and Azure Container Apps
Use the connection details below to connect your compute services to the Cosmos
| Default environment variable name | Description | Example value | |--|-|-|
-| AZURE_COSMOS_CONNECTIONSTRING | Cosmos DB SQL API connection string | `AccountEndpoint=https://<database-server>.documents.azure.com:443/;AccountKey=<account-key>` |
+| AZURE_COSMOS_CONNECTIONSTRING | Azure Cosmos DB for NoSQL connection string | `AccountEndpoint=https://<database-server>.documents.azure.com:443/;AccountKey=<account-key>` |
#### System-assigned managed identity
service-connector How To Integrate Cosmos Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-table.md
Title: Integrate the Azure Cosmos DB Table API with Service Connector
-description: Integrate the Azure Cosmos DB Table API into your application with Service Connector
+ Title: Integrate the Azure Cosmos DB for Table with Service Connector
+description: Integrate the Azure Cosmos DB for Table into your application with Service Connector
Last updated 08/11/2022-+
-# Integrate the Azure Cosmos DB Table API with Service Connector
+# Integrate the Azure Cosmos DB for Table with Service Connector
-This page shows the supported authentication types and client types for the Azure Cosmos DB Table API using Service Connector. You might still be able to connect to the Azure Cosmos DB Table API in other programming languages without using Service Connector. This page also shows default environment variable names and values you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types for the Azure Cosmos DB for Table using Service Connector. You might still be able to connect to the Azure Cosmos DB for Table in other programming languages without using Service Connector. This page also shows default environment variable names and values you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute services
Supported authentication and clients for App Service, Container Apps and Azure S
## Default environment variable names or application properties
-Use the connection details below to connect your compute services to the Cosmos DB Table API. For each example below, replace the placeholder texts `<account-name>`, `<table-name>`, `<account-key>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`, `<client-secret>`, `<tenant-id>` with your own information.
+Use the connection details below to connect your compute services to the Azure Cosmos DB for Table. For each example below, replace the placeholder texts `<account-name>`, `<table-name>`, `<account-key>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`, `<client-secret>`, `<tenant-id>` with your own information.
### Azure App Service and Azure Container Apps
Use the connection details below to connect your compute services to the Cosmos
| Default environment variable name | Description | Example value | |--|-|-|
-| AZURE_COSMOS_CONNECTIONSTRING | Cosmos DB Table API connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;TableEndpoint=https://<table-name>.table.cosmos.azure.com:443/; ` |
+| AZURE_COSMOS_CONNECTIONSTRING | Azure Cosmos DB for Table connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;TableEndpoint=https://<table-name>.table.cosmos.azure.com:443/; ` |
#### System-assigned managed identity
service-connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/overview.md
description: Understand typical use case scenarios for Service Connector, and le
-+ Last updated 06/14/2022
This article provides an overview of Service Connector.
Any application that runs on Azure compute services and requires a backing service, can use Service Connector. Find below some examples that can use Service Connector to simplify service-to-service connection experience.
-* **WebApp/Container Apps/Spring Apps + DB:** Use Service Connector to connect PostgreSQL, MySQL, or Cosmos DB to your App Service/Container Apps/Spring Apps.
+* **WebApp/Container Apps/Spring Apps + Database:** Use Service Connector to connect PostgreSQL, MySQL, or Azure Cosmos DB to your App Service/Container Apps/Spring Apps.
* **WebApp/Container Apps/Spring Apps + Storage:** Use Service Connector to connect to Azure Storage accounts and use your preferred storage products easily for any of your apps.
-* **WebApp/Container Apps/Spring Apps + Messaging
+* **WebApp/Container Apps/Spring Apps + Messaging
See [what services are supported in Service Connector](#what-services-are-supported-in-service-connector) to see more supported services and application patterns.
Once a service connection is created, developers can validate and check the heal
* Apache Kafka on Confluent Cloud * Azure App Configuration * Azure Cache for Redis (Basic, Standard and Premium and Enterprise tiers)
-* Azure Cosmos DB (Core, MangoDB, Gremlin, Cassandra, Table)
+* Azure Cosmos DB (NoSQL, MongoDB, Gremlin, Cassandra, Table)
* Azure Database for MySQL * Azure Database for PostgreSQL * Azure Event Hubs
service-fabric How To Deploy Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-deploy-custom-image.md
+
+ Title: Custom Image
+description: Learn how to deploy a custom image on Azure Service Fabric managed clusters (SFMC).
+++++ Last updated : 09/15/2022++
+# Deploy a custom Windows virtual machine scale set image on new node types within a Service Fabric Managed Cluster (preview)
+
+Custom Windows images are like marketplace images, but you create them yourself for each new node type within a cluster. Custom images can be used to bootstrap configurations such as preloading applications, application configurations, and other OS configurations. Once you create a custom Windows image, you can deploy it to one or more new node types within a Service Fabric managed cluster.
+
+## Before you begin
+Ensure that you've [created a custom image](../virtual-machines/linux/tutorial-custom-images.md).
+Custom images are enabled with Service Fabric Managed Cluster (SFMC) API version 2022-08-01-preview and later. To use custom images, you must grant the SFMC first-party Azure Active Directory (Azure AD) app read access to the virtual machine (VM) managed image or shared gallery image so that SFMC has permission to read the image and create VMs with it.
+
+Check [Add a managed identity to a Service Fabric Managed Cluster node type](how-to-managed-identity-managed-cluster-virtual-machine-scale-sets.md#prerequisites) as reference on how to obtain information about SFMC First Party Azure AD App and grant it access to the resources. Reader access is sufficient.
+
+`Role definition name: Reader`
+
+`Role definition ID: acdd72a7-3385-48ef-bd42-f606fba81ae7`
+
+```powershell
+New-AzRoleAssignment -PrincipalId "<SFMC SPID>" -RoleDefinitionName "Reader" -Scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/galleries/<galleryName>"
+```
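
If you need to look up the `<SFMC SPID>` value, the following is a minimal sketch that queries the SFMC first-party app's service principal, assuming its display name is 'Azure Service Fabric Resource Provider' as described in the linked prerequisites article:

```powershell
# Sketch: find the SFMC first-party service principal and grant it Reader access.
# The display name below is an assumption; verify it per the prerequisites article.
$sfmcSp = Get-AzADServicePrincipal -DisplayName "Azure Service Fabric Resource Provider"

New-AzRoleAssignment -PrincipalId $sfmcSp.Id `
    -RoleDefinitionName "Reader" `
    -Scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/galleries/<galleryName>"
```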
+
+## Use the ARM template
+
+When you create a new node type, you need to add the new `vmImageResourceId` property to the node type in your ARM template and set it to the resource ID of your image. The following is an example:
+
+ ```JSON
+ {
+   "name": "SF",
+   "properties": {
+     "isPrimary": true,
+     "vmImageResourceId": "/subscriptions/<SubscriptionID>/resourceGroups/<myRG>/providers/Microsoft.Compute/images/<MyCustomImage>",
+     "vmSize": "Standard_D2",
+     "vmInstanceCount": 5,
+     "dataDiskSizeGB": 100
+   }
+ }
+ ```
+
+The `vmImageResourceId` value is passed along to the virtual machine scale set as an image reference ID. Currently, three types of resources are supported:
+
+- Managed Image (Microsoft.Compute/images)
+- Shared Gallery Image (Microsoft.Compute/galleries/images)
+- Shared Gallery Image Version (Microsoft.Compute/galleries/images/versions)
++
+## Auto OS upgrade
+
+Auto OS upgrade is also supported for custom images. To enable auto OS upgrade, the node type must not be pinned to a specific image version; that is, it must use a gallery image (Microsoft.Compute/galleries/images), for example:
+
+`/subscriptions/<subscriptionID>/resourceGroups/<myRG>/providers/Microsoft.Compute/galleries/<CustomImageGallery>/images/<CustomImageDef>`
+
+When a node type is created with this value as its `vmImageResourceId` and the cluster has [auto OS upgrade](how-to-managed-cluster-upgrades.md) enabled, SFMC monitors the published versions of the image definition. When a new version is published, SFMC reimages the virtual machine scale sets created with that image definition, bringing them to the latest version.
+
service-fabric How To Managed Cluster Modify Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-modify-node-type.md
Set-AzServiceFabricManagedNodeType -ResourceGroupName $rgName -ClusterName $clus
## Modify the VM SKU for a node type
-Service Fabric managed cluster does not support in-place modification of the VM SKU, but is simpler than classic. In order to accomplish this you'll need to do the following:
-* [Create a new node type via portal, ARM template, or PowerShell](how-to-managed-cluster-modify-node-type.md#add-a-node-type) with the required VM SKU. You'll need to use a template or PowerShell for adding a primary or stateless node type.
-* Migrate your workload over. One way is to use a [placement property to ensure that certain workloads run only on certain types of nodes in the cluster](./service-fabric-cluster-resource-manager-cluster-description.md#node-properties-and-placement-constraints).
-* [Delete old node type via portal or PowerShell](how-to-managed-cluster-modify-node-type.md#remove-a-node-type). To remove a primary node type you will have to use PowerShell.
-
+To modify the VM SKU used for a node type with an ARM template, set the `vmSize` property to the new value and redeploy the cluster for the setting to take effect. The managed cluster provider reimages each instance by upgrade domain. For a list of SKU options, see [VM sizes - Azure Virtual Machines | Microsoft Learn](../virtual-machines/sizes.md).
+```json
+ {
+   "apiVersion": "[variables('sfApiVersion')]",
+   "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+   "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+   "location": "[resourcegroup().location]",
+   "properties": {
+     ...
+     "vmSize": "[parameters('vmSize')]",
+     ...
+   }
+ }
+```
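
For reference, a minimal sketch of redeploying the template so the new `vmSize` value takes effect; the file names below are placeholders for your own template and parameter files:

```powershell
# Sketch: redeploy the ARM template that defines the managed cluster and its node types.
# Template and parameter file names are placeholders.
New-AzResourceGroupDeployment `
    -ResourceGroupName "myResourceGroup" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json"
```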
## Configure multiple managed disks Service Fabric managed clusters by default configure one managed disk. By configuring the following optional property and values, you can add more managed disks to node types within a cluster. You are able to specify the drive letter, disk type, and size per disk.
service-fabric Overview Managed Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/overview-managed-cluster.md
Service Fabric managed clusters are available in both Basic and Standard SKUs.
| Network resource (SKU for [Load Balancer](../load-balancer/skus.md), [Public IP](../virtual-network/ip-services/public-ip-addresses.md)) | Basic | Standard | | Min node (VM instance) count | 3 | 5 | | Max node count per node type | 100 | 1000 |
-| Max node type count | 1 | 20 |
+| Max node type count | 1 | 50 |
| Add/remove node types | No | Yes | | Zone redundancy | No | Yes |
service-fabric Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/security-controls-policy.md
Previously updated : 10/03/2022 Last updated : 10/10/2022 # Azure Policy Regulatory Compliance controls for Azure Service Fabric
service-fabric Service Fabric Best Practices Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-security.md
+ Last updated 07/14/2022
access_token=$(curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-v
``` Your Service Fabric app can then use the access token to authenticate to Azure Resources that support Active Directory.
-The following example shows how to do this for Cosmos DB resource:
+The following example shows how to do this for an Azure Cosmos DB resource:
```bash cosmos_db_password=$(curl 'https://management.azure.com/subscriptions/<YOUR SUBSCRIPTION>/resourceGroups/<YOUR RG>/providers/Microsoft.DocumentDB/databaseAccounts/<YOUR ACCOUNT>/listKeys?api-version=2016-03-31' -X POST -d "" -H "Authorization: Bearer $access_token" | python -c "import sys, json; print(json.load(sys.stdin)['primaryMasterKey'])")
service-fabric Service Fabric Cluster Creation Setup Azure Ad Via Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-setup-azure-ad-via-portal.md
+
+ Title: Set up Azure Active Directory for client authentication using Azure portal
+description: Learn how to set up Azure Active Directory (Azure AD) to authenticate clients for Service Fabric clusters using Azure portal.
+ Last updated : 8/8/2022+++
+# Set up Azure Active Directory for client authentication in Azure portal
+
+For clusters running on Azure, Azure Active Directory (Azure AD) is recommended to secure access to management endpoints. This article describes how to set up Azure AD to authenticate clients for a Service Fabric cluster in Azure portal.
+
+In this article, the term "application" will be used to refer to [Azure Active Directory applications](../active-directory/develop/developer-glossary.md#client-application), not Service Fabric applications; the distinction will be made where necessary. Azure AD enables organizations (known as tenants) to manage user access to applications.
+
+A Service Fabric cluster offers several entry points to its management functionality, including the web-based [Service Fabric Explorer][service-fabric-visualizing-your-cluster] and [Visual Studio][service-fabric-manage-application-in-visual-studio]. As a result, you will create two Azure AD applications to control access to the cluster: one web application and one native application. After the applications are created, you will assign users to read-only and admin roles.
+
+> [!NOTE]
+> On Linux, you must complete the following steps before you create the cluster. On Windows, you also have the option to [configure Azure AD authentication for an existing cluster](https://github.com/Azure/Service-Fabric-Troubleshooting-Guides/blob/master/Security/Configure%20Azure%20Active%20Directory%20Authentication%20for%20Existing%20Cluster.md).
+
+> [!NOTE]
+> It is a [known issue](https://github.com/microsoft/service-fabric/issues/399) that applications and nodes on Linux Azure AD-enabled clusters cannot be viewed in Azure portal.
+
+> [!NOTE]
+> Azure Active Directory now requires an application (app registration) publishers domain to be verified or use of default scheme. See [Configure an application's publisher domain](../active-directory/develop/howto-configure-publisher-domain.md) and [AppId Uri in single tenant applications will require use of default scheme or verified domains](../active-directory/develop/reference-breaking-changes.md#appid-uri-in-single-tenant-applications-will-require-use-of-default-scheme-or-verified-domains) for additional information.
+
+## Prerequisites
+
+In this article, we assume that you have already created a tenant. If you have not, start by reading [How to get an Azure Active Directory tenant][active-directory-howto-tenant].
+
+## Azure AD Cluster App Registration
+
+Open the Azure AD 'App registrations' blade in the Azure portal and select '+ New registration'.
+
+[Default Directory | App registrations](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps)
+
+![Screenshot of portal app registration.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-app-registration.png)
+
+### Properties
+
+- **Name:** Enter a descriptive name. It is helpful to include the registration type in the name, as shown below.
+
+ - Example: {{cluster name}}_Cluster
+
+- **Supported account types:** Select 'Accounts in this organizational directory only'.
+
+- **Redirect URI:** Select 'Web' and enter the URL that the client redirects to. In this example, the Service Fabric Explorer (SFX) URL is used. After registration is complete, additional redirect URIs can be added by selecting 'Authentication' and then 'Add URI'.
+
+ - Example: 'https://{{cluster name}}.{{location}}.cloudapp.azure.com:19080/Explorer/https://docsupdatetracker.net/index.html'
+
+> [!NOTE]
+> Add additional redirect URIs if you plan to access SFX using a shortened URL such as 'https://{{cluster name}}.{{location}}.cloudapp.azure.com:19080/Explorer'. An exact URL is required to avoid error AADSTS50011 ("The redirect URI specified in the request does not match the redirect URIs configured for the application. Make sure the redirect URI sent in the request matches one added to the application in Azure portal. Navigate to https://aka.ms/redirectUriMismatchError to learn more about troubleshooting this error.").
+
+![Screenshot of portal cluster app registration.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-cluster-app-registration.png)
+
+### Branding & properties
+
+After registering the 'Cluster' App Registration, select 'Branding & Properties' and populate any additional information.
+- **Home page URL:** Enter SFX URL.
+
+![Screenshot of portal cluster branding.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-cluster-branding.png)
+
+### Authentication
+
+Select 'Authentication'. Under 'Implicit grant and hybrid flows', check 'ID tokens (used for implicit and hybrid flows)'.
+
+![Screenshot of portal cluster authentication.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-cluster-authentication.png)
+
+### Expose an API
+
+Select 'Expose an API', then select the 'Set' link to enter a value for 'Application ID URI'. Enter either the URI of a verified domain or a URI that uses the API scheme format api://{{tenant Id}}/{{cluster name}}.
+
+See [AppId Uri in single tenant applications will require use of default scheme or verified domains](../active-directory/develop/reference-breaking-changes.md#appid-uri-in-single-tenant-applications-will-require-use-of-default-scheme-or-verified-domains) for additional information.
+
+Example api scheme: api://{{tenant id}}/{{cluster}}
+
+Example: api://0e3d2646-78b3-4711-b8be-74a381d9890c/mysftestcluster
+
+![Screenshot of portal cluster expose application ID.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-cluster-expose-application-id.png)
+
+Select '+ Add a scope' to add a new scope with 'user_impersonation'.
+
+#### Properties
+
+- **Scope name:** user_impersonation
+- **Who can consent?:** Select 'Admins and users'
+- **Admin consent display name:** Enter a descriptive name. It is helpful to include the cluster name and authentication type, as shown below.
+
+ - Example: Access mysftestcluster_Cluster
+
+- **Admin consent description:** Example: Allow the application to access mysftestcluster_Cluster on behalf of the signed-in user.
+- **User consent display name:** Enter a descriptive name. It is helpful to include the cluster name and authentication type, as shown below.
+
+ - Example: Access mysftestcluster_Cluster
+
+- **User consent description:** Example: Allow the application to access mysftestcluster_Cluster on your behalf.
+- **State:** Select 'Enabled'.
+
+![Screenshot of portal cluster expose scope.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-cluster-expose-scope.png)
+
+### App roles
+
+Select 'App roles', then '+ Create app role' to add the 'Admin' and 'ReadOnly' roles.
+
+![Screenshot of portal cluster roles.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-cluster-roles.png)
+
+#### Admin User Properties
+- **Display Name:** Enter 'Admin'.
+- **Allowed member types:** Select 'Users/Groups'.
+- **Value:** Enter 'Admin'.
+- **Description:** Enter 'Admins can manage roles and perform all task actions'.
+
+![Screenshot of portal cluster roles admin.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-cluster-roles-admin.png)
+
+#### ReadOnly User Properties
+- **Display Name:** Enter 'ReadOnly'.
+- **Allowed member types:** Select 'Users/Groups'.
+- **Value:** Enter 'ReadOnly'.
+- **Description:** Enter 'ReadOnly roles have limited query access'.
+
+![Screenshot of portal cluster roles readonly.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-cluster-roles-readonly.png)
+
+## Azure AD Client App Registration
+
+Open the Azure AD 'App registrations' blade in the Azure portal and select '+ New registration'.
+
+[Default Directory | App registrations](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps)
+
+![Screenshot of portal app registration.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-app-registration.png)
+
+### Properties
+
+- **Name:** Enter a descriptive name. It is helpful to include the registration type in the name, as shown below. Example: {{cluster name}}_Client
+- **Supported account types:** Select 'Accounts in this organizational directory only'.
+- **Redirect URI:** Select 'Public client/native' and enter 'urn:ietf:wg:oauth:2.0:oob'.
+
+ ![Screenshot of portal client app registration.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-client-app-registration.png)
+
+### Authentication
+
+After registering, select 'Authentication'. Under 'Advanced settings', select 'Yes' for 'Allow public client flows', and then select 'Save'.
+
+ ![Screenshot of portal client authentication.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-client-authentication.png)
+
+### API Permissions
+
+Select 'API permissions', then '+ Add a permission' to add the 'user_impersonation' permission from the 'Cluster' app registration created above.
+
+![Screenshot of portal client API cluster.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-client-api-cluster-add.png)
+
+Select 'Delegated permissions', select the 'user_impersonation' permission, and then select 'Add permissions'.
+
+![Screenshot of portal client API delegated.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-client-api-delegated.png)
+
+In the API permissions list, select 'Grant admin consent for Default Directory'.
+
+![Screenshot of portal client API grant.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-client-api-grant.png)
+
+![Screenshot of portal client API grant confirm.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-client-api-grant-confirm.png)
+
+### Disable 'Assignment required' for Azure AD Client App Registration
+
+For the 'Client' app registration only, navigate to the 'Enterprise applications' blade for the 'Client' app registration, using the link above or the steps below.
+In the 'Properties' view, select 'No' for 'Assignment required?'.
+
+![Screenshot of portal app registration client properties.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-app-registration-client-properties.png)
+
+## Assigning Application Roles to Users
+
+After the Service Fabric Azure AD app registrations are created, Azure AD users can be assigned roles on them so that they can connect to the cluster with Azure AD (a scripted alternative is sketched at the end of this section).
+For both the ReadOnly and Admin roles, the 'Azure AD Cluster App Registration' is used.
+The 'Azure AD Client App Registration' is not used for role assignments.
+
+Role Assignments are performed from the [Enterprise Applications](https://portal.azure.com/#view/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/~/AppAppsPreview/menuId~/null) blade.
+
+> [!NOTE]
+> To view the enterprise applications created during the app registration process, the default filters for 'Application type' and 'Application ID starts with' must be removed from the 'All applications' portal view. Optionally, the enterprise application can also be viewed by opening the 'Enterprise applications' link from the app registration 'API permissions' page, as shown in the figure above.
+
+### Default Filters to be removed
+
+![Screenshot of portal enterprise apps filter.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-enterprise-apps-filter.png)
+
+### Filter Removed
+
+![Screenshot of portal enterprise apps no filter.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-enterprise-apps-no-filter.png)
+
+### Adding Role Assignment to Azure AD User
+
+To add role assignments to existing Azure AD users, navigate to 'Enterprise applications' and find the application created for the 'Azure AD Cluster App Registration'.
+Select 'Users and groups' and '+ Add user/group' to add existing Azure AD user role assignment.
+
+![Screenshot of portal enterprise apps add user.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-enterprise-apps-add-user.png)
+
+Under 'Users', select the 'None Selected' link.
+
+![Screenshot of portal enterprise apps add assignment.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-enterprise-apps-add-assignment.png)
+
+#### Assigning ReadOnly Role
+
+For users needing readonly / view access, find the user, and for 'Select a role', click on the 'None Selected' link to add the 'ReadOnly' role.
+
+![Screenshot of portal enterprise apps readonly role.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-enterprise-apps-readonly-role.png)
+
+#### Assigning Admin Role
+
+For users needing full read / write access, find the user, and for 'Select a role', click on the 'None Selected' link to add the 'Admin' role.
+
+![Screenshot of portal enterprise apps admin role.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-enterprise-apps-admin-role.png)
+
+![Screenshot of portal enterprise apps user assignment.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-enterprise-apps-user-assignments.png)
+
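As an alternative to the portal steps above, role assignment can also be scripted. The following is a minimal sketch using the Microsoft Graph PowerShell SDK (an assumption, not the portal flow shown above); the app registration display name and user principal name are placeholders:

```powershell
# Sketch: assign the 'Admin' app role of the cluster app registration to a user.
# The display name and UPN below are placeholders; adjust them for your tenant.
Connect-MgGraph -Scopes "AppRoleAssignment.ReadWrite.All","Application.Read.All"

$sp   = Get-MgServicePrincipal -Filter "displayName eq 'mysftestcluster_Cluster'"
$role = $sp.AppRoles | Where-Object { $_.Value -eq 'Admin' }
$user = Get-MgUser -UserId "user@contoso.com"

New-MgUserAppRoleAssignment -UserId $user.Id `
    -PrincipalId $user.Id `
    -ResourceId $sp.Id `
    -AppRoleId $role.Id
```
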
+## Configuring Cluster with Azure AD Registrations
+
+In Azure portal, open [Service Fabric Clusters](https://ms.portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.ServiceFabric%2Fclusters) blade.
+
+### Azure Service Fabric Managed Cluster Configuration
+
+Open the managed cluster resource and select 'Security'.
+Check 'Enable Azure Active Directory'.
+
+#### Properties
+- **TenantID:** Enter Tenant ID
+- **Cluster application:** Enter the app registration 'Application (client) ID' for the 'Azure AD Cluster App Registration'. This is also known as the web application.
+- **Client application:** Enter the app registration 'Application (client) ID' for the 'Azure AD Client App Registration'. This is also known as the native application.
+
+![Screenshot of portal managed cluster Azure AD.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-managed-cluster-azure-ad.png)
+
+### Azure Service Fabric Cluster Configuration
+
+Open the cluster resource and select 'Security'.
+Select '+ Add...'
+
+![Screenshot of portal cluster Azure AD add.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-cluster-azure-ad-add.png)
+
+#### Properties
+
+- **Authentication type:** Select 'Azure Active Directory'.
+- **TenantID:** Enter Tenant ID.
+- **Cluster application:** Enter the app registration 'Application (client) ID' for the 'Azure AD Cluster App Registration'. This is also known as the web application.
+- **Client application:** Enter the app registration 'Application (client) ID' for the 'Azure AD Client App Registration'. This is also known as the native application.
+
+![Screenshot of portal cluster Azure AD settings.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-cluster-azure-ad-settings.png)
+
+## Connecting to Cluster with Azure AD
+
+### Connect to Service Fabric cluster by using Azure AD authentication via PowerShell
+
+To use PowerShell to connect to a Service Fabric cluster, the commands must be run from a machine that has the Service Fabric SDK installed (the nodes currently in a cluster already have it).
+To connect to the Service Fabric cluster, use the following PowerShell command example:
+
+```powershell
+Import-Module servicefabric
+
+$clusterEndpoint = 'sftestcluster.eastus.cloudapp.azure.com'
+$serverCertThumbprint = ''
+Connect-ServiceFabricCluster -ConnectionEndpoint $clusterEndpoint `
+ -AzureActiveDirectory `
+ -ServerCertThumbprint $serverCertThumbprint `
+ -Verbose
+```
+
+### Connect to Service Fabric managed cluster by using Azure AD authentication via PowerShell
+
+To connect to a managed cluster, the 'Az.Resources' PowerShell module is also required, because the cluster's server certificate thumbprint is generated dynamically and must be queried and passed to the connect command.
+To connect to the Service Fabric managed cluster, use the following PowerShell command example:
+
+```powershell
+Import-Module servicefabric
+Import-Module Az.Resources
+
+$clusterEndpoint = 'mysftestcluster.eastus.cloudapp.azure.com'
+$clusterName = 'mysftestcluster'
+
+$clusterResource = Get-AzResource -Name $clusterName -ResourceType 'Microsoft.ServiceFabric/managedclusters'
+$serverCertThumbprint = $clusterResource.Properties.clusterCertificateThumbprints
+
+Connect-ServiceFabricCluster -ConnectionEndpoint $clusterEndpoint `
+ -AzureActiveDirectory `
+ -ServerCertThumbprint $serverCertThumbprint `
+ -Verbose
+```
+
+## Troubleshooting help in setting up Azure Active Directory
+Setting up Azure AD and using it can be challenging, so here are some pointers on how to debug common issues.
+
+### **Service Fabric Explorer prompts you to select a certificate**
+#### **Problem**
+After you sign in successfully to Azure AD in Service Fabric Explorer, the browser returns to the home page but a message prompts you to select a certificate.
+
+![Screenshot of SFX certificate dialog.][sfx-select-certificate-dialog]
+
+#### **Reason**
+The user is not assigned a role in the Azure AD cluster application, so Azure AD authentication fails on the Service Fabric cluster, and Service Fabric Explorer falls back to certificate authentication.
+
+#### **Solution**
+Follow the instructions for setting up Azure AD, and assign user roles. Also, we recommend that you turn on "User assignment required to access app," as `SetupApplications.ps1` does.
+
+### **Connection with PowerShell fails with an error: "The specified credentials are invalid"**
+#### **Problem**
+When you use PowerShell to connect to the cluster by using "AzureActiveDirectory" security mode, after you sign in successfully to Azure AD, the connection fails with an error: "The specified credentials are invalid."
+
+#### **Solution**
+This solution is the same as the preceding one.
+
+### **Service Fabric Explorer returns a failure when you sign in: "AADSTS50011"**
+#### **Problem**
+When you try to sign in to Azure AD in Service Fabric Explorer, the page returns a failure: "AADSTS50011: The reply address &lt;url&gt; does not match the reply addresses configured for the application: &lt;guid&gt;."
+
+![Screenshot of SFX reply address does not match.][sfx-reply-address-not-match]
+
+#### **Reason**
+The cluster (web) application that represents Service Fabric Explorer attempts to authenticate against Azure AD, and as part of the request it provides the redirect return URL. But the URL is not listed in the Azure AD application **REPLY URL** list.
+
+#### **Solution**
+On the Azure AD app registration page for your cluster, select **Authentication**, and under the **Redirect URIs** section, add the Service Fabric Explorer URL to the list. Save your change.
+
+![Screenshot of Web application reply URL.][web-application-reply-url]
+
+### **Connecting to the cluster using Azure AD authentication via PowerShell gives an error when you sign in: "AADSTS50011"**
+#### **Problem**
+When you try to connect to a Service Fabric cluster using Azure AD via PowerShell, the sign-in page returns a failure: "AADSTS50011: The reply url specified in the request does not match the reply urls configured for the application: &lt;guid&gt;."
+
+#### **Reason**
+Similar to the preceding issue, PowerShell attempts to authenticate against Azure AD and provides a redirect URL that isn't listed in the Azure AD application **Reply URLs** list.
+
+#### **Solution**
+Use the same process as in the preceding issue, but the URL must be set to `urn:ietf:wg:oauth:2.0:oob`, a special redirect for command-line authentication.
+
+### **Connect the cluster by using Azure AD authentication via PowerShell**
+To connect to the Service Fabric cluster, use the following PowerShell command example:
+
+```powershell
+Connect-ServiceFabricCluster -ConnectionEndpoint <endpoint> -KeepAliveIntervalInSec 10 -AzureActiveDirectory -ServerCertThumbprint <thumbprint>
+```
+
+To learn more, see [Connect-ServiceFabricCluster cmdlet](/powershell/module/servicefabric/connect-servicefabriccluster).
+
+### **Can I reuse the same Azure AD tenant in multiple clusters?**
+Yes. But remember to add the URL of Service Fabric Explorer to your cluster (web) application. Otherwise, Service Fabric Explorer does not work.
+
+### **Why do I still need a server certificate while Azure AD is enabled?**
+FabricClient and FabricGateway perform a mutual authentication. During Azure AD authentication, Azure AD integration provides a client identity to the server, and the server certificate is used by the client to verify the server's identity. For more information about Service Fabric certificates, see [X.509 certificates and Service Fabric][x509-certificates-and-service-fabric].
+
+## Next steps
+After setting up Azure Active Directory applications and setting roles for users, [configure and deploy a cluster](service-fabric-cluster-creation-via-arm.md).
+
+<!-- Links -->
+
+[azure-cli]: /cli/azure/get-started-with-azure-cli
+[azure-portal]: https://portal.azure.com/
+[service-fabric-cluster-security]: service-fabric-cluster-security.md
+[active-directory-howto-tenant]:../active-directory/develop/quickstart-create-new-tenant.md
+[service-fabric-visualizing-your-cluster]: service-fabric-visualizing-your-cluster.md
+[service-fabric-manage-application-in-visual-studio]: service-fabric-manage-application-in-visual-studio.md
+[sf-azure-ad-ps-script-download]:http://servicefabricsdkstorage.blob.core.windows.net/publicrelease/MicrosoftAzureServiceFabric-AADHelpers.zip
+[x509-certificates-and-service-fabric]: service-fabric-cluster-security.md#x509-certificates-and-service-fabric
+
+<!-- Images -->
+[sfx-select-certificate-dialog]: ./media/service-fabric-cluster-creation-setup-aad/sfx-select-certificate-dialog.png
+[sfx-reply-address-not-match]: ./media/service-fabric-cluster-creation-setup-aad/sfx-reply-address-not-match.png
+[web-application-reply-url]: ./media/service-fabric-cluster-creation-setup-aad/web-application-reply-url.png
service-fabric Service Fabric Cluster Region Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-region-move.md
+ Last updated 07/14/2022
Before engaging in any regional migration, we recommend establishing a testbed a
* If you do plan to keep the existing source region and you have a DNS/CNAME associated with the public IP of a Network Load Balancer that was delivering calls to your original source cluster. Set up an instance of Azure Traffic Manager and then associate the DNS name with that Azure Traffic Manager Instance. The Azure Traffic Manager could be configured to then route to the individual Network Load Balancers within each region.
-4. If you do plan to keep both regions, then you will usually have some sort of "back sync", where the source of truth is kept in some remote store, such as SQL, Cosmos DB, or Blob or File Storage, which is then synced between the regions. If this applies to your workload, then it is recommended to confirm that data is flowing between the regions as expected.
+4. If you do plan to keep both regions, then you will usually have some sort of "back sync", where the source of truth is kept in some remote store, such as Azure SQL, Azure Cosmos DB, or Blob or File Storage, which is then synced between the regions. If this applies to your workload, then it is recommended to confirm that data is flowing between the regions as expected.
## Final Validation 1. As a final validation, verify that traffic is flowing as expected and that the services in the new region (and potentially the old region) are operating as expected.
Before engaging in any regional migration, we recommend establishing a testbed a
Now that you've moved your cluster and applications to a new region you should validate backups are set up to protect any required data. > [!div class="nextstepaction"]
-> [Set up backups after migration](service-fabric-backuprestoreservice-quickstart-azurecluster.md)
+> [Set up backups after migration](service-fabric-backuprestoreservice-quickstart-azurecluster.md)
service-fabric Service Fabric Diagnostics Oms Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-agent.md
Now that you have added the Log Analytics agent, head on over to the Log Analyti
## Next steps
-* Collect relevant [performance counters](service-fabric-diagnostics-event-generation-perf.md). To configure the Log Analytics agent to collect specific performance counters, review [configuring data sources](../azure-monitor/agents/agent-data-sources.md#configuring-data-sources).
+* Collect relevant [performance counters](service-fabric-diagnostics-event-generation-perf.md). To configure the Log Analytics agent to collect specific performance counters, review [configuring data sources](../azure-monitor/agents/agent-data-sources.md#configure-data-sources).
* Configure Azure Monitor logs to set up [automated alerting](../azure-monitor/alerts/alerts-overview.md) to aid in detecting and diagnostics * As an alternative you can collect performance counters through [Azure Diagnostics extension and send them to Application Insights](service-fabric-diagnostics-event-aggregation-wad.md#add-the-application-insights-sink-to-the-resource-manager-template)
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
**Release** | **Mobility service version** | **Kernel version** | | | |
+14.04 LTS | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new 14.04 LTS kernels supported in this release. |
14.04 LTS | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new 14.04 LTS kernels supported in this release. | 14.04 LTS | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 14.04 LTS kernels supported in this release. | 14.04 LTS | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new 14.04 LTS kernels supported in this release. | 14.04 LTS | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new 14.04 LTS kernels supported in this release. |
-14.04 LTS | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new 14.04 LTS kernels supported in this release. |
|||
+16.04 LTS | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new 16.04 LTS kernels supported in this release. |
16.04 LTS | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new 16.04 LTS kernels supported in this release. | 16.04 LTS | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 16.04 LTS kernels supported in this release. | 16.04 LTS | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1112-azure, 4.15.0-1113-azure | 16.04 LTS | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new 16.04 LTS kernels supported in this release. |
-16.04 LTS | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.4.0-21-generic to 4.4.0-206-generic <br/>4.8.0-34-generic to 4.8.0-58-generic <br/>4.10.0-14-generic to 4.10.0-42-generic <br/>4.11.0-13-generic to 4.11.0-14-generic <br/>4.13.0-16-generic to 4.13.0-45-generic <br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure|
|||
-18.04 LTS |[9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new 18.04 LTS kernels supported in this release. |
+18.04 LTS | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |4.15.0-1151-azure </br> 4.15.0-193-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic |
+18.04 LTS |[9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 4.15.0-1149-azure </br> 4.15.0-1150-azure </br> 4.15.0-191-generic </br> 4.15.0-192-generic </br>5.4.0-1089-azure </br>5.4.0-1090-azure </br>5.4.0-124-generic|
18.04 LTS |[9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.15.0-1139-azure </br> 4.15.0-1142-azure </br> 4.15.0-1145-azure </br> 4.15.0-1146-azure </br> 4.15.0-180-generic </br> 4.15.0-184-generic </br> 4.15.0-187-generic </br> 4.15.0-188-generic </br> 4.15.0-189-generic </br> 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic | 18.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1134-azure </br> 4.15.0-1136-azure </br> 4.15.0-1137-azure </br> 4.15.0-1138-azure </br> 4.15.0-173-generic </br> 4.15.0-175-generic </br> 4.15.0-176-generic </br> 4.15.0-177-generic </br> 5.4.0-105-generic </br> 5.4.0-1073-azure </br> 5.4.0-1074-azure </br> 5.4.0-1077-azure </br> 5.4.0-1078-azure </br> 5.4.0-107-generic </br> 5.4.0-109-generic </br> 5.4.0-110-generic | 18.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-92-generic </br> 4.15.0-166-generic </br> 4.15.0-1129-azure </br> 5.4.0-1065-azure </br> 4.15.0-1130-azure </br> 4.15.0-167-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic </br> 4.15.0-1131-azure </br> 4.15.0-169-generic </br> 5.4.0-100-generic </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure |
-18.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.15.0-1126-azure </br> 4.15.0-1125-azure </br> 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-162-generic </br> 4.15.0-161-generic </br> 4.15.0-156-generic </br> 5.4.0-1061-azure to 5.4.0-1063-azure </br> 5.4.0-90-generic </br> 5.4.0-89-generic </br> 9.46 hotfix patch** </br> 4.15.0-1127-azure </br> 4.15.0-163-generic </br> 5.4.0-1064-azure </br> 5.4.0-91-generic |
|||
-20.04 LTS |[9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic |
+20.04 LTS | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |5.13.0-1009-azure </br> 5.13.0-1012-azure </br> 5.13.0-1013-azure </br> 5.13.0-1014-azure </br> 5.13.0-1017-azure </br> 5.13.0-1021-azure </br> 5.13.0-1022-azure </br> 5.13.0-1023-azure </br> 5.13.0-1025-azure </br> 5.13.0-1028-azure </br> 5.13.0-1029-azure </br> 5.13.0-1031-azure </br> 5.13.0-21-generic </br> 5.13.0-22-generic </br> 5.13.0-23-generic </br> 5.13.0-25-generic </br> 5.13.0-27-generic </br> 5.13.0-28-generic </br> 5.13.0-30-generic </br> 5.13.0-35-generic </br> 5.13.0-37-generic </br> 5.13.0-39-generic </br> 5.13.0-40-generic </br> 5.13.0-41-generic </br> 5.13.0-44-generic </br> 5.13.0-48-generic </br> 5.13.0-51-generic </br> 5.13.0-52-generic </br> 5.15.0-1007-azure </br> 5.15.0-1008-azure </br> 5.15.0-1013-azure </br> 5.15.0-1014-azure </br> 5.15.0-1017-azure </br> 5.15.0-1019-azure </br> 5.15.0-1020-azure </br> 5.15.0-33-generic </br> 5.15.0-51-generic </br> 5.15.0-43-generic </br> 5.15.0-46-generic </br> 5.15.0-48-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic |
+20.04 LTS |[9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-1089-azure </br> 5.4.0-1090-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic </br> 5.4.0-124-generic </br> 5.4.0-125-generic |
20.04 LTS |[9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 20.04 LTS kernels supported in this release. | 20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.4.0-1074-azure </br> 5.4.0-107-generic </br> 5.4.0-1077-azure </br> 5.4.0-1078-azure </br> 5.4.0-109-generic </br> 5.4.0-110-generic </br> 5.11.0-1007-azure </br> 5.11.0-1012-azure </br> 5.11.0-1013-azure </br> 5.11.0-1015-azure </br> 5.11.0-1017-azure </br> 5.11.0-1019-azure </br> 5.11.0-1020-azure </br> 5.11.0-1021-azure </br> 5.11.0-1022-azure </br> 5.11.0-1023-azure </br> 5.11.0-1025-azure </br> 5.11.0-1027-azure </br> 5.11.0-1028-azure </br> 5.11.0-22-generic </br> 5.11.0-25-generic </br> 5.11.0-27-generic </br> 5.11.0-34-generic </br> 5.11.0-36-generic </br> 5.11.0-37-generic </br> 5.11.0-38-generic </br> 5.11.0-40-generic </br> 5.11.0-41-generic </br> 5.11.0-43-generic </br> 5.11.0-44-generic </br> 5.11.0-46-generic </br> 5.4.0-1077-azure </br> 5.4.0-1078-azure </br> 5.8.0-1033-azure </br> 5.8.0-1036-azure </br> 5.8.0-1039-azure </br> 5.8.0-1040-azure </br> 5.8.0-1041-azure </br> 5.8.0-1042-azure </br> 5.8.0-1043-azure </br> 5.8.0-23-generic </br> 5.8.0-25-generic </br> 5.8.0-28-generic </br> 5.8.0-29-generic </br> 5.8.0-31-generic </br> 5.8.0-33-generic </br> 5.8.0-34-generic </br> 5.8.0-36-generic </br> 5.8.0-38-generic </br> 5.8.0-40-generic </br> 5.8.0-41-generic </br> 5.8.0-43-generic </br> 5.8.0-44-generic </br> 5.8.0-45-generic </br> 5.8.0-48-generic </br> 5.8.0-49-generic </br> 5.8.0-50-generic </br> 5.8.0-53-generic </br> 5.8.0-55-generic </br> 5.8.0-59-generic </br> 5.8.0-63-generic | 20.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-1065-azure </br> 5.4.0-92-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic </br> 5.4.0-100-generic </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure |
-20.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 5.4.0-84-generic </br> 5.4.0-1058-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-1063-azure </br> 5.4.0-89-generic </br> 5.4.0-90-generic </br> 9.46 hotfix patch** </br> 5.4.0-1064-azure </br> 5.4.0-91-generic |
+ > [!Note] > To support the latest Linux kernels within 15 days of release, Azure Site Recovery rolls out a hotfix patch on top of the latest mobility agent version. This fix is rolled out between two major version releases. To update to the latest version of the mobility agent (including the hotfix patch), follow the steps in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in the Azure-to-Azure DR scenario.
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
**Release** | **Mobility service version** | **Kernel version** | | | |
+Debian 7 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new Debian 7 kernels supported in this release. |
Debian 7 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 7 kernels supported in this release. | Debian 7 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new Debian 7 kernels supported in this release. | Debian 7 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new Debian 7 kernels supported in this release. | Debian 7 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new Debian 7 kernels supported in this release. |
-Debian 7 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new Debian 7 kernels supported in this release. |
|||
+Debian 8 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new Debian 8 kernels supported in this release. |
Debian 8 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 8 kernels supported in this release. | Debian 8 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new Debian 8 kernels supported in this release. | Debian 8 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new Debian 8 kernels supported in this release. | Debian 8 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new Debian 8 kernels supported in this release. |
-Debian 8 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new Debian 8 kernels supported in this release. |
|||
+Debian 9.1 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new Debian 9.1 kernels supported in this release. |
Debian 9.1 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 9.1 kernels supported in this release. | Debian 9.1 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.9.0-19-amd64 Debian 9.1 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.9.0-18-amd64 </br> 4.19.0-0.bpo.19-amd64 </br> 4.19.0-0.bpo.17-cloud-amd64 to 4.19.0-0.bpo.19-cloud-amd64 Debian 9.1 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 4.9.0-16-amd64, 4.9.0-17-amd64
-Debian 9.1 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new Debian 9.1 kernels supported in this release.
|||
+Debian 10 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new Debian 10 kernels supported in this release. |
Debian 10 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 10 kernels supported in this release. Debian 10 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.19.0-21-amd64 </br> 4.19.0-21-cloud-amd64 </br> 5.10.0-0.bpo.15-amd64 </br> 5.10.0-0.bpo.15-cloud-amd64 Debian 10 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.19.0-20-amd64 </br> 4.19.0-20-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64, 5.8.0-0.bpo.2-cloud-amd64, 5.9.0-0.bpo.2-amd64, 5.9.0-0.bpo.2-cloud-amd64, 5.9.0-0.bpo.5-amd64, 5.9.0-0.bpo.5-cloud-amd64, 5.10.0-0.bpo.7-amd64, 5.10.0-0.bpo.7-cloud-amd64, 5.10.0-0.bpo.9-amd64, 5.10.0-0.bpo.9-cloud-amd64, 5.10.0-0.bpo.11-amd64, 5.10.0-0.bpo.11-cloud-amd64, 5.10.0-0.bpo.12-amd64, 5.10.0-0.bpo.12-cloud-amd64 | Debian 10 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new Debian 10 kernels supported in this release.
-Debian 10 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new Debian 10 kernels supported in this release.
> [!Note] > To support the latest Linux kernels within 15 days of release, Azure Site Recovery rolls out a hotfix patch on top of the latest mobility agent version. This fix is rolled out between two major version releases. To update to the latest version of the mobility agent (including the hotfix patch), follow the steps in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in the Azure-to-Azure DR scenario.
Debian 10 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azur
**Release** | **Mobility service version** | **Kernel version** | | | |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.106-azure:5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SLES 12 Azure kernels supported in this release. | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br>4.12.14-16.100-azure:5 </br> 4.12.14-16.103-azure:5 | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.12.14-16.94-azure:5 </br> 4.12.14-16.97-azure:5 </br> 4.12.14-122.110-default:5 </br> 4.12.14-122.113-default:5 </br> 4.12.14-122.116-default:5 </br> 4.12.14-122.121-default:5 | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.85-azure:5 </br> 4.12.14-122.106-default:5 </br> 4.12.14-16.88-azure:5 </br> 4.12.14-122.110-default:5 |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.46](https://support.microsoft.com/en-us/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.80-azure |
+ #### Supported SUSE Linux Enterprise Server 15 kernel versions for Azure virtual machines **Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 15 (SP1, SP2, SP3) | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SLES 15 Azure kernels supported in this release. |
+SUSE Linux Enterprise Server 15 (SP1, SP2, SP3) | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SLES 15 Azure kernels supported in this release. |
+SUSE Linux Enterprise Server 15 (SP1, SP2, SP3) | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.3.18-150300.38.75-azure:3 |
SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br>5.3.18-150300.38.69-azure:3 </br> SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-150300.38.37-azure:3 </br> 5.3.18-150300.38.40-azure:3 </br> 5.3.18-150300.38.47-azure:3 </br> 5.3.18-150300.38.50-azure:3 </br> 5.3.18-150300.38.53-azure:3 </br> 5.3.18-150300.38.56-azure:3 </br> 5.3.18-150300.38.59-azure:3 </br> 5.3.18-150300.38.62-azure:3 </br> 5.3.18-36-azure:3 </br> 5.3.18-38.11-azure:3 </br> 5.3.18-38.14-azure:3 </br> 5.3.18-38.17-azure:3 </br> 5.3.18-38.22-azure:3 </br> 5.3.18-38.25-azure:3 </br> 5.3.18-38.28-azure:3 </br> 5.3.18-38.31-azure:3 </br> 5.3.18-38.34-azure:3 </br> 5.3.18-38.3-azure:3 </br> 5.3.18-38.8-azure:3 | SUSE Linux Enterprise Server 15, SP1, SP2 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-38.31-azure </br> 5.3.18-38.8-azure </br> 5.3.18-57-default </br> 5.3.18-59.10-default </br> 5.3.18-59.13-default </br> 5.3.18-59.16-default </br> 5.3.18-59.19-default </br> 5.3.18-59.24-default </br> 5.3.18-59.27-default </br> 5.3.18-59.30-default </br> 5.3.18-59.34-default </br> 5.3.18-59.37-default </br> 5.3.18-59.5-default </br> 5.3.18-38.34-azure:3 </br> 5.3.18-150300.59.43-default:3 </br> 5.3.18-150300.59.46-default:3 </br> 5.3.18-59.40-default:3 </br>
-SUSE Linux Enterprise Server 15, SP1, SP2 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure </br> 5.3.18-18.72-azure </br> 5.3.18-18.75-azure
+ > [!Note] > To support the latest Linux kernels within 15 days of release, Azure Site Recovery rolls out a hotfix patch on top of the latest mobility agent version. This fix is rolled out between two major version releases. To update to the latest version of the mobility agent (including the hotfix patch), follow the steps in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in the Azure-to-Azure DR scenario.
site-recovery Hyper V Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-support-matrix.md
Hyper-V without Virtual Machine Manager | You can perform disaster recovery to A
**Server** | **Requirements** | **Details** | |
-Hyper-V (running without Virtual Machine Manager) | Windows Server 2022 (Server core not supported), Windows Server 2019, Windows Server 2016, Windows Server 2012 R2 with latest updates <br/><br/> **Note:** Server core installation of these operating systems are also supported. | If you have already configured Windows Server 2012 R2 with/or SCVMM 2012 R2 with Azure Site Recovery and plan to upgrade the OS, please follow the guidance [documentation.](upgrade-2012R2-to-2016.md)
+Hyper-V (running without Virtual Machine Manager) | Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows Server 2012 R2 with latest updates <br/><br/> **Note:** Server Core installations of these operating systems are also supported. | If you've already configured Windows Server 2012 R2 and/or SCVMM 2012 R2 with Azure Site Recovery and plan to upgrade the OS, follow the guidance in the [documentation](upgrade-2012R2-to-2016.md).
Hyper-V (running with Virtual Machine Manager) | Virtual Machine Manager 2022 (Server core not supported), Virtual Machine Manager 2019, Virtual Machine Manager 2016, Virtual Machine Manager 2012 R2 <br/><br/> **Note:** Server Core installations of these operating systems are also supported. | If Virtual Machine Manager is used, Windows Server 2019 hosts should be managed in Virtual Machine Manager 2019. Similarly, Windows Server 2016 hosts should be managed in Virtual Machine Manager 2016. > [!NOTE]
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Configuration server/Replication appliance** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent** | | | | |
+[Rollup 64](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 9.51.6477.1 | 5.1.7802.0 | 9.51.6477.1 | 5.1.7802.0 | 2.0.9249.0
[Rollup 63](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 9.50.6419.1 | 5.1.7626.0 | 9.50.6419.1 | 5.1.7626.0 | 2.0.9249.0 [Rollup 62](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 9.49.6395.1 | 5.1.7418.0 | 9.49.6395.1 | 5.1.7418.0 | 2.0.9248.0 [Rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 9.48.6349.1 | 5.1.7387.0 | 9.48.6349.1 | 5.1.7387.0 | 2.0.9245.0 [Rollup 60](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) | 9.47.6219.1 | 5.1.7127.0 | 9.47.6219.1 | 5.1.7127.0 | 2.0.9241.0
-[Rollup 59](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 9.46.6149.1 | 5.1.7029.0 | 9.46.6149.1 | 5.1.7030.0 | 2.0.9239.0
+ [Learn more](service-updates-how-to.md) about update installation and support.
+## Updates (October 2022)
+
+### Update Rollup 64
+
+[Update rollup 64](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) provides the following updates:
+
+**Update** | **Details**
+ |
+**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
+**Issue fixes/improvements** | A number of fixes and improvements, as detailed in the rollup KB article.
+**Azure VM disaster recovery** | Added support for Ubuntu 20.04 Linux distro.
+**VMware VM/physical disaster recovery to Azure** | Added support for Ubuntu 20.04 Linux distro.<br/><br/> The modernized experience to enable disaster recovery of VMware virtual machines is now generally available. [Learn more](https://azure.microsoft.com/updates/vmware-dr-ga-with-asr).<br/><br/> Protecting physical machines with the modernized experience is now supported.<br/><br/> Protecting machines with private endpoint and managed identity enabled is now supported with the modernized experience.
+ ## Updates (August 2022) ### Update Rollup 63
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
**Supported release** | **Mobility service version** | **Kernel version** | | | |
-14.04 LTS | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39), [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure |
+14.04 LTS | [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39), [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b), [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure |
|||
-16.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39), [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 4.4.0-21-generic to 4.4.0-210-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic, 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-142-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1113-azure </br> 4.15.0-101-generic to 4.15.0-107-generic |
+16.04 LTS |[9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39), [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b), [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 4.4.0-21-generic to 4.4.0-210-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic, 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-142-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1113-azure </br> 4.15.0-101-generic to 4.15.0-107-generic |
|||
+18.04 |[9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a)|4.15.0-1149-azure </br> 4.15.0-1150-azure </br> 4.15.0-1151-azure </br>4.15.0-191-generic </br> 4.15.0-192-generic </br> 4.15.0-193-generic </br> 5.4.0-1089-azure </br> 5.4.0-1090-azure </br> 5.4.0-1091-azure </br> 5.4.0-124-generic </br> 5.4.0-125-generic </br> 5.4.0-126-generic|
18.04 |[9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 4.15.0-1146-azure </br> 4.15.0-189-generic </br> 5.4.0-1086-azure </br> 5.4.0-122-generic </br> 18.04 LTS |[9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.15.0-1139-azure </br> 4.15.0-1142-azure </br> 4.15.0-1145-azure </br> 4.15.0-180-generic </br> 4.15.0-184-generic </br> 4.15.0-187-generic </br> 4.15.0-188-generic </br> 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 18.04 LTS |[9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1009-azure to 4.15.0-1138-azure </br> 4.15.0-101-generic to 4.15.0-177-generic </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.3.0-19-generic to 5.3.0-76-generic </br> 5.4.0-1020-azure to 5.4.0-1078-azure </br> 5.4.0-37-generic to 5.4.0-110-generic | 18.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 4.15.0-1126-azure </br> 4.15.0-1127-azure </br> 4.15.0-1129-azure </br> 4.15.0-162-generic </br> 4.15.0-163-generic </br> 4.15.0-166-generic </br> 5.4.0-1063-azure </br> 5.4.0-1064-azure </br> 5.4.0-1065-azure </br> 5.4.0-90-generic </br> 5.4.0-91-generic </br> 5.4.0-92-generic |
-18.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.15.0-1123-azure </br> 4.15.0-1124-azure </br> 4.15.0-1125-azure</br> 4.15.0-156-generic </br> 4.15.0-158-generic </br> 4.15.0-159-generic </br> 4.15.0-161-generic </br> 5.4.0-1058-azure </br> 5.4.0-1059-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-84-generic </br> 5.4.0-86-generic </br> 5.4.0-87-generic </br> 5.4.0-89-generic |
|||
+20.04 LTS|[9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a)|5.13.0-1009-azure </br> 5.13.0-1012-azure </br> 5.13.0-1013-azure </br> 5.13.0-1014-azure </br> 5.13.0-1017-azure </br> 5.13.0-1021-azure </br> 5.13.0-1022-azure </br> 5.13.0-1023-azure </br>5.13.0-1025-azure </br> 5.13.0-1028-azure </br> 5.13.0-1029-azure </br> 5.13.0-1031-azure </br> 5.13.0-21-generic </br> 5.13.0-23-generic </br> 5.13.0-25-generic </br> 5.13.0-27-generic </br> 5.13.0-28-generic </br> 5.13.0-30-generic </br> 5.13.0-35-generic </br> 5.13.0-37-generic </br> 5.13.0-39-generic </br> 5.13.0-41-generic </br> 5.13.0-44-generic </br> 5.13.0-48-generic </br> 5.13.0-51-generic </br> 5.13.0-52-generic </br> 5.15.0-1007-azure </br> 5.15.0-1008-azure </br> 5.15.0-1013-azure </br> 5.15.0-1014-azure </br> 5.15.0-1017-azure </br> 5.15.0-1019-azure </br> 5.15.0-1020-azure </br> 5.15.0-33-generic </br> 5.15.0-41-generic </br> 5.15.0-43-generic </br> 5.15.0-46-generic </br> 5.15.0-48-generic </br> 5.4.0-1089-azure </br> 5.4.0-1090-azure </br> 5.4.0-1091-azure </br> 5.4.0-124-generic </br> 5.4.0-125-generic </br> 5.4.0-126-generic |
20.04 LTS|[9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic | 20.04 LTS |[9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 20.04 LTS kernels supported in this release | 20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.4.0-26-generic to 5.4.0-110-generic </br> 5.4.0-1010-azure to 5.4.0-1078-azure </br> 5.8.0-1033-azure to 5.8.0-1043-azure </br> 5.8.0-23-generic to 5.8.0-63-generic </br> 5.11.0-22-generic to 5.11.0-46-generic </br> 5.11.0-1007-azure to 5.11.0-1028-azure | 20.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-1063-azure </br> 5.4.0-1064-azure </br> 5.4.0-1065-azure </br> 5.4.0-90-generic </br> 5.4.0-91-generic </br> 5.4.0-92-generic |
-20.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 5.4.0-1058-azure </br> 5.4.0-1059-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-84-generic </br> 5.4.0-86-generic </br> 5.4.0-88-generic </br> 5.4.0-89-generic |
### Debian kernel versions
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
**Supported release** | **Mobility service version** | **Kernel version** | | | |
-Debian 7 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39), [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 |
+Debian 7 | [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39), [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b), [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 |
|||
-Debian 8 |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39), [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.12-amd64 |
+Debian 8 |[9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39), [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b), [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.12-amd64 |
|||
+Debian 9.1 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a)| No new Debian 9.1 kernels supported in this release|
Debian 9.1 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 9.1 kernels supported in this release| Debian 9.1 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.9.0-19-amd64 </br> Debian 9.1 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.9.0-17-amd64 to 4.9.0-19-amd64 </br> 4.19.0-0.bpo.19-cloud-amd64 </br> Debian 9.1 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 4.9.0-17-amd64 </br>
-Debian 9.1 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 </br> 4.19.0-0.bpo.18-amd64 </br> 4.19.0-0.bpo.18-cloud-amd64
|||
+Debian 10 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 5.10.0-0.deb10.16-amd64 </br> 5.10.0-0.deb10.16-cloud-amd64 </br> 5.10.0-0.deb10.17-amd64 </br> 5.10.0-0.deb10.17-cloud-amd64 |
Debian 10 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 10 kernels supported in this release| Debian 10 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.19.0-21-amd64 </br> 4.19.0-21-cloud-amd64 </br> 5.10.0-0.bpo.15-amd64 </br> 5.10.0-0.bpo.15-cloud-amd64 Debian 10 | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.19.0-19-cloud-amd64, 4.19.0-20-cloud-amd64 </br> 4.19.0-19-amd64, 4.19.0-20-amd64 </br> 5.8.0-0.bpo.2-amd64, 5.8.0-0.bpo.2-cloud-amd64, 5.9.0-0.bpo.2-amd64, 5.9.0-0.bpo.2-cloud-amd64, 5.9.0-0.bpo.5-amd64, 5.9.0-0.bpo.5-cloud-amd64, 5.10.0-0.bpo.7-amd64, 5.10.0-0.bpo.7-cloud-amd64, 5.10.0-0.bpo.9-amd64, 5.10.0-0.bpo.9-cloud-amd64, 5.10.0-0.bpo.11-amd64, 5.10.0-0.bpo.11-cloud-amd64, 5.10.0-0.bpo.12-amd64, 5.10.0-0.bpo.12-cloud-amd64 | Debian 10 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new kernels supported.
-Debian 10 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.9.0-1-amd64 to 4.9.0-15-amd64 <br/> 4.19.0-18-amd64 </br> 4.19.0-18-cloud-amd64
### SUSE Linux Enterprise Server 12 supported kernel versions **Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.103-azure:5 |
-SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.100-azure:5 |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | All [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.106-azure:5 </br> 4.12.14-16.109-azure:5 |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | All [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.103-azure:5 |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | All [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.100-azure:5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.85-azure:5 </br> 4.12.14-16.88-azure:5 </br> 4.12.14-16.94-azure:5 </br> 4.12.14-16.97-azure:5 | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.80-azure </br> 4.12.14-122.103-default </br> 4.12.14-122.98-default5 |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.65-azure </br> 4.12.14-16.68-azure </br> 4.12.14-16.73-azure </br> 4.12.14-16.76-azure </br> 4.12.14-122.88-default </br> 4.12.14-122.91-default |
### SUSE Linux Enterprise Server 15 supported kernel versions **Release** | **Mobility service version** | **Kernel version** | | | |
+SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.3.18-150300.38.75-azure:3 |
SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.3.18-150300.38.69-azure:3 | SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.3.18-150300.38.59-azure:3 </br> 5.3.18-150300.38.62-azure:3 | SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.3.18-150300.38.37-azure:3 </br> 5.3.18-150300.38.40-azure:3 </br> 5.3.18-150300.38.47-azure:3 </br> 5.3.18-150300.38.50-azure:3 </br> 5.3.18-150300.38.53-azure:3 </br> 5.3.18-150300.38.56-azure:3 </br> 5.3.18-150300.38.59-azure:3 </br> 5.3.18-150300.38.62-azure:3 </br> 5.3.18-36-azure:3 </br> 5.3.18-38.11-azure:3 </br> 5.3.18-38.14-azure:3 </br> 5.3.18-38.17-azure:3 </br> 5.3.18-38.22-azure:3 </br> 5.3.18-38.25-azure:3 </br> 5.3.18-38.28-azure:3 </br> 5.3.18-38.31-azure:3 </br> 5.3.18-38.34-azure:3 </br> 5.3.18-38.3-azure:3 </br> 5.3.18-38.8-azure:3 </br> | SUSE Linux Enterprise Server 15, SP1, SP2 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-18.72-azure: </br> 5.3.18-18.75-azure: </br> 5.3.18-24.93-default </br> 5.3.18-24.96-default </br> 5.3.18-36-azure </br> 5.3.18-38.11-azure </br> 5.3.18-38.14-azure </br> 5.3.18-38.17-azure </br> 5.3.18-38.22-azure </br> 5.3.18-38.25-azure </br> 5.3.18-38.28-azure </br> 5.3.18-38.3-azure </br> 5.3.18-38.31-azure </br> 5.3.18-38.8-azure </br> 5.3.18-57-default </br> 5.3.18-59.10-default </br> 5.3.18-59.13-default </br> 5.3.18-59.16-default </br> 5.3.18-59.19-default </br> 5.3.18-59.24-default </br> 5.3.18-59.27-default </br> 5.3.18-59.30-default </br> 5.3.18-59.34-default </br> 5.3.18-59.37-default </br> 5.3.18-59.5-default |
-SUSE Linux Enterprise Server 15, SP1, SP2 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.66-azure </br> 5.3.18-18.69-azure </br> 5.3.18-24.83-default </br> 5.3.18-24.86-default |
- ## Linux file systems/guest storage
spatial-anchors Tutorial Use Cosmos Db To Store Anchors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/tutorials/tutorial-use-cosmos-db-to-store-anchors.md
Last updated 11/20/2020 + # Tutorial: Sharing Azure Spatial Anchors across sessions and devices with an Azure Cosmos DB back end
It's worth noting that, though you'll be using Unity and Azure Cosmos DB in this
## Create a database account
-Add an Azure Cosmos Database to the resource group you created earlier.
+Add an Azure Cosmos DB database to the resource group you created earlier.
[!INCLUDE [cosmos-db-create-dbaccount-table](../../cosmos-db/includes/cosmos-db-create-dbaccount-table.md)]
spring-apps Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/faq.md
Last updated 09/08/2020 -+ zone_pivot_groups: programming-languages-spring-apps
Which one should I use and what are the limits within each tier?
### What's the difference between Service Binding and Service Connector?
-We are not actively developing additional capabilities for Service Binding in favor of the new Azure-wise solution named [Service Connector](../service-connector/overview.md). On the one hand, the new solution brings you consistent integration experience across App hosting services on Azure like App Service. On the other hand, it covers your needs better by starting with supporting 10+ most used target Azure services including MySQL, SQL DB, Cosmos DB, Postgres DB, Redis, Storage and more. Service Connector is currently in Public Preview, we invite you to try out the new experience.
+We're no longer actively developing additional capabilities for Service Binding, in favor of the new Azure-wide solution named [Service Connector](../service-connector/overview.md). On the one hand, the new solution brings you a consistent integration experience across app hosting services on Azure, such as App Service. On the other hand, it covers your needs better by starting with support for the 10+ most used target Azure services, including MySQL, SQL DB, Azure Cosmos DB, Postgres DB, Redis, Storage, and more. Service Connector is currently in public preview, and we invite you to try out the new experience.
### How can I provide feedback and report issues?
spring-apps How To Bind Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-cosmos.md
Last updated 10/06/2019 -+ # Bind an Azure Cosmos DB database to your application in Azure Spring Apps
If you don't have a deployed Azure Spring Apps instance, follow the steps in the
1. Add one of the following dependencies to your application's pom.xml file. Choose the dependency that is appropriate for your API type.
- * API type: Core (SQL)
+ * API type: NoSQL
```xml <dependency>
spring-apps How To Create User Defined Route Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-create-user-defined-route-instance.md
Title: Control egress traffic for an Azure Spring Apps instance
-description: Learn how to control egress traffic for an Azure Spring Apps instance
+description: Learn how to control egress traffic for an Azure Spring Apps instance.
spring-apps How To Enterprise Application Configuration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-application-configuration-service.md
Title: Use Application Configuration Service for Tanzu with Azure Spring Apps Enterprise Tier
-description: How to use Application Configuration Service for Tanzu with Azure Spring Apps Enterprise Tier.
+description: Learn how to use Application Configuration Service for Tanzu with Azure Spring Apps Enterprise Tier.
Last updated 02/09/2022-+ # Use Application Configuration Service for Tanzu
This article shows you how to use Application Configuration Service for VMware Tanzu® with Azure Spring Apps Enterprise Tier.
-[Application Configuration Service for Tanzu](https://docs.pivotal.io/tcs-k8s/0-1/) is one of the commercial VMware Tanzu components. It enables the management of Kubernetes-native ConfigMap resources that are populated from properties defined in one or more Git repositories.
+[Application Configuration Service for VMware Tanzu](https://docs.pivotal.io/tcs-k8s/0-1/) is one of the commercial VMware Tanzu components. It enables the management of Kubernetes-native `ConfigMap` resources that are populated from properties defined in one or more Git repositories.
-With Application Configuration Service for Tanzu, you have a central place to manage external properties for applications across all environments.
+Application Configuration Service for Tanzu gives you a central place to manage external properties for applications across all environments.
## Prerequisites - An already provisioned Azure Spring Apps Enterprise tier instance with Application Configuration Service for Tanzu enabled. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md). > [!NOTE]
- > To use Application Configuration Service for Tanzu, you must enable it when you provision your Azure Spring Apps service instance. You cannot enable it after provisioning at this time.
+ > To use Application Configuration Service for Tanzu, you must enable it when you provision your Azure Spring Apps service instance. You can't enable it after you provision the instance.
## Manage Application Configuration Service for Tanzu settings
Application Configuration Service for Tanzu supports Azure DevOps, GitHub, GitLa
To manage the service settings, open the **Settings** section and add a new entry under the **Repositories** section.
-The properties for each entry are described in the following table.
+The following table describes properties for each entry.
-| Property | Required? | Description |
-|-|--|-|
-| Name | Yes | A unique name to label each Git repository. |
-| Patterns | Yes | Patterns to search in Git repositories. For each pattern, use a format like *{application}* or *{application}/{profile}* instead of *{application}-{profile}.yml*, and separate the patterns with commas. For more information, see the [Pattern](./how-to-enterprise-application-configuration-service.md#pattern) section. |
-| URI | Yes | A Git URI (for example, `https://github.com/Azure-Samples/piggymetrics-config` or `git@github.com:Azure-Samples/piggymetrics-config`) |
-| Label | Yes | The branch name to search in the Git repository. |
-| Search path | No | Optional search paths, separated by commas, for searching subdirectories of the Git repository. |
+| Property | Required? | Description |
+|-|--|-|
+| `Name` | Yes | A unique name to label each Git repository. |
+| `Patterns` | Yes | Patterns to search in Git repositories. For each pattern, use a format such as *{application}* or *{application}/{profile}* rather than *{application}-{profile}.yml*. Separate the patterns with commas. For more information, see the [Pattern](./how-to-enterprise-application-configuration-service.md#pattern) section of this article. |
+| `URI` | Yes | A Git URI (for example, `https://github.com/Azure-Samples/piggymetrics-config` or `git@github.com:Azure-Samples/piggymetrics-config`) |
+| `Label` | Yes | The branch name to search in the Git repository. |
+| `Search path` | No | Optional search paths, separated by commas, for searching subdirectories of the Git repository. |
### Pattern
-Configuration will be pulled from Git backends using what is defined in a pattern. A pattern is a combination of *{application}/{profile}* as described in the following list.
+Configuration is pulled from Git backends using what you define in a pattern. A pattern is a combination of *{application}/{profile}* as described in the following guidelines.
-- *{application}* - The name of an application for which the configuration is being retrieved. The value `application` is considered the default application and includes configuration shared across multiple applications. Any other value specifies a specific application and will include properties for both the specified application and shared properties for the default application.-- *{profile}* - Optional. The name of a profile for which properties may be retrieved. An empty value, or the value `default`, includes properties that are shared across any and all profiles. Non-default values include properties for the specified profile and properties for the default profile.
+- *{application}* - The name of an application whose configuration you're retrieving. The value `application` is considered the default application and includes configuration information shared across multiple applications. Any other value refers to a specific application and includes properties for both the specific application and shared properties for the default application.
+- *{profile}* - Optional. The name of a profile whose properties you may be retrieving. An empty value, or the value `default`, includes properties that are shared across profiles. Non-default values include properties for the specified profile and properties for the default profile.
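
As a point of reference for the *{profile}* part of a pattern, the following minimal sketch shows how a Spring Boot application activates a profile; a pattern such as `payment/prod` would then select the properties defined in the Git repository for the `payment` application under the `prod` profile. The names `payment` and `prod` are illustrative assumptions, not values required by Application Configuration Service for Tanzu.

```java
// Minimal sketch (hypothetical app and profile names) of a Spring Boot application
// that runs with the "prod" profile active. Properties selected by a pattern such as
// payment/prod are the ones the app would consume for that profile.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class PaymentApplication {

    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(PaymentApplication.class);
        // Equivalent to setting spring.profiles.active=prod in configuration
        // or passing --spring.profiles.active=prod on the command line.
        app.setAdditionalProfiles("prod");
        app.run(args);
    }
}
```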
### Authentication The following image shows the three types of repository authentication supported by Application Configuration Service for Tanzu. - Public repository.
- You don't need extra Authentication configuration when using a public repository. Just select **Public** in the **Authentication** form.
+ You don't need extra authentication configuration when you use a public repository. Select **Public** in the **Authentication** form.
- Private repository with basic authentication.
- The following table shows all the configurable properties used to set up a private Git repository with basic authentication.
+ The following table shows the configurable properties you can use to set up a private Git repository with basic authentication.
- | Property | Required? | Description |
- |-|--||
- | username | Yes | The username used to access the repository. |
- | password | Yes | The password used to access the repository. |
+ | Property | Required? | Description |
+ ||--||
+ | `username` | Yes | The username used to access the repository. |
+ | `password` | Yes | The password used to access the repository. |
- Private repository with SSH authentication.
- The following table shows all configurable properties used to set up a private Git repository with SSH.
+ The following table shows the configurable properties you can use to set up a private Git repository with SSH.
- | Property | Required? | Description |
- |--|--|-|
- | Private key | Yes | The private key that identifies the Git user. Passphrase-encrypted private keys aren't supported. |
- | Host key | No | The host key of the Git server. If you've connected to the server via Git on the command line, the host key is in your *.ssh/known_hosts* file. Don't include the algorithm prefix, because it's specified in `Host key algorithm`. |
- | Host key algorithm | No | The algorithm for `hostKey`: one of `ssh-dss`, `ssh-rsa`, `ecdsa-sha2-nistp256`, `ecdsa-sha2-nistp384`, and `ecdsa-sha2-nistp521`. (Required if supplying `Host key`). |
- | Strict host key checking | No | Optional value that indicates whether the backend should be ignored if it encounters an error when using the provided `Host key`. Valid values are `true` and `false`. The default value is `true`. |
+ | Property | Required? | Description |
+ |-|--|-|
+ | `Private key` | Yes | The private key that identifies the Git user. Passphrase-encrypted private keys aren't supported. |
+ | `Host key` | No | The host key of the Git server. If you've connected to the server via Git on the command line, the host key is in your *.ssh/known_hosts* file. Don't include the algorithm prefix, because it's specified in `Host key algorithm`. |
+ | `Host key algorithm` | No | The algorithm for `hostKey`: one of `ssh-dss`, `ssh-rsa`, `ecdsa-sha2-nistp256`, `ecdsa-sha2-nistp384`, and `ecdsa-sha2-nistp521`. (Required if supplying `Host key`). |
+ | `Strict host key checking` | No | Optional value that indicates whether the backend should be ignored if it encounters an error when using the provided `Host key`. Valid values are `true` and `false`. The default value is `true`. |
> [!NOTE] > Application Configuration Service for Tanzu doesn't support SHA-2 signatures yet. We're actively working to support them in a future release. Until then, use SHA-1 signatures or basic authentication instead.
Use the following steps to refresh your application configuration after you upda
A Spring application holds the properties as the beans of the Spring Application Context via the Environment interface. The following list shows several ways to load the new configurations: -- Restart the application. After restarting, the application will always load the new configuration.
+- Restart the application. Restarting the application always loads the new configuration.
-- Call the */actuator/refresh* endpoint exposed on the config client via the Spring Actuator.
+- Call the `/actuator/refresh` endpoint exposed on the config client via the Spring Actuator.
To use this method, add the following dependency to your configuration client's *pom.xml* file.
A Spring application holds the properties as the beans of the Spring Application
management.endpoints.web.exposure.include=refresh, bus-refresh, beans, env ```
- After reloading the property sources by calling the */actuator/refresh* endpoint, the attributes bound with `@Value` in the beans having the annotation `@RefreshScope` are refreshed.
+ After you reload the property sources by calling the `/actuator/refresh` endpoint, the attributes bound with `@Value` in beans annotated with `@RefreshScope` are refreshed.
``` java @Service
A Spring application holds the properties as the beans of the Spring Application
} ```
- Next, use curl with the application endpoint to refresh the new configuration.
+ Use curl with the application endpoint to refresh the new configuration.
``` bash curl -X POST http://{app-endpoint}/actuator/refresh
A Spring application holds the properties as the beans of the Spring Application
## Configure Application Configuration Service for Tanzu settings using the portal
-You can configure Application Configuration Service for Tanzu using the portal by following these steps:
+Use the following steps to configure Application Configuration Service for Tanzu using the portal.
1. Select **Application Configuration Service**. 1. Select **Overview** to view the running state and resources allocated to Application Configuration Service for Tanzu.
- ![Application Configuration Service Overview screen](./media/enterprise/getting-started-enterprise/config-service-overview.png)
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service/config-service-overview.png" alt-text="Screenshot of the Application Configuration Service page showing the Overview tab." lightbox="media/how-to-enterprise-application-configuration-service/config-service-overview.png":::
1. Select **Settings** and add a new entry in the **Repositories** section with the Git backend information. 1. Select **Validate** to validate access to the target URI. After validation completes successfully, select **Apply** to update the configuration settings.
- ![Application Configuration Service Settings overview](./media/enterprise/getting-started-enterprise/config-service-settings.png)
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service/config-service-settings.png" alt-text="Screenshot of the Application Configuration Service page showing the Settings tab." lightbox="media/how-to-enterprise-application-configuration-service/config-service-settings.png":::
## Configure Application Configuration Service for Tanzu settings using the CLI
-You can configure Application Configuration Service for Tanzu using the CLI, by following these steps:
+Use the following steps to configure Application Configuration Service for Tanzu using the CLI.
```azurecli az spring application-configuration-service git repo add \
az spring application-configuration-service git repo add \
## Use Application Configuration Service for Tanzu with applications using the portal
-When you use Application Configuration Service for Tanzu with a Git back end, keep the following items in mind.
-
-To use the centralized configurations, you must bind the app to Application Configuration Service for Tanzu. After binding the app, you'll need to configure which pattern to be used by the app by following these steps:
+When you use Application Configuration Service for Tanzu with a Git back end and want to use its centralized configuration, you must bind the app to Application Configuration Service for Tanzu. After you bind the app, use the following steps to configure the pattern the app uses.
1. Open the **App binding** tab. 1. Select **Bind app** and choose one app in the dropdown. Select **Apply** to bind.
- :::image type="content" source="media/enterprise/how-to-enterprise-application-configuration-service/config-service-app-bind-dropdown.png" alt-text="Screenshot of where to select the application to bind.":::
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service/config-service-app-bind-dropdown.png" alt-text="Screenshot of the Application Configuration Service page showing the App binding tab." lightbox="media/how-to-enterprise-application-configuration-service/config-service-app-bind-dropdown.png":::
> [!NOTE] > When you change the bind/unbind status, you must restart or redeploy the app for the binding to take effect.
-1. Select **Apps**, then select the [pattern(s)](./how-to-enterprise-application-configuration-service.md#pattern) to be used by the apps.
+1. Select **Apps**, and then select the [pattern(s)](./how-to-enterprise-application-configuration-service.md#pattern) to be used by the apps.
- a. In the left navigation menu, select **Apps** to view the list all the apps.
+ 1. In the left navigation menu, select **Apps** to view a list of all the apps.
- b. Select the target app to configure patterns for from the `name` column.
+ 1. In the `name` column, select the target app for which you want to configure patterns.
- c. In the left navigation pane, select **Configuration**, then select **General settings**.
+ 1. In the left navigation pane, select **Configuration**, and then select **General settings**.
- d. In the **Config file patterns** dropdown, choose one or more patterns from the list.
+ 1. In the **Config file patterns** dropdown, choose one or more patterns from the list.
- :::image type="content" source="media/enterprise/how-to-enterprise-application-configuration-service/config-service-pattern.png" alt-text="Screenshot of the pattern selection screen.":::
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service/config-service-pattern.png" alt-text="Screenshot of the Application Configuration Service page showing the General settings tab." lightbox="media/how-to-enterprise-application-configuration-service/config-service-pattern.png":::
- e. Select **Save**
+ 1. Select **Save**.
## Use Application Configuration Service for Tanzu with applications using the CLI
-You can use Application Configuration Service for Tanzu with applications, by using this command:
+To use Application Configuration Service for Tanzu with your applications, run the following command.
```azurecli az spring application-configuration-service bind --app <app-name>
spring-apps How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-managed-identities.md
Last updated 04/15/2022-+ zone_pivot_groups: spring-apps-tier-selection
The following services do not currently support managed identity-based access:
- Azure Flexible MySQL - Azure Flexible PostgreSQL - Azure Database for MariaDB-- Azure Cosmos DB - Mongo DB-- Azure Cosmos DB - Cassandra
+- Azure Cosmos DB for MongoDB
+- Azure Cosmos DB for Apache Cassandra
- Azure Databricks
spring-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Spring Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Spring Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
static-web-apps Add Mongoose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/add-mongoose.md
Title: "Tutorial: Access data in Cosmos DB using Mongoose with Azure Static Web Apps"
-description: Learn to access data in Cosmos DB using Mongoose from an Azure Static Web Apps API function.
+ Title: "Tutorial: Access data in Azure Cosmos DB using Mongoose with Azure Static Web Apps"
+description: Learn to access data in Azure Cosmos DB using Mongoose from an Azure Static Web Apps API function.
+ Last updated 01/25/2021
-# Tutorial: Access data in Cosmos DB using Mongoose with Azure Static Web Apps
+# Tutorial: Access data in Azure Cosmos DB using Mongoose with Azure Static Web Apps
-[Mongoose](https://mongoosejs.com/) is the most popular ODM (Object Document Mapping) client for Node.js. Allowing you to design a data structure and enforce validation, Mongoose provides all the tooling necessary to interact with databases that support the MongoDB API. [Cosmos DB](../cosmos-db/mongodb-introduction.md) supports the necessary MongoDB APIs and is available as a back-end server option on Azure.
+[Mongoose](https://mongoosejs.com/) is the most popular ODM (Object Document Mapping) client for Node.js. Mongoose lets you design a data structure and enforce validation, and it provides all the tooling necessary to interact with databases that support the MongoDB API. [Azure Cosmos DB for MongoDB](../cosmos-db/mongodb-introduction.md) supports the necessary MongoDB APIs and is available as a back-end server option on Azure.
In this tutorial, you learn how to: > [!div class="checklist"]
-> - Create a Cosmos DB serverless account
+> - Create an Azure Cosmos DB serverless account
> - Create Azure Static Web Apps > - Update application settings to store the connection string
If you donΓÇÖt have an Azure subscription, create a [free trial account](https:/
Sign in to the [Azure portal](https://portal.azure.com).
-## Create a Cosmos DB serverless database
+## Create an Azure Cosmos DB serverless database
-Begin by creating a [Cosmos DB serverless](../cosmos-db/serverless.md) account. By using a serverless account, you only pay for the resources as they are used and avoid needing to create a full infrastructure.
+Begin by creating an [Azure Cosmos DB serverless](../cosmos-db/serverless.md) account. With a serverless account, you pay only for the resources as they're used and avoid the need to create a full infrastructure.
-1. Navigate to [https://portal.azure.com](https://portal.azure.com)
+1. Navigate to the [Azure portal](https://portal.azure.com)
2. Select **Create a resource** 3. Enter **Azure Cosmos DB** in the search box 4. Select **Azure Cosmos DB** 5. Select **Create**
-6. If prompted, under **Azure Cosmos DB API for MongoDB** select **Create**
+6. If prompted, under **Azure Cosmos DB for MongoDB** select **Create**
7. Configure your Azure Cosmos DB Account with the following information - Subscription: Choose the subscription you wish to use - Resource: Select **Create new**, and set the name to **aswa-mongoose**
Begin by creating a [Cosmos DB serverless](../cosmos-db/serverless.md) account.
- Location: **West US 2** - Capacity mode: **Serverless (preview)** - Version: **4.0** 8. Select **Review + create** 9. Select **Create**
In order to allow the web app to communicate with the database, the database con
1. Select **Home** in the upper left corner of the Azure portal (or navigate back to [https://portal.azure.com](https://portal.azure.com)) 2. Select **Resource groups** 3. Select **aswa-mongoose**
-4. Select the name of your database account - it will have a type of **Azure Cosmos DB API for Mongo DB**
+4. Select the name of your database account - it will have a type of **Azure Cosmos DB for MongoDB**
5. Under **Settings** select **Connection String** 6. Copy the connection string listed under **PRIMARY CONNECTION STRING** 7. In the breadcrumbs, select **aswa-mongoose**
static-web-apps Deploy Nextjs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nextjs.md
Title: "Tutorial: Deploy static-rendered Next.js websites on Azure Static Web Apps"
+ Title: "Deploy static-rendered Next.js websites on Azure Static Web Apps"
description: "Generate and deploy Next.js dynamic sites with Azure Static Web Apps." Previously updated : 03/26/2022 Last updated : 10/11/2022 -+ - # Deploy static-rendered Next.js websites on Azure Static Web Apps
-In this tutorial, you learn to deploy a [Next.js](https://nextjs.org) generated static website to [Azure Static Web Apps](overview.md). For more information about Next.js specifics, see the [starter template's readme](https://github.com/staticwebdev/nextjs-starter#readme).
+In this tutorial, learn to deploy a [Next.js](https://nextjs.org) generated static website to [Azure Static Web Apps](overview.md). For more information about Next.js specifics, see the [starter template readme](https://github.com/staticwebdev/nextjs-starter#readme).
## Prerequisites
In this tutorial, you learn to deploy a [Next.js](https://nextjs.org) generated
- A GitHub account. [Create an account for free](https://github.com/join). - [Node.js](https://nodejs.org) installed.
-## Set up a Next.js app
+## 1. Set up a Next.js app
-Rather than using the Next.js CLI to create your app, you can use a starter repository. The starter repository contains an existing Next.js application that supports dynamic routes.
+Rather than using the Next.js CLI to create your app, you can use a starter repository. The starter repository contains an existing Next.js app that supports dynamic routes.
To begin, create a new repository under your GitHub account from a template repository.
-1. Navigate to [https://github.com/staticwebdev/nextjs-starter/generate](https://github.com/login?return_to=/staticwebdev/nextjs-starter/generate)
-1. Name the repository **nextjs-starter**
-1. Next, clone the new repo to your machine. Make sure to replace `<YOUR_GITHUB_ACCOUNT_NAME>` with your account name.
+1. Go to [https://github.com/staticwebdev/nextjs-starter/generate](https://github.com/login?return_to=/staticwebdev/nextjs-starter/generate).
+2. Name the repository **nextjs-starter**.
+3. Clone the new repo to your machine. Make sure to replace `<YOUR_GITHUB_ACCOUNT_NAME>` with your account name.
```bash git clone http://github.com/<YOUR_GITHUB_ACCOUNT_NAME>/nextjs-starter ```
-1. Navigate to the newly cloned Next.js app:
+4. Go to the newly cloned Next.js app.
```bash cd nextjs-starter ```
-1. Install dependencies:
+5. Install dependencies.
```bash npm install ```
-1. Start Next.js app in development:
+6. Start the Next.js app in development mode.
```bash npm run dev ```
-Navigate to `http://localhost:3000` to open the app, where you should see the following website open in your preferred browser:
+7. Go to `http://localhost:3000` in your preferred browser to open the app. You should see the following website.
:::image type="content" source="media/deploy-nextjs/start-nextjs-app.png" alt-text="Start Next.js app":::
When you select a framework or library, you see a details page about the selecte
:::image type="content" source="media/deploy-nextjs/start-nextjs-details.png" alt-text="Details page":::
-## Deploy your static website
+## 2. Create a static app
The following steps show how to link your app to Azure Static Web Apps. Once in Azure, you can deploy the application to a production environment.
-### Create a static app
-
-1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Go to the [Azure portal](https://portal.azure.com).
1. Select **Create a Resource**. 1. Search for **Static Web Apps**.
-1. Select **Static Web Apps**.
+1. Select **Static Web App**.
1. Select **Create**. 1. On the _Basics_ tab, enter the following values.
The following steps show how to link your app to Azure Static Web Apps. Once in
| _Region for Azure Functions API and staging environments_ | Select a region closest to you. | | _Source_ | **GitHub** |
-1. Select **Sign in with GitHub** and authenticate with GitHub.
+1. Select **Sign in with GitHub** and authenticate with GitHub, if prompted.
1. Enter the following GitHub values.
The following steps show how to link your app to Azure Static Web Apps. Once in
| _Api location_ | Leave this box empty. | | _Output location_ | Enter **out** in the box. |
-### Review and create
+## 3. Review and create
1. Select the **Review + Create** button to verify the details are all correct.
The following steps show how to link your app to Azure Static Web Apps. Once in
1. Once the deployment completes, select **Go to resource**.
-1. On the _Overview_ window, select the *URL* link to open your deployed application.
+2. On the _Overview_ window, select the *URL* link to open your deployed app.
If the website doesn't load immediately, then the build is still running. Once the workflow is complete, you can refresh the browser to view your web app.
-To check the status of the Actions workflow, navigate to the Actions dashboard for your repository:
+To check the status of the Actions workflow, go to the *Actions* dashboard for your repository:
```url https://github.com/<YOUR_GITHUB_USERNAME>/nextjs-starter/actions
https://github.com/<YOUR_GITHUB_USERNAME>/nextjs-starter/actions
Now any changes made to the `main` branch start a new build and deployment of your website.
-### Sync changes
+## 4. Sync changes
When you created the app, Azure Static Web Apps created a GitHub Actions file in your repository. Synchronize with the server by pulling down the latest to your local repository.
Return to the terminal and run the following command `git pull origin main`.
## Clean up resources
-If you're not going to continue to use this application, you can delete the Azure Static Web Apps instance through the following steps:
+If you're not going to continue to use this app, you can delete the Azure Static Web Apps instance through the following steps.
1. Open the [Azure portal](https://portal.azure.com).
-1. Search for **my-nextjs-group** from the top search bar.
-1. Select on the group name.
-1. Select on the **Delete** button.
-1. Select **Yes** to confirm the delete action.
+2. Search for **my-nextjs-group** from the top search bar.
+3. Select the group name.
+4. Select the **Delete** button.
+5. Select **Yes** to confirm the delete action.
+
+## Next steps
> [!div class="nextstepaction"] > [Set up a custom domain](custom-domain.md)+
+## Related articles
+
+- [Set up authentication and authorization](authentication-authorization.md)
+- [Configure app settings](application-settings.md)
+- [Enable monitoring](monitor.md)
+- [Azure CLI](https://github.com/Azure/static-web-apps-cli)
static-web-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/quotas.md
The following quotas exist for Azure Static Web Apps.
| Allowed IP ranges | Unavailable | 25 | | Authorization (built-in roles) | Unlimited end-users that may authenticate with built-in `authenticated` role | Unlimited end-users that may authenticate with built-in `authenticated` role | | Authorization (custom roles) | Maximum of 25 end-users that may belong to custom roles via [invitations](authentication-authorization.md?tabs=invitations#role-management) | Maximum of 25 end-users that may belong to custom roles via [invitations](authentication-authorization.md?tabs=invitations#role-management), or unlimited end-users that may be assigned custom roles via [serverless function](authentication-authorization.md?tabs=function#role-management) |
+| Request Size Limit | 30 MB | 30 MB |
## GitHub storage
storage-mover Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/troubleshooting.md
+
+ Title: Create, retrieve, and view the support bundle from an Azure Storage Mover agent
+description: Learn how to create, retrieve, and view the support bundle from the Azure Storage Mover Agent.
++++ Last updated : 10/06/2022+++
+<!--
+!########################################################
+STATUS: IN REVIEW
+
+CONTENT: final
+
+REVIEW Stephen/Fabian: not reviewed
+REVIEW Engineering: not reviewed
+EDIT PASS: not started
+
+!########################################################
+-->
+
+# Create, retrieve, and view the Azure Storage Mover support bundle
+
+Your organization's migration project uses Azure Storage Mover to do the bulk of the migration-specific work. An unexpected issue in one of its components can bring the migration to a standstill. Storage Mover agents can generate a support bundle to help resolve such issues.
+
+This article walks you through creating the support bundle on the agent, retrieving the compressed log bundle, and accessing the logs it contains. It assumes that you're using the virtual machine (VM) host operating system (OS) and that the host can connect to the guest VM.
+
+You'll need a secure FTP client on the host if you want to transfer the bundle. A secure FTP client is installed on most typical Windows instances.
+
+You'll also need a Zstandard (ZSTD) archive tool such as WinRAR to extract the compressed logs if you want to review them yourself.
+
+## About the agent support bundle
+
+A support bundle is a set of logs that can help determine the underlying cause of an error or other behavior that can't be immediately explained. The bundle can vary widely in size and is compressed with a Zstandard archive tool.
+
+After the logs within the bundle are extracted, they can be used to locate and diagnose issues that have occurred during the migration.
+
+Extracting the logs from the Zstandard-compressed tar file creates the following file structure (an example extraction command follows the list):
+
+- ***misc***
+  - df.txt - Filesystem usage
+  - dmesg.txt - Kernel messages
+  - files.txt - Directory listings
+  - ifconfig.txt - Network interface settings
+  - meminfo.txt - Memory usage
+  - netstat.txt - Network connections
+  - top.txt - Process memory and CPU usage
+- ***root***
+  - **xdmdata**
+    - archive - Archived job logs
+    - azcopy - AzCopy logs
+    - kv - Agent persisted data
+    - xdmsh - Restricted shell logs
+- ***run***
+  - **xdatamoved**
+    - datadir - Location of data directory
+    - kv - Agent persisted data
+    - pid - Agent process ID
+    - watchdog
+- ***var***
+  - **log** - Various agent and system logs
+    - xdatamoved.err - Agent error log
+    - xdatamoved.log - Agent log
+    - xdatamoved.warn - Agent warning log
+    - xdmreg.log - Registration service log
+
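As an example only, the following commands show one way to extract a bundle on a Linux host. The bundle filename is a placeholder, and the sketch assumes that the `zstd` and `tar` utilities are installed; on Windows, an archive tool such as WinRAR can open the bundle instead.

```bash
# Placeholder filename; use the name reported by the agent.
BUNDLE=supportbundle-example.tar.zst

# Decompress the Zstandard archive, then unpack the tar file it contains.
zstd -d "$BUNDLE" -o supportbundle-example.tar
mkdir -p ./supportbundle
tar -xf supportbundle-example.tar -C ./supportbundle

# Review the main agent log.
less ./supportbundle/var/log/xdatamoved.log
```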
+## Generate the agent support bundle
+
+The first step in identifying the root cause of the error is to collect the support bundle from the agent. To retrieve the bundle, complete the following steps.
+
+1. Connect to the agent by using the administrative credentials. The default password for agents is `admin`, though you'll need to supply the updated password if it was changed. In the example provided, the agent uses the default password.
+1. From the root menu, choose option `6`, the **Collect support bundle** command, to generate the bundle with a unique filename. The support bundle is created and stored in a share, locally on the agent. A confirmation message that contains the name of the support bundle is displayed, along with the commands you need to retrieve the bundle, as shown in the example. Copy these commands; you'll use them in the [Retrieve the agent support bundle](#retrieve-the-agent-support-bundle) section.
+
+ :::image type="content" source="media/troubleshooting/bundle-collect-sml.png" alt-text="Screen capture of the agent menu showing the results of the Collect Support Bundle command." lightbox="media/troubleshooting/bundle-collect-lrg.png":::
+
+## Retrieve the agent support bundle
+
+From the VM's host machine, enter the commands provided by the agent to fetch a copy of the support bundle. You might be prompted to trust the host and be presented with the ECDSA (Elliptic Curve Digital Signature Algorithm) key during the initial connection to the VM. The commands are case-sensitive, and the flag provided is an uppercase `P`.
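The agent displays the exact commands to run. As a rough sketch only, retrieving the bundle over secure copy might look like the following, where the address, port, and filename are placeholders to be replaced with the values the agent reports.

```bash
# Placeholders only; use the exact command, address, port, and filename shown by the agent.
scp -P 22 admin@<agent-ip-address>:<support-bundle-filename> .
```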
++
+## Next steps
+
+You may find information in the following articles helpful:
+
+- [Release notes](release-notes.md)
+- [Resource hierarchy](resource-hierarchy.md)
storage Blobfuse2 Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-configuration.md
Title: How to configure Blobfuse2 (preview) | Microsoft Docs
+ Title: Configure settings for BlobFuse2 Preview
-description: How to configure Blobfuse2 (preview).
+description: Learn about your options for setting and changing configuration settings for BlobFuse2 Preview.
++ Last updated 09/29/2022--
-# How to configure Blobfuse2 (preview)
+# Configure settings for BlobFuse2 Preview
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
-
-BlobFuse2 uses a variety of configuration settings to control its behaviors, including:
+You can use configuration settings to manage BlobFuse2 Preview in your deployment. Through configuration settings, you can set these aspects of how BlobFuse2 works in your environment:
- Access to a storage blob - Logging
BlobFuse2 uses a variety of configuration settings to control its behaviors, inc
- Caching behavior - Permissions
-For a complete list of settings and their descriptions, see [the base configuration file on GitHub](https://github.com/Azure/azure-storage-fuse/blob/main/setup/baseConfig.yaml).
+For a list of all BlobFuse2 settings and their descriptions, see the [base configuration file on GitHub](https://github.com/Azure/azure-storage-fuse/blob/main/setup/baseConfig.yaml).
-There are 3 ways of managing configuration settings for BlobFuse2 (in order of precedence):
+> [!IMPORTANT]
+> BlobFuse2 is the next generation of BlobFuse and currently is in preview. The preview version is provided without a service-level agreement. We recommend that you don't use the preview version for production workloads. In BlobFuse2 Preview, some features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> To use BlobFuse in a production environment, use the BlobFuse v1 general availability (GA) version. For information about the GA version, see:
+>
+> - [Mount Azure Blob Storage as a file system by using BlobFuse v1](storage-how-to-mount-container-linux.md)
+> - [BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+To manage configuration settings for BlobFuse2, you have three options (in order of precedence):
+
+(1) [Configuration file](#configuration-file)
+
+(2) [Environment variables](#environment-variables)
-1. [CLI parameters](#cli-parameters)
-1. [Environment variables](#environment-variables)
-1. [A configuration file](#configuration-file)
+(3) [CLI parameters](#cli-parameters)
-Using a configuration file is the preferred method, but the other methods can be useful in some circumstances.
+Using a configuration file is the preferred method, but the other methods might be useful in some circumstances.
## Configuration file
-Creating a configuration file is the preferred method of establishing settings for BlobFuse2. Once you have specified the desired settings in the file, reference the configuration file when using the `blobfuse2 mount` or other commands. Example:
+Creating a configuration file is the preferred method to establish settings for BlobFuse2. When you've specified the settings you want in the configuration file, reference the configuration file when you use `blobfuse2 mount` or other commands.
+
+Here's an example:
````bash blobfuse2 mount ./mount --config-file=./config.yaml ````
-The [Base BlobFuse2 configuration file](https://github.com/Azure/azure-storage-fuse/blob/main/setup/baseConfig.yaml) contains a complete list of all settings and a brief explanation of each.
+The [BlobFuse2 base configuration file](https://github.com/Azure/azure-storage-fuse/blob/main/setup/baseConfig.yaml) contains a list of all settings and a brief explanation of each setting.
-Use the [Sample file cache configuration file](https://github.com/Azure/azure-storage-fuse/blob/main/sampleFileCacheConfig.yaml) or [the sample streaming configuration file](https://github.com/Azure/azure-storage-fuse/blob/main/sampleStreamingConfig.yaml) to get started quickly using some basic settings for each of those scenarios.
+Use the [sample file cache configuration file](https://github.com/Azure/azure-storage-fuse/blob/main/sampleFileCacheConfig.yaml) or the [sample streaming configuration file](https://github.com/Azure/azure-storage-fuse/blob/main/sampleStreamingConfig.yaml) to get started quickly by using some basic settings for each of those scenarios.
## Environment variables
-Setting environment variables is another way to configure some BlobFuse2 settings. The supported environment variables are useful for specifying the blob storage container to be accessed and the authorization method.
-
-For more details on using environment variables, see [The environment variables documentation](https://github.com/Azure/azure-storage-fuse/tree/main#environment-variables)
+Setting environment variables is another way to configure some BlobFuse2 settings. The supported environment variables are useful for specifying the Azure Blob Storage container to access and the authorization method to use.
-See [the BlobFuse2 README](https://github.com/Azure/azure-storage-fuse/tree/main#environment-variables) for a complete list of variables that can be used.
+For more information about using environment variables and a list of all variables you can use, see the [BlobFuse2 README](https://github.com/Azure/azure-storage-fuse/tree/main#environment-variables).
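As a minimal sketch, exporting account credentials through environment variables before mounting might look like the following. The variable names shown are the account name and key variables carried over from BlobFuse v1; confirm the exact set that BlobFuse2 supports in the README, and treat the values as placeholders.

```bash
# Placeholders; supply your own storage account name and access key.
export AZURE_STORAGE_ACCOUNT=<storage-account-name>
export AZURE_STORAGE_ACCESS_KEY=<storage-account-access-key>

# Mount by using a configuration file that omits the account credentials.
blobfuse2 mount ~/mycontainer --config-file=./config.yaml
```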
-## CLI Parameters
+## CLI parameters
-Configuration settings can be set when passed as parameters of the BlobFuse2 command set, such as the `blobfuse2 mount` command. The mount command typically references a configuration file that contains all of the settings, but individual settings in the configuration file can be overridden by CLI parameters. In this example, the config.yaml configuration file is referenced, but the container to be mounted and the logging options are overridden:
+You also can set configuration settings when you pass them as parameters of the BlobFuse2 command set, such as by using the `blobfuse2 mount` command. The mount command typically references a configuration file that contains all the settings. But you can use CLI parameters to override individual settings in the configuration file. In this example, the *config.yaml* configuration file is referenced, but the container to be mounted and the logging options are overridden:
```bash blobfuse2 mount ./mount_dir --config-file=./config.yaml --container-name=blobfuse2b --log-level=log_debug --log-file-path=./bobfuse2b.log ```
-For more information about the complete BlobFuse2 command set, including the `blobfuse2 mount` command, see [The BlobFuse2 command set reference (preview)](blobfuse2-commands.md) and [The BlobFuse2 mount command reference (preview)](blobfuse2-commands-mount.md).
+For more information about the entire BlobFuse2 command set, including the `blobfuse2 mount` command, see [BlobFuse2 commands](blobfuse2-commands.md) and [BlobFuse2 mount commands](blobfuse2-commands-mount.md).
## See also -- [What is BlobFuse2? (preview)](blobfuse2-what-is.md)-- [How to mount an Azure blob storage container on Linux with BlobFuse2 (preview)](blobfuse2-how-to-deploy.md)
+- [Migrate to BlobFuse2 from BlobFuse v1](https://github.com/Azure/azure-storage-fuse/blob/main/MIGRATION.md)
+- [BlobFuse2 commands](blobfuse2-commands.md)
+- [Troubleshoot BlobFuse2 issues](blobfuse2-troubleshooting.md)
+
+## Next steps
+
+- [Mount an Azure Blob Storage container on Linux by using BlobFuse2](blobfuse2-how-to-deploy.md)
+- [Use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage](blobfuse2-health-monitor.md)
storage Blobfuse2 Health Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-health-monitor.md
Title: How to use BlobFuse2 Health Monitor to gain insights into BlobFuse2 mount activities and resource usage (preview) | Microsoft Docs
+ Title: Use BlobFuse2 Preview Health Monitor to monitor mount activities and resource usage
-description: How to use BlobFuse2 Health Monitor to gain insights into BlobFuse2 mount activities and resource usage (preview).
-
+description: Learn how to use BlobFuse2 Health Monitor to gain insights into BlobFuse2 Preview mount activities and resource usage.
+++ Last updated 09/26/2022--
-# Use Health Monitor to gain insights into BlobFuse2 mounts (preview)
+# Use Health Monitor to gain insights into BlobFuse2 Preview mounts
-This article provides references to assist in deploying and using BlobFuse2 Health Monitor to gain insights into BlobFuse2 mount activities and resource usage.
+This article provides references to help you deploy and use BlobFuse2 Preview Health Monitor to gain insights into BlobFuse2 mount activities and resource usage.
> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and might not suitable for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> BlobFuse2 is the next generation of BlobFuse and currently is in preview. The preview version is provided without a service-level agreement. We recommend that you don't use the preview version for production workloads. In BlobFuse2 Preview, some features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
>
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
+> To use BlobFuse in a production environment, use the BlobFuse v1 general availability (GA) version. For information about the GA version, see:
>
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+> - [Mount Azure Blob Storage as a file system by using BlobFuse v1](storage-how-to-mount-container-linux.md)
+> - [BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
You can use BlobFuse2 Health Monitor to:
You can use BlobFuse2 Health Monitor to:
- Monitor CPU, memory, and network usage by BlobFuse2 mount processes - Track file cache usage and events
-## BlobFuse2 Health Monitor Resources
+## BlobFuse2 Health Monitor resources
During the preview of BlobFuse2, refer to [the BlobFuse2 Health Monitor README on GitHub](https://github.com/Azure/azure-storage-fuse/blob/main/tools/health-monitor/README.md) for full details on how to deploy and use Health Monitor. The README file describes:
During the preview of BlobFuse2, refer to [the BlobFuse2 Health Monitor README o
## See also -- [What is BlobFuse2? (preview)](blobfuse2-what-is.md)-- [BlobFuse2 configuration reference (preview)](blobfuse2-configuration.md)-- [How to mount an Azure blob storage container on Linux with BlobFuse2 (preview)](blobfuse2-how-to-deploy.md)
+- [Migrate to BlobFuse2 from BlobFuse v1](https://github.com/Azure/azure-storage-fuse/blob/main/MIGRATION.md)
+- [BlobFuse2 commands](blobfuse2-commands.md)
+- [Troubleshoot BlobFuse2 issues](blobfuse2-troubleshooting.md)
+
+## Next steps
+
+- [Configure settings for BlobFuse2](blobfuse2-configuration.md)
+- [Use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage](blobfuse2-health-monitor.md)
storage Blobfuse2 How To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-how-to-deploy.md
Title: How to mount an Azure Blob Storage container on Linux with BlobFuse2 (preview) | Microsoft Docs
+ Title: Mount an Azure Blob Storage container on Linux by using BlobFuse2 Preview
-description: How to mount an Azure Blob Storage container on Linux with BlobFuse2 (preview).
-
+description: Learn how to mount an Azure Blob Storage container on Linux by using BlobFuse2 Preview.
+++ Last updated 10/01/2022--
-# How to mount an Azure Blob Storage container on Linux with BlobFuse2 (preview)
+# Mount an Azure Blob Storage container on Linux by using BlobFuse2 Preview
-[BlobFuse2](blobfuse2-what-is.md) is a virtual file system driver for Azure Blob Storage. BlobFuse2 allows you to access your existing Azure block blob data in your storage account through the Linux file system. For more details see [What is BlobFuse2? (preview)](blobfuse2-what-is.md).
+[BlobFuse2 Preview](blobfuse2-what-is.md) is a virtual file system driver for Azure Blob Storage. BlobFuse2 allows you to access your existing Azure block blob data in your storage account through the Linux file system. For more information, see [What is BlobFuse2?](blobfuse2-what-is.md).
> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and might not be suitable for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> BlobFuse2 is the next generation of BlobFuse and currently is in preview. The preview version is provided without a service-level agreement. We recommend that you don't use the preview version for production workloads. In BlobFuse2 Preview, some features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
>
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
+> To use BlobFuse in a production environment, use the BlobFuse v1 general availability (GA) version. For information about the GA version, see:
>
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+> - [Mount Azure Blob Storage as a file system by using BlobFuse v1](storage-how-to-mount-container-linux.md)
+> - [BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
-This guide shows you how to install and configure BlobFuse2, mount an Azure blob container, and access data in the container. The basic steps are:
+This article shows you how to install and configure BlobFuse2, mount an Azure blob container, and access data in the container. The basic steps are:
- [Install BlobFuse2](#install-blobfuse2) - [Configure BlobFuse2](#configure-blobfuse2)-- [Mount blob container](#mount-blob-container)
+- [Mount a blob container](#mount-a-blob-container)
- [Access data](#access-data) ## Install BlobFuse2
-There are 2 basic options for installing BlobFuse2:
+To install BlobFuse2, you have two basic options:
+
+- [Install BlobFuse2 binaries](#option-1-install-blobfuse2-binaries-preferred) (preferred)
+- [Build BlobFuse2 binaries from source code](#option-2-build-binaries-from-source-code)
-1. [Install the BlobFuse2 Binary](#option-1-install-the-blobfuse2-binary-preferred)
-1. [Build it from source](#option-2-build-from-source)
+### Option 1: Install BlobFuse2 binaries (preferred)
-### Option 1: Install the BlobFuse2 Binary (preferred)
+For a list of supported distributions, see [BlobFuse2 releases](https://github.com/Azure/azure-storage-fuse/releases).
-For supported distributions see [the BlobFuse2 releases page](https://github.com/Azure/azure-storage-fuse/releases).
-For libfuse support information, refer to [the BlobFuse2 README](https://github.com/Azure/azure-storage-fuse/blob/main/README.md#distinctive-features-compared-to-blobfuse-v1x).
+For information about libfuse support, see the [BlobFuse2 README](https://github.com/Azure/azure-storage-fuse/blob/main/README.md#distinctive-features-compared-to-blobfuse-v1x).
To check your version of Linux, run the following command:
To check your version of Linux, run the following command:
lsb_release -a ```
-If there are no binaries available for your distribution, you can [build the binaries from source code](https://github.com/MicrosoftDocs/azure-docs-pr/pull/203174#option-2-build-from-source).
+If no binaries are available for your distribution, you can [build the binaries from source code](#option-2-build-binaries-from-source-code).
-#### Install the BlobFuse2 binariesFstream
+#### Install the BlobFuse2 binaries
-To install BlobFuse2:
+To install BlobFuse2 binaries:
-1. Retrieve the latest Blobfuse2 binary for your distro from GitHub, for example:
+1. Retrieve the latest BlobFuse2 binary for your distribution from GitHub. For example:
```bash wget https://github.com/Azure/azure-storage-fuse/releases/download/blobfuse2-2.0.0-preview.3/blobfuse2-2.0.0-preview.3-Ubuntu-22.04-x86-64.deb ```
-
-1. Install BlobFuse2. For example, on an Ubuntu distribution run:
+
+1. Install BlobFuse2. For example, on an Ubuntu distribution, run:
```bash sudo apt-get install libfuse3-dev fuse3 sudo dpkg -i blobfuse2-2.0.0-preview.3-Ubuntu-22.04-x86-64.deb ```
-
-### Option 2: Build from source
-To build the BlobFuse2 binaries from source:
+### Option 2: Build binaries from source code
+
+To build the BlobFuse2 binaries from source code:
+
+1. Install the dependencies:
-1. Install the dependencies
1. Install Git: ```bash sudo apt-get install git ```
- 1. Install BlobFuse2 dependencies:
- Ubuntu:
+ 1. Install BlobFuse2 dependencies.
+
+ On Ubuntu:
```bash sudo apt-get install libfuse3-dev fuse3 -y ```
-1. Clone the repo
+1. Clone the repository:
```Git git clone https://github.com/Azure/azure-storage-fuse/
To build the BlobFuse2 binaries from source:
git checkout main ```
-1. Build
+1. Build BlobFuse2:
```Git go get
To build the BlobFuse2 binaries from source:
``` > [!TIP]
-> If you need to install Go, refer to [The download and install page for Go](https://go.dev/doc/install).
+> If you need to install Go, see [Download and install Go](https://go.dev/doc/install).
## Configure BlobFuse2
-You can configure BlobFuse2 with a variety of settings. Some of the typical settings used include:
+You can configure BlobFuse2 by using various settings. Some of the typical settings include:
- Logging location and options - Temporary file path for caching - Information about the Azure storage account and blob container to be mounted
-The settings can be configured in a yaml configuration file, using environment variables, or as parameters passed to the BlobFuse2 commands. The preferred method is to use the configuration file.
+You can configure the settings in a YAML configuration file, in environment variables, or as parameters passed to BlobFuse2 commands. The preferred method is to use the configuration file.
-For details about each of the configuration parameters for BlobFuse2 and how to specify them, consult the references below:
+For details about each of the configuration parameters for BlobFuse2 and how to specify them, see these articles:
-- [Complete BlobFuse2 configuration reference (preview)](blobfuse2-configuration.md)-- [Configuration file reference (preview)](blobfuse2-configuration.md#configuration-file)-- [Environment variable reference (preview)](blobfuse2-configuration.md#environment-variables)-- [Mount command reference (preview)](blobfuse2-commands-mount.md)
+- [Configure settings for BlobFuse2](blobfuse2-configuration.md)
+- [BlobFuse2 configuration file](blobfuse2-configuration.md#configuration-file)
+- [BlobFuse2 environment variables](blobfuse2-configuration.md#environment-variables)
+- [BlobFuse2 mount commands](blobfuse2-commands-mount.md)
-The basic steps for configuring BlobFuse2 in preparation for mounting are:
+To configure BlobFuse2 for mounting:
-1. [Configure caching](#configure-caching)
-1. [Create an empty directory for mounting the blob container](#create-an-empty-directory-for-mounting-the-blob-container)
-1. [Authorize access to your storage account](#authorize-access-to-your-storage-account)
+1. [Configure caching](#configure-caching).
+1. [Create an empty directory to mount the blob container](#create-an-empty-directory-to-mount-the-blob-container).
+1. [Authorize access to your storage account](#authorize-access-to-your-storage-account).
### Configure caching
-BlobFuse2 provides native-like performance by using local file-caching techniques. The caching configuration and behavior varies, depending on whether you are streaming large files or accessing smaller files.
+BlobFuse2 provides native-like performance by using local file-caching techniques. The caching configuration and behavior varies, depending on whether you're streaming large files or accessing smaller files.
#### Configure caching for streaming large files
-BlobFuse2 supports streaming for both read and write operations as an alternative to disk caching for files. In streaming mode, BlobFuse2 caches blocks of large files in memory for both reading and writing. The configuration settings related to caching for streaming are under the `stream:` settings in your configuration file as follows:
+BlobFuse2 supports streaming for read and write operations as an alternative to disk caching for files. In streaming mode, BlobFuse2 caches blocks of large files in memory both for reading and writing. The configuration settings related to caching for streaming are under the `stream:` settings in your configuration file:
```yml stream: block-size-mb: For read only mode, the size of each block to be cached in memory while streaming (in MB)
- For read/write mode: the size of newly created blocks
+ For read/write mode, the size of newly created blocks
max-buffers: The total number of buffers to store blocks in buffer-size-mb: The size for each buffer ```
-See [the sample streaming configuration file](https://github.com/Azure/azure-storage-fuse/blob/main/sampleStreamingConfig.yaml) to get started quickly with some settings for a basic streaming scenario.
+To get started quickly with some settings for a basic streaming scenario, see the [sample streaming configuration file](https://github.com/Azure/azure-storage-fuse/blob/main/sampleStreamingConfig.yaml).
#### Configure caching for smaller files
-Smaller files are cached to a temporary path specified under `file_cache:` in the configuration file as follows:
+Smaller files are cached to a temporary path that's specified under `file_cache:` in the configuration file:
```yml file_cache:
file_cache:
``` > [!NOTE]
-> BlobFuse2 stores all open file contents in the temporary path. Make sure to have enough space to contain all open files.
+> BlobFuse2 stores all open file contents in the temporary path. Make sure you have enough space to contain all open files.
>
-There are 3 common options for configuring the temporary path for file caching:
+You have three common options to configure the temporary path for file caching:
- [Use a local high-performing disk](#use-a-local-high-performing-disk)-- [Use a ramdisk](#use-a-ramdisk)
+- [Use a RAM disk](#use-a-ram-disk)
- [Use an SSD](#use-an-ssd) ##### Use a local high-performing disk
-If an existing local disk is chosen for file caching, choose one that will provide the best performance possible, such as an SSD.
+If you use an existing local disk for file caching, choose a disk that provides the best performance possible, such as a solid-state disk (SSD).
-##### Use a ramdisk
+##### Use a RAM disk
-The following example creates a ramdisk of 16 GB and a directory for BlobFuse2. Choose the size based on your needs. This ramdisk allows BlobFuse2 to open files up to 16 GB in size.
+The following example creates a RAM disk of 16 GB and a directory for BlobFuse2. Choose a size that meets your requirements. BlobFuse2 uses the RAM disk to open files that are up to 16 GB in size.
```bash sudo mkdir /mnt/ramdisk
sudo chown <youruser> /mnt/ramdisk/blobfuse2tmp
##### Use an SSD
-In Azure, you may use the ephemeral disks (SSD) available on your VMs to provide a low-latency buffer for BlobFuse2. Depending on the provisioning agent used, the ephemeral disk would be mounted on '/mnt' for cloud-init or '/mnt/resource' for waagent VMs.
+In Azure, you can use the SSD ephemeral disks that are available on your VMs to provide a low-latency buffer for BlobFuse2. Depending on the provisioning agent you use, the ephemeral disk is mounted on */mnt* for cloud-init VMs or on */mnt/resource* for Microsoft Azure Linux Agent (waagent) VMs.
-Make sure your user has access to the temporary path:
+Make sure that your user has access to the temporary path:
```bash sudo mkdir /mnt/resource/blobfuse2tmp -p sudo chown <youruser> /mnt/resource/blobfuse2tmp ```
-### Create an empty directory for mounting the blob container
+### Create an empty directory to mount the blob container
+
+To create an empty directory to mount the blob container:
```bash mkdir ~/mycontainer
mkdir ~/mycontainer
### Authorize access to your storage account
-You must grant the user mounting the container access to the storage account. The most common ways of doing this are by using:
+You must grant access to the storage account for the user who mounts the container. The most common ways to grant access are by using one of the following options:
-- A storage account access key-- A shared access signature (SAS)-- A managed identity-- A Service Principal
+- Storage account access key
+- Shared access signature
+- Managed identity
+- Service principal
-Authorization information can be provided in a configuration file or in environment variables. For details, see [How to configure Blobfuse2 (preview)](blobfuse2-configuration.md).
+You can provide authorization information in a configuration file or in environment variables. For more information, see [Configure settings for BlobFuse2](blobfuse2-configuration.md).
-## Mount blob container
+## Mount a blob container
> [!IMPORTANT]
-> BlobFuse2 does not support overlapping mount paths. While running multiple instances of BlobFuse2 make sure each instance has a unique and non-overlapping mount point.
+> BlobFuse2 doesn't support overlapping mount paths. If you run multiple instances of BlobFuse2, make sure that each instance has a unique and non-overlapping mount point.
>
-> BlobFuse2 does not support co-existence with NFS on the same mount path. The behavior resulting from doing so is undefined and could result in data corruption.
+> BlobFuse2 doesn't support coexistence with NFS on the same mount path. The results of running BlobFuse2 on the same mount path as NFS are undefined and might result in data corruption.
-To mount an Azure block blob container using BlobFuse2, run the following command. This command mounts the container specified in `./config.yaml` onto the location `~/mycontainer`:
+To mount an Azure block blob container by using BlobFuse2, run the following command. The command mounts the container specified in `./config.yaml` onto the location `~/mycontainer`:
```bash blobfuse2 mount ~/mycontainer --config-file=./config.yaml ``` > [!NOTE]
-> For a full list of mount options, check [the BlobFuse2 mount command reference (preview)](blobfuse2-commands-mount.md).
+> For a full list of mount options, see [BlobFuse2 mount commands](blobfuse2-commands-mount.md).
-You should now have access to your block blobs through the Linux file system and related APIs. To test your deployment try creating a new directory and file:
+You should now have access to your block blobs through the Linux file system and related APIs. To test your deployment, try creating a new directory and file:
```bash cd ~/mycontainer
echo "hello world" > test/blob.txt
## Access data
-Generally, you can work with the BlobFuse2-mounted storage like you would work with the native Linux file system. It uses the virtual directory scheme with the forward-slash '/' as a delimiter in the file path and supports basic file system operations such as mkdir, opendir, readdir, rmdir, open, read, create, write, close, unlink, truncate, stat, and rename.
+Generally, you can work with the BlobFuse2-mounted storage like you would work with the native Linux file system. It uses the virtual directory scheme with a forward slash (`/`) as a delimiter in the file path and supports basic file system operations such as `mkdir`, `opendir`, `readdir`, `rmdir`, `open`, `read`, `create`, `write`, `close`, `unlink`, `truncate`, `stat`, and `rename`.
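As a quick sketch (assuming the container is mounted at `~/mycontainer` as shown earlier), the usual shell tools exercise these operations directly:

```bash
cd ~/mycontainer
mkdir docs                        # mkdir
ls docs                           # opendir/readdir
echo "draft" > docs/notes.txt     # create/write/close
cat docs/notes.txt                # open/read
stat docs/notes.txt               # stat
mv docs/notes.txt docs/final.txt  # rename (a copy followed by a delete on the service side)
rm docs/final.txt                 # unlink
rmdir docs                        # rmdir
```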
-However, there are some [differences in functionality](blobfuse2-what-is.md#limitations) you need to be aware of. Some of the key differences are related to:
+However, you should be aware of some key [differences in functionality](blobfuse2-what-is.md#limitations):
- [Differences between the Linux file system and BlobFuse2](blobfuse2-what-is.md#differences-between-the-linux-file-system-and-blobfuse2)-- [Data Integrity](blobfuse2-what-is.md#data-integrity)
+- [Data integrity](blobfuse2-what-is.md#data-integrity)
- [Permissions](blobfuse2-what-is.md#permissions)

## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
+This table shows how this feature is supported in your account and the effect on support when you enable certain capabilities:
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
|--|--|--|--|--|
| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| Premium block blobs | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
+<sup>1</sup> Azure Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
## See also -- [Blobfuse2 Migration Guide (from BlobFuse v1)](https://github.com/Azure/azure-storage-fuse/blob/main/MIGRATION.md)-- [BlobFuse2 configuration reference (preview)](blobfuse2-configuration.md)-- [BlobFuse2 command reference (preview)](blobfuse2-commands.md)-- [Use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage (preview)](blobfuse2-health-monitor.md)-- [How to troubleshoot BlobFuse2 issues (preview)](blobfuse2-troubleshooting.md)
+- [Migrate to BlobFuse2 from BlobFuse v1](https://github.com/Azure/azure-storage-fuse/blob/main/MIGRATION.md)
+- [BlobFuse2 commands](blobfuse2-commands.md)
+- [Troubleshoot BlobFuse2 issues](blobfuse2-troubleshooting.md)
+
+## Next steps
+
+- [Configure settings for BlobFuse2](blobfuse2-configuration.md)
+- [Use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage](blobfuse2-health-monitor.md)
storage Blobfuse2 Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-troubleshooting.md
Title: How to troubleshoot BlobFuse2 issues (preview) | Microsoft Docs
+ Title: Troubleshoot issues in BlobFuse2 Preview
-description: Learn how to troubleshoot BlobFuse2 issues (preview).
-
+description: Learn how to troubleshoot issues in BlobFuse2 Preview.
+++ Last updated 08/02/2022--
-# Troubleshooting BlobFuse2 (preview)
-
-This article provides references to assist in troubleshooting BlobFuse2 issues during the preview.
+# Troubleshoot issues in BlobFuse2 Preview
-## The troubleshooting guide (TSG)
-
-During the preview of BlobFuse2, refer to [The BlobFuse2 Troubleshooting Guide (TSG) on GitHub](https://github.com/Azure/azure-storage-fuse/blob/main/TSG.md)
+This article provides references to help you troubleshoot BlobFuse2 issues during the preview.
> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> BlobFuse2 is the next generation of BlobFuse and currently is in preview. The preview version is provided without a service-level agreement. We recommend that you don't use the preview version for production workloads. In BlobFuse2 Preview, some features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
>
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
+> To use BlobFuse in a production environment, use the BlobFuse v1 general availability (GA) version. For information about the GA version, see:
>
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+> - [Mount Azure Blob Storage as a file system by using BlobFuse v1](storage-how-to-mount-container-linux.md)
+> - [BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+
+## BlobFuse2 troubleshooting guide
+
+For troubleshooting guidance during the preview of BlobFuse2, see the [BlobFuse2 troubleshooting guide on GitHub](https://github.com/Azure/azure-storage-fuse/blob/main/TSG.md).
## See also -- [Use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage (preview)](blobfuse2-health-monitor.md)-- [What is BlobFuse2? (preview)](blobfuse2-what-is.md)-- [How to mount an Azure blob storage container on Linux with BlobFuse2 (preview)](blobfuse2-how-to-deploy.md)-- [BlobFuse2 configuration reference (preview)](blobfuse2-configuration.md)-- [BlobFuse2 command reference (preview)](blobfuse2-commands.md)
+- [Use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage](blobfuse2-health-monitor.md)
+- [Migrate to BlobFuse2 from BlobFuse v1](https://github.com/Azure/azure-storage-fuse/blob/main/MIGRATION.md)
+
+## Next steps
+
+- [Configure settings for BlobFuse2](blobfuse2-configuration.md)
+- [BlobFuse2 commands](blobfuse2-commands.md)
storage Blobfuse2 What Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-what-is.md
Title: Use BlobFuse2 (preview) to mount and manage Azure Blob storage containers on Linux. | Microsoft Docs
+ Title: What is BlobFuse2 Preview?
-description: Use BlobFuse2 (preview) to mount and manage Azure Blob storage containers on Linux.
-
+description: Get an overview of BlobFuse2 Preview and how to use it, including migration options if you use BlobFuse v1.
+++ Last updated 10/01/2022--
-# What is BlobFuse2? (preview)
+# What is BlobFuse2 Preview?
+
+BlobFuse2 Preview is a virtual file system driver for Azure Blob Storage. Use BlobFuse2 to access your existing Azure block blob data in your storage account through the Linux file system. BlobFuse2 also supports storage accounts that have a hierarchical namespace enabled.
> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> BlobFuse2 is the next generation of BlobFuse and currently is in preview. The preview version is provided without a service-level agreement. We recommend that you don't use the preview version for production workloads. In BlobFuse2 Preview, some features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
>
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
+> To use BlobFuse in a production environment, use the BlobFuse v1 general availability (GA) version. For information about the GA version, see:
>
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
-
-## Overview
-
-BlobFuse2 is a virtual file system driver for Azure Blob storage. It allows you to access your existing Azure block blob data in your storage account through the Linux file system. BlobFuse2 also supports storage accounts with a hierarchical namespace enabled.
+> - [Mount Azure Blob Storage as a file system by using BlobFuse v1](storage-how-to-mount-container-linux.md)
+> - [BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
-### About the BlobFuse2 open source project
+## About the BlobFuse2 open source project
-BlobFuse2 is an open source project that uses the libfuse open source library (fuse3) to communicate with the Linux FUSE kernel module, and implements the filesystem operations using the Azure Storage REST APIs.
+BlobFuse2 is an open source project that uses the libfuse open source library (fuse3) to communicate with the Linux FUSE kernel module. BlobFuse2 implements file system operations by using the Azure Storage REST APIs.
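As a quick check before installing BlobFuse2 on a client (a sketch, not part of the official steps), you can confirm that the fuse3 user-space utilities and the FUSE kernel module are present:

```bash
# Print the fuse3 user-space utility version (package names vary by distribution)
fusermount3 -V

# Confirm the FUSE kernel module is loaded; load it if it isn't
lsmod | grep -i fuse || sudo modprobe fuse
```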
-The open source BlobFuse2 project can be found on GitHub:
+The open source BlobFuse2 project is on GitHub:
-- [BlobFuse2 home page](https://github.com/Azure/azure-storage-fuse/tree/main)-- [Blobfuse2 README](https://github.com/Azure/azure-storage-fuse/blob/main/README.md)
+- [BlobFuse2 repository](https://github.com/Azure/azure-storage-fuse/tree/main)
+- [BlobFuse2 README](https://github.com/Azure/azure-storage-fuse/blob/main/README.md)
- [Report BlobFuse2 issues](https://github.com/Azure/azure-storage-fuse/issues)
-#### Licensing
+### Licensing
-This project is [licensed under MIT](https://github.com/Azure/azure-storage-fuse/blob/main/LICENSE).
+The BlobFuse2 project is [licensed under MIT](https://github.com/Azure/azure-storage-fuse/blob/main/LICENSE).
## Features
-A full list of BlobFuse2 features is in the [BlobFuse2 README](https://github.com/Azure/azure-storage-fuse/blob/main/README.md#features). Some key features are:
+A full list of BlobFuse2 features is in the [BlobFuse2 README](https://github.com/Azure/azure-storage-fuse/blob/main/README.md#features). These are some of the key tasks you can do by using BlobFuse2:
+
+- Mount an Azure Blob Storage container or Azure Data Lake Storage Gen2 file system on Linux
+- Use basic file system operations like `mkdir`, `opendir`, `readdir`, `rmdir`, `open`, `read`, `create`, `write`, `close`, `unlink`, `truncate`, `stat`, and `rename`
+- Use local file caching to improve subsequent access times
+- Gain insights into mount activities and resource usage by using BlobFuse2 Health Monitor
+
+Other key features in BlobFuse2 include:
-- Mount an Azure storage blob container or Data Lake Storage Gen2 file system on Linux-- Use basic file system operations, such as mkdir, opendir, readdir, rmdir, open, read, create, write, close, unlink, truncate, stat, and rename-- Local file caching to improve subsequent access times - Streaming to support reading and writing large files-- Gain insights into mount activities and resource usage using BlobFuse2 Health Monitor - Parallel downloads and uploads to improve access time for large files - Multiple mounts to the same container for read-only workloads
-### BlobFuse2 Enhancements
+## BlobFuse2 enhancements from BlobFuse v1
-Blobfuse2 has more feature support and improved performance in multiple user scenarios over v1. For the extensive list of improvements, see [the BlobFuse2 README](https://github.com/Azure/azure-storage-fuse/blob/main/README.md#distinctive-features-compared-to-blobfuse-v1x). A summary of enhancements is provided below:
+BlobFuse2 has more feature support and improved performance in multiple user scenarios from BlobFuse v1. For the extensive list of improvements, see the [BlobFuse2 README](https://github.com/Azure/azure-storage-fuse/blob/main/README.md#distinctive-features-compared-to-blobfuse-v1x). Here's a summary of enhancements in BlobFuse2 from BlobFuse v1:
- Improved caching - More management support through new Azure CLI commands-- Additional logging support-- The addition of write-streaming for large files (read-streaming was previous supported)-- Gain insights into mount activities and resource usage using BlobFuse2 Health Monitor
+- More logging support
+- The addition of write-streaming for large files (previously, only read-streaming was supported)
+- New BlobFuse2 Health Monitor to help you gain insights into mount activities and resource usage
- Compatibility and upgrade options for existing BlobFuse v1 users
- Version checking and upgrade prompting
- Support for configuration file encryption
-For a list of performance enhancements over v1, see [the BlobFuse2 README](https://github.com/Azure/azure-storage-fuse/blob/main/README.md#blobfuse2-performance-compared-to-blobfusev1xx) for more details.
+See the [list of BlobFuse2 performance enhancements](https://github.com/Azure/azure-storage-fuse/blob/main/README.md#blobfuse2-performance-compared-to-blobfusev1xx) from BlobFuse v1.
-### For existing BlobFuse v1 users
+### For BlobFuse v1 users
-The enhancements provided by BlobFuse2 are compelling reasons for upgrading and migrating to BlobFuse2. However, if you aren't ready to migrate, you can [use BlobFuse2 to mount a blob container by using the same configuration options and Azure CLI parameters you used with BlobFuse v1](blobfuse2-commands-mountv1.md).
+The enhancements provided by BlobFuse2 are compelling reasons to upgrade and migrate to BlobFuse2. If you aren't ready to migrate, you can use BlobFuse2 to [mount a blob container by using the same configuration options and Azure CLI parameters you use with BlobFuse v1](blobfuse2-commands-mountv1.md).
-The [Blobfuse2 Migration Guide](https://github.com/Azure/azure-storage-fuse/blob/main/MIGRATION.md) provides all of the details you need for compatibility and migrating your existing workloads.
+The [BlobFuse2 migration guide](https://github.com/Azure/azure-storage-fuse/blob/main/MIGRATION.md) provides all the details you need for compatibility and migrating your current workloads.
## Support
-BlobFuse2 is supported by Microsoft provided that it's used within the specified [limits](#limitations). If you encounter an issue in this preview version, [report it on GitHub](https://github.com/Azure/azure-storage-fuse/issues).
+BlobFuse2 is supported by Microsoft if it's used within the specified [limits](#limitations). If you encounter an issue in this preview version, [report it on GitHub](https://github.com/Azure/azure-storage-fuse/issues).
## Limitations
-BlobFuse2 doesn't guarantee 100% POSIX compliance as it simply translates requests into [Blob REST APIs](/rest/api/storageservices/blob-service-rest-api). For example, rename operations are atomic in POSIX, but not in BlobFuse2.
+BlobFuse2 doesn't guarantee 100% POSIX compliance because BlobFuse2 simply translates requests into [Blob REST APIs](/rest/api/storageservices/blob-service-rest-api). For example, rename operations are atomic in POSIX but not in BlobFuse2.
See [the full list of differences between a native file system and BlobFuse2](#differences-between-the-linux-file-system-and-blobfuse2).

### Differences between the Linux file system and BlobFuse2
-In many ways, BlobFuse2-mounted storage can be used just like the native Linux file system. The virtual directory scheme is the same with the forward-slash '/' as a delimiter. Basic file system operations, such as mkdir, opendir, readdir, rmdir, open, read, create, write, close, unlink, truncate, stat, and rename work normally.
+In many ways, you can use BlobFuse2-mounted storage just like the native Linux file system. The virtual directory scheme is the same and uses a forward slash (`/`) as a delimiter. Basic file system operations like `mkdir`, `opendir`, `readdir`, `rmdir`, `open`, `read`, `create`, `write`, `close`, `unlink`, `truncate`, `stat`, and `rename` work the same as in the Linux file system.
-However, there are some key differences in the way BlobFuse2 behaves:
+BlobFuse2 is different from the Linux file system in some key ways:
- **Readdir count of hard links**:
- For performance reasons, BlobFuse2 does not correctly report the hard links inside a directory. The number of hard links for empty directories is returned as 2. The number for non-empty directories is always returned as 3, regardless of the actual number of hard links.
+ For performance reasons, BlobFuse2 doesn't correctly report the hard links inside a directory. The number of hard links for empty directories returns as 2. The number for non-empty directories always returns as 3, regardless of the actual number of hard links.
- **Non-atomic renames**:
- Atomic rename operations aren't supported by the Azure Storage Blob Service. Single file renames are actually two operations - a copy, followed by a delete of the original. Directory renames recursively enumerate all files in the directory, and renames each.
+  Azure Blob Storage doesn't support atomic rename operations. Single-file renames are actually two operations: a copy, and then a deletion of the original. Directory renames recursively enumerate all files in the directory and rename each file.
- **Special files**:
- Blobfuse supports only directories, regular files, and symbolic links. Special files, such as device files, pipes, and sockets aren't supported.
+ BlobFuse2 supports only directories, regular files, and symbolic links. Special files like device files, pipes, and sockets aren't supported.
- **mkfifo**:
However, there are some key differences in the way BlobFuse2 behaves:
- **chown and chmod**:
- Data Lake Storage Gen2 storage accounts support per object permissions and ACLs, while flat namespace (FNS) block blobs don't. As a result, BlobFuse2 doesn't support the `chown` and `chmod` operations for mounted block blob containers. The operations are supported for Data Lake Storage Gen2.
+ Data Lake Storage Gen2 storage accounts support per object permissions and ACLs, but flat namespace (FNS) block blobs don't. As a result, BlobFuse2 doesn't support the `chown` and `chmod` operations for mounted block blob containers. The operations are supported for Data Lake Storage Gen2.
- **Device files or pipes**:
- Creation of device files or pipes isn't supported by BlobFuse2.
+ BlobFuse2 doesn't support creating device files or pipes.
- **Extended-attributes (x-attrs)**:
- BlobFuse2 doesn't support extended-attributes (x-attrs) operations.
+ BlobFuse2 doesn't support extended-attributes (`x-attrs`) operations.
- **Write-streaming**:
- Concurrent streaming of read and write operations on large file data can produce unpredictable results. Simultaneously writing to the same blob from different threads is not supported.
+ Concurrent streaming of read and write operations on large file data might produce unpredictable results. Simultaneously writing to the same blob from different threads isn't supported.
### Data integrity
-The file caching behavior plays an important role in the integrity of the data being read and written to a Blob Storage file system mount. Streaming mode is recommended for use with large files, which supports streaming for both read and write operations. BlobFuse2 caches blocks of streaming files in memory. For smaller files that do not consist of blocks, the entire file is stored in memory. File cache is the second mode and is recommended for workloads that do not contain large files. Where files are stored on disk in their entirety.
+File caching plays an important role in the integrity of data that's read and written to a Blob Storage file system mount. We recommend streaming mode for large files; streaming mode supports both read and write operations, and BlobFuse2 caches blocks of streaming files in memory. For smaller files that don't consist of blocks, the entire file is stored in memory. File cache is the second mode, in which files are stored on disk in their entirety. We recommend file cache mode for workloads that don't contain large files.
-BlobFuse2 supports both read and write operations. Continuous synchronization of data written to storage by using other APIs or other mounts of BlobFuse2 isn't guaranteed. For data integrity, it's recommended that multiple sources don't modify the same blob, especially at the same time. If one or more applications attempt to write to the same file simultaneously, the results could be unexpected. Depending on the timing of multiple write operations and the freshness of the cache for each, the result could be that the last writer wins and previous writes are lost, or generally that the updated file isn't in the desired state.
+BlobFuse2 supports read and write operations. Continuous synchronization of data written to storage by using other APIs or other mounts of BlobFuse2 isn't guaranteed. For data integrity, we recommend that multiple sources don't modify the same blob, especially at the same time. If one or more applications attempt to write to the same file simultaneously, the results might be unexpected. Depending on the timing of multiple write operations and the freshness of the cache for each operation, the result might be that the last writer wins and previous writes are lost, or generally that the updated file isn't in the intended state.
#### File caching on disk
-When a file is written to, the data is first persisted into cache on a local disk. The data is written to blob storage only after the file handle is closed. If there's an issue attempting to persist the data to blob storage, you will receive an error message.
+When a file is the subject of a write operation, the data is first persisted to cache on a local disk. The data is written to Blob Storage only after the file handle is closed. If an issue attempting to persist the data to Blob Storage occurs, an error message appears.
#### Streaming
-For streaming during both read and write operations, blocks of data are cached in memory as they are read or updated. Updates are flushed to Azure Storage when a file is closed or when the buffer is filled with dirty blocks.
+For streaming during read and write operations, blocks of data are cached in memory as they're read or updated. Updates are flushed to Azure Storage when a file is closed or when the buffer is filled with dirty blocks.
-Reading the same blob from multiple simultaneous threads is supported. However, simultaneous write operations could result in unexpected file data outcomes, including data loss. Performing simultaneous read operations and a single write operation is supported, but the data being read from some threads might not be current.
+Reading the same blob from multiple simultaneous threads is supported. However, simultaneous write operations might result in unexpected file data outcomes, including data loss. Performing simultaneous read operations and a single write operation is supported, but the data being read from some threads might not be current.
### Permissions
-When a container is mounted with the default options, all files will get 770 permissions, and only be accessible by the user doing the mounting. If you desire to allow anyone on your machine to access the BlobFuse2 mount, mount it with option "--allow-other". This option can also be configured through the `yaml` config file.
+When a container is mounted with the default options, all files get 770 permissions and are accessible only by the user who performs the mount. To allow any user on the machine to access the BlobFuse2 mount, mount BlobFuse2 by using the `--allow-other` option. You can also configure this option in the YAML config file.
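For example, a mount command that exposes the mount to other users on the machine might look like the following sketch. On most distributions, FUSE also requires `user_allow_other` to be enabled in `/etc/fuse.conf` before a non-root user can use this option.

```bash
# Sketch: expose the BlobFuse2 mount to all users on the machine.
# FUSE typically requires 'user_allow_other' in /etc/fuse.conf for non-root mounts.
blobfuse2 mount ~/mycontainer --config-file=./config.yaml --allow-other
```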
-As stated previously, the `chown` and `chmod` operations are supported for Data Lake Storage Gen2, but not for flat namespace (FNS) block blobs. Running a 'chmod' operation against a mounted FNS block blob container returns a success message, but the operation won't actually succeed.
+As stated earlier, the `chown` and `chmod` operations are supported for Data Lake Storage Gen2, but not for FNS block blobs. Running a `chmod` operation against a mounted FNS block blob container returns a success message, but the operation doesn't actually succeed.
## Feature support
-This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
+This table shows how this feature is supported in your account and the effect on support when you enable certain capabilities.
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
+| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | Network File System (NFS) 3.0 <sup>1</sup> | SSH File Transfer Protocol (SFTP) <sup>1</sup> |
|--|--|--|--|--|
| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| Premium block blobs | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
+<sup>1</sup> Data Lake Storage Gen2, the NFS 3.0 protocol, and SFTP support all require a storage account that has a hierarchical namespace enabled.
-## Next steps
+## See also
-- [How to mount an Azure blob storage container on Linux with BlobFuse2 (preview)](blobfuse2-how-to-deploy.md)-- [The BlobFuse2 Migration Guide (from v1)](https://github.com/Azure/azure-storage-fuse/blob/main/MIGRATION.md)
+- [Migrate to BlobFuse2 from BlobFuse v1](https://github.com/Azure/azure-storage-fuse/blob/main/MIGRATION.md)
+- [BlobFuse2 commands](blobfuse2-commands.md)
+- [Troubleshoot BlobFuse2 issues](blobfuse2-troubleshooting.md)
-## See also
+## Next steps
-- [BlobFuse2 configuration reference (preview)](blobfuse2-configuration.md)-- [BlobFuse2 command reference (preview)](blobfuse2-commands.md)-- [Use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage (preview)](blobfuse2-health-monitor.md)-- [How to troubleshoot BlobFuse2 issues (preview)](blobfuse2-troubleshooting.md)
+- [Mount an Azure Blob Storage container on Linux by using BlobFuse2](blobfuse2-how-to-deploy.md)
+- [Configure settings for BlobFuse2](blobfuse2-configuration.md)
+- [Use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage](blobfuse2-health-monitor.md)
storage Data Lake Storage Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-known-issues.md
Data Lake Storage Gen2 APIs, NFS 3.0, and Blob APIs can operate on the same data
This section describes issues and limitations with using blob APIs, NFS 3.0, and Data Lake Storage Gen2 APIs to operate on the same data. -- You cannot use blob APIs, NFS 3.0, and Data Lake Storage APIs to write to the same instance of a file. If you write to a file by using Data Lake Storage Gen2 or APIs or NFS 3.0, then that file's blocks won't be visible to calls to the [Get Block List](/rest/api/storageservices/get-block-list) blob API. The only exception is when using you are overwriting. You can overwrite a file/blob using either API or with NFS 3.0 by using the zero-truncate option.
+- You cannot use blob APIs, NFS 3.0, and Data Lake Storage APIs to write to the same instance of a file. If you write to a file by using Data Lake Storage Gen2 APIs or NFS 3.0, then that file's blocks won't be visible to calls to the [Get Block List](/rest/api/storageservices/get-block-list) blob API. The only exception is when you are overwriting. You can overwrite a file/blob using either API or with NFS 3.0 by using the zero-truncate option.
- When you use the [List Blobs](/rest/api/storageservices/list-blobs) operation without specifying a delimiter, the results will include both directories and blobs. If you choose to use a delimiter, use only a forward slash (`/`). This is the only supported delimiter.
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
Log entries are created only if there are requests made against the service endp
Requests made by the Blob storage service itself, such as log creation or deletion, aren't logged. For a full list of the logged data, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Storage log format](monitor-blob-storage-reference.md).
+> [!NOTE]
+> Azure Monitor currently filters out logs that describe activity in the `insights-` container. You can track activities in that container by using storage analytics (classic logs).
+
### Log anonymous requests

The following types of anonymous requests are logged:
storage Network File System Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-how-to.md
Create a directory on your Linux system, and then mount the container in the sto
|`NFS3ERR_IO/EIO ("Input/output error"`) |This error can appear when a client attempts to read, write, or set attributes on blobs that are stored in the archive access tier. | |`OperationNotSupportedOnSymLink` error| This error can be returned during a write operation via a Blob Storage or Azure Data Lake Storage Gen2 API. Using these APIs to write or delete symbolic links that are created by using NFS 3.0 is not allowed. Make sure to use the NFS 3.0 endpoint to work with symbolic links. | |`mount: /nfsdata: bad option;`| Install the NFS helper program by using `sudo apt install nfs-common`.|
+|`Connection Timed Out`| Make sure that the client allows outgoing communication through ports 111 and 2048. The NFS 3.0 protocol uses these ports. Make sure to mount the storage account by using the Blob service endpoint and not the Data Lake Storage endpoint. |
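A quick way to check that outbound traffic on these ports isn't blocked is to probe the Blob service endpoint from the client, as in this sketch (replace the placeholder account name with your own):

```bash
# Probe outbound connectivity to the NFS 3.0 ports on the Blob service endpoint.
# <storage-account> is a placeholder for your storage account name.
nc -zv <storage-account>.blob.core.windows.net 111
nc -zv <storage-account>.blob.core.windows.net 2048
```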
## See also
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
Previously updated : 09/29/2022 Last updated : 10/10/2022
To learn more about the SFTP permissions model, see [SFTP Permissions model](sec
> [!IMPORTANT] > While you can enable both forms of authentication, SFTP clients can connect by using only one of them. Multifactor authentication, whereby both a valid password and a valid public and private key pair are required for successful authentication is not supported.
- If you select **SSH Password**, then your password will appear when you've completed all of the steps in the **Add local user** configuration pane. SSH passwords are generated by Azure and are minimum 88 characters in length.
+ If you select **SSH Password**, then your password will appear when you've completed all of the steps in the **Add local user** configuration pane. SSH passwords are generated by Azure and are minimum 32 characters in length.
If you select **SSH Key pair**, then select **Public key source** to specify a key source.
To learn more about the SFTP permissions model, see [SFTP Permissions model](sec
$password ``` > [!IMPORTANT]
- > You can't retrieve this password later, so make sure to copy the password, and then store it in a place where you can find it. If you lose this password, you'll have to generate a new one. Note that SSH passwords are generated by Azure and are minimum 88 characters in length.
+ > You can't retrieve this password later, so make sure to copy the password, and then store it in a place where you can find it. If you lose this password, you'll have to generate a new one. Note that SSH passwords are generated by Azure and are minimum 32 characters in length.
### [Azure CLI](#tab/azure-cli)
To learn more about the SFTP permissions model, see [SFTP Permissions model](sec
az storage account local-user regenerate-password --account-name contosoaccount -g contoso-resource-group -n contosouser ``` > [!IMPORTANT]
- > You can't retrieve this password later, so make sure to copy the password, and then store it in a place where you can find it. If you lose this password, you'll have to generate a new one. Note that SSH passwords are generated by Azure and are minimum 88 characters in length.
+ > You can't retrieve this password later, so make sure to copy the password, and then store it in a place where you can find it. If you lose this password, you'll have to generate a new one. Note that SSH passwords are generated by Azure and are minimum 32 characters in length.
storage Storage How To Mount Container Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-how-to-mount-container-linux.md
# How to mount Azure Blob Storage as a file system with BlobFuse v1 > [!IMPORTANT]
-> [BlobFuse2](blobfuse2-what-is.md) is the latest version of BlobFuse and has many significant improvements over the version discussed in this article, BlobFuse v1. To learn about the improvements made in BlobFuse2, see [the list of BlobFuse2 enhancements](blobfuse2-what-is.md#blobfuse2-enhancements). BlobFuse2 is currently in preview and might not be suitable for production workloads.
+> [BlobFuse2](blobfuse2-what-is.md) is the latest version of BlobFuse and has many significant improvements over the version discussed in this article, BlobFuse v1. To learn about the improvements made in BlobFuse2, see [the list of BlobFuse2 enhancements](blobfuse2-what-is.md#blobfuse2-enhancements-from-blobfuse-v1). BlobFuse2 is currently in preview and might not be suitable for production workloads.
[BlobFuse](https://github.com/Azure/azure-storage-fuse) is a virtual file system driver for Azure Blob Storage. BlobFuse allows you to access your existing block blob data in your storage account through the Linux file system. BlobFuse uses the virtual directory scheme with the forward-slash '/' as a delimiter.
storage Multiple Identity Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/multiple-identity-scenarios.md
Last updated 09/23/2022 -+ # Configure passwordless connections between multiple Azure apps and services
BlobServiceClient blobServiceClient2 = new BlobServiceClient(
// Get the second user-assigned managed identity ID to connect to shared databases var clientIDdatabases = Environment.GetEnvironmentVariable("Managed_Identity_Client_ID_Databases");
-// Create a Cosmos DB client
+// Create an Azure Cosmos DB client
CosmosClient client = new CosmosClient( accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT", EnvironmentVariableTarget.Process), new DefaultAzureCredential()
class Demo {
// Get the second user-assigned managed identity ID to connect to shared databases String clientIdDatabase = System.getenv("Managed_Identity_Client_ID_Databases");
- // Create a Cosmos DB client
+ // Create an Azure Cosmos DB client
CosmosClient cosmosClient = new CosmosClientBuilder() .endpoint("https://<cosmos-db-account>.documents.azure.com:443/") .credential(new DefaultAzureCredentialBuilder().managedIdentityClientId(clientIdDatabase).build())
storage Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Storage description: Lists Azure Policy Regulatory Compliance controls available for Azure Storage. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
Last updated 09/29/2022 + # Introduction to Azure Storage
The following table compares Files, Blobs, Disks, Queues, Tables, and Azure NetA
| **Azure Blobs** | Allows unstructured data to be stored and accessed at a massive scale in block blobs.<br/><br/>Also supports [Azure Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md) for enterprise big data analytics solutions. | You want your application to support streaming and random access scenarios.<br/><br/>You want to be able to access application data from anywhere.<br/><br/>You want to build an enterprise data lake on Azure and perform big data analytics. | | **Azure Disks** | Allows data to be persistently stored and accessed from an attached virtual hard disk. | You want to "lift and shift" applications that use native file system APIs to read and write data to persistent disks.<br/><br/>You want to store data that is not required to be accessed from outside the virtual machine to which the disk is attached. | | **Azure Queues** | Allows for asynchronous message queueing between application components. | You want to decouple application components and use asynchronous messaging to communicate between them.<br><br>For guidance around when to use Queue storage versus Service Bus queues, see [Storage queues and Service Bus queues - compared and contrasted](../../service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md). |
-| **Azure Tables** | Allow you to store structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design. | You want to store flexible datasets like user data for web applications, address books, device information, or other types of metadata your service requires. <br/><br/>For guidance around when to use Table storage versus the Azure Cosmos DB Table API, see [Developing with Azure Cosmos DB Table API and Azure Table storage](../../cosmos-db/table-support.md). |
+| **Azure Tables** | Allows you to store structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design. | You want to store flexible datasets like user data for web applications, address books, device information, or other types of metadata your service requires. <br/><br/>For guidance around when to use Table storage versus Azure Cosmos DB for Table, see [Developing with Azure Cosmos DB for Table and Azure Table storage](../../cosmos-db/table-support.md). |
| **Azure NetApp Files** | Offers a fully managed, highly available, enterprise-grade NAS service that can handle the most demanding, high-performance, low-latency workloads requiring advanced data management capabilities. | You have a difficult-to-migrate workload such as POSIX-compliant Linux and Windows applications, SAP HANA, databases, high-performance compute (HPC) infrastructure and apps, and enterprise web applications. <br></br> You require support for multiple file-storage protocols in a single service, including NFSv3, NFSv4.1, and SMB3.1.x, which enables a wide range of application lift-and-shift scenarios with no need for code changes. |

## Blob storage
For more information about Azure Queues, see [Introduction to Queues](../queues/
## Table storage
-Azure Table storage is now part of Azure Cosmos DB. To see Azure Table storage documentation, see the [Azure Table Storage Overview](../tables/table-storage-overview.md). In addition to the existing Azure Table storage service, there is a new Azure Cosmos DB Table API offering that provides throughput-optimized tables, global distribution, and automatic secondary indexes. To learn more and try out the new premium experience, see [Azure Cosmos DB Table API](../../cosmos-db/table-introduction.md).
+Azure Table storage is now part of Azure Cosmos DB. To see Azure Table storage documentation, see the [Azure Table storage overview](../tables/table-storage-overview.md). In addition to the existing Azure Table storage service, there is a new Azure Cosmos DB for Table offering that provides throughput-optimized tables, global distribution, and automatic secondary indexes. To learn more and try out the new premium experience, see [Azure Cosmos DB for Table](../../cosmos-db/table-introduction.md).
For more information about Table storage, see [Overview of Azure Table storage](../tables/table-storage-overview.md).
You can access resources in a storage account by any language that can make HTTP
## Next steps
-To get up and running with Azure Storage, see [Create a storage account](storage-account-create.md).
+To get up and running with Azure Storage, see [Create a storage account](storage-account-create.md).
storage Storage Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-private-endpoints.md
Clients in VNets with existing private endpoints face constraints when accessing
This constraint is a result of the DNS changes made when account A2 creates a private endpoint.
-### Network Security Group rules for subnets with private endpoints
-
-Currently, you can't configure [Network Security Group](../../virtual-network/network-security-groups-overview.md) (NSG) rules and user-defined routes for private endpoints. NSG rules applied to the subnet hosting the private endpoint are not applied to the private endpoint. They are applied only to other endpoints (For example: network interface controllers). A limited workaround for this issue is to implement your access rules for private endpoints on the source subnets, though this approach may require a higher management overhead.
### Copying blobs between storage accounts
storage Elastic San Batch Create Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-batch-create-sample.md
+
+ Title: Create multiple Azure Elastic SAN (preview) volumes in a batch
+description: Azure PowerShell Script Sample - Create multiple elastic SAN (preview) volumes in a batch.
+++ Last updated : 10/12/2022+++++
+# Create multiple elastic SAN (preview) volumes in a batch
+
+To simplify creating multiple volumes as a batch, you can use a .csv file with pre-filled values to create as many volumes of varying sizes as you like.
+
+Format your .csv file with five columns: **ResourceGroupName**, **ElasticSanName**, **VolumeGroupName**, **Name**, and **SizeGiB**. The following screenshot provides an example:
++
+Then you can use the following script to create your volumes.
+
+```azurepowershell
+$filePath = "D:\ElasticSan\TestCsv3.csv"
+$BatchCreationList = Import-Csv -Path $filePath
+
+foreach($creationParam in $BatchCreationList) {
+ # -AsJob can be added to make the operations parallel
+ # -ErrorAction can be added to change the behavior of the for loop when an error occurs
+ New-AzElasticSanVolume -ElasticSanName $creationParam.ElasticSanName -GroupName $creationParam.VolumeGroupName -Name $creationParam.Name -ResourceGroupName $creationParam.ResourceGroupName -SizeGiB $creationParam.SizeGiB #-ErrorAction Continue #-AsJob
+
+}
+```
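If you prefer the Azure CLI, a rough bash sketch of the same loop might look like the following. It assumes the `elastic-san` CLI extension is installed, the CSV uses the five columns described earlier, and the hypothetical file `volumes.csv` is a simple CSV without quoted fields.

```bash
# Sketch: read the five-column CSV and create one Elastic SAN volume per row with the Azure CLI.
# Assumes the elastic-san extension is installed: az extension add -n elastic-san
while IFS=, read -r rg san volgroup name size; do
  # Skip the header row
  [ "$rg" = "ResourceGroupName" ] && continue
  az elastic-san volume create -g "$rg" -e "$san" -v "$volgroup" -n "$name" --size-gib "$size"
done < volumes.csv
```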
storage Elastic San Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect.md
+
+ Title: Connect to an Azure Elastic SAN (preview) volume
+description: Learn how to connect to an Azure Elastic SAN (preview) volume.
+++ Last updated : 10/12/2022+++++
+# Connect to Elastic SAN (preview) volumes
+
+This article explains how to connect to an elastic storage area network (SAN) volume.
+
+## Prerequisites
+
+- Complete [Deploy an Elastic SAN (preview)](elastic-san-create.md)
+- An Azure Virtual Network, which you'll need to establish a connection from compute clients in Azure to your Elastic SAN volumes.
+
+## Limitations
++
+## Enable Storage service endpoint
+
+In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN.
+
+# [Portal](#tab/azure-portal)
+
+1. Navigate to your virtual network and select **Service Endpoints**.
+1. Select **+ Add** and for **Service** select **Microsoft.Storage**.
+1. Select any policies you like and the subnet that you deploy your Elastic SAN into, and then select **Add**.
++
+# [PowerShell](#tab/azure-powershell)
+
+```powershell
+$resourceGroupName = "yourResourceGroup"
+$vnetName = "yourVirtualNetwork"
+$subnetName = "yourSubnet"
+
+$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName
+
+$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
+
+$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage"
+```
++
+## Configure networking
+
+Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks.
+
+By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For more information on networking, see [Configure Elastic SAN networking (preview)](elastic-san-networking.md).
+
+# [Portal](#tab/azure-portal)
+
+1. Navigate to your SAN and select **Volume groups**.
+1. Select a volume group and select **Modify**.
+1. Add an existing virtual network and select **Save**.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow
+
+Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
+
+```
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
+```
++
+## Connect to a volume
+
+You can connect to Elastic SAN volumes over iSCSI from multiple compute clients. The following sections cover how to establish connections from a Windows client and a Linux client.
+
+### Windows
+
+Before you can connect to a volume, you'll need to get **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort** from your Azure Elastic SAN volume.
+
+Run the following commands to get these values:
+
+```azurepowershell
+# Get the target name and iSCSI portal name to connect a volume to a client
+$connectVolume = Get-AzElasticSanVolume -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $searchedVolumeGroup -Name $searchedVolume
+$connectVolume.storagetargetiqn
+$connectVolume.storagetargetportalhostname
+$connectVolume.storagetargetportalport
+```
+
+Note down the values for **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort**; you'll need them for the next commands.
+
+Replace **yourStorageTargetIQN**, **yourStorageTargetPortalHostName**, and **yourStorageTargetPortalPort** with the values you kept, then run the following commands from your compute client to connect an Elastic SAN volume.
+
+```
+# Add target IQN
+# The *s are essential, as they are default arguments
+iscsicli AddTarget $yourStorageTargetIQN * $yourStorageTargetPortalHostName $yourStorageTargetPortalPort * 0 * * * * * * * * * 0
+
+# Login
+# The *s are essential, as they are default arguments
+iscsicli LoginTarget $yourStorageTargetIQN t $yourStorageTargetPortalHostName $yourStorageTargetPortalPort Root\ISCSIPRT\0000_0 -1 * * * * * * * * * * * 0
+
+```
+
+### Linux
+
+Before you can connect to a volume, you'll need to get **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort** from your Azure resources.
+
+Run the following command to get these values:
+
+```azurecli
+az elastic-san volume-group list -e $sanName -g $resourceGroupName -v $searchedVolumeGroup -n $searchedVolume
+```
+
+You should see output that looks like the following:
+++
+Note down the values for **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort**; you'll need them for the next commands.
+
+Replace **yourStorageTargetIQN**, **yourStorageTargetPortalHostName**, and **yourStorageTargetPortalPort** with the values you kept, then run the following commands from your compute client to connect an Elastic SAN volume.
+
+```
+iscsiadm -m node --targetname **yourStorageTargetIQN** --portal **yourStorageTargetPortalHostName**:**yourStorageTargetPortalPort** -o new
+
+iscsiadm -m node --targetname **yourStorageTargetIQN** -p **yourStorageTargetPortalHostName**:**yourStorageTargetPortalPort** -l
+```
+
+## Next steps
+
+[Configure Elastic SAN networking (preview)](elastic-san-networking.md)
storage Elastic San Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-create.md
+
+ Title: Create an Azure Elastic SAN (preview)
+description: Learn how to deploy an Azure Elastic SAN (preview) with the Azure portal, Azure PowerShell module, or Azure CLI.
+++ Last updated : 10/12/2022+++++
+# Deploy an Elastic SAN (preview)
+
+This article explains how to deploy and configure an elastic storage area network (SAN).
+
+## Prerequisites
+
+- If you're using Azure PowerShell, use `Install-Module -Name Az.Elastic-SAN -Scope CurrentUser -Repository PSGallery -Force -RequiredVersion .10-preview` to install the preview module.
+- If you're using Azure CLI, install the latest version. For installation instructions, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+ - Once you've installed the latest version, run `az extension add -n elastic-san` to install the extension for Elastic SAN.
+
+## Limitations
++
+## Register for the preview
+
+Sign up for the preview at [https://aka.ms/ElasticSANPreviewSignUp](https://aka.ms/ElasticSANPreviewSignUp).
+
+If your request for access to the preview is approved, register your subscription with Microsoft.ElasticSAN resource provider and the preview feature using the following command:
+
+# [Portal](#tab/azure-portal)
+
+Use either the Azure PowerShell module or the Azure CLI to register your subscription for the preview.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Register-AzResourceProvider -ProviderNamespace Microsoft.ElasticSan
+Register-AzProviderFeature -FeatureNameAllow ElasticSanPreviewAccess -ProviderNamespace Microsoft.ElasticSan
+```
+
+It may take a few minutes for registration to complete. To confirm that you've registered, use the following command:
+
+```azurepowershell
+Get-AzResourceProvider -ProviderNamespace Microsoft.ElasticSan
+Get-AzProviderFeature -FeatureName "ElasticSanPreviewAccess" -ProviderNamespace "Microsoft.ElasticSan"
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az provider register --namespace Microsoft.ElasticSan
+az feature register --name ElasticSanPreviewAccess --namespace Microsoft.ElasticSan
+```
+
+It may take a few minutes for registration to complete. To confirm you've registered, use the following command:
+
+```azurecli
+az provider show --namespace Microsoft.ElasticSan
+az feature show --name ElasticSanPreviewAccess --namespace Microsoft.ElasticSan
+```
++
+## Create the SAN
+
+# [Portal](#tab/azure-portal)
+
+1. Sign in to the Azure portal and search for **Elastic SAN**.
+1. Select **+ Create a new SAN**
+1. On the basics page, fill out the values.
+ 1. Select the same region as your Azure virtual network and compute client.
+1. Specify the amount of base capacity you require and any additional capacity, and then select **Next**.
+
+    Increasing your SAN's base size also increases its IOPS and bandwidth. Increasing additional capacity only increases its total size (base + additional) but doesn't increase IOPS or bandwidth; however, it's cheaper than increasing the base size.
+
+1. Select **Next : Volume groups**.
+
+ :::image type="content" source="media/elastic-san-create/elastic-create-flow.png" alt-text="Screenshot of creation flow." lightbox="media/elastic-san-create/elastic-create-flow.png":::
+
+# [PowerShell](#tab/azure-powershell)
+
+The following command creates an Elastic SAN that uses locally-redundant storage. To create one that uses zone-redundant storage, replace `Premium_LRS` with `Premium_ZRS`.
+
+```azurepowershell
+## Variables
+$rgName = "yourResourceGroupName"
+## Select the same availability zone as where you plan to host your workload
+$zone = 1
+## Select the same region as your Azure virtual network
+$region = "yourRegion"
+$sanName = "desiredSANName"
+$volGroupName = "desiredVolumeGroupName"
+$volName = "desiredVolumeName"
+
+## Create the SAN, itself
+New-AzElasticSAN -ResourceGroupName $rgName -Name $sanName -AvailabilityZone $zone -Location $region -BaseSizeTib 100 -ExtendedCapacitySizeTiB 20 -SkuName Premium_LRS
+```
+# [Azure CLI](#tab/azure-cli)
+
+The following command creates an Elastic SAN that uses locally-redundant storage. To create one that uses zone-redundant storage, replace `Premium_LRS` with `Premium_ZRS`.
+
+```azurecli
+## Variables
+$sanName="yourSANNameHere"
+$resourceGroupName="yourResourceGroupNameHere"
+$sanLocation="desiredRegion"
+$volumeGroupName="desiredVolumeGroupName"
+
+az elastic-san create -n $sanName -g $resourceGroupName -l $sanLocation --base-size-tib 100 --extended-capacity-size-tib 20 --sku "{name:Premium_LRS,tier:Premium}"
+```
++
+## Create volume groups
+
+Now that you've configured the basic settings and provisioned your storage, you can create volume groups. Volume groups are a tool for managing volumes at scale. Any settings or configurations applied to a volume group apply to all volumes associated with that volume group.
+
+# [Portal](#tab/azure-portal)
+
+1. Select **+ Create volume group** and name your volume group.
+ The volume group name can't be changed once created.
+1. Select **Next : Volumes**
+
+# [PowerShell](#tab/azure-powershell)
++
+```azurepowershell
+## Create the volume group, this script only creates one.
+New-AzElasticSanVolumeGroup -ResourceGroupName $rgName -ElasticSANName $sanName -Name $volGroupName
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az elastic-san volume-group create --elastic-san-name $sanName -g $resourceGroupName -n $volumeGroupName
+```
+++
+## Create volumes
+
+Now that you've configured the SAN itself, and created at least one volume group, you can create volumes.
+
+Volumes are usable partitions of the SAN's total capacity. You must allocate a portion of that total capacity as a volume in order to use it. Only the volumes themselves can be mounted and used, not volume groups.
+
+# [Portal](#tab/azure-portal)
+
+1. Create volumes by entering a name, selecting an appropriate volume group, and entering the capacity you'd like to allocate for your volume.
+ The volume name is part of your volume's iSCSI Qualified Name, and can't be changed once created.
+1. Select **Review + create** and deploy your SAN.
+
+ :::image type="content" source="media/elastic-san-create/elastic-volume-partitions.png" alt-text="Screenshot of volume creation." lightbox="media/elastic-san-create/elastic-volume-partitions.png":::
+
+# [PowerShell](#tab/azure-powershell)
+
+In this article, we provide you the command to create a single volume. To create a batch of volumes, see [Create multiple Elastic SAN volumes](elastic-san-batch-create-sample.md).
+
+> [!IMPORTANT]
+> The volume name is part of your volume's iSCSI Qualified Name, and can't be changed once created.
+
+Replace `volumeName` with the name you'd like the volume to use, then run the following script:
+
+```azurepowershell
+## Create the volume, this command only creates one.
+New-AzElasticSanVolume -ResourceGroupName $rgName -ElasticSanName $sanName -VolumeGroupName $volGroupName -Name $volName -sizeGiB 2000
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+> [!IMPORTANT]
+> The volume name is part of your volume's iSCSI Qualified Name, and can't be changed once created.
+
+Replace `$volumeName` with the name you'd like the volume to use, then run the following script:
+
+```azurecli
+az elastic-san volume create --elastic-san-name $sanName -g $resourceGroupName -v $volumeGroupName -n $volumeName --size-gib 2000
+```
++
+## Next steps
+
+Now that you've deployed an Elastic SAN, [Connect to Elastic SAN (preview) volumes](elastic-san-connect.md).
storage Elastic San Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-delete.md
+
+ Title: Delete an Azure Elastic SAN (preview)
+description: Learn how to delete an Azure Elastic SAN (preview) with the Azure portal, Azure PowerShell module, or the Azure CLI.
+++ Last updated : 10/12/2022+++++
+# Delete an Elastic SAN (preview)
+
+To delete an elastic storage area network (SAN), you first need to disconnect every volume in your Elastic SAN (preview) from any connected hosts.
+
+## Disconnect volumes from clients
+
+### Windows
+
+To delete iSCSI connections to volumes, you'll need to get **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort** from your Azure Elastic SAN volume.
+
+Run the following commands to get these values:
+
+```azurepowershell
+# Get the target name and iSCSI portal name to connect a volume to a client
+$connectVolume = Get-AzElasticSanVolume -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $searchedVolumeGroup -Name $searchedVolume
+$connectVolume.storagetargetiqn
+$connectVolume.storagetargetportalhostname
+$connectVolume.storagetargetportalport
+```
+
+Note down the values for **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort**; you'll need them for the next commands.
+
+In your compute client, retrieve the sessionID for the Elastic SAN volumes you'd like to disconnect using `iscsicli SessionList`.
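+
+For example, the following lists the active iSCSI sessions on the client; note the session ID of the Elastic SAN target you want to disconnect:
+
+```
+iscsicli SessionList
+```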
+
+Replace **yourStorageTargetIQN**, **yourStorageTargetPortalHostName**, and **yourStorageTargetPortalPort** with the values you kept, then run the following commands from your compute client to disconnect an Elastic SAN volume.
+
+```
+iscsicli RemovePersistentTarget $yourStorageTargetIQN $yourStorageTargetPortalPort $yourStorageTargetPortalHostName
+
+iscsicli LogoutTarget <sessionID>
+
+```
+
+### Linux
+
+To delete iSCSI connections to volumes, you'll need to get **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort** from your Azure Elastic SAN volume.
+
+Run the following command to get these values:
+
+```azurecli
+az elastic-san volume show -e $sanName -g $resourceGroupName -v $searchedVolumeGroup -n $searchedVolume
+```
+
+Note down the values for **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort**; you'll need them for the next commands.
+
+Replace **yourStorageTargetIQN**, **yourStorageTargetPortalHostName**, and **yourStorageTargetPortalPort** with the values you kept, then run the following command from your compute client to disconnect an Elastic SAN volume.
+
+```
+iscsiadm --mode node --target yourStorageTargetIQN --portal yourStorageTargetPortalHostName:yourStorageTargetPortalPort --logout
+```
+
+## Delete a SAN
+
+When your SAN has no active connections to any clients, you can delete it by using the Azure portal, Azure PowerShell module, or Azure CLI.
+
+First, delete each volume.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzElasticSanVolume -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -GroupName $volumeGroupName -Name $volumeName
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az elastic-san volume delete -e $sanName -g $resourceGroupName -v $volumeGroupName -n $volumeName
+```
++
+Then, delete each volume group.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzElasticSanVolumeGroup -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -GroupName $volumeGroupName
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az elastic-san volume-group delete -e $sanName -g $resourceGroupName -n $volumeGroupName
+```
++
+Finally, delete the Elastic SAN itself.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzElasticSan -ResourceGroupName $resourceGroupName -Name $sanName
+```
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az elastic-san delete -n $sanName -g $resourceGroupName
+```
++
+## Next steps
+
+[Plan for deploying an Elastic SAN (preview)](elastic-san-planning.md)
storage Elastic San Expand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-expand.md
+
+ Title: Increase the size of an Azure Elastic SAN and its volumes (preview)
+description: Learn how to increase the size of an Azure Elastic SAN (preview) and its volumes with the Azure portal, Azure PowerShell module, or Azure CLI.
+++ Last updated : 10/12/2022+++++
+# Increase the size of an Elastic SAN (preview)
+
+This article covers increasing the size of an Elastic storage area network (preview), or of an individual volume, when you need additional storage or performance. Be sure you need the additional storage or performance before you increase the size, because decreasing the size isn't supported; this restriction prevents data loss.
+
+## Expand SAN size
+
+First, increase the size of your Elastic storage area network (SAN).
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+
+# You can update either the base size or the additional (extended) size.
+# This command updates the base size. To update the additional size instead, replace -BaseSizeTib $newBaseSizeTib with -ExtendedCapacitySizeTib $newExtendedSizeTib.
+
+Update-AzElasticSan -ResourceGroupName $resourceGroupName -Name $sanName -BaseSizeTib $newBaseSizeTib
+
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+# You can update either the base size or the additional (extended) size.
+# This command updates the base size. To update the additional size instead, replace --base-size-tib $newBaseSizeTib with --extended-capacity-size-tib $newExtendedCapacitySizeTib.
+
+az elastic-san update -n $sanName -g $resourceGroupName --base-size-tib $newBaseSizeTib
+```
+++
+## Expand volume size
+
+Once you've expanded the size of your SAN, you can either create another volume or expand the size of an existing volume. During the preview, you can only expand a volume while there's no active connection to it.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Update-AzElasticSanVolume -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volumeGroupName -Name $volumeName -sizeGib $newVolumeSize
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az elastic-san volume update -e $sanName -g $resourceGroupName -v $volumeGroupName -n $volumeName --size-gib $newVolumeSize
+```
+++
+## Next steps
+
+To create a new volume with the extra capacity you added to your SAN, see [Create volumes](elastic-san-create.md#create-volumes).
storage Elastic San Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-introduction.md
+
+ Title: Introduction to Azure Elastic SAN (preview)
+description: An overview of Azure Elastic SAN (preview), a service that enables you to create a virtual SAN to act as the storage for multiple compute options.
+++ Last updated : 10/12/2022+++++
+# What is Azure Elastic SAN? (preview)
+
+Azure Elastic storage area network (SAN) is Microsoft's answer to the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. Elastic SAN (preview) is a fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN, while also offering built-in cloud capabilities like high availability.
+
+Elastic SAN is designed for large-scale, IO-intensive workloads and top-tier databases such as SQL and MariaDB, and supports hosting those workloads on virtual machines or on containers such as Azure Kubernetes Service.
+
+## Benefits of Elastic SAN
+
+### Compatibility
+
+Azure Elastic SAN volumes can connect to a wide variety of compute resources using the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol. Because of this, rather than having to configure storage for each of your compute options, you can configure an Elastic SAN to serve as the storage solution for multiple compute options, and manage it separately from each option.
+
+### Simplified provisioning and management
+
+Elastic SAN simplifies deploying and managing storage at scale through grouping and policy enforcement. With [volume groups](#volume-groups) you can manage a large number of volumes from a single resource. For instance, you can create virtual network rules on the volume group and grant access to all your volumes.
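+
+For example, the following is a minimal sketch of applying a virtual network rule at the volume group level, using the cmdlets shown later in this documentation; the resource group, SAN, volume group names, and subnet resource ID are placeholders:
+
+```azurepowershell
+## Illustrative only: every volume in the volume group inherits this network rule.
+$rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId "<subnet-resource-id>" -Action Allow
+
+Update-AzElasticSanVolumeGroup -ResourceGroupName "<resource-group>" -ElasticSanName "<san-name>" -Name "<volume-group-name>" -NetworkAclsVirtualNetworkRule $rule
+```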
+
+### Performance
+
+With an Elastic SAN, you can scale performance up to millions of IOPS, with double-digit GB/s throughput and single-digit millisecond latency. The performance of a SAN is shared across all of its volumes; as long as the SAN's caps aren't exceeded and the volumes are large enough, each volume can scale up to 64,000 IOPS. Elastic SAN volumes connect to your clients using the [iSCSI](https://en.wikipedia.org/wiki/ISCSI) protocol, which allows them to bypass the IOPS limit of an Azure VM and offers high throughput limits.
+
+### Cost optimization and consolidation
+
+Cost optimization can be achieved with Elastic SAN since you can increase your SAN storage in bulk. You can either increase your performance along with the storage capacity, or increase the storage capacity without increasing the SAN's performance, potentially offering a lower total cost of ownership. With Elastic SAN, you generally won't need to overprovision volumes, because you share the performance of the SAN with all its volumes.
+
+## Elastic SAN resources
+
+Each Azure Elastic SAN has two internal resources: Volume groups and volumes.
+
+The following diagram illustrates the relationship and mapping of an Azure Elastic SAN's resources to the resources of an on-premises SAN:
++
+### Elastic SAN
+
+When you configure an Elastic SAN, you select the redundancy of the entire SAN and provision storage. The storage you provision determines how much performance your SAN has, and the total capacity that can be distributed to each volume within the SAN.
+
+Your Elastic SAN's name has some requirements. The name may only contain lowercase letters, numbers, hyphens and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character. The name must be between 3 and 24 characters long.
+
+### Volume groups
+
+Volume groups are management constructs that you use to manage volumes at scale. Any settings or configurations applied to a volume group, such as virtual network rules, are inherited by any volumes associated with that volume group.
+
+Your volume group's name has some requirements. The name may only contain lowercase letters, numbers and hyphens, and must begin and end with a letter or a number. Each hyphen must be preceded and followed by an alphanumeric character. The name must be between 3 and 63 characters long.
+
+### Volumes
+
+You partition the SAN's storage capacity into individual volumes. These individual volumes can be mounted to your clients with iSCSI.
+
+The name of your volume is part of its iSCSI Qualified Name (IQN). The name may only contain lowercase letters, numbers, hyphens, and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character. The name must also be between 3 and 63 characters long.
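+
+As a quick illustration of these naming rules, the following hedged PowerShell sketch checks a proposed name against the pattern described above; the function name and sample names are placeholders, and the length limits differ per resource type (3-24 for the SAN, 3-63 for volume groups and volumes):
+
+```azurepowershell
+## Illustrative only: lowercase letters and numbers, optionally separated by single hyphens
+## (and underscores where allowed), starting and ending with an alphanumeric character.
+function Test-ElasticSanResourceName {
+    param(
+        [string]$Name,
+        [int]$MinLength = 3,
+        [int]$MaxLength = 63,
+        [bool]$AllowUnderscore = $true
+    )
+    $separator = if ($AllowUnderscore) { '[-_]' } else { '-' }
+    $pattern   = '^[a-z0-9](?:' + $separator + '?[a-z0-9])*$'
+    # -cmatch keeps the comparison case-sensitive, so uppercase letters are rejected
+    return ($Name.Length -ge $MinLength -and $Name.Length -le $MaxLength -and $Name -cmatch $pattern)
+}
+
+Test-ElasticSanResourceName -Name "my_volume-01"                        # volume name: True
+Test-ElasticSanResourceName -Name "my-san01" -MaxLength 24              # Elastic SAN name: True
+Test-ElasticSanResourceName -Name "vg_name" -AllowUnderscore $false     # volume group: False (underscore not allowed)
+```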
+
+## Support for Azure Storage features
+
+The following table indicates support for Azure Storage features with Azure Elastic SAN.
+
+The status of items in this table may change over time.
+
+| Storage feature | Supported for Elastic SAN |
+|--||
+| Encryption at rest| ✔️ |
+| Encryption in transit| Γ¢ö |
+| [LRS or ZRS redundancy types](elastic-san-planning.md#redundancy)| ✔️ |
+| Private endpoints | Γ¢ö |
+| Grant network access to specific Azure virtual networks| ✔️ |
+| Soft delete | Γ¢ö |
+| Snapshots | Γ¢ö |
+
+## Next steps
+
+[Plan for deploying an Elastic SAN (preview)](elastic-san-planning.md)
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
+
+ Title: Azure Elastic SAN networking (preview)
+description: Learn how to control access to Azure Elastic SAN (preview) volume groups by allowing traffic only from specific virtual network subnets.
+++ Last updated : 10/12/2022+++++
+# Configure Elastic SAN networking (preview)
+
+Azure Elastic storage area network (SAN) allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments require, based on the type and subset of networks or resources used. When network rules are configured, only applications that request data over the specified set of networks or through the specified set of Azure resources can access an Elastic SAN (preview). Access to your SAN's volumes is limited to resources in subnets in the same Azure virtual network that your SAN's volume group is configured with.
+
+Volume groups are configured to allow access only from specific subnets. The allowed subnets can belong to a virtual network in the same subscription or in a different subscription, including subscriptions that belong to a different Azure Active Directory tenant.
+
+You must enable a [Service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) for Azure Storage within the virtual network. The service endpoint routes traffic from the virtual network through an optimal path to the Azure Storage service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the SAN that allow requests to be received from specific subnets in a virtual network. Clients granted access via these network rules must continue to meet the authorization requirements of the Elastic SAN to access the data.
+
+Each volume group supports up to 200 virtual network rules.
+
+> [!IMPORTANT]
+> If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group.
+
+## Required permissions
+
+To enable a service endpoint for Azure Storage, the user must have the appropriate permissions for the virtual network. A user who has been granted the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role can perform this operation.
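+
+For example, the following is a minimal, illustrative sketch of creating such a custom role with Azure PowerShell; the role name and assignable scope are placeholders you would adjust for your environment:
+
+```azurepowershell
+## Illustrative only: clone a built-in role definition and keep just the join action.
+$role = Get-AzRoleDefinition -Name "Reader"
+$role.Id = $null
+$role.IsCustom = $true
+$role.Name = "Service Endpoint Subnet Joiner"
+$role.Description = "Can join subnets to service endpoints."
+$role.Actions.Clear()
+$role.Actions.Add("Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action")
+$role.AssignableScopes.Clear()
+$role.AssignableScopes.Add("/subscriptions/<subscription-id>")
+New-AzRoleDefinition -Role $role
+```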
+
+An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
+
+> [!NOTE]
+> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant is currently only supported through PowerShell, CLI, and REST APIs. Such rules cannot be configured through the Azure portal, though they may be viewed in the portal.
+
+## Available virtual network regions
+
+By default, service endpoints work between virtual networks and service instances in the same Azure region. When using service endpoints with Azure Storage, service endpoints also work between virtual networks and service instances in a [paired region](../../availability-zones/cross-region-replication-azure.md). If you want to use a service endpoint to grant access to virtual networks in other regions, you must register the `AllowGlobalTagsForStorage` feature in the subscription of the virtual network. This capability is currently in public preview.
+
+Service endpoints allow continuity during a regional failover. When planning for disaster recovery during a regional outage, you should create the virtual networks in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your zone-redundant SANs.
+
+## Enabling access to virtual networks in other regions (preview)
+
+> [!IMPORTANT]
+> This capability is currently in PREVIEW.
+>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+To enable access from a virtual network that is located in another region over service endpoints, register the `AllowGlobalTagsForStorage` feature in the subscription of the virtual network.
+
+> [!NOTE]
+> For updating the existing service endpoints to access a volume group in another region, perform an [update subnet](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) operation on the subnet after registering the subscription with the `AllowGlobalTagsForStorage` feature. Similarly, to go back to the old configuration, perform an [update subnet](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) operation after deregistering the subscription with the `AllowGlobalTagsForStorage` feature.
++
+### [Portal](#tab/azure-portal)
+
+During the preview you must use either PowerShell or the Azure CLI to enable this feature.
+
+### [PowerShell](#tab/azure-powershell)
+
+- Open a Windows PowerShell command window.
+
+- Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
+
+ ```powershell
+ Connect-AzAccount
+ ```
+
+- If your identity is associated with more than one subscription, then set your active subscription to the subscription of the virtual network.
+
+ ```powershell
+ $context = Get-AzSubscription -SubscriptionId <subscription-id>
+ Set-AzContext $context
+ ```
+
+ Replace the `<subscription-id>` placeholder value with the ID of your subscription.
+
+- Register the `AllowGlobalTagsForStorage` feature by using the [Register-AzProviderFeature](/powershell/module/az.resources/register-azproviderfeature) command.
+
+ ```powershell
+ Register-AzProviderFeature -ProviderNamespace Microsoft.Network -FeatureName AllowGlobalTagsForStorage
+ ```
+
+ > [!NOTE]
+ > The registration process might not complete immediately. Verify that the feature is registered before using it.
+
+- To verify that the registration is complete, use the [Get-AzProviderFeature](/powershell/module/az.resources/get-azproviderfeature) command.
+
+ ```powershell
+ Get-AzProviderFeature -ProviderNamespace Microsoft.Network -FeatureName AllowGlobalTagsForStorage
+ ```
+
+### [Azure CLI](#tab/azure-cli)
+
+- Open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
+
+- If your identity is associated with more than one subscription, then set your active subscription to the subscription of the virtual network.
+
+ ```azurecli-interactive
+ az account set --subscription <subscription-id>
+ ```
+
+ Replace the `<subscription-id>` placeholder value with the ID of your subscription.
+
+- Register the `AllowGlobalTagsForStorage` feature by using the [az feature register](/cli/azure/feature#az-feature-register) command.
+
+ ```azurecli
+ az feature register --namespace Microsoft.Network --name AllowGlobalTagsForStorage
+ ```
+
+ > [!NOTE]
+ > The registration process might not complete immediately. Make sure to verify that the feature is registered before using it.
+
+- To verify that the registration is complete, use the [az feature](/cli/azure/feature#az-feature-show) command.
+
+ ```azurecli
+ az feature show --namespace Microsoft.Network --name AllowGlobalTagsForStorage
+ ```
+++
+## Managing virtual network rules
+
+You can manage virtual network rules for volume groups through the Azure portal, PowerShell, or CLI.
+
+> [!NOTE]
+> If you registered the `AllowGlobalTagsForStorage` feature, and you want to enable access to your volumes from a virtual network/subnet in another Azure AD tenant, or in a region other than the region of the SAN or its paired region, then you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD tenants or in regions other than the region of the storage account or its paired region, and hence cannot be used to configure access rules for virtual networks in other regions.
+
+### [Portal](#tab/azure-portal)
+
+Currently, you must use either the Azure PowerShell module or Azure CLI to manage virtual network rules for a volume group.
+
+### [PowerShell](#tab/azure-powershell)
+
+- Install the [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
+
+- List virtual network rules.
+
+ ```azurepowershell
+ $Rules = Get-AzElasticSanVolumeGroup -ResourceGroupName $rgName -ElasticSanName $sanName -Name $volGroupName
+ $Rules.NetworkAclsVirtualNetworkRule
+ ```
+
+- Enable service endpoint for Azure Storage on an existing virtual network and subnet.
+
+ ```azurepowershell
+ Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet" | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" -AddressPrefix "10.0.0.0/24" -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork
+ ```
+
+- Add a network rule for a virtual network and subnet.
+
+ ```azurepowershell
+ $rule1 = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId <resourceIDHere> -Action Allow
+
+ Update-AzElasticSanVolumeGroup -ResourceGroupName $rgName -ElasticSanName $sanName -Name $volGroupName -NetworkAclsVirtualNetworkRule $rule1
+ ```
+
+ > [!TIP]
+ > To add a network rule for a subnet in a virtual network belonging to another Azure AD tenant, use a fully qualified **VirtualNetworkResourceId** parameter in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name".
+
+- Remove a virtual network rule.
+
+ ```azurepowershell
+ ## You can remove a virtual network rule by object, by resource ID, or by removing all the rules in a volume group
+ ### remove by networkRule object
+ Remove-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName myRGName -ElasticSanName mySANName -VolumeGroupName myVolGroupName -NetworkAclsVirtualNetworkRule $virtualNetworkRule1,$virtualNetworkRule2
+ ### remove by networkRuleResourceId
+ Remove-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName myRGName -ElasticSanName mySANName -VolumeGroupName myVolGroupName -NetworkAclsVirtualNetworkResourceId "myResourceID"
+ ### Remove all network rules in a volume group by pipeline
+ ((Get-AzElasticSanVolumeGroup -ResourceGroupName myRGName -ElasticSanName mySANName -VolumeGroupName myVolGroupName).NetworkAclsVirtualNetworkRule) | Remove-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName myRGName -ElasticSanName mySANName -VolumeGroupName myVolGroupName
+ ```
+
+### [Azure CLI](#tab/azure-cli)
+
+- Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli).
+
+- List information for a particular volume group, including its virtual network rules.
+
+ ```azurecli
+ az elastic-san volume-group show -e $sanName -g $resourceGroupName -n $volumeGroupName
+ ```
+
+- Enable service endpoint for Azure Storage on an existing virtual network and subnet.
+
+ ```azurecli
+ az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage"
+ ```
+
+- Add a network rule for a virtual network and subnet.
+
+ > [!TIP]
+   > To add a rule for a subnet in a virtual network belonging to another Azure AD tenant, use a fully qualified subnet ID in the form `/subscriptions/<subscription-ID>/resourceGroups/<resourceGroup-Name>/providers/Microsoft.Network/virtualNetworks/<vNet-name>/subnets/<subnet-name>`.
+ >
+ > You can use the **subscription** parameter to retrieve the subnet ID for a virtual network belonging to another Azure AD tenant.
+
+ ```azurecli
+ az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
+ ```
+
+- Remove a network rule. The following command removes the first network rule; modify it to remove the network rule you'd like to remove.
+
+ ```azurecli
+ az elastic-san volume-group update -e $sanName -g $resourceGroupName -n $volumeGroupName --network-acls virtual-network-rules[1]=null
+ ```
++++
+## Next steps
+
+[Plan for deploying an Elastic SAN (preview)](elastic-san-planning.md)
storage Elastic San Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-planning.md
+
+ Title: Planning for an Azure Elastic SAN (preview)
+description: Understand planning for an Azure Elastic SAN deployment. Learn about storage capacity, performance, redundancy, and encryption.
+++ Last updated : 10/12/2022+++++
+# Plan for deploying an Elastic SAN (preview)
+
+There are three main aspects to an elastic storage area network (SAN): the SAN itself, volume groups, and volumes. When deploying a SAN, you make selections while configuring the SAN, including the redundancy of the entire SAN and how much performance and storage the SAN has. Then you create volume groups that are used to manage volumes at scale. Any settings applied to a volume group are inherited by volumes inside that volume group. Finally, you partition the storage capacity that was allocated at the SAN level into individual volumes.
+
+Before deploying an Elastic SAN (preview), consider the following:
+
+- How much storage do you need?
+- What level of performance do you need?
+- What type of redundancy do you require?
+
+Answering those three questions can help you to successfully provision a SAN that meets your needs.
+
+## Storage and performance
+
+There are two layers when it comes to performance and storage: the total storage and performance that an Elastic SAN has, and the performance and storage of individual volumes.
+
+### Elastic SAN
+
+There are two ways to provision storage for an Elastic SAN: you can provision either base capacity or additional capacity. Each TiB of base capacity also increases your SAN's IOPS and throughput (MB/s), but costs more than each TiB of additional capacity. Increasing additional capacity doesn't increase your SAN's IOPS or throughput (MB/s).
+
+When provisioning storage for an Elastic SAN, consider how much storage you require and how much performance you require. Using a combination of base capacity and additional capacity to meet these requirements allows you to optimize your costs. For example, if you needed 100 TiB of storage but only needed 250,000 IOPS and 4,000 MB/s, you could provision 50 TiB in your base capacity and 50 TiB in your additional capacity.
+
+### Volumes
+
+You create volumes from the storage that you provisioned to your Elastic SAN. When you create a volume, think of it like partitioning a section of the storage of your Elastic SAN. The maximum performance of an individual volume is determined by the amount of storage allocated to it. Individual volumes can have fairly high IOPS and throughput, but the total IOPS and throughput of all your volumes can't exceed the total IOPS and throughput your SAN has.
+
+Take the same example of a 100 TiB SAN that has 250,000 IOPS and 4,000 MB/s, and say that SAN has 100 volumes of 1 TiB each. You could have three of these volumes operating at their maximum performance (64,000 IOPS, 1,024 MB/s), since that would stay below the SAN's limits. But if four or five volumes all needed to operate at maximum at the same time, they wouldn't be able to. Instead, the performance of the SAN would be split evenly among them.
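+
+For illustration only, here's a hedged PowerShell sketch of the arithmetic in the examples above, using the published rates of 5,000 IOPS and 80 MB/s per base TiB (only base capacity contributes to performance); the variable names are placeholders:
+
+```azurepowershell
+$baseTiB       = 50   # base (performance) capacity
+$additionalTiB = 50   # additional (capacity-only) storage
+
+$totalTiB = $baseTiB + $additionalTiB   # 100 TiB of total storage
+$sanIops  = $baseTiB * 5000             # 250,000 IOPS
+$sanMBps  = $baseTiB * 80               # 4,000 MB/s
+
+## Three 1 TiB volumes at their individual maximums fit within the SAN's limits...
+(3 * 64000) -le $sanIops   # True  (192,000 IOPS)
+(3 * 1024)  -le $sanMBps   # True  (3,072 MB/s)
+
+## ...but four volumes at maximum would exceed them, so the SAN's performance is shared instead.
+(4 * 64000) -le $sanIops   # False (256,000 IOPS)
+```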
+
+## Networking
+
+In preview, Elastic SAN supports a public endpoint from selected virtual networks, restricting access to the specified virtual networks. You configure volume groups to allow network access only from specific virtual network subnets. Once a volume group is configured to allow access from a subnet, this configuration is inherited by all volumes belonging to that volume group. You can then mount volumes from any clients in the subnet, using the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol. You must first enable a [service endpoint for Azure Storage](../../virtual-network/virtual-network-service-endpoints-overview.md) in your virtual network before setting up the network rule on the volume group.
+
+## Redundancy
+
+To protect the data in your Elastic SAN against data loss or corruption, all SANs store multiple copies of your data as it's written. Depending on the requirements of your workload, you can select additional degrees of redundancy. The following data redundancy options are currently supported:
+
+- **Locally-redundant storage (LRS)**: With LRS, every SAN is stored three times within an Azure storage cluster. This protects against loss of data due to hardware faults, such as a bad disk drive. However, if a disaster such as fire or flooding occurs within the data center, all replicas of an Elastic SAN using LRS may be lost or unrecoverable.
+- **Zone-redundant storage (ZRS)**: With ZRS, three copies of each SAN are stored in three distinct and physically isolated storage clusters in different Azure *availability zones*. Availability zones are unique physical locations within an Azure region. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking. A write request to storage that is using ZRS happens synchronously. The write operation only returns successfully after the data is written to all replicas across the three availability zones.
+
+## Encryption
+
+All data stored in an Elastic SAN is encrypted at rest using Azure storage service encryption (SSE). Storage service encryption works similarly to BitLocker on Windows: data is encrypted beneath the file system level. SSE protects your data and helps you meet your organizational security and compliance commitments. Data stored in Elastic SAN is encrypted with Microsoft-managed keys. With Microsoft-managed keys, Microsoft holds the keys to encrypt and decrypt the data, and is responsible for rotating them regularly.
+
+Data in an Azure Elastic SAN is encrypted and decrypted transparently using 256-bit [AES encryption](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard), one of the strongest block ciphers available, and is FIPS 140-2 compliant. Encryption is enabled for all Elastic SANs and can't be disabled. Because your data is secured by default, you don't need to modify your code, or applications to take advantage of SSE. There's no extra cost for SSE.
+
+For more information about the cryptographic modules underlying SSE, see [Cryptography API: Next Generation](/windows/desktop/seccng/cng-portal).
+
+## Protocol compatibility
+
+### iSCSI support
+
+Elastic SAN supports the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol. The following iSCSI commands are currently supported:
+
+- TEST UNIT READY
+- REQUEST SENSE
+- INQUIRY
+- REPORT LUNS
+- MODE SENSE
+- READ CAPACITY (10)
+- READ CAPACITY (16)
+- READ (6)
+- READ (10)
+- READ (16)
+- WRITE (6)
+- WRITE (10)
+- WRITE (16)
+- WRITE VERIFY (10)
+- WRITE VERIFY (16)
+- VERIFY (10)
+- VERIFY (16)
+- SYNCHRONIZE CACHE (10)
+- SYNCHRONIZE CACHE (16)
+
+The following iSCSI features aren't currently supported:
+- CHAP authorization
+- Initiator registration
+- iSCSI Error Recovery Levels 1 and 2
+- ESXi iSCSI flow control
+- More than one LUN per iSCSI target
+- Multiple connections per session (MC/S)
+
+## Next steps
+
+[Deploy an Elastic SAN (preview)](elastic-san-create.md)
storage Elastic San Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-scale-targets.md
+
+ Title: Elastic SAN (preview) scalability and performance targets
+description: Learn about the capacity, IOPS, and throughput rates for Azure Elastic SAN.
+++ Last updated : 10/12/2022+++++
+# Elastic SAN (preview) scale targets
+
+There are three main components to an elastic storage area network (SAN): the SAN itself, volume groups, and volumes.
+
+## The Elastic SAN
+
+An Elastic SAN (preview) has three attributes that determine its performance: total capacity, IOPS, and throughput.
+
+### Capacity
+
+The total capacity of your Elastic SAN is determined by two different capacities: the base capacity and the additional capacity. Increasing the base capacity also increases the SAN's IOPS and throughput, but is more costly than increasing the additional capacity. Increasing additional capacity doesn't increase IOPS or throughput.
+
+The maximum total capacity of your SAN is determined by the region where it's located and by its redundancy configuration. The minimum total capacity for an Elastic SAN is 64 tebibytes (TiB). Base or additional capacity can be increased in increments of 1 TiB.
+
+### IOPS
+
+The IOPS of an Elastic SAN increases by 5,000 per base TiB. So an Elastic SAN that has 6 TiB of base capacity can provide up to 30,000 IOPS. That same SAN would still provide 30,000 IOPS whether it had 50 TiB or 500 TiB of additional capacity, because the SAN's performance is determined only by the base capacity. The IOPS of an Elastic SAN are distributed among all its volumes.
+
+### Throughput
+
+The throughput of an Elastic SAN increases by 80 MB/s per base TiB. So an Elastic SAN that has 6 TiB of base capacity can provide up to 480 MB/s. That same SAN would still provide 480 MB/s of throughput whether it had 50 TiB or 500 TiB of additional capacity, because the SAN's performance is determined only by the base capacity. The throughput of an Elastic SAN is distributed among all its volumes.
+
+### Elastic SAN scale targets
+
+The appliance scale targets vary depending on region and redundancy of the SAN itself. The following table breaks out the scale targets based on whether the SAN's [redundancy](elastic-san-planning.md#redundancy) is set to locally-redundant storage (LRS) or zone-redundant storage (ZRS), and what region the SAN is in.
+
+#### LRS
++
+|Resource |France Central |Southeast Asia |
+|---------|---------|---------|
+|Maximum number of Elastic SANs that can be deployed per subscription per region |5 |5 |
+|Maximum total capacity (TiB) |100 |100 |
+|Maximum base capacity (TiB) |100 |100 |
+|Minimum total capacity (TiB) |64 |64 |
+|Maximum total IOPS |500,000 |500,000 |
+|Maximum total throughput (MB/s) |8,000 |8,000 |
++
+#### ZRS
+
+ZRS is only available in France Central.
+
+|Resource |France Central |
+|---------|---------|
+|Maximum number of Elastic SANs that can be deployed per subscription per region |5 |
+|Maximum total capacity (TiB) |200 |
+|Maximum base capacity (TiB) |100 |
+|Minimum total capacity (TiB) |64 |
+|Maximum total IOPS |500,000 |
+|Maximum total throughput (MB/s) |8,000 |
++
+## Volume group
+
+An Elastic SAN can have a maximum of 20 volume groups, and a volume group can contain up to 1,000 volumes.
+
+## Volume
+
+The performance of an individual volume is determined by its capacity. The maximum IOPS of a volume increases by 750 per GiB, up to a maximum of 64,000 IOPS. The maximum throughput increases by 60 MB/s per GiB, up to a maximum of 1,024 MB/s. A volume needs at least 86 GiB to be capable of 64,000 IOPS, and at least 18 GiB to be capable of the maximum 1,024 MB/s. The combined IOPS and throughput of all your volumes can't exceed the IOPS and throughput of your SAN.
+
+### Volume scale targets
+
+|Supported capacities |Maximum potential IOPS |Maximum potential throughput (MB/s) |
+||||
+|1 GiB - 64 TiB |750 - 64,000 (increases by 750 per GiB) |60 - 1,024 (increases by 60 per GiB) |
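+
+As a rough, illustrative sketch of the per-volume math above (the function name is a placeholder, not part of any Azure module):
+
+```azurepowershell
+## Maximum performance of a single volume, based on its provisioned size in GiB.
+function Get-VolumeLimits {
+    param([int]$SizeGiB)
+    [pscustomobject]@{
+        SizeGiB           = $SizeGiB
+        MaxIops           = [math]::Min(750 * $SizeGiB, 64000)
+        MaxThroughputMBps = [math]::Min(60 * $SizeGiB, 1024)
+    }
+}
+
+Get-VolumeLimits -SizeGiB 10     # 7,500 IOPS, 600 MB/s
+Get-VolumeLimits -SizeGiB 86     # reaches the 64,000 IOPS cap
+Get-VolumeLimits -SizeGiB 2000   # capped at 64,000 IOPS and 1,024 MB/s
+```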
+
+## Next steps
+
+[Plan for deploying an Elastic SAN (preview)](elastic-san-planning.md)
storage File Sync Firewall And Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-firewall-and-proxy.md
Title: Azure File Sync on-premises firewall and proxy settings | Microsoft Docs
+ Title: Azure File Sync on-premises firewall and proxy settings
description: Understand Azure File Sync on-premises proxy and firewall settings. Review configuration details for ports, networks, and special connections to Azure. Previously updated : 04/13/2021 Last updated : 10/12/2022
Azure File Sync connects your on-premises servers to Azure Files, enabling multi-site synchronization and cloud tiering features. As such, an on-premises server must be connected to the internet. An IT admin needs to decide the best path for the server to reach into Azure cloud services.
-This article will provide insight into specific requirements and options available to successfully and securely connect your server to Azure File Sync.
+This article provides insight into specific requirements and options available to successfully and securely connect your server to Azure File Sync.
We recommend reading [Azure File Sync networking considerations](file-sync-networking-overview.md) prior to reading this how to guide. ## Overview
-Azure File Sync acts as an orchestration service between your Windows Server, your Azure file share, and several other Azure services to sync data as described in your sync group. For Azure File Sync to work correctly, you will need to configure your servers to communicate with the following Azure
+Azure File Sync acts as an orchestration service between your Windows Server, your Azure file share, and several other Azure services to sync data as described in your sync group. For Azure File Sync to work correctly, you'll need to configure your servers to communicate with the following Azure
- Azure Storage - Azure File Sync
Azure File Sync acts as an orchestration service between your Windows Server, yo
- Authentication services > [!NOTE]
-> The Azure File Sync agent on Windows Server initiates all requests to cloud services which results in only having to consider outbound traffic from a firewall perspective. <br /> No Azure service initiates a connection to the Azure File Sync agent.
+> The Azure File Sync agent on Windows Server initiates all requests to cloud services which results in only having to consider outbound traffic from a firewall perspective. No Azure service initiates a connection to the Azure File Sync agent.
## Ports
-Azure File Sync moves file data and metadata exclusively over HTTPS and requires port 443 to be open outbound.
-As a result all traffic is encrypted.
+Azure File Sync moves file data and metadata exclusively over HTTPS and requires port 443 to be open outbound. As a result, all traffic is encrypted.
## Networks and special connections to Azure
The following table describes the required domains for communication:
> When allowing traffic to &ast;.afs.azure.net, traffic is only possible to the sync service. There are no other Microsoft services using this domain. > When allowing traffic to &ast;.one.microsoft.com, traffic to more than just the sync service is possible from the server. There are many more Microsoft services available under subdomains.
-If &ast;.afs.azure.net or &ast;.one.microsoft.com is too broad, you can limit the server's communication by allowing communication to only explicit regional instances of the Azure Files Sync service. Which instance(s) to choose depends on the region of the storage sync service you have deployed and registered the server to. That region is called "Primary endpoint URL" in the table below.
+If &ast;.afs.azure.net or &ast;.one.microsoft.com is too broad, you can limit the server's communication by allowing communication to only explicit regional instances of the Azure File Sync service. Which instance(s) to choose depends on the region of the storage sync service you have deployed and registered the server to. That region is called "Primary endpoint URL" in the table below.
For business continuity and disaster recovery (BCDR) reasons you may have created your Azure file shares in a storage account that is configured for geo-redundant storage (GRS). If that is the case, your Azure file shares will fail over to the paired region in the event of a lasting regional outage. Azure File Sync uses the same regional pairings as storage. So if you use GRS storage accounts, you need to enable additional URLs to allow your server to talk to the paired region for Azure File Sync. The table below calls this "Paired region". Additionally, there is a traffic manager profile URL that needs to be enabled as well. This will ensure network traffic can be seamlessly re-routed to the paired region in the event of a fail-over and is called "Discovery URL" in the table below.
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
Title: Release notes for the Azure File Sync agent | Microsoft Docs
+ Title: Release notes for the Azure File Sync agent
description: Read the release notes for the Azure File Sync agent, which lets you centralize your organization's file shares in Azure Files.
The following release notes are for version 15.0.0.0 of the Azure File Sync agen
- Azure File Sync has a cloud change enumeration job that runs every 24 hours to detect changes made directly in the Azure file share and sync those changes to servers in your sync groups. In the v14 release, we made improvements to reduce the number of transactions when this job runs and in the v15 release we made further improvements. The transaction cost is also more predictable, each job will now produce 1 List transaction per directory, per day. - View Cloud Tiering status for a server endpoint or volume
- - The Get-StorageSyncCloudTieringStatus cmdlet will show cloud tiering status for a specific server endpoint or for a specific volume (depending on path specified). The cmdlet will show current policies, current distribution of tiered vs. fully downloaded data, and last tiering session statistics if the server endpoint path is specified. If the volume path is specified, it will show the effective volume free space policy, the server endpoints located on that volume, and whether these server endpoints have cloud tiering enabled.
+ - The `Get-StorageSyncCloudTieringStatus` cmdlet will show cloud tiering status for a specific server endpoint or for a specific volume (depending on path specified). The cmdlet will show current policies, current distribution of tiered vs. fully downloaded data, and last tiering session statistics if the server endpoint path is specified. If the volume path is specified, it will show the effective volume free space policy, the server endpoints located on that volume, and whether these server endpoints have cloud tiering enabled.
To get the cloud tiering status for a server endpoint or volume, run the following PowerShell commands: ```powershell
The following release notes are for version 15.0.0.0 of the Azure File Sync agen
Get-StorageSyncCloudTieringStatus -Path <server endpoint path or volume> ``` - New diagnostic and troubleshooting tool
- - The Debug-StorageSyncServer cmdlet will diagnose common issues like certificate misconfiguration and incorrect server time. Also, we have simplified Azure Files Sync troubleshooting by merging the functionality of some of existing scripts and cmdlets (AFSDiag.ps1, FileSyncErrorsReport.ps1, Test-StorageSyncNetworkConnectivity) into the Debug-StorageSyncServer cmdlet.
+ - The Debug-StorageSyncServer cmdlet will diagnose common issues like certificate misconfiguration and incorrect server time. Also, we have simplified Azure File Sync troubleshooting by merging the functionality of some of existing scripts and cmdlets (AFSDiag.ps1, FileSyncErrorsReport.ps1, Test-StorageSyncNetworkConnectivity) into the `Debug-StorageSyncServer` cmdlet.
To run diagnostics on the server, run the following PowerShell commands: ```powershell
storage File Sync Troubleshoot Cloud Tiering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-cloud-tiering.md
If files fail to tier to Azure Files:
| 0x80c83053 | -2134364077 | ECS_E_CREATE_SV_FILE_DELETED | The file failed to tier because it was deleted in the Azure file share. | No action required. The file should be deleted on the server when the next download sync session runs. | | 0x80c8600e | -2134351858 | ECS_E_AZURE_SERVER_BUSY | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. | | 0x80072ee7 | -2147012889 | WININET_E_NAME_NOT_RESOLVED | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. |
-| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file failed to tier due to access denied error. This error can occur if the file is located on a DFS-R read-only replication folder. | Azure Files Sync does not support server endpoints on DFS-R read-only replication folders. See [planning guide](file-sync-planning.md#distributed-file-system-dfs) for more information. |
+| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file failed to tier due to access denied error. This error can occur if the file is located on a DFS-R read-only replication folder. | Azure File Sync doesn't support server endpoints on DFS-R read-only replication folders. See [planning guide](file-sync-planning.md#distributed-file-system-dfs) for more information. |
| 0x80072efe | -2147012866 | WININET_E_CONNECTION_ABORTED | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. | | 0x80c80261 | -2134375839 | ECS_E_GHOSTING_MIN_FILE_SIZE | The file failed to tier because the file size is less than the supported size. | The minimum supported file size is based on the file system cluster size (double file system cluster size). For example, if the file system cluster size is 4 KiB, the minimum file size is 8 KiB. | | 0x80c83007 | -2134364153 | ECS_E_STORAGE_ERROR | The file failed to tier due to an Azure storage issue. | If the error persists, open a support request. |
storage File Sync Troubleshoot Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-errors.md
To see these errors, run the **FileSyncErrorsReport.ps1** PowerShell script (loc
| 0x80c8603e | -2134351810 | ECS_E_AZURE_STORAGE_SHARE_SIZE_LIMIT_REACHED | The file cannot be synced because the Azure file share limit is reached. | To resolve this issue, see [You reached the Azure file share storage limit](?tabs=portal1%252cazure-portal#-2134351810) section in the troubleshooting guide. | | 0x80c83008 | -2134364152 | ECS_E_CANNOT_CREATE_AZURE_STAGED_FILE | The file cannot be synced because the Azure file share limit is reached. | To resolve this issue, see [You reached the Azure file share storage limit](?tabs=portal1%252cazure-portal#-2134351810) section in the troubleshooting guide. | | 0x80c8027C | -2134375812 | ECS_E_ACCESS_DENIED_EFS | The file is encrypted by an unsupported solution (like NTFS EFS). | Decrypt the file and use a supported encryption solution. For a list of support solutions, see the [Encryption](file-sync-planning.md#encryption) section of the planning guide. |
-| 0x80c80283 | -2160591491 | ECS_E_ACCESS_DENIED_DFSRRO | The file is located on a DFS-R read-only replication folder. | File is located on a DFS-R read-only replication folder. Azure Files Sync does not support server endpoints on DFS-R read-only replication folders. See [planning guide](file-sync-planning.md#distributed-file-system-dfs) for more information. |
+| 0x80c80283 | -2160591491 | ECS_E_ACCESS_DENIED_DFSRRO | The file is located on a DFS-R read-only replication folder. | File is located on a DFS-R read-only replication folder. Azure File Sync doesn't support server endpoints on DFS-R read-only replication folders. See [planning guide](file-sync-planning.md#distributed-file-system-dfs) for more information. |
| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file has a delete pending state. | No action required. File will be deleted once all open file handles are closed. |
-| 0x80c86044 | -2134351804 | ECS_E_AZURE_AUTHORIZATION_FAILED | The file cannot be synced because the firewall and virtual network settings on the storage account are enabled and the server does not have access to the storage account. | Add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide. |
+| 0x80c86044 | -2134351804 | ECS_E_AZURE_AUTHORIZATION_FAILED | The file cannot be synced because the firewall and virtual network settings on the storage account are enabled, and the server doesn't have access to the storage account. | Add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide. |
| 0x80c80243 | -2134375869 | ECS_E_SECURITY_DESCRIPTOR_SIZE_TOO_LARGE | The file cannot be synced because the security descriptor size exceeds the 64 KiB limit. | To resolve this issue, remove access control entries (ACE) on the file to reduce the security descriptor size. | | 0x8000ffff | -2147418113 | E_UNEXPECTED | The file cannot be synced due to an unexpected error. | If the error persists for several days, please open a support case. | | 0x80070020 | -2147024864 | ERROR_SHARING_VIOLATION | The file cannot be synced because it's in use. The file will be synced when it's no longer in use. | No action required. |
storage Storage Files Configure P2s Vpn Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-p2s-vpn-linux.md
vpnType=$(xmllint --xpath "string(/VpnProfile/VpnType)" Generic/VpnSettings.xml
routes=$(xmllint --xpath "string(/VpnProfile/Routes)" Generic/VpnSettings.xml) sudo cp "${installDir}ipsec.conf" "${installDir}ipsec.conf.backup"
-sudo cp "Generic/VpnServerRoot.cer" "${installDir}ipsec.d/cacerts"
+sudo cp "Generic/VpnServerRoot.cer_0" "${installDir}ipsec.d/cacerts"
sudo cp "${username}.p12" "${installDir}ipsec.d/private" echo -e "\nconn $virtualNetworkName" | sudo tee -a "${installDir}ipsec.conf" >
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
description: Learn how to enable identity-based Kerberos authentication for hybr
Previously updated : 09/15/2022 Last updated : 10/12/2022
Use one of the following three methods:
- Configure this group policy on the client(s): `Administrative Templates\System\Kerberos\Allow retrieving the Azure AD Kerberos Ticket Granting Ticket during logon` - Create the following registry value on the client(s): `reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters /v CloudKerberosTicketRetrievalEnabled /t REG_DWORD /d 1`
+Changes are not instant, and require a policy refresh or a reboot to take effect.
+ ## Disable Azure AD authentication on your storage account If you want to use another authentication method, you can disable Azure AD authentication on your storage account by using the Azure portal.
storage Storage Files Networking Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-dns.md
Last updated 07/02/2021 -+ # Configuring DNS forwarding for Azure Files
By default, `storageaccount.file.core.windows.net` resolves to the public endpoi
Since our ultimate objective is to access the Azure file shares hosted within the storage account from on-premises using a network tunnel such as a VPN or ExpressRoute connection, you must configure your on-premises DNS servers to forward requests made to the Azure Files service to the Azure private DNS service. To accomplish this, you need to set up *conditional forwarding* of `*.core.windows.net` (or the appropriate storage endpoint suffix for the US Government, Germany, or China national clouds) to a DNS server hosted within your Azure virtual network. This DNS server will then recursively forward the request on to Azure's private DNS service that will resolve the fully qualified domain name of the storage account to the appropriate private IP address.
-Configuring DNS forwarding for Azure Files will require running a virtual machine to host a DNS server to forward the requests, however this is a one time step for all the Azure file shares hosted within your virtual network. Additionally, this is not an exclusive requirement to Azure Files - any Azure service that supports private endpoints that you want to access from on-premises can make use of the DNS forwarding you will configure in this guide: Azure Blob storage, SQL Azure, Cosmos DB, etc.
+Configuring DNS forwarding for Azure Files will require running a virtual machine to host a DNS server to forward the requests, however this is a one time step for all the Azure file shares hosted within your virtual network. Additionally, this is not an exclusive requirement to Azure Files - any Azure service that supports private endpoints that you want to access from on-premises can make use of the DNS forwarding you will configure in this guide, including Azure Blob storage, Azure SQL, and Azure Cosmos DB.
This guide shows the steps for configuring DNS forwarding for the Azure storage endpoint, so in addition to Azure Files, DNS name resolution requests for all of the other Azure storage services (Azure Blob storage, Azure Table storage, Azure Queue storage, etc.) will be forwarded to Azure's private DNS service. Additional endpoints for other Azure services can also be added if desired. DNS forwarding back to your on-premises DNS servers will also be configured, enabling cloud resources within your virtual network (such as a DFS-N server) to resolve on-premises machine names.
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
description: Learn about the capacity, IOPS, and throughput rates for Azure file
Previously updated : 10/04/2022 Last updated : 10/12/2022
File scale targets apply to individual files stored in Azure file shares.
|-|-|-| | Maximum file size | 4 TiB | 4 TiB | | Maximum concurrent request rate | 1,000 IOPS | Up to 8,000<sup>1</sup> |
-| Maximum ingress for a file | 60 MiB/sec | 200 MiB/sec (Up to 1 GiB/s with SMB Multichannel)<sup>2</sup>|
+| Maximum ingress for a file | 60 MiB/sec | 200 MiB/sec (Up to 1 GiB/s with SMB Multichannel)<sup>2</sup> |
| Maximum egress for a file | 60 MiB/sec | 300 MiB/sec (Up to 1 GiB/s with SMB Multichannel)<sup>2</sup> |
-| Maximum concurrent handles per file, directory, and share root | 2,000 handles | 2,000 handles |
+| Maximum concurrent handles per file, directory, and share root<sup>3</sup> | 2,000 handles | 2,000 handles |
<sup>1 Applies to read and write IOs (typically smaller IO sizes less than or equal to 64 KiB). Metadata operations, other than reads and writes, may be lower.</sup>+ <sup>2 Subject to machine network limits, available bandwidth, IO sizes, queue depth, and other factors. For details see [SMB Multichannel performance](./storage-files-smb-multichannel-performance.md).</sup>
+<sup>3 Azure Files supports 2,000 open handles per share, and in practice can go higher. However, if an application keeps an open handle on the root of the share, the share root limit will be reached before the per-file or per-directory limit is reached.</sup>
+
## Azure File Sync scale targets

The following table indicates which targets are soft, representing the Microsoft-tested boundary, and which are hard, indicating an enforced maximum:
storage Storage Troubleshoot Windows File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
description: Troubleshoot problems with SMB Azure file shares in Windows. See co
Previously updated : 09/28/2022 Last updated : 10/12/2022
To view open handles for a file share, directory or file, use the [Get-AzStorage
To close open handles for a file share, directory or file, use the [Close-AzStorageFileHandle](/powershell/module/az.storage/close-azstoragefilehandle) PowerShell cmdlet.
-> [!Note]
-> The Get-AzStorageFileHandle and Close-AzStorageFileHandle cmdlets are included in Az PowerShell module version 2.4 or later. To install the latest Az PowerShell module, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
+> [!Note]
+> The `Get-AzStorageFileHandle` and `Close-AzStorageFileHandle` cmdlets are included in Az PowerShell module version 2.4 or later. To install the latest Az PowerShell module, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
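For example, the following sketch lists and then closes all open handles on a share; the storage account name, key, and share name are hypothetical placeholders:

```powershell
# Hypothetical account and share names; replace with your own.
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<account-key>"

# List the open handles on the share, recursing into directories and files.
Get-AzStorageFileHandle -Context $ctx -ShareName "myshare" -Recursive

# Close all open handles on the share (use -Path to target a specific directory or file).
Close-AzStorageFileHandle -Context $ctx -ShareName "myshare" -CloseAll -Recursive
```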
+
+<a id="networkerror59"></a>
+## ERROR_UNEXP_NET_ERR (59) when doing any operations on a handle
+
+### Cause
+
+If you cache or hold a large number of open handles for a long time, you might see this server-side failure because of throttling. When a client caches a large number of handles, many of those handles can go into a reconnect phase at the same time, building up a queue on the server that needs to be throttled. The retry logic and the back-end throttling for reconnects take longer than the client's timeout. This situation manifests itself as a client not being able to use an existing handle for any operation, with all operations failing with ERROR_UNEXP_NET_ERR (59).
+
+There are also edge cases in which the client handle becomes disconnected from the server (for example, a network outage lasting several minutes) that could cause this error.
+
+### Solution
+
+Don't keep a large number of handles cached. Close handles and then retry. See the preceding troubleshooting entry for the PowerShell cmdlets used to view and close open handles.
<a id="noaaccessfailureportal"></a>
-## Error "No access" when you try to access or delete an Azure File Share
+## Error "No access" when you try to access or delete an Azure File Share
When you try to access or delete an Azure file share in the portal, you might receive the following error: No access
-Error code: 403
+Error code: 403
### Cause 1: Virtual network or firewall rules are enabled on the storage account
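As a hedged sketch of how you might inspect and adjust these rules with the Az PowerShell module (resource names and the client IP address below are hypothetical):

```powershell
# Hypothetical names; replace with your own.
$resourceGroupName  = "my-resource-group"
$storageAccountName = "mystorageaccount"

# View the current network rules (default action, IP rules, and virtual network rules).
Get-AzStorageAccountNetworkRuleSet -ResourceGroupName $resourceGroupName -Name $storageAccountName

# Example: allow a specific client public IP address rather than opening the whole account.
Add-AzStorageAccountNetworkRule -ResourceGroupName $resourceGroupName -Name $storageAccountName -IPAddressOrRange "203.0.113.10"
```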
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/analytics/partner-overview.md
Last updated 04/12/2021
-+ # Azure Storage analytics and big data partners
This article highlights Microsoft partner companies that are integrated with Azu
![Informatica company logo](./media/informatica-logo.png) |**Informatica**<br>Informatica's enterprise-scale, cloud-native data management platform automates and accelerates the discovery, delivery, quality, and governance of enterprise data on Azure. AI-powered, metadata-driven data integration, and data quality and governance capabilities enable you to modernize analytics and accelerate your move to a data warehouse or to a data lake on Azure.|[Partner page](https://www.informatica.com/azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.annualiics?tab=Overview)|
![Qlik company logo](./media/qlik-logo.png) |**Qlik**<br>Qlik helps accelerate BI and ML initiatives with a scalable data integration and automation solution. Qlik also goes beyond migration tools to help drive agility throughout the data and analytics process with automated data pipelines and a governed, self-service catalog.|[Partner page](https://www.qlik.com/us/products/technology/qlik-microsoft-azure-migration)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qlik.qlik_data_integration_platform)|
![Starburst logo](./media/starburst-logo.jpg) |**Starburst**<br>Starburst unlocks the value of data by making it fast and easy to access anywhere. Starburst queries data across any database, making it actionable for data-driven organizations. With Starburst, teams can prevent vendor lock-in, and use the existing tools that work for their business.|[Partner page](https://www.starburst.io/platform/deployment-options/starburst-on-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/starburstdatainc1582306810515.starburst-enterprise)|
-![Striim company logo](./media/striim-logo.png) |**Striim**<br>Striim enables continuous data movement and in-stream transformations from a wide variety of sources into multiple Azure solutions including Azure Synapse Analytics, Cosmos DB, Azure cloud databases. The Striim solution enables Azure Data Lake Storage customers to quickly build streaming data pipelines. Customers can choose their desired data latency (real-time, micro-batch, or batch) and enrich the data with more context. These pipelines can then support any application or big data analytics solution, including Azure SQL Data Warehouse and Azure Databricks. |[Partner page](https://www.striim.com/partners/striim-for-microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/striim.azurestorageintegration?tab=overview)|
+![Striim company logo](./media/striim-logo.png) |**Striim**<br>Striim enables continuous data movement and in-stream transformations from a wide variety of sources into multiple Azure solutions including Azure Synapse Analytics, Azure Cosmos DB, and Azure cloud databases. The Striim solution enables Azure Data Lake Storage customers to quickly build streaming data pipelines. Customers can choose their desired data latency (real-time, micro-batch, or batch) and enrich the data with more context. These pipelines can then support any application or big data analytics solution, including Azure SQL Data Warehouse and Azure Databricks. |[Partner page](https://www.striim.com/partners/striim-for-microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/striim.azurestorageintegration?tab=overview)|
![Talend company logo](./media/talend-logo.png) |**Talend**<br>Talend Data Fabric is a platform that brings together multiple integration and governance capabilities. Using a single unified platform, Talend delivers complete, clean, and uncompromised data in real time. The Talend Trust Score helps assess the reliability of any data set. |[Partner page](https://www.talend.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/talend.talendclouddi)| ![Unravel](./media/unravel-logo.png) |**Unravel Data**<br>Unravel Data provides observability and automatic management through a single pane of glass. AI-powered recommendations proactively improve reliability, speed, and resource allocations of your data pipelines and jobs. Unravel connects easily with Azure Databricks, HDInsight, Azure Data Lake Storage, and more through the Azure Marketplace or Unravel SaaS service. Unravel Data also helps migrate to Azure by providing an assessment of your environment. This assessment uncovers usage details, dependency maps, cost, and effort needed for a fast move with less risk.|[Partner page](https://www.unraveldata.com/azure-databricks/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/unravel-data.unravel4databrickssubscriptionasaservice?tab=Overview) |![Wandisco company logo](./medi) is tightly integrated with Azure. Besides having an Azure portal deployment experience, it also uses role-based access control, Azure Active Directory, Azure Policy enforcement, and Activity log integration. With Azure Billing integration, you don't need to add a vendor contract or get more vendor approvals.<br><br>Accelerate the replication of Hadoop data between multiple sources and targets for any data architecture. With LiveData Cloud Services, your data will be available for Azure Databricks, Synapse Analytics, and HDInsight as soon as it lands, with guaranteed 100% data consistency. |[Partner page](https://www.wandisco.com/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/wandisco.ldma?tab=Overview)|
storage Isv File Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/primary-secondary-storage/isv-file-services.md
Last updated 03/22/2022 + # Running ISV file services in Azure
This article compares several ISV solutions that provide files services in Azure
- Ransomware protection

**XenData**
-- Cosmos DB service provides fast synchronization of multiple gateways, including application-specific owner files for global collaboration
+- The Azure Cosmos DB service provides fast synchronization of multiple gateways, including application-specific owner files for global collaboration
- Each gateway has highly granular control of locally cached content
- Supports video streaming and partial file restores
- Supports Azure Data Box uploads
storage Table Storage Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-patterns.md
Last updated 06/24/2021
ms.devlang: csharp-+ # Table design patterns
Notice how this example expects the entity it retrieves to be of type **Employee
### Retrieving multiple entities using LINQ
-You can use LINQ to retrieve multiple entities from the Table service when working with Microsoft Azure Cosmos Table Standard Library.
+You can use LINQ to retrieve multiple entities from the Table service when working with Microsoft Azure Cosmos DB Table Standard Library.
```azurecli dotnet add package Microsoft.Azure.Cosmos.Table
storage Table Storage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-overview.md
ms.devlang: csharp+ Last updated 05/27/2021
-# What is Azure Table storage ?
+# What is Azure Table storage?
[!INCLUDE [storage-table-cosmos-db-tip-include](../../../includes/storage-table-cosmos-db-tip-include.md)]
You can use Table storage to store flexible datasets like user data for web appl
* [Storage Client Library for .NET reference](/dotnet/api/overview/azure/storage)
- * [REST API reference](/rest/api/storageservices/)
+ * [REST API reference](/rest/api/storageservices/)
stream-analytics Azure Cosmos Db Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/azure-cosmos-db-output.md
description: This article describes how to output data from Azure Stream Analyti
+ Last updated 12/13/2021
The following table describes the properties for creating an Azure Cosmos DB out
| | |
| Output alias | An alias to refer to this output in your Stream Analytics query. |
| Sink | Azure Cosmos DB. |
-| Import option | Choose either **Select Cosmos DB from your subscription** or **Provide Cosmos DB settings manually**.
+| Import option | Choose either **Select Azure Cosmos DB from your subscription** or **Provide Azure Cosmos DB settings manually**.
| Account ID | The name or endpoint URI of the Azure Cosmos DB account. |
| Account key | The shared access key for the Azure Cosmos DB account. |
| Database | The Azure Cosmos DB database name. |
-| Container name | The container name to be used, which must exist in Cosmos DB. Example: <br /><ul><li> _MyContainer_: A container named "MyContainer" must exist.</li>|
+| Container name | The container name to be used, which must exist in Azure Cosmos DB. Example: <br /><ul><li> _MyContainer_: A container named "MyContainer" must exist.</li>|
| Document ID | Optional. The name of the field in output events that's used to specify the primary key on which insert or update operations are based. |

> [!Note]
-> Cosmos DB Output for Azure Stream Analytics uses .NET V3 SDK. When writing to multiple regions, the SDK automatically picks the best region available.
+> Azure Cosmos DB Output for Azure Stream Analytics uses .NET V3 SDK. When writing to multiple regions, the SDK automatically picks the best region available.
## Partitioning
-The partition key is based on the PARTITION BY clause in the query. The number of output writers follows the input partitioning for [fully parallelized queries](stream-analytics-scale-jobs.md). Stream Analytics converts the Cosmos DB output partition key to a string. For example, if you have a partition key with a value of 1 of type bigint, it is converted to "1" of type string. This conversion always happens regardless of whether the partition property is written to Cosmos DB.
+The partition key is based on the PARTITION BY clause in the query. The number of output writers follows the input partitioning for [fully parallelized queries](stream-analytics-scale-jobs.md). Stream Analytics converts the Azure Cosmos DB output partition key to a string. For example, if you have a partition key with a value of 1 of type bigint, it is converted to "1" of type string. This conversion always happens regardless of whether the partition property is written to Azure Cosmos DB.
## Output batch size
stream-analytics Azure Database Explorer Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/azure-database-explorer-output.md
Title: Azure Data Explorer output from Azure Stream Analytics (Preview)
+ Title: Azure Data Explorer output from Azure Stream Analytics
description: This article describes using Azure Data Explorer as an output for Azure Stream Analytics. + Last updated 09/26/2022
-# Azure Data Explorer output from Azure Stream Analytics (Preview)
+# Azure Data Explorer output from Azure Stream Analytics
You can use [Azure Data Explorer](https://azure.microsoft.com/services/data-explorer/) as an output for analyzing large volumes of diverse data from any data source, such as websites, applications, IoT devices, and more. Azure Data Explorer is a fast and highly scalable data exploration service for log and telemetry data. It helps you handle the many data streams emitted by modern software, so you can collect, store, and analyze data. This data is used for diagnostics, monitoring, reporting, machine learning, and additional analytics capabilities.
stream-analytics Blob Storage Azure Data Lake Gen2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/blob-storage-azure-data-lake-gen2-output.md
description: This article describes blob storage and Azure Data Lake Gen 2 as ou
+ Previously updated : 06/06/2022 Last updated : 10/12/2022 # Blob storage and Azure Data Lake Gen2 output from Azure Stream Analytics
The following table lists the property names and their descriptions for creating
| Output alias | A friendly name used in queries to direct the query output to this blob storage. |
| Storage account | The name of the storage account where you're sending your output. |
| Storage account key | The secret key associated with the storage account. |
-| Storage container | A logical grouping for blobs stored in the Azure Blob service. When you upload a blob to the Blob service, you must specify a container for that blob. |
-| Path pattern | Optional. The file path pattern that's used to write your blobs within the specified container. <br /><br /> In the path pattern, you can choose to use one or more instances of the date and time variables to specify the frequency that blobs are written: <br /> {date}, {time} <br /><br />You can use custom blob partitioning to specify one custom {field} name from your event data to partition blobs. The field name is alphanumeric and can include spaces, hyphens, and underscores. Restrictions on custom fields include the following: <ul><li>Field names aren't case-sensitive. For example, the service can't differentiate between column "ID" and column "id."</li><li>Nested fields are not permitted. Instead, use an alias in the job query to "flatten" the field.</li><li>Expressions can't be used as a field name.</li></ul> <br />This feature enables the use of custom date/time format specifier configurations in the path. Custom date and time formats must be specified one at a time, enclosed by the {datetime:\<specifier>} keyword. Allowable inputs for \<specifier> are yyyy, MM, M, dd, d, HH, H, mm, m, ss, or s. The {datetime:\<specifier>} keyword can be used multiple times in the path to form custom date/time configurations. <br /><br />Examples: <ul><li>Example 1: cluster1/logs/{date}/{time}</li><li>Example 2: cluster1/logs/{date}</li><li>Example 3: cluster1/{client_id}/{date}/{time}</li><li>Example 4: cluster1/{datetime:ss}/{myField} where the query is: SELECT data.myField AS myField FROM Input;</li><li>Example 5: cluster1/year={datetime:yyyy}/month={datetime:MM}/day={datetime:dd}</ul><br />The time stamp of the created folder structure follows UTC and not local time. [System.Timestamp](./stream-analytics-time-handling.md#choose-the-best-starting-time) is the time used for all time based partitioning.<br /><br />File naming uses the following convention: <br /><br />{Path Prefix Pattern}/schemaHashcode_Guid_Number.extension<br /><br /> Here Guid represents the unique identifier assigned to an internal writer that is created to write to a blob file. The number represents index of the blob block. <br /><br /> Example output files:<ul><li>Myoutput/20170901/00/45434_gguid_1.csv</li> <li>Myoutput/20170901/01/45434_gguid_1.csv</li></ul> <br />For more information about this feature, see [Azure Stream Analytics custom blob output partitioning](stream-analytics-custom-path-patterns-blob-storage-output.md). |
-| Date format | Optional. If the date token is used in the prefix path, you can select the date format in which your files are organized. Example: YYYY/MM/DD |
-| Time format | Optional. If the time token is used in the prefix path, specify the time format in which your files are organized. Currently the only supported value is HH. |
-| Event serialization format | Serialization format for output data. JSON, CSV, Avro, and Parquet are supported. |
+| Container | A logical grouping for blobs stored in the Azure Blob service. When you upload a blob to the Blob service, you must specify a container for that blob. |
+| Event serialization format | Serialization format for output data. JSON, CSV, Avro, and Parquet are supported. Delta Lake is also listed as an option here; if you select Delta Lake, the data is written in Parquet format. Learn more about [Delta Lake](write-to-delta-lake.md). |
+| Delta path name | Required when the Event serialization format is Delta Lake. The path that is used to write the Delta Lake table within the specified container. It includes the table name. [More details and examples.](write-to-delta-lake.md) |
+| Write Mode | Write mode controls the way ASA writes to the output file. Exactly once delivery only happens when the write mode is Once. For more information, see the section below. |
+| Partition Column | Optional. The {field} name from your output data used to partition the output. Only one partition column is supported. |
+| Path pattern | Required when Event serialization format is Delta lake. The file path pattern that's used to write your blobs within the specified container. <br /><br /> In the path pattern, you can choose to use one or more instances of the date and time variables to specify the frequency that blobs are written: {date}, {time}. <br /><br />Note that if your write mode is Once, you need to use both {date} and {time}. <br /><br />You can use custom blob partitioning to specify one custom {field} name from your event data to partition blobs. The field name is alphanumeric and can include spaces, hyphens, and underscores. Restrictions on custom fields include the following: <ul><li>No dynamic custom {field} name is allowed if your write mode is Once. </li><li>Field names aren't case-sensitive. For example, the service can't differentiate between column "ID" and column "id."</li><li>Nested fields aren't permitted. Instead, use an alias in the job query to "flatten" the field.</li><li>Expressions can't be used as a field name.</li></ul> <br />This feature enables the use of custom date/time format specifier configurations in the path. Custom date and time formats must be specified one at a time, enclosed by the {datetime:\<specifier>} keyword. Allowable inputs for \<specifier> are yyyy, MM, M, dd, d, HH, H, mm, m, ss, or s. The {datetime:\<specifier>} keyword can be used multiple times in the path to form custom date/time configurations. <br /><br />Examples: <ul><li>Example 1: cluster1/logs/{date}/{time}</li><li>Example 2: cluster1/logs/{date}</li><li>Example 3: cluster1/{client_id}/{date}/{time}</li><li>Example 4: `cluster1/{datetime:ss}/{myField}` where the query is: SELECT data.myField AS myField FROM Input;</li><li>Example 5: cluster1/year={datetime:yyyy}/month={datetime:MM}/day={datetime:dd}</ul><br />The time stamp of the created folder structure follows UTC and not local time. [System.Timestamp](./stream-analytics-time-handling.md#choose-the-best-starting-time) is the time used for all time based partitioning.<br /><br />File naming uses the following convention: <br /><br />{Path Prefix Pattern}/schemaHashcode_Guid_Number.extension<br /><br /> Here Guid represents the unique identifier assigned to an internal writer that is created to write to a blob file. The number represents index of the blob block. <br /><br /> Example output files:<ul><li>Myoutput/20170901/00/45434_gguid_1.csv</li> <li>Myoutput/20170901/01/45434_gguid_1.csv</li></ul> <br />For more information about this feature, see [Azure Stream Analytics custom blob output partitioning](stream-analytics-custom-path-patterns-blob-storage-output.md). |
+| Date format | Required when Event serialization format is Delta lake. If the date token is used in the prefix path, you can select the date format in which your files are organized. Example: YYYY/MM/DD |
+| Time format | Required when Event serialization format is Delta lake. If the time token is used in the prefix path, specify the time format in which your files are organized.|
|Minimum rows |The number of minimum rows per batch. For Parquet, every batch will create a new file. The current default value is 2,000 rows and the allowed maximum is 10,000 rows.|
-|Maximum time |The maximum wait time per batch. After this time, the batch will be written to the output even if the minimum rows requirement is not met. The current default value is 1 minute and the allowed maximum is 2 hours. If your blob output has path pattern frequency, the wait time cannot be higher than the partition time range.|
+|Maximum time |The maximum wait time per batch. After this time, the batch will be written to the output even if the minimum rows requirement isn't met. The current default value is 1 minute and the allowed maximum is 2 hours. If your blob output has path pattern frequency, the wait time can't be higher than the partition time range.|
| Encoding | If you're using CSV or JSON format, an encoding must be specified. UTF-8 is the only supported encoding format at this time. | | Delimiter | Applicable only for CSV serialization. Stream Analytics supports a number of common delimiters for serializing CSV data. Supported values are comma, semicolon, space, tab, and vertical bar. |
-| Format | Applicable only for JSON serialization. **Line separated** specifies that the output is formatted by having each JSON object separated by a new line. If you select **Line separated**, the JSON is read one object at a time. The whole content by itself would not be a valid JSON. **Array** specifies that the output is formatted as an array of JSON objects. This array is closed only when the job stops or Stream Analytics has moved on to the next time window. In general, it's preferable to use line-separated JSON, because it doesn't require any special handling while the output file is still being written to. |
+| Format | Applicable only for JSON serialization. **Line separated** specifies that the output is formatted by having each JSON object separated by a new line. If you select **Line separated**, the JSON is read one object at a time. The whole content by itself wouldn't be a valid JSON. **Array** specifies that the output is formatted as an array of JSON objects. This array is closed only when the job stops or Stream Analytics has moved on to the next time window. In general, it's preferable to use line-separated JSON, because it doesn't require any special handling while the output file is still being written to. |
+
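An output like the one described in the table above can also be defined programmatically. The following is a hedged sketch using the Az.StreamAnalytics module, with hypothetical resource names; the JSON mirrors the Container, Path pattern, Date format, Time format, and serialization properties from the table (write mode and Delta Lake settings aren't shown here):

```powershell
# Hedged sketch: hypothetical job, storage account, and container names.
$outputDefinition = @'
{
  "properties": {
    "datasource": {
      "type": "Microsoft.Storage/Blob",
      "properties": {
        "storageAccounts": [ { "accountName": "mystorageaccount", "accountKey": "<account-key>" } ],
        "container": "mycontainer",
        "pathPattern": "cluster1/logs/{date}/{time}",
        "dateFormat": "yyyy/MM/dd",
        "timeFormat": "HH"
      }
    },
    "serialization": {
      "type": "Csv",
      "properties": { "fieldDelimiter": ",", "encoding": "UTF8" }
    }
  }
}
'@
Set-Content -Path .\BlobOutput.json -Value $outputDefinition

# Create (or update) the output on an existing Stream Analytics job.
New-AzStreamAnalyticsOutput -ResourceGroupName "my-resource-group" -JobName "my-asa-job" `
  -Name "blob-output" -File .\BlobOutput.json
```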
+## Exactly Once Delivery (Public Preview)
+
+End-to-end exactly once delivery when reading any streaming input means that processed data is written to the Azure Data Lake Storage Gen2 output once, without duplicates. When the feature is enabled, your Stream Analytics job guarantees no data loss and no duplicates produced as output, across user-initiated restarts from the last output time. This greatly simplifies your streaming pipeline by removing the need to implement and troubleshoot deduplication logic.
+
+### Write Mode
+
+There are two ways that Stream Analytics writes to your Blob storage or ADLS Gen2 account. One is to append results to the same file or to a sequence of files as results come in. The other is to write all results for a time partition only after all the data for that time partition is available. Exactly Once Delivery is enabled when the Write Mode is Once.
+
+There's no write mode option for Delta Lake. However, Delta Lake output also provides exactly once guarantees by using the delta log. It doesn't require a time partition and writes results continuously based on the batching parameters that the user defined.
+
+> [!NOTE]
+> If you prefer not to use the preview feature for exactly once delivery, please select **Append as results arrive**.
+
+### Configuration
+
+To receive exactly once delivery for your Blob storage or ADLS Gen2 account, configure the following settings:
+
+* Select **Once after all results of time partition is available** for your **Write Mode**.
+* Provide **Path Pattern** with both {date} and {time} specified.
+* Specify **date format** and **time format**.
+
+### Limitations
+
+* [Substream](/stream-analytics-query/timestamp-by-azure-stream-analytics) isn't supported.
+* Path Pattern becomes a required property, and must contain both {date} and {time}. No dynamic custom {field} name is allowed. Learn more about [custom path pattern](stream-analytics-custom-path-patterns-blob-storage-output.md).
+* If the job is started at a **custom time** before or after the last output time, there's a risk of the file being overwritten. For example, when the **time format** is HH, the file is generated every hour. If you stop the job at 8:15 AM and restart it at 8:30 AM, the file generated for the 8 AM to 9 AM window only covers data from 8:30 AM to 9 AM. The data from 8 AM to 8:15 AM is lost because it's overwritten.
+
+### Regions Availability
+
+This feature is currently supported in West Central US, Japan East and Canada Central.
## Blob output files

When you're using Blob storage as output, a new file is created in the following cases:
-* If the file exceeds the maximum number of allowed blocks (currently 50,000). You might reach the maximum allowed number of blocks without reaching the maximum allowed blob size. For example, if the output rate is high, you can see more bytes per block, and the file size is larger. If the output rate is low, each block has less data, and the file size is smaller.
-* If there's a schema change in the output, and the output format requires fixed schema (CSV, Avro, Parquet).
-* If a job is restarted, either externally by a user stopping it and starting it, or internally for system maintenance or error recovery.
-* If the query is fully partitioned, and a new file is created for each output partition. This comes from using PARTITION BY, or the native parallelization introduced in [compatibility level 1.2](stream-analytics-compatibility-level.md#parallel-query-execution-for-input-sources-with-multiple-partitions)
-* If the user deletes a file or a container of the storage account.
-* If the output is time partitioned by using the path prefix pattern, and a new blob is used when the query moves to the next hour.
-* If the output is partitioned by a custom field, and a new blob is created per partition key if it does not exist.
-* If the output is partitioned by a custom field where the partition key cardinality exceeds 8,000, and a new blob is created per partition key.
+* The file exceeds the maximum number of allowed blocks (currently 50,000). You might reach the maximum allowed number of blocks without reaching the maximum allowed blob size. For example, if the output rate is high, you can see more bytes per block, and the file size is larger. If the output rate is low, each block has less data, and the file size is smaller.
+* There's a schema change in the output, and the output format requires fixed schema (CSV, Avro, Parquet).
+* A job is restarted, either externally by a user stopping it and starting it, or internally for system maintenance or error recovery.
+* The query is fully partitioned, and a new file is created for each output partition. This comes from using PARTITION BY, or the native parallelization introduced in [compatibility level 1.2](stream-analytics-compatibility-level.md#parallel-query-execution-for-input-sources-with-multiple-partitions).
+* A user deletes a file or a container of the storage account.
+* The output is time partitioned by using the path prefix pattern, and a new blob is used when the query moves to the next hour.
+* The output is partitioned by a custom field, and a new blob is created per partition key if it doesn't exist.
+* The output is partitioned by a custom field where the partition key cardinality exceeds 8,000, and a new blob is created per partition key.
## Partitioning
For the maximum message size, see [Azure Storage limits](../azure-resource-manag
## Limitations
-* If "/" is used in the path pattern (e.g /folder2/folder3), then empty folders will be created and they will not be visible in Storage Explorer
-* Azure Stream Analytics appends to the same file in cases where a new blob file is not needed. Please note that this could cause additional triggers to be generated if azure services like event grid are configured to be triggered on blob file update
-* Azure Stream Analytics appends to blob by default. When the output format is a Json array, it completes the file on shutdown or when the output moves to the next time partition for time partitioned outputs. Note that, in some cases such as an unclean restart, it is possible that the closing "]" for json array may be missing.
+* If a forward slash (`/`) is used in the path pattern (for example, `/folder2/folder3`), empty folders are created and they won't be visible in Storage Explorer.
+* Azure Stream Analytics appends to the same file in cases where a new blob file isn't needed. Note that this could cause additional triggers to be generated if Azure services like Event Grid are configured to be triggered on blob file updates.
+* Azure Stream Analytics appends to a blob by default. When the output format is a JSON array, it completes the file on shutdown or when the output moves to the next time partition for time-partitioned outputs. In some cases, such as an unclean restart, it's possible that the closing "]" for the JSON array is missing.
## Next steps
stream-analytics Configuration Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/configuration-error-codes.md
Last updated 05/07/2020 + # Azure Stream Analytics configuration error codes
You can use activity logs and resource logs to help debug unexpected behaviors f
## CosmosDBPartitionKeyNotFound
-* **Cause**: Stream Analytics couldn't find the partition key of a particular Cosmos DB collection in the database.
-* **Recommendation**: Ensure there is a valid partition key specified for the collection in Cosmos DB.
+* **Cause**: Stream Analytics couldn't find the partition key of a particular Azure Cosmos DB collection in the database.
+* **Recommendation**: Ensure there is a valid partition key specified for the collection in Azure Cosmos DB.
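As a hedged sketch (with hypothetical resource names), you can confirm the partition key that's defined on the container with the Az.CosmosDB PowerShell module:

```powershell
# Hypothetical names; replace with your own.
$container = Get-AzCosmosDBSqlContainer -ResourceGroupName "my-resource-group" `
    -AccountName "my-cosmos-account" -DatabaseName "my-database" -Name "my-container"

# The partition key path(s) defined on the container, for example /deviceId.
$container.Resource.PartitionKey.Paths
```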
## CosmosDBInvalidPartitionKeyColumn
You can use activity logs and resource logs to help debug unexpected behaviors f
## CosmosDBDatabaseNotFound
-* **Cause**: Stream Analytics can't find a CosmosDB database.
+* **Cause**: Stream Analytics can't find an Azure Cosmos DB database.
## CosmosDBCollectionNotFound
-* **Cause**: Stream Analytics can't find a particular Cosmos DB collection in a database.
+* **Cause**: Stream Analytics can't find a particular Azure Cosmos DB collection in a database.
## CosmosDBOutputWriteThrottling
-* **Cause**: An error occurred while writing data due to throttling by Cosmos DB.
+* **Cause**: An error occurred while writing data due to throttling by Azure Cosmos DB.
* **Recommendation**: Upgrade the collection performance tier and tune the performance of your database.

## SQLDatabaseConnectionStringError
stream-analytics Cosmos Db Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cosmos-db-managed-identity.md
Title: Use managed identities to access Cosmos DB from an Azure Stream Analytics job
-description: This article describes how to use managed identities to authenticate your Azure Stream Analytics job to an Azure CosmosDB output.
+ Title: Use managed identities to access Azure Cosmos DB from an Azure Stream Analytics job
+description: This article describes how to use managed identities to authenticate your Azure Stream Analytics job to an Azure Cosmos DB output.
Last updated 09/22/2022-+
-# Use managed identities to access Cosmos DB from an Azure Stream Analytics job (preview)
+# Use managed identities to access Azure Cosmos DB from an Azure Stream Analytics job
Azure Stream Analytics supports managed identity authentication for Azure Cosmos DB output. Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate because of password changes or user token expirations that occur every 90 days. When you remove the need to manually authenticate, your Stream Analytics deployments can be fully automated.  A managed identity is a managed application registered in Azure Active Directory that represents a given Stream Analytics job. The managed application is used to authenticate to a targeted resource. For more information on managed identities for Azure Stream Analytics, see [Managed identities for Azure Stream Analytics](stream-analytics-managed-identities-overview.md).
-This article shows you how to enable system-assigned managed identity for a Cosmos DB output of a Stream Analytics job through the Azure portal. Before you can enable system-assigned managed identity, you must first have a Stream Analytics job and an Azure Cosmos DB resource.
+This article shows you how to enable system-assigned managed identity for an Azure Cosmos DB output of a Stream Analytics job through the Azure portal. Before you can enable system-assigned managed identity, you must first have a Stream Analytics job and an Azure Cosmos DB resource.
## Create a managed identity 
First, you create a managed identity for your Azure Stream Analytics job.
## Grant the Stream Analytics job permissions to access the Azure Cosmos DB account
-For the Stream Analytics job to access your Cosmos DB using managed identity, the service principal you created must have special permissions to your Azure Cosmos DB account. In this step, you can assign a role to your stream analytics job's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity. For this solution, you will use the following role:
+For the Stream Analytics job to access your Azure Cosmos DB account by using managed identity, the service principal you created must have special permissions to your Azure Cosmos DB account. In this step, you assign a role to your Stream Analytics job's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity. For this solution, you'll use the following role:
| Built-in role |
|-|
-|Cosmos DB Built-in Data Contributor|
+| Cosmos DB Built-in Data Contributor|
> [!IMPORTANT]
-> Cosmos DB data plane built-in role-based access control (RBAC) is not exposed through the Azure Portal. To assign the Cosmos DB Built-in Data Contributor role, you must grant permission via Azure Powershell. For more information about role-based access control with Azure Active Directory for your Azure Cosmos DB account please visit the: [Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account documentation.](https://learn.microsoft.com/azure/cosmos-db/how-to-setup-rbac/)
+> Azure Cosmos DB data plane built-in role-based access control (RBAC) is not exposed through the Azure portal. To assign the Cosmos DB Built-in Data Contributor role, you must grant permission via Azure PowerShell. For more information about role-based access control with Azure Active Directory for your Azure Cosmos DB account, see [Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account](../cosmos-db/how-to-setup-rbac.md).
-The following command can be used to authenticate your ASA job with Cosmos DB. The `$accountName` and `$resourceGroupName` are for your Cosmos DB account, and the `$principalId` is the value obtained in the previous step, in the Identity tab of your ASA job. You need to have "Contributor" access to your Cosmos DB account for this command to work as intended.
+The following command can be used to authenticate your ASA job with Azure Cosmos DB. The `$accountName` and `$resourceGroupName` are for your Azure Cosmos DB account, and the `$principalId` is the value obtained in the previous step, in the Identity tab of your ASA job. You need to have "Contributor" access to your Azure Cosmos DB account for this command to work properly.
```azurecli-interactive
New-AzCosmosDBSqlRoleAssignment -AccountName $accountName -ResourceGroupName $resourceGroupName -RoleDefinitionId '00000000-0000-0000-0000-000000000002' -Scope "/" -PrincipalId $principalId
```
New-AzCosmosDBSqlRoleAssignment -AccountName $accountName -ResourceGroupName $re
> Due to global replication or caching latency, there may be a delay when permissions are revoked or granted. Changes should be reflected within 10 minutes. Even though test connection can pass initially, jobs may fail when they are started before the permissions fully propagate.
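As a hedged sketch with hypothetical values, you might set the variables and then list the account's role assignments to confirm that the job's identity was granted the role:

```powershell
# Hypothetical values; replace with your own. The principal ID comes from the
# Identity tab of your Stream Analytics job, as described above.
$accountName       = "my-cosmos-account"
$resourceGroupName = "my-resource-group"
$principalId       = "00000000-0000-0000-0000-000000000000"

# After running New-AzCosmosDBSqlRoleAssignment, list the account's role assignments
# to confirm the job's managed identity was granted the role.
Get-AzCosmosDBSqlRoleAssignment -AccountName $accountName -ResourceGroupName $resourceGroupName
```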
-### Add the Cosmos DB as an output
+### Add Azure Cosmos DB as an output
-Now that your managed identity is configured, you're ready to add the Cosmos DB resource as an output to your Stream Analytics job. 
+Now that your managed identity is configured, you're ready to add the Azure Cosmos DB resource as an output to your Stream Analytics job. 
1. Go to your Stream Analytics job and navigate to the **Outputs** page under **Job Topology**.
-1. Select **Add > Cosmos DB**. In the output properties window, search and select your Cosmos DB account and select **Managed Identity: System assigned** from the *Authentication mode* drop-down menu.
+1. Select **Add > Azure Cosmos DB**. In the output properties window, search and select your Azure Cosmos DB account and select **Managed Identity: System assigned** from the *Authentication mode* drop-down menu.
1. Fill out the rest of the properties and select **Save**.
Now that your managed identity is configured, you're ready to add the Cosmos D
* [Understand outputs from Azure Stream Analytics](stream-analytics-define-outputs.md) * [Azure Cosmos DB Output](azure-cosmos-db-output.md) * [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
-* [Cosmos DB Optimization](stream-analytics-documentdb-output.md)
+* [Azure Cosmos DB optimization](stream-analytics-documentdb-output.md)
stream-analytics External Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/external-error-codes.md
Last updated 05/07/2020 + # Azure Stream Analytics external error codes
You can use activity logs and resource logs to help debug unexpected behaviors f
## CosmosDBConnectionFailureAfterMaxRetries
-* **Cause**: Stream Analytics failed to connect to a Cosmos DB account after the maximum number of retries.
+* **Cause**: Stream Analytics failed to connect to an Azure Cosmos DB account after the maximum number of retries.
## CosmosDBFailureAfterMaxRetries
-* **Cause**: Stream Analytics failed to query the Cosmos DB database and collection after the maximum number of retries.
+* **Cause**: Stream Analytics failed to query the Azure Cosmos DB database and collection after the maximum number of retries.
## CosmosDBFailedToCreateStoredProcedure
-* **Cause**: CosmosDB can't create a stored procedure after several retries.
+* **Cause**: Azure Cosmos DB can't create a stored procedure after several retries.
## CosmosDBOutputRequestTimeout
stream-analytics Filter Ingest Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/filter-ingest-data-lake-storage-gen2.md
- Previously updated : 05/24/2022+ Last updated : 10/12/2022 # Filter and ingest to Azure Data Lake Storage Gen2 using the Stream Analytics no code editor
This article describes how you can use the no code editor to easily create a Str
1. Select **Save** and then select **Start** to start the Stream Analytics job.

   :::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/no-code-save-start.png" alt-text="Screenshot showing the job Save and Start options." lightbox="./media/filter-ingest-data-lake-storage-gen2/no-code-save-start.png" :::

1. To start the job, specify the number of **Streaming Units (SUs)** that the job runs with. SUs represents the amount of compute and memory allocated to the job. We recommend that you start with three and then adjust as needed.
-1. After your select **Start**, the job starts running within two minutes.
+1. After you select **Start**, the job starts running within two minutes, and the metrics will open in the tab section below.
-You can see the job under the Process Data section in the **Stream Analytics jobs** tab. Select **Refresh** until you see the job status as **Running**. Select **Open metrics** to monitor it or stop and restart it, as needed.
+ :::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/no-code-start-job.png" alt-text="Screenshot showing the Start Stream Analytics job window." lightbox="./media/filter-ingest-data-lake-storage-gen2/no-code-start-job.png" :::
-Here's a sample **Metrics** page:
+ You can see the job under the Process Data section in the **Stream Analytics jobs** tab. Select **Refresh** until you see the job status as **Running**. Select **Open metrics** to monitor it or stop and restart it, as needed.
+ :::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/no-code-list-jobs.png" alt-text="Screenshot showing the Stream Analytics jobs tab." lightbox="./media/filter-ingest-data-lake-storage-gen2/no-code-list-jobs.png" :::
+
+ Here's a sample **Metrics** page:
+
+ :::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/metrics-page.png" alt-text="Screenshot showing the Metrics page." lightbox="./media/filter-ingest-data-lake-storage-gen2/metrics-page.png" :::
## Verify data in Data Lake Storage
stream-analytics Filter Ingest Synapse Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/filter-ingest-synapse-sql.md
- Previously updated : 05/08/2022+ Last updated : 10/12/2022 # Filter and ingest to Azure Synapse SQL using the Stream Analytics no code editor
Use the following steps to develop a Stream Analytics job to filter and ingest r
- The number of **Streaming Units (SUs)** the job runs with. SUs represents the amount of compute and memory allocated to the job. We recommend that you start with three and then adjust as needed.
- **Output data error handling** – It allows you to specify the behavior you want when a job's output to your destination fails due to data errors. By default, your job retries until the write operation succeeds. You can also choose to drop such output events.

:::image type="content" source="./media/filter-ingest-synapse-sql/no-code-start-job.png" alt-text="Screenshot showing the Start Stream Analytics job options where you can change the output time, set the number of streaming units, and select the Output data error handling options." lightbox="./media/filter-ingest-synapse-sql/no-code-start-job.png" :::
-1. After you select **Start**, the job starts running within two minutes.
+1. After you select **Start**, the job starts running within two minutes, and the metrics will open in the tab section below.
-You can see the job under the Process Data section on the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it or stop and restart it, as needed.
+ You can also see the job under the Process Data section on the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it or stop and restart it, as needed.
+ :::image type="content" source="./media/filter-ingest-synapse-sql/no-code-list-jobs.png" alt-text="Screenshot of the Stream Analytics jobs tab where you view the running jobs status." lightbox="./media/filter-ingest-synapse-sql/no-code-list-jobs.png" :::
## Next steps
stream-analytics Internal Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/internal-error-codes.md
Last updated 05/07/2020 + # Azure Stream Analytics internal error codes
You can use activity logs and resource logs to help debug unexpected behaviors f
## CosmosDBOutputBatchSizeTooLarge
-* **Cause**: The batch size used to write to Cosmos DB is too large.
+* **Cause**: The batch size used to write to Azure Cosmos DB is too large.
* **Recommendation**: Retry with a smaller batch size.

## Next steps
You can use activity logs and resource logs to help debug unexpected behaviors f
* [Troubleshoot Azure Stream Analytics outputs](stream-analytics-troubleshoot-output.md)
* [Troubleshoot Azure Stream Analytics queries](stream-analytics-troubleshoot-query.md)
* [Troubleshoot Azure Stream Analytics by using resource logs](stream-analytics-job-diagnostic-logs.md)
-* [Azure Stream Analytics data errors](data-errors.md)
+* [Azure Stream Analytics data errors](data-errors.md)
stream-analytics Job Diagram With Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/job-diagram-with-metrics.md
+
+ Title: Stream Analytics job diagram (preview) in Azure portal
+description: This article describes the Azure Stream Analytics job diagram with metrics in Azure portal.
++++++ Last updated : 10/12/2022++
+# Stream Analytics job diagram (preview) in Azure portal
+
+The job diagram in the Azure portal can help you visualize your job's query steps (logical concept) or streaming node (physical concept) with its input source, output destination, and metrics. You can use the job diagram to examine the metrics for each step or streaming node and quickly identify the source of a problem when you troubleshoot issues.
+
+There are two types of job diagrams:
+
+* **Physical diagram**: it visualizes the key metrics of the Stream Analytics job with the physical computation concept, the streaming node dimension. A streaming node represents a set of compute resources that's used to process the job's input data. To learn more details about the streaming node dimension, see [Azure Stream Analytics node name dimension](./stream-analytics-job-metrics-dimensions.md#node-name-dimension).
+* **Logical diagram**: it visualizes the key metrics of the Stream Analytics job with the logical concept, the query steps based on the job's query. To learn more, see [Debugging with the logical job diagram (preview) in Azure portal](./stream-analytics-job-logical-diagram-with-metrics.md).
+
+This article describes the two types of job diagrams to guide you.
+
+> [!IMPORTANT]
+> This feature is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Use the job diagram
+
+In the Azure portal, locate and select a Stream Analytics job. Then select **Job diagram (preview)** under **Developer tools**:
++
+In the top-left corner, you can switch between the two types of job diagram by selecting **Logical** or **Physical**.
++
+## Physical job diagram
+
+The following screenshot shows a physical job diagram with a default time period (last 30 minutes).
++
+1. **Command bar section**: it's the command area where you can configure the time range of the job metrics, switch or configure the heatmap visualization, search for a streaming node, and switch the view between **Diagram** and **Table**.
+ * **Heatmap settings**: the heatmap setting enables you to sort the nodes in the diagram based on the metric and sorting type you want. The metrics can be CPU/memory utilization, watermark delay, input events, and backlogged input events.
+ * **Time range**: you can choose a different time range and job run to view the diagram and metrics.
+ * **Job run**: a job run is inside **Time range**. When a job is started, restarted, or scaled up/down (SU changes), a new job run is generated. One job run maps to one physical job diagram.
+ * **Diagram/Table view switcher**: you can switch the view between diagram and table. The table view is shown below:
+
+ :::image type="content" source="./media/job-diagram-with-metrics/4-physical-diagram-table-view.png" alt-text="Screenshot that shows physical job diagram with table overview." lightbox="./media/job-diagram-with-metrics/4-physical-diagram-table-view.png":::
+
+1. **Diagram/Table section**: it's the place where you can view the metrics (aggregated within the selected time range) at the streaming node level, in diagram view or table view. Each box in this section represents a streaming node that is used to process the input data. The metrics on each node are:
+ * **Input Events** (Aggregation type: SUM)
+ * **CPU % Utilization** (Aggregation type: Avg)
+ * **SU (Memory) % Utilization** (Aggregation type: Max)
+ * **Partition IDs** (A list, no aggregation)
+ * **Watermark Delay** (Aggregation type: Max)
+ * **Backlogged Input Events** (Aggregation type: SUM)
+
+ For more information about the metrics definition, see [Azure Stream Analytics node name dimension](./stream-analytics-job-metrics-dimensions.md#node-name-dimension).
+1. **Chart section**: it's the place where you can view the historical metrics data within the selected time range. The metrics shown in the default chart are **SU (Memory) % Utilization** and **CPU % Utilization**. You can also add more charts by selecting **Add chart**.
+
+The **Diagram/Table section** and the **Chart section** interact with each other. You can select multiple nodes in the **Diagram/Table section** to get the metrics in the **Chart section** filtered by the selected nodes, and vice versa.
+++
+To learn more about how to debug with physical diagram, see [Debugging with the physical job diagram (preview) in Azure portal](./stream-analytics-job-physical-diagram-with-metrics.md).
++
+## Logical job diagram
+
+The logical job diagram has a similar layout to the physical diagram, with three sections, but it has different metrics and configuration settings.
++
+1. **Command bar section**: in the logical diagram, you can operate the cloud job (Stop, Delete) and configure the time range of the job metrics. Only the diagram view is available for logical diagrams.
+2. **Diagram section**: the node boxes in this section represent the job's input, output, and query steps. You can view the metrics in a node directly, or interactively in the chart section by selecting a node in this section. For more information about the metrics definition, see [Azure Stream Analytics node name dimension](./stream-analytics-job-metrics-dimensions.md#node-name-dimension).
+3. **Chart section**: the chart section in a logical diagram has two tabs: **Metrics** and **Activity Logs**.
+ * **Metrics**: the job's metrics data is shown here when the corresponding metrics are selected in the right panel.
+ * **Activity Logs**: the operations performed on the job are shown here. When the job's diagnostic log is enabled, it's also shown here. To learn more about the job logs, see [Azure Stream Analytics job logs](./stream-analytics-job-diagnostic-logs.md).
+
+ When a logical job diagram is loaded, the job's metrics (Watermark Delay, Input Events, Output Events, and Backlogged Input Events) are shown in the chart section for the latest 30 minutes.
+
+The interaction between the **Diagram section** and the **Chart section** is also available in the logical diagram. The metrics data is filtered by the selected node's properties.
++
+To learn more about how to debug with logical diagrams, see [Debugging with the logical job diagram (preview) in Azure portal](./stream-analytics-job-logical-diagram-with-metrics.md).
++
+## Next steps
+* [Introduction to Stream Analytics](stream-analytics-introduction.md)
+* [Get started with Stream Analytics](stream-analytics-real-time-fraud-detection.md)
+* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
+* [Scale Stream Analytics jobs](stream-analytics-scale-jobs.md)
+* [Stream Analytics query language reference](/stream-analytics-query/stream-analytics-query-language-reference)
+* [Stream Analytics management REST API reference](/rest/api/streamanalytics/)
stream-analytics No Code Enrich Event Hub Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-enrich-event-hub-data.md
+
+ Title: Enrich data and ingest to event hub using the Stream Analytics no code editor
+description: Learn how to use the no code editor to easily create a Stream Analytics job to enrich the data and ingest to event hub.
+++++ Last updated : 10/12/2022++
+# Enrich data and ingest to event hub using the Stream Analytics no code editor
+
+This article describes how you can use the no code editor to easily create a Stream Analytics job. The job continuously reads from your Event Hubs instance, enriches the incoming data with SQL reference data, and then writes the results continuously to an event hub.
+
+## Prerequisites
+
+- Your Azure Event Hubs and SQL reference data resources must be publicly accessible and not be behind a firewall or secured in an Azure Virtual Network
+- The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format.
+
+## Develop a Stream Analytics job to enrich event hub data
+
+1. In the Azure portal, locate and select the Azure Event Hubs instance.
+1. Select **Features** > **Process Data** and then select **Start** on the **Enrich data and ingest to Event Hub** card.
+
+ :::image type="content" source="./media/no-code-enrich-event-hub-data/event-hub-process-data-templates.png" alt-text="Screenshot showing the Enrich data and ingest to Event Hub card where you select Start." lightbox="./media/no-code-enrich-event-hub-data/event-hub-process-data-templates.png" :::
+
+1. Enter a name for the Stream Analytics job, then select **Create**.
+
+ :::image type="content" source="./media/no-code-enrich-event-hub-data/create-new-stream-analytics-job.png" alt-text="Screenshot showing where to enter a job name." lightbox="./media/no-code-enrich-event-hub-data/create-new-stream-analytics-job.png" :::
+
+1. Specify the **Serialization type** of your data in the Event Hubs window and the **Authentication method** that the job will use to connect to the Event Hubs. Then select **Connect**.
+ :::image type="content" source="./media/no-code-enrich-event-hub-data/event-hub-configuration.png" alt-text="Screenshot showing the Event Hubs connection configuration." lightbox="./media/no-code-enrich-event-hub-data/event-hub-configuration.png" :::
+
+1. When the connection is established successfully and you have data streams flowing into your Event Hubs instance, you'll immediately see two things:
+ - Fields that are present in the input data. You can choose **Add field** or select the three dot symbol next to a field to remove, rename, or change its type.
+ :::image type="content" source="./media/no-code-enrich-event-hub-data/no-code-schema.png" alt-text="Screenshot showing the Event Hubs field list where you can remove, rename, or change the field type." lightbox="./media/no-code-enrich-event-hub-data/no-code-schema.png" :::
+ - A live sample of incoming data in the **Data preview** table under the diagram view. It automatically refreshes periodically. You can select **Pause streaming preview** to see a static view of the sample input data.
+ :::image type="content" source="./media/no-code-enrich-event-hub-data/no-code-sample-input.png" alt-text="Screenshot showing sample data under Data Preview." lightbox="./media/no-code-enrich-event-hub-data/no-code-sample-input.png" :::
+
+1. Select the **Reference SQL input** tile to connect to the reference SQL database.
+ :::image type="content" source="./media/no-code-enrich-event-hub-data/sql-reference-connection-configuration.png" alt-text="Screenshot that shows the sql reference data connection configuration." lightbox="./media/no-code-enrich-event-hub-data/sql-reference-connection-configuration.png" :::
+
+1. Select the **Join** tile. In the right configuration panel, choose a field from each input to join the incoming data from the two inputs. A sketch of the roughly equivalent query appears after this procedure.
+
+ :::image type="content" source="./media/no-code-enrich-event-hub-data/join-operator-configuration.png" alt-text="Screenshot that shows the join operator configuration." lightbox="./media/no-code-enrich-event-hub-data/join-operator-configuration.png" :::
+
+1. Select the **Manage** tile. In the **Manage fields** configuration panel, choose the fields that you want to output to the event hub. If you want to add all the fields, select **Add all fields**.
+
+ :::image type="content" source="./media/no-code-enrich-event-hub-data/manage-fields-configuration.png" alt-text="Screenshot that shows the manage field operator configuration." lightbox="./media/no-code-enrich-event-hub-data/manage-fields-configuration.png" :::
+
+1. Select the **Event Hub** tile. In the **Event Hub** configuration panel, fill in the required parameters and connect, similar to the input event hub configuration.
+
+1. Optionally, select **Get static preview/Refresh static preview** to see the data preview that will be ingested into the event hub.
+ :::image type="content" source="./media/no-code-enrich-event-hub-data/no-code-output-static-preview.png" alt-text="Screenshot showing the Get static preview/Refresh static preview option." lightbox="./media/no-code-enrich-event-hub-data/no-code-output-static-preview.png" :::
+
+1. Select **Save** and then select **Start** to run the Stream Analytics job.
+ :::image type="content" source="./media/no-code-enrich-event-hub-data/no-code-save-start.png" alt-text="Screenshot showing the Save and Start options." lightbox="./media/no-code-enrich-event-hub-data/no-code-save-start.png" :::
+
+1. To start the job, specify:
+    - The number of **Streaming Units (SUs)** the job runs with. SUs represent the amount of compute and memory allocated to the job. We recommend that you start with three and then adjust as needed.
+    - **Output data error handling** – This setting specifies the behavior you want when a job's output to your destination fails due to data errors. By default, your job retries until the write operation succeeds. You can also choose to drop such output events.
+ :::image type="content" source="./media/no-code-enrich-event-hub-data/no-code-start-job.png" alt-text="Screenshot showing the Start Stream Analytics job options where you can change the output time, set the number of streaming units, and select the Output data error handling options." lightbox="./media/no-code-enrich-event-hub-data/no-code-start-job.png" :::
+
+1. After you select **Start**, the job starts running within two minutes, and the metrics open in the tab section below.
+
+ :::image type="content" source="./media/no-code-enrich-event-hub-data/job-metrics-after-started.png" alt-text="Screenshot that shows the job metrics data after it's started." lightbox="./media/no-code-enrich-event-hub-data/job-metrics-after-started.png" :::
+
+ You can also see the job under the Process Data section on the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it or stop and restart it, as needed.
+
+ :::image type="content" source="./media/no-code-enrich-event-hub-data/no-code-list-jobs.png" alt-text="Screenshot of the Stream Analytics jobs tab where you view the running jobs status." lightbox="./media/no-code-enrich-event-hub-data/no-code-list-jobs.png" :::
+
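+For reference, the following is a rough sketch of the kind of query that this enrichment job corresponds to. The input, reference data, output, and field names (`eventhubinput`, `sqlreferencedata`, `eventhuboutput`, `deviceId`, and so on) are hypothetical placeholders, not names generated by the editor; substitute the inputs, output, and join fields that you configured in the steps above.
+
+```sql
+-- Hypothetical sketch: enrich streaming events with SQL reference data,
+-- then write the joined rows to the output event hub.
+SELECT
+    stream.deviceId,
+    stream.temperature,
+    refdata.deviceModel,
+    refdata.location
+INTO eventhuboutput
+FROM eventhubinput AS stream
+JOIN sqlreferencedata AS refdata
+    ON stream.deviceId = refdata.deviceId
+```
+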
+## Next steps
+
+Learn more about Azure Stream Analytics and how to monitor the job you've created.
+
+* [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
+* [Monitor Stream Analytics job with Azure portal](stream-analytics-monitoring.md)
stream-analytics No Code Filter Ingest Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-filter-ingest-data-explorer.md
+
+ Title: Filter and ingest to Azure Data Explorer using the Stream Analytics no code editor
+description: Learn how to use the no code editor to easily create a Stream Analytics job. It continuously reads from Event Hubs, filters the incoming data, and then writes the results continuously to Azure Data Explorer.
+++++ Last updated : 10/12/2022++
+# Filter and ingest to Azure Data Explorer using the Stream Analytics no code editor
+
+This article describes how you can use the no code editor to easily create a Stream Analytics job. It continuously reads from your Event Hubs, filters the incoming data, and then writes the results continuously to Azure Data Explorer.
+
+## Prerequisites
+
+- Your Azure Event Hubs and Azure Data Explorer resources must be publicly accessible and not be behind a firewall or secured in an Azure Virtual Network.
+- The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format.
+
+## Develop a Stream Analytics job to filter and ingest real-time data
+
+1. In the [Azure portal](https://portal.azure.com), locate and select the Azure Event Hubs instance.
+1. Select **Features** > **Process Data** and then select **Start** on the **Filter and store data to Azure Data Explorer** card.
+
+    :::image type="content" source="./media/no-code-filter-ingest-data-explorer/event-hub-process-data-templates.png" alt-text="Screenshot showing the Filter and store data to Azure Data Explorer card where you select Start." lightbox="./media/no-code-filter-ingest-data-explorer/event-hub-process-data-templates.png" :::
+
+1. Enter a name for the Stream Analytics job, then select **Create**.
+
+ :::image type="content" source="./media/no-code-filter-ingest-data-explorer/create-new-stream-analytics-job.png" alt-text="Screenshot showing where to enter a job name." lightbox="./media/no-code-filter-ingest-data-explorer/create-new-stream-analytics-job.png" :::
+
+1. Specify the **Serialization type** of your data in the Event Hubs window and the **Authentication method** that the job will use to connect to the Event Hubs. Then select **Connect**.
+ :::image type="content" source="./media/no-code-filter-ingest-data-explorer/event-hub-configuration.png" alt-text="Screenshot showing the Event Hubs connection configuration." lightbox="./media/no-code-filter-ingest-data-explorer/event-hub-configuration.png" :::
+
+1. When the connection is established successfully and you have data streams flowing into your Event Hubs instance, you'll immediately see two things:
+ - Fields that are present in the input data. You can choose **Add field** or select the three dot symbol next to a field to remove, rename, or change its type.
+ :::image type="content" source="./media/no-code-filter-ingest-data-explorer/no-code-schema.png" alt-text="Screenshot showing the Event Hubs field list where you can remove, rename, or change the field type." lightbox="./media/no-code-filter-ingest-data-explorer/no-code-schema.png" :::
+ - A live sample of incoming data in the **Data preview** table under the diagram view. It automatically refreshes periodically. You can select **Pause streaming preview** to see a static view of the sample input data.
+ :::image type="content" source="./media/no-code-filter-ingest-data-explorer/no-code-sample-input.png" alt-text="Screenshot showing sample data under Data Preview." lightbox="./media/no-code-filter-ingest-data-explorer/no-code-sample-input.png" :::
+
+1. Select the **Filter** tile to filter the data. In the **Filter** area, select a field and a condition to filter the incoming data. A sketch of the roughly equivalent query appears after this procedure.
+
+ :::image type="content" source="./media/no-code-filter-ingest-data-explorer/filter-operator-configuration.png" alt-text="Screenshot that shows the filter operator configuration." lightbox="./media/no-code-filter-ingest-data-explorer/filter-operator-configuration.png" :::
+
+1. Select the **Manage** tile. In the **Manage fields** configuration panel, choose the fields that you want to output to Azure Data Explorer. If you want to add all the fields, select **Add all fields**.
+
+ :::image type="content" source="./media/no-code-filter-ingest-data-explorer/manage-fields-configuration.png" alt-text="Screenshot that shows the manage field operator configuration." lightbox="./media/no-code-filter-ingest-data-explorer/manage-fields-configuration.png" :::
+
+1. Select the **Azure Data Explorer** tile. In the configuration panel, fill in the required parameters and connect.
+
+ > [!NOTE]
+ > The table must exist in your selected database and the table schema must exactly match the number of fields and their types that your data preview generates.
+
+ :::image type="content" source="./media/no-code-filter-ingest-data-explorer/adx-output-configuration.png" alt-text="Screenshot that shows the Kusto output configuration." lightbox="./media/no-code-filter-ingest-data-explorer/adx-output-configuration.png" :::
+
+1. Optionally, select **Get static preview/Refresh static preview** to see the data preview that will be ingested into Azure Data Explorer.
+ :::image type="content" source="./media/no-code-filter-ingest-data-explorer/no-code-output-static-preview.png" alt-text="Screenshot showing the Get static preview/Refresh static preview option." lightbox="./media/no-code-filter-ingest-data-explorer/no-code-output-static-preview.png" :::
+
+1. Select **Save** and then select **Start** to run the Stream Analytics job.
+ :::image type="content" source="./media/no-code-filter-ingest-data-explorer/no-code-save-start.png" alt-text="Screenshot showing the Save and Start options." lightbox="./media/no-code-filter-ingest-data-explorer/no-code-save-start.png" :::
+
+1. To start the job, specify:
+    - The number of **Streaming Units (SUs)** the job runs with. SUs represent the amount of compute and memory allocated to the job. We recommend that you start with three and then adjust as needed.
+    - **Output data error handling** – This setting specifies the behavior you want when a job's output to your destination fails due to data errors. By default, your job retries until the write operation succeeds. You can also choose to drop such output events.
+ :::image type="content" source="./media/no-code-filter-ingest-data-explorer/no-code-start-job.png" alt-text="Screenshot showing the Start Stream Analytics job options where you can change the output time, set the number of streaming units, and select the Output data error handling options." lightbox="./media/no-code-filter-ingest-data-explorer/no-code-start-job.png" :::
+
+1. After you select **Start**, the job starts running within two minutes, and the metrics open in the tab section below.
+
+ :::image type="content" source="./media/no-code-filter-ingest-data-explorer/job-metrics-after-started.png" alt-text="Screenshot that shows the job metrics data after it's started." lightbox="./media/no-code-filter-ingest-data-explorer/job-metrics-after-started.png" :::
+
+ You can also see the job under the Process Data section on the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it or stop and restart it, as needed.
+
+ :::image type="content" source="./media/no-code-filter-ingest-data-explorer/no-code-list-jobs.png" alt-text="Screenshot of the Stream Analytics jobs tab where you view the running jobs status." lightbox="./media/no-code-filter-ingest-data-explorer/no-code-list-jobs.png" :::
+
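+If it helps to think of the job in query terms, the filter step corresponds roughly to a WHERE clause, as in the following sketch. The input, output, and field names (`eventhubinput`, `adxoutput`, `temperature`) and the condition are hypothetical placeholders; substitute the names and condition that you configured above.
+
+```sql
+-- Hypothetical sketch: keep only the events that satisfy the filter
+-- condition and write them to the Azure Data Explorer output.
+SELECT
+    deviceId,
+    temperature,
+    EventEnqueuedUtcTime
+INTO adxoutput
+FROM eventhubinput
+WHERE temperature > 27
+```
+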
+## Next steps
+
+Learn more about Azure Stream Analytics and how to monitor the job you've created.
+
+* [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
+* [Monitor Stream Analytics job with Azure portal](stream-analytics-monitoring.md)
stream-analytics No Code Materialize Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-materialize-cosmos-db.md
Title: Materialize data in Azure Cosmos DB using no code editor
-description: Learn how to use the no code editor in Stream Analytics to materialize data from Event Hubs to Cosmos DB.
+description: Learn how to use the no code editor in Stream Analytics to materialize data from Event Hubs to Azure Cosmos DB.
-+ Last updated 05/12/2022 # Materialize data in Azure Cosmos DB using the Stream Analytics no code editor
This article describes how you can use the no code editor to easily create a Str
## Develop a Stream Analytics job
-Use the following steps to develop a Stream Analytics job to materialize data in Cosmos DB.
+Use the following steps to develop a Stream Analytics job to materialize data in Azure Cosmos DB.
1. In the Azure portal, locate and select your Azure Event Hubs instance.
-2. Under **Features**, select **Process Data**. Then, select **Start** in the card titled **Materialize Data in Cosmos DB**.
+2. Under **Features**, select **Process Data**. Then, select **Start** in the card titled **Materialize Data in Azure Cosmos DB**.
:::image type="content" source="./media/no-code-materialize-cosmos-db/no-code-materialize-view-start.png" alt-text="Screenshot showing the Start Materialize Data Flow." lightbox="./media/no-code-materialize-cosmos-db/no-code-materialize-view-start.png" ::: 3. Enter a name for your job and select **Create**. 4. Specify the **Serialization** type of your data in the event hub and the **Authentication method** that the job will use to connect to the Event Hubs. Then select **Connect**.
stream-analytics No Code Stream Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-stream-processing.md
- Previously updated : 08/26/2022+ Last updated : 10/12/2022
-# No-code stream processing through Azure Stream Analytics (preview)
+# No-code stream processing through Azure Stream Analytics
You can process your real-time data streams in Azure Event Hubs by using Azure Stream Analytics. The no-code editor allows you to develop a Stream Analytics job without writing a single line of code. In minutes, you can develop and run a job that tackles many scenarios, including:
After you set up your event hub's details and select **Connect**, you can add fi
When Stream Analytics jobs detect the fields, you'll see them in the list. You'll also see a live preview of the incoming messages in the **Data Preview** table under the diagram view.
-You can always edit the field names, or remove or change the data type, by selecting the three-dot symbol next to each field. You can also expand, select, and edit any nested fields from the incoming messages, as shown in the following image.
+You can always edit the field names, remove or change the data type, or change the event time (**Mark as event time**, which applies a TIMESTAMP BY clause to a datetime field) by selecting the three-dot symbol next to each field. You can also expand, select, and edit any nested fields from the incoming messages, as shown in the following image.
:::image type="content" source="./media/no-code-stream-processing/event-hub-schema.png" alt-text="Screenshot that shows selections for adding, removing, and editing the fields for an event hub." lightbox="./media/no-code-stream-processing/event-hub-schema.png" :::
The **Manage fields** transformation allows you to add, remove, or rename fields
:::image type="content" source="./media/no-code-stream-processing/manage-field-transformation.png" alt-text="Screenshot that shows selections for managing fields." lightbox="./media/no-code-stream-processing/manage-field-transformation.png" :::
+You can also add a new field by applying a **Built-in Function** to the upstream data. Currently, the supported built-in functions include some of the **String Functions**, **Date and Time Functions**, and **Mathematical Functions**. To learn more about the definitions of these functions, see [Built-in Functions (Azure Stream Analytics)](/stream-analytics-query/built-in-functions-azure-stream-analytics).
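+
+As a rough illustration, a field added with a built-in function corresponds to a computed column in the job's query, as in the following sketch. The input, output, and field names and the function choices here are hypothetical; only the functions listed in the editor are available.
+
+```sql
+-- Hypothetical sketch: new fields computed with built-in functions.
+SELECT
+    deviceId,
+    UPPER(deviceId) AS deviceIdUpper,                      -- string function
+    ROUND(temperature, 1) AS roundedTemperature,           -- mathematical function
+    DATEADD(hour, 8, EventEnqueuedUtcTime) AS shiftedTime  -- date and time function
+INTO youroutput
+FROM eventhubinput
+```
+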
++ > [!TIP] > After you configure a tile, the diagram view gives you a glimpse of the settings within the tile. For example, in the **Manage fields** area of the preceding image, you can see the first three fields being managed and the new names assigned to them. Each tile has information that's relevant to it.
Under the **Outputs** section on the ribbon, select **CosmosDB** as the output f
When you're connecting to Azure Cosmos DB, if you select **Managed Identity** as the authentication mode, then the Contributor role will be granted to the managed identity for the Stream Analytics job. To learn more about managed identities for Azure Cosmos DB, see [Use managed identities to access Azure Cosmos DB from an Azure Stream Analytics job (preview)](cosmos-db-managed-identity.md).
-Managed identities eliminate the limitations of user-based authentication methods. These limitations include the need to reauthenticate because of password changes or user token expirations that occur every 90 days.
-
-![Screenshot that shows selecting managed identity as the authentication method for Azure Cosmos DB.](./media/no-code-stream-processing/msi-cosmosdb-nocode.png)
+The managed identity authentication method is also supported for the Azure Cosmos DB output in the no-code editor, and it provides the same benefits as described for the ADLS Gen2 output above.
### Azure SQL Database [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) is a fully managed platform as a service (PaaS) database engine that can help you to create a highly available and high-performance data storage layer for the applications and solutions in Azure. By using the no-code editor, you can configure Azure Stream Analytics jobs to write the processed data to an existing table in SQL Database.
+To configure Azure SQL Database as output, select **SQL Database** under the **Outputs** section on the ribbon. Then fill in the needed information to connect to your SQL database and select the table that you want to write data to.
+ > [!IMPORTANT] > The Azure SQL Database table must exist before you can add it as output to your Stream Analytics job. The table's schema must match the fields and their types in your job's output.
-To configure Azure SQL Database as output, select **SQL Database** under the **Outputs** section on the ribbon. Then fill in the needed information to connect your SQL database and select the table that you want to write data to.
- For more information about Azure SQL Database output for a Stream Analytics job, see [Azure SQL Database output from Azure Stream Analytics](./sql-database-output.md).
+### Event Hubs
+
+With real-time data coming from an event hub into Azure Stream Analytics, the no-code editor can transform and enrich the data and then output it to another event hub. You can choose the **Event Hub** output when you configure your Azure Stream Analytics job.
+
+To configure an event hub as output, select **Event Hub** under the Outputs section on the ribbon. Then fill in the needed information to connect to the event hub that you want to write data to.
+
+For more information about Event Hub output for a Stream Analytics job, see [Event Hubs output from Azure Stream Analytics](./event-hubs-output.md).
+
+### Azure Data Explorer
+
+Azure Data Explorer is a fully managed, high-performance, big data analytics platform that makes it easy to analyze high volumes of data. You can use [Azure Data Explorer](https://azure.microsoft.com/services/data-explorer/) as an output for your Azure Stream Analytics job by using the no-code editor.
+
+To configure Azure Data Explorer as output, select **Azure Data Explorer** under the Outputs section on the ribbon. Then fill in the needed information to connect to your Azure Data Explorer database and specify the table that you want to write data to.
+
+> [!IMPORTANT]
+> The table must exist in your selected database and the table's schema must exactly match the fields and their types in your job's output.
+
+For more information about Azure Data Explorer output for a Stream Analytics job, see [Azure Data Explorer output from Azure Stream Analytics (Preview)](./azure-database-explorer-output.md).
+ ## Data preview, authoring errors, runtime logs, and metrics The no-code drag-and-drop experience provides tools to help you author, troubleshoot, and evaluate the performance of your analytics pipeline for streaming data.
Learn how to use the no-code editor to address common scenarios by using predefi
- [Filter and ingest to Azure Synapse SQL](filter-ingest-synapse-sql.md) - [Filter and ingest to Azure Data Lake Storage Gen2](filter-ingest-data-lake-storage-gen2.md) - [Materialize data to Azure Cosmos DB](no-code-materialize-cosmos-db.md)
+- [Transform and store data to SQL database](no-code-transform-filter-ingest-sql.md)
+- [Filter and store data to Azure Data Explorer](no-code-filter-ingest-data-explorer.md)
+- [Enrich data and ingest to Event Hub](no-code-enrich-event-hub-data.md)
stream-analytics No Code Transform Filter Ingest Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-transform-filter-ingest-sql.md
+
+ Title: Transform and store data to Azure SQL database using the Stream Analytics no code editor
+description: Learn how to use the no code editor to easily create a Stream Analytics job. It continuously reads from Event Hubs, transforms the incoming data, and then writes the results continuously to an Azure SQL database.
+++++ Last updated : 10/12/2022++
+# Transform and store data to Azure SQL database using the Stream Analytics no code editor
+
+This article describes how you can use the no code editor to easily create a Stream Analytics job. The job continuously reads from your Event Hubs instance, transforms the incoming data, and then continuously writes the results to an Azure SQL database.
+
+## Prerequisites
+
+- Your Azure Event Hubs and Azure SQL database resources must be publicly accessible and not be behind a firewall or secured in an Azure Virtual Network.
+- The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format.
+
+## Develop a Stream Analytics job to transform event hub data and store to SQL database
+
+1. In the [Azure portal](https://portal.azure.com), locate and select the Azure Event Hubs instance.
+1. Select **Features** > **Process Data** and then select **Start** on the **Transform and store data to SQL database** card.
+
+    :::image type="content" source="./media/no-code-transform-filter-ingest-sql/event-hub-process-data-templates.png" alt-text="Screenshot showing the Transform and store data to SQL database card where you select Start." lightbox="./media/no-code-transform-filter-ingest-sql/event-hub-process-data-templates.png" :::
+
+1. Enter a name for the Stream Analytics job, then select **Create**.
+
+ :::image type="content" source="./media/no-code-transform-filter-ingest-sql/create-new-stream-analytics-job.png" alt-text="Screenshot showing where to enter a job name." lightbox="./media/no-code-transform-filter-ingest-sql/create-new-stream-analytics-job.png" :::
+
+1. Specify the **Serialization type** of your data in the Event Hubs window and the **Authentication method** that the job will use to connect to the Event Hubs. Then select **Connect**.
+ :::image type="content" source="./media/no-code-transform-filter-ingest-sql/event-hub-configuration.png" alt-text="Screenshot showing the Event Hubs connection configuration." lightbox="./media/no-code-transform-filter-ingest-sql/event-hub-configuration.png" :::
+
+1. When the connection is established successfully and you have data streams flowing into your Event Hubs instance, you'll immediately see two things:
+ - Fields that are present in the input data. You can choose **Add field** or select the three dot symbol next to a field to remove, rename, or change its type.
+ :::image type="content" source="./media/no-code-transform-filter-ingest-sql/no-code-schema.png" alt-text="Screenshot showing the Event Hubs field list where you can remove, rename, or change the field type." lightbox="./media/no-code-transform-filter-ingest-sql/no-code-schema.png" :::
+ - A live sample of incoming data in the **Data preview** table under the diagram view. It automatically refreshes periodically. You can select **Pause streaming preview** to see a static view of the sample input data.
+ :::image type="content" source="./media/no-code-transform-filter-ingest-sql/no-code-sample-input.png" alt-text="Screenshot showing sample data under Data Preview." lightbox="./media/no-code-transform-filter-ingest-sql/no-code-sample-input.png" :::
+
+1. Select the **Group by** tile to aggregate the data. In the **Group by** configuration panel, you can specify the field that you want to **Group By** along with the **Time window**. Then you can validate the results of the step in the **Data preview** section. A sketch of the roughly equivalent query appears after this procedure.
+
+ :::image type="content" source="./media/no-code-transform-filter-ingest-sql/group-by-operation-configuration.png" alt-text="Screenshot that shows the group by operator configuration." lightbox="./media/no-code-transform-filter-ingest-sql/group-by-operation-configuration.png" :::
+
+1. Select the **Manage** tile. In the **Manage fields** configuration panel, choose the fields that you want to output to the SQL database. If you want to add all the fields, select **Add all fields**.
+
+ :::image type="content" source="./media/no-code-transform-filter-ingest-sql/manage-fields-configuration.png" alt-text="Screenshot that shows the manage field operator configuration." lightbox="./media/no-code-transform-filter-ingest-sql/manage-fields-configuration.png" :::
+
+1. Select the **SQL** tile. In the **SQL Database** configuration panel, fill in the required parameters and connect.
+
+ > [!NOTE]
+ > The schema of the table you choose to write must exactly match the number of fields and their types that your data preview generates.
+
+ :::image type="content" source="./media/no-code-transform-filter-ingest-sql/sql-output-configuration.png" alt-text="Screenshot that shows the sql database output configuration." lightbox="./media/no-code-transform-filter-ingest-sql/sql-output-configuration.png" :::
+
+1. Optionally, select **Get static preview/Refresh static preview** to see the data preview that will be ingested into the SQL database.
+ :::image type="content" source="./media/no-code-transform-filter-ingest-sql/no-code-output-static-preview.png" alt-text="Screenshot showing the Get static preview/Refresh static preview option." lightbox="./media/no-code-transform-filter-ingest-sql/no-code-output-static-preview.png" :::
+
+1. Select **Save** and then select **Start** to run the Stream Analytics job.
+ :::image type="content" source="./media/no-code-transform-filter-ingest-sql/no-code-save-start.png" alt-text="Screenshot showing the Save and Start options." lightbox="./media/no-code-transform-filter-ingest-sql/no-code-save-start.png" :::
+
+1. To start the job, specify:
+    - The number of **Streaming Units (SUs)** the job runs with. SUs represent the amount of compute and memory allocated to the job. We recommend that you start with three and then adjust as needed.
+    - **Output data error handling** – This setting specifies the behavior you want when a job's output to your destination fails due to data errors. By default, your job retries until the write operation succeeds. You can also choose to drop such output events.
+ :::image type="content" source="./media/no-code-transform-filter-ingest-sql/no-code-start-job.png" alt-text="Screenshot showing the Start Stream Analytics job options where you can change the output time, set the number of streaming units, and select the Output data error handling options." lightbox="./media/no-code-transform-filter-ingest-sql/no-code-start-job.png" :::
+
+1. After you select **Start**, the job starts running within two minutes, and the metrics open in the tab section below.
+
+ :::image type="content" source="./media/no-code-transform-filter-ingest-sql/job-metrics-after-started.png" alt-text="Screenshot that shows the job metrics after it is started." lightbox="./media/no-code-transform-filter-ingest-sql/job-metrics-after-started.png" :::
+
+ You can also see the job under the Process Data section on the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it or stop and restart it, as needed.
+
+ :::image type="content" source="./media/no-code-transform-filter-ingest-sql/no-code-list-jobs.png" alt-text="Screenshot of the Stream Analytics jobs tab where you view the running jobs status." lightbox="./media/no-code-transform-filter-ingest-sql/no-code-list-jobs.png" :::
+
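+For reference, the **Group by** and **Manage fields** steps correspond roughly to an aggregation query like the following sketch. The input and output names (`eventhubinput`, `sqloutput`), the fields, and the 5-minute tumbling window are hypothetical; the actual grouping field, window type, and window size come from what you configured in the **Group by** panel.
+
+```sql
+-- Hypothetical sketch: aggregate events per device over a tumbling window
+-- and write the results to the SQL Database output.
+SELECT
+    deviceId,
+    AVG(temperature) AS averageTemperature,
+    COUNT(*) AS eventCount,
+    System.Timestamp() AS windowEndTime
+INTO sqloutput
+FROM eventhubinput TIMESTAMP BY EventEnqueuedUtcTime
+GROUP BY deviceId, TumblingWindow(minute, 5)
+```
+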
+## Next steps
+
+Learn more about Azure Stream Analytics and how to monitor the job you've created.
+
+* [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
+* [Monitor Stream Analytics job with Azure portal](stream-analytics-monitoring.md)
stream-analytics Optimize Query Using Job Diagram Simulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/optimize-query-using-job-diagram-simulator.md
+
+ Title: Optimize query performance using job diagram simulator (preview)
+description: This article provides guidance for evaluating query parallelism and optimizing query execution using a job diagram simulator in Visual Studio Code.
+++++ Last updated : 10/12/2022++
+# Optimize query using job diagram simulator
+
+One way to improve the performance of an Azure Stream Analytics job is to apply parallelism in the query. This article demonstrates how to use the job diagram simulator in Visual Studio Code (VSCode) to evaluate the query parallelism for a Stream Analytics job. You learn to visualize a query execution with different numbers of streaming units and to improve query parallelism based on the edit suggestions.
+
+## What is a parallel query?
+
+Query parallelism divides the workload of a query across multiple processes (or streaming nodes) and executes them in parallel. It greatly reduces the overall execution time of the query, so fewer streaming hours are needed.
+
+For a job to be parallel, all inputs, outputs and query steps must be aligned and use the same partition keys. The query logic partitioning is determined by the keys used for aggregations (GROUP BY).
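+
+As a simple illustration (the input and output names below are hypothetical, and both are assumed to be event hubs with the same number of partitions), the following pass-through query is fully parallel because every event stays within its own partition from input to output:
+
+```sql
+-- Hypothetical sketch of an embarrassingly parallel query: each input
+-- partition is read, processed, and written independently.
+SELECT *
+INTO eventhuboutput
+FROM eventhubinput
+PARTITION BY PartitionId
+```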
+
+If you want to learn more about query parallelization, see [Leverage query parallelization in Azure Stream Analytics](stream-analytics-parallelization.md).
+
+## How to use the job diagram simulator?
+
+You can find the job diagram simulator in the Azure Stream Analytics tools extension for VSCode. If you haven't installed it yet, follow [this guide](quick-create-visual-studio-code.md) to install it.
+
+In this tutorial, you learn to improve query performance based on the edit suggestions so that the query runs in parallel. As an example, we're using a non-parallel job that takes the input data from an event hub and sends the results to another event hub.
++
+1. After you finish authoring your query, open the Azure Stream Analytics project in VSCode, go to the query file **\*.asaql**, and select **Simulate job** to start the job diagram simulation.
+
+ :::image type="content" source="./media/job-diagram-simulator/query-file-simulate-job.png" alt-text="Screenshot of the VSCode opening job diagram simulator in query file." lightbox= "./media/job-diagram-simulator/query-file-simulate-job.png" :::
+
+1. Under the **Diagram** tab, you can see the topology of the Azure Stream Analytics job. It shows the number of streaming nodes allocated to the job and the number of partitions in each streaming node. The following job diagram shows that this job isn't parallel because there's data interaction between the two nodes.
+
+ :::image type="content" source="./media/job-diagram-simulator/diagram-tab.png" alt-text="Screenshot of the VSCode using job diagram simulator and showing job topology." lightbox= "./media/job-diagram-simulator/diagram-tab.png" :::
+
+1. Because this query is **not** parallel, you can select the **Enhancements** tab and view suggestions for optimizing query parallelism.
+
+ :::image type="content" source="./media/job-diagram-simulator/edit-suggestions.png" alt-text="Screenshot of the VSCode using job diagram simulator and showing the query edit suggestions." lightbox= "./media/job-diagram-simulator/edit-suggestions.png" :::
+
+1. Select a query step in the enhancements list. The corresponding lines are highlighted, and you can edit the query based on the suggestions.
+
+    :::image type="content" source="./media/job-diagram-simulator/query-highlight.png" alt-text="Screenshot of the VSCode using job diagram simulator and highlighting the query step." lightbox= "./media/job-diagram-simulator/query-highlight.png" :::
+
+    > [!NOTE]
+    > These are edit suggestions for improving your query parallelism. However, if you're using an aggregate function across all partitions, a parallel query might not be applicable to your business scenario.
+
+1. For this example, you add **PartitionId** to line 22 and save your change. Then you can select **Refresh simulation** to get the new diagram shown below. An example sketch of this kind of change appears after this procedure.
+
+ :::image type="content" source="./media/job-diagram-simulator/refresh-simulation-after-update-query.png" alt-text="Screenshot that shows the refresh diagram after updating query." lightbox= "./media/job-diagram-simulator/refresh-simulation-after-update-query.png" :::
++
+1. You can also adjust **Streaming Units** to simulate how streaming nodes are allocated with different SUs. It gives you an idea of how many SUs you need to maximize the query performance.
+
+ :::image type="content" source="./media/job-diagram-simulator/job-diagram-simulator-adjust-su.png" alt-text="Screenshot of the VSCode using SU adjuster." lightbox= "./media/job-diagram-simulator/job-diagram-simulator-adjust-su.png" :::
+++
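+
+As an example of the kind of change that such a suggestion asks for, the sketch below adds `PartitionId` to the grouping key so that the aggregation stays within each input partition. The input, output, and field names (`eventhubinput`, `eventhuboutput`, `TollId`, `EntryTime`) and the window size are hypothetical placeholders, not the actual contents of the sample query.
+
+```sql
+-- Before (not parallel): the aggregation spans all input partitions.
+--   GROUP BY TollId, TumblingWindow(minute, 3)
+
+-- After (parallel): PartitionId is part of the grouping key, so each
+-- input partition can be aggregated independently on its own node.
+SELECT PartitionId, TollId, COUNT(*) AS tollCount
+INTO eventhuboutput
+FROM eventhubinput TIMESTAMP BY EntryTime
+GROUP BY PartitionId, TollId, TumblingWindow(minute, 3)
+```
+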
+## Enhancement suggestions
+
+Here are the explanations for the enhancement suggestions:
+
+| **Type** | **Meaning** |
+| | |
+| Customized partition not supported | Change input 'xxx' partition key to 'xxx'. |
+| Number of partitions not matched | Input and output must have the same number of partitions. |
+| Partition keys not matched | Input, output and each query step must use the same partition key. |
+| Number of input partitions not matched | All inputs must have the same number of partitions. |
+| Input partition keys not matched | All inputs must use the same partition key. |
+| Low compatibility level | Upgrade **CompatibilityLevel** on **JobConfig.json** file. |
+| Output partition key not found | You need to use the specified partition key for the output. |
+| Customized partition not supported | You can only use pre-defined partition keys. |
+| Query step not using partition | Your query isn't using any PARTITION BY clause. |
++
+## Next steps
+If you want to learn more about query parallelization and the job diagram, check out these tutorials:
+
+* [Debug queries using job diagram](debug-locally-using-job-diagram-vs-code.md)
+* [Stream Analytics job diagram (preview) in Azure portal](./job-diagram-with-metrics.md)
+* [Debugging with the physical job diagram (preview) in Azure portal](./stream-analytics-job-physical-diagram-with-metrics.md).
stream-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Stream Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
stream-analytics Sql Database Upsert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-database-upsert.md
Title: Update or merge records in Azure SQL Database with Azure Functions description: This article describes how to use Azure Functions to update or merge records from Azure Stream Analytics to Azure SQL Database + Last updated 12/03/2021
For Synapse SQL, ASA can insert into a [staging table](../synapse-analytics/sql/
### Pre-processing in Azure Cosmos DB
-Azure Cosmos DB [supports UPSERT natively](./stream-analytics-documentdb-output.md#upserts-from-stream-analytics). Here only append/replace is possible. Accumulations must be managed client-side in Cosmos DB.
+Azure Cosmos DB [supports UPSERT natively](./stream-analytics-documentdb-output.md#upserts-from-stream-analytics). Here only append/replace is possible. Accumulations must be managed client-side in Azure Cosmos DB.
If the requirements match, an option is to replace the target SQL database by an Azure Cosmos DB instance. Doing so requires an important change in the overall solution architecture.
-For Synapse SQL, Cosmos DB can be used as an intermediary layer via [Azure Synapse Link for Azure Cosmos DB](../cosmos-db/synapse-link.md). Synapse Link can be used to create an [analytical store](../cosmos-db/analytical-store-introduction.md). This data store can then be queried directly in Synapse SQL.
+For Synapse SQL, Azure Cosmos DB can be used as an intermediary layer via [Azure Synapse Link for Azure Cosmos DB](../cosmos-db/synapse-link.md). Synapse Link can be used to create an [analytical store](../cosmos-db/analytical-store-introduction.md). This data store can then be queried directly in Synapse SQL.
### Comparison of the alternatives
Each approach offers different value proposition and capabilities:
||Staging|Replace, Accumulate|+|+| |Pre-Processing||||| ||Azure Functions|Replace, Accumulate|+|- (row-by-row performance)|
-||Cosmos DB replacement|Replace|N/A|N/A|
-||Cosmos DB Synapse Link|Replace|N/A|+|
+||Azure Cosmos DB replacement|Replace|N/A|N/A|
+||Azure Cosmos DB Synapse Link|Replace|N/A|+|
## Get support
stream-analytics Stream Analytics Build An Iot Solution Using Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-build-an-iot-solution-using-stream-analytics.md
Last updated 12/06/2018-+ # Build an IoT solution by using Stream Analytics
There are several resources that can easily be deployed in a resource group toge
2. Locate the Resource Group that you named in the previous section. 3. Verify that the following resources are listed in the resource group:
- - One Cosmos DB Account
+ - One Azure Cosmos DB Account
- One Azure Stream Analytics Job - One Azure Storage Account - One Azure Event Hub
There are several resources that can easily be deployed in a resource group toge
- **Registration** input is an Azure Blob storage connection, pointing to a static registration.json file, used for lookups as needed. This reference data input is used in later variations of the query syntax. 4. Examine the Outputs of the TollApp sample job.
- - **Cosmos DB** output is a Cosmos database container that receives the output sink events. Note that this output is used in INTO clause of the streaming query.
+ - **Azure Cosmos DB** output is an Azure Cosmos DB database container that receives the output sink events. Note that this output is used in INTO clause of the streaming query.
## Start the TollApp streaming job Follow these steps to start the streaming job:
Follow these steps to start the streaming job:
3. After a few moments, once the job is running, on the **Overview** page of the streaming job, view the **Monitoring** graph. The graph should show several thousand input events, and tens of output events.
-## Review the CosmosDB output data
+## Review the Azure Cosmos DB output data
1. Locate the resource group that contains the TollApp resources. 2. Select the Azure Cosmos DB Account with the name pattern **tollapp\<random\>-cosmos**.
AND DATEDIFF (minute, EntryStream, ExitStream ) BETWEEN 0 AND 15
7. On the **Start job** pane, select **Now**. ### Review the total time in the output
-Repeat the steps in the preceding section to review the CosmosDB output data from the streaming job. Review the latest JSON documents.
+Repeat the steps in the preceding section to review the Azure Cosmos DB output data from the streaming job. Review the latest JSON documents.
For example, this document shows an example car with a certain license plate, the entrytime and exit time, and the DATEDIFF calculated durationinminutes field showing the toll booth duration as two minutes: ```JSON
WHERE Registration.Expired = '1'
1. Repeat the steps in the preceding section to update the TollApp streaming job query syntax.
-2. Repeat the steps in the preceding section to review the CosmosDB output data from the streaming job.
+2. Repeat the steps in the preceding section to review the Azure Cosmos DB output data from the streaming job.
Example output: ```json
You can access **Activity Logs** from the job dashboard **Settings** area as wel
## Conclusion This solution introduced you to the Azure Stream Analytics service. It demonstrated how to configure inputs and outputs for the Stream Analytics job. Using the Toll Data scenario, the solution explained common types of problems that arise in the space of data in motion and how they can be solved with simple SQL-like queries in Azure Stream Analytics. The solution described SQL extension constructs for working with temporal data. It showed how to join data streams, how to enrich the data stream with static reference data, and how to scale out a query to achieve higher throughput.
-Although this solution provides a good introduction, it is not complete by any means. You can find more query patterns using the SAQL language at [Query examples for common Stream Analytics usage patterns](stream-analytics-stream-analytics-query-patterns.md).
+Although this solution provides a good introduction, it is not complete by any means. You can find more query patterns using the SAQL language at [Query examples for common Stream Analytics usage patterns](stream-analytics-stream-analytics-query-patterns.md).
stream-analytics Stream Analytics Compatibility Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-compatibility-level.md
description: Learn how to set a compatibility level for an Azure Stream Analytic
+ Last updated 03/18/2021
For more information, see [Updates to geospatial features in Azure Stream Analyt
**1.2 level:** If query logic can be parallelized across input source partitions, Azure Stream Analytics creates separate query instances and runs computations in parallel.
-### Native Bulk API integration with CosmosDB output
+### Native Bulk API integration with Azure Cosmos DB output
**Previous levels:** The upsert behavior was *insert or merge*.
-**1.2 level:** Native Bulk API integration with CosmosDB output maximizes throughput and efficiently handles throttling requests. For more information, see [the Azure Stream Analytics output to Azure Cosmos DB page](./stream-analytics-documentdb-output.md#improved-throughput-with-compatibility-level-12).
+**1.2 level:** Native Bulk API integration with Azure Cosmos DB output maximizes throughput and efficiently handles throttling requests. For more information, see [the Azure Stream Analytics output to Azure Cosmos DB page](./stream-analytics-documentdb-output.md#improved-throughput-with-compatibility-level-12).
The upsert behavior is *insert or replace*.
Adding a prefix to built-in aggregates also results in error. For example, `mypr
Using the prefix "system" for any user-defined functions results in error.
-### Disallow Array and Object as key properties in Cosmos DB output adapter
+### Disallow Array and Object as key properties in Azure Cosmos DB output adapter
**Previous levels:** Array and Object types were supported as a key property.
stream-analytics Stream Analytics Documentdb Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-documentdb-output.md
Last updated 09/15/2022-+ # Azure Stream Analytics output to Azure Cosmos DB Azure Stream Analytics can target [Azure Cosmos DB](https://azure.microsoft.com/services/documentdb/) for JSON output, enabling data archiving and low-latency queries on unstructured JSON data. This document covers some best practices for implementing this configuration. We recommend that you set your job to compatability level 1.2 when using Azure Cosmos DB as output.
It's important to choose a partition key property that has a number of distinct
The storage size for documents that belong to the same partition key value is limited to 20 GB (the [physical partition size limit](../cosmos-db/partitioning-overview.md) is 50 GB). An [ideal partition key](../cosmos-db/partitioning-overview.md#choose-partitionkey) is one that appears frequently as a filter in your queries and has sufficient cardinality to ensure that your solution is scalable.
-Partition keys used for Stream Analytics queries and Cosmos DB don't need to be identical. Fully parallel topologies recommend using *Input Partition key*, `PartitionId`, as the Stream Analytics query's partition key but that may not be the recommended choice for a Cosmos DB container's partition key.
+Partition keys used for Stream Analytics queries and Azure Cosmos DB don't need to be identical. Fully parallel topologies recommend using *Input Partition key*, `PartitionId`, as the Stream Analytics query's partition key but that may not be the recommended choice for an Azure Cosmos DB container's partition key.
A partition key is also the boundary for transactions in stored procedures and triggers for Azure Cosmos DB. You should choose the partition key so that documents that occur together in transactions share the same partition key value. The article [Partitioning in Azure Cosmos DB](../cosmos-db/partitioning-overview.md) gives more details on choosing a partition key.
stream-analytics Stream Analytics Job Analysis With Metric Dimensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-analysis-with-metric-dimensions.md
Title: Analyze Stream Analytics job performance by using metrics and dimensions description: This article describes how to use Azure Stream Analytics metrics and dimensions to analyze a job's performance.-+ - Previously updated : 07/07/2022+ Last updated : 10/12/2022 # Analyze Stream Analytics job performance by using metrics and dimensions
stream-analytics Stream Analytics Job Diagram With Metrics New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-diagram-with-metrics-new.md
- Title: Debugging with the job diagram (preview) in Azure portal
-description: This article describes how to troubleshoot your Azure Stream Analytics job with job diagram and metrics in the Azure portal.
----- Previously updated : 07/01/2022--
-# Debugging with the job diagram (preview) in Azure portal
-
-The job diagram in the Azure portal can help you visualize your job's query steps with its input source, output destination, and metrics. You can use the job diagram to examine the metrics for each step and quickly identify the source of a problem when you troubleshoot issues.
-
-The job diagram is also available in Stream Analytics extension for VS Code. It provides the similar functions with more metrics when you debug your job that runs locally on your device. To learn more details, see [Debug Azure Stream Analytics queries locally using job diagram](./debug-locally-using-job-diagram-vs-code.md).
-
-## Using the job diagram
-
-In the Azure portal, while in a Stream Analytics job, under **Support + troubleshooting**, select **Job diagram (preview)**:
---
-The job level default metrics such as Watermark delay, Input events, Output Events, and Backlogged Input Events are shown in the chart section for the latest 30 minutes. You can visualize other metrics in a chart by selecting them in the left pane.
--
-If you select one of the nodes in diagram section, the metrics data and the metrics options in the chart section will be filtered according to the selected node's properties. For example, if you select the input node, only the input node related metrics and its options are shown:
--
-To see the query script snippet that is mapping the corresponding query step, click the **'{}'** in the query step node as below:
--
-To see the job overview information summary, click the **Job Summary** button in right side.
--
-It also provides the job operation actions in the menu section. You can use them to stop the job (**Stop** button), refresh the metrics data (**Refresh** button), and change the metrics time range (**Time range**).
--
-## Troubleshoot with metrics
-
-A job's metrics provides lots of insights to your job's health. You can view these metrics through the job diagram in its chart section in job level or in the step level. To learn about Stream Analytics job metrics definition, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md). Job diagram integrates these metrics into the query steps (diagram). You can use these metrics within steps to monitor and analyze your job.
-
-### Is the job running well with its computation resource?
-
-* **SU (Memory) % utilization** is the percentage of memory utilized by your job. If SU (Memory) % utilization is consistently over 80%, it shows the job is approaching to the maximum allocated memory.
-* **CPU % utilization** is the percentage of CPU utilized by your job. There might be spikes intermittently for this metric. Thus, we often check its average percentage data. High CPU utilization indicates that there might be CPU bottleneck if the number of backlogged input events or watermark delay increases at the same time.
-
-
-### How much data is being read?
-
-The input data related metrics can be viewed under **Input** category in the chart section. They're available in the step of the input.
-* **Input events** is the number of data events read.
-* **Input events bytes** is the number of event bytes read. This can be used to validate that events are being sent to the input source.
-* **Input source received** is the number of messages read by the job.
-
-### Are there any errors in data processing?
-
-* **Deserialization errors** is the number of input events that couldn't be deserialized.
-* **Data conversion errors** is the number of output events that couldn't be converted to the expected output schema.
-* **Runtime errors** is the total number of errors related to query processing (excluding errors found while ingesting events or outputting results).
-
-### Are there any events out of order that are being dropped or adjusted?
-
-* **Out of order events** is the number of events received out of order that were either dropped or given an adjusted timestamp, based on the Event Ordering Policy. This can be impacted by the configuration of the **"Out of order events"** setting under **Event ordering** section in Azure portal.
-
-### Is the job falling behind in processing input data streams?
-
-* **Backlogged input events** tells you how many more messages from the input need to be processed. When this number is consistently greater than 0, it means your job can't process the data as fast as it's coming in. In this case you may need to increase the number of Streaming Units and/or make sure your job can be parallelized. You can see more info on this in the [query parallelization page](./stream-analytics-parallelization.md).
--
-## Get help
-For more assistance, try our [Microsoft Q&A question page for Azure Stream Analytics](/answers/topics/azure-stream-analytics.html).
-
-## Next steps
-* [Introduction to Stream Analytics](stream-analytics-introduction.md)
-* [Get started with Stream Analytics](stream-analytics-real-time-fraud-detection.md)
-* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
-* [Scale Stream Analytics jobs](stream-analytics-scale-jobs.md)
-* [Stream Analytics query language reference](/stream-analytics-query/stream-analytics-query-language-reference)
-* [Stream Analytics management REST API reference](/rest/api/streamanalytics/)
stream-analytics Stream Analytics Job Diagram With Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-diagram-with-metrics.md
Title: Data-driven debugging in Azure Stream Analytics
description: This article describes how to troubleshoot your Azure Stream Analytics job by using the job diagram and metrics in the Azure portal. + Previously updated : 05/01/2017 Last updated : 10/12/2022 # Data-driven debugging by using the job diagram
For additional assistance, try our [Microsoft Q&A question page for Azure Strea
* [Get started with Stream Analytics](stream-analytics-real-time-fraud-detection.md) * [Scale Stream Analytics jobs](stream-analytics-scale-jobs.md) * [Stream Analytics query language reference](/stream-analytics-query/stream-analytics-query-language-reference)
-* [Stream Analytics management REST API reference](/rest/api/streamanalytics/)
+* [Stream Analytics management REST API reference](/rest/api/streamanalytics/)
stream-analytics Stream Analytics Job Logical Diagram With Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-logical-diagram-with-metrics.md
+
+ Title: Debugging with the logical job diagram (preview) in Azure portal
+description: This article describes how to troubleshoot your Azure Stream Analytics job with job diagram and metrics in the Azure portal.
++++++ Last updated : 10/12/2022++
+# Debugging with the logical job diagram (preview) in Azure portal
+
+The job diagram (physical diagram and logical diagram) in the Azure portal can help you visualize your job's query steps with its input source, output destination, and metrics. You can use the job diagram to examine the metrics for each step and quickly identify the source of a problem when you troubleshoot issues.
+
+This article describes how to use the logical job diagram to analyze and troubleshoot your job in Azure portal.
+
+The logical job diagram is also available in the Stream Analytics extension for VS Code. It provides similar functions, with more metrics, when you debug a job that runs locally on your device. To learn more, see [Debug Azure Stream Analytics queries locally using job diagram](./debug-locally-using-job-diagram-vs-code.md).
+
+## Use the logical job diagram
+
+In the Azure portal, locate and select a Stream Analytics job. Then select **Job diagram (preview)** under **Developer tools**:
+++
+The job level default metrics such as Watermark delay, Input events, Output Events, and Backlogged Input Events are shown in the chart section for the latest 30 minutes. You can visualize other metrics in a chart by selecting them in the left pane.
++
+If you select one of the nodes in diagram section, the metrics data and the metrics options in the chart section will be filtered according to the selected node's properties. For example, if you select the input node, only the input node related metrics and its options are shown:
++
+To see the query script snippet that maps to the corresponding query step, select the **`{}`** icon in the query step node, as shown below:
++
+To see the job overview information summary, select the **Job Summary** button on the right side.
++
+It also provides the job operation actions in the menu section. You can use them to stop the job (**Stop** button), refresh the metrics data (**Refresh** button), and change the metrics time range (**Time range**).
++
+## Troubleshoot with metrics
+
+A job's metrics provide lots of insight into your job's health. You can view these metrics through the job diagram in its chart section, at the job level or at the step level. To learn about Stream Analytics job metrics definitions, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md). The job diagram integrates these metrics into the query steps (diagram). You can use these metrics within steps to monitor and analyze your job.
+
+### Is the job running well with its computation resource?
+
+* **SU (Memory) % utilization** is the percentage of memory utilized by your job. If SU (Memory) % utilization is consistently over 80%, it shows that the job is approaching the maximum allocated memory.
+* **CPU % utilization** is the percentage of CPU utilized by your job. There might be intermittent spikes for this metric, so we often check its average percentage data. High CPU utilization indicates that there might be a CPU bottleneck if the number of backlogged input events or the watermark delay increases at the same time.
+
+
+### How much data is being read?
+
+The input data related metrics can be viewed under the **Input** category in the chart section. They're available in the input step.
+* **Input events** is the number of data events read.
+* **Input events bytes** is the number of event bytes read. It can be used to validate that events are being sent to the input source.
+* **Input source received** is the number of messages read by the job.
+
+### Are there any errors in data processing?
+
+* **Deserialization errors** is the number of input events that couldn't be deserialized.
+* **Data conversion errors** is the number of output events that couldn't be converted to the expected output schema.
+* **Runtime errors** is the total number of errors related to query processing (excluding errors found while ingesting events or outputting results).
+
+### Are there any events out of order that are being dropped or adjusted?
+
+* **Out of order events** is the number of events received out of order that were either dropped or given an adjusted timestamp, based on the event ordering policy. It can be affected by the configuration of the **Out of order events** setting under the **Event ordering** section in the Azure portal.
+
+### Is the job falling behind in processing input data streams?
+
+* **Backlogged input events** tells you how many more messages from the input need to be processed. When this number is consistently greater than 0, your job can't process the data as fast as it's coming in. In this case, you may need to increase the number of streaming units and/or make sure your job can be parallelized, as sketched in the example below. You can see more info in the [query parallelization page](./stream-analytics-parallelization.md).
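+
+For illustration, here's a minimal sketch (not this article's own sample) of an embarrassingly parallel query. The input alias `EventHubInput`, output alias `EventHubOutput`, and the `PartitionId` column are hypothetical placeholders; the idea is to keep the same partition key from input to output so that each partition is processed independently and added streaming units translate into throughput:
+
+```sql
+-- Hypothetical aliases; configure them as the job's input and output.
+-- Partition-aligned processing: each input partition is aggregated on its own,
+-- so the job scales out as streaming units are added.
+SELECT
+    PartitionId,
+    COUNT(*) AS EventCount
+INTO EventHubOutput
+FROM EventHubInput
+PARTITION BY PartitionId
+GROUP BY PartitionId, TumblingWindow(minute, 1)
+```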
++
+## Get help
+For more assistance, try our [Microsoft Q&A question page for Azure Stream Analytics](/answers/topics/azure-stream-analytics.html).
+
+## Next steps
+* [Introduction to Stream Analytics](stream-analytics-introduction.md)
+* [Stream Analytics job diagram (preview) in Azure portal](./job-diagram-with-metrics.md)
+* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
+* [Scale Stream Analytics jobs](stream-analytics-scale-jobs.md)
+* [Stream Analytics query language reference](/stream-analytics-query/stream-analytics-query-language-reference)
+* [Stream Analytics management REST API reference](/rest/api/streamanalytics/)
stream-analytics Stream Analytics Job Metrics Dimensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-metrics-dimensions.md
Title: Dimensions for Azure Stream Analytics metrics description: This article describes dimensions for Azure Stream Analytics metrics.-+ - Previously updated : 06/30/2022+ Last updated : 10/12/2022 # Dimensions for Azure Stream Analytics metrics
The **Logical Name** dimension is available for filtering and splitting the foll
## Node Name dimension
-A streaming node represents a set of compute resources that's used to process your input data. Every six streaming units (SUs) translates to one node, which the service automatically manages on your behalf. For more information about the relationship between streaming units and streaming nodes, see [Understand and adjust streaming units](./stream-analytics-streaming-unit-consumption.md).
+A streaming node represents a set of compute resources that's used to process your input data. Every six streaming units (SUs) translate to one node, which the service automatically manages on your behalf. For more information about the relationship between streaming units and streaming nodes, see [Understand and adjust streaming units](./stream-analytics-streaming-unit-consumption.md).
**Node Name** is a dimension at the streaming node level. It can help you to drill down certain metrics to the specific streaming node level. For example, you can split the **CPU % Utilization** metric by streaming node level to check the CPU utilization of an individual streaming node. :::image type="content" source="./media/stream-analytics-job-metrics-dimensions/07-average-cpu-splitting-by-node-name.png" alt-text="Screenshot of a chart that shows splitting average CPU utilization by the Node Name dimension." lightbox="./media/stream-analytics-job-metrics-dimensions/07-average-cpu-splitting-by-node-name.png"::: The **Node Name** dimension is available for filtering and splitting the following metrics:-- **CPU % Utilization** (preview)-- **SU (Memory) % Utilization**
+- **Backlogged Input Events**
+- **CPU % Utilization (preview)**
- **Input Events**
+- **Output Events**
+- **SU (Memory) % Utilization**
+- **Watermark Delay**
+ ## Partition ID dimension
The **Partition ID** dimension is available for filtering and splitting the foll
* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md) * [Analyze Stream Analytics job performance by using metrics and dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md)
+* [Debugging with the physical job diagram (preview) in Azure portal](./stream-analytics-job-physical-diagram-with-metrics.md)
+* [Debugging with the logical job diagram (preview) in Azure portal](./stream-analytics-job-logical-diagram-with-metrics.md)
* [Monitor a Stream Analytics job with the Azure portal](./stream-analytics-monitoring.md) * [Understand and adjust streaming units](./stream-analytics-streaming-unit-consumption.md)
stream-analytics Stream Analytics Job Physical Diagram With Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-physical-diagram-with-metrics.md
+
+ Title: Debug using the physical job diagram (preview) in Azure portal
+description: This article describes how to troubleshoot your Azure Stream Analytics job with physical job diagram and metrics in the Azure portal.
++++++ Last updated : 10/12/2022++
+# Debug using the physical job diagram (preview) in Azure portal
+
+The physical job diagram in the Azure portal can help you visualize your job's key metrics per streaming node in diagram or table format, for example: CPU utilization, memory utilization, Input Events, Partition IDs, and Watermark delay. It helps you identify the cause of a problem when you troubleshoot issues.
+
+This article demonstrates how to use the physical job diagram to analyze a job's performance and quickly identify its bottleneck in the Azure portal.
+
+> [!IMPORTANT]
+> This feature is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Identify the parallelism of a job
+
+A parallelized job is the most scalable scenario in Stream Analytics and typically provides the best performance. If a job isn't running in parallel mode, it most likely has a performance bottleneck, so it's important to identify whether a job is parallel. The physical job diagram provides a visual graph that illustrates the job's parallelism: if there's data interaction among different streaming nodes in the physical job diagram, the job is non-parallel and needs more attention. For example, consider the non-parallel job diagram below:
++
+You can optimize it into a parallel job (as in the example below) by rewriting your query or updating the input/output configurations, using the **job diagram simulator** in the VS Code ASA extension or the query editor in the Azure portal. To learn more, see [Optimize query using job diagram simulator (preview)](./optimize-query-using-job-diagram-simulator.md).
++
+## Key metrics to identify the bottleneck of a parallel job
+
+Watermark delay and backlogged input events are the key metrics for determining the performance of your Stream Analytics job. If your job's watermark delay is continuously increasing and input events are backlogged, your job can't keep up with the rate of input events and produce outputs in a timely manner. From a compute resource point of view, CPU and memory are highly utilized when this happens.
+
+The physical job diagram visualizes these key metrics together in the diagram to give you a full picture and make the bottleneck easier to identify.
++
+For more information about the metrics definition, see [Azure Stream Analytics node name dimension](./stream-analytics-job-metrics-dimensions.md#node-name-dimension).
++
+## Identify the uneven distributed input events (data-skew)
+
+If you have a job that's already running in parallel mode but you observe a high watermark delay, use this method to determine why.
+
+To find the root cause, you open the physical job diagram in the Azure portal. Select **Job diagram (preview)** under **Monitoring**, and switch to **Physical diagram**.
++
+From the physical diagram, you can easily identify whether all the partitions have high watermark delay or just a few of them, by viewing the watermark delay value in each node or by choosing the watermark delay heatmap setting to sort the streaming nodes (recommended):
++
+After you apply the heatmap settings above, the streaming nodes with high watermark delay appear in the top-left corner. Then you can check whether the corresponding streaming nodes have significantly more input events than the others. In this example, **streamingnode#0** and **streamingnode#1** have more input events.
++
+You can further check how many partitions are allocated to each streaming node to find out whether the extra input events are caused by more partitions being allocated or by a specific partition having more input events. In this example, all the streaming nodes have two partitions, which means **streamingnode#0** and **streamingnode#1** each have a specific partition that contains more input events than the other partitions.
+
+To locate which partition in **streamingnode#0** and **streamingnode#1** has more input events than the others, follow these steps:
+* Select **Add chart** in the chart section.
+* Add **Input Events** as the metric and **Partition ID** as the splitter.
+* Select **Apply** to bring up the input events chart.
+* Select **streamingnode#0** and **streamingnode#1** in the diagram.
+
+You'll see the chart below with the input events metric filtered by the partitions in the two streaming nodes.
++
+### What further action can you take?
+
+As shown in the example, partitions 0 and 1 have more input data than the other partitions. We call this **data skew**. The streaming nodes that process the partitions with data skew need to consume more CPU and memory resources than the others, and this imbalance leads to slower performance and increases the watermark delay. You can also check the CPU and memory usage of the two streaming nodes in the physical diagram. To mitigate the problem, you need to repartition your input data more evenly.
++
+## Identify the cause of overloaded CPU or memory
+
+When a parallel job has an increasing watermark delay without the data skew situation mentioned previously, the cause may be a significant amount of data across all streaming nodes that limits performance. You can identify that the job has this characteristic by using the physical diagram.
+
+1. Open the physical job diagram: go to your job in the Azure portal, under **Monitoring** select **Job diagram (preview)**, and switch to **Physical diagram**. You'll see the physical diagram loaded as shown below.
+
+ :::image type="content" source="./media/job-physical-diagram-debug/5-overloaded-cpu-memory-physical-diagram.png" alt-text="Screenshot that shows the overview of the overloaded cpu and memory job." lightbox="./media/job-physical-diagram-debug/5-overloaded-cpu-memory-physical-diagram.png":::
+
+2. Check the CPU and memory utilization in each streaming node to see if the utilization in all streaming nodes is too high. If the CPU and SU utilization is high (more than 80 percent) in all streaming nodes, you can conclude that this job has a large amount of data being processed within each streaming node.
+
+   In the case above, the CPU utilization is around 90% and the memory utilization is already at 100%. This shows that each streaming node is running out of resources to process the data.
+
+ :::image type="content" source="./media/job-physical-diagram-debug/6-streaming-node-details.png" alt-text="Screenshot that shows overloaded cpu and memory in all nodes." lightbox="./media/job-physical-diagram-debug/6-streaming-node-details.png":::
+
+3. Check how many partitions are allocated to each streaming node so that you can decide whether you need more streaming nodes to balance the partitions and reduce the burden on the existing streaming nodes.
+
+   In this case, each streaming node has four partitions allocated, which is too many for a single streaming node.
+
+### What further action can you take?
+
+Consider reducing the partition count for each streaming node to reduce the input data each node processes. You could double the SUs allocated to the job, which increases the streaming node count from 8 to 16 and leaves two partitions per node. Or you could quadruple the SUs so that each streaming node handles data from a single partition.
+
+To learn more about the relationship between streaming node and streaming unit, see [Understand streaming unit and streaming node](stream-analytics-streaming-unit-consumption.md#understand-streaming-unit-and-streaming-node).
+
+What should you do if the watermark delay is still increasing when one streaming node is handling data from one partition? Repartition your input with more partitions to reduce the amount of data in each partition, as sketched in the example below. For details, see [Use repartitioning to optimize Azure Stream Analytics jobs](./repartition.md).
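+
+As a rough sketch (the names `input`, `output`, `DeviceID`, and `Reading` are placeholders, not taken from this article), repartitioning reshuffles the stream onto more partitions before the processing step:
+
+```sql
+-- Hypothetical example: reshuffle the input into 16 partitions keyed on DeviceID
+-- so that each streaming node processes a smaller slice of the data.
+WITH RepartitionedInput AS (
+    SELECT *
+    FROM input
+    PARTITION BY DeviceID
+    INTO 16
+)
+SELECT
+    DeviceID,
+    AVG(Reading) AS AvgReading
+INTO output
+FROM RepartitionedInput
+GROUP BY DeviceID, TumblingWindow(minute, 1)
+```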
++
+## Next steps
+* [Introduction to Stream Analytics](stream-analytics-introduction.md)
+* [Stream Analytics job diagram (preview) in Azure portal](./job-diagram-with-metrics.md)
+* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
+* [Scale Stream Analytics jobs](stream-analytics-scale-jobs.md)
+* [Stream Analytics query language reference](/stream-analytics-query/stream-analytics-query-language-reference)
+* [Analyze Stream Analytics job performance by using metrics and dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md)
stream-analytics Stream Analytics Machine Learning Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-machine-learning-anomaly-detection.md
Title: Anomaly detection in Azure Stream Analytics description: This article describes how to use Azure Stream Analytics and Azure Machine Learning together to detect anomalies. + Previously updated : 06/21/2019 Last updated : 10/05/2022 # Anomaly detection in Azure Stream Analytics Available in both the cloud and Azure IoT Edge, Azure Stream Analytics offers built-in machine learning based anomaly detection capabilities that can be used to monitor the two most commonly occurring anomalies: temporary and persistent. With the **AnomalyDetection_SpikeAndDip** and **AnomalyDetection_ChangePoint** functions, you can perform anomaly detection directly in your Stream Analytics job.
-The machine learning models assume a uniformly sampled time series. If the time series is not uniform, you may insert an aggregation step with a tumbling window prior to calling anomaly detection.
+The machine learning models assume a uniformly sampled time series. If the time series isn't uniform, you may insert an aggregation step with a tumbling window prior to calling anomaly detection.
-The machine learning operations do not support seasonality trends or multi-variate correlations at this time.
+The machine learning operations don't support seasonality trends or multi-variate correlations at this time.
## Anomaly detection using machine learning in Azure Stream Analytics The following video demonstrates how to detect an anomaly in real time using machine learning functions in Azure Stream Analytics.
-> [!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Real-Time-ML-Based-Anomaly-Detection-In-Azure-Stream-Analytics/player]
+> [!VIDEO /Shows/Internet-of-Things-Show/Real-Time-ML-Based-Anomaly-Detection-In-Azure-Stream-Analytics/player]
## Model behavior Generally, the model's accuracy improves with more data in the sliding window. The data in the specified sliding window is treated as part of its normal range of values for that time frame. The model only considers event history over the sliding window to check if the current event is anomalous. As the sliding window moves, old values are evicted from the model's training.
-The functions operate by establishing a certain normal based on what they have seen so far. Outliers are identified by comparing against the established normal, within the confidence level. The window size should be based on the minimum events required to train the model for normal behavior so that when an anomaly occurs, it would be able to recognize it.
+The functions operate by establishing a certain normal based on what they've seen so far. Outliers are identified by comparing against the established normal, within the confidence level. The window size should be based on the minimum events required to train the model for normal behavior so that when an anomaly occurs, it would be able to recognize it.
-The model's response time increases with history size because it needs to compare against a higher number of past events. It is recommended to only include the necessary number of events for better performance.
+The model's response time increases with history size because it needs to compare against a higher number of past events. It's recommended to only include the necessary number of events for better performance.
Gaps in the time series can be a result of the model not receiving events at certain points in time. This situation is handled by Stream Analytics using imputation logic. The history size, as well as a time duration, for the same sliding window is used to calculate the average rate at which events are expected to arrive.
FROM AnomalyDetectionStep
Persistent anomalies in a time series event stream are changes in the distribution of values in the event stream, like level changes and trends. In Stream Analytics, such anomalies are detected using the Machine Learning based [AnomalyDetection_ChangePoint](/stream-analytics-query/anomalydetection-changepoint-azure-stream-analytics) operator.
-Persistent changes last much longer than spikes and dips and could indicate catastrophic event(s). Persistent changes are not usually visible to the naked eye, but can be detected with the **AnomalyDetection_ChangePoint** operator.
+Persistent changes last much longer than spikes and dips and could indicate catastrophic event(s). Persistent changes aren't usually visible to the naked eye, but can be detected with the **AnomalyDetection_ChangePoint** operator.
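+
+As a rough illustration only (the input and output aliases and the `temperature` column are hypothetical placeholders, and the confidence level, history size, and window duration should be tuned to your own data), a change point detection step can look like this:
+
+```sql
+-- Hypothetical sketch: detect persistent changes in a temperature stream.
+-- 80 = confidence level, 1200 = history size (events), over a 20-minute sliding window.
+WITH AnomalyDetectionStep AS (
+    SELECT
+        EventEnqueuedUtcTime AS time,
+        CAST(temperature AS float) AS temp,
+        AnomalyDetection_ChangePoint(CAST(temperature AS float), 80, 1200)
+            OVER (LIMIT DURATION(minute, 20)) AS ChangePointScores
+    FROM input
+)
+SELECT
+    time,
+    temp,
+    CAST(GetRecordPropertyValue(ChangePointScores, 'Score') AS float) AS ChangePointScore,
+    CAST(GetRecordPropertyValue(ChangePointScores, 'IsAnomaly') AS bigint) AS IsChangePointAnomaly
+INTO output
+FROM AnomalyDetectionStep
+```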
The following image is an example of a level change:
The performance of these models depends on the history size, window duration, ev
### Relationship The history size, window duration, and total event load are related in the following way:
-windowDuration (in ms) = 1000 * historySize / (Total Input Events Per Sec / Input Partition Count)
+windowDuration (in ms) = 1000 * historySize / (total input events per second / Input Partition Count)
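+
+For example, assuming the 2 input partitions noted later in this section, a history size of 600 events at a total input rate of 1,650 events per second gives windowDuration = 1000 * 600 / (1650 / 2) ≈ 727 ms, which matches (after rounding) the 728 ms in the non-partitioned observation table below.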
When partitioning the function by deviceId, add "PARTITION BY deviceId" to the anomaly detection function call. ### Observations The following table includes the throughput observations for a single node (6 SU) for the non-partitioned case:
-| History size (events) | Window duration (ms) | Total input events per sec |
+| History size (events) | Window duration (ms) | Total input events per second |
| | -- | -- | | 60 | 55 | 2,200 | | 600 | 728 | 1,650 |
The following table includes the throughput observations for a single node (6 SU
The following table includes the throughput observations for a single node (6 SU) for the partitioned case:
-| History size (events) | Window duration (ms) | Total input events per sec | Device count |
+| History size (events) | Window duration (ms) | Total input events per second | Device count |
| | -- | -- | | | 60 | 1,091 | 1,100 | 10 | | 600 | 10,910 | 1,100 | 10 |
The following table includes the throughput observations for a single node (6 SU
| 600 | 218,182 | 550 | 100 | | 6,000 | 2,181,819 | <550 | 100 |
-Sample code to run the non-partitioned configurations above is located in the [Streaming At Scale repo](https://github.com/Azure-Samples/streaming-at-scale/blob/f3e66fa9d8c344df77a222812f89a99b7c27ef22/eventhubs-streamanalytics-eventhubs/anomalydetection/create-solution.sh) of Azure Samples. The code creates a stream analytics job with no function level partitioning, which uses Event Hub as input and output. The input load is generated using test clients. Each input event is a 1KB json document. Events simulate an IoT device sending JSON data (for up to 1K devices). The history size, window duration, and total event load are varied over 2 input partitions.
+Sample code to run the non-partitioned configurations above is located in the [Streaming At Scale repo](https://github.com/Azure-Samples/streaming-at-scale/blob/f3e66fa9d8c344df77a222812f89a99b7c27ef22/eventhubs-streamanalytics-eventhubs/anomalydetection/create-solution.sh) of Azure Samples. The code creates a stream analytics job with no function level partitioning, which uses Event Hubs as input and output. The input load is generated using test clients. Each input event is a 1KB json document. Events simulate an IoT device sending JSON data (for up to 1K devices). The history size, window duration, and total event load are varied over 2 input partitions.
> [!Note] > For a more accurate estimate, customize the samples to fit your scenario. ### Identifying bottlenecks
-Use the Metrics pane in your Azure Stream Analytics job to identify bottlenecks in your pipeline. Review **Input/Output Events** for throughput and ["Watermark Delay"](https://azure.microsoft.com/blog/new-metric-in-azure-stream-analytics-tracks-latency-of-your-streaming-pipeline/) or **Backlogged Events** to see if the job is keeping up with the input rate. For Event Hub metrics, look for **Throttled Requests** and adjust the Threshold Units accordingly. For Cosmos DB metrics, review **Max consumed RU/s per partition key range** under Throughput to ensure your partition key ranges are uniformly consumed. For Azure SQL DB, monitor **Log IO** and **CPU**.
+Use the Metrics pane in your Azure Stream Analytics job to identify bottlenecks in your pipeline. Review **Input/Output Events** for throughput and ["Watermark Delay"](https://azure.microsoft.com/blog/new-metric-in-azure-stream-analytics-tracks-latency-of-your-streaming-pipeline/) or **Backlogged Events** to see if the job is keeping up with the input rate. For Event Hub metrics, look for **Throttled Requests** and adjust the Threshold Units accordingly. For Azure Cosmos DB metrics, review **Max consumed RU/s per partition key range** under Throughput to ensure your partition key ranges are uniformly consumed. For Azure SQL DB, monitor **Log IO** and **CPU**.
## Next steps
stream-analytics Stream Analytics Managed Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-managed-identities-overview.md
description: This article describes managed identities for Azure Stream Analytic
+ Last updated 08/10/2022
Below is a table that shows Azure Stream Analytics inputs and outputs that suppo
| | Table Storage | No | No | | | Service Bus Topic | Yes | Yes | | | Service Bus Queue | Yes | Yes |
-| | Cosmos DB | Yes | Yes |
+| | Azure Cosmos DB | Yes | Yes |
| | Power BI | No | Yes | | | Data Lake Storage Gen1 | Yes | Yes | | | Azure Functions | No | No |
stream-analytics Stream Analytics Parallelization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-parallelization.md
Title: Use query parallelization and scale in Azure Stream Analytics description: This article describes how to scale Stream Analytics jobs by configuring input partitions, tuning the query definition, and setting job streaming units. -+ Last updated 05/10/2022
When you work with Stream Analytics, you can take advantage of partitioning in t
- Azure Functions - Azure Table - Blob storage (can set the partition key explicitly)-- Cosmos DB (need to set the partition key explicitly)
+- Azure Cosmos DB (need to set the partition key explicitly)
- Event Hubs (need to set the partition key explicitly) - IoT Hub (need to set the partition key explicitly) - Service Bus
This query can be scaled to 24 SUs.
An [embarrassingly parallel](#embarrassingly-parallel-jobs) job is necessary but not sufficient to sustain a higher throughput at scale. Every storage system, and its corresponding Stream Analytics output, has variations on how to achieve the best possible write throughput. As with any at-scale scenario, there are some challenges which can be solved by using the right configurations. This section discusses configurations for a few common outputs and provides samples for sustaining ingestion rates of 1 K, 5 K and 10 K events per second.
-The following observations use a Stream Analytics job with stateless (passthrough) query, a basic JavaScript UDF which writes to Event Hubs, Azure SQL DB, or Cosmos DB.
+The following observations use a Stream Analytics job with stateless (passthrough) query, a basic JavaScript UDF which writes to Event Hubs, Azure SQL, or Azure Cosmos DB.
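+
+A minimal sketch of that stateless (passthrough) pattern might look like the following; `udf.sampleFunction`, the `payload` column, and the `input`/`output` aliases are hypothetical placeholders rather than the code used in these benchmarks:
+
+```sql
+-- Hypothetical passthrough query: call a JavaScript UDF per event and
+-- forward the result, preserving the input partitioning.
+SELECT
+    udf.sampleFunction(payload) AS result
+INTO output
+FROM input
+PARTITION BY PartitionId
+```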
#### Event Hubs
The [Event Hubs](https://github.com/Azure-Samples/streaming-at-scale/tree/main/e
[Azure SQL](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-streamanalytics-azuresql) supports writing in parallel, called Inherit Partitioning, but it's not enabled by default. However, enabling Inherit Partitioning, along with a fully parallel query, may not be sufficient to achieve higher throughputs. SQL write throughputs depend significantly on your database configuration and table schema. The [SQL Output Performance](./stream-analytics-sql-output-perf.md) article has more detail about the parameters that can maximize your write throughput. As noted in the [Azure Stream Analytics output to Azure SQL Database](./stream-analytics-sql-output-perf.md#azure-stream-analytics) article, this solution doesn't scale linearly as a fully parallel pipeline beyond 8 partitions and may need repartitioning before SQL output (see [INTO](/stream-analytics-query/into-azure-stream-analytics#into-shard-count)). Premium SKUs are needed to sustain high IO rates along with overhead from log backups happening every few minutes.
-#### Cosmos DB
+#### Azure Cosmos DB
|Ingestion Rate (events per second) | Streaming Units | Output Resources | |-|-|| | 1 K | 3 | 20K RU | | 5 K | 24 | 60K RU | | 10 K | 48 | 120K RU |
-[Cosmos DB](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-streamanalytics-cosmosdb) output from Stream Analytics has been updated to use native integration under [compatibility level 1.2](./stream-analytics-documentdb-output.md#improved-throughput-with-compatibility-level-12). Compatibility level 1.2 enables significantly higher throughput and reduces RU consumption compared to 1.1, which is the default compatibility level for new jobs. The solution uses Cosmos DB containers partitioned on /deviceId and the rest of solution is identically configured.
+[Azure Cosmos DB](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-streamanalytics-cosmosdb) output from Stream Analytics has been updated to use native integration under [compatibility level 1.2](./stream-analytics-documentdb-output.md#improved-throughput-with-compatibility-level-12). Compatibility level 1.2 enables significantly higher throughput and reduces RU consumption compared to 1.1, which is the default compatibility level for new jobs. The solution uses Azure Cosmos DB containers partitioned on /deviceId and the rest of solution is identically configured.
All [Streaming at Scale Azure samples](https://github.com/Azure-Samples/streaming-at-scale) use Event Hubs as input that is fed by load simulating test clients. Each input event is a 1 KB JSON document, which translates configured ingestion rates to throughput rates (1MB/s, 5MB/s and 10MB/s) easily. Events simulate an IoT device sending the following JSON data (in a shortened form) for up to 1000 devices:
All [Streaming at Scale Azure samples](https://github.com/Azure-Samples/streamin
### Identifying Bottlenecks
-Use the Metrics pane in your Azure Stream Analytics job to identify bottlenecks in your pipeline. Review **Input/Output Events** for throughput and ["Watermark Delay"](https://azure.microsoft.com/blog/new-metric-in-azure-stream-analytics-tracks-latency-of-your-streaming-pipeline/) or **Backlogged Events** to see if the job is keeping up with the input rate. For Event Hubs metrics, look for **Throttled Requests** and adjust the Threshold Units accordingly. For Cosmos DB metrics, review **Max consumed RU/s per partition key range** under Throughput to ensure your partition key ranges are uniformly consumed. For Azure SQL DB, monitor **Log IO** and **CPU**.
+Use the Metrics pane in your Azure Stream Analytics job to identify bottlenecks in your pipeline. Review **Input/Output Events** for throughput and ["Watermark Delay"](https://azure.microsoft.com/blog/new-metric-in-azure-stream-analytics-tracks-latency-of-your-streaming-pipeline/) or **Backlogged Events** to see if the job is keeping up with the input rate. For Event Hubs metrics, look for **Throttled Requests** and adjust the Threshold Units accordingly. For Azure Cosmos DB metrics, review **Max consumed RU/s per partition key range** under Throughput to ensure your partition key ranges are uniformly consumed. For Azure SQL DB, monitor **Log IO** and **CPU**.
## Get help
stream-analytics Stream Analytics Solution Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-solution-patterns.md
description: Learn about common solution patterns for Azure Stream Analytics, su
+ Last updated 06/21/2019
You can create custom real-time visualizations, such as dashboard or map visuali
## Incorporate real-time insights into your application through data stores
-Most web services and web applications today use a request/response pattern to serve the presentation layer. The request/response pattern is simple to build and can be easily scaled with low response time using a stateless frontend and scalable stores, like Cosmos DB.
+Most web services and web applications today use a request/response pattern to serve the presentation layer. The request/response pattern is simple to build and can be easily scaled with low response time using a stateless frontend and scalable stores such as Azure Cosmos DB.
High data volume often creates performance bottlenecks in a CRUD-based system. The [event sourcing solution pattern](/azure/architecture/patterns/event-sourcing) is used to address the performance bottlenecks. Temporal patterns and insights are also difficult and inefficient to extract from a traditional data store. Modern high-volume data driven applications often adopt a dataflow-based architecture. Azure Stream Analytics as the compute engine for data in motion is a linchpin in that architecture. ![ASA event sourcing app](media/stream-analytics-solution-patterns/event-sourcing-app.png)
-In this solution pattern, events are processed and aggregated into data stores by Azure Stream Analytics. The application layer interacts with data stores using the traditional request/response pattern. Because of Stream Analytics' ability to process a large number of events in real-time, the application is highly scalable without the need to bulk up the data store layer. The data store layer is essentially a materialized view in the system. [Azure Stream Analytics output to Azure Cosmos DB](stream-analytics-documentdb-output.md) describes how Cosmos DB is used as a Stream Analytics output.
+In this solution pattern, events are processed and aggregated into data stores by Azure Stream Analytics. The application layer interacts with data stores using the traditional request/response pattern. Because of Stream Analytics' ability to process a large number of events in real-time, the application is highly scalable without the need to bulk up the data store layer. The data store layer is essentially a materialized view in the system. [Azure Stream Analytics output to Azure Cosmos DB](stream-analytics-documentdb-output.md) describes how Azure Cosmos DB is used as a Stream Analytics output.
In real applications where processing logic is complex and there is the need to upgrade certain parts of the logic independently, multiple Stream Analytics jobs can be composed together with Event Hubs as the intermediary event broker.
Provisioning more resources can speed up the process, but the effect of having a
- Make sure there are enough partitions in the upstream Event Hubs or IoT Hub that you can add more Throughput Units (TUs) to scale the input throughput. Remember, each Event Hubs TU maxes out at an output rate of 2 MB/s. -- Make sure you have provisioned enough resources in the output sinks (i.e., SQL Database, Cosmos DB), so they don't throttle the surge in output, which can sometimes cause the system to lock up.
+- Make sure you have provisioned enough resources in the output sinks (i.e., SQL Database, Azure Cosmos DB), so they don't throttle the surge in output, which can sometimes cause the system to lock up.
The most important thing is to anticipate the processing rate change, test these scenarios before going into production, and be ready to scale the processing correctly during failure recovery time.
stream-analytics Stream Analytics User Assigned Managed Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-user-assigned-managed-identity-overview.md
Title: User-assigned managed identities for Azure Stream Analytics (preview)
+ Title: User-assigned managed identities for Azure Stream Analytics
description: This article describes configuring user-assigned managed identities for Azure Stream Analytics. + Last updated 09/29/2022
-# User-assigned managed identities for Azure Stream Analytics (preview)
+# User-assigned managed identities for Azure Stream Analytics
Azure Stream Analytics currently allows you to use user-assigned managed identities to authenticate to your job's inputs and outputs.
stream-analytics Write To Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/write-to-delta-lake.md
+
+ Title: Azure Stream Analytics - Writing to Delta Lake table
+
+description: This article describes how to write data to a delta lake table stored in Azure Data Lake Storage Gen2.
++++ Last updated : 10/12/2022+++
+# Azure Stream Analytics - write to Delta Lake table (Public Preview)
++
+Delta Lake is an open format that brings reliability, quality and performance to data lakes. Azure Stream Analytics allows you to directly write streaming data to your delta lake tables without writing a single line of code.
+
+A Stream Analytics job can be configured to write through a native Delta Lake output connector, either to a new or a pre-created Delta table in an Azure Data Lake Storage Gen2 account. This connector is optimized for high-speed ingestion to Delta tables in append mode and also provides exactly-once semantics, which guarantees that no data is lost or duplicated. Ingesting real-time data streams from Azure Event Hubs into Delta tables allows you to perform ad hoc interactive or batch analytics.
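+
+For illustration only, a job that streams Event Hubs data into a Delta table needs no special query syntax; the Delta-specific settings (serialization format and Delta path name, described below) live in the output configuration. The aliases `EventHubInput` and `DeltaLakeOutput` here are hypothetical:
+
+```sql
+-- Hypothetical aliases defined in the job's input/output configuration.
+-- Selecting Delta Lake as the output serialization format is what makes
+-- this write a Delta table; the query itself is ordinary Stream Analytics SQL.
+SELECT *
+INTO DeltaLakeOutput
+FROM EventHubInput
+```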
+
+## Delta Lake configuration
++
+To write data in Delta Lake, you need to connect to an Azure Data Lake Storage Gen2 account. The below table lists the properties related to Delta Lake configuration.
+
+|Property Name |Description |
+|-|--|
+|Event Serialization Format|Serialization format for output data. JSON, CSV, AVRO, Parquet are supported. Delta Lake is listed as an option here. The data will be in Parquet format if Delta Lake is selected. |
+|Delta path name| The path that's used to write your Delta Lake table within the specified container. It includes the table name. More details are in the section below. |
+|Partition Column |Optional. The {field} name from your output data to partition on. Only one partition column is supported. The column's value must be of string type. |
+
+To see the full list of ADLS Gen2 configuration settings, see [ADLS Gen2 overview](blob-storage-azure-data-lake-gen2-output.md).
+
+### Delta Path name
++
+The Delta Path Name is used to specify the location and name of your Delta Lake table stored in Azure Data Lake Storage Gen2.
+
+You can choose to use one or more path segments to define the path to the delta table and the delta table name. A path segment is the string between consecutive delimiter characters (for example, the forward slash `/`) that corresponds to the name of a virtual directory.
+
+The segment name is alphanumeric and can include spaces, hyphens, and underscores. The last path segment will be used as the table name.
+
+Restrictions on the Delta path name include the following:
+
+- Field names aren't case-sensitive. For example, the service can't differentiate between column "ID" and "id".
+- No dynamic {field} name is allowed. For example, {ID} will be treated as text {ID}.
+- The number of path segments comprising the name can't exceed 254.
+
+### Examples
+
+Examples for Delta path name:
+
+- Example 1: WestUS/CA/factory1/device-table
+- Example 2: Test/demo
+- Example 3: mytable
+
+Example output files:
+
+1. Under the chosen container, the directory path would be `WestUS/CA/factory1`, and the Delta table folder name would be **device-table**.
+2. Under the chosen container, the directory path would be `Test`, and the Delta table folder name would be **demo**.
+3. Under the chosen container, the Delta table folder name would be **mytable**.
+
+## Writing to the table
+
+To create a new Delta Lake table, you need to specify a Delta path name that doesn't point to any existing table. If a Delta Lake table with the same name already exists at the location specified by the Delta path name, by default, Azure Stream Analytics writes new records to the existing table.
+
+### Exactly once delivery
++
+The transaction log enables Delta Lake to guarantee exactly-once processing. Azure Stream Analytics also provides exactly-once delivery when outputting data to Azure Data Lake Storage Gen2 during a single job run.
+
+### Schema enforcement
++
+Schema enforcement means that all new writes to a table must be compatible with the target table's schema at write time, to ensure data quality.
+
+All records of output data are projected to the schema of the existing table. If the output is being written to a new Delta table, the table schema is created from the first record. If the incoming data has an extra column compared to the existing table schema, it's written to the table without the extra column. If the incoming data is missing a column compared to the existing table schema, it's written to the table with that column set to null.
++
+If schema conversion fails, the job behavior follows the [output data error handling policy](stream-analytics-output-error-policy.md) configured at the job level.
+
+## Limitations
+
+- Dynamic partition key isn't supported.
+- Writing to Delta lake is append only.
+- Schema checking in query testing isn't available.
+- Checkpoints for delta lake aren't taken by Stream Analytics.
+
+## Region availability
+
+The feature is currently supported in West Central US, Japan East, and Canada Central.
+
+## Next steps
+
+* [Create a Stream Analytics job writing to a Delta table in ADLS Gen2](write-to-delta-table-adls-gen2.md)
stream-analytics Write To Delta Table Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/write-to-delta-table-adls-gen2.md
+
+ Title: Write to a Delta Table in ADLS Gen2 (Azure Stream Analytics)
+description: This article shows how to create an ASA job writing to a delta table stored in ADLS Gen2.
+++++ Last updated : 10/12/2022++
+# Tutorial: Write to a Delta Table stored in Azure Data Lake Storage Gen2 (Public Preview)
+
+This tutorial shows how you can create a Stream Analytics job to write to a Delta table in Azure Data Lake Storage Gen2. In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Deploy an event generator that sends data to your event hub
+> * Create a Stream Analytics job
+> * Configure Azure Data Lake Storage Gen2 in which the Delta table will be stored
+> * Run the Stream Analytics job
+
+## Prerequisites
+
+Before you start, make sure you've completed the following steps:
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
+* Deploy the TollApp event generator to Azure by using this link to [Deploy TollApp Azure Template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-stream-analytics%2Fmaster%2FSamples%2FTollApp%2FVSProjects%2FTollAppDeployment%2Fazuredeploy.json). Set the 'interval' parameter to 1, and use a new resource group for this step.
+* Create a [Data Lake Storage Gen2 account](../storage/blobs/create-data-lake-storage-account.md).
+
+## Create a Stream Analytics job
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Select **Create a resource** in the upper left-hand corner of the Azure portal.
+3. Select **Analytics** > **Stream Analytics job** from the results list.
+4. On the **New Stream Analytics job** page, follow these steps:
+ 1. For **Subscription**, select your Azure subscription.
+ 2. For **Resource group**, select the same resource that you used earlier in the TollApp deployment.
+ 3. For **Name**, enter a name for the job. Stream Analytics job name can contain alphanumeric characters, hyphens, and underscores only and it must be between 3 and 63 characters long.
+ 4. For **Hosting environment**, confirm that **Cloud** is selected.
+ 5. For **Stream units**, select **1**. Streaming units represent the computing resources that are required to execute a job. To learn about scaling streaming units, refer to [understanding and adjusting streaming units](stream-analytics-streaming-unit-consumption.md) article.
+ 6. Select **Review + create** at the bottom of the page.
+
+
+5. On the **Review + create** page, review the settings, and then select **Create** to create the Stream Analytics job.
+6. On the deployment page, select **Go to resource** to navigate to the **Stream Analytics job** page.
+
+## Configure job input
+
+The next step is to define an input source for the job to read data using the event hub created in the TollApp deployment.
+
+1. Find the Stream Analytics job created in the previous section.
+
+2. In the **Job Topology** section of the Stream Analytics job, select **Inputs**.
+
+3. Select **+ Add stream input** and **Event hub**.
+
+4. Fill out the input form with the following values created through [TollApp Azure Template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-stream-analytics%2Fmaster%2FSamples%2FTollApp%2FVSProjects%2FTollAppDeployment%2Fazuredeploy.json):
+
+ 1. For **Input alias**, provide a friendly name to identify your input.
+ 2. For **Subscription**, select your Azure subscription.
+ 3. For **Resource group**, select the same resource that you used earlier in the TollApp deployment.
+ 4. For **Event Hub namespace**, select the event hub namespace you created in the previous section.
+ 5. Use default options on the remaining settings and select **Save**.
++
+## Configure job output
+
+The next step is to define an output sink where the job can write data to. In this tutorial, you write output to a Delta table in Azure Data Lake Storage Gen2.
+
+1. In the **Job Topology** section of the Stream Analytics job, select the **Outputs** option.
+
+2. Select **+ Add** > **Blob storage/ADLS Gen2**.
+
+3. Fill the output form with the following details and select **Save**:
+ 1. For **Output alias**, enter **DeltaOutput**.
+ 2. For **Subscription**, select your Azure subscription.
+ 3. For **Resource group**, select the same resource under which you created the ADLS Gen2 account in prerequisites.
+ 4. For **Storage account**, choose the ADLS Gen2 account you created.
+ 5. For **container**, provide a unique container name.
+ 6. For **Event Serialization Format**, select **Delta Lake**. Although Delta lake is listed as one of the options here, it isn't a data format. Delta Lake uses versioned Parquet files to store your data. To learn more about [Delta lake](write-to-delta-lake.md).
+ 7. For **Delta table path**, enter **tutorial folder/delta table**.
+ 8. Use default options on the remaining settings and select **Save**.
+
+
+## Create queries
+
+At this point, you have a Stream Analytics job set up to read an incoming data stream. The next step is to create a query that analyzes the data in real time. The queries use a SQL-like language that has some extensions specific to Stream Analytics.
+
+1. Now, select **Query** under **Job topology** on the left menu.
+2. Enter the following query into the query window. In this example, the query reads the data from Event Hubs and copies selected values to a Delta table in ADLS Gen2.
+
+ ```sql
+ SELECT State, CarModel.Make, TollAmount
+ INTO DeltaOutput
+ FROM EntryStream TIMESTAMP BY EntryTime
+ ```
+
+3. Select **Save query** on the toolbar.
+
+## Start the Stream Analytics job and check the output
+
+1. Return to the job overview page in the Azure portal, and select **Start**.
+2. On the **Start job** page, confirm that **Now** is selected for Job output start time, and then select **Start** at the bottom of the page.
+3. After a few minutes, in the portal, find the storage account and the container that you've configured as output for the job. You can now see the Delta table in the folder specified in the container. The job takes a few minutes to start for the first time; after it's started, it continues to run as the data arrives.
+
+## Clean up resources
+
+When no longer needed, delete the resource group, the Stream Analytics job, and all related resources. Deleting the job avoids billing the streaming units consumed by the job. If you're planning to use the job in the future, you can stop it and restart it later when you need it. If you aren't going to continue to use this job, delete all the resources created by this tutorial by using the following steps:
+
+1. From the left-hand menu in the Azure portal, select Resource groups and then select the name of the resource you created.
+2. On your resource group page, select Delete, type the name of the resource to delete in the text box, and then select Delete.
+
+## Next steps
+
+In this tutorial, you created a simple Stream Analytics job, filtered the incoming data, and wrote the results to a Delta table in an ADLS Gen2 account. To learn more about Stream Analytics jobs:
+
+> [!div class="nextstepaction"]
+> [ASA writes to Delta Lake output](write-to-delta-lake.md)
synapse-analytics Restore Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/backuprestore/restore-sql-pool.md
Previously updated : 09/26/2022 Last updated : 10/07/2022 -+ # Restore an existing dedicated SQL pool
synapse-analytics Get Started Analyze Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-sql-on-demand.md
+ Last updated 02/02/2022
However, as you continue data exploration, you might want to create some utility
> [!IMPORTANT] > Use a collation with `_UTF8` suffix to ensure that UTF-8 text is properly converted to `VARCHAR` columns. `Latin1_General_100_BIN2_UTF8` provides
- > the best performance in the queries that read data from Parquet files and cosmos Db containers.
+ > the best performance in the queries that read data from Parquet files and Azure Cosmos DB containers.
1. Switch from master to `DataExplorationDB` using the following command. You can also use the UI control **use database** to switch your current database:
synapse-analytics Get Started Analyze Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-sql-pool.md
Previously updated : 09/29/2021 Last updated : 10/07/2022+ # Analyze data with dedicated SQL pools
synapse-analytics Proof Of Concept Playbook Data Explorer Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-data-explorer-pool.md
+
+ Title: "Synapse POC playbook: Big data analytics with Data Explorer pool in Azure Synapse Analytics"
+description: "A high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for Data Explorer pool."
+++++ Last updated : 09/28/2022++
+# Synapse POC playbook: Big data analytics with Data Explorer pool in Azure Synapse Analytics
+
+This article presents a high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for Data Explorer pool.
++
+## Before you begin
+
+The playbook helps you evaluate the use of Data Explorer for Azure Synapse and is designed for scenarios that are most suitable for Data Explorer pools. Use the following scenarios to determine whether a Data Explorer pool is the right solution for you before you start your POC.
+
+### General architecture patterns scenarios
+
+- Any data that fits one or more of the following characteristics should be a good candidate for a Data Explorer pool:
+ - High volume and speed
+ - Append only and immutable data
+ - Data Quiescence: Data that doesn't change. For example, an order placed in an online store is a good example of quiesced data.
+- [Command Query Responsibility Segregation (CQRS) pattern](/azure/architecture/patterns/cqrs) is suited to the Data Explorer pool's append only architecture.
+- [Event sourcing pattern](/azure/architecture/patterns/event-sourcing)
+
+### Other scenarios
+
+The following scenarios are also good candidates for a Data Explorer pool:
+
+- Low latency data store for real-time telemetry-based alerts
+- [IoT telemetry data storage and analytics](/azure/architecture/solution-ideas/articles/iot-azure-data-explorer)
+- [High speed interactive analytics layer](/azure/architecture/solution-ideas/articles/interactive-azure-data-explorer). Particularly when used with Apache Spark engines such as Synapse Spark or Databricks, or with traditional data warehouses such as Synapse SQL pools.
+- [Log and observability analytics](/azure/architecture/solution-ideas/articles/monitor-azure-data-explorer)
+
+## Prepare for the POC
+
+A POC project can help you make an informed business decision about implementing a big data and advanced analytics environment on a cloud-based platform that uses a Data Explorer pool in Azure Synapse.
+
+A POC project will identify your key goals and the business drivers that a cloud-based big data and advanced analytics platform must support. It will test key metrics and prove key behaviors that are critical to the success of your data engineering, machine learning model building, and training requirements. A POC isn't designed to be deployed to a production environment. Rather, it's a short-term project that focuses on key questions, and its result can be discarded.
+
+Before you begin planning your Data Explorer POC project:
+
+> [!div class="checklist"]
+>
+> - Identify any restrictions or guidelines your organization has about moving data to the cloud.
+> - Identify executive or business sponsors for a big data and advanced analytics platform project. Secure their support for migration to the cloud.
+> - Identify availability of technical experts and business users to support you during the POC execution.
+
+Before you start preparing for the POC project, we recommend you first read the [Azure Data Explorer documentation](/azure/data-explorer?context=/azure/synapse-analytics/context/context).
+
+By now you should have determined that there are no immediate blockers, and you can start preparing for your POC. If you're new to Data Explorer, refer to [this documentation](../data-explorer/data-explorer-overview.md) for an overview of the Data Explorer architecture.
+
+Develop an understanding of these key concepts:
+
+- Data Explorer and its architecture.
+- Supported data formats and data sources.
+- Databases, tables, materialized views, and functions as Data Explorer artifacts.
+- Supported ingestion methods for ingestion wizard and continuous ingestion.
+- Authentication and authorization in Data Explorer.
+- Native connectors that integrate with visualization solutions such as Power BI, Grafana, Kibana, and more.
+- Creating external tables to read data from Azure SQL/SQL Server, Azure Cosmos DB, Azure Monitor, and Azure Digital Twins.
+
+Data Explorer decouples compute resources from storage so that you can better manage your data processing needs and control costs. You only pay for compute when it's in use. When it's not in use, you only pay for storage. You can [scale up and down](../data-explorer/data-explorer-overview.md#data-explorer-pool-architecture) (vertical), as well as scale in and out (horizontal). You can also manually stop, or autostop, your workspace without losing your data. For example, you can scale up your workspace for heavy data processing needs or large loads, and then scale it back down during less intense processing times, or shut it down completely. Similarly, you can effectively scale and stop a workspace during the weekends to reduce costs.
+
+### Set the goals
+
+A successful POC project requires planning. Start by identifying why you're doing a POC to fully understand the real motivations. Motivations could include modernization, cost savings, performance improvement, or an integrated experience. Be sure to document clear goals for your POC and the criteria that will define its success. Ask yourself:
+
+> [!div class="checklist"]
+>
+> - What do you want as the outputs of your POC?
+> - What will you do with those outputs?
+> - Who will use the outputs?
+> - What will define a successful POC?
+
+Keep in mind that a POC should be a short and focused effort to quickly prove a limited set of concepts and capabilities. These concepts and capabilities should be representative of the overall workload. If you have a long list of items to prove, you may want to plan more than one POC. In that case, define gates between the POCs to determine whether you need to continue with the next one. For example, one POC could focus on requirements for the data engineering role, such as ingestion and processing. Another POC could focus on machine learning (ML) model development.
+
+As you consider your POC goals, ask yourself the following questions to help you shape the goals:
+
+> [!div class="checklist"]
+>
+> - Are you migrating from an existing big data and advanced analytics platform (on-premises or cloud)?
+> - Are you migrating and want to make extensive improvements along the way? For example, migrating from Elasticsearch to Data Explorer for log analysis, or from InfluxDB or TimescaleDB to Data Explorer.
+> - Are you building an entirely new big data and advanced analytics platform (greenfield project)?
+> - What are your current pain points? For example, scalability, performance, or flexibility.
+> - What new business requirements do you need to support?
+> - What are the SLAs that you're required to meet?
+> - What will the workloads be? For example, ETL, batch processing, stream processing, machine learning model training, analytics, reporting queries, or interactive queries?
+> - What are the skills of the users who will own the project (should the POC be implemented)? For example, SQL, Python, PowerBI, or other skills.
+
+Here are some examples of POC goal setting:
+
+- Why are we doing a POC?
+ - We need to know that the data ingestion and processing performance for our big data workload will meet our new SLAs.
+ - We need to know whether near real-time stream processing is possible and how much throughput it can support. (Will it support our business requirements?)
+ - We need to know if our existing data ingestion and transformation processes are a good fit and where improvements will need to be made.
+ - We need to know if we can shorten our data integration run times and by how much.
+ - We need to know if our data scientists can build and train machine learning models and use AI/ML libraries as needed in Data Explorer.
+ - Will the move to cloud-based Data Explorer meet our cost goals?
+- At the conclusion of this POC:
+ - We'll have the data to determine if our data processing performance requirements can be met for both batch and real-time streaming.
+ - We'll have tested ingestion and processing of all our different data types (structured, semi-structured, and unstructured) that support our use cases.
+  - We'll have tested some of our existing data processing needs and can identify the work that can be completed with update policies in Data Explorer (see the sketch after this list).
+ - We'll have tested data ingestion and processing and will have the data points to estimate the effort required for the initial migration and load of historical data.
+ - We'll have tested data ingestion and processing and can determine if our ETL/ELT processing requirements can be met.
+ - We'll have gained insight to better estimate the effort required to complete the implementation project.
+  - We'll have tested scale and scaling options and will have the data points to configure our platform for the best price-performance settings.
+ - We'll have a list of items that may need more testing.
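+
+For example, a minimal, hypothetical sketch of the kind of ingest-time transformation that an update policy can take over. The table names, function name, and columns here are assumptions for illustration only, not part of any existing workload:
+
+```kusto
+// Assumed transformation function: parse a dynamic payload from a hypothetical RawEvents table.
+.create-or-alter function ExpandRawEvents() {
+    RawEvents
+    | extend Level = tostring(Payload.level), Value = toreal(Payload.value)
+    | project Timestamp, DeviceId, Level, Value
+}
+
+// Run the function automatically whenever new data lands in RawEvents, writing the results to CuratedEvents.
+.alter table CuratedEvents policy update
+@'[{ "IsEnabled": true, "Source": "RawEvents", "Query": "ExpandRawEvents()", "IsTransactional": false }]'
+```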
+
+### Plan the project
+
+Use your goals to identify specific tests and to provide the outputs you identified. It's important to make sure that you have at least one test to support each goal and expected output. Also, identify specific data ingestion, batch or stream processing, and all other processes that will be executed so you can identify a specific dataset and codebase. This specific dataset and codebase will define the scope of the POC.
+
+Here are the typical subject areas that are evaluated with Data Explorer:
+
+- **Data ingestion and processing**: Data sources, data formats, ingestion methods, connectors, tools, ingestion policies, streaming vs. batch ingestion
+- **Data storage**: Schema, storage artifacts such as tables and materialized views
+- **Policies**: Such as partitioning, update, and merge policies
+- **Querying and visualization**
+- **Performance**: Such as query response times, ingestion latencies, weak consistency, query results cache
+- **Cost**: Total Cost of Ownership (TCO)
+- **Security**: Such as authentication, authorization, data access, and row-level security (a sketch follows this list)
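+
+As one example from the security area, the following is a minimal sketch of how row-level security can be expressed in Data Explorer. The table name, function name, and Azure AD group are assumptions chosen only for illustration:
+
+```kusto
+// Assumed RLS function: return rows only when the caller belongs to a hypothetical viewer group.
+.create-or-alter function TelemetryRLS() {
+    RawEvents
+    | where current_principal_is_member_of('aadgroup=dataviewers@contoso.com')
+}
+
+// Attach the function as the row-level security policy of the assumed RawEvents table.
+.alter table RawEvents policy row_level_security enable "TelemetryRLS"
+```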
+
+> [!NOTE]
+> Use the following frequently asked questions to help you plan your POC.
+>
+> - **How do I choose the caching period when creating my Data Explorer pool?**
+> To provide the best query performance, ingested data is cached on the local SSD disk. This level of performance isn't always required, and less frequently queried data can often be stored on cheaper blob storage. Queries on data in blob storage run slower, but that's acceptable in many scenarios. Knowing this can help you identify the number of compute nodes you need to hold your data on local SSD and still meet your query performance requirements. For example, if you want to query *x* days' worth of data (based on ingestion age) more frequently and retain data for *y* days and query it less frequently, in your cache retention policy, specify *x* as the value for hot cache retention and *y* as the value for total retention. For more information, see [Cache policy](/azure/data-explorer/kusto/management/cachepolicy?context=/azure/synapse-analytics/context/context).
+> - **How do I choose the retention period when creating my Data Explorer pool?**
+> The retention period is the combined window of hot-cache and cold data that's available for querying. Choose the retention period based on how long you need to keep the data to satisfy compliance or other regulatory requirements. You can use the hot windows capability to warm data stored in the cold cache for faster queries, for example for auditing purposes. For more information, see [Query cold data with hot windows](/azure/data-explorer/hot-windows?context=/azure/synapse-analytics/context/context). A minimal sketch of both the caching and retention policies follows this note.
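+
+For example, a minimal sketch of the two policies on a hypothetical table. The table name and the 30-day/365-day values are assumptions chosen only for illustration:
+
+```kusto
+// Keep the most recent 30 days of data (by ingestion age) in the hot cache for fast queries.
+.alter table MyTelemetry policy caching hot = 30d
+
+// Retain data for a total of 365 days before it's dropped.
+.alter-merge table MyTelemetry policy retention softdelete = 365d
+```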
+
+Here's an example of the needed level of specificity in planning:
+
+- **Goal A**: We need to know whether our requirement for data ingestion and processing of batch data can be met under our defined SLA.
+- **Output A**: We'll have the data to determine whether our batch data ingestion and processing can meet the data processing requirement and SLA.
+ - **Test A1**: Processing queries A, B, and C are identified as good performance tests as they're commonly executed by the data engineering team. Also, they represent overall data processing needs.
+ - **Test A2**: Processing queries X, Y, and Z are identified as good performance tests as they contain near real-time stream processing requirements. Also, they represent overall event-based stream processing needs.
+  - **Test A3**: Compare the performance of these queries at different scales of our workspace (different numbers of instances) with the benchmark obtained from the existing system.
+- **Goal B**: We need to know if our business users can build their dashboards on this platform.
+- **Output B**: We'll have tested some of our existing dashboards and visuals on data in our workspace, using different visualization options, connectors, and Kusto queries. These tests will help determine which dashboards can be migrated to the new environment.
+ - **Test B1**: Specific visuals will be created with Data Explorer data and will be tested.
+  - **Test B2**: Test out-of-the-box KQL functions and operators to meet the requirements (see the sample query after this list).
+- **Goal C**: We'll have tested data ingestion and will have the data points to:
+ - Estimate the effort for our initial historical data migration to our Data Explorer pool.
+ - Plan an approach to migrate historical data.
+- **Output C**: We'll have measured the data ingestion rate achievable in our environment and can determine whether it's sufficient to migrate historical data during the available time window.
+ - **Test C1**: Test different approaches of historical data migration. For more information, see [Comparing ingestion methods and tools](../data-explorer/ingest-dat#ingestion-methods-and-tools).
+  - **Test C2**: Test data transfer from the data source to our workspace by using LightIngest or continuous ingestion from blob storage or data lake store. For more information, see [Use wizard for one-time ingestion of historical data with LightIngest](/azure/data-explorer/generate-lightingest-command?context=/azure/synapse-analytics/context/context).
+- **Goal D**: We'll have tested the data ingestion rate of incremental data loading and will have the data points to estimate the data ingestion and processing time window.
+- **Output D**: We'll have tested the data ingestion rate and can determine whether our data ingestion and processing requirements can be met with the identified approach.
+  - **Test D1**: Test daily, hourly, and near real-time data ingestion and processing.
+ - **Test D2**: Execute the continuous (batch or streaming) data ingestion and processing while running end-user queries.
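+
+For Goal B, here's a minimal sketch of the kind of out-of-the-box KQL operators and render hints that such a test might exercise. The table, column names, and time ranges are assumptions for illustration:
+
+```kusto
+// Aggregate a hypothetical telemetry table with built-in operators and render it as a dashboard-style time chart.
+MyTelemetry
+| where Timestamp > ago(1d)
+| summarize AvgValue = avg(Value) by DeviceId, bin(Timestamp, 15m)
+| render timechart
+```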
+
+Be sure to refine your tests by adding multiple testing scenarios.
+
+Here are some testing scenarios:
+
+- **Data Explorer test A**: We'll execute data ingestion, processing, and querying across multiple workspace sizes and different numbers of pool instances.
+- **Data Explorer test B**: We'll query processed data from our workspace using dashboards and querying tools such as the Data Explorer KQL script.
+
+The following is a high-level example of tasks that you can use to help plan your POC:
+
+| Sprint | Task |
+|--|--|
+| 0 | Develop an understanding of Data Explorer using online resources or with the help of the Microsoft account team |
+| 0 | Define business scenarios that you want to achieve with Data Explorer |
+| 0 | Define technical requirements in terms of data sources, ingestion methods, data retention, data caching, SLAs, security, networking, IAM |
+| 0 | Define key performance measures, such as query performance expectation, latency, concurrent requests, ingestion throughput, data freshness |
+| 0 | Define high level architecture with Data Explorer and its data ingesters and consumers |
+| 0 | Define POC Scope |
+| 0 | Define POC planning and timelines |
+| 0 | Define, prioritize and weigh POC evaluation criteria |
+| 1 | Define and prioritize queries to be tested |
+| 1 | Define data access rules for each group of users |
+| 1 | Estimate one-time (historical) data ingestion volume and daily data ingestion volume |
+| 1 | Define data retention, caching, and purge strategy |
+| 1 | Define configuration elements needed when creating workspaces, such as streaming, Python/R plugins, purge |
+| 1 | Review source data format, structure, schema |
+| 1 | Review, refine, revise evaluation criteria |
+| 2 | Create the workspace, Data Explorer pool, and the required databases, tables, and materialized views per the architecture design (see the sketch after this table) |
+| 2 | Assign permissions to the relevant users for data plane access |
+| 2 | Implement partitioning and merge policies (if required) |
+| 2 | Implement one-time ingestion of data, typically historical or migration data |
+| 2 | Install and configure query tool (if required) |
+| 2 | Test queries on the ingested data using Data Explorer web UI |
+| 2 | Test update and delete scenarios |
+| 2 | Test connection to your chosen visualization and querying tools (such as Power BI, Grafana, etc.) |
+| 2 | Configure data access management rules |
+| 2 | Implement continuous ingestion |
+| 2 | Create data connections with Event Hubs/IoT Hub/Event Grid |
+| 3 | Implement an auto-refreshing dashboard for near real-time monitoring in Azure Data Explorer dashboards or Grafana |
+| 3 | Define how to perform load testing |
+| 3 | Optimize ingestion methods and processes based on learnings from previous sprints and completed backlog items |
+| 3 | Performance assessment on Grafana or your chosen dashboarding tool |
+| 3 | Perform load testing in line with concurrency and expected load requirements |
+| 3 | Validate success criteria |
+| 3 | Review scoring |
+| 3 | Test ability to ingest data with different formats |
+| 3 | Validate POC result |
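+
+For the Sprint 2 setup tasks, here's a minimal sketch of what creating a table, a materialized view, and streaming ingestion might look like. All names and schemas here are assumptions, not prescribed objects:
+
+```kusto
+// Assumed raw table for ingested events.
+.create table RawEvents (Timestamp: datetime, DeviceId: string, Payload: dynamic)
+
+// Assumed materialized view that maintains hourly event counts per device.
+.create materialized-view DeviceHourlyCounts on table RawEvents
+{
+    RawEvents
+    | summarize EventCount = count() by DeviceId, bin(Timestamp, 1h)
+}
+
+// Enable streaming ingestion on the table if low-latency ingestion is in scope for the POC.
+.alter table RawEvents policy streamingingestion enable
+```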
+
+### Evaluate the POC dataset
+
+Using the specific tests you identified, select a dataset to support the tests. Take time to review this dataset. You should verify that the dataset will adequately represent your future processing in terms of content, complexity, and scale. Don't use a dataset that's too small (less than 1 GB) because it won't deliver representative performance. Conversely, don't use a dataset that's too large because the POC shouldn't become a full data migration. Be sure to obtain the appropriate benchmarks from existing systems so you can use them for performance comparisons. Check if your dataset aligns with the supported data formats. Then, depending on the ingestion method (batch or streaming), your dataset can be ingested in batches of appropriate sizes.
+
+> [!IMPORTANT]
+> Make sure you check with business owners for any blockers before moving any data to the cloud. Identify any security or privacy concerns, or any data obfuscation work that needs to be done, before moving data to the cloud.
+
+### Create a high-level architecture
+
+Based on your proposed high-level future-state architecture, identify the components that will form part of your POC. Your future-state architecture likely contains many data sources, numerous data consumers, big data components, and possibly machine learning and artificial intelligence (AI) data consumers. Your POC architecture should specifically identify the components that will be part of the POC. Importantly, it should also identify any components that won't form part of the POC testing.
+
+If you're already using Azure, identify any resources you already have in place (Azure Active Directory, ExpressRoute, and others) that you can use during the POC. Also identify the Azure regions your organization uses. Now is a great time to identify the throughput of your ExpressRoute connection and to check with other business users that your POC can consume some of that throughput without adverse impact on production systems.
+
+For more information, see [Big data architectures](/azure/architecture/data-guide/big-data/).
+
+### Identify POC resources
+
+Specifically identify the technical resources and time commitments required to support your POC. Your POC will need:
+
+- A business representative to oversee requirements and results.
+- An application data expert, to source the data for the POC and provide knowledge of the existing processes and logic.
+- A Data Explorer expert. If necessary, you can ask your Microsoft contacts to arrange one.
+- An expert advisor, to optimize the POC tests. If necessary, you can ask your Microsoft contacts to arrange one.
+- Resources that will be required for specific components of your POC project, but not necessarily required during the POC. These resources could include network admins, Azure admins, Active Directory admins, Azure portal admins, and others.
+- Ensure all the required Azure services resources are provisioned and the required level of access is granted, including access to storage accounts.
+- Ensure you have an account that has required data access permissions to retrieve data from all data sources in the POC scope.
+
+> [!TIP]
+> We recommend engaging an expert advisor to assist with your POC. Contact your Microsoft account team, or engage one of the globally available expert consultants who can help you assess, evaluate, or implement a Data Explorer pool. You can also post questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-data-explorer) with the azure-data-explorer tag.
+
+### Set the timeline
+
+Review your POC planning details and business needs to identify a time frame for your POC. Make realistic estimates of the time that will be required to complete the POC goals. The time to complete your POC will be influenced by the size of your POC dataset, the number and complexity of tests, and the number of interfaces to test. If you estimate that your POC will run longer than four weeks, consider reducing the POC scope to focus on the highest priority goals. Be sure to obtain approval and commitment from all the lead resources and sponsors before continuing.
+
+## Put the POC into practice
+
+We recommend you execute your POC project with the discipline and rigor of any production project. Run the project according to plan and manage a change request process to prevent uncontrolled growth of the POC's scope.
+
+Here are some examples of high-level tasks:
+
+1. Create a Data Explorer pool, and all Azure resources identified in the POC plan.
+1. Load POC dataset:
+ - Make data available in Azure by extracting from the source or by creating sample data in Azure. For an initial test on ingesting data in your Data Explorer pool, use the [ingestion wizard](../data-explorer/ingest-dat).
+ - Test the connector/integration methods you've planned to use to ingest data into your workspace.
+1. Write Kusto queries to query the data.
+1. Execute the tests:
+   - Many tests can be executed in parallel on your workspaces using different client interfaces such as dashboards, Power BI, and Data Explorer KQL scripts.
+ - Record your results in a consumable and readily understandable format.
+1. Optimize the queries and workspace:
+   - Whether you're writing new KQL queries or converting existing queries from other languages, we recommend checking that your queries follow [Query best practices](/azure/data-explorer/kusto/query/best-practices?context=/azure/synapse-analytics/context/context). A sample query that follows these practices appears after this list.
+ - Depending on the test results, you may need to fine-tune your workspace with a caching policy, partitioning policy, workspace sizing, or other optimizations. For recommendations, see [Optimize for high concurrency](/azure/data-explorer/high-concurrency?context=/azure/synapse-analytics/context/context).
+1. Monitor for troubleshooting and performance:
+ - For more information, see [Monitor Data Explorer performance, health, and usage with metrics](../data-explorer/data-explorer-monitor-pools.md).
+ - For technical issues, please [create a support ticket](https://ms.portal.azure.com/#create/Microsoft.Support).
+1. Estimate the pricing:
+ - At the end of the POC, you should use what you learned in the POC to [estimate the cost](https://azure.microsoft.com/pricing/calculator/?service=synapse-analytics) of a workspace that meets your requirements.
+1. Close the POC:
+   - Record the results, lessons learned, and the outcome of the POC phase, including the benchmarks, configurations, and optimizations that you applied during the POC.
+ - Clean up any Azure resources that you created during the POC that you no longer need.
+
+ > [!TIP]
+   > If you've decided to proceed with Data Explorer in Azure Synapse and intend to [migrate it to a production environment](#migrating-from-poc-to-production), we recommend keeping the POC workspace running. This will help you set up your production workspace, ensuring that you don't lose the configurations and optimizations that you may have applied during the POC.
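+
+For the query optimization step above, here's a minimal sketch of a query written along the lines of the documented best practices: filter on the datetime column first, prefer `has` over `contains`, and project only the columns you need. The table, columns, and filter values are assumptions for illustration:
+
+```kusto
+// Hypothetical query over an assumed RawEvents table.
+RawEvents
+| where Timestamp between (ago(7d) .. now())
+| where tostring(Payload.level) has "error"
+| project Timestamp, DeviceId
+| summarize Errors = count() by DeviceId
+| top 10 by Errors
+```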
+
+## Interpret the POC results
+
+When you complete all the POC tests, you evaluate the results. Begin by evaluating whether the POC goals were met and the desired outputs were collected. Determine whether more testing is necessary or any questions need addressing.
+
+## Migrating from POC to production
+
+If you've decided to proceed with your Data Explorer pool in Azure Synapse and intend to migrate your POC pool to production, we strongly recommend that you keep the Data Explorer pool in the POC workspace running, and use it to set up your production workspace. This will help you ensure that you don't lose the configurations and optimizations that you may have applied during the POC.
+
+Before you migrate your Data Explorer pool in the POC workspace to production, we highly recommend that you consider, design, and decide on the following factors:
+
+- Functional and non-functional requirements
+- Disaster Recovery and High Availability requirements
+- Security requirements
+- Networking requirements
+- Continuous Integration/Continuous Deployment requirements
+- Monitoring and Support requirements
+- Training of key personnel in Data Explorer
+- Control and data plane access control requirements
+- Schema, data model and data flow requirements
+- Ingestion requirements
+- Visualization requirements
+- Data and insights consumption requirements
+- Testing requirements
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Common questions about Data Explorer ingestion](/azure/data-explorer/kusto/management/ingestion-faq?context=/azure/synapse-analytics/context/context)
+
+> [!div class="nextstepaction"]
+> [Best practices for schema management](/azure/data-explorer/kusto/management/management-best-practices?context=/azure/synapse-analytics/context/context)
synapse-analytics Proof Of Concept Playbook Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-spark-pool.md
+ Last updated 05/23/2022
Here are some testing scenarios:
- **Spark pool test A:** We will execute data processing across multiple node types (small, medium, and large) as well as different numbers of worker nodes. - **Spark pool test B:** We will load/retrieve processed data from the Spark pool to the dedicated SQL pool by using [the connector](../spark/synapse-spark-sql-pool-import-export.md).-- **Spark pool test C:** We will load/retrieve processed data from the Spark pool to Cosmos DB by using Azure Synapse Link.
+- **Spark pool test C:** We will load/retrieve processed data from the Spark pool to Azure Cosmos DB via Azure Synapse Link.
### Evaluate the POC dataset
synapse-analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/metadata/overview.md
Last updated 10/05/2021--++
synapse-analytics Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/metadata/table.md
Last updated 10/13/2021--++
synapse-analytics Quickstart Connect Synapse Link Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-connect-synapse-link-cosmos-db.md
Last updated 04/21/2020 -+ # Quickstart: Connect to Azure Synapse Link for Azure Cosmos DB
Before you connect an Azure Cosmos DB account to your workspace, there are a few
## Enable Azure Cosmos DB analytical store
-To run large-scale analytics into Azure Cosmos DB without impacting your operational performance, we recommend enabling Synapse Link for Azure Cosmos DB. This function brings HTAP capability to a container and built-in support in Azure Synapse. Follow this quickstart to enable Synapse Link for Cosmos DB containers.
+To run large-scale analytics against Azure Cosmos DB without impacting your operational performance, we recommend enabling Synapse Link for Azure Cosmos DB. This feature brings HTAP capability to a container and built-in support in Azure Synapse. Follow this quickstart to enable Synapse Link for Azure Cosmos DB containers.
## Navigate to Synapse Studio
From your Synapse workspace, select **Launch Synapse Studio**. On the Synapse St
## Connect an Azure Cosmos DB database to a Synapse workspace
-Connecting an Azure Cosmos DB database is done as linked service. A Cosmos DB linked service enables users to browse and explore data, read, and write from Apache Spark for Azure Synapse Analytics or SQL into Azure Cosmos DB.
+Connecting an Azure Cosmos DB database is done as a linked service. An Azure Cosmos DB linked service enables users to browse and explore data, and to read from and write to Azure Cosmos DB from Apache Spark for Azure Synapse Analytics or SQL.
From the Data Object Explorer, you can directly connect an Azure Cosmos DB database by doing the following steps:
From the Data Object Explorer, you can directly connect an Azure Cosmos DB datab
4. Select ***Continue*** 5. Name the linked service. The name will be displayed in the Object Explorer and used by Synapse run-times to connect to the database and containers. We recommend using a friendly name. 6. Select the **Cosmos DB account name** and **database name**
-7. (Optional) If no region is specified, Synapse run-time operations will be routed toward the nearest region where the analytical store is enabled. However, you can set manually which region you want your users to access Cosmos DB analytical store. Select **Additional connection properties** and then **New**. Under **Property Name**, write ***PreferredRegions*** and set the **Value** to the region you want (example: WestUS2, there is no space between words and numbers)
+7. (Optional) If no region is specified, Synapse run-time operations will be routed toward the nearest region where the analytical store is enabled. However, you can manually set the region where you want your users to access the Azure Cosmos DB analytical store. Select **Additional connection properties** and then **New**. Under **Property Name**, write ***PreferredRegions*** and set the **Value** to the region you want (for example, WestUS2; note that there's no space between the words and numbers).
8. Select ***Create*** Azure Cosmos DB databases are visible under the tab **Linked** in the Azure Cosmos DB section. You can differentiate an HTAP enabled Azure Cosmos DB container from an OLTP only container with the following icons:
synapse-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Synapse Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
synapse-analytics Apache Spark 24 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-24-runtime.md
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
| Delta Lake | 0.6| | Python | 3.6|
+>[!Note]
+> The following are the recent library changes for the Apache Spark 2.4 Python runtime:
+>
+> Modifications:
+>
+> - conda 4.3.21 --> 4.9.2
+> - libgcc-ng 9.3.0 --> 12.1.0
+> - libgfortran-ng 9.3.0 --> 7.5.0
+> - libgomp=9.3.0 --> 12.1.0
+> - nest-asyncio 1.5.5 --> 1.5.6
+>
+> New additions:
+> - cached_property=1.5.2
+> - backports-entry-points-selectable=1.1.1
## Scala and Java libraries
zstd-jni-1.3.2-2.jar
## Python libraries
-_libgcc_mutex=0.1
-
-_openmp_mutex=4.5
-
-c-ares=1.16.1
-
-ca-certificates=2020.6.20
-
-certifi=2020.6.20
-
-cffi=1.14.3
-
-chardet=3.0.4
-
-cryptography=3.1.1
-
-conda=4.3.21
-
-conda-env=2.6.0
-
-cytoolz=0.8.2
-
-gperftools=2.7
-
-h5py=2.10.0
-
-hdf5=1.10.6
-
-jpeg=9d
-
-krb5=1.17.1
-
-ld_impl_linux-64=2.35
-
-libblas=3.9.0
-
-libcblas=3.9.0
-
-libcurl=7.71.1
-
-libedit=3.1.20191231
-
-libev=4.33
-
-libffi=3.2.1
-
-libgcc-ng=9.3.0
-
-libgfortran-ng=9.3.0
-
-libgfortran4=7.5.0
-
-libgfortran5=9.3.0
-
-libgomp=9.3.0
-
-libiconv=1.16
-
-liblapack=3.9.0
-
-libnghttp2=1.41.0
-
-libopenblas=0.3.12
-
-libssh2=1.9.0
-
-libstdcxx-ng=9.3.0
-
-numpy=1.18.5
-
-ncurses=6.2
-
-openssl=1.1.1
-
-perl=5.32.0
-
-pip=20.2.4
-
-pygments=2.7.3
-
-pyopenssl=19.1.0
-
-python=3.6.11
-
-python_abi=3.6
-
-readline=8.0
-
-requests=2.24.0
-
-sentencepiece=0.1.9
-
-setuptools=41.4.0
-
-six=1.15.0
-
-sqlite=3.33.0
-
-tk=8.6.1
-
-toolz=0.11.1
-
-urllib3=1.25.10
-
-unixodbc=2.3.9
-
-xz=5.2.5
-
-wheel=0.30.0
-
-yaml=0.2.5
-
-zlib=1.2.11
-
-absl-py=0.11.0
-
-adal=1.2.4
-
-adlfs=0.5.5
-
-aiohttp=3.7.2
-
-alembic=1.4.1
-
-altair=4.1.0
-
-appdirs=1.4.4
-
-applicationinsights=0.11.9
-
-asn1crypto=1.4.0
-
-astor=0.8.1
-
-astroid=2.4.2
-
-astunparse=1.6.3
-
-async-timeout=3.0.1
-
-attrs=20.3.0
-
-azure-common=1.1.25
-
-azure-core=1.8.2
-
-azure-datalake-store=0.0.51
-
-azure-graphrbac=0.61.1
-
-azure-identity=1.4.1
-
-azure-mgmt-authorization=0.61.0
-
-azure-mgmt-containerregistry=2.8.0
-
-azure-mgmt-keyvault=2.2.0
-
-azure-mgmt-resource=12.0.0
-
-azure-mgmt-storage=11.2.0
-
-azure-storage-blob=12.5.0
-
-azure-storage-common=2.1.0
-
-azure-storage-queue=12.1.5
-
-azureml-automl-core=1.22.0
-
-azureml-automl-runtime=1.22.0
-
-azureml-core=1.22.0
-
-azureml-dataprep=2.9.0
-
-azureml-dataprep-native=29.0.0
-
-azureml-dataprep-rslex=1.7.0
-
-azureml-dataset-runtime=1.22.0
-
-azureml-defaults=1.22.0
-
-azureml-interpret=1.22.0
-
-azureml-mlflow=1.16.0
-
-azureml-model-management-sdk=1.0.1b6.post1
-
-azureml-opendatasets=1.18.0
-
-azureml-pipeline=1.22.0
-
-azureml-pipeline-core=1.22.0
-
-azureml-pipeline-steps=1.22.0
-
-azureml-sdk=1.22.0
-
-azure-storage-blob=12.5.0
-
-azureml-telemetry=1.22.0
-
-azureml-train=1.22.0
-
-azureml-train-automl=1.22.0
-
-azureml-train-automl-client=1.22.0.post1
-
-azureml-train-automl-runtime=1.22.0.post1
-
-azureml-train-core=1.22.0
-
-azureml-train-restclients-hyperdrive=1.22.0
-
-backports.tempfile=1.0
-
-backports.weakref=1.0.post1
-
-beautifulsoup4=4.9.3
-
-bitarray=1.6.1
-
-bokeh=2.2.3
-
-boto=2.49.0
-
-boto3=1.15.14
-
-botocore=1.18.14
-
-Bottleneck=1.3.2
-
-bpemb=0.3.2
-
-cachetools=4.1.1
-
-certifi=2020.6.20
-
-click=7.1.2
-
-cloudpickle=1.6.0
-
-configparser=3.7.4
-
-contextlib2=0.6.0.post1
-
-cycler=0.10.0
-
-cython=0.29.21
-
-cytoolz=0.8.2
-
-databricks-cli=0.14.0
-
-dataclasses=0.8
-
-datashape=0.5.2
-
-decorator=4.4.2
-
-Deprecated=1.2.10
-
-dill=0.3.2
-
-distro=1.5.0
-
-docker=4.3.1
-
-docutils=0.16
-
-dotnetcore2=2.1.17
-
-entrypoints=0.3
-
-et-xmlfile=1.0.1
-
-filelock=3.0.12
-
-fire=0.3.1
-
-flair=0.5
-
-Flask=1.0.3
-
-fsspec=0.8.4
-
-fusepy=3.0.1
-
-future=0.18.2
-
-gast=0.3.3
-
-gensim=3.8.3
-
-geographiclib=1.50
-
-geopy=2.0.0
-
-gitdb=4.0.5
-
-GitPython=3.1.11
-
-google-auth=1.23.0
-
-google-auth-oauthlib=0.4.2
-
-google-pasta=0.2.0
-
-gorilla=0.3.0
-
-grpcio=1.33.2
-
-gunicorn=19.9.0
-
-html5lib=1.1
-
-hummingbird-ml=0.0.6
-
-hyperopt=0.2.5
-
-idna=2.10
-
-idna-ssl=1.1.0
-
-imageio=2.9.0
-
-importlib-metadata=1.7.0
-
-interpret-community=0.16.0
-
-interpret-core=0.2.1
-
-ipykernel=5.5.3
-
-ipython=7.8.0
-
-ipython-genutils=0.2.0
-
-ipywidgets=7.6.3
-
-isodate=0.6.0
-
-isort=5.6.4
-
-itsdangerous=1.1.0
-
-jdcal=1.4.1
-
-jeepney=0.4.3
-
-Jinja2=2.11.2
-
-jmespath=0.10.0
-
-joblib=0.14.1
-
-json-logging-py=0.2
-
-jsonpickle=1.4.1
-
-jsonschema=3.2.0
-
-Keras-Applications=1.0.8
-
-Keras-Preprocessing=1.1.2
-
-keras2onnx=1.6.0
-
-kiwisolver=1.3.1
-
-koalas=1.2.0
-
-langdetect=1.0.8
-
-lazy-object-proxy=1.4.3
-
-liac-arff=2.5.0
-
-lightgbm=2.3.0
-
-Mako=1.1.3
-
-Markdown=3.3.3
-
-MarkupSafe=1.1.1
-
-matplotlib=3.2.2
-
-mccabe=0.6.1
-
-mistune=0.8.4
-
-mleap=0.16.1
-
-mlflow=1.11.0
-
-more-itertools=8.6.0
-
-mpld3=0.3
-
-mpmath=1.1.0
-
-msal=1.5.0
-
-msal-extensions=0.2.2
-
-msrest=0.6.19
-
-msrestazure=0.6.4
-
-multidict=5.0.0
-
-multipledispatch=0.6.0
-
-mypy=0.780
-
-mypy-extensions=0.4.3
-
-ndg-httpsclient=0.5.1
-
-networkx=2.5
-
-nimbusml=1.7.1
-
-nltk=3.5
-
-nose=1.3.7
-
-oauthlib=3.1.0
-
-odo=0.5.0
-
-olefile=0.46
-
-onnx=1.6.0
-
-onnxconverter-common=1.6.0
-
-onnxmltools=1.4.1
-
-onnxruntime=1.3.0
-
-openpyxl=3.0.5
-
-opt-einsum=3.3.0
-
-packaging=20.4
-
-pandas=0.25.3
-
-pandasql=0.7.3
-
-pathspec=0.8.0
-
-patsy=0.5.1
-
-pickleshare=0.7.5
-
-Pillow=8.0.1
-
-plotly=4.12.0
-
-pluggy=0.13.1
-
-pmdarima=1.1.1
-
-portalocker=1.7.1
-
-prometheus-client=0.8.0
-
-prometheus-flask-exporter=0.18.1
-
-protobuf=3.13.0
-
-psutil=5.7.2
-
-py=1.9.0
-
-py-cpuinfo=5.0.0
-
-py4j=0.10.7
-
-pyarrow=1.0.1
-
-pyasn1=0.4.8
-
-pyasn1-modules=0.2.8
-
-pycrypto=2.6.1
-
-pycparser=2.20
-
-PyJWT=1.7.1
-
-pylint=2.6.0
-
-pymssql=2.1.5
-
-pyodbc=4.0.30
-
-pyopencl=2020.1
-
-pyparsing=2.4.7
-
-pyrsistent=0.17.3
-
-pyspark=2.4.5
-
-pytest=5.3.2
-
-python-dateutil=2.8.1
-
-python-editor=1.0.4
-
-pytools=2021.1
-
-pytz=2020.1
-
-PyWavelets=1.1.1
-
-PyYAML=5.3.1
-
-querystring-parser=1.2.4
-
-regex=2020.10.28
-
-requests-oauthlib=1.3.0
-
-retrying=1.3.3
-
-rsa=4.6
-
-ruamel.yaml=0.16.12
-
-ruamel.yaml.clib=0.2.2
-
-s3transfer=0.3.3
-
-sacremoses=0.0.43
-
-scikit-image=0.17.2
-
-scikit-learn=0.22.2.post1
-
-scipy=1.5.2
-
-seaborn=0.11.0
-
-SecretStorage=3.1.2
-
-segtok=1.5.10
-
-shap=0.34.0
-
-skl2onnx=1.4.9
-
-sklearn-pandas=1.7.0
-
-smart-open=1.9.0
-
-smmap=3.0.4
-
-soupsieve=2.0.1
-
-SQLAlchemy=1.3.13
-
-sqlitedict=1.7.0
-
-sqlparse=0.4.1
-
-statsmodels=0.10.2
-
-tabulate=0.8.7
-
-tb-nightly=1.14.0a20190603
-
-tensorboard=2.3.0
-
-tensorboard-plugin-wit=1.7.0
-
-tensorflow=2.0.0b1
-
-tensorflow-estimator=2.3.0
-
-termcolor=1.1.0
-
-textblob=0.15.3
-
-tf-estimator-nightly=1.14.0.dev2019060501
-
-tf2onnx=1.7.2
-
-tifffile=2020.9.3
-
-tokenizers=0.9.2
-
-toml=0.10.2
-
-torch=1.7.0
-
-tornado=6.1
-
-tqdm=4.48.2
-
-transformers=3.4.0
-
-typed-ast=1.4.1
-
-typing-extensions=3.7.4.3
-
-urllib3=1.25.10
-
-wcwidth=0.2.5
-
-webencodings=0.5.1
-
-websocket-client=0.57.0
-
-Werkzeug=1.0.1
-
-wheel=0.30.0
-
-wrapt=1.11.2
-
-xgboost=0.90
-
-zict=1.0.0
-
-zipp=0.6.0
+- conda:
+ - _libgcc_mutex=0.1
+ - _openmp_mutex=4.5
+ - brotlipy=0.7.0
+ - c-ares=1.16.1
+ - ca-certificates=2020.6.20
+ - cached-property=1.5.2
+ - cached_property=1.5.2
+ - certifi=2020.6.20
+ - cffi=1.14.3
+ - chardet=3.0.4
+ - conda=4.9.2
+ - conda-env=2.6.0
+ - cryptography=3.1.1
+ - cytoolz=0.8.2
+ - gperftools=2.7
+ - h5py=2.10.0
+ - hdf5=1.10.6
+ - idna=2.10
+ - jpeg=9d
+ - krb5=1.17.1
+ - ld_impl_linux-64=2.35
+ - libblas=3.9.0
+ - libcblas=3.9.0
+ - libcurl=7.71.1
+ - libedit=3.1.20191231
+ - libev=4.33
+ - libffi=3.2.1
+ - libgcc-ng=12.1.0
+ - libgfortran-ng=7.5.0
+ - libgfortran4=7.5.0
+ - libgomp=12.1.0
+ - libiconv=1.16
+ - liblapack=3.9.0
+ - libnghttp2=1.41.0
+ - libopenblas=0.3.12
+ - libssh2=1.9.0
+ - libstdcxx-ng=9.3.0
+ - ncurses=6.2
+ - numpy=1.18.5
+ - openssl=1.1.1h
+ - perl=5.32.0
+ - pip=20.2.4
+ - pycparser=2.20
+ - pygments=2.7.3
+ - pyopenssl=19.1.0
+ - pysocks=1.7.1
+ - python=3.6.11
+ - python_abi=3.6
+ - readline=8.0
+ - requests=2.24.0
+ - sentencepiece=0.1.92
+ - setuptools=41.4.0
+ - six=1.15.0
+ - sqlite=3.33.0
+ - tk=8.6.10
+ - toolz=0.11.1
+ - unixodbc=2.3.9
+ - urllib3=1.25.10
+ - wheel=0.30.0
+ - xz=5.2.5
+ - yaml=0.2.5
+ - zlib=1.2.11
+- pip:
+ - absl-py==0.11.0
+ - adal==1.2.4
+ - adlfs==0.5.5
+ - aiohttp==3.7.2
+ - alembic==1.4.1
+ - altair==4.1.0
+ - appdirs==1.4.4
+ - applicationinsights==0.11.9
+ - argon2-cffi==21.1.0
+ - asn1crypto==1.4.0
+ - astor==0.8.1
+ - astroid==2.4.2
+ - astunparse==1.6.3
+ - async-generator==1.10
+ - async-timeout==3.0.1
+ - attrs==20.3.0
+ - azure-common==1.1.25
+ - azure-core==1.15.0
+ - azure-datalake-store==0.0.51
+ - azure-graphrbac==0.61.1
+ - azure-identity==1.4.1
+ - azure-mgmt-authorization==0.61.0
+ - azure-mgmt-containerregistry==2.8.0
+ - azure-mgmt-keyvault==2.2.0
+ - azure-mgmt-resource==12.0.0
+ - azure-mgmt-storage==11.2.0
+ - azure-storage-blob==12.5.0
+ - azure-storage-common==2.1.0
+ - azure-storage-queue==12.1.5
+ - azureml-automl-core==1.32.0
+ - azureml-automl-runtime==1.32.0
+ - azureml-core==1.32.0
+ - azureml-dataprep==2.18.0
+ - azureml-dataprep-native==36.0.0
+ - azureml-dataprep-rslex==1.16.1
+ - azureml-dataset-runtime==1.32.0
+ - azureml-defaults==1.32.0
+ - azureml-interpret==1.32.0
+ - azureml-mlflow==1.32.0
+ - azureml-model-management-sdk==1.0.1b6.post1
+ - azureml-opendatasets==1.32.0
+ - azureml-pipeline==1.32.0
+ - azureml-pipeline-core==1.32.0
+ - azureml-pipeline-steps==1.32.0
+ - azureml-sdk==1.32.0
+ - azureml-telemetry==1.32.0
+ - azureml-train==1.32.0
+ - azureml-train-automl==1.32.0
+ - azureml-train-automl-client==1.32.0
+ - azureml-train-automl-runtime==1.32.0
+ - azureml-train-core==1.32.0
+ - azureml-train-restclients-hyperdrive==1.32.0
+ - backcall==0.2.0
+ - backports-datetime-fromisoformat==1.0.0
+ - backports-entry-points-selectable==1.1.1
+ - backports-tempfile==1.0
+ - backports-weakref==1.0.post1
+ - beautifulsoup4==4.9.3
+ - bitarray==1.6.1
+ - bleach==4.1.0
+ - bokeh==2.2.3
+ - boto==2.49.0
+ - boto3==1.15.14
+ - botocore==1.18.14
+ - bottleneck==1.3.2
+ - bpemb==0.3.2
+ - cachetools==4.1.1
+ - click==7.1.2
+ - cloudpickle==1.6.0
+ - configparser==3.7.4
+ - contextlib2==0.6.0.post1
+ - contextvars==2.4
+ - cycler==0.10.0
+ - cython==0.29.21
+ - databricks-cli==0.14.0
+ - dataclasses==0.8
+ - datashape==0.5.2
+ - decorator==4.4.2
+ - defusedxml==0.7.1
+ - deprecated==1.2.10
+ - dill==0.3.2
+ - distlib==0.3.6
+ - distro==1.5.0
+ - docker==4.3.1
+ - docutils==0.16
+ - dotnetcore2==2.1.17
+ - entrypoints==0.3
+ - et-xmlfile==1.0.1
+ - fastapi==0.63.0
+ - filelock==3.0.12
+ - fire==0.3.1
+ - flair==0.5
+ - flask==1.0.3
+ - flatbuffers==2.0
+ - fsspec==0.8.4
+ - fusepy==3.0.1
+ - future==0.18.2
+ - gast==0.3.3
+ - gensim==3.8.3
+ - geographiclib==1.50
+ - geopy==2.0.0
+ - gitdb==4.0.5
+ - gitpython==3.1.11
+ - google-api-core==1.30.0
+ - google-auth==1.32.1
+ - google-auth-oauthlib==0.4.2
+ - google-pasta==0.2.0
+ - googleapis-common-protos==1.53.0
+ - gorilla==0.3.0
+ - grpcio==1.33.2
+ - gunicorn==20.1.0
+ - h11==0.12.0
+ - heapdict==1.0.1
+ - html5lib==1.1
+ - hummingbird-ml==0.0.6
+ - hyperopt==0.2.5
+ - idna-ssl==1.1.0
+ - imageio==2.9.0
+ - immutables==0.16
+ - importlib-metadata==1.7.0
+ - importlib-resources==5.4.0
+ - interpret-community==0.18.1
+ - interpret-core==0.2.1
+ - ipykernel==5.5.3
+ - ipython==7.8.0
+ - ipython-genutils==0.2.0
+ - ipywidgets==7.6.3
+ - isodate==0.6.0
+ - isort==5.6.4
+ - itsdangerous==1.1.0
+ - jdcal==1.4.1
+ - jedi==0.18.1
+ - jeepney==0.4.3
+ - jinja2==2.11.2
+ - jmespath==0.10.0
+ - joblib==0.14.1
+ - json-logging-py==0.2
+ - jsonpickle==1.4.1
+ - jsonschema==3.2.0
+ - jupyter-client==7.1.2
+ - jupyter-core==4.9.2
+ - jupyterlab-pygments==0.1.2
+ - jupyterlab-widgets==1.1.1
+ - keras-applications==1.0.8
+ - keras-preprocessing==1.1.2
+ - keras2onnx==1.6.0
+ - kiwisolver==1.3.1
+ - koalas==1.2.0
+ - langdetect==1.0.8
+ - lazy-object-proxy==1.4.3
+ - liac-arff==2.5.0
+ - lightgbm==2.3.0
+ - mako==1.1.3
+ - markdown==3.3.3
+ - markupsafe==1.1.1
+ - matplotlib==3.2.2
+ - mccabe==0.6.1
+ - mistune==0.8.4
+ - mleap==0.16.1
+ - mlflow==1.18.0
+ - mlflow-skinny==1.20.2
+ - more-itertools==8.6.0
+ - mpld3==0.3
+ - mpmath==1.1.0
+ - msal==1.5.0
+ - msal-extensions==0.2.2
+ - msrest==0.6.19
+ - msrestazure==0.6.4
+ - multidict==5.0.0
+ - multipledispatch==0.6.0
+ - mypy==0.780
+ - mypy-extensions==0.4.3
+ - nbclient==0.5.9
+ - nbconvert==6.0.7
+ - nbformat==5.1.3
+ - ndg-httpsclient==0.5.1
+ - nest-asyncio==1.5.6
+ - networkx==2.5
+ - nimbusml==1.7.1
+ - nltk==3.5
+ - nose==1.3.7
+ - notebook==6.1.5
+ - oauthlib==3.1.0
+ - odo==0.5.0
+ - olefile==0.46
+ - onnx==1.7.0
+ - onnxconverter-common==1.6.0
+ - onnxmltools==1.4.1
+ - onnxruntime==1.8.0
+ - opencensus==0.7.13
+ - opencensus-context==0.1.2
+ - opencensus-ext-azure==1.0.8
+ - openpyxl==3.0.5
+ - opt-einsum==3.3.0
+ - packaging==20.4
+ - pandas==0.25.3
+ - pandasql==0.7.3
+ - pandocfilters==1.5.0
+ - parso==0.8.2
+ - pathspec==0.8.0
+ - patsy==0.5.1
+ - pexpect==4.8.0
+ - pickleshare==0.7.5
+ - pillow==8.0.1
+ - platformdirs==2.4.0
+ - plotly==4.12.0
+ - pluggy==0.13.1
+ - pmdarima==1.1.1
+ - portalocker==1.7.1
+ - prometheus-client==0.8.0
+ - prometheus-flask-exporter==0.18.1
+ - prompt-toolkit==2.0.10
+ - protobuf==3.13.0
+ - psutil==5.7.2
+ - ptyprocess==0.7.0
+ - py==1.9.0
+ - py-cpuinfo==5.0.0
+ - py4j==0.10.7
+ - pyarrow==1.0.1
+ - pyasn1==0.4.8
+ - pyasn1-modules==0.2.8
+ - pycrypto==2.6.1
+ - pydantic==1.9.2
+ - pyjwt==1.7.1
+ - pylint==2.6.0
+ - pymssql==2.1.5
+ - pyodbc==4.0.30
+ - pyopencl==2020.1
+ - pyparsing==2.4.7
+ - pyrsistent==0.17.3
+ - pyspark==2.4.5
+ - pytest==5.3.2
+ - python-dateutil==2.8.1
+ - python-editor==1.0.4
+ - pytools==2021.1
+ - pytz==2020.1
+ - pywavelets==1.1.1
+ - pyyaml==5.3.1
+ - pyzmq==22.3.0
+ - querystring-parser==1.2.4
+ - regex==2020.10.28
+ - requests-oauthlib==1.3.0
+ - retrying==1.3.3
+ - rsa==4.6
+ - ruamel-yaml==0.16.12
+ - ruamel-yaml-clib==0.2.2
+ - s3transfer==0.3.3
+ - sacremoses==0.0.43
+ - scikit-image==0.17.2
+ - scikit-learn==0.22.2.post1
+ - scipy==1.5.2
+ - seaborn==0.11.0
+ - secretstorage==3.1.2
+ - segtok==1.5.10
+ - send2trash==1.8.0
+ - shap==0.34.0
+ - skl2onnx==1.4.9
+ - sklearn-pandas==1.7.0
+ - smart-open==1.9.0
+ - smmap==3.0.4
+ - soupsieve==2.0.1
+ - sqlalchemy==1.3.13
+ - sqlitedict==1.7.0
+ - sqlparse==0.4.1
+ - starlette==0.13.6
+ - statsmodels==0.10.2
+ - tabulate==0.8.7
+ - tb-nightly==1.14.0a20190603
+ - tensorboard==2.3.0
+ - tensorboard-plugin-wit==1.7.0
+ - tensorflow==2.0.1
+ - tensorflow-estimator==2.3.0
+ - termcolor==1.1.0
+ - terminado==0.12.1
+ - testpath==0.5.0
+ - textblob==0.15.3
+ - tf-estimator-nightly==1.14.0.dev2019060501
+ - tf2onnx==1.7.2
+ - tifffile==2020.9.3
+ - tokenizers==0.9.2
+ - toml==0.10.2
+ - torch==1.7.0
+ - tornado==6.1
+ - tqdm==4.48.2
+ - traitlets==4.3.3
+ - transformers==3.4.0
+ - typed-ast==1.4.1
+ - typing-extensions==3.7.4.3
+ - uvicorn==0.13.2
+ - virtualenv==20.8.1
+ - wcwidth==0.2.5
+ - webencodings==0.5.1
+ - websocket-client==0.57.0
+ - werkzeug==1.0.1
+ - widgetsnbextension==3.5.2
+ - wrapt==1.11.2
+ - xgboost==0.90
+ - yarl==1.6.3
+ - zict==1.0.0
+ - zipp==3.4.1
## Next steps
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
Last updated 06/13/2022 -+
widgetsnbextension==3.5.2
- [Azure Synapse Analytics](../overview-what-is.md) - [Apache Spark Documentation](https://spark.apache.org/docs/3.2.1/)-- [Apache Spark Concepts](apache-spark-concepts.md)
+- [Apache Spark Concepts](apache-spark-concepts.md)
synapse-analytics Tutorial Use Pandas Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/tutorial-use-pandas-spark-pool.md
Last updated 11/02/2021--++
synapse-analytics Use Prometheus Grafana To Monitor Apache Spark Application Level Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/use-prometheus-grafana-to-monitor-apache-spark-application-level-metrics.md
Title: Tutorial - Monitor Apache Spark Applications metrics with Prometheus and Grafana description: Tutorial - Learn how to deploy the Apache Spark application metrics solution to an Azure Kubernetes Service (AKS) cluster and learn how to integrate the Grafana dashboards. --++
synapse-analytics Vscode Tool Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/vscode-tool-synapse.md
- Title: Tutorial - Spark & Hive Tools for VSCode (Spark application)
-description: Tutorial - Use the Spark & Hive Tools for VSCode to develop Spark applications, which are written in Python, and submit them to a serverless Apache Spark pool.
------- Previously updated : 09/03/2020--
-# Tutorial: Create an Apache Spark application with VSCode using a Synapse workspace
-
-Learn how to use Apache Spark & Hive Tools for Visual Studio Code. Use the tools to create and submit Apache Hive batch jobs, interactive Hive queries, and PySpark scripts for Apache Spark. First we'll describe how to install Spark & Hive Tools in Visual Studio Code. Then we'll walk through how to submit jobs to Spark & Hive Tools.
-
-Spark & Hive Tools can be installed on platforms that are supported by Visual Studio Code. Note the following prerequisites for different platforms.
-
-## Prerequisites
-
-The following items are required for completing the steps in this article:
--- A serverless Apache Spark pool. To create a serverless Apache Spark pool, see [Create Apache Spark pool using Azure portal](../../synapse-analytics/quickstart-create-apache-spark-pool-portal.md).-- [Visual Studio Code](https://code.visualstudio.com/).-- [Mono](https://www.mono-project.com/docs/getting-started/install/). Mono is required only for Linux and macOS.-- [A PySpark interactive environment for Visual Studio Code](../../hdinsight/set-up-pyspark-interactive-environment.md).-- A local directory. This article uses **C:\HD\Synaseexample**.-
-## Install Spark & Hive Tools
--
-After you meet the prerequisites, you can install Spark & Hive Tools for Visual Studio Code by following these steps:
-
-1. Open Visual Studio Code.
-
-2. From the menu bar, navigate to **View** > **Extensions**.
-
-3. In the search box, enter **Spark & Hive**.
-
-4. Select **Spark & Hive Tools** from the search results, and then select **Install**:
-
- ![Spark & Hive for Visual Studio Code Python install](./media/vscode-tool-synapse/install-hdInsight-plugin.png)
-
-5. Select **Reload** when necessary.
-
-> [!Note]
->
-> **Synapse PySpark installation error** is an [known issue](#known-issues).
-
-## Open a work folder
-
-To open a work folder and to create a file in Visual Studio Code, follow these steps:
-
-1. From the menu bar, navigate to **File** > **Open Folder...** > **C:\HD\Synaseexample**, and then select the **Select Folder** button. The folder appears in the **Explorer** view on the left.
-
-2. In **Explorer** view, select the **Synaseexample** folder, and then select the **New File** icon next to the work folder:
-
- ![visual studio code new file icon](./media/vscode-tool-synapse/visual-studio-code-new-file.png)
-
-3. Name the new file by using the `.py` (Spark script) file extension. This example uses **HelloWorld.py**.
-
-## Connect to your Spark pools
-
-Sign in to Azure subscription to connect to your Spark pools.
-
-### Sign in to your Azure subscription
-
-Follow these steps to connect to Azure:
-
-1. From the menu bar, navigate to **View** > **Command Palette...**, and enter **Azure: Sign In**:
-
- ![Spark & Hive Tools for Visual Studio Code login](./media/vscode-tool-synapse/hdinsight-for-vscode-extension-login.png)
-
-2. Follow the sign-in instructions to sign in to Azure. After you're connected, your Azure account name shows on the status bar at the bottom of the Visual Studio Code window.
-
-## Set the default Spark pool
-
-1. Reopen the **Synaseexample** folder that was discussed [earlier](#open-a-work-folder), if closed.
-
-2. Select the **HelloWorld.py** file that was created [earlier](#open-a-work-folder). It opens in the script editor.
-
-3. Right-click the script editor, and then select **Synapse: Set default Spark pool**.
-
-4. [Connect](#connect-to-your-spark-pools) to your Azure account if you haven't yet done so.
-
-5. Select a Spark pool as the default Spark pool for the current script file.
-
-6. Use **Synapse: PySpark Interactive** to submit this file. And the tools automatically update the **.VSCode\settings.json** configuration file:
-
- ![Set default cluster configuration](./media/vscode-tool-synapse/set-default-cluster-configuration.png)
-
-## Submit interactive Synapse PySpark queries to Spark pool (Not supported anymore)
-
-> [!NOTE]
->
->For Synapse Pyspark interactive, since its dependency will not be maintained anymore by other team, this will not be maintained anymore as well. If you trying to use Synapse Pyspark interactive, please switch to use [Azure Synapse Analytics](https://ms.web.azuresynapse.net/en-us/) instead. And it's a long term change.
->
-
-Users can perform Synapse PySpark interactive on Spark pool in the following ways:
-
-### Using the Synapse PySpark interactive command in PY file
-Using the PySpark interactive command to submit the queries, follow these steps:
-
-1. Reopen the **Synaseexample** folder that was discussed [earlier](#open-a-work-folder), if closed.
-
-2. Create a new **HelloWorld.py** file, following the [earlier](#open-a-work-folder) steps.
-
-3. Copy and paste the following code into the script file:
-
-```python
-import sys
-from operator import add
-from pyspark.sql import SparkSession, Row
-
-spark = SparkSession\
- .builder\
- .appName("PythonWordCount")\
- .getOrCreate()
-
-data = [Row(col1='pyspark and spark', col2=1), Row(col1='pyspark', col2=2), Row(col1='spark vs hadoop', col2=2), Row(col1='spark', col2=2), Row(col1='hadoop', col2=2)]
-df = spark.createDataFrame(data)
-lines = df.rdd.map(lambda r: r[0])
-
-counters = lines.flatMap(lambda x: x.split(' ')) \
- .map(lambda x: (x, 1)) \
- .reduceByKey(add)
-
-output = counters.collect()
-sortedCollection = sorted(output, key = lambda r: r[1], reverse = True)
-
-for (word, count) in sortedCollection:
- print("%s: %i" % (word, count))
-```
-
-4. The prompt to install PySpark/Synapse Pyspark kernel is displayed in the lower right corner of the window. You can click on **Install** button to proceed for the PySpark/Synapse Pyspark installations; or click on **Skip** button to skip this step.
-
- ![install pyspark kernel](./media/vscode-tool-synapse/install-the-pyspark-kernel.png)
-
-5. If you need to install it later, you can navigate to **File** > **Preference** > **Settings**, then uncheck **Hdinsight: Enable Skip Pyspark Installation** in the settings.
-
- ![enable skip pyspark installation](./media/vscode-tool-synapse/enable-skip-pyspark-installation.png)
-
-6. If the installation is successful in step 4, the "PySpark/Synapse Pyspark installed successfully" message box is displayed in the lower right corner of the window. Click on **Reload** button to reload the window.
-
- ![pyspark installed successfully](./media/vscode-tool-synapse/pyspark-kernel-installed-successfully.png)
-
-7. From the menu bar, navigate to **View** > **Command Palette...** or use the **Shift + Ctrl + P** keyboard shortcut, and enter **Python: Select Interpreter to start Jupyter Server**.
-
- ![select interpreter to start jupyter server](./media/vscode-tool-synapse/select-interpreter-to-start-jupyter-server.png)
-
-8. Select the Python option below.
-
- ![choose the below option](./media/vscode-tool-synapse/choose-the-below-option.png)
-
-9. From the menu bar, navigate to **View** > **Command Palette...** or use the **Shift + Ctrl + P** keyboard shortcut, and enter **Developer: Reload Window**.
-
- ![reload window](./media/vscode-tool-synapse/reload-window.png)
-
-10. [Connect](#connect-to-your-spark-pools) to your Azure account if you haven't yet done so.
-
-11. Select all the code, right-click the script editor, and select **Synapse: Pyspark Interactive** to submit the query.
-
- ![pyspark interactive context menu](./media/vscode-tool-synapse/pyspark-interactive-right-click.png)
-
-12. Select the Spark pool, if you haven't specified a default Spark pool. After a few moments, the **Python Interactive** results appear in a new tab. Click on PySpark to switch the kernel to **Synapse PySpark**, then, submit the selected code again, and the code will run successfully. The tools also let you submit a block of code instead of the whole script file by using the context menu:
-
- ![interactive](./media/vscode-tool-synapse/pyspark-interactive-python-interactive-window.png)
-
-### Perform interactive query in PY file using a #%% comment
-
-1. Add **#%%** before the code to get notebook experience.
-
- ![add #%%](./media/vscode-tool-synapse/run-cell.png)
-
-2. Click on **Run Cell**. After a few moments, the Python Interactive results appear in a new tab. Click on PySpark to switch the kernel to **Synapse PySpark**, then, click on **Run Cell** again, and the code will run successfully.
-
- ![run cell results](./media/vscode-tool-synapse/run-cell-get-results.png)
-
-## Leverage IPYNB support from Python extension
-
-1. You can create a Jupyter Notebook by command from the Command Palette or by creating a new .ipynb file in your workspace. For more information, see [Working with Jupyter Notebooks in Visual Studio Code](https://code.visualstudio.com/docs/python/jupyter-support)
-
-2. Click on **Run cell** button, follow the prompts to **Set the default spark pool** (strongly encourage to set default cluster/pool every time before opening a notebook) and then, **Reload** window.
-
- ![set the default spark pool and reload](./media/vscode-tool-synapse/set-the-default-spark-pool-and-reload.png)
-
-3. Click on PySpark to switch kernel to **Synapse Pyspark**, and then click on **Run Cell**, after a while, the result will be displayed.
-
- ![run ipynb results](./media/vscode-tool-synapse/run-ipynb-file-results.png)
--
-> [!NOTE]
->
->1. Ms-python >=2020.5.78807 version is not supported on this extention is a [known issue](#known-issues).
->
->2. Switch to Synapse Pyspark kernel, disabling auto-settings in Azure Portal is encouraged. Otherwise it may take a long while to wake up the cluster and set synapse kernel for the first time use.
->
-> ![auto settings](./media/vscode-tool-synapse/auto-settings.png)
-
-## Spark session config
-
-You can specify the timeout duration, the number, and the size of executors to give to the current Spark session in **Configure session**. Restart the Spark session is for configuration changes to take effect. All cached notebook variables are cleared.
-
-```python
-%%configure -f
-{
- // refer to https://github.com/cloudera/livy#request-body for a list of valid parameters to config the session.
- "driverMemory":"2g",
- "driverCores":3,
- "executorMemory":"2g",
- "executorCores":2,
- "jars":[],
- "conf":{
- "spark.driver.maxResultSize":"10g"
- }
-}
-```
-
-> [!NOTE]
->
-> Display function and Spark SQL may not be rendered properly in the output cell.
-
-## Submit PySpark batch job to Spark pool
-
-1. Reopen the **Synaseexample** folder that you discussed [earlier](#open-a-work-folder), if closed.
-
-2. Create a new **BatchFile.py** file by following the [earlier](#open-a-work-folder) steps.
-
-3. Copy and paste the following code into the script file:
-
- ```python
- from __future__ import print_function
- import sys
- from operator import add
- from pyspark.sql import SparkSession
- if __name__ == "__main__":
- spark = SparkSession\
- .builder\
- .appName("PythonWordCount")\
- .getOrCreate()
-
- lines = spark.read.text('/HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv').rdd.map(lambda r: r[0])
- counts = lines.flatMap(lambda x: x.split(' '))\
- .map(lambda x: (x, 1))\
- .reduceByKey(add)
- output = counts.collect()
- for (word, count) in output:
- print("%s: %i" % (word, count))
- spark.stop()
- ```
-
-4. [Connect](#connect-to-your-spark-pools) to your Azure account if you haven't yet done so.
-
-5. Right-click the script editor, and then select **Synapse: PySpark Batch**.
-
-6. Select a Spark pool to submit your PySpark job to:
-
- ![Submit Python job result output](./media/vscode-tool-synapse/submit-pythonjob-result.png)
-
-After you submit a batch job to spark pool, submission logs appear in the **OUTPUT** window in Visual Studio Code. The **Spark UI** URL and **Spark Job Application UI** URL are also shown. You can open the URL in a web browser to track the job status.
-
-## Access and manage Synapse Workspace
-
-You can perform different operations in Azure Explorer within Spark & Hive tools for VSCode. From the Azure Explorer.
-
-![azure image](./media/vscode-tool-synapse/azure.png)
-
-### Launch workspace
-
-1. From Azure Explorer, navigate to **SYNAPSE**, expand it, and display the Synapse Subscription list.
-
- ![synapse explorer](./media/vscode-tool-synapse/synapse-explorer.png)
-
-2. Click on Subscription of Synapse workspace, expand it, and display the workspace list.
-
-3. Right-click a workspace, then select **View Apache Spark applications**, the Apache Spark application page in the Synapse Studio website will be opened.
-
- ![right click on workspace](./media/vscode-tool-synapse/right-click-on-workspace.png)
-
- ![synapse studio application](./media/vscode-tool-synapse/synapse-studio-application.png)
-
-4. Expand a workspace, **Default Storage** and **Spark Pools** are displayed.
-
-5. Right-click on **Default Storage**, the **Copy Full Path** and **Open in Synapse Studio** are displayed.
-
- ![right click on default storage](./media/vscode-tool-synapse/right-click-on-default-storage.png)
-
- - Click on **Copy Full Path**, the Primary ADLS Gen2 account URL will be copied, you can paste it where you need。
-
- - Click on **Open in Synapse Studio**, the Primary Storage Account will be opened in Synapse Studio.
-
- ![default storage in synapse studio](./media/vscode-tool-synapse/default-storage-in-synapse-studio.png)
-
-6. Expand the **Default Storage**, the Primary Storage Account is displayed.
-
-7. Expand the **Spark Pools**, all spark pools in the workspace are displayed.
-
- ![expand storage pool](./media/vscode-tool-synapse/expand-storage-pool.png)
--
-## Known issues
-
-### Synapse PySpark installation error.
-
-![known issues](./media/vscode-tool-synapse/known-issue.png)
-
-### Not supported submit interactive Synapse PySpark queries to Spark pool anymore
-
-For Synapse Pyspark interactive, since its dependency will not be maintained anymore by other team, this will not be maintained anymore as well. If you trying to use Synapse Pyspark interactive, please switch to use [Azure Synapse Analytics](https://ms.web.azuresynapse.net/en-us/) instead. And it's a long term change.
-
-## Next steps
--- [Azure Synapse Analytics](../overview-what-is.md)-- [Create a new Apache Spark pool for an Azure Synapse Analytics workspace](../../synapse-analytics/quickstart-create-apache-spark-pool-studio.md)
synapse-analytics Sql Data Warehouse Integrate Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-integrate-azure-stream-analytics.md
Previously updated : 9/25/2020 Last updated : 10/07/2022
synapse-analytics Sql Data Warehouse Reference Collation Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-collation-types.md
Title: Data warehouse collation types description: Collation types supported for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.-- Last updated 12/04/2019-++
synapse-analytics Best Practices Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/best-practices-serverless-sql-pool.md
You can use a performance-optimized parser when you query CSV files. For details
### Manually create statistics for CSV files
-Serverless SQL pool relies on statistics to generate optimal query execution plans. Statistics are automatically created for columns in Parquet files when needed. At this moment, statistics aren't automatically created for columns in CSV files. Create statistics manually for columns that you use in queries, particularly those used in DISTINCT, JOIN, WHERE, ORDER BY, and GROUP BY. Check [statistics in serverless SQL pool](develop-tables-statistics.md#statistics-in-serverless-sql-pool) for details.
+Serverless SQL pool relies on statistics to generate optimal query execution plans. Statistics are automatically created for columns in Parquet files when needed, as well as for columns in CSV files when you use OPENROWSET. Currently, statistics aren't automatically created for columns in CSV files when using external tables. Create statistics manually for columns that you use in queries, particularly those used in DISTINCT, JOIN, WHERE, ORDER BY, and GROUP BY. Check [statistics in serverless SQL pool](develop-tables-statistics.md#statistics-in-serverless-sql-pool) for details.
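For example, a minimal sketch of manually creating single-column statistics on an external table that points to CSV files might look like the following. The table, column, and statistics names (`census_external_table`, `STATE_PROVINCE_NAME`, `sStateProvinceName`) are placeholders, not names from this article:

```sql
-- Sketch: create single-column statistics on an external table over CSV files.
-- NORECOMPUTE is required in serverless SQL pool; FULLSCAN scans all rows.
CREATE STATISTICS sStateProvinceName
ON census_external_table (STATE_PROVINCE_NAME)
WITH FULLSCAN, NORECOMPUTE;
```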
## Data types
synapse-analytics Create Use Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/create-use-views.md
Last updated 05/20/2020 -+ # Create and use views using serverless SQL pool in Azure Synapse Analytics
from openrowset(
The `OPENJSON` function parses each line from the JSONL file containing one JSON document per line in textual format.
-## <a id="cosmosdb-view"></a> Cosmos DB views on containers
+## <a id="cosmosdb-view"></a> Azure Cosmos DB views on containers
-The views can be created on top of the Azure Cosmos DB containers if the Cosmos DB analytical storage is enabled on the container. Cosmos DB account name, database name, and container name should be added as a part of the view, and the read-only access key should be placed in the database scoped credential that the view references.
+Views can be created on top of Azure Cosmos DB containers if the Azure Cosmos DB analytical storage is enabled on the container. The Azure Cosmos DB account name, database name, and container name should be added as part of the view definition, and the read-only access key should be placed in the database scoped credential that the view references.
```sql CREATE DATABASE SCOPED CREDENTIAL MyCosmosDbAccountCredential
synapse-analytics Develop Tables External Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-external-tables.md
The key differences between Hadoop and native external tables are presented in t
| [Folder partition elimination](#folder-partition-elimination) | No | Partition elimination is available only in the partitioned tables created on Parquet or CSV formats that are synchronized from Apache Spark pools. You might create external tables on Parquet partitioned folders, but the partitioning columns will be inaccessible and ignored, while the partition elimination will not be applied. Do not create [external tables on Delta Lake folders](create-use-external-tables.md#delta-tables-on-partitioned-folders) because they are not supported. Use [Delta partitioned views](create-use-views.md#delta-lake-partitioned-views) if you need to query partitioned Delta Lake data. | | [File elimination](#file-elimination) (predicate pushdown) | No | Yes in serverless SQL pool. For the string pushdown, you need to use `Latin1_General_100_BIN2_UTF8` collation on the `VARCHAR` columns to enable pushdown. | | Custom format for location | No | Yes, using wildcards like `/year=*/month=*/day=*` for Parquet or CSV formats. Custom folder paths are not available in Delta Lake. In the serverless SQL pool you can also use recursive wildcards `/logs/**` to reference Parquet or CSV files in any sub-folder beneath the referenced folder. |
-| Recursive folder scan | Yes | Yes. In serverless SQL pools must be specified `/**` at the end of the location path. In Dedicated pool the folders are alwasy scanned recursively. |
+| Recursive folder scan | Yes | Yes. In serverless SQL pools, `/**` must be specified at the end of the location path (see the `OPENROWSET` sketch after this table). In dedicated pools, the folders are always scanned recursively. |
| Storage authentication | Storage Access Key(SAK), AAD passthrough, Managed identity, Custom application Azure AD identity | [Shared Access Signature(SAS)](develop-storage-files-storage-access-control.md?tabs=shared-access-signature), [AAD passthrough](develop-storage-files-storage-access-control.md?tabs=user-identity), [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity), [Custom application Azure AD identity](develop-storage-files-storage-access-control.md?tabs=service-principal). | | Column mapping | Ordinal - the columns in the external table definition are mapped to the columns in the underlying Parquet files by position. | Serverless pool: by name. The columns in the external table definition are mapped to the columns in the underlying Parquet files by column name matching. <br/> Dedicated pool: ordinal matching. The columns in the external table definition are mapped to the columns in the underlying Parquet files by position.| | CETAS (exporting/transformation) | Yes | CETAS with the native tables as a target works only in the serverless SQL pool. You cannot use the dedicated SQL pools to export data using native tables. |
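For example, a minimal sketch of the recursive wildcard pattern in serverless SQL pool, shown here with `OPENROWSET` rather than an external table, might look like the following. The storage account, container, and folder path are placeholders:

```sql
-- Sketch: read every Parquet file in any sub-folder beneath /logs using the /** wildcard.
SELECT TOP 10 *
FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/logs/**',
        FORMAT = 'PARQUET'
     ) AS rows;
```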
synapse-analytics Develop Tables Statistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-statistics.md
Previously updated : 04/13/2022 Last updated : 10/11/2022
DBCC SHOW_STATISTICS([<schema_name>.<table_name>],<stats_name>)
For example: ```sql
-DBCC SHOW_STATISTICS (dbo.table1, stats_col1);
+DBCC SHOW_STATISTICS ('dbo.table1', 'stats_col1');
``` #### Show one or more parts of DBCC SHOW_STATISTICS()
DBCC SHOW_STATISTICS([<schema_name>.<table_name>],<stats_name>)
For example: ```sql
-DBCC SHOW_STATISTICS (dbo.table1, stats_col1)
+DBCC SHOW_STATISTICS ('dbo.table1', 'stats_col1')
WITH histogram, density_vector ```
synapse-analytics On Demand Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/on-demand-workspace-overview.md
+ Last updated 01/19/2022 # Serverless SQL pool in Azure Synapse Analytics
-Every Azure Synapse Analytics workspace comes with serverless SQL pool endpoints that you can use to query data in the [Azure Data Lake](query-data-storage.md) ([Parquet](query-data-storage.md#query-parquet-files), [Delta Lake](query-delta-lake-format.md), [delimited text](query-data-storage.md#query-csv-files) formats), [Cosmos DB](query-cosmos-db-analytical-store.md?toc=%2Fazure%2Fsynapse-analytics%2Ftoc.json&bc=%2Fazure%2Fsynapse-analytics%2Fbreadcrumb%2Ftoc.json&tabs=openrowset-key), or Dataverse.
+Every Azure Synapse Analytics workspace comes with serverless SQL pool endpoints that you can use to query data in the [Azure Data Lake](query-data-storage.md) ([Parquet](query-data-storage.md#query-parquet-files), [Delta Lake](query-delta-lake-format.md), [delimited text](query-data-storage.md#query-csv-files) formats), [Azure Cosmos DB](query-cosmos-db-analytical-store.md?toc=%2Fazure%2Fsynapse-analytics%2Ftoc.json&bc=%2Fazure%2Fsynapse-analytics%2Fbreadcrumb%2Ftoc.json&tabs=openrowset-key), or Dataverse.
Serverless SQL pool is a query service over the data in your data lake. It enables you to access your data through the following functionalities:
In order to enable smooth experience for in place querying of data residing in f
[Various delimited text formats (with custom field terminator, row terminator, escape char)](query-data-storage.md#query-csv-files)
-[Cosmos DB analytical store](query-cosmos-db-analytical-store.md?toc=%2Fazure%2Fsynapse-analytics%2Ftoc.json&bc=%2Fazure%2Fsynapse-analytics%2Fbreadcrumb%2Ftoc.json&tabs=openrowset-key)
+[Azure Cosmos DB analytical store](query-cosmos-db-analytical-store.md?toc=%2Fazure%2Fsynapse-analytics%2Ftoc.json&bc=%2Fazure%2Fsynapse-analytics%2Fbreadcrumb%2Ftoc.json&tabs=openrowset-key)
[Read a chosen subset of columns](query-data-storage.md#read-a-chosen-subset-of-columns)
A user that is logged into the serverless SQL pool service must be authorized to
- **[Workspace Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity)** is an authorization type where the identity of the Synapse workspace is used to authorize access to the data. Before accessing the data, the Azure Storage administrator must grant the workspace identity permissions to access the data.
-### Access to Cosmos DB
+### Access to Azure Cosmos DB
-You need to create server-level or database-scoped credential with the Cosmos DB account read-only key to [access Cosmos DB analytical store](query-cosmos-db-analytical-store.md?toc=%2Fazure%2Fsynapse-analytics%2Ftoc.json&bc=%2Fazure%2Fsynapse-analytics%2Fbreadcrumb%2Ftoc.json&tabs=openrowset-key).
+You need to create a server-level or database-scoped credential with the Azure Cosmos DB account read-only key to [access the Azure Cosmos DB analytical store](query-cosmos-db-analytical-store.md?toc=%2Fazure%2Fsynapse-analytics%2Ftoc.json&bc=%2Fazure%2Fsynapse-analytics%2Fbreadcrumb%2Ftoc.json&tabs=openrowset-key).
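A minimal sketch of such a database scoped credential might look like the following. The credential name, password, and secret are placeholders, and a database master key is assumed to be a prerequisite:

```sql
-- Sketch: a database master key is required before creating database scoped credentials.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

-- Store the read-only Azure Cosmos DB account key in a database scoped credential.
CREATE DATABASE SCOPED CREDENTIAL MyCosmosDbAccountCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<read-only Azure Cosmos DB account key>';
```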
## Next steps Additional information on endpoint connection and querying files can be found in the following articles:
synapse-analytics Overview Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/overview-features.md
-+ Last updated 03/24/2022
Query languages used in Synapse SQL can have different supported features depend
| | Dedicated | Serverless | | | | | | **SELECT statement** | Yes. `SELECT` statement is supported, but some Transact-SQL query clauses, such as [FOR XML/FOR JSON](/sql/t-sql/queries/select-for-clause-transact-sql?view=azure-sqldw-latest&preserve-view=true), [MATCH](/sql/t-sql/queries/match-sql-graph?view=azure-sqldw-latest&preserve-view=true), OFFSET/FETCH are not supported. | Yes, `SELECT` statement is supported, but some Transact-SQL query clauses like [FOR XML](/sql/t-sql/queries/select-for-clause-transact-sql?view=azure-sqldw-latest&preserve-view=true), [MATCH](/sql/t-sql/queries/match-sql-graph?view=azure-sqldw-latest&preserve-view=true), [PREDICT](/sql/t-sql/queries/predict-transact-sql?view=azure-sqldw-latest&preserve-view=true), GROUPING SETS, and the query hints are not supported. |
-| **INSERT statement** | Yes | No. Upload new data to Data lake using Spark or other tools. Use Cosmos DB with the analytical storage for highly transactional workloads. You can use [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true) to create an external table and insert data. |
-| **UPDATE statement** | Yes | No, update Parquet/CSV data using Spark and the changes will be automatically available in serverless pool. Use Cosmos DB with the analytical storage for highly transactional workloads. |
-| **DELETE statement** | Yes | No, delete Parquet/CSV data using Spark and the changes will be automatically available in serverless pool. Use Cosmos DB with the analytical storage for highly transactional workloads.|
+| **INSERT statement** | Yes | No. Upload new data to Data Lake using Spark or other tools. Use Azure Cosmos DB with the analytical storage for highly transactional workloads. You can use [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true) to create an external table and insert data. |
+| **UPDATE statement** | Yes | No, update Parquet/CSV data using Spark and the changes will be automatically available in serverless pool. Use Azure Cosmos DB with the analytical storage for highly transactional workloads. |
+| **DELETE statement** | Yes | No, delete Parquet/CSV data using Spark and the changes will be automatically available in serverless pool. Use Azure Cosmos DB with the analytical storage for highly transactional workloads.|
| **MERGE statement** | Yes ([preview](/sql/t-sql/statements/merge-transact-sql?view=azure-sqldw-latest&preserve-view=true)) | No, merge Parquet/CSV data using Spark and the changes will be automatically available in serverless pool. | | **CTAS statement** | Yes | No, [CREATE TABLE AS SELECT](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true) statement is not supported in the serverless SQL pool. | | **CETAS statement** | Yes, you can perform initial load into an external table using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). | Yes, you can perform initial load into an external table using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). CETAS supports Parquet and CSV output formats. | | **[Transactions](develop-transactions.md)** | Yes | Yes, transactions are applicable only on the meta-data objects. | | **[Labels](develop-label.md)** | Yes | No, labels are not supported in serverless SQL pools. | | **Data load** | Yes. Preferred utility is [COPY](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true) statement, but the system supports both BULK load (BCP) and [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true) for data loading. | No, you cannot load data into the serverless SQL pool because data is stored on external storage. You can initially load data into an external table using CETAS statement. |
-| **Data export** | Yes. Using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). | Yes. You can export data from external storage (Azure data lake, Dataverse, Cosmos DB) into Azure data lake using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). |
+| **Data export** | Yes. Using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). | Yes. You can export data from external storage (Azure Data Lake, Dataverse, Azure Cosmos DB) into Azure data lake using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). |
| **Types** | Yes, all Transact-SQL types except [cursor](/sql/t-sql/data-types/cursor-transact-sql?view=azure-sqldw-latest&preserve-view=true), [hierarchyid](/sql/t-sql/data-types/hierarchyid-data-type-method-reference?view=azure-sqldw-latest&preserve-view=true), [ntext, text, and image](/sql/t-sql/data-types/ntext-text-and-image-transact-sql?view=azure-sqldw-latest&preserve-view=true), [rowversion](/sql/t-sql/data-types/rowversion-transact-sql?view=azure-sqldw-latest&preserve-view=true), [Spatial Types](/sql/t-sql/spatial-geometry/spatial-types-geometry-transact-sql?view=azure-sqldw-latest&preserve-view=true), [sql\_variant](/sql/t-sql/data-types/sql-variant-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [xml](/sql/t-sql/xml/xml-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL types are supported, except [cursor](/sql/t-sql/data-types/cursor-transact-sql?view=azure-sqldw-latest&preserve-view=true), [hierarchyid](/sql/t-sql/data-types/hierarchyid-data-type-method-reference?view=azure-sqldw-latest&preserve-view=true), [ntext, text, and image](/sql/t-sql/data-types/ntext-text-and-image-transact-sql?view=azure-sqldw-latest&preserve-view=true), [rowversion](/sql/t-sql/data-types/rowversion-transact-sql?view=azure-sqldw-latest&preserve-view=true), [Spatial Types](/sql/t-sql/spatial-geometry/spatial-types-geometry-transact-sql?view=azure-sqldw-latest&preserve-view=true), [sql\_variant](/sql/t-sql/data-types/sql-variant-transact-sql?view=azure-sqldw-latest&preserve-view=true), [xml](/sql/t-sql/xml/xml-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Table type. See how to [map Parquet column types to SQL types here](develop-openrowset.md#type-mapping-for-parquet). | | **Cross-database queries** | No | Yes, the cross-database queries and the 3-part-name references are supported including [USE](/sql/t-sql/language-elements/use-transact-sql?view=azure-sqldw-latest&preserve-view=true) statement. The queries can reference the serverless SQL databases or the Lake databases in the same workspace. Cross-workspace queries are not supported. | | **Built-in/system functions (analysis)** | Yes, all Transact-SQL [Analytic](/sql/t-sql/functions/analytic-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Conversion, [Date and Time](/sql/t-sql/functions/date-and-time-data-types-and-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Logical, [Mathematical](/sql/t-sql/functions/mathematical-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true) functions, except [CHOOSE](/sql/t-sql/functions/logical-functions-choose-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [PARSE](/sql/t-sql/functions/parse-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL [Analytic](/sql/t-sql/functions/analytic-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Conversion, [Date and Time](/sql/t-sql/functions/date-and-time-data-types-and-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Logical, and [Mathematical](/sql/t-sql/functions/mathematical-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true) functions are supported. |
Data that is analyzed can be stored on various storage types. The following tabl
| | Dedicated | Serverless | | | | |
-| **Internal storage** | Yes | No, data is placed in Azure Data Lake or [Cosmos DB analytical storage](query-cosmos-db-analytical-store.md). |
+| **Internal storage** | Yes | No, data is placed in Azure Data Lake or [Azure Cosmos DB analytical storage](query-cosmos-db-analytical-store.md). |
| **Azure Data Lake v2** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from ADLS. Learn here how to [setup access control](develop-storage-files-storage-access-control.md). | | **Azure Blob Storage** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from Azure Blob Storage. Learn here how to [setup access control](develop-storage-files-storage-access-control.md). | | **Azure SQL/SQL Server (remote)** | No | No, serverless SQL pool cannot reference Azure SQL database. You can reference serverless SQL pools from Azure SQL using [elastic queries](https://devblogs.microsoft.com/azure-sql/read-azure-storage-files-using-synapse-sql-external-tables/) or [linked servers](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance). |
-| **Dataverse** | No, you can [load CosmosDB data into a dedicated pool using Azure Synapse Link in serverless SQL pool (via ADLS)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/loading-cosmosdb-and-dataverse-data-into-dedicated-sql-pool-dw/ba-p/3104168) or Spark. | Yes, you can read Dataverse tables using [Azure Synapse link for Dataverse with Azure Data Lake](/powerapps/maker/data-platform/azure-synapse-link-data-lake). |
-| **Azure Cosmos DB transactional storage** | No | No, you cannot access Cosmos DB containers to update data or read data from the Cosmos DB transactional storage. Use [Spark pools to update the Cosmos DB](../synapse-link/how-to-query-analytical-store-spark.md) transactional storage. |
-| **Azure Cosmos DB analytical storage** | No, you can [load CosmosDB data into a dedicated pool using Azure Synapse Link in serverless SQL pool (via ADLS)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/loading-cosmosdb-and-dataverse-data-into-dedicated-sql-pool-dw/ba-p/3104168), ADF, Spark or some other load tool. | Yes, you can [query Cosmos DB analytical storage](query-cosmos-db-analytical-store.md) using [Azure Synapse Link](../../cosmos-db/synapse-link.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json). |
+| **Dataverse** | No, you can [load Azure Cosmos DB data into a dedicated pool using Azure Synapse Link in serverless SQL pool (via ADLS)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/loading-cosmosdb-and-dataverse-data-into-dedicated-sql-pool-dw/ba-p/3104168) or Spark. | Yes, you can read Dataverse tables using [Azure Synapse link for Dataverse with Azure Data Lake](/powerapps/maker/data-platform/azure-synapse-link-data-lake). |
+| **Azure Cosmos DB transactional storage** | No | No, you cannot access Azure Cosmos DB containers to update data or read data from the Azure Cosmos DB transactional storage. Use [Spark pools to update Azure Cosmos DB](../synapse-link/how-to-query-analytical-store-spark.md) transactional storage. |
+| **Azure Cosmos DB analytical storage** | No, you can [load Azure Cosmos DB data into a dedicated pool using Azure Synapse Link in serverless SQL pool (via ADLS)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/loading-cosmosdb-and-dataverse-data-into-dedicated-sql-pool-dw/ba-p/3104168), ADF, Spark or some other load tool. | Yes, you can [query Azure Cosmos DB analytical storage](query-cosmos-db-analytical-store.md) using [Azure Synapse Link](../../cosmos-db/synapse-link.md?toc=/azure/synapse-analytics/toc.json). |
| **Apache Spark tables (in workspace)** | No | Yes, serverless pool can read PARQUET and CSV tables using [metadata synchronization](develop-storage-files-spark-tables.md). | | **Apache Spark tables (remote)** | No | No, serverless pool can access only the PARQUET and CSV tables that are [created in Apache Spark pools in the same Synapse workspace](develop-storage-files-spark-tables.md). However, you can manually create an external table that references the external Spark table location (see the sketch after this table). | | **Databricks tables (remote)** | No | No, serverless pool can access only the PARQUET and CSV tables that are [created in Apache Spark pools in the same Synapse workspace](develop-storage-files-spark-tables.md). However, you can manually create an external table that references the Databricks table location. |
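For example, a minimal sketch of manually defining a serverless external table over the Parquet files behind a remote Spark or Databricks table might look like the following. The data source name, storage path, and column list are placeholders and must match your actual files:

```sql
-- Sketch: point a native external table at the folder that holds the remote table's Parquet files.
CREATE EXTERNAL DATA SOURCE MyLakeStorage
WITH ( LOCATION = 'https://<storage-account>.dfs.core.windows.net/<container>' );

CREATE EXTERNAL FILE FORMAT ParquetFormat
WITH ( FORMAT_TYPE = PARQUET );

CREATE EXTERNAL TABLE dbo.MyRemoteSparkTable (
    id BIGINT,
    name VARCHAR(200)
)
WITH (
    LOCATION = '/warehouse/my_table/',
    DATA_SOURCE = MyLakeStorage,
    FILE_FORMAT = ParquetFormat
);
```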
synapse-analytics Query Cosmos Db Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-cosmos-db-analytical-store.md
Last updated 05/10/2022 -+ # Query Azure Cosmos DB data with a serverless SQL pool in Azure Synapse Link
In this article, you'll learn how to write a query with a serverless SQL pool th
## Prerequisites - Make sure that you have prepared Analytical store:
- - Enable analytical store on [your Cosmos DB containers](../quickstart-connect-synapse-link-cosmos-db.md#enable-azure-cosmos-db-analytical-store).
+ - Enable analytical store on [your Azure Cosmos DB containers](../quickstart-connect-synapse-link-cosmos-db.md#enable-azure-cosmos-db-analytical-store).
- Get the connection string with a read-only key that you will use to query analytical store.
- - Get the read-only [key that will be used to access Cosmos DB container](../../cosmos-db/database-security.md#primary-keys)
+ - Get the read-only [key that will be used to access the Azure Cosmos DB container](../../cosmos-db/database-security.md#primary-keys)
- Make sure that you have applied all [best practices](best-practices-serverless-sql-pool.md), such as:
- - Ensure that your Cosmos DB analytical storage is in the same region as serverless SQL pool.
+ - Ensure that your Azure Cosmos DB analytical storage is in the same region as serverless SQL pool.
- Ensure that the client application (Power BI, Analysis Services) is in the same region as serverless SQL pool. - If you are returning a large amount of data (more than 80 GB), consider using a caching layer such as Analysis Services and load the partitions smaller than 80 GB into the Analysis Services model. - If you are filtering data using string columns, make sure that you are using the `OPENROWSET` function with an explicit `WITH` clause that has the smallest possible types (for example, don't use VARCHAR(1000) if you know that the property has up to 5 characters), as shown in the sketch after this list.
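For example, a minimal sketch of an analytical store query that applies this guidance might look like the following. The account, database, key, container, and column definitions are placeholders; size the `WITH` clause types to your actual data:

```sql
-- Sketch: query the analytical store with an explicit WITH clause that uses small types.
SELECT TOP 10 *
FROM OPENROWSET(
       'CosmosDB',
       'Account=<your-account>;Database=<your-database>;Key=<read-only-key>',
       MyContainer
     )
WITH ( geo_id VARCHAR(5), cases INT ) AS rows;
```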
In this article, you'll learn how to write a query with a serverless SQL pool th
Serverless SQL pool enables you to query Azure Cosmos DB analytical storage using `OPENROWSET` function. - `OPENROWSET` with inline key. This syntax can be used to query Azure Cosmos DB collections without need to prepare credentials.-- `OPENROWSET` that referenced credential that contains the Cosmos DB account key. This syntax can be used to create views on Azure Cosmos DB collections.
+- `OPENROWSET` that references a credential that contains the Azure Cosmos DB account key. This syntax can be used to create views on Azure Cosmos DB collections.
### [OPENROWSET with key](#tab/openrowset-key)
The number of cases is information stored as an `int32` value, but there's one v
> [!IMPORTANT] > The `OPENROWSET` function without a `WITH` clause exposes both values with expected types and the values with incorrectly entered types. This function is designed for data exploration and not for reporting. Don't parse JSON values returned from this function to build reports. Use an explicit [WITH clause](#query-items-with-full-fidelity-schema) to create your reports. You should clean up the values that have incorrect types in the Azure Cosmos DB container to apply corrections in the full fidelity analytical store.
-If you need to query Azure Cosmos DB accounts of the Mongo DB API kind, you can learn more about the full fidelity schema representation in the analytical store and the extended property names to be used in [What is Azure Cosmos DB Analytical Store?](../../cosmos-db/analytical-store-introduction.md#analytical-schema).
+To query Azure Cosmos DB for MongoDB accounts, you can learn more about the full fidelity schema representation in the analytical store and the extended property names to be used in [What is Azure Cosmos DB Analytical Store?](../../cosmos-db/analytical-store-introduction.md#analytical-schema).
### Query items with full fidelity schema
In this example, the number of cases is stored either as `int32`, `int64`, or `f
## Troubleshooting
-Review the [self-help page](resources-self-help-sql-on-demand.md#azure-cosmos-db) to find the known issues or troubleshooting steps that can help you to resolve potential problems with Cosmos DB queries.
+Review the [self-help page](resources-self-help-sql-on-demand.md#azure-cosmos-db) to find the known issues or troubleshooting steps that can help you to resolve potential problems with Azure Cosmos DB queries.
## Next steps
For more information, see the following articles:
- [Use Power BI and serverless SQL pool with Azure Synapse Link](../../cosmos-db/synapse-link-power-bi.md) - [Create and use views in a serverless SQL pool](create-use-views.md) - [Tutorial on building serverless SQL pool views over Azure Cosmos DB and connecting them to Power BI models via DirectQuery](./tutorial-data-analyst.md)-- Visit the [Azure Synapse link for Cosmos DB self-help page](resources-self-help-sql-on-demand.md#azure-cosmos-db) if you are getting some errors or experiencing performance issues.
+- Visit the [Azure Synapse link for Azure Cosmos DB self-help page](resources-self-help-sql-on-demand.md#azure-cosmos-db) if you are getting some errors or experiencing performance issues.
- Check out the Learn module on how to [Query Azure Cosmos DB with SQL Serverless for Azure Synapse Analytics](/training/modules/query-azure-cosmos-db-with-sql-serverless-for-azure-synapse-analytics/).
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Last updated 09/01/2022
-+ # Troubleshoot serverless SQL pool in Azure Synapse Analytics
The error "Column `column name` of the type `type name` is not compatible with t
### <a id="resolving-azure-cosmos-db-path-has-failed-with-error"></a>Resolve: Azure Cosmos DB path has failed with error
-If you get the error "Resolving CosmosDB path has failed with error 'This request is not authorized to perform this operation'," check to see if you used private endpoints in Azure Cosmos DB. To allow serverless SQL pool to access an analytical store with private endpoints, you must [configure private endpoints for the Azure Cosmos DB analytical store](../../cosmos-db/analytical-store-private-endpoints.md#using-synapse-serverless-sql-pools).
+If you get the error "Resolving Azure Cosmos DB path has failed with error 'This request is not authorized to perform this operation'," check to see if you used private endpoints in Azure Cosmos DB. To allow serverless SQL pool to access an analytical store with private endpoints, you must [configure private endpoints for the Azure Cosmos DB analytical store](../../cosmos-db/analytical-store-private-endpoints.md#using-synapse-serverless-sql-pools).
### Azure Cosmos DB performance issues
Check the following issues if you experience slow query execution:
- Make sure that the client applications are collocated with the serverless SQL pool endpoint. Executing a query across the region can cause additional latency and slow streaming of the result set. - Make sure that you don't have networking issues that can cause slow streaming of the result set. - Make sure that the client application has enough resources (for example, not using 100% CPU).-
+- Make sure that the storage account or Azure Cosmos DB analytical storage is placed in the same region as your serverless SQL endpoint.
See best practices for [collocating the resources](best-practices-serverless-sql-pool.md#client-applications-and-network-connections).
synapse-analytics Tutorial Logical Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/tutorial-logical-data-warehouse.md
+ Last updated 08/20/2021
CREATE DATABASE Ldw
COLLATE Latin1_General_100_BIN2_UTF8; ```
-This collation will provide the optimal performance while reading Parquet and Cosmos DB. If you don't want to specify the database collation,
+This collation provides optimal performance while reading Parquet files and Azure Cosmos DB data. If you don't want to specify the database collation,
make sure that you specify this collation in the column definition. ## Configure data sources and formats
You can explicitly define a custom credential that will be used while accessing
- [Managed Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity) of the Synapse workspace - [Shared Access Signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature) of the Azure storage - Custom [Service Principal Name or Azure Application identity](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types).-- Read-only Cosmos Db account key that enables you to read Cosmos DB analytical storage.
+- Read-only Azure Cosmos DB account key that enables you to read Azure Cosmos DB analytical storage.
As a prerequisite, you will need to create a master key in the database: ```sql
CREATE EXTERNAL DATA SOURCE ecdc_cases WITH (
); ```
-In order to access Cosmos DB analytical storage, you need to define a credential containing a read-only Cosmos DB account key.
+In order to access Azure Cosmos DB analytical storage, you need to define a credential containing a read-only Azure Cosmos DB account key.
```sql CREATE DATABASE SCOPED CREDENTIAL MyCosmosDbAccountCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 's5zarR2pT0JWH9k8roipnWxUYBegOuFGjJpSjGlR36y86cW0GQ6RaaG8kGjsRAQoWMw1QKTkkX8HQtFpJjC8Hg=='; ```
-Any user with the Synapse Administrator role can use these credentials to access Azure Data Lake storage or Cosmos DB analytical storage.
+Any user with the Synapse Administrator role can use these credentials to access Azure Data Lake storage or Azure Cosmos DB analytical storage.
If you have low privileged users that do not have Synapse Administrator role, you would need to give them an explicit permission to reference these database scoped credentials: ```sql
Similar to the tables shown in the previous example, you should place the views
create schema ecdc_cosmosdb; ```
-Now you are able to create a view in the schema that is referencing a Cosmos DB container:
+Now you are able to create a view in the schema that is referencing an Azure Cosmos DB container:
```sql CREATE OR ALTER VIEW ecdc_cosmosdb.Ecdc
synapse-analytics Concept Synapse Link Cosmos Db Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/concept-synapse-link-cosmos-db-support.md
Last updated 06/02/2021 -+ # Azure Synapse Link for Azure Cosmos DB supported features
You can connect to an Azure Cosmos DB container without enabling Synapse Link. I
| -- | -- |-- |-- | | **Explore data** |Explore data from a container with familiar T-SQL syntax and automatic schema inference|X| ✓ | | **Create views and build BI reports** |Create a SQL view to have direct access to a container for BI through serverless SQL pool |X| ✓ |
-| **Join disparate data sources along with Cosmos DB data** | Store results of query reading data from Cosmos DB containers along with data in Azure Blob Storage or Azure Data Lake Storage using CETAS |X| Γ£ô |
+| **Join disparate data sources along with Azure Cosmos DB data** | Store results of a query reading data from Azure Cosmos DB containers along with data in Azure Blob Storage or Azure Data Lake Storage using CETAS |X| ✓ |
## Next steps * See how to [connect to Synapse Link for Azure Cosmos DB](../quickstart-connect-synapse-link-cosmos-db.md)
-* [Learn how to query the Cosmos DB Analytical Store with Spark 3](how-to-query-analytical-store-spark-3.md)
-* [Learn how to query the Cosmos DB Analytical Store with Spark 2](how-to-query-analytical-store-spark.md)
+* [Learn how to query the Azure Cosmos DB Analytical Store with Spark 3](how-to-query-analytical-store-spark-3.md)
+* [Learn how to query the Azure Cosmos DB Analytical Store with Spark 2](how-to-query-analytical-store-spark.md)
synapse-analytics Connect Synapse Link Sql Server 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-server-2022.md
-+ Last updated 09/27/2022
synapse-analytics How To Copy To Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-copy-to-sql-pool.md
Last updated 08/10/2020 -+ # Copy data from Azure Cosmos DB into a dedicated SQL pool using Apache Spark
Azure Synapse Link for Azure Cosmos DB enables users to run near real-time analy
* [Provision a Synapse workspace](../quickstart-create-workspace.md) with: * [Serverless Apache Spark pool](../quickstart-create-apache-spark-pool-studio.md) * [dedicated SQL pool](../quickstart-create-sql-pool-studio.md)
-* [Provision a Cosmos DB account with a HTAP container with data](../../cosmos-db/configure-synapse-link.md)
+* [Provision an Azure Cosmos DB account with an HTAP container with data](../../cosmos-db/configure-synapse-link.md)
* [Connect the Azure Cosmos DB HTAP container to the workspace](./how-to-connect-synapse-link-cosmos-db.md) * [Have the right setup to import data into a dedicated SQL pool from Spark](../spark/synapse-spark-sql-pool-import-export.md) ## Steps In this tutorial, you'll connect to the analytical store so there's no impact on the transactional store (it won't consume any Request Units). We'll go through the following steps:
-1. Read the Cosmos DB HTAP container into a Spark dataframe
+1. Read the Azure Cosmos DB HTAP container into a Spark dataframe
2. Aggregate the results in a new dataframe 3. Ingest the data into a dedicated SQL pool
We'll aggregate the sales (*quantity*, *revenue* (price x quantity) by *productC
Create a Spark notebook with Spark (Scala) as the main language. We use the notebook's default setting for the session. ## Read the data in Spark
-Read the Cosmos DB HTAP container with Spark into a dataframe in the first cell.
+Read the Azure Cosmos DB HTAP container with Spark into a dataframe in the first cell.
```java val df_olap = spark.read.format("cosmos.olap").
synapse-analytics How To Query Analytical Store Spark 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-query-analytical-store-spark-3.md
Last updated 02/15/2022 -+ # Interact with Azure Cosmos DB using Apache Spark 3 in Azure Synapse Link
On the other hand, in the case of **creating a Spark table**, the metadata of th
Thus, you can choose between loading to Spark DataFrame and creating a Spark table based on whether you want your Spark analysis to be evaluated against a fixed snapshot of the analytical store or against the latest snapshot of the analytical store respectively. > [!NOTE]
-> To query the Azure Cosmos DB API of Mongo DB accounts, learn more about the [full fidelity schema representation](../../cosmos-db/analytical-store-introduction.md#analytical-schema) in the analytical store and the extended property names to be used.
+> To query Azure Cosmos DB for MongoDB accounts, learn more about the [full fidelity schema representation](../../cosmos-db/analytical-store-introduction.md#analytical-schema) in the analytical store and the extended property names to be used.
> [!NOTE] > Please note that all `options` in the commands below are case sensitive.
df.write.format("cosmos.oltp").
## Load streaming DataFrame from container In this gesture, you'll use Spark Streaming capability to load data from a container into a dataframe. The data will be stored in the primary data lake account (and file system) you connected to the workspace. > [!NOTE]
-> If you are looking to reference external libraries in Synapse Apache Spark, learn more [here](../spark/apache-spark-azure-portal-add-libraries.md). For instance, if you are looking to ingest a Spark DataFrame to a container of Cosmos DB API for Mongo DB, you can leverage the Mongo DB connector for Spark [here](https://docs.mongodb.com/spark-connector/master/).
+> If you are looking to reference external libraries in Synapse Apache Spark, learn more [here](../spark/apache-spark-azure-portal-add-libraries.md). For instance, if you are looking to ingest a Spark DataFrame to a container of Azure Cosmos DB for MongoDB, you can leverage the MongoDB connector for Spark [here](https://docs.mongodb.com/spark-connector/master/).
## Load streaming DataFrame from Azure Cosmos DB container In this example, you'll use Spark's structured streaming capability to load data from an Azure Cosmos DB container into a Spark streaming DataFrame using the change feed functionality in Azure Cosmos DB. The checkpoint data used by Spark will be stored in the primary data lake account (and file system) that you connected to the workspace.
synapse-analytics How To Query Analytical Store Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-query-analytical-store-spark.md
Last updated 11/02/2021 -+ # Interact with Azure Cosmos DB using Apache Spark 2 in Azure Synapse Link > [!NOTE]
-> For Synapse Link for Cosmos DB using Spark 3, refer to this article [Azure Synapse Link for Azure Cosmos DB on Spark 3](how-to-query-analytical-store-spark-3.md)
+> For Azure Synapse Link for Azure Cosmos DB using Spark 3, refer to this article [Azure Synapse Link for Azure Cosmos DB on Spark 3](how-to-query-analytical-store-spark-3.md)
In this article, you'll learn how to interact with Azure Cosmos DB using Synapse Apache Spark 2. With its full support for Scala, Python, SparkSQL, and C#, Synapse Apache Spark is central to analytics, data engineering, data science, and data exploration scenarios in [Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/synapse-link.md).
Thus, you can choose between loading to Spark DataFrame and creating a Spark tab
If your analytical queries have frequently used filters, you have the option to partition based on these fields for better query performance. You can periodically execute partitioning job from an Azure Synapse Spark notebook, to trigger partitioning on analytical store. This partitioned store points to the ADLS Gen2 primary storage account that is linked to your Azure Synapse workspace. To learn more, see the [introduction to custom partitioning](../../cosmos-db/custom-partitioning-analytical-store.md) and [how to configure custom partitioning](../../cosmos-db/configure-custom-partitioning.md) articles. > [!NOTE]
-> To query the Azure Cosmos DB API of Mongo DB accounts, learn more about the [full fidelity schema representation](../../cosmos-db/analytical-store-introduction.md#analytical-schema) in the analytical store and the extended property names to be used.
+> To query Azure Cosmos DB for MongoDB accounts, learn more about the [full fidelity schema representation](../../cosmos-db/analytical-store-introduction.md#analytical-schema) in the analytical store and the extended property names to be used.
> [!NOTE] > Please note that all `options` in the commands below are case sensitive. For example, you must use `Gateway` while `gateway` will return an error.
df.write.format("cosmos.oltp").
``` ## Load streaming DataFrame from container
-In this gesture, you'll use Spark Streaming capability to load data from a container into a dataframe. The data will be stored in the primary data lake account (and file system) you connected to the workspace.
+In this gesture, you'll use Spark Streaming capability to load data from a container into a dataframe. The data will be stored in the primary data lake account (and file system) you connected to the workspace.
+ > [!NOTE]
-> If you are looking to reference external libraries in Synapse Apache Spark, learn more [here](../spark/apache-spark-azure-portal-add-libraries.md). For instance, if you are looking to ingest a Spark DataFrame to a container of Cosmos DB API for Mongo DB, you can leverage the Mongo DB connector for Spark [here](https://docs.mongodb.com/spark-connector/master/).
+> If you are looking to reference external libraries in Synapse Apache Spark, learn more [here](../spark/apache-spark-azure-portal-add-libraries.md). For instance, if you are looking to ingest a Spark DataFrame to a container of Azure Cosmos DB for MongoDB, you can use the [MongoDB connector for Spark](https://docs.mongodb.com/spark-connector/master/).
## Load streaming DataFrame from Azure Cosmos DB container In this example, you'll use Spark's structured streaming capability to load data from an Azure Cosmos DB container into a Spark streaming DataFrame using the change feed functionality in Azure Cosmos DB. The checkpoint data used by Spark will be stored in the primary data lake account (and file system) that you connected to the workspace.
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
The following table lists the features of Azure Synapse Analytics that have tran
| April 2022 | **Database Templates** | New industry-specific database templates were introduced in the [Synapse Database Templates General Availability blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-database-templates-general-availability-and-new-synapse/ba-p/3289790). Learn more about [Database templates](database-designer/concepts-database-templates.md) and [the improved exploration experience](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2022/ba-p/3295633#TOCREF_5).| | April 2022 | **Synapse Monitoring Operator RBAC role** | The Synapse Monitoring Operator RBAC (role-based access control) role allows a user persona to monitor the execution of Synapse Pipelines and Spark applications without having the ability to run or cancel the execution of these applications. For more information, review the [Synapse RBAC Roles](security/synapse-workspace-synapse-rbac-roles.md).| | March 2022 | **Flowlets** | Flowlets help you design portions of new data flow logic, or to extract portions of an existing data flow, and save them as a separate artifact inside your Synapse workspace. Then, you can reuse these Flowlets inside other data flows. To learn more, review the [Flowlets GA announcement blog post](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450) and read [Flowlets in mapping data flow](../data-factory/concepts-data-flow-flowlet.md). |
-| March 2022 | **Change Feed connectors** | Changed data capture (CDC) feed data flow source transformations for Cosmos DB, Azure Blob Storage, ADLS Gen1, ADLS Gen2, and Common Data Model (CDM) are now generally available. By simply checking a box, you can tell ADF to manage a checkpoint automatically for you and only read the latest rows that were updated or inserted since the last pipeline run. To learn more, review the [Change Feed connectors GA preview blog post](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450) and read [Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics](../data-factory/connector-azure-data-lake-storage.md).|
+| March 2022 | **Change Feed connectors** | Changed data capture (CDC) feed data flow source transformations for Azure Cosmos DB, Azure Blob Storage, ADLS Gen1, ADLS Gen2, and Common Data Model (CDM) are now generally available. By simply checking a box, you can tell ADF to manage a checkpoint automatically for you and only read the latest rows that were updated or inserted since the last pipeline run. To learn more, review the [Change Feed connectors GA preview blog post](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450) and read [Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics](../data-factory/connector-azure-data-lake-storage.md).|
| March 2022 | **Column level encryption for dedicated SQL pools** | [Column level encryption](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true) is now generally available for use on new and existing Azure SQL logical servers with Azure Synapse dedicated SQL pools, as well as the dedicated SQL pools in Azure Synapse workspaces. SQL Server Data Tools (SSDT) support for column level encryption for the dedicated SQL pools is available starting with the 17.2 Preview 2 build of Visual Studio 2022. | | March 2022 | **Synapse Spark Common Data Model (CDM) connector** | The CDM format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. To learn more, see [how the CDM connector supports reading, writing data, examples, & known issues](./spark/data-sources/apache-spark-cdm-connector.md). | | November 2021 | **PREDICT** | The T-SQL [PREDICT](/sql/t-sql/queries/predict-transact-sql) syntax is now generally available for dedicated SQL pools. Get started with the [Machine learning model scoring wizard for dedicated SQL pools](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md).|
Azure Data Explorer (ADX) is a fast and highly scalable data exploration service
## Azure Synapse Link
-Azure Synapse Link is an automated system for replicating data from [SQL Server or Azure SQL Database](synapse-link/sql-synapse-link-overview.md), [Cosmos DB](/azure/cosmos-db/synapse-link?context=/azure/synapse-analytics/context/context), or [Dataverse](/power-apps/maker/data-platform/export-to-data-lake?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext) into Azure Synapse Analytics. This section summarizes recent news about the Azure Synapse Link feature.
+Azure Synapse Link is an automated system for replicating data from [SQL Server or Azure SQL Database](synapse-link/sql-synapse-link-overview.md), [Azure Cosmos DB](/azure/cosmos-db/synapse-link?context=/azure/synapse-analytics/context/context), or [Dataverse](/power-apps/maker/data-platform/export-to-data-lake?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext) into Azure Synapse Analytics. This section summarizes recent news about the Azure Synapse Link feature.
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
This section summarizes recent improvements and features in SQL pools in Azure S
## Next steps - [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate)-- [Become a Azure Synapse Influencer](https://aka.ms/synapseinfluencers)
+- [Become an Azure Synapse Influencer](https://aka.ms/synapseinfluencers)
- [Azure Synapse Analytics terminology](overview-terminology.md) - [Azure Synapse Analytics migration guides](migration-guides/index.yml) - [Azure Synapse Analytics frequently asked questions](overview-faq.yml)
update-center Manage Vms Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-vms-programmatically.md
POST on `subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers
To specify the POST request, you can use the Azure CLI [az rest](/cli/azure/reference-index#az_rest) command. ```azurecli
-az rest --method post --url https://management.azure.com/subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.Network/Microsoft.Compute/virtualMachines/virtualMachineName/assessPatches?api-version=2020-12-01
+az rest --method post --url https://management.azure.com/subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.Compute/virtualMachines/virtualMachineName/assessPatches?api-version=2020-12-01
``` # [Azure PowerShell](#tab/powershell)
PUT on '/subscriptions/0f55bb56-6089-4c7e-9306-41fb78fc5844/resourceGroups/atsca
"location": "eastus2euap", "properties": { "namespace": null,
- "extensionProperties": {},
+ "extensionProperties": {
+ "InGuestPatchMode" : "User"
+ },
"maintenanceScope": "InGuestPatch", "maintenanceWindow": { "startDateTime": "2021-08-21 01:18",
The format of the request body is as follows:
"location": "eastus2euap", "properties": { "namespace": null,
- "extensionProperties": {},
+ "extensionProperties": {
+ "InGuestPatchMode": "User"
+ },
"maintenanceScope": "InGuestPatch", "maintenanceWindow": { "startDateTime": "2021-08-21 01:18",
Invoke-AzRestMethod -Path "/subscriptions/<subscriptionId>/resourceGroups/<resou
"location": "eastus2euap", "properties": { "namespace": null,
- "extensionProperties": {},
+ "extensionProperties": {
+ "InGuestPatchMode" : "User"
+ },
"maintenanceScope": "InGuestPatch", "maintenanceWindow": { "startDateTime": "2021-12-21 01:18",
virtual-desktop Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/disaster-recovery.md
Let's go over the five options for user profile disaster recovery plans in more
### Native Azure replication
-One way you can set up disaster recovery is to set up native Azure replication. For example, you can set up native replication with Azure Files Standard storage account replication, Azure NetApp Files replication, or Azure Files Sync for file servers.
+One way you can set up disaster recovery is to set up native Azure replication. For example, you can set up native replication with Azure Files Standard storage account replication, Azure NetApp Files replication, or Azure File Sync for file servers.
>[!NOTE] >NetApp replication is automatic after you first set it up. With Azure Site Recovery plans, you can add pre-scripts and post-scripts to fail over non-VM resources and replicate Azure Storage resources.
virtual-desktop Security Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/security-guide.md
description: Best practices for keeping your Azure Virtual Desktop environment secure. Previously updated : 02/22/2022 Last updated : 10/12/2022 + # Security best practices
Here are the security needs you're responsible for in your Azure Virtual Desktop
|Identity|Yes| |User devices (mobile and PC)|Yes| |App security|Yes|
-|Session host OS|Yes|
+|Session host operating system (OS)|Yes|
|Deployment configuration|Yes| |Network controls|Yes| |Virtualization control plane|No|
By restricting operating system capabilities, you can strengthen the security of
Trusted launch VMs are Gen2 Azure VMs with enhanced security features aimed at protecting against "bottom of the stack" threats through attack vectors such as rootkits, boot kits, and kernel-level malware. The following are the enhanced security features of trusted launch, all of which are supported in Azure Virtual Desktop. To learn more about trusted launch, visit [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md).
+## Azure Confidential Computing virtual machines (preview)
+
+> [!IMPORTANT]
+> Azure Virtual Desktop support for Azure Confidential virtual machines is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Azure Virtual Desktop support for Azure Confidential Computing virtual machines (preview) ensures a user's virtual desktop is encrypted in memory, protected in use, and backed by hardware root of trust. Deploying confidential VMs with Azure Virtual Desktop gives users access to Microsoft 365 and other applications on session hosts that use hardware-based isolation, which hardens isolation from other virtual machines, the hypervisor, and the host OS. These virtual desktops are powered by the latest Third-generation (Gen 3) Advanced Micro Devices (AMD) EPYC™ processor with Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP) technology. Memory encryption keys are generated and safeguarded by a dedicated secure processor inside the AMD CPU that can't be read from software. For more information, see the [Azure Confidential Computing overview](../confidential-computing/overview.md).
+ ### Secure Boot Secure Boot is a mode that platform firmware supports that protects your firmware from malware-based rootkits and boot kits. This mode only allows signed OSes and drivers to start up the machine.
virtual-machine-scale-sets Spot Priority Mix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/spot-priority-mix.md
+
+ Title: Get high availability and cost savings with Spot Priority Mix for Virtual Machine Scale Sets
+description: Learn how to run a mix of Spot VMs and uninterruptible regular VMs for Virtual Machine Scale Sets to achieve high availability and cost savings.
++++++ Last updated : 10/12/2022+++
+# Spot Priority Mix for high availability and cost savings (preview)
+
+**Applies to:** :heavy_check_mark: Flexible scale sets
+
+> [!IMPORTANT]
+> **Spot Priority Mix** is currently in public preview.
+> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure allows you to have the flexibility of running a mix of uninterruptible regular VMs and interruptible Spot VMs for Virtual Machine Scale Set deployments. You're able to deploy this Spot Priority Mix using Flexible orchestration to easily balance between high-capacity availability and lower infrastructure costs according to your workload requirements. This feature allows you to easily manage your scale set capability to achieve the following:
+
+- Reduce compute infrastructure costs by applying the deep discounts of Spot VMs
+- Maintain capacity availability through uninterruptible regular VMs in the scale set deployment
+- Provide reassurance that all your VMs won't be taken away simultaneously due to evictions before the infrastructure has time to react and recover the evicted capacity
+- Simplify the scale-out and scale-in of compute workloads that require both Spot and regular VMs by letting Azure orchestrate the creation and deletion of VMs
+
+You can configure a custom percentage distribution across Spot and regular VMs. The platform automatically orchestrates each scale-out and scale-in operation to achieve the desired distribution by selecting an appropriate number of VMs to create or delete. You can also optionally configure the number of base regular uninterruptible VMs you would like to maintain in the Virtual Machine Scale Set during any scale operation.
+
+## Template
+
+You can set your Spot Priority Mix by using a template to add the following properties to a scale set with Flexible orchestration using a Spot priority VM profile:
+
+```json
+"priorityMixPolicy": {
+ "baseRegularPriorityCount": 0,
+ "regularPriorityPercentageAboveBase": 50
+},
+```
+
+**Parameters:**
+- `baseRegularPriorityCount` – Specifies a base number of VMs that will be *Regular* priority; if the Scale Set capacity is at or below this number, all VMs will be *Regular* priority.
+- `regularPriorityPercentageAboveBase` – Specifies the percentage split of *Regular* and *Spot* priority VMs that will be used when the Scale Set capacity is above the *baseRegularPriorityCount*.
+
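To make the behavior of these two settings concrete, the following Azure PowerShell sketch models how a target capacity might be split. It's illustrative only: the rounding shown is an assumption, and the actual split is orchestrated by the platform.

```azurepowershell
# Illustrative only: how the two settings above might combine when the scale set scales out.
function Get-PriorityMixSplit {
    param(
        [int]$TargetCapacity,
        [int]$BaseRegularPriorityCount,
        [int]$RegularPriorityPercentageAboveBase
    )
    # At or below the base count, every VM is regular priority.
    if ($TargetCapacity -le $BaseRegularPriorityCount) {
        return [pscustomobject]@{ Regular = $TargetCapacity; Spot = 0 }
    }
    # Above the base count, split the remainder by the configured percentage.
    $aboveBase        = $TargetCapacity - $BaseRegularPriorityCount
    $regularAboveBase = [math]::Round($aboveBase * $RegularPriorityPercentageAboveBase / 100)
    [pscustomobject]@{
        Regular = $BaseRegularPriorityCount + $regularAboveBase
        Spot    = $aboveBase - $regularAboveBase
    }
}

# With baseRegularPriorityCount = 0 and regularPriorityPercentageAboveBase = 50,
# scaling to 10 instances yields 5 regular and 5 Spot VMs under these assumptions.
Get-PriorityMixSplit -TargetCapacity 10 -BaseRegularPriorityCount 0 -RegularPriorityPercentageAboveBase 50
```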
+You can refer to this [ARM template example](https://paste.microsoft.com/f84d2f83-f6bf-4d24-aa03-175b0c43da32) for more context.
+
+## Azure portal
+
+You can set your Spot Priority Mix on the Scaling tab when you create a Virtual Machine Scale Set in the Azure portal. The following steps show how to access this feature during that process.
+
+1. Log in to the [Azure portal](https://portal.azure.com) through the [public preview access link](https://aka.ms/SpotMix).
+1. In the search bar, search for and select **Virtual machine scale sets**.
+1. Select **Create** on the **Virtual machine scale sets** page.
+1. In the **Basics** tab, fill out the required fields and select **Flexible** as the **Orchestration** mode.
+1. Fill out the **Disks** and **Networking** tabs.
+1. In the **Scaling** tab, select the checkbox next to the *Scale with VMs and Spot VMs* option under the **Scale with VMs and discounted Spot VMs** section.
+1. Fill out the **Base VM (uninterruptible) count** and **Instance distribution** fields to configure your priorities.
+
+ :::image type="content" source="./media/spot-priority-mix/scale-with-vms-and-discounted-spot-vms.png" alt-text="Screenshot of the Scale with VMs and discounted Spot VMs section in the Scaling tab within Azure portal.":::
+
+1. Continue through the Virtual Machine Scale Set creation process.
+
+## Azure CLI
+
+You can set your Spot Priority Mix using Azure CLI by setting the `priority` flag to `Spot` and including the `regular-priority-count` and `regular-priority-percentage` flags.
+
+```azurecli
+az vmss create -n myScaleSet \
+ -g myResourceGroup \
+ --orchestration-mode flexible \
+ --regular-priority-count 2 \
+ --regular-priority-percentage 50 \
+ --instance-count 4 \
+ --image Centos \
+ --priority Spot \
+ --eviction-policy Deallocate \
+  --single-placement-group False
+```
+
+## Azure PowerShell
+
+You can set your Spot Priority Mix using Azure PowerShell by setting the `Priority` flag to `Spot` and including the `BaseRegularPriorityCount` and `RegularPriorityPercentage` flags.
+
+```azurepowershell
+$vmssConfig = New-AzVmssConfig `
+ -Location "East US" `
+ -SkuCapacity 4 `
+ -SkuName Standard_D2_v5 `
+ -OrchestrationMode 'Flexible' `
+ -EvictionPolicy 'Delete' `
+ -PlatformFaultDomainCount 1 `
+ -Priority 'Spot' `
+ -BaseRegularPriorityCount 2 `
+ -RegularPriorityPercentage 50;
+
+New-AzVmss `
+ -ResourceGroupName myResourceGroup `
+ -Name myScaleSet `
+ -VirtualMachineScaleSet $vmssConfig;
+
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Spot virtual machines](../virtual-machines/spot-vms.md)
virtual-machine-scale-sets Virtual Machine Scale Sets Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md
To create a scale set using an Azure template, make sure the API version of the
```json "publicIpAddressConfiguration": { "name": "pub1",
- "sku" {
+ "sku": {
"name": "Standard" }, "properties": {
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
The following table compares the Flexible orchestration mode, Uniform orchestrat
||||| | Virtual machine type | Standard Azure IaaS VM (Microsoft.compute/virtualmachines) | Scale Set specific VMs (Microsoft.compute/virtualmachinescalesets/virtualmachines) | Standard Azure IaaS VM (Microsoft.compute/virtualmachines) | | Maximum Instance Count (with FD guarantees) | 1000 | 100 | 200 |
-| SKUs supported | D series, E series, F series, A series, B series, Intel, AMD; Specialty SKUs (G, H, L, M, N) are not supported | All SKUs | All SKUs |
+| SKUs supported | All SKUs | All SKUs | All SKUs |
| Full control over VM, NICs, Disks | Yes | Limited control with virtual machine scale sets VM API | Yes | | RBAC Permissions Required | Compute VMSS Write, Compute VM Write, Network | Compute VMSS Write | N/A | | Cross tenant shared image gallery | No | Yes | Yes |
The following table compares the Flexible orchestration mode, Uniform orchestrat
The following virtual machine scale set parameters are not currently supported with virtual machine scale sets in Flexible orchestration mode: - Single placement group - you must choose `singlePlacementGroup=False`-- Deployment using Specialty SKUs: G, H, L, M, N series VM families - Ultra disk configuration: `diskIOPSReadWrite`, `diskMBpsReadWrite` - Virtual machine scale set Overprovisioning - Image-based Automatic OS Upgrades
virtual-machines Disk Bursting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-bursting.md
Title: Managed disk bursting
description: Learn about disk bursting for Azure disks and Azure virtual machines. Previously updated : 04/19/2022 Last updated : 09/10/2022 -+ # Managed disk bursting
The following scenarios can benefit greatly from bursting:
## Disk-level bursting
-Currently, there are two managed disk types that can burst, [premium SSDs](disks-types.md#premium-ssds), and [standard SSDs](disks-types.md#standard-ssds). Other disk types cannot currently burst. There are two models of bursting for disks:
+Currently, there are two managed disk types that can burst, [Premium SSD managed disks](disks-types.md#premium-ssds), and [standard SSDs](disks-types.md#standard-ssds). Other disk types cannot currently burst. There are two models of bursting for disks:
-- An on-demand bursting model, where the disk bursts whenever its needs exceed its current capacity. This model incurs additional charges anytime the disk bursts. On-demand bursting is only available for premium SSDs larger than 512 GiB.-- A credit-based model, where the disk will burst only if it has burst credits accumulated in its credit bucket. This model does not incur additional charges when the disk bursts. Credit-based bursting is only available for premium SSDs 512 GiB and smaller, and standard SSDs 1024 GiB and smaller.
+- An on-demand bursting model, where the disk bursts whenever its needs exceed its current capacity. This model incurs additional charges anytime the disk bursts. On-demand bursting is only available for Premium SSDs larger than 512 GiB.
+- A credit-based model, where the disk will burst only if it has burst credits accumulated in its credit bucket. This model does not incur additional charges when the disk bursts. Credit-based bursting is only available for Premium SSD managed disks 512 GiB and smaller, and standard SSDs 1024 GiB and smaller.
-Azure [premium SSDs](disks-types.md#premium-ssds) can use either bursting model, but [standard SSDs](disks-types.md#standard-ssds) currently only offer credit-based bursting.
+Azure [Premium SSD managed disks](disks-types.md#premium-ssds) can use either bursting model, but [standard SSDs](disks-types.md#standard-ssds) currently only offer credit-based bursting.
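As a quick illustration of the on-demand model, here's a minimal Azure PowerShell sketch of enabling on-demand bursting on an existing disk. It assumes a premium SSD larger than 512 GiB named `myDataDisk` in `myResourceGroup` and a recent Az.Compute module that exposes the `-BurstingEnabled` parameter; see the linked enablement article for the authoritative steps and any prerequisites.

```azurepowershell
# A minimal sketch, assuming an existing premium SSD larger than 512 GiB.
# The -BurstingEnabled parameter is assumed to be available in your Az.Compute version.
$diskUpdate = New-AzDiskUpdateConfig -BurstingEnabled $true
Update-AzDisk -ResourceGroupName 'myResourceGroup' -DiskName 'myDataDisk' -DiskUpdate $diskUpdate
```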
Additionally, the [performance tier of managed disks can be changed](disks-change-performance.md), which could be ideal if your workload would otherwise be running in burst.
Additionally, the [performance tier of managed disks can be changed](disks-chang
||||| | **Scenarios**|Ideal for short-term scaling (30 minutes or less).|Ideal for short-term scaling(Not time restricted).|Ideal if your workload would otherwise continually be running in burst.| |**Cost** |Free |Cost is variable, see the [Billing](#billing) section for details. |The cost of each performance tier is fixed, see [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) for details. |
-|**Availability** |Only available for premium SSDs 512 GiB and smaller, and standard SSDs 1024 GiB and smaller. |Only available for premium SSDs larger than 512 GiB. |Available to all premium SSD sizes. |
+|**Availability** |Only available for premium SSD managed disks 512 GiB and smaller, and standard SSDs 1024 GiB and smaller. |Only available for premium SSD managed disks larger than 512 GiB. |Available to all premium SSD sizes. |
|**Enablement** |Enabled by default on eligible disks. |Must be enabled by user. |User must manually change their tier. | [!INCLUDE [managed-disks-bursting](../../includes/managed-disks-bursting-2.md)]
Additionally, the [performance tier of managed disks can be changed](disks-chang
- To enable on-demand bursting, see [Enable on-demand bursting](disks-enable-bursting.md). - To learn how to gain insight into your bursting resources, see [Disk bursting metrics](disks-metrics.md).-- To see exactly how much each applicable disk size can burst, see [Scalability and performance targets for VM disks](disks-scalability-targets.md).
+- To see exactly how much each applicable disk size can burst, see [Scalability and performance targets for VM disks](disks-scalability-targets.md).
virtual-machines Disks Deploy Premium V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-premium-v2.md
Title: Deploy a Premium SSD v2 (preview) managed disk
-description: Learn how to deploy a Premium SSD v2 (preview).
+ Title: Deploy a Premium SSD v2 managed disk
+description: Learn how to deploy a Premium SSD v2.
Previously updated : 08/03/2022 Last updated : 10/12/2022 -+
-# Deploy a Premium SSD v2 (preview)
+# Deploy a Premium SSD v2
-Azure Premium SSD v2 (preview) is designed for IO-intense enterprise workloads that require sub-millisecond disk latencies and high IOPS and throughput at a low cost. Premium SSD v2 is suited for a broad range of workloads such as SQL server, Oracle, MariaDB, SAP, Cassandra, Mongo DB, big dat#premium-ssd-v2-preview).
+Azure Premium SSD v2 is designed for IO-intense enterprise workloads that require sub-millisecond disk latencies and high IOPS and throughput at a low cost. Premium SSD v2 is suited for a broad range of workloads such as SQL server, Oracle, MariaDB, SAP, Cassandra, Mongo DB, big dat#premium-ssd-v2).
## Limitations
Azure Premium SSD v2 (preview) is designed for IO-intense enterprise workloads t
## Prerequisites -- [Sign-up](https://aka.ms/PremiumSSDv2PreviewForm) for the public preview. - Install either the latest [Azure CLI](/cli/azure/install-azure-cli) or the latest [Azure PowerShell module](/powershell/azure/install-az-ps). ## Determine region availability programmatically
Update-AzVM -VM $vm -ResourceGroupName $resourceGroupName
1. Select the **Disk SKU** and select **Premium SSD v2 (Preview)**.
- :::image type="content" source="media/disks-deploy-premium-v2/premv2-select.png" alt-text="Screenshot selecting Premium SSD v2 (preview) SKU." lightbox="media/disks-deploy-premium-v2/premv2-select.png":::
+ :::image type="content" source="media/disks-deploy-premium-v2/premv2-select.png" alt-text="Screenshot selecting Premium SSD v2 SKU." lightbox="media/disks-deploy-premium-v2/premv2-select.png":::
1. Proceed through the rest of the VM deployment, making any choices that you desire.
Unlike other managed disks, the performance of a Premium SSD v2 can be configure
The following command changes the performance of your disk, update the values as you like, then run the command: ```azurecli
-az disk update `
subscription $subscription `resource-group $rgname `name $diskName `set diskIopsReadWrite=5000 `set diskMbpsReadWrite=200
+az disk update --subscription $subscription --resource-group $rgname --name $diskName --disk-iops-read-write=5000 --disk-mbps-read-write=200
``` # [PowerShell](#tab/azure-powershell)
Update-AzDisk -ResourceGroupName $resourceGroup -DiskName $diskName -DiskUpdate
# [Azure portal](#tab/portal)
-> [!IMPORTANT]
-> Premium SSD v2 managed disks can only be deployed and managed in the Azure portal from the following link: [https://portal.azure.com/?feature.premiumv2=true#home](https://portal.azure.com/?feature.premiumv2=true#home).
-
-1. Sign in to the Azure portal with the following link: [https://portal.azure.com/?feature.premiumv2=true#home](https://portal.azure.com/?feature.premiumv2=true#home).
-1. Navigate to your disk and select **Size + Performance**.
-1. Change the values to your desire.
-1. Select **Resize**.
-
- :::image type="content" source="media/disks-deploy-premium-v2/premv2-performance.png" alt-text="Screenshot showing the Size+performance page of a premium v2 SSD." lightbox="media/disks-deploy-premium-v2/premv2-performance.png":::
+Currently, adjusting disk performance is only supported with Azure CLI or the Azure PowerShell module.
virtual-machines Disks Enable Ultra Ssd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-ultra-ssd.md
description: Learn about ultra disks for Azure VMs
Previously updated : 06/06/2022 Last updated : 10/12/2022 -+ # Using Azure ultra disks
Update-AzVM -VM $vm -ResourceGroupName $resourceGroup
# [Portal](#tab/azure-portal)
-Ultra disks offer a unique capability that allows you to adjust their performance. You can make these adjustments from the Azure portal, on the disks themselves.
-
-1. Navigate to your VM and select **Disks**.
-1. Select the ultra disk you'd like to modify the performance of.
-
- :::image type="content" source="media/virtual-machines-disks-getting-started-ultra-ssd/select-ultra-disk-to-modify.png" alt-text="Screenshot of disks blade on your vm, ultra disk is highlighted.":::
-
-1. Select **Size + performance** and then make your modifications.
-1. Select **Save**.
-
- :::image type="content" source="media/virtual-machines-disks-getting-started-ultra-ssd/modify-ultra-disk-performance.png" alt-text="Screenshot of configuration blade on your ultra disk, disk size, iops, and throughput are highlighted, save is highlighted.":::
+Currently, adjusting disk performance is only supported with Azure CLI or the Azure PowerShell module.
# [Azure CLI](#tab/azure-cli) Ultra disks offer a unique capability that allows you to adjust their performance, the following command depicts how to use this feature: ```azurecli-interactive
-az disk update `
subscription $subscription `resource-group $rgname `name $diskName `set diskIopsReadWrite=80000 `set diskMbpsReadWrite=800
+az disk update --subscription $subscription --resource-group $rgname --name $diskName --disk-iops-read-write=5000 --disk-mbps-read-write=200
``` # [PowerShell](#tab/azure-powershell)
virtual-machines Disks Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-incremental-snapshots.md
description: Learn about incremental snapshots for managed disks, including how
Previously updated : 04/11/2022 Last updated : 10/12/2022 -+ ms.devlang: azurecli
You can also use Azure Resource Manager templates to create an incremental snaps
``` - ## Next steps See [Copy an incremental snapshot to a new region](disks-copy-incremental-snapshot-across-regions.md) to learn how to copy an incremental snapshot across regions.
virtual-machines Disks Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-metrics.md
description: Examples of disk bursting metrics
Previously updated : 07/19/2021 Last updated : 10/12/2022 + # Disk performance metrics
The following metrics help with observability into our [bursting](disk-bursting.
- **Data Disk Used Burst IO Credits Percentage**: The accumulated percentage of the IOPS burst used for the data disk(s). Emitted on a 5 minute interval. - **OS Disk Used Burst IO Credits Percentage**: The accumulated percentage of the IOPS burst used for the OS disk. Emitted on a 5 minute interval.
+## VM Bursting metrics
+The following metrics provide insight on VM-level bursting:
+
+- **VM Uncached Used Burst IO Credits Percentage**: The accumulated percentage of the VM's uncached IOPS burst used. Emitted on a 5 minute interval.
+- **VM Uncached Used Burst BPS Credits Percentage**: The accumulated percentage of the VM's uncached throughput burst used. Emitted on a 5 minute interval.
+- **VM Cached Used Burst IO Credits Percentage**: The accumulated percentage of the VM's cached IOPS burst used. Emitted on a 5 minute interval.
+- **VM Cached Used Burst BPS Credits Percentage**: The accumulated percentage of the VM's cached throughput burst used. Emitted on a 5 minute interval.
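If you want to inspect these counters programmatically, a hedged Azure PowerShell sketch like the following could query one of them over the last hour. The metric name string is assumed to match the display name listed above, and `myResourceGroup`/`myVM` are placeholders.

```azurepowershell
# A minimal sketch: pull a VM bursting credit metric for the last hour at the 5-minute grain.
$vmId = (Get-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM').Id
Get-AzMetric -ResourceId $vmId `
    -MetricName 'VM Uncached Used Burst IO Credits Percentage' `
    -TimeGrain 00:05:00 `
    -StartTime (Get-Date).AddHours(-1) `
    -EndTime (Get-Date) `
    -AggregationType Average
```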
+ ## Storage IO utilization metrics The following metrics help diagnose bottleneck in your Virtual Machine and Disk combination. These metrics are only available with the following configuration: - Only available on VM series that support premium storage.
This metric tells us the data disks attached on LUN 3 and 2 are using around 85%
- [Azure Monitor Metrics overview](../azure-monitor/essentials/data-platform-metrics.md) - [Metrics aggregation explained](../azure-monitor/essentials/metrics-aggregation-explained.md)-- [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-metric.md)
+- [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-metric.md)
virtual-machines Disks Pools Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-pools-deploy.md
Last updated 11/09/2021 -+ ms.devlang: azurecli # Deploy an Azure disk pool (preview)
For a disk to be able to be used in a disk pool, it must meet the following requ
- The **StoragePool** resource provider must have been assigned an RBAC role that contains **Read** and **Write** permissions for every managed disk in the disk pool. - Must be either a premium SSD, standard SSD, or an ultra disk in the same availability zone as the disk pool. - For ultra disks, it must have a disk sector size of 512 bytes.-- Disk pools can't be configured to contain both premium/standard SSDs and ultra disks. A disk pool configured for ultra disks can only contain ultra disks. Likewise, a disk pool configured for premium or standard SSDs can only contain premium and standard SSDs.
+- Disk pools can't be configured to contain both Premium/standard SSDs and ultra disks. A disk pool configured for ultra disks can only contain ultra disks. Likewise, a disk pool configured for premium or standard SSDs can only contain premium and standard SSDs.
- Must be a shared disk with a maxShares value of two or greater. 1. Sign in to the [Azure portal](https://portal.azure.com/).
virtual-machines Disks Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-pools.md
Last updated 01/04/2022 -+ # Azure disk pools (preview)
When you add a managed disk to the disk pool, the disk is attached to managed iS
In preview, disk pools have the following restrictions: -- Only premium SSDs and standard SSDs, or ultra disks can be added to a disk pool.
+- Only premium SSD managed disks and standard SSDs, or ultra disks can be added to a disk pool.
- A disk pool can't be configured to contain both ultra disks and premium/standard SSDs. If a disk pool is configured to use ultra disks, it can only contain ultra disks. Likewise, a disk pool configured to use premium and standard SSDs can only contain premium and standard SSDs. - Disks using [zone-redundant storage (ZRS)](disks-redundancy.md#zone-redundant-storage-for-managed-disks) aren't currently supported.
virtual-machines Disks Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md
Title: Select a disk type for Azure IaaS VMs - managed disks
-description: Learn about the available Azure disk types for virtual machines, including ultra disks, Premium SSDs v2 (preview), Premium SSDs, standard SSDs, and Standard HDDs.
+description: Learn about the available Azure disk types for virtual machines, including ultra disks, Premium SSDs v2, Premium SSDs, standard SSDs, and Standard HDDs.
Previously updated : 08/01/2022 Last updated : 10/12/2022 -+ # Azure managed disk types
Azure managed disks currently offers five disk types, each intended to address a specific customer scenario: - [Ultra disks](#ultra-disks)-- [Premium SSD v2 (preview)](#premium-ssd-v2-preview)
+- [Premium SSD v2](#premium-ssd-v2)
- [Premium SSDs (solid-state drives)](#premium-ssds) - [Standard SSDs](#standard-ssds) - [Standard HDDs (hard disk drives)](#standard-hdds)
It's possible for a performance resize operation to fail because of a lack of pe
If you would like to start using ultra disks, see the article on [using Azure ultra disks](disks-enable-ultra-ssd.md).
-## Premium SSD v2 (preview)
+## Premium SSD v2
-Azure Premium SSD v2 (preview) is designed for IO-intense enterprise workloads that require consistent sub-millisecond disk latencies and high IOPS and throughput at a low cost. The performance (capacity, throughput, and IOPS) of Premium SSD v2 disks can be independently configured at any time, making it easier for more scenarios to be cost efficient while meeting performance needs. For example, a transaction-intensive database workload may need a large amount of IOPS at a small size, or a gaming application may need a large amount of IOPS during peak hours. Premium SSD v2 is suited for a broad range of workloads such as SQL server, Oracle, MariaDB, SAP, Cassandra, Mongo DB, big data/analytics, and gaming, on virtual machines or stateful containers.
+Azure Premium SSD v2 is designed for IO-intense enterprise workloads that require consistent sub-millisecond disk latencies and high IOPS and throughput at a low cost. The performance (capacity, throughput, and IOPS) of Premium SSD v2 disks can be independently configured at any time, making it easier for more scenarios to be cost efficient while meeting performance needs. For example, a transaction-intensive database workload may need a large amount of IOPS at a small size, or a gaming application may need a large amount of IOPS during peak hours. Premium SSD v2 is suited for a broad range of workloads such as SQL server, Oracle, MariaDB, SAP, Cassandra, Mongo DB, big data/analytics, and gaming, on virtual machines or stateful containers.
### Differences between Premium SSD and Premium SSD v2
With Premium SSD v2 disks, you can individually set the capacity, throughput, an
Premium SSD v2 capacities range from 1 GiB to 64 TiBs, in 1-GiB increments. You're billed on a per GiB ratio, see the [pricing page](https://azure.microsoft.com/pricing/details/managed-disks/) for details.
-Premium SSD v2 offers up to 32 TiBs per region per subscription by default in the public preview, but supports higher capacity by request. To request an increase in capacity, request a quota increase or contact Azure Support.
+Premium SSD v2 offers up to 32 TiBs per region per subscription by default, but supports higher capacity by request. To request an increase in capacity, request a quota increase or contact Azure Support.
#### Premium SSD v2 IOPS
The following table provides a comparison of disk capacities and performance max
|||| |1 GiB-64 TiBs |3,000-80,000 (Increases by 500 IOPS per GiB) |125-1,200 (increases by 0.25 MB/s per set IOPS) |
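As a rough illustration of how those ranges scale with size, the following PowerShell sketch estimates the maximum IOPS and throughput you could set for a given capacity. The formulas are assumptions inferred from the table above (500 IOPS per GiB up to 80,000; 0.25 MB/s per set IOPS up to 1,200; 3,000 IOPS and 125 MB/s baselines); check the disk's reported limits before relying on them.

```azurepowershell
# Illustrative only: estimate the configurable maximums implied by the table above.
function Get-PremiumV2Maximums {
    param([int]$SizeGiB)
    $maxIops = [math]::Min(80000, [math]::Max(3000, 500 * $SizeGiB))
    $maxMbps = [math]::Min(1200, [math]::Max(125, [math]::Floor(0.25 * $maxIops)))
    [pscustomobject]@{ SizeGiB = $SizeGiB; MaxIops = $maxIops; MaxMBps = $maxMbps }
}

Get-PremiumV2Maximums -SizeGiB 512   # 80,000 IOPS and 1,200 MB/s under these assumptions
```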
-To deploy a Premium SSD v2, see [Deploy a Premium SSD v2 (preview)](disks-deploy-premium-v2.md).
+To deploy a Premium SSD v2, see [Deploy a Premium SSD v2](disks-deploy-premium-v2.md).
## Premium SSDs
virtual-machines Migration Classic Resource Manager Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-errors.md
Previously updated : 02/06/2020 Last updated : 10/10/2022
This article catalogs the most common errors and mitigations during the migratio
| Error string | Mitigation | | | |
-| Internal server error |In some cases, this is a transient error that goes away with a retry. If it continues to persist, [contact Azure support](../azure-portal/supportability/how-to-create-azure-support-request.md) as it needs investigation of platform logs. <br><br> **NOTE:** Once the incident is tracked by the support team, please don't attempt any self-mitigation as this might have unintended consequences on your environment. |
-| Migration isn't supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it's a PaaS deployment (Web/Worker). |This happens when a deployment contains a web/worker role. Since migration is only supported for Virtual Machines, please remove the web/worker role from the deployment and try migration again. |
-| Template {template-name} deployment failed. CorrelationId={guid} |In the backend of migration service, we use Azure Resource Manager templates to create resources in the Azure Resource Manager stack. Since templates are idempotent, usually you can safely retry the migration operation to get past this error. If this error continues to persist, please [contact Azure support](../azure-portal/supportability/how-to-create-azure-support-request.md) and give them the CorrelationId. <br><br> **NOTE:** Once the incident is tracked by the support team, please don't attempt any self-mitigation as this might have unintended consequences on your environment. |
+| Internal server error |In some cases, this is a transient error that goes away with a retry. If it continues to persist, [contact Azure support](../azure-portal/supportability/how-to-create-azure-support-request.md) as it needs investigation of platform logs. <br><br> **NOTE:** Once the incident is tracked by the support team, don't attempt any self-mitigation as this might have unintended consequences on your environment. |
+| Migration isn't supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it's a PaaS deployment (Web/Worker). |This happens when a deployment contains a web/worker role. Since migration is only supported for Virtual Machines, remove the web/worker role from the deployment and try migration again. |
+| Template {template-name} deployment failed. CorrelationId={guid} |In the backend of migration service, we use Azure Resource Manager templates to create resources in the Azure Resource Manager stack. Since templates are idempotent, usually you can safely retry the migration operation to get past this error. If this error continues to persist, [contact Azure support](../azure-portal/supportability/how-to-create-azure-support-request.md) and give them the CorrelationId. <br><br> **NOTE:** Once the incident is tracked by the support team, don't attempt any self-mitigation as this might have unintended consequences on your environment. |
| The virtual network {virtual-network-name} doesn't exist. |This can happen if you created the Virtual Network in the new Azure portal. The actual Virtual Network name follows the pattern "Group * \<VNET name>" | | VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} which isn't supported in Azure Resource Manager. It's recommended to uninstall it from the VM before continuing with migration. |XML extensions such as BGInfo 1.\* aren't supported in Azure Resource Manager. Therefore, these extensions can't be migrated. If these extensions are left installed on the virtual machine, they're automatically uninstalled before completing the migration. |
-| VM {vm-name} in HostedService {hosted-service-name} contains Extension VMSnapshot/VMSnapshotLinux, which is currently not supported for Migration. Uninstall it from the VM and add it back using Azure Resource Manager after the Migration is Complete |This is the scenario where the virtual machine is configured for Azure Backup. Since this is currently an unsupported scenario, please follow the workaround at https://aka.ms/vmbackupmigration |
+| VM {vm-name} in HostedService {hosted-service-name} contains Extension VMSnapshot/VMSnapshotLinux, which is currently not supported for Migration. Uninstall it from the VM and add it back using Azure Resource Manager after the Migration is Complete |This is the scenario where the virtual machine is configured for Azure Backup. Since this is currently an unsupported scenario, follow the workaround at https://aka.ms/vmbackupmigration |
| VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} whose Status isn't being reported from the VM. Hence, this VM can't be migrated. Ensure that the Extension status is being reported or uninstall the extension from the VM and retry migration. <br><br> VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} reporting Handler Status: {handler-status}. Hence, the VM can't be migrated. Ensure that the Extension handler status being reported is {handler-status} or uninstall it from the VM and retry migration. <br><br> VM Agent for VM {vm-name} in HostedService {hosted-service-name} is reporting the overall agent status as Not Ready. Hence, the VM may not be migrated, if it has a migratable extension. Ensure that the VM Agent is reporting overall agent status as Ready. Refer to https://aka.ms/classiciaasmigrationfaqs. |Azure guest agent & VM Extensions need outbound internet access to the VM storage account to populate their status. Common causes of status failure include <li> a Network Security Group that blocks outbound access to the internet <li> If the VNET has on premises DNS servers and DNS connectivity is lost <br><br> If you continue to see an unsupported status, you can uninstall the extensions to skip this check and move forward with migration. | | Migration isn't supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it has multiple Availabilities Sets. |Currently, only hosted services that have 1 or less Availability sets can be migrated. To work around this problem, move the additional availability sets, and Virtual machines in those availability sets, to a different hosted service. | | Migration isn't supported for Deployment {deployment-name} in HostedService {hosted-service-name because it has VMs that are not part of the Availability Set even though the HostedService contains one. |The workaround for this scenario is to either move all the virtual machines into a single Availability set or remove all Virtual machines from the Availability set in the hosted service. | | Storage account/HostedService/Virtual Network {virtual-network-name} is in the process of being migrated and hence cannot be changed |This error happens when the "Prepare" migration operation has been completed on the resource and an operation that would make a change to the resource is triggered. Because of the lock on the management plane after "Prepare" operation, any changes to the resource are blocked. To unlock the management plane, you can run the "Commit" migration operation to complete migration or the "Abort" migration operation to roll back the "Prepare" operation. | | Migration isn't allowed for HostedService {hosted-service-name} because it has VM {vm-name} in State: RoleStateUnknown. Migration is allowed only when the VM is in one of the following states - Running, Stopped, Stopped Deallocated. |The VM might be undergoing through a state transition, which usually happens when during an update operation on the HostedService such as a reboot, extension installation etc. It is recommended for the update operation to complete on the HostedService before trying migration. | | Deployment {deployment-name} in HostedService {hosted-service-name} contains a VM {vm-name} with Data Disk {data-disk-name} whose physical blob size {size-of-the-vhd-blob-backing-the-data-disk} bytes doesn't match the VM Data Disk logical size {size-of-the-data-disk-specified-in-the-vm-api} bytes. 
Migration will proceed without specifying a size for the data disk for the Azure Resource Manager VM. | This error happens if you've resized the VHD blob without updating the size in the VM API model. Detailed mitigation steps are outlined [below](#vm-with-data-disk-whose-physical-blob-size-bytes-does-not-match-the-vm-data-disk-logical-size-bytes).|
-| A storage exception occurred while validating data disk {data disk name} with media link {data disk Uri} for VM {VM name} in Cloud Service {Cloud Service name}. Please ensure that the VHD media link is accessible for this virtual machine | This error can happen if the disks of the VM have been deleted or are not accessible anymore. Please make sure the disks for the VM exist.|
+| A storage exception occurred while validating data disk {data disk name} with media link {data disk Uri} for VM {VM name} in Cloud Service {Cloud Service name}. Ensure that the VHD media link is accessible for this virtual machine | This error can happen if the disks of the VM have been deleted or are not accessible anymore. Make sure the disks for the VM exist.|
| VM {vm-name} in HostedService {cloud-service-name} contains Disk with MediaLink {vhd-uri} which has blob name {vhd-blob-name} that isn't supported in Azure Resource Manager. | This error occurs when the name of the blob has a "/" in it which isn't supported in Compute Resource Provider currently. |
-| Migration isn't allowed for Deployment {deployment-name} in HostedService {cloud-service-name} as it isn't in the regional scope. Please refer to https:\//aka.ms/regionalscope for moving this deployment to regional scope. | In 2014, Azure announced that networking resources will move from a cluster level scope to regional scope. See [https://aka.ms/regionalscope](https://aka.ms/regionalscope) for more details. This error happens when the deployment being migrated has not had an update operation, which automatically moves it to a regional scope. The best work-around is to either add an endpoint to a VM, or a data disk to the VM, and then retry migration. <br> See [How to set up endpoints on a classic virtual machine in Azure](/previous-versions/azure/virtual-machines/windows/classic/setup-endpoints#create-an-endpoint) or [Attach a data disk to a virtual machine created with the classic deployment model](./linux/attach-disk-portal.md)|
+| Migration isn't allowed for Deployment {deployment-name} in HostedService {cloud-service-name} as it isn't in the regional scope. Refer to https:\//aka.ms/regionalscope for moving this deployment to regional scope. | In 2014, Azure announced that networking resources will move from a cluster level scope to regional scope. See [https://aka.ms/regionalscope](https://aka.ms/regionalscope) for more details. This error happens when the deployment being migrated has not had an update operation, which automatically moves it to a regional scope. The best work-around is to either add an endpoint to a VM, or a data disk to the VM, and then retry migration. <br> See [How to set up endpoints on a classic virtual machine in Azure](/previous-versions/azure/virtual-machines/windows/classic/setup-endpoints#create-an-endpoint) or [Attach a data disk to a virtual machine created with the classic deployment model](./linux/attach-disk-portal.md)|
| Migration isn't supported for Virtual Network {vnet-name} because it has non-gateway PaaS deployments. | This error occurs when you have non-gateway PaaS deployments such as Application Gateway or API Management services that are connected to the Virtual Network.|
-| Management operations on VM are disallowed because migration is in progress| This error occurs because the VM is in Prepare state and therefore locked for any update/delete operation. Call Abort using PS/CLI on the VM to rollback the migration and unlock the VM for update/delete operations. Calling commit will also unlock the VM but will commit the migration to ARM.|
+| Management operations on VM are disallowed because migration is in progress| This error occurs because the VM is in Prepare state and therefore locked for any update/delete operation. Call Abort using PS/CLI on the VM to roll back the migration and unlock the VM for update/delete operations. Calling commit will also unlock the VM but will commit the migration to ARM.|
## Detailed mitigations
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/overview.md
- Previously updated : 11/14/2019 Last updated : 10/03/2022 -+ # Virtual machines in Azure
This table shows some of the ways you can get a list of available locations.
There are multiple options to manage the availability of your virtual machines in Azure. - **[Availability Zones](../availability-zones/az-overview.md)** are physically separated zones within an Azure region. Availability zones guarantee you will have virtual machine connectivity to at least one instance at least 99.99% of the time when you have two or more instances deployed across two or more Availability Zones in the same Azure region. - **[Virtual machine scale sets](../virtual-machine-scale-sets/overview.md)** let you create and manage a group of load balanced virtual machines. The number of virtual machine instances can automatically increase or decrease in response to demand or a defined schedule. Scale sets provide high availability to your applications, and allow you to centrally manage, configure, and update many virtual machines. Virtual machines in a scale set can also be deployed into multiple availability zones, a single availability zone, or regionally.- **[Proximity Placement Groups](co-location.md)** are a grouping construct used to ensure Azure compute resources are physically located close to each other. Proximity placement groups are useful for workloads where low latency is a requirement. For more information, see [Availability options for Azure virtual machines](availability.md) and [SLA for Azure virtual machines](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_9/).
-## Virtual machine size
+## Sizes and pricing
The [size](sizes.md) of the virtual machine that you use is determined by the workload that you want to run. The size that you choose then determines factors such as processing power, memory, storage capacity, and network bandwidth. Azure offers a wide variety of sizes to support many types of uses. Azure charges an [hourly price](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) based on the virtual machine's size and operating system. For partial hours, Azure charges only for the minutes used. Storage is priced and charged separately.
This table shows some ways that you can find the information for an image.
| REST APIs |[List image publishers](/rest/api/compute/platformimages/platformimages-list-publishers)<BR>[List image offers](/rest/api/compute/platformimages/platformimages-list-publisher-offers)<BR>[List image skus](/rest/api/compute/platformimages/platformimages-list-publisher-offer-skus) | | Azure CLI |[az vm image list-publishers](/cli/azure/vm/image) --location *location*<BR>[az vm image list-offers](/cli/azure/vm/image) --location *location* --publisher *publisherName*<BR>[az vm image list-skus](/cli/azure/vm) --location *location* --publisher *publisherName* --offer *offerName*|
-Microsoft works closely with partners to ensure the images available are updated and optimized for an Azure runtime. For more information on Azure partner offers, see the following links:
-
-* Linux on Azure - [Endorsed Distributions](linux/endorsed-distros.md)
-* SUSE - [Azure Marketplace - SUSE Linux Enterprise Server](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=suse)
-* Red Hat - [Azure Marketplace - Red Hat Enterprise Linux](https://azuremarketplace.microsoft.com/marketplace/apps?search=Red%20Hat%20Enterprise%20Linux)
-* Canonical - [Azure Marketplace - Ubuntu Server](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&filters=partners&search=canonical)
-* Debian - [Azure Marketplace - Debian](https://azuremarketplace.microsoft.com/marketplace/apps?search=Debian&page=1)
-* FreeBSD - [Azure Marketplace - FreeBSD](https://azuremarketplace.microsoft.com/marketplace/apps?search=freebsd&page=1)
-* Flatcar - [Azure Marketplace - Flatcar Container Linux](https://azuremarketplace.microsoft.com/marketplace/apps?search=Flatcar&page=1)
-* RancherOS - [Azure Marketplace - RancherOS](https://azuremarketplace.microsoft.com/marketplace/apps/rancher.rancheros)
-* Bitnami - [Bitnami Library for Azure](https://azure.bitnami.com/)
-* Mesosphere - [Azure Marketplace - Mesosphere DC/OS on Azure](https://azure.microsoft.com/services/kubernetes-service/mesosphere/)
-* Docker - [Azure Marketplace - Docker images](https://azuremarketplace.microsoft.com/marketplace/apps?search=docker&page=1&filters=virtual-machine-images)
-* Jenkins - [Azure Marketplace - CloudBees Jenkins Platform](https://azuremarketplace.microsoft.com/marketplace/apps/cloudbees.cloudbees-core-contact)
+Microsoft works closely with partners to ensure the images available are updated and optimized for an Azure runtime. For more information on Azure partner offers, see the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?filters=partners%3Bvirtual-machine-images&page=1).
## Cloud-init
-To achieve a proper DevOps culture, all infrastructures must be code. When all the infrastructure lives in code it can easily be recreated. Azure works with all the major automation tooling like Ansible, Chef, SaltStack, and Puppet. Azure also has its own tooling for automation:
-
-* [Azure Templates](linux/create-ssh-secured-vm-from-template.md)
-* [Azure `VMaccess`](extensions/vmaccess.md)
-
-Azure supports for [cloud-init](https://cloud-init.io/) across most Linux Distros that support it. We are actively working with our endorsed Linux distro partners in order to have cloud-init enabled images available in the Azure marketplace. These images will make your cloud-init deployments and configurations work seamlessly with virtual machines and virtual machine scale sets.
+Azure supports [cloud-init](https://cloud-init.io/) across most Linux distributions that support it. We are actively working with our Linux partners to make cloud-init enabled images available in the Azure Marketplace. These images make your cloud-init deployments and configurations work seamlessly with virtual machines and virtual machine scale sets.
-* [Using cloud-init on Azure Linux virtual machines](linux/using-cloud-init.md)
+For more information, see [Using cloud-init on Azure Linux virtual machines](linux/using-cloud-init.md).
## Storage * [Introduction to Microsoft Azure Storage](../storage/common/storage-introduction.md)
virtual-machines Security Controls Policy Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy-image-builder.md
Title: Azure Policy Regulatory Compliance controls for Azure VM Image Builder description: Lists Azure Policy Regulatory Compliance controls available for Azure VM Image Builder. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
virtual-machines Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Virtual Machines description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Machines . These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
virtual-machines Vm Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-generalized-image-version.md
Get the ID of the image version. The value will be used in the VM deployment req
```rest GET
-https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Compute/Locations/{location}/CommunityGalleries/{CommunityGalleryPublicName}/Images/{galleryImageName}/Versions/{1.0.0}?api-version=2021-07-01
+https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Compute/Locations/{location}/sharedGalleries/{galleryUniqueName}/Images/{galleryImageName}/Versions/{1.0.0}?api-version=2021-07-01
```
Response:
```json "location": "West US", "identifier": {
- "uniqueId": "/CommunityGalleries/{PublicGalleryName}/Images/{imageName}/Versions/{verionsName}"
+ "uniqueId": "/sharedGalleries/{PublicGalleryName}/Images/{imageName}/Versions/{verionsName}"
}, "name": "1.0.0" ```
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rg}/
}, "storageProfile": { "imageReference": {
- "communityGalleryImageId":"/communityGalleries/{publicGalleryName}/images/{galleryImageName}/versions/1.0.0"
+ "sharedGalleryImageId":"/sharedGalleries/{galleryUniqueName}/images/{galleryImageName}/versions/1.0.0"
}, "osDisk": { "caching": "ReadWrite",
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rg}/
**Next steps** - [Create an Azure Compute Gallery](create-gallery.md)-- [Create an image in an Azure Compute Gallery](image-version.md)
+- [Create an image in an Azure Compute Gallery](image-version.md)
virtual-machines Vm Naming Conventions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-naming-conventions.md
This page outlines the naming conventions used for Azure VMs. VMs use these nami
| *Sub-family | Used for specialized VM differentiations only| | # of vCPUs| Denotes the number of vCPUs of the VM | | *Constrained vCPUs| Used for certain VM sizes only. Denotes the number of vCPUs for the [constrained vCPU capable size](./constrained-vcpu.md) |
-| Additive Features | One or more lower case letters denote additive features, such as: <br> a = AMD-based processor <br> b = Block Storage performance <br> c = confidential <br> d = diskful (i.e., a local temp disk is present); this is for newer Azure VMs, see [Ddv4 and Ddsv4-series](./ddv4-ddsv4-series.md) <br> i = isolated size <br> l = low memory; a lower amount of memory than the memory intensive size <br> m = memory intensive; the most amount of memory in a particular size <br> t = tiny memory; the smallest amount of memory in a particular size <br> s = Premium Storage capable, including possible use of [Ultra SSD](./disks-types.md#ultra-disks) (Note: some newer sizes without the attribute of s can still support Premium Storage e.g. M128, M64, etc.)<br> NP = node packing <br> P = ARM Cpu <br>|
+| Additive Features | One or more lower case letters denote additive features, such as: <br> a = AMD-based processor <br> b = Block Storage performance <br> c = confidential <br> d = diskful (i.e., a local temp disk is present); this is for newer Azure VMs, see [Ddv4 and Ddsv4-series](./ddv4-ddsv4-series.md) <br> i = isolated size <br> l = low memory; a lower amount of memory than the memory intensive size <br> m = memory intensive; the most amount of memory in a particular size <br> t = tiny memory; the smallest amount of memory in a particular size <br> s = Premium Storage capable, including possible use of [Ultra SSD](./disks-types.md#ultra-disks) (Note: some newer sizes without the attribute of s can still support Premium Storage e.g. M128, M64, etc.)<br> NP = node packing <br> p = ARM Cpu <br>|
| *Accelerator Type | Denotes the type of hardware accelerator in the specialized/GPU SKUs. Only the new specialized/GPU SKUs launched from Q3 2020 will have the hardware accelerator in the name. | | Version | Denotes the version of the VM Family Series |
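To see how these pieces combine in practice, the following PowerShell sketch roughly decodes a size name into the parts described in the table. The regular expression is an illustrative assumption based on the table above, not an official parser, and it won't cover every SKU (for example, sub-families, constrained vCPUs, or accelerator types).

```azurepowershell
# Illustrative only: a rough parse of a VM size name into the parts described above.
$sizeName = 'Standard_D4ds_v5'
if ($sizeName -match '^Standard_(?<family>[A-Z]+)(?<vcpus>\d+)(?<features>[a-z]*)(_(?<version>v\d+))?$') {
    [pscustomobject]@{
        Family   = $Matches.family                              # e.g. D
        vCPUs    = [int]$Matches.vcpus                          # e.g. 4
        Features = ($Matches.features).ToCharArray() -join ','  # e.g. d,s
        Version  = $Matches.version                             # e.g. v5
    }
}
```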
virtual-machines External Ntpsource Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/external-ntpsource-configuration.md
Title: Active Directory Windows Virtual Machines in Azure with External NTP Source
-description: Active Directory Windows Virtual Machines in Azure with External NTP Source
+ Title: Time mechanism for Active Directory Windows Virtual Machines in Azure
+description: Time mechanism for Active Directory Windows Virtual Machines in Azure
Last updated 08/05/2022
-# Configure Active Directory Windows Virtual Machines in Azure with External NTP Source
+# Configure the time mechanism for Active Directory Windows Virtual Machines in Azure
**Applies to:** :heavy_check_mark: Windows Virtual Machines
-Use this guide to learn how to setup time synchronization for your Azure Windows Virtual Machines that belong to an Active Directory Domain with an external NTP source.
+Use this guide to learn how to set up time synchronization for your Azure Windows Virtual Machines that belong to an Active Directory Domain.
-## Time Sync for Active Directory Windows Virtual Machines in Azure with External NTP Source
+## Time sync hierarchy in Active Directory Domain Services
-Time synchronization in Active Directory should be managed by only allowing the PDC to access an external time source or NTP Server. All other Domain Controllers would then sync time against the PDC. If your PDC is an Azure Virtual Machine follow these steps:
+Time synchronization in Active Directory should be managed by only allowing the **PDC** to access an external time source or NTP Server.
+
+All other Domain Controllers then sync time against the PDC, and all other domain members get their time from the Domain Controller that satisfied their authentication request.
+
+If you have an Active Directory domain running on virtual machines hosted in Azure, follow these steps to set up time synchronization properly.
>[!NOTE]
->Due to Azure Security configurations, the following settings must be applied on the PDC using the **Local Group Policy Editor**.
+>This guide focuses on using the **Group Policy Management** console to perform the configuration. You can achieve the same results by using the Command Prompt, PowerShell, or by manually modifying the Registry; however, those methods are out of scope for this article.
+
+## GPO to allow the PDC to synchronize with an External NTP Source
To check the current time source on your **PDC**, run *w32tm /query /source* from an elevated command prompt and note the output for later comparison.
-1. From *Start* run *gpedit.msc*.
-2. Navigate to the *Global Configuration Settings* policy under *Computer Configuration* -> *Administrative Templates* -> *System* -> *Windows Time Service*.
-3. Set it to *Enabled* and configure the *AnnounceFlags* parameter to **5**.
-4. Navigate to *Computer Settings* -> *Administrative Templates* -> *System* -> *Windows Time Service* -> *Time Providers*.
-5. Double click the *Configure Windows NTP Client* policy and set it to *Enabled*, configure the parameter *NTPServer* to point to an IP address of a time server followed by `,0x9` for example: `131.107.13.100,0x9` and configure *Type* to NTP. For all the other parameters you can use the default values, or use custom ones according to your corporate needs.
+1. From *Start* run *gpmc.msc*.
+2. Browse to the Forest and Domain where you want to create the GPO.
+3. Create a new GPO, for example *PDC Time Sync*, in the container *Group Policy Objects*.
+4. Right-click on the newly created GPO and Edit.
+5. Navigate to the *Global Configuration Settings* policy under *Computer Configuration* -> *Administrative Templates* -> *System* -> *Windows Time Service*.
+6. Set it to *Enabled* and configure the *AnnounceFlags* parameter to **5**.
+7. Navigate to *Computer Configuration* -> *Administrative Templates* -> *System* -> *Windows Time Service* -> *Time Providers*.
+8. Double-click the *Configure Windows NTP Client* policy and set it to *Enabled*. Configure the parameter *NTPServer* to point to an IP address or FQDN of a time server followed by `,0x9`, for example `131.107.13.100,0x9`, and configure *Type* to **NTP**. For all the other parameters you can use the default values, or use custom ones according to your corporate needs.
+9. Click the *Next Setting* button, set the *Enable Windows NTP Client* policy to *Enabled*, and click *OK*.
+10. In the **Security Filtering** of the newly created GPO, highlight the *Authenticated Users* group -> Click the *Remove* button -> *OK* -> *OK*.
+11. In the **Security Filtering** of the newly created GPO, click the *Add* button and browse for the computer object that holds the **PDC** role in your domain, then click *OK*.
+12. Link the GPO to the **Domain Controllers** Organizational Unit.
>[!IMPORTANT]
->You must mark the VMIC provider as *Disabled* in the Local Registry. Remember that serious problems might occur if you modify the registry incorrectly. Therefore, make sure that you follow these steps carefully. For added protection, back up the registry before you modify it. Then, you can restore the registry if a problem occurs. For how to back up and restore the Windows Registry follow the steps below.
+>Avoid using a WMI Filter in the Security Filtering options to dynamically get the Domain Controller that holds the PDC role. If you do so, the WMI Filter will be automatically excluded by the virtual platform, resulting in the GPO not being applied to the PDC.
-## Back up the registry manually
+>[!NOTE]
+>It can take up to 15 minutes for these changes to reflect in the system.
-- Select Start, type regedit.exe in the search box, and then press Enter. If you are prompted for an administrator password or for confirmation, type the password or provide confirmation.-- In Registry Editor, locate and click the registry key or subkey that you want to back up.-- Select File -> Export.-- In the Export Registry File dialog box, select the location to which you want to save the backup copy, and then type a name for the backup file in the File name field.-- Select Save.
+From an elevated command prompt, rerun *w32tm /query /source* and compare the output to the one you noted at the beginning of the configuration. It should now report the NTP server you chose.
-## Restore a manual backup
+>[!TIP]
+>If you want to speed up the process of changing the NTP source on your **PDC**, from an elevated command prompt run *gpupdate /force*, followed by *w32tm /resync /nowait*, then rerun *w32tm /query /source*; the output should be the NTP server you used in the above GPO.
-- Select Start, type regedit.exe, and then press Enter. If you are prompted for an administrator password or for confirmation, type the password or provide confirmation.-- In Registry Editor, click File -> Import.-- In the Import Registry File dialog box, select the location to which you saved the backup copy, select the backup file, and then click Open.
+## GPO for Members
-To mark the VMIC provider as *Disabled* from *Start* type *regedit.exe* -> In the *Registry Editor* navigate to *HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\TimeProviders* -> On key *VMICTimeProvider* set the value to **0**
+Usually, NTP in Active Directory Domain Services follows the AD DS time hierarchy mentioned at the beginning of this article, and no further configuration is required.
->[!NOTE]
->It can take up to 15 minutes for these changes to reflect in the system.
+Nevertheless, virtual machines hosted in Azure have specific security settings applied to them directly by the Cloud platform.
-From an elevated command prompt rerun *w32tm /query /source* and compare the output to the one you noted at the beginning of the configuration. Now it will be set to the NTP Server you chose.
+For domain members that are not Domain Controllers, you need to modify the registry and set the **Enabled** value to **0** under **HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider**, as shown in the sketch below.
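Here's a minimal PowerShell sketch of that registry change, assuming it's run locally on the domain member from an elevated session. Back up the registry first, as described below, and test the change on a non-production VM.

```azurepowershell
# A minimal sketch: disable the VMICTimeProvider so the member follows the domain hierarchy.
Set-ItemProperty `
    -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider' `
    -Name 'Enabled' -Value 0 -Type DWord

# Restart the Windows Time service so the change takes effect, then verify the source.
Restart-Service -Name w32time
w32tm /query /source
```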
->[!TIP]
->Follow the steps below if you want to speed-up the process of changing the NTP source on your PDC. You can create a scheduled task to run at **System Start-up** with the **Delay** task for up to (random delay) set to **2 minutes**.
+>[!IMPORTANT]
+>Remember that serious problems might occur if you modify the registry incorrectly. Therefore, make sure that you follow these steps carefully and test them on a couple of test virtual machines to make sure you will get the expected outcome. For added protection, back up the registry before you modify it. Then, you can restore the registry if a problem occurs. For how to back up and restore the Windows Registry follow the steps below.
+
+## Back up the Registry
-## Scheduled task to set NTP source on your PDC
+1. From *Start* type *regedit.exe*, and then press `Enter`. If you are prompted for an administrator password or for confirmation, type the password or provide confirmation.
+2. In the *Registry Editor* window, locate and click the registry key or subkey that you want to back up.
+3. From the *File* menu select *Export*.
+4. In the *Export Registry File* dialog box, select the location to which you want to save the backup copy, type a name for the backup file in the *File name* field, and then click *Save*.
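If you prefer the command line, the same key can be exported from an elevated command prompt; the target path below is only an example:

```cmd
rem Export the key that will be modified to a .reg backup file
reg export "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider" C:\Backup\VMICTimeProvider.reg
```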
-1. From *Start* run *Task Scheduler*.
-2. Browse to *Task Scheduler* Library -> *Microsoft* -> *Windows* -> *Time Synchronization* -> Right-click in the right hand side pane and select *Create New Task*.
-3. In the *General* tab, click the *Change User or Group...* button and set it to run as *LOCAL SERVICE*. Then check the box to *Run with highest privileges*.
-4. Under *Configure for:* select your operating system version.
-5. Switch to the *Triggers* tab, click the *New...* button, and set the schedule as per your requirements. Before clicking *OK*, make sure the box next to *Enabled* is checked.
-6. Go to the *Actions* tab. Click the *New...* button and enter the following details:
-- On *Action:* set *Start a program*. -- On *Program/script:* set the path to *%windir%\system32\w32tm.exe*. -- On *Add arguments:* type */resync*, and click *OK* to save changes.
-7. Under the *Conditions* tab ensure that *Start the task only if the computer is in idle for* and *Start the task only if the computer is on AC power* is *not selected*. Click *OK*.
+## Restore a Registry backup
-## GPO for Clients
+1. From *Start* type *regedit.exe*, and then press `Enter`. If you are prompted for an administrator password or for confirmation, type the password or provide confirmation.
+2. In the *Registry Editor* window, from the *File* menu select *Import*.
+3. In the *Import Registry File* dialog box, select the location to which you saved the backup copy, select the backup file, and then click *Open*.
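The equivalent restore from an elevated command prompt, using the example backup file from the previous section:

```cmd
rem Import the previously exported .reg backup file
reg import C:\Backup\VMICTimeProvider.reg
```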
-Configure the following Group Policy Object to enable your clients to synchronize time with any Domain Controller in your Domain:
+## GPO to disable the VMICTimeProvider
-To check current time source in your client, from an elevated command prompt run *w32tm /query /source* and note the output for later comparison.
+Configure the following Group Policy Object to enable domain members to synchronize time with Domain Controllers in their corresponding Active Directory Site:
+
+To check the current time source, sign in to any domain member, run *w32tm /query /source* from an elevated command prompt, and note the output for later comparison.
1. From a Domain Controller go to *Start* and run *gpmc.msc*.
2. Browse to the Forest and Domain where you want to create the GPO.
3. Create a new GPO, for example *Clients Time Sync*, in the container *Group Policy Objects*.
4. Right-click the newly created GPO and select *Edit*.
-5. In the *Group Policy Management Editor* navigate to the *Configure Windows NTP Client* policy under *Computer Configuration* -> *Administrative Templates* -> *System* -> *Windows Time Service* -> *Time Providers*
-6. Set it to *Enabled*, configure the parameter *NTPServer* to point to a Domain Controller in your Domain followed by `,0x8` for example: `DC1.contoso.com,0x8` and configure *Type* to NT5DS. For all the other parameters you can use the default values, or use custom ones according to your corporate needs.
-7. Link the GPO to the Organizational Unit where your clients are located.
-
->[!IMPORTANT]
->In the the parameter `NTPServer` you can specify a list with all the Domain Controllers in your domain, like this: `DC1.contoso.com,0x8 DC2.contoso.com,0x8 DC3.contoso.com,0x8`
-
-From an elevated command prompt rerun *w32tm /query /source* and compare the output to the one you noted at the beginning of the configuration. Now it will be set to the Domain Controller that satisfied the client's authentication request.
+5. Navigate to *Computer Configuration* -> *Preferences* -> Right-click on *Registry* -> *New* -> *Registry Item*
+6. In the *New Registry Properties* window set the following values:
+ - On **Action**: *Update*
+ - On **Hive**: *HKEY_LOCAL_MACHINE*
+ - On **Key Path**: browse to *SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider*
+ - On **Value Name**: type *Enabled*
+ - On **Value Type**: *REG_DWORD*
+ - On **Value Data**: type *0*
+7. For all the other parameters use the default values and click *OK*
+8. Link the GPO to the Organizational Unit where your members are located.
+9. Wait or manually force a *Group Policy Update* on the domain member.
+
+Go back to the domain member and, from an elevated command prompt, rerun *w32tm /query /source* and compare the output to the one you noted at the beginning of the configuration. It should now be set to the Domain Controller that satisfied the member's authentication request.
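To shorten the wait on a test machine, you can force the policy refresh and then re-check the time source from an elevated command prompt on the domain member:

```cmd
rem Force a Group Policy refresh, then verify the time source again
gpupdate /force
w32tm /query /source
```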
## Next steps
virtual-machines Time Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/time-sync.md
There are three options for configuring time sync for your Windows VMs hosted in
- Host time and time.windows.com. This is the default configuration used in Azure Marketplace images. - Host-only.-- Use another, external time server with or without using host time. For this option follow the [Configure Azure Windows VMs with External NTP Source](external-ntpsource-configuration.md) guide.
+- Use another, external time server with or without using host time. For this option follow the [Time mechanism for Active Directory Windows Virtual Machines in Azure](external-ntpsource-configuration.md) guide.
### Use the default
virtual-machines Expose Sap Odata To Power Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/expose-sap-odata-to-power-query.md
+
+ Title: Enable SAP Principal Propagation for live OData feeds with Power Query
+description: Learn about configuring SAP Principal Propagation for live OData feeds with Power Query
+++++ Last updated : 06/10/2022++
+# Enable SAP Principal Propagation for live OData feeds with Power Query
+
+Working with SAP datasets in Microsoft Excel or Power BI is a common requirement for customers.
+
+This article describes the required configurations and components to enable SAP dataset consumption via OData with [Power Query](/power-query/power-query-what-is-power-query). The SAP data integration is considered **"live"** because it can be refreshed from clients such as Microsoft Excel or Power BI on demand, unlike data exports such as [SAP List Viewer (ALV)](https://help.sap.com/doc/saphelp_nw73ehp1/7.31.19/en-us/4e/c38f8788d22b90e10000000a42189d/content.htm) CSV exports. Those exports are **static** by nature and have no continuous relationship with the data origin.
+
+The article puts emphasis on end-to-end user mapping between the known Azure AD identity in Power Query and the SAP backend user. This mechanism is often referred to as SAP Principal Propagation.
+
+The focus of the described configuration is on the [Azure API Management](/azure/api-management/), [SAP Gateway](https://help.sap.com/viewer/product/SAP_GATEWAY), [SAP OAuth 2.0 Server with AS ABAP](https://help.sap.com/docs/SAP_NETWEAVER_750/e815bb97839a4d83be6c4fca48ee5777/0b899f00477b4034b83aa31764361852.html), and OData sources, but the concepts used apply to any web-based resource.
+
+> [!IMPORTANT]
+> SAP Principal Propagation ensures user mapping to the licensed, named SAP user. For any SAP license-related questions, please contact your SAP representative.
+
+## Overview of Microsoft products with SAP integration
+
+Integrations between SAP products and the Microsoft 365 portfolio range from custom code and partner add-ons to fully customized Office products. Here are a couple of examples:
+
+- [SAP Analysis for Microsoft Office Excel and PowerPoint](https://help.sap.com/docs/SAP_BUSINESSOBJECTS_ANALYSIS_OFFICE/ca9c58444d64420d99d6c136a3207632/ebf198667aa54740b9049d9da804a901.html)
+
+- [SAP Analytics Cloud, add-in for Microsoft Office](https://help.sap.com/docs/SAP_ANALYTICS_CLOUD_OFFICE/c637c9ff5d61457eb415ce161e38e57b/c9217302603a4fd6baa0fe6a6e780f8d.html)
+
+- [Access SAP Data Warehouse Cloud with Microsoft Excel](https://blogs.sap.com/2022/05/17/access-sap-data-warehouse-cloud-with-saps-microsoft-excel-add-ins/)
+
+- [SAP HANA Connector for Power Query](/power-query/connectors/sap-hana/overview)
+
+- [Custom Excel Macros to interact with SAP back ends](https://help.sap.com/docs/SAP_BUSINESSOBJECTS_ANALYSIS_OFFICE/ca9c58444d64420d99d6c136a3207632/f270fd456c9b1014bf2c9a7eb0e91070.html)
+
+- [Export from SAP List Viewer (ALV) to Microsoft Excel](https://help.sap.com/docs/ABAP_PLATFORM_NEW/b1c834a22d05483b8a75710743b5ff26/4ec38f8788d22b90e10000000a42189d.html)
+
+The mechanism described in this article uses the standard built-in OData capabilities of Power Query and puts emphasis on SAP landscapes deployed on Azure. Address on-premises landscapes with the Azure API Management [self-hosted Gateway](/azure/api-management/self-hosted-gateway-overview).
+
+For more information on which Microsoft products support Power Query, see [the Power Query documentation](/power-query/power-query-what-is-power-query#where-can-you-use-power-query).
+
+## Setup considerations
+
+End users have a choice between local desktop or web-based clients (for instance Excel or Power BI). The client execution environment needs to be considered for the network path between the client application and the target SAP workload. Network access solutions such as VPN aren't in scope for apps like Excel for the web.
+
+[Azure API Management](/azure/api-management/) reflects local and web-based environment needs with different deployment modes that can be applied to Azure landscapes ([internal](/azure/api-management/api-management-using-with-internal-vnet?tabs=stv2)
+or [external](/azure/api-management/api-management-using-with-vnet?tabs=stv2)). `Internal` refers to instances that are fully restricted to a private virtual network, whereas `external` retains public access to Azure API Management. On-premises installations require a hybrid deployment to apply the approach as is, using the Azure API Management [self-hosted Gateway](/azure/api-management/self-hosted-gateway-overview).
+
+Power Query requires the API service URL and the Azure AD application ID URL to match. Configure a [custom domain for Azure API Management](/azure/api-management/configure-custom-domain) to meet this requirement.
+
+[SAP Gateway](https://help.sap.com/docs/SAP_GATEWAY) needs to be configured to expose the desired target OData services. Discover and activate available services via SAP transaction code `/IWFND/MAINT_SERVICE`. For more information, see SAP's [OData configuration](https://help.sap.com/docs/SAP_GATEWAY).
+
+## Azure API Management custom domain configuration
+
+The following screenshot shows an example configuration in API Management using a custom domain called `api.custom-apim.domain.com` with a managed certificate and [Azure App Service Domain](/azure/app-service/manage-custom-dns-buy-domain). For more domain certificate options, see the Azure API Management [documentation](/azure/api-management/configure-custom-domain?tabs=managed).
++
+Complete the setup of your custom domain as per the domain requirements. For more information, see the [custom domain documentation](/azure/api-management/configure-custom-domain?tabs=managed#set-a-custom-domain-nameportal). To prove domain name ownership and grant access to the certificate, add those DNS records to your Azure App Service Domain `custom-apim.domain.com` as below:
++
+The respective Azure AD application registration for the Azure API Management tenant would look like below.
++
+> [!NOTE]
+> If custom domain for Azure API Management isn't an option for you, you need to use a [custom Power Query Connector](/power-query/startingtodevelopcustomconnectors) instead.
+
+## Azure API Management policy design for Power Query
+
+Use [this](https://github.com/Azure/api-management-policy-snippets/blob/master/examples/Handle%20Power%20Query%20access%20request%20to%20custom%20API.policy.xml) Azure API Management policy for your target OData API to support Power Query's authentication flow. The following snippet from that policy highlights the authentication mechanism. The client ID used by Power Query is documented [here](/power-query/connectorauthentication#supported-workflow).
+
+```xml
+<!-- if empty Bearer token supplied assume Power Query sign-in request as described [here:](/power-query/connectorauthentication#supported-workflow) -->
+<when condition="@(context.Request.Headers.GetValueOrDefault("Authorization","").Trim().Equals("Bearer"))">
+ <return-response>
+ <set-status code="401" reason="Unauthorized" />
+ <set-header name="WWW-Authenticate" exists-action="override">
+ <!-- Check the client ID for Power Query [here:](/power-query/connectorauthentication#supported-workflow) -->
+ <value>Bearer authorization_uri=https://login.microsoftonline.com/{{AADTenantId}}/oauth2/authorize?response_type=code%26client_id=a672d62c-fc7b-4e81-a576-e60dc46e951d</value>
+ </set-header>
+ </return-response>
+</when>
+```
+
+In addition to supporting the **Organizational Account login flow**, the policy supports **OData URL response rewriting** because the target server replies with original URLs. See the following snippet from the mentioned policy:
+
+```xml
+<!-- URL rewrite in body only required for GET operations -->
+<when condition="@(context.Request.Method == "GET")">
+ <!-- ensure downstream API metadata matches Azure API Management caller domain in Power Query -->
+ <find-and-replace from="@(context.Api.ServiceUrl.Host +":"+ context.Api.ServiceUrl.Port + context.Api.ServiceUrl.Path)" to="@(context.Request.OriginalUrl.Host + ":" + context.Request.OriginalUrl.Port + context.Api.Path)" />
+</when>
+```
+
+> [!NOTE]
+> For more information about secure SAP access from the Internet and SAP perimeter network design, see this [guide](/azure/architecture/guide/sap/sap-internet-inbound-outbound#network-design). Regarding securing SAP APIs with Azure, see this [article](/azure/virtual-machines/workloads/sap/expose-sap-process-orchestration-on-azure).
+
+## SAP OData authentication via Power Query on Excel Desktop
+
+With the given configuration, the built-in authentication mechanism of Power Query becomes available to the exposed OData APIs. Add a new OData source to the Excel sheet via the Data ribbon (Get Data -\> From Other Sources -\> From OData Feed). Enter your target service URL. The example below uses the SAP Gateway demo service **GWSAMPLE_BASIC**. Discover or activate it using SAP transaction `/IWFND/MAINT_SERVICE`. Finally, add it to Azure API Management using the [official OData import guide](/azure/api-management/sap-api).
++
+Retrieve the base URL and insert it into your target application. The example below shows the integration experience with Excel Desktop.
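If you prefer to author the query directly in the Power Query editor instead of the ribbon dialog, a minimal M sketch could look like the following. The service URL is a hypothetical example based on the custom domain used earlier in this article; replace it with the base URL of the OData API you imported into Azure API Management.

```powerquery-m
let
    // Hypothetical URL of the OData API exposed through the Azure API Management custom domain.
    // Authentication (Organizational account) is handled by Power Query as described above.
    Source = OData.Feed("https://api.custom-apim.domain.com/odata", null, [Implementation = "2.0"])
in
    Source
```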
++
+Switch the login method to **Organizational account** and click Sign in. Supply the Azure AD account that is mapped to the named SAP user on the SAP Gateway using SAP Principal Propagation. For more information about the configuration, see [this Microsoft tutorial](/azure/active-directory/saas-apps/sap-netweaver-tutorial#configure-sap-netweaver-for-oauth). Learn more about SAP Principal Propagation from [this](https://blogs.sap.com/2021/08/12/.net-speaks-odata-too-how-to-implement-azure-app-service-with-sap-odata-gateway/) SAP community post and [this video series](https://github.com/MartinPankraz/SAP-MSTeams-Hero/blob/main/Towel-Bearer/103a-sap-principal-propagation-basics.md).
+
+Continue by choosing the level at which Power Query on Excel should apply the authentication settings. The example below shows a setting that would apply to all OData services hosted on the target SAP system (not only to the sample service GWSAMPLE_BASIC).
+
+> [!NOTE]
+> The authorization scope setting at the URL level in the screen below is independent of the actual authorizations on the SAP backend. SAP Gateway remains the final validator of each request and the associated authorizations of the mapped named SAP user.
++
+> [!IMPORTANT]
+> The above guidance focuses on the process of obtaining a valid authentication token from Azure AD via Power Query. This token needs to be further processed for SAP Principal Propagation.
+
+## Configure SAP Principal Propagation with Azure API Management
+
+Use [this](https://github.com/Azure/api-management-policy-snippets/blob/master/examples/Request%20OAuth2%20access%20token%20from%20SAP%20using%20AAD%20JWT%20token.xml) second Azure API Management policy for SAP to complete the configuration for SAP Principal Propagation on the middle layer. For more information about the configuration of the SAP Gateway backend, see [this Microsoft tutorial](/azure/active-directory/saas-apps/sap-netweaver-tutorial#configure-sap-netweaver-for-oauth).
+
+> [!NOTE]
+> Learn more about SAP Principal Propagation from [this](https://blogs.sap.com/2021/08/12/.net-speaks-odata-too-how-to-implement-azure-app-service-with-sap-odata-gateway/) SAP community post and [this video series](https://github.com/MartinPankraz/SAP-MSTeams-Hero/blob/main/Towel-Bearer/103a-sap-principal-propagation-basics.md).
++
+The policy relies on an established SSO setup between Azure AD and SAP Gateway (use [SAP NetWeaver from the Azure AD gallery](/azure/active-directory/saas-apps/sap-netweaver-tutorial#adding-sap-netweaver-from-the-gallery)). See below an example with the demo user Adele Vance. User mapping between Azure AD and the SAP system happens based on the user principal name (UPN) as the unique user identifier.
+++
+The UPN mapping is maintained on the SAP back end using transaction **SAML2**.
++
+With this configuration, **named SAP users** will be mapped to the respective Azure AD user. See below an example configuration from the SAP back end using transaction code **SU01**.
++
+ For more information about the required [SAP OAuth 2.0 Server with AS ABAP](https://help.sap.com/docs/SAP_NETWEAVER_750/e815bb97839a4d83be6c4fca48ee5777/0b899f00477b4034b83aa31764361852.html) configuration, see this [Microsoft](/azure/active-directory/saas-apps/sap-netweaver-tutorial#configure-sap-netweaver-for-oauth) tutorial about SSO with SAP NetWeaver using OAuth.
+
+Using the described Azure API Management policies, **any** Power Query-enabled Microsoft product may call SAP-hosted OData services while honoring the SAP named user mapping.
++
+## SAP OData access via other Power Query enabled applications and services
+
+The above example shows the flow for Excel Desktop, but the approach is applicable to **any** Power Query-enabled Microsoft product. For more information about which products support Power Query, see [the Power Query documentation](/power-query/power-query-what-is-power-query#where-can-you-use-power-query). Popular consumers are [Power BI](/power-bi/connect-data/desktop-connect-odata), [Excel for the web](https://www.office.com/launch/excel), [Azure Data Factory](/azure/data-factory/control-flow-power-query-activity), [Azure Synapse Analytics Pipelines](/azure/data-factory/control-flow-power-query-activity), [Power Automate](/flow/) and [Dynamics 365](/power-query/power-query-what-is-power-query#where-can-you-use-power-query).
+
+## Next steps
+
+[Learn from where you can use Power Query](/power-query/power-query-what-is-power-query#where-can-you-use-power-query)
+
+[Work with SAP OData APIs in Azure API Management](/azure/api-management/sap-api)
+
+[Configure Azure API Management for SAP APIs](/azure/api-management/sap-api)
+
+[Tutorial: Analyze sales data from Excel and an OData feed](/power-bi/connect-data/desktop-tutorial-analyzing-sales-data-from-excel-and-an-odata-feed)
+
+[Protect APIs with Application Gateway and API Management](/azure/architecture/reference-architectures/apis/protect-apis)
+
+[Integrate API Management in an internal virtual network with Application Gateway](/azure/api-management/api-management-howto-integrate-internal-vnet-appgateway)
+
+[Understand Azure Application Gateway and Web Application Firewall for SAP](https://blogs.sap.com/2020/12/03/sap-on-azure-application-gateway-web-application-firewall-waf-v2-setup-for-internet-facing-sap-fiori-apps/)
+
+[Automate API deployments with APIOps](/azure/architecture/example-scenario/devops/automated-api-deployments-apiops)
virtual-machines Hana Vm Operations Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations-storage.md
Title: SAP HANA Azure virtual machine storage configurations | Microsoft Docs
-description: Storage recommendations for VM that have SAP HANA deployed in them.
+description: General storage recommendations for VMs that have SAP HANA deployed.
tags: azure-resource-manager
keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage'
Previously updated : 02/28/2022 Last updated : 10/09/2022
The minimum SAP HANA certified conditions for the different storage types are:
- Azure Ultra disk at least for the **/hana/log** volume. The **/hana/data** volume can be placed on either premium storage without Azure Write Accelerator or, in order to get faster restart times, Ultra disk
- **NFS v4.1** volumes on top of Azure NetApp Files for **/hana/log and /hana/data**. The volume of /hana/shared can use NFS v3 or NFS v4.1 protocol
-Some of the storage types can be combined. For example, it is possible to put **/hana/data** onto premium storage and **/hana/log** can be placed on Ultra disk storage in order to get the required low latency. If you use a volume based on ANF for **/hana/data**, **/hana/log** volume needs to be based on NFS on top of ANF as well. Using NFS on top of ANF for one of the volumes (like /hana/data) and Azure premium storage or Ultra disk for the other volume (like **/hana/log**) is **not supported**.
+Some of the storage types can be combined. For example, it's possible to put **/hana/data** onto premium storage and **/hana/log** can be placed on Ultra disk storage in order to get the required low latency. If you use a volume based on ANF for **/hana/data**, **/hana/log** volume needs to be based on NFS on top of ANF as well. Using NFS on top of ANF for one of the volumes (like /hana/data) and Azure premium storage or Ultra disk for the other volume (like **/hana/log**) is **not supported**.
In the on-premises world, you rarely had to care about the I/O subsystems and its capabilities. Reason was that the appliance vendor needed to make sure that the minimum storage requirements are met for SAP HANA. As you build the Azure infrastructure yourself, you should be aware of some of these SAP issued requirements. Some of the minimum throughput characteristics that SAP is recommending, are:
In the on-premises world, you rarely had to care about the I/O subsystems and it
- Read activity of at least 400 MB/sec for **/hana/data** for 16 MB and 64 MB I/O sizes
- Write activity of at least 250 MB/sec for **/hana/data** with 16 MB and 64 MB I/O sizes
-Given that low storage latency is critical for DBMS systems, even as DBMS, like SAP HANA, keep data in-memory. The critical path in storage is usually around the transaction log writes of the DBMS systems. But also operations like writing savepoints or loading data in-memory after crash recovery can be critical. Therefore, it is **mandatory** to leverage Azure premium storage, Ultra disk, or ANF for **/hana/data** and **/hana/log** volumes.
+Low storage latency is critical for DBMS systems, even for databases like SAP HANA that keep data in-memory. The critical path in storage is usually around the transaction log writes of the DBMS. Operations like writing savepoints or loading data into memory after crash recovery can also be critical. Therefore, it's **mandatory** to use Azure premium storage, Ultra disk, or ANF for **/hana/data** and **/hana/log** volumes.
Some guiding principles in selecting your storage configuration for HANA can be listed like: - Decide on the type of storage based on [Azure Storage types for SAP workload](./planning-guide-storage.md) and [Select a disk type](../../disks-types.md) - The overall VM I/O throughput and IOPS limits in mind when sizing or deciding for a VM. Overall VM storage throughput is documented in the article [Memory optimized virtual machine sizes](../../sizes-memory.md). -- When deciding for the storage configuration, try to stay below the overall throughput of the VM with your **/hana/data** volume configuration. Writing savepoints, SAP HANA can be aggressive issuing I/Os. It is easily possible to push up to throughput limits of your **/hana/data** volume when writing a savepoint. If your disk(s) that build the **/hana/data** volume have a higher throughput than your VM allows, you could run into situations where throughput utilized by the savepoint writing is interfering with throughput demands of the redo log writes. A situation that can impact the application throughput-- If you are considering using HANA System Replication, you need to use exactly the same type of Azure storage for **/hana/data** and **/hana/log** for all the VMs participating in the HANA System Replication configuration. For example, using Azure premium storage for **/hana/data** with one VM and Azure Ultra disk for **/hana/log** in another VM within the same HANA System replication configuration, is not supported
+- When deciding on the storage configuration, try to stay below the overall throughput of the VM with your **/hana/data** volume configuration. When writing savepoints, SAP HANA can be aggressive issuing I/Os. It's easily possible to push up to the throughput limits of your **/hana/data** volume when writing a savepoint. If the disk(s) that build the **/hana/data** volume have a higher throughput than your VM allows, you could run into situations where the throughput utilized by the savepoint writing interferes with the throughput demands of the redo log writes, a situation that can impact the application throughput.
+- If you're considering using HANA System Replication, you need to use exactly the same type of Azure storage for **/hana/data** and **/hana/log** for all the VMs participating in the HANA System Replication configuration. For example, using Azure premium storage for **/hana/data** with one VM and Azure Ultra disk for **/hana/log** in another VM within the same HANA System replication configuration, isn't supported
> [!IMPORTANT]
-> The suggestions for the storage configurations are meant as directions to start with. Running workload and analyzing storage utilization patterns, you might realize that you are not utilizing all the storage bandwidth or IOPS provided. You might consider downsizing on storage then. Or in contrary, your workload might need more storage throughput than suggested with these configurations. As a result, you might need to deploy more capacity, IOPS or throughput. In the field of tension between storage capacity required, storage latency needed, storage throughput and IOPS required and least expensive configuration, Azure offers enough different storage types with different capabilities and different price points to find and adjust to the right compromise for you and your HANA workload.
+> The suggestions for the storage configurations in this or subsequent documents are meant as directions to start with. Running your workload and analyzing storage utilization patterns, you might realize that you're not utilizing all the storage bandwidth or IOPS provided. You might then consider downsizing on storage. Or, on the contrary, your workload might need more storage throughput than suggested with these configurations. As a result, you might need to deploy more capacity, IOPS, or throughput. When weighing the required storage capacity, storage latency, storage throughput, and IOPS against the least expensive configuration, Azure offers enough different storage types with different capabilities and price points to find and adjust to the right compromise for you and your HANA workload.
## Stripe sets versus SAP HANA data volume partitioning
-Using Azure premium storage you may hit the best price/performance ratio when you stripe the **/hanADM volume managers which are part of Linux. The method of striping disks is decades old and well known. As beneficial as those striped volumes are to get to the IOPS or throughput capabilities you may need, it adds complexities around managing those striped volumes. Especially in cases when the volumes need to get extended in capacity. At least for **/hana/data**, SAP introduced an alternative method that achieves the same goal as striping across multiple Azure disks. Since SAP HANA 2.0 SPS03, the HANA indexserver is able to stripe its I/O activity across multiple HANA data files, which are located on different Azure disks. The advantage is that you don't have to take care of creating and managing a striped volume across different Azure disks. The SAP HANA functionality of data volume partitioning is described in detail in:
+Using Azure premium storage, you may hit the best price/performance ratio when you stripe the **/hana/data** volume across multiple Azure premium storage disks with LVM or mdadm volume managers, which are part of Linux. The method of striping disks is decades old and well known. As beneficial as those striped volumes are to get to the IOPS or throughput capabilities you may need, they add complexity around managing those striped volumes, especially in cases when the volumes need to get extended in capacity. At least for **/hana/data**, SAP introduced an alternative method that achieves the same goal as striping across multiple Azure disks. Since SAP HANA 2.0 SPS03, the HANA indexserver is able to stripe its I/O activity across multiple HANA data files, which are located on different Azure disks. The advantage is that you don't have to take care of creating and managing a striped volume across different Azure disks. The SAP HANA functionality of data volume partitioning is described in detail in:
- [The HANA Administrator's Guide](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.05/en-US/40b2b2a880ec4df7bac16eae3daef756.html?q=hana%20data%20volume%20partitioning)
- [Blog about SAP HANA – Partitioning Data Volumes](https://blogs.sap.com/2020/10/07/sap-hana-partitioning-data-volumes/)
- [SAP Note #2400005](https://launchpad.support.sap.com/#/notes/2400005)
- [SAP Note #2700123](https://launchpad.support.sap.com/#/notes/2700123)
-Reading through the details, it is apparent that leveraging this functionality takes away complexities of volume manager based stripe sets. You also realize that the HANA data volume partitioning is not only working for Azure block storage, like Azure premium storage. You can use this functionality as well to stripe across NFS shares in case these shares have IOPS or throughput limitations.
+Reading through the details, it's apparent that applying this functionality takes away complexities of volume manager based stripe sets. You also realize that the HANA data volume partitioning isn't only working for Azure block storage, like Azure premium storage. You can use this functionality as well to stripe across NFS shares in case these shares have IOPS or throughput limitations.
## Linux I/O Scheduler mode
Linux has several different I/O scheduling modes. Common recommendation through
On Red Hat, leave the settings as established by the specific tune profiles for the different SAP applications.
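As a generic check that is independent of the distribution-specific tuning profiles mentioned above, you can query which scheduler is currently active for a given block device; `sda` below is only an example device name:

```bash
# The scheduler shown in square brackets is the one currently in use for this device
cat /sys/block/sda/queue/scheduler
```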
-## Solutions with premium storage and Azure Write Accelerator for Azure M-Series virtual machines
-Azure Write Accelerator is a functionality that is available for Azure M-Series VMs exclusively. As the name states, the purpose of the functionality is to improve I/O latency of writes against the Azure premium storage. For SAP HANA, Write Accelerator is supposed to be used against the **/hana/log** volume only. Therefore, the **/hana/data** and **/hana/log** are separate volumes with Azure Write Accelerator supporting the **/hana/log** volume only.
-
-> [!IMPORTANT]
-> When using Azure premium storage, the usage of Azure [Write Accelerator](../../how-to-enable-write-accelerator.md) for the **/hana/log** volume is mandatory. Write Accelerator is available for premium storage and M-Series and Mv2-Series VMs only. Write Accelerator is not working in combination with other Azure VM families, like Esv3 or Edsv4.
-
-The caching recommendations for Azure premium disks below are assuming the I/O characteristics for SAP HANA that list like:
--- There hardly is any read workload against the HANA data files. Exceptions are large sized I/Os after restart of the HANA instance or when data is loaded into HANA. Another case of larger read I/Os against data files can be HANA database backups. As a result read caching mostly does not make sense since in most of the cases, all data file volumes need to be read completely. -- Writing against the data files is experienced in bursts based by HANA savepoints and HANA crash recovery. Writing savepoints is asynchronous and are not holding up any user transactions. Writing data during crash recovery is performance critical in order to get the system responding fast again. However, crash recovery should be rather exceptional situations-- There are hardly any reads from the HANA redo files. Exceptions are large I/Os when performing transaction log backups, crash recovery, or in the restart phase of a HANA instance. -- Main load against the SAP HANA redo log file is writes. Dependent on the nature of workload, you can have I/Os as small as 4 KB or in other cases I/O sizes of 1 MB or more. Write latency against the SAP HANA redo log is performance critical.-- All writes need to be persisted on disk in a reliable fashion-
-**Recommendation: As a result of these observed I/O patterns by SAP HANA, the caching for the different volumes using Azure premium storage should be set like:**
--- **/hana/data** - no caching or read caching-- **/hana/log** - no caching - exception for M- and Mv2-Series VMs where Azure Write Accelerator should be enabled -- **/hana/shared** - read caching-- **OS disk** - don't change default caching that is set by Azure at creation time of the VM--
-If you are using LVM or mdadm to build stripe sets across several Azure premium disks, you need to define stripe sizes. These sizes differ between **/hana/data** and **/hana/log**. **Recommendation: As stripe sizes the recommendation is to use:**
+## Stripe sizes when using logical volume managers
+If you're using LVM or mdadm to build stripe sets across several Azure premium disks, you need to define stripe sizes. These sizes differ between **/hana/data** and **/hana/log**. **Recommendation: Use the following stripe sizes:**
- 256 KB for **/hana/data**
- 64 KB for **/hana/log**
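As an illustration, a minimal LVM sketch using these stripe sizes could look like the following. The device names, volume group and logical volume names, and the number of disks are placeholders and not a sizing recommendation from this article.

```bash
# Build a striped volume for /hana/data from four premium disks (256 KB stripe size)
vgcreate vg-hana-data /dev/sdc /dev/sdd /dev/sde /dev/sdf
lvcreate --stripes 4 --stripesize 256k --extents 100%FREE --name lv-hana-data vg-hana-data
mkfs.xfs /dev/vg-hana-data/lv-hana-data

# Build a striped volume for /hana/log from three premium disks (64 KB stripe size)
vgcreate vg-hana-log /dev/sdg /dev/sdh /dev/sdi
lvcreate --stripes 3 --stripesize 64k --extents 100%FREE --name lv-hana-log vg-hana-log
mkfs.xfs /dev/vg-hana-log/lv-hana-log
```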
If you are using LVM or mdadm to build stripe sets across several Azure premium
> [!NOTE] > You don't need to configure any redundancy level using RAID volumes since Azure block storage keeps three images of a VHD. The usage of a stripe set with Azure premium disks is purely to configure volumes that provide sufficient IOPS and/or I/O throughput.
-Accumulating a number of Azure VHDs underneath a stripe set, is accumulative from an IOPS and storage throughput side. So, if you put a stripe set across over 3 x P30 Azure premium storage disks, it should give you three times the IOPS and three times the storage throughput of a single Azure premium Storage P30 disk.
+Combining multiple Azure disks underneath a stripe set is accumulative from an IOPS and storage throughput perspective. So, if you put a stripe set across 3 x P30 Azure premium storage disks, it should give you three times the IOPS and three times the storage throughput of a single Azure premium storage P30 disk.
> [!IMPORTANT]
-> In case you are using LVM or mdadm as volume manager to create stripe sets across multiple Azure premium disks, the three SAP HANA FileSystems /data, /log and /shared must not be put in a default or root volume group. It is highly recommended to follow the Linux Vendors guidance which is typically to create individual Volume Groups for /data, /log and /shared.
--
-### Azure burst functionality for premium storage
-For Azure premium storage disks smaller or equal to 512 GiB in capacity, burst functionality is offered. The exact way how disk bursting works is described in the article [Disk bursting](../../disk-bursting.md). When you read the article, you understand the concept of accruing IOPS and throughput in the times when your I/O workload is below the nominal IOPS and throughput of the disks (for details on the nominal throughput see [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/)). You are going to accrue the delta of IOPS and throughput between your current usage and the nominal values of the disk. The bursts are limited to a maximum of 30 minutes.
-
-The ideal cases where this burst functionality can be planned in is likely going to be the volumes or disks that contain data files for the different DBMS. The I/O workload expected against those volumes, especially with small to mid-ranged systems is expected to look like:
--- Low to moderate read workload since data ideally is cached in memory, or like in the case of HANA should be completely in memory-- Bursts of write triggered by database checkpoints or savepoints that are issued on a regular basis-- Backup workload that reads in a continuous stream in cases where backups are not executed via storage snapshots-- For SAP HANA, load of the data into memory after an instance restart-
-Especially on smaller DBMS systems where your workload is handling a few hundred transactions per seconds only, such a burst functionality can make sense as well for the disks or volumes that store the transaction or redo log. Expected workload against such a disk or volumes looks like:
+> In case you're using LVM or mdadm as volume manager to create stripe sets across multiple Azure premium disks, the three SAP HANA file systems /data, /log and /shared must not be put in a default or root volume group. It's highly recommended to follow the Linux vendors' guidance, which is typically to create individual volume groups for /data, /log and /shared.
-- Regular writes to the disk that are dependent on the workload and the nature of workload since every commit issued by the application is likely to trigger an I/O operation-- Higher workload in throughput for cases of operational tasks, like creating or rebuilding indexes-- Read bursts when performing transaction log or redo log backups--
-### Production recommended storage solution based on Azure premium storage
-
-> [!IMPORTANT]
-> SAP HANA certification for Azure M-Series virtual machines is exclusively with Azure Write Accelerator for the **/hana/log** volume. As a result, production scenario SAP HANA deployments on Azure M-Series virtual machines are expected to be configured with Azure Write Accelerator for the **/hana/log** volume.
-
-> [!NOTE]
-> In scenarios that involve Azure premium storage, we are implementing burst capabilities into the configuration. As you are using storage test tools of whatever shape or form, keep the way [Azure premium disk bursting works](../../disk-bursting.md) in mind. Running the storage tests delivered through the SAP HWCCT or HCMT tool, we are not expecting that all tests will pass the criteria since some of the tests will exceed the bursting credits you can accumulate. Especially when all the tests run sequentially without break.
-
-> [!NOTE]
-> With M32ts and M32ls VMs it can happen that disk throughput could be lower than expected using HCMT/HWCCT disk tests. Even with disk bursting or with sufficiently provisioned I/O throughput of the underlying disks. Root cause of the observed behavior was that the HCMT/HWCCT storage test files were completely cached in the read cache of the Premium storage data disks. This cache is located on the compute host that hosts the virtual machine and can cache the test files of HCMT/HWCCT completely. In such a case the quotas listed in the column **Max cached and temp storage throughput: IOPS/MBps (cache size in GiB)** in the article [M-series](../../m-series.md) are relevant. Specifically for M32ts and M32ls, the throughput quota against the read cache is only 400MB/sec. As a result of the tests files being completely cached, it is possible that despite disk bursting or higher provisioned I/O throughput, the tests can fall slightly short of 400MB/sec maximum throughput. As an alternative, you can test without read cache enabled on the Azure Premium storage data disks.
--
-> [!NOTE]
-> For production scenarios, check whether a certain VM type is supported for SAP HANA by SAP in the [SAP documentation for IAAS](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html).
-
-**Recommendation: The recommended configurations with Azure premium storage for production scenarios look like:**
-
-Configuration for SAP **/hana/data** volume:
-
-| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/data | Provisioned Throughput | Maximum burst throughput | IOPS | Burst IOPS |
-| | | | | | | |
-| M32ts | 192 GiB | 500 MBps | 4 x P6 | 200 MBps | 680 MBps | 960 | 14,000 |
-| M32ls | 256 GiB | 500 MBps | 4 x P6 | 200 MBps | 680 MBps | 960 | 14,000 |
-| M64ls | 512 GiB | 1,000 MBps | 4 x P10 | 400 MBps | 680 MBps | 2,000 | 14,000 |
-| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000 |
-| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000 |
-| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |
-| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200| 14,000 |
-| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200| 14,000 |
-| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting |
-| M192ims, M192idms_v2 | 4,096 GiB | 2,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting |
-| M208s_v2 | 2,850 GiB | 1,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000| no bursting |
-| M208ms_v2 | 5,700 GiB | 1,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting |
-| M416s_v2 | 5,700 GiB | 2,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting |
-| M416ms_v2 | 11,400 GiB | 2,000 MBps | 4 x P50 | 1,000 MBps | no bursting | 30,000 | no bursting |
--
-For the **/hana/log** volume. the configuration would look like:
-
-| VM SKU | RAM | Max. VM I/O<br /> Throughput | **/hana/log** volume | Provisioned Throughput | Maximum burst throughput | IOPS | Burst IOPS |
-| | | | | | | |
-| M32ts | 192 GiB | 500 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
-| M32ls | 256 GiB | 500 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
-| M64ls | 512 GiB | 1,000 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
-| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
-| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
-| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
-| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500|
-| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500|
-| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
-| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
-| M208s_v2 | 2,850 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
-| M208ms_v2 | 5,700 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
-| M416s_v2 | 5,700 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
-| M416ms_v2 | 11,400 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
--
-For the other volumes, the configuration would look like:
-
-| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/shared | /root volume | /usr/sap |
-| | | | | | | | | -- |
-| M32ts | 192 GiB | 500 MBps | 1 x P15 | 1 x P6 | 1 x P6 |
-| M32ls | 256 GiB | 500 MBps | 1 x P15 | 1 x P6 | 1 x P6 |
-| M64ls | 512 GiB | 1000 MBps | 1 x P20 | 1 x P6 | 1 x P6 |
-| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 1 x P30 | 1 x P6 | 1 x P6 |
-| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 1 x P30 | 1 x P6 | 1 x P6 |
-| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 1 x P30 | 1 x P6 | 1 x P6 |
-| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
-| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
-| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
-| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
-| M208s_v2 | 2,850 GiB | 1,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
-| M208ms_v2 | 5,700 GiB | 1,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
-| M416s_v2 | 5,700 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
-| M416ms_v2 | 11,400 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
--
-Check whether the storage throughput for the different suggested volumes meets the workload that you want to run. If the workload requires higher volumes for **/hana/data** and **/hana/log**, you need to increase the number of Azure premium storage VHDs. Sizing a volume with more VHDs than listed increases the IOPS and I/O throughput within the limits of the Azure virtual machine type.
-
-Azure Write Accelerator only works with [Azure managed disks](https://azure.microsoft.com/services/managed-disks/). So at least the Azure premium storage disks forming the **/han).
-
-For the HANA certified VMs of the Azure [Esv3](../../ev3-esv3-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#esv3-series) family and the [Edsv4](../../edv4-edsv4-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#edsv4-series), [Edsv5](../../edv5-edsv5-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#edsv5-series), and [Esv5](../../ev5-esv5-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#esv5-series) you need to use ANF for the **/hana/data** and **/hana/log** volume. Or you need to leverage Azure Ultra disk storage instead of Azure premium storage only for the **/hana/log** volume to be compliant with the SAP HANA certification KPIs. Though, many customers are using premium storage SSD disks for the **/hana/log** volume for non-production purposes or even for smaller production workloads since the write latency experienced with premium storage for the critical redo log writes are meeting the workload requirements. The configurations for the **/hana/data** volume on Azure premium storage could look like:
-
-| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/data | Provisioned Throughput | Maximum burst throughput | IOPS | Burst IOPS |
-| | | | | | | |
-| E20ds_v4| 160 GiB | 480 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
-| E20(d)s_v5| 160 GiB | 750 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
-| E32ds_v4 | 256 GiB | 768 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500|
-| E32ds_v5 | 256 GiB | 865 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500|
-| E48ds_v4 | 384 GiB | 1,152 MBps | 3 x P15 | 375 MBps |510 MBps | 3,300 | 10,500 |
-| E48ds_v4 | 384 GiB | 1,315 MBps | 3 x P15 | 375 MBps |510 MBps | 3,300 | 10,500 |
-| E64s_v3 | 432 GiB | 1,200 MB/s | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
-| E64ds_v4 | 504 GiB | 1,200 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
-| E64(d)s_v5 | 512 GiB | 1,735 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
-| E96(d)s_v5 | 672 GiB | 2,600 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
--
-For the other volumes, including **/hana/log** on Ultra disk, the configuration could look like:
-
-| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/log volume | /hana/log I/O throughput | /hana/log IOPS | /hana/shared | /root volume | /usr/sap |
-| | | | | | | | | -- |
-| E20ds_v4 | 160 GiB | 480 MBps | 80 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
-| E20(d)s_v5 | 160 GiB | 750 MBps | 80 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
-| E32ds_v4 | 256 GiB | 768 MBps | 128 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
-| E32(d)s_v5 | 256 GiB | 865 MBps | 128 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
-| E48ds_v4 | 384 GiB | 1,152 MBps | 192 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
-| E48(d)s_v5 | 384 GiB | 1,315 MBps | 192 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
-| E64s_v3 | 432 GiB | 1,200 MBps | 220 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
-| E64ds_v4 | 504 GiB | 1,200 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
-| E64(d)s_v5 | 512 GiB | 1,735 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
-| E96(d)s_v5 | 672 GiB | 2,600 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
+## Azure Premium Storage configurations for HANA
+For detailed HANA storage configuration recommendations using Azure premium storage, read the document [SAP HANA Azure virtual machine Premium SSD storage configurations](./hana-vm-premium-ssd-v1.md)
+## Azure Premium SSD v2 configurations for HANA
+For detailed HANA storage configuration recommendations using Azure Premium SSD v2 storage, read the document [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md)
## Azure Ultra disk storage configuration for SAP HANA
-Another Azure storage type is called [Azure Ultra disk](../../disks-types.md#ultra-disks). The significant difference between Azure storage offered so far and Ultra disk is that the disk capabilities are not bound to the disk size anymore. As a customer you can define these capabilities for Ultra disk:
--- Size of a disk ranging from 4 GiB to 65,536 GiB-- IOPS range from 100 IOPS to 160K IOPS (maximum depends on VM types as well)-- Storage throughput from 300 MB/sec to 2,000 MB/sec-
-Ultra disk gives you the possibility to define a single disk that fulfills your size, IOPS, and disk throughput range. Instead of using logical volume managers like LVM or MDADM on top of Azure premium storage to construct volumes that fulfill IOPS and storage throughput requirements. You can run a configuration mix between Ultra disk and premium storage. As a result, you can limit the usage of Ultra disk to the performance critical **/hana/data** and **/hana/log** volumes and cover the other volumes with Azure premium storage
-
-Other advantages of Ultra disk can be the better read latency in comparison to premium storage. The faster read latency can have advantages when you want to reduce the HANA startup times and the subsequent load of the data into memory. Advantages of Ultra disk storage also can be felt when HANA is writing savepoints.
-
-> [!NOTE]
-> Ultra disk is not yet present in all the Azure regions and is also not yet supporting all VM types listed below. For detailed information where Ultra disk is available and which VM families are supported, check the article [What disk types are available in Azure?](../../disks-types.md#ultra-disks).
-
-### Production recommended storage solution with pure Ultra disk configuration
-In this configuration, you keep the **/hana/data** and **/hana/log** volumes separately. The suggested values are derived out of the KPIs that SAP has to certify VM types for SAP HANA and storage configurations as recommended in the [SAP TDI Storage Whitepaper](https://www.sap.com/documents/2017/09/e6519450-d47c-0010-82c7-eda71af511fa.html).
-
-The recommendations are often exceeding the SAP minimum requirements as stated earlier in this article. The listed recommendations are a compromise between the size recommendations by SAP and the maximum storage throughput the different VM types provide.
-
-> [!NOTE]
-> Azure Ultra disk is enforcing a minimum of 2 IOPS per Gigabyte capacity of a disk
--
-| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/data volume | /hana/data I/O throughput | /hana/data IOPS | /hana/log volume | /hana/log I/O throughput | /hana/log IOPS |
-| | | | | | | | | -- |
-| E20ds_v4 | 160 GiB | 480 MB/s | 200 GB | 400 MBps | 2,500 | 80 GB | 250 MB | 1,800 |
-| E32ds_v4 | 256 GiB | 768 MB/s | 300 GB | 400 MBps | 2,500 | 128 GB | 250 MBps | 1,800 |
-| E48ds_v4 | 384 GiB | 1152 MB/s | 460 GB | 400 MBps | 3,000 | 192 GB | 250 MBps | 1,800 |
-| E64ds_v4 | 504 GiB | 1200 MB/s | 610 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800 |
-| E64s_v3 | 432 GiB | 1,200 MB/s | 610 GB | 400 MBps | 3,500 | 220 GB | 250 MB | 1,800 |
-| M32ts | 192 GiB | 500 MB/s | 250 GB | 400 MBps | 2,500 | 96 GB | 250 MBps | 1,800 |
-| M32ls | 256 GiB | 500 MB/s | 300 GB | 400 MBps | 2,500 | 256 GB | 250 MBps | 1,800 |
-| M64ls | 512 GiB | 1,000 MB/s | 620 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800 |
-| M32dms_v2, M32ms_v2 | 875 GiB | 500 MB/s | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
-| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MB/s | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
-| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MB/s | 2,100 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
-| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MB/s |2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
-| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MB/s |2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
-| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MB/s | 4,800 GB | 750 MBps |9,600 | 512 GB | 250 MBps | 2,500 |
-| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MB/s | 4,800 GB | 750 MBps |9,600 | 512 GB | 250 MBps | 2,500 |
-| M208s_v2 | 2,850 GiB | 1,000 MB/s | 3,500 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
-| M208ms_v2 | 5,700 GiB | 1,000 MB/s | 7,200 GB | 750 MBps | 14,400 | 512 GB | 250 MBps | 2,500 |
-| M416s_v2 | 5,700 GiB | 2,000 MB/s | 7,200 GB | 1,000 MBps | 14,400 | 512 GB | 400 MBps | 4,000 |
-| M416ms_v2 | 11,400 GiB | 2,000 MB/s | 14,400 GB | 1,500 MBps | 28,800 | 512 GB | 400 MBps | 4,000 |
-
-**The values listed are intended to be a starting point and need to be evaluated against the real demands.** The advantage with Azure Ultra disk is that the values for IOPS and throughput can be adapted without the need to shut down the VM or halting the workload applied to the system.
-
-> [!NOTE]
-> So far, storage snapshots with Ultra disk storage is not available. This blocks the usage of VM snapshots with Azure Backup Services
-
+For detailed HANA storage configuration recommendations using Azure Ultra Disk, read the document [SAP HANA Azure virtual machine Ultra Disk storage configurations](./hana-vm-ultra-disk.md)
## NFS v4.1 volumes on Azure NetApp Files
For detail on ANF for HANA, read the document [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
---
-## Cost conscious solution with Azure premium storage
-So far, the Azure premium storage solution described in this document in section [Solutions with premium storage and Azure Write Accelerator for Azure M-Series virtual machines](#solutions-with-premium-storage-and-azure-write-accelerator-for-azure-m-series-virtual-machines) were meant for SAP HANA production supported scenarios. One of the characteristics of production supportable configurations is the separation of the volumes for SAP HANA data and redo log into two different volumes. Reason for such a separation is that the workload characteristics on the volumes are different. And that with the suggested production configurations, different type of caching or even different types of Azure block storage could be necessary. For non-production scenarios, some of the considerations taken for production systems may not apply to more low end non-production systems. As a result the HANA data and log volume could be combined. Though eventually with some culprits, like eventually not meeting certain throughput or latency KPIs that are required for production systems. Another aspect to reduce costs in such environments can be the usage of [Azure Standard SSD storage](./planning-guide-storage.md#azure-standard-ssd-storage). Keep in mind that choosing Standard SSD or Standard HDD Azure storage has impact on your single VM SLAs as documented in the article [SLA for Virtual Machines](https://azure.microsoft.com/support/legal/sla/virtual-machines).
-
-A less costly alternative for such configurations could look like:
--
-| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hanADM | /hana/shared | /root volume | /usr/sap | comments |
-| | | | | | | | -- |
-| DS14v2 | 112 GiB | 768 MB/s | 4 x P6 | 1 x E10 | 1 x E6 | 1 x E6 | Will not achieve less than 1ms storage latency<sup>1</sup> |
-| E16v3 | 128 GiB | 384 MB/s | 4 x P6 | 1 x E10 | 1 x E6 | 1 x E6 | VM type not HANA certified <br /> Will not achieve less than 1ms storage latency<sup>1</sup> |
-| M32ts | 192 GiB | 500 MB/s | 3 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 5,000<sup>2</sup> |
-| E20ds_v4 | 160 GiB | 480 MB/s | 4 x P6 | 1 x E15 | 1 x E6 | 1 x E6 | Will not achieve less than 1ms storage latency<sup>1</sup> |
-| E32v3 | 256 GiB | 768 MB/s | 4 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | VM type not HANA certified <br /> Will not achieve less than 1ms storage latency<sup>1</sup> |
-| E32ds_v4 | 256 GiB | 768 MBps | 4 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | Will not achieve less than 1ms storage latency<sup>1</sup> |
-| M32ls | 256 GiB | 500 MB/s | 4 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 5,000<sup>2</sup> |
-| E48ds_v4 | 384 GiB | 1,152 MBps | 6 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | Will not achieve less than 1ms storage latency<sup>1</sup> |
-| E64v3 | 432 GiB | 1,200 MB/s | 6 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | Will not achieve less than 1ms storage latency<sup>1</sup> |
-| E64ds_v4 | 504 GiB | 1200 MB/s | 7 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | Will not achieve less than 1ms storage latency<sup>1</sup> |
-| M64ls | 512 GiB | 1,000 MB/s | 7 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> |
-| M32dms_v2, M32ms_v2 | 875 GiB | 500 MB/s | 6 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 5,000<sup>2</sup> |
-| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MB/s | 7 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> |
-| M64ms, M64dms_v2, M64ms_v2| 1,792 GiB | 1,000 MB/s | 6 x P20 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> |
-| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MB/s |6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
-| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MB/s |6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
-| M128ms, M128dms_v2, M128ms_v2 | 3,800 GiB | 2,000 MB/s | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
-| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MB/s | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
-| M208s_v2 | 2,850 GiB | 1,000 MB/s | 4 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> |
-| M208ms_v2 | 5,700 GiB | 1,000 MB/s | 4 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> |
-| M416s_v2 | 5,700 GiB | 2,000 MB/s | 4 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
-| M416ms_v2 | 11400 GiB | 2,000 MB/s | 7 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
--
-<sup>1</sup> [Azure Write Accelerator](../../how-to-enable-write-accelerator.md) can't be used with the Ev4 and Ev4 VM families. As a result of using Azure premium storage the I/O latency will not be less than 1ms
-
-<sup>2</sup> The VM family supports [Azure Write Accelerator](../../how-to-enable-write-accelerator.md), but there is a potential that the IOPS limit of Write accelerator could limit the disk configurations IOPS capabilities
-
-In the case of combining the data and log volume for SAP HANA, the disks building the striped volume should not have read cache or read/write cache enabled.
-
-There are VM types listed that are not certified with SAP and as such not listed in the so called [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure). Feedback of customers was that those non-listed VM types were used successfully for some non-production tasks.
-- ## Next steps For more information, see:
+- [SAP HANA Azure virtual machine Premium SSD storage configurations](./hana-vm-premium-ssd-v1.md).
+- [SAP HANA Azure virtual machine Ultra Disk storage configurations](./hana-vm-ultra-disk.md).
+- [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md).
- [SAP HANA High Availability guide for Azure virtual machines](./sap-hana-availability-overview.md).
virtual-machines Hana Vm Premium Ssd V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-premium-ssd-v1.md
+
+ Title: SAP HANA Azure virtual machine premium storage configurations | Microsoft Docs
+description: Storage recommendations for SAP HANA using Azure premium storage.
++
+tags: azure-resource-manager
+keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage'
+++ Last updated : 10/07/2022++++
+# SAP HANA Azure virtual machine Premium SSD storage configurations
+This document is about HANA storage configurations for Azure premium storage, or premium SSD, as it was introduced years back as low latency storage for DBMS and other applications that need low latency storage. For general considerations around stripe sizes when using LVM, HANA data volume partitioning, or other considerations that are independent of the particular storage type, check these two documents:
+
+- [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
+- [Azure Storage types for SAP workload](./planning-guide-storage.md)
++
+> [!IMPORTANT]
+> The suggestions for the storage configurations in this document are meant as directions to start with. Running your workload and analyzing storage utilization patterns, you might realize that you aren't utilizing all the storage bandwidth or IOPS provided. You might then consider downsizing on storage. Or, on the contrary, your workload might need more storage throughput than suggested with these configurations. As a result, you might need to deploy more capacity, IOPS, or throughput. In the field of tension between storage capacity required, storage latency needed, storage throughput and IOPS required, and the least expensive configuration, Azure offers enough different storage types with different capabilities and different price points to find and adjust to the right compromise for you and your HANA workload.
+
+## Solutions with premium storage and Azure Write Accelerator for Azure M-Series virtual machines
+Azure Write Accelerator is a functionality that is available for Azure M-Series VMs exclusively in combination with Azure premium storage. As the name states, the purpose of the functionality is to improve I/O latency of writes against the Azure premium storage. For SAP HANA, Write Accelerator is supposed to be used against the **/hana/log** volume only. Therefore, the **/hana/data** and **/hana/log** are separate volumes with Azure Write Accelerator supporting the **/hana/log** volume only.
+
+> [!IMPORTANT]
+> When using Azure premium storage, the usage of Azure [Write Accelerator](../../how-to-enable-write-accelerator.md) for the **/hana/log** volume is mandatory. Write Accelerator is available for premium storage and M-Series and Mv2-Series VMs only. Write Accelerator is not working in combination with other Azure VM families, like Esv3 or Edsv4.
+
+The caching recommendations for Azure premium disks below assume the following I/O characteristics for SAP HANA:
+
+- There's hardly any read workload against the HANA data files. Exceptions are large-sized I/Os after restart of the HANA instance or when data is loaded into HANA. Another case of larger read I/Os against data files can be HANA database backups. As a result, read caching mostly doesn't make sense, since in most cases all data file volumes need to be read completely.
+- Writing against the data files is experienced in bursts driven by HANA savepoints and HANA crash recovery. Writing savepoints is asynchronous and doesn't hold up any user transactions. Writing data during crash recovery is performance critical in order to get the system responding fast again. However, crash recovery should be a rather exceptional situation.
+- There are hardly any reads from the HANA redo files. Exceptions are large I/Os when performing transaction log backups, crash recovery, or in the restart phase of a HANA instance.
+- The main load against the SAP HANA redo log file is writes. Dependent on the nature of workload, you can have I/Os as small as 4 KB or, in other cases, I/O sizes of 1 MB or more. Write latency against the SAP HANA redo log is performance critical.
+- All writes need to be persisted on disk in a reliable fashion
+
+**Recommendation: As a result of these observed I/O patterns by SAP HANA, the caching for the different volumes using Azure premium storage should be set as follows (a short sketch after this list summarizes the mapping):**
+
+- **/hana/data** - no caching or read caching
+- **/hana/log** - no caching - exception for M- and Mv2-Series VMs where Azure Write Accelerator should be enabled
+- **/hana/shared** - read caching
+- **OS disk** - don't change default caching that is set by Azure at creation time of the VM
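+A minimal sketch of how this guidance could be captured in a deployment helper is shown below. This is illustrative Python only; the function name and the simple M-series check are assumptions of this example, not part of any Azure SDK or official tooling.

```python
# Minimal, illustrative mapping of the caching guidance above onto disk settings.
# Not an official tool; names and the M-series check are assumptions of this sketch.

def recommended_disk_settings(volume: str, vm_sku: str) -> dict:
    """Return host caching and Write Accelerator guidance for a HANA volume."""
    is_m_series = vm_sku.upper().startswith("M")  # M- and Mv2-Series support Write Accelerator
    if volume == "/hana/data":
        return {"caching": "None or ReadOnly", "write_accelerator": False}
    if volume == "/hana/log":
        # Write Accelerator is mandatory for /hana/log on M-series with premium storage
        return {"caching": "None", "write_accelerator": is_m_series}
    if volume == "/hana/shared":
        return {"caching": "ReadOnly", "write_accelerator": False}
    # OS disk and everything else: keep the default caching Azure sets at VM creation
    return {"caching": "keep Azure default", "write_accelerator": False}

print(recommended_disk_settings("/hana/log", "M64s_v2"))
# {'caching': 'None', 'write_accelerator': True}
```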
++
+### Azure burst functionality for premium storage
+For Azure premium storage disks smaller than or equal to 512 GiB in capacity, burst functionality is offered. The exact way disk bursting works is described in the article [Disk bursting](../../disk-bursting.md). When you read the article, you understand the concept of accruing IOPS and throughput in the times when your I/O workload is below the nominal IOPS and throughput of the disks (for details on the nominal throughput see [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/)). You're going to accrue the delta of IOPS and throughput between your current usage and the nominal values of the disk. The bursts are limited to a maximum of 30 minutes.
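+To make the credit mechanism more tangible, the following Python sketch models it in a strongly simplified way: credits accrue while demand stays below the nominal IOPS of the disk, and demand above nominal is served from those credits up to the burst limit for at most 30 minutes. The nominal and burst values, and the assumption that a disk starts with a full credit bucket, are placeholders of this example and not quotas from the price list.

```python
# Simplified, illustrative burst-credit model for a single small premium SSD disk.
# nominal_iops and burst_iops are example values, not official disk quotas.

def simulate_bursting(io_demand_per_sec, nominal_iops=500, burst_iops=3500,
                      max_burst_seconds=30 * 60):
    """Yield the IOPS served each second under a simple credit model."""
    max_credits = (burst_iops - nominal_iops) * max_burst_seconds
    credits = max_credits  # assumption: the disk starts with a full credit bucket
    for demand in io_demand_per_sec:
        if demand <= nominal_iops:
            # Below nominal: serve everything and accrue the unused delta as credits
            credits = min(max_credits, credits + (nominal_iops - demand))
            yield demand
        else:
            # Above nominal: burst by drawing down credits, capped at the burst limit
            extra = min(min(demand, burst_iops) - nominal_iops, credits)
            credits -= extra
            yield nominal_iops + extra

# Example: one hour of light load, then a ten-minute spike of 3,000 IOPS
demand = [100] * 3600 + [3000] * 600
served = list(simulate_bursting(demand))
print(max(served))  # 3000 -> the spike is fully served from accrued credits
```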
+
+The ideal candidates for planning in this burst functionality are likely the volumes or disks that contain data files for the different DBMS. The I/O workload expected against those volumes, especially with small to mid-ranged systems, is expected to look like:
+
+- Low to moderate read workload since data ideally is cached in memory, or like in the case of HANA should be completely in memory
+- Bursts of write triggered by database checkpoints or savepoints that are issued on a regular basis
+- Backup workload that reads in a continuous stream in cases where backups are not executed via storage snapshots
+- For SAP HANA, load of the data into memory after an instance restart
+
+Especially on smaller DBMS systems, where your workload is handling only a few hundred transactions per second, such a burst functionality can make sense as well for the disks or volumes that store the transaction or redo log. Expected workload against such a disk or volume looks like:
+
+- Regular writes to the disk that are dependent on the workload and the nature of workload since every commit issued by the application is likely to trigger an I/O operation
+- Higher workload in throughput for cases of operational tasks, like creating or rebuilding indexes
+- Read bursts when performing transaction log or redo log backups
++
+### Production recommended storage solution based on Azure premium storage
+
+> [!IMPORTANT]
+> SAP HANA certification for Azure M-Series virtual machines is exclusively with Azure Write Accelerator for the **/hana/log** volume. As a result, production scenario SAP HANA deployments on Azure M-Series virtual machines are expected to be configured with Azure Write Accelerator for the **/hana/log** volume.
+
+> [!NOTE]
+> In scenarios that involve Azure premium storage, we are implementing burst capabilities into the configuration. If you're using storage test tools of whatever shape or form, keep the way [Azure premium disk bursting works](../../disk-bursting.md) in mind. Running the storage tests delivered through the SAP HWCCT or HCMT tool, we aren't expecting that all tests will pass the criteria, since some of the tests will exceed the bursting credits you can accumulate, especially when all the tests run sequentially without a break.
+
+> [!NOTE]
+> With M32ts and M32ls VMs it can happen that disk throughput could be lower than expected using HCMT/HWCCT disk tests, even with disk bursting or with sufficiently provisioned I/O throughput of the underlying disks. Root cause of the observed behavior was that the HCMT/HWCCT storage test files were completely cached in the read cache of the Premium storage data disks. This cache is located on the compute host that hosts the virtual machine and can cache the test files of HCMT/HWCCT completely. In such a case the quotas listed in the column **Max cached and temp storage throughput: IOPS/MBps (cache size in GiB)** in the article [M-series](../../m-series.md) are relevant. Specifically for M32ts and M32ls, the throughput quota against the read cache is only 400 MB/sec. As a result of the test files being completely cached, it's possible that despite disk bursting or higher provisioned I/O throughput, the tests can fall slightly short of 400 MB/sec maximum throughput. As an alternative, you can test without read cache enabled on the Azure Premium storage data disks.
++
+> [!NOTE]
+> For production scenarios, check whether a certain VM type is supported for SAP HANA by SAP in the [SAP documentation for IAAS](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html).
+
+**Recommendation: The recommended configurations with Azure premium storage for production scenarios look like:**
+
+Configuration for SAP **/hana/data** volume:
+
+| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/data | Provisioned Throughput | Maximum burst throughput | IOPS | Burst IOPS |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| M32ts | 192 GiB | 500 MBps | 4 x P6 | 200 MBps | 680 MBps | 960 | 14,000 |
+| M32ls | 256 GiB | 500 MBps | 4 x P6 | 200 MBps | 680 MBps | 960 | 14,000 |
+| M64ls | 512 GiB | 1,000 MBps | 4 x P10 | 400 MBps | 680 MBps | 2,000 | 14,000 |
+| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000 |
+| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000 |
+| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |
+| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200| 14,000 |
+| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200| 14,000 |
+| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting |
+| M192ims, M192idms_v2 | 4,096 GiB | 2,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting |
+| M208s_v2 | 2,850 GiB | 1,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000| no bursting |
+| M208ms_v2 | 5,700 GiB | 1,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting |
+| M416s_v2 | 5,700 GiB | 2,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting |
+| M416ms_v2 | 11,400 GiB | 2,000 MBps | 4 x P50 | 1,000 MBps | no bursting | 30,000 | no bursting |
++
+For the **/hana/log** volume, the configuration would look like:
+
+| VM SKU | RAM | Max. VM I/O<br /> Throughput | **/hana/log** volume | Provisioned Throughput | Maximum burst throughput | IOPS | Burst IOPS |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| M32ts | 192 GiB | 500 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
+| M32ls | 256 GiB | 500 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
+| M64ls | 512 GiB | 1,000 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
+| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500|
+| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500|
+| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| M208s_v2 | 2,850 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| M208ms_v2 | 5,700 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| M416s_v2 | 5,700 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| M416ms_v2 | 11,400 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
++
+For the other volumes, the configuration would look like:
+
+| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/shared | /root volume | /usr/sap |
+| --- | --- | --- | --- | --- | --- |
+| M32ts | 192 GiB | 500 MBps | 1 x P15 | 1 x P6 | 1 x P6 |
+| M32ls | 256 GiB | 500 MBps | 1 x P15 | 1 x P6 | 1 x P6 |
+| M64ls | 512 GiB | 1000 MBps | 1 x P20 | 1 x P6 | 1 x P6 |
+| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 1 x P30 | 1 x P6 | 1 x P6 |
+| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 1 x P30 | 1 x P6 | 1 x P6 |
+| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 1 x P30 | 1 x P6 | 1 x P6 |
+| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
+| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
+| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
+| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
+| M208s_v2 | 2,850 GiB | 1,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
+| M208ms_v2 | 5,700 GiB | 1,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
+| M416s_v2 | 5,700 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
+| M416ms_v2 | 11,400 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
++
+Check whether the storage throughput for the different suggested volumes meets the workload that you want to run. If the workload requires more throughput or IOPS for **/hana/data** and **/hana/log**, you need to increase the number of Azure premium storage VHDs. Sizing a volume with more VHDs than listed increases the IOPS and I/O throughput within the limits of the Azure virtual machine type.
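+As a rough sketch of that sizing logic, the Python snippet below adds up per-disk capabilities of a stripe set and caps the result at the VM limits. The per-disk throughput and IOPS values are the ones reflected in the tables above (for example, 4 x P15 = 500 MBps and 4,400 IOPS); the VM IOPS limit used in the example is the 40,000 IOPS listed for M64s in the Premium SSD v2 tables further down in this update. The function itself is only an illustration.

```python
# Illustrative only: aggregate throughput/IOPS of a striped premium SSD volume,
# capped at the limits of the VM type. Per-disk values match the tables above.

PREMIUM_DISKS = {            # nominal per-disk throughput (MBps) and IOPS
    "P6":  (50, 240),
    "P10": (100, 500),
    "P15": (125, 1100),
    "P20": (150, 2300),
    "P30": (200, 5000),
    "P40": (250, 7500),
}

def striped_volume(disk_type: str, count: int, vm_mbps: int, vm_iops: int):
    """Return (throughput in MBps, IOPS) of a stripe set, capped at the VM limits."""
    mbps, iops = PREMIUM_DISKS[disk_type]
    return min(count * mbps, vm_mbps), min(count * iops, vm_iops)

# Example: /hana/data as 4 x P15 on an M64s (1,000 MBps VM throughput, 40,000 IOPS)
print(striped_volume("P15", 4, vm_mbps=1000, vm_iops=40000))  # (500, 4400)
```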
+
+Azure Write Accelerator only works with [Azure managed disks](https://azure.microsoft.com/services/managed-disks/). So at least the Azure premium storage disks forming the **/hana/log** volume need to be deployed as managed disks.
+
+For the HANA certified VMs of the Azure [Esv3](../../ev3-esv3-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#esv3-series) family and the [Edsv4](../../edv4-edsv4-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#edsv4-series), [Edsv5](../../edv5-edsv5-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#edsv5-series), and [Esv5](../../ev5-esv5-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#esv5-series) families, you need to use ANF for the **/hana/data** and **/hana/log** volume. Or you need to use Azure Ultra disk storage instead of Azure premium storage only for the **/hana/log** volume to be compliant with the SAP HANA certification KPIs. Though, many customers are using premium storage SSD disks for the **/hana/log** volume for non-production purposes or even for smaller production workloads, since the write latency experienced with premium storage for the critical redo log writes is meeting the workload requirements. The configurations for the **/hana/data** volume on Azure premium storage could look like:
+
+| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/data | Provisioned Throughput | Maximum burst throughput | IOPS | Burst IOPS |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| E20ds_v4| 160 GiB | 480 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
+| E20(d)s_v5| 160 GiB | 750 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
+| E32ds_v4 | 256 GiB | 768 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500|
+| E32ds_v5 | 256 GiB | 865 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500|
+| E48ds_v4 | 384 GiB | 1,152 MBps | 3 x P15 | 375 MBps |510 MBps | 3,300 | 10,500 |
+| E48(d)s_v5 | 384 GiB | 1,315 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| E64s_v3 | 432 GiB | 1,200 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| E64ds_v4 | 504 GiB | 1,200 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| E64(d)s_v5 | 512 GiB | 1,735 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| E96(d)s_v5 | 672 GiB | 2,600 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
++
+For the other volumes, including **/hana/log** on Ultra disk, the configuration could look like:
+
+| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/log volume | /hana/log I/O throughput | /hana/log IOPS | /hana/shared | /root volume | /usr/sap |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| E20ds_v4 | 160 GiB | 480 MBps | 80 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
+| E20(d)s_v5 | 160 GiB | 750 MBps | 80 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
+| E32ds_v4 | 256 GiB | 768 MBps | 128 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
+| E32(d)s_v5 | 256 GiB | 865 MBps | 128 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
+| E48ds_v4 | 384 GiB | 1,152 MBps | 192 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
+| E48(d)s_v5 | 384 GiB | 1,315 MBps | 192 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
+| E64s_v3 | 432 GiB | 1,200 MBps | 220 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
+| E64ds_v4 | 504 GiB | 1,200 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
+| E64(d)s_v5 | 512 GiB | 1,735 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
+| E96(d)s_v5 | 672 GiB | 2,600 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
++
+## Cost conscious solution with Azure premium storage
+So far, the Azure premium storage solutions described in this document in the section [Solutions with premium storage and Azure Write Accelerator for Azure M-Series virtual machines](#solutions-with-premium-storage-and-azure-write-accelerator-for-azure-m-series-virtual-machines) were meant for SAP HANA production supported scenarios. One of the characteristics of production supportable configurations is the separation of the volumes for SAP HANA data and redo log into two different volumes. Reason for such a separation is that the workload characteristics on the volumes are different. And that, with the suggested production configurations, different types of caching or even different types of Azure block storage could be necessary. For non-production scenarios, some of the considerations taken for production systems may not apply to more low-end non-production systems. As a result, the HANA data and log volume could be combined, though with some caveats, like possibly not meeting certain throughput or latency KPIs that are required for production systems. Another aspect to reduce costs in such environments can be the usage of [Azure Standard SSD storage](./planning-guide-storage.md#azure-standard-ssd-storage). Keep in mind that choosing Standard SSD or Standard HDD Azure storage has an impact on your single VM SLAs as documented in the article [SLA for Virtual Machines](https://azure.microsoft.com/support/legal/sla/virtual-machines).
+
+A less costly alternative for such configurations could look like:
++
+| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/data and /hana/log<br /> striped with LVM or MDADM | /hana/shared | /root volume | /usr/sap | comments |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| DS14v2 | 112 GiB | 768 MB/s | 4 x P6 | 1 x E10 | 1 x E6 | 1 x E6 | won't achieve less than 1ms storage latency<sup>1</sup> |
+| E16v3 | 128 GiB | 384 MB/s | 4 x P6 | 1 x E10 | 1 x E6 | 1 x E6 | VM type not HANA certified <br /> won't achieve less than 1ms storage latency<sup>1</sup> |
+| M32ts | 192 GiB | 500 MB/s | 3 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 5,000<sup>2</sup> |
+| E20ds_v4 | 160 GiB | 480 MB/s | 4 x P6 | 1 x E15 | 1 x E6 | 1 x E6 | won't achieve less than 1ms storage latency<sup>1</sup> |
+| E32v3 | 256 GiB | 768 MB/s | 4 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | VM type not HANA certified <br /> won't achieve less than 1ms storage latency<sup>1</sup> |
+| E32ds_v4 | 256 GiB | 768 MBps | 4 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | won't achieve less than 1ms storage latency<sup>1</sup> |
+| M32ls | 256 GiB | 500 MB/s | 4 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 5,000<sup>2</sup> |
+| E48ds_v4 | 384 GiB | 1,152 MBps | 6 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | won't achieve less than 1ms storage latency<sup>1</sup> |
+| E64v3 | 432 GiB | 1,200 MB/s | 6 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | won't achieve less than 1ms storage latency<sup>1</sup> |
+| E64ds_v4 | 504 GiB | 1200 MB/s | 7 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | won't achieve less than 1ms storage latency<sup>1</sup> |
+| M64ls | 512 GiB | 1,000 MB/s | 7 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> |
+| M32dms_v2, M32ms_v2 | 875 GiB | 500 MB/s | 6 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 5,000<sup>2</sup> |
+| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MB/s | 7 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> |
+| M64ms, M64dms_v2, M64ms_v2| 1,792 GiB | 1,000 MB/s | 6 x P20 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> |
+| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MB/s |6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
+| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MB/s |6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
+| M128ms, M128dms_v2, M128ms_v2 | 3,800 GiB | 2,000 MB/s | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
+| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MB/s | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
+| M208s_v2 | 2,850 GiB | 1,000 MB/s | 4 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> |
+| M208ms_v2 | 5,700 GiB | 1,000 MB/s | 4 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> |
+| M416s_v2 | 5,700 GiB | 2,000 MB/s | 4 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
+| M416ms_v2 | 11400 GiB | 2,000 MB/s | 7 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
++
+<sup>1</sup> [Azure Write Accelerator](../../how-to-enable-write-accelerator.md) can't be used with VM families other than the M- and Mv2-Series, like the Ev3 and Edsv4 families. As a result of using Azure premium storage, the I/O latency won't be less than 1 ms
+
+<sup>2</sup> The VM family supports [Azure Write Accelerator](../../how-to-enable-write-accelerator.md), but there's a potential that the IOPS limit of Write Accelerator could limit the disk configuration's IOPS capabilities
+
+In the case of combining the data and log volume for SAP HANA, the disks building the striped volume should not have read cache or read/write cache enabled.
+
+There are VM types listed that aren't certified with SAP and as such aren't listed in the so-called [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure). Feedback from customers was that those non-listed VM types were used successfully for some non-production tasks.
++
+## Next steps
+For more information, see:
+
+- [SAP HANA High Availability guide for Azure virtual machines](./sap-hana-availability-overview.md).
virtual-machines Hana Vm Premium Ssd V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-premium-ssd-v2.md
+
+ Title: SAP HANA Azure virtual machine Premium SSD v2 configurations | Microsoft Docs
+description: Storage recommendations for SAP HANA using Premium SSD v2.
++
+tags: azure-resource-manager
+keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage, Premium SSD v2'
+++ Last updated : 10/09/2022++++
+# SAP HANA Azure virtual machine Premium SSD v2 storage configurations
+This document is about HANA storage configurations for Azure Premium SSD v2. Azure Premium SSD v2 is a new storage type that was developed to provide more flexible block storage with submillisecond latency for general purpose and DBMS workload. Premium SSD v2 simplifies the way you build storage architectures and lets you tailor and adapt the storage capabilities to your workload. Premium SSD v2 allows you to configure and pay for capacity, IOPS, and throughput independent of each other.
+
+For general considerations around stripe sizes when using LVM, HANA data volume partitioning or other considerations that are independent of the particular storage type, check these two documents:
+
+- [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
+- [Azure Storage types for SAP workload](./planning-guide-storage.md)
++
+> [!IMPORTANT]
+> The suggestions for the storage configurations in this document are meant as directions to start with. Running your workload and analyzing storage utilization patterns, you might realize that you're not utilizing all the storage bandwidth or IOPS provided. You might then consider downsizing on storage. Or, on the contrary, your workload might need more storage throughput than suggested with these configurations. As a result, you might need to deploy more capacity, IOPS, or throughput. In the field of tension between storage capacity required, storage latency needed, storage throughput and IOPS required, and the least expensive configuration, Azure offers enough different storage types with different capabilities and different price points to find and adjust to the right compromise for you and your HANA workload.
+
+## Major differences of Premium SSD v2 to premium storage and Ultra disk
+The major differences between Premium SSD v2 and the existing NetWeaver and HANA certified storage types can be listed like this:
+
+- With Premium SSD v2, you pay for the exact deployed capacity. Unlike with premium disk and Ultra disk, where brackets of sizes are used to determine the costs of capacity
+- Every Premium SSD v2 storage disk comes with 3,000 IOPS and 125 MBps of throughput that is included in the capacity pricing
+- Extra IOPS and throughput on top of the defaults that come with each disk can be provisioned at any point in time and are charged separately
+- Changes to the provisioned IOPS and throughput can be executed once in 6 hours
+- Latency of Premium SSD v2 is lower than premium storage, but higher than Ultra disk. But it's submillisecond, so that it passes the SAP HANA KPIs without the help of any other functionality, like Azure Write Accelerator
+- **Like with Ultra disk, you can use Premium SSD v2 for /hana/data and /hana/log volumes without the need of any accelerators or other caches**.
+- Like Ultra disk, Azure Premium SSD v2 doesn't offer caching options as premium storage does
+- With Premium SSD v2, the same storage configuration applies to the HANA certified Ev4, Ev5, and M-series VMs that offer the same memory
+- Unlike premium storage, there's no disk bursting for Premium SSD v2
+
+Not having Azure Write Accelerator support or support by other caches makes the configuration of Premium SSD v2 for the different VM families easier and more unified, and avoids variations that need to be considered in deployment automation. Not having bursting capabilities makes the throughput and IOPS delivered more deterministic and reliable. Since Premium SSD v2 is a new storage type, there are still some restrictions related to its features and capabilities. To read up on these limitations and differences between the different storages, start with reading the document [Azure managed disk types](../../disks-types.md).
++
+## Production recommended storage solution based on Azure Premium SSD v2
+
+> [!NOTE]
+> The configurations we suggest below keep the HANA minimum KPIs, as we listed them in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md), in mind. Our tests so far gave no indications that with the values listed, SAP HCMT tests would fail in throughput or latency. That stated, we didn't test all possible combinations and possibilities around stripe sets stretched across multiple disks or different stripe sizes. The tests we conducted with striped volumes across multiple disks were done with the stripe sizes we documented in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md).
++
+> [!NOTE]
+> For production scenarios, check whether a certain VM type is supported for SAP HANA by SAP in the [SAP documentation for IAAS](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;iaas;ve:24).
+
+When you look up the price list for Azure managed disks, it becomes apparent that the cost scheme introduced with Premium SSD v2 gives you two general paths to pursue:
+
+- You try to simplify your storage architecture by using a single disk for **/hana/data** and **/hana/log** and pay for more IOPS and throughput as needed to achieve the levels we recommend below. Be aware that a single disk has a throughput limit of 1,200 MBps and 80,000 IOPS.
+- You want to benefit from the 3,000 IOPS and 125 MBps that come for free with each disk. To do so, you would build multiple smaller disks that sum up to the capacity you need and then build a striped volume with a logical volume manager across these multiple disks. Striping across multiple disks would give you the possibility to reduce the IOPS and throughput cost factors. But it would result in some more effort in automating deployments and operating such solutions.
+
+Since we don't want to define which direction you should go, we're leaving the decision to you on whether to take the single disk approach or the multiple disk approach. Though keep in mind that the single disk approach can hit its limitations with the 1,200 MBps throughput. There might be a point where you need to stretch **/hana/data** across multiple volumes. Also keep in mind that the capabilities of Azure VMs in providing storage throughput are going to grow over time. And that HANA savepoints are extremely critical and demand high throughput for the **/hana/data** volume.
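+As a compact sketch of the provisioning math behind the multiple disk approach (the example tables further below enumerate concrete cases), the following Python snippet computes how much extra IOPS and throughput you would still have to provision on top of the 3,000 IOPS and 125 MBps included with every disk, given a target for the whole volume and the number of disks in the stripe set. The function is illustrative only.

```python
# Illustrative sketch: extra IOPS/throughput to provision for a striped
# Premium SSD v2 volume, on top of the 3,000 IOPS and 125 MBps included per disk.

FREE_IOPS_PER_DISK = 3000
FREE_MBPS_PER_DISK = 125

def extra_provisioning(target_iops: int, target_mbps: int, disk_count: int):
    """Return (extra IOPS, extra MBps) that still need to be provisioned and paid for."""
    extra_iops = max(0, target_iops - disk_count * FREE_IOPS_PER_DISK)
    extra_mbps = max(0, target_mbps - disk_count * FREE_MBPS_PER_DISK)
    return extra_iops, extra_mbps

# /hana/data on M128s_v2: proposed 12,000 IOPS and 800 MBps for the volume
for disks in (1, 2, 4):
    print(disks, extra_provisioning(12000, 800, disks))
# 1 (9000, 675), 2 (6000, 550), 4 (0, 300) -- matching the example table below
```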
+
+**Recommendation: The recommended configurations with Azure Premium SSD v2 for production scenarios look like:**
+
+Configuration for SAP **/hana/data** volume:
+
+| VM SKU | RAM | Max. VM I/O<br /> Throughput | Max VM IOPS | /hana/data capacity | /hana/data throughput | /hana/data IOPS |
+| --- | --- | --- | --- | --- | --- | --- |
+| E20ds_v4 | 160 GiB | 480 MBps | 32,000 | 192 GB | 425 MBps | 3,000 |
+| E20(d)s_v5 | 160 GiB | 750 MBps | 32,000 | 192 GB | 425 MBps | 3,000 |
+| E32ds_v4 | 256 GiB | 768 MBps | 51,200 | 304 GB | 425 MBps | 3,000 |
+| E32ds_v5 | 256 GiB | 865 MBps | 51,200 | 304 GB | 425 MBps | 3,000 |
+| E48ds_v4 | 384 GiB | 1,152 MBps | 76,800 | 464 GB | 425 MBps | 3,000 |
+| E48(d)s_v5 | 384 GiB | 1,315 MBps | 76,800 | 464 GB | 425 MBps | 3,000 |
+| E64ds_v4 | 504 GiB | 1,200 MBps | 80,000 | 608 GB | 425 MBps | 3,000 |
+| E64(d)s_v5 | 512 GiB | 1,735 MBps | 80,000 | 608 GB | 425 MBps | 3,000 |
+| E96(d)s_v5 | 672 GiB | 2,600 MBps | 80,000 | 800 GB | 425 MBps | 3,000 |
+| M32ts | 192 GiB | 500 MBps | 20,000 | 224 GB | 425 MBps | 3,000 |
+| M32ls | 256 GiB | 500 MBps | 20,000 | 304 GB | 425 MBps | 3,000 |
+| M64ls | 512 GiB | 1,000 MBps | 40,000 | 608 GB | 425 MBps | 3,000 |
+| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 30,000 | 1,056 GB | 425 MBps | 3,000 |
+| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 40,000 | 1,232 GB | 680 MBps | 5,000 |
+| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 50,000 | 2,144 GB | 600 MBps | 5,000 |
+| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 2,464 GB | 800 MBps | 12,000 |
+| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 2,464 GB | 800 MBps | 12,000 |
+| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 80,000 | 4,672 GB | 800 MBps | 12,000 |
+| M192ims, M192idms_v2 | 4,096 GiB | 2,000 MBps | 80,000 | 4,912 GB | 800 MBps | 12,000 |
+| M208s_v2 | 2,850 GiB | 1,000 MBps | 40,000 | 3,424 GB | 1,000 MBps | 15,000 |
+| M208ms_v2 | 5,700 GiB | 1,000 MBps | 40,000 | 6,848 GB | 1,000 MBps | 15,000 |
+| M416s_v2 | 5,700 GiB | 2,000 MBps | 80,000 | 6,848 GB | 1,200 MBps | 17,000 |
+| M416ms_v2 | 11,400 GiB | 2,000 MBps | 80,000 | 13,680 GB | 1,200 MBps | 25,000 |
++
+For the **/hana/log** volume, the configuration would look like:
+
+| VM SKU | RAM | Max. VM I/O<br /> Throughput | Max VM IOPS | **/hana/log** capacity | **/hana/log** throughput | **/hana/log** IOPS | **/hana/shared** capacity <br />using default IOPS <br /> and throughput |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| E20ds_v4 | 160 GiB | 480 MBps | 32,000 | 80 GB | 275 MBps | 3,000 | 160 GB |
+| E20(d)s_v5 | 160 GiB | 750 MBps | 32,000 | 80 GB | 275 MBps | 3,000 | 160 GB |
+| E32ds_v4 | 256 GiB | 768 MBps | 51,200 | 128 GB | 275 MBps | 3,000 | 256 GB |
+| E32(d)s_v5 | 256 GiB | 865 MBps | 51,200 | 128 GB | 275 MBps | 3,000 | 256 GB |
+| E48ds_v4 | 384 GiB | 1,152 MBps | 76,800 | 192 GB | 275 MBps | 3,000 | 384 GB |
+| E48(d)s_v5 | 384 GiB | 1,315 MBps | 76,800 | 192 GB | 275 MBps | 3,000 | 384 GB |
+| E64ds_v4 | 504 GiB | 1,200 MBps | 80,000 | 256 GB | 275 MBps | 3,000 | 504 GB |
+| E64(d)s_v5 | 512 GiB | 1,735 MBps | 80,000 | 256 GB | 275 MBps | 3,000 | 512 GB |
+| E96(d)s_v5 | 672 GiB | 2,600 MBps | 80,000 | 256 GB | 275 MBps | 3,000 | 672 GB |
+| M32ts | 192 GiB | 500 MBps | 20,000 | 96 GB | 275 MBps | 3,000 | 192 GB |
+| M32ls | 256 GiB | 500 MBps | 20,000 | 128 GB | 275 MBps | 3,000 | 256 GB |
+| M64ls | 512 GiB | 1,000 MBps | 40,000 | 512 GB | 275 MBps | 3,000 | 512 GB |
+| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 20,000 | 512 GB | 275 MBps | 3,000 | 875 GB |
+| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 40,000 | 512 GB | 275 MBps | 3,000 | 1,024 GB |
+| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 40,000 | 512 GB | 275 MBps | 3,000 | 1,024 GB |
+| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
+| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
+| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
+| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
+| M208s_v2 | 2,850 GiB | 1,000 MBps | 40,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
+| M208ms_v2 | 5,700 GiB | 1,000 MBps | 40,000 | 512 GB | 350 MBps | 4,500 | 1,024 GB |
+| M416s_v2 | 5,700 GiB | 2,000 MBps | 80,000 | 512 GB | 400 MBps | 5,000 | 1,024 GB |
+| M416ms_v2 | 11,400 GiB | 2,000 MBps | 80,000 | 512 GB | 400 MBps | 5,000 | 1,024 GB |
++
+Check whether the storage throughput for the different suggested volumes meets the workload that you want to run. If the workload requires more throughput or IOPS for **/hana/data** and **/hana/log**, you need to increase IOPS and/or throughput on the individual disks you're using.
+
+A few examples of how combining multiple Premium SSD v2 disks with a stripe set could impact the requirement to provision more IOPS or throughput for **/hana/data** are displayed in this table:
+
+| VM SKU | RAM | number of <br />disks | individual disk<br /> size | Proposed IOPS | Default IOPS provisioned | Extra IOPS <br />provisioned | Proposed throughput<br /> for volume | Default throughput provisioned | Extra throughput <br />provisioned |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| E32(d)s_v5 | 256 GiB | 1 | 304 GB | 3,000 | 3,000 | 0 | 425 MBps | 125 MBps | 300 MBps |
+| E32(d)s_v5 | 256 GiB | 2 | 152 GB | 3,000 | 6,000 | 0 | 425 MBps | 250 MBps | 175 MBps |
+| E32(d)s_v5 | 256 GiB | 4 | 76 GB | 3,000 | 12,000 | 0 | 425 MBps | 500 MBps | 0 MBps |
+| E96(d)s_v5 | 672 GiB | 1 | 304 GB | 3,000 | 3,000 | 0 | 425 MBps | 125 MBps | 300 MBps |
+| E96(d)s_v5 | 672 GiB | 2 | 152 GB | 3,000 | 6,000 | 0 | 425 MBps | 250 MBps | 175 MBps |
+| E96(d)s_v5 | 672 GiB | 4 | 76 GB | 3,000 | 12,000 | 0 | 425 MBps | 500 MBps | 0 MBps |
+| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 1 | 2,464 GB | 12,000 | 3,000 | 9,000 | 800 MBps | 125 MBps | 675 MBps |
+| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2 | 1,232 GB | 12,000 | 6,000 | 6,000 | 800 MBps | 250 MBps | 550 MBps |
+| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 4 | 616 GB | 12,000 | 12,000 | 0 | 800 MBps | 500 MBps | 300 MBps |
+| M416ms_v2 | 11,400 GiB | 1 | 13,680 GB | 25,000 | 3,000 | 22,000 | 1,200 MBps | 125 MBps | 1,075 MBps |
+| M416ms_v2 | 11,400 GiB | 2 | 6,840 GB | 25,000 | 6,000 | 19,000 | 1,200 MBps | 250 MBps | 950 MBps |
+| M416ms_v2 | 11,400 GiB | 4 | 3,420 GB | 25,000 | 12,000 | 13,000 | 1,200 MBps | 500 MBps | 700 MBps |
+
+For **/hana/log**, a similar approach of using two disks could look like:
+
+| VM SKU | RAM | number of <br />disks | individual disk<br /> size | Proposed IOPS | Default IOPS provisioned | Extra IOPS <br />provisioned | Proposed throughput<br /> for volume | Default throughput provisioned | Extra throughput <br />provisioned |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| E32(d)s_v5 | 256 GiB | 1 | 128 GB | 3,000 | 3,000 | 0 | 275 MBps | 125 MBps | 150 MBps |
+| E32(d)s_v5 | 256 GiB | 2 | 64 GB | 3,000 | 6,000 | 0 | 275 MBps | 250 MBps | 25 MBps |
+| E96(d)s_v5 | 672 GiB | 1 | 256 GB | 3,000 | 3,000 | 0 | 275 MBps | 125 MBps | 150 MBps |
+| E96(d)s_v5 | 672 GiB | 2 | 128 GB | 3,000 | 6,000 | 0 | 275 MBps | 250 MBps | 25 MBps |
+| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 1 | 512 GB | 4,000 | 3,000 | 1,000 | 300 MBps | 125 MBps | 175 MBps |
+| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2 | 256 GB | 4,000 | 6,000 | 0 | 300 MBps | 250 MBps | 50 MBps |
+| M416ms_v2 | 11,400 GiB | 1 | 512 GB | 5,000 | 3,000 | 2,000 | 400 MBps | 125 MBps | 275 MBps |
+| M416ms_v2 | 11,400 GiB | 2 | 256 GB | 5,000 | 6,000 | 0 | 400 MBps | 250 MBps | 150 MBps |
+
+These tables, combined with the [prices of IOPS and throughput](https://azure.microsoft.com/pricing/details/managed-disks/), should give you an idea of how striping across multiple Premium SSD v2 disks could reduce the costs for the particular storage configuration you're looking at. Based on these calculations, you can decide whether to move ahead with a single disk approach for **/hana/data** and/or **/hana/log**.
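+A minimal sketch of such a cost comparison could look like the Python snippet below. The per-GiB, per-IOPS, and per-MBps unit prices are placeholders that you would replace with the values from the Azure pricing page for your region; only the structure of the calculation is meant to be illustrative.

```python
# Hypothetical monthly cost comparison of striping options for a Premium SSD v2
# volume. All unit prices below are placeholders - look them up for your region.

PRICE_PER_GIB = 0.08          # placeholder $/GiB/month
PRICE_PER_EXTRA_IOPS = 0.005  # placeholder $/IOPS/month above the included 3,000 per disk
PRICE_PER_EXTRA_MBPS = 0.04   # placeholder $/MBps/month above the included 125 per disk

def monthly_cost(capacity_gib: float, target_iops: int, target_mbps: int,
                 disk_count: int) -> float:
    """Capacity is paid in full; only IOPS/throughput above the per-disk defaults cost extra."""
    extra_iops = max(0, target_iops - disk_count * 3000)
    extra_mbps = max(0, target_mbps - disk_count * 125)
    return (capacity_gib * PRICE_PER_GIB
            + extra_iops * PRICE_PER_EXTRA_IOPS
            + extra_mbps * PRICE_PER_EXTRA_MBPS)

# /hana/data for M128s_v2: 2,464 GB capacity, 12,000 IOPS, 800 MBps target
for disks in (1, 2, 4):
    print(disks, round(monthly_cost(2464, 12000, 800, disks), 2))
```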
+
+## Next steps
+For more information, see:
+
+- [SAP HANA High Availability guide for Azure virtual machines](./sap-hana-availability-overview.md).
virtual-machines Hana Vm Ultra Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-ultra-disk.md
+
+ Title: SAP HANA Azure virtual machine Ultra Disk configurations | Microsoft Docs
+description: Storage recommendations for SAP HANA using Ultra disk.
++
+tags: azure-resource-manager
+keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage'
+++ Last updated : 10/07/2022++++
+# SAP HANA Azure virtual machine Ultra Disk storage configurations
+This document is about HANA storage configurations for Azure Ultra Disk storage, which was introduced as ultra low latency storage for DBMS and other applications that need it. For general considerations around stripe sizes when using LVM, HANA data volume partitioning, or other considerations that are independent of the particular storage type, check these two documents:
+
+- [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
+- [Azure Storage types for SAP workload](./planning-guide-storage.md)
++
+## Azure Ultra disk storage configuration for SAP HANA
+Another Azure storage type is called [Azure Ultra disk](../../disks-types.md#ultra-disks). The significant difference between Azure storage offered so far and Ultra disk is that the disk capabilities aren't bound to the disk size anymore. As a customer you can define these capabilities for Ultra disk:
+
+- Size of a disk ranging from 4 GiB to 65,536 GiB
+- IOPS range from 100 IOPS to 160K IOPS (maximum depends on VM types as well)
+- Storage throughput from 300 MB/sec to 2,000 MB/sec
+
+Ultra disk gives you the possibility to define a single disk that fulfills your size, IOPS, and disk throughput requirements, instead of using logical volume managers like LVM or MDADM on top of Azure premium storage to construct volumes that fulfill IOPS and storage throughput requirements. You can run a configuration mix between Ultra disk and premium storage. As a result, you can limit the usage of Ultra disk to the performance critical **/hana/data** and **/hana/log** volumes and cover the other volumes with Azure premium storage.
+
+Another advantage of Ultra disk can be the better read latency in comparison to premium storage. The faster read latency can have advantages when you want to reduce the HANA startup times and the subsequent load of the data into memory. Advantages of Ultra disk storage can also be felt when HANA is writing savepoints.
+
+> [!NOTE]
+> Ultra disk might not be present in all the Azure regions. For detailed information where Ultra disk is available and which VM families are supported, check the article [What disk types are available in Azure?](../../disks-types.md#ultra-disks).
+
+## Production recommended storage solution with pure Ultra disk configuration
+In this configuration, you keep the **/hana/data** and **/hana/log** volumes separate. The suggested values are derived from the KPIs that SAP uses to certify VM types for SAP HANA and from storage configurations as recommended in the [SAP TDI Storage Whitepaper](https://www.sap.com/documents/2017/09/e6519450-d47c-0010-82c7-eda71af511fa.html).
+
+The recommendations are often exceeding the SAP minimum requirements as stated earlier in this article. The listed recommendations are a compromise between the size recommendations by SAP and the maximum storage throughput the different VM types provide.
+
+> [!NOTE]
+> Azure Ultra disk enforces a minimum of 2 IOPS per Gigabyte of disk capacity
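+As a small illustration of those constraints, the following Python sketch validates a proposed Ultra disk configuration against the ranges listed earlier in this article and against the 2 IOPS per GiB minimum. It's a plausibility check for this example only, not an official sizing tool.

```python
# Plausibility check for an Ultra disk configuration, based on the ranges stated
# in this article: 4-65,536 GiB, 100-160,000 IOPS, 300-2,000 MB/sec, and a minimum
# of 2 IOPS per GiB of capacity.

def check_ultra_disk(size_gib: int, iops: int, mbps: int) -> list:
    issues = []
    if not 4 <= size_gib <= 65_536:
        issues.append("size outside the 4 GiB - 65,536 GiB range")
    if not 100 <= iops <= 160_000:
        issues.append("IOPS outside the 100 - 160,000 range")
    if not 300 <= mbps <= 2_000:
        issues.append("throughput outside the 300 - 2,000 MB/sec range")
    if iops < 2 * size_gib:
        issues.append("fewer than 2 IOPS per GiB of capacity")
    return issues

# Example: /hana/data for an M64s as listed in the table below
print(check_ultra_disk(1200, 5000, 600))  # [] -> within the stated ranges
```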
++
+| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/data volume | /hana/data I/O throughput | /hana/data IOPS | /hana/log volume | /hana/log I/O throughput | /hana/log IOPS |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| E20ds_v4 | 160 GiB | 480 MB/s | 200 GB | 400 MBps | 2,500 | 80 GB | 250 MBps | 1,800 |
+| E32ds_v4 | 256 GiB | 768 MB/s | 300 GB | 400 MBps | 2,500 | 128 GB | 250 MBps | 1,800 |
+| E48ds_v4 | 384 GiB | 1152 MB/s | 460 GB | 400 MBps | 3,000 | 192 GB | 250 MBps | 1,800 |
+| E64ds_v4 | 504 GiB | 1200 MB/s | 610 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800 |
+| E64s_v3 | 432 GiB | 1,200 MB/s | 610 GB | 400 MBps | 3,500 | 220 GB | 250 MBps | 1,800 |
+| M32ts | 192 GiB | 500 MB/s | 250 GB | 400 MBps | 2,500 | 96 GB | 250 MBps | 1,800 |
+| M32ls | 256 GiB | 500 MB/s | 300 GB | 400 MBps | 2,500 | 256 GB | 250 MBps | 1,800 |
+| M64ls | 512 GiB | 1,000 MB/s | 620 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800 |
+| M32dms_v2, M32ms_v2 | 875 GiB | 500 MB/s | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
+| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MB/s | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
+| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MB/s | 2,100 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
+| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MB/s |2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
+| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MB/s |2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
+| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MB/s | 4,800 GB | 750 MBps |9,600 | 512 GB | 250 MBps | 2,500 |
+| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MB/s | 4,800 GB | 750 MBps |9,600 | 512 GB | 250 MBps | 2,500 |
+| M208s_v2 | 2,850 GiB | 1,000 MB/s | 3,500 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
+| M208ms_v2 | 5,700 GiB | 1,000 MB/s | 7,200 GB | 750 MBps | 14,400 | 512 GB | 250 MBps | 2,500 |
+| M416s_v2 | 5,700 GiB | 2,000 MB/s | 7,200 GB | 1,000 MBps | 14,400 | 512 GB | 400 MBps | 4,000 |
+| M416ms_v2 | 11,400 GiB | 2,000 MB/s | 14,400 GB | 1,500 MBps | 28,800 | 512 GB | 400 MBps | 4,000 |
+
+**The values listed are intended to be a starting point and need to be evaluated against the real demands.** The advantage with Azure Ultra disk is that the values for IOPS and throughput can be adapted without the need to shut down the VM or to halt the workload applied to the system.
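+For example, such an online adjustment can be scripted. The Python sketch below shells out to the Azure CLI; the resource group and disk name are placeholders, and you should verify the `az disk update` parameters against the current CLI reference before relying on them.

```python
# Sketch: raise IOPS and throughput of an existing Ultra disk while the VM keeps running.
# Resource names are placeholders; verify the flags (--disk-iops-read-write /
# --disk-mbps-read-write) against the current az CLI reference.
import subprocess

def update_ultra_disk(resource_group: str, disk_name: str, iops: int, mbps: int) -> None:
    subprocess.run(
        [
            "az", "disk", "update",
            "--resource-group", resource_group,
            "--name", disk_name,
            "--disk-iops-read-write", str(iops),
            "--disk-mbps-read-write", str(mbps),
        ],
        check=True,
    )

# Example (hypothetical resources): scale a /hana/data disk of a test system up
# update_ultra_disk("my-rg", "hana-data-disk", 9600, 750)
```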
+
+> [!NOTE]
+> So far, storage snapshots with Ultra disk storage aren't available. This blocks the usage of VM snapshots with Azure Backup services
++
+## Next steps
+For more information, see:
+
+- [SAP HANA High Availability guide for Azure virtual machines](./sap-hana-availability-overview.md).
virtual-machines Planning Guide Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/planning-guide-storage.md
ms.assetid: d7c59cc1-b2d0-4d90-9126-628f9c7a5538
Previously updated : 11/02/2021 Last updated : 10/11/2022 # Azure Storage types for SAP workload
-Azure has numerous storage types that differ vastly in capabilities, throughput, latency, and prices. Some of the storage types are not, or of limited usable for SAP scenarios. Whereas several Azure storage types are well suited or optimized for specific SAP workload scenarios. Especially for SAP HANA, some Azure storage types got certified for the usage with SAP HANA. In this document, we are going through the different types of storage and describe their capability and usability with SAP workloads and SAP components.
+Azure has numerous storage types that differ vastly in capabilities, throughput, latency, and prices. Some of the storage types aren't usable for SAP scenarios, or are only of limited use, whereas several Azure storage types are well suited or optimized for specific SAP workload scenarios. Especially for SAP HANA, some Azure storage types got certified for the usage with SAP HANA. In this document, we're going through the different types of storage and describe their capability and usability with SAP workloads and SAP components.
-Remark about the units used throughout this article. The public cloud vendors moved to use GiB ([Gibibyte](https://en.wikipedia.org/wiki/Gibibyte)) or TiB ([Tebibyte](https://en.wikipedia.org/wiki/Tebibyte) as size units, instead of Gigabyte or Terabyte. Therefore all Azure documentation and prizing are using those units. Throughout the document, we are referencing these size units of MiB, GiB, and TiB units exclusively. You might need to plan with MB, GB, and TB. So, be aware of some small differences in the calculations if you need to size for a 400 MiB/sec throughput, instead of a 250 MiB/sec throughput.
+Remark about the units used throughout this article. The public cloud vendors moved to use GiB ([Gibibyte](https://en.wikipedia.org/wiki/Gibibyte)) or TiB ([Tebibyte](https://en.wikipedia.org/wiki/Tebibyte)) as size units, instead of Gigabyte or Terabyte. Therefore, all Azure documentation and pricing use those units. Throughout the document, we're referencing these size units of MiB, GiB, and TiB exclusively. You might need to plan with MB, GB, and TB. So, be aware of some small differences in the calculations if you need to size for a 400 MiB/sec throughput, instead of a 250 MiB/sec throughput.
## Microsoft Azure Storage resiliency
-Microsoft Azure storage of Standard HDD, Standard SSD, Azure premium storage, and Ultra disk keeps the base VHD (with OS) and VM attached data disks or VHDs in three copies on three different storage nodes. Failing over to another replica and seeding of a new replica in case of a storage node failure is transparent. As a result of this redundancy, it is **NOT** required to use any kind of storage redundancy layer across multiple Azure disks. This fact is called Local Redundant Storage (LRS). LRS is default for these types of storage in Azure. [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) provides sufficient redundancy to achieve the same SLAs as other native Azure storage.
+Microsoft Azure storage of Standard HDD, Standard SSD, Azure premium storage, Premium SSD v2, and Ultra disk keeps the base VHD (with OS) and VM attached data disks or VHDs in three copies on three different storage nodes. Failing over to another replica and seeding of a new replica if there's a storage node failure is transparent. As a result of this redundancy, it's **NOT** required to use any kind of storage redundancy layer across multiple Azure disks. This fact is called Local Redundant Storage (LRS). LRS is the default for these types of storage in Azure. [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) provides sufficient redundancy to achieve the same SLAs as other native Azure storage.
There are several more redundancy methods, which are all described in the article [Azure Storage replication](../../../storage/common/storage-redundancy.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json) that apply to some of the different storage types Azure has to offer.
+> [!NOTE]
+> When using Azure storage for storing database data and redo log files, LRS is the only supported resiliency level at this point in time
+ Also keep in mind that different Azure storage types influence the single VM availability SLAs as released in [SLA for Virtual Machines](https://azure.microsoft.com/support/legal/sla/virtual-machines). ### Azure managed disks
-Managed disks are a resource type in Azure Resource Manager that can be used instead of VHDs that are stored in Azure Storage Accounts. Managed Disks automatically align with the [availability set][virtual-machines-manage-availability] of the virtual machine they are attached to and therefore increase the availability of your virtual machine and the services that are running on the virtual machine. For more information, read the [overview article](../../managed-disks-overview.md).
+Managed disks are a resource type in Azure Resource Manager that can be used instead of VHDs that are stored in Azure Storage Accounts. Managed Disks automatically align with the [availability set][virtual-machines-manage-availability] of the virtual machine they're attached to. With such an alignment, you experience an improvement of the availability of your virtual machine and the services that are running in the virtual machine. For more information, read the [overview article](../../managed-disks-overview.md).
Related to resiliency, this example demonstrates the advantage of managed disks: -- You are deploying your two DBMS VMs for your SAP system in an Azure availability set -- As Azure deploys the VMs, the disk with the OS image will be placed in a different storage cluster. This avoids that both VMs get impacted by an issue of a single Azure storage cluster-- As you create new managed disks that you assign to these VMs to store the data and log files of your database, these new disks for the two VMs are also deployed in separate storage clusters, so, that none of disks of the first VM are sharing storage clusters with the disks of the second VM
+- You're deploying your two DBMS VMs for your SAP system in an Azure availability set
+- As Azure deploys the VMs, the disk with the OS image will be placed in a different storage cluster. This split across different storage clusters avoids that both VMs get impacted by an issue of a single Azure storage cluster
+- As you create new managed disks that you assign to these VMs to store the data and log files of your database, these new disks for the two VMs are also deployed in separate storage clusters, so that none of the disks of the first VM share storage clusters with the disks of the second VM
When you deploy without managed disks in customer-defined storage accounts, disk allocation is arbitrary and has no awareness of the fact that VMs are deployed within an AvSet for resiliency purposes. > [!NOTE]
-> Out of this reason and several other improvements that are exclusively available through managed disks, we require that new deployments of VMs that use Azure block storage for their disks (all Azure storage except Azure NetApp Files) need to use Azure managed disks for the base VHD/OS disks, data disks that contain SAP database files. Independent on whether you deploy the VMs through availability set, across Availability Zones or independent of the sets and zones. Disks that are used for the purpose of storing backups are not necessarily required to be managed disks.
-
-> [!NOTE]
-> Azure managed disks provide local redundancy (LRS) only.
+> For this reason and several other improvements that are exclusively available through managed disks, we require that new deployments of VMs that use Azure block storage for their disks (all Azure storage except Azure NetApp Files) use Azure managed disks for the base VHD/OS disks and for data disks that contain SAP database files. This requirement applies independent of whether you deploy the VMs through an availability set, across Availability Zones, or independent of the sets and zones. Disks that are used for the purpose of storing backups aren't necessarily required to be managed disks.
## Storage scenarios with SAP workloads
Persisted storage is needed in SAP workload in various components of the stack t
- File shares or shared disks that contain your global transport directory for NetWeaver or S/4HANA. Content of those shares is either consumed by software running in multiple VMs or is used to build high-availability failover cluster scenarios - The /sapmnt directory or common file shares for EDI processes or similar. Content of those shares is either consumed by software running in multiple VMs or is used to build high-availability failover cluster scenarios
-In the next few sections, the different Azure storage types and their usability for SAP workload gets discussed that apply to the four scenarios above. A general categorization of how the different Azure storage types should be used is documented in the article [What disk types are available in Azure?](../../disks-types.md). The recommendations for using the different Azure storage types for SAP workload are not going to be majorly different.
+In the next few sections, we discuss the different Azure storage types and their usability for SAP workload as they apply to the four scenarios above. A general categorization of how the different Azure storage types should be used is documented in the article [What disk types are available in Azure?](../../disks-types.md). The recommendations for using the different Azure storage types for SAP workload aren't going to differ majorly from that categorization.
-For support restrictions on Azure storage types for SAP NetWeaver/application layer of S/4HANA, read the [SAP support note 2015553](https://launchpad.support.sap.com/#/notes/2015553)
-For SAP HANA certified and supported Azure storage types read the article [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md).
+For support restrictions on Azure storage types for SAP NetWeaver/application layer of S/4HANA, read the [SAP support note 2015553](https://launchpad.support.sap.com/#/notes/2015553). For SAP HANA certified and supported Azure storage types, read the article [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md).
The sections describing the different Azure storage types will give you more background about the restrictions and possibilities using the SAP supported storage. ### Storage choices when using DBMS replication
-Our reference architectures foresee the usage of DBMS functionality like SQL Server Always On, HANA System Replication, Db2 HADR, or Oracle Data Guard. In case, you are using these technologies between two or multiple Azure virtual machines, the storage types chosen for each of the VMs is required to be the same. That means if the storage chose for the redo log volume of a DBMS system is Azure premium storage on one VM, the same volume is required to be based on Azure premium storage with all the other VMs that are in the same high availability synchronization configuration. The same is true for the data volumes used for the database files.
+Our reference architectures foresee the usage of DBMS functionality like SQL Server Always On, HANA System Replication, Db2 HADR, or Oracle Data Guard. If you're using these technologies between two or more Azure virtual machines, the storage types chosen for each of the VMs are required to be the same. This means the storage configuration between the active node and the replica node in a DBMS HA configuration needs to be the same.
## Storage recommendations for SAP storage scenarios
-Before going into the details, we are presenting the summary and recommendations already at the beginning of the document. Whereas the details for the particular types of Azure storage are following this section of the document. Summarizing the storage recommendations for the SAP storage scenarios in a table, it looks like:
-
-| Usage scenario | Standard HDD | Standard SSD | Premium Storage | Ultra disk | Azure NetApp Files |
-| | | | | | |
-| OS disk | Not suitable | Restricted suitable (non-prod) | Recommended | Not possible | Not possible |
-| Global transport Directory | Not supported | Not supported | Recommended | Recommended | Recommended |
-| /sapmnt | Not suitable | Restricted suitable (non-prod) | Recommended | Recommended | Recommended |
-| DBMS Data volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended | Recommended | Recommended<sup>2</sup> |
-| DBMS log volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended<sup>1</sup> | Recommended | Recommended<sup>2</sup> |
-| DBMS Data volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Recommended | Recommended | Recommended<sup>2</sup> |
-| DBMS log volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Not supported | Recommended | Recommended<sup>2</sup> |
-| DBMS Data volume non-HANA | Not supported | Restricted suitable (non-prod) | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux |
-| DBMS log volume non-HANA M/Mv2 VM families | Not supported | Restricted suitable (non-prod) | Recommended<sup>1</sup> | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux |
-| DBMS log volume non-HANA non-M/Mv2 VM families | Not supported | restricted suitable (non-prod) | Suitable for up to medium workload | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux |
+Before going into the details, we present the summary and recommendations up front. The details for the particular types of Azure storage follow later in the document. Summarized in a table, the storage recommendations for the SAP storage scenarios look like:
+
+| Usage scenario | Standard HDD | Standard SSD | Premium Storage | Premium SSD v2 | Ultra disk | Azure NetApp Files | Azure Premium Files |
+| | | | | | | | |
+| OS disk | Not suitable | Restricted suitable (non-prod) | Recommended | Not possible | Not possible | Not possible | Not possible |
+| Global transport Directory | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended | Recommended |
+| /sapmnt | Not suitable | Restricted suitable (non-prod) | Recommended | Recommended | Recommended | Recommended | Recommended |
+| DBMS Data volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended<sup>2</sup> | Not supported |
+| DBMS log volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended<sup>1</sup> | Recommended | Recommended | Recommended<sup>2</sup> | Not supported |
+| DBMS Data volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended<sup>2</sup> | Not supported |
+| DBMS log volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Not supported | Recommended | Recommended | Recommended<sup>2</sup> | Not supported |
+| HANA shared volume | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended | Recommended<sup>3</sup> |
+| DBMS Data volume non-HANA | Not supported | Restricted suitable (non-prod) | Recommended | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |
+| DBMS log volume non-HANA M/Mv2 VM families | Not supported | Restricted suitable (non-prod) | Recommended<sup>1</sup> | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |
+| DBMS log volume non-HANA non-M/Mv2 VM families | Not supported | Restricted suitable (non-prod) | Suitable for up to medium workload | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |
<sup>1</sup> With usage of [Azure Write Accelerator](../../how-to-enable-write-accelerator.md) for M/Mv2 VM families for log/redo log volumes
-<sup>2</sup> Using ANF requires /hana/data as well as /hana/log to be on ANF
+<sup>2</sup> Using ANF requires /hana/data and /hana/log to be on ANF
+<sup>3</sup> So far tested on SLES only
Characteristics you can expect from the different storage types list like:
-| Usage scenario | Standard HDD | Standard SSD | Premium Storage | Ultra disk | Azure NetApp Files |
-| | | | | | |
-| Throughput/ IOPS SLA | No | No | Yes | Yes | Yes |
-| Latency Reads | High | Medium to high | Low | sub-millisecond | sub-millisecond |
-| Latency Writes | High | Medium to high | Low (sub-millisecond<sup>1</sup>) | sub-millisecond | sub-millisecond |
-| HANA supported | No | No | yes<sup>1</sup> | Yes | Yes |
-| Disk snapshots possible | Yes | Yes | Yes | No | Yes |
-| Allocation of disks on different storage clusters when using availability sets | Through managed disks | Through managed disks | Through managed disks | Disk type not supported with VMs deployed through availability sets | No<sup>3</sup> |
-| Aligned with Availability Zones | Yes | Yes | Yes | Yes | Needs engagement of Microsoft |
-| Zonal redundancy | Not for managed disks | Not for managed disks | Not for managed disks | No | No |
-| Geo redundancy | Not for managed disks | Not for managed disks | No | No | No |
+| Usage scenario | Standard HDD | Standard SSD | Premium Storage | Premium SSD v2 | Ultra disk | Azure NetApp Files | Azure Premium Files |
+| | | | | | | | |
+| Throughput/ IOPS SLA | No | No | Yes | Yes | Yes | Yes | Yes |
+| Latency Reads | High | Medium to high | Low | Submillisecond | Submillisecond | Submillisecond | Low |
+| Latency Writes | High | Medium to high | Low (submillisecond<sup>1</sup>) | Submillisecond | Submillisecond | Submillisecond | Low |
+| HANA supported | No | No | Yes<sup>1</sup> | Yes | Yes | Yes | No |
+| Disk snapshots possible | Yes | Yes | Yes | No | No | Yes | No |
+| Allocation of disks on different storage clusters when using availability sets | Through managed disks | Through managed disks | Through managed disks | Disk type not supported with VMs deployed through availability sets | Disk type not supported with VMs deployed through availability sets | No<sup>3</sup> | No |
+| Aligned with Availability Zones | Yes | Yes | Yes | Yes | Yes | Needs engagement of Microsoft | No |
+| Zonal redundancy | Not for managed disks | Not for managed disks | Not supported for DBMS | No | No | No | Yes |
+| Geo redundancy | Not for managed disks | Not for managed disks | No | No | No | Possible | No |
<sup>1</sup> With usage of [Azure Write Accelerator](../../how-to-enable-write-accelerator.md) for M/Mv2 VM families for log/redo log volumes <sup>2</sup> Costs depend on provisioned IOPS and throughput
-<sup>3</sup> Creation of different ANF capacity pools does not guarantee deployment of capacity pools onto different storage units
--
-> [!IMPORTANT]
-> To achieve less than 1 millisecond I/O latency using Azure NetApp Files (ANF), you need to work with Microsoft to arrange the correct placement between your VMs and the NFS shares based on ANF. So far there is no mechanism in place that provides an automatic proximity between a VM deployed and the NFS volumes hosted on ANF. Given the different setup of the different Azure regions, the network latency added could push the I/O latency beyond 1 millisecond if the VM and the NFS share are not allocated in proximity.
+<sup>3</sup> Creation of different ANF capacity pools doesn't guarantee deployment of capacity pools onto different storage units
> [!IMPORTANT]
-> None of the currently offered Azure block storage based managed disks, or Azure NetApp Files offer any zonal or geographical redundancy. As a result, you need to make sure that your high availability and disaster recovery architectures are not relying on any type of Azure native storage replication for these managed disks, NFS or SMB shares.
+> To achieve less than 1 millisecond I/O latency using Azure NetApp Files (ANF), you need to work with Microsoft to arrange the correct placement between your VMs and the NFS shares based on ANF. So far there's no mechanism in place that provides an automatic proximity between a VM deployed and the NFS volumes hosted on ANF. Given the different setup of the different Azure regions, the network latency added could push the I/O latency beyond 1 millisecond if the VM and the NFS share aren't allocated in proximity. [Application Volume Groups](../../../azure-netapp-files/application-volume-group-introduction.md), a functionality that's still in preview, provide an easier way to create such an alignment.
## Azure premium storage
Azure premium SSD storage got introduced with the goal to provide:
* SLAs for IOPS and throughput * Less variability in I/O latency
-This type of storage is targeting DBMS workloads, storage traffic that requires low single digit millisecond latency, and SLAs on IOPS and throughput
-Cost basis in the case of Azure premium storage is not the actual data volume stored in such disks, but the size category of such a disk, independent of the amount of the data that is stored within the disk. You also can create disks on premium storage that are not directly mapping into the size categories shown in the article [Premium SSD](../../disks-types.md#premium-ssds). Conclusions out of this article are:
+This type of storage targets DBMS workloads, storage traffic that requires low single-digit millisecond latency, and SLAs on IOPS and throughput. The cost basis for Azure premium storage isn't the actual data volume stored on such disks, but the size category of the disk, independent of the amount of data that is stored within the disk. You can also create disks on premium storage that don't map directly into the size categories shown in the article [Premium SSD](../../disks-types.md#premium-ssds). Conclusions out of this article are:
- The storage is organized in ranges. For example, a disk in the range 513 GiB to 1024 GiB capacity share the same capabilities and the same monthly costs-- The IOPS per GiB are not tracking linear across the size categories. Smaller disks below 32 GiB have higher IOPS rates per GiB. For disks beyond 32 GiB to 1024 GiB, the IOPS rate per GiB is between 4-5 IOPS per GiB. For larger disks up to 32,767 GiB, the IOPS rate per GiB is going below 1-- The I/O throughput for this storage is not linear with the size of the disk category. For smaller disks, like the category between 65 GiB and 128 GiB capacity, the throughput is around 780KB/GiB. Whereas for the extreme large disks like a 32,767 GiB disk, the throughput is around 28KB/GiB
+- The IOPS per GiB don't track linearly across the size categories. Smaller disks below 32 GiB have higher IOPS rates per GiB. For disks beyond 32 GiB up to 1024 GiB, the IOPS rate per GiB is between 4 and 5 IOPS per GiB. For larger disks up to 32,767 GiB, the IOPS rate per GiB drops below 1
+- The I/O throughput for this storage isn't linear with the size of the disk category. For smaller disks, like the category between 65 GiB and 128 GiB capacity, the throughput is around 780 KB per GiB, whereas for extremely large disks like a 32,767 GiB disk, the throughput is around 28 KB per GiB
- The IOPS and throughput SLAs cannot be changed without changing the capacity of the disk
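To make this non-linearity tangible, the following minimal sketch computes the IOPS and throughput density per GiB for a few example disk sizes. The per-disk values are illustrative assumptions reflecting the Managed Disk pricing page at the time of writing; verify the current values before you use them for sizing.

```python
# Illustrative premium storage (v1) sizes: capacity in GiB, provisioned IOPS, throughput in MBps.
# These values are assumptions taken from the Managed Disk pricing page at the time of writing.
premium_skus = {
    "P6":  (64,    240,  50),
    "P10": (128,   500, 100),
    "P15": (256,  1100, 125),
    "P30": (1024, 5000, 200),
    "P60": (8192, 16000, 500),
    "P80": (32767, 20000, 900),
}

print(f"{'SKU':<5}{'GiB':>8}{'IOPS/GiB':>12}{'KB/s per GiB':>14}")
for sku, (gib, iops, mbps) in premium_skus.items():
    # The per-GiB density drops as the disks get larger - the non-linearity described above.
    print(f"{sku:<5}{gib:>8}{iops / gib:>12.2f}{mbps * 1000 / gib:>14.0f}")
```

Running the sketch shows roughly 780 KB per second and per GiB for a P10-sized disk versus roughly 28 KB per second and per GiB for the largest size, matching the numbers quoted above.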
The capability matrix for SAP workload looks like:
| Throughput SLA | Yes | - | | Throughput linear to capacity | Semi linear in brackets | [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | | HANA certified | Yes | [specially for SAP HANA](../../how-to-enable-write-accelerator.md) |
+| Azure Write Accelerator support | No | - |
+| Disk bursting | Yes | - |
| Disk snapshots possible | Yes | - |
-| Azure Backup VM snapshots possible | Yes | Except for [Write Accelerator](../../how-to-enable-write-accelerator.md) cached disks |
+| Azure Backup VM snapshots possible | Yes | - |
| Costs | Medium| - |
-Azure premium storage does not fulfill SAP HANA storage latency KPIs with the common caching types offered with Azure premium storage. In order to fulfill the storage latency KPIs for SAP HANA log writes, you need to use Azure Write Accelerator caching as described in the article [Enable Write Accelerator](../../how-to-enable-write-accelerator.md). Azure Write Accelerator benefits all other DBMS systems for their transaction log writes and redo log writes. Therefore, it is recommended to use it across all the SAP DBMS deployments. For SAP HANA, the usage of Azure Write Accelerator in conjunction with Azure premium storage is mandatory.
+Azure premium storage doesn't fulfill the SAP HANA storage latency KPIs with the common caching types it offers. To fulfill the storage latency KPIs for SAP HANA log writes, you need to use Azure Write Accelerator caching as described in the article [Enable Write Accelerator](../../how-to-enable-write-accelerator.md). Azure Write Accelerator benefits all other DBMS systems for their transaction log writes and redo log writes. Therefore, it's recommended to use it across all the SAP DBMS deployments. For SAP HANA, the usage of Azure Write Accelerator for **/hana/log** with Azure premium storage is mandatory.
-**Summary:** Azure premium storage is one of the Azure storage types recommended for SAP workload. This recommendation applies for non-production as well as production systems. Azure premium storage is suited to handle database workloads. The usage of Azure Write Accelerator is going to improve write latency against Azure premium disks substantially. However, for DBMS systems with high IOPS and throughput rates, you need to either over-provision storage capacity or you need to use functionality like Windows Storage Spaces or logical volume managers in Linux to build stripe sets that give you the desired capacity on the one side, but also the necessary IOPS or throughput at best cost efficiency.
+**Summary:** Azure premium storage is one of the Azure storage types recommended for SAP workload. This recommendation applies for non-production and production systems. Azure premium storage is suited to handle database workloads. The usage of Azure Write Accelerator is going to improve write latency against Azure premium disks substantially. However, for DBMS systems with high IOPS and throughput rates, you need to either overprovision storage capacity, or use functionality like Windows Storage Spaces or logical volume managers in Linux to build stripe sets that give you the desired capacity on the one side, and the necessary IOPS or throughput at best cost efficiency on the other.
### Azure burst functionality for premium storage
-For Azure premium storage disks smaller or equal to 512 GiB in capacity, burst functionality is offered. The exact way how disk bursting works is described in the article [Disk bursting](../../disk-bursting.md). When you read the article, you understand the concept of accruing IOPS and throughput in the times when your I/O workload is below the nominal IOPS and throughput of the disks (for details on the nominal throughput see [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/)). You are going to accrue the delta of IOPS and throughput between your current usage and the nominal values of the disk. The bursts are limited to a maximum of 30 minutes.
+For Azure premium storage disks smaller than or equal to 512 GiB in capacity, burst functionality is offered. The exact way disk bursting works is described in the article [Disk bursting](../../disk-bursting.md). When you read the article, you understand the concept of accruing IOPS and throughput in the times when your I/O workload is below the nominal IOPS and throughput of the disks (for details on the nominal throughput, see [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/)). You're going to accrue the delta of IOPS and throughput between your current usage and the nominal values of the disk. The bursts are limited to a maximum of 30 minutes.
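The following minimal sketch models this credit mechanism as a simple bucket, assuming the nominal rating of a P10-sized disk and a burst ceiling of 3,500 IOPS with a 30-minute maximum, as described in the disk bursting article at the time of writing. The numbers are illustrative assumptions, not authoritative limits.

```python
# Minimal credit-bucket model of premium storage disk bursting (illustrative assumptions).
NOMINAL_IOPS = 500     # assumed nominal IOPS of a P10-sized disk
BURST_IOPS = 3500      # assumed maximum burst IOPS for disks <= 512 GiB
MAX_CREDIT = (BURST_IOPS - NOMINAL_IOPS) * 30 * 60   # credits for a full 30-minute burst

def simulate(io_pattern):
    """io_pattern: list of (seconds, issued_iops) phases. Returns the served IOPS per phase."""
    credits = MAX_CREDIT          # disks start with a full credit bucket
    served = []
    for seconds, issued in io_pattern:
        if issued <= NOMINAL_IOPS:
            # Below nominal: accrue the unused delta, capped at the bucket size.
            credits = min(MAX_CREDIT, credits + (NOMINAL_IOPS - issued) * seconds)
            served.append(issued)
        else:
            # Above nominal: spend credits; fall back to nominal once the bucket is empty.
            needed = (min(issued, BURST_IOPS) - NOMINAL_IOPS) * seconds
            if credits >= needed:
                credits -= needed
                served.append(min(issued, BURST_IOPS))
            else:
                served.append(NOMINAL_IOPS)
                credits = 0
    return served

# Quiet period, then a checkpoint-like write burst, then quiet again.
print(simulate([(1800, 100), (600, 3000), (1800, 100)]))
```

In this simplified example, a 10-minute checkpoint-style phase of 3,000 IOPS is fully served from accrued credits, even though the disk's nominal rating is far lower.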
The ideal cases where this burst functionality can be planned in are likely going to be the volumes or disks that contain data files for the different DBMS. The I/O workload expected against those volumes, especially with small to mid-ranged systems, is expected to look like: - Low to moderate read workload since data ideally is cached in memory, or like in the case of HANA should be completely in memory-- Bursts of write triggered by database checkpoints or savepoints that are issued on a regular basis-- Backup workload that reads in a continuous stream in cases where backups are not executed via storage snapshots
+- Bursts of write triggered by database checkpoints or savepoints that are issued regularly
+- Backup workload that reads in a continuous stream in cases where backups aren't executed via storage snapshots
- For SAP HANA, load of the data into memory after an instance restart Especially on smaller DBMS systems where your workload is handling a few hundred transactions per seconds only, such a burst functionality can make sense as well for the disks or volumes that store the transaction or redo log. Expected workload against such a disk or volumes looks like:
Especially on smaller DBMS systems where your workload is handling a few hundred
- Higher workload in throughput for cases of operational tasks, like creating or rebuilding indexes - Read bursts when performing transaction log or redo log backups
+## Azure Premium SSD v2
+Azure Premium SSD v2 storage is a new version of premium storage that got introduced with the goal to provide:
+
+* Submillisecond I/O latency for smaller read and write I/O sizes
+* SLAs for IOPS and throughput
+* Pay capacity by the provisioned GB
+* Provide a default set of IOPS and storage throughput per disk
+* Give the possibility to add more IOPS and throughput to each disk and pay separately for these extra provisioned resources
+* Pass SAP HANA certification without the help of other functionality like Azure Write Accelerator or other caches
+
+This type of storage targets DBMS workloads, storage traffic that requires submillisecond latency, and SLAs on IOPS and throughput. Premium SSD v2 disks are delivered with a default set of 3,000 IOPS and 125 MBps throughput, and with the possibility to add more IOPS and throughput to individual disks. The pricing of the storage is structured in a way that adding more throughput or IOPS doesn't influence the price majorly. Nevertheless, we leave it up to you to decide how the storage configuration for Premium SSD v2 should look. For a base start, read [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md).
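Because every Premium SSD v2 disk starts from the 3,000 IOPS / 125 MBps baseline mentioned above, sizing largely comes down to how much extra performance you provision per disk. The small sketch below illustrates that math; the per-disk ceilings used as sanity checks are assumptions based on the Premium SSD v2 documentation at the time of writing.

```python
# Baseline performance included with every Premium SSD v2 disk (see text above).
BASE_IOPS, BASE_MBPS = 3000, 125
# Assumed per-disk ceilings at the time of writing - verify against the Premium SSD v2 documentation.
MAX_IOPS, MAX_MBPS = 80000, 1200

def extra_provisioning(target_iops, target_mbps):
    """Return the additional IOPS and MBps to provision on a single Premium SSD v2 disk."""
    if target_iops > MAX_IOPS or target_mbps > MAX_MBPS:
        raise ValueError("Target exceeds a single disk - consider striping across several disks.")
    return max(0, target_iops - BASE_IOPS), max(0, target_mbps - BASE_MBPS)

# Example: a data volume that needs 20,000 IOPS and 400 MBps.
print(extra_provisioning(20000, 400))   # -> (17000, 275)
```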
+
+For the regions in which this new block storage type is available, and for the actual restrictions, read the document [Premium SSD v2](../../disks-types.md#premium-ssd-v2-preview).
+
+The capability matrix for SAP workload looks like:
+
+| Capability| Comment| Notes/Links |
+| | | |
+| OS base VHD | Not supported | No system |
+| Data disk | Suitable | All systems |
+| SAP global transport directory | Yes | All systems |
+| SAP sapmnt | Suitable | All systems |
+| Backup storage | Suitable | For short term storage of backups |
+| Shares/shared disk | Not available | Needs Azure Premium Files or Azure NetApp Files |
+| Resiliency | LRS | No GRS or ZRS available for disks |
+| Latency | submillisecond | - |
+| IOPS SLA | Yes | - |
+| IOPS linear to capacity | semi linear | [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/) |
+| Maximum IOPS per disk | 80,000 [dependent on disk size](https://azure.microsoft.com/pricing/details/managed-disks/) | Also consider [VM limits](../../sizes.md) |
+| Throughput SLA | Yes | - |
+| Throughput linear to capacity | Semi linear | [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/) |
+| HANA certified | Yes | - |
+| Azure Write Accelerator support | No | - |
+| Disk bursting | No | - |
+| Disk snapshots possible | No | - |
+| Azure Backup VM snapshots possible | No | - |
+| Costs | Medium | - |
+
+In contrast to Azure premium storage, Azure Premium SSD v2 fulfills the SAP HANA storage latency KPIs. As a result, you **DON'T need to use Azure Write Accelerator caching** as described in the article [Enable Write Accelerator](../../how-to-enable-write-accelerator.md).
+
+**Summary:** Azure Premium SSD v2 is the block storage that offers the best price/performance ratio for SAP workloads. Azure Premium SSD v2 is suited to handle database workloads. Its submillisecond latency makes it an ideal storage for demanding DBMS workloads. However, it's a new storage type that was just released, so there still might be some limitations that are going to go away over the next few months.
+ ## Azure Ultra disk
-Azure ultra disks deliver high throughput, high IOPS, and consistent low latency disk storage for Azure IaaS VMs. Some additional benefits of ultra disks include the ability to dynamically change the IOPS and throughput of the disk, along with your workloads, without the need to restart your virtual machines (VM). Ultra disks are suited for data-intensive workloads such as SAP DBMS workload. Ultra disks can only be used as data disks and can't be used as base VHD disk that stores the operating system. We would recommend the usage of Azure premium storage as based VHD disk.
+Azure ultra disks deliver high throughput, high IOPS, and consistent low latency disk storage for Azure IaaS VMs. Some benefits of ultra disks include the ability to dynamically change the IOPS and throughput of the disk, along with your workloads, without the need to restart your virtual machines (VM). Ultra disks are suited for data-intensive workloads such as SAP DBMS workload. Ultra disks can only be used as data disks and can't be used as the base VHD disk that stores the operating system. We would recommend the usage of Azure premium storage as the base VHD disk.
As you create an ultra disk, you have three dimensions you can define:
The capability matrix for SAP workload looks like:
| Capability| Comment| Notes/Links | | | | |
-| OS base VHD | Does not work | - |
+| OS base VHD | Doesn't work | - |
| Data disk | Suitable | All systems | | SAP global transport directory | Yes | [Supported](https://launchpad.support.sap.com/#/notes/2015553) | | SAP sapmnt | Suitable | All systems |
The capability matrix for SAP workload looks like:
| Throughput SLA | Yes | - | | Throughput linear to capacity | Semi linear in brackets | [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | | HANA certified | Yes | - |
+| Azure Write Accelerator support | No | - |
+| Disk bursting | No | - |
| Disk snapshots possible | No | - | | Azure Backup VM snapshots possible | No | - | | Costs | Higher than Premium storage | - | -
-**Summary:** Azure ultra disks are a suitable storage with low latency for all kinds of SAP workload. So far, Ultra disk can only be used in combinations with VMs that have been deployed through Availability Zones (zonal deployment). Ultra disk is not supporting storage snapshots at this point in time. In opposite to all other storage, Ultra disk cannot be used for the base VHD disk. Ultra disk is ideal for cases where I/O workload fluctuates a lot and you want to adapt deployed storage throughput or IOPS to storage workload patterns instead of sizing for maximum usage of bandwidth and IOPS.
+**Summary:** Azure ultra disks are a suitable storage with low submillisecond latency for all kinds of SAP workload. So far, Ultra disk can only be used in combination with VMs that have been deployed through Availability Zones (zonal deployment). Ultra disk doesn't support storage snapshots. In contrast to all other storage, Ultra disk can't be used for the base VHD disk. Ultra disk is ideal for cases where I/O workload fluctuates a lot and you want to adapt deployed storage throughput or IOPS to storage workload patterns instead of sizing for maximum usage of bandwidth and IOPS.
## Azure NetApp files (ANF)
ANF storage is currently supported for several SAP workload scenarios:
> [!NOTE] > So far no DBMS workloads are supported on SMB based on Azure NetApp Files.
-As already with Azure premium storage, a fixed or linear throughput size per GB can be a problem when you are required to adhere to some minimum numbers in throughput. Like this is the case for SAP HANA. With ANF, this problem can become more pronounced than with Azure premium disk. In case of Azure premium disk, you can take several smaller disks with a relatively high throughput per GiB and stripe across them to be cost efficient and have higher throughput at lower capacity. This kind of striping does not work for NFS or SMB shares hosted on ANF. This restriction resulted in deployment of overcapacity like:
+As already with Azure premium storage, a fixed or linear throughput size per GB can be a problem when you're required to adhere to some minimum throughput numbers, as is the case for SAP HANA. With ANF, this problem can become more pronounced than with Azure premium disk. Using Azure premium disk, you can take several smaller disks with a relatively high throughput per GiB and stripe across them to be cost efficient and have higher throughput at lower capacity. This kind of striping doesn't work for NFS or SMB shares hosted on ANF. This restriction resulted in deployment of overcapacity, as the examples in the following list and the sketch after it show:
- To achieve, for example, a throughput of 250 MiB/sec on an NFS volume hosted on ANF, you need to deploy 1.95 TiB capacity of the Ultra service level. - To achieve 400 MiB/sec, you would need to deploy 3.125 TiB capacity. But you may need the over-provisioning of capacity to achieve the throughput you require of the volume. This over-provisioning of capacity impacts the pricing of smaller HANA instances. -- In the space of using NFS on top of ANF for the SAP /sapmnt directory, you are usually going far with the minimum capacity of 100 GiB to 150 GiB that is enforced by Azure NetApp Files. However customer experience showed that the related throughput of 12.8 MiB/sec (using Ultra service level) may not be enough and may have negative impact on the stability of the SAP system. In such cases, customers could avoid issues by increasing the volume of the /sapmnt volume, so, that more throughput is provided to that volume.
+- When using NFS on top of ANF for the SAP /sapmnt directory, you usually get far with the minimum capacity of 100 GiB to 150 GiB that is enforced by Azure NetApp Files. However, customer experience showed that the related throughput of 12.8 MiB/sec (using Ultra service level) may not be enough and may have a negative impact on the stability of the SAP system. In such cases, customers could avoid issues by increasing the capacity of the /sapmnt volume, so that more throughput is provided to that volume.
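The required capacity can be estimated from the throughput each ANF service level provides per TiB. In the following minimal sketch, the Ultra value of 128 MiB/s per TiB reproduces the 1.95 TiB and 3.125 TiB examples above; the Standard and Premium values are assumptions taken from the ANF service level documentation at the time of writing.

```python
# Approximate throughput per TiB of provisioned capacity for the ANF service levels.
# Ultra matches the examples above; Standard and Premium are assumptions - verify current values.
MIBPS_PER_TIB = {"Standard": 16, "Premium": 64, "Ultra": 128}

def required_capacity_tib(target_mibps, service_level="Ultra"):
    """Capacity (TiB) to provision on a volume to reach the target throughput."""
    return target_mibps / MIBPS_PER_TIB[service_level]

for target in (250, 400):
    print(f"{target} MiB/s on Ultra -> {required_capacity_tib(target):.3f} TiB")
```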
The capability matrix for SAP workload looks like: | Capability| Comment| Notes/Links | | | | |
-| OS base VHD | Does not work | - |
-| Data disk | Suitable | SAP HANA, Oracle on Oracle Linux, Db2 and SAP ASe on SLES/RHEL |
-| SAP global transport directory | Yes | SMB as well as NFS |
-| SAP sapmnt | Suitable | Sll systems SMB (Windows only) or NFS (Linux only) |
+| OS base VHD | Doesn't work | - |
+| Data disk | Suitable | SAP HANA, Oracle on Oracle Linux, Db2 and SAP ASE on SLES/RHEL |
+| SAP global transport directory | Yes | SMB and NFS |
+| SAP sapmnt | Suitable | All systems SMB (Windows only) or NFS (Linux only) |
| Backup storage | Suitable | - | | Shares/shared disk | Yes | SMB 3.0, NFS v3, and NFS v4.1 |
-| Resiliency | LRS | No GRS or ZRS available for disks |
+| Resiliency | LRS and GRS | [GRS available](../../../azure-netapp-files/cross-region-replication-introduction.md) |
| Latency | Very low | - | | IOPS SLA | Yes | - | | IOPS linear to capacity | strictly linear | Dependent on [Service Level](../../../azure-netapp-files/azure-netapp-files-service-levels.md) | | Throughput SLA | Yes | - |
-| Throughput linear to capacity | Semi linear in brackets | Dependent on [Service Level](../../../azure-netapp-files/azure-netapp-files-service-levels.md) |
+| Throughput linear to capacity | linear | Dependent on [Service Level](../../../azure-netapp-files/azure-netapp-files-service-levels.md) |
| HANA certified | Yes | - | | Disk snapshots possible | Yes | - | | Azure Backup VM snapshots possible | No | - | | Costs | Higher than Premium storage | - |
-Additional built-in functionality of ANF storage:
+Other built-in functionality of ANF storage:
- Capability to perform snapshots of volume - Cloning of ANF volumes from snapshots - Restore volumes from snapshots (snap-revert)
+- [Application consistent Snapshot backup for SAP HANA and Oracle](../../../azure-netapp-files/azacsnap-introduction.md)
**Summary**: Azure NetApp Files is a HANA certified low latency storage that allows you to deploy NFS and SMB volumes or shares. The storage comes with three different service levels that provide different throughput and IOPS in a linear manner per GiB capacity of the volume. The ANF storage enables you to deploy SAP HANA scale-out scenarios with a standby node. The storage is suitable for providing file shares as needed for /sapmnt or the SAP global transport directory. ANF storage comes with functionality that is available as native NetApp functionality.
+## Azure Premium Files
+[Azure Premium Files](../../../storage/files/storage-files-planning.md) is a shared storage that offers SMB and NFS for a moderate price and sufficient latency to handle shares of the SAP application layer. On top, Azure Premium Files offers synchronous zonal replication of the shares with an automatism that, if one replica fails, lets another replica in another zone take over. In contrast to Azure NetApp Files, there are no performance tiers. There's also no need for a capacity pool. Charging is based on the real provisioned capacity of the different shares. Azure Premium Files hasn't been tested as DBMS storage for SAP workload at all. Instead, the usage scenario for SAP workload focuses on all types of SMB and NFS shares as they're used on the SAP application layer. Azure Premium Files is also suited for the usage of **/hana/shared**.
+
+> [!NOTE]
+> So far no SAP DBMS workloads are supported on shared volumes based on Azure Premium Files.
+
+SAP scenarios supported on Azure Premium Files include:
+
+- Providing SMB or NFS shares for SAP's global transport directory
+- The share sapmnt in high availability scenarios as documented in:
+ - [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md)
+ - [High availability for SAP NetWeaver on Azure VMs on Red Hat Enterprise Linux with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md)
+ - [High availability for SAP NetWeaver on Azure VMs on Windows with Azure Files Premium SMB for SAP applications](./high-availability-guide-windows-netapp-files-smb.md)
+ - [High availability for SAP HANA scale-out system with HSR on SUSE Linux Enterprise Server](./sap-hana-high-availability-scale-out-hsr-suse.md)
+
+Azure Premium Files starts with a larger amount of IOPS at the minimum share size of 100 GB compared to Azure NetApp Files. This higher bar of IOPS can avoid capacity overprovisioning to achieve certain IOPS and throughput values. For IOPS and storage throughput, read the section [Azure file share scale targets in Azure Files scalability and performance targets](../../../storage/files/storage-files-scale-targets.md).
+
+The capability matrix for SAP workload looks like:
+
+| Capability| Comment| Notes/Links |
+| | | |
+| OS base VHD | Doesn't work | - |
+| Data disk | Not supported for SAP workloads | - |
+| SAP global transport directory | Yes | SMB and NFS |
+| SAP sapmnt | Suitable | All systems SMB (Windows only) or NFS (Linux only) |
+| Backup storage | Suitable | - |
+| Shares/shared disk | Yes | SMB 3.0, NFS v4.1 |
+| Resiliency | LRS and ZRS | No GRS available for Azure Premium Files |
+| Latency | low | - |
+| IOPS SLA | Yes | - |
+| IOPS linear to capacity | strictly linear | - |
+| Throughput SLA | Yes | - |
+| Throughput linear to capacity | strictly linear | - |
+| HANA certified | No| - |
+| Disk snapshots possible | No | - |
+| Azure Backup VM snapshots possible | No | - |
+| Costs | low | - |
++
+**Summary**: Azure Premium Files is a low latency storage that allows you to deploy NFS and SMB volumes or shares. Azure Premium Files provides an excellent price/performance ratio for SAP application layer shares. It also provides synchronous zonal replication for these shares. So far, we don't support this storage type for SAP DBMS workload, though it can be used for **/hana/shared** volumes.
+ ## Azure standard SSD storage
-Compared to Azure standard HDD storage, Azure standard SSD storage delivers better availability, consistency, reliability, and latency. It is optimized for workloads that need consistent performance at lower IOPS levels. This storage is the minimum storage used for non-production SAP systems that have low IOPS and throughput demands. The capability matrix for SAP workload looks like:
+Compared to Azure standard HDD storage, Azure standard SSD storage delivers better availability, consistency, reliability, and latency. It's optimized for workloads that need consistent performance at lower IOPS levels. This storage is the minimum storage used for non-production SAP systems that have low IOPS and throughput demands. The capability matrix for SAP workload looks like:
| Capability| Comment| Notes/Links | | | | |
Compared to Azure standard HDD storage, Azure standard SSD storage delivers bett
-**Summary:** Azure standard SSD storage is the minimum recommendation for non-production VMs for base VHD, eventual DBMS deployments with relative latency insensitivity and/or low IOPS and throughput rates. This Azure storage type is not supported anymore for hosting the SAP Global Transport Directory.
+**Summary:** Azure standard SSD storage is the minimum recommendation for non-production VMs for base VHD, eventual DBMS deployments with relative latency insensitivity and/or low IOPS and throughput rates. This Azure storage type isn't supported anymore for hosting the SAP Global Transport Directory.
## Azure standard HDD storage
-The Azure Standard HDD storage was the only storage type when Azure infrastructure got certified for SAP NetWeaver workload in the year 2014. In the year 2014, the Azure virtual machines were small and low in storage throughput. Therefore, this storage type was able to just keep up with the demands. The storage is ideal for latency insensitive workloads, that you hardly experience in the SAP space. With the increasing throughput of Azure VMs and the increased workload these VMs are producing, this storage type is not considered for the usage with SAP scenarios anymore. The capability matrix for SAP workload looks like:
+The Azure Standard HDD storage was the only storage type when Azure infrastructure got certified for SAP NetWeaver workload in the year 2014. In the year 2014, the Azure virtual machines were small and low in storage throughput. Therefore, this storage type was able to just keep up with the demands. The storage is ideal for latency-insensitive workloads, which you hardly experience in the SAP space. With the increasing throughput of Azure VMs and the increased workload these VMs are producing, this storage type isn't considered for use with SAP scenarios anymore. The capability matrix for SAP workload looks like:
| Capability| Comment| Notes/Links | | | | |
The Azure Standard HDD storage was the only storage type when Azure infrastructu
## Azure VM limits in storage traffic
-In opposite to on-premises scenarios, the individual VM type you are selecting, plays a vital role in the storage bandwidth you can achieve. For the different storage types, you need to consider:
+In contrast to on-premises scenarios, the individual VM type you're selecting plays a vital role in the storage bandwidth you can achieve. For the different storage types, you need to consider:
| Storage type| Linux | Windows | Comments | | | | | | | Standard HDD | [Sizes for Linux VMs in Azure](../../sizes.md) | [Sizes for Windows VMs in Azure](../../sizes.md) | Likely hard to touch the storage limits of medium or large VMs | | Standard SSD | [Sizes for Linux VMs in Azure](../../sizes.md) | [Sizes for Windows VMs in Azure](../../sizes.md) | Likely hard to touch the storage limits of medium or large VMs | | Premium Storage | [Sizes for Linux VMs in Azure](../../sizes.md) | [Sizes for Windows VMs in Azure](../../sizes.md) | Easy to hit IOPS or storage throughput VM limits with storage configuration |
+| Premium SSD v2 | [Sizes for Linux VMs in Azure](../../sizes.md) | [Sizes for Windows VMs in Azure](../../sizes.md) | Easy to hit IOPS or storage throughput VM limits with storage configuration |
| Ultra disk storage | [Sizes for Linux VMs in Azure](../../sizes.md) | [Sizes for Windows VMs in Azure](../../sizes.md) | Easy to hit IOPS or storage throughput VM limits with storage configuration | | Azure NetApp Files | [Sizes for Linux VMs in Azure](../../sizes.md) | [Sizes for Windows VMs in Azure](../../sizes.md) | Storage traffic is using network throughput bandwidth and not storage bandwidth! |
+| Azure Premium Files | [Sizes for Linux VMs in Azure](../../sizes.md) | [Sizes for Windows VMs in Azure](../../sizes.md) | Storage traffic is using network throughput bandwidth and not storage bandwidth! |
-As limitations, you can note that:
+As limitations, you need to note that:
-- The smaller the VM, the fewer disks you can attach. This does not apply to ANF. Since you mount NFS or SMB shares, you don't encounter a limit of number of shared volumes to be attached
+- The smaller the VM, the fewer disks you can attach. This doesn't apply to ANF. Since you mount NFS or SMB shares, you don't encounter a limit of number of shared volumes to be attached
- VMs have I/O throughput and IOPS limits that easily could be exceeded with premium storage disks and Ultra disks-- With ANF, the traffic to the shared volumes is consuming the VM's network bandwidth and not storage bandwidth
+- With ANF and Azure Premium Files, the traffic to the shared volumes is consuming the VM's network bandwidth and not storage bandwidth
- With large NFS volumes in the double digit TiB capacity space, the throughput accessing such a volume out of a single VM is going to plateau based on limits of Linux for a single session interacting with the shared volume. As you up-size Azure VMs in the lifecycle of an SAP system, you should evaluate the IOPS and storage throughput limits of the new and larger VM type. In some cases, it also could make sense to adjust the storage configuration to the new capabilities of the Azure VM. ## Striping or not striping
-Creating a stripe set out of multiple Azure disks into one larger volume allows you to accumulate the IOPS and throughput of the individual disks into one volume. It is used for Azure standard storage and Azure premium storage only. Azure Ultra disk where you can configure the throughput and IOPS independent of the capacity of a disk, does not require the usage of stripe sets. Shared volumes based on NFS or SMB can't be striped. Due to the non-linear nature of Azure premium storage throughput and IOPS, you can provision smaller capacity with the same IOPS and throughput than large single Azure premium storage disks. That is the method to achieve higher throughput or IOPS at lower cost using Azure premium storage. For example, striping across two P15 premium storage disks gets you to a throughput of:
+Creating a stripe set out of multiple Azure disks into one larger volume allows you to accumulate the IOPS and throughput of the individual disks into one volume. It's used for Azure standard storage and Azure premium storage only. Azure Ultra disk, where you can configure the throughput and IOPS independent of the capacity of a disk, doesn't require the usage of stripe sets. Shared volumes based on NFS or SMB can't be striped. Due to the non-linear nature of Azure premium storage throughput and IOPS, you can provision smaller capacity with the same IOPS and throughput as large single Azure premium storage disks. That is the method to achieve higher throughput or IOPS at lower cost using Azure premium storage. For example, striping across two P15 premium storage disks gets you to a throughput of:
- 250 MiB/sec. Such a volume is going to have 512 GiB capacity. If you want to have a single disk that gives you 250 MiB throughput per second, you would need to pick a P40 disk with 2 TiB capacity. - 400 MiB/sec by striping four P10 premium storage disks with an overall capacity of 512 GiB by striping. If you would like to have a single disk with a minimum of 500 MiB throughput per second, you would need to pick a P60 premium storage disk with 8 TiB. Because the cost of premium storage is near linear with the capacity, you can sense the cost savings by using striping.
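The cost argument follows directly from adding up the per-disk values of the stripe set. The sketch below aggregates capacity, IOPS, and throughput for the two examples above and contrasts them with the single larger disks; the SKU values are the same illustrative assumptions used earlier and should be checked against the Managed Disk pricing page.

```python
# Illustrative premium storage sizes: capacity in GiB, provisioned IOPS, throughput in MBps.
# Assumed values from the Managed Disk pricing page at the time of writing - verify before use.
SKUS = {"P10": (128, 500, 100), "P15": (256, 1100, 125),
        "P40": (2048, 7500, 250), "P60": (8192, 16000, 500)}

def stripe(sku, count):
    """Aggregate capacity, IOPS, and throughput of `count` identical disks in one stripe set."""
    gib, iops, mbps = SKUS[sku]
    return gib * count, iops * count, mbps * count

# Two P15 disks vs. one P40 disk: comparable ~250 MBps, but only 512 GiB of paid capacity.
print("2 x P15:", stripe("P15", 2), "vs 1 x P40:", stripe("P40", 1))
# Four P10 disks vs. one P60 disk: ~400 MBps at 512 GiB instead of 8 TiB.
print("4 x P10:", stripe("P10", 4), "vs 1 x P60:", stripe("P60", 1))
```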
Some rules need to be followed on striping:
- No in-VM configured storage should be used since Azure storage keeps the data redundant already - The disks the stripe set is applied to, need to be of the same size
+- With Premium SSD v2 and Ultra disk, the capacity, provisioned IOPS and provisioned throughput needs to be the same
-Striping across multiple smaller disks is the best way to achieve a good price/performance ratio using Azure premium storage. It is understood that striping has some additional deployment and management overhead.
+Striping across multiple smaller disks is the best way to achieve a good price/performance ratio using Azure premium storage. It's understood that striping can have some extra deployment and management overhead.
For specific stripe size recommendations, read the documentation for the different DBMS, like [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md).
virtual-machines Sap Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-deployment-checklist.md
We recommend that you set up and validate a full HADR solution and security desi
1. Test the validity of your Azure role-based access control (Azure RBAC) architecture. The goal is to separate and limit the access and permissions of different teams. For example, SAP Basis team members should be able to deploy VMs and assign disks from Azure Storage into a given Azure virtual network. But the SAP Basis team shouldn't be able to create its own virtual networks or change the settings of existing virtual networks. Members of the network team shouldn't be able to deploy VMs into virtual networks in which SAP application and DBMS VMs are running. Nor should members of this team be able to change attributes of VMs or even delete VMs or disks. 1. Verify that [network security group and ASC](../../../virtual-network/network-security-groups-overview.md) rules work as expected and shield the protected resources. 1. Make sure that all resources that need to be encrypted are encrypted. Define and implement processes to back up certificates, store and access those certificates, and restore the encrypted entities.
- 1. Use [Azure Disk Encryption](../../../virtual-machines/disk-encryption-overview.md) for OS disks where possible from an OS-support point of view.
- 1. Be sure that you're not using too many layers of encryption. In some cases, it does make sense to use Azure Disk Encryption together with one of the DBMS Transparent Data Encryption methods to protect different disks or components on the same server. For example, on an SAP DBMS server, the Azure Disk Encryption (ADE) can be enabled on the operating system boot disk (if the OS supports ADE) and those data disk(s) not used by the DBMS data persistence files. An example is to use ADE on the disk holding the DBMS TDE encryption keys.
+ 1. For storage encryption, server-side encryption with platform-managed keys (SSE-PMK) is enabled for every managed disk in Azure by default. [Key management](/azure/virtual-machines/disk-encryption) with customer-managed keys can be considered if customer-owned key rotation is required.
+ 1. For performance reasons, [host-based server-side encryption](/azure/virtual-machines/disk-encryption#encryption-at-hostend-to-end-encryption-for-your-vm-data) shouldn't be enabled on M-series family Linux VMs.
+ 1. Don't use Azure Disk Encryption with SAP, as the [OS images](/azure/virtual-machines/linux/disk-encryption-overview#supported-operating-systems) used for SAP aren't supported.
+ 1. Database native encryption can be considered, such as transparent data encryption (TDE). Encryption key management and location must be secured. Database encryption occurs inside the VM and is independent of any storage encryption such as SSE.
1. Performance testing. In SAP, based on SAP tracing and measurements, make these comparisons: - Where applicable, compare the top 10 online reports to your current implementation. - Where applicable, compare the top 10 batch jobs to your current implementation.
virtual-machines Sap Planning Supported Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-planning-supported-configurations.md
Designing SAP NetWeaver, Business one, `Hybris` or S/4HANA systems architecture
## General platform restrictions Azure has various platforms besides so called native Azure VMs that are offered as first party service. [HANA Large Instances](./hana-overview-architecture.md), which is in sunset mode is one of those platforms. [Azure VMware Services](https://azure.microsoft.com/products/azure-VMware/) is another of these first party services. At this point in time Azure VMware Services in general isn't supported by SAP for hosting SAP workload. Refer to [SAP support note #2138865 - SAP Applications on VMware Cloud: Supported Products and VM configurations](https://launchpad.support.sap.com/#/notes/2138865) for more details of VMware support on different platforms.
-Besides the on-premises Active Directory, Azure offers a managed Active Directory SaaS service with [Azure Active Directory Domain Services](https://learn.microsoft.com/azure/active-directory-domain-services/overview) and [Azure Active Directory](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-whatis). SAP components hosted on Windows OS that are supossed to use Active directory, are solely relying on the traditional Active Directory as it's hosted on-premises by you, or Azure Active Directory Domain Services. But these SAP components can't function with the native Azure Active Directory. Reason is that there are still larger gaps in functionality between Active Directory in its on-premises form or its SaaS form (Azure Active Directory Domain Services) and the native Azure Active Directory. This is the reason why Azure Active Directory accounts aren't supported for running SAP components, like ABAP stack, JAVA stack on Windows OS. Traditional Active Directory accounts need to be used in such scenarios.
+Besides the on-premises Active Directory, Azure offers a managed Active Directory SaaS service with [Azure Active Directory Domain Services](../../../active-directory-domain-services/overview.md) and [Azure Active Directory](../../../active-directory/fundamentals/active-directory-whatis.md). SAP components hosted on Windows OS that are supposed to use Active directory, are solely relying on the traditional Active Directory as it's hosted on-premises by you, or Azure Active Directory Domain Services. But these SAP components can't function with the native Azure Active Directory. Reason is that there are still larger gaps in functionality between Active Directory in its on-premises form or its SaaS form (Azure Active Directory Domain Services) and the native Azure Active Directory. This is the reason why Azure Active Directory accounts aren't supported for running SAP components, like ABAP stack, JAVA stack on Windows OS. Traditional Active Directory accounts need to be used in such scenarios.
## 2-Tier configuration An SAP 2-Tier configuration is considered to be built up out of a combined layer of the SAP DBMS and application layer that run on the same server or VM unit. The second tier is considered to be the user interface layer. In the case of a 2-Tier configuration, the DBMS, and SAP application layer share the resources of the Azure VM. As a result, you need to configure the different components in a way that these components don't compete for resources. You also need to be careful to not oversubscribe the resources of the VM. Such a configuration doesn't provide any high availability, beyond the [Azure Service Level agreements](https://azure.microsoft.com/support/legal/sla/) of the different Azure components involved.
virtual-network Default Outbound Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/default-outbound-access.md
There are multiple ways to turn off default outbound access:
* Flexible scale sets are secure by default. Any instances created via Flexible scale sets won't have the default outbound access IP associated to it. For more information, see [Flexible orchestration mode for virtual machine scale sets](../../virtual-machines/flexible-virtual-machine-scale-sets.md)
+>[!Important]
+> When a backend pool is configured by IP address, it will use default outbound access due to an ongoing known issue. For secure by default configuration and applications with demanding outbound needs, associate a NAT gateway to the VMs in your load balancer's backend pool to secure traffic. See more on existing [known issues](../../load-balancer/whats-new.md#known-issues).
+ ## If I need outbound access, what is the recommended way? NAT gateway is the recommended approach to have explicit outbound connectivity. A firewall can also be used to provide this access.
virtual-network Ipv6 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ipv6-overview.md
The current IPv6 for Azure virtual network release has the following limitations
- When adding IPv6 to existing IPv4 deployments, IPv6 ranges can not be added to a VNET with existing resource navigation links. - Forward DNS for IPv6 is supported for Azure public DNS today but Reverse DNS is not yet supported. - While it is possible to create NSG rules for IPv4 and IPv6 within the same NSG, it is not currently possible to combine an IPv4 Subnet with an IPv6 subnet in the same rule when specifying IP prefixes.
+- ICMPv6 is not currently supported in Network Security Groups.
## Pricing
virtual-network Public Ip Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-basic-upgrade-guidance.md
We recommend the following approach to upgrade to Standard SKU public IP address
| Resource using Basic SKU public IP addresses | Decision path | | | | | Virtual Machine or Virtual Machine Scale Sets | Use the [following upgrade options](#upgrade-using-portal-powershell-and-azure-cli). |
- | Load Balancer (Basic) | Use the [following upgrade options](#upgrade-using-portal-powershell-and-azure-cli). |
+ | Load Balancer (Basic) | Use this [guidance to upgrade from Basic to Standard Load Balancer](../../load-balancer/load-balancer-basic-upgrade-guidance.md). |
| VPN Gateway (Basic) | Cannot dissociate and upgrade. Create a [new VPN gateway with a SKU type other than Basic](../../vpn-gateway/tutorial-create-gateway-portal.md). | | Application Gateway (v1) | Cannot dissociate and upgrade. Use this [migration script to migrate from v1 to v2](../../application-gateway/migrate-v1-v2.md). | 1. Verify your application and workloads are receiving traffic through the Standard SKU public IP address. Then delete your Basic SKU public IP address resource.
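For a Basic SKU public IP address that has already been disassociated from its resource, one hedged sketch of the CLI upgrade path is shown below. Resource names are placeholders, and the exact supported path for your resource type should be confirmed against the upgrade options linked in the table.

```azurecli
# Assumes the Basic public IP is already disassociated from its resource.
# Standard SKU requires static allocation, so set that first, then change the SKU.
az network public-ip update \
  --resource-group myResourceGroup \
  --name myPublicIP \
  --allocation-method Static

az network public-ip update \
  --resource-group myResourceGroup \
  --name myPublicIP \
  --sku Standard
```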
virtual-network Kubernetes Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/kubernetes-network-policies.md
Network Policies provides micro-segmentation for pods just like Network Security
![Kubernetes network policies overview](./media/kubernetes-network-policies/kubernetes-network-policies-overview.png)
-Azure NPM implementation works with the Azure CNI that provides VNet integration for containers. NPM is supported only on Linux today. The implementation enforces traffic filtering by configuring allow and deny IP rules in Linux IPTables based on the defined policies. These rules are grouped together using Linux IPSets.
+The Azure NPM implementation works with the Azure CNI, which provides VNet integration for containers. NPM is supported on Linux and Windows Server 2022. Based on the defined policies, the implementation enforces traffic filtering by configuring allow and deny IP rules in Linux IPTables, or in Host Network Service (HNS) ACLPolicies on Windows Server 2022.
## Planning security for your Kubernetes cluster When implementing security for your cluster, use network security groups (NSGs) to filter traffic entering and leaving your cluster subnet (North-South traffic). Use Azure NPM for traffic between pods in your cluster (East-West traffic).
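As a hedged sketch, Azure NPM can be enabled when an AKS cluster is created with the Azure CNI and the `azure` network policy option. The resource group and cluster names below are placeholders.

```azurecli
# Create an AKS cluster that uses Azure CNI for networking and Azure NPM
# for Kubernetes network policy enforcement (placeholder names).
az aks create \
  --resource-group myResourceGroup \
  --name myAksCluster \
  --network-plugin azure \
  --network-policy azure \
  --node-count 3 \
  --generate-ssh-keys
```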
virtual-network Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-gateway-resource.md
+ Last updated 02/25/2022
Review this section to familiarize yourself with considerations for designing vi
Connecting from your Azure virtual network to Azure PaaS services can be done directly over the Azure backbone and bypass the internet. When you bypass the internet to connect to other Azure PaaS services, you free up SNAT ports and reduce the risk of SNAT port exhaustion. [Private Link](../../private-link/private-link-overview.md) should be used when possible to connect to Azure PaaS services in order to free up SNAT port inventory.
-Private Link uses the private IP addresses of your virtual machines or other compute resources from your Azure network to directly connect privately and securely to Azure PaaS services over the Azure backbone. See a list of all [Azure service listed here](../../private-link/availability.md) that are supported by Private Link.
+Private Link uses the private IP addresses of your virtual machines or other compute resources from your Azure network to directly connect privately and securely to Azure PaaS services over the Azure backbone. See a list of [available Azure services](../../private-link/availability.md) that are supported by Private Link.
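As a hedged sketch of this pattern, the following creates a private endpoint to a storage account's blob service so that traffic reaches it over Private Link rather than consuming NAT gateway SNAT ports. All names are placeholders, and the parameter set (for example `--group-id`) may vary slightly across CLI versions.

```azurecli
# Look up the storage account's resource ID (placeholder names throughout).
storageId=$(az storage account show \
  --resource-group myResourceGroup \
  --name mystorageaccount \
  --query id --output tsv)

# Create a private endpoint in the VM subnet that targets the blob sub-resource.
# Depending on configuration, the subnet may need private endpoint network
# policies disabled first.
az network private-endpoint create \
  --resource-group myResourceGroup \
  --name myPrivateEndpoint \
  --vnet-name myVNet \
  --subnet mySubnet \
  --private-connection-resource-id $storageId \
  --group-id blob \
  --connection-name myPrivateLinkConnection
```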
### Connect to the internet with NAT gateway
virtual-network Troubleshoot Nat Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat-connectivity.md
description: Troubleshoot connectivity issues with Virtual Network NAT.
+ Last updated 08/29/2022
Once the custom UDR is removed from the routing table, the NAT gateway public IP
### Private IPs are used to connect to Azure services by Private Link
-[Private Link](../../private-link/private-link-overview.md) connects your Azure virtual networks privately to Azure PaaS services such as Storage, SQL, or Cosmos DB over the Azure backbone network instead of over the internet. Private Link uses the private IP addresses of virtual machine instances in your virtual network to connect to these Azure platform services instead of the public IP of NAT gateway. As a result, when looking at the source IP address used to connect to these Azure services, you'll notice that the private IPs of your instances are used. See [Azure services listed here](../../private-link/availability.md) for all services supported by Private Link.
+[Private Link](../../private-link/private-link-overview.md) connects your Azure virtual networks privately to Azure PaaS services such as Azure Storage, Azure SQL, or Azure Cosmos DB over the Azure backbone network instead of over the internet. Private Link uses the private IP addresses of virtual machine instances in your virtual network to connect to these Azure platform services instead of the public IP of NAT gateway. As a result, when looking at the source IP address used to connect to these Azure services, you'll notice that the private IPs of your instances are used. For all services supported by Private Link, see the list of [available Azure services](../../private-link/availability.md).
To check which Private Endpoints you have set up with Private Link:
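One hedged way to check from the Azure CLI, alongside the article's own steps, is to list the private endpoints in a resource group; the resource group name below is a placeholder.

```azurecli
# List private endpoints, including their subnets and connection details
# (placeholder resource group name).
az network private-endpoint list \
  --resource-group myResourceGroup \
  --output table
```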
To learn more about NAT gateway, see:
* [Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview) * [NAT gateway resource](/azure/virtual-network/nat-gateway/nat-gateway-resource) * [Metrics and alerts for NAT gateway resources](/azure/virtual-network/nat-gateway/nat-metrics) --
virtual-network Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Virtual Network description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Network. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 10/03/2022 Last updated : 10/10/2022
virtual-network Virtual Network Bandwidth Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-bandwidth-testing.md
Sender parameters: ntttcp -s10.27.33.7 -t 10 -n 1 -P 1
#### Get NTTTCP onto the VMs. Download the latest version:
-https://github.com/microsoft/ntttcp/releases/download/v5.35/NTttcp.exe
-
-Or view the top-level GitHub Page: <https://github.com/microsoft/ntttcp>\
+https://github.com/microsoft/ntttcp/releases/latest
Consider putting NTTTCP in a separate folder, such as c:\\tools
virtual-network Virtual Networks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-faq.md
+ Last updated 06/26/2020
The first step is a network side operation and the second step is a service reso
>[!NOTE] > Both the operations described above must be completed before you can limit the Azure service access to the allowed VNet and subnet. Only turning on service endpoints for the Azure service on the network side does not provide you the limited access. In addition, you must also set up VNet ACLs on the Azure service side.
-Certain services (such as SQL and CosmosDB) allow exceptions to the above sequence through the **IgnoreMissingVnetServiceEndpoint** flag. Once the flag is set to **True**, VNet ACLs can be set on the Azure service side prior to setting up the service endpoints on the network side. Azure services provide this flag to help customers in cases where the specific IP firewalls are configured on Azure services and turning on the service endpoints on the network side can lead to a connectivity drop since the source IP changes from a public IPv4 address to a private address. Setting up VNet ACLs on the Azure service side before setting service endpoints on the network side can help avoid a connectivity drop.
+Certain services (such as Azure SQL and Azure Cosmos DB) allow exceptions to the above sequence through the `IgnoreMissingVnetServiceEndpoint` flag. Once the flag is set to `True`, VNet ACLs can be set on the Azure service side prior to setting up the service endpoints on the network side. Azure services provide this flag to help customers in cases where the specific IP firewalls are configured on Azure services and turning on the service endpoints on the network side can lead to a connectivity drop since the source IP changes from a public IPv4 address to a private address. Setting up VNet ACLs on the Azure service side before setting service endpoints on the network side can help avoid a connectivity drop.
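As a hedged sketch of this pattern for Azure SQL, the CLI can create the VNet rule on the service side before the subnet has the service endpoint enabled. The `--ignore-missing-endpoint` flag name is an assumption mapped from memory to `IgnoreMissingVnetServiceEndpoint` and should be verified against the current CLI reference; all resource names are placeholders.

```azurecli
# Create a VNet rule on the Azure SQL logical server before enabling the
# Microsoft.Sql service endpoint on the subnet. The --ignore-missing-endpoint
# flag name is assumed; verify it in the CLI reference. Placeholder names.
az sql server vnet-rule create \
  --resource-group myResourceGroup \
  --server mysqlserver \
  --name allow-app-subnet \
  --vnet-name myVNet \
  --subnet mySubnet \
  --ignore-missing-endpoint
```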
### Do all Azure services reside in the Azure virtual network provided by the customer? How does VNet service endpoint work with Azure services?
virtual-network Vnet Integration For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/vnet-integration-for-azure-services.md
+ Last updated 12/01/2020
To compare and understand the differences, see the following table.
| Service can be reached without using any public IP address | No | Yes | | Azure-to-Azure traffic stays on the Azure backbone network | Yes | Yes | | Service can disable its public IP address | No | Yes |
-| You can easily restrict traffic coming from an Azure Virtual Network | Yes (allow access from specific subnets and or use NSGs) | No* |
-| You can easily restrict traffic coming from on-prem (VPN/ExpressRoute) | N/A** | No* |
+| You can easily restrict traffic coming from an Azure Virtual Network | Yes (allow access from specific subnets and or use NSGs) | Yes |
+| You can easily restrict traffic coming from on-prem (VPN/ExpressRoute) | N/A** | Yes |
| Requires DNS changes | No | Yes (see [DNS configuration](../private-link/private-endpoint-dns.md)) | | Impacts the cost of your solution | No | Yes (see [Private link pricing](https://azure.microsoft.com/pricing/details/private-link/)) | | Impacts the [composite SLA](/azure/architecture/framework/resiliency/business-metrics#composite-slas) of your solution | No | Yes (Private link service itself has a [99.99% SLA](https://azure.microsoft.com/support/legal/sla/private-link/)) | | Setup and maintenance | Simple to set up with less management overhead | Additional effort is required | | Limits | No limit on the total number of service endpoints in a virtual network. Azure services may enforce limits on the number of subnets used for securing the resource. (see [VNet FAQ](virtual-networks-faq.md#are-there-any-limits-on-how-many-vnet-service-endpoints-i-can-set-up-from-my-vnet)) | Yes (see [Private Link limits](../azure-resource-manager/management/azure-subscription-service-limits.md#private-link-limits)) |
-*Anything with network line-of-sight into the private endpoint will have network-level access. This access can't be controlled by an NSG on the private endpoint itself.
**Azure service resources secured to virtual networks aren't reachable from on-premises networks. If you want to allow traffic from on-premises, allow public (typically, NAT) IP addresses from your on-premises or ExpressRoute. These IP addresses can be added through the IP firewall configuration for the Azure service resources. For more information, see the [VNet FAQ](virtual-networks-faq.md#can-an-on-premises-devices-ip-address-that-is-connected-through-azure-virtual-network-gateway-vpn-or-expressroute-gateway-access-azure-paas-service-over-vnet-service-endpoints).
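As a hedged sketch of the service endpoint column in this comparison, enabling a service endpoint on the network side is a single subnet update; the names below are placeholders.

```azurecli
# Enable a service endpoint for Azure Storage on a subnet (network side), so
# traffic from the subnet to Storage stays on the Azure backbone.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name mySubnet \
  --service-endpoints Microsoft.Storage
```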
To compare and understand the differences, see the following table.
- Learn how to [integrate your app with an Azure network](../app-service/overview-vnet-integration.md). - Learn how to [restrict access to resources using Service Tags](tutorial-restrict-network-access-to-resources.md).-- Learn how to [connect privately to an Azure Cosmos account using Azure Private Link](../private-link/tutorial-private-endpoint-cosmosdb-portal.md).
+- Learn how to [connect privately to an Azure Cosmos DB account via Azure Private Link](../private-link/tutorial-private-endpoint-cosmosdb-portal.md).
virtual-wan Howto Connect Vnet Hub Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-connect-vnet-hub-powershell.md
This article helps you connect your virtual network to your virtual hub using Po
> * In order to connect it to a virtual hub, the remote virtual network can't have a gateway. > [!IMPORTANT]
-> If VPN gateways are present in the virtual hub, this operation can cause disconnection to point-to-site clients as well as reconnection of site-to-site tunnels and BGP sessions.
+> If VPN gateways are present in the virtual hub, this operation, like any other write operation on the connected VNet, can cause disconnection of point-to-site clients and reconnection of site-to-site tunnels and BGP sessions.
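For reference, a hedged sketch of the equivalent connection made with the Azure CLI (which requires the virtual-wan CLI extension) looks roughly like the following; all names are placeholders.

```azurecli
# Connect an existing virtual network to a virtual hub (placeholder names).
# Requires the virtual-wan extension: az extension add --name virtual-wan
az network vhub connection create \
  --resource-group myResourceGroup \
  --vhub-name myVirtualHub \
  --name myVNetConnection \
  --remote-vnet myVNet
```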
## Prerequisites
virtual-wan Howto Connect Vnet Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-connect-vnet-hub.md
This article helps you connect your virtual network to your virtual hub using th
> * In order to connect it to a virtual hub, the remote virtual network can't have a gateway. > [!IMPORTANT]
-> If VPN gateways are present in the virtual hub, this operation can cause disconnection to point-to-site clients as well as reconnection of site-to-site tunnels and BGP sessions.
+> If VPN gateways are present in the virtual hub, this operation, like any other write operation on the connected VNet, can cause disconnection of point-to-site clients and reconnection of site-to-site tunnels and BGP sessions.
## Add a connection
virtual-wan Packet Capture Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/packet-capture-site-to-site-portal.md
To simplify your packet captures, you may specify filters on your packet capture
|TCPFlags| Integer that determines which Types of TCP Packets are captured |0 (none)| FIN = 1, SYN = 2, RST = 4, PSH = 8, ACK = 16,URG = 32, ECE = 64, CWR = 128| |MaxPacketBufferSize| Maximum size of a captured packet in bytes. Packets are truncated if larger than the provided value. |120 |Any| |MaxFileSize | Maximum capture file size in Mb. Captures are stored in a circular buffer so overflow is handled in a FIFO manner (older packets removed first)| 100| Any|
-|SourceSubnets | Packets from the specified CIDR ranges are captured. Specified as an array. | [ ] (all IPv4 addresses) |Array of comma-separated IPV4 Subnets|
-|DestinationSubnets |Packets destined for the specified CIDR ranges are captured. Specified as an array. | [ ] (all IPv4 addresses)| Array of comma-separated IPV4 Subnets|
-|SourcePort| Packets with source in the specified ranges are captured. Specified as an array.| [ ] (all ports)| Array of comma-separated ports|
-|DestinationPort| Packets with destination in the specified ranges are captured. Specified as an array. |[ ] (all ports)| Array of comma-separated ports|
+|SourceSubnets | Packets from the specified CIDR ranges are captured. Specified as an array. | [ ] (all IPv4 addresses) |An IPV4 Subnet|
+|DestinationSubnets |Packets destined for the specified CIDR ranges are captured. Specified as an array. | [ ] (all IPv4 addresses)| An IPV4 Subnet|
+|SourcePort| Packets with source in the specified ranges are captured. Specified as an array.| [ ] (all ports)| A port|
+|DestinationPort| Packets with destination in the specified ranges are captured. Specified as an array. |[ ] (all ports)| A port|
|CaptureSingleDirectionTrafficOnly | If true, only one direction of a bidirectional flow will show up in the packet capture. All possible combinations of IP addresses and ports are still captured.| True| True, False|
-|Protocol| An array of integers that correspond IANA protocols.| [ ] (all protocols)| Any protocols listed on this [iana.org](https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml) page.|
+|Protocol| An array of integers that correspond to IANA protocol numbers.| [ ] (all protocols)| A protocol listed on this [iana.org](https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml) page.|
> [!NOTE] > For TracingFlags and TCPFlags, you may specify multiple protocols by adding up the numerical values for the protocols you want to capture (same as a logical OR). For example, if you want to capture only ESP and OPVN packets, specify a TracingFlag value of 8+1 = 9.
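To make the flag arithmetic concrete, here is a hedged, illustrative filter that follows the field names in the table above. The subnet, port, and protocol values are placeholders, and the filter should be supplied through the capture-start step described in the article (for example, in the portal).

```azurecli
# Illustrative filter only. TracingFlags 9 is 8 + 1 (the two flag values from
# the note above), and TCPFlags 18 is SYN (2) + ACK (16) per the table.
# Protocol 6 is TCP and 17 is UDP per the IANA protocol numbers.
filter='{
  "TracingFlags": 9,
  "TCPFlags": 18,
  "MaxPacketBufferSize": 120,
  "MaxFileSize": 100,
  "SourceSubnets": ["10.1.0.0/24"],
  "DestinationSubnets": ["10.2.0.0/24"],
  "SourcePort": [500, 4500],
  "DestinationPort": [500, 4500],
  "CaptureSingleDirectionTrafficOnly": true,
  "Protocol": [6, 17]
}'
echo "$filter"
```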
web-application-firewall Waf Front Door Policy Configure Bot Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-policy-configure-bot-protection.md
Previously updated : 03/30/2022 Last updated : 10/05/2022
+zone_pivot_groups: web-application-firewall-configuration
# Configure bot protection for Web Application Firewall
-Azure WAF for Front Door provides bot rules to identify good bots and protect from bad bots. This article shows you how to configure bot protection rule in Azure Web Application Firewall (WAF) for Front Door by using Azure portal. Bot protection rule can also be configured using CLI, Azure PowerShell, or Azure Resource Manager template.
+The Azure Web Application Firewall (WAF) for Front Door provides bot rules to identify good bots and protect from bad bots. For more information on the bot protection rule set, see [Bot protection rule set](afds-overview.md#bot-protection-rule-set).
+
+This article shows how to enable bot protection rules on Azure Front Door Standard and Premium tiers.
## Prerequisites Create a basic WAF policy for Front Door by following the instructions described in [Create a WAF policy for Azure Front Door by using the Azure portal](waf-front-door-create-portal.md).
-## Enable bot protection rule set
+
+## Enable the bot protection rule set
+
+1. In the Azure portal, navigate to your WAF policy.
+
+1. Select **Managed rules**, then select **Assign**.
+
+ :::image type="content" source="../media/waf-front-door-configure-bot-protection/managed-rules-assign.png" alt-text="Screenshot of the Azure portal showing the WAF policy's managed rules configuration, and the Assign button highlighted." :::
+
+1. In the **Additional rule set** drop-down list, select the version of the bot protection rule set that you want to use. It's usually a good practice to use the most recent version of the rule set.
+
+ :::image type="content" source="../media/waf-front-door-configure-bot-protection/bot-rule-set.png" alt-text="Screenshot of the Azure portal showing the managed rules assignment page, with the 'Additional rule set' drop-down field highlighted." :::
+
+1. Select **Save**.
+++
+## Get your WAF policy's current configuration
+
+Use the [Get-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/get-azfrontdoorwafpolicy) cmdlet to retrieve the current configuration of your WAF policy. Ensure that you use the correct resource group name and WAF policy name for your own environment.
+
+```azurepowershell
+$frontDoorWafPolicy = Get-AzFrontDoorWafPolicy `
+ -ResourceGroupName 'FrontDoorWafPolicy' `
+ -Name 'WafPolicy'
+```
+
+## Add the bot protection rule set
+
+Use the [New-AzFrontDoorWafManagedRuleObject](/powershell/module/az.frontdoor/new-azfrontdoorwafmanagedruleobject) cmdlet to select the bot protection rule set, including the version of the rule set. Then, add the rule set to the WAF's configuration.
+
+The example below adds version 1.0 of the bot protection rule set to the WAF's configuration.
+
+```azurepowershell
+$botProtectionRuleSet = New-AzFrontDoorWafManagedRuleObject `
+ -Type 'Microsoft_BotManagerRuleSet' `
+ -Version '1.0'
+
+$frontDoorWafPolicy.ManagedRules.Add($botProtectionRuleSet)
+```
+
+## Apply the configuration
+
+Use the [Update-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/update-azfrontdoorwafpolicy) cmdlet to update your WAF policy to include the configuration you created above.
+
+```azurepowershell
+$frontDoorWafPolicy | Update-AzFrontDoorWafPolicy
+```
+++
+## Enable the bot protection rule set
+
+Use the [az network front-door waf-policy managed-rules add](/cli/azure/network/front-door/waf-policy/managed-rules#az-network-front-door-waf-policy-managed-rules-add) command to update your WAF policy to add the bot protection rule set.
+
+The example below adds version 1.0 of the bot protection rule set to the WAF. Ensure that you use the correct resource group name and WAF policy name for your own environment.
+
+```azurecli
+az network front-door waf-policy managed-rules add \
+ --resource-group FrontDoorWafPolicy \
+ --policy-name WafPolicy \
+ --type Microsoft_BotManagerRuleSet \
+ --version 1.0
+```
+++
+The following example Bicep file shows how to:
+
+- Create a Front Door WAF policy.
+- Enable version 1.0 of the bot protection rule set.
+
+```bicep
+param wafPolicyName string = 'WafPolicy'
+
+@description('The mode that the WAF should be deployed using. In "Prevention" mode, the WAF will block requests it detects as malicious. In "Detection" mode, the WAF will not block requests and will simply log the request.')
+@allowed([
+ 'Detection'
+ 'Prevention'
+])
+param wafMode string = 'Prevention'
-In the **Managed Rules** page when creating a Web Application Firewall policy, first find **Managed rule set** section, select the check box in front of the rule **Microsoft_BotManager_1.0** from the drop-down menu, and then select **Review + Create**.
+resource wafPolicy 'Microsoft.Network/frontDoorWebApplicationFirewallPolicies@2022-05-01' = {
+ name: wafPolicyName
+ location: 'Global'
+ sku: {
+ name: 'Premium_AzureFrontDoor'
+ }
+ properties: {
+ policySettings: {
+ enabledState: 'Enabled'
+ mode: wafMode
+ }
+ managedRules: {
+ managedRuleSets: [
+ {
+ ruleSetType: 'Microsoft_BotManagerRuleSet'
+ ruleSetVersion: '1.0'
+ }
+ ]
+ }
+ }
+}
+```
- ![Bot protection rule](.././media/waf-front-door-configure-bot-protection/botmanager112019.png)
## Next steps